Tag Archives: rpart

Classification on the German Credit Database

In our data science course, this morning, we used random forests to improve predictions on the German Credit Dataset. The dataset is

> url="http://freakonometrics.free.fr/german_credit.csv"
> credit=read.csv(url, header = TRUE, sep = ",")

Almost all variables are treated as numeric but, actually, most of them are factors,

> str(credit)
'data.frame':	1000 obs. of  21 variables:
 $ Creditability   : int  1 1 1 1 1 1 1 1 1 1 ...
 $ Account.Balance : int  1 1 2 1 1 1 1 1 4 2 ...
 $ Duration        : int  18 9 12 12 12 10 8  ...
 $ Purpose         : int  2 0 9 0 0 0 0 0 3 3 ...

(etc). Let us convert the categorical variables into factors,

> F=c(1,2,4,5,7,8,9,10,11,12,13,15,16,17,18,19,20)
> for(i in F) credit[,i]=as.factor(credit[,i])

Let us now create our training/calibration and validation/testing datasets, with proportions 2/3 and 1/3 respectively

> i_test=sample(1:nrow(credit),size=333)
> i_calibration=(1:nrow(credit))[-i_test]
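
Note that sample() is random here; for the split (and the figures below) to be reproducible, one can simply fix the seed first,

> set.seed(1)
> i_test=sample(1:nrow(credit),size=333)
> i_calibration=(1:nrow(credit))[-i_test]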

The first model we can fit is a logistic regression, on selected covariates

> LogisticModel <- glm(Creditability ~ Account.Balance + Payment.Status.of.Previous.Credit + Purpose + 
Length.of.current.employment + 
Sex...Marital.Status, family=binomial, 
data = credit[i_calibration,])

Based on that model, it is possible to draw the ROC curve, and to compute the AUC (on the validation dataset)

> fitLog <- predict(LogisticModel,type="response",
+                   newdata=credit[i_test,])
> library(ROCR)
> pred = prediction( fitLog, credit$Creditability[i_test])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)
> AUCLog1=performance(pred, measure = "auc")@y.values[[1]]
> cat("AUC: ",AUCLog1,"\n")
AUC:  0.7340997

An alternative is to consider a logistic regression on all explanatory variables

> LogisticModel <- glm(Creditability ~ ., 
+  family=binomial, 
+  data = credit[i_calibration,])

We might overfit, here, and we should observe that on the ROC curve

> fitLog <- predict(LogisticModel,type="response",
+                   newdata=credit[i_test,])
> pred = prediction( fitLog, credit$Creditability[i_test])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)
> AUCLog2=performance(pred, measure = "auc")@y.values[[1]]
> cat("AUC: ",AUCLog2,"\n")
AUC:  0.7609792

There is a slight improvement here, compared with the previous model, where only five explanatory variables were considered.
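
Beyond comparing the two AUCs visually, one could test whether the two ROC curves differ significantly, for instance with a DeLong test from the pROC package (a hedged sketch: fitLog5 is a hypothetical name for the predictions of the five-covariate model, which were overwritten above),

> library(pROC)
> roc5   <- roc(credit$Creditability[i_test], fitLog5)
> rocAll <- roc(credit$Creditability[i_test], fitLog)
> roc.test(roc5, rocAll)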

Consider now a classification tree (on all covariates)

> library(rpart)
> ArbreModel <- rpart(Creditability ~ ., 
+  data = credit[i_calibration,])

We can visualize the tree using

> library(rpart.plot)
> prp(ArbreModel,type=2,extra=1)

The ROC curve for that model is

> fitArbre <- predict(ArbreModel,
+                     newdata=credit[i_test,],
+                     type="prob")[,2]
> pred = prediction( fitArbre, credit$Creditability[i_test])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)
> AUCArbre=performance(pred, measure = "auc")@y.values[[1]]
> cat("AUC: ",AUCArbre,"\n")
AUC:  0.7100323

As expected, a single tree has a lower performance than the logistic regression. A natural idea is then to grow several trees using some bootstrap procedure, and to aggregate those predictions.
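
To make the idea concrete, here is a hand-rolled bagging sketch (just an illustration, with B=100 bootstrap samples; boot_pred and fitBag are names introduced here),

> B=100
> boot_pred=matrix(NA,length(i_test),B)
> for(b in 1:B){
+   idx=sample(i_calibration,size=length(i_calibration),replace=TRUE)
+   tree_b=rpart(Creditability~.,data=credit[idx,])
+   boot_pred[,b]=predict(tree_b,newdata=credit[i_test,],type="prob")[,2]
+ }
> fitBag=apply(boot_pred,1,mean)

Random forests do essentially that, adding a random subset of covariates at each split,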

> library(randomForest)
> RF <- randomForest(Creditability ~ .,
+ data = credit[i_calibration,])
> fitForet <- predict(RF,
+                     newdata=credit[i_test,],
+                     type="prob")[,2]
> pred = prediction( fitForet, credit$Creditability[i_test])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)
> AUCRF=performance(pred, measure = "auc")@y.values[[1]]
> cat("AUC: ",AUCRF,"\n")
AUC:  0.7682367

Here, this model is (slightly) better than the logistic regression. Actually, if we create many training/validation samples and compare the AUCs, we can observe that, on average, random forests perform better than logistic regressions,

> AUC=function(i){
+   set.seed(i)
+   i_test=sample(1:nrow(credit),size=333)
+   i_calibration=(1:nrow(credit))[-i_test]
+   LogisticModel <- glm(Creditability ~ ., 
+    family=binomial, 
+    data = credit[i_calibration,])
+   summary(LogisticModel)
+   fitLog <- predict(LogisticModel,type="response",
+                     newdata=credit[i_test,])
+   library(ROCR)
+   pred = prediction( fitLog, credit$Creditability[i_test])
+   AUCLog2=performance(pred, measure = "auc")@y.values[[1]] 
+   RF <- randomForest(Creditability ~ .,
+   data = credit[i_calibration,])
+   fitForet <- predict(RF,
+                       newdata=credit[i_test,],
+                       type="prob")[,2]
+   pred = prediction( fitForet, credit$Creditability[i_test])
+   AUCRF=performance(pred, measure = "auc")@y.values[[1]]
+   return(c(AUCLog2,AUCRF))
+ }
> A=Vectorize(AUC)(1:200)
> plot(t(A))
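
To summarize those 200 replications, one can look, for instance, at the average AUC of each model, and at side-by-side boxplots,

> apply(A,1,mean)
> boxplot(t(A),names=c("logistic regression","random forest"),ylab="AUC")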

How Could Classification Trees Be So Fast on Categorical Variables?

I think that, over the past months, I have been saying incorrect things about classification with categorical covariates, because I never took the time to look at it carefully. Consider some simulated dataset, with a logistic regression,

> n=1e3
> set.seed(1)
> X1=runif(n)
> q=quantile(X1,(0:26)/26)
> q[1]=0
> X2=cut(X1,q,labels=LETTERS[1:26])
> p=exp(-.1+qnorm(2*(abs(.5-X1))))/(1+exp(-.1+qnorm(2*(abs(.5-X1)))))
> Y=rbinom(n,size=1,p)
> df=data.frame(X1=X1,X2=X2,p=p,Y=Y)

Here, we use a continuous covariate, except that it is considered as unobserved. Instead, we have a categorical covariate with 26 categories. The (theoretical) relationship between the covariate and the probability is given below,

> vx1=seq(0,1,by=.001)
> vp=exp(-.1+qnorm(2*(abs(.5-vx1))))/(1+exp(-.1+qnorm(2*(abs(.5-vx1)))))
> plot(vx1,vp,type="l")

and we can compare it with the empirical probability within each modality.
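
A quick sketch to compute and plot those empirical frequencies (using the same aggregate call as later in the post),

> cond_prob=aggregate(df$Y,by=list(df$X2),mean)
> plot(1:26,cond_prob$x,xaxt="n",xlab="X2",ylab="empirical probability")
> axis(1,at=1:26,labels=LETTERS[1:26])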

If we run a classification tree, we get

> library(rpart)
> tree=rpart(Y~X2,data=df)
> library(rpart.plot)
> prp(tree, type=2, extra=1)

To be more specific, the output is here

> tree
1) root 1000 249.90000 0.4900000  
  2) X2=F,G,H,I,J,K,L,M,N,O,P,Q,R 499 105.3 0.302
    4) X2=J,K,L,M,N,O,P,Q,R 346  65.12 0.25144  *
    5) X2=F,G,H,I 153  37.22876 0.4183007       *
  3) X2=A,B,C,D,E,S,T,U,V,W,X,Y,Z 501 109.61 0.67
    6) X2=B,C,D,E,S,T,U,V,W,X 385  90.38 0.623  *
    7) X2=A,Y,Z 116  14.50862 0.8534483         *

Note that it takes less than a second to get that output. So clearly, we did not look at all possible combinations of modalities: for the first node alone, there are of the order of 2^26 possible groups, i.e.

> 2^26
[1] 67108864

It is big… not huge, but too big to try all combinations, since that’s only the first node, and we have to do it again on the two leaves, etc. Antoine (aka @ly_antoine) told me – while we were having a coffee after lunch today – the trick to get a fast algorithm, on categories. And as usual, the idea is very clever…

First, we need a function to compute the Gini index

> gini=function(y,classe){
+    T=table(y,classe)
+    nx=apply(T,2,sum)
+    n=sum(T)
+    pxy=T/matrix(rep(nx,each=2),nrow=2)
+    omega=matrix(rep(nx,each=2),nrow=2)/n
+    g=-sum(omega*pxy*(1-pxy))
+    return(g)}
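
Just to illustrate the function, we can evaluate it on the split that the tree above ended up selecting, F to R against the other letters,

> gini(y=df$Y,classe=df$X2 %in% LETTERS[6:18])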

For the first node, the idea is very simple:

  • Compute the empirical average of Y within each category,
> cond_prob=aggregate(df$Y,by=list(df$X2),mean)
  • Then sort those conditional averages,
  • Based on that ordering, sort the categories,
> Group_Letters=cond_prob[order(cond_prob$x),2]

  • Then consider (only) the 26 possible partitions obtained from that ordering, {categories with the smallest averages} against {all the others},

> v_gini=rep(NA,26)
> for(v in 1:26){
+   CLASSE=df$X2 %in% Group_Letters[1:v]
+   v_gini[v]=gini(y=df$Y,classe=CLASSE)
+ }

If we plot them, we get

> plot(1:26,v_gini,type="b")

As for continuous variables, we look for the maximum value, and then we have our two groups,

> sort(Group_Letters[1:which.max(v_gini)])
 [1] F G H I J K L M N O P Q R

That’s exactly what we got with the tree function in R,

1) root 1000 249.90000 0.4900000  
  2) X2=F,G,H,I,J,K,L,M,N,O,P,Q,R 499 105.30 0.30

Now, consider the leaf on the left (for instance)

> sub_df=df[df$X2 %in% sort(Group_Letters[1:which.max(v_gini)]),]

Then use the same algorithm as before: sort the conditional means,

> cond_prob=aggregate(sub_df$Y,by=
+ list(sub_df$X2),mean)
> s_Group_Letters=cond_prob[order(cond_prob$x),2]

Then compute Gini indices based on groups obtained from that ordering,

> v_gini=rep(NA,length(s_Group_Letters))
> for(v in 1:length(s_Group_Letters)){
+   CLASSE=sub_df$X2 %in% s_Group_Letters[1:v]
+   v_gini[v]=gini(y=sub_df$Y,classe=CLASSE)
+ }

If we plot it, we get our two groups,

> plot(1:length(s_Group_Letters),v_gini,type="b")

And the first group is here

> sort(s_Group_Letters[1:which.max(v_gini)])
[1] J K L M N O P Q R

Again, that’s exactly what we got with the R function

1) root 1000 249.90000 0.4900000  
  2) X2=F,G,H,I,J,K,L,M,N,O,P,Q,R 499 105.30 0.30
    4) X2=J,K,L,M,N,O,P,Q,R 346  65.12 0.25144  *

Clever, isn’t it?

Computational Time of Predictive Models

Tuesday, at the end of my 5-hour crash course on machine learning for actuaries, Pierre asked me an interesting question about the computational time of different techniques. I had been presenting the philosophy of various algorithms, but I forgot to mention computational time. I wanted to try several classification algorithms on the dataset used to illustrate the techniques

> rm(list=ls())
> myocarde=read.table(
"http://freakonometrics.free.fr/myocarde.csv",
head=TRUE,sep=";")
> levels(myocarde$PRONO)=c("Death","Survival")

But the dataset is rather small, with 71 observations and 7 explanatory variables. So I decided to replicate the observations, and to add some covariates,

> levels(myocarde$PRONO)=c("Death","Survival")
> idx=rep(1:nrow(myocarde),each=100)
> TPS=matrix(NA,30,10)
> myocarde_large=myocarde[idx,]
> k=23
> M=data.frame(matrix(rnorm(k*
+ nrow(myocarde_large)),nrow(myocarde_large),k))
> names(M)=paste("X",1:k,sep="")
> myocarde_large=cbind(myocarde_large,M)
> dim(myocarde_large)
[1] 7100   31
> object.size(myocarde_large)
2049.064 kbytes

The dataset is not big… but at least it does not take 0.0001 sec. to run a regression. Actually, running a logistic regression takes about 0.1 second

> system.time(fit<-glm(PRONO~.,
+ data=myocarde_large, family="binomial"))
       user      system     elapsed 
      0.114       0.016       0.134 
> object.size(fit)
9,313.600 kbytes

And I was surprised that the regression object was 9 MB, which is more than four times the size of the dataset. With a dataset 100 times larger,

> dim(myocarde_large_2)
[1] 710000     31

it takes 20 sec.

> system.time(fit<-glm(PRONO~.,
+ data=myocarde_large_2, family="binomial"))
       user      system     elapsed
     16.394       2.576      19.819 
> object.size(fit)
909,025.600 kbytes

and the object is (roughly) a hundred times bigger.
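
Most of that size comes from components stored inside the fitted object (the model frame, the qr decomposition, a copy of the data, etc.). A quick sketch to see which components are the largest, and to get a lighter object using the model, x and y arguments of glm,

> sort(sapply(fit,object.size),decreasing=TRUE)[1:5]
> fit_light <- glm(PRONO~.,data=myocarde_large_2,
+  family="binomial",model=FALSE,x=FALSE,y=FALSE)
> object.size(fit_light)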

Continue reading Computational Time of Predictive Models

On Some Alternatives to Regression Models

When you start discussing with people in machine learning, you quickly hear something like “forget your econometric models, your GLMs, I can easily find a machine learning ‘model’ that can beat yours”. I am usually very sceptical, especially when I hear “easily” or “always”. I have no problem with the fact that I use old econometric models, but I had the feeling that things aren’t that easy. I can understand that we might have problems when we have a lot of features (I am still working on that, I’ll get back to this point soon), but I have the feeling that I can capture interactions and non-linearities with standard econometric models as well as with any machine learning algorithm.

Just to illustrate, consider the following ‘model’

E[Y | X = x] = m(x)

where m(·) is (just to illustrate)

> n <- 5000
> rtf <- function(x1, x2) { sin(x1+x2)/(x1+x2) }
> xgrid <- seq(1,6,length=31)
> ygrid <- seq(1,6,length=31)
> zgrid <- outer(xgrid,ygrid,rtf)
> persp(xgrid,ygrid,zgrid,theta=30, phi=30, 
+ col="green", ticktype="detailed",shade=TRUE)
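
To actually generate a dataset from that model (a minimal sketch: the uniform design on [1,6] and the Gaussian noise are assumptions made here, just for the illustration),

> set.seed(1)
> x1 <- runif(n,1,6)
> x2 <- runif(n,1,6)
> y <- rtf(x1,x2) + rnorm(n,sd=.1)
> base <- data.frame(y,x1,x2)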

Continue reading On Some Alternatives to Regression Models

Growing some Trees

Consider here the dataset used in a previous post, about visualising a classification (with more than 2 features),

> MYOCARDE=read.table(
+ "http://freakonometrics.free.fr/saporta.csv",
+ header=TRUE,sep=";")

The default classification tree is

> library(rpart)
> library(rpart.plot)
> arbre = rpart(factor(PRONO)~.,data=MYOCARDE)
> rpart.plot(arbre,type=4,extra=6)

We can change the options here, such as the minimum number of observations per node,

> arbre = rpart(factor(PRONO)~.,data=MYOCARDE,
+       control=rpart.control(minsplit=10))
> rpart.plot(arbre,type=4,extra=6)

or

> arbre = rpart(factor(PRONO)~.,data=MYOCARDE,
+        control=rpart.control(minsplit=5))
> rpart.plot(arbre,type=4,extra=6)
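
Another knob is the complexity parameter cp; a possible sketch is to grow a deep tree, and then prune it back using the cross-validated error stored in the cptable,

> arbre = rpart(factor(PRONO)~.,data=MYOCARDE,
+        control=rpart.control(minsplit=5,cp=1e-3))
> printcp(arbre)
> cp_opt = arbre$cptable[which.min(arbre$cptable[,"xerror"]),"CP"]
> arbre = prune(arbre,cp=cp_opt)
> rpart.plot(arbre,type=4,extra=6)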

Continue reading Growing some Trees

Growing one Tree

Consider the following toy dataset, with some spam/ham information, and two words, “viagra” and “lottery”.

> load("spam.RData")
> head(db)
      Y viagra lottery
27 spam      0       1
37  ham      0       1
57 spam      0       0
89  ham      0       0
20 spam      1       0
86  ham      0       0

For the first node, compute the Gini index for the two variables,

> gini=function(variable){
+ T=table(db$Y,db[,variable])
+ nx=apply(T,2,sum)
+ ProbCond=T/matrix(rep(nx,each=2),2,2)
+ ProbCond
+ Gini=-ProbCond*(1-ProbCond)
+ sum(matrix(rep(nx,each=2),2,2)/sum(nx)*Gini)}
> gini("viagra")
[1] -0.44
> gini("lottery")
[1] -0.487

Here, the Gini index is maximal for “viagra”, so that will be the first node.
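
As a quick sanity check (a small sketch, assuming the same db object is in memory), rpart should select that very split once we force it to accept even tiny improvements,

> library(rpart)
> library(rpart.plot)
> arbre = rpart(factor(Y)~viagra+lottery,data=db,
+        control=rpart.control(minsplit=2,cp=1e-9))
> prp(arbre,type=2,extra=1)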

Continue reading Growing one Tree