Tag Archives: random forest

The m=√p rule for random forests

A couple of days ago, in our lab session, we discussed random forests and, since the example was based on ISLR, we had a quick discussion about the random choice of features at each split, and the “m=\sqrt{p}” rule

Interestingly, we can play a bit with that rule, try all possible choices of m, and repeat the experiment on different train/test splits,

library(randomForest)
library(ISLR2)
set.seed(123)

sim = function(t){
  # t only indexes the replication (a new train/test split each time)
  train = sample(nrow(Boston), size = nrow(Boston)*.7)
  subsim = function(i){
    # random forest with m = i features drawn at each split
    rf.boston <- randomForest(medv ~ ., data = Boston,
                              subset = train, mtry = i)
    yhat.rf <- predict(rf.boston, newdata = Boston[-train, ])
    # MSE on the test dataset
    mean((yhat.rf - Boston[-train, "medv"])^2)
  }
  Vectorize(subsim)(2:12)
}
M = Vectorize(sim)(1:499)

and now we can plot the MSE on the test dataset, as a function of m, the number of features drawn at each node

boxplot(t(M))

or more clearly

vm=apply(M,1,mean)
plot(2:12,vm,type="b",pch=19,ylim=c(10.5,15))
abline(v=sqrt(12),col="red")

Even if the “m=\sqrt{p}” rule might not be optimal here, we can see that using a random forest instead of a bagging strategy, i.e. m<p rather than m=p, can improve predictions (and not only make the code run faster).
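To make the comparison with bagging explicit, here is a minimal sketch on a single train/test split (same seed and split construction as above), where mtry = 12 corresponds to bagging (all predictors are candidates at each split) and floor(sqrt(12)) = 3 corresponds to the “m=\sqrt{p}” rule; the mse helper is only introduced here for convenience,

library(randomForest)
library(ISLR2)
set.seed(123)
train = sample(nrow(Boston), size = nrow(Boston)*.7)
# test-set MSE of a fitted model
mse = function(model) mean((predict(model, newdata = Boston[-train, ]) - Boston[-train, "medv"])^2)
# bagging: all p = 12 predictors are candidates at each split
bag.boston = randomForest(medv ~ ., data = Boston, subset = train, mtry = 12)
# random forest with the m = sqrt(p) rule, i.e. mtry = 3
rf.boston = randomForest(medv ~ ., data = Boston, subset = train, mtry = floor(sqrt(12)))
c(bagging = mse(bag.boston), forest = mse(rf.boston))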

Trees and forests

For my ACT6100 weekly quiz, I usually generate some datasets, and then ask students to compare various predictive algorithms. Last week, it was about classification trees and random forests. And students were surprised to see such differences (they had to estimate the probability of a specific label, for the barycenter of the covariates).

Usually, I use the following to generate some (here 12) covariates that could be correlated

library(FactoMineR)
n=279
library(clusterGeneration)
library(mnormt)
k=12
S=genPositiveDefMat("unifcorrmat",dim=k)
X=round(rmnorm(n,varcov=S$Sigma)+8,2)
rownames(X)=1:n
colnames(X)=LETTERS[1:k]

Then I need to generate some data, based on some covariates (5 out of 12), with various strengths

idx = sample(1:k,size=5)
u = sample(c(-(4:1),1:4),5)
beta = rep(0,k)
beta[idx] = u
U = X%*%beta
U = U-min(U)
U = U/max(U)*6-3
p = exp(U)/(1+exp(U))
Y = rbinom(n,size=1,prob=p)
df = data.frame(Y=as.factor(Y),X)
levels(df$Y) = c("blue","red")

We can run a classification tree

library(rpart)
arbre = rpart(Y~., data=df)

and a random forest,

library(randomForest)
set.seed(1)
arbres = randomForest(Y~., data=df)

Here are the partial plots for 4 of the explanatory variables that actually have an impact

partialPlot(arbres,pred.data = df, x.var = "A")


The prediction for the “average” point of the dataset is here

(parbre = predict(arbre,newdata=data.frame(t(apply(df[,-1],2,mean))),type = "prob"))
       blue       red
1 0.8064516 0.1935484
(parbres = predict(arbres,newdata=data.frame(t(apply(df[,-1],2,mean))),type = "prob"))
   blue   red
1 0.422 0.578
attr(,"class")
[1] "matrix" "votes"

and there is a substantial difference, with a predicted probability (of the red label) of 19% with a single tree, and 58% with 500 trees (the default number of trees of the function).

To understand why we can have such a difference, we should not only focus on the bagging strategy, but also look at the variability of the predictions obtained with individual trees,

B=1e4
parbres = rep(NA,B)
m=data.frame(t(apply(df[,-1],2,mean)))
for(b in 1:B){
  idx = sample(1:nrow(df),size=nrow(df),replace=TRUE)
  arbre = rpart(Y~., data=df[idx,])
  parbres[b] = predict(arbre,newdata=m,type = "prob")[2]
}
hist(parbres)
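As a quick sanity check (the exact values will vary with the seed), the average of those tree-based predictions, and the proportion of trees giving a probability above 50%, can be computed directly,

mean(parbres)
mean(parbres > .5)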

Surprisingly, we have here a bimodal distribution for \hat{y}, which is either very small for some trees, or very large for others. On average, we have a value close to 55%… I think I will use that generative algorithm more for future quizzes…

ACT6100, supervised learning

We keep making progress in the ACT6100 course on data analysis in actuarial science. The course materials are online at https://github.com/freakonometrics/ACT6100. The video capsules presenting the main supervised learning methods are now online

  1. Risk video pdf (45:06)
  2. Cross-validation video pdf (32:00)
  3. Loss functions video pdf (20:04)
  4. Bayes rule and discriminant analysis video pdf (37:23)
  5. Vapnik-Chervonenkis dimension video pdf
  6. Regularization and penalization video pdf (27:17)
  7. Regularization – Ridge video pdf (36:46)
  8. Regularization – Lasso (1) video pdf (41:23)
  9. Regularization – Lasso (2) video pdf (32:24)
  10. Regularization – GLM (Ridge and Lasso) video pdf (20:57)
  11. Regularization – SVM video pdf (38:25)
  12. Simulations and Monte Carlo video pdf (50:29)
  13. Simulations and bootstrap video pdf (36:03)
  14. Trees (1) video pdf (44:15)
  15. Trees (2) video pdf (39:35)
  16. Trees (3) video pdf (37:30)
  17. Interpretability video pdf (36:14)
  18. Ensemble methods video pdf (41:17)
  19. Stacking & bagging video pdf (40:15)
  20. Random forests video pdf (31:55)
  21. Sequential aggregation and boosting (1) video pdf (17:46)
  22. Sequential aggregation and boosting (2) video pdf (46:06)
  23. Neural networks (1) video pdf (45:43)
  24. Neural networks (2) video pdf (32:40)
  25. Neural networks (3) video pdf (31:44)
  26. Neural networks (4) video pdf (35:54)

If the video links do not work, I refer you to the full set of course capsules, here.

Classification from scratch, bagging and forests 10/8

Tenth post of our series on classification from scratch. Today, we’ll see the heuristics behind bagging techniques.

Often, bagging is associated with trees, to generate forests. But actually, it is possible to use bagging with any kind of model. Recall that bagging means “bootstrap aggregation”. So, consider a model m:\mathcal{X}\rightarrow \mathcal{Y}. Let \widehat{m}_{S} denote the estimator of m obtained from sample S=\{y_i,\mathbf{x}_i\}, with i\in\{1,\cdots,n\}.

Consider now some bootstrap sample, S_b=\{y_i,\mathbf{x}_i\}, where the i‘s are randomly drawn from \{1,\cdots,n\} (with replacement). Based on that sample, estimate \widehat{m}_{S_b}. Then draw many samples, and consider the aggregation of the estimators obtained, using either a majority rule, or the average of probabilities (if a probabilistic model was considered). Hence \widehat{m}^{bag}(\mathbf{x})=\frac{1}{B}\sum_{b=1}^B \widehat{m}_{S_b}(\mathbf{x})
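Before specializing to the logistic regression, here is a minimal generic sketch of that aggregation (the names m_bag, fit, S and B are mine, just for illustration, and type="response" assumes a glm-like base learner),

m_bag = function(fit, S, newdata, B = 100){
  preds = replicate(B, {
    # bootstrap sample S_b, drawn with replacement from S
    Sb = S[sample(1:nrow(S), size = nrow(S), replace = TRUE), ]
    # \widehat{m}_{S_b}(x), evaluated on the new observations
    predict(fit(Sb), newdata = newdata, type = "response")
  })
  # average of the B bootstrap estimators
  rowMeans(matrix(preds, ncol = B))
}
# e.g. with a logistic regression as base learner, on the small dataset df used below
# m_bag(function(d) glm(y~x1+x2, data=d, family=binomial), df, newdata=df[1:5,])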

Bagging logistic regression #1

Consider the case of the logistic regression. To generate a bootstrap sample, it is natural to use the technique described above, i.e. draw pairs (y_i,\mathbf{x}_i) randomly, uniformly (with probability 1/n), with replacement. Consider here the small dataset, just to visualize. For the b part of bagging, use the following code

L_logit = list()
n = nrow(df)
for(s in 1:1000){
  df_s = df[sample(1:n,size=n,replace=TRUE),]
  L_logit[[s]] = glm(y~., df_s, family=binomial)}

Then we should aggregate over the 1000 models, to get the agg part of bagging,

p = function(x){
  nd=data.frame(x1=x[1], x2=x[2]) 
  unlist(lapply(1:1000,function(z) predict(L_logit[[z]],newdata=nd,type="response")))}

We now have a prediction for any new observation

vu = seq(0,1,length=101)
vv = outer(vu,vu,Vectorize(function(x,y) mean(p(c(x,y)))))
image(vu,vu,vv,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(vu,vu,vv,levels = .5,add=TRUE)

Bagging logistic regression #2

Another technique that can be used to generate a bootstrap sample is to keep all the \mathbf{x}_i‘s, but for each of them, to draw (randomly) a value for y, with Y_{i,b}\sim\mathcal{B}(\widehat{m}_{S}(\mathbf{x}_i)), since \widehat{m}(\mathbf{x})=\mathbb{P}[Y=1|\mathbf{X}=\mathbf{x}]. Thus, the code for the b part of the bagging algorithm is now

L_logit = list()
n = nrow(df)
reg = glm(y~x1+x2, df, family=binomial)
for(s in 1:1000){
  df_s = df
  df_s$y = factor(rbinom(n,size=1,prob=predict(reg,type="response")),labels=0:1)
  L_logit[[s]] = glm(y~., df_s, family=binomial)
}

The agg part of the bagging algorithm remains unchanged. Here we obtain

vu = seq(0,1,length=101)
vv = outer(vu,vu,Vectorize(function(x,y) mean(p(c(x,y)))))
image(vu,vu,vv,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(vu,vu,vv,levels = .5,add=TRUE)


Of course, we can use that code to check the predictions obtained on the observations we have in our sample. Just for a change, consider here the myocarde data. The entire code is here

L_logit = list()
n = nrow(myocarde)
reg = glm(as.factor(PRONO)~., myocarde, family=binomial)
for(s in 1:1000){
  myocarde_s = myocarde
  myocarde_s$PRONO = 1*rbinom(n,size=1,prob=predict(reg,type="response"))
  L_logit[[s]] = glm(as.factor(PRONO)~., myocarde_s, family=binomial)
}
p = function(x){
  nd=data.frame(FRCAR=x[1], INCAR=x[2], INSYS=x[3], PRDIA=x[4], 
                PAPUL=x[5], PVENT=x[6], REPUL=x[7]) 
  unlist(lapply(1:1000,function(z) predict(L_logit[[z]],newdata=nd,type="response")))}

For the first observation, with our 1000 simulated datasets, and our 1000 models, we obtained the following estimates of the probability of dying.

histo = function(i){
  x = as.numeric(myocarde[i,1:7])
  v_x = p(x)
  hist(v_x,proba=TRUE,breaks=seq(0,1,by=.05),xlab="",main="",
       col=rep(c(rgb(0,0,1,.4),rgb(1,0,0,.4)),each=10),ylim=c(0,5))
  segments(mean(v_x),0,mean(v_x),5,col="red",lty=2)
  points(myocarde$PRONO[i],0,pch=19,cex=2)
  # proportion (in %) of models with a predicted probability above 50%
  xi = round(mean(v_x>.5)*1000)/10
  text(.75,-.1,paste(xi,"%",sep=""),col=rgb(1,0,0,.6))}
histo(1)
histo(4)

Hence, for the first observation, in 77.8% of the models, the predicted probability was higher than 50%, and the average probability was actually close to 75%.

or, for observation 22, we get predictions very close to the first one (except that the first patient died, while the 22nd survived)

histo(23)
histo(11)


Bagging trees

Let’s now get back to our trees, mentioned in the previous post. Bagging was introduced in 1994 by Leo Breiman in Bagging Predictors. The first section describes the procedure, and the second one introduces “Bagging Classification Trees”. Trees are nice for interpretation, but most of the time, they are rather poor predictors. The idea of bagging was to improve the accuracy of classification trees.

The idea of bagging is to generate a lot of trees

library(rpart)
library(rpart.plot)
clr12 = c("#8dd3c7","#ffffb3","#bebada","#fb8072","#80b1d3","#fdb462","#b3de69","#fccde5","#d9d9d9","#bc80bd","#ccebc5","#ffed6f")
n = nrow(myocarde)
par(mfrow=c(4,3))
sed=c(1,2,4,5,6,10,11,21,22,24,27,28,30)
for(i in 1:12){
  set.seed(sed[i])
  idx = sample(1:n, size=n, replace=TRUE)
  cart = rpart(PRONO~., myocarde[idx,])
  prp(cart,type=2,extra=1,box.col=clr12[i])}


The strategy is actually the same as before. For the bootstrap part, store the trees in a list

L_tree = list()
for(s in 1:1000){
  idx = sample(1:n, size=n, replace=TRUE)
  L_tree[[s]] = rpart(as.factor(PRONO)~., myocarde[idx,])
}

and for the aggregation part, just take the average of predicted probabilities

p = function(x){
  nd=data.frame(FRCAR=x[1], INCAR=x[2], INSYS=x[3], PRDIA=x[4], 
                PAPUL=x[5], PVENT=x[6], REPUL=x[7]) 
  unlist(lapply(1:1000,function(z) predict(L_tree[[z]],newdata=nd,type="prob")[,2]))}

Since we cannot visualize predictions with this example, let us run the same code on the smaller dataset

L_tree = list()
n = nrow(df)
for(s in 1:1000){
  idx = sample(1:n, size=n, replace=TRUE)
  L_tree[[s]] = rpart(y~x1+x2, df[idx,],control = rpart.control(cp = 0.25,
minsplit = 2))
}
p = function(x){
  nd=data.frame(x1=x[1], x2=x[2]) 
  unlist(lapply(1:1000,function(z) predict(L_tree[[z]],newdata=nd,type="prob")[,2]))}
vu=seq(0,1,length=101)
vv=outer(vu,vu,Vectorize(function(x,y) mean(p(c(x,y)))))
image(vu,vu,vv,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(vu,vu,vv,levels = .5,add=TRUE)

From bags to forests

Here, we grew a lot of trees, but it is not stricto sensu a random forest algorithm, as introduced in 1995 in Random decision forests. Actually, the difference lies in how the decision trees are built. To understand what happens, get back to the previous post on classification trees. As we’ve seen, at each node, we look at possible splits: we consider all possible variables, and all possible thresholds. The strategy here will be to draw randomly k variables out of p (with, of course, k<p, for instance k=\sqrt{p}). That’s interesting in high dimension, because otherwise, at each split, we would have to look at all variables and all cutoffs, and that can take quite some time (especially with the bootstrap procedure, where the goal is to grow 1000 trees).
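With the randomForest package, that random selection of k variables at each split is controlled by the mtry argument. Just as an illustration (a minimal sketch, not the from-scratch implementation, assuming the myocarde data used above, with its 7 covariates and PRONO as the response),

library(randomForest)
set.seed(1)
# k = floor(sqrt(7)) = 2 variables drawn at random, among the p = 7 candidates, at each split
rf = randomForest(as.factor(PRONO) ~ ., data = myocarde, ntree = 1000, mtry = floor(sqrt(7)))
predict(rf, newdata = myocarde, type = "prob")[1:6, ]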

To be continued…

Classification on the German Credit Database

In our data science course, this morning, we used random forests to improve predictions on the German Credit Dataset. The dataset is

> url="http://freakonometrics.free.fr/german_credit.csv"
> credit=read.csv(url, header = TRUE, sep = ",")

Almost all variables are treated as numeric, but actually, most of them are factors,

> str(credit)
'data.frame':	1000 obs. of  21 variables:
 $ Creditability   : int  1 1 1 1 1 1 1 1 1 1 ...
 $ Account.Balance : int  1 1 2 1 1 1 1 1 4 2 ...
 $ Duration        : int  18 9 12 12 12 10 8  ...
 $ Purpose         : int  2 0 9 0 0 0 0 0 3 3 ...

(etc). Let us convert the categorical variables to factors,

> F=c(1,2,4,5,7,8,9,10,11,12,13,15,16,17,18,19,20)
> for(i in F) credit[,i]=as.factor(credit[,i])

Let us now create our training/calibration and validation/testing datasets, with respective proportions 2/3 and 1/3

> i_test=sample(1:nrow(credit),size=333)
> i_calibration=(1:nrow(credit))[-i_test]

The first model we can fit is a logistic regression, on selected covariates

> LogisticModel <- glm(Creditability ~ Account.Balance + Payment.Status.of.Previous.Credit + Purpose + 
+                    Length.of.current.employment + 
+                    Sex...Marital.Status, family=binomial, 
+                    data = credit[i_calibration,])

Based on that model, it is possible to draw the ROC curve, and to compute the AUC (on the validation dataset)

> fitLog <- predict(LogisticModel,type="response",
+                   newdata=credit[i_test,])
> library(ROCR)
> pred = prediction( fitLog, credit$Creditability[i_test])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)
> AUCLog1=performance(pred, measure = "auc")@y.values[[1]]
> cat("AUC: ",AUCLog1,"\n")
AUC:  0.7340997

An alternative is to consider a logistic regression on all explanatory variables

> LogisticModel <- glm(Creditability ~ ., 
+  family=binomial, 
+  data = credit[i_calibration,])

We might overfit, here, and we should observe that on the ROC curve

> fitLog <- predict(LogisticModel,type="response",
+                   newdata=credit[i_test,])
> pred = prediction( fitLog, credit$Creditability[i_test])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)
> AUCLog2=performance(pred, measure = "auc")@y.values[[1]]
> cat("AUC: ",AUCLog2,"\n")
AUC:  0.7609792

There is a slight improvement here,  compared with the previous model, where only five explanatory variables were considered.

Consider now a classification tree (on all covariates)

> library(rpart)
> ArbreModel <- rpart(Creditability ~ ., 
+  data = credit[i_calibration,])

We can visualize the tree using

> library(rpart.plot)
> prp(ArbreModel,type=2,extra=1)

The ROC curve for that model is

> fitArbre <- predict(ArbreModel,
+                     newdata=credit[i_test,],
+                     type="prob")[,2]
> pred = prediction( fitArbre, credit$Creditability[i_test])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)
> AUCArbre=performance(pred, measure = "auc")@y.values[[1]]
> cat("AUC: ",AUCArbre,"\n")
AUC:  0.7100323

As expected, a single tree has a lower performance, compared with the logistic regression. And a natural idea is to grow several trees using some bootstrap procedure, and then to aggregate those predictions.

> library(randomForest)
> RF <- randomForest(Creditability ~ .,
+ data = credit[i_calibration,])
> fitForet <- predict(RF,
+                     newdata=credit[i_test,],
+                     type="prob")[,2]
> pred = prediction( fitForet, credit$Creditability[i_test])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)
> AUCRF=performance(pred, measure = "auc")@y.values[[1]]
> cat("AUC: ",AUCRF,"\n")
AUC:  0.7682367

Here this model is (slightly) better than the logistic regression. Actually, if we create many training/validation samples, and compare the AUC, we can observe that – on average – random forests perform better than logistic regressions,

> AUC=function(i){
+   set.seed(i)
+   i_test=sample(1:nrow(credit),size=333)
+   i_calibration=(1:nrow(credit))[-i_test]
+   LogisticModel <- glm(Creditability ~ ., 
+    family=binomial, 
+    data = credit[i_calibration,])
+   summary(LogisticModel)
+   fitLog <- predict(LogisticModel,type="response",
+                     newdata=credit[i_test,])
+   library(ROCR)
+   pred = prediction( fitLog, credit$Creditability[i_test])
+   AUCLog2=performance(pred, measure = "auc")@y.values[[1]] 
+   RF <- randomForest(Creditability ~ .,
+   data = credit[i_calibration,])
+   fitForet <- predict(RF,
+                       newdata=credit[i_test,],
+                       type="prob")[,2]
+   pred = prediction( fitForet, credit$Creditability[i_test])
+   AUCRF=performance(pred, measure = "auc")@y.values[[1]]
+   return(c(AUCLog2,AUCRF))
+ }
> A=Vectorize(AUC)(1:200)
> plot(t(A))
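To quantify that claim, a minimal summary of those 200 replications (the first row of A contains the AUC of the logistic regression, the second one the AUC of the random forest) could be

> apply(A, 1, mean)
> mean(A[2,] > A[1,])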

Variable Importance with Correlated Features

Variable importance graphs are a great tool to see, in a model, which variables are interesting. Since we usually use them with random forests, it looks like they work well with (very) large datasets. The problem with large datasets is that a lot of features are ‘correlated’, and in that case, the values on variable importance plots can hardly be interpreted, or compared. Consider for instance a very simple linear model (the ‘true’ model, used to generate the data), Y_i=1+2X_{1,i}-2X_{3,i}+\varepsilon_i.

Here, we use a random forest to model the relationship between the features, but actually, we also consider another feature, X_2 – not used to generate the data – that is correlated with X_1. And we consider a random forest on those three features, X_1, X_2 and X_3.

In order to get some more robust results, I generate 100 datasets, of size 1,000.

library(mnormt)
library(randomForest)

impact_correl=function(r=.9){
  nsim=100
  IMP=matrix(NA,3,nsim)
  n=1000
  R=matrix(c(1,r,r,1),2,2)
  for(s in 1:nsim){
    X1=rmnorm(n,varcov=R)
    X3=rnorm(n)
    Y=1+2*X1[,1]-2*X3+rnorm(n)
    db=data.frame(Y=Y,X1=X1[,1],X2=X1[,2],X3=X3)
    RF=randomForest(Y~.,data=db)
    IMP[,s]=importance(RF)}
  apply(IMP,1,mean)}

C=c(seq(0,.6,by=.1),seq(.65,.9,by=.05),.99,.999)
VI=matrix(NA,3,length(C))
for(i in 1:length(C)){VI[,i]=impact_correl(C[i])}

plot(C,VI[1,],type="l",col="red")
lines(C,VI[2,],col="blue")
lines(C,VI[3,],col="purple")

The purple line on top is the variable importance value of X_3, which is rather stable (almost constant, as a first order approximation). The red line is the variable importance function of X_1, while the blue line is the variable importance function of X_2. For instance, the importance function with two very correlated variables is

It looks like X_3 is much more important than the other two, which is – somehow – not the case. It is just that the model cannot choose between X_1 and X_2: sometimes X_1 is selected, and sometimes it is X_2. I think I find that graph confusing because I would probably expect the importance of X_1 to be constant. It looks like we have a plot of the importance of each variable, given the existence of all the other variables.

Actually, what I have in mind is what we get when we consider the stepwise procedure, and when we remove each variable from the set of features,

library(mnormt)
impact_correl=function(r=.9){
  nsim=100
  IMP=matrix(NA,4,nsim)
  n=1000
  R=matrix(c(1,r,r,1),2,2)
  for(s in 1:nsim){
    X1=rmnorm(n,varcov=R)
    X3=rnorm(n)
    Y=1+2*X1[,1]-2*X3+rnorm(n)
    db=data.frame(Y=Y,X1=X1[,1],X2=X1[,2],X3=X3)
    IMP[1,s]=AIC(lm(Y~X1+X2+X3,data=db))
    IMP[2,s]=AIC(lm(Y~X2+X3,data=db))
    IMP[3,s]=AIC(lm(Y~X1+X3,data=db))
    IMP[4,s]=AIC(lm(Y~X1+X2,data=db))
  }
  apply(IMP,1,mean)}

Here, if we use the same code as previously,

C=c(seq(0,.6,by=.1),seq(.65,.9,by=.05),.99,.999)
VI2=matrix(NA,4,length(C))
for(i in 1:length(C)){VI2[,i]=impact_correl(C[i])}

we get the following graph

plot(C,VI2[2,],type="l",col="red")
lines(C,VI2[3,],col="blue")
lines(C,VI2[4,],col="purple")

The purple line is obtained when we remove X_3: it is the worst model. When we keep X_1 and X_3 (and remove X_2), we get the blue line. And this line is constant: the quality of the model does not depend on X_2 (this is what puzzled me in the previous graph, that having X_2 does have an impact on the importance of X_1). The red line is what we get when we remove X_1. With 0 correlation, it is the same as the purple line: we get a poor model. With a correlation close to 1, it is the same as having X_1, and we get the same as the blue line.

Nevertheless, discussing the importance of features, when we have a lot of correlated features, is not that intuitive…

Variable Selection using Cross-Validation (and Other Techniques)

A natural technique for selecting variables in the context of generalized linear models is to use a stepwise procedure. It is natural, but controversial, as discussed by Frank Harrell in a great post, clearly worth reading. Frank mentions about 10 points against a stepwise procedure.

  • It yields R-squared values that are badly biased to be high.
  • The F and chi-squared tests quoted next to each variable on the printout do not have the claimed distribution.
  • The method yields confidence intervals for effects and predicted values that are falsely narrow (see Altman and Andersen (1989)).
  • It yields p-values that do not have the proper meaning, and the proper correction for them is a difficult problem.
  • It gives biased regression coefficients that need shrinkage (the coefficients for remaining variables are too large; see Tibshirani (1996)).
  • It has severe problems in the presence of collinearity.
  • It is based on methods (e.g., F tests for nested models) that were intended to be used to test prespecified hypotheses.
  • Increasing the sample size does not help very much (see Derksen and Keselman (1992)).
  • It allows us to not think about the problem.
  • It uses a lot of paper.

Continue reading Variable Selection using Cross-Validation (and Other Techniques)