For the ninth chapter of the non-life insurance actuarial course at ENSAE, a small grab-bag before tackling the modelling of liabilities, talking a bit about Tweedie models (the collective model vs. individual models), variable selection, and model selection. The slides are online (as usual, the downloadable pdf version is more complete than the one on slideshare).
Variable Importance with Correlated Features
Variable importance graphs are a great tool to see which variables matter in a model. Since we usually use them with random forests, they seem to work well with (very) large datasets. The problem with large datasets is that many features are ‘correlated’, and in that case the values shown on variable importance plots can hardly be compared across variables. Consider for instance a very simple linear model (the ‘true’ model, used to generate the data), Y = 1 + 2 X1 - 2 X3 + e, where e is some Gaussian noise.
Here, we use a random forest to model the relationship between Y and the features, but we also consider another feature, X2 – not used to generate the data – that is correlated with X1. And we run a random forest on those three features, X1, X2 and X3.
In order to get more robust results, I generate 100 datasets of size 1,000 for each value of the correlation between X1 and X2.
library(mnormt)
library(randomForest)
# average variable importance over nsim simulated datasets, for a given
# correlation r between X1 and X2 (X2 is not in the true model)
impact_correl=function(r=.9){
  nsim=100
  IMP=matrix(NA,3,nsim)
  n=1000
  R=matrix(c(1,r,r,1),2,2)
  for(s in 1:nsim){
    X1=rmnorm(n,varcov=R)
    X3=rnorm(n)
    Y=1+2*X1[,1]-2*X3+rnorm(n)
    db=data.frame(Y=Y,X1=X1[,1],X2=X1[,2],X3=X3)
    RF=randomForest(Y~.,data=db)
    IMP[,s]=importance(RF)
  }
  apply(IMP,1,mean)
}
C=c(seq(0,.6,by=.1),seq(.65,.9,by=.05),.99,.999)
VI=matrix(NA,3,length(C))
for(i in 1:length(C)){ VI[,i]=impact_correl(C[i]) }
plot(C,VI[1,],type="l",col="red")   # importance of X1
lines(C,VI[2,],col="blue")          # importance of X2
lines(C,VI[3,],col="purple")        # importance of X3
The purple line, on top, is the variable importance of X3, which is rather stable (almost constant, as a first-order approximation). The red line is the variable importance of X1, as a function of the correlation, while the blue line is the variable importance of X2. For instance, the importance plot with two very correlated variables is the following.
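(The figure itself is not reproduced here; the following is a minimal sketch, reusing the data-generating process above with r = 0.9 and an arbitrary seed of my own, to obtain that kind of importance plot.)

library(mnormt)
library(randomForest)
set.seed(123)                     # arbitrary seed, for reproducibility
n=1000
r=.9
R=matrix(c(1,r,r,1),2,2)
X1=rmnorm(n,varcov=R)
X3=rnorm(n)
Y=1+2*X1[,1]-2*X3+rnorm(n)
db=data.frame(Y=Y,X1=X1[,1],X2=X1[,2],X3=X3)
RF=randomForest(Y~.,data=db)
varImpPlot(RF)   # X3 dominates, X1 and X2 share the remaining importance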
It looks like X3 is much more important than the other two, which is – somehow – not the case. It is just that the model cannot choose between X1 and X2: sometimes X1 is selected, and sometimes X2 is. I think I find that graph confusing because I would probably expect the importance of X1 to be constant. It looks like what we get is the importance of each variable, given the presence of all the other variables.
Actually, what I have in mind is what we get when we consider the stepwise procedure, and when we remove each variable from the set of features,
library(mnormt)
# average AIC over nsim simulated datasets, for the full model and for the
# three models obtained by removing one feature at a time
impact_correl=function(r=.9){
  nsim=100
  IMP=matrix(NA,4,nsim)
  n=1000
  R=matrix(c(1,r,r,1),2,2)
  for(s in 1:nsim){
    X1=rmnorm(n,varcov=R)
    X3=rnorm(n)
    Y=1+2*X1[,1]-2*X3+rnorm(n)
    db=data.frame(Y=Y,X1=X1[,1],X2=X1[,2],X3=X3)
    IMP[1,s]=AIC(lm(Y~X1+X2+X3,data=db))   # full model
    IMP[2,s]=AIC(lm(Y~X2+X3,data=db))      # X1 removed
    IMP[3,s]=AIC(lm(Y~X1+X3,data=db))      # X2 removed
    IMP[4,s]=AIC(lm(Y~X1+X2,data=db))      # X3 removed
  }
  apply(IMP,1,mean)
}
Here, if we use the same code as previously,
C=c(seq(0,.6,by=.1),seq(.65,.9,by=.05),.99,.999)
VI=matrix(NA,4,length(C))
for(i in 1:length(C)){ VI[,i]=impact_correl(C[i]) }
we get the following graph
plot(C,VI[2,],type="l",col="red")   # X1 removed
lines(C,VI[3,],col="blue")          # X2 removed
lines(C,VI[4,],col="purple")        # X3 removed
The purple line is obtained when we remove X3: it is the worst model. When we keep X1 and X3 (i.e. remove X2), we get the blue line. And this line is constant: the quality of the model does not depend on X2 (this is what puzzled me in the previous graph, that having X2 did have an impact on the importance of X1). The red line is what we get when we remove X1. With a correlation of 0, it is the same as the purple line: we get a poor model. With a correlation close to 1, it is the same as having X1 in the model, and we get the same as the blue line.
Nevertheless, discussing the importance of features when many of them are correlated is not that intuitive…
Variable Selection using Cross-Validation (and Other Techniques)
A natural technique to select variables in the context of generalized linear models is to use a stepwise procedure. It is natural, but controversial, as discussed by Frank Harrell in a great post, clearly worth reading. Frank mentions about ten points against a stepwise procedure (a small simulation illustrating the first one follows the list).
- It yields R-squared values that are badly biased to be high.
- The F and chi-squared tests quoted next to each variable on the printout do not have the claimed distribution.
- The method yields confidence intervals for effects and predicted values that are falsely narrow (see Altman and Andersen (1989)).
- It yields p-values that do not have the proper meaning, and the proper correction for them is a difficult problem.
- It gives biased regression coefficients that need shrinkage (the coefficients for remaining variables are too large; see Tibshirani (1996)).
- It has severe problems in the presence of collinearity.
- It is based on methods (e.g., F tests for nested models) that were intended to be used to test prespecified hypotheses.
- Increasing the sample size does not help very much (see Derksen and Keselman (1992)).
- It allows us to not think about the problem.
- It uses a lot of paper.
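To make the first point concrete, here is a small simulation of my own (not from Frank's post): the response and the 50 candidate predictors are pure independent noise, yet a backward AIC search typically returns a model with a fairly high R-squared and several apparently significant coefficients.

set.seed(1)
n=100
p=50
X=matrix(rnorm(n*p),n,p)
df=data.frame(Y=rnorm(n),X)      # Y is independent of every column of X
full=lm(Y~.,data=df)
sel=step(full,trace=0)           # backward selection, AIC criterion
summary(sel)$r.squared           # optimistically high, for pure noise
summary(sel)$coefficients[,4]    # several p-values below 5%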
‘Variable Importance Plot’ and Variable Selection
Classification trees are nice. They provide an interesting alternative to logistic regression. I started to include them in my courses maybe 7 or 8 years ago. The question is nice (how to get an optimal partition), the algorithmic procedure is nice (the trick of splitting according to one variable, and only one, at each node, and then moving forward, never backward), and the visual output is just perfect (with that tree structure). But the predictions can be rather poor: the performance of that algorithm can hardly compete with a (well-specified) logistic regression.
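As a quick illustration (a toy example of my own, not from the post), a single tree grown with rpart on a built-in dataset shows that one-variable-per-split, tree-shaped output:

library(rpart)
# each internal node splits on a single variable, and splits are never revisited
tree = rpart(Species ~ ., data = iris, method = "class")
plot(tree)
text(tree)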
Then I discovered forests (see Leo Breiman’s page for a detailed presentation). Being a huge fan of bootstrap procedures, I loved the idea. In regression models, I usually mention the bootstrap to avoid asymptotic approximations: we bootstrap the rows (the observations). In the case of random forests, I have to admit that the idea of randomly selecting a set of candidate variables at each node is very clever. The performance is much better, but interpretation is usually more difficult. And there is something that I love when there are a lot of covariates: the variable importance plot. Which is something that we can hardly get with econometric models (please let me know if I’m wrong).
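Before the larger simulated example below, a minimal sketch (again a toy example of my own) showing the two layers of randomness just mentioned, and the importance plot:

library(randomForest)
set.seed(1)
rf = randomForest(Species ~ ., data = iris,
                  ntree = 500,        # 500 bootstrap samples of the rows
                  mtry = 2,           # 2 candidate variables drawn at each split
                  importance = TRUE)
varImpPlot(rf)                        # the variable importance plot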
In order to illustrate, let us generate a large dataset. Not necessarily huge, but large, so that we really have to select variables. Since it is more interesting if we have possibly correlated variables, we need a covariance matrix. There is a nice package in R to randomly generate covariance matrices.
> set.seed(1)
> n=500
> library(clusterGeneration)
> library(mnormt)
> S=genPositiveDefMat("eigen",dim=15)
> S=genPositiveDefMat("unifcorrmat",dim=15)
> X=rmnorm(n,varcov=S$Sigma)
> library(corrplot)
> corrplot(cor(X), order = "hclust")
See Ghosh & Henderson (2003) for more details on the methodology.
Variable Selection versus Selection of Factor Levels
In class, we (very briefly) discussed automatic variable selection. The simplest method is a stepwise procedure, based on a criterion such as AIC or BIC. Consider the following dataset,
> N = base$nbre
> E = base$exposition
> X1 = base$carburant
> X2 = cut(base$agevehicule,c(0,3,10,101),
+ right=FALSE)
> X3 = cut(base$ageconducteur,c(0,22,45,101),
+ right=FALSE)
> X4 = as.factor(base$zone)
> X5 = as.factor(base$puissance)
> X6 = as.factor(base$region)
> X7 = as.factor(base$marque)
> base1=data.frame(N,E,X1,X2,X3,X4,X5,X6,X7)
A backward stepwise procedure gives, here,
> reg1=glm(N~X1+X2+X3+X4+X5+X6+X7+offset(log(E)),
+ family="poisson",data=base1)
> step(reg1)
Start:  AIC=20492.67
N ~ X1 + X2 + X3 + X4 + X5 + X6 + X7 + offset(log(E))

       Df Deviance   AIC
- X5   11    15316 20482
- X3    2    15305 20490
<none>        15304 20493
- X2    2    15314 20499
- X1    1    15319 20506
- X7   10    15343 20511
- X4    5    15398 20576
- X6   14    15569 20729

Step:  AIC=20482.35
N ~ X1 + X2 + X3 + X4 + X6 + X7 + offset(log(E))

       Df Deviance   AIC
- X3    2    15317 20479
<none>        15316 20482
- X2    2    15326 20488
- X1    1    15334 20498
- X7   10    15359 20505
- X4    5    15410 20566
- X6   14    15579 20717

Step:  AIC=20479.33
N ~ X1 + X2 + X4 + X6 + X7 + offset(log(E))

       Df Deviance   AIC
<none>        15317 20479
- X2    2    15327 20485
- X1    1    15334 20495
- X7   10    15360 20502
- X4    5    15410 20563
- X6   14    15620 20754

Call:  glm(formula = N ~ X1 + X2 + X4 + X6 + X7 + offset(log(E)),
    family = "poisson", data = base1)

Coefficients:
(Intercept)          X1E     X2[3,10)   X2[10,101)          X4B
 -1.0588454   -0.1653822    0.0266763   -0.1135451   -0.0004047
        X4C          X4D          X4E          X4F          X60
  0.1497622    0.3748811    0.5052894    0.4292016   -0.3590838
        X61          X62          X63          X64          X65
 -0.9300641   -1.0278887   -1.1818218   -1.0971797   -0.9459414
        X66          X67          X68          X69         X610
 -1.3690795   -1.1425678   -1.5309402   -1.3883549   -1.4603624
       X611         X612         X613          X72          X73
 -1.6763206   -1.3974092   -1.4864404    0.0246113    0.1144990
        X74          X75          X76         X710         X711
 -0.0932555    0.1635397   -0.1478095    0.2502030    0.1967970
       X712         X713         X714
 -0.2420215    0.2161411   -0.1963162

Degrees of Freedom: 49999 Total (i.e. Null);  49967 Residual
Null Deviance:     15810
Residual Deviance: 15320    AIC: 20480
In other words, we drop the third variable (the age of the main driver, in arbitrary classes) and the fifth one (the power of the vehicle), and we keep all the others. But here, if a variable has not been retained, it is because, overall, it did not bring much information. It would nevertheless be possible to keep part of that information, by keeping only some of its levels. The idea is to expand the dataset, creating indicator variables for each level. The dataset will be much larger, and the selection will then take much more time,
> base2=data.frame(model.matrix( ~ 0+X1+X2+X3+X4+X5+X6+X7,
+ data=base1))
> base2$E=base1$E
> base2$N=base1$N
> reg2=glm(N~.-E+offset(log(E)),family="poisson",
+ data=base2)
> step(reg2)
Start:  AIC=20492.67
N ~ (X1D + X1E + X2.3.10. + X2.10.101. + X3.22.45. + X3.45.101. +
    X4B + X4C + X4D + X4E + X4F + X55 + X56 + X57 + X58 + X59 +
    X510 + X511 + X512 + X513 + X514 + X515 + X60 + X61 + X62 +
    X63 + X64 + X65 + X66 + X67 + X68 + X69 + X610 + X611 +
    X612 + X613 + X72 + X73 + X74 + X75 + X76 + X710 + X711 +
    X712 + X713 + X714 + E) - E + offset(log(E))

Step:  AIC=20492.67
N ~ X1D + X2.3.10. + X2.10.101. + X3.22.45. + X3.45.101. + X4B +
    X4C + X4D + X4E + X4F + X55 + X56 + X57 + X58 + X59 + X510 +
    X511 + X512 + X513 + X514 + X515 + X60 + X61 + X62 + X63 +
    X64 + X65 + X66 + X67 + X68 + X69 + X610 + X611 + X612 +
    X613 + X72 + X73 + X74 + X75 + X76 + X710 + X711 + X712 +
    X713 + X714 + offset(log(E))

             Df Deviance   AIC
- X4B         1    15304 20491
- X58         1    15304 20491
- X511        1    15304 20491
- X2.3.10.    1    15304 20491
- X72         1    15304 20491
- X513        1    15304 20491
- X512        1    15304 20491
- X515        1    15304 20491
- X74         1    15305 20491
- X3.45.101.  1    15305 20491
- X714        1    15305 20491
- X55         1    15305 20492
- X3.22.45.   1    15305 20492
- X711        1    15306 20492
- X76         1    15306 20492
- X59         1    15306 20492
<none>             15304 20493
- X514        1    15306 20493
- X713        1    15306 20493
- X73         1    15307 20493
- X56         1    15307 20493
- X710        1    15307 20494
- X75         1    15308 20494
- X2.10.101.  1    15308 20495
- X57         1    15309 20495
- X4C         1    15310 20496
- X510        1    15310 20496
- X60         1    15312 20498
- X4F         1    15314 20500
- X712        1    15316 20503
- X1D         1    15319 20506
- X4D         1    15337 20524
- X61         1    15345 20532
- X65         1    15350 20536
- X62         1    15352 20538
- X64         1    15359 20545
- X4E         1    15362 20549
- X63         1    15366 20553
- X67         1    15370 20556
- X612        1    15381 20568
- X69         1    15382 20569
- X66         1    15387 20574
- X610        1    15389 20576
- X68         1    15393 20580
- X611        1    15406 20592
- X613        1    15451 20637

Step:  AIC=20490.67
N ~ X1D + X2.3.10. + X2.10.101. + X3.22.45. + X3.45.101. + X4C +
    X4D + X4E + X4F + X55 + X56 + X57 + X58 + X59 + X510 + X511 +
    X512 + X513 + X514 + X515 + X60 + X61 + X62 + X63 + X64 +
    X65 + X66 + X67 + X68 + X69 + X610 + X611 + X612 + X613 +
    X72 + X73 + X74 + X75 + X76 + X710 + X711 + X712 + X713 +
    X714 + offset(log(E))
and so on… if we jump directly to the end,
Step:  AIC=20469.18
N ~ X1D + X2.10.101. + X4C + X4D + X4E + X4F + X57 + X510 + X60 +
    X61 + X62 + X63 + X64 + X65 + X66 + X67 + X68 + X69 + X610 +
    X611 + X612 + X613 + X73 + X75 + X76 + X710 + X712 + X713 +
    offset(log(E))

             Df Deviance   AIC
<none>             15315 20469
- X76         1    15317 20470
- X713        1    15317 20470
- X73         1    15317 20470
- X57         1    15318 20470
- X75         1    15318 20471
- X710        1    15319 20471
- X510        1    15319 20471
- X4C         1    15322 20474
- X60         1    15322 20475
- X2.10.101.  1    15325 20478
- X4F         1    15325 20478
- X1D         1    15333 20485
- X712        1    15338 20490
- X61         1    15356 20508
- X4D         1    15359 20511
- X62         1    15363 20515
- X65         1    15363 20515
- X64         1    15371 20524
- X63         1    15378 20530
- X67         1    15383 20536
- X4E         1    15390 20543
- X612        1    15394 20547
- X69         1    15396 20548
- X66         1    15400 20553
- X610        1    15403 20555
- X68         1    15407 20559
- X611        1    15419 20572
- X613        1    15467 20619

Call:  glm(formula = N ~ X1D + X2.10.101. + X4C + X4D + X4E + X4F +
    X57 + X510 + X60 + X61 + X62 + X63 + X64 + X65 + X66 + X67 +
    X68 + X69 + X610 + X611 + X612 + X613 + X73 + X75 + X76 +
    X710 + X712 + X713 + offset(log(E)), family = "poisson",
    data = base2)

Coefficients:
(Intercept)         X1D  X2.10.101.         X4C         X4D
   -1.20880     0.16886    -0.13808     0.14888     0.37539
        X4E         X4F         X57        X510         X60
    0.50458     0.42768     0.08381     0.18722    -0.36509
        X61         X62         X63         X64         X65
   -0.93836    -1.03471    -1.18803    -1.10217    -0.95624
        X66         X67         X68         X69        X610
   -1.37463    -1.15391    -1.54213    -1.40188    -1.47217
       X611        X612        X613         X73         X75
   -1.68559    -1.40582    -1.49700     0.10874     0.15022
        X76        X710        X712        X713
   -0.15183     0.21948    -0.27400     0.19565

Degrees of Freedom: 49999 Total (i.e. Null);  49971 Residual
Null Deviance:     15810
Residual Deviance: 15310    AIC: 20470
While the third variable (the age of the main driver, in arbitrary classes) disappears rather quickly, some information about the fifth one (the power) is kept, since some of its levels seem to be informative about the claim frequency. Note, however, that when growing a tree, the third variable always came out as clearly significant, which supports the idea of performing variable selection at the level of the individual categories.
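That tree is not shown in the post; a minimal sketch of how such a check could be done (assuming the base1 data frame built above, and using rpart's Poisson splitting, where the response is the exposure and the claim count) might look like this:

library(rpart)
# Poisson regression tree: the response is cbind(exposure, count);
# cp is an illustrative complexity parameter, chosen here arbitrarily
arbre = rpart(cbind(E,N) ~ X1+X2+X3+X4+X5+X6+X7,
              data = base1, method = "poisson", cp = 1e-3)
printcp(arbre)   # lists the variables actually used in the splits
plot(arbre)
text(arbre)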