Classification trees are nice. They provide an interesting alternative to logistic regression. I started to include them in my courses maybe 7 or 8 years ago. The question is nice (how to get an optimal partition), the algorithmic procedure is nice (the trick of splitting according to one variable, and only one, at each node, and then moving forward, never backward), and the visual output is just perfect (with that tree structure). But the prediction can be rather poor. The performance of that algorithm can hardly compete with a (well-specified) logistic regression.
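To fix ideas, here is a minimal sketch of such a tree with the rpart package, on R's built-in iris data (which is not the dataset used in this post, just an illustration of the tree structure and the plot):

> library(rpart)
> tree = rpart(Species ~ ., data = iris, method = "class")   # one splitting variable per node
> plot(tree, margin = .1)                                     # the tree structure
> text(tree, use.n = TRUE)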
Then I discovered forests (see Leo Breiman’s page for a detailed presentation). Being a huge fan of bootstrap procedures, I loved the idea. In regression models, I usually mention the bootstrap to avoid asymptotic approximations: we bootstrap the rows (the observations). In the case of random forests, I have to admit that the idea of randomly selecting a set of possible variables at each node is very clever. The performance is much better, but interpretation is usually more difficult. And there is something that I love when there are a lot of covariates: the variable importance plot, which is something we can hardly get with econometric models (please let me know if I’m wrong).
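Again just a minimal sketch, with the randomForest package and the iris data (not the dataset used in this post): the rows are bootstrapped for each tree, mtry candidate variables are sampled at each node, and with importance = TRUE we get the variable importance plot mentioned above.

> library(randomForest)
> rf = randomForest(Species ~ ., data = iris, ntree = 500, mtry = 2, importance = TRUE)
> varImpPlot(rf)                                        # the variable importance plot
> bag = randomForest(Species ~ ., data = iris, mtry = 4) # mtry = all 4 covariates: plain bagging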
In order to illustrate this, let us generate a large dataset. Not necessarily huge, but large enough that we really have to select variables. Since it is more interesting if we have possibly correlated variables, we need a covariance matrix. There is a nice package in R to randomly generate covariance matrices.
> set.seed(1)
> n=500
> library(clusterGeneration)
> library(mnormt)
> S=genPositiveDefMat("eigen",dim=15)
> S=genPositiveDefMat("unifcorrmat",dim=15)
> X=rmnorm(n,varcov=S$Sigma)
> library(corrplot)
> corrplot(cor(X), order = "hclust")
See Ghosh & Henderson (2003) for more details on the methodology.
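To connect with the variable importance plot mentioned above, here is one more minimal sketch on these simulated covariates. The response y below is purely hypothetical, chosen only for illustration, and is not the model used in the rest of the post.

> y = as.factor(X[,1] + X[,2] + X[,3] + rnorm(n) > 0)   # hypothetical response, driven by the first three covariates only
> library(randomForest)
> fit = randomForest(x = X, y = y, importance = TRUE)
> varImpPlot(fit)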