# Automatically combining factor levels with trees

Last year, in a post, I discussed how to merge levels of factor variables using combinatorial techniques (it was for my STT5100 course, and trees are not in the syllabus), with an extension on trees at the end of the post.

Consider the following (simulated) dataset:

n=200
set.seed(1)
x1=runif(n)
x2=runif(n)
y=1+2*x1-x2+rnorm(n,0,.2)
LB=sample(LETTERS[1:10])
b=data.frame(y=y,x1=x1,
x2=cut(x2,breaks=
c(-1,.05,.1,.2,.35,.4,.55,.65,.8,.9,2),
labels=LB))
str(b)
'data.frame':	200 obs. of  3 variables:
 $ y : num  1.345 1.863 1.946 2.481 0.765 ...
 $ x1: num  0.266 0.372 0.573 0.908 0.202 ...
 $ x2: Factor w/ 10 levels "I","A","H","F",..: 4 4 6 4 3 6 7 3 4 8 ...

table(b$x2)[LETTERS[1:10]]

A  B  C  D  E  F  G  H  I  J
11 12 23 34 23 36 12 32  3 14

Just by looking at the data (see the previous post), we could easily get the feeling that ten levels were too many.

Following my post, Przemyslaw sent a comment suggesting the use of

library(factorMerger)

It is indeed a nice package (unless you have really big datasets with a lot of categories in your factor variables, as I experienced recently), and you can get great graphs:

MF = mergeFactors(response = b$y, factor = b$x2,
family = "gaussian")
plot(MF)

Here it suggests creating three categories. Recall that with Student $t$-tests (changing the reference category), we got the groups discussed in the post reproduced below.

Another interesting package, by Piro Polo, is

library(tree.bins)

To use it, we simply call the following function, and our dataset is transformed automatically: continuous variables remain unchanged, and categories of the categorical variables are (possibly) merged.

b.bins = tree.bins(data=b, y=y)
str(b.bins)
Classes ‘data.table’ and 'data.frame':	200 obs. of  3 variables:
 $ y : num  1.345 1.863 1.946 2.481 0.765 ...
 $ x1: num  0.266 0.372 0.573 0.908 0.202 ...
 $ x2: chr  "Group.4" "Group.4" "Group.4" "Group.4" ...
 - attr(*, ".internal.selfref")=<externalptr>

table(b.bins$x2)

Group.1 Group.2 Group.3 Group.4
23      35      26     116

Here we end up with four groups. To get the correspondence, use

tree.bins(data=b, y=y, return = "lkup.list")
[[1]]
x2 Categories
1   E    Group.1
2   G    Group.2
3   C    Group.2
4   B    Group.3
5   J    Group.3
6   I    Group.4
7   A    Group.4
8   H    Group.4
9   F    Group.4
10  D    Group.4

(we have a list with one element, a data frame, since there is only one factor variable). Cool, isn’t it? I miss Przemyslaw’s plot, but this is rather quick, and efficient.
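To apply that mapping to data, one can use the lookup table directly; here is a minimal base-R sketch (the object names lkup and b2 are mine, and the column names x2 and Categories are read off the output above):

lkup = tree.bins(data=b, y=y, return = "lkup.list")[[1]]
b2 = b
# map each original level of x2 to its merged group
b2$x2 = lkup$Categories[match(as.character(b$x2), lkup$x2)]
table(b2$x2)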

# Automatically combining factor levels in R

Each time we face real applications in an applied econometrics course, we have to deal with categorical variables. And the same question arises from students: how can we automatically combine factor levels? Is there a simple R function?

I did upload a few blog posts over the past years, but so far, nothing satisfying. Let me write down a few lines about what could be done. And if someone wants to write a nice R function, that would be awesome. To illustrate the idea, consider the following (simulated) dataset:

n=200
set.seed(1)
x1=runif(n)
x2=runif(n)
y=1+2*x1-x2+rnorm(n,0,.2)
LB=sample(LETTERS[1:10])
b=data.frame(y=y,x1=x1,
x2=cut(x2,breaks=
c(-1,.05,.1,.2,.35,.4,.55,.65,.8,.9,2),
labels=LB))
str(b)
'data.frame':	200 obs. of  3 variables:
 $ y : num  1.345 1.863 1.946 2.481 0.765 ...
 $ x1: num  0.266 0.372 0.573 0.908 0.202 ...
 $ x2: Factor w/ 10 levels "I","A","H","F",..: 4 4 6 4 3 6 7 3 4 8 ...

table(b$x2)[LETTERS[1:10]]

 A  B  C  D  E  F  G  H  I  J
11 12 23 34 23 36 12 32  3 14

There is one continuous dependent variable $y$, one continuous covariate $x_1$, and one categorical variable $x_2$, with ten levels here. The regression model is $y_i=\beta_0+\beta_1 x_{1,i}+\sum_j \beta_{2,j}\mathbf{1}(x_{2,i}=j)+\varepsilon_i$, so each level of $x_2$ simply shifts the intercept. We can plot the data using

plot(b$x1,y,col="white",xlim=c(0,1.1))
text(b$x1,y,as.character(b$x2),cex=.5)

The output of a linear regression yields the following predictions,

for(i in 1:10){
p=function(x) predict(lm(y~x1+x2,data=b),newdata=data.frame(x1=x,x2=LETTERS[i]))
u=seq(-1,1.065,by=.01)
v=Vectorize(p)(u)
lines(u,v)}

The slope for $x_1$ is the same for all levels; we simply add a different constant for each one. As we can see, some levels are very close, so it seems legitimate to combine them into a single category. Here is the output of the linear regression:

summary(lm(y~x1+x2,data=b))

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.843802   0.119655   7.052 3.23e-11 ***
x1           1.992878   0.053838  37.016  < 2e-16 ***
x2A          0.055500   0.131173   0.423   0.6727
x2H          0.009293   0.121626   0.076   0.9392
x2F         -0.177002   0.121020  -1.463   0.1452
x2B         -0.218152   0.130192  -1.676   0.0955 .
x2D         -0.206970   0.121294  -1.706   0.0896 .
x2G         -0.407417   0.129999  -3.134   0.0020 **
x2C         -0.526708   0.123690  -4.258 3.24e-05 ***
x2J         -0.664281   0.128126  -5.185 5.54e-07 ***
x2E         -0.816454   0.123625  -6.604 3.94e-10 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.2014 on 189 degrees of freedom
Multiple R-squared:  0.8995,	Adjusted R-squared:  0.8942
F-statistic: 169.1 on 10 and 189 DF,  p-value: < 2.2e-16

AIC(lm(y~x1+x2,data=b))
[1] -60.74443
BIC(lm(y~x1+x2,data=b))
[1] -21.16463

Here the reference category is “I”, and it looks like we could actually combine it with several others. One strategy would be to select all categories that do not seem significantly different, and to run a (multiple) test:

library(car)
linearHypothesis(lm(y~x1+x2,data=b), c("x2A = 0", "x2H = 0", "x2F = 0"))

Hypothesis:
x2A = 0
x2H = 0
x2F = 0

Model 1: restricted model
Model 2: y ~ x1 + x2

  Res.Df    RSS Df Sum of Sq      F Pr(>F)
1    192 8.4651
2    189 7.6654  3   0.79971 6.5726  3e-04 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

It seems that we can combine those four categories together. Here, we can see what’s going on when we change the reference category (actually, looping over all categories):

P=matrix(NA,nlevels(b$x2),nlevels(b$x2))
colnames(P)=rownames(P)=LETTERS[1:10]
plot(1:nlevels(b$x2),1:nlevels(b$x2),col="white",xlab="",ylab="",axes=F,xlim=c(0,10.5),ylim=c(0,10.5))
text(1:10,0,LETTERS[1:10])
text(0,1:10,LETTERS[1:10])
for(i in 1:nlevels(b$x2)){
b$x2=relevel(b$x2,LETTERS[i])
p=summary(lm(y~x1+x2,data=b))$coefficients[-(1:2),4]
names(p)=substr(names(p),3,3)
P[LETTERS[i],names(p)]=p
p=P[LETTERS[i],]
idx=which(p>.05)
points((1:10)[idx],rep(i,length(idx)),pch=1,cex=2)
idx=which(p>.1)
points((1:10)[idx],rep(i,length(idx)),pch=19,cex=2)}

We are glad to see that it is symmetric: if “H” should be combined with “I”, then “I” should also be combined with “H”.

Here, black points correspond to $p$-values above 10%, and white points to $p$-values between 5% and 10%. This graph is actually hard to read... And actually, it reminds us of Bertin (1967).
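Since the scatter of circles is hard to read, an alternative (my own suggestion, not in the original post) is to display the matrix of $p$-values built in the loop above as a heatmap, with base R's image():

# heatmap of the p-value matrix P (darker = larger p-value)
image(1:10, 1:10, P, axes=FALSE, xlab="reference", ylab="",
col=gray(seq(1, 0, length=20)))
axis(1, at=1:10, labels=rownames(P))
axis(2, at=1:10, labels=colnames(P))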

Here, we can predefine some ordering manually (we will see below how it might be automated):

LETTERSord=c("I","A","H","F","B","D","G","C","J","E")
P=matrix(NA,nlevels(b$x2),nlevels(b$x2))
colnames(P)=rownames(P)=LETTERSord
plot(1:nlevels(b$x2),1:nlevels(b$x2),col="white",xlab="",ylab="",axes=F,xlim=c(0,10.5),ylim=c(0,10.5))
ct=c(3,3,2,1,1)
abline(v=.5+c(0,cumsum(ct)),lty=2)
abline(h=.5+c(0,cumsum(ct)),lty=2)
text(1:10,0,LETTERSord)
text(0,1:10,LETTERSord)
for(i in 1:nlevels(b$x2)){
b$x2=relevel(b$x2,LETTERSord[i])
p=summary(lm(y~x1+x2,data=b))$coefficients[-(1:2),4]
names(p)=substr(names(p),3,3)
P[LETTERSord[i],names(p)]=p
p=P[LETTERSord[i],]
idx=which(p>.05)
points((1:10)[idx],rep(i,length(idx)),pch=1,cex=2)
idx=which(p>.1)
points((1:10)[idx],rep(i,length(idx)),pch=19,cex=2)
}

Here we get the following graph, and it looks like we have our combined categories.

Actually, it is possible to use another strategy. We start from some level, say “A”. Then, we merge it with all levels that are not significantly different. If “B” is not one of them, we use it as the new reference. And so on:

for(i in 1:nlevels(b$x2)){
if(LETTERS[i]%in%levels(b$x2)){
b$x2=relevel(b$x2,LETTERS[i])
p=summary(lm(y~x1+x2,data=b))$coefficients[-(1:2),4]
names(p)=substr(names(p),3,nchar(names(p)))
idx=which(p>.05)
mix=c(LETTERS[i],names(p)[idx])
b$x2=recode(b$x2,
paste("c('",paste(mix,collapse = "','"),"')='",paste(mix,collapse = "+"),"'",sep=""))
}}

The final categories are

table(b$x2)

A+I+H B+D+F   C+G     E     J
   46    82    35    23    14

with the following regression output:

summary(lm(y~x1+x2,data=b))

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.86407    0.03950  21.877  < 2e-16 ***
x1           1.99180    0.05323  37.417  < 2e-16 ***
x2B+D+F     -0.21517    0.03699  -5.817 2.44e-08 ***
x2C+G       -0.50545    0.04528 -11.164  < 2e-16 ***
x2E         -0.83617    0.05128 -16.305  < 2e-16 ***
x2J         -0.68398    0.06131 -11.156  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.2008 on 194 degrees of freedom
Multiple R-squared:  0.8975,	Adjusted R-squared:  0.8948
F-statistic: 339.6 on 5 and 194 DF,  p-value: < 2.2e-16

AIC(lm(y~x1+x2,data=b))
[1] -66.76939
BIC(lm(y~x1+x2,data=b))
[1] -43.68117

which is consistent with the groups we got before. But actually, if we change the order, we can get different combinations. For instance, going from “J” down to “A”, instead of “A” up to “J”, we obtain

for(i in nlevels(b$x2):1){
if(LETTERS[i]%in%levels(b$x2)){
b$x2=relevel(b$x2,LETTERS[i])
p=summary(lm(y~x1+x2,data=b))$coefficients[-(1:2),4]
names(p)=substr(names(p),3,nchar(names(p)))
idx=which(p>.05)
mix=c(LETTERS[i],names(p)[idx])
b$x2=recode(b$x2,
paste("c('",paste(mix,collapse = "','"),"')='",paste(mix,collapse = "+"),"'",sep=""))
}}
table(b$x2)

          E         G+C I+A+B+D+F+H           J
         23          35         128          14

with different information criteria here

AIC(lm(y~x1+x2,data=b))
[1] -36.61665
BIC(lm(y~x1+x2,data=b))
[1] -16.82675

I guess it would be necessary to randomize the order in which we go through the levels, and to keep the combination that optimizes some criterion (AIC, say); a small sketch of that idea follows.
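Here is a hedged sketch of that idea; this is my own code, not from the original post: the helper name merge_greedy, the 5% threshold and the 20 random orderings are arbitrary choices, and it assumes the dataset b has been re-created with its ten original levels.

library(car)    # for recode()
# greedy merge of the levels of x2, visiting them in the order 'ord'
merge_greedy=function(b,ord){
for(l in ord){
if(l%in%levels(b$x2)){
b$x2=relevel(b$x2,l)
p=summary(lm(y~x1+x2,data=b))$coefficients[-(1:2),4]
names(p)=substr(names(p),3,nchar(names(p)))
mix=c(l,names(p)[which(p>.05)])
if(length(mix)>1) b$x2=recode(b$x2,
paste("c('",paste(mix,collapse="','"),"')='",paste(mix,collapse="+"),"'",sep=""))
}}
return(b)}
# try 20 random orderings, keep the merge with the smallest AIC
set.seed(123)
best=NULL
for(s in 1:20){
bs=merge_greedy(b,sample(LETTERS[1:10]))    # b: the original, unmerged dataset
if(is.null(best)||AIC(lm(y~x1+x2,data=bs))<AIC(lm(y~x1+x2,data=best))) best=bs}
table(best$x2)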

Last, but not least, one can use regression trees (even if they are not, per se, in the syllabus of the course). The problem is that there is another explanatory variable that might interfere. So I would suggest (1) fitting a linear model $y_i=\beta_0+\beta_1 x_{1,i}+u_i$ and computing the residuals $\widehat{u}_i$, then (2) running a regression tree to explain $\widehat{u}_i$ with the categorical variable $x_2$ (I did explain how trees are built when the explanatory variable is categorical in a previous post):

library(rpart)
library(rpart.plot)
b$e=residuals(lm(y~x1,data=b))
arbre=rpart(e~x2,data=b)
prp(arbre,type=2,extra=1)

Observe that the leaves are the same groups as the ones we got before.

arbre
n= 200

node), split, n, deviance, yval
      * denotes terminal node

1) root 200 22.563500  7.771561e-18
  2) x2=G,C,J,E 72  4.441495 -3.232525e-01
    4) x2=J,E 37  1.553520 -4.578492e-01 *
    5) x2=G,C 35  1.509068 -1.809646e-01 *
  3) x2=I,A,H,F,B,D 128  6.366628  1.818295e-01
    6) x2=F,B,D 82  2.983381  1.048246e-01 *
    7) x2=I,A,H 46  2.030229  3.190993e-01 *

I guess that it should be possible to put all that in an R function, to suggest combinations of levels that might improve the regression.

# Visualizing overdispersion (with trees)

This week, we started to discuss overdispersion when modeling claims frequency. In my previous post, I discussed computations of empirical variances with different exposures. But I used only one factor to compute classes. Of course, it is possible to use many more factors, for instance Cartesian products of factors:

> X=as.factor(paste(sinistres$carburant,sinistres$zone,
+ cut(sinistres$ageconducteur,breaks=c(17,24,40,65,101))))
> E=sinistres$exposition
> Y=sinistres$nbre
> vm=vv=ve=rep(NA,length(levels(X)))
>   for(i in 1:length(levels(X))){
+  	   Ei=E[X==levels(X)[i]]
+  	   Yi=Y[X==levels(X)[i]]
+  	   ve[i]=sum(Ei)    # total exposure in the class
+   vm[i]=meani=weighted.mean(Yi/Ei,Ei)    # weighted mean
+   vv[i]=variancei=sum((Yi-meani*Ei)^2)/sum(Ei)    # variance
+  cat("Class ",levels(X)[i],"average =",meani," variance =",variancei,"\n")
+ }
Class D A (17,24]  average = 0.06274415  variance = 0.06174966
Class D A (24,40]  average = 0.07271905  variance = 0.07675049
Class D A (40,65]  average = 0.05432262  variance = 0.06556844
Class D A (65,101] average = 0.03026999  variance = 0.02960885
Class D B (17,24]  average = 0.2383109   variance = 0.2442396
Class D B (24,40]  average = 0.06662015  variance = 0.07121064
Class D B (40,65]  average = 0.05551854  variance = 0.05543831
Class D B (65,101] average = 0.0556386   variance = 0.0540786
Class D C (17,24]  average = 0.1524552   variance = 0.1592623
Class D C (24,40]  average = 0.0795852   variance = 0.09091435
Class D C (40,65]  average = 0.07554481  variance = 0.08263404
Class D C (65,101] average = 0.06936605  variance = 0.06684982
Class D D (17,24]  average = 0.1584052   variance = 0.1552583
Class D D (24,40]  average = 0.1079038   variance = 0.121747
Class D D (40,65]  average = 0.06989518  variance = 0.07780811
Class D D (65,101] average = 0.0470501   variance = 0.04575461
Class D E (17,24]  average = 0.2007164   variance = 0.2647663
Class D E (24,40]  average = 0.1121569   variance = 0.1172205
Class D E (40,65]  average = 0.106563    variance = 0.1068348
Class D E (65,101] average = 0.1572701   variance = 0.2126338
Class D F (17,24]  average = 0.2314815   variance = 0.1616788
Class D F (24,40]  average = 0.1690485   variance = 0.1443094
Class D F (40,65]  average = 0.08496827  variance = 0.07914423
Class D F (65,101] average = 0.1547769   variance = 0.1442915
Class E A (17,24]  average = 0.1275345   variance = 0.1171678
Class E A (24,40]  average = 0.04523504  variance = 0.04741449
Class E A (40,65]  average = 0.05402834  variance = 0.05427582
Class E A (65,101] average = 0.04176129  variance = 0.04539265
Class E B (17,24]  average = 0.1114712   variance = 0.1059153
Class E B (24,40]  average = 0.04211314  variance = 0.04068724
Class E B (40,65]  average = 0.04987117  variance = 0.05096601
Class E B (65,101] average = 0.03123003  variance = 0.03041192
Class E C (17,24]  average = 0.1256302   variance = 0.1310862
Class E C (24,40]  average = 0.05118006  variance = 0.05122782
Class E C (40,65]  average = 0.05394576  variance = 0.05594004
Class E C (65,101] average = 0.04570239  variance = 0.04422991
Class E D (17,24]  average = 0.1777142   variance = 0.1917696
Class E D (24,40]  average = 0.06293331  variance = 0.06738658
Class E D (40,65]  average = 0.08532688  variance = 0.2378571
Class E D (65,101] average = 0.05442916  variance = 0.05724951
Class E E (17,24]  average = 0.1826558   variance = 0.2085505
Class E E (24,40]  average = 0.07804062  variance = 0.09637156
Class E E (40,65]  average = 0.08191469  variance = 0.08791804
Class E E (65,101] average = 0.1017367   variance = 0.1141004
Class E F (17,24]  average = 0           variance = 0
Class E F (24,40]  average = 0.07731177  variance = 0.07415932
Class E F (40,65]  average = 0.1081142   variance = 0.1074324
Class E F (65,101] average = 0.09071118  variance = 0.1170159
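For the record, the two quantities computed in the loop above are, for each class $c$ with claim counts $Y_i$ and exposures $E_i$, the exposure-weighted empirical mean and variance
$$\widehat{m}_c=\frac{\sum_{i\in c}Y_i}{\sum_{i\in c}E_i}\qquad\text{and}\qquad\widehat{v}_c=\frac{\sum_{i\in c}(Y_i-\widehat{m}_c\,E_i)^2}{\sum_{i\in c}E_i},$$
and the two should be (roughly) equal if the number of claims is Poisson distributed.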

Again, one can plot the variance against the average,

> plot(vm,vv,cex=sqrt(ve),col="grey",pch=19,
+ xlab="Empirical average",ylab="Empirical variance")
> points(vm,vv,cex=sqrt(ve))
> abline(a=0,b=1,lty=2)

An alternative is to use a tree. The tree can be obtained from another variable (whether or not the insured had a claim during the period considered), but it should be rather close to the one we would like to model (the number of claims over the period). Here, I did use the whole database (with more than 600,000 lines).

> library(tree)
> T=tree((nombre>0)~as.factor(zone)+as.factor(puissance)+
+ as.factor(marque)+as.factor(carburant)+as.factor(region)+
+ agevehicule+ageconducteur,data=baseFREQ,
+ split =  "gini",minsize =25000)

The tree is the following

> plot(T)
> text(T)

Now, each leaf of the tree defines a class, and it is possible to use those classes, which are supposed to be homogeneous.

> X=as.factor(T$where)
> E=sinistres$exposition
> Y=sinistres$nbre
> vm=vv=ve=rep(NA,length(levels(X)))
>   for(i in 1:length(levels(X))){
+  	   Ei=E[X==levels(X)[i]]
+  	   Yi=Y[X==levels(X)[i]]
+  	   ve[i]=sum(Ei)    # total exposure in the class
+   vm[i]=meani=weighted.mean(Yi/Ei,Ei)    # weighted mean
+   vv[i]=variancei=sum((Yi-meani*Ei)^2)/sum(Ei)    # variance
+  cat("Class ",levels(X)[i],"average =",meani," variance =",variancei,"\n")
+  }
Class  6 average =   0.04010406  variance = 0.04424163
Class  8 average =   0.05191127  variance = 0.05948133
Class  9 average =   0.07442635  variance = 0.08694552
Class  10 average =  0.4143646   variance = 0.4494002
Class  11 average =  0.1917445   variance = 0.1744355
Class  15 average =  0.04754595  variance = 0.05389675
Class  20 average =  0.08129577  variance = 0.0906322
Class  22 average =  0.05813419  variance = 0.07089811
Class  23 average =  0.06123807  variance = 0.07010473
Class  24 average =  0.06707301  variance = 0.07270995
Class  25 average =  0.3164557   variance = 0.2026906
Class  26 average =  0.08705041  variance = 0.108456
Class  27 average =  0.06705214  variance = 0.07174673
Class  30 average =  0.05292652  variance = 0.06127301
Class  31 average =  0.07195285  variance = 0.08620593
Class  32 average =  0.08133722  variance = 0.08960552
Class  34 average =  0.1831559   variance = 0.2010849
Class  39 average =  0.06173885  variance = 0.06573939
Class  41 average =  0.07089419  variance = 0.07102932
Class  44 average =  0.09426152  variance = 0.1032255
Class  47 average =  0.03641669  variance = 0.03869702
Class  49 average =  0.0506601   variance = 0.05089276
Class  50 average =  0.06373107  variance = 0.06536792
Class  51 average =  0.06762947  variance = 0.06926191
Class  56 average =  0.06771764  variance = 0.07122379
Class  57 average =  0.04949142  variance = 0.05086885
Class  58 average =  0.2459016   variance = 0.2451116
Class  59 average =  0.05996851  variance = 0.0615773
Class  61 average =  0.07458053  variance = 0.0818608
Class  63 average =  0.06203737  variance = 0.06249892
Class  64 average =  0.07321618  variance = 0.07603106
Class  66 average =  0.07332127  variance = 0.07262425
Class  68 average =  0.07478147  variance = 0.07884597
Class  70 average =  0.06566728  variance = 0.06749411
Class  71 average =  0.09159605  variance = 0.09434413
Class  75 average =  0.03228927  variance = 0.03403198
Class  76 average =  0.04630848  variance = 0.04861813
Class  78 average =  0.05342351  variance = 0.05626653
Class  79 average =  0.05778622  variance = 0.05987139
Class  80 average =  0.0374993   variance = 0.0385351
Class  83 average =  0.06721729  variance = 0.07295168
Class  86 average =  0.09888492  variance = 0.1131409
Class  87 average =  0.1019186   variance = 0.2051122
Class  88 average =  0.05281703  variance = 0.0635244
Class  91 average =  0.08332136  variance = 0.09067632
Class  96 average =  0.07682093  variance = 0.08144446
Class  97 average =  0.0792268   variance = 0.08092019
Class  99 average =  0.1019089   variance = 0.1072126
Class  100 average = 0.1018262   variance = 0.1081117
Class  101 average = 0.1106647   variance = 0.1151819
Class  103 average = 0.08147644  variance = 0.08411685
Class  104 average = 0.06456508  variance = 0.06801061
Class  107 average = 0.1197225   variance = 0.1250056
Class  108 average = 0.0924619   variance = 0.09845582
Class  109 average = 0.1198932   variance = 0.1209162

Here, when plotting the empirical variance (per leaf) against the empirical average of claims, we get
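Presumably the same plotting code as above is used here, applied to the new vm, vv and ve (my assumption, since the code is not reproduced at this point):

> plot(vm,vv,cex=sqrt(ve),col="grey",pch=19,
+ xlab="Empirical average",ylab="Empirical variance")
> points(vm,vv,cex=sqrt(ve))
> abline(a=0,b=1,lty=2)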

Here, we can identify classes where some heterogeneity remains.
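A rough way to flag them programmatically (my own sketch; the 20% threshold is arbitrary) is to compare each class's empirical variance to its empirical mean, since the two should match under a Poisson model:

# ratio of empirical variance to empirical mean, per class
ratio = vv/vm
# classes clearly above the Poisson benchmark (variance = mean)
levels(X)[which(ratio > 1.2)]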