Tag Archives: tree

Classification Trees

I will run a training session on Monday the 28th, from 2:00 pm to 4:00 pm, in room N-6320 at UQAM, as an introduction to classification trees. This training is organized as part of the seminars on quantitative and qualitative analysis methods that have been held regularly for a little over a month, run by the Collectif pour le développement et les applications en mesure et évaluation (Cdame). The slides are available in pdf (there are a few animations, which only work with Acrobat).

The dataset used throughout the talks is the following
> MYOCARDE=read.table("http://freakonometrics.free.fr/saporta.csv",head=TRUE,sep=";")

ROC curves and classification

To get back to a question asked after the last course (still on non-life insurance), I will spend some time discussing ROC curve construction and interpretation. Consider the dataset we used last week,

> db = read.table("http://freakonometrics.free.fr/db.txt",header=TRUE,sep=";")
> attach(db)

The first step is to get a model. For instance, a logistic regression, where some factors were merged together,

> X3bis=rep(NA,length(X3))
> X3bis[X3%in%c("A","C","D")]="ACD"
> X3bis[X3%in%c("B","E")]="BE"
> db$X3bis=as.factor(X3bis)
> reg=glm(Y~X1+X2+X3bis,family=binomial,data=db)

From this model, we can predict a probability, not a 0/1 variable,

> S=predict(reg,type="response")

Let $\widehat{S}$ denote this score (actually, we can use the score or the predicted probability; it will not change the construction of our ROC curve). What if we really want to predict a 0/1 variable, as we usually do in decision theory? The idea is to consider a threshold $s$, so that

  • if $\widehat{S}_i > s$, then $\widehat{Y}_i$ will be 1, or “positive” (using a standard terminology)
  • if $\widehat{S}_i \le s$, then $\widehat{Y}_i$ will be 0, or “negative”

Then we derive a contingency table, or a confusion matrix

                                         observed value $Y$
                                      “positive”      “negative”
predicted value        “positive”        TP               FP
$\widehat{Y}$          “negative”        FN               TN

where TP are the so-called true positives, TN the true negatives, FP the false positives (type I errors) and FN the false negatives (type II errors). We can get that contingency table for a given threshold $s$

> roc.curve=function(s,print=FALSE){
+ Ps=(S>s)*1                         # predicted class, using the global score S
+ FP=sum((Ps==1)*(Y==0))/sum(Y==0)   # false positive rate
+ TP=sum((Ps==1)*(Y==1))/sum(Y==1)   # true positive rate
+ if(print==TRUE){
+ print(table(Observed=Y,Predicted=Ps))
+ }
+ vect=c(FP,TP)
+ names(vect)=c("FPR","TPR")
+ return(vect)
+ }
> threshold = 0.5
> roc.curve(threshold,print=TRUE)
        Predicted
Observed   0   1
       0   5 231
       1  19 745
      FPR       TPR 
0.9788136 0.9751309

Here, we also compute the false positive rates, and the true positive rates,

  • TPR = TP / P = TP / (TP + FN), also called sensitivity, is the true positive rate: the probability of being predicted positive, given that someone is positive
  • FPR = FP / N = FP / (FP + TN) is the false positive rate: the probability of being predicted positive, given that someone is negative
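With the confusion matrix above (using the 50% threshold), FPR = 231/(5+231) ≈ 0.9788 and TPR = 745/(19+745) ≈ 0.9751, which are precisely the two values returned by roc.curve().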

The ROC curve is then obtained using several values for the threshold. For convenience, define

> ROC.curve=Vectorize(roc.curve)

First, we can plot $(\widehat{S}_i,Y_i)$ (a standard predicted versus observed graph), and visualize true and false positives and negatives, using simple colors

> I=(((S>threshold)&(Y==0))|((S<=threshold)&(Y==1)))
> plot(S,Y,col=c("red","blue")[I+1],pch=19,cex=.7,xlab="",ylab="")
> abline(v=threshold,col="gray")

And for the ROC curve, simply use

> M.ROC=ROC.curve(seq(0,1,by=.01))
> plot(M.ROC[1,],M.ROC[2,],col="grey",lwd=2,type="l")

This is the ROC curve. Now, to see why it can be interesting, we need a second model. Consider for instance a classification tree

> library(tree)
> ctr <- tree(Y~X1+X2+X3bis,data=db)
> plot(ctr)
> text(ctr)

To plot the ROC curve, we just need to use the prediction obtained using this second model,

> S=predict(ctr)

All the code described above can be used. Again, we can plot $(\widehat{S}_i,Y_i)$ (observe that we have 5 possible values for $\widehat{S}_i$, which makes sense since we do have 5 leaves on our tree). Then, we can plot the ROC curve,
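Since roc.curve() reads the score from the global object S, the tree-based curve (M.ROC.tree in the comparison below) is obtained exactly as before, for instance

> M.ROC.tree=ROC.curve(seq(0,1,by=.01))
> plot(M.ROC.tree[1,],M.ROC.tree[2,],col="grey",lwd=2,type="l")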

An interesting idea can be to plot the two ROC curves on the same graph, in order to compare the two models

> plot(M.ROC[1,],M.ROC[2,],type="l")
> lines(M.ROC.tree[1,],M.ROC.tree[2,],type="l",col="grey",lwd=2)

The most difficult part is to get a proper interpretation. The tree is not predicting well in the lower part of the curve, which concerns people with a very high predicted probability. If our interest is more in those with a probability lower than 90%, then we have to admit that the tree is doing a good job, since its ROC curve is always higher, compared with the logistic regression.

Advanced methods in trees

I will give a talk tomorrow morning at the Mathematical Finance Days, organized at HEC Montréal on Monday and Tuesday, on Advanced methods in trees, with (as mentioned in the subtitle of the first slide) some thoughts on teaching mathematical finance. It is mainly a survey of advanced tools, based on the idea expressed in Price (1996),

The paper that showed that European option pricing could be put on a rational mathematical basis was Black and Scholes published in 1973. It was so revolutionary that the authors had to submit it to a number of journals before it was accepted. Although there are now numerous approaches to the result, they mostly require specialized methods, including Ito calculus and partial differential  equations, and perhaps Girsanov theory and Feynman-Kac methods. But it is the binomial method due initially to Sharpe and substantially extended by Cox, Ross, and Rubinstein that made the theory of option pricing accessible to everyone with limited mathematical background.  Even though it requires only routine algebraic manipulations, the method is still able to elucidate many of the ideas behind the full theory. Furthermore, all the surprising results mentioned in the opening can be located in this approach. For these reasons it is usually the first method presented in text books and finance courses; we shall follow this trend and step through it. The binomial method is, however, much more than a pedagogical breakthrough, since it allows for the development of numerical approximation methods for a wide range of options for which there are no  known analytic solutions.

Some recent results, obtained in work in progress with colleagues in combinatorial analysis, will also be mentioned at the end of the talk (slides can be downloaded in pdf format, with animations)

I will also be chairing the Numerical Methods session.

Visualizing overdispersion (with trees)

This week, we started to discuss overdispersion when modeling claims frequency. In my previous post, I discussed computations of empirical variances with different exposures. But I used only one factor to build the classes. Of course, it is possible to use many more factors. For instance, using Cartesian products of factors,

> X=as.factor(paste(sinistres$carburant,sinistres$zone,
+ cut(sinistres$ageconducteur,breaks=c(17,24,40,65,101))))
> E=sinistres$exposition
> Y=sinistres$nbre
> vm=vv=ve=rep(NA,length(levels(X)))
>   for(i in 1:length(levels(X))){
+   Ei=E[X==levels(X)[i]]
+   Yi=Y[X==levels(X)[i]]
+   ve[i]=sum(Ei)                                   # total exposure of the class
+   vm[i]=meani=weighted.mean(Yi/Ei,Ei)             # empirical mean
+   vv[i]=variancei=sum((Yi-meani*Ei)^2)/sum(Ei)    # empirical variance
+  cat("Class ",levels(X)[i],"average =",meani," variance =",variancei,"\n")
+ }
Class D A (17,24]  average = 0.06274415  variance = 0.06174966 
Class D A (24,40]  average = 0.07271905  variance = 0.07675049 
Class D A (40,65]  average = 0.05432262  variance = 0.06556844 
Class D A (65,101] average = 0.03026999  variance = 0.02960885 
Class D B (17,24]  average = 0.2383109   variance = 0.2442396 
Class D B (24,40]  average = 0.06662015  variance = 0.07121064 
Class D B (40,65]  average = 0.05551854  variance = 0.05543831 
Class D B (65,101] average = 0.0556386   variance = 0.0540786 
Class D C (17,24]  average = 0.1524552   variance = 0.1592623 
Class D C (24,40]  average = 0.0795852   variance = 0.09091435 
Class D C (40,65]  average = 0.07554481  variance = 0.08263404 
Class D C (65,101] average = 0.06936605  variance = 0.06684982 
Class D D (17,24]  average = 0.1584052   variance = 0.1552583 
Class D D (24,40]  average = 0.1079038   variance = 0.121747 
Class D D (40,65]  average = 0.06989518  variance = 0.07780811 
Class D D (65,101] average = 0.0470501   variance = 0.04575461 
Class D E (17,24]  average = 0.2007164   variance = 0.2647663 
Class D E (24,40]  average = 0.1121569   variance = 0.1172205 
Class D E (40,65]  average = 0.106563    variance = 0.1068348 
Class D E (65,101] average = 0.1572701   variance = 0.2126338 
Class D F (17,24]  average = 0.2314815   variance = 0.1616788 
Class D F (24,40]  average = 0.1690485   variance = 0.1443094 
Class D F (40,65]  average = 0.08496827  variance = 0.07914423 
Class D F (65,101] average = 0.1547769   variance = 0.1442915 
Class E A (17,24]  average = 0.1275345   variance = 0.1171678 
Class E A (24,40]  average = 0.04523504  variance = 0.04741449 
Class E A (40,65]  average = 0.05402834  variance = 0.05427582 
Class E A (65,101] average = 0.04176129  variance = 0.04539265 
Class E B (17,24]  average = 0.1114712   variance = 0.1059153 
Class E B (24,40]  average = 0.04211314  variance = 0.04068724 
Class E B (40,65]  average = 0.04987117  variance = 0.05096601 
Class E B (65,101] average = 0.03123003  variance = 0.03041192 
Class E C (17,24]  average = 0.1256302   variance = 0.1310862 
Class E C (24,40]  average = 0.05118006  variance = 0.05122782 
Class E C (40,65]  average = 0.05394576  variance = 0.05594004 
Class E C (65,101] average = 0.04570239  variance = 0.04422991 
Class E D (17,24]  average = 0.1777142   variance = 0.1917696 
Class E D (24,40]  average = 0.06293331  variance = 0.06738658 
Class E D (40,65]  average = 0.08532688  variance = 0.2378571 
Class E D (65,101] average = 0.05442916  variance = 0.05724951 
Class E E (17,24]  average = 0.1826558   variance = 0.2085505 
Class E E (24,40]  average = 0.07804062  variance = 0.09637156 
Class E E (40,65]  average = 0.08191469  variance = 0.08791804 
Class E E (65,101] average = 0.1017367   variance = 0.1141004 
Class E F (17,24]  average = 0           variance = 0 
Class E F (24,40]  average = 0.07731177  variance = 0.07415932 
Class E F (40,65]  average = 0.1081142   variance = 0.1074324 
Class E F (65,101] average = 0.09071118  variance = 0.1170159

Again, one can plot the variance against the average,

> plot(vm,vv,cex=sqrt(ve),col="grey",pch=19,
+ xlab="Empirical average",ylab="Empirical variance")
> points(vm,vv,cex=sqrt(ve))
> abline(a=0,b=1,lty=2)
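Optionally, one can also compare the dashed 45° line with the exposure-weighted regression line through the origin, whose slope gives a rough overdispersion factor (a quick sketch),

> fit=lm(vv~0+vm,weights=ve)
> abline(a=0,b=coef(fit),col="red")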

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-13.58.26.png

An alternative is to use a tree. The tree can be built on another variable (whether or not the insured had a claim during the period considered), but it should be rather close to the one we actually want to model (the number of claims over that period). Here, I used the whole database (with more than 600,000 lines)

> library(tree)
> T=tree((nombre>0)~as.factor(zone)+as.factor(puissance)+
+ as.factor(marque)+as.factor(carburant)+as.factor(region)+
+ agevehicule+ageconducteur,data=baseFREQ,
+ split =  "gini",minsize =25000)

The tree is the following

> plot(T)
> text(T)

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-13.55.13.png

Now, each leaf of the tree defines a class, and it is possible to use those classes, which are supposed to be homogeneous.

> X=as.factor(T$where)
> E=sinistres$exposition
> Y=sinistres$nbre
> vm=vv=ve=rep(NA,length(levels(X)))
>   for(i in 1:length(levels(X))){
+   Ei=E[X==levels(X)[i]]
+   Yi=Y[X==levels(X)[i]]
+   ve[i]=sum(Ei)                                   # total exposure of the class
+   vm[i]=meani=weighted.mean(Yi/Ei,Ei)             # empirical mean
+   vv[i]=variancei=sum((Yi-meani*Ei)^2)/sum(Ei)    # empirical variance
+  cat("Class ",levels(X)[i],"average =",meani," variance =",variancei,"\n")
+  }
Class  6 average =   0.04010406  variance = 0.04424163 
Class  8 average =   0.05191127  variance = 0.05948133 
Class  9 average =   0.07442635  variance = 0.08694552 
Class  10 average =  0.4143646   variance = 0.4494002 
Class  11 average =  0.1917445   variance = 0.1744355 
Class  15 average =  0.04754595  variance = 0.05389675 
Class  20 average =  0.08129577  variance = 0.0906322 
Class  22 average =  0.05813419  variance = 0.07089811 
Class  23 average =  0.06123807  variance = 0.07010473 
Class  24 average =  0.06707301  variance = 0.07270995 
Class  25 average =  0.3164557   variance = 0.2026906 
Class  26 average =  0.08705041  variance = 0.108456 
Class  27 average =  0.06705214  variance = 0.07174673 
Class  30 average =  0.05292652  variance = 0.06127301 
Class  31 average =  0.07195285  variance = 0.08620593 
Class  32 average =  0.08133722  variance = 0.08960552 
Class  34 average =  0.1831559   variance = 0.2010849 
Class  39 average =  0.06173885  variance = 0.06573939 
Class  41 average =  0.07089419  variance = 0.07102932 
Class  44 average =  0.09426152  variance = 0.1032255 
Class  47 average =  0.03641669  variance = 0.03869702 
Class  49 average =  0.0506601   variance = 0.05089276 
Class  50 average =  0.06373107  variance = 0.06536792 
Class  51 average =  0.06762947  variance = 0.06926191 
Class  56 average =  0.06771764  variance = 0.07122379 
Class  57 average =  0.04949142  variance = 0.05086885 
Class  58 average =  0.2459016   variance = 0.2451116 
Class  59 average =  0.05996851  variance = 0.0615773 
Class  61 average =  0.07458053  variance = 0.0818608 
Class  63 average =  0.06203737  variance = 0.06249892 
Class  64 average =  0.07321618  variance = 0.07603106 
Class  66 average =  0.07332127  variance = 0.07262425 
Class  68 average =  0.07478147  variance = 0.07884597 
Class  70 average =  0.06566728  variance = 0.06749411 
Class  71 average =  0.09159605  variance = 0.09434413 
Class  75 average =  0.03228927  variance = 0.03403198 
Class  76 average =  0.04630848  variance = 0.04861813 
Class  78 average =  0.05342351  variance = 0.05626653 
Class  79 average =  0.05778622  variance = 0.05987139 
Class  80 average =  0.0374993   variance = 0.0385351 
Class  83 average =  0.06721729  variance = 0.07295168 
Class  86 average =  0.09888492  variance = 0.1131409 
Class  87 average =  0.1019186   variance = 0.2051122 
Class  88 average =  0.05281703  variance = 0.0635244 
Class  91 average =  0.08332136  variance = 0.09067632 
Class  96 average =  0.07682093  variance = 0.08144446 
Class  97 average =  0.0792268   variance = 0.08092019 
Class  99 average =  0.1019089   variance = 0.1072126 
Class  100 average = 0.1018262   variance = 0.1081117 
Class  101 average = 0.1106647   variance = 0.1151819 
Class  103 average = 0.08147644  variance = 0.08411685 
Class  104 average = 0.06456508  variance = 0.06801061 
Class  107 average = 0.1197225   variance = 0.1250056 
Class  108 average = 0.0924619   variance = 0.09845582 
Class  109 average = 0.1198932   variance = 0.1209162

Here, when plotting the empirical variance (per leaf) against the empirical average of claims, we get

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-14.05.08.png

Here, we can identify classes where some heterogeneity remains.
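For instance, with an arbitrary rule of thumb (say, the empirical variance exceeding the empirical average by more than 20%), those leaves can be listed with

> levels(X)[which(vv>1.2*vm)]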

Regression tree using Gini’s index

In order to illustrate the construction of a regression tree (using the CART methodology), consider the following simulated dataset,

> set.seed(1)
> n=200
> X1=runif(n)
> X2=runif(n)
> P=.8*(X1<.3)*(X2<.5)+
+   .2*(X1<.3)*(X2>.5)+
+   .8*(X1>.3)*(X1<.85)*(X2<.3)+
+   .2*(X1>.3)*(X1<.85)*(X2>.3)+
+   .8*(X1>.85)*(X2<.7)+
+   .2*(X1>.85)*(X2>.7) 
> Y=rbinom(n,size=1,P)  
> B=data.frame(Y,X1,X2)

with one dichotomous variable (the variable of interest, $Y$), and two continuous ones (the explanatory variables $X_1$ and $X_2$).

> tail(B)
    Y        X1        X2
195 0 0.2832325 0.1548510
196 0 0.5905732 0.3483021
197 0 0.1103606 0.6598210
198 0 0.8405070 0.3117724
199 0 0.3179637 0.3515734
200 1 0.7828513 0.1478457

The theoretical partition is the following

Here, the sample can be plotted below (be careful, the first variate is on the y-axis above, and on the x-axis below) with blue dots when $Y$ equals one, and red dots when $Y$ is null,

> plot(X1,X2,col="white")
> points(X1[Y=="1"],X2[Y=="1"],col="blue",pch=19)
> points(X1[Y=="0"],X2[Y=="0"],col="red",pch=19)

In order to construct the tree, we need a partition criterion. The most standard one is probably Gini’s index, which can be written, when the observations are split in two classes, denoted here $\{A,B\}$,

$$\text{gini}(Y|X)=-\sum_{x\in\{A,B\}}\frac{n_x}{n}\sum_{y\in\{0,1\}}\frac{n_{x,y}}{n_x}\left(1-\frac{n_{x,y}}{n_x}\right)$$

or, when the observations are split in three classes, denoted $\{A,B,C\}$,

$$\text{gini}(Y|X)=-\sum_{x\in\{A,B,C\}}\frac{n_x}{n}\sum_{y\in\{0,1\}}\frac{n_{x,y}}{n_x}\left(1-\frac{n_{x,y}}{n_x}\right)$$

etc. Here, the $n_{x,y}$’s are just counts of observations that belong to partition $x$ and for which $Y$ takes value $y$. But it is possible to consider other criteria, such as the chi-square distance,

$$\chi^2=\sum_{x}\sum_{y\in\{0,1\}}\frac{\left[n_{x,y}-n_{x,y}^\perp\right]^2}{n_{x,y}^\perp}$$

where, classically,

$$n_{x,y}^\perp=\frac{n_x\,n_y}{n}$$

when we consider two classes (one knot) or, in the case of three classes (two knots), the same sums taken over $x\in\{A,B,C\}$.

Here again, the idea is to maximize that distance: we want to discriminate, so we want the subsamples to be as far from independence as possible. To compute Gini’s index, consider

> GINI=function(y,i){
+ T=table(y,i)                             # contingency table of y (0/1) versus the classes i
+ nx=apply(T,2,sum)                        # size of each class
+ pxy=T/matrix(rep(nx,each=2),2,ncol(T))   # conditional frequencies of y within each class
+ vxy=pxy*(1-pxy)                          # impurity contributions p(1-p)
+ zx=apply(vxy,2,sum)                      # impurity of each class
+ n=sum(T)
+ -sum(nx/n*zx)                            # minus the weighted average impurity
+ }
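As a quick sanity check of that function, consider a single candidate split of X2, at (say) 0.5,

> GINI(Y,(X2<=.5)*1)

(the value 0.5 is arbitrary here; the loop below scans all candidate splitting points).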

We simply construct the contingency table, and then compute the quantity given above. Assume, first, that there is only one explanatory variable. We split the sample in two, using all possible splitting values $s$, i.e. classes $\{x_i\le s\}$ and $\{x_i>s\}$.

Then, we compute Gini’s index for all those values. The knot is the value that maximizes Gini’s index. Once we have our first knot, we keep it, and we reiterate by seeking the best second choice: given the first knot, consider the value that splits the sample in three and gives the highest Gini’s index. Thus, we consider either the following partition

or this one

That is, we cut either below or above the previous knot. And we iterate. The code can be something like this,

> X=X2
> u=(sort(X)[2:n]+sort(X)[1:(n-1)])/2
> knot=NULL
> for(s in 1:4){
+ vgini=rep(NA,length(u))
+ for(i in 1:length(u)){
+ kn=c(knot,u[i])
+ F=function(x){sum(x<=kn)}
+ I=Vectorize(F)(X)
+ vgini[i]=GINI(Y,I)
+ }
+ plot(u,vgini)
+ k=which.max(vgini)
+ cat("knot",k,u[k],"\n")
+ knot=c(knot,u[k])
+ u=u[-k]
+ }
knot 69 0.3025479 
knot 133 0.5846202 
knot 72 0.3148172 
knot 111 0.4811517

At the first step, the value of Gini’s index was the following,

which was maximal around 0.3. Then, this value is considered as fixed, and we try to construct a partition in three parts (splitting either below or above 0.3). We get the following plot for Gini’s index (as a function of this second knot)

which is maximum when we split the sample around 0.6 (which then becomes our second knot). Etc. Now, let us compare our code with the standard R function,

> tree(Y~X2,method="gini")
node), split, n, deviance, yval
      * denotes terminal node

 1) root 200 49.8800 0.4750  
   2) X2 < 0.302548 69 12.8100 0.7536 *
   3) X2 > 0.302548 131 28.8900 0.3282  
     6) X2 < 0.58462 65 16.1500 0.4615  
      12) X2 < 0.324591 7  0.8571 0.1429 *
      13) X2 > 0.324591 58 14.5000 0.5000 *
     7) X2 > 0.58462 66 10.4400 0.1970 *

We do obtain similar knots: the first one is at 0.302, and the second one at 0.584. So, constructing a tree is not that difficult…

Now, what if we consider our two explanatory variables? The story remains the same, except that the partition is now a bit more complex to write. To find the first knot, we consider all values on the two components, and again, keep the one that maximizes Gini’s index,

> n=nrow(B)
> u1=(sort(X1)[2:n]+sort(X1)[1:(n-1)])/2
> u2=(sort(X2)[2:n]+sort(X2)[1:(n-1)])/2
> gini=matrix(NA,nrow(B)-1,2)
> for(i in 1:length(u1)){
+ I=(X1<u1[i])
+ gini[i,1]=GINI(Y,I)
+ I=(X2<u2[i])
+ gini[i,2]=GINI(Y,I)
+ }
> mg=max(gini)
> i=1+sum(mg==max(gini[,2]))
> par(mfrow = c(1, 2))
> plot(u1,gini[,1],ylim=range(gini),col="green",type="b",xlab="X1",ylab="Gini index")
> abline(h=mg,lty=2,col="red")
> if(i==1){points(u1[which.max(gini[,1])],mg,pch=19,col="red")
+          segments(u1[which.max(gini[,1])],mg,u1[which.max(gini[,1])],-100000)}
> plot(u2,gini[,2],ylim=range(gini),col="green",type="b",xlab="X2",ylab="Gini index")
> abline(h=mg,lty=2,col="red")
> if(i==2){points(u2[which.max(gini[,2])],mg,pch=19,col="red")
+          segments(u2[which.max(gini[,2])],mg,u2[which.max(gini[,2])],-100000)}
> u2[which.max(gini[,2])]
[1] 0.3025479

The graphs are the following: either we split on the first component (and we obtain the partition on the right, below),

or we split on the second one (and we get the following partition),

Here, it is optimal to split on the second variate, first. And actually, we get back to the one-dimensional case discussed previously: as expected, it is optimal to split around 0.3. This is confirmed with the code below,

> library(tree)
> arbre=tree(Y~X1+X2,data=B,method="gini")
> arbre$frame[1:4,]
     var   n       dev      yval splits.cutleft splits.cutright
1     X2 200 49.875000 0.4750000      <0.302548       >0.302548
2     X1  69 12.811594 0.7536232      <0.800113       >0.800113
4 <leaf>  57  8.877193 0.8070175                               
5 <leaf>  12  3.000000 0.5000000

For the second knot, four cases should be considered: splitting on the second variable (again), either above or below the previous knot (see below on the left), or splitting on the first one; then we have either a partition below or above the previous knot (see below on the right),

Etc. To visualize the tree, the code is the following

> plot(arbre)
> text(arbre)
> partition.tree(arbre)

http://freakonometrics.hypotheses.org/files/2013/01/arbre-gini-x1-x2-encore.png

Note that we can also visualize the partition. Nice, isn’t it?
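For instance, one can overlay the sample on the fitted partition (a quick sketch, reusing the colors of the scatterplot above),

> partition.tree(arbre)
> points(X1[Y==1],X2[Y==1],col="blue",pch=19,cex=.7)
> points(X1[Y==0],X2[Y==0],col="red",pch=19,cex=.7)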

To go further, the book Classification and Regression Trees by Leo Breiman (and co-authors) is awesome. Note that there are also interesting sections in the bible Elements of Statistical Learning: Data Mining, Inference, and Prediction by Trevor Hastie, Robert Tibshirani and Jerome Friedman (which can be downloaded from http://www.stanford.edu/~hastie/…)

Segmentation in ratemaking, some complements

In the first P&C (non-life) actuarial science course, we saw the importance of segmentation, and its implication for premium computations (moving from a mathematical expectation to a conditional expectation). To go a bit further, a few complements,

for a more economics-oriented reading of the segmentation issue in insurance

or for a more legal one

Otherwise, several popular-science articles can be found online,

The first lab session will take place on Monday, in the computer room. Karim will give an introduction to the R language and to handling (qualitative and quantitative) variables. I will put the slides online at the end of the week, and the codes will be posted during the following week.

As announced yesterday, there will be no class next Wednesday. The Wednesday after that, we will look at the modeling of indicator variables, i.e. logistic regression, and at regression trees. We will assume that the linear model has already been covered; here is a link to the slides of last term’s ACT6420 course, lecture notes transparents1 and transparents2. It is also possible to reread Frees (2010), chapters 3, 4, 5 and 6.

To start practicing logistic regression, we will use the following small function

logit = function(formula, lien="logit", data=NULL) {
glm(formula,family=binomial(link=lien),data)
}
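For instance (reusing, just for illustration, the db dataset from the ROC post above),

reg = logit(Y~X1+X2+X3bis, data=db)
summary(reg)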

Also, the Casualty Actuarial Society has put several documents online about regression trees (which get little coverage in the books mentioned above),

for a comparison of all the methods

The slides will be put online at the end of next week. To be continued…

A geographic region is not a continuous variable

While re-reading the homework assignments, I noticed that some students had tried to group the (geographic) regions into homogeneous areas. Except that the regions were coded by a number (following the official classification). For instance, in one of the datasets, we had policyholders in 4 geographic regions, namely region 82 (Rhône-Alpes, in red), region 54 (Poitou-Charentes, in green), region 73 (Midi-Pyrénées, in blue) and finally region 41 (Lorraine, in purple).

> unique(baseFREQ$region) 
[1] 82 54 73 41

An interesting idea for grouping the regions could be to use trees. Since the regions are colors (as can clearly be seen on the map) and not quantitative variables, it is natural to work with factors. Incidentally, the code to draw the map is the following,

> library(maps) 
>  france<-map(database="france") 
>  dpt=c("Ain","Ardeche","Drome","Isere","Loire ","Rhone",  
+ "Savoie","Haute-Savoie","Charente","Charente-Maritime", 
+ "Deux-Sevres","Vienne","Ariege","Aveyron","Haute-Garonne",  
+ "Gers","Lot","Hautes-Pyrenees","Tarn","Tarn-et-Garonne", 
+ "Meurthe-et-Moselle","Meuse","Moselle","Vosges") 
>  couleur=c(rep(2,8),rep(3,4),rep(4,8),rep(6,4))  
>  match=match.map(france,dpt) 
>  color=couleur[match] 
>  map(database="france", fill=TRUE, col=color)

The tree built on the regions treated as factors gives the following grouping

>  baseFREQ$fregion=as.factor(baseFREQ$region) 
>  ARBRE1=tree(nombre~fregion,data=baseFREQ,split="gini")  
>  plot(ARBRE1) 
>  text(ARBRE1)
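To check the correspondence between the letters on the tree and the region codes, one can look at the order of the factor levels (numeric codes are sorted when converted to a factor),

>  levels(baseFREQ$fregion)     # sorted levels: "41" "54" "73" "82"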

Well, R has the unfortunate habit of recoding the classes (but it keeps the order, i.e. a corresponds to region 41, b to 54, c to 73 and d to 82). Visually, we see that two large regions can be considered, AC (i.e. 41 and 73) and BD (i.e. 54 and 82). The advantage of trees built on qualitative variables, factors, is that every grouping is possible. On the other hand, if we build a tree on the region read as a number (a quantitative variable), we obtain

>  ARBRE2=tree(nombre~region,data=baseFREQ,split="gini") 
>  plot(ARBRE2) 
>  text(ARBRE2)

It is then impossible to group in the same class two regions separated by another number, i.e. 41 and 82 cannot be put together. R suggests distinguishing perhaps three regions, namely 82 (on the right), then 73 (in the middle), and finally possibly putting 41 and 54 together. Which is not the optimal strategy when grouping factors.

Variable selection versus selection of levels

In class, we (very briefly) mentioned automatic variable selection. The simplest approach is a stepwise method, based on an AIC- or BIC-type criterion. Consider the following dataset,

>  N = base$nbre
>  E = base$exposition
>  X1 = base$carburant
>  X2 = cut(base$agevehicule,c(0,3,10,101),
+ right=FALSE)
>  X3 = cut(base$ageconducteur,c(0,22,45,101),
+ right=FALSE)
>  X4 = as.factor(base$zone)
>  X5 = as.factor(base$puissance)
>  X6 = as.factor(base$region)
>  X7 = as.factor(base$marque)
>  base1=data.frame(N,E,X1,X2,X3,X4,X5,X6,X7)

A (backward) stepwise method gives here

> reg1=glm(N~X1+X2+X3+X4+X5+X6+X7+offset(log(E)),
+ family="poisson",data=base1)
> step(reg1)
Start:  AIC=20492.67
N ~ X1 + X2 + X3 + X4 + X5 + X6 + X7 + offset(log(E))

Df Deviance   AIC
- X5   11    15316 20482
- X3    2    15305 20490
<none>       15304 20493
- X2    2    15314 20499
- X1    1    15319 20506
- X7   10    15343 20511
- X4    5    15398 20576
- X6   14    15569 20729

Step:  AIC=20482.35
N ~ X1 + X2 + X3 + X4 + X6 + X7 + offset(log(E))

Df Deviance   AIC
- X3    2    15317 20479
<none>       15316 20482
- X2    2    15326 20488
- X1    1    15334 20498
- X7   10    15359 20505
- X4    5    15410 20566
- X6   14    15579 20717

Step:  AIC=20479.33
N ~ X1 + X2 + X4 + X6 + X7 + offset(log(E))

Df Deviance   AIC
<none>       15317 20479
- X2    2    15327 20485
- X1    1    15334 20495
- X7   10    15360 20502
- X4    5    15410 20563
- X6   14    15620 20754

Call:  glm(formula = N ~ X1 + X2 + X4 + X6 + X7 
       + offset(log(E)),
       family = "poisson",
data = base1)

Coefficients:
(Intercept)          X1E     X2[3,10)   X2[10,101)          X4B
-1.0588454   -0.1653822    0.0266763   -0.1135451   -0.0004047
X4C          X4D          X4E          X4F          X60
0.1497622    0.3748811    0.5052894    0.4292016   -0.3590838
X61          X62          X63          X64          X65
-0.9300641   -1.0278887   -1.1818218   -1.0971797   -0.9459414
X66          X67          X68          X69         X610
-1.3690795   -1.1425678   -1.5309402   -1.3883549   -1.4603624
X611         X612         X613          X72          X73
-1.6763206   -1.3974092   -1.4864404    0.0246113    0.1144990
X74          X75          X76         X710         X711
-0.0932555    0.1635397   -0.1478095    0.2502030    0.1967970
X712         X713         X714
-0.2420215    0.2161411   -0.1963162

Degrees of Freedom: 49999 Total (i.e. Null);  49967 Residual
Null Deviance:	    15810
Residual Deviance: 15320 	AIC: 20480

In other words, we remove the third variable (age of the main driver, in arbitrary classes) and the fifth one (power of the vehicle), and keep all the others. But here, if a variable was not kept, it means that, globally, it did not bring much information. It would nevertheless be possible to keep part of the information, by possibly keeping only some of its levels. The idea is to expand the dataset into dummy variables, creating an indicator variable for each level. The dataset will be much bigger, and the selection will then take much more time,

> base2=data.frame(model.matrix( ~ 0+X1+X2+X3+X4+X5+X6+X7,
+ data=base1))
> base2$E=base1$E
> base2$N=base1$N
> reg2=glm(N~.-E+offset(log(E)),family="poisson",
+ data=base2)
>  step(reg2)
Start:  AIC=20492.67
N ~ (X1D + X1E + X2.3.10. + X2.10.101. + X3.22.45. + X3.45.101.
X4B + X4C + X4D + X4E + X4F + X55 + X56 + X57 + X58 + X59 +
X510 + X511 + X512 + X513 + X514 + X515 + X60 + X61 + X62 +
X63 + X64 + X65 + X66 + X67 + X68 + X69 + X610 + X611 + X612 +
X613 + X72 + X73 + X74 + X75 + X76 + X710 + X711 + X712 +
X713 + X714 + E) - E + offset(log(E))

Step:  AIC=20492.67
N ~ X1D + X2.3.10. + X2.10.101. + X3.22.45. + X3.45.101. + X4B
X4C + X4D + X4E + X4F + X55 + X56 + X57 + X58 + X59 + X510 +
X511 + X512 + X513 + X514 + X515 + X60 + X61 + X62 + X63 +
X64 + X65 + X66 + X67 + X68 + X69 + X610 + X611 + X612 +
X613 + X72 + X73 + X74 + X75 + X76 + X710 + X711 + X712 +
X713 + X714 + offset(log(E))

Df Deviance   AIC
- X4B         1    15304 20491
- X58         1    15304 20491
- X511        1    15304 20491
- X2.3.10.    1    15304 20491
- X72         1    15304 20491
- X513        1    15304 20491
- X512        1    15304 20491
- X515        1    15304 20491
- X74         1    15305 20491
- X3.45.101.  1    15305 20491
- X714        1    15305 20491
- X55         1    15305 20492
- X3.22.45.   1    15305 20492
- X711        1    15306 20492
- X76         1    15306 20492
- X59         1    15306 20492
<none>             15304 20493
- X514        1    15306 20493
- X713        1    15306 20493
- X73         1    15307 20493
- X56         1    15307 20493
- X710        1    15307 20494
- X75         1    15308 20494
- X2.10.101.  1    15308 20495
- X57         1    15309 20495
- X4C         1    15310 20496
- X510        1    15310 20496
- X60         1    15312 20498
- X4F         1    15314 20500
- X712        1    15316 20503
- X1D         1    15319 20506
- X4D         1    15337 20524
- X61         1    15345 20532
- X65         1    15350 20536
- X62         1    15352 20538
- X64         1    15359 20545
- X4E         1    15362 20549
- X63         1    15366 20553
- X67         1    15370 20556
- X612        1    15381 20568
- X69         1    15382 20569
- X66         1    15387 20574
- X610        1    15389 20576
- X68         1    15393 20580
- X611        1    15406 20592
- X613        1    15451 20637

Step:  AIC=20490.67
N ~ X1D + X2.3.10. + X2.10.101. + X3.22.45. + X3.45.101. + X4C
X4D + X4E + X4F + X55 + X56 + X57 + X58 + X59 + X510 + X511 +
X512 + X513 + X514 + X515 + X60 + X61 + X62 + X63 + X64 +
X65 + X66 + X67 + X68 + X69 + X610 + X611 + X612 + X613 +
X72 + X73 + X74 + X75 + X76 + X710 + X711 + X712 + X713 +
X714 + offset(log(E))

etc., etc. And if we jump directly to the end,

Step:  AIC=20469.18
N ~ X1D + X2.10.101. + X4C + X4D + X4E + X4F + X57 + X510 + X60
X61 + X62 + X63 + X64 + X65 + X66 + X67 + X68 + X69 + X610 +
X611 + X612 + X613 + X73 + X75 + X76 + X710 + X712 + X713 +
offset(log(E))

Df Deviance   AIC
<none>             15315 20469
- X76         1    15317 20470
- X713        1    15317 20470
- X73         1    15317 20470
- X57         1    15318 20470
- X75         1    15318 20471
- X710        1    15319 20471
- X510        1    15319 20471
- X4C         1    15322 20474
- X60         1    15322 20475
- X2.10.101.  1    15325 20478
- X4F         1    15325 20478
- X1D         1    15333 20485
- X712        1    15338 20490
- X61         1    15356 20508
- X4D         1    15359 20511
- X62         1    15363 20515
- X65         1    15363 20515
- X64         1    15371 20524
- X63         1    15378 20530
- X67         1    15383 20536
- X4E         1    15390 20543
- X612        1    15394 20547
- X69         1    15396 20548
- X66         1    15400 20553
- X610        1    15403 20555
- X68         1    15407 20559
- X611        1    15419 20572
- X613        1    15467 20619

Call:  glm(formula = N ~ X1D + X2.10.101. + X4C + X4D + X4E + X4F
X57 + X510 + X60 + X61 + X62 + X63 + X64 + X65 + X66 + X67 +
X68 + X69 + X610 + X611 + X612 + X613 + X73 + X75 + X76 +
X710 + X712 + X713 + offset(log(E)), family = "poisson",
data = base2)

Coefficients:
(Intercept)          X1D   X2.10.101.          X4C          X4D
-1.20880      0.16886     -0.13808      0.14888      0.37539
X4E          X4F          X57         X510          X60
0.50458      0.42768      0.08381      0.18722     -0.36509
X61          X62          X63          X64          X65
-0.93836     -1.03471     -1.18803     -1.10217     -0.95624
X66          X67          X68          X69         X610
-1.37463     -1.15391     -1.54213     -1.40188     -1.47217
X611         X612         X613          X73          X75
-1.68559     -1.40582     -1.49700      0.10874      0.15022
X76         X710         X712         X713
-0.15183      0.21948     -0.27400      0.19565

Degrees of Freedom: 49999 Total (i.e. Null);  49971 Residual
Null Deviance:	    15810
Residual Deviance: 15310 	AIC: 20470

While the third variable (age of the main driver, in arbitrary classes) disappears quite quickly, some information about the fifth one (the power) is kept, since some of its levels seem to be informative about the claims frequency. On the other hand, note that if we build a tree, the third variable is still clearly significant, which supports the idea of performing the selection on the levels rather than on whole variables.

> library(tree)
> TREE= tree(N~X1+X2+X3+X4+X5+X6+X7+offset(log(E)),split="gini",
+ mincut = 2500,data=base1)
> plot(TREE)
> text(TREE,cex=.9)

Regression on categorical variables

A small complement to Tuesday’s class. We first discussed how to read the output when regressing on categorical variables (factors). Let us start by removing the intercept from the regression

> reg0=glm(nbre~0+zone,offset=log(exposition),data=base, 
+ family=poisson(link="log"))
> summary(reg0)

Call:
glm(formula = nbre ~ 0 + zone, family = poisson(link = "log"), 
    data = base, offset = log(exposition))

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-0.5717  -0.3968  -0.2996  -0.1547  12.6722  

Coefficients:
      Estimate Std. Error z value Pr(>|z|)    
zoneB -2.54187    0.06287  -40.43   <2e-16 ***
zoneA -2.54912    0.05285  -48.23   <2e-16 ***
zoneC -2.38525    0.03753  -63.56   <2e-16 ***
zoneD -2.13454    0.03878  -55.05   <2e-16 ***
zoneE -2.00204    0.03965  -50.49   <2e-16 ***
zoneF -2.06932    0.11547  -17.92   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 50966  on 50000  degrees of freedom
Residual deviance: 15692  on 49994  degrees of freedom
AIC: 20800

Number of Fisher Scoring iterations: 6

> predict(reg0,newdata=data.frame(
+ zone=c("A","B","C","D","E"),exposition=rep(1,5)))
        1         2         3         4         5 
-2.549120 -2.541870 -2.385253 -2.134543 -2.002044

We see that all the levels are present, and all are significant. If we regress with an intercept, one level has to be dropped to make the model identifiable. We can force the reference level to be the second one,

> base$zone=relevel(base$zone,"B")
> regB=glm(nbre~zone,offset=log(exposition),data=base,
+ family=poisson(link="log"))
> summary(regB)

Call:
glm(formula = nbre ~ zone, family = poisson(link = "log"), 
data = base,
offset = log(exposition))

Deviance Residuals:
Min       1Q   Median       3Q      Max
-0.5717  -0.3968  -0.2996  -0.1547  12.6722

Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.54187    0.06287 -40.431  < 2e-16 ***
zoneA       -0.00725    0.08213  -0.088 0.929661
zoneC        0.15662    0.07322   2.139 0.032432 *
zoneD        0.40733    0.07387   5.514 3.50e-08 ***
zoneE        0.53983    0.07433   7.263 3.80e-13 ***
zoneF        0.47255    0.13148   3.594 0.000325 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for poisson family taken to be 1)

Null deviance: 15809  on 49999  degrees of freedom
Residual deviance: 15692  on 49994  degrees of freedom
AIC: 20800

Number of Fisher Scoring iterations: 6

> predict(regB,newdata=data.frame(
+ zone=c("A","B","C","D","E"),exposition=rep(1,5)))
1         2         3         4         5
-2.549120 -2.541870 -2.385253 -2.134543 -2.002044

Note that the predictions do not change. We can also choose the first level as the reference,

> base$zone=relevel(base$zone,"A")
> reg=glm(nbre~zone,offset=log(exposition),
+ data=base,family=poisson(link="log"))
> summary(reg)

Call:
glm(formula = nbre ~ zone, family = poisson(link = "log"), 
data = base,
offset = log(exposition))

Deviance Residuals:
Min       1Q   Median       3Q      Max
-0.5717  -0.3968  -0.2996  -0.1547  12.6722

Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.54912    0.05285 -48.232  < 2e-16 ***
zoneB        0.00725    0.08213   0.088 0.929661
zoneC        0.16387    0.06482   2.528 0.011471 *
zoneD        0.41458    0.06555   6.324 2.54e-10 ***
zoneE        0.54708    0.06607   8.280  < 2e-16 ***
zoneF        0.47980    0.12699   3.778 0.000158 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for poisson family taken to be 1)

Null deviance: 15809  on 49999  degrees of freedom
Residual deviance: 15692  on 49994  degrees of freedom
AIC: 20800

Number of Fisher Scoring iterations: 6

The fact that the second level is not significant should be read relative to the reference level (here, the first one): not significant then means not significantly different. In other words, we can merge the two levels into a single one.

> base$zonesimple=base$zone
> base$zonesimple[base$zone%in%c("A","B")]="A"
> reg=glm(nbre~zonesimple,offset=log(exposition),
+ data=base,family=poisson(link="log"))
> summary(reg)

Call:
glm(formula = nbre ~ zonesimple, family = poisson(link = "log"),
data = base, offset = log(exposition))

Deviance Residuals:
Min       1Q   Median       3Q      Max
-0.5717  -0.3959  -0.2989  -0.1547  12.6722

Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.54612    0.04046 -62.937  < 2e-16 ***
zonesimpleC  0.16087    0.05518   2.915  0.00355 **
zonesimpleD  0.41158    0.05604   7.345 2.06e-13 ***
zonesimpleE  0.54408    0.05665   9.605  < 2e-16 ***
zonesimpleF  0.47681    0.12235   3.897 9.74e-05 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for poisson family taken to be 1)

Null deviance: 15809  on 49999  degrees of freedom
Residual deviance: 15692  on 49995  degrees of freedom
AIC: 20798

Number of Fisher Scoring iterations: 6

Note that with this grouping, the other levels are all markedly different. We can also take the third level as the reference

> base$zonesimple=relevel(base$zonesimple,"C")
> reg=glm(nbre~zonesimple,offset=log(exposition),
+ data=base,family=poisson(link="log"))
> summary(reg)

Call:
glm(formula = nbre ~ zonesimple, family = poisson(link = "log"),
data = base, offset = log(exposition))

Deviance Residuals:
Min       1Q   Median       3Q      Max
-0.5717  -0.3959  -0.2989  -0.1547  12.6722

Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.38525    0.03753 -63.557  < 2e-16 ***
zonesimpleA -0.16087    0.05518  -2.915  0.00355 **
zonesimpleD  0.25071    0.05396   4.646 3.39e-06 ***
zonesimpleE  0.38321    0.05460   7.019 2.24e-12 ***
zonesimpleF  0.31593    0.12142   2.602  0.00927 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for poisson family taken to be 1)

Null deviance: 15809  on 49999  degrees of freedom
Residual deviance: 15692  on 49995  degrees of freedom
AIC: 20798

Number of Fisher Scoring iterations: 6

Since all the levels seem significant, we can try taking as the reference one of the last levels (whose estimated coefficients give very similar results)

> base$zonesimple=relevel(base$zonesimple,"F")
> reg=glm(nbre~zonesimple,offset=log(exposition),
+ data=base,family=poisson(link="log"))
> summary(reg)

Call:
glm(formula = nbre ~ zonesimple, family = poisson(link = "log"),
data = base, offset = log(exposition))

Deviance Residuals:
Min       1Q   Median       3Q      Max
-0.5717  -0.3959  -0.2989  -0.1547  12.6722

Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.06932    0.11547 -17.921  < 2e-16 ***
zonesimpleC -0.31593    0.12142  -2.602  0.00927 **
zonesimpleA -0.47681    0.12235  -3.897 9.74e-05 ***
zonesimpleD -0.06522    0.12181  -0.535  0.59232
zonesimpleE  0.06727    0.12209   0.551  0.58161
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for poisson family taken to be 1)

Null deviance: 15809  on 49999  degrees of freedom
Residual deviance: 15692  on 49995  degrees of freedom
AIC: 20798

Number of Fisher Scoring iterations: 6

Given this last output, we can try merging all of the last classes together

> base$zonesimple[base$zone%in%c("D","E","F")]="F"
> reg=glm(nbre~zonesimple,offset=log(exposition),
+ data=base,family=poisson(link="log"))
> summary(reg)

Call:
glm(formula = nbre ~ zonesimple, family = poisson(link = "log"),
data = base, offset = log(exposition))

Deviance Residuals:
Min       1Q   Median       3Q      Max
-0.5660  -0.3959  -0.3004  -0.1547  12.5929

Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.07182    0.02696 -76.853  < 2e-16 ***
zonesimpleC -0.31344    0.04621  -6.783 1.18e-11 ***
zonesimpleA -0.47431    0.04861  -9.757  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for poisson family taken to be 1)

Null deviance: 15809  on 49999  degrees of freedom
Residual deviance: 15698  on 49997  degrees of freedom
AIC: 20800

Number of Fisher Scoring iterations: 6

Well, formally, merging two levels (i.e. deciding that two variables are simultaneously non-significant) requires a bit more than a Student t-test, or two t-tests… Going back a step, we could have run a joint test before merging the three levels (a Fisher-type test, or an ANOVA)

> base$zonesimple=relevel(base$zonesimple,"F")
> reg=glm(nbre~zonesimple,offset=log(exposition),
+ data=base,family=poisson(link="log"))
> library(car)
> linearHypothesis(reg,c("zonesimpleD=0","zonesimpleE=0"))
Linear hypothesis test

Hypothesis:
zonesimpleD = 0
zonesimpleE = 0

Model 1: restricted model
Model 2: nbre ~ zonesimple

Res.Df Df  Chisq Pr(>Chisq)
1  49997
2  49995  2 5.7073    0.05763 .
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Clearly, we can accept the hypothesis that these three categories form a single one. The geographic zone can then be split into three large zones, rather than six. Note that this corresponds to what a regression tree suggests

> library(tree)
> arbre=tree(nbre~zone+offset(log(exposition)),
+ data=base,split="gini")
> plot(arbre)
> text(arbre)