Tag Archives: regression

Modeling Earthquake Dynamics

In 2012, with Marilou Durand, a student at UQAM, we worked on the seismic gap hypothesis, see e.g. McCann et al. (1978) or Kagan & Jackson (1991), or to be more specific, on the dynamics between earthquake magnitudes (or seismic moments) and inter-occurrence durations. Our paper should appear soon in the Journal of Seismology.

In this paper, we investigate questions arising in Parsons & Geist (2012). Pseudo-causal models connecting magnitudes and waiting times are considered, through generalized regression. We use conditional models (magnitude given the previous waiting time, and conversely) as an extension of the joint distribution model described in Nikoloulopoulos & Karlis (2008). On the one hand, we fit a Pareto distribution for earthquake magnitudes, where the tail index is a function of the waiting time following the previous earthquake; on the other hand, waiting times are modeled using a Gamma or a Weibull distribution, whose parameters are functions of the magnitude of the previous earthquake. We use those two models, alternately, to generate the dynamics of earthquake occurrence, and to estimate the probability of occurrence of several earthquakes within a year, or a decade.
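To give a rough idea of what alternating between the two conditional models looks like, here is a minimal simulation sketch (it follows the structure described above, but all link functions and parameter values are made up for the illustration; they are not the ones calibrated in the paper),

# illustrative sketch only: alternate the two conditional models to
# simulate a sequence of (waiting time, magnitude) pairs
set.seed(1)
n=1000
M=W=rep(NA,n)   # magnitudes and inter-occurrence durations (in days, say)
M[1]=6; W[1]=100
for(i in 2:n){
# waiting time given the previous magnitude: Gamma, with a rate driven by M[i-1]
W[i]=rgamma(1,shape=1.5,rate=exp(1-.5*M[i-1]))
# magnitude given the waiting time: Pareto, with a tail index driven by W[i]
alpha=1+.3*log(1+W[i])
M[i]=5*runif(1)^(-1/alpha)
}
# empirical frequency of (simulated) years with at least two events
temps=cumsum(W)
mean(table(cut(temps,breaks=seq(0,max(temps)+365,by=365)))>=2)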

The paper is online on https://hal.archives-ouvertes.fr/.

Inequalities and Quantile Regression

In the course on inequality measures, we’ve seen how to compute various (standard) inequality indices, based on some sample of incomes (that can be binned into various categories). On Thursday, we discussed the fact that incomes can be related to different variables (e.g. experience), and that comparing income inequalities between countries can be biased if they have very different age structures.

So we’ve seen quantile regressions. I can mention some old slides (used in a crash course at McGill three years ago), as well as a more technical discussion on ties, and the non-uniqueness of the regression line.

In order to illustrate, consider the following dataset

> salary <- read.table("http://data.princeton.edu/wws509/datasets/salary.dat",header=TRUE)
> plot(salary$yd,salary$sl)
> abline(lm(sl~yd,data=salary),col="blue")

We have here the standard regression line, obtained using ordinary least squares: it gives the expected income given the experience. But we can also use a quantile regression,

$$Q_\tau(Y\vert\boldsymbol{X})=\boldsymbol{X}^{\mathsf{T}}\boldsymbol{\beta}$$

> library(quantreg)
> Q10 <- rq(sl~yd,data=salary,tau=.1)
> Q90 <- rq(sl~yd,data=salary,tau=.9)
> abline(Q10,col="red")
> abline(Q90,col="purple")

A classical tool to describe inequalities is the ratio of the 90% quantile over the 10% quantile (among many others),

> ratio9010 = function(age){
+   predict(Q90,newdata=data.frame(yd=age))/
+   predict(Q10,newdata=data.frame(yd=age))
+ }

For instance, among people with 5 years of experience, there is an inequality index of

> ratio9010(5)
1.401749

while for people with 30 years of experience, it would be

> ratio9010(30)
1.9488

If we plot the evolution of this 90-10 ratio, as a function of the experience, we get the following increasing trend

> A=0:30
> plot(A,Vectorize(ratio9010)(A),type="l",ylab="90-10 quantile ratio")

So clearly, comparing inequalities ceteris paribus between two groups should be performed carefully, and probably including some covariates.

Linear Regression, a Few Lines of Code

A quick post to put online the code used last week, complementing the code from the slides. We keep working on the same dataset, where we try to predict the braking distance of a vehicle as a function of its speed.

> plot(cars)
> reg=lm(dist~speed,data=cars)
> summary(reg)

Call:
lm(formula = dist ~ speed, data = cars)

Residuals:
    Min      1Q  Median      3Q     Max 
-29.069  -9.525  -2.272   9.215  43.201 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -17.5791     6.7584  -2.601   0.0123 *  
speed         3.9324     0.4155   9.464 1.49e-12 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 15.38 on 48 degrees of freedom
Multiple R-squared:  0.6511,	Adjusted R-squared:  0.6438 
F-statistic: 89.57 on 1 and 48 DF,  p-value: 1.49e-12

To compute several predictions by hand, we can use the following code (the loop produces predictions for several values of the speed)

> for(x in seq(3,30,by=.25)){
+ b0=coef(reg)[1]
+ b1=coef(reg)[2]
+ Yx=b0+b1*x
+ V=vcov(reg)
+ Vx=V[1,1]+2*V[1,2]*x+V[2,2]*x^2
+ IC1=Yx+c(-1,+1)*1.96*sqrt(Vx)
+ s=summary(reg)$sigma
+ IC2=Yx+c(-1,+1)*1.96*s
+ points(x,Yx,pch=19,col="red")
+ points(c(x,x),IC1,pch=3,col="blue")
+ points(c(x,x),IC2,pch=3,col="purple")}

We then ran a linear regression on a sub-sample, with 20 observations drawn at random

> I=sample(1:50,size=20)
> reg=lm(dist~speed,data=cars[I,])

The goal was to visualize the impact of the number of observations on the quality of the regression

> summary(reg)

Call:
lm(formula = dist ~ speed, data = cars[I, ])

Residuals:
    Min      1Q  Median      3Q     Max 
-23.529  -7.998  -5.394  11.634  39.348 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -20.7408     9.4639  -2.192   0.0418 *  
speed         4.2247     0.6129   6.893 1.91e-06 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 16.62 on 18 degrees of freedom
Multiple R-squared:  0.7252,	Adjusted R-squared:   0.71 
F-statistic: 47.51 on 1 and 18 DF,  p-value: 1.91e-06

> for(x in seq(3,30,by=.25)){
+   b0=coef(reg)[1]
+   b1=coef(reg)[2]
+   Yx=b0+b1*x
+   V=vcov(reg)
+   Vx=V[1,1]+2*V[1,2]*x+V[2,2]*x^2
+   IC=Yx+c(-1,+1)*1.96*sqrt(Vx)
+   points(x,Yx,pch=19,col="purple")
+   points(c(x,x),IC,pch=3,col="green")}

Note that it is possible to use built-in R functions to compute predictions, with confidence intervals

> predict(reg,
+ newdata=data.frame(speed=c(15,25)),interval= "confidence")
       fit      lwr       upr
1 42.62976 34.75450  50.50502
2 84.87677 68.92746 100.82607
> predict(reg,
+ newdata=data.frame(speed=15),interval= "prediction")
       fit      lwr      upr
1 42.62976 6.836077 78.42344

When there is more than one explanatory variable, it is harder to “visualize” the regression

>  chicago=read.table("http://freakonometrics.free.fr/chicago.txt",
+  header=TRUE,sep=";")
>  Y=chicago$Fire
>  X1=chicago$X_1
>  X2=chicago$X_2
>  X3=chicago$X_3
>  base=data.frame(Y,X1,X2,X3)
> plot(X2,X3)
> reg=lm(Y~X2+X3,data=base)
> y=function(x2,x3) predict(reg,newdata=data.frame(X2=x2,X3=x3))
> VX2=seq(0,80)
> VX3=seq(5,25)
> VY=outer(VX2,VX3,y)
> image(VX2,VX3,VY)
> contour(VX2,VX3,VY,add=TRUE)

which corresponds to a regression plane

> persp(VX2,VX3,VY,theta=30,ticktype="detailed")

We will come back to this point in more detail later, but it is quite easy to build nonlinear regressions from this linear model. We started with a linear model on the logarithm of the distance

> plot(cars$speed,log(cars$dist))
> reg1=lm(log(dist)~speed,data=cars)
> abline(reg1,col="red")

(as we will see, we are not done yet, since we do not have a prediction for the distance here, only for its logarithm… but we will come back to this, promised) or on the square root

> plot(cars$speed,sqrt(cars$dist))
> reg1=lm(sqrt(dist)~speed,data=cars)
> abline(reg1,col="red")

Instead of transforming the variable of interest, we can also transform the explanatory variable. We can use powers, or simple functions, but also introduce breakpoints. We started with an indicator variable,

> plot(cars$speed,cars$dist)
> s=10
> abline(v=s,col="green")
> regs=lm(dist~speed+I(speed>s),data=cars)
> summary(regs)

Call:
lm(formula = dist ~ speed + I(speed > s), data = cars)

Residuals:
    Min      1Q  Median      3Q     Max 
-29.472  -9.559  -2.088   7.456  44.412 

Coefficients:
                 Estimate Std. Error t value Pr(>|t|)    
(Intercept)      -17.2964     6.7709  -2.555   0.0139 *  
speed              4.3140     0.5762   7.487  1.5e-09 ***
I(speed > s)TRUE  -7.5116     7.8511  -0.957   0.3436    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 15.39 on 47 degrees of freedom
Multiple R-squared:  0.6577,	Adjusted R-squared:  0.6432 
F-statistic: 45.16 on 2 and 47 DF,  p-value: 1.141e-11

But we can also use functions so as to get a piecewise linear model which remains continuous

> plot(cars)
> s=15
> abline(v=s,col="green")
> positive=function(x) ifelse(x>0,x,0)
> regs=lm(dist~speed+positive(speed-s),data=cars)
> summary(regs)

Call:
lm(formula = dist ~ speed + positive(speed - s), data = cars)

Residuals:
    Min      1Q  Median      3Q     Max 
-29.502  -9.513  -2.413   5.195  45.391 

Coefficients:
                    Estimate Std. Error t value Pr(>|t|)   
(Intercept)          -7.6519    10.6254  -0.720  0.47500   
speed                 3.0186     0.8627   3.499  0.00103 **
positive(speed - s)   1.7562     1.4551   1.207  0.23350   
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 15.31 on 47 degrees of freedom
Multiple R-squared:  0.6616,	Adjusted R-squared:  0.6472 
F-statistic: 45.94 on 2 and 47 DF,  p-value: 8.761e-12

We have one breakpoint here, but we could imagine having several

> nouvellebase=data.frame(speed=5:25)
> y=predict(regs,newdata=nouvellebase)
> lines(5:25,y,col="red")
> 
> plot(cars$speed,cars$dist)
> s1=10
> s2=20
> abline(v=c(s1,s2),col="green")
> positive=function(x) ifelse(x>0,x,0)
> regs=lm(dist~speed+positive(speed-s1)+positive(speed-s2),data=cars)
> summary(regs)

Call:
lm(formula = dist ~ speed + positive(speed - s1) + positive(speed - s2), data = cars)

Residuals:
    Min      1Q  Median      3Q     Max 
-24.374  -9.475  -2.625   6.639  43.914 

Coefficients:
                     Estimate Std. Error t value Pr(>|t|)  
(Intercept)           -7.6305    16.2941  -0.468   0.6418  
speed                  3.0630     1.8238   1.679   0.0998 .
positive(speed - s1)   0.2087     2.2453   0.093   0.9263  
positive(speed - s2)   4.2812     2.2843   1.874   0.0673 .
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 15 on 46 degrees of freedom
Multiple R-squared:  0.6821,	Adjusted R-squared:  0.6613 
F-statistic: 32.89 on 3 and 46 DF,  p-value: 1.643e-11

As seen in class, the significance test on the last two coefficients does not test whether the slope is zero, but whether it is significantly different from the one obtained on the left-hand region (before the two thresholds).
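Note that if we want to test whether the slope beyond the threshold is zero (and not merely whether it differs from the slope on the left), one possibility – a quick sketch, with the single breakpoint s=15 used above – is to reparametrize the model so that the right-hand slope appears directly as a coefficient,

> s=15
> positive=function(x) ifelse(x>0,x,0)
> regd=lm(dist~positive(s-speed)+positive(speed-s),data=cars)
> summary(regd)$coefficients

The fitted values are exactly the same as with dist~speed+positive(speed-s), but the coefficient of positive(speed-s) is now the slope to the right of the threshold, so its Student test is a test of a zero slope on that region.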

There is no “Too Big” Data, is there?

A few years ago, a former classmate came back to me with a simple problem. He was working for some insurance company (and still is, don’t worry, chatting with me is not yet a reason for dismissal), and his problem was that their dataset was too large to run (standard) codes to get a regression, and some predictions. My answer was to use sub-sampling techniques, and I still believe that this might be a good idea (actually, I wrote a long post on that issue, entitled too large datasets for regression? What about subsampling). But I wanted to go further, since I did not discuss predictions obtained with sub-sampling techniques.

So, consider here a logistic regression based on some covariates. We have $k$ explanatory variables ($k$ will be large, but not too large) and $n$ observations (with $n \gg k$). Here we have a big (potentially) matrix product $\boldsymbol{X}\boldsymbol{\beta}$, i.e. with a large $n\times k$ matrix $\boldsymbol{X}$. Below, we have a 100,000 × 100 matrix, with n = 100,000 individual observations and k = 100 possible variables (plus the intercept). Actually, in my model, only 2 variables were used in the real model. Assume further that the explanatory variables are – potentially – correlated.

n=100000
library(mnormt)
k=50
r=.2
Sig=matrix(r,k,k)
diag(Sig)=1
X=rmnorm(n,varcov=Sig)
U=pnorm(rmnorm(n,varcov=Sig))
p=exp(-U[,1]-X[,1]-1)/(1+exp(-U[,1]-X[,1]-1))
Y=rbinom(n,size=1,p)
df=data.frame(Y,U,X)
names(df)=c("Y",paste("U",1:50,sep=""),paste("X",1:50,sep=""))
reg=glm(Y~.,data=df,family="binomial")

In some sense, it is not too big, since we can run a regression on that dataset with a simple laptop (even if it can still be seen as a large dataset, in the sense discussed in http://businessweek.com/…). But let us consider an alternative strategy, to be able to get some predictions – or some model – in the case we cannot run a regression. Two strategies will be compared,

  • generate 100 datasets with 10,000 observations (n/10 of the data), by sub-sampling
  • generate 100 datasets with 1,000 observations (n/100 of the data), by sub-sampling,

On each dataset, we can now run a regression, and compare the estimated coefficients with the “true” regression (on the whole dataset, since here, we can still run it). Then, since out of the 100 explanatory variables, only 2 were actually used to generate the output, we should probably remove unnecessary variables from our model. So, some stepwise procedures were used.

L1=L2=L1s=L2s=list()
library(MASS)
ns1=n/10
ns2=n/100
for(s in 1:100){
# sub-samples of size n/10
i=sample(1:n,size=ns1,replace=TRUE)
reg_sub=glm(Y~.,data=df[i,],family="binomial")
L1[[s]]=reg_sub
L1s[[s]]=stepAIC(reg_sub)
# sub-samples of size n/100
i=sample(1:n,size=ns2,replace=TRUE)
reg0=glm(Y~.,data=df[i,],family="binomial")
L2[[s]]=reg0
L2s[[s]]=stepAIC(reg0)
}

For instance, if we consider the very first coefficient which should appear in the regression (let us forget about the intercept), or the second coefficient (which was not considered to generate the dataset), we get

VC=c(-1,-1,rep(0,49),-1,rep(0,49))
coef=function(k){
C1=unlist(lapply(L1,function(x) coefficients(x)[k]))
C2=unlist(lapply(L2,function(x) coefficients(x)[k]))
m=summary(reg)$coefficients
u=seq(quantile(C2,.2),quantile(C2,.8),length=501)
v=dnorm(u,m[k,1],m[k,2])
plot(u,v,col="white",xlab="",ylab="",axes=FALSE)
axis(1)
polygon(c(u,rev(u)),c(v,rep(0,length(u))),col="grey",border=NA)
abline(v=VC[k],lty=2)
boxplot(C1,horizontal=TRUE,add=TRUE,at=max(v)/3)
boxplot(C2,horizontal=TRUE,add=TRUE,at=max(v)/3*2)
}

coef(2)

where the density in grey is the Gaussian density of the estimator obtained from the large (and complete) dataset, and the boxplots are the estimates obtained on sub-samples (without the stepwise procedure, just to make sure I keep that variable).

For coefficients associated with variables not used to generate the dataset, we get graphs like the following

So, clearly, the smaller the dataset, the larger the dispersion of the estimates. But so far, nothing new. In my previous post – too large datasets for regression? What about subsampling – my point was to discuss computational times, and a possible optimal size of sub-datasets. Now, what about the impact of sub-sampling on predictions? Here, we fit a model on a small sample, but we can get a prediction on the whole dataset. In order to describe the goodness of fit of our regression model, let us plot ROC curves. More specifically, three kinds of lines will be plotted,

  • the ROC curve for the scores obtained with the model fitted on the complete dataset [red]
  • the ROC curves for the scores obtained with the models fitted on each sub-sample [light blue]
  • the ROC curve for the scores obtained by averaging the previous predictions [blue]

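The code below relies on a ROC.curve() function that is not defined in this post (it presumably comes from an earlier one); a minimal sketch, returning false and true positive rates over a grid of thresholds, could be

# minimal sketch of a ROC.curve() helper (not necessarily the original one):
# row 1 = false positive rate, row 2 = true positive rate, over the thresholds
ROC.curve=function(S,Y,seuil=seq(0,1,by=.01)){
FP=sapply(seuil,function(s) sum((S>s)&(Y==0))/sum(Y==0))
TP=sapply(seuil,function(s) sum((S>s)&(Y==1))/sum(Y==1))
rbind(FP,TP)
}
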
S=predict(reg,type="response")
Y=df$Y
M.ROC=ROC.curve(S,Y)
plot(M.ROC[1,],M.ROC[2,],type="s",col="red")

Z=df$Y*0
for(si in 1:100){
S=predict(L1s[[si]],type="response",newdata=df)
Z=Z+S
Y=df$Y
M.ROC=ROC.curve(S,Y)
lines(M.ROC[1,],M.ROC[2,],type="s",col="light blue")
}

S=Z/100
Y=df$Y
M.ROC=ROC.curve(S,Y)
lines(M.ROC[1,],M.ROC[2,],type="s",col="blue",lwd=2)

If we consider sub-samples of size 10,000, we get the following, and when we consider sub-samples of size 1,000, without the stepwise procedure (most variables have a small coefficient, not significant) and after the stepwise procedure. Clearly – and that should not be a surprise – looking at predictions when the model was fitted on 1% of the dataset is not great (ROC curves are substantially below the red ROC curve). But the interesting point is that averaging yields great results. In terms of ROC curve, we have the same

  • running one regression on our 100,000 × 100 matrix
  • averaging the predictions after running 100 regressions on 1,000 × 100 matrices

Except that the first one might not be possible to run, if the dataset were larger. And I have to admit that with the stepwise procedure, with 100 variables (of which 98 should – theoretically – be removed), it took some time! But still. I have the feeling that sub-sampling is extremely promising in the context of too-large datasets.

Voting Twice in France

On the Monkey Cage blog, Baptiste Coulmont (a.k.a. @coulmont) recently uploaded a post entitled “You can vote twice! The many political appeals of proxy votes in France“, coauthored with Joël Gombin (a.k.a. @joelgombin) and myself. The study was initially written in French, as mentioned in a previous post. Baptiste posted additional information on his blog (http://coulmont.com/blog/…) and I also wanted to post some lines of code, to mention a model that was not used in that study (more complex to analyze, but more realistic, and with the same conclusions). The econometric study is based on aggregated votes, with a possible ecological fallacy issue.

  • Regression Model: Possible Explanatory Variables

The first idea was to model proxies using a binomial regression, per polling station $i$: let $Y_i$ denote the number of proxy votes at station $i$, and $E_i$ the number of registered voters (INSCRITS). The proportion $Y_i/E_i$ can be a function of possible explanatory variables (on Baptiste’s blog there is additional information about the datasets, obtained from insee.fr and opendata.paris.fr)

> bt1=read.table("paris2007-pres-t1.csv",header=TRUE,sep=";")
> bt2=read.table("paris2007-pres-t2.csv",header=TRUE,sep=";")
> bv=read.table("paris-bv-insee-07.csv",header=TRUE,sep=";")
> bv$BV=bv$BVCOM
> baset1=merge(bt1,bv,by="BV")
> baset2=merge(bt2,bv,by="BV")
> baset1$LOGEMENT=baset1$PROPRIO+baset1$LOCNONHLM+baset1$LOCHLM+baset1$GRATUIT
> baset2$LOGEMENT=baset2$PROPRIO+baset2$LOCNONHLM+baset2$LOCHLM+baset2$GRATUIT

For instance, assume that the proxy rate is a function of the proportion of homeowners, denoted $x_i$, in the neighborhood of the polling station,

> variable="PROPRIO"
> reference="LOGEMENT"
> baset1$taux=baset1[,variable]/baset1[,reference]
> baset2$taux=baset2[,variable]/baset2[,reference]

We can consider a logistic regression,

$$Y_i\sim\mathcal{B}(E_i,p_i),\qquad \text{logit}(p_i)=\beta_0+\beta_1 x_i$$

or a logistic regression with splines, if we do not want to assume a linear effect of $x_i$,

$$\text{logit}(p_i)=h(x_i)$$

With cubic splines, the code is

> b=hist(baset1$taux,plot=FALSE)
> library(splines)
> regt1=glm(PROCURATIONS/INSCRITS~bs(taux,6),family=binomial,weights=INSCRITS,data=baset1)
> regt2=glm(PROCURATIONS/INSCRITS~bs(taux,6),family=binomial,weights=INSCRITS,data=baset2)
> u=seq(min(baset1$taux)+.015,max(baset1$taux)-.015,by=.001)
> ND=data.frame(taux=u)
> ug=seq(0,max(baset1$taux)+.05,by=.001)
> pt1=predict(regt1,newdata=ND,se=TRUE,type="response")
> pt2=predict(regt2,newdata=ND,se=TRUE,type="response")
> library(RColorBrewer)
> CL=brewer.pal(6, "RdBu")
> plot(ug,ug*1,col="white",xlab=variable,ylab="Taux de procuration",
+ ylim=c(0,.1))
> for(i in 1:(length(b$breaks)-1)){
+ polygon(b$breaks[i+c(0,0,1,1)],c(0,b$counts[i],b$counts[i],0)
+ /max(b$counts)*.05,col="light yellow",border=NA)}
> polygon(c(u,rev(u)),c(pt1$fit+2*pt1$se.fit,rev(pt1$fit-2*pt1$se.fit)),
+ border=NA,density=30,col=CL[4])

while a standard logistic regression would be

> lines(u,pt1$fit,col=CL[6],lwd=2)
> polygon(c(u,rev(u)),c(pt2$fit+2*pt2$se.fit,rev(pt2$fit-2*pt2$se.fit)),
+ border=NA,density=30,col=CL[3])
> lines(u,pt2$fit,col=CL[1],lwd=2)
> regt1l=glm(PROCURATIONS/INSCRITS~taux,family=binomial,weights=INSCRITS,data=baset1)
> regt2l=glm(PROCURATIONS/INSCRITS~taux,family=binomial,weights=INSCRITS,data=baset2)
> ND=data.frame(taux=ug)
> pt1l=predict(regt1l,newdata=ND,se=TRUE,type="response")
> pt2l=predict(regt2l,newdata=ND,se=TRUE,type="response")
> lines(ug,pt1l$fit,col=CL[5],lty=2)
> lines(ug,pt2l$fit,col=CL[2],lty=2)
> legend(0,.1,c("Second Tour","Premier Tour"),col=CL[c(1,6)],
+ lwd=2,lty=1,border=NA)

Here it is (the confidence region is for the spline regression), with the first round of the presidential election in blue, and the second round in red (in France, it’s a two-round system)

(the legend of the y axis is not correct). We can also consider, as an explanatory variable, the rate of H.L.M. (low-cost or council housing),

While I like the graph, unfortunately, the interpretation of the coefficient of $x_i$ might be complicated

> summary(regt1l)

Call:
glm(formula = PROCURATIONS/INSCRITS ~ taux, family = binomial, 
    data = baset1, weights = INSCRITS)

Deviance Residuals: 
     Min        1Q    Median        3Q       Max  
-12.9549   -1.5722    0.0319    1.6292   13.1303  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -3.70811    0.01516  -244.6   <2e-16 ***
taux         1.49666    0.04012    37.3   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 12507  on 836  degrees of freedom
Residual deviance: 11065  on 835  degrees of freedom
AIC: 15699

Number of Fisher Scoring iterations: 4

> summary(regt2l)

Call:
glm(formula = PROCURATIONS/INSCRITS ~ taux, family = binomial, 
    data = baset2, weights = INSCRITS)

Deviance Residuals: 
     Min        1Q    Median        3Q       Max  
-15.4872   -1.7817   -0.1615    1.6035   12.5596  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -3.24272    0.01230 -263.61   <2e-16 ***
taux         1.45816    0.03266   44.65   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 9424.7  on 836  degrees of freedom
Residual deviance: 7362.3  on 835  degrees of freedom
AIC: 12531

Number of Fisher Scoring iterations: 4
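One way to read those coefficients is to go back to the probability scale: for instance (a quick sketch, reusing the two models above), the predicted proxy rates at two values of the homeowner rate, for the two rounds, are

> predict(regt1l,newdata=data.frame(taux=c(.2,.6)),type="response")
> predict(regt2l,newdata=data.frame(taux=c(.2,.6)),type="response")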

So we did consider a standard linear regression model for the proxy rate, per station,

$$\frac{Y_i}{E_i}=\beta_0+\beta_1 x_i+\varepsilon_i$$

(again, either a model with splines, or a standard linear model). The code is

> regt1=lm(PROCURATIONS/INSCRITS~bs(taux,6),weights=INSCRITS,data=baset1)
> regt2=lm(PROCURATIONS/INSCRITS~bs(taux,6),weights=INSCRITS,data=baset2)
> u=seq(min(baset1$taux)+.015,max(baset1$taux)-.015,by=.001)
> ND=data.frame(taux=u)
> ug=seq(0,max(baset1$taux)+.05,by=.001)
> pt1=predict(regt1,newdata=ND,se=TRUE,type="response")
> pt2=predict(regt2,newdata=ND,se=TRUE,type="response")
> library(RColorBrewer)
> CL=brewer.pal(6, "RdBu")
> plot(ug,ug*1,col="white",xlab=variable,ylab="Taux de procuration",
+ ylim=c(0,.1))
> for(i in 1:(length(b$breaks)-1)){
+ polygon(b$breaks[i+c(0,0,1,1)],c(0,b$counts[i],b$counts[i],0)
+ /max(b$counts)*.05,col="light yellow",border=NA)}
> polygon(c(u,rev(u)),c(pt1$fit+2*pt1$se.fit,rev(pt1$fit-2*pt1$se.fit)),
+ border=NA,density=30,col=CL[4])
> lines(u,pt1$fit,col=CL[6],lwd=2)
> polygon(c(u,rev(u)),c(pt2$fit+2*pt2$se.fit,rev(pt2$fit-2*pt2$se.fit)),
+ border=NA,density=30,col=CL[3])
> lines(u,pt2$fit,col=CL[1],lwd=2)
> regt1l=lm(PROCURATIONS/INSCRITS~taux,weights=INSCRITS,data=baset1)
> regt2l=lm(PROCURATIONS/INSCRITS~taux,weights=INSCRITS,data=baset2)
> ND=data.frame(taux=ug)
> pt1l=predict(regt1l,newdata=ND,se=TRUE,type="response")
> pt2l=predict(regt2l,newdata=ND,se=TRUE,type="response")
> lines(ug,pt1l$fit,col=CL[5],lty=2)
> lines(ug,pt2l$fit,col=CL[2],lty=2)
> legend(0,.1,c("Second Tour","Premier Tour"),col=CL[c(1,6)],
+ lwd=2,lty=1,border=NA)

Here, again, is the evolution as a function of the homeownership rate,

The graph is rather close to the one before, and here, the interpretation of the summary table is more conventional,

> summary(regt1l)

Call:
lm(formula = PROCURATIONS/INSCRITS ~ taux, data = baset1, weights = INSCRITS)

Weighted Residuals:
    Min      1Q  Median      3Q     Max 
-1.9994 -0.2926  0.0011  0.3173  3.2072 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 0.021268   0.001739   12.23   <2e-16 ***
taux        0.054371   0.004812   11.30   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.646 on 835 degrees of freedom
Multiple R-squared:  0.1326,	Adjusted R-squared:  0.1316 
F-statistic: 127.7 on 1 and 835 DF,  p-value: < 2.2e-16

> summary(regt2l)

Call:
lm(formula = PROCURATIONS/INSCRITS ~ taux, data = baset2, weights = INSCRITS)

Weighted Residuals:
    Min      1Q  Median      3Q     Max 
-2.9029 -0.4148 -0.0338  0.4029  3.4907 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 0.033909   0.001866   18.17   <2e-16 ***
taux        0.079749   0.005165   15.44   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.6934 on 835 degrees of freedom
Multiple R-squared:  0.2221,	Adjusted R-squared:  0.2212 
F-statistic: 238.4 on 1 and 835 DF,  p-value: < 2.2e-16

We have used those codes to produce the graphs mentioned in the post. But before mentioning the residuals of the multiple model we considered, I wanted to share some awesome code that produces maps (I can say that this code is awesome since Baptiste wrote most of it).

  • Visualization of Residuals on a Map of Paris

To plot the neighborhoods of the polling stations, once again, the post on Baptiste’s blog explains how the shapefile was obtained from cartelec.net

> library(maptools)
> library(rgdal)
> library(classInt)
> paris=readShapeSpatial("paris-cartelec.shp")

To visualize the proxy rate (the average of round one and round two), here is the code

> elec=data.frame()
> elec=cbind(bt1$BV,(bt1$PROCURATIONS+bt2$PROCURATIONS),(bt1$EXPRIMES+bt2$EXPRIMES))
> colnames(elec)=c("BV","PROCURATIONS","EXPRIMES")
> elec=as.data.frame(elec)
> elec$BV=bt1$BV

To get nice colors, as a function of the rates, we use

> m=match(paris$BUREAU,elec$BV)
> plotvar=100*elec$PROCURATIONS/elec$EXPRIMES
> nclr=7
> plotclr=brewer.pal(nclr,"RdYlBu")[nclr:1] 
> class=classIntervals(plotvar[m], nclr, style="fisher",dataPrecision=1)
> colcode=findColours(class, plotclr)

and finally

> par(mar=c(1,1,1,1))
> plot(paris,col=colcode,border=colcode)
> legend(656274.9, 6867308,legend=names(attr(colcode,"table")), 
+ fill=attr(colcode, "palette"), cex=1, bty="n",
+ title="Frequence procurations (%)")

If we consider a model with three explanatory variables, to explain the proxy rate,

> regt1=lm(PROCURATIONS/INSCRITS~I(POP65P/POP)+
+ I(PROPRIO/LOGEMENT)+I(CS3/POP1564),weights=INSCRITS,data=baset1)

we can plot the residuals using

> m=match(paris$BUREAU,elec$BV)
> plotvar=100*residuals(regt1)
> nclr=7
> plotclr=brewer.pal(nclr,"RdYlBu")[nclr:1] 
> class=classIntervals(plotvar[m], nclr, style="fisher",dataPrecision=1)
> colcode=findColours(class, plotclr)
> par(mar=c(1,1,1,1))
> plot(paris,col=colcode,border=colcode)
> legend(656274.9, 6867308,legend=names(attr(colcode,"table")), 
+ fill=attr(colcode, "palette"), cex=1, bty="n",title="Residus")

It might not be purely random spatial noise… but we could not do better with our small set of covariates.

Precision, with Imprecise Words

This morning, after my course on extreme values, some students showed me a question from a practical they were supposed to work on with undergraduate students:

To be more specific, they wanted some feedback about point B. Now, let’s make it clear: I have no idea what “precision” and “variation” could mean… But let’s try and see if we can get something useful, that might help to understand the question. In order to illustrate, consider the following regression model,

> plot(cars,pch=19,col="black",cex=.8)
> abline(lm(dist~speed,data=cars),lty=2)

Here is the summary table of the linear regression model

> summary(lm(dist~speed,data=cars))

            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -17.5791     6.7584  -2.601   0.0123 *  
speed         3.9324     0.4155   9.464 1.49e-12 ***

My first idea was that the “variation of the X’s” should be related to the “variance” of the explanatory variable. But that cannot be quite right. For instance, if we transform the explanatory variable, say with a multiplicative factor of 100, then the variance of X will be 10,000 times larger. And the regression will be the same

> cars100=cars
> cars100$speed=100*cars$speed
> plot(cars100,pch=19,col="black",cex=.8)
> abline(lm(dist~speed,data=cars100),lty=2)

in the sense that

> summary(lm(dist~speed,data=cars100))

             Estimate Std. Error t value Pr(>|t|)    
(Intercept) -17.57909    6.75844  -2.601   0.0123 *  
speed         0.39324    0.04155   9.464 1.49e-12 ***

And similarly if we divide by 100. So, I guess a simple affine transformation of the explanatory variable is clearly not the way to get a variable with more “variability”. Let us try something else, and keep in mind the following quantities,

> var(cars$speed)
[1] 27.95918
> sd(cars$speed)/mean(cars$speed)
[1] 0.3433535

with the variance, and the coefficient of variation. Consider the following modified dataset,

> carsg=cars
> carsg$speed[12]=8
> carsg$speed[23]=25
> carsg$speed[34]=24
> carsg$speed[39]=12

Four values were changed here. Observe that, somehow, there is more variability

> var(carsg$speed)
[1] 31.84694
> sd(carsg$speed)/mean(carsg$speed)
[1] 0.3640845

But if we consider the output of the regression model, we get

> summary(lm(dist~speed,data=carsg))

            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -18.5681     5.3621  -3.463  0.00113 ** 
speed         3.9708     0.3254  12.201 2.55e-16 ***

It looks like we get more precision on the slope here, with a smaller variance, and a larger Student t-value. But what if we consider the following transformation,

> carsg=cars
> carsg$speed[11]=5
> carsg$speed[21]=25
> carsg$speed[31]=25
> carsg$speed[50]=7

Again, we have more variability here, on the explanatory variable,

> var(carsg$speed)
[1] 32.9898
> sd(carsg$speed)/mean(carsg$speed)
[1] 0.3754036

But this time,

> summary(lm(dist~speed,data=carsg))

            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  -1.5078     8.0498  -0.187    0.852    
speed         2.9077     0.4932   5.896 3.61e-07 ***

the estimator of the slope has a larger variance, and we have a smaller Student t-value. So here, if we increase the “variability” of X, we can get… almost anything. The intuition behind those two transformations is relatively simple. In the first case, I took observations that were far away from the regression line – but in the center of the distribution of X – and I moved them closer to the regression line, and towards the border of the sample (to increase the variance)

(I would not call them outliers, since outliers are usually defined as observations far away from the model, in the Y direction, not in X). In the second case, I did exactly the opposite.
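To make the link between the “variability” of X and the “precision” of the slope explicit, recall that in the simple linear model the variance of the estimated slope is $\sigma^2/\sum_i(x_i-\bar{x})^2$; a quick numerical check on the original dataset (which should match the 0.4155 standard error reported above),

> reg=lm(dist~speed,data=cars)
> sigma2=sum(residuals(reg)^2)/reg$df.residual
> sqrt(sigma2/sum((cars$speed-mean(cars$speed))^2))

So increasing the spread of the $x_i$’s increases the precision of the slope only if the residual variance does not increase at the same time, which is exactly what distinguishes the two modified datasets above.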

I am not sure I understood this sentence correctly. But it looks like it is incorrect. Since there is only one false statement here, I will go for this one. What do you think?

SOA Webinar on Predictive Modeling

I will give, with Qichun Xu, a joint webinar for the Reinsurance Council and the Futurism Council of the Society of Actuaries, on Perspectives of Predictive Modeling with Case Studies, in a few days. The slides of my talk are now available (I do recommend opening the pdf version of the slides with Acrobat, since there are animated pictures in the slides that cannot be visualized below, for instance). The Society of Actuaries asked specifically for a powerpoint document, so I will use screenshots of the slides for the webinar. I do encourage you to open and read the pdf file for better quality… Sorry for the inconvenience. I will soon upload the lines of code used to reproduce most of the graphs. All comments and remarks are welcome.

Non-observable vs. observable heterogeneity factor

This morning, in the ACT2040 class (on non-life insurance), we discussed the difference between observable and non-observable heterogeneity in ratemaking (from an economic perspective). To illustrate that point (we will spend more time, later on, discussing observable and non-observable risk factors), we looked at the following simple example. Let $X$ denote the height of a person. Consider the following dataset

> Davis=read.table(
+ "http://socserv.socsci.mcmaster.ca/jfox/Books/Applied-Regression-2E/datasets/Davis.txt")

There is a small typo in the dataset, so let us fix it manually here

> Davis[12,c(2,3)]=Davis[12,c(3,2)] 

Here, the variable of interest is the height of a given person,

> X=Davis$height 

If we look at the histogram, we have

> hist(X,col="light green", border="white",proba=TRUE,xlab="",main="")

Can we assume that we have a Gaussian distribution?

Maybe not… Here, if we fit a Gaussian distribution, plot it, and add a kernel based estimator, we get

> library(MASS)
> (param <- fitdistr(X,"normal")$estimate) 
> f1 <- function(x) dnorm(x,param[1],param[2]) 
> x=seq(100,210,by=.2) 
> lines(x,f1(x),lty=2,col="red") 
> lines(density(X))

 

If you look at that black line, you might think of a mixture, i.e. something like

$$f(x)=p\,\varphi(x;\mu_1,\sigma_1)+(1-p)\,\varphi(x;\mu_2,\sigma_2)$$

(using standard mixture notations, where $\varphi(\cdot;\mu,\sigma)$ denotes the Gaussian density). Mixtures are obtained when we have a non-observable heterogeneity factor: with probability $p$, we have a random variable $X_1$ (call it type [1]), and with probability $1-p$, a random variable $X_2$ (call it type [2]). So far, nothing new. And we can fit such a mixture distribution, using e.g.


> library(mixtools) 
> mix <- normalmixEM(X)
 number of iterations= 335 
> (param12 <- c(mix$lambda[1],mix$mu,mix$sigma)) 
[1] 0.4002202 178.4997298 165.2703616 6.3561363 5.9460023  

If we plot that mixture of two Gaussian distributions, we get

> f2 <- function(x){ param12[1]*dnorm(x,param12[2],param12[4])+
+ (1-param12[1])*dnorm(x,param12[3],param12[5]) }
> lines(x,f2(x),lwd=2, col="red")
> lines(density(X))

Not bad. Actually, we can try to maximize the likelihood with our own codes,

> logdf <- function(x,parameter){
+ p <- parameter[1]
+ m1 <- parameter[2]
+ s1 <- parameter[4]
+ m2 <- parameter[3]
+ s2 <- parameter[5]
+ return(log(p*dnorm(x,m1,s1)+(1-p)*dnorm(x,m2,s2)))
+ }
> logL <- function(parameter) -sum(logdf(X,parameter))
> Amat <- matrix(c(1,-1,0,0,0,0,
+ 0,0,0,0,1,0,0,0,0,0,0,0,0,1), 4, 5)
> bvec <- c(0,-1,0,0)
> constrOptim(c(.5,160,180,10,10), logL, NULL, ui = Amat, ci = bvec)$par

[1]   0.5996263 165.2690084 178.4991624   5.9447675   6.3564746

Here, we include some constraints, to ensure that the probability belongs to the unit interval, and that the variance parameters remain positive. Note that we get something close to the previous output.

Let us try something a little bit more complex now. What if we assume that the underlying distributions have the same variance, namely

$$f(x)=p\,\varphi(x;\mu_1,\sigma)+(1-p)\,\varphi(x;\mu_2,\sigma)$$

In that case, we have to use the previous code, and make small changes,

> logdf <- function(x,parameter){
+ p <- parameter[1]
+ m1 <- parameter[2]
+ s1 <- parameter[4]
+ m2 <- parameter[3]
+ s2 <- parameter[4]
+ return(log(p*dnorm(x,m1,s1)+(1-p)*dnorm(x,m2,s2)))
+ }
> logL <- function(parameter) -sum(logdf(X,parameter))
> Amat <- matrix(c(1,-1,0,0,0,0,0,0,0,0,0,1), 3, 4)
> bvec <- c(0,-1,0)
> (param12c= constrOptim(c(.5,160,180,10), logL, NULL, ui = Amat, ci = bvec)$par)

[1]   0.6319105 165.6142824 179.0623954   6.1072614

This is what we can do if we cannot observe the heterogeneity factor. But wait… we actually have some information in the dataset. For instance, we have the sex of the person. Now, if we look at the histograms of the height per sex, and kernel based density estimators of the height per sex, we have

So, it looks like the height for males and the height for females are different. Maybe we can use that variable, which was actually observed, to explain the heterogeneity in our sample. Formally, here, the idea is to consider a mixture with an observable heterogeneity factor: the sex,

$$f(x)=p_M\,\varphi(x;\mu_M,\sigma_M)+(1-p_M)\,\varphi(x;\mu_F,\sigma_F)$$

We now have an interpretation of what we used to call classes [1] and [2] previously: males and females. And here, estimating the parameters is quite simple,

> sex=Davis$sex
>  (pM <- mean(sex=="M"))
[1] 0.44
>  (paramF <- fitdistr(X[sex=="F"],"normal")$estimate)
      mean         sd 
164.714286   5.633808 
>  (paramM <- fitdistr(X[sex=="M"],"normal")$estimate)
      mean         sd 
178.011364   6.404001

And if we plot the density, we have

> f4 <- function(x) pM*dnorm(x,paramM[1],paramM[2])+(1-pM)*dnorm(x,paramF[1],paramF[2])
> lines(x,f4(x),lwd=3,col="blue")

What if, once again, we assume identical variances? Namely, the model becomes

$$f(x)=p_M\,\varphi(x;\mu_M,\sigma)+(1-p_M)\,\varphi(x;\mu_F,\sigma)$$

Then a natural idea to derive an estimator of the common variance, based on the previous computations, is to use the pooled estimator

$$\widehat{\sigma}^2=\frac{\sum_{i:\,\text{sex}_i=M}(x_i-\widehat{\mu}_M)^2+\sum_{i:\,\text{sex}_i=F}(x_i-\widehat{\mu}_F)^2}{n-2}$$

The code is here

> s=sqrt((sum((X[sex=="M"]-paramM[1])^2)+sum((X[sex=="F"]-paramF[1])^2))/(nrow(Davis)-2))
> s
[1] 6.015068

and again, it is possible to plot the associated density,

> f5 <- function(x) pM*dnorm(x,paramM[1],s)+(1-pM)*dnorm(x,paramF[1],s)
> lines(x,f5(x),lwd=3,col="blue")

Now, if we think a little about what we’ve just done, it is simply a linear regression on a factor, the sex of the person,

$$\text{height}_i=\beta_0+\beta_1\,\boldsymbol{1}(\text{sex}_i=M)+\varepsilon_i$$

where $\varepsilon_i\sim\mathcal{N}(0,\sigma^2)$. And indeed, if we run the code to estimate this linear model,

> summary(lm(height~sex,data=Davis))

Call:
lm(formula = height ~ sex, data = Davis)

Residuals:
     Min       1Q   Median       3Q      Max 
-16.7143  -3.7143  -0.0114   4.2857  18.9886 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 164.7143     0.5684  289.80   <2e-16 ***
sexM         13.2971     0.8569   15.52   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 6.015 on 198 degrees of freedom
Multiple R-squared:  0.5488,	Adjusted R-squared:  0.5465 
F-statistic: 240.8 on 1 and 198 DF,  p-value: < 2.2e-16

we get the same estimators for the means and the variance as the ones obtained previously. So, as mentioned this morning in class, if you have a non-observable heterogeneity factor, you can use a mixture model to fit a distribution, but if you can get a proxy of that factor that is observable, then you can run a regression. But most of the time, that observable variable is just a proxy of a non-observable one…
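To see how close the non-observable (mixture) estimates are to the observable (sex-based) ones, we can put them side by side – a quick check, reusing the objects defined above (in param12c, the weight is the one of the component with the smaller mean, to be compared with the proportion of women),

> rbind(mixture=param12c,sex=as.numeric(c(1-pM,paramF[1],paramM[1],s)))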

Linear regression from a contingency table

This morning, Benoit sent me an email about an exercise he found in an econometrics textbook, about linear regression. Consider the following dataset,

Here, variable X denotes the income, and Y the expenses. The goal was to fit a linear regression (actually, in the email, it was mentioned that we should try to fit a heteroscedastic model, but let us skip that part). So Benoit’s question was, more or less: how do you fit a linear regression from a contingency table?

Usually, when I get an email on a Saturday morning, I try to postpone. But the kids had their circus class, so I had some time to answer. And this did not look like a complex puzzle… Let us import this dataset into R, so that we can start playing

> df=read.table("http://freakonometrics.free.fr/baseexo.csv",sep=";",header=TRUE)
> M=as.matrix(df[,2:ncol(df)])
> M[is.na(M)]<-0
> M
      X14 X19 X21 X23 X25 X27 X29 X31 X33 X35
 [1,]  74  13   7   1   0   0   0   0   0   0
 [2,]   6   4   2   7   4   0   0   0   0   0
 [3,]   2   3   2   2   4   0   0   0   0   0
 [4,]   1   1   2   3   3   2   0   0   0   0
 [5,]   2   0   1   3   2   0   6   0   0   0
 [6,]   2   0   2   1   0   0   1   2   1   0
 [7,]   0   0   0   2   0   0   1   1   3   0
 [8,]   0   1   0   1   0   0   0   0   2   0
 [9,]   0   0   0   0   1   1   0   1   0   1

The first idea I had was to use those counts as weights. Weighted least squares should be perfect. The dataset is built from this matrix,

> W=as.vector(M)
> x=df[,1]
> X=rep(x,ncol(M))
> y=as.numeric(substr(names(df)[-1],2,3))
> Y=rep(y,each=nrow(M))
> base1=data.frame(X1=X,Y1=Y,W1=W)

Here we have

> head(base1,10)
   X1 Y1 W1
1  16 14 74
2  23 14  6
3  25 14  2
4  27 14  1
5  29 14  2
6  31 14  2
7  33 14  0
8  35 14  0
9  37 14  0
10 16 19 13

The regression is the following,

> reg1=lm(Y1~X1,data=base1,weights=W1)
> summary(reg1)

Call:
lm(formula = Y1 ~ X1, data = base1, weights = W1)

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  4.35569    2.03022   2.145    0.038 *  
X1           0.68263    0.09016   7.572 3.04e-09 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 7.892 on 40 degrees of freedom
Multiple R-squared:  0.589,	Adjusted R-squared:  0.5787 
F-statistic: 57.33 on 1 and 40 DF,  p-value: 3.038e-09

It looks like the output is the same as what Benoit found, so we should be happy. Now, I had a second thought: why not create the implied dataset? Using replicates, we should be able to recreate the dataset that was used to produce this contingency table,

> vX=vY=rep(NA,sum(W))
> sumW=c(0,cumsum(W))
> for(i in 1:length(W)){
+ if(W[i]>0){
+ vX[(1+sumW[i]):sumW[i+1]]=X[i]
+ vY[(1+sumW[i]):sumW[i+1]]=Y[i]
+ }}
> base2=data.frame(X2=vX,Y2=vY)

Here, the dataset is much larger, and there are no weights,

> tail(base2,10)
    X2 Y2
172 31 31
173 33 31
174 37 31
175 31 33
176 33 33
177 33 33
178 33 33
179 35 33
180 35 33
181 37 35

If we run a linear regression on this dataset, we obtain

> reg2=lm(Y2~X2,data=base2)
> summary(reg2)

Call:
lm(formula = Y2 ~ X2, data = base2)

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  4.35569    0.95972   4.538 1.04e-05 ***
X2           0.68263    0.04262  16.017  < 2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 3.731 on 179 degrees of freedom
Multiple R-squared:  0.589,	Adjusted R-squared:  0.5867 
F-statistic: 256.5 on 1 and 179 DF,  p-value: < 2.2e-16

If we compare the two regressions, we have

> rbind(coefficients(summary(reg1)),
+ coefficients(summary(reg2)))
             Estimate Std. Error   t value     Pr(>|t|)
(Intercept) 4.3556857 2.03021637  2.145429 3.804237e-02
X1          0.6826296 0.09015771  7.571506 3.038443e-09

(Intercept) 4.3556857 0.95972279  4.538483 1.036711e-05
X2          0.6826296 0.04261930 16.016913 2.115373e-36

The estimators are exactly the same (which does not surprise me), but the standard deviations (and significance levels) are quite different. And to be honest, I find that surprising. Which approach is the most legitimate here (since they are, after all, not equivalent)?
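For what it is worth, here is one way to see where the difference comes from (a quick check, not a full answer): the two fits share the same coefficients and the same residual sum of squares, but the weighted fit uses 42 − 2 = 40 residual degrees of freedom (one per non-empty cell) while the expanded one uses 181 − 2 = 179, so the standard errors should differ by a factor of about $\sqrt{179/40}\approx 2.115$,

> sqrt(df.residual(reg2)/df.residual(reg1))
> coefficients(summary(reg1))[,"Std. Error"]/
+ coefficients(summary(reg2))[,"Std. Error"]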

Modeling individual losses with mixtures

Usually, the sentence that I keep saying in my regression classes is “please, look at your data“. In our previous post, we’ve been playing like most econometricians: we did not look at the data. Actually, if we look at the distribution of individual losses, in the dataset, we see the following,

> n=nrow(couts)
> plot(sort(couts$cout),(1:n)/(n+1),xlim=c(0,10000),type="s",lwd=2,col="green")

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-16.10.26.png

It looks like there are fixed-cost claims in our database. How do we deal with that in the standard case (e.g. in the Loss Models textbook)? We can use a mixture of – at least – three distributions here,

$$f(y)=p_1\,f_1(y)+p_2\,\delta_{\kappa}(y)+p_3\,f_3(y)$$

with

  • a distribution for small claims, $f_1(\cdot)$, e.g. an exponential distribution
  • a Dirac mass at $\kappa$, i.e. $\delta_{\kappa}(\cdot)$
  • a distribution for larger claims, $f_3(\cdot)$, e.g. a Gamma or a lognormal distribution

>  I1=which(couts$cout<1120)
>  I2=which((couts$cout>=1120)&(couts$cout<1220))
>  I3=which(couts$cout>=1220)
>  (p1=length(I1)/nrow(couts))
[1] 0.3284823
>  (p2=length(I2)/nrow(couts))
[1] 0.4152807
>  (p3=length(I3)/nrow(couts))
[1] 0.256237
>  X=couts$cout
>  (kappa=mean(X[I2]))
[1] 1171.998
>  X0=X[I3]-kappa
>  u=seq(0,10000,by=20)
>  F1=pexp(u,1/mean(X[I1]))
>  F2= (u>kappa)
>  F3=plnorm(u-kappa,mean(log(X0)),sd(log(X0))) * (u>kappa)
>  F=F1*p1+F2*p2+F3*p3
>  lines(u,F)

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-16.13.43.png

In our previous post, we’ve discussed the idea that all parameters might be related to some covariates, i.e.

$$f(y\vert\boldsymbol{X})=p_1(\boldsymbol{X})\,f_1(y\vert\boldsymbol{X})+p_2(\boldsymbol{X})\,\delta_{\kappa}(y)+p_3(\boldsymbol{X})\,f_3(y\vert\boldsymbol{X})$$

which yields the following premium model,

$$\mathbb{E}(Y\vert\boldsymbol{X})=\underbrace{\mathbb{E}(Y\vert\boldsymbol{X},Y\leq s_1)}_{A}\cdot\underbrace{\mathbb{P}(Y\leq s_1\vert\boldsymbol{X})}_{D}+\underbrace{\mathbb{E}(Y\vert Y\in(s_1,s_2],\boldsymbol{X})}_{B}\cdot\underbrace{\mathbb{P}(Y\in(s_1,s_2]\vert\boldsymbol{X})}_{D}+\underbrace{\mathbb{E}(Y\vert Y>s_2,\boldsymbol{X})}_{C}\cdot\underbrace{\mathbb{P}(Y>s_2\vert\boldsymbol{X})}_{D}$$

For the $A$, $B$ and $C$ terms, that’s easy, we can use the standard models we’ve seen in the course. For the probabilities, we should use a multinomial model. Recall that for the logistic regression model, if $(\pi,1-\pi)=(\pi_1,\pi_2)$, then

$$\log\frac{\pi}{1-\pi}=\log\frac{\pi_1}{\pi_2}=\boldsymbol{X}'\boldsymbol{\beta}$$

i.e.

$$\pi_1=\frac{\exp(\boldsymbol{X}'\boldsymbol{\beta})}{1+\exp(\boldsymbol{X}'\boldsymbol{\beta})}$$

and

$$\pi_2=\frac{1}{1+\exp(\boldsymbol{X}'\boldsymbol{\beta})}$$

To derive a multivariate extension, write

$$\pi_1=\frac{\exp(\boldsymbol{X}'\boldsymbol{\beta}_1)}{1+\exp(\boldsymbol{X}'\boldsymbol{\beta}_1)+\exp(\boldsymbol{X}'\boldsymbol{\beta}_2)}$$

$$\pi_2=\frac{\exp(\boldsymbol{X}'\boldsymbol{\beta}_2)}{1+\exp(\boldsymbol{X}'\boldsymbol{\beta}_1)+\exp(\boldsymbol{X}'\boldsymbol{\beta}_2)}$$

and

$$\pi_3=\frac{1}{1+\exp(\boldsymbol{X}'\boldsymbol{\beta}_1)+\exp(\boldsymbol{X}'\boldsymbol{\beta}_2)}$$

Again, maximum likelihood techniques can be used, since

$$\mathcal{L}(\boldsymbol{\pi},\boldsymbol{y})\propto\prod_{i=1}^n\prod_{j=1}^3\pi_{i,j}^{Y_{i,j}}$$

where here, variable $Y_{i}$ – which takes three levels – is split into three indicators (like any categorical explanatory variable in a standard regression model). Thus,

$$\log\mathcal{L}(\boldsymbol{\beta},\boldsymbol{y})\propto\sum_{i=1}^n\left(\sum_{j=1}^2 Y_{i,j}\,\boldsymbol{X}_i'\boldsymbol{\beta}_j-\log\left[1+\exp(\boldsymbol{X}_i'\boldsymbol{\beta}_1)+\exp(\boldsymbol{X}_i'\boldsymbol{\beta}_2)\right]\right)$$

and, as for the logistic regression, we can use the Newton–Raphson algorithm to compute the maximum likelihood estimates numerically. In R, first we have to define the levels, e.g.

> seuils=c(0,1120,1220,1e+12)
> couts$tranches=cut(couts$cout,breaks=seuils,
+ labels=c("small","fixed","large"))
> head(couts,5)
  nocontrat    no garantie    cout exposition zone puissance agevehicule
1      1870 17219      1RC 1692.29       0.11    C         5           0
2      1963 16336      1RC  422.05       0.10    E         9           0
3      4263 17089      1RC  549.21       0.65    C        10           7
4      5181 17801      1RC  191.15       0.57    D         5           2
5      6375 17485      1RC 2031.77       0.47    B         7           4
  ageconducteur bonus marque carburant densite region tranches
1            52    50     12         E      73     13    large
2            78    50     12         E      72     13    small
3            27    76     12         D      52      5    small
4            26   100     12         D      83      0    small
5            46    50      6         E      11     13    large

Then, we can run a multinomial regression, from

> library(nnet)

using some selected covariates

> reg=multinom(tranches~ageconducteur+agevehicule+zone+carburant,data=couts)
# weights:  30 (18 variable)
initial  value 2113.730043 
iter  10 value 2063.326526
iter  20 value 2059.206691
final  value 2059.134802 
converged

The output is here

> summary(reg)
Call:
multinom(formula = tranches ~ ageconducteur + agevehicule + zone + 
    carburant, data = couts)

Coefficients:
      (Intercept) ageconducteur agevehicule      zoneB      zoneC
fixed  -0.2779176   0.012071029  0.01768260 0.05567183 -0.2126045
large  -0.7029836   0.008581459 -0.01426202 0.07608382  0.1007513
           zoneD      zoneE      zoneF   carburantE
fixed -0.1548064 -0.2000597 -0.8441011 -0.009224715
large  0.3434686  0.1803350 -0.1969320  0.039414682

Std. Errors:
      (Intercept) ageconducteur agevehicule     zoneB     zoneC     zoneD
fixed   0.2371936   0.003738456  0.01013892 0.2259144 0.1776762 0.1838344
large   0.2753840   0.004203217  0.01189342 0.2746457 0.2122819 0.2151504
          zoneE     zoneF carburantE
fixed 0.1830139 0.3377169  0.1106009
large 0.2160268 0.3624900  0.1243560

To visualize the impact of a single covariate, one can also use spline functions

> library(splines)
> reg=multinom(tranches~agevehicule,data=couts)
# weights:  9 (4 variable)
initial  value 2113.730043 
final  value 2072.462863 
converged
> reg=multinom(tranches~bs(agevehicule),data=couts)
# weights:  15 (8 variable)
initial  value 2113.730043 
iter  10 value 2070.496939
iter  20 value 2069.787720
iter  30 value 2069.659958
final  value 2069.479535 
converged

For instance, if the covariate is the age of the car, we do have the following probabilities

> predict(reg,newdata=data.frame(agevehicule=5),type="probs")
    small     fixed     large 
0.3388947 0.3869228 0.2741825

and for all ages from 0 to 20,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-16.02.55.png
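A quick sketch that could reproduce this kind of figure (the colours of the original graph are not known, so they are arbitrary here),

> agev=0:20
> proba=predict(reg,newdata=data.frame(agevehicule=agev),type="probs")
> matplot(agev,proba,type="l",lty=1,lwd=2,xlab="age of the car",ylab="probability")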

For instance, for new cars, the proportion of fixed costs is rather small (here in purple), and it keeps increasing with the age of the car. If the covariate is the population density of the area where the driver lives, we obtain the following probabilities

> reg=multinom(tranches~bs(densite),data=couts)
# weights:  15 (8 variable)
initial  value 2113.730043 
iter  10 value 2068.469825
final  value 2068.466349 
converged
> predict(reg,newdata=data.frame(densite=90),type="probs")
    small     fixed     large 
0.3484422 0.3473315 0.3042263

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-16.05.29.png

Based on those probabilities, it is then possible to derive the expected cost of a claim, given some covariates (e.g. the density). But first, let us define subsets of the whole dataset

> sousbaseA=couts[couts$tranches=="small",]
> sousbaseB=couts[couts$tranches=="fixed",]
> sousbaseC=couts[couts$tranches=="large",]

with a threshold given by

> (k=mean(sousbaseB$cout))
[1] 1171.998

Then, let us run our four models,

> reg=multinom(tranches~bs(densite),data=couts)
> regA=glm(cout~bs(densite),data=sousbaseA,family=Gamma(link="log"))
> regB=glm(cout~1,data=sousbaseB,family=Gamma(link="log"))
> regC=glm((cout-k)~bs(densite),data=sousbaseC,family=Gamma(link="log"))

We can now compute predictions based on those models,

> nouveau=data.frame(densite=seq(10,100))
> proba=predict(reg,newdata=nouveau,type="probs")
> predA=predict(regA,newdata=nouveau,type="response")
> predB=predict(regB,newdata=nouveau,type="response")
> predC=predict(regC,newdata=nouveau,type="response")+k
> pred=cbind(predA,predB,predC)

To visualize the impact of each component on the premium, we can compute the probabilities, as well as the expected costs (conditional on the cost being in each subset),

> cbind(proba,pred)[seq(10,90,by=10),]
       small     fixed     large    predA    predB    predC
10 0.3344014 0.4241790 0.2414196 423.3746 1171.998 7135.904
20 0.3181240 0.4471869 0.2346892 428.2537 1171.998 6451.890
30 0.3076710 0.4626572 0.2296718 438.5509 1171.998 5499.030
40 0.3032872 0.4683247 0.2283881 451.4457 1171.998 4615.051
50 0.3052378 0.4620219 0.2327404 463.8545 1171.998 3961.994
60 0.3136136 0.4417057 0.2446807 472.3596 1171.998 3586.833
70 0.3279413 0.4056971 0.2663616 473.3719 1171.998 3513.601
80 0.3464842 0.3534126 0.3001032 463.5483 1171.998 3840.078
90 0.3652932 0.2868006 0.3479061 440.4925 1171.998 4912.379

Now, it is possible to plot those figures in a graph,

> barplot(t(proba*pred))
> abline(h=mean(couts$cout),lty=2)

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-15-a%CC%80-11.50.47.png

(the dotted horizontal line is the average cost of a claim, in our dataset).
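The premium itself, $\mathbb{E}(Y\vert\boldsymbol{X})$, is then the row-wise sum of those products, probability times conditional expected cost; a quick sketch, reusing the objects above,

> prime=rowSums(proba*pred)
> plot(nouveau$densite,prime,type="l",xlab="densite",ylab="expected cost of a claim")
> abline(h=mean(couts$cout),lty=2)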

Regression on categorical variables

This morning, Stéphane asked me a tricky question about extracting coefficients from a regression with categorical explanatory variables. More precisely, he asked me if it was possible to store the coefficients in a nice table, with information on the variable and the modality (those two pieces of information being in two different columns). Here is some code I wrote to produce the table he was looking for, but I guess that some (much) smarter techniques can be used (comments – see below – are open). Consider the following dataset

> base
   x sex   hair
1  1   H  Black
2  4   F  Brown
3  6   F  Black
4  6   H  Black
5 10   H  Brown
6  5   H Blonde

with two factors,

> levels(base$hair)
[1] "Black"  "Blonde" "Brown" 
> levels(base$sex)
[1] "F" "H"

Let us run a (standard linear) regression,

> reg=lm(x~hair+sex,data=base)

which is here

> summary(reg)

Call:
lm(formula = x ~ hair + sex, data = base)

Residuals:
         1          2          3          4          5          6 
-3.714e+00 -2.429e+00  2.429e+00  1.286e+00  2.429e+00 -2.220e-16 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   3.5714     3.4405   1.038    0.408
hairBlonde    0.2857     4.8655   0.059    0.959
hairBrown     2.8571     3.7688   0.758    0.528
sexH          1.1429     3.7688   0.303    0.790

Residual standard error: 4.071 on 2 degrees of freedom
Multiple R-squared: 0.2352,	Adjusted R-squared: -0.9121 
F-statistic: 0.205 on 3 and 2 DF,  p-value: 0.886

If we want to extract the names of the factors (assuming here that there are no numbers in the name of the factor), and the values of the associated modality, one can use

> VARIABLE=c("",gsub("[-^0-9]", "", names(unlist(reg$xlevels))))
> MODALITY=c("",as.character(unlist(reg$xlevels)))
> names=data.frame(VARIABLE,MODALITY,NOMVAR=c(
+ "(Intercept)",paste(VARIABLE,MODALITY,sep="")[-1]))
> regression=data.frame(NOMVAR=names(coefficients(reg)),
+ COEF=as.numeric(coefficients(reg)))
> merge(names,regression,all.x=TRUE)
       NOMVAR VARIABLE MODALITY      COEF
1 (Intercept)                   3.5714286
2   hairBlack     hair    Black        NA
3  hairBlonde     hair   Blonde 0.2857143
4   hairBrown     hair    Brown 2.8571429
5        sexF      sex        F        NA
6        sexH      sex        H 1.1428571

or, if we want only the modalities, excluding the reference levels,

> merge(names,regression)
       NOMVAR VARIABLE MODALITY      COEF
1 (Intercept)                   3.5714286
2  hairBlonde     hair   Blonde 0.2857143
3   hairBrown     hair    Brown 2.8571429
4        sexH      sex        H 1.1428571

In order to reproduce the table Stéphane sent me, let us use the following code to produce an HTML table,

> library(xtable)
> htlmtable <- xtable(merge(names,regression))
> print(htlmtable,type="html")
       NOMVAR VARIABLE MODALITY COEF
1 (Intercept)                   3.57
2  hairBlonde     hair   Blonde 0.29
3   hairBrown     hair    Brown 2.86
4        sexH      sex        H 1.14

So yes, it is possible to build a table with the variables, modalities, and coefficients. This approach can be interesting in prospective mortality modeling, where we have a large number of modalities per factor (years, ages and years of birth). Consider the following datasets

> DEATH=read.table(
+ "http://freakonometrics.free.fr/DeathsSwitzerland.txt",
+ header=TRUE,skip=2)
> EXPOSURE=read.table(
+ "http://freakonometrics.free.fr/ExposuresSwitzerland.txt",
+ header=TRUE,skip=2)
> DEATH$Age=as.numeric(as.character(DEATH$Age))
> DEATH=DEATH[-which(is.na(DEATH$Age)),]
> EXPOSURE$Age=as.numeric(as.character(EXPOSURE$Age))
> EXPOSURE=EXPOSURE[-which(is.na(EXPOSURE$Age)),]
> base=data.frame(y=as.factor(DEATH$Year),a=as.factor(DEATH$Age),
+ c=as.factor(DEATH$Year-DEATH$Age),D=DEATH$Total,E= EXPOSURE$Total)
> base=base[base$E>0,]

and the following nonlinear model, based on the Lee-Carter model (including a cohort effect),

N_{x,t}\sim\mathcal{P}\left(E_{x,t}\cdot\exp[\alpha_x+\beta_x\kappa_t+\gamma_x\delta_{t-x}]\right)

can be estimated using

> library(gnm)
> reg=gnm(D~age+Mult(age,year)+Mult(age,cohort),offset=log(E),family=poisson,data=base)

In order to extract the 671 coefficients from the regression,

> length(coefficients(reg))
[1] 671

(as cleanly as possible), we have to be careful: the names of the coefficients are not that simple to handle. For instance, we can see things like

> coefficients(reg)[200]
Mult(., year).age98 
         0.04203519
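
Note that, since all the modalities here are numeric (ages, years, cohorts), one could also try to split the coefficient names directly with regular expressions; a quick sketch (the names variable and modality are arbitrary, and this is not used in what follows),

> nm=names(coefficients(reg))
> variable=gsub("[0-9]+$","",nm)   # name of the term, trailing digits removed
> modality=gsub("[^0-9]","",nm)    # the (numeric) modality itself
> head(data.frame(nm,variable,modality))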

In order to extract them, define

> na=length((reg$xlevels)$age)
> ny=length((reg$xlevels)$year)
> nc=length((reg$xlevels)$cohort)
> VARIABLElong=c("",rep("age",na),rep("Mult(., year).age",na),
+ rep("Mult(age, .).year",ny),
+ rep("Mult(., cohort).age",na),rep("Mult(age, .).cohort",nc))
> VARIABLEshort=c("",rep("age",na),rep("age",na),rep("year",ny),
+ rep("age",na),rep("cohort",nc))
> MODALITY=c("",(reg$xlevels)$age,(reg$xlevels)$age,
+ (reg$xlevels)$year,(reg$xlevels)$age,(reg$xlevels)$cohort)
> names=data.frame(VARIABLElong,VARIABLEshort,
+ MODALITY,NOMVAR=c("(Intercept)",paste(VARIABLElong,MODALITY,sep="")[-1]))
> regression=data.frame(NOMVAR=names(coefficients(reg)),
+ COEF=as.numeric(coefficients(reg)))

Here we go, now we have the coefficients from the regression in a nice table,

> outputreg=merge(names,regression)
> outputreg[1:10,]
        NOMVAR VARIABLElong VARIABLEshort MODALITY        COEF
1  (Intercept)                                     -8.22225458
2         age1          age           age        1 -0.87495451
3        age10          age           age       10 -1.67145704
4       age100          age           age      100  4.91041650
5        age11          age           age       11 -1.00186990
6        age12          age           age       12 -1.05953497
7        age13          age           age       13 -0.90952859
8        age14          age           age       14  0.02880668
9        age15          age           age       15  0.42830738
10       age16          age           age       16  1.35961403
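
Before plotting, a quick sanity check can be run, to make sure that no coefficient was lost in the merge (I do not reproduce the output here),

> nrow(outputreg)==length(coefficients(reg))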

It is now possible to plot all the coefficients, as functions of the age, the year of observation, or the year of birth. For instance, for the standard average age effect (namely \alpha_x, as a function of the age x), we can use

> typevariable=as.character(unique(outputreg$VARIABLElong))
> basegraph=outputreg[outputreg$VARIABLElong==typevariable[2],]
> x=as.numeric(as.character(basegraph$MODALITY))
> y=basegraph$COEF
> plot(x,y,type="p",col="blue",xlab="Age")

http://freakonometrics.hypotheses.org/files/2013/01/Capture-d%E2%80%99e%CC%81cran-2013-01-30-a%CC%80-15.59.12.png

while the cohort effect (\delta_{t-x}, as a function of the year of birth t-x) is obtained using

> basegraph=outputreg[outputreg$VARIABLElong==typevariable[5],]
> x=as.numeric(as.character(basegraph$MODALITY))
> y=basegraph$COEF
> plot(x,y,type="p",col="blue",xlab="Cohort (year of birth)",ylim=c(0,10))

http://freakonometrics.hypotheses.org/files/2013/01/Capture-d%E2%80%99e%CC%81cran-2013-01-30-a%CC%80-16.07.25.png



Regression tree using Gini’s index

In order to illustrate the construction of a regression tree (using the CART methodology), consider the following simulated dataset,

> set.seed(1)
> n=200
> X1=runif(n)
> X2=runif(n)
> P=.8*(X1<.3)*(X2<.5)+
+   .2*(X1<.3)*(X2>.5)+
+   .8*(X1>.3)*(X1<.85)*(X2<.3)+
+   .2*(X1>.3)*(X1<.85)*(X2>.3)+
+   .8*(X1>.85)*(X2<.7)+
+   .2*(X1>.85)*(X2>.7) 
> Y=rbinom(n,size=1,P)  
> B=data.frame(Y,X1,X2)

with one dichotomous variable (the variable of interest, Y), and two continuous ones (the explanatory variables X1 and X2).

> tail(B)
    Y        X1        X2
195 0 0.2832325 0.1548510
196 0 0.5905732 0.3483021
197 0 0.1103606 0.6598210
198 0 0.8405070 0.3117724
199 0 0.3179637 0.3515734
200 1 0.7828513 0.1478457

The theoretical partition, given by the definition of P above, splits the unit square at X1 = 0.3 and X1 = 0.85, with cuts on X2 at 0.5, 0.3 and 0.7 respectively, in those three regions.

The sample can be plotted as follows, with blue dots when Y equals one, and red dots when Y is zero,

> plot(X1,X2,col="white")
> points(X1[Y=="1"],X2[Y=="1"],col="blue",pch=19)
> points(X1[Y=="0"],X2[Y=="0"],col="red",pch=19)
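
To compare with the theoretical partition, we can overlay the cut points, read directly off the definition of P above, on that scatterplot; a quick sketch,

> abline(v=c(.3,.85),lty=2)
> segments(0,.5,.3,.5,lty=2)      # cut on X2 at 0.5, when X1 < 0.3
> segments(.3,.3,.85,.3,lty=2)    # cut on X2 at 0.3, when 0.3 < X1 < 0.85
> segments(.85,.7,1,.7,lty=2)     # cut on X2 at 0.7, when X1 > 0.85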

In order to construct the tree, we need a partition criterion. The most standard one is probably Gini's index, which can be written, when the x_i's are split in two classes, denoted here \{A,B\},

\text{gini}(Y|X)=-\sum_{x\in\{A,B\}}\frac{n_x}{n}\sum_{y\in\{0,1\}}\frac{n_{x,y}}{n_x}\left(1-\frac{n_{x,y}}{n_x}\right)

or, when the x_i's are split in three classes, denoted \{A,B,C\},

\text{gini}(Y|X)=-\sum_{x\in\{A,B,C\}}\frac{n_x}{n}\sum_{y\in\{0,1\}}\frac{n_{x,y}}{n_x}\left(1-\frac{n_{x,y}}{n_x}\right)

etc. Here, the n_{x,y}'s are just counts of observations that belong to partition x and for which Y takes value y, with n_x=n_{x,0}+n_{x,1}. But it is possible to consider other criteria, such as the chi-square distance,

\chi^2(Y|X)=\sum_{x}\sum_{y\in\{0,1\}}\frac{\left(n_{x,y}-n_{x,y}^{\perp}\right)^2}{n_{x,y}^{\perp}}

where, classically,

n_{x,y}^{\perp}=\frac{n_x\cdot n_{\cdot,y}}{n}

is the count expected under independence, the first sum running over two classes \{A,B\} when we consider one knot, or over three classes \{A,B,C\} in the case of two knots.

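As an aside, that chi-square distance can be computed from the contingency table directly; here is one possible helper (call it CHI2, a quick sketch based on the standard Pearson statistic, not used in the rest of the post),

> CHI2=function(y,i){
+ T=table(y,i)            # contingency table of y versus the classes
+ nx=apply(T,2,sum)       # class sizes
+ ny=apply(T,1,sum)       # counts of y=0 and y=1
+ n=sum(T)
+ Tind=outer(ny,nx)/n     # counts expected under independence
+ sum((T-Tind)^2/Tind)    # Pearson chi-square distance (one possible version)
+ }
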
Here again, the idea is to maximize that distance: the goal is to discriminate, so we want the classes and the response to be as far from independent as possible. To compute Gini's index, consider

> GINI=function(y,i){
+ T=table(y,i)              # contingency table of y versus the classes
+ nx=apply(T,2,sum)         # size of each class
+ pxy=T/matrix(rep(nx,each=2),2,ncol(T))   # frequencies of y within each class
+ vxy=pxy*(1-pxy)           # impurity p(1-p), for each class and each value of y
+ zx=apply(vxy,2,sum)       # impurity of each class
+ n=sum(T)
+ -sum(nx/n*zx)             # minus the weighted average impurity (to be maximized)
+ }

We simply construct the contingency table, and then compute the quantity given above. Assume, first, that there is only one explanatory variable. We split the sample in two, using all possible splitting values s (in practice, the midpoints between consecutive observed values), i.e. the classes \{i: x_i\le s\} and \{i: x_i> s\}.

Then, we compute Gini's index for all those values. The knot is the value that maximizes Gini's index. Once we have our first knot, we keep it, and we reiterate by seeking the best second cut: given the first knot, we consider the value that splits the sample in three and gives the highest Gini's index. That is, we cut either below or above the previous knot. And we iterate. The code can be something like this,

> X=X2
> u=(sort(X)[2:n]+sort(X)[1:(n-1)])/2   # candidate splitting values (midpoints)
> knot=NULL
> for(s in 1:4){
+ vgini=rep(NA,length(u))
+ for(i in 1:length(u)){
+ kn=c(knot,u[i])                       # knots kept so far, plus the candidate
+ F=function(x){sum(x<=kn)}             # class of observation x, given those knots
+ I=Vectorize(F)(X)
+ vgini[i]=GINI(Y,I)
+ }
+ plot(u,vgini)
+ k=which.max(vgini)                    # best candidate at this step
+ cat("knot",k,u[k],"\n")
+ knot=c(knot,u[k])
+ u=u[-k]
+ }
knot 69 0.3025479 
knot 133 0.5846202 
knot 72 0.3148172 
knot 111 0.4811517

At the first step, the plot of Gini's index (produced inside the loop above) is maximal around 0.3. Then, this value is considered as fixed, and we try to construct a partition in three parts (splitting either below or above 0.3). The plot of Gini's index, as a function of this second knot, is then maximal when we split the sample around 0.6 (which becomes our second knot). Etc. Now, let us compare our code with the standard R function,

> library(tree)
> tree(Y~X2,method="gini")
node), split, n, deviance, yval
      * denotes terminal node

 1) root 200 49.8800 0.4750  
   2) X2 < 0.302548 69 12.8100 0.7536 *
   3) X2 > 0.302548 131 28.8900 0.3282  
     6) X2 < 0.58462 65 16.1500 0.4615  
      12) X2 < 0.324591 7  0.8571 0.1429 *
      13) X2 > 0.324591 58 14.5000 0.5000 *
     7) X2 > 0.58462 66 10.4400 0.1970 *

We do obtain the same knots: the first one is 0.302 and the second one is 0.584. So, constructing a tree is not that difficult…
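
As a side check, the rpart package, which also uses Gini's index by default to grow classification trees, should find the same first cut; a minimal sketch, with the response coerced to a factor,

> library(rpart)
> rpart(factor(Y)~X2,data=B,method="class",parms=list(split="gini"))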

Now, what if we consider our two explanatory variables? The story remains the same, except that the partition is now a bit more complex to write. To find the first knot, we consider all values on the two components, and again, keep the one that maximizes Gini’s index,

> n=nrow(B)
> u1=(sort(X1)[2:n]+sort(X1)[1:(n-1)])/2
> u2=(sort(X2)[2:n]+sort(X2)[1:(n-1)])/2
> gini=matrix(NA,nrow(B)-1,2)
> for(i in 1:length(u1)){
+ I=(X1<u1[i])
+ gini[i,1]=GINI(Y,I)
+ I=(X2<u2[i])
+ gini[i,2]=GINI(Y,I)
+ }
> mg=max(gini)
> i=1+sum(mg==max(gini[,2]))
> par(mfrow = c(1, 2))
> plot(u1,gini[,1],ylim=range(gini),col="green",type="b",xlab="X1",ylab="Gini index")
> abline(h=mg,lty=2,col="red")
> if(i==1){points(u1[which.max(gini[,1])],mg,pch=19,col="red")
+          segments(u1[which.max(gini[,1])],mg,u1[which.max(gini[,1])],-100000)}
> plot(u2,gini[,2],ylim=range(gini),col="green",type="b",xlab="X2",ylab="Gini index")
> abline(h=mg,lty=2,col="red")
> if(i==2){points(u2[which.max(gini[,2])],mg,pch=19,col="red")
+          segments(u2[which.max(gini[,2])],mg,u2[which.max(gini[,2])],-100000)}
> u2[which.max(gini[,2])]
[1] 0.3025479

The two panels above show Gini's index as a function of the candidate cut, on the first component (left) and on the second one (right). Here, it is optimal to split on the second variate first. And actually, we get back to the one-dimensional case discussed previously: as expected, it is optimal to split around 0.3. This is confirmed with the code below,

> library(tree)
> arbre=tree(Y~X1+X2,data=B,method="gini")
> arbre$frame[1:4,]
     var   n       dev      yval splits.cutleft splits.cutright
1     X2 200 49.875000 0.4750000      <0.302548       >0.302548
2     X1  69 12.811594 0.7536232      <0.800113       >0.800113
4 <leaf>  57  8.877193 0.8070175                               
5 <leaf>  12  3.000000 0.5000000

For the second knot, four cases should be considered: splitting on the second variable (again), either above or below the previous knot, or splitting on the first one, in the region either below or above the previous knot.

Etc. To visualize the tree, the code is the following

> plot(arbre)
> text(arbre)
> partition.tree(arbre)

http://freakonometrics.hypotheses.org/files/2013/01/arbre-gini-x1-x2-encore.png

Note that we can also visualize the partition. Nice, isn’t it?
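
For instance, reusing the colour coding of the scatterplot above, the sample can be overlaid on that partition,

> partition.tree(arbre)
> points(X1[Y==1],X2[Y==1],col="blue",pch=19,cex=.5)
> points(X1[Y==0],X2[Y==0],col="red",pch=19,cex=.5)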

To go further, the book Classification and Regression Trees by Leo Breiman (and co-authors) is awesome. Note that there are also interesting sections in the bible Elements of Statistical Learning: Data Mining, Inference, and Prediction by Trevor Hastie, Robert Tibshirani and Jerome Friedman (which can be downloaded from http://www.stanford.edu/~hastie/…)

Poisson regression, and minimum bias methods

In the next actuarial science class, we will finish regression trees, and introduce Poisson regression. The slides are online here,

I will present Poisson regression by drawing a parallel with logistic regression; the following session will be devoted to the generalization obtained with generalized linear models. On Poisson regression, I suggest reading Frees (2010), chapter 12 (pp. 343-361), Greene (2012), section 18.3 (pp. 802-828), or de Jong & Heller (2008), chapter 6. On minimum bias methods, see de Jong & Heller (2008), section 1.3, and the article by Sholom Feldblum, http://www.casact.org/…. On the transition from those methods (introduced by Robert Bailey in the 1960s, http://www.casact.org/… and http://www.casact.org/…) to regression models, I recommend reading Ben Zehnwirth's paper, Ratemaking From Bailey and Simon (1960) to Generalized Linear Regression Models, online at http://www.casact.org/…

As announced in the first class, I am trying to put the slides online as we go, but in recent years I had gotten used to writing everything on the board, so I now have to type it all up. Regarding the homework, an email will be sent by the end of the week to all the groups that have registered.


on transformations in a linear model

I wanted to take five minutes to come back to a question asked by email, which will also let me follow up on some points mentioned last Wednesday in class. I will rephrase the question, but roughly, it went as follows: the Box-Cox method (mentioned at the beginning of the week, here) is meant to choose between a linear model (which we have been studying in every possible direction since the beginning of the course) and a log-linear model (mentioned here, for example). The idea is to associate the linear case with \lambda=1 (in the Box-Cox transform) and \lambda=0 with the multiplicative (or log-linear) case. But what happens if the optimal value rejects both cases, and is close to (say) 1/2?
Consider the following case (an example I never tire of using),

> reglm=lm(dist~speed,data=cars)
> library(MASS)
> boxcox(reglm)
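
To read the optimal value off more precisely, note that boxcox() returns (invisibly) the grid of \lambda values and the corresponding profile log-likelihood; a quick sketch,

> bc=boxcox(reglm)
> bc$x[which.max(bc$y)]    # lambda maximizing the profile log-likelihood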

The optimal value is indeed close to 1/2. Setting \lambda=1/2, we would be tempted to consider a model of the form

\sqrt{Y_i}=\beta_0+\beta_1 X_i+\varepsilon_i

(as the Box-Cox transformation suggests). We can run the regression,

> regsqrt=lm(sqrt(dist)~speed,data=cars)
> summary(regsqrt)

Call:
lm(formula = sqrt(dist) ~ speed, data = cars)

Residuals:
Min      1Q  Median      3Q     Max
-2.0684 -0.6983 -0.1799  0.5909  3.1534

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  1.27705    0.48444   2.636   0.0113 *
speed        0.32241    0.02978  10.825 1.77e-14 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.102 on 48 degrees of freedom
Multiple R-squared: 0.7094,	Adjusted R-squared: 0.7034
F-statistic: 117.2 on 1 and 48 DF,  p-value: 1.773e-14

and indeed, the result is conclusive. But we cannot stop there: as with the logarithmic transformation, our variable of interest is still Y (the braking distance), not \sqrt{Y}.
Writing Z=\sqrt{Y}, the prediction from the transformed model is

\widehat{Z}=\widehat{\beta}_0+\widehat{\beta}_1 x

and

\widehat{Z}\approx\mathbb{E}[\sqrt{Y}\mid X=x].

As with the logarithm, we could be tempted to take \widehat{Z}^2 as a prediction for Y, but

\mathbb{E}[Y\mid X=x]=\mathbb{E}[(\sqrt{Y})^2\mid X=x]\neq\left(\mathbb{E}[\sqrt{Y}\mid X=x]\right)^2

In short, we have a biased estimator. And here again, since we are dealing with positive variables, Jensen's inequality even guarantees that one of the two quantities always dominates the other (I will let convexity enthusiasts write it in the right direction). So what can we do?
The first solution is to note that

\mathbb{E}[Y\mid X=x]=\mathbb{E}[Z^2\mid X=x]=\text{Var}[Z\mid X=x]+\left(\mathbb{E}[Z\mid X=x]\right)^2

So, very simply, we get our prediction by taking the square, as intuited, but adding a (positive) term related to the variance.
A second solution is to note that Z (conditional on the explanatory variable) is Gaussian. So, taking the square (up to some rescaling), we should end up with a chi-square distribution, which is a distribution we know well (otherwise, I can refer to an earlier post, here).
Let us dig into these ideas a little (in particular, to derive confidence intervals). If we visualize the prediction of this model on the square root of the braking distance, we get

> attach(cars)
> plot(speed,sqrt(dist))
> x=seq(0,30,by=.2)
> distsqrtp=predict(regsqrt,newdata=
+ data.frame(speed=x),interval="prediction")
> polygon(c(x,rev(x)),c(distsqrtp[,2],
+ rev(distsqrtp[,3])),col="yellow",border=NA)
> lines(x,distsqrtp[,1],lwd=2,col="red")

To go from a normal distribution to a chi-square distribution, we have to subtract the mean (to center the variable) and divide by the standard deviation (to get unit variance),

> s=summary(regsqrt)$sigma        # residual standard error
> mu=predict(regsqrt)             # fitted values, on the square-root scale
> distsqrtp01=(sqrt(dist)-mu)/s   # standardized values

We thus have a variable which, conditional on the explanatory variable, is supposed to follow a \mathcal{N}(0,1) distribution,

> plot(speed,distsqrtp01,ylim=c(-3,3))
> abline(h=qnorm(c(.025,.975),0,1),lty=2,col="red")

We then take the square to get our \chi^2(1) variable, and we draw the confidence band (the squared values should remain below the 95% quantile),

> plot(speed,distsqrtp01^2)
> abline(h=qchisq(.95,df=1),lty=2,col="red")

We now have our confidence intervals, using this chi-square distribution and inverting our transformation,

> mu=predict(regsqrt,newdata=data.frame(speed=x))
> distsup=(mu+s*sqrt(qchisq(.95,df=1)))^2
> distinf=(mu-s*sqrt(qchisq(.95,df=1)))^2

which can be visualized simply (on the original data, with the distance on the y-axis),

> plot(cars)
> lines(x,distsup,lty=2,col="red")
> lines(x,distinf,lty=2,col="red")

And the predicted value? We will use our relationship linking the variance, the expectation of the square, and the square of the expectation,

> distesp=mu^2+s^2
> lines(x,distesp,lwd=2,col="red")

Nice, isn't it?
Otherwise, as I was saying in class, if the optimal transformation of Y is to take \sqrt{Y}, perhaps we could, in a somewhat dual manner, consider using the square of the explanatory variable as a regressor. Once again, this is just an idea, since nothing guarantees it. Indeed, the two models

\sqrt{Y_i}=\beta_0+\beta_1 X_i+\varepsilon_i \quad\text{versus}\quad Y_i=\alpha_0+\alpha_1 X_i+\alpha_2 X_i^2+\eta_i

are not equivalent. In particular, the second one is a classical linear model, with homoscedasticity: the dispersion around our parabola will be uniform. With the first model, on the other hand, squaring gives Y_i=(\beta_0+\beta_1 X_i+\varepsilon_i)^2, and because of the cross-product term we get a kind of heteroscedasticity, with an error variance that grows with X_i (since the coefficients are positive). In short, in terms of confidence intervals, we should get rather different things.
Let us look at the regression of the distance (this time) on the speed, and on the squared speed,

> reglm=lm(dist~speed+I(speed^2),data=cars)
> distp=predict(reglm,newdata=
+ data.frame(speed=x),interval="prediction")

If we visualize the prediction, with its prediction interval, we get

> plot(cars)
> polygon(c(x,rev(x)),c(distp[,2],
+ rev(distp[,3])),col="yellow",border=NA)
> lines(x,distp[,1],lwd=2,col="red")

In other words, we assume here that our model is homoscedastic. The confidence regions from the two approaches are clearly different… even though the predictions are almost superimposed (yes, yes, there really are two curves, one red and one blue).
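
To see both approaches on a single graph (reusing the objects computed above: distinf, distsup and distesp for the square-root model, distp for the quadratic one), here is a quick sketch, with the square-root-based prediction in red and the quadratic one in blue,

> plot(cars)
> polygon(c(x,rev(x)),c(distp[,2],rev(distp[,3])),col=rgb(1,1,0,.4),border=NA)
> lines(x,distsup,lty=2,col="red")
> lines(x,distinf,lty=2,col="red")
> lines(x,distesp,lwd=2,col="red")
> lines(x,distp[,1],lwd=2,col="blue")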