Tag Archives: CRAN

Computational Actuarial Science, with R

The book Computational Actuarial Science, with R is officially out. In the introduction of the book, and on the website of CRC, it is mentioned that the datasets can be found “in an R package on CRAN“, which is unfortunately incorrect. Some datasets are too large, so the package cannot be uploaded to CRAN. Fortunately, Christophe hosts the package on his website.

> install.packages("CASdatasets", repos = "http://dutangc.free.fr/pub/RRepos/")

or

> install.packages("CASdatasets", repos = "http://dutangc.free.fr/pub/RRepos/", 
type = "source")
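Once installed, the package loads like any other. Here is a minimal sketch (freMTPLfreq, the French motor third-party liability frequency data, is one of the datasets I believe is shipped with the package; substitute any dataset listed in the reference manual below),

> library(CASdatasets)
> data(freMTPLfreq)
> dim(freMTPLfreq)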

Here are the files :

Insurance datasets

A collection of datasets, originally for the book ‘Computational Actuarial Science with R’ edited by Arthur Charpentier (CAS with R). Now, the package contains a large variety of actuarial datasets.

Version: 0.9-8
Published: 2014-05-21
Author: Christophe Dutang
Maintainer: Christophe Dutang <christophe.dutang at ensimag.fr>
License: GPL-2 | GPL-3 [expanded from: GPL (≥ 2)]
NeedsCompilation: no

Downloads:

Reference manual: CASdatasets.pdf
Package source: CASdatasets_0.9-8.tar.gz
Package installation: go to this page
Windows binaries: r-release: CASdatasets_0.9-8.zip
OS X Snow Leopard binaries: r-release: CASdatasets_0.9-8.tgz
OS X Mavericks binaries: r-release: CASdatasets_0.9-8.tgz
Old sources: CASdatasets archive

Finding Waldo, a flag on the moon and multiple choice tests, with R

I have to admit, first, that finding Waldo has been a difficult task. And I did not succeed. Neither could I correctly spot his shirt (because that was actually what I was looking for). You know, that red-and-white striped shirt. I guess it should have been possible to look for Waldo’s face (assuming that his face does not change), but I still have problems with the size factor (and resolution issues too). The problem is not that simple. At the http://mlsp2009.conwiz.dk/ conference, a prize was offered for writing an algorithm in Matlab. And one can even find Mathematica code online. But most of those algorithms are based on the idea that we look for similarities with Waldo’s face, as described in problem 3 on http://www1.cs.columbia.edu/~blake/‘s webpage. You can find papers on that problem, e.g. Friendly & Kwan (2009) (based on statistical techniques, but Waldo is actually a pretext to discuss other issues), or more recently (and more complex) Garg et al. (2011) on matching people in images of crowds.

What about code in R? On http://stackoverflow.com/, some ideas can be found (and thanks to Robert Hijmans for his help with his package). So let us try to do something here, on our own. Consider the following picture,

With the following code (based on the following file) it is possible to import the picture, and to extract the colors (based on an RGB decomposition),

> library(raster)
> waldo=brick(system.file("DepartmentStoreW.grd",
+ package="raster"))
> waldo
class       : RasterBrick
dimensions  : 768, 1024, 786432, 3 (nrow,ncol,ncell,nlayer)
resolution  : 1, 1  (x, y)
extent      : 0, 1024, 0, 768  (xmin, xmax, ymin, ymax)
coord. ref. : NA
values      : C:\R\win-library\raster\DepartmentStoreW.grd
min values  : 0 0 0
max values  : 255 255 255

My strategy is simple: try to spot areas with white and red stripes (horizontal stripes). Note that here I ran the code on a Windows machine, since the package was not working well on my Mac. In order to get a better understanding of what can be done, let us start with something much simpler, like the picture below, with Waldo (and Waldo only). Here, it is possible to extract the three colors (red, green and blue),

> plot(waldo,useRaster=FALSE)

It is possible to extract the red zones (already visible on the graph above, since red is a primary color), as well as the white ones (green zones on the graphs correspond to white regions of the picture, on the left)

# white component
white = min(waldo[[1]] , waldo[[2]] , waldo[[3]])>220
focalswhite = focal(white, w=3, fun=mean)
plot(focalswhite,useRaster=FALSE)

# red component
red = (waldo[[1]]>150)&(max(  waldo[[2]] , waldo[[3]])<90)
focalsred = focal(red, w=3, fun=mean)
plot(focalsred,useRaster=FALSE)

i.e. here we have the graphs below, with the white regions, and the red ones,

From those two parts, it has been possible to extract the red-and-white stripes from the picture, i.e. some regions that were red above, and white below (or the reverse),

# striped component
striped = red; n=length(values(striped)); h=5
values(striped)=0
values(striped)[(h+1):(n-h)]=(values(red)[1:(n-2*h)]==
TRUE)&(values(red)[(2*h+1):n]==TRUE)
focalsstriped = focal(striped, w=3, fun=mean)
plot(focalsstriped,useRaster=FALSE)
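Since the idea described above is to find regions that are red on one side and white on the other, here is a minimal sketch that combines the two masks (red at a given pixel, white h pixels above or below), using the same offset h and the red and white rasters computed in the previous chunks; it is a variation of mine, not the code used for the graphs below,

# striped component, using both the red and the white masks
striped2 = red
values(striped2) = 0
n = length(values(striped2)); h = 5
vr = values(red); vw = values(white)
values(striped2)[(h+1):(n-h)] = (vr[(h+1):(n-h)]==TRUE) &
((vw[1:(n-2*h)]==TRUE)|(vw[(2*h+1):n]==TRUE))
focalsstriped2 = focal(striped2, w=3, fun=mean)
plot(focalsstriped2,useRaster=FALSE)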

So here, we can easily spot Waldo, i.e. the guy with the red-white stripes (with two different sets of thresholds for the RGB decomposition)

Let us try something slightly more complicated, with a zoom on the large picture of the department store (since, to be honest, I know where Waldo is…).

Here again, we can spot the white part (on the left) and the red one (on the right), with some thresholds for the RGB decomposition

Note that we can try to be (much) more selective, playing with the thresholds. Here, it is not very convincing: I cannot clearly identify the region where Waldo might be (the two graphs below were obtained by playing with the thresholds)

And if we look at the overall picture, it is worse. Here are the white zones, and the red ones,

and again, playing with RGB thresholds, I cannot spot Waldo,

Maybe I was a bit optimistic, or too ambitious. Let us try something simpler, like finding a flag on the moon. Consider the picture below, on the left, and let us see if we can spot an American flag,

Again, on the left, let us identify white areas, and on the right, red ones

Then as before, let us look for horizontal stripes

Wow, I did it! That’s one small step for man, one giant leap for R-coders! Or at least for me… So, why might it be interesting to identify areas in pictures? I mean, I am not Chloe O’Brian, I don’t have to spot flags in a crowd, nor Waldo, nor terrorists (who might wear striped shirts). But this might be fun if you want to grade your exams automatically. Consider the two following scans, the template, and a filled copy,

A first step is to identify regions where we expect to find some “red” part (I assume here that students have to use a red pencil). Let us start to check on the template and the filled form if we can identify red areas,

exam = stack("C:\\Users\\exam-blank.png")
red = (exam[[1]]>150)&(max(  exam[[2]] , exam[[3]])<150)
focalsred = focal(red, w=3, fun=mean)
plot(focalsred,useRaster=FALSE) 
exam = stack("C:\\Users\\exam-filled.png")
red = (exam[[1]]>150)&(max(  exam[[2]] , exam[[3]])<150)
focalsred = focal(red, w=3, fun=mean)
plot(focalsred,useRaster=FALSE)

First, we have to identify areas where students have to fill the blanks. So in the template, identify black boxes, and get the coordinates (here manually)

exam = stack("C:\\Users\\exam-blank.png")
black = max(  exam[[1]] ,exam[[2]] , exam[[3]])<50
focalsblack = focal(black, w=3, fun=mean)
plot(focalsblack,useRaster=FALSE)
correct=locator(20)
coordinates=locator(20)
X1=c(73,115,156,199,239)
X2=c(386,428.9,471,510,554)
Y=c(601,536,470,405,341,276,210,145,79,15)
LISTX=c(rep(X1,each=10),rep(X2,each=10))
LISTY=rep(Y,10)
points(LISTX,LISTY,pch=16,col="blue")

The blue points above are where we look for students’ answers. Then, we have to define the vector of correct answers,

CORRECTX=c(X1[c(2,4,1,3,1,1,4,5,2,2)],
X2[c(2,3,4,2,1,1,1,2,5,5)])
CORRECTY=c(Y,Y)
points(CORRECTX, CORRECTY,pch=16,col="red",cex=1.3)
UNCORRECTX=c(X1[rep(1:5,10)[-(c(2,4,1,3,1,1,4,5,2,2)
+seq(0,length=10,by=5))]],
X2[rep(1:5,10)[-(c(2,3,4,2,1,1,1,2,5,5)
+seq(0,length=10,by=5))]])
UNCORRECTY=c(rep(Y,each=4),rep(Y,each=4))

Now, let us get back on red areas in the form filled by the student, identified earlier,

exam = stack("C:\\Users\\exam-filled.png")
red = (exam[[1]]>150)&(max(  exam[[2]] , exam[[3]])<150)
focalsred = focal(red, w=5, fun=mean)

Here, we simply have to compare what the student answered with areas where we expect to find some red in,

ind=which(values(focalsred)>.3)
yind=750-trunc(ind/610)
xind=ind-trunc(ind/610)*610
points(xind,yind,pch=19,cex=.4,col="blue")
points(CORRECTX, CORRECTY,pch=1,
col="red",cex=1.5,lwd=1.5)

Crosses on the graph on the right below are the answers identified as correct (here 13),

> icorrect=values(red)[(750-CORRECTY)*
+ 610+(CORRECTX)]
> points(CORRECTX[icorrect], CORRECTY[icorrect],
+ pch=4,col="black",cex=1.5,lwd=1.5)
> sum(icorrect)
[1] 13

In case there are negative points for incorrect answers, we can also count how many incorrect answers were given. Here, 4.

> iuncorrect=values(red)[(750-UNCORRECTY)*610+
+ (UNCORRECTX)]
> sum(iuncorrect)
[1] 4

So I have not been able to find Waldo, but at least, this will probably save me hours next time I have to mark exams…

Simulating a Lévy process, and discretization

With @renaudjf, we were discussing the other day the simulation of a Lévy process. And we were wondering about an optimal algorithm to combine a Poisson process (or a compound Poisson process) with a Wiener process (possibly with a drift, or even a more general diffusion). Actually, to generate Poisson processes, I have always been used to simulating the durations between jumps (with independent exponential distributions, as in some old posts). Jean-François suggested using a uniformity property of the jumps over a given time interval, conditionally on the number of jumps.

Let us start with the first approach. We can generate a Wiener process, possibly with a drift, and, on the side, generate the exponential variables (which will correspond to the durations between jumps), and possibly also jump sizes (e.g. losses following an exponential distribution). The process considered here is

$$L_t = W_t + \sum_{i:\,T_i\le t} X_i + \lambda t$$

where $(W_t)$ is the Wiener part, the $T_i$'s are the jump times and the $X_i$'s the jump sizes. We start by generating $(W_t)$ on a grid with step $h$, noting that

$$W_{t+h}=W_t+(W_{t+h}-W_t)$$

where the increments $W_{t+h}-W_t$ are Gaussian (centered, with variance $h$) and independent of each other. As for the durations between jumps, they are independent exponential variables with mean $1/\lambda$. Here is the code generating the three components,

n=1000
h=1/n
lambda=5
set.seed(2)
W=c(0,cumsum(rnorm(n,sd=sqrt(h))))
D=rexp(100,lambda)   # durations between jumps
N=sum(cumsum(D)<1)   # number of jumps before time 1
T=cumsum(D[1:N])     # jump times
X=-rexp(N)           # jump sizes (losses)

The catch is that we had to discretize the Wiener process, while we did not for the compound Poisson process. Still, both have to be brought back to the same time scale. A first option is to actually define the function $t\mapsto L_t$

Lt=function(t){
W[trunc(n*t)+1]+sum(X[T<=t])+lambda*t
}

and then drawing it is child's play,

L=Vectorize(Lt)
u=seq(0,1,length=n+1)
plot(u,L(u),type="l",col="blue")

Another possibility is to use a uniformity property of the Poisson process that I mentioned in the introduction. The Poisson process satisfies a remarkable property: if $T_i$ denotes the date of the $i$-th jump, then conditionally on $\{N_1=n\}$ (the number of jumps over $[0,1]$), the variables $(T_1,\dots,T_n)$ are distributed as the order statistics of $n$ independent variables, uniformly distributed over $[0,1]$, i.e.

$$(T_1,\dots,T_n)\mid\{N_1=n\}\ \overset{\mathcal{L}}{=}\ (U_{(1)},\dots,U_{(n)}),\quad U_1,\dots,U_n\ \text{i.i.d. }\mathcal{U}([0,1])$$

This property can be found in Wolff (1982). The idea of the proof is relatively simple. Start with a (single) jump: then, for $t\in[0,1]$,

$$\mathbb{P}(T_1\le t\mid N_1=1)=\frac{\mathbb{P}(N_t=1,\,N_1-N_t=0)}{\mathbb{P}(N_1=1)}=\frac{\lambda t e^{-\lambda t}\cdot e^{-\lambda(1-t)}}{\lambda e^{-\lambda}}=t$$

i.e. we recover the cumulative distribution function of the uniform distribution on $[0,1]$. We then iterate with 2 jumps, 3 jumps, etc.

Translating this idea into R is straightforward (since we work on $[0,1]$)

N=rpois(1,lambda)
T=runif(N)
X=-rexp(N)

Then, one strategy is to discretize the Poisson process, with the same time step as the Wiener process,

indice=trunc(T*n)+1
saut=rep(0,n+1)
saut[indice]=X
processus=W+cumsum(saut)+lambda*u

We recover the same trajectory as before,

plot(u,processus,type="l",col="red")


Except that we got lucky. With this procedure, we must not have two jumps within the same time interval! Well, it is true that a characterization of the Poisson process is that

$$\mathbb{P}(N_{t+h}-N_t\ge 2)=o(h)$$

so there should be very little chance of having two jumps in the same interval, all the more so as the time step is small. But "little chance" does not mean zero, and if we generate thousands of trajectories, the probability of running into a problem at least once is not negligible.
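To get an idea of the order of magnitude, here is a small sketch (my own check, with the n = 1000 subdivisions and λ = 5 used above) estimating, by simulation, the probability that at least two jumps fall in the same time interval,

n=1000; lambda=5; nsim=10000
collision=rep(NA,nsim)
for(s in 1:nsim){
N=rpois(1,lambda)
T=runif(N)
collision[s]=any(duplicated(trunc(T*n)))
}
mean(collision)   # estimated probability of at least one collision per trajectory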

Jean-François had the brilliant idea of suggesting to draw, not uniform variables over $[0,1]$, but discrete uniform variables over $\{1/n,2/n,\dots,(n-1)/n\}$,

T=c(0,sort(sample((1:(n-1)/n),size=N,replace=FALSE)))

without replacement, in order to avoid having two jumps at the same time. The idea was appealing, but I still had to convince myself that the durations between jumps… remained (almost) exponentially distributed.

To check this, we can run a few tests. For instance, generate enough simulations to get about a hundred jumps (and thus about a hundred durations between jumps), and then test whether they follow an exponential distribution (with mean $1/\lambda$)

VT=0
for(ns in 1:20){
N=rpois(1,lambda)
if(N>0){
T=c(0,sort(sample((1:(n-1)/n),size=N,replace=FALSE)))
VT=c(VT,diff(T))
}}

We run 20 loops here because we had set

lambda=5

and I said I wanted about a hundred observations to run a goodness-of-fit test (which is purely arbitrary, admittedly). We can then test the fit of the exponential distribution,

ks.test(VT[-1],"pexp",lambda)$p.value

If we repeat this a large number of times, changing the time step (or the number of subdivisions of the time interval), we notice that, indeed, with a large time step (on the left below) we often, if not almost always, reject the exponential assumption. But quite quickly, the assumption becomes plausible,
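The rejection rates described here can be reproduced with a loop of the following kind (a sketch: the grid of values of n and the number of replications are arbitrary choices of mine),

lambda=5
rejection=function(n,nrep=500){
pv=rep(NA,nrep)
for(r in 1:nrep){
VT=0
for(ns in 1:20){
N=rpois(1,lambda)
if(N>0){
T=c(0,sort(sample((1:(n-1))/n,size=N,replace=FALSE)))
VT=c(VT,diff(T))
}}
pv[r]=ks.test(VT[-1],"pexp",lambda)$p.value
}
mean(pv<.05)   # proportion of rejections at the 5% level
}
sapply(c(20,50,100,500,1000),rejection)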

I do not know which of these two approaches would be the fastest, but we have two nice algorithms to generate Lévy processes. Don't we?

ACT2040, an introduction to generalized linear models

This Tuesday we will start with GLMs, after having introduced distributions from the exponential family (which should have been reviewed in last Friday's tutorial). The notation used will be that the distribution (density or probability function) of $Y_i$ is of the form

$$f(y_i\mid\theta_i,\phi)=\exp\left(\frac{y_i\,\theta_i-b(\theta_i)}{a(\phi)}+c(y_i,\phi)\right)$$

For a more exhaustive treatment, I refer to the online chapter.

  • The linear (Gaussian) model

The basic model is the Gaussian model that we reviewed during the last class,

> X=c(1,2,3,4)
> Y=c(1,2,5,6)
> base=data.frame(X,Y)
> reg1=lm(Y~1+X,data=base)
> nbase=data.frame(X=seq(0,5,by=.1))
> Y1=predict(reg1,newdata=nbase)

For a (single) prediction, we obtain the following

The code for such a graph is the following

> plot(X,Y,pch=3,cex=1.5,lwd=2,xlab="",ylab="")
> lines(nbase$X,Y1,col="red",lwd=2)
> u=2
> mu=predict(reg1)[2]
> sigma=summary(reg1)$sigma
> y=seq(0,7,.05)
> loi=dnorm(y,mu,sigma)
> segments(u,y,loi+u,y,col="light green")
> lines(loi+u,y)
> abline(v=u,lty=2)
> points(X[2],Y[2],pch=3,cex=1.5,lwd=2)
> points(X[2],predict(reg1)[2],pch=19,col="red")
> arrows(u-.2,qnorm(.05,mu,sigma),
+ u-.2,qnorm(.95,mu,sigma),length=0.1,code=3,col="blue")

We can produce multiple predictions, relying on the homoscedasticity assumption (the variance is then constant)

But we can go further

  • The generalized linear model

Several models can be fitted, by changing the distribution of the response variable and the link function,

> reg2=glm(Y~1+X,data=base,family=poisson(link="identity"))
> Y2=predict(reg2,newdata=nbase,type="response")
> reg3=glm(Y~1+X,data=base,family=poisson(link="log"))
> Y3=predict(reg3,newdata=nbase,type="response")
> reg4=glm(Y~1+X,data=base,family=gaussian(link="log"))
> Y4=predict(reg4,newdata=nbase,type="response")
> sigma=sqrt(summary(reg4)$dispersion)

For the Poisson model with an identity link, we obtain

We thus obtain a variance that increases with the prediction,
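As an analogue of the Gaussian graph above, here is a minimal sketch of how the conditional (Poisson) distribution can be drawn at one point, for the identity-link model reg2 fitted above (the choice u = 2 simply mirrors the Gaussian example),

> plot(X,Y,pch=3,cex=1.5,lwd=2,xlab="",ylab="")
> lines(nbase$X,Y2,col="red",lwd=2)
> u=2
> mu=predict(reg2,type="response")[2]
> y=0:10
> proba=dpois(y,mu)
> segments(u,y,proba+u,y,col="light green")
> points(proba+u,y,pch=19,cex=.5)
> abline(v=u,lty=2)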

For a Poisson regression with a log link,

i.e. for our four predictions

We can compare with the prediction from a Gaussian model with a log link,

i.e. for the four predictions

Margin of error, and comparing proportions in the same sample

I recently tried to answer a simple question, asked by @adelaigue. Actually, I thought that the answer would be obvious… but it is a little bit more complex than what I thought. In a recent survey about elections in Brazil, a French newspaper mentioned that “Mme Rousseff, 62 ans, de 46,8% des intentions de vote et José Serra, 68 ans, de 42,7%” (i.e. the proportions obtained from the survey). It is also mentioned that “la marge d’erreur du sondage est de 2,2%”, i.e. the margin of error is 2.2%, which means (for the journalist) that there is a “grande probabilité que les 2 candidats soient à égalité” (a “large probability” that the two candidates are tied).
Usually, in sampling theory, we look at the margin of error of a single proportion. The idea is that the variance of $\widehat{p}$, obtained from a sample of size $n$, is

$$\text{Var}(\widehat{p})=\frac{p(1-p)}{n}$$

thus, the standard error is

$$\sqrt{\frac{p(1-p)}{n}}$$

The standard 95% confidence interval, derived from a Gaussian approximation of the binomial distribution, is

$$\left[\widehat{p}\pm 1.96\sqrt{\frac{\widehat{p}(1-\widehat{p})}{n}}\right]$$

The largest value is obtained when $p$ is 1/2, and then we have a worst-case confidence interval (an upper bound) which is

$$\left[\widehat{p}\pm\frac{1.96}{2\sqrt{n}}\right]\approx\left[\widehat{p}\pm\frac{1}{\sqrt{n}}\right]$$

So a margin of error of $\pm 1/\sqrt{n}$ means that $n\approx 1/\text{ME}^2$. Hence, a 5% margin of error means that n=400, while 2.2% means that n≈2000:
> 1/.022^2
[1] 2066.116
Classically, we compare proportions between two samples: surveys at two different dates, surveys in different regions, surveys paid for by two different newspapers, etc. But here, we wish to compare proportions within the same sample. This has been considered in an “old” paper published in 1993 in The American Statistician,

It contains nice figures to illustrate the difference between the standard approach,

and the one we would like to study here.

This point is mentioned in the book by Kish, survey sampling (thanks Benoit for the reference),


Let $\widehat{p}_1$ and $\widehat{p}_2$ denote the empirical frequencies obtained from the sample, based on $n$ observations. Then, since

$$\text{Var}(\widehat{p}_1)=\frac{p_1(1-p_1)}{n},\qquad\text{Var}(\widehat{p}_2)=\frac{p_2(1-p_2)}{n}$$

and

$$\text{Cov}(\widehat{p}_1,\widehat{p}_2)=-\frac{p_1\,p_2}{n}$$

we have

$$\text{Var}(\widehat{p}_1-\widehat{p}_2)=\frac{p_1(1-p_1)+p_2(1-p_2)+2\,p_1 p_2}{n}=\frac{(p_1+p_2)-(p_1-p_2)^2}{n}$$

Thus, a natural margin of error for the difference between the two proportions is here

$$1.96\sqrt{\frac{(p_1+p_2)-(p_1-p_2)^2}{n}}$$

which is here 4 points
> n=2000
> p1=46.8/100
> p2=42.7/100
> 1.96*sqrt((p1+p2)-(p1-p2)^2)/sqrt(n)
[1] 0.04142327
Which is exactly the difference we have here ! Hence, the probability of reaching such a value is quite small (2%)
> s=sqrt(p1*(1-p1)/n+p2*(1-p2)/n+2*p1*p2/n)
> (p1-p2)/s
[1] 1.939972
> 1-pnorm(p1-p2,mean=0,sd=sqrt((p1+p2)-(p1-p2)^2)/sqrt(n))
[1] 0.02619152
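Just to double-check that order of magnitude, here is a small simulation sketch (my own check, not part of the original analysis): draw surveys of size n=2000 under the assumption that the two candidates are actually tied at 44.75%, and see how often the observed gap reaches 4.1 points; the number of replications is arbitrary.

> n=2000; p=44.75/100
> set.seed(1)
> gaps=replicate(100000,{
+ x=rmultinom(1,n,c(p,p,1-2*p))/n
+ x[1]-x[2]})
> mean(gaps>=(46.8-42.7)/100)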

Actually, we can compare the three margins of error we have so far,

  • the upper bound

$$\frac{1.96}{2\sqrt{n}}$$

  • the “average” one

$$1.96\sqrt{\frac{\bar{p}(1-\bar{p})}{n}}$$

where

$$\bar{p}=\frac{p_1+p_2}{2}$$

  • the more accurate one we just obtained,

$$1.96\sqrt{\frac{2\bar{p}-\delta^2}{n}}$$

where $\delta=p_1-p_2$.
> p=seq(0,.5,by=.01)
> ic1=rep(1.96/sqrt(4*n),length(p))
> ic2=1.96*sqrt(p*(1-p))/sqrt(n)
> delta=.01
> ic31=1.96*sqrt(2*p-delta^2)/sqrt(n)
> delta=.2
> ic32=1.96*sqrt(2*p-delta^2)/sqrt(n)
> plot(p,ic32,type="l",col="blue")
> lines(p,ic31,col="red")
> lines(p,ic2)
> lines(p,ic1,lty=2)
So on the graph below, the dotted line is the standard upper bound, the plain black line being a more accurate one when the (average) probability is $\bar{p}$ (on the x-axis). The red line is the true margin of error with a large difference between candidates (20 points), and the blue line with a small difference (1 point).


Remark: an alternative is to consider a chi-square test, comparing the observed multinomial distribution, with probabilities $(p_1,p_2)$, to the one under the tie hypothesis, with probabilities $(\bar{p},\bar{p})$, where $\bar{p}$ is the average proportion, i.e. 44.75%. Then

$$Q=n\left[\frac{(p_1-\bar{p})^2}{\bar{p}}+\frac{(p_2-\bar{p})^2}{\bar{p}}\right]$$

i.e. $Q\approx 3.76$ here
> p=(p1+p2)/2
> (x2=n*((p1-p)^2/p+(p2-p)^2/p))
[1] 3.756425
> 1-pchisq(x2,df=1)
[1] 0.05260495
Under the null hypothesis, $Q$ should have a chi-square distribution with one degree of freedom (since the average is fixed here). Here the probability of reaching that level is around 5% (which can be compared with the 2% we had before).

So finally, I would think that here, stating that there is a “large probability” is not correct…

Too large datasets for regression ? What about subsampling….

Recently, a classmate working in an insurance company told me he had datasets that were too large to run simple regressions (GLMs, which involve optimization routines), and that they were thinking of a reward for whoever would write the best (or at least the fastest) R code. My first idea was to use subsampling techniques, arguing that 10 regressions on 100,000 observations can take less time than one regression on 1,000,000 observations. And perhaps also provide better results…

  • Time to run a regression, as a function of the number of observations

Here, I generate a dataset where the response $Y_i$ is Poisson distributed, with a log-intensity that is a (nonlinear) function of some of the covariates (see the code below), and we fit

$$Y_i\sim\mathcal{P}\Big(E_i\,\exp\big[s(X_{1,i})+\text{linear terms in }X_{2,i},\dots,X_{6,i}\big]\Big)$$

where $s(\cdot)$ is a spline function (just to make it as general as possible, since in insurance ratemaking we include continuous covariates that do not influence the claims frequency linearly in the score). Yes, there might also be useless variables, including one that is strongly correlated with one that does have an impact in the regression. The code to generate the dataset is simply

> library(mnormt)    # assumption: rmnorm below comes from the mnormt package
> library(splines)   # for bs(), used in the regressions below
> n=10000
> X1=rexp(n)
> X2=sample(c("A","B","C"),size=n,replace=TRUE)
> X3=runif(n)
> Z=rmnorm(n,c(0,0),matrix(c(1,0.8,.8,1),2,2))
> X4=Z[,1]
> X5=Z[,2]
> X6=X1^2
> E=runif(n)
> lambda=.2*X5-4*dbeta(X3,2,5)+X1+
+1*(X2=="A")-2*(X2=="B")-5*(X2=="C")
> Y=rpois(n,exp(lambda))
> base=data.frame(Y,X1,X2,X3,X4,X5,X6,E)

We would like to study the time it takes to run a regression, as a function of the size (i.e. the number of lines $n$) of the dataset.

> system.time( glm(Y~bs(X1)+X2+X3+X4+
+ X5+X6+offset(log(E)),family=poisson,
+ data=base) )
utilisateur     système      écoulé
0.25        0.00        0.25
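The timing curve discussed below (running time as a function of the sample size) can be reproduced with a loop of the following kind; this is only a sketch, where the grid of sizes is arbitrary and datasets of each size are obtained by resampling rows of base with replacement, rather than regenerating them.

> sizes=seq(10000,200000,by=10000)
> times=rep(NA,length(sizes))
> for(i in 1:length(sizes)){
+ idx=sample(1:nrow(base),size=sizes[i],replace=TRUE)
+ times[i]=system.time(glm(Y~bs(X1)+X2+X3+X4+X5+X6+
+ offset(log(E)),family=poisson,data=base[idx,]))[3]
+ }
> plot(sizes,times,pch=19,xlab="number of lines",ylab="elapsed time (s)")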

The time I look at here is the last one (elapsed time). So far it was rather simple, but this is not the best model I can get. Let us use a stepwise (backward) variable selection,

> system.time( step(glm(Y~bs(X1)+X2+X3+
+ X4+X5+X6+offset(log(E)),family=poisson,
+ data=base)) )
Start:  AIC=2882.1
Y ~ bs(X1) + X2 + X3 + X4 + X5 + X6 + offset(log(E))
Step:  AIC=2882.1
Y ~ bs(X1) + X2 + X3 + X4 + X5 + offset(log(E))
Df Deviance    AIC
<none>        2236.0 2882.1
- X5      1   2240.1 2884.2
- X4      1   2244.1 2888.2
- X3      1   4783.2 5427.3
- X2      2   5311.4 5953.5
- bs(X1)  3   6273.7 6913.8
utilisateur     système      écoulé
1.82        0.03        1.86

Finally, from the first regression, we have points in black (based on 200 simulated datasets), and with a stepwise procedure, we have the points in red.

i.e. it might look linear (proportional), but if it were linear, then on a log-log scale we should also see straight lines, with slope 1,

Actually, it looks like a convex function.

The interpretation of that convexity might lead to misinterpretation. On the graph below, on the left, a dataset twice as big as the previous one (black point) will take less than twice as long to run, while on the right, it will take more than twice as long,

Convexity can simply be interpreted as “too large datasets take time, and too small ones too…”. Which is a first step: it should be interesting, in some cases, to run several regressions on smaller datasets….

  • Running 100 regressions on 100 lines, or running 1 regression on 10,000 lines ?

Here, we have datasets with $n$=200,000 lines. The question is how long it will take if we subdivide them into $k$ subsamples (of equal size), and run $k$ regressions.

> nk=trunc(n/k); classe=rep(1:k,each=nk); nt=nk*k
> base=data.frame(Y=Y[1:nt],X1=X1[1:nt],
+ X2=X2[1:nt],X3=X3[1:nt],X4=X4[1:nt],X5=X5[1:nt],
+ X6=X6[1:nt],E=E[1:nt],classe)
> system.time( for(j in 1:k){
+  glm(Y~bs(X1)+X2+X3+X4+X5+
+ X6+offset(log(E)),family=poisson
+ ,data=base,subset=classe==j) })
utilisateur     système      écoulé
1.31        0.00        1.31
> system.time( for(j in 1:k){
+      step(glm(Y~bs(X1)+X2+X3+
+ X4+X5+X6+offset(log(E)),family=
+ poisson,data=base,subset=classe==j)) })
Start:  AIC=183.97
Y ~ bs(X1) + X2 + X3 + X4 + X5 + X6 + offset(log(E))

[…]

  Df Deviance    AIC
<none>        117.15 213.04
- X2      2   250.15 342.04
- X3      1   251.00 344.89
- X4      1   420.63 514.53
- bs(X1)  3   626.84 716.74
utilisateur     système      écoulé
11.97        0.03       12.31

On the graph below, we have the time (y-axis, here on a log scale) it took to run $k$ regressions on samples of size $n/k$, as a function of $k$ (x-axis), including the time it took to run the regression on a dataset of size $n$, which is the concentration of dots on the left (i.e. $k$=1), both on the 6 regressors – in black – and with a stepwise procedure – in red. One has to keep in mind that I did not remove the printing option in the stepwise procedure, so it might be difficult to compare the two clouds (black vs. red). Nevertheless, we clearly see that if we run $k$ regressions on samples of size $n/k$, when $k$ is not too large, i.e. less than 10 or 15, it is not longer than the single regression on $n$=200,000 lines.

So here we see that running 100 regressions on 2,000 lines is longer than running 1 regression on 200,000 lines… But maybe we are not comparing things that are actually comparable: what if it takes a bit longer, but we strongly improve the quality of our estimators?
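The “average of estimators computed on subsamples” used in the next subsection can be sketched as follows (assuming the base data frame with the classe column built above, and taking k=20 as below); the averaged coefficients are then compared with the fit on the whole dataset.

> k=20
> L=list()
> for(j in 1:k){
+ L[[j]]=coef(glm(Y~bs(X1)+X2+X3+X4+X5+X6+
+ offset(log(E)),family=poisson,data=base,subset=classe==j))
+ }
> B=do.call(rbind,L)
> apply(B,2,mean)      # averaged estimators over the k subsamples
> coef(glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
+ family=poisson,data=base))   # estimator on the full dataset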

  • What about the quality of the output ?

Here, we consider only one dataset, with $n$=100,000 lines (just to make it run a bit faster), and $k$=20 subsets. Recall that the dataset is generated, and the Poisson regression fitted, exactly as above.

Here, we plot one estimated coefficient $\widehat{\beta}$ and a confidence interval, defined as

$$\widehat{\beta}\pm 1.96\,\widehat{\mathrm{se}}(\widehat{\beta})$$

The light blue segment is the initial estimator, while the dark blue one is obtained from the stepwise procedure. The grey area represents the estimation on the overall sample, while the $k$ segments on the right are the $k$ estimators (each computed on a sample of size $n/k$).

We can see that there is much more volatility in those $k$ estimators, but their average (horizontal dotted lines) is not so bad… The true value (i.e. the one used to generate the dataset) is the dotted black horizontal line.
And if we repeat that on 1,000 simulated datasets, we obtain the following distribution for $\widehat{\beta}$ (blue line), so we have an unbiased estimator of our parameter (the vertical line being the true value), here including a stepwise procedure,

But if we add the red curve, the average of the $k$ estimators computed on subsamples (the previous density now being the light blue line in the back), we see that taking the average of estimators on subsamples is not bad at all, on the contrary,

and for those who think that the stepwise procedure is a mistake, here is what we get without it,

So what we can see is that running 20 regressions can take (a little) more time (from what we’ve seen earlier) than running only one on the whole dataset…. but it provides better estimates. So the tradeoff is not that simple, and maybe running several regressions on huge datasets can be a proper alternative.

Detecting distributions with infinite mean

In a post I published a few months ago (in French, here, based on some old paper, there), I mentioned a statistical procedure to test whether the underlying distribution of an i.i.d. sample $X_1,\dots,X_n$ has a finite mean (based on extreme value results). Since I just used it on a small dataset (yes, with real data), I decided to post the R code, since it is rather simple to use. But instead of working on that dataset, let us see what happens on simulated samples. Consider $n$=200 observations generated from a Pareto distribution,

$$\mathbb{P}(X>x)=x^{-\alpha},\qquad x\ge 1$$

(here $\alpha$=2, as a start)

> a=2
> X=runif(200)^(-1/a)

Here, we will use the package developed by Mathieu Ribatet,

> library(RFA)
  • Using Generalized Pareto Distribution (and LR test)

A first idea is to fit a GPD on the observations exceeding some threshold $u>1$.
Since we would like to test $H_0$: "the mean is infinite", i.e. a tail index $\xi\ge 1$ (against the assumption that the expected value is finite, i.e. $\xi<1$), it is natural to consider a likelihood ratio test, based on

$$\Lambda=\frac{\sup_{\xi=1,\,\sigma}\mathcal{L}(\xi,\sigma)}{\sup_{\xi,\,\sigma}\mathcal{L}(\xi,\sigma)}$$

Under the null hypothesis, the distribution of $-2\log\Lambda$ should be a chi-square distribution with one degree of freedom. As mentioned here, the significance level is attained with a higher accuracy by employing a Bartlett correction (there). But let us make it as simple as possible for the blog, and use the chi-square distribution to derive the p-value.
Since it is rather difficult to select an appropriate threshold, it can be natural (as with Hill's estimator) to consider the thresholds $u=X_{n-k:n}$, and thus to fit a GPD on the $k$ largest values, and then to plot everything on a graph (like the Hill plot)

> Xs=rev(sort(X))
> s=0;G=rep(NA,length(Xs)-14);Gsd=G;LR=G;pLR=G
> for(i in length(X):15){
+ s=s+1
+ FG=fitgpd(X,Xs[i],method="mle")
+ FGc=fitgpd(X,Xs[i],method="mle",shape=1)
+ G[s]=FG$estimate[2]
+ Gsd[s]=FG$std.err[2]
+ FGc$fixed
+ LR[s]=FGc$deviance-FG$deviance
+ pLR[s]=1-pchisq(LR[s],df=1)
+ }

Here we keep the estimated value of the tail index, and the associated standard deviation of the estimator, to draw a confidence interval (assuming that the maximum likelihood estimator is Gaussian, which is correct only when $n$ is extremely large). We also compute the deviance of the model, the deviance of the constrained model ($\xi=1$), and their difference, which is the likelihood ratio statistic. Then we calculate the p-value (since, under $H_0$, the likelihood ratio statistic has a chi-square distribution).
If $\alpha$=2, we have the following graph, with on top the p-value (which is almost zero here), and below the estimation of the tail index based on the $k$ largest values (with a confidence interval for the estimator),
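The kind of graph described here can be produced with something like the following sketch (the layout, with the p-value on top and the tail index below, is my own choice; the vector of values of k matches the loop above),

> k=length(X):15
> par(mfrow=c(2,1))
> plot(k,pLR,type="l",xlab="k",ylab="p-value of the LR test")
> abline(h=.05,lty=2,col="red")
> plot(k,G,type="l",xlab="k",ylab="tail index",
+ ylim=range(c(G-1.96*Gsd,G+1.96*Gsd),na.rm=TRUE))
> lines(k,G+1.96*Gsd,lty=2)
> lines(k,G-1.96*Gsd,lty=2)
> abline(h=1,col="blue")
> par(mfrow=c(1,1))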

If $\alpha$=1.5 (finite mean, but infinite variance), we have

i.e. for those two models, we clearly reject the assumption of an infinite mean (even though, when $\alpha$ gets close to 1, we should consider thresholds that are large enough). On the other hand, if $\alpha$=1 (i.e. infinite mean), we clearly accept the assumption of an infinite mean (whatever the threshold),

  • Using Hill’s estimator

An alternative could be to use Hill's estimator (with Alexander McNeil's package). See here for more details on that estimator. The test is simply based on the confidence interval derived from the (asymptotic) normal distribution of Hill's estimator,

> library(evir)
> Xs=rev(sort(X))
> HILL=hill(X)
> G=rev(HILL$y)
> Gsd=rev(G/sqrt(HILL$x))
> pLR=1-pnorm(rep(1,length(G)),mean=G,sd=Gsd)

Again, if $\alpha$=2, we clearly reject the assumption of an infinite mean,

and similarly, if $\alpha$=1.5 (finite mean, but infinite variance)

Here the test is more robust than the one based on the GPD. And if $\alpha$=1 (i.e. infinite mean), again, we accept $H_0$,

Note that if $\alpha$=0.7, it is still possible to run the procedure, and fortunately, it suggests that the underlying distribution has an infinite mean,

(here without any doubt). Now you will need to wait a few days to see some practical applications of the idea (there was one in the paper mentioned above actually, on business interruption insurance losses).

EM and mixture estimation

Following my previous post on optimization and mixtures (here), Nicolas told me that my idea was probably not the most clever one (there).
So, let us get back to our simple mixture model,

$$f(x)=p\,\varphi(x;\mu_1,\sigma_1^2)+(1-p)\,\varphi(x;\mu_2,\sigma_2^2)$$

In order to describe how the EM algorithm works, assume first that both components $\varphi(\cdot;\mu_1,\sigma_1^2)$ and $\varphi(\cdot;\mu_2,\sigma_2^2)$ are perfectly known, and that the mixture parameter $p$ is the only one we care about.

  • The simple model, with only one parameter that is unknown

Here, the likelihood is

$$\mathcal{L}(p;\boldsymbol{x})=\prod_{i=1}^n\big[p\,\varphi(x_i;\mu_1,\sigma_1^2)+(1-p)\,\varphi(x_i;\mu_2,\sigma_2^2)\big]$$

so that we write the log-likelihood as

$$\log\mathcal{L}(p;\boldsymbol{x})=\sum_{i=1}^n\log\big[p\,\varphi(x_i;\mu_1,\sigma_1^2)+(1-p)\,\varphi(x_i;\mu_2,\sigma_2^2)\big]$$

which might not be simple to maximize. Recall that the mixture model can be interpreted through a latent variable $\Delta_i$ (that cannot be observed), taking value 1 when $X_i$ is drawn from $\varphi(\cdot;\mu_1,\sigma_1^2)$, and 0 if it is drawn from $\varphi(\cdot;\mu_2,\sigma_2^2)$. More generally (especially if we want to extend our model to 3, 4, … mixture components), we write $\Delta_{j,i}=1$ if observation $i$ comes from component $j$, and 0 otherwise.
With that notation, the (complete) likelihood becomes

$$\mathcal{L}(p;\boldsymbol{x},\boldsymbol{\Delta})=\prod_{i=1}^n\big[p\,\varphi(x_i;\mu_1,\sigma_1^2)\big]^{\Delta_i}\,\big[(1-p)\,\varphi(x_i;\mu_2,\sigma_2^2)\big]^{1-\Delta_i}$$

and the (complete) log-likelihood

$$\sum_{i=1}^n\big[\Delta_i\log p+(1-\Delta_i)\log(1-p)\big]+\sum_{i=1}^n\big[\Delta_i\log\varphi(x_i;\mu_1,\sigma_1^2)+(1-\Delta_i)\log\varphi(x_i;\mu_2,\sigma_2^2)\big]$$

where the term on the right does not involve $p$, the only parameter we care about here. From there, consider the following iterative procedure,
Assume that the mixture probability $p$ is known, and denote it $p^{(k)}$. Then I can predict the value of $\Delta_i$ (i.e. whether observation $i$ comes from the first or the second component) for all observations,

$$\theta_i^{(k)}=\mathbb{E}\big[\Delta_i\mid X_i,p^{(k)}\big]=\frac{p^{(k)}\,\varphi(x_i;\mu_1,\sigma_1^2)}{p^{(k)}\,\varphi(x_i;\mu_1,\sigma_1^2)+(1-p^{(k)})\,\varphi(x_i;\mu_2,\sigma_2^2)}$$

So I can inject those values into my log-likelihood, i.e. into

$$\sum_{i=1}^n\big[\theta_i^{(k)}\log p+(1-\theta_i^{(k)})\log(1-p)\big]$$

whose maximum (no need to run numerical tools here) is reached at

$$p=\frac{1}{n}\sum_{i=1}^n\theta_i^{(k)}$$

which will be denoted $p^{(k+1)}$. And I can iterate from there.
Formally, the first step is where we calculate an expected (E) value, $\theta_i^{(k)}$ being the best predictor of $\Delta_i$ given my observations (as well as my current belief about $p$). Then comes a maximization (M) step, where, using those predictions, I can update the estimate of the probability $p$.
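As a minimal sketch of that single-parameter case (assuming, as in the text, that the two component densities are known; the means 0 and 4 match the simulation design used later in the post, while the 50/50 split and the number of iterations are arbitrary choices for the sketch),

> set.seed(1)
> n=200
> X=c(rnorm(n/2,0,1),rnorm(n/2,4,1))   # a sample where the true p is 1/2
> p=.2                                 # initial value
> for(k in 1:100){
+ theta=p*dnorm(X,0,1)/(p*dnorm(X,0,1)+(1-p)*dnorm(X,4,1))   # E step
+ p=mean(theta)                                              # M step
+ }
> p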

  • A more general framework, where all parameters are now unknown

So far, it was simple, since we assumed that the two component distributions were perfectly known. Which is not realistic. And there is not much to change to get a complete algorithm estimating $(p,\mu_1,\sigma_1,\mu_2,\sigma_2)$. Recall that we had $\theta_i$, which was the expected value of $\Delta_i$, i.e. the probability that observation $i$ has been drawn from the first component.
If $\theta_i$, instead of lying in the interval $[0,1]$, were in $\{0,1\}$, then we could simply have computed the mean and standard deviation of the observations such that $\theta_i$=0, and similarly on the subset of observations such that $\theta_i$=1.
But we can't. So what can be done is to consider $\theta_i$ as the weight given to observation $i$ when estimating the parameters of the first component, and similarly, $1-\theta_i$ as the weight given to observation $i$ when estimating the parameters of the second component.
So we set, as before,

$$\theta_i=\frac{p\,\varphi(x_i;\mu_1,\sigma_1^2)}{p\,\varphi(x_i;\mu_1,\sigma_1^2)+(1-p)\,\varphi(x_i;\mu_2,\sigma_2^2)}$$

and then

$$\widehat{\mu}_1=\frac{\sum_{i=1}^n\theta_i\,x_i}{\sum_{i=1}^n\theta_i}\qquad\text{and}\qquad\widehat{\mu}_2=\frac{\sum_{i=1}^n(1-\theta_i)\,x_i}{\sum_{i=1}^n(1-\theta_i)}$$

and for the variances, well, they are weighted means again,

$$\widehat{\sigma}_1^2=\frac{\sum_{i=1}^n\theta_i\,(x_i-\widehat{\mu}_1)^2}{\sum_{i=1}^n\theta_i}\qquad\text{and}\qquad\widehat{\sigma}_2^2=\frac{\sum_{i=1}^n(1-\theta_i)\,(x_i-\widehat{\mu}_2)^2}{\sum_{i=1}^n(1-\theta_i)}$$

and this is it.

  • Let us run the code on the same data as before

Here, the code is rather simple: let us start by generating a sample
> n = 200
> X1 = rnorm(n,0,1)
> X20 = rnorm(n,0,1)
> Z  = sample(c(1,2,2),size=n,replace=TRUE)
> X2=4+X20
> X = c(X1[Z==1],X2[Z==2])
then, given a vector of initial values (the mixture probability, the two means and the two variances, i.e. the starting point of the algorithm),
>  s = c(0.5, mean(X)-1, mean(X)+1, var(X), var(X))
I define my function as,
>  em = function(X0,s) {
+  # E step: probability that each observation comes from the first component
+  Ep = s[1]*dnorm(X0, s[2], sqrt(s[4]))/(s[1]*dnorm(X0, s[2], sqrt(s[4])) +
+  (1-s[1])*dnorm(X0, s[3], sqrt(s[5])))
+  # M step: weighted updates of the mixture probability, means and variances
+  s[1] = mean(Ep)
+  s[2] = sum(Ep*X0) / sum(Ep)
+  s[3] = sum((1-Ep)*X0) / sum(1-Ep)
+  s[4] = sum(Ep*(X0-s[2])^2) / sum(Ep)
+  s[5] = sum((1-Ep)*(X0-s[3])^2) / sum(1-Ep)
+  return(s)
+  }
Then I get the updated vector $s^{(1)}=(p^{(1)},\mu_1^{(1)},\mu_2^{(1)},\sigma_1^{2\,(1)},\sigma_2^{2\,(1)})$. So this is it! We just need to iterate (here I stop after 200 iterations), since we can see that, actually, the algorithm converges quite fast,
> for(i in 2:200){
+ s=em(X,s)
+ }

Let us run the same procedure as before, i.e. I generate samples of size 200, where the difference between the two means can be small (0) or large (4),

OK Nicolas, you were right, we're doing much better! Maybe we should also go for a Gibbs sampling procedure?… next time, maybe….

Optimization and mixture estimation

Recently, one of my students asked me about optimization routines in R. He told me that R performed well on the estimation of a time series model with different regimes, while he had trouble with a (simple) GARCH process, and he was wondering whether R was good at optimization routines. Actually, I had always thought that mixtures (and regimes) were difficult to estimate, so I was a bit surprised…

Indeed, it reminded me of some trouble I experienced once, while talking about maximum likelihood estimation for non-standard distributions, i.e. when the optimization has to be done on the log-likelihood function. Even when generating nice samples, and giving appropriate initial values (actually the true values used in the random generation), each time I tried to optimize my log-likelihood, it failed. So I decided to play a little bit with standard optimization functions, to see which one performed better when trying to estimate the mixture parameter (from a mixture-based sample). Here, I generate a mixture of two Gaussian distributions, and I would like to see how different the means should be to have a high probability of estimating the mixture parameters properly.

The density here is proportional to

$$p\,\varphi(x;\mu_1,\sigma_1^2)+(1-p)\,\varphi(x;\mu_2,\sigma_2^2)$$

In the true model, $p=1/3$, $\mu_1=0$, $\sigma_1=\sigma_2=1$, and $\mu_2=m$, a parameter that will vary from 0 to 4.
The log-likelihood (actually, I add a minus sign since most of the optimization functions minimize) is
> logvraineg <- function(param, obs) {
+ p <- param[1]
+ m1 <- param[2]
+ sd1 <- param[3]
+ m2 <- param[4]
+  sd2 <- param[5]
+  -sum(log(p * dnorm(x = obs, mean = m1, sd = sd1) + (1 - p) *
+ dnorm(x = obs, mean = m2, sd = sd2)))
+  }
The code to generate my samples is the following (samples of size 200, where m is the difference between the two means),
> n = 200
> m = 2    # for instance; m ranges from 0 to 4 in the study below
> X1 = rnorm(n,0,1)
> X20 = rnorm(n,0,1)
> Z  = sample(c(1,2,2),size=n,replace=TRUE)
> X2=m+X20
> X = c(X1[Z==1],X2[Z==2])
Then I use two functions to optimize my log-likelihood, with identical initial values,
> O1=nlm(f = logvraineg, p = c(.5, mean(X)-sd(X)/5, sd(X), mean(X)+sd(X)/5, sd(X)), obs = X)
> logvrainegX <- function(param) {logvraineg(param,X)}
> O2=optim( par = c(.5, mean(X)-sd(X)/5, sd(X), mean(X)+sd(X)/5, sd(X)),
+   fn = logvrainegX)
Actually, since I might have identification problems, I take either the estimated probability or its complement, depending on the ordering of the estimated parameters.

On the graph above, the x-axis is the difference between the means of the mixture (as on the animated graph above). The red point is the median of the estimated parameter (here $\widehat{p}$), and I have included something that can be interpreted as a confidence interval, i.e. the region where I ended up in 90% of the scenarios: the black vertical segments. Obviously, when the sample is not heterogeneous enough (i.e. when the two means are too close), I cannot estimate my parameters properly; I might even get a probability that exceeds 1 (I did not add any constraint). The blue solid horizontal line is the true value of the parameter, while the blue dotted horizontal line is the initial value of the parameter in the optimization algorithm (I started by assuming that the mixture probability was around 0.2).
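Since nothing prevents the estimated probability from leaving [0,1] here, one possible fix is to add box constraints, e.g. with optim and method "L-BFGS-B" (this is my own variation, not what was used for the graphs; logvraineg and X are the objects defined above, and the lower bounds on the standard deviations are arbitrary small positive values),

> O3=optim(par = c(.5, mean(X)-sd(X)/5, sd(X), mean(X)+sd(X)/5, sd(X)),
+ fn = function(param) logvraineg(param, X),
+ method = "L-BFGS-B",
+ lower = c(0, -Inf, 1e-6, -Inf, 1e-6),
+ upper = c(1, Inf, Inf, Inf, Inf))
> O3$par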
The graph below is based on the second optimization routine (with identical  starting values, and of course on the same generated samples),

(just to be honest, in many cases it did not converge, so the loop stopped and I had to run it again… so finally, my study is based on a bit fewer than 500 samples (times 15, since I considered several values for the mean of the second underlying distribution), with 200 generated observations from a mixture).
The graph below compares the two (empty circles are the first algorithm, while plain circles the second one),

On average, it is not so bad…. but the probability of being far away from the true value is not small at all… except when the difference between the two means exceeds 3…
If I change the starting values of the optimization algorithm (previously, I assumed that the mixture probability was 1/5; here I start from 1/2), we get the following graph

which looks like the previous one, except for small differences between the two underlying distributions (just as if the initial values had no impact on the optimization; but it might come from the fact that the surface is nice, and we are not trapped in regions with local minima).
Thus, I am far from being an expert in optimization routines in R (see here for further information), but so far, it looks like R is not doing so badly… and the two algorithms perform similarly (maybe with the first one being a bit closer to the true parameter).

Ratemaking with SAS

For ratemaking, it is possible to use software other than R; in particular, it seems that one can do a couple of things with SAS…. I mention it because, paradoxically, it seems that insurers still prefer SAS to R (for instance). And several students had asked me “and how do we do that with SAS?“. However, I only cover the basics here, because SAS is rather limited in what it can do….

To follow the outline of the course, the first step is to define an exposure variable in the table,

DATA contrats;
SET lib.contrats;
lnexpo = log(expo);
run;

Running a Poisson regression is not necessarily complicated,

PROC GENMOD DATA = base;
ODS OUTPUT ParameterEstimates=Genmod1_Param
           Type3=Genmod1_Var
           Modelfit=Genmod1_InfoModele; 
MODEL nbsin = ageconducteur /
                  dist = poisson   
                  link = log   
                  offset = lnexpo 
                  type3;
RUN; QUIT;

The SAS output then looks like this

                                  The GENMOD Procedure
                    Critère pour évaluer la qualité de l'ajustement
              Critère                   DF          Valeur       Valeur/DF
              Deviance                63E3      26872.5334          0.4237
              Scaled Deviance         63E3      26872.5334          0.4237
              Pearson Chi-Square      63E3      73275.5362          1.1553
              Scaled Pearson X2       63E3      73275.5362          1.1553
              Log Likelihood                   -18474.2667

       Algorithm converged.
                       Analyse des résultats estimés de paramètres

                              Erreur      Wald 95Limites
Paramètre    DF   Estimation   standard      de confiance %    Khi 2   Pr > Khi 2
Intercept     1      -3.5164     0.0851    -3.6832  -3.3496   1708.02       <.0001
ageconducteur 1       0.0168     0.0014     0.0141   0.0195    146.73       <.0001
Scale         0       1.0000     0.0000     1.0000   1.0000
NOTE: The scale parameter was held fixed.

                         Statistiques LR pour Analyse de Type 3
                      Source           DF      Khi 2    Pr > Khi 2
                      ageconducteur     1     148.72        <.0001
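For comparison, the R equivalent of the GENMOD call above would be along the following lines (a sketch, assuming a data frame contrats containing the variables nbsin, ageconducteur and expo used in the SAS code),

> reg = glm(nbsin ~ ageconducteur + offset(log(expo)),
+ family = poisson(link = "log"), data = contrats)
> summary(reg)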

It is also possible to fit GAMs (i.e. smoothing of the – continuous – explanatory variable with spline functions)

PROC GAM DATA = base;
MODEL nbsin = spline(ageconducteur) / dist = Poisson;
OUTPUT OUT=gam PREDICTED; 
RUN; QUIT;
PROC SORT DATA = gam NODUPKEY; BY age_cond; RUN; QUIT;

and we can compute predictions with this model (the output itself does not bring much, in practice),

DATA gam;
SET gam;
pred_nbsin_gam = exp(P_nbsin);
KEEP ageconducteur pred_nbsin_gam;
RUN;

Finally, we can try to produce a nice graph. To do so, we compute the predictions of three models, the first one being the average number of claims per age

PROC SORT DATA = base; BY ageconducteur; RUN; QUIT;
PROC MEANS DATA = base NOPRINT;
BY ageconducteur;
VAR nbsin;
WEIGHT expo;
OUTPUT OUT = nbsin_age (DROP = _TYPE_ _FREQ_) MEAN=moy_uni_nbsin;
RUN; QUIT;

then we fit a GLM model and a GAM model, and we collect the outputs

PROC SORT DATA = nbsin_age; BY age_cond; RUN; QUIT;
PROC SORT DATA = gam; BY age_cond; RUN; QUIT;
DATA nbsin_age;
MERGE nbsin_age
      gam;
BY age_cond;
RUN;

We then try to draw the graph (I skip the command lines, there are about twenty of them)

To run a quasi-Poisson regression, the code looks as follows,

PROC GENMOD DATA = base;
ODS OUTPUT ParameterEstimates=Genmod1bis_Param
           Type3=Genmod1bis_Var
           Modelfit=Genmod1bis_InfoModele; 
MODEL nbsin = ageconducteur /
                 dist = poisson  
                 link = log      
                 offset = lnexpo 
                 type3           
                 scale = deviance;
RUN; QUIT;

The output then gives the estimate of the overdispersion parameter (or, in this example, underdispersion)

                      Analyse des résultats estimés de paramètres

                                   Erreur    Wald 95Limites
Paramètre      DF   Estimation   standard    de confiance %     Khi 2   Pr > Khi 2

Intercept        1     -3.5164     0.0554  -3.6249  -3.4078   4031.29       <.0001
ageconducteur    1      0.0168     0.0009   0.0150   0.0186    346.32       <.0001
Scale            0      0.6509     0.0000   0.6509   0.6509

Note that computing Akaike's criterion is not necessarily trivial,

%MACRO CALCUL_AIC_BIC(infomodel=, param=);
    DATA _null_;
    SET &infomodel.;
    IF Criterion = "Log Likelihood" THEN CALL SYMPUT("Loglike", Value);
    IF Criterion = "Deviance" THEN CALL SYMPUT("n_etoile", Df);
    RUN;
    DATA _null_;
    SET &param.  end=fin;
    RETAIN nb_df 0;
    nb_df = nb_df + df;
    IF fin THEN CALL SYMPUT("k", nb_df);
    RUN;
    DATA genmod_aic_bic;
    SET &param.;
    FORMAT Loglike 12.2 K 10. N 10. AIC_CALC 12.2 BIC_CALC 12.2;
    Loglike = 0; K = 0; N = 0; AIC_CALC = 0; BIC_CALC = 0;
    IF Parameter = "Intercept";
    KEEP Loglike K N AIC_CALC BIC_CALC;
    RUN;
    DATA genmod_aic_bic;
    SET genmod_aic_bic;
    Loglike = &loglike.;
    K = &k.;
    N = %eval(&n_etoile. + &k.);
    AIC_CALC = 2 * Loglike + 2 * K;
    BIC_CALC = 2 * Loglike + K * log(N);
    RUN;
    PROC PRINT DATA = genmod_aic_bic;
    RUN; QUIT;
%MEND CALCUL_AIC_BIC;
%CALCUL_AIC_BIC(infomodel=Genmod2_InfoModele, param=Genmod2_Param);

Tails of copulas, a graphical reading

Following a training session I gave at the end of the week in Brest (the slides are here and there), I wanted to come back to the story of tails of copulas, to borrow the title of Gary Venter's paper (here), which corresponds to things I presented a few years ago in Berlin (the slides being online here).

  • Quantifying tail dependence

The point is that there are two ways of quantifying tail dependence. The first one goes back to Joe (1990, here, or 1997 for the book), who introduced a (strong) tail dependence index. For instance, for the lower tail,

$$\lambda_L=\lim_{u\downarrow 0}\mathbb{P}(U\le u\mid V\le u)$$

i.e.

$$\lambda_L=\lim_{u\downarrow 0}\frac{C(u,u)}{u}$$

The second one is related to an idea found in the work of Janet Heffernan, Stuart Coles and Jonathan Tawn. The intuition is the following (it can be found online here). If $U$ and $V$ have the same distribution and are assumed independent, then

$$\mathbb{P}(U>u,V>u)=\mathbb{P}(U>u)\cdot\mathbb{P}(V>u)=(1-u)^2$$

On the other hand, if the variables are comonotonic (that is, equal, since we assume identical distributions),

$$\mathbb{P}(U>u,V>u)=\mathbb{P}(U>u)=1-u$$

So we can assume that there exists an index $\eta$ such that

$$\mathbb{P}(U>u,V>u)\sim(1-u)^{\eta}\quad\text{as }u\to 1$$

The trouble is that the independence case corresponds to $\eta$=2, while strong dependence corresponds to $\eta$=1. It is then usual to apply an affine transformation so that the index lies in [0,1], with the strength of dependence increasing with the index, e.g.

$$\bar{\chi}=\frac{2}{\eta}-1$$

Let us then define

$$\bar{\chi}(u)=\frac{2\log(1-u)}{\log\mathbb{P}(U>u,V>u)}-1$$

which can be interpreted as a (weak) tail dependence index.
In short, these two measures give information about the behaviour in the tails of the distribution.

  • Tail concentration functions

The idea is that these functions can be studied to better understand the behaviour in the tails. Following Gary Venter, we can define

$$L(z)=\frac{\mathbb{P}(U\le z,V\le z)}{\mathbb{P}(U\le z)}=\frac{C(z,z)}{z}$$

to study the behaviour in the lower tail, and

$$R(z)=\frac{\mathbb{P}(U>1-z,V>1-z)}{\mathbb{P}(U>1-z)}=\frac{1-2(1-z)+C(1-z,1-z)}{z}$$

for the upper tail, where the numerator involves the survival copula associated with $C$, in the sense that

$$\bar{C}(u,v)=\mathbb{P}(U>u,V>v)=1-u-v+C(u,v)$$

These tools make it possible to model strong dependence. To study weak dependence, we can also define

$$\bar{\chi}_L(z)=\frac{2\log z}{\log C(z,z)}-1$$

or

$$\bar{\chi}_R(z)=\frac{2\log z}{\log\bar{C}(1-z,1-z)}-1$$
  • Statistical application

The point is that these functions are easy to estimate. These tools can be useful to better understand the behaviour in the tails.
For instance, for a Gaussian copula with correlation 0.5, the theoretical shape of the (strong) concentration functions is the following

Statistically, it is possible to estimate those quantities by simply counting the number of observations in the lower-left corner, or in the upper-right corner. Given a sample, we can consider the empirical versions

$$\widehat{L}(z)=\frac{\#\{i:\,U_i\le z,\,V_i\le z\}}{\#\{i:\,U_i\le z\}}\qquad\text{and}\qquad \widehat{R}(z)=\frac{\#\{i:\,U_i\ge 1-z,\,V_i\ge 1-z\}}{\#\{i:\,U_i\le z\}}$$

For a sample of size n=500, the 90% confidence intervals look as follows,

The R code looks like this

> library(evd); data(lossalae)
> cor(lossalae,method="spearman")
         Loss     ALAE
Loss 1.000000 0.451872
ALAE 0.451872 1.000000

with the following code for the empirical version,

> z=seq(0,.5,by=.001)
> v=lossalae
> U=rank(v[,1])/(nrow(v)+1)
> V=rank(v[,2])/(nrow(v)+1)
> Lemp=rep(NA,length(z))
> Remp=rep(NA,length(z))
> for(i in 1:length(z)){
+  Lemp[i]=sum((U<=z[i])&(V<=z[i]))/sum(U<=z[i])
+  Remp[i]=sum((U>=1-z[i])&(V>=1-z[i]))/sum(U<=z[i])
+ }

and for the theoretical version (here with the Clayton copula, defined below),

> Lg=(pcopula(copclayton,cbind(z,z)))/(z)
> Rg=((1-2*(1-z)+pcopula(copclayton,cbind(1-z,1-z))))/(z)
> plot(c(1-z,z),c(Lg,Rg))

Moreover, we have similar functions for dependence in the weak sense, with the following code for the theoretical version,

> Lg=log(pcopula(cop,cbind(z,z)))/log(z)
> Rg=log((1-2*(1-z)+pcopula(cop,cbind(1-z,1-z))))/log(z)
> Lg=1/Lg*2-1
> Rg=1/Rg*2-1

and this one for the empirical version

> z=seq(0,.5,by=.001)
> v <- lossalae
> U=rank(v[,1])/(nrow(v)+1)
> V=rank(v[,2])/(nrow(v)+1)
> Lemp=rep(NA,length(z))
> Remp=rep(NA,length(z))
> for(i in 1:length(z)){
+  Lemp[i]=log(mean((U<=z[i])&(V<=z[i])))/log(mean(U<=z[i]))
+  Remp[i]=log(mean((U>=1-z[i])&(V>=1-z[i])))/log(mean(U<=z[i]))
+ }
> Lemp=1/Lemp*2-1
> Remp=1/Remp*2-1

In short, we can use these functions on real samples. Consider the classical loss-ALAE example (where one couples the losses on insured claims and the expenses paid by the insurer). We would like to fit a copula, without really knowing which one. We can start by studying strong dependence, and compare with a Gaussian copula. The reference Gaussian copula here has the same Spearman's rho as the sample at hand,

> cor(lossalae,method="spearman")
         Loss     ALAE
Loss 1.000000 0.451872
ALAE 0.451872 1.000000
> library(copula)
> paramgauss=.47
> paramclayton=.9
> paramgumbel=1.45
> copgauss=normalCopula(paramgauss)
> copclayton=claytonCopula(paramclayton, dim = 2)
> copgumbel=gumbelCopula(paramgumbel, dim = 2)
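The simulated (pointwise) confidence bands used on the graphs below, here for the Gaussian copula and the strong lower-tail function, can be obtained along the following lines. This is a sketch of my own, using the current copula API (rCopula) rather than the older pcopula/rcopula calls appearing elsewhere in this post; the number of simulations is arbitrary.

> ns=nrow(lossalae); nsim=1000
> Lsim=matrix(NA,nsim,length(z))
> for(s in 1:nsim){
+ uv=rCopula(ns,copgauss)
+ Us=rank(uv[,1])/(ns+1); Vs=rank(uv[,2])/(ns+1)
+ for(i in 1:length(z)) Lsim[s,i]=sum((Us<=z[i])&(Vs<=z[i]))/sum(Us<=z[i])
+ }
> Lband=apply(Lsim,2,quantile,probs=c(.025,.975),na.rm=TRUE)
> plot(z,Lemp,type="l")
> lines(z,Lband[1,],col="green")
> lines(z,Lband[2,],col="green")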

Here we obtain

The green curve is the (pointwise) 95% confidence interval for a Gaussian copula and a sample of the same size. We can see that the dependence structure is poorly modelled. With the dual of the Clayton copula, we obtain

and finally, for a Gumbel copula,

In short, the Gumbel copula really seems well suited… If we dig further by studying dependence in the weak sense, we can validate this model there as well. Indeed, if the reference is the Gaussian copula,

or a Clayton copula,

while a Gumbel copula would give