For the seventh chapter of the non-life insurance actuarial science course at ENSAE, we will discuss the modeling of individual claim costs, both for property damage and for third-party liability. The slides are online (as usual, the downloadable pdf version is more complete than the one on slideshare).
Tag Archives: lognormal
p-hacking, or cheating on a p-value
Yesterday evening, I discovered some interesting slides on False-Positives, p-Hacking, Statistical Power, and Evidential Value, via @UCBITSS's post on Twitter. More precisely, there was this slide on how to cheat (because that's basically what it is) to get a 'good' model (by targeting the p-value)
As mentioned by @david_colquhoun, one should be careful when reading the slides: some statisticians might have a heart attack when they read
But still, there are interesting points in that slide.
Income distribution and Tour de France
A few days ago, Jean-François Mignot published an interesting article entitled Tour de France 2014 : pourquoi le vainqueur gagne 100 fois plus que le 10e. In this article, we have the following graph, with the income of the cyclists as a function of their final ranking (the data were downloaded from http://sportbuzzbusiness.fr/)
> bike=read.csv(
+ "http://freakonometrics.free.fr/tourdefrance.csv",
+ sep=";",header=TRUE,dec=" ")
> bike[1:19,"prime"]=bike[1:19,"prime"]*1000
> plot(bike,log="y",type="b",
+ xlab="(Final) rank",ylab="Bonus")
As pointed out by Jean-François, while the winner gets a lot of money, the bonus decreases fast, very fast actually. The Gini index is very high here
> library(ineq)
> X=bike$prime   # the bonuses (the definition of X did not survive in the post; this is an assumption)
> ineq(X,type="Gini")
[1] 0.910461
and if we look at the Lorenz curve, indeed, the Tour de France is not very egalitarian,
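To draw that Lorenz curve, here is a short sketch (mine, not taken from the original post), using the Lc function from the ineq package loaded above,
> plot(Lc(X),col="blue",lwd=2)   # Lorenz curve of the bonuses; the diagonal is the line of perfect equality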
Maximum Likelihood versus Goodness of Fit
Thursday, I got an interesting question from a colleague of mine (JP). I mean, the way I understood the question turned out to be a nice puzzle (but I have to confess I might have misunderstood). The question is the following: consider an i.i.d. sample of continuous variables. We would like to choose between two (parametric) families for the distribution, call them F and G. If we use maximum likelihood techniques, we get two estimated distributions, one in each family. Clearly, the one from family F is a much better fit than the one from family G, in the sense of a standard goodness-of-fit test (e.g. Kolmogorov-Smirnov, since the sample is assumed to be obtained from a continuous variable). Does that mean that family F is (somehow) better than family G?
This is my interpretation of the question, and I found it amusing. So I will try to show (using simulated samples) that some odd situations can easily be obtained.
Consider a sample from a mixture of log-normal distributions,
> set.seed(228)
> X=exp(c(rnorm(50,1,1),rnorm(50,2,1.2)))
Consider two standard families for positive random variables: a Gamma distribution and a lognormal distribution.
> library(MASS)
> ab=fitdistr(X,"gamma")
> ms=fitdistr(X,"lognormal")
If we want to visualize those two fitted distributions, let us use
> u=seq(0,max(X),length=501)   # plotting grid (u is not defined in the post; this is an assumption)
> vab=pgamma(u,ab$estimate[1],ab$estimate[2])
> vms=plnorm(u,ms$estimate[1],ms$estimate[2])
> plot(ecdf(X))
> lines(u,vab,col="red")
> lines(u,vms,col="blue")
Here, we get
What else can we say? Actually, we can also compute the Kolmogorov-Smirnov statistic, i.e. the largest absolute distance between the empirical cumulative distribution function and the fitted one. This can be done using
> ks.test(X,"plnorm",ms$estimate[1],ms$estimate[2])

	One-sample Kolmogorov-Smirnov test

data:  X
D = 0.0693, p-value = 0.7231
alternative hypothesis: two-sided

> ks.test(X,"pgamma",ab$estimate[1],ab$estimate[2])

	One-sample Kolmogorov-Smirnov test

data:  X
D = 0.148, p-value = 0.02507
alternative hypothesis: two-sided
From a theoretical point of view, we should not look at the p-values, since the null distribution is based on a fixed distribution, not a fitted one (see the Lilliefors test for normal samples). But still. The Gamma distribution seems to be very far away from the true distribution. Its statistic is twice the one we get with the lognormal distribution. And one p-value is 72%, while the other one is 2.5%. Here, we should prefer the lognormal distribution to the Gamma one. But here, we did consider only one distribution in each family. Does that mean that we cannot find one Gamma distribution that will be better than all possible lognormal distributions? Better, for instance, according to the Kolmogorov-Smirnov statistic…
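As an aside on that remark about fitted parameters, a parametric bootstrap (in the spirit of the Lilliefors correction) can be used to get a more honest p-value; here is a minimal sketch, not from the original post, for the lognormal fit,
> D.obs=ks.test(X,"plnorm",ms$estimate[1],ms$estimate[2])$statistic
> D.sim=rep(NA,1000)
> for(b in 1:1000){
+ Xb=rlnorm(length(X),ms$estimate[1],ms$estimate[2])   # simulate under the fitted model
+ mb=fitdistr(Xb,"lognormal")                          # re-estimate on the simulated sample
+ D.sim[b]=ks.test(Xb,"plnorm",mb$estimate[1],mb$estimate[2])$statistic
+ }
> mean(D.sim>=D.obs)   # bootstrap p-value, accounting for parameter estimation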
Well, it is possible to use another strategy to find appropriate parameters: we can actually minimize this statistic. Define
> ks1=function(ms) {m=ms[1];s=ms[2];ks.test(X,"plnorm",m,s)$statistic}
> ks2=function(ab) {a=ab[1];b=ab[2];ks.test(X,"pgamma",a,b)$statistic}
and compute
> n1=nlm(ks1,c(ms$estimate[1],ms$estimate[2]))
> n1
$minimum
[1] 0.05252692
$estimate
[1] 1.547437 1.121864
> n2=nlm(ks2,c(ab$estimate[1],ab$estimate[2]))
> n2
$minimum
[1] 0.04737725
$estimate
[1] 1.1449041 0.167041
So here, it is actually possible to find a distribution much closer to the empirical sample within the Gamma family.
> vab=pgamma(u,n2$estimate[1],n2$estimate[2])
> vms=plnorm(u,n1$estimate[1],n1$estimate[2])
> lines(u,vab,col="red",lwd=2)
> lines(u,vms,col="blue",lwd=2)
What would be the point here? Maybe just the idea that the maximum likelihood estimator is only one estimator among many. And while it has interesting asymptotic properties, on small samples it might not be the best estimator to consider…
And to be completely honest, I've been cheating here… I mean, not really cheating (not more than any researcher using a statistical test to validate the findings). But here, I did fix the seed of the random number generator. Actually, such an example does not occur that frequently: out of 1,000 samples, I got this odd conclusion almost 15 times. And the smaller the sample, the more likely we are to observe that story, where the maximum likelihood estimator can be far away from the best fit. Here is the proportion of opposite conclusions, as a function of the sample size,
> SIM=function(ns=1000,n=100){
+ t=0
+ for(s in 1:ns){
+ set.seed(s)
+ X=exp(c(rnorm(n/2,1,1),rnorm(n/2,2,1.2)))
+ ks1=function(ms) {m=ms[1];s=ms[2];ks.test(X,"plnorm",m,s)$statistic}
+ ks2=function(ab) {a=ab[1];b=ab[2];ks.test(X,"pgamma",a,b)$statistic}
+ library(MASS)
+ ab=fitdistr(X,"gamma")
+ ms=fitdistr(X,"lognormal")
+ n1=nlm(ks1,c(ms$estimate[1],ms$estimate[2]))
+ n2=nlm(ks2,c(ab$estimate[1],ab$estimate[2]))
+ if( (ks.test(X,"plnorm",ms$estimate[1],ms$estimate[2])$statistic-
+ ks.test(X,"pgamma",ab$estimate[1],ab$estimate[2])$statistic)
+ *(n1$minimum-n2$minimum)<=0 ) t=t+1
+ }
+ return(t/ns)}
> VM=rep(NA,20)
> VS=seq(10,200,by=10)
> for(i in 1:20){VM[i]=SIM(n=VS[i],ns=1000)}
> plot(VS,VM,type="p")
So to provide a more complete answer to JP's question: with a very large sample, I guess the intuition should be valid… but clearly not on a small sample.
Modeling individual claim costs
This week, even though the UQAM network is down, we will continue the course and finish the section on modeling overdispersion for claim frequency. We should then start modeling individual claim costs. In particular, we will spend some time on two points,
- the distinction between lognormal and Gamma distributions
- the capping of large claims
The slides are online. And the claim-cost dataset is the one mentioned in the second class.
Bootstrap and regression
During the last class, we discussed the use of the bootstrap to obtain confidence intervals for predictions. I am putting online the code typed in class (only briefly commented; I can point to older posts from the ACT6420 course for further details). We will work with my favorite dataset for illustrating linear regression (before talking about loss development triangles, let us spend five minutes on simple things).
> plot(cars)
> reg=lm(dist~speed,data=cars)
> abline(reg,col="red")
> n=nrow(cars)
> x=21
> points(x,predict(reg,newdata=
+ data.frame(speed=x)),pch=19,col="red")
Here, we want to make a prediction at a given point. As recalled in class (and also in the forecasting models course), when we want to give a confidence interval for the prediction, we should distinguish between the confidence interval for the predictor (which depends on the estimation error of the parameters) and the confidence interval for a potential value (one can speak of scenario generation, which also depends on the model error, that is, the dispersion of the residuals). Let us start with the confidence interval for the prediction, the best estimate as one says in loss reserving,
> Yx=rep(NA,500)
> B=matrix(NA,500,2)
> for(s in 1:500){
+ indice=sample(1:n,size=n,
+ replace=TRUE)
+ base=cars[indice,]
+ #points(base,pch=3)
+ reg=lm(dist~speed,data=base)
+ abline(reg,col="light blue")
+ points(x,predict(reg,newdata=data.frame(speed=x)),pch=19,col="blue")
+ Yx[s]=predict(reg,newdata=data.frame(speed=x))
+ B[s,]=coefficients(reg)
+ }
The blue values here are possible predictions, obtained by resampling from our set of observations. As a reminder, the 90% confidence interval, under the assumption of normality of the residuals (and hence of the estimators of the slope and intercept of the regression line), is obtained as follows
> reg=lm(dist~speed,data=cars)
> U=predict(reg,interval ="confidence",
+ level=.9,newdata=
+ data.frame(speed=0:30))
> lines(0:30,U[,2],col="red",lwd=2)
> lines(0:30,U[,3],col="red",lwd=2)
We can compare here the distribution of the values obtained from our 500 generated datasets, and compare the empirical quantiles with the quantiles obtained under the normality assumption,
> hist(Yx,proba=TRUE,col="light blue",border="white")
> boxplot(Yx,horizontal=TRUE,at=.07,boxwex = 0.02,add=TRUE,col="light green")
> abline(v=U[x+1,2:3],col="red",lwd=2)
> D=density(Yx)
> lines(D)
> I=which(D$x<=quantile(Yx,.05))
> polygon(c(D$x[I],rev(D$x[I])),c(D$y[I],rep(0,length(I))),col="blue",border=NA)
> I=which(D$x>=quantile(Yx,.95))
> polygon(c(D$x[I],rev(D$x[I])),c(D$y[I],rep(0,length(I))),col="blue",border=NA)
We can note that the orders of magnitude are comparable.
> reg=lm(dist~speed,data=cars)
> quantile(Yx,c(.05,.95))
      5%      95% 
58.63689 70.31281 
> predict(reg,interval ="confidence",
+ level=.9,newdata=data.frame(speed=x))
       fit      lwr      upr
1 65.00149 59.65934 70.34364
Let us now look at the other type of confidence interval, on a possible value of the variable of interest. This time, in addition to drawing new samples and computing predictions, we also add a noise term at each draw, which gives a possible value.
> Yx=rep(NA,500)
> for(s in 1:500){
+ indice=sample(1:n,size=n,
+ replace=TRUE)
+ base=cars[indice,]
+ #points(base,pch=3)
+ reg=lm(dist~speed,data=base)
+ erreur=residuals(reg)
+ #abline(reg,lty=2)
+ E=sample(erreur,size=1)
+ Yx[s]=predict(reg,newdata=data.frame(speed=x))+E
+ points(x,Yx[s],pch=19,col="red")
+ }
Here again, we can compare (graphically, to start with) the values obtained by resampling and those obtained under the normality assumption,
> hist(Yx,proba=TRUE,col="light blue",border="white")
> boxplot(Yx,horizontal=TRUE,at=.025,boxwex = 0.005,add=TRUE,col="light green")
> abline(v=U[2:3],col="red",lwd=2)
> D=density(Yx)
> lines(D)
> I=which(D$x<=quantile(Yx,.05))
> polygon(c(D$x[I],rev(D$x[I])),c(D$y[I],rep(0,length(I))),col="blue",border=NA)
> I=which(D$x>=quantile(Yx,.95))
> polygon(c(D$x[I],rev(D$x[I])),c(D$y[I],rep(0,length(I))),col="blue",border=NA)
Numerically, this gives the following comparison
> quantile(Yx,c(.05,.95))
      5%      95% 
44.43468 96.01357 
> (U=predict(reg,interval ="prediction",level=.9,newdata=data.frame(speed=x)))
       fit      lwr      upr
1 67.63136 45.16967 90.09305
This time, we observe a slight asymmetry to the right. Clearly, we cannot assume Gaussian residuals, as there are more large positive values than negative ones. This makes sense given the nature of the data (a distance cannot be negative).
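To check that claim about the residuals, here is a quick sketch of mine (not code from the class), with a normal QQ-plot and a normality test on the residuals of the regression fitted on the full dataset,
> reg=lm(dist~speed,data=cars)
> qqnorm(residuals(reg))
> qqline(residuals(reg))
> shapiro.test(residuals(reg))   # a small p-value suggests the residuals are not Gaussian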
We then started to discuss the use of regression models in loss reserving. In order to have data that can be considered independent, we recalled that one should work with incremental payments, and not with cumulative payments.
> T
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,] 3209 4372 4411 4428 4435 4456
[2,] 3367 4659 4696 4720 4730   NA
[3,] 3871 5345 5398 5420   NA   NA
[4,] 4239 5917 6020   NA   NA   NA
[5,] 4929 6794   NA   NA   NA   NA
[6,] 5217   NA   NA   NA   NA   NA
> n=ncol(T)
> Y=T
> Y[,2:n]=T[,2:n]-
+ T[,1:(n-1)]
> Y
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,] 3209 1163   39   17    7   21
[2,] 3367 1292   37   24   10   NA
[3,] 3871 1474   53   22   NA   NA
[4,] 4239 1678  103   NA   NA   NA
[5,] 4929 1865   NA   NA   NA   NA
[6,] 5217   NA   NA   NA   NA   NA
We can then build a dataset, with the row (occurrence year) and the column (development year) as explanatory variables.
> y=as.vector(as.matrix(Y))
> base=data.frame(
+ y,
+ ai=rep(2000:2005,n),
+ bj=rep(0:(n-1),each=n))
> head(base,12)
      y   ai bj
1  3209 2000  0
2  3367 2001  0
3  3871 2002  0
4  4239 2003  0
5  4929 2004  0
6  5217 2005  0
7  1163 2000  1
8  1292 2001  1
9  1474 2002  1
10 1678 2003  1
11 1865 2004  1
12   NA 2005  1
> tail(base,12)
    y   ai bj
25  7 2000  4
26 10 2001  4
27 NA 2002  4
28 NA 2003  4
29 NA 2004  4
30 NA 2005  4
31 21 2000  5
32 NA 2001  5
33 NA 2002  5
34 NA 2003  5
35 NA 2004  5
36 NA 2005  5
We can then start by using the model from Regression models based on log-incremental payments by Stavros Christofides, based on a lognormal model initially introduced by Etienne de Vylder in 1978 (Markus discusses it, in three parts, on his blog http://lamages.blogspot.ca/Barnett%20Zehnwirth)
> reg1=lm(log(y)~
+ as.factor(ai)+
+ as.factor(bj),data=base)
> summary(reg1)

Call:
lm(formula = log(y) ~ as.factor(ai) + as.factor(bj), data = base)

Residuals:
     Min       1Q   Median       3Q      Max 
-0.26374 -0.05681  0.00000  0.04419  0.33014 

Coefficients:
                  Estimate Std. Error t value Pr(>|t|)    
(Intercept)         7.9471     0.1101  72.188 6.35e-15 ***
as.factor(ai)2001   0.1604     0.1109   1.447  0.17849    
as.factor(ai)2002   0.2718     0.1208   2.250  0.04819 *  
as.factor(ai)2003   0.5904     0.1342   4.399  0.00134 ** 
as.factor(ai)2004   0.5535     0.1562   3.543  0.00533 ** 
as.factor(ai)2005   0.6126     0.2070   2.959  0.01431 *  
as.factor(bj)1     -0.9674     0.1109  -8.726 5.46e-06 ***
as.factor(bj)2     -4.2329     0.1208 -35.038 8.50e-12 ***
as.factor(bj)3     -5.0571     0.1342 -37.684 4.13e-12 ***
as.factor(bj)4     -5.9031     0.1562 -37.783 4.02e-12 ***
as.factor(bj)5     -4.9026     0.2070 -23.685 4.08e-10 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.1753 on 10 degrees of freedom
  (15 observations deleted due to missingness)
Multiple R-squared: 0.9975, Adjusted R-squared: 0.9949
F-statistic: 391.7 on 10 and 10 DF,  p-value: 1.338e-11

> base$py=exp(predict(reg1,
+ newdata=base)+summary(reg1)$sigma^2/2)
> round(matrix(base$py,n,n),1)
       [,1]   [,2] [,3] [,4] [,5] [,6]
[1,] 2871.2 1091.3 41.7 18.3  7.8 21.3
[2,] 3370.8 1281.2 48.9 21.5  9.2 25.0
[3,] 3768.0 1432.1 54.7 24.0 10.3 28.0
[4,] 5181.5 1969.4 75.2 33.0 14.2 38.5
[5,] 4994.1 1898.1 72.5 31.8 13.6 37.1
[6,] 5297.8 2013.6 76.9 33.7 14.5 39.3
> sum(base$py[is.na(base$y)])
[1] 2481.857
We obtain an amount slightly different from the one given by the Chain Ladder method, but nevertheless comparable. We can also try a Poisson regression (with a log link), as suggested in 1975 by Hachemeister and Stanard,
> reg2=glm(y~
+ as.factor(ai)+
+ as.factor(bj),data=base,
+ family=poisson)
> summary(reg2)

Call:
glm(formula = y ~ as.factor(ai) + as.factor(bj), family = poisson,
    data = base)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-2.3426  -0.4996   0.0000   0.2770   3.9355  

Coefficients:
                  Estimate Std. Error z value Pr(>|z|)    
(Intercept)        8.05697    0.01551 519.426  < 2e-16 ***
as.factor(ai)2001  0.06440    0.02090   3.081  0.00206 ** 
as.factor(ai)2002  0.20242    0.02025   9.995  < 2e-16 ***
as.factor(ai)2003  0.31175    0.01980  15.744  < 2e-16 ***
as.factor(ai)2004  0.44407    0.01933  22.971  < 2e-16 ***
as.factor(ai)2005  0.50271    0.02079  24.179  < 2e-16 ***
as.factor(bj)1    -0.96513    0.01359 -70.994  < 2e-16 ***
as.factor(bj)2    -4.14853    0.06613 -62.729  < 2e-16 ***
as.factor(bj)3    -5.10499    0.12632 -40.413  < 2e-16 ***
as.factor(bj)4    -5.94962    0.24279 -24.505  < 2e-16 ***
as.factor(bj)5    -5.01244    0.21877 -22.912  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 46695.269  on 20  degrees of freedom
Residual deviance:    30.214  on 10  degrees of freedom
  (15 observations deleted due to missingness)
AIC: 209.52

Number of Fisher Scoring iterations: 4

> base$py2=predict(reg2,
+ newdata=base,type="response")
> round(matrix(base$py2,n,n),1)
       [,1]   [,2] [,3] [,4] [,5] [,6]
[1,] 3155.7 1202.1 49.8 19.1  8.2 21.0
[2,] 3365.6 1282.1 53.1 20.4  8.8 22.4
[3,] 3863.7 1471.8 61.0 23.4 10.1 25.7
[4,] 4310.1 1641.9 68.0 26.1 11.2 28.7
[5,] 4919.9 1874.1 77.7 29.8 12.8 32.7
[6,] 5217.0 1987.3 82.4 31.6 13.6 34.7
> sum(base$py2[is.na(base$y)])
[1] 2426.985
The prediction coincides with the estimator obtained with the Chain Ladder method. The link with minimum bias methods was established by Klaus Schmidt and Angela Wünsche in 1998, in Chain ladder, marginal sum and maximum likelihood estimation. Next week, we will talk about bootstrap methods to obtain confidence intervals, or quantiles, on reserve amounts. I do not know whether I will have time to type slides; for this part of the course, I prefer to type code as we go along and to write on the board. I refer to Chapter 3 of the book with Christophe Dutang – online at http://cran.r-project.org/doc/contrib/ – for the details. This is the code I type in class, while also trying to answer questions.
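To see that the Poisson regression does reproduce the Chain Ladder estimate, here is a quick check (my own sketch, not part of the course material): compute the development factors directly from the cumulative triangle T, project the lower triangle, and sum the missing payments,
> lambda=sapply(1:(n-1),function(j)
+ sum(T[1:(n-j),j+1])/sum(T[1:(n-j),j]))
> Tfull=T
> for(j in 1:(n-1)){
+ idx=which(is.na(Tfull[,j+1]))
+ Tfull[idx,j+1]=Tfull[idx,j]*lambda[j]
+ }
> sum(Tfull[,n]-diag(T[,n:1]))   # total reserve, to be compared with the 2426.985 above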
Benford law and lognormal distributions
Benford’s law is nowadays extremely popular (see e.g. http://en.wikipedia.org/…). It is usually claimed that, for a given data set, changing units does not affect the distribution of the first digit. Thus, it should be related to scale invariant distributions. Heuristically, scale (or unit) invariance means that the density f of the measure (or probability function) should be proportional to the rescaled density x ↦ f(kx). Thus, because densities integrate to 1, the proportionality coefficient has to be k, and therefore f should satisfy the functional equation k f(kx) = f(x), for all k > 0 and all x in the support. The solution of this functional equation is f(x) = c/x; I guess this can be proved easily by solving an ordinary differential equation.
Now, if D denotes the first digit of X, in base 10, then P(D = d) = log10(1 + 1/d), for d = 1, 2, …, 9, which is the so-called Benford’s law. So this distribution looks like that
> (benford=log(1+1/(1:9))/log(10))
[1] 0.30103000 0.17609126 0.12493874 0.09691001 0.07918125
[6] 0.06694679 0.05799195 0.05115252 0.04575749
> names(benford)=1:9
> sum(benford)
[1] 1
> barplot(benford,col="white",ylim=c(-.045,.3))
> abline(h=0)
To compute the empirical distribution from a sample, use the following function
> firstdigit=function(x){
+ if(x>=1){x=as.numeric(substr(as.character(x),1,1)); zero=FALSE}
+ if(x<1){zero=TRUE}
+ while(zero==TRUE){
+ x=x*10; zero=FALSE
+ if(trunc(x)==0){zero=TRUE}
+ }
+ return(trunc(x))
+ }
and then
> Xd=sapply(X,firstdigit)
> table(Xd)/1000
In Benford’s Law: An Empirical Investigation and a Novel Explanation, we can read
It is not a mathematical article, so do not expect any formal proof in this paper. At least, we can run Monte Carlo simulations, and see what's going on if we generate samples from a lognormal distribution with variance parameter σ². For instance, with a unit variance,
> set.seed(1)
> s=1
> X=rlnorm(n=1000,0,s)
> Xd=sapply(X,firstdigit)
> table(Xd)/1000
Xd
    1     2     3     4     5     6     7     8     9 
0.288 0.172 0.121 0.086 0.075 0.072 0.073 0.053 0.060 
> T=rbind(benford,-table(Xd)/1000)
> barplot(T,col=c("red","white"),ylim=c(-.045,.3))
> abline(h=0)
Clearly, it is not far away from Benford's law. Perhaps a more formal test can be considered, for instance Pearson's (goodness of fit) test.
> chisq.test(T,p=benford)

	Chi-squared test for given probabilities

data:  T
X-squared = 10.9976, df = 8, p-value = 0.2018
So yes, Benford's law is admissible! Now, if we consider the case where σ is smaller (say 0.9), it is a rather different story,
compared with the case where σ is larger (say 1.1)
It is possible to generate several samples (always of the same size, here 1,000 observations), just changing the variance parameter, and to compute the p-value of the test. There might be one tricky part: when generating samples from lognormal distributions with a small variance, it is possible that some digits do not appear at all. In that case, there is a problem with the test. So we just use here
> T=table(Xd)
> T=T[as.character(1:9)]
> T[is.na(T)]=0
> PVAL[i]=chisq.test(T,p=benford)$p.value
Boxplots of the p-value of the test are the following,
When σ is too small, it is clearly not Benford's distribution: for half (or more) of our samples, the p-value is lower than 5%. On the other hand, when σ is large (enough), Benford's distribution is the distribution of the first digit of lognormal samples, since 95% of our samples have p-values higher than 5% (and the distribution of the p-value is almost uniform on the unit interval). Here is the proportion of samples where the p-value was lower than 5% (out of 5,000 generations each time)
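For completeness, here is a sketch of the simulation loop behind that figure (my own reconstruction, with fewer replications than the 5,000 used above, to keep it fast): for each value of σ, generate lognormal samples, test the first-digit distribution against Benford's law, and record the proportion of p-values below 5%,
> sigma=seq(.4,1.6,by=.2)
> prop=sapply(sigma,function(s){
+ pv=replicate(500,{
+ X=rlnorm(1000,0,s)
+ Xd=sapply(X,firstdigit)
+ T=table(Xd); T=T[as.character(1:9)]; T[is.na(T)]=0
+ chisq.test(T,p=benford)$p.value})
+ mean(pv<.05)})
> plot(sigma,prop,type="b")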
Note that it is also possible to compute the p-value of the Kolmogorov-Smirnov test, testing whether the p-value has a uniform distribution,
> ks.test(PVAL[,s], "punif")$p.value
Indeed, if σ is larger than 1.15 (or around that value), it looks like Benford's law is a suitable distribution for the first digit.
Modeling individual claim costs in ratemaking
Before finishing the part of the course on ratemaking, we will talk about modeling individual claim costs. We will discuss Gamma and lognormal distributions (for the latter, I suggest re-reading what was said in the regression models course about log-linear models, recalled in a short post published last fall). We will also talk about mixtures of distributions, and about multinomial distributions. The slides are online here,
To go further, there is the article by Fu & Moncher (2004) on the Gamma versus lognormal comparison, http://casact.org/… or Holler, Sommer & Trahair (1999) http://casact.org/… which offered a state of the art, about fifteen years ago. Otherwise, I recommend reading the Practitioner's Guide to Generalized Linear Models, online at http://casact.org/….
The log transformation in linear models
A quick post to complement and illustrate the log transformation in a linear model (which we will discuss this week in class). The starting point is the linear model, where we assume that, conditionally on X, Y follows a normal distribution. As a reminder, if X ~ N(μ,σ²), then E(X) = μ and Var(X) = σ². The 90% and 95% confidence intervals are symmetric around the mean (which, incidentally, is also the median),
In a Gaussian model with homoscedasticity, E(Y|X=x) = β0 + β1 x, while Var(Y|X=x) = σ². We then have the following confidence bands, for a linear regression model,
Now, what happens if we take the exponential? For the normal distribution, recall that we obtain a lognormal distribution, LN(μ,σ²), the two parameters being those of the underlying normal distribution, since now E(exp(X)) = exp(μ + σ²/2), while Var(exp(X)) = [exp(σ²) − 1]·exp(2μ + σ²).
Graphically, we have the following distribution, with the 90% and 95% confidence intervals shown below. The black dot is exp(μ), the median, while the blue dot is the expected value of the lognormal distribution.
Note that a quantile of the lognormal distribution is the exponential of the corresponding quantile of the normal distribution. Indeed, if P(X ≤ q) = α, then P(exp(X) ≤ exp(q)) = α. In particular, exp(μ) is not the mean of exp(X), but its median (since μ was the median of X).
But it is not uncommon to see a confidence interval of the form "mean plus or minus a multiple of the standard deviation", which is the classical form of the Gaussian confidence interval (symmetric around the mean). Here, we would get the following levels,
Note that there is no reason here for the probability of being inside that interval to match the nominal level obtained with the quantiles of the normal distribution.
Now, if we take the exponential of a linear model (i.e. the logarithm of the variable of interest is modeled by a linear model), we have E(Y|X=x) = exp(β0 + β1 x + σ²/2),
with a (conditional) variance that depends on the explanatory variable, Var(Y|X=x) = [exp(σ²) − 1]·exp(2(β0 + β1 x) + σ²).
Here again, the most natural choice is to use, as bounds of the confidence interval, quantiles of the lognormal distribution,
but it is not uncommon to see Gaussian-type intervals,
Here again, we lose the interpretation, since the bounds no longer correspond to quantiles.
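As a small numerical illustration (my own sketch, not in the original post), take a LN(0,1) distribution: the "mean plus or minus 1.96 standard deviations" interval does not have 95% coverage (its lower bound is even negative), while the interval built from the lognormal quantiles has exact coverage by construction,
> mu=0; sigma=1
> m=exp(mu+sigma^2/2)                          # mean of the LN(0,1) distribution
> s=sqrt((exp(sigma^2)-1)*exp(2*mu+sigma^2))   # its standard deviation
> plnorm(m+1.96*s,mu,sigma)-plnorm(pmax(m-1.96*s,0),mu,sigma)   # actual probability of the Gaussian-style interval
> qlnorm(c(.025,.975),mu,sigma)                # quantile-based bounds, exact 95% by construction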
Bounding sums of random variables, part 1
For the last course of MAT8886 of this (long) winter session, on copulas (and extremes), we will discuss risk aggregation. The course will be mainly on the problem of bounding the distribution (or some risk measure, say the Value-at-Risk) of the sum of two random variables with given marginal distributions. For instance, suppose we have two Gaussian risks. What could be the worst-case scenario for the 99% quantile of the sum? Note that I mention implications in terms of risk management, but of course, those questions are extremely important in terms of statistical inference, see e.g. Fan & Park (2006).
This problem is sometimes related to a question asked by Kolmogorov almost one hundred years ago, as mentioned in Makarov (1981). One year later, Rüschendorf (1982) also suggested a proof of bounds calculation. Here, we focus on dimension 2. As usual, it is the simple case. But as mentioned recently, in Kreinovich & Ferson (2005), in dimension 3 (or higher), “computing the best-possible bounds for arbitrary n is an NP-hard (computationally intractable) problem“. So let us focus on the case where we sum (only) two random variables (for those interested in higher dimensions, Puccetti & Rüschendorf (2012) provided interesting results for a dual version of those optimal bounds).
Consider the set of univariate continuous distribution functions, left-continuous, and two distributions F and G in that set. In a very general setting, it is possible to consider operators acting on such pairs of distributions. Thus, let T denote an operator on the unit square, increasing in each component and such that T(1,1) = 1, and consider some function g, assumed to be also increasing in each component (and continuous). For such functions T and g, one can define a (general) operator that maps a pair (F,G) of distribution functions to a new function on the real line.
One interesting case can be obtained when the operator T is a copula C: the resulting operator maps (F,G) to the function x ↦ sup{ C(F(u),G(v)) : g(u,v) = x }, and it is further possible to rewrite that expression in several equivalent ways.
It is also possible to consider other (general) operators, e.g. one based on the sum (integrating dC(F(u),G(v)) over the region where g(u,v) ≤ x), or one based on the minimum, involving the survival copula associated with C, defined as Ĉ(u,v) = u + v − 1 + C(1−u,1−v). Note that those operators can be used to define distribution functions.
All that seems too theoretical? An application can be the case of the sum, i.e. g(x,y) = x + y: in that case, the sum-based operator gives the distribution of the sum of two random variables with marginal distributions F and G, and copula C. With the independence copula, it is simply the convolution of the two distributions,
The important result (that can be found in Chapter 7 of Schweizer and Sklar (1983)) is that, given such an operator and any copula C, one can find a lower bound as well as an upper bound for the resulting distribution, bounds that depend only on the marginals F and G (and not on the copula C). Those inequalities come from the fact that, for any copula C, C(u,v) ≥ W(u,v) = max{u + v − 1, 0}, where W is itself a copula (the lower Fréchet-Hoeffding bound). Since this function is no longer a copula in higher dimension, one can easily imagine that getting those bounds in higher dimension will be much more complicated…
In the case of the sum of two random variables X and Y, with marginal distributions F and G, the bounds for the distribution of the sum, H(x) = P(X + Y ≤ x), can be written
H⁻(x) = sup{ max{F(u) + G(v) − 1, 0} : u + v = x }
for the lower bound, and
H⁺(x) = inf{ min{F(u) + G(v), 1} : u + v = x }
for the upper bound. And those bounds are sharp, in the sense that, for all x, there is a copula such that the distribution of the sum reaches the lower bound at x, and there is (another) copula such that the distribution of the sum reaches the upper bound at x.
Thus, using those results, it is possible to bound the cumulative distribution function of the sum. But actually, all that can be done also on quantiles (see Frank, Nelsen & Schweizer (1987)). For every distribution function F, let F⁻¹ denote its generalized inverse, left continuous, and consider the set of those quantile functions. One can then define dual versions of the previous operators, acting on quantile functions rather than on distribution functions; those definitions are really the dual versions of the previous ones.
Note that if we focus on sums of bivariate distributions, the lower bound for the quantile of the sum, at level α, is
Q⁻(α) = sup{ F⁻¹(u) + G⁻¹(α − u) : u ∈ [0, α] }
while the upper bound is
Q⁺(α) = inf{ F⁻¹(u) + G⁻¹(α − u + 1) : u ∈ [α, 1] }.
A great thing is that it should not be too difficult to compute those quantities numerically. Perhaps a little bit more difficult for cumulative distribution functions, since they are not defined on a bounded support. But still, if the goal is simply to plot those bounds on, say, [0,10]. The code is the following, for the sum of two LN(0,1) lognormal distributions.
> F=function(x) plnorm(x,0,1)
> G=function(x) plnorm(x,0,1)
> n=100
> X=seq(0,10,by=.05)
> Hinf=Hsup=rep(NA,length(X))
> for(i in 1:length(X)){
+ x=X[i]
+ U=seq(0,x,by=1/n); V=x-U
+ Hinf[i]=max(pmax(F(U)+G(V)-1,0))
+ Hsup[i]=min(pmin(F(U)+G(V),1))}
If we plot those bounds, we obtain
> plot(X,Hinf,ylim=c(0,1),type="s",col="red")
> lines(X,Hsup,type="s",col="red")
But somehow, it is even simpler to work with quantiles, since they are defined on a bounded interval. Quantiles are here
> Finv=function(u) qlnorm(u,0,1)
> Ginv=function(u) qlnorm(u,0,1)
The idea will be to consider a discretized version of the unit interval, as discussed in Williamson (1989), in a much more general setting. Again, the idea is to compute bounds for the quantile of the sum at levels i/n. To do so, consider grids of levels j/n, evaluate F⁻¹(j/n) + G⁻¹((i−j)/n) (or its analogue for the upper bound), and optimize over j: this gives the bound for the quantile function at point i/n. The code to compute those bounds, for a given n, is here
> n=1000
> Qinf=Qsup=rep(NA,n-1)
> for(i in 1:(n-1)){
+ J=0:i
+ Qinf[i]=max(Finv(J/n)+Ginv((i-J)/n))
+ J=(i-1):(n-1)
+ Qsup[i]=min(Finv((J+1)/n)+Ginv((i-1-J+n)/n))
+ }
Here is what we get (several values of n were considered, so that we can visualize the convergence of that numerical algorithm),
Here, we have a simple code to visualize bounds for quantiles for the sum of two risks. But it is possible to go further…
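For instance, as a small sketch going one step further (not in the original post), one can read off bounds on the 99% quantile (the Value-at-Risk) of the sum from the discretized quantile bounds computed above, and compare them with the comonotonic case,
> alpha=.99
> i=round(alpha*n)
> c(Qinf[i],Qsup[i])        # bounds on the 99% quantile of the sum
> Finv(alpha)+Ginv(alpha)   # comonotonic case, for comparison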
Maximum likelihood estimates for multivariate distributions
Consider our loss-ALAE dataset, and – as in Frees & Valdez (1998) – let us fit a parametric model, in order to price a reinsurance treaty. The dataset is the following,
> library(evd)
> data(lossalae)
> Z=lossalae
> X=Z[,1];Y=Z[,2]
The first step can be to estimate marginal distributions, independently. Here, we consider lognormal distributions for both components,
> Fempx=function(x) mean(X<=x)
> Fx=Vectorize(Fempx)
> u=exp(seq(2,15,by=.05))
> plot(u,Fx(u),log="x",type="l",
+ xlab="loss (log scale)")
> Lx=function(px) -sum(log(Vectorize(dlnorm)(
+ X,px[1],px[2])))
> opx=optim(c(1,5),fn=Lx)
> opx$par
[1] 9.373679 1.637499
> lines(u,Vectorize(plnorm)(u,opx$par[1],
+ opx$par[2]),col="red")
The fit here is quite good,
For the second component, we do the same,
> Fempy=function(x) mean(Y<=x)
> Fy=Vectorize(Fempy)
> u=exp(seq(2,15,by=.05))
> plot(u,Fy(u),log="x",type="l",
+ xlab="ALAE (log scale)")
> Ly=function(px) -sum(log(Vectorize(dlnorm)(
+ Y,px[1],px[2])))
> opy=optim(c(1.5,10),fn=Ly)
> opy$par
[1] 8.522452 1.429645
> lines(u,Vectorize(plnorm)(u,opy$par[1],
+ opy$par[2]),col="blue")
It is not as good as the fit obtained on losses, but it is not that bad,
Now, consider a multivariate model, with a Gumbel copula. We have seen before that it works well. But this time, consider the global maximum likelihood estimator, where the marginal and copula parameters are estimated jointly.
> Cop=function(u,v,a) exp(-((-log(u))^a+
+ (-log(v))^a)^(1/a))
> phi=function(t,a) (-log(t))^a
> cop=function(u,v,a) Cop(u,v,a)*(phi(u,a)+
+ phi(v,a))^(1/a-2)*(
+ a-1+(phi(u,a)+phi(v,a))^(1/a))*(phi(u,a-1)*
+ phi(v,a-1))/(u*v)
> L=function(p) {-sum(log(Vectorize(dlnorm)(
+ X,p[1],p[2])))-
+ sum(log(Vectorize(dlnorm)(Y,p[3],p[4])))-
+ sum(log(Vectorize(cop)(plnorm(X,p[1],p[2]),
+ plnorm(Y,p[3],p[4]),p[5])))}
> opz=optim(c(1.5,10,1.5,10,1.5),fn=L)
> opz$par
[1] 9.377219 1.671410 8.524221 1.428552 1.468238
Marginal parameters are (slightly) different from the ones obtained independently,
> c(opx$par,opy$par)
[1] 9.373679 1.637499 8.522452 1.429645
> opz$par[1:4]
[1] 9.377219 1.671410 8.524221 1.428552
And the parameter of the Gumbel copula is close to the one obtained with heuristic methods in class.
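As a quick sanity check (a sketch of mine, not in the original post): for the Gumbel copula, Kendall's tau equals 1 − 1/α, so inverting the empirical tau gives a simple moment-type estimator of the copula parameter, to compare with the 1.468238 obtained above,
> tau=cor(X,Y,method="kendall")
> 1/(1-tau)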
Now that we have a model, let us play with it, to price a reinsurance treaty. But first, let us see how to generate samples from a Gumbel copula… One idea can be to use the frailty approach, based on a stable frailty. And we can use Chambers et al (1976) to generate a stable distribution. So here is the algorithm to generate samples from a Gumbel copula
> alpha=opz$par[5]
> invphi=function(t,a) exp(-t^(1/a))
> n=500
> x=matrix(rexp(2*n),n,2)
> angle=runif(n,0,pi)
> E=rexp(n)
> beta=1/alpha
> stable=sin((1-beta)*angle)^((1-beta)/beta)*
+ (sin(beta*angle))/(sin(angle))^(1/beta)/
+ (E^(alpha-1))
> U=invphi(x/stable,alpha)
> plot(U)
Here, we consider only 500 simulations,
> Xloss=qlnorm(U[,1],opz$par[1],opz$par[2])
> Xalae=qlnorm(U[,2],opz$par[3],opz$par[4])
In standard reinsurance treaties – see e.g. Clarke (1996) – allocated expenses are split pro rata capita between the insurance company and the reinsurer. If X denotes the losses and Y the allocated expenses, a standard excess-of-loss treaty, with retention R and limit L, has the payoff computed below,
> L=100000
> R=50000
> Z=((Xloss-R)+(Xloss-R)/Xloss*Xalae)*
+ (R<=Xloss)*(Xloss<L)+
+ ((L-R)+(L-R)/R*Xalae)*(L<=Xloss)
> mean(Z)
[1] 12596.45
Now, play with it… it is possible to find a better fit, I guess…
Logarithmic transformation of time series
To follow up on a discussion started at the end of class: in some cases, one may have the impression that modeling a series directly could be complicated,
> plot(Y,xlim=c(1,length(Y)+20))
But one may have the intuition that modeling the logarithm of the series could be simpler,
> X=log(Y)
> plot(X,xlim=c(1,length(X)+20))
We then try to model this latter series with an ARMA process,
> md=arima(X,c(12,0,1))
> P=predict(md,24)
> E=P$pred
> V=P$se^2
We can then produce a forecast for this series, which is simpler to model, and visualize it.
> temps=length(X)+1:24
> ciu=(E+2*sqrt(V))
> cil=(E-2*sqrt(V))
> polygon(c(temps,rev(temps)),c(ciu,
+ rev(cil)),col="yellow",border=NA)
> lines(temps,E,col="red",lwd=2)
> lines(temps,ciu,col="red",lty=2)
> lines(temps,cil,col="red",lty=2)
Now, we have to transform back. We will use a result seen for the logarithmic transformation in regression: if, after taking logarithms, we have a simple Gaussian model, it means the initial model was lognormal. We can then use the properties of the lognormal distribution, whose moments are known from those of the underlying Gaussian distribution. For the point forecast, we do not really have a choice,
> mu=exp(E+.5*V)
On the other hand, to build a confidence interval, either we use the variance of our lognormal distribution to get the variance of the process, and then forget the lognormal story and build a Gaussian interval,
> sig2=(exp(V)-1)*exp(2*E+V)
> ci1u=mu+2*sqrt(sig2)
> ci1l=mu-2*sqrt(sig2)
or we use the fact that, since the transformation is monotone, the confidence interval can be seen as the transformation of the previous confidence interval,
> ci2u=exp(E+2*sqrt(V))
> ci2l=exp(E-2*sqrt(V))
If we compare the two visually, we get, in the first case,
> plot(Y,xlim=c(1,length(X)+20))
> temps=length(X)+1:24
> polygon(c(temps,rev(temps)),c(ci1u,
+ rev(ci1l)),col="yellow",border=NA)
> lines(temps,mu,col="red",lwd=2)
> lines(temps,ci1u,col="red",lty=2)
> lines(temps,ci1l,col="red",lty=2)
(which is symmetric and centered on our forecast) and, in the second,
> plot(Y,xlim=c(1,length(X)+20))
> temps=length(X)+1:24
> ci1u=mu+2*sqrt(sig2)
> ci1l=mu-2*sqrt(sig2)
> polygon(c(temps,rev(temps)),c(ci2u,
+ rev(ci2l)),col="yellow",border=NA)
> lines(temps,mu,col="red",lwd=2)
> lines(temps,ci2u,col="red",lty=2)
> lines(temps,ci2l,col="red",lty=2)
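Another option (my own sketch, not from the original notes) is to simulate from the Gaussian predictive distribution of the log-series, take exponentials, and use empirical quantiles; this should essentially reproduce the second, asymmetric interval,
> ns=10000
> ci3=sapply(1:24,function(h)
+ quantile(exp(rnorm(ns,E[h],sqrt(V[h]))),c(.025,.975)))
> lines(temps,ci3[1,],col="blue",lty=3)
> lines(temps,ci3[2,],col="blue",lty=3)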
Visualization in regression analysis
> library(gdata)
> XLS1=read.xls("http://api.worldbank.org/datafiles/NY.GDP.PCAP.PP.CD_Indicator_MetaData_en_EXCEL.xls",
+ sheet = 1)
> data1=XLS1[-(1:28),c("Country.Name","Country.Code","X2010")]
> names(data1)[3]="GDP"
> XLS2=read.xls("http://api.worldbank.org/datafiles/SH.DYN.MORT_Indicator_MetaData_en_EXCEL.xls",
+ sheet = 1)
> data2=XLS2[-(1:28),c("Country.Code","X2010")]
> names(data2)[2]="MORTALITY"
> data=merge(data1,data2)
> head(data)
  Country.Code         Country.Name       GDP MORTALITY
1          ABW                Aruba        NA        NA
2          AFG          Afghanistan  1207.278     149.2
3          AGO               Angola  6119.930     160.5
4          ALB              Albania  8817.009      18.4
5          AND              Andorra        NA       3.8
6          ARE United Arab Emirates 47215.315       7.1
If we estimate a simple linear regression – MORTALITY = β0 + β1·GDP + ε – we get
> regBB=lm(MORTALITY~GDP,data=data)
> summary(regBB)

Call:
lm(formula = MORTALITY ~ GDP, data = data)

Residuals:
   Min     1Q Median     3Q    Max 
-45.24 -29.58 -12.12  16.19 115.83 

Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept) 67.1008781  4.1577411  16.139  < 2e-16 ***
GDP         -0.0017887  0.0002161  -8.278 3.83e-14 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 39.99 on 167 degrees of freedom
  (47 observations deleted due to missingness)
Multiple R-squared: 0.2909, Adjusted R-squared: 0.2867
F-statistic: 68.53 on 1 and 167 DF,  p-value: 3.834e-14
We can look at the scatter plot, including the linear regression line, and some confidence bounds,
> plot(data$GDP,data$MORTALITY,xlab="GDP per capita",
+ ylab="Mortality rate (under 5)",cex=.5)
> text(data$GDP,data$MORTALITY,data$Country.Name,pos=3)
> x=seq(-10000,100000,length=101)
> y=predict(regBB,newdata=data.frame(GDP=x),
+ interval="prediction",level = 0.9)
> lines(x,y[,1],col="red")
> lines(x,y[,2],col="red",lty=2)
> lines(x,y[,3],col="red",lty=2)
We should be able to do a better job here. For instance, if we look at the Box-Cox profile likelihood,
> library(MASS)   # boxcox() is in the MASS package, not loaded above
> boxcox(regBB)
it looks like taking the logarithm of the mortality rate should be better, i.e. log(MORTALITY) = β0 + β1·GDP + ε, or equivalently MORTALITY = exp(β0 + β1·GDP + ε):
> regLB=lm(log(MORTALITY)~GDP,data=data)
> summary(regLB)

Call:
lm(formula = log(MORTALITY) ~ GDP, data = data)

Residuals:
    Min      1Q  Median      3Q     Max 
-1.3035 -0.5837 -0.1138  0.5597  3.0583 

Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept)  3.989e+00  7.970e-02   50.05   <2e-16 ***
GDP         -6.487e-05  4.142e-06  -15.66   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.7666 on 167 degrees of freedom
  (47 observations deleted due to missingness)
Multiple R-squared: 0.5949, Adjusted R-squared: 0.5925
F-statistic: 245.3 on 1 and 167 DF,  p-value: < 2.2e-16

> plot(data$GDP,data$MORTALITY,xlab="GDP per capita",
+ ylab="Mortality rate (under 5) log scale",cex=.5,log="y")
> text(data$GDP,data$MORTALITY,data$Country.Name)
> x=seq(300,100000,length=101)
> y=exp(predict(regLB,newdata=data.frame(GDP=x)))*
+ exp(summary(regLB)$sigma^2/2)
> lines(x,y,col="red")
> y=qlnorm(.95, meanlog=predict(regLB,newdata=data.frame(GDP=x)),
+ sdlog=summary(regLB)$sigma^2)
> lines(x,y,col="red",lty=2)
> y=qlnorm(.05, meanlog=predict(regLB,newdata=data.frame(GDP=x)),
+ sdlog=summary(regLB)$sigma^2)
> lines(x,y,col="red",lty=2)
on the log scale or
> plot(data$GDP,data$MORTALITY,xlab="GDP per capita",
+ ylab="Mortality rate (under 5) log scale",cex=.5)
on the standard scale. Here we use quantiles of the log-normal distribution to derive confidence intervals.
But why shouldn't we also take the logarithm of the GDP? We can fit a model log(MORTALITY) = β0 + β1·log(GDP) + ε, or equivalently MORTALITY = exp(β0)·GDP^β1·exp(ε).
> regLL=lm(log(MORTALITY)~log(GDP),data=data)
> summary(regLL)

Call:
lm(formula = log(MORTALITY) ~ log(GDP), data = data)

Residuals:
     Min       1Q   Median       3Q      Max 
-1.13200 -0.38326 -0.07127  0.26610  3.02212 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 10.50192    0.31556   33.28   <2e-16 ***
log(GDP)    -0.83496    0.03548  -23.54   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.5797 on 167 degrees of freedom
  (47 observations deleted due to missingness)
Multiple R-squared: 0.7684, Adjusted R-squared: 0.767
F-statistic: 554 on 1 and 167 DF,  p-value: < 2.2e-16

> plot(data$GDP,data$MORTALITY,xlab="GDP per capita ",
+ ylab="Mortality rate (under 5)",cex=.5,log="xy")
> text(data$GDP,data$MORTALITY,data$Country.Name)
> x=exp(seq(1,12,by=.1))
> y=exp(predict(regLL,newdata=data.frame(GDP=x)))*
+ exp(summary(regLL)$sigma^2/2)
> lines(x,y,col="red")
> y=qlnorm(.95, meanlog=predict(regLL,newdata=data.frame(GDP=x)),
+ sdlog=summary(regLL)$sigma^2)
> lines(x,y,col="red",lty=2)
> y=qlnorm(.05, meanlog=predict(regLL,newdata=data.frame(GDP=x)),
+ sdlog=summary(regLL)$sigma^2)
> lines(x,y,col="red",lty=2)
on the log scales or
> plot(data$GDP,data$MORTALITY,xlab="GDP per capita ",
+ ylab="Mortality rate (under 5)",cex=.5)
on the standard scale. If we compare the last two predictions, we have
where the blue curve is the log model, and the red one is the log-log model (I did not include the first one, for obvious reasons).
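Since regLB and regLL share the same response variable, log(MORTALITY), they can also be compared directly with an information criterion (a quick sketch, not in the original post); the log-log model should come out ahead, consistently with its much higher R-squared,
> AIC(regLB,regLL)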
Fisher-Tippett theorem and limiting distribution for the maximum
Tomorrow, we will discuss the Fisher-Tippett theorem. The idea is that there are only three possible limiting distributions for normalized versions of the maxima of i.i.d. samples. For bounded distributions, consider e.g. the uniform distribution on the unit interval. Let b_n = 1 and a_n = 1/n. Then, for all x ≤ 0 and n large enough, P((max − b_n)/a_n ≤ x) = (1 + x/n)^n → exp(x) as n → ∞,
i.e. the limiting distribution of the maximum is Weibull’s.
set.seed(1)
s=1000000
n=100
M=matrix(runif(s),n,s/n)
V=apply(M,2,max)
bn=1
an=1/n
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(-7,1),main="",breaks=seq(-20,10,by=.25))
u=seq(-10,0,by=.1)
v=exp(u)
lines(u,v,lwd=3,col="red")
For heavy-tailed distributions, or Pareto-type tails, consider Pareto samples, with distribution function F(x) = 1 − x^(−α), x ≥ 1 (here α = 2, since the sample is obtained as U^(−1/2) with U uniform). Let b_n = 0 and a_n = n^(1/α); then
which means that the limiting distribution is Fréchet’s.
library(evd)   # for dfrechet()
set.seed(1)
s=1000000
n=100
M=matrix((runif(s))^(-1/2),n,s/n)
V=apply(M,2,max)
bn=0
an=n^(1/2)
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(0,7),main="",breaks=seq(0,max(U)+1,by=.25))
u=seq(0,10,by=.1)
v=dfrechet(u,shape=2)
lines(u,v,lwd=3,col="red")
For light-tailed distributions, or exponential tails, consider e.g. a sample of exponentially distributed variates, with common distribution function F(x) = 1 − exp(−x). Let b_n = log(n) (the 1 − 1/n quantile) and a_n = 1; then P(max − log(n) ≤ x) → exp(−exp(−x)),
i.e. the limiting distribution for the maximum is Gumbel’s distribution.
library(evd)
set.seed(1)
s=1000000
n=100
M=matrix(rexp(s,1),n,s/n)
V=apply(M,2,max)
(bn=qexp(1-1/n))
log(n)
an=1
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(-2,7),ylim=c(0,.39),main="",breaks=seq(-5,15,by=.25))
u=seq(-5,15,by=.1)
v=dgumbel(u)
lines(u,v,lwd=3,col="red")
Consider now a Gaussian sample. We can use the following approximation of the cumulative distribution function (based on l'Hopital's rule), 1 − Φ(x) ~ φ(x)/x as x → ∞. Let b_n = Φ⁻¹(1 − 1/n) and a_n = 1/b_n. Then we can get P((max − b_n)/a_n ≤ x) → exp(−exp(−x)) as n → ∞, i.e. the limiting distribution of the maximum of a Gaussian sample is Gumbel's. But what we do not see here is that, for a Gaussian sample, the convergence is extremely slow, i.e., with 100 observations, we are still far away from the Gumbel distribution,
and it is only slightly better with 1,000 observations,
set.seed(1)
s=10000000
n=1000
M=matrix(rnorm(s,0,1),n,s/n)
V=apply(M,2,max)
(bn=qnorm(1-1/n,0,1))
an=1/bn
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(-2,7),ylim=c(0,.39),main="",breaks=seq(-5,15,by=.25))
u=seq(-5,15,by=.1)
v=dgumbel(u)
lines(u,v,lwd=3,col="red")
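To quantify how slow that convergence is, here is a quick sketch of mine (not in the original post): compute the Kolmogorov-Smirnov distance between the normalized Gaussian maxima and the Gumbel distribution, for increasing sample sizes,
library(evd)
ksdist=function(n,ns=1000){
M=matrix(rnorm(n*ns),n,ns)   # ns samples of size n
V=apply(M,2,max)
bn=qnorm(1-1/n); an=1/bn     # same normalizing constants as above
ks.test((V-bn)/an,pgumbel)$statistic
}
N=c(100,1000,10000)
sapply(N,ksdist)             # should decrease with n, but only slowly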
Even worse, consider lognormal observations. In that case, recall that if we consider an (increasing) transformation of the variates, we remain in the same domain of attraction. Hence, since the lognormal observations are the exponentials of Gaussian ones, if P((max of the Gaussian sample − b_n)/a_n ≤ x) → exp(−exp(−x)), then the maximum of the lognormal sample is the exponential of the Gaussian maximum, and exp(b_n + a_n x) ≈ exp(b_n)·(1 + a_n x), i.e. using Taylor's approximation on the right-hand term, the normalizing constants for the lognormal sample should be exp(b_n) and a_n·exp(b_n). This gives us the normalizing coefficients we should use here.
set.seed(1)
s=10000000
n=1000
M=matrix(rlnorm(s,0,1),n,s/n)
V=apply(M,2,max)
bn=exp(qnorm(1-1/n,0,1))
an=exp(qnorm(1-1/n,0,1))/(qnorm(1-1/n,0,1))
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(-2,7),ylim=c(0,.39),main="",breaks=seq(-5,40,by=.25))
u=seq(-5,15,by=.1)
v=dgumbel(u)
lines(u,v,lwd=3,col="red")