For the seventh chapter of the non-life insurance actuarial course at ENSAE, we will discuss the modeling of individual claim costs, for both property damage and third-party liability. The slides are online (as usual, the downloadable pdf version is more complete than the one on slideshare).
Fisher Information Computation(s)
Last week, we did a few computations to obtain the Fisher information for some classical distributions. I just wanted to write up, properly, the computations for distributions with several parameters. For the Gamma distribution, with density

f(x;\alpha,\beta)=\frac{\beta^\alpha}{\Gamma(\alpha)}x^{\alpha-1}e^{-\beta x},\qquad x>0,

the log-likelihood of an i.i.d. sample \{x_1,\dots,x_n\} is

\log\mathcal{L}(\alpha,\beta)=n\alpha\log\beta-n\log\Gamma(\alpha)+(\alpha-1)\sum_{i=1}^n\log x_i-\beta\sum_{i=1}^n x_i

so that the Hessian is

\begin{pmatrix}-n\,\psi'(\alpha) & n/\beta\\ n/\beta & -n\alpha/\beta^2\end{pmatrix}

Here, we do not even need to take an expectation, since the Hessian is constant (it does not involve the observations), and the Fisher information is

I(\alpha,\beta)=n\begin{pmatrix}\psi'(\alpha) & -1/\beta\\ -1/\beta & \alpha/\beta^2\end{pmatrix}

where \psi denotes the digamma function.
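As a quick sanity check (not in the original post), one can compare this theoretical Fisher information with the inverse of the covariance matrix returned by fitdistr() from the MASS package, on a large simulated sample; the parameter values below are arbitrary choices of mine.

library(MASS)
set.seed(1)
n=1e5
a=2; b=3                                            # arbitrary shape and rate
x=rgamma(n,shape=a,rate=b)
I_theo=matrix(c(trigamma(a),-1/b,-1/b,a/b^2),2,2)   # information per observation
fit=fitdistr(x,"gamma")
solve(fit$vcov)/n                                   # should be close to I_theo
I_theo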
I Fought the (distribution) Law (and the Law did not win)
A few days ago, I was asked if we should spend a lot of time choosing the distribution we use, in GLMs, for (actuarial) ratemaking. On that topic, I usually claim that the family is not the most important choice in the regression model. Consider the following dataset
> db <- data.frame(x=c(1,2,3,4,5),y=c(1,2,4,2,6))
> plot(db,xlim=c(0,6),ylim=c(-1,8),pch=19)
To visualize a regression model, use the following code
> nd=data.frame(x=seq(0,6,by=.1))
> add_predict = function(reg){
+ prd1=predict(reg,newdata=nd,se.fit = TRUE,type="response")
+ y1=prd1$fit
+ y1_upp=prd1$fit+prd1$residual.scale*1.96*prd1$se.fit
+ y1_low=prd1$fit-prd1$residual.scale*1.96*prd1$se.fit
+ polygon(c(nd$x,rev(nd$x)),c(y1_upp,rev(y1_low)),col="light green",angle=90,density=40,border=NA)
+ lines(nd$x,y1,col="red",lwd=2)
+ }
For instance, with a Poisson regression (with a log link function) we get
> plot(db)
> reg1=glm(y~x,family=poisson(link="log"),
+ data=db)
> add_predict(reg1)
while, with a Gaussian regression (but still with a log link function), we get
> plot(db)
> reg2=glm(y~x,family=gaussian(link="log"),
+ data=db)
> add_predict(reg2)
If we just care about the expected value of our prediction, the output is more or less the same
> plot(db)
> lines(nd$x,predict(reg1,newdata=nd,
+ type="response"),col="red",lwd=1.5)
> lines(nd$x,predict(reg2,newdata=nd,
+ type="response"),col="blue",lwd=1.5)
So, indeed, forget about the (distribution) law when running a GLM. Not convinced? Consider – on the same dataset – a Poisson regression (with an identity link function this time)
> plot(db)
> reg1=glm(y~x,family=poisson(link="identity"),
+ data=db)
> add_predict(reg1)
while, with a Gaussian regression (but still with an identity link function), we get
> plot(db)
> reg2=glm(y~x,family=gaussian(link="identity"),
+ data=db)
> add_predict(reg2)
Again, if we just plot the expected value of our prediction, the output is more or less the same
> plot(db)
> lines(nd$x,predict(reg1,newdata=nd,
+ type="response"),col="red",lwd=1.5)
> lines(nd$x,predict(reg2,newdata=nd,
+ type="response"),col="blue",lwd=1.5)
So clearly, the simplistic message "you should not care too much about the (distribution) law" seems to be valid…
Modeling Earthquake Dynamics
In 2012, with Marilou Durand, a student at UQAM, we worked on the seismic gap hypothesis, see e.g. McCann et al. (1978) or Kagan & Jackson (1991), or, to be more specific, on the dynamics between earthquake magnitudes (or seismic moments) and inter-occurrence durations. Our paper should appear soon in the Journal of Seismology,
In this paper, we investigate questions arising in Parsons & Geist (2012). Pseudo-causal models connecting magnitudes and waiting times are considered, through generalized regression. We use conditional models (magnitude given previous waiting time, and conversely) as an extension of the joint distribution model described in Nikoloulopoulos & Karlis (2008). On the one hand, we fit a Pareto distribution for earthquake magnitudes, where the tail index is a function of the waiting time following the previous earthquake; on the other hand, waiting times are modeled using a Gamma or a Weibull distribution, where parameters are functions of the magnitude of the previous earthquake. We use those two models, alternatively, to generate the dynamics of earthquake occurrence, and to estimate the probability of occurrence of several earthquakes within a year, or a decade.
The paper is online on https://hal.archives-ouvertes.fr/.
Bias and MLE
Before leaving the office, this evening, JP decided to knock at my door to ask me a "quick and very basic question" (as he put it). This is JP's strategy, and he knows it works. His question was – more or less – what do we know about the bias of maximum likelihood estimation when we have a small sample from a Gamma distribution. He was surprised by some results he got. If I wanted to be naughty, too, I would say that he was surprised to see how long his student spent coding that in SAS. So he wanted to challenge me, and see how fast I could give him a valuable answer. Given the fact that I had to leave early because my elder son had a fencing competition, I tried to write a simple code to "visualize" the bias of the (first) parameter of a Gamma distribution, with MLE.
Before showing the graph, I wanted to add that I hate one thing about mathematical statistics courses: we learn nothing interesting there. I mean, we get to see nice mathematical concepts, but after this class, you can hardly say anything when you see your first dataset. Like with real data. For instance, this course usually emphasizes asymptotic results, using limit theorems. When you take this course, you learn a lot of things about maximum likelihood, for instance. You can compute the asymptotic variance and derive asymptotic confidence intervals. But are those results relevant when you have 50 observations? Is it possible, with 50 observations, to have a bias which has the same size as the parameter?
As usual, one possible answer is "if you don't have a lot of observations, be Bayesian!". Maybe. Someday. What I tried, here, is to run simulations to see how MLE estimators behave. Given an i.i.d. sample \{x_1,\dots,x_n\} from a \Gamma(\alpha,\beta) distribution, let \widehat{\alpha} and \widehat{\beta} denote the maximum likelihood estimators of the two parameters.
library(fitdistrplus)
maxl=function(x) fitdist(x,"gamma",method="mle")$estimate
VK=floor(exp(seq(log(5),log(200),length=25)))
V=NULL
for(k in 1:length(VK)){
n=VK[k]
N=5000
m=matrix(rgamma(n*N,1,2),n,N)
ss=apply(m,2,maxl)
V=rbind(V,ss)}
y=as.vector(V[seq(1,length(VK)*2,by=2),])
x=rep(c(VK),ncol(V))
boxplot(y~x,
xlab="Nb. observations (log scale)",ylim=c(0,6))
abline(h=1,lty=2,col="blue")
Here, in our simulations, the shape parameter was \alpha=1. On the graph, we have boxplots of \widehat{\alpha} obtained on several scenarios. We clearly see the positive bias of the MLE. And the bias reduces with n (as expected, since the MLE is asymptotically unbiased). We can also visualize the distribution of \widehat{\alpha} (instead of boxplots)
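The density plot below is not reproduced with code in the post; a possible sketch, reusing the objects V and VK computed in the previous chunk (the sample sizes selected are my own choice, not necessarily the ones used for the original figure), would be

alpha_hat=V[seq(1,length(VK)*2,by=2),]        # rows containing the estimated shape parameter
sel=c(1,13,25)                                # a small, a medium and a large sample size
plot(density(alpha_hat[sel[1],]),xlim=c(0,4),main="",xlab="estimated shape parameter")
lines(density(alpha_hat[sel[2],]),col="red")
lines(density(alpha_hat[sel[3],]),col="blue")
abline(v=1,lty=2)
legend("topright",legend=paste("n =",VK[sel]),col=c("black","red","blue"),lty=1)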
It is also possible to derive analytical results. David Cox and Joyce Snell did the maths in 1968, and actually obtained analytical expressions for the biases. More recently, David Giles (a.k.a. @deagiles on Twitter) and Hui Feng did look at the behavior of bias-adjusted estimators, a few years ago. For instance, one can get that

\text{bias}(\widehat{\alpha})\approx\frac{\alpha\,\psi'(\alpha)-\alpha^2\,\psi''(\alpha)-2}{2n\,[\alpha\,\psi'(\alpha)-1]^2}

where \psi(\cdot) is the so-called digamma function,

\psi(x)=\frac{d}{dx}\log\Gamma(x)

and where \psi'(\cdot) and \psi''(\cdot) are its first and second order derivatives, see e.g. Bowman and Shenton (1982) – yes, there is a book on the topic of estimating the parameters of the Gamma distribution… Observe that the bias of \widehat{\alpha} does not depend on \beta, while the bias of \widehat{\beta} will depend on both parameters.
d1digamma=function(x,h=1e-7) return((digamma(x+h)-digamma(x-h))/(2*h))
d2digamma=function(x,h=1e-7) return((d1digamma(x+h)-d1digamma(x-h))/(2*h))
biasalpha=function(a,n){
return((a*d1digamma(a)-a^2*d2digamma(a)
-2)/(2*n*(a*d1digamma(a)-1)^2))
}
The way I compute it is probably not optimal, so if you want to improve it, please, go ahead! If we compare the average bias obtained in our simulations with the one obtained from this first-order approximation, we get
m=apply(V,1,mean)
plot(VK,m[seq(1,length(VK)*2,by=2)],type="b",col="red",
xlab="Nb. observations (log scale)",log="x")
abline(h=1,lty=2,col="blue")
B=Vectorize(function(n) biasalpha(a=1,n))(1:200)
lines(1:200,B+1,col="orange")
Observe here that neglecting the higher-order terms yields an underestimation of the real bias… Fun, isn't it?
Maximum Likelihood versus Goodness of Fit
Thursday, I got an interesting question from a colleague of mine (JP). I mean, the way I understood the question turned out to be a nice puzzle (but I have to confess I might have misunderstood). The question is the following: consider an i.i.d. sample \{x_1,\dots,x_n\} of continuous variables. We would like to choose between two (parametric) families for the distribution, \mathcal{F}=\{F_{\boldsymbol{\alpha}}\} and \mathcal{G}=\{G_{\boldsymbol{\beta}}\}. If we use maximum likelihood techniques, we get two estimators, one for each family, \widehat{\boldsymbol{\alpha}} and \widehat{\boldsymbol{\beta}}. Clearly, F_{\widehat{\boldsymbol{\alpha}}} is much better than G_{\widehat{\boldsymbol{\beta}}}, in the sense of a standard goodness of fit test (e.g. Kolmogorov-Smirnov, since the sample is assumed to be obtained from a continuous variable). Does that mean that family \mathcal{F} is (somehow) better than family \mathcal{G}?
This is my interpretation of the question, and I found it amusing. So I will try to show (using simulated samples) that some odd situations can easily be obtained.
Consider a sample from a mixture of log-normal distributions,
> set.seed(228)
> X=exp(c(rnorm(50,1,1),rnorm(50,2,1.2)))
Consider two standard families for positive random variables: a Gamma distribution and a lognormal distribution.
> library(MASS)
> ab=fitdistr(X,"gamma")
> ms=fitdistr(X,"lognormal")
If we want to visualize those two distributions, let us use
> u=seq(0,max(X),by=.1)   # grid of values for the plot (this line was missing from the transcript)
> vab=pgamma(u,ab$estimate[1],ab$estimate[2])
> vms=plnorm(u,ms$estimate[1],ms$estimate[2])
> plot(ecdf(X))
> lines(u,vab,col="red")
> lines(u,vms,col="blue")
Here, we get
What else can we say? Actually, we can also compute the Kolmogorov-Smirnov statistic,

D_n=\sup_{x}\left\vert \widehat{F}_n(x)-F_{\widehat{\boldsymbol{\theta}}}(x)\right\vert

where \widehat{F}_n is the empirical cumulative distribution function,

\widehat{F}_n(x)=\frac{1}{n}\sum_{i=1}^n \mathbf{1}(x_i\leq x)
This can be done using
> ks.test(X,"plnorm",ms$estimate[1],ms$estimate[2])

	One-sample Kolmogorov-Smirnov test

data:  X
D = 0.0693, p-value = 0.7231
alternative hypothesis: two-sided

> ks.test(X,"pgamma",ab$estimate[1],ab$estimate[2])

	One-sample Kolmogorov-Smirnov test

data:  X
D = 0.148, p-value = 0.02507
alternative hypothesis: two-sided
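As a sanity check, the statistic returned by ks.test can be recomputed by hand, directly from the definition above (a sketch, here for the lognormal fit),

n=length(X)
xs=sort(X)
Fln=plnorm(xs,ms$estimate[1],ms$estimate[2])
max(pmax(abs((1:n)/n-Fln),abs((0:(n-1))/n-Fln)))   # should match the D value above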
From a theoretical point of view, we should not look at the p-values, since the null distribution is based on a fixed distribution, not a fitted one (see the Lilliefors test for normal samples). But still. The Gamma distribution seems to be very far away from the true distribution. The statistic is twice the one we have with our lognormal distribution. And one p-value is 72%, while the other one is 2.5%. Here, we should prefer the lognormal distribution to the Gamma one. But here, we did consider only one distribution in each family. Does that mean that we cannot find one Gamma distribution that will be better than all possible lognormal distributions? Better, for instance, according to the Kolmogorov-Smirnov statistic…
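If one really wanted a p-value that accounts for the fact that the parameters were estimated, a parametric bootstrap (in the spirit of the Lilliefors correction) could be used; here is a sketch for the lognormal fit, with 1,000 simulated samples (an arbitrary number),

library(MASS)
D_obs=ks.test(X,"plnorm",ms$estimate[1],ms$estimate[2])$statistic
D_sim=replicate(1000,{
Xs=rlnorm(length(X),ms$estimate[1],ms$estimate[2])
fs=fitdistr(Xs,"lognormal")$estimate
ks.test(Xs,"plnorm",fs[1],fs[2])$statistic})
mean(D_sim>=D_obs)   # bootstrap p-value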
Well, it is possible to use another strategy to find appropriate parameters. We can minimize this statistic actually. Define
> ks1=function(ms) {m=ms[1];s=ms[2];ks.test(X,"plnorm",m,s)$statistic}
> ks2=function(ab) {a=ab[1];b=ab[2];ks.test(X,"pgamma",a,b)$statistic}
and compute
> n1=nlm(ks1,c(ms$estimate[1],ms$estimate[2]))
> n1
$minimum
[1] 0.05252692
$estimate
[1] 1.547437 1.121864
> n2=nlm(ks2,c(ab$estimate[1],ab$estimate[2]))
> n2
$minimum
[1] 0.04737725
$estimate
[1] 1.1449041 0.167041
So here, it is possible to find a distribution much closer to the empirical sample, within the Gamma family actually.
> vab=pgamma(u,n2$estimate[1],n2$estimate[2])
> vms=plnorm(u,n1$estimate[1],n1$estimate[2])
> lines(u,vab,col="red",lwd=2)
> lines(u,vms,col="blue",lwd=2)
What would be the point here? Maybe just the idea that the maximum likelihood estimator is only one estimator among many. And even if it has interesting asymptotic properties, on small samples it might not be the best estimator to consider…
And to be completely honest, I've been cheating here… I mean, not really cheating (not more than any researcher using a statistical test to validate the findings). But here, I did fix the seed of the random number generator. Actually, such an example does not occur that frequently. Here, out of 1,000 samples, I got this odd conclusion almost 15 times. And the smaller the sample, the more likely we are to observe that story, where the maximum likelihood estimator can be far away from the best fit. Here is the proportion of opposite conclusions, as a function of the sample size,
> SIM=function(ns=1000,n=100){
+ t=0
+ for(s in 1:ns){
+ set.seed(s)
+ X=exp(c(rnorm(n/2,1,1),rnorm(n/2,2,1.2)))
+ ks1=function(ms) {m=ms[1];s=ms[2];ks.test(X,"plnorm",m,s)$statistic}
+ ks2=function(ab) {a=ab[1];b=ab[2];ks.test(X,"pgamma",a,b)$statistic}
+ library(MASS)
+ ab=fitdistr(X,"gamma")
+ ms=fitdistr(X,"lognormal")
+ n1=nlm(ks1,c(ms$estimate[1],ms$estimate[2]))
+ n2=nlm(ks2,c(ab$estimate[1],ab$estimate[2]))
+ if( (ks.test(X,"plnorm",ms$estimate[1],ms$estimate[2])$statistic-
+ ks.test(X,"pgamma",ab$estimate[1],ab$estimate[2])$statistic)
+ *(n1$minimum-n2$minimum)<=0 ) t=t+1
+ }
+ return(t/ns)}
> VM=rep(NA,20)
> VS=seq(10,200,by=10)
> for(i in 1:20){VM[i]=SIM(n=VS[i],ns=1000)}
> plot(VS,VM,type="p")
So, to provide a more complete answer to JP's question: with a very large sample, I guess your intuition should be valid… but clearly not on a small sample.
More significant? so what…
Following my non-life insurance class, this morning, I had an interesting question from a student, that I will try to illustrate, and reformulate as accurately as possible. Consider a simple regression model, with one variable of interest, and one possible explanatory variable. Assume that we have two possible models, with the following output (yes, I do hide interesting parts here, but it is to get quickly to my student’s point)
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.92883    0.06391  14.534   <2e-16 ***
X           -0.12499    0.06108  -2.046   0.0421 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
for the first model – a GLM with some distribution, and some link function – and
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.92901    0.06270  14.817   <2e-16 ***
X           -0.09883    0.05816  -1.699   0.0909 .
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
for the second one – with another GLM, with another distribution, but the same link function (I guess I could have changed it, but it does not really matter here). Then, I got the following statement “I would like to choose the first model because the explanatory variable is more significant, and therefore, this model should have a stronger predictive power“.
That's a nice idea, isn't it? Actually, I guess this is why I love teaching, because I would never have thought of such an idea by myself. Because, when you look at that statement, somehow it could make sense. Except that, from my point of view, it is not valid at all. My first thought was to recall a standard example in statistical inference: you cannot claim that a distribution is better than another one just by looking at the parameter estimates.
> fitdistr(Y,"normal")
      mean          sd    
  0.93685011   0.90700830 
 (0.06413517) (0.04535042)
> fitdistr(Y,"exponential")
      rate   
  1.06740661 
 (0.07547704)
Can I claim that the Gaussian distribution is better than the exponential one because the parameter estimates have smaller standard errors? Because, somehow, this is what we did when we claimed previously that the first model was better than the second one.
Let me get back on the outputs of the two regressions, and let me explain what I did. Actually, I wanted to have a story close to the one on the Gaussian versus exponential fit. So I did generate some exponential random variable,
> set.seed(5)
> n=200
> U=runif(n);
> Y=-log(U)
Here, we can visualize the histogram of this sample, as well as the estimated exponential distribution
> hist(Y,proba=TRUE,col="light green",border="white",lwd=2,
+ breaks=seq(0,5.3333333333333,by=.333333333))
> x=seq(0,6,by=.02)
> lines(x,dexp(x,1/mean(Y)),col="red",lty=2)
On top of that, let us fit a gamma distribution, using a GLM (where the regression is on a constant only) – just to practice, because later on we will use a gamma regression on that variable
> reg0=glm(Y~1,family=Gamma(link="identity"))
> a=reg0$coefficient
> b=summary(reg0)$dispersion
> lines(x,dgamma(x,shape=1/b,scale=a*b),col="blue")
Now, we need a covariate, to run some regressions. What I wanted is some variable slightly correlated with our previous variable. Slightly, just to make sure that our p-value in the regression will be close to 5% or 10%. So here, I did generate a variable so that the pair has a Clayton copula, with coefficient 0.1 (which is small, extremely small)
> a=.1
> set.seed(5)
> n=200
> U=runif(n);
> V=(U^(-a)*(runif(n)^(-a/(1+a))-1)+1)^(-1/a)
> Y=-log(U)
> X=qnorm(V)
To visualize the copula of the variables, we can use
> cop=function(u,v){
+ (a+1)*(u*v)^(-(a+1))*
+ (u^(-a)+v^(-a)-1)^(-(2*a+1)/a) }
> x=y=seq(.05,.95,by=.05)
> z=outer(x,y,cop)
> mat=persp(x,y,z,col="green",shade=TRUE,xlim=c(0,1),ylim=c(0,1),zlim=c(0,2),theta=-30,
+ ticktype ="detailed",zlab="")
We should not be far away from independence (actually, there is a negative – and significant – Pearson correlation). Now, consider two models,
- a Gaussian model (here a standard linear model)
- a gamma model, with an identity link function
The outputs are the following (you will recognize the outputs given previously)
> reg1=lm(Y~X)
> reg2=glm(Y~X,family=Gamma(link="identity"))
> summary(reg1)

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.92883    0.06391  14.534   <2e-16 ***
X           -0.12499    0.06108  -2.046   0.0421 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.9021 on 198 degrees of freedom
Multiple R-squared: 0.02071, Adjusted R-squared: 0.01576
F-statistic: 4.187 on 1 and 198 DF, p-value: 0.04206

> summary(reg2)

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.92901    0.06270  14.817   <2e-16 ***
X           -0.09883    0.05816  -1.699   0.0909 .
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for Gamma family taken to be 0.9086447)

    Null deviance: 229.72 on 199 degrees of freedom
Residual deviance: 226.58 on 198 degrees of freedom
AIC: 379.22

Number of Fisher Scoring iterations: 10
And here are the two predictions,
So, which model should we use? As usual, my answer will be “let’s have a look at the data” instead of looking only at tables of figures. Using some code posted a few days ago, let us visualize the two regressions. The Gaussian model is here
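Since that code is not reproduced here, a minimal sketch of such a visualization (draw_band is a hypothetical helper, in the spirit of the add_predict function defined at the top of this page, drawing a band of plus or minus 1.96 standard errors around the predicted mean) could be

nd=data.frame(X=seq(-3,3,by=.1))
draw_band=function(reg){
prd=predict(reg,newdata=nd,se.fit=TRUE,type="response")
polygon(c(nd$X,rev(nd$X)),c(prd$fit+1.96*prd$se.fit,rev(prd$fit-1.96*prd$se.fit)),
col="light green",density=40,border=NA)
lines(nd$X,prd$fit,col="red",lwd=2)
}
plot(X,Y)
draw_band(reg1)   # Gaussian model; use reg2 for the gamma one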
(for the lower part, I do not go below 0, since we do have, here, a positive variable that we would like to model) while the gamma one is here
And if we believe that the explanatory variable has no predictive power (since we can claim that the parameter is not significant in the regression), and we remove it from the regression, we get
Here, I do believe that the gamma (not to say the exponential) model is better, because it is clearly more consistent with the properties of the variable of interest. I trust the confidence interval obtained above with the gamma model more than the one obtained with the Gaussian distribution, even if the parameter in the regression is "more significant".
Modeling individual claim costs
This week, even though the UQAM network is down, we will continue the course, and finish the section on modeling overdispersion for claim frequency. We should then start the modeling of individual claim costs. In particular, we will spend some time on two points,
- the distinction between the lognormal and the gamma distributions
- the capping of large claims
The slides are online. And the claim-cost dataset is the one mentioned in the second lecture.
Earthquake dynamics
I just uploaded on http://hal.archives-ouvertes.fr/hal-00871883 a joint paper entitled Modeling earthquake dynamics.
In this paper, we investigate questions arising in Parsons & Geist (2012). Pseudo-causal models connecting magnitudes and waiting times are considered, through generalized regression. We use conditional models (magnitude given previous waiting time, and conversely) as an extension of the joint distribution model described in Nikoloulopoulos & Karlis (2008). On the one hand, we fit a Pareto distribution for earthquake magnitudes, where the tail index is a function of the waiting time following the previous earthquake; on the other hand, waiting times are modeled using a Gamma or a Weibull distribution, where parameters are functions of the magnitude of the previous earthquake. We use those two models, alternatively, to generate the dynamics of earthquake occurrence, and to estimate the probability of occurrence of several earthquakes within a year, or a decade.
Modeling individual claim costs in ratemaking
Before finishing the course on ratemaking, we will talk about the modeling of individual claim costs. We will talk about Gamma and lognormal distributions (on the latter, I suggest re-reading what was said in the regression models course about log-linear models, recalled in a short post published in the fall). We will also talk about mixtures of distributions, and about multinomial distributions. The slides are online here,
To go further, there is the paper by Fu & Moncher (2004) on the Gamma versus lognormal comparison, http://casact.org/… or Holler, Sommer & Trahair (1999) http://casact.org/… which offered a state of the art some fifteen years ago. Otherwise, I recommend reading the Practitioner's Guide to Generalized Linear Models, online at http://casact.org/….
Large claims, and ratemaking
During the course, we have seen that it is natural to assume that not only the individual claims frequency can be explained by some covariates, but individual costs too. Of course, appropriate families should be considered to model the distribution of the cost Y, given some covariates \boldsymbol{X}. Here is the dataset we'll use,
> sinistre=read.table("http://freakonometrics.free.fr/sinistreACT2040.txt",
+ header=TRUE,sep=";")
> sinistres=sinistre[sinistre$garantie=="1RC",]
> sinistres=sinistres[sinistres$cout>0,]
> contrat=read.table("http://freakonometrics.free.fr/contractACT2040.txt",
+ header=TRUE,sep=";")
> couts=merge(sinistres,contrat)
> tail(couts)
     nocontrat    no garantie    cout exposition zone puissance agevehicule
1919   6104006 11933      1RC 5376.04       0.37    E         6           1
1920   6107355 12349      1RC   51.63       0.74    E         4           1
1921   6108364 13229      1RC 1320.00       0.74    B         9           1
1922   6109171 11567      1RC 1320.00       0.74    B        13           1
1923   6111208 14161      1RC  970.20       0.49    E        10           5
1924   6111650 14476      1RC 1940.40       0.48    E         4           0
     ageconducteur bonus marque carburant densite region
1919            32    57     12         E      93     10
1920            45    57     12         E      72     10
1921            32   100     12         E      83      0
1922            56    50     12         E      93     13
1923            30    90     12         E      53      2
1924            69    50     12         E      93     13
Here, each line is a claim. Usual families to model the cost are the Gamma distribution, or the inverse Gaussian. Or the lognormal distribution (which is not in the exponential family, but one can assume that the logarithm of the cost can be modeled with a Gaussian distribution). Consider here only one covariate, e.g. the age of the car, and two different models: a Gamma one, and a lognormal one.
> age=0:20
> reggamma.sp <- glm(cout~agevehicule,family=Gamma(link="log"),
+ data=couts)
> Pgamma <- predict(reggamma.sp,newdata=data.frame(agevehicule=age),type="response")
For the Gamma regression, it is a simple GLM, so it is not difficult. For a lognormal distribution, one should remember that the expected value of a lognormal distribution is not the exponential of the mean of the underlying Gaussian distribution: if \log Y\sim\mathcal{N}(\mu,\sigma^2), then E[Y]=\exp(\mu+\sigma^2/2). A correction should be made here, to get an unbiased estimator for the average cost,
> reglm.sp <- lm(log(cout)~agevehicule,data=couts)
> sigma <- summary(reglm.sp)$sigma
> mu <- predict(reglm.sp,newdata=data.frame(agevehicule=age))
> Pln <- exp(mu+sigma^2/2)
We can plot those two predictions on a single graph,
> plot(age,Pgamma,xlab="",ylab="",col="red",type="b",pch=4)
> lines(age,Pln,col="blue",type="b")
Here it is,
Observe that it is also possible to use splines, since there might be no reason for the age to appear here in a multiplicative way,
Here, the two models are rather close. Nevertheless, one should remember that the Gamma model can be extremely sensitive to large claims (I mean, here, really large claims). On the other hand, with the log-transformation in the lognormal model, it seems that this model is less sensitive to large events. Actually, if I use the complete dataset, the regressions are the following,
i.e. with a lognormal distribution, the average cost is decreasing with the age of the car, while it is increasing with a Gamma model. The main reason here is that there is one large (not to say huge) claim in the dataset,
> couts[which.max(couts$cout),]
        cout exposition zone puissance agevehicule ageconducteur
7842 4024601       0.22    B         9          13            19
     marque carburant densite region
7842      2         E      93     24
One young driver got a $4 million claim, with a 13 year old car. This is an outlier for the Gamma regression, that clearly influences the estimation (the second largest claim is only one third of this one). Since there is a clear influence of large claims on the estimation of the average cost, a natural idea might be to remove those large claims. Or perhaps to see them as different from normal claims: normal claims can be explained by some covariates, but perhaps those large claims should be shared not only within their own class, but among all the insured in the portfolio. To formalize this idea, observe that, for some threshold s, we can write

E[Y\mid\boldsymbol{X}]=E[Y\mid\boldsymbol{X},Y\leq s]\cdot\mathbb{P}[Y\leq s\mid\boldsymbol{X}]+E[Y\mid\boldsymbol{X},Y> s]\cdot\mathbb{P}[Y> s\mid\boldsymbol{X}]

where the first part is associated with normal-sized claims, while large ones correspond to the second part. It is then possible to run three regressions: one on normal-sized claims, one on large claims, and one on the indicator of having a large claim, given that a claim occurred. The code here is something like that: a large claim – here – is above $10,000 (one has to fix it)
> s= 10000
> couts$normal=(couts$cout<=s)
> mean(couts$normal)
[1] 0.9818087
so large claims represent (slightly less than) 2% of the claims in our dataset. We can run 3 sets of regressions, with smoothed regressions on the age of the car. The first one to model the individual costs of large claims,
> indice = which(couts$cout>s)
> mean(couts$cout[indice])
[1] 34471.59
> library(splines)
> regB=glm(cout~bs(agevehicule),data=couts,
+ subset=indice,family=Gamma(link="log"))
> ypB=predict(regB,newdata=data.frame(agevehicule=age),type="response")
> ypB2=mean(couts$cout[indice])
the second one to model normal claims individual costs,
> indice = which(couts$cout<=s)
> mean(couts$cout[indice])
[1] 1335.878
> regA=glm(cout~bs(agevehicule),data=couts,
+ subset=indice,family=Gamma(link="log"))
> ypA=predict(regA,newdata=data.frame(agevehicule=age),type="response")
> ypA2=mean(couts$cout[indice])
And finally, a third one, on the probability of having a normal sized claim, given that a claim occurred
> regC=glm(normal~bs(agevehicule),data=couts,family=binomial)
> ypC=predict(regC,newdata=data.frame(agevehicule=age),type="response")
> regC2=glm(normal~1,data=couts,family=binomial)
> ypC2=predict(regC2,newdata=data.frame(agevehicule=age),type="response")
Note that, each time, we have something that can be interpreted either as E[\,\cdot\mid\boldsymbol{X}\,], or as E[\,\cdot\,] – i.e. no covariate is considered in the latter. On the graph below, we did plot the resulting estimates of

E[Y\mid\boldsymbol{X}]=E[Y\mid\boldsymbol{X},Y\leq s]\cdot\mathbb{P}[Y\leq s\mid\boldsymbol{X}]+E[Y\mid\boldsymbol{X},Y> s]\cdot\mathbb{P}[Y> s\mid\boldsymbol{X}]

where Gamma regressions – with splines – are considered for the average costs, while logistic regressions – again with splines – are considered to model probabilities.
(but careful with splines: on the borders, since we do not have a lot of observations, the behavior can be… odd. And adjustments should be made to obtain an adequate level of premium). If it is legitimate to assume that normal-sized claims can be explained by some covariates, perhaps large claims (or extremely large ones) are just purely random, i.e. not a function of any covariate at all, i.e.

E[Y\mid\boldsymbol{X},Y> s]=E[Y\mid Y> s]
To go one step further, it might also be possible to assume that not only the size of the claim (given that it is a large one) is not a function of any covariate, but perhaps the probability of having an extremely large claim is not either,

\mathbb{P}[Y> s\mid\boldsymbol{X}]=\mathbb{P}[Y> s]
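A minimal sketch of how the three fits above could be combined into an expected claim cost, following the decomposition written earlier (the objects ypA, ypB, ypB2, ypC, ypC2 and age come from the previous chunks; the plotting choices are mine),

ycov  = ypC*ypA+(1-ypC)*ypB     # everything is a function of the age of the car
yshare= ypC*ypA+(1-ypC)*ypB2    # large claim costs shared within the portfolio
yflat = ypC2*ypA+(1-ypC2)*ypB2  # large-claim probability and cost both constant
plot(age,ycov,type="b",col="red",xlab="age of the car",ylab="expected claim cost")
lines(age,yshare,type="b",col="blue")
lines(age,yflat,type="b",col="purple")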
From the first part, we’ve seen that the distribution considered had an impact on the prediction, and in the second, we’ve seen that the definition of large claims (and how to deal with them) also has an impact. So clearly, actuaries have some leverage when working on ratemaking…
Introduction to generalized linear models
I am a bit ahead of schedule in the course. I will (normally) put online the slides for next week, where we will discuss the class of generalized linear models. The slides are online here.
I did not include a section on Generalized Additive Models; we will stick to the section on smoothing mentioned at the end of the slides on frequency modeling. To legitimize smoothing methods (on the age of the insured, in particular), I refer to a graph produced several years ago by a consulting firm, which noted that the shape of the smoothing function linking age to frequency is the same in all countries,
But I think I will write a post dedicated to smoothing, in the context of ratemaking for property and casualty insurance.
ACT2121, elements of a solution
Since most of the computations can be done without (too) complex calculations on a calculator, I will come back to one exercise (exercise 11 of the second set) and propose a solution.
"Little Nestor collects baseball player cards found in packs of chewing gum. There are 20 different cards in total (randomly distributed, one per pack). How many packs of gum should Nestor expect to buy to obtain the complete collection?"
The right strategy was to order the cards by their order of first appearance, and to model the number of packs between two first appearances, as shown in the drawing below

If N_i denotes the number of packs bought between the first appearance of the (i-1)-th new card and the first appearance of the i-th one, then N_i has a geometric distribution, with

E[N_i]=\frac{20}{20-i+1}

hence (by linearity of the expectation)

E[N_1+\cdots+N_{20}]=E[N_1]+\cdots+E[N_{20}]

i.e.

E[N_1+\cdots+N_{20}]=\frac{20}{20}+\frac{20}{19}+\cdots+\frac{20}{1}

or, equivalently,

E[N_1+\cdots+N_{20}]=20\sum_{k=1}^{20}\frac{1}{k}

The correct answer was then (one has to sum the twenty terms)
> sum(20/(1:20))
[1] 71.95479
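This expected value can also be double-checked with a quick simulation (a sketch, not part of the original correction; the function name n_packs is mine),

set.seed(1)
n_packs=function(){
cards=c(); k=0
while(length(unique(cards))<20){
cards=c(cards,sample(1:20,1))
k=k+1}
return(k)}
mean(replicate(1e4,n_packs()))   # should be close to 71.95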
But summing these twenty terms is not trivial without a computer, so, in class, I had suggested the approximation
> log(20)*20
[1] 59.91465
which differs from the numerical result because the Euler-Mascheroni constant was missing,

\gamma=\lim_{n\to\infty}\left(\sum_{k=1}^n\frac{1}{k}-\log n\right)

i.e.

\sum_{k=1}^{20}\frac{1}{k}\approx\log 20+\gamma

Numerically, taking 0.57721 for \gamma (cf. Google), we get
> (log(20)+ 0.57721)*20
[1] 71.45885
the Dirichlet distribution
In the course, since we are still introducing some concepts of dependent distributions, we will talk about the Dirichlet distribution, which is a distribution over the unit simplex of \mathbb{R}^d. Let \mathcal{G}(a,\lambda) denote the Gamma distribution with density (on \mathbb{R}_+)

f(x)=\frac{\lambda^a x^{a-1}e^{-\lambda x}}{\Gamma(a)}

Let X_1,\dots,X_d denote independent \mathcal{G}(\alpha_i,\lambda) random variables, with S=X_1+\cdots+X_d. Then

\boldsymbol{V}=(V_1,\dots,V_d),\qquad\text{where } V_i=\frac{X_i}{S},

has a Dirichlet distribution with parameter \boldsymbol{\alpha}=(\alpha_1,\dots,\alpha_d). Note that \boldsymbol{V} has a distribution in the simplex of \mathbb{R}^d,

\mathcal{S}_d=\left\{(v_1,\dots,v_d)\in[0,1]^d : v_1+\cdots+v_d=1\right\}

and has density

f(v_1,\dots,v_d)=\frac{\Gamma(\alpha_1+\cdots+\alpha_d)}{\Gamma(\alpha_1)\cdots\Gamma(\alpha_d)}\,v_1^{\alpha_1-1}\cdots v_d^{\alpha_d-1}

We will write \boldsymbol{V}\sim\text{Dir}(\boldsymbol{\alpha}).
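This Gamma-normalization construction can be checked quickly in R (a sketch; the parameter values and the sample size are arbitrary choices of mine),

a=c(2,2,5)
n=1e4
X=matrix(rgamma(3*n,shape=rep(a,each=n),rate=1),n,3)   # three independent Gamma columns
V=X/rowSums(X)                                         # normalize to get Dirichlet draws
colMeans(V)                                            # should be close to a/sum(a)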
The density, for different values of the parameter \boldsymbol{\alpha}, can be visualized below: first a case with some kind of symmetry, then some asymmetric cases.
Note that marginal distributions are also Dirichlet, in the sense that if \boldsymbol{V}\sim\text{Dir}(\alpha_1,\dots,\alpha_d), then

(V_1,\dots,V_k,V_{k+1}+\cdots+V_d)\sim\text{Dir}(\alpha_1,\dots,\alpha_k,\alpha_{k+1}+\cdots+\alpha_d)

if k<d, and, in particular, the univariate margins V_i have Beta distributions,

V_i\sim\mathcal{B}(\alpha_i,\alpha_1+\cdots+\alpha_d-\alpha_i)
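This marginal property can also be checked by simulation (a sketch; rdirichlet comes from the MCMCpack package used below, and the parameter values are mine),

library(MCMCpack)
set.seed(1)
V=rdirichlet(1e4,c(2,2,5))
ks.test(V[,1],"pbeta",2,7)   # first margin should be Beta(2, 2+5)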
See Devroye (1986), section XI.4, or Frigyik, Kapila & Gupta (2010). This distribution might also be called the multivariate Beta distribution. In R, it can be used as follows
> library(MCMCpack)
> alpha=c(2,2,5)
> x=seq(0,1,by=.05)
> vx=rep(x,length(x))
> vy=rep(x,each=length(x))
> vz=1-vx-vy
> V=cbind(vx,vy,vz)
> D=ddirichlet(V, alpha)
> persp(x,x,matrix(D,length(x),length(x)))
(to plot the density, as in the figures above). Note that we will come back to that distribution later on, with so-called Liouville copulas (see also Gupta & Richards (1986)).
Hotline, parametric models, and unbearable anxiety
After the summer's sex-related post (or almost, here), I will write a quick anxiety-inducing post (not the "insecurity" kind of anxiety found in newspapers, but worse). In any case, it makes me anxious, and there is no reason I should be the only one feeling anxious when calling a hotline…
To illustrate this post, I allowed myself a small experiment: I called a (toll-free1) hotline 7 times in a row, and I noted the time before an operator picked up2. I obtained the following durations
In short, from these data, we can fit a distribution to model the waiting time before an operator picks up,
> T
[1]  1.3500000 10.1166667  2.3500000  8.9666667  0.1833333  0.5666667  1.2166667
> mean(T)
[1] 3.535714
> var(T)
[1] 17.40319
> library(MASS)
> (Exp <- fitdistr(T,"exponential"))
     rate   
  0.2828283 
 (0.1068990)
> (Gam <- fitdistr(T,"gamma"))
     shape       rate   
  0.7928017   0.2242283 
 (0.3653143) (0.1404783)
where I considered two classical distributions for duration models: the exponential distribution and the gamma distribution. The two models are rather comparable if we look at the cumulative distribution functions.
> u=seq(0,12,.1)
> plot(ecdf(T))
> lines(u,pexp(u,rate=Exp$estimate),col="blue",lwd=3)
> lines(u,pgamma(u,shape=Gam$estimate[1],rate=Gam$estimate[2]),col="red",lwd=3)
However, these models give quite different results if we look at the residual life expectancies, i.e. the mean remaining lifetimes.
> F=function(x){1-pgamma(x,shape=Gam$estimate[1],rate=Gam$estimate[2])}
> e=function(x){
+ integrate(F,x,Inf)$value/F(x)
+ }
> v=rep(NA,length(u))
> for(i in 1:length(u)){
+ v[i]=e(u[i])
+ }
with the exponential distribution in blue, and the gamma distribution in red. The exponential distribution is well known for its lack of memory: no matter how long we have already waited, the expected remaining waiting time stays constant: here, even if I have already waited 3 minutes, or 8 minutes, my expected remaining wait is constant, namely 3 minutes and a half. This is the constant mean remaining lifetime property. On the other hand, for a gamma distribution whose shape parameter is smaller than 1, we have a so-called increasing mean remaining lifetime (IMRL) distribution (whose theory can be found here or there, for instance in chapter 2 of Maxim Finkelstein's book, online there).
In fact, if we look at how the residual life expectancies evolve, we have the exponential distribution in blue, which is constant, and the gamma distribution, with a monotone function: if the shape parameter is smaller than one (in red), we have an IMRL distribution, whereas if it exceeds 1 (in green), we have a DMRL distribution (decreasing mean remaining lifetime).
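A small sketch to draw that comparison for the two fitted models, reusing the objects u, v and Exp computed above (the axis labels are mine),

plot(u,v,type="l",col="red",lwd=3,
xlab="time already waited (minutes)",ylab="expected remaining waiting time")
abline(h=1/Exp$estimate,col="blue",lwd=3)   # exponential model: constant mean remaining lifetime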
So, every time I call a hotline and have been waiting for more than 3 minutes, I always wonder: what is the right strategy? For my part, most of the time, I suffer from IMRL anxiety; in other words, the longer I have waited, the longer I can still expect to wait! And I hang up rather quickly…
The moral: in parametric statistics, the choice of the model is never neutral, so it is better to know the properties of various distributions before committing to a parametric model… And I hope, dear reader, that I will no longer be the only one wondering whether I have reached an IMRL hotline, the next time I try to find out the price of a bank transfer…
1 for the young budding researchers who want to advance science: the experiment only works with toll-free hotlines…
2 at the beginning, I would ask some random question, about the billing of a banking service or something, but I must admit that, towards the end, I was hanging up without saying anything… yes, science allows this kind of whimsy…