Tag Archives: Gamma

Tweedie regression, or Poisson-Gamma regressions?

Yesterday, I was chatting with a young and enthusiastic actuary, who asked a nice (and classical) question: is it equivalent, or not, to use a Tweedie regression, or two regressions (one Poisson, one Gamma)? For plain distributions, the two are equivalent; but as soon as we have heterogeneity and explanatory variables, I really think that using all the information, and running two regressions, is much more interesting.

Homogeneous case

In the homogeneous case, without any explanatory variable, the Tweedie distribution and the compound Poisson-Gamma distribution are equivalent representations (i.e., it is simply a reparametrization).

Consider a Tweedie distribution, with variance function power p\in(1,2), mean \mu and scale parameter \phi; then it is a compound Poisson model, with

  • N\sim\mathcal{P}(\lambda) with \lambda=\displaystyle{\frac{\phi \mu^{2-p}}{2-p}}
  • Y_i\sim\mathcal{G}(\alpha,\beta) with \alpha=\displaystyle{-\frac{p-2}{p-1}}\text{~and~}\beta=\displaystyle{\frac{\phi \mu^{1-p}}{p-1}}

Conversely, consider a compound Poisson model N\sim\mathcal{P}(\lambda) and Y_i\sim\mathcal{G}(\alpha,\beta), then

  • variance function power is p=\displaystyle{\frac{\alpha+2}{\alpha+1}}
  • mean is \mu=\displaystyle{\frac{\lambda \alpha}{\beta}}
  • scale (nuisance) parameter is
    \phi=\displaystyle{\frac{[\lambda\alpha]^{\frac{\alpha+2}{\alpha+1}-1}\beta^{2-\frac{\alpha+2}{\alpha+1}}}{\alpha+1}}

So the two representations are equivalent… A quick numerical check is sketched below.
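Here is a minimal sketch of such a check (the values of \lambda, \alpha and \beta below are arbitrary): we simulate a compound Poisson-Gamma sample, and verify that the empirical mean is close to \lambda\alpha/\beta and that the AIC-optimal Tweedie power is close to (\alpha+2)/(\alpha+1),

library(statmod)
library(tweedie)
set.seed(1)
lambda = 1; alpha = 1; beta = 2
N = rpois(1e4,lambda)
S = sapply(N, function(k) sum(rgamma(k,shape=alpha,rate=beta)))
mean(S)                     # should be close to lambda*alpha/beta = 0.5
(alpha+2)/(alpha+1)         # theoretical variance function power, here 1.5
glmtw = function(t) AICtweedie(glm(S~1,family=tweedie(var.power=t,link.power=0)))
vt = seq(1.1,1.9,length=81)
vt[which.min(Vectorize(glmtw)(vt))]   # AIC-optimal power, expected to be close to 1.5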

Heterogeneous case

Now, in the context of regression, N_i\sim\mathcal{P}(\lambda_i)\text{ with }\lambda_i=\exp[\boldsymbol{x}_i^\top\boldsymbol{\beta}_{\lambda}]
and Y_{j,i}\sim\mathcal{G}(\mu_i,\phi)\text{ with }\mu_i=\exp[\boldsymbol{x}_i^\top\boldsymbol{\beta}_{\mu}].
Then S_i=Y_{1,i}+\cdots+Y_{N_i,i} has a Tweedie distribution, where

  • the variance function power is p=\displaystyle{\frac{\phi+2}{\phi+1}}
  • the mean is \lambda_i \mu_i
  • the scale parameter is \displaystyle{\frac{\lambda_i^{\frac{1}{\phi+1}-1}}{\mu_i^{\frac{\phi}{\phi+1}}}\left(\frac{\phi}{1+\phi}\right)}

There are 1+2\,\text{dim}(\boldsymbol{X}) degrees of freedom here (the two sets of regression coefficients, plus the dispersion \phi). A Tweedie regression, on the other hand, is such that

  • variance function power is p\in(1,2)
  • mean is \mu_i=\exp[\boldsymbol{x}_i^{\top}\boldsymbol{\beta}_{\text{Tweedie}}]
  • scale parameter is \phi

There are now 2+\text{dim}(\boldsymbol{X}) degrees of freedom.

In actuarial terminology,

  • N is the annual claim frequency
  • Y is the cost of single claims
  • S is the annual cost for a single insurance policy

As explained in our book, frequency and costs can be explained by different features, so that in itself is a motivation to consider two models. But consider also the following simulated data

n = 1e4
a = 2
set.seed(123)
x = runif(n)
etan = exp(-2+a*x)               # expected frequency, increasing with x
N = rpois(n,etan)                # number of claims per policy
dfn = data.frame(y=N,x=x)        # frequency dataset
I = rep(1:n,N)                   # policy index of each individual claim
etaz = exp(2-a*x[I])             # Gamma shape (mean cost etaz/20), decreasing with x
Z = rgamma(sum(N),etaz,20)       # individual claim costs
dfz = data.frame(y=Z,x=x[I])     # individual costs dataset
S = tapply(Z,as.factor(I),sum)   # total cost per policy with at least one claim
V = as.numeric(S[as.character(1:n)])
V[is.na(V)] = 0                  # policies without claims have a zero annual cost
dfy = data.frame(y=V,x=x)        # annual cost dataset

We can run two regressions, one for the frequency and one for the individual costs,

regn = glm(y~x, family=poisson(link="log"),data=dfn)
regz = glm(y~x, family=Gamma(link="log"),data=dfz)

For the Tweedie regression, let us first find the optimal power parameter, based on the AIC,

library(statmod)
library(tweedie)
glmtw = function(t){
m = glm(y~x, family=tweedie(var.power = t, link.power = 0),data=dfy)
d = NULL
if(t == 1) d = 1   # in the Poisson case, the dispersion must be set to 1
AICtweedie(m, dispersion = d)
}
vt = seq(1.01,1.99,length=251)
vg = Vectorize(glmtw)(vt)
plot(vt,vg,log="y",type="l")
i=which.min(vg)

and consider the associated Tweedie regression.

regy = glm(y~x, family=tweedie(var.power = vt[i], link.power = 0),data=dfy)

For the frequency, there is a clear (and highly significant) increase of the average frequency with x

summary(regn)

Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.01508 0.04135 -48.73 <2e-16 ***
x            1.99036 0.05887  33.81 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for poisson family taken to be 1)

For the individual costs, there is a clear (and significant) decline of the average cost with x, which can be checked with

summary(regz)

Now, if we consider the average annual cost per policy, with the Tweedie regression, we have

summary(regy)

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -3.00822 0.04101 -73.356 <2e-16 ***
x           -0.02226 0.07154  -0.311  0.756
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for Tweedie family taken to be 0.6516459)

I.e., the average annual cost for a single policy does not depend on x (the coefficient is clearly not significant). This is expected here: by construction, the frequency increases with x while the average individual cost decreases with x, so the product of the two tells more or less the same story as the Tweedie regression (a quick comparison is sketched below)…
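Here is a sketch of that comparison, reusing the three regressions fitted above (the grid of values for x is arbitrary),

nd = data.frame(x=seq(0,1,by=.1))
pn = predict(regn,newdata=nd,type="response")   # expected annual frequency
pz = predict(regz,newdata=nd,type="response")   # expected individual cost
py = predict(regy,newdata=nd,type="response")   # Tweedie expected annual cost
cbind(x=nd$x,product=pn*pz,tweedie=py)          # the two should be close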

Even if the outcome (the price) is the same, one could agree that having the two regressions here is much more informative for risk management (if one wants to introduce deductibles, for instance).

Model selection, AIC and Tweedie regression

Just some simple code to illustrate some points we will discuss this week, in the last course on GLMs before the final exam. We have mentioned that the Gamma distribution belongs to the exponential family, so we can run a regression and compute the associated AIC,

> set.seed(123)
> test.data = rgamma(n=2000, scale=1, shape=1)
> m1 = glm( test.data~1, family=Gamma(link=log))
> AIC(m1)
[1] 3997.332

The Gamma distribution is also a special case of the Tweedie distribution, with power 2

> library(statmod)
> library(tweedie)
> m2 = glm( test.data~1, family=tweedie(link.power=0, var.power=2) )
> AIC(m2)
[1] NA

Unfortunately, we cannot compute the AIC directly, and we need a trick (namely, the appropriate R function).

> AICtweedie(m2)
[1] 3997.332

Of course, we can do the same with the Poisson distribution, which also belongs to the exponential family

> test.data = rpois(n=2000, lambda=1)
> m3 = glm( test.data~1, family=poisson(link=log))
> m4 = glm( test.data~1, family=tweedie(link.power=0, var.power=1) )
> AIC(m3)
[1] 5124.61

Here, we have a problem with the AICtweedie function

> AICtweedie(m4)
[1] Inf

because we need to specify the dispersion parameter

> AICtweedie(m4, dispersion=1)
[1] 5124.61

We can now check: we generate a Gamma sample, and fit various Tweedie distributions, simply changing the variance function (which is a power function of the mean)

> set.seed(123)
> test.data = rgamma(n=2000, scale=1, shape=1)
> glmtw = function(t){
+ m1 = glm( test.data~1, family=tweedie(link.power=0, var.power=t) )
+ d = NULL
+ if(t == 1) d = 1
+ AICtweedie(m1, dispersion = d)
+ }
> vt = seq(1,2.7,length=100)
> vg = Vectorize(glmtw)(vt)
> plot(vt,vg,log="y",type="l")

The AIC is minimized for a power close to 2, corresponding to the Gamma distribution

We can also try with a Poisson

> set.seed(123)
> test.data = rpois(n=2000, lambda=1)
> glmtw = function(t){
+ m1 = glm( test.data~1, family=tweedie(link.power=0, var.power=t) )
+ d = NULL
+ if(t == 1) d = 1
+ AICtweedie(m1, dispersion = d)
+ }
> vt = seq(1,2,length=100)
> vg = Vectorize(glmtw)(vt)
> plot(vt,vg,log="y",type="l")

The AIC is now minimized for a power close to 1, corresponding to the Poisson distribution (for which the variance is equal to the mean)

Let us now try some compound Poisson distribution,

> rcpd=function(n,lambda,shape,scale){
+ N=rpois(n,lambda)
+ X=rgamma(sum(N),shape=shape, scale=scale)
+ I=as.factor(rep(1:n,N))
+ S=tapply(X,I,sum)
+ V=as.numeric(S[as.character(1:n)])
+ V[is.na(V)]=0
+ return(V)}

Let us generate some compound Poisson random variables, with a Poisson distribution with mean 1, and Gamma jumps with mean and variance equal to 1,

> set.seed(123)
> test.data = rcpd(n=2000, 1,1,1)
> glmtw = function(t){
+ m1 = glm( test.data~1, family=tweedie(link.power=0, var.power=t) )
+ d = NULL
+ if(t == 1) d = 1
+ AICtweedie(m1, dispersion = d)
+ }
> vt = seq(1.1,1.9,length=100)
> vg = Vectorize(glmtw)(vt)
> plot(vt,vg,log="y",type="l")

The optimal value for the power, based on the AIC, is here close to 1.5 (the relationships between the Tweedie parameters and the compound Poisson ones are given in the slides)

We can now play a little bit with the variance of the jumps: they still have mean 1, but now a smaller variance

> set.seed(123)
> test.data = rcpd(n=2000, 1,3,1/3)
> vt = seq(1.05,1.95,length=100)
> vg = Vectorize(glmtw)(vt)
> plot(vt,vg,log="y",type="l")

The optimal power for the Tweedie is now closer to one, i.e. closer to the Poisson case,

while if we increase the variance of the jumps

> set.seed(123)
> test.data = rcpd(n=2000, 1,1/3,3)
> vt = seq(1.05,1.95,length=100)
> vg = Vectorize(glmtw)(vt)
> plot(vt,vg,log="y",type="l")

the optimal power is higher, closer to the Gamma distribution. Those values are consistent with the formula p=(\alpha+2)/(\alpha+1), as checked below.
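As a quick check (a sketch), the theoretical powers in those three experiments can be computed from that formula, where \alpha is the shape of the Gamma jumps,

> alpha = c(1, 3, 1/3)
> (alpha+2)/(alpha+1)
[1] 1.50 1.25 1.75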

Fisher Information Computation(s)

Last week, we did a few computations to obtain the Fisher information for some classical distributions. I just wanted to write up properly the computations for distributions with several parameters. For the Gamma distribution, the log-likelihood and its Hessian can be written explicitly, and there is not even a need to take an expectation here, since the Hessian does not depend on the observations.
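For a sample x_1,\cdots,x_n from a \mathcal{G}(\alpha,\beta) distribution (shape \alpha, rate \beta), here is a sketch of the standard computation. The log-likelihood writes

\ell(\alpha,\beta)=n\alpha\log\beta-n\log\Gamma(\alpha)+(\alpha-1)\sum_{i=1}^n\log x_i-\beta\sum_{i=1}^n x_i

so that the Hessian is

\frac{\partial^2\ell}{\partial\alpha^2}=-n\,\psi'(\alpha),\qquad \frac{\partial^2\ell}{\partial\alpha\partial\beta}=\frac{n}{\beta},\qquad \frac{\partial^2\ell}{\partial\beta^2}=-\frac{n\alpha}{\beta^2}

and, since it does not involve the observations, the Fisher information is simply

I(\alpha,\beta)=n\begin{pmatrix}\psi'(\alpha) & -1/\beta\\ -1/\beta & \alpha/\beta^2\end{pmatrix}

where \psi denotes the digamma function and \psi' its derivative (the trigamma function).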


I Fought the (distribution) Law (and the Law did not win)

A few days ago, I was asked whether we should spend a lot of time choosing the distribution we use, in GLMs, for (actuarial) ratemaking. On that topic, I usually claim that the family is not the most important parameter in the regression model. Consider the following dataset

> db <- data.frame(x=c(1,2,3,4,5),y=c(1,2,4,2,6))
> plot(db,xlim=c(0,6),ylim=c(-1,8),pch=19)

To visualize a regression model, use the following code

> nd=data.frame(x=seq(0,6,by=.1))
> add_predict = function(reg){
+ prd1=predict(reg,newdata=nd,se.fit=TRUE,type="response")
+ y1=prd1$fit
+ y1_upp=prd1$fit+prd1$residual.scale*1.96*prd1$se.fit
+ y1_low=prd1$fit-prd1$residual.scale*1.96*prd1$se.fit
+ polygon(c(nd$x,rev(nd$x)),c(y1_upp,rev(y1_low)),col="light green",angle=90,density=40,border=NA)
+ lines(nd$x,y1,col="red",lwd=2)
+ }

For instance, with a Poisson regression (with a log link function) we get

> plot(db)
> reg1=glm(y~x,family=poisson(link="log"),
+ data=db)
> add_predict(reg1)

while, with a Gaussian regression (but still with a log link function), we get

> plot(db)
> reg2=glm(y~x,family=gaussian(link="log"),
+ data=db)
> add_predict(reg2)

If we just care about the expected value of our prediction, the output is more or less the same

> plot(db)
> lines(nd$x,predict(reg1,newdata=nd,
+ type="response"),col="red",lwd=1.5)
> lines(nd$x,predict(reg2,newdata=nd,
+ type="response"),col="blue",lwd=1.5)

So, indeed, forget about the (distribution) law when running a GLM. Not convinced? Consider – on the same dataset – a Poisson regression (with an identity link function this time)

> plot(db)
> reg1=glm(y~x,family=poisson(link="identity"),
+ data=db)
> add_predict(reg1)

while, with a Gaussian regression (but still with an identity link function), we get

> plot(db)
> reg2=glm(y~x,family=gaussian(link="identity"),
+ data=db)
> add_predict(reg2)

Again, if we just plot the expected value of our prediction, the output is more or less the same

> plot(db)
> lines(nd$x,predict(reg1,newdata=nd,
+ type="response"),col="red",lwd=1.5)
> lines(nd$x,predict(reg2,newdata=nd,
+ type="response"),col="blue",lwd=1.5)

So clearly, the simplistic message “you should not care too much about the (distribution) law” seems to be valid…


Modeling Earthquake Dynamics

In 2012, with Marilou Durand, a student at UQAM, we worked on the seismic gap hypothesis, see e.g. McCann et al. (1978) or Kagan & Jackson (1991), or, to be more specific, on the dynamics between earthquake magnitudes (or seismic moments) and inter-occurrence durations. Our paper should appear soon in the Journal of Seismology,

In this paper, we investigate questions arising in Parsons & Geist (2012). Pseudo-causal models connecting magnitudes and waiting times are considered, through generalized regression. We use conditional models (magnitude given previous waiting time, and conversely) as an extension of the joint distribution model described in Nikoloulopoulos & Karlis (2008). On the one hand, we fit a Pareto distribution for earthquake magnitudes, where the tail index is a function of the waiting time following the previous earthquake; on the other hand, waiting times are modeled using a Gamma or a Weibull distribution, where the parameters are functions of the magnitude of the previous earthquake. We use those two models, alternatively, to generate the dynamics of earthquake occurrence, and to estimate the probability of occurrence of several earthquakes within a year, or a decade.

The paper is online on https://hal.archives-ouvertes.fr/.

Bias and MLE

Before leaving the office this evening, JP decided to knock at my door to ask me a “quick and very basic question” (as he put it). This is JP's strategy, and he knows it works. His question was, more or less: what do we know about the bias of maximum likelihood estimation when we have a small sample from a Gamma distribution? He was surprised by some results he got. If I wanted to be naughty, too, I would say that he was surprised to see how long his student spent coding that in SAS. So he wanted to challenge me, and see how fast I could give him a valuable answer. Given the fact that I had to leave early because my elder son had a fencing competition, I tried to write a simple code to “visualize” the bias of the first parameter (the shape) of a Gamma distribution, estimated by maximum likelihood.

Before showing the graph, I wanted to add that I hate one thing about mathematical statistics courses: we learn nothing interesting there. I mean, we see nice mathematical concepts, but after such a class, you can hardly say anything when you see your first dataset, like with real data. For instance, this course usually emphasizes asymptotic results, using limiting theorems. When you take this course, you learn a lot of things about maximum likelihood, for instance. You can compute the asymptotic variance and derive asymptotic confidence intervals. But are those results relevant when you have 50 observations? Is it possible, with 50 observations, to have a bias of the same size as the parameter?

As usual, one possible answer is “if you don't have a lot of observations, be Bayesian!”. Maybe. Someday. What I tried, here, is to run simulations to see how MLE estimators behave. Given an i.i.d. sample \{X_1,\cdots,X_n\} from a \mathcal{G}(\alpha,\beta) distribution, let \widehat{\alpha} and \widehat{\beta} denote the maximum likelihood estimators of the two parameters.

library(fitdistrplus)
maxl=function(x) fitdist(x,"gamma",method="mle")$estimate   # MLE of (shape, rate)
VK=floor(exp(seq(log(5),log(200),length=25)))               # sample sizes, from 5 to 200
V=NULL
for(k in 1:length(VK)){
n=VK[k]
N=5000                                                      # number of simulated samples per size
m=matrix(rgamma(n*N,1,2),n,N)                               # true shape 1, true rate 2
ss=apply(m,2,maxl)
V=rbind(V,ss)}
y=as.vector(V[seq(1,length(VK)*2,by=2),])                   # keep the shape estimates only
x=rep(c(VK),ncol(V))
boxplot(y~x,
xlab="Nb. observations (log scale)",ylim=c(0,6))
abline(h=1,lty=2,col="blue")                                # true value of the shape parameter

Here, in our simulations, the shape parameter was 1. On the graph, we have boxplots of \widehat{\alpha}, obtained on several scenarios. We clearly see the positive bias of the MLE. And the bias decreases with n (as expected, since the MLE is asymptotically unbiased). We can also visualize the distribution of \widehat{\alpha} (instead of boxplots)

It is also possible to derive analytical results. David Cox and Joyce Snell did the maths in 1968 and actually obtained analytical expressions for the biases. More recently, David Giles (a.k.a. @deagiles on Twitter) and Hui Feng did look at the behavior of bias-adjusted estimators, a few years ago. For instance, one can get that, to first order,

\text{bias}(\widehat{\alpha})\approx\displaystyle{\frac{\alpha\,\psi'(\alpha)-\alpha^2\,\psi''(\alpha)-2}{2n\,[\alpha\,\psi'(\alpha)-1]^2}}

where \psi(x)=\Gamma'(x)/\Gamma(x) is the so-called digamma function, and \psi' and \psi'' are its first and second order derivatives, see e.g. Bowman and Shenton (1982) – yes, there is a book on the topic of estimating the parameters of the Gamma distribution…

Observe that the bias of \widehat{\alpha} does not depend on \beta, while the bias of \widehat{\beta} will depend on \alpha.

d1digamma=function(x,h=1e-7)   # numerical first derivative of the digamma function
return((digamma(x+h)-digamma(x-h))/(2*h))
d2digamma=function(x,h=1e-7)   # numerical second derivative of the digamma function
return((d1digamma(x+h)-d1digamma(x-h))/(2*h))
biasalpha=function(a,n){       # first-order approximation of the bias of the shape MLE
return((a*d1digamma(a)-a^2*d2digamma(a)
-2)/(2*n*(a*d1digamma(a)-1)^2))
}

The way I compute it is probably not optimal, so if you want to improve it, please, go ahead! If we compare the average bias obtained in our simulations with this first-order approximation, we get

m=apply(V,1,mean)
plot(VK,m[seq(1,length(VK)*2,by=2)],type="b",col="red",xlab="Nb. observations (log scale)",log="x")
abline(h=1,lty=2,col="blue")
B=Vectorize(function(n) biasalpha(a=1,n))(1:200)
lines(1:200,B+1,col="orange")

Observe here that neglecting the higher-order terms yields an underestimation of the real bias… Fun, isn't it?
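Since improvements are welcome, here is a minimal sketch of one: base R already provides exact derivatives of the digamma function, trigamma() and psigamma(), so the finite-difference approximations above can be avoided,

biasalpha2=function(a,n){
# same first-order bias approximation, with exact derivatives of the digamma function
return((a*trigamma(a)-a^2*psigamma(a,deriv=2)-2)/(2*n*(a*trigamma(a)-1)^2))
}
biasalpha2(a=1,n=50)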

Maximum Likelihood versus Goodness of Fit

Thursday, I got an interesting question from a colleague of mine (JP). I mean, the way I understood the question turned out to be a nice puzzle (but I have to confess I might have misunderstood). The question is the following: consider an i.i.d. sample \{X_1,\cdots,X_n\} of continuous variables. We would like to choose between two (parametric) families for the distribution, \mathcal{F}=\{F_{\boldsymbol\alpha};\boldsymbol\alpha\in\mathcal{A}\} and \mathcal{G}=\{G_{\boldsymbol\beta};\boldsymbol\beta\in\mathcal{B}\}. If we use maximum likelihood techniques, we get two estimators, one for each family, \widehat{\boldsymbol\alpha} and \widehat{\boldsymbol\beta}. Suppose that F_{\widehat{\boldsymbol\alpha}}(\cdot) turns out to be a much better fit than G_{\widehat{\boldsymbol\beta}}(\cdot), in the sense of a standard goodness-of-fit test (e.g. Kolmogorov-Smirnov, since the sample is assumed to be obtained from a continuous variable). Does that mean that family \mathcal{F} is (somehow) better than family \mathcal{G}?

This is my interpretation of the question, and I found it amusing. So I will try to show (using simulated samples) that some odd situations can easily be obtained.

Consider a sample from a mixture of log-normal distributions,

>  set.seed(228)
>  X=exp(c(rnorm(50,1,1),rnorm(50,2,1.2)))

Consider two standard families for positive random variables: a Gamma distribution and a lognormal distribution.

> library(MASS)
> ab=fitdistr(X,"gamma")
> ms=fitdistr(X,"lognormal")

If we want to visualize those two fitted distributions, let us use

> u=seq(0,max(X),length=501)   # grid of values
> vab=pgamma(u,ab$estimate[1],ab$estimate[2])
> vms=plnorm(u,ms$estimate[1],ms$estimate[2])
> plot(ecdf(X))
> lines(u,vab,col="red")
> lines(u,vms,col="blue")

Here, we get

What else can we say? Actually, we can also compute the Kolmogorov-Smirnov statistic,

D_n=\sup_x |\widehat F_n(x)-F_\star(x)|

where

\widehat F_n(x)=\frac{1}{n}\sum_{i=1}^n \boldsymbol{1}_{\{X_i\leq x\}}
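Since F_\star is fully specified here, the supremum is attained at one of the observations, so the statistic can also be computed by hand (a sketch, using the fitted lognormal distribution as F_\star),

> Fstar = plnorm(sort(X),ms$estimate[1],ms$estimate[2])
> n = length(X)
> max(pmax(abs((1:n)/n-Fstar),abs((0:(n-1))/n-Fstar)))

which should match the D statistic returned by ks.test.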

This can be done using

> ks.test(X,"plnorm",ms$estimate[1],ms$estimate[2])

One-sample Kolmogorov-Smirnov test

data:  X
D = 0.0693, p-value = 0.7231
alternative hypothesis: two-sided

> ks.test(X,"pgamma",ab$estimate[1],ab$estimate[2])

One-sample Kolmogorov-Smirnov test

data:  X
D = 0.148, p-value = 0.02507
alternative hypothesis: two-sided

From a theoretical point of view, we should not look at the p-values, since the null distribution is based on a fixed distribution, not a fitted one (see the Lilliefors test for normal samples). But still. The Gamma distribution seems to be very far away from the true distribution: its statistic is twice the one we have with our lognormal distribution, and one p-value is 72% while the other one is 2.5%. Here, we should prefer the lognormal distribution to the Gamma one. But we did consider only one distribution in each family. Does that mean that we cannot find one Gamma distribution that will be better than all possible lognormal distributions? Better, for instance, according to the Kolmogorov-Smirnov statistic…

Well, it is possible to use another strategy to find appropriate parameters: we can actually minimize this statistic. Define

> ks1=function(ms) {m=ms[1];s=ms[2];ks.test(X,"plnorm",m,s)$statistic}
> ks2=function(ab) {a=ab[1];b=ab[2];ks.test(X,"pgamma",a,b)$statistic}

and compute

> n1=nlm(ks1,c(ms$estimate[1],ms$estimate[2]))
> n1
$minimum
[1] 0.05252692

$estimate
[1] 1.547437 1.121864
> n2=nlm(ks2,c(ab$estimate[1],ab$estimate[2]))
> n2
$minimum
[1] 0.04737725

$estimate
[1] 1.1449041 0.167041

So here, it is actually possible to find a distribution much closer to the empirical sample within the Gamma family.

>  vab=pgamma(u,n2$estimate[1],n2$estimate[2])
>  vms=plnorm(u,n1$estimate[1],n1$estimate[2])
>  lines(u,vab,col="red",lwd=2)
>  lines(u,vms,col="blue",lwd=2)

What would be the point here? Maybe just the idea that the maximum likelihood estimator is only one estimator among many. And even if it has interesting asymptotic properties, on small samples it might not be the best estimator to consider…

And to be completely honest, I've been cheating here… I mean, not really cheating (not more than any researcher using a statistical test to validate the findings). But here, I did fix the seed of the random number generator. Actually, such an example does not occur that frequently: out of 1,000 samples, I got this odd conclusion almost 15 times. And the smaller the sample, the more likely we are to observe such a story, where the maximum likelihood estimator can be far away from the best fit. Here is the proportion of opposite conclusions, as a function of the sample size,

> SIM=function(ns=1000,n=100){
+ t=0
+ for(s in 1:ns){
+  set.seed(s)
+  X=exp(c(rnorm(n/2,1,1),rnorm(n/2,2,1.2)))
+  ks1=function(ms) {m=ms[1];s=ms[2];ks.test(X,"plnorm",m,s)$statistic}
+  ks2=function(ab) {a=ab[1];b=ab[2];ks.test(X,"pgamma",a,b)$statistic}
+  library(MASS)
+  ab=fitdistr(X,"gamma")
+  ms=fitdistr(X,"lognormal")
+  n1=nlm(ks1,c(ms$estimate[1],ms$estimate[2]))
+  n2=nlm(ks2,c(ab$estimate[1],ab$estimate[2]))
+  if( (ks.test(X,"plnorm",ms$estimate[1],ms$estimate[2])$statistic-
+  ks.test(X,"pgamma",ab$estimate[1],ab$estimate[2])$statistic)
+ *(n1$minimum-n2$minimum)<=0 ) t=t+1
+ }
+ return(t/ns)}

> VM=rep(NA,20)
> VS=seq(10,200,by=10)
> for(i in 1:20){VM[i]=SIM(n=VS[i],ns=1000)}
> plot(VS,VM,type="p")

So, to provide a more complete answer to JP's question: with a very large sample, I guess that the intuition should be valid… but clearly not on a small sample.

More significant? so what…

Following my non-life insurance class this morning, I had an interesting question from a student, which I will try to illustrate and reformulate as accurately as possible. Consider a simple regression model, with one variable of interest, and one possible explanatory variable. Assume that we have two possible models, with the following output (yes, I do hide the interesting parts here, but it is to get quickly to my student's point)

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.92883    0.06391  14.534   <2e-16 ***
X           -0.12499    0.06108  -2.046   0.0421 *  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

for the first model – a GLM with some distribution, and some link function – and

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.92901    0.06270  14.817   <2e-16 ***
X           -0.09883    0.05816  -1.699   0.0909 .  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

for the second one – with another GLM, with another distribution, but the same link function (I guess I could have changed it, but it does not really matter here). Then, I got the following statement “I would like to choose the first model because the explanatory variable is more significant, and therefore, this model should have a stronger predictive power“.

That's a nice idea, isn't it? Actually, I guess this is why I love teaching, because I would never have been able to think about such an idea by myself. Because when you look at that statement, somehow it could make sense. Except that, from my point of view, it is not valid at all. My first thought was to recall a standard example in statistical inference: you cannot claim that a distribution is better than another one just by looking at the parameter estimates.

> fitdistr(Y,"normal")
      mean          sd    
  0.93685011   0.90700830 
 (0.06413517) (0.04535042)
> fitdistr(Y,"exponential")
      rate   
  1.06740661 
 (0.07547704)

Can I claim that the Gaussian distribution is better than the exponential one because the parameter estimates have smaller standard errors? Because somehow, this is what we did when we claimed previously that the first model was better than the second one.

Let me get back to the outputs of the two regressions, and let me explain what I did. Actually, I wanted to have a story close to the one on the Gaussian versus exponential fit. So I did generate some exponential random variables,

> set.seed(5)
> n=200
> U=runif(n); 
> Y=-log(U)

Here, we can visualize the histogram of this sample, as well as the estimated exponential distribution

> hist(Y,proba=TRUE,col="light green",border="white",lwd=2,breaks=seq(0,5.3333333333333,by=.333333333))
> x=seq(0,6,by=.02)
> lines(x,dexp(x,1/mean(Y)),col="red",lty=2)

On top of that, let us fit a Gamma distribution, using a GLM (where the regression is on a constant only), just to practice, because later on we will run a Gamma regression on that variable

> reg0=glm(Y~1,family=Gamma(link="identity"))
> a=reg0$coefficient
> b=summary(reg0)$dispersion
> lines(x,dgamma(x,shape=1/b,scale=a*b),col="blue")

Now, we need a covariate to run some regressions. What I wanted is some variable slightly correlated with our previous variable. Slightly, just to make sure that the p-value in the regression will be close to 5% or 10%. So here, I did generate a variable such that the pair of uniforms has a Clayton copula, with parameter 0.1 (which is small, extremely small)

> a=.1
> set.seed(5)
> n=200
> U=runif(n); 
> V=(U^(-a)*(runif(n)^(-a/(1+a))-1)+1)^(-1/a)
> Y=-log(U)
> X=qnorm(V)

To visualize the density of that copula, we can use

> cop=function(u,v){
+ (a+1)*(u*v)^(-(a+1))*
+ (u^(-a)+v^(-a)-1)^(-(2*a+1)/a) }
> x=y=seq(.05,.95,by=.05)
> z=outer(x,y,cop)
> mat=persp(x,y,z,col="green",shade=TRUE,xlim=c(0,1),ylim=c(0,1),zlim=c(0,2),theta=-30,
+ ticktype ="detailed",zlab="")
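As a quick sanity check on that (weak) dependence — a sketch, with the output not reproduced here — one can simply test the correlation between the two simulated variables,

> cor.test(X,Y)                    # Pearson correlation between X and Y
> cor.test(X,Y,method="kendall")   # rank-based alternative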

We should not be far away from independence (actually, there is a small but significant negative Pearson correlation between X and Y). Now, consider two models,

  • a Gaussian model (here a standard linear model)
  • a Gamma model, with an identity link function

The outputs are the following (you will recognize the outputs given previously)

> reg1=lm(Y~X)
> reg2=glm(Y~X,family=Gamma(link="identity"))
> summary(reg1)

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.92883    0.06391  14.534   <2e-16 ***
X           -0.12499    0.06108  -2.046   0.0421 *  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.9021 on 198 degrees of freedom
Multiple R-squared:  0.02071,	Adjusted R-squared:  0.01576 
F-statistic: 4.187 on 1 and 198 DF,  p-value: 0.04206

> summary(reg2)

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.92901    0.06270  14.817   <2e-16 ***
X           -0.09883    0.05816  -1.699   0.0909 .  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for Gamma family taken to be 0.9086447)

    Null deviance: 229.72  on 199  degrees of freedom
Residual deviance: 226.58  on 198  degrees of freedom
AIC: 379.22

Number of Fisher Scoring iterations: 10

And here are the two predictions,
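A sketch of how such a comparison can be plotted (the grid for X is arbitrary),

> nd = data.frame(X=seq(-3,3,by=.1))
> plot(X,Y)
> lines(nd$X,predict(reg1,newdata=nd),col="blue",lwd=1.5)
> lines(nd$X,predict(reg2,newdata=nd,type="response"),col="red",lwd=1.5)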

So, which model should we use? As usual, my answer will be “let’s have a look at the data” instead of looking only at tables of figures. Using some code posted a few days ago, let us visualize the two regressions. The Gaussian model is here

(for the lower part, I do not go below 0, since we do have, here, a positive variable that we would like to model) while the Gamma one is here

And if we believe that the explanatory variable has no predictive power (since we can claim that the parameter is not significant in the regression), and we remove it from the regression, we get

Here, I do believe that the Gamma (not to say the exponential) model is better, because it is clearly more coherent with the properties of the variable of interest. I trust the confidence interval obtained above with the Gamma model more than the one obtained with a Gaussian distribution. Even if the parameter in the regression is "more significant".

Modeling individual claim costs

This week, even though the UQAM network is down, we will continue the course and finish the section on modeling overdispersion for claim frequency. We should then start modeling individual claim costs. In particular, we will spend some time on two points,

  • the distinction between the lognormal and the Gamma distributions
  • the capping of large claims

The slides are online. And the claim cost dataset is the one mentioned during the second lecture.

Earthquake dynamics

I just uploaded on http://hal.archives-ouvertes.fr/hal-00871883 a joint paper entitled Modeling earthquake dynamics.

In this paper, we investigate questions arising in Parsons & Geist (2012). Pseudo-causal models connecting magnitudes and waiting times are considered, through generalized regression. We use conditional models (magnitude given previous waiting time, and conversely) as an extension of the joint distribution model described in Nikoloulopoulos & Karlis (2008). On the one hand, we fit a Pareto distribution for earthquake magnitudes, where the tail index is a function of the waiting time following the previous earthquake; on the other hand, waiting times are modeled using a Gamma or a Weibull distribution, where the parameters are functions of the magnitude of the previous earthquake. We use those two models, alternatively, to generate the dynamics of earthquake occurrence, and to estimate the probability of occurrence of several earthquakes within a year, or a decade.

Modeling individual claim costs in ratemaking

Before finishing the course on ratemaking, we will discuss the modeling of individual claim costs. We will talk about Gamma and lognormal distributions (on the latter, I suggest re-reading what was said in the regression models course about log-linear models, recalled in a short post published last fall). We will also talk about mixtures of distributions, and about multinomial models. The slides are online here,

To go further, there is the paper by Fu & Moncher (2004) on the Gamma versus lognormal comparison, http://casact.org/… or Holler, Sommer & Trahair (1999) http://casact.org/… which offered a state of the art some fifteen years ago. Otherwise, I recommend reading the Practitioner's Guide to Generalized Linear Models, online at http://casact.org/….

Large claims, and ratemaking

During the course, we have seen that it is natural to assume that not only the individual claim frequency, but also the individual costs, can be explained by some covariates. Of course, appropriate families should be considered to model the distribution of the cost Y, given some covariates \boldsymbol{X}. Here is the dataset we'll use,

>  sinistre=read.table("http://freakonometrics.free.fr/sinistreACT2040.txt",
+  header=TRUE,sep=";")
>  sinistres=sinistre[sinistre$garantie=="1RC",]
>  sinistres=sinistres[sinistres$cout>0,]
>  contrat=read.table("http://freakonometrics.free.fr/contractACT2040.txt",
+  header=TRUE,sep=";")
>  couts=merge(sinistres,contrat)
> tail(couts)
     nocontrat    no garantie    cout exposition zone puissance agevehicule
1919   6104006 11933      1RC 5376.04       0.37    E         6           1
1920   6107355 12349      1RC   51.63       0.74    E         4           1
1921   6108364 13229      1RC 1320.00       0.74    B         9           1
1922   6109171 11567      1RC 1320.00       0.74    B        13           1
1923   6111208 14161      1RC  970.20       0.49    E        10           5
1924   6111650 14476      1RC 1940.40       0.48    E         4           0
     ageconducteur bonus marque carburant densite region
1919            32    57     12         E      93     10
1920            45    57     12         E      72     10
1921            32   100     12         E      83      0
1922            56    50     12         E      93     13
1923            30    90     12         E      53      2
1924            69    50     12         E      93     13

Here, each line is a claim. Usual families to model the cost are the Gamma distribution, the inverse Gaussian, or the lognormal distribution (which is not in the exponential family, but one can assume that the logarithm of the cost can be modeled with a Gaussian distribution). Consider here only one covariate, e.g. the age of the car, and two different models: a Gamma one, and a lognormal one.

> age=0:20
> reggamma.sp <- glm(cout~agevehicule,family=Gamma(link="log"),
+ data=couts)
> Pgamma <- predict(reggamma.sp,newdata=data.frame(agevehicule=age),type="response")

For the Gamma regression, it is a simple GLM, so it is not difficult. For the lognormal distribution, one should remember that the expected value of a lognormal random variable is not the exponential of the expected value of the underlying Gaussian one. A correction should be made here to get an unbiased estimator of the average cost,
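Explicitly, the correction comes from the standard fact that if \log Y\sim\mathcal{N}(\mu,\sigma^2), then

\mathbb{E}(Y)=\exp\left(\mu+\frac{\sigma^2}{2}\right)

which is what is implemented below, with \mu replaced by the prediction of the linear model fitted on the logarithm of the cost.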

> reglm.sp <- lm(log(cout)~agevehicule,data=couts)
> sigma <- summary(reglm.sp)$sigma
> mu <- predict(reglm.sp,newdata=data.frame(agevehicule=age))
> Pln <- exp(mu+sigma^2/2)

We can plot those two predictions on a single graph,

> plot(age,Pgamma,xlab="",ylab="",col="red",type="b",pch=4)
> lines(age,Pln,col="blue",type="b")

Here it is,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-14.18.56.png

Observe that it is also possible to use splines, since there might be no reason for the age to appear here in a multiplicative way,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-14.25.52.png

Here, the two models are rather close. Nevertheless, one should remember that the Gamma model can be extremely sensitive to large claims (I mean here really large claims). On the other hand, with the log transformation used in the lognormal model, it seems that this model is less sensitive to large events. Actually, if I use the complete dataset, the regressions are the following,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-14.19.44.png

i.e. with a lognormal distribution, the average cost is decreasing with the age of the car, while it is increasing with a Gamma model. The main reason here is that there is one large (not to say huge) claim in the dataset,

> couts[which.max(couts$cout),]
         cout exposition zone puissance agevehicule ageconducteur
7842  4024601       0.22    B         9          13            19
     marque carburant densite region
7842      2         E      93     24

One young driver got a $4 million claim, with a 13-year-old car. This is an outlier for the Gamma regression, which clearly influences the estimation (the second largest claim is only one third of this one). Since there is a clear influence of large claims on the estimation of the average cost, a natural idea might be to remove those large claims. Or perhaps to see them as different from normal claims: normal claims can be explained by some covariates, but perhaps those large claims should be shared not only within their own risk class, but among all the insured in the portfolio. To formalize this idea, observe that we can write

\mathbb{E}(Y|\boldsymbol{X}) = \underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq s)}_{A}\cdot\underbrace{\mathbb{P}(Y\leq s|\boldsymbol{X})}_{B}+\underbrace{\mathbb{E}(Y|Y> s,\boldsymbol{X})}_{C}\cdot\underbrace{\mathbb{P}(Y> s|\boldsymbol{X})}_{B}

where the first product is associated with normal-size claims, while the second one corresponds to large claims. It is then possible to run three regressions: one on normal-size claims, one on large claims, and one on the indicator of having a large claim, given that a claim occurred. The code is something like the following, where a large claim is – here – a claim above $10,000 (one has to fix such a threshold)

> s= 10000
> couts$normal=(couts$cout<=s)
> mean(couts$normal)
[1] 0.9818087

which means that large claims represent about 2% of the claims in our dataset. We can run three sets of regressions, with smoothed (spline) regressions on the age of the car. The first one models the individual costs of large claims,

> indice = which(couts$cout>s)
> mean(couts$cout[indice])
[1] 34471.59
> library(splines)
> regB=glm(cout~bs(agevehicule),data=couts,
+ subset=indice,family=Gamma(link="log"))
> ypB=predict(regB,newdata=data.frame(agevehicule=age),type="response")
> ypB2=mean(couts$cout[indice])

the second one to model normal claims individual costs,

> indice = which(couts$cout<=s)
> mean(couts$cout[indice])
[1] 1335.878
> regA=glm(cout~bs(agevehicule),data=couts,
+ subset=indice,family=Gamma(link="log"))
> ypA=predict(regA,newdata=data.frame(agevehicule=age),type="response")
> ypA2=mean(couts$cout[indice])

And finally, a third one, on the probability of having a normal sized claim, given that a claim occurred

> regC=glm(normal~bs(agevehicule),data=couts,family=binomial)
> ypC=predict(regC,newdata=data.frame(agevehicule=age),type="response")
> regC2=glm(normal~1,data=couts,family=binomial)
> ypC2=predict(regC2,newdata=data.frame(agevehicule=age),type="response")

Note that we have, each time, something that can be interpreted either as \mathbb{E}(Y|\boldsymbol{X},Y\gtrless s) or \mathbb{E}(Y|Y\gtrless s) – i.e. no covariate is considered in the latter. On the graph below, we did plot

\mathbb{E}(Y|\boldsymbol{X}) = \underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq s)}_{A}\cdot\underbrace{\mathbb{P}(Y\leq s|\boldsymbol{X})}_{B}+\underbrace{\mathbb{E}(Y|Y> s,\boldsymbol{X})}_{C}\cdot\underbrace{\mathbb{P}(Y> s|\boldsymbol{X})}_{B}

where Gamma regressions – with splines – are considered for the average costs, while logistic regressions – again with splines – are considered to model the probabilities. A sketch combining those pieces is given below.
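Here is a sketch of how those pieces can be combined into a predicted average claim cost, using the objects ypA, ypB, ypB2, ypC and ypC2 computed above (colors are arbitrary),

> ypABC   = ypC*ypA +(1-ypC)*ypB     # covariates everywhere (A, B and C)
> ypAB2C  = ypC*ypA +(1-ypC)*ypB2    # large-claim severity without covariates (C')
> ypAB2C2 = ypC2*ypA+(1-ypC2)*ypB2   # neither severity nor probability of large claims (B' and C')
> plot(age,ypABC,type="b",col="blue",xlab="age of the car",ylab="expected claim cost")
> lines(age,ypAB2C,type="b",col="red")
> lines(age,ypAB2C2,type="b",col="purple")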

http://freakonometrics.hypotheses.org/files/2013/02/ecret-ABC-v2.gif

(but be careful with splines: on the borders, since we do not have a lot of observations, the behavior can be… odd, and adjustments should be made to obtain an adequate level of premium). If it is legitimate to assume that normal-size claims can be explained by some covariates, perhaps large claims (or extremely large ones) are just purely random, i.e. not a function of any covariate at all, i.e.

\mathbb{E}(Y|\boldsymbol{X}) = \underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq s)}_{A}\cdot\underbrace{\mathbb{P}(Y\leq s|\boldsymbol{X})}_{B}+\underbrace{\mathbb{E}(Y|Y> s)}_{C'}\cdot\underbrace{\mathbb{P}(Y> s|\boldsymbol{X})}_{B}

http://freakonometrics.hypotheses.org/files/2013/02/ecret-AB2C-v2.gif

To go one step further, it might also be possible to assume that not only the size of the claim (given that it is a large one) is not a function of any covariate, but that neither is the probability of having an extremely large claim,

\mathbb{E}(Y|\boldsymbol{X}) = \underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq s)}_{A}\cdot\underbrace{\mathbb{P}(Y\leq s)}_{B'}+\underbrace{\mathbb{E}(Y|Y> s)}_{C'}\cdot\underbrace{\mathbb{P}(Y> s)}_{B'}

http://freakonometrics.hypotheses.org/files/2013/02/ecret-AB2C2-v2.gif

In the first part, we've seen that the distribution considered has an impact on the prediction, and in the second, we've seen that the definition of large claims (and how to deal with them) also has an impact. So clearly, actuaries have some leverage when working on ratemaking…

Introduction to generalized linear models

I am a bit ahead of schedule in the course, so I will put online the slides for next week (normally), where we will introduce the class of generalized linear models. The slides are online here.

I did not include a section on Generalized Additive Models; we will make do with the section on smoothing mentioned at the end of the slides on frequency modeling. To legitimize smoothing methods (on the age of the insured, in particular), let me point to a graph produced several years ago by a consulting firm, which noted that the shape of the smoothing function linking age to frequency is identical across countries,

http://freakonometrics.hypotheses.org/files/2013/02/assurance4.jpg

But I think I will write a dedicated post on smoothing, in the context of ratemaking in property and casualty insurance.

ACT2121, elements of a solution

Since most of the computations can be done without (too) complex calculations on a calculator, I will come back to one exercise (exercise 11 of the second problem set) to propose a solution.

Little Nestor collects baseball cards found in packs of chewing gum. There are 20 different cards in total (randomly distributed, one per pack). How many packs of gum should Nestor expect to buy in order to obtain the complete collection?

The right strategy was to order the cards by their first appearance, and to model the number of packs bought between two consecutive first appearances.

If N denotes the total number of packs to buy, let M_i denote the number of packs bought between the first appearance of the (i-1)th new card and that of the i-th one (with the convention that M_1 equals 1, i.e. the first pack gives us our first card). Note that
M_i\sim\mathcal{G}\left(\frac{20-(i-1)}{20}\right)
(a geometric distribution), and from there the computations are simple, since

\mathbb{E}(N)=\mathbb{E}(M_1+M_2+\cdots+M_{20})

that is (by linearity of the expectation)

\mathbb{E}(N)=\mathbb{E}(M_1)+\mathbb{E}(M_2)+\cdots+\mathbb{E}(M_{20})

i.e.

\mathbb{E}(N)=1+\frac{20}{19}+\frac{20}{18}+\frac{20}{17}+\cdots+\frac{20}{2}+\frac{20}{1}

or

\mathbb{E}(N)=20\left(\frac{1}{20}+\frac{1}{19}+\frac{1}{18}+\frac{1}{17}+\cdots+\frac{1}{2}+1\right)

The correct answer is then (one has to sum the twenty terms)

> sum(20/(1:20))
[1] 71.95479

But this twenty-term sum is not that easy to compute by hand, so, in class, I had suggested using

1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\cdots+\frac{1}{n}\sim\ln(n)

> log(20)*20
[1] 59.91465

which differs from the numerical result because the Euler-Mascheroni constant is missing,

\gamma=\lim_{n\rightarrow\infty}\left(1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\cdots+\frac{1}{n}-\ln(n)\right)

i.e.

1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\cdots+\frac{1}{n}\sim\ln(n)+\gamma

Numerically, taking \gamma\approx 0.57721 (cf. Google), we get

> (log(20)+0.57721)*20
[1] 71.45885
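As a quick sanity check (a sketch, output not reproduced), the expectation can also be approximated by simulating the collection process,

> simN = function(){
+ seen = rep(FALSE,20)
+ n = 0
+ while(!all(seen)){
+ seen[sample(20,1)] = TRUE
+ n = n+1
+ }
+ n}
> mean(replicate(1e4,simN()))   # should be close to 71.95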