Exposure with binomial responses

Last week, we saw how to take the exposure into account when computing nonparametric estimators of several quantities (empirical means, and empirical variances). Let us now see what can be done if we want to model a binomial response. The model here is the following:

  • the number of claims https://latex.codecogs.com/gif.latex?N_i on the period https://latex.codecogs.com/gif.latex?[0,1] is unobserved
  • the number of claims https://latex.codecogs.com/gif.latex?Y_i on https://latex.codecogs.com/gif.latex?[0,E_i] is observed (as well as https://latex.codecogs.com/gif.latex?E_i)

which can be visualized below

http://f.hypotheses.org/wp-content/blogs.dir/253/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-09.30.00.png

Consider the case where the variable of interest is not the number of claims, but simply the indicator of the occurrence of a claim. Then we wish to model the event https://latex.codecogs.com/gif.latex?\{N=0\} versus https://latex.codecogs.com/gif.latex?\{N%3E0\}, interpreted as non-occurrence versus occurrence. The problem is that we can only observe https://latex.codecogs.com/gif.latex?\{Y=0\} versus https://latex.codecogs.com/gif.latex?\{Y%3E0\}, and having the inclusion https://latex.codecogs.com/gif.latex?\{N=0\}\subset\{Y=0\} is not enough to derive a model. Actually, with a Poisson process model, we can easily get that

https://latex.codecogs.com/gif.latex?\mathbb{P}(Y=0)%20=%20\mathbb{P}(N=0)^E

In words, it means that the probability of not having a claim in the first six months of the year is the square root of the probability of not having a claim over the whole year. Which makes sense. Assume that the probability of not having a claim can be explained by some covariates, denoted https://latex.codecogs.com/gif.latex?\boldsymbol{X}, through some link function (using the GLM terminology),

https://latex.codecogs.com/gif.latex?\mathbb{P}(N=0|\boldsymbol{X})=h(\boldsymbol{X}^{\text{\sffamily%20T}}\boldsymbol{\beta})

Now, since we do observe https://latex.codecogs.com/gif.latex?Y – and not https://latex.codecogs.com/gif.latex?N – we have

https://latex.codecogs.com/gif.latex?\mathbb{P}(Y=0|\boldsymbol{X},E)=h(\boldsymbol{X}^{\text{\sffamily%20T}}\boldsymbol{\beta})^E
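This identity can be checked quickly by simulation – a minimal sketch, with arbitrary values of https://latex.codecogs.com/gif.latex?\lambda and https://latex.codecogs.com/gif.latex?E, drawing the counts from a Poisson process,

> lambda=.1; E=.5; n=1e6
> N=rpois(n,lambda)      # (unobserved) yearly counts, over [0,1]
> Y=rpois(n,lambda*E)    # observed counts, over [0,E]
> mean(Y==0)             # empirical P(Y=0)...
> mean(N==0)^E           # ...should be close to P(N=0)^E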

The dataset we will use is always the same

> sinistre=read.table("http://freakonometrics.free.fr/sinistreACT2040.txt",
+ header=TRUE,sep=";")
> sinistres=sinistre[sinistre$garantie=="1RC",]
> sinistres=sinistres[sinistres$cout>0,]
> contrat=read.table("http://freakonometrics.free.fr/contractACT2040.txt",
+ header=TRUE,sep=";")
> T=table(sinistres$nocontrat)
> T1=as.numeric(names(T))
> T2=as.numeric(T)
> nombre1 = data.frame(nocontrat=T1,nbre=T2)
> I = contrat$nocontrat%in%T1
> T1= contrat$nocontrat[I==FALSE]
> nombre2 = data.frame(nocontrat=T1,nbre=0)
> nombre=rbind(nombre1,nombre2)
> sinistres = merge(contrat,nombre)
> sinistres$nonsin = (sinistres$nbre==0)

The first model we can consider is based on the standard logistic approach, i.e.

https://latex.codecogs.com/gif.latex?\mathbb{P}(Y=0|\boldsymbol{X},E)=\left(\frac{\exp(\boldsymbol{X}^{\text{\sffamily%20T}}\boldsymbol{\beta})}{1+\exp(\boldsymbol{X}^{\text{\sffamily%20T}}\boldsymbol{\beta})}\right)^E

That’s nice, but difficult to handle with standard functions. Nevertheless, it is always possible to compute numerically the maximum likelihood estimator of https://latex.codecogs.com/gif.latex?\boldsymbol{\beta} given https://latex.codecogs.com/gif.latex?(Y_i,\boldsymbol{X}_i,E_i).

> Y=sinistres$nonsin
> X=cbind(1,sinistres$ageconducteur)
> E=sinistres$exposition
> logL = function(beta){
+ 	pi=(exp(X%*%beta)/(1+exp(X%*%beta)))^E
+ 	-sum(log(dbinom(Y,size=1,prob=pi)))
+ }
> optim(fn=logL,par=c(-0.0001,-.001),
+ method="BFGS")
$par
[1] 2.14420560 0.01040707
$value
[1] 7604.073
$counts
function gradient 
      42       10 
$convergence
[1] 0
$message
NULL
> parametres=optim(fn=logL,par=c(-0.0001,-.001),
+ method="BFGS")$par
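As a side remark, approximate standard errors could be obtained from the numerical Hessian returned by optim – a sketch (recall that logL is the negative log-likelihood, so the inverse Hessian approximates the variance matrix of the estimator):

> ml=optim(fn=logL,par=c(-0.0001,-.001),
+ method="BFGS",hessian=TRUE)
> sqrt(diag(solve(ml$hessian)))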

Now, let us look at alternatives, based on standard regression models. For instance, a binomial-log model. Because the exposure appears as a power of the annual probability, everything would be fine if https://latex.codecogs.com/gif.latex?h were the exponential function (or https://latex.codecogs.com/gif.latex?h^{-1} the log link function), since then

https://latex.codecogs.com/gif.latex?\mathbb{P}(Y=0|\boldsymbol{X},E)=\exp(E\cdot\boldsymbol{X}^{\text{\sffamily%20T}}\boldsymbol{\beta})

Note that the exposure multiplies the whole linear predictor here, so it does not simply enter as an additive offset. And indeed, if we try to code it, it quickly becomes problematic,

> reg=glm(nonsin~ageconducteur+offset(exposition),
+ data=sinistres,family=binomial(link="log"))
Error: no valid set of coefficients has been found: please supply starting values

I tried (almost) everything I could, but I could not get rid of that error message,

> startglm=c(0,-.001)
> names(startglm)=c("(Intercept)","ageconducteur")
> etaglm=rep(-.01,nrow(sinistres))
> etaglm[sinistres$nonsin==0]=-10
> muglm=exp(etaglm)
> reg=glm(nonsin~ageconducteur+offset(exposition),
+ data=sinistres,family=binomial(link="log"),
+ control = glm.control(epsilon=1e-5,trace=TRUE,maxit=50),
+ start=startglm,
+ etastart=etaglm,mustart=muglm)
Deviance = NaN Iterations - 1 
Error: no valid set of coefficients has been found: please supply starting values

So I decided to give up. Almost. Actually, the problem comes from the fact that https://latex.codecogs.com/gif.latex?\mathbb{P}(Y=0) is close to 1. I guess everything would be nicer if we could work with probabilities close to 0. Which is possible, since

https://latex.codecogs.com/gif.latex?\mathbb{P}(Y%3E0)=1-\mathbb{P}(Y=0)%20=%201-[1-\mathbb{P}(N%3E0)]^E

where https://latex.codecogs.com/gif.latex?\mathbb{P}(N%3E0) is close to 0. So we can use Taylor’s expansion,

https://latex.codecogs.com/gif.latex?\mathbb{P}(Y%3E0)\approx%201-[1-E\cdot%20\mathbb{P}(N%3E0)]=E\cdot%20\mathbb{P}(N%3E0)

Here, the exposure no longer appears as a power of the probability, but multiplicatively. Of course, there are higher order terms. But let us forget them (so far). If – one more time – we consider a log link function, then we can incorporate the exposure, or to be more specific, the logarithm of the exposure.

> regopp=glm((1-nonsin)~ageconducteur+offset(log(exposition)),
+ data=sinistres,family=binomial(link="log"))

which now works perfectly.
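From that fit, the implied annual probability of not having a claim is (approximately) https://latex.codecogs.com/gif.latex?1-\exp(\beta_0+\beta_1%20x), which can be recovered as follows – a quick sketch, the age grid being arbitrary,

> age=18:100
> beta=coefficients(regopp)
> p0=1-exp(beta[1]+beta[2]*age)   # implied annual P(N=0), under the Taylor approximation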

Now, for a final model, perhaps we should get back to our Poisson regression, since it gives us a model for the whole distribution of the number of claims, https://latex.codecogs.com/gif.latex?\mathbb{P}(Y=\cdot).

> regpois=glm(nbre~ageconducteur+offset(log(exposition)),
+ data=sinistres,family=poisson(link="log"))

We can now compare those three models. Perhaps we should also include the prediction without any explanatory variable. For the second model (which does run without any explanatory variable), we use

>  regreff=glm((1-nonsin)~1+offset(log(exposition)),
+ data=sinistres,family=binomial(link="log"))

so that the prediction is here

> exp(coefficients(regreff))
(Intercept) 
 0.06776376

This value is comparable with the logistic regression,

> logL2 = function(beta){
+ 	pi=(exp(beta)/(1+exp(beta)))^E
+ 	-sum(log(dbinom(Y,size=1,prob=pi)))}
> param=optim(fn=logL2,par=.01,method="BFGS")$par
> 1-exp(param)/(1+exp(param))
[1] 0.06747777

But it is quite different from the Poisson model,

> exp(coefficients(glm(nbre~1+offset(log(exposition)),
+ data=sinistres,family=poisson(link="log"))))
(Intercept) 
 0.07279295

Let us produce a graph, to compare those models,

> age=18:100
> yml1=exp(parametres[1]+parametres[2]*age)/(1+exp(parametres[1]+parametres[2]*age))
> plot(age,1-yml1,type="l",col="purple")
> yp=predict(regpois,newdata=data.frame(ageconducteur=age,
+ exposition=1),type="response")
> yp1=1-exp(-yp)
> ydl=predict(regopp,newdata=data.frame(ageconducteur=age,
+ exposition=1),type="response")
> plot(age,ydl,type="l",col="red")
> lines(age,yp1,type="l",col="blue")
> lines(age,1-yml1,type="l",col="purple")
> abline(h=exp(coefficients(regreff)),lty=2)

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-08-a%CC%80-14.55.591.png

Observe here that the three models are quite different. Actually, with those models, it is possible to run more complex regressions, e.g. with splines, to visualize the impact of the age on the probability of having – or not – a car accident. If we compare the Poisson regression (still in red) and the log-binomial model with Taylor’s expansion, we get

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-08-a%CC%80-14.39.08.png

The next step is to see how to incorporate the exposure in a tree. But that’s another story…

Pills, half pills and probabilities

Yesterday, I was uploading some old posts to complete the migration (I go back to my old posts, one by one, to check picture links, reformat R code, etc.). And I re-discovered a post published almost 2 years ago, on nuns and Hell’s Angels in an airplane.

It reminded me of an old probability problem (which might be known as one of Feynman’s problems): suppose that you have a prescription to take half pills for 6 days. Unfortunately the pharmacist was a bit lazy (or just wanted to help me write a mathematical problem), and he gave you 3 (full) pills in a small box. Day 1, you take a pill, break it in two parts, eat one half, and return the other half to the box. Day 2, you draw randomly ‘something’ from the box, i.e. either half a pill, or a full pill. If it is a half one, then you eat it. If it is a full one, you break it in two, eat one half, and return the other half to the box. Etc. On Day 6, if my story was well explained, you should know that there can only be one half pill left. So far, so good. But what about Day 5? There were either two half pills, or one full pill. But what was the probability that there was a full pill in the box on Day 5?

Nice problem, isn’t it ?

The good thing is that it can be modeled as a Markovian model. Assume that we have https://latex.codecogs.com/gif.latex?n pills. After https://latex.codecogs.com/gif.latex?2n days, the box will be empty. Consider the pair https://latex.codecogs.com/gif.latex?(h,c) denoting the number of half pills, and of complete pills. https://latex.codecogs.com/gif.latex?c can take all values, from 0 to https://latex.codecogs.com/gif.latex?n, and https://latex.codecogs.com/gif.latex?h will be nonnegative, with https://latex.codecogs.com/gif.latex?h\leq%20n-c. Thus, the number of states – possible pairs from Day 1 till Day https://latex.codecogs.com/gif.latex?2n – will be https://latex.codecogs.com/gif.latex?1+2+\cdots+(n+1), i.e. https://latex.codecogs.com/gif.latex?(n+1)(n+2)/2. More precisely, define those states in a dataframe,

> n=3
> COMPLETE=HALF=NULL
> for(i in n:0){
+ HALF=c(0:(n-i),HALF)
+ COMPLETE=c(rep(i,length(0:(n-i))),COMPLETE)
+ }
> k=length(COMPLETE)
> state=data.frame(s=1:k,nc=rev(COMPLETE),nh=rev(HALF))
> state
s nc nh
1   1  3  0
2   2  2  1
3   3  2  0
4   4  1  2
5   5  1  1
6   6  1  0
7   7  0  3
8   8  0  2
9   9  0  1
10 10  0  0

Now, we can play around and derive the transition matrix of the Markov chain.

> attach(state)
> P=matrix(0,k,k)
> for(i in 1:k){
+ C=state$nc[i]
+ H=state$nh[i]
+ if((C>0)&(H>0)){
+ P[i,state[(nc==C-1)&(nh==H+1),"s"]]= C/(C+H)
+ P[i,state[(nc==C)&(nh==H-1),"s"]]= H/(C+H)}
+ if((C>0)&(H==0)){
+ P[i,state[(nc==C-1)&(nh==H+1),"s"]]=1}
+ if((C==0)&(H>0)){
+ P[i,state[(nc==C)&(nh==H-1),"s"]]=1}
+ if((C==0)&(H==0)){
+ P[i,state[(nc==C)&(nh==H),"s"]]=1}
+ }

We do have a transition matrix (or a probability matrix) since all elements are nonnegative, and each row sums to 1,

> apply(P,1,sum)
[1] 1 1 1 1 1 1 1 1 1 1

Here, the transition matrix is the following

> P
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,]    0    1 0.00 0.00 0.00  0.0 0.00  0.0    0     0
[2,]    0    0 0.33 0.66 0.00  0.0 0.00  0.0    0     0
[3,]    0    0 0.00 0.00 1.00  0.0 0.00  0.0    0     0
[4,]    0    0 0.00 0.00 0.66  0.0 0.33  0.0    0     0
[5,]    0    0 0.00 0.00 0.00  0.5 0.00  0.5    0     0
[6,]    0    0 0.00 0.00 0.00  0.0 0.00  0.0    1     0
[7,]    0    0 0.00 0.00 0.00  0.0 0.00  1.0    0     0
[8,]    0    0 0.00 0.00 0.00  0.0 0.00  0.0    1     0
[9,]    0    0 0.00 0.00 0.00  0.0 0.00  0.0    0     1
[10,]   0    0 0.00 0.00 0.00  0.0 0.00  0.0    0     1

In order to get our probability, let us start from state 1 – i.e. https://latex.codecogs.com/gif.latex?n complete pills and no half pill – with probability 1, and let us look at the distribution at the different periods,

> dist=c(1,rep(0,k-1))
> MatDist=matrix(NA,2*n+1,k)
> MatDist[1,]=dist
> for(i in 1:(2*n)){dist=as.vector(t(dist)%*%P)
+ MatDist[i+1,]=dist
+ }

(one can check that after https://latex.codecogs.com/gif.latex?2n days, the box is empty). The probability we want is given in row https://latex.codecogs.com/gif.latex?2n-1 (the distribution at the start of Day https://latex.codecogs.com/gif.latex?2n-1, i.e. Day 5 here), and we just have to check which column corresponds to the pair https://latex.codecogs.com/gif.latex?(h,c)=(0,1), one complete pill and no half pill,

> vs=state[which(MatDist[2*n-1,]>0),]
> proba=MatDist[2*n-1,vs[vs$nc==1,"s"]]
> proba
[1] 0.3888889

Here the probability of having a full pill in the box on Day 5 is 38.89%.
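That value can also be checked by brute force, simulating the box day after day – a small Monte Carlo sketch (the number of simulations below is arbitrary),

> simubox=function(n=3){
+ box=rep(1,n)                 # 1 = full pill, 0.5 = half pill
+ for(day in 1:(2*n-2)){       # consume until the start of Day 2n-1 (Day 5 here)
+ i=sample(length(box),1)
+ if(box[i]==1){box[i]=0.5} else {box=box[-i]}
+ }
+ any(box==1)                  # is a full pill still in the box ?
+ }
> mean(replicate(1e5,simubox(3)))   # should be close to the 0.3889 found above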

Actually, it is possible to study the evolution of this probability as a function of https://latex.codecogs.com/gif.latex?n,

> computeproba=function(n=3){
+ COMPLETE=HALF=NULL
+ for(i in n:0){
+ HALF=c(0:(n-i),HALF)
+ COMPLETE=c(rep(i,length(0:(n-i))),COMPLETE)
+ }
+ k=length(COMPLETE)
+ state=data.frame(s=1:k,nc=rev(COMPLETE),nh=rev(HALF))
+ P=matrix(0,k,k)
+ for(i in 1:k){
+ C=state$nc[i]
+ H=state$nh[i]
+ if((C>0)&(H>0)){
+ P[i,state[(state$nc==C-1)&(state$nh==H+1),"s"]]= C/(C+H)
+ P[i,state[(state$nc==C)&(state$nh==H-1),"s"]]= H/(C+H)}
+ if((C>0)&(H==0)){
+ P[i,state[(state$nc==C-1)&(state$nh==H+1),"s"]]=1}
+ if((C==0)&(H>0)){
+ P[i,state[(state$nc==C)&(state$nh==H-1),"s"]]=1}
+ if((C==0)&(H==0)){
+ P[i,state[(state$nc==C)&(state$nh==H),"s"]]=1}
+ }
+ dist=c(1,rep(0,k-1))
+ MatDist=matrix(NA,2*n+1,k)
+ MatDist[1,]=dist
+ for(i in 1:(2*n)){dist=as.vector(t(dist)%*%P)
+ MatDist[i+1,]=dist
+ }
+ vs=state[which(MatDist[2*n-1,]>0),]
+ proba=MatDist[2*n-1,vs[vs$nc==1,"s"]]
+ return(proba)
+ }

If we plot the probability as a function of https://latex.codecogs.com/gif.latex?n, we get

> P=Vectorize(computeproba)(2:40)
> plot(2:40,P,ylim=c(0,.5))

One can observe that the probability is decreasing. But slowly, extremely slowly. With a log scale on the y-axis, we have

> plot(2:40,P,log="y")

If we look at larger values of https://latex.codecogs.com/gif.latex?n, we can get

> computeproba(100)
[1] 0.14218

I do not know if this limit goes to 0 as https://latex.codecogs.com/gif.latex?n goes to infinity. Actually, since we do have to compute a matrix with https://latex.codecogs.com/gif.latex?[(n+1)(n+2)/2]^2 entries, i.e. roughly https://latex.codecogs.com/gif.latex?n^4/4, https://latex.codecogs.com/gif.latex?n cannot be that large… Too bad. If anyone knows how this probability behaves as a function of https://latex.codecogs.com/gif.latex?n, when https://latex.codecogs.com/gif.latex?n is large, I’d be glad to know…
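To push https://latex.codecogs.com/gif.latex?n a bit further, note that the full transition matrix is not really needed: we can simply push the distribution of the pair (complete pills, half pills) forward, day after day, which only requires storing an https://latex.codecogs.com/gif.latex?(n+1)\times(n+1) array – a rough sketch, which gives back the value obtained above for https://latex.codecogs.com/gif.latex?n=3,

> computeproba2=function(n){
+ p=matrix(0,n+1,n+1)   # p[C+1,H+1] = P(C complete pills and H half pills in the box)
+ p[n+1,1]=1            # Day 1: n complete pills, no half pill
+ for(day in 1:(2*n-2)){
+ q=matrix(0,n+1,n+1)
+ for(C in 0:n){ for(H in 0:(n-C)){
+ if(p[C+1,H+1]>0){
+ if(C+H>0){
+ if(C>0) q[C,H+2]=q[C,H+2]+p[C+1,H+1]*C/(C+H)
+ if(H>0) q[C+1,H]=q[C+1,H]+p[C+1,H+1]*H/(C+H)
+ } else q[1,1]=q[1,1]+p[C+1,H+1]
+ }}}
+ p=q
+ }
+ sum(p[2,])            # P(one complete pill remains at the start of Day 2n-1)
+ }
> computeproba2(3)
[1] 0.3888889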

Crash course on R for financial and actuarial econometrics

Next Friday, I will give in Montréal a crash course entitled Econometric Modeling in Finance and Insurance with the R Language. Since IFM2 wanted this course to be an opportunity to discover R, the first part of the course will be on the R language. Slides can be downloaded from here.

(since the course is still scheduled, all comments and remarks are welcome)

Natura non facit saltus

(see John Wilkins’ article on the – interesting – history of that phrase http://scienceblogs.com/evolvingthoughts/…). We will see, this week in class, several smoothing techniques, for insurance ratemaking. As a starting point, assume that we do not want to use segmentation techniques: everyone will pay exactly the same price.

  • no segmentation of the premium

And that price should be related to the pure premium, which is proportional to the frequency (or the annualized frequency, as discussed previously), since

https://latex.codecogs.com/gif.latex?\mathbb{E}_{\mathbb{P}}\left(\sum_{i=1}^N%20Y_i\right)=\mathbb{E}_{\mathbb{P}}(N)%20\cdot%20\mathbb{E}_{\mathbb{P}}(Y_i)

The probability measure is mentioned here just to recall that we can use any measure. Even https://latex.codecogs.com/gif.latex?\mathbb{P}_{\boldsymbol{X}} (based on some covariates). Without any covariate, the expected frequency should be

> regglm0=glm(nbre~1+offset(log(exposition)),data=sinistres,family=poisson)
> summary(regglm0)

Call:
glm(formula = nbre ~ 1 + offset(log(exposition)), family = poisson, 
    data = sinistres)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-0.5033  -0.3719  -0.2588  -0.1376  13.2700  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept)  -2.6201     0.0228  -114.9   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 12680  on 49999  degrees of freedom
Residual deviance: 12680  on 49999  degrees of freedom
AIC: 16353

Number of Fisher Scoring iterations: 6
> exp(coefficients(regglm0))
(Intercept) 
 0.07279295

Thus, if we do not want to take into account potential heterogeneity, we should assume that https://latex.codecogs.com/gif.latex?N\sim\mathcal{P}(\lambda) where https://latex.codecogs.com/gif.latex?\lambda is close to 7.28%. Yes, as mentioned in class, it is rather common to see https://latex.codecogs.com/gif.latex?\lambda as a percentage, i.e. a probability, since

https://latex.codecogs.com/gif.latex?\mathbb{P}(N\neq%200)=1-e^{-\lambda}\approx%20\lambda

i.e. https://latex.codecogs.com/gif.latex?\lambda can be interpreted (approximately) as the probability of having a claim during the year (see also the law of small numbers). Let us visualize this as a function of the age of the driver,

> a=18:100
> yp=predict(regglm0,newdata=data.frame(ageconducteur=a,exposition=1),type="response",se.fit=TRUE)
> yp0=yp$fit
> yp1=yp$fit+2*yp$se.fit
> yp2=yp$fit-2*yp$se.fit
> plot(a,yp0,type="l",ylim=c(.03,.12))
> abline(v=40,col="grey")
> lines(a,yp1,lty=2)
> lines(a,yp2,lty=2)
> k=23
> points(a[k],yp0[k],pch=3,lwd=3,col="red")
> segments(a[k],yp1[k],a[k],yp2[k],col="red",lwd=3)

http://freakonometrics.hypotheses.org/files/2013/02/reg-poisson-constante.png

We do predict the same frequency for all drivers, e.g. for a driver aged 40,

> cat("Frequency =",yp0[k]," confidence interval",yp1[k],yp2[k])
Frequency = 0.07279295  confidence interval 0.07611196 0.06947393

Let us now consider the case where we try to take into account heterogeneity, e.g. by age,

  • The (standard) Poisson regression

The idea of the (log-)Poisson regression is to assume that instead of having https://latex.codecogs.com/gif.latex?N\sim\mathcal{P}(\lambda), we should have https://latex.codecogs.com/gif.latex?N|\boldsymbol{X}\sim\mathcal{P}(\lambda_{\boldsymbol{X}}), where

https://latex.codecogs.com/gif.latex?\lambda_{\boldsymbol{X}}=\exp(\beta_0+\beta_1%20\boldsymbol{X}_1+\cdots+\beta_k\boldsymbol{X}_k)

in a very general setting. Here, let us consider only one explanatory variable, i.e.

https://latex.codecogs.com/gif.latex?\lambda_{X}=\exp(\beta_0+\beta_1%20{X})

Here, fitting that Poisson regression of the claim counts on the age of the driver,

> regglm1=glm(nbre~ageconducteur+offset(log(exposition)),
+ data=sinistres,family=poisson)

we have

> yp=predict(regglm1,newdata=data.frame(ageconducteur=a,exposition=1),
+ type="response",se.fit=TRUE)
> yp0=yp$fit
> yp1=yp$fit+2*yp$se.fit
> yp2=yp$fit-2*yp$se.fit
> plot(a,yp0,type="l",ylim=c(.03,.12))
> abline(v=40,col="grey")
> lines(a,yp1,lty=2)
> lines(a,yp2,lty=2)
> points(a[k],yp0[k],pch=3,lwd=3,col="red")
> segments(a[k],yp1[k],a[k],yp2[k],col="red",lwd=3)

http://freakonometrics.hypotheses.org/files/2013/02/reg-poisson-exp-standard.png

i.e. the prediction for the annualized claim frequency for our 40 year old driver is now 7.74% (which is slightly higher than what we had before, 7.28%)

> cat("Frequency =",yp0[k]," confidence interval",yp1[k],yp2[k])
Frequency = 0.07740574  confidence interval 0.08117512 0.07363636

It is possible to compute not the expected frequency, but the ratio https://latex.codecogs.com/gif.latex?\mathbb{E}(N|X)/\mathbb{E}(N).

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-05-a%CC%80-13.45.43.png
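(For the record, a minimal sketch of how such a ratio curve can be produced, assuming regglm0 and regglm1 have been fitted as above, and a=18:100 as before,)

> ypa=predict(regglm1,newdata=data.frame(ageconducteur=a,exposition=1),
+ type="response")
> plot(a,ypa/exp(coefficients(regglm0)),type="l")
> abline(h=1,lty=2,col="blue")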

Above the horizontal blue line, the premium will be higher than the one obtained without segmentation, and (of course) lower below. Here, drivers younger than 44 years old will pay more, while drivers older than 44 will pay less. We have discussed, in the introduction, the necessity of segmentation. If we consider two companies, one segmenting while the other one has a flat rate, then older drivers will go to the first company (since insurance is cheaper there) while younger ones will go to the second one (again, because it is cheaper). The problem is that the second company implicitly hopes that older drivers will compensate the risk. But since they’re gone, insurance will be too cheap, and the company will lose money (if not go bankrupt). So companies have to use segmentation techniques to survive. Now, the problem is that we cannot be sure that this exponential decay of the premium is the proper way the premium should evolve as a function of the age. An alternative can be to use nonparametric techniques to visualize the true influence of the age on the claims frequency.

  • A pure nonparametric model

A first model can be to consider a premium, per age. This can be done considering the age of the driver as a factor in the regression,

> regglm2=glm(nbre~as.factor(ageconducteur)+offset(log(exposition)),
+ data=sinistres,family=poisson)
> a0=sort(unique(sinistres$ageconducteur))   # predictions only at observed ages
> yp=predict(regglm2,newdata=data.frame(ageconducteur=a0,exposition=1),
+ type="response",se.fit=TRUE)
> yp0=yp$fit
> yp1=yp$fit+2*yp$se.fit
> yp2=yp$fit-2*yp$se.fit
> plot(a0,yp0,type="l",ylim=c(.03,.12))
> abline(v=40,col="grey")

http://freakonometrics.hypotheses.org/files/2013/02/reg-poisson-factors.png

Here, the forecast for our 40 year old driver is slightly lower than the previous one, but the confidence interval is much larger (since we focus on a very small subclass of the portfolio: drivers aged exactly 40)

Frequency = 0.06686658  confidence interval 0.08750205 0.0462311

Here, the classes are too small, and the premium is too erratic: the premium will decrease by 20% from age 40 to 41, and then increase by 50% from age 41 to 42,

> diff(log(yp0[23:25]))
        24         25 
-0.2330241  0.5223478

There is no chance that the company will keep the insured with this strategy. This discontinuity of the premium is clearly an important issue here.

  • Using age classes

An alternative can be to consider age classes, from very young drivers to senior drivers.

> level1=seq(15,105,by=5)
> regglmc1=glm(nbre~cut(ageconducteur,level1)+offset(log(exposition)),
+ data=sinistres,family=poisson)
> summary(regglmc1)

Coefficients:
                                   Estimate Std. Error z value Pr(>|z|)    
(Intercept)                         -1.6036     0.1741  -9.212  < 2e-16 ***
cut(ageconducteur, level1)(20,25]   -0.4200     0.1948  -2.157   0.0310 *  
cut(ageconducteur, level1)(25,30]   -0.9378     0.1903  -4.927 8.33e-07 ***
cut(ageconducteur, level1)(30,35]   -1.0030     0.1869  -5.367 8.02e-08 ***
cut(ageconducteur, level1)(35,40]   -1.0779     0.1866  -5.776 7.65e-09 ***
cut(ageconducteur, level1)(40,45]   -1.0264     0.1858  -5.526 3.28e-08 ***
cut(ageconducteur, level1)(45,50]   -0.9978     0.1856  -5.377 7.58e-08 ***
cut(ageconducteur, level1)(50,55]   -1.0137     0.1855  -5.464 4.65e-08 ***
cut(ageconducteur, level1)(55,60]   -1.2036     0.1939  -6.207 5.40e-10 ***
cut(ageconducteur, level1)(60,65]   -1.1411     0.2008  -5.684 1.31e-08 ***
cut(ageconducteur, level1)(65,70]   -1.2114     0.2085  -5.811 6.22e-09 ***
cut(ageconducteur, level1)(70,75]   -1.3285     0.2210  -6.012 1.83e-09 ***
cut(ageconducteur, level1)(75,80]   -0.9814     0.2271  -4.321 1.55e-05 ***
cut(ageconducteur, level1)(80,85]   -1.4782     0.3371  -4.385 1.16e-05 ***
cut(ageconducteur, level1)(85,90]   -1.2120     0.5294  -2.289   0.0221 *  
cut(ageconducteur, level1)(90,95]   -0.9728     1.0150  -0.958   0.3379    
cut(ageconducteur, level1)(95,100] -11.4694   144.2817  -0.079   0.9366    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

> yp=predict(regglmc1,newdata=data.frame(ageconducteur=a,exposition=1),
+ type="response",se.fit=TRUE)
> yp0=yp$fit
> yp1=yp$fit+2*yp$se.fit
> yp2=yp$fit-2*yp$se.fit
> plot(a,yp0,ylim=c(.03,.12),type="s")
> abline(v=40,col="grey")
> lines(a,yp1,lty=2,type="s")
> lines(a,yp2,lty=2,type="s")

Here we obtain the following predictions,

http://freakonometrics.hypotheses.org/files/2013/02/reg-poisson-cut-1.png

and for our 40 year old driver, the frequency is now 6.84%.

Frequency = 0.0684573  confidence interval 0.07766717 0.05924742

But our classes were defined arbitrarily here. Perhaps we should consider other classes, to see whether the prediction is sensitive to the cut-off values,

> level2=level1-2
> regglmc2=glm(nbre~cut(ageconducteur,level2)+offset(log(exposition)),
+ data=sinistres,family=poisson)

http://freakonometrics.hypotheses.org/files/2013/02/reg-poisson-cut-2.png

which yields the following values for our 40 year old driver,

Frequency = 0.07050614  confidence interval 0.07980422 0.06120807

So here, we did not remove the discontinuity problem. An idea here can be to consider moving regions: if the goal is to predict the frequency for a 40 year old driver, perhaps the class should be (somehow) centered around 40. And center the interval around 35 for drivers aged 35. Etc.

  • Moving average

Thus, it is natural to consider some local regressions, where only drivers aged almost 40 are considered. This ‘almost’ concept is related to the bandwidth. For instance, drivers between 35 and 45 can be considered as being almost 40. In practice we can either use the subset argument, or use weights in the regression,

> value=40
> h=5
> sinistres$omega=(abs(sinistres$ageconducteur-value)<=h)*1
> regglmomega=glm(nbre~ageconducteur+offset(log(exposition)),
+ data=sinistres,family=poisson,weights=omega)

To see what’s going on, let us consider an animated plot, where the age of interest is changing,

http://freakonometrics.hypotheses.org/files/2013/02/liss-poisson-2.gif

Here, for our 40 year old driver, we get

Frequency = 0.06913391  confidence interval 0.07535564 0.06291218
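The full curve behind that animation can be obtained by looping over the target ages – a rough sketch (the age grid and the bandwidth are arbitrary, and each window should contain at least a few observations),

> ages=18:90
> h=5
> predloc=rep(NA,length(ages))
> for(j in 1:length(ages)){
+ sinistres$omega=(abs(sinistres$ageconducteur-ages[j])<=h)*1
+ regj=glm(nbre~ageconducteur+offset(log(exposition)),
+ data=sinistres,family=poisson,weights=omega)
+ predloc[j]=predict(regj,newdata=data.frame(ageconducteur=ages[j],
+ exposition=1),type="response")
+ }
> plot(ages,predloc,type="l")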

We do obtain a curve that can be interpreted as a local regression. But here, we do not take into account that 35 is not as close to 40 as 39 is. And here, 34 is assumed to be very far away from 40. Clearly, we could improve that technique: kernel functions can be considered, i.e. the closer to 40, the larger the weight.

> value=40
> h=5
> sinistres$omega=dnorm(abs(sinistres$ageconducteur-value)/h)
> regglmomega=glm(nbre~ageconducteur+offset(log(exposition)),
+ data=sinistres,family=poisson,weights=omega)

which can be plotted below

http://freakonometrics.hypotheses.org/files/2013/02/liss-poisson-1.gif

Here, our prediction for our 40 year old driver is

Frequency = 0.07040464  confidence interval 0.07981521 0.06099408

This is the idea of kernel regression techniques. But as explained in the slides, other non parametric techniques can be considered, like spline functions.

  • Smoothing with splines

In R, it is simple to use spline functions (somehow much simpler than kernel smoothers)

> library(splines)
> regglmbs=glm(nbre~bs(ageconducteur)+offset(log(exposition)),
+ data=sinistres,family=poisson)

http://freakonometrics.hypotheses.org/files/2013/02/reg-poisson-splines.png

The prediction for our 40 year old driver is now

Frequency = 0.06928169  confidence interval 0.07397124 0.06459215
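By default, bs() uses a cubic polynomial basis with no interior knot; interior knots can be specified explicitly if more flexibility is needed (the values below are arbitrary, for illustration only),

> regglmbs2=glm(nbre~bs(ageconducteur,knots=c(25,45,65))+offset(log(exposition)),
+ data=sinistres,family=poisson)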

Note that this technique is related to another class of models, the so-called Generalized Additive Models, i.e. GAMs.

> library(mgcv)
> reggam=gam(nbre~s(ageconducteur)+offset(log(exposition)),
+ data=sinistres,family=poisson)
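(As throughout this post, the ‘Frequency =’ values are obtained from predict() with standard errors; for the GAM, for instance, something along these lines, with a=18:100 and k=23 as earlier,)

> yp=predict(reggam,newdata=data.frame(ageconducteur=a,exposition=1),
+ type="response",se.fit=TRUE)
> cat("Frequency =",yp$fit[k]," confidence interval",
+ yp$fit[k]+2*yp$se.fit[k],yp$fit[k]-2*yp$se.fit[k])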

http://freakonometrics.hypotheses.org/files/2013/02/reg-poisson-gam.png

The prediction is extremely close to the one we obtained above (the main differences being observed for very old drivers)

Frequency = 0.06912683  confidence interval 0.07501663 0.06323702

  • Comparison of the different models

Somehow, one way or another, all those models are valid. So perhaps we should compare them,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-05-a%CC%80-14.50.19.png

On the graph above, we can visualize the upper and the lower bound of the prediction, for the 9 models. The horizontal line is the predicted value without taking into account heterogeneity. It is possible to consider relative values, with respect to this value,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-05-a%CC%80-14.54.56.png

Big data, statistics and computer science

“Today, software and hardware together provide far more powerful factories than most statisticians realize, factories that many of today’s most able young people find exciting and worth learning about on their own. Their interest can help us greatly, if statistics starts to make much more nearly adequate use of the computer. However, if we fail to expand our uses, their interest in computers can cost us many of our best recruits, and set us back many years.” John Tukey, The technical tools of statistics, 1965, http://jstor.org/… via http://cm.bell-labs.com/…

http://freakonometrics.hypotheses.org/files/2013/02/102646212-05-04.jpeg

Somewhere else, part 31

A nice post on statistics and codes, that I read this week

and still a lot of interesting posts

http://freakonometrics.hypotheses.org/files/2013/02/BBpocVRCMAAD7D9.jpg

along with a few posts in French, this week

A random walk ? What else ?

Consider the following time series,

What does it look like? I know, this is a stupid game, but I keep using it in my time series courses. It does look like a random walk, doesn’t it? If we use the Phillips-Perron test, yes, it does,

> PP.test(x)

	Phillips-Perron Unit Root Test

data:  x 
Dickey-Fuller = -2.2421, Truncation lag parameter = 6, p-value = 0.4758

If we look at the autocorrelation function, we do observe some persistence,

> acf(x,100)

Perhaps this persistence can be related to long range dependence, or to some fractional random walk. A natural idea could be to estimate the Hurst parameter, using for instance Beran (1992)’s estimator – based on Whittle (1956) – where we assume that the autocorrelation function satisfies

https://latex.codecogs.com/gif.latex?\rho(h)\sim%20C\cdot%20h^{2H-2}

as https://latex.codecogs.com/gif.latex?h\rightarrow\infty, for some https://latex.codecogs.com/gif.latex?H\in(1/2,1) (the so-called Hurst index). But here, we start to observe unexpected outputs,

> library(longmemo)
> (d  <- WhittleEst(x))
'WhittleEst' Whittle estimator for  fractional Gaussian noise ('fGn');	 call:
WhittleEst(x = x)
	  time series of length  n = 759.

H = 0.9899335
coefficients 'eta' =
    Estimate Std. Error z value   Pr(>|z|)
H 0.98993350 0.02468323 40.1055 < 2.22e-16
 <==> d := H - 1/2 = 0.49 (0.025)

 $ vcov       : num [1, 1] 0.000609
  ..- attr(*, "dimnames")=List of 2
  .. ..$ : chr "H"
  .. ..$ : chr "H"
 $ periodogr.x: num [1:379] 1479.3 1077.3 371.7 287.2 51.2 ...
 $ spec       : num [1:379] 62.5 31.7 21.3 16.1 12.9 ...

or more precisely some unexpected values for the Hurst parameter, which should lie in https://latex.codecogs.com/gif.latex?(0,1)

> confint(d)
      2.5 %   97.5 %
H 0.9415553 1.038312

Oops, perhaps, we did miss something, because it looks like there is extremely strong persistence on our time series,

> plot(d)

It is probably time to ask where I found that series… To be honest, I borrowed it from a great Canadian website http://climate.weatheroffice.gc.ca/climateData/. For instance, if you want the temperature we experienced a few days ago, you can use

> Y=2013
> M=1
> D=25
> url=paste(
"http://climate.weatheroffice.gc.ca/climateData/hourlydata_e.html?
timeframe=1&Prov=QC&StationID=5415&hlyRange=1953-01-01|2013-02-
01&Year=",Y,"&Month=",M,"&Day=",D,sep="")
> page=scan(url,what="character")

Yes, that series is the temperature we experienced in Montréal last month (an hourly time series). On the graph below, you can actually compare it with the temperatures experienced in Januarys over the past 60 years,

So it is not that surprising to see long range dependence models appearing (I wrote a paper on precisely that topic a few years ago). What I found puzzling is that the persistence is large, extremely large. And the problem is that I do not see how we can explain the ‘jumps’ that we do observe on that series. For instance the behavior of the series while I was in Europe, before January 20th: within 3 days, the temperature went down, from 0°C to -20°C, and up from -20°C to 0°C, and then down again, from 0°C to -20°C (a nice И if we use cyrillic letters). Or how can we explain the oscillating behavior observed the week after, where the temperature went up, from -25°C to (almost) +10°C in a few days? Within 10 days, we also observed two ‘jumps’ (or ‘crashes’ if we want to use the terminology of financial time series) with a decrease of 25 degrees in less than 24 hours! Obviously, we need to find other classes of models to replicate the kind of behavior we observe on temperatures…

Teaching, and blogging

A little introspection today, at Robin’s request (and in French, since I think I would already struggle in a language I do master). I have never been very good at this kind of exercise, and I do not think I have any legitimacy on the matter, but if Robin asks, I do not see why I should refuse. The question was (let me quote it): “I was wondering whether you had written a ‘meta’ post on your use of the blog for teaching. If not, it would be very interesting if you wrote one when you have time” (Robin mentions this point to introduce his latest post, on http://robinryder.wordpress.com). Since I have never published a post answering that question, here we go.

First of all, my blog is not a researcher’s blog (as Xi’an’s http://xianblog.wordpress.com/ or Djalil’s http://djalil.chafai.net/blog/ may be), not an opinion blog about higher education and research (like David’s http://david.monniaux.free.fr/, or Tom Roud’s http://tomroud.cafe-sciences.org/), not a teaching blog (well, I do not know whether any exist, to be honest, or else I would mention Terry Tao’s improbable blog, http://terrytao.wordpress.com/, except that what counts as teaching for him looks like cutting-edge, innovative research…), it is none of that. Or it is all of that at once (and I believe I take a certain pride in it). But the fact is that this blog (or its primitive version, https://blogperso.univ-rennes1.fr/arthur.charpentier/) was born through teaching. Perhaps I should go back a little, before drawing up an assessment after… almost five years as an “academic blogger”.

  • life as a teacher before the blog

Yes, there was a time when I taught without keeping a blog on the side. Well, almost. My first course was my time series course, theory and applications, for the masters in Actuariat and Mathématiques de la Décision, in January 2002, at Dauphine. I remember it, first because all professors – I think – remember their first course. Also because at the time I was working in the private sector, as they say, and teaching was a revelation. Finally, because that course was (I think) representative of all my other courses: I was enthusiastic, I prepared my lectures at the last minute (in that case, I was asked before the Christmas break whether I could take over a fifth-year course at short notice, right after the holidays), and I realized that I would never be able to tell everything in a course, and that I would always be terribly messy.

At the time, I typed up documents that I handed out every week, and which turned into lecture notes a few years later (they are actually still online at http://193.51.89.161/st/, and generations of students have cited them, to my great astonishment). And on my “web page”, I put links to go further, links to datasets, etc. It was the early days of the internet, and you could do whatever you wanted. I believe that at some point I had even put a pirated copy of EViews on the page, so that students could install it on their PCs. In short, from my very first course, I remember thinking that the course would not stop at the three hours of lectures, but I was missing a proper online medium for it.

I also remember, for my Markov chains course, putting on my personal page (hosted at the time on http://crest.fr/) videos that I used in class to show convergence to the limiting distribution. There too, I told myself that the pdf lecture notes were a bit outdated, that they lacked the animations one could put online to accompany a course (from a technical point of view, I am still constantly surprised that one cannot embed an animated gif in a pdf file).

  • life as a teacher discovering the blog

In 2008, I joined Rennes 1, and the IT department at the time was pushing people to create blogs, livelier than personal pages. Without being convinced, I gave the experiment a try. While I saw no interest in it for research, I remember thinking that this medium could precisely allow me to avoid the static and messy pages I used to have (see the page of my time series course, which is still online at http://193.51.89.161/st/).

My first motivation was to be able to put online, after each class, slides, code, datasets, and links to additional resources (in particular I could emphasize the computational aspects, which I cannot cover in class, for lack of time, and often because it is not the purpose of the course). In short, to keep a kind of logbook of the course at hand. And that is how I saw the blog: as a teacher’s notebook. Moreover, I have long tended to improvise my classes, going with the flow, moving from the whiteboard to R, which was constantly open to illustrate one point or another (I must admit that I borrowed that style from Bernard Ycart, without ever managing to match him).

Then I realized that it solved a problem that had annoyed me since my first course: many students do not dare speak up in class (which I understand perfectly, the point is not to blame them at all), and quite a few came to see me at the break, or after class, to ask questions. For the questions at the break (I still operate this way), I keep them in mind and answer collectively when the class resumes, on the principle that if one student dares to ask a question, five others are wondering the same thing without daring to come and see me. So the answer is of interest to everyone. For the end-of-class questions, on the other hand, I found that the blog made it possible to answer them, even when they were only loosely connected to the material seen in class (I make enough digressions in my lectures, no need to add fuel to the fire).

In short, the blog created a kind of forum: when I posted an entry, students followed up through the comments, or by email for the less brave, and the blog became a sort of forum. In retrospect, I think this was due to the incredible energy of the students I had in the Master in Statistics and Econometrics in Rennes, who allowed me to have a blog that was academic and at the same time a bit offbeat, one that did not try to take itself too seriously. I think that if I became addicted to my blog, it is because of that cohort of students, who arrived at the beginning of the master’s program just as I was taking up my position in Rennes.

With a little hindsight, I think it took a fair dose of naivety to believe that only the students (and friends) would drop by the blog. Because there is a big difference between a forum attached to a course and a blog open to the world. Yes, to keep a blog you either have to be a megalomaniac, or forget that the blog will be read by more than 60 users (which is what I called naivety). I remember it was a shock to realize that the visitors were not the 60 students enrolled in my courses but, after a few months, a few hundred thousand people I did not know.

For those who want to know which category I belong to, between the megalomaniac and the naive (since I have lost my naivety), I would add a third one: it is enough to be an old cave bear, who simply keeps a logbook of his courses, while leaving it in open access, hoping it will help someone, someday.

  • a small assessment with more hindsight

What about it, five years later? I admit I do not know… I do not know whether the blog has become too serious, too institutional (I do avoid entering it in rankings, to avoid giving it more importance than it deserves). I have also changed the type of students I teach: in France, I never taught at the bachelor level, for instance, whereas in Montréal I mostly teach undergraduate courses (but that is closely related to the fundamental differences between actuarial science in North America and in Europe; that would deserve a whole post of its own). But I have to admit that it is not the same anymore…

The enthusiasm I had felt in Rennes has disappeared. At the end of the fall term, some students reproached me for blogging, and for not using Moodle (“like all the professors”), and I admit I did not understand what that meant.

The last student who posted comments on the blog – I will end up offering to supervise his master’s internship… One would think students are intimidated by the blog. That said, here again I would not blame anyone, since I avoid as much as possible posting comments on other people’s blogs, essentially because I have no control over them. I want to be able to delete my messages if I feel like it. But I am surprised that the only people who contact me by email (and there are many) are not my students…

But I digress. In fact, I wanted to say that with the migration of my blog, I have been asking myself many questions about it (especially since Martin transferred the posts by setting them all to pending: I have to reopen each post to ask for it to be put back online). Several times, I wondered whether I should put old course posts back online: the course is over, the post (perhaps) no longer interests anyone. I must admit that I did put several old posts back online, for a very simple reason: I may be getting older, but the students are not, and they ask me the same questions as the students who took the course one or two terms before. For the course I am teaching this term, there are the posts I put online regularly, but in addition I have decided to tag the past posts with the same code. In fact – a small technical remark – I used to label my posts with a course-year code. I realize the year is useless. It is better to have an accumulation of posts for a given course.

That said, it seems to disturb the students. Here again, I understand them: how are you supposed to find your way around, when a professor asks you to dig – my famous “go look on the blog, I wrote a post about it in the spring of 2009” – through a blog with more than 1000 posts? It can be intimidating. Perhaps as much as the first time I set foot in the mathematics library at Jussieu (at the time) in Paris: you do not know how to find your way around. It takes time, and in the long run you end up feeling at home there, and finding what you need. And I think it is the same with the blog. You have to understand how to navigate, between tags, categories, and pages.

I realize that this post has turned out longer than expected. And that I do not think I have managed to conclude. I do not know whether blogging is useful to my students (if I do not blog for my students, the emails I receive lead me to believe that I blog for others). What is clear is that, personally, blogging helps me prepare a course, it helps me get better organized (a point Simon also made in his post on the subject, http://blogs.lse.ac.uk/impactofsocialsciences/…), and it leaves me room for what I do not have time to do at the board (because yes, I used to want to say everything at the board, even if it meant saturating the students). My blog is the electronic version of my notebooks, where I write everything down. I remember everything I have typed over the past 5 years (I remember being impressed that Andrew Gelman could recall posts written several months earlier on his blog http://andrewgelman.com/, but I must admit, in hindsight, that indeed, one does remember), and I know I can find things there when I need them, for a course, for example. Now, it is true that it is a mess. But those who find my blog untidy should come into my office one day; they would change their minds!

Claim frequency, and overdispersion

I keep putting online the slides that will be used as support for the ACT2040 course. In this last part on claim frequency modeling, we will talk about overdispersion. The slides are online here,

Otherwise, among the additional references, I can suggest several documents written by practitioners, such as Meyers (2009) http://casact.org/education/…, Isamail & Jemain (2009) http://casact.org/pubs/… or the very interesting (and critical) document by Schmid (2011) http://casact.org/education/…. The most motivated readers can also skim sections 2.3 and 2.4 of the book by Denuit et al. (2007), online at http://books.google.ca/…

Overdispersion with different exposures

In actuarial science, and insurance ratemaking, taking into account the exposure can be a nightmare (in datasets, some clients have been there for a few years – we call that exposure – while others have been there for a few months, or weeks). Somehow, simple results become more complicated to compute just because we have to take into account the fact that exposure is a heterogeneous variable.

The exposure in insurance ratemaking can be seen as a problem of censored data (in my dataset, the exposure is always smaller than 1 since observations are contracts, not policyholders),

  • the number of claims https://latex.codecogs.com/gif.latex?N_i on the period https://latex.codecogs.com/gif.latex?[0,1] is unobserved
  • the number of claims https://latex.codecogs.com/gif.latex?Y_i on https://latex.codecogs.com/gif.latex?[0,E_i] is observed (as well as https://latex.codecogs.com/gif.latex?E_i)

And as always, the variable of interest is the unobserved one, because we have to price insurance contract with a cover period of one (full) year. So we have to model the yearly frequency of insurance claims.

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-09.30.00.png

In our dataset, we have https://latex.codecogs.com/gif.latex?(Y_i,E_i)‘s – or more generally also some additional covariates, https://latex.codecogs.com/gif.latex?(Y_i,E_i,\boldsymbol{X}_i)‘s. For ratemaking, we need to estimate https://latex.codecogs.com/gif.latex?\mathbb{E}(N\vert\boldsymbol{X}=\boldsymbol{x}) and perhaps also https://latex.codecogs.com/gif.latex?\text{Var}(N|\boldsymbol{X}=\boldsymbol{x}) (for instance to test if the Poisson assumption is valid, or not). To estimate the expected value, a natural estimate for https://latex.codecogs.com/gif.latex?\mathbb{E}(N) (forget about covariates as a start) is

https://latex.codecogs.com/gif.latex?m_N=\frac{\sum_{i=1}^n%20Y_i}{\sum_{i=1}^n%20E_i}

which is also the weighted average of the annualized individual counts,

https://latex.codecogs.com/gif.latex?m_N=\sum_{i=1}^n%20\frac{%20E_i}{\sum_{i=1}^n%20E_i}%20\cdot%20\frac{Y_i}{E_i}

i.e. the ratio of the total number of claims to the total exposure-to-risk. This estimate appears for instance if we consider a Poisson process, so that https://latex.codecogs.com/gif.latex?N\sim\mathcal{P}(\lambda) while https://latex.codecogs.com/gif.latex?Y\sim\mathcal{P}(\lambda%20\cdot%20E). Then, the likelihood is

https://latex.codecogs.com/gif.latex?\mathcal{L}(\lambda,\boldsymbol{Y},\boldsymbol{E})=\prod_{i=1}^n%20\frac{e^{-\lambda%20E_i}%20[\lambda%20E_i]^{Y_i}}{Y_i!}

i.e.

https://latex.codecogs.com/gif.latex?\log%20\mathcal{L}(\lambda,\boldsymbol{Y},\boldsymbol{E})%20=%20-\lambda%20\sum_{i=1}^n%20E_i%20+\sum_{i=1}^n%20Y_i%20\log[\lambda%20E_i]%20-%20\log\left(\prod_{i=1}^n%20Y_i!\right)

The first order condition is here

https://latex.codecogs.com/gif.latex?\frac{\partial}{\partial%20\lambda}\log%20\mathcal{L}(\lambda,\boldsymbol{Y},\boldsymbol{E})%20=%20%20-%20\sum_{i=1}^n%20E_i%20+\frac{1}{\lambda}\sum_{i=1}^n%20Y_i%20=0

which is satisfied if

https://latex.codecogs.com/gif.latex?\widehat{\lambda}=\frac{\sum_{i=1}^n%20Y_i}{\sum_{i=1}^n%20E_i}

So, we do have an estimator for the expected value, and a natural estimator for https://latex.codecogs.com/gif.latex?\mathbb{E}(N\vert\boldsymbol{X}=\boldsymbol{x}) is then (if we consider categorical covariates)
https://latex.codecogs.com/gif.latex?m_{N|\boldsymbol{x}}%20=\frac{\sum_{i,\boldsymbol{X}_i=\boldsymbol{x}}%20Y_i}{\sum_%20{i,\boldsymbol{X}_i=\boldsymbol{x}}%20E_i}
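(A quick sanity check of that estimator on simulated data – the true annual frequency and the exposures below are arbitrary:)

> set.seed(1)
> Esim=runif(1e5)              # arbitrary exposures in (0,1)
> Ysim=rpois(1e5,0.07*Esim)    # simulated claim counts, true annual frequency 7%
> sum(Ysim)/sum(Esim)          # should be close to 0.07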

Now, we need an estimate for the variance, or more precisely the conditional variance. Assume (as a starting point) that all policyholders have the same exposure https://latex.codecogs.com/gif.latex?E. For instance, if https://latex.codecogs.com/gif.latex?E is one half, insured are observed only during the first six months. Then https://latex.codecogs.com/gif.latex?N=Y+Y%27 with https://latex.codecogs.com/gif.latex?Y\overset{\mathcal%20L}{=}Y%27 (https://latex.codecogs.com/gif.latex?Y is the number of claims during the first six months, while https://latex.codecogs.com/gif.latex?Y%27 is the number of claims during the last six months), i.e. https://latex.codecogs.com/gif.latex?\text{Var}(N)=\text{Var}(Y)+%20\text{Var}(Y%27) if we assume independent increments. I.e.
https://latex.codecogs.com/gif.latex?\text{Var}(N)=2\text{Var}(Y), or equivalently https://latex.codecogs.com/gif.latex?E%20\cdot\text{Var}(N)=\text{Var}(Y). More generally, it is reasonable to assume that

https://latex.codecogs.com/gif.latex?\text{Var}(Y)=E\cdot%20\text{Var}(N)
for all values of https://latex.codecogs.com/gif.latex?E. And then
https://latex.codecogs.com/gif.latex?\text{Var}\left(\frac{Y}{E}\right)=\frac{1}{E}\cdot%20\text{Var}(N)
Thus, it seems legitimate to assume that the empirical variance of https://latex.codecogs.com/gif.latex?N can be written
https://latex.codecogs.com/gif.latex?S_N^2=E\cdot%20S_{Y/E}^2
Since the average of https://latex.codecogs.com/gif.latex?Y_i/E is https://latex.codecogs.com/gif.latex?\overline{N}=m_N, then
https://latex.codecogs.com/gif.latex?S_N^2=E\cdot%20\frac{1}{n}\sum_{i=1}^n%20\left[\frac{Y_i}{E}-\overline{N}\right]^2%20=%20\frac{1}{n}\sum_{i=1}^n%20E\left[\frac{Y_i}{E}-\overline{N}\right]^2
or equivalently
https://latex.codecogs.com/gif.latex?S_N^2=\frac{1}{n}\sum_{i=1}^n%20\frac{E}{E^2}\left[Y_i-\overline{N}\cdot%20E\right]^2%20=\frac{1}{n}\sum_{i=1}^n%20\frac{1}{E}[Y_i-\overline{N}\cdot%20E]^2
i.e.
https://latex.codecogs.com/gif.latex?S_N^2=\frac{\sum_{i=1}^n%20[Y_i-\overline{N}\cdot%20E]^2%20}{nE}
Thus, with different https://latex.codecogs.com/gif.latex?E_i‘s, it would be legitimate (I guess) to consider
https://latex.codecogs.com/gif.latex?S_N^2=\frac{\sum_{i=1}^n%20[Y_i-\overline{N}\cdot%20E_i]^2%20}{\sum_{i=1}^n%20E_i}
Thus, an estimator for https://latex.codecogs.com/gif.latex?\text{Var}(N|\boldsymbol{X}=\boldsymbol{x}) is
https://latex.codecogs.com/gif.latex?S_{N|\boldsymbol{x}}^2=\frac{\sum_{i,\boldsymbol{X}_i=\boldsymbol{x}}%20[Y_i-\overline{N}\cdot%20E_i]^2}{\sum_{i,\boldsymbol{X}_i=\boldsymbol{x}%20}%20E_i}

This can be used to test whether the Poisson assumption is valid to model the claim frequency. Consider the following dataset,

>  sinistre=read.table("http://freakonometrics.free.fr/sinistreACT2040.txt",
+  header=TRUE,sep=";")
>  sinistres=sinistre[sinistre$garantie=="1RC",]
>  sinistres=sinistres[sinistres$cout>0,]
>  contrat=read.table("http://freakonometrics.free.fr/contractACT2040.txt",
+  header=TRUE,sep=";")
>  T=table(sinistres$nocontrat)
>  T1=as.numeric(names(T))
>  T2=as.numeric(T)
>  nombre1 = data.frame(nocontrat=T1,nbre=T2)
>  I = contrat$nocontrat%in%T1
>  T1= contrat$nocontrat[I==FALSE]
>  nombre2 = data.frame(nocontrat=T1,nbre=0)
>  nombre=rbind(nombre1,nombre2)
>  baseFREQ = merge(contrat,nombre)

Here, we do have our two variables of interest, the exposure, per contract,

>  E <- baseFREQ$exposition

and the (observed) number of claims (during that time frame)

>  Y <- baseFREQ$nbre

It is possible to compute, without covariates, the average (annualized) number of claims per contract, and the associated variance

> (mean=weighted.mean(Y/E,E))
[1] 0.07279295
> (variance=sum((Y-mean*E)^2)/sum(E)) 
[1] 0.08778567

It looks like the variance is (slightly) larger than the average (we’ll see in a few weeks how to test it, more formally). It is possible to add covariates, for instance the density of population, in the area where the policyholder lives,
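(A more formal test will come later in the course; note in passing that a quick check is available through the dispersiontest function of the AER package, applied to a Poisson fit with the log-exposure as an offset – a sketch:)

> library(AER)
> regp=glm(nbre~1+offset(log(exposition)),
+ data=baseFREQ,family=poisson)
> dispersiontest(regp)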

>  X=as.factor(baseFREQ$densite)
>  vm=vv=ve=rep(NA,length(levels(X)))
>  for(i in 1:length(levels(X))){
+ 	   Ei=E[X==levels(X)[i]]
+ 	   Yi=Y[X==levels(X)[i]]
+ 	   vm[i]=meani=weighted.mean(Yi/Ei,Ei)           # average
+ 	   vv[i]=variancei=sum((Yi-meani*Ei)^2)/sum(Ei)  # variance
+ 	   ve[i]=sum(Ei)                                 # total exposure of the group
+ cat("Density, zone",levels(X)[i],"average =",meani," variance =",variancei,"\n")
+ }
Density, zone 11 average = 0.07962411  variance = 0.08711477 
Density, zone 21 average = 0.05294927  variance = 0.07378567 
Density, zone 22 average = 0.09330982  variance = 0.09582698 
Density, zone 23 average = 0.06918033  variance = 0.07641805 
Density, zone 24 average = 0.06004009  variance = 0.06293811 
Density, zone 25 average = 0.06577788  variance = 0.06726093 
Density, zone 26 average = 0.0688496   variance = 0.07126078 
Density, zone 31 average = 0.07725273  variance = 0.09067 
Density, zone 41 average = 0.03649222  variance = 0.03914317 
Density, zone 42 average = 0.08333333  variance = 0.1004027 
Density, zone 43 average = 0.07304602  variance = 0.07209618 
Density, zone 52 average = 0.06893741  variance = 0.07178091 
Density, zone 53 average = 0.07725661  variance = 0.07811935 
Density, zone 54 average = 0.07816105  variance = 0.08947993 
Density, zone 72 average = 0.08579731  variance = 0.09693305 
Density, zone 73 average = 0.04943033  variance = 0.04835521 
Density, zone 74 average = 0.1188611   variance = 0.1221675 
Density, zone 82 average = 0.09345635  variance = 0.09917425 
Density, zone 83 average = 0.04299708  variance = 0.05259835 
Density, zone 91 average = 0.07468126  variance = 0.3045718 
Density, zone 93 average = 0.08197912  variance = 0.09350102 
Density, zone 94 average = 0.03140971  variance = 0.04672329

Perhaps graphs would be a nice tool to play with, to visualize that information

> plot(vm,vv,cex=sqrt(ve/100),col="grey",pch=19,
+ xlab="Empirical average",ylab="Empirical variance")
> points(vm,vv,cex=sqrt(ve/100))   # circle area proportional to the group exposure (arbitrary scaling)
> abline(a=0,b=1,lty=2)

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-10.51.26.png

The size of the circles is related to the size of the group (the area is proportional to the total exposure within the group). The first diagonal corresponds to the Poisson model, i.e. the variance should be equal to the mean. It is also possible to consider other covariates, like the gas type

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-10.52.02.png

or the car brand,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-10.50.49.png

It is also possible to consider the age of the driver as a categorical variate

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-10.51.40.png

Actually, the age is interesting: we can observe on that dataset a feature that Jean-Philippe Boucher also observed on his own datasets. Let us look more carefully at where the different ages are,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-10.55.17.png

On the right, we can observe young (inexperienced) drivers. That was expected. But some classes are below the first diagonal: the expected frequency is large, but not the variance. I.e. we know for sure that young drivers have more car accidents. It is not a heterogeneous class, on the contrary: young drivers can be seen as a relatively homogeneous class, with a high frequency of car accidents.

With the original dataset (here, I use only a subset with 50,000 clients), we do obtain the following graph:

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-11.27.04.png

Even if we do not observe underdispersion for young drivers, observe that those are incredibly homogeneous classes, with a clear impact of experience, since the circles move downward from age 18 to 25.

Another disturbing story (this was – one more time – a suggestion from Jean-Philippe): it might be possible to consider the exposure as a standard explanatory variable, and see whether its coefficient is actually equal to 1. Without any covariate,

>  reg=glm(Y~log(E),family=poisson("log"))
>  summary(reg)

Call:
glm(formula = Y ~ log(E), family = poisson("log"))

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-0.3988  -0.3388  -0.2786  -0.1981  12.9036  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -2.83045    0.02822 -100.31   <2e-16 ***
log(E)       0.53950    0.02905   18.57   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 12931  on 49999  degrees of freedom
Residual deviance: 12475  on 49998  degrees of freedom
AIC: 16150

Number of Fisher Scoring iterations: 6

i.e. the parameter is clearly strictly smaller than 1. And it is neither related to significance,

> library(car)
> linearHypothesis(reg,"log(E)",1)
Linear hypothesis test

Hypothesis:
log(E) = 1

Model 1: restricted model
Model 2: Y ~ log(E)

  Res.Df Df  Chisq Pr(>Chisq)    
1  49999                         
2  49998  1 251.19  < 2.2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

nor to the fact that I did not take into account covariates,

> reg=glm(nbre~log(exposition)+carburant+as.factor(ageconducteur)+as.factor(densite),family=poisson("log"),data=baseFREQ)
>  summary(reg)

Call:
glm(formula = nbre ~ log(exposition) + carburant + as.factor(ageconducteur) + 
    as.factor(densite), family = poisson("log"), data = baseFREQ)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-0.7114  -0.3200  -0.2637  -0.1896  12.7104  

Coefficients:
                              Estimate Std. Error z value Pr(>|z|)    
(Intercept)                  -14.07321  181.04892  -0.078 0.938042    
log(exposition)                0.56781    0.03029  18.744  < 2e-16 ***
carburantE                    -0.17979    0.04630  -3.883 0.000103 ***
as.factor(ageconducteur)19    12.18354  181.04915   0.067 0.946348    
as.factor(ageconducteur)20    12.48752  181.04902   0.069 0.945011

(etc.) So it might be too strong an assumption to consider the exposure as an exogenous variate here. But that’s another story!