Category Archives: GLM

Natura non facit saltus

(see John Wilkins’ article on the – interesting – history of that phrase http://scienceblogs.com/evolvingthoughts/…). We will see, this week in class, several smoothing techniques, for insurance ratemaking. As a starting point, assume that we do not want to use segmentation techniques: everyone will pay exactly the same price.

  • no segmentation of the premium

And that price should be related to the pure premium, which is proportional to the frequency (or the annualized frequency, as discussed previously), since

https://latex.codecogs.com/gif.latex?\mathbb{E}_{\mathbb{P}}\left(\sum_{i=1}^N%20Y_i\right)=\mathbb{E}_{\mathbb{P}}(N)%20\cdot%20\mathbb{E}_{\mathbb{P}}(Y_i)
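
Just to fix ideas, here is a minimal simulation check of that identity (with a purely hypothetical Poisson frequency and Gamma claim sizes, nothing to do with the dataset used below),

# minimal simulation sketch of the identity above (hypothetical values)
set.seed(1)
n <- 1e5
N <- rpois(n, lambda = 0.07)                                        # claim counts
S <- sapply(N, function(k) sum(rgamma(k, shape = 2, scale = 500)))  # aggregate losses
mean(S)               # empirical E(S)
mean(N) * 2 * 500     # E(N) * E(Y), should be close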

The probability measure is mentioned here just to recall that we can use any measure. Even https://latex.codecogs.com/gif.latex?\mathbb{P}_{\boldsymbol{X}} (based on some covariates). Without any covariate, the expected frequency should be

> regglm0=glm(nbre~1+offset(log(exposition)),data=sinistres,family=poisson)
> summary(regglm0)

Call:
glm(formula = nbre ~ 1 + offset(log(exposition)), family = poisson, 
    data = sinistres)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-0.5033  -0.3719  -0.2588  -0.1376  13.2700  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept)  -2.6201     0.0228  -114.9   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 12680  on 49999  degrees of freedom
Residual deviance: 12680  on 49999  degrees of freedom
AIC: 16353

Number of Fisher Scoring iterations: 6
> exp(coefficients(regglm0))
(Intercept) 
 0.07279295

Thus, if we do not want to take into account potential heterogeneity, we should assume that https://latex.codecogs.com/gif.latex?N\sim\mathcal{P}(\lambda) where https://latex.codecogs.com/gif.latex?\lambda is close to 7.28%. Yes, as mentioned in class, it is rather common to see https://latex.codecogs.com/gif.latex?\lambda as a percentage, i.e. a probability, since

https://latex.codecogs.com/gif.latex?\mathbb{P}(N\neq%200)=1-e^{-\lambda}\approx%20\lambda
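
As a quick numerical check of that approximation, using the estimated value obtained above,

# 1 - exp(-lambda) is close to lambda when lambda is small
lambda <- 0.0728
1 - exp(-lambda)    # about 0.0702, indeed close to lambda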

i.e. https://latex.codecogs.com/gif.latex?\lambda can be interpreted as the probability of having (at least) one claim (see also the law of small numbers). Let us visualize this as a function of the age of the driver,

> a=18:100
> yp=predict(regglm0,newdata=data.frame(ageconducteur=a,exposition=1),type="response",se.fit=TRUE)
> yp0=yp$fit
> yp1=yp$fit+2*yp$se.fit
> yp2=yp$fit-2*yp$se.fit
> plot(a,yp0,type="l",ylim=c(.03,.12))
> abline(v=40,col="grey")
> lines(a,yp1,lty=2)
> lines(a,yp2,lty=2)
> k=23
> points(a[k],yp0[k],pch=3,lwd=3,col="red")
> segments(a[k],yp1[k],a[k],yp2[k],col="red",lwd=3)

http://freakonometrics.hypotheses.org/files/2013/02/reg-poisson-constante.png

We do predict the same frequency for all drivers, e.g. for a driver aged 40,

> cat("Frequency =",yp0[k]," confidence interval",yp1[k],yp2[k])
Frequency = 0.07279295  confidence interval 0.07611196 0.06947393

Let us now consider the case where we try to take into account heterogeneity, e.g. by age,

  • The (standard) Poisson regression

The idea of the (log-)Poisson regression is to assume that instead of having https://latex.codecogs.com/gif.latex?N\sim\mathcal{P}(\lambda), we should have https://latex.codecogs.com/gif.latex?N|\boldsymbol{X}\sim\mathcal{P}(\lambda_{\boldsymbol{X}}), where

https://latex.codecogs.com/gif.latex?\lambda_{\boldsymbol{X}}=\exp(\beta_0+\beta_1%20\boldsymbol{X}_1+\cdots+\beta_k\boldsymbol{X}_k)

in a very general setting. Here, let us consider only one explanatory variable, i.e.

https://latex.codecogs.com/gif.latex?\lambda_{X}=\exp(\beta_0+\beta_1%20{X})

Here, with the age of the driver as the explanatory variable, we have

> regglm1=glm(nbre~ageconducteur+offset(log(exposition)),
+ data=sinistres,family=poisson)
> yp=predict(regglm1,newdata=data.frame(ageconducteur=a,exposition=1),
+ type="response",se.fit=TRUE)
> yp0=yp$fit
> yp1=yp$fit+2*yp$se.fit
> yp2=yp$fit-2*yp$se.fit
> plot(a,yp0,type="l",ylim=c(.03,.12))
> abline(v=40,col="grey")
> lines(a,yp1,lty=2)
> lines(a,yp2,lty=2)
> points(a[k],yp0[k],pch=3,lwd=3,col="red")
> segments(a[k],yp1[k],a[k],yp2[k],col="red",lwd=3)

http://freakonometrics.hypotheses.org/files/2013/02/reg-poisson-exp-standard.png

i.e. the prediction for the annualized claim frequency for our 40 year old driver is now 7.74% (which is slightly higher than what we had before, 7.28%)

> cat("Frequency =",yp0[k]," confidence interval",yp1[k],yp2[k])
Frequency = 0.07740574  confidence interval 0.08117512 0.07363636

It is possible to compute not the expected frequency, but the ratio https://latex.codecogs.com/gif.latex?\mathbb{E}(N|X)/\mathbb{E}(N).
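
As a sketch, reusing the objects computed above (yp0 for the age-based predictions, and regglm0 for the model without covariates), that ratio can be plotted as follows,

# relative premium E(N|X=x)/E(N), using the models fitted above
lambda0 <- exp(coefficients(regglm0))     # flat (unsegmented) frequency
plot(a, yp0/lambda0, type="l", xlab="age of the driver", ylab="relative premium")
abline(h=1, col="blue")                   # above the line: pay more than the flat premium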

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-05-a%CC%80-13.45.43.png

Above the horizontal blue line, the premium will be higher than the one obtained without segmentation, and (of course) lower below. Here, drivers younger than 44 will pay more, while drivers older than 44 will pay less. We have discussed, in the introduction, the necessity of segmentation. If we consider two companies, one segmenting while the other one has a flat rate, then older drivers will go to the first company (since insurance is cheaper) while younger ones will go to the second one (again, it is cheaper). The problem is that the second company implicitly hopes that older drivers will compensate for the risk. But since they’re gone, insurance will be too cheap, and the company will lose money (if not go bankrupt). So companies have to use segmentation techniques to survive. Now, the problem is that we cannot be sure that this exponential decay of the premium is the proper way the premium should evolve as a function of the age. An alternative can be to use nonparametric techniques to visualize the true influence of the age on claim frequency.

  • A pure nonparametric model

A first model is to consider one premium per age. This can be done by considering the age of the driver as a factor in the regression,

> regglm2=glm(nbre~as.factor(ageconducteur)+offset(log(exposition)),
+ data=sinistres,family=poisson)
> yp=predict(regglm2,newdata=data.frame(ageconducteur=a0,exposition=1),
+ type="response",se.fit=TRUE)
> yp0=yp$fit
> yp1=yp$fit+2*yp$se.fit
> yp2=yp$fit-2*yp$se.fit
> plot(a0,yp0,type="l",ylim=c(.03,.12))
> abline(v=40,col="grey")

http://freakonometrics.hypotheses.org/files/2013/02/reg-poisson-factors.png

Here, the forecast for our 40 year old driver is slightly lower than the previous one, but the confidence interval is much larger (since we focus on a very small subclass of the portfolio: drivers aged exactly 40)

Frequency = 0.06686658  confidence interval 0.08750205 0.0462311

Here, the classes are too small, and the premium is too erratic: the premium decreases by 20% from age 40 to 41, and then increases by 50% from age 41 to 42,

> diff(log(yp0[23:25]))
        24         25 
-0.2330241  0.5223478

There is no chance that the company will keep its insureds with this strategy. This discontinuity of the premium is clearly an important issue here.

  • Using age classes

An alternative can be to consider age classes, from very young drivers to senior drivers.

> level1=seq(15,105,by=5)
> regglmc1=glm(nbre~cut(ageconducteur,level1)+offset(log(exposition)),
+ data=sinistres,family=poisson)
> summary(regglmc1)

Coefficients:
                                   Estimate Std. Error z value Pr(>|z|)    
(Intercept)                         -1.6036     0.1741  -9.212  < 2e-16 ***
cut(ageconducteur, level1)(20,25]   -0.4200     0.1948  -2.157   0.0310 *  
cut(ageconducteur, level1)(25,30]   -0.9378     0.1903  -4.927 8.33e-07 ***
cut(ageconducteur, level1)(30,35]   -1.0030     0.1869  -5.367 8.02e-08 ***
cut(ageconducteur, level1)(35,40]   -1.0779     0.1866  -5.776 7.65e-09 ***
cut(ageconducteur, level1)(40,45]   -1.0264     0.1858  -5.526 3.28e-08 ***
cut(ageconducteur, level1)(45,50]   -0.9978     0.1856  -5.377 7.58e-08 ***
cut(ageconducteur, level1)(50,55]   -1.0137     0.1855  -5.464 4.65e-08 ***
cut(ageconducteur, level1)(55,60]   -1.2036     0.1939  -6.207 5.40e-10 ***
cut(ageconducteur, level1)(60,65]   -1.1411     0.2008  -5.684 1.31e-08 ***
cut(ageconducteur, level1)(65,70]   -1.2114     0.2085  -5.811 6.22e-09 ***
cut(ageconducteur, level1)(70,75]   -1.3285     0.2210  -6.012 1.83e-09 ***
cut(ageconducteur, level1)(75,80]   -0.9814     0.2271  -4.321 1.55e-05 ***
cut(ageconducteur, level1)(80,85]   -1.4782     0.3371  -4.385 1.16e-05 ***
cut(ageconducteur, level1)(85,90]   -1.2120     0.5294  -2.289   0.0221 *  
cut(ageconducteur, level1)(90,95]   -0.9728     1.0150  -0.958   0.3379    
cut(ageconducteur, level1)(95,100] -11.4694   144.2817  -0.079   0.9366    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

> yp=predict(regglmc1,newdata=data.frame(ageconducteur=a,exposition=1),
+ type="response",se.fit=TRUE)
> yp0=yp$fit
> yp1=yp$fit+2*yp$se.fit
> yp2=yp$fit-2*yp$se.fit
> plot(a,yp0,ylim=c(.03,.12),type="s")
> abline(v=40,col="grey")
> lines(a,yp1,lty=2,type="s")
> lines(a,yp2,lty=2,type="s")

Here we obtain the following predictions,

http://freakonometrics.hypotheses.org/files/2013/02/reg-poisson-cut-1.png

and for our 40 year old driver, the frequency is now 6.84%.

Frequency = 0.0684573  confidence interval 0.07766717 0.05924742

But our classes were defined arbitrarily here. Perhaps we should consider other classes, to see if the prediction is sensitive to the cut points,

> level2=level1-2
> regglmc2=glm(nbre~cut(ageconducteur,level2)+offset(log(exposition)),
+ data=sinistres,family=poisson)

http://freakonometrics.hypotheses.org/files/2013/02/reg-poisson-cut-2.png

which yields the following values for our 40 year old driver,

Frequency = 0.07050614  confidence interval 0.07980422 0.06120807

So here, we did not remove the discontinuity problem. An idea can be to consider moving classes: if the goal is to predict the frequency for a 40 year old driver, perhaps the class should be (somehow) centered on 40. And centered on 35 for drivers aged 35, etc.

  • Moving average

Thus, it is natural to consider some local regressions, where only drivers aged almost 40 should be considered. This “almost” concept is related to the bandwidth. For instance, drivers between 35 and 45 can be considered as being almost 40. In practice, we can either consider a subset of the data, or use weights in the regression,

> value=40
> h=5
> sinistres$omega=(abs(sinistres$ageconducteur-value)<=h)*1
> regglmomega=glm(nbre~ageconducteur+offset(log(exposition)),
+ data=sinistres,family=poisson,weights=omega)

To see what’s going on, let us consider an animated plot, where the age of interest is changing,

http://freakonometrics.hypotheses.org/files/2013/02/liss-poisson-2.gif

Here, for our 40 year old driver, we get

Frequency = 0.06913391  confidence interval 0.07535564 0.06291218

We do obtain a curve that can be interpreted as a local regression. But here, we do not take into account that 35 is not as close to 40 as 39 is. And here, 34 is assumed to be very far away from 40. Clearly, we could improve that technique: kernel functions can be considered, i.e. the closer to 40, the larger the weight.

> value=40
> h=5
> sinistres$omega=dnorm(abs(sinistres$ageconducteur-value)/h)
> regglmomega=glm(nbre~ageconducteur+offset(log(exposition)),
+ data=sinistres,family=poisson,weights=omega)

which can be plotted below

http://freakonometrics.hypotheses.org/files/2013/02/liss-poisson-1.gif

Here, our prediction for our 40 year old driver is

Frequency = 0.07040464  confidence interval 0.07981521 0.06099408
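
The full curve shown in those animations can be obtained by looping over all the ages of interest; here is a minimal sketch, reusing the same dataset and the Gaussian kernel weights above,

# refit the locally weighted regression for each age, and keep the local prediction
ages <- 18:100
h <- 5
pred <- rep(NA, length(ages))
for(j in seq_along(ages)){
  sinistres$omega <- dnorm(abs(sinistres$ageconducteur - ages[j]) / h)
  regj <- glm(nbre ~ ageconducteur + offset(log(exposition)),
              data = sinistres, family = poisson, weights = omega)
  pred[j] <- predict(regj, newdata = data.frame(ageconducteur = ages[j], exposition = 1),
                     type = "response")
}
plot(ages, pred, type = "l", ylim = c(.03, .12))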

This is the idea of kernel regression techniques. But as explained in the slides, other nonparametric techniques can be considered, like spline functions.

  • Smoothing with splines

In R, it is simple to use spline functions (somehow much simpler than kernel smoothers)

> library(splines)
> regglmbs=glm(nbre~bs(ageconducteur)+offset(log(exposition)),
+ data=sinistres,family=poisson)

http://freakonometrics.hypotheses.org/files/2013/02/reg-poisson-splines.png

The prediction for our 40 year old driver is now

Frequency = 0.06928169  confidence interval 0.07397124 0.06459215

Note that this technique is related to another class of models, the so-called Generalized Additive Models, i.e. GAMs.

> library(mgcv)
> reggam=gam(nbre~s(ageconducteur)+offset(log(exposition)),
+ data=sinistres,family=poisson)

http://freakonometrics.hypotheses.org/files/2013/02/reg-poisson-gam.png

The prediction is extremely close to the one we obtained above (the main differences being observed for very old drivers)

Frequency = 0.06912683  confidence interval 0.07501663 0.06323702

  • Comparison of the different models

Somehow, one way or another, all those models are valid. So perhaps we should compare them,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-05-a%CC%80-14.50.19.png

On the graph above, we can visualize the upper and the lower bound of the prediction, for the 9 models. The horizontal line is the predicted value without taking into account heterogeneity. It is possible to consider relative values, with respect to this value,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-05-a%CC%80-14.54.56.png
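
As a sketch of how such a comparison can be assembled (assuming the models fitted above are still in memory; only a few of them are listed here, with the names used in this post),

# prediction and +/- 2 standard errors at age 40, for several of the models above
models <- list(constant = regglm0, loglinear = regglm1, classes = regglmc1,
               splines = regglmbs, gam = reggam)
newd <- data.frame(ageconducteur = 40, exposition = 1)
comp <- t(sapply(models, function(m){
  p <- predict(m, newdata = newd, type = "response", se.fit = TRUE)
  c(fit = as.numeric(p$fit),
    low = as.numeric(p$fit - 2*p$se.fit),
    up  = as.numeric(p$fit + 2*p$se.fit))
}))
comp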

Claim frequency, and overdispersion

I keep putting online the slides that will be used as support for the ACT2040 course. In this last part on claim frequency modeling, we will discuss overdispersion. The slides are online here,

Otherwise, among complementary references, I can suggest several documents written by practitioners, such as Meyers (2009) http://casact.org/education/…, Ismail & Jemain (2009) http://casact.org/pubs/… or the very interesting (and critical) document by Schmid (2011) http://casact.org/education/…. The most motivated readers can also skim Sections 2.3 and 2.4 of Denuit et al. (2007), online at http://books.google.ca/…

Overdispersion with different exposures

In actuarial science, and insurance ratemaking, taking into account the exposure can be a nightmare (in datasets, some clients have been here for a few years – we call that exposure – while others have been here for a few months, or weeks). Somehow, simple results become more complicated to compute just because we have to take into account the fact that exposure is a heterogeneous variable.

The exposure in insurance ratemaking can be seen as a problem of censored data (in my dataset, the exposure is always smaller than 1 since observations are contracts, not policyholders),

  • the number of claims https://latex.codecogs.com/gif.latex?N_i on the period https://latex.codecogs.com/gif.latex?[0,1] is unobserved
  • the number of claims https://latex.codecogs.com/gif.latex?Y_i on https://latex.codecogs.com/gif.latex?[0,E_i] is observed (as well as https://latex.codecogs.com/gif.latex?E_i)

And as always, the variable of interest is the unobserved one, because we have to price insurance contracts with a cover period of one (full) year. So we have to model the yearly frequency of insurance claims.

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-09.30.00.png

In our dataset, we have https://latex.codecogs.com/gif.latex?(Y_i,E_i)‘s – or more generally also some additional covariates https://latex.codecogs.com/gif.latex?(Y_i,E_i,\boldsymbol{X}_i)‘s. For ratemaking, we need to estimate https://latex.codecogs.com/gif.latex?\mathbb{E}(N\vert\boldsymbol{X}=\boldsymbol{x}) and perhaps also https://latex.codecogs.com/gif.latex?\text{Var}(N|\boldsymbol{X}=\boldsymbol{x}) (for instance to test if the Poisson assumption is valid, or not). To estimate the expected value, a natural estimate for https://latex.codecogs.com/gif.latex?\mathbb{E}(N) (forget about covariates as a start) is
https://latex.codecogs.com/gif.latex?m_N=\frac{\sum_{i=1}^n%20Y_i}{\sum_{i=1}^n%20E_i}
which is also the weighted average of the annualized individual counts
https://latex.codecogs.com/gif.latex?m_N=\sum_{i=1}^n%20\frac{%20E_i}{\sum_{i=1}^n%20E_i}%20\cdot%20\frac{Y_i}{E_i}
We consider the ratio of the total number of claims to the total exposure-to-risk. This estimate appears for instance if we consider a Poisson process, so that https://latex.codecogs.com/gif.latex?N\sim\mathcal{P}(\lambda) while https://latex.codecogs.com/gif.latex?Y\sim\mathcal{P}(\lambda%20\cdot%20E). Then, the likelihood is

https://latex.codecogs.com/gif.latex?\mathcal{L}(\lambda,\boldsymbol{Y},\boldsymbol{E})=\prod_{i=1}^n%20\frac{e^{-\lambda%20E_i}%20[\lambda%20E_i]^{Y_i}}{Y_i!}

i.e.

https://latex.codecogs.com/gif.latex?\log%20\mathcal{L}(\lambda,\boldsymbol{Y},\boldsymbol{E})%20=%20-\lambda%20\sum_{i=1}^n%20E_i%20+\sum_{i=1}^n%20Y_i%20\log[\lambda%20E_i]%20-%20\log\left(\prod_{i=1}^n%20Y_i!\right)

The first order condition is here

https://latex.codecogs.com/gif.latex?\frac{\partial}{\partial%20\lambda}\log%20\mathcal{L}(\lambda,\boldsymbol{Y},\boldsymbol{E})%20=%20%20-%20\sum_{i=1}^n%20E_i%20+\frac{1}{\lambda}\sum_{i=1}^n%20Y_i%20=0

which is satisfied if

https://latex.codecogs.com/gif.latex?\widehat{\lambda}=\frac{\sum_{i=1}^n%20Y_i}{\sum_{i=1}^n%20E_i}

So, we do have an estimator for the expected value, and a natural estimator for https://latex.codecogs.com/gif.latex?\mathbb{E}(N\vert\boldsymbol{X}=\boldsymbol{x}) is then (if we consider categorical covariates)
https://latex.codecogs.com/gif.latex?m_{N|\boldsymbol{x}}%20=\frac{\sum_{i,\boldsymbol{X}_i=\boldsymbol{x}}%20Y_i}{\sum_%20{i,\boldsymbol{X}_i=\boldsymbol{x}}%20E_i}

Now, we need an estimate for the variance, or more precisely the conditional variance. Assume (as a starting point) that all policyholders have the same exposure https://latex.codecogs.com/gif.latex?E. For instance, if https://latex.codecogs.com/gif.latex?E is one half, insureds were observed only during the first six months. Then https://latex.codecogs.com/gif.latex?N=Y+Y%27 with https://latex.codecogs.com/gif.latex?Y\overset{\mathcal%20L}{=}Y%27 (https://latex.codecogs.com/gif.latex?Y is the number of claims during the first six months, while https://latex.codecogs.com/gif.latex?Y%27 is the number of claims during the last six months), i.e. https://latex.codecogs.com/gif.latex?\text{Var}(N)=\text{Var}(Y)+%20\text{Var}(Y%27) if we assume independent increments. I.e.
https://latex.codecogs.com/gif.latex?\text{Var}(N)=2\text{Var}(Y), or equivalently https://latex.codecogs.com/gif.latex?E%20\cdot\text{Var}(N)=\text{Var}(Y). More generally, it is reasonable to assume that

https://latex.codecogs.com/gif.latex?\text{Var}(Y)=E\cdot%20\text{Var}(N)
for all values of https://latex.codecogs.com/gif.latex?E. And then
https://latex.codecogs.com/gif.latex?\text{Var}\left(\frac{Y}{E}\right)=\frac{1}{E}\cdot%20\text{Var}(N)
Thus, it seems legitimate to assume that the empirical variance of https://latex.codecogs.com/gif.latex?N can be written
https://latex.codecogs.com/gif.latex?S_N^2=E\cdot%20S_{Y/E}^2
Since the average of https://latex.codecogs.com/gif.latex?Y_i/E is https://latex.codecogs.com/gif.latex?\overline{N}=m_N, then
https://latex.codecogs.com/gif.latex?S_N^2=E\cdot%20\frac{1}{n}\sum_{i=1}^n%20\left[\frac{Y_i}{E}-\overline{N}\right]^2%20=%20\frac{1}{n}\sum_{i=1}^n%20E\left[\frac{Y_i}{E}-\overline{N}\right]^2
or equivalently
https://latex.codecogs.com/gif.latex?S_N^2=\frac{1}{n}\sum_{i=1}^n%20\frac{E}{E^2}\left[Y_i-\overline{N}\cdot%20E\right]^2%20=\frac{1}{n}\sum_{i=1}^n%20\frac{1}{E}[Y_i-\overline{N}\cdot%20E]^2
i.e.
https://latex.codecogs.com/gif.latex?S_N^2=\frac{\sum_{i=1}^n%20[Y_i-\overline{N}\cdot%20E]^2%20}{nE}
Thus, with different https://latex.codecogs.com/gif.latex?E_i‘s, it would be legitimate (I guess) to consider
https://latex.codecogs.com/gif.latex?S_N^2=\frac{\sum_{i=1}^n%20[Y_i-\overline{N}\cdot%20E_i]^2%20}{\sum_{i=1}^n%20E_i}
Thus, an estimator for https://latex.codecogs.com/gif.latex?\text{Var}(N|\boldsymbol{X}=\boldsymbol{x}) is
https://latex.codecogs.com/gif.latex?S_{N|\boldsymbol{x}}^2=\frac{\sum_{i,\boldsymbol{X}_i=\boldsymbol{x}}%20[Y_i-\overline{N}\cdot%20E_i]^2}{\sum_{i,\boldsymbol{X}_i=\boldsymbol{x}%20}%20E_i}
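
As a side note, the assumption Var(Y) = E · Var(N) used at the beginning of this derivation can be illustrated with a small simulation, in the Poisson case (the values below are purely hypothetical),

# hedged simulation check of Var(Y) = E * Var(N), Poisson case
set.seed(1)
lambda <- 0.07; expo <- 0.5
N <- rpois(1e6, lambda)           # full-year counts
Y <- rpois(1e6, lambda * expo)    # counts observed on [0, E]
c(var(Y), expo * var(N))          # both close to lambda * E = 0.035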

This estimator can be used to test whether the Poisson assumption is valid to model claim frequency. Consider the following dataset,

>  sinistre=read.table("http://freakonometrics.free.fr/sinistreACT2040.txt",
+  header=TRUE,sep=";")
>  sinistres=sinistre[sinistre$garantie=="1RC",]
>  sinistres=sinistres[sinistres$cout>0,]
>  contrat=read.table("http://freakonometrics.free.fr/contractACT2040.txt",
+  header=TRUE,sep=";")
>  T=table(sinistres$nocontrat)
>  T1=as.numeric(names(T))
>  T2=as.numeric(T)
>  nombre1 = data.frame(nocontrat=T1,nbre=T2)
>  I = contrat$nocontrat%in%T1
>  T1= contrat$nocontrat[I==FALSE]
>  nombre2 = data.frame(nocontrat=T1,nbre=0)
>  nombre=rbind(nombre1,nombre2)
>  baseFREQ = merge(contrat,nombre)

Here, we do have our two variables of interest, the exposure, per contract,

>  E <- baseFREQ$exposition

and the (observed) number of claims (during that time frame)

>  Y <- baseFREQ$nbre

It is possible to compute, without covariates, the average (yearly) number of claims per contract, and the associated variance

> (mean=weighted.mean(Y/E,E))
[1] 0.07279295
> (variance=sum((Y-mean*E)^2)/sum(E)) 
[1] 0.08778567
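
As a quick cross-check with the first-order condition derived above, the same estimate is returned by an intercept-only Poisson regression with the log of the exposure as an offset (one line, with the Y and E just defined),

# should return the same value as the exposure-weighted mean above
exp(coef(glm(Y ~ 1 + offset(log(E)), family = poisson)))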

It looks like the variance is (slightly) larger than the average (we’ll see in a few weeks how to test it, more formally). It is possible to add covariates, for instance the density of population, in the area where the policyholder lives,

>  X=as.factor(baseFREQ$densite)
>  meani=variancei=expoi=rep(NA,length(levels(X)))
>  for(i in 1:length(levels(X))){
+ 	   Ei=E[X==levels(X)[i]]
+ 	   Yi=Y[X==levels(X)[i]]
+ 	   meani[i]=weighted.mean(Yi/Ei,Ei)    # empirical average in the group
+ 	   variancei[i]=sum((Yi-meani[i]*Ei)^2)/sum(Ei)    # empirical variance in the group
+ 	   expoi[i]=sum(Ei)    # total exposure of the group
+ cat("Density, zone",levels(X)[i],"average =",meani[i]," variance =",variancei[i],"\n")
+ }
Density, zone 11 average = 0.07962411  variance = 0.08711477 
Density, zone 21 average = 0.05294927  variance = 0.07378567 
Density, zone 22 average = 0.09330982  variance = 0.09582698 
Density, zone 23 average = 0.06918033  variance = 0.07641805 
Density, zone 24 average = 0.06004009  variance = 0.06293811 
Density, zone 25 average = 0.06577788  variance = 0.06726093 
Density, zone 26 average = 0.0688496   variance = 0.07126078 
Density, zone 31 average = 0.07725273  variance = 0.09067 
Density, zone 41 average = 0.03649222  variance = 0.03914317 
Density, zone 42 average = 0.08333333  variance = 0.1004027 
Density, zone 43 average = 0.07304602  variance = 0.07209618 
Density, zone 52 average = 0.06893741  variance = 0.07178091 
Density, zone 53 average = 0.07725661  variance = 0.07811935 
Density, zone 54 average = 0.07816105  variance = 0.08947993 
Density, zone 72 average = 0.08579731  variance = 0.09693305 
Density, zone 73 average = 0.04943033  variance = 0.04835521 
Density, zone 74 average = 0.1188611   variance = 0.1221675 
Density, zone 82 average = 0.09345635  variance = 0.09917425 
Density, zone 83 average = 0.04299708  variance = 0.05259835 
Density, zone 91 average = 0.07468126  variance = 0.3045718 
Density, zone 93 average = 0.08197912  variance = 0.09350102 
Density, zone 94 average = 0.03140971  variance = 0.04672329

Perhaps graphs would be a nice tool to play with, to visualize that information

> plot(meani,variancei,cex=sqrt(expoi/100),col="grey",pch=19,
+ xlab="Empirical average",ylab="Empirical variance")
> points(meani,variancei,cex=sqrt(expoi/100))    # circle size based on group exposure (scaling arbitrary)
> abline(a=0,b=1,lty=2)    # first diagonal, variance equal to the mean (Poisson case)

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-10.51.26.png

The size of the circles is related to the size of the group (the area is proportional to the total exposure within the group). The first diagonal corresponds to the Poisson model, i.e. the variance should be equal to the mean. It is also possible to consider other covariates, like the gas type

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-10.52.02.png

or the car brand,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-10.50.49.png

It is also possible to consider the age of the driver as a categorical variate

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-10.51.40.png

Actually, the age is interesting: we can observe in that dataset a feature that Jean-Philippe Boucher also observed on his own datasets. Let us look more carefully at where the different ages are,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-10.55.17.png

On the right, we can observe young (inexperienced) drivers. That was expected. But some classes are below the first diagonal: the expected frequency is large, but not the variance. I.e. we know for sure that young drivers have more car accidents. It is not a heterogeneous class; on the contrary, young drivers can be seen as a relatively homogeneous class, with a high frequency of car accidents.

With the original dataset (here, I use only a subset with 50,000 clients), we do obtain the following graph:

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-11.27.04.png

Even if we do not observe underdispersion for young drivers, observe that those are incredibly homogeneous classes, with a clear impact of experience, since circles are moving downward from age 18 to 25.

Another disturbing story (this was – one more time – a suggestion from Jean-Philippe): it might be possible to consider the exposure as a standard explanatory variable, and to see if its coefficient is actually equal to 1. Without any covariate,

>  reg=glm(Y~log(E),family=poisson("log"))
>  summary(reg)

Call:
glm(formula = Y ~ log(E), family = poisson("log"))

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-0.3988  -0.3388  -0.2786  -0.1981  12.9036  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -2.83045    0.02822 -100.31   <2e-16 ***
log(E)       0.53950    0.02905   18.57   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 12931  on 49999  degrees of freedom
Residual deviance: 12475  on 49998  degrees of freedom
AIC: 16150

Number of Fisher Scoring iterations: 6

i.e. the parameter is clearly strictly smaller than 1. And it is neither related to significance,

> library(car)
> linearHypothesis(reg,"log(E)",1)
Linear hypothesis test

Hypothesis:
log(E) = 1

Model 1: restricted model
Model 2: Y ~ log(E)

  Res.Df Df  Chisq Pr(>Chisq)    
1  49999                         
2  49998  1 251.19  < 2.2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

nor to the fact that I did not take into account covariates,

> reg=glm(nbre~log(exposition)+carburant+as.factor(ageconducteur)+as.factor(densite),family=poisson("log"),data=baseFREQ)
>  summary(reg)

Call:
glm(formula = nbre ~ log(exposition) + carburant + as.factor(ageconducteur) + 
    as.factor(densite), family = poisson("log"), data = baseFREQ)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-0.7114  -0.3200  -0.2637  -0.1896  12.7104  

Coefficients:
                              Estimate Std. Error z value Pr(>|z|)    
(Intercept)                  -14.07321  181.04892  -0.078 0.938042    
log(exposition)                0.56781    0.03029  18.744  < 2e-16 ***
carburantE                    -0.17979    0.04630  -3.883 0.000103 ***
as.factor(ageconducteur)19    12.18354  181.04915   0.067 0.946348    
as.factor(ageconducteur)20    12.48752  181.04902   0.069 0.945011

(etc.) So it might be too strong an assumption to consider the exposure as an exogenous variate here. But that’s another story!

Introduction to generalized linear models

I am a bit ahead of schedule in the course. I will (normally) put online the slides for next week, where we will cover the class of generalized linear models. The slides are online here.

I did not include a section on Generalized Additive Models; we will make do with the section on smoothing discussed at the end of the slides on frequency modeling. In order to legitimize smoothing methods (on the age of the insured in particular), I refer to a graph produced several years ago by a consulting firm, which noted that the shape of the smoothing function linking age to frequency is the same in every country,

http://freakonometrics.hypotheses.org/files/2013/02/assurance4.jpg

But I think I will write a dedicated post on smoothing, in the context of property and casualty insurance ratemaking.

Poisson regression, and minimum bias

In the next actuarial science class, we will finish regression trees, and introduce Poisson regression. The slides are online here,

I will present Poisson regression by drawing a parallel with logistic regression; the following session will cover the generalization obtained with generalized linear models. On Poisson regression, I suggest reading Frees (2010), chapter 12 (p 343-361), Greene (2012), section 18.3 (p 802-828), or de Jong & Heller (2008), chapter 6. On minimum bias methods, see de Jong & Heller (2008), section 1.3, and the article by Sholom Feldblum, http://www.casact.org/…. On moving from those methods (introduced by Robert Bailey in the 1960s, http://www.casact.org/… and http://www.casact.org/…) to GLMs, I recommend reading Ben Zehnwirth’s article, Ratemaking: From Bailey and Simon (1960) to Generalized Linear Regression Models, online at http://www.casact.org/…

As announced in the first class, I am trying to put the slides online as I go along, but in recent years I had gotten used to writing on the board, so I now have to type everything up. Regarding the homework, an email will be sent by the end of the week to all the groups that have registered.

 

Logistic regression and trees

For next Wednesday’s class, the dataset used will be one taken from Jed Frees’ book, http://instruction.bus.wisc.edu/jfrees/…

> baseavocat=read.table("http://freakonometrics.free.fr/AutoBI.csv",header=TRUE,sep=",")
> tail(baseavocat)
     CASENUM ATTORNEY CLMSEX MARITAL CLMINSUR SEATBELT CLMAGE  LOSS
1335   34204        2      2       2        2        1     26 0.161
1336   34210        2      1       2        2        1     NA 0.576
1337   34220        1      2       1        2        1     46 3.705
1338   34223        2      2       1        2        1     39 0.099
1339   34245        1      2       2        1        1     18 3.277
1340   34253        2      2       2        2        1     30 0.688

We have a dichotomous variable indicating whether an insured – following a road accident – was represented by an attorney (1 if yes, 2 if no). We know the gender of the insured (1 for men, 2 for women), and the marital status (1 if married, 2 if single, 3 for widowed, and 4 for a divorced insured). We also know whether or not the insured was wearing a seatbelt when the accident occurred (1 if yes, 2 if no, 3 if the information is not known). Finally, there is some information on whether the driver of the vehicle was insured or not (1 if yes, 2 if no, 3 if the information is not known). We will recode the data a bit to make them easier to read.
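
As a sketch, the recoding could look like the following (the labels below are just illustrative, not necessarily the ones used in class),

# hedged sketch of the recoding described above (labels are illustrative)
baseavocat$ATTORNEY <- factor(baseavocat$ATTORNEY, levels=1:2, labels=c("yes","no"))
baseavocat$CLMSEX   <- factor(baseavocat$CLMSEX,   levels=1:2, labels=c("male","female"))
baseavocat$MARITAL  <- factor(baseavocat$MARITAL,  levels=1:4,
                              labels=c("married","single","widowed","divorced"))
baseavocat$SEATBELT <- factor(baseavocat$SEATBELT, levels=1:3, labels=c("yes","no","unknown"))
baseavocat$CLMINSUR <- factor(baseavocat$CLMINSUR, levels=1:3, labels=c("yes","no","unknown"))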

The slides for the class are online here,

On regression trees, I will put a post online to illustrate the method. In the meantime, theoretical complements can be found online at http://genome.jouy.inra.fr/…, http://ensmp.fr/…, or http://ujf-grenoble.fr/… (for information, we will only cover the CART method). I can also point to the book (and the blog) of Stéphane Tuffery, or (in English) to Richard Berk’s book, a summary of which can be found online at http://crim.upenn.edu/….

The following week, we will cover Poisson regression and minimum bias methods, and introduce generalized linear models. I refer to the chapter on a priori ratemaking in Denuit & Charpentier (2005), to chapters 12 and 13 of Frees (2010), or to chapters 5 and 6 of de Jong & Heller (2008). For the more curious who want to understand the links between generalized linear models and credibility ratemaking, I refer to the article by Klinker (2010).

A geographic region is not a continuous variable

While rereading the homework assignments, I realized that some students had tried to group the (geographic) regions into homogeneous regions. Except that the regions were coded by a number (following the official coding). For instance, in one of the datasets, we had insureds in 4 geographic zones, namely region 82 (Rhône-Alpes, in red), region 54 (Poitou-Charentes, in green), region 73 (Midi-Pyrénées, in blue) and finally region 41 (Lorraine, in purple).

> unique(baseFREQ$region) 
[1] 82 54 73 41

An interesting idea for grouping the regions could be to use trees. Since regions are colors (as can clearly be seen on the map) and not quantitative variables, it is natural to work with factors. Incidentally, the code to draw the map is the following,

> library(maps) 
>  france<-map(database="france") 
>  dpt=c("Ain","Ardeche","Drome","Isere","Loire ","Rhone",  
+ "Savoie","Haute-Savoie","Charente","Charente-Maritime", 
+ "Deux-Sevres","Vienne","Ariege","Aveyron","Haute-Garonne",  
+ "Gers","Lot","Hautes-Pyrenees","Tarn","Tarn-et-Garonne", 
+ "Meurthe-et-Moselle","Meuse","Moselle","Vosges") 
>  couleur=c(rep(2,8),rep(3,4),rep(4,8),rep(6,4))  
>  match=match.map(france,dpt) 
>  color=couleur[match] 
>  map(database="france", fill=TRUE, col=color)

The tree on the regions treated as factors gives the following split

>  library(tree)
>  baseFREQ$fregion=as.factor(baseFREQ$region) 
>  ARBRE1=tree(nombre~fregion,data=baseFREQ,split="gini")  
>  plot(ARBRE1) 
>  text(ARBRE1)

Well, R has the bad idea of recoding the classes (but it keeps the order, i.e. a corresponds to region 41, b to 54, c to 73 and d to 82). Visually, one can see that it is possible to consider two large regions, AC (i.e. 41 and 73) and BD (i.e. 54 and 82). The advantage of trees on qualitative variables, on factors, is that all groupings are possible. On the other hand, if we grow a tree on the region read as a (quantitative) number, we get

>  ARBRE2=tree(nombre~region,data=baseFREQ,split="gini") 
>  plot(ARBRE2) 
>  text(ARBRE2)

It is then impossible to group in the same class two regions whose codes are separated by another one; i.e. we cannot put 41 and 82 in the same class. R suggests distinguishing perhaps three regions, namely 82 (on the right), then 73 (in the middle), and finally possibly putting 41 and 54 together. Which is not the optimal strategy when grouping factors.
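
Once the grouping suggested by the tree on factors has been chosen, the regions can simply be recoded; a minimal sketch (the name of the new variable is just illustrative),

# hypothetical recoding into the two groups suggested by the tree
# (AC = regions 41 and 73, BD = regions 54 and 82)
baseFREQ$groupe <- as.factor(ifelse(baseFREQ$region %in% c(41,73), "AC", "BD"))
table(baseFREQ$groupe)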

Variable selection versus selection of factor levels

In class, we (very briefly) mentioned automatic variable selection. The simplest method is a stepwise procedure, based on an AIC- or BIC-type criterion. Consider the following dataset,

>  N = base$nbre
>  E = base$exposition
>  X1 = base$carburant
>  X2 = cut(base$agevehicule,c(0,3,10,101),
+ right=FALSE)
>  X3 = cut(base$ageconducteur,c(0,22,45,101),
+ right=FALSE)
>  X4 = as.factor(base$zone)
>  X5 = as.factor(base$puissance)
>  X6 = as.factor(base$region)
>  X7 = as.factor(base$marque)
>  base1=data.frame(N,E,X1,X2,X3,X4,X5,X6,X7)

Here, a (backward) stepwise method gives

> reg1=glm(N~X1+X2+X3+X4+X5+X6+X7+offset(log(E)),
+ family="poisson",data=base1)
> step(reg1)
Start:  AIC=20492.67
N ~ X1 + X2 + X3 + X4 + X5 + X6 + X7 + offset(log(E))

Df Deviance   AIC
- X5   11    15316 20482
- X3    2    15305 20490
<none>       15304 20493
- X2    2    15314 20499
- X1    1    15319 20506
- X7   10    15343 20511
- X4    5    15398 20576
- X6   14    15569 20729

Step:  AIC=20482.35
N ~ X1 + X2 + X3 + X4 + X6 + X7 + offset(log(E))

Df Deviance   AIC
- X3    2    15317 20479
<none>       15316 20482
- X2    2    15326 20488
- X1    1    15334 20498
- X7   10    15359 20505
- X4    5    15410 20566
- X6   14    15579 20717

Step:  AIC=20479.33
N ~ X1 + X2 + X4 + X6 + X7 + offset(log(E))

Df Deviance   AIC
<none>       15317 20479
- X2    2    15327 20485
- X1    1    15334 20495
- X7   10    15360 20502
- X4    5    15410 20563
- X6   14    15620 20754

Call:  glm(formula = N ~ X1 + X2 + X4 + X6 + X7 
       + offset(log(E)),
       family = "poisson",
data = base1)

Coefficients:
(Intercept)          X1E     X2[3,10)   X2[10,101)          X4B
-1.0588454   -0.1653822    0.0266763   -0.1135451   -0.0004047
X4C          X4D          X4E          X4F          X60
0.1497622    0.3748811    0.5052894    0.4292016   -0.3590838
X61          X62          X63          X64          X65
-0.9300641   -1.0278887   -1.1818218   -1.0971797   -0.9459414
X66          X67          X68          X69         X610
-1.3690795   -1.1425678   -1.5309402   -1.3883549   -1.4603624
X611         X612         X613          X72          X73
-1.6763206   -1.3974092   -1.4864404    0.0246113    0.1144990
X74          X75          X76         X710         X711
-0.0932555    0.1635397   -0.1478095    0.2502030    0.1967970
X712         X713         X714
-0.2420215    0.2161411   -0.1963162

Degrees of Freedom: 49999 Total (i.e. Null);  49967 Residual
Null Deviance:	    15810
Residual Deviance: 15320 	AIC: 20480

In other words, we remove the third variable (age of the main driver, in arbitrary classes) and the fifth one (power of the vehicle), and keep all the others. But here, if a variable was not kept, it means that, overall, it did not bring much information. It would nevertheless be possible to keep partial information, by possibly keeping only some of its levels. The idea is to expand the dataset, creating indicator (dummy) variables for each level. The dataset will be much larger, and the selection will then take much more time,

> base2=data.frame(model.matrix( ~ 0+X1+X2+X3+X4+X5+X6+X7,
+ data=base1))
> base2$E=base1$E
> base2$N=base1$N
> reg2=glm(N~.-E+offset(log(E)),family="poisson",
+ data=base2)
>  step(reg2)
Start:  AIC=20492.67
N ~ (X1D + X1E + X2.3.10. + X2.10.101. + X3.22.45. + X3.45.101.
X4B + X4C + X4D + X4E + X4F + X55 + X56 + X57 + X58 + X59 +
X510 + X511 + X512 + X513 + X514 + X515 + X60 + X61 + X62 +
X63 + X64 + X65 + X66 + X67 + X68 + X69 + X610 + X611 + X612 +
X613 + X72 + X73 + X74 + X75 + X76 + X710 + X711 + X712 +
X713 + X714 + E) - E + offset(log(E))

Step:  AIC=20492.67
N ~ X1D + X2.3.10. + X2.10.101. + X3.22.45. + X3.45.101. + X4B
X4C + X4D + X4E + X4F + X55 + X56 + X57 + X58 + X59 + X510 +
X511 + X512 + X513 + X514 + X515 + X60 + X61 + X62 + X63 +
X64 + X65 + X66 + X67 + X68 + X69 + X610 + X611 + X612 +
X613 + X72 + X73 + X74 + X75 + X76 + X710 + X711 + X712 +
X713 + X714 + offset(log(E))

Df Deviance   AIC
- X4B         1    15304 20491
- X58         1    15304 20491
- X511        1    15304 20491
- X2.3.10.    1    15304 20491
- X72         1    15304 20491
- X513        1    15304 20491
- X512        1    15304 20491
- X515        1    15304 20491
- X74         1    15305 20491
- X3.45.101.  1    15305 20491
- X714        1    15305 20491
- X55         1    15305 20492
- X3.22.45.   1    15305 20492
- X711        1    15306 20492
- X76         1    15306 20492
- X59         1    15306 20492
<none>             15304 20493
- X514        1    15306 20493
- X713        1    15306 20493
- X73         1    15307 20493
- X56         1    15307 20493
- X710        1    15307 20494
- X75         1    15308 20494
- X2.10.101.  1    15308 20495
- X57         1    15309 20495
- X4C         1    15310 20496
- X510        1    15310 20496
- X60         1    15312 20498
- X4F         1    15314 20500
- X712        1    15316 20503
- X1D         1    15319 20506
- X4D         1    15337 20524
- X61         1    15345 20532
- X65         1    15350 20536
- X62         1    15352 20538
- X64         1    15359 20545
- X4E         1    15362 20549
- X63         1    15366 20553
- X67         1    15370 20556
- X612        1    15381 20568
- X69         1    15382 20569
- X66         1    15387 20574
- X610        1    15389 20576
- X68         1    15393 20580
- X611        1    15406 20592
- X613        1    15451 20637

Step:  AIC=20490.67
N ~ X1D + X2.3.10. + X2.10.101. + X3.22.45. + X3.45.101. + X4C
X4D + X4E + X4F + X55 + X56 + X57 + X58 + X59 + X510 + X511 +
X512 + X513 + X514 + X515 + X60 + X61 + X62 + X63 + X64 +
X65 + X66 + X67 + X68 + X69 + X610 + X611 + X612 + X613 +
X72 + X73 + X74 + X75 + X76 + X710 + X711 + X712 + X713 +
X714 + offset(log(E))

etc., and if we jump directly to the end,

Step:  AIC=20469.18
N ~ X1D + X2.10.101. + X4C + X4D + X4E + X4F + X57 + X510 + X60
X61 + X62 + X63 + X64 + X65 + X66 + X67 + X68 + X69 + X610 +
X611 + X612 + X613 + X73 + X75 + X76 + X710 + X712 + X713 +
offset(log(E))

Df Deviance   AIC
<none>             15315 20469
- X76         1    15317 20470
- X713        1    15317 20470
- X73         1    15317 20470
- X57         1    15318 20470
- X75         1    15318 20471
- X710        1    15319 20471
- X510        1    15319 20471
- X4C         1    15322 20474
- X60         1    15322 20475
- X2.10.101.  1    15325 20478
- X4F         1    15325 20478
- X1D         1    15333 20485
- X712        1    15338 20490
- X61         1    15356 20508
- X4D         1    15359 20511
- X62         1    15363 20515
- X65         1    15363 20515
- X64         1    15371 20524
- X63         1    15378 20530
- X67         1    15383 20536
- X4E         1    15390 20543
- X612        1    15394 20547
- X69         1    15396 20548
- X66         1    15400 20553
- X610        1    15403 20555
- X68         1    15407 20559
- X611        1    15419 20572
- X613        1    15467 20619

Call:  glm(formula = N ~ X1D + X2.10.101. + X4C + X4D + X4E + X4F
X57 + X510 + X60 + X61 + X62 + X63 + X64 + X65 + X66 + X67 +
X68 + X69 + X610 + X611 + X612 + X613 + X73 + X75 + X76 +
X710 + X712 + X713 + offset(log(E)), family = "poisson",
data = base2)

Coefficients:
(Intercept)          X1D   X2.10.101.          X4C          X4D
-1.20880      0.16886     -0.13808      0.14888      0.37539
X4E          X4F          X57         X510          X60
0.50458      0.42768      0.08381      0.18722     -0.36509
X61          X62          X63          X64          X65
-0.93836     -1.03471     -1.18803     -1.10217     -0.95624
X66          X67          X68          X69         X610
-1.37463     -1.15391     -1.54213     -1.40188     -1.47217
X611         X612         X613          X73          X75
-1.68559     -1.40582     -1.49700      0.10874      0.15022
X76         X710         X712         X713
-0.15183      0.21948     -0.27400      0.19565

Degrees of Freedom: 49999 Total (i.e. Null);  49971 Residual
Null Deviance:	    15810
Residual Deviance: 15310 	AIC: 20470

While the third variable (age of the main driver, in arbitrary classes) disappears rather quickly, some information on the fifth one (the power of the vehicle) is kept, since some of its levels seem to be informative about the accident frequency. On the other hand, note that if we grow a tree, the third variable was still clearly significant, which can support the idea of performing variable selection at the level of the factor levels.

> library(tree)
> TREE= tree(N~X1+X2+X3+X4+X5+X6+X7+offset(log(E)),split="gini",
+ mincut = 2500,data=base1)
> plot(TREE)
> text(TREE,cex=.9)

Confidence interval for predictions with GLMs

Consider a (simple) Poisson regression http://freakonometrics.hypotheses.org/files/2016/11/poiss01.gif. Given a sample http://freakonometrics.hypotheses.org/files/2016/11/poiss02.gif where http://freakonometrics.hypotheses.org/files/2016/11/poiss03.gif, the goal is to derive a 95% confidence interval for http://freakonometrics.hypotheses.org/files/2016/11/poiss04.gif given http://freakonometrics.hypotheses.org/files/2016/11/poiss05.gif, where http://freakonometrics.hypotheses.org/files/2016/11/poiss04.gif is the prediction. Hence, we want to derive a confidence interval for the prediction, not the potential observation, i.e. the dot on the graph below

> reg=glm(dist~speed,data=cars,family=poisson)
> P=predict(reg,type="response",
+ newdata=data.frame(speed=seq(-1,35,by=.2)))
> plot(cars,xlim=c(0,31),ylim=c(0,170))
> abline(v=30,lty=2)
> lines(seq(-1,35,by=.2),P,lwd=2,col="red")
> P0=predict(reg,type="response",se.fit=TRUE,
+ newdata=data.frame(speed=30))
> points(30,P0$fit,pch=4,lwd=3)

i.e.

Let http://freakonometrics.hypotheses.org/files/2016/11/poiss06.gif denote the maximum likelihood estimator of http://freakonometrics.hypotheses.org/files/2016/11/poiss07.gif. Then
http://freakonometrics.hypotheses.org/files/2016/11/poiss40.gif
where http://freakonometrics.hypotheses.org/files/2016/11/poiss101.gif is the Fisher information of http://freakonometrics.hypotheses.org/files/2016/11/poiss06.gif (from standard maximum likelihood theory). Recall that
http://freakonometrics.hypotheses.org/files/2016/11/poiss13.gif
where computation of those values is based on the following calculations
http://freakonometrics.blog.free.fr/public/latex/poiss21.gif
In the case of the log-Poisson regression
http://freakonometrics.hypotheses.org/files/2016/11/poiss36.gif
Let us get back to our initial problem.

  • confidence interval for the linear combination

A first idea to get a confidence interval for http://freakonometrics.hypotheses.org/files/2016/11/poiss49.gif is to get a confidence interval for http://freakonometrics.hypotheses.org/files/2016/11/poiss100.gif (by taking the exponential of the bounds, since the exponential is a monotone function). Asymptotically, we know that
http://freakonometrics.hypotheses.org/files/2016/11/poiss40.gif

thus, an approximation for the variance matrix of http://freakonometrics.hypotheses.org/files/2016/11/poiss06.gif will be based on http://freakonometrics.hypotheses.org/files/2016/11/poiss45.gif, obtained by plugging estimators of the parameters.
Then, since http://freakonometrics.hypotheses.org/files/2016/11/poiss06.gif has an asymptotic multivariate normal distribution, any linear combination of the parameters will also be (asymptotically) normal, i.e.
http://freakonometrics.hypotheses.org/files/2016/11/poiss47.gif has a normal distribution, centered on http://freakonometrics.hypotheses.org/files/2016/11/poiss49.gif, with variance http://freakonometrics.hypotheses.org/files/2016/11/poiss102.gif where http://freakonometrics.hypotheses.org/files/2016/11/Poiss110.gif is the variance of http://freakonometrics.hypotheses.org/files/2016/11/poiss06.gif. All those quantities can be easily computed. First, we can get the variance of the estimators

> i1=sum(predict(reg,type="response"))
> i2=sum(cars$speed*predict(reg,type="response"))
> i3=sum(cars$speed^2*predict(reg,type="response"))
> I=matrix(c(i1,i2,i2,i3),2,2)
> V=solve(I)

Hence, if we compare with the output of the regression,

> summary(reg)$cov.unscaled
(Intercept)         speed
(Intercept)  0.0066870446 -3.474479e-04
speed       -0.0003474479  1.940302e-05
> V
[,1]          [,2]
[1,]  0.0066871228 -3.474515e-04
[2,] -0.0003474515  1.940318e-05

Based on those values, it is easy to derive the standard deviation for the linear combination,

> x=30
> P2=predict(reg,type="link",se.fit=TRUE,
+ newdata=data.frame(speed=x))
> P2
$fit
1
5.046034

$se.fit
[1] 0.05747075

$residual.scale
[1] 1

> sqrt(V[1,1]+2*x*V[2,1]+x^2*V[2,2])
[1] 0.05747084
> sqrt(t(c(1,x))%*%V%*%c(1,x))
[,1]
[1,] 0.05747084

And once we have the standard deviation, and normality (at least asymptotically), confidence intervals can be derived; then, taking the exponential of the bounds, we get a confidence interval,

> segments(30,exp(P2$fit-1.96*P2$se.fit),
+ 30,exp(P2$fit+1.96*P2$se.fit),col="blue",lwd=3)

Based on that technique, confidence intervals are no longer centered on the prediction. But who cares?

  • delta method

Actually, those who like to use “more or less” expressions for confidence intervals will not like non-centered intervals. So, an alternative is to use the delta method. Instead of writing (again) something on the theory, we can use a package which implements that method,

> estmean=t(c(1,x))%*%coef(reg)
> var=t(c(1,x))%*%summary(reg)$cov.unscaled%*%c(1,x)
> library(msm)
> deltamethod (~ exp(x1), estmean, var)
[1] 8.931232
> P1=predict(reg,type="response",se.fit=TRUE,
+ newdata=data.frame(speed=30))
> P1
$fit
1
155.4048

$se.fit
1
8.931232

$residual.scale
[1] 1

The delta method gives us (asymptotic) normality, so once we have a standard deviation, we get the confidence interval.

> segments(30,P1$fit-1.96*P1$se.fit,30,
+ P1$fit+1.96*P1$se.fit,col="blue",lwd=3)

Note that those quantities – obtained with two different approaches – are rather close here

> exp(P2$fit-1.96*P2$se.fit)
1
138.8495
> P1$fit-1.96*P1$se.fit
1
137.8996
> exp(P2$fit+1.96*P2$se.fit)
1
173.9341
> P1$fit+1.96*P1$se.fit
1
172.9101

  • bootstrap techniques

And a third method (but far from what I expect to teach in that course) is to use bootstrap techniques to avoid relying on those asymptotic normality results (we have only 50 observations). The idea is to sample from our dataset, to run a log-Poisson regression on those new samples, and to repeat that a lot of times, as sketched below,
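
A minimal sketch of that resampling idea (the number of replications and the percentile interval are just illustrative choices),

# resample the observations, refit the Poisson regression, and look at
# the distribution of predictions at speed = 30
set.seed(1)
B <- 1000
pred30 <- rep(NA, B)
for(b in 1:B){
  idx  <- sample(1:nrow(cars), replace = TRUE)
  regb <- glm(dist ~ speed, data = cars[idx,], family = poisson)
  pred30[b] <- predict(regb, newdata = data.frame(speed = 30), type = "response")
}
quantile(pred30, c(.025, .975))   # percentile bootstrap confidence interval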

Regression on categorical variables

A small complement to Tuesday’s class. We first discussed how to read the output when regressing on categorical variables (factors). Let us start by removing the intercept from the regression

> reg0=glm(nbre~0+zone,offset=log(exposition),data=base, 
+ family=poisson(link="log"))
> summary(reg0)

Call:
glm(formula = nbre ~ 0 + zone, family = poisson(link = "log"), 
    data = base, offset = log(exposition))

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-0.5717  -0.3968  -0.2996  -0.1547  12.6722  

Coefficients:
      Estimate Std. Error z value Pr(>|z|)    
zoneB -2.54187    0.06287  -40.43   <2e-16 ***
zoneA -2.54912    0.05285  -48.23   <2e-16 ***
zoneC -2.38525    0.03753  -63.56   <2e-16 ***
zoneD -2.13454    0.03878  -55.05   <2e-16 ***
zoneE -2.00204    0.03965  -50.49   <2e-16 ***
zoneF -2.06932    0.11547  -17.92   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 50966  on 50000  degrees of freedom
Residual deviance: 15692  on 49994  degrees of freedom
AIC: 20800

Number of Fisher Scoring iterations: 6

> predict(reg0,newdata=data.frame(
+ zone=c("A","B","C","D","E"),exposition=rep(1,5)))
        1         2         3         4         5 
-2.549120 -2.541870 -2.385253 -2.134543 -2.002044

We see that all the levels are present, and all are significant. If we include the intercept in the regression, one level has to be removed to make the model identifiable. We can force the reference level to be the second one,

> base$zone=relevel(base$zone,"B")
> regB=glm(nbre~zone,offset=log(exposition),data=base,
+ family=poisson(link="log"))
> summary(regB)

Call:
glm(formula = nbre ~ zone, family = poisson(link = "log"), 
data = base,
offset = log(exposition))

Deviance Residuals:
Min       1Q   Median       3Q      Max
-0.5717  -0.3968  -0.2996  -0.1547  12.6722

Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.54187    0.06287 -40.431  < 2e-16 ***
zoneA       -0.00725    0.08213  -0.088 0.929661
zoneC        0.15662    0.07322   2.139 0.032432 *
zoneD        0.40733    0.07387   5.514 3.50e-08 ***
zoneE        0.53983    0.07433   7.263 3.80e-13 ***
zoneF        0.47255    0.13148   3.594 0.000325 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for poisson family taken to be 1)

Null deviance: 15809  on 49999  degrees of freedom
Residual deviance: 15692  on 49994  degrees of freedom
AIC: 20800

Number of Fisher Scoring iterations: 6

> predict(regB,newdata=data.frame(
+ zone=c("A","B","C","D","E"),exposition=rep(1,5)))
1         2         3         4         5
-2.549120 -2.541870 -2.385253 -2.134543 -2.002044

Note that the predictions do not change. We can also choose the first level as the reference,

> base$zone=relevel(base$zone,"A")
> reg=glm(nbre~zone,offset=log(exposition),
+ data=base,family=poisson(link="log"))
> summary(reg)

Call:
glm(formula = nbre ~ zone, family = poisson(link = "log"), 
data = base,
offset = log(exposition))

Deviance Residuals:
Min       1Q   Median       3Q      Max
-0.5717  -0.3968  -0.2996  -0.1547  12.6722

Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.54912    0.05285 -48.232  < 2e-16 ***
zoneB        0.00725    0.08213   0.088 0.929661
zoneC        0.16387    0.06482   2.528 0.011471 *
zoneD        0.41458    0.06555   6.324 2.54e-10 ***
zoneE        0.54708    0.06607   8.280  < 2e-16 ***
zoneF        0.47980    0.12699   3.778 0.000158 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for poisson family taken to be 1)

Null deviance: 15809  on 49999  degrees of freedom
Residual deviance: 15692  on 49994  degrees of freedom
AIC: 20800

Number of Fisher Scoring iterations: 6

The fact that the second level is not significant should be read relative to the reference level (here, the first one): not significant then means not significantly different. In other words, we can merge those levels into a single one.

> base$zonesimple=base$zone
> base$zonesimple[base$zone%in%c("A","B")]="A"
> reg=glm(nbre~zonesimple,offset=log(exposition),
+ data=base,family=poisson(link="log"))
> summary(reg)

Call:
glm(formula = nbre ~ zonesimple, family = poisson(link = "log"),
data = base, offset = log(exposition))

Deviance Residuals:
Min       1Q   Median       3Q      Max
-0.5717  -0.3959  -0.2989  -0.1547  12.6722

Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.54612    0.04046 -62.937  < 2e-16 ***
zonesimpleC  0.16087    0.05518   2.915  0.00355 **
zonesimpleD  0.41158    0.05604   7.345 2.06e-13 ***
zonesimpleE  0.54408    0.05665   9.605  < 2e-16 ***
zonesimpleF  0.47681    0.12235   3.897 9.74e-05 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for poisson family taken to be 1)

Null deviance: 15809  on 49999  degrees of freedom
Residual deviance: 15692  on 49995  degrees of freedom
AIC: 20798

Number of Fisher Scoring iterations: 6

Note that with this grouping, the other levels are all clearly different. We can also take the third level as the reference

> base$zonesimple=relevel(base$zonesimple,"C")
> reg=glm(nbre~zonesimple,offset=log(exposition),
+ data=base,family=poisson(link="log"))
> summary(reg)

Call:
glm(formula = nbre ~ zonesimple, family = poisson(link = "log"),
data = base, offset = log(exposition))

Deviance Residuals:
Min       1Q   Median       3Q      Max
-0.5717  -0.3959  -0.2989  -0.1547  12.6722

Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.38525    0.03753 -63.557  < 2e-16 ***
zonesimpleA -0.16087    0.05518  -2.915  0.00355 **
zonesimpleD  0.25071    0.05396   4.646 3.39e-06 ***
zonesimpleE  0.38321    0.05460   7.019 2.24e-12 ***
zonesimpleF  0.31593    0.12142   2.602  0.00927 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for poisson family taken to be 1)

Null deviance: 15809  on 49999  degrees of freedom
Residual deviance: 15692  on 49995  degrees of freedom
AIC: 20798

Number of Fisher Scoring iterations: 6

Since all the levels seem significant, we can try taking one of the last ones as the reference level (their estimated coefficients give very similar results)

> base$zonesimple=relevel(base$zonesimple,"F")
> reg=glm(nbre~zonesimple,offset=log(exposition),
+ data=base,family=poisson(link="log"))
> summary(reg)

Call:
glm(formula = nbre ~ zonesimple, family = poisson(link = "log"),
data = base, offset = log(exposition))

Deviance Residuals:
Min       1Q   Median       3Q      Max
-0.5717  -0.3959  -0.2989  -0.1547  12.6722

Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.06932    0.11547 -17.921  < 2e-16 ***
zonesimpleC -0.31593    0.12142  -2.602  0.00927 **
zonesimpleA -0.47681    0.12235  -3.897 9.74e-05 ***
zonesimpleD -0.06522    0.12181  -0.535  0.59232
zonesimpleE  0.06727    0.12209   0.551  0.58161
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for poisson family taken to be 1)

Null deviance: 15809  on 49999  degrees of freedom
Residual deviance: 15692  on 49995  degrees of freedom
AIC: 20798

Number of Fisher Scoring iterations: 6

In view of this last output, we can try to merge all of these last classes together,

> base$zonesimple[base$zone%in%c("D","E","F")]="F"
> reg=glm(nbre~zonesimple,offset=log(exposition),
+ data=base,family=poisson(link="log"))
> summary(reg)

Call:
glm(formula = nbre ~ zonesimple, family = poisson(link = "log"),
data = base, offset = log(exposition))

Deviance Residuals:
Min       1Q   Median       3Q      Max
-0.5660  -0.3959  -0.3004  -0.1547  12.5929

Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.07182    0.02696 -76.853  < 2e-16 ***
zonesimpleC -0.31344    0.04621  -6.783 1.18e-11 ***
zonesimpleA -0.47431    0.04861  -9.757  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for poisson family taken to be 1)

Null deviance: 15809  on 49999  degrees of freedom
Residual deviance: 15698  on 49997  degrees of freedom
AIC: 20800

Number of Fisher Scoring iterations: 6

Now, formally, merging two levels (i.e. declaring that two variables are simultaneously non-significant) requires a bit more than a Student t-test, or two t-tests... Going back a step, we could have run a joint test before merging the three levels (a Fisher-type test, or an ANOVA),

> base$zonesimple=relevel(base$zonesimple,"F")
> reg=glm(nbre~zonesimple,offset=log(exposition),
+ data=base,family=poisson(link="log"))
> library(car)
> linearHypothesis(reg,c("zonesimpleD=0","zonesimpleE=0"))
Linear hypothesis test

Hypothesis:
zonesimpleD = 0
zonesimpleE = 0

Model 1: restricted model
Model 2: nbre ~ zonesimple

Res.Df Df  Chisq Pr(>Chisq)
1  49997
2  49995  2 5.7073    0.05763 .
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Clearly, we can accept the hypothesis that these three categories are in fact a single one. The geographic zone can then be split into three large zones, rather than six. Note that this matches what a regression tree suggests,

> library(tree)
> arbre=tree(nbre~zone+offset(log(exposition)),
+ data=base,split="gini")
> plot(arbre)
> text(arbre)
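The grouping above was justified with a Wald-type test; since a Fisher-type test (or ANOVA) was mentioned, the same hypothesis can also be checked with an analysis of deviance, comparing the merged three-zone model with the full six-zone one. A minimal sketch, assuming the same base data frame as above (the names reg6, zone3 and reg3z are mine):

> # full model, with the original six zones
> reg6=glm(nbre~zone,offset=log(exposition),
+ data=base,family=poisson(link="log"))
> # merged model: A and B together, D, E and F together
> base$zone3=as.character(base$zone)
> base$zone3[base$zone%in%c("A","B")]="AB"
> base$zone3[base$zone%in%c("D","E","F")]="DEF"
> reg3z=glm(nbre~zone3,offset=log(exposition),
+ data=base,family=poisson(link="log"))
> # likelihood-ratio (deviance) test of the grouping
> anova(reg3z,reg6,test="Chisq")

A large p-value here would again support working with three zones only.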

ACT2040, an introduction to generalized linear models

This Tuesday we will start on GLMs, after having introduced exponential-family distributions (which should have been reviewed in last Friday's tutorial). The notation will be that the distribution (density or probability mass function) of http://freakonometrics.blog.free.fr/public/perso4/Yi-ltx.gif is of the form

http://freakonometrics.blog.free.fr/public/perso4/loi-exponentielle.gif
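In case the image above does not render, the standard exponential-family form I have in mind (up to differences in notation with the original formula) is

https://latex.codecogs.com/gif.latex?f(y_i;\theta_i,\varphi)=\exp\left(\frac{y_i\theta_i-b(\theta_i)}{a(\varphi)}+c(y_i,\varphi)\right)

with mean b'(\theta_i) and variance b''(\theta_i)a(\varphi); the Poisson case used throughout these posts corresponds to \theta_i=\log\lambda_i, b(\theta)=e^{\theta} and a(\varphi)=1.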

For a more thorough treatment, I refer to the online chapter.

  • The (Gaussian) linear model

The basic model is the Gaussian one we reviewed in the last class,

> X=c(1,2,3,4)
> Y=c(1,2,5,6)
> base=data.frame(X,Y)
> reg1=lm(Y~1+X,data=base)
> nbase=data.frame(X=seq(0,5,by=.1))
> Y1=predict(reg1,newdata=nbase)

For a (single) prediction, we obtain the following picture.

The code for such a representation is the following

> plot(X,Y,pch=3,cex=1.5,lwd=2,xlab="",ylab="")
> lines(nbase$X,Y1,col="red",lwd=2)
> u=2
> mu=predict(reg1)[2]
> sigma=summary(reg1)$sigma
> y=seq(0,7,.05)
> loi=dnorm(y,mu,sigma)
> segments(u,y,loi+u,y,col="light green")
> lines(loi+u,y)
> abline(v=u,lty=2)
> points(X[2],Y[2],pch=3,cex=1.5,lwd=2)
> points(X[2],predict(reg1)[2],pch=19,col="red")
> arrows(u-.2,qnorm(.05,mu,sigma),
+ u-.2,qnorm(.95,mu,sigma),length=0.1,code=3,col="blue")

We can produce several such predictions, relying on the homoscedasticity assumption (the variance is then constant),
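A minimal sketch of how such a picture can be drawn, reusing reg1, nbase, Y1 and sigma defined above (the same sigma is used at every point, which is exactly the homoscedasticity assumption; the figure in the original post may have been produced differently):

> plot(X,Y,pch=3,cex=1.5,lwd=2,xlab="",ylab="")
> lines(nbase$X,Y1,col="red",lwd=2)
> y=seq(0,7,.05)
> for(u in 1:4){
+ mu=predict(reg1)[u]
+ loi=dnorm(y,mu,sigma)      # same sigma at every point
+ lines(loi+X[u],y)
+ abline(v=X[u],lty=2)
+ }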

But we can go further,

  • The generalized linear model

Several models can be estimated, by changing the distribution of the response variable and the link function,

> reg2=glm(Y~1+X,data=base,family=poisson(link="identity"))
> Y2=predict(reg2,newdata=nbase,type="response")
> reg3=glm(Y~1+X,data=base,family=poisson(link="log"))
> Y3=predict(reg3,newdata=nbase,type="response")
> reg4=glm(Y~1+X,data=base,family=gaussian(link="log"))
> Y4=predict(reg4,newdata=nbase,type="response")
> sigma=sqrt(summary(reg4)$dispersion)

For the Poisson model with an identity link, we obtain

The variance of the response then increases with the prediction,
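A sketch of how this can be visualized, reusing reg2 and Y2 from above: at each observation the conditional distribution is Poisson with mean equal to the prediction, so its variance (equal to the mean) grows along the regression line (again, the figure in the original post may have been drawn differently):

> plot(X,Y,pch=3,cex=1.5,lwd=2,xlab="",ylab="",ylim=c(0,8))
> lines(nbase$X,Y2,col="red",lwd=2)
> k=0:8
> for(u in 1:4){
+ mu=predict(reg2,type="response")[u]
+ loi=dpois(k,mu)        # Poisson mass: variance equals the mean
+ segments(X[u],k,X[u]+loi,k,col="lightgreen")
+ lines(X[u]+loi,k)
+ }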

For a Poisson regression with a log link,

i.e. for our four predictions,

We can compare this with the prediction from a Gaussian model with a log link,

i.e. for the four predictions,
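To make the comparison concrete, here is a small sketch printing the four fitted values of each model side by side (reusing reg1 to reg4 from above; the column labels are mine):

> pred=data.frame(X=1:4)
> round(cbind(X=pred$X,
+ lm=predict(reg1,newdata=pred),
+ pois.id=predict(reg2,newdata=pred,type="response"),
+ pois.log=predict(reg3,newdata=pred,type="response"),
+ gauss.log=predict(reg4,newdata=pred,type="response")),3)

The two log-link models share the same mean structure exp(beta0+beta1*X) but are fitted under different variance assumptions, so their estimates (and thus their predictions) need not coincide.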

Who can help me understand these SAS outputs?

I had promised myself that I would write about an oddity I ran into with SAS during a training session... Writing this post will allow anyone with an explanation to leave a comment.
To do so, let us compare a logistic regression fitted with two different tools, within SAS,

  • with the LOGISTIC procedure

The code to run a logistic regression looks like this

PROC LOGISTIC DATA=base_logistq;
FORMAT age_soc f2_ageso.;
CLASS sexe_soc age_soc fract_paiemt;
MODEL SPOCAM = sexe_soc age_soc fract_paiemt / selection=stepwise;
RUN; QUIT;

which gives the following output (I skip the preamble to focus on the coefficients)

                                 The LOGISTIC Procedure

                   Analyse des estimations de la vraisemblance maximum
                                                     Erreur         Khi 2
  Paramètre                    DF    Estimation         std       de Wald    Pr > Khi 2

  Intercept                     1        1.7833      0.0676      696.9022        <.0001
  sexe_soc     Femme            1       -0.2429      0.0619       15.4237        <.0001
  age_soc      1_AGESOC_-60     1        0.4578      0.0667       47.1020        <.0001
  fract_paiemt Annuel           1        0.6021      0.0997       36.4862        <.0001
  fract_paiemt Mensuel          1       -0.5410      0.0842       41.2342        <.0001

  • with the GENMOD procedure (since a logistic regression is a GLM)

We can (in theory) do exactly the same thing by fitting a GLM,

PROC GENMOD DATA=base_logistq;
FORMAT age_soc f2_ageso.;
CLASS sexe_soc age_soc fract_paiemt;
MODEL SPOCAM = sexe_soc age_soc fract_paiemt / dist = binomial;
RUN;

and the output looks like this

                                  The GENMOD Procedure
                      Analyse des résultats estimés de paramètres

                                                  Erreur    Wald 95% Limites
Paramètre                     DF   Estimation   standard      de confiance         Khi 2
Intercept                      1       1.5073     0.1501     1.2131     1.8014    100.85
sexe_soc       Femme           1      -0.4859     0.1237    -0.7284    -0.2434     15.42
sexe_soc       Homme           0       0.0000     0.0000     0.0000     0.0000       .
age_soc        1_AGESOC_-60    1       0.9156     0.1334     0.6542     1.1771     47.10
age_soc        Z_AGESOC_+60    0       0.0000     0.0000     0.0000     0.0000       .
fract_paiemt   Annuel          1       0.6634     0.1770     0.3165     1.0104     14.05
fract_paiemt   Mensuel         1      -0.4798     0.1510    -0.7759    -0.1838     10.09
fract_paiemt   Semestriel      0       0.0000     0.0000     0.0000     0.0000       .
Scale                          0       1.0000     0.0000     1.0000     1.0000
  • comparing the two outputs

If we look at the effect of gender, for instance, in the first output we can read

sexe_soc     Femme            1       -0.2429      0.0619       15.4237        <.0001

whereas in the second output, we have

sexe_soc       Femme           1      -0.4859     0.1237    -0.7284    -0.2434     15.42
sexe_soc       Homme           0       0.0000     0.0000     0.0000     0.0000

Say what you will, but I find this difference puzzling... In the second output, the coefficient is twice the one in the first...
Still, SAS seems to know what it is doing: if we ask it to display the predicted score for some individual taken at random (the first one in the dataset, say), the two predictions are very close,

                           fract_                                 proba1_       proba1_
  Obs  sexe_soc   age_soc  paiemt      SPOCAM  proba1_logit        
1      Homme          71  Annuel         0      0.10242637    0.10241302

If anyone knows how to interpret what this LOGISTIC procedure is doing (since R gives the same results as the GLM output), I am all ears...
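For what it is worth, here is one way (in R, on simulated data, so not a claim about what PROC LOGISTIC actually does) that such a factor-of-two difference can appear for a two-level factor, simply by changing the contrast coding; all the names and data below are made up:

> set.seed(1)
> n=1000
> sexe=factor(sample(c("Femme","Homme"),n,replace=TRUE))
> eta=1+ifelse(sexe=="Femme",-.5,0)
> y=rbinom(n,1,exp(eta)/(1+exp(eta)))
> # reference-level (treatment) coding, the default in R
> reg.trt=glm(y~sexe,family=binomial)
> # sum-to-zero ("effect") coding
> reg.sum=glm(y~sexe,family=binomial,contrasts=list(sexe="contr.sum"))
> coef(reg.trt)
> coef(reg.sum)     # half the magnitude, attached to the other level
> max(abs(fitted(reg.trt)-fitted(reg.sum)))     # fitted probabilities identical

The two fits are the same model under different parameterizations, which is at least consistent with the predicted probabilities above being (almost) identical while the reported coefficients differ.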


Modeling analogies in life and nonlife insurance

On Wednesday afternoon I will be giving a talk at the SCOR Reserving Seminar. The talk will be on modeling analogies in life and nonlife insurance. We will start by discussing data analogies, based on the Lexis diagram in life insurance and in nonlife (when modeling claims dynamics),

This will induce similarities between the datasets used in life models and those used in nonlife reserving,

Further, in both cases, log-Poisson models are usually used, either to model the number of deaths or the amounts paid. The main difference is that in nonlife insurance, forecasting future payments is rather simple,

But in life models, unfortunately, we need to forecast the behavior of year-based parameters.

Note that this is also the case in nonlife insurance when an inflation factor is introduced.
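To illustrate the nonlife side of that claim, here is a minimal sketch of the usual log-Poisson claims-reserving regression, on a purely made-up incremental run-off triangle, with one factor per origin year and one per development year; forecasting the lower triangle is then a single call to predict:

> # made-up incremental triangle (origin years in rows, development years in columns)
> inc=matrix(c(100,60,30,10,
+             110,70,35,NA,
+             120,75,NA,NA,
+             130,NA,NA,NA),4,4,byrow=TRUE)
> long=data.frame(paid=as.vector(t(inc)),
+                 origin=factor(rep(1:4,each=4)),
+                 dev=factor(rep(1:4,4)))
> reg=glm(paid~origin+dev,data=long,family=poisson(link="log"))
> long$fit=predict(reg,newdata=long,type="response")
> sum(long$fit[is.na(long$paid)])     # estimated reserve (future payments)

In the life-insurance analogue, some of those year-based parameters would still have to be projected forward, which is precisely the difficult part mentioned above.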
To go further, the slides are available here.