Category Archives: Graphics

Overdispersion with different exposures

In actuarial science, and insurance ratemaking, taking into account the exposure can be a nightmare (in datasets, some clients have been in the portfolio for a few years – we call that exposure – while others have been there for only a few months, or weeks). Somehow, simple results become more complicated to compute just because we have to take into account the fact that exposure is a heterogeneous variable.

The exposure in insurance ratemaking can be seen as a problem of censored data (in my dataset, the exposure is always smaller than 1 since observations are contracts, not policyholders),

  • the number of claims https://latex.codecogs.com/gif.latex?N_i on the period https://latex.codecogs.com/gif.latex?[0,1] is unobserved
  • the number of claims https://latex.codecogs.com/gif.latex?Y_i on https://latex.codecogs.com/gif.latex?[0,E_i] is observed (as well as https://latex.codecogs.com/gif.latex?E_i)

And as always, the variable of interest is the unobserved one, because we have to price insurance contracts with a coverage period of one (full) year. So we have to model the yearly frequency of insurance claims.

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-09.30.00.png

In our dataset, we have https://latex.codecogs.com/gif.latex?(Y_i,E_i)‘s – or more generally also some additional covariates https://latex.codecogs.com/gif.latex?(Y_i,E_i,\boldsymbol{X}_i)‘s. For ratemaking, we need to estimate https://latex.codecogs.com/gif.latex?\mathbb{E}(N\vert\boldsymbol{X}=\boldsymbol{x}) and perhaps also https://latex.codecogs.com/gif.latex?\text{Var}(N|\boldsymbol{X}=\boldsymbol{x}) (for instance to test if the Poisson assumption is valid, or not). To estimate the expected value, a natural estimate for https://latex.codecogs.com/gif.latex?\mathbb{E}(N) (forget about covariates as a start) is
https://latex.codecogs.com/gif.latex?m_N=\frac{\sum_{i=1}^n%20Y_i}{\sum_{i=1}^n%20E_i}
which is also the weighted average of annualized individual counts
https://latex.codecogs.com/gif.latex?m_N=\sum_{i=1}^n%20\frac{%20E_i}{\sum_{i=1}^n%20E_i}%20\cdot%20\frac{Y_i}{E_i}
We consider the ratio of the total number of claims to the total exposure-to-risk. This estimate appears for instance if we consider a Poisson process, so that https://latex.codecogs.com/gif.latex?N\sim\mathcal{P}(\lambda) while https://latex.codecogs.com/gif.latex?Y\sim\mathcal{P}(\lambda%20\cdot%20E). Then, the likelihood is

https://latex.codecogs.com/gif.latex?\mathcal{L}(\lambda,\boldsymbol{Y},\boldsymbol{E})=\prod_{i=1}^n%20\frac{e^{-\lambda%20E_i}%20[\lambda%20E_i]^{Y_i}}{Y_i!}

i.e.

https://latex.codecogs.com/gif.latex?\log%20\mathcal{L}(\lambda,\boldsymbol{Y},\boldsymbol{E})%20=%20-\lambda%20\sum_{i=1}^n%20E_i%20+\sum_{i=1}^n%20Y_i%20\log[\lambda%20E_i]%20-%20\log\left(\prod_{i=1}^n%20Y_i!\right)

The first order condition is here

https://latex.codecogs.com/gif.latex?\frac{\partial}{\partial%20\lambda}\log%20\mathcal{L}(\lambda,\boldsymbol{Y},\boldsymbol{E})%20=%20%20-%20\sum_{i=1}^n%20E_i%20+\frac{1}{\lambda}\sum_{i=1}^n%20Y_i%20=0

which is satisfied if

https://latex.codecogs.com/gif.latex?\widehat{\lambda}=\frac{\sum_{i=1}^n%20Y_i}{\sum_{i=1}^n%20E_i}

So, we do have an estimator for the expected value, and a natural estimator for https://latex.codecogs.com/gif.latex?\mathbb{E}(N\vert\boldsymbol{X}=\boldsymbol{x}) is then (if we consider categorical covariates)
https://latex.codecogs.com/gif.latex?m_{N|\boldsymbol{x}}%20=\frac{\sum_{i,\boldsymbol{X}_i=\boldsymbol{x}}%20Y_i}{\sum_%20{i,\boldsymbol{X}_i=\boldsymbol{x}}%20E_i}
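
As a sanity check, here is a minimal sketch (with simulated claim counts and exposures, which are my own toy data, not the dataset below) showing that this closed-form estimator coincides with the intercept-only Poisson regression using the log-exposure as an offset,

> set.seed(1)
> Etoy=runif(1000)                  # hypothetical exposures in (0,1)
> Ytoy=rpois(1000,lambda=.07*Etoy)  # hypothetical claim counts
> sum(Ytoy)/sum(Etoy)                                     # closed-form estimator
> exp(coef(glm(Ytoy~1+offset(log(Etoy)),family=poisson))) # identical value, via a GLM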

Now, we need an estimate for the variance, or more precisely the conditional variance. Assume (as a starting point) that all policyholders have the same exposure https://latex.codecogs.com/gif.latex?E. For instance, if https://latex.codecogs.com/gif.latex?E is one half, insured were observed only during the first six months. Then https://latex.codecogs.com/gif.latex?N=Y+Y%27 with https://latex.codecogs.com/gif.latex?Y\overset{\mathcal%20L}{=}Y%27 (https://latex.codecogs.com/gif.latex?Y is the number of claims over the first six months, while https://latex.codecogs.com/gif.latex?Y%27 is the number of claims over the last six months), i.e. https://latex.codecogs.com/gif.latex?\text{Var}(N)=\text{Var}(Y)+%20\text{Var}(Y%27) if we assume independent increments. I.e.
https://latex.codecogs.com/gif.latex?\text{Var}(N)=2\text{Var}(Y), or conversely https://latex.codecogs.com/gif.latex?E%20\cdot\text{Var}(N)=\text{Var}(Y). More generally, it is reasonable to assume that

https://latex.codecogs.com/gif.latex?\text{Var}(Y)=E\cdot%20\text{Var}(N)
for all values of https://latex.codecogs.com/gif.latex?E. And then
https://latex.codecogs.com/gif.latex?\text{Var}\left(\frac{Y}{E}\right)=\frac{1}{E}\cdot%20\text{Var}(N)
Thus, it seems legitimate to assume that the empirical variance of https://latex.codecogs.com/gif.latex?N can be written
https://latex.codecogs.com/gif.latex?S_N^2=E\cdot%20S_{Y/E}^2
Since the average of https://latex.codecogs.com/gif.latex?Y_i/E is https://latex.codecogs.com/gif.latex?\overline{N}=m_N, then
https://latex.codecogs.com/gif.latex?S_N^2=E\cdot%20\frac{1}{n}\sum_{i=1}^n%20\left[\frac{Y_i}{E}-\overline{N}\right]^2%20=%20\frac{1}{n}\sum_{i=1}^n%20E\left[\frac{Y_i}{E}-\overline{N}\right]^2
or equivalently
https://latex.codecogs.com/gif.latex?S_N^2=\frac{1}{n}\sum_{i=1}^n%20\frac{E}{E^2}\left[Y_i-\overline{N}\cdot%20E\right]^2%20=\frac{1}{n}\sum_{i=1}^n%20\frac{1}{E}[Y_i-\overline{N}\cdot%20E]^2
i.e.
https://latex.codecogs.com/gif.latex?S_N^2=\frac{\sum_{i=1}^n%20[Y_i-\overline{N}\cdot%20E]^2%20}{nE}
Thus, with different https://latex.codecogs.com/gif.latex?E_i‘s, it would be legitimate (I guess) to consider
https://latex.codecogs.com/gif.latex?S_N^2=\frac{\sum_{i=1}^n%20[Y_i-\overline{N}\cdot%20E_i]^2%20}{\sum_{i=1}^n%20E_i}
Thus, an estimator for https://latex.codecogs.com/gif.latex?\text{Var}(N|\boldsymbol{X}=\boldsymbol{x}) is
https://latex.codecogs.com/gif.latex?S_{N|\boldsymbol{x}}^2=\frac{\sum_{i,\boldsymbol{X}_i=\boldsymbol{x}}%20[Y_i-\overline{N}\cdot%20E_i]^2}{\sum_{i,\boldsymbol{X}_i=\boldsymbol{x}%20}%20E_i}

This can be used to test if the Poisson assumption is valid to model the claim frequency. Consider the following dataset,

>  sinistre=read.table("http://freakonometrics.free.fr/sinistreACT2040.txt",
+  header=TRUE,sep=";")
>  sinistres=sinistre[sinistre$garantie=="1RC",]
>  sinistres=sinistres[sinistres$cout>0,]
>  contrat=read.table("http://freakonometrics.free.fr/contractACT2040.txt",
+  header=TRUE,sep=";")
>  T=table(sinistres$nocontrat)
>  T1=as.numeric(names(T))
>  T2=as.numeric(T)
>  nombre1 = data.frame(nocontrat=T1,nbre=T2)
>  I = contrat$nocontrat%in%T1
>  T1= contrat$nocontrat[I==FALSE]
>  nombre2 = data.frame(nocontrat=T1,nbre=0)
>  nombre=rbind(nombre1,nombre2)
>  baseFREQ = merge(contrat,nombre)

Here, we do have our two variables of interest, the exposure, per contract,

>  E <- baseFREQ$exposition

and the (observed) number of claims (during that time frame)

>  Y <- baseFREQ$nbre

It is possible to compute, without covariates, the average (yearly) number of claims per contract, and the associated variance,

> (mean=weighted.mean(Y/E,E))
[1] 0.07279295
> (variance=sum((Y-mean*E)^2)/sum(E)) 
[1] 0.08778567
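
As a quick (informal) check, one could already run an overdispersion test on an intercept-only Poisson regression with the log-exposure as an offset — a sketch using the AER package, not the formal approach we will use later,

> library(AER)
> dispersiontest(glm(Y~1+offset(log(E)),family=poisson))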

It looks like the variance is (slightly) larger than the average (we’ll see in a few weeks how to test it, more formally). It is possible to add covariates, for instance the density of population, in the area where the policyholder lives,

>  X=as.factor(baseFREQ$densite)
>  meani=variancei=Etoti=rep(NA,length(levels(X)))
>  for(i in 1:length(levels(X))){
+ 	   Ei=E[X==levels(X)[i]]
+ 	   Yi=Y[X==levels(X)[i]]
+ 	   meani[i]=weighted.mean(Yi/Ei,Ei)              # average
+ 	   variancei[i]=sum((Yi-meani[i]*Ei)^2)/sum(Ei)  # variance
+ 	   Etoti[i]=sum(Ei)                              # total exposure (circle size below)
+ cat("Density, zone",levels(X)[i],"average =",meani[i]," variance =",variancei[i],"\n")
+ }
Density, zone 11 average = 0.07962411  variance = 0.08711477 
Density, zone 21 average = 0.05294927  variance = 0.07378567 
Density, zone 22 average = 0.09330982  variance = 0.09582698 
Density, zone 23 average = 0.06918033  variance = 0.07641805 
Density, zone 24 average = 0.06004009  variance = 0.06293811 
Density, zone 25 average = 0.06577788  variance = 0.06726093 
Density, zone 26 average = 0.0688496   variance = 0.07126078 
Density, zone 31 average = 0.07725273  variance = 0.09067 
Density, zone 41 average = 0.03649222  variance = 0.03914317 
Density, zone 42 average = 0.08333333  variance = 0.1004027 
Density, zone 43 average = 0.07304602  variance = 0.07209618 
Density, zone 52 average = 0.06893741  variance = 0.07178091 
Density, zone 53 average = 0.07725661  variance = 0.07811935 
Density, zone 54 average = 0.07816105  variance = 0.08947993 
Density, zone 72 average = 0.08579731  variance = 0.09693305 
Density, zone 73 average = 0.04943033  variance = 0.04835521 
Density, zone 74 average = 0.1188611   variance = 0.1221675 
Density, zone 82 average = 0.09345635  variance = 0.09917425 
Density, zone 83 average = 0.04299708  variance = 0.05259835 
Density, zone 91 average = 0.07468126  variance = 0.3045718 
Density, zone 93 average = 0.08197912  variance = 0.09350102 
Density, zone 94 average = 0.03140971  variance = 0.04672329

Perhaps graphs would be a nice tool to play with, to visualize that information

> plot(meani,variancei,cex=3*sqrt(Etoti/max(Etoti)),col="grey",pch=19,
+ xlab="Empirical average",ylab="Empirical variance")
> points(meani,variancei,cex=3*sqrt(Etoti/max(Etoti)))
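
To make the Poisson benchmark explicit, the first diagonal (variance equal to mean) can be added to the plot,

> abline(a=0,b=1,lty=2)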

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-10.51.26.png

The size of the circles is related to the size of the group (the area is proportional to the total exposure within the group). The first diagonal corresponds to the Poisson model, i.e. the variance should be equal to the mean. It is also possible to consider other covariates, like the gas type

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-10.52.02.png

or the car brand,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-10.50.49.png

It is also possible to consider the age of the driver as a categorical variate

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-10.51.40.png

Actually, the age is interesting: we can observe on that dataset a feature that Jean-Philippe Boucher also observed on his own datasets. Let us look more carefully at where the different ages are,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-10.55.17.png

On the right, we can observe young (inexperienced) drivers. That was expected. But some classes are below the first diagonal: the expected frequency is large, but not the variance. I.e. we know for sure that young drivers have more car accidents. It is not a heterogeneous class; on the contrary, young drivers can be seen as a relatively homogeneous class, with a high frequency of car accidents.

With the original dataset (here, I use only a subset with 50,000 clients), we do obtain the following graph:

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-11.27.04.png

Even if we do not observe underdispersion for young drivers, note that those are incredibly homogeneous classes, with a clear impact of experience, since circles move downward from age 18 to 25.

Another disturbing story (this was – one more time – a suggestion from Jean-Philippe): it might be possible to consider the exposure as a standard explanatory variable, and see if its coefficient is actually equal to 1. Without any covariate,

>  reg=glm(Y~log(E),family=poisson("log"))
>  summary(reg)

Call:
glm(formula = Y ~ log(E), family = poisson("log"))

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-0.3988  -0.3388  -0.2786  -0.1981  12.9036  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -2.83045    0.02822 -100.31   <2e-16 ***
log(E)       0.53950    0.02905   18.57   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 12931  on 49999  degrees of freedom
Residual deviance: 12475  on 49998  degrees of freedom
AIC: 16150

Number of Fisher Scoring iterations: 6

i.e. the parameter is clearly strictly smaller than 1. And it is neither related to significance,

> library(car)
> linearHypothesis(reg,"log(E)",1)
Linear hypothesis test

Hypothesis:
log(E) = 1

Model 1: restricted model
Model 2: Y ~ log(E)

  Res.Df Df  Chisq Pr(>Chisq)    
1  49999                         
2  49998  1 251.19  < 2.2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

nor to the fact that I did not take into account covariates,

> reg=glm(nbre~log(exposition)+carburant+as.factor(ageconducteur)+as.factor(densite),family=poisson("log"),data=baseFREQ)
>  summary(reg)

Call:
glm(formula = nbre ~ log(exposition) + carburant + as.factor(ageconducteur) + 
    as.factor(densite), family = poisson("log"), data = baseFREQ)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-0.7114  -0.3200  -0.2637  -0.1896  12.7104  

Coefficients:
                              Estimate Std. Error z value Pr(>|z|)    
(Intercept)                  -14.07321  181.04892  -0.078 0.938042    
log(exposition)                0.56781    0.03029  18.744  < 2e-16 ***
carburantE                    -0.17979    0.04630  -3.883 0.000103 ***
as.factor(ageconducteur)19    12.18354  181.04915   0.067 0.946348    
as.factor(ageconducteur)20    12.48752  181.04902   0.069 0.945011

(etc.) So it might be too strong an assumption to consider the exposure as an exogenous variable here. But that's another story!
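
As a final side note, a sketch (using the same Y and E as above) of a direct comparison between the constrained specification (offset, i.e. coefficient of log(E) equal to 1) and the unconstrained one, via a likelihood ratio test,

> reg0=glm(Y~1+offset(log(E)),family=poisson("log"))
> reg1=glm(Y~log(E),family=poisson("log"))
> anova(reg0,reg1,test="Chisq")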

Fractals and Kronecker product

A few years ago, I went to listen to Roger Nelsen who was giving a talk about copulas with fractal support. Roger is amazing when he gives a talk (I am also a huge fan of his books, and articles), and I really wanted to play with that concept (that he did publish later on, with Gregory Fredricks and José Antonio Rodriguez-Lallena). I did mention that idea in a paper, written with Alessandro Juri, just to mention some cases where deriving fixed point theorems is not that simple (since the limit may not exist).

The idea in the initial article was to start with something quite simple, a so-called transformation matrix, e.g.

https://latex.codecogs.com/gif.latex?T=\frac{1}{8}\left(\begin{matrix}1&%200%20&%201%20\\%200%20&%204%20&%200%20\\%201%20&%200&1\end{matrix}\right)
Here, in all areas with mass, we spread it uniformly (say), i.e. the support of https://latex.codecogs.com/gif.latex?T(C^\perp) is the one below, i.e. https://latex.codecogs.com/gif.latex?1/8th of the mass is located in each corner, and https://latex.codecogs.com/gif.latex?1/2 is in the center. So if we spread the mass to have a copula (with uniform margins), we have to consider squares on the intervals https://latex.codecogs.com/gif.latex?[0,1/4], https://latex.codecogs.com/gif.latex?[1/4,3/4] and https://latex.codecogs.com/gif.latex?[3/4,1],

The idea, then, is to consider https://latex.codecogs.com/gif.latex?T^2=\otimes^2T, where https://latex.codecogs.com/gif.latex?\otimes^2T is the tensor product (also called Kronecker product) of https://latex.codecogs.com/gif.latex?T with itself. Here, the support of https://latex.codecogs.com/gif.latex?T^2(C^\perp) is

Then, consider https://latex.codecogs.com/gif.latex?T^3=\otimes^3T, where https://latex.codecogs.com/gif.latex?\otimes^3T is the tensor product of https://latex.codecogs.com/gif.latex?T with itself, three times. And the support of https://latex.codecogs.com/gif.latex?T^3(C^\perp) is

Etc. Here, it is computationally extremely simple to do, using this Kronecker product. Recall that if https://latex.codecogs.com/gif.latex?%20%20%20%20%20\mathbf{A}=(a_{i,j}), then

https://latex.codecogs.com/gif.latex?%20%20%20%20%20\mathbf{A}\otimes\mathbf{B}%20=%20\begin{pmatrix}%20a_{11}%20\mathbf{B}%20&%20\cdots%20&%20a_{1n}\mathbf{B}%20\\%20\vdots%20&%20\ddots%20&%20\vdots%20\\%20a_{m1}%20\mathbf{B}%20&%20\cdots%20&%20a_{mn}%20\mathbf{B}%20\end{pmatrix}
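
As a quick check (a minimal sketch, using the matrix T defined above), the Kronecker product of a matrix whose entries sum to one still sums to one, so each iterate can indeed be read as a (discrete) probability mass,

> T=matrix(c(1,0,1, 0,4,0, 1,0,1)/8,3,3)
> sum(T)                # equals 1
> sum(kronecker(T,T))   # still equals 1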

So, we need a transformation matrix: consider the following https://latex.codecogs.com/gif.latex?4\times4 matrix,

> k=4
> M=matrix(c(1,0,0,1,
+            0,1,1,0,
+            0,1,1,0,
+            1,0,0,1),k,k)
> M
[,1] [,2] [,3] [,4]
[1,]    1    0    0    1
[2,]    0    1    1    0
[3,]    0    1    1    0
[4,]    1    0    0    1

Once we have it, we just consider the Kronecker product of this matrix with itself, which yields a https://latex.codecogs.com/gif.latex?4^2\times4^2 matrix,

> N=kronecker(M,M)
> N[,1:4]
[,1]  [,2] [,3] [,4]
[1,]     1    0    0    1
[2,]     0    1    1    0
[3,]     0    1    1    0
[4,]     1    0    0    1
[5,]     0    0    0    0
[6,]     0    0    0    0
[7,]     0    0    0    0
[8,]     0    0    0    0
[9,]     0    0    0    0
[10,]    0    0    0    0
[11,]    0    0    0    0
[12,]    0    0    0    0
[13,]    1    0    0    1
[14,]    0    1    1    0
[15,]    0    1    1    0
[16,]    1    0    0    1

And then, we continue,

> for(s in 1:3){N=kronecker(N,M)}

After a few iterations of the loop, we have a https://latex.codecogs.com/gif.latex?4^5\times4^5 matrix. And we can plot it simply to visualize the support,

> image(N,col=c("white","blue"))

As we zoom in, we can visualize this fractal property,
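
Since the matrix is self-similar by construction, "zooming in" simply amounts to plotting a sub-block, e.g. the top-left 4^4×4^4 block (a minimal sketch),

> image(N[1:4^4,1:4^4],col=c("white","blue"))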

Finding Waldo, a flag on the moon and multiple choice tests, with R

I have to admit, first, that finding Waldo has been a difficult task. And I did not succeed. Neither could I correctly spot his shirt (which is actually what I was looking for). You know, that red-and-white striped shirt. I guess it should have been possible to look for Waldo’s face (assuming that his face does not change), but I still have problems with the size factor (and resolution issues too). The problem is not that simple. At the http://mlsp2009.conwiz.dk/ conference, a prize was offered for writing an algorithm in Matlab. And one can even find Mathematica code online. But most of those algorithms are based on the idea that we look for similarities with Waldo’s face, as described in problem 3 on http://www1.cs.columbia.edu/~blake/‘s webpage. You can find papers on that problem, e.g. Friendly & Kwan (2009) (based on statistical techniques, but Waldo is here a pretext to discuss other issues actually), or more recently (but more complex) Garg et al. (2011) on matching people in images of crowds.

What about code in R? On http://stackoverflow.com/, some ideas can be found (and thanks to Robert Hijmans for his help with his package). So let us try to do something here, on our own. Consider the following picture,

With the following code (based on the following file) it is possible to import the picture, and to extract the colors (based on an RGB decomposition),

> library(raster)
> waldo=brick(system.file("DepartmentStoreW.grd",
+ package="raster"))
> waldo
class       : RasterBrick
dimensions  : 768, 1024, 786432, 3 (nrow,ncol,ncell,nlayer)
resolution  : 1, 1  (x, y)
extent      : 0, 1024, 0, 768  (xmin, xmax, ymin, ymax)
coord. ref. : NA
values      : C:\R\win-library\raster\DepartmentStoreW.grd
min values  : 0 0 0
max values  : 255 255 255

My strategy is simple: try to spot areas with white and red stripes (horizontal stripes). Note that I ran the code here on a Windows machine; the package was not working well on Mac. In order to get a better understanding of what can be done, let us start with something much simpler, like the picture below, with Waldo (and Waldo only). Here, it is possible to extract the three colors (red, green and blue),

> plot(waldo,useRaster=FALSE)

It is possible to extract the red zones (already on the graph above, since red is a primary color), as well as the white ones (a green zone on the graph means a white region in the picture, on the left)

# white component
white = min(waldo[[1]] , waldo[[2]] , waldo[[3]])>220
focalswhite = focal(white, w=3, fun=mean)
plot(focalswhite,useRaster=FALSE)

# red component
red = (waldo[[1]]>150)&(max(  waldo[[2]] , waldo[[3]])<90)
focalsred = focal(red, w=3, fun=mean)
plot(focalsred,useRaster=FALSE)

i.e. here we have the graphs below, with the white regions, and the red ones,

From those two parts, it has been possible to extract the red-and-white stripes from the picture, i.e. some regions that were red above, and white below (or the reverse),

# striped component
striped = red; n=length(values(striped)); h=5
values(striped)=0
values(striped)[(h+1):(n-h)]=(values(red)[1:(n-2*h)]==
TRUE)&(values(red)[(2*h+1):n]==TRUE)
focalsstriped = focal(striped, w=3, fun=mean)
plot(focalsstriped,useRaster=FALSE)

So here, we can easily spot Waldo, i.e. the guy with the red-white stripes (with two different sets of thresholds for the RGB decomposition)
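
As a hedged sketch (the helper function and its default thresholds are mine), the threshold choice can be wrapped in a small function, which makes it easy to compare different sets of thresholds,

# hypothetical helper: binary "red" layer for given RGB thresholds
redlayer=function(pic,rmin=150,gbmax=90){
(pic[[1]]>rmin)&(max(pic[[2]],pic[[3]])<gbmax)
}
plot(focal(redlayer(waldo),w=3,fun=mean),useRaster=FALSE)
plot(focal(redlayer(waldo,rmin=180,gbmax=60),w=3,fun=mean),useRaster=FALSE)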

Let us try something slightly more complicated, with a zoom on the large picture of the department store (since, to be honest, I know where Waldo is…).

Here again, we can spot the white part (on the left) and the red one (on the right), with some thresholds for the RGB decomposition

Note that we can try to be (much) more selective, playing with the thresholds. Here, it is not very convincing: I cannot clearly identify the region where Waldo might be (the two graphs below were obtained by playing with the thresholds)

And if we look at the overall picture, it is worse. Here are the white zones, and the red ones,

and again, playing with RGB thresholds, I cannot spot Waldo,

Maybe I was a bit optimistic, or ambitious. Let us try something more simple, like finding a flag on the moon. Consider the picture below on the left, and let us see if we can spot an American flag,

Again, on the left, let us identify white areas, and on the right, red ones

Then as before, let us look for horizontal stripes

Wow, I did it! That’s one small step for man, one giant leap for R-coders! Or at least for me… So, why might it be interesting to identify areas on pictures? I mean, I am not Chloe O’Brian, I don’t have to spot flags in a crowd, nor Waldo, nor terrorists (who might wear striped shirts). This might be fun if you want to grade your exams automatically. Consider the two following scans, the template, and a filled copy,

A first step is to identify regions where we expect to find some “red” parts (I assume here that students have to use a red pencil). Let us start by checking, on the template and on the filled form, whether we can identify red areas,

exam = stack("C:\\Users\\exam-blank.png")
red = (exam[[1]]>150)&(max(  exam[[2]] , exam[[3]])<150)
focalsred = focal(red, w=3, fun=mean)
plot(focalsred,useRaster=FALSE) 
exam = stack("C:\\Users\\exam-filled.png")
red = (exam[[1]]>150)&(max(  exam[[2]] , exam[[3]])<150)
focalsred = focal(red, w=3, fun=mean)
plot(focalsred,useRaster=FALSE)

First, we have to identify areas where students have to fill the blanks. So in the template, identify black boxes, and get the coordinates (here manually)

exam = stack("C:\\Users\\exam-blank.png")
black = max(  exam[[1]] ,exam[[2]] , exam[[3]])<50
focalsblack = focal(black, w=3, fun=mean)
plot(focalsblack,useRaster=FALSE)
correct=locator(20)
coordinates=locator(20)
X1=c(73,115,156,199,239)
X2=c(386,428.9,471,510,554)
Y=c(601,536,470,405,341,276,210,145,79,15)
LISTX=c(rep(X1,each=10),rep(X2,each=10))
LISTY=rep(Y,10)
points(LISTX,LISTY,pch=16,col="blue")

The blue points above are where we look for students’ answers. Then, we have to define the vector of correct answers,

CORRECTX=c(X1[c(2,4,1,3,1,1,4,5,2,2)],
X2[c(2,3,4,2,1,1,1,2,5,5)])
CORRECTY=c(Y,Y)
points(CORRECTX, CORRECTY,pch=16,col="red",cex=1.3)
UNCORRECTX=c(X1[rep(1:5,10)[-(c(2,4,1,3,1,1,4,5,2,2)
+seq(0,length=10,by=5))]],
X2[rep(1:5,10)[-(c(2,3,4,2,1,1,1,2,5,5)
+seq(0,length=10,by=5))]])
UNCORRECTY=c(rep(Y,each=4),rep(Y,each=4))

Now, let us get back on red areas in the form filled by the student, identified earlier,

exam = stack("C:\\Users\\exam-filled.png")
red = (exam[[1]]>150)&(max(  exam[[2]] , exam[[3]])<150)
focalsred = focal(red, w=5, fun=mean)

Here, we simply have to compare what the student answered with areas where we expect to find some red in,

ind=which(values(focalsred)>.3)
yind=750-trunc(ind/610)
xind=ind-trunc(ind/610)*610
points(xind,yind,pch=19,cex=.4,col="blue")
points(CORRECTX, CORRECTY,pch=1,
col="red",cex=1.5,lwd=1.5)

Crosses on the graph on the right below are the answers identified as correct (here 13),

> icorrect=values(red)[(750-CORRECTY)*
+ 610+(CORRECTX)]
> points(CORRECTX[icorrect], CORRECTY[icorrect],
+ pch=4,col="black",cex=1.5,lwd=1.5)
> sum(icorrect)
[1] 13

In case there are negative points for incorrect answers, we can count how many incorrect answers we had. Here, 4.

> iuncorrect=values(red)[(750-UNCORRECTY)*610+
+ (UNCORRECTX)]
> sum(iuncorrect)
[1] 4

So I have not been able to find Waldo, but at least, that will probably save me hours next time I have to mark exams…

Playing with fire (or water)

A few days ago, http://www.futilitycloset.com/ published a short post based on the fourth problem of the 1987 Canadian Mathematical Olympiad (based on a problem from the 6th All Soviet Union Mathematical Competition in Voronezh, 1966). The problem is simple (as always). It is about water pistol duels (with an odd number of players).

The answer is nice, and can be read on the blog.

What puzzled me in this problem is the following: we know, for sure, that at least one player won’t get wet, but we do not know exactly how many of them won’t get wet (assuming that each player shoots at the closest one, and hits for sure). It is simple to run simulations, e.g. assuming that players are uniformly distributed over a square,

NOTWET=function(n){
x=runif(n)
y=runif(n)
(d=as.matrix(dist(cbind(x,y), method = "euclidean",upper=TRUE)))
diag(d)=999999
dmin=apply(d,2,which.min)
notwet=n-length(table(dmin))
return(notwet)}

It is then rather simple to get the distribution of the number of players that did not get wet,

NSim=25000
N25=Vectorize(NOTWET)(n=rep(25,NSim))
T=table(N25)
plot(as.numeric(names(T)),T/NSim,type="b")

The graph for different values of the total number of players is the following (based on 25,000 simulations),
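
For completeness, a minimal sketch of the loop behind such a graph (the set of values for the number of players is my own choice),

NSim=25000
plot(c(0,50),c(0,.4),type="n",xlab="number of dry players",ylab="probability")
for(n in c(11,25,51,101)){
N=Vectorize(NOTWET)(n=rep(n,NSim))
T=table(N)
lines(as.numeric(names(T)),T/NSim,type="b")
}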

If we investigate further, say with 51 players, the distribution of the number of players that did not get wet looks very much like a Gaussian distribution,

NSim=25000
N51=Vectorize(NOTWET)(n=rep(51,NSim))
T=table(N51)
plot(as.numeric(names(T)),T/NSim,type="b",col="blue")
u=seq(0,51,by=.1)
lines(u,dnorm(u,mean(N51),sd(N51)),col="red",lty=2)

If anyone has an intuition (not to say a proof) for that, I’d be glad to hear it…

Visualization in regression analysis

Visualization is a key to success in regression analysis. This is one of the (many) reasons why I am suspicious when I read an article with a quantitative (econometric) analysis without any graph. Consider for instance the following dataset, obtained from http://data.worldbank.org/, with, for each country, the GDP per capita (in some common currency) and the mortality rate for children under 5 (deaths before the age of 5),

> library(gdata)
> XLS1=read.xls("http://api.worldbank.org/datafiles/NY.GDP.PCAP.PP.CD_Indicator_MetaData_en_EXCEL.xls", sheet = 1)
> data1=XLS1[-(1:28),c("Country.Name","Country.Code","X2010")]
> names(data1)[3]="GDP"
> XLS2=read.xls("http://api.worldbank.org/datafiles/SH.DYN.MORT_Indicator_MetaData_en_EXCEL.xls", sheet = 1)
> data2=XLS2[-(1:28),c("Country.Code","X2010")]
> names(data2)[2]="MORTALITY"
> data=merge(data1,data2)
> head(data)
Country.Code         Country.Name       GDP MORTALITY
1          ABW                Aruba        NA        NA
2          AFG          Afghanistan  1207.278     149.2
3          AGO               Angola  6119.930     160.5
4          ALB              Albania  8817.009      18.4
5          AND              Andorra        NA       3.8
6          ARE United Arab Emirates 47215.315       7.1

If we estimate a simple linear regression – http://freakonometrics.blog.free.fr/public/perso5/logormal01.gif  – we get

> regBB=lm(MORTALITY~GDP,data=data)
> summary(regBB)

Call:
lm(formula = MORTALITY ~ GDP, data = data)

Residuals:
Min     1Q Median     3Q    Max
-45.24 -29.58 -12.12  16.19 115.83

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 67.1008781  4.1577411  16.139  < 2e-16 ***
GDP         -0.0017887  0.0002161  -8.278 3.83e-14 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 39.99 on 167 degrees of freedom
(47 observations deleted due to missingness)
Multiple R-squared: 0.2909,	Adjusted R-squared: 0.2867
F-statistic: 68.53 on 1 and 167 DF,  p-value: 3.834e-14

We can look at the scatter plot, including the linear regression line, and some confidence bounds,

> plot(data$GDP,data$MORTALITY,xlab="GDP per capita",
+ ylab="Mortality rate (under 5)",cex=.5)
> text(data$GDP,data$MORTALITY,data$Country.Name,pos=3)
> x=seq(-10000,100000,length=101)
> y=predict(regBB,newdata=data.frame(GDP=x),
+ interval="prediction",level = 0.9)
> lines(x,y[,1],col="red")
> lines(x,y[,2],col="red",lty=2)
> lines(x,y[,3],col="red",lty=2)

We should be able to do a better job here. For instance, if we look at the Box-Cox profile likelihood,

> library(MASS)
> boxcox(regBB)

it looks like taking the logarithm of the mortality rate should be better, i.e. http://freakonometrics.blog.free.fr/public/perso5/lognormal02.gif or http://freakonometrics.blog.free.fr/public/perso5/lognormal05.gif:

> regLB=lm(log(MORTALITY)~GDP,data=data)
> summary(regLB)

Call:
lm(formula = log(MORTALITY) ~ GDP, data = data)

Residuals:
Min      1Q  Median      3Q     Max
-1.3035 -0.5837 -0.1138  0.5597  3.0583

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  3.989e+00  7.970e-02   50.05   <2e-16 ***
GDP         -6.487e-05  4.142e-06  -15.66   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.7666 on 167 degrees of freedom
(47 observations deleted due to missingness)
Multiple R-squared: 0.5949,	Adjusted R-squared: 0.5925
F-statistic: 245.3 on 1 and 167 DF,  p-value: < 2.2e-16

> plot(data$GDP,data$MORTALITY,xlab="GDP per capita",
+ ylab="Mortality rate (under 5) log scale",cex=.5,log="y")
> text(data$GDP,data$MORTALITY,data$Country.Name)
> x=seq(300,100000,length=101)
> y=exp(predict(regLB,newdata=data.frame(GDP=x)))*
+ exp(summary(regLB)$sigma^2/2)
> lines(x,y,col="red")
> y=qlnorm(.95, meanlog=predict(regLB,newdata=data.frame(GDP=x)),
+ sdlog=summary(regLB)$sigma)
> lines(x,y,col="red",lty=2)
> y=qlnorm(.05, meanlog=predict(regLB,newdata=data.frame(GDP=x)),
+ sdlog=summary(regLB)$sigma)
> lines(x,y,col="red",lty=2)

on the log scale or

> plot(data$GDP,data$MORTALITY,xlab="GDP per capita",
+ ylab="Mortality rate (under 5) log scale",cex=.5)

on the standard scale. Here we use quantiles of the log-normal distribution to derive confidence intervals.
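
To be explicit about the formulas behind the code (standard log-normal facts): if $\log M=\beta_0+\beta_1\,\text{GDP}+\varepsilon$ with $\varepsilon\sim\mathcal{N}(0,\sigma^2)$, then

$$\mathbb{E}(M\mid \text{GDP})=\exp\!\left(\beta_0+\beta_1\,\text{GDP}+\frac{\sigma^2}{2}\right),\qquad Q_p(M\mid\text{GDP})=\exp\!\left(\beta_0+\beta_1\,\text{GDP}+\sigma\,\Phi^{-1}(p)\right),$$

which is why the predicted mean is multiplied by exp(summary(regLB)$sigma^2/2), while the quantile bounds use qlnorm with sdlog equal to $\sigma$.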

But why shouldn’t we also take the logarithm of the GDP? We can fit a model http://freakonometrics.blog.free.fr/public/perso5/lognormal03.gif or equivalently http://freakonometrics.blog.free.fr/public/perso5/lognormal04.gif.

> regLL=lm(log(MORTALITY)~log(GDP),data=data)
> summary(regLL)

Call:
lm(formula = log(MORTALITY) ~ log(GDP), data = data)

Residuals:
Min       1Q   Median       3Q      Max
-1.13200 -0.38326 -0.07127  0.26610  3.02212

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 10.50192    0.31556   33.28   <2e-16 ***
log(GDP)    -0.83496    0.03548  -23.54   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.5797 on 167 degrees of freedom
(47 observations deleted due to missingness)
Multiple R-squared: 0.7684,	Adjusted R-squared: 0.767
F-statistic:   554 on 1 and 167 DF,  p-value: < 2.2e-16

> plot(data$GDP,data$MORTALITY,xlab="GDP per capita ",
+ ylab="Mortality rate (under 5)",cex=.5,log="xy")
> text(data$GDP,data$MORTALITY,data$Country.Name)
> x=exp(seq(1,12,by=.1))
> y=exp(predict(regLL,newdata=data.frame(GDP=x)))*
+ exp(summary(regLL)$sigma^2/2)
> lines(x,y,col="red")
> y=qlnorm(.95, meanlog=predict(regLL,newdata=data.frame(GDP=x)),
+ sdlog=summary(regLL)$sigma)
> lines(x,y,col="red",lty=2)
> y=qlnorm(.05, meanlog=predict(regLL,newdata=data.frame(GDP=x)),
+ sdlog=summary(regLL)$sigma)
> lines(x,y,col="red",lty=2)

on the log scales or

> plot(data$GDP,data$MORTALITY,xlab="GDP per capita ",
+ ylab="Mortality rate (under 5)",cex=.5)

on the standard scale. If we compare the last two predictions, we have

with the log model in blue, and the log-log model in red (I did not include the first one, for obvious reasons).

Building a ROC curve

Just before the holidays, Jean-Pierre Liégeois, a young reader from the Var, asked me by email, “starting from a logistic regression (or a 2×2 confusion matrix), how can one write an R program that builds the associated ROC curve”. Before going further (and answering the question), let me point back to an older post on confusion matrices. The idea is that we have a predictor of a variable taking values 0 and 1 (or, to use the classical terminology, “positive” and “negative”), for instance a logistic model. Formally, for each of our observations, we have an observed value http://freakonometrics.hypotheses.org/files/2018/02/ROC-01.png and (as I explained in another post) a score \widehat{S}. This score is what we will use to build the ROC curve; it will be used to predict http://freakonometrics.hypotheses.org/files/2018/02/ROC-02.png. The classification rule is then simple: we fix a threshold http://freakonometrics.hypotheses.org/files/2018/02/ROC-04.png, and

  • if http://freakonometrics.hypotheses.org/files/2018/02/ROC-05.png, then http://freakonometrics.hypotheses.org/files/2018/02/ROC-02.png is “positive”
  • if http://freakonometrics.hypotheses.org/files/2018/02/ROC-06.png, then http://freakonometrics.hypotheses.org/files/2018/02/ROC-02.png is “negative”

We can then build a so-called confusion matrix, which is simply a contingency table,

(observed value http://freakonometrics.hypotheses.org/files/2018/02/ROC-01.png in columns, predicted value http://freakonometrics.hypotheses.org/files/2018/02/ROC-02.png in rows)

                          observed “positive”   observed “negative”
predicted “positive”              TP                     FP
predicted “negative”              FN                     TN

where TP denotes the true positives, TN the true negatives, FP the false positives (or type I errors, in the terminology of decision theory, or of hypothesis testing), and FN the false negatives (or type II errors).
What about the implementation in R? Let us start by generating some data, and estimating a regression model.

set.seed(1)
n=50
X=rnorm(n)
Y=rbinom(n,size=1,prob=
exp(2*X-1)/(1+exp(2*X-1)))
B=data.frame(Y,X)
reg=glm(Y~X,family=binomial,data=B)
S=predict(reg,type="response")

We now have our observations (a variable taking values 0 or 1) and our scores. We can then pick several possible values for the threshold, and visualize the true positive rate as a function of the false positive rate.

plot(0:1,0:1,xlab="False Positive Rate",
ylab="True Positive Rate",cex=.5)
for(s in seq(0,1,by=.01)){
Ps=(S>s)*1
FP=sum((Ps==1)*(Y==0))/sum(Y==0)
TP=sum((Ps==1)*(Y==1))/sum(Y==1)
points(FP,TP,cex=.5,col="red")
}

We then get the following graph,

If we connect the dots, we get the ROC curve,

FP=TP=rep(NA,101)
plot(0:1,0:1,xlab="False Positive Rate",
ylab="True Positive Rate",cex=.5)
for(s in seq(0,1,by=.01)){
Ps=(S>s)*1
FP[1+s*100]=sum((Ps==1)*(Y==0))/sum(Y==0)
TP[1+s*100]=sum((Ps==1)*(Y==1))/sum(Y==1)
}
lines(c(FP),c(TP),type="s",col="red")

Actually, the code is rather simple, and it can be found in various R packages, e.g.

library(ROCR)
pred=prediction(S,Y)
perf=performance(pred,"tpr", "fpr")
plot(perf,colorize = TRUE)
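
The same package also gives the area under the curve, e.g.

performance(pred,"auc")@y.values[[1]]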

We can also have fun bootstrapping the sample to build confidence intervals, or fitting theoretical models,

library(verification)
roc.plot(Y,S, xlab = "False Positive Rate",
ylab = "True Positive Rate", main = "", CI = TRUE,
n.boot = 100, plot = "both", binormal = TRUE)

or (again with confidence bounds obtained by bootstrapping)

library(pROC)
PROC=plot.roc(Y,S,main="", percent=TRUE,
ci=TRUE)
SE=ci.se(PROC,specificities=seq(0, 100, 5))
plot(SE, type="shape", col="light blue")

Climate change and insurance

I will be in Lyon next Monday to give a talk on “Modeling heat-waves: return period for non-stationary extremes” in a workshop entitled “Changement climatique et gestion des risques“. An interesting reference might be some pages from Le Monde (2010). The talk will be more a discussion about modeling series of temperatures (daily temperatures). A starting point might be the IPCC Third Assessment graph which illustrates the effect on extreme temperatures when (a) the mean temperature increases, (b) the variance increases, and (c) when both the mean and variance increase for a normal distribution of temperature.
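
As a quick illustration of that IPCC graph (my own sketch, not the original figure), one can plot normal densities with a shifted mean and/or an inflated variance, and compare the probabilities of exceeding a fixed (high) threshold,

x = seq(-5,5,by=.01)
plot(x, dnorm(x,0,1), type="l", xlab="temperature (standardized)", ylab="density")
lines(x, dnorm(x,.5,1),   col="red")     # (a) increase in mean
lines(x, dnorm(x,0,1.3),  col="blue")    # (b) increase in variance
lines(x, dnorm(x,.5,1.3), col="purple")  # (c) both
abline(v=2, lty=2)                       # a fixed "extreme" threshold
c(1-pnorm(2,0,1), 1-pnorm(2,.5,1), 1-pnorm(2,0,1.3), 1-pnorm(2,.5,1.3))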

I will add here some of the code used to generate the graphs I will comment on. The graph below shows the daily minimum temperature,

TEMP=read.table("http://freakonometrics.blog.free.fr/public/data/TN_STAID000038.txt",
header=TRUE,sep=",")
D=as.Date(as.character(TEMP$DATE),"%Y%m%d")
T=TEMP$TN/10
day=as.POSIXlt(D)$yday+1
an=trunc(TEMP$DATE/10000)
plot(D,T,col="light blue",xlab="Minimum
daily temperature in Paris",ylab="",cex=.5)
R=lm(T~D)
abline(R,lwd=2,col="red")

We can clearly see an increasing linear trend. But we do not care (too much) here about the increase of the average temperature; we care more about dispersion, and tails. Here are decennial box-plots,

or quantile-regressions

library(quantreg)
PENTESTD=PENTE=rep(NA,99)
for(i in 1:99){
R=rq(T~D,tau=i/100)
PENTE[i]=R$coefficients[2]
PENTESTD[i]=summary(R)$coefficients[2,2]
}
m=lm(T~D)$coefficients[2]
plot((1:99)/100,(PENTE/m-1)*100,type="b")
segments((1:99)/100,((PENTE-2*PENTESTD)/m-1)*100,
(1:99)/100,((PENTE+2*PENTESTD)/m-1)*100,
col="light blue",lwd=3)
points((1:99)/100,(PENTE/m-1)*100,type="b")
abline(h=0,lty=2,col="red")

In order to get a better understanding of the graph above, here are slopes of quantile regressions associated to different probabilities,

The annualized maxima (of the minimum temperature, i.e. the warmest night of the year),

i.e. the regression on yearly maxima, as well as the tail index of a Generalized Pareto distribution.

Instead of looking at observations over a century (the trend is obviously linear), we can focus on the seasonal behavior,

B=data.frame(Y=rep(T,3),X=c(day,day-365,day+365),
A=rep(an,3))
library(quantreg)
library(splines)
Q50=rq(Y~bs(X,10),data=B,tau=.5)
Q95=rq(Y~bs(X,10),data=B,tau=.95)
Q05=rq(Y~bs(X,10),data=B,tau=.05)
YP95=predict(Q95,newdata=data.frame(X=1:366))
YP05=predict(Q05,newdata=data.frame(X=1:366))
I=(T>predict(Q95)[1:length(T)])|(T<predict(Q05)[1:length(T)])
YP50=predict(Q50,newdata=data.frame(X=1:366))
plot(day[I],T[I],col="light blue",cex=.5)
lines(1:365,YP95[1:365],col="blue")
lines(1:365,YP05[1:365],col="blue")
lines(1:365,YP50[1:365],col="blue",lwd=3)

with, in red, the series from 1900 to 1920, and, in purple, from 1990 to 2010. If we remove the linear trend and the seasonal cycle, here are the residuals, assumed to be stationary,

or during the year,

Obviously, something has been missed,

The graph below is the volatility of the residual series, within the year,

Instead of looking at volatility, we can focus on tails, with tail index per month,
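
Before that, here is a hedged sketch (my own reconstruction; the exact definition is not shown here) of how the residual series, denoted T3 in the code below, can be obtained from the objects defined above, by removing the linear trend and the (median) seasonal cycle,

trend  = lm(T~D)                       # linear trend
season = predict(Q50)[1:length(T)]     # fitted seasonal median, per observation
T3     = T - predict(trend) - (season - mean(season))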

library(evir)   # for gpd()
mois=as.POSIXlt(D)$mon+1
Pmax=Dmax=matrix(NA,12,2)
for(s in 1:12){
X=T3[mois==s]
FIT=gpd(X,5)
Pmax[s,1:2]=FIT$par.ests
Dmax[s,1:2]=FIT$par.ses
}
plot(1:12,Pmax[,1],type="b",col="blue",
ylim=c(-.6,0))
segments(1:12,Pmax[,1]+2*Dmax[,1],1:12,Pmax[,1]-
2*Dmax[,1],col="light blue",lwd=2)
points(1:12,Pmax[,1],col="blue")
text(1:12,rep(-.5,12),c("JAN","FEV","MARS",
"AVR","MAI","JUIN","JUIL","AOUT","SEPT",
"OCT","NOV","DEC"),cex=.7)

At the end of the talk, I will also mention multiple city models, e.g. Paris and Marseille,

If we look at residuals (once we have removed the linear trend and the seasonal cycle) we observe some positive dependence

In order to study (strong) tail dependence, define

http://freakonometrics.hypotheses.org/files/2017/07/Llatex2png.2.php_.png

for lower left tail and

http://freakonometrics.hypotheses.org/files/2017/07/Clatex2png.2.php_.png

for upper right tail, where http://freakonometrics.hypotheses.org/files/2017/07/toclatex2png-12.2.php_.png is the survival copula associated to http://freakonometrics.hypotheses.org/files/2017/07/toclatex2png-13.2.php_.png, i.e.
http://freakonometrics.hypotheses.org/files/2017/01/toclatex2png-14.2.php_.png

and

http://freakonometrics.hypotheses.org/files/2017/01/toclatex2png-15.2.php_.png

It looks like there is no tail dependence (in the upper tail). But it is also possible to study weaker tail dependence, through

http://freakonometrics.hypotheses.org/files/2017/01/toc2latex2png.3.php_.png

and

http://freakonometrics.hypotheses.org/files/2017/01/toc2latex2png.4.php_.png


Slides can be visualized below; I will upload them soon.

Visualization in ratemaking, with R

As promised: next Tuesday, September 27, the Actuariat IARD course will take place in the computer lab (PK-S1525), at the usual time. In the meantime, some lecture notes on a priori ratemaking are online here.

  • Some references on R

Before starting to program in R, here are a few references that might be useful, starting with some in French,

  • “R pour les débutants” by Emmanuel Paradis (PDF)
  • “Introduction à la programmation en S” by Vincent Goulet (PDF)
  • “Statistique de l’Assurance” by Arthur Charpentier (PDF), which are my lecture notes from last year; they focus on using R, not on programming in R, which is assumed to be somewhat familiar.

In English, the references are a bit more numerous,

  • “R for Beginners” by Emmanuel Paradis (PDF),
  • “An Introduction to R” by Longhow Lam (PDF),
  • “The R language — a short companion” by Marc Vandemeulebroecke (PDF),
  • “The R Guide” by Jason Owen (PDF),
  • “Econometrics in R” by Grant Farnsworth (PDF), to go further with regressions,
  • “Practical Regression and Anova using R” by Julian Faraway (PDF), on the same topic,
  • “Statistics with R and S-Plus” by Hugo Quené (PDF),
  • “Statistical Computing and Graphics Course Notes” by Frank Harrell (PDF),
  • “Using R for Data Analysis and Graphics – Introduction, Examples and Commentary” by John Maindonald (PDF).

The code to import the data is the following,

> sinistre <- read.table("http://freakonometrics.free.fr/sinistreACT2040.txt",
+ header=TRUE,sep=";")
> sinistres=sinistre[sinistre$garantie=="1RC",]
> contrat <- read.table("http://freakonometrics.free.fr/contractACT2040.txt",
+ header=TRUE,sep=";")
> T=table(sinistres$nocontrat)
> T1=as.numeric(names(T))
> T2=as.numeric(T)
> nombre1 = data.frame(nocontrat=T1,nbre=T2)
> I = contrat$nocontrat%in%T1
> T1= contrat$nocontrat[I==FALSE]
> nombre2 = data.frame(nocontrat=T1,nbre=0)
> nombre=rbind(nombre1,nombre2)
> basenb = merge(contrat,nombre)
> head(basenb)
> basesin=merge(sinistres,contrat)
> basesin=basesin[basesin$cout>0,]

  • Making graphs in R

In A Practitioner’s Guide to Generalized Linear Models (online here), one can see graphs like the one below, with the claim frequency as a function of age (or rather, age classes). On the graph below, there is also a distinction between men (in blue) and women (in red),

on this graph, frequencies are expressed as the logarithm of the multiplier (i.e. as a variation relative to the portfolio average).
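
In other words, for a given class the plotted quantity is the log of the ratio of the annualized class frequency to the portfolio frequency (this is the type=2 option in the function below), i.e.

$$\log\left(\frac{\sum_{i\in\text{class}}Y_i\left/\sum_{i\in\text{class}}E_i\right.}{\sum_{i}Y_i\left/\sum_{i}E_i\right.}\right)$$
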
The following code can be used to (automatically) generate similar graphs,

> graphique=function(nom="ageconducteur",
+ niveau=c(17,21,24,29,34,44,64,84,110),
+ continu=TRUE,type=1){
+ if(continu==TRUE){X=cut(basenb[,nom],niveau)}
+ if(continu==FALSE){X=as.factor(basenb[,nom])}
+ E=basenb$exposition
+ Y=basenb$nbre
+ FREQ=levels(X)
+ moyenne=variance=n=rep(NA,length(FREQ))
+ for(k in 1:length(FREQ)){
+ moyenne[k] =weighted.mean(Y[X==FREQ[k]]/E[X==FREQ[k]],
+ E[X==FREQ[k]])
+ variance[k]=weighted.mean((Y[X==FREQ[k]]/E[X==FREQ[k]]-
+ moyenne[k])^2,E[X==FREQ[k]])
+ n[k]       =sum(E[X==FREQ[k]])
+}
+ w=barplot(n,names.arg=FREQ,col="light green",axes=FALSE,
+ xlim=c(0,1.2*length(FREQ)+.5))
+ mid=w[,1]
+ axis(2)
+ par(new=TRUE)
+ IC1=moyenne+1.96/sqrt(n)*sqrt(variance)
+ IC2=moyenne-1.96/sqrt(n)*sqrt(variance)
+ moyenneglobale=sum(Y)/sum(E)
+ 
+ if(type==1){
+ plot(mid,moyenne,ylim=range(c(IC1,IC2)),type="b",
+ col="red",axes=FALSE,xlab="",ylab="",
+ xlim=c(0,1.2*length(FREQ)+.5))
+ segments(mid,IC1,mid,IC2,col="red")
+ segments(mid-.1,IC1,mid+.1,IC1,col="red")
+ segments(mid-.1,IC2,mid+.1,IC2,col="red")
+ points(mid,moyenne,pch=19,col="red")
+ axis(4)
+ abline(h=moyenneglobale,lty=2,col="red")}
+
+ if(type==2){
+ plot(mid,log(moyenne/moyenneglobale),ylim=
+ range(c(log(IC1/moyenneglobale),log(IC2/moyenneglobale))),
+ type="b",col="red",axes=FALSE,xlab="",ylab="",
+ xlim=c(0,1.2*length(FREQ)+.5))
+ segments(mid,log(IC1/moyenneglobale),mid,
+ log(IC2/moyenneglobale),col="blue")
+ segments(mid-.1,log(IC1/moyenneglobale),mid+.1,
+ log(IC1/moyenneglobale),col="blue")
+ segments(mid-.1,log(IC2/moyenneglobale),mid+.1,
+ log(IC2/moyenneglobale),col="blue")
+ points(mid,log(moyenne/moyenneglobale),pch=19,col="red")
+ axis(4)
+ abline(h=0,lty=2,col="red")}
+
+ mtext("Exposition", 2, line=2, cex=1.2,col="light green")
+ if(type==1){mtext("Fréquence annualisée", 
+     4, line=-2, cex=1.2,col="red")}
+ if(type==2){mtext("Fréquence annualisée (log multiplicateur)", 
+    4, line=-2, cex=1.2,col="red")}
+ }

For instance, using an arbitrary split into age classes (the default in the function),

> graphique()

But we could also use a split that ensures larger classes, for instance based on quantiles,

> Q=quantile(basenb[,"ageconducteur"],(0:10)/10)
> Q[1]=Q[1]-1
> graphique(nom="ageconducteur",niveau=Q,continu=TRUE)

These two graphs make it possible to visualize the empirical claim frequency, per age class, without any parametric model. Confidence intervals are also displayed (based on a normality assumption). Note that it is also possible to plot everything relative to the average frequency (in log multipliers),
> graphique(type=2)

or, if we want to analyze the frequency as a function of the geographical area of residence,

> graphique(nom="zone",continu=FALSE,type=2)

Another use can be made with the claim severity (average cost) together with the frequency, as an insurer would do, for instance,

It is possible to modify the function slightly to add the claim severity to the graph, e.g.

> graphiquecout=function(nom="ageconducteur",
+ niveau=c(17,21,24,29,34,44,64,84,110),
+ continu=TRUE,type=1){
+ if(continu==TRUE){X=cut(basenb[,nom],niveau)}
+ if(continu==FALSE){X=basenb[,nom]}
+ E=basenb$exposition
+ Y=basenb$nbre
+ FREQ=levels(X)
+ moyennen=variancen=nn=rep(NA,length(FREQ))
+ for(k in 1:length(FREQ)){
+ moyennen[k] =weighted.mean(Y[X==FREQ[k]]/E[X==FREQ[k]],
+ E[X==FREQ[k]])
+ variancen[k]=weighted.mean((Y[X==FREQ[k]]/E[X==FREQ[k]]-
+ moyennen[k])^2,E[X==FREQ[k]])
+ nn[k]       =sum(E[X==FREQ[k]])
+ }
+ moyenneglobalen=sum(Y)/sum(E)
+ 
+ if(continu==TRUE){X=cut(basesin[,nom],niveau)}
+ if(continu==FALSE){X=basesin[,nom]}
+ Y=basesin$cout
+ FREQ=levels(X)
+ moyennes=variances=ns=rep(NA,length(FREQ))
+ for(k in 1:length(FREQ)){
+ moyennes[k] =mean(Y[X==FREQ[k]])
+ variances[k]=var(Y[X==FREQ[k]])
+ ns[k]=length(Y[X==FREQ[k]])
+ }
+ moyenneglobales=mean(Y)
+ 
+ w=barplot(nn,names.arg=FREQ,col="light green",
+ axes=FALSE,xlim=c(0,1.2*length(FREQ)+.5))
+ mid=w[,1]
+ 
+  par(new=TRUE)
+ IC1=moyennen+1.96/sqrt(nn)*sqrt(variancen)
+ IC2=moyennen-1.96/sqrt(nn)*sqrt(variancen)
+ plot(mid,moyennen,ylim=range(c(IC1,IC2)),type="b",
+ col="red",axes=FALSE,xlab="",ylab="",
+ xlim=c(0,1.2*length(FREQ)+.5))
+ segments(mid,IC1,mid,IC2,col="red")
+ segments(mid-.1,IC1,mid+.1,IC1,col="red")
+ segments(mid-.1,IC2,mid+.1,IC2,col="red")
+ points(mid,moyennen,pch=19,col="red")
+ axis(4)
+ abline(h=moyenneglobalen,lty=2,col="red")
+ 
+ par(new=TRUE)
+ IC1=moyennes+1.96/sqrt(ns)*sqrt(variances)
+ IC2=moyennes-1.96/sqrt(ns)*sqrt(variances)
+ plot(mid,moyennes,ylim=range(c(IC1,IC2)),type="b",
+ col="blue",axes=FALSE,xlab="",ylab="",
+ xlim=c(0,1.2*length(FREQ)+.5))
+ segments(mid,IC1,mid,IC2,col="blue")
+ segments(mid-.1,IC1,mid+.1,IC1,col="blue")
+ segments(mid-.1,IC2,mid+.1,IC2,col="blue")
+ points(mid,moyennes,pch=19,col="blue")
+ axis(2)
+ abline(h=moyenneglobales,lty=2,col="blue")
+ 
+ mtext("Cout moyen", 2, line=2, cex=1.2,col="blue")
+ mtext("Fréquence annualisée", 4, line=-2, cex=1.2,col="red")
+ }

If we look at it per age class,

> graphiquecout()

or as a function of the age of the vehicle,

> Q=quantile(basenb[,"agevehicule"],(0:10)/10)
> Q[1]=Q[1]-1
> graphiquecout(nom="agevehicule",niveau=Q,continu=TRUE)

or, finally, as a function of the geographical area,

> graphiquecout(nom="zone",continu=FALSE)

Nikkei’s past experience vs. SP500 (in euros)

Following Michael’s idea (here), I wanted to go further, based on his intuition (and the dataset that he kindly sent me, there). If we consider the two series of the Nikkei index and the SP500 index in euros, we have the following graph,

the code is simply the following (the merging function is simply here to avoid problems with different trading days: since we look at the index and not the return, it is the simplest way to deal with it).

> library(RODBC)
> base = odbcConnectExcel(
+ "https://perso.univ-rennes1.fr/arthur.charpentier/spx_nky_eurusd.xls", 
+ readOnly = TRUE)
> series1 = sqlQuery(base,query="select * from [Tabelle1$A2:B8837]") # SPX
> series2 = sqlQuery(base,query="select * from [Tabelle1$D2:E8631]") # NKY
> series3 = sqlQuery(base,query="select * from [Tabelle1$G2:H8945]") # EURUSD
> odbcCloseAll()
> series4=merge(series1,series3)
> series4$SPEUR=series4$SPX/series4$EURUSD
> series5=merge(series4,series2)
> x=(as.Date(series5[,1])-as.Date("01/01/0000","%d/%m/%Y"))/365.25
> yl=range(series5[,4])
> xl=c(1975,2010)
> plot(x,series5[,4],axes=FALSE,xlab="",ylab="",type="l",
+ lwd=3,col="red",xlim=xl,ylim=yl)
> axis(1)
> axis(2, col="red")
> par(new=TRUE)
> yl=range(series5[,5])
> plot(x,series5[,5],axes=FALSE,xlab="",ylab="",type="l",
+ lwd=3,col="blue",xlim=xl,ylim=yl)
> axis(4, col="blue")
> mtext("SP500 in Euros", 2, line=2, col="red", cex=1.2)
> mtext("NKY", 4, line=2, col="blue", cex=1.2)

Those two series seem to have a similar pattern, so an idea can be to translate the SP500 to the left,

Interesting, isn’t it? Suppose that we want to forecast (or foresee?) the SP500 in euros for the next 10 years…

People who enjoy charts would have here a nice tool…

Those two series are extremely correlated, with a correlation of 0.9572,

> n=nrow(series5)
> X1=series5[2501:n,4]
> X2=series5[1:(n-2500),5]
> cor(X1,X2)
[1] 0.9572484

But are the two series cointegrated (see here, here or there for material on cointegration)? Well, using the standard procedure, we first have to prove that the two series are integrated. First, let us look at the autocorrelograms,

At first sight, we confirm the economic intuition that those indices should be integrated. Standard tests confirm that intuition,

> acf(X2,lag=1000,col="light green")
> acf(X1,lag=1000,col="light green")
> library(tseries)
> adf.test(X1)
        Augmented Dickey-Fuller Test
data:  X1 
Dickey-Fuller = -1.0768, Lag order = 17, p-value = 0.9264
alternative hypothesis: stationary 
> adf.test(X2)
        Augmented Dickey-Fuller Test
data:  X2 
Dickey-Fuller = -1.2905, Lag order = 17, p-value = 0.8788
alternative hypothesis: stationary

But if we want to go further, we have to find the cointegration relationship between the two series. From an heuristic point of view, a linear regression should be a good proxy,

> reg=lm(X1~X2)
> plot(residuals(reg))

> acf(residuals(reg),lag=1000,col="light green")

> adf.test(residuals(reg))
        Augmented Dickey-Fuller Test
data:  residuals(reg) 
Dickey-Fuller = -5.176, Lag order = 17, p-value = 0.01
alternative hypothesis: stationary 
Warning message:
In adf.test(residuals(reg)) : p-value smaller than printed p-value
> pp.test(residuals(reg))
        Phillips-Perron Unit Root Test
data:  residuals(reg) 
Dickey-Fuller Z(alpha) = -46.9775, Truncation lag parameter = 11,
p-value = 0.01
alternative hypothesis: stationary 
Warning message:
In pp.test(residuals(reg)) : p-value smaller than printed p-value

When we look at the autocorrelation function, it looks like we do have a stationary series.
This idea is – more or less – the idea of the Engle-Granger two-step procedure. But actually, we cannot directly use Dickey-Fuller’s test to see if the residuals are integrated. This was proved in Phillips and Ouliaris (1990), who also proposed a test (see e.g. here),

> library(tseries); po.test(cbind(X1,X2))
        Phillips-Ouliaris Cointegration Test
data:  cbind(X1, X2) 
Phillips-Ouliaris demeaned = -53.1766, Truncation lag parameter = 57,
p-value = 0.01
Warning message:
In po.test(cbind(X1, X2)) : p-value smaller than printed p-value

Another similar function can be found in R,

> library(urca)
> summary(ca.po(cbind(X1,X2)))
######################################## 
# Phillips and Ouliaris Unit Root Test # 
######################################## 
Test of type Pu 
detrending of series none 
Call:
lm(formula = z[, 1] ~ z[, -1] - 1)
Value of test-statistic is: 45.2032 
Critical values of Pu are:
                  10pct    5pct    1pct
critical values 20.3933 25.9711 38.3413

Thus, we have to admit that those series are cointegrated.

Based on that idea, it is possible to model the stationary component, and forecast it for the next ten years, based on the assumption that we know the behavior of one time series. Hence, if we add the confidence interval due to the stationary component uncertainty, we have the following graph,
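
As a hedged sketch (my own, not necessarily the model behind the graph above): the stationary component can be modeled with an AR process and forecast 2500 trading days (roughly ten years) ahead, before being added back to the cointegration relationship, where the “future” regressor is simply the last ten years of the Nikkei,

> h=2500                        # about ten years of trading days
> ar.res=ar(residuals(reg))     # AR model for the stationary component
> p.res=predict(ar.res,n.ahead=h)
> nky.lag=series5[(n-h+1):n,5]  # NKY values matching the forecast horizon
> f.spx=coef(reg)[1]+coef(reg)[2]*nky.lag+p.res$pred
> plot(f.spx,type="l")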

Of course, again, only the uncertainty related to the stationary process is considered here…