Job for life? Bishop of Rome?

The job of Bishop of Rome – i.e. the Pope – is considered a life-long commitment; at least, it usually was. There have been 266 popes since 32 A.D. (according to http://oce.catholic.com/…), and almost all of them served until their death. But that does not mean they stayed in the job for long… The data can easily be extracted from the website,

> L2=scan("http://oce.catholic.com/index.php?title=List_of_Popes",what="character")
Read 4485 items
> index=which(L2=="</td><td>Reigned")
> X=L2[index+1]
> Y=strsplit(X,split="-")

But some cleaning is needed, because there are a few inconsistencies (e.g. 911-913 followed by 913-14), so a few more lines of code are required. From this file, we can extract the year each pope started to reign, the year the reign ended, and its length, using the following functions

> diffyears=function(x){
+ s=NA
+ if(sum(substr(x,1,1)=="c")>0){x[substr(x,1,1)=="c"]=substr(x[substr(x,1,1)=="c"],3,nchar(x[substr(x,1,1)=="c"]))}
+ if(length(x)==1){s=1}
+ if(length(x)==2){s=diff(as.numeric(x))}
+ return(s)}
> whichyearsbeg=function(x){
+ s=NA
+ if(sum(substr(x,1,1)=="c")>0){x[substr(x,1,1)=="c"]=substr(x[substr(x,1,1)=="c"],3,nchar(x[substr(x,1,1)=="c"]))}
+ if(length(x)==1){s=as.numeric(x)}
+ if(length(x)==2){s=as.numeric(x)[1]}
+ return(s)}
> whichyearsend=function(x){
+ s=NA
+ if(sum(substr(x,1,1)=="c")>0){x[substr(x,1,1)=="c"]=substr(x[substr(x,1,1)=="c"],3,nchar(x[substr(x,1,1)=="c"]))}
+ if(length(x)==1){s=as.numeric(x)}
+ if(length(x)==2){s=as.numeric(x)[2]}
+ return(s)}

On our file, we have

> Years=unlist(lapply(Y,whichyearsbeg))
> YearsB=c(Years[1:91],752,Years[92:length(Years)])
> YearsB[187]=1276
> Years=unlist(lapply(Y,whichyearsend))
> YearsE=c(Years[1:91],752,Years[92:length(Years)])
> YearsE[187]=1276
> YearsE[266]=2013
> YearsE[122]=914 
> W=unlist(lapply(Y,diffyears))
> W=c(W[1:91],1,W[92:length(W)])
> W[W==-899]=1
> which(is.na(W))
[1] 187 266
> W[187]=1
> W[266]=2013-2005

If we plot it, we have the following graph,

> plot(YearsB,W,type="h")

and if we look at the average length, we have the following graph,

> n=200
> YEARS = seq(0,2000,length=n)
> Z=rep(NA,n)
> for(i in 2:(n-1)){
+ index=which((YearsB>YEARS[i]-50)&(YearsE<YEARS[i]+50))
+ Z[i] = mean(W[index])}
> plot(YEARS,Z,type="l",ylim=c(0,30))
> n=50
> YEARS = seq(0,2000,length=n)
> Z=rep(NA,n)
> for(i in 2:(n-1)){
+ index=which((YearsB>YEARS[i]-50)&(YearsE<YEARS[i]+50))
+ Z[i] = mean(W[index])}
> lines(YEARS,Z,type="l",col="grey")

which does not reflect the mortality improvements observed over two millennia. It might be related to the fact that the average age at the time of election has increased over time (for instance, Benedict XVI was elected at 78 – one of the oldest popes ever elected). Actually, serving a bit more than 7 years is almost the median,

> mean(W>=7.5)
[1] 0.424812

(42% of the popes stayed in charge more than 7 years). We can also look at the histogram,

> hist(W,breaks=0:35)
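
As a quick check on the claim above (a minimal sketch, reusing the vector W of reign lengths; output not shown),

> median(W)
> quantile(W,c(.25,.5,.75))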

Unfortunately, I could not find a more detailed database (including years of birth, for instance) to build a life table of popes.

 

Reading week and count data

As announced in class (for those who wish to use the reading week to prepare), part of the midterm exam will be based on the following dataset,

> base=read.table("http://freakonometrics.free.fr/baseaffairs.txt",header=TRUE)
> tail(base)
    SEX AGE YEARMARRIAGE CHILDREN RELIGIOUS EDUCATION OCCUPATION SATISFACTION Y
596   1  47         15.0        1         3        16          4            2 7
597   1  22          1.5        1         1        12          2            5 1
598   0  32         10.0        1         2        18          5            4 6
599   1  32         10.0        1         2        17          6            5 2
600   1  22          7.0        1         3        18          6            2 2
601   0  32         15.0        1         3        14          1            5 1

This dataset was built from the data of the article A Theory of Extramarital Affairs, by Ray Fair, published in 1978 in the Journal of Political Economy. The variable of interest is (as its name suggests) Y, the number of extramarital affairs over the past year, with several explanatory variables

  • sex: 0 for a woman, 1 for a man
  • age: age of the respondent
  • yearmarriage: number of years of marriage
  • children: 0 if the respondent has no children (with his or her spouse), 1 otherwise
  • religious: degree of “religiosity”, from 1 (anti-religious) to 5 (very religious)
  • education: number of years of education, 9=grade school, 12=high school, …, up to 20=PhD
  • occupation: coded according to Hollingshead’s scale (see http://cba.uah.edu/berkowd/….)
    • Higher executives of large concerns, proprietors, and major professionals (1)
    • Business managers, proprietors of medium-sized businesses, and lesser professionals (2)
    • Administrative personnel, owners of small businesses, and minor professionals (3)
    • Clerical and sales workers, technicians, and owners of little businesses (4)
    • Skilled manual employees (5)
    • Machine operators and semiskilled employees (6)
    • Unskilled employees (7)
  • satisfaction: self-rating of the marriage, from very unhappy (1) to very happy (5)

A priori, I will not answer questions about these data. Good luck, and have a good reading week.

Further readings on GLMs and ratemaking

Some articles found in actuarial journals, on ratemaking,

and in the CAS forums, and in ASTIN conference papers.

British Statisticians and American Gangsters

A few months ago, I published a post (in French) following my reading of Leonard Mlodinow’s The Drunkard’s Walk. More precisely, I mentioned a paragraph that I found extremely informative,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-18-a%CC%80-13.27.42.png

But it looks like those gangsters were not only stealing money; they were also stealing ideas, here from a British statistician, namely Leonard Henry Caleb Tippett. Leonard Tippett is famous in Extreme Value Theory for his theorem (the so-called Fisher-Tippett theorem, which gives the possible limiting distributions for a normalized version of the maximum of an i.i.d. sequence, see old posts). According to Martin Gardner, Leonard Tippett suggested using the middle digits (not the last ones) of large numbers to generate (pseudo) random sequences; more precisely, in 1927, he “published a table of 41,600 random numbers, obtained by taking the middle digits of the area of parishes in England”.

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-18-a%CC%80-11.34.23.png

I could not get a copy of the book Random Sampling Numbers by Leonard Tippett (I could only find reviews, e.g. Nair (1938)), but I do believe that this technique should work to generate sequences that look like sequences of random numbers. Note that several other techniques were mentioned in previous posts (in French) published a few years ago.
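
To illustrate the idea (a small sketch of my own, not Tippett’s actual procedure, with random numbers standing in for the parish areas),

> set.seed(123)
> areas=sample(1e5:1e7,20)        # hypothetical large numbers, standing in for areas
> middigits=function(x,k=3){      # keep the k middle digits of each number
+ s=as.character(x)
+ i=floor((nchar(s)-k)/2)+1
+ as.numeric(substr(s,i,i+k-1))
+ }
> sapply(areas,middigits)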

Now, I should also take some time to apologize because, sometimes, I am the one playing the gangster: I steal a lot of illustrations on the internet. And I would like to apologize to the authors. On my previous blog, I tried – once – to add a short line at the end of a post, explaining where the illustration came from (trying to give credit to the illustrator). Less than 10 days after adding this short line, I received an email from a ‘publisher’, telling me that there were rights attached to the picture, and that I had 24 hours to remove it (if not, their lawyers would see what to do). Of course, I did remove the picture, and the mention. Now, I use pictures with no mention. And I feel guilty. So I wanted to apologize for stealing others’ work. I am still considering hiring an illustrator for my blog. Work in progress…

Modeling individual claim costs in ratemaking

Before finishing the course on ratemaking, we will discuss the modeling of individual claim costs. We will talk about Gamma and lognormal distributions (for the latter, I suggest rereading what was said in the regression models course about log-linear models, recalled in a short post published last fall). We will also discuss mixtures of distributions, and multinomial distributions. The slides are online here,

To go further, there is the article by Fu & Moncher (2004) comparing the Gamma and lognormal distributions, http://casact.org/… or Holler, Sommer & Trahair (1999) http://casact.org/… which offered a state of the art, some fifteen years ago. Otherwise, I recommend reading the Practitioner’s Guide to Generalized Linear Models, available online at http://casact.org/….

From Simpson’s paradox to pies

Today, I wanted to publish a post on economics, and decision theory. And probability too… Those who follow my blog should know that I am a big fan of Simpson’s paradox. I also love to mention it in my econometrics classes. It raises important questions, which I relate to multicollinearity and to the interpretation of regression models with multiple (negatively correlated) explanatory variables. This paradox has amazing pedagogical virtues. I have mentioned it several times on this blog (I should probably add that I discovered this paradox thanks to Marco Scarsini, who taught me a lot, in decision theory and in probability). For those who do not know this paradox, here is an example that Marco gave in one of his talks, a few years ago. Consider the following statistics, for healthy people entering one of two hospitals,

hospital     total   survivors   deaths   survival rate
hospital A     600         590       10             98%
hospital B     900         870       30             97%

while, when sick people entered the same hospitals,

hospital     total   survivors   deaths   survival rate
hospital A     400         210      190             53%
hospital B     100          30       70             30%

Somehow, whatever your health situation, you should choose hospital A. Now, if we aggregate the two tables,

hospital     total   survivors   deaths   survival rate
hospital A    1000         800      200             80%
hospital B    1000         900      100             90%

i.e. without any doubt, people should choose hospital B.
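
Those survival rates are easy to check (a minimal sketch in R; output not shown),

> surv=function(s,d) s/(s+d)
> rbind(healthy=c(A=surv(590,10),B=surv(870,30)),
+ sick=c(A=surv(210,190),B=surv(30,70)),
+ overall=c(A=surv(800,200),B=surv(900,100)))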

Actually, Simpson’s paradox is called Simpson’s paradox because Colin Blyth named it that way in 1972, in his paper entitled On Simpson’s Paradox and the Sure-Thing Principle (an economic article in a statistical journal), which can be downloaded from http://www.stat.cmu.edu/~fienberg/…. He found the paradox in a paper published in 1951 by Edward Simpson, even though earlier papers had already mentioned it. The most popular application is probably admissions to Berkeley’s graduate programs, and sex bias, see Bickel, Hammel & O’Connell (1975), which can be downloaded from http://www.unc.edu/~nielsen/…. I also mentioned a geometric interpretation of this paradox a few years ago on my blog, which is so simple to understand that the paradox is no longer really a paradox, since in the example above, we had

590/600 > 870/900 (i.e. 98% > 97%)

and

210/400 > 30/100 (i.e. 53% > 30%)

while

800/1000 < 900/1000 (i.e. 80% < 90%).

With symbolic notations, one can have at the same time

a1/b1 > c1/d1

and

a2/b2 > c2/d2

with also

(a1+a2)/(b1+b2) < (c1+c2)/(d1+d2)

as shown on the graph below

There should be a connection between Simpson’s paradox and the ecological fallacy (an issue I recently discovered and found extremely interesting, again related to the difficulty of interpreting regressions). But that’s another story. My point today is that Colin Blyth mentioned another nice paradox, related, this time, to stochastic orderings. The idea is the following. Consider the three spinners drawn below (imagine an arrow in each circle),

  • spinner A: no matter where the arrow stops, the gain is 3,
  • spinner B: 56% chances to gain 2, 22% chances to gain 4, and 22% chances to gain 6,
  • spinner C: 51% chances to gain 1, 49% chances to gain 5.

Instead of spinners, it is also possible to consider three different lotteries,

You play against a friend: you pick a spinner, and your friend picks another one. Each of you flicks their arrow, and the highest number wins (no matter the margin). Let us compute the odds. First case, A against B, from A’s perspective,

        B-2 (56%)      B-4 (22%)      B-6 (22%)
A-3     +1, win        -1, lose       -3, lose

In that case, A has 56% chance of beating B. Second case, A against C, from A’s perspective,

        C-1 (51%)      C-5 (49%)
A-3     +1, win        -2, lose
In that case, A has 51% chance of beating C. Third (and final) case, B against C, from B’s perspective. Assuming independence between the spinners, joint probabilities can easily be computed,
        C-1 (51%)             C-5 (49%)
B-2     28.56%, +1, win       27.44%, -3, lose
B-4     11.22%, +3, win       10.78%, -1, lose
B-6     11.22%, +5, win       10.78%, +1, win
In that case, B has 61.78% chance of beating C. So, if we try to summarize,
  • A is the best choice, since it beats both with – always – more than 50% chance,
  • C is the worst choice, since it is beaten by both with – always – more than 50% chance.
Now, assume that you play not against one friend, but against two friends, and everyone picks a different spinner. Let us compute the odds, one more time. First case, A against B and C, from A’s perspective,
                    prob      gain   outcome
A-3 vs B-2, C-1   28.56%       +1      win
A-3 vs B-2, C-5   27.44%       -2      lose
A-3 vs B-4, C-1   11.22%       -1      lose
A-3 vs B-4, C-5   10.78%       -1      lose
A-3 vs B-6, C-1   11.22%       -3      lose
A-3 vs B-6, C-5   10.78%       -3      lose
In that case, A has 28.56% chance of beating B and C. Second case, B against A and C, from B’s perspective,
                    prob      gain   outcome
B-2 vs A-3, C-1   28.56%       -1      lose
B-2 vs A-3, C-5   27.44%       -2      lose
B-4 vs A-3, C-1   11.22%       +1      win
B-4 vs A-3, C-5   10.78%       -1      lose
B-6 vs A-3, C-1   11.22%       +3      win
B-6 vs A-3, C-5   10.78%       +1      win
In that case, B has 33.22% chance of beating A and C. Third (and final) case, C against A and B, from C’s perspective,
                    prob      gain   outcome
C-1 vs A-3, B-2   28.56%       -2      lose
C-1 vs A-3, B-4   11.22%       -3      lose
C-1 vs A-3, B-6   11.22%       -5      lose
C-5 vs A-3, B-2   27.44%       +2      win
C-5 vs A-3, B-4   10.78%       +1      win
C-5 vs A-3, B-6   10.78%       -1      lose

In that case, C has 38.22% chance of beating A and B. So, if we try to summarize, this time

  • C is the best choice, since it has (strictly) more than a 1/3 chance of winning, which is the highest probability,
  • A is the worst choice, since it has (strictly) less than a 1/3 chance of winning, which is the lowest probability.
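
Those probabilities are easy to check by simulation (a minimal sketch; output not shown, but the frequencies should be close to the values above),

> set.seed(1)
> n=1e6
> A=rep(3,n)
> B=sample(c(2,4,6),size=n,replace=TRUE,prob=c(.56,.22,.22))
> C=sample(c(1,5),size=n,replace=TRUE,prob=c(.51,.49))
> mean(A>B)            # A against B, should be close to 56%
> mean(A>C)            # A against C, close to 51%
> mean(B>C)            # B against C, close to 61.78%
> mean(A>pmax(B,C))    # A against B and C, close to 28.56%
> mean(B>pmax(A,C))    # B against A and C, close to 33.22%
> mean(C>pmax(A,B))    # C against A and B, close to 38.22%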

Odd, isn’t it? Now, is there an interpretation of this paradox? Yes. Martin Gardner, in his paper on induction and probability, mentioned the case of drug testing. The value given by the spinner is a health level, rated from 1 to 6. Thus, taking drug A, you always get a health level of 3. With drug C, on the other hand, you get either very sick (level 1) or very well (level 5). Consider now a doctor who wants to maximize the patient’s chance of being well. If only pills A and C are available, then the doctor should choose A: this is what we saw in the first part. Now assume that a company releases a third pill, called drug B. Then the doctor should find C more interesting… Odd, isn’t it?

Colin Blyth gave a more amusing application. Assume that you like to go to the restaurant, and you like to get a dessert there. Dessert A – the apple pie – is the average one, with a standard level, which you rank 3 (on a scale from 1 to 6). Dessert C – the cheesecake – can either be awful (ranked 1) or delicious (ranked 5). You’d better go for the apple pie if you want to maximize the probability of not being disappointed (i.e. maximizing your “best chance” according to Colin Blyth, but I guess it can be interpreted as regret minimization too). Now assume that dessert B – the blueberry pie – is available (with ranks given by the spinner). Then you should go for the cheesecake. I let you imagine the discussion you can then have with your favorite waitress,

– Hi Mr Freakonometrics, do you want a piece of apple pie? (yes, actually, she also reads my blog frequently, and knows me by my pseudonym…)

– Probably. But actually, I was wondering if you did have your blueberry pie today ?

– Yes, in fact we do….

– Great, in that case, I’ll go for the cheese cake.

She’ll probably think that I am a freak… so I hope she’ll come and read my post, to understand that, actually, it does make a lot of sense to go for what was supposed to be my worst choice.

Modeling individual losses with mixtures

Usually, the sentence that I keep saying in my regression classes is “please, look at your data“. In our previous post, we’ve been playing like most econometricians: we did not look at the data. Actually, if we look at the distribution of individual losses, in the dataset, we see the following,

> n=nrow(couts)
> plot(sort(couts$cout),(1:n)/(n+1),xlim=c(0,10000),type="s",lwd=2,col="green")

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-16.10.26.png

It looks like there are fixed-cost claims in our database. How do we deal with them in the standard case (e.g. in the Loss Models textbook)? We can use a mixture of – at least – three distributions here,

f(y) = p1 f1(y) + p2 δκ(y) + p3 f3(y)

with

  • a distribution for small claims, https://latex.codecogs.com/gif.latex?{\color{Blue}%20f_1(}\cdot{\color{Blue}%20)}, e.g. an exponential distribution
  • a Dirac mass in https://latex.codecogs.com/gif.latex?{\color{Magenta}%20\kappa}, i.e. https://latex.codecogs.com/gif.latex?{\color{Magenta}%20\delta_{\kappa}(}\cdot{\color{Magenta}%20)}
  • a distribution for larger claims, https://latex.codecogs.com/gif.latex?{\color{Red}%20f_3(}\cdot{\color{Red}%20)}, e.g. a Gamma, or a lognormal, distribution
>  I1=which(couts$cout<1120)
>  I2=which((couts$cout>=1120)&(couts$cout<1220))
>  I3=which(couts$cout>=1220)
>  (p1=length(I1)/nrow(couts))
[1] 0.3284823
>  (p2=length(I2)/nrow(couts))
[1] 0.4152807
>  (p3=length(I3)/nrow(couts))
[1] 0.256237
>  X=couts$cout
>  (kappa=mean(X[I2]))
[1] 1171.998
>  X0=X[I3]-kappa
>  u=seq(0,10000,by=20)
>  F1=pexp(u,1/mean(X[I1]))
>  F2= (u>kappa)
>  F3=plnorm(u-kappa,mean(log(X0)),sd(log(X0))) * (u>kappa)
>  F=F1*p1+F2*p2+F3*p3
>  lines(u,F)

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-16.13.43.png

In our previous post, we’ve discussed the idea that all parameters might be related to some covariates, i.e.

https://latex.codecogs.com/gif.latex?f(y|\boldsymbol{X})%20=%20p_1(\boldsymbol{X})%20{\color{Blue}%20f_1(}y|\boldsymbol{X}{\color{Blue}%20)}%20+%20p_2(\boldsymbol{X})%20{\color{Magenta}%20\delta_{\kappa}(}y{\color{Magenta}%20)}%20+%20p_3(\boldsymbol{X})%20{\color{Red}%20f_3(}y|\boldsymbol{X}{\color{Red}%20)}

which yield the following premium model,

https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|\boldsymbol{X})%20=%20{\color{Blue}%20{\underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq%20s_1)}_{A}%20\cdot%20{\underbrace{\mathbb{P}(Y\leq%20s_1|\boldsymbol{X})}_{D}}}}\\+{\color{Purple}%20{{\underbrace{\mathbb{E}(Y|Y\in(%20s_1,s_2],%20\boldsymbol{X})%20}_{B}}\cdot%20{\underbrace{\mathbb{P}(Y\in(%20s_1,s_2]|%20\boldsymbol{X})}_{D}}}}\\+{\color{Red}%20{{\underbrace{\mathbb{E}(Y|Y%3E%20s_2,%20\boldsymbol{X})%20}_{C}}\cdot%20{\underbrace{\mathbb{P}(Y%3E%20s_2|%20\boldsymbol{X})}_{D}}}}

For the https://latex.codecogs.com/gif.latex?{\color{Blue}%20A}, https://latex.codecogs.com/gif.latex?{\color{Magenta}%20B} and https://latex.codecogs.com/gif.latex?{\color{Red}%20C} terms, that’s easy: we can use standard models seen in the course. For the probabilities, we should use a multinomial model. Recall that for the logistic regression model, if https://latex.codecogs.com/gif.latex?(\pi,1-\pi)=(\pi_1,\pi_2), then

https://latex.codecogs.com/gif.latex?\log%20\frac{\pi}{1-\pi}=\log%20\frac{\pi_1}{\pi_2}%20=\boldsymbol{X}%27\boldsymbol{\beta}

i.e.

https://latex.codecogs.com/gif.latex?\pi_1%20=%20\frac{\exp(\boldsymbol{X}%27\boldsymbol{\beta})}{1+\exp(\boldsymbol{X}%27\boldsymbol{\beta})}

and

https://latex.codecogs.com/gif.latex?\pi_2%20=%20\frac{1}{1+\exp(\boldsymbol{X}%27\boldsymbol{\beta})}

To derive a multivariate extension, write

https://latex.codecogs.com/gif.latex?\pi_1%20=%20\frac{\exp(\boldsymbol{X}%27\boldsymbol{\beta}_1)}{1+\exp(\boldsymbol{X}%27\boldsymbol{\beta}_1)+\exp(\boldsymbol{X}%27\boldsymbol{\beta}_2)}

https://latex.codecogs.com/gif.latex?\pi_2%20=%20\frac{\exp(\boldsymbol{X}%27\boldsymbol{\beta}_2)}{1+\exp(\boldsymbol{X}%27\boldsymbol{\beta}_1)+\exp(\boldsymbol{X}%27\boldsymbol{\beta}_2)}

and

https://latex.codecogs.com/gif.latex?\pi_3%20=%20\frac{1}{1+\exp(\boldsymbol{X}%27\boldsymbol{\beta}_1)+\exp(\boldsymbol{X}%27\boldsymbol{\beta}_2)}

Again, maximum likelihood techniques can be used, since

https://latex.codecogs.com/gif.latex?\mathcal{L}(\boldsymbol{\pi},\boldsymbol{y})\propto%20\prod_{i=1}^n%20\prod_{j=1}^3%20\pi_{i,j}^{Y_{i,j}}

where here, the variable https://latex.codecogs.com/gif.latex?Y_{i} – which takes three levels – is split into three indicators (like any categorical explanatory variable in a standard regression model). Thus,

https://latex.codecogs.com/gif.latex?\log%20\mathcal{L}(\boldsymbol{\beta},\boldsymbol{y})\propto%20\sum_{i=1}^n%20\sum_{j=1}^2%20\left(Y_{i,j}%20\boldsymbol{X}_i%27\boldsymbol{\beta}_j\right)%20-%20n_i\log\left[1+\exp(\boldsymbol{X}%27\boldsymbol{\beta}_1)+\exp(\boldsymbol{X}%27\boldsymbol{\beta}_2)\right]

and, as for the logistic regression, one can then use the Newton-Raphson algorithm to compute the maximum likelihood estimate numerically. In R, first we have to define the levels, e.g.

> seuils=c(0,1120,1220,1e+12)
> couts$tranches=cut(couts$cout,breaks=seuils,
+ labels=c("small","fixed","large"))
> head(couts,5)
  nocontrat    no garantie    cout exposition zone puissance agevehicule
1      1870 17219      1RC 1692.29       0.11    C         5           0
2      1963 16336      1RC  422.05       0.10    E         9           0
3      4263 17089      1RC  549.21       0.65    C        10           7
4      5181 17801      1RC  191.15       0.57    D         5           2
5      6375 17485      1RC 2031.77       0.47    B         7           4
  ageconducteur bonus marque carburant densite region tranches
1            52    50     12         E      73     13    large
2            78    50     12         E      72     13    small
3            27    76     12         D      52      5    small
4            26   100     12         D      83      0    small
5            46    50      6         E      11     13    large

Then, we can run a multinomial regression, from

> library(nnet)

using some selected covariates

> reg=multinom(tranches~ageconducteur+agevehicule+zone+carburant,data=couts)
# weights:  30 (18 variable)
initial  value 2113.730043 
iter  10 value 2063.326526
iter  20 value 2059.206691
final  value 2059.134802 
converged

The output is here

> summary(reg)
Call:
multinom(formula = tranches ~ ageconducteur + agevehicule + zone + 
    carburant, data = couts)

Coefficients:
      (Intercept) ageconducteur agevehicule      zoneB      zoneC
fixed  -0.2779176   0.012071029  0.01768260 0.05567183 -0.2126045
large  -0.7029836   0.008581459 -0.01426202 0.07608382  0.1007513
           zoneD      zoneE      zoneF   carburantE
fixed -0.1548064 -0.2000597 -0.8441011 -0.009224715
large  0.3434686  0.1803350 -0.1969320  0.039414682

Std. Errors:
      (Intercept) ageconducteur agevehicule     zoneB     zoneC     zoneD
fixed   0.2371936   0.003738456  0.01013892 0.2259144 0.1776762 0.1838344
large   0.2753840   0.004203217  0.01189342 0.2746457 0.2122819 0.2151504
          zoneE     zoneF carburantE
fixed 0.1830139 0.3377169  0.1106009
large 0.2160268 0.3624900  0.1243560

To visualize the impact of a single covariate, one can also use spline functions,

> library(splines)
> reg=multinom(tranches~agevehicule,data=couts)
# weights:  9 (4 variable)
initial  value 2113.730043 
final  value 2072.462863 
converged
> reg=multinom(tranches~bs(agevehicule),data=couts)
# weights:  15 (8 variable)
initial  value 2113.730043 
iter  10 value 2070.496939
iter  20 value 2069.787720
iter  30 value 2069.659958
final  value 2069.479535 
converged

For instance, if the covariate is the age of the car, we do have the following probabilities

> predict(reg,newdata=data.frame(agevehicule=5),type="probs")
    small     fixed     large 
0.3388947 0.3869228 0.2741825

and for all ages from 0 to 20,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-16.02.55.png

For instance, for new cars, the proportion of fixed costs is rather small (here in purple), and it keeps increasing with the age of the car. If the covariate is the density of population in the area where the driver lives, we obtain the following probabilities,

> reg=multinom(tranches~bs(densite),data=couts)
# weights:  15 (8 variable)
initial  value 2113.730043 
iter  10 value 2068.469825
final  value 2068.466349 
converged
> predict(reg,newdata=data.frame(densite=90),type="probs")
    small     fixed     large 
0.3484422 0.3473315 0.3042263

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-16.05.29.png

Based on those probabilities, it is then possible to derive the expected cost of a claim, given some covariates (e.g. the density). But first, define subsets of the whole dataset,

> sousbaseA=couts[couts$tranches=="small",]
> sousbaseB=couts[couts$tranches=="fixed",]
> sousbaseC=couts[couts$tranches=="large",]

with a threshold given by

> (k=mean(sousbaseB$cout))
[1] 1171.998

Then, let us run our four models,

> reg=multinom(tranches~bs(densite),data=couts)
> regA=glm(cout~bs(densite),data=sousbaseA,family=Gamma(link="log"))
> regB=glm(cout~1,data=sousbaseB,family=Gamma(link="log"))
> regC=glm((cout-k)~bs(densite),data=sousbaseC,family=Gamma(link="log"))

We can now compute predictions based on those models,

> nouveau=data.frame(densite=seq(10,100))
> proba=predict(reg,newdata=nouveau,type="probs")
> predA=predict(regA,newdata=nouveau,type="response")
> predB=predict(regB,newdata=nouveau,type="response")
> predC=predict(regC,newdata=nouveau,type="response")+k
> pred=cbind(predA,predB,predC)
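
The expected cost of a claim, as a function of the density, is then obtained by combining the three components (a small sketch, reusing the objects above; it corresponds to the total height of the bars in the graph below),

> esperance=rowSums(proba*pred)
> plot(nouveau$densite,esperance,type="l")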

To visualize the impact of each component on the premium, we can compute probabilities, as well as expected costs (given the cost class),

> cbind(proba,pred)[seq(10,90,by=10),]
       small     fixed     large    predA    predB    predC
10 0.3344014 0.4241790 0.2414196 423.3746 1171.998 7135.904
20 0.3181240 0.4471869 0.2346892 428.2537 1171.998 6451.890
30 0.3076710 0.4626572 0.2296718 438.5509 1171.998 5499.030
40 0.3032872 0.4683247 0.2283881 451.4457 1171.998 4615.051
50 0.3052378 0.4620219 0.2327404 463.8545 1171.998 3961.994
60 0.3136136 0.4417057 0.2446807 472.3596 1171.998 3586.833
70 0.3279413 0.4056971 0.2663616 473.3719 1171.998 3513.601
80 0.3464842 0.3534126 0.3001032 463.5483 1171.998 3840.078
90 0.3652932 0.2868006 0.3479061 440.4925 1171.998 4912.379

Now, it is possible to plot those figures in a graph,

> barplot(t(proba*pred))
> abline(h=mean(couts$cout),lty=2)

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-15-a%CC%80-11.50.47.png

(the dotted horizontal line is the average cost of a claim, in our dataset).

Sorting rows and columns in a matrix (with some music, and some magic)

This morning, I was working on some paper on inequality measures, and for computational reasons, I had to sort elements in a matrix. To make it simple, I had a rectangular matrix, like the one below,

> nl=4; nc=6
> set.seed(1)
> u=sample(1:(nc*nl))
> (M1=matrix(u,nl,nc))
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,]    7    5   11   23    6   17
[2,]    9   18    1   21   24   15
[3,]   13   19    3    8   22    2
[4,]   20   12   14   16    4   10

I had to sort elements in this matrix, by row.

> (M2=t(apply(M1,1,sort)))
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,]    5    6    7   11   17   23
[2,]    1    9   15   18   21   24
[3,]    2    3    8   13   19   22
[4,]    4   10   12   14   16   20

Nice, elements are sorted within each row. But for symmetry reasons, I also wanted to sort them by column. So, from this sorted matrix, I decided to sort the elements by column,

> (M3=apply(M2,2,sort))
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,]    1    3    7   11   16   20
[2,]    2    6    8   13   17   22
[3,]    4    9   12   14   19   23
[4,]    5   10   15   18   21   24

Nice, elements are now sorted by column. Wait… elements are also still sorted by row. How come? Is it a coincidence? Actually, no, you can try…

> library(scatterplot3d)
> nc=6; nl=5
> set.seed(1)
> u=sample(1:(nc*nl))
> (M1=matrix(u,nl,nc))
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,]    8   23    5   30   10   15
[2,]   11   27    4   29   21   28
[3,]   17   16   13   24   26   12
[4,]   25   14    7   20    1    3
[5,]    6    2   18    9   22   19
> M2=t(apply(M1,1,sort))
> M3=apply(M2,2,sort)
> M3
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,]    1    3    7   14   19   22
[2,]    2    6    9   15   20   25
[3,]    4    8   10   17   23   26
[4,]    5   11   16   18   24   29
[5,]   12   13   21   27   28   30

or use the following function if two random matrices are not enough,

> doublesort=function(seed=2,nl=4,nc=6){
+ set.seed(seed)
+ u=sample(1:(nc*nl))
+ (M1=matrix(u,nl,nc))
+ (M2=t(apply(M1,1,sort)))
+ return(apply(M2,2,sort))
+ }
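
As a quick check that this is not specific to a couple of seeds (a small sketch, reusing the doublesort function above; output not shown), one can verify that the rows of the column-sorted matrix are still sorted, for many random matrices,

> rowsorted=function(M) all(M==t(apply(M,1,sort)))
> all(sapply(1:1000,function(s) rowsorted(doublesort(seed=s))))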

Please, feel free to play with this function: this will always be the case. Of course, this is not a new result. Actually, it is mentioned in More Mathematical Morsels by Ross Honsberger, in connection with a story about a marching band. The idea is simple: consider a marching band, arranged in a rectangle. Here are my players,

> library(scatterplot3d)
> scatterplot3d(rep(1:nl,nc),rep(1:nc,each=nl), as.vector(M1),
+ col.axis="blue",angle=40,
+ col.grid="lightblue", main="", xlab="", ylab="", zlab="",
+ pch=21, box=FALSE, cex.symbols=1,type="h",color="red",axis=FALSE)

Quite messy, isn’t it? At least, this is what the leader of the band thought, since some tall players were hiding shorter ones. So, he brought the shorter ones forward, and moved the taller ones to the back, but kept everyone in the same row,

> m=scatterplot3d(rep(1:nl,nc),rep(1:nc,each=nl), as.vector(M2),
+ col.axis="blue",angle=40,
+ col.grid="lightblue", main="", xlab="", ylab="", zlab="",
+ pch=21, box=FALSE, cex.symbols=1,type="h",color="red",axis=FALSE)

From the leader’s perspective, everything was fine,

> M=M2
> for(i in 1:nl){
+ for(j in 1:(nc-1)){
+ pts=m$xyz.convert(x=c(i,i),y=c(j,j+1),z=c(M[i,j],M[i,j+1]))
+ segments(pts$x[1],pts$y[1],pts$x[2],pts$y[2])
+ }}

But someone in the audience (on the right of this graph) did not have the same perspective.

> for(j in 1:nc){
+ for(i in 1:(nl-1)){
+ pts=m$xyz.convert(x=c(i,i+1),y=c(j,j),z=c(M[i,j],M[i+1,j]))
+ segments(pts$x[1],pts$y[1],pts$x[2],pts$y[2])
+ }}

So the person in the audience asks the players – one more time – to move, but this time, to match his perspective. Since I consider someone on the right, some minor adjustments should be made here,

> sortrev=function(x) sort(x,decreasing=TRUE)
> M3b=apply(M2,2,sortrev)

This time, it is much better,

> m=scatterplot3d(rep(1:nl,nc),rep(1:nc,each=nl), as.vector(M3b),
+ col.axis="blue",angle=40,
+ col.grid="lightblue", main="", xlab="", ylab="", zlab="",
+ pch=21, box=FALSE, cex.symbols=1,type="h",color="red",axis=FALSE)

And not only from the audience’s perspective,

> M=M3b
> for(j in 1:nc){
+ for(i in 1:(nl-1)){
+ pts=m$xyz.convert(x=c(i,i+1),y=c(j,j),z=c(M[i,j],M[i+1,j]))
+ segments(pts$x[1],pts$y[1],pts$x[2],pts$y[2])
+ }}

but also for the leader of the marching band

Nice, isn’t it? And why is this property always valid? Actually, it comes from the pigeonhole principle (one more time); a nice explanation can be found in The Power of the Pigeonhole by Martin Gardner (a pdf version can also be found at http://www.ualberta.ca/~sgraves/..). As mentioned at the end of the paper, there is also an interpretation of that result related to a magic trick, discussed – in pictures – a few months ago on http://www.futilitycloset.com/…: deal cards into any rectangular array:

2012-01-26-ranks-and-files-1

Then put each row into numerical order:

ranks and files 2

Now put each column into numerical order:

ranks and files 3

That last step hasn’t disturbed the preceding one: rows are still in order. And that’s a direct consequence of the pigeonhole principle. That’s awesome, isn’t it?

Crash course on R for financial and actuarial econometrics

The crash course announced for this Friday, in Montréal, entitled Econometric Modeling in Finance and Insurance with the R Language, has been canceled by IFM2 – or, to be more specific, postponed. I will upload the slides on financial and actuarial applications later on (even if most of the material can already be found, here and here, on this blog). Sorry for the late announcement.

Visualizing overdispersion (with trees)

This week, we started to discuss overdispersion when modeling claims frequency. In my previous post, I discussed the computation of empirical variances with different exposures, but I used only one factor to build the classes. Of course, it is possible to use many more factors, for instance using Cartesian products of factors,

> X=as.factor(paste(sinistres$carburant,sinistres$zone,
+ cut(sinistres$ageconducteur,breaks=c(17,24,40,65,101))))
> E=sinistres$exposition
> Y=sinistres$nbre
> vm=vv=ve=rep(NA,length(levels(X)))
>   for(i in 1:length(levels(X))){
+    Ei=E[X==levels(X)[i]]                         # exposures of the class
+    Yi=Y[X==levels(X)[i]]                         # claim counts of the class
+    ve[i]=sum(Ei)                                 # total exposure
+    vm[i]=meani=weighted.mean(Yi/Ei,Ei)           # empirical mean
+    vv[i]=variancei=sum((Yi-meani*Ei)^2)/sum(Ei)  # empirical variance
+  cat("Class ",levels(X)[i],"average =",meani," variance =",variancei,"\n")
+ }
Class D A (17,24]  average = 0.06274415  variance = 0.06174966 
Class D A (24,40]  average = 0.07271905  variance = 0.07675049 
Class D A (40,65]  average = 0.05432262  variance = 0.06556844 
Class D A (65,101] average = 0.03026999  variance = 0.02960885 
Class D B (17,24]  average = 0.2383109   variance = 0.2442396 
Class D B (24,40]  average = 0.06662015  variance = 0.07121064 
Class D B (40,65]  average = 0.05551854  variance = 0.05543831 
Class D B (65,101] average = 0.0556386   variance = 0.0540786 
Class D C (17,24]  average = 0.1524552   variance = 0.1592623 
Class D C (24,40]  average = 0.0795852   variance = 0.09091435 
Class D C (40,65]  average = 0.07554481  variance = 0.08263404 
Class D C (65,101] average = 0.06936605  variance = 0.06684982 
Class D D (17,24]  average = 0.1584052   variance = 0.1552583 
Class D D (24,40]  average = 0.1079038   variance = 0.121747 
Class D D (40,65]  average = 0.06989518  variance = 0.07780811 
Class D D (65,101] average = 0.0470501   variance = 0.04575461 
Class D E (17,24]  average = 0.2007164   variance = 0.2647663 
Class D E (24,40]  average = 0.1121569   variance = 0.1172205 
Class D E (40,65]  average = 0.106563    variance = 0.1068348 
Class D E (65,101] average = 0.1572701   variance = 0.2126338 
Class D F (17,24]  average = 0.2314815   variance = 0.1616788 
Class D F (24,40]  average = 0.1690485   variance = 0.1443094 
Class D F (40,65]  average = 0.08496827  variance = 0.07914423 
Class D F (65,101] average = 0.1547769   variance = 0.1442915 
Class E A (17,24]  average = 0.1275345   variance = 0.1171678 
Class E A (24,40]  average = 0.04523504  variance = 0.04741449 
Class E A (40,65]  average = 0.05402834  variance = 0.05427582 
Class E A (65,101] average = 0.04176129  variance = 0.04539265 
Class E B (17,24]  average = 0.1114712   variance = 0.1059153 
Class E B (24,40]  average = 0.04211314  variance = 0.04068724 
Class E B (40,65]  average = 0.04987117  variance = 0.05096601 
Class E B (65,101] average = 0.03123003  variance = 0.03041192 
Class E C (17,24]  average = 0.1256302   variance = 0.1310862 
Class E C (24,40]  average = 0.05118006  variance = 0.05122782 
Class E C (40,65]  average = 0.05394576  variance = 0.05594004 
Class E C (65,101] average = 0.04570239  variance = 0.04422991 
Class E D (17,24]  average = 0.1777142   variance = 0.1917696 
Class E D (24,40]  average = 0.06293331  variance = 0.06738658 
Class E D (40,65]  average = 0.08532688  variance = 0.2378571 
Class E D (65,101] average = 0.05442916  variance = 0.05724951 
Class E E (17,24]  average = 0.1826558   variance = 0.2085505 
Class E E (24,40]  average = 0.07804062  variance = 0.09637156 
Class E E (40,65]  average = 0.08191469  variance = 0.08791804 
Class E E (65,101] average = 0.1017367   variance = 0.1141004 
Class E F (17,24]  average = 0           variance = 0 
Class E F (24,40]  average = 0.07731177  variance = 0.07415932 
Class E F (40,65]  average = 0.1081142   variance = 0.1074324 
Class E F (65,101] average = 0.09071118  variance = 0.1170159

Again, one can plot the variance against the average,

> plot(vm,vv,cex=sqrt(ve),col="grey",pch=19,
+ xlab="Empirical average",ylab="Empirical variance")
> points(vm,vv,cex=sqrt(ve))
> abline(a=0,b=1,lty=2)

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-13.58.26.png

An alternative is to use a tree. The tree can be built from another variable (whether or not the insured had a claim during the period considered), but it should be rather close to the one we would like to model (the number of claims over the period considered). Here, I used the whole database (with more than 600,000 lines),

> library(tree)
> T=tree((nombre>0)~as.factor(zone)+as.factor(puissance)+
+ as.factor(marque)+as.factor(carburant)+as.factor(region)+
+ agevehicule+ageconducteur,data=baseFREQ,
+ split =  "gini",minsize =25000)

The tree is the following

> plot(T)
> text(T)

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-13.55.13.png

Now, each node of the tree defines a class, which is supposed to be homogeneous, and those classes can be used as before.

> X=as.factor(T$where)
> E=sinistres$exposition
> Y=sinistres$nbre
> vm=vv=ve=rep(NA,length(levels(X)))
>   for(i in 1:length(levels(X))){
+    Ei=E[X==levels(X)[i]]                         # exposures of the class
+    Yi=Y[X==levels(X)[i]]                         # claim counts of the class
+    ve[i]=sum(Ei)                                 # total exposure
+    vm[i]=meani=weighted.mean(Yi/Ei,Ei)           # empirical mean
+    vv[i]=variancei=sum((Yi-meani*Ei)^2)/sum(Ei)  # empirical variance
+  cat("Class ",levels(X)[i],"average =",meani," variance =",variancei,"\n")
+  }
Class  6 average =   0.04010406  variance = 0.04424163 
Class  8 average =   0.05191127  variance = 0.05948133 
Class  9 average =   0.07442635  variance = 0.08694552 
Class  10 average =  0.4143646   variance = 0.4494002 
Class  11 average =  0.1917445   variance = 0.1744355 
Class  15 average =  0.04754595  variance = 0.05389675 
Class  20 average =  0.08129577  variance = 0.0906322 
Class  22 average =  0.05813419  variance = 0.07089811 
Class  23 average =  0.06123807  variance = 0.07010473 
Class  24 average =  0.06707301  variance = 0.07270995 
Class  25 average =  0.3164557   variance = 0.2026906 
Class  26 average =  0.08705041  variance = 0.108456 
Class  27 average =  0.06705214  variance = 0.07174673 
Class  30 average =  0.05292652  variance = 0.06127301 
Class  31 average =  0.07195285  variance = 0.08620593 
Class  32 average =  0.08133722  variance = 0.08960552 
Class  34 average =  0.1831559   variance = 0.2010849 
Class  39 average =  0.06173885  variance = 0.06573939 
Class  41 average =  0.07089419  variance = 0.07102932 
Class  44 average =  0.09426152  variance = 0.1032255 
Class  47 average =  0.03641669  variance = 0.03869702 
Class  49 average =  0.0506601   variance = 0.05089276 
Class  50 average =  0.06373107  variance = 0.06536792 
Class  51 average =  0.06762947  variance = 0.06926191 
Class  56 average =  0.06771764  variance = 0.07122379 
Class  57 average =  0.04949142  variance = 0.05086885 
Class  58 average =  0.2459016   variance = 0.2451116 
Class  59 average =  0.05996851  variance = 0.0615773 
Class  61 average =  0.07458053  variance = 0.0818608 
Class  63 average =  0.06203737  variance = 0.06249892 
Class  64 average =  0.07321618  variance = 0.07603106 
Class  66 average =  0.07332127  variance = 0.07262425 
Class  68 average =  0.07478147  variance = 0.07884597 
Class  70 average =  0.06566728  variance = 0.06749411 
Class  71 average =  0.09159605  variance = 0.09434413 
Class  75 average =  0.03228927  variance = 0.03403198 
Class  76 average =  0.04630848  variance = 0.04861813 
Class  78 average =  0.05342351  variance = 0.05626653 
Class  79 average =  0.05778622  variance = 0.05987139 
Class  80 average =  0.0374993   variance = 0.0385351 
Class  83 average =  0.06721729  variance = 0.07295168 
Class  86 average =  0.09888492  variance = 0.1131409 
Class  87 average =  0.1019186   variance = 0.2051122 
Class  88 average =  0.05281703  variance = 0.0635244 
Class  91 average =  0.08332136  variance = 0.09067632 
Class  96 average =  0.07682093  variance = 0.08144446 
Class  97 average =  0.0792268   variance = 0.08092019 
Class  99 average =  0.1019089   variance = 0.1072126 
Class  100 average = 0.1018262   variance = 0.1081117 
Class  101 average = 0.1106647   variance = 0.1151819 
Class  103 average = 0.08147644  variance = 0.08411685 
Class  104 average = 0.06456508  variance = 0.06801061 
Class  107 average = 0.1197225   variance = 0.1250056 
Class  108 average = 0.0924619   variance = 0.09845582 
Class  109 average = 0.1198932   variance = 0.1209162

Here, when plotting the empirical variance (per node) against the empirical average, we get

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-14.05.08.png

Here, we can identify classes where some heterogeneity remains.
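
For the record, the graph above can be obtained with the same code as before (a sketch, reusing the vm, vv and ve vectors computed on the tree-based classes),

> plot(vm,vv,cex=sqrt(ve),col="grey",pch=19,
+ xlab="Empirical average",ylab="Empirical variance")
> points(vm,vv,cex=sqrt(ve))
> abline(a=0,b=1,lty=2)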

Large claims, and ratemaking

During the course, we have seen that it is natural to assume that not only the individual claims frequency, but also the individual costs, can be explained by some covariates. Of course, appropriate families should be considered to model the distribution of the cost https://latex.codecogs.com/gif.latex?Y, given some covariates https://latex.codecogs.com/gif.latex?\boldsymbol{X}. Here is the dataset we’ll use,

>  sinistre=read.table("http://freakonometrics.free.fr/sinistreACT2040.txt",
+  header=TRUE,sep=";")
>  sinistres=sinistre[sinistre$garantie=="1RC",]
>  sinistres=sinistres[sinistres$cout>0,]
>  contrat=read.table("http://freakonometrics.free.fr/contractACT2040.txt",
+  header=TRUE,sep=";")
>  couts=merge(sinistres,contrat)
> tail(couts)
     nocontrat    no garantie    cout exposition zone puissance agevehicule
1919   6104006 11933      1RC 5376.04       0.37    E         6           1
1920   6107355 12349      1RC   51.63       0.74    E         4           1
1921   6108364 13229      1RC 1320.00       0.74    B         9           1
1922   6109171 11567      1RC 1320.00       0.74    B        13           1
1923   6111208 14161      1RC  970.20       0.49    E        10           5
1924   6111650 14476      1RC 1940.40       0.48    E         4           0
     ageconducteur bonus marque carburant densite region
1919            32    57     12         E      93     10
1920            45    57     12         E      72     10
1921            32   100     12         E      83      0
1922            56    50     12         E      93     13
1923            30    90     12         E      53      2
1924            69    50     12         E      93     13

Here, each line is a claim. Usual families to model the cost are the Gamma distribution, or the inverse Gaussian. Or the lognormal distribution (which is not in the exponential family, but one can assume that the logarithm of the cost can be modeled with a Gaussian distribution). Consider here only one covariate, e.g. the age of the car, and two different models: a Gamma one, and a lognormal one.

> age=0:20
> reggamma.sp <- glm(cout~agevehicule,family=Gamma(link="log"),
+ data=couts)
> Pgamma <- predict(reggamma.sp,newdata=data.frame(agevehicule=age),type="response")

For the Gamma regression, it is a simple GLM, so it is not difficult. For the lognormal distribution, one should remember that the expected value of a lognormal distribution is not the exponential of the mean of the underlying Gaussian distribution. A correction should be made here to get an unbiased estimator of the average cost,
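
Recall indeed the standard lognormal identity (this is where the sigma^2/2 term in the code below comes from):

\log Y \sim \mathcal{N}(\mu,\sigma^2) \quad\Longrightarrow\quad \mathbb{E}(Y)=\exp\left(\mu+\frac{\sigma^2}{2}\right)\neq \exp(\mu)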

> reglm.sp <- lm(log(cout)~agevehicule,data=couts)
> sigma <- summary(reglm.sp)$sigma
> mu <- predict(reglm.sp,newdata=data.frame(agevehicule=age))
> Pln <- exp(mu+sigma^2/2)

We can plot those two predictions on a single graph,

> plot(age,Pgamma,xlab="",ylab="",col="red",type="b",pch=4)
> lines(age,Pln,col="blue",type="b")

Here it is,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-14.18.56.png

Observe that it is also possible to use splines, since there might be no reason for the age to appear here in a multiplicative way,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-14.25.52.png

Here, the two models are rather close. Nevertheless, one should remember that the Gamma model can be extremely sensitive to large claims (I mean here really large claims). On the other hand, with the log-transformation, the lognormal model seems to be less sensitive to large events. Actually, if I use the complete dataset, the regressions are the following,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-14.19.44.png

i.e. with a lognormal distribution, the average cost is decreasing with the age of the car, while it is increasing with a Gamma model. The main reason here is that there is one large (not to say huge) claim in the dataset,

> couts[which.max(couts$cout),]
         cout exposition zone puissance agevehicule ageconducteur
7842  4024601       0.22    B         9          13            19
     marque carburant densite region
7842      2         E      93     24

One young driver had a $4 million claim, with a 13-year-old car. This is an outlier for the Gamma regression, which clearly influences the estimation (the second largest claim is only one third of this one). Since large claims clearly influence the estimation of the average cost, a natural idea might be to remove them. Or perhaps to see them as different from normal claims: normal claims can be explained by some covariates, but perhaps those large claims should be shared not only within their own class, but among all the insured in the portfolio. To formalize this idea, observe that we can write

https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|\boldsymbol{X})%20=%20{\color{Blue}%20{\underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq%20s)}_{A}%20\cdot%20{\underbrace{\mathbb{P}(Y\leq%20s|\boldsymbol{X})}_{B}}}}+{\color{Red}%20{{\underbrace{\mathbb{E}(Y|Y%3E%20s,%20\boldsymbol{X})%20}_{C}}\cdot%20{\underbrace{\mathbb{P}(Y%3E%20s|%20\boldsymbol{X})}_{B}}}}

where the blue part is associated with normal-sized claims, while large ones correspond to the red part. It is then possible to run three regressions: one on normal-sized claims, one on large claims, and one on the indicator of having a large claim, given that a claim occurred. The code is something like this: a large claim – here – is one above $10,000 (one has to fix that threshold),

> s= 10000
> couts$normal=(couts$cout<=s)
> mean(couts$normal)
[1] 0.9818087

i.e. about 2% of the claims in our dataset are large ones. We can run three sets of regressions, with smooth (spline) regressions on the age of the car. The first one models the individual costs of large claims,

> indice = which(couts$cout>s)
> mean(couts$cout[indice])
[1] 34471.59
> library(splines)
> regB=glm(cout~bs(agevehicule),data=couts,
+ subset=indice,family=Gamma(link="log"))
> ypB=predict(regB,newdata=data.frame(agevehicule=age),type="response")
> ypB2=mean(couts$cout[indice])

the second one models the individual costs of normal claims,

> indice = which(couts$cout<=s)
> mean(couts$cout[indice])
[1] 1335.878
> regA=glm(cout~bs(agevehicule),data=couts,
+ subset=indice,family=Gamma(link="log"))
> ypA=predict(regA,newdata=data.frame(agevehicule=age),type="response")
> ypA2=mean(couts$cout[indice])

And finally, a third one, on the probability of having a normal sized claim, given that a claim occurred

> regC=glm(normal~bs(agevehicule),data=couts,family=binomial)
> ypC=predict(regC,newdata=data.frame(agevehicule=age),type="response")
> regC2=glm(normal~1,data=couts,family=binomial)
> ypC2=predict(regC2,newdata=data.frame(agevehicule=age),type="response")

Note that each time we have something that can be interpreted either as https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|\boldsymbol{X},Y\gtrless%20%20s), or https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|Y\gtrless%20%20s) – i.e. no covariate is considered in the latter. On the graph below, we plot

https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|\boldsymbol{X})%20=%20{\color{Blue}%20{\underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq%20s)}_{A}%20\cdot%20{\underbrace{\mathbb{P}(Y\leq%20s|\boldsymbol{X})}_{B}}}}+{\color{Red}%20{{\underbrace{\mathbb{E}(Y|Y%3E%20s,%20\boldsymbol{X})%20}_{C}}\cdot%20{\underbrace{\mathbb{P}(Y%3E%20s|%20\boldsymbol{X})}_{B}}}}

where Gamma regressions – with splines – are considered for the average costs, while logistic regressions – again with splines – are considered to model probabilities.
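
For instance, the expected claim cost plotted below can be reconstructed by combining the three fits (a sketch, reusing the predictions computed above, where ypC is the predicted probability of a normal-sized claim),

> ecost  = ypC*ypA + (1-ypC)*ypB       # all three components depend on the age of the car
> ecost2 = ypC2*ypA2 + (1-ypC2)*ypB2   # components without covariates
> plot(age,ecost,type="l")
> lines(age,ecost2,lty=2)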

http://freakonometrics.hypotheses.org/files/2013/02/ecret-ABC-v2.gif

(but be careful with splines: near the borders, since we do not have a lot of observations, the behavior can be… odd, and adjustments should be made to obtain an adequate premium level). If it is legitimate to assume that normal-sized claims can be explained by some covariates, perhaps large claims (or extremely large ones) are just purely random, i.e. not a function of any covariate at all. I.e.

https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|\boldsymbol{X})%20=%20{\color{Blue}%20{\underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq%20s)}_{A}%20\cdot%20{\underbrace{\mathbb{P}(Y\leq%20s|\boldsymbol{X})}_{B}}}}+{\color{Red}%20{{\underbrace{\mathbb{E}(Y|Y%3E%20s)%20}_{C%27}}\cdot%20{\underbrace{\mathbb{P}(Y%3E%20s|%20\boldsymbol{X})}_{B}}}}

http://freakonometrics.hypotheses.org/files/2013/02/ecret-AB2C-v2.gif

To go one step further, it might also be possible to assume that not only is the size of the claim (given that it is a large one) not a function of any covariate, but neither is the probability of having an extremely large claim,

https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|\boldsymbol{X})%20=%20{\color{Blue}%20{\underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq%20s)}_{A}%20\cdot%20{\underbrace{\mathbb{P}(Y\leq%20s)}_{B%27}}}}+{\color{Red}%20{{\underbrace{\mathbb{E}(Y|Y%3E%20s)%20}_{C%27}}\cdot%20{\underbrace{\mathbb{P}(Y%3E%20s)}_{B%27}}}}

http://freakonometrics.hypotheses.org/files/2013/02/ecret-AB2C2-v2.gif

From the first part, we’ve seen that the distribution considered had an impact on the prediction, and in the second, we’ve seen that the definition of large claims (and how to deal with them) also has an impact. So clearly, actuaries have some leverage when working on ratemaking…

Exposure with binomial responses

Last week, we saw how to take the exposure into account to compute nonparametric estimators of several quantities (empirical means and empirical variances). Let us see what can be done if we want to model a binomial response. The model here is the following:

  • the number of claims https://latex.codecogs.com/gif.latex?N_i on the period https://latex.codecogs.com/gif.latex?[0,1] is unobserved
  • the number of claims https://latex.codecogs.com/gif.latex?Y_i on https://latex.codecogs.com/gif.latex?[0,E_i] is observed (as well as https://latex.codecogs.com/gif.latex?E_i)

as can be visualized below

http://f.hypotheses.org/wp-content/blogs.dir/253/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-09.30.00.png

Consider the case where the variable of interest is not the number of claims, but simply the indicator of the occurrence of a claim. Then we wish to model the event https://latex.codecogs.com/gif.latex?\{N=0\} versus https://latex.codecogs.com/gif.latex?\{N%3E0\}, interpreted as non-occurrence and occurrence. But we can only observe https://latex.codecogs.com/gif.latex?\{Y=0\} versus https://latex.codecogs.com/gif.latex?\{Y%3E0\}, and having an inclusion between those events is not enough to derive a model. Actually, with a Poisson process model, we easily get that

https://latex.codecogs.com/gif.latex?\mathbb{P}(Y=0)%20=%20\mathbb{P}(N=0)^E
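
Indeed, if the annual number of claims is Poisson with mean \lambda, then the number of claims over [0,E] is Poisson with mean \lambda E, so that (a one-line derivation)

\mathbb{P}(Y=0)=e^{-\lambda E}=\left(e^{-\lambda}\right)^{E}=\mathbb{P}(N=0)^{E}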

In words, it means that the probability of not having a claim in the first six months of the year is the square root of the probability of not having a claim over the whole year. Which makes sense. Assume that the probability of not having a claim can be explained by some covariates, denoted https://latex.codecogs.com/gif.latex?\boldsymbol{X}, through some link function (using the GLM terminology),

https://latex.codecogs.com/gif.latex?\mathbb{P}(N=0|\boldsymbol{X})=h(\boldsymbol{X}^{\text{\sffamily%20T}}\boldsymbol{\beta})

Now, since we do observe https://latex.codecogs.com/gif.latex?Y – and not https://latex.codecogs.com/gif.latex?N – we have

https://latex.codecogs.com/gif.latex?\mathbb{P}(Y=0|\boldsymbol{X},E)=h(\boldsymbol{X}^{\text{\sffamily%20T}}\boldsymbol{\beta})^E

The dataset we will use is always the same

> sinistre=read.table("http://freakonometrics.free.fr/sinistreACT2040.txt",
+ header=TRUE,sep=";")
> sinistres=sinistre[sinistre$garantie=="1RC",]
> sinistres=sinistres[sinistres$cout>0,]
> contrat=read.table("http://freakonometrics.free.fr/contractACT2040.txt",
+ header=TRUE,sep=";")
> T=table(sinistres$nocontrat)
> T1=as.numeric(names(T))
> T2=as.numeric(T)
> nombre1 = data.frame(nocontrat=T1,nbre=T2)
> I = contrat$nocontrat%in%T1
> T1= contrat$nocontrat[I==FALSE]
> nombre2 = data.frame(nocontrat=T1,nbre=0)
> nombre=rbind(nombre1,nombre2)
> sinistres = merge(contrat,nombre)
> sinistres$nonsin = (sinistres$nbre==0)

The first model we can consider is based on the standard logistic approach, i.e.

https://latex.codecogs.com/gif.latex?\mathbb{P}(Y=0|\boldsymbol{X},E)=\left(\frac{\exp(\boldsymbol{X}^{\text{\sffamily%20T}}\boldsymbol{\beta})}{1+\exp(\boldsymbol{X}^{\text{\sffamily%20T}}\boldsymbol{\beta})}\right)^E

That’s nice, but difficult to handle with standard functions. Nevertheless, it is always possible to compute the maximum likelihood estimator of https://latex.codecogs.com/gif.latex?\boldsymbol{\beta} numerically, given https://latex.codecogs.com/gif.latex?(Y_i,\boldsymbol{X}_i,E_i).

> Y=sinistres$nonsin
> X=cbind(1,sinistres$ageconducteur)
> E=sinistres$exposition
> logL = function(beta){
+ 	pi=(exp(X%*%beta)/(1+exp(X%*%beta)))^E
+ 	-sum(log(dbinom(Y,size=1,prob=pi)))
+ }
> optim(fn=logL,par=c(-0.0001,-.001),
+ method="BFGS")
$par
[1] 2.14420560 0.01040707
$value
[1] 7604.073
$counts
function gradient 
      42       10 
$convergence
[1] 0
$message
NULL
> parametres=optim(fn=logL,par=c(-0.0001,-.001),
+ method="BFGS")$par

Now, let us look at alternatives, based on standard regression models. For instance a binomial-log model. Because the exposure appears as a power of the annual probability, everything would be fine if https://latex.codecogs.com/gif.latex?h was the exponential function (or https://latex.codecogs.com/gif.latex?h^{-1} was the log link function), since

https://latex.codecogs.com/gif.latex?\mathbb{P}(Y=0|\boldsymbol{X},E)=\exp(E+\boldsymbol{X}^{\text{\sffamily%20T}}\boldsymbol{\beta})

Now, if we try to code it, things quickly become problematic,

> reg=glm(nonsin~ageconducteur+offset(exposition),
+ data=sinistres,family=binomial(link="log"))
Error: no valid set of coefficients has been found: please supply starting values

I tried (almost) everything I could, but I could not get rid of that error message,

> startglm=c(0,-.001)
> names(startglm)=c("(Intercept)","ageconducteur")
> etaglm=rep(-.01,nrow(sinistres))
> etaglm[sinistres$nonsin==0]=-10
> muglm=exp(etaglm)
> reg=glm(nonsin~ageconducteur+offset(exposition),
+ data=sinistres,family=binomial(link="log"),
+ control = glm.control(epsilon=1e-5,trace=TRUE,maxit=50),
+ start=startglm,
+ etastart=etaglm,mustart=muglm)
Deviance = NaN Iterations - 1 
Error: no valid set of coefficients has been found: please supply starting values

So I decided to give up. Almost. Actually, the problem comes from the fact that https://latex.codecogs.com/gif.latex?\mathbb{P}(Y=0) is close to 1. I guess everything would be nicer if we could work with a probability close to 0. Which is possible, since

https://latex.codecogs.com/gif.latex?\mathbb{P}(Y%3E0)=1-\mathbb{P}(Y=0)%20=%201-[1-\mathbb{P}(N%3E0)]^E

where https://latex.codecogs.com/gif.latex?\mathbb{P}(N%3E0) is close to 0. So we can use Taylor’s expansion,

https://latex.codecogs.com/gif.latex?\mathbb{P}(Y%3E0)\sim%201-[1-E\cdot%20\mathbb{P}(N%3E0)]=E\cdot%20\mathbb{P}(N%3E0)
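
To make the neglected terms explicit (writing p=\mathbb{P}(N>0)),

1-(1-p)^E = Ep - \frac{E(E-1)}{2}\,p^2 + O(p^3)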

Here, the exposure no longer appears as a power of the probability, but multiplicatively. Of course, there are higher-order terms, but let us forget them (for now). If – one more time – we consider a log link function, then we can incorporate the exposure, or to be more specific, the logarithm of the exposure, as an offset.

> regopp=glm((1-nonsin)~ageconducteur+offset(log(exposition)),
+ data=sinistres,family=binomial(link="log"))

which now works perfectly.

Finally, perhaps we should get back to our Poisson regression model, since it directly provides a model for the probabilities https://latex.codecogs.com/gif.latex?\mathbb{P}(Y=\cdot).

> regpois=glm(nbre~ageconducteur+offset(log(exposition)),
+ data=sinistres,family=poisson(link="log"))

We can now compare those three models. Perhaps we should also include the prediction obtained without any explanatory variable. For the second model (which does run without any explanatory variable), we run

>  regreff=glm((1-nonsin)~1+offset(log(exposition)),
+ data=sinistres,family=binomial(link="log"))

so that the prediction is here

> exp(coefficients(regreff))
(Intercept) 
 0.06776376

This value is comparable with the one obtained from the logistic regression,

> logL2 = function(beta){
+ 	pi=(exp(beta)/(1+exp(beta)))^E
+ 	-sum(log(dbinom(Y,size=1,prob=pi)))}
> param=optim(fn=logL2,par=.01,method="BFGS")$par
> 1-exp(param)/(1+exp(param))
[1] 0.06747777

But it is quite different from the Poisson model,

> exp(coefficients(glm(nbre~1+offset(log(exposition)),
+ data=sinistres,family=poisson(link="log"))))
(Intercept) 
 0.07279295
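The gap is not surprising (this quick check is not in the original post): the Poisson intercept estimates the expected annual number of claims, while the two previous values estimate the annual probability of having at least one claim, which, under the Poisson model, would be 1-exp(-lambda) rather than lambda itself,

> lambda=exp(coefficients(glm(nbre~1+offset(log(exposition)),
+ data=sinistres,family=poisson(link="log"))))
> 1-exp(-lambda)   # Poisson-implied annual probability of at least one claim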

Let us produce a graph, to compare those models,

> age=18:100
> yml1=exp(parametres[1]+parametres[2]*age)/(1+exp(parametres[1]+parametres[2]*age))
> plot(age,1-yml1,type="l",col="purple")
> yp=predict(regpois,newdata=data.frame(ageconducteur=age,
+ exposition=1),type="response")
> yp1=1-exp(-yp)
> ydl=predict(regopp,newdata=data.frame(ageconducteur=age,
+ exposition=1),type="response")
> plot(age,ydl,type="l",col="red")
> lines(age,yp1,type="l",col="blue")
> lines(age,1-yml1,type="l",col="purple")
> abline(h=exp(coefficients(regreff)),lty=2)

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-08-a%CC%80-14.55.591.png

Observe here that the three models are quite different. Actually, with the last two models, it is possible to run more complex regressions, e.g. with splines, to visualize the impact of the age on the probability of having – or not – a car accident. If we compare the Poisson regression (still in red) and the log-binomial model with Taylor’s expansion, we get

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-08-a%CC%80-14.39.08.png
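The code behind this last comparison is not shown in the post. A possible sketch, mirroring the regpois and regopp calls above (and assuming the log-link binomial fit still converges once splines are added; as seen earlier, it can be fussy about starting values),

> library(splines)
> regpois.bs=glm(nbre~bs(ageconducteur)+offset(log(exposition)),
+ data=sinistres,family=poisson(link="log"))
> regopp.bs=glm((1-nonsin)~bs(ageconducteur)+offset(log(exposition)),
+ data=sinistresI,family=binomial(link="log"))
> yp.bs=predict(regpois.bs,newdata=data.frame(ageconducteur=age,
+ exposition=1),type="response")
> yd.bs=predict(regopp.bs,newdata=data.frame(ageconducteur=age,
+ exposition=1),type="response")
> plot(age,1-exp(-yp.bs),type="l",col="red")  # Poisson-implied annual P(Y>0)
> lines(age,yd.bs,col="blue")                 # log-binomial, Taylor expansion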

The next step is to see how to incorporate the exposure in a tree. But that’s another story…

Pills, half pills and probabilities

Yesterday, I was uploading some old posts to complete the migration (I am going back over my old posts, one by one, to check picture links, reformat R code, etc). And I re-discovered a post published almost 2 years ago, on nuns and Hell’s Angels in an airplane.

It reminded me of an old probability problem (which might be known as one of Feynman’s problems): suppose that you have a prescription to take half a pill per day, for 6 days. Unfortunately, the pharmacist was a bit lazy (or just wanted to help me write a mathematical problem), and gave you 3 (full) pills in a small box. On Day 1, you take a pill, break it in two parts, eat one half, and return the other half to the box. On Day 2, you draw ‘something’ at random from the box, i.e. either half a pill or a full pill. If it is a half, you eat it. If it is a full one, you break it in two, eat one half, and return the other half to the box. Etc. On Day 6, if the story was well explained, you know that there can only be one half pill left. So far, so good. But what about Day 5? There were either two half pills, or one full pill. So what is the probability that there was a full pill in the box on Day 5?

Nice problem, isn’t it ?

The good thing is that it can be modeled as a Markov chain. Assume that we start with n (full) pills. After 2n days, the box will be empty. Consider the pair (nc, nh) denoting the number of complete pills nc and the number of half pills nh in the box. Here nc can take all values from 0 to n, nh is a nonnegative integer, and nc + nh cannot exceed n (given nc complete pills, nh ranges from 0 to n - nc). Thus, the number of states – the possible pairs from Day 1 till Day 2n – will be (n+1)(n+2)/2, i.e. 10 when n = 3. More precisely, define those states in a dataframe,

> n=3
> COMPLETE=HALF=NULL
> for(i in n:0){
+ HALF=c(0:(n-i),HALF)
+ COMPLETE=c(rep(i,length(0:(n-i))),COMPLETE)
+ }
> k=length(COMPLETE)
> state=data.frame(s=1:k,nc=rev(COMPLETE),nh=rev(HALF))
> state
s nc nh
1   1  3  0
2   2  2  1
3   3  2  0
4   4  1  2
5   5  1  1
6   6  1  0
7   7  0  3
8   8  0  2
9   9  0  1
10 10  0  0

Now, we can play a bit to derive the transition matrix of the Markov chain.

> attach(state)
> P=matrix(0,k,k)
> for(i in 1:k){
+ C=state$nc[i]
+ H=state$nh[i]
+ if((C>0)&(H>0)){
+ P[i,state[(nc==C-1)&(nh==H+1),"s"]]= C/(C+H)
+ P[i,state[(nc==C)&(nh==H-1),"s"]]= H/(C+H)}
+ if((C>0)&(H==0)){
+ P[i,state[(nc==C-1)&(nh==H+1),"s"]]=1}
+ if((C==0)&(H>0)){
+ P[i,state[(nc==C)&(nh==H-1),"s"]]=1}
+ if((C==0)&(H==0)){
+ P[i,state[(nc==C)&(nh==H),"s"]]=1}
+ }

We do have a transition matrix (a stochastic matrix) since all elements are nonnegative, and each row sums to 1,

> apply(P,1,sum)
[1] 1 1 1 1 1 1 1 1 1 1

Here, the transition matrix is the following

> P
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,]    0    1 0.00 0.00 0.00  0.0 0.00  0.0    0     0
[2,]    0    0 0.33 0.66 0.00  0.0 0.00  0.0    0     0
[3,]    0    0 0.00 0.00 1.00  0.0 0.00  0.0    0     0
[4,]    0    0 0.00 0.00 0.66  0.0 0.33  0.0    0     0
[5,]    0    0 0.00 0.00 0.00  0.5 0.00  0.5    0     0
[6,]    0    0 0.00 0.00 0.00  0.0 0.00  0.0    1     0
[7,]    0    0 0.00 0.00 0.00  0.0 0.00  1.0    0     0
[8,]    0    0 0.00 0.00 0.00  0.0 0.00  0.0    1     0
[9,]    0    0 0.00 0.00 0.00  0.0 0.00  0.0    0     1
[10,]   0    0 0.00 0.00 0.00  0.0 0.00  0.0    0     1

In order to get our probability, let us start from state 1 – i.e. the pair (nc,nh)=(3,0) – with probability 1, and let us look at the distribution at the different periods,

> dist=c(1,rep(0,k-1))
> MatDist=matrix(NA,2*n+1,k)
> MatDist[1,]=dist
> for(i in 1:(2*n)){dist=as.vector(t(dist)%*%P)
+ MatDist[i+1,]=dist
+ }

(one can check that after 2n days, the box is empty). The probability we are looking for is given in row 2n-1, and we just have to check which column corresponds to the pair (nc,nh)=(1,0),

> vs=state[which(MatDist[2*n-1,]>0),]
> proba=MatDist[2*n-1,vs[vs$nc==1,"s"]]
> proba
[1] 0.3888889

Here, the probability of having a full pill in the box on Day 5 is 38.89% (i.e. 7/18).
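This value can be double-checked by brute-force simulation (a small sketch, not from the original post): draw pills at random for the first 2n-2 days, and check whether a full pill is left at the beginning of Day 2n-1,

> simfull=function(n=3){
+ box=rep(1,n)                 # 1 = full pill, .5 = half pill
+ for(day in 1:(2*n-2)){       # draws of Day 1, ..., Day 2n-2
+ i=sample(length(box),size=1)
+ if(box[i]==1){box[i]=.5} else {box=box[-i]}
+ }
+ any(box==1)                  # is a full pill left on Day 2n-1 ?
+ }
> set.seed(1)
> mean(replicate(1e5,simfull(3)))

which should return a value close to 0.3889.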

Actually, it is possible to study the evolution of this probability as a function of n, the initial number of (complete) pills,

> computeproba=function(n=3){
+ COMPLETE=HALF=NULL
+ for(i in n:0){
+ HALF=c(0:(n-i),HALF)
+ COMPLETE=c(rep(i,length(0:(n-i))),COMPLETE)
+ }
+ k=length(COMPLETE)
+ state=data.frame(s=1:k,nc=rev(COMPLETE),nh=rev(HALF))
+ P=matrix(0,k,k)
+ for(i in 1:k){
+ C=state$nc[i]
+ H=state$nh[i]
+ if((C>0)&(H>0)){
+ P[i,state[(state$nc==C-1)&(state$nh==H+1),"s"]]= C/(C+H)
+ P[i,state[(state$nc==C)&(state$nh==H-1),"s"]]= H/(C+H)}
+ if((C>0)&(H==0)){
+ P[i,state[(state$nc==C-1)&(state$nh==H+1),"s"]]=1}
+ if((C==0)&(H>0)){
+ P[i,state[(state$nc==C)&(state$nh==H-1),"s"]]=1}
+ if((C==0)&(H==0)){
+ P[i,state[(state$nc==C)&(state$nh==H),"s"]]=1}
+ }
+ dist=c(1,rep(0,k-1))
+ MatDist=matrix(NA,2*n+1,k)
+ MatDist[1,]=dist
+ for(i in 1:(2*n)){dist=as.vector(t(dist)%*%P)
+ MatDist[i+1,]=dist
+ }
+ vs=state[which(MatDist[2*n-1,]>0),]
+ proba=MatDist[2*n-1,vs[vs$nc==1,"s"]]
+ return(proba)
+ }

If we plot the probability as a function of n, we get

> P=Vectorize(computeproba)(2:40)
> plot(2:40,P,ylim=c(0,.5))

One can observe that the probability is decreasing. But slowly, extremely slowly. With a log scale on the y-axis, we have

> plot(2:40,P,ylim=c(0,.5),log="y")

If we look at ‘large’ values of n, we can get

> computeproba(100)
[1] 0.14218

I do not know if this probability goes to 0 as n goes to infinity. Actually, since we have to compute a k x k matrix, with k=(n+1)(n+2)/2, i.e. roughly n^4/4 entries, n cannot be that large… Too bad. If anyone knows how this probability behaves as a function of n, when n is large, I’d be glad to know…
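A small remark (not from the original post): if memory is the binding constraint, the distribution can be propagated directly over the (complete, half) pairs, so that only two (n+1) x (n+1) arrays are needed instead of the k x k matrix. The plain R loops below are slow, so this only buys a somewhat larger n; computeproba.dp is a hypothetical name,

> computeproba.dp=function(n){
+ p=matrix(0,n+1,n+1)    # p[C+1,H+1] = P(C complete and H half pills in the box)
+ p[n+1,1]=1             # Day 1: n complete pills, no half pill
+ for(day in 1:(2*n-2)){
+ q=matrix(0,n+1,n+1)
+ for(C in 0:n){ for(H in 0:(n-C)){
+ if(p[C+1,H+1]>0){
+ if(C>0){q[C,H+2]=q[C,H+2]+p[C+1,H+1]*C/(C+H)}
+ if(H>0){q[C+1,H]=q[C+1,H]+p[C+1,H+1]*H/(C+H)}
+ }
+ }}
+ p=q
+ }
+ return(p[2,1])         # P(one complete pill left on Day 2n-1)
+ }
> computeproba.dp(3)

(for n=3, it should give back the 38.89% found above).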

Crash course on R for financial and actuarial econometrics

Next Friday, I will give in Montréal a crash course entitled Econometric Modeling in Finance and Insurance with the R Language. Since IFM2 wanted this course to be an opportunity to discover R, the first part of the course will be on the R language. Slides can be downloaded from here.

(since the course is still upcoming, all comments and remarks are welcome)

Natura non facit saltus

(see John Wilkins’ article on the – interesting – history of that phrase http://scienceblogs.com/evolvingthoughts/…). We will see, this week in class, several smoothing techniques, for insurance ratemaking. As a starting point, assume that we do not want to use segmentation techniques: everyone will pay exactly the same price.

  • no segmentation of the premium

And that price should be related to the pure premium, which is proportional to the frequency (or the annualized frequency, as discussed previously), since

https://latex.codecogs.com/gif.latex?\mathbb{E}_{\mathbb{P}}\left(\sum_{i=1}^N%20Y_i\right)=\mathbb{E}_{\mathbb{P}}(N)%20\cdot%20\mathbb{E}_{\mathbb{P}}(Y_i)

The probability measure is mentioned here just to recall that we can use any measure. Even https://latex.codecogs.com/gif.latex?\mathbb{P}_{\boldsymbol{X}} (based on some covariates). Without any covariate, the expected frequency should be

> regglm0=glm(nbre~1+offset(log(exposition)),data=sinistres,family=poisson)
> summary(regglm0)

Call:
glm(formula = nbre ~ 1 + offset(log(exposition)), family = poisson, 
    data = sinistres)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-0.5033  -0.3719  -0.2588  -0.1376  13.2700  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept)  -2.6201     0.0228  -114.9   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 12680  on 49999  degrees of freedom
Residual deviance: 12680  on 49999  degrees of freedom
AIC: 16353

Number of Fisher Scoring iterations: 6
> exp(coefficients(regglm0))
(Intercept) 
 0.07279295

Thus, if we do not want to take into account potential heterogeneity, we should assume that https://latex.codecogs.com/gif.latex?N\sim\mathcal{P}(\lambda) where https://latex.codecogs.com/gif.latex?\lambda is close to 7.28%. Yes, as mentioned in class, it is rather common to see https://latex.codecogs.com/gif.latex?\lambda as a percentage, i.e. a probability, since

https://latex.codecogs.com/gif.latex?\mathbb{P}(N\neq%200)=1-e^{-\lambda}\approx%20\lambda

i.e. https://latex.codecogs.com/gif.latex?\lambda can be interpreted as the probability of having (at least) one claim over the year (see also the law of small numbers). Let us visualize this as a function of the age of the driver,

> a=18:100
> yp=predict(regglm0,newdata=data.frame(ageconducteur=a,exposition=1),type="response",se.fit=TRUE)
> yp0=yp$fit
> yp1=yp$fit+2*yp$se.fit
> yp2=yp$fit-2*yp$se.fit
> plot(a,yp0,type="l",ylim=c(.03,.12))
> abline(v=40,col="grey")
> lines(a,yp1,lty=2)
> lines(a,yp2,lty=2)
> k=23
> points(a[k],yp0[k],pch=3,lwd=3,col="red")
> segments(a[k],yp1[k],a[k],yp2[k],col="red",lwd=3)

http://freakonometrics.hypotheses.org/files/2013/02/reg-poisson-constante.png

We do predict the same frequency for all drivers, e.g. for a driver aged 40,

> cat("Frequency =",yp0[k]," confidence interval",yp1[k],yp2[k])
Frequency = 0.07279295  confidence interval 0.07611196 0.06947393

Let us now consider the case where we try to take into account heterogeneity, e.g. by age,

  • The (standard) Poisson regression

The idea of the (log-)Poisson regression is to assume that instead of having https://latex.codecogs.com/gif.latex?N\sim\mathcal{P}(\lambda), we should have https://latex.codecogs.com/gif.latex?N|\boldsymbol{X}\sim\mathcal{P}(\lambda_{\boldsymbol{X}}), where

https://latex.codecogs.com/gif.latex?\lambda_{\boldsymbol{X}}=\exp(\beta_0+\beta_1%20\boldsymbol{X}_1+\cdots+\beta_k\boldsymbol{X}_k)

in a very general setting. Here, let us consider only one explanatory variable, i.e.

https://latex.codecogs.com/gif.latex?\lambda_{X}=\exp(\beta_0+\beta_1%20{X})

Here, we first need the fitted model regglm1 (its call is not shown in the original post), and then the predictions.
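Presumably, given the specification above, the fit is something like

> regglm1=glm(nbre~ageconducteur+offset(log(exposition)),
+ data=sinistres,family=poisson)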

> yp=predict(regglm1,newdata=data.frame(ageconducteur=a,exposition=1),
+ type="response",se.fit=TRUE)
> yp0=yp$fit
> yp1=yp$fit+2*yp$se.fit
> yp2=yp$fit-2*yp$se.fit
> plot(a,yp0,type="l",ylim=c(.03,.12))
> abline(v=40,col="grey")
> lines(a,yp1,lty=2)
> lines(a,yp2,lty=2)
> points(a[k],yp0[k],pch=3,lwd=3,col="red")
> segments(a[k],yp1[k],a[k],yp2[k],col="red",lwd=3)

http://freakonometrics.hypotheses.org/files/2013/02/reg-poisson-exp-standard.png

i.e. the prediction for the annualized claim frequency for our 40 year old driver is now 7.74% (which is slightly higher than what we had before, 7.28%)

> cat("Frequency =",yp0[k]," confidence interval",yp1[k],yp2[k])
Frequency = 0.07740574  confidence interval 0.08117512 0.07363636

It is possible to compute, not the expected frequency https://latex.codecogs.com/gif.latex?\mathbb{E}(N|X), but the ratio https://latex.codecogs.com/gif.latex?\mathbb{E}(N|X)/\mathbb{E}(N).
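The code behind the figure below is not shown in the post; one possible sketch, reusing regglm0 and regglm1 (the dashed blue line at 1 playing the role of the reference level),

> yratio=predict(regglm1,newdata=data.frame(ageconducteur=a,
+ exposition=1),type="response")/exp(coefficients(regglm0))
> plot(a,yratio,type="l")
> abline(h=1,lty=2,col="blue")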

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-05-a%CC%80-13.45.43.png

Above the horizontal blue line, the premium will be higher than the one obtained without segmentation, and (of course) lower below. Here, drivers younger than 44 will pay more, while drivers older than 44 will pay less. We discussed, in the introduction, the necessity of segmentation. If we consider two companies, one segmenting while the other applies a flat rate, then older drivers will go to the first company (since insurance is cheaper there) while younger ones will go to the second one (again, because it is cheaper). The problem is that the second company implicitly hopes that the older drivers will compensate the risk. But since they are gone, its insurance will be too cheap, and the company will lose money (if it does not go bankrupt). So companies have to use segmentation techniques to survive. Now, the problem is that we cannot be sure that this exponential decay of the premium is the proper way the premium should evolve as a function of the age. An alternative is to use nonparametric techniques to visualize the true influence of the age on the claim frequency.

  • A pure nonparametric model

A first model can be to consider a premium, per age. This can be done considering the age of the driver as a factor in the regression,

> regglm2=glm(nbre~as.factor(ageconducteur)+offset(log(exposition)),
+ data=sinistres,family=poisson)
> yp=predict(regglm2,newdata=data.frame(ageconducteur=a0,exposition=1),
+ type="response",se.fit=TRUE)
> yp0=yp$fit
> yp1=yp$fit+2*yp$se.fit
> yp2=yp$fit-2*yp$se.fit
> plot(a0,yp0,type="l",ylim=c(.03,.12))
> abline(v=40,col="grey")

http://freakonometrics.hypotheses.org/files/2013/02/reg-poisson-factors.png

Here, the forecast for our 40 year old driver is slightly lower than the previous one, but the confidence interval is much larger (since we focus on a very small subclass of the portfolio: drivers aged exactly 40)

Frequency = 0.06686658  confidence interval 0.08750205 0.0462311

Here, the classes are too small, and the premium is too erratic: the premium drops by roughly 20% from age 40 to 41, and then rises by more than 50% from age 41 to 42,

> diff(log(yp0[23:25]))
        24         25 
-0.2330241  0.5223478

There is no chance that the company will keep its insured with this strategy. This discontinuity of the premium is clearly an important issue here.

  • Using age classes

An alternative can be to consider age classes, from very young drivers to senior drivers.

> level1=seq(15,105,by=5)
> regglmc1=glm(nbre~cut(ageconducteur,level1)+offset(log(exposition)),
+ data=sinistres,family=poisson)
> summary(regglmc1)

Coefficients:
                                   Estimate Std. Error z value Pr(>|z|)    
(Intercept)                         -1.6036     0.1741  -9.212  < 2e-16 ***
cut(ageconducteur, level1)(20,25]   -0.4200     0.1948  -2.157   0.0310 *  
cut(ageconducteur, level1)(25,30]   -0.9378     0.1903  -4.927 8.33e-07 ***
cut(ageconducteur, level1)(30,35]   -1.0030     0.1869  -5.367 8.02e-08 ***
cut(ageconducteur, level1)(35,40]   -1.0779     0.1866  -5.776 7.65e-09 ***
cut(ageconducteur, level1)(40,45]   -1.0264     0.1858  -5.526 3.28e-08 ***
cut(ageconducteur, level1)(45,50]   -0.9978     0.1856  -5.377 7.58e-08 ***
cut(ageconducteur, level1)(50,55]   -1.0137     0.1855  -5.464 4.65e-08 ***
cut(ageconducteur, level1)(55,60]   -1.2036     0.1939  -6.207 5.40e-10 ***
cut(ageconducteur, level1)(60,65]   -1.1411     0.2008  -5.684 1.31e-08 ***
cut(ageconducteur, level1)(65,70]   -1.2114     0.2085  -5.811 6.22e-09 ***
cut(ageconducteur, level1)(70,75]   -1.3285     0.2210  -6.012 1.83e-09 ***
cut(ageconducteur, level1)(75,80]   -0.9814     0.2271  -4.321 1.55e-05 ***
cut(ageconducteur, level1)(80,85]   -1.4782     0.3371  -4.385 1.16e-05 ***
cut(ageconducteur, level1)(85,90]   -1.2120     0.5294  -2.289   0.0221 *  
cut(ageconducteur, level1)(90,95]   -0.9728     1.0150  -0.958   0.3379    
cut(ageconducteur, level1)(95,100] -11.4694   144.2817  -0.079   0.9366    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

> yp=predict(regglmc1,newdata=data.frame(ageconducteur=a,exposition=1),
+ type="response",se.fit=TRUE)
> yp0=yp$fit
> yp1=yp$fit+2*yp$se.fit
> yp2=yp$fit-2*yp$se.fit
> plot(a,yp0,ylim=c(.03,.12),type="s")
> abline(v=40,col="grey")
> lines(a,yp1,lty=2,type="s")
> lines(a,yp2,lty=2,type="s")

Here we obtain the following predictions,

http://freakonometrics.hypotheses.org/files/2013/02/reg-poisson-cut-1.png

and for our 40 year old driver, the frequency is now 6.84%.

Frequency = 0.0684573  confidence interval 0.07766717 0.05924742

But our classes were defined arbitrarily here. Perhaps we should consider other classes, to see whether the prediction is sensitive to the cut-off values,

> level2=level1-2
> regglmc2=glm(nbre~cut(ageconducteur,level2)+offset(log(exposition)),
+ data=sinistres,family=poisson)

http://freakonometrics.hypotheses.org/files/2013/02/reg-poisson-cut-2.png

which yields the following values for our 40 year old driver,

Frequency = 0.07050614  confidence interval 0.07980422 0.06120807

So here, we did not remove the discontinuity problem. An idea can be to consider moving regions: if the goal is to predict the frequency for a 40 year old driver, perhaps the class should be (somehow) centered on 40; and centered on 35 for drivers aged 35; etc.

  • Moving average

Thus, it is natural to consider local regressions, where only drivers aged almost 40 are considered. This ‘almost’ concept is related to the bandwidth. For instance, drivers between 35 and 45 can be considered as being almost 40. In practice, we can either restrict the regression to a subset of the data, or use weights in the regression, as illustrated below.
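For the subset version, a minimal sketch (not in the original post; regsub is a hypothetical name) could rely on the subset argument of glm,

> regsub=glm(nbre~ageconducteur+offset(log(exposition)),
+ data=sinistres,family=poisson,
+ subset=abs(ageconducteur-40)<=5)

The weighted version, which is the one actually used here, is then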

> value=40
> h=5
> sinistres$omega=(abs(sinistres$ageconducteur-value)<=h)*1
> regglmomega=glm(nbre~ageconducteur+offset(log(exposition)),
+ data=sinistres,family=poisson,weights=omega)

To see what’s going on, let us consider an animated plot, where the age of interest is changing,

http://freakonometrics.hypotheses.org/files/2013/02/liss-poisson-2.gif

Here, for our 40 year old driver, we get

Frequency = 0.06913391  confidence interval 0.07535564 0.06291218

We do obtain a curve that can be interpreted as a local regression. But here, we do not take into account that 35 is not as close to 40 as 39 is. And here, 34 is assumed to be very far away from 40. Clearly, we could improve that technique: kernel functions can be considered, i.e. the closer to 40, the larger the weight.

> value=40
> h=5
> sinistres$omega=dnorm(abs(sinistres$ageconducteur-value)/h)
> regglmomega=glm(nbre~ageconducteur+offset(log(exposition)),
+ data=sinistres,family=poisson,weights=omega)

which can be plotted below

http://freakonometrics.hypotheses.org/files/2013/02/liss-poisson-1.gif

Here, the prediction for our 40 year old driver is

Frequency = 0.07040464  confidence interval 0.07981521 0.06099408
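To reconstruct the full curve shown on the animation, one can loop over the age of interest (a sketch, not from the original post; pred.local is a hypothetical helper reusing the sinistres dataset and a=18:100 defined above),

> pred.local=function(value,h=5){
+ sinistres$omega=dnorm(abs(sinistres$ageconducteur-value)/h)
+ reg=glm(nbre~ageconducteur+offset(log(exposition)),
+ data=sinistres,family=poisson,weights=omega)
+ predict(reg,newdata=data.frame(ageconducteur=value,exposition=1),
+ type="response")
+ }
> plot(a,Vectorize(pred.local)(a),type="l")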

This is the idea of kernel regression techniques. But, as explained in the slides, other nonparametric techniques can be considered, like spline functions.

  • Smoothing with splines

In R, it is simple to use spline functions (somehow much simpler than kernel smoothers)

> library(splines)
> regglmbs=glm(nbre~bs(ageconducteur)+offset(log(exposition)),
+ data=sinistres,family=poisson)

http://freakonometrics.hypotheses.org/files/2013/02/reg-poisson-splines.png

The prediction for our 40 year old driver is now

Frequency = 0.06928169  confidence interval 0.07397124 0.06459215

Note that this technique is related to another class of models, the so-called Generalized Additive Models, i.e. GAMs.

> library(mgcv)
> reggam=gam(nbre~s(ageconducteur)+offset(log(exposition)),
+ data=sinistres,family=poisson)

http://freakonometrics.hypotheses.org/files/2013/02/reg-poisson-gam.png

The prediction is extremely close to the one we obtained above (the main differences being observed for very old drivers)

Frequency = 0.06912683  confidence interval 0.07501663 0.06323702

  • Comparison of the different models

Somehow, one way or another, all those models are valid. So perhaps we should compare them,
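The code producing the comparison figure is not shown in the post; here is one possible sketch of such a comparison at age 40, reusing most of the models fitted above (pred40 is a hypothetical helper, and the factor-based model assumes that age 40 is present in the portfolio),

> pred40=function(model){
+ p=predict(model,newdata=data.frame(ageconducteur=40,exposition=1),
+ type="response",se.fit=TRUE)
+ c(p$fit,p$fit-2*p$se.fit,p$fit+2*p$se.fit)
+ }
> M=sapply(list(regglm1,regglm2,regglmc1,regglmc2,
+ regglmomega,regglmbs,reggam),pred40)
> plot(1:ncol(M),M[1,],ylim=range(M),pch=19,
+ xlab="model",ylab="annualized frequency")
> segments(1:ncol(M),M[2,],1:ncol(M),M[3,])
> abline(h=exp(coefficients(regglm0)),lty=2)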

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-05-a%CC%80-14.50.19.png

On the graph above, we can visualize the upper and the lower bounds of the predictions, for the 9 models. The horizontal line is the predicted value obtained without taking heterogeneity into account. It is possible to consider relative values, with respect to this value,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-05-a%CC%80-14.54.56.png