# Pricing database

To complement this morning's course, a quick word about the databases, and more specifically the contracts database. Regarding the variables,

• densite is the population density of the municipality where the main driver lives,
• zone: zone A, B, C, D, E or F, based on the population density (in inhabitants per km²) of the municipality of residence (A = "1-50", B = "50-100", C = "100-500", D = "500-2,000", E = "2,000-10,000", F = "10,000+").

For information, the population in France is distributed as follows,

• marque: make of the vehicle, according to the following table (1 Renault Nissan; 2 Peugeot Citroën; 3 Volkswagen Audi Skoda Seat; 4 Opel GM; 5 Ford; 6 Fiat; 10 Mercedes Chrysler; 11 BMW Mini; 12 other Japanese and Korean makes; 13 other European makes; 14 other and unknown makes). This variable is not a numeric variable.
• region: a two-digit code (which is not a numeric value) giving the 22 French regions (INSEE code), i.e. geographically,

• ageconducteur: age of the main driver at the start of the coverage period,
• agevehicule: age of the vehicle at the start of the period.

I ask you not to use the bonus variable, since it involves information used in a posteriori ratemaking (which is outside the scope of this course).
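
As a side note, here is a minimal sketch of how the zone classes can be derived from densite using cut (the density values below are made up, this is not the contracts database, and the conventions at the class boundaries are an assumption of mine),

densite=c(25,75,320,1500,5000,25000)
zone=cut(densite,breaks=c(0,50,100,500,2000,10000,Inf),
labels=c("A","B","C","D","E","F"))
zone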

# Regression on categorical variables

This morning, Stéphane asked me a tricky question about extracting coefficients from a regression with categorical explanatory variables. More precisely, he asked me if it was possible to store the coefficients in a nice table, with information on the variable and the modality (those two pieces of information being in two different columns). Here is some code I wrote to produce the table he was looking for, but I guess that some (much) smarter techniques can be used (comments – see below – are open). Consider the following dataset

> base
x sex   hair
1  1   H  Black
2  4   F  Brown
3  6   F  Black
4  6   H  Black
5 10   H  Brown
6  5   H Blonde

with two factors,

> levels(base$hair)
[1] "Black"  "Blonde" "Brown"
> levels(base$sex)
[1] "F" "H"

Let us run a (standard linear) regression,

> reg=lm(x~hair+sex,data=base)

which is here

> summary(reg)

Call:
lm(formula = x ~ hair + sex, data = base)

Residuals:
1          2          3          4          5          6
-3.714e+00 -2.429e+00  2.429e+00  1.286e+00  2.429e+00 -2.220e-16

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)   3.5714     3.4405   1.038    0.408
hairBlonde    0.2857     4.8655   0.059    0.959
hairBrown     2.8571     3.7688   0.758    0.528
sexH          1.1429     3.7688   0.303    0.790

Residual standard error: 4.071 on 2 degrees of freedom
Multiple R-squared: 0.2352,	Adjusted R-squared: -0.9121
F-statistic: 0.205 on 3 and 2 DF,  p-value: 0.886

If we want to extract the names of the factors (assuming here that there are no numbers in the names of the factors), and the values of the associated modalities, one can use

> VARIABLE=c("",gsub("[-^0-9]", "", names(unlist(reg$xlevels))))
> MODALITY=c("",as.character(unlist(reg$xlevels)))
> names=data.frame(VARIABLE,MODALITY,NOMVAR=c(
+ "(Intercept)",paste(VARIABLE,MODALITY,sep="")[-1]))
> regression=data.frame(NOMVAR=names(coefficients(reg)),
+ COEF=as.numeric(coefficients(reg)))
> merge(names,regression,all.x=TRUE)
       NOMVAR VARIABLE MODALITY      COEF
1 (Intercept)                   3.5714286
2   hairBlack     hair    Black        NA
3  hairBlonde     hair   Blonde 0.2857143
4   hairBrown     hair    Brown 2.8571429
5        sexF      sex        F        NA
6        sexH      sex        H 1.1428571

or, if we want modalities excluding references,

> merge(names,regression)
       NOMVAR VARIABLE MODALITY      COEF
1 (Intercept)                   3.5714286
2  hairBlonde     hair   Blonde 0.2857143
3   hairBrown     hair    Brown 2.8571429
4        sexH      sex        H 1.1428571

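As a side note, one smarter alternative can be sketched using the assign component of the lm object, which maps each coefficient to its term (this is my own suggestion; note that only estimated coefficients appear, without the reference modalities),

> VAR=c("(Intercept)",attr(terms(reg),"term.labels"))[reg$assign+1]
> MOD=substring(names(coefficients(reg)),nchar(VAR)+1)
> data.frame(VARIABLE=ifelse(VAR=="(Intercept)","",VAR),
+ MODALITY=MOD,COEF=as.numeric(coefficients(reg)))
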
In order to reproduce the table Stéphane sent me, let us use the following code to produce an html table,

> library(xtable)
> htmltable <- xtable(merge(names,regression))
> print(htmltable,type="html")
  NOMVAR      VARIABLE MODALITY COEF
1 (Intercept)                   3.57
2 hairBlonde  hair     Blonde   0.29
3 hairBrown   hair     Brown    2.86
4 sexH        sex      H        1.14

So yes, it is possible to build a table with the variables, modalities, and coefficients. This approach can be interesting in prospective mortality modeling, where we have a large number of modalities per factor (years, ages, and years of birth). Consider the following datasets

> DEATH=read.table(
+ "http://freakonometrics.free.fr/DeathsSwitzerland.txt",
+ header=TRUE,skip=2)
> EXPOSURE=read.table(
+ "http://freakonometrics.free.fr/ExposuresSwitzerland.txt",
+ header=TRUE,skip=2)
> DEATH$Age=as.numeric(as.character(DEATH$Age))
> DEATH=DEATH[-which(is.na(DEATH$Age)),]
> EXPOSURE$Age=as.numeric(as.character(EXPOSURE$Age))
> EXPOSURE=EXPOSURE[-which(is.na(EXPOSURE$Age)),]
> base=data.frame(year=as.factor(DEATH$Year),
+ age=as.factor(DEATH$Age),
+ cohort=as.factor(DEATH$Year-DEATH$Age),
+ D=DEATH$Total,E=EXPOSURE$Total)
> base=base[base$E>0,]

and the following nonlinear model, based on the Lee-Carter model (including a cohort effect),

$N_{x,t}\sim\mathcal{P}(E_{x,t}\cdot \exp[\alpha_x+\beta_x \kappa_t + \gamma_x \delta_{t-x}])$

can be estimated using

> library(gnm)
> reg=gnm(D~age+Mult(age,year)+Mult(age,cohort),
+ offset=log(E),family=poisson,data=base)

In order to extract (as properly as possible) the 671 coefficients from the regression,

> length(coefficients(reg))
[1] 671

we have to be careful: names of coefficients are not that simple to handle. For instance, we can see things like

> coefficients(reg)[200]
Mult(., year).age98
         0.04203519

In order to extract them, define

> na=length((reg$xlevels)$age)
> ny=length((reg$xlevels)$year)
> nc=length((reg$xlevels)$cohort)
> VARIABLElong=c("",rep("age",na),rep("Mult(., year).age",na),
+ rep("Mult(age, .).year",ny),
+ rep("Mult(., cohort).age",na),rep("Mult(age, .).cohort",nc))
> VARIABLEshort=c("",rep("age",na),rep("age",na),rep("year",ny),
+ rep("age",na),rep("cohort",nc))
> MODALITY=c("",(reg$xlevels)$age,(reg$xlevels)$age,
+ (reg$xlevels)$year,(reg$xlevels)$age,(reg$xlevels)$cohort)
> names=data.frame(VARIABLElong,VARIABLEshort,
+ MODALITY,NOMVAR=c("(Intercept)",
+ paste(VARIABLElong,MODALITY,sep="")[-1]))
> regression=data.frame(NOMVAR=names(coefficients(reg)),
+ COEF=as.numeric(coefficients(reg)))

Here we go: now we have the coefficients from the regression in a nice table,

> outputreg=merge(names,regression)
> outputreg[1:10,]
        NOMVAR VARIABLElong VARIABLEshort MODALITY        COEF
1  (Intercept)                                      -8.22225458
2         age1          age           age        1  -0.87495451
3        age10          age           age       10  -1.67145704
4       age100          age           age      100   4.91041650
5        age11          age           age       11  -1.00186990
6        age12          age           age       12  -1.05953497
7        age13          age           age       13  -0.90952859
8        age14          age           age       14   0.02880668
9        age15          age           age       15   0.42830738
10       age16          age           age       16   1.35961403

It is now possible to plot all the coefficients, as functions of the age, the year of observation, or the year of birth. For instance, for the standard average age effect (namely $\alpha_x$ as a function of $x$), we can use

> typevariable=as.character(unique(outputreg$VARIABLElong))
> basegraph=outputreg[outputreg$VARIABLElong==typevariable[2],]
> x=as.numeric(as.character(basegraph$MODALITY))
> y=basegraph$COEF
> plot(x,y,type="p",col="blue",xlab="Age")

while the cohort effect ($\delta_{t-x}$ as a function of the year of birth $t-x$) is obtained using

> basegraph=outputreg[outputreg$VARIABLElong==typevariable[6],]
> x=as.numeric(as.character(basegraph$MODALITY))
> y=basegraph$COEF
> plot(x,y,type="p",col="blue",xlab="Cohort (year of birth)",
+ ylim=c(0,10))

# Introduction to generalized linear models

I am a bit ahead of schedule in the course, so I will (normally) put the slides for next week online; we will introduce the class of generalized linear models. The slides are online here.

I did not include a section on Generalized Additive Models; we will make do with the section on smoothing mentioned at the end of the slides on claims frequency modeling. To legitimate smoothing methods (on the age of the insured, in particular), let me point to a graph produced several years ago by a consulting firm, noting that the shape of the smoothing function linking age to claims frequency is the same in every country,

But I think I will write a dedicated post on smoothing, in the context of P&C insurance ratemaking.

# The law of small numbers

In insurance, the law of large numbers (originally named loi des grands nombres by Siméon Poisson, see e.g. http://en.wikipedia.org/…) is usually mentioned to legitimate large portfolios, because of pooling and diversification: the larger the pool, the more 'predictable' the losses will be (in a given period) – under standard statistical assumptions, namely finite expected value and independence (see http://freakonometrics.blog.free.fr/… for a discussion, in French). But in insurance, catastrophes are usually rare – and extremely costly – and actuaries might be interested in modeling the occurrence of that small number of events (see e.g. Aldous' book on that specific topic, which can be downloaded from http://stat.berkeley.edu/…). The theorem behind this is sometimes called the law of small numbers (from the book published by Ladislaus Bortkiewicz, but we'll get back to that story later on; see also Whitaker (1914) http://biomet.oxfordjournals.org/… or the book recently published by Michael Falk, Jürg Hüsler and Rolf-Dieter Reiss).

• The Poisson distribution

The so-called Poisson distribution (see http://en.wikipedia.org/…) was introduced by Siméon Poisson in 1837 (in Recherches sur la Probabilité des Jugements en Matière Criminelle et en Matière Civile, Précédées des Règles Générales du Calcul des Probabilités, see http://gallica.bnf.fr/…). But it had been defined more than a century before, by Abraham De Moivre, in 1711, in De Mensura Sortis, seu de Probabilitate Eventuum in Ludis a Casu Fortuito Pendentibus (see e.g. the review in http://www.jstor.org/…). Let $N$ denote a counting random variable; then it is said to be Poisson distributed if there is some $\lambda\in(0,\infty)$ such that

$\mathbb{P}(N=k)=e^{-\lambda}\frac{\lambda^k}{k!},\forall k\in\mathbb{N}$

De Moivre obtained that distribution from an approximation of the binomial distribution. Recall that the binomial distribution is a standard distribution in actuarial science, for instance to model the number of deaths among $n$ insured. If individual death probabilities are identical, say $p$, and if deaths are independent events, then

$\mathbb{P}(N=k)=\binom{n}{k}p^k(1-p)^{n-k},\forall k\in\{0,1,\cdots,n\}$
And if $n\rightarrow\infty$ and  $np\rightarrow \lambda$, then

$\mathbb{P}(N=k)\rightarrow e^{-\lambda}\frac{\lambda^k}{k!}$

Again, this is an asymptotic theorem, which is valid when we have a lot of observations ($n\rightarrow\infty$), but also when the probability of occurrence is extremely small (since $p\sim\lambda/n$), which is why the term small numbers is used. Siméon Poisson was not interested in mathematical approximations: his main point was to get a distribution with nice goodness-of-fit properties for the data he was working on. He wanted to get a better understanding of cours d'assises (jury panels might be a valid translation of the French term). A jury consisted of 12 jurors who voted to determine whether a defendant was guilty. When guilt was pronounced with at least 8 votes against 4, the defendant was convicted (which happened in 47% of criminal cases). With 7 votes against 5, the opinion of professional judges was requested (11% of criminal trials, again). Using these statistics, Poisson could estimate that the probability that a defendant brought before an assize court is guilty was of the order of 68%, and that the probability that a juror does not err when voting (condemning an innocent or releasing a culprit) was about 54%. He then sought to calculate the probability that a defendant is wrongfully convicted, and got 2%, and that 28% of acquitted defendants were in fact guilty. Siméon Poisson introduced this law to compute such probabilities easily. But the law he considered turned out to be central in probability…
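
As a quick numerical check of De Moivre's approximation (the values of $n$ and $\lambda$ below are arbitrary, my own),

> n=1000
> p=5/n
> round(rbind(binomial=dbinom(0:8,n,p),poisson=dpois(0:8,5)),5)

The two rows should be almost identical.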

• The law of small numbers

The heuristic of the main theorem related to the Poisson distribution is the following: let $X_1, \cdots,X_n$ denote i.i.d. random variables taking values in $\mathbb{R}^d$ (in a general setting, one component can be the time, the others an upper region of interest, where some stochastic process might be). Let $\mathcal{A}_n\subset\mathbb{R}^d$. If $\mathbb{P}(X_i \in \mathcal{A}_n)\rightarrow 0$ as $n\rightarrow\infty$ (or $\mathbb{P}(X_i \in \mathcal{A}_n)=O(n^{-1})$, to be a little bit more specific about the assumptions), and if $N$ denotes the count of events $\{X_i \in \mathcal{A}_n\}$, then $N$ can be approximated by a Poisson distribution with parameter $\lambda =n \times \mathbb P(X_i \in \mathcal{A}_n)$.
The heuristic is that if we consider a large number of observations, and if we count how many are in a given (small) region, then the number of such observations is Poisson distributed.

n=1000
X=runif(n)*10-1.5
Y=runif(n)*10-1.5
plot(X,Y,axes=FALSE,cex=.6)
u=seq(-1,1,by=.01)
v=sqrt(1-u^2)
polygon(c(u,rev(u)),c(v,rev(-v)),col="yellow",border=NA)
I=(X^2+Y^2)<1
points(X[I],Y[I],cex=.6,pch=19,col="red")

If we run some simulations,

>  n=1000
>  ns=100000
>  N=rep(NA,ns)
> for(s in 1:ns){
+ X=runif(n)*10-1.5
+ Y=runif(n)*10-1.5
+ I=(X^2+Y^2)<1
+ N[s]=sum(I)
+ }
> hist(N,breaks=0:60,probability=TRUE,col="yellow")
> mean(N)
[1] 31.41257

The parameter of the Poisson distribution is $n=1000$ times the ratio of the area of the yellow disk to the area of the square, i.e.

> (lambda=10*pi)
[1] 31.41593
> lines(0:60-.5,dpois(0:60,lambda),type="b",col="red")

To get an interpretation related to insurance modeling, let $\mathcal{A}$ denote an upper layer in a reinsurance contract, i.e. $\mathcal{A}=\{x>d\}$ for some deductible $d$. Let $X_i$'s denote individual losses. Then the number of claims that hit this upper layer can be modeled with a Poisson distribution. More precisely, if the deductible $d$ becomes extremely large (so that $\mathbb{P}(X_i \in \mathcal{A})\rightarrow 0$), we obtain the peaks-over-threshold model in extreme value theory (see e.g. http://brale.math.hr/~iugrina/… or http://fire.nist.gov/bfrlpubs/…): if $N$ has a Poisson distribution and, conditionally on $N$, $X_1,\cdots,X_N$ are independent identically distributed generalized Pareto random variables, then $\max\{X_1,\cdots,X_N\}$ has the generalized extreme value distribution. Thus, exceedance models (for rare events) are closely related to Poisson processes.
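
As a small simulated illustration (a sketch of mine, with exponential losses and an arbitrarily high deductible),

> n=10000               # number of individual losses
> d=qexp(.999)          # deductible such that P(X>d)=1/1000
> ns=10000
> N=rep(NA,ns)
> for(s in 1:ns) N[s]=sum(rexp(n)>d)
> mean(N)               # should be close to n*P(X>d), i.e. 10
> var(N)                # and so should the variance, if N is (almost) Poisson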

• The Poisson process

As mentioned above, the Poisson distribution appears when events occur somehow randomly and independently, over time. It is then natural to study the time between two occurrences (or two claims, in an insurance context).
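
For instance (a sketch, with an arbitrary intensity), waiting times between consecutive events of a Poisson process are exponentially distributed,

> lambda=2
> T=cumsum(rexp(1000,rate=lambda))  # arrival times of the claims
> W=diff(T)                         # waiting times between two claims
> mean(W)                           # should be close to 1/lambda, i.e. 0.5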

• Poisson distribution, and claims occurrence

It is neither Siméon Poisson nor De Moivre, but Ladislaus von Bortkiewicz who first mentioned the Poisson distribution as the law of small numbers. In 1898 (see https://archive.org/…), he studied the number of soldiers killed by horse kicks, from 1875 till 1894, over 200 corps-years (more precisely, 10 corps over 20 years).

He obtained the following distribution (here, the parameter of the Poisson distribution is 0.61, i.e. the average number of deaths per year),

| Number of deaths per year | Empirical counts | Poisson distribution |
|---|---|---|
| 0 | 109 | 108.67 |
| 1 | 65 | 66.29 |
| 2 | 22 | 20.22 |
| 3 | 3 | 4.11 |
| 4 | 1 | 0.63 |
| 5 and more | 0 | 0.08 |
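
Those theoretical counts can be recovered with a few lines of code (a sketch of mine, using the empirical counts of the table above),

> deaths=0:4
> counts=c(109,65,22,3,1)                # 200 corps-years in total
> lambda=sum(deaths*counts)/sum(counts)  # 0.61 death per corps, per year
> round(200*dpois(0:4,lambda),2)
> round(200*(1-ppois(4,lambda)),2)       # the "5 and more" class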

It is possible to find a lot of cases where the Poisson distribution fits extremely well. For instance, if we consider the number of hurricanes that made landfall in Florida after 1850,

| Number of hurricanes per year | Empirical frequency | Poisson frequency |
|---|---|---|
| 0 | 30 | 27.16 |
| 1 | 48 | 47.99 |
| 2 | 37 | 42.41 |
| 3 | 29 | 24.98 |
| 4 | 8 | 11.03 |
| 5 | 3 | 3.90 |
| 6 | 3 | 1.15 |
| 7 | 1 | 0.29 |
| 8 and more | 0 | 0.08 |

• Poisson distribution, and return period

The return period was introduced by Emil Gumbel, in hydrology, to link probabilities and durations (see e.g. http://freakonometrics.blog.free.fr/…). A decennial event has an occurrence probability of 1/10 each year; 10 years is then the average waiting time before occurrence. This does not mean that the event will not occur before 10 years, or has to occur before 10 years. Consider a return period $T$ (in years); then the yearly probability of non-occurrence is $1-(1/T)$.

And the probability of observing at least one occurrence within $n$ years is then $1-[1-(1/T)]^n$. It is standard to summarize this property with the following table, giving the probability of at least one catastrophe within $n$ years (rows), for an event with return period $T$ (columns),

| $n$ (years) \ $T$ | 10 | 20 | 50 | 100 | 200 |
|---|---|---|---|---|---|
| 10 | 65.1% | 40.1% | 18.3% | 9.6% | 4.9% |
| 20 | 87.8% | 64.2% | 33.2% | 18.2% | 9.5% |
| 50 | 99.5% | 92.3% | 63.6% | 39.5% | 22.5% |
| 100 | 99.9% | 99.4% | 86.7% | 63.4% | 39.5% |
| 200 | 99.9% | 99.9% | 98.2% | 86.6% | 63.3% |
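
The table above can be reproduced with the following sketch,

> T=c(10,20,50,100,200)   # return periods
> n=c(10,20,50,100,200)   # time horizons, in years
> round(100*outer(n,T,function(n,T) 1-(1-1/T)^n),1)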

The diagonal in the table above is extremely interesting. It looks like there is some kind of convergence towards a limiting value (here 63.2%). Indeed, the number of events observed over $n$ years has a binomial distribution with probability $1/T=1/n$, which converges towards the Poisson distribution with parameter 1. The probability of observing at least one catastrophe is then $1-\exp(-1)$, which is approximately 0.632.

• Rare probabilities and the Poisson distribution

The Poisson distribution keeps appearing when computing probabilities of rare events. For instance, consider the probability of having at least one incident in a nuclear plant, in France, over a 50 year period. Assume that the annual probability $p$ of an incident in a given reactor is small, e.g. 0.05%. Assume further that reactors are independent of one another, and over time. The probability of having at least one incident over 80 reactors in 50 years is (exactly)

$\mathbb{P}(N\neq0)=1-(1-p)^{50 \times 80}$

Of course, a linear approximation is not correct (even if it was mentioned in some French newspaper, as explained in an old post http://freakonometrics.blog.free.fr/…)

$\mathbb P(N\neq 0)\neq 50\times 80\times p$

On the other hand

$\mathbb P(N\neq 0)=1-(1-p)^{50\times80 } \sim1-\exp\left(-50\times80\times p \right)$

> p=0.00005
> 1-(1-p)^(50*80)
[1] 0.1812733
> 1-exp(-50*80*p)
[1] 0.1812692

which is the probability that $N$ is not null when $N$ has a Poisson distribution with parameter $\lambda=50\times80\times p$. We clearly see here an application of De Moivre's approximation in risk management.

Another way of looking at this problem is based on the following idea: given that over (roughly) 45 years of observations on 450 reactors worldwide, three major accidents were observed, including Three Mile Island (1979) and Fukushima (2011), the average time between accidents can be estimated at 16 years. For a single reactor, we can assume that the average waiting time before an incident is 450 times 16 years, i.e. 7200 years. In other words, the probability of having one incident, over one year, for one reactor is 1 over 7200 (this is the idea behind the return period concept). If we assume that accidents occur randomly and independently of each other (as defined above), then the number of major accidents observed over a period of 50 years in France follows a Poisson distribution with parameter $50/(7200/80)$. Hence, the probability of having at least one major accident over 50 years, with 80 reactors, can be estimated by

$1-\exp(-50\times 80/7200)$

i.e.

> 1-exp(-50*80/7200)
[1] 0.4262466

(keeping in mind all the uncertainty around the estimated waiting time before a major accident for a single reactor!).

# Somewhere else, part 30

A nice graph this week,

and as usual, a lot of interesting posts and articles, here and there

On the other hand, few documents in French this week,

Did I miss something?

# Regression tree using Gini’s index

In order to illustrate the construction of a regression tree (using the CART methodology), consider the following simulated dataset,

> set.seed(1)
> n=200
> X1=runif(n)
> X2=runif(n)
> P=.8*(X1<.3)*(X2<.5)+
+   .2*(X1<.3)*(X2>.5)+
+   .8*(X1>.3)*(X1<.85)*(X2<.3)+
+   .2*(X1>.3)*(X1<.85)*(X2>.3)+
+   .8*(X1>.85)*(X2<.7)+
+   .2*(X1>.85)*(X2>.7)
> Y=rbinom(n,size=1,P)
> B=data.frame(Y,X1,X2)

with one dichotomous variable (the variable of interest, $Y$), and two continuous ones (the explanatory variables $X_1$ and $X_2$).

> tail(B)
Y        X1        X2
195 0 0.2832325 0.1548510
196 0 0.5905732 0.3483021
197 0 0.1103606 0.6598210
198 0 0.8405070 0.3117724
199 0 0.3179637 0.3515734
200 1 0.7828513 0.1478457

The theoretical partition is the following
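
A sketch (my own) redrawing that theoretical partition, shading the three regions where $Y$ equals one with probability 0.8,

> plot(0:1,0:1,col="white",xlab="X1",ylab="X2")
> rect(0,0,.3,.5,col="grey",border=NA)     # X1<.3 and X2<.5
> rect(.3,0,.85,.3,col="grey",border=NA)   # .3<X1<.85 and X2<.3
> rect(.85,0,1,.7,col="grey",border=NA)    # X1>.85 and X2<.7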

Here, the sample can be plotted below (be careful: the first variable is on the y-axis above, and on the x-axis below), with blue dots when $Y$ equals one, and red dots when $Y$ is null,

> plot(X1,X2,col="white")
> points(X1[Y=="1"],X2[Y=="1"],col="blue",pch=19)
> points(X1[Y=="0"],X2[Y=="0"],col="red",pch=19)

In order to construct the tree, we need a partition criterion. The most standard one is probably Gini's index. When $X$ is split into two classes, denoted here $\{A,B\}$, it can be written

$\text{gini}(Y|X\in\{A,B\})=-\sum_{x\in\{A,B\}}\frac{n_x}{n}\sum_{y\in\{0,1\}}\frac{n_{x,y}}{n_x}\left(1-\frac{n_{x,y}}{n_x}\right)$

or, when $X$ is split into three classes, denoted $\{A,B,C\}$,

$\text{gini}(Y|X\in\{A,B,C\})=-\sum_{x\in\{A,B,C\}}\frac{n_x}{n}\sum_{y\in\{0,1\}}\frac{n_{x,y}}{n_x}\left(1-\frac{n_{x,y}}{n_x}\right)$

etc. Here, $n_{x,y}$ are just counts of observations that belong to partition $x$ and for which $Y$ takes value $y$, with $n_x=\sum_y n_{x,y}$. But it is possible to consider other criteria, such as the chi-square distance,

$\chi^2=\sum_{x}\sum_{y}\frac{\left(n_{x,y}-n^{\perp}_{x,y}\right)^2}{n^{\perp}_{x,y}}$

where, classically, $n^{\perp}_{x,y}=\frac{n_x\cdot n_y}{n}$ is the expected count under independence, the sum over $x$ running over two classes (one knot) or three classes (two knots). Here again, the idea is to maximize that distance: the idea is to discriminate, so we want the split and $Y$ to be as far from independence as possible. To compute Gini's index consider

> GINI=function(y,i){
+ T=table(y,i)
+ nx=apply(T,2,sum)
+ pxy=T/matrix(rep(nx,each=2),2,ncol(T))
+ vxy=pxy*(1-pxy)
+ zx=apply(vxy,2,sum)
+ n=sum(T)
+ -sum(nx/n*zx)
+ }

We simply construct the contingency table, and then compute the quantity given above. Assume, first, that there is only one explanatory variable. We split the sample in two, with all possible splitting values $s$, i.e.

$\{[x_{\min},s],[s,x_{\max}]\}$

Then, we compute Gini's index for all those values. The knot is the value that maximizes Gini's index. Once we have our first knot, we keep it (call it, from now on, $s^\star$). And we reiterate, by seeking the best second choice: given one knot, consider the value that splits the sample in three and gives the highest Gini index. Thus, we consider either the following partition

$\{[x_{\min},s],[s,s^\star],[s^\star,x_{\max}]\}$

or this one

$\{[x_{\min},s^\star],[s^\star,s],[s,x_{\max}]\}$

I.e. we cut either below, or above the previous knot. And we iterate. The code can be something like that,

> X=X2
> u=(sort(X)[2:n]+sort(X)[1:(n-1)])/2
> knot=NULL
> for(s in 1:4){
+ vgini=rep(NA,length(u))
+ for(i in 1:length(u)){
+ kn=c(knot,u[i])
+ F=function(x){sum(x<=kn)}
+ I=Vectorize(F)(X)
+ vgini[i]=GINI(Y,I)
+ }
+ plot(u,vgini)
+ k=which.max(vgini)
+ cat("knot",k,u[k],"\n")
+ knot=c(knot,u[k])
+ u=u[-k]
+ }
knot 69 0.3025479
knot 133 0.5846202
knot 72 0.3148172
knot 111 0.4811517

At the first step, the value of Gini’s index was the following,

which was maximal around 0.3. Then, this value is considered as fixed. And we try to construct a partition in three parts (splitting either below or above 0.3). We get the following plot for Gini's index (as a function of this second knot)

which is maximal when we split the sample around 0.6 (which becomes our second knot). Etc. Now, let us compare our code with the standard R function,

> library(tree)
> tree(Y~X2,method="gini")
node), split, n, deviance, yval
* denotes terminal node

1) root 200 49.8800 0.4750
2) X2 < 0.302548 69 12.8100 0.7536 *
3) X2 > 0.302548 131 28.8900 0.3282
6) X2 < 0.58462 65 16.1500 0.4615
12) X2 < 0.324591 7  0.8571 0.1429 *
13) X2 > 0.324591 58 14.5000 0.5000 *
7) X2 > 0.58462 66 10.4400 0.1970 *

We do obtain similar knots: the first one is 0.302, and the second one 0.584. So, constructing a tree is not that difficult…

Now, what if we consider our two explanatory variables? The story remains the same, except that the partition is now a bit more complex to write. To find the first knot, we consider all values of the two components, and again, keep the one that maximizes Gini's index,

> n=nrow(B)
> u1=(sort(X1)[2:n]+sort(X1)[1:(n-1)])/2
> u2=(sort(X2)[2:n]+sort(X2)[1:(n-1)])/2
> gini=matrix(NA,nrow(B)-1,2)
> for(i in 1:length(u1)){
+ I=(X1<u1[i])
+ gini[i,1]=GINI(Y,I)
+ I=(X2<u2[i])
+ gini[i,2]=GINI(Y,I)
+ }
> mg=max(gini)
> i=1+sum(mg==max(gini[,2]))
> par(mfrow = c(1, 2))
> plot(u1,gini[,1],ylim=range(gini),col="green",type="b",xlab="X1",ylab="Gini index")
> abline(h=mg,lty=2,col="red")
> if(i==1){points(u1[which.max(gini[,1])],mg,pch=19,col="red")
+          segments(u1[which.max(gini[,1])],mg,u1[which.max(gini[,1])],-100000)}
> plot(u2,gini[,2],ylim=range(gini),col="green",type="b",xlab="X2",ylab="Gini index")
> abline(h=mg,lty=2,col="red")
> if(i==2){points(u2[which.max(gini[,2])],mg,pch=19,col="red")
+          segments(u2[which.max(gini[,2])],mg,u2[which.max(gini[,2])],-100000)}
> u2[which.max(gini[,2])]
[1] 0.3025479

The graphs are the following: either we split on the first component (and we obtain the partition on the right, below),

or we split on the second one (and we get the following partition),

Here, it is optimal to split on the second variable first. And actually, we get back to the one-dimensional case discussed previously: as expected, it is optimal to split around 0.3. This is confirmed with the code below,

> library(tree)
> arbre=tree(Y~X1+X2,data=B,method="gini")
> arbre$frame[1:4,]
var   n       dev      yval splits.cutleft splits.cutright
1     X2 200 49.875000 0.4750000      <0.302548       >0.302548
2     X1  69 12.811594 0.7536232      <0.800113       >0.800113
4 <leaf>  57  8.877193 0.8070175
5 <leaf>  12  3.000000 0.5000000

For the second knot, four cases should be considered: splitting on the second variable (again), either above or below the previous knot (see below on the left), or splitting on the first one; then we have either a partition below or above the previous knot (see below on the right),

Etc. To visualize the tree, the code is the following

> plot(arbre)
> text(arbre)
> partition.tree(arbre)

Note that we can also visualize the partition. Nice, isn’t it?
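
As a small addition (my own), the observations can be overlaid on that partition, to see the composition of each leaf,

> partition.tree(arbre)
> points(X1,X2,pch=19,cex=.5,col=c("red","blue")[1+Y])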

To go further, the book Classification and Regression Trees by Leo Breiman (and co-authors) is awesome. Note that there are also interesting sections in the bible Elements of Statistical Learning: Data Mining, Inference, and Prediction by Trevor Hastie, Robert Tibshirani and Jerome Friedman (which can be downloaded from http://www.stanford.edu/~hastie/…)

# Poisson regression, and minimum bias methods

In the next actuarial science lecture, we will finish regression trees, and introduce Poisson regression. The slides are online here,

I will present Poisson regression by drawing a parallel with logistic regression; the following session will cover the generalization obtained with generalized linear models. On Poisson regression, I suggest reading Frees (2010), chapter 12 (pp. 343-361), Greene (2012), section 18.3 (pp. 802-828), or de Jong & Heller (2008), chapter 6. On minimum bias methods, see de Jong & Heller (2008), section 1.3, and Sholom Feldblum's article, http://www.casact.org/…. On the transition from those methods (introduced by Robert Bailey in the 1960s, http://www.casact.org/… and http://www.casact.org/…) to regression models, I recommend Ben Zehnwirth's article, Ratemaking: From Bailey and Simon (1960) to Generalized Linear Regression Models, online at http://www.casact.org/…

As announced in the first lecture, I am trying to put the slides online as I go along, but in recent years I had become used to writing on the blackboard, so I have to type everything up. Regarding the assignment, an email will be sent by the end of the week to all the groups that have registered.

# Words are important (when dealing with statistics). But still.

In statistics, it might be difficult to know what a symbol stands for. For instance, $\widehat{\theta}$ can either be a real value, i.e. the value taken by a statistic on a given sample, or a random variable, assuming that the sample is now a collection of i.i.d. random variables. We can usually distinguish $x_i$'s (values from a given sample) from $X_i$'s (the underlying random variables, i.e. $x_i=X_i(\omega)$ for some $\omega\in\Omega$). Notations might be confusing, and it is hard to distinguish random variables from the values they take (their realizations). But usually, looking at the context, one can figure out what symbols stand for.

But sometimes it is difficult to get a proper definition, not for symbols, but for words – and most of the time, common words. Recently, I wrote a short paper claiming that it is difficult to model the number of bodily injuries related to car accidents, since it is difficult to define death. Actually, the definition of death did change a few years ago (as weird as it might sound), which caused a break in some data series.

I recently had a similar story, discussing with a pharmacist in Montréal who told me "you French are known to be the world's champions in terms of drug consumption", see e.g.

• The French are Europe’s champion medicine-takers” in http://economist.com/…, mentioning “heavy drug-consumption culture
• The data show that drug consumption in France remains one of the largest in Europe” in http://bizcovering.com/…
• France has one of the largest drug markets in the world and the drug consumption per capitahttp://ispor.org/… (among so many articles)

I do not think I am a drug addict (I might be – like most of my colleagues – a coffee addict, but as Paul Erdős  – or more probably Alfréd Rényi – said once, “a mathematician is a device for turning coffee into theorems“). The main problem here is the notion of “consumption“. The economics interpretation is simply that someone buys a product or a service (see http://dictionary.reference.com/…). There is also the food-related interpretation, where consuming means ingesting, i.e. eating or drinking (see http://dictionary.cambridge.org/…).

So pill and drug "consumption" is ambiguous: is it the number of pills purchased, ingested (actually consumed), or prescribed? The first thing one should remember is that Social Security in France refunds (almost) all medications prescribed by a doctor. So it is uncommon to leave a doctor's office without a prescription, at least for aspirin: a visit to the doctor is usually, in France, an opportunity to stock up on over-the-counter drugs. The second thing is that there is a major difference between France and North America at the pharmacy. In Montréal, for instance, if I have a prescription for 12 pills, then the pharmacist gives me exactly 12 pills (from a big pot). In France, pills are sold in prepackaged boxes, so if a box contains 10 pills, I will get 2 boxes, just to be sure I get my 12 pills. From a medical point of view, I will consume my 12 pills, but from an economic perspective, I will consume 20. So comparing statistics is extremely difficult, not because of the maths, but because it is difficult to define (even simple) concepts.

# Inglés ? Französisch ?

When I started to blog, I found it "natural" to blog in French. Not "natural" in the way Dominique Bouhours sees it: according to Bouhours, only the French language exactly reflects the natural way of thinking, as he wrote it,

La langue française est peut-être la seule qui suive exactement l’ordre naturel, et qui exprime les pensées en la manière qu’elles naissent dans l’esprit

"Natural" in the sense that French is my mother language. And perhaps also because, as a Frenchman, I did not really see the point of using another language… Yes, somehow, I still feel that we (French people) believe that Antoine de Rivarol was right when he said, in his Discours sur l'universalité de la langue française, that French is innately superior to other languages: French has an element of probity attached to its genius; it is not the language of the French, it is the language of humanity,

Dégagée de tous les protocoles que la bassesse invente pour la vanité et le pouvoir, elle en est plus faite pour la conversation, lien des hommes et charme de tous les âges, et puisqu’il faut le dire, elle est de toutes les langues la seule qui ait une probité attachée à son génie. Sûre, sociale, raisonnable, ce n’est plus la langue française, c’est la langue humaine

But I might be exaggerating here: if I can express more subtle things in French, it simply means that my English is too poor. I mean, in my work as a professor and a researcher, I do use English on a daily basis (reading textbooks, discussing with colleagues or co-authors, writing research papers, etc.). But blogging is just for fun… so at first, I started to blog in French.

Then, while I was giving a short course in Brazil, I understood that some students there actually did visit my blog. They were using online translation – which worked quite poorly at the time – and some asked me if I could write more posts in English. I also had a request from Tal, the editor of http://r-bloggers.com/, who asked me to share the posts containing R code with the R community (i.e. non-French-speaking people). So I decided to write more and more posts in English, except perhaps posts related to my undergraduate courses.

I thought about all that recently because, while I was in Europe last week, some colleagues in Amsterdam mentioned my blog, claiming that there were (still) too many posts in French – while, on the other hand, some French colleagues still ask me why I do not write only in French.

Somehow, I still find it more comfortable to write in French, especially on French-related issues (e.g. politics and polls, or university-related debates). But I still like the idea that my posts may be interesting for students in Brazil, actuaries in the U.K., or researchers in Norway… So I guess I will try to write more posts in English. Which is not that complicated actually, since, as a French academician (or George Bernard Shaw, the source is not clear) once put it, "English is a language that is relatively easy to speak poorly". So, let's do it…

# Logistic regression and trees

For next Wednesday's class, the dataset used will be taken from Jed Frees' book, http://instruction.bus.wisc.edu/jfrees/…

> baseavocat=read.table("http://freakonometrics.free.fr/AutoBI.csv",header=TRUE,sep=",")
> tail(baseavocat)
CASENUM ATTORNEY CLMSEX MARITAL CLMINSUR SEATBELT CLMAGE  LOSS
1335   34204        2      2       2        2        1     26 0.161
1336   34210        2      1       2        2        1     NA 0.576
1337   34220        1      2       1        2        1     46 3.705
1338   34223        2      2       1        2        1     39 0.099
1339   34245        1      2       2        1        1     18 3.277
1340   34253        2      2       2        2        1     30 0.688

We have a dichotomous variable indicating whether an insured – following a road accident – was represented by an attorney (1 if yes, 2 if no). We know the sex of the insured (1 for men, 2 for women) and the marital status (1 if married, 2 if single, 3 for widowed, and 4 for divorced). We also know whether or not the insured was wearing a seatbelt when the accident occurred (1 if yes, 2 if no, and 3 if the information is unknown). Finally, there is a variable indicating whether the driver of the vehicle was insured or not (1 if yes, 2 if no, and 3 if the information is unknown). We will recode the data a little, to make it easier to read, as in the sketch below.
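
Here is a sketch of that recoding (the factor labels are my own reading of the coding described above, not taken from the documentation of the dataset),

> baseavocat$ATTORNEY=factor(baseavocat$ATTORNEY,levels=1:2,
+ labels=c("lawyer","no lawyer"))
> baseavocat$CLMSEX=factor(baseavocat$CLMSEX,levels=1:2,
+ labels=c("male","female"))
> baseavocat$MARITAL=factor(baseavocat$MARITAL,levels=1:4,
+ labels=c("married","single","widowed","divorced"))
> baseavocat$SEATBELT=factor(baseavocat$SEATBELT,levels=1:3,
+ labels=c("yes","no","unknown"))
> baseavocat$CLMINSUR=factor(baseavocat$CLMINSUR,levels=1:3,
+ labels=c("insured","uninsured","unknown"))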

The slides for the class are online here,

On regression trees, I will put a post online to illustrate the method. In the meantime, theoretical complements can be found at http://genome.jouy.inra.fr/…, http://ensmp.fr/…, or http://ujf-grenoble.fr/… (for information, we will only cover the CART method). I can also point to Stéphane Tuffery's book (and blog), or (in English) Richard Berk's book, of which a summary is online at http://crim.upenn.edu/….

The following week, we will cover Poisson regression and minimum bias methods, and introduce generalized linear models. I refer to the chapter on a priori ratemaking in Denuit & Charpentier (2005), chapters 12 and 13 of Frees (2010), or chapters 5 and 6 of de Jong & Heller (2008). For the more curious who want to understand the connections between generalized linear models and credibility ratemaking, I refer to Klinker (2010).

# Amsterdam, PhD defense

Last week, I was involved in the PhD defense of Julien Tomas, with Rob Kaas (promotor), Frédéric Planchet (co-promotor), Katrien Antonio, Marc Goovaerts, Ann De Schepper and Michel Vellekoop. The PhD thesis – entitled Quantifying Biometric Life Insurance Risks With Non-Parametric Smoothing Methods – can be downloaded from http://dare.uva.nl/… and http://tel.archives-ouvertes.fr/.

The R codes will be available soon on my blog (and on Julien’s new website http://www.likelihood.me/).

# Somewhere else, part 29

This week, I was travelling in Europe (from Lausanne to Amsterdam), and since I have no cell phone, it is sometimes a bit difficult to follow blogs and the news on the internet. Nevertheless, this week, there were – at least – three very interesting articles,

on different topics, and still, other posts here and there,

and, this week again, a few posts and articles in French,

• ah! the creativity of advertisers http://souslelogo.com/ … "the words most used in advertising slogans"

# Amsterdam

I will be in Amsterdam at the end of this week, sitting on the jury of the PhD defense of Julien Tomas, whose thesis is entitled "Quantifying Biometric Life Insurance Risks With Non-Parametric Smoothing Methods" (the thesis will probably be online soon). But before that, I will give a talk at the actuarial seminar at UvA. My last visit was a real pleasure, and it should be the same this time too. I will give a talk this Thursday on "R for actuarial science". The slides can be downloaded from here.

# Lausanne

I will be back in Lausanne (I was already there last summer) to spend a few days visiting Florian, at HEC Lausanne. I will also give a talk on old and new results on (standard) families of copulas, with a discussion on tail dependence. Slides can be downloaded from here,

# Somewhere else, part 28

Once again, interesting posts and articles, here and there,

And, as every week, a few posts and articles in French,