All posts by Arthur Charpentier

Arthur Charpentier, professor of Actuarial Science in Montréal. Former assistant professor at ENSAE Paristech, associate professor at École Polytechnique, and assistant professor in Economics at Université de Rennes 1. Graduated from ENSAE, with a Master in Mathematical Economics (Paris Dauphine) and a PhD in Mathematics (KU Leuven), and Fellow of the French Institute of Actuaries.

Computing AIC on a Validation Sample

This afternoon, we saw in the data science training that it is possible to use the AIC criterion for model selection.

> library(splines)
> AIC(glm(dist ~ speed, data=train_cars, 
  family=poisson(link="log")))
[1] 438.6314
> AIC(glm(dist ~ speed, data=train_cars, 
  family=poisson(link="identity")))
[1] 436.3997
> AIC(glm(dist ~ bs(speed), data=train_cars, 
  family=poisson(link="log")))
[1] 425.6434
> AIC(glm(dist ~ bs(speed), data=train_cars, 
  family=poisson(link="identity")))
[1] 428.7195

And I was asked why we don't use a training sample to fit a model, and then a validation sample to compare the predictive properties of those models, penalizing by the complexity of the model. But it turns out that it is difficult to compute the AIC of those models on a different dataset. I mean, it is possible to write down the likelihood (since we have a Poisson model), but I want code that would work for any model, any distribution…
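For the Poisson case, the computation can actually be done by hand. Here is a minimal sketch, assuming a validation dataset valid_cars (a name of mine, not from the post) with the same columns as train_cars,

reg = glm(dist ~ speed, data=train_cars,
  family=poisson(link="log"))
lambda = predict(reg, newdata=valid_cars, type="response")
# out-of-sample log-likelihood, Poisson case only
logL = sum(dpois(valid_cars$dist, lambda, log=TRUE))
# AIC-type criterion computed on the validation sample
-2*logL + 2*length(coef(reg))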

Fortunately, Heather suggested a very clever idea, using her package.

And actually, it works well.

Continue reading Computing AIC on a Validation Sample

“Improving Segmentation” (using Lorenz curves, or sort of)

This afternoon, André sent me an interesting graph about the use of Lorenz curves in the context of insurance pricing (and modeling).

It is some sort of Lorenz curve, upside down, with the proportion of the population on the x-axis and the proportion of the losses on the y-axis. The important point is that the population is sorted according to their risk, i.e. their premium. The code to generate such a curve is actually quite simple,

L <- function(u, varx = "premium", vary = "losses"){
  # sort the portfolio by decreasing premium (riskiest policies first)
  base <- base[order(base[, varx], decreasing = TRUE), ]
  # cumulative proportion of the population
  base$cum <- (1:nrow(base)) / nrow(base)
  # share of the total losses generated by the 100*u % riskiest policies
  return(sum(base[base$cum <= u, vary]) / sum(base[, vary]))
}
 
vu=seq(0,1,by=.01)
vv=Vectorize(function(u) L(u))(vu)

My concern was more about two labels on the figure: “perfect pricing” in the top-left corner, and “average pricing” on the first diagonal. What could that possibly mean? Is there even such a thing as a “perfect pricing”? In order to understand what we plot here, let us generate some dataset, and fit some models, including things that might be seen as the “perfect model”: the price based on the parameters used to generate the data, and the model used to generate the data, fitted on the data.
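To give an idea of what such a simulation could look like, here is a minimal sketch with a toy portfolio of my own (the Gamma losses and the premium below are mine, not necessarily what is used in the rest of the post), where the “perfect” premium is the true expected loss used to generate the data,

set.seed(1)
n = 10000
x = rnorm(n)
# toy portfolio: the true expected loss is exp(x), used as the 'perfect' premium
base = data.frame(premium = exp(x),
  losses = rgamma(n, shape=2, rate=2/exp(x)))
vu = seq(0,1,by=.01)
vv = Vectorize(function(u) L(u))(vu)
plot(vu, vv, type="l")    # concentration curve, policies sorted by decreasing premium
abline(a=0, b=1, lty=2)   # the diagonal, i.e. 'average pricing'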

Continue reading “Improving Segmentation” (using Lorenz curves, or sort of)

Modelling Occurrence of Events, with some Exposure

This afternoon, an interesting point was raised, and I wanted to get back to it (since I published a post on that same topic a long time ago): how can we adapt a logistic regression when the observations do not all have the same exposure? Here the model is the following:

  • the occurrence of an event $Y_i^\star$ on the period $[0,1]$ is unobserved
  • the occurrence of an event $Y_i$ on $[0,E_i]$ is observed (as well as $E_i$)

If we assume that the ‘occurrence of an event’ is the first occurrence of a Poisson process, we can prove that

$$\{Y_i^\star=0\}=\{Y_i=0\}\cap\{\text{no event on }(E_i,1]\}$$

i.e. no event occurs on $[0,1]$ if no event occurs on $[0,E_i]$ and no event occurs on $(E_i,1]$. Assuming independence between the two (and stationarity of the increments of the process), we can prove that we have

$$\mathbb{P}(Y=0)=\mathbb{P}(N=0)^E$$

where $N$ denotes the number of events over a period of length 1.

In words, it means that the probability of not having a claim during the first six months of the year is the square root of the probability of not having a claim over the whole year (take $E=1/2$ above). Which makes sense.
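As a minimal sketch (with simulated data and names of my own, not necessarily the approach developed in the rest of the post), note that a complementary log-log link with an offset equal to the log of the exposure gives exactly that structure, since it implies $\mathbb{P}(Y_i=0)=\exp(-E_i\,e^{\boldsymbol{x}_i^{\text{T}}\boldsymbol{\beta}})$,

set.seed(1)
n = 1000
x = rnorm(n)
E = runif(n)                      # exposure, between 0 and 1
lambda = exp(-1 + .5*x)           # annual intensity of the Poisson process
Y = (rpois(n, lambda*E) > 0)*1    # at least one event observed on [0,E]
# binary regression with complementary log-log link and log-exposure offset
reg = glm(Y ~ x + offset(log(E)), family=binomial(link="cloglog"))
summary(reg)                      # roughly recovers the intensity parameters -1 and .5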

Continue reading Modelling Occurrence of Events, with some Exposure

Back at ENSAE, course on Non-Life Insurance Econometrics this Autumn

You might have seen recent pictures of the ENSAE Paristech building. Unfortunately, it will only open next year, in 2016, at Saclay.

This year, 2015-2016, is the last year of ENSAE in Malakoff.

I will be back this autumn with a course on “non-life insurance econometrics”. It can be seen as an updated version of the course we gave together with François Bucchini from 2003 to 2008.

The first course will start on October 12th. I will upload the syllabus and the first lecture notes by then. To be continued…

Temperature Variations

Yesterday, I stumbled (via limportant.fr/) upon an interesting documentary, available online at francetvinfo.fr/monde/environnement/. But the opening passage (transcribed on the website) left me with a very strange impression,

In Greenland, the ice is melting before our eyes. This year, the thermometer climbed to 25 degrees above zero. Eight years ago, during the same period, the blizzard was blowing and scientists had to cope with temperatures of -35 degrees. A 65-degree amplitude that worries the researchers who have been observing the ice sheet for more than 25 years.

What does “worrying” mean here? Is this a new phenomenon? Almost 5 years ago, when I arrived in Montréal, I posted a quick note comparing temperatures in Rennes and in Montréal. In particular, there were these two figures, with annual temperatures in Rennes,

with, in red, an observed upper quantile, and in blue a lower quantile (I do not take the maximum and minimum, to smooth things a bit). Let us say that differences of 20 degrees, whatever the time of the year, are not impossible, far from it. You can perfectly well get 30 degrees one summer, and 10 the following summer (or, to be more precise, it can be 30 degrees on one July 14th, and 10 on the next July 14th). In Montréal, we had

The gap here is a bit larger. Unlike in Rennes, it is larger in winter than in summer. And at that time, the “printemps érable” had not yet taken place, with almost 25 degrees in March, while it is still possible to reach -20 (a 45-degree gap). I could also put back online a post I once wrote, asking whether the temperature curve might not be a random walk. Temperature differences of more than 40 degrees are not rare, between the maximum for a given day and the minimum for the same day in another year. The most troubling thing (this is only my experience) was rather gaining 40 degrees within a week, going from -30 to +10 (and then plunging back to -20 two days later).

 

What about Greenland? From eca.knmi.nl/dailydata, one can retrieve daily data on temperature, wind, etc., for more than 60 weather stations in Greenland. With a small script, we can visualise all the series (in red and in blue, the warmest and the coldest years on average, as long as missing values are ignored). The code is fairly simple,

setwd('/home/Documents/temperature-greenland/')
fichiers=list.files()
for(i in 1:length(fichiers)){
# extract the station name from the file header
sc=scan(fichiers[i],what="char")[50:150]
i1=which(sc=="[DENMARK],")-1
i2=which(sc=="(STAID:")-1
station=paste(sc[i1:i2],collapse=" ")
# daily mean temperature (TG, in tenths of a degree)
temp=read.table(fichiers[i],skip=20,sep=',',
header=TRUE)
date=as.Date(as.character(temp$DATE),format=
"%Y%m%d")
m=format(date, "%m")
d=format(date, "%d")
y=format(date, "%Y")
# map all observations to a common (dummy) year, to overlay the years
date2=as.Date(paste("2000",m,d),format="%Y%m%d")
temperature=temp$TG/10
# remove the missing-value codes (large negative values)
temperature[temperature<(-200)]=NA
# number of non-missing observations per year
B=aggregate(temperature,by=list(y),FUN=
function(x) length(x[!is.na(x)]))
# keep only the years with at least 250 observations
yr=B[which(B[,2]>250),1]
if(length(yr)>2){
# average temperature of each (sufficiently complete) year
A=aggregate(temperature[y%in%yr],by=list(
y[y%in%yr]),FUN=function(x) mean(x,na.rm=TRUE))
A=A[!is.nan(A$x),]
ymin=A[which.min(A[,2]),1]
ymax=A[which.max(A[,2]),1]
plot(date2,temperature,ylim=c(-60,20))
title(paste(fichiers[i],", ",station,sep=""))
# coldest year in blue, warmest year in red
lines(date2[y==ymin],temperature[y==ymin],
col="blue",lwd=3)
lines(date2[y==ymax],temperature[y==ymax],
col="red",lwd=3)
legend(date2[5],20,c(ymax,ymin),
col=c("red","blue"),lwd=3)
}}

For instance, if I take a few outputs at random (all those with enough observations look similar),

Now, I have posted enough entries on this blog to believe that no one will think that I am skeptical about global warming, and the shrinking ice reservoir really worries me (we worked for a while on data from the Arctic regions, in an old post, we are winter).

But the temperature gaps mentioned in the introduction puzzle me. Even though I do not have 2015 in my data, I do not observe the gaps that are mentioned (even though large gaps would be normal, like the 30- or 40-degree differences that can be observed in Montréal, but rather in winter). And I was all the more surprised since, for now, the North Atlantic was precisely the region that was not experiencing a heat wave,

for May, and for June

(I am waiting for the July 2015 map, on ncdc.noaa.gov/ or nsstc.uah.edu/climate/).

Choosing a Classifier

In order to illustrate the problem of choosing a classification model, consider some simulated data,

> n = 500
> set.seed(1)
> X = rnorm(n)
> ma = 10-(X+1.5)^2*2
> mb = -10+(X-1.5)^2*2
> M = cbind(ma,mb)
> set.seed(1)
> Z = sample(1:2,size=n,replace=TRUE)
> Y = ma*(Z==1)+mb*(Z==2)+rnorm(n)*5
> df = data.frame(Z=as.factor(Z),X,Y)

  • The Holdout Method: Training and Testing Datasets

A first strategy is to split the dataset in two parts, a training dataset and a testing dataset.

> df1 = training = df[1:300,]
> df2 = testing  = df[301:500,]

The two datasets can be visualised below, with the training dataset on top, and the testing dataset at the bottom,

> plot(df1$X,df1$Y,pch=19,col=c(rgb(1,0,0,.4),
+ rgb(0,0,1,.4))[df1$Z])
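As a minimal sketch of the idea (a hypothetical continuation, with a plain logistic regression rather than the classifiers discussed in the rest of the post), we can fit a model on the training dataset and compute the misclassification rate on the testing dataset,

reg = glm(Z ~ X + Y, data=df1, family=binomial)
p = predict(reg, newdata=df2, type="response")   # predicted probability that Z is 2
pred = ifelse(p > .5, "2", "1")
mean(pred != df2$Z)                              # misclassification rate on the testing dataset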

Continue reading Choosing a Classifier

Pricing Game (100% Actuaires)

In early November, with Romuald Elie and Jérémie Jakubowicz, we will run a “pricing game” during the 100% Actuaires day. We provide a motor insurance dataset, with 2 years of observations on 100,000 insurance policies. Each team must send premium proposals for a little over 36,000 potential policyholders, for the 3rd year, for motor third-party liability (material damage and bodily injury, both pieces of information, frequency and cost, being in the datasets provided).

We will then play the role of a broker (or of a price-comparison website) between the teams (with a fairly simple principle: each policyholder picks the cheapest insurer, all of them offering the same coverage, or a few other variants to spice up the analysis). The full description is online.

The R code to read the datasets is the following

> training <- read.csv2(
+ "http://freakonometrics.free.fr/training.csv")
> dim(training)
[1] 100021     20
> pricing <- read.csv2(
+ "http://freakonometrics.free.fr/pricing.csv")
> dim(pricing)
[1] 36311    15

Anyone can take part; there is no need to be registered for the day to send me something! However, as indicated in the description, I want a file with the policy number and the proposed premium, but also the code and a quick description of the methodology and of the variables used. See you in November for the analysis of the results of the game.

An Update on Boosting with Splines

In my previous post, An Attempt to Understand Boosting Algorithm(s), I was puzzled by the convergence of the boosting algorithm when I was using spline functions (more specifically, piecewise-linear, continuous regression functions). I was using

> library(splines)
> fit=lm(y~bs(x,degree=1,df=3),data=df)

The problem with that spline function is that knots seem to be fixed. The iterative boosting algorithm is

  • start with some regression model $m_1(\boldsymbol{x})$
  • compute the residuals, including some shrinkage parameter, $\varepsilon_1=y-\nu m_1(\boldsymbol{x})$

then the strategy is to model those residuals

  • at step $k$, consider the regression of $\varepsilon_{k-1}$ on $\boldsymbol{x}$, yielding $m_k(\boldsymbol{x})$
  • update the residuals, $\varepsilon_k=\varepsilon_{k-1}-\nu m_k(\boldsymbol{x})$

and to loop. Then set

$$\widehat{m}(\boldsymbol{x})=\sum_k \nu\, m_k(\boldsymbol{x})$$

I thought that boosting would work well if, at each step $k$, it was possible to change the knots. But the output was quite disappointing: boosting does not improve the prediction here, and it looks like the knots do not change. Actually, if we select the ‘best’ knots at each step, the output is much better. The dataset is still

> n=300
> set.seed(1)
> u=sort(runif(n)*2*pi)
> y=sin(u)+rnorm(n)/4
> df=data.frame(x=u,y=y)

For an optimal choice of knot locations, we can use

> library(freeknotsplines)
> xy.freekt=freelsgen(df$x, df$y, degree = 1, 
+ numknot = 2, 555)

The code of the previous post can simply be updated

> v=.05
> library(splines)
> xy.freekt=freelsgen(df$x, df$y, degree = 1, 
+ numknot = 2, 555)
> fit=lm(y~bs(x,degree=1,knots=
+ xy.freekt@optknot),data=df)
> yp=predict(fit,newdata=df)
> df$yr=df$y - v*yp
> YP=v*yp
>  for(t in 1:200){
+    xy.freekt=freelsgen(df$x, df$yr, degree = 1,
+    numknot = 2, 555)
+ fit=lm(yr~bs(x,degree=1,knots=
+     xy.freekt@optknot),data=df)
+    yp=predict(fit,newdata=df)
+    df$yr=df$yr - v*yp
+    YP=cbind(YP,v*yp)
+  }
>  nd=data.frame(x=seq(0,2*pi,by=.01))
>  viz=function(M){
+    if(M==1)  y=YP[,1]
+    if(M>1)   y=apply(YP[,1:M],1,sum)
+    plot(df$x,df$y,ylab="",xlab="")
+    lines(df$x,y,type="l",col="red",lwd=3)
+    fit=lm(y~bs(x,degree=1,df=3),data=df)
+    yp=predict(fit,newdata=nd)
+    lines(nd$x,yp,type="l",col="blue",lwd=3)
+    lines(nd$x,sin(nd$x),lty=2)}
 
>  viz(100)

I like that graph. I had the intuition that using (simple) splines would be possible, and indeed, we get a very smooth prediction.

Variable Selection using Cross-Validation (and Other Techniques)

A natural technique to select variables in the context of generalized linear models is to use a stepwise procedure. It is natural, but controversial, as discussed by Frank Harrell in a great post, clearly worth reading. Frank mentions 10 points against a stepwise procedure.

  • It yields R-squared values that are badly biased to be high.
  • The F and chi-squared tests quoted next to each variable on the printout do not have the claimed distribution.
  • The method yields confidence intervals for effects and predicted values that are falsely narrow (see Altman and Andersen (1989)).
  • It yields p-values that do not have the proper meaning, and the proper correction for them is a difficult problem.
  • It gives biased regression coefficients that need shrinkage (the coefficients for remaining variables are too large; see Tibshirani (1996)).
  • It has severe problems in the presence of collinearity.
  • It is based on methods (e.g., F tests for nested models) that were intended to be used to test prespecified hypotheses.
  • Increasing the sample size does not help very much (see Derksen and Keselman (1992)).
  • It allows us to not think about the problem.
  • It uses a lot of paper.
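An alternative is to compare sets of covariates on their out-of-sample performance, using k-fold cross-validation. Here is a minimal generic sketch of my own (for a binary 0/1 response, and not necessarily the procedure used in the rest of the post),

cv_deviance = function(formula, data, k=10){
  fold = sample(rep(1:k, length.out=nrow(data)))
  dev = 0
  for(j in 1:k){
    reg = glm(formula, data=data[fold!=j,], family=binomial)
    p = predict(reg, newdata=data[fold==j,], type="response")
    y = data[fold==j, all.vars(formula)[1]]
    dev = dev - 2*sum(y*log(p) + (1-y)*log(1-p))   # out-of-sample deviance
  }
  dev
}
# keep the set of covariates with the smallest cross-validated deviance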

Continue reading Variable Selection using Cross-Validation (and Other Techniques)

An Attempt to Understand Boosting Algorithm(s)

Last Tuesday, at the annual meeting of the French Economic Association, I was having lunch with Alfred, and while we were chatting about modeling issues (econometric models versus machine-learning prediction), he asked me what boosting was. Since I could not be very specific, we looked at the Wikipedia page.

Boosting is a machine learning ensemble meta-algorithm for reducing bias primarily and also variance in supervised learning, and a family of machine learning algorithms which convert weak learners to strong ones

One should admit that it is not very informative. But at least, there is the idea that ‘weak learners’ can be used to provide a good predictor. Now, to be honest, I guess I understand the concept. But I still can’t reproduce what I got with standard ‘boosting’ packages.

There are a lot of publications about the concept of ‘boosting’. In 1988, Michael Kearns published Thoughts on Hypothesis Boosting, which is probably the oldest one. About the algorithms, it is possible to find some references: consider for instance Improving Regressors using Boosting Techniques, by Harris Drucker, or The Boosting Approach to Machine Learning: An Overview, by Robert Schapire, among many others. In order to illustrate the use of boosting in the context of regression (and not classification, since I believe it provides a better visualisation), consider the section on boosting in Dong-Sheng Cao’s The boosting: A new idea of building models.

Continue reading An Attempt to Understand Boosting Algorithm(s)

‘Variable Importance Plot’ and Variable Selection

Classification trees are nice. They provide an interesting alternative to a logistic regression. I started to include them in my courses maybe 7 or 8 years ago. The question is nice (how to get an optimal partition), the algorithmic procedure is nice (the trick of splitting according to one variable, and only one, at each node, and then to move forward, never backward), and the visual output is just perfect (with that tree structure). But the prediction can be rather poor. The performance of that algorithm can hardly compete with a (well-specified) logistic regression.

Then I discovered forests (see Leo Breiman’s page for a detailed presentation). Being a huge fan of bootstrap procedures, I loved the idea. In regression models, I usually mention the bootstrap to avoid asymptotic approximations: we bootstrap the rows (the observations). In the case of random forests, I have to admit that the idea of randomly selecting a set of possible variables at each node is very clever. The performance is much better, but interpretation is usually more difficult. And there is something that I love when there are a lot of covariates: the variable importance plot, which is something that we can hardly get with econometric models (please let me know if I’m wrong).

In order to illustrate, let us generate a large dataset. Not necessarily huge, but large, so that we really have to select variables.  Since it is more interesting if we have possibly correlated variables, we need a covariance matrix. There is a nice package in R to randomly generate covariance matrices.

> set.seed(1)
> n=500
> library(clusterGeneration)
> library(mnormt)
> S=genPositiveDefMat("eigen",dim=15)
> S=genPositiveDefMat("unifcorrmat",dim=15)
> X=rmnorm(n,varcov=S$Sigma)
> library(corrplot)
> corrplot(cor(X), order = "hclust")

See Ghosh & Henderson (2003) for more details on the methodology.
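As a quick preview of what the variable importance plot looks like (with a toy response of my own, for illustration only, not the model built in the rest of the post),

library(randomForest)
# toy response, driven by the first two covariates only
df = data.frame(Y = X[,1] - X[,2] + rnorm(n), X)
rf = randomForest(Y ~ ., data=df, importance=TRUE, ntree=500)
varImpPlot(rf)   # %IncMSE and IncNodePurity, for each covariate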

Continue reading ‘Variable Importance Plot’ and Variable Selection

p-hacking, or cheating on a p-value

Yesterday evening, I discovered some interesting slides on False-Positives, p-Hacking, Statistical Power, and Evidential Value, via a post on Twitter. More precisely, there was this slide on how to cheat (because that’s basically what it is) to get a ‘good’ model (by targeting the p-value)

As mentioned by @david_colquhoun, one should be careful when reading the slides: some statisticians might have a heart attack when they read some of them.

But still, there are interesting points in that slide.
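To see why targeting the p-value is indeed cheating, here is a small simulation of my own, illustrating one classic p-hacking strategy (optional stopping, i.e. adding observations until the test becomes significant): under the null hypothesis, the proportion of ‘significant’ results ends up well above the nominal 5%,

set.seed(1)
phack = function(){
  x = rnorm(20)                   # the null hypothesis (zero mean) is true
  while(t.test(x)$p.value > .05 & length(x) < 200)
    x = c(x, rnorm(10))           # peek at the p-value, then add 10 more observations
  t.test(x)$p.value < .05
}
mean(replicate(1e3, phack()))     # clearly above 5%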

Continue reading p-hacking, or cheating on a p-value