ENSAE, Assignment 2

For the second assignment, I propose building a count regression model. The code to retrieve the dataset is the same as for the first assignment (we will work with the same dataset),

rm(list=ls())
download.file(url="http://freakonometrics.free.fr/base_ensae_1.RData",destfile="base.RData")
load("base.RData")

From the base_ensae_1 dataset, propose two models (a Poisson regression, and one model among those seen in class) to model, at your choice, the variable Nb1 or Nb2, giving the number of claims that occurred on an insurance policy. As in class, the goal is to explain the annual claim frequency.

The report should be between 5 and 10 pages, explaining the models retained and the choices that were made (to turn continuous variables into categorical ones, or on the contrary to smooth them, etc.).

In addition to the report, I want a prediction from both models on the pricing dataset: send me, by email, a csv file containing the policy number, PolNum, and the annual claim frequencies predicted by the two models (Freq1 and Freq2, respectively).

pricing <- read.csv2("http://freakonometrics.free.fr/pricing.csv")
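
As a purely indicative sketch of the expected deliverable (the covariate names DrivAge and CarAge below are hypothetical placeholders: use the variables actually available in base_ensae_1), one could fit the Poisson regression and export the csv file along these lines,

# minimal sketch, not a reference solution: replace the hypothetical
# covariates (DrivAge, CarAge) with variables present in base_ensae_1
reg1 <- glm(Nb1 ~ DrivAge + CarAge, family = poisson(link = "log"),
            data = base_ensae_1)
reg2 <- reg1  # placeholder for the second count model seen in class
out <- data.frame(PolNum = pricing$PolNum,
                  Freq1 = predict(reg1, newdata = pricing, type = "response"),
                  Freq2 = predict(reg2, newdata = pricing, type = "response"))
write.csv(out, file = "predictions.csv", row.names = FALSE)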

The whole thing (pdf report and csv file) must be sent by the end of the Christmas break, by email, to arthur.charpentier@gmail.com. The first page of the report must give the names of the two students in the pair.

One small clarification: a (brief) description of the variables is given below,

Variable Importance with Correlated Features

Variable importance graphs are a great tool to see, in a model, which variables are interesting. Since we usually use them with random forests, they seem to work well with (very) large datasets. The problem with large datasets is that a lot of features are 'correlated', and in that case the values in variable importance plots can hardly be interpreted and compared. Consider for instance a very simple linear model (the 'true' model, used to generate the data),

$$Y=1+2X_1-2X_3+\varepsilon$$

Here, we use a random forest to model the relationship between the features, but we also consider another feature, $X_2$, not used to generate the data, that is correlated with $X_1$. And we consider a random forest on those three features, $X_1$, $X_2$ and $X_3$.

In order to get some more robust results, I generate 100 datasets, of size 1,000.

library(mnormt)

library(randomForest)

impact_correl=function(r=.9){
  nsim=100                      # 100 simulated datasets, as stated above
  IMP=matrix(NA,3,nsim)
  n=1000                        # size of each dataset
  R=matrix(c(1,r,r,1),2,2)      # correlation r between X1 and X2
  for(s in 1:nsim){
    X1=rmnorm(n,varcov=R)       # (X1,X2) bivariate Gaussian
    X3=rnorm(n)                 # X3 independent of (X1,X2)
    Y=1+2*X1[,1]-2*X3+rnorm(n)  # the 'true' model
    db=data.frame(Y=Y,X1=X1[,1],X2=X1[,2],X3=X3)
    RF=randomForest(Y~.,data=db)
    IMP[,s]=importance(RF)      # importance of X1, X2 and X3
  }
  apply(IMP,1,mean)}            # average importance over the simulations

C=c(seq(0,.6,by=.1),seq(.65,.9,by=.05),.99,.999)
VI=matrix(NA,3,length(C))
for(i in 1:length(C)){VI[,i]=impact_correl(C[i])}

plot(C,VI[1,],type="l",col="red")
lines(C,VI[2,],col="blue")
lines(C,VI[3,],col="purple")

The purple line on top is the variable importance of $X_3$, which is rather stable (almost constant, as a first-order approximation). The red line is the variable importance of $X_1$, while the blue line is the variable importance of $X_2$.

Consider for instance the importance plot obtained when the two variables are very strongly correlated: it looks like $X_3$ is much more important than the other two, which is, somehow, not the case. It is just that the model cannot choose between $X_1$ and $X_2$: sometimes $X_1$ is selected, and sometimes it is $X_2$. I think I find that graph confusing because I would probably expect the importance of $X_1$ to be constant. It looks like we have a plot of the importance of each variable, given the existence of all the other variables.
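
One can check this numerically with the impact_correl function above (the exact values vary from one run to the next): with a correlation of .99, the averaged importances of $X_1$ and $X_2$ should be roughly equal, and both noticeably below that of $X_3$,

impact_correl(r=.99)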

Actually, what I have in mind is what we get with a stepwise-type procedure, when we remove each variable, one at a time, from the set of features,

library(mnormt)
impact_correl=function(r=.9){
  nsim=100
  IMP=matrix(NA,4,nsim)
  n=1000
  R=matrix(c(1,r,r,1),2,2)
  for(s in 1:nsim){
    X1=rmnorm(n,varcov=R)
    X3=rnorm(n)
    Y=1+2*X1[,1]-2*X3+rnorm(n)
    db=data.frame(Y=Y,X1=X1[,1],X2=X1[,2],X3=X3)
    IMP[1,s]=AIC(lm(Y~X1+X2+X3,data=db))  # full model
    IMP[2,s]=AIC(lm(Y~X2+X3,data=db))     # X1 removed
    IMP[3,s]=AIC(lm(Y~X1+X3,data=db))     # X2 removed
    IMP[4,s]=AIC(lm(Y~X1+X2,data=db))     # X3 removed
  }
  apply(IMP,1,mean)}

Here, if we use the same code as previously (note that the matrix now needs four rows, one per model),

C=c(seq(0,.6,by=.1),seq(.65,.9,by=.05),.99,.999)
VI=matrix(NA,4,length(C))
for(i in 1:length(C)){VI[,i]=impact_correl(C[i])}

 

we get the following graph

plot(C,VI[2,],type="l",col="red")
lines(C,VI[3,],col="blue")
lines(C,VI[4,],col="purple")

The purple line is obtained when we remove $X_3$: it is the worst model. When we keep $X_1$ and $X_3$, we get the blue line. And this line is constant: the quality of the model does not depend on the correlation between $X_1$ and $X_2$ (this is what puzzled me in the previous graph, where having $X_2$ did have an impact on the importance of $X_1$). The red line is what we get when we remove $X_1$. With no correlation, it is the same as the purple line: we get a poor model. With a correlation close to 1, it is the same as having $X_1$ in the model, and we get the same as the blue line.

Nevertheless, discussing the importance of features when a lot of them are correlated is not that intuitive…

Applications of Chi-Square Tests

This morning, in our mathematical statistics class, we've seen the use of the chi-square test. The first application was a goodness-of-fit test for a multinomial distribution. Assume that $\mathbf{N}=(N_1,\dots,N_k)\sim\mathcal{M}(n,\mathbf{p})$. In order to test $H_0:\mathbf{p}=\mathbf{p}^\star$ against $H_1:\mathbf{p}\neq\mathbf{p}^\star$, use the statistic

$$Q=\sum_{j=1}^k\frac{(N_j-np_j^\star)^2}{np_j^\star}$$

Under $H_0$, $Q$ converges in distribution to a $\chi^2(k-1)$ random variable. For instance, we have the number of weddings, in a large city, per season,

> n=c(301,356,413,262)

We want to test whether weddings are celebrated uniformly over the year, i.e. $H_0:p_1=p_2=p_3=p_4=\frac{1}{4}$.

> np=rep(sum(n)/4,4)
> cbind(n,np)
       n  np
[1,] 301 333
[2,] 356 333
[3,] 413 333
[4,] 262 333
> Q=sum( (n-np)^2/np  )
> Q
[1] 39.02102

This quantity should be compared with the 95% quantile of the chi-square distribution with $k-1=3$ degrees of freedom,

> qchisq(.95,df=4-1)
[1] 7.814728

but it is also possible to compute the p-value,

> 1-pchisq(Q,df=4-1)
[1] 1.717959e-08

Here, we reject the assumption that weddings are celebrated uniformly over the year.
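
Note that the built-in chisq.test function (whose default null hypothesis is equal probabilities) performs exactly this computation, and should return the same statistic and p-value,

> chisq.test(n)

	Chi-squared test for given probabilities

data:  n
X-squared = 39.021, df = 3, p-value = 1.718e-08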


Inference for the Multinomial Distribution

This morning, in our mathematical statistics class, we've briefly seen the multinomial distribution, and statistical inference. $\mathbf{N}=(N_1,\dots,N_k)$ has a $\mathcal{M}(n,\mathbf{p})$ distribution if its probability function is

$$\mathbb{P}(\mathbf{N}=\mathbf{n})=\frac{n!}{n_1!\cdots n_k!}\,p_1^{n_1}\cdots p_k^{n_k}$$

with $n_1+\cdots+n_k=n$ and $p_1+\cdots+p_k=1$.
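
In R, this probability function is implemented in dmultinom. For instance, with $n=10$ and $\mathbf{p}=(.3,.5,.2)$,

> dmultinom(c(3,5,2),prob=c(.3,.5,.2))
[1] 0.08505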

The maximum likelihood estimator is then the optimum of

$$\log\mathcal{L}(\mathbf{p})=\text{constant}+\sum_{j=1}^k n_j\log p_j\quad\text{subject to}\quad\sum_{j=1}^k p_j=1$$

We use a Lagrange multiplier to solve this constrained optimization problem, i.e. we optimize

$$\sum_{j=1}^k n_j\log p_j+\lambda\Big(1-\sum_{j=1}^k p_j\Big)$$

First order conditions are here

$$\frac{n_j}{p_j}-\lambda=0,\qquad j=1,\dots,k$$

and

$$\sum_{j=1}^k p_j=1$$

Thus,

$$p_j=\frac{n_j}{\lambda}$$

From

$$\sum_{j=1}^k\frac{n_j}{\lambda}=\frac{n}{\lambda}=1$$

we can easily get that the Lagrange multiplier is $\lambda=n$. And then

$$\widehat{p}_j=\frac{n_j}{n}$$
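
For instance, with the wedding counts used in the chi-square post above, the maximum likelihood estimates are simply the empirical frequencies,

> n=c(301,356,413,262)
> n/sum(n)
[1] 0.2259760 0.2672673 0.3100601 0.1966967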

One can easily see that this maximum likelihood estimator is unbiased, since $\mathbb{E}(N_j)=np_j$, so that $\mathbb{E}(\widehat{p}_j)=p_j$. Actually, we can easily prove that

$$\text{Var}(\widehat{p}_j)=\frac{p_j(1-p_j)}{n}$$

and that $\text{Var}(N_j)=np_j(1-p_j)$, while $\text{Cov}(N_i,N_j)=-np_ip_j$ for $i\neq j$. The trick to get the latter is simple: $N_i+N_j$ has a $\mathcal{B}(n,p_i+p_j)$ distribution,

$$\text{Var}(N_i+N_j)=n(p_i+p_j)(1-p_i-p_j)$$

and $\text{Var}(N_i+N_j)=\text{Var}(N_i)+\text{Var}(N_j)+2\text{Cov}(N_i,N_j)$. Thus, we can easily get the covariance. From that term, we can write that

$$\sqrt{n}\,(\widehat{\mathbf{p}}-\mathbf{p})\xrightarrow{\ \mathcal{L}\ }\mathcal{N}(\mathbf{0},\Sigma)$$

with

$$\Sigma_{j,j}=p_j(1-p_j)$$

while

$$\Sigma_{i,j}=-p_ip_j,\qquad i\neq j$$
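
As a quick sanity check, here is a small simulation sketch (the values of $n$ and $\mathbf{p}$ are arbitrary) comparing the empirical covariance of multinomial counts with the theoretical value $-np_ip_j$,

> n=1000; p=c(.3,.5,.2)
> N=t(rmultinom(1e5,size=n,prob=p))
> cov(N[,1],N[,2])   # should be close to -n*p[1]*p[2] = -150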

Pricing Game, the results

Thursday, I will be in Paris, to discuss the results we got from the pricing game. I will present the 12 or 13 models sent to me, and discuss what happened when I created a market where those models were competing. One or two models were clearly underestimating the losses, so with the premiums as they were sent, each time, one company got 80% market share and over a 250% loss ratio. So I decided to normalize all the premiums, so that the average premium was the same for all the companies. Slides are now available.

ENSAE, Assignment 1

For the first assignment, I propose building a classification model. The code to retrieve the dataset is

rm(list=ls())
download.file(url="http://freakonometrics.free.fr/base_ensae_1.RData",destfile="base.RData")
load("base.RData")

From the base_ensae_1 dataset, propose two models (a logistic regression, and one model among those seen in class) to model, at your choice, the variable Surv1 or Surv2, indicating whether or not claims occurred on an insurance policy.

The report should be between 5 and 10 pages, explaining the models retained and the choices that were made (to turn continuous variables into categorical ones, or on the contrary to smooth them, etc.). The ROC curves of the two models must be presented on the last page.

In addition to the report, I want a prediction from both models on the pricing dataset: send me, by email, a csv file containing the policy number, PolNum, and the probabilities of having an accident, predicted by the two models (Proba1 and Proba2, respectively).

pricing <- read.csv2("http://freakonometrics.free.fr/pricing.csv")
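
As for the second assignment, here is a purely indicative sketch of the expected deliverable (the covariate names DrivAge and CarAge are hypothetical placeholders: use the variables actually available in base_ensae_1),

# minimal sketch, not a reference solution: replace the hypothetical
# covariates (DrivAge, CarAge) with variables present in base_ensae_1
reg1 <- glm(Surv1 ~ DrivAge + CarAge, family = binomial(link = "logit"),
            data = base_ensae_1)
reg2 <- reg1  # placeholder for the second classifier seen in class
out <- data.frame(PolNum = pricing$PolNum,
                  Proba1 = predict(reg1, newdata = pricing, type = "response"),
                  Proba2 = predict(reg2, newdata = pricing, type = "response"))
write.csv(out, file = "predictions.csv", row.names = FALSE)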

The whole thing (pdf report and csv file) must be sent before the Christmas break, by email, to arthur.charpentier@gmail.com. The first page of the report must give the names of the two students in the pair.

One small clarification: a (brief) description of the variables is given below,