Tag Archives: gam()

Interpretability and explainability (formalized) of predictive models

In his Confessiones, Saint Augustine wrote

quid est ergo tempus? si nemo ex me quaerat, scio; si quaerenti explicare velim, nescio

which can be translated as

What then is time? If no one asks me, I know what it is. If I wish to explain it to him who asks, I do not know.

To go a little further (because often, when asked to explain, we do have some ideas), in A Study in Scarlet by Sir Arthur Conan Doyle, published in 1887, we have the following exchange between Sherlock Holmes and Doctor Watson,

– “I wonder what that fellow is looking for?” I asked, pointing to a stalwart, plainly-dressed individual who was walking slowly down the other side of the street, looking anxiously at the numbers. He had a large blue envelope in his hand, and was evidently the bearer of a message.
– “You mean the retired sergeant of Marines,” said Sherlock Holmes.

then, as it turns out that the person is indeed a sergeant in the navy (as is another character in the story, a certain Arthur Charpentier), Doctor Watson asks him for an explanation: he wants to know how he arrived at this conclusion

– “How in the world did you deduce that?” I asked.
– “Deduce what?” said he, petulantly.
– “Why, that he was a retired sergeant of Marines.”
– “I have no time for trifles,” he answered, brusquely; then with a smile, “Excuse my rudeness. You broke the thread of my thoughts; but perhaps it is as well. So you actually were not able to see that that man was a sergeant of Marines?”
– “No, indeed.”
– “It was easier to know it than to explain why I knew it. If you were asked to prove that two and two made four, you might find some difficulty, and yet you are quite sure of the fact. Even across the street I could see a great blue anchor tattooed on the back of the fellow’s hand. That smacked of the sea. He had a military carriage, however, and regulation side whiskers. There we have the marine. He was a man with some amount of self-importance and a certain air of command. You must have observed the way in which he held his head and swung his cane. A steady, respectable, middle-aged man, too, on the face of him – all facts which led me to believe that he had been a sergeant.”
– “That’s wonderful!” I exclaimed.
– “Pooh! Child’s play!” said Holmes, though with an air that seemed to betray his satisfaction at my evident surprise and admiration.

(to be honest, it is Liu Cixin who mentions it in The Three-Body Problem). For the record, this is the first Holmes-Watson story, the one that introduces Sherlock Holmes’ working method. For those familiar with the stories, this narrative device will be reused extensively afterwards: Sherlock Holmes states a fact, Doctor Watson is astonished and asks for an explanation, and Sherlock Holmes explains, point by point, how he arrived at that conclusion. This is a bit like the approach we try to implement when building a predictive model: based on the Titanic data, if we predict that one passenger will die and another will survive, we want to understand why the model arrives at that conclusion.

Continue reading Interprétabilité et explicabilité (formalisé) des modèles prédictifs

Interpretability and explainability of predictive models

In 400 AD, in his Confessiones, Augustine wrote

quid est ergo tempus? si nemo ex me quaerat, scio; si quaerenti explicare velim, nescio

that can be translated as

What then is time? If no one asks me, I know what it is. If I wish to explain it to him who asks, I do not know.

To go a little further (because often, if we are asked to explain, we have some ideas), in A Study in Scarlet by Sir Arthur Conan Doyle, published in 1887, we have the following exchange, between Sherlock Holmes and Doctor Watson

– “I wonder what that fellow is looking for?” I asked, pointing to a stalwart, plainly-dressed individual who was walking slowly down the other side of the street, looking anxiously at the numbers. He had a large blue envelope in his hand, and was evidently the bearer of a message.
– “You mean the retired sergeant of Marines,” said Sherlock Holmes.

then, as it turns out that the person is indeed a sergeant in the navy (as is another character in the story, someone named Arthur Charpentier), Dr. Watson asks him for an explanation: he wants to know how he arrived at this conclusion

– “How in the world did you deduce that?” I asked.
“Deduce what?” said he, petulantly.
“Why, that he was a retired sergeant of Marines.”
“I have no time for trifles,” he answered, brusquely; then with a smile, “Excuse my rudeness. You broke the thread of my thoughts; but perhaps it is as well. So you actually were not able to see that that man was a sergeant of Marines?”
“No, indeed.”
– “It was easier to know it than to explain why I knew it. If you were asked to prove that two and two made four, you might find some difficulty, and yet you are quite sure of the fact. Even across the street I could see a great blue anchor tattooed on the back of the fellow’s hand. That smacked of the sea. He had a military carriage, however, and regulation side whiskers. There we have the marine. He was a man with some amount of self-importance and a certain air of command. You must have observed the way in which he held his head and swung his cane. A steady, respectable, middle-aged man, too, on the face of him – all facts which led me to believe that he had been a sergeant.”

(to be honest, it is Liu Cixin who talks about it in The Three-Body Problem). For the record, this is the first story of the Holmes-Watson duo, the one that introduces Sherlock Holmes’ working method. For those who are familiar with the short stories, this narrative approach is used extensively thereafter: Sherlock Holmes states a fact, Dr. Watson is astonished and asks for an explanation, and Sherlock Holmes explains, point by point, how he arrived at this conclusion. This is a bit like the approach we try to implement when we build a predictive model: on the basis of the Titanic data, if we predict that one passenger will die and another will survive, we want to understand why the model arrives at this conclusion.
Continue reading Interpretability and explainability of predictive models

Estimating excess mortality

This morning, Baptiste Coulmont sent me a tweet with a nice graph, showing the daily number of deaths in France.

Since the data are online, I figured I could play with them a little. For the more curious, the list of all deaths is available going back… a long time! (more than 25 million deaths). The only thing we are interested in is the date, so we extract it. Then we count, for each day, how many deaths occurred,

D = read.csv("insee_deces.csv",header=TRUE)
Vecteur_Dates = as.character(D[,8])
TV = table(Vecteur_Dates)
B = data.frame(dateC=names(TV),nb=as.numeric(TV))
B$date=as.Date(B$dateC,"%Y-%m-%d")
B$year = format(B$date,"%Y")
B$month = format(B$date,"%m")
B$day = format(B$date,"%d")
B$deb=as.Date(paste(B$year,"01-01",sep="-"),"%Y-%m-%d")
B$fin=as.Date(paste(B$year,"12-31",sep="-"),"%Y-%m-%d")
B$dif=as.numeric(B$date-B$deb)/as.numeric(B$fin-B$deb)

Here I create a variable that tells me where I am within the current year (0 for January 1st, and 1 for December 31st). This allows me to get around a small technical issue: leap years. We can then plot all the years,
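The plotting code and computations below use a data frame B2 with a column NormCpte, which is never constructed in this excerpt. A minimal reconstruction, consistent with the computations that follow and with the technical note at the end of the post (the normalization was only used for the figures), is simply to keep the raw daily counts for the years of interest:

# not in the original post: minimal reconstruction of B2 and NormCpte
# (raw daily counts, restricted to the years used in the figures)
B2 = B[B$year %in% as.character(2000:2020),]
B2$NormCpte = B2$nb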

plot(B2$dif[B2$year == "2020"],B2$NormCpte[B2$year == "2020"],type="l",xlim=0:1,ylim=c(1200,4000),col="white")
for(i in as.character(2000:2019)) lines(B2$dif[B2$year == i],B2$NormCpte[B2$year == i],col="light blue")

We then want the average trend. Baptiste took the average per day (the black curve). Here, we will smooth using splines,

library(gam)
reg = gam(NormCpte~bs(dif,40),data=B2[B2$year != "2020",])
vx = seq(0,1,length=501)
vy = predict(reg,newdata=data.frame(dif=vx))
lines(vx,vy,col="blue",lwd=3)

which gives the blue curve. Excess mortality is then the gap to this curve. For example (let us take a well-known one, clearly visible on the previous graph), the year 2003 and the first fifteen days of August, corresponding (more or less) to the heatwave.

i = 2003
x = B2$dif[(B2$year == i)&((B2$month == "08"))&(B2$day %in% c(paste("0",1:9,sep=""),10:15))]
y = B2$NormCpte[(B2$year == i)&((B2$month == "08"))&(B2$day %in% c(paste("0",1:9,sep=""),10:15))]
yp = predict(reg,newdata=data.frame(dif=x))
e = y-yp
sum(e)
[1] 14294.39

Several figures are mentioned on Wikipedia, including the INSERM report, which announced 14,802 deaths. We are not too far off… We can visualize this excess mortality on the graph below,

plot(B2$dif[B2$year == "2020"],B2$NormCpte[B2$year == "2020"],type="l",col="white",xlim=0:1,ylim=c(1200,4000))
for(i in as.character(2000:2019)) lines(B2$dif[B2$year == i],B2$NormCpte[B2$year == i],col="light blue")
lines(vx,vy,col="blue")
lines(B2$dif[B2$year == 2003],B2$NormCpte[B2$year == 2003],col="red")
for(u in 1:length(x)) segments(x[u],y[u],x[u],yp[u],lwd=3,col="red")

Let us take another example, clearly visible on Baptiste’s graph: the 2017 flu (the so-called 2016-2017 flu). If we take the first 45 days of 2017, we get the following figures,

i = 2017
x=B2$dif[(B2$year == i)&((B2$month == "01")|
  ((B2$month == "02")&(B2$day %in% c(paste("0",1:9,sep=""),10:15))))]
y=B2$NormCpte[(B2$year == i)&((B2$month == "01")|
  ((B2$month == "02")&(B2$day %in% c(paste("0",1:9,sep=""),10:15))))]
yp = predict(reg,newdata=data.frame(dif=x))
e = y-yp
sum(e)
[1] 21177.33

The Santé Publique France report mentions 21,200 deaths (for “all-cause excess mortality”), which is again comparable…

Now, before concluding, and before getting trolled for a few days, I want to stress that this is just a modelling exercise, and yes, I excluded 2020. I do not want to draw a comparison with Covid-19, to which 21,000 deaths are attributed in France as of tonight. Even if the order of magnitude is the same, do not make me say that the situation is comparable, and that Covid-19 is in the end just a flu. I will not take a position on that point, since I am not a virologist. But above all, those 21,000 deaths occurred despite extraordinary lockdown measures. I will probably come back in the coming days to the impact of lockdown as a control measure, but the point of tonight’s short post was to see how excess mortality can be quantified quickly.

  • A small technical note: for the three graphs (but not for the code and the computations) I had initially normalized the data. Indeed, the French population has changed quite a bit over 20 years, increasing by about 10%… So 1,000 deaths in 2003 is not the same thing (relative to the size of the population) as in 2020.
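To illustrate the normalization mentioned in this note, here is a purely illustrative sketch: counts are rescaled to a 2020-equivalent population, assuming (roughly) linear 10% growth between 2000 and 2020 (the actual population series used for the original figures is not given in the post),

# illustrative only: 'pop' is an assumed, approximate population series
pop = setNames(seq(61e6, 67e6, length.out = 21), 2000:2020)
B2$nb_scaled = B2$nb * pop["2020"] / pop[B2$year]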

STT5100, quiz (Poisson regression #1)

For the end of the week, I had given a short quiz, based on the following dataset, which gives the number of cyclists at an intersection in New York City,

download.file("http://freakonometrics.free.fr/NYCVelo.RData","velo.RData")
load("velo.RData")
str(base)
'data.frame':	214 obs. of  7 variables:
 $ Date    : chr  "1-Apr-17" "2-Apr-17" "3-Apr-17" "4-Apr-17" ...
 $ HIGH_T  : num  46 62.1 63 51.1 63 48.9 48 55.9 66 73.9 ...
 $ LOW_T   : num  37 41 50 46 46 41 43 39.9 45 55 ...
 $ PRECIP  : num  0 0 0.03 1.18 0 0.73 0.01 0 0 0 ...
 $ BB_COUNT: int  606 2021 2470 723 2807 461 1222 1674 2375 3324 ...
 $ DAY     : chr  "Sam" "Dim" "Lun" "Mar" ...
 $ DIFF_T  : num  9 21.1 13 5.1 17 7.9 5 16 21 18.9 ...

Using a Poisson regression, the task was to predict how many cyclists would pass on a Sunday, with a high temperature of 85°F, a low of 70°F, and no rain. And then to see what the prediction would be for a Monday.

newbase = data.frame(DAY=as.factor(c("Lun","Dim")),
 HIGH_T=c(85,85),LOW_T=c(70,70),
 PRECIP=c(0,0))

Let us build a model with all the explanatory variables,

reg0 = glm(BB_COUNT~HIGH_T+LOW_T+PRECIP+DAY,data=base,family=poisson)

Let us also add an indicator for the days with no rain at all,

reg = glm(BB_COUNT~HIGH_T+LOW_T+PRECIP+I(PRECIP==0)+DAY,data=base,family=poisson)
summary(reg) 
 
Coefficients:
                     Estimate Std. Error z value Pr(>|z|)    
(Intercept)         6.8844970  0.0110463 623.241   <2e-16 ***
HIGH_T              0.0210950  0.0003133  67.328   <2e-16 ***
LOW_T              -0.0114006  0.0003351 -34.024   <2e-16 ***
PRECIP             -0.6570450  0.0071899 -91.384   <2e-16 ***
I(PRECIP == 0)TRUE  0.1303908  0.0033283  39.176   <2e-16 ***
DAYJeu              0.1683475  0.0049690  33.880   <2e-16 ***
DAYLun              0.1144129  0.0050480  22.665   <2e-16 ***
DAYMar              0.1655886  0.0049936  33.160   <2e-16 ***
DAYMer              0.1688035  0.0049190  34.317   <2e-16 ***
DAYSam              0.0466447  0.0051838   8.998   <2e-16 ***
DAYVen              0.1003536  0.0050919  19.709   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
(Dispersion parameter for poisson family taken to be 1)
 
    Null deviance: 70021  on 213  degrees of freedom
Residual deviance: 26493  on 203  degrees of freedom
AIC: 28580
 
Number of Fisher Scoring iterations: 4

Everything looks highly significant. And it even makes sense (looking at the signs, for instance). And if we are afraid of missing a nonlinear effect, we can put splines on all the continuous variables,

library(gam)
reggam = gam(BB_COUNT~bs(HIGH_T)+bs(LOW_T)+bs(PRECIP)+I(PRECIP==0)+DAY,data=base,family=poisson)
plot(reggam, se=TRUE)

for the maximum temperature, or for the minimum temperature,

and the following curve for precipitation, with a linear smoothing between the largest observation (3) and the one just before it (around 1.8),

We can also regress on the minimum temperature and on the difference between the maximum and the minimum (in a linear model the two parameterizations are equivalent, but with nonlinear transformations, using the difference could give a simpler model),

library(gam)
reggam2 = gam(BB_COUNT~bs(DIFF_T)+bs(LOW_T)+bs(PRECIP)+I(PRECIP==0)+DAY,data=base,family=poisson)
plot(reggam2, se=TRUE)

We can now compare the four models, and their predictions. For instance, for the linear model (with the indicator variable for no rain),

P = predict(reg,newdata=newbase,type="response",se.fit=TRUE)

for Monday, we get the following 95% confidence interval for \widehat{\lambda},

P$fit[1]+c(-2,2)*P$se.fit[1]
[1] 3349.842 3401.395

and for Sunday, the 95% confidence interval is

P$fit[2]+c(-2,2)*P$se.fit[2]
[1] 2987.497 3033.861

We can visualize the four confidence intervals for Monday (with, from top to bottom, the second gam model, the first one, the linear model with the no-rain indicator, and then the first linear model),

while for Sunday, we have

In other words, changing the model changes the confidence intervals for the prediction quite a lot (and the intervals are sometimes completely disjoint). Which is not necessarily good news.
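The figures are not reproduced here, but here is a sketch of how such a comparison could be obtained. Two assumptions: the DIFF_T column (needed by reggam2) has to be added to newbase, and predict() is assumed to return standard errors for the gam-package fits the same way predict.glm() does,

newbase$DIFF_T = newbase$HIGH_T - newbase$LOW_T  # required by reggam2
modeles = list(glm = reg0, glm_ind = reg, gam_1 = reggam, gam_2 = reggam2)
IC = sapply(modeles, function(m){
  P = predict(m, newdata = newbase, type = "response", se.fit = TRUE)
  c(P$fit - 2*P$se.fit, P$fit + 2*P$se.fit)
})
rownames(IC) = c("Mon_low","Sun_low","Mon_up","Sun_up")
# one segment per model, here for the Monday prediction
plot(NULL, xlim = c(.5,4.5), ylim = range(IC), xaxt = "n",
  xlab = "", ylab = "predicted count, Monday")
axis(1, at = 1:4, labels = names(modeles))
segments(1:4, IC["Mon_low",], 1:4, IC["Mon_up",], lwd = 3)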

Classification from scratch, logistic with splines 2/8

Today, the second post of our series on classification from scratch, following the brief introduction to the logistic regression.

Piecewise linear splines

To illustrate what’s going on, let us start with a “simple” regression (with only one explanatory variable). The underlying idea is natura non facit saltus, “nature does not make jumps”, i.e. the processes governing natural things are continuous. That seems to be a rather strong assumption: we could imagine, for instance, a fixed threshold that explains death. If patients die (for sure) when the “stroke index” exceeds some threshold, we might expect a discontinuity. Except that if that threshold is heterogeneous (a non-observable continuous variable), then we are back to the continuity assumption.

The simplest model we can think of to extend the linear model seen in the previous post is a piecewise linear function, with two parts: small values of x, and larger values of x. The most convenient way to do so is to use the positive part function (x-s)_+, which is the difference between x and s if that difference is positive, and 0 otherwise. For instance, \beta_1 x+\beta_2(x-s)_+ is the following piecewise linear function: continuous, with a “rupture” at knot s.

Observe also the following interpretation: for small values of x, the function is linear with slope \beta_1, and for larger values of x, it is again linear, with slope \beta_1+\beta_2. Hence, \beta_2 is interpreted as a change of the slope.
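The figure is not reproduced here, but such a function is easy to draw, say with \beta_1=1, \beta_2=-2 and a knot at s=5 (arbitrary values, purely for illustration),

x = seq(0, 10, by = .1)
s = 5; beta1 = 1; beta2 = -2      # arbitrary illustrative values
y = beta1*x + beta2*pmax(x-s, 0)  # slope beta1 before the knot, beta1+beta2 after
plot(x, y, type = "l", lwd = 2)
abline(v = s, lty = 2)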

And of course, it is possible to consider more than one knot. The function to get the positive value is the following

pos = function(x,s) (x-s)*(x>=s)

then we can use it directly in our regression model,

reg = glm(PRONO~INSYS+pos(INSYS,15)+
pos(INSYS,25),data=myocarde,family=binomial)

The output of the regression is here

summary(reg)
 
Coefficients:
               Estimate Std. Error z value Pr(>|z|)  
(Intercept)     -0.1109     3.2783  -0.034   0.9730  
INSYS           -0.1751     0.2526  -0.693   0.4883  
pos(INSYS, 15)   0.7900     0.3745   2.109   0.0349 *
pos(INSYS, 25)  -0.5797     0.2903  -1.997   0.0458 *

Hence, the original slope, for very small values, is not significant, but above 15 it becomes significantly positive. And above 25, there is a significant change again. We can plot it to see what’s going on,

u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,type="l")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)

Using bs() linear splines

Using the bs() function (from the splines library), things are slightly different. We will use here so-called B-splines,

library(splines)

We can define spline functions with support (5,55) and with knots \{15,25\}

clr6 = c("#1b9e77","#d95f02","#7570b3","#e7298a","#66a61e","#e6ab02")
x = seq(0,60,by=.25)
B = bs(x,knots=c(15,25),Boundary.knots=c(5,55),degree=1)
matplot(x,B,type="l",lty=1,lwd=2,col=clr6)


as we can see, the functions defined here are different from the ones before, but we still have (piecewise) linear functions on each segment (5,15), (15,25) and (25,55). But linear combinations of those functions (the two sets of functions) generate the same space. Said differently, even if the interpretation of the output is different, the predictions should be the same,

reg = glm(PRONO~bs(INSYS,knots=c(15,25),
Boundary.knots=c(5,55),degree=1),
data=myocarde,family=binomial)
summary(reg)
 
Coefficients:
              Estimate Std. Error z value Pr(>|z|)  
(Intercept)    -0.9863     2.0555  -0.480   0.6314  
bs(INSYS,..)1  -1.7507     2.5262  -0.693   0.4883  
bs(INSYS,..)2   4.3989     2.0619   2.133   0.0329 *
bs(INSYS,..)3   5.4572     5.4146   1.008   0.3135

Observe that there are three coefficients, as before, but again, the interpretation is here more complicated…

v=predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)


Nevertheless, the prediction is the same… and that’s nice.
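Since the two bases are claimed to span the same space, a quick sanity check (a sketch, reusing the myocarde data of this series) is to fit both parameterisations side by side and compare the fitted curves,

# the same piecewise-linear logistic model, in its two parameterisations
reg_pos = glm(PRONO~INSYS+pos(INSYS,15)+pos(INSYS,25),
  data=myocarde,family=binomial)
reg_bs  = glm(PRONO~bs(INSYS,knots=c(15,25),Boundary.knots=c(5,55),degree=1),
  data=myocarde,family=binomial)
u = seq(5,55,length=201)
v_pos = predict(reg_pos,newdata=data.frame(INSYS=u),type="response")
v_bs  = predict(reg_bs, newdata=data.frame(INSYS=u),type="response")
max(abs(v_pos - v_bs))  # should be essentially zero (numerical error only)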

Piecewise quadratic splines

Let us go one step further… Can we also have continuity of the derivative? Yes, and that’s actually easy, considering parabolic functions. Instead of using a decomposition on x,(x-s_1)_+ and (x-s_2)_+, consider now a decomposition on x,x^{\color{red}{2}},(x-s_1)^{\color{red}{2}}_+ and (x-s_2)^{\color{red}{2}}_+.

 pos2 = function(x,s) (x-s)^2*(x>=s)
reg = glm(PRONO~poly(INSYS,2)+pos2(INSYS,15)+pos2(INSYS,25),
data=myocarde,family=binomial)
summary(reg)
 
Coefficients:
                Estimate Std. Error z value Pr(>|z|)  
(Intercept)      29.9842    15.2368   1.968   0.0491 *
poly(INSYS, 2)1 408.7851   202.4194   2.019   0.0434 *
poly(INSYS, 2)2 199.1628   101.5892   1.960   0.0499 *
pos2(INSYS, 15)  -0.2281     0.1264  -1.805   0.0712 .
pos2(INSYS, 25)   0.0439     0.0805   0.545   0.5855

As expected, there are here five coefficients: the intercept and two for the part on the left (three parameters for the parabolic function), and then two additional terms for the part in the center – here (15,25) – and for the part on the right. Of course, for each portion, there is only one degree of freedom since we have a parabolic function (three coefficients) but two constraints (continuity, and continuity of the first order derivative).

On a graph, we get the following

v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2,xlab="INSYS",ylab="")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)

Using bs() quadratic splines

Of course, we can do the same with our R function. But as before, the basis functions are expressed differently here,

x = seq(0,60,by=.25)
B = bs(x,knots=c(15,25),Boundary.knots=c(5,55),degree=2)
matplot(x,B,type="l",xlab="INSYS",col=clr6)


If we run the R code, we get

reg = glm(PRONO~bs(INSYS,knots=c(15,25),
Boundary.knots=c(5,55),degree=2),data=myocarde,
family=binomial)
summary(reg)
 
Coefficients:
               Estimate Std. Error z value Pr(>|z|)  
(Intercept)       7.186      5.261   1.366   0.1720  
bs(INSYS, ..)1  -14.656      7.923  -1.850   0.0643 .
bs(INSYS, ..)2   -5.692      4.638  -1.227   0.2198  
bs(INSYS, ..)3   -2.454      8.780  -0.279   0.7799  
bs(INSYS, ..)4    6.429     41.675   0.154   0.8774

But that’s not really a big deal since the prediction is exactly the same

v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)

Cubic splines

Last, but not least, we can move on to cubic splines. With the previous construction, we would consider a decomposition on (guess what) x,x^2,x^{\color{red}{3}},(x-s_1)^{\color{red}{3}}_+,(x-s_2)^{\color{red}{3}}_+, to get, this time, continuity as well as continuity of the first two derivatives (and thus a very smooth function, since even the variations will be smooth). If we use the bs function, the basis is the following,

B=bs(x,knots=c(15,25),Boundary.knots=c(5,55),degree=3)
matplot(x,B,type="l",lwd=2,col=clr6,lty=1,ylim=c(-.2,1.2))
abline(v=c(5,15,25,55),lty=2)

and the prediction will now be

reg = glm(PRONO~bs(INSYS,knots=c(15,25),
Boundary.knots=c(5,55),degree=3),
data=myocarde,family=binomial)
u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)


Two last things before concluding (for today): the location of the knots, and the extension to additive models.

Location of knots

In many applications, we do not want to specify the location of the knots. We just want – say – three (intermediary) knots. This can be done using

reg = glm(PRONO~1+bs(INSYS,degree=1,df=4),data=myocarde,family=binomial)

We can actually get the locations of the knots by looking at

attr(reg$terms, "predvars")[[3]]
bs(INSYS, degree = 1L, knots = c(15.8, 21.4, 27.15), 
Boundary.knots = c(8.7, 54), intercept = FALSE)

which provides us with the location of the boundary knots (the minimum and the maximum of our sample), but also the three intermediate knots. Observe that those five values are actually just (empirical) quantiles,

quantile(myocarde$INSYS,(0:4)/4)
   0%   25%   50%   75%  100% 
 8.70 15.80 21.40 27.15 54.00

If we plot the prediction, we get

v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=quantile(myocarde$INSYS,(0:4)/4),lty=2)


If we go back to what is computed before the logit transformation, we clearly see ruptures at the different quantiles,

B = bs(x,degree=1,knots=quantile(myocarde$INSYS,(1:3)/4),
Boundary.knots=range(myocarde$INSYS)) # same knots and boundaries as in the regression
B = cbind(1,B)
y = B%*%coefficients(reg)
plot(x,y,type="l",col="red",lwd=2)
abline(v=quantile(myocarde$INSYS,(0:4)/4),lty=2)


Note that if we do not specify anything about the knots (number or location), we get no knots…

reg = glm(PRONO~1+bs(INSYS,degree=2),data=myocarde,family=binomial)
attr(reg$terms, "predvars")[[3]]
bs(INSYS, degree = 2L, knots = numeric(0), 
Boundary.knots = c(8.7,54), intercept = FALSE)

and if we look at the prediction

u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)


it is actually the same as a quadratic regression (as expected)

reg = glm(PRONO~1+poly(INSYS,degree=2),data=myocarde,family=binomial)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)

Additive models

Consider now the second dataset, with two variables. Consider here a model like
\mathbb{P}[Y=1|X_1=x_1,X_2=x_2]=\frac{\exp[\eta(x_1,x_2)]}{1+\exp[\eta(x_1,x_2)]}
where
\eta(x_1,x_2)=\beta_0+\color{red}{s_1(x_1)}+\color{blue}{s_2(x_2)}
\color{red}{s_1(x_1)}=\beta_{1,0}x_1+\beta_{1,1}(x_1-s_{11})_++\beta_{1,2}(x_1-s_{12})_+
and
\color{blue}{s_2(x_2)}=\beta_{2,0}x_2+\beta_{2,1}(x_2-s_{21})_++\beta_{2,2}(x_2-s_{22})_+
It might seem a little bit restrictive, but that’s actually the idea of additive models.
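The data frame df (two features x1 and x2, and a binary label y) and the palette clr10 come from the first post of the series and are not redefined here. If you want to run this chunk on its own, a hypothetical stand-in could be,

# hypothetical stand-in only, NOT the dataset used in the series
set.seed(1)
n = 40
df = data.frame(x1 = runif(n), x2 = runif(n))
df$y = as.factor(1*(df$x1 + df$x2 + rnorm(n)/4 > 1))
clr10 = rev(heat.colors(10))  # any palette with 10 colours works with breaks=(0:10)/10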

reg = glm(y~bs(x1,degree=1,df=3)+bs(x2,degree=1,df=3),data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Now, if we think about it, we’ve been able to get a “perfect” model, so, somehow, it no longer seems continuous…

persp(u,u,v,theta=20,phi=40,col="green")


Of course, it is… it is piecewise linear, with hyperplanes, some of which are almost vertical.

And one can also consider piecewise quadratic functions

reg = glm(y~bs(x1,degree=2,df=3)+bs(x2,degree=2,df=3),data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Funny thing, we now have two “perfect” models, with different areas for the white and the black dots… Don’t ask me how to choose between them.

In R, it is possible to use the mgcv package to run a gam regression. It is designed for generalized additive models, but here we have only one variable, so it is actually difficult to see the “additive” part. And to be more specific, mgcv uses penalized quasi-likelihood from the nlme package (but we’ll get back to penalized routines later on).
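For instance, a minimal sketch with mgcv, still on the myocarde data, with a penalized smooth term instead of the fixed-knot splines above (note the explicit mgcv:: prefix, to avoid any clash with the gam package),

library(mgcv)
# penalized smooth of INSYS, smoothness chosen automatically
reg_mgcv = mgcv::gam(PRONO ~ s(INSYS), data = myocarde, family = binomial)
u = seq(5, 55, length = 201)
v = predict(reg_mgcv, newdata = data.frame(INSYS = u), type = "response")
plot(u, v, ylim = 0:1, type = "l", col = "red", lwd = 2)
points(myocarde$INSYS, myocarde$PRONO, pch = 19)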

But maybe I should first mention another smoothing tool: kernels (and maybe also k-nearest neighbors). To be continued…

Graduate Course on Advanced Methods in Econometrics

I will give a short graduate course for PhD students, in Rennes, on Thursday mornings, in March (2nd, 9th, 23rd and 30th). The agenda will be

  1. Nonlinear Regression Models and Smoothing Techniques

  2. Bootstrapping and Regression

  3. Penalized Regression Models and LASSO

  4. Quantile Regression and Expectiles

There will be slides available by the end of February.

 

Choosing a Classifier

In order to illustrate the problem of choosing a classification model, consider some simulated data,

> n = 500
> set.seed(1)
> X = rnorm(n)
> ma = 10-(X+1.5)^2*2
> mb = -10+(X-1.5)^2*2
> M = cbind(ma,mb)
> set.seed(1)
> Z = sample(1:2,size=n,replace=TRUE)
> Y = ma*(Z==1)+mb*(Z==2)+rnorm(n)*5
> df = data.frame(Z=as.factor(Z),X,Y)

A first strategy is to split the dataset into two parts: a training dataset, and a testing dataset.

> df1 = training = df[1:300,]
> df2 = testing  = df[301:500,]
  • The Holdout Method: Training and Testing Datasets

The two datasets can be visualised below, with the training dataset on top, and the testing dataset below

> plot(df1$X,df1$Y,pch=19,col=c(rgb(1,0,0,.4),
+ rgb(0,0,1,.4))[df1$Z])
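The rest of the post compares several classifiers on those two samples; just to illustrate the holdout principle (this is not one of the models discussed later), one could fit a logistic regression on the training set and evaluate it on the testing set,

> reg = glm(Z~X+Y,data=df1,family=binomial)
> p2 = predict(reg,newdata=df2,type="response")
> Zhat = levels(df2$Z)[1+(p2>.5)]  # predicted class on the test set
> mean(Zhat == df2$Z)              # holdout accuracy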

Continue reading Choosing a Classifier

On Some Alternatives to Regression Models

When you start discussing with people in machine learning, you quickly hear something like “forget your econometric models, your GLMs, I can easily find a machine learning ‘model’ that can beat yours”. I am usually very sceptical, especially when I hear “easily” or “always”. I have no problem with the fact that I use old econometric models, but I have the feeling that things aren’t that easy. I can understand that we might have problems when we have a lot of features (I am still working on that, and I’ll get back to this point soon), but I have the feeling that I can capture interactions and non-linearities with standard econometric models as well as with any machine learning algorithm.

Just to illustrate, consider the following ‘model’

\mathbb{E}[Y\vert\boldsymbol{X}=\boldsymbol{x}]=m(\boldsymbol{x})

where m(\cdot) is (just to illustrate)

> n <- 5000
> rtf <- function(x1, x2) { sin(x1+x2)/(x1+x2) }
> xgrid <- seq(1,6,length=31)
> ygrid <- seq(1,6,length=31)
> zgrid <- outer(xgrid,ygrid,rtf)
> persp(xgrid,ygrid,zgrid,theta=30, phi=30, 
+ col="green", ticktype="detailed",shade=TRUE)

Continue reading On Some Alternatives to Regression Models

Multiple (smoothed) regression and portfolio exposure

Wednesday, in class, we saw how to visualize a multiple regression model (with two continuous explanatory variables). Here, the goal is to predict the average cost of an insurance claim, using some covariates, e.g. the age of the driver and the age of the car (recall that losses here are liability losses). The prediction is obtained from a (standard) generalized linear model, with a log link,

> reg1=glm(cout~ageconducteur+agevehicule,data=base,family=Gamma(link="log"))

The code to visualize the predicted average cost is the following: first, we have to compute predictions for specific values,

> p=function(x,y){
+ predict(reg1,newdata=data.frame(ageconducteur=x,
+ agevehicule=y),type="response")}

Then, we use this function to compute values on a grid,

> X=seq(20,80,by=5)
> Y=0:20
> Z=outer(X,Y,p)
> image(X,Y,Z,col=rev(heat.colors(101)))
> contour(X,Y,Z,add=TRUE,
+ levels=c(1400,1800,2000,2200,2400,2600,2800,3000,3200,4000,5000))

If we use factors, and not continuous variates (cut versions of those two variates),

> reg2=glm(cout~cut(ageconducteur,breaks=c(0,22,35,55,80,100))*
+               cut(agevehicule,breaks=c(-1,1,3,5,10,100)),
+ data=base,family=Gamma(link="log"))

(note that we consider the Cartesian product, so values are computed for each product of factors, age of the driver and age of the car) we obtain

Obviously, we’re missing something here: the most expensive class with one model is the cheapest with the other one! Of course, it might come from our classes (that were chosen somewhat arbitrarily), but it might be interesting to use nonlinear functions of the ages. So, let us use splines to smooth those two variables,

> library(splines)
> reg3=glm(cout~bs(ageconducteur)+bs(agevehicule),data=base,
+ family=Gamma(link="log"))

With additive smoothed functions, we obtained a symmetric graph (due to the additive property)

while with a bivariate spline

> library(mgcv)
> reg4=gam(cout~s(ageconducteur,agevehicule),data=base,
+ family=Gamma(link="log"))

(for some odd reason, I could not easily use a bivariate spline in the Generalized Linear Model, but it did work with a Generalized Additive Model – which is by no means additive now). We can identify here some regions where the average cost can be extremely high… But, as mentioned Wednesday, one should keep in mind that some parts of the square above are hardly reached. More precisely, the distribution of the portfolio, as a function of those two covariates, is the following
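That figure is not reproduced here; as a sketch, one could visualize the joint distribution of the two covariates (using here the claims dataset base as a proxy for the portfolio, with a bivariate kernel density estimate),

> library(MASS)
> d2 = kde2d(base$ageconducteur,base$agevehicule,n=50)
> image(d2$x,d2$y,d2$z,col=rev(heat.colors(101)),
+ xlab="ageconducteur",ylab="agevehicule")
> contour(d2$x,d2$y,d2$z,add=TRUE)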

Thus, the proportion of young drivers driving a brand new car, and the proportion of old drivers driving a very old car, are rather small… If the goal is to find niches, one should look at the predictions more carefully, but if the goal is to make sure that everyone can get insurance cover, maybe we should accept that some drivers are under-priced (especially when they are rare in the portfolio). And one should keep in mind that average costs are extremely sensitive to large losses, as discussed previously in http://freakonometrics.hypotheses.org/3490 (and in class).

In the univariate case, I have migrated an old post, where I tried to reproduce (in R and in French) some standard graphs from the insurance industry: it is always interesting to visualize not only the prediction obtained from our models, but also the size of each class in the portfolio,

The post is online here http://freakonometrics.hypotheses.org/1224

Introduction to generalized linear models

I am a little ahead of schedule in the course. I will (normally) put the slides for next week online; there, we will cover the class of generalized linear models. The slides are online here.

I have not included a section on Generalized Additive Models; we will make do with the section on smoothing mentioned at the end of the slides on claim-frequency modelling. To justify the smoothing methods (on the age of the insured, in particular), I refer to a graph produced several years ago by a consulting firm, which noted that the shape of the smoothing function linking age to claim frequency is the same in every country,

http://freakonometrics.hypotheses.org/files/2013/02/assurance4.jpg

But I think I will write a dedicated post on smoothing, in the context of ratemaking for property and casualty insurance.