# The myth of interpretability of econometric models

There are important discussions nowadays about data modeling, and about the choice between the “two cultures” (as described in Breiman (2001)), i.e. econometric models versus machine/statistical learning models. We recently discussed this issue in Econométrie et Machine Learning (so far only in French) with Emmanuel Flachaire and Antoine Ly. One argument often used by econometricians is the interpretability of econometric models, or at least the attempt to get an interpretable model.

We also have this discussion in actuarial science, for instance in ratemaking (or insurance pricing). Machine learning based models usually perform better (for some a priori chosen metric), but actuaries claim that econometric models are more easily interpretable. In the actuarial literature, we assume that claim frequency $Y$ is driven by some non-observable risk factor $\Theta$, and therefore we do have heterogeneous risks in our portfolio. Hence, it can be seen as legitimate to differentiate prices. Assume that this risk factor $\Theta$ is strongly correlated with $X_1$, the age of the driver, because in our portfolio old drivers tend to have more accidents. Here, we could pretend to have a “causal story” (as defined in Freedman (2009)) because of a possible interpretation of the model. So it is natural here to consider a regression model of $Y$ on $X_1$ to derive our actuarial pricing model. But assume that, possibly, risk factor $\Theta$ is also strongly correlated with $X_2$, which can be related to spatial features (say latitude, which denotes a north/south position), because in our portfolio drivers living in the south tend to have more accidents (roads are known to be more dangerous there). Here, we could pretend to have a second “causal story”.

Of course, since $\Theta$ is strongly correlated with $X_1$ and $X_2$, it means that $X_1$ and $X_2$ are strongly correlated. Here also, this correlation can be interpreted (not in a causal way as previously, but still), since we know that old people like to live in southern regions. So, what should we do here? Let us run some simulations to illustrate.

set.seed(123)
n=1e5
Theta=rnorm(n)
X1=Theta+rnorm(n)/8
X2=Theta+rnorm(n)/8
L=exp(-3+Theta)
Y=rpois(n,L)
B=data.frame(Y,X1,X2)

Our first idea was to consider a model where $Y$ is “explained” by the first variable $X_1$,

g1=glm(Y~X1,data=B,family=poisson)
summary(g1)

Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Inter.)    -2.97778    0.01544 -192.88   <2e-16 ***
X1           0.97926    0.01092   89.64   <2e-16 ***

As expected, our variable is “significant”; but also, probably more interestingly, $X_2$ has no impact on the residuals,

B$e1=residuals(g1,type="pearson")
g1e=lm(e1~X2,data=B)
summary(g1e)

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Inter.)    0.0003618  0.0031696   0.114    0.909
X2          0.0028601  0.0031467   0.909    0.363

The interpretation is that once we corrected claim frequency for the age of the drivers, there is no spatial effect here. So, a good model should be based only on the age of the drivers. But we can also consider the other story. We can consider a model where $Y$ is “explained” by the second variable $X_2$,

g2=glm(Y~X2,data=B,family=poisson)
summary(g2)

Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Inter.)    -2.97724    0.01544 -192.81   <2e-16 ***
X2           0.97915    0.01093   89.56   <2e-16 ***

Here also we have a valid model, that can be interpreted, and here also $X_1$ has no impact on the residuals,

B$e2=residuals(g2,type="pearson")
g2e=lm(e2~X1,data=B)
summary(g2e)

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Inter.)    0.0004863  0.0031733   0.153    0.878
X1          0.0027979  0.0031504   0.888    0.374

The story is similar here. If we correct from the spatial pattern, claims frequency does not depend on the age of the driver.

So, what should we do now? We have two models, and each of them is as interpretable as the other. Note that we cannot use any statistical tool to distinguish between the two: they are comparable,

AIC(g1)
[1] 51013.39
AIC(g2)
[1] 51013.15

Why not incorporate the two explanatory variables $X_1$ and $X_2$, at the same time, in our regression model, and let “the model” decide what to do…?

g=glm(Y~X1+X2,data=B,family=poisson)
summary(g)

Coefficients:
            Estimate Std. Error  z value Pr(>|z|)
(Inter.)    -2.98132    0.01547 -192.723  < 2e-16 ***
X1           0.49310    0.06226    7.920 2.38e-15 ***
X2           0.49375    0.06225    7.931 2.17e-15 ***

It looks like we completely lost the interpretability of the model, since our two explanatory variables are (strongly) correlated. Actually, instead of saying “use one, and drop the other one (since it brings no further information)”, the model says “use both, each one will explain half of the effect”. Strange interpretation, isn’t it?
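Before going further, a quick sanity check (a hedged sketch, reusing the objects above) might help see what is going on: the two covariates are almost perfectly correlated, and the two coefficients in the joint model roughly sum to the coefficient obtained with either single covariate,

cor(B$X1,B$X2)               # close to 1, by construction
coef(g1)["X1"]               # about 0.98 with X1 alone
coef(g2)["X2"]               # about 0.98 with X2 alone
coef(g)["X1"]+coef(g)["X2"]  # the two "halves" add up to roughly the same value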

So why not try some LASSO here?

library(glmnet)
fit=glmnet(x=as.matrix(B[,c("X1","X2")]), y=B$Y,family="poisson")
plot(fit,xvar="lambda")

Here also, it says that we either keep both, or none. So it cannot be used for variable selection (which is an important motivation for using the LASSO technique). So, what should we do if we have several interpretable models, but no way to choose? Because usually, we claim that we prefer to use a model with an interpretation. But what should be done here?

# What is a Linear Trend, by the way?

I had a very strange discussion on twitter (yes, another one), about regression curves. I think it started with a tweet based on some xkcd picture (just for fun, because it was New Year’s Day). There were comments on that picture, by econometricians, mainly about ‘significant’ trends when datasets are very noisy. And I mentioned a graph that I saw earlier, a couple of days ago. Let us reproduce that graph (Roger kindly sent me the dataset),

db=data.frame(year=1990:2016,
  ratio=c(.23,.27,.32,.37,.22,.26,.29,.15,.40,.28,.14,.09,.24,.18,.29,.51,.13,.17,.25,.13,.21,.29,.25,.2,.15,.12,.12))
library(ggplot2)

The graph is here (with the same aesthetic conventions as Roger’s initial graph, i.e. using some sort of barplot),

ggplot(db, aes(year, ratio)) + geom_bar(stat="identity") +
  stat_smooth(method = "lm", se = FALSE)

My point was that we miss the ‘confidence band’ of the regression. In R, at least, it is quite natural to get it (and actually, it is the default version of the graph function),

ggplot(db, aes(year, ratio)) + geom_bar(stat="identity") +
  stat_smooth(method = "lm", se = TRUE)

It is hard to claim that the ‘regression line’ is significant (in the sense “significantly non horizontal”). To be more specific, if we look at the output of the regression model, we get

summary(lm(ratio~year,data=db))

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)  9.158531   4.549672   2.013    0.055 .
year        -0.004457   0.002271  -1.962    0.061 .
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(which is exactly what Roger used in his graph to plot his red straight line). The p-value of the estimator of the slope, in a linear regression model, is here 6%. But I found Roger’s point puzzling. First of all, let us get back to a more standard graph, with a scatterplot, and not bars,

ggplot(db, aes(year, ratio)) +
  stat_smooth(method = "lm") + geom_point()

Here, we observe points $\{y_{1990},y_{1991},\cdots,y_{2016}\}$. In order to draw that blue line, we assume (Econometrics 101, actually) that those observations are realizations of random variables $\{Y_{1990},Y_{1991},\cdots,Y_{2016}\}$. Randomness here does not come from a survey, or from ‘balls in an urn’. Randomness is there because hurricanes and floods are themselves seen as realizations of random events. Yes, there might be measurement errors, but that’s not where randomness comes from (here). When we talk about ‘randomness’, it should be related to ‘model error’, i.e. the error we make if we consider a linear model (here), that is

$Y_t=\beta_0+\beta_1t+\varepsilon_t$

Even if observations are not obtained from balls in an urn, there is some kind of randomness here. Randomness means that we might have errors (random errors) around the estimated value (that is, on the blue curve), $y_t=\widehat{y}_t+\widehat{\varepsilon}_t$.
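Going back to that confidence band, a minimal sketch of how it can be computed by hand with predict() (the 95% level is the default choice here, matching the band above),

reg=lm(ratio~year,data=db)
nd=data.frame(year=1990:2016)
ci=predict(reg,newdata=nd,interval="confidence",level=.95)  # columns: fit, lwr, upr
plot(db$year,db$ratio,pch=19,xlab="year",ylab="ratio")
matlines(nd$year,ci,lty=c(1,2,2),col=c("blue","grey","grey"))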
One might consider a nonlinear model to reduce the error,

ggplot(db, aes(year, ratio)) +
  geom_point() + geom_smooth()

but in that case, the danger is to overfit. So yes, when we fit a linear model, there is always some kind of randomness, and it is possible to get a ‘confidence band’ that will be very useful for predictions (e.g. for reinsurance purposes here).

# What the ROC Curve (and the AUC) Does Not Tell Us

While preparing a talk for next Tuesday, I was going through the results returned for an exercise, and I got a rather strange result with a classification model. I had given the same dataset last fall at ENSAE, so I had almost thirty other models to compare with (or rather, on the same test set, I have about thirty predictions). The black observations are the ones obtained last fall (the line corresponds to the best AUCs on the test set), and the red observations are the ones I obtained for Tuesday’s talk (here again, the vertical line corresponds to the best models, in the AUC sense), for one observation of the test set. These are the predicted probabilities (but I removed the scale). For almost all my observations, the red weights are well above the others… But that does not change the fact that the AUC obtained (for the two red models) is very good. This is indeed an important (and well-known) result: the AUC criterion (and more generally the whole ROC curve) tells us nothing about whether the predicted value is good or not. It only tells us whether the ordering is correct. If the largest predicted values are indeed the ones for which we observe a 1, the AUC will be very good. This is what we can observe in the small example below. Consider a rather simple simulated logistic model,

> n=1e3
> set.seed(1)
> x1=rnorm(n)
> x2=runif(n)
> u=-3+x2+x1
> p=exp(u)/(1+exp(u))
> y=rbinom(n,prob=p,size=1)
> library(ROCR)
> df=data.frame(y,x1,x2)
> mean(df$y)
[1] 0.116
> reg=glm(y~.,data=df,family=binomial)
> p=predict(reg,type="response")
> mean(p)
[1] 0.116
> pred1=prediction(p, df$y)
> L=performance(pred1, "tpr", "fpr")

The AUC is here

> auc=performance(pred1, "auc")@y.values[[1]]
> auc
[1] 0.7681191

and the ROC curve is the following,

> plot(unlist(L@x.values),unlist(L@y.values),
+ type="s",col="blue")

Suppose now that we change the intercept of the logistic model,

> reg$coefficients[1]=0

In that case, our prediction is rather poor, since the average predicted probability is now

> u=reg$coefficients[1]+reg$coefficients[2]*
+ df$x1+reg$coefficients[3]*df$x2
> p=exp(u)/(1+exp(u))
> mean(p)
[1] 0.6060676

(we are far from the 11.6% of 1’s in the dataset). Yet the AUC is still good,

> pred1=prediction(p, df$y)
> L=performance(pred1, "tpr", "fpr")
> auc=performance(pred1, "auc")@y.values[[1]]
> auc
[1] 0.7681191

(it is actually the same value as before, which makes sense since the ROC curve is identical)

> lines(unlist(L@x.values),unlist(L@y.values),
+ type="s",col="red")

In other words, those tools, classically used to assess the quality of a classifier, in no way allow us to say that the predicted probability is meaningful. Those criteria only tell us that we identify reasonably well the individuals most likely to have response 1. Which is not bad at all… but it is a different problem from having a relevant predicted probability.
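To make that point explicit, a minimal (hedged) sketch could compare a calibration measure, say the Brier score, for the two models: the ranking (hence the AUC) is unchanged, but the calibration deteriorates badly. The names p_orig and p_shift below are just illustrative,

> reg=glm(y~.,data=df,family=binomial)        # refit, to restore the original intercept
> p_orig=predict(reg,type="response")
> u=0+reg$coefficients[2]*df$x1+reg$coefficients[3]*df$x2
> p_shift=exp(u)/(1+exp(u))                   # same slopes, intercept forced to 0
> mean((df$y-p_orig)^2)                       # Brier score, original model
> mean((df$y-p_shift)^2)                      # Brier score, shifted model (much worse)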

# Data Science for Actuaries, Regression Models with R

After an introduction to Advanced R, the last part of our crash course will cover visualization and graphs (from the previous set of slides), and I just uploaded additional slides on regression models (including a pdf version).

# How long could it take to run a regression

This afternoon, while I was discussing with Montserrat (aka @mguillen_estany), we were wondering how long it might take to run a regression model. More specifically, how long it might take if we use a Bayesian approach. My guess was that the time should probably be linear in $n$, the number of observations. But I thought it would be good to check.

Let us generate a big dataset, with one million rows,

> n=1e6
> X=runif(n)
> Y=2+5*X+rnorm(n)
> B=data.frame(X,Y)

Consider as a benchmark the standard linear regression,

> lm_freq = function(n){
+   idx = sample(1:1e6,size=n)
+   reg = lm(Y~X,data=B[idx,])
+   summary(reg)
+ }

Here the regression is run on a subset of smaller size. We can do the same with a Bayesian approach, using stan,

> stan_lm ="
+ data {
+ int N;
+ vector[N] x;
+ vector[N] y;
+ }
+ parameters {
+ real alpha;
+ real beta;
+ real tau;
+ }
+ transformed parameters {
+ real sigma;
+ sigma <- 1 / sqrt(tau);
+ }
+ model{
+ y ~ normal(alpha + beta * x, sigma);
+ alpha ~ normal(0, 10);
+ beta ~ normal(0, 10);
+ tau ~ gamma(0.001, 0.001);
+ }
+ "

Then define (and compile) the model

> library(rstan)
> system.time(
stanmodel <<- stan_model(model_code = stan_lm))
utilisateur     système      écoulé
0.043       0.000       0.043

We want to see how long it might take to run a regression,

> lm_bayes = function(n){
+   idx = sample(1:1e6,size=n)
+   fit = sampling(stanmodel,
+       data = list(N=n,
+                   x=X[idx],
+                   y=Y[idx]),
+       iter = 1000, warmup=200)
+   summary(fit)
+ }

We use the following package to see how long it takes

> library(microbenchmark)
> time_lm = function(n){
+  M = microbenchmark(lm_freq(n),
+      lm_bayes(n),times=50)
+  return(apply( matrix(M$time,nrow=2),1,mean))
+ }

We can now compare the time it took with ten, one hundred, one thousand, and ten thousand observations,

> vN = c(10,100,1000,10000)
> T = Vectorize(time_lm)(vN)

and we can then plot it

> plot(vN,T[2,]/1e6,log="xy",col="red",type="b",
+ xlab="Number of Observations",ylab="Time")
> lines(vN,T[1,]/1e6,col="blue",type="b")

It looks like (if we forget about the very small samples) the time it takes to run a regression is linear, with both techniques (the frequentist and the Bayesian ones). And actually, the same story holds for logistic regressions. Consider the following dataset

> n=1e6
> X=runif(n)
> S=-3+2*X+rnorm(n)
> Y=rbinom(n,size=1,prob=exp(S)/(1+exp(S)))
> B=data.frame(X,Y)

The frequentist version of the logistic regression is

> glm_freq = function(n){
+   idx = sample(1:1e6,size=n)
+   reg = glm(Y~X,data=B[idx,],family=binomial)
+   summary(reg)
+ }

and the Bayesian one, using stan,

> stan_glm = "
+ data {
+ int N;
+ vector[N] x;
+ int<lower=0,upper=1> y[N];
+ }
+ parameters {
+ real alpha;
+ real beta;
+ }
+ model {
+ alpha ~ normal(0, 10);
+ beta ~ normal(0, 10);
+ y ~ bernoulli_logit(alpha + beta * x);
+ }
+ "
> stanmodel = stan_model(model_code = stan_glm)

> glm_bayes = function(n){
+   idx = sample(1:1e6,size=n)
+   fit = sampling(stanmodel,
+       data = list(N=n,
+                   x = X[idx],
+                   y = Y[idx]),
+       iter = 1000, warmup=200)
+   summary(fit)
+ }

Again, we can see how long it takes to run those regression models,

> time_glm = function(n){
+  M = microbenchmark(glm_freq(n),
+      glm_bayes(n),times=50)
+  return(apply( matrix(M$time,nrow=2),1,mean))
+ }
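To complete the comparison, a minimal sketch (reusing the vN grid and the plotting conventions above; T_glm is just an illustrative name) would be,

> T_glm = Vectorize(time_glm)(vN)
> plot(vN,T_glm[2,]/1e6,log="xy",col="red",type="b",
+ xlab="Number of Observations",ylab="Time")
> lines(vN,T_glm[1,]/1e6,col="blue",type="b")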

# Computational Actuarial Science, with R, in Barcelona

This Wednesday, I will give a graduate crash course on computational actuarial science, with R, which will be the second part of the lecture of Tuesday. Slides are now available,

# Econometrics vs. Machine Learning with Temporal Patterns

A few months ago, I published a (long) post entitled ‘some thoughts on economics, mathematics, econometrics, machine learning, etc‘. In that post, I discussed possible differences between the foundations of econometrics and machine learning. I wanted to get back today to an important point, related to training/testing samples, when we have temporal data.

I was discussing this morning, with a student of the Data Science for Actuaries program, an interesting point related to claim frequency models, for insurance ratemaking. Since the goal is to predict claims frequency (to assess the level of the insurance premium), he suggested using old data to train the model, and more recent data to test it. The problem is that the model did not incorporate any temporal pattern, and we got surprising results.

Consider here a simple dataset,

> set.seed(1)
> n=50000
> X1=runif(n)
> T=sample(2000:2015,size=n,replace=TRUE)
> L=exp(-3+X1-(T-2000)/20)
> E=rbeta(n,5,1)
> Y=rpois(n,L*E)
> B=data.frame(Y,X1,L,T,E)

Claims frequency is driven by a Poisson process, with one covariate, X1, and we assume that the intensity decreases over time (at an exponential rate). Consider here a standard Poisson regression, without any time effect,

> reg=glm(Y~X1+offset(log(E)),data=B,
+ family=poisson)

We can also compute the empirical annualized claims frequency

> u=seq(0,1,by=.01)
> v=predict(reg,newdata=data.frame(X1=u,E=1))
> p=function(x){
+   B=B[abs(B$X1-x)<.1,]
+   sum(B$Y)/sum(B$E)
+ }
> vp=Vectorize(p)(seq(.05,.95,by=.1))

and plot the two curves on the same graph,

> plot(seq(.05,.95,by=.1),vp,type="b")
> lines(u,exp(v),lty=2,col="red")

This is what we usually do in econometrics. In machine learning, and more specifically to assess the quality of the model and for model selection, it is common to split the dataset in two parts: a training sample and a validation sample. Consider some randomized training/validation samples, then fit a model on the training sample, and finally use it to get a prediction,

> idx=sample(1:nrow(B),size=nrow(B)*7/8)
> B_a=B[idx,]
> B_t=B[-idx,]
> reg=glm(Y~X1+offset(log(E)),data=B_a,
+ family=poisson)
> u=seq(0,1,by=.01)
> v=predict(reg,newdata=data.frame(X1=u,E=1))
> p=function(x){
+   B=B_a[abs(B_a$X1-x)<.1,]
+   sum(B$Y)/sum(B$E)
+ }
> vp_a=Vectorize(p)(seq(.05,.95,by=.1))
> plot(seq(.05,.95,by=.1),vp_a,col="blue")
> lines(u,exp(v),lty=2)
> p=function(x){
+   B=B_t[abs(B_t$X1-x)<.1,]
+   sum(B$Y)/sum(B$E)
+ }
> vp_t=Vectorize(p)(seq(.05,.95,by=.1))
> lines(seq(.05,.95,by=.1),vp_t,col="red")

The blue curve is the prediction on the training sample (as we usually do in econometrics), and the red curve is the prediction on the testing sample. Here, volatility probably comes from the small size of the testing sample (1 observation out of 8). Now, what if we use the year as a splitting criterion: we fit a model on old years, and we test it on recent years,

> B_a=subset(B,T<2014)
> B_t=subset(B,T>=2014)
> reg=glm(Y~X1+offset(log(E)),data=B_a,family=poisson)
> u=seq(0,1,by=.01)
> v=predict(reg,newdata=data.frame(X1=u,E=1))
> p=function(x){
+   B=B_a[abs(B_a$X1-x)<.1,]
+   sum(B$Y)/sum(B$E)
+ }
> vp_a=Vectorize(p)(seq(.05,.95,by=.1))
> plot(seq(.05,.95,by=.1),vp_a,col="blue")
> lines(u,exp(v),lty=2)
> p=function(x){
+   B=B_t[abs(B_t$X1-x)<.1,]
+   sum(B$Y)/sum(B$E)
+ }
> vp_t=Vectorize(p)(seq(.05,.95,by=.1))
> lines(seq(.05,.95,by=.1),vp_t,col="red")

Clearly, we miss something here… We were looking at such a graph this morning, and it took me some time to understand how training and validation samples were designed, and that there was a possible temporal effect (actually, this morning, it was based on a 3-year training sample, and a 1-year validation sample). Since there is a temporal pattern, let us capture it. As an econometrician, let me use a regression model,

> reg=glm(Y~X1+T+offset(log(E)),data=B,
+ family=poisson)
> C=coefficients(reg)
> u=seq(1999,2016,by=.1)
> v=exp(-(u-2000)/20-3)
> plot(2000:2015,exp(C[1]+C[3]*(2000:2015)))
> lines(u,v,lty=2,col="red")

(I focus only on the evolution of the temporal variable on that graph). Here, we use a linear model, but there is usually no reason to assume linearity. So we might consider splines,

> library(splines)
> reg=glm(Y~X1+bs(T)+offset(log(E)),
+ data=B,family=poisson)
> u=seq(1999,2016,by=.1)
> v=exp(-(u-2000)/20-3)
> v2=predict(reg,newdata=data.frame(X1=0,
+ T=2000:2015,E=1))
> plot(2000:2015,exp(v2),type="b")
> lines(u,v,lty=2,col="red")

But here again, why should we assume that there is an underlying smooth function? There might be some ruptures… So let us consider a regression on factors,

> reg=glm(Y~0+X1+as.factor(T)+offset(log(E)),
+ data=B,family=poisson)
> C=coefficients(reg)
> u=seq(1999,2016,by=.1)
> v=exp(-(u-2000)/20-3)
> plot(2000:2015,exp(C[2:17]),type="b")
> lines(u,v,lty=2,col="red")

An alternative might be to consider some more general model, like a regression tree,

> library(rpart)
> reg=rpart(Y~X1+T+offset(log(E)),data=B,
+ method="poisson",cp=1e-4)
> p=function(t){
+   B=B[B$T==t,]
+   B$E=1
+   mean(predict(reg,newdata=B))
+ }
> y_m=Vectorize(function(t) p(t))(2000:2015)
> u=seq(1999,2016,by=.1)
> v=exp(-(u-2000)/20-3+.5)
> plot(2000:2015,y_m,ylim=c(.02,.085),type="b")
> lines(u,v,lty=2,col="red")

Here, it seems that something went wrong. I guess it’s coming from the exposure. So consider a simpler model, on the annualized frequency, and with weights related to the exposure,

> reg=rpart(Y/E~X1+T,data=B,weights=B$E,cp=1e-4)
> p=function(t){
+   B=B[B$T==t,]
+   B$E=1
+   mean(predict(reg,newdata=B))
+ }
> y_m=Vectorize(function(t) p(t))(2000:2015)
> u=seq(1999,2016,by=.1)
> v=exp(-(u-2000)/20-3+.5)
> plot(2000:2015,y_m,ylim=c(.02,.085),type="b")
> lines(u,v,lty=2,col="red")

That was the econometrician’s perspective. From a machine learning perspective, consider a training sample (here based on old data) and a validation sample (based on more recent ones),

> B_a=subset(B,T<2014)
> B_t=subset(B,T>=2014)

If we consider a model, it is easy to get a prediction for recent years, even if the model was fitted on older ones,

> reg_a=glm(Y~X1+T+offset(log(E)),
+ data=B_a,family=poisson)
> C=coefficients(reg_a)
> u=seq(1999,2016,by=.1)
> v=exp(-(u-2000)/20-3)
> plot(2000:2015,exp(C[1]+C[3]*c(2000:2013,
+ NA,NA)),type="b")
> lines(u,v,lty=2,col="red")
> points(2014:2015,exp(C[1]+C[3]*2014:2015),
+ pch=19,col="blue")

But if we use years as factors, things are more complicated.

> reg_a=glm(Y~0+X1+as.factor(T)+offset(log(E)),
+ data=B_a,family=poisson)
> C=coefficients(reg_a)
> RMSE=function(A){
+   L=exp(C[1]*B_t$X1+A[1]*(B_t$T==2014)+A[2]*(B_t$T==2015))
+   Y_t=L*B_t$E
+   sum( (Y_t - B_t$Y )^2)}
> i=optim(c(.4,.4),RMSE)$par
> plot(2000:2015,c(exp(C[2:15]),NA,NA))
> u=seq(1999,2016,by=.1)
> v=exp(-(u-2000)/20-3)
> lines(u,v,lty=2,col="red")
> points(2014:2015,exp(i),pch=19,col="blue")

because we need to get a prediction for levels that were not in our training sample. Here, we minimize the RMSE to quantify the factor levels for the recent years. And the output is not that bad.

So yes, it is possible to get a training dataset on older data, and test it on recent years. But one should be careful, and take into account, properly, temporal patterns.
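Just to quantify that last point, a minimal (hedged) sketch could compare the out-of-sample Poisson deviance on the 2014-2015 validation sample, with and without a temporal effect in the training model (reusing B_a and B_t from above; dev_out is just an illustrative name),

> reg_noT=glm(Y~X1+offset(log(E)),data=B_a,family=poisson)
> reg_T=glm(Y~X1+T+offset(log(E)),data=B_a,family=poisson)
> dev_out=function(model){
+   mu=predict(model,newdata=B_t,type="response")   # uses the offset log(E) from B_t
+   2*sum(ifelse(B_t$Y==0,0,B_t$Y*log(B_t$Y/mu))-(B_t$Y-mu))
+ }
> dev_out(reg_noT)
> dev_out(reg_T)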

# An Attempt to Understand Boosting Algorithm(s)

Last Tuesday, at the annual meeting of the French Economic Association, I was having lunch with Alfred, and while we were chatting about modeling issues (econometric models against machine learning predictions), he asked me what boosting was. Since I could not be very specific, we had a look at the Wikipedia page.

Boosting is a machine learning ensemble meta-algorithm for reducing bias primarily and also variance in supervised learning, and a family of machine learning algorithms which convert weak learners to strong ones

One should admit that it is not very informative. But at least, there is the idea that ‘weak learners’ can be used to provide a good predictor. Now, to be honest, I guess I understand the concept. But I still can’t reproduce what I got with standard ‘boosting’ packages.

There are a lot of publications about the concept of ‘boosting’. In 1988, Michael Kearns published Thoughts on Hypothesis Boosting, which is probably the oldest one. About the algorithms, it is possible to find some references. Consider for instance Improving Regressors using Boosting Techniques, by Harris Drucker, or The Boosting Approach to Machine Learning: An Overview by Robert Schapire, among many others. In order to illustrate the use of boosting in the context of regression (and not classification, since I believe it provides a better visualisation), consider the section in Dong-Sheng Cao’s The boosting: A new idea of building models.
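Before looking at the packages, here is a minimal sketch of the idea in a regression context (L2-boosting, i.e. gradient boosting with squared loss, using tree stumps as weak learners); the shrinkage parameter and the number of iterations below are arbitrary choices, not tuned values,

library(rpart)
set.seed(1)
n=200
x=runif(n,0,10)
y=sin(x)+rnorm(n,0,.3)
base=data.frame(x,y)
nu=.1                      # shrinkage (learning rate)
M=100                      # number of weak learners
pred=rep(mean(y),n)        # start from the mean
for(m in 1:M){
  base$res=y-pred                               # current residuals
  stump=rpart(res~x,data=base,maxdepth=1,cp=0)  # the weak learner: a tree stump
  pred=pred+nu*predict(stump,newdata=base)      # update the aggregated predictor
}
plot(x,y)
lines(sort(x),pred[order(x)],col="red",lwd=2)

Each stump alone is a very poor predictor, but the (shrunken) sum of a hundred of them tracks the underlying sine function quite closely.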

# Classification with Categorical Variables (the fuzzy side)

The Gaussian and the (log) Poisson regressions share a very interesting property,

$\frac{1}{n}\sum_{i=1}^n \widehat{Y}_i=\frac{1}{n}\sum_{i=1}^n Y_i$

i.e. the average predicted value is the empirical mean of our sample.
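A quick sketch of why this holds: both models include an intercept and use the canonical link, so the first-order condition (score equation) associated with the intercept is

$\sum_{i=1}^n \left(Y_i-\widehat{Y}_i\right)=0$

– for the Gaussian model this is the first row of the normal equations $\boldsymbol{X}^{\text{\sffamily T}}(\boldsymbol{Y}-\widehat{\boldsymbol{Y}})=\boldsymbol{0}$ (the column of 1’s), and for the log-Poisson model it is the score equation for the intercept – which is exactly the property above.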

> mean(predict(lm(dist~speed,data=cars)))
[1] 42.98
> mean(cars$dist)
[1] 42.98

One can prove that it is also the prediction for the average individual in our sample,

> predict(lm(dist~speed,data=cars),
+ newdata=data.frame(speed=mean(cars$speed)))
42.98

The geometric interpretation is that the regression line passes through the centroid,

> plot(cars)
> abline(lm(dist~speed,data=cars),col="red")
> abline(h=mean(cars$dist),col="blue")
> abline(v=mean(cars$speed),col="blue")
> points(mean(cars$speed),mean(cars$dist))

But in all other cases, it is no longer the case. Consider for instance the case of a logistic regression. And to ask for something even more complicated, consider the case where we have only categorical explanatory variables. In that context, it is more difficult to get a prediction for the “average individual”. Unless we consider some fuzzy interpretation of the regression.
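As a small, hedged illustration of that point (with a continuous covariate, just to keep the sketch short): in a logistic regression, the prediction for the “average individual” no longer matches the empirical mean of the response,

> set.seed(1)
> x=rnorm(1000)
> p=exp(-2+x)/(1+exp(-2+x))
> y=rbinom(1000,size=1,prob=p)
> reg=glm(y~x,family=binomial)
> mean(y)                                   # empirical mean of the response
> predict(reg,newdata=data.frame(x=mean(x)),
+ type="response")                          # prediction for the "average individual": not the same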

# On Some Alternatives to Regression Models

When you start discussing with people in machine learning, you quickly hear something like “forget your econometric models, your GLMs, I can easily find a machine learning ‘model’ that can beat yours”. I am usually very sceptical, especially when I hear “easily” or “always“. I have no problem about the fact that I use old econometric models, but I had the feeling that things aren’t that easy. I can understand that we might have problems when we do have a lot of features (I am still working on that, I’ll get back to this point soon), but I have the feeling that I can still capture interactions, and non-linearities with standard econometric models as well as any machine learning algorithm.

Just to illustrate, consider the following ‘model’,

$\mathbb{E}[Y\vert\boldsymbol{X}=\boldsymbol{x}]=m(\boldsymbol{x})$

where $m(\cdot)$ is (just to illustrate)

> n <- 5000
> rtf <- function(x1, x2) { sin(x1+x2)/(x1+x2) }
> xgrid <- seq(1,6,length=31)
> ygrid <- seq(1,6,length=31)
> zgrid <- outer(xgrid,ygrid,rtf)
> persp(xgrid,ygrid,zgrid,theta=30, phi=30,
+ col="green", ticktype="detailed",shade=TRUE)
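Just to back up that claim with a hedged sketch: simulate observations from that model (with Gaussian noise, arbitrary here), fit a regression with bivariate splines (a tensor product of B-splines, with arbitrary degrees of freedom), and compare the fitted surface with the true one,

> library(splines)
> set.seed(1)
> x1 <- runif(n,1,6)
> x2 <- runif(n,1,6)
> y <- rtf(x1,x2) + rnorm(n,sd=.1)
> reg <- lm(y~bs(x1,df=6)*bs(x2,df=6))
> zfit <- outer(xgrid,ygrid,function(a,b)
+   predict(reg,newdata=data.frame(x1=a,x2=b)))
> persp(xgrid,ygrid,zfit,theta=30, phi=30,
+ col="green", ticktype="detailed",shade=TRUE)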

# Modeling Earthquake Dynamics

In 2012, with Marilou Durand, a student at UQAM, we worked on the seismic gap hypothesis, see e.g. McCann et al. (1978) or Kagan & Jackson (1991), or, to be more specific, on the dynamics between earthquake magnitudes (or seismic moments) and inter-occurrence durations. Our paper should appear soon in the Journal of Seismology,

In this paper, we investigate questions arising in Parsons & Geist (2012). Pseudo-causal models connecting magnitudes and waiting times are considered, through generalized regression. We use conditional models (magnitude given previous waiting time, and conversely) as an extension of the joint distribution model described in Nikoloulopoulos & Karlis (2008). On the one hand, we fit a Pareto distribution for earthquake magnitudes, where the tail index is a function of the waiting time following the previous earthquake; on the other hand, waiting times are modeled using a Gamma or a Weibull distribution, where parameters are functions of the magnitude of the previous earthquake. We use those two models, alternately, to generate the dynamics of earthquake occurrence, and to estimate the probability of occurrence of several earthquakes within a year, or a decade.

The paper is online on https://hal.archives-ouvertes.fr/.
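Just to give an idea of the alternating dynamics described above, here is a purely illustrative sketch: magnitudes above some threshold drawn from a Pareto distribution whose tail index depends on the previous waiting time, and waiting times drawn from a Gamma distribution whose rate depends on the previous magnitude. All parameter values below are arbitrary, and are not the ones estimated in the paper,

set.seed(1)
nsim=1000
M=rep(NA,nsim); W=rep(NA,nsim)
M[1]=6; W[1]=1
for(t in 2:nsim){
  alpha=1+.5*exp(-W[t-1])            # Pareto tail index, a function of the previous waiting time
  M[t]=5*runif(1)^(-1/alpha)         # magnitude above threshold 5, Pareto(alpha)
  rate=exp(-2+.3*(M[t]-5))           # Gamma rate, a function of the current magnitude
  W[t]=rgamma(1,shape=2,rate=rate)   # waiting time until the next event
}
mean(W)                              # average inter-occurrence duration
quantile(M,.99)                      # tail of the simulated magnitudes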

# Inequalities and Quantile Regression

In the course on inequality measures, we’ve seen how to compute various (standard) inequality indices, based on some sample of incomes (that can be binned into various categories). On Thursday, we discussed the fact that incomes can be related to different variables (e.g. experience), and that comparing income inequalities between countries can be biased if they have very different age structures.

So we’ve seen quantile regressions. I can mention some old slides (used in a crash course at McGill three years ago), as well as a more technical discussion on ties, and the non-uniqueness of the regression line.

In order to illustrate, consider  the following dataset

> salary <- read.table("http://data.princeton.edu/wws509/datasets/salary.dat",header=TRUE)
> plot(salary$yd,salary$sl)
> abline(lm(sl~yd,data=salary),col="blue")

We have here the standard regression line, obtained using ordinary least squares. Here we have the expected income given the experience. But we can also use a quantile regression,

$Q_\tau(Y\vert\boldsymbol{X})=\boldsymbol{X}^{\text{\sffamily T}}\boldsymbol{\beta}$

> library(quantreg)
> Q10 <- rq(sl~yd,data=salary,tau=.1)
> Q90 <- rq(sl~yd,data=salary,tau=.9)
> abline(Q10,col="red")
> abline(Q90,col="purple")

A classical tool to describe inequalities is the ratio of the 90% quantile over the 10% quantile (among many others),

> ratio9010 = function(age){
+   predict(Q90,newdata=data.frame(yd=age))/
+   predict(Q10,newdata=data.frame(yd=age))
+ }

For instance, among people with 5 years of experience, there is an inequality index of

> ratio9010(5)
1.401749

while for people with 30 years of experience, it would be

> ratio9010(30)
1.9488

If we plot the evolution of this 90-10 ratio, as a function of the experience, we get the following increasing trend

> A=0:30
> plot(A,Vectorize(ratio9010)(A),type="l",ylab="90-10 quantile ratio")

So clearly, comparing inequalities ceteris paribus between two groups should be performed carefully, and probably including some covariates.
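To get a rough sense of the uncertainty around those 90-10 ratios, a minimal (hedged) sketch could bootstrap the whole procedure; the number of replications below is arbitrary,

> nb=500
> boot_ratio=function(age){
+   r=replicate(nb,{
+     s=salary[sample(nrow(salary),replace=TRUE),]
+     q90=rq(sl~yd,data=s,tau=.9)
+     q10=rq(sl~yd,data=s,tau=.1)
+     predict(q90,newdata=data.frame(yd=age))/
+       predict(q10,newdata=data.frame(yd=age))
+   })
+   quantile(r,c(.025,.975))   # a rough 95% bootstrap interval
+ }
> boot_ratio(5)
> boot_ratio(30)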

# Final Exam Correction, Forecasting Models

A draft correction for last week’s 50 multiple-choice questions is now online (with the answer statistics). As I recall in the introduction, it is an excerpt from a real exam given at the beginning of the year. Please let me know quickly if you disagree with my correction.

# Linear Regression, Some R Code

A quick post to put online the code used last week, complementing the code from the slides. We still work on the same dataset, where we try to predict the braking distance of a vehicle, as a function of its speed.

> plot(cars)
> reg=lm(dist~speed,data=cars)
> summary(reg)

Call:
lm(formula = dist ~ speed, data = cars)

Residuals:
Min      1Q  Median      3Q     Max
-29.069  -9.525  -2.272   9.215  43.201

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -17.5791     6.7584  -2.601   0.0123 *
speed         3.9324     0.4155   9.464 1.49e-12 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 15.38 on 48 degrees of freedom
Multiple R-squared:  0.6511,	Adjusted R-squared:  0.6438
F-statistic: 89.57 on 1 and 48 DF,  p-value: 1.49e-12

To make several predictions, by hand, we can use the following code (the loop makes predictions for several values),

> for(x in seq(3,30,by=.25)){
+ b0=coef(reg)[1]
+ b1=coef(reg)[2]
+ Yx=b0+b1*x
+ V=vcov(reg)
+ Vx=V[1,1]+2*V[1,2]*x+V[2,2]*x^2
+ IC1=Yx+c(-1,+1)*1.96*sqrt(Vx)
+ s=summary(reg)$sigma
+ IC2=Yx+c(-1,+1)*1.96*s
+ points(x,Yx,pch=19,col="red")
+ points(c(x,x),IC1,pch=3,col="blue")
+ points(c(x,x),IC2,pch=3,col="purple")}

We then ran a linear regression on a sub-sample, with 20 observations drawn at random,

> I=sample(1:50,size=20)
> reg=lm(dist~speed,data=cars[I,])

The goal was to visualize the impact of the number of observations on the quality of the regression,

> summary(reg)

Call:
lm(formula = dist ~ speed, data = cars[I, ])

Residuals:
Min      1Q  Median      3Q     Max
-23.529  -7.998  -5.394  11.634  39.348

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -20.7408     9.4639  -2.192   0.0418 *
speed         4.2247     0.6129   6.893 1.91e-06 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 16.62 on 18 degrees of freedom
Multiple R-squared:  0.7252,	Adjusted R-squared:  0.71
F-statistic: 47.51 on 1 and 18 DF,  p-value: 1.91e-06

> for(x in seq(3,30,by=.25)){
+ b0=coef(reg)[1]
+ b1=coef(reg)[2]
+ Yx=b0+b1*x
+ V=vcov(reg)
+ Vx=V[1,1]+2*V[1,2]*x+V[2,2]*x^2
+ IC=Yx+c(-1,+1)*1.96*sqrt(Vx)
+ points(x,Yx,pch=19,col="purple")
+ points(c(x,x),IC,pch=3,col="green")}

Note that it is possible to use R functions to make predictions, with confidence intervals,

> predict(reg,
+ newdata=data.frame(speed=c(15,25)),interval="confidence")
       fit      lwr       upr
1 42.62976 34.75450  50.50502
2 84.87677 68.92746 100.82607
> predict(reg,
+ newdata=data.frame(speed=15),interval="prediction")
       fit      lwr      upr
1 42.62976 6.836077 78.42344

When we have more than one explanatory variable, it is more complicated to “visualize” the regression,

> chicago=read.table("http://freakonometrics.free.fr/chicago.txt",
+ header=TRUE,sep=";")
> Y=chicago$Fire
> X1=chicago$X_1
> X2=chicago$X_2
> X3=chicago$X_3
> base=data.frame(Y,X1,X2,X3)
> plot(X2,X3)
> reg=lm(Y~X2+X3,data=base)
> y=function(x2,x3) predict(reg,newdata=data.frame(X2=x2,X3=x3))
> VX2=seq(0,80)
> VX3=seq(5,25)
> VY=outer(VX2,VX3,y)
> image(VX2,VX3,VY)
> contour(VX2,VX3,VY,add=TRUE)

which corresponds to a regression plane,

> persp(VX2,VX3,VY,theta=30,ticktype="detailed")

We will come back to this point in more detail later on, but it is possible to run nonlinear regressions quite easily, starting from this linear model. We began with a linear model on the logarithm of the distance,

> plot(cars$speed,log(cars$dist))
> reg1=lm(log(dist)~speed,data=cars)
> abline(reg1,col="red")

(as we will see, this is not the end of the story, since we do not have a prediction for the distance here, only for its logarithm… but we will get back to this, promised), or on the square root,

> plot(cars$speed,sqrt(cars$dist))
> reg1=lm(sqrt(dist)~speed,data=cars)
> abline(reg1,col="red")

Instead of transforming the variable of interest, we can also transform the explanatory variable. We can use powers, or simple functions, but also introduce breakpoints. We started with an indicator variable,

> plot(cars$speed,cars$dist)
> s=10
> abline(v=s,col="green")
> regs=lm(dist~speed+I(speed>s),data=cars)
> summary(regs)

Call:
lm(formula = dist ~ speed + I(speed > s), data = cars)

Residuals:
Min      1Q  Median      3Q     Max
-29.472  -9.559  -2.088   7.456  44.412

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)       -17.2964     6.7709  -2.555   0.0139 *
speed               4.3140     0.5762   7.487  1.5e-09 ***
I(speed > s)TRUE   -7.5116     7.8511  -0.957   0.3436
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 15.39 on 47 degrees of freedom
Multiple R-squared:  0.6577,	Adjusted R-squared:  0.6432
F-statistic: 45.16 on 2 and 47 DF,  p-value: 1.141e-11

But we can also use functions in order to get a piecewise linear, yet continuous, model,

> plot(cars)
> s=15
> abline(v=s,col="green")
> positive=function(x) ifelse(x>0,x,0)
> regs=lm(dist~speed+positive(speed-s),data=cars)
> summary(regs)

Call:
lm(formula = dist ~ speed + positive(speed - s), data = cars)

Residuals:
Min      1Q  Median      3Q     Max
-29.502  -9.513  -2.413   5.195  45.391

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)          -7.6519    10.6254  -0.720  0.47500
speed                 3.0186     0.8627   3.499  0.00103 **
positive(speed - s)   1.7562     1.4551   1.207  0.23350
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 15.31 on 47 degrees of freedom
Multiple R-squared:  0.6616,	Adjusted R-squared:  0.6472
F-statistic: 45.94 on 2 and 47 DF,  p-value: 8.761e-12

We have one breakpoint here, but we could imagine having several,

> nouvellebase=data.frame(speed=5:25)
> y=predict(regs,newdata=nouvellebase)
> lines(5:25,y,col="red")
>
> plot(cars$speed,cars$dist)
> s1=10
> s2=20
> abline(v=c(s1,s2),col="green")
> positive=function(x) ifelse(x>0,x,0)
> regs=lm(dist~speed+positive(speed-s1)+positive(speed-s2),data=cars)
> summary(regs)

Call:
lm(formula = dist ~ speed + positive(speed - s1) + positive(speed - s2), data = cars)

Residuals:
Min      1Q  Median      3Q     Max
-24.374  -9.475  -2.625   6.639  43.914

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)           -7.6305    16.2941  -0.468   0.6418
speed                  3.0630     1.8238   1.679   0.0998 .
positive(speed - s1)   0.2087     2.2453   0.093   0.9263
positive(speed - s2)   4.2812     2.2843   1.874   0.0673 .
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 15 on 46 degrees of freedom
Multiple R-squared:  0.6821,	Adjusted R-squared:  0.6613
F-statistic: 32.89 on 3 and 46 DF,  p-value: 1.643e-11

As seen in class, testing the significance of the last two coefficients does not test whether the slope is zero, but whether it is significantly different from the one obtained on the left-hand region (before the two thresholds).

# There is no “Too Big” Data, is there?

A few years ago, a former classmate came back to me with a simple problem. He was working for some insurance company (and still is, don’t worry, chatting with me is not yet a reason for dismissal), and his problem was that their dataset was too large to run (standard) code to get a regression, and some predictions. My answer was to use sub-sampling techniques, and I still believe that this might be a good idea (actually, I wrote a long post on that issue, entitled too large datasets for regression? What about subsampling). But I wanted to go further, since I did not discuss predictions obtained with sub-sampling techniques.

So, consider here a logistic regression $Y_i\sim\mathcal{B}(p_i)$, based on some covariates. We have $k$ explanatory variables ($k$ will be large, but not too large) and $n$ observations (with $n\gg k$),

$\frac{p_i}{1-p_i}=\exp[\boldsymbol{X}_i^{\text{\sffamily T}}\boldsymbol{\beta}]$

Here we have a (potentially) big matrix product $\boldsymbol{X}\boldsymbol{\beta}$, i.e.

${\begin{matrix}{}_{ \uparrow} \\ {}_{n} \\ {}_{ \downarrow}\end{matrix}} \underset{\leftarrow \ \ k+1 \ \ \rightarrow}{\begin{bmatrix}\vdots&\vdots&&\vdots\\1 & X_{1,i} &\cdots &X_{k,i}\\\vdots&\vdots&&\vdots\end{bmatrix}}\begin{bmatrix}\beta_0 \\\beta_1\\ \vdots \\ \beta_k \end{bmatrix}$

with a large $\boldsymbol{X}$ matrix. Here, assume that we have a $100,000\times101$ matrix, with $100,000$ individual observations, and $100$ possible variables (and the intercept). Actually, in my model, only $2$ variables were used in the real model. Assume further that explanatory variables are – potentially – correlated.

n=100000
library(mnormt)
k=50
r=.2
Sig=matrix(r,k,k)
diag(Sig)=1
X=rmnorm(n,varcov=Sig)
U=pnorm(rmnorm(n,varcov=Sig))
p=exp(-U[,1]-X[,1]-1)/(1+exp(-U[,1]-X[,1]-1))
Y=rbinom(n,size=1,p)
df=data.frame(Y,U,X)
names(df)=c("Y",paste("U",1:50,sep=""),paste("X",1:50,sep=""))
reg=glm(Y~.,data=df,family="binomial")

In some sense, it is not too big, since we can run a regression on that dataset with a simple laptop (even if it can still be seen as a large dataset, in the sense discussed in http://businessweek.com/…). But let us consider an alternative strategy, to be able to get some predictions – or some model – in the case we cannot run a regression. Two strategies will be compared,

• generate $100$ datasets with $n/10$ observations, by sub-sampling
• generate $100$ datasets with $n/100$ observations, by sub-sampling

On each dataset, we can now run a regression, and compare the estimation of the coefficients with the “true” regression (on the whole dataset, since here, we can still run it). Then, since out of $100$ explanatory variables, only $2$ were actually used to generate the output, we should probably remove unnecessary variables in our model. So, some stepwise procedures were used.
L1=L2=L1s=L2s=list()
library(MASS)
ns1=n/10
ns2=n/100
for(s in 1:100){
i=sample(1:n,size=ns1,replace=TRUE)
reg_sub=glm(Y~.,data=df[i,],family="binomial")
L1[[s]]=reg_sub
L1s[[s]]=stepAIC(reg_sub)
i=sample(1:n,size=ns2,replace=TRUE)
reg0=glm(Y~.,data=df[i,],family="binomial")
L2[[s]]=reg0                 # models fitted on sub-samples of size n/100
L2s[[s]]=stepAIC(reg0)
}

For instance, if we consider the very first coefficient, which should appear in the regression (let us forget about the intercept), or the second coefficient (which was not considered to generate the dataset), we get

VC=c(-1,-1,rep(0,49),-1,rep(0,49))
coef=function(k){
C1=unlist(lapply(L1,function(x) coefficients(x)[k]))
C2=unlist(lapply(L2,function(x) coefficients(x)[k]))
m=summary(reg)$coefficients
u=seq(quantile(C2,.2),quantile(C2,.8),length=501)
v=dnorm(u,m[k,1],m[k,2])
plot(u,v,col="white",xlab="",ylab="",axes=FALSE)
axis(1)
polygon(c(u,rev(u)),c(v,rep(0,length(u))),col="grey",border=NA)
abline(v=VC[k],lty=2)
}

coef(2)

where the density in grey is the Gaussian density of some estimator obtained from the large (and complete) dataset and boxplots are estimates obtained on sub-samples (without the stepwise procedure, just to make sure I will keep that variable).

For coefficients associated with variables not used to generate the dataset, we get graphs like the following

So, clearly, the smaller the dataset, the larger the dispersion of the estimates. But so far, nothing new. In my previous post – too large datasets for regression? What about subsampling – my point was to discuss computational times, and a possible optimal size for the sub-datasets. Now, what about the impact of sub-sampling on predictions? Here, we fit a model on a small sample, but we can get a prediction on the whole dataset. In order to describe the goodness of fit of our regression model, let us plot ROC curves. More specifically, three kinds of lines will be plotted,

• the ROC curve for the $\widehat{Y}_i$‘s obtained with the model on the complete dataset [red]
• the ROC curves for the $\widehat{Y}_i^{(b)}$‘s obtained with the model fitted on the $b$-th subsample [light blue]
• the ROC curve for the $\widetilde{Y}_i$‘s obtained by averaging the previous estimates [blue]

$\widetilde{Y}_i=\frac{1}{B}\sum_{b=1}^B\widehat{Y}_i^{(b)}$
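The ROC.curve() function used below is a small helper from the original post that is not reproduced here; a minimal sketch of what it is assumed to return (a two-row matrix, with false positive rates on the first row and true positive rates on the second, over a grid of thresholds) could be,

ROC.curve=function(s,y,nth=501){
  th=seq(0,1,length=nth)                                   # grid of thresholds
  FPR=sapply(th,function(t) sum((s>t)&(y==0))/sum(y==0))
  TPR=sapply(th,function(t) sum((s>t)&(y==1))/sum(y==1))
  rbind(rev(FPR),rev(TPR))                                 # row 1: FPR (x), row 2: TPR (y)
}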

S=predict(reg,type="response")
Y=df$Y
M.ROC=ROC.curve(S,Y)
plot(M.ROC[1,],M.ROC[2,],type="s",col="red")
Z=df$Y*0
for(si in 1:100){
S=predict(L1s[[si]],type="response",newdata=df)
Z=Z+S
Y=df$Y
M.ROC=ROC.curve(S,Y)
lines(M.ROC[1,],M.ROC[2,],type="s",col="light blue")
}
S=Z/100
Y=df$Y
M.ROC=ROC.curve(S,Y)
lines(M.ROC[1,],M.ROC[2,],type="s",col="blue",lwd=2)

If we consider sub-samples of size $n/10$, we get the following, and when we consider sub-samples of size $n/100$, without the stepwise procedure (most variables have a small, not significant, coefficient) and after the stepwise procedure. Clearly – and that should not be a surprise – looking at predictions when the model was fitted on 1% of the dataset is not great (ROC curves are substantially below the red ROC curve). But the interesting point is that averaging yields great results. In terms of ROC curves, we get the same whether we are

• running one regression on our $100,000\times101$  matrix
• averaging predictions after running $100$ regressions on some $1,000\times101$ matrices

Except that the first one might not be possible to run, if the dataset were larger. And I have to admit that with the stepwise procedure, with $100$ variables (where $98$ should – theoretically – be removed), it took some time! But still. I have the feeling that sub-sampling is extremely promising in the context of too-large datasets.