A brief history of sports betting

This article was originally published – in French – on variance.eu

A report by the American Gaming Association (May 2017) estimated that between $100 billion and $400 billion is bet each year on sports alone, for an estimated gross revenue of between $5 billion and $20 billion. We will return here to a brief history of sports betting, emphasizing the concept of pari-mutuel betting. We will see, in a second article, the links between this principle, mathematical finance, and insurance.

From games to sports

Sports betting has been around for a long time, even if the origin of the first bet is impossible to date. We can think of the Greeks, inventors of the Olympic Games, where it was not uncommon for spectators to bet among themselves on the winners (Decker & Thuillier, 2004). Closer to home, as Georges Vigarello reminds us, “Under the Ancien Régime, gambling was the subject of a real passion. It took the form of either betting games or prize games.

The first, bets, were made between people from the same social world, between farmers or between nobles. The second, prize games, took place during parish celebrations and reflected different regional practices, such as wrestling in Brittany or jumping in Provence. We can also think of the confrontations between villages at la soule, for example. Among the nobles, prize games were organized for special occasions (a birth or a wedding). These games were recreational and festive moments.

It was not until the end of the 19th century that these games turned into sports, in line with the hygienist theories of the time. We can think of Baron Pierre de Coubertin, who wanted to “use all the means appropriate to develop our physical qualities to make them serve the collective good” through “sport”. We find the Baron again in 1887 with the creation of the Union of French Societies of Athletic Sports, the official appearance of the notion of “sport”, replacing that of “game”, as Dietschy & Clastres (2006) point out, noting in passing that this Union was based on amateurism, in reaction against the cycling (from 1860) and walking (around 1870) societies, which carried on the traditions of prize and betting games. Around 1890, this union, dedicated to athletics, opened up to other sports (rugby, field hockey, fencing, swimming), which were represented by specialized commissions.

The first bookmakers and gambling

A little earlier, during the Industrial Revolution, horse betting organised by bookmakers developed. These bets were popular in the United Kingdom in the 16th and 17th centuries, but remained reserved for the aristocracy and the landed gentry. And in reality, only horse owners were allowed to bet on the results of these private races, known as “matches”. One of these races, launched by the twelfth Earl of Derby (Edward Smith-Stanley) around 1780, also left its mark on sporting vocabulary. While these races were originally private, Charles II’s passion for racing made them more ambitious, attracting huge crowds who bet ever larger sums. Innkeepers and pub owners were then major promoters of these races, which encouraged owners to organize them near their establishments. They then naturally became the first bookmakers, organizing the first steeple-chases, a form of race (first created in Ireland) where riders rode from one church steeple to another, jumping everything in their path! In 1826, at the stables of St Albans, north of London, the idea of horses starting and finishing in the same place was launched, giving rise to modern racecourses.

Betting was not yet regulated and betting on races was based on a credit system. And since gambling near a place where alcohol was available in large quantities can have dramatic consequences, the British government banned gambling in pubs, which led to the opening of betting shops, run by bookmakers, with the adoption of the Gaming Act in 1845. The bookmakers not only played the role of scribes, keeping track of transactions in registers, they also served as arbitrators in betting. The bookmaker thus became the intermediary with whom to bet: he receives the bets, but does not bet against the player. The arbitrator does not only act at the end, in the event of a dispute, but above all to make the bet official. Indeed, cash bets were rare, and bookmakers decided whether the items bet had the same value and, if not, what the difference was. One of the players then added money to a hat. Both players then put their hands in the hat and withdrew them, either to accept the assessment or to indicate their disagreement. This is the origin of the expression “hand in cap”, which refers to the amount of money needed to ensure a fair bet. The word “handicap” was then commonly used in horse betting (to designate disadvantaged participants at the start of a race), before taking on a medical connotation from the 1950s onwards.

Thereafter, bookmakers did not lack imagination, introducing cash bets, then offering fixed odds against each horse in a race. Parliament then backtracked with the Suppression of Betting Houses Act in 1853; only betting on credit and gambling on racetracks remained allowed. At the same time, in France, Léon Sari invented the “pari mutuel” in 1857 with Charles de Morny, owner of the Maisons-Laffitte racetrack (where a grandstand was built in June 1878). Joseph Oller, who co-founded the Moulin-Rouge, was the concessionaire. As the Senate report on gambling in France reminds us, the law of June 2, 1891 legalized betting on horse races and established the principle of mutualization. As we will see later, this principle means that bettors play against each other and share the winnings (once the legal levies provided for by law have been made, for the benefit of the State and of the racing institution). In mathematical finance, we speak of a “self-hedging strategy”. In March 1931, the PMU (“pari mutuel urbain”) was born, and it was not until 1985 that the “sports lotto” arrived.

From horses to other sports

In England, the “pool” has long referred to a game of cards played for collective stakes, drawing its etymology from the French “poule” (hen), or rather from the Old French word for a young fowl (from the Latin pulla, from pullus, the “young animal”), which also meant “booty” or “loot”. Here we find the idea of playing for money. This use can be traced back to 1870 (in the sense of “collective betting”), before the word came to designate, during the First World War, a group of people sharing resources or skills. As early as 1920, the term “football pool” was coined, as recalled by Forrest (1999).

In Liverpool, England, John Moores founded Littlewoods in 1923, a retail company, before launching mail order sales, while offering football bets. The most famous game was the “Treble Chance”, where players could choose to bet on 10, 11 or 12 football matches for the coming weekend. Anecdotally, as noted by Forrest & Pérez (2013), when a match could not take place (for example because of rain), a panel of experts appointed by Littlewoods had to model the match and provide a forecast. After the Second World War, in Europe, came the so-called 1X2 formulas, where the player must predict whether, in a set of 12 to 15 games, the home team will win (1), lose (2) or draw (X). It can be noted that these “football pools” could refer to any form of pari-mutuel betting, very strongly resembling a lotto. The main difference is that in the lottery, the draw is supposed to be a pure random process, unlike football matches. And for the players, the difference is significant! In the 1980s, Littlewoods was one of the largest private companies in Europe, before declining with the rise of online betting sites.

Internet and online betting

Now, in addition to the betting companies that still exist in the United Kingdom, the strong point of bookmakers is their online presence. The first sites were created around 1995, for example Intertops, which was based on a law passed by the island nation of Antigua and Barbuda (an officially independent, Commonwealth member country) in 1994, granting licences to companies wishing to provide gambling services over the Internet (subsequently, they obtained licences from the Mohawk territory of Kahnawake in Quebec, or Malta). Betting on sports events has quickly become very popular.

In 2000, Betfair was launched, and revolutionized the industry: Betfair itself did not take customer bets, but rather allowed customers to place bets between themselves. This peer-to-peer betting quickly became very popular. In 2002, the first live betting was launched, offering bettors the opportunity to bet on a sporting event while it was taking place. Today, on the larger sites, all kinds of sports are available, whether collective (football, basketball) or individual (tennis, boxing), with possibly a competition involving more than two players or teams (athletics, cycling). The player can choose an objective, which can be a final score (1X2 in football), a number of goals scored, etc., then he concludes the bet by choosing the amount he is willing to wager (the stake). On all sites, no less than 20,000 bets are possible every day.

Decker, Wolfgang & Thuillier, Jean-Paul (2004). Le sport dans l’antiquité. Picard.

Dietschy, Paul & Clastres, Patrick (2006). Sport, société et culture en France du XIXe siècle à nos jours. Hachette, Carré Histoire.

Forrest, David (1999). The Past and Future of the British Football Pools. Journal of Gambling Studies, 15:2, 161-176.

Forrest, David & Pérez, Levi (2013). The Football Pools, in The Oxford Handbook of the Economics of Gambling, 147-162.

Vigarello, Georges (2004) Le sport est-il encore un jeu ? Sciences Humaines, no 152.

To be continued…. with a post on how bets, predictions and players’ beliefs are linked.

What is the interpretation of the diagonal of a ROC curve?

Last Friday, we discussed the use of ROC curves to describe the goodness of a classifier. I did say that I would post a brief paragraph on the interpretation of the diagonal. If you look around, some say that it describes the “strategy of randomly guessing a class“, that it is obtained with “a diagnostic test that is no better than chance level“, or even by “making a prediction by tossing an unbiased coin“.

Let us get back to ROC curves to illustrate those points. Consider a very simple dataset with 10 observations (that is not linearly separable)

x1 = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
x2 = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
y = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x1,x2=x2,y=as.factor(y))

here we can check that, indeed, it is not separable

plot(x1,x2,col=c("red","blue")[1+y],pch=19)

Consider a logistic regression (the course is on linear models)

reg = glm(y~x1+x2,data=df,family=binomial(link = "logit"))

but any model here can be used… We can use our own function

Y = df$y
S = predict(reg)
roc.curve = function(s, print=FALSE){
  Ps = (S >= s)*1
  FP = sum((Ps == 1)*(Y == 0))/sum(Y == 0)
  TP = sum((Ps == 1)*(Y == 1))/sum(Y == 1)
  if(print == TRUE){ print(table(Observed=Y, Predicted=Ps)) }
  vect = c(FP, TP)
  names(vect) = c("FPR", "TPR")
  return(vect)
}

or any R package actually

library(ROCR)

perf=performance(prediction(S,Y),"tpr","fpr")

We can plot the two simultaneously here

plot(performance(prediction(S,Y),"tpr","fpr"))
V = Vectorize(roc.curve)(seq(-5,5,length=251))
points(V[1,], V[2,])
segments(0,0,1,1, col="light blue")

So our code works just fine, here. Let us consider various strategies that should lead us to the diagonal.

The first one is : everyone has the same probability (say 50%)

S = rep(.5,10)
plot(performance(prediction(S,Y),"tpr","fpr"))
V = Vectorize(roc.curve)(seq(0,1,length=251))
points(V[1,], V[2,])

Indeed, we have the diagonal. But to be honest, we have only two points here : (0,0) and (1,1). Claiming that we have a straight line is not very satisfying… Actually, note that we have this situation whatever the probability we choose

S = rep(.2,10)
plot(performance(prediction(S,Y),"tpr","fpr"))
V = Vectorize(roc.curve)(seq(0,1,length=251))
points(V[1,], V[2,])

We can try another strategy, like “making a prediction by tossing an unbiased coin“. This is what we obtain

set.seed(1)
S = sample(0:1, size=10, replace=TRUE)
plot(performance(prediction(S,Y),"tpr","fpr"))
V = Vectorize(roc.curve)(seq(0,1,length=251))
points(V[1,], V[2,])
segments(0,0,1,1, col="light blue")

We can also try some sort of “random classifier”, where we choose the score randomly, say uniform on the unit interval

set.seed(1)
S = runif(10)
plot(performance(prediction(S,Y),"tpr","fpr"))
V = Vectorize(roc.curve)(seq(0,1,length=251))
points(V[1,], V[2,])
segments(0,0,1,1, col="light blue")

Let us try to go further on that one. For convenience, let us consider another function to plot the ROC curve

V=Vectorize(roc.curve)(seq(0,1,length=251))

roc_curve=Vectorize(function(x) max(V[2,which(V[1,]<=x)]))

We have the same line as previously

x = seq(0,1,by=.025)
y = roc_curve(x)
lines(x, y, type="s", col="red")

But now, consider many scoring strategies, all randomly chosen

MY = matrix(NA, 500, length(y))
for(i in 1:500){
  S = runif(10)
  V = Vectorize(roc.curve)(seq(0,1,length=251))
  MY[i,] = roc_curve(x)
}
plot(performance(prediction(S,df$y),"tpr","fpr"), col="white")
for(i in 1:500){ lines(x, MY[i,], col=rgb(0,0,1,.3), type="s") }
lines(c(0,x), c(0,apply(MY,2,mean)), col="red", type="s", lwd=3)
segments(0,0,1,1, col="light blue")

The red line is the average of all random classifiers. It is not a straight line, but we observe oscillations around the diagonal.

Consider a dataset with more observations


myocarde = read.table("http://freakonometrics.free.fr/myocarde.csv",head=TRUE, sep=";")

myocarde$PRONO = (myocarde$PRONO=="SURVIE")*1

reg = glm(PRONO~.,data=myocarde,family=binomial(link = "logit"))

Y = myocarde$PRONO
S = predict(reg)
plot(performance(prediction(S,Y),"tpr","fpr"))
V = Vectorize(roc.curve)(seq(-5,5,length=251))
points(V[1,], V[2,])
segments(0,0,1,1, col="light blue")

Here is a “random classifier” where we draw scores randomly on the unit interval

S = runif(nrow(myocarde))
plot(performance(prediction(S,Y),"tpr","fpr"))
V = Vectorize(roc.curve)(seq(-5,5,length=251))
points(V[1,], V[2,])
segments(0,0,1,1, col="light blue")

And if we do that 500 times, we obtain, on average

MY = matrix(NA, 500, length(y))
for(i in 1:500){
  S = runif(length(Y))
  V = Vectorize(roc.curve)(seq(0,1,length=251))
  MY[i,] = roc_curve(x)
}
plot(performance(prediction(S,Y),"tpr","fpr"), col="white")
for(i in 1:500){ lines(x, MY[i,], col=rgb(0,0,1,.3), type="s") }
lines(c(0,x), c(0,apply(MY,2,mean)), col="red", type="s", lwd=3)
segments(0,0,1,1, col="light blue")

So, it looks like we might say that the diagonal is what we obtain, on average, when drawing scores randomly on the unit interval…

I did mention that an interesting visual tool could be related to the use of the Kolmogorov-Smirnov statistic on classifiers. We can plot the two empirical cumulative distribution functions of the scores, given the response Y.

score=data.frame(yobs=Y,
                 ypred=predict(reg,type="response"))

f0 = c(0, sort(score$ypred[score$yobs==0]), 1)
f1 = c(0, sort(score$ypred[score$yobs==1]), 1)
plot(f0, (0:(length(f0)-1))/(length(f0)-1), col="red", type="s", lwd=2, xlim=0:1)
lines(f1, (0:(length(f1)-1))/(length(f1)-1), col="blue", type="s", lwd=2)
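To make the link with the Kolmogorov-Smirnov statistic explicit, here is a small sketch (ours, for illustration): the statistic is simply the maximal vertical gap between those two empirical cumulative distribution functions (ks.test may warn about ties if some predicted probabilities coincide).

ks.test(score$ypred[score$yobs==0], score$ypred[score$yobs==1])$statistic
F0 = ecdf(score$ypred[score$yobs==0])
F1 = ecdf(score$ypred[score$yobs==1])
u  = seq(0, 1, by=.001)
max(abs(F0(u) - F1(u)))     # essentially the same quantity, computed on a grid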

we can also look at the distribution of the score, with the histogram (or density estimates)

S = score$ypred
hist(S[Y==0], col=rgb(1,0,0,.2),
     probability=TRUE, breaks=(0:10)/10, border="white")
hist(S[Y==1], col=rgb(0,0,1,.2),
     probability=TRUE, breaks=(0:10)/10, border="white", add=TRUE)
lines(density(S[Y==0]), col="red", lwd=2, xlim=c(0,1))
lines(density(S[Y==1]), col="blue", lwd=2)

The underlying idea is the following: we have a “perfect classifier” (the ROC curve reaches the top left corner) if the supports of the two conditional score distributions do not overlap. Otherwise, we should have classification errors. That is the case below, where in 10% of the cases we might have misclassification, or even more misclassification when the supports overlap further. And we obtain the diagonal when the two conditional distributions of the scores are identical. Of course, this is only valid when n is very large; otherwise, it is only what we observe on average…

On the poor performance of classifiers in insurance models

Each time we have a case study in my actuarial courses (with real data), students are surprised to have a hard time getting a “good” model, and they are always surprised to get a low AUC when trying to model the probability of claiming a loss, of dying, of committing fraud, etc. And each time, I keep saying, “yes, I know, and that’s what we expect, because there is a lot of ‘randomness’ in insurance”. To be more specific, I decided to run some simulations, and to compute AUCs to see what’s going on. And because I don’t want to waste time fitting models, we will assume each time that we have a perfect model. So I want to show that the upper bound of the AUC is actually quite low! So it’s not a modeling issue, it is a fundamental issue in insurance!

By ‘perfect model’ I mean the following : \Omega denotes the heterogeneity factor, because people are different. We would love to get \mathbb{P}[Y=1|\Omega]. Unfortunately, \Omega  is unobservable ! So we use covariates (like the age of the driver of the car in motor insurance, or of the policyholder in life insurance, etc). Thus, we have data (y_i,\boldsymbol{x}_i)‘s and we use them to train a model, in order to approximate \mathbb{P}[Y=1|\boldsymbol{X}]. And then, we check if our model is good (or not) using the ROC curve, obtained from confusion matrices, comparing y_i‘s and \widehat{y}_i‘s where \widehat{y}_i=1 when \mathbb{P}[Y_i=1|\boldsymbol{x}_i] exceeds a given threshold. Here, I will not try to construct models. I will predict \widehat{y}_i=1 each time the true underlying probability \mathbb{P}[Y_i=1|\omega_i] exceeds a threshold ! The point is that it’s possible to claim a loss (y=1) even if the probability is 3% (and most of the time \widehat{y}=0), and to not claim one (y=0) even if the probability is 97% (and most of the time \widehat{y}=1). That’s the idea with randomness, right ?

So, here p(\omega_1),\cdots,p(\omega_n) denote the probabilities to claim a loss, to die, to commit fraud, etc. There is heterogeneity here, and this heterogeneity can be small, or large. Consider the graph below, to illustrate,

In both cases, there is, on average, a 25% chance of claiming a loss. But on the left, there is more heterogeneity, more dispersion. To illustrate, I used the arrow, which is a classical 90% interval: 90% of the individuals have a probability of claiming a loss in that interval (here 10%-40%), 5% are below 10% (low risk), and 5% are above 40% (high risk). Later on, we will say that we have 25% on average, with a dispersion of 30% (40% minus 10%). On the right, it's rather 25% on average, with a dispersion of 15%. What I call dispersion is the difference between the 95% and the 5% quantiles.

Consider now some dataset, with Bernoulli variables y, drawn with those probabilities p(\omega). Then, let us assume that we are able to get a perfect model : I do not estimate a model based on some covariates, here, I assume that I know perfectly the probability (which is true, because I did generate those data). More specifically, to generate a vector of probabilities, here I use a Beta distribution with a given mean, and a given variance (to capture the heterogeneity I mentioned above)

m = .25    # average probability to claim a loss (illustrative value)
v = .01    # variance of the Beta distribution (illustrative value)
n = 1000   # number of insured (illustrative value)
a = m*(m*(1-m)/v-1)
b = (1-m)*(m*(1-m)/v-1)
p = rbeta(n,a,b)
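As a quick sanity check (a sketch, using the illustrative values of m, v and n above), the 90% interval used to describe the dispersion can be read off the Beta quantiles, or off the simulated probabilities:

qbeta(c(.05,.95), a, b)     # theoretical 5% and 95% quantiles of the Beta distribution
quantile(p, c(.05,.95))     # empirical counterpart, on the simulated probabilities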

from those probabilities, I generate occurrences of claims, or deaths,

Y=rbinom(n,size = 1,prob = p)

Then, I compute the AUC of my “perfect” model,

auc.tmp=performance(prediction(p,Y),"auc")

And then, I will generate many samples, to compute the average value of the AUC. And actually, we can do that for many values of the mean and the variance of the Beta distribution. Here is the code

library(ROCR)
n=1000
ns=200
ab_beta = function(m,inter){
  a=uniroot(function(a) qbeta(.95,a,a/m-a)-qbeta(.05,a,a/m-a)-inter,
            interval=c(.0000001,1000000))$root
  b=a/m-a
  return(c(a,b))
}
Sim_AUC_mean_inter=function(m=.5,i=.05){
  V_auc=rep(NA,ns)
  b=-1
  essai = try(ab <- ab_beta(m,i), TRUE)
  if(inherits(essai, what="try-error")) a=-1
  if(!inherits(essai, what="try-error")){
    a=ab[1]
    b=ab[2]
  }
  if((a>=0)&(b>=0)){
    for(s in 1:ns){
      p=rbeta(n,a,b)
      Y=rbinom(n,size = 1,prob = p)
      auc.tmp=performance(prediction(p,Y),"auc")
      V_auc[s]=as.numeric(auc.tmp@y.values)}
    L=list(moy_beta=m,
           inter=i,
           q05=qbeta(.05,a,b),
           q95=qbeta(.95,a,b),
           moy_AUC=mean(V_auc),
           sd_AUC=sd(V_auc),
           q05_AUC=quantile(V_auc,.05),
           q95_AUC=quantile(V_auc,.95))
    return(L)}
  if((a<0)|(b<0)){return(list(moy_AUC=NA))}}
Vm=seq(.025,.975,by=.025)
Vi=seq(.01,.5,by=.01)
V=outer(X = Vm,Y = Vi, Vectorize(function(x,y) 
Sim_AUC_mean_inter(x,y)$moy_AUC))
library("RColorBrewer")
image(Vm,Vi,V,
      xlab="Probability (Average)",
      ylab="Dispersion (Q95-Q5)",
      col=
        colorRampPalette(brewer.pal(n = 9, name = "YlGn"))(101))
contour(Vm,Vi,V,add=TRUE,lwd=2)

On the x-axis, we have the average probability to claim a loss. Of course, there is a symmetry here. And on the y-axis, we have the dispersion: the lower, the less heterogeneity in the portfolio. For instance, with a 30% chance to claim a loss on average, and a 20% dispersion (meaning that in the portfolio, 90% of the insured have between a 20% and a 40% chance to claim a loss, or between 15% and 35%), we have on average a 60% AUC, with a perfect model! So with only a few covariates, having 55% should be great!

My point here is that with a low dispersion, we cannot expect to have a great AUC (again, even with a perfect model). In motor insurance, from my experience, 90% of the insured have between a 3% and a 20% chance of claiming a loss! That's less than 20% dispersion! And in that case, even if the (average) probability is rather small, it is very difficult to expect an AUC above 60% or 65%!
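To make that claim concrete, here is a quick check of one scenario (a sketch reusing the ab_beta function defined above, with assumed values: a 10% average claim probability and a 17% dispersion, close to the 3%-20% range just mentioned).

set.seed(123)
n  = 1000
ab = ab_beta(m = .10, inter = .17)     # ab_beta as defined in the code above
p  = rbeta(n, ab[1], ab[2])            # true individual probabilities
Y  = rbinom(n, size = 1, prob = p)     # observed claims
as.numeric(performance(prediction(p, Y), "auc")@y.values)
# even this "perfect" model yields a rather low AUC (around .65)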

Variance decomposition and price segmentation in Insurance

Today, I was giving a talk at the Economics department, and I got a very interesting question about some tables I keep showing to explain why insurance companies like segmentation. The tables illustrate three different cases. Here, S stands for the individual (random) loss.

  • the first one is the case where the premium asked is the same for all the insured – i.e. the pure premium \mathbb{E}[S]

As explained, the loss is here on an individual basis, so, per policy, the insurer faces the (random) loss S-\mathbb{E}[S], which is, on average, null. That's the second line. For the last line, I keep saying that we look at the overall loss of the insurer, but that's not correct. Here, with a factor n, we would have the variance of the total loss for the insurance company; we just removed the n factor in the table.

  • then, we have perfectly observable heterogeneity : insured have a risk factor \Omega, observable, and in that case, the ‘pure’ premium is \mathbb{E}[S|\Omega]

That’s what we have below. Here again, on average, the insured should have a null profit. And the total variance (which was \text{Var}[S] in our previous example) is now split into two parts (that’s basically Pythagoras’ theorem).

The interpretation is the following

And then, I usually mention the third and last case, more realistic

  • the risk factor \Omega is not observable, but segmentation is still possible using some proxy of the risk factor, obtained using some covariates, and the ‘pure’ premium is \mathbb{E}[S|\boldsymbol{X}]

And here also, there is a nice interpretation, because of the variance decomposition : there is one part that we observed previously, with some ‘perfect pricing’ and an additional part (that is positive) that is related to the fact that the covariates are just a proxy of the risk factor….

The term on the left is then a lower bound, attained if, using the covariates available for pricing, we could actually recover the risk factor.

That was my story, but the fact that n (the portfolio size) was not mentioned in the tables was a bit confusing… So I decided to create some graphs to illustrate those three cases

  • same premium for everyone

Consider some simple simulations. On the graph on the left, we have on the x-axis the risk factor, and on the y-axis, the loss (going roughly from 0 to 20). The pure premium is the average of those losses. Here, it’s 10. That’s the plain red line (on the left). In the middle, the y-axis is the insured profit/loss per policy. Someone with a loss close to 0 means a gain of 10, someone with a loss close to 20 means a loss of 10. On average, there is no profit (that’s the plain line). And then, on the right, we have the distribution of the profit/loss (per contract). Again, on average it’s 0, with some variance,

  • premium based on covariates

Consider here a simple covariate x : assume that we’ve been able to create a binary variable that can distinguish the low risks from the high risks. Here, there are two levels for the premium. The low premium is close to 6, and the high one is close to 14. That’s again the graph on the left

Then we have the profit/loss per policy for the insured, in the middle. Here, when the loss was close to 0, the gain is smaller: it is 6 (while it was 10 before). When the loss was close to 10, previously it meant a 0 profit, but now it's either a loss of 4, or a gain of 4. The profit/loss distribution is now on the right. There is less dispersion, and less variance. That is the decrease of variance we've discussed before. To summarize, segmentation does reduce the variability of the result for the insurance company. That's what we observe on the right.

  • premium based on the risk factor

Assume now that \Omega is observable. And that we use it for our pricing. The premium is now continuous, and it is the red line, on the left. The profit/loss (in the middle) is the difference between the loss, and its expected value (conditional on the risk factor). And on the right, we have the distribution.

As expected, there is much less variability in the profit/loss distribution of the insurance company in that case. And actually, that's a lower bound for the variance of the result of the insurance company… I hope those graphs clarify what's going on here…
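To make those three cases concrete, here is a minimal simulation sketch (all names and distributional choices are ours, not the code behind the graphs above): \Omega is the latent risk factor, X a binary proxy, and S an individual loss with \mathbb{E}[S|\Omega]=10\Omega. The three variances at the end correspond to the three profit/loss distributions discussed above.

set.seed(1)
n     = 1e5
Omega = runif(n)                                 # latent (unobservable) risk factor
X     = (Omega > .5)*1                           # binary proxy (low / high risk)
S     = rgamma(n, shape = 2, scale = 5*Omega)    # individual loss, E[S|Omega] = 10*Omega
pi_flat  = mean(S)         # same premium for everyone, E[S]
pi_proxy = ave(S, X)       # premium based on the covariate, E[S|X]
pi_omega = 10*Omega        # premium based on the risk factor, E[S|Omega]
c(flat  = var(S - pi_flat),
  proxy = var(S - pi_proxy),
  omega = var(S - pi_omega))
# the variance of the insurer's result (per policy) decreases as the premium
# gets closer to E[S|Omega], which is the lower bound mentioned above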

Variance of the slope in a regression model

In my “applied linear models” exam, there was a tricky question (it was a multiple choice, so no details were asked). I was simply asking if the following statement was valid, or not

Consider a linear regression with one single covariate, y=\beta_0+\beta_1x_1+\varepsilon, and the least-squares estimates. The variance of the slope is \text{Var}[\widehat{\beta}_1]. Do we decrease this variance if we add one variable, and consider y=\beta_0+\beta_1x_1+\beta_2x_2+\varepsilon?

For the exam, the expected answer was simply “no”. In a nutshell, there are two cases where we should expect different changes,

  • if x_1 and x_2 are highly correlated, then we should expect the variance to increase
  • if x_1 and x_2 are not correlated, then we should expect the variance to decrease (a classical formula, recalled below, gives some insight)
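As a partial reasoning step (a standard textbook result, recalled here under the usual assumptions, not the full comparison): in the model with both covariates, \text{Var}[\widehat{\beta}_1]=\frac{\sigma^2}{(1-r^2)\sum_{i=1}^n(x_{1,i}-\bar{x}_1)^2}, where r is the empirical correlation between x_1 and x_2 (the factor (1-r^2)^{-1} is the variance inflation factor). In the fit with x_1 only, on the other hand, the omitted x_2 ends up in the error term and inflates the residual variance. The ratio of the two standard errors combines both effects, which is what the simulations below explore.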

We briefly observed (and discussed) those points on examples during the lecture… but I wanted to go a bit further, since I couldn't find any analytical results for this comparison. Let us generate a model y=\beta_0+\beta_1x_1+\beta_2x_2+\varepsilon, and then compare the variance \text{Var}[\widehat{\beta}_1] in the two fitted models, depending on the correlation between x_1 and x_2

library(mnormt)
n=200
s=function(r=0){
  S=matrix(c(1,r,r,1),2,2)                 # correlation matrix of (x1,x2)
  X=rmnorm(n,c(0,0),S)
  B=data.frame(y=-2+X[,1]+X[,2]+rnorm(n)/2,
               x1=X[,1],
               x2=X[,2])
  reg12=lm(y~x1+x2,data=B)
  reg1=lm(y~x1,data=B)
  # ratio of the standard errors of the slope of x1, with and without x2
  k=summary(reg12)$coefficients[2,2]/summary(reg1)$coefficients[2,2]
  k}

Let us generate 500 samples for each value of the correlation, from -0.9 to +0.9

M=NULL
for(r in ((-9):9)/10) M=cbind(M,Vectorize(s)(rep(r,500)))

and let us plot the ratio of the two variances

plot(0:1,0:1,xlim=c(-1,1),ylim=c(0,2),col="white")
for(i in 1:19) points(rep((((-9):9)/10)[i],500),M[,i],col="light blue")
VM=apply(M,2,mean)
lines((((-9):9)/10),VM,col="red",lwd=2)
abline(h=1,lty=2)

If the ratio exceeds 1, the variance increases when adding a covariate.

Indeed, here, when the two variables are independent, the standard error of the slope is (roughly) divided by two. But when covariates are highly correlated, it is (roughly) multiplied by two…

Now, what if, actually, x_2 is not a real explanatory variable : the true model we generate is y=\beta_0+\beta_1x_1+\varepsilon. In that case,

s=function(r=0){
S=matrix(c(1,r,r,1),2,2)
X=rmnorm(n,c(0,0),S)
B=data.frame(y=-2+X[,1]+rnorm(n)/2,
x1=X[,1],
x2=X[,2])
reg12=lm(y~x1+x2,data=B)
reg1=lm(y~x1,data=B)
k=summary(reg12)$coefficients[2,2]/summary(reg1)$coefficients[2,2]
k}

we get our samples as previously

M=NULL
for(r in ((-9):9)/10) M=cbind(M,Vectorize(s)(rep(r,500)))

and we plot those ratios

plot(0:1,0:1,xlim=c(-1,1),ylim=c(0,2),col="white")
for(i in 1:19) points(rep((((-9):9)/10)[i],500),M[,i],col="light blue")
VM=apply(M,2,mean)
lines((((-9):9)/10),VM,col="red",lwd=2)
abline(h=1,lty=2)

So if we add a useless variable x_2, then whatever its correlation with x_1, it will always, on average, increase the variance of \widehat{\beta}_1.

Annual UCSB InsurTech Summit

Just a quick post to mention that an Insurtech Summit will be organized in May 2019, on Friday 3rd, by Mike Ludkovski, and I will be there, with Francois Millard (Vitality Group), Adam Tashman (Carpe Data, Santa Barbara), Emil Valdez (University of Connecticut), and Howard Zail (Elucidor, LLC, New York City). That will be nice… I will actually also give a talk on the Monday before at the actuarial seminar !

Random thoughts on econometric models with (pure) random features

For my lectures on applied linear models, I wanted to illustrate the fact that the R^2 is never a good measure of the goodness of the model, since it’s quite easy to improve it. Consider the following dataset

n=100
df=data.frame(matrix(rnorm(n*n),n,n))
names(df)=c("Y",paste("X",1:99,sep=""))

with one variable of interest y, and 99 features x_j. All of them being (by construction) independent. And we have 100 observations… Consider here the regression on the first k features, and compute R_k^2 of that regression

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  summary(model)$r.squared}

Let us see what’s going on…

plot(1:99,Vectorize(reg)(1:99))

(actually, it’s not exactly what we have on the graph…. we have the average obtained over 1,000 samples randomly generated, with 90% confidence bands). Observe that \mathbb{E}[R^2_k]=k/n, i.e. if we add some pure random noise, we keep increasing the R^2 (up to 1, actually).

Good news, as we’ve seen in the course, the adjusted R^2 – denoted \bar R^2 – might help. Observe that \mathbb{E}[\bar R^2_k]=0, so, in some sense, adding features does not help here…

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  summary(model)$adj.r.squared}
plot(1:99,Vectorize(reg)(1:99))

We can actually do the same with Akaike criteria AIC_k and Schwarz (bayesian) criteria BIC_k.

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  AIC(model)}
plot(1:99,Vectorize(reg)(1:99))

For the AIC, the initial increase makes sense : we should not prefer the model with 10 covariates, compared with nothing. The strange thing is the far right behavior : we prefer here 80 random noise features to none ! Which I find hard to interpret… For the BIC the code is simply

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  BIC(model)}
plot(1:99,Vectorize(reg)(1:99))

and here also, we have the same pattern, where we prefer a big model with just pure noise to nothing…

A last one to conclude (or not) : what about the leave-one-out cross validation mean squared error ? More precisely, CV=\frac{1}{n}\sum_{i=1}^n\widehat{\varepsilon}^2_{-i}, where \widehat{\varepsilon}_{-i}=y_i-\widehat{y}_{-i}, and where \widehat{y}_{-i} is the predicted value obtained when the model is estimated with the ith observation deleted. One can prove that \widehat{\beta}_{-i}=\widehat{\beta}-(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{x}_i\hat\varepsilon_i(1-H_{i,i})^{-1}, where H is the classical hat matrix, and thus \widehat{\varepsilon}_{-i}=(1-H_{i,i})^{-1}\hat\varepsilon_i, i.e. we do not have to estimate n models (one per deleted observation)

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  h=lm.influence(model)$hat            # leverages H_{i,i}
  mean( (residuals(model)/(1-h))^2 )}
plot(1:99,Vectorize(reg)(1:99))

Here, it makes sense : adding noisy features yields overfitting, and the (leave-one-out) mean squared error keeps increasing !

That’s all nice, but it might not be very realistic… Here, for my model with only one variable, I just pick one, at random…. In practice, we try to get the “best one”… So a more natural idea would be to order the variables according to their correlations with y,

df=data.frame(matrix(rnorm(n*n),n,n))
df=df[,rev(order(abs(cor(df)[1,])))]    # order the features by decreasing correlation with Y
names(df)=c("Y",paste("X",1:99,sep=""))

and as before, we can plot the evolution of R^2_k as a function of k the number of features considered,

which is increasing, with a higher slope at the beginning… For the \bar R^2_k we might actually prefer a correlated noise to nothing (which makes sense actually). So here since we somehow chose our variables, \bar R^2_k seems to be always positive…

For the AIC_k, here also, there is an initial improvement, before coming back to the original situation (with about 80 features); and here also, we observe the drop on the far right part of the graph

The BIC_k might like the top three features, but soon, we have a deterioration…. even if here also, we have the drop at the far right (with more than 95 features… for 100 observations).

Finally, observe that here again, our (leave-one-out) cross-validation has not been misled by our noisy variables : its mean squared error keeps increasing as features are added !

So it seems that cross-validation techniques are more robust than AIC and BIC (even if we mentioned, in a previous post, connections between all those concepts) when we have a lot of noisy (non-relevant) features.

Do risk classes go beyond stereotypes?

Generalization, stereotypes and clichés

In Thinking, Fast and Slow, Daniel Kahneman discusses at length the importance of stereotypes in understanding many decision-making processes. A so-called System 1 is used for quick decision-making: it allows us to recognize people and objects, helps us focus our attention, and encourages us to fear spiders. It is based on knowledge stored in memory and accessible without intention, and without effort. It can be contrasted with System 2, which allows for more complex decision-making, requiring discipline and sequential reflection. In the first case, our brain uses the stereotypes that govern judgments of representativeness, and uses this heuristic to make decisions. If I cook a fish for friends who have come to eat, I open a bottle of white wine. The cliché “fish goes well with white wine” allows me to make a decision quickly, without having to think about it. Stereotypes are statements about a group that are accepted (at least provisionally) as facts about each member. Whether correct or not, stereotypes are the basic tools for thinking about categories in System 1. But in many cases, a more in-depth, more sophisticated reflection – corresponding to System 2 – will make it possible to make a more judicious, even optimal decision. Rather than just any red wine, a pinot noir could perhaps also be perfectly suitable for roasted red mullet.

“To generalize is to be an idiot, to particularize is the alone distinction of merit” wrote William Blake around 1800, annotating speeches by the painter Joshua Reynolds. Stigmatizing an entire population because of a minority in a decision-making process is a misleading generalization, often punished by society. A moral punishment, but sometimes also a legal one (when hiring, for example), in a society that aims to be civilized and asks us not to draw erroneous conclusions about an individual from the statistics of a group to which he is attached. But isn’t that what the actuary does every day?

The usual suspects

For Schauer (2009), this “generalization“, condemned by William Blake, is probably the actuary’s raison d’être: “to be an actuary is to be a specialist in generalization, and actuaries engage in a form of decision-making that is sometimes called actuarial“. If I decide to insure a sports car, I am given the risky driving characteristics that probably belong to the majority of sports car owners, attributes that I may not share. And as we noted in the introduction, insurance companies, of course, are not the only ones that operate actuarially, according to Schauer’s definition. We all do it, much more often than most of us would probably recognize. We do this when we choose airlines based on their safety record, punctuality or lost luggage. We do this when we associate personal characteristics (a visible tattoo, black or brightly coloured clothing) with behavioural characteristics (such as a propensity for violence) that these personal characteristics would seem to indicate. And we operate in this way when we engage in stereotypes that may be harmless on the basis of nationality, for example by saying that French people are rude, or that all Scots wear kilts, while at the same time acknowledging that more pernicious stereotypes on the basis of ethnic origin, gender or sexual orientation are far too widespread today! As the misconception of the word “prejudice” indicates, many people believe that it is unfair to make individual decisions based on non-universal group characteristics, even if group allocations have a solid statistical basis. The big difference between actuarial science and everyday life is that actuaries have to use a large number of observations. On a personal level, I can thus decide not to travel with a given airline anymore because I have had two bad experiences on three trips. Before deciding that travel insurance deserves a higher premium when flying with this company, it takes more than three observations!

In fact, the question is often whether an insurance company’s refusal to provide coverage, or to increase the premiums it charges for the same coverage, is an injustice when it is based on an actuarially justified (but perhaps not universal) generalization. As Leemens (2000) noted, the question was asked of the legislator when insurers observed that Jewish women from Eastern Europe were particularly vulnerable to breast and ovarian cancer. At the end of 2012, the European Court of Justice put an end to all discrimination based on the gender of policyholders: insurers were no longer able to differentiate between insurance product prices according to whether the member was male or female. But the use of age is still allowed. Indeed, age is often an indicator of a possible decrease in vision or hearing, slower reaction time (and increased risk of sudden disability), etc. And although there are many individual variations, the available data provide important empirical justification.

Machines, causality, and stereotypes

A major criticism of machine learning models is the lack of interpretation. But very often, the validation of econometric models requires a narrative built around stereotypes. And this narrative is essential, as Pearl & Mackenzie (2018) remind us. Indeed, in the “Ladder of Causation“, there are three levels. At the first level, we find the notion of association (or correlation), or even conditional probability, which serves as a basis for the constitution of stereotypes: if we observe

P[caries | brushing your teeth] < P[caries | not brushing your teeth]

brushing teeth will be associated with a decrease in the probability of having caries. It is also the basis for regression methods, which are based on correlations between the variable of interest and others, wrongly called explanatory. In Figure 1, we can see the daily cycling traffic in Helsinki, and the average temperature. We will tend to prefer the one on the left, showing the evolution of the number of cyclists as a function of temperature, suggesting that temperature could explain the number of cyclists, and not the other way around. But the stereotype doesn’t necessarily focus on the causal link: if I see a lot of cyclists passing by the window, I’ll tell myself it must be hot, or at least warm.

Figure 1: Näytä Data – Author’s visualization

The first level answers the question “what if I see…?“ (e.g. “what cycling traffic to expect if the temperature reaches 20°C?“) and this task can be perfectly accomplished by a machine. The second level is the one that makes it possible to understand an effect, an intervention. The question is then “what if I do…?“. To use our example, we are trying to understand the importance of brushing our teeth on the appearance of cavities. What if brushing your teeth is more natural for children with good teeth? We see the third level of the scale coming up, asking the question “what if I had done…?“, based on the idea of a counterfactual model. We are no longer content to measure correlations, we will build a model explaining what would happen by making a change in the causal variables: what would really happen if the child who did not brush his teeth began to do so? For Pearl & Mackenzie (2018), a human being (maybe even an actuary) can make these more advanced arguments, which a machine cannot (yet) do. And very often, these causal patterns are stereotyped. As Charpentier & Diago Barry (2015) point out, in epidemiology, researchers have long questioned the explanation to be given to the fact that small babies of smokers have a higher probability of survival than small babies of non-smoking mothers. The intuition that something is wrong comes from prejudices, stereotypes that we have, and that a machine cannot have.

When actuaries tell each other stories

As Antonio & Charpentier (2017) noted, the European “gender directive” has confused many insurers who used gender to construct their rates, as gender was highly correlated with the frequency of claims. But once telematic data were introduced, gender was no longer significant in the regression. Gender had long been used as a proxy to capture an effect that can now be observed directly using telematic data, giving rise to many sexist and other stereotypes.

But the stories also make it possible to decide between a false correlation (“spurious correlation“) and a correlation that could be interpreted. In Figure 2, we have life expectancy at birth, a variable that we could try to explain in a pension study context, for example, by French department. On the right, two variables taken at random: the number of licenses of a tennis club, and the number of advertising agencies. Stereotypes are what will allow us to construct a causal graph, allowing us to understand why there is such a strong correlation between these variables and life expectancy.

Figure 2: Life expectancy at birth for men, left. At the centre, number of tennis licenses per 100,000 inhabitants (source FFT). On the right, number of advertising agencies per 100,000 inhabitants (source INSEE, code NAF 7311Z). Visualization of the author.

Hyper-individualization as an answer?

While William Blake condemned stereotypes by saying “to generalize is to be an idiot“, he also clearly went further, continuing with “to particularize is the alone distinction of merit“. This individualisation is also advocated by more and more insurers, and even desired by many insureds. But as Grace & Terry (2002) pointed out, many policyholders suffer from a significant optimism bias – “if I have an accident, it will not be my fault” – leading them to doubt the insurer’s classification – “I’m less risky than the others“. And morality seems to prove them right, against actuaries. Yet, not only is generality not, in general, unjust, but justice itself can have considerable elements of generality. To the extent that justice is centred on equity and to the extent that equity itself is closely linked to equality, then equity, and therefore justice, can now be seen as itself based on the idea of generality. The just society is not necessarily a society in which each individual is treated as an isolated set of unique attributes, requiring individualized attention. On the contrary, in some cases, the just society is a society in which generality is not only unavoidable, but also necessary for justice itself. And pooling risks together is the natural response in an insurance context. And it might not be such a big deal if that class is not as homogeneous as it could be, or as we would have expected it to be…

Antonio, K. & Charpentier, A. (2017).  La tarification par genre en assurance, corrélation ou causalité ?. Risques. 110 : 107-110.

Charpentier, A. & Diago Barry, A. (2015). Big data : passer d’une analyse de corrélation à une interprétation causale. Risques, 101: 107-111.

Grace, J. & Terry, M. (2002). Exploring the Causes of Comparative Optimism. Psychologica Belgica. 42: 65–98

Kahneman, D. (2011). Thinking, Fast and Slow. FSG Eds.

Leemens, T. (2000). Selective Justice, Genetic Discrimination, and Insurance: Should We Single Out Genes in Our Laws? McGill law journal. Revue de droit de McGill 45(2):347-412.

Pearl, J. & Mackenzie, D. (2018). The Book of Why: The New Science of Cause and Effect. Basic Books.

Schauer, F.F. (2009). Profiles, Probabilities, and Stereotypes. Harvard University Press.

Foundations of Machine Learning, part 5

This post is the ninth (and probably last) one of our series on the history and foundations of econometric and machine learning models. The first four were on econometrics techniques. Part 8 is online here.

Optimization and algorithmic aspects

In econometrics, (numerical) optimization became omnipresent as soon as we left the Gaussian model. We briefly mentioned it in the section on the exponential family, and the use of the Fisher score (gradient descent) to solve the first order condition \mathbf{X}^T W(\beta)^{-1}[\mathbf{y}-\widehat{\mathbf{y}}]=\mathbf{0}. In learning, optimization is the central tool. And it is necessary to have effective optimization algorithms, to solve problems (described previously) of the form: \widehat{\beta}\in\underset{\beta\in\mathbb{R}^p}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,\beta_0+\mathbf{x}_i^T\beta)+\lambda\Vert\boldsymbol{\beta}\Vert\right\rbrace In some cases, instead of global optimization, it is sufficient to consider optimization coordinate by coordinate (widely studied in Daubechies et al. (2004)). If f:\mathbb{R}^d\rightarrow\mathbb{R} is convex and differentiable, and if \mathbf{x} satisfies f(\mathbf{x}+h\boldsymbol{e}_i)\geq f(\mathbf{x}) for any h and any i\in\{1,\cdots, d\}, then f(\mathbf{x})=\min\{f\}, where \mathbf{e}=(\mathbf{e}_i) is the canonical basis of \mathbb{R}^d. However, this property is no longer true in the non-differentiable case. But if we assume that the non-differentiable part is separable (additively), it becomes true again. More specifically, if f(\mathbf{x})=g(\mathbf{x})+\sum_{i=1}^d h_i(x_i) with \left\lbrace\begin{array}{l}g: \mathbb{R}^d\rightarrow\mathbb{R}\text{ convex-differentiable}\\h_i: \mathbb{R}\rightarrow\mathbb{R}\text{ convex}\end{array}\right. This was the case for the Lasso regression, \beta\mapsto\| \mathbf{y}-\beta_0-\mathbf{X}\beta\|_{\ell_2}+\lambda\|\beta\|_{\ell_1}, as shown by Tseng (2001). Getting back to our initial notations, we can use a coordinate descent algorithm: from an initial value \mathbf{x}^{(0)}, we consider (by iterating) x_j^{(k)}\in\underset{x_j}{\text{argmin}}\big\lbrace f(x_1^{(k)},\cdots,x_{j-1}^{(k)},x_j,x_{j+1}^{(k-1)},\cdots,x_d^{(k-1)})\big\rbrace for j=1,2,\cdots,d. These algorithmic problems and numerical issues may seem secondary to econometricians. However, they are essential in machine learning: a technique is interesting if there is a stable and fast algorithm, which allows to obtain a solution. These optimization techniques can be transposed: for example, this coordinate descent technique can be used in the case of SVM methods (“support vector machine” methods) when the space is not linearly separable, and the classification error must be penalized (we will come back to this technique in the next section).
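As an illustration of that coordinate descent idea on the lasso objective, here is a minimal sketch (our own toy code, with our own function names, not an optimized implementation), for the objective \frac{1}{2}\|\mathbf{y}-\mathbf{X}\beta\|_{\ell_2}^2+\lambda\|\beta\|_{\ell_1}, assuming the columns of \mathbf{X} and the vector \mathbf{y} have been centred (no intercept): each coordinate update is a soft-thresholding of a univariate least-squares coefficient.

soft = function(z, g) sign(z) * pmax(abs(z) - g, 0)   # soft-thresholding operator
lasso_cd = function(X, y, lambda, n_iter = 100){
  p = ncol(X)
  b = rep(0, p)
  for(k in 1:n_iter){
    for(j in 1:p){
      r    = y - X[, -j, drop = FALSE] %*% b[-j]      # partial residual, x_j left out
      b[j] = soft(sum(X[, j] * r), lambda) / sum(X[, j]^2)
    }
  }
  b
}
# for comparison, glmnet scales its penalty by 1/n, so
# glmnet(X, y, alpha = 1, lambda = lambda/nrow(X), intercept = FALSE,
#        standardize = FALSE) should give similar coefficients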

In-sample, out-of-sample and cross-validation

These techniques seem intellectually interesting, but we have not yet discussed the choice of the penalty parameter \lambda. But this problem is actually more general, because comparing two parameters \widehat{\beta}_{\lambda_1} and \widehat{\beta}_{\lambda_2} is actually comparing two models. In particular, if we use a Lasso method, with different thresholds \lambda, we compare models that do not have the same dimension. Previously, we have addressed the problem of model comparison from an econometric perspective (by penalizing overly complex models). In the learning literature, judging the quality of a model on the data used to construct it does not make it possible to know how the model will behave on new data. This is the so-called “generalization” problem. The traditional approach then consists in separating the sample (size n) into two parts: a part that will be used to train the model (the training database, in-sample, size m) and a part that will be used to test the model (the testing database, out-of-sample, size n-m). The latter then makes it possible to measure a real predictive risk. Suppose that the data are generated by a linear model y_i=\mathbf{x}_i^T \beta_0+\varepsilon_i where the \varepsilon_i are realizations of independent and centred variables. The empirical quadratic in-sample risk is here \frac{1}{m}\sum_{i=1}^m\mathbb{E}\big([\mathbf{x}_i^T \widehat{\beta}-\mathbf{x}_i^T \beta_0]^2\big)=\mathbb{E}\big([\mathbf{x}_i^T \widehat{\beta}-\mathbf{x}_i^T \beta_0]^2\big), for any observation i. Assuming the residuals \varepsilon are Gaussian, we can show that this risk is worth \sigma^2 \text{trace}(\Pi_X)/m, i.e. \sigma^2 p/m. On the other hand, the empirical out-of-sample quadratic risk is here \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big) where \mathbf{x} is a new observation, independent of the others. It can be noted that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big\vert \mathbf{x}\big)=\text{Var}\big(\mathbf{x}^T \widehat{\beta}\big\vert \mathbf{x}\big)=\sigma^2\mathbf{x}^T(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{x}, and by integrating with respect to \mathbf{x}, \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T\beta_0]^2\big)=\sigma^2\text{trace}\big(\mathbb{E}[\mathbf{x}\mathbf{x}^T]\mathbb{E}\big[(\mathbf{X}^T\mathbf{X})^{-1}\big]\big). The expression is then different from that obtained in-sample, and using the bound of Groves & Rothenberg (1969), we can show that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big) \geq \sigma^2\frac{p}{m}, which is pretty intuitive, when we start thinking about it. Except in some simple cases, there is no simple (explicit) formula. Note, however, that if \mathbf{X}\sim\mathcal{N}(0,\sigma^2 \mathbb{I}), then \mathbf{X}^T \mathbf{X} follows a Wishart distribution, and it can be shown that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big)=\sigma^2\frac{p}{m-p-1}. If we now look at the empirical version: if \widehat{\beta} is estimated on the first m observations, \widehat{\mathcal{R}}^{\text{ IS}}=\sum_{i=1}^m [y_i-\boldsymbol{x}_i^T\widehat{\boldsymbol{\beta}}]^2\text{ and }\widehat{\mathcal{R}}^{\text{ OS}}=\sum_{i=m+1}^{n} [y_i-\boldsymbol{x}_i^T\widehat{\boldsymbol{\beta}}]^2 and, as Leeb (2008) noted, \widehat{\mathcal{R}}^{\text{IS}}-\widehat{\mathcal{R}}^{\text{OS}}\approx 2\cdot\nu where \nu represents the number of degrees of freedom, which is not unlike the penalty used in the Akaike criterion.

Figure 4 shows the respective evolution of \widehat{\mathcal{R}}^{\text{IS}} and \widehat{\mathcal{R}}^{\text{OS}} according to the complexity of the model (the degree in a polynomial regression, the number of knots in splines, etc). The more complex the model, the more \widehat{\mathcal{R}}^{\text{IS}} will decrease (this is the red curve, below). But that’s not what we’re interested in here: we want a model that predicts well on new data (i.e. out-of-sample). As Figure 4 shows, if the model is too simple, it does not predict well (neither on the in-sample data, nor out-of-sample). But what we can see is that if the model is too complex, we are in a situation of “overfitting”: the model starts to model the noise. Of course, this figure should remind us of the one we’ve seen in our second post of that series

Figure 4 : Generalization, under- and over-fitting
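A minimal sketch of that in-sample / out-of-sample comparison (ours, not the code behind Figure 4), with polynomial regressions of increasing degree on simulated data:

set.seed(1)
n = 200; m = 100
x = runif(n)
y = sin(2*pi*x) + rnorm(n)/4
train = 1:m; test = (m+1):n
risk = function(k){
  fit = lm(y ~ poly(x, k), subset = train)
  c(IS = mean((y[train] - fitted(fit))^2),
    OS = mean((y[test] - predict(fit, newdata = data.frame(x = x[test])))^2))
}
R = sapply(1:15, risk)
matplot(1:15, t(R), type = "l", lty = 1, lwd = 2, col = c("red", "blue"),
        xlab = "complexity (polynomial degree)", ylab = "empirical quadratic risk")
legend("topleft", c("in-sample", "out-of-sample"), col = c("red", "blue"), lty = 1)
# the in-sample risk keeps decreasing with the complexity, while the
# out-of-sample risk eventually increases (overfitting)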

Instead of splitting the database in two, with some of the data used to calibrate the model and some to study its performance, it is also possible to use cross-validation. To present the general idea, we can go back to the “jackknife”, introduced by Quenouille (1949) (and formalized by Quenouille (1956) and Tukey (1958)), commonly used in statistics to reduce bias. Indeed, if we assume that \{y_1,\cdots,y_n\} is a sample drawn according to a law F_\theta, and that we have an estimator T_n(\mathbf{y})=T_n(y_1,\cdots,y_n), but that this estimator is biased, with \mathbb{E}[T_n(\mathbf{Y})]=\theta+O(n^{-1}), it is possible to reduce the bias by considering \widetilde{T}_n(\mathbf{y})=\frac{1}{n}\sum_{i=1}^n T_{n-1}(\mathbf{y}_{(i)})\text{ where }\mathbf{y}_{(i)}=(y_1,\cdots,y_{i-1},y_{i+1},\cdots,y_n). It can then be shown that \mathbb{E}[\widetilde{T}_n(\mathbf{Y})]=\theta+O(n^{-2}). The idea of cross-validation is based on the same idea of building an estimator by removing one observation. Since we want to build a predictive model, we will compare the forecast obtained with the estimated model and the removed observation: \widehat{\mathcal{R}}^{\text{ CV}}=\frac{1}{n}\sum_{i=1}^n \ell(y_i,\widehat{m}_{(i)}(\mathbf{x}_i)). We will speak here of the “leave-one-out” (loocv) method.

This technique reminds us of the traditional method used to find the optimal parameter in exponential smoothing methods for time series. In simple smoothing, we will construct a forecast from a time series as {}_t\widehat{y}_{t+1} =\alpha\cdot{}_{t-1}\widehat{y}_t +(1-\alpha)\cdot y_t, where \alpha\in[0,1], and we will consider as “optimal” \alpha^\star = \underset{\alpha\in[0,1]}{\text{argmin}}\left\lbrace \sum_{t=2}^T \ell({}_{t-1}\widehat{y}_{t},y_{t}) \right\rbraceas described by Hyndman et al (2009).

The main problem with the leave-one-out method is that it requires calibration of n models, which can be problematic in large dimensions. An alternative method is cross validation by k-blocks (called “k-fold cross validation”) which consists in using a partition of \{1,\cdots,n\} in k groups (or blocks) of the same size, \mathcal{I}_1,\cdots,\mathcal{I}_k, and let us note \mathcal{I}_{\bar j}=\{1,\cdots,n\}\setminus \mathcal{I}_j. By noting \widehat{m}_{(j)} built on the sample \mathcal{I}_{\bar j}, we then set:\widehat{\mathcal{R}}^{k-\text{ CV}}=\frac{1}{k}\sum_{j=1}^k \mathcal{R}_j\text{ where }\mathcal{R}_j=\frac{k}{n}\sum_{i\in\mathcal{I}_{{j}}} \ell(y_i,\widehat{m}_{(j)}(\mathbf{x}_i))Standard cross-validation, where only one observation is removed each time (loocv), is a special case, with k=n. Using k=5 or 10 has a double advantage over k=n: (1) the number of estimates to be made is much smaller, 5 or 10 rather than n; (2) the samples used for estimation are less similar and therefore less correlated to each other, which tends to avoid excess variance, as recalled by James et al. (2013).
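Here is a minimal sketch of such a k-fold cross-validation for a linear model (our own function, with a quadratic loss; names and defaults are ours):

cv_kfold = function(formula, data, k = 10){
  n    = nrow(data)
  fold = sample(rep(1:k, length.out = n))          # random partition into k blocks
  R    = rep(NA, k)
  for(j in 1:k){
    fit  = lm(formula, data = data[fold != j, ])   # model trained without block j
    test = data[fold == j, ]
    R[j] = mean((test[[all.vars(formula)[1]]] - predict(fit, newdata = test))^2)
  }
  mean(R)
}
# e.g. cv_kfold(dist ~ speed, cars, k = 5); taking k = nrow(cars) gives the loocv error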

Another alternative is to use bootstrap samples. Let \mathcal{I}_b be a sample of size n obtained by drawing with replacement in \{1,\cdots,n\}, to know which observations (y_i,\mathbf{x}_i) will be kept in the learning population (at each draw). Note \mathcal{I}_{\bar b}=\{1,\cdots,n\}\setminus\mathcal{I}_b. By noting \widehat{m}_{(b)} the model built on sample \mathcal{I}_b, we then set :\widehat{\mathcal{R}}^{\text{ B}}=\frac{1}{B}\sum_{b=1}^B \mathcal{R}_b\text{ where }\mathcal{R}_b=\frac{n_{\overline{b}}}{n}\sum_{i\in\mathcal{I}_{\overline{b}}} \ell(y_i,\widehat{m}_{(b)}(\mathbf{x}_i))where n_{\bar b} is the number of observations that have not been kept in \mathcal{I}_b. It should be noted that with this technique, on average e^{-1}\sim36.8\% of the observations do not appear in the bootstrap sample, and we find an order of magnitude of the proportions used when creating a calibration sample, and a test sample. In fact, as Stone (1977) had shown, the minimization of AIC is to be compared to the cross-validation criterion, and Shao (1997) showed that the minimization of BIC corresponds to k-fold cross-validation, with k=n/\log n.
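A one-line check of that e^{-1} order of magnitude (a sketch; the sample size is arbitrary):

n = 1e5
mean( !( 1:n %in% sample(1:n, size = n, replace = TRUE) ) )
# proportion of observations left out of a bootstrap sample, about 0.368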

All the techniques mentioned here belong to the "machine learning" section since they rely on automatic, computational procedures, and no probabilistic foundations are necessary. In many cases we used the notation m^\star (at least in the first posts on "machine learning" techniques) to highlight the fact that we want some sort of "optimal" model, and to make a distinction with the estimators \widehat{m} considered earlier, when we had a probabilistic framework. But of course, it is possible (and necessary) to build bridges between those two cultures…

References are online here. As explained in the introduction, this is some sort of online version of an introduction to our joint paper with Emmanuel Flachaire and Antoine Ly, Econometrics and Machine Learning (initially written in French), that will appear soon in the journal Economics and Statistics (in English and in French).

Foundations of Machine Learning, part 4

This post is the eighth one of our series on the history and foundations of econometric and machine learning models. The first four were on econometric techniques. Part 7 is online here.

Penalization and variable selection

One important concept in econometrics is Ockham’s razor – also known as the law of parsimony (lex parsimoniae) – which can be related to abductive reasoning.

Akaike’s criterion was based on a penalization of the likelihood taking into account the complexity of the model (the number of explanatory variables retained). If, in econometrics, it is customary to maximize the likelihood (to build an asymptotically unbiased estimator) and to judge the quality of the model ex-post by penalizing the likelihood, the strategy here will be to penalize ex-ante, in the objective function, even if it means building a biased estimator. Typically, we will build (\widehat{\beta}_{0,\lambda},\widehat{\beta}_{\lambda})=\text{argmin}\left\lbrace\sum_{i=1}^n \ell(y_i,\beta_0+\mathbf{x}^T\beta)+\lambda \text{ penalization}( \boldsymbol{\beta})\right\rbrace, ~~~(11) where the penalty function will often be a norm \|\cdot\| chosen a priori, and \lambda a penalty parameter (we find, in a way, the distinction between AIC and BIC if the penalty function is the complexity of the model, i.e. the number of explanatory variables retained). In the case of the \ell_2 norm, we obtain the ridge estimator, and for the \ell_1 norm, the lasso estimator ("Least Absolute Shrinkage and Selection Operator"). The penalty previously used involved the number of degrees of freedom of the model, so it may seem surprising to use \|\beta\|_{\ell_2} as in ridge regression. However, we can consider a Bayesian view of this penalty. It should be recalled that in a Bayesian model \underbrace{\mathbb{P}[\boldsymbol{\theta}\vert\boldsymbol{y}]}_{\text{posterior}} \propto \underbrace{\mathbb{P}[\boldsymbol{y}\vert\boldsymbol{\theta}]}_{\text{likelihood}} \cdot \underbrace{\mathbb{P}[\boldsymbol{\theta}]}_{\text{prior}}, or \log\mathbb{P}[\boldsymbol{\theta}\vert\boldsymbol{y}]= \underbrace{\log \mathbb{P}[\boldsymbol{y}\vert\boldsymbol{\theta}]}_{\text{log likelihood}} + \underbrace{\log\mathbb{P}[\boldsymbol{\theta}]}_{\text{penalty}}. In a Gaussian linear model, if we assume that the prior distribution of \theta is a centred Gaussian distribution, we obtain a penalty based on a quadratic form of the components of \theta.
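For illustration purposes only, equation (11) with an \ell_2 or \ell_1 penalty and a quadratic loss can be fitted with scikit-learn, whose alpha argument plays the role of \lambda (up to the scaling conventions of each objective); the data below are simulated and the penalty values are arbitrary:

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(4)
n, p = 100, 10
X = rng.normal(size=(n, p))
beta = np.array([3.0, -2.0] + [0.0] * (p - 2))   # only two relevant covariates
y = X @ beta + rng.normal(size=n)

# quadratic loss with an l2 penalty (ridge) and an l1 penalty (lasso)
ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)
print(ridge.coef_)   # all coefficients shrunk, none exactly zero
print(lasso.coef_)   # most irrelevant coefficients are (typically) set exactly to zero
```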

Before going back in detail to these two estimators, obtained using the \ell_1 or \ell_2 norm, let us return for a moment to a very similar problem: the best choice of explanatory variables. Classically (and this will be even more true in large dimension), we can have a large number of explanatory variables, p, but many are just noise, in the sense that \beta_j=0 for a large number of j. Let s be the number of (really) relevant covariates, s=\#S, with S=\{j=1,\cdots,p:\beta_j\neq 0\}. If we denote by \mathbf{X}_S the matrix composed of the relevant variables (in columns), then we assume that the true model is of the form y=\mathbf{x}_S^T \beta_S+\varepsilon. Intuitively, an interesting estimator would then be \widehat{\beta}_S=[\mathbf{X}_S^T \mathbf{X}_S ]^{-1} \mathbf{X}_S^T \mathbf{y}, but this estimator is only theoretical because the set S is unknown here. It can actually be seen as the oracle estimator mentioned above. One may then be tempted to solve (\widehat{\beta}_{0,s},\widehat{\beta}_{s})=\underset{\beta_S\in\mathbb{R}^s}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,\beta_0+\mathbf{x}^T\beta_S)\right\rbrace,\text{ s.t. } \# {S}=s. This problem was introduced by Foster & George (1994) using the \ell_0 notation. More precisely, let us define here the following three norms, where \mathbf{a}\in\mathbb{R}^d, \Vert\boldsymbol{a} \Vert_{\ell_0}=\sum_{i=1}^d \mathbf{1}(a_i\neq 0), ~~ \Vert\mathbf{a} \Vert_{\ell_1}=\sum_{i=1}^d |a_i|~~\text{ and }~~\Vert\mathbf{a} \Vert_{\ell_2}=\left(\sum_{i=1}^d a_i^2\right)^{1/2}.
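To see why the constrained problem is combinatorial, here is a naive sketch (ours, on simulated data with arbitrary dimensions) that enumerates all subsets of size s and keeps the one with the smallest residual sum of squares, i.e. the least squares estimator computed on each candidate set:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
n, p, s = 60, 6, 2
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[[0, 3]] = [2.0, -1.5]     # here S = {0, 3}, so s = 2
y = X @ beta + rng.normal(size=n)

# exhaustive search over all subsets of size s: C(p, s) least-squares fits,
# feasible only for small p, which is precisely the combinatorial issue
best_rss, best_S = np.inf, None
for S in combinations(range(p), s):
    XS = X[:, S]
    beta_S, *_ = np.linalg.lstsq(XS, y, rcond=None)
    rss = np.sum((y - XS @ beta_S) ** 2)
    if rss < best_rss:
        best_rss, best_S = rss, S
print(best_S)   # usually recovers (0, 3) on this simulated example
```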

Table 1: Constrained optimization and regularization.

Let us consider the optimization problems in Table 1. If we consider the classical problem where the quadratic norm is used for \ell, the two problems of equation (\ell1) of Table 1 are equivalent, in the sense that, for any solution (\beta^\star,s) to the left problem, there is a \lambda^\star such that (\beta^\star,\lambda^\star) is a solution of the right problem, and vice versa. The result also holds for problems (\ell2). These are indeed convex problems. On the other hand, the two problems (\ell0) are not equivalent: if for (\beta^\star,\lambda^\star), solution of the right problem, there is an s^\star such that \beta^\star is a solution of the left problem, the converse is not true. More generally, if we want to use an \ell_p norm, sparsity is obtained if p\leq 1, whereas p\geq1 is needed for the convexity of the optimization program.

One may be tempted to solve the penalized program (\ell0) directly, as suggested by Foster & George (1994). Numerically, it is a complex combinatorial problem in large dimension (Natarajan (1995) notes that it is an NP-hard problem), but it is possible to show that if \lambda\sim\sigma^2 \log(p), then \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big) \leq \underbrace{\mathbb{E}\big([\mathbf{x}_{ {S}}^T\widehat{\beta}_{{S}}-\mathbf{x}^T \beta_0]^2\big)}_{=\sigma^2 \#{S}}\cdot \big(4\log p+2+o(1)\big). Observe that in this case \widehat{\beta}_{\lambda,j}^{\text{sub}} = \left\lbrace\begin{array}{l}0 \text{ if } j\notin{S}_\lambda(\beta)\\ \widehat{\beta}_{j}^{\text{ols}} \text{ if } j\in{S}_\lambda(\beta),\end{array}\right. where S_\lambda (\beta) refers to the set of non-zero coordinates obtained when solving (\ell0).

The problem (\ell2) is strictly convex if \ell is the quadratic norm; in other words, the ridge estimator is always well defined, with, in addition, an explicit form, \widehat{ {\beta}}_\lambda^{\text{ ridge}}=(\mathbf{X}^T\mathbf{X}+\lambda\mathbb{I})^{-1}\mathbf{X}^T\mathbf{y}=(\mathbf{X}^T\mathbf{X}+\lambda\mathbb{I})^{-1}(\mathbf{X}^T\mathbf{X})\widehat{ {\beta}}^{\text{ ols}}. Therefore, it can be deduced that \text{bias}[\widehat{ {\beta}}_\lambda^{\text{ ridge}}]=-\lambda[\mathbf{X}^T\mathbf{X}+\lambda\mathbb{I}]^{-1}~\widehat{ {\beta}}^{\text{ ols}} and \text{Var}[\widehat{\beta}_\lambda^{\text{ ridge}}]=\sigma^2[\mathbf{X}^T\mathbf{X}+\lambda\mathbb{I}]^{-1}\mathbf{X}^T\mathbf{X}[\mathbf{X}^T\mathbf{X}+\lambda\mathbb{I}]^{-1}. With a matrix of orthonormal explanatory variables (i.e. \mathbf{X}^T \mathbf{X}=\mathbb{I}), the expressions simplify to \text{bias}[\widehat{ {\beta}}_\lambda^{\text{ ridge}}]=-\frac{\lambda}{1+\lambda}~\widehat{ {\beta}}^{\text{ ols}}\text{ and }\text{Var}[\widehat{ {\beta}}_\lambda^{\text{ ridge}}]=\frac{\sigma^2}{(1+\lambda)^2}\mathbb{I}. Observe that \text{Var}[\widehat{ {\beta}}_\lambda^{\text{ ridge}}]<\text{Var}[\widehat{ {\beta}}^{\text{ ols}}]. And since \text{mse}[\widehat{ {\beta}}_\lambda^{\text{ ridge}}]=\frac{p\sigma^2}{(1+\lambda)^2}+\frac{\lambda^2}{(1+\lambda)^2}\beta^T\beta, we obtain an optimal value for \lambda: \lambda^\star=p\sigma^2/\beta^T\beta.
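The closed form of the ridge estimator can be checked numerically; the sketch below (our illustration, with simulated data and arbitrary values of \lambda and of the dimensions) also shows the shrinkage of the coefficients towards zero:

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 100, 5
X = rng.normal(size=(n, p))
beta = np.array([1.0, -1.0, 0.5, 0.0, 2.0])
y = X @ beta + rng.normal(size=n)

lam = 5.0
XtX = X.T @ X
beta_ols = np.linalg.solve(XtX, X.T @ y)
# explicit ridge formula: (X'X + lambda I)^{-1} X'y
beta_ridge = np.linalg.solve(XtX + lam * np.eye(p), X.T @ y)
# equivalently, a shrinkage of the OLS estimator
beta_ridge_bis = np.linalg.solve(XtX + lam * np.eye(p), XtX @ beta_ols)
print(np.allclose(beta_ridge, beta_ridge_bis))                 # True: the two forms agree
print(np.linalg.norm(beta_ridge) < np.linalg.norm(beta_ols))   # shrinkage towards zero
```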

On the other hand, if \ell is no longer the quadratic norm but the \ell_1 norm, the problem (\ell1) is not always strictly convex, and in particular, the optimum is not always unique (for example if \mathbf{X}^T \mathbf{X} is singular). But if it is strictly convex, then the predictions \mathbf{X}\beta will be unique. It should also be noted that any two solutions are necessarily consistent in terms of the sign of the coefficients: it is not possible to have \beta_j<0 for one solution and \beta_j>0 for another. From a heuristic point of view, the program (\ell1) is interesting because it allows to obtain, in many cases, a corner solution, which corresponds to solving a problem of type (\ell0), as shown visually in Figure 2.

Figure 2: Penalization based on norms \ell_0, \ell_1 and \ell_2 (from Hastie et al. (2016)).

Let us consider a very simple model: y_i=x_i \beta+\varepsilon, with an \ell_1 penalty and a quadratic (\ell_2) loss function. The problem (\ell1) then becomes \min\big\{\mathbf{y}^T\mathbf{y}-2\mathbf{y}^T\mathbf{x}\beta+\beta\mathbf{x}^T\mathbf{x}\beta+2\lambda|\beta|\big\}. The first order condition is then -2\mathbf{y}^T\mathbf{x} + 2\mathbf{x}^T\mathbf{x}\widehat{\beta}\pm 2\lambda=0, and the sign of the last term depends on the sign of \beta. Suppose that the least squares estimator (obtained by setting \lambda=0) is (strictly) positive, i.e. \mathbf{y}^T \mathbf{x}>0. If \lambda is not too large, we can expect \beta to be of the same sign as \widehat{\beta}^{\text{ols}}, and the condition then becomes -2\mathbf{y}^T \mathbf{x}+2\mathbf{x}^T \mathbf{x}\beta+2\lambda=0, whose solution is \widehat{\beta}_{\lambda}^{\text{ lasso}}=\frac{\mathbf{y}^T\mathbf{x}-\lambda}{\mathbf{x}^T\mathbf{x}}. By increasing \lambda, there will be a point at which \widehat{\beta}_\lambda=0. If we increase \lambda a little more, \widehat{\beta}_\lambda does not become negative, because in that case the sign of the last term of the first order condition changes, and we then try to solve -2\mathbf{y}^T\mathbf{x} + 2\mathbf{x}^T\mathbf{x}\widehat{\beta}- 2\lambda=0, whose solution is \widehat{\beta}_{\lambda}^{\text{ lasso}}=\frac{\mathbf{y}^T\mathbf{x}+\lambda}{\mathbf{x}^T\mathbf{x}}. But this solution is positive (we assumed \mathbf{y}^T \mathbf{x}>0), so it is not possible to have \widehat{\beta}_\lambda <0 at the same time. Hence, beyond a certain value of \lambda, \widehat{\beta}_\lambda=0, which is then a corner solution. Things are of course more complicated in larger dimensions (Tibshirani & Wasserman (2016) go back at length on the geometry of the solutions) but, as Candès & Plan (2009) note, under minimal assumptions guaranteeing that the predictors are not strongly correlated, the lasso obtains a quadratic error almost as good as if we had an oracle providing perfect information on the set of \beta_j‘s that are not zero. With some additional technical assumptions, it can be shown that this estimator is "sparsistent", in the sense that the support of \widehat{\beta}_\lambda^{\text{lasso}} is that of \beta; in other words, the lasso makes it possible to select variables (more discussions on this point can be found in Hastie et al. (2016)).
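The corner solution can be seen on a one-dimensional simulated example; the sketch below (ours, with an arbitrary grid of values of \lambda) simply applies the soft-thresholding formula derived above:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100
x = rng.normal(size=n)
y = 1.5 * x + rng.normal(size=n)   # a single covariate, no intercept

yx, xx = y @ x, x @ x              # here y'x > 0, as assumed in the text
for lam in [0.0, 20.0, 50.0, 100.0, 200.0]:
    # one-dimensional lasso solution: soft thresholding of y'x at level lambda
    beta_lasso = np.sign(yx) * max(abs(yx) - lam, 0) / xx
    print(lam, beta_lasso)         # shrinks towards 0, then sticks at the corner solution 0
```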

More generally, it can be shown that \widehat{\beta}_\lambda^{\text{lasso}} is a biased estimator, but its variance may be sufficiently low for its mean squared error to be lower than that of least squares. To compare the three techniques, relative to the least squares estimator (obtained when \lambda=0), if we assume that the explanatory variables are orthonormal, then \widehat{\beta}_{\lambda,j}^{\text{ subset}}=\widehat{\beta}_{j}^{\text{ ols}}\boldsymbol{1}_{|\widehat{\beta}_{j}^{\text{ ols}}|>b}, ~~\widehat{\beta}_{\lambda,j}^{\text{ ridge}}=\frac{\widehat{\beta}_{j}^{\text{ ols}}}{1+\lambda}~~\text{ and }~~\widehat{\beta}_{\lambda,j}^{\text{ lasso}}=\text{sign}[\widehat{\beta}_{j}^{\text{ ols}}]\cdot(|\widehat{\beta}_{j}^{\text{ ols}}|-\lambda)_+.

Figure 3: Penalization based on norms \ell_0, \ell_1 and \ell_2 (from Hastie et al. (2016)).
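The three transformations of the least squares estimator in the orthonormal case (hard thresholding, proportional shrinkage, soft thresholding) can be written in a few lines; this is an illustration of ours, with arbitrary values of the threshold b and of \lambda:

```python
import numpy as np

def subset(b_ols, b):     # hard thresholding (best subset, orthonormal design)
    return b_ols * (np.abs(b_ols) > b)

def ridge(b_ols, lam):    # proportional shrinkage
    return b_ols / (1 + lam)

def lasso(b_ols, lam):    # soft thresholding
    return np.sign(b_ols) * np.maximum(np.abs(b_ols) - lam, 0)

b_ols = np.linspace(-3, 3, 7)   # a grid of OLS values
print(subset(b_ols, 1.0))
print(ridge(b_ols, 1.0))
print(lasso(b_ols, 1.0))
```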

To be continued with probably a final post this week (references are online here)…

Foundations of Machine Learning, part 3

This post is the seventh one of our series on the history and foundations of econometric and machine learning models. The first four were on econometric techniques. Part 6 is online here.

Boosting and sequential learning

As we have seen before, modelling here is based on solving an optimization problem, and solving the problem described by equation (6) is all the more complex as the functional space \mathcal{M} is large. The idea of boosting, as introduced by Schapire & Freund (2012), is to learn, slowly, from the errors of the model, in an iterative way. In the first step, we estimate a model m_1 for y, from \mathbf{X}, which will give an error \varepsilon_1. In the second step, we estimate a model m_2 for \varepsilon_1, from \mathbf{X}, which will give an error \varepsilon_2, etc. We will then retain as a model, after k iterations, m^{(k)}(\cdot)=\underbrace{m_1(\cdot)}_{\sim y}+\underbrace{m_2(\cdot)}_{\sim \varepsilon_1}+\underbrace{m_3(\cdot)}_{\sim \varepsilon_2}+\cdots+\underbrace{m_k(\cdot)}_{\sim \varepsilon_{k-1}}=m^{(k-1)}(\cdot)+m_k(\cdot).~~~(7) Here, the error \varepsilon is seen as the difference between y and the model m(\mathbf{x}), but it can also be seen as the gradient associated with the quadratic loss function. Formally, \varepsilon can be seen as \nabla\ell in a more general context (here we find an interpretation reminiscent of residuals in generalized linear models).

Equation (7) can be seen as a gradient descent, but written in a dual way. The problem will then be rewritten as an optimization problem: m^{(k)}=m^{(k-1)}+\underset{h\in\mathcal{H}}{\text{argmin}}\left\lbrace \sum_{i=1}^n \ell(\underbrace{y_i-m^{(k-1)}(\boldsymbol{x}_i)}_{\varepsilon_{k,i}},h(\boldsymbol{x}_i))\right\rbrace,~~~(8) where the trick is to consider a relatively simple space \mathcal{H} (we will speak of a "weak learner"). Classically, functions in \mathcal{H} are step functions (which will be found in classification and regression trees) called "stumps". To ensure that learning is indeed slow, it is not uncommon to use a shrinkage parameter, and instead of setting, for example, \varepsilon_1=y-m_1 (\mathbf{x}), we will set \varepsilon_1=y-\alpha\cdot m_1 (\mathbf{x}) with \alpha\in[0,1]. It should be noted that it is because a non-linear space is used for \mathcal{H}, and because learning is slow, that this algorithm works well. In the case of the Gaussian linear model, remember that the residuals \varepsilon=y-\mathbf{x}^T\beta are orthogonal to the explanatory variables \mathbf{X}, and it is then impossible to learn from our errors. The main difficulty is to stop in time, because after too many iterations, it is no longer the function m that is approximated, but the noise. This problem is called overfitting.
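A minimal sketch of this residual-fitting strategy (our own illustration, with simulated data, scikit-learn stumps as weak learners, and an arbitrary shrinkage parameter \alpha and number of iterations):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(8)
n = 300
x = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(x[:, 0]) + rng.normal(scale=0.3, size=n)

alpha, K = 0.1, 200          # shrinkage parameter and number of iterations
pred = np.zeros(n)           # m^{(0)} = 0 (one could also start from the mean of y)
for k in range(K):
    eps = y - pred                                   # current residuals
    stump = DecisionTreeRegressor(max_depth=1)       # a "weak learner": a stump
    stump.fit(x, eps)                                # fit the residuals, as in equation (8)
    pred += alpha * stump.predict(x)                 # slow learning via shrinkage
print(np.mean((y - pred) ** 2))                      # in-sample quadratic loss
```

With too many iterations (or no shrinkage), the fit starts reproducing the noise, which is the overfitting issue mentioned above.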

This presentation has the advantage of having a heuristic reminiscent of an econometric model, by iteratively modelling the residuals with a (very) simple model. But this is often not the presentation used in the learning literature, which places more emphasis on an optimization-algorithm heuristic (and gradient approximation). The function is learned iteratively, starting from a constant value, m^{(0)}=\underset{m\in\mathbb{R}}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,m)\right\rbrace, then we consider the following learning procedure m^{(k)}=m^{(k-1)}+{\underset{h\in {\mathcal {H}}}{\text{argmin}}}\sum _{i=1}^{n}\ell(y_{i},m^{(k-1)}(\mathbf{x}_{i})+h(\mathbf{x}_{i})),~~~(9) which can be written, if \mathcal{H} is a set of differentiable functions, m^{(k)}=m^{(k-1)}-\gamma_{k}\sum _{i=1}^{n}\nabla _{m^{(k-1)}}\ell(y_{i},m^{(k-1)}(\mathbf{x}_{i})), where \gamma _{k}=\underset{\gamma }{\text{argmin }}\sum _{i=1}^{n}\ell\left(y_{i},m^{(k-1)}( \mathbf{x}_{i})-\gamma \nabla _{m^{(k-1)}}\ell(y_{i},m^{(k-1)}( \mathbf{x}_{i}))\right). To better understand the relationship with the approach described above, at step k, pseudo-residuals are defined by setting r_{i,k}=-\left.\frac{\partial \ell(y_i,m(\mathbf{x}_i))}{\partial m(\mathbf{x}_i)}\right\vert_{m(\mathbf{x})=m^{(k-1)}( \mathbf{x})},\text{ for }i=1,\cdots,n. A simple model is then sought to explain these pseudo-residuals from the explanatory variables \mathbf{x}_i, i.e. r_{i,k}=h^\star(\mathbf{x}_i), where h^\star\in\mathcal{H}. In a second step, we look for an optimal multiplier by solving \gamma_k = \underset{\gamma\in\mathbb{R}}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,m^{(k-1)}( \mathbf{x}_i)+\gamma h^\star(\mathbf{x}_i))\right\rbrace, then update the model by setting m^{(k)}(\cdot)=m^{(k-1)}(\cdot)+\gamma_k h^\star (\cdot). More formally, we move from equation (8) (which clearly shows that we are building a model on the residuals) to equation (9) (which will then be translated as a gradient computation problem), noting that \ell(y,m+h)=\ell(y-m,h). Classically, the class \mathcal{H} of functions consists of regression trees. It is also possible to use a form of penalty by setting m^{(k)}(\cdot)=m^{(k-1)}(\cdot)+\nu\gamma_k h^\star (\cdot), with \nu\in(0,1). But let us dwell a little longer, in our next post, on the importance of penalization before discussing the numerical aspects of optimization.
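To illustrate the gradient point of view, here is a sketch (ours, not from the original paper) with the absolute loss: the pseudo-residuals are then the signs of the residuals, a crude grid search stands in for the line search giving \gamma_k, and a shrinkage parameter \nu is used in the update.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(9)
n = 300
x = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(x[:, 0]) + rng.standard_t(df=2, size=n) * 0.3   # heavy-tailed noise

nu, K = 0.1, 200
pred = np.full(n, np.median(y))          # m^{(0)}: the minimizer of the absolute loss
gammas = np.linspace(0, 2, 41)           # grid for the (crude) line search
for k in range(K):
    r = np.sign(y - pred)                                   # pseudo-residuals for the l1 loss
    h = DecisionTreeRegressor(max_depth=2).fit(x, r)        # weak learner fitted on them
    hx = h.predict(x)
    gamma_k = gammas[np.argmin([np.sum(np.abs(y - pred - g * hx)) for g in gammas])]
    pred += nu * gamma_k * hx                               # shrunk update, nu in (0,1)
print(np.mean(np.abs(y - pred)))                            # in-sample absolute loss
```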

To be continued (keep in mind that references are online here)…
