On the poor performance of classifiers in insurance models

Each time we have a case study in my actuarial courses (with real data), students are surprised to have a hard time getting a “good” model, and they are always surprised to get a low AUC when trying to model the probability to claim a loss, to die, to commit fraud, etc. And each time, I keep saying, “yes, I know, and that’s what we expect, because there is a lot of ‘randomness’ in insurance”. To be more specific, I decided to run some simulations, and to compute AUCs to see what’s going on. And because I don’t want to waste time fitting models, we will assume each time that we have a perfect model. So I want to show that the upper bound of the AUC is actually quite low ! So it’s not a modeling issue, it is a fundamental issue in insurance !

By ‘perfect model’ I mean the following : \Omega denotes the heterogeneity factor, because people are different. We would love to get \mathbb{P}[Y=1|\Omega]. Unfortunately, \Omega is unobservable ! So we use covariates (like the age of the driver of the car in motor insurance, or of the policyholder in life insurance, etc). Thus, we have data (y_i,\boldsymbol{x}_i)‘s and we use them to train a model, in order to approximate \mathbb{P}[Y=1|\boldsymbol{X}]. And then, we check if our model is good (or not) using the ROC curve, obtained from confusion matrices, comparing the y_i‘s and the \widehat{y}_i‘s, where \widehat{y}_i=1 when \mathbb{P}[Y_i=1|\boldsymbol{x}_i] exceeds a given threshold. Here, I will not try to construct models. I will predict \widehat{y}_i=1 each time the true underlying probability \mathbb{P}[Y_i=1|\omega_i] exceeds a threshold ! The point is that it’s possible to claim a loss (y=1) even if the probability is only 3% (in which case, most of the time, \widehat{y}=0), and not to claim one (y=0) even if the probability is 97% (in which case, most of the time, \widehat{y}=1). That’s the idea with randomness, right ?

So, here p(\omega_1),\cdots,p(\omega_n) denote the probabilities to claim a loss, to die, to commit fraud, etc. There is heterogeneity here, and this heterogeneity can be small, or large. Consider the graph below, to illustrate,

In both cases, there is, on average, a 25% chance to claim a loss. But on the left, there is more heterogeneity, more dispersion. To illustrate, I used the arrow, which is a classical 90% interval : 90% of the individuals have a probability to claim a loss in that interval (here 10%-40%), 5% are below 10% (low risks), and 5% are above 40% (high risks). Later on, we will say that we have 25% on average, with a dispersion of 30% (40% minus 10%). On the right, it’s rather 25% on average, with a dispersion of 15%. What I call dispersion is the difference between the 95% and the 5% quantiles.
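To make this concrete, a quick sketch (the Beta parameters below are illustrative, chosen so that the mean is 25%) : the dispersion is simply the distance between the two quantiles,

a=2; b=6                        # Beta parameters, with mean a/(a+b)=25% (illustrative values)
qbeta(.95,a,b)-qbeta(.05,a,b)   # dispersion, i.e. Q95-Q5 of the individual probabilities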

Consider now some dataset, with Bernoulli variables y, drawn with those probabilities p(\omega). Then, let us assume that we are able to get a perfect model : I do not estimate a model based on some covariates ; I assume, here, that I know the probability perfectly (which is true, because I did generate those data). More specifically, to generate a vector of probabilities, I use a Beta distribution with a given mean, and a given variance (to capture the heterogeneity mentioned above)

n=1000                 # number of policies
m=.25                  # mean of the Beta distribution (e.g. 25%)
v=.01                  # variance of the Beta distribution (illustrative value)
a=m*(m*(1-m)/v-1)      # shape parameters matching that mean and variance
b=(1-m)*(m*(1-m)/v-1)
p=rbeta(n,a,b)

from those probabilities, I generate occurrences of claims, or deaths,

Y=rbinom(n,size = 1,prob = p)

Then, I compute the AUC of my “perfect” model,

auc.tmp=performance(prediction(p,Y),"auc")   # prediction() and performance() are from the ROCR package, loaded below
as.numeric(auc.tmp@y.values)                 # extract the AUC value

And then, I will generate many samples, to compute the average value of the AUC. And actually, we can do that for many values of the mean and the variance of the Beta distribution. Here is the code

library(ROCR)
n=1000
ns=200
ab_beta = function(m,inter){   # Beta parameters (a,b) with mean m and Q95-Q5 equal to inter
  a=uniroot(function(a) qbeta(.95,a,a/m-a)-qbeta(.05,a,a/m-a)-inter,
            interval=c(.0000001,1000000))$root
  b=a/m-a
  return(c(a,b))
}
Sim_AUC_mean_inter=function(m=.5,i=.05){
  V_auc=rep(NA,ns)
  a=-1
  b=-1
  essai = try(ab<-ab_beta(m,i),TRUE)
  if(!inherits(essai,what="try-error")){
    a=ab[1]
    b=ab[2]
  }
  if((a>=0)&(b>=0)){
    for(s in 1:ns){
      p=rbeta(n,a,b)
      Y=rbinom(n,size = 1,prob = p)
      auc.tmp=performance(prediction(p,Y),"auc")
      V_auc[s]=as.numeric(auc.tmp@y.values)}
    L=list(moy_beta=m,
           inter_beta=i,
           q05=qbeta(.05,a,b),
           q95=qbeta(.95,a,b),
           moy_AUC=mean(V_auc),
           sd_AUC=sd(V_auc),
           q05_AUC=quantile(V_auc,.05),
           q95_AUC=quantile(V_auc,.95))
    return(L)}
  if((a<0)|(b<0)){return(list(moy_AUC=NA))}}
Vm=seq(.025,.975,by=.025)
Vi=seq(.01,.5,by=.01)
V=outer(X = Vm,Y = Vi, Vectorize(function(x,y) 
Sim_AUC_mean_inter(x,y)$moy_AUC))
library("RColorBrewer")
image(Vm,Vi,V,
      xlab="Probability (Average)",
      ylab="Dispersion (Q95-Q5)",
      col=
        colorRampPalette(brewer.pal(n = 9, name = "YlGn"))(101))
contour(Vm,Vi,V,add=TRUE,lwd=2)

On the x-axis, we have the average probability to claim a loss. Of course, there is a symmetry here. And on the y-axis, we have the dispersion : the lower, the less heterogeneity in the portfolio. For instance, with a 30% chance to claim a loss on average, and a 20% dispersion (meaning that, in the portfolio, 90% of the insured have between a 20% and a 40% chance to claim a loss, or between a 15% and a 35% chance), we have on average a 60% AUC. With a perfect model ! So with only a few covariates, getting 55% should already be considered great !

My point here is that with a low dispersion, we cannot expect to have a great AUC (again, even with a perfect model). In motor insurance, from my experience, 90% of the insured have between a 3% and a 20% chance to claim a loss ! That’s less than 20% dispersion ! And in that case, even if the (average) probability is rather small, it is very difficult to expect an AUC above 60% or 65% !
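To illustrate that last claim, we can call the function defined above directly, with those (roughly) motor-insurance-like values, a 10% average probability and a 17% dispersion,

Sim_AUC_mean_inter(m=.1,i=.17)$moy_AUC

which gives the average AUC of the “perfect” model in that case.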

Variance decomposition and price segmentation in Insurance

Today, I was giving a talk at the Economics department, and I got a very interesting question about some tables I keep showing to explain why insurance companies like segmentation. The tables illustrate three different cases. Here, S stands for the individual (random) loss.

  • the first one is the case where the premium asked is the same for all the insured – i.e. the pure premium \mathbb{E}[S]

As explained, the loss is here on an individual basis, so, per policy, the insurer faces the (random) loss S-\mathbb{E}[S], which is, on average, null. That’s the second line. For the last line, I keep saying that we look at the overall loss of the insurer, but that’s not correct. With a factor n, we would have the variance of the total loss for the insurance company. We just removed the n factor in the table.

  • then, we have perfectly observable heterogeneity : insured have a risk factor \Omega, observable, and in that case, the ‘pure’ premium is \mathbb{E}[S|\Omega]

That’s what we have below. Here again, on average, the insured should have a null profit. And the total variance (which was \text{Var}[S] in our previous example) is now split in two parts, \text{Var}[S]=\mathbb{E}[\text{Var}[S|\Omega]]+\text{Var}[\mathbb{E}[S|\Omega]] (that’s basically Pythagoras’ theorem).

The interpretation is the following

And then, I usually mention the third and last case, more realistic

  • the risk factor \Omega is not observable, but segmentation is still possible using some proxy of the risk factor, obtained using some covariates, and the ‘pure’ premium is \mathbb{E}[S|\boldsymbol{X}]

And here also, there is a nice interpretation, because of the variance decomposition : there is one part that we observed previously, with some ‘perfect pricing’, and an additional part (that is positive), related to the fact that the covariates are just a proxy of the risk factor…

The term on the left is then a lower bound, reached if, using the covariates available for pricing, we could actually recover the risk factor.

That was my story, but the fact that n (the portfolio size) was not mentioned in the tables was a bit confusing… So I decided to create some graphs to illustrate those three cases

  • same premium for everyone

Consider some simple simulations. On the graph on the left, we have, on the x-axis, the risk factor, and on the y-axis, the loss (going roughly from 0 to 20). The pure premium is the average of those losses ; here, it’s 10. That’s the plain red line (on the left). In the middle, the y-axis is the insured’s profit/loss per policy. Someone with a loss close to 0 means a gain of 10, someone with a loss close to 20 means a loss of 10. On average, there is no profit (that’s the plain line). And then, on the right, we have the distribution of the profit/loss (per contract). Again, on average it’s 0, with some variance,

  • premium based on covariates

Consider here a simple covariate x : assume that we’ve been able to create a binary variable that can distinguish the low risks from the high risks. Here, there are two levels for the premium. The low premium is close to 6, and the high one is close to 14. That’s again the graph on the left.

Then we have the profit/loss per policy for the insured, in the middle. Here, when the loss was close to 0, the gain is smaller : it is 6 (while it was 10 before). When the loss was close to 10, previously, it meant a 0 profit, but now it’s either a loss of 4, or a gain of 4. The profit/loss distribution is now on the right. There is less dispersion, and less variance. That’s the decrease in variance we discussed before. To summarize, segmentation does reduce the variability of the result for the insurance company. That’s what we observe on the right.

  • premium based on the risk factor

Assume now that \Omega is observable. And that we use it for our pricing. The premium is now continuous, and it is the red line, on the left. The profit/loss (in the middle) is the difference between the loss, and its expected value (conditional on the risk factor). And on the right, we have the distribution.

As expected, there is much less variability in the profit/loss distribution of the insurance company in that case. And actually, that’s a lower bound for the variance of the result of the insurance company… I hope that the graphs clarify what’s going on here…
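To complement the graphs, here is a minimal numerical sketch of those three cases, with a (hypothetical) data generating process where the individual loss is Gamma distributed given \Omega, with \mathbb{E}[S|\Omega]=20\Omega, and where the binary covariate is a crude proxy of \Omega,

set.seed(1)
n=1e5
omega=runif(n)                     # unobservable risk factor
S=rgamma(n,shape=20*omega,scale=1) # individual loss, with E[S|omega]=20*omega
x=(omega>.5)                       # binary covariate, proxy of the risk factor
var(S-mean(S))                     # case 1 : flat premium E[S]
var(S-ave(S,x))                    # case 2 : premium E[S|X], two levels
var(S-20*omega)                    # case 3 : premium E[S|Omega]

and the three variances should be decreasing, the last one being the lower bound mentioned above.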

Variance of the slope in a regression model

In my “applied linear models” exam, there was a tricky question (it was a multiple choice exam, so no details were asked). I was simply asking if the following statement was valid, or not.

Consider a linear regression with one single covariate, y=\beta_0+\beta_1x_1+\varepsilon, and the least-square estimates. The variance of the slope is \text{Var}[\widehat{\beta}_1]. Do we decrease this variance if we add one variable, and consider y=\beta_0+\beta_1x_1+\beta_2x_2+\varepsilon ?

For the exam, the expected answer was simply “no”. In a nutshell, there are two cases where we should expect different changes,

  • if x_1 and x_2 are highly correlated, then we should expect the variance to increase
  • if x_1 and x_2 are not correlated, then we should expect the variance to decrease (a heuristic argument is sketched right after this list)
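One classical way to see those two effects (a heuristic sketch, not a formal proof) : in the model with both covariates, \text{Var}[\widehat{\beta}_1]\approx\frac{\sigma^2}{n\,\text{Var}[x_1]}\cdot\frac{1}{1-r^2}, where r is the correlation between x_1 and x_2. Adding a relevant x_2 decreases the residual variance \sigma^2, while the variance inflation factor (1-r^2)^{-1} increases with |r| ; which effect dominates depends on the correlation.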

We did briefly observe (and discuss) those points on examples during the lecture… but I wanted to go a bit further, since I couldn’t find any analytical results. Let us generate a model y=\beta_0+\beta_1x_1+\beta_2x_2+\varepsilon, and then compare the variance \text{Var}[\widehat{\beta}_1] in the two fitted models, depending on the correlation between x_1 and x_2

library(mnormt)
n=200
s=function(r=0){
  S=matrix(c(1,r,r,1),2,2)          # correlation r between x1 and x2
  X=rmnorm(n,c(0,0),S)
  B=data.frame(y=-2+X[,1]+X[,2]+rnorm(n)/2,
               x1=X[,1],
               x2=X[,2])
  reg12=lm(y~x1+x2,data=B)
  reg1=lm(y~x1,data=B)
  k=summary(reg12)$coefficients[2,2]/summary(reg1)$coefficients[2,2]
  k}                                # ratio of the estimated standard errors of the slope of x1

Let us generate 500 samples for each value of the correlation, from -0.9 to +0.9,

M=NULL
for(r in ((-9):9)/10) M=cbind(M,Vectorize(s)(rep(r,500)))

and let us plot the ratio of the two estimated standard errors of \widehat{\beta}_1

plot(0:1,0:1,xlim=c(-1,1),ylim=c(0,2),col="white")
for(i in 1:19) points(rep((((-9):9)/10)[i],500),M[,i],col="light blue")
VM=apply(M,2,mean)
lines((((-9):9)/10),VM,col="red",lwd=2)
abline(h=1,lty=2)

If the ratio exceeds 1, the variance increases when adding a covariate.

Indeed, here, when the two variables are independent, the variance is divided by two. But when covariates are highly correlated, the variance is multiplied by two…

Now, what if, actually, x_2 is not a real explanatory variable ? Say the true model we generate is y=\beta_0+\beta_1x_1+\varepsilon. In that case,

s=function(r=0){
  S=matrix(c(1,r,r,1),2,2)
  X=rmnorm(n,c(0,0),S)
  B=data.frame(y=-2+X[,1]+rnorm(n)/2,   # x2 no longer appears in the true model
               x1=X[,1],
               x2=X[,2])
  reg12=lm(y~x1+x2,data=B)
  reg1=lm(y~x1,data=B)
  k=summary(reg12)$coefficients[2,2]/summary(reg1)$coefficients[2,2]
  k}

we get our samples as previously

M=NULL
for(r in ((-9):9)/10) M=cbind(M,Vectorize(s)(rep(r,500)))

and we plot those ratios

plot(0:1,0:1,xlim=c(-1,1),ylim=c(0,2),col="white")
for(i in 1:19) points(rep((((-9):9)/10)[i],500),M[,i],col="light blue")
VM=apply(M,2,mean)
lines((((-9):9)/10),VM,col="red",lwd=2)
abline(h=1,lty=2)

When we add a useless variable x_2, whatever its correlation with x_1, it will always, on average, increase the variance of \widehat{\beta}_1.

Annual UCSB InsurTech Summit

Just a quick post to mention that an InsurTech Summit will be organized by Mike Ludkovski on Friday May 3rd, 2019, and I will be there, with Francois Millard (Vitality Group), Adam Tashman (Carpe Data, Santa Barbara), Emil Valdez (University of Connecticut), and Howard Zail (Elucidor, LLC, New York City). That will be nice… I will actually also give a talk at the actuarial seminar on the Monday before !

Random thoughts on econometric models with (pure) random features

For my lectures on applied linear models, I wanted to illustrate the fact that the R^2 is never a good measure of the goodness of fit of a model, since it’s quite easy to improve it. Consider the following dataset

n=100
df=data.frame(matrix(rnorm(n*n),n,n))
names(df)=c("Y",paste("X",1:99,sep=""))

with one variable of interest, y, and 99 features, x_j, all of them being (by construction) independent. And we have only 100 observations… Consider here the regression on the first k features, and compute the R_k^2 of that regression

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  summary(model)$r.squared}

Let us see what’s going on…

plot(1:99,Vectorize(reg)(1:99))

(actually, it’s not exactly what we have on the graph… we have the average obtained over 1,000 randomly generated samples, with 90% confidence bands). Observe that \mathbb{E}[R^2_k]=k/n, i.e. if we keep adding pure random noise features, we keep increasing the R^2 (up to 1, actually).
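For completeness, a minimal sketch of that averaging (1,000 replications and 90% bands, as mentioned above ; the code reuses the reg function defined previously),

ns=1000
R2=matrix(NA,ns,99)
for(s in 1:ns){
  df=data.frame(matrix(rnorm(n*n),n,n))
  names(df)=c("Y",paste("X",1:99,sep=""))
  R2[s,]=Vectorize(reg)(1:99)}
plot(1:99,apply(R2,2,mean),type="l")
lines(1:99,apply(R2,2,quantile,.05),lty=2)
lines(1:99,apply(R2,2,quantile,.95),lty=2)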

Good news : as we’ve seen in the course, the adjusted R^2 – denoted \bar{R}^2 – might help. Observe that \mathbb{E}[\bar{R}^2_k]=0, so, in some sense, adding (random) features does not help here…

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  summary(model)$adj.r.squared}
plot(1:99,Vectorize(reg)(1:99))

We can actually do the same with Akaike’s criterion, AIC_k, and Schwarz’s (Bayesian) criterion, BIC_k.

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  AIC(model)}
plot(1:99,Vectorize(reg)(1:99))

For the AIC, the initial increase makes sense : we should not prefer the model with 10 covariates to the one with none. The strange thing is the behavior on the far right : there, we prefer 80 pure noise features to none ! Which I find hard to interpret… For the BIC, the code is simply

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  BIC(model)}
plot(1:99,Vectorize(reg)(1:99))

and here also, we have the same pattern, where we prefer a big model with just pure noise to a model with nothing…

A last one to conclude (or not) : what about the leave-one-out cross validation mean squared error ? More precisely, CV=\frac{1}{n}\sum_{i=1}^n\widehat{\varepsilon}^2_{-i}, where \widehat{\varepsilon}_{-i}=y_i-\widehat{y}_{-i}, and where \widehat{y}_{-i} is the prediction for the ith observation, obtained with the model estimated when that ith observation is deleted. One can prove that \widehat{\beta}_{-i}=\widehat{\beta}-(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{x}_i\widehat{\varepsilon}_i(1-H_{i,i})^{-1}, where H is the classical hat matrix, thus \widehat{\varepsilon}_{-i}=(1-H_{i,i})^{-1}\widehat{\varepsilon}_i, i.e. we do not have to estimate n models (one per deleted observation)

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  h=lm.influence(model)$hat          # diagonal of the hat matrix
  mean( (residuals(model)/(1-h))^2 )}
plot(1:99,Vectorize(reg)(1:99))

Here, it makes sense : adding noisy features yields overfitting, so the (leave-one-out) mean squared error is increasing !

That’s all nice, but it might not be very realistic… Here, for my model with only one variable, I just picked one at random… In practice, we try to get the “best” one… So a more natural idea would be to order the variables according to their correlations with y,

df=data.frame(matrix(rnorm(n*n),n,n))
df=df[,rev(order(abs(cor(df)[1,])))]    # sort the columns by |correlation| with the first one
names(df)=c("Y",paste("X",1:99,sep=""))

and, as before, we can plot the evolution of R^2_k as a function of k, the number of features considered,

which is increasing, with a higher slope at the beginning… For the \bar R^2_k, we might actually prefer a correlated noise feature to nothing (which actually makes sense). So here, since we somehow chose our variables, \bar R^2_k seems to be always positive…

For the AIC_k, here also, there is an initial improvement, before coming back to the original situation (with about 80 features) ; and here also, we observe the drop on the far right part of the graph.

The BIC_k might like the top three features, but soon, we have a deterioration… even if, here also, we have the drop at the far right (with more than 95 features… for 100 observations).

Finally, observe that, here again, our (leave-one-out) cross-validation has not been misled by our noisy variables : its mean squared error is always increasing !

So it seems that cross-validation techniques are more robust than the AIC and BIC (even if we mentioned, in a previous post, connections between all those concepts) when we have a lot of noisy (non-relevant) features.

Do risk classes go beyond stereotypes?

Generalization, stereotypes and clichés

In Thinking, Fast and Slow, Daniel Kahneman discusses at length the importance of stereotypes in understanding many decision-making processes. A so-called System 1 is used for quick decision-making: it allows us to recognize people and objects, helps us focus our attention, and encourages us to fear spiders. It is based on knowledge stored in memory, accessible without intention and without effort. It can be contrasted with System 2, which allows for more complex decision-making, requiring discipline and sequential reflection. In the first case, our brain uses the stereotypes that govern judgments of representativeness, and uses this heuristic to make decisions. If I cook a fish for friends who have come to eat, I open a bottle of white wine. The cliché “fish goes well with white wine” allows me to make a decision quickly, without having to think about it. Stereotypes are statements about a group that are accepted (at least provisionally) as facts about each member. Whether correct or not, stereotypes are the basic tools for thinking about categories in System 1. But in many cases, a more in-depth, more sophisticated reflection – corresponding to System 2 – will make it possible to make a more judicious, even optimal, decision. Rather than picking just any red wine, a pinot noir could perhaps be perfectly suitable for roasted red mullet.

“To generalize is to be an idiot, to particularize is the alone distinction of merit”, wrote William Blake around 1800, annotating speeches by the painter Joshua Reynolds. Stigmatizing an entire population because of a minority in a decision-making process is a misleading generalization, often punished by society. A moral punishment, but sometimes also a legal one (when hiring, for example), in a society that aspires to be civilized, and asks us not to draw erroneous conclusions about an individual from the statistics of a group to which he is attached. But isn’t that what the actuary does every day?

The usual suspects

For Schauer (2009), this “generalization”, condemned by William Blake, is probably the actuary’s raison d’être: “to be an actuary is to be a specialist in generalization, and actuaries engage in a form of decision-making that is sometimes called actuarial”. If I decide to insure a sports car, I am given risky driving characteristics that probably belong to the majority of sports car owners, attributes that I may not share. And as we noted in the introduction, insurance companies, of course, are not the only ones that operate actuarially, according to Schauer’s definition. We all do it, much more often than most of us would probably recognize. We do this when we choose airlines based on their safety record, punctuality or lost luggage. We do this when we associate personal characteristics (a visible tattoo, black or brightly coloured clothing) with behavioural characteristics (such as a propensity for violence) that these personal characteristics would seem to indicate. And we operate in this way when we engage in stereotypes that may be harmless on the basis of nationality, for example by saying that French people are rude, or that Scots all wear kilts, while at the same time acknowledging that more pernicious stereotypes, on the basis of ethnic origin, gender or sexual orientation, are far too widespread today! As the misuse of the word “prejudice” indicates, many people believe that it is unfair to make individual decisions based on non-universal group characteristics, even if group allocations have a solid statistical basis. But the big difference between actuarial science and everyday life is that actuaries have to use a large number of observations. On a personal level, I can thus decide not to travel with a given airline anymore because, on three trips, I have had two bad experiences. Before deciding that travel insurance deserves a higher premium when flying with this company, it takes more than three observations!

In fact, the question is often whether an insurance company’s refusal to provide coverage, or its decision to increase the premiums it charges for the same coverage, is an injustice when it is based on an actuarially justified (but perhaps not universal) generalization. As Leemens (2000) noted, the question was asked of the legislator when insurers observed that Jewish women from Eastern Europe were particularly vulnerable to breast and ovarian cancer. At the end of 2012, the European Court of Justice put an end to all discrimination based on the gender of policyholders: insurers were no longer able to differentiate insurance prices according to whether the policyholder was male or female. But the use of age is still allowed. Indeed, age is often an indicator of a possible decrease in vision or hearing, slower reaction times (and an increased risk of sudden disability), etc. And although there are many individual variations, the available data provide an important empirical justification.

Machines, causality, and stereotypes

A major criticism of machine learning models is their lack of interpretability. But very often, the validation of econometric models requires a narrative built around stereotypes. And this narrative is essential, as Pearl & Mackenzie (2018) remind us. Indeed, in the “Ladder of Causation”, there are three levels. At the first level, we find the notion of association (or correlation), or even conditional probability, which serves as a basis for the constitution of stereotypes: if we observe

P[caries | brushing your teeth] < P[caries | not brushing your teeth]

brushing teeth will be associated with a decrease in the probability of having caries. It is also the basis for regression methods, which are based on correlations between the variable of interest and the others, wrongly called explanatory. In Figure 1, we can see the daily cycling traffic in Helsinki, and the average temperature. We will tend to prefer the graph on the left, showing the evolution of the number of cyclists as a function of temperature, suggesting that temperature could explain the number of cyclists, and not the other way around. But the stereotype doesn’t necessarily rest on the causal link: if I see a lot of cyclists passing by the window, I’ll tell myself it must be hot, or at least warm.

Figure 1: Näytä Data – Author’s visualization

The first level answers the question “what if I see…?” (e.g. “what cycling traffic should we expect if the temperature reaches 20°C?”), and this task can be perfectly accomplished by a machine. The second level is the one that makes it possible to understand an effect, an intervention. The question is then “what if I do…?”. To use our example, we are trying to understand the importance of brushing our teeth on the appearance of cavities. What if brushing your teeth is simply more natural for children with good teeth? We see the third level of the scale coming up, asking the question “what if I had done…?”, based on the idea of a counterfactual model. We are no longer content to measure correlations, we will build a model explaining what would happen by making a change in the causal variables: what would really happen if the child who did not brush his teeth began to do so? For Pearl & Mackenzie (2018), a human being (maybe even an actuary) can make these more advanced arguments, which a machine cannot (yet) do. And very often, these causal patterns are stereotyped. As Charpentier & Diago Barry (2015) point out, in epidemiology, researchers have long questioned the explanation to be given to the fact that small babies of smokers have a higher probability of survival than small babies of non-smoking mothers. The intuition that something is wrong comes from prejudices, from stereotypes that we have, and that a machine cannot have.

When actuaries tell each other stories

As Antonio & Charpentier (2017) noted, the European “gender directive” has confused many insurers who used gender to construct their rates, as the latter was highly correlated with the frequency of claims. But once telematic data are introduced, gender is no longer significant in the regression. Gender has long been used as a proxy to capture an effect that can now be observed directly using telematic data, giving rise to many sexist (and other) stereotypes.

But the stories also make it possible to decide between a false correlation (“spurious correlation”) and a correlation that could be interpreted. In Figure 2, we have life expectancy at birth (a variable that we could try to explain in a pension study context, for example), by French department. On the right, two variables taken at random: the number of tennis club licenses, and the number of advertising agencies. Stereotypes are what will allow us to construct a causal graph, allowing us to understand why there is such a strong correlation between these variables and life expectancy.

Figure 2: Life expectancy at birth for men, left. At the centre, number of tennis licenses per 100,000 inhabitants (source FFT). On the right, number of advertising agencies per 100,000 inhabitants (source INSEE, code NAF 7311Z). Visualization of the author.

Hyper-individualization as an answer?

While William Blake condemned stereotypes by saying “to generalize is to be an idiot”, he also clearly went further, continuing with “to particularize is the alone distinction of merit”. This individualisation is also advocated by more and more insurers, and even desired by many insureds. But as Grace & Terry (2002) pointed out, many policyholders suffer from a significant optimism bias – “if I have an accident, it will not be my fault” – leading them to doubt the insurer’s classification – “I’m less risky than the others”. And morality seems to prove them right, against the actuaries. Yet, not only is generality not, in general, unjust, but justice itself can have considerable elements of generality. To the extent that justice is centred on equity, and to the extent that equity itself is closely linked to equality, then equity, and therefore justice, can be seen as itself based on the idea of generality. The just society is not necessarily a society in which each individual is treated as an isolated set of unique attributes, requiring individualized attention. On the contrary, in some cases, the just society is a society in which generality is not only unavoidable, but also necessary for justice itself. And pooling risks together is the natural response in an insurance context. And it might not be such a big deal if that class is not as homogeneous as it could be, or as we would have expected it to be…

Antonio, K. & Charpentier, A. (2017). La tarification par genre en assurance, corrélation ou causalité ?. Risques, 110 : 107-110.

Charpentier, A. & Diago Barry, A. (2015). Big data : passer d’une analyse de corrélation à une interprétation causale. Risques, 101 : 107-111.

Grace, J. & Terry, M. (2002). Exploring the Causes of Comparative Optimism. Psychologica Belgica, 42 : 65-98.

Kahneman, D. (2011). Thinking, Fast and Slow. FSG Eds.

Leemens, T. (2000). Selective Justice, Genetic Discrimination, and Insurance: Should We Single Out Genes in Our Laws? McGill Law Journal / Revue de droit de McGill, 45(2) : 347-412.

Pearl, J. & Mackenzie, D. (2018). The Book of Why: The New Science of Cause and Effect. Basic Books.

Schauer, F.F. (2009). Profiles, Probabilities, and Stereotypes. Harvard University Press.

Foundations of Machine Learning, part 5

This post is the ninth (and probably last) one of our series on the history and foundations of econometric and machine learning models. The first four were on econometrics techniques. Part 8 is online here.

Optimization and algorithmic aspects

In econometrics, (numerical) optimization became omnipresent as soon as we left the Gaussian model. We briefly mentioned it in the section on the exponential family, with the use of Fisher’s score (gradient descent) to solve the first order condition \mathbf{X}^T W(\beta)^{-1}[\mathbf{y}-\widehat{\mathbf{y}}]=\mathbf{0}. In machine learning, optimization is the central tool. And it is necessary to have effective optimization algorithms, to solve problems (described previously) of the form: \widehat{\beta}\in\underset{\beta\in\mathbb{R}^p}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,\beta_0+\mathbf{x}^T\beta)+\lambda\Vert\boldsymbol{\beta}\Vert\right\rbrace In some cases, instead of global optimization, it is sufficient to consider optimization coordinate per coordinate (widely studied in Daubechies et al. (2004)). If f:\mathbb{R}^d\rightarrow\mathbb{R} is convex and differentiable, and if \mathbf{x} satisfies f(\mathbf{x}+h\mathbf{e}_i)\geq f(\mathbf{x}) for any h\in\mathbb{R} and any i\in\{1,\cdots, d\}, then f(\mathbf{x})=\min\{f\}, where (\mathbf{e}_i) is the canonical basis of \mathbb{R}^d. However, this property is no longer true in the non-differentiable case. But if we assume that the non-differentiable part is (additively) separable, it becomes true again. More specifically, the property holds if f(\mathbf{x})=g(\mathbf{x})+\sum_{i=1}^d h_i(x_i) with \left\lbrace\begin{array}{l}g: \mathbb{R}^d\rightarrow\mathbb{R}\text{ convex and differentiable}\\h_i: \mathbb{R}\rightarrow\mathbb{R}\text{ convex}\end{array}\right. This is the case for the Lasso regression, \beta\mapsto\| \mathbf{y}-\beta_0-\mathbf{X}\beta\|_{\ell_2}+\lambda\|\beta\|_{\ell_1}, as shown by Tseng (2001). Getting back to our initial notations, we can use a coordinate descent algorithm: from an initial value \mathbf{x}^{(0)}, we iterate x_j^{(k)}\in\underset{x_j}{\text{argmin}}\big\lbrace f(x_1^{(k)},\cdots,x_{j-1}^{(k)},x_j,x_{j+1}^{(k-1)},\cdots,x_d^{(k-1)})\big\rbrace\text{ for }j=1,2,\cdots,d These algorithmic and numerical issues may seem secondary to econometricians. However, they are essential in machine learning: a technique is interesting if there is a stable and fast algorithm to obtain a solution. These optimization techniques can be transposed: for example, this coordinate descent technique can be used in the case of SVM (“support vector machines”) methods, when the space is not linearly separable, and the classification error must be penalized (we will come back to this technique in the next section).
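To illustrate, here is a minimal sketch of coordinate descent for the lasso, with a quadratic loss (the objective is (2n)^{-1}\|\mathbf{y}-\mathbf{X}\beta\|_{\ell_2}^2+\lambda\|\beta\|_{\ell_1}, without intercept for simplicity); the soft-thresholding update is standard, but the fixed number of sweeps is an illustrative simplification,

soft = function(z,lambda) sign(z)*pmax(abs(z)-lambda,0)    # soft-thresholding
lasso_cd = function(X,y,lambda,nsweep=100){
  n=nrow(X); p=ncol(X)
  beta=rep(0,p)
  for(k in 1:nsweep){                                      # cycle over the coordinates
    for(j in 1:p){
      r = y - X[,-j,drop=FALSE] %*% beta[-j]               # partial residuals for coordinate j
      beta[j] = soft(sum(X[,j]*r)/n,lambda)/(sum(X[,j]^2)/n)
    }}
  beta}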

In-sample, out-of-sample and cross-validation

These techniques seem intellectually interesting, but we have not yet discussed the choice of the penalty parameter \lambda. But this problem is actually more general, because comparing two parameters \widehat{\beta}_{\lambda_1} and \widehat{\beta}_{\lambda_2} is actually comparing two models. In particular, if we use a Lasso method, with different thresholds \lambda, we compare models that do not have the same dimension. Previously, we have addressed the problem of model comparison from an econometric perspective (by penalizing overly complex models). In the learning literature, judging the quality of a model on the data used to construct it does not make it possible to know how the model will behave on new data. This is the so-called “generalization” problem. The traditional approach then consists in separating the sample (of size n) into two parts: a part that will be used to train the model (the training database, in-sample, of size m) and a part that will be used to test the model (the testing database, out-of-sample, of size n-m). The latter then makes it possible to measure a real predictive risk. Suppose that the data are generated by a linear model y_i=\mathbf{x}_i^T \beta_0+\varepsilon_i, where the \varepsilon_i are independent draws from a centred distribution. The empirical in-sample quadratic risk is here \frac{1}{m}\sum_{i=1}^m\mathbb{E}\big([\mathbf{x}_i^T \widehat{\beta}-\mathbf{x}_i^T \beta_0]^2\big)=\mathbb{E}\big([\mathbf{x}_i^T \widehat{\beta}-\mathbf{x}_i^T \beta_0]^2\big), for any observation i. Assuming the residuals \varepsilon to be Gaussian, we can show that this risk is worth \sigma^2 \text{trace}(\Pi_X)/m, i.e. \sigma^2 p/m. On the other hand, the empirical out-of-sample quadratic risk is here \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big), where \mathbf{x} is a new observation, independent of the others. It can be noted that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big\vert \mathbf{x}\big)=\text{Var}\big(\mathbf{x}^T \widehat{\beta}\big\vert \mathbf{x}\big)=\sigma^2\mathbf{x}^T(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{x}, and, by integrating with respect to \mathbf{x}, \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T\beta_0]^2\big)=\sigma^2\text{trace}\big(\mathbb{E}[\mathbf{x}\mathbf{x}^T]\mathbb{E}\big[(\mathbf{X}^T\mathbf{X})^{-1}\big]\big). The expression is then different from the one obtained in-sample, and, using the bound of Groves & Rothenberg (1969), we can show that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big) \geq \sigma^2\frac{p}{m}, which is pretty intuitive, when we start thinking about it. Except in some simple cases, there is no simple (explicit) formula. Note, however, that if \mathbf{X}\sim\mathcal{N}(0,\sigma^2 \mathbb{I}), then \mathbf{X}^T \mathbf{X} follows a Wishart distribution, and it can be shown that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big)=\sigma^2\frac{p}{m-p-1}. If we now look at the empirical version: if \widehat{\beta} is estimated on the first m observations, \widehat{\mathcal{R}}^{\text{ IS}}=\sum_{i=1}^m [y_i-\boldsymbol{x}_i^T\widehat{\boldsymbol{\beta}}]^2\text{ and }\widehat{\mathcal{R}}^{\text{ OS}}=\sum_{i=m+1}^{n} [y_i-\boldsymbol{x}_i^T\widehat{\boldsymbol{\beta}}]^2 and, as Leeb (2008) noted, \widehat{\mathcal{R}}^{\text{ IS}}-\widehat{\mathcal{R}}^{\text{ OS}}\approx 2\cdot\nu, where \nu represents the number of degrees of freedom, which is not unlike the penalty used in Akaike’s criterion.

Figure 4 shows the respective evolution of \widehat{\mathcal{R}}^{\text{ IS}} and \widehat{\mathcal{R}}^{\text{ OS}} according to the complexity of the model (number of degrees in a polynomial regression, number of nodes in splines, etc). The more complex the model, the more \widehat{\mathcal{R}}^{\text{ IS}} will decrease (this is the red curve, below). But that’s not what we’re interested in here: we want a model that predicts well on new data (i.e. out-of-sample). As Figure 4 shows, if the model is too simple, it does not predict well (just as it does not with in-sample data). But what we can see is that if the model is too complex, we are in a situation of “overfitting”: the model will start to model the noise. Of course, this figure should remind us of the one we’ve seen in our second post of that series.

Figure 4 : Generalization, under- and over-fitting

Instead of splitting the database in two, with some of the data used to calibrate the model and some to study its performance, it is also possible to use cross-validation. To present the general idea, we can go back to the “jackknife”, introduced by Quenouille (1949) (and formalized by Quenouille (1956) and Tukey (1958)), widely used in statistics to reduce bias. Indeed, if we assume that \{y_1,\cdots,y_n\} is a sample drawn from a distribution F_\theta, and that we have an estimator T_n(\mathbf{y})=T_n(y_1,\cdots,y_n), but that this estimator is biased, with \mathbb{E}[T_n(\mathbf{Y})]=\theta+O(n^{-1}), it is possible to reduce the bias by considering \widetilde{T}_n(\mathbf{y})=nT_n(\mathbf{y})-\frac{n-1}{n}\sum_{i=1}^n T_{n-1}(\mathbf{y}_{(i)})\text{ where }\mathbf{y}_{(i)}=(y_1,\cdots,y_{i-1},y_{i+1},\cdots,y_n) It can then be shown that \mathbb{E}[\widetilde{T}_n(\mathbf{Y})]=\theta+O(n^{-2}). The idea of cross-validation is based on the same idea of building an estimator by removing one observation. Since we want to build a predictive model, we will compare the forecast obtained with the estimated model, and the missing observation, \widehat{\mathcal{R}}^{\text{ CV}}=\frac{1}{n}\sum_{i=1}^n \ell(y_i,\widehat{m}_{(i)}(\mathbf{x}_i)) We will speak here of the “leave-one-out” (loocv) method.

This technique reminds us of the traditional method used to find the optimal parameter in exponential smoothing methods for time series. In simple smoothing, we will construct a forecast from a time series as {}_t\widehat{y}_{t+1} =\alpha\cdot{}_{t-1}\widehat{y}_t +(1-\alpha)\cdot y_t, where \alpha\in[0,1], and we will consider as “optimal” \alpha^\star = \underset{\alpha\in[0,1]}{\text{argmin}}\left\lbrace \sum_{t=2}^T \ell({}_{t-1}\widehat{y}_{t},y_{t}) \right\rbraceas described by Hyndman et al (2009).
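For the record, a minimal sketch of that calibration, on an arbitrary series (here the co2 dataset shipped with R, for illustration), with a quadratic loss,

y=as.numeric(co2)
sse=function(alpha){
  yhat=y[1]                          # initial forecast
  s=0
  for(t in 2:length(y)){
    s=s+(y[t]-yhat)^2                # one-step-ahead error at time t
    yhat=alpha*yhat+(1-alpha)*y[t]   # smoothing update, as above
  }
  s}
optimize(sse,c(0,1))$minimum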

The main problem with the leave-one-out method is that it requires calibration of n models, which can be problematic in large dimensions. An alternative method is cross validation by k-blocks (called “k-fold cross validation”) which consists in using a partition of \{1,\cdots,n\} in k groups (or blocks) of the same size, \mathcal{I}_1,\cdots,\mathcal{I}_k, and let us note \mathcal{I}_{\bar j}=\{1,\cdots,n\}\setminus \mathcal{I}_j. By noting \widehat{m}_{(j)} built on the sample \mathcal{I}_{\bar j}, we then set:\widehat{\mathcal{R}}^{k-\text{ CV}}=\frac{1}{k}\sum_{j=1}^k \mathcal{R}_j\text{ where }\mathcal{R}_j=\frac{k}{n}\sum_{i\in\mathcal{I}_{{j}}} \ell(y_i,\widehat{m}_{(j)}(\mathbf{x}_i))Standard cross-validation, where only one observation is removed each time (loocv), is a special case, with k=n. Using k=5 or 10 has a double advantage over k=n: (1) the number of estimates to be made is much smaller, 5 or 10 rather than n; (2) the samples used for estimation are less similar and therefore less correlated to each other, which tends to avoid excess variance, as recalled by James et al. (2013).
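A minimal sketch of that k-fold estimator (assuming a data frame with a response y, a linear model as \widehat{m}, and a quadratic loss; k=10 is an illustrative choice),

cv_kfold=function(df,k=10){
  n=nrow(df)
  fold=sample(rep(1:k,length.out=n))   # random partition into k blocks
  R=rep(NA,k)
  for(j in 1:k){
    m_j=lm(y~.,data=df[fold!=j,])      # model fitted without block j
    e=df$y[fold==j]-predict(m_j,newdata=df[fold==j,])
    R[j]=mean(e^2)                     # quadratic loss on block j
  }
  mean(R)}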

Another alternative is to use bootstrapped samples. Let \mathcal{I}_b be a sample of size n obtained by drawing with replacement in \{1,\cdots,n\}, to know which observations (y_i,\mathbf{x}_i) will be kept in the learning population (at each draw). Note \mathcal{I}_{\bar b}=\{1,\cdots,n\}\setminus\mathcal{I}_b. By noting \widehat{m}_{(b)} the model built on sample \mathcal{I}_b, we then set: \widehat{\mathcal{R}}^{\text{ B}}=\frac{1}{B}\sum_{b=1}^B \mathcal{R}_b\text{ where }\mathcal{R}_b=\frac{n_{\overline{b}}}{n}\sum_{i\in\mathcal{I}_{\overline{b}}} \ell(y_i,\widehat{m}_{(b)}(\mathbf{x}_i)) where n_{\bar b} is the number of observations that have not been kept in \mathcal{I}_b. It should be noted that, with this technique, on average e^{-1}\sim36.8\% of the observations do not appear in the bootstrap sample, and we find an order of magnitude of the proportions used when creating a calibration sample, and a test sample. In fact, as Stone (1977) had shown, the minimization of AIC is to be compared to the cross-validation criterion, and Shao (1997) showed that the minimization of BIC corresponds to k-fold cross-validation, with k=n/\log n.

All those techniques are mentioned here, in the “machine learning” part, since they rely on automatic, computational techniques, and no probabilistic foundations are necessary. In many cases we did use the notation m^\star (at least in the first posts on “machine learning” techniques) to highlight the fact that we want some sort of “optimal” model – and to make a distinction with the estimators \widehat{m} considered earlier, when we had some probabilistic framework. But of course, it is possible (and necessary) to build bridges between those two cultures…

References are online here. As explained in the introduction, this is some sort of online version of an introduction to our joint paper with Emmanuel Flachaire and Antoine Ly, Econometrics and Machine Learning (initially written in French), that will actually appear soon in the journal Economics and Statistics (in English and in French).

Foundations of Machine Learning, part 4

This post is the eighth one of our series on the history and foundations of econometric and machine learning models. The first four were on econometrics techniques. Part 7 is online here.

Penalization and variables selection

One important concept in econometrics is Ockham’s razor – also known as the law of parsimony (lex parsimoniae) – which can be related to abductive reasoning.

Akaike’s criterion was based on a penalty of the likelihood taking into account the complexity of the model (the number of explanatory variables retained). If, in econometrics, it is customary to maximize the likelihood (to build an asymptotically unbiased estimator), and to judge the quality of the model ex-post by penalizing the likelihood, the strategy here will be to penalize ex-ante in the objective function, even if it means building a biased estimator. Typically, we will build: (\widehat{\beta}_{0,\lambda},\widehat{\beta}_{\lambda})=\text{argmin}\left\lbrace\sum_{i=1}^n \ell(y_i,\beta_0+\mathbf{x}^T\beta)+\lambda \text{ penalization}( \boldsymbol{\beta})\right\rbrace, ~~~(11) where the penalty function will often be a norm \|\cdot\| chosen a priori, and a penalty parameter \lambda (we find, in a way, the distinction between AIC and BIC if the penalty function is the complexity of the model – the number of explanatory variables retained). In the case of the \ell_2 norm, we find the ridge estimator, and for the \ell_1 norm, we find the lasso estimator (“Least Absolute Shrinkage and Selection Operator”). The penalty previously used involved the number of degrees of freedom of the model, so it may seem surprising to use \|\beta\|_{\ell_2} as in the ridge regression. However, we can envisage a Bayesian interpretation of this penalty. It should be recalled that, in a Bayesian model: \underbrace{\mathbb{P}[\boldsymbol{\theta}\vert\boldsymbol{y}]}_{\text{posterior}} \propto \underbrace{\mathbb{P}[\boldsymbol{y}\vert\boldsymbol{\theta}]}_{\text{likelihood}} \cdot \underbrace{\mathbb{P}[\boldsymbol{\theta}]}_{\text{prior}} or \log\mathbb{P}[\boldsymbol{\theta}\vert\boldsymbol{y}]= \underbrace{\log \mathbb{P}[\boldsymbol{y}\vert\boldsymbol{\theta}]}_{\text{log likelihood}} + \underbrace{\log\mathbb{P}[\boldsymbol{\theta}]}_{\text{penalty}} In a Gaussian linear model, if we assume that the prior distribution of \theta is a centred Gaussian distribution, we find a penalty based on a quadratic form of the components of \theta.

Before going back in detail to these two estimators, obtained using the \ell_1 or \ell_2 norms, let us return for a moment to a very similar problem: the best choice of explanatory variables. Classically (and this will be even more true in large dimension), we can have a large number of explanatory variables, p, but many are just noise, in the sense that \beta_j=0 for a large number of j‘s. Let s be the number of (really) relevant covariates, s=\#S, with S=\{j=1,\cdots,p:\beta_j\neq 0\}. If we note \mathbf{X}_S the matrix composed of the relevant variables (in columns), then we assume that the real model is of the form y=\mathbf{x}_S^T \beta_S+\varepsilon. Intuitively, an interesting estimator would then be \widehat{\beta}_S=[\mathbf{X}_S^T \mathbf{X}_S ]^{-1} \mathbf{X}_S^T \mathbf{y}, but this estimator is only theoretical, because the set S is unknown, here. This estimator can actually be seen as the oracle estimator mentioned above. One may then be tempted to solve (\widehat{\beta}_{0,s},\widehat{\beta}_{s})=\underset{\beta_S\in\mathbb{R}^s}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,\beta_0+\mathbf{x}^T\beta_S)\right\rbrace,\text{ s.t. } \# {S}=s This problem was introduced by Foster & George (1994), using the \ell_0 notation. More precisely, let us define here the following three norms, where \mathbf{a}\in\mathbb{R}^d, \Vert\boldsymbol{a} \Vert_{\ell_0}=\sum_{i=1}^d \mathbf{1}(a_i\neq 0), ~~ \Vert\mathbf{a} \Vert_{\ell_1}=\sum_{i=1}^d |a_i|~~\text{ and }~~\Vert\mathbf{a} \Vert_{\ell_2}=\left(\sum_{i=1}^d a_i^2\right)^{1/2}

Table 1: Constrained optimization and regularization.

Let us consider the optimization problems in Table 1. If we consider the classical problem where the quadratic norm is used for \ell, the two problems of equation (\ell1) in Table 1 are equivalent, in the sense that, for any solution (\beta^\star,s) to the left problem, there is a \lambda^\star such that (\beta^\star,\lambda^\star) is a solution of the right problem; and vice versa. The result is also true for the problems (\ell2). These are indeed convex problems. On the other hand, the two problems (\ell0) are not equivalent: if, for (\beta^\star,\lambda^\star), solution of the right problem, there is an s^\star such that \beta^\star is a solution of the left problem, the converse is not true. More generally, if you want to use an \ell_p norm, sparsity is obtained if p\leq 1, whereas you need p\geq1 to have the convexity of the optimization program.

One may be tempted to solve the penalized program (\ell0) directly, as suggested by Foster & George (1994). Numerically, it is a complex combinatorial problem in large dimension (Natarajan (1995) notes that it is an NP-hard problem), but it is possible to show that if \lambda\sim\sigma^2 \log(p), then \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big) \leq \underbrace{\mathbb{E}\big([\mathbf{x}_{ {S}}^T\widehat{\beta}_{{S}}-\mathbf{x}^T \beta_0]^2\big)}_{=\sigma^2 \#{S}}\cdot \big(4\log p+2+o(1)\big) Observe that, in this case, \widehat{\beta}_{\lambda,j}^{\text{ sub}} = \left\lbrace\begin{array}{l}0 \text{ if } j\notin{S}_\lambda(\beta)\\ \widehat{\beta}_{j}^{\text{ ols}} \text{ if } j\in{S}_\lambda(\beta),\end{array}\right. where S_\lambda(\beta) refers to the set of non-zero coordinates when solving (\ell0).

The problem (\ell2) is strictly convex if \ell is the quadratic norm; in other words, the ridge estimator is always well defined, with, in addition, an explicit form, \widehat{\beta}_\lambda^{\text{ ridge}}=(\mathbf{X}^T\mathbf{X}+\lambda\mathbb{I})^{-1}\mathbf{X}^T\mathbf{y}=(\mathbf{X}^T\mathbf{X}+\lambda\mathbb{I})^{-1}(\mathbf{X}^T\mathbf{X})\widehat{\beta}^{\text{ ols}} Therefore, it can be deduced that \text{bias}[\widehat{\beta}_\lambda^{\text{ ridge}}]=-\lambda[\mathbf{X}^T\mathbf{X}+\lambda\mathbb{I}]^{-1}\widehat{\beta}^{\text{ ols}} and \text{Var}[\widehat{\beta}_\lambda^{\text{ ridge}}]=\sigma^2[\mathbf{X}^T\mathbf{X}+\lambda\mathbb{I}]^{-1}\mathbf{X}^T\mathbf{X}[\mathbf{X}^T\mathbf{X}+\lambda\mathbb{I}]^{-1} With a matrix of orthonormal explanatory variables (i.e. \mathbf{X}^T \mathbf{X}=\mathbb{I}), the expressions can be simplified, \text{bias}[\widehat{\beta}_\lambda^{\text{ ridge}}]=\frac{\lambda}{1+\lambda}\widehat{\beta}^{\text{ ols}}\text{ and }\text{Var}[\widehat{\beta}_\lambda^{\text{ ridge}}]=\frac{\sigma^2}{(1+\lambda)^2}\mathbb{I} Observe that \text{Var}[\widehat{\beta}_\lambda^{\text{ ridge}}]<\text{Var}[\widehat{\beta}^{\text{ ols}}]. And because \text{mse}[\widehat{\beta}_\lambda^{\text{ ridge}}]=\frac{p\sigma^2}{(1+\lambda)^2}+\frac{\lambda^2}{(1+\lambda)^2}\beta^T\beta we obtain an optimal value for \lambda: \lambda^\star=p\sigma^2/\beta^T\beta

On the other hand, if \ell is no longer the quadratic norm but the \ell_1 norm, the problem (\ell1) is not always strictly convex, and in particular, the optimum is not always unique (for example if \mathbf{X}^T \mathbf{X} is singular). But if it is strictly convex, then the predictions \mathbf{X}\beta will be unique. It should also be noted that two solutions are necessarily consistent in terms of the signs of the coefficients: it is not possible to have \beta_j<0 for one solution and \beta_j>0 for another. From a heuristic point of view, the program (\ell1) is interesting because it allows to obtain, in many cases, a corner solution, which corresponds to solving a problem of type (\ell0) – as shown visually on Figure 2.

Figure 2 : Penalization based on norms \ell_0, \ell_1 and \ell_2 (from Hastie et al. (2016)).

Let us consider a very simple model: y_i=x_i \beta+\varepsilon_i, with an \ell_1 penalty and an \ell_2 loss function. The problem (\ell1) then becomes \min\big\{\mathbf{y}^T\mathbf{y}-2\mathbf{y}^T\mathbf{x}\beta+\beta\mathbf{x}^T\mathbf{x}\beta+2\lambda|\beta|\big\} The first order condition is then -2\mathbf{y}^T\mathbf{x} + 2\mathbf{x}^T\mathbf{x}\widehat{\beta}\pm 2\lambda=0, where the sign of the last term depends on the sign of \widehat{\beta}. Suppose that the least square estimator (obtained by setting \lambda=0) is (strictly) positive, i.e. \mathbf{y}^T \mathbf{x}>0. If \lambda is not too large, we can imagine that \widehat{\beta} has the same sign as \widehat{\beta}^{\text{ ols}}, and therefore the condition becomes -2\mathbf{y}^T \mathbf{x}+2\mathbf{x}^T \mathbf{x}\widehat{\beta}+2\lambda=0, and the solution is \widehat{\beta}_{\lambda}^{\text{ lasso}}=\frac{\mathbf{y}^T\mathbf{x}-\lambda}{\mathbf{x}^T\mathbf{x}} By increasing \lambda, there will be a point where \widehat{\beta}_\lambda=0. If we increase \lambda a little more, \widehat{\beta}_\lambda does not become negative, because in that case the sign of the last term of the first order condition changes, and we then try to solve -2\mathbf{y}^T\mathbf{x} + 2\mathbf{x}^T\mathbf{x}\widehat{\beta}- 2\lambda=0, whose solution is \widehat{\beta}_{\lambda}^{\text{ lasso}}=\frac{\mathbf{y}^T\mathbf{x}+\lambda}{\mathbf{x}^T\mathbf{x}} But this solution is positive (we assumed \mathbf{y}^T \mathbf{x}>0), so it is not possible to have \widehat{\beta}_\lambda <0 at the same time: beyond some \lambda, \widehat{\beta}_\lambda=0 is then a corner solution. Things are of course more complicated in larger dimensions (Tibshirani & Wasserman (2016) go back at length on the geometry of the solutions), but as Candès & Plan (2009) note, under minimal assumptions guaranteeing that the predictors are not strongly correlated, the Lasso obtains a quadratic error almost as good as if we had an oracle providing perfect information on the set of \beta_j‘s that are not zero. With some additional technical hypotheses, it can be shown that this estimator is “sparsistent”, in the sense that the support of \widehat{\beta}_\lambda^{\text{ lasso}} is that of \beta; in other words, the Lasso makes it possible to select variables (more discussions on this point can be found in Hastie et al. (2016)).

More generally, it can be shown that \widehat{\beta}_\lambda^{\text{ lasso}} is a biased estimator, but that its variance may be sufficiently low that the mean squared error is lower than the one of the least squares estimator. To compare the three techniques, relative to the least square estimator (obtained when \lambda=0), if we assume that the explanatory variables are orthonormal, then \widehat{\beta}_{\lambda,j}^{\text{ subset}}=\widehat{\beta}_{j}^{\text{ ols}}\boldsymbol{1}_{|\widehat{\beta}_{j}^{\text{ ols}}|>b}, ~~\widehat{\beta}_{\lambda,j}^{\text{ ridge}}=\frac{\widehat{\beta}_{j}^{\text{ ols}}}{1+\lambda}~~\text{ and }~~\widehat{\beta}_{\lambda,j}^{\text{ lasso}}=\text{sign}[\widehat{\beta}_{j}^{\text{ ols}}]\cdot(|\widehat{\beta}_{j}^{\text{ ols}}|-\lambda)_+
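Those three rules are easy to visualize; a minimal sketch (with an arbitrary \lambda=1, and the subset threshold b set equal to \lambda, for the comparison),

b_ols=seq(-3,3,by=.01)                         # least square estimates
lambda=1
b_sub=b_ols*(abs(b_ols)>lambda)                # best subset : hard thresholding
b_ridge=b_ols/(1+lambda)                       # ridge : proportional shrinkage
b_lasso=sign(b_ols)*pmax(abs(b_ols)-lambda,0)  # lasso : soft thresholding
plot(b_ols,b_lasso,type="l",xlab="ols estimate",ylab="shrunken estimate")
lines(b_ols,b_ridge,lty=2)
lines(b_ols,b_sub,lty=3)
abline(a=0,b=1,col="grey")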

Figure 3 : Penalization based on norms \ell_0, \ell_1 and \ell_2 (from Hastie et al. (2016)).

To be continued with probably a final post this week (references are online here)…

Foundations of Machine Learning, part 3

This post is the seventh one of our series on the history and foundations of econometric and machine learning models. The first four were on econometrics techniques. Part 6 is online here.

Boosting and sequential learning

As we have seen before, modelling here is based on solving an optimization problem, and solving the problem described by equation (6) is all the more complex that the functional space \mathcal{M} is large. The idea of boosting, as introduced by Schapire & Freund (2012), is to learn, slowly, from the errors of the model, in an iterative way. In the first step, we estimate a model m_1 for y, from \mathbf{X}, which will give an error \varepsilon_1. In the second step, we estimate a model m_2 for \varepsilon_1, from \mathbf{X}, which will give an error \varepsilon_2, etc. We will then retain as a model, after k iterations, m^{(k)}(\cdot)=\underbrace{m_1(\cdot)}_{\sim y}+\underbrace{m_2(\cdot)}_{\sim \varepsilon_1}+\underbrace{m_3(\cdot)}_{\sim \varepsilon_2}+\cdots+\underbrace{m_k(\cdot)}_{\sim \varepsilon_{k-1}}=m^{(k-1)}(\cdot)+m_k(\cdot)~~~(7) Here, the error \varepsilon is seen as the difference between y and the model m(\mathbf{x}), but it can also be seen as the gradient associated with the quadratic loss function. Formally, \varepsilon can be seen as \nabla\ell in a more general context (here we find an interpretation that reminds us of the residuals in generalized linear models).

Equation (7) can be seen as a gradient descent, but written in a dual way. The problem will then be rewritten as an optimization problem: m^{(k)}=m^{(k-1)}+\underset{h\in\mathcal{H}}{\text{argmin}}\left\lbrace \sum_{i=1}^n \ell(\underbrace{y_i-m^{(k-1)}(\boldsymbol{x}_i)}_{\varepsilon_{k,i}},h(\boldsymbol{x}_i))\right\rbrace~~~(8) where the trick is to consider a relatively simple space \mathcal{H} (we will speak of a “weak learner”). Classically, \mathcal{H} is a set of step functions (which will be found in classification and regression trees), called “stumps”. To ensure that learning is indeed slow, it is not uncommon to use a shrinkage parameter, and, instead of setting, for example, \varepsilon_1=y-m_1(\mathbf{x}), we will set \varepsilon_1=y-\alpha\cdot m_1(\mathbf{x}) with \alpha\in[0,1]. It should be noted that it is because a non-linear space is used for \mathcal{H}, and because learning is slow, that this algorithm works well. In the case of the Gaussian linear model, remember that the residuals \varepsilon=y-\mathbf{x}^T\beta are orthogonal to the explanatory variables, \mathbf{X}, and it is then impossible to learn from our errors. The main difficulty is to stop in time, because after too many iterations, it is no longer the function m that is approximated, but the noise. This problem is called overfitting.

This presentation has the advantage of having a heuristic reminiscent of an econometric model, by iteratively modelling the residuals with a (very) simple model. But this is often not the presentation used in the learning literature, which places more emphasis on an optimization algorithm heuristic (and a gradient approximation). The function is learned iteratively, starting from a constant value, m^{(0)}=\underset{m\in\mathbb{R}}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,m)\right\rbrace then we consider the following learning procedure {\displaystyle m^{(k)}=m^{(k-1)}+{\underset{h\in {\mathcal {H}}}{\text{argmin}}}\sum _{i=1}^{n}\ell(y_{i},m^{(k-1)}(\mathbf{x}_{i})+h(\mathbf{x}_{i}))}~~~(9) which can be written, if \mathcal{H} is a set of differentiable functions, {\displaystyle m^{(k)}=m^{(k-1)}-\gamma_{k}\sum _{i=1}^{n}\nabla _{m^{(k-1)}}\ell(y_{i},m^{(k-1)}(\mathbf{x}_{i})),} where {\displaystyle \gamma _{k}=\underset{\gamma }{\text{argmin }}\sum _{i=1}^{n}\ell\left(y_{i},m^{(k-1)}( \mathbf{x}_{i})-\gamma \nabla _{m^{(k-1)}}\ell(y_{i},m^{(k-1)}( \mathbf{x}_{i}))\right).} To better understand the relationship with the approach described above, at step k, pseudo-residuals are defined by setting r_{i,k}=-\left.\frac{\partial \ell(y_i,m(\mathbf{x}_i))}{\partial m(\mathbf{x}_i)}\right\vert_{m(\mathbf{x})=m^{(k-1)}( \mathbf{x})}\text{ where }i=1,\cdots,n A simple model is then sought to explain these pseudo-residuals according to the explanatory variables \mathbf{x}_i, i.e. r_{i,k}=h^\star(\mathbf{x}_i), where h^\star\in\mathcal{H}. In a second step, we look for an optimal multiplier by solving \gamma_k = \underset{\gamma\in\mathbb{R}}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,m^{(k-1)}( \mathbf{x}_i)+\gamma h^\star(\mathbf{x}_i))\right\rbrace then update the model by setting m_k(\cdot)=m_{k-1}(\cdot)+\gamma_k h^\star(\cdot). More formally, we move from equation (8) – which clearly shows that we are building a model on the residuals – to equation (9) – which will then be translated as a gradient computation problem – noting that \ell(y,m+h)=\ell(y-m,h) for a loss that depends only on y-m. Classically, the class \mathcal{H} of functions consists of regression trees. It is also possible to use a form of penalty by setting m_k(\cdot)=m_{k-1}(\cdot)+\nu\gamma_k h^\star(\cdot), with \nu\in(0,1). But let’s come back – in our next post – to the importance of penalization, before discussing the numerical aspects of optimization.
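To make that sequential mechanism explicit, here is a minimal sketch of \ell_2-boosting with stumps (rpart is used as the weak learner; the shrinkage \nu=0.1, the 100 iterations and the simulated data are illustrative choices, and the stopping-time issue mentioned above is ignored),

library(rpart)
n=200
x=runif(n)
y=sin(2*pi*x)+rnorm(n)/5             # simulated data, for illustration
df=data.frame(x=x,y=y)
nu=.1                                # shrinkage parameter
m=rep(mean(y),n)                     # start from a constant model
for(k in 1:100){
  df$eps=y-m                         # (pseudo-)residuals, quadratic loss
  stump=rpart(eps~x,data=df,maxdepth=1)  # weak learner : a single-split tree
  m=m+nu*predict(stump,newdata=df)   # slow update of the model
}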

To be continued (keep in mind that references are online here)…

Foundations of Machine Learning, part 2

This post is the sixth one of our series on the history and foundations of econometric and machine learning models. The first four were on econometric techniques. Part 5 is online here.

The probabilistic formalism in the 80’s

We have a training sample, with observations (\mathbf{x}_i,y_i), where the variables y are in a set \mathcal{Y}. In the case of classification, \mathcal{Y}=\{-1,+1\}, but a relatively general set can be considered (note that while econometricians prefer \mathcal{Y}=\{0,1\} – because of the Bernoulli distribution, and because 0 and 1 are the lower and upper bounds of probabilities – people in the “machine learning” community prefer \mathcal{Y}=\{-1,+1\}). A predictor m is a function taking values in \mathcal{Y}, used to label (or classify) future new observations, using some features that lie in a set \mathcal{X}. It is assumed that the labels are produced by an (unknown) classifier f, called the target. For a statistician, this function would be the real model. Naturally, we want to build m as close as possible to f. Let \mathbb{P} be an (unknown) distribution on \mathcal{X}. The error of m with respect to the target f is defined by \mathcal{R}_{\mathbb{P},f}(m)=\mathbb{P}[m(\boldsymbol{X})\neq f(\boldsymbol{X})]\text{ where }\boldsymbol{X}\sim\mathbb{P} or, equivalently, \mathcal{R}_{\mathbb{P},f}(m)=\mathbb{P}\big[\{\boldsymbol{x}\in\mathcal{X}:m(\boldsymbol{x})\neq f(\boldsymbol{x})\}\big] To obtain our “optimal” classifier, it becomes necessary to assume that there is a link between the data in our sample and the pair (\mathbb{P},f), i.e. a data-generating model. We will then assume that the \mathbf{x}_i are obtained by independent draws from \mathbb{P}, and that y_i=f(\mathbf{x}_i). We can define the empirical risk of a classifier m as \widehat{{R}}(m)=\frac{1}{n}\sum_{i=1}^n \boldsymbol{1}(m(\boldsymbol{x}_i)\neq y_i)
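In R, the empirical risk is simply the average misclassification rate; a minimal sketch, with a hypothetical target f and a hypothetical candidate classifier m on \mathcal{X}=\mathbb{R},

set.seed(1)
x=rnorm(100)
f=function(x) ifelse(x>=0,+1,-1)    # the (usually unknown) target
m=function(x) ifelse(x>=.2,+1,-1)   # a candidate classifier
y=f(x)                              # labels produced by the target
mean(m(x)!=y)                       # empirical risk of m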

It is important to recognize that a perfect model cannot be found, in the sense that \mathcal{R}_{\mathbb{P},f}(m)=0. Indeed, consider the simplest case, with \mathcal{X}=\{x_1,x_2\}, where \mathbb{P} is such that \mathbb{P}(\{x_1\})=1-p and \mathbb{P}(\{x_2\})=p. The probability of never observing \{x_2\} among the n observations is (1-p)^n, and if p<1/n, it is quite likely never to observe \{x_2\}, so its label can never be predicted. We cannot therefore hope to have zero risk whatever \mathbb{P}. More generally, it is also possible to observe both \{x_1\} and \{x_2\}, and still make mistakes on the labels. So, instead of looking for a perfect model, we can try to have an “approximately correct” model: we will try to find m such that \mathcal{R}_{\mathbb{P},f}(m)\leq\varepsilon, where \varepsilon is an a priori specified threshold. But even this condition is too strong, and cannot be fulfilled. Thus, we will usually ask to have \mathcal{R}_{\mathbb{P},f}(m)\leq\varepsilon with some probability 1-\delta. Hence, we will try to be “probably approximately correct” (PAC), allowing a mistake with probability \delta, again fixed a priori.
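A quick numerical check of that claim, with values chosen only for the illustration,

n=100; p=1/(2*n)   # p below 1/n
(1-p)^n            # ~ 0.6058: x_2 is missed more than 60% of the time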

Also, when we build a classifier, we know neither \mathbb{P} nor f, but we give ourselves a precision criterion \varepsilon and a confidence parameter \delta, and we have n observations. Note that n, \varepsilon and \delta can be linked. We then look for a model m such that \mathcal{R}_{\mathbb{P},f}(m)\leq\varepsilon with probability (at least) 1-\delta, so that we are probably approximately correct. Wolpert (1996) has shown (see details in Wolpert & Macready (1997)) that there is no universal learning algorithm: in particular, it can be shown that there is a distribution \mathbb{P} such that \mathcal{R}_{\mathbb{P},f}(m) is relatively high, with a relatively high probability.

The interpretation is that since we cannot learn (in the PAC sense) over all possible functions m, we will force m to belong to a particular class, denoted \mathcal{M}. Let us suppose, to start with, that \mathcal{M} contains a finite number of possible models. We can then show that, for all \varepsilon and \delta, and for all \mathbb{P} and f, if we have enough observations (more precisely n\geq \varepsilon^{-1} \log[\delta^{-1} |\mathcal{M}|]), then, with probability greater than 1-\delta, \mathcal{R}_{\mathbb{P},f}(m^\star)\leq\varepsilon where m^\star \in \underset{m\in\mathcal{M}}{\text{argmin}}\Big\lbrace\frac{1}{n}\sum_{i=1}^n \boldsymbol{1}(m(\boldsymbol{x}_i)\neq y_i)\Big\rbrace In other words, m^\star is a model in \mathcal{M} that minimizes the empirical risk.
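To get orders of magnitude, this bound is easy to evaluate; here is a small numerical illustration, with values of \varepsilon, \delta and |\mathcal{M}| chosen arbitrarily,

sample_complexity=function(eps,delta,cardM) ceiling(log(cardM/delta)/eps)
sample_complexity(eps=.05,delta=.05,cardM=1000)
# 199: with 1,000 candidate models, 199 observations are enough to be PAC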

We can go a little further, staying in the case where \mathcal{Y}=\{-1,+1\}. A class \mathcal{M} of classifiers will be called PAC-learnable if there is a function n_{\mathcal{M}}:[0,1]^2\rightarrow \mathbb{N} such that, for all \varepsilon, \delta and \mathbb{P}, if the target f is assumed to belong to \mathcal{M}, then, using n>n_{\mathcal{M}}(\varepsilon,\delta) observations \mathbf{x}_i drawn from \mathbb{P} and labelled y_i by f, there is m\in\mathcal{M} such that, with probability 1-\delta, \mathcal{R}_{\mathbb{P},f}(m)\leq\varepsilon. The function n_{\mathcal{M}} is then called the “sample complexity” of learning. In particular, we have seen that if \mathcal{M} contains a finite number of classifiers, then \mathcal{M} is PAC-learnable with complexity n_{\mathcal{M}}(\varepsilon,\delta)=\varepsilon^{-1} \log[\delta^{-1} |\mathcal{M}|].

Naturally, we would like to have a more general result, especially if \mathcal{M} is not finite. To do this, the Vapnik-Chervonenkis (VC) dimension must be used, which is based on the idea of shattering points (for a binary classification). Consider k points \{\boldsymbol{x}_1,\cdots,\boldsymbol{x}_k\}, and consider the set E_k=\big\lbrace(m(\boldsymbol{x}_1),\cdots,m(\boldsymbol{x}_k))\text{ for }m\in\mathcal{M}\big\rbrace Note that the elements of E_k belong to \{-1,+1\}^k, in other words |E_k|\leq 2^k. We will say that \mathcal{M} shatters the points if all the combinations are possible, i.e. |E_k|=2^k. Intuitively, in that case, the labels of the set of points do not provide enough information on the target f, because anything is possible. The VC dimension of \mathcal{M} is then VC(\mathcal{M})=\sup\big\lbrace k\text{ such that }\mathcal{M}\text{ shatters }\{\boldsymbol{x}_1,\cdots,\boldsymbol{x}_k\}\big\rbrace

For example, if \mathcal{X}=\mathbb{R}, and all (simple) models of the form[1] m_{a,b}=\mathbf{1}_{\pm}(x\in[a,b]) are considered, no set of three ordered points \{x_1,x_2,x_3\} can be shattered: it is sufficient to assign +1, -1 and +1 to x_1, x_2 and x_3, respectively, a labelling no interval can produce, therefore VC<3. On the other hand, \{0,1\} is shattered, so VC\geq 2: the VC dimension of this set of predictors is 2. If we increase by one dimension, taking \mathcal{X}=\mathbb{R}^2, and consider all (simple) models of the form m_{\boldsymbol{a},\boldsymbol{b}}=\mathbf{1}_{\pm}(\boldsymbol{x}\in[\boldsymbol{a},\boldsymbol{b}]) (where [\boldsymbol{a},\boldsymbol{b}] refers to a rectangle), then the VC dimension of \mathcal{M} is here 4.
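This shattering property can be checked numerically; below is a brute-force sketch (written for this example only, not an efficient algorithm) that tries to reproduce every labelling in \{-1,+1\}^k with an interval,

shattered=function(x){
  k=length(x)
  labellings=expand.grid(rep(list(c(-1,1)),k))  # all 2^k labellings
  candidates=c(x-.5,x+.5)                       # candidate interval endpoints
  ok=apply(labellings,1,function(s){
    any(outer(candidates,candidates,Vectorize(function(a,b)
      all(ifelse(x>=a & x<=b,1,-1)==s))))
  })
  all(ok)
}
shattered(c(0,1))     # TRUE : VC >= 2
shattered(c(0,1,2))   # FALSE: the labelling (+1,-1,+1) is impossible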

To introduce SVMs, let us place ourselves in the case where \mathcal{X}=\mathbb{R}^k, and consider separations by hyperplanes passing through the origin (we will say homogeneous), in the sense that m_{\mathbf{w}}(\mathbf{x})=\mathbf{1}_{\pm}(\mathbf{w}^T \mathbf{x}\geq 0). It can be shown that no set of k+1 points can be shattered by these homogeneous half-spaces in \mathbb{R}^k, and therefore VC(\mathcal{M})=k. If we add a constant, in the sense that m_{\mathbf{w},b}(\mathbf{x})=\mathbf{1}_{\pm}(\mathbf{w}^T \mathbf{x}+b\geq 0), we can show that no set of k+2 points can be shattered by these (non-homogeneous) half-spaces in \mathbb{R}^k, and therefore VC(\mathcal{M})=k+1. This dimension is reminiscent of the dimension of the model we have seen in the econometric context.

From this VC dimension, we deduce the so-called fundamental theorem of learning: if \mathcal{M} is a class of dimension d=VC(\mathcal{M}), then there are positive constants \underline{C} and \overline{C} such that the sample complexity for \mathcal{M} to be PAC-learnable satisfies \underline{C}\epsilon^{-1}\big(d+\log[\delta^{-1}]\big)\leq n_{\mathcal{M}}(\epsilon,\delta) \leq \overline{C}\epsilon^{-1}\big(d\log[\epsilon^{-1}]+\log[\delta^{-1}]\big) The link between the notion of learning (as defined in Valiant (1984)) and the VC dimension was clearly established in Blumer et al (1989).

Nevertheless, while the work of Vapnik and Chervonenkis is considered the foundation of statistical learning, Thomas Cover’s work in the 1960s and 1970s should also be mentioned, in particular Cover (1965) on the capacities of linear models, and Cover & Hart (1967) on learning in the context of the k-nearest-neighbors algorithm. These studies linked learning, information theory (with the textbook Cover & Thomas (1991)), complexity and statistics. Other authors subsequently brought the two communities, learning and statistics, closer together. For example, Halbert White proposed to see neural networks in a statistical context in White (1989), going so far as to state that « learning procedures used to train artificial neural networks are inherently statistical techniques. It follows that statistical theory can provide considerable insight into the properties, advantages, and disadvantages of different network learning methods ». This turning point in the late 1980s would anchor learning theory in a probabilistic context.

Objective and loss function

These choices (of objective and loss function) are essential, and very dependent on the problem under consideration. Let us begin by describing a historically important model, Rosenblatt’s (1958) “perceptron”, introduced for classification problems, where y\in\{-1,+1\}, inspired by McCulloch & Pitts (1943). We have data \{(y_i,\mathbf{x}_i)\}, and we will iteratively build a sequence of models m^{(k)}(\mathbf{x}), where at each step we learn from the errors of the previous model. In the perceptron, a linear model is considered, so that m(\mathbf{x})=\boldsymbol{1}_{\pm}(\beta_0+\mathbf{x}^T \boldsymbol{\beta}\geq 0)=\left\lbrace\begin{array}{l}+1\text{ if }\beta_0+\mathbf{x}^T \boldsymbol{\beta}\geq 0\\-1\text{ if }\beta_0+\mathbf{x}^T \boldsymbol{\beta}< 0\end{array}\right. where the \beta coefficients are often interpreted as “weights” assigned to each of the explanatory variables. We give ourselves initial weights (\beta_0^{(0)},\boldsymbol{\beta}^{(0)}), which we update taking into account the prediction error made, between y_i and the prediction \widehat{y}_i^{(k)}: \widehat{y}_i^{(k)}=m^{(k)}(\mathbf{x}_i)=\boldsymbol{1}_{\pm}(\beta_0^{(k)}+\mathbf{x}^T \boldsymbol{\beta}^{(k)}\geq 0), with, in the case of the perceptron, \beta_j^{(k+1)}={\beta}_j^{(k)}+\eta\underbrace{(\mathbf{y}-\widehat{\mathbf{y}}^{(k)})^T}_{=\ell({\mathbf{y}},\widehat{\mathbf{y}}^{(k)})}\mathbf{x}_j Here \ell(y,y')=\mathbf{1}(y\neq y') is a loss function, which allows us to put a price on an error made by predicting \widehat{y}=m(\mathbf{x}) and observing y. For a regression problem, we can consider a quadratic error \ell_2, with \ell(y,m(\mathbf{x}))=(y-m(\mathbf{x}))^2, or an absolute-value error \ell_1, with \ell(y,m(\mathbf{x}))=|y-m(\mathbf{x})|. Here, for our classification problem, we used a misclassification indicator (we could discuss the symmetry of this loss function, which suggests that a false positive costs as much as a false negative). Once this loss function has been specified, we recognize in the problem previously described a gradient descent, and we see that we are trying to solve m^\star(\mathbf{x})=\underset{m\in\mathcal{M}}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,m(\mathbf{x}_i))\right\rbrace~~~(6) for a predefined set of predictors \mathcal{M}. Any machine learning problem is mathematically formulated as an optimization problem, whose solution determines a set of model parameters (if the family \mathcal{M} is described by a set of parameters – which can be coordinates in a functional basis). We can denote by \mathcal{M}_0 the space of hyperplanes of \mathbb{R}^p, in the sense that m\in\mathcal{M}_0 \text{\quad means \quad}m(\mathbf{x})=\beta_0+\boldsymbol{\beta}^T\mathbf{x}\text{ where }\boldsymbol{\beta}\in\mathbb{R}^p generating the class of linear predictors. We will then have the estimator that minimizes the empirical risk. Some of the recent work in statistical learning aims to study the properties of the estimator \widehat{m}^\star, known as the “oracle”, in a family \mathcal{M} of estimators, \widehat{m}^{\star} =\underset{\widehat{m}\in\mathcal{M}}{\text{argmin}}\big\lbrace\mathcal{R}(\widehat{m},m)\big\rbrace This estimator is, of course, impossible to compute, because it depends on m, the real model, which is unknown.
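A minimal sketch of these perceptron updates in R (the simulated data, the learning rate \eta and the number of passes are all assumptions made for the illustration),

set.seed(1)
n=100
X=cbind(1,matrix(rnorm(2*n),n,2))      # a column of 1s for beta_0
y=ifelse(X%*%c(.5,1,-1)+rnorm(n,0,.1)>=0,+1,-1)
beta=rep(0,3)                          # initial weights beta^(0)
eta=.1                                 # learning rate
for(k in 1:50){
  yhat=ifelse(X%*%beta>=0,+1,-1)       # current predictions
  beta=beta+eta*t(X)%*%(y-yhat)        # learn from the errors
}
mean(ifelse(X%*%beta>=0,+1,-1)==y)     # in-sample accuracy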

But let us come back to these loss functions. A loss function \ell is a function \mathbb{R}^d\times\mathbb{R}^d\rightarrow\mathbb{R}_+, symmetric, which satisfies the triangle inequality, and such that \ell(x,y)=0 if and only if x=y. The associated norm is \|\cdot\|, such that \ell(x,y)=\|x-y\|=\ell(x-y,0) (using the fact that \ell(x,y+z)=\ell(x-y,z) – we will come back to this fundamental property later).

For a quadratic loss function, there is a particular interpretation of this problem, since \overline{y}=\underset{m\in\mathbb{R}}{\text{argmin}} \left\lbrace\frac{1}{n}\sum_{i=1}^n [y_i-m]^2\right\rbrace=\underset{m\in\mathbb{R}}{\text{argmin}} \left\lbrace \sum_{i=1}^n \ell_2(y_i,m)\right\rbrace where \ell_2 is the usual quadratic distance. If we assume – as we did in econometrics – that there is an underlying probabilistic model, and observe that \mathbb{E}(Y)=\underset{m\in\mathbb{R}}{\text{argmin}}\left\lbrace\mathbb{E}\left([Y-m]^2\right)\right\rbrace=\underset{m\in\mathbb{R}}{\text{argmin}}\left\lbrace\mathbb{E}\big[\ell_2(Y,m)\big]\right\rbrace it should be noted that what we are trying to obtain here, by solving problem (6) with the norm \ell_2, is an approximation (in a given functional space \mathcal{M}) of the conditional expectation x\mapsto\mathbb{E}[Y|\mathbf{X}=\mathbf{x}]. Another particularly interesting loss function is the loss \ell_1, \ell_1(y,m)=|y-m|. It should be recalled that \text{median}(\boldsymbol{y})=\underset{m\in\mathbb{R}}{\text{argmin}}\left\lbrace\sum_{i=1}^n\ell_1(y_i,m)\right\rbrace The optimization problem \widehat{m}^{\star}=\underset{m\in\mathcal{M}_0}{\text{argmin}}\left\lbrace\sum_{i=1}^n\vert y_i-m(\mathbf{x}_i)\vert\right\rbrace is obtained in econometrics by assuming that the conditional distribution of Y follows a Laplace distribution centered on m(\mathbf{x}), and by maximizing the (log-)likelihood: the sum of the absolute values of the errors corresponds to the log-likelihood of a Laplace distribution. It should also be noted that if the conditional distribution of Y is symmetric with respect to 0, the median and the mean coincide. If this loss function is rewritten \ell_1(y,m)=\vert (y-m)(1/2-\boldsymbol{1}_{y\leq m})\vert, a generalization can be obtained for \tau\in[0,1]: \widehat{m}^\star_\tau=\underset{m\in\mathcal{M}_0}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell_\tau^{q}(y_i,m(\mathbf{x}_i)) \right\rbrace where \ell_{\tau}^{q}(x,y)= (x-y)(\tau-\boldsymbol{1}_{x\leq y}) is then the quantile regression of level \tau (Koenker, 2003; d'Haultefœuille & Givord, 2014). Another loss function, introduced by Aigner et al (1977) and analysed in Waltrup et al (2014), is the function associated with the notion of expectiles, \ell^{e}_{\tau}(x,y)= (x-y)^2\cdot\big\vert\tau-\boldsymbol{1}_{x\leq y}\big\vert with \tau\in[0,1]. We see the parallel with the quantile loss \ell^{q}_{\tau}(x,y)= \vert x-y\vert \cdot\big\vert\tau-\boldsymbol{1}_{x\leq y}\big\vert Koenker & Machado (1999) and Yu & Moyeed (2001) also noted a link between this condition and the search for the maximum likelihood when the conditional distribution of Y follows an asymmetric Laplace distribution.
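These characterizations are easy to check numerically; a quick sketch (an illustration, with a simulated skewed sample so that the mean and the median differ),

set.seed(1)
y=rexp(1000)                                          # a skewed sample
optimize(function(m) sum((y-m)^2),range(y))$minimum   # ~ mean(y)
optimize(function(m) sum(abs(y-m)),range(y))$minimum  # ~ median(y)
tau=.9
optimize(function(m) sum((y-m)*(tau-(y<=m))),range(y))$minimum
quantile(y,.9)                                        # ~ the value above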

In connection with this approach, Gneiting (2011) introduced the notion of “elicitable statistics” – or “elicitable measure” in its probabilistic (or distributional) version: a statistic T will be said to be “elicitable” if there is a loss function \ell:\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}_+ such that T(Y)=\underset{x\in\mathbb{R}}{\text{argmin}}\left\lbrace\int_{\mathbb{R}} \ell(x,y)dF(y)\right\rbrace=\underset{x\in\mathbb{R}}{\text{argmin}}\left\lbrace\mathbb{E}\big[ \ell(x,Y)\big]\text{ where }Y\overset{\mathcal{L}}{\sim} F\right\rbrace The mean (mathematical expectation) is thus elicitable by the quadratic distance \ell_2, while the median is elicitable by the distance \ell_1. According to Gneiting (2011), this property is essential to obtain predictions and forecasts. There may then be a strong link between measures associated with probabilistic models and loss functions. Finally, Bayesian statistics provide a direct link between the form of the prior distribution and the loss function, as studied by Berger (1985) and Bernardo & Smith (2000). We will come back to the use of these different norms in the section on penalization.
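As a numerical illustration of elicitability (a sketch with a simulated Gaussian sample), minimizing the empirical expectile loss above, with \tau=1/2, recovers the mean, elicited by the quadratic loss,

set.seed(1)
y=rnorm(10000,1,2)
expectile_loss=function(m,tau) sum((y-m)^2*abs(tau-(y<=m)))
optimize(function(m) expectile_loss(m,.5),range(y))$minimum  # ~ mean(y)
optimize(function(m) expectile_loss(m,.9),range(y))$minimum  # the 90% expectile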

To be continued (keep in mind that references are online here)…

[1] Where the indicator \mathbf{1}_{\pm} does not take values 0 or 1 (like the classical \mathbf{1} function), but -1 and +1.

NSERC – Discovery Grants Program, over the past 5 years

In a previous post, I discussed how it was possible to scrape the NSERC website to get stats about discovery grants. Since we just got the new 2018 figures, I thought it would be a good opportunity to update my graphs,

library(XML)
library(stringr)
url="http://www.nserc-crsng.gc.ca/NSERC-CRSNG/FundingDecisions-DecisionsFinancement/ResearchGrants-SubventionsDeRecherche/ResultsGSC-ResultatsCSS_eng.asp"
download.file(url,destfile = "GSC.html")
tables=readHTMLTable("GSC.html")
GSC=tables[[1]]$V1
GSC=as.character(GSC[-(1:2)])
namesGSC=tables[[1]]$V2
namesGSC=as.character(namesGSC[-(1:2)])
Correction = function(x) as.numeric(gsub('[$,]', '', x))  # strip $ and commas
YEAR=2013:2018
for(i in 1:length(YEAR)){
y=YEAR[i]
# median, mean and percentiles of the amounts awarded by a given group
grants= function(gsc){
  url=paste("http://www.nserc-crsng.gc.ca/NSERC-CRSNG/FundingDecisions-DecisionsFinancement/ResearchGrants-SubventionsDeRecherche/ResultsGSCDetail-ResultatsCSSDetails_eng.asp?Year=",y,"&GSC=",gsc,sep="")
  download.file(url,destfile = "GSC.html")
  tables=readHTMLTable("GSC.html")
  X=as.character(tables[[1]]$"Awarded Amount")
  A=as.numeric(Vectorize(Correction)(X))
  return(c(median(A),mean(A),as.numeric(quantile(A,(1:99)/100))))
}
M=Vectorize(grants)(GSC[1:12])
# cumulative distribution functions (column 8: maths, 5: physics, 4: chemistry)
plot(M[3:101,8],(1:99)/100,type="s",xlim=c(0,130000),xlab=
paste("Annual Discovery Grant (CAN) - ",y,sep=""),ylab="")
lines(M[3:101,5],(1:99)/100,type="s",col="red")
lines(M[3:101,4],(1:99)/100,type="s",col="blue")
# vertical line at the lowest amount awarded in physics
abline(v=M[3,5],lty=2,col=rgb(1,0,0,.4))
# bold: proportion of researchers in maths below that amount
idx=which(M[3:101,8]<M[3,5])
lines(M[2+idx,8],(idx)/100,type="s",lwd=4)
legend("bottomright",c("maths","physics","chemistry"),
col=c("black","red","blue"),lty=1,bty="n")}

With those functions, I plot the cumulative distribution functions for three disciplines, namely maths, physics and chemistry. I added a vertical line for the lowest value in physics, and the bold line shows the proportion of researchers in maths who got less than the lowest amount in physics,

Hence, in 2013, 60% of the researchers in maths got less than any researcher in physics (and more than 90% in maths got less than any researcher in chemistry). Then, from 2014 to 2018, we get

It is rather constant: 50% of the researchers in mathematics in Canada get less than any researcher in physics, or in chemistry. I don’t understand why, but it’s interesting to observe that this is very stable…
