
On the poor performance of classifiers in insurance models

Each time we have a case study in my actuarial courses (with real data), students are surprised to have a hard time getting a “good” model, and they are always surprised to get a low AUC when trying to model the probability to claim a loss, to die, to commit fraud, etc. And each time, I keep saying, “yes, I know, and that’s what we expect, because there is a lot of ‘randomness’ in insurance”. To be more specific, I decided to run some simulations, and to compute AUCs, to see what’s going on. And because I don’t want to waste time fitting models, we will assume that we have, each time, a perfect model. So I want to show that the upper bound of the AUC is actually quite low ! So it’s not a modeling issue, it is a fundamental issue in insurance !

By ‘perfect model’ I mean the following : \Omega denotes the heterogeneity factor, because people are different. We would love to get \mathbb{P}[Y=1|\Omega]. Unfortunately, \Omega  is unobservable ! So we use covariates (like the age of the driver of the car in motor insurance, or of the policyholder in life insurance, etc). Thus, we have data (y_i,\boldsymbol{x}_i)‘s and we use them to train a model, in order to approximate \mathbb{P}[Y=1|\boldsymbol{X}]. And then, we check if our model is good (or not) using the ROC curve, obtained from confusion matrices, comparing y_i‘s and \widehat{y}_i‘s where \widehat{y}_i=1 when \mathbb{P}[Y_i=1|\boldsymbol{x}_i] exceeds a given threshold. Here, I will not try to construct models. I will predict \widehat{y}_i=1 each time the true underlying probability \mathbb{P}[Y_i=1|\omega_i] exceeds a threshold ! The point is that it’s possible to claim a loss (y=1) even if the probability is 3% (and most of the time \widehat{y}=0), and to not claim one (y=0) even if the probability is 97% (and most of the time \widehat{y}=1). That’s the idea with randomness, right ?

So, here p(\omega_1),\cdots,p(\omega_n) denote the probabilities to claim a loss, to die, to commit fraud, etc. There is heterogeneity here, and this heterogeneity can be small, or large. Consider the graph below, to illustrate,

In both cases, there is, on average, a 25% chance to claim a loss. But on the left, there is more heterogeneity, more dispersion. To illustrate, I used an arrow, which is a classical 90% interval : 90% of the individuals have a probability to claim a loss in that interval (here 10%-40%), 5% are below 10% (low risk), and 5% are above 40% (high risk). Later on, we will say that we have 25% on average, with a dispersion of 30% (40% minus 10%). On the right, it is rather 25% on average, with a dispersion of 15%. What I call dispersion is the difference between the 95% and the 5% quantiles.
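
To make that notion of dispersion more concrete, here is a small numerical sketch (my own illustration, using the same moment-matching parametrization as the code below) : a Beta distribution with a 25% mean, for which we compute the 90% interval, and its width,

m=.25
v=.01
a=m*(m*(1-m)/v-1)
b=(1-m)*(m*(1-m)/v-1)
qbeta(c(.05,.95),a,b)        # the 90% interval of individual probabilities
diff(qbeta(c(.05,.95),a,b))  # the "dispersion" (Q95 minus Q5)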

Consider now some dataset, with Bernoulli variables y, drawn with those probabilities p(\omega). Then, let us assume that we are able to get a perfect model : I do not estimate a model based on some covariates, here, I assume that I know perfectly the probability (which is true, because I did generate those data). More specifically, to generate a vector of probabilities, here I use a Beta distribution with a given mean, and a given variance (to capture the heterogeneity I mentioned above)

a=m*(m*(1-m)/v-1)
b=(1-m)*(m*(1-m)/v-1)
p=rbeta(n,a,b)

from those probabilities, I generate occurrences of claims, or deaths,

Y=rbinom(n,size = 1,prob = p)

Then, I compute the AUC of my “perfect” model,

auc.tmp=performance(prediction(p,Y),"auc")

And then, I will generate many samples, to compute the average value of the AUC. And actually, we can do that for many values of the mean and the variance of the Beta distribution. Here is the code

library(ROCR)
n=1000
ns=200
ab_beta = function(m,inter){
  a=uniroot(function(a) qbeta(.95,a,a/m-a)-qbeta(.05,a,a/m-a)-inter,
            interval=c(.0000001,1000000))$root
  b=a/m-a
  return(c(a,b))
}
Sim_AUC_mean_inter=function(m=.5,i=.05){
  V_auc=rep(NA,ns)
  b=-1
  essai = try(ab<-ab_beta(m,i),TRUE)
  if(inherits(essai,what="try-error")) a=-1
  if(!inherits(essai,what="try-error")){ a=ab[1]; b=ab[2] }
  if((a>=0)&(b>=0)){
    for(s in 1:ns){
      p=rbeta(n,a,b)
      Y=rbinom(n,size = 1,prob = p)
      auc.tmp=performance(prediction(p,Y),"auc")
      V_auc[s]=as.numeric(auc.tmp@y.values)}
    L=list(moy_beta=m,
           inter_beta=i,
           q05=qbeta(.05,a,b),
           q95=qbeta(.95,a,b),
           moy_AUC=mean(V_auc),
           sd_AUC=sd(V_auc),
           q05_AUC=quantile(V_auc,.05),
           q95_AUC=quantile(V_auc,.95))
    return(L)}
  if((a<0)|(b<0)){return(list(moy_AUC=NA))}}
Vm=seq(.025,.975,by=.025)
Vi=seq(.01,.5,by=.01)
V=outer(X = Vm,Y = Vi, Vectorize(function(x,y) 
Sim_AUC_mean_inter(x,y)$moy_AUC))
library("RColorBrewer")
image(Vm,Vi,V,
      xlab="Probability (Average)",
      ylab="Dispersion (Q95-Q5)",
      col=
        colorRampPalette(brewer.pal(n = 9, name = "YlGn"))(101))
contour(Vm,Vi,V,add=TRUE,lwd=2)

On the x-axis, we have the average probability to claim a loss. Of course, there is a symmetry here. And on the y-axis, we have the dispersion : the lower, the less heterogeneity in the portfolio. For instance, with a 30% chance to claim a loss on average, and 20% dispersion (meaning that in the portfolio, 90% of the insured have between 20% and 40% chance to claim a loss, or 15% and 35% chance), we have on average a 60% AUC. With a perfect model ! So with only a few covariates, having 55% should be great !

My point here is that with a low dispersion, we cannot expect to have a great AUC (again, even with a perfect model). In motor insurance, from my experience, 90% of the insured are between 3% chance and 20% chance to claim a loss ! That’s less than 20% dispersion ! and in that case, even if the (average) probability is rather small, it is very difficult to expect an AUC above 60% or 65% !

Random thoughts on econometric models with (pure) random features

For my lectures on applied linear models, I wanted to illustrate the fact that the R^2 is never a good measure of the goodness of the model, since it’s quite easy to improve it. Consider the following dataset

n=100
df=data.frame(matrix(rnorm(n*n),n,n))
names(df)=c("Y",paste("X",1:99,sep=""))

with one variable of interest y, and 99 features x_j. All of them being (by construction) independent. And we have 100 observations… Consider here the regression on the first k features, and compute R_k^2 of that regression

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  summary(model)$r.squared}

Let us see what’s going on…

plot(1:99,Vectorize(reg)(1:99))

(actually, it’s not exactly what we have on the graph…. we have the average obtained over 1,000 samples randomly generated, with 90% confidence bands). Observe that \mathbb{E}[R^2_k]=k/n, i.e. if we add some pure random noise, we keep increasing the R^2 (up to 1, actually).
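
As a quick sanity check of that k/n claim (this is my own sketch, with fewer replications than the 1,000 used for the graph), one can average the (non-adjusted) R^2 over a few simulated datasets,

r2_k = function(k,df){
  frm = paste("Y~",paste("X",1:k,collapse="+",sep=""))
  summary(lm(frm,data=df))$r.squared}
R2 = replicate(100, {
  df = data.frame(matrix(rnorm(n*n),n,n))
  names(df) = c("Y",paste("X",1:99,sep=""))
  sapply(c(10,25,50,75), r2_k, df=df)})
rowMeans(R2)   # should be close to k/n, i.e. 0.10, 0.25, 0.50, 0.75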

Good news, as we’ve seen in the course, the adjusted R^2 – denoted \bar{R}^2 – might help. Observe that \mathbb{E}[\bar{R}^2_k]=0, so, in some sense, adding features does not help here…

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  summary(model)$adj.r.squared}
plot(1:99,Vectorize(reg)(1:99))

We can actually do the same with the Akaike criterion AIC_k and the Schwarz (Bayesian) criterion BIC_k.

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  AIC(model)}
plot(1:99,Vectorize(reg)(1:99))

For the AIC, the initial increase makes sense : we should not prefer the model with 10 covariates, compared with nothing. The strange thing is the far right behavior : we prefer here 80 random noise features to none ! Which I find hard to interpret… For the BIC the code is simply

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  BIC(model)}
plot(1:99,Vectorize(reg)(1:99))

and here also, we have the same pattern, where we prefer a big model with just pure noise to nothing…

A last one to conclude (or not) : what about the leave-one-out cross validation mean squared error ? More precisely, CV=\frac{1}{n}\sum_{i=1}^n\widehat{\varepsilon}^2_{-i} where \widehat{\varepsilon}_{-i}=y_i-\widehat{y}_{-i}, and \widehat{y}_{-i} is the predicted value obtained with the model estimated when the ith observation is deleted. One can prove that \widehat{\beta}_{-i}=\widehat{\beta}-(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{x}_i\hat\varepsilon_i(1-H_{i,i})^{-1} where H is the classical hat matrix, thus \widehat{\varepsilon}_{-i}=(1-H_{i,i})^{-1}\hat\varepsilon_i, i.e. we do not have to estimate n models (one at each round)

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  h=lm.influence(model)$hat
  mean( (residuals(model)/(1-h))^2 )}
plot(1:99,Vectorize(reg)(1:99))

Here, it makes sense : adding noisy features yields overfit ! So the (leave-one-out) mean squared error is increasing !
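
As a small sanity check (my own addition) that the hat matrix shortcut does coincide with actually deleting an observation and refitting the model, on one example,

model = lm(Y~X1+X2,data=df)
h = lm.influence(model)$hat
(residuals(model)/(1-h))[1]                # shortcut, for the first observation
model_1 = lm(Y~X1+X2,data=df[-1,])
df$Y[1]-predict(model_1,newdata=df[1,])    # brute force leave-one-out

and both quantities should be equal.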

That’s all nice, but it might not be very realistic… Here, for my model with only one variable, I just pick one, at random…. In practice, we try to get the “best one”… So a more natural idea would be to order the variables according to their correlations with y,

df=data.frame(matrix(rnorm(n*n),n,n))
df=df[,rev(order(abs(cor(df)[1,])))]
names(df)=c("Y",paste("X",1:99,sep=""))

and as before, we can plot the evolution of R^2_k as a function of k the number of features considered,

which is increasing, with a higher slope at the beginning… For the \bar R^2_k we might actually prefer a correlated noise to nothing (which makes sense actually). So here since we somehow chose our variables, \bar R^2_k seems to be always positive…

For the AIC_k here also, there is an initial improvement, before coming back to the original situation (with about 80 features). And here also, we observe the drop on the far right part of the graph

The BIC_k might like the top three features, but soon, we have a deterioration…. even if here also, we have the drop at the far right (with more than 95 features… for 100 observations).

Finally, observe that here again, our (leave-one-out) cross-validation has not been misled by our noisy variables : it is always increasing !

So it seems that cross-validation techniques are more robust than the AIC and BIC (even if we mentioned in a previous post connections between all those concepts) when we have a lot of noisy (non-relevant) features.

NSERC – Discovery Grants Program, over the past 5 years

In a previous post, I discussed how it was possible to scrape the NSERC website to get stats about discovery grants. Since we just got the new 2018 figures, I thought it would be a good opportunity to update my graphs,

library(XML)
library(stringr)
url="http://www.nserc-crsng.gc.ca/NSERC-CRSNG/FundingDecisions-DecisionsFinancement/ResearchGrants-SubventionsDeRecherche/ResultsGSC-ResultatsCSS_eng.asp"
download.file(url,destfile = "GSC.html")
library(XML)
tables=readHTMLTable("GSC.html")
GSC=tables[[1]]$V1
GSC=as.character(GSC[-(1:2)])
namesGSC=tables[[1]]$V2
namesGSC=as.character(namesGSC[-(1:2)])
Correction = function(x) as.numeric(gsub('[$,]', '', x))
YEAR=2013:2018
for(i in 1:length(YEAR)){
y=YEAR[i]
grants= function(gsc){
  url=paste("http://www.nserc-crsng.gc.ca/NSERC-CRSNG/FundingDecisions-DecisionsFinancement/ResearchGrants-SubventionsDeRecherche/ResultsGSCDetail-ResultatsCSSDetails_eng.asp?Year=",y,"&GSC=",gsc,sep="")
  download.file(url,destfile = "GSC.html")
  library(XML)
  tables=readHTMLTable("GSC.html")
  X=as.character(tables[[1]]$"Awarded Amount")
  A=as.numeric(Vectorize(Correction)(X))
  return(c(median(A),mean(A),as.numeric(quantile(A,(1:99)/100))))
}
M=Vectorize(grants)(GSC[1:12])
plot(M[3:101,8],(1:99)/100,type="s",xlim=c(0,130000),xlab=
paste("Annual Discovery Grant (CAN) - ",y,sep=""),ylab="")
lines(M[3:101,5],(1:99)/100,type="s",col="red")
lines(M[3:101,4],(1:99)/100,type="s",col="blue")
abline(v=M[3,5],lty=2,col=rgb(1,0,0,.4))
idx=which(M[3:101,8]<M[3,5])
lines(M[2+idx,8],(idx)/100,type="s",lwd=4)
legend("bottomright",c("maths","physics","chemistry"),
col=c("black","red","blue"),lty=1,bty="n")}

With those functions, I plot the cumulative distribution functions for three disciplines, namely maths, physics and chemistry. I added a line for the lowest value in physics (the vertical line), and the bold line shows the proportion of researchers in maths who got less than the lowest amount in physics,

Hence, in 2013, 60% of the researchers in maths got less than any researcher in physics (and more than 90% in maths got less than any researcher in chemistry). Then, from 2014 to 2018, we get

It is rather constant : 50% of the researchers in mathematics in Canada get less than any researcher in physics, or in chemistry. I don’t understand why, but it’s interesting to observe that this is very stable…

The “probability to win” is hard to estimate…

Real-time computation (or estimation) of the “probability to win” is difficult. We’ve seen that in soccer games, in elections… but actually, as a professor, I see that frequently when I grade my students.

Consider a classical multiple choice exam. After each question, imagine that you try to compute the probability that the student will pass. Consider here the case where we have 50 questions. Students pass when they have 25 correct answers, or more. Just for simulations, I will assume that students just flip a coin at each question… I have n students, and 50 questions

set.seed(1)
n=10
M=matrix(sample(0:1,size=n*50,replace=TRUE),50,n)

Let X_{i,j} denote the score of student i at question j. Let S_{i,j} denote the cumulated score, i.e. S_{i,j}=X_{i,1}+\cdots+X_{i,j}. At step j, I can get some sort of prediction of the final score, using \hat{T}_{i,j}=50\times S_{i,j}/j. Here is the code

SM=apply(M,2,cumsum)
NB=SM*50/(1:50)

We can actually plot it

plot(NB[,1],type="s",ylim=c(0,50))
abline(h=25,col="blue")
for(i in 2:n) lines(NB[,i],type="s",col="light blue")
lines(NB[,3],type="s",col="red")


But that’s simply the prediction of the final score, at each step. That’s not the computation of the probability to pass !

Let’s try to see how we can do it… If after j questions, the student has 25 correct answers, the probability should be 1 – i.e. if S_{i,j}\geq 25 – since he cannot fail. Another simple case is the following : if after j questions, the number of points he can get with all correct answers until the end is not sufficient, he will fail. That means that if S_{i,j}+(50-j)< 25 the probability should be 0. Otherwise, computing the probability of success is quite straightforward. It is the probability to obtain at least 25-S_{i,j} correct answers, out of 50-j questions, when the probability of success is actually S_{i,j}/j. We recognize the survival probability of a binomial distribution. The code is then simply

PB=NB*NA
for(i in 1:50){
  for(j in 1:n){
    if(SM[i,j]>=25) PB[i,j]=1
    if(SM[i,j]+(50-i+1)<25)   PB[i,j]=0
    if((SM[i,j]<25)&(SM[i,j]+(50-i+1)>=25)) PB[i,j]=1-pbinom(25-SM[i,j],size=(50-i),prob=SM[i,j]/i)
  }}

So if we plot it, we get

plot(PB[,1],type="s",ylim=c(0,1))
abline(h=.5,col="red")
for(i in 2:n) lines(PB[,i],type="s",col="light blue")
lines(PB[,3],type="s",col="red")

which is much more volatile than the previous curves we obtained ! So yes, computing the “probability to win” is a complicated exercise ! Don’t blame those who try, it is actually hard to do !

Of course, things are slightly different if my students don’t flip a coin… this is what we obtain if half of the students are good (2/3 probability of getting a question correct) and half are not (1/3 chance),
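
Here is a small sketch of that simulation (my own code, not from the original post) : half of the columns of M are drawn with a 2/3 probability of success, and half with 1/3, and then the computations above (for NB and PB) can simply be re-run on that new matrix,

set.seed(1)
p_student = rep(c(2/3,1/3),each=n/2)
M = matrix(rbinom(n*50,size=1,prob=rep(p_student,each=50)),50,n)
SM = apply(M,2,cumsum)
NB = SM*50/(1:50)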

If we look at the probability to pass, we usually do not have to wait until the end (the 50 questions) to know who passed and who failed

PS : I guess a less volatile solution can be obtained with a Bayesian approach… if I find some spare time this week, I will try to code it…
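
In the meantime, here is a rough sketch of what such a Bayesian version could look like (this is my own attempt, not the author’s : with a Beta(1,1) prior on the per-question probability of success, the posterior after j questions with score S is a Beta(S+1,j-S+1), and the probability to pass is a beta-binomial survival probability),

prob_pass_bayes = function(S,j,a=1,b=1){
  if(S>=25) return(1)
  if(S+(50-j)<25) return(0)
  k = (25-S):(50-j)
  # beta-binomial : at least 25-S successes among the 50-j remaining questions
  sum(choose(50-j,k)*beta(k+S+a,(50-j)-k+j-S+b)/beta(S+a,j-S+b))}

which should be less volatile at the beginning of the exam, since the posterior mean (S+1)/(j+2) is shrunk towards 1/2.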

Solving the Chinese postman problem

Some pre-Halloween post today. It started actually while I was in Barcelona : the kids wanted to go back to some store we had seen the first day, in the gothic quarter, and I could not remember where it was. And I said to myself that it would take quite a long time to walk along all the streets of the neighborhood. And I discovered that it was actually an old problem. In 1962, Meigu Guan was interested in a postman delivering mail to a number of streets such that the total distance walked by the postman was as short as possible. How could the postman ensure that the distance walked was a minimum ?

A very close notion is the concept of traversable graph, which is one that can be drawn without taking a pen from the paper and without retracing the same edge. In such a case the graph is said to have an Eulerian trail (yes, from Euler’s bridges problem). An Eulerian trail uses all the edges of a graph. For a graph to be Eulerian all the vertices must be of even order.

An algorithm for finding an optimal Chinese postman route is:

  1. List all odd vertices.
  2. List all possible pairings of odd vertices.
  3. For each pairing find the edges that connect the vertices with the minimum weight.
  4. Find the pairings such that the sum of the weights is minimised.
  5. On the original graph add the edges that have been found in Step 4.
  6. The length of an optimal Chinese postman route is the sum of all the edges added to the total found in Step 4.
  7. A route corresponding to this minimum weight can then be easily found.

For the first steps, we can use the codes from Hurley & Oldford’s Eulerian tour algorithms for data visualization and the PairViz package. First, we have to load some R packages

require(igraph)
require(graph)
require(eulerian)
require(GA)

Then use the following function from stackoverflow,

make_eulerian = function(graph){
  info = c("broken" = FALSE, "Added" = 0, "Successfull" = TRUE)
  is.even = function(x){ x %% 2 == 0 }
  search.for.even.neighbor = !is.even(sum(!is.even(degree(graph))))
  for(i in V(graph)){
    set.j = NULL
    uneven.neighbors = !is.even(degree(graph, neighbors(graph,i)))
    if(!is.even(degree(graph,i))){
      if(sum(uneven.neighbors) == 0){
        if(sum(!is.even(degree(graph))) > 0){
          info["broken"] = TRUE
          uneven.candidates <- !is.even(degree(graph, V(graph)))
          if(sum(uneven.candidates) != 0){
            set.j <- V(graph)[uneven.candidates][[1]]
          }else{
            info["Successfull"] <- FALSE
          }
        }
      }else{
        set.j <- neighbors(graph, i)[uneven.neighbors][[1]]
      }
    }else if(search.for.even.neighbor == TRUE & is.null(set.j)){
      info["Added"] <- info["Added"] + 1
      set.j <- neighbors(graph, i)[ !uneven.neighbors ][[1]]
      if(!is.null(set.j)){search.for.even.neighbor <- FALSE}
    }
    if(!is.null(set.j)){
      if(i != set.j){
        graph <- add_edges(graph, edges=c(i, set.j))
        info["Added"] <- info["Added"] + 1
      }
    }
  }
  (list("graph" = graph, "info" = info))}

Then, consider some network, with 12 nodes

g1 = graph(c(1,2, 1,3, 2,4, 2,5, 1,5, 3,5, 
4,7, 5,7, 5,8, 3,6, 6,8, 6,9, 9,11, 8,11, 
8,10, 8,12, 7,10, 10,12, 11,12), directed = FALSE)
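
Just to make step 1 of the algorithm above explicit, here is a small sketch (my addition) listing the odd vertices of that network with igraph,

degree(g1)
which(degree(g1) %% 2 == 1)   # the odd vertices (step 1)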

To plot that network, use

V(g1)$name=LETTERS[1:12]
V(g1)$color=rgb(0,0,1,.4)
ly=layout.kamada.kawai(g1)
plot(g1,vertex.color=V(g1)$color,layout=ly)

Then we convert it to some traversable graph by adding 5 edges

eulerian = make_eulerian(g1)
eulerian$info
     broken       Added Successfull 
          0           5           1 
g = eulerian$graph

as shown below

ly=layout.kamada.kawai(g)
plot(g,vertex.color=V(g)$color,layout=ly)

We cut each of those 5 added edges in two, and therefore, we add 5 artificial nodes

A=as.matrix(as_adj(g))
A1=as.matrix(as_adj(g1))
newA=lower.tri(A, diag = FALSE)*A1+upper.tri(A, diag = FALSE)*A
for(i in 1:sum(newA==2)) newA = cbind(newA,0)
for(i in 1:sum(newA==2)) newA = rbind(newA,0)
s=nrow(A)
for(i in 1:nrow(A)){
  Aj=which(newA[i,]==2)
  if(!is.null(Aj)){
      for(j in Aj){
        newA[i,s+1]=newA[s+1,i]=1
        newA[j,s+1]=newA[s+1,j]=1
        newA[i,j]=1
        s=s+1
      }}}

We get the following graph, where all the nodes now have an even degree !

newg=graph_from_adjacency_matrix(newA)
newg=as.undirected(newg)
V(newg)$name=LETTERS[1:17]
V(newg)$color=c(rep(rgb(0,0,1,.4),12),rep(rgb(1,0,0,.4),5))
ly2=ly
transl=cbind(c(0,0,0,.2,0),c(.2,-.2,-.2,0,-.2))
for(i in 13:17){
  j=which(newA[i,]>0)
  lc=ly[j,]
  ly2=rbind(ly2,apply(lc,2,mean)+transl[i-12,])
}
plot(newg,layout=ly2)

Our network is now the following (new nodes are small because actually, they don’t really matter, it’s just for computational reasons)

plot(newg,vertex.color=V(newg)$color,layout=ly2,
     vertex.size=c(rep(20,12),rep(0,5)),
     vertex.label.cex=c(rep(1,12),rep(.1,5)))

Now we can get the optimal path

n <- LETTERS[1:nrow(newA)]
g_2 <- new("graphNEL",nodes=n)
for(i in 1:nrow(newA)){
  for(j in which(newA[i,]>0)){
    g_2 <- addEdge(n[i],n[j],g_2,1)
  }}
etour(g_2,weighted=FALSE)
 [1] "A" "B" "D" "G" "E" "A" "C" "E" "H" "F" "I" "K" "H" "J" "G" "P" "J" "L" "K" "Q" "L" "H" "O" "F" "C"
[26] "N" "E" "B" "M" "A"

or

edg=attr(E(newg), "vnames")
ET=etour(g_2,weighted=FALSE)
parcours=trajet=rep(NA,length(ET)-1)
for(i in 1:length(parcours)){
  u=c(ET[i],ET[i+1])
  ou=order(u)
  parcours[i]=paste(u[ou[1]],u[ou[2]],sep="|")
  trajet[i]=which(edg==parcours[i])
}
parcours
 [1] "A|B" "B|D" "D|G" "E|G" "A|E" "A|C" "C|E" "E|H" "F|H" "F|I" "I|K" "H|K" "H|J" "G|J" "G|P" "J|P"
[17] "J|L" "K|L" "K|Q" "L|Q" "H|L" "H|O" "F|O" "C|F" "C|N" "E|N" "B|E" "B|M" "A|M"
trajet
 [1]  1  3  8  9  4  2  6 10 11 12 16 15 14 13 26 27 18 19 28 29 17 25 24  7 22 23  5 21 20

Let us try now on a real network of streets. Like Missoula, Montana.

I will not try to get the shapefile of the city, I will just try to replicate the photography above.

If you look carefully, you will see some problem : nodes 10 and 93 have an odd degree (3 here), so one strategy is to connect them (which explains the grey line).

But actually, to be more realistic, we start in 93, and we end in 10. Here is the optimal (shortest) path which goes through all vertices.

Now, we are ready for Halloween, to go through all streets in the neighborhood !

Monte Carlo techniques to create counterfactuals

In the previous STT5100 course, last week, we’ve seen how to use Monte Carlo simulations. The idea is that we do observe in statistics a sample \{y_1,\cdots,y_n\}, and more generally, in econometrics \{(y_1,\mathbf{x}_1),\cdots,(y_n,\mathbf{x}_n)\}. But let’s get back to statistics (without covariates) to illustrate. We assume that observations y_i are realizations of an underlying random variable Y_i. We assume that the Y_i are i.i.d. random variables, with (unknown) distribution F_{\theta}. Consider here some estimator \widehat{\theta} – which is just a function of our sample, \widehat{\theta}=h(y_1,\cdots,y_n). So \widehat{\theta} is just a real number. Then, in mathematical statistics, in order to derive properties of the estimator \widehat{\theta}, like a confidence interval, we must define \widehat{\theta}=h(Y_1,\cdots,Y_n), so that now, \widehat{\theta} is a real-valued random variable. What is puzzling for students is that we use the same notation, and I have to agree, that’s not very clever. So now, \widehat{\theta} is a random variable.

There are two strategies here. In classical statistics, we use probability theorems to derive properties of \widehat{\theta} (the random variable) : at least the first two moments, but if possible the distribution. An alternative is to go for computational statistics. We have only one sample, \{y_1,\cdots,y_n\}, and that’s a pity. But maybe we can create another one, \{y_1^{(1)},\cdots,y_n^{(1)}\}, as realizations of F_{\theta}, and another one \{y_1^{(2)},\cdots,y_n^{(2)}\}, another one \{y_1^{(3)},\cdots,y_n^{(3)}\}, etc. From those counterfactuals, we can now get a collection of estimators, \widehat{\theta}^{(1)},\widehat{\theta}^{(2)}, \widehat{\theta}^{(3)}, etc. Instead of using mathematical tricks to calculate \mathbb{E}(\widehat{\theta}), we can simply compute \frac{1}{k}\sum_{s=1}^k\widehat{\theta}^{(s)}. That’s what we’ve seen last Friday.
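
As a minimal toy example of that idea (my own sketch, with an exponential sample, and the empirical mean as estimator),

set.seed(123)
n = 100
y = rexp(n,rate=1)                              # the (single) observed sample
theta_hat = mean(y)
theta_s = replicate(1000, mean(rexp(n,rate=1))) # counterfactual estimates
mean(theta_s)   # Monte Carlo approximation of E(theta_hat)
sd(theta_s)     # and of its standard deviation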

I did also mention briefly that looking at densities is lovely, but not very useful to assess goodness of fit, to test for normality, for instance. In this post, I just wanted to illustrate this point. And actually, creating counterfactuals can be a good way to see it. Consider here the height of male students,

Davis=read.table(
  "http://socserv.socsci.mcmaster.ca/jfox/Books/Applied-Regression-2E/datasets/Davis.txt")
Davis[12,c(2,3)]=Davis[12,c(3,2)]
X=Davis$height[Davis$sex=="M"]

We can visualize its distribution (density and cumulative distribution)

u=seq(155,205,by=.5)
par(mfrow=c(1,2))
hist(X,col=rgb(0,0,1,.3))
lines(density(X),col="blue",lwd=2)
lines(u,dnorm(u,178,6.5),col="black")
Xs=sort(X)
n=length(X)
p=(1:n)/(n+1)
plot(Xs,p,type="s",col="blue")
lines(u,pnorm(u,178,6.5),col="black")

Since it looks like a normal distribution, we can add the density of a Gaussian distribution on the left, and its cdf on the right. Why not test it properly ? To be a little bit more specific, I do not want to test if it’s a Gaussian distribution, but if it’s a \mathcal{N}(178,6.5^2). In order to see if this distribution is relevant, one can use Monte Carlo simulations to create counterfactuals

hist(X,col=rgb(0,0,1,.3))
lines(density(X),col="blue",lwd=2)
  Y=rnorm(n,178,6.5)
  hist(Y,col=rgb(1,0,0,.3))
  lines(density(Y),col="red",lwd=2)
Ys=sort(Y)
plot(Xs,p,type="s",col="white",lwd=2,axes=FALSE,xlab="",ylab="",xlim=c(155,205))
polygon(c(Xs,rev(Ys)),c(p,rev(p)),col="yellow",border=NA)
lines(Xs,p,type="s",col="blue",lwd=2)
lines(Ys,p,type="s",col="red",lwd=2)

We can see on the left that it is hard to assess normality from the density (histogram and also kernel based density estimator). One can hardly think of a valid distance between two densities. But if we look at the graph on the right, we can compare the empirical cumulative distribution function \widehat{F} obtained from \{y_1,\cdots,y_n\} (the blue curve), and some counterfactual, \widehat{F}^{(s)}, obtained from \{y_1^{(s)},\cdots,y_n^{(s)}\} generated from F_{\theta_0} – where \theta_0 is the value we want to test. We can then compute the yellow area, as in the Cramér-von Mises test, or the Kolmogorov-Smirnov distance.

d=rep(NA,1e5)
for(s in 1:1e5){
d[s]=ks.test(rnorm(n,178,6.5),"pnorm",178,6.5)$statistic
}
ds=density(d)
plot(ds,xlab="",ylab="")
dks=ks.test(X,"pnorm",178,6.5)$statistic
id=which(ds$x>dks)
polygon(c(ds$x[id],rev(ds$x[id])),c(ds$y[id],rep(0,length(id))),col=rgb(1,0,0,.4),border=NA)
abline(v=dks,col="red")

If we draw 100,000 counterfactual samples, we can visualize the distribution (here the density) of the distance used as a test statistic, \widehat{d}^{(1)}, \widehat{d}^{(2)}, etc, and compare it with the one observed on our sample, \widehat{d}. The proportion of samples where the test statistic exceeds the one observed

mean(d>dks)
[1] 0.78248

is the computational version of the p-value

ks.test(X,"pnorm",178,6.5)
 
	One-sample Kolmogorov-Smirnov test
 
data:  X
D = 0.068182, p-value = 0.8079
alternative hypothesis: two-sided
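
The same Monte Carlo exercise can be done with a Cramér-von Mises type distance (the yellow area mentioned above). This is a sketch on my side, with cvm_stat being my own helper function (not from the post),

cvm_stat = function(x,m=178,s=6.5){
  x = sort(x)
  n = length(x)
  sum((pnorm(x,m,s)-(2*(1:n)-1)/(2*n))^2)+1/(12*n)}
d2 = replicate(1e4, cvm_stat(rnorm(n,178,6.5)))
mean(d2 > cvm_stat(X))   # Monte Carlo p-value, with the Cramér-von Mises distance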

I thought about all that a couple of days ago, since I got invited for a panel discussion on “coding”, and why “coding” helped me as a professor. And this is precisely why I like coding : in statistics, you either manipulate abstract objects, like random variables, or you actually use some lines of code to create counterfactuals, and generate fake samples, to quantify uncertainty. The latter is interesting, because it helps to visualize complex quantities. I do not claim that maths is useless, but coding is really nice, as a starting point, to understand what we talk about (which can be very useful when there is a lot of confusion on notations).

October, grant proposal season

In 2012, Danielle Herbert, Adrian Barnett, Philip Clarke and Nicholas Graves published an article entitled “on the time spent preparing grant proposals: an observational study of Australian researchers“, whose conclusions had been included in Nature under a more explicit title, “Australia’s grant system wastes time” ! In this study, they included 3700 grant applications sent to the National Health and Medical Research Council, and showed that each application represented 37 working days: “Extrapolating this to all 3,727 submitted proposals gives an estimated 550 working years of researchers’ time (95% confidence interval, 513-589)“. But in these times when I have to write my funding application, I find that losing 37 days of work is huge. Because it’s become the norm! And somehow, it’s sad.

Forget about the crazy idea that I would rather, in fact, spend more time doing my research. In fact, the thought I had this morning was that it is rather sad that in the Faculty of Science, mathematicians are asked to spend a considerable amount of time, comparable to that required of physicists or chemists, for often smaller amounts of funding… And I thought it could be easily verified. We start by retrieving the discipline codes

url="http://www.nserc-crsng.gc.ca/NSERC-CRSNG/FundingDecisions-DecisionsFinancement/ResearchGrants-SubventionsDeRecherche/ResultsGSC-ResultatsCSS_eng.asp"
download.file(url,destfile = "GSC.html")
library(XML)
tables=readHTMLTable("GSC.html")
GSC=tables[[1]]$V1
GSC=as.character(GSC[-(1:2)])
namesGSC=tables[[1]]$V2
namesGSC=as.character(namesGSC[-(1:2)])

We’re going to need a small function, to remove the $ and other symbols that pollute the data (and prevent them from being treated as numbers)

library(stringr)
Correction = function(x) as.numeric(gsub('[$,]', '', x))

We will now read the 12 pages, and harvest (we will just take the 2017 data, but we could go back a few years before)

grants= function(gsc){
     url=paste("http://www.nserc-crsng.gc.ca/NSERC-CRSNG/FundingDecisions-DecisionsFinancement/ResearchGrants-SubventionsDeRecherche/ResultsGSCDetail-ResultatsCSSDetails_eng.asp?Year=2017&GSC=",gsc,sep="")
    download.file(url,destfile = "GSC.html")
    library(XML)
    tables=readHTMLTable("GSC.html")
    X=as.character(tables[[1]]$"Awarded Amount")
    A=as.numeric(Vectorize(Correction)(X))
return(c(median(A),mean(A),as.numeric(quantile(A,(1:99)/100))))
}
M=Vectorize(grants)(GSC[1:12])

The average amounts of individual grants can be compared,

barplot(M[2,])

In mathematics, the average grant amount is $24400. If we normalize by this quantity, we obtain

barplot(M[2,]/M[2,8])

In other words, the average amount of a (individual) grant in chemistry (to pay for students, conferences, etc.) is twice that in mathematics, 60% higher in physics than in maths…

We can also look at the median values (rather than the averages)

barplot(M[1,])

Here again, it is in mathematics that it is the lowest….

barplot(M[1,]/M[1,8])

in comparable proportions. If we think that the time spent writing should be proportional to the amount allocated, we should spend half as much time in math as in chemistry.

Cumulative distribution functions can also be plotted,

plot(M[3:101,8],(1:99)/100,type="s",xlim=range(M))
lines(M[3:101,5],(1:99)/100,type="s",col="red")
lines(M[3:101,4],(1:99)/100,type="s",col="blue")

with math in black, physics in red, and chemistry in blue. What is surprising is the bottom part: a “bad” researcher in chemistry or physics will earn more than the median researcher in mathematics…

Now that my intuition is confirmed, I have to go back, writing my proposal… and explain to my coauthors that I have to postpone some research projects because, well, you know…

Combining automatically factor levels in R

Each time we face real applications in an applied econometrics course, we have to deal with categorical variables. And the same question arises, from students : how can we automatically combine factor levels ? Is there a simple R function ?

I did upload a few blog posts, over the past years. But so far, nothing satisfying. Let me write down a few lines about what could be done. And if someone wants to write a nice R function, that would be awesome. To illustrate the idea, consider the following (simulated) dataset

n=200
set.seed(1)
x1=runif(n)
x2=runif(n)
y=1+2*x1-x2+rnorm(n,0,.2)
LB=sample(LETTERS[1:10])
b=data.frame(y=y,x1=x1,
             x2=cut(x2,breaks=
             c(-1,.05,.1,.2,.35,.4,.55,.65,.8,.9,2),
             labels=LB))
str(b)
'data.frame':	200 obs. of  3 variables:
 $ y : num  1.345 1.863 1.946 2.481 0.765 ...
 $ x1: num  0.266 0.372 0.573 0.908 0.202 ...
 $ x2: Factor w/ 10 levels "I","A","H","F",..: 4 4 6 4 3 6 7 3 4 8 ...
table(b$x2)[LETTERS[1:10]]
 
 A  B  C  D  E  F  G  H  I  J 
11 12 23 34 23 36 12 32  3 14

There is one (continuous) dependent variable y, one continuous covariate x_1 and one categorical variable x_2, with here ten levels. We can plot the data using

plot(b$x1,y,col="white",xlim=c(0,1.1))
text(b$x1,y,as.character(b$x2),cex=.5)

The output of a linear regression yields the following predictions

for(i in 1:10){
p=function(x) predict(lm(y~x1+x2,data=b),newdata=data.frame(x1=x,x2=LETTERS[i]))
u=seq(-1,1.065,by=.01)
v=Vectorize(p)(u)
lines(u,v)}

The slope for x_1 is the same for all levels, we simply add a different constant for each of them. As we can see, some levels are very very close, so it seems legitimate to combine them into one single category. Here is the output of the linear regression,

summary(lm(y~x1+x2,data=b))
Coefficients:
             Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.843802   0.119655   7.052 3.23e-11 ***
x1           1.992878   0.053838  37.016  < 2e-16 ***
x2A          0.055500   0.131173   0.423   0.6727    
x2H          0.009293   0.121626   0.076   0.9392    
x2F         -0.177002   0.121020  -1.463   0.1452    
x2B         -0.218152   0.130192  -1.676   0.0955 .  
x2D         -0.206970   0.121294  -1.706   0.0896 .  
x2G         -0.407417   0.129999  -3.134   0.0020 ** 
x2C         -0.526708   0.123690  -4.258 3.24e-05 ***
x2J         -0.664281   0.128126  -5.185 5.54e-07 ***
x2E         -0.816454   0.123625  -6.604 3.94e-10 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 0.2014 on 189 degrees of freedom
Multiple R-squared:  0.8995,	Adjusted R-squared:  0.8942 
F-statistic: 169.1 on 10 and 189 DF,  p-value: < 2.2e-16
AIC(lm(y~x1+x2,data=b))
[1] -60.74443
BIC(lm(y~x1+x2,data=b))
[1] -21.16463

Here the reference category is “I”. And it looks like we could actually combine that category with several others. One strategy here would be to select all categories that do not seem to be significantly different, and to run a (multiple) test

library(car)
linearHypothesis(lm(y~x1+x2,data=b), c("x2A = 0", "x2H = 0", "x2F = 0"))
 
Hypothesis:
x2A = 0
x2H = 0
x2F = 0
 
Model 1: restricted model
Model 2: y ~ x1 + x2
 
  Res.Df    RSS Df Sum of Sq      F Pr(>F)    
1    192 8.4651                               
2    189 7.6654  3   0.79971 6.5726  3e-04 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Given the output of the test, it seems that we should actually not combine those four categories all together (and indeed, in the final grouping obtained below, “F” ends up in a different group than “I”, “A” and “H”).

Here, we can see what’s going on when we change the reference category (actually, loop on all categories)

P=matrix(NA,nlevels(b$x2),nlevels(b$x2))
colnames(P)=rownames(P)=LETTERS[1:10]
plot(1:nlevels(b$x2),1:nlevels(b$x2),col="white",xlab="",ylab="",axes=F,xlim=c(0,10.5),
     ylim=c(0,10.5))
text(1:10,0,LETTERS[1:10])
text(0,1:10,LETTERS[1:10])
for(i in 1:nlevels(b$x2)){
#levels(b$x2)=LETTERS[1:10]
b$x2=relevel(b$x2,LETTERS[i])
p=summary(lm(y~x1+x2,data=b))$coefficients[-(1:2),4]
names(p)=substr(names(p),3,3)
P[LETTERS[i],names(p)]=p
p=P[LETTERS[i],]
idx=which(p>.05)
points(((1:10))[idx],rep(i,length(idx)),pch=1,cex=2)
idx=which(p>.1)
points(((1:10))[idx],rep(i,length(idx)),pch=19,cex=2)}

We are glad to see that it is symmetric : if “H” should be combined with “I”, “I” should also be combined with “H”.

Here black points are related with the 10% p-value, and white points the 5% p-value. This graph is actually hard to read… And actually, this reminds us of  Bertin (1967).

Here, we can manually predefine some ordering (we will see below how it might be automated)

LETTERSord=c("I","A","H","F","B","D","G","C","J","E")
P=matrix(NA,nlevels(b$x2),nlevels(b$x2))
colnames(P)=rownames(P)=LETTERSord
plot(1:nlevels(b$x2),1:nlevels(b$x2),col="white",xlab="",ylab="",axes=F,xlim=c(0,10.5),
     ylim=c(0,10.5))
ct=c(3,3,2,1,1)
abline(v=.5+c(0,cumsum(ct)),lty=2)
abline(h=.5+c(0,cumsum(ct)),lty=2)
text(1:10,0,LETTERSord)
text(0,1:10,LETTERSord)
for(i in 1:nlevels(b$x2)){
  #levels(b$x2)=LETTERS[1:10]
  b$x2=relevel(b$x2,LETTERSord[i])
  p=summary(lm(y~x1+x2,data=b))$coefficients[-(1:2),4]
  names(p)=substr(names(p),3,3)
  P[LETTERSord[i],names(p)]=p
  p=P[LETTERSord[i],]
  idx=which(p>.05)
  points(((1:10))[idx],rep(i,length(idx)),pch=1,cex=2)
  idx=which(p>.1)
  points(((1:10))[idx],rep(i,length(idx)),pch=19,cex=2)
}

Here we get the following

It looks like we have our combined categories…

Actually, it is possible to use another strategy. We start from some level, say “A”. Then, we merge it with all non-significantly different levels. If “B” is not one of them, we use it as the new reference. Etc.

for(i in 1:nlevels(b$x2)){
  if(LETTERS[i]%in%levels(b$x2)){
  b$x2=relevel(b$x2,LETTERS[i])
  p=summary(lm(y~x1+x2,data=b))$coefficients[-(1:2),4]
  names(p)=substr(names(p),3,nchar(names(p)))
  idx=which(p>.05)
  mix=c(LETTERS[i],names(p)[idx])
  b$x2=recode(b$x2, paste("c('",paste(mix,collapse = "','"),"')='",paste(mix,collapse = "+"),"'",sep=""))
}}

The final categories are

table(b$x2)
 
A+I+H B+D+F   C+G     E     J 
   46    82    35    23    14

with the following regression output

summary(lm(y~x1+x2,data=b))
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.86407    0.03950  21.877  < 2e-16 ***
x1           1.99180    0.05323  37.417  < 2e-16 ***
x2B+D+F     -0.21517    0.03699  -5.817 2.44e-08 ***
x2C+G       -0.50545    0.04528 -11.164  < 2e-16 ***
x2E         -0.83617    0.05128 -16.305  < 2e-16 ***
x2J         -0.68398    0.06131 -11.156  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 0.2008 on 194 degrees of freedom
Multiple R-squared:  0.8975,	Adjusted R-squared:  0.8948 
F-statistic: 339.6 on 5 and 194 DF,  p-value: &lt; 2.2e-16
AIC(lm(y~x1+x2,data=b))
[1] -66.76939
BIC(lm(y~x1+x2,data=b))
[1] -43.68117

Which is consistent with the group we got before. But actually, if we change the order, we can get different combinations. For instance, if we go from “J” to “A”, instead of “A” to “J”, we obtain

for(i in nlevels(b$x2):1){
  #levels(b$x2)=LETTERS[1:10]
  if(LETTERS[i]%in%levels(b$x2)){
  b$x2=relevel(b$x2,LETTERS[i])
  p=summary(lm(y~x1+x2,data=b))$coefficients[-(1:2),4]
  names(p)=substr(names(p),3,nchar(names(p)))
  idx=which(p>.05)
  mix=c(LETTERS[i],names(p)[idx])
  b$x2=recode(b$x2, paste("c('",paste(mix,collapse = "','"),"')='",paste(mix,collapse = "+"),"'",sep=""))
}}
table(b$x2)
 
          E         G+C I+A+B+D+F+H           J 
         23          35         128          14

with different information criteria here

AIC(lm(y~x1+x2,data=b))
[1] -36.61665
BIC(lm(y~x1+x2,data=b))
[1] -16.82675

I guess it would be necessary to randomize the order in which we go through the levels. Last, but not least, one can use regression trees (even if it is not per se in the syllabus of the course). The problem is that there is another explanatory variable that might interfere. So I would suggest (1) to fit a linear model y=\beta_0+\beta_1x_1+u_i, and to calculate the residuals, \widehat{u}_i (2) to run a regression tree, to explain \widehat{u}_i with the categorical variable x_2 (I did explain how trees are built when the explanatory variable is a categorical one in a previous post)

library(rpart)
library(rpart.plot)
b$e=residuals(lm(y~x1,data=b))
arbre=rpart(e~x2,data=b)
prp(arbre,type=2,extra=1)

Observe that the leaves correspond to the same groups as the ones we got before.

arbre
n= 200 
 
node), split, n, deviance, yval
      * denotes terminal node
 
1) root 200 22.563500  7.771561e-18  
  2) x2=G,C,J,E 72  4.441495 -3.232525e-01  
    4) x2=J,E 37  1.553520 -4.578492e-01 *
    5) x2=G,C 35  1.509068 -1.809646e-01 *
  3) x2=I,A,H,F,B,D 128  6.366628  1.818295e-01  
    6) x2=F,B,D 82  2.983381  1.048246e-01 *
    7) x2=I,A,H 46  2.030229  3.190993e-01 *

I guess that it should be possible to put all that in an R function, to suggest combinations of levels that might improve the regression.
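
Here is a rough sketch of what such a function could look like, for this specific model y~x1+x2 (the name merge_levels is mine, and as discussed above, the output will depend on the order in which the levels are visited),

merge_levels = function(data, alpha=.05){
  for(lev in LETTERS[1:10]){
    if(lev %in% levels(data$x2)){
      data$x2 = relevel(data$x2, lev)
      p = summary(lm(y~x1+x2, data=data))$coefficients[-(1:2),4]
      names(p) = substr(names(p),3,nchar(names(p)))
      mix = c(lev, names(p)[p>alpha])
      data$x2 = recode(data$x2, paste("c('",paste(mix,collapse="','"),"')='",
                                      paste(mix,collapse="+"),"'",sep=""))
    }}
  data$x2}

to be applied on a fresh copy of the original dataset, e.g. table(merge_levels(b)).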

Convex Regression Model

This morning during the lecture on nonlinear regression, I mentioned (very) briefly the case of convex regression. Since I forgot to mention the codes in R, I will publish them here. Assume that y_i=m(\mathbf{x}_i)+\varepsilon_i where m:\mathbb{R}^d\rightarrow \mathbb{R} is some convex function.

Then m is convex if and only if \forall\mathbf{x}_1,\mathbf{x}_2\in\mathbb{R}^d, \forall t\in[0,1], m(t\mathbf{x}_1+[1-t]\mathbf{x}_2) \leq tm(\mathbf{x}_1)+[1-t]m(\mathbf{x}_2). Hildreth (1954) proved that if m^\star=\underset{m \text{ convex}}{\text{argmin}}\left\lbrace\sum_{i=1}^n \big(y_i-m(\mathbf{x}_i)\big)^2\right\rbrace then \mathbf{\theta}^\star=(m^\star(\mathbf{x}_1),\cdots,m^\star(\mathbf{x}_n)) is unique.

Let \mathbf{y}=\mathbf{\theta}+\mathbf{\varepsilon}; then \mathbf{\theta}^\star=\underset{\mathbf{\theta}\in \mathcal{K}}{\text{argmin}}\left\lbrace\sum_{i=1}^n \big(y_i-\theta_i\big)^2\right\rbrace where \mathcal{K}=\{\mathbf{\theta}\in\mathbb{R}^n:\exists m\text{ convex },m(\mathbf{x}_i)=\theta_i\}. I.e. \mathbf{\theta}^\star is the projection of \mathbf{y} onto the (closed) convex cone \mathcal{K}. The projection theorem gives existence and uniqueness.

For convenience, in the application, we will consider the real-valued case, m:\mathbb{R}\rightarrow \mathbb{R}, i.e. y_i=m(x_i)+\varepsilon_i. Assume that observations are ordered x_1\leq x_2\leq\cdots \leq x_n. Here \mathcal{K}=\left\lbrace\mathbf{\theta}\in\mathbb{R}^n:\frac{\theta_2-\theta_1}{x_2-x_1}\leq \frac{\theta_3-\theta_2}{x_3-x_2}\leq \cdots \leq \frac{\theta_n-\theta_{n-1}}{x_n-x_{n-1}}\right\rbrace

Hence, we have a quadratic program with n-2 linear constraints.

m^\star is a piecewise linear function (interpolation of consecutive pairs (x_i,\theta_i^\star)).
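
Before using a dedicated package, here is a sketch of that quadratic program with the quadprog package (this is my own code, not the technique used below ; and since the slopes require distinct x values, I first average the dist values associated with tied speed values in the cars dataset),

library(quadprog)
d = aggregate(dist~speed, data=cars, FUN=mean)
x = d$speed
y = d$dist
n = length(x)
# constraint j : slope on [x_(j+1),x_(j+2)] minus slope on [x_j,x_(j+1)] must be nonnegative
A = matrix(0,n-2,n)
for(j in 1:(n-2)){
  A[j,j]   =  1/(x[j+1]-x[j])
  A[j,j+1] = -1/(x[j+1]-x[j])-1/(x[j+2]-x[j+1])
  A[j,j+2] =  1/(x[j+2]-x[j+1])}
# minimise |y-theta|^2, i.e. (1/2) theta'theta - y'theta, subject to A theta >= 0
opt = solve.QP(Dmat=diag(n), dvec=y, Amat=t(A), bvec=rep(0,n-2))
plot(cars)
lines(x, opt$solution, lwd=2, col="purple")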

If m is differentiable, m is convex if and only if m(\mathbf{x})+ \nabla m(\mathbf{x})^{\text{T}}\cdot[\mathbf{y}-\mathbf{x}] \leq m(\mathbf{y}) for all \mathbf{x},\mathbf{y}.

More generally, if m is convex, then there exists \xi_{\mathbf{x}}\in\mathbb{R}^n such that m(\mathbf{x})+ \xi_{\mathbf{x}}^{\text{ T}}\cdot[\mathbf{y}-\mathbf{x}] \leq m(\mathbf{y}) for all \mathbf{y}.
Such a \xi_{\mathbf{x}} is a subgradient of m at {\mathbf{x}}, and the set of all subgradients is the subdifferential \partial m(\mathbf{x})=\big\lbrace \xi\in\mathbb{R}^n: m(\mathbf{x})+ \xi^{\text{ T}}\cdot[\mathbf{y}-\mathbf{x}] \leq m(\mathbf{y}),\forall \mathbf{y}\in\mathbb{R}^n\big\rbrace

Hence, \mathbf{\theta}^\star is the solution of \text{argmin}\big\lbrace\|\mathbf{y}-\mathbf{\theta}\|^2\big\rbrace \text{ subject to } \theta_i+\xi_i^{\text{ T}}[\mathbf{x}_j-\mathbf{x}_i]\leq\theta_j,~\forall i,j, for some \xi_1,\cdots,\xi_n\in\mathbb{R}^n. Now, to do it for real, use the cobs package for constrained (b)splines regression,

library(cobs)

To get a convex regression, use

plot(cars)
x = cars$speed
y = cars$dist
rc = conreg(x,y,convex=TRUE)
lines(rc, col = 2)


Here we can get the values of the knots

rc
 
Call:  conreg(x = x, y = y, convex = TRUE) 
Convex regression: From 19 separated x-values, using 5 inner knots,
     7,    8,    9,   20,   23.
RSS =  1356; R^2 = 0.8766;
 needed (5,0) iterations

and actually, if we use them in a linear-spline regression, we get the same output here

library(splines)
reg = lm(dist~bs(speed,degree=1,knots=c(4,7,8,9,20,23,25)),data=cars)
u = seq(4,25,by=.1)
v = predict(reg,newdata=data.frame(speed=u))
lines(u,v,col="green")

Let us add vertical lines for the knots

abline(v=c(4,7,8,9,20,23,25),col="grey",lty=2)

Game of Friendship Paradox

In the introduction of my course next week, I will (briefly) mention networks, and I wanted to provide some illustration of the Friendship Paradox. In “Network of Thrones” (discussed in Beveridge and Shan (2016)), there is a dataset with the network of characters in Game of Thrones. The word “friend” might be abusive here, but let’s continue to call connected nodes “friends”. The friendship paradox states that

People on average have fewer friends than their friends

This was discussed in Feld (1991) for instance, or Zuckerman & Jost (2001). Let’s try to see what it means here. First, let us get a copy of the dataset

download.file("https://www.macalester.edu/~abeverid/data/stormofswords.csv","got.csv")
GoT=read.csv("got.csv")
library(networkD3)
simpleNetwork(GoT[,1:2])

Because it is difficult for me to incorporate some d3js script in the blog, I will illustrate with a more basic graph,

Consider a vertex v\in V in the undirected graph G=(V,E) (with classical graph notations), and let d(v) denote the number of edges touching it (i.e. v has d(v) friends). The average number of friends of a random person in the graph is \mu = \frac{1}{n_V}\sum_{v\in V} d(v)=\frac{2 n_E}{n_V} The average number of friends that a typical friend has is
\frac{1}{n_V}\sum_{v\in V} \left(\frac{1}{d(v)}\sum_{v'\in E_v} d(v')\right)But
\sum_{v\in V} \left(\frac{1}{d(v)}\sum_{v'\in E_v} d(v')\right)=\sum_{v,v' \in G} \left(\frac{d(v')}{d(v)}+\frac{d(v)}{d(v')}\right)=\sum_{v,v' \in G}\left(\frac{d(v')^2+d(v)^2}{d(v)d(v')}\right)=\sum_{v,v' \in G} \left(\frac{(d(v')-d(v))^2}{d(v)d(v')}+2\right){\color{red}{\succ}}\sum_{v,v' \in G} \left(2\right)=\sum_{v\in V} d(v)
Thus,\frac{1}{n_V}\sum_{v\in V} \left(\frac{1}{d(v)}\sum_{v'\in E_v} d(v')\right)\succ \frac{1}{n_V}\sum_{v\in V} d(v)
Note that this can be related to the variance decomposition \text{Var}[X]=\mathbb{E}[X^2]-\mathbb{E}[X]^2i.e.\frac{\mathbb{E}[X^2]}{\mathbb{E}[X]} =\mathbb{E}[X]+\frac{\text{Var}[X]}{\mathbb{E}[X]}\succ\mathbb{E}[X](Jensen inequality). But let us get back to our network. The list of nodes is

M=(rbind(as.matrix(GoT[,1:2]),as.matrix(GoT[,2:1])))
nodes=unique(M[,1])

and for each of them, we can get the list of friends, and the number of friends

friends = function(x) as.character(M[which(M[,1]==x),2])
nb_friends = Vectorize(function(x) length(friends(x)))

as well as the number of friends friends have, and the average number of friends

friends_of_friends = function(y) (Vectorize(function(x) length(friends(x)))(friends(y)))
nb_friends_of_friends = Vectorize(function(x) mean(friends_of_friends(x)))

We can look at the density of the number of friends, for a random node,

Nb  = nb_friends(nodes)
Nb2 = nb_friends_of_friends(nodes)
hist(Nb,breaks=0:40,col=rgb(1,0,0,.2),border="white",probability = TRUE)
hist(Nb2,breaks=0:40,col=rgb(0,0,1,.2),border="white",probability = TRUE,add=TRUE)
lines(density(Nb),col="red",lwd=2)
lines(density(Nb2),col="blue",lwd=2)


and we can also compute the averages, just to check

mean(Nb)
[1] 6.579439
mean(Nb2)
[1] 13.94243

So, indeed, people on average have fewer friends than their friends.
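
As a small sanity check (my addition) of the \mu=2n_E/n_V formula above, the average number of friends can also be computed directly from the number of edges and the number of nodes,

2*nrow(GoT)/length(nodes)   # 2 n_E / n_V, which should match mean(Nb)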

Parallelizing Linear Regression or Using Multiple Sources

My previous post explained how it was mathematically possible to parallelize computations to estimate the parameters of a linear regression. More specifically, we have a matrix \mathbf{X}, which is an n\times k matrix, and \mathbf{y}, an n-dimensional vector, and we want to compute \widehat{\mathbf{\beta}}=[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{X}^T\mathbf{y} by splitting the job. Instead of using the n observations, we’ve seen that it was possible to compute “something” using the first n_1 rows, then the next n_2 rows, etc. Then, finally, we “aggregate” the m objects created to get our overall estimate.

Parallelizing on multiple cores

Let us see how it works from a computational point of view, to run each computation on a different core of the machine. Each core will be a slave, computing what we’ve seen in the previous post. Here, the data we use are

y = cars$dist
X = data.frame(1,cars$speed)
k = ncol(X)

On my laptop, I have three cores, so we will split it in m=3 chunks

library(parallel)
library(pbapply)
ncl = detectCores()-1
cl = makeCluster(ncl)

This is more or less what we will do: we have our dataset, and we split the jobs,

We can then create lists containing elements that will be sent to each core, as Ewen suggested,

chunk = function(x,n) split(x, cut(seq_along(x), n, labels = FALSE))
a_parcourir = chunk(seq_len(nrow(X)), ncl)
for(i in 1:length(a_parcourir)) a_parcourir[[i]] = rep(i, length(a_parcourir[[i]]))
Xlist = split(X, unlist(a_parcourir))
ylist = split(y, unlist(a_parcourir))

It is also possible to simplify the QR functions we will use

compute_qr = function(x){
  list(Q=qr.Q(qr(as.matrix(x))),R=qr.R(qr(as.matrix(x))))
}
get_Vlist = function(j){
  Q3 = QR1[[j]]$Q %*% Q2list[[j]]
  t(Q3) %*% ylist[[j]]
}
clusterExport(cl, c("compute_qr", "get_Vlist"), envir=environment())

Then, we can run our functions on each core. The first one is

  QR1 = parLapply(cl=cl,Xlist, compute_qr)

note that it is also possible to use

  QR1 = pblapply(Xlist, compute_qr, cl=cl)

which will include a progress bar (that can be nice when the database is rather large). Then use

  library(magrittr) # needed for the %>% pipe
  R1 = pblapply(QR1, function(x) x$R, cl=cl) %>% do.call("rbind", .)
  Q1 = qr.Q(qr(as.matrix(R1)))
  R2 = qr.R(qr(as.matrix(R1)))
  Q2list = split.data.frame(Q1, rep(1:ncl, each=k))
  clusterExport(cl, c("QR1", "Q2list", "ylist"), envir=environment())
  Vlist = pblapply(1:length(QR1), get_Vlist, cl=cl)
  sumV = Reduce('+', Vlist)

and finally the output is

solve(R2) %*% sumV
         [,1]
X1 -17.579095
X2   3.932409

which is what we were expecting…

Using multiple sources

In practice, it might also happen that various “servers” have the data, but we cannot get a copy. But it is possible to run some functions on their server, and get some output, that we can use afterwards.

Datasets are supposed to be available somewhere. We can send a request, and get a matrix. Then we aggregate all of them, and send another request. That’s what we will do here. Provider j should run f_1(\mathbf{X}) on his part of the data; that function will return R^{(1)}_j. More precisely, to the first provider, send

function1 = function(subX){
return(qr.R(qr(as.matrix(subX))))}
R1 = function1(Xlist[[1]])

and actually, send that function to all providers, and aggregate the output

for(j in 2:m) R1 = rbind(R1,function1(Xlist[[j]]))

Then create, on your side, the following objects

Q1 = qr.Q(qr(as.matrix(R1)))
R2 = qr.R(qr(as.matrix(R1)))
Q2list=list()
for(j in 1:m) Q2list[[j]] = Q1[(j-1)*k+1:k,]

Finally, contact one last time the providers, and send one of your objects

function2=function(subX,suby,Q){
Q1=qr.Q(qr(as.matrix(subX)))
Q2=Q
return(t(Q1%*%Q2) %*% suby)}

Provider j should then run f_2(\mathbf{X},\mathbf{y},Q_j^{(2)}) on his part of the data, using also Q_j^{(2)} as argument (that we obtained on our side), and that function will return (\mathbf{Q}^{(1)}_j\mathbf{Q}^{(2)}_j)^{T}\mathbf{y}_j. For instance, ask the first provider to run

sumV = function2(Xlist[[1]],ylist[[1]], Q2list[[1]])

and do the same with all providers

for(j in 2:m) sumV = sumV+ function2(Xlist[[j]],ylist[[j]], Q2list[[j]])
solve(R2) %*% sumV
         [,1]
X1 -17.579095
X2   3.932409

which is what we were expecting…

Linear Regression, with Map-Reduce

Sometimes, with big data, matrices are too big to handle, and it is possible to use tricks to still do the computations numerically. Map-Reduce is one of those. With several cores, it is possible to split the problem, to map on each machine, and then to aggregate it back at the end.

Consider the case of the linear regression, \mathbf{y}=\mathbf{X}\mathbf{\beta}+\mathbf{\varepsilon} (with classical matrix notations). The OLS estimate of \mathbf{\beta} is \widehat{\mathbf{\beta}}=[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{X}^T\mathbf{y}. To illustrate, consider a not too big dataset, and run some regression.

lm(dist~speed,data=cars)$coefficients
(Intercept)       speed 
 -17.579095    3.932409
y=cars$dist
X=cbind(1,cars$speed)
solve(crossprod(X,X))%*%crossprod(X,y)
           [,1]
[1,] -17.579095
[2,]   3.932409

How is this computed in R? Actually, it is based on the QR decomposition of \mathbf{X}, \mathbf{X}=\mathbf{Q}\mathbf{R}, where \mathbf{Q} is an orthogonal matrix (ie \mathbf{Q}^T\mathbf{Q}=\mathbb{I}). Then \widehat{\mathbf{\beta}}=[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{X}^T\mathbf{y}=\mathbf{R}^{-1}\mathbf{Q}^T\mathbf{y}

solve(qr.R(qr(as.matrix(X)))) %*% t(qr.Q(qr(as.matrix(X)))) %*% y
           [,1]
[1,] -17.579095
[2,]   3.932409

So far, so good, we get the same output. Now, what if we want to parallelise computations. Actually, it is possible.

Consider m blocks

m = 5

and split vectors and matrices
\mathbf{y}=\left[\begin{matrix}\mathbf{y}_1\\\mathbf{y}_2\\\vdots \\\mathbf{y}_m\end{matrix}\right] and \mathbf{X}=\left[\begin{matrix}\mathbf{X}_1\\\mathbf{X}_2\\\vdots\\\mathbf{X}_m\end{matrix}\right]=\left[\begin{matrix}\mathbf{Q}_1^{(1)}\mathbf{R}_1^{(1)}\\\mathbf{Q}_2^{(1)}\mathbf{R}_2^{(1)}\\\vdots \\\mathbf{Q}_m^{(1)}\mathbf{R}_m^{(1)}\end{matrix}\right]
To split vectors and matrices, use (eg)

Xlist = list()
for(j in 1:m) Xlist[[j]] = X[(j-1)*10+1:10,]
ylist = list()
for(j in 1:m) ylist[[j]] = y[(j-1)*10+1:10]

and get a small QR decomposition (per subset)

QR1 = list()
for(j in 1:m) QR1[[j]] = list(Q=qr.Q(qr(as.matrix(Xlist[[j]]))),R=qr.R(qr(as.matrix(Xlist[[j]]))))

Consider the QR decomposition of \mathbf{R}^{(1)} which is the first step of the reduce part\mathbf{R}^{(1)}=\left[\begin{matrix}\mathbf{R}_1^{(1)}\\\mathbf{R}_2^{(1)}\\\vdots \\\mathbf{R}_m^{(1)}\end{matrix}\right]=\mathbf{Q}^{(2)}\mathbf{R}^{(2)}where\mathbf{Q}^{(2)}=\left[\begin{matrix}\mathbf{Q}^{(2)}_1\\\mathbf{Q}^{(2)}_2\\\vdots\\\mathbf{Q}^{(2)}_m\end{matrix}\right]

R1 = QR1[[1]]$R
for(j in 2:m) R1 = rbind(R1,QR1[[j]]$R)
Q1 = qr.Q(qr(as.matrix(R1)))
R2 = qr.R(qr(as.matrix(R1)))
Q2list=list()
for(j in 1:m) Q2list[[j]] = Q1[(j-1)*2+1:2,]

Define – as step 2 of the reduce part\mathbf{Q}^{(3)}_j=\mathbf{Q}^{(2)}_j\mathbf{Q}^{(1)}_j
and\mathbf{V}_j=\mathbf{Q}^{(3)T}_j\mathbf{y}_j

Q3list = list()
for(j in 1:m) Q3list[[j]] = QR1[[j]]$Q %*% Q2list[[j]]
Vlist = list()
for(j in 1:m) Vlist[[j]] = t(Q3list[[j]]) %*% ylist[[j]]

and finally set – as the step 3 of the reduce part\widehat{\mathbf{\beta}}=[\mathbf{R}^{(2)}]^{-1}\sum_{j=1}^m\mathbf{V}_j

sumV = Vlist[[1]]
for(j in 2:m) sumV = sumV+Vlist[[j]]
solve(R2) %*% sumV
           [,1]
[1,] -17.579095
[2,]   3.932409

It looks like we’ve been able to parallelise our linear regression…

Quantile Regression (home made)

After my series of post on classification algorithms, it’s time to get back to R codes, this time for quantile regression. Yes, I still want to get a better understanding of optimization routines, in R. Before looking at the quantile regression, let us compute the median, or the quantile, from a sample.

Median

Consider a sample \{y_1,\cdots,y_n\}. To compute the median, solve\min_\mu \left\lbrace\sum_{i=1}^n|y_i-\mu|\right\rbracewhich can be solved using linear programming techniques. More precisely, this problem is equivalent to\min_{\mu,\mathbf{a},\mathbf{b}}\left\lbrace\sum_{i=1}^na_i+b_i\right\rbracewith a_i,b_i\geq 0 and y_i-\mu=a_i-b_i, \forall i=1,\cdots,n.
To illustrate, consider a sample from a lognormal distribution,

n = 101 
set.seed(1)
y = rlnorm(n)
median(y)
[1] 1.077415

For the optimization problem, use the matrix form, with 3n constraints, and 2n+1 parameters,

library(lpSolve)
A1 = cbind(diag(2*n),0) 
A2 = cbind(diag(n), -diag(n), 1)
r = lp("min", c(rep(1,2*n),0),
rbind(A1, A2),c(rep(">=", 2*n), rep("=", n)), c(rep(0,2*n), y))
tail(r$solution,1) 
[1] 1.077415

It looks like it’s working well…

Quantile

Of course, we can adapt our previous code for quantiles

tau = .3
quantile(y,tau)
      30% 
0.6741586

The linear program is now\min_{\mu,\mathbf{a},\mathbf{b}}\left\lbrace\sum_{i=1}^n\tau a_i+(1-\tau)b_i\right\rbracewith a_i,b_i\geq 0 and y_i-\mu=a_i-b_i, \forall i=1,\cdots,n. The R code is now

A1 = cbind(diag(2*n),0) 
A2 = cbind(diag(n), -diag(n), 1)
r = lp("min", c(rep(tau,n),rep(1-tau,n),0),
rbind(A1, A2),c(rep(">=", 2*n), rep("=", n)), c(rep(0,2*n), y))
tail(r$solution,1) 
[1] 0.6741586

So far so good…

Quantile Regression (simple)

Consider the following dataset, with the rents of flats, in a major German city, as a function of the surface, the year of construction, etc.

base=read.table("http://freakonometrics.free.fr/rent98_00.txt",header=TRUE)

The linear program for the quantile regression is now\min_{\mu,\mathbf{a},\mathbf{b}}\left\lbrace\sum_{i=1}^n\tau a_i+(1-\tau)b_i\right\rbracewith a_i,b_i\geq 0 and y_i-[\beta_0^\tau+\beta_1^\tau x_i]=a_i-b_i\forall i=1,\cdots,n. So use here

require(lpSolve) 
tau = .3
n=nrow(base)
X = cbind( 1, base$area)
y = base$rent_euro
A1 = cbind(diag(2*n), 0,0) 
A2 = cbind(diag(n), -diag(n), X) 
r = lp("min",
       c(rep(tau,n), rep(1-tau,n),0,0), rbind(A1, A2),
       c(rep("&gt;=", 2*n), rep("=", n)), c(rep(0,2*n), y)) 
tail(r$solution,2)
[1] 148.946864   3.289674

Of course, we can use an R function to fit that model

library(quantreg)
rq(rent_euro~area, tau=tau, data=base)
Coefficients:
(Intercept)        area 
 148.946864    3.289674

Here again, it seems to work quite well. We can use a different probability level, of course, and get a plot

plot(base$area,base$rent_euro,xlab=expression(paste("surface (",m^2,")")),
     ylab="rent (euros/month)",col=rgb(0,0,1,.4),cex=.5)
sf=0:250
yr=r$solution[2*n+1]+r$solution[2*n+2]*sf
lines(sf,yr,lwd=2,col="blue")
tau = .9
r = lp("min",
       c(rep(tau,n), rep(1-tau,n),0,0), rbind(A1, A2),
       c(rep("&gt;=", 2*n), rep("=", n)), c(rep(0,2*n), y)) 
tail(r$solution,2)
[1] 121.815505   7.865536
yr=r$solution[2*n+1]+r$solution[2*n+2]*sf
lines(sf,yr,lwd=2,col="blue")

Quantile Regression (multiple)

Now that we understand how to run the optimization program with one covariate, why not try with two? For instance, let us see if we can explain the rent of a flat as a (linear) function of the surface and the year of construction of the building.

require(lpSolve) 
tau = .3
n=nrow(base)
X = cbind( 1, base$area, base$yearc )
y = base$rent_euro
A1 = cbind(diag(2*n), 0,0,0) 
A2 = cbind(diag(n), -diag(n), X) 
r = lp("min",
       c(rep(tau,n), rep(1-tau,n),0,0,0), rbind(A1, A2),
       c(rep("&gt;=", 2*n), rep("=", n)), c(rep(0,2*n), y)) 
tail(r$solution,3)
[1] 0.000000 3.257562 0.077501

Unfortunately, this time, it is not working well…

library(quantreg)
rq(rent_euro~area+yearc, tau=tau, data=base)
Coefficients:
 (Intercept)         area        yearc 
-5542.503252     3.978135     2.887234

Results are quite different. And actually, another technique confirms the latter (IRLS, Iteratively Reweighted Least Squares)

eps = residuals(lm(rent_euro~area+yearc, data=base))
for(s in 1:500){
  reg = lm(rent_euro~area+yearc, data=base, weights=(tau*(eps>0)+(1-tau)*(eps<0))/abs(eps))
  eps = residuals(reg)
}
reg$coefficients
 (Intercept)         area        yearc 
-5484.443043     3.955134     2.857943

I could not figure out what went wrong with the linear program. Not only are the coefficients very different, but so are the predictions…

yr = r$solution[2*n+1]+r$solution[2*n+2]*base$area+r$solution[2*n+3]*base$yearc
plot(predict(reg),yr)
abline(a=0,b=1,lty=2,col="red")


It’s now time to investigate….
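One direction worth checking (just a guess on my part, to be verified): lp() from lpSolve assumes that every decision variable is nonnegative, so a negative coefficient, like the intercept found by quantreg above, cannot be reached by our formulation; the first two examples may have worked only because their coefficients happened to be positive. A minimal sketch that makes the coefficients free, by writing each one as the difference of two nonnegative parts,

# sketch of a possible fix: write beta = beta_plus - beta_minus, both nonnegative,
# so that lp()'s implicit nonnegativity constraint no longer restricts the coefficients
require(lpSolve)
tau = .3
n = nrow(base)
X = cbind(1, base$area, base$yearc)
y = base$rent_euro
k = ncol(X)
A1 = cbind(diag(2*n), matrix(0, 2*n, 2*k))
A2 = cbind(diag(n), -diag(n), X, -X)
r = lp("min",
       c(rep(tau, n), rep(1-tau, n), rep(0, 2*k)), rbind(A1, A2),
       c(rep(">=", 2*n), rep("=", n)), c(rep(0, 2*n), y))
beta = tail(r$solution, 2*k)
beta[1:k] - beta[k + (1:k)]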

Discrete or continuous modeling ?

Tuesday, we had our conference “Insurance, Actuarial Science, Data & Models”, and Dylan Possamaï gave a very interesting concluding talk. In the introduction, he came back briefly on a nice discussion we usually have in economics, about the kind of model we should consider. It was about optimal control. In many applications, we start with a one-period economy, then a two-period economy, and pretend that we can extend it to an n-period economy. And then, the continuous case can also be considered. A few years ago, I was working on sports games, on optimal effort strategies (within a game, i.e. over a fixed time). It was a discrete model, and I was running simulations to get an efficient frontier, where coaches might say “ok, now we have enough (positive) difference, and we are getting closer to the end of the game, so we can ‘lower the effort’, i.e. top players can relax a little bit” (it was on basketball games). I asked a good friend of mine, Romuald, to help me with some technical parts of the proofs, but he did not like my discrete-time model that much, and wanted to move to continuous time. And for six years now, we have kept saying that someday we should get back to that paper…

My initial thought was that the difference was really “cultural”: you are either a continuous-time sort of guy, or a discrete-time one (or maybe neither of the two, but that’s another problem). He works with stochastic processes, I work with time series. Of course, we can find connections, but most of the time, the techniques are very different. And Tuesday, Dylan gave a very nice illustration that it is not necessarily a cultural difference, and that sometimes it is great to move to continuous time. So I wanted to illustrate that idea.

Consider for instance the following curve.

vu = seq(0,1,length=601)
vv = sin(vu*pi)
plot(vu,vv,type="l",lwd=2)

The goal is to find the value of the maximum, numerically. And here, there are two (very) different strategies

  • the discrete one: we see a (finite) collection of points – for instance, the graph above is a collection of 601 points (connected with a straight line) – and in that case, we need a standard algorithm (in O(n)) to get the value of the maximum
  • the continuous one: we see a function x\mapsto \sin(\pi x), and in that case, we use optimization routines

In the second case, use for instance

optim(0,function(x) -sin(pi*x))
$par
[1] 0.5
 
$value
[1] -1
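As a side note (just a quick sketch), base R also has optimize(), a routine dedicated to one-dimensional problems, which can maximize directly,

# one-dimensional alternative: optimize() handles maximization without flipping the sign
optimize(function(x) sin(pi*x), interval = c(0,1), maximum = TRUE)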

For the first case, we can use the standard R function max, and see how long it takes, using simulations, to get an approximation of the maximum

library(microbenchmark)
max_time = function(n) median(microbenchmark(max(sin(runif(n)*pi)))$time)
vn = 10^(seq(1,6,length=21))
vt = Vectorize(max_time)(vn)
plot(vn,vt/1e9,col="blue",pch=19,type="b",log="xy")

but of course, some home-made code can also be used

c_max = function(n=100){
  x = sin(runif(n)*pi)
  y = x[1]
  for(i in 2:length(x)) { 
    if(x[i] > y) { y = x[i] }}
  return(y)}
max_time = function(n) median(microbenchmark(c_max(n))$time)
vt2 = Vectorize(max_time)(vn)   # recompute the timings with the home-made function
lines(vn,vt2/1e9,type="b")

We can add that horizontal red line using

abline(h=median(microbenchmark(optim(.5,function(x) -sin(pi*x)))$time)/1e9,lty=2,col="red")

So, indeed, it looks like the computational time to find the maximum in a list of n elements is linear in n, i.e. O(n). And the built-in R function is faster than the home-made code. But also, interestingly, using continuous time (based on analysis techniques) can be much faster. So, sometimes, continuous-time models can be much easier to solve, from a numerical perspective.

Classification from scratch, boosting 11/8

Eleventh post of our series on classification from scratch. This should be the last one… unless I forgot something important. So today, we discuss boosting.

An econometrician perspective

I might start with a non-conventional introduction. But that’s actually how I understood what boosting was about. And I am quite sure it has to do with my background in econometrics.

The goal here is to solve something which looks like m^\star=\underset{m\in\mathcal{M}}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,m(\mathbf{x}_i))\right\rbrace for some loss function \ell, and for some set of predictors \mathcal{M}. This is an optimization problem. Well, the optimization is here in a function space, but still, it is simply an optimization problem. And from a numerical perspective, optimization is solved using gradient descent (this is why this technique is also called gradient boosting). And the gradient descent can be visualized like below

Again, the optimum is not some real value x^\star, but some function m^\star. Thus, here we will have something like m^{(k)}=m^{(k-1)}+\underset{h\in\mathcal{H}}{\text{argmin}}\left\lbrace \sum_{i=1}^n \ell(y_i,m^{(k-1)}(\mathbf{x}_i)+h(\mathbf{x}_i))\right\rbrace (as they write it in serious articles) where the term on the right can also be written m^{(k)}=m^{(k-1)}+\underset{h\in\mathcal{H}}{\text{argmin}}\left\lbrace \sum_{i=1}^n \ell(\underbrace{y_i-m^{(k-1)}(\mathbf{x}_i)}_{\varepsilon_{k,i}},h(\mathbf{x}_i))\right\rbrace I prefer the latter, because we see clearly that h is some model we fit on the remaining residuals.

We can rewrite it like that: define r_{i,k}=-\left.\frac{\partial \ell(y_i,m(\mathbf{x}_i))}{\partial m(\mathbf{x}_i)}\right\vert_{m(\mathbf{x}_i)=m^{(k-1)}(\mathbf{x}_i)} for all i=1,\cdots,n. The goal is to fit a model h^\star so that r_{i,k}=h^\star(\mathbf{x}_i), and when we have that optimal function, set m_k(\mathbf{x})=m_{k-1}(\mathbf{x})+\gamma_k h^\star(\mathbf{x}) (yes, we can include some shrinkage here).
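To connect the formulas with code: for the quadratic loss \ell(y,m)=(y-m)^2/2, the pseudo-residual r_{i,k} is simply y_i-m^{(k-1)}(\mathbf{x}_i), which is what we will use below. Here is a minimal sketch of the generic loop (the function name, the choice of rpart as weak learner and the fixed shrinkage gamma are illustrative choices, not a definitive implementation),

# minimal sketch of the boosting loop, with quadratic loss (so pseudo-residuals
# are just the ordinary residuals), rpart trees as weak learners, and a fixed
# shrinkage parameter gamma -- names and default values are illustrative only
library(rpart)
boost_sketch = function(x, y, K = 100, gamma = .1){
  m = rep(mean(y), length(y))      # m^(0): a constant model
  for(k in 1:K){
    r = y - m                      # pseudo-residuals for the quadratic loss
    h = rpart(r ~ x, data = data.frame(x = x, r = r))
    m = m + gamma * predict(h)     # m^(k) = m^(k-1) + gamma * h
  }
  m
}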

Two important comments here. First of all, the idea may seem weird to any econometrician. We first fit a model to explain y by some covariates \mathbf{x}. Then we consider the residuals \widehat{\varepsilon}, and try to explain them with the same covariates \mathbf{x}. If you try that with a linear regression, you’d be done at the end of step 1, since the residuals \widehat{\varepsilon} are orthogonal to the covariates \mathbf{x}: there is no way we can learn from them. Here it works because we consider simple nonlinear models. And actually, something that can be used is a shrinkage parameter. Do not consider \widehat{\varepsilon}=y-\widehat{m}(\mathbf{x}) but \widehat{\varepsilon}=y-\gamma\widehat{m}(\mathbf{x}). The idea of weak learners is extremely important here. The more we shrink, the longer it will take, but that’s not (too) important.

I should also mention that it’s nice to keep learning from our mistakes. But somehow, we should stop, someday. I said that I will not discuss this part in this series of posts, maybe later on. But heuristically, we should stop when we start to overfit. And this can be observed either using a training/validation split of the initial dataset, or using cross-validation. I will get back to that issue later on in this post, but again, those ideas should probably be dedicated to another series of posts.

Learning with splines

Just to make sure we get it, let’s try to learn with splines. Because standard splines have fixed knots, we do not really “learn” here (after a few iterations we get to what we would have obtained with a standard spline regression). So here, we will (somehow) optimize knot locations. There is a package to do so. And just to illustrate, we use a Gaussian regression here, not a classification (we will do that later on). Consider the following dataset (with only one covariate)

n=300
set.seed(1)
u=sort(runif(n)*2*pi)
y=sin(u)+rnorm(n)/4
df=data.frame(x=u,y=y)

For an optimal choice of knot locations, we can use

library(freeknotsplines)
xy.freekt=freelsgen(df$x, df$y, degree = 1, numknot = 2, 555)

With 5% shrinkage, the code is simply the following

v=.05
library(splines)
xy.freekt=freelsgen(df$x, df$y, degree = 1, numknot = 2, 555)
fit=lm(y~bs(x,degree=1,knots=xy.freekt@optknot),data=df)
yp=predict(fit,newdata=df)
df$yr=df$y - v*yp
YP=v*yp
for(t in 1:200){
  xy.freekt=freelsgen(df$x, df$yr, degree = 1, numknot = 2, 555)
  fit=lm(yr~bs(x,degree=1,knots=xy.freekt@optknot),data=df)
  yp=predict(fit,newdata=df)
  df$yr=df$yr - v*yp
  YP=cbind(YP,v*yp)}
nd=data.frame(x=seq(0,2*pi,by=.01))
viz=function(M){
  if(M==1)  y=YP[,1]
  if(M>1)   y=apply(YP[,1:M],1,sum)
  plot(df$x,df$y,ylab="",xlab="")
  lines(df$x,y,type="l",col="red",lwd=3)
  fit=lm(y~bs(x,degree=1,df=3),data=df)
  yp=predict(fit,newdata=nd)
  lines(nd$x,yp,type="l",col="blue",lwd=3)
  lines(nd$x,sin(nd$x),lty=2)}

To visualize the output after 100 iterations, use

viz(100)


Clearly, we see that we learn from the data here… Cool, isn’t it?

Learning with stumps (and trees)

Let us try something else. What if we consider, at each step, a regression tree instead of a piecewise-linear regression (as obtained with linear splines)?

library(rpart)
v=.1 
fit=rpart(y~x,data=df)
yp=predict(fit)
df$yr=df$y - v*yp
YP=v*yp
for(t in 1:100){
  fit=rpart(yr~x,data=df)
  yp=predict(fit,newdata=df)
  df$yr=df$yr - v*yp
  YP=cbind(YP,v*yp)}

Again, to visualise the learning process, use

viz=function(M){
y=apply(YP[,1:M],1,sum)
plot(df$x,df$y,ylab="",xlab="")
lines(df$x,y,type="s",col="red",lwd=3)
fit=rpart(y~x,data=df)
yp=predict(fit,newdata=nd)
lines(nd$x,yp,type="s",col="blue",lwd=3)
lines(nd$x,sin(nd$x),lty=2)}


This time, with those trees, it looks like not only do we have a good model, but also a different model from the one we would get using a single regression tree.

What if we change the shrinkage parameter?

viz=function(v=0.05){
  fit=rpart(y~x,data=df)
  yp=predict(fit)
  df$yr=df$y - v*yp
  YP=v*yp
  for(t in 1:100){
    fit=rpart(yr~x,data=df)
    yp=predict(fit,newdata=df)
    df$yr=df$yr - v*yp
    YP=cbind(YP,v*yp)}
  y=apply(YP,1,sum)
    plot(df$x,df$y,xlab="",ylab="")
    lines(df$x,y,type="s",col="red",lwd=3)
    fit=rpart(y~x,data=df)
    yp=predict(fit,newdata=nd)
    lines(nd$x,yp,type="s",col="blue",lwd=3)
    lines(nd$x,sin(nd$x),lty=2)}


There is clearly an impact of that shrinkage parameter. It has to be small to get a good model. This is the idea of using weak learners to get a good prediction.

Classification and Adaboost

Now that we understand how boosting works, let’s try to adapt it to classification. It will be more complicated because residuals are usually not very informative in a classification. And it will be hard to shrink. So let’s try something slightly different, to introduce the adaboost algorithm.

In our initial discussion, the goal was to minimize a convex loss function. Here, if we express classes as \{-1,+1\}, the loss function we consider is e^{-y\cdot m(\mathbf{x})} (this product y\cdot m(\mathbf{x}) was already discussed when we looked at the SVM algorithm). Note that the loss function related to the logistic model would be \log(1+e^{-y\cdot m(\mathbf{x})}).

What we do here is related to gradient descent (or the Newton algorithm). Previously, we were learning from our errors: at each iteration, the residuals were computed, and a (weak) model was fitted to these residuals. Then the contribution of this weak model was used in a gradient descent optimization process. Here things will be different, because (from my understanding) it is more difficult to play with residuals, since null residuals never exist in classification. So we will add weights. Initially, all the observations have the same weight. But iteratively, we will change them: we will increase the weights of the wrongly predicted individuals and decrease the ones of the correctly predicted individuals. Somehow, we want to focus more on the difficult predictions. That’s the trick. And I guess that’s why it performs so well. This algorithm is well described on Wikipedia, so we will use that description.

We start with \mathbf{\omega}_0=\mathbf{1}/n, then at each step fit a model (a classification tree) with weights \mathbf{\omega}_k (we did not discuss weights in the algorithms for trees, but it is actually straightforward in the formulas). Let \widehat{h}_{\mathbf{\omega}_k} denote that model (i.e. the probability in each leaf). Then consider the classifier 2~\mathbf{1}[\widehat{h}_{\mathbf{\omega}_k}(\cdot)>0.5]-1, which returns a value in \{-1,+1\}. Then set \varepsilon_k=\sum_{i\in\mathcal{I}_k}\omega_i where \mathcal{I}_k is the set of misclassified individuals, \mathcal{I}_k=\big\lbrace i:2~\mathbf{1}[\widehat{h}_{\mathbf{\omega}_k}(\mathbf{x}_i)>0.5]-1\neq y_i\big\rbrace. Then set \alpha_k = \frac{1}{2} \ln \left(\frac{1-\varepsilon_k}{\varepsilon_k}\right) and finally update the model using m_{k+1}=m_k+\alpha_k\widehat{h}_{\mathbf{\omega}_k} as well as the weights \omega_{i,k+1}=\omega_{i,k}\, e^{-y_i \alpha_k \widehat{h}_{\mathbf{\omega}_k}(\mathbf{x}_i)} (of course, divide by the sum to ensure that the weights sum to 1). And as previously, one can include some shrinkage. To visualize the convergence of the process, we will plot the total error on our dataset.

n_iter = 100
y = (myocarde[,"PRONO"]==1)*2-1
x = myocarde[,1:7]
error = rep(0,n_iter) 
f = rep(0,length(y)) 
w = rep(1,length(y))   # initial weights (uniform, normalised inside the loop)
alpha = 1
library(rpart)
for(i in 1:n_iter){
  w = exp(-alpha*y*f) *w 
  w = w/sum(w)
  rfit = rpart(y~., x, w, method="class")
  g = -1 + 2*(predict(rfit,x)[,2]>.5) 
  e = sum(w*(y*g<0))
  alpha = .5*log ( (1-e) / e )
  alpha = 0.1*alpha 
  f = f + alpha*g
  error[i] = mean(1*f*y<0)
}
plot(seq(1,n_iter),error,type="l",
     ylim=c(0,.25),col="blue",
     ylab="Error Rate",xlab="Iterations",lwd=2)


Here we face a classical problem in machine learning: we have a perfect model. With zero error. That is nice, but not interesting. It is also possible in econometrics, with polynomial fits: with 10 observations, and a polynomial of degree 9, we have a perfect fit. But a poor model. Here it is the same. So the trick is to split our dataset in two, a training dataset, and a validation one

set.seed(123)
id_train = sample(1:nrow(myocarde), size=45, replace=FALSE)
train_myocarde = myocarde[id_train,]
test_myocarde = myocarde[-id_train,]

We construct the model on the first one, and we check on the second one that it’s not that bad…

y_train = (train_myocarde[,"PRONO"]==1)*2-1
x_train =  train_myocarde[,1:7]
y_test = (test_myocarde[,"PRONO"]==1)*2-1
x_test = test_myocarde[,1:7]
train_error = rep(0,n_iter) 
test_error = rep(0,n_iter)
f_train = rep(0,length(y_train))
f_test = rep(0,length(y_test)) 
w_train = rep(1,length(y_train)) 
alpha = 1
for(i in 1:n_iter){
  w_train = w_train*exp(-alpha*y_train*f_train) 
  w_train = w_train/sum(w_train)
  rfit = rpart(y_train~., x_train, w_train, method="class")
  g_train = -1 + 2*(predict(rfit,x_train)[,2]>.5)
  g_test = -1 + 2*(predict(rfit,x_test)[,2]>.5)
  e_train = sum(w_train*(y_train*g_train<0))
  alpha = .5*log ( (1-e_train) / e_train )
  alpha = 0.1*alpha 
  f_train = f_train + alpha*g_train
  f_test = f_test + alpha*g_test
  train_error[i] = mean(1*f_train*y_train<0)
  test_error[i] = mean(1*f_test*y_test<0)}
plot(seq(1,n_iter),test_error,col='red')
lines(train_error,lwd=2,col='blue')


Here, as previously, after 80 iterations, we have a perfect model on the training dataset, but it behaves badly on the validation dataset. But with 20 iterations, it seems to be ok…
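To formalise that choice, we can simply pick the iteration that minimises the validation error (a small sketch, on the quantities computed above),

# pick the iteration minimising the validation error, and mark it on the plot
(k_star = which.min(test_error))
abline(v = k_star, lty = 2)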

R function

Of course, it’s possible to use R functions,

library(gbm)
gbmWithCrossValidation = gbm(PRONO ~ .,distribution = "bernoulli",
data = myocarde,n.trees = 2000,shrinkage = .01,cv.folds = 5,n.cores = 1)
bestTreeForPrediction = gbm.perf(gbmWithCrossValidation)

Here cross-validation is considered, rather than a simple training/validation split, but overall, the idea is the same… Of course, the output is much nicer (here the shrinkage is a very small parameter, so learning is extremely slow)
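And to actually use the fitted model, predictions with the number of trees selected by cross-validation can be obtained as follows (a quick sketch, on the training data, just to illustrate),

# predicted probabilities, using the number of trees selected by cross-validation
p_hat = predict(gbmWithCrossValidation, newdata = myocarde,
                n.trees = bestTreeForPrediction, type = "response")
head(p_hat)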