# Combining automatically factor levels with trees

Last year, in a post, I discussed how to automatically merge levels of factor variables using combinatorial techniques (it was for my STT5100 course, and trees are not in the syllabus), with an extension on trees at the end of the post.

Consider the following (simulated) dataset

n=200
set.seed(1)
x1=runif(n)
x2=runif(n)
y=1+2*x1-x2+rnorm(n,0,.2)
LB=sample(LETTERS[1:10])
b=data.frame(y=y,x1=x1,
x2=cut(x2,breaks=
c(-1,.05,.1,.2,.35,.4,.55,.65,.8,.9,2),
labels=LB))
str(b)
'data.frame':	200 obs. of  3 variables:
 $ y : num  1.345 1.863 1.946 2.481 0.765 ...
 $ x1: num  0.266 0.372 0.573 0.908 0.202 ...
 $ x2: Factor w/ 10 levels "I","A","H","F",..: 4 4 6 4 3 6 7 3 4 8 ...
table(b$x2)[LETTERS[1:10]]

A  B  C  D  E  F  G  H  I  J
11 12 23 34 23 36 12 32  3 14

Just by looking at the data (see the previous post), we could easily get the feeling that 10 levels was too much.

Following my post, Przemyslaw sent a comment suggesting to use

library(factorMerger)

It is indeed a nice package (unless you have really really big datasets with a lot of categories in your factor variables – as I experienced recently), and you can get great graphs

MF = mergeFactors(response = b$y, factor = b$x2,
family = "gaussian")
plot(MF)

Here it suggests creating three categories. Recall that with Student t-tests (changing the reference), we got
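Those Student t-tests are simply the ones from the linear model, refitted with each level of the factor taken as the reference; a minimal sketch (assuming the model with x1 and the categorical x2, which may differ from the exact specification of the previous post) would be

for(ref in LETTERS[1:10]){
b$x2 = relevel(b$x2, ref)
print(summary(lm(y~x1+x2, data=b))$coefficients)
}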

Another interesting package, by Piro Polo, is

library(tree.bins)

To use it, we simply call the following function, which automatically transforms our dataset: continuous variables remain unchanged, while (possibly) categories of the categorical variables are merged

b.bins = tree.bins(data=b, y=y)
str(b.bins)
Classes ‘data.table’ and 'data.frame':	200 obs. of  3 variables:
 $ y : num  1.345 1.863 1.946 2.481 0.765 ...
 $ x1: num  0.266 0.372 0.573 0.908 0.202 ...
 $ x2: chr  "Group.4" "Group.4" "Group.4" "Group.4" ...
 - attr(*, ".internal.selfref")=<externalptr>
table(b.bins$x2)

Group.1 Group.2 Group.3 Group.4
23      35      26     116

Here, there are four groups. To get the correspondence, use

tree.bins(data=b, y=y, return = "lkup.list")
[[1]]
x2 Categories
1   E    Group.1
2   G    Group.2
3   C    Group.2
4   B    Group.3
5   J    Group.3
6   I    Group.4
7   A    Group.4
8   H    Group.4
9   F    Group.4
10  D    Group.4

(we have a list with one element, a data frame, since there is only one factor variable). Cool, isn't it? I miss Przemyslaw's plot, but this is rather quick, and efficient.
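If we want to use those merged categories in a regression, one option is to map the lookup table back onto the original factor; a minimal sketch (base R only, reusing the objects above, the column names being the ones printed in the lookup table) could be

lkup = tree.bins(data=b, y=y, return = "lkup.list")[[1]]
b$x2.merged = factor(lkup$Categories[match(b$x2, lkup$x2)])
AIC(lm(y~x1+x2,data=b))         # model with the 10 original levels
AIC(lm(y~x1+x2.merged,data=b))  # model with the 4 merged groups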

# On leverage

Last week, in our STT5100 (applied linear models) class, I introduced the hat matrix, and the notion of leverage. In a classical regression model, $\boldsymbol{y}=\boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{\varepsilon}$ (in matrix form), the ordinary least squares estimator of the parameter $\boldsymbol{\beta}$ is $$\widehat{\boldsymbol{\beta}}=(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top\boldsymbol{y}$$The prediction can then be written$$\widehat{\boldsymbol{y}}=\boldsymbol{X}\widehat{\boldsymbol{\beta}}=\underbrace{\color{blue}{\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top}}_{\color{blue}{\boldsymbol{H}}}\boldsymbol{y}$$where $\color{blue}{\boldsymbol{H}}$ is called the hat matrix.

The matrix is idempotent, i.e. $$\boldsymbol{H}^2={\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\textcolor{grey}{\boldsymbol{X}^\top{\boldsymbol{X}}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}}\boldsymbol{X}^\top}={\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top}=\boldsymbol{H}$$so it can be interpreted as a projection matrix. Furthermore, since $\boldsymbol{H}\boldsymbol{X}=\boldsymbol{X}$ (just do the maths), the projection is on a subspace that contains all the linear combinations of columns of $\boldsymbol{X}$. One can also observe that $\mathbb{I}-\boldsymbol{H}$ is a projection matrix. And we can write$$\boldsymbol{y}=\underbrace{\boldsymbol{H}\boldsymbol{y}}_{\widehat{\boldsymbol{y}}}+\underbrace{(\mathbb{I}-\boldsymbol{H})\boldsymbol{y}}_{\widehat{\boldsymbol{\varepsilon}}}$$where $\widehat{\boldsymbol{y}}$ is the orthogonal projection of $\boldsymbol{y}$ on the (linear) space of linear combinations of columns of $\boldsymbol{X}$, and $\widehat{\boldsymbol{y}}\perp\widehat{\boldsymbol{\varepsilon}}$, which gives the classical interpretation of residuals, being unpredictable (at least with a linear model using variables $\boldsymbol{X}$).
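Those properties are easy to check numerically; here is a minimal sketch (the design matrix below is arbitrary, just for the illustration),

set.seed(1)
X = cbind(1, runif(20), runif(20))    # some design matrix, with an intercept
H = X %*% solve(t(X) %*% X) %*% t(X)  # the hat matrix
all.equal(H %*% H, H)                 # idempotent, H^2 = H
all.equal(H %*% X, X)                 # H projects onto the column space of X
sum(diag(H))                          # trace of H is p (here, 3 columns)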

Let’s move a bit faster now (we’ve seen many other properties last week), and consider elements on the diagonal of matrix $\boldsymbol{H}$. Recall that we have$$\widehat{\boldsymbol{y}}_i=\boldsymbol{H}_{i,i}\boldsymbol{y}_i+\sum_{j\neq i}\boldsymbol{H}_{i,j}\boldsymbol{y}_j$$so entry $\boldsymbol{H}_{i,i}$ is a measure of the influence of entry $\boldsymbol{y}_i$ on its own prediction $\widehat{\boldsymbol{y}}_i$.

We have seen that$$\sum_{i=1}^n\boldsymbol{H}_{i,i}=\text{trace}(\boldsymbol{H})=\text{trace}(\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top)$$which can be written$$\sum_{i=1}^n\boldsymbol{H}_{i,i}=\text{trace}(\boldsymbol{X}^\top\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1})=\text{trace}(\mathbb{I}_p)=p$$where classically $p=k+1$, $k$ being the number of explanatory variables. Further, since $\boldsymbol{H}$ is idempotent, we can write (from $\boldsymbol{H}=\boldsymbol{H}^2$) that$$\boldsymbol{H}_{i,i}=\boldsymbol{H}_{i,i}^2 + \sum_{j\neq i}\boldsymbol{H}_{i,j}\boldsymbol{H}_{j,i}=\boldsymbol{H}_{i,i}^2 + \sum_{j\neq i}\boldsymbol{H}_{i,j}^2$$On the one hand, since the second term is non-negative, $\boldsymbol{H}_{i,i}\geq\boldsymbol{H}_{i,i}^2$, which implies $\boldsymbol{H}_{i,i}\in[0,1]$. And there was a question in the course on the sharpness of those bounds.

Using Anscombe’s dataset, we’ve seen that it was possible to get a leverage of 1. Using something rather similar

df = data.frame(x = c(rep(1,10),6), y = c(1:10,8))
plot(df)

we obtain

model = lm(y~x,data=df)
abline(model,col="red",lwd=2)
H = lm.influence(model)$hat
plot(1:11,H,type="h")

The very last observation, the one on the right, is extremely influential here: if we remove it, the model is completely different! And here, we reach the upper bound, $\boldsymbol{H}_{11,11}=1$. Observe that all other points are equally influential and, because of the constraint on the trace of the matrix, $\boldsymbol{H}_{i,i}=1/10$ when $i\in\{1,2,\cdots,10\}$.

Now, what about the lower bound? In order to have some sort of “non-influential” observations, consider the two following cases:

• the case where one observation (below the first one) is such that $\widehat{\boldsymbol{y}}_{i}=\boldsymbol{y}_{i}$ (perfect prediction)
• the case where one observation (below the tenth one) is such that $\boldsymbol{x}_{i}=\overline{\boldsymbol{x}}$ and $\boldsymbol{y}_{i}=\overline{\boldsymbol{y}}$ (from the first order condition – or normal equation – the fitted regression line always goes through the point $(\overline{\boldsymbol{x}},\overline{\boldsymbol{y}})$)

Let us move two observations from our dataset,

mean(c(4,rep(1,8),6))
[1] 1.8
df = data.frame(x = c(4,rep(1,8),6,1.8),
  y = c(predict(model,newdata=data.frame(x=4)),
  2:9,8,
  predict(model,newdata=data.frame(x=1.8))))

We now have the following. If we compute the leverages, we obtain

model = lm(y~x,data=df)
H = lm.influence(model)$hat
plot(1:11,H,type="h")

So, for the first observation, the leverage actually increased (the blue part), and for the tenth one, we have the lowest influence, but it is not zero. Is it possible to reach zero?

Here, observe that for the tenth observation, $\boldsymbol{H}_{i,i}=1/n$. And actually, that’s the best we can do… We can prove that, in the case of a simple regression (as above)$$\boldsymbol{H}_{i,i}=\frac{1}{n}+\frac{(x_i-\overline{x})^2}{n\text{Var}(x)}$$which is minimum when $x_i=\overline{x}$, and then $\boldsymbol{H}_{i,i}=1/n$, otherwise $\boldsymbol{H}_{i,i}>1/n$. And this property is also valid in a multiple regression (as soon as an intercept is included in the regression – which should always be the case). To prove that result, let $\tilde{\boldsymbol{X}}$ denote the matrix of centered variables $\boldsymbol{X}$, then we can prove that $$\boldsymbol{H}_{i,i}=\frac{1}{n}+\big[\tilde{\boldsymbol{X}}(\tilde{\boldsymbol{X}}^\top\tilde{\boldsymbol{X}})^{-1}\tilde{\boldsymbol{X}}^\top\big]_{i,i}$$(which is basically a matrix version of the previous equation).
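That closed-form expression is also easy to check numerically, for instance on the dataset used above (a small sketch, with hatvalues() instead of lm.influence()),

df = data.frame(x = c(rep(1,10),6), y = c(1:10,8))
model = lm(y~x,data=df)
n = nrow(df)
h.closed.form = 1/n + (df$x-mean(df$x))^2/(n*mean((df$x-mean(df$x))^2))
all.equal(as.numeric(hatvalues(model)), h.closed.form)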

I can maybe add another comment on Anscombe’s data. We’ve seen that, on the right, we did reach 1, but I did not prove it. One way to prove it is actually to focus on the remaining $n-1$ points, on the left. Those all have the same $x$ value. We can prove that if $\boldsymbol{X}_{i_1}=\boldsymbol{X}_{i_2}$, then $$\boldsymbol{H}_{i_1,i_2}=\boldsymbol{X}_{i_1}^\top(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}_{i_2}=\boldsymbol{H}_{i_1,i_1}$$hence, using the relationship obtained since the hat matrix is idempotent,$$\boldsymbol{H}_{i_1,i_1}=2\boldsymbol{H}_{i_1,i_1}^2+\sum_{j\notin\{i_1,i_2\}}\boldsymbol{H}_{i_1,j}^2$$thus, we now have$$\boldsymbol{H}_{i_1,i_1}\big(1-2\boldsymbol{H}_{i_1,i_1}\big)\geq 0$$i.e. $\boldsymbol{H}_{i_1,i_1}\in[0,1/2]$, and with $k$ “duplicates” the upper bound becomes $1/k$. So for the $n-1$ $\boldsymbol{H}_{i,i}$‘s on the left, the values are below $1/(n-1)$, and they sum to at most $1$; since the trace has to be $p=2$ and the last $\boldsymbol{H}_{i,i}$ is at most $1$, it has to be exactly $1$. So we have the value of all $n$ $\boldsymbol{H}_{i,i}$‘s.

# Insurance data science : Networks

At the Summer School of the Swiss Association of Actuaries, in Lausanne, I will start talking about networks and insurance this Friday. Slides are available online

# Insurance data science : Text

At the Summer School of the Swiss Association of Actuaries, in Lausanne, I will start talking about text based data and NLP this Thursday. Slides are available online

Ewen Gallic (AMSE) will present a tutorial on tweets. I can upload a few additional slides on LSTM (recurrent neural nets)

# Insurance data science : Pictures

At the Summer School of the Swiss Association of Actuaries, in Lausanne, following the part of Jean-Philippe Boucher (UQAM) on telematic data, I will start talking about pictures this Wednesday. Slides are available online

Ewen Gallic (AMSE) will present a tutorial on satellite pictures, and a simple classification problem, related to Alzheimer's detection.

We will try to identify what is on the following pictures, starting with the car

(we will see that the car is indeed identified)

a skier,

and a fire,

We will also discuss previous pictures from the summer school

# Insurance data science : use and value of unusual data #1

Next week, I will be at the Summer School of the Swiss Association of Actuaries, in Lausanne, with Jean-Philippe Boucher (UQAM) and Ewen Gallic (AMSE).

I will give an introductory talk on Monday morning, and the slides are now available

There will be some hands-on applications, in R. I will share some code in the slides.

# Optimal transport on large networks

With Alfred Galichon and Lucas Vernet, we recently uploaded a paper entitled Optimal transport on large networks on ArXiv.

This article presents a set of tools for the modeling of a spatial allocation problem in a large geographic market, and gives examples of applications. In our setting, the market is described by a network that maps the cost of travel between each pair of adjacent locations. Two types of agents are located at the nodes of this network. The buyers choose the most competitive sellers depending on their prices and the cost to reach them. Their utility is assumed additive in both of these quantities. Each seller, taking other sellers' prices as given, sets her own price to have a demand equal to the one we observe. We give a linear programming formulation for the equilibrium conditions. After formally introducing our model, we apply it to two examples: prices offered by petrol stations and the quality of services provided by maternity wards (only the latter is described here, for privacy reasons). These examples illustrate the applicability of our model to aggregate demand, rank prices and estimate cost structure over the network. We insist on the possibility of applications to large-scale datasets, using modern linear programming solvers such as Gurobi.

Demand for gas in gas stations in Brittany, and demand for maternity in France (with border correction)

In addition to this paper, we released an R toolbox to implement our results, and an online tutorial, optimalnetwork.github.io.
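The equilibrium model itself is in the paper and in the toolbox; just to give the flavour of a linear programming formulation on a network, here is a toy sketch (a plain minimum-cost transportation problem, not the equilibrium model of the paper), using the lpSolve package,

library(lpSolve)
cost = matrix(c(4, 6, 9, 5,
                7, 3, 8, 6,
                9, 7, 2, 4), nrow=3, byrow=TRUE)  # travel costs, 3 sellers x 4 buyers
supply = c(30, 25, 45)        # capacity of each seller
demand = c(20, 25, 30, 25)    # demand observed at each buyer
sol = lp.transport(cost, "min",
                   row.signs=rep("<=",3), row.rhs=supply,
                   col.signs=rep("==",4), col.rhs=demand)
sol$solution   # optimal allocation of demand to sellers
sol$objval     # total transportation cost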

# Estimates on training vs. validation samples

Before moving to cross-validation, it was natural to say “I will burn 50% (say) of my data to train a model, and then use the remaining 50% to fit the model”. For instance, we can use the training data for variable selection (e.g. using some stepwise procedure in a logistic regression), and then, once variables have been selected, fit the model on the remaining set of observations. A natural question is usually “does it really matter?”.

In order to visualize this problem, consider my (simple) dataset

MYOCARDE=read.table( "http://freakonometrics.free.fr/saporta.csv", head=TRUE,sep=";")

Let us generate 100 training samples (where we keep about 50% of the observations). On each of them, we use a stepwise procedure, and we keep the estimates of the remaining variables (and their standard deviation actually)

n=nrow(MYOCARDE)
M=matrix(NA,100,ncol(MYOCARDE))
colnames(M)=c("(Intercept)",names(MYOCARDE)[1:7])
S1=S2=M1=M2=M
for(i in 1:100){
idx = which(sample(0:1,size=n, replace=TRUE)==1)
reg=step(glm(PRONO=="DECES"~.,data=MYOCARDE[idx,]))
nm=names(reg$coefficients)
M1[i,nm]=reg$coefficients
S1[i,nm]=summary(reg)$coefficients[,2]
f=paste("PRONO=='DECES'~",paste(nm[-1],collapse="+"),sep="")
reg=glm(f,data=MYOCARDE[-idx,])
M2[i,nm]=reg$coefficients
S2[i,nm]=summary(reg)$coefficients[,2]
}

Then, for the 7 covariates (and the constant), we can look at the value of the coefficient in the model fitted on the training sample, and the value in the model fitted on the validation sample (of course, only when the variable was kept)

for(j in 1:8){
idx=which(!is.na(M1[,j]))
plot(M1[idx,j],M2[idx,j])
abline(a=0,b=1,lty=2,col="gray")
segments(M1[idx,j]-2*S1[idx,j],M2[idx,j],M1[idx,j]+2*S1[idx,j],M2[idx,j])
segments(M1[idx,j],M2[idx,j]-2*S2[idx,j],M1[idx,j],M2[idx,j]+2*S2[idx,j])
}

For instance, with the intercept, we have the following, where horizontal segments are confidence intervals of the parameter for the model fitted on the training sample, and vertical ones for the validation sample. The green part means some sort of consistency, while the red one means that the coefficient was actually negative with one model and positive with the other one, which is odd (but in that case, observe that coefficients are rarely significant).

We can also visualize the joint distribution of the two estimators,

library(ks)
for(j in 1:8){
idx = which(!is.na(M1[,j]))
Z = cbind(M1[idx,j],M2[idx,j])
H = Hpi(x=Z)
fhat = kde(x=Z, H=H)
image(fhat$eval.points[[1]], fhat$eval.points[[2]],fhat$estimate)
abline(a=0,b=1,lty=2,col="gray")
abline(v=0,lty=2)
abline(h=0,lty=2)
}

which are here, almost on the diagonal,

meaning that the intercept on the two samples is (more or less) the same. We can then look at other parameters (which is actually more interesting).

On that variable, it seems that it is significant on the training dataset (somehow, this is consistent with the fact that it remains in the model after the stepwise procedure) but not on the validation sample (or hardly significant).

Others are much more consistent (with some possible outliers)

On the next one, we have again significance on the training sample, but not on the validation sample,

and probably more interesting

where the two are very consistent.
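One way to quantify that consistency (a small sketch, reusing the M1 and M2 matrices obtained above) is to compute, for each coefficient, the proportion of simulations where the training and validation estimates have the same sign,

agree = function(j){
idx = which(!is.na(M1[,j]))
mean(sign(M1[idx,j])==sign(M2[idx,j]))
}
round(Vectorize(agree)(1:8),2)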

# Exotic link functions for GLMs

In my previous post on GLMs, I discussed power link functions. But there are many more link functions that can be used:

• The square root link (for the Poisson model)

Consider some random variable $Y$ with mean $\mu$ and variance $\sigma^2$. Using Taylor’s expansion,$$g(Y)\sim g(\mu)+(Y-\mu)g'(\mu)+\frac{1}{2}(Y-\mu)^2g''(\mu)$$we can write$$\mathbb{E}[g(Y)]\sim g(\mu)+\frac{\sigma^2}{2}g''(\mu)$$ $$\text{Var}[g(Y)]\sim [g'(\mu)]^2\sigma^2$$

Assume that $Y\sim\mathcal{P}(\lambda)$, and consider a square root transformation, $g(y)=\sqrt{y}$; then the second equality becomes $$\text{Var}[\sqrt{Y}]\sim \left[\frac{1}{2\sqrt{\mathbb{E}[Y]}}\right]^2\text{Var}[Y]=\frac{1}{4}$$

So, somehow, with a square-root transformation, we have variance stability, which might be interpreted as some homoscedasticity.
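That variance-stabilizing property is easy to check by simulation (a small sketch, the values of $\lambda$ below being arbitrary),

lambda = c(5, 10, 50, 100)
sapply(lambda, function(l) var(sqrt(rpois(1e6,l))))

and all values should be close to 1/4, whatever $\lambda$ (at least when $\lambda$ is not too small).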

• The complementary log-log function for the Bernoulli model

Assume that the true variable of interest is a Poisson one, $N|\mathbf{X}=\mathbf{x}\sim\mathcal{P}(\lambda_{\mathbf{x}})$ where $\lambda_{\mathbf{x}}=\exp[\mathbf{x}^T\mathbf{\beta}]$Thus,$$\mathbb{P}[N=0|\mathbf{X}=\mathbf{x}]=\exp[-\lambda_{\mathbf{x}}]=\exp[-(\exp[\mathbf{x}^T\mathbf{\beta}])]$$while$$\mathbb{P}[N>0|\mathbf{X}=\mathbf{x}]=1-\exp[-(\exp[\mathbf{x}^T\mathbf{\beta}])]=H(\mathbf{x}^T\mathbf{\beta})$$where $H(s)=1-\exp[-\exp(s)]$. Let $Y=\mathbf{1}(N>0)$. The previous model seems like a Bernoulli regression with $H$ as link function,$$\mathbb{P}[Y=1|\mathbf{X}=\mathbf{x}]=H(\mathbf{x}^T\mathbf{\beta})$$

So, assume now that instead of observing $N$, we observe $Y=\boldsymbol{1}(N>0)$. In that case, running a Bernoulli regression with a complementary log-log link function would be the same (?) as running first a Poisson regression on the original data, and then using it on our binary variable, zero vs. non-zero. Let us generate some data, and see what’s going on. Let us compare $e^{-\lambda_{\mathbf{x}}}$ and $p_{\mathbf{x}}$ obtained from a standard logistic regression

n=563
set.seed(1)
base=data.frame(X1=rnorm(n),X2=rnorm(n))
lambda=base$X1+base$X2
base$Y=rpois(n,exp(lambda))
regPois = glm(Y~.,data=base,family=poisson(link="log"))
lambda = predict(regPois,type="response")
regBinom = glm((Y==0)~.,data=base,family=binomial(link="probit"))
prob = predict(regBinom, type="response")
plot(prob,exp(-lambda),xlim=0:1,ylim=0:1)
abline(a=0,b=1,lty=2,col="red")

What if $p_{\mathbf{x}}$ was obtained from a Bernoulli regression, with a cloglog link function?

regBinom = glm((Y>0)~.,data=base,family=binomial(link="cloglog"))
prob = predict(regBinom, type="response")
plot(prob,1-exp(-lambda),xlim=0:1,ylim=0:1)
abline(a=0,b=1,lty=2,col="red")

It looks like the fit is very good here! Now, what if we have real data, like the dataset from A Theory of Extramarital Affairs, by Ray Fair, published in 1978 in the Journal of Political Economy (with 563 observations, and nine variables)

base = read.table("http://freakonometrics.free.fr/baseaffairs.txt",header=TRUE)
str(base)
x=base$SEX
base$SEX="M"
base$SEX[x=="0"]="F"
x=base$CHILDREN
base$CHILDREN="YES"
base$CHILDREN[x==0]="NO"
regPois = glm(Y~.,data=base,family=poisson(link="log"))
lambda = predict(regPois,type="response")
regBinom = glm((Y==0)~.,data=base,family=binomial(link="probit"))
prob = predict(regBinom, type="response")
plot(prob,exp(-lambda),xlim=0:1,ylim=0:1)
abline(a=0,b=1,lty=2,col="red")

In that case the two models are very different. But actually, so is the second one

regBinom = glm((Y>0)~.,data=base,family=binomial(link="cloglog"))
prob = predict(regBinom, type="response")
plot(prob,1-exp(-lambda),xlim=0:1,ylim=0:1)
abline(a=0,b=1,lty=2,col="red")

How can we interpret that? Could it be because the Poisson model is not good? Actually, if we run a zero-inflated model here,

library(pscl)
regZIP = zeroinfl(Y ~ . | ., data = base)
summary(regZIP)

Count model coefficients (poisson with log link):
             Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.002274   0.048413  -0.047    0.963
X1           1.019814   0.026186  38.945   <2e-16 ***
X2           1.004814   0.024172  41.570   <2e-16 ***

Zero-inflation model coefficients (binomial with logit link):
            Estimate Std. Error z value Pr(>|z|)
(Intercept) -4.90190    2.07846  -2.358   0.0184 *
X1          -2.00227    0.86897  -2.304   0.0212 *
X2          -0.01545    0.96121  -0.016   0.9872
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Hence, we reject here the Poisson distribution assumption, because of the inflation of zeros… It looks like the cloglog link can be used to check whether the Poisson distribution is a good model, or not…

# Extracting information from a picture, round 2

Yesterday, I published a post on extracting information from a picture, but it did not work as expected. I claimed that it was because of the original graph I had. More precisely, it was based on some weird projection, and I could not reconcile it with a standard shapefile. So I decided to cheat a little bit, by creating my own map. Colors are ugly, I know.
But I got them using

u = seq(0,1,length=30)
couleurs = rgb(u,rev(u),0,1)

The picture is

url = "https://freakonometrics.hypotheses.org/files/2018/12/chomage3.png"
download.file(url,"chomage3.png")
library(pixmap)
library(png)
IMG = readPNG("chomage3.png")

I used those colors because it would make things easy when extracting the reds and the greens…

ROUGE=t(IMG[,,1])[x1:x2,]
ROUGE=ROUGE[,y2:y1]
library(scales)
image(x1:x2,y1:y2,ROUGE,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))
VERT=t(IMG[,,2])[x1:x2,]
VERT=VERT[,y2:y1]
image(x1:x2,y1:y2,VERT,col=alpha(colour=rgb(0,1,0,1), alpha = seq(0,1,by=.01)))

Let us see if the contour of France can be overlaid

library(maptools)
library(PBSmapping)
download.file("http://biogeo.ucdavis.edu/data/gadm2.8/rds/FRA_adm0.rds","FRA_adm0.rds")
FR=readRDS("FRA_adm0.rds")
PP = SpatialPolygons2PolySet(FR)
par(mfrow=c(1,1))
PP=PP[(PP$X<=8.25)&(PP$Y>=42.2),]
u=(x1:x2)-x1
v=(y1:y2)-y1
ax=min(PP$X)
bx=max(PP$X)-min(PP$X)
ay=min(PP$Y)
by=max(PP$Y)-min(PP$Y)
PP$X=(PP$X-ax)/bx*max(u)
PP$Y=(PP$Y-ay)/by*max(v)
image(u,v,ROUGE,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))
points(PP$X,PP$Y)

We have a perfect match, don’t we…? Let us now use a shapefile based on départements,

download.file("http://biogeo.ucdavis.edu/data/gadm2.8/rds/FRA_adm2.rds","FRA_adm2.rds")
FR2=readRDS("FRA_adm2.rds")
PP = SpatialPolygons2PolySet(FR2)
image(u,v,ROUGE,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))
k=35
pX=(PP$X[PP$PID==k]-ax)/bx*max(u)
pY=(PP$Y[PP$PID==k]-ay)/by*max(v)
points(pX,pY)

For instance, the thirty-fifth polygon is the following. Let us extract the color inside that polygon

u=1:nrow(ROUGE)
v=1:ncol(ROUGE)

The code would be

pX=(PP$X[PP$PID==k]-ax)/bx*max(u)
pY=(PP$Y[PP$PID==k]-ay)/by*max(v)
E=expand.grid(u,v)
M=matrix(point.in.polygon(E[,1],E[,2],pX,pY)>0,length(u),length(v))
image(u,v,ROUGE*M,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))
points(pX,pY)

Now, for each département, I extract the average value of red, and the average value of green,

extract_info = function(k){
pX=(PP$X[PP$PID==k]-ax)/bx*max(u)
pY=(PP$Y[PP$PID==k]-ay)/by*max(v)
E=expand.grid(u,v)
M=matrix(point.in.polygon(E[,1],E[,2],pX,pY)>0,length(u),length(v))
nom=FR2[FR2$OBJECTID ==k,c("NAME_2","CCA_2")]
return(c(as.numeric(nom$CCA_2),sum(ROUGE[M==1])/sum(M),sum(VERT[M==1])/sum(M)))
}
donnees = Vectorize(extract_info)(1:95)
x2=donnees[1,]
y2=donnees[2,]/(donnees[2,]+donnees[3,])
df2=data.frame(dpt=x2,extract=y2)
x1=as.numeric(as.character(baseChomage$no))
y1=baseChomage$chomagePremierTrimestre2017
df1=data.frame(dpt=x1,obs=y1)
df=merge(df1,df2)
plot(df$obs,df$extract)

On the graph below, we have the original values on the x-axis (unemployment, in percent) and the “average value of red” on the y-axis. Note that the points are almost perfectly correlated… The accumulation can be explained because, on the original map, different values could have the same color.

So far, I can claim that we’ve been able to extract useful information from the original picture.
Consider now the case where the original map is the following one. The picture can be downloaded using the following code

url = "https://freakonometrics.hypotheses.org/files/2018/12/chomage5.png"
download.file(url,"chomage5.png")
library(pixmap)
library(png)
IMG = readPNG("chomage5.png")

Here, the colors are obtained from a standard palette,

library(pals)
couleurs = rev(brewer.rdylgn(30))

Here again, we use our previous code to extract the reds and the greens. And if we use our function

extract_info = function(k){
pX=(PP$X[PP$PID==k]-ax)/bx*max(u)
pY=(PP$Y[PP$PID==k]-ay)/by*max(v)
E=expand.grid(u,v)
M=matrix(point.in.polygon(E[,1],E[,2],pX,pY)>0,length(u),length(v))
nom=FR2[FR2$OBJECTID ==k,c("NAME_2","CCA_2")]
return(c(as.numeric(nom$CCA_2),sum(ROUGE[M==1])/sum(M),sum(VERT[M==1])/sum(M)))
}
donnees = Vectorize(extract_info)(1:95)
x2=donnees[1,]
y2=donnees[2,]/(donnees[2,]+donnees[3,])
df2=data.frame(dpt=x2,extract=y2)
x1=as.numeric(as.character(baseChomage$no))
y1=baseChomage$chomagePremierTrimestre2017
df1=data.frame(dpt=x1,obs=y1)
df=merge(df1,df2)
plot(df$obs,df$extract)

we obtain the following graph. Here again, we have a strong correlation, not to say comonotonic variables (in the sense that the ranks are identical). Nice, isn’t it?

# Extracting information from a picture, round 1

This week, I wanted to get the information I found on the nice map below. I could not get access to the original dataset, per zip code… and I was wondering if (assuming that the map was in high resolution) it was actually possible to extract the information, using a simple R function…

As we can see, there is red and green on the map, and I would love to know which are the green and the red cities, in France. One important issue is actually the background. Here it’s nice, it’s white… but white is a strange color, achromatic and very light. More specifically, if I search for red areas, the background is very red. And very green, too. So, to avoid those issues, I used gimp to change the background into black. On the opposite, where it’s black, it’s neither red, nor green!

Let us get the map, and extract information from the file

url="https://f.hypotheses.org/wp-content/blogs.dir/253/files/2018/12/inondation3.png"
download.file(url,"inondation3.png")
image="inondation3.png"
library(pixmap)
library(png)
IMG=readPNG(image)

Information is stored in several matrices – or in arrays. Dimension 1 is the height of the picture (in pixels), dimension 2 is the width, and the third one is either 1 (red), 2 (green) or 3 (blue), based on the rgb decomposition of each pixel. Then, I try to find the border of the map

nl=dim(IMG)[1]
nc=dim(IMG)[2]
MAT=(IMG[,,1]+IMG[,,2])/2
x=apply(MAT,2,max)
plot(x,type="l")

When it's null, it means there is no color on that line of the matrix, i.e.
completely black (initially, I used the mean function, but the maximum really behaves like a step function)

y=apply(MAT,1,max)
plot(y,type="l")

Let us find cutoff values, on the left and on the right, on top and on the bottom

image(1:nc,1:nl,t(MAT))
abline(v=min(which(x>.2)),col="blue")
abline(v=max(which(x>.2)),col="blue")
abline(h=min(which(y>.2)),col="blue")
abline(h=max(which(y>.2)),col="blue")

We obtain the following (forget about the fact that – somehow – France is upside-down). We can zoom in, just to make sure that our borders are fine

par(mfrow=c(1,2))
image(min(which(x>.2))+(-5):5,1:nl,t(MAT)[min(which(x>.2))+(-5):5,])
abline(v=min(which(x>.2))+(-5):5,col="white")
abline(v=min(which(x>.2)),col="blue")
x1=min(which(x>.2))-1

and on the other side

image(max(which(x>.2))+(-5):5,1:nl,t(MAT)[max(which(x>.2))+(-5):5,])
abline(v=max(which(x>.2))+(-5):5,col="white")
abline(v=max(which(x>.2)),col="blue")
x2=max(which(x>.2))+1

So far so good. Let us keep the subpart of the picture,

image(x1:x2,y1:y2,t(MAT)[x1:x2,y1:y2])

Now, let us focus on the red part / component of that picture

ROUGE=t(IMG[,,1])[x1:x2,]
ROUGE=ROUGE[,y2:y1]
library(scales)
image(x1:x2,y1:y2,ROUGE,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))

That's not bad, is it? And we can get a similar graph for the green part

VERT=t(IMG[,,2])[x1:x2,]
VERT=VERT[,y2:y1]
image(x1:x2,y1:y2,VERT,col=alpha(colour=rgb(0,1,0,1), alpha = seq(0,1,by=.01)))

Now, I wanted to adjust a map of France on that one. Using shapefiles of administrative regions, it would be possible to get the proportion of red and green parts (départements, cantons, etc). As a starting point (before going to ‘départements’), let us use a standard shapefile for France

library(maptools)
library(PBSmapping)
url="http://biogeo.ucdavis.edu/data/gadm2.8/rds/FRA_adm0.rds"
download.file(url,"FRA_adm0.rds")
FR=readRDS("FRA_adm0.rds")
PP = SpatialPolygons2PolySet(FR)
PP=PP[(PP$X<=8.25)&(PP$Y>=42.2),]
u=(x1:x2)-x1
v=(y1:y2)-y1
ax=min(PP$X)
bx=max(PP$X)-min(PP$X)
ay=min(PP$Y)
by=max(PP$Y)-min(PP$Y)
PP$X=(PP$X-ax)/bx*max(u)
PP$Y=(PP$Y-ay)/by*max(v)
image(u,v,ROUGE,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))
points(PP$X,PP$Y)

We try here to rescale it: the left part of the contour should be on the left part of the picture, and the same for the right part, the top, and the bottom. Unfortunately, even if we change the projection technique, I could not match the contour of France perfectly. I am quite sure that it's a projection problem! But I did try a dozen popular ones, with no success… so if anyone has a clever idea…

# GLMs: link vs. distribution

Usually, when I give a course on GLMs, I try to insist on the fact that the link function is probably more important than the distribution. In order to illustrate, consider the following dataset, with 5 observations

x = c(1,2,3,4,5)
y = c(1,2,4,2,6)
base = data.frame(x,y)

Then consider several models, with various distributions, and either an identity link (and in that case $\mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=\mathbf{x}^T\mathbf{\beta}$) or a log link function (so that $\mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=e^{\mathbf{x}^T\mathbf{\beta}}$)

regNId = glm(y~x,family=gaussian(link="identity"),data=base)
regNlog = glm(y~x,family=gaussian(link="log"),data=base)
regPId = glm(y~x,family=poisson(link="identity"),data=base)
regPlog = glm(y~x,family=poisson(link="log"),data=base)
regGId = glm(y~x,family=Gamma(link="identity"),data=base)
regGlog = glm(y~x,family=Gamma(link="log"),data=base)
regIGId = glm(y~x,family=inverse.gaussian(link="identity"),data=base)
regIGlog = glm(y~x,family=inverse.gaussian(link="log"),data=base)

One can also consider some Tweedie distribution, to be even more general

library(statmod)
regTwId = glm(y~x,family=tweedie(var.power=1.5,link.power=1),data=base)
regTwlog = glm(y~x,family=tweedie(var.power=1.5,link.power=0),data=base)

Consider the predictions obtained in the first case, with the linear link function

library(RColorBrewer)
darkcols = brewer.pal(8, "Dark2")
plot(x,y,pch=19)
abline(regNId,col=darkcols[1])
abline(regPId,col=darkcols[2])
abline(regGId,col=darkcols[3])
abline(regIGId,col=darkcols[4])
abline(regTwId,lty=2)

The predictions are very very close, aren’t they? In the case of the exponential prediction, we obtain

plot(x,y,pch=19)
u=seq(.8,5.2,by=.01)
lines(u,predict(regNlog,newdata=data.frame(x=u),type="response"),col=darkcols[1])
lines(u,predict(regPlog,newdata=data.frame(x=u),type="response"),col=darkcols[2])
lines(u,predict(regGlog,newdata=data.frame(x=u),type="response"),col=darkcols[3])
lines(u,predict(regIGlog,newdata=data.frame(x=u),type="response"),col=darkcols[4])
lines(u,predict(regTwlog,newdata=data.frame(x=u),type="response"),lty=2)

We can actually look closer. For instance, in the linear case, consider the slope obtained with a Tweedie model (which will include all the parametric families mentioned here, actually)

pente=function(gamma) summary(glm(y~x,family=tweedie(var.power=gamma,link.power=1),data=base))$coefficients[2,1:2]
Vgamma = seq(-.5,3.5,by=.05)
Vpente = Vectorize(pente)(Vgamma)
plot(Vgamma,Vpente[1,],type="l",lwd=3,ylim=c(.965,1.03),xlab="power",ylab="slope")

The slope here is always very very close to one ! Even more if we add a confidence interval

plot(Vgamma,Vpente[1,])
lines(Vgamma,Vpente[1,]+1.96*Vpente[2,],lty=2)
lines(Vgamma,Vpente[1,]-1.96*Vpente[2,],lty=2)

Heuristically, for the Gamma regression, or the Inverse Gaussian one, because the variance is a power of the prediction, if the prediction is small (here on the left), the variance should be small. So, on the left of the graph, the error should be small with a higher power for the variance function. And that’s indeed what we observe here

erreur=function(gamma) predict(glm(y~x,family=tweedie(var.power=gamma,link.power=1),data=base),newdata=data.frame(x=1),type="response")-y[x==1]
Verreur = Vectorize(erreur)(Vgamma)
plot(Vgamma,Verreur,type="l",lwd=3,ylim=c(-.1,.04),xlab="power",ylab="error")
abline(h=0,lty=2)

Of course, we can do the same with the exponential models

pente=function(gamma) summary(glm(y~x,family=tweedie(var.power=gamma,link.power=0),data=base))$coefficients[2,1:2]
Vpente = Vectorize(pente)(Vgamma)
plot(Vgamma,Vpente[1,],type="l",lwd=3)

or, if we add the confidence bands, we obtain

plot(Vgamma,Vpente[1,],ylim=c(0,.8),type="l",lwd=3,xlab="power",ylab="slope")
lines(Vgamma,Vpente[1,]+1.96*Vpente[2,],lty=2)
lines(Vgamma,Vpente[1,]-1.96*Vpente[2,],lty=2)

So here also, the “slope” is rather similar… And if we look at the error we make on the left part of the graph, we obtain

erreur=function(gamma) predict(glm(y~x,family=tweedie(var.power=gamma,link.power=0),data=base),newdata=data.frame(x=1),type="response")-y[x==1]
Verreur = Vectorize(erreur)(Vgamma)
plot(Vgamma,Verreur,type="l",lwd=3,ylim=c(.001,.32),xlab="power",ylab="error")

So my point is that the distribution is usually not the most important aspect of GLMs, even if chapters of books on GLMs are distribution based… But as mentioned in another post, if you consider a nonlinear transformation, like we have with GAMs, the story is more complicated…

# Bailey (1963) and Poisson regression on two factors

Consider the following dataset, from A Theory of Extramarital Affairs, by Ray Fair, published in 1978 in the Journal of Political Economy, with 563 observations, and nine variables: eight covariates, and the variable of interest, the number of extramarital affairs, over a year,

base = read.table("http://freakonometrics.free.fr/baseaffairs.txt",header=TRUE)
str(base)
'data.frame':	563 obs. of  9 variables:
 $ SEX         : int  1 0 0 1 1 0 0 1 0 1 ...
 $ AGE         : num  37 27 32 57 22 32 22 57 32 22 ...
 $ YEARMARRIAGE: num  10 4 15 15 0.75 1.5 0.75 15 15 1.5 ...
 $ CHILDREN    : int  0 0 1 1 0 0 0 1 1 0 ...
 $ RELIGIOUS   : int  3 4 1 5 2 2 2 2 4 4 ...
 $ EDUCATION   : int  18 14 12 18 17 17 12 14 16 14 ...
 $ OCCUPATION  : int  7 6 1 6 6 5 1 4 1 4 ...
 $ SATISFACTION: int  4 4 4 5 3 5 3 4 2 5 ...
 $ Y           : int  0 0 0 0 0 0 0 0 0 0 ...

Let us focus on two categorical covariates, related to the importance of religion, and the occupation

df=data.frame(y=base$Y,
  religion=as.factor(base$RELIGIOUS),
  occupation=as.factor(base$OCCUPATION),
  expo = 1)
(E=xtabs(expo~religion+occupation,data=df))
        occupation
religion  1  2  3  4  5  6  7
       1  4  1  8  4 16  9  0
       2 23  3 11 17 56 36  6
       3 29  1 10 12 39 25  2
       4 38  7 12 21 59 44  2
       5 13  1  3 10 19 19  3
(N=xtabs(y~religion+occupation,data=df))
        occupation
religion  1  2  3  4  5  6  7
       1  4  1 13  3 13  7  0
       2  1  1 13 10 25 43 10
       3 15  0 12 11 34 35  1
       4 24  1  3 15 11  9 10
       5  6  0  0  6 11  7  0

The two tables above are the exposure (number of observations) and the number of extramarital affairs, here as contingency tables. Without any covariate, one can assume that $N\sim\mathcal{P}(\lambda\cdot E)$, where $\lambda$ would be

sum(N)/sum(E)
[1] 0.6305506

The idea with the margin method is to assume that $N_{i,j}=E_{i,j}\cdot\lambda_{i,j}$ where $\lambda_{i,j}=A_i\cdot B_j$. Bailey (1963) added two series of constraints: per row, $$\sum_j N_{i,j}=\sum_j E_{i,j}\cdot A_i\cdot B_j$$ for any $i$, and similarly, for any $j$, $$\sum_i N_{i,j}=\sum_i E_{i,j}\cdot A_i\cdot B_j$$From the first series of constraints, write $$A_i=\frac{\sum_j N_{i,j}}{\sum_j E_{i,j}\cdot B_j}$$ and use the second series to write $$B_j=\frac{\sum_i N_{i,j}}{\sum_i E_{i,j}\cdot A_i}$$Because we need the $A_i$‘s to compute the $B_j$‘s, and conversely, it is natural to consider some iterative procedure to solve it. Observe that we do not have uniqueness… Consider here some starting values for the $A_i$‘s and $B_j$‘s

A=rep(1,length(levels(df$religion)))
B=rep(1,length(levels(df$occupation)))*sum(N)/sum(E)
A
[1] 1 1 1 1 1
B
[1] 0.6305506 0.6305506 0.6305506 0.6305506 0.6305506 0.6305506 0.6305506

The predicted number of extramarital affairs would be $\hat N_{i,j}=E_{i,j}\cdot\hat A_i\cdot \hat B_j$

E * A%*%t(B)
        occupation
religion          1         2         3          4          5          6         7
       1  2.5222025 0.6305506 5.0444050  2.5222025 10.0888099  5.6749556 0.0000000
       2 14.5026643 1.8916519 6.9360568 10.7193606 35.3108348 22.6998224 3.7833037
       3 18.2859680 0.6305506 6.3055062  7.5666075 24.5914742 15.7637655 1.2611012
       4 23.9609236 4.4138544 7.5666075 13.2415631 37.2024867 27.7442274 1.2611012
       5  8.1971581 0.6305506 1.8916519  6.3055062 11.9804618 11.9804618 1.8916519
sum(B*E[1,])
[1] 26.48313
sum(B*E[2,])
[1] 95.84369
apply(t(B*t(E)),1,sum)
        1         2         3         4         5
 26.48313  95.84369  74.40497 115.39076  42.87744
sum(A*E[,1])
[1] 107
sum(A*E[,2])
[1] 13
apply(A*E,2,sum)
  1   2   3   4   5   6   7
107  13  44  64 189 133  13

From the expressions above, observe that one can very easily write the $A_i$‘s and the $B_j$‘s as functions of the $B_j$‘s and the $A_i$‘s respectively

A=apply(N,1,sum)/apply(t(B*t(E)),1,sum)
B=apply(N,2,sum)/apply(A*E,2,sum)

Let it iterate one thousand times

for(i in 1:1000){
A=apply(N,1,sum)/apply(t(B*t(E)),1,sum)
B=apply(N,2,sum)/apply(A*E,2,sum)
}

We obtain here

A
        1         2         3         4         5
1.5404346 1.0447195 1.4825650 0.6553159 0.6634763
B
        1         2         3         4         5         6         7
0.4685515 0.2629769 0.8454435 0.7245310 0.4889697 0.7770553 1.6753750
E * A%*%t(B)
        occupation
religion          1         2          3          4          5          6          7
       1  2.8870914 0.4050987 10.4188024  4.4643702 12.0516123 10.7730250  0.0000000
       2 11.2586111 0.8242113  9.7157637 12.8678376 28.6068235 29.2249717 10.5017811
       3 20.1450811 0.3898804 12.5342484 12.8899708 28.2722423 28.8008726  4.9677044
       4 11.6678702 1.2063307  6.6483904  9.9707299 18.9053460 22.4055332  2.1957997
       5  4.0413463 0.1744790  1.6827951  4.8070914  6.1639760  9.7955975  3.3347148

That is our prediction, per category, of the number of affairs.
Observe that here, the sums per row are equal to the observed numbers,

apply(N,1,sum)
  1   2   3   4   5
 41 103 108  73  30
apply(E * A%*%t(B),1,sum)
  1   2   3   4   5
 41 103 108  73  30

as well as the sums per column

apply(N,2,sum)
  1   2   3   4   5   6   7
 50   3  41  45  94 101  21
apply(E * A%*%t(B),2,sum)
  1   2   3   4   5   6   7
 50   3  41  45  94 101  21

Now, why should I mention that here, in the section on the Poisson regression in our course? Because actually, this is exactly what we get if we run a Poisson regression on those two covariates

reg=glm(y~religion+occupation,data=df,family=poisson)
summary(reg)
Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.32604    0.21325  -1.529 0.126285
religion2   -0.38832    0.18791  -2.066 0.038783 *
religion3   -0.03829    0.18585  -0.206 0.836771
religion4   -0.85470    0.19757  -4.326 1.52e-05 ***
religion5   -0.84233    0.24416  -3.450 0.000561 ***
occupation2 -0.57758    0.59549  -0.970 0.332083
occupation3  0.59022    0.21349   2.765 0.005699 **
occupation4  0.43588    0.20603   2.116 0.034381 *
occupation5  0.04265    0.17590   0.242 0.808399
occupation6  0.50587    0.17360   2.914 0.003569 **
occupation7  1.27415    0.26298   4.845 1.27e-06 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

First of all, observe that the total sum of predictions equals the total sum of observations

yp = predict(reg,type="response")
sum(yp)
[1] 355
sum(df$y)
[1] 355

But actually, the predicted number of affairs, for our 35 classes, is exactly what we got using Bailey’s technique

xtabs(yp~df$religion+df$occupation)
           df$occupation
df$religion          1         2          3          4          5          6          7
          1  2.8870914 0.4050987 10.4188024  4.4643702 12.0516123 10.7730250  0.0000000
          2 11.2586112 0.8242113  9.7157637 12.8678376 28.6068235 29.2249717 10.5017811
          3 20.1450813 0.3898804 12.5342484 12.8899708 28.2722424 28.8008726  4.9677044
          4 11.6678703 1.2063307  6.6483904  9.9707300 18.9053460 22.4055332  2.1957997
          5  4.0413464 0.1744790  1.6827951  4.8070914  6.1639761  9.7955975  3.3347148
E * A%*%t(B)
        occupation
religion          1         2          3          4          5          6          7
       1  2.8870914 0.4050987 10.4188024  4.4643702 12.0516123 10.7730250  0.0000000
       2 11.2586111 0.8242113  9.7157637 12.8678376 28.6068235 29.2249717 10.5017811
       3 20.1450811 0.3898804 12.5342484 12.8899708 28.2722423 28.8008726  4.9677044
       4 11.6678702 1.2063307  6.6483904  9.9707299 18.9053460 22.4055332  2.1957997
       5  4.0413463 0.1744790  1.6827951  4.8070914  6.1639760  9.7955975  3.3347148

To be more specific, up to a multiplicative constant, the two series of coefficients are equal here, e.g. for the $A_i$‘s

a=exp(coefficients(reg)[1]+c(0,coefficients(reg)[2:5]))
a/a[1]
          religion2 religion3 religion4 religion5
1.0000000 0.6781979 0.9624329 0.4254098 0.4307072
A/A[1]
        1         2         3         4         5
1.0000000 0.6781979 0.9624329 0.4254098 0.4307072

but also for $B_j$‘s

b=exp(coefficients(reg)[1]+c(0,coefficients(reg)[6:11]))
b/b[1]
            occupation2 occupation3 occupation4 occupation5 occupation6 occupation7
  1.0000000   0.5612551   1.8043769   1.5463210   1.0435773   1.6584203   3.5756477
B/B[1]
        1         2         3         4         5         6         7
1.0000000 0.5612551 1.8043770 1.5463210 1.0435773 1.6584203 3.5756478

This will have major implications in non-life insurance models (for claims reserving).

# The “probability to win” is hard to estimate…

Real-time computation (or estimation) of the “probability to win” is difficult. We’ve seen that in soccer games, in elections… but actually, as a professor, I see it frequently when I grade my students.

Consider a classical multiple choice exam. After each question, imagine that you try to compute the probability that the student will pass. Consider here the case where we have 50 questions. Students pass when they have 25 correct answers, or more. Just for simulations, I will assume that students just flip a coin at each question… I have $n$ students, and 50 questions

set.seed(1)
n=10
M=matrix(sample(0:1,size=n*50,replace=TRUE),50,n)

Let $X_{i,j}$ denote the score of student $i$ at question $j$. Let $S_{i,j}$ denote the cumulated score, i.e. $S_{i,j}=X_{i,1}+\cdots+X_{i,j}$. At step $j$, I can get some sort of prediction of the final score, using $\hat{T}_{i,j}=50\times S_{i,j}/j$. Here is the code

SM=apply(M,2,cumsum)
NB=SM*50/(1:50)

We can actually plot it

plot(NB[,1],type="s",ylim=c(0,50)) abline(h=25,col="blue") for(i in 2:n) lines(NB[,i],type="s",col="light blue") lines(NB[,3],type="s",col="red")

But that’s simply the prediction of the final score, at each step. That’s not the computation of the probability to pass !

Let’s try to see how we can do it… If after $j$ questions the student has 25 correct answers, the probability should be 1 – i.e. if $S_{i,j}\geq 25$ – since he cannot fail. Another simple case is the following: if after $j$ questions, the number of points he can get with all correct answers until the end is not sufficient, he will fail. That means that if $S_{i,j}+(50-j)< 25$, the probability should be 0. Otherwise, computing the probability of success is quite straightforward: it is the probability to obtain at least $25-S_{i,j}$ correct answers out of the $50-j$ remaining questions, when the probability of success is actually $S_{i,j}/j$. We recognize the survival probability of a binomial distribution. The code is then simply

PB=NB*NA
for(i in 1:50){
for(j in 1:n){
if(SM[i,j]>=25) PB[i,j]=1
if(SM[i,j]+(50-i+1)<25) PB[i,j]=0
if((SM[i,j]<25)&(SM[i,j]+(50-i+1)>=25)) PB[i,j]=1-pbinom(25-SM[i,j],size=(50-i),prob=SM[i,j]/i)
}}

So if we plot it, we get

plot(PB[,1],type="s",ylim=c(0,1)) abline(h=25,col="red") for(i in 2:n) lines(PB[,i],type="s",col="light blue") lines(PB[,3],type="s",col="red")

which is much more volatile than the previous curves we obtained! So yes, computing the “probability to win” is a complicated exercise! Don’t blame those who try, it is hard to do!

Of course, things are slightly different if my students don’t flip a coin… this is what we obtain if half of the students are good (2/3 probability to get a question correct) and half is not good (1/3 chance),

If we look at the probability to pass, we usually do not have to wait until the end (the 50 questions) to know who passed and who failed

PS : I guess a less volatile solution can be obtained with a Bayesian approach… if I find some spare time this week, I will try to code it…
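Just to sketch the idea (a hypothetical Beta-Binomial approach, not something I have tested carefully): with a Beta(a,b) prior on the probability of a correct answer, the posterior after $j$ questions with $S_{i,j}$ correct answers is a Beta(a+S_{i,j}, b+j-S_{i,j}) distribution, and the probability to pass becomes a Beta-Binomial survival probability,

prob_pass = function(s, j, a=1, b=1, total=50, threshold=25){
if(s >= threshold) return(1)
m = total - j            # remaining questions
need = threshold - s     # correct answers still needed
if(need > m) return(0)
k = need:m
sum(choose(m,k)*beta(k+a+s, m-k+b+j-s)/beta(a+s, b+j-s))
}
prob_pass(s=15, j=30)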

# October, grant proposal season

In 2012, Danielle Herbert, Adrian Barnett, Philip Clarke and Nicholas Graves published an article entitled “on the time spent preparing grant proposals: an observational study of Australian researchers”, whose conclusions had been reported in Nature under a more explicit title, “Australia’s grant system wastes time”! In this study, they looked at 3700 grant applications sent to the National Health and Medical Research Council, and showed that each application represented 37 working days: “Extrapolating this to all 3,727 submitted proposals gives an estimated 550 working years of researchers’ time (95% confidence interval, 513-589)“. Now that it is time for me to write my own funding application, I find that losing 37 working days is huge. Especially because it has become the norm! And somehow, it’s sad.

Forget about the crazy idea that I would rather, in fact, spend more time doing my research. In fact, the thought I had this morning was that it is rather sad that in the Faculty of Science, mathematicians are asked to spend a considerable amount of time, comparable to that required of physicists or chemists, for often smaller amounts of funding… And I thought it could be easily verified. We start by retrieving the discipline codes

url="http://www.nserc-crsng.gc.ca/NSERC-CRSNG/FundingDecisions-DecisionsFinancement/ResearchGrants-SubventionsDeRecherche/ResultsGSC-ResultatsCSS_eng.asp" download.file(url,destfile = "GSC.html") library(XML) tables=readHTMLTable("GSC.html") GSC=tables[[1]]$V1 GSC=as.character(GSC[-(1:2)]) namesGSC=tables[[1]]$V2 namesGSC=as.character(namesGSC[-(1:2)])

We’re going to need a small function, to remove the $ and other symbols that pollute the data (and prevent them from being treated as numbers)

library(stringr)
Correction = function(x) as.numeric(gsub('[$,]', '', x))

We will now read the 12 pages, and harvest (we will just take the 2017 data, but we could go back a few years before)

grants = function(gsc){
url=paste("http://www.nserc-crsng.gc.ca/NSERC-CRSNG/FundingDecisions-DecisionsFinancement/ResearchGrants-SubventionsDeRecherche/ResultsGSCDetail-ResultatsCSSDetails_eng.asp?Year=2017&GSC=",gsc,sep="")
download.file(url,destfile = "GSC.html")
library(XML)
tables=readHTMLTable("GSC.html")
X=as.character(tables[[1]]$"Awarded Amount")
A=as.numeric(Vectorize(Correction)(X))
return(c(median(A),mean(A),as.numeric(quantile(A,(1:99)/100))))
}
M=Vectorize(grants)(GSC[1:12])

The average amounts of individual grants can be compared,

barplot(M[2,])

In mathematics, the average grant amount is $24,400. If we normalize by this quantity, we obtain

barplot(M[2,]/M[2,8])

In other words, the average amount of a (individual) grant in chemistry (to pay for students, conferences, etc.) is twice that in mathematics, 60% higher in physics than in maths…

We can also look at the median values (rather than the averages)

barplot(M[1,])

Here again, it is in mathematics that the amounts are the lowest…

barplot(M[1,]/M[1,8])

in comparable proportions. If we think that the time spent writing should be proportional to the amount allocated, we should spend half as much time in math as in chemistry.

Cumulative distribution functions can also be plotted,

plot(M[3:101,8],(1:99)/100,type="s",xlim=range(M))
lines(M[3:101,5],(1:99)/100,type="s",col="red")
lines(M[3:101,4],(1:99)/100,type="s",col="blue")

with math in black, physics in red, and chemistry in blue. What is surprising is the bottom part: a “bad” researcher in chemistry or physics will earn more than the median researcher in mathematics…

Now that my intuition is confirmed, I have to go back, writing my proposal… and explain to my coauthors that I have to postpone some research projects because, well, you know…