Tag Archives: bootstrap

Presentation in Bordeaux, Journées de Statistique

This week, Sam – Samuel Stocksieker – will be in Bordeaux, at the Journées de Statistique, to talk about “smoothed bootstrap” and synthetic data generation for the modeling of extremes (a paper co-written with Denys Pommeret).

In supervised learning, it is quite common to face data with imbalanced distributions. This situation often makes learning difficult for standard algorithms. Research and solutions on learning from imbalanced distributions have mainly focused on classification tasks. Despite its importance, very few solutions exist for imbalanced regression. In this paper, we propose a data augmentation procedure, named DENIS, based on kernel density estimates. This approach provides an expression for the conditional densities of the generators. We apply DENIS to imbalanced regression and propose to combine it with a new type of wild-bootstrap generator to simulate the target variable, conditionally on the new synthetic data. We evaluate the performance of the DENIS algorithm in imbalanced regression settings. We empirically evaluate and compare our approach, and demonstrate a significant improvement over existing techniques.

Foundations of Machine Learning, part 5

This post is the ninth (and probably last) one of our series on the history and foundations of econometric and machine learning models. The first four were on econometric techniques. Part 8 is online here.

Optimization and algorithmic aspects

In econometrics, (numerical) optimization became omnipresent as soon as we left the Gaussian model. We briefly mentioned it in the section on the exponential family, and the use of the Fisher score (gradient descent) to solve the first order condition \mathbf{X}^T W(\beta)^{-1}[\mathbf{y}-\widehat{\mathbf{y}}]=\mathbf{0}. In learning, optimization is the central tool. And it is necessary to have effective optimization algorithms to solve problems (described previously) of the form \widehat{\beta}\in\underset{\beta\in\mathbb{R}^p}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,\beta_0+\mathbf{x}^T\beta)+\lambda\Vert\boldsymbol{\beta}\Vert\right\rbrace. In some cases, instead of global optimization, it is sufficient to consider optimization by coordinates (widely studied in Daubechies et al. (2004)). If f:\mathbb{R}^d\rightarrow\mathbb{R} is convex and differentiable, and if \mathbf{x} satisfies f(\mathbf{x}+h\mathbf{e}_i)\geq f(\mathbf{x}) for any h\in\mathbb{R} and any i\in\{1,\cdots, d\}, then f(\mathbf{x})=\min\{f\}, where (\mathbf{e}_i) is the canonical basis of \mathbb{R}^d. However, this property is no longer true in the non-differentiable case. But if we assume that the non-differentiable part is (additively) separable, it becomes true again. More specifically, if f(\mathbf{x})=g(\mathbf{x})+\sum_{i=1}^d h_i(x_i) with \left\lbrace\begin{array}{l}g: \mathbb{R}^d\rightarrow\mathbb{R}\text{ convex and differentiable}\\h_i: \mathbb{R}\rightarrow\mathbb{R}\text{ convex}\end{array}\right. This was the case for the Lasso regression, \beta\mapsto\| \mathbf{y}-\beta_0-\mathbf{X}\beta\|_{\ell_2 }+\lambda\|\beta\|_{\ell_1}, as shown by Tseng (2001). Getting back to our initial notations, we can use a coordinate descent algorithm: from an initial value \mathbf{x}^{(0)}, we iterate x_j^{(k)}\in\underset{x_j}{\text{argmin}}\big\lbrace f(x_1^{(k)},\cdots,x_{j-1}^{(k)},x_j,x_{j+1}^{(k-1)},\cdots,x_d^{(k-1)})\big\rbrace for j=1,2,\cdots,d. These algorithmic problems and numerical issues may seem secondary to econometricians. However, they are essential in machine learning: a technique is interesting only if there is a stable and fast algorithm to obtain a solution. These optimization techniques can be transposed: for example, this coordinate descent technique can be used in the case of SVM methods (support vector machines) when the space is not linearly separable, and the classification error must be penalized (we will come back to this technique in the next section).
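
To make that coordinate descent idea concrete, here is a minimal R sketch for the usual (squared-loss) Lasso formulation. The function names, the assumption that the columns of X have been standardized, and the fixed number of sweeps are illustrative choices, not taken from the text: each coordinate is updated in turn by soft-thresholding, holding all the others fixed.

soft = function(z, g) sign(z) * pmax(abs(z) - g, 0)
lasso_cd = function(X, y, lambda, n_sweep = 100){
  # coordinate descent for (1/2)*||y - X b||^2 + lambda*||b||_1
  b = rep(0, ncol(X))
  for(k in 1:n_sweep){
    for(j in 1:ncol(X)){
      r = y - X[, -j, drop = FALSE] %*% b[-j]      # partial residual, coordinate j left out
      b[j] = soft(sum(X[, j] * r), lambda) / sum(X[, j]^2)
    }
  }
  b
}
# e.g. lasso_cd(scale(as.matrix(mtcars[,-1])), mtcars$mpg - mean(mtcars$mpg), lambda = 10)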

In-sample, out-of-sample and cross-validation

These techniques seem intellectually interesting, but we have not yet discussed the choice of the penalty parameter \lambda. This problem is actually more general, because comparing two parameters \widehat{\beta}_{\lambda_1} and \widehat{\beta}_{\lambda_2} is actually comparing two models. In particular, if we use a Lasso method with different thresholds \lambda, we compare models that do not have the same dimension. Previously, we have addressed the problem of model comparison from an econometric perspective (by penalizing overly complex models). In the learning literature, judging the quality of a model on the data used to construct it does not make it possible to know how the model will behave on new data. This is the so-called “generalization” problem. The traditional approach then consists in splitting the sample (of size n) into two parts: a part that will be used to train the model (the training database, in-sample, of size m) and a part that will be used to test the model (the testing database, out-of-sample, of size n-m). The latter then makes it possible to measure a real predictive risk. Suppose that the data are generated by a linear model y_i=\mathbf{x}_i^T \beta_0+\varepsilon_i where the \varepsilon_i are independent draws from a centered distribution. The empirical in-sample quadratic risk is here \frac{1}{m}\sum_{i=1}^m\mathbb{E}\big([\mathbf{x}_i^T \widehat{\beta}-\mathbf{x}_i^T \beta_0]^2\big)=\mathbb{E}\big([\mathbf{x}_i^T \widehat{\beta}-\mathbf{x}_i^T \beta_0]^2\big), for any observation i. Assuming the residuals \varepsilon are Gaussian, we can show that this risk equals \sigma^2 \text{trace} (\Pi_X)/m, i.e. \sigma^2 p/m. On the other hand, the empirical out-of-sample quadratic risk is here \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big) where \mathbf{x} is a new observation, independent of the others. It can be noted that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big\vert \mathbf{x}\big)=\text{Var}\big(\mathbf{x}^T \widehat{\beta}\big\vert \mathbf{x}\big)=\sigma^2\mathbf{x}^T(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{x}, and by integrating with respect to \mathbf{x}, \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T\beta_0]^2\big)=\sigma^2\text{trace}\big(\mathbb{E}[\mathbf{x}\mathbf{x}^T]\mathbb{E}\big[(\mathbf{X}^T\mathbf{X})^{-1}\big]\big). The expression is then different from the one obtained in-sample, and using the Groves & Rothenberg (1969) bound, we can show that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big) \geq \sigma^2\frac{p}{m}, which is pretty intuitive, when we start thinking about it. Except in some simple cases, there is no simple (explicit) formula. Note, however, that if \mathbf{X}\sim\mathcal{N}(0,\sigma^2 \mathbb{I}), then \mathbf{X}^T \mathbf{X} follows a Wishart distribution, and it can be shown that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big)=\sigma^2\frac{p}{m-p-1}. If we now look at the empirical version, where \widehat{\beta} is estimated on the first m observations, \widehat{\mathcal{R}}^{\text{ IS}}=\sum_{i=1}^m [y_i-\boldsymbol{x}_i^T\widehat{\boldsymbol{\beta}}]^2\text{ and }\widehat{\mathcal{R}}^{\text{ OS}}=\sum_{i=m+1}^{n} [y_i-\boldsymbol{x}_i^T\widehat{\boldsymbol{\beta}}]^2, then, as Leeb (2008) noted, \widehat{\mathcal{R}}^{\text{IS}}-\widehat{\mathcal{R}}^{\text{OS}}\approx 2\cdot\nu where \nu represents the number of degrees of freedom, which is not unlike the penalty used in the Akaike criterion.
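
As a small illustration of that in-sample / out-of-sample distinction, here is a minimal sketch in R (the cars dataset, the cubic polynomial and the training size m=40 are illustrative assumptions, not taken from the text): the model is fitted on m observations, and the empirical quadratic risk is computed both on those observations and on the held-out ones.

set.seed(123)
idx   = sample(1:nrow(cars), size = 40)        # m = 40 training observations
train = cars[idx, ]
test  = cars[-idx, ]
fit   = lm(dist ~ poly(speed, 3), data = train)
risk_is = mean((train$dist - predict(fit))^2)                        # in-sample risk
risk_os = mean((test$dist  - predict(fit, newdata = test))^2)        # out-of-sample risk
c(risk_is, risk_os)   # the out-of-sample risk is typically the larger of the two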

Figure 4 shows the respective evolution of \widehat{\mathcal{R}}^{\text{IS}} and \widehat{\mathcal{R}}^{\text{OS}} according to the complexity of the model (number of degrees in a polynomial regression, number of knots in splines, etc). The more complex the model, the more \widehat{\mathcal{R}}^{\text{IS}} will decrease (this is the red curve, below). But that is not what we are interested in here: we want a model that predicts well on new data (i.e. out-of-sample). As Figure 4 shows, if the model is too simple, it does not predict well (not even on the in-sample data). But what we can see is that if the model is too complex, we are in a situation of “overfitting”: the model starts to model the noise. Of course, this figure should remind us of the one we have seen in our second post of that series.

Figure 4 : Generalization, under- and over-fitting

Instead of splitting the database in two, with some of the data used to calibrate the model and some to study its performance, it is also possible to use cross-validation. To present the general idea, we can go back to the “jackknife”, introduced by Quenouille (1949) (and formalized by Quenouille (1956) and Tukey (1958)), commonly used in statistics to reduce bias. Indeed, if we assume that \{y_1,\cdots,y_n\} is a sample drawn from a distribution F_\theta, and that we have an estimator T_n (\mathbf{y})=T_n (y_1,\cdots,y_n), but that this estimator is biased, with \mathbb{E}[T_n (\mathbf{Y})]=\theta+O(n^{-1}), it is possible to reduce the bias by considering \widetilde{T}_n(\mathbf{y})=\frac{1}{n}\sum_{i=1}^n T_{n-1}(\mathbf{y}_{(i)})\text{ where }\mathbf{y}_{(i)}=(y_1,\cdots,y_{i-1},y_{i+1},\cdots,y_n). It can then be shown that \mathbb{E}[\widetilde{T}_n(\mathbf{Y})]=\theta+O(n^{-2}). Cross-validation is based on the same idea: building an estimator by removing one observation. Since we want to build a predictive model, we compare the forecast obtained with the estimated model and the removed observation, \widehat{\mathcal{R}}^{\text{ CV}}=\frac{1}{n}\sum_{i=1}^n \ell(y_i,\widehat{m}_{(i)}(\mathbf{x}_i)). We then speak of the “leave-one-out” (loocv) method.
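
Here is a minimal leave-one-out sketch, with a quadratic loss (the dataset and the model are, again, illustrative assumptions): each observation is removed in turn, the model is refitted, and the squared prediction errors on the removed points are averaged.

loocv = rep(NA, nrow(cars))
for(i in 1:nrow(cars)){
  # fit the model without observation i, and predict that observation
  fit_i    = lm(dist ~ poly(speed, 3), data = cars[-i, ])
  loocv[i] = (cars$dist[i] - predict(fit_i, newdata = cars[i, ]))^2
}
mean(loocv)   # leave-one-out estimate of the predictive risk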

This technique reminds us of the traditional method used to find the optimal parameter in exponential smoothing methods for time series. In simple smoothing, we construct a forecast from a time series as {}_t\widehat{y}_{t+1} =\alpha\cdot{}_{t-1}\widehat{y}_t +(1-\alpha)\cdot y_t, where \alpha\in[0,1], and we consider as “optimal” \alpha^\star = \underset{\alpha\in[0,1]}{\text{argmin}}\left\lbrace \sum_{t=2}^T \ell({}_{t-1}\widehat{y}_{t},y_{t}) \right\rbrace, as described in Hyndman et al. (2009).
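
As a small sketch of that procedure (the Nile series is an illustrative choice; the recursion is the one written above), the smoothing parameter can be obtained by minimizing the one-step-ahead quadratic forecast error,

y = as.numeric(Nile)
sse = function(alpha){
  yhat = y[1]                       # initial forecast
  err  = 0
  for(t in 2:length(y)){
    err  = err + (y[t] - yhat)^2    # one-step-ahead quadratic error
    yhat = alpha * yhat + (1 - alpha) * y[t]   # same recursion as in the text
  }
  err
}
optimize(sse, interval = c(0, 1))$minimum   # "optimal" smoothing parameter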

The main problem with the leave-one-out method is that it requires the calibration of n models, which can be problematic in large dimension. An alternative method is k-fold cross-validation, which consists in using a partition of \{1,\cdots,n\} into k groups (or folds) of the same size, \mathcal{I}_1,\cdots,\mathcal{I}_k; let us write \mathcal{I}_{\bar j}=\{1,\cdots,n\}\setminus \mathcal{I}_j. Denoting by \widehat{m}_{(j)} the model built on the sample \mathcal{I}_{\bar j}, we then set \widehat{\mathcal{R}}^{k-\text{ CV}}=\frac{1}{k}\sum_{j=1}^k \mathcal{R}_j\text{ where }\mathcal{R}_j=\frac{k}{n}\sum_{i\in\mathcal{I}_{{j}}} \ell(y_i,\widehat{m}_{(j)}(\mathbf{x}_i)). Standard cross-validation, where only one observation is removed each time (loocv), is a special case, with k=n. Using k=5 or 10 has a double advantage over k=n: (1) the number of estimations to be made is much smaller, 5 or 10 rather than n; (2) the samples used for estimation are less similar and therefore less correlated with each other, which tends to avoid excess variance, as recalled by James et al. (2013).
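
A minimal k-fold sketch, with k=10 and the same illustrative dataset and loss as above: the sample is randomly partitioned into k blocks, each block is used once as a validation set, and the k risks are averaged.

set.seed(123)
k     = 10
folds = sample(rep(1:k, length.out = nrow(cars)))   # random partition into k blocks
R_j   = rep(NA, k)
for(j in 1:k){
  # fit on all folds except j, evaluate on fold j
  fit_j  = lm(dist ~ poly(speed, 3), data = cars[folds != j, ])
  pred_j = predict(fit_j, newdata = cars[folds == j, ])
  R_j[j] = mean((cars$dist[folds == j] - pred_j)^2)
}
mean(R_j)   # k-fold estimate of the predictive risk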

Another alternative is to use bootstrapped samples. Let \mathcal{I}_b be a sample of size n obtained by drawing with replacement from \{1,\cdots,n\}, to determine which observations (y_i,\mathbf{x}_i) will be kept in the training sample (at each draw). Write \mathcal{I}_{\bar b}=\{1,\cdots,n\}\setminus\mathcal{I}_b. Denoting by \widehat{m}_{(b)} the model built on sample \mathcal{I}_b, we then set \widehat{\mathcal{R}}^{\text{ B}}=\frac{1}{B}\sum_{b=1}^B \mathcal{R}_b\text{ where }\mathcal{R}_b=\frac{n_{\overline{b}}}{n}\sum_{i\in\mathcal{I}_{\overline{b}}} \ell(y_i,\widehat{m}_{(b)}(\mathbf{x}_i)), where n_{\bar b} is the number of observations that were not kept in \mathcal{I}_b. It should be noted that, with this technique, on average e^{-1}\approx 36.8\% of the observations do not appear in the bootstrap sample, which is the same order of magnitude as the proportions used when creating a calibration sample and a test sample. In fact, as Stone (1977) showed, the minimization of AIC is comparable to the cross-validation criterion, and Shao (1997) showed that the minimization of BIC corresponds to k-fold cross-validation, with k=n/\log n.
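
And a minimal sketch of that bootstrap (out-of-bag) validation, still with the same illustrative dataset and model: each model is fitted on a sample drawn with replacement, and its risk is evaluated on the observations that were not drawn.

set.seed(123)
B   = 100
R_b = rep(NA, B)
for(b in 1:B){
  idx   = sample(1:nrow(cars), size = nrow(cars), replace = TRUE)
  oob   = setdiff(1:nrow(cars), idx)          # observations not drawn (about 36.8% on average)
  fit_b = lm(dist ~ poly(speed, 3), data = cars[idx, ])
  R_b[b] = mean((cars$dist[oob] - predict(fit_b, newdata = cars[oob, ]))^2)
}
mean(R_b)   # bootstrap (out-of-bag) estimate of the predictive risk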

All the techniques mentioned here belong to the “machine learning” part since they rely on automatic, computational procedures, and no probabilistic foundations are necessary. In many cases we did use the notation m^\star (at least in the first posts on “machine learning” techniques) to highlight the fact that we want some sort of “optimal” model – and to make a distinction with the estimators \widehat{m} considered earlier, when we had some probabilistic framework. But of course, it is possible (and necessary) to build bridges between those two cultures…

References are online here. As explained in the introduction, this is some sort of online version of an introduction to our joint paper with Emmanuel Flachaire and Antoine Ly, Econometrics and Machine Learning (initially written in French), that will actually appear soon in the journal Economics and Statistics (in English and in French).

Classification from scratch, bagging and forests 10/8

Tenth post of our series on classification from scratch. Today, we will see the heuristics behind bagging techniques.

Often, bagging is associated with trees, to generate forests. But actually, it is possible to use bagging with any kind of model. Recall that bagging means “bootstrap aggregation”. So, consider a model m:\mathcal{X}\rightarrow \mathcal{Y}. Let \widehat{m}_{S} denote the estimator of m obtained from sample S=\{(y_i,\mathbf{x}_i)\}, i=1,\cdots,n.

Consider now some bootstrap sample, S_b=\{(y_i,\mathbf{x}_i)\}, where the indices i are randomly drawn from \{1,\cdots,n\} (with replacement). Based on that sample, estimate \widehat{m}_{S_b}. Then draw many samples, and consider the aggregation of the estimators obtained, using either a majority rule, or the average of probabilities (if a probabilistic model was considered). Hence \widehat{m}^{bag}(\mathbf{x})=\frac{1}{B}\sum_{b=1}^B \widehat{m}_{S_b}(\mathbf{x})

Bagging logistic regression #1

Consider the case of logistic regression. To generate a bootstrap sample, it is natural to use the technique described above, i.e. draw pairs (y_i,\mathbf{x}_i) randomly, uniformly (with probability 1/n), with replacement. Consider here the small dataset, just to visualize. For the b part of bagging, use the following code

L_logit = list()
n = nrow(df)
for(s in 1:1000){
  # draw a bootstrap sample (with replacement) and fit a logistic regression on it
  df_s = df[sample(1:n,size=n,replace=TRUE),]
  L_logit[[s]] = glm(y~., df_s, family=binomial)}

Then we should aggregate over the 1000 models, to get the agg part of bagging,

p = function(x){
  nd=data.frame(x1=x[1], x2=x[2]) 
  unlist(lapply(1:1000,function(z) predict(L_logit[[z]],newdata=nd,type="response")))}

We now have a prediction for any new observation

vu = seq(0,1,length=101)
vv = outer(vu,vu,Vectorize(function(x,y) mean(p(c(x,y)))))
image(vu,vu,vv,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(vu,vu,vv,levels = .5,add=TRUE)

Bagging logistic regression #2

Another technique that can be used to generate a bootstrap sample is to keep all the \mathbf{x}_i‘s but, for each of them, to draw (randomly) a value for y, with Y_{i,b}\sim\mathcal{B}(\widehat{m}_{S}(\mathbf{x}_i)), since \widehat{m}(\mathbf{x})=\mathbb{P}[Y=1|\mathbf{X}=\mathbf{x}]. Thus, the code for the b part of the bagging algorithm is now

L_logit = list()
n = nrow(df)
reg = glm(y~x1+x2, df, family=binomial)
for(s in 1:1000){
  # keep the x's, and draw y from a Bernoulli with the fitted probabilities
  df_s = df
  df_s$y = factor(rbinom(n,size=1,prob=predict(reg,type="response")),labels=0:1)
  L_logit[[s]] = glm(y~., df_s, family=binomial)
}

The agg part of the bagging algorithm remains unchanged. Here we obtain

vu = seq(0,1,length=101)
vv = outer(vu,vu,Vectorize(function(x,y) mean(p(c(x,y)))))
image(vu,vu,vv,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(vu,vu,vv,levels = .5,add=TRUE)


Of course, we can use that code to check the predictions obtained on the observations we have in our sample. Just for a change, consider here the myocarde data. The entire code is here

L_logit = list()
n = nrow(myocarde)
reg = glm(as.factor(PRONO)~., myocarde, family=binomial)
for(s in 1:1000){
  myocarde_s = myocarde
  myocarde_s$PRONO = 1*rbinom(n,size=1,prob=predict(reg,type="response"))
  L_logit[[s]] = glm(as.factor(PRONO)~., myocarde_s, family=binomial)
}
p = function(x){
  nd=data.frame(FRCAR=x[1], INCAR=x[2], INSYS=x[3], PRDIA=x[4], 
                PAPUL=x[4], PVENT=x[5], REPUL=x[6]) 
  unlist(lapply(1:1000,function(z) predict(L_logit[[z]],newdata=nd,type="response")))}

For the first observation, with our 1000 simulated datasets and our 1000 models, we obtain the following estimates of the probability of dying.

histo = function(i){
  x = as.numeric(myocarde[i,1:7])
  v_x = p(x)
  hist(v_x,proba=TRUE,breaks=seq(0,1,by=.05),xlab="",main="",
       col=rep(c(rgb(0,0,1,.4),rgb(1,0,0,.4)),each=10),ylim=c(0,5))
  segments(mean(v_x),0,mean(v_x),5,col="red",lty=2)
  points(myocarde$PRONO[i],0,pch=19,cex=2)
  xi = round(mean(v_x>.5)*1000)/10   # percentage of models with a predicted probability above 50%
  text(.75,-.1,paste(xi,"%",sep=""),col=rgb(1,0,0,.6))}
histo(1)
histo(4)
histo(1)
histo(4)

Hence, for the first observation, in 77.8% of the models, the predicted probability was higher than 50%, and the average probability was actually close to 75%.

Or, for observation 22, we get predictions very close to the ones for the first observation (except that the first patient died, while the 22nd survived)

histo(23)
histo(11)

and here is what we observe

Bagging trees

Let’s now get back to our trees, mentioned in the previous post. Bagging was introduced in 1994 by Leo Breiman in Bagging Predictors. While the first section describes the procedure, the second one introduces “Bagging Classification Trees”. Trees are nice for interpretation, but most of the time, they are rather poor predictors. The idea of bagging was to improve the accuracy of classification trees.

The idea of bagging is to generate a lot of trees

library(rpart)
library(rpart.plot)
clr12 = c("#8dd3c7","#ffffb3","#bebada","#fb8072","#80b1d3","#fdb462","#b3de69","#fccde5","#d9d9d9","#bc80bd","#ccebc5","#ffed6f")
n = nrow(myocarde)
par(mfrow=c(4,3))
sed=c(1,2,4,5,6,10,11,21,22,24,27,28,30)
for(i in 1:12){
  set.seed(sed[i])
  idx  = sample(1:n, size=n, replace=TRUE)
  cart = rpart(PRONO~., myocarde[idx,])
  prp(cart,type=2,extra=1,box.col=clr12[i])}


The strategy is actually the same as before. For the bootstrap part, store the trees in a list

L_tree = list()
for(s in 1:1000){
  idx = sample(1:n, size=n, replace=TRUE)
  L_tree[[s]] = rpart(as.factor(PRONO)~., myocarde[idx,])
}

and for the aggregation part, just take the average of predicted probabilities

p = function(x){
  nd=data.frame(FRCAR=x[1], INCAR=x[2], INSYS=x[3], PRDIA=x[4], 
                PAPUL=x[4], PVENT=x[5], REPUL=x[6]) 
  unlist(lapply(1:1000,function(z) predict(L_tree[[z]],newdata=nd,type="prob")[,2]))}

Since we cannot visualize the predictions with this example, let us run the same code on the smaller dataset

L_tree = list()
n = nrow(df)
for(s in 1:1000){
  idx = sample(1:n, size=n, replace=TRUE)
  L_tree[[s]] = rpart(y~x1+x2, df[idx,],control = rpart.control(cp = 0.25,
minsplit = 2))
}
p = function(x){
  nd=data.frame(x1=x[1], x2=x[2]) 
  unlist(lapply(1:1000,function(z) predict(L_tree[[z]],newdata=nd,type="prob")[,2]))}
vu=seq(0,1,length=101)
vv=outer(vu,vu,Vectorize(function(x,y) mean(p(c(x,y)))))
image(vu,vu,vv,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(vu,vu,vv,levels = .5,add=TRUE)

From bags to forests

Here, we grew a lot of trees, but it is not stricto sensu a random forest algorithm, as introduced in 1995 in Random decision forests. Actually, the difference lies in the creation of the decision trees. To understand what happens, get back to the previous post on classification trees. As we have seen, when we have a node, we look at possible splits: we consider all possible variables, and all possible thresholds. The strategy here will be to draw randomly k variables out of p (with, of course, k<p, for instance k=\sqrt{p}). That is interesting in high dimension, because at each split we would otherwise have to look at all variables and all cutoffs, and that can take quite some time (especially with the bootstrap procedure, where the goal will be to grow 1000 trees).
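
As a minimal sketch of that idea (the use of the randomForest package and the parameter values are my choice here, not part of the original post), the mtry argument is precisely the number k of variables drawn at random at each split, with k=\sqrt{p} being the usual default for classification,

library(randomForest)
# grow 500 bootstrapped trees, drawing mtry = floor(sqrt(7)) = 2 variables at each split
rf = randomForest(as.factor(PRONO) ~ ., data = myocarde,
                  ntree = 500, mtry = floor(sqrt(7)))
rf$err.rate[500, "OOB"]    # out-of-bag misclassification error
# predict(rf, newdata = myocarde, type = "prob")[, 2]  # probabilities averaged over the trees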

To be continued…

Graduate Course on Advanced Tools for Econometrics (1)

This Monday, I will be giving the first part of the (crash) graduate course on advanced tools for econometrics. It will take place in Rennes, in the IMAPP room, and I have been told that there will be a video link with Nantes and Angers. Slides for the morning are online, as well as slides for the afternoon.

In the morning, we will talk about smoothing techniques, and in the afternoon, it will be on simulations and bootstrap techniques.

Graduate Course on Advanced Methods in Econometrics

I will give a short graduate course for PhD students, in Rennes, on Thursday mornings, in March (2nd, 9th, 23rd and 30th). The agenda will be

  1. Nonlinear Regression Models and Smoothing Techniques

  2. Bootstrapping and Regression

  3. Penalized Regression Models and LASSO

  4. Quantile Regression and Expectiles

There will be slides available by the end of February.

 

Confidence Regions for Parameters in the Simplex

Consider here the case where, in some parametric inference problem, the parameter is a point in the simplex,

For instance, consider some regression, on compositional data,

> library(compositions)
>  data(DiagnosticProb)
>  Y=DiagnosticProb[,"type"]-1
>  X=DiagnosticProb[,c("A","B","C")]
>  model = glm(Y~ilr(X),family=binomial)
>  b = ilrInv(coef(model)[-1],orig=X)
>  as.numeric(b)
[1] 0.3447106 0.2374977 0.4177917

We can visualize that estimator on the simplex, using

>  tripoint=function(s){
+    p=s/sum(s)
+    abc2xy(matrix(p,1,3))
+  }

>  lab=LETTERS[1:3]
>  xl=c(-.1,1.25)
>  yl=c(-.1,1.15)
>  library(trifield)
>  A=abc2xy(matrix(c(1,0,0),1,3)) 
>  B=abc2xy(matrix(c(0,1,0),1,3))
>  C=abc2xy(matrix(c(0,0,1),1,3)) 
>  plot(0:1,0:1,col="white",
+  xlim=xl,ylim=yl,xlab="",ylab="",axes=FALSE)
>  polygon(rbind(A,B,C),col="light yellow")
>  text(B[1],-.05,lab[2])
>  text(A[1],1.05,lab[1])
>  text(C[1],-.05,lab[3])
>  segments((A[1]+C[1])/2,(A[2]+C[2])/2,B[1],B[2],col="grey",lty=2)
>  segments((A[1]+B[1])/2,(A[2]+B[2])/2,C[1],C[2],col="grey",lty=2)
>  segments((B[1]+C[1])/2,(B[2]+C[2])/2,A[1],A[2],col="grey",lty=2)
>  points(tripoint(b),pch=19,cex=2,col="red")

If we want to compute a ‘confidence region’, we can either use Bayesian models (with a Dirichlet distribution as prior distribution), or use bootstrap. We will use here the second idea

>  MB=matrix(NA,1e4,2)
>  for(sim in 1:1e4){
+    idx=sample(1:nrow(DiagnosticProb),
+    size=nrow(DiagnosticProb),replace=TRUE)
+  Y=DiagnosticProb[idx,"type"]-1
+  X=DiagnosticProb[idx,c("A","B","C")]
+  model = glm(Y~ilr(X),family=binomial)
+  MB[sim,]=tripoint(as.numeric(
+    ilrInv(coef(model)[-1],orig=X)))}

To get some ‘confidence region’, we can then use the bagplot, to get either a region where 50% of the bootstrapped estimators are, or 95%,

>  library(aplpack)
> P1=bagplot(MB[,1],MB[,2], factor =1.96, cex=.9,
+ dkmethod=2,show.baghull=TRUE) 
> P2=bagplot(MB[,1],MB[,2], factor =0.67, cex=.9,
+ dkmethod=2,show.baghull=TRUE) 

Then we can easily plot those two regions,

>  plot(0:1,0:1,col="white")
>  polygon(rbind(A,B,C),col="light yellow")
>  text(B[1],-.05,lab[2])
>  text(A[1],1.05,lab[1])
>  text(C[1],-.05,lab[3])
>  polygon(P1$hull.loop,col="yellow",border=NA)
>  polygon(P2$hull.loop,col="orange",border=NA)
>  segments((A[1]+C[1])/2,(A[2]+C[2])/2,B[1],B[2],col="grey",lty=2)
>  segments((A[1]+B[1])/2,(A[2]+B[2])/2,C[1],C[2],col="grey",lty=2)
>  segments((B[1]+C[1])/2,(B[2]+C[2])/2,A[1],A[2],col="grey",lty=2)
>  points(tripoint(b),pch=19,cex=2,col="red")

 

Some thoughts on Economics, Mathematics, Econometrics, Statistics, Machine Learning, etc

There were a lot of posts, recently, related to those topics, starting with Noah Smith’s piece entitled “Economics has a Math Problem” and more recently “Econometrics, Math, and Machine Learning…what?” by Matt Bogard. I don’t yet have a clear mind on those issues, but there are still a few thoughts that I wanted to share. I did not really want to, but I’ve been asked, on Twitter, and I thought it might be good to write them down, to clarify some ideas I have, but also (probably, hopefully) to get interesting feedback.

Continue reading Some thoughts on Economics, Mathematics, Econometrics, Statistics, Machine Learning, etc

An Attempt to Understand Boosting Algorithm(s)

Last Tuesday, at the annual meeting of the French Economic Association, I was having lunch with Alfred, and while we were chatting about modeling issues (econometric models against machine learning prediction), he asked me what boosting was. Since I could not be very specific, we ended up looking at the Wikipedia page.

Boosting is a machine learning ensemble meta-algorithm for reducing bias primarily and also variance in supervised learning, and a family of machine learning algorithms which convert weak learners to strong ones

One should admit that it is not very informative. But at least, there is the idea that ‘weak learners’ can be used to provide a good predictor. Now, to be honest, I guess I understand the concept. But I still can’t reproduce what I got with standard ‘boosting’ packages.

There are a lot of publications about the concept of ‘boosting’. In 1988, Michael Kearns published Thoughts on Hypothesis Boosting, which is probably the oldest one. About the algorithms, it is possible to find some references. Consider for instance Improving Regressors using Boosting Techniques, by Harris Drucker. Or The Boosting Approach to Machine Learning An Overview by Robert Schapire, among many others. In order to illustrate the use of boosting in the context of regression (and not classification, since I believe it provides a better visualisation) consider the section in Dong-Sheng Cao’s In The boosting: A new idea of building models.

Continue reading An Attempt to Understand Boosting Algorithm(s)

Normality of the estimators in a regression

This week in class, we discussed the normality of the estimators in a linear regression. Assuming normality of the residuals, we saw in class that http://freakonometrics.blog.free.fr/public/perso5/ols-002.gif is a Gaussian estimator. In particular, each of the estimators is then Gaussian, in the sense that http://freakonometrics.blog.free.fr/public/perso5/ols--0003.gif, which can be visualized on the following graph (the intercept is on the x-axis, and the slope on the y-axis), with a 95% confidence interval,

Actually, since the variance is unknown (but can be estimated), if we replace the variance of the residuals by its estimator, the distribution of the estimator is a Student t distribution. The same goes for the estimator of the slope of the regression line, http://freakonometrics.blog.free.fr/public/perso5/ols---ooo4.gif

Since the pair is also (jointly) Gaussian, we can construct not a confidence interval anymore (we are no longer in dimension 1) but a confidence ellipse. We choose an elliptical shape because it is the smallest region in which we will lie with probability 95% (as discussed in an old post).

But to better understand this notion of confidence ellipse, the simplest approach is to resample from the dataset. Indeed, since we only have one dataset, we have one estimator. And the discussion about the distribution of our estimator is purely theoretical. To visualize the distribution of our estimator, we would need hundreds, even thousands, of similar datasets. Which we do not have. The solution is then to draw points at random from our sample (with replacement), i.e. to do bootstrap,

set.seed(1)
COEF=matrix(NA,10000,2)
for(s in 1:nrow(COEF)){
I=sample(1:nrow(cars),nrow(cars),replace=TRUE)
COEF[s,]=lm(dist~speed,data=cars[I,])$coefficients
}

If we look at what happens, draw after draw, we generate a whole bunch of samples and, for each sample, we fit a regression line.

http://freakonometrics.blog.free.fr/public/perso5/BOOOT.gif

The distributions – over all these samples – of the intercept and of the slope do indeed look Gaussian,

hist(COEF[,1],col="light blue",prob=TRUE)
u=seq(min(COEF[,1]),max(COEF[,1]),length=500)
v=dnorm(u,mean(COEF[,1]),sd(COEF[,1]))
lines(u,v,lwd=3,col="red")
hist(COEF[,2],col="light blue",prob=TRUE)
u=seq(min(COEF[,2]),max(COEF[,2]),length=500)
v=dnorm(u,mean(COEF[,2]),sd(COEF[,2]))
lines(u,v,lwd=3,col="red")

for the distribution of the slope estimator, while for the intercept estimator, we get the following distribution,

If we now look at the joint distribution,

we recover an elliptical shape for the cloud of points (and a strong negative correlation between the two estimators). But if we dig a little deeper,

library(ellipse)
reg=lm(dist~speed,data=cars)
e=ellipse(reg)
plot(e,type="l",lwd=2)
polygon(e,col="light blue")
points(COEF,cex=.5)

the ellipse does not (quite) coincide with the one obtained theoretically. Which suggests that the assumption of normality of the residuals might need to be reconsidered…

Which residuals, and which distribution, should we simulate?

Following a question about residuals, I will try to take two minutes to go back over a point that I did not have time to cover in class. The most interesting residuals in claims reserving (and in GLMs) are the Pearson residuals,

http://freakonometrics.blog.free.fr/public/perso4/res-boot-01.gif

But there are also so-called adjusted residuals, which account for the fact that the number of parameters is relatively large here, compared with the number of observations. We then set

http://freakonometrics.blog.free.fr/public/perso4/res-boot-2.gif

At first sight, using one or the other should give the same thing when generating pseudo-triangles, since in one case we would use

http://freakonometrics.blog.free.fr/public/perso4/res-boot.gif

and in the other

http://freakonometrics.blog.free.fr/public/perso4/res-boot-3.gif

In short, multiplying by a constant and then dividing by the same constant amounts to the same thing. Except that it may seem legitimate to set

http://freakonometrics.blog.free.fr/public/perso4/res-boot-05.gif

(we will come back to this in a moment). In the meantime, here are our two sets of residuals

> (E=residuals(regp,"pearson"))
1             2             3             4
9.488238e-01  2.404895e-02  1.168421e-01 -1.082940e+00
5             6             7             8
1.302749e-01 -1.007348e-13 -1.128013e+00  2.773332e-01
9            10            11            13
5.669707e-02  8.919633e-01 -2.110748e-01 -1.533031e+00
14            15            16            19
-2.213449e+00 -1.024162e+00  4.237393e+00 -4.899687e-01
20            21            25            26
7.929194e-01 -2.972380e-01 -4.275912e-01  4.140426e-01
31
-6.202125e-15
> n=sum(is.na(Y)==FALSE)
> k=ncol(PAID)+nrow(PAID)-1
> (R=residuals(regp,"pearson")*sqrt(n/(n-k)))
1             2             3             4
1.374976e+00  3.485024e-02  1.693203e-01 -1.569329e+00
5             6             7             8
1.887862e-01 -1.459787e-13 -1.634646e+00  4.018940e-01
9            10            11            13
8.216186e-02  1.292578e+00 -3.058764e-01 -2.221573e+00
14            15            16            19
-3.207593e+00 -1.484151e+00  6.140566e+00 -7.100321e-01
20            21            25            26
1.149049e+00 -4.307387e-01 -6.196386e-01  6.000048e-01
31
-8.987734e-15

The other point I wanted to come back to, regarding the simulations, is that the Poisson distribution may not be appropriate for simulating scenarios of future payments. Indeed, if we run a quasi-Poisson regression, we see that the overdispersion parameter is far from negligible,

> regqp=glm(Y~as.factor(D)+as.factor(A),
+     data=base,family=quasipoisson(link="log"))
> summary(regqp)

Call:
glm(formula = Y ~ as.factor(D) + as.factor(A), 
family = quasipoisson(link = "log"),
data = base)

Deviance Residuals:
Min       1Q   Median       3Q      Max
-2.3426  -0.4996   0.0000   0.2770   3.9355

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)       8.05697    0.02769 290.995  < 2e-16 ***
as.factor(D)2    -0.96513    0.02427 -39.772 2.41e-12 ***
as.factor(D)3    -4.14853    0.11805 -35.142 8.26e-12 ***
as.factor(D)4    -5.10499    0.22548 -22.641 6.36e-10 ***
as.factor(D)5    -5.94962    0.43338 -13.728 8.17e-08 ***
as.factor(D)6    -5.01244    0.39050 -12.836 1.55e-07 ***
as.factor(A)2002  0.06440    0.03731   1.726 0.115054
as.factor(A)2003  0.20242    0.03615   5.599 0.000228 ***
as.factor(A)2004  0.31175    0.03535   8.820 4.96e-06 ***
as.factor(A)2005  0.44407    0.03451  12.869 1.51e-07 ***
as.factor(A)2006  0.50271    0.03711  13.546 9.28e-08 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Dispersion parameter for quasipoisson family taken to be 3.18623

Null deviance: 46695.269  on 20  degrees of freedom
Residual deviance:    30.214  on 10  degrees of freedom
(15 observations deleted due to missingness)
AIC: NA

Number of Fisher Scoring iterations: 4

We can then consider simulating not Poisson distributions, but quasi-Poisson distributions, reusing an old piece of code,

> rqpois = function(n, lambda, phi, roundvalue = TRUE) {
+ b = phi
+  a = lambda/phi
+ r = rgamma(n, shape = a, scale = b)
+  if(roundvalue){r=round(r)}
+ return(r)
+ }

Based on these two remarks, we can go back to the code that was used to generate payment scenarios for future years,

> Yp=predict(regp,type="response",newdata=base)
> Rs=rep(NA,20000)
> for(s in 1:20000){
+ serreur=sample(erreur,
+ size=36,replace=TRUE)
+ E=matrix(serreur,6,6)
+ sY=matrix(Yp,6,6)+E*sqrt(matrix(Yp,6,6))
+ sbase=data.frame(sY=as.vector(sY),D,A)
+ sbase$sY[is.na(Y)==TRUE]=NA
+ sreg=glm(sY~as.factor(D)+as.factor(A),
+ data=sbase,family=poisson(link="log"))
+ sYp=predict(sreg,type="response",
+ newdata=sbase)
+ sYpscenario=rqpois(36,sYp,phi=3.18623)
+ Rs[s]=sum(sYpscenario[is.na(Y)==TRUE])
+ }

If we look at the density of the distribution of future payments, we obtain,

plot(density(Rs))

with, in blue, the distribution obtained from the raw Pearson residuals, simulating a Poisson distribution, and, in red, the distribution of the reserves with the two modifications. We clearly see that the quantiles (for instance) have changed substantially. This can be seen even more precisely on a box-plot

Let us spend five minutes going back over what we just did. For the first point, recall that, in order to predict future payments, we need to use

http://freakonometrics.blog.free.fr/public/perso4/siiiim-01.gif

(with the filtration corresponding to the available upper part of the triangle), i.e.

http://freakonometrics.blog.free.fr/public/perso4/siiiiim-03.gif

Since we use a Poisson model, we also assume that

http://freakonometrics.blog.free.fr/public/perso4/siiiim-5.gif

i.e.

http://freakonometrics.blog.free.fr/public/perso4/siiiiim-08.gif

that is,

http://freakonometrics.blog.free.fr/public/perso4/siiiim-04.gif

The residuals must be centered and have unit variance. At first sight, this is (roughly) the case for our Pearson residuals,

> mean(residuals(regp,"pearson"))
[1] -0.02462518
> sd(residuals(regp,"pearson"))
[1] 1.261934

except that the variance estimator is biased. Admittedly, the classical estimator is based on a normalization by a factor http://freakonometrics.blog.free.fr/public/perso4/siiiim-11.gif (standard in statistics when we have http://freakonometrics.blog.free.fr/public/perso4/siiim-10.gif observations)

> sqrt(sum((residuals(regp,"pearson")-
+ mean(residuals(regp,"pearson")))^2)/
+ (length(residuals(regp,"pearson"))-1))
[1] 1.261934

Except that here, we do not just lose one degree of freedom (from replacing the expectation of the observations by their empirical mean): in a regression, we must correct for the number of explanatory variables. And so, to get an unbiased estimator of the variance of our residuals, we must correct by a factor http://freakonometrics.blog.free.fr/public/perso4/siiiim-12.gif. Hence the use of the so-called adjusted residuals, in order to get residuals with (truly) unit variance.
For the second point, we use the fact that, in practice, the dispersion of the payments is larger than what a Poisson model would imply. And this should be taken into account in our simulations. Specifically, we want https://perso.univ-rennes1.fr/arthur.charpentier/latex/qp001.png, but also https://perso.univ-rennes1.fr/arthur.charpentier/latex/qp002.png. We saw that the overdispersion parameter cannot be assumed to equal one. We then have the choice between simulating a Gamma distribution with parameters

https://perso.univ-rennes1.fr/arthur.charpentier/latex/qp003.png and https://perso.univ-rennes1.fr/arthur.charpentier/latex/qp004.png

or simulating a negative binomial distribution, with mean https://perso.univ-rennes1.fr/arthur.charpentier/latex/qp008.png and overdispersion parameter https://perso.univ-rennes1.fr/arthur.charpentier/latex/qp009.png such that the variance is

https://perso.univ-rennes1.fr/arthur.charpentier/latex/qp010.png

So we take https://perso.univ-rennes1.fr/arthur.charpentier/latex/qp011.png while

https://perso.univ-rennes1.fr/arthur.charpentier/latex/qp012.png
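
As a minimal sketch of that negative binomial alternative (the helper below is an assumption of mine, written so that the variance equals the overdispersion parameter times the mean; it requires an overdispersion parameter larger than one),

rqnbinom = function(n, mu, phi){
  # Var = mu + mu^2/size = phi*mu  as soon as  size = mu/(phi-1)   (phi > 1)
  rnbinom(n, mu = mu, size = mu/(phi - 1))
}
# e.g., in the loop above, sYpscenario = rqnbinom(36, mu = sYp, phi = 3.18623)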

With these methods, we obtain something… that is already programmed in R,

> BootChainLadder(PAID,10000,"od.pois")
BootChainLadder(Triangle = PAID, R = 10000, 
process.distr = "od.pois")

Latest Mean Ultimate Mean IBNR SD IBNR IBNR 75% IBNR 95%
1  4,456         4,456       0.0     0.0        0        0
2  4,730         4,752      22.2    12.2       28       44
3  5,420         5,456      35.5    15.2       43       64
4  6,020         6,086      65.7    19.8       78      102
5  6,794         6,946     151.9    28.7      170      202
6  5,217         7,364   2,147.4   111.0    2,218    2,339

Totals
Latest:         32,637
Mean Ultimate:  35,060
Mean IBNR:       2,423
SD IBNR:           132
Total IBNR 75%:  2,506
Total IBNR 95%:  2,653

> quantile(Rs,c(.75,.95))
75%  95%
2509 2653

and which gives the same results as what we have just reprogrammed…

Simulations and negative increments

To follow up (quickly) on my last post, the most frequent situation where negative increments are observed is when running simulations to obtain the distribution of the amount of future payments. Recall that, to generate a pseudo-triangle, we use the Pearson residuals, and we set

http://freakonometrics.blog.free.fr/public/perso4/pseudo-triangle.gif We can see that we risk getting a negative increment if the residual is negative and larger (in absolute value) than the square root of the prediction (from our Poisson model, in this case). Yet in the triangle seen in class, we have

> source("https://perso.univ-rennes1.fr/
arthur.charpentier/bases.R")
> INC=PAID
> INC[,2:6]=PAID[,2:6]-PAID[,1:5]
> Y=as.vector(INC)
> D=rep(1:6,each=6)
> A=rep(2001:2006,6)
> base=data.frame(Y,D,A)
> reg=glm(Y~as.factor(D)+as.factor(A),
+     data=base,family=poisson(link="log"))
> Yp=predict(reg,type="response",
+ newdata=base)
> erreurs=residuals(reg,"pearson")
> min(sqrt(Yp[is.na(Y)==FALSE]))
[1] 2.868171
> min(erreurs)
[1] -2.213449

In other words, we will never get a negative increment in our loops if the residuals are generated by bootstrap. Otherwise, with parametric distributions that are not bounded from below (normal distribution, Student distribution), it is perfectly possible to obtain negative increments. The way to avoid the problem of negative increments in the loops… is probably not to take into account the scenarios in which negative increments were obtained, i.e.

R=rep(NA,10000)
for(s in 1:10000){
serreur=sample(erreurs,
size=36,replace=TRUE)
E=matrix(serreur,6,6)
sY=matrix(Yp,6,6)+E*sqrt(matrix(Yp,6,6))
if(min(sY[is.na(Y)==FALSE])>=0){
sbase=data.frame(sY=as.vector(sY),D,A)
sbase$sY[is.na(Y)==TRUE]=NA
sreg=glm(sY~as.factor(D)+as.factor(A),
data=sbase,family=poisson(link="log"))
sYp=predict(sreg,type="response",
newdata=sbase)
R[s]=sum(sYp[is.na(Y)==TRUE])}
}

A slightly cleaner strategy could be to set the increments to 0 as soon as a negative one is obtained… But once again, this is ad hoc tinkering.
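
A minimal sketch of that cleaner strategy (an assumption of mine, not code from the course) would simply truncate the simulated pseudo-increments at zero inside the loop above, instead of discarding the whole scenario,

# hypothetical variant of the loop above: truncate negative pseudo-increments at zero
sY = matrix(Yp,6,6) + E*sqrt(matrix(Yp,6,6))
sY = pmax(sY, 0)   # negative increments are set to 0 before refitting the Poisson model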

Confidence interval for predictions with GLMs

Consider a (simple) Poisson regression http://freakonometrics.hypotheses.org/files/2016/11/poiss01.gif. Given a sample http://freakonometrics.hypotheses.org/files/2016/11/poiss02.gif where http://freakonometrics.hypotheses.org/files/2016/11/poiss03.gif, the goal is to derive a 95% confidence interval for http://freakonometrics.hypotheses.org/files/2016/11/poiss04.gif given http://freakonometrics.hypotheses.org/files/2016/11/poiss05.gif, where http://freakonometrics.hypotheses.org/files/2016/11/poiss04.gif is the prediction. Hence, we want to derive a confidence interval for the prediction, not the potential observation, i.e. the dot on the graph below

> reg=glm(dist~speed,data=cars,family=poisson)
> P=predict(reg,type="response",
+ newdata=data.frame(speed=seq(-1,35,by=.2)))
> plot(cars,xlim=c(0,31),ylim=c(0,170))
> abline(v=30,lty=2)
> lines(seq(-1,35,by=.2),P,lwd=2,col="red")
> P0=predict(reg,type="response",se.fit=TRUE,
+ newdata=data.frame(speed=30))
> points(30,P0$fit,pch=4,lwd=3)

i.e.

Let http://freakonometrics.hypotheses.org/files/2016/11/poiss06.gif denote the maximum likelihood estimator of http://freakonometrics.hypotheses.org/files/2016/11/poiss07.gif. Then
http://freakonometrics.hypotheses.org/files/2016/11/poiss40.gif
where http://freakonometrics.hypotheses.org/files/2016/11/poiss101.gif is the Fisher information of http://freakonometrics.hypotheses.org/files/2016/11/poiss06.gif (from standard maximum likelihood theory). Recall that
http://freakonometrics.hypotheses.org/files/2016/11/poiss13.gif
where computation of those values is based on the following calculations
http://freakonometrics.blog.free.fr/public/latex/poiss21.gif
In the case of the log-Poisson regression
http://freakonometrics.hypotheses.org/files/2016/11/poiss36.gif
Let us get back to our initial problem.

  • confidence interval for the linear combination

A first idea to get a confidence interval for http://freakonometrics.hypotheses.org/files/2016/11/poiss49.gif is to get a confidence interval for http://freakonometrics.hypotheses.org/files/2016/11/poiss100.gif (by taking exponential values of bounds, since the exponential is a monotone function). Asymptotically, we know that
http://freakonometrics.hypotheses.org/files/2016/11/poiss40.gif

thus, an approximation for the variance matrix of http://freakonometrics.hypotheses.org/files/2016/11/poiss06.gif will be based on http://freakonometrics.hypotheses.org/files/2016/11/poiss45.gif, obtained by plugging estimators of the parameters.
Then, since http://freakonometrics.hypotheses.org/files/2016/11/poiss06.gif has an asymptotic multivariate normal distribution, any linear combination of the parameters will also be (asymptotically) normal, i.e.
http://freakonometrics.hypotheses.org/files/2016/11/poiss47.gif has a normal distribution, centered on http://freakonometrics.hypotheses.org/files/2016/11/poiss49.gif, with variance http://freakonometrics.hypotheses.org/files/2016/11/poiss102.gif where http://freakonometrics.hypotheses.org/files/2016/11/Poiss110.gif is the variance of http://freakonometrics.hypotheses.org/files/2016/11/poiss06.gif. All those quantities can be easily computed. First, we can get the variance of the estimators

> i1=sum(predict(reg,type="response"))
> i2=sum(cars$speed*predict(reg,type="response"))
> i3=sum(cars$speed^2*predict(reg,type="response"))
> I=matrix(c(i1,i2,i2,i3),2,2)
> V=solve(I)

Hence, if we compare with the output of the regression,

> summary(reg)$cov.unscaled
(Intercept)         speed
(Intercept)  0.0066870446 -3.474479e-04
speed       -0.0003474479  1.940302e-05
> V
[,1]          [,2]
[1,]  0.0066871228 -3.474515e-04
[2,] -0.0003474515  1.940318e-05

Based on those values, it is easy to derive the standard deviation for the linear combination,

> x=30
> P2=predict(reg,type="link",se.fit=TRUE,
+ newdata=data.frame(speed=x))
> P2
$fit
1
5.046034

$se.fit
[1] 0.05747075

$residual.scale
[1] 1

> sqrt(V[1,1]+2*x*V[2,1]+x^2*V[2,2])
[1] 0.05747084
> sqrt(t(c(1,x))%*%V%*%c(1,x))
[,1]
[1,] 0.05747084

And once we have the standard deviation, and normality (at least asymptotically), confidence intervals are derived; then, taking the exponential of the bounds, we get the confidence interval

> segments(30,exp(P2$fit-1.96*P2$se.fit),
+ 30,exp(P2$fit+1.96*P2$se.fit),col="blue",lwd=3)

Based on that technique, confidence intervals are no longer centered on the prediction. But who cares?

  • delta method

Actually, those who like to use “more or less” expressions for confidence intervals will not like non-centered intervals. So, an alternative is to use the delta method. Instead of writing (again) something on the theory, we can use a package which implements that method,

> estmean=t(c(1,x))%*%coef(reg)
> var=t(c(1,x))%*%summary(reg)$cov.unscaled%*%c(1,x)
> library(msm)
> deltamethod (~ exp(x1), estmean, var)
[1] 8.931232
> P1=predict(reg,type="response",se.fit=TRUE,
+ newdata=data.frame(speed=30))
> P1
$fit
1
155.4048

$se.fit
1
8.931232

$residual.scale
[1] 1

The delta method gives us (asymptotic) normality, so once we have a standard deviation, we get the confidence interval.

> segments(30,P1$fit-1.96*P1$se.fit,30,
+ P1$fit+1.96*P1$se.fit,col="blue",lwd=3)

Note that those quantities – obtained with two different approaches – are rather close here

> exp(P2$fit-1.96*P2$se.fit)
1
138.8495
> P1$fit-1.96*P1$se.fit
1
137.8996
> exp(P2$fit+1.96*P2$se.fit)
1
173.9341
> P1$fit+1.96*P1$se.fit
1
172.9101

  • bootstrap techniques

And a third method (but far from what I expect to teach in that course) is to use bootstrap techniques to assess those results, which are based on asymptotic normality (we have only 50 observations). The idea is to sample from our dataset, to run a log-Poisson regression on those new samples, and to repeat that a lot of times,
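
Here is a minimal sketch of that resampling idea (an assumption about where the post is heading, not the author’s code): resample the observations with replacement, refit the log-Poisson regression, store the prediction at speed=30, and use the quantiles of those bootstrapped predictions,

predB = rep(NA, 1000)
for(b in 1:1000){
  # draw a bootstrap sample of the 50 observations, and refit the log-Poisson model
  idx   = sample(1:nrow(cars), size = nrow(cars), replace = TRUE)
  regB  = glm(dist ~ speed, data = cars[idx, ], family = poisson)
  predB[b] = predict(regB, newdata = data.frame(speed = 30), type = "response")
}
quantile(predB, c(.025, .975))   # percentile confidence interval for the prediction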