Perfect timing: we will come back a bit to Poisson regression, on April 1st. The slides are online (slides 8) and so is the video (slides 8)
Slides 7 – Poisson model
Slides 5 – logistic regression on continuous variable(s)
Function basis and regression
In the first part of the course on linear models, we’ve seen how to construct a linear model when the vector of covariates \boldsymbol{x} is given, so that \mathbb{E}(Y|\boldsymbol{X}=\boldsymbol{x}) is either simply \boldsymbol{x}^\top\boldsymbol{\beta} (for standard linear models) or a functional of \boldsymbol{x}^\top\boldsymbol{\beta} (in GLMs). But more generally, we can consider transformations of the covariates, so that a linear model can still be used. In a very general setting, consider \sum_{j=1}^m\beta_j h_j(\boldsymbol{x}) with h_j:\mathbb{R}^p\rightarrow\mathbb{R}. The standard linear model is obtained when m=p and h_j(\boldsymbol{x})=x_j, but of course much more general models can be obtained, for instance with h_k(\boldsymbol{x})=x_j^2 or h_k(\boldsymbol{x})=x_{j}x_{j'}, which can be used to achieve higher-order Taylor expansions. In that case, we obtain the polynomial regression, which we will discuss first. We might also think of piecewise constant functions, h_k(\boldsymbol{x})=\boldsymbol{1}(x_j\in [a,b]), which can be related to regression trees (but that is not in the scope of the STT5100 course). And if we go one step further, we might think of piecewise linear or piecewise polynomial functions, possibly with additional continuity constraints, which will lead us to spline bases.
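Just to fix ideas, here is a minimal sketch (on the cars dataset, purely for illustration), where the basis functions h_j mix a polynomial term and an indicator:

reg = lm(dist ~ speed + I(speed^2) + I(speed >= 15), data = cars)   # h_1(x)=x, h_2(x)=x^2, h_3(x)=1(x>=15)
summary(reg)$coefficients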
- Polynomial regression
For pedagogical purposes, when I talk about polynomial regression, I always have in mind (in the univariate case) y=\beta_0+\beta_1x+\beta_2x^2+\cdots+\beta_kx^k+\varepsilon, but if we use
lm(y~poly(x,k))
in R, the output does not give the \beta_j's.
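As a side note, a minimal sketch: if the raw \beta_j's are really what we want, the raw=TRUE option of poly (equivalent to using I() terms) fits the model directly in the monomial basis; here on the cars dataset, used again below,

reg_raw  = lm(dist ~ poly(speed, 3, raw = TRUE), data = cars)
reg_orth = lm(dist ~ poly(speed, 3), data = cars)
coef(reg_raw)    # the beta_j's of 1, x, x^2, x^3
coef(reg_orth)   # coefficients in the orthogonal basis: different values, same fitted curve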
As discussed in Kennedy & Gentle (1980) Statistical Computing,

Recall that orthogonal polynomials are defined with respect to the classical inner product (on the finite interval (a,b)) {\displaystyle \langle f,g\rangle =\int _{a}^{b}f(x)g(x)~\mathrm {d} x}. A sequence of orthogonal polynomials is a sequence (P_n) where P_n is a polynomial of degree n, for all n, and such that P_m\perp P_n for all m\neq n. Note that those polynomials are orthogonal with respect to the inner product defined above, i.e. for a given finite interval (a,b). If (a,b) changes, the polynomials will be different.
A popular family of orthogonal polynomials, on the finite interval (-1,+1), is the family of Legendre polynomials, satisfying {\displaystyle \int _{-1}^{1}P_{m}(x)P_{n}(x)~\mathrm {d} x=0} as soon as m\neq n. Those polynomials satisfy Bonnet’s recursion formula {\displaystyle (n+1)P_{n+1}(x)=(2n+1)xP_{n}(x)-nP_{n-1}(x)} or Rodrigues’ formula {\displaystyle P_{n}(x)={\frac {1}{2^{n}n!}}{\frac {d^{n}}{dx^{n}}}(x^{2}-1)^{n}}. The first ones are {\displaystyle P_{0}(x)=1}, {\displaystyle P_{1}(x)=x}, {\displaystyle P_{2}(x)={\frac {3x^{2}-1}{2}}}, {\displaystyle P_{3}(x)={\frac {5x^{3}-3x}{2}}} and {\displaystyle P_{4}(x)={\frac {35x^{4}-30x^{2}+3}{8}}}.
Interestingly, we can get those polynomial functions using
library(orthopolynom)
(leg4coef = legendre.polynomials(n=4))
[[1]]
1
[[2]]
x
[[3]]
-0.5 + 1.5*x^2
[[4]]
-1.5*x + 2.5*x^3
[[5]]
0.375 - 3.75*x^2 + 4.375*x^4
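As a complementary check (a small sketch), we can also build those polynomials directly from Bonnet’s recursion, and verify numerically that two of them are orthogonal on (-1,+1):

bonnet = function(n, Pn, Pn1){
  force(n); force(Pn); force(Pn1)
  function(x) ((2*n+1)*x*Pn(x) - n*Pn1(x))/(n+1)
}
P = list(function(x) rep(1,length(x)), function(x) x)    # P_0 and P_1
for(n in 1:3) P[[n+2]] = bonnet(n, P[[n+1]], P[[n]])
P[[4]](.5)                                               # P_3(1/2) = (5/8-3/2)/2 = -0.4375
integrate(function(x) P[[3]](x)*P[[4]](x), -1, 1)$value  # ~ 0 : P_2 and P_3 are orthogonal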
Of course, there are many families of orthogonal polynomials (Jacobi polynomials, Laguerre polynomials, Hermite polynomials, etc). Now, in R, there is the standard poly function, that we use in polynomial regression.
x = seq(-1,1,length=101)
y = poly(x,4)
y
                   1           2             3           4
  [1,] -1.706475e-01 0.215984813 -2.480753e-01 0.270362873
  [2,] -1.672345e-01 0.203025724 -2.183063e-01 0.216290298
...
[100,]  1.672345e-01 0.203025724  2.183063e-01 0.216290298
[101,]  1.706475e-01 0.215984813  2.480753e-01 0.270362873
attr(,"coefs")
attr(,"coefs")$alpha
[1] 3.157229e-17 2.655145e-16 9.799244e-17 5.368224e-16
attr(,"coefs")$norm2
[1]   1.0000000 101.0000000  34.3400000   9.3377328   2.4472330   0.6330176
attr(,"degree")
[1] 1 2 3 4
attr(,"class")
[1] "poly"   "matrix"

But these are not Legendre polynomials… As explained in 李哲源's post on stackoverflow, the idea is to start with P_{-1}(x)=0, P_{0}(x)=1 and P_{1}(x)=x, and then to define \ell_n=\langle P_n,P_n\rangle as well as \alpha_n=\langle xP_n,P_n\rangle/\ell_n=\langle P_n^2,P_1\rangle/\ell_n and \beta_n=\ell_n/\ell_{n-1}. Finally, define recursively {\displaystyle P_{n}(x)=(x-\alpha_{n-1})P_{n-1}(x)-\beta_{n-1}P_{n-2}(x)} and its normalized version, \tilde{P}_{n}=P_n/\sqrt{\ell_n}. That is what poly computes.
So, for pedagogical purposes, I said that I like to use y=\boldsymbol{x}^\top\boldsymbol{\beta}+\varepsilon where \boldsymbol{x}=(1,x,x^2,\cdots,x^{k-1},x^k). And actually, when using poly, we use the QR decomposition of that matrix. As discussed in 李哲源's post, we can almost reproduce the poly function using
my_poly <- function(x, degree = 1) {
  xbar = mean(x)
  x = x - xbar
  QR = qr(outer(x, 0:degree, "^"))
  X = qr.qy(QR, diag(diag(QR$qr), length(x), degree + 1))[, -1, drop = FALSE]
  X2 = X * X
  norm2 = colSums(X * X)
  alpha = drop(crossprod(X2, x)) / norm2
  beta = norm2 / (c(length(x), norm2[-degree]))
  colnames(X) = 1:degree
  scale = sqrt(norm2)
  X = X * rep(1 / scale, each = length(x))
  X
}
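As a quick sanity check (a sketch), the columns returned by poly are orthonormal, and my_poly reproduces them up to numerical noise (and possibly up to column signs, depending on the platform):

x = seq(-1, 1, length = 101)
round(crossprod(poly(x, 4)), 10)               # identity matrix: orthonormal columns
max(abs(abs(my_poly(x, 4)) - abs(poly(x, 4)))) # ~ 0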
Nevertheless, the two models are equivalent. More precisely,
plot(cars)
reg1 = lm(dist~speed+I(speed^2)+I(speed^3),data=cars)
reg2 = lm(dist~poly(speed,3),data=cars)
u = seq(3,26,by=.1)
v1 = predict(reg1,newdata=data.frame(speed=u))
v2 = predict(reg2,newdata=data.frame(speed=u))
lines(u,v1,col="blue")
lines(u,v2,col="red",lty=2)

We have exactly the same prediction here
v1[u==15]
     121
38.43919
v2[u==15]
     121
38.43919
And probably also quite interesting: the coefficients do not have the same interpretation (since we do not have the same basis), but the p-value of the highest-degree term is exactly the same here! With the same confidence, the two models reject the polynomial of degree three (the cubic term is not significant),
summary(reg1)

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) -19.50505   28.40530  -0.687    0.496
speed         6.80111    6.80113   1.000    0.323
I(speed^2)   -0.34966    0.49988  -0.699    0.488
I(speed^3)    0.01025    0.01130   0.907    0.369

Residual standard error: 15.2 on 46 degrees of freedom
Multiple R-squared:  0.6732,  Adjusted R-squared:  0.6519
F-statistic: 31.58 on 3 and 46 DF,  p-value: 3.074e-11

summary(reg2)

Coefficients:
                Estimate Std. Error t value Pr(>|t|)
(Intercept)        42.98       2.15  19.988  < 2e-16 ***
poly(speed, 3)1   145.55      15.21   9.573  1.6e-12 ***
poly(speed, 3)2    23.00      15.21   1.512    0.137
poly(speed, 3)3    13.80      15.21   0.907    0.369
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 15.2 on 46 degrees of freedom
Multiple R-squared:  0.6732,  Adjusted R-squared:  0.6519
F-statistic: 31.58 on 3 and 46 DF,  p-value: 3.074e-11
- B-splines regression (and GAMs)
Splines are also important in regression models, especially when we start talking about Generalized Additive Models. See Perperoglou, Sauerbrei, Abrahamowicz & Schmid (2019) for a review. In the univariate case, I introduce (linear) splines through positive parts, in the sense that y=\beta_0+\beta_1x+\beta_2(x-s_1)_++\cdots+\beta_k(x-s_{k-1})_++\varepsilon where (x-s)_+ equals 0 if x<s and x-s if x>s. Those functions are nice since they are continuous, so the model is continuous (a weighted sum of continuous functions is continuous). And we can go one step further, with y=\beta_0+\beta_1x+\beta_2x^2+\beta_3(x-s_1)^2_++\cdots+\beta_k(x-s_{k-2})^2_++\varepsilon for quadratic splines, or y=\beta_0+\beta_1x+\beta_2x^2+\beta_3x^3+\beta_4(x-s_1)^3_++\cdots+\beta_k(x-s_{k-3})^3_++\varepsilon for cubic splines. Interestingly, quadratic splines are not only continuous, their first derivative is also continuous (and the second one too for cubic splines). So the discontinuity at the knots s_1,s_2,\cdots becomes invisible…
I like those models since they are easy to interpret. For example, the simple model \beta_1 x+\beta_2(x-s)_+ is the following piecewise linear function, continuous, with a break in the slope at knot s.

Observe also the following interpretation: for small values of x, the function is linear with slope \beta_1, while for larger values of x the slope becomes \beta_1+\beta_2. Hence, \beta_2 is interpreted as a change of the slope.
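Just to make that interpretation concrete, a quick sketch on the cars dataset (the knot at 15 is arbitrary, chosen only for illustration):

pos = function(x,s) (x-s)*(x>s)
reg = lm(dist ~ speed + pos(speed, 15), data = cars)
coef(reg)
coef(reg)[2]                 # slope below the knot (beta_1)
coef(reg)[2] + coef(reg)[3]  # slope above the knot (beta_1 + beta_2)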
Unfortunately, this is not what R uses in the bs function, which relies on standard B-splines. Just to visualize (I will skip the maths here), with R, we have
library(splines)
clr6 = c("#1b9e77","#d95f02","#7570b3","#e7298a","#66a61e","#e6ab02")
x = seq(5,25,by=.25)
B = bs(x,knots=c(10,20),Boundary.knots=c(5,55),degree=1)
matplot(x,B,type="l",lty=1,lwd=2,col=clr6)
B = bs(x,knots=c(10,20),Boundary.knots=c(5,55),degree=2)
matplot(x,B,type="l",col=clr6,lty=1,lwd=2)

while the functions I mentioned were (more or less) the following
pos = function(x,s) (x-s)*(x>s)
par(mfrow=c(1,2))
clr6 = c("#1b9e77","#d95f02","#7570b3","#e7298a","#66a61e","#e6ab02")
x = seq(5,25,by=.25)
B = cbind(pos(x,5),pos(x,10),pos(x,20))
matplot(x,B,type="l",lty=1,lwd=2,col=clr6)
pos2 = function(x,s) (x-s)^2*(x>s)
B = cbind(pos(x,5)*20,pos2(x,5),pos2(x,10),pos2(x,20))
matplot(x,B,type="l",col=clr6,lty=1,lwd=2)

And as for the polynomial regression, the two models are equivalent. For example
plot(cars)
reg1 = lm(dist~speed+pos(speed,10)+pos(speed,20),data=cars)
reg2 = lm(dist~bs(speed,degree=1,knots=c(10,20)),data=cars)
v1 = predict(reg1,newdata=data.frame(speed=u))
v2 = predict(reg2,newdata=data.frame(speed=u))
lines(u,v1,col="blue")
lines(u,v2,col="red",lty=2)

or more specifically
v1[u==15]
     121
39.35747
v2[u==15]
     121
39.35747
So one more time, the two models are equivalent, but I still find the approach with the positive part more intuitive and easier to understand, as is the interpretation of the coefficients,
summary(reg1)

Coefficients:
               Estimate Std. Error t value Pr(>|t|)
(Intercept)     -7.6305    16.2941  -0.468   0.6418
speed            3.0630     1.8238   1.679   0.0998 .
pos(speed, 10)   0.2087     2.2453   0.093   0.9263
pos(speed, 20)   4.2812     2.2843   1.874   0.0673 .
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 15 on 46 degrees of freedom
Multiple R-squared:  0.6821,  Adjusted R-squared:  0.6613
F-statistic: 32.89 on 3 and 46 DF,  p-value: 1.643e-11

summary(reg2)

Coefficients:
                                          Estimate Std. Error t value Pr(>|t|)
(Intercept)                                  4.621      9.344   0.495   0.6233
bs(speed, degree = 1, knots = c(10, 20))1   18.378     10.943   1.679   0.0998 .
bs(speed, degree = 1, knots = c(10, 20))2   51.094     10.040   5.089 6.51e-06 ***
bs(speed, degree = 1, knots = c(10, 20))3   88.859     12.047   7.376 2.49e-09 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 15 on 46 degrees of freedom
Multiple R-squared:  0.6821,  Adjusted R-squared:  0.6613
F-statistic: 32.89 on 3 and 46 DF,  p-value: 1.643e-11
Here we can see directly that the first knot was not interesting (the slope did not change significantly) while the second one was…
Testing for a causal effect (with 2 time series)
A few days ago, I came back on a sentence I found (in a French newspaper), where someone was claiming that
“… an old variable explains 85% of the change in a new variable. So we can talk about causality”
and I tried to explain that it was just stupid: if we regress the temperature on day t+1 against the number of cyclists on day t, the R^2 exceeds 80%… but it is hard to claim that the number of cyclists on a given day will actually cause the temperature on the next day…

Nevertheless, that was frustrating, and I was wondering if there was a clever way to test for causality in that case. A popular one is Granger causality (I can mention a paper we published a few years ago where we use such a test, Tents, Tweets, and Events: The Interplay Between Ongoing Protests and Social Media). To explain that test, consider a bivariate time series (just like the one we have here), \boldsymbol{z}_t=(x_t,y_t), and consider some bivariate autoregressive model
{\displaystyle {\begin{bmatrix}x_{t}\\y_{t}\end{bmatrix}}={\begin{bmatrix}c_{1}\\c_{2}\end{bmatrix}}+{\begin{bmatrix}a_{1,1}&\textcolor{red}{a_{1,2}}\\\textcolor{blue}{a_{2,1}}&a_{2,2}\end{bmatrix}}{\begin{bmatrix}x_{t-1}\\y_{t-1}\end{bmatrix}}+{\begin{bmatrix}u_{t}\\v_{t}\end{bmatrix}}} where \boldsymbol{\varepsilon}_t=(u_t,v_t) is some bivariate white noise, in the sense that (i) {\displaystyle \mathbb{E} (\boldsymbol{\varepsilon}_{t})=\boldsymbol{0}} (the noise is centered), (ii) {\displaystyle \mathbb{E} (\boldsymbol{\varepsilon}_{t}\boldsymbol{\varepsilon}_{t}^\top)=\Omega }, so the variance matrix is constant, but possibly non-diagonal, and (iii) {\displaystyle \mathbb{E} (\boldsymbol{\varepsilon}_{t}\boldsymbol{\varepsilon}_{t-h}^\top)=\boldsymbol{0} } for all h\neq 0. Note that we can use the compact expression {\displaystyle {\boldsymbol{z}_t=\boldsymbol{c}+\boldsymbol{A}\boldsymbol{z}_{t-1}+\boldsymbol{\varepsilon}_t}}. Now, the Granger test is based on several quantities. With the off-diagonal terms of matrix \Omega, we have a so-called instantaneous causality, and since \Omega is symmetric, we will write x\leftrightarrow y. With the off-diagonal terms of matrix \boldsymbol{A}, we have a so-called lagged causality, with either \textcolor{blue}{x\rightarrow y} or \textcolor{red}{x\leftarrow y} (and possibly both, if both terms are significant).
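Just to fix ideas, here is a small simulated sketch (with made-up coefficients), where by construction x Granger-causes y, but not the other way around:

library(vars)
set.seed(123)
n = 1000
x = y = rep(0, n)
for(t in 2:n){
  x[t] = .5*x[t-1] + rnorm(1)              # a_{1,2} = 0 : y does not cause x
  y[t] = .4*x[t-1] + .5*y[t-1] + rnorm(1)  # a_{2,1} = .4 : x causes y
}
var1 = VAR(cbind(x, y), p = 1, type = "const")
causality(var1, cause = "x")$Granger       # H0 (x does not cause y) should be rejected
causality(var1, cause = "y")$Granger       # H0 (y does not cause x) should not be rejected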
So I wanted to try it on my two-variable problem.
df = read.csv("cyclistsTempHKI.csv")
dfts = cbind(C=ts(df$cyclists,start = c(2014, 1,2), frequency = 365),
             T=ts(df$meanTemp,start = c(2014, 1,2), frequency = 365))
library(vars)
I now have “time series” objects, and we can fit a VAR model,
var2 = VAR(dfts, p = 1, type = "const")
coefficients(var2)
$C
         Estimate   Std. Error   t value      Pr(>|t|)
C.l1    0.8684009   0.02889424 30.054460 8.080226e-107
T.l1   70.3042012  20.07247411  3.502518  5.102094e-04
const 807.6394001 187.75472482  4.301566  2.110412e-05

$T
           Estimate   Std. Error   t value     Pr(>|t|)
C.l1   0.0003865391 6.257596e-05  6.177118 1.540467e-09
T.l1   0.6611135594 4.347074e-02 15.208241 6.086394e-42
const -1.6413074565 4.066184e-01 -4.036481 6.446018e-05
For instance, we can run a causality test, to see whether the number of cyclists can cause the temperature (on the next day)
causality(var2, cause = "C") $Granger Granger causality H0: C do not Granger-cause T data: VAR object var2 F-Test = 38.157, df1 = 1, df2 = 842, p-value = 1.015e-09 |
Here, we should clearly reject H_0, the hypothesis that there is no causal effect. Which is the way statisticians say that there should be some causal effect between the number of cyclists and the temperature…
So clearly, something is wrong here. Either cyclists have some sort of superpower they are not aware of. Or this test, used for forty years (Clive Granger even got a Nobel Prize for it), is not working. Or we missed something. Actually… I think we missed something here. Possibly because the series are not stationary. We can almost see it with
Phi = matrix(c(coefficients(var2)$C[1:2,1],coefficients(var2)$T[1:2,1]),2,2)
eigen(Phi)
eigen() decomposition
$values
[1] 0.9594810 0.5700335
where the largest eigenvalue is very close to one. But actually, let us look at the series themselves, and at the temperature in particular…
plot(dfts)

so, at least, we should expect some seasonal unit root here. So let us use two techniques. The first one is a classical one-year difference, \Delta_{365}\boldsymbol{z}_t=\boldsymbol{z}_t-\boldsymbol{z}_{t-365}
var2 = VAR(diff(dfts,365), p = 1, type = "const")
coefficients(var2)
$C
          Estimate   Std. Error   t value     Pr(>|t|)
C.l1     0.8376424   0.07259969 11.537823 1.993355e-16
T.l1    42.2638410  28.58783276  1.478386 1.449076e-01
const -507.5514795 219.40240747 -2.313336 2.440042e-02

$T
         Estimate   Std. Error   t value     Pr(>|t|)
C.l1  0.000518209 0.0003277295 1.5812096 1.194623e-01
T.l1  0.598425288 0.1290511945 4.6371154 2.162476e-05
const 0.547828079 0.9904263469 0.5531235 5.823804e-01
The test on the fitted VAR model yields
causality(var2, cause = "C") $Granger Granger causality H0: C do not Granger-cause T data: VAR object var2 F-Test = 2.5002, df1 = 1, df2 = 112, p-value = 0.1167 |
i.e., with an 11% p-value, we cannot reject the hypothesis of no Granger causality: there is no significant evidence that the number of cyclists causes the temperature (on the next day). And actually, we also fail to find a causal effect the other way around
causality(var2, cause = "T") $Granger Granger causality H0: T do not Granger-cause C data: VAR object var2 F-Test = 2.1856, df1 = 1, df2 = 112, p-value = 0.1421 |
Nevertheless, if we look at the instantaneous causality, this one makes more sense
$Instant

    H0: No instantaneous causality between: T and C

data:  VAR object var2
Chi-squared = 13.081, df = 1, p-value = 0.0002982
The second idea would be to use a one day difference, \Delta_{1}\boldsymbol{z}_t=\boldsymbol{z}_t-\boldsymbol{z}_{t-1} and to fit a VAR model on that one
VARselect(diff(dfts,1), lag.max = 4, type="const")
$selection
AIC(n)  HQ(n)  SC(n) FPE(n)
     3      3      2      3
but on that one, a VAR(1) model – with only one lag – might not be sufficient. It might be better to consider a VAR(3)
var2 = VAR(diff(dfts,1), p = 3, type = "const")
and on that one, one more time, we should reject the causal effect of the number of cyclists on the temperature (on the next day)
causality(var2, cause = "C") $Granger Granger causality H0: C do not Granger-cause T data: VAR object var2 F-Test = 0.67644, df1 = 3, df2 = 828, p-value = 0.5666 |
and this time, there could be a (lagged) causal effect of the temperature on the number of cyclists
causality(var2, cause = "T") $Granger Granger causality H0: T do not Granger-cause C data: VAR object var2 F-Test = 7.7981, df1 = 3, df2 = 828, p-value = 3.879e-05 $Instant H0: No instantaneous causality between: T and C data: VAR object var2 Chi-squared = 55.83, df = 1, p-value = 7.905e-14 |
together with a significant instantaneous relationship. So it looks like Granger causality performs well on that one!
Combining the levels of a factor variable
A quick post to come back to a point we saw this morning in the STT5100 course, to illustrate Fisher’s test. We will use the apartment price data from Poland (data used quite a lot in my draft lecture notes)
library(DALEX)
data(apartments)
with(data = apartments, boxplot(m2.price ~ district))

We would like to merge some of the levels here (this is actually suggested by the simple regression, since 5 explanatory variables are not significant). To see things more clearly, we can reorder the levels
A = with(data = apartments, aggregate(m2.price,by=list(district),FUN=mean))
A = A[order(A$x),]
L = as.character(A$Group.1)
apartments$district = factor(apartments$district, levels=L)
with(data = apartments, boxplot(m2.price ~ district))

Here we will take the cheapest district as the reference,
reg = lm(m2.price ~ district, data=apartments)
summary(reg)

Coefficients:
                    Estimate Std. Error t value Pr(>|t|)
(Intercept)          2968.36      58.02  51.160   <2e-16 ***
districtBielany        17.38      84.16   0.207    0.836
districtPraga          26.45      85.12   0.311    0.756
districtUrsynow        42.01      82.65   0.508    0.611
districtBemowo         80.10      83.71   0.957    0.339
districtUrsus         102.01      82.25   1.240    0.215
districtZoliborz      829.59      83.94   9.884   <2e-16 ***
districtMokotow       887.10      81.86  10.837   <2e-16 ***
districtOchota        987.93      84.16  11.738   <2e-16 ***
districtSrodmiescie  2214.39      83.28  26.591   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 597.4 on 990 degrees of freedom
Multiple R-squared:  0.5698,  Adjusted R-squared:  0.5659
F-statistic: 145.7 on 9 and 990 DF,  p-value: < 2.2e-16
We can test whether the first 5 levels have zero coefficients, which is a multiple test, and we will use Fisher’s test:
library(car)
linearHypothesis(reg, c("districtBielany = 0", "districtPraga = 0",
  "districtUrsynow = 0", "districtBemowo = 0", "districtUrsus = 0"))
Linear hypothesis test

Model 1: restricted model
Model 2: m2.price ~ district

  Res.Df       RSS Df Sum of Sq      F Pr(>F)
1    995 354051715
2    990 353269202  5    782513 0.4386 0.8217
The Fisher statistic is small, with a p-value of 82%. Let us push our luck and add one more level
library(car)
linearHypothesis(reg, c("districtBielany = 0", "districtPraga = 0",
  "districtUrsynow = 0", "districtBemowo = 0", "districtUrsus = 0",
  "districtZoliborz = 0"))
Linear hypothesis test

Model 1: restricted model
Model 2: m2.price ~ district

  Res.Df       RSS Df Sum of Sq      F    Pr(>F)
1    996 405455409
2    990 353269202  6  52186207 24.374 < 2.2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
But this time we may have been too greedy. So we merge the first 6 levels (and call the resulting group of districts A). If we look at the average prices per district, we get
levels(apartments$district) = c(rep("A",6),levels(apartments$district)[7:11])
with(data = apartments, boxplot(m2.price ~ district))

apartments$district = relevel(apartments$district,"Zoliborz")
We start again, taking the cheapest of the remaining districts as the reference, and we want to test whether the next two have zero coefficients in the linear regression.
reg = lm(m2.price ~ district, data=apartments)
linearHypothesis(reg, c("districtMokotow = 0", "districtOchota = 0"))
Linear hypothesis test

Model 1: restricted model
Model 2: m2.price ~ district

  Res.Df       RSS Df Sum of Sq      F Pr(>F)
1    997 355292524
2    995 354051715  2   1240809 1.7435 0.1754
With a p-value of 17%, we can accept merging these three levels together. We then have three groups of districts, named A, B and C. We obtain the following boxplots
levels(apartments$district) = c("B","A",rep("B",2),"C") apartments$district = relevel(apartments$district,"A") with(data = apartments, boxplot(m2.price ~ district)) |

I leave it to the more courageous readers to check, but we do end up with three genuinely different groups of districts, and if the goal is to predict housing prices, there is no need to use a partition with 10 levels: a partition with 3 is enough!
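As a cross-check (a sketch), we can rebuild the three groups directly from the original data, ranking the districts by average price (the 6 cheapest, the next 3, and Srodmiescie alone, mirroring the merges above), and compare the 10-level and 3-level models with a single Fisher test:

data(apartments)                                    # reload the original 10 districts
A = aggregate(m2.price ~ district, data = apartments, FUN = mean)
rk = rank(A$m2.price)[match(apartments$district, A$district)]
grp = cut(rk, breaks = c(0, 6, 9, 10), labels = c("A", "B", "C"))
reg10 = lm(m2.price ~ district, data = apartments)
reg3  = lm(m2.price ~ grp, data = apartments)
anova(reg3, reg10)                                  # F test of the 7 merging constraints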
On the conjugate function
In the MAT7381 course (graduate course on regression models), we will talk about optimization, and a classical tool is the so-called conjugate. Given a function f:\mathbb{R}^p\to\mathbb{R}, its conjugate is the function f^{\star}:\mathbb{R}^p\to\mathbb{R} such that f^{\star}(\boldsymbol{y})=\max_{\boldsymbol{x}}\lbrace\boldsymbol{x}^\top\boldsymbol{y}-f(\boldsymbol{x})\rbrace. So, long story short, f^{\star}(\boldsymbol{y}) is the maximum gap between the linear function \boldsymbol{x}^\top\boldsymbol{y} and f(\boldsymbol{x}).
Just to visualize, consider a simple parabolic function (in dimension 1) f(x)=x^2/2, then f^{\star}(\color{blue}{2}) is the maximum gap between the line x\mapsto\color{blue}{2}x and function f(x).
x = seq(-100,100,length=6001)
f = function(x) x^2/2
vf = Vectorize(f)(x)
fstar = function(y) max(y*x-vf)
vfstar = Vectorize(fstar)(x)
We can see it on the figure below.
viz = function(x0=1,YL=NA){
idx = which(abs(x)<=3)
par(mfrow=c(1,2))
plot(x[idx],vf[idx],type="l",xlab="",ylab="",col="blue",lwd=2)
abline(h=0,col="grey")
abline(v=0,col="grey")
idx2 = which(x0*x>=vf)
polygon(c(x[idx2],rev(x[idx2])),c(vf[idx2],rev(x0*x[idx2])),col=rgb(0,1,0,.3),border=NA)
abline(a=0,b=x0,col="red")
i=which.max(x0*x-vf)
segments(x[i],x0*x[i],x[i],f(x[i]),lwd=3,col="red")
if(is.na(YL)) YL=range(vfstar[idx])
plot(x[idx],vfstar[idx],type="l",xlab="",ylab="",col="red",lwd=1,ylim=YL)
abline(h=0,col="grey")
abline(v=0,col="grey")
segments(x0,0,x0,fstar(x0),lwd=3,col="red")
points(x0,fstar(x0),pch=19,col="red")
}
viz(1)

or
viz(1.5)

In that case, we can actually compute f^{\star}, since f^{\star}(y)=\max_{x}\lbrace xy-f(x)\rbrace=\max_{x}\lbrace xy-x^2/2\rbrace. The first-order condition is here x^{\star}=y, and thus f^{\star}(y)=x^{\star}y-(x^{\star})^2/2=y^2-y^2/2=y^2/2. And actually, that can be related to two results. The first one is to observe that f(\boldsymbol{x})=\|\boldsymbol{x}\|_2^2/2 and in that case f^{\star}(\boldsymbol{y})=\|\boldsymbol{y}\|_2^2/2, from the following general result: if f(\boldsymbol{x})=\|\boldsymbol{x}\|_p^p/p with p>1, where \|\cdot\|_p denotes the standard \ell_p norm, then f^{\star}(\boldsymbol{y})=\|\boldsymbol{y}\|_q^q/q where \frac{1}{p}+\frac{1}{q}=1. The second one is the conjugate of a quadratic function. More specifically, if f(\boldsymbol{x})=\boldsymbol{x}^{\top}\boldsymbol{Q}\boldsymbol{x}/2 for some positive definite matrix \boldsymbol{Q}, then f^{\star}(\boldsymbol{y})=\boldsymbol{y}^{\top}\boldsymbol{Q}^{-1}\boldsymbol{y}/2. In our case, it was a univariate problem with \boldsymbol{Q}=1.
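A quick numerical check of that last result (a sketch, with an arbitrary positive definite matrix \boldsymbol{Q}, chosen only for illustration):

Q = matrix(c(2, .5, .5, 1), 2, 2)                      # positive definite
f = function(x) sum(x * (Q %*% x))/2                   # f(x) = x'Qx/2
y = c(1, -2)
opt = optim(c(0, 0), function(x) -(sum(x*y) - f(x)))   # maximize x'y - f(x)
-opt$value                                             # numerical value of f*(y)
sum(y * solve(Q, y))/2                                 # closed form y'Q^{-1}y/2, should (approximately) match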
For the conjugate of the \ell_p norm, we can use the following code to visualize it
p = 3
f = function(x) abs(x)^p/p
vf = Vectorize(f)(x)
fstar = function(y) max(y*x-vf)
vfstar = Vectorize(fstar)(x)
viz(1.5)

or
p = 1.1
f = function(x) abs(x)^p/p
vf = Vectorize(f)(x)
fstar = function(y) max(y*x-vf)
vfstar = Vectorize(fstar)(x)
viz(1, YL=c(0,10))

Actually, in that case, we can almost see that if f(x)=|x|, then \displaystyle{f^{\star}\left(y\right)={\begin{cases}0,&\left|y\right|\leq 1\\\infty ,&\left|y\right|>1.\end{cases}}}
To conclude, another popular case: if f(x)=\exp(x), then {\displaystyle f^{\star}\left(y\right)={\begin{cases}y\log(y)-y,&y>0\\0,&y=0\\\infty ,&y<0.\end{cases}}} We can visualize that case below
f = function(x) exp(x)
vf = Vectorize(f)(x)
fstar = function(y) max(y*x-vf)
vfstar = Vectorize(fstar)(x)
viz(1,YL=c(-3,3))

Combining automatically factor levels with trees
Last year, in a post, I discussed how to merge levels of factor variables, using combinatorial techniques (it was for my STT5100 course, and trees are not in the syllabus), with an extension on trees at the end of the post.
Consider the following (simulated) dataset
n = 200
set.seed(1)
x1 = runif(n)
x2 = runif(n)
y = 1+2*x1-x2+rnorm(n,0,.2)
LB = sample(LETTERS[1:10])
b = data.frame(y=y, x1=x1,
  x2=cut(x2, breaks=c(-1,.05,.1,.2,.35,.4,.55,.65,.8,.9,2), labels=LB))
str(b)
'data.frame':   200 obs. of  3 variables:
 $ y : num  1.345 1.863 1.946 2.481 0.765 ...
 $ x1: num  0.266 0.372 0.573 0.908 0.202 ...
 $ x2: Factor w/ 10 levels "I","A","H","F",..: 4 4 6 4 3 6 7 3 4 8 ...
table(b$x2)[LETTERS[1:10]]
 A  B  C  D  E  F  G  H  I  J
11 12 23 34 23 36 12 32  3 14
Just by looking at the data (see the previous post), we could easily get the feeling that 10 levels was too much.
Following my post, Przemyslaw sent a comment suggesting to use
library(factorMerger)
It is indeed a nice package (unless you have really really big datasets with a lot of categories in your factor variables – as I experienced recently), and you can get great graphs
MF = mergeFactors(response = b$y,
factor = b$x2,
family = "gaussian")
plot(MF)

Here it suggests creating three categories. Recall that with Student t-tests (changing the reference), we got

Another interesting package, by Piro Polo, is
library(tree.bins)
To use it, we simply call the following function, and our dataset is transformed automatically: the continuous variables remain unchanged, and (possibly) categories of categorical variables are merged
b.bins = tree.bins(data=b, y=y)
str(b.bins)
Classes ‘data.table’ and 'data.frame': 200 obs. of 3 variables:
$ y : num 1.345 1.863 1.946 2.481 0.765 ...
$ x1: num 0.266 0.372 0.573 0.908 0.202 ...
$ x2: chr "Group.4" "Group.4" "Group.4" "Group.4" ...
- attr(*, ".internal.selfref")=
table(b.bins$x2)
Group.1 Group.2 Group.3 Group.4
23 35 26 116
here in four groups. To get the correspondence, use
tree.bins(data=b, y=y, return = "lkup.list")
[[1]]
   x2 Categories
1   E    Group.1
2   G    Group.2
3   C    Group.2
4   B    Group.3
5   J    Group.3
6   I    Group.4
7   A    Group.4
8   H    Group.4
9   F    Group.4
10  D    Group.4
(we have a list with one element, one dataframe, since there is only one factor variable). Cool, isn’t it? I miss Przemyslaw’s plot, but this is rather quick, and efficient…
On leverage
Last week, in our STT5100 (applied linear models) class, I introduced the hat matrix, and the notion of leverage. In a classical regression model, \boldsymbol{y}=\boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{\varepsilon} (in a matrix form), the ordinary least squares estimator of parameter \boldsymbol{\beta} is \widehat{\boldsymbol{\beta}}=(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top\boldsymbol{y}. The prediction can then be written \widehat{\boldsymbol{y}}=\boldsymbol{X}\widehat{\boldsymbol{\beta}}=\underbrace{\color{blue}{\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top}}_{\color{blue}{\boldsymbol{H}}}\boldsymbol{y}, where \color{blue}{\boldsymbol{H}} is called the hat matrix.
The matrix is idempotent, i.e. \boldsymbol{H}^2={\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\textcolor{grey}{\boldsymbol{X}^\top{\boldsymbol{X}}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}}\boldsymbol{X}^\top}={\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top}=\boldsymbol{H}so it can be interpreted as a projection matrix. Furthermore, since\boldsymbol{H}\boldsymbol{X}=\boldsymbol{X} (just do the maths), the projection is on a subspace that contains all the linear combinations of columns of \boldsymbol{X}. One can also observe that \mathbb{I}-\boldsymbol{H} is also a projection matrix. And we can write\boldsymbol{y}=\underbrace{\boldsymbol{H}\boldsymbol{y}}_{\widehat{\boldsymbol{y}}}+\underbrace{(\mathbb{I}-\boldsymbol{H})\boldsymbol{y}}_{\widehat{\boldsymbol{\varepsilon}}}where \widehat{\boldsymbol{y}} is the orthogonal projection of \boldsymbol{y} on the (linear) space of linear combinations of columns of \boldsymbol{X}, and \widehat{\boldsymbol{y}}\perp\widehat{\boldsymbol{\varepsilon}}, which gives the classical interpretation of residuals, being unpredictible (at least with a linear model using variables \boldsymbol{X}).
Let’s move a bit faster now (we’ve seen many other properties last week), and consider elements on the diagonal of matrix \boldsymbol{H}. Recall that we have

so entry \boldsymbol{H}_{i,i} is a measure of the influence of entry \boldsymbol{y}_i on its own prediction \widehat{\boldsymbol{y}}_i.
We have seen that \sum_{i=1}^n\boldsymbol{H}_{i,i}=\text{trace}(\boldsymbol{H})=\text{trace}(\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top), which can be written, using the cyclic property of the trace, \sum_{i=1}^n\boldsymbol{H}_{i,i}=\text{trace}\big((\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top\boldsymbol{X}\big)=\text{trace}(\mathbb{I}_p)=p, where classically p=k+1, k being the number of explanatory variables. Further, since \boldsymbol{H} is idempotent, we can write (from \boldsymbol{H}=\boldsymbol{H}^2) that \boldsymbol{H}_{i,i}=\boldsymbol{H}_{i,i}^2 + \sum_{j\neq i}\boldsymbol{H}_{i,j}\boldsymbol{H}_{j,i}=\boldsymbol{H}_{i,i}^2 + \sum_{j\neq i}\boldsymbol{H}_{i,j}^2. On the one hand, since the second term is positive, \boldsymbol{H}_{i,i}\geq\boldsymbol{H}_{i,i}^2, i.e. 1\geq\boldsymbol{H}_{i,i}; and since both terms are positive, \boldsymbol{H}_{i,i}\in[0,1]. And there was a question in the course on the sharpness of those bounds.
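A quick numerical check of those properties (a sketch, on the cars dataset, just for illustration):

X = model.matrix(dist ~ speed, data = cars)
H = X %*% solve(t(X) %*% X) %*% t(X)
max(abs(H %*% H - H))                                         # idempotent, up to rounding errors
sum(diag(H))                                                  # trace = p = 2
range(diag(H))                                                # each H_ii lies in [0,1]
max(abs(diag(H) - hatvalues(lm(dist ~ speed, data = cars))))  # matches R's hatvalues()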
Using Anscombe’s dataset, we’ve seen that it was possible to get a leverage of 1. Using something rather similar
df = data.frame(x = c(rep(1,10),6), y = c(1:10,8))
plot(df)

we obtain
model = lm(y~x,data=df)
abline(model,col="red",lwd=2)
H = lm.influence(model)$hat
plot(1:11,H,type="h")

The very last observation, the one on the right, is here extremely influential: if we remove it, the model is completely different! And here, we reach the upper bound, \boldsymbol{H}_{11,11}=1. Observe that all the other points are equally influential, and because of the constraint on the trace of the matrix, \boldsymbol{H}_{i,i}=1/10 when i\in\{1,2,\cdots,10\}.
Now, what about the lower bound? In order to have some sort of “non-influential” observations, consider the following two cases.
- the case where one observation (the first one, in the example below) is such that \widehat{\boldsymbol{y}}_{i}=\boldsymbol{y}_{i} (perfect prediction)
- the case where one observation (the tenth one, in the example below) is such that \boldsymbol{x}_{i}=\overline{\boldsymbol{x}} and \boldsymbol{y}_{i}=\overline{\boldsymbol{y}}; from the first-order condition (or normal equation), the fitted regression line always goes through the point (\overline{\boldsymbol{x}},\overline{\boldsymbol{y}})
Let us move two observations from our dataset,
mean(c(4,rep(1,8),6))
[1] 1.8
df = data.frame(x = c(4,rep(1,8),6,1.8),
  y = c(predict(model,newdata=data.frame(x=4)), 2:9, 8,
        predict(model,newdata=data.frame(x=1.8))))
We now have

If we compute the leverages, we obtain
model = lm(y~x,data=df)
H = lm.influence(model)$hat
plot(1:11,H,type="h")

so, for the first observation, its leverage actually increased (the blue part), and for the tenth one, we have the lowest influence, but it is not zero. Is it possible to reach zero?
Here, observe that for the tenth observation, \boldsymbol{H}_{i,i}=1/n. And actually, that’s the best we can do… We can prove that, in the case of a simple regression (as above), \boldsymbol{H}_{i,i}=\frac{1}{n}+\frac{(x_i-\overline{x})^2}{n\text{Var}(x)}, which is minimal when x_i=\overline{x}, and then \boldsymbol{H}_{i,i}=1/n, otherwise \boldsymbol{H}_{i,i}>1/n. And this property also holds in a multiple regression (as soon as an intercept is included in the regression, which should always be the case). To prove that result, let \tilde{\boldsymbol{X}} denote the matrix of centered variables \boldsymbol{X}; then we can prove that \boldsymbol{H}_{i,i}=\frac{1}{n}+\big[\tilde{\boldsymbol{X}}(\tilde{\boldsymbol{X}}^\top\tilde{\boldsymbol{X}})^{-1}\tilde{\boldsymbol{X}}^\top\big]_{i,i} (which is basically a matrix version of the previous equation).
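That closed-form expression can also be checked numerically (a sketch, again on the cars dataset):

x = cars$speed
n = length(x)
h = 1/n + (x - mean(x))^2 / sum((x - mean(x))^2)        # sum((x-mean(x))^2) = n Var(x), with the 1/n variance
max(abs(h - hatvalues(lm(dist ~ speed, data = cars))))  # ~ 0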
I can maybe add another comment on Anscombe’s data. We’ve seen, on the right, that we did reach 1. But I did not prove it. One way to prove it is actually to focus on the remaining n-1 points, on the left, which all have the same x value. We can prove that if \boldsymbol{X}_{i_1}=\boldsymbol{X}_{i_2}, then \boldsymbol{H}_{i_1,i_2}=\boldsymbol{X}_{i_1}^\top(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}_{i_2}=\boldsymbol{H}_{i_1,i_1}, hence, using the relationship obtained since the hat matrix is idempotent, \boldsymbol{H}_{i_1,i_1}=2\boldsymbol{H}_{i_1,i_1}^2+\sum_{j\notin\{i_1,i_2\}}\boldsymbol{H}_{i_1,j}^2, and thus \boldsymbol{H}_{i_1,i_1}\big(1-2\boldsymbol{H}_{i_1,i_1}\big)\geq 0, i.e. \boldsymbol{H}_{i_1,i_1}\in[0,1/2]; with d “duplicates”, the upper bound becomes 1/d. So for the n-1 \boldsymbol{H}_{i,i}‘s on the left, we have values below 1/(n-1), the last one should be below 1, and the sum has to be p=2: all those bounds must therefore be attained, which gives the value of the n \boldsymbol{H}_{i,i}‘s.
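The same argument can be checked numerically on the toy dataset used above (a quick sketch): the ten duplicated points each get a leverage of 1/(n-1)=1/10, and the isolated point reaches the upper bound 1.

df = data.frame(x = c(rep(1, 10), 6), y = c(1:10, 8))
H = lm.influence(lm(y ~ x, data = df))$hat
H          # ten values equal to 1/10, the last one equal to 1
sum(H)     # trace = p = 2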
Insurance data science : Networks

At the Summer School of the Swiss Association of Actuaries, in Lausanne, I will start talking about networks and insurance this Friday. Slides are available online
Insurance data science : Text

At the Summer School of the Swiss Association of Actuaries, in Lausanne, I will start talking about text based data and NLP this Thursday. Slides are available online
Ewen Gallic (AMSE) will present a tutorial on tweets. I can upload a few additional slides on LSTM (recurrent neural nets)

Insurance data science : use and value of unusual data #1
Next week, I will be at the Summer School of the Swiss Association of Actuaries, in Lausanne, with Jean-Philippe Boucher (UQAM) and Ewen Gallic (AMSE).
There will be some hands-on applications, in R. I will share some code in the slides.
Optimal transport on large networks
This article presents a set of tools for modeling a spatial allocation problem in a large geographic market, and gives examples of applications. In our setting, the market is described by a network that maps the cost of travel between each pair of adjacent locations. Two types of agents are located at the nodes of this network. The buyers choose the most competitive sellers depending on their prices and the cost to reach them; their utility is assumed additive in both these quantities. Each seller, taking other sellers’ prices as given, sets her own price to have a demand equal to the one observed. We give a linear programming formulation for the equilibrium conditions. After formally introducing our model, we apply it to two examples: prices offered by petrol stations, and quality of services provided by maternity wards (only the latter is described here, for privacy reasons). These examples illustrate the applicability of our model to aggregate demand, rank prices and estimate cost structures over the network. We insist on the possibility of applications to large-scale datasets, using modern linear programming solvers such as Gurobi.
Demand for gas at gas stations in Brittany, and demand for maternity wards in France (with border correction)
In addition to this paper, we released an R toolbox to implement our results, and an online tutorial, optimalnetwork.github.io.
Pareto Models for Top Incomes
With Emmanuel Flachaire, we uploaded on HAL a paper on Pareto Models for Top Incomes,
Top incomes are often related to the Pareto distribution. To date, economists have mostly used the Pareto Type I distribution to model the upper tail of income and wealth distributions. It is a parametric distribution with an attractive property that can easily be linked to economic theory. In this paper, we first show that modelling top incomes with the Pareto Type I distribution can lead to severe over-estimation of inequality, even with millions of observations. Then, we show that the Generalized Pareto distribution and, even more, the Extended Pareto distribution, are much less sensitive to the choice of the threshold; thus, they provide more reliable results. We discuss different types of bias that could be encountered in empirical studies, and provide some guidance for practice. To illustrate, two applications are investigated, on the distribution of income in South Africa in 2012, and on the distribution of wealth in the United States in 2013.
This paper was presented at UCSB and in several workshops this spring, and this summer, Emmanuel will present it at ECINEQ.
Note that an R package is also available on GitHub, TopIncomes.
Estimates on training vs. validation samples
Before moving to cross-validation, it was natural to say “I will burn 50% (say) of my data to train a model, and then use the remaining half to fit the model”. For instance, we can use the training data for variable selection (e.g. using some stepwise procedure in a logistic regression), and then, once the variables have been selected, fit the model on the remaining set of observations. A natural question is then “does it really matter?”.
In order to visualize this problem, consider my (simple) dataset
MYOCARDE = read.table("http://freakonometrics.free.fr/saporta.csv",
  head=TRUE, sep=";")
Let us generate 100 training samples (where we keep about 50% of the observations). On each of them, we use a stepwise procedure, and we keep the estimates of the remaining variables (and their standard errors, actually)
n = nrow(MYOCARDE)
M = matrix(NA,100,ncol(MYOCARDE))
colnames(M) = c("(Intercept)",names(MYOCARDE)[1:7])
S1 = S2 = M1 = M2 = M
for(i in 1:100){
  idx = which(sample(0:1,size=n, replace=TRUE)==1)
  reg = step(glm(PRONO=="DECES"~.,data=MYOCARDE[idx,]))
  nm = names(reg$coefficients)
  M1[i,nm] = reg$coefficients
  S1[i,nm] = summary(reg)$coefficients[,2]
  f = paste("PRONO=='DECES'~",paste(nm[-1],collapse="+"),sep="")
  reg = glm(f,data=MYOCARDE[-idx,])
  M2[i,nm] = reg$coefficients
  S2[i,nm] = summary(reg)$coefficients[,2]
}
Then, for the 7 covariates (and the constant), we can look at the value of the coefficient in the model fitted on the training sample, and the value in the model fitted on the validation sample (of course, only when the variable was kept)
for(j in 1:8){
  idx = which(!is.na(M1[,j]))
  plot(M1[idx,j],M2[idx,j])
  abline(a=0,b=1,lty=2,col="gray")
  segments(M1[idx,j]-2*S1[idx,j],M2[idx,j],M1[idx,j]+2*S1[idx,j],M2[idx,j])
  segments(M1[idx,j],M2[idx,j]-2*S2[idx,j],M1[idx,j],M2[idx,j]+2*S2[idx,j])
}
For instance, with the intercept, we have the following

where horizontal segments are confidence intervals of the parameter on the model fitted on the training sample, the vertical on the validation sample. The green part means some sort of consistency, while the red one means that actually, the coefficient was negative with one model, positive with the other one. Which is odd (but in that case, observe that coefficients are rarely significant).
We can also visualize the joint distribution of the two estimators,
library(ks)
for(j in 1:8){
  idx = which(!is.na(M1[,j]))
  Z = cbind(M1[idx,j],M2[idx,j])
  H = Hpi(x=Z)
  fhat = kde(x=Z, H=H)
  image(fhat$eval.points[[1]], fhat$eval.points[[2]], fhat$estimate)
  abline(a=0,b=1,lty=2,col="gray")
  abline(v=0,lty=2)
  abline(h=0,lty=2)
}
which are here, almost on the diagonal,

meaning that the intercept on the two samples is (more or less) the same. We can then look at other parameters (which is actually more interesting).


On that variable, it seems that it is significant on the training dataset (somehow, this is consistent with the fact that it remains in the model after the stepwise procedure) but not on the validation sample (or hardly significant).
Others are much more consistent (with some possible outliers)






On the next one, we have again significance on the training sample, but not on the validation sample,




and probably more interesting


where the two are very consistent.
