We will now talk about count models, going back to the Poisson model. The slides (slides 7) are online, and so is the video.
Slides 5 – logistic regression on continuous covariate(s)
Function basis and regression
In the first part of the course on linear models, we have seen how to construct a linear model when the vector of covariates \boldsymbol{x} is given, so that \mathbb{E}(Y|\boldsymbol{X}=\boldsymbol{x}) is either simply \boldsymbol{x}^\top\boldsymbol{\beta} (for standard linear models) or a functional of \boldsymbol{x}^\top\boldsymbol{\beta} (in GLMs). But more generally, we can consider transformations of the covariates, so that a linear model can still be used. In a very general setting, consider \sum_{j=1}^m\beta_j h_j(\boldsymbol{x}) with h_j:\mathbb{R}^p\rightarrow\mathbb{R}. The standard linear model is obtained when m=p and h_j(\boldsymbol{x})=x_j, but of course, much more general models can be obtained, for instance with h_k(\boldsymbol{x})=x_j^2 or h_k(\boldsymbol{x})=x_{j}x_{j'}, which can be used to build higher-order Taylor expansions. In that case, we obtain the polynomial regression, which we will discuss first. We might also think of piecewise constant functions, h_k(\boldsymbol{x})=\boldsymbol{1}(x_j\in [a,b]), which are related to regression trees (but that is not in the scope of the STT5100 course). And if we go one step further, we might think of piecewise linear or piecewise polynomial functions, possibly with additional continuity constraints, which will lead us to spline bases.
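To make this more concrete, here is a minimal sketch on simulated data (the names x1, x2, reg_poly and reg_step below are hypothetical), showing that such transformations h_j simply become extra columns in the design matrix used by lm: quadratic and interaction terms on one hand, piecewise constant functions (via cut) on the other.

set.seed(1)
n  = 200
x1 = runif(n)
x2 = runif(n)
y  = 1 + 2*x1 - x2^2 + x1*x2 + rnorm(n, sd = .1)
# h(x) = x1, x2, x2^2 and x1*x2 : quadratic and interaction terms
reg_poly = lm(y ~ x1 + x2 + I(x2^2) + I(x1*x2))
# h(x) = 1(x1 in (a,b]) : piecewise constant functions, via cut()
reg_step = lm(y ~ cut(x1, breaks = c(0, .25, .5, .75, 1)))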
- Polynomial regression
For pedagogical purposes, when I talk about polynomial regression, I always have in mind (in the univariate case) y=\beta_0+\beta_1x+\beta_2x^2+\cdots+\beta_kx^k+\varepsilon but if we use
lm(y~poly(x,k))
in R, the output is not the \beta_j's.
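If the \beta_j's are really what we want, we can ask for the raw (non-orthogonalized) basis, with the raw = TRUE option of poly. A quick sketch on the cars dataset (also used below; reg_raw and reg_I are hypothetical names) shows that the estimates then match the "manual" specification with I(),

reg_raw = lm(dist ~ poly(speed, 3, raw = TRUE), data = cars)
reg_I   = lm(dist ~ speed + I(speed^2) + I(speed^3), data = cars)
cbind(coef(reg_raw), coef(reg_I))   # the two columns should be identical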
As discussed in Kennedy & Gentle (1980) Statistical Computing,
Recall that orthogonal polynomials are defined with respect to the classical inner product (on the finite interval (a,b)) {\displaystyle \langle f,g\rangle =\int _{a}^{b}f(x)g(x)~\mathrm {d} x}. A sequence of orthogonal polynomials is a sequence (P_n), where P_n is a polynomial of degree n, for all n, such that P_m\perp P_n for all m\neq n. Note that those polynomials are orthogonal with respect to the inner product defined above, i.e. given some finite interval (a,b). But if (a,b) changes, the polynomials will be different.
A popular family of orthogonal polynomials, on the finite interval (-1,+1), is the family of Legendre polynomials, satisfying {\displaystyle \int _{-1}^{1}P_{m}(x)P_{n}(x)~\mathrm {d} x=0} as soon as m\neq n. Those polynomials satisfy Bonnet's recursion formula {\displaystyle (n+1)P_{n+1}(x)=(2n+1)xP_{n}(x)-nP_{n-1}(x)} or Rodrigues' formula {\displaystyle P_{n}(x)={\frac {1}{2^{n}n!}}{\frac {d^{n}}{dx^{n}}}(x^{2}-1)^{n}}. The first ones are {\displaystyle P_{0}(x)=1}, {\displaystyle P_{1}(x)=x}, {\displaystyle P_{2}(x)={\frac {3x^{2}-1}{2}}}, {\displaystyle P_{3}(x)={\frac {5x^{3}-3x}{2}}} and {\displaystyle P_{4}(x)={\frac {35x^{4}-30x^{2}+3}{8}}}.
Interestingly, we can get those polynomial functions using
library(orthopolynom)
(leg4coef = legendre.polynomials(n=4))
[[1]]
1

[[2]]
x

[[3]]
-0.5 + 1.5*x^2

[[4]]
-1.5*x + 2.5*x^3

[[5]]
0.375 - 3.75*x^2 + 4.375*x^4
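As a quick sanity check, we can verify numerically that two of those polynomials, say P_2 and P_3 (hard-coded here from the expressions above), are indeed orthogonal on (-1,+1),

P2 = function(x) (3*x^2 - 1)/2
P3 = function(x) (5*x^3 - 3*x)/2
integrate(function(x) P2(x)*P3(x), lower = -1, upper = 1)
# the value should be (numerically) zero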
Of course, there are many other families of orthogonal polynomials (Jacobi polynomials, Laguerre polynomials, Hermite polynomials, etc.). Now, in R, there is the standard poly function, which is the one used in polynomial regression.
x = seq(-1,1,length=101)
y = poly(x,4)
y
                  1           2             3           4
  [1,] -1.706475e-01 0.215984813 -2.480753e-01 0.270362873
  [2,] -1.672345e-01 0.203025724 -2.183063e-01 0.216290298
...
[100,]  1.672345e-01 0.203025724  2.183063e-01 0.216290298
[101,]  1.706475e-01 0.215984813  2.480753e-01 0.270362873
attr(,"coefs")
attr(,"coefs")$alpha
[1] 3.157229e-17 2.655145e-16 9.799244e-17 5.368224e-16
attr(,"coefs")$norm2
[1]   1.0000000 101.0000000  34.3400000   9.3377328   2.4472330   0.6330176
attr(,"degree")
[1] 1 2 3 4
attr(,"class")
[1] "poly"   "matrix"
But these are not Legendre polynomials… As explained in 李哲源‘s post on stackoverflow, the idea is to start with P_{-1}(x)=0, P_{0}(x)=1 and P_{1}(x)=x, and then to define \ell_n=\langle P_n,P_n\rangle, as well as \alpha_n=\langle xP_n,P_n\rangle/\ell_n=\langle P_n^2,P_1\rangle/\ell_n and \beta_n=\ell_n/\ell_{n-1}. Finally, define recursively {\displaystyle P_{n}(x)=(x-\alpha_{n-1})P_{n-1}(x)-\beta_{n-1}P_{n-2}(x)} and its normalized version, \tilde{P}_{n}=P_n/\sqrt{\ell_n}. That is what poly computes.
So, for pedagogical purposes, I said that I like to use y=\boldsymbol{x}^\top\boldsymbol{\beta}+\varepsilon where \boldsymbol{x}=(1,x,x^2,\cdots,x^{k-1},x^k). And actually, when using poly, we use the QR decomposition of that matrix. As discussed in 李哲源‘s post, we can almost reproduce the poly function using
my_poly = function(x, degree = 1) {
  xbar = mean(x)
  x = x - xbar
  QR = qr(outer(x, 0:degree, "^"))
  X = qr.qy(QR, diag(diag(QR$qr), length(x), degree + 1))[, -1, drop = FALSE]
  X2 = X * X
  norm2 = colSums(X * X)
  alpha = drop(crossprod(X2, x)) / norm2
  beta = norm2 / (c(length(x), norm2[-degree]))
  colnames(X) = 1:degree
  scale = sqrt(norm2)
  X = X * rep(1 / scale, each = length(x))
  X
}
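To check that this sketch does (almost) reproduce poly, we can compare the two bases on a grid; up to a possible sign flip of some columns, the values should coincide (hence the absolute values below),

x = seq(-1, 1, length = 101)
range(abs(my_poly(x, 4)) - abs(unclass(poly(x, 4))))
# the differences should be numerically zero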
Nevertheless, the two models are equivalent. More precisely,
plot(cars)
reg1 = lm(dist~speed+I(speed^2)+I(speed^3),data=cars)
reg2 = lm(dist~poly(speed,3),data=cars)
u = seq(3,26,by=.1)
v1 = predict(reg1,newdata=data.frame(speed=u))
v2 = predict(reg2,newdata=data.frame(speed=u))
lines(u,v1,col="blue")
lines(u,v2,col="red",lty=2)
We have exactly the same prediction here
v1[u==15]
     121
38.43919
v2[u==15]
     121
38.43919
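and actually the two fitted curves coincide on the whole grid, up to numerical noise (the two models span the same column space),

max(abs(v1 - v2))
# should be of the order of machine precision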
And, probably more interesting: the coefficients do not have the same interpretation (since we do not use the same basis), but the p-value of the highest-degree term is exactly the same here! Both models reject here, with the same confidence, the polynomial term of degree three,
summary(reg1)

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) -19.50505   28.40530  -0.687    0.496
speed         6.80111    6.80113   1.000    0.323
I(speed^2)   -0.34966    0.49988  -0.699    0.488
I(speed^3)    0.01025    0.01130   0.907    0.369

Residual standard error: 15.2 on 46 degrees of freedom
Multiple R-squared:  0.6732,  Adjusted R-squared:  0.6519
F-statistic: 31.58 on 3 and 46 DF,  p-value: 3.074e-11

summary(reg2)

Coefficients:
                Estimate Std. Error t value Pr(>|t|)
(Intercept)        42.98       2.15  19.988  < 2e-16 ***
poly(speed, 3)1   145.55      15.21   9.573  1.6e-12 ***
poly(speed, 3)2    23.00      15.21   1.512    0.137
poly(speed, 3)3    13.80      15.21   0.907    0.369
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 15.2 on 46 degrees of freedom
Multiple R-squared:  0.6732,  Adjusted R-squared:  0.6519
F-statistic: 31.58 on 3 and 46 DF,  p-value: 3.074e-11
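We can extract the two p-values directly, to check that they are exactly equal (the highest-degree coefficient tests the same restriction in both parameterizations),

summary(reg1)$coefficients["I(speed^3)", 4]
summary(reg2)$coefficients["poly(speed, 3)3", 4]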
- B-spline regression (and GAMs)
Splines are also important in regression models, especially when we start talking about Generalized Additive Models. See Perperoglou, Sauerbrei, Abrahamowicz & Schmid (2019) for a review. In the univariate case, I introduce (linear) splines through positive parts, in the sense that y=\beta_0+\beta_1x+\beta_2(x-s_1)_++\cdots+\beta_k(x-s_{k-1})_++\varepsilon where (x-s)_+ equals 0 if x<s and x-s if x>s. Those functions are nice since they are continuous, so the model is continuous (a weighted sum of continuous functions is continuous). And we can go one step further, with y=\beta_0+\beta_1x+\beta_2x^2+\beta_3(x-s_1)^2_++\cdots+\beta_k(x-s_{k-2})^2_++\varepsilon for quadratic splines, or y=\beta_0+\beta_1x+\beta_2x^2+\beta_3x^3+\beta_4(x-s_1)^3_++\cdots+\beta_k(x-s_{k-3})^3_++\varepsilon for cubic splines. Interestingly, quadratic splines are not only continuous, but their first derivative is also continuous (and the second one for cubic splines). So the discontinuity at the knots s_1,s_2,\cdots is now invisible…
I like those models since they are easy to interpret. For example, the simple model \beta_1 x+\beta_2(x-s)_+ is a continuous, piecewise linear function, with a break in the slope at knot s (see the sketch below).
Observe also the following interpretation: for small values of x, there is a linear trend with slope \beta_1, and for larger values of x (beyond the knot s), a linear trend with slope \beta_1+\beta_2. Hence, \beta_2 is interpreted as a change of the slope at the knot.
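Just to visualize that, here is a minimal sketch, with arbitrary (hypothetical) values \beta_1=2, \beta_2=-3, and a knot at s=10,

pos = function(x,s) (x-s)*(x>s)         # the positive part (x-s)_+
x = seq(0, 20, by = .1)
plot(x, 2*x - 3*pos(x,10), type = "l")  # slope 2 below the knot, 2-3 = -1 above
abline(v = 10, lty = 2)                 # the knot s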
Unfortunately, that is not what R does with the bs function, which constructs the standard B-spline basis. Just to visualize it (I will skip the maths here), with R, we have
library(splines)
clr6 = c("#1b9e77","#d95f02","#7570b3","#e7298a","#66a61e","#e6ab02")
x = seq(5,25,by=.25)
B = bs(x,knots=c(10,20),Boundary.knots=c(5,55),degree=1)
matplot(x,B,type="l",lty=1,lwd=2,col=clr6)
B = bs(x,knots=c(10,20),Boundary.knots=c(5,55),degree=2)
matplot(x,B,type="l",col=clr6,lty=1,lwd=2)
while the functions I mentioned were (more or less) the following
pos = function(x,s) (x-s)*(x>s)
par(mfrow=c(1,2))
clr6 = c("#1b9e77","#d95f02","#7570b3","#e7298a","#66a61e","#e6ab02")
x = seq(5,25,by=.25)
B = cbind(pos(x,5),pos(x,10),pos(x,20))
matplot(x,B,type="l",lty=1,lwd=2,col=clr6)
pos2 = function(x,s) (x-s)^2*(x>s)
B = cbind(pos(x,5)*20,pos2(x,5),pos2(x,10),pos2(x,20))
matplot(x,B,type="l",col=clr6,lty=1,lwd=2)
And as for the polynomial regression, the two models are equivalent. For example
plot(cars)
reg1 = lm(dist~speed+pos(speed,10)+pos(speed,20),data=cars)
reg2 = lm(dist~bs(speed,degree=1,knots=c(10,20)),data=cars)
v1 = predict(reg1,newdata=data.frame(speed=u))
v2 = predict(reg2,newdata=data.frame(speed=u))
lines(u,v1,col="blue")
lines(u,v2,col="red",lty=2)
or more specifically
v1[u==15]
     121
39.35747
v2[u==15]
     121
39.35747
So one more time, the two models are equivalent, but I still find the approach with positive parts more intuitive and easier to understand, as is the interpretation of the coefficients,
summary(reg1)

Coefficients:
               Estimate Std. Error t value Pr(>|t|)
(Intercept)     -7.6305    16.2941  -0.468   0.6418
speed            3.0630     1.8238   1.679   0.0998 .
pos(speed, 10)   0.2087     2.2453   0.093   0.9263
pos(speed, 20)   4.2812     2.2843   1.874   0.0673 .
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 15 on 46 degrees of freedom
Multiple R-squared:  0.6821,  Adjusted R-squared:  0.6613
F-statistic: 32.89 on 3 and 46 DF,  p-value: 1.643e-11

summary(reg2)

Coefficients:
                                          Estimate Std. Error t value Pr(>|t|)
(Intercept)                                  4.621      9.344   0.495   0.6233
bs(speed, degree = 1, knots = c(10, 20))1   18.378     10.943   1.679   0.0998 .
bs(speed, degree = 1, knots = c(10, 20))2   51.094     10.040   5.089 6.51e-06 ***
bs(speed, degree = 1, knots = c(10, 20))3   88.859     12.047   7.376 2.49e-09 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 15 on 46 degrees of freedom
Multiple R-squared:  0.6821,  Adjusted R-squared:  0.6613
F-statistic: 32.89 on 3 and 46 DF,  p-value: 1.643e-11
Here we can see directly that the first knot was not really needed (the slope does not change significantly there) while the second one was…
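To formalize that, we can run a Fisher test of the nested model without the first knot against the full model (a sketch, with reg0 a hypothetical name); for a single restriction, the p-value is the one of the t-test above,

reg0 = lm(dist ~ speed + pos(speed,20), data = cars)
anova(reg0, reg1)   # F test of dropping pos(speed,10)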
Testing for a causal effect (with 2 time series)
A few days ago, I came back to a sentence I found (in a French newspaper), where someone was claiming that
“… an old variable explains 85% of the change in a new variable. So we can talk about causality”
and I tried to explain that it was just stupid: if we regress the temperature on day t+1 against the number of cyclists on day t, the R^2 exceeds 80%… but it is hard to claim that the number of cyclists on a specific day will actually cause the temperature on the next day…
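Just to show how such a number can be obtained, here is a sketch of that regression, using the data frame df loaded below (with columns cyclists and meanTemp): the temperature on day t+1 against the number of cyclists on day t,

n = nrow(df)
summary(lm(meanTemp[-1] ~ cyclists[-n], data = df))$r.squared
# the (spurious) R^2 mentioned above, driven by the common seasonality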
Nevertheless, that was frustrating, and I was wondering if there was a clever way to test for causality in that case. A popular one is Granger causality (I can mention a paper we published a few years ago where we used such a test, Tents, Tweets, and Events: The Interplay Between Ongoing Protests and Social Media). To explain that test, consider a bivariate time series (just like the one we have here), \boldsymbol{z}_t=(x_t,y_t), and consider some bivariate autoregressive model
{\displaystyle {\begin{bmatrix}x_{t}\\y_{t}\end{bmatrix}}={\begin{bmatrix}c_{1}\\c_{2}\end{bmatrix}}+{\begin{bmatrix}a_{1,1}&\textcolor{red}{a_{1,2}}\\\textcolor{blue}{a_{2,1}}&a_{2,2}\end{bmatrix}}{\begin{bmatrix}x_{t-1}\\y_{t-1}\end{bmatrix}}+{\begin{bmatrix}u_{t}\\v_{t}\end{bmatrix}}} where \boldsymbol{\varepsilon}_t=(u_t,v_t) is some bivariate white noise, in the sense that (i) {\displaystyle \mathbb{E} (\boldsymbol{\varepsilon}_{t})=\boldsymbol{0}} (the noise is centered), (ii) {\displaystyle \mathbb{E} (\boldsymbol{\varepsilon}_{t}\boldsymbol{\varepsilon}_{t}^\top)=\Omega }, so the variance matrix is constant, but possibly non-diagonal, and (iii) {\displaystyle \mathbb{E} (\boldsymbol{\varepsilon}_{t}\boldsymbol{\varepsilon}_{t-h}^\top)=\boldsymbol{0} } for all h\neq 0. Note that we can use the compact expression {\displaystyle {\boldsymbol{z}_t=\boldsymbol{c}+\boldsymbol{A}\boldsymbol{z}_{t-1}+\boldsymbol{\varepsilon}_t}}. Now, Granger's test is based on several quantities. With the off-diagonal terms of matrix \Omega, we have a so-called instantaneous causality, and since \Omega is symmetric, we will write x\leftrightarrow y. With the off-diagonal terms of matrix \boldsymbol{A}, we have a so-called lagged causality, with either \textcolor{blue}{x\rightarrow y} or \textcolor{red}{x\leftarrow y} (and possibly both, if both terms are significant).
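Before applying it to the cyclists, here is a small simulation sketch of what the test is supposed to pick up (all names below are hypothetical): a bivariate VAR(1) where x has a lagged effect on y (a_{2,1}\neq 0) but y has no lagged effect on x (a_{1,2}=0), so the test should reject "x does not Granger-cause y", and not the converse,

library(vars)
set.seed(123)
n = 500
A = matrix(c(.5, .4,    # first column : a11, a21 (x_{t-1} affects both x_t and y_t)
              0, .5),   # second column: a12, a22 (y_{t-1} does not affect x_t)
           2, 2)
z = matrix(0, n, 2, dimnames = list(NULL, c("x","y")))
for(t in 2:n) z[t,] = A %*% z[t-1,] + rnorm(2)
var0 = VAR(z, p = 1, type = "const")
causality(var0, cause = "x")$Granger   # small p-value expected: x -> y
causality(var0, cause = "y")$Granger   # large p-value expected: no y -> x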
So I wanted to try it on my two-variable problem.
df = read.csv("cyclistsTempHKI.csv")
dfts = cbind(C=ts(df$cyclists,start = c(2014, 1, 2), frequency = 365),
             T=ts(df$meanTemp,start = c(2014, 1, 2), frequency = 365))
library(vars)
I now have “time series” objects, and we can fit a VAR model,
var2 = VAR(dfts, p = 1, type = "const")
coefficients(var2)
$C
         Estimate   Std. Error   t value      Pr(>|t|)
C.l1    0.8684009   0.02889424 30.054460 8.080226e-107
T.l1   70.3042012  20.07247411  3.502518  5.102094e-04
const 807.6394001 187.75472482  4.301566  2.110412e-05

$T
           Estimate   Std. Error   t value     Pr(>|t|)
C.l1   0.0003865391 6.257596e-05  6.177118 1.540467e-09
T.l1   0.6611135594 4.347074e-02 15.208241 6.086394e-42
const -1.6413074565 4.066184e-01 -4.036481 6.446018e-05
For instance, we can run a causality test, to see whether the number of cyclists can cause the temperature (on the next day)
causality(var2, cause = "C") $Granger Granger causality H0: C do not Granger-cause T data: VAR object var2 F-Test = 38.157, df1 = 1, df2 = 842, p-value = 1.015e-09 |
Here, we should clearly reject H_0, the assumption that there is no causal effect. This is the way statisticians say that there should be some causal effect of the number of cyclists on the temperature…
So clearly, something is wrong here. Either it is some sort of superpower that cyclists are not aware of. Or this test, which has been used for forty years (Clive Granger even got a Nobel Prize for it), is not working. Or we missed something. Actually… I think we missed something here. Possibly because the series are not stationary. We can almost see it with
Phi = matrix(c(coefficients(var2)$C[1:2,1],coefficients(var2)$T[1:2,1]),2,2)
eigen(Phi)
eigen() decomposition
$values
[1] 0.9594810 0.5700335
where the largest eigenvalue is very close to one. But actually, if we look at the series, and at the temperature in particular…
plot(dfts)
so, at least, we should expect some seasonal unit root here. So let us use two techniques. The first one is a classical one-year difference, \Delta_{365}\boldsymbol{z}_t=\boldsymbol{z}_t-\boldsymbol{z}_{t-365}
var2 = VAR(diff(dfts,365), p = 1, type = "const")
coefficients(var2)
$C
          Estimate   Std. Error   t value     Pr(>|t|)
C.l1     0.8376424   0.07259969 11.537823 1.993355e-16
T.l1    42.2638410  28.58783276  1.478386 1.449076e-01
const -507.5514795 219.40240747 -2.313336 2.440042e-02

$T
         Estimate   Std. Error   t value     Pr(>|t|)
C.l1  0.000518209 0.0003277295 1.5812096 1.194623e-01
T.l1  0.598425288 0.1290511945 4.6371154 2.162476e-05
const 0.547828079 0.9904263469 0.5531235 5.823804e-01
The test on the fitted VAR model yields
causality(var2, cause = "C") $Granger Granger causality H0: C do not Granger-cause T data: VAR object var2 F-Test = 2.5002, df1 = 1, df2 = 112, p-value = 0.1167 |
i.e., with an 11% p-value, we fail to reject the null hypothesis of no Granger causality, so we should now reject the claim that the number of cyclists causes the temperature (on the next day). And actually, we reach the same conclusion the other way around,
causality(var2, cause = "T") $Granger Granger causality H0: T do not Granger-cause C data: VAR object var2 F-Test = 2.1856, df1 = 1, df2 = 112, p-value = 0.1421 |
Nevertheless, if we look at the instantaneous causality, we do find something significant, which makes more sense,
$Instant

	H0: No instantaneous causality between: T and C

data:  VAR object var2
Chi-squared = 13.081, df = 1, p-value = 0.0002982
The second idea would be to use a one-day difference, \Delta_{1}\boldsymbol{z}_t=\boldsymbol{z}_t-\boldsymbol{z}_{t-1}, and to fit a VAR model on that one,
VARselect(diff(dfts,1), lag.max = 4, type="const")
$selection
AIC(n)  HQ(n)  SC(n) FPE(n)
     3      3      2      3
but on that one, a VAR(1) model – with only one lag – might not be sufficient. It might be better to consider a VAR(3)
var2 = VAR(diff(dfts,1), p = 3, type = "const")
and on that one, one more time, we should reject the existence of a causal effect of the number of cyclists on the temperature (on the next day),
causality(var2, cause = "C") $Granger Granger causality H0: C do not Granger-cause T data: VAR object var2 F-Test = 0.67644, df1 = 3, df2 = 828, p-value = 0.5666 |
and this time, there could be a (lagged) causal effect of the temperature on the number of cyclists
causality(var2, cause = "T") $Granger Granger causality H0: T do not Granger-cause C data: VAR object var2 F-Test = 7.7981, df1 = 3, df2 = 828, p-value = 3.879e-05 $Instant H0: No instantaneous causality between: T and C data: VAR object var2 Chi-squared = 55.83, df = 1, p-value = 7.905e-14 |
together with a clear instantaneous relationship between the two series… So it looks like Granger causality performs well on that one!
Combining the levels of a factor variable
A quick post to come back to a point we saw this morning in the STT5100 course, to illustrate the Fisher test. We will use the apartment price data from Poland (a dataset used quite a lot in my draft lecture notes)
library(DALEX)
data(apartments)
with(data = apartments, boxplot(m2.price ~ district))
Here, we would like to merge some of the levels (this is actually suggested by the plain regression output, since 5 explanatory variables are not significant here). To see things better, we can reorder the levels,
A = with(data = apartments, aggregate(m2.price,by=list(district),FUN=mean))
A = A[order(A$x),]
L = as.character(A$Group.1)
apartments$district = factor(apartments$district, levels=L)
with(data = apartments, boxplot(m2.price ~ district))
We will take here the cheapest district as the reference level,
reg = lm(m2.price ~ district, data=apartments)
summary(reg)

Coefficients:
                    Estimate Std. Error t value Pr(>|t|)
(Intercept)          2968.36      58.02  51.160   <2e-16 ***
districtBielany        17.38      84.16   0.207    0.836
districtPraga          26.45      85.12   0.311    0.756
districtUrsynow        42.01      82.65   0.508    0.611
districtBemowo         80.10      83.71   0.957    0.339
districtUrsus         102.01      82.25   1.240    0.215
districtZoliborz      829.59      83.94   9.884   <2e-16 ***
districtMokotow       887.10      81.86  10.837   <2e-16 ***
districtOchota        987.93      84.16  11.738   <2e-16 ***
districtSrodmiescie  2214.39      83.28  26.591   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 597.4 on 990 degrees of freedom
Multiple R-squared:  0.5698,  Adjusted R-squared:  0.5659
F-statistic: 145.7 on 9 and 990 DF,  p-value: < 2.2e-16
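The natural next step is then to merge the reference district with the five districts whose coefficients are not significant, and to use a Fisher test to check that this simplification is acceptable (a sketch; cheap, district2 and reg0 are hypothetical names, and L is the ordered list of districts created above),

cheap = L[1:6]                # the reference district and the five non-significant ones
apartments$district2 = as.character(apartments$district)
apartments$district2[apartments$district2 %in% cheap] = "cheap_districts"
apartments$district2 = factor(apartments$district2)
reg0 = lm(m2.price ~ district2, data = apartments)
anova(reg0, reg)              # Fisher test of the grouped model against the full one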