Tag Archives: rstats

Classification from scratch, penalized Ridge logistic 4/8

Fourth post of our series on classification from scratch, following the previous post, which was some sort of detour on kernels. Today, we get back to the logistic model.

Formal approach of the problem

We’ve seen before that the classical technique used to estimate the parameters of a parametric model is maximum likelihood. More specifically, \widehat{\mathbf{\beta}}=\text{argmax}\lbrace \log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})\rbrace. The objective function here focuses (only) on the goodness of fit. But usually, in econometrics, we believe in something like non sunt multiplicanda entia sine necessitate (“entities are not to be multiplied without necessity”), the parsimony principle: simpler theories are preferable to more complex ones. So we want to penalize overly complex models.

This is not a bad idea. It is mentioned here and there in econometrics textbooks, but usually for model choice, not for inference. Usually, we estimate parameters using maximum likelihood techniques, and then we use AIC or BIC to compare two models. Recall that the Akaike (AIC) criterion is based on -2\log\mathcal{L}(\widehat{\mathbf{\beta}}|\mathbf{x},\mathbf{y})+2\text{dim}(\widehat{\mathbf{\beta}}). We have on the left a measure of the goodness of fit, and on the right, a penalty increasing with the “complexity” of the model.
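As a quick sanity check of that formula (not in the original post), we can compare the hand-made computation with R’s AIC() function, assuming the myocarde dataset used throughout this series is already loaded,

reg_aic = glm(PRONO~., data=myocarde, family=binomial)
-2*as.numeric(logLik(reg_aic)) + 2*length(coef(reg_aic))
AIC(reg_aic)   # returns the same value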

Very quickly here, the complexity is the number of covariates used. I will not go into details about the concept of sparsity (and the true dimension of the problem); I recommend the book by Martin Wainwright, Robert Tibshirani and Trevor Hastie on that issue. But assume that we do not perform any variable selection, and consider the regression on all covariates. Define \Vert\mathbf{a} \Vert_{\ell_0}=\sum_{i=1}^d \mathbf{1}(a_i\neq 0), ~~\Vert\mathbf{a} \Vert_{\ell_1}=\sum_{i=1}^d |a_i|,~~\Vert\mathbf{a} \Vert_{\ell_2}=\left(\sum_{i=1}^d a_i^2\right)^{1/2} for any \mathbf{a}\in\mathbb{R}^d. One might say that the AIC could be written -2\log\mathcal{L}(\widehat{\mathbf{\beta}}|\mathbf{x},\mathbf{y})+2\|\widehat{\mathbf{\beta}}\|_{\ell_0}, and actually, this will be our objective function. More specifically, we will consider
\widehat{\mathbf{\beta}}_{\lambda}=\text{argmin}\lbrace -\log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})+\lambda\|\mathbf{\beta}\|\rbrace for some norm \|\cdot\|. I will not get back here to the motivation and the (theoretical) properties of those estimates (that will actually be discussed during the Summer School in Barcelona, in July), but in this post, I want to discuss the numerical algorithm used to solve such an optimization problem, for \|\cdot\|_{\ell_2} (the Ridge regression) and for \|\cdot\|_{\ell_1} (the LASSO regression).
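As a side note, the three norms defined above are one-liners in R; a small illustration,

a = c(1, 0, -2, 3)
sum(a != 0)      # l0 "norm" (number of non-zero components)
sum(abs(a))      # l1 norm
sqrt(sum(a^2))   # l2 norm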

Normalization of the covariates

The problem with \|\mathbf{\beta}\| is that the norm should make sense, somehow: whether a \mathbf{\beta}_j is small or not depends on the scale (the “dimension”) of x_j. So, the first step will be to apply a linear transformation to all covariates x_j to get centered and scaled variables (with unit variance)

y = myocarde$PRONO
X = myocarde[,1:7]
for(j in 1:7) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
X = as.matrix(X)
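The same standardization can be obtained in one line with scale() (which also uses the n-1 standard deviation, like sd()); a quick check,

X = scale(as.matrix(myocarde[,1:7]))   # equivalent to the loop above
apply(X, 2, sd)                        # all equal to 1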

Ridge Regression (from scratch)

Before running some code, recall that we want to solve something like \widehat{\mathbf{\beta}}_{\lambda}=\text{argmin}\lbrace -\log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})+\lambda\|\mathbf{\beta}\|_{\ell_2}^2\rbrace. In the case where we consider the log-likelihood of some Gaussian variable, we get the sum of the squares of the residuals, and we can obtain an explicit solution. But not in the context of a logistic regression.
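For reference, a minimal sketch of that explicit solution in the Gaussian (least-squares) case: minimizing \|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|_{\ell_2}^2+\lambda\|\mathbf{\beta}\|_{\ell_2}^2 yields \widehat{\mathbf{\beta}}_{\lambda}=(\mathbf{X}^T\mathbf{X}+\lambda\mathbb{I})^{-1}\mathbf{X}^T\mathbf{y} (up to how the penalty is scaled), on centered and scaled covariates, the intercept being left out,

ridge_ls = function(X, y, lambda){
  # closed-form ridge estimator, Gaussian case only
  solve(t(X) %*% X + lambda * diag(ncol(X)), t(X) %*% y)
}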

The heuristics behind Ridge regression are given by the following graph. In the background, we can visualize the (two-dimensional) log-likelihood of the logistic regression, and the blue circle is the constraint we have, if we rewrite the optimization problem as a constrained optimization problem: \min_{\mathbf{\beta}:\|\mathbf{\beta}\|^2_{\ell_2}\leq s} \lbrace \sum_{i=1}^n -\log\mathcal{L}(y_i,\beta_0+\mathbf{x}^T\mathbf{\beta}) \rbrace can be written equivalently (it is a strictly convex problem) \min_{\mathbf{\beta},\lambda} \lbrace -\sum_{i=1}^n \log\mathcal{L}(y_i,\beta_0+\mathbf{x}^T\mathbf{\beta}) +\lambda \|\mathbf{\beta}\|_{\ell_2}^2 \rbrace. Thus, the constrained maximum should lie in the blue disk

LogLik = function(bbeta){
  b0=bbeta[1]
  beta=bbeta[-1]
  sum(-y*log(1 + exp(-(b0+X%*%beta))) - 
  (1-y)*log(1 + exp(b0+X%*%beta)))}
u = seq(-4,4,length=251)
# the surface is drawn over two slope coefficients only (intercept fixed at 1),
# so this visualization implicitly assumes a design X with two columns
v = outer(u,u,Vectorize(function(x,y) LogLik(c(1,x,y))))
image(u,u,v,col=rev(heat.colors(25)))
contour(u,u,v,add=TRUE)
u = seq(-1,1,length=251)
lines(u,sqrt(1-u^2),type="l",lwd=2,col="blue")
lines(u,-sqrt(1-u^2),type="l",lwd=2,col="blue")

Let us consider the objective function, with the following code

PennegLogLik = function(bbeta,lambda=0){
  b0   = bbeta[1]
  beta = bbeta[-1]
 -sum(-y*log(1 + exp(-(b0+X%*%beta))) - (1-y)*
  log(1 + exp(b0+X%*%beta)))+lambda*sum(beta^2)
}

Why not try a standard optimization routine? In the very first post of this series, we mentioned that using optimization routines was not that clever, since they rely strongly on the starting point. But here, it is not the case

lambda = 1
beta_init = lm(PRONO~.,data=myocarde)$coefficients
vpar = matrix(NA,1000,8)
for(i in 1:1000){
vpar[i,] = optim(par = beta_init*rnorm(8,1,2), 
function(x) PennegLogLik(x,lambda), method = "BFGS", control = list(abstol=1e-9))$par}
par(mfrow=c(1,2))
plot(density(vpar[,2]),ylab="",xlab=names(myocarde)[1])
plot(density(vpar[,3]),ylab="",xlab=names(myocarde)[2])


Clearly, even if we change the starting point, it looks like we converge towards the same value. That could be considered as the optimum.

The code to compute \widehat{\mathbf{\beta}}_{\lambda} would then be

opt_ridge = function(lambda){
beta_init = lm(PRONO~.,data=myocarde)$coefficients
logistic_opt = optim(par = beta_init*0, function(x) PennegLogLik(x,lambda), 
method = "BFGS", control=list(abstol=1e-9))
logistic_opt$par[-1]}

and we can visualize the evolution of \widehat{\mathbf{\beta}}_{\lambda} as a function of {\lambda}

v_lambda = c(exp(seq(-2,5,length=61)))
est_ridge = Vectorize(opt_ridge)(v_lambda)
library("RColorBrewer")
colrs = brewer.pal(7,"Set1")
plot(v_lambda,est_ridge[1,],col=colrs[1])
for(i in 2:7) lines(v_lambda,est_ridge[i,],col=colrs[i])

At least it seems to make sense: we can observe the shrinkage as \lambda increases (we’ll get back to that later on).

Ridge, using the Newton-Raphson algorithm

We’ve seen that we can also use Newton-Raphson to solve this problem. Without the penalty term, the algorithm was

\mathbf{\beta}_{new} = \mathbf{\beta}_{old} - \left(\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}\right)^{-1}\cdot \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}

where

\frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}=\mathbf{X}^T(\mathbf{y}-\mathbf{p}_{old})

and

\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}=-\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X}

where \mathbf{\Delta}_{old} is the diagonal matrix with terms \mathbf{p}_{old}(1-\mathbf{p}_{old}) on the diagonal.

Thus

\mathbf{\beta}_{new} = \mathbf{\beta}_{old} + (\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T[\mathbf{y}-\mathbf{p}_{old}]

which we can also write

\mathbf{\beta}_{new} =(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{z}

where \mathbf{z}=\mathbf{X}\mathbf{\beta}_{old}+\mathbf{\Delta}_{old}^{-1}[\mathbf{y}-\mathbf{p}_{old}]. Here, for the penalized problem, we can easily prove that

\frac{\partial\log\mathcal{L}_p(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}}=\frac{\partial\log\mathcal{L}(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}}-2\lambda\mathbf{\beta}_{old}

while

\frac{\partial^2\log\mathcal{L}_p(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}=\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}-2\lambda\mathbb{I}

Hence

\mathbf{\beta}_{\lambda,new} =(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X}+2\lambda\mathbb{I})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{z}
The code is then

Y = myocarde$PRONO
X = myocarde[,1:7]
for(j in 1:7) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
X = as.matrix(X)
X = cbind(1,X)
colnames(X) = c("Inter",names(myocarde[,1:7]))
 beta = as.matrix(lm(Y~0+X)$coefficients,ncol=1)
 for(s in 1:9){
   pi = exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
   Delta = matrix(0,nrow(X),nrow(X));diag(Delta)=(pi*(1-pi))
   z = X%*%beta[,s] + solve(Delta)%*%(Y-pi)
   B = solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X))) %*% (t(X)%*%Delta%*%z)
   beta = cbind(beta,B)}
beta[,8:10]
              [,1]        [,2]        [,3]
XInter  0.59619654  0.59619654  0.59619654
XFRCAR  0.09217848  0.09217848  0.09217848
XINCAR  0.77165707  0.77165707  0.77165707
XINSYS  0.69678521  0.69678521  0.69678521
XPRDIA -0.29575642 -0.29575642 -0.29575642
XPAPUL -0.23921101 -0.23921101 -0.23921101
XPVENT -0.33120792 -0.33120792 -0.33120792
XREPUL -0.84308972 -0.84308972 -0.84308972

Again, it seems that convergence is very fast.

And interestingly, with that algorithm, we can also derive the variance of the estimator,

\text{Var}[\widehat{\mathbf{\beta}}_{\lambda}]=[\mathbf{X}^T\mathbf{\Delta}\mathbf{X}+2\lambda\mathbb{I}]^{-1}\mathbf{X}^T\mathbf{\Delta}\text{Var}[\mathbf{z}]\mathbf{\Delta}\mathbf{X}[\mathbf{X}^T\mathbf{\Delta}\mathbf{X}+2\lambda\mathbb{I}]^{-1}

where \text{Var}[\mathbf{z}]=\mathbf{\Delta}^{-1}.

The code to compute \widehat{\mathbf{\beta}}_{\lambda} as a function of \lambda is then

newton_ridge = function(lambda=1){
 beta = as.matrix(lm(Y~0+X)$coefficients,ncol=1)*runif(8)
 for(s in 1:20){
   pi = exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
   Delta = matrix(0,nrow(X),nrow(X));diag(Delta)=(pi*(1-pi))
   z = X%*%beta[,s] + solve(Delta)%*%(Y-pi)
   B = solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X))) %*% (t(X)%*%Delta%*%z)
   beta = cbind(beta,B)}
Varz = solve(Delta)
Varb = solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X))) %*% t(X)%*% Delta %*% Varz %*%
  Delta %*% X %*% solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X)))
return(list(beta=beta[,ncol(beta)],sd=sqrt(diag(Varb))))}

We can visualize the evolution of \widehat{\mathbf{\beta}}_{\lambda} (as a function of \lambda)

v_lambda=c(exp(seq(-2,5,length=61)))
est_ridge=Vectorize(function(x) newton_ridge(x)$beta)(v_lambda)
library("RColorBrewer")
colrs=brewer.pal(7,"Set1")
plot(v_lambda,est_ridge[1,],col=colrs[1],type="l")
for(i in 2:7) lines(v_lambda,est_ridge[i,],col=colrs[i])


and to get the evolution of the variance

v_lambda=c(exp(seq(-2,5,length=61)))
est_ridge=Vectorize(function(x) newton_ridge(x)$sd)(v_lambda)
library("RColorBrewer")
colrs=brewer.pal(7,"Set1")
plot(v_lambda,est_ridge[1,],col=colrs[1],type="l")
for(i in 2:7) lines(v_lambda,est_ridge[i,],col=colrs[i],lwd=2)


Recall that when \lambda=0 (on the left of the graphs), \widehat{\mathbf{\beta}}_{0}=\widehat{\mathbf{\beta}}^{mco} (the unpenalized estimator). Thus, as \lambda increases, (i) the bias increases (the estimates shrink towards 0) and (ii) the variance decreases.

Ridge, using glmnet

As always, there are R functions available to run a ridge regression. Let us use the glmnet function, with \alpha=0

y = myocarde$PRONO
X = myocarde[,1:7]
for(j in 1:7) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
X = as.matrix(X)
library(glmnet)
glm_ridge = glmnet(X, y, alpha=0)
plot(glm_ridge,xvar="lambda",col=colrs,lwd=2)

as a function of the norm

(the \ell_1 norm here, I don’t know why; I don’t know either why all the graphs obtained with different optimisation routines are so different… but maybe that will be for another post…)

Ridge with orthogonal covariates

An interesting case is obtained when covariates are orthogonal. This can be obtained using a PCA of the covariates.

library(factoextra)
pca = princomp(X)
pca_X = get_pca_ind(pca)$coord

Let us run a ridge regression on those (orthogonal) covariates

library(glmnet)
glm_ridge = glmnet(pca_X, y, alpha=0)
plot(glm_ridge,xvar="lambda",col=colrs,lwd=2)

plot(glm_ridge,col=colrs,lwd=2)

We clearly observe the shrinkage of the parameters, in the sense that \widehat{\mathbf{\beta}}_{\lambda}^{\perp}=\frac{\widehat{\mathbf{\beta}}^{mco}}{1+\lambda}
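Here is a small numerical check of that shrinkage formula (a sketch, in the Gaussian least-squares case, on a simulated design with orthonormal columns, not on the myocarde data),

set.seed(1)
n = 100
Z = qr.Q(qr(matrix(rnorm(n*3), n, 3)))   # columns such that Z'Z = I
yz = Z %*% c(1, -2, .5) + rnorm(n)
lambda = 2
b_ols   = solve(t(Z) %*% Z, t(Z) %*% yz)
b_ridge = solve(t(Z) %*% Z + lambda * diag(3), t(Z) %*% yz)
cbind(b_ridge, b_ols/(1+lambda))         # the two columns coincide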

Application

Let us try with our second set of data

df0 = df
df0$y=as.numeric(df$y)-1
plot_lambda = function(lambda){
m = apply(df0,2,mean)
s = apply(df0,2,sd)
for(j in 1:2) df0[,j] = (df0[,j]-m[j])/s[j]
reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=0,lambda=lambda)
u = seq(0,1,length=101)
p = function(x,y){
  xt = (x-m[1])/s[1]
  yt = (y-m[2])/s[2]
  predict(reg,newx=cbind(x1=xt,x2=yt),type='response')}
v = outer(u,u,p)
image(u,u,v,col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)
}

We can try various values of \lambda

reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=0)
par(mfrow=c(1,2))
plot(reg,xvar="lambda",col=c("blue","red"),lwd=2)
abline(v=log(.2))
plot_lambda(.2)


or

reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=0)
par(mfrow=c(1,2))
plot(reg,xvar="lambda",col=c("blue","red"),lwd=2)
abline(v=log(1.2))
plot_lambda(1.2)


Next step is to change the norm of the penalty, with the \ell_1 norm (to be continued…)

Classification from scratch, logistic with splines 2/8

Today, second post of our series on classification from scratch, following the brief introduction on the logistic regression.

Piecewise linear splines

To illustrate what’s going on, let us start with a “simple” regression (with only one explanatory variable). The underlying idea is natura non facit saltus, “nature does not make jumps”, i.e. the processes governing natural phenomena are continuous. That seems to be a rather strong assumption: we could imagine, for instance, a fixed threshold to explain death. If patients die (for sure) when the “stroke index” exceeds some threshold, we might expect some discontinuity. Except that if that threshold is heterogeneous (a non-observable continuous variable), then we get back to the continuity assumption.

The simplest model we can think of to extend the linear model we’ve seen in the previous post is to consider a piecewise linear function, with two parts: small values of x, and larger values of x. The most convenient way to do so is to use the positive part function (x-s)_+, which is the difference between x and s if that difference is positive, and 0 otherwise. For instance, \beta_1 x+\beta_2(x-s)_+ is the following piecewise linear function: continuous, with a “rupture” at knot s.

Observe also the following interpretation: for small values of x, there is a linear trend with slope \beta_1, and for larger values of x, a linear trend with slope \beta_1+\beta_2. Hence, \beta_2 is interpreted as a change of the slope.

And of course, it is possible to consider more than one knot. The function to get the positive value is the following

pos = function(x,s) (x-s)*(x>=s)

then we can use it directly in our regression model

reg = glm(PRONO~INSYS+pos(INSYS,15)+
pos(INSYS,25),data=myocarde,family=binomial)

The output of the regression is here

summary(reg)
 
Coefficients:
               Estimate Std. Error z value Pr(>|z|)  
(Intercept)     -0.1109     3.2783  -0.034   0.9730  
INSYS           -0.1751     0.2526  -0.693   0.4883  
pos(INSYS, 15)   0.7900     0.3745   2.109   0.0349 *
pos(INSYS, 25)  -0.5797     0.2903  -1.997   0.0458 *

Hence, the original slope, for very small values, is not significant, but then, above 15, it becomes significantly positive. And above 25, there is a significant change again. We can plot it to see what’s going on

u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,type="l")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)

Using bs() linear splines

Using R’s spline functions, things are slightly different. We will use here so-called B-splines, from the splines package,

library(splines)

We can define spline functions with support (5,55) and with knots \{15,25\}

clr6 = c("#1b9e77","#d95f02","#7570b3","#e7298a","#66a61e","#e6ab02")
x = seq(0,60,by=.25)
B = bs(x,knots=c(15,25),Boundary.knots=c(5,55),degree=1)
matplot(x,B,type="l",lty=1,lwd=2,col=clr6)


as we can see, the functions defined here are different from the ones before, but we still have (piecewise) linear functions on each segment (5,15), (15,25) and (25,55). But linear combinations of those functions (the two sets of functions) generate the same space. Said differently, even if the interpretation of the output will be different, the predictions should be the same

reg = glm(PRONO~bs(INSYS,knots=c(15,25),
Boundary.knots=c(5,55),degree=1),
data=myocarde,family=binomial)
summary(reg)
 
Coefficients:
              Estimate Std. Error z value Pr(>|z|)  
(Intercept)    -0.9863     2.0555  -0.480   0.6314  
bs(INSYS,..)1  -1.7507     2.5262  -0.693   0.4883  
bs(INSYS,..)2   4.3989     2.0619   2.133   0.0329 *
bs(INSYS,..)3   5.4572     5.4146   1.008   0.3135

Observe that there are three coefficients, as before, but again, the interpretation is here more complicated…

v=predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)


Nevertheless, the prediction is the same… and that’s nice.
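We can check that claim numerically (a quick verification, not in the original post): the two parameterizations fitted above give the same fitted probabilities, up to the GLM convergence tolerance,

reg1 = glm(PRONO~INSYS+pos(INSYS,15)+pos(INSYS,25),
data=myocarde,family=binomial)
reg2 = glm(PRONO~bs(INSYS,knots=c(15,25),
Boundary.knots=c(5,55),degree=1),data=myocarde,family=binomial)
max(abs(predict(reg1,type="response")-predict(reg2,type="response")))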

Piecewise quadratic splines

Let us go one step further… Can we also have continuity of the derivative? Yes, and that’s actually easy, considering parabolic functions. Instead of using a decomposition on x,(x-s_1)_+ and (x-s_2)_+, consider now a decomposition on x,x^{\color{red}{2}},(x-s_1)^{\color{red}{2}}_+ and (x-s_2)^{\color{red}{2}}_+.

 pos2 = function(x,s) (x-s)^2*(x>=s)
reg = glm(PRONO~poly(INSYS,2)+pos2(INSYS,15)+pos2(INSYS,25),
data=myocarde,family=binomial)
summary(reg)
 
Coefficients:
                Estimate Std. Error z value Pr(>|z|)  
(Intercept)      29.9842    15.2368   1.968   0.0491 *
poly(INSYS, 2)1 408.7851   202.4194   2.019   0.0434 *
poly(INSYS, 2)2 199.1628   101.5892   1.960   0.0499 *
pos2(INSYS, 15)  -0.2281     0.1264  -1.805   0.0712 .
pos2(INSYS, 25)   0.0439     0.0805   0.545   0.5855

As expected, there are here five coefficients: the intercept and two for the part on the left (three parameters for the parabolic function), and then two additional terms for the part in the center – here (15,25) – and for the part on the right. Of course, for each portion, there is only one degree of freedom since we have a parabolic function (three coefficients) but two constraints (continuity, and continuity of the first order derivative).

On a graph, we get the following

v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2,xlab="INSYS",ylab="")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)

Using bs() quadratic splines

Of course, we can do the same with our R function. But as before, the basis functions are expressed differently here

 x = seq(0,60,by=.25)
B=bs(x,knots=c(15,25),Boundary.knots=c(5,55),degree=2)
matplot(x,B,type="l",xlab="INSYS",col=clr6)


If we run the R code, we get

reg = glm(PRONO~bs(INSYS,knots=c(15,25),
Boundary.knots=c(5,55),degree=2),data=myocarde,
family=binomial)
summary(reg)
 
Coefficients:
               Estimate Std. Error z value Pr(>|z|)  
(Intercept)       7.186      5.261   1.366   0.1720  
bs(INSYS, ..)1  -14.656      7.923  -1.850   0.0643 .
bs(INSYS, ..)2   -5.692      4.638  -1.227   0.2198  
bs(INSYS, ..)3   -2.454      8.780  -0.279   0.7799  
bs(INSYS, ..)4    6.429     41.675   0.154   0.8774

But that’s not really a big deal since the prediction is exactly the same

v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)

Cubic splines

Last, but not least, we can reach cubic splines. With our previous notions, we would consider a decomposition on (guess what) x,x^2,x^{\color{red}{3}},(x-s_1)^{\color{red}{3}}_+,(x-s_2)^{\color{red}{3}}_+, to get, this time, continuity, as well as continuity of the first two derivatives (and thus a very smooth function, since even the variations will be smooth). If we use the bs function, the basis is the following

B=bs(x,knots=c(15,25),Boundary.knots=c(5,55),degree=3)
matplot(x,B,type="l",lwd=2,col=clr6,lty=1,ylim=c(-.2,1.2))
abline(v=c(5,15,25,55),lty=2)

and the prediction will now be

reg = glm(PRONO~bs(INSYS,knots=c(15,25),
Boundary.knots=c(5,55),degree=3),
data=myocarde,family=binomial)
u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)
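As in the linear and quadratic cases, the same (cubic) space can be written with truncated power functions; a sketch (not in the original post), overlaid on the previous plot for comparison,

pos3 = function(x,s) (x-s)^3*(x>=s)
reg_pp = glm(PRONO~poly(INSYS,3)+pos3(INSYS,15)+pos3(INSYS,25),
data=myocarde,family=binomial)
v = predict(reg_pp,newdata=data.frame(INSYS=u),type="response")
lines(u,v,col="blue",lty=2)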


Two last things before concluding (for today), the location of the knots, and the extension to additive models.

Location of knots

In many applications, we do not want to specify the location of the knots. We just want – say – three (intermediary) knots. This can be done using

reg = glm(PRONO~1+bs(INSYS,degree=1,df=4),data=myocarde,family=binomial)

We can actually get the locations of the knots by looking at

attr(reg$terms, "predvars")[[3]]
bs(INSYS, degree = 1L, knots = c(15.8, 21.4, 27.15), 
Boundary.knots = c(8.7, 54), intercept = FALSE)

which provides us with the location of the boundary knots (the minimum and the maximum of our sample), but also the three intermediary knots. Observe that those five values are actually just (empirical) quantiles

quantile(myocarde$INSYS,(0:4)/4)
   0%   25%   50%   75%  100% 
 8.70 15.80 21.40 27.15 54.00

If we plot the prediction, we get

v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=quantile(myocarde$INSYS,(0:4)/4),lty=2)


If we get back to what was computed before the logit transformation, we clearly see that the ruptures are at the different quantiles

B = bs(x,degree=1,df=4)
B = cbind(1,B)
y = B%*%coefficients(reg)
plot(x,y,type="l",col="red",lwd=2)
abline(v=quantile(myocarde$INSYS,(0:4)/4),lty=2)


Note that if we do not specify anything about the knots (number or location), we get no knots…

reg = glm(PRONO~1+bs(INSYS,degree=2),data=myocarde,family=binomial)
attr(reg$terms, "predvars")[[3]]
bs(INSYS, degree = 2L, knots = numeric(0), 
Boundary.knots = c(8.7,54), intercept = FALSE)

and if we look at the prediction

u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)


actually, it is the same as a quadratic regression (as expected)

reg = glm(PRONO~1+poly(INSYS,degree=2),data=myocarde,family=binomial)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)

Additive models

Consider now the second dataset, with two variables. Consider here a model like
\mathbb{P}[Y|X_1=x_1,X_2=x_2]=\frac{\exp[\eta(x_1,x_2)]}{1+\exp[\eta(x_1,x_2)]}
where
\eta(x_1,x_2)=\beta_0+\color{red}{s_1(x_1)}+\color{blue}{s_2(x_2)}
\color{red}{s_1(x_1)}=\beta_{1,0}x_1+\beta_{1,1}(x_1-s_{11})_++\beta_{1,2}(x_1-s_{12})_+
and
\color{blue}{s_2(x_2)}=\beta_{2,0}x_2+\beta_{2,1}(x_2-s_{21})_++\beta_{2,2}(x_2-s_{22})_+
It might seem a little bit restrictive, but that’s actually the idea of additive models.

reg = glm(y~bs(x1,degree=1,df=3)+bs(x2,degree=1,df=3),data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Now, if we think about it, we’ve been able to get a “perfect” model, so, somehow, it no longer seems continuous…

persp(u,u,v,theta=20,phi=40,col="green")


Of course, it is… it is piecewise linear, with hyperplanes, some of them being almost vertical.

And one can also consider piecewise quadratic functions

reg = glm(y~bs(x1,degree=2,df=3)+bs(x2,degree=2,df=3),data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Funny thing, we now have two “perfect” models, with different areas for the white and the black dots… Don’t ask me how to choose between the two.

In R, it is possible to use the mgcv package to run a GAM regression. It is designed for generalized additive models, although with only one explanatory variable it is difficult to see the “additive” part, actually. And to be more specific, mgcv is using penalized quasi-likelihood from the nlme package (but we’ll get back to penalized routines later on).
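For completeness, a minimal sketch of what such a call could look like on the two-variable dataset (with mgcv’s default thin-plate smooths, not the B-splines used above; the small basis dimension k=5 is an assumption, chosen because the toy dataset is tiny),

library(mgcv)
reg_gam = gam(y~s(x1,k=5)+s(x2,k=5),data=df,family=binomial)
plot(reg_gam,pages=1)   # the two estimated smooth components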

But maybe I should first mention another smoothing tool: kernels (and maybe also k-nearest neighbors). To be continued…

Classification from scratch, overview 0/8

Before my course on « big data and economics » at the University of Barcelona in July, I wanted to upload a series of posts on classification techniques, to give some insight into machine learning tools.

According to some common idea, machine learning algorithms are black boxes. I wanted to get back to that saying. First of all, isn’t it also the case for regression models, like generalized additive models (with splines)? Do you really know what the algorithm is doing? Even for the logistic regression: in textbooks, we can easily find the math formulas, but what is really done when we run it in R?

When I started working in academia, someone told me something like « if you really want to understand a theory, teach it ». And that has been my motto for more than 15 years. I wanted to add a second part to that statement: « if you really want to understand an algorithm, recode it ». So let’s try this… My ambition is to recode (more or less) most of the standard algorithms used in predictive modeling, from scratch, in R, and to post them within the next two weeks.

I will use two datasets to illustrate. The first one is inspired by the cover of « Foundations of Machine Learning » by Mehryar Mohri, Afshin Rostamizadeh and Ameet Talwalkar. At least, with this dataset, it will be possible to plot predictions (since there are only two – continuous – features)

x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x,x2=y,y=as.factor(z))
plot(x,y,pch=c(1,19)[1+z])

Here is some code to get a visualization of the prediction (here the probability to be a black point)

rmatrix_model = function(model){
u = seq(0,1,length=101)
p = function(x,y) predict(model,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
return(v)}
nice_graph=function(v){
u = seq(0,1,length=101)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10[c(1,10)],breaks=c(0,5,10)/10)
points(x,y,pch=19,cex=1.5,col="white")
points(x,y,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)
}
reg = glm(y~x1+x2,data=df,family=binomial)
nice_graph(rmatrix_model(reg))

Note that colors are defined here as

clr10= c("#ffffff","#f7fcfd","#e5f5f9","#ccece6","#99d8c9","#66c2a4","#41ae76","#238b45","#006d2c","#00441b")

or with some nonlinear model
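The code used for that nonlinear figure is not given here; a hypothetical example (an assumption on my part, simply adding quadratic terms) could be

reg_nl = glm(y~poly(x1,2)+poly(x2,2),data=df,family=binomial)
nice_graph(rmatrix_model(reg_nl))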

The second one is a dataset I got from Gilbert Saporta, about heart attacks and the survival of patients (our binary variable).

myocarde = read.table("http://freakonometrics.free.fr/myocarde.csv",head=TRUE, sep=";")
myocarde$PRONO = (myocarde$PRONO=="SURVIE")*1
y = myocarde$PRONO
X = as.matrix(cbind(1,myocarde[,1:7]))

So far, I do not plan to talk (too much) about the choice of tuning parameters (and cross-validation), about comparing models, etc. The goal here is simply to understand what’s going on when we call either glm, glmnet, gam, random forest, svm, xgboost, or any function, to get a predictive model.

Using convolutions (S3) vs distributions (S4)

Usually, to illustrate the difference between S3 and S4 classes in R, I mention glm (from base) and vglm (from VGAM) that provide similar outputs, but one is based on S3 codes, while the second one is based on S4 codes. Another way to illustrate is to manipulate distributions.

Consider the case where we want to sum (independent) random variables. For instance, two lognormal distributions. Let us try to compute the median of the sum.

The distribution function of the sum of two independent (positive) random variables is F_{S_2}(x)=\int_0^x F_{X_1}(x-y)dF_{X_2}(y)

pSum2 = function(x) integrate(function(y) 
plnorm(x-y,1,2)*dlnorm(y,2,1),0,x)$value

Let us visualize that cumulative distribution function

vx=seq(0.1,50,by=.1)
vy=Vectorize(pSum2)(vx)
plot(vx,vy,type="l",ylim=c(0,1))
abline(h=.5,lty=2)

Let us find an upper bound to compute (in a decent time) quantiles

pSum2(350)
[1] 0.99195

and then use the uniroot function to inverse that function

qSum = function(u) uniroot(function(x) 
Vectorize(pSum2)(x)-u, interval=c(0,350))$root
vu=seq(.01,.99,by=.01)
vv=Vectorize(qSum)(vu)

The median is here

qSum(.5)
[1] 14.155

Why not consider the sum of three (independent) distributions? Its cumulative distribution function can be written using our previous function, F_{S_3}(x)=\int_0^x F_{S_2}(x-y)dF_{X_3}(y)

pSum3 = function(x) integrate(function(y) 
pSum2(x-y)*dlnorm(y,2,2),0,x)$value

If we look at some values, we get

pSum3(4)
[1] 0.015624
pSum3(5)
Error in integrate(function(y) plnorm(x - y, 1, 2) * 
dlnorm(y, 2, 1),  : 
  maximum number of subdivisions reached

So obviously, there are computational issues here.

Let us consider the following alternative expression, F_{S_3}(x)=\int_0^x F_{X_3}(x-y)dF_{S_2}(y). Of course, it is necessary here to compute the density of the sum of two variables

dSum2 = function(x) integrate(function(y) 
dlnorm(x-y,1,2)*dlnorm(y,2,1),0,x)$value
pSum3 = function(x) integrate(function(y) 
dlnorm(x-y,2,2)*dSum2(y),0,x)$value

Again, let us compute some values

pSum3(4)
[1] 0.0090285
pSum3(5)
[1] 0.01186

This one seems to work quite well. But it is just an illusion.

pSum3(9)
Error in integrate(function(y) dlnorm(x - y, 1, 2) *
 dlnorm(y, 2, 1),  : 
  maximum number of subdivisions reached

Clearly, with those S3-type functions, it will be complicated to run computations with 3 variables, or more.

Let us consider distributions in the S4-type format of the following package

library(distr)
X1 = Lnorm(mean=1,sd=2)
X2 = Lnorm(mean=2,sd=1)
S2 = X1+X2

To compute the median, we simply have to use

distr::q(S2)(.5)
[1] 14.719

We can also visualize it easily

plot(q(S2))

which looks (very) close to what we got, manually.  But here, it is also possible to work with the sum of 3 (independent) random variables

X3 = Lnorm(mean=2,sd=2)
S3 = X1+X2+X3

To compute the median, use

distr::q(S3)(.5)
[1] 33.208

The function is here

plot(q(S3))
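As a quick cross-check (a sketch, not in the original post), a crude Monte Carlo approximation of those two medians,

set.seed(1)
x1 = rlnorm(1e6,1,2); x2 = rlnorm(1e6,2,1); x3 = rlnorm(1e6,2,2)
median(x1+x2)      # to be compared with the medians obtained above
median(x1+x2+x3)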

Forecasting car sales in Quebec

After discussion, the deadline to send me the second assignment is set to Friday, October 5, at noon, by email. I want one pdf file per team, with a cover page giving the names of the group members and the selected keyword.

Now, let me take this opportunity to post the code typed this morning, in class. We worked on modeling the series of car sales in Quebec. The series is online,

X=read.table(
"http://freakonometrics.blog.free.fr/public/
data/car-sales-quebec.csv",
header=TRUE,sep=";",nrows=108)
Xt=ts(X[,2],start=c(1960,1),frequency=12)

We can look at the evolution of this time series (because plots, and visualization, matter),

plot(Xt)

We notice a linear trend (or one that could be assumed linear), which we will estimate,

X=as.numeric(Xt)
temps=1:length(X)
base=data.frame(temps,X)
reg=lm(X~temps)

and which we can then draw

plot(temps,X,type="l",xlim=c(0,120))
abline(reg,col="red")

If we want a stationary series (and that is indeed what we are looking for), we subtract this trend from our series, and work on the residual series.

The residual series is here

Y=X-predict(reg)
plot(temps,Y,type="l")

which we should be able to assume stationary. Classically, we look at the autocorrelations,

acf(Y,36,lwd=4)

where we spot a nice annual cycle, but we will mostly use the partial autocorrelations to identify the order of the autoregressive component.

pacf(Y,36,lwd=4)

The 12th one is non-zero, and all the ones after it are significantly null (or almost). So we will try an AR(12).

> fit.ar12=arima(Y,order=c(12,0,0))
> fit.ar12
Series: Y
ARIMA(12,0,0) with non-zero mean

Coefficients:
         ar1     ar2      ar3      ar4     ar5      ar6     ar7      ar8
      0.1975  0.0832  -0.1062  -0.1212  0.1437  -0.1051  0.0319  -0.1018
s.e.  0.0838  0.0809   0.0826   0.0843  0.0850   0.0833  0.0854   0.0847
         ar9     ar10    ar11    ar12  intercept
     -0.0332  -0.0616  0.2635  0.4913  -148.3180
s.e.  0.0853   0.0840  0.0840  0.0841   384.5095

sigma^2 estimated as 2177974:  log likelihood=-946.75
AIC=1921.51   AICc=1926.03   BIC=1959.06

The twelfth component, \phi_{12}, seems significant, if we think back to the Student t-test,

u=seq(-6,6,by=.1)
plot(u,dnorm(u),type="l")
points(0.4913/0.0841,0,pch=19,col="red")

On the other hand, the eighth one, \phi_{8}, is not,

points(-0.1018/0.0847,0,pch=19,col="blue")

(so much for the quick refresher on reading those tests). Let us now check that the residual noise is indeed white… If we look at the autocorrelations,

acf(residuals(fit.ar12),36,lwd=4)

But this is not a white noise test. In particular, even if one autocorrelation seemed significant, we could still accept that, globally, the autocorrelations are very close to 0. We then run Box-Pierce tests

> Box.test(residuals(fit.ar12),lag=12,
+  type='Box-Pierce')

Box-Pierce test

data:  residuals(fit.ar12)
X-squared = 7.7883, df = 12, p-value = 0.8014

For the interpretation of the test, it is always the same: we have a statistic around 7, and the underlying distribution (that of the sum of the squares of the first 12 autocorrelations of the noise) is a chi-square distribution, with 12 degrees of freedom,

u=seq(0,30,by=.1)
plot(u,dchisq(u,df=12),type="l")
points(7.7883,0,pch=19,
col="red")

We accept the hypothesis that the first 12 autocorrelations are null. If we go a bit further,

> Box.test(residuals(fit.ar12),lag=18,
+  type='Box-Pierce')

Box-Pierce test

data:  residuals(fit.ar12)
X-squared = 20.3861, df = 18, p-value = 0.3115

we reach the same conclusion,

As a reminder, we can recover the p-value by hand,

> 1-pchisq(20.3861,df=18)
[1] 0.3115071

Let us draw the little plot of all the p-values, to make sure the noise is indeed a white noise,

 BP=function(h) Box.test(residuals(fit.ar12),lag=h,
type='Box-Pierce')$p.value
plot(1:24,Vectorize(BP)(1:24),type='b',
ylim=c(0,1),col="red")
abline(h=.05,lty=2,col="blue")

This time we are good: we have our first model, an AR(12). If we want to go further, we can look at what automatic selection would suggest,

> library(caschrono)
> armaselect(Y,nbmod=5)
p q      sbc
[1,] 14 1 1635.214
[2,] 12 1 1635.645
[3,] 15 1 1638.178
[4,] 12 3 1638.297
[5,] 12 4 1639.232

We might be tempted to see what would happen if we added a moving-average component to our previous model,

> fit.arma12.1=arima(Y,order=c(12,0,1))
> fit.arma12.1
Series: Y
ARIMA(12,0,1) with non-zero mean

Coefficients:
ar1     ar2      ar3      ar4     ar5      ar6     ar7
      0.0301  0.1558  -0.0941  -0.1461  0.1063  -0.0688  -0.002
s.e.  0.1235  0.0854   0.0757   0.0784  0.0807   0.0774   0.080
ar9     ar10    ar11    ar12     ma1  intercept
      -0.0646  -0.0798  0.2538  0.5786  0.2231  -131.3495
s.e.   0.0802   0.0766  0.0751  0.0861  0.1393   368.8156

sigma^2 estimated as 2127759:  log likelihood=-945.65
AIC=1921.31   AICc=1926.52   BIC=1961.54

The new coefficient, \theta_1, is barely significant. Possibly with a 10% threshold… why not? If we look at the residuals, unsurprisingly, we still have a white noise, even whiter than before (in purple on the plot below)

BP=function(h) Box.test(
residuals(fit.arma12.1),lag=h,
type='Box-Pierce')$p.value
plot(1:24,Vectorize(BP)(1:24),type='b',
ylim=c(0,1),col="red")
abline(h=.05,lty=2,col="blue")

BP=function(h) Box.test(residuals(fit.ar12),lag=h,
type='Ljung-Box')$p.value
lines(1:24,Vectorize(BP)(1:24),col="purple",
type="b")

We can also look at the other suggested models, in particular the AR(14) model.

> fit.ar14=
+ arima(Y,order=c(14,0,0),method="CSS")
> fit.ar14
Series: Y
ARIMA(14,0,0) with non-zero mean

Coefficients:
ar1     ar2      ar3      ar4     ar5      ar6     ar7
      0.2495  0.2105  -0.0584  -0.1569  0.1282  -0.1152  0.0268
s.e.  0.0956  0.0972   0.0854   0.0830  0.0838   0.0840  0.0847
ar9     ar10    ar11    ar12     ar13     ar14  intercept
      -0.0327  -0.1116  0.2649  0.5887  -0.1575  -0.1572    80.5
s.e.   0.0855   0.0851  0.0853  0.0886   0.1031   0.0999   338.9

sigma^2 estimated as 2218612:  part log likelihood=-942.31

It is a bit of a stretch, but we could accept the hypothesis that \phi_{14} is significantly non-zero. It is still borderline, though… Fine, let us accept it too in our gang of models.

We finally have three models. If we do a bit of backtesting, on the last 12 months,

T=length(Y)
backtest=12
subY=Y[-((T-backtest+1):T)]
subtemps=1:(T-backtest)
plot(temps,Y,type="l")
lines(subtemps,subY,lwd=4)
fit.ar12.s=arima(subY,
order=c(p=12,d=0,q=0),method="CSS")
fit.arma12.1.s=arima(subY,
order=c(p=12,d=0,q=1),method="CSS")
fit.ar14.s=arima(subY,
order=c(p=14,d=0,q=0),method="CSS")
p.ar12=predict(fit.ar12.s,12)
pred.ar12=as.numeric(p.ar12$pred)
p.arma12.1=predict(fit.arma12.1.s,12)
pred.arma12.1=as.numeric(p.arma12.1$pred)
p.ar14=predict(fit.ar14.s,12)
pred.ar14=as.numeric(p.ar14$pred)

we obtain the following forecasts (with the observed values in the first column)

> (M=cbind(observé=Y[((T-backtest+1):T)],
+  modèle1=pred.ar12,
+  modèle2=pred.arma12.1,
+  modèle3=pred.ar14))
observé    modèle1    modèle2    modèle3
97  -4836.2174 -5689.3331 -5885.4486 -6364.2471
98  -3876.4199 -4274.0391 -4287.2193 -4773.8116
99   1930.3776  1817.8411  2127.9915  2290.1460
100  3435.1751  4089.3598  3736.1110  4039.4150
101  7727.9726  6998.9829  7391.6694  7281.4797
102  2631.7701  3456.8819  3397.5478  4230.5324
103  -509.4324 -2128.6315 -2268.9672 -2258.7216
104 -1892.6349 -3877.7609 -3694.9409 -3620.4798
105 -4310.8374 -3384.0905 -3430.4090 -2881.4942
106  2564.9600  -504.6883  -242.5018   183.2891
107 -1678.2425 -1540.9904 -1607.5996  -855.7677
108 -4362.4450 -3927.4772 -3928.0626 -3718.3922
>  sum((M[,1]-M[,2])^2)
[1] 19590931
>  sum((M[,1]-M[,3])^2)
[1] 17293716
>  sum((M[,1]-M[,4])^2)
[1] 21242230

That is, we would be inclined to keep the second model, the ARMA(12,1). We will now use it to do a bit of forecasting,

library(forecast)
fit.arma12.1=
arima(Y,order=c(12,0,1))
fit.arma12.1
PREDARMA=forecast(fit.arma12.1,12)
plot(Y,xlim=c(1,120),type="l")
temps=T+1:12
polygon(c(temps,rev(temps)),c(PREDARMA$lower[,2],
rev(PREDARMA$upper[,2])),col="yellow",border=NA)
polygon(c(temps,rev(temps)),c(PREDARMA$lower[,1],
rev(PREDARMA$upper[,1])),col="orange",border=NA)
lines(temps,PREDARMA$mean,col="red")

That said, we can also forecast much further ahead,

PREDARMA=forecast(fit.arma12.1,120)
plot(Y,xlim=c(1,210),type="l")
temps=T+1:120
polygon(c(temps,rev(temps)),c(PREDARMA$lower[,2],
rev(PREDARMA$upper[,2])),col="yellow",border=NA)
polygon(c(temps,rev(temps)),c(PREDARMA$lower[,1],
rev(PREDARMA$upper[,1])),col="orange",border=NA)
lines(temps,PREDARMA$mean,col="red")

Well, we are almost there, since we have modeled (Y_t), the series obtained after removing the linear trend. To get back to (X_t), we add the trend to the forecast made previously,

X=as.numeric(Xt)
temps=1:length(X)
plot(temps,X,type="l",xlim=c(0,210),
ylim=c(5000,30000))
base=data.frame(temps,X)
reg=lm(X~temps)
abline(reg,col="red")
PREDTENDANCE=predict(reg,newdata=
data.frame(temps=T+1:120))
temps=T+1:120
polygon(c(temps,rev(temps)),c(PREDTENDANCE+
PREDARMA$lower[,2],rev(PREDTENDANCE+PREDARMA$upper[,2])),
col="yellow",border=NA)
polygon(c(temps,rev(temps)),c(PREDTENDANCE+
PREDARMA$lower[,1],rev(PREDTENDANCE+PREDARMA$upper[,1])),
col="orange",border=NA)
lines(temps,PREDTENDANCE+PREDARMA$mean,col="red")

Logarithmic transformation of time series

To continue a discussion started at the end of class: in some cases, we may have the impression that modeling a series could be complicated,

plot(X,xlim=c(1,length(X)+20))

But we may have the intuition that modeling the logarithm of the series could be simpler,

> X=log(Y)
> plot(X,xlim=c(1,length(X)+20))

We then try to model this latter series with an ARMA process,

> md=arima(X,c(12,0,1))
> P=predict(md,24)
> E=P$pred
> V=P$se^2

We can then make a forecast on this series, which is easier to model, and visualize that forecast.

> temps=length(X)+1:24
> ciu=(E+2*sqrt(V))
> cil=(E-2*sqrt(V))
> polygon(c(temps,rev(temps)),c(ciu,
+ rev(cil)),col="yellow",border=NA)
> lines(temps,E,col="red",lwd=2)
> lines(temps,ciu,col="red",lty=2)
> lines(temps,cil,col="red",lty=2)

Now, we have to transform back. We will use a result we saw about the logarithmic transformation in a regression: if, after taking the logarithm, we have a simple Gaussian model, then the initial model was lognormal. We can then use the properties of the lognormal distribution, whose moments are known from those of the underlying Gaussian distribution. For the (point) forecast, we do not really have a choice,

> mu=exp(E+.5*V)

On the other hand, to build a confidence interval, either we use the variance of our lognormal distribution to get the variance of our process, and forget about the lognormal story to build a Gaussian interval,

> sig2=(exp(V)-1)*exp(2*E+V)
> ci1u=mu+2*sqrt(sig2)
> ci1l=mu-2*sqrt(sig2)

or we use the fact that, since the transformation is monotone, the confidence interval can be seen as a transformation of the previous confidence interval,

> ci2u=exp(E+2*sqrt(V))
> ci2l=exp(E-2*sqrt(V))

If we compare the two visually, in the first case we have,

> plot(Y,xlim=c(1,length(X)+20))
> temps=length(X)+1:24
> polygon(c(temps,rev(temps)),c(ci1u,
> rev(ci1l)),col="yellow",border=NA)
> lines(temps,mu,col="red",lwd=2)
> lines(temps,ci1u,col="red",lty=2)
> lines(temps,ci1l,col="red",lty=2)

(which is symmetric and centered on our forecast) and in the second

> plot(Y,xlim=c(1,length(X)+20))
> temps=length(X)+1:24
> ci1u=mu+2*sqrt(sig2)
> ci1l=mu-2*sqrt(sig2)
> polygon(c(temps,rev(temps)),c(ci2u,
> rev(ci2l)),col="yellow",border=NA)
> lines(temps,mu,col="red",lwd=2)
> lines(temps,ci2u,col="red",lty=2)
> lines(temps,ci2l,col="red",lty=2)

(nonparametric) copula density estimation

Today, we will go further on the inference of copula functions. Some codes (and references) can be found on a previous post, on nonparametric estimators of copula densities (among other related things).  Consider (as before) the loss-ALAE dataset (since we’ve been working a lot on that dataset)

> library(MASS)
> library(evd)
> X=lossalae
> U=cbind(rank(X[,1])/(nrow(X)+1),rank(X[,2])/(nrow(X)+1))

The standard tool to plot nonparametric estimators of densities is to use multivariate kernels. We can look at the density using

> mat1=kde2d(U[,1],U[,2],n=35)
> persp(mat1$x,mat1$y,mat1$z,col="green",
+ shade=TRUE,theta=s*5, # s indexes the frames of the animated figure
+ xlab="",ylab="",zlab="",zlim=c(0,7))

or level curves (isodensity curves) with more detailed estimators (on grids with shorter steps)

> mat1=kde2d(U[,1],U[,2],n=101)
> image(mat1$x,mat1$y,mat1$z,col=
+ rev(heat.colors(100)),xlab="",ylab="")
> contour(mat1$x,mat1$y,mat1$z,add=
+ TRUE,levels = pretty(c(0,4), 11))


Kernels are nice, but we clearly observe some border bias, extremely strong in the corners (the estimator is 1/4th of what it should be, see another post for more details). Instead of working on the sample (U_i,V_i) on the unit square, consider some transformed sample (Q(U_i),Q(V_i)), where Q:(0,1)\rightarrow\mathbb{R} is a given function, e.g. the quantile function of an unbounded distribution, for instance the quantile function of the \mathcal{N}(0,1) distribution. Then, we can estimate the density of the transformed sample, and, using the inversion technique, derive an estimator of the density of the initial sample. Since the inverse of a (general) function is not that simple to compute, the code might be a bit slow. But it does work,

> gaussian.kernel.copula.surface <- function (u,v,n) {
+   s=seq(1/(n+1), length=n, by=1/(n+1))
+   mat=matrix(NA,nrow = n, ncol = n)
+ sur=kde2d(qnorm(u),qnorm(v),n=1000,
+ lims = c(-4, 4, -4, 4))
+ su<-sur$z
+ for (i in 1:n) {
+     for (j in 1:n) {
+ 	Xi<-round((qnorm(s[i])+4)*1000/8)+1;
+ 	Yj<-round((qnorm(s[j])+4)*1000/8)+1
+ 	mat[i,j]<-su[Xi,Yj]/(dnorm(qnorm(s[i]))*
+ 	dnorm(qnorm(s[j])))
+     }
+ }
+ return(list(x=s,y=s,z=data.matrix(mat)))
+ }
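The function above can then be called as follows (a sketch of usage, with an arbitrary viewing angle),

est = gaussian.kernel.copula.surface(U[,1],U[,2],n=35)
persp(est$x,est$y,est$z,col="green",shade=TRUE,
theta=30,xlab="",ylab="",zlab="")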

Here, we get


Note that it is possible to consider another transformation, e.g. the quantile function of a Student-t distribution.

> student.kernel.copula.surface =
+  function (u,v,n,d=4) {
+  s <- seq(1/(n+1), length=n, by=1/(n+1))
+  mat <- matrix(NA,nrow = n, ncol = n)
+ sur<-kde2d(qt(u,df=d),qt(v,df=d),n=5000,
+ lims = c(-8, 8, -8, 8))
+ su<-sur$z
+ for (i in 1:n) {
+     for (j in 1:n) {
+ 	Xi<-round((qt(s[i],df=d)+8)*5000/16)+1;
+ 	Yj<-round((qt(s[j],df=d)+8)*5000/16)+1
+ 	mat[i,j]<-su[Xi,Yj]/(dt(qt(s[i],df=d),df=d)*
+ 	dt(qt(s[j],df=d),df=d))
+     }
+ }
+ return(list(x=s,y=s,z=data.matrix(mat)))
+ }

Another strategy is to consider kernels that have precisely the unit interval as support. The idea is here to consider the product of Beta kernels, where the parameters depend on the location

> beta.kernel.copula.surface=
+  function (u,v,bx=.025,by=.025,n) {
+  s <- seq(1/(n+1), length=n, by=1/(n+1))
+  mat <- matrix(0,nrow = n, ncol = n)
+ for (i in 1:n) {
+     a <- s[i]
+     for (j in 1:n) {
+     b <- s[j]
+ 	mat[i,j] <- sum(dbeta(a,u/bx,(1-u)/bx) *
+     dbeta(b,v/by,(1-v)/by)) / length(u)
+     }
+ }
+ return(list(x=s,y=s,z=data.matrix(mat)))
+ }


On those two graphs, we can clearly observe strong tail dependence in the upper (right) corner, that cannot be intuited using a standard kernel estimator…

Copulas and tail dependence, part 1

As mentioned in the course last week, Venter (2003) suggested nice functions to illustrate tail dependence (see also some slides used in Berlin a few years ago).

  • Joe (1990)’s lambda

Joe (1990) suggested a (strong) tail dependence index. For lower tails, for instance, consider

\lambda_L=\lim_{u\rightarrow 0}\mathbb{P}[X_1\leq F_1^{-1}(u)\vert X_2\leq F_2^{-1}(u)]

i.e.

\lambda_L=\lim_{u\rightarrow 0}\frac{C(u,u)}{u}
  • Upper and lower strong tail (empirical) dependence functions

The idea is to plot the function above, in order to visualize its limiting behavior. Define

L(z)=\frac{\mathbb{P}[U\leq z,V\leq z]}{\mathbb{P}[U\leq z]}=\frac{C(z,z)}{z}

for the lower tail, and

R(z)=\frac{\mathbb{P}[U>z,V>z]}{\mathbb{P}[U>z]}=\frac{1-2z+C(z,z)}{1-z}=\frac{C^\star(1-z,1-z)}{1-z}

for the upper tail, where C^\star is the survival copula associated with C, in the sense that

C^\star(u,v)=\mathbb{P}[U>1-u,V>1-v]

while

C^\star(u,v)=u+v-1+C(1-u,1-v)

Now, one can easily derive empirical counterparts of those functions, i.e.

\widehat{L}(z)=\frac{\sum_{i=1}^n\mathbf{1}(\widehat{U}_i\leq z,\widehat{V}_i\leq z)}{\sum_{i=1}^n\mathbf{1}(\widehat{U}_i\leq z)}

and

\widehat{R}(z)=\frac{\sum_{i=1}^n\mathbf{1}(\widehat{U}_i>z,\widehat{V}_i>z)}{\sum_{i=1}^n\mathbf{1}(\widehat{U}_i>z)}

where \widehat{U}_i and \widehat{V}_i denote the normalized ranks, as in the code below.

Thus, for upper tail, on the right, we have the following graph


and for the lower tail, on the left, we have


For the code, consider some real data, like the loss-ALAE dataset.

> library(evd)
> X=lossalae

The idea is to plot, on the left, the lower tail concentration function, and on the right, the upper tail function.

> U=rank(X[,1])/(nrow(X)+1)
> V=rank(X[,2])/(nrow(X)+1)
> Lemp=function(z) sum((U<=z)&(V<=z))/sum(U<=z)
> Remp=function(z) sum((U>=1-z)&(V>=1-z))/sum(U>=1-z)
> u=seq(.001,.5,by=.001)
> L=Vectorize(Lemp)(u)
> R=Vectorize(Remp)(rev(u))
> plot(c(u,u+.5-u[1]),c(L,R),type="l",ylim=0:1,
+ xlab="LOWER TAIL          UPPER TAIL")
> abline(v=.5,col="grey")

Now, we can compare this graph, with what should be obtained for some parametric copulas that have the same Kendall’s tau (e.g.). For instance, if we consider a Gaussian copula,

> tau=cor(lossalae,method="kendall")[1,2]
> library(copula)
> paramgauss=sin(tau*pi/2)
> copgauss=normalCopula(paramgauss)
> Lgaussian=function(z) pCopula(c(z,z),copgauss)/z
> Rgaussian=function(z) (1-2*z+pCopula(c(z,z),copgauss))/(1-z)
> u=seq(.001,.5,by=.001)
> Lgs=Vectorize(Lgaussian)(u)
> Rgs=Vectorize(Rgaussian)(1-rev(u))
> lines(c(u,u+.5-u[1]),c(Lgs,Rgs),col="red")

or Gumbel’s copula,

> paramgumbel=1/(1-tau)
> copgumbel=gumbelCopula(paramgumbel, dim = 2)
> Lgumbel=function(z) pCopula(c(z,z),copgumbel)/z
> Rgumbel=function(z) (1-2*z+pCopula(c(z,z),copgumbel))/(1-z)
> u=seq(.001,.5,by=.001)
> Lgl=Vectorize(Lgumbel)(u)
> Rgl=Vectorize(Rgumbel)(1-rev(u))
> lines(c(u,u+.5-u[1]),c(Lgl,Rgl),col="blue")

That’s nice (isn’t it?), but since we do not have any confidence interval, it is still hard to conclude (even if it looks like Gumbel copula has a much better fit than the Gaussian one). A strategy can be to generate samples from those copulas, and to visualize what we had. With a Gaussian copula, the graph looks like

> u=seq(.0025,.5,by=.0025); nu=length(u)
> nsimul=500
> MGS=matrix(NA,nsimul,2*nu)
> for(s in 1:nsimul){
+ Xs=rCopula(nrow(X),copgauss)
+ Us=rank(Xs[,1])/(nrow(Xs)+1)
+ Vs=rank(Xs[,2])/(nrow(Xs)+1)
+ Lemp=function(z) sum((Us<=z)&(Vs<=z))/sum(Us<=z)
+ Remp=function(z) sum((Us>=1-z)&(Vs>=1-z))/sum(Us>=1-z)
+ MGS[s,1:nu]=Vectorize(Lemp)(u)
+ MGS[s,(nu+1):(2*nu)]=Vectorize(Remp)(rev(u))
+ lines(c(u,u+.5-u[1]),MGS[s,],col="red")
+ }

(including – pointwise – 90% confidence bands)

> Q95=function(x) quantile(x,.95)
> V95=apply(MGS,2,Q95)
> lines(c(u,u+.5-u[1]),V95,col="red",lwd=2)
> Q05=function(x) quantile(x,.05)
> V05=apply(MGS,2,Q05)
> lines(c(u,u+.5-u[1]),V05,col="red",lwd=2)

while it is

with Gumbel copula. Isn’t it a nice (graphical) tool ?

But as mentioned in the course, the statistical convergence can be slow. Extremely slow. So assessing whether the underlying copula has tail dependence, or not, is not that simple. Especially if the copula exhibits tail independence. Like the Gaussian copula. Consider a sample of size 1,000. This is what we obtain if we generate random scenarios,
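The code for those larger-sample figures is not shown here; a sketch for a single simulated sample of size 1,000 (the actual figures overlay many such scenarios, as in the loop above),

n = 1000
Xs = rCopula(n,copgauss)
Us = rank(Xs[,1])/(n+1); Vs = rank(Xs[,2])/(n+1)
Ls = function(z) sum((Us<=z)&(Vs<=z))/sum(Us<=z)
u = seq(.005,.5,by=.005)
plot(u,Vectorize(Ls)(u),type="l",
xlab="z",ylab="lower tail concentration")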

or we look at the left tail (with a log-scale)

Now, consider a 10,000 sample,

or with a log-scale

We can even consider a 100,000 sample,

or with a log-scale

On those graphs, it is rather difficult to conclude if the limit is 0, or some strictly positive value (again, it is a classical statistical problem when the value of interest is at the border of the support of the parameter). So, a natural idea is to consider a weaker tail dependence index. Unless you have something like 100,000 observations…

Orders of an ARMA process

In the Box & Jenkins methodology, a step that comes up very quickly is the choice of the orders of the ARMA(p,q) process, once the stationarity assumption of the series has been validated, as we saw in class last week. Consider the highway traffic series,

> autoroute=read.table(
+"http://freakonometrics.blog.free.fr/public/data/autoroute.csv",
+ header=TRUE,sep=";")
> a7=autoroute$a007
> A7=ts(a7,start = c(1989, 9), frequency = 12)

A practical tool for selecting the orders p and q of an ARMA(p,q) model is the extended autocorrelation function. The definition is given in the lecture notes (Def. 223), based on the statistics proposed by Tsay & Tiao (1984),

>  library(TSA)
>  EACF=eacf(A7)
AR/MA
  0 1 2 3 4 5 6 7 8 9 10 11 12 13
0 x o o x x x x x o o x  x  x  o
1 x x o o x x x o o o x  x  x  x
2 o x o o o o x o o o o  x  x  o
3 o x o x o o o o o o o  x  x  x
4 x x x x o o o o o o o  x  o  o
5 x x o o o x o o o o o  x  o  x
6 x x o o o x o o o o o  x  o  o
7 o x o o o x o o o o o  x  o  o
>  EACF
$eacf
           [,1]       [,2]        [,3]        [,4]
[1,]  0.6476234  0.2124105 -0.02413173 -0.24234535
[2,]  0.4889076  0.2797152 -0.02494135 -0.06094037
[3,]  0.1006541 -0.2285771  0.03514148  0.08000588
[4,]  0.1390240 -0.2788742  0.04386746  0.28307260
[5,] -0.5091680  0.3144899 -0.34572269  0.31450865
[6,] -0.4224571 -0.4877505  0.16054232 -0.09130728
[7,] -0.4731353 -0.4324857 -0.04847184 -0.10500350
[8,] -0.2129591 -0.4072901  0.09487899 -0.06493243
             [,5]        [,6]        [,7]        [,8]
[1,] -0.514330187 -0.61634046 -0.52314403 -0.28008661
[2,] -0.254912957 -0.28966664 -0.33963243 -0.21863077
[3,] -0.156624357 -0.01199786 -0.25116738  0.13079231
[4,] -0.183283544  0.03651508 -0.08711829  0.11626377
[5,] -0.190885091 -0.09786463 -0.09182557  0.08818875
[6,] -0.072904071  0.29271777 -0.09334712  0.01972648
[7,] -0.009873289  0.36909726  0.01698660 -0.03317456
[8,]  0.020485930  0.38342158  0.16981715 -0.02592442
             [,9]       [,10]       [,11]     [,12]
[1,] -0.082607058  0.15178887  0.56583179 0.8368975
[2,] -0.085026400  0.02731460  0.26158357 0.7844748
[3,]  0.001866994 -0.11658312  0.03038621 0.7026361
[4,]  0.025183793 -0.21608692  0.05660781 0.6674301
[5,] -0.006831894 -0.02514440 -0.07390257 0.4762563
[6,]  0.010058718 -0.03888613 -0.04382043 0.5338091
[7,]  0.032124905 -0.07022090 -0.04427400 0.4674165
[8,]  0.024189179 -0.20818201  0.01459933 0.4830369
          [,13]       [,14]
[1,]  0.5637439  0.17862571
[2,]  0.4530716  0.24413569
[3,]  0.2534178 -0.20160890
[4,]  0.2409861 -0.29462510
[5,] -0.1517324 -0.14763294
[6,] -0.1701182 -0.28771495
[7,] -0.1651515  0.05466457
[8,] -0.1403198 -0.04095030

$ar.max
[1] 8

$ma.max
[1] 14

The circles in the matrix denote non-significant values. By default, the number of lags considered for the autoregressive component is small, but it can be increased.

> EACF=eacf(A7,13,13)
AR/MA
   0 1 2 3 4 5 6 7 8 9 10 11 12 13
0  x o o x x x x x o o x  x  x  o
1  x x o o x x x o o o x  x  x  x
2  o x o o o o x o o o o  x  x  o
3  o x o x o o o o o o o  x  x  x
4  x x x x o o o o o o o  x  o  o
5  x x o o o x o o o o o  x  o  x
6  x x o o o x o o o o o  x  o  o
7  o x o o o x o o o o o  x  o  o
8  o x o o o x o o o o o  x  o  o
9  x x o o x o x o o o o  x  o  o
10 x x o o x o x o o o o  x  o  o
11 x x o o x x x o o o x  x  o  o
12 x x o o o o o o o o o  o  o  o
13 x x o o o o o o o o o  o  o  o
> EACF$eacf
      [,1]  [,2]  [,3]  [,4]  [,5]  [,6]  [,7]  [,8]  [,9] [,10]
[1,]  0.64  0.21 -0.02 -0.24 -0.51 -0.61 -0.52 -0.28 -0.08  0.15
[2,]  0.48  0.27 -0.02 -0.06 -0.25 -0.28 -0.33 -0.21 -0.08  0.02
[3,]  0.10 -0.22  0.03  0.08 -0.15 -0.01 -0.25  0.13  0.00 -0.11
[4,]  0.13 -0.27  0.04  0.28 -0.18  0.03 -0.08  0.11  0.02 -0.21
[5,] -0.50  0.31 -0.34  0.31 -0.19 -0.09 -0.09  0.08  0.00 -0.02
[6,] -0.42 -0.48  0.16 -0.09 -0.07  0.29 -0.09  0.01  0.01 -0.03
[7,] -0.47 -0.43 -0.04 -0.10  0.00  0.36  0.01 -0.03  0.03 -0.07
[8,] -0.21 -0.40  0.09 -0.06  0.02  0.38  0.16 -0.02  0.02 -0.20
[9,] -0.14 -0.50 -0.10 -0.06  0.11  0.38 -0.01 -0.02 -0.08 -0.20
[10,] -0.29  0.48  0.00  0.18  0.27 -0.12  0.41  0.00  0.16  0.15
[11,]  0.24  0.48 -0.04  0.02  0.46  0.10  0.37 -0.10  0.05  0.16
[12,] -0.59  0.49 -0.18  0.05  0.31 -0.32  0.34 -0.16  0.07  0.16
[13,] -0.31  0.31 -0.12  0.16 -0.11  0.04 -0.04  0.12 -0.09  0.00
[14,]  0.47  0.26  0.11  0.13 -0.01 -0.01  0.00  0.07 -0.03  0.02

We can also visualize these values graphically, with the autoregressive orders (p) on the y-axis and the moving average orders (q) on the x-axis.

> library(RColorBrewer)
> CL=brewer.pal(6, "RdBu")
> ceacf=matrix(as.numeric(cut(EACF$eacf,
+ ((-3):3)/3,labels=1:6)),nrow(EACF$eacf),
+ ncol(EACF$eacf))
> plot(0:1,0:1,col="white",xlim=c(-.5,13.5),  # empty canvas for the coloured squares
+ ylim=c(-.5,13.5),xlab="MA order (q)",ylab="AR order (p)")
> for(i in 1:ncol(EACF$eacf)){
+ for(j in 1:nrow(EACF$eacf)){
+ polygon(c(i-1,i-1,i,i)-.5,c(j-1,j,j,j-1)-.5,
+ col=CL[ceacf[j,i]])
+ }}


A white noise sits in the bottom-left corner (p and q both equal to zero). In this approach, known as the corner method, we look for a corner such that, in the upper quadrant, the values are non-significant (dull colours in the picture above). The figure below corresponds to the analysis of an ARMA(12,6) model.

Strong positive values are in dark blue, strong negative values in dark red. We can also look at the following function, which performs model identification by MINIC (Minimum Information Criterion) using the Schwarz criterion (BIC, or SBC, Schwarz's Bayesian Criterion),

> armaselect(A7,nbmod=5)
      p q      sbc
[1,] 12 1 1441.798
[2,] 13 0 1442.628
[3,] 12 0 1443.188
[4,] 12 2 1443.362
[5,] 14 0 1445.069

Finally, one last possible function is mentioned in Section 6.5 of the book by Cryer & Chan (2008),

> ARMA.SELECTION=
+ armasubsets(A7,nar=14,nma=14,ar.method='ols')
> plot(ARMA.SELECTION)

based on the Schwarz criterion.
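(A side note, and an assumption on my part to be checked against your installation: these identification helpers are not in base R; eacf() and armasubsets() ship with the TSA package, and armaselect() with the caschrono package.)

library(TSA)        # eacf(), armasubsets()
library(caschrono)  # armaselect()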
I am going to repeat myself, but these methods are only tools, there to provide some leads when we do not quite know which direction to take. And given the seasonality of the series, I would personally have gone for a model with p=12, to see whether we could not get a simple, and moreover easily interpretable, model…
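To make that suggestion a little more concrete, here is a minimal sketch (my own addition, assuming the A7 series defined above): fit an AR(12) and a seasonal alternative with arima(), and compare their Schwarz criteria,

fit1 = arima(A7, order = c(12, 0, 0))
fit2 = arima(A7, order = c(1, 0, 0),
seasonal = list(order = c(1, 0, 0), period = 12))
AIC(fit1, k = log(length(A7)))   # Schwarz criterion (BIC) for the AR(12)
AIC(fit2, k = log(length(A7)))   # same for the seasonal model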


Simulating time series

A quick post to collect the code typed in class last week. Consider an autoregressive process of order 1, X_t=\phi_1 X_{t-1}+\varepsilon_t, where (\varepsilon_t) is a white noise; the process is stationary when \phi_1 belongs to the interval (-1,+1). The code to simulate such a process is

n=1000
bruit=rnorm(n)
phi1= .85
X=rep(NA,n)
X[1]=0
for(t in 2:n){X[t]=phi1*X[t-1]+bruit[t]}
plot(acf(X),lwd=5,col='blue')
plot(pacf(X),lwd=5,col='blue')
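For comparison, the same AR(1) can also be simulated directly with arima.sim (a small sketch of mine, not part of the original code),

X2 = arima.sim(n = n, model = list(ar = 0.85))
plot(acf(X2), lwd = 5, col = 'blue')
plot(pacf(X2), lwd = 5, col = 'blue')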

or with a negative first-order autocorrelation,

phi1= -0.7

We can also look at a second-order autoregressive process, X_t=\phi_1X_{t-1}+\phi_2X_{t-2}+\varepsilon_t, shown in the figure below (with, in the top-left corner, the stationarity triangle for the pair of parameters).

phi1=  0.3
phi2=  0.5
X=rep(NA,n)
X[1:2]=0
for(t in 3:n){
X[t]=phi1*X[t-1]+phi2*X[t-2]+bruit[t]}
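As a quick sanity check (my own addition), we can verify that this pair of parameters lies inside the stationarity triangle, whose conditions are \phi_2+\phi_1<1, \phi_2-\phi_1<1 and |\phi_2|<1,

# all three conditions should be TRUE
c(phi1 + phi2 < 1, phi2 - phi1 < 1, abs(phi2) < 1)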

For a change, we can look at a first-order moving average process, X_t=\varepsilon_t+\theta_1\varepsilon_{t-1}, where \theta_1 is a parameter in (-1,+1).

theta1=  .8
X=rep(NA,n)
X[1]=0
for(t in 2:n){
X[t]=bruit[t]+theta1*bruit[t-1]}

or a second-order moving average,

theta1= -.6
theta2=  .5
X=rep(NA,n)
X[1:2]=0
for(t in 3:n){
X[t]=bruit[t]+theta1*bruit[t-1]+
theta2*bruit[t-2]}
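Here again, a sketch with arima.sim (my addition), which uses the same sign convention for the moving average coefficients,

X3 = arima.sim(n = n, model = list(ma = c(-.6, .5)))
plot(acf(X3), lwd = 5, col = 'blue')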

 

Visualizing uncertainty using Jackknife

Once again, I (re)discovered last week at the Rmetrics conference that old tools can be extremely interesting to illustrate complex ideas, like uncertainty in financial markets, and stock prices. For instance a 99.5% quantile: we look for the scenario that occurs with a probability of 1 out of 200. Are there nice ways to illustrate that quantity?

Consider the monthly evolution of the SP500 index over the last 22 years,

> library(quantmod) 
> getSymbols('^GSPC', from='1990-01-01') 
[1] "GSPC" 
> GSPC = adjustOHLC(GSPC,
+ symbol.name='^GSPC') 
> MGSPC = to.monthly(GSPC) 
> CLOSE = MGSPC$GSPC.Close 
> plot(CLOSE)

It is possible to use the Jackknife technique to illustrate uncertainty. The idea, in Jackknife, is to remove one of the observations, and to do that for all the observations. More formally, from a sample \lbrace x_1,\dots,x_n\rbrace, we define a (sub)sample where observation i has been removed, i.e. \lbrace x_1,\dots,x_{i-1},x_{i+1},\dots,x_n\rbrace. Then, we can study all the samples obtained when one observation is removed.

Here, in the context of financial time series, over 270 months, we can wonder what the final value of the index might have been if one observation (i.e. one month) had been removed. That is actually the idea of the Jackknife,

> R=diff(log(CLOSE)); R=R[-1]    # monthly log-returns
> n=length(R) 
> X=rnorm(n,mean(R),sd(R))       # simulated alternative, overwritten just below
> X=R 
> MX=t(matrix(X,n,n))            # one row per jackknife scenario
> MX=exp(MX)                     # back to gross monthly returns
> diag(MX)=1                     # in scenario k, month k is removed (return set to 1)
> SMX=MX 
> for(k in 2:n){SMX[,k]=SMX[,k-1]*(MX[,k])}  # cumulative product along each row

We can plot the different trajectories of the index, when we remove one month,

> init=as.numeric(CLOSE[1]) 
> plot(1:n,init*cumprod(exp(X)),type="l", 
+ xlab="",ylab="",col="white")
 > for(k in 1:n){lines(0:n,init*c(1,SMX[k,]), 
+ col="light blue")} 
> lines(0:n,init*c(1,cumprod(exp(X))),lwd=2, 
+ col="blue")

This can be used to understand the sensitivity, or uncertainty, of financial time series,

We can then look closer at the final value of the index, over those 270 scenarios,

or we can also use a box-plot,
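Since those two figures are not reproduced here, a minimal sketch of how they can be obtained (assuming init, SMX and n from the code above),

> final = init*SMX[,n]           # final value of the index in each jackknife scenario
> hist(final, col="light blue", main="",
+ xlab="final value of the index")
> boxplot(final, horizontal=TRUE)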

Here we can clearly see the impact: if we remove one good month, the index ends around 1250, while it reaches 1650 if we remove a bad month. The difference is huge. So instead of talking about volatility (which is actually a complex concept), that Jackknife idea of removing observations might be more intuitive, and much easier for a first understanding of uncertainty. Those resampling ideas are great anyway. I will post a nice application soon (but first, I will discuss it with some colleagues in Lyon).

Birds of a feather follow each other (on Twitter at least)

A new post to pick up on a fun analysis by @3wen (Ewen Gallic, who works in Montréal while I enjoy Switzerland). Following the amusing analysis of Twitter trolls, I had floated the idea that it could be fun to look, among French MPs (whom I had followed a bit the other day), at who tweets with whom. Ewen had the good idea to look at http://lelab.europe1.fr/, which allowed him to retrieve the list of Twitter accounts of the MPs, in France. The idea is simple: among French MPs, we look at who follows whom. Someone widely followed will be at the centre of the cloud, while someone who merely follows others will be on the edge (intensely connected to the rest of the cloud). For those unfamiliar with it, Twitter is not Facebook: there are no "friends", there are people you follow because they say things that may interest you.

Using gephi, Ewen was then able to visualize the graph of interconnections between the MPs.

Unsurprisingly, left-wing MPs essentially follow left-wing MPs, and conversely. A few big accounts act as bridges between the two parliamentary groups. In fact, if we look at the details (or at the full image file), we can see a bit better what is going on.

Now, the real difficulty is to read these interactions correctly. In particular, we cannot conclude (from the graph alone) that Cécile Duflot is left-wing! What it tells us is that what she says interests people on the left (or at least the MPs of the Parti Socialiste), and not at all people on the right (the UMP MPs)! Note also that the centre does not exist. On Twitter at least. And if we look at the bottom right, we find the Front National, and we get confirmation that what the Front National says interests no one!

Looking for groups of trolls on Twitter

A few days ago, @olihb had put online on his blog http://olihb.com/ a very nice map to visualize the people (or the accounts) involved on Twitter in the recent protests in Québec (the map was based on gephi). Lots of people were tweeting with hashtags, and the idea was to use the most common and most popular ones to identify the active accounts (#ggi, #manifencours, etc). But last week I had the feeling that the debate had hardened, with many more trolls in the discussions.

In short, the principle is that trolls often move in small groups, so with @3wen we wanted to see whether we could easily identify groups of trolls (without doing any sentiment analysis, just visualizing people who follow each other a lot, the connivance between twittos, i.e. mutual following).

The methodology is simple. We have two datasets at our disposal. The first contains all the tweets with #ggi between June 7 at 1 am and June 8 at 00:45 (i.e. 14,700 tweets). The second contains all the tweets with #ggi between June 12 at 11:30 pm and June 14 at 5 pm (i.e. 14,784 tweets). Oh, and times are GMT. More precisely, the arrival frequency is the following

The observation windows are short, but the idea is that if a person tweeted at least once during these periods, they are in our dataset, and that is enough to track them. We will not study the intensity of tweeting. I will not go into the details of the code, but it is based on the functions presented in a previous post, and always with the same library,

require("RJSONIO")

On those two datasets, i.e. 29,484 tweets, we get a set of 6,556 unique twittos (split into two lists, though: those who tweeted on the 7th and those who tweeted on the 14th). For all these people, we went to see who was following them. In principle, it is simple, but we are working with APIs that impose hourly limits. In short, some tinkering is needed. For a given person, the following function retrieves the list of his or her followers,

recuperefollowers=function(id,nbrequetes){
followers=try(scan(paste(
"http://api.twitter.com/1/followers/ids.json?cursor=-1&user_id=",
id,sep=""),what = "character", encoding="latin1"))
if(is.null(attr(followers,"condition"))){
followers=paste(followers[1:length(followers)],collapse=" ")
followers=fromJSON(followers, method = "C")
id_followers=followers$id
next_cursor=followers$next_cursor_str
nbrequetes=nbrequetes+1
plusDe5000=FALSE
while(nbrequetes<150 & next_cursor!="0" &
is.null(attr(followers,"condition"))){
plusDe5000=TRUE
followers=scan(paste(
"http://api.twitter.com/1/followers/ids.json?cursor=",
next_cursor,"&user_id=",id,sep="")
,what = "character", encoding="latin1")
followers=paste(followers[1:length(followers)],collapse=" ")
followers=fromJSON(followers, method = "C")
id_followers=c(id_followers,followers$id)
next_cursor=followers$next_cursor_str
nbrequetes=nbrequetes+1
}
if(plusDe5000 & nbrequetes>=149){
return(c(list(),nbrequetes,TRUE))
}else{
return(c(list(id_followers),nbrequetes,FALSE))
}}else{
nbrequetes=nbrequetes+1
print(paste("Probleme rencontre avec ",id,sep=""))
return(c(list("PROBLEME"),nbrequetes,FALSE))}}

We can then run this function on all the ids we collected, storing everyone in a (small) dataset

temp=try(recuperefollowers(
lesIDAParcourir[unID_index],nbrequetes))
lesFollowers=temp[[1]]
resul=cbind(rep(lesIDAParcourir[unID_index]
,length(lesFollowers)),lesFollowers)
nbrequetes=temp[[2]]
refaire=temp[[3]]
Sys.sleep(runif(1,1,1.5))
compte=compte+1
write.table(resul,paste("auto_resul_",
compte,".txt",sep=""),sep=";")

To keep it running, we just have to wait an hour after every 150 requests. So, inside the loop, we add a small piece of code that pauses,

if(nbrequetes>148){
nbrequetes=1
Sys.sleep(60*60+60*trunc(runif(1,2,3))) }

Once the 6,500 files for our 6,500 accounts have been created, each containing the list of all the followers, we aggregate them (fortunately, we did not have the big accounts with more than a million followers). The list of files is retrieved with

N=list.files("id/petits")

Incidentally, to glue the 6,500 files back together, Ewen suggested the following code, much, much faster than a big loop,

recupere_petits_fichiers=function(x){
temp=read.table(paste("id/petits/",x,sep=""),
header=TRUE,sep=";",comment.char="",
check.names=FALSE,colClasses=
c("character","character"))
colnames(temp)=list("id_twittos","id_follower")
head(temp)

We then retrieve the followers for the first set of tweets, and again for the second one. The two lists are obtained with the following code (which continues the function above)

present_avant_temp=temp[which(
temp$id_twittos%in%lesID_avant),]
if(nrow(present_avant_temp)>0){
present_avant_temp=
present_avant_temp[which(
present_avant_temp$id_follower%in%lesID_avant),]}
else{
present_avant_temp=NULL}

present_apres_temp=
temp[which(temp$id_twittos%in%lesID_apres),]
if(nrow(present_apres_temp)>0){present_apres_temp=
present_apres_temp[
which(present_apres_temp$id_follower%in%lesID_apres),]}
else{present_apres_temp=NULL}
return(c(list(present_avant_temp),
list(present_apres_temp),
list(nbfollowers=nrow(temp))))
}

To build our big dataset, the trick is to use, instead of a loop over all the small files,

resul=lapply(N,recupere_petits_fichiers)
liens_avant=data.frame(do.call(
"rbind",lapply(resul,function(x) x[[1]])),
stringsAsFactors=FALSE)
liens_apres=data.frame(do.call(
"rbind",lapply(resul,function(x) x[[2]])),
stringsAsFactors=FALSE)

We thus obtained a list with several hundred thousand Twitter accounts. We kept only those who had tweeted with the hashtag #ggi. For our 6,556 twittos we could therefore build a dataset of people who tweet, and of people who follow them (and who tweeted during this, short, period with the hashtag #ggi). In short, a big correspondence matrix of who follows whom. We could do this for both datasets. It is precisely the difference between the two datasets that we tried to visualize, and gephi seems to allow comparing datasets over time. On the left, the cloud of connections between accounts for the first period, and on the right for the second.

Well, the first one is more compact, but we can see that there are differences, in particular on the edges, whether at the top

or on the right of the cloud,

In both cases, we see that many small accounts have grafted themselves onto the cloud, often by following someone who follows them back. And sometimes these are the only connections they have (among users of the hashtag #ggi). These accounts could be the new trolls we were hoping to identify. The advantage is that we have the names of all these accounts (we did not put them on the graph), and in a second step we will go and look at what exactly these accounts do. Perhaps we are misreading these graphs, which show an arrival of new accounts, and this dynamic may be the same for all new accounts, which start with few connections and then flesh out over time. What we also need to understand is why they are all so close together. They seem to be followed by people who are themselves followed by the same people. Which suggests that we have indeed identified a specific category of accounts.
We can redo the graphs at a finer resolution, before

and after

or by changing the options, before

to be compared with after

On these last two images we can see, perhaps even more clearly, the new accounts grafting themselves onto the existing graph. If the quality is not sufficient, (very) high-resolution graphs can be downloaded (15MB each), before and after (with the account ids this time). Otherwise, by moving the mouse over the picture below, the difference is even more visible,

Now, to be completely honest, about reading these graphs, I admit I have not fully understood everything. The representations seem clear, but I would have liked to be sure I understood the construction proposed by gephi before putting them online. Because looking at least at this account, we understand that the algorithm does indeed highlight networks well. Among the most striking illustrations, the following figure shows interconnections from Facebook on the left, and purely random ones on the right (illustration found on the gephi website),

The real difficulty is then to represent the groups spatially, as noted by Gastner and Newman in The Spatial Structure of Networks, by Watts and Strogatz in Collective dynamics of 'small-world' networks, or by Nisha and Venkatesh in Small worlds: How and why. To be thorough, I would also point to Jure Leskovec's PhD thesis, Dynamics of large networks, as well as to the (fully downloadable) book Networks, Crowds, and Markets: Reasoning About a Highly Connected World by David Easley and Jon Kleinberg. In short, we have found a probably very rich tool, but it will take us some time to understand what it really does.

Date of death, birthday and Elvis Presley

10 days ago, a study published on http://www.annalsofepidemiology.org/ mentioned that “Death has a preference for birthdays” (as claimed in the title). The conclusion of the paper is that, in general, birthdays do not evoke a postponement mechanism but appear to end up in a lethal way more frequently than expected (“anniversary reaction”). Well, this is not new, and several previous articles have mentioned that point, e.g. Angermeyer et al. (1987).

I found the idea interesting, since in demography there is a large literature trying to extrapolate death rates from discrete to continuous time. Extrapolations are usually extremely smooth. But none of them integrates that specific aspect of mortality, precisely on the birthday. The problem is that it is rather difficult to say anything, since datasets with individual observations are rarely available online.

But yesterday, @coulmont sent me a tweet mentioning a website. I do not know if this is legal (even if some explanations are given), but I will mention it, courtesy of http://ssdmf.info/. It is the so-called Social Security Death Master File, containing individual information about deaths in the US, as well as geographic information (as described on http://www.ssa.gov/), for people having a social security number.

With R, it is possible to work on those files (even if they are huge, with tens of millions of observations). For instance, we can check who is inside.

> elvis=scan("ssdm2",skip=22371720,n=1,what="character",sep=",")
> elvis
[1] " 409522002PRESLEY         ELVIS     0800197701081935  "

If you believe that Elvis is dead, you might agree that this database can be accurate (or at least, not too bad). Further, we can see here how to read the record: Elvis was born on January 8, 1935 (the last 8 digits), and died on August 16, 1977 (the 8 digits before). Obviously there are some problems with the dataset (we do not have the day of Elvis's death). So here, we remove all the observations that do not give us proper dates. Then, the idea is to assume that the person died in 2000 (or any year, since the point is to focus on days and months). We then count the number of days between the day of death and the birthday in 2001 (which would have been after) and the one in 2000 (which was either before or after the death), so that we can derive the number of days after the birthday,
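As a small illustration of that encoding (a sketch of mine, not part of the original code), the two dates can be pulled directly out of the Elvis record,

> rec = gsub(" +$","",elvis)                      # drop trailing blanks
> birth = substr(rec,nchar(rec)-7,nchar(rec))     # last 8 digits, mmddyyyy
> death = substr(rec,nchar(rec)-15,nchar(rec)-8)  # 8 digits before; day "00" is missing
> as.Date(birth,"%m%d%Y")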

dates=substr(base,66,81)
death=as.Date(substr(dates,1,8),"%m%d%Y")
birth=as.Date(substr(dates,9,16),"%m%d%Y")
indice=is.na(death)|is.na(birth)
mean(indice)
mdeath=substr(dates,1,2)
ddeath=substr(dates,3,4)
mbirth=substr(dates,9,10)
dbirth=substr(dates,11,12)
indice=which(ddeath!="00")
birth1=as.Date(paste(mbirth[indice],
dbirth[indice],"2000",sep=""),"%m%d%Y")
birth2=as.Date(paste(mbirth[indice],
dbirth[indice],"2001",sep=""),"%m%d%Y")
death=as.Date(paste(mdeath[indice],ddeath[indice],
"2000",sep=""),"%m%d%Y")
k=length(indice)
diffday=cbind((as.numeric(death-birth1))[1:k],
(as.numeric(death-birth2))[1:k])
DIFF=apply(diffday,1,function(x) {min(x[x>=0])})

What we have here is the number of days following the previous birthday. If we look at the distribution of that number of days, we obtain

counts=table(DIFF)
plot(as.numeric(names(counts)),
as.numeric(counts))
counts["0"]/(mean(counts[100:200]))
> counts["0"]/(mean(counts[100:200]))
0
1.121261

Thus, the death excess on the day of birth was around 12%, which is rather close to the one obtained from the Swiss mortality statistics 1969–2008 (in Ajdacic-Gross et al. (2012)). Note that here, we just play with a small subset of the entire dataset,

That database is probably extremely interesting, except that it suffers from a huge selection bias, since only dead people are in it. So it might be useless if we wish to study the life expectancy of people named Bill versus people named Georges (something I initially wanted to investigate). But we will see what else we can do with it (since Ewen has been able to write some code to go through that huge dataset).

Do you still have time to sleep ?

Last week, @3wen (Ewen) helped me write nice R functions to extract tweets in R and build datasets containing a lot of information. I had tried a couple of times on my own: once on tweet contents, but it was not convincing, and once on the Twitter activity following an event (e.g. the death of someone famous). I have to admit that I am not a big fan of the databases that can be generated using standard functions to study tweets. For instance, we can only extract tweets, not re-tweets (which are also an important indicator of tweet activity). @3wen suggested to use

require("RJSONIO")

The first step is to extract some information from a tweet, and store it in a dataset (details can be found on https://dev.twitter.com/)

obtenir_ligne <- function(unTweet){
date_courante=unTweet$created_at
id_courant=unTweet$id_str
text=unTweet$text
nb_followers=unTweet$user$followers_count
nb_amis=unTweet$user$friends_count
utc_offset=unTweet$user$utc_offset
listeMentions=unTweet$entities$user_mentions
return(c(list(c(id_courant,date_courante,text,
nb_followers,nb_amis,utc_offset)),
list(do.call("rbind",lapply(listeMentions,
function(x,id_courant) c(id_courant,
x$screen_name),unTweet$id_str)))))
}

Now that we have the code to extract information from one tweet, let us collect several tweets, from one user, say my account,

nom="Freakonometrics"

The (small) problem here is that we have a limitation: we can only get 100 tweets per call of the function

n=100
tweets_courants=scan(paste(
"http://api.twitter.com/1/statuses/user_timeline.json?
include_entities=true&include_rts=true&screen_name=
",nom,"&count=",n,sep=""),what = "character",
encoding="latin1")
tweets_courants=paste(tweets_courants[
1:length(tweets_courants)],collapse=" ")
tweets_courants=fromJSON(tweets_courants,
method = "C")

Then, we use our function to build a database with 100 lines,

extracTweets <- lapply(tweets_courants,
obtenir_ligne)
mentions=do.call("rbind",lapply(extracTweets,
function(x) x[[2]]))
colnames(mentions)=list("id","screen_name")
res=t(sapply(extracTweets,function(x) x[[1]]))
colnames(res) <- list("id","date","text",
"nb_followers","nb_amis","utc_offset")

The idea then is simply to use a loop, based on the latest id observed

dernier_id=tweets_courants[[length(
tweets_courants)]]$id_str

So, here we go,

compteurLimite=100

while(compteurLimite<4100){
tweets_courants=scan(paste(
"http://api.twitter.com/1/statuses/user_timeline.json?
include_entities=true&include_rts=true&screen_name=
",nom,"&count=",n,"&max_id=",dernier_id,sep=""),
what = "character", encoding="latin1")
tweets_courants=paste(tweets_courants[
1:length(tweets_courants)],collapse=" ")
tweets_courants=fromJSON(tweets_courants,
method = "C")

extracTweets <- lapply(tweets_courants[
2:length(tweets_courants)],obtenir_ligne)
mentions=rbind(mentions,do.call("rbind",
lapply(extracTweets,function(x) x[[2]])))
res=rbind(res,t(sapply(extracTweets,function(x) x[[1]])))
t(sapply(extracTweets,function(x) x[[1]]))
dernier_id=tweets_courants[[length(
tweets_courants)]]$id_str
compteurLimite=compteurLimite+100
}

resFreakonometrics=res=
data.frame(res,stringsAsFactors=FALSE)

All the information about my own tweets (and re-tweets) is stored in a nice dataset. Actually, we have even more, since we have also extracted the names of the people mentioned in the tweets,

mentionsFreakonometrics=
data.frame(mentions)

We can look at people I mention in my tweets

gazouillis=sapply(split(mentionsFreakonometrics,
mentions$screen_name),nrow)
gazouillis=gazouillis[order(gazouillis,
decreasing=TRUE)]

plot(gazouillis)
plot(gazouillis,log="xy")
> gazouillis[1:20]
tomroud freakonometrics       adelaigue       dmonniaux
155              84              77              56
J_P_Boucher         embruns      SkyZeLimit        coulmont
42              39              35              31
Fabrice_BM            3wen          obouba          msotod
31              30              29              27
StatFr     nholzschuch        renaudjf        squintar
26              25              23              23
Vicnent        pareto35        romainqc        valatini
23              22              22              22

If we plot those frequencies, we can clearly observe a standard Pareto distribution,
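One way to sketch that figure (my own addition, assuming the gazouillis vector above) is a rank-frequency plot on log-log scales, with a crude linear fit on top,

rang = 1:length(gazouillis)
plot(rang, gazouillis, log="xy",
xlab="rank", ylab="number of mentions")
reg = lm(log10(gazouillis) ~ log10(rang))
lines(rang, 10^fitted(reg), col="red", lwd=2)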

Now, let us spend some time with the dates and times of the tweets (that was the initial goal of this post)… Once more, there is a (small) technical problem to deal with: language. We need a function to convert dates in English (on Twitter) into dates in French (since I have a French version of R),

changer_date_anglais <- function(date_courante){
mois <- c("Jan","Fév", "Mar", "Avr", "Mai",
"Jui", "Jul", "Aoû", "Sep", "Oct", "Nov", "Déc")
months <- c("Jan", "Feb", "Mar", "Apr", "May",
"Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec")
jours <- c("Lun","Mar","Mer","Jeu",
"Ven","Sam","Dim")
days <- c("Mon","Tue","Wed","Thu",
"Fri","Sat","Sun")
leJour <- substr(date_courante,1,3)
leMois <- substr(date_courante,5,7)
return(paste(jours[match(leJour,days)]," ",
mois[match(leMois,months)],substr(
date_courante,8,nchar(date_courante)),sep=""))
}

So now, it is possible to plot the times when I am online, tweeting,

DATE=Vectorize(changer_date_anglais)(res$date)
# the same thing with sapply (here for another account's dataset):
# DATE=sapply(resSkyZeLimit$date,
# changer_date_anglais,simplify=TRUE)

DATE2=strptime(as.character(DATE),
"%a %b %d %H:%M:%S %z %Y")
lt= as.POSIXlt(DATE2, origin="1970-01-01")
heure=lt$hour+lt$min/60
plot(DATE2,heure)

On this graph, we can see that I am clearly offline almost 6 hours a day (or at least not on Twitter). It is possible to visualize more precisely the periods of the day when I might be on Twitter,

hist(heure,breaks=0:24,col="light green",proba=TRUE)
X=c(heure-24,heure,heure+24)   # replicate the sample to deal with the circularity of hours
d=density(X,n = 512, from=0, to=24,bw=1)
lines(d$x,d$y*3,lwd=3,col="red")   # rescale by 3 since the sample was replicated three times

or, if we want to illustrate with some kind of heat plot,
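One way such a heat plot could be built (a sketch of mine, assuming the lt object computed above) is to cross hours and weekdays,

# number of tweets per (weekday, hour) cell
tab = table(factor(lt$wday, levels=0:6),
factor(lt$hour, levels=0:23))
image(0:23, 0:6, t(tab), col=rev(heat.colors(12)),
xlab="hour of the day", ylab="day of the week (0 = Sunday)")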

Note that we did this for my Twitter account, but we can also run the code on (almost) anyone on Twitter. Consider e.g. @adelaigue. Since Alexandre tweets from France, we have to play with time zones,

res=extractR("adelaigue")
DATE=Vectorize(changer_date_anglais)(res$date)
DATE2=strptime(as.character(DATE),
"%a %b %d %H:%M:%S %z %Y",tz = "GMT")+2*60*60

or I can also look at @skythelimit, who usually tweets from Singapore (while I am in Montréal). I can clearly see when we might overlap,

res=extractR("skythelimit")

Nice, isn't it? But it is possible to do much better… for instance, for those who do not specifically ask not to be geolocated, we can see where they tweet during the day, and during the night… I am quite sure a dozen posts could be written with those functions…