
Lasso Regression (home made)

Again, this post is related to my MAT7381 course, where we will see that it is actually possible to write our own code to compute Lasso regression, \min\left\lbrace\frac{1}{2}\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|_{\ell_2}^2+\lambda\|\mathbf{\beta}\|_{\ell_1}\right\rbrace We first have to define the soft-thresholding function S(z,\gamma)=\text{sign}(z)\cdot(|z|-\gamma)_+=\begin{cases}z-\gamma&\text{ if }\gamma<|z|\text{ and }z>0\\z+\gamma&\text{ if }\gamma<|z|\text{ and }z<0 \\0&\text{ if }\gamma\geq|z|\end{cases} The R function would be

soft_thresholding = function(x,a){
  # componentwise soft-thresholding, sign(x)*(|x|-a)_+
  sign(x) * pmax(abs(x)-a,0)
}

To solve our optimization problem, set\mathbf{r}_j=\mathbf{y} - \left(\beta_0\mathbf{1}+\sum_{k\neq j}\beta_k\mathbf{x}_k\right)=\mathbf{y}-\widehat{\mathbf{y}}^{(j)}
so that the optimization problem can be written, equivalently
\min\left\lbrace\frac{1}{2n}\|\mathbf{r}_j-\beta_j\mathbf{x}_j\|_{\ell_2}^2+\lambda |\beta_j|\right\rbrace
hence, expanding the square (up to terms that do not depend on \beta_j),\min\left\lbrace\frac{1}{2n}\big[\beta_j^2\|\mathbf{x}_j\|_{\ell_2}^2-2\beta_j\mathbf{r}_j^\top\mathbf{x}_j\big]+\lambda |\beta_j|\right\rbrace
and the first order condition (using the subdifferential of \beta_j\mapsto|\beta_j|) gives
\beta_{j,\lambda} = \frac{1}{\|\mathbf{x}_j\|^2}S(\mathbf{r}_j^T\mathbf{x}_j,n\lambda)
or, if we develop
\beta_{j,\lambda} = \frac{1}{\sum_i x_{ij}^2}S\left(\sum_ix_{i,j}[y_i-\widehat{y}_i^{(j)}],n\lambda\right)
Again, if there are weights \mathbf{\omega}=(\omega_i), the coordinate-wise update becomes
\beta_{j,\lambda,{\color{red}{\omega}}} = \frac{1}{\sum_i {\color{red}{\omega_i}}x_{ij}^2}S\left(\sum_i{\color{red}{\omega_i}}x_{i,j}[y_i-\widehat{y}_i^{(j)}],n\lambda\right)
The code to compute this componentwise descent is

lasso_coord_desc = function(X,y,beta,lambda,tol=1e-6,maxiter=1000){
  beta = as.matrix(beta)
  X = as.matrix(X)
  omega = rep(1/length(y),length(y))
  obj = numeric(maxiter+1)
  betalist = vector("list",maxiter+1)
  betalist[[1]] = beta
  beta0list = numeric(maxiter+1)
  beta0 = sum(y-X%*%beta)/(length(y))
  beta0list[1] = beta0
  for (j in 1:maxiter){
    for (k in 1:length(beta)){
      r = y - X[,-k]%*%beta[-k] - beta0*rep(1,length(y))
      beta[k] = (1/sum(omega*X[,k]^2))*
        soft_thresholding(t(omega*r)%*%X[,k],length(y)*lambda)
    }
    beta0 = sum(y-X%*%beta)/(length(y))
    beta0list[j+1] = beta0
    betalist[[j+1]] = beta
    obj[j] = (1/2)*(1/length(y))*norm(omega*(y - X%*%beta - 
           beta0*rep(1,length(y))),'F')^2 + lambda*sum(abs(beta))
    if (norm(rbind(beta0list[j],betalist[[j]]) - 
             rbind(beta0,beta),'F') < tol) { break } 
  } 
  return(list(obj=obj[1:j],beta=beta,intercept=beta0)) }

For instance, consider the following (simple) dataset, with three covariates

chicago = read.table("http://freakonometrics.free.fr/chicago.txt",header=TRUE,sep=";")

that we can “normalize” (or “standardize”)

X = model.matrix(lm(Fire~.,data=chicago))[,2:4]
for(j in 1:3) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
y = chicago$Fire
y = (y-mean(y))/sd(y)

To initialize the algorithm, use the OLS estimate

beta_init = lm(Fire~0+.,data=chicago)$coef

For instance

lasso_coord_desc(X,y,beta_init,lambda=.001)
$obj
[1] 0.001014426 0.001008009 0.001009558 0.001011094 0.001011119 0.001011119
 
$beta
          [,1]
X_1  0.0000000
X_2  0.3836087
X_3 -0.5026137
 
$intercept
[1] 2.060999e-16

and we can get the standard Lasso path plot by looping over a grid of values of \lambda, as sketched below.
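For instance, a minimal sketch of such a loop could look as follows (the grid of \lambda‘s, on a log scale, is an arbitrary choice here),

lambda_grid = exp(seq(log(1e-4),log(2),length=50))
B = matrix(NA,length(lambda_grid),ncol(X))
for(i in 1:length(lambda_grid)){
  fit = lasso_coord_desc(X,y,beta_init,lambda=lambda_grid[i])
  B[i,] = fit$beta
}
matplot(lambda_grid,B,type="l",lty=1,log="x",
        xlab=expression(lambda),ylab="coefficients")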

Quantile Regression (home made, part 2)

A few months ago, I posted a note with some home made codes for quantile regression… there was something odd on the output, but it was because there was a (small) mathematical problem in my equation. So since I should teach those tomorrow, let me fix them.

Median

Consider a sample \{y_1,\cdots,y_n\}. To compute the median, solve\min_\mu \left\lbrace\sum_{i=1}^n|y_i-\mu|\right\rbracewhich can be solved using linear programming techniques. More precisely, this problem is equivalent to\min_{\mu,\mathbf{a},\mathbf{b}}\left\lbrace\sum_{i=1}^n(a_i+b_i)\right\rbracewith a_i,b_i\geq 0 and y_i-\mu=a_i-b_i, \forall i=1,\cdots,n. Heuristically, the idea is to write y_i=\mu+\varepsilon_i, and then define the a_i‘s and b_i‘s so that \varepsilon_i=a_i-b_i and |\varepsilon_i|=a_i+b_i, i.e. a_i=(\varepsilon_i)_+=\max\lbrace0,\varepsilon_i\rbrace=|\varepsilon_i|\cdot\boldsymbol{1}_{\varepsilon_i>0}andb_i=(-\varepsilon_i)_+=\max\lbrace0,-\varepsilon_i\rbrace=|\varepsilon_i|\cdot\boldsymbol{1}_{\varepsilon_i<0}which denote respectively the positive and the negative parts of \varepsilon_i.

Unfortunately (that was the error in my previous post), the expression of linear programs is\min_{\mathbf{z}}\left\lbrace\boldsymbol{c}^\top\mathbf{z}\right\rbrace\text{ s.t. }\boldsymbol{A}\mathbf{z}=\boldsymbol{b},\mathbf{z}\geq\boldsymbol{0}In the equation above, with the a_i‘s and b_i‘s, we’re not far away. Except that we have \mu\in\mathbb{R}, while it should be positive. So similarly, set \mu=\mu^+-\mu^- where \mu^+=(\mu)_+ and \mu^-=(-\mu)_+.

Thus, let\mathbf{z}=\big(\mu^+;\mu^-;\boldsymbol{a},\boldsymbol{b}\big)^\top\in\mathbb{R}_+^{2n+2}and then write the constraint as \boldsymbol{A}\mathbf{z}=\boldsymbol{b} with \boldsymbol{b}=\boldsymbol{y} and \boldsymbol{A}=\big[\boldsymbol{1}_n;-\boldsymbol{1}_n;\mathbb{I}_n;-\mathbb{I}_n\big]And for the objective function\boldsymbol{c}=\big(0,0,\boldsymbol{1}_n,\boldsymbol{1}_n\big)^\top\in\mathbb{R}_+^{2n+2}

To illustrate, consider a sample from a lognormal distribution,

n = 101 
set.seed(1)
y = rlnorm(n)
median(y)
[1] 1.077415

For the optimization problem, use the matrix form above, with n equality constraints (plus the positivity conditions) and 2n+2 parameters,

library(lpSolve) 
X = rep(1,n) 
A = cbind(X, -X, diag(n), -diag(n))
b = y
c = c(rep(0,2), rep(1,n),rep(1,n))
equal_type = rep("=", n) 
r = lp("min", c,A,equal_type,b)
head(r$solution,1)
[1] 1.077415

It looks like it’s working well…

Quantile

Of course, we can adapt our previous code for quantiles

tau = .3
quantile(y,tau)
      30% 
0.6741586

The linear program is now\min_{q^+,q^-,\mathbf{a},\mathbf{b}}\left\lbrace\sum_{i=1}^n\tau a_i+(1-\tau)b_i\right\rbracewith a_i,b_i,q^+,q^-\geq 0 and y_i=q^+-q^-+a_i-b_i, \forall i=1,\cdots,n. The R code is now

c = c(rep(0,2), tau*rep(1,n),(1-tau)*rep(1,n))
r = lp("min", c,A,equal_type,b)
head(r$solution,1)
[1] 0.6741586

So far so good…

Quantile Regression

Consider the following dataset, with rents of flats in a major German city, as a function of the surface area, the year of construction, etc.

base=read.table("http://freakonometrics.free.fr/rent98_00.txt",header=TRUE)

The linear program for the quantile regression is now\min_{\boldsymbol{\beta}^+,\boldsymbol{\beta}^-,\mathbf{a},\mathbf{b}}\left\lbrace\sum_{i=1}^n\tau a_i+(1-\tau)b_i\right\rbracewith a_i,b_i\geq 0 and y_i=\boldsymbol{x}_i^\top[\boldsymbol{\beta}^+-\boldsymbol{\beta}^-]+a_i-b_i, \forall i=1,\cdots,n, and \beta_j^+,\beta_j^-\geq 0, \forall j=0,\cdots,k. So here we use

require(lpSolve) 
tau = .3
n=nrow(base)
X = cbind( 1, base$area)
y = base$rent_euro
K = ncol(X)
N = nrow(X)
A = cbind(X,-X,diag(N),-diag(N))
c = c(rep(0,2*ncol(X)),tau*rep(1,N),(1-tau)*rep(1,N))
b = base$rent_euro
const_type = rep("=",N)
r = lp("min",c,A,const_type,b)
beta = r$sol[1:K] -  r$sol[(1:K+K)]
beta
[1] 148.946864   3.289674

Of course, we can use the rq function from the quantreg package to fit that model

library(quantreg)
rq(rent_euro~area, tau=tau, data=base)
Coefficients:
(Intercept)        area 
 148.946864    3.289674

Here again, it seems to work quite well. We can use a different probability level, of course, and get a plot

plot(base$area,base$rent_euro,xlab=expression(paste("surface (",m^2,")")),
     ylab="rent (euros/month)",col=rgb(0,0,1,.4),cex=.5)
sf=0:250
yr=beta[1]+beta[2]*sf
lines(sf,yr,lwd=2,col="blue")
tau = .9
c = c(rep(0,2*ncol(X)),tau*rep(1,N),(1-tau)*rep(1,N)) # the objective depends on tau, so it must be recomputed
r = lp("min",c,A,const_type,b)
beta = r$sol[1:K] - r$sol[(1:K)+K]
beta
[1] 121.815505   7.865536
yr=beta[1]+beta[2]*sf
lines(sf,yr,lwd=2,col="blue")

And we can adapt the latter to multiple regression, of course,

X = cbind(1,base$area,base$yearc)
K = ncol(X)
N = nrow(X)
A = cbind(X,-X,diag(N),-diag(N))
c = c(rep(0,2*ncol(X)),tau*rep(1,N),(1-tau)*rep(1,N))
b = base$rent_euro
const_type = rep("=",N)
r = lp("min",c,A,const_type,b)
beta = r$sol[1:K] -  r$sol[(1:K+K)]
beta
[1] -5542.503252     3.978135     2.887234

to be compared with

library(quantreg)
rq(rent_euro~ area + yearc, tau=tau, data=base)
 
Coefficients:
 (Intercept)         area        yearc 
-5542.503252     3.978135     2.887234 
 
Degrees of freedom: 4571 total; 4568 residual

On Cochran Theorem (and Orthogonal Projections)

Cochran Theorem – from The distribution of quadratic forms in a normal system, with applications to the analysis of covariance published in 1934 – is probably the most important one in a regression course. It is an application of a nice result on quadratic forms of Gaussian vectors. More precisely, we can prove that if \boldsymbol{Y}\sim\mathcal{N}(\boldsymbol{0},\mathbb{I}_d) is a random vector with d \mathcal{N}(0,1) variables then (i) if A is a (square) idempotent matrix, \boldsymbol{Y}^\top A\boldsymbol{Y}\sim\chi^2_r where r is the rank of matrix A, and (ii) conversely, if \boldsymbol{Y}^\top A\boldsymbol{Y}\sim\chi^2_r then A is an idempotent matrix of rank r. And just in case, A is an idempotent matrix means that A^2=A, and a lot of results can be derived (for instance on the eigenvalues). The proof of that result (at least the (i) part) is nice: we diagonalize matrix A, so that A=P\Delta P^\top, with P orthonormal. Since A is an idempotent matrix observe thatA^2=P\Delta P^\top P\Delta P^\top=P\Delta^2 P^\top=P\Delta P^\topwhere \Delta is some diagonal matrix such that \Delta^2=\Delta, so terms on the diagonal of \Delta are either 0 or 1‘s. And because the rank of A (and \Delta) is r then there should be r 1‘s and d-r 0‘s. Now write\boldsymbol{Y}^\top A\boldsymbol{Y}=\boldsymbol{Y}^\top P\Delta P^\top\boldsymbol{Y}=\boldsymbol{Z}^\top \Delta\boldsymbol{Z}where \boldsymbol{Z}=P^\top\boldsymbol{Y} satisfies \boldsymbol{Z}\sim\mathcal{N}(\boldsymbol{0},P^\top P), i.e. \boldsymbol{Z}\sim\mathcal{N}(\boldsymbol{0},\mathbb{I}_d). Thus \boldsymbol{Z}^\top \Delta\boldsymbol{Z}=\sum_{i:\Delta_{i,i}=1}Z_i^2\sim\chi^2_rNice, isn’t it? And there is more (that will be strongly connected actually to Cochran theorem). Let A=A_1+\dots+A_k, then the two following statements are equivalent: (i) A is idempotent and \text{rank}(A)=\text{rank}(A_1)+\dots+\text{rank}(A_k); (ii) the A_i‘s are idempotent and A_iA_j=0 for all i\neq j.
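Just to illustrate the first part of that result (this small simulation is an addition to the post), we can build an idempotent matrix as an orthogonal projection matrix of rank r, and compare the empirical distribution of \boldsymbol{Y}^\top A\boldsymbol{Y} with the \chi^2_r distribution,

d = 5
r = 2
set.seed(123)
V = matrix(rnorm(d*r),d,r)           # some d x r matrix, of rank r (almost surely)
A = V %*% solve(t(V) %*% V) %*% t(V) # orthogonal projection matrix, hence idempotent
max(abs(A %*% A - A))                # numerically zero
Q = replicate(1e4, {Y = rnorm(d); sum(Y * (A %*% Y))})
c(mean(Q), r)                        # the mean of a chi-square is its number of degrees of freedom
ks.test(Q, "pchisq", df = r)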

Now, let us talk about projections. Let \boldsymbol{y} be a vector in \mathbb{R}^n. Its projection on the space \mathcal V(\boldsymbol{v}_1,\dots,\boldsymbol{v}_p) (generated by those p vectors) is the vector \hat{\boldsymbol{y}}=\boldsymbol{V} \hat{\boldsymbol{a}} that minimizes \|\boldsymbol{y} -\boldsymbol{V} \boldsymbol{a}\| (in \boldsymbol{a}), where \boldsymbol{V} is the n\times p matrix with columns \boldsymbol{v}_1,\dots,\boldsymbol{v}_p. The solution is\hat{\boldsymbol{a}}=( \boldsymbol{V}^\top \boldsymbol{V})^{-1} \boldsymbol{V}^\top \boldsymbol{y} \text{ and } \hat{\boldsymbol{y}} = \boldsymbol{V} \hat{\boldsymbol{a}}
The matrix P=\boldsymbol{V} ( \boldsymbol{V}^\top \boldsymbol{V})^{-1} \boldsymbol{V}^\top is the orthogonal projection onto \mathcal V(\boldsymbol{v}_1,\dots,\boldsymbol{v}_p) and \hat{\boldsymbol{y}} = P\boldsymbol{y}.

Now we can recall Cochran theorem. Let \boldsymbol{Y}\sim\mathcal{N}(\boldsymbol{\mu},\sigma^2\mathbb{I}_d) for some \sigma>0 and \boldsymbol{\mu}. Consider orthogonal subspaces F_1,\dots,F_m of \mathbb{R}^d, with dimensions d_i. Let P_{F_i} be the orthogonal projection matrix on F_i, then (i) the vectors P_{F_1}\boldsymbol{Y},\dots,P_{F_m}\boldsymbol{Y} are independent, with respective distributions \mathcal{N}(P_{F_i}\boldsymbol{\mu},\sigma^2P_{F_i}), and (ii) the random variables \|P_{F_i}(\boldsymbol{Y}-\boldsymbol{\mu})\|^2/\sigma^2 are independent and \chi^2_{d_i} distributed.

We can try to visualize those results. For instance, the orthogonal projection of a random vector has a Gaussian distribution. Consider a two-dimensional Gaussian vector

library(mnormt)
r = .7
s1 = 1
s2 = 1
Sig = matrix(c(s1^2,r*s1*s2,r*s1*s2,s2^2),2,2)
Sig
Y = rmnorm(n = 1000,mean=c(0,0),varcov = Sig)
plot(Y,cex=.6)
vu = seq(-4,4,length=101)
vz = outer(vu,vu,function (x,y) dmnorm(cbind(x,y),
mean=c(0,0), varcov = Sig))
contour(vu,vu,vz,add=TRUE,col='blue')
abline(a=0,b=2,col="red")

Consider now the projection of points \boldsymbol{y}=(y_1,y_2) on the straight line with directional vector \overrightarrow{\boldsymbol{u}}, with slope a (say a=2). To get the projected point \boldsymbol{x}=(x_1,x_2), recall that x_2=ax_1 and that \overrightarrow{\boldsymbol{x}\boldsymbol{y}}\perp\overrightarrow{\boldsymbol{u}}. Hence, the following code will give us the orthogonal projections

p = function(a){
  # orthogonal projection of each point (row of Y) on the line x2 = a*x1
  x0 = (Y[,1]+a*Y[,2])/(1+a^2)
  y0 = a*x0
  cbind(x0,y0)
}

with

P = p(2)
for(i in 1:20) segments(Y[i,1],Y[i,2],P[i,1],P[i,2],lwd=4,col="red")
points(P[,1],P[,2],col="red",cex=.7)

Now, if we look at the distribution of points on that line, we get… a Gaussian distribution, as expected,

z = sqrt(P[,1]^2+P[,2]^2)*c(-1,+1)[(P[,1]>0)*1+1]
vu = seq(-6,6,length=601)
vv = dnorm(vu,mean(z),sd(z))
hist(z,probability = TRUE,breaks = seq(-4,4,by=.25))
lines(vu,vv,col="red")

Of course, we can use the matrix representation to get the projection on \overrightarrow{\boldsymbol{u}}, or on a normalized version of that vector actually

a=2
U = c(1,a)/sqrt(a^2+1)
U
[1] 0.4472136 0.8944272
matP = U %*% solve(t(U) %*% U) %*% t(U)
matP %*% Y[1,]
[,1]
[1,] -0.1120555
[2,] -0.2241110
P[1,]
x0 y0
-0.1120555 -0.2241110

(which is consistent with our manual computation). Now, in Cochran theorem, we start with independent random variables,

Y = rmnorm(n = 1000,mean=c(0,0),varcov = diag(c(1,1)))

Then we consider the projection on \overrightarrow{\boldsymbol{u}} and \overrightarrow{\boldsymbol{v}}=\overrightarrow{\boldsymbol{u}}^\perp

U = c(1,a)/sqrt(a^2+1)
matP1 = U %*% solve(t(U) %*% U) %*% t(U)
P1 = Y %*% matP1
z1 = sqrt(P1[,1]^2+P1[,2]^2)*c(-1,+1)[(P1[,1]>0)*1+1]
V = c(a,-1)/sqrt(a^2+1)
matP2 = V %*% solve(t(V) %*% V) %*% t(V)
P2 = Y %*% matP2
z2 = sqrt(P2[,1]^2+P2[,2]^2)*c(-1,+1)[(P2[,1]>0)*1+1]

We can plot those two projections

plot(z1,z2)

and observe that the two are indeed independent Gaussian variables. And (of course) their squared norms are \chi^2_{1} distributed, as we can double-check below.
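Just to check that last claim (this verification is an addition to the post), we can compare the histograms of those squared norms with the \chi^2_1 density (recall that \sigma^2=1 here),

par(mfrow=c(1,2))
vu = seq(.01,8,length=601)
hist(z1^2,probability=TRUE,breaks=50,main="",xlab="")
lines(vu,dchisq(vu,df=1),col="red")
hist(z2^2,probability=TRUE,breaks=50,main="",xlab="")
lines(vu,dchisq(vu,df=1),col="red")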

On the conjugate function

In the MAT7381 course (graduate course on regression models), we will talk about optimization, and a classical tool is the so-called conjugate. Given a function f:\mathbb{R}^p\to\mathbb{R} its conjugate is function f^{\star}:\mathbb{R}^p\to\mathbb{R} such that f^{\star}(\boldsymbol{y})=\max_{\boldsymbol{x}}\lbrace\boldsymbol{x}^\top\boldsymbol{y}-f(\boldsymbol{x})\rbraceso, long story short, f^{\star}(\boldsymbol{y}) is the maximum gap between the linear function \boldsymbol{x}^\top\boldsymbol{y} and f(\boldsymbol{x}).

Just to visualize, consider a simple parabolic function (in dimension 1) f(x)=x^2/2, then f^{\star}(\color{blue}{2}) is the maximum gap between the line x\mapsto\color{blue}{2}x and function f(x).

x = seq(-100,100,length=6001)
f = function(x) x^2/2
vf = Vectorize(f)(x)
fstar = function(y) max(y*x-vf)
vfstar = Vectorize(fstar)(x)

We can see it on the figure below.

viz = function(x0=1,YL=NA){
  idx = which(abs(x)<=3)
  par(mfrow=c(1,2))
  plot(x[idx],vf[idx],type="l",xlab="",ylab="",col="blue",lwd=2)
  abline(h=0,col="grey")
  abline(v=0,col="grey")
  idx2 = which(x0*x>=vf)
polygon(c(x[idx2],rev(x[idx2])),c(vf[idx2],rev(x0*x[idx2])),col=rgb(0,1,0,.3),border=NA)
abline(a=0,b=x0,col="red")
i=which.max(x0*x-vf)
segments(x[i],x0*x[i],x[i],f(x[i]),lwd=3,col="red")
if(is.na(YL)) YL=range(vfstar[idx])
plot(x[idx],vfstar[idx],type="l",xlab="",ylab="",col="red",lwd=1,ylim=YL)
abline(h=0,col="grey")
abline(v=0,col="grey")
segments(x0,0,x0,fstar(x0),lwd=3,col="red")
points(x0,fstar(x0),pch=19,col="red")
}
viz(1)

or

viz(1.5)

In that case, we can actually compute f^{\star}, since f^{\star}(y)=\max_{x}\lbrace xy-f(x)\rbrace=\max_{x}\lbrace xy-x^2/2\rbraceThe first order condition is here x^{\star}=y and thusf^{\star}(y)=\max_{x}\lbrace xy-x^2/2\rbrace= x^{\star}y-(x^{\star})^2/2= y^2-y^2/2=y^2/2And actually, that can be related to two results. The first one is to observe that f(\boldsymbol{x})=\|\boldsymbol{x}\|_2^2/2 and in that case f^{\star}(\boldsymbol{y})=\|\boldsymbol{y}\|_2^2/2, from the following general result: if f(\boldsymbol{x})=\|\boldsymbol{x}\|_p^p/p with p>1, where \|\cdot\|_p denotes the standard \ell_p norm, then f^{\star}(\boldsymbol{y})=\|\boldsymbol{y}\|_q^q/q where\frac{1}{p}+\frac{1}{q}=1The second one is the conjugate of a quadratic function. More specifically, if f(\boldsymbol{x})=\boldsymbol{x}^{\top}\boldsymbol{Q}\boldsymbol{x}/2 for some positive definite matrix \boldsymbol{Q}, then f^{\star}(\boldsymbol{y})=\boldsymbol{y}^{\top}\boldsymbol{Q}^{-1}\boldsymbol{y}/2. In our case, it was a univariate problem with \boldsymbol{Q}=1.
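We can also check that second result numerically (this small verification is an addition to the post, with an arbitrary positive definite matrix \boldsymbol{Q}), computing the conjugate by brute-force optimization,

Q = matrix(c(2,.5,.5,1),2,2)                 # some positive definite matrix
fQ = function(x) as.numeric(t(x) %*% Q %*% x)/2
fQstar = function(y) -optim(c(0,0), function(x) fQ(x) - sum(x*y))$value
y0 = c(1,-2)
fQstar(y0)                                   # numerical maximum of x'y - f(x)
as.numeric(t(y0) %*% solve(Q) %*% y0)/2      # theoretical value, y'Q^{-1}y/2, roughly equal (up to the optimizer's tolerance)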

For the conjugate of the \ell_p norm, we can use the following code to visualize it

p = 3
f = function(x) abs(x)^p/p
vf = Vectorize(f)(x)
fstar = function(y) max(y*x-vf)
vfstar = Vectorize(fstar)(x)
viz(1.5)

or

p = 1.1
f = function(x) abs(x)^p/p
vf = Vectorize(f)(x)
fstar = function(y) max(y*x-vf)
vfstar = Vectorize(fstar)(x)
viz(1, YL=c(0,10))

Actually, in that case, we almost visualize that if f(x)=|x| then\displaystyle{f^{\star}\left(y\right)={\begin{cases}0,&\left|y\right|\leq 1\\\infty ,&\left|y\right|>1.\end{cases}}}

To conclude, another popular case, f(x)=\exp(x) then{\displaystyle f^{\star}\left(y\right)={\begin{cases}y\log(y)-y,&y>0\\0,&y=0\\\infty ,&y<0.\end{cases}}}We can visualize that case below

f = function(x) exp(x)
vf = Vectorize(f)(x)
fstar = function(y) max(y*x-vf)
vfstar = Vectorize(fstar)(x)
viz(1,YL=c(-3,3))

Combining automatically factor levels with trees

Last year, in a post, I discussed how to merge levels of factor variables, using combinatorial techniques (it was for my STT5100 course, and trees are not in the syllabus), with an extension on trees at the end of the post.

Consider the following (simulated) dataset

n=200
set.seed(1)
x1=runif(n)
x2=runif(n)
y=1+2*x1-x2+rnorm(n,0,.2)
LB=sample(LETTERS[1:10])
b=data.frame(y=y,x1=x1,
  x2=cut(x2,breaks=
  c(-1,.05,.1,.2,.35,.4,.55,.65,.8,.9,2),
  labels=LB))
str(b)
'data.frame':	200 obs. of  3 variables:
 $ y : num  1.345 1.863 1.946 2.481 0.765 ...
 $ x1: num  0.266 0.372 0.573 0.908 0.202 ...
 $ x2: Factor w/ 10 levels "I","A","H","F",..: 4 4 6 4 3 6 7 3 4 8 ...
table(b$x2)[LETTERS[1:10]]
 
 A  B  C  D  E  F  G  H  I  J 
11 12 23 34 23 36 12 32  3 14

Just by looking at the data (see the previous post), we could easily get the feeling that 10 levels was too much.

Following my post, Przemyslaw sent a comment suggesting to use

library(factorMerger)

It is indeed a nice package (unless you have really really big datasets with a lot of categories in your factor variables – as I experienced recently), and you can get great graphs

MF = mergeFactors(response = b$y, 
             factor = b$x2, 
             family = "gaussian")
plot(MF)

Here it suggests creating three categories. Recall that with Student t-tests (changing the reference), we got

Another interesting package, by Piro Polo, is

library(tree.bins)

To use it, we simply call the following function, which automatically transforms our dataset: continuous variables remain unchanged, and the categories of categorical variables are (possibly) merged

b.bins = tree.bins(data=b, y=y)
str(b.bins)
Classes ‘data.table’ and 'data.frame':	200 obs. of  3 variables:
 $ y : num  1.345 1.863 1.946 2.481 0.765 ...
 $ x1: num  0.266 0.372 0.573 0.908 0.202 ...
 $ x2: chr  "Group.4" "Group.4" "Group.4" "Group.4" ...
 - attr(*, ".internal.selfref")= 
table(b.bins$x2)

Group.1 Group.2 Group.3 Group.4 
     23      35      26     116

here into four groups. To get the correspondence, use

tree.bins(data=b, y=y, return = "lkup.list")
[[1]]
   x2 Categories
1   E    Group.1
2   G    Group.2
3   C    Group.2
4   B    Group.3
5   J    Group.3
6   I    Group.4
7   A    Group.4
8   H    Group.4
9   F    Group.4
10  D    Group.4

(we have a list with one element, one dataframe, since there is only one factor variable). Cool, isn’t it? I miss Przemyslaw’s plot, but this is rather quick, and efficient…

 

On leverage

Last week, in our STT5100 (applied linear models) class, I introduced the hat matrix, and the notion of leverage. In a classical regression model, \boldsymbol{y}=\boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{\varepsilon} (in a matrix form), the ordinary least squares estimator of parameter \boldsymbol{\beta} is \widehat{\boldsymbol{\beta}}=(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top\boldsymbol{y}The prediction can then be written\widehat{\boldsymbol{y}}=\boldsymbol{X}\widehat{\boldsymbol{\beta}}=\underbrace{\color{blue}{\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top}}_{\color{blue}{\boldsymbol{H}}}\boldsymbol{y}where \color{blue}{\boldsymbol{H}} is called the hat matrix.

The matrix is idempotent, i.e. \boldsymbol{H}^2={\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\textcolor{grey}{\boldsymbol{X}^\top{\boldsymbol{X}}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}}\boldsymbol{X}^\top}={\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top}=\boldsymbol{H}so it can be interpreted as a projection matrix. Furthermore, since\boldsymbol{H}\boldsymbol{X}=\boldsymbol{X} (just do the maths), the projection is on a subspace that contains all the linear combinations of columns of \boldsymbol{X}. One can also observe that \mathbb{I}-\boldsymbol{H} is also a projection matrix. And we can write\boldsymbol{y}=\underbrace{\boldsymbol{H}\boldsymbol{y}}_{\widehat{\boldsymbol{y}}}+\underbrace{(\mathbb{I}-\boldsymbol{H})\boldsymbol{y}}_{\widehat{\boldsymbol{\varepsilon}}}where \widehat{\boldsymbol{y}} is the orthogonal projection of \boldsymbol{y} on the (linear) space of linear combinations of columns of \boldsymbol{X}, and \widehat{\boldsymbol{y}}\perp\widehat{\boldsymbol{\varepsilon}}, which gives the classical interpretation of residuals, being unpredictable (at least with a linear model using variables \boldsymbol{X}).

Let’s move a bit faster now (we’ve seen many other properties last week), and consider the elements on the diagonal of matrix \boldsymbol{H}. Recall that\widehat{y}_i=\boldsymbol{H}_{i,i}y_i+\sum_{j\neq i}\boldsymbol{H}_{i,j}y_jso entry \boldsymbol{H}_{i,i} is a measure of the influence of observation y_i on its own prediction \widehat{y}_i.

We have seen that\sum_{i=1}^n\boldsymbol{H}_{i,i}=\text{trace}(\boldsymbol{H})=\text{trace}(\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top)which can be written\sum_{i=1}^n\boldsymbol{H}_{i,i}=\text{trace}(\boldsymbol{X}^\top\boldsymbol{X}(\boldsymbol{X}^\top\boldsymbol{X})^{-1})=\text{trace}(\mathbb{I}_p)=pwhere classically p=k+1, where k is the number of explanatory variables. Further, since \boldsymbol{H} is idempotent, we can write (from \boldsymbol{H}=\boldsymbol{H}^2) that\boldsymbol{H}_{i,i}=\boldsymbol{H}_{i,i}^2 + \sum_{j\neq i}\boldsymbol{H}_{i,j}\boldsymbol{H}_{j,i}=\boldsymbol{H}_{i,i}^2 + \sum_{j\neq i}\boldsymbol{H}_{i,j}^2On the one hand, since the second term is positive, \boldsymbol{H}_{i,i}\geq\boldsymbol{H}_{i,i}^2, i.e. \boldsymbol{H}_{i,i}\leq 1. And since both terms are positive, \boldsymbol{H}_{i,i}\in[0,1]. And there was a question in the course on the sharpness of those bounds.
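Those properties are easy to check numerically, for instance on a small simulated design matrix (this check is just an illustration, it is not in the original post),

set.seed(1)
X0 = cbind(1,rnorm(20),rnorm(20))
H0 = X0 %*% solve(t(X0) %*% X0) %*% t(X0)
max(abs(H0 %*% H0 - H0)) # idempotent, up to numerical error
max(abs(H0 %*% X0 - X0)) # H X = X
sum(diag(H0))            # the trace equals p = 3
range(diag(H0))          # diagonal elements are within [0,1]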

Using Anscombe’s dataset, we’ve seen that it was possible to get a leverage of 1. Using something rather similar

df = data.frame(x = c(rep(1,10),6), y = c(1:10,8))
plot(df)

we obtain

model = lm(y~x,data=df)
abline(model,col="red",lwd=2)
H = lm.influence(model)$hat
plot(1:11,H,type="h")

The very last observation, the one on the right, is here extremely influential: if we remove it, the model is completely different! And here, we reach the upper bound, \boldsymbol{H}_{11,11}=1. Observe that all other points are equally influential, and because of the constraint on the trace of the matrix, \boldsymbol{H}_{i,i}=1/10 when i\in\{1,2,\cdots,10\}.

Now, what about the lower bound? In order to have some sort of “non-influential” observations, consider the two following cases.

  • the case where one observation (below the first one) is such that \widehat{\boldsymbol{y}}_{i}=\boldsymbol{y}_{i} (perfect prediction)
  • the case where one observation (below the tenth one) is such that \boldsymbol{x}_{i}=\overline{\boldsymbol{x}} and \boldsymbol{y}_{i}=\overline{\boldsymbol{y}}; indeed (from the first order condition – or normal equation), the fitted regression line always goes through the point (\overline{\boldsymbol{x}},\overline{\boldsymbol{y}})

Let us move two observations from our dataset,

mean(c(4,rep(1,8),6))
[1] 1.8
df = data.frame(x = c(4,rep(1,8),6,1.8),
y = c(predict(model,newdata=data.frame(x=4)),
2:9,8,
predict(model,newdata=data.frame(x=1.8))))

We now have

If we compute the leverages, we obtain

model = lm(y~x,data=df)
H = lm.influence(model)$hat
plot(1:11,H,type="h")

so, for the first observation, its leverage actually increased (the blue part), and for the tenth one, we have the lowest influence, but it is not zero. Is it possible to reach zero ?

Here, observe that for the tenth observation, \boldsymbol{H}_{i,i}=1/n. And actually, that’s the best we can do… We can prove that, in the case of a simple regression (as above)\boldsymbol{H}_{i,i}=\frac{1}{n}+\frac{(x_i-\overline{x})^2}{n\text{Var}(x)}which is minimum when x_i=\overline{x}, and then \boldsymbol{H}_{i,i}=1/n, otherwise \boldsymbol{H}_{i,i}>1/n. And this property is also valid in a multiple regression (as soon as an intercept is included in the regression – which should always be the case). To prove that result, let \tilde{\boldsymbol{X}} denote the matrix of centered variables \boldsymbol{X}, then we can prove that \boldsymbol{H}_{i,i}=\frac{1}{n}+\big[\tilde{\boldsymbol{X}}(\tilde{\boldsymbol{X}}^\top\tilde{\boldsymbol{X}})^{-1}\tilde{\boldsymbol{X}}^\top\big]_{i,i}(which is basically a matrix version of the previous equation).
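As a quick check (again, an addition to the post), we can compare that closed-form expression with the hat values computed above on the modified dataset, keeping in mind that n\text{Var}(x) is here the sum of squares \sum_i(x_i-\overline{x})^2,

x = df$x
n = nrow(df)
h_formula = 1/n + (x-mean(x))^2/sum((x-mean(x))^2)
cbind(h_formula, H) # the two columns should coincide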

I can maybe add another comment on Anscombe’s data. We’ve seen that, on the right, we did reach 1. But I did not prove it. One way to prove it is actually to focus on the remaining n-1 points, on the left. Those all have the same x value. We can prove that if \boldsymbol{X}_{i_1}=\boldsymbol{X}_{i_2}, then \boldsymbol{H}_{i_1,i_2}=\boldsymbol{X}_{i_1}^\top(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}_{i_2}=\boldsymbol{H}_{i_1,i_1}hence, using the relationship obtained since the hat matrix is idempotent,\boldsymbol{H}_{i_1,i_1}=2\boldsymbol{H}_{i_1,i_1}^2+\sum_{j\notin\{i_1,i_2\}}\boldsymbol{H}_{i_1,j}^2thus\boldsymbol{H}_{i_1,i_1}\big(1-2\boldsymbol{H}_{i_1,i_1}\big)\geq 0i.e. \boldsymbol{H}_{i_1,i_1}\in[0,1/2]; more generally, with m “duplicates”, the upper bound becomes 1/m. So the n-1 \boldsymbol{H}_{i,i}‘s on the left are all below 1/(n-1), and they sum to at most 1; since the last one is at most 1 and the total sum has to be p=2, the last one has to be exactly 1. So we have the values of all n \boldsymbol{H}_{i,i}‘s.

 

Insurance data science : Pictures

At the Summer School of the Swiss Association of Actuaries, in Lausanne, following the part of Jean-Philippe Boucher (UQAM) on telematic data, I will start talking about pictures this Wednesday. Slides are available online

Ewen Gallic (AMSE) will present a tutorial on satellite pictures, and a simple classification problem, related to Alzheimer detection.

We will try to identify what is on the following pictures, starting with the car

(we will see that the car is indeed identified)

We will also discuss previous pictures from the summer school

Insurance data science : use and value of unusual data #1

Next week, with , I will be at the Summer School of the Swiss Association of Actuaries, in Lausanne, with Jean-Philippe Boucher (UQAM) and Ewen Gallic (AMSE).

I will give an introductory talk on Monday morning, and the slides are now available

There will be some hands-on applications, on R. I will share some codes in the slides.

Optimal transport on large networks

With Alfred Galichon and Lucas Vernet, we recently uploaded a paper entitled Optimal transport on large networks on arXiv.

This article presents a set of tools for the modeling of a spatial allocation problem in a large geographic market and gives examples of applications. In our settings, the market is described by a network that maps the cost of travel between each pair of adjacent locations. Two types of agents are located at the nodes of this network. The buyers choose the most competitive sellers depending on their prices and the cost to reach them. Their utility is assumed additive in both these quantities. Each seller, taking as given other sellers’ prices, sets her own price to have a demand equal to the one we observed. We give a linear programming formulation for the equilibrium conditions. After formally introducing our model we apply it on two examples: prices offered by petrol stations and quality of services provided by maternity wards (only the latter is described here for privacy issues). These examples illustrate the applicability of our model to aggregate demand, rank prices and estimate cost structure over the network. We insist on the possibility of applications to large scale data sets using modern linear programming solvers such as Gurobi.

Demand for gas in gas stations in Brittany, and demand for maternity in France (with border correction)

In addition to this paper we released an R toolbox to implement our results and an online tutorial, optimalnetwork.github.io.

Estimates on training vs. validation samples

Before moving to cross-validation, it was natural to say “I will burn 50% (say) of my data to choose a model, and then use the remaining 50% to fit it”. For instance, we can use the training data for variable selection (e.g. using some stepwise procedure in a logistic regression), and then, once variables have been selected, fit the model on the remaining set of observations. A natural question is usually “does it really matter?”.

In order to visualize this problem, consider my (simple) dataset

MYOCARDE=read.table(
  "http://freakonometrics.free.fr/saporta.csv",
  head=TRUE,sep=";")

Let us generate 100 training samples (where we keep about 50% of the observations). On each of them, we use a stepwise procedure, and we keep the estimates of the remaining variables (and their standard errors, actually)

n=nrow(MYOCARDE)
M=matrix(NA,100,ncol(MYOCARDE))
colnames(M)=c("(Intercept)",names(MYOCARDE)[1:7])
S1=S2=M1=M2=M
for(i in 1:100){
idx = which(sample(0:1,size=n, replace=TRUE)==1)
reg=step(glm(PRONO=="DECES"~.,data=MYOCARDE[idx,],family=binomial))
nm=names(reg$coefficients)
M1[i,nm]=reg$coefficients
S1[i,nm]=summary(reg)$coefficients[,2]
f=paste("PRONO=='DECES'~",paste(nm[-1],collapse="+"),sep="")
reg=glm(f,data=MYOCARDE[-idx,],family=binomial)
M2[i,nm]=reg$coefficients
S2[i,nm]=summary(reg)$coefficients[,2]
}

Then, for the 7 covariates (and the constant) we can look at the value of the coefficient in the model fitted on the training sample, and the value in the model fitted on the validation sample (of course, only when the variable was retained)

for(j in 1:8){
idx=which(!is.na(M1[,j]))
plot(M1[idx,j],M2[idx,j])
abline(a=0,b=1,lty=2,col="gray")
segments(M1[idx,j]-2*S1[idx,j],M2[idx,j],M1[idx,j]+2*S1[idx,j],M2[idx,j])  
segments(M1[idx,j],M2[idx,j]-2*S2[idx,j],M1[idx,j],M2[idx,j]+2*S2[idx,j])  
}

For instance, with the intercept, we have the following

 

where horizontal segments are confidence intervals of the parameter on the model fitted on the training sample, the vertical on the validation sample. The green part means some sort of consistency, while the red one means that actually, the coefficient was negative with one model, positive with the other one. Which is odd (but in that case, observe that coefficients are rarely significant).

We can also visualize the joint distribution of the two estimators,

for(j in 1:8){
library(ks)
idx = which(!is.na(M1[,j]))
Z = cbind(M1[idx,j],M2[idx,j])
H = Hpi(x=Z)
fhat = kde(x=Z, H=H)
image(fhat$eval.points[[1]],
fhat$eval.points[[2]],fhat$estimate)
abline(a=0,b=1,lty=2,col="gray")
abline(v=0,lty=2)
abline(h=0,lty=2)
}

which are here, almost on the diagonal,

meaning that the intercept on the two samples is (more or less) the same. We can then look at other parameters (which is actually more interesting).

On that variable, it seems that it is significant on the training dataset (somehow, it is consistent with the fact that it is remaining in the model after the stepwise procedure) but not on the validation sample (or hardly significant).

Others are much more consistent (with some possible outliers)

 

 

On the next one, we have again significance on the training sample, but not on the validation sample,

 

 

and probably more interesting

where the two are very consistent.

Exotic link functions for GLMs

In my previous post on GLMs, I discussed power link functions. But there are many more links that can be used:

  • The square root link (for the Poisson model)

Consider some random variable Y with mean \mu and variance \sigma^2. Using a Taylor expansion,g(Y)\sim g(\mu)+(Y-\mu)g'(\mu)+\frac{1}{2}(Y-\mu)^2g''(\mu)we can write\mathbb{E}[g(Y)]\sim g(\mu)+\frac{\sigma^2}{2}g''(\mu)\quad\text{ and }\quad\text{Var}[g(Y)]\sim [g'(\mu)]^2\sigma^2

Assume that Y\sim\mathcal{P}(\lambda), and consider a square root transformation, g(y)=\sqrt{y}. Then the second equality becomes \text{Var}[\sqrt{Y}]\sim \left[\frac{1}{2\sqrt{\mathbb{E}[Y]}}\right]^2\text{Var}[Y]=\frac{1}{4}

So, somehow, with a square-root transformation, we have variance stability, which might be interpreted as some homoscedasticity.
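A quick simulation (added here just as an illustration) confirms that the variance of \sqrt{Y} is indeed close to 1/4, whatever the value of \lambda (as long as \lambda is not too small),

lbd = c(2,5,10,50)
sapply(lbd, function(l) var(sqrt(rpois(1e5,l)))) # all four values should be close to 0.25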

  • The complementary log-log function for the Bernoulli model

Assume that the true variable of interest is a Poisson one, N|\mathbf{X}=\mathbf{x}\sim\mathcal{P}(\lambda_{\mathbf{x}}) where \lambda_{\mathbf{x}}=\exp[\mathbf{x}^T\mathbf{\beta}]Thus,\mathbb{P}[N=0|\mathbf{X}=\mathbf{x}]=\exp[-\lambda_{\mathbf{x}}]=\exp[-(\exp[\mathbf{x}^T\mathbf{\beta}])]while\mathbb{P}[N>0|\mathbf{X}=\mathbf{x}]=1-\exp[-(\exp[\mathbf{x}^T\mathbf{\beta}])]=H(\mathbf{x}^T\mathbf{\beta})where H(s)=1-\exp[-\exp(s)]. Let Y=\mathbf{1}(N>0). The previous model seems like a Bernoulli regression with H as link function,\mathbb{P}[Y=1|\mathbf{X}=\mathbf{x}]=H(\mathbf{x}^T\mathbf{\beta})

So, assume now that instead of observing N we observe Y=\boldsymbol{1}(N>0). In that case, running a Bernoulli regression with a complementary log-log link function would be the same (?) as running first a Poisson regression on the original data, and then using it on our binary variable, zero vs. non-zero. Let us generate some data, and see what’s going on. Let us compare e^{-\lambda_{\mathbf{x}}} and p_{\mathbf{x}}, the latter being obtained here from a Bernoulli (probit) regression on the indicator \{Y=0\}

n=563
set.seed(1)
base=data.frame(X1=rnorm(n),X2=rnorm(n))
lambda=base$X1+base$X2
base$Y=rpois(n,exp(lambda))
regPois = glm(Y~.,data=base,family=poisson(link="log"))
lambda = predict(regPois,type="response")
regBinom = glm((Y==0)~.,data=base,family=binomial(link="probit"))
prob = predict(regBinom, type="response")
plot(prob,exp(-lambda),xlim=0:1,ylim=0:1)
abline(a=0,b=1,lty=2,col="red")

What if p_{\mathbf{x}} was obtained from a Bernoulli regression, with a cloglog link function ?

regBinom = glm((Y>0)~.,data=base,family=binomial(link="cloglog"))
prob = predict(regBinom, type="response")
plot(prob,1-exp(-lambda),xlim=0:1,ylim=0:1)
abline(a=0,b=1,lty=2,col="red")

It looks like the fit is very good here ! Now, what if we have real data, like the dataset from A Theory of Extramarital Affairs, by Ray Fair, published in 1978 in the Journal of Political Economy (with 563 observations, and nine variables)

base = read.table("http://freakonometrics.free.fr/baseaffairs.txt",header=TRUE)
str(base)
x=base$SEX
base$SEX="M"
base$SEX[x=="0"]="F"
x=base$CHILDREN
base$CHILDREN="YES"
base$CHILDREN[x==0]="NO"
regPois = glm(Y~.,data=base,family=poisson(link="log"))
lambda = predict(regPois,type="response")
regBinom = glm((Y==0)~.,data=base,family=binomial(link="probit"))
prob = predict(regBinom, type="response")
plot(prob,exp(-lambda),xlim=0:1,ylim=0:1)
abline(a=0,b=1,lty=2,col="red")

In that case the two models are very different. But actually, so is the second one

regBinom = glm((Y>0)~.,data=base,family=binomial(link="cloglog"))
prob = predict(regBinom, type="response")
plot(prob,1-exp(-lambda),xlim=0:1,ylim=0:1)
abline(a=0,b=1,lty=2,col="red")

How can we interpret that ? Could it be because the Poisson model is not good ? Actually, if we run a zero-inflated model here,

library(pscl)
regZIP = zeroinfl(Y ~ . | ., data = base)
summary(regZIP)
 
Count model coefficients (poisson with log link):
             Estimate Std. Error z value Pr(>|z|)    
(Intercept) -0.002274   0.048413  -0.047    0.963    
X1           1.019814   0.026186  38.945   <2e-16 ***
X2           1.004814   0.024172  41.570   <2e-16 *** 
Zero-inflation model coefficients (binomial with logit link): 
            Estimate Std. Error z value Pr(>|z|)  
(Intercept) -4.90190    2.07846  -2.358   0.0184 *
X1          -2.00227    0.86897  -2.304   0.0212 *
X2          -0.01545    0.96121  -0.016   0.9872  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Hence, we reject here the Poisson distribution assumption, because of the inflation of zeros… It looks like the cloglog link can be used to check if the Poisson distribution is a good model, or not…

Extracting information from a picture, round 2

Yesterday, I published a post on extracting information from a picture, but it did not work as expected. I claimed that it was because of the original graph I had. More precisely, the map was based on some weird projection, and I could not reconcile it with standard shapefiles. So I decided to cheat a little bit, by creating my own map,

Colors are ugly, I know. But I got them using

u = seq(0,1,length=30)
couleurs = rgb(u,rev(u),0,1)

The picture is

url = "https://freakonometrics.hypotheses.org/files/2018/12/chomage3.png"
library(pixmap)
library(png)
IMG = readPNG(url)

I used those colors because it would make things easy when extracting reds and greens…

ROUGE=t(IMG[,,1])[x1:x2,]
ROUGE=ROUGE[,y2:y1]
library(scales)
image(x1:x2,y1:y2,ROUGE,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))
VERT=t(IMG[,,2])[x1:x2,]
VERT=VERT[,y2:y1]
image(x1:x2,y1:y2,VERT,col=alpha(colour=rgb(0,1,0,1), alpha = seq(0,1,by=.01)))

Let us see if the contour of France can be overlaid

library(maptools)
library(PBSmapping)
download.file("http://biogeo.ucdavis.edu/data/gadm2.8/rds/FRA_adm0.rds","FRA_adm0.rds")
FR=readRDS("FRA_adm0.rds")
library(maptools)
PP = SpatialPolygons2PolySet(FR)
par(mfrow=c(1,1))
PP=PP[(PP$X<=8.25)&(PP$Y>=42.2),]
u=(x1:x2)-x1
v=(y1:y2)-y1
ax=min(PP$X)
bx=max(PP$X)-min(PP$X)
ay=min(PP$Y)
by=max(PP$Y)-min(PP$Y)
PP$X=(PP$X-ax)/bx*max(u)
PP$Y=(PP$Y-ay)/by*max(v)
image(u,v,ROUGE,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))
points(PP$X,PP$Y)

We have a perfect match, don’t we…?

Let us now use a shapefile based on départements,

download.file("http://biogeo.ucdavis.edu/data/gadm2.8/rds/FRA_adm2.rds","FRA_adm2.rds")
FR2=readRDS("FRA_adm2.rds")
library(maptools)
PP = SpatialPolygons2PolySet(FR2)
image(u,v,ROUGE,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))
k=35
pX=(PP$X[PP$PID==k]-ax)/bx*max(u)
pY=(PP$Y[PP$PID==k]-ay)/by*max(v)
points(pX,pY)

For instance, the thirty-fifth polygon is the following

Let us extract the color inside that polygon

u=1:nrow(ROUGE)
v=1:ncol(ROUGE)

The code would be

pX=(PP$X[PP$PID==k]-ax)/bx*max(u)
pY=(PP$Y[PP$PID==k]-ay)/by*max(v)
E=expand.grid(u,v)
M=matrix(point.in.polygon(E[,1],E[,2],pX,pY)>0,length(u),length(v))
image(u,v,ROUGE*M,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))
points(pX,pY)

Now, for each département, I extract the average value of red, and the average value of green,

extract_info = function(k){
  pX=(PP$X[PP$PID==k]-ax)/bx*max(u)
  pY=(PP$Y[PP$PID==k]-ay)/by*max(v)
  E=expand.grid(u,v)
  M=matrix(point.in.polygon(E[,1],E[,2],pX,pY)>0,length(u),length(v))
  nom=FR2[FR2$OBJECTID==k,c("NAME_2","CCA_2")]
  return(c(as.numeric(nom$CCA_2),sum(ROUGE[M==1])/sum(M),sum(VERT[M==1])/sum(M)))
}
donnees = Vectorize(extract_info)(1:95)
x2=donnees[1,]
y2=donnees[2,]/(donnees[2,]+donnees[3,])
df2=data.frame(dpt=x2,extract=y2)
x1=as.numeric(as.character(baseChomage$no))
y1=baseChomage$chomagePremierTrimestre2017
df1=data.frame(dpt=x1,obs=y1)
df=merge(df1,df2)
plot(df$obs,df$extract)

On the graph below, we have the original values on the x-axis (unemployment, in percent) and the “average value of red” on the y-axis. Note that the points are almost perfectly correlated… The accumulation of points can be explained by the fact that, on the original map, different values could have the same color

So far, I can claim that we’ve been able to extract useful information from the original picture.

Consider the case now that the original map was the following one

The picture can be downloaded using the following code

url = "https://freakonometrics.hypotheses.org/files/2018/12/chomage5.png"
library(pixmap)
library(png)
IMG = readPNG(url)

Here, the colors are obtained from a standard palette,

library(pals)
couleurs = rev(brewer.rdylgn(30))

Here again, we use our previous code to extract reds and greens

And if we use our function

extract_info = function(k){
  pX=(PP$X[PP$PID==k]-ax)/bx*max(u)
  pY=(PP$Y[PP$PID==k]-ay)/by*max(v)
  E=expand.grid(u,v)
  M=matrix(point.in.polygon(E[,1],E[,2],pX,pY)>0,length(u),length(v))
  nom=FR2[FR2$OBJECTID==k,c("NAME_2","CCA_2")]
  return(c(as.numeric(nom$CCA_2),sum(ROUGE[M==1])/sum(M),sum(VERT[M==1])/sum(M)))
}
donnees = Vectorize(extract_info)(1:95)
x2=donnees[1,]
y2=donnees[2,]/(donnees[2,]+donnees[3,])
df2=data.frame(dpt=x2,extract=y2)
x1=as.numeric(as.character(baseChomage$no))
y1=baseChomage$chomagePremierTrimestre2017
df1=data.frame(dpt=x1,obs=y1)
df=merge(df1,df2)
plot(df$obs,df$extract)

we obtain the following graph

Here again, we have a strong correlation, not to say comonotonic variables (in the sense that ranks are identical). Nice, isn’t it ?

Extracting information from a picture, round 1

This week, I wanted to get the information I found on the nice map below. I could not get access to the original dataset, per zip code… and I was wondering if (assuming that the map was of high enough resolution) it was actually possible to extract the information, using a simple R function…

As we can see, there is red, and green on the map, and I would love to know which are the green and the red cities, in France. One important issue is actually the background. Here it’s nice, it’s white… but white is a strange color, achromatic and very light. More specifically, if I search for red areas, the background scores high on the red channel. And on the green one, too. So, to avoid those issues, I used GIMP to change the background into black. On the contrary, where it’s black, there is neither red, nor green!

Let us get the map, and extract information from the file

url="https://freakonometrics.hypotheses.org/files/2018/12/inondation3.png"
download.file(url,"inondation3.png")
image="inondation3.png"
library(pixmap)
library(png)
IMG=readPNG(image)

Information is stored in several matrices – or in arrays.  Dimension 1 is the height of the picture (in pixels), dimension 2 is the width, and the third one is either 1 (red), 2 (green) or 3 (blue), based on the rgb decomposition of each pixel. Then, I try to find the border of the map

nl=dim(IMG)[1]
nc=dim(IMG)[2]
MAT=(IMG[,,1]+IMG[,,2])/2
x=apply(MAT,2,max)
plot(x,type="l")

When it’s zero, it means there is no color in that column of the matrix, i.e. it is completely black (initially, I used the mean function, but the maximum really behaves like a step function)

y=apply(MAT,1,max)
plot(y,type="l")

Let us find cutoff values, on the left and on the right, on top and on the bottom

image(1:nc,1:nl,t(MAT))
abline(v=min(which(x>.2)),col="blue")
abline(v=max(which(x>.2)),col="blue")
abline(h=min(which(y>.2)),col="blue")
abline(h=max(which(y>.2)),col="blue")

We obtain the following (forget about the fact that – somehow – France is upside-down)

We can zoom in, just to make sure that our borders are fine

par(mfrow=c(1,2))
image(min(which(x>.2))+(-5):5,1:nl,t(MAT)[min(which(x>.2))+(-5):5,])
abline(v=min(which(x>.2))+(-5):5,col="white")
abline(v=min(which(x>.2)),col="blue")
x1=min(which(x>.2))-1

and similarly for the right border (the same procedure, applied to y, gives the vertical cutoffs y1 and y2)

image(max(which(x>.2))+(-5):5,1:nl,t(MAT)[max(which(x>.2))+(-5):5,])
abline(v=max(which(x>.2))+(-5):5,col="white")
abline(v=max(which(x>.2)),col="blue")
x2=max(which(x>.2))+1

So far so good. Let us keep the subpart of the picture,

image(x1:x2,y1:y2,t(MAT)[x1:x2,y1:y2])

Now, let us focus on the red part / component of that picture

ROUGE=t(IMG[,,1])[x1:x2,]
ROUGE=ROUGE[,y2:y1]
library(scales)
image(x1:x2,y1:y2,ROUGE,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))

That’s not bad, is it? And we can get a similar graph for the green part

VERT=t(IMG[,,2])[x1:x2,]
VERT=VERT[,y2:y1]
image(x1:x2,y1:y2,VERT,col=alpha(colour=rgb(0,1,0,1), alpha = seq(0,1,by=.01)))

Now, I wanted to adjust a map of France on that one. Using shapefiles of administrative regions, it would be possible to get the proportion of red and green parts (départements, cantons, etc.). As a starting point (before going to ‘départements’), let us use a standard shapefile for France

library(maptools)
library(PBSmapping)
url="http://biogeo.ucdavis.edu/data/gadm2.8/rds/FRA_adm0.rds"
download.file(url,"FRA_adm0.rds")
FR=readRDS("FRA_adm0.rds")
library(maptools)
PP = SpatialPolygons2PolySet(FR)
PP=PP[(PP$X<=8.25)&(PP$Y>=42.2),]
u=(x1:x2)-x1
v=(y1:y2)-y1
ax=min(PP$X)
bx=max(PP$X)-min(PP$X)
ay=min(PP$Y)
by=max(PP$Y)-min(PP$Y)
PP$X=(PP$X-ax)/bx*max(u)
PP$Y=(PP$Y-ay)/by*max(v)
image(u,v,ROUGE,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))
points(PP$X,PP$Y)

We try here to rescale it: the leftmost part of the contour should match the left border of the picture, and similarly for the right part. And the same holds for the top, and the bottom,

Unfortunately, even if we change the projection technique, I could not match perfectly the contour of France. I am quite sure that it’s a projection problem ! But I did try a dozen popular ones, with no success… so if anyone has a clever idea…