French dataset: population and GPS coordinates

A short post today, based on recent work by @3wen (Ewen Gallic, graduate student in Rennes, spending a year in Montreal). Since we were working on a detailed French dataset (per commune), we needed a dataset containing the list of all communes, with population and location. GPS coordinates were extracted from Google, using the following php file, inspired by the webpage on the Google geocoding API with php at http://www.andrew-kirkpatrick.com/. Population was interpolated from INSEE’s datasets, i.e. http://www.insee.fr/ (since the data cover a 35-year period, from 1975 to 2010, changes have been taken into account as carefully as possible, e.g. merges and splits of cities, based on that description). A spline model has been used for all cities (with three degrees of freedom, and null or negative interpolated values were set to one, since we will be using log-linear models afterwards). Names are from that dataset, still on INSEE’s website, http://www.insee.fr/.
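The interpolation step can be sketched as follows. This is only a minimal illustration of the approach described above (spline with three degrees of freedom, truncation of null or negative values at one), on hypothetical census counts for a single commune; it is not the exact code used to build the dataset.

library(splines)
# Hypothetical census counts for one commune (assumed dates and values)
census_years=c(1975,1982,1990,1999,2010)
census_pop  =c(1200,1150,1100,1080,1050)
# spline with 3 degrees of freedom, as described above
reg=lm(pop~bs(year,df=3),
       data=data.frame(year=census_years,pop=census_pop))
all_years=1975:2010
pop_hat=predict(reg,newdata=data.frame(year=all_years))
# null or negative interpolated values are set to one (log-linear models are used afterwards)
pop_hat[pop_hat<=0]=1
plot(all_years,pop_hat,type="l",xlab="year",ylab="interpolated population")
points(census_years,census_pop,pch=19,col="red")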

A zipped file can be downloaded here, popfr19752010.zip, but it is also possible to use the code below (it is a 24MB dataset). Since it was hard to find such a dataset online (different files can be found, but we found none with both population and location), we have decided to upload it. Please let us know if there are problems with those data…

> base=read.csv(
+ "http://freakonometrics.free.fr/popfr19752010.csv",
+ header=TRUE)

Using that code, it is possible to locate all the communes in France (metropolitan), for instance

> library(maps)
> map("france")
> points(base$long,base$lat,cex=.1,col="red",pch=19)
> points(base$long,base$lat,cex=2*base$pop_2010/
+ max(base$pop_2010),col="blue",pch=19)

Several additional lines of code on that dataset (and on others) will be uploaded soon.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license. To view a copy of this license, visit http://creativecommons.org/. Date: May 24, 2012, by Ewen GALLIC. Sources: INSEE, Google Maps API v3 and GeoHack (GPS coordinates), own computations (population estimates based on INSEE data).

  • reg: INSEE region code (character)
  • dep: INSEE department code (character; Corsica coded 201 and 202 instead of 2A and 2B)
  • com: INSEE commune code (character)
  • article: article of the commune name (character)
  • com_nom: name of the commune (character)
  • long: longitude (numeric)
  • lat: latitude (numeric)
  • pop_i: estimated population at date i (set to 1 if <=0), i=1975,…,2010 (numeric)

Who fails to see the presence of the devil in the law?

I am more and more puzzled by the blurring of roles in the media, by these so-called experts who explain things they must have discovered ten minutes before starting to type their article. This morning, some columnists felt entitled to speculate wildly about an online survey of 800 people, and I admit it annoyed me… I do not know whether I have any more legitimacy, but I told myself that Bill 78, adopted on Friday (in Quebec), could be an interesting pretext to write about a current topic (something I usually hesitate to do, since it is far from being my job). Why couldn’t I, too, play the smart guy and analyze a piece of legislation? So let us look at this law in detail, starting from the text available online at http://profscontrelahausse.org/ (or, to be completely honest, only the beginning of the law)

L=paste("Dans la presente loi, a moins",
"que le contexte nindique",
"un sens different, on entend par:",
"association d’etudiants}}:une",
"association ou un regroupement ",
"d’associations de niveau postsecondaire",

etc

"la qualite de l’enseignement, les ",
"services requis de façon a ",
"tenir compte des circonstances particulieres",
"resultant de l’interruption ",
"de la session d’hiver de l’annee 2012 ou ",
"de la session d’ete de l’annee 2012.")

L2=substring(toupper(L),1:nchar(L),1:nchar(L))
alphabet=c("A","B","C","D","E","F","G",
"H","I","J","K","L","M","N","O",
"P","Q","R","S","T","U","V","W","X","Y","Z")
L2=L2[L2%in%alphabet]
# nc (columns) and nl (rows) are not defined in the post: they set the shape of the
# grid of letters, e.g. nc=60 ; nl=ceiling(length(L2)/nc)
ML=t(matrix(L2,nc,nl))

The beginning of the text can be visualized below (the idea is to keep only the letters, to keep things simple, but one could go further and keep punctuation and digits; it would not change anything in the algorithms we are about to develop)
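The display relies on a small dessin() helper which is not reproduced in the post. The sketch below is my own reconstruction, consistent with how the function is called later (dessin(ML, CLR, plt=FALSE)): it draws the grid of letters, optionally highlighting some cells; with plt=FALSE it only adds to an existing plot.

# hypothetical reconstruction of the dessin() helper (not the original code):
# M is the matrix of letters, CLR a vector of background colours, one per cell,
# in column-major order (NA = no highlight); plt=FALSE adds to the current plot
dessin=function(M,CLR=rep(NA,length(M)),plt=TRUE){
nl=nrow(M); nc=ncol(M)
if(plt) plot(0:1,0:1,col="white",xlim=c(0,nc),ylim=c(-nl,0),
axes=FALSE,xlab="",ylab="")
for(j in 1:nc) for(i in 1:nl){
k=(j-1)*nl+i
if(!is.na(CLR[k])) polygon(c(j-1,j-1,j,j),-c(i-1,i,i,i-1),
col=CLR[k],border=NA)
text(j-.5,-i+.5,M[i,j],cex=.5)
}}
dessin(ML)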

Let us start with some trivialities: the text of this law is fundamentally unequal: the same ones always come out winners, and the same ones always come out losers,

> sort(table(L))[c(1,2,3,4,5,20,21,22,23)]
L
Y   J   X   H   F   N   A   S   E
7   8  10  24  26 355 386 402 820
Here we find again the inequalities observed in almost every text (except perhaps La Disparition, but that is another story): N, A, S and E at the top, and always the same small letters shamefully left behind…

Now, if we take the time to look at the content of the law in detail, we cannot help but notice the constant presence of the Evil One! Following an idea of Bahye ben Asher ibn Halawa (רבינו בחיי), we can try to use the Torah code. Julien Prévieux used this technique a while ago to find a whole terminology associated with stock market crashes in a randomly chosen page of Karl Marx’s Capital, or with tax havens in Adam Smith’s Wealth of Nations. Fans of Ramsey theory will claim it is a hoax, and they will probably be right. But after playing the other day with the search for graphical patterns in images (in that case red and white stripes, in order to spot Waldo), I wanted to play at searching for words in a matrix of letters.

Let us start by picking a word, at random, e.g.
MOT="DEMON"
The goal is to see whether this word appears, or not, in the matrix of letters, following a simple pattern (that is the idea of the Torah code). My strategy here is rather simple: start with the letter of the word that is the least frequent in the matrix. Actually, in DEMON, I will focus on the letters in the middle (excluding the first and the last one), i.e. E, M and O.
# TL is the table of letter frequencies in the grid; it is not defined in the
# post, presumably TL=table(L2)
lettrelaplusrare=function(mot=MOT){
lettres=substring(mot,2:(nchar(mot)-1),
2:(nchar(mot)-1));
fL=TL[lettres];
return(names(fL)[which.min(fL)])}
rare=lettrelaplusrare()
We thus get the letter that will serve as the basis of our algorithm. While I am at it, I can write a small function that colours a given letter in the matrix,
# colour all occurrences of a given letter in the grid (dessin() is the helper sketched above)
dessinlettre=function(ltr="A",clr="yellow"){
CLR=rep(NA,length(ML));
CLR[which(as.vector(ML)==ltr)]=clr;
dessin(ML,CLR,plt=FALSE)}
but for now it is of no use. In a second step, we look at the letters surrounding the rarest letter (that is why I excluded the letters at the ends of the word, to make sure there is always a letter before, and one after),
lettres=substring(MOT,1:nchar(MOT),
1:nchar(MOT))
avant=lettres[which(lettres==rare)-1]
apres=lettres[which(lettres==rare)+1]
avant=avant[1]
apres=apres[1]
dessinlettre(rare,"yellow")
dessinlettre(avant,"green")
dessinlettre(apres,"pink")
We thus have three important letters: a relatively rare one in the middle, in yellow, one before it, in green, and one after it, in pink. This is what can be seen in the picture below,
Then we get into the algorithm itself, a bit heavy for my taste: we go through all possible triplets of these three letters, identifying those for which the central letter lies exactly halfway between the two others. That is the principle of the Torah code. The first step is to recover the coordinates (in the matrix) of these three letters,
lrr=which(as.vector(ML)==rare)
lav=which(as.vector(ML)==avant)
lap=which(as.vector(ML)==apres) 
LIGrr=(lrr-1)%%nrow(ML)+1
COLrr=(lrr-1)%/%nrow(ML)+1
LIGav=(lav-1)%%nrow(ML)+1
COLav=(lav-1)%/%nrow(ML)+1
LIGap=(lap-1)%%nrow(ML)+1
COLap=(lap-1)%/%nrow(ML)+1
ptrr=cbind(LIGrr,COLrr)
ptav=cbind(LIGav,COLav)
ptap=cbind(LIGap,COLap)
Then we sweep through all the triplets (and this can take a while),
pointsalignes=function(x,y,z){
((y[1]+2*(x[1]-y[1]))==z[1])&((y[2]+
2*(x[2]-y[2]))==z[2])}
LEQUEL=rep(NA,3)
for(a in 1:length(LIGrr)){
for(b in 1:length(LIGav)){
for(c in 1:length(LIGap)){
if(pointsalignes(ptrr[a,],ptav[b,],ptap[c,])==
TRUE){LEQUEL=cbind(LEQUEL,c(a,b,c))}
}}}
Among the triplets, we only keep those for which the middle letter is exactly centered between the two others. Then comes the last tedious step: following the arithmetic progression, read off the words we obtain (once these three letters are placed, the recurrence is completely determined). We then keep the five-letter sequences matching the word DEMON,
clr="yellow"   # highlight colour for the cells of the word found (not set in the post excerpt)
V=LEQUEL
for(k in 2:ncol(V)){
CLR=rep(NA,length(ML));
i1=ptrr[V[1,k],];
CLR[(i1[2]-1)*nrow(ML)+i1[1]]=clr;
lettres=substring(MOT,1:nchar(MOT),
1:nchar(MOT))
milieu=which(lettres==rare)
obtenu=ML[i1[1],i1[2]]
for(u in 1:(milieu-1)){
i2=ptav[V[2,k],]+(u-1)*(ptav[V[2,k],]-ptrr[V[1,k],]);
indice=(i2[2]-1)*nrow(ML)+i2[1]
if((i2[2]>=1)&(i2[2]<=ncol(ML))&(i2[1]>=1)&
(i2[1]<=nrow(ML))){CLR[indice]=clr;
obtenu=paste(ML[i2[1],i2[2]],obtenu,sep="")}}
for(u in (milieu+1):nchar(MOT)){
i3=ptap[V[3,k],]+(u-milieu-1)*(ptap[V[3,k],]-
ptrr[V[1,k],]);
indice=(i3[2]-1)*nrow(ML)+i3[1]
if((i3[2]>=1)&(i3[2]<=ncol(ML))&(i3[1]>=1)&
(i3[1]<=nrow(ML))){CLR[indice]=clr;
obtenu=paste(obtenu,ML[i3[1],i3[2]],sep="")}}
if(obtenu==MOT)
{print(paste(k,"  ",obtenu))}}
All that remains is to run this loop. And sure enough, DEMON does appear in the text,
We can go and check, for those who might doubt it,

A coincidence? Maybe… so let us try another word, such as MALIN (the Evil One), which also appears,

or SATAN… the latter appears eight times in the beginning of the text!

Surprising, isn’t it… In any case, I had a lot of fun coding this little algorithm, and seeing that it worked so well! Probability enthusiasts can start doing some computations, or have a look at a few papers discussing this technique, such as McKay, Bar-Natan, Bar-Hillel & Kalai (2001), and all the papers mentioned in their references… As for the judge who will have to rule on my conviction for blasphemy and heresy, it goes without saying that this post is a joke. The devil does not exist! We are no longer in the Middle Ages, after all…

Births and week-ends, in France

This week, I have seen on the internet (sorry, I cannot find the proper reference) the graph reproduced on the right: which birthday is the most likely? The fact that I have no further information is important, since I do not know in which country such a graph was obtained. At least, I know it should not be France…

In France, I have already mentioned that there is a strong week-end effect: nowadays, there are 25% fewer deliveries during week-ends than during the week. Calot (1981) had already observed that there were fewer deliveries on Sundays. This has been confirmed more recently, e.g. in http://www.lepoint.fr/ or http://www.prepabl.fr/, with a significant difference between week days and week-ends. Here is the number of births per day, over 40 years, with the average trend during the week in blue, and during week-ends in red,

naissance=read.table(
"http://freakonometrics.free.fr/naissanceFR2.txt")
attach(naissance)
date=as.Date(date)
plot(date, nbre,cex=.5)
t2=as.POSIXlt(date)
jour=t2$wday
X=naissance$date
Y=naissance$nbre
J=jour
df=data.frame(X,Y,J)
library(splines)
regs=lm(Y~bs(X,df=20),data=df[jour%in%c(0,6),])
Yp=predict(regs,newdata=df)
lines(X,Yp,col="red",lwd=3)
regs=lm(Y~bs(X,df=20),data=df[jour%in%1:5,])
Yp=predict(regs,newdata=df)
lines(X,Yp,col="blue",lwd=3)

If we look at the evolution of the ratio of week-ends over week days, we obtain the following graph,

t2=as.POSIXlt(date)
jour=t2$wday
jour=jour[1:(1982*7)]
nbre2=jour
for(i in 1:1982){
taux=sum(nbre[6:7+7*(i-1)])/
sum(nbre[1:5+7*(i-1)])/2*5
nbre2[1:5+7*(i-1)]=nbre[1:5+7*(i-1)]*taux
nbre2[6:7+7*(i-1)]=nbre[6:7+7*(i-1)]
nbre2[1:7+7*(i-1)]=
mean(nbre[1:7+7*(i-1)])/mean(nbre2[1:7+7*(i-1)])*
nbre2[1:7+7*(i-1)]
}
nbretaux=jour
for(i in 1:1982){
taux=sum(nbre[6:7+7*(i-1)])/
sum(nbre[1:5+7*(i-1)])/2*5
nbretaux[1:7+7*(i-1)]=taux
}
plot(date[1:length(nbre2)],nbretaux)
X= date[1:length(nbre2)]
Y=nbretaux
library(splines)
reg=lm(Y~bs(X,df=20))
Yp=predict(reg)
lines(X,Yp,col="red",lwd=3)

At the beginning of the 70s, there were 5% fewer deliveries during week-ends, but around 25% fewer by 2000. It is then possible to produce the same kind of graph as the one above, per year of birth. And here, we clearly observe the importance of the week-end effect (maybe also because of the choice of colors)

naissance=read.csv(
"http://freakonometrics.free.fr/naissanceFR.csv",
sep=";")
M=as.matrix(naissance[,3:ncol(naissance)])
BIRTH=as.vector(t(M))
YEAR=rep(1968:2005,each=12*31)
MONTH=rep(rep(1:12,each=31),38)
DAY=rep(1:31,12*38)
base=data.frame(YEAR,MONTH,DAY,BIRTH)
# yearly birthday probabilities (the data frame must exist before this loop)
X=NA
for(y in 1968:2005){
sbase=base[YEAR==y,]
X=c(X,sbase$BIRTH/sum(sbase$BIRTH,
na.rm=TRUE))
}
base$BIRTHDAYPROB=X[-1]

m1=min(base$BIRTHDAYPROB,na.rm=TRUE)
m2=max(base$BIRTHDAYPROB,na.rm=TRUE)
y=1980
colr=rev(heat.colors(100))
sbase=base[YEAR==y,]
plot(0:1,0:1,col="white",xlim=c(-1,12),
ylim=c(-31,1),axes=FALSE,xlab=
paste("Naissance en",y,sep=" "),ylab="")
for(x in 1:nrow(sbase)){
a=sbase$MONTH[x];b=sbase$DAY[x]
polygon(c(a-.9,a-.9,a-.1,a-.1),-c(b-.9,b-.1,
b-.1,b-.9),col=colr[(sbase$BIRTHDAYPROB[x]-m1)/
(m2-m1)*100],border=NA)
}
text((1:12)-.5,.5,c("J","F","M","A","M","J","J",
"A","S","O","N","D"),cex=.7)
text(-.5,-(1:31)+.5,1:31,cex=.7)

Non transitivity of correlation for random vectors in dimension 3

Dependence in dimension 2 is difficult. But one has to admit that dimension 2 is way simpler than dimension 3! I recently rediscovered a nice paper, Langford, Schwertman & Owens (2001), on the transitivity of the property of being positively correlated (which inspired the odd title of this post). And more recently, Castro Sotos, Vanhoof, Van Den Noortgate & Onghena (2001) conducted a study which confirmed that there are strong misconceptions about correlation (and I guess not only because probabilistic reasoning is extremely weak, as mentioned in Stock & Gross (1989)) and association (as already stated in Estepa & Batanero (1996), or Batanero, Estepa, Godino and Green (1996)). My understanding is that it is possible to get almost anything… even counterintuitive results. For instance, if we want to mix independence and comonotonicity (i.e. perfect positive dependence), all the theorems you might think of are probably incorrect. Consider the following result (based on some old examples I have been using in my courses 5 or 6 years ago, see e.g. here)

“If X and Y are comonotonic, and if Y and Z are comonotonic, then X and Z are comonotonic”

Well, this result seems to be intuitive, and probably valid. But it is not. Consider the following triplet,

Projections on bivariate planes of the three dimensional vector are

Here, X and Y are comonotonic, so are Y and Z, but X and Z are independent… Weird, isn’t it? Another one?
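The joint distributions mentioned here are only displayed as figures; as a side note, one simple construction with exactly these properties (my own example, not necessarily the distribution shown in the figures) takes X and Z independent Bernoulli(1/2) and Y = X + Z. A quick check on the support:

# my own illustration (not necessarily the distribution shown in the figures):
# X and Z are independent Bernoulli(1/2), and Y = X + Z
support=expand.grid(X=0:1,Z=0:1)   # four equally likely points
support$Y=support$X+support$Z
# a pair is comonotonic when no two support points move in opposite directions
comonotone=function(u,v) all(outer(u,u,"-")*outer(v,v,"-")>=0)
comonotone(support$X,support$Y)    # TRUE : X and Y are comonotonic
comonotone(support$Y,support$Z)    # TRUE : Y and Z are comonotonic
comonotone(support$X,support$Z)    # FALSE: X and Z are not (they are independent)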

If X and Y are comonotonic, and if Y and Z are independent, then X and Z are independent

Again, even if it is intuitive, it is not correct… Consider for instance the following 3 dimensional distribution,

Here, X and Y are comonotonic, while Y and Z are independent, but X and Z are here countercomonotonic (perfect negative dependence). It is also possible to consider the following distribution,

that can be visualized below,

In that case, X and Y are comonotonic, while Y and Z are independent, but X and Z are here comonotonic (perfect positive dependence). So obviously, we should be able to construct a counterexample to any result we might think of as intuitive.

To be honest, the problem with intuition is that it usually comes from the Gaussian case, and from the perception that dependence is related to correlation, namely Pearson’s linear correlation. Consider the case of a 3-dimensional random vector, with correlation matrix

$$R=\begin{pmatrix}1 & r_{XY} & r_{XZ}\\ r_{XY} & 1 & r_{YZ}\\ r_{XZ} & r_{YZ} & 1\end{pmatrix}$$

Given two pairs of correlations, $r_{XY}$ and $r_{XZ}$, what could we say about $r_{YZ}$? For instance, the intuition is that if $r_{XY}$ and $r_{XZ}$ are positive, then $r_{YZ}$ is likely to be positive too (perhaps). The only property (at least the most important) we have on that correlation matrix is that it should be positive-semidefinite. So if we play on eigenvalues, it should be possible to derive inequalities satisfied by $r_{YZ}$. Langford, Schwertman & Owens (2001) claim (in Theorem 3) that correlations have to satisfy some property, like

$$1+2\,r_{XY}\,r_{XZ}\,r_{YZ}-r_{XY}^2-r_{XZ}^2-r_{YZ}^2\geq 0$$

which is simply the fact that the determinant of the correlation matrix has to be positive; that property was already mentioned in Kendall (1948), as an exercise.

But is that a sufficient and necessary condition? Since I am extremely lazy, let us run some numerical computations to visualize the possible values for $r_{YZ}$, as a function of $r_{XY}$ and $r_{XZ}$. Consider the following code

U=seq(-1,1,by=.1)
V=seq(-1,1,by=.001)
FSUP=function(a,b){
DF=function(c){min(eigen(matrix
(c(1,a,b,a,1,c,b,c,1),3,3))$values)};
V[max(which(Vectorize(DF)(V)>0))]}
FINF=function(a,b){
DF=function(c){min(eigen(matrix(
c(1,a,b,a,1,c,b,c,1),3,3))$values)};
V[min(which(Vectorize(DF)(V)>0))]}
MSUP=outer(U,U,Vectorize(FSUP))
MINF=outer(U,U,Vectorize(FINF))
library(RColorBrewer)
clr=rev(brewer.pal(6, "RdBu"))
U=U[2:20]
MSUP=MSUP[2:20,2:20]
MINF=MINF[2:20,2:20]
persp(U,U,MSUP,col="green",shade=TRUE)
image(U,U,MSUP,breaks=((-3):3)/3,col=clr)
persp(U,U,MINF,col="green",shade=TRUE)
image(U,U,MINF,breaks=((-3):3)/3,col=clr)

Here, we can derive the lower and the upper bound for $r_{YZ}$, as functions of $r_{XY}$ and $r_{XZ}$.

In the dark blue area, the bound for the correlation can be really low, while in the dark red area, the bound is very high (either the lower bound on the left, or the upper bound on the right). Since it might be hard to read, it is possible to fix, for instance, $r_{XZ}$, and to derive bounds for $r_{YZ}$, as a function of $r_{XY}$.
V=seq(-1,1,by=.001)
U=seq(-1,1,by=.1)
U=U[2:(length(U)-1)]
V=V[2:(length(V)-1)]
U=c(-.9999,U,.9999)
V=c(-.99999,V,.99999)
FSUP=function(a){
DF=function(c){min(eigen(matrix(
c(1,a,-.7,a,1,c,-.7,c,1),3,3))$values)};
V[max(which(Vectorize(DF)(V)>0))]}
FINF=function(a){
DF=function(c){min(eigen(matrix(
c(1,a,-.7,a,1,c,-.7,c,1),3,3))$values)};
V[min(which(Vectorize(DF)(V)>0))]}

VS=Vectorize(FSUP)(U)
VI=Vectorize(FINF)(U)
plot(c(U,U),c(VS,VI),col="white")
polygon(c(U,rev(U)),c(VS,rev(VI)),
col="yellow",border=NA)
lines(U,VS,lwd=2,col="red")
lines(U,VI,lwd=2,col="red")

On the graphs below, we have the bounds for a negative correlation $r_{XZ}$ (on the left, with -0.7) and for a positive correlation $r_{XZ}$ (on the right, here +0.7),

We do observe here extremely nice ellipses… Consider the case of a null correlation $r_{XZ}$: then the region of possible values for $r_{XY}$ and $r_{YZ}$ is the unit disk. The interpretation is that if $r_{XZ}$ is null, and so is $r_{XY}$, then $r_{YZ}$ can take any value between -1 and 1 (under the assumption that the marginal distributions allow such values, e.g. Gaussian margins). On the other hand, if $r_{XY}$ is either -1 or +1 (perfect negative/positive correlation), then $r_{YZ}$ has to be null…
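This is consistent with the determinant condition above: with $r_{XZ}=0$, the determinant of the correlation matrix reduces to
$$\det\begin{pmatrix}1 & r_{XY} & 0\\ r_{XY} & 1 & r_{YZ}\\ 0 & r_{YZ} & 1\end{pmatrix}=1-r_{XY}^2-r_{YZ}^2\geq 0,$$
i.e. $r_{XY}^2+r_{YZ}^2\leq 1$, which is precisely the unit disk observed on the graph.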

Finding Waldo, a flag on the moon and multiple choice tests, with R

I have to admit, first, that finding Waldo has been a difficult task. And I did not succeed. Neither could I correctly spot his shirt (because actually, that was what I was looking for), you know, that red-and-white striped shirt. I guess it should have been possible to look for Waldo’s face (assuming that his face does not change), but I still have problems with the size factor (and resolution issues too). The problem is not that simple. At the http://mlsp2009.conwiz.dk/ conference, a prize was offered for writing such an algorithm in Matlab. And one can even find Mathematica codes online. But most of those algorithms are based on the idea that we look for similarities with Waldo’s face, as described in problem 3 on http://www1.cs.columbia.edu/~blake/‘s webpage. You can find papers on that problem, e.g. Friendly & Kwan (2009) (based on statistical techniques, but Waldo is actually a pretext there to discuss other issues), or more recently (but more complex) Garg et al. (2011) on matching people in images of crowds.

What about codes in R? On http://stackoverflow.com/, some ideas can be found (and thanks to Robert Hijmans for his help with his package). So let us try to do something here, on our own. Consider the following picture,

With the following code (based on the following file) it is possible to import the picture, and to extract the colors (based on an RGB decomposition),

> library(raster)
> waldo=brick(system.file("DepartmentStoreW.grd",
+ package="raster"))
> waldo
class       : RasterBrick
dimensions  : 768, 1024, 786432, 3 (nrow,ncol,ncell,nlayer)
resolution  : 1, 1  (x, y)
extent      : 0, 1024, 0, 768  (xmin, xmax, ymin, ymax)
coord. ref. : NA
values      : C:\R\win-library\raster\DepartmentStoreW.grd
min values  : 0 0 0
max values  : 255 255 255

My strategy is simple: try to spot areas with white and red stripes (horizontal stripes). Note that here, I ran the code on a Windows machine; the package was not working well on Mac. In order to get a better understanding of what could be done, let us start with something much simpler, like the picture below, with Waldo (and Waldo only). Here, it is possible to extract the three colors (red, green and blue),

> plot(waldo,useRaster=FALSE)

It is possible to extract the red zones (already visible on the graph above, since red is a primary color), as well as the white ones (a green zone on the graphs means a white region on the picture, on the left)

# white component
white = min(waldo[[1]] , waldo[[2]] , waldo[[3]])>220
focalswhite = focal(white, w=3, fun=mean)
plot(focalswhite,useRaster=FALSE)

# red component
red = (waldo[[1]]>150)&(max(  waldo[[2]] , waldo[[3]])<90)
focalsred = focal(red, w=3, fun=mean)
plot(focalsred,useRaster=FALSE)

i.e. here we have the graphs below, with the white regions, and the red ones,

From those two parts, it has been possible to extract the red-and-white stripes from the picture, i.e. some regions that were red above, and white below (or the reverse),

# striped component
striped = red; n=length(values(striped)); h=5
values(striped)=0
values(striped)[(h+1):(n-h)]=(values(red)[1:(n-2*h)]==
TRUE)&(values(red)[(2*h+1):n]==TRUE)
focalsstriped = focal(striped, w=3, fun=mean)
plot(focalsstriped,useRaster=FALSE)

So here, we can easily spot Waldo, i.e. the guy with the red-white stripes (with two different sets of thresholds for the RGB decomposition)

Let us try something slightly more complicated, with a zoom on the large picture of the department store (since, to be honest, I know where Waldo is…).

Here again, we can spot the white part (on the left) and the red one (on the right), with some thresholds for the RGB decomposition

Note that we can try to be (much) more selective, playing with the thresholds. Here, it is not very convincing: I cannot clearly identify the region where Waldo might be (the two graphs below were obtained by playing with the thresholds)

And if we look at the overall picture, it is worse. Here are the white zones, and the red ones,

and again, playing with RGB thresholds, I cannot spot Waldo,

Maybe I was a bit too optimistic, or too ambitious. Let us try something simpler, like finding a flag on the moon. Consider the picture below on the left, and let us see if we can spot an American flag,

Again, on the left, let us identify white areas, and on the right, red ones

Then as before, let us look for horizontal stripes

Wow, I did it! That’s one small step for man, one giant leap for R-coders! Or at least for me… So, why might it be interesting to identify areas on pictures? I mean, I am not Chloe O’Brian, I don’t have to spot flags in a crowd, nor Waldo, nor some terrorists (who might wear striped shirts). This might be fun if you want to grade your exams automatically. Consider the two following scans, the template, and a filled copy,

A first step is to identify regions where we expect to find some “red” part (I assume here that students have to use a red pencil). Let us start to check on the template and the filled form if we can identify red areas,

exam = stack("C:\\Users\\exam-blank.png")
red = (exam[[1]]>150)&(max(  exam[[2]] , exam[[3]])<150)
focalsred = focal(red, w=3, fun=mean)
plot(focalsred,useRaster=FALSE) 
exam = stack("C:\\Users\\exam-filled.png")
red = (exam[[1]]>150)&(max(  exam[[2]] , exam[[3]])<150)
focalsred = focal(red, w=3, fun=mean)
plot(focalsred,useRaster=FALSE)

First, we have to identify the areas where students have to fill in the blanks. So, on the template, identify the black boxes, and get their coordinates (here, manually)

exam = stack("C:\\Users\\exam-blank.png")
black = max(  exam[[1]] ,exam[[2]] , exam[[3]])<50
focalsblack = focal(black, w=3, fun=mean)
plot(focalsblack,useRaster=FALSE)
coordinates=locator(20) # click on reference boxes to read off coordinates (the values below were entered manually)
X1=c(73,115,156,199,239)
X2=c(386,428.9,471,510,554)
Y=c(601,536,470,405,341,276,210,145,79,15)
LISTX=c(rep(X1,each=10),rep(X2,each=10))
LISTY=rep(Y,10)
points(LISTX,LISTY,pch=16,col="blue")

The blue points above are where we look for students’ answers. Then, we have to define the vector of correct answers,

CORRECTX=c(X1[c(2,4,1,3,1,1,4,5,2,2)],
X2[c(2,3,4,2,1,1,1,2,5,5)])
CORRECTY=c(Y,Y)
points(CORRECTX, CORRECTY,pch=16,col="red",cex=1.3)
UNCORRECTX=c(X1[rep(1:5,10)[-(c(2,4,1,3,1,1,4,5,2,2)
+seq(0,length=10,by=5))]],
X2[rep(1:5,10)[-(c(2,3,4,2,1,1,1,2,5,5)
+seq(0,length=10,by=5))]])
UNCORRECTY=c(rep(Y,each=4),rep(Y,each=4))

Now, let us get back to the red areas in the form filled in by the student, identified earlier,

exam = stack("C:\\Users\\exam-filled.png")
red = (exam[[1]]>150)&(max(  exam[[2]] , exam[[3]])<150)
focalsred = focal(red, w=5, fun=mean)

Here, we simply have to compare what the student answered with the areas where we expect to find some red,

ind=which(values(focalsred)>.3)
yind=750-trunc(ind/610)
xind=ind-trunc(ind/610)*610
points(xind,yind,pch=19,cex=.4,col="blue")
points(CORRECTX, CORRECTY,pch=1,
col="red",cex=1.5,lwd=1.5)

Crosses on the graph on the right below are the answers identified as correct (here 13),

> icorrect=values(red)[(750-CORRECTY)*
+ 610+(CORRECTX)]
> points(CORRECTX[icorrect], CORRECTY[icorrect],
+ pch=4,col="black",cex=1.5,lwd=1.5)
> sum(icorrect)
[1] 13

In case there are negative points for incorrect answers, we can also count how many incorrect answers there were. Here, 4.

> iuncorrect=values(red)[(750-UNCORRECTY)*610+
+ (UNCORRECTX)]
> sum(iuncorrect)
[1] 4

So I have not been able to find Waldo, but at least, this will probably save me hours next time I have to grade exams…

Lecture notes on time series

Since the winter term is not over yet, I am posting my lecture notes for the last section (on time series modeling) of the ACT2040 course. As I said in class, these are an updated version of notes typed about ten years ago. I have also added some R code, but a number of typos and mistakes probably remain. I will use the coming days to revise this version.

Basketball: score dynamics and game theory

Tomorrow morning, I will be giving a talk at Mont Tremblant, for the Journées de la Société Canadienne de Sciences Economiques. I will present a joint work – in progress – with Nathalie Colombier and Romuald Elie. Since the working paper is not online yet, I will wait a little bit before uploading the slides. But they will be online, someday (hopefully soon)…

An important aspect of the strategy of most organizations is the provision of incentives to employees to meet the organization’s objectives. Typically this implies tying pay to performance (see Prendergast, 1999). In order to reward employees for their effort, firms spend considerable resources on performance evaluations. In many cases, evaluation consists of comparing actual performance to a pre-defined individual target. Another frequently used format is relative performance evaluation. Relative performance evaluation may motivate employees to work harder. But it may also be demoralizing and create an excessively competitive workplace, which may hinder overall performance; see Lazear (1989). Determining the overall impact of relative performance evaluation is crucial for companies. Economic research on relative performance evaluation has mainly focused on the comparison of final performances between competitors, as in tournament theory, and on quantitative and subjective performance ratings (Lazear and Gibbs, 2009). In contrast, what happens during a competition and the impact of feedback frequency on effort have so far received little attention. Following Berger and Pope (2011), we decided to use a basketball application to get a better understanding of the role of feedback information. Sports datasets allow one to observe scores and team behavior continuously (during a game but also during the season), which can be used as a proxy for effort. Berger and Pope (2010) asked "can losing lead to winning?", looking at the impact of the halftime score difference on the winning probability in NCAA (college) and NBA (pro) games. More precisely, they studied whether a team losing at halftime is more likely to win than expected, using a logit model. They find that, usually, the larger the score difference, the more likely a team is to win. But if the halftime score difference is around 0, they observe a discontinuity: losing by a small margin (e.g. down by 1 point) can lead to increased effort and to winning the game. In this paper we try to answer the question "when does losing lead to winning?".
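As a purely illustrative sketch of the kind of logit regression mentioned above (simulated data and arbitrary coefficients of my own, not the model or the data of Berger and Pope, nor of our paper): the win probability is modeled as a logistic function of the halftime score difference, with a dummy allowing a jump below zero,

# sketch only: simulated games, win probability logistic in the halftime score
# difference 'half_diff' (positive = leading at halftime), with a hypothetical
# jump when the team is behind at halftime
set.seed(1)
n=5000
half_diff=round(rnorm(n,0,8))
p=plogis(.15*half_diff+.2*(half_diff<0))
win=rbinom(n,1,p)
games=data.frame(win,half_diff)
reg=glm(win~half_diff+I(half_diff<0),data=games,family=binomial)
summary(reg)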

Bayes is playing Russian roulette

There was (once again) a nice puzzle on http://www.futilitycloset.com/. Bayes and a good friend are playing Russian roulette. The revolver has six chambers. He puts two bullets in two adjacent chambers, spins the cylinder, holds the gun to his friend’s head, and pulls the trigger. It clicks. So it is now Bayes’s turn: he can choose either to spin the cylinder again or to leave it as it is. Which is better? Hopefully, Bayes knows his theorem: if he does spin it, the probability of getting killed is 2 out of 6 (two loaded chambers out of six), but if he does not, then, since his friend is still alive, the hammer must be next to one of the four chambers shown in red below.


So here, there are 3 chances out of 4 to survive, i.e. the probability of getting killed is 1 out of 4 (while it was 1 out of 3 when spinning). So Bayes should not spin. And as always, it is possible to state a more general result: in a revolver with $n$ chambers, if there are $k$ bullets in $k$ adjacent chambers and the first player survives, the probability of getting killed is $k$ over $n$ when spinning, while it is 1 over $n-k$ if we don’t. Not spinning is better if and only if

$$\frac{1}{n-k}\leq\frac{k}{n}$$

i.e.

$$n\leq k(n-k)$$

So you’d better not spin, unless there was only one bullet in the revolver, i.e. $k=1$… or $k=n$ (in that case, it might actually not be a good idea to play the game at all).
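A quick Monte Carlo check of the 1/3 versus 1/4 computation (a minimal simulation sketch; chambers are numbered 1 to 6, with the two bullets placed in chambers 1 and 2):

# six chambers, bullets in the two adjacent chambers 1 and 2; condition on the
# first (random) trigger pull being a click, then compare the two strategies
set.seed(123)
n=1e6
first=sample(1:6,n,replace=TRUE)   # chamber fired at the first pull
alive=first%in%3:6                 # the friend survives the first pull
spin=sample(1:6,n,replace=TRUE)    # strategy 1: spin again, uniform chamber
mean((spin%in%1:2)[alive])         # close to 1/3
nextchamber=first%%6+1             # strategy 2: do not spin, next chamber fires
mean((nextchamber%in%1:2)[alive])  # close to 1/4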

Correlations, dimension, and risk measure

Yesterday, while I was attending the IFM2 conference at HEC Montréal, I heard a nice talk about credit risk, and a comparison between contagion (or at least default correlation) for corporate and for retail companies (in the US). It was mentioned that default correlation was much lower for retail companies than for corporate ones. In the discussion that followed those slides, it was mentioned that banks in the US should actually have been working more with those small firms, since contagion risk was much lower.

A problem here is that the link between correlation, risk and dimension is rather complicated:

  • corporate means a small number of firms, high correlation (and possible large individual losses)
  • retail means a large number of firms (even perhaps extremely large), lower correlation (and small individual losses)

A simple model for defaults is based on the assumption that we deal with an exchangeable portfolio (as in a previous post). With the following code, given an (individual) default probability, a default correlation, and a number of firms, it is possible to compute the probability of having more than a given number of defaults.
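To be explicit about what the code below computes: conditionally on a latent factor $\Theta\sim\mathrm{Beta}(a,b)$, defaults are independent Bernoulli($\Theta$) variables, so the number of defaults $S$ among $n$ firms satisfies
$$\mathbb{P}(S=s)=\binom{n}{s}\int_0^1 t^s(1-t)^{n-s}\,\frac{t^{a-1}(1-t)^{b-1}}{B(a,b)}\,dt.$$
The parameters are chosen so that the individual default probability is $m=a/(a+b)$ and the default correlation is $r=1/(a+b+1)$, which gives $a=m(1-r)/r$ and $b=a(1-m)/m$, as in the functions below.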

 proba=function(s,a,m,n){
 b=a/m-a
 choose(n,s)*integrate(function(t){t^s*(1-t)^(n-s)*
 dbeta(t,a,b)},lower=0,upper=1,subdivisions=1000,
 stop.on.error =  FALSE)$value}

CDF=function(x=10,r=.4,m=.1,n=50){
a=m*(1-r)/r ;
V=rep(NA,n+1)
 for(i in 0:n){
 V[i+1]=proba(i,a,m,n)}
 V=V/sum(V);
 return(sum(V[1:(x+1)])) }

It is possible to compute, for a large range of correlations, the probability of having at least 20% of defaults in the portfolio (in order to compare things that are comparable).

A=seq(.01,.99,by=.01)
VQ=matrix(NA,length(A),2)
for(i in 1:length(A)){
VQ[i,1]=1-CDF(r=A[i],x=4,n=20);  
VQ[i,2]=1-CDF(r=A[i],x=200,n=1000)}

With 20 firms (corporate) we want at least 4 defaults, while with 1,000 firms (retail) there should be at least 200 defaults. As mentioned in the previous post, the relationship between correlation and quantiles of sums is not simple; in particular, it need not be monotone. The dotted line is the probability of having at least 4 defaults when the default correlation is 50% (around 10%). The plain line is the probability of having at least 200 defaults, as a function of the correlation,

plot(A,1-VQ[,2],type="l",col="red",ylim=c(0,.22))
abline(h=1-VQ[50,1],lty=2,col="red")

In that case, with a correlation of only 10% among retail firms, the probability of having 20% of defaults is the same as the corresponding probability for corporate firms with 50% correlation… One should remember that in portfolio analysis, the link between correlation, dimension and risk measures is a sensitive issue…

Talk on bivariate count times series in finance and risk management

I will be giving a talk on May 4th, at the Mathematical Finance Days, at HEC Montréal, on multivariate dynamic models for counts. The conference is organized by the IFM2 (Institut de Finance Mathématique de Montréal). I will be chairing a session, and I will give a talk based on the joint paper with Mathieu Boudreault.

The slides can be downloaded from the blog,

In various situations in the insurance industry, in finance, in epidemiology, etc., one needs to represent the joint evolution of the number of occurrences of an event. In this paper, we present a multivariate integer-valued autoregressive (MINAR) model, derive its properties and apply the model to earthquake occurrences across various pairs of tectonic plates. The model is an extension of Pedeli & Karlis (2011) where cross autocorrelation (spatial contagion in a seismic context) is considered. We fit various bivariate count models and find that for many contiguous tectonic plates, spatial contagion is significant in both directions. Furthermore, ignoring cross autocorrelation can underestimate the potential for high numbers of occurrences over the short term. An application to risk management and cat-bond pricing will be discussed.
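For readers not familiar with this class of models, here is a minimal simulation sketch of a bivariate INAR(1) process with binomial thinning (my own illustration, with arbitrary parameter values; it is neither the code nor the specification used in the paper):

# bivariate INAR(1): N_t = A o N_{t-1} + e_t, where "o" is (matrix) binomial
# thinning and e_t are independent Poisson innovations; the off-diagonal
# entries of A play the role of cross autocorrelation (contagion)
set.seed(1)
A=matrix(c(.4,.1,.05,.3),2,2)   # thinning probabilities (arbitrary values)
lambda=c(2,1)                   # innovation means (arbitrary values)
n=200
N=matrix(0,n,2)
for(t in 2:n){
thinned=c(rbinom(1,N[t-1,1],A[1,1])+rbinom(1,N[t-1,2],A[1,2]),
rbinom(1,N[t-1,1],A[2,1])+rbinom(1,N[t-1,2],A[2,2]))
N[t,]=thinned+rpois(2,lambda)
}
matplot(N,type="l",lty=1,col=c("blue","red"),xlab="time",ylab="counts")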

http://freakonometrics.free.fr/ringfire.gif