Tag Archives: R-english

When will my papers appear as references (if they do…) ?

Following my post on citations in academic journals, I wanted to go one step further in understanding the dynamics of citations. So here, the dataset looks like this: for each article, we have the name of the journal, the year of publication (and also the title of the article and the authors, but we do not use them here), and, more interestingly, the number of citations received in journals (any kind of academic journal) published in 1996, 1997, …, 2011. Of course, articles published in 1999 can only start being cited in 1999.

base[1000:1002,]
     Publication.Year
7188             1999
7191             1999
7195             1999
     Document.Title
7188 Sequential inspection 
7191 On equitable resource approach
7195 Method for strategic  
                                        Authors     ISSN       Journal.Title
7188                         Yao D.D., Zheng S. 0030364X Operations Research
7191                                    Luss H. 0030364X Operations Research
7195 Seshadri S., Khanna A., Harche F., Wyle R. 0030364X Operations Research
     Volume Issue X139 DEV1996 DEV1997 DEV1998 DEV1999 DEV2000 DEV2001 DEV2002
7188     47     3    0       0       0       0       0       1       0       2
7191     47     3    0       0       0       0       0       0       2       0
7195     47     3    0       0       0       0       0       0       0       0
     DEV2003 DEV2004 DEV2005 DEV2006 DEV2007 DEV2008 DEV2009 DEV2010 DEV2011
7188       0       0       0       1       0       0       0       0       0
7191       3       4       1       4       4       8       4       6       1
7195       0       1       2       2       1       0       1       0       0
     X130655 X0 X130794
7188       4  0       4
7191      37  0      37
7195       7  0       7

The first step is to aggregate the data, not to look at each article, but at all the papers published in 1999 (say). And then, we look at the number of citations obtained the year of publication, the year after, two years after, etc. The data will appear as a triangle since, if we look at articles published in 2010, there is only one possible year of citation (2010, since I removed 2011).

VOL=rev(unique(base$Volume))
VOL=VOL[!is.na(VOL)]
TRIANGLE=matrix(NA,16,16)
k=0 # row index of the triangle (one row per volume, i.e. per year of publication)
for(v in VOL){
k=k+1
sb=base[base$Volume==v,9:24] # the DEV1996,...,DEV2011 columns
sb=sb[!is.na(sb[,1]),]
TRIANGLE[k,1:(17-k)]=apply(sb,2,sum)[k:16]}

Then, a standard idea (at least in the insurance business, for claims payment development) is to consider that the data are Poisson distributed, and that the number of citations should depend on the year of publication of the article (a row effect) and on the development (how many years after publication we are looking at, i.e. a column effect). More formally, let $Y_{i,j}$ denote the number of citations, obtained during year $i+j$ (i.e. after $j$ years), of articles published in year $i$. And we assume that

$$Y_{i,j}\sim\mathcal{P}(\lambda_{i,j}),\qquad\text{where}\quad \log\lambda_{i,j}=\alpha_i+\beta_j$$

TRIANGLE=TRIANGLE[-16,]
TRIANGLE=TRIANGLE[,-16]
Y=as.vector(TRIANGLE)
YEAR=rep(1996:2010,15)
DEV =rep(1:15,each=15)
baseT=data.frame(Y,YEAR,DEV)
reg=glm(Y~as.factor(YEAR)+as.factor(DEV),
data=baseT,family=poisson)

Since those are incremental values, in order to look at the citation pattern over time, we need to cumulate them along each row. Thus, we can plot

$$j\longmapsto \sum_{j'\leq j}\exp\left(\widehat{\gamma}+\widehat{\beta}_{j'}\right)
\qquad\text{and}\qquad
j\longmapsto \frac{\sum_{j'\leq j}\exp\left(\widehat{\gamma}+\widehat{\beta}_{j'}\right)}{\sum_{j'=1}^{15}\exp\left(\widehat{\gamma}+\widehat{\beta}_{j'}\right)}$$

(because we used factors, the first development level has been absorbed into the constant $\widehat{\gamma}$ of the regression), or a normalized version to compare journals: for instance, we scale everything as if each article received 100 citations over the 15 years.

DYN=exp(c(reg$coefficients[1],reg$coefficients[1]+
reg$coefficients[16:29]))
DYNN=cumsum(DYN)/sum(DYN)
plot(1:15,DYNN)

And this is what we get, for several academic journals,
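The curves for the other journals are obtained by re-running exactly the same steps on each journal's dataset. As a rough sketch (assuming a hypothetical named list bases of data frames, one per journal, with the same columns as base above), the normalized patterns could be overlaid as follows,

citation_pattern=function(b){
# refit the Poisson development model on one journal and return
# the normalized cumulative citation pattern (the DYNN vector above)
VOL=rev(unique(b$Volume)); VOL=VOL[!is.na(VOL)]
TR=matrix(NA,16,16); k=0
for(v in VOL){
k=k+1
sb=b[b$Volume==v,9:24]; sb=sb[!is.na(sb[,1]),]
TR[k,1:(17-k)]=apply(sb,2,sum)[k:16]}
TR=TR[-16,-16]
bt=data.frame(Y=as.vector(TR),YEAR=rep(1996:2010,15),DEV=rep(1:15,each=15))
r=glm(Y~as.factor(YEAR)+as.factor(DEV),data=bt,family=poisson)
D=exp(c(r$coefficients[1],r$coefficients[1]+r$coefficients[16:29]))
cumsum(D)/sum(D)}
plot(1:15,seq(0,1,length=15),type="n",xlab="years since publication",
ylab="cumulative share of citations")
for(j in seq_along(bases)) lines(1:15,citation_pattern(bases[[j]]),col=j)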

The patterns are rather different. For instance, in Health Economics, citation is a quick process: more than 40% of the citations obtained over 15 years were obtained during the first 4 years. On the other hand, in the Journal of Finance, it is much slower: less than 15% of the citations were obtained during the first 4 years (on average). So it means that comparing citation-based indices (namely g or h) is a difficult exercise, especially between researchers working in different areas. The same g (or h) index for a young researcher, publishing either in Stochastic Processes and their Applications or in the Annals of Statistics, can be 50% higher after 3 years, depending on the journal.


Now it is possible to look at things in more detail, with below JRSS-B (on applied statistics). Note that here, citations come extremely slowly… so it might not be a good “strategy” (assuming that a researcher’s target is simply to get – quickly – a high citation index) for a young researcher to publish in JRSS-B.

On the other hand, Biometrika is much faster (both are on applied statistics, but we’ve seen here that they were not in the same cluster)

We can also observe that Annals of Probability
and Stochastic Processes and their Applications

have (almost) similar patterns (SPA might be a bit faster). Anyway, I have been surprised to see that in theoretical journals citations are extremely fast. Especially if we compare with the Journal of Finance for instance

where I thought citations were extremely fast. But I might have an incorrect interpretation: it might simply mean that in the Journal of Finance it is common to cite old papers (published 10 or 15 years ago), maybe more common than in stochastic processes…
Anyway, all suggestions about the interpretation are welcome!

Think academic journals look the same ? Well, some do…

We have seen yesterday that finding an optimal strategy to publish is not that simple. And actually, it can be even more difficult in case the journal rejects the paper (not because it is not correct, but because “it does not fit” with the standards, the quality of the journal, the audience, the editor’s mood, or whatever). The author then basically has two choices,

  • forget about the article and move to something else (e.g. start a blog where he/she will be the author and the editor)
  • pretend that the article is worth publishing and then try to find another journal with similar interests


But this last choice is not that easy, since sometimes the author thinks that this journal was indeed the one that should have published it (e.g. all the articles on the subject have been published in that journal).
So I was wondering if there were clusters of journals, i.e. journals that publish almost the same kind of articles (so that next time one of my papers is rejected by the editor, I can simply go for some journal in the same cluster).
So what I did is extremely simple: I looked at article titles and looked for correlations between word frequencies (I could have done that with keywords, but I am not a big fan of those keywords). I looked at 35 journals (that are somehow related to my areas of interest) and at the titles of all the articles published over the last 20 years. Then I kept the top 1000 words, and I removed standard short words (“a“, “the“, “is“, etc). Actually, my top words look like

"models" "model" "data" "estimation" "analysis" "time" 
"processes" "risk" "random" "stochastic" "regression" 
"market" "approach" "optimal" "based" "information" 
"evidence" "linear" "games" "bayesian" "theory" "effects"
"distribution" "multivariate" "tests" "markets" "markov"
"equilibrium" "dynamic" "process" "distributions" 
"application" "stock" "likelihood"

Then, I ran a principal component analysis on my dataset (containing 960 variables – here words – and 35 observations – here journal names).
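The word-frequency matrix itself is not given in the post; a minimal sketch of how such a journal-by-word matrix could be built (assuming a hypothetical data frame titles, with one row per article and columns journal and title – the stop-word list is only illustrative) would be

w=strsplit(tolower(titles$title),"[^a-z]+")
allw=unlist(w)
stopw=c("","a","the","is","of","and","in","on","for","with","to","an")
top=names(sort(table(allw[!allw%in%stopw]),decreasing=TRUE))[1:1000]
# one row per journal: relative frequency of each top word in its titles
MATRICE=t(sapply(split(w,titles$journal),function(x){
tw=table(factor(unlist(x),levels=top))
tw/sum(tw)}))
dim(MATRICE) # 35 journals x 1000 words (960 kept in the post, after cleaning)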

library("FactoMineR")
res.pca = PCA(MATRICE, scale.unit=TRUE, ncp=5, 
graph=FALSE)
plot.PCA(res.pca, axes=c(1, 2), choix="ind")

The projection of the journals on the first two axes looks like that

Here, we can clearly observe some clusters: on the upper left, Journal of Finance and Journal of Banking and Finance (say, financial journals); on the upper right, Biometrika, Biometrics, Computational Statistics and Data Analysis and Journal of Econometrics (JASA is not far away), i.e. applied statistics journals. And below, on the right, Stochastic Processes and their Applications, Annals of Applied Probability, Journal of Applied Probability, Annals of Probability, Proceedings of the AMS and Topology and its Applications (i.e. more theoretical journals).
Note that the projection is rather robust: if I only consider my first 200 words, the graph is the same

In order to go further in the interpretation, we can also plot variables, i.e. words from titles,

where we cannot distinguish anything. So if I just look at my top 30 words, here they are,

On the top left we see market(s), risk or information; on the top right, analysis, effects, models or tests; while below we see Markov or process(es). And we can observe an interesting fact: in finance and in statistics, we talk about dynamics, while in the theoretical (mathematical) journals it is about processes.
But the goal was to find clusters, i.e. classes of journals that publish papers with similar titles.

DISTANCE = dist(MATRICE)
cah = hclust(DISTANCE) 
plot(cah)
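To actually extract classes from that dendrogram, one can simply cut the tree at a chosen number of clusters, e.g.

classes=cutree(cah,k=6) # 6 clusters is an arbitrary choice here
split(rownames(MATRICE),classes) # journals grouped by cluster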

Here we have

If some classes are rather natural (Journal of Applied Probability and Advances in Applied Probability, or Economic Theory, Journal of Economic Theory and Journal of Mathematical Economics), some strong associations are not so simple to understand (e.g. Insurance: Mathematics and Economics and Management Science, or Annals of Statistics and the Journal of Multivariate Analysis).
Again, it might be possible to spend hours on the graphs, but if I want – someday – to submit something to one of those journals, I guess I have to stop here, and move to something else…

 

Open data might be a false good opportunity…

I am always surprised to see so many people on Twitter tweeting about #opendata, e.g. @data4all, @usdatagov, @datapublicatwit, @ProPublica or @open3, among so many others… Initially, I was also very enthusiastic, but I have to admit that open data are rarely raw data. Which is what I am usually looking for, as a statistician…
Consider the following example: I was wondering (Valentine’s day is approaching) when a man born in 1975 (say) will get married – if he ever gets married. More technically, I was looking for the distribution of the age at first marriage (given the year of birth), including the proportion of men who will never get married, for that specific cohort.

The only data I found on the internet is the following, on statistics.gov.uk/

Note that we can also focus on women (e.g. here). Is it possible to use that open data to get an estimation of the distribution of first marriage for some specific cohort (and to answer the question I asked)? Here, we have two dimensions: in row $i$, the year (of the marriage), and in column $a$, the age of the man when he gets married. Assume that those were raw data, i.e. that we have $N_{i,a}$, the number of marriages of men of age $a$ during year $i$.

We are interested in a longitudinal reading of the table: consider some man born in year $c$; we want to estimate (or predict) the age at which he will get married, if he gets married. With raw data, we can do it… The first step is to build up triangles (to get a cohort vs. age reading of the data), and then to consider a model, e.g.

$$N_{c,a}\sim\mathcal{P}(\lambda_{c,a}),\qquad \log\lambda_{c,a}=\alpha_c+\beta_a$$

where $\alpha_c$ is a cohort (year of birth) effect, and $\beta_a$ is an age effect.

base=read.table("http://freakonometrics.free.fr/mariage-age-uk.csv",
sep=";",header=TRUE)
m=base[1:16,]
m=m[,3:10]
m=as.matrix(m)
triangle=matrix(NA,nrow(m),ncol(m))
n=ncol(m)
for(i in 1:16){
triangle[i,]=diag(m[i-1+(1:n),])
}
triangle[nrow(m),1]=m[nrow(m),1]
 
triangle
      [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
 [1,]   12  104  222  247  198  132   51   34
 [2,]    8   89  228  257  202  102   75   49
 [3,]    4   80  209  247  168  129   92   50
 [4,]    4   73  196  236  181  140   88   45
 [5,]    3   78  242  206  161  114   68   47
 [6,]   11  150  223  199  157  105   73   39
 [7,]   12  117  194  183  136   96   61   36
 [8,]   11  118  202  175  122   92   62   40
 [9,]   15  147  218  162  127   98   72   48
[10,]   20  185  204  171  138  112   82   NA
[11,]   31  197  240  209  172  138   NA   NA
[12,]   34  196  233  202  169   NA   NA   NA
[13,]   35  166  210  199   NA   NA   NA   NA
[14,]   26  139  210   NA   NA   NA   NA   NA
[15,]   18  104   NA   NA   NA   NA   NA   NA
[16,]   10   NA   NA   NA   NA   NA   NA   NA
 
Y=as.vector(triangle)
YEARS=seq(1918,1993,by=5)
AGES=seq(22,57,by=5)
X1=rep(YEARS,length(AGES))
X2=rep(AGES,each=length(YEARS))
reg=glm(Y~as.factor(X1)+as.factor(X2),family="poisson")
summary(reg)
 
Call:
glm(formula = Y ~ as.factor(X1) + as.factor(X2), family = "poisson")
 
Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-5.4502  -1.1611  -0.0603   1.0471   4.6214  
 
Coefficients:
                    Estimate Std. Error z value Pr(>|z|)    
(Intercept)        2.8300461  0.0712160  39.739  < 2e-16 ***
as.factor(X1)1923  0.0099503  0.0446105   0.223 0.823497    
as.factor(X1)1928 -0.0212236  0.0449605  -0.472 0.636891    
as.factor(X1)1933 -0.0377019  0.0451489  -0.835 0.403686    
as.factor(X1)1938 -0.0844692  0.0456962  -1.848 0.064531 .  
as.factor(X1)1943 -0.0439519  0.0452209  -0.972 0.331082    
as.factor(X1)1948 -0.1803236  0.0468786  -3.847 0.000120 ***
as.factor(X1)1953 -0.1960149  0.0470802  -4.163 3.14e-05 ***
as.factor(X1)1958 -0.1199103  0.0461237  -2.600 0.009329 ** 
as.factor(X1)1963 -0.0446620  0.0458508  -0.974 0.330020    
as.factor(X1)1968  0.1192561  0.0450437   2.648 0.008107 ** 
as.factor(X1)1973  0.0985671  0.0472460   2.086 0.036956 *  
as.factor(X1)1978  0.0356199  0.0520094   0.685 0.493423    
as.factor(X1)1983  0.0004365  0.0617191   0.007 0.994357    
as.factor(X1)1988 -0.2191428  0.0981189  -2.233 0.025520 *  
as.factor(X1)1993 -0.5274610  0.3241477  -1.627 0.103689    
as.factor(X2)27    2.0748202  0.0679193  30.548  < 2e-16 ***
as.factor(X2)32    2.5768802  0.0667480  38.606  < 2e-16 ***
as.factor(X2)37    2.5350787  0.0671736  37.739  < 2e-16 ***
as.factor(X2)42    2.2883203  0.0683441  33.482  < 2e-16 ***
as.factor(X2)47    1.9601540  0.0704276  27.832  < 2e-16 ***
as.factor(X2)52    1.5216903  0.0745623  20.408  < 2e-16 ***
as.factor(X2)57    1.0060665  0.0822708  12.229  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 
 
(Dispersion parameter for poisson family taken to be 1)
 
    Null deviance: 5299.30  on 99  degrees of freedom
Residual deviance:  375.53  on 77  degrees of freedom
  (28 observations deleted due to missingness)
AIC: 1052.1
 
Number of Fisher Scoring iterations: 5

Here, we have been able to derive the $\widehat{\alpha}_c$'s and $\widehat{\beta}_a$'s, where now $c$ denotes the cohort.
We can now predict the number of marriages per year, and per cohort

$$\widehat{N}_{c,a}=\exp\left(\widehat{\alpha}_c+\widehat{\beta}_a\right)$$

Here, given the cohort $c$, the shape of $a\mapsto\widehat{N}_{c,a}$ is the following

Yp=predict(reg,type="response")
tYp=matrix(Yp,nrow(m),ncol(m))
tYp[16,]
[1]  10.00000 222.94525 209.32773 159.87855 115.06971  42.59102
[7]  18.70168 148.92360

The errors (Pearson residuals) look like that

Ep=residuals(reg,type="pearson")
 

(where the darker the blue, the smaller the residual, and the darker the red, the larger the residual). Obviously, we are missing something here, like a diagonal effect. But this is not the main problem here…
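The figure itself is not reproduced here, but a minimal sketch of how such a blue-to-red map of the Pearson residuals could be drawn (reusing Ep, m and triangle above – the layout choices are mine) is

tEp=matrix(NA,nrow(m),ncol(m))
tEp[!is.na(triangle)]=Ep # back into the cohort x age layout (column-major order matches)
image(1:ncol(m),1:nrow(m),t(tEp),
col=colorRampPalette(c("blue","white","red"))(50),
xlab="age class",ylab="cohort")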

I guess the study above is not valid. The problem is that we deal with open data, and the numbers of marriages are not what is given here: what is given is the proportion of marriages of men of age $a$ during year $i$, with a yearly normalization. There is a constraint on each row, i.e. we observe

$$\widetilde{N}_{i,a}=1000\times\frac{N_{i,a}}{\sum_{a'}N_{i,a'}}$$

so that

$$\sum_{a}\widetilde{N}_{i,a}=1000$$

This is mentioned in the title

It is still possible to run a Poisson regression on the $\widetilde{N}_{i,a}$, but unfortunately, I do not think any interpretation is valid (unless demography did not change over the last century). For instance, the following sum

$$\sum_{a}\widehat{N}_{c,a}$$

looks like that

apply(tYp,1,sum)
 [1] 919.948 838.762 846.301 816.552 943.559 930.280 857.871 896.113
 [9] 905.086 948.087 895.862 853.738 826.003 816.192 813.974 927.437

i.e. if we look at the graph

But I do not think we can interpret that sum as the probability (if we divide by 1,000) that a man in that cohort gets married…. And more basically, I cannot do anything with that dataset…

So open data might be interesting. The problem is that most of the time, the data are somehow normalized (or aggregated). And then, it becomes difficult to use them…

So I will have to work further to be able to write something (mathematically valid) on marriage strategy before Valentine’s day…. to be continued.

Will I ever be a bayesian statistician ? (part 1)

Last week, during the workshop on Statistical Methods for Meteorology and Climate Change (here), I discovered how powerful Bayesian techniques could be, and that there were more and more Bayesian statisticians. So, if I am to fully understand applied statisticians in conferences and workshops, I really have to understand the basics of Bayesian statistics. I published some time ago a few posts on Bayesian statistics applied to actuarial problems (here or there), but so far, I always thought that Bayesian was a synonym for magician. To be honest, I am a Muggle, and I have not been trained as a Bayesian. But I can be an opportunist…

So I decided to publish some posts on Bayesian techniques, in order to prove that they are actually not that difficult to implement.

As far as I understand it, in Bayesian statistics, the parameter is considered as a random variable (which is also the case, in classical mathematical statistics). But here, we assume that this parameter does have a given (prior) distribution….
Consider a classical statistical problem: assume we have an i.i.d. sample $\{X_1,\dots,X_n\}$ with distribution $\mathcal{N}(\theta,\sigma^2)$. Here we note

$$X_i\mid\Theta=\theta\sim\mathcal{N}(\theta,\sigma^2)$$

since the parameter $\Theta$ is a random variable. The idea is to assume that $\Theta$ has a (so-called a priori, or prior) distribution, e.g.

$$\Theta\sim\mathcal{N}(\mu_0,\tau^2)$$

So far it was simple. The idea is then to consider the posterior distribution of $\Theta$, given the observations $X_1,\dots,X_n$. Thus, we need to compute the distribution of $\Theta\mid X_1,\dots,X_n$, which is here extremely simple (due to properties of the Gaussian distribution), i.e.

$$\Theta\mid X_1,\dots,X_n\sim\mathcal{N}(\mu_1,\tau_1^2)$$

where

$$\mu_1=\frac{\dfrac{n\overline{X}}{\sigma^2}+\dfrac{\mu_0}{\tau^2}}{\dfrac{n}{\sigma^2}+\dfrac{1}{\tau^2}}
\qquad\text{and}\qquad
\tau_1^2=\left(\frac{n}{\sigma^2}+\frac{1}{\tau^2}\right)^{-1}$$

And then, it becomes extremely natural to consider $\widehat{\theta}=\mathbb{E}(\Theta\mid X_1,\dots,X_n)=\mu_1$ as an estimator given our sample data (and thus, we also have a confidence interval, since we know the distribution of $\Theta$ given the observations $X_1,\dots,X_n$).
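As a small numerical illustration of that Gaussian case (the sample and the prior parameters mu0 and tau below are arbitrary choices of mine),

set.seed(1)
sigma=2
x=rnorm(25,1,sigma) # i.i.d. sample, true theta equal to 1
n=length(x)
mu0=0; tau=10 # prior Theta ~ N(mu0,tau^2), deliberately vague
tau1=1/sqrt(n/sigma^2+1/tau^2) # posterior standard deviation
mu1=(sum(x)/sigma^2+mu0/tau^2)*tau1^2 # posterior mean
c(mu1,mean(x)) # with a vague prior, the posterior mean is close to the sample mean
qnorm(c(.025,.975),mu1,tau1) # 95% interval from the posterior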
In order to be sure that we understood, consider now a heads and tails problem, i.e. $X_i\sim\mathcal{B}(\theta)$ (Bernoulli variables). Note, first, that $\theta$ has support $[0,1]$, so we need a prior distribution on that support. Why not a beta distribution? E.g.

$$\pi(\theta)=\frac{\theta^{a-1}(1-\theta)^{b-1}}{B(a,b)},\qquad\theta\in[0,1]$$

Thus, the likelihood of one observation is

$$\mathbb{P}(X_i=x_i\mid\theta)=\theta^{x_i}(1-\theta)^{1-x_i}$$

and, for the whole sample, with $y=x_1+\cdots+x_n$,

$$\mathbb{P}(X_1=x_1,\dots,X_n=x_n\mid\theta)=\theta^{y}(1-\theta)^{n-y}$$

From Bayes' formula,

$$\pi(\theta\mid x_1,\dots,x_n)\propto\mathbb{P}(X_1=x_1,\dots,X_n=x_n\mid\theta)\,\pi(\theta)$$

and we easily get

$$\pi(\theta\mid x_1,\dots,x_n)\propto\theta^{a+y-1}(1-\theta)^{b+n-y-1}$$

which is the density of a Beta distribution, i.e.

$$\theta\mid x_1,\dots,x_n\sim\mathcal{B}eta(a+y,\;b+n-y)$$
prior=dbeta(u,a,b)
posterior=dbeta(u,a+y,n-y+b)

The estimator proposed is then the expected value of that conditional (posterior) distribution,

$$\widehat{\theta}=\mathbb{E}(\theta\mid x_1,\dots,x_n)=\frac{a+y}{a+b+n}$$

Note that

$$\frac{a+y}{a+b+n}=\frac{a+b}{a+b+n}\cdot\frac{a}{a+b}+\frac{n}{a+b+n}\cdot\frac{y}{n}$$

i.e. the posterior mean is a weighted average of the prior mean and of the empirical frequency.

Further, it is possible to derive confidence intervals using quantiles of the posterior distribution.
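For instance, with a Beta(a,b) prior and y heads out of n tosses, those bounds are simply quantiles of the posterior Beta distribution; a quick sketch (the numbers are made up),

a=1; b=1 # uniform prior
n=20; y=14 # say, 14 heads out of 20 tosses
(a+y)/(a+b+n) # posterior mean
qbeta(c(.025,.975),a+y,b+n-y) # 95% credible interval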
On the graphs below, we consider the following heads/tails sample

A first idea is to consider a uniform prior distribution.

http://freakonometrics.free.fr/blog/bayes-cv-1.gif

A second idea is to consider an asymmetric beta distribution. First, with an asymmetry on the left,

http://freakonometrics.free.fr/blog/bayes-cv-3.gif

or on the right
http://freakonometrics.free.fr/blog/bayes-cv-2.gif

Finally a third idea is simply to get back to the standard Gaussian approximation,

http://freakonometrics.free.fr/blog/bayes-cv-gauss.gif

If we compare the four models, we obtain the following (the plain black line is the Gaussian approximation based on the empirical mean, and the other lines are obtained from the beta prior distributions),

http://freakonometrics.free.fr/blog/bayes-cv-all.gif

The code to generate those graphs is the following
u=seq(0,1,by=.01)
set.seed(1)
S=sample(0:1,size=100,replace=TRUE)
COULEUR=rev(rainbow(120))
D1=D2=D3=D4=matrix(NA,101,length(u))
a1=1; b1=1 # uniform prior
a2=4; b2=2 # beta prior, shifted to the right
a3=2; b3=4 # beta prior, shifted to the left
D1[1,]=dbeta(u,a1,b1)
D2[1,]=dbeta(u,a2,b2)
D3[1,]=dbeta(u,a3,b3)
for(s in 1:100){
y=sum(S[1:s])
D1[s+1,]=dbeta(u,a1+y,s-y+b1)
D2[s+1,]=dbeta(u,a2+y,s-y+b2)
D3[s+1,]=dbeta(u,a3+y,s-y+b3)
D4[s+1,]=dnorm(u,y/s,sqrt(y/s*(1-y/s)/s))
plot(u,D1[1,],col="black",type="l",ylim=c(0,8),
xlab="",ylab="")
for(i in 1:s){lines(u,D1[1+i,],col=COULEUR[i])}
points(y/s,0,pch=3,cex=2)
plot(u,D2[1,],col="black",type="l",ylim=c(0,8),
xlab="",ylab="")
for(i in 1:s){lines(u,D2[1+i,],col=COULEUR[i])}
points(y/s,0,pch=3,cex=2)
plot(u,D3[1,],col="black",type="l",ylim=c(0,8),
xlab="",ylab="")
for(i in 1:s){lines(u,D3[1+i,],col=COULEUR[i])}
points(y/s,0,pch=3,cex=2)
plot(u,D4[1,],col="white",type="l",ylim=c(0,8),
xlab="",ylab="")
for(i in 1:s){lines(u,D4[1+i,],col=COULEUR[i])}
points(y/s,0,pch=3,cex=2)
plot(u,D4[s+1,],col="black",lwd=2,type="l",
ylim=c(0,8),xlab="",ylab="")
lines(u,D1[1+i,],col="blue")
lines(u,D2[1+i,],col="red")
lines(u,D3[1+i,],col="purple")
points(y/s,0,pch=3,cex=2)
}

Here, we can see that computations are simple when the prior distribution is conjugate to the distribution of the observations (see here for a list of standard prior and posterior distributions).
So far, I have two questions that naturally show up

  • is it possible to start with a neutral, non-informative prior distribution ?
  • what if we are no longer working with conjugate distributions ?

Well, I guess I have to work a bit more to answer those questions…. to be continued

Warming in Paris: minimas versus maximas ?

Recently, I received comments (here and on Twitter) about my previous graphs on the temperature in Paris. I mentioned in a comment (there) that studying extrema (and more generally quantiles, or the evolution of interquantile ranges) is not the same as studying the variance. Since I am not a big fan of the variance, let us talk a little bit about the behaviour of extremes.

In order to study the average temperature, it is natural to look at the linear regression (assuming that the trend is linear, but I argued in the paper that this is a reasonable assumption), i.e. least squares regression, which gives the expected value. But if we care about extremes, or almost extremes, it is natural to look at quantile regression.

For instance, below, the green line is the least squares regression, the red one the 97.5% quantile regression, and the blue one the 2.5% quantile regression.

It looks like the slope is the same, i.e. extremes are increasing as fast as the average…

tmaxparis=read.table("temperature/TG_SOUID100845.txt",
skip=20,sep=",",header=TRUE)
head(tmaxparis)
Dparis=as.Date(as.character(tmaxparis$DATE),"%Y%m%d")
Tparis=as.numeric(tmaxparis$TG)/10
Tparis[Tparis==-999.9]=NA
I=sample(1:length(Tparis),size=5000,replace=FALSE)
plot(Dparis[I],Tparis[I],col="grey")
abline(lm(Tparis~Dparis),col="green")
library(quantreg)
abline(rq(Tparis~Dparis,tau=.025),col="blue")
abline(rq(Tparis~Dparis,tau=.975),col="red")

(here I plot only a random subset of the points, to avoid a figure that is too heavy, since I have too many observations, but I keep all the observations in the regressions!).

Now, if we look at the slope for different quantile levels (Fig. 6 in the paper, here, but on minimum daily temperature, while here I look at average daily temperature), the interpretation is different.

s=0
COEF=SD=rep(NA,199)
for(i in seq(.005,.995,by=.005)){
s=s+1
REG=rq(Tparis~Dparis,tau=i)
COEF[s]=REG$coefficients[2]
SD[s]=summary(REG)$coefficients[2,2]
}

with the following graph below,

s=0
plot(seq(.005,.995,by=.005),COEF,type="l",ylim=c(0.00002,.00008))
for(i in seq(.005,.995,by=.005)){
s=s+1
segments(i,COEF[s]-2*SD[s],i,COEF[s]+2*SD[s],col="grey")
}
REG=lm(Tparis~Dparis)
COEFlm=REG$coefficients[2]
SDlm=summary(REG)$coefficients[2,2]
abline(h=COEFlm,col="red")
abline(h=COEFlm-2*SDlm,lty=2,lwd=.6,col="red")
abline(h=COEFlm+2*SDlm,lty=2,lwd=.6,col="red")

Here, for minima (quantiles associated with low probabilities, on the left), the trend has a higher slope than the average, so in some sense the warming of minima is stronger than that of the average temperature; on the other hand, for maxima (high probabilities, on the right), the slope is smaller – but positive – so summers are getting warmer, but not as much as winters.
Note also that the story is different for minimal temperatures (considered in the paper) compared with this study, made here on average daily temperature (see the comments)… This is not a major breakthrough in climate research, but this is all I got…

More climate extremes, or simply global warming ?

In the paper on the heat wave in Paris (mentioned here) I discussed changes in the distribution of temperature (and autocorrelation of the time series).

During the workshop on Statistical Methods for Meteorology and Climate Change today (here) I observed that it was still an important question: is climate change affecting only averages, or does it have an impact on extremes ? And since I’ve seen nice slides to illustrate that question, I decided to play again with my dataset to see what could be said about temperature in Paris.
Recall that data can be downloaded here (daily temperature of the XXth century).

tmaxparis=read.table("/temperature/TX_SOUID100124.txt",
skip=20,sep=",",header=TRUE)
Dmaxparis=as.Date(as.character(tmaxparis$DATE),"%Y%m%d")
Tmaxparis=as.numeric(tmaxparis$TX)/10
tminparis=read.table("/temperature/TN_SOUID100123.txt",
skip=20,sep=",",header=TRUE)
Dminparis=as.Date(as.character(tminparis$DATE),"%Y%m%d")
Tminparis=as.numeric(tminparis$TN)/10
Tminparis[Tminparis==-999.9]=NA
Tmaxparis[Tmaxparis==-999.9]=NA
annee=trunc(tminparis$DATE/10000)
MIN=tapply(Tminparis,annee,min)
plot(unique(annee),MIN,col="blue",ylim=c(-15,40),xlim=c(1900,2000))
abline(lm(MIN~unique(annee)),col="blue")
abline(lm(Tminparis~unique(Dminparis)),col="blue",lty=2)
annee=trunc(tmaxparis$DATE/10000)
MAX=tapply(Tmaxparis,annee,max)
points(unique(annee),MAX,col="red")
abline(lm(MAX~unique(annee)),col="red")
abline(lm(Tmaxparis~unique(Dmaxparis)),col="red",lty=2)

On the plot below, the dots in red are the annual maximum temperatures, while the dots in blue are the annual minimum temperature. The plain line is the regression line (based on the annual max/min), and the dotted lines represent the average maximum/minimum daily temperature (to illustrate the global tendency),

It is also possible to look at annual boxplots, and to focus either on minima, or on maxima.

annee=trunc(tminparis$DATE/10000)
boxplot(Tminparis~as.factor(annee),ylim=c(-15,10),
xlab="Year",ylab="Temperature",col="blue")
x=boxplot(Tminparis~as.factor(annee),plot=FALSE)
xx=1:length(unique(annee))
points(xx,x$stats[1,],pch=19,col="blue")
abline(lm(x$stats[1,]~xx),col="blue")
annee=trunc(tmaxparis$DATE/10000)
boxplot(Tmaxparis~as.factor(annee),ylim=c(15,40),
xlab="Year",ylab="Temperature",col="red")
x=boxplot(Tmaxparis~as.factor(annee),plot=FALSE)
xx=1:length(unique(annee))
points(xx,x$stats[5,],pch=19,col="red")
abline(lm(x$stats[5,]~xx),col="red")

Plain dots are the average temperatures below the 5% quantile for minima, or above the 95% quantile for maxima (again with the regression line),

We can observe an increasing trend on the minima, but not on the maxima!
Finally, an alternative is to remember that we focus on annual maxima and minima. Thus, Fisher-Tippett theory (mentioned here) can be used. Here, we fit a GEV distribution on blocks of 10 consecutive years. Recall that the GEV distribution function is

$$G(x)=\exp\left\{-\left[1+\xi\left(\frac{x-\mu}{\sigma}\right)\right]^{-1/\xi}\right\},\qquad 1+\xi\,\frac{x-\mu}{\sigma}>0$$
install.packages("evir")
library(evir)
Pmin=Dmin=Pmax=Dmax=matrix(NA,10,3)
for(s in 1:10){
X=MIN[1:10+(s-1)*10]
FIT=gev(-X)
Pmin[s,]=FIT$par.ests
Dmin[s,]=FIT$par.ses
X=MAX[1:10+(s-1)*10]
FIT=gev(X)
Pmax[s,]=FIT$par.ests
Dmax[s,]=FIT$par.ses
}
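The plots of those estimated parameters are not reproduced here; one way to draw, say, the location parameters with ±2 standard-error bands from the matrices above could be (note that gev() from evir returns the estimates in the order xi, sigma, mu, and that the minima were fitted on -X, hence the sign flip),

par(mfrow=c(1,2))
plot(1:10,-Pmin[,3],xlab="decade",ylab="location (minima)")
segments(1:10,-Pmin[,3]-2*Dmin[,3],1:10,-Pmin[,3]+2*Dmin[,3])
plot(1:10,Pmax[,3],xlab="decade",ylab="location (maxima)")
segments(1:10,Pmax[,3]-2*Dmax[,3],1:10,Pmax[,3]+2*Dmax[,3])
par(mfrow=c(1,1))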

The location parameter $\mu$ is the following, with the minima on the left and the maxima on the right,

while the scale parameter $\sigma$ is

and finally the shape parameter $\xi$ is

On those graphs, it is very difficult to say anything regarding changes in temperature extremes… And I guess this is a reason why there is still active research in that area…

Cursed numbers ?

In Lost, Hugo “Hurley” Reyes played the numbers 4, 8, 15, 16, 23 and 42 at the lottery, and ended up winning the $114-million jackpot. And over the ensuing weeks, everyone around him seems to suffer increasingly bad luck: Hurley’s grandfather dies of a heart attack, his brother’s wife walks out, his mother breaks her ankle while the house Hurley bought her goes up in flames, and Hurley himself is falsely arrested.
Anyway, last week (here), 4 of those numbers (out of 6) came out at the lottery, in LA. As pointed out by Xi'an (here), the odds were not that small, i.e. it is roughly a 1‰ chance,

http://freakonometrics.blog.free.fr/public/perso/probaloto.png
Hence, with one lottery draw per week, the return period is about 16 years. Note that this probability is very close to what we observed on the French lottery (below, the statistics in ‰, from here, in a zip file),

> loto=read.table("loto.csv",dec=",",header=TRUE,sep=";")
> ntirage=nrow(loto)
> loto=loto[51:ntirage,]
> ntirage=nrow(loto)
> N=as.matrix(loto[,c("boule_1","boule_2","boule_3",
   "boule_4","boule_5","boule_6")])
> P=rep(NA,nrow(N))
> for(s in 1:nrow(N)){
+ P[s]=sum(N[s,1]%in%c(4, 8, 15, 16, 23, 42)+
+          N[s,2]%in%c(4, 8, 15, 16, 23, 42)+
+          N[s,3]%in%c(4, 8, 15, 16, 23, 42)+
+          N[s,4]%in%c(4, 8, 15, 16, 23, 42)+
+          N[s,5]%in%c(4, 8, 15, 16, 23, 42)+
+          N[s,6]%in%c(4, 8, 15, 16, 23, 42))
+ }
> table(P)/nrow(N)*1000
P
         0          1          2          3          4
435.732113 405.366057 137.271215  19.966722   1.663894
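For comparison, under independent 6-out-of-49 draws, the number of Hurley's six numbers appearing in a given draw is hypergeometric, so the theoretical frequencies (in ‰) can be checked directly,

> dhyper(0:4, m=6, n=43, k=6)*1000   # roughly 436, 413, 132, 17.7 and 0.97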

But what about the full sequence…? Imagine that in France, at the official lottery, the exact sequence played by Hugo comes out. What a coincidence. The probability that the sequence appears, assuming that there are 48 possible numbers in the lottery, is

http://freakonometrics.blog.free.fr/public/perso/lotto2.png
i.e. the expected number of draws we need before seeing that sequence for the first time is almost a billion.
Now if we look at all the official lotteries around the world, say 100 lotteries drawing every week, what is the probability that Hurley's sequence shows up – at least once – within 25 years (assuming that after 25 years, no one will remember Lost and those cursed numbers)? It looks like there is a 1% chance…
http://freakonometrics.blog.free.fr/public/perso/proba-lost.png
So let us wait and see…

Tennis and risk management

As mentioned already here, while we were going to Québec City for the workshop, we had interesting discussions in the car, and Maciej mentioned an article recently published in The Actuary,

Hence, I wanted to discuss (extremely) rare event probabilities in tennis. The story is simple: in June 2010, at Wimbledon, Nicolas Mahut and John Isner played the longest match ever: 980 points, over 11 hours of play. But first of all, we need a dataset. Thanks to Duncan Murdoch, I have been able to run a short code to build up a dataset:

CITIES=c("berlin","madrid","paris","rolandgarros","wimbledon","sydney",
"beijing","shanghai","singapore","tokyo","melbourne","melbourne-indoor")
YEARS=1970:2009
BASE0=data.frame(YEAR=NA,TRNMT=NA,LENGTH=NA,SETS=NA)
for(i in 1:length(CITIES)){
for(j in 1:length(YEARS)){
city=CITIES[i]
year=YEARS[j]
localization = paste("http://www.resultsfromtennis.com/",
year,"/atp/",city,".html",sep="")
essai = try(readLines(localization), silent=TRUE)
ERROR404=FALSE
if(inherits(essai, "try-error")){ERROR404=TRUE}
if(ERROR404==FALSE){
B=scan(localization,"character")
SETS=NA
LENGTH=NA
if(length(B)>270){
I=(substr(B,1,10)=="class=rez>")
sum(I)
X0=B[I]
X3=as.numeric(substr(X0,11,13))
X2=as.numeric(substr(X0,11,12))
X1=as.numeric(substr(X0,11,11))
X0=X3
X0[is.na(X3)==TRUE]=X2[is.na(X3)==TRUE]
X0[is.na(X2)==TRUE]=X1[is.na(X2)==TRUE]
JL=c(which(substr(B,1,9)=="class=nl>"),length(B))
IL=which(substr(B,1,10)=="class=rez>")
IC=cut(IL,JL)
base=data.frame(IC,X0)
LENGTH=as.numeric(tapply(X0,IC,sum))
SETS=as.numeric(tapply(X0,IC,length))/2}
BASE=data.frame(YEAR=year,TRNMT=city,LENGTH,SETS)
BASE0=rbind(BASE0,BASE)}}}
write.table(BASE0,"BASE-TENNIS-TOTAL.txt")

Here I consider only tournaments where players have to win 3 sets (and actually more tournaments than those in the code above), and I end up with a bit more than 72,000 matches,

> I=is.na(TENNIS$LENGTH)==FALSE
> BT=TENNIS[I,]
> nrow(BT)
[1] 72754
> maxr=function(x){max(x,na.rm=TRUE)}
> T=paste(BT$TRNMT,BT$YEAR)
> DUREE=tapply(BT$SETS,T,maxr)
> LISTE=names(DUREE[DUREE>3])
> BT=BT[T%in%LISTE,]

so, if we look briefly at matches over 35 years, we have the following boxplot (one boxplot per year),

The red line being the epic Isner-Mahut match in June 2010 (4-6, 6-3, 7-6, 6-7, 70-68, i.e. 183 games, here for the score card).

If we study the theory (e.g. from Paul Newton and Kamran Aslam), a lot of results can be obtained for the expected number of games, but if we want to study extremely rare events, we should simulate Markov chains (with a lot of simulations, since the probability we are looking for should be extremely small). But how many? Consider below the matches with more than 50 games,

The tail plot (over 50), i.e. the log-log Pareto plot indicates that it will be difficult to study tails,

and similarly with the Hill plot (assuming that tails are Pareto type….)
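Those two plots can be obtained directly with the evir package; a sketch, assuming (as in the code below) that X denotes the number of games per match, e.g. X=BT$LENGTH,

> library(evir)
> X=BT$LENGTH # number of games per match
> emplot(X[X>50], alog="xy") # empirical tail on log-log scales (Pareto plot)
> hill(X, option="xi") # Hill estimator of the tail index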

Anyway, if we want to study the tails, we should consider a high enough threshold. For instance, with a threshold at 68 (we keep only 24 matches), we have

> seuil=68+0.25
> GPD1=gpd(X,seuil,method = "ml")
> GPD2=gpd(X,seuil,method = "pwm")
>
> xi=GPD1$par.ests[1]
> mu=seuil
> beta=GPD1$par.ests[2]
> x=180
> P=exp((-1/xi)*log(1 + (xi * (x - mu))/beta))
> as.numeric((1-GPD1$p.less.thresh)*P)
[1] 5.621281e-09
>
> xi=GPD2$par.ests[1]
> mu=seuil
> beta=GPD2$par.ests[2]
> x=180
> P=exp((-1/xi)*log(1 + (xi * (x - mu))/beta))
> as.numeric((1-GPD2$p.less.thresh)*P)
[1] 3.027095e-09

I.e. the probability that a match lasts more than 183 games is about one chance in a billion… With, say, 2,500 matches per year, that gives us a return period of roughly 400,000 years. So yes, we can say that this was a rare event… So perhaps, by generating several billions of chains, it should be possible to get a more precise estimate of the probability of playing 183 games in a single match…

I really need to find hot (and sexy) topics

50 days ago (here), there were good reasons to be very optimistic about the probability that I could reach one million viewed pages on this blog (over a bit more than two years). Unfortunately, the wind has changed and today, the probability is quite low…

 base=read.table("millionb.csv",sep=";",header=TRUE)
X1=cumsum(base$nombre)
base=read.table("million2b.csv",sep=";",header=TRUE)
X2=cumsum(base$nombre)
X=X1+X2
 D=as.Date(as.character(base$date),"%m/%d/%Y")
kt=which(D==as.Date("01/06/2010","%d/%m/%Y"))
D0=as.Date("08/11/2008","%d/%m/%Y")
D=D0+1:length(X1)
P=rep(NA,(length(X)-kt)+1)
for(h in 0:(length(X)-kt)){
model  <- arima(X[1:(kt+h)],c(7 1,7),method="CSS") 
 forecast <- predict(model,200)
u=max(D[1:kt+h])+1:300
k=which(u==as.Date("01/01/2011","%d/%m/%Y"))
(P[h+1]=1-pnorm(1000000,forecast$pred[k],forecast$se[k]))
}
plot( D[length(D)-length(P)]+1:220,c(P,rep(NA,220-length(P))),
ylab="Probability to reach 1,000,000",xlab="",
type="l",col="red",ylim=c(0,1))
So, I guess my posts on multiple internal rates of return, or on Young’s inequality, will have to wait until next year… I really need to find some sexier topics to attract readers… Challenge accepted!

Is it that stupid to make extremely long term forecast when studying mortality ?

I recently received a comment from FCA (here) who raised an important question about forecasts in dynamic mortality models. (S)he mentioned that, from his (her) point of view, the econometric models I considered were “good to predict for the next, say, 3 or 4 years. Not for the next 50 years…”. This was the message I tried to stress last year in a conference about retirement in France (here). But from a quantitative point of view, how inconsistent were forecasts made 35 years ago, or 60 years ago?

Consider here the Lee-Carter model, estimated on the periods 1816-1950 (in black below), 1816-1975 (in red) and 1816-2000 (in blue). Unfortunately, it is difficult to compare the $\kappa_t$'s since we have identifiability problems here. Nevertheless, if we consider affine transformations so that the $\kappa_t$'s are equal in 1900 and in 1950 (say), we obtain

On that graph, we considered an ETS (AAN) forecast. If we do not consider the entire series for forecasting, but only the observations after WWII (i.e. after 1945), we obtain

For sketches of the R code,

library(gnm)      # for gnm() and Mult(), used to fit the Lee-Carter model
library(forecast) # for ets() and auto.arima()
T=1980
base0=data.frame(D,E,A,Y,a=as.factor(A),
y=as.factor(Y))
base=base0[base0$Y<=T,]
LC2=gnm(D~a+Mult(a,y),offset=log(E),family=
poisson,data=base)
A=LC2$coefficients[1]+LC2$coefficients[2:110]
B=LC2$coefficients[111:220]
K0=LC2$coefficients[221:length(LC2$coefficients)]
Y=as.numeric(K0)
K1=c(K0,forecast(ets(Y,model="AAN"),h=240)$mean)
K2=c(K0,forecast(auto.arima(Y,allowdrift=TRUE),h=240)$mean)
MU=matrix(NA,length(A),length(K1))
MU1=MU2=MU
for(i in 1:length(A)){
for(j in 1:length(K1)){
MU1[i,j]=exp(A[i]+B[i]*K1[j])
MU2[i,j]=exp(A[i]+B[i]*K2[j])
}}
x=40
s=seq(0,109-x-1)
t=2000
Pxt1=cumprod(exp(-diag(MU1[x+1+s,t+s-base1$Year[1]-1])))
Pxt2=cumprod(exp(-diag(MU2[x+1+s,t+s-base1$Year[1]-1])))
r=.035
m=70
h=seq(0,39)
V1=1/(1+r)^(m-x+h)*Pxt1[m-x+h]
V2=1/(1+r)^(m-x+h)*Pxt2[m-x+h]
M=cbind(V1,V2)
apply(M,2,sum)

Actually, it is not that bad…. even if it is only a qualitative intuition. Again, I am not a demographer, and my interest is more in actuarial science… so if we look at the estimation of annuities (still the same insurance contract, as here) for some insured aged 40 in 2000, we get the following graph (where the forecast $\kappa_t$'s were obtained on the complete series, i.e. from 1816 until the year we consider),

(here it means that in 1900, I had to forecast mortality for someone of age 40 in 2000… so we had to forecast mortality with a 150-year horizon). Obviously, even if we are able to forecast the improvement of mortality rates, it is not enough, since it looks like, each year, improvements are always higher than what was expected. Note that if we run it twice (since there might be problems with initial values in the econometric procedure) we obtain something similar,

So, the output is consistent. And if we change the way we predict future values, e.g. by focusing only on the past 50 years, i.e.

K1=c(K0,forecast(ets(Y[(length(Y)-50):length(Y)],
model="AAN"),h=240)$mean)
K2=c(K0,forecast(auto.arima(Y[(length(Y)-50):length(Y)],
allowdrift=TRUE),h=240)$mean)

we obtain the following graph for the annuity associated to an insurance contract sold in 2000,

so that relative changes compared with 1980 are (in %)

Hence, over a bit more than 25 years, we underestimated annuities by 25%. If we start to take possible investment returns into account, it is not so bad, I think…. don’t you think?

 

Finding roots of functions in actuarial science

The following simple code can be used to find roots of functions (based on the secant algorithm),

secant=function(fun, x0, x1, tolerance=1e-07, niter=500){
for ( i in 1:niter ) {
	x2 <- x1-fun(x1)*(x1-x0)/(fun(x1)-fun(x0)) # secant update
	if (abs(fun(x2)) < tolerance)
		return(x2)
	x0 <- x1
	x1 <- x2
}}

It can be interesting in actuarial science, e.g. to find the actuarial rate such that two present values are equal. For instance, consider the following stream of capital payments, received only if the insured is still alive (this example was initially considered here). We would like to find the rate such that the expected (probable) discounted value is 600,

> Lx=read.table("https://perso.univ-rennes1.fr/arthur.charpentier/TV8890.csv",
+ header=TRUE,sep=";")
> capital=c(100,100,125,125,150,150)
> n=length(capital)
> x=0.035
> X=45
> f=function(x){
+ capital.act=capital*(1/(1+x))^(1:n)
+ PROBA=Lx[((Lx[,1]>X)*(Lx[,1]<=(X+n)))==1,2]/Lx[(Lx[,1]==X)==1,2]
+ return(sum(capital.act*PROBA))}
>
> f1=function(x){f(x)-600}
> secant(f1,0,0.1)
[1] 0.06022313
> f(0.06022313)
[1] 600
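As a quick sanity check, base R's uniroot function should give (up to numerical tolerance) the same root,

> uniroot(f1,interval=c(0,.1))$root # close to 0.0602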


Comments on probabilities

The only thing I remember from the probability courses I took a few years ago is that we have to clearly define the event whose probability we want to calculate. On the Freakonomics blog, last week, the Israeli lottery was mentioned (here; see also there, where I mentioned it, along with some facts about the odds of the French lottery),

Yesterday, Andrew Gelman claimed (here) that there was a probability error… Well, since Andrew is really a statistician (and a good one… while I am barely an economist), I tried to do the maths…. and to understand where the error was coming from…

Since 6 numbers are drawn out of a pool of numbers from 1 to 37, the total number of combinations at each lottery draw is

$$n=\binom{37}{6}=2{,}324{,}784$$
> (n=choose(37,6))
[1] 2324784

Over 8 draws (since there are two draws per week, we can assume there are 8 draws per month), the probability of no identical draws is

$$\frac{n(n-1)\cdots(n-7)}{n^{8}}$$

Here is the R code for those who want to check, again,

> prod(n-0:7)/n^8
[1] 0.999988

Each month, the probability of a “coincidence” (I define “coincidence” as the event “over 8 draws, at least twice, we obtained the same 6-uplet”, or more precisely (as mentioned here) “over one calendar month, at least twice, we obtained the same 6-uplet”) is

> (p=1-(prod(n-0:7)/n^8))
[1] 1.204407e-05

The occurrence of a coincidence in a given month follows a Geometric distribution, with probability p. And it is classical, following Gumbel’s definition (here), to consider 1/p, called the “return period“, i.e. the expected number of months we have to wait until we observe a coincidence (i.e. a repetition within the same month), since for a geometric distribution

$$\mathbb{E}(N)=\frac{1}{p}$$
> 1/p/(12)
[1] 6919.034

Here, the (expected) return period is 6919 years.

From my point of view, this is “the incident of six numbers repeating themselves within a calendar month”, and this is an event occurring once every 6,919 years, on average. On the other hand, the median of a geometric distribution is (approximately)

$$\text{median}(N)=\frac{-\log 2}{\log(1-p)}$$
> -log(2)/log(1-p)/(12)
[1] 4795.88

which means that we have a 50% chance of getting such a coincidence over 4,796 years.

Of course, if instead we look at a longer period, say 100 draws, i.e. one year (here I define “coincidence” as the event “over 100 draws, at least twice, we obtained the same 6-uplet”), we have, in red, the expected return period, and in blue, the median of the geometric distribution,

> M=E=rep(NA,100)
> for(i in 2:100){
+ p=1-exp((sum(log(n-0:(i-1)))-i*log(n)))
+ E[i]=1/p/(100/i)
+ M[i]=-log(2)/log(1-p)/(100/i)
+ }
> plot(1:100,E,ylim=c(0,10000),type="l",col="red",lwd=2)
> lines(1:100,M,col="blue",lwd=2)
> abline(v=8,lty=2)
> points(8,E[8],pch=19,col="red")
> points(8,M[8],pch=19,col="blue")

or below of a log-scaled version

As Xi’an did (here), assume now that there are lotteries in 100 countries. Here I define “coincidence” as the event “over k lottery draws in each of the 100 lotteries around the world, at least twice, we obtained the same 6-uplet” (the two identical draws possibly coming from different countries), and then the previous graph becomes (with the value of k on the x axis)

Here I get about a 12% chance if we consider the probability of having identical numbers within a month…

But here, we can have one 6-uplet in Israel, and the other one in Egypt, say… If we want to get the same 6-uplet in the same country, the graph is now

i.e. each month there is about one chance in a thousand…

> i=8
> p=1-exp((sum(log(n-0:(i-1)))-i*log(n)))
> 1-(1-p)^100
[1] 0.001203689

Note: actually, Xi’an mentioned that the probability that this coincidence [of two identical draws over 188 draws] occurred in at least one out of 100 lotteries (there are hundreds of similar lotteries across the World) is 53%! And I got the same,

> 1-(1-P[188])^100
[1] 0.5305219

Names of villages, in France

Keith Briggs published a post here on the distribution of English place name elements, which contains almost twenty maps like the one where names end with -bourn, -bourne or -burn (here) or with -head (there). Actually, it is possible (Robin already mentioned that here) to do similar things in France… Consider the dataset containing the 35,250 commune names (here): it is an xls file containing the official name, the latitude and the longitude. To start with something simple, it is possible to look at villages containing “saint” in their name

There are a lot, and there is no obvious geographic pattern. For a simple geographic pattern, it is possible to see where the villages whose name ends with “sur mer” (literally “on the sea”) are located, below on the left. Obviously, we cannot find such places in the Alps. Similarly, villages whose name ends with “Seine” are clearly on the Seine river, on the right

> ville=read.table("D:\\r-data\\ville.csv",sep=";",header=TRUE)
> nrow(ville)
[1] 35376
> ville$maj=as.character(ville$Nom.Ville)
> n=nchar(ville$maj)
> I=substr(ville$maj,pmax(0,n-8),n)
> Ind=I=="-sur-Mer "
> sum(Ind)
[1] 98
> library(maps)
> map('france', fill = FALSE)
> X=ville[Ind,]
> x=as.numeric(as.character(X$Longitude))
> y=as.numeric(as.character(X$Latitude))
> points(x,y,pch=19,col="blue",cex=.6)

In order to continue with some geographic pattern, consider the end of the names, such as “-gny” (below on the left, in red) or “-ac” (below on the right, in blue)

Some claim that “-ac” comes from Gaulish, and it can indeed be found in Celtic regions (here, in Brittany). Obviously, there is also an origin in Occitania (the south-west of France). And in the Oïl regions (in the North and North-East) it gave the “-gny” ending. Consider similarly names ending with “-an” (below on the left, in red) or “-ey” (below on the right, in blue)

Still about the end of the names, it is also possible to look for villages ending either with “-a” (below on the left, in red) or with “-o” (below on the right, in blue)

We are now in the southern part of France…. “-a” in Corse and in the Pyrénées, while “-o” can be found in Corse, and in Brittany. For names beginning with “ker-” or “lan-” (below on the left, in red) or with “castel-” (below on the right, in blue),

“ker-” appears in 18,000 location names (as mentioned here) but only in some village names. It is similar to “castel-” in the southern part of France.
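All those maps are obtained exactly like the “-sur-Mer” one above, by matching a pattern at the end (or the beginning) of the commune names; a minimal sketch for two of the endings (the regular expressions are mine, and note that names in this file seem to carry a trailing blank),

> nm=sub(" +$","",tolower(ville$maj))
> map('france', fill = FALSE)
> for(suffix in c("gny$","ac$")){
+ Ind=grepl(suffix,nm)
+ X=ville[Ind,]
+ points(as.numeric(as.character(X$Longitude)),
+        as.numeric(as.character(X$Latitude)),
+        pch=19,cex=.4,col=ifelse(suffix=="gny$","red","blue"))
+ }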
To go a bit further, 40 years ago, Georges Brassens sang a song entitled “La ballade des gens qui sont nés quelque part“.

He says that people are usually extremely proud of their villages…. Actually, there are more people proud of living over something than under something: below are villages containing “sous” (i.e. under, below on the left, in red) or “sur” (i.e. over, below on the right, in blue)

On the other hand, villages containing “grand” or “grande” (i.e. tall or big, below on the left, in red) and those containing “petit” or “petite” (i.e. small, below on the right, in blue) seem to be correlated: close to a village with “grand” in its name, there is often a village with “petit” in it. For instance Virieu-le-Grand and Virieu-le-Petit, or Essigny-le-Grand and Essigny-le-Petit.

And finally, I found it surprising to see so many villages containing “montagne” (i.e. mountain, below on the left, in orange) or starting with “Mont” (below on the right, in purple) that are far from any mountains,

You do not need to live close to some mountains to get a mountain in your village’s name. Even in Brittany you can find dozens of villages starting with “Mont”….

Extracting information from a keyboard…

Yesterday, Baptiste published a post on “ethno-photography” (here). As he mentioned, at Paris 8 they experience a real absence of serious cleaning of the office equipment. He then shows the keyboard of the only computer they can use in the sociology department (for forty researchers),

Apart from the fact that everyone in France should be ashamed to see how much is spent on universities (which is the first piece of information we get from that picture), we should also be able to guess in which language people work in this department.
I considered three books (two in French, one in English) and I would like to see the frequency of each letter,

  • Mauss, manuel d’éthnographie (here), 1926
  • Durkheim, Livre II: Les croyances élémentaires in Les formes élémentaires de la vie religieuse (here), 1912
  • Ferri, Criminal Sociology (here), 1896

Those three books are in rich text format; I just converted them to plain text files… Then, it is easy to count the appearances of each letter. E.g. for Mauss,

> library(corpora)
> textfile=scan("MAUSS-manuel.txt",
+ what="char", sep="\n")
Read 1550 items
> textfile<-tolower(textfile)
> M=NA
> for(i in 1:length(textfile)){
+ line=textfile[i]
+ M=c(M,strsplit(as.character(line),"")[[1]])
+ }
> T=table(M)
> T
M
    '     -           \t     !     "     %     &     (     )     ,     .     / 
 5308  1049 86589    44     3     3     2     2   370   391  6609  4909    12 
    :     ;     ?     @     ]     _     ~     ’     =     «     »     ¬     ° 
  819  1178   113     1     1     4     1    39     1   108   107   823     3 
    …     0     1     2     3     4     5     6     7     8     9     a     à 
    1    69   213    83    73    34    48    33    28    64   151 30559  1651 
    â     ä     b     c     ç     d     e     é     è     ê     ë     f     g 
  224     3  3562 14678   110 17713 63955 10354  1798  1000     5  4555  4911 
    h     i     î     ï     j     k     l     m     n     ñ     o     º     ô 
 4359 30851   226    47  1147   247 24792 12844 32525     6 25562     2   151 
    ö     œ     p     q     r     s     t     u     ù     û     ü     v     w 
   12    52 12696  4667 28237 37630 32945 25001   211    40     9  4787   164 
    x     y     z 
 1996  1222   343

Then, we can summarize it to see the proportions of the standard 26 letters; for Mauss, we have
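(the bar plots are not reproduced here; as a minimal sketch of that summarizing step – keeping only the unaccented letters a to z, which is a simplification of mine – one could use)

> T26=T[names(T)%in%letters] # keep the 26 standard letters only
> prop=T26/sum(T26)
> barplot(sort(prop,decreasing=TRUE),las=2)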

and for Durkheim,

If we compare the two, we have almost the same proportions,

If we look at our book in English now, we have

i.e., if we compare with Mauss for instance

So we have much more E in French than in English, but still, people writing in English use the E a lot. So looking at the E should not give us any clue…. But we can see that in English, the H is as common as the L, or the C. Not in French, where the L is much more frequent than the H. And on the picture, the C key looks more used than the H. We can also look at the U, which is common in French, not in English… Here, on the keyboard, its wear is perfectly clear… so I guess people use it frequently.
So I would say that they write more in French than in English on that computer.
Actually, the same idea was used a long time ago on calculators to check that Benford’s law works: some digits are really used more than others (just as the legend pretends that some pages in logarithm books were never used….), see here or there. So Baptiste, if one day the keyboard is cleaned up, please send me another picture after a few weeks, to see if things have changed….
And for those who cannot imagine what it is like to work in some universities in France, just look at his blog (here). The pictures are unbelievable…. Good luck Baptiste….

Lottery, and martingales

I recently got a comment on a post I published one year ago, here, about the fact that in September 2009, on the 6th and the 10th, the same 6 numbers came out at the lottery, in Bulgaria (but I do not understand the question: the author of the comment asks about the order in which the numbers came out…)
Xi’an also published a post on that topic, there, since last week the same thing happened in Israel.
All that reminded me of a discussion I had with a colleague about another post (here) where I mentioned that I found a strange distribution of numbers in the French lottery (the old one, actually). For those who want to check, all historical draws are here, in a zip file. My colleague was wondering if I had found the martingale to win the lottery…

First, I do not like that term, since a martingale is something different from a mathematical point of view… Second, let us see whether it would have been possible to make some money… (a free lunch?)

> loto=read.table("D:\\loto.csv",dec=",",header=TRUE,sep=";")
> ntirage=nrow(loto)
> loto=loto[51:ntirage,]
> ntirage=nrow(loto)
>   N=as.matrix(loto[,c("boule_1","boule_2","boule_3","boule_4","boule_5","boule_6")])
> n=as.vector(N)
> length(n)
[1] 28848
> (TN=table(n))
n
1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19  20
607 576 571 618 579 598 608 582 588 590 562 577 577 580 591 630 558 567 594 608
21  22  23  24  25  26  27  28  29  30  31  32  33  34  35  36  37  38  39  40
578 562 579 583 574 589 602 572 550 598 604 582 545 646 597 618 599 636 609 588
41  42  43  44  45  46  47  48  49
576 589 577 585 618 596 560 571 604

So, it might look nice, but we have to compare that distribution with the one we should get with “independent” draws. It is not possible to simply use a discrete uniform distribution: the six numbers within a draw are not independent. Each day, the 49 balls are put back in the urn, but within a day, we do not have independent draws (it is a sample without replacement). Hence, with 4,808 lottery draws, each number cannot come out more than 4,808 times. So, let us use Monte Carlo techniques to look at the theoretical distribution,

> M=matrix(NA,49,1000)
> for(s in 1:1000){
+ B=NA
+ for(i in 1:ntirage){B=c(B,sample(1:49,size=6,replace=FALSE))}
+ B=B[-1]
+ M[,s]=sort(table(B))
+ }
> q50=function(x){quantile(x,.5)}
> Q50=apply(M,1,q50)
> plot(1:49,sort(TN),col="white",ylim=range(M),xlab="",ylab="frequency") # set up the frame
> lines(1:49,Q50,col="red",lwd=2)
> q10=function(x){quantile(x,.1)}
> Q10=apply(M,1,q10)
> q90=function(x){quantile(x,.9)}
> Q90=apply(M,1,q90)
> polygon(c(1:49,49:1),c(Q10,rev(Q90)),col="light blue",border=NA)
> lines(1:49,Q10,col="red",lty=2)
> lines(1:49,Q90,col="red",lty=2)
> lines(1:49,Q50,col="red",lwd=2)
> points(1:49,sort(TN),pch=19,type="b")

Looking at the graph, it looks like the observed frequencies are less dispersed than the simulated ones: even the numbers that came out the least (bottom left) appeared more frequently than the simulations suggest they should. So, since I have removed the last 50 draws, let us see if we could have used that information, somehow…

> nb=names(sort(TN))
> loto=read.table("D:\\loto.csv",dec=",",header=TRUE,sep=";")
> loto=loto[1:50,]
> N=as.matrix(loto[,c("boule_1","boule_2","boule_3","boule_4","boule_5","boule_6")])
> n=as.vector(N)
> TN=table(n)
> TN[nb]
> barplot(TN[nb])

Unfortunately, the numbers that came out too frequently over the 4,800 draws did not appear that frequently over the last 50. Playing the top numbers might not have been a great strategy.

(numbers that came out frequently are on the right, while those we did not see much are on the left)… What about the worst numbers: if I had decided to play the 6 numbers that did not come out very frequently (we’ve seen earlier that they should have appeared even less, actually), would it have been interesting? As we can see, our top 2 numbers over the last 50 draws were numbers that did not appear frequently earlier (29 and 47 appear respectively 10 and 11 times over those 50 draws)….
Over 50 draws of 6 balls, the expected frequency of 6 given numbers is around 36.7,

> S=rep(NA,10000)
> for(s in 1:10000){
+ B=NA
+ for(i in 1:50){B=c(B,sample(1:49,size=6,replace=FALSE))}
+ B=B[-1]
+ S[s]=sum(B%in%(1:6))
+ }
> mean(S)
[1] 36.7694
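This is consistent with the exact expected value, since each of the 6 chosen numbers comes out in a given draw with probability 6/49,

> 50*6*6/49
[1] 36.73469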

But here for the top 6, we have

> z=TN[nb]
> sum(rev(z)[1:6])
[1] 29

i.e. the top 6 appeared 29 times over 50 draws of 6 balls (which looks low) and for the worst 6, it is a bit higher,

>  sum(z[1:6])
[1] 38

If we look at the theoretical density of the frequency of 6 given numbers, we have

i.e. our worst 6 lands right around the average (in green) while the top 6 did not appear frequently this time (here in blue)! So we could not have used that information….
Anyway, if some of you are interested in using statistics to get a free lunch with the nouveau loto, I did not see any strange pattern (data can be downloaded here).

I am terribly sorry, but I cannot help anyone win the French lottery….