Tag Archives: rstats

More climate extremes, or simply global warming ?

In the paper on the heat wave in Paris (mentioned here) I discussed changes in the distribution of temperature (and autocorrelation of the time series).

During the workshop on Statistical Methods for Meteorology and Climate Change today (here) I observed that it was still an important question: is climate change affecting only averages, or does it have an impact on extremes ? And since I’ve seen nice slides to illustrate that question, I decided to play again with my dataset to see what could be said about temperature in Paris.
Recall that the data can be downloaded here (daily temperatures over the 20th century).

tmaxparis=read.table("/temperature/TX_SOUID100124.txt",
skip=20,sep=",",header=TRUE)
Dmaxparis=as.Date(as.character(tmaxparis$DATE),"%Y%m%d")
Tmaxparis=as.numeric(tmaxparis$TX)/10
tminparis=read.table("/temperature/TN_SOUID100123.txt",
skip=20,sep=",",header=TRUE)
Dminparis=as.Date(as.character(tminparis$DATE),"%Y%m%d")
Tminparis=as.numeric(tminparis$TN)/10
Tminparis[Tminparis==-999.9]=NA
Tmaxparis[Tmaxparis==-999.9]=NA
annee=trunc(tminparis$DATE/10000)
MIN=tapply(Tminparis,annee,min)
plot(unique(annee),MIN,col="blue",ylim=c(-15,40),xlim=c(1900,2000))
abline(lm(MIN~unique(annee)),col="blue")
abline(lm(Tminparis~annee),col="blue",lty=2)
annee=trunc(tmaxparis$DATE/10000)
MAX=tapply(Tmaxparis,annee,max)
points(unique(annee),MAX,col="red")
abline(lm(MAX~unique(annee)),col="red")
abline(lm(Tmaxparis~annee),col="red",lty=2)

On the plot below, the dots in red are the annual maximum temperatures, while the dots in blue are the annual minimum temperatures. The plain lines are the regression lines (based on the annual max/min), and the dotted lines are the regression lines based on all the daily maximum/minimum temperatures (to illustrate the global tendency),

It is also possible to look at annual boxplots, and to focus either on the minima or on the maxima.

annee=trunc(tminparis$DATE/10000)
boxplot(Tminparis~as.factor(annee),ylim=c(-15,10),
xlab="Year",ylab="Temperature",col="blue")
x=boxplot(Tminparis~as.factor(annee),plot=FALSE)
xx=1:length(unique(annee))
points(xx,x$stats[1,],pch=19,col="blue")
abline(lm(x$stats[1,]~xx),col="blue")
annee=trunc(tmaxparis$DATE/10000)
boxplot(Tmaxparis~as.factor(annee),ylim=c(15,40),
xlab="Year",ylab="Temperature",col="red")
x=boxplot(Tmaxparis~as.factor(annee),plot=FALSE)
xx=1:length(unique(annee))
points(xx,x$stats[5,],pch=19,col="red")
abline(lm(x$stats[5,]~xx),col="red")

The filled dots correspond to the lower whiskers of the annual boxplots for the minima, and to the upper whiskers for the maxima (again with the regression line),

We can observe an increasing trend for the minima, but not for the maxima !
Finally, an alternative is to remember that we focus on annual maxima and minima. Thus, Fisher–Tippett theory (mentioned here) can be used. Here, we fit a GEV distribution on blocks of 10 consecutive years. Recall that the GEV distribution function is

$$G(x)=\exp\left\{-\left[1+\xi\left(\frac{x-\mu}{\sigma}\right)\right]^{-1/\xi}\right\}$$
install.packages("evir")
library(evir)
Pmin=Dmin=Pmax=Dmax=matrix(NA,10,3)
for(s in 1:10){
X=MIN[1:10+(s-1)*10]
FIT=gev(-X)
Pmin[s,]=FIT$par.ests
Dmin[s,]=FIT$par.ses
X=MAX[1:10+(s-1)*10]
FIT=gev(X)
Pmax[s,]=FIT$par.ests
Dmax[s,]=FIT$par.ses
}
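The graphs below can then be obtained, for instance, as follows — a minimal sketch for the maxima (if I recall correctly, par.ests in evir is ordered as (xi, sigma, mu), so the third column is the location parameter; for the minima, keep in mind that the GEV was fitted on -X),

decade=seq(1905,1995,by=10)   # (rough) midpoints of the 10 blocks of 10 years
plot(decade,Pmax[,3],pch=19,col="red",xlab="",ylab="location parameter (annual maxima)",
ylim=range(c(Pmax[,3]-1.96*Dmax[,3],Pmax[,3]+1.96*Dmax[,3])))
segments(decade,Pmax[,3]-1.96*Dmax[,3],decade,Pmax[,3]+1.96*Dmax[,3],col="red")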

The location parameter $\mu$ is the following, with the minima on the left and the maxima on the right,

while the scale parameter $\sigma$ is

and finally the shape parameter $\xi$ is

On those graphs, it is very difficult to say anything regarding changes in temperature extremes… And I guess this is a reason why there is still active research in that area…

Cursed numbers ?

In Lost, Hugo “Hurley” Reyes played the numbers 4, 8, 15, 16, 23 and 42 at the lottery, and ended up winning the $114-million jackpot. And over the ensuing weeks, everyone around him seems to suffer increasingly bad luck: Hurley’s grandfather dies of a heart attack, his brother’s wife walks out, his mother breaks her ankle while the house Hurley bought her goes up in flames, and Hurley himself is falsely arrested.
Anyway, last week (here) 4 of those numbers (out of the 6) appeared in the lottery in LA. As pointed out by Xi'an (here), the odds were not that small: it is roughly a 1‰ chance,

$$\frac{\binom{6}{4}\binom{43}{2}}{\binom{49}{6}}\approx 0.1\%$$
Hence, with one lottery draw per week, the return period is about 16 years. Note that this percentage is very close to what we observed on the French lottery (the statistics below are in ‰, from here, in a zip file),

> loto=read.table("loto.csv",dec=",",header=TRUE,sep=";")
> ntirage=nrow(loto)
> loto=loto[51:ntirage,]
> ntirage=nrow(loto)
> N=as.matrix(loto[,c("boule_1","boule_2","boule_3",
   "boule_4","boule_5","boule_6")])
> P=rep(NA,nrow(N))
> for(s in 1:nrow(N)){
+ P[s]=sum(N[s,1]%in%c(4, 8, 15, 16, 23, 42)+
+          N[s,2]%in%c(4, 8, 15, 16, 23, 42)+
+          N[s,3]%in%c(4, 8, 15, 16, 23, 42)+
+          N[s,4]%in%c(4, 8, 15, 16, 23, 42)+
+          N[s,5]%in%c(4, 8, 15, 16, 23, 42)+
+          N[s,6]%in%c(4, 8, 15, 16, 23, 42))
+ }
> table(P)/nrow(N)*1000
P
         0          1          2          3          4
435.732113 405.366057 137.271215  19.966722   1.663894
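As a quick sanity check, those empirical frequencies can be compared with the theoretical ones given by a hypergeometric distribution (here I assume a 6-out-of-49 draw, as in the old French loto),

# theoretical frequencies (in per mille) of matching 0,1,...,4 of Hugo's 6 numbers
# when 6 balls are drawn out of 49
round(dhyper(0:4,m=6,n=43,k=6)*1000,2)
# roughly 436.0, 413.0, 132.4, 17.7 and 0.97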

But what about the full sequence…? Imagine that in France, at the official lottery, the exact sequence played by Hugo appears. What a coincidence. The probability that the sequence appears, assuming that there are 48 possible numbers in the lottery, is

$$\frac{1}{\binom{48}{6}}=\frac{1}{12{,}271{,}512}\approx 8.15\times 10^{-8}$$
i.e. the expected number of draws we need before seeing that sequence for the first time is over twelve million.
Now if we look at all official lotteries around the world, say 100 per week, what is the probability that Hurley's sequence shows up – at least once – in 25 years (assuming that after 25 years, no one will remember Lost and those cursed numbers) ? It looks like it is a 1% chance…
$$1-\left(1-\frac{1}{\binom{48}{6}}\right)^{100\times 52\times 25}\approx 1\%$$
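The computation, in R (still assuming 48 possible numbers, 100 lotteries per week and 52 weeks a year),

p=1/choose(48,6)       # probability that one given draw is exactly Hugo's 6-uplet
1-(1-p)^(100*52*25)    # at least one occurrence over 25 years, about 1%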
So let us wait and see…

No, probabilities are not our friends…

… and no, we do not live in the wonderful world of the Care Bears: life is not always fair. Probabilities are rather curious objects to handle, and not necessarily very intuitive. I had already mentioned (here and there) the notion of a probability space $(\Omega,\mathcal{A},\mathbb{P})$… Actually, probability computations are often quite puzzling and paradoxical. Here we will consider people who have to make decisions under uncertainty, and more specifically two situations: the difficulty of taking information into account (and hence of computing conditional probabilities), and the case where people have to act by making decisions at random (and therefore set the probabilities themselves).

  • Conditional probabilities

The day before yesterday, I posted (here) a note on the so-called Monty Hall paradox, where the contestant of a TV game show has to choose one of three doors (in order to win a car); once the choice is made, the host opens one of the two other doors (one with nothing behind it) and asks the contestant whether he wants to change his mind. And the answer is that, indeed, the contestant is better off switching to the remaining closed door (the one the host did not open). The idea of the reasoning is to formalize this piece of information properly, using conditional distributions.

This importance of information also shows up, for instance, in the story of the cuckolds of Baghdad, or of the blue-eyed islanders (here). Otherwise, on probability paradoxes, I can also point to stories about coin tosses (here), and to the two-envelope paradox (there), which may remind you of the Monty Hall problem. But my favourite one involving conditional distributions remains Simpson's paradox (here, presented in a more geometric form there)…

  • Probabilities as decision parameters

In all those examples, the probabilities are known, and it is simply a matter of updating them to take a new piece of information into account. But there are even nastier cases, where the probabilities cannot be considered as exogenous, or objective, because they are set by the players themselves. Consider a simple game, a bit in the spirit of rock-paper-scissors (which I had already discussed here, recalling that it is essential to make decisions at random). In today's game, there are again two players, say me (P) and my daughter (F). We hide our hands behind our backs and show them at the same moment, with one or two fingers out,

  • if the numbers of fingers shown are identical (pairs (1,1) or (2,2)), then I win the total number of fingers shown, i.e. 2 or 4
  • if the numbers of fingers shown are different (pairs (1,2) or (2,1)), then my daughter wins the total number of fingers shown, i.e. 3

The payoff matrix (from my point of view) is then the following

P\F 1 2
1 2 -3
2 -3 4

If we assume that each of the four pairs has probability 1/4, the game has zero expected value, since
$$\frac{1}{4}\times 2+\frac{1}{4}\times 4-\frac{1}{4}\times 3-\frac{1}{4}\times 3=0$$
Except that the probabilities are set by the players, in this case me and my daughter. Suppose I show one finger with probability $p_1$, and two with probability $p_2$ (with $p_1+p_2=1$). My daughter shows one finger with probability $q_1$, and two with probability $q_2$ (with $q_1+q_2=1$). The expected value of the game (from my point of view) is
$$V(p,q)=2p_1q_1+4p_2q_2-3p_1q_2-3p_2q_1$$
My daughter looks for the $(q_1,q_2)$ that minimizes this value, while I look for the $(p_1,p_2)$ that maximizes it. In short, a classical minimax problem from game theory. The equilibrium (if it exists) will be $(p_1^\star,p_2^\star,q_1^\star,q_2^\star)$, where $(p_1^\star,p_2^\star)$ and $(q_1^\star,q_2^\star)$ solve
$$\max_{p}\min_{q}V(p,q)=\min_{q}\max_{p}V(p,q)$$
For a given $p$, my daughter tries to find
$$\min_{q_1,q_2}\left\{2p_1q_1+4p_2q_2-3p_1q_2-3p_2q_1\right\}$$
i.e.
$$\min_{q_1,q_2}\left\{q_1(2p_1-3p_2)+q_2(4p_2-3p_1)\right\}$$
or, noting that $p_2=1-p_1$ and $q_2=1-q_1$,
$$\min_{q_1}\left\{q_1(12p_1-7)+(4-7p_1)\right\}$$
In other words, if $p_1<7/12$, my daughter should always show one finger (since $12p_1-7<0$), and that cannot be an equilibrium… similarly, if $p_1>7/12$, my daughter should always show two fingers, and again that is not an equilibrium. So the only possible equilibrium is obtained when $12p_1-7=0$, i.e. the equilibrium corresponds to $p_1^\star=7/12$. Dually, on my side, I was trying to find
$$\max_{p_1,p_2}\left\{2p_1q_1+4p_2q_2-3p_1q_2-3p_2q_1\right\}$$
i.e.
$$\max_{p_1,p_2}\left\{p_1(2q_1-3q_2)+p_2(4q_2-3q_1)\right\}$$
or again
$$\max_{p_1}\left\{p_1(12q_1-7)+(4-7q_1)\right\}$$
which leads to an equilibrium only if $q_1^\star=7/12$. In other words, for the game to be at equilibrium, we both have to play the following strategy: show one finger with probability $7/12$ (and not $1/2$) and two fingers with probability $5/12$. And the expected value of the game is then
$$V^\star=2\left(\frac{7}{12}\right)^2+4\left(\frac{5}{12}\right)^2-2\times 3\times\frac{7}{12}\times\frac{5}{12}=-\frac{1}{12}$$
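Since this is only a 2×2 zero-sum game, the equilibrium above can also be checked numerically; here is a small sketch in R, where A is the payoff matrix given above (from my point of view),

A=matrix(c(2,-3,-3,4),2,2,byrow=TRUE)                 # rows: my move, columns: my daughter's
V=function(p,q) as.numeric(c(p,1-p)%*%A%*%c(q,1-q))   # my expected gain
V(7/12,7/12)              # value of the game at equilibrium, -1/12
c(V(0,7/12),V(1,7/12))    # deviating does not help me: she makes me indifferent
c(V(7/12,0),V(7/12,1))    # and deviating does not help her either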
The moral of the story: my daughter, you had better work on your maths if you want to beat your father one day, since this is not really a "fair" game… provided, of course, that my daughter and I are then able to implement that strategy… Because, looking at it with my statistician's eyes, I think it is rather hard to tell the difference between $7/12$ and $1/2$… As a proof, consider the following small simulation,

> N=100; pv=rep(NA,100000)
> for(s in 1:100000){
+ X=sample(1:2,size=N,prob=c(5/12,7/12),replace=TRUE)
+ P=prop.test(table(X)[1],n=N,p=1/2)
+ pv[s]=P$p.value}
> mean(pv>0.05)
[1] 0.66776

in other words, assuming that my daughter and I manage to play this game 100 times, in 2/3 of the cases I cannot tell the difference between a probability equal to $1/2$ and a probability equal to $7/12$.

Tennis and risk management

As mentioned already here, while we were going to Québec City for the workshop, we had interesting discussions in the car, and Maciej mentioned an article recently published in The Actuary,

Hence, I wanted to discuss (extremely) rare event probabilities in tennis. The story is simple: in June 2010, at Wimbledon, Nicolas Mahut and John Isner played the longest match ever: 183 games, 980 points, and more than 11 hours of play. But first of all, we need a dataset. Thanks to Duncan Murdoch, I have been able to run a short code to build up a dataset:

CITIES=c("berlin","madrid","paris","rolandgarros","wimbledon","sydney",
"beijing","shanghai","singapore","tokyo","melbourne","melbourne-indoor")
YEARS=1970:2009
BASE0=data.frame(YEAR=NA,TRNMT=NA,LENGTH=NA,SETS=NA)
for(i in 1:length(CITIES)){
for(j in 1:length(YEARS)){
city=CITIES[i]
year=YEARS[j]
localization = paste("http://www.resultsfromtennis.com/",
year,"/atp/",city,".html",sep="")
essai = try(readLines(localization), silent=TRUE)
ERROR404=FALSE
if(inherits(essai, "try-error")){ERROR404=TRUE}
if(ERROR404==FALSE){
B=scan(localization,"character")
SETS=NA
LENGTH=NA
if(length(B)>270){
I=(substr(B,1,10)=="class=rez>")
sum(I)
X0=B[I]
X3=as.numeric(substr(X0,11,13))
X2=as.numeric(substr(X0,11,12))
X1=as.numeric(substr(X0,11,11))
X0=X3
X0[is.na(X3)==TRUE]=X2[is.na(X3)==TRUE]
X0[is.na(X2)==TRUE]=X1[is.na(X2)==TRUE]
JL=c(which(substr(B,1,9)=="class=nl>"),length(B))
IL=which(substr(B,1,10)=="class=rez>")
IC=cut(IL,JL)
base=data.frame(IC,X0)
LENGTH=as.numeric(tapply(X0,IC,sum))
SETS=as.numeric(tapply(X0,IC,length))/2}
BASE=data.frame(YEAR=year,TRNMT=city,LENGTH,SETS)
BASE0=rbind(BASE0,BASE)}}}
write.table(BASE0,"BASE-TENNIS-TOTAL.txt")

Here I consider only tournaments where players have to win 3 sets (and actually more tournaments than those in the code above), and I end up with a bit more than 72,000 matches,

> I=is.na(TENNIS$LENGTH)==FALSE
> BT=TENNIS[I,]
> nrow(BT)
[1] 72754
> maxr=function(x){max(x,na.rm=TRUE)}
> T=paste(BT$TRNMT,BT$YEAR)
> DUREE=tapply(BT$SETS,T,maxr)
> LISTE=names(DUREE[DUREE>3])
> BT=BT[T%in%LISTE,]

so, if we look briefly at matches over 35 years, we have the following boxplot (one boxplot per year),

The red line being the epic Isner-Mahut match in June 2010 (4-6, 6-3, 7-6, 6-7, 70-68, i.e. 183 games, here for the score card).

If we study the theory (e.g. from Paul Newton and Kamran Aslam), a lot of results can be obtained for the expected number of games, but if we want to study extremely rare events, we should simulate Markov chains (with a lot of simulations, since the probability we are after is extremely small). But how many ? Consider below matches with more than 50 games,

The tail plot (above 50), i.e. the log-log Pareto plot, indicates that it will be difficult to study the tails,

and similarly with the Hill plot (assuming that the tails are of Pareto type…)
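Those two diagnostic plots can be produced, for instance, with functions from the evir package (a sketch, assuming X contains the number of games per match, e.g. X=BT$LENGTH),

library(evir)
X=BT$LENGTH
emplot(X[X>50],alog="xy")   # empirical survival function, on log-log scales
hill(X,option="xi")         # Hill plot of the tail index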

Anyway, if we want to study the tails, we should consider a high enough threshold. For instance, with a threshold at 68 (we keep only 24 matches), we have

> seuil=68+0.25
> GPD1=gpd(X,seuil,method = "ml")
> GPD2=gpd(X,seuil,method = "pwm")
>
> xi=GPD1$par.ests[1]
> mu=seuil
> beta=GPD1$par.ests[2]
> x=180
> P=exp((-1/xi)*log(1 + (xi * (x - mu))/beta))
> as.numeric((1-GPD1$p.less.thresh)*P)
[1] 5.621281e-09
>
> xi=GPD2$par.ests[1]
> mu=seuil
> beta=GPD2$par.ests[2]
> x=180
> P=exp((-1/xi)*log(1 + (xi * (x - mu))/beta))
> as.numeric((1-GPD2$p.less.thresh)*P)
[1] 3.027095e-09

I.e. the probability that one match lasts more than 183 games is a few chances in a billion… With, say, 2,500 matches per year, that gives us a return period of the order of a hundred thousand years. So yes, we might say that this was a rare event… So perhaps, by generating several billion chains, it should be possible to get a more precise estimation of the probability of playing 183 games in a single match…
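Just to illustrate what such a simulation could look like, here is a toy sketch (this is not the Newton and Aslam model: I simply assume that each game is won by its server with some probability p, independently from game to game, and look at the length of a deciding set with no tie-break),

set.seed(1)
p=0.8             # hypothetical probability of holding serve
nsim=1e5
L=rep(NA,nsim)
for(s in 1:nsim){
  a=b=g=0
  while(!((a>=6 | b>=6) & abs(a-b)>=2)){   # set ends at 6+ games with a 2-game lead
    g=g+1
    serverA=(g%%2==1)                      # players alternate serve
    winserver=(runif(1)<p)
    if((serverA & winserver) | (!serverA & !winserver)) a=a+1 else b=b+1
  }
  L[s]=a+b
}
c(mean(L),max(L),quantile(L,.999))   # long sets sit far in the (geometric) tail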

I really need to find hot (and sexy) topics

50 days ago (here), I was supposed to be very optimistic about the probability that I could reach a million viewed pages on that blog (over a bit more than two years). Unfortunately, the wind has changed and today, the probability is quite low…

 base=read.table("millionb.csv",sep=";",header=TRUE)
X1=cumsum(base$nombre)
base=read.table("million2b.csv",sep=";",header=TRUE)
X2=cumsum(base$nombre)
X=X1+X2
 D=as.Date(as.character(base$date),"%m/%d/%Y")
kt=which(D==as.Date("01/06/2010","%d/%m/%Y"))
D0=as.Date("08/11/2008","%d/%m/%Y")
D=D0+1:length(X1)
P=rep(NA,(length(X)-kt)+1)
for(h in 0:(length(X)-kt)){
model  <- arima(X[1:(kt+h)],c(7,1,7),method="CSS") 
 forecast <- predict(model,300)
u=max(D[1:kt+h])+1:300
k=which(u==as.Date("01/01/2011","%d/%m/%Y"))
(P[h+1]=1-pnorm(1000000,forecast$pred[k],forecast$se[k]))
}
plot( D[length(D)-length(P)]+1:220,c(P,rep(NA,220-length(P))),
ylab="Probability to reach 1,000,000",xlab="",
type="l",col="red",ylim=c(0,1))
So, I guess my posts on multiple internal rates of return, or on Young's inequality, will have to wait until next year… I really need to find sexier posts to attract readers… Challenge accepted !

Is it that stupid to make extremely long term forecast when studying mortality ?

I recently received a comment by FCA (here) who raised an important question about forecasts in dynamic mortality models. (S)he mentioned that, from his (her) point of view, the econometric models I considered were "good to predict for the next, say, 3 or 4 years. Not for the next 50 years…". Which was the message I tried to stress last year in a conference about retirement in France (here). But from a quantitative point of view, how inconsistent were forecasts made 35 years ago, or 60 years ago ?

Consider here the Lee–Carter model, estimated on the periods 1816-1950 (in black below), 1816-1975 (in red) and 1816-2000 (in blue). Unfortunately, it is difficult to compare the $\kappa_t$'s directly, since we have identifiability problems here. Nevertheless, if we consider an affine transformation so that the $\kappa_t$'s are equal in 1900 and in 1950 (say), we obtain
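The affine rescaling itself is straightforward; a small sketch (k1, y1, k2, y2 are hypothetical names for two estimated $\kappa_t$ series and their corresponding years),

rescale=function(k1,y1,k2,y2,t1=1900,t2=1950){
  a=(k1[y1==t2]-k1[y1==t1])/(k2[y2==t2]-k2[y2==t1])  # slope of the affine map
  b=k1[y1==t1]-a*k2[y2==t1]                          # intercept
  b+a*k2                                             # k2 rescaled to match k1 at t1 and t2
}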

On that graph, we considered an ETS(AAN) forecast. If we do not use the entire series for forecasting, but only observations following WWII (i.e. after 1945), we obtain

For sketches of the R code,

library(gnm)       # for the bilinear (Lee-Carter) Poisson regression
library(forecast)  # for ets() and auto.arima()
T=1980
base0=data.frame(D,E,A,Y,a=as.factor(A),
y=as.factor(Y))
base=base0[base0$Y<=T,]
LC2=gnm(D~a+Mult(a,y),offset=log(E),family=
poisson,data=base)
A=LC2$coefficients[1]+LC2$coefficients[2:110]
B=LC2$coefficients[111:220]
K0=LC2$coefficients[221:length(LC2$coefficients)]
Y=as.numeric(K0)
K1=c(K0,forecast(ets(Y,model="AAN"),h=240)$mean)
K2=c(K0,forecast(auto.arima(Y,allowdrift=TRUE),h=240)$mean)
MU=matrix(NA,length(A),length(K1))
MU1=MU2=MU
for(i in 1:length(A)){
for(j in 1:length(K1)){
MU1[i,j]=exp(A[i]+B[i]*K1[j])
MU2[i,j]=exp(A[i]+B[i]*K2[j])
}}
x=40
s=seq(0,109-x-1)
t=2000
Pxt1=cumprod(exp(-diag(MU1[x+1+s,t+s-base1$Year[1]-1])))
Pxt2=cumprod(exp(-diag(MU2[x+1+s,t+s-base1$Year[1]-1])))
r=.035
m=70
h=seq(0,39)
V1=1/(1+r)^(m-x+h)*Pxt1[m-x+h]
V2=1/(1+r)^(m-x+h)*Pxt2[m-x+h]
M=cbind(V1,V2)
apply(M,2,sum)
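For the record, what apply(M,2,sum) returns is, for each of the two forecasts, the expected present value of the deferred annuity, i.e. (with x=40, m=70, r=3.5% and ${}_{t}p_{x}$ the survival probabilities built from the forecast mortality rates)

$$\sum_{h=0}^{39}\frac{1}{(1+r)^{m-x+h}}\;{}_{m-x+h}p_{x}$$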

Actually, it is not that bad… even if it is only a qualitative intuition. Again, I am not a demographer, and my interest is more in actuarial science… so if we look at the estimation of annuities (still the same insurance contract as here) for some insured of age 40 in 2000, we get the following graph (where the forecast $\kappa_t$'s were obtained on the complete series, i.e. from 1816 until the year we consider),

(here it means that in 1900, I had to forecast mortality for someone aged 40 in 2000… so we had to forecast mortality over a 150-year horizon). Obviously, even if we are able to forecast the improvement of mortality rates, it is not enough, since it looks like, each year, improvements are always higher than what was expected. Note that if we run it twice (since there might be problems with initial values in the econometric procedure) we obtain something similar,

So, the output is consistent. And if we change the way we predict future values, e.g. by focusing only on the past 50 years, i.e.

K1=c(K0,forecast(ets(Y[(length(Y)-50):length(Y)],
model="AAN"),h=240)$mean)
K2=c(K0,forecast(auto.arima(Y[(length(Y)-50):length(Y)],
allowdrift=TRUE),h=240)$mean)

we obtain the following graph for the annuity associated to an insurance contract sold in 2000,

so that relative changes compared with 1980 are (in %)

Hence, over a bit more than 25 years, we underestimated annuities by 25%. If we start to take into account possible investment returns, it is not so bad, I think… don't you think ?

 

Finding roots of functions in actuarial science

The following simple code can be used to find roots of functions (based on the secant algorithm),

secant=function(fun, x0, x1, tolerance=1e-07, niter=500){
for ( i in 1:niter ) {
	# intersection of the x-axis with the line through (x0,f(x0)) and (x1,f(x1))
	x2 <- x1-fun(x1)*(x1-x0)/(fun(x1)-fun(x0))
	if (abs(fun(x2)) < tolerance)
		return(x2)
	x0 <- x1
	x1 <- x2
}}
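For instance, to find the positive root of $x^2-2$,

> secant(function(x) x^2-2, 1, 2)
[1] 1.414214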

It can be interesting in actuarial science, e.g. to find the actuarial rate such that two present values are equal. For instance, consider the following stream of capital payments, paid only if the insured is still alive (this example was initially considered here). We would like to find the rate such that the expected discounted value is 600,

> Lx=read.table("https://perso.univ-rennes1.fr/arthur.charpentier/TV8890.csv",
+ header=TRUE,sep=";")
> capital=c(100,100,125,125,150,150)
> n=length(capital)
> x=0.035
> X=45
> f=function(x){
+ capital.act=capital*(1/(1+x))^(1:n)
+ PROBA=Lx[((Lx[,1]>X)*(Lx[,1]<=(X+n)))==1,2]/Lx[(Lx[,1]==X)==1,2]
+ return(sum(capital.act*PROBA))}
>
> f1=function(x){f(x)-600}
> secant(f1,0,0.1)
[1] 0.06022313
> f(0.06022313)
[1] 600


Does Marc Levy use only 150 words ?

The good thing about plane trips is that they give you plenty of time to read… And since I had finished my crime novel by the time I arrived in Paris, I fell back on the paper version of Rue89. And one article caught my attention, in which we learn that men cannot read Marc Levy… Having never read him myself, I am in no position to judge, but one reason put forward was that he uses too few words… In particular, the article quoted a statement by François Busnel, who claimed (here) "Ces livres sont cousus de fil blanc et écrits avec 150 mots. C'est comme la série Les Feux de l'amour." (roughly: these books are completely predictable and written with 150 words; it is like the soap opera The Young and the Restless).

Ah, now that is a claim that interests me as a statistician… how many words do writers actually use in their novels…?

If we look at Le voleur d'ombres, it contains roughly 64,000 words in total. Among those words, about 6,830 are unique (keeping in mind that here abandonnèrent, abandonne and abandonnait, for instance, are counted as different words). The R code is actually quite simple,

> livre=scan("Levy-Marc-Le voleur d'ombres.txt", 
+ what=character())
> minlivre=tolower(livre)
> TABLE=rev(sort(table(minlivre)))
> length(TABLE)
[1] 6827
> length(livre)
[1] 64122
> D1=data.frame(mot=names(TABLE),N=as.numeric(TABLE),
+ Freq=as.numeric(TABLE)/sum(TABLE)*100)

(I should mention that I replaced commas, periods, apostrophes, etc., by spaces in the original text, in order to end up with a file containing only words, without symbols – and I converted everything to lower case).
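That kind of cleaning can also be done directly in R, for instance along the following lines (a sketch, not necessarily the exact preprocessing used here),

raw=readLines("Levy-Marc-Le voleur d'ombres.txt")
txt=tolower(gsub("[[:punct:]]"," ",raw))        # punctuation replaced by spaces, lower case
mots=unlist(strsplit(txt,"[[:space:]]+"))
mots=mots[mots!=""]
c(length(mots),length(unique(mots)))            # number of words, and of unique words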

In order to compare with other authors, I took an older one (to avoid copyright issues, since the texts are available online), namely Théophile Gautier, and more specifically his Capitaine Fracasse (yes, we stick to doorstoppers). In total, 127,177 words, of which 15,595 are unique. If we compare the Lorenz curves, we obtain the following graph,
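Those Lorenz curves can be drawn directly from the word counts, e.g. (assuming D2 was built for Le Capitaine Fracasse exactly as D1 above),

lorenz=function(N){n=sort(N); cbind((1:length(n))/length(n),cumsum(n)/sum(n))}
plot(lorenz(D1$N),type="l",col="blue",xlab="proportion of distinct words",
ylab="proportion of occurrences")
lines(lorenz(D2$N),col="red")
abline(0,1,lty=2)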

The 20 most frequent words are (in counts, and especially in frequencies), respectively,

cbind(D1[1:20,],D2[1:20,])
    mot    N      Freq mot    N      Freq
1    de 2310 3.6025077  de 6284 4.9411450
2    la 1591 2.4812077  et 3862 3.0367126
3    je 1534 2.3923147  la 3720 2.9250572
4     à 1270 1.9805995  le 3235 2.5436989
5    le 1165 1.8168491   à 2429 1.9099365
6    et 1121 1.7482299 les 2214 1.7408808
7   que 1096 1.7092418   d 2192 1.7235821
8     j  889 1.3864196  un 2075 1.6315843
9     l  873 1.3614672   l 2055 1.6158582
10   un  857 1.3365148 une 1678 1.3194210
11   en  780 1.2164312  en 1497 1.1770996
12   me  736 1.1478120  il 1494 1.1747407
13  pas  701 1.0932285 des 1455 1.1440748
14  les  665 1.0370856  se 1178 0.9262681
15   il  654 1.0199308 que 1085 0.8531417
16 elle  653 1.0183712  du 1072 0.8429197
17   tu  596 0.9294782 qui 1064 0.8366293
18   ai  588 0.9170020 son  971 0.7635028
19 dans  549 0.8561804   s  922 0.7249739
20    d  542 0.8452637  qu  915 0.7194697

Still, we do see that Marc Levy writes in a different way, and that he uses je (I) and moi (me) a lot (compared with Théophile Gautier). But criticizing him because he supposedly uses few words seems a bit thin to me… he seems to use as many as truly great writers do…

Do 70% of French employees earn less than €1,500 per month ?

Yesterday, @Louisa_A asked me to comment on the claim that "70% of French employees earn less than €1,500 per month". First of all, recall that according to INSEE, in 2008, the median net wage in France was around €1,655 per month for a full-time job (see here for instance). Since this is the median (see here for a nice joke from the Nouvel Obs about the median, showing that you should not always believe what you read), it means that half of French employees earn less than that, or rather would earn less if they all worked full time… The catch is that INSEE usually normalises the data, and only reports full-time equivalents…

Fortunately, INSEE also makes databases available, in particular the dads2007 base, here, a 25MB (zipped) file containing almost 2 million observations, including the time worked in 2007 and the wage (or rather a wage bracket). The code is rather simple (even though I only worked on the first half of the base, because of import problems on my small laptop),

> library(foreign)
> base=read.dbf("D:\\r-data\\salaries07.dbf")
> TRANCHE=c(0,100,300,600,1000,1300,2000,3000,
+ 4000,5000,7000,9000,11000,12500,14000,16000,
+ 18000,20000,24000,30000,1000000000)
> salinf=TRANCHE[1:20]
> salsup=TRANCHE[2:21]

In short, we can compute median wages for full-time workers (to check whether our data are representative), keeping in mind that a full-time job corresponds to 1,820 hours per year (see here),

> Xinf=salinf[base$TRNNETO+1]/base$NBHEUR_TOT*1820/12
> Xsup=salsup[base$TRNNETO+1]/base$NBHEUR_TOT*1820/12
> quantile(Xinf,.5,na.rm=TRUE)
 50% 
1484.745 
> quantile(Xsup,.5,na.rm=TRUE)
 50% 
1685.185

Since we are not too far from the 1,655 mentioned above, I would tend to trust our figures (sticking rather close to the upper bound, by the way). But if we work on actual income, and not full-time-equivalent income, the figures are a bit different,

> Xinf=salinf[base$TRNNETO+1]/12
> Xsup=salsup[base$TRNNETO+1]/12
> quantile(Xinf,.5,na.rm=TRUE)
 50% 
1333.333 
> quantile(Xsup,.5,na.rm=TRUE)
 50% 
1500

To get back to the point: if we look at the proportion of French employees earning less than €1,500 per month, I obtain

> mean(Xinf<=1500)
[1] 0.6118895
> mean(Xsup<=1500)
[1] 0.5236287

That is, roughly speaking, I would say that about 55% of French employees earn less than €1,500 per month. Provided the figures are net. If the quote was about gross wages, €1,500 gross corresponds roughly to €1,200 net per month, and we then get the following proportions,

> mean(Xinf<=1500*.8)
[1] 0.4215607
> mean(Xsup<=1500*.8)
[1] 0.3143752

By the way, a small remark on these issues of income and working time. If the monthly time worked and the hourly wage were independent variables, I could have made quick calculations from the distribution of working time alone. Unfortunately, these two quantities are not independent. To convince ourselves, let us compute the product of the expectations (average hourly wage times average monthly time worked) and the expectation of the product (average income),

> Xinf=salinf[base$TRNNETO+1]/base$NBHEUR_TOT*1820/12
> Xsup=salsup[base$TRNNETO+1]/base$NBHEUR_TOT*1820/12
> mean(Xinf[Xinf<1000000000000],na.rm=TRUE)*
+ mean(base$NBHEUR_TOT/1820)
[1] 1609.669
> mean(salinf[base$TRNNETO+1])
[1] 16832.14

which are clearly not equal. There is even a positive dependence between the hourly wage and the monthly working time. This reflects the fact that people doing the jobs with the lowest hourly wages are also those who work the least. From a technical point of view, studying working time/hourly wage inequalities is a bit complicated because of the product. This is why, with Stéphane Mussard, we introduced a decomposition of inequality measures for multiplicative inequalities, here. But this goes beyond the scope of that paper, and my answer would be that no, I doubt the person was arguing in good faith (the only source I found on the internet giving those figures was here).

Comments on probabilities

The only thing I remember from the probability courses I took a few years ago is that we always have to clearly define the event whose probability we want to calculate. On the Freakonomics blog, last week, the Israeli lottery was mentioned (here; see also there, where I mentioned it, together with some odds from the French lottery),

Yesterday, Andrew Gelman claimed (here) that there was a probability error… Well, since Andrew is really a statistician (and a good one… while I am barely an economist), I tried to do the maths…. and to understand where the error was coming from…

Since 6 numbers are drawn out of a pool of numbers from 1 to 37, the total number of combinations at each draw is

$$n=\binom{37}{6}=2{,}324{,}784$$
> (n=choose(37,6))
[1] 2324784

Over 8 draws (since there are two draws per week, we can assume there are 8 draws per month), the probability that no two draws are identical is

$$\frac{n(n-1)(n-2)\cdots(n-7)}{n^{8}}$$

Here is the R code for those who want to check, again,

> prod(n-0:7)/n^8
[1] 0.999988

Each month, the probability of a "coincidence" (where I define "coincidence" as the event "over 8 draws, the same 6-uplet was obtained at least twice", or more precisely (as mentioned here) "over one calendar month, the same 6-uplet was obtained at least twice") is

> (p=1-(prod(n-0:7)/n^8))
[1] 1.204407e-05

The number of months until the first coincidence has a Geometric distribution, with parameter p. And it is classical, following Gumbel's definition (here), to consider 1/p, called the "return period", i.e. the expected number of months we have to wait until we observe a coincidence (i.e. a repetition within the same month), since for a geometric distribution

$$\mathbb{E}(N)=\frac{1}{p}$$
> 1/p/(12)
[1] 6919.034

Here, the (expected) return period is 6919 years.

From my point of view, this is "the incident of six numbers repeating themselves within a calendar month", and this is a once-in-6,919-years event. On the other hand, the median of a geometric distribution is

$$\text{median}(N)=\left\lceil\frac{-\log 2}{\log(1-p)}\right\rceil\approx\frac{-\log 2}{\log(1-p)}$$
> -log(2)/log(1-p)/(12)
[1] 4795.88

which means that we have a 50% chance of getting such a coincidence within 4,796 years.

Of course, we can instead look at a longer period, say 100 draws, i.e. one year (here I define "coincidence" as the event "over 100 draws, the same 6-uplet was obtained at least twice"); we then have, in red, the expected return period, and in blue the median of the geometric distribution,

> M=E=rep(NA,100)
> for(i in 2:100){
+ p=1-exp((sum(log(n-0:(i-1)))-i*log(n)))
+ E[i]=1/p/(100/i)
+ M[i]=-log(2)/log(1-p)/(100/i)
+ }
> plot(1:100,E,ylim=c(0,10000),type="l",col="red",lwd=2)
> lines(1:100,M,col="blue",lwd=2)
> abline(v=8,lty=2)
> points(8,E[8],pch=19,col="red")
> points(8,M[8],pch=19,col="blue")

or below for a log-scaled version

As Xi'an did (here), assume now that there are lotteries in 100 countries. Here I define "coincidence" as the event "over k draws in each of the 100 lotteries around the world, the same 6-uplet was obtained at least twice", and then the previous graph becomes (with the value of k on the x-axis)

Here I get roughly a 12% chance if we consider the probability of identical numbers showing up within one month…
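That order of magnitude can be recovered by pooling the 8 monthly draws of the 100 lotteries into 800 draws from the same pool (a rough simplification),

n=choose(37,6)
i=100*8
1-exp(sum(log(n-0:(i-1)))-i*log(n))   # about 0.13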

But here, we can have one 6-uplet in Israel, and the other one in Egypt, say… If we want to get the same 6-uplet in the same country, the graph is now

i.e. each month there is about one chance in a thousand…

> i=8
> p=1-exp((sum(log(n-0:(i-1)))-i*log(n)))
> 1-(1-p)^100
[1] 0.001203689

Note: actually, Xi’an mentioned that the probability that this coincidence [of two identical draws over 188 draws] occurred in at least one out of 100 lotteries (there are hundreds of similar lotteries across the World) is 53%! And I got the same,

> 1-(1-P[188])^100
[1] 0.5305219

Names of villages, in France

Keith Briggs published a post here on the distribution of English place-name elements, which contains almost twenty maps like the one where names end in -bourn, -bourne, -burn (here) or -head (there). Actually, it is possible (Robin already mentioned it here) to do similar things in France… Consider the dataset containing the 35,250 commune names (here); it is an xls file containing the official name, the latitude and the longitude. To start with something simple, it is also possible to look at villages with "saint" in their name

There are a lot of them, and no obvious geographic trend. For a simple geographic trend, it is possible to see where the villages whose names end with "sur mer" (literally "on the sea") are located, below on the left. Obviously, we cannot find such places in the Alps. Similarly, names ending with "Seine" are clearly along the Seine river, on the right,

> ville=read.table("D:\\r-data\\ville.csv",sep=";",header=TRUE)
> nrow(ville)
[1] 35376
> ville$maj=as.character(ville$Nom.Ville)
> n=nchar(ville$maj)
> I=substr(ville$maj,pmax(0,n-8),n)
> Ind=I=="-sur-Mer "
> sum(Ind)
[1] 98
> library(maps)
> map('france', fill = FALSE)
> X=ville[Ind,]
> x=as.numeric(as.character(X$Longitude))
> y=as.numeric(as.character(X$Latitude))
> points(x,y,pch=19,col="blue",cex=.6)

In order to continue with some geographic pattern, consider the end of the names, such as “-gny” (below on the left, in red) or “-ac” (below on the right, in blue)

Some claim that "-ac" has a Gaulish (Celtic) origin, and can be found in Celtic regions (here in Brittany). Obviously, it is also common in Occitania (south-west of France). And in the Oïl regions this also gave the "-gny" endings (in the North and North-East). Consider similarly endings such as "-an" (below on the left, in red) or "-ey" (below on the right, in blue)
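All those maps can be produced with a small helper function; a sketch (plot_ending is a hypothetical name, and the names in the file seem to carry a trailing space, hence the trimws),

library(maps)
plot_ending=function(ending,color){
  noms=trimws(tolower(as.character(ville$Nom.Ville)))
  Ind=grepl(paste0(ending,"$"),noms)
  map('france', fill = FALSE)
  points(as.numeric(as.character(ville$Longitude[Ind])),
         as.numeric(as.character(ville$Latitude[Ind])),pch=19,col=color,cex=.6)
}
plot_ending("ac","blue")
plot_ending("gny","red")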

Still about the end of the names, it is also possible to look for villages ending either with "-a" (below on the left, in red) or "-o" (below on the right, in blue)

We are now in the southern part of France… "-a" in Corsica and the Pyrénées, while "-o" can be found in Corsica, and in Brittany. For names beginning with "ker-" or "lan-" (below on the left, in red) or "castel-" (below on the right, in blue),

“ker-” appears in 18,000 location names (as mentioned here) but only in some village names. It is similar to “castel-” in the southern part of France.
To go a bit further, 40 years ago, Georges Brassens sang a song entitled “La ballade des gens qui sont nés quelque part“.

He says that people are usually extremely proud of their villages… Actually, there are more people proud of living over something than under something: below are villages containing "sous" (i.e. under, below on the left, in red) or "sur" (i.e. over, below on the right, in blue)

On the other hand, villages containing "grand" or "grande" (i.e. tall or big, below on the left, in red) or "petit" or "petite" (i.e. small, below on the right, in blue) seem to be correlated: close to a village with "grand" in its name, there is often a village with "petit" in it. For instance Virieu-le-Grand and Virieu-le-Petit, or Essigny-le-Grand and Essigny-le-Petit.

And finally, I found it surprising to see so many villages containing "montagne" (i.e. mountain, below on the left, in orange) or starting with "Mont" (below on the right, in purple) that are far from any mountains,

You do not need to live close to mountains to have a mountain in your village's name. Even in Brittany you can find dozens of villages starting with "Mont"…

Extracting information from a keyboard…

Yesterday, Baptiste published a post on "ethno-photography" (here). As he mentioned, at Paris 8, office equipment is basically never seriously cleaned. He then shows the keyboard of the only computer they can use in the sociology department (for forty researchers),

Apart from the fact that everyone in France should be ashamed to see how little is spent on universities (which is the first piece of information we get from that picture), we should also be able to guess in which language people work in this department.
I considered three books (two in French, one in English) and I would like to see the frequency of each letter,

  • Mauss, manuel d’éthnographie (here), 1926
  • Durkheim, Livre II: Les croyances élémentaires in Les formes élémentaires de la vie religieuse (here), 1912
  • Ferri, Criminal Sociology (here), 1896

Those three books are in rich text format; I just converted them to plain text files… Then, it is easy to count the occurrences of each letter. E.g. for Mauss,

> library(corpora)
> textfile=scan("MAUSS-manuel.txt",
+ what="char", sep="\n")
Read 1550 items
> textfile<-tolower(textfile)
> M=NA
> for(i in 1:length(textfile)){
+ line=textfile[i]
+ M=c(M,strsplit(as.character(line),"")[[1]])
+ }
> T=table(M)
> T
M
    '     -           \t     !     "     %     &     (     )     ,     .     / 
 5308  1049 86589    44     3     3     2     2   370   391  6609  4909    12 
    :     ;     ?     @     ]     _     ~     ’     =     «     »     ¬     ° 
  819  1178   113     1     1     4     1    39     1   108   107   823     3 
    …     0     1     2     3     4     5     6     7     8     9     a     à 
    1    69   213    83    73    34    48    33    28    64   151 30559  1651 
    â     ä     b     c     ç     d     e     é     è     ê     ë     f     g 
  224     3  3562 14678   110 17713 63955 10354  1798  1000     5  4555  4911 
    h     i     î     ï     j     k     l     m     n     ñ     o     º     ô 
 4359 30851   226    47  1147   247 24792 12844 32525     6 25562     2   151 
    ö     œ     p     q     r     s     t     u     ù     û     ü     v     w 
   12    52 12696  4667 28237 37630 32945 25001   211    40     9  4787   164 
    x     y     z 
 1996  1222   343

Then, we can summarize it to look at the proportions of the 26 standard letters only, and we have, for Mauss,
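(for instance, those proportions can be obtained from the table T above with something like

T26=T[letters]            # letters is the built-in vector "a","b",...,"z"
T26[is.na(T26)]=0
names(T26)=letters
barplot(T26/sum(T26),col="blue",las=2)

where letters that do not appear are counted as zero)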

and for Durkheim,

If we compare the two, we have almost the same proportions,

If we look at our book in English now, we have

i.e., if we compare with Mauss for instance

So we have much more E in French than in English, but still, people writing in English use the E a lot. So looking at the E should not give us any clue… But we can see that in English, the H is as common as the L, or the C. Not in French, where the L is much more frequent than the H. And on the picture, the C key looks much cleaner than the H. We can also look at the U, which is common in French, but not in English… Here, on the keyboard, it is perfectly clean… so I guess people use it frequently.
So I would say that they write more in French than in English, on that computer.
Actually, the same idea was used a long time ago on calculators to see that Benford's law works: some digits are used much more than others (just as the legend has it that some pages of logarithm tables were never used…), see here or there. So Baptiste, if one day the keyboard is cleaned up, please send me another picture after a few weeks to see whether things have changed…
And for those who cannot imagine what it is like to work in some universities in France, just look at his blog (here). The pictures are unbelievable… Good luck Baptiste…

Lottery, and martingales

I recently got a comment on a post I published one year ago, here, about the fact that in September 2009, on the 6th and the 10th, the same 6 numbers came out at the lottery in Bulgaria (but I do not understand the question: the author of the comment asks about the order in which the numbers came out…)
Xi'an also published a post on that topic, there, since last week the same thing happened in Israel.
All that reminded me of a discussion I had with a colleague about another post (here) where I mentioned that I had found a strange distribution of numbers in the French lottery (the old one, actually). For those who want to check, all historical draws are here, in a zip file. My colleague was wondering whether I had found the martingale to win the lottery…

First, I do not like that term, since a martingale is something different from a mathematical point of view… Second, let us see whether it would have been possible to make some money… (a free lunch ?)

> loto=read.table("D:\\loto.csv",dec=",",header=TRUE,sep=";")
> ntirage=nrow(loto)
> loto=loto[51:ntirage,]
> ntirage=nrow(loto)
>   N=as.matrix(loto[,c("boule_1","boule_2","boule_3","boule_4","boule_5","boule_6")])
> n=as.vector(N)
> length(n)
[1] 28848
> (TN=table(n))
n
1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19  20
607 576 571 618 579 598 608 582 588 590 562 577 577 580 591 630 558 567 594 608
21  22  23  24  25  26  27  28  29  30  31  32  33  34  35  36  37  38  39  40
578 562 579 583 574 589 602 572 550 598 604 582 545 646 597 618 599 636 609 588
41  42  43  44  45  46  47  48  49
576 589 577 585 618 596 560 571 604

So, it might look nice, but we have to compare that distribution with the one we should get with "independent" draws. It is not simply a discrete uniform distribution: the six numbers within a draw are not independent. Each day, the 49 balls are put back in the urn, but within a day we do not have independent draws (it is a sample of balls without replacement). Hence, with 4,808 lottery draws, each number cannot be obtained more than 4,808 times. So, let us use Monte Carlo techniques to look at the theoretical distribution,

> M=matrix(NA,49,1000)
> for(s in 1:1000){
+ B=NA
+ for(i in 1:ntirage){B=c(B,sample(1:49,size=6,replace=FALSE))}
+ B=B[-1]
+ M[,s]=sort(table(B))
+ }
> q50=function(x){quantile(x,.5)}
> Q50=apply(M,1,q50)
> plot(1:49,Q50,type="l",col="red",lwd=2,ylim=range(c(M,TN)))
> q10=function(x){quantile(x,.1)}
> Q10=apply(M,1,q10)
> q90=function(x){quantile(x,.9)}
> Q90=apply(M,1,q90)
> polygon(c(1:49,49:1),c(Q10,rev(Q90)),col="light blue",border=NA)
> lines(1:49,Q10,col="red",lty=2)
> lines(1:49,Q90,col="red",lty=2)
> lines(1:49,Q50,col="red",lwd=2)
> points(1:49,sort(TN),pch=19,type="b")

Looking at the graph, it looks like some numbers appeared too frequently — especially the ones that came out the least often (bottom left), which appeared more often than the theory would suggest. So, since I had removed the last 50 draws, let us see whether we could have used that information somehow…

> nb=names(sort(TN))
> loto=read.table("D:\\loto.csv",dec=",",header=TRUE,sep=";")
> loto=loto[1:50,]
> N=as.matrix(loto[,c("boule_1","boule_2","boule_3","boule_4","boule_5","boule_6")])
> n=as.vector(N)
> TN=table(n)
> TN[nb]
> barplot(TN[nb])

Unfortunately, numbers that came out frequently over the 4,800 draws did not appear that frequently over the last 50. Playing the top numbers might not have been a great strategy.

(numbers that came out frequently over the first 4,800 draws are on the right, while those we did not see much are on the left)… What about the worst numbers: if I had decided to play the 6 that did not come out very frequently (we have seen earlier that they should have appeared even less, actually), would it have been interesting ? As we can see, the top 2 numbers of the last 50 draws were numbers that had not appeared frequently before (29 and 47 appeared 10 and 11 times respectively over those 50 draws)…
Over 50 draws of 6 balls, the expected total frequency of 6 given numbers is around 36.7,

> S=rep(NA,10000)
> for(s in 1:10000){
+ B=NA
+ for(i in 1:50){B=c(B,sample(1:49,size=6,replace=FALSE))}
+ B=B[-1]
+ S[s]=sum(B%in%(1:6))
+ }
> mean(S)
[1] 36.7694

But here for the top 6, we have

> z=TN[nb]
> sum(rev(z)[1:6])
[1] 29

i.e. the top 6 appeared 29 times over the 50 draws of 6 balls (which looks low), and for the worst 6, it is a bit higher,

>  sum(z[1:6])
[1] 38

If we look at the theoretical density of the frequency of 6 given numbers, we have
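Such a graph can be obtained from the simulations above (reusing the vector S),

plot(density(S),main="",xlab="frequency of 6 given numbers over 50 draws")
abline(v=38,col="green")   # the 6 numbers that came out the least over the first 4,800 draws
abline(v=29,col="blue")    # the 6 numbers that came out the most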

i.e. our worst 6 gave a nice average outcome (in green) while the top 6 did not appear frequently this time (here in blue) ! So we could not have used that information…
Anyway, for those of you interested in using statistics to get a free lunch: with the nouveau loto, I did not see any strange pattern (data can be downloaded here).

I am terribly sorry, but I cannot help anyone winning at the French Lottery….

Margin of error, and comparing proportions in the same sample

I recently tried to answer a simple question, asked by @adelaigue. Actually, I thought that the answer would be obvious… but it is a little bit more complex than I thought. In a recent survey about the elections in Brazil, a French newspaper mentioned that "Mme Rousseff, 62 ans, de 46,8% des intentions de vote et José Serra, 68 ans, de 42,7%" (i.e. the proportions obtained from the survey). It was also mentioned that "la marge d'erreur du sondage est de 2,2%", i.e. the margin of error is 2.2%, which means (for the journalist) that there is a "grande probabilité que les 2 candidats soient à égalité" (a "large probability" that the two candidates are actually tied).
Usually, in sampling theory, we look at the margin of error of a single proportion. The idea is that the variance of $\widehat{p}$, obtained from a sample of size $n$, is

$$\text{Var}(\widehat{p})=\frac{p(1-p)}{n}$$

thus, the standard error is

$$\sqrt{\frac{p(1-p)}{n}}$$

The standard 95% confidence interval, derived from a Gaussian approximation of the binomial distribution, is

$$\left[\widehat{p}\pm 1.96\sqrt{\frac{\widehat{p}(1-\widehat{p})}{n}}\right]$$

The largest value is obtained when $p$ is 1/2, and then we have a worst-case confidence interval (an upper bound), which is

$$\left[\widehat{p}\pm\frac{1.96}{2\sqrt{n}}\right]$$

So a margin of error of $\frac{1.96}{2\sqrt{n}}\approx\frac{1}{\sqrt{n}}$ means that $n\approx 1/(\text{margin of error})^2$. Hence, a 5% margin of error means that n=400, while 2.2% means that n≈2000:
> 1/.022^2
[1] 2066.116
Classically, we compare proportions between two samples: surveys at two different dates, surveys in different regions, surveys paid for by two different newspapers, etc. But here, we wish to compare proportions within the same sample. This has been considered in an "old" paper published in 1993 in The American Statistician,

It contains nice figures to illustrate the difference between the standard approach,

and the one we would like to study here.

This point is mentioned in the book by Kish, Survey Sampling (thanks Benoit for the reference),


Let $\widehat{p}_A$ and $\widehat{p}_B$ denote the two empirical frequencies we obtained from the sample, based on $n$ observations. Then, since

$$\text{Var}(\widehat{p}_A)=\frac{p_A(1-p_A)}{n},\qquad \text{Var}(\widehat{p}_B)=\frac{p_B(1-p_B)}{n}$$

and

$$\text{Cov}(\widehat{p}_A,\widehat{p}_B)=-\frac{p_A\,p_B}{n}$$

we have

$$\text{Var}(\widehat{p}_A-\widehat{p}_B)=\frac{p_A(1-p_A)}{n}+\frac{p_B(1-p_B)}{n}+2\,\frac{p_A\,p_B}{n}=\frac{(p_A+p_B)-(p_A-p_B)^2}{n}$$

Thus, a natural margin of error on the difference between the two proportions is here

$$1.96\sqrt{\frac{(p_A+p_B)-(p_A-p_B)^2}{n}}$$

which is here 4 points
> n=2000
> p1=46.8/100
> p2=42.7/100
> 1.96*sqrt((p1+p2)-(p1-p2)^2)/sqrt(n)
[1] 0.04142327
Which is exactly the difference we have here ! Hence, the probability of reaching such a value is quite small (2%)
> s=sqrt(p1*(1-p1)/n+p2*(1-p2)/n+2*p1*p2/n)
> (p1-p2)/s
[1] 1.939972
> 1-pnorm(p1-p2,mean=0,sd=sqrt((p1+p2)-(p1-p2)^2)/sqrt(n))
[1] 0.02619152

Actually, we can compare the three margins of error we have so far,

  • the upper bound
$$\frac{1.96}{2\sqrt{n}}$$
  • the "average" one
$$1.96\sqrt{\frac{p(1-p)}{n}}$$

where

$$p=\frac{p_A+p_B}{2}$$
  • the more accurate one we just obtained,
$$1.96\sqrt{\frac{2p-\delta^2}{n}}$$

where $\delta=p_A-p_B$.
> p=seq(0,.5,by=.01)
> ic1=rep(1.96/sqrt(4*n),length(p))
> ic2=1.96*sqrt(p*(1-p))/sqrt(n)
> delta=.01
> ic31=1.96*sqrt(2*p-delta^2)/sqrt(n)
> delta=.2
> ic32=1.96*sqrt(2*p-delta^2)/sqrt(n)
> plot(p,ic32,type="l",col="blue")
> lines(p,ic31,col="red")
> lines(p,ic2)
> lines(p,ic1,lty=2)
So on the graph below, the dotted line is the standard upper bound, the plain black line being the more accurate one when the probability is $p$ (on the x-axis). The red line is the true margin of error with a small difference between candidates (1 point) and the blue line with a large difference (20 points).


Remark: an alternative is to consider a chi-square test, comparing the observed distribution $(\widehat{p}_A,\widehat{p}_B)$ with the distribution $(p,p)$, where $p$ is the average proportion, i.e. 44.75%. Then

$$\chi^2=n\left[\frac{(p_A-p)^2}{p}+\frac{(p_B-p)^2}{p}\right]$$

i.e. $\chi^2\approx 3.76$
> p=(p1+p2)/2
> (x2=n*((p1-p)^2/p+(p2-p)^2/p))
[1] 3.756425
> 1-pchisq(x2,df=1)
[1] 0.05260495
Under the null hypothesis, $\chi^2$ should have a chi-square distribution with one degree of freedom (since the average is fixed here). Here the probability of reaching that level is around 5% (which can be compared with the 2% we had before).

So finally, I would think that here, stating that there is a “large probability” is not correct…

Too large datasets for regression ? What about subsampling….

Recently, a classmate working in an insurance company told me his datasets were too large to run simple regressions (GLMs, which involve optimization issues), and that they were thinking of a reward for whoever writes the best R code (or at least the fastest). My first idea was to use subsampling techniques, arguing that 10 regressions on 100,000 observations can take less time than one regression on 1,000,000 observations. And perhaps also provide better results…

  • Time to run a regression, as a function of the number of observations

Here, I generate a dataset where the response is Poisson,

$$Y_i\sim\mathcal{P}(\exp(\lambda_i)),\qquad \lambda_i=X_{1,i}+0.2\,X_{5,i}-4\,f(X_{3,i})+\mathbf{1}(X_{2,i}=A)-2\,\mathbf{1}(X_{2,i}=B)-5\,\mathbf{1}(X_{2,i}=C)$$

(with $f$ the density of a Beta(2,5) distribution), and we fit

$$\log\mu_i=\beta_0+s(X_{1,i})+\beta_{X_{2,i}}+\beta_3X_{3,i}+\beta_4X_{4,i}+\beta_5X_{5,i}+\beta_6X_{6,i}+\log E_i$$

where $s(\cdot)$ is a spline function (just to make it as general as possible, since in insurance ratemaking we include continuous variates that do not influence the claims frequency linearly in the score). Yes, there are also useless variables, including one that is strongly correlated with one that does have an impact in the regression. The code to generate the dataset is simply

> library(mnormt)    # rmnorm, to generate the correlated pair (X4,X5)
> library(splines)   # bs(), used in the regressions below
> n=10000
> X1=rexp(n)
> X2=sample(c("A","B","C"),size=n,replace=TRUE)
> X3=runif(n)
> Z=rmnorm(n,c(0,0),matrix(c(1,0.8,.8,1),2,2))
> X4=Z[,1]
> X5=Z[,2]
> X6=X1^2
> E=runif(n)
> lambda=.2*X5-4*dbeta(X3,2,5)+X1+
+1*(X2=="A")-2*(X2=="B")-5*(X2=="C")
> Y=rpois(n,exp(lambda))
> base=data.frame(Y,X1,X2,X3,X4,X5,X6,E)

We would like to study the time it takes to run a regression, as a function of the size (i.e. the number of lines $n$) of the dataset.

> system.time( glm(Y~bs(X1)+X2+X3+X4+
+ X5+X6+offset(log(E)),family=poisson,
+ data=base) )
utilisateur     système      écoulé
0.25        0.00        0.25

Here, the time I look at is the last one (the elapsed time). So far it was rather simple, but it is not the best model I can get. Let us use a stepwise (backward) variable selection,

> system.time( step(glm(Y~bs(X1)+X2+X3+
+ X4+X5+X6+offset(log(E)),family=poisson,
+ data=base)) )
Start:  AIC=2882.1
Y ~ bs(X1) + X2 + X3 + X4 + X5 + X6 + offset(log(E))
Step:  AIC=2882.1
Y ~ bs(X1) + X2 + X3 + X4 + X5 + offset(log(E))
Df Deviance    AIC
<none>        2236.0 2882.1
- X5      1   2240.1 2884.2
- X4      1   2244.1 2888.2
- X3      1   4783.2 5427.3
- X2      2   5311.4 5953.5
- bs(X1)  3   6273.7 6913.8
utilisateur     système      écoulé
1.82        0.03        1.86

Finally, from the first regression, we have points in black (based on 200 simulated datasets), and with a stepwise procedure, we have the points in red.
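For the record, here is a sketch of the kind of experiment behind those points (with a hypothetical grid of sample sizes and a single replication per size; the data are generated exactly as above),

library(mnormt); library(splines)
genbase=function(n){
  X1=rexp(n); X2=sample(c("A","B","C"),size=n,replace=TRUE); X3=runif(n)
  Z=rmnorm(n,c(0,0),matrix(c(1,.8,.8,1),2,2)); X4=Z[,1]; X5=Z[,2]; X6=X1^2
  E=runif(n)
  lambda=.2*X5-4*dbeta(X3,2,5)+X1+1*(X2=="A")-2*(X2=="B")-5*(X2=="C")
  data.frame(Y=rpois(n,exp(lambda)),X1,X2,X3,X4,X5,X6,E)
}
sizes=c(10000,20000,50000,100000,200000)
elapsed=sapply(sizes,function(n){
  base=genbase(n)
  system.time(glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
  family=poisson,data=base))[3]
})
plot(sizes,elapsed,log="xy",xlab="number of observations",ylab="elapsed time (seconds)")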

i.e. it might look linear (proportional), but if it were linear, then on a log-log scale we should also have straight lines, with slope 1,

Actually, it looks like a convex function.

The interpretation of that convexity can be misleading. On the graph below, on the left, a dataset two times bigger than the previous one (black point) takes less than two times longer to run, while on the right, it takes more than two times longer,

Convexity can simply be interpreted as "datasets that are too large take a lot of time, but so do datasets that are too small" (there is some fixed overhead per regression). Which is a first step: it could be interesting, in some cases, to run several regressions on smaller datasets…

  • Running 100 regressions on 100 lines, or running 1 regression on 10,000 lines ?

Here, we have datasets with n=200,000 lines. The question is how long it will take if we subdivide them into k subsamples (of equal size), and run k regressions.

> nk=trunc(n/k); classe=rep(1:k,each=nk); nt=nk*k
> base=data.frame(Y=Y[1:nt],X1=X1[1:nt],
+ X2=X2[1:nt],X3=X3[1:nt],X4=X4[1:nt],X5=X5[1:nt],
+ X6=X6[1:nt],E=E[1:nt],classe)
> system.time( for(j in 1:k){
+  glm(Y~bs(X1)+X2+X3+X4+X5+
+ X6+offset(log(E)),family=poisson
+ ,data=base,subset=classe==j) })
utilisateur     système      écoulé
1.31        0.00        1.31
> system.time( for(j in 1:k){
+      step(glm(Y~bs(X1)+X2+X3+
+ X4+X5+X6+offset(log(E)),family=
+ poisson,data=base,subset=classe==j)) })
Start:  AIC=183.97
Y ~ bs(X1) + X2 + X3 + X4 + X5 + X6 + offset(log(E))

[…]

  Df Deviance    AIC
<none>        117.15 213.04
- X2      2   250.15 342.04
- X3      1   251.00 344.89
- X4      1   420.63 514.53
- bs(X1)  3   626.84 716.74
utilisateur     système      écoulé
11.97        0.03       12.31

On the graph below, we have the time (y-axis, here on a log scale) it took to run k regressions on samples of size n/k, as a function of k (x-axis), including the time it took to run the regression on the dataset of size n, which is the concentration of dots on the left (i.e. k=1), both on the 6 regressors – in black – and with a stepwise procedure – in red. One has to keep in mind that I did not remove the printing option in the stepwise procedure, so it might be difficult to compare the two clouds (black vs. red). Nevertheless, we clearly see that if we run k regressions on samples of size n/k, when k is not too large, i.e. less than 10 or 15, it is not longer than the regression on the n=200,000 lines.

So here we see that running 100 regressions on 2,000 lines is longer than running 1 regression on 200,000 lines… But maybe we are not comparing things that are actually comparable: what if it takes a bit longer, but we strongly improve the quality of our estimators ?

  • What about the quality of the output ?

Here, we consider only one dataset, with n=100,000 lines (just to make it run a bit faster), and k=20 subsets. Recall that the generated dataset is from

$$Y_i\sim\mathcal{P}(\exp(\lambda_i)),\qquad \lambda_i=X_{1,i}+0.2\,X_{5,i}-4\,f(X_{3,i})+\mathbf{1}(X_{2,i}=A)-2\,\mathbf{1}(X_{2,i}=B)-5\,\mathbf{1}(X_{2,i}=C)$$

and we fit

$$\log\mu_i=\beta_0+s(X_{1,i})+\beta_{X_{2,i}}+\beta_3X_{3,i}+\beta_4X_{4,i}+\beta_5X_{5,i}+\beta_6X_{6,i}+\log E_i$$

Here, we plot one of the estimated coefficients, $\widehat{\beta}$, together with a confidence interval, defined as

$$\left[\widehat{\beta}\pm 1.96\,\widehat{\text{se}}(\widehat{\beta})\right]$$

The light blue segment is the initial estimator, while the blue one is obtained from the stepwise procedure. The grey area represents the estimation on the overall sample, while the k segments on the right are the k estimators (each on a sample of size n/k).
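A sketch of how those k estimators can be collected, focusing for instance on the coefficient of X4 (assuming base, classe and k are defined as above),

COEF=SE=rep(NA,k)
for(j in 1:k){
  reg=glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
  family=poisson,data=base,subset=classe==j)
  COEF[j]=coef(reg)["X4"]
  SE[j]=summary(reg)$coefficients["X4","Std. Error"]
}
regall=glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),family=poisson,data=base)
c(mean(COEF),coef(regall)["X4"])   # average of the k estimators vs. the overall estimator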

We can see that we have much more volatility on those k estimators, but their average (horizontal dotted lines) is not so bad… The true value (i.e. the one used to generate the dataset) is the dotted black horizontal line.
And if we repeat that on 1,000 simulated datasets, we obtain the following distribution for $\widehat{\beta}$ (blue line), so we have an unbiased estimator of our parameter (the vertical line being the true value), here including a stepwise procedure,

But if we add, as the red curve, the distribution of the average of the k estimators obtained on the subsamples (the previous density now being the light blue line in the back), we see that taking the average of estimators computed on subsamples is not bad at all, on the contrary,

and for those who think that the stepwise procedure is a mistake, here is what we get without it,

So what we can see is that running 20 regressions can take (a little) more time (from what we have seen earlier) than running only one on the whole dataset… but it provides better estimates. So the tradeoff is not that simple, and maybe running several regressions on subsamples of huge datasets can be a proper alternative.