Open data and ecological fallacy

A couple of days ago, on Twitter, @alung mentioned an old post I published on this blog about open data, explaining how difficult it was to get access to data in France (the post, published almost 18 months ago, can be found here, in French). And @alung was wondering if it was still that hard to access nice datasets. My first answer was that, actually, people are more receptive now, and I have more people willing to share their data. And on the internet, amazing datasets can now be found very easily. For instance in France, detailed information can be found about qualifications, housing and jobs, by small geographical areas, on http://www.recensement.insee.fr (thanks @coulmont for the link). And that is great for researchers (and anyone actually willing to check things by themselves).

But one should be aware that those aggregate data might not be sufficient to build econometric models, and to infer individual behaviors. Thinking that relationships observed for groups necessarily hold for individuals is a common fallacy (the so-called "ecological fallacy").

In a popular paper, Robinson (1950) discussed "ecological inference", stressing the difference between ecological correlations (on groups) and individual correlations (see also Thorndike (1937)). He considered two aggregated quantities, per American state: the percentage of the population that was foreign-born, and the percentage that was literate. One dataset used in the paper is the following

> library(eco)
> data(forgnlit30)
> tail(forgnlit30)
Y          X         W1          W2 ICPSR
43 0.076931986 0.03097168 0.06834300 0.077206504    66
44 0.006617641 0.11479052 0.03568792 0.002847920    67
45 0.006991899 0.11459207 0.04151310 0.002524065    68
46 0.012793782 0.18491515 0.05690731 0.002785916    71
47 0.007322475 0.13196654 0.03589512 0.002978594    72
48 0.007917342 0.18816461 0.02949187 0.002916866    73

The correlation between  foreign-born and literacy was

> cor(forgnlit30$X,1-forgnlit30$Y)
[1] 0.2069447

So it seems that there is a positive correlation, and a quick interpretation could be that in the 30's, Americans were illiterate, but fortunately, literate immigrants got the idea to come to the US. But here, it is like in Simpson's paradox: actually, the sign should be negative, as obtained in individual studies. In the state-level study, the correlation was positive mainly because foreign-born people tended to live in states where the native-born are relatively literate…

Hence, the problem is clearly how individuals were grouped. Consider the following set of individual observations,

> library(mnormt)
> n=1000
> r=-.5
> Z=rmnorm(n,c(0,0),matrix(c(1,r,r,1),2,2))
> X=Z[,1]
> E=Z[,2]
> Y=3+2*X+E
> cor(X,Y)
[1] 0.8636764

Consider now some regrouping, e.g.

> I=cut(Z[,2],qnorm(seq(0,1,by=.05)))
> Yg=tapply(Y,I,mean)
> Xg=tapply(X,I,mean)

Then the correlation is rather different,

>  cor(Xg,Yg)
[1] 0.1476422

Here we have a strong positive individual correlation, and a small (positive) correlation on grouped data, but almost anything is possible.
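
To illustrate that last point, a small aside of mine: the ecological correlation depends entirely on the variable used to build the groups,

I2=cut(X,quantile(X,seq(0,1,by=.05)),include.lowest=TRUE)     # groups based on X
cor(tapply(X,I2,mean),tapply(Y,I2,mean))                      # close to +1
I3=cut(X+3*E,quantile(X+3*E,seq(0,1,by=.05)),include.lowest=TRUE)  # groups based on X+3E
cor(tapply(X,I3,mean),tapply(Y,I3,mean))                      # now negative

Grouping on X pushes the correlation of group means close to one, while grouping on X+3*E flips its sign, even though the individual correlation between X and Y is unchanged (and strongly positive).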

Models with random coefficients have been used to make ecological inferences. But that is a long story, and I will probably come back with a more detailed post on that topic, since I am still working on it with @coulmont (following some comments by @frbonnet on his post about the recent French elections on http://coulmont.com/blog/).

Proving tautological versus trivial results in mathematics

There is something that might be fun in mathematics, which is the connection between trivial, tautological and difficult questions. Sometimes, things are so intuitive that they seem obvious. But mathematicians aren't Jedis, and they should not trust their intuition too much… I mean, intuition is fine, but it is not a proof. It is like those standard results we learn in topology courses, e.g. "the closure of an open ball is not necessarily the closed ball". The other thing is that, after a while, you try to prove something, until someone makes you realize that it is the definition…

And this morning, while I was trying to make a coffee, @renaudjf came with a simple question (yes, it always starts like that). Consider the standard algorithm to generate a conditional random variable. Assume that $\Theta$ has a priori distribution $\pi$, with cumulative distribution function $F_\Theta$, and that $X$, given $\Theta=\theta$, has (conditional) distribution $F_{X|\theta}$.

The standard idea, to generate values of $X$, is Monte Carlo simulation:
  •  step 1: generate $\theta$ from the distribution $\pi$
  •  step 2: given that generated $\theta$, generate $X$ from the conditional distribution $F_{X|\theta}$
"Can we prove that we actually generate from the (true, maybe hard to characterize) non-conditional distribution of $X$? Or is it just trivial?". After those philosophical questions, we came to the point that if it were trivial, then we should be able to prove it. A standard way of writing the algorithm is to use the quantile-based technique
  •  $\theta=F_{\Theta}^{-1}(U_1)$ with $U_1\sim\mathcal{U}([0,1])$,
  •  $X=F_{X|\theta}^{-1}(U_2)$ with $U_2\sim\mathcal{U}([0,1])$,
For instance, to generate a negative binomial distribution (a Gamma mixture of Poisson distributions),
n=1
theta=rgamma(n,3,3)
X=rpois(n,lambda=theta)
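
Before proving anything, a quick sanity check by simulation (my own aside): with $\Theta\sim\text{Gamma}(3,3)$ and $X$ conditionally Poisson, $X$ should indeed be negative binomial, here with size $3$ and probability $3/(3+1)$,

nsim=1e6
theta=rgamma(nsim,3,3)
X=rpois(nsim,lambda=theta)
round(rbind(empirical=table(factor(X,levels=0:5))/nsim,
theoretical=dnbinom(0:5,size=3,prob=3/4)),4)
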
Thus, let $X=F_{X|\Theta=F_\Theta^{-1}(U_1)}^{-1}(U_2)$, where $U_1$ and $U_2$ are two independent random variables with a uniform distribution on the unit interval. Let us try to derive its distribution, i.e.
$$\mathbb{P}(X\leq x)=\mathbb{P}\left(F_{X|\Theta=F_\Theta^{-1}(U_1)}^{-1}(U_2)\leq x\right)=\int_0^1\mathbb{P}\left(U_2\leq F_{X|\Theta=F_\Theta^{-1}(u_1)}(x)\right)du_1$$
so
$$\mathbb{P}(X\leq x)=\int_0^1 F_{X|\Theta=F_\Theta^{-1}(u_1)}(x)\,du_1$$
and if we consider the change of variable $\theta=F_\Theta^{-1}(u_1)$, i.e. $u_1=F_\Theta(\theta)$,
$$\mathbb{P}(X\leq x)=\int F_{X|\Theta=\theta}(x)\,dF_\Theta(\theta)=\int F_{X|\Theta=\theta}(x)\,\pi(\theta)\,d\theta$$
which is exactly the non-conditional distribution of $X$.
And then, you’re quite happy because you’ve been able to prove a trivial result ! But next time, I promise, we’ll try to derive an amazing theorem that will change humanity… but next time only, first, let us prove trivial results.

Short versus long papers, in academic journals

This Monday, during my talk on quantile regressions (at the Montreal R-meeting), we’ve seen how those nice graphs could be interpreted, with the evolution of the slope of the linear regression, as a function of the probability level. One illustration was on large hurricanes, from Elsner, Kossin & Jagger (2008). The other one was on birthweight, from Abrevaya (2001).

That technique can also be used to illustrate features of academic publishing, e.g. the length of papers over time. Actually, the data we can extract from Scopus are quite similar to the ones used on hurricanes. For several journals, it is possible to look at the length of articles. Since Scopus is quite expensive ($60,000 per year for the campus, as far as I remember), I can imagine the penalty I might have to pay for sharing such a dataset, so I will only show how the data can be processed,

base=read.table("/home/scopus.csv",
header=TRUE,sep=",")
pages=base$Page.end-base$Page.start
year=base$Year

Again, a first idea can be to look at boxplots, and regression on (nonparametric) quantiles, here for Econometrica,

boxplot(pages~as.factor(year),col="light blue")
Q=function(p=.9) as.vector(by(pages,as.factor(year),
function(x) quantile(x,p)))
u=1:16
points(u,Q(.9),pch=19,col="blue")
abline(lm(Q(.9)~u,weights=table(year)),lwd=2,col="blue")

Consider now (as in the slides in the previous post) a quantile regression (instead of a regression on quantiles), for instance in the Annals of Probability,

library(quantreg)
u=seq(.05,.95,by=.01)
coefstd=function(u) summary(rq(pages~year,
tau=u))$coefficients[,2]
coefest=function(u) summary(rq(pages~year,
tau=u))$coefficients[,1]
CS=Vectorize(coefstd)(u)
CE=Vectorize(coefest)(u)
k=2
plot(u,CE[k,],ylim=c(min(CE[k,]-2*CS[k,]),
max(CE[k,]+2*CS[k,])))
polygon(c(u,rev(u)),c(CE[k,]+1.96*CS[k,],
rev(CE[k,]-1.96*CS[k,])),
col="light green",border=NA)
lines(u,CE[k,],lwd=2,col="red")
abline(h=0)

We have the following slope, for the year, as a function of the probability level,

The slope is always positive, so the size of papers is increasing with time, for both short and long papers. But the influence of time is much larger for long papers than for short ones: for short papers (lower decile), the size keeps increasing by about one more page every three years, while for long papers (upper decile), it is two more pages every three years.
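
Reading those two claims as slopes of the fitted quantile regression model (my paraphrase of the graph),

$$Q_{\text{pages}}(\tau\mid\text{year})=\beta_0(\tau)+\beta_1(\tau)\,\text{year},\qquad \widehat{\beta}_1(0.10)\approx\tfrac{1}{3},\qquad \widehat{\beta}_1(0.90)\approx\tfrac{2}{3}\ \text{pages per year}.$$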

If we look now at the Annals of Statistics, we have

and for the evolution of the slope of the quantile regression,

Again the impact is positive: papers are longer in 2010 than 15 years ago. But the trend is reversed: short papers (lower decile) are getting much longer, almost one more page every year, while long papers increase only by about one more page every two years… Initially, I wanted to run such a study over a much longer term, with quantile regressions and splines, to see whether there might have been a change, both in the lower and the upper tails. Unfortunately, as some colleagues suggested, there might have been changes in the format of the journals (columns, margins, fonts, etc.). That's a shame, because I rediscovered nice short papers of 5-10 pages published 20 or 30 years ago. They are nice to read (and also potentially interesting for a post on the blog). 5 pages, that's perfect, but 40 pages, that's way too long. I wonder if I am the only one having this feeling, missing those short but extremely interesting papers…

Median regression and geometry

Following my talk yesterday, Pierre asked me to justify a point I had mentioned orally without proving it: "a median regression line necessarily passes through two points of the scatterplot". For instance,

x=c(1,2,3)
y=c(3,7,8)
plot(x,y,pch=19,cex=1.5,xlim=c(0,4),ylim=c(0,10))
library(quantreg)
abline(rq(y~x,tau=.5),col="red")

Let us try to justify it… in a somewhat heuristic way. Let us start more or less at random, with a line that passes "between" the points (we will admit that the median regression splits the plane in two, with half of the points below and half above, up to one point, for parity reasons).

plot(x,y,pch=19,cex=1.5,xlim=c(0,4),ylim=c(0,10))
abline(a=-1,b=3.2,col="blue")

it looks nice, but we can surely do better. In particular, we want to minimize the sum of the absolute values of the errors (that is the principle of median regression). Let us try to translate the line, upwards, or downwards,

plot(x,y,pch=19,cex=1.5,xlim=c(0,4),ylim=c(0,10))
abline(a=-1,b=3.2,col="blue",lwd=2)
for(i in seq(-2,3,by=.5)){
abline(a=i,b=3.2,col="blue",lty=2)}

If we look at the value of the sum of the absolute errors, we get

d=function(h) sum(abs(y-(h+3.2*x)))
H=seq(-4,6,by=.01)
D=Vectorize(d)(H)
plot(H,D,type="l")

Formally, the optimum is here

> optimize(d,lower=-5,upper=5)
$minimum
[1] -0.2000091

$objective
[1] 2.200009

So let us keep that line, and note that it passes through one of the points,

And that is quite natural. We started with two points above the line, and one below. Let $S$ be the sum of the absolute errors. If we translate the line upwards by $\varepsilon$, the two errors above decrease by $\varepsilon$ each while the one below increases by $\varepsilon$, so the sum goes from $S$ to $S-2\varepsilon+\varepsilon=S-\varepsilon$ (as long as two points remain above, since the point below will stay below). If we translate downwards by $\varepsilon$, the sum goes from $S$ to $S+2\varepsilon-\varepsilon=S+\varepsilon$ (again, as long as there is only one point below). In short, it pays to translate upwards. If we go past the first point we meet, we end up in the opposite situation, with one point above and two below, and it then pays to come back down. So the optimal translation stops as soon as the line crosses a point. In other words, the regression line necessarily passes through a point. At least one.

Now, let us try to rotate the line around that point, again in order to minimize the sum of the absolute errors,

plot(x,y,pch=19,cex=1.5,xlim=c(0,4),ylim=c(0,10))
abline(a=optimize(d,lower=-5,upper=5)$minimum,b=3.2,
col="blue",lwd=2)
points(x[1],y[1],cex=1.8,lwd=2,col="blue")
for(i in seq(-1,5,by=.25)){
abline(a=(y[1]-i*x[1]),b=i,col="blue",lty=2)}

The distance is then, as a function of the slope of the line,

d2=function(h) sum(abs(y-((y[1]-h*x[1])+h*x)))
H=seq(-4,6,by=.01)
D=Vectorize(d2)(H)
plot(H,D,type="l")

Here again, we can formalize this a bit,

> optimize(d2,lower=-5,upper=5)
$minimum
[1] 2.500018

$objective
[1] 1.500018

And if we look at that last line, it passes through two points,

plot(x,y,pch=19,cex=1.5,xlim=c(0,4),ylim=c(0,10))
h=optimize(d2,lower=-5,upper=5)$minimum
abline(a=(y[1]-h*x[1]),
b=h,col="purple",lwd=2)

To understand why, let us compare the two cases,

  • when pivoting downwards, for one of the points the absolute error increases while for the other it decreases,

  • when pivoting upwards, again for one of the points the absolute error increases while for the other it decreases, but the other way around compared to the previous case,

To put it simply, in the first case the gain (on the decrease) compensates the loss, while in the second case it is the opposite. So it pays to pivot downwards, i.e. towards the point on the right. Until we reach it. And optimally, the line will pass through that point. So the line passes through two points. I leave it to the braver readers to look at what happens with more than three points, but it is always the same story…
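
To check that this is not specific to the three-point example, here is a quick test on simulated data (a small check of mine): with continuous data, the fitted median regression line interpolates exactly two observations,

library(quantreg)
set.seed(123)
xs=runif(100)
ys=1+2*xs+rnorm(100)
fit=rq(ys~xs,tau=.5)
sum(abs(residuals(fit))<1e-8)   # should be 2: the line passes exactly through two points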

Unicorns, philosophers, grumpy old men, peer review and polling institutes

Yes, quite a program… When the children were small, I remember being amazed, many times, by the depth of the questions they could ask,

dad, how do you know all that?

or

dad, if I had not been born, what would it be like?

All these questions are embarrassing, because they are rather fundamental when you think about it. The problem is that children then grow up, and believe they have answers to those very questions,

are you sure you don't want a unicorn for your birthday?

no, daddy, unicorns don't exist

of course they exist, don't forget that dads know everything! and anyway, how do you know they don't exist?

well, I've never seen one…

so what… you've never seen a crocodile either, and yet crocodiles do exist…

yes, but I've seen crocodiles on TV

and on TV I've seen wookies and ewoks in Star Wars, and dragons in Harry Potter, and…

yes, but TV isn't real! and anyway, nobody has ever seen a unicorn

and that's enough to say they don't exist, you think?

(actually, one can have the same kind of dialogue about god, or better, Santa Claus). Anyway, following that discussion, I was wondering what makes us believe in something, or stop believing in it. In science, for instance: the classical opposition between science and religion rests on the fact that religion relies on faith, whereas science does not: we are supposed to have a proof of what we claim. A bit like the character of Gorgias (in Plato), the forerunner of all our politicians: we are right because we had the last word, not because we proved it. But let us be lucid. Science is becoming more and more a matter of faith and belief. In class, for instance, the vast majority of my students believe me if I tell them that a method does not work; few of them would think of asking me to justify it, or at least to find some evidence supporting my claim (since the tendency is to do fewer and fewer proofs in class, especially in applied statistics). In time series, for instance: when I was teaching at Université Paris Dauphine 10 years ago, we would spend hours on the choice of the autoregressive orders (do we need a lag at order 4, or 12?). Now, all the software packages do it automatically. A bit like the choice of the bandwidth in kernel estimation… in R, there is an "optimal" bandwidth choice… and often, when we are in a hurry, we believe it.

Yes, in science, we use this faith argument a lot. The most classical one being to say that something was published in a major journal… And that is the principle of peer review: we delegate to other researchers the responsibility of looking in detail at a paper that often exceeds our own competence, so that we can rely on their judgment with our eyes closed. This is of course stupid, and that is why we ask master's students to dig into papers in detail, to re-read and understand the proofs, and to run simulations to check whether things actually work! And I would be the first to say that one should never believe what is written in my books or my papers! Or on the blog! Although… on the blog, I post the codes in full, as much as possible, to make it easier for people to check what I did. And that is the big advantage of blogs for researchers, compared to published articles: we are not there to show off (I refer, by the way, to a very good article explaining how to write a scientific paper). On a blog, we can more easily start discussions, and be much more transparent, and modest!

Incidentally, this faith story also shows up in that mise en abyme about surveys, mentioned in a previous post, on the fact that nobody believes in surveys anymore. Yet great sociologists and great statisticians have been working for decades on survey theory. Hundreds of articles (published in journals reviewed by other great scientists) justify certain analysis methods. And yet the faith is not there. Perhaps it is time for the polling institutes to be more transparent, and to give access to their raw data. Because I am convinced that transparency is the key to everything. Well, almost… as far as the existence of Santa Claus is concerned, I will fight as hard as I can to maintain the doubt!

Talk on quantiles at the R Montreal group

This afternoon, I will be giving a two-hour talk at McGill on quantiles, quantile regressions, confidence regions, bagplots and outliers. Before defining (properly) quantile regressions, we will mention regression on (local) quantiles, as on the graph below, on hurricanes,

In order to illustrate quantile regression, consider the following natality database,

base=read.table(
"http://freakonometrics.free.fr/natality2005.txt",
header=TRUE,sep=";")

We can use it to produce those nice graphs found in several papers, modeling the weight of newborns,

library(quantreg)
u=seq(.05,.95,by=.01)
coefstd=function(u) summary(rq(WEIGHT~SEX+
SMOKER+WEIGHTGAIN+BIRTHRECORD+AGE+ BLACKM+
BLACKF+COLLEGE,data=base,tau=u))$coefficients[,2]
coefest=function(u) summary(rq(WEIGHT~SEX+
SMOKER+WEIGHTGAIN+BIRTHRECORD+AGE+ BLACKM+
BLACKF+COLLEGE,data=base,tau=u))$coefficients[,1]
CS=Vectorize(coefstd)(u)
CE=Vectorize(coefest)(u)

The slides can be downloaded on the blog, as well as the R-code.

25%? Seriously, do you believe that?

(following up on the statistics posted by @guybirenbaum the other day)

Let us think about it for two minutes (or a bit more, which is what we did at lunch with @J_P_Boucher). You are asked, in a survey, about surveys: if you believe in surveys, you say so. But if you do not believe in them… you answer anything (otherwise it means you do believe in them, right?). Let us try to formalize this, to understand it better… Let $X$ be the answer given, and $Y$ the truth (what the person interviewed really thinks), where $C$ stands for "believes in surveys" and $CP$ for "does not believe". We know that

$$\mathbb{P}(X=C)=1-\mathbb{P}(X=CP)=\frac{1}{4}$$

But what we want to compute is $p=\mathbb{P}(Y=C)$, the probability that a person really believes in surveys. For that, we know that

$$\mathbb{P}(X=C)=\mathbb{P}(X=C|Y=C)\cdot\mathbb{P}(Y=C)+\mathbb{P}(X=C|Y=CP)\cdot\mathbb{P}(Y=CP)$$

For the first term, a person who believes in surveys will say so (otherwise they do not really believe), so $\mathbb{P}(X=C|Y=C)=1$. On the other hand, a person who does not believe can answer anything. Say that $\mathbb{P}(X=C|Y=CP)$ equals $\alpha$. In that case

$$\frac{1}{4}=p+\alpha(1-p)$$

or

$$p=\frac{1/4-\alpha}{1-\alpha}$$

Suppose for instance that a small proportion of those who do not believe in surveys claim that they do, say 10%. In that case, the true probability that a person believes in surveys would rather be around 16%. If $\alpha$ were a bit larger (25%), we would simply have found the proportion of jokers who answer anything in surveys, because nobody believes in them!
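
A two-line check of the formula above (nothing more than the algebra, written in R),

p=function(alpha) (1/4-alpha)/(1-alpha)
p(.10)   # about 16.7%
p(.25)   # 0: we would only be measuring the jokers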

Pigeonholes and triangles

Once again, there was a nice maths puzzle on http://www.futilitycloset.com/ last week (but without further reference). The question was the following, “Five points are located in an equilateral triangle with 10-inch sides (or on its perimeter). What’s the maximum distance between the two closest points?” Actually, this is simply an application of Dirichlet’s pigeonholes theorem, as mentioned in the answer of the puzzle, “Connect the midpoints of the triangle’s sides to make four smaller triangles. Because there are five points, two of them must fall within one of these triangles. The maximum distance between these two is 5 inches.”

Thus, with Dirichlet's pigeonhole theorem, we know not only the maximal minimum distance, but also where the points must be (at the corners of the inside triangles). Here, there might be two possibilities (along with the different shapes obtained using rotations),

Further, we also observe that this result is valid not only with five points, but with six. And if we go further, e.g. with nine points, we have the following

So actually, it is possible to state a simple conjecture: let $n$ denote the number of points, and let $k$ be such that
$$\frac{k(k+1)}{2} < n \leq \frac{(k+1)(k+2)}{2}$$
(i.e. $n$ exceeds the number of nodes of the grid that splits the triangle into $(k-1)^2$ inside-triangles, but not the number of nodes of the grid that splits it into $k^2$ of them); then the minimal distance is $10/k$ inches. Which can be visualized below,

Based on that pigeonhole theorem, I have the intuition that this result is valid (here we just count the number of inside-triangles), but can we check whether it is correct or not? One idea might be to draw points randomly, and see where they end up, or at least the maximal distance obtained over millions of random draws… But standard Monte Carlo might take a while… so we can use two ideas. The first idea comes from quasi-Monte Carlo techniques: since we want points to be as separated as possible, we do not need to draw them randomly in the triangle; perhaps we can draw them randomly on some grid. The second idea comes from the Latin hypercube technique (and that pigeonhole theorem): instead of generating points anywhere in the triangle, perhaps we can draw them in specific regions. For instance, with five points, we know that four of them have to be spread over the four sub-triangles, and the additional point can be in any one of them; and because of symmetry, with five points, we can claim that this additional point has to be in one specific sub-triangle. A random sample with five points within the same sub-triangle would be useless (and a waste of computational time).

With the following code, we define a grid for a triangle, either upward or downward, starting from some point (on the left), with a given length, and a given number of subdivisions.

TRIANGLES=function(xinf,yinf,l,n,updown="up"){
X=NA;Y=NA
for(i in n:1){
u=xinf+seq(0+(n-i)/2/(n-1),1-(n-i)/2/(n-1),length=i)*l
if(updown=="up") v=rep(yinf+(n-i)*sqrt(3)/2*l/(n-1),i)
if(updown=="down") v=rep(yinf-(n-i)*sqrt(3)/2*l/(n-1),i)
X=c(X,u);Y=c(Y,v)}
return(cbind(X[-1],Y[-1]))}
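
For instance, my guess at how the grids shown below were obtained: a triangular grid with 20 points on the lower side can be drawn with

G=TRIANGLES(0,0,1,20)
plot(G,pch=19,cex=.5,xlab="",ylab="",asp=1)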

Here are grids with, respectively, 20 and 50 points on the lower side. It is then possible to define 4 grids, corresponding to the four sub-triangles,

k=3;st=6
firstgrid=TRIANGLES(0,0,(k-2)/(k-1),k-1)
secondgrid1=TRIANGLES(
firstgrid[1,1],firstgrid[1,2],1*(k-2)/(k-1),st)
secondgrid2=TRIANGLES(
firstgrid[2,1],firstgrid[1,2],1*(k-2)/(k-1),st)
secondgrid3=TRIANGLES(
firstgrid[3,1],firstgrid[3,2],1*(k-2)/(k-1),st)
secondgrid4=TRIANGLES(
firstgrid[3,1],firstgrid[3,2],1*(k-2)/(k-1),st,updown="down")

Then, we just draw randomly five points on that grid, in the four sub-triangles,

N=5
Dmax=0
setpointmax=matrix(0,4,2)
indice=c(1:4,sample(1:4,size=N-4,replace=FALSE))
tindice=table(indice)
indice1=sample(1:nrow(secondgrid1),size=
tindice[1],replace=FALSE)
indice2=sample(1:nrow(secondgrid2),size=
tindice[2],replace=FALSE)
indice3=sample(1:nrow(secondgrid3),size=
tindice[3],replace=FALSE)
indice4=sample(1:nrow(secondgrid4),size=
tindice[4],replace=FALSE)
setpoint=rbind(secondgrid1[indice1,],secondgrid2[indice2,],
secondgrid3[indice3,],secondgrid4[indice4,])

Now, we can run the code, keeping track of the locations of the five points each time we break a record,

D=min(dist(setpoint,"euclidean"))
if(D>Dmax){Dmax=D
setpointmax=setpoint}
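
To actually run the search, the drawing step and the record-keeping step can be wrapped in a loop; here is a minimal sketch of mine, simply reusing the code above,

for(s in 1:100000){
indice=c(1:4,sample(1:4,size=N-4,replace=FALSE))
tindice=table(indice)
indice1=sample(1:nrow(secondgrid1),size=tindice[1],replace=FALSE)
indice2=sample(1:nrow(secondgrid2),size=tindice[2],replace=FALSE)
indice3=sample(1:nrow(secondgrid3),size=tindice[3],replace=FALSE)
indice4=sample(1:nrow(secondgrid4),size=tindice[4],replace=FALSE)
setpoint=rbind(secondgrid1[indice1,],secondgrid2[indice2,],
secondgrid3[indice3,],secondgrid4[indice4,])
D=min(dist(setpoint,"euclidean"))
if(D>Dmax){Dmax=D
setpointmax=setpoint}
}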

Here are some locations obtained after running the algorithm a few times (with five points)

On the graph below, we can visualize the time it takes before having a record, and the convergence towards 1/2 (which is the true value of the maximal distance)

The convergence is slow… extremely slow… However, we can run the same code for more than five points, e.g. seven points (actually, sub-triangles are not used here, and it looks like we have been lucky, since the convergence was rather fast),

Tied candidates in an election?

Season 1 of Mad Men ends with the 1960 election, Nixon against Kennedy, which came down to the wire.

The other day, Benoit Rittaud discussed on http://images.math.cnrs.fr/ the possibility of candidates being tied in an election, and the legal aspects (since the French constitution apparently did not really anticipate that case). The most interesting part would have been – in my opinion (but my opinion is strongly biased) – to compute the probability that such an event occurs. For instance, that at the end of the first round, the second and the third candidates are tied, so that it is impossible to determine who goes on to the second round. As it happens, one of the comments claims that "the probability that such an event occurs is roughly the inverse of the typical number of votes separating the second man (or woman) from the third". Since I did not manage to prove that result, I thought we could run simulations to quantify this probability.

Let us start by assuming that there are four candidates (plus the possibility of abstaining). Assume the abstention rate is 20%, and that the voting intentions for each of the candidates are 20%.

EXE=function(N=100,ns=1000000){
ExE=rep(NA,ns)
for(s in 1:ns){
M=sample(c("A","B","C","D","E"),size=N,prob=rep(.2,5),
replace=TRUE)
tb=table(M)
Ms=rev(sort(tb[-(names(tb)=="A")]))
ExE[s]=(Ms[2]==Ms[3])
}
return(mean(ExE))}

The problem is that running several hundred million draws of a multinomial distribution with more than 40 million trials could take time (like, really a lot). An intuitive strategy can be to run simulations on a smaller population, and then extrapolate.

E=function(N) EXE(N)
taille=c(20,50,100,200,500,1000,2000)
proba=Vectorize(E)(taille)
plot(taille,proba,type="b",log="xy")

base=data.frame(y=proba,x=taille)
reg=lm(log(y)~I(log(x)),data=base)
s2=summary(reg)$sigma
m=exp(predict(reg,newdata=
data.frame(x=41194689)))*exp(s2^2/2)

Since the linear extrapolation might not be perfect, we can try a quadratic fit (as was done on the figure above)

reg2=lm(log(y)~I(log(x))+I(log(x)^2),data=base)
s2=summary(reg2)$sigma
m2=exp(predict(reg2,newdata=data.frame(x=41194689)))*exp(s2^2/2)

The difference between the predictions of the two models is significant, as can be seen on the graph. However, ties with a very, very large population are so rare that a huge number of simulations is needed to get a valid order of magnitude. Extrapolation seems to be a valid alternative here (but I am open to any criticism, or discussion).

Now that we have a methodology (more or less), let us use realistic orders of magnitude, based for instance on the 2002 presidential election: 41.19 million registered voters, 28.4% abstention, and, among the votes cast, a leading candidate with 19.9% of the votes, a second with 16.8% and a third with 16.2% (the others further behind). The voting intentions for the main candidates appear in the multinomial distribution below

EXE=function(N=100,ns=1000){
ExE=rep(NA,ns)
for(s in 1:ns){
M=sample(c("abs","JC","JMLP","LJ","FB","AL","JPC"
,"NM","autres"),size=N,prob=c(.284,.1988*.716,
.1686*.716,.1618*.716,.0684*.716,.0572*.716,
.0533*.716,.0525*.716,.1714104),replace=TRUE)
tb=table(M);
Ms=rev(sort(tb[-(names(tb)%in%c("abs","autres"))]));
ExE[s]=(Ms[2]==Ms[3])
}
return(mean(ExE))}

If we believe the claim made in the comment, the probability of a tie (between the second and the third candidate) would have been of the order of

> 1/(41194689*(.1686*.716-.1618*.716))
[1] 4.985823e-06

I must admit that the computations did not give anything (we will see in a moment what happens if we change the definition of a tie). Numerically, the computations are still running (after 4 days) on the server… On the other hand, if the population contained only 10,000 voters, the probability of a tie (between the 2nd and 3rd candidates) would be of the order of 3 out of 10,000 (with a confidence interval that I still have not computed)

> EXE(10000,100000000)
[1] 3.245562e-05

As an illustration, we can look at what happened, municipality by municipality, in France (the data are online on the website of the Ministère de l'Intérieur). The exercise is biased because of spatial heterogeneity… but it still helps to illustrate the point… The number of votes obtained by the second and the third candidates, in each municipality, is the following

> election=read.table(
+ "/Users/UQAM/Documents/Pres2002Tour1.csv",sep=";")
+ election=election[-1,]
> VOIX=election[,seq(14,74,by=4)]
> DEUXIEME=TROISIEME=TOTAL=rep(NA,nrow(VOIX))
> for(i in 1:nrow(VOIX)){
+ u=rev(sort(as.numeric(as.character(
+ election[i,seq(14,74,by=4)]))))
+ TOTAL[i]=sum(u)
+ DEUXIEME[i]=u[2]
+ TROISIEME[i]=u[3]
+ }

Out of 37,000 municipalities,

> nrow(election)
[1] 37513

or rather out of the 24,000 municipalities with more than 5,000 votes cast,

> sum((TOTAL>5000))
[1] 24230

there were 56 cases where the second and third candidates were tied,

> base=data.frame(TOTAL,DEUXIEME,TROISIEME,COMMMUNE=election[,3])
> selection=(TOTAL>5000)
> sousbase=base[selection,]
> sum(sousbase$DEUXIEME==sousbase$TROISIEME)
[1] 56

We even have the details,

> sousbase[indice,]
TOTAL DEUXIEME TROISIEME                     COMMMUNE
1134   5819      699       699                        Crouy
1507   5386      717       717          Louroux-Bourbonnais
2603   6235     1151      1151                  Saint-Peray
2916   6346      935       935                        Coucy
2970   7529     1413      1413                   Haraucourt
5089   5567      717       717                      Vignats

(etc.). We can then look, as a function of the size of the municipality (here the number of votes cast), at the probability of having a tie in a given municipality. We will subdivide into 50 classes (to start with), and count the number of ties in each class,

> n=50
> niveau=seq(1/(2*n),1-1/(2*n),by=1/n)
> base$CLASSE=cut(TOTAL,quantile(TOTAL,seq(0,1,by=1/n)),
+ labels=niveau)
> base$EXAEQUO=(base$DEUXIEME==base$TROISIEME)*1
> m=by(base$EXAEQUO, base$CLASSE, sum)

For instance, for the 2% largest cities, there were 5 municipalities with ties,

------------------------------------------------------------
base$CLASSE: 0.99
[1] 5

If we look in more detail, these are the municipalities with more than 7,500 votes cast,

> quantile(TOTAL,.98)
98%
7427.76
> selection=(TOTAL>quantile(TOTAL,.98))
> sousbase=base[selection,]
> sum(sousbase$DEUXIEME==sousbase$TROISIEME)
[1] 5
> indice=which(sousbase$DEUXIEME==sousbase$TROISIEME)
> sousbase[indice,]
TOTAL DEUXIEME TROISIEME       COMMMUNE CLASSE EXAEQUO
2970   7529     1413      1413     Haraucourt   0.99       1
14359  7496     1392      1392          Renac   0.99       1
25028  8104     1381      1381         Heloup   0.99       1
30014  8087     1340      1340         Barnay   0.99       1
36770  7435      764       764 Pont-sur-Yonne   0.99       1

We can plot those numbers of ties, and run a Poisson regression (since we are dealing with counts here),

> taille=length(TOTAL)/n
> plot(quantile(TOTAL,niveau),m/taille)
> sb=data.frame(niveau,exaequo=as.vector(m))
> library(splines)
> reg=glm(exaequo~bs(niveau),family=poisson,data=sb)
> u=seq(0,1,by=.001)
> lines(quantile(TOTAL,u),predict(reg,
+ newdata=data.frame(niveau=u),
+ type="response")/taille,lwd=2,col="red")

for ease of reading, the y-axis shows probabilities and the x-axis the average city sizes

We can do the same thing with 200 classes,

In short, for a medium-sized municipality (say upper-medium, with 5,000 votes cast), there are 2 chances in 1,000 that the second and third candidates are tied. Which would suggest that with 40 million voters, the probability should be small, or even very small…

Well, beyond the conceptual problem of ties (or the legal one, since I doubt a perfect tie is actually possible: the Ministère de l'Intérieur would do a recount and would necessarily find a different result), it seems that the probability is much smaller than the one obtained with the ad-hoc rule…

But perhaps it would be realistic to say that the election failed to produce a majority (that is the point of an election, it seems to me) whenever the number of votes separating two candidates is too small. For instance, if fewer than 100 votes separate the candidates, we vote again! To come back to my previous example, with 10,000 voters
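
The function EXE100 is not shown in the post; presumably it is the same as EXE above, except that the event of interest is now "fewer than 100 votes separate the second and the third candidates". A minimal sketch of what it could look like:

EXE100=function(N=100,ns=1000){
ExE=rep(NA,ns)
for(s in 1:ns){
M=sample(c("abs","JC","JMLP","LJ","FB","AL","JPC"
,"NM","autres"),size=N,prob=c(.284,.1988*.716,
.1686*.716,.1618*.716,.0684*.716,.0572*.716,
.0533*.716,.0525*.716,.1714104),replace=TRUE)
tb=table(M)
Ms=rev(sort(tb[!(names(tb)%in%c("abs","autres"))]))
ExE[s]=(abs(Ms[2]-Ms[3])<100)
}
return(mean(ExE))}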

> EXE100(10000,1000000)
[1] 0.012641

that is, about one chance in 100 (I allowed myself to run fewer simulations since the event seems less unlikely… so fewer simulations are needed to observe it from time to time). If we go further, and run simulations that we project, as before, to estimate the probability of a tie not for one city, but for France as a whole, we are rather around

> (m=exp(predict(reg,newdata=
+ data.frame(x=41194689)))*exp(s2^2/2))
1
1.505422e-12

for a linear model, and

> (m=exp(predict(reg2,newdata=
+ data.frame(x=41194689)))*exp(s2^2/2))
1
4.074588e-98

for a quadratic model (with the – rather low – confidence we can place in such a projection). Which makes a huge difference… If we visualize all this,

lines(c(1,100000000),exp(predict(reg,newdata=
data.frame(x=c(1,100000000))))*exp
(s2^2/2),lty=2,col="light blue")

which is shown in blue; in green, a quadratic fit,

lines(10^(seq(1,8,by=.1)),exp(predict(reg2,newdata=
data.frame(x=10^(seq(1,8,by=.1)))))*exp
(s2^2/2),lty=2,col="dark green")

whereas the computation suggested in the comment would give the red curve,

lines(c(1,100000000),1/(c(1,100000000)*
(.1686*.716-.1618*.716)),col="red",lty=2)

Can we call this an improbable event? As a comparison, when we speak of something improbable, or something that happens with an infinitesimal probability, we think of the lottery. In the old (French) loto, you had to choose 6 numbers out of 49 (without replacement). The probability of winning the jackpot was
$$\frac{1}{\binom{49}{6}}=\frac{1}{13\,983\,816}\approx 7\times 10^{-8}$$

For the Euromillions, you have to choose 5 numbers out of 50, to which you must add the possible combinations of Stars, i.e. 2 numbers to choose out of 9. The probability of winning the jackpot is then
$$\frac{1}{\binom{50}{5}\binom{9}{2}}=\frac{1}{76\,275\,360}\approx 1.3\times 10^{-8}$$

In short, getting a tie in a national election, in France, is a priori rarer than winning the jackpot with a single lottery ticket… So we can say that there should not be any tie at the end of the first round!

Eating chocolate, an Easter problem

Assume that there are (say) 100 chocolate eggs in a basket, 20 are dark chocolate, while 80 are milk chocolate. Unfortunately, eggs are wrapped, and there is no way you can distinguish them. My daughter has the following algorithm for eating them (and she actually plans to eat all of them)

  1. if there are eggs in her basket, she picks one – at random – looks whether it is dark or milk chocolate, writes it down on a piece of paper (just to remember how many of each kind are left), eats it, and moves to step 2.
  2. if there are eggs in her basket, she picks one – at random – looks whether it is dark or milk chocolate, writes it down on a piece of paper and:
  • if it is the same kind as the one she got before, she eats it, and goes again to step 2.
  • if it is not the same kind as the one she got before, she wraps it back, and goes again to step 1.

At the end, if there is only one egg left, the probability that it is a milk chocolate egg is exactly 1/2… Nice, isn’t it ?

It is a simple rejection-technique algorithm. It is possible to run some code to check the answer. The algorithm, which returns the kind of the last remaining egg, is

> lastchocolate=function(dark=80,milk=20){
+ s=1
+ while(dark+milk>1){
+ if(s==1){
+ (eatnow=sample(c("D","M"),prob=c(dark,milk),size=1))
+ if(eatnow=="D"){dark=dark-1};
+ if(eatnow=="M"){milk=milk-1};
+ eatbefore=eatnow;s=2}
+ if(s==2){
+ if(dark+milk>1){
+ s=1;
+ eatnow=sample(c("D","M"),prob=c(dark,milk),size=1)
+ if(eatnow==eatbefore){s=2
+ eat=eatnow;
+ if(eatbefore=="D"){dark=dark-1};
+ if(eatbefore=="M"){milk=milk-1}}
+ }}
+ }
+ return(c(dark,milk))}

If we run it 2,000 times, we obtain

> set.seed(1)
> m=lastchocolate(dark=80,milk=20)
> for(s in 1:1999){m=cbind(m,lastchocolate(dark=80,milk=20))}
> apply(m,1,sum)
[1] 1022 978

So it looks like there is a 50% chance to end up with a dark chocolate egg, and a 50% chance to end up with a milk chocolate egg.

Let us prove that result… Let $m$ denote the number of milk chocolate eggs and $d$ the number of dark chocolate eggs when we start (both at least one). Consider an inductive proof, on $m+d$, of the fact that the probability that the last remaining egg is a milk one has to be $1/2$. The first step is when $m+d=2$, i.e. $m=d=1$: out of two chocolates, the probability to pick (and eat) the dark one, leaving the milk one in the basket, is $1/2$. Assume now that the probability is $1/2$ for all pairs $(d',m')$ such that $d'+m'<d+m$ and $d',m'\geq 1$.
Starting from $(d,m)$, the first egg is always eaten, and she then keeps eating eggs of that same kind until either she draws one of the other kind (which she wraps back), or that kind is exhausted. So consider the state $(d',m')$ at the end of that first run: at least one egg has been eaten, so $d'+m'<d+m$. The probability to have $d'=0$ (first egg dark, and then only dark eggs drawn until none was left) is
$$\frac{d}{d+m}\cdot\frac{d-1}{d+m-1}\cdots\frac{1}{m+1}=\frac{d!\,m!}{(d+m)!}$$
Similarly, the probability to have $m'=0$ is
$$\frac{m}{d+m}\cdot\frac{m-1}{d+m-1}\cdots\frac{1}{d+1}=\frac{m!\,d!}{(d+m)!}$$
So, the probability that both $d'$ and $m'$ are strictly positive is then
$$1-\frac{2\,d!\,m!}{(d+m)!}$$
Then we can use our inductive assumption: in that case, the last egg is a milk one with probability $1/2$. Thus, the overall probability that the last egg is a milk one is
$$\frac{d!\,m!}{(d+m)!}\cdot 1+\left(1-\frac{2\,d!\,m!}{(d+m)!}\right)\cdot\frac{1}{2}$$
where the part on the left is the case where the dark eggs run out first (so that only milk eggs remain), and the second one is the case where both kinds are still present. This probability is exactly one half (straightforward).
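
As a complement to the simulations, here is an exact computation by recursion over the states of the algorithm (step 1 is a "fresh" draw, step 2 remembers the kind of the last eaten egg); the function names are mine, and the naive recursion is only meant for small baskets,

p_fresh=function(d,m){
# P(last remaining egg is milk), starting step 1 with d dark and m milk eggs
if(d+m==1) return(as.numeric(m==1))
d/(d+m)*p_run(d-1,m,"D")+m/(d+m)*p_run(d,m-1,"M")}
p_run=function(d,m,last){
# same probability, but in step 2, given the kind of the last eaten egg
if(d+m==1) return(as.numeric(m==1))
if(last=="D") return(d/(d+m)*p_run(d-1,m,"D")+m/(d+m)*p_fresh(d,m))
m/(d+m)*p_run(d,m-1,"M")+d/(d+m)*p_fresh(d,m)}
p_fresh(2,6)   # 0.5 (up to rounding), whatever the initial composition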

Nonconvexity, and playing indoor paintball

Following the two previous posts (here and there), on the number of people that do not get wet while playing with water pistols, consider now an indoor version, in a non-convex room (i.e. players behind a wall are now, somehow, protected). In the previous posts, players were playing on a square field, and I briefly mentioned that if the field were a disk, results would have been (roughly) the same: so far, the shape of the field was not an issue. But what if the field is no longer convex?

library(sp)
plot(0:2,0:2,col="white",xlab="",ylab="")
MAP=Polygon(cbind(c(0,0,1,1,2,2,0),
c(0,2,2,1,1,0,0)))
polygon(MAP@coords,col="light blue")

and players hidden behind the wall cannot be reached (red lines above are impossible hits). As earlier, it is still possible to look at the closest neighbor, we just have to exclude pairs that can no longer hit each other.

And again, it is possible to plot safe zones in green.

Once again, it is possible to look more closely at those supposed-to-be "safe zones", i.e. at the distribution of the location of players that were dry at the end of the game. With 11 players, we obtain


What about the distribution of the number of dry players, over a game ?

touch=function(x1,y1,x2,y2,n=251){
X=seq(x1,x2,length=n)
Y=seq(y1,y2,length=n)
sum(point.in.polygon(X,Y,MAP@coords[,1],
MAP@coords[,2], mode.checked=FALSE)==0)==0
}
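
For instance, two players standing in the same arm of the L-shaped room can see (and hit) each other, while the missing corner blocks the line of sight between the two arms,

touch(0.2,0.5,1.8,0.5)   # TRUE: the segment between the two players stays inside the room
touch(0.5,1.8,1.8,0.5)   # FALSE: the line of sight crosses the missing corner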

NOTWETnc=function(n,p){
sx=runif(50)*2;sy=runif(50)*2
IN=which(point.in.polygon(sx,sy,MAP@coords[,1],
MAP@coords[,2], mode.checked=FALSE)==1)
Sx=sx[IN];Sy=sy[IN]
Sx=Sx[1:n];Sy=Sy[1:n]
IN=IN[1:n]
MI=matrix(NA,n,n)
for(i in 1:(n-1)){
for(j in (i+1):(n)){
MI[j,i]=MI[i,j]=touch(Sx[i],Sy[i],Sx[j],Sy[j])
}}
(d=as.matrix(dist(cbind(Sx,Sy),
method = "euclidean",upper=TRUE)))
diag(d)=999999
dpossible=d
dpossible[MI==FALSE]=999999
dmin=apply(dpossible,2,which.min)
#whonotwet=( (1:n) %notin% names(table(dmin)) )
notwet=n-length(table(dmin))
return(notwet)}

NOTWET=function(n){
x=runif(n)
y=runif(n)
(d=as.matrix(dist(cbind(x,y),
method = "euclidean",upper=TRUE)))
diag(d)=999999
dmin=apply(d,2,which.min)
notwet=n-length(table(dmin))
return(notwet)}

NSim=10000
Nnc=Vectorize(NOTWETnc)(n=rep(11,NSim))
Nc=Vectorize(NOTWET)(n=rep(11,NSim))
T=table(Nc)
Tn=table(Nnc)
plot(as.numeric(names(Tn)),
Tn/NSim,type="b",col="blue")
lines(as.numeric(names(T)),
T/NSim,type="b",col="red",pch=4)

With 11 players, we have the same distribution as on a square field. So convexity is not a key issue here…

Strange, isn't it? And with an odd number of players, not only is there at least one dry player, but at least half of the players (maybe minus one) have to be wet…

Where to hide if you don't want to get wet?

Following the previous post, two additional remarks. Following a comment by @cosi, I quickly investigated a binomial fit to the distribution of the number of people not getting wet, with a fixed number of players on the field. It looks like it should be a binomial distribution with a fixed probability (2/3) and a size parameter affine in the number of players. @guigui suggested some connection with the "birds on a wire" problem (see e.g. http://www.cut-the-knot.org/)
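
The code below simply matches the first two moments: for a binomial distribution with size $n$ and probability $p$,

$$\mathbb{E}[N]=np,\qquad \operatorname{Var}[N]=np(1-p)\qquad\Longrightarrow\qquad p=1-\frac{\operatorname{Var}[N]}{\mathbb{E}[N]},\qquad n=\frac{\mathbb{E}[N]}{p},$$

and those implied parameters are computed from the simulated counts, for each number of players,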

n=p=rep(NA,20)
for(i in 1:20){
NSim=10000
N=Vectorize(NOTWET)(n=rep(3+2*i,NSim))
n[i]=mean(N)/(1-var(N)/mean(N))
p[i]=1-var(N)/mean(N)
}
plot(seq(5,43,by=2),n,col="red",type="b")

for the implied size parameter above, and below the implied probability parameter.

plot(seq(5,43,by=2),p,col="blue",type="b")

(as functions of the number of players). I’d be glad to get more details on that 2/3 probability.

Now, let us investigate another question sent by email: "Where should you hide if you don't want to get wet?" A first idea could be the following: given that some players are already on the field, where should I go if I do not want to get wet? Below are some simulations for 7 or 25 players (already on the field). The red area is the area where I would become someone's target (perhaps even the target of two players…). The green area is the safe zone.

(with 7 players above, and 25 below)

It looks like, on the border, it might be safer than in the middle of the field. But we have to confirm that intuition… or at least see if that intuition is valid.

Based on what was done the other day, it is possible to look at where the players that did not get wet were located (instead of just counting them, as in the previous function). So here, we simply record where the dry players were standing

NOTWET=function(n,p){
x=runif(n)
y=runif(n)
(d=as.matrix(dist(cbind(x,y), method = "euclidean",upper=TRUE)))
diag(d)=999999
dmin=apply(d,2,which.min)
whonotwet=( (1:n) %notin% names(table(dmin)) )
#plot(x[-whonotwet],y[-whonotwet],pch=19,col="blue",type="p")
#points(x[whonotwet],y[whonotwet],pch=19,col="red")
M=matrix(NA,p,p);u=seq(0,1,by=1/p)
for(i in 1:p){
for(j in 1:p){
M[i,j]=sum((x[whonotwet]>=u[i])&(x[whonotwet]<u[i+1])&
(y[whonotwet]>=u[j])&(y[whonotwet]<u[j+1]))
}}
return(M)}

based on function

"%notin%" <- function(x, y) x[!x %in% y]

On a given grid, we count the players who ended the game dry (which might avoid boundary bias in nonparametric smoothed estimators of the distribution, as we will see later on). For instance with 11 players,

M11=matrix(0,25,25);
for(s in 1:100000){
M11=M11+NOTWET(11,25)
}

Then we can plot the distribution, on the field,

COL=rev(heat.colors(101)); p=25
u=seq(0,1,by=1/p)
plot(0:1,0:1,col="white",xlab="",ylab="")
for(i in 1:p){
for(j in 1:p){
polygon(c(u[i],u[i],u[i+1],u[i+1]),
 c(u[j],u[j+1],u[j+1],u[j]),border=NA,
col=COL[trunc(M11[i,j])/max(M11)*100+1])
}}

Red means a lot of non-wet people (i.e. safer zones). The graphs below are with 7 and 11 players respectively (from left to right)

with the following distribution on the diagonal: corners are almost 4 times safer than the middle of the field, with 7 players,

Below are plotted distributions of locations of non-wet players when the total number of players was either 25 (on the left) and 101 (on the right)

with again on the diagonal

Hence, the border is rather safe, but next to the border, it is no longer safe: if someone is standing right on the border, he will probably shoot at you, since there is no one behind him! This explains the strange behavior on the borders (and corners; thanks JP for the intuitive explanation).
But would it be completely different with a field shaped as a disk?

using the previous technique of working on a fixed grid (or correcting for boundary bias, since the disk might cover only a fraction of a grid square), or keeping the coordinates of non-wet players and using a standard kernel-based estimator of the distribution (the light yellow circle outside the disk is simply due to the bias of the kernel estimator on the border)

NOTWET=function(n){
x=(runif(n*20)*2-1)*1
y=(runif(n*20)*2-1)*1
I=which((x^2+y^2<1))
x=x[I];y=y[I]
x=x[1:n];y=y[1:n]
(d=as.matrix(dist(cbind(x,y),
method = "euclidean",upper=TRUE)))
diag(d)=999999
dmin=apply(d,2,which.min)
whonotwet=( (1:n) %notin% names(table(dmin)) )
return(cbind(x[whonotwet],y[whonotwet]))
}

M=t(c(0,0))
for(s in 1:10000){
M=rbind(M,NOTWET(25))
}
M=M[-1,]

library(ks)
HP=matrix(c(.001,0,0,.001),2,2)
K=kde(x=M, H=HP)
image(K$eval.points[[1]],K$eval.points[[2]],K$estimate,
col=rev(heat.colors(101)),xlim=c(-1,1),ylim=c(-1,1))

 

And note that the distribution of the number of players ending the game dry is the same, for a square field, or a disk,

NOTWET2=function(n){
x=(runif(n*20)*2-1)*1
y=(runif(n*20)*2-1)*1
I=which((x^2+y^2<1))
x=x[I];y=y[I]
x=x[1:n];y=y[1:n]
(d=as.matrix(dist(cbind(x,y), 
method = "euclidean",upper=TRUE)))
diag(d)=999999
dmin=apply(d,2,which.min)
notwet=n-length(table(dmin))
return(notwet)}

NOTWET=function(n){
x=runif(n)
y=runif(n)
(d=as.matrix(dist(cbind(x,y), 
method = "euclidean",upper=TRUE)))
diag(d)=999999
dmin=apply(d,2,which.min)
notwet=n-length(table(dmin))
return(notwet)}

NSim=100000
Nsquare=Vectorize(NOTWET)(n=rep(25,NSim))
Ndisk=Vectorize(NOTWET2)(n=rep(25,NSim))
Tsq=table(Nsquare)
Tdk=table(Ndisk)
plot(as.numeric(names(Tsq)),Tsq/NSim,
type="b",col="red")
lines(as.numeric(names(Tdk)),Tdk/NSim,
type="b",pch=4,col="blue")


But so far, it was still simple… I wonder what it might become if we consider a non-convex place, with walls, where players might hide… Next time, a post on indoor paintball!

Playing with fire (or water)

A few days ago, http://www.futilitycloset.com/ published a short post based on the fourth problem of the 1987 Canadian Mathematical Olympiad (itself based on a problem from the 6th All Soviet Union Mathematical Competition in Voronezh, 1966). The problem is simple (as always). It is about water pistol duels (with an odd number of players).

The answer is nice, and can be read on the blog.

What puzzled me in this problem is the following: we know, for sure, that at least one player won't get wet, but we don't know exactly how many of them won't get wet (assuming that each player shoots at the closest one, and hits him for sure). It is simple to run simulations, e.g. assuming that players are uniformly distributed over a square,

NOTWET=function(n){
x=runif(n)
y=runif(n)
(d=as.matrix(dist(cbind(x,y), method = "euclidean",upper=TRUE)))
diag(d)=999999
dmin=apply(d,2,which.min)
notwet=n-length(table(dmin))
return(notwet)}

It is then rather simple to get the distribution of the number of player that did not get wet,

NSim=25000
N25=Vectorize(NOTWET)(n=rep(25,NSim))
T=table(N25)
plot(as.numeric(names(T)),T/NSim,type="b")

The graph for different values for the total number of players is the following (based on 25,000 simulations)

If we investigate further, say with 51 players, we have a distribution for the total number of players that did not get wet which looks very much like a Gaussian distribution,

NSim=25000
N51=Vectorize(NOTWET)(n=rep(51,NSim))
T=table(N51)
plot(as.numeric(names(T)),T/NSim,type="b",col="blue")
u=seq(0,51,by=.1)
lines(u,dnorm(u,mean(N51),sd(N51)),col="red",lty=2)

If anyone has an intuition (not to say a proof) for that, I’d be glad to hear it…

Sunday evening, stupid games…

This evening, while I was about to wash the dishes, I heard my two elder kids starting a game (call them Him and Her)
Him: “I have picked – in my head – a number, lower than 50. Try to guess…”
Her: “No way, too difficult…”
Him: “You can try five different numbers…”
Her: “… um … No, no way…”
Me: “Wait… each time we suggest a number, you tell us if yours is either above, or below ?”
You can see me coming clearly, can’t you ? Using a simple subdivision rule, we have a fast algorithm (and indeed, if I have to choose between washing the dishes and playing with the kids…)
Him: “um…. ok”
Her: “Daddy, are you sure we will win ?”
Me: “Well… I cannot promise that we will win… but I am rather sure [sic] that we will win quite frequently: more gains than losses…” (I guess).
Her: “Great ! I am playing with daddy…”

Him: “um… wait, is this one of your tricks, again? I don't want to play anymore… Do you want to see the books we've chosen at the library?”
Her: “Sure…”
Me: “What ? no one wants to see if I was right ? that we have indeed more than 50% chances to win…”
Him and her: “No !”
The point of that story ? If we listen to kids, science will not go forward, trust me. But I am curious… I want to see if my intuition was correct. Actually, the intuition was based on the fact that

> 2^5
[1] 32 
> 2^6
[1] 64

so in 5 or 6 steps the algorithm of subdivision should converge. I guess… I mean, I do not know for sure, since 50 is not a power of 2, so it might be difficult, each time, to split in two: we have to deal only with integers here…
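
A quick back-of-the-envelope bound, before letting the computer play (my own aside): with $k$ guesses and an "above/below" answer after each wrong guess, a strategy can fully resolve at most $2^k-1$ distinct values (one value guessed directly, plus two sub-intervals each handled with $k-1$ guesses), so for a number picked uniformly in $\{1,\dots,n\}$,

$$\mathbb{P}(\text{win})\leq\frac{2^k-1}{n},\qquad\text{e.g.}\qquad \frac{2^5-1}{50}=62\%,\qquad\frac{2^4-1}{50}=30\%,\qquad\frac{2^5-1}{100}=31\%,$$

which is consistent with the simulations below.
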
To be sure, let us substitute my laptop for my son… to pick numbers at random (yes, sometimes I feel like I am Doctor Tenma, 天馬博士). The algorithm is simple: there are bounds, and at each step I should suggest the middle of the interval. If the middle is not an integer, I suggest either the integer below or the integer above (with equal probabilities).

cutinhalf=function(a,b){
m=(a+b)/2
if(m %% 1 == 0){m=m}
if(m %% 1 != 0){m=sample(c(m-.5,m+.5),size=1)}
return(round(m))}

The following function runs 100,000 simulations, and tells us how often, out of 5 suggested numbers, we got the right one.

winning=function(lower=1,upper=50,tries=5,NS=100000){
SIM=rep(NA,NS)
for(simul in 1:NS){
interval=c(lower,upper)
(unknownnumber=sample(lower:upper,size=1))
success=FALSE
for(i in 1:tries){
picknumber=cutinhalf(interval[1],interval[2])
if(picknumber==unknownnumber){success=TRUE}
if(picknumber>unknownnumber){interval[2]=picknumber}
if(picknumber<unknownnumber){interval[1]=picknumber}
#print(c(unknownnumber,picknumber,success,interval))
};SIM[simul]=success};return(mean(SIM))}

It looks like the probability that we get the right number is higher than 60%,

> winning()
[1] 0.61801

Which is not bad. And if the upper limit was not 50, but something else, the probability of winning would have been the following.

VWN=function(n){winning(upper=n)}
V=Vectorize(VWN)(seq(25,100,by=5))
plot(seq(25,100,by=5),V,type="b",col="red",ylim=c(0,1))


Actually, after losing a couple of times, I am rather sure that my son would have told us that we can suggest only four numbers. In that case, the probability would have been close to 30%, as shown on the blue curve below (where only four numbers can be suggested)

Anyway, as intuited, with five possible suggestions, we were quite likely to win frequently. Actually, with a probability of almost 2 out of 3… and 1 out of 3 if my son had decided to pick a number between 1 and 100, or to allow only 4 suggestions… Those probabilities are quite large actually, when we think about it. It reminds me of that McGyver story I mentioned a few months ago… Anyway, calculating probabilities is nice, but I still have to wash the dishes…