Tag Archives: probability

Pills, half pills and probabilities

Yesterday, I was uploading some old posts to complete the migration (I go back to my old posts, one by one, to check picture links, reformat R code, etc.). And I re-discovered a post published almost 2 years ago, on nuns and Hell's Angels in an airplane.

It reminded me of an old probability problem (that might be known as one of Feynman's problems): suppose that you have a prescription to take half pills for 6 days. Unfortunately the pharmacist was a bit lazy (or just wanted to help me to write a mathematical problem), and he gives you 3 (full) pills in a small box. Day 1, you take a pill, break it in two parts, eat one, and return the other half to the box. Day 2, you draw randomly 'something' from the box, i.e. either half a pill, or a full pill. If it's a half one, then you eat it. If it is a full one, you break it in two, eat one half, and return the other half to the box. Etc. On Day 6, if my story was well explained, you should know that there can only be one half pill left. So far, so good. But what about Day 5 ? There were either two half pills, or one full pill. But what was the probability that there was a full pill in the box on Day 5 ?

Nice problem, isn’t it ?

The good thing is that it can be modeled as a Markovian model. Assume that we have $n$ pills. After $2n$ days, the box will be empty. Consider the pair $(h,c)$ denoting the number of half pills, and the number of complete pills. $h$ can take all values, from 0 to $n$, and $c$ will be nonnegative, with $h+c\leq n$. Thus, the number of states – possible pairs from Day 1 till Day $2n$ – will be $\sum_{c=0}^{n}(n-c+1)$, i.e. $(n+1)(n+2)/2$ (here 10, for $n=3$). More precisely, define those states in a dataframe,

> n=3
> COMPLETE=HALF=NULL
> for(i in n:0){
+ HALF=c(0:(n-i),HALF)
+ COMPLETE=c(rep(i,length(0:(n-i))),COMPLETE)
+ }
> k=length(COMPLETE)
> state=data.frame(s=1:k,nc=rev(COMPLETE),nh=rev(HALF))
> state
s nc nh
1   1  3  0
2   2  2  1
3   3  2  0
4   4  1  2
5   5  1  1
6   6  1  0
7   7  0  3
8   8  0  2
9   9  0  1
10 10  0  0

Now, we can play to derive the transition matrix of the Markov chain.

> attach(state)
> P=matrix(0,k,k)
> for(i in 1:k){
+ C=state$nc[i]
+ H=state$nh[i]
+ if((C>0)&(H>0)){
+ P[i,state[(nc==C-1)&(nh==H+1),"s"]]= C/(C+H)
+ P[i,state[(nc==C)&(nh==H-1),"s"]]= H/(C+H)}
+ if((C>0)&(H==0)){
+ P[i,state[(nc==C-1)&(nh==H+1),"s"]]=1}
+ if((C==0)&(H>0)){
+ P[i,state[(nc==C)&(nh==H-1),"s"]]=1}
+ if((C==0)&(H==0)){
+ P[i,state[(nc==C)&(nh==H),"s"]]=1}
+ }

We do have a transition matrix (or a probability matrix) since all elements are nonnegative, and each row sums to 1,

> apply(P,1,sum)
[1] 1 1 1 1 1 1 1 1 1 1

Here, the transition matrix is the following

> P
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,]    0    1 0.00 0.00 0.00  0.0 0.00  0.0    0     0
[2,]    0    0 0.33 0.66 0.00  0.0 0.00  0.0    0     0
[3,]    0    0 0.00 0.00 1.00  0.0 0.00  0.0    0     0
[4,]    0    0 0.00 0.00 0.66  0.0 0.33  0.0    0     0
[5,]    0    0 0.00 0.00 0.00  0.5 0.00  0.5    0     0
[6,]    0    0 0.00 0.00 0.00  0.0 0.00  0.0    1     0
[7,]    0    0 0.00 0.00 0.00  0.0 0.00  1.0    0     0
[8,]    0    0 0.00 0.00 0.00  0.0 0.00  0.0    1     0
[9,]    0    0 0.00 0.00 0.00  0.0 0.00  0.0    0     1
[10,]   0    0 0.00 0.00 0.00  0.0 0.00  0.0    0     1

In order to get our probability, let us start from state 1 – i.e. the pair $(h,c)=(0,3)$ – with probability 1, and let us look at the distribution at different periods,

> dist=c(1,rep(0,k-1))
> MatDist=matrix(NA,2*n+1,k)
> MatDist[1,]=dist
> for(i in 1:(2*n)){dist=as.vector(t(dist)%*%P)
+ MatDist[i+1,]=dist
+ }

(one can check that after $2n$ days, the box is empty). The probability we are looking for is given in row $2n-1$ (the composition of the box at the beginning of Day 5), and we just have to check which column corresponds to the pair $(h,c)=(0,1)$,

> vs=state[which(MatDist[2*n-1,]>0),]
> proba=MatDist[2*n-1,vs[vs$nc==1,"s"]]
> proba
[1] 0.3888889

Here the probability of having a full pill in the box on Day 5 is 38.89%.
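
(As a quick side check – my own sketch, not part of the original post – the same number can be recovered by brute force, simulating the box day by day:)

simupills=function(n=3,NS=100000){
lastday=function(){
pills=rep(1,n)                 # 1 = full pill, 1/2 = half pill
for(day in 1:(2*n-2)){         # Days 1 to 2n-2, i.e. stop at the beginning of Day 2n-1
i=sample(length(pills),size=1) # draw 'something' from the box
if(pills[i]==1){pills[i]=1/2} else {pills=pills[-i]}
}
any(pills==1)                  # is there still a full pill in the box ?
}
mean(replicate(NS,lastday()))
}

Running simupills() should return a value close to 0.3889.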

Actually, it is possible to study the evolution of this probability as a function of $n$,

> computeproba=function(n=3){
+ COMPLETE=HALF=NULL
+ for(i in n:0){
+ HALF=c(0:(n-i),HALF)
+ COMPLETE=c(rep(i,length(0:(n-i))),COMPLETE)
+ }
+ k=length(COMPLETE)
+ state=data.frame(s=1:k,nc=rev(COMPLETE),nh=rev(HALF))
+ P=matrix(0,k,k)
+ for(i in 1:k){
+ C=state$nc[i]
+ H=state$nh[i]
+ if((C>0)&(H>0)){
+ P[i,state[(state$nc==C-1)&(state$nh==H+1),"s"]]= C/(C+H)
+ P[i,state[(state$nc==C)&(state$nh==H-1),"s"]]= H/(C+H)}
+ if((C>0)&(H==0)){
+ P[i,state[(state$nc==C-1)&(state$nh==H+1),"s"]]=1}
+ if((C==0)&(H>0)){
+ P[i,state[(state$nc==C)&(state$nh==H-1),"s"]]=1}
+ if((C==0)&(H==0)){
+ P[i,state[(state$nc==C)&(state$nh==H),"s"]]=1}
+ }
+ dist=c(1,rep(0,k-1))
+ MatDist=matrix(NA,2*n+1,k)
+ MatDist[1,]=dist
+ for(i in 1:(2*n)){dist=as.vector(t(dist)%*%P)
+ MatDist[i+1,]=dist
+ }
+ vs=state[which(MatDist[2*n-1,]>0),]
+ proba=MatDist[2*n-1,vs[vs$nc==1,"s"]]
+ return(proba)
+ }

If we plot the probability as a function of $n$, we get

> P=Vectorize(computeproba)(2:40)
> plot(2:40,P,ylim=c(0,.5))

One can observe that the probability is decreasing. But slowly, extremely slowly. With a log scale on the y-axis, we have

> plot(2:40,P,ylim=c(.1,.5),log="y")

If we look for ‘high’ values, we can get

> computeproba(100)
[1] 0.14218

I do not know if this probability goes to 0 as $n$ goes to infinity. Actually, since we do have to compute a $k\times k$ transition matrix, with $k=(n+1)(n+2)/2$, i.e. roughly $n^4/4$ entries, $n$ cannot be that large… Too bad. If anyone knows how this probability behaves as a function of $n$, when $n$ is large, I'd be glad to know…
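
(A hedged way out, reusing the simupills function sketched above: give up exact computations, and simulate for large values of $n$,

Pbig=Vectorize(function(n){simupills(n,NS=2000)})(c(100,200,500))

noisy, of course, but it might at least suggest whether the probability keeps decreasing towards 0.)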

UEFA, is that it ?

Following my previous post, a few more things. As mentioned by Frédéric, it is – indeed – possible to compute the probability of each pair. More precisely, all pairs are not equally likely to occur: some teams can play against (almost) everyone, while others cannot. From the previous table, it is possible to compute the probability that the last team plays against team 1. Or team 2 (numbers are those from the xls file mentioned previously). To make it simple,

> table(M[,ncol(M)])/nrow(M)*100

       1        2        3        5        7       10       11 
11.82500 12.61212 12.61212 13.25279 19.31173 18.70767 11.67856

Here, the last team (as I ranked them) has an 11.8% chance to play against team 1, and a 19.3% chance to play against team 7. If we compute all the probabilities, we obtain

> S
       1     2     3     5     7    10    11    13
4   0.00 14.16 14.16  0.00 22.22 21.25 13.05 15.13
6  12.52 13.19 13.19 14.11 20.13  0.00 12.35 14.47
8  18.78  0.00 19.54 21.50  0.00  0.00 18.39 21.76
9  18.78 19.54  0.00 21.50  0.00  0.00 18.39 21.76
12 14.68 15.54 15.54 16.56  0.00 23.19 14.47  0.00
14 11.64 12.37 12.37 13.05 18.96 18.25  0.00 13.34
15 11.77 12.55 12.55  0.00 19.36 18.59 11.64 13.50
16 11.82 12.61 12.61 13.25 19.31 18.70 11.67  0.00

that can be visualized below

White areas cannot be reached, while red ones are more likely. Here, we compute the probability that the home team (given on the x-axis) plays against some visitor team (on the y-axis). The fact that those probabilities are not uniform seems odd. But I guess it comes from those constraints…
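
(For the record, a hedged sketch of code producing such a heatmap – assuming S is the 8×8 matrix of probabilities printed above:)

SM=as.matrix(S)
image(1:8,1:8,SM,col=c("white",rev(heat.colors(100))),
breaks=c(-.01,seq(.01,max(SM),length=101)),  # exact zeros in white
xlab="home team",ylab="visitor team",axes=FALSE)
axis(1,at=1:8,labels=rownames(S))
axis(2,at=1:8,labels=colnames(S))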

Another weird point: it is possible to reach a deadlock, at least with the technique I have been using. So far, I did not count them. But we can, simply with the following code,

> U=c(4,6,8,9,12,14,15,16)
> a1=U[1]
> b1=U[2]
> c1=U[3]
> d1=U[4]
> e1=U[5]
> f1=U[6]
> g1=U[7]
> h1=U[8]
> a2=b2=c2=d2=e2=f2=g2=h2=NA
> na=0   # deadlock counter
> s=0    # completed-draw counter
> posa2=(1:n)%notin%c(LISTEIMPOSSIBLE[,a1])
> if(length(posa2)==0){na=na+1}
> for(a2 in posa2){
+ posb2=(1:n)%notin%c(LISTEIMPOSSIBLE[,b1],a2)
+ if(length(posb2)==0){na=na+1}
+ for(b2 in posb2){
+ posc2=(1:n)%notin%c(LISTEIMPOSSIBLE[,c1],a2,b2)
+ if(length(posc2)==0){na=na+1}
+ for(c2 in posc2){
+ posd2=(1:n)%notin%c(LISTEIMPOSSIBLE[,d1],
+ a2,b2,c2)
+ if(length(posd2)==0){na=na+1}
+ for(d2 in posd2){
+ pose2=(1:n)%notin%c(LISTEIMPOSSIBLE[,e1],
+ a2,b2,c2,d2)
+ if(length(pose2)==0){na=na+1}
+ for(e2 in pose2){
+ posf2=(1:n)%notin%c(LISTEIMPOSSIBLE[,f1],
+ a2,b2,c2,d2,e2)
+ if(length(posf2)==0){na=na+1}
+ for(f2 in posf2){
+ posg2=(1:n)%notin%c(LISTEIMPOSSIBLE[,g1],
+ a2,b2,c2,d2,e2,f2)
+ if(length(posg2)==0){na=na+1}
+ for(g2 in posg2){
+ posh2=(1:n)%notin%c(LISTEIMPOSSIBLE[,h1],
+ a2,b2,c2,d2,e2,f2,g2)
+ if(length(posh2)==0){na=na+1}
+ for(h2 in posh2){
+ s=s+1
+ V=c(a1,a2,b1,b2,c1,c2,d1,d2,e1,e2,f1,f2,g1,g2,h1,h2)
+ }}}}}}}}

With the initial ordering of the home teams, the number of deadlocks was

> na
[1] 657

The probability of obtaining a deadlock is then

> 657/(657+5463)
[1] 0.1073529

(657 scenarios ended in a dead end, while 5463 ended well). The worst case was obtained when we considered

 [1]    6    4   16   14   12   15    8    9

In that case, the probability of obtaining a deadlock was

> 4047/(4047+5463)
[1] 0.4255521

Here, it clearly depends on the ordering. So if we draw – randomly – the order of the home teams, i.e.

> Urandom=sample(U,size=8)

the distribution of the probability of having a deadlock is
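
(To explore that distribution without re-running the eight nested loops by hand, here is a hedged recursive sketch – same ingredients as above, namely n, LISTEIMPOSSIBLE and %notin% – counting completed draws and deadlocks for any ordering of the home teams:

countdraws=function(order,drawn=NULL){
if(length(order)==0) return(c(done=1,dead=0))
pos=(1:n)%notin%c(LISTEIMPOSSIBLE[,order[1]],drawn)
if(length(pos)==0) return(c(done=0,dead=1))
res=c(done=0,dead=0)
for(a in pos) res=res+countdraws(order[-1],c(drawn,a))
return(res)
}

so that countdraws(c(4,6,8,9,12,14,15,16)) should return 5463 completed draws and 657 deadlocks, and replicating countdraws(sample(U,8)) gives the distribution above.)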

All those computations were based on my understanding of the drawing procedure. But Kristof (aka @ciebiera), on his blog krzysztofciebiera.blogspot.ca/… obtained different results. For instance, based on my previous computations, the probability to obtain identical pairs was 0.018305% (1 chance out of 5463), but Kristof obtained – based on the UEFA procedure (as he called it) – a probability of 0.0181337%. Which is not – strictly – the same, but both computations yield relatively close results…

UEFA, what were the odds ?

Ok, I was supposed to take a break, but Frédéric, professor in Tours, came back to me this morning with a tickling question. He asked me what were the odds that the Champions League draw produces exactly the same pairings in the practice draw and in the official one (see e.g. dailymail.co.uk/…).

To be honest, I don’t know much about soccer, so here is what happened, with the practice draw (on the left, on December 19th) and the official one (on the right, on December 20th),

UEFA

Clearly, the pairs are identical, but not the order. Actually, at first, I was surprised that even which team plays at home first was identical. But (it seems that) teams that play at home first are the ones that ended second after the previous stage of the competition.

And to be more specific about those draws, those pairs were obtained using real urns, real balls, so it is pure randomness (again, as far as I understood). But with very specific rules. For instance, two teams from the same country cannot play one against the other at this stage. And teams that ended first after the previous round can only play against teams that ended second. Actually, Frédéric sent me an xls file, with a possibility matrix.

Let us find all possible pairs, regardless of which team plays at home first (again, we do not care here since the order is defined by the rule mentioned above). Doing the maths might have been a bit complicated, with all those constraints. With a small code, it is possible to list all possible pairs, for those eight games. Let us import our possibility matrix,

 > n=16
 > uefa=read.table(
 + "http://freakonometrics.blog.free.fr/public/data/uefa.csv",
 + sep=",",header=TRUE)
 > LISTEIMPOSSIBLE=matrix(
 + (rep(1:n,n))*(uefa[1:n,2:(n+1)]=="NON"),n,n)

I can fix the first team (in my list, the fourth one is the first team that ended second). Then, I look at all possible opponents (the second team of that first game),

 > a1=1
 > "%notin%" <- function(x, table){x[match(x, table, nomatch = 0) == 0]}
 > posa2=((a1+1):n)%notin%LISTEIMPOSSIBLE[,a1]

Then, consider the second team that ended second (the sixth one in my list). And look at all possible opponents for this second game, i.e. excluding the ones that were already drawn, and those that are not possible,

 > b1=6
 > posb2=(1:n)%notin%c(LISTEIMPOSSIBLE[,b1],a2)

Etc. So, given the list of home teams,

 > a1=4
 > b1=6
 > c1=8
 > d1=9
 > e1=12
 > f1=14
 > g1=15
 > h1=16

consider the following loops,

 > s=0; M=NULL
 > posa2=(1:n)%notin%c(LISTEIMPOSSIBLE[,a1])
 > for(a2 in posa2){
 + posb2=(1:n)%notin%c(LISTEIMPOSSIBLE[,b1],a2)
 + for(b2 in posb2){
 + posc2=(1:n)%notin%c(LISTEIMPOSSIBLE[,c1],a2,b2)
 + for(c2 in posc2){
 + posd2=(1:n)%notin%c(LISTEIMPOSSIBLE[,d1],a2,b2,c2)
 + for(d2 in posd2){
 + pose2=(1:n)%notin%c(LISTEIMPOSSIBLE[,e1],a2,b2,c2,d2)
 + for(e2 in pose2){
 + posf2=(1:n)%notin%c(LISTEIMPOSSIBLE[,f1],a2,b2,c2,d2,e2)
 + for(f2 in posf2){
 + posg2=(1:n)%notin%c(LISTEIMPOSSIBLE[,g1],a2,b2,c2,d2,e2,f2)
 + for(g2 in posg2){
 + posh2=(1:n)%notin%c(LISTEIMPOSSIBLE[,h1],a2,b2,c2,d2,e2,f2,g2)
 + for(h2 in posh2){
 + s=s+1
 + V=c(a1,a2,b1,b2,c1,c2,d1,d2,e1,e2,f1,f2,g1,g2,h1,h2)
 + cat(s,V,"\n") 
 + M=rbind(M,V)
 + }}}}}}}}

With the print option, we end up with

5461 4 13 6 11 8 5 9 2 12 10 14 3 15 7 16 1 
5462 4 13 6 11 8 5 9 2 12 10 14 7 15 1 16 3 
5463 4 13 6 11 8 5 9 2 12 10 14 7 15 3 16 1

i.e.

> nrow(M)
[1] 5463

possible pairs (the list can be found here, where numbers are the same as the ones in the csv file). This was the probability mentioned in a comment in the article mentioned previously, dailymail.co.uk/…. So the probability to have exactly the same output in the practice and the official draws was (in %)

> 100/nrow(M)
[1] 0.01830496

Which is not that small when we think about it….

And if someone has a mathematical expression for this probability, I am interested. The only reliable method I found was to list all possible pairs (the csv file is available if someone wants to check). But I am not satisfied…
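
One partial answer, on the counting side at least: the number of valid sets of pairs is the permanent of the 8×8 compatibility matrix (a classical result on perfect matchings in bipartite graphs), which Ryser's formula evaluates without enumerating all draws. A hedged sketch, where COMPAT is an assumed name for the 0/1 matrix with COMPAT[i,j]=1 when home team i can play against visitor j,

perm=function(A){
n=nrow(A); total=0
for(m in 1:(2^n-1)){                  # nonempty subsets of columns, coded in binary
S=which(bitwAnd(m,2^(0:(n-1)))>0)
total=total+(-1)^length(S)*prod(rowSums(A[,S,drop=FALSE]))
}
(-1)^n*total
}

perm(COMPAT) should return 5463 – but this only counts the pairings; turning the count into the probability still assumes all pairings are equally likely.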

Actuariat 1, ACT2121, eighth course

For the eighth course of Actuariat 1 (ACT2121, preparation for the SOA Exam P), we will continue the exercises started last week. I am nevertheless posting a few additional exercises, for those who want more practice (the file is online here). As a reminder (?), the final exam will take place next week, and will cover all the material. As always, 30 questions, 3 hours, starting at 1 pm (do I need to mention it?). This time, I will provide the "official" SOA table.

Actuariat 1, ACT2121, seventh course

Still as part of the preparation for the SOA Exam P, another series of exercises. Since three weeks remain (plus the final exam), we will try to finish reviewing all the notions. The 50 exercises are online here. I will post (very soon) the questions and solutions of Monday's midterm exam (with – as last time – the response statistics for each question).

Bayes is playing Russian roulette

There was (once again) a nice puzzle on http://www.futilitycloset.com/. Bayes and a good friend are playing Russian roulette. The revolver has six chambers. He puts two bullets in two adjacent chambers, spins the cylinder, holds the gun to his friend's head, and pulls the trigger. It clicks. So it is now Bayes's turn: he can choose either to spin the cylinder again or leave it as it is. Which is better? Hopefully, Bayes knows his theorem: if he does spin it, the probability of getting killed is 2 out of 6, but if he does not, since his friend is still alive, the hammer must be on one of the four empty chambers (out of six), in red below


So here, there are 3 chances out of 4 to survive, i.e. the probability of getting killed is 1 out of 4 (while it was 1 out of 3 when spinning). So Bayes should not spin. And as always, it is possible to see that this is a more general result: in a revolver with $n$ chambers, if there are $k$ bullets in $k$ adjacent chambers, and if the first player survives, the probability of getting killed is $k$ over $n$ when spinning, while it would be 1 over $n-k$ if we don't (out of the $n-k$ empty chambers, only one – the last one before the block of bullets – is followed by a loaded chamber). Not spinning is better if and only if

$$\frac{1}{n-k}\leq\frac{k}{n}$$

i.e.

$$n\leq k(n-k)$$

So you'd better not spin, unless there was one bullet in the revolver, i.e. $k=1$… or bullets everywhere, i.e. $k=n$ (in that case, it might not be a good idea actually to play the game).
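
(A quick simulation – my sketch, with $n=6$ chambers and $k=2$ adjacent bullets – to double-check both strategies:)

russian=function(spin,NS=100000){
killed=0; surv=0
for(s in 1:NS){
start=sample(6,size=1)
bullets=c(start,start%%6+1)   # two bullets in adjacent chambers
first=sample(6,size=1)        # chamber under the hammer after the first spin
if(first %in% bullets) next   # the friend is killed: we condition on his survival
surv=surv+1
second=if(spin) sample(6,size=1) else first%%6+1
killed=killed+(second %in% bullets)
}
killed/surv
}

russian(spin=TRUE) should be close to 1/3, and russian(spin=FALSE) close to 1/4.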

Proving tautological versus trivial results in mathematics

There is something that might be fun in mathematics, which is the connection between trivial, tautological and difficult questions. Sometimes, things are so intuitive that they seem to be obvious. But mathematicians aren't Jedis, and they should not trust their intuition too much… I mean, intuition is fine, but it is not a proof. It is like those standard results we learn in topology courses, e.g. "the closure of an open ball is not necessarily the closed ball". The other thing is that, after a while, you try to prove something, until someone makes you realize that it is the definition…

And this morning, while I was trying to make a coffee, @renaudjf came with a simple question (yes, it always starts like that). Consider the standard algorithm to generate a conditional random variable. Assume that $\Theta$ has a priori distribution $\pi$, and that $X$, given $\Theta=\theta$, has (conditional) distribution $F_{X|\Theta=\theta}$.

The standard Monte Carlo idea, to generate values of $X$, is
  •  step 1: generate $\theta$ from the prior distribution $\pi$
  •  step 2: given that generated $\theta$, generate $x$ from the conditional distribution $F_{X|\Theta=\theta}$
"Can we prove that we actually generate from the (true, maybe hard to characterize) non-conditional distribution of $X$ ? Or is it just trivial ?". After those philosophical questions, we came to the point that, if it was trivial, then we should be able to prove it. A standard way of writing the algorithm is to use the quantile based technique
  •   $\theta=F_{\Theta}^{-1}(U_1)$ with $U_1\sim\mathcal{U}([0,1])$,
  •   $X=F_{X|\Theta=\theta}^{-1}(U_2)$ with $U_2\sim\mathcal{U}([0,1])$,
For instance, to generate a negative binomial distribution (a Poisson distribution whose rate is Gamma distributed),
n=1
theta=rgamma(n,3,3)
X=rpois(n,lambda=theta)
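
(As a sanity check – a hedged side note, using the classical fact that a Poisson distribution whose rate is Gamma(3,3) distributed is negative binomial with size 3 and probability 3/4 – we can compare the simulated sample with the closed-form distribution:)

n=100000
theta=rgamma(n,3,3)
X=rpois(n,lambda=theta)
rbind(empirical=table(factor(X,levels=0:5))/n,
theoretical=dnbinom(0:5,size=3,prob=3/4))
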
Thus, let $X=F^{-1}_{X|\Theta=F_{\Theta}^{-1}(U_1)}(U_2)$, where $U_1$ and $U_2$ are two independent random variables with a uniform distribution on the unit interval. Let us try to derive its distribution, i.e.
$$\mathbb{P}(X\leq x)=\mathbb{P}\left(F^{-1}_{X|\Theta=F_{\Theta}^{-1}(U_1)}(U_2)\leq x\right)=\mathbb{E}\left[F_{X|\Theta=F_{\Theta}^{-1}(U_1)}(x)\right]$$
so
$$\mathbb{P}(X\leq x)=\int_0^1 F_{X|\Theta=F_{\Theta}^{-1}(u)}(x)\,du$$
if we consider the following change of variate $\theta=F_{\Theta}^{-1}(u)$, i.e. $du=\pi(\theta)\,d\theta$,
$$\mathbb{P}(X\leq x)=\int F_{X|\Theta=\theta}(x)\,\pi(\theta)\,d\theta=F_X(x)$$
which is exactly the non-conditional distribution of $X$.
And then, you're quite happy because you've been able to prove a trivial result! But next time, I promise, we'll try to derive an amazing theorem that will change humanity… but next time only; first, let us prove trivial results.

Eating chocolate, an Easter problem

Assume that there are (say) 100 chocolate eggs in a basket: 80 are dark chocolate, while 20 are milk chocolate. Unfortunately, eggs are wrapped, and there is no way you can distinguish them. My daughter has the following algorithm for eating them (and she actually plans to eat all of them):

  1. if there are eggs in her basket, she picks one – at random – looks whether it is dark or milk chocolate, writes it down on a piece of paper (just to remember how many of each kind are left), eats it, and moves to strategy 2.
  2. if there are eggs in her basket, she picks one – at random – looks whether it is dark or milk chocolate, writes it down, and:
  • if it is the same kind as the one she got before, she eats it, and goes again to step 2.
  • if it is not the same kind as the one she got before, she wraps it back, and goes again to step 1.

At the end, if there is only one egg left, the probability that it is a milk chocolate egg is exactly 1/2… Nice, isn’t it ?

It is a simple rejection technique algorithm. It is possible to run some code to check the answer. The following function returns the composition of the basket when a single egg remains (and hence the kind of the last egg),

> lastchocolate=function(dark=80,milk=20){
+ s=1
+ while(dark+milk>1){
+ if(s==1){
+ (eatnow=sample(c("D","M"),prob=c(dark,milk),size=1))
+ if(eatnow=="D"){dark=dark-1};
+ if(eatnow=="M"){milk=milk-1};
+ eatbefore=eatnow;s=2}
+ if(s==2){
+ if(dark+milk>1){
+ s=1;
+ eatnow=sample(c("D","M"),prob=c(dark,milk),size=1)
+ if(eatnow==eatbefore){s=2
+ eat=eatnow;
+ if(eatbefore=="D"){dark=dark-1};
+ if(eatbefore=="M"){milk=milk-1}}
+ }}
+ }
+ return(c(dark,milk))}

If we run it 2,000 times, we obtain

> set.seed(1)
> m=lastchocolate(dark=80,milk=20)
> for(s in 1:1999){m=cbind(m,lastchocolate(dark=80,milk=20))}
> apply(m,1,sum)
[1] 1022 978

So it looks like we have a 50% chance to end up with a dark chocolate egg, and a 50% chance to end up with a milk chocolate egg.
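
Note that nothing in the simulation depends on that particular 80–20 split: a quick hedged check with a very asymmetric basket,

m=replicate(2000,lastchocolate(dark=5,milk=95))
apply(m,1,sum)   # both counts should again be close to 1,000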

Let us prove that result… Let $m$ denote the number of milk chocolate eggs and $d$ the number of dark chocolate eggs, when we start (both positive). Consider an inductive proof, on the total $d+m$, of the fact that the probability that the last egg is a milk one has to be $1/2$. The first step is when $d+m=2$, i.e. $d=m=1$:

out of 2 chocolates, the probability to pick the milk chocolate egg first – leaving the dark one as the last egg – is $1/2$. Assume now that the probability is $1/2$ for all pairs $(d',m')$ such that $d',m'\geq 1$ and $d'+m'<d+m$ (i.e. at least one egg has been eaten).
With probability $\frac{d}{d+m}$, the first egg eaten is a dark one; she then keeps eating dark eggs until either she draws a milk one – so the process restarts from some state $(d',m)$ with $d'\geq 1$, where the inductive assumption applies – or she eats all the dark eggs without interruption, which happens with probability
$$p_0=\frac{(d-1)!\,m!}{(d+m-1)!}$$
(and then the last egg will be a milk one, for sure). Similarly, the probability that the first egg eaten is a milk one and that she then eats all the milk eggs without interruption is $\frac{m}{d+m}\,q_0$, where
$$q_0=\frac{d!\,(m-1)!}{(d+m-1)!}$$
(and then the last egg will be a dark one, for sure). In all the other cases – with the complementary probability – the process restarts from a state where both kinds are still strictly positive, and the inductive assumption gives $1/2$. Thus, the overall probability that the last egg is a milk one is
$$\frac{1}{2}\left[1-\frac{d\,p_0}{d+m}-\frac{m\,q_0}{d+m}\right]+\frac{d\,p_0}{d+m}=\frac{1}{2}+\frac{1}{2}\left[\frac{d\,p_0}{d+m}-\frac{m\,q_0}{d+m}\right]$$
where
$$\frac{d\,p_0}{d+m}=\frac{d!\,m!}{(d+m)!}=\frac{m\,q_0}{d+m}$$
This probability is exactly one half (straightforward).

Playing with fire (or water)

A few days ago, http://www.futilitycloset.com/ published a short post based on the fourth problem of the 1987 Canadian Mathematical Olympiad (itself based on a problem from the 6th All Soviet Union Mathematical Competition in Voronezh, 1966). The problem is simple (as always). It is about water pistol duels (with an odd number of players).

The answer is nice, and can be read on the blog.

What puzzled me in this problem is the following: we know, for sure, that at least one player won't get wet, but how many of them won't get wet (assuming that each player shoots at the closest player, and hits him for sure) ? It is simple to run simulations, e.g. assuming that players are uniformly distributed over a square,

NOTWET=function(n){
x=runif(n)
y=runif(n)
d=as.matrix(dist(cbind(x,y), method = "euclidean",upper=TRUE))
diag(d)=999999               # so that no one is his own nearest neighbour
dmin=apply(d,2,which.min)    # whom each player shoots at
notwet=n-length(table(dmin)) # players that nobody shoots at stay dry
return(notwet)}

It is then rather simple to get the distribution of the number of player that did not get wet,

NSim=25000
N25=Vectorize(NOTWET)(n=rep(25,NSim))
T=table(N25)
plot(as.numeric(names(T)),T/NSim,type="b")

The graph for different values for the total number of players is the following (based on 25,000 simulations)

If we investigate further, say with 51 players, the distribution of the total number of players that did not get wet looks very much like a Gaussian distribution,

NSim=25000
N51=Vectorize(NOTWET)(n=rep(51,NSim))
T=table(N51)
plot(as.numeric(names(T)),T/NSim,type="b",col="blue")
u=seq(0,51,by=.1)
lines(u,dnorm(u,mean(N51),sd(N51)),col="red",lty=2)

If anyone has an intuition (not to say a proof) for that, I’d be glad to hear it…

Sunday evening, stupid games…

This evening, while I was about to wash the dishes, I heard my two elder kids starting a game (call them Him and Her):
Him: “I have picked – in my head – a number, lower than 50. Try to guess…”
Her: “No way, too difficult…”
Him: “You can try five different numbers…”
Her: “… um … No, no way…”
Me: “Wait… each time we suggest a number, you tell us if yours is either above, or below ?”
You can see me coming clearly, can’t you ? Using a simple subdivision rule, we have a fast algorithm (and indeed, if I have to choose between washing the dishes and playing with the kids…)
Him: “um…. ok”
Her: “Daddy, are you sure we will win ?”
Me: “Well… I cannot promise that we will win… but I am rather sure [sic] that we will win quite frequently: more gains than losses…” (I guess).
Her: “Great ! I am playing with daddy…”

Him: “um… wait, is it one of your tricks, again ? I don't want to play anymore… Do you want to see the books we've chosen at the library ?”
Her: “Sure…”
Me: “What ? no one wants to see if I was right ? that we have indeed more than 50% chances to win…”
Him and her: “No !”
The point of that story ? If we listen to kids, science will not go forward, trust me. But I am curious… I want to see if my intuition was correct. Actually, the intuition was based on the fact that

> 2^5
[1] 32 
> 2^6
[1] 64

so in 5 or 6 steps the subdivision algorithm should converge. I guess… I mean, I do not know for sure, since 50 is not a power of 2, so it might be difficult, each time, to split the interval in two: we have to deal only with integers here…
To be sure, let us substitute my laptop for my son… to pick numbers, randomly (yes, sometimes I feel like I am Doctor Tenma, 天馬博士). The algorithm is simple: there are bounds, and at each step I should suggest the middle of the interval. If the middle is not an integer, I suggest either the integer below or the integer above (with equal probabilities).

cutinhalf=function(a,b){
m=(a+b)/2
# if the middle of the interval is not an integer, pick one of the two nearest integers at random
if(m %% 1 != 0){m=sample(c(m-.5,m+.5),size=1)}
return(round(m))}

The following function runs 100,000 simulations, and tells us how often, out of 5 suggested numbers, we got the right one.

winning=function(lower=1,upper=50,tries=5,NS=100000){
SIM=rep(NA,NS)
for(simul in 1:NS){
interval=c(lower,upper)
(unknownnumber=sample(lower:upper,size=1))
success=FALSE
for(i in 1:tries){
picknumber=cutinhalf(interval[1],interval[2])
if(picknumber==unknownnumber){success=TRUE}
if(picknumber>unknownnumber){interval[2]=picknumber}
if(picknumber<unknownnumber){interval[1]=picknumber}
#print(c(unknownnumber,picknumber,success,interval))
};SIM[simul]=success};return(mean(SIM))}

It looks like the probability that we get the right number is higher than 60%,

> winning()
[1] 0.61801

Which is not bad. And if the upper limit was not 50, but something else, the probability of winning would have been the following.

VWN=function(n){winning(upper=n)}
V=Vectorize(VWN)(seq(25,100,by=5))
plot(seq(25,100,by=5),V,type="b",col="red",ylim=c(0,1))


Actually, after losing a couple of times, I am rather sure that my son would have told us that we can suggest only four numbers. In that case, the probability would have been close to 30%, as shown on the blue curve below (where only four numbers can be suggested).
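
(That blue curve can be reproduced with the same winning function – a small sketch, simply changing the number of tries:)

V4=Vectorize(function(n){winning(upper=n,tries=4)})(seq(25,100,by=5))
lines(seq(25,100,by=5),V4,type="b",col="blue")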

Anyway, as intuited, with five possible suggestions, we were quite likely to win frequently. Actually with a probability of almost 2 out of 3… and 1 out of 3 if my son had decided to pick a number between 1 and 100, or to allow only 4 suggestions… Those probabilities are quite large actually, when we think about it. It reminds me of that McGyver story I mentioned a few months ago… Anyway, calculating probabilities is nice, but I still have to wash the dishes…

Ruin probability and infinite time

A couple of weeks ago, I had a discussion with a practitioner, working in some financial company, about ruin, and infinite time. And it reminded me of a weird result. Well, not a weird result, but a result I found disturbing, at first, when I was a student (and that I rediscovered with the eyes of someone dealing with computational issues, seeing here a difficult theoretical question). Consider a simple ruin problem. A player has wealth $x$. Then he flips a coin: tails he has a gain of 1, heads he experiences a loss of 1. At time $n$, his wealth is $S_n=x+X_1+\cdots+X_n$, where $X_i$ is associated to the $i$th coin: $X_i$ is equal to $+1$ with probability $p$ (tails), and $-1$ with probability $1-p$ (heads). It is also possible to write

$$S_n=x+M_n$$
where $M_n=X_1+\cdots+X_n$ can be interpreted as the net gain of the player. In order to get a good understanding of the results that can be obtained, assume $n$ to be given. Let $H_n$ denote the number of heads and $T_n$ the number of tails. Then $H_n+T_n=n$, while $M_n=T_n-H_n$. Let $N(A,B)$ denote the number of paths going from point A (wealth $x_A$ at time $n_A$) to point B (wealth $x_B$ at time $n_B$). Note that this is a Markovian problem, that can be modeled using Markov chains

But here, we will focus on combinatorial results. Hence,
$$N(A,B)=\binom{n_B-n_A}{\frac{(n_B-n_A)+(x_B-x_A)}{2}}$$

In order to derive probabilities to reach the ruin level $0$, let $N^0(A,B)$ denote the number of paths going from $A$ to $B$ that do reach $0$ at some point between $n_A$ and $n_B$. Using a simple reflection property, then if $x_A$ and $x_B$ are positive,
$$N^0(A,B)=N(A',B)\quad\text{where }A'=(n_A,-x_A)$$

Based on those reflections, two results can be derived (focusing on probabilities, instead of counting paths). First, we can obtain that
$$\mathbb{P}(M_n=-x)=\binom{n}{\frac{n+x}{2}}(1-p)^{\frac{n+x}{2}}p^{\frac{n-x}{2}}$$
(given that $n$ and $x$ have the same parity). The second result we can obtain is that, among those paths, the proportion that stay strictly above $-x$ before time $n$ is $x/n$ (a ballot-type result). Based on those two expressions, if $\tau_x$ denotes the first time the wealth $S_n$ becomes null, given $S_0=x$,
$$\tau_x=\inf\{n\geq 1:\,S_n=0\}$$

then
$$\mathbb{P}(\tau_x=n)=\frac{x}{n}\binom{n}{\frac{n+x}{2}}(1-p)^{\frac{n+x}{2}}p^{\frac{n-x}{2}}$$
This can be computed easily,

> x=10
> p=.55
> ProbN=function(n){
+ pb=0
+ if(abs(n-x) %% 2 == 0)
+ pb=x/n*choose(n,(n+x)/2)*(1-p)^((n+x)/2)*(p)^((n-x)/2)
+ return(pb)}
> plot(Vectorize(ProbN)(1:1000),type="s")

That looks nice… But if we look closer, we can wonder what
$$\sum_{n\geq 1}\mathbb{P}(\tau_x=n)$$
would be ? Since it should be the total mass of a probability measure, we might expect one. But here

> sum(Vectorize(ProbN)(1:1000))
[1] 0.134385

And it is not due to calculation mistakes that we do not get 1 here. Actually, we should write
$$\sum_{n\geq 1}\mathbb{P}(\tau_x=n)=\mathbb{P}(\tau_x<\infty)$$
which might be interpreted as the probability of ruin, starting from $x$, that we denote $\psi(x)$ from now on. The term on the left can be approximated using Monte Carlo simulations,

> p=.55
> x=10
> m=1000
> simul=10000
> S=sample(c(-1,1),size=m*simul,replace=TRUE,prob=c(1-p,p))
> MS=matrix(S,simul,m)
> for(k in 2:m) MS[,k]=MS[,k]+MS[,k-1]
> T0=function(vm) which(vm<=(-x))[1]
> MTmin=apply(MS,1,T0)
> mean(is.na(MTmin)==FALSE)
[1] 0.1328

To check the validity of the relationship above, a simple (theoretical) recursive formula can be derived for the term on the right (the ruin probability), namely
$$\psi(x)=p\,\psi(x+1)+(1-p)\,\psi(x-1)$$
with boundary conditions $\psi(0)=1$ and $\psi(x)\rightarrow 0$ as $x\rightarrow\infty$ (since $p>1/2$ here). Then it comes that
$$\psi(x)=\left(\frac{1-p}{p}\right)^x$$
Note that it might be tricky to check using Monte Carlo simulations… since we cannot have an infinite number of runs. And we're dealing precisely with things that do occur when time is infinite. Actually, we can still check convergence, considering an upper barrier $m$ (the game also stops when wealth $m$ is reached), and then letting $m$ go to infinity. Note that an explicit formula can then be derived (using the additional border condition $\psi_m(m)=0$)
$$\psi_m(x)=\frac{\left(\frac{1-p}{p}\right)^x-\left(\frac{1-p}{p}\right)^m}{1-\left(\frac{1-p}{p}\right)^m}$$
Using the following code, it is possible to compare the Monte Carlo estimate of the ruin probability with that explicit formula,

> MSmin=apply(MS,1,min)
> mean(MSmin<=(-x))
[1] 0.1328
> (((1-p)/p)^x-((1-p)/p)^m)/(1-((1-p)/p)^m)
[1] 0.1344306

The following graph shows the evolution of ruin probability as a function of initial wealth (with monte carlo simulation, with a fixed horizon – including a confidence interval – versus the analytical expression)

Hence, with stopping times, one should remember that
$$\sum_{n\geq 1}\mathbb{P}(\tau_x=n)=\mathbb{P}(\tau_x<\infty)$$
can be (strictly) less than 1, and that those two terms can be approximated simply, using simulations or standard approximations.

When nuns or Hells Angels get in a plane

Today, at lunch, Matthieu told us a nice story (or call it a paradox if you like) about the probability of finding your seat empty when you get in a plane.

  • a plane full of nuns

Assume that you are in the line to get in the airplane, and you are the 100th in the line. The first one is scatterbrained, he has his head in the clouds, and when he gets in the airplane, he cannot remember where he should sit. His strategy is then extremely simple: he sits somewhere randomly in the plane. So he picks a seat at random, and he waits.

Then come 98 nuns (one by one). And nuns are extremely polite: if there is someone in their seat (the one that is on their ticket) then they do not complain, and pick another seat at random (among those available, of course). Then you arrive. The question is simple: what is the probability that someone is seated in your seat ?

Any idea…?

Maybe I should give more time to do the maths… and tell another story…

  • a plane full of Hells Angels

Consider almost the same problem as the one mentioned above. Except that now, it is not 98 nuns that are getting in the plane, but 98 Hells Angels. The problem here is that Hells Angels are slightly less polite than nuns. When they find someone sitting in the seat they should have, they do not shyly move to another seat; they grunt, and then our scatterbrained man (who is actually sitting in their seat) has to move somewhere else. And the question is the same: you are the 100th person to get in the plane, what is the probability that someone is seated in your seat ? Any idea…?

The important point is that the problem is exactly the same (at least from a mathematical point of view; maybe not for the stewardess, or for the guy who entered the plane first). The point is that, at each time, there can be only one person (or less) sitting in a seat which is not his or hers (in the sense that if we compare the list of passengers at any time, and the list of seats taken, there should be only one – or less – difference). The difference between the two stories is that in the first case, it will be a nun, while in the second one, it will be our shy guy.

  • Let us run simulations

If we do not see how to get that probability analytically, let us run some R code,

> set.seed(1)
> n=100; TEST=rep(NA,100000)
> for(s in 1:100000){
+ OCCUPIED=rep(FALSE,n)
+ OCCUPIED[sample(1:n,size=1)]=TRUE
+ for(j in 2:(n-1)){
+ FREE=which(OCCUPIED==FALSE)
+ if(OCCUPIED[j]==TRUE){OCCUPIED[sample(FREE,size=1)]=TRUE}
+ if(OCCUPIED[j]==FALSE){OCCUPIED[j]=TRUE}
+ }
+ TEST[s]=OCCUPIED[n]==TRUE
+ }
> mean(TEST)
[1] 0.49878

Here, we clearly see that the problem is the same (either with nuns or Hells Angels): we do not care about who will change his/her seat, we just look at the seats that are available… So the program is valid for the two problems (and the solution will thus be the same). Another point is that the probability looks extremely simple: one out of two!

  • an analytical expression

Consider the Hells Angels problem (for notations). Let $p_i$ denote the probability that, at time $i$ (i.e. once $i$ passengers are seated), our shy guy is sitting in my seat. When he gets in the plane, the probability that he picks my seat is
$$p_1=\frac{1}{100}$$
Then, for the probability that, after the $i$th passenger's entrance, our guy is sitting in my own seat (since the initial proof was not correct, I removed it, see below for a nice proof), one can get that
$$p_{i+1}=p_i\cdot\frac{100-i+1}{100-i},\quad\text{i.e.}\quad p_i=\frac{1}{100-i+1}$$
So, we can get the probability that, when I get in, our guy is sitting in my own seat, as
$$p_{99}=\frac{1}{100-99+1}$$

i.e.
$$p_{99}=\frac{1}{2}$$

Hence, there is one chance out of two that my seat will be free… (which is what we got with the Monte Carlo simulations).

But a faster proof is to observe that, in the Hells Angels case, our guy will be kicked out until he reaches either his seat, or mine. Since those two events are equiprobable, there is one chance out of two that he ends up sitting in my seat (and since no Hells Angel will sit in mine, only this first guy can). So the probability that someone is in my seat when I get in is one half.

Nice, isn't it ? And thanks Matthieu for the problem (with his friend Claude's solution for the Hells Angels, and Olivier and Renaud for their comments)!

Could it really be a coincidence ?!?

Last week, on http://www.guardian.co.uk, there was a nice post on coincidences, concluding with "a decent knowledge of mathematics shows you that most coincidences are just that: coincidence", and asking for a better knowledge of basic probabilities…

The only problem is that the first line is not correct… “Today is my birthday, which means that there is a 50% probability that one of the first 23 people who read this blog entry will share their birthday with me”.

There is a well known "paradox" in probability related to this: the correct sentence should be "there is a 50% probability that, if we consider the first 23 people who read this blog entry, at least two of them will share their birthday"… but not necessarily with you… Indeed the probability that 2 people out of 23 share their birthday is

> 1-prod(365:(365-22))/(365^23)
[1] 0.5072972

But if you want to find someone who shares his/her birthday with you, you need to consider the first 253 readers if you are looking for a 50% chance,

> 1-(364/365)^253
[1] 0.5004772

But dealing with probabilities is not that simple. At least, we get the message… And I will be the last one to blame you: yesterday, a colleague got me asking "simple" questions about the probability of finding a pair of socks in several drawers… And I could not get it right…

Probabilities, and opening doors (or boxes)

Recently, while we were in the car to Québec City with some PhD students, someone mentioned the Monty Hall paradox, and we discussed possible extensions… The Monty Hall paradox is usually presented through a TV game show,

Craig F. Whitaker wrote the problem as follows, in Parade Magazine, September 1990: « Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice? ». Actually, Bertrand proposed the same problem, but with boxes instead of doors… and the problem was the same.

Assume that the candidate chooses door 1 (without loss of generality, since the problem is clearly symmetric). A priori, the probability that the car is behind door 2 (or any given door) is $1/3$. The animator can either open door 2 or door 3:

  • if the car is behind door 3, he has to open door 2,
  • if the car is behind door 2, he has to open door 3,
  • if the car is behind door 1, he can open door 2 or 3, and we assume that the opening is equiprobable,


Assume that the animator says “the second box is empty”, what should the candidate do ?
To formalize the problem, let $C_i$ denote the event that the car is behind door $i$, and $O_i$ the event that the animator opens door $i$. So, if he opens door 2, the probability that the car is behind door 3 is
$$\mathbb{P}(C_3|O_2)=\frac{\mathbb{P}(O_2|C_3)\,\mathbb{P}(C_3)}{\mathbb{P}(O_2)}$$
where
$$\mathbb{P}(O_2|C_3)=1$$
from the previous discussion, since he cannot open door 1 (the candidate chose it) and he cannot open door 3 (since the car is behind). Further,
$$\mathbb{P}(O_2|C_1)=\frac{1}{2}$$
from equiprobability. And for $\mathbb{P}(O_2)$ we get, similarly,
$$\mathbb{P}(O_2)=\sum_{i=1}^{3}\mathbb{P}(O_2|C_i)\,\mathbb{P}(C_i)=\frac{1}{2}\cdot\frac{1}{3}+0\cdot\frac{1}{3}+1\cdot\frac{1}{3}=\frac{1}{2}$$
Thus,
$$\mathbb{P}(C_3|O_2)=\frac{1\cdot\frac{1}{3}}{\frac{1}{2}}=\frac{2}{3}$$
while
$$\mathbb{P}(C_1|O_2)=\frac{\frac{1}{2}\cdot\frac{1}{3}}{\frac{1}{2}}=\frac{1}{3}$$
So the optimal strategy is to open the third door (even if I chose the first one)… It is usually seen as a paradox, but if you consider a much larger number of doors (say 4),

and that the animator opens 2 doors, then should we still change, and open the door that is still closed ? The higher the number of doors, the higher the probability to have something behind the other door…

For instance, with 4 doors, or boxes, if the candidate still chose the first door, and the animator opens doors 2 and 3, then the probability that the car is behind the fourth one is
$$\mathbb{P}(C_4|O_2\cap O_3)=\frac{\mathbb{P}(O_2\cap O_3|C_4)\,\mathbb{P}(C_4)}{\mathbb{P}(O_2\cap O_3)}$$
i.e.
$$\mathbb{P}(C_4|O_2\cap O_3)=\frac{1\cdot\frac{1}{4}}{\frac{1}{3}}=\frac{3}{4}$$
(the $\frac{1}{3}$ that appears at the denominator comes from the fact that we focus on the pair $(O_2,O_3)$, one of the three – equiprobable – pairs of doors the animator can open), while
$$\mathbb{P}(C_1|O_2\cap O_3)=\frac{\frac{1}{3}\cdot\frac{1}{4}}{\frac{1}{3}}=\frac{1}{4}$$
Once again, we have that
$$\mathbb{P}(C_1|O_2\cap O_3)=\mathbb{P}(C_1)=\frac{1}{4}$$
i.e. the opening of the doors brings us no information about our initial choice, but it does alter the conditional probability for the remaining door.
More generally, with $n$ doors, if the animator opens $n-2$ of them (all but the candidate's door and one other),

then
$$\mathbb{P}(C_1|\text{doors opened})=\frac{1}{n}$$
while, for the door left closed,
$$\mathbb{P}(C_{\text{closed}}|\text{doors opened})=\frac{n-1}{n}$$
And here the result is even more intuitive: we have to open the door that was left closed. Actually, it is possible to see that this can be extended to the case where there are $n$ doors (or boxes), the candidate chooses $k$ of them, and the animator opens $j$ (out of the $n-k$ remaining doors, never revealing the car). Then, behind each door chosen by the candidate, the probability does not change,
$$\mathbb{P}(C_i|\text{doors opened})=\frac{1}{n}$$
where $i$ goes from 1 to $k$ (the chosen doors), while for each of the $n-k-j$ doors left closed,
$$\mathbb{P}(C_i|\text{doors opened})=\frac{1}{n}\cdot\frac{n-k}{n-k-j}$$
where $i$ now runs over the doors neither chosen nor opened.
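
As a final check – my own hedged sketch, not part of the original discussion – the general formula can be validated by simulation, with $n$ doors, $k$ chosen by the candidate, and $j$ opened by the animator (assuming $j\leq n-k-1$, so that the animator can always avoid the car):

montysim=function(n=4,k=1,j=2,NS=100000){
stay=0; sw=0
for(s in 1:NS){
car=sample(1:n,size=1)
openable=setdiff((k+1):n,car)     # the animator avoids the candidate's doors and the car
opened=openable[sample.int(length(openable),j)]
remaining=setdiff((k+1):n,opened) # unchosen doors left closed
d=remaining[sample.int(length(remaining),1)]
stay=stay+(car==1)                # one given door chosen by the candidate
sw=sw+(car==d)                    # one given door left closed
}
c(chosen=stay/NS,closed=sw/NS)
}

montysim() should return values close to 1/4 and 3/4, and montysim(10,2,5) values close to 1/10 and (1/10)×8/3≈0.27.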