Tag Archives: Monty Hall

Monty Hall problem, with Thompson sampling

We all know the Monty Hall problem. Recently, Jason Rosenhouse published a book on that topic (entitled The Monty Hall Problem: The Remarkable Story of Math’s Most Contentious Brain Teaser). The game is more or less described by the following question:

Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?

While I was preparing some slides for a lecture on Bayesian modeling and thinking, I wanted an illustration of what is sometimes called the Bayesian brain, related to how beliefs are updated as we gain experience, and I was looking for examples of Thompson sampling. It turns out that it is possible to learn that switching is the optimal strategy in the Monty Hall problem just by playing the game sequentially and learning from previous plays. The following code is used to choose the door with the prize (the car), and the one we first select

set.seed(1)
n = 5000
listdoor = matrix(1:3,3,n)
door = listdoor
win = sample(1:3,size=n,replace=TRUE)
pick1 = sample(1:3,size=n,replace=TRUE)

Then, the presenter opens one door that is neither the one with the car nor the one we chose initially. The following trick can be used to get the list of doors he can open

door[win+(0:(n-1))*3] = NA
door[,1:10]
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] NA NA NA 1 NA NA 1 NA 1 NA
[2,] 2 2 NA NA 2 2 2 NA NA 2
[3,] 3 NA 3 3 NA NA NA 3 NA NA
door[pick1+(0:(n-1))*3] = NA

Then, the presenter opens one of the doors that remain available

presenter = apply(door,2, function(x) sample(x[!is.na(x)],size=1))
# when win != pick1, only one door remains; sample(x, size=1) with a length-one x
# would draw from 1:x, so we take the remaining door directly in that case
presenter[win != pick1] = apply(door,2,function(x) x[!is.na(x)])[win != pick1]
presenter = unlist(presenter)
presenter[1:10]
[1] 3 2 3 1 2 2 2 3 1 2

Now, let us consider the Monty Hall problem. We have two possible strategies. The first one is to keep the door we chose initially

pick2a = pick1
gaina = (pick2a==win)
mean(gaina)
[1] 0.3392

As expected, on average, we win (about) 1 time out of 3. The second strategy is to (always) pick the other door (the one left closed). The code is close to the one we used before

door = listdoor
door[pick1+(0:(n-1))*3] = NA
door[presenter+(0:(n-1))*3] = NA
pick2b = apply(door,2,function(x) x[!is.na(x)])
gainb = (pick2b==win)
mean(gainb)
[1] 0.6608

If you know the Monty Hall problem, you know that the probability of winning is now 2 out of 3 (which is what the maths tells us), and that is indeed what we get from the simulations.
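One quick way to do the maths here (a one-line argument, added for completeness): switching wins exactly when the initial pick was wrong, so \mathbb{P}(\text{win}\mid\text{switch})=\mathbb{P}(\text{initial pick wrong})=1-\frac{1}{3}=\frac{2}{3}.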

Now, what if we don’t know how to do the maths, and we don’t want to compute anything? We can use Thompson sampling to explore and exploit. In a general context, we have to choose among K alternatives (here K=2, since we can either keep our initial choice, or pick the other door), and the output is \boldsymbol{X}=(X_1,\cdots, X_K), where X_k\sim\mathcal{B}(\theta_k), but \theta_k is unknown, and we will play the game, and learn. From the previous computations, we know that \theta_1=1/3 while \theta_2=2/3.

We use some prior distribution, \theta_k\sim\mathcal{B}eta(\alpha_k,\beta_k), since the Beta distribution is the conjugate prior of the Bernoulli. At time t, we draw K (independent) Beta variables B_k\sim\mathcal{B}eta(\alpha_k,\beta_k), pick k^\star = \displaystyle{\underset{k=1,\cdots,K}{\text{argmax}}\{B_k\}}, play that strategy, and update its posterior: \alpha_{k^\star}\leftarrow\alpha_{k^\star}+X_{k^\star} and \beta_{k^\star}\leftarrow\beta_{k^\star}+1-X_{k^\star}. Here the code will be

set.seed(2)
# outcome of each strategy, at each round: 1 if it wins, 0 otherwise
X = cbind(pick2a == win, pick2b == win)*1
# Beta parameters (alpha, beta) of the two strategies, over time
AB1 = AB2 = tirage = matrix(NA,n,2)
choix = rep(NA,n)
k = 1
AB1[k,] = AB2[k,] = c(1,1)   # uniform Beta(1,1) priors
for(k in 1:(n-1)){
  # draw one Beta variable per strategy, and play the argmax
  tirage[k,] = c(rbeta(1,AB1[k,1],AB1[k,2]),
                 rbeta(1,AB2[k,1],AB2[k,2]))
  choix[k] = which.max(tirage[k,])
  # update the posterior of the strategy that was played
  if(choix[k] == 1){
    AB1[k+1,] = AB1[k,] + c(X[k,1],1-X[k,1])
    AB2[k+1,] = AB2[k,]
  }
  if(choix[k] == 2){
    AB1[k+1,] = AB1[k,]
    AB2[k+1,] = AB2[k,] + c(X[k,2],1-X[k,2])
  }
}

Before showing some graphs, let us check that we do select the second strategy (switching to the other door) more often

AB1[n,]
[1] 5 13
AB2[n,]
[1] 3292 1693

Indeed, since the mean of a \mathcal{B}eta(\alpha,\beta) distribution is \alpha/(\alpha+\beta),

AB2[n,1]/(sum(AB2[n,]))
[1] 0.6603811

i.e. the probability of winning with this second strategy is about 2/3 (as obtained previously). We can visualize this on the animation below: in red the first strategy (keep your initial choice), in green the second one (select the other door), with 0 and 1 indicating the outcome of each play (loss or win). Then we can visualize the evolution of \alpha_2 and \beta_2 on top, and of \alpha_1 and \beta_1 below (the index is time t). Finally, the two variables B_1 and B_2 that are drawn are shown.
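In case the animation does not display, here is a minimal static sketch (mine, not the original plotting code) that uses the AB1 and AB2 matrices computed above and plots the posterior means of the two strategies over time:

# posterior means over time (strategy 1: keep, strategy 2: switch)
m1 = AB1[,1]/(AB1[,1]+AB1[,2])
m2 = AB2[,1]/(AB2[,1]+AB2[,2])
plot(m2, type="l", col="green", ylim=c(0,1), xlab="time", ylab="posterior mean")
lines(m1, col="red")
abline(h=c(1/3,2/3), lty=2)   # theoretical winning probabilities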

Of course, another simulation would have given different B_1’s and B_2’s, but in the end, we learn that the second strategy is better, and we learn it quite fast…

Here is another one (just to confirm)

So clearly, even if we do not know which strategy is optimal (keep our initial choice, or switch), a player who has played that game about 30 times should be able to figure out that switching is the better strategy.
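As a small additional check (a sketch of mine, not in the original post), we can look at how quickly the algorithm settles on the switching strategy, using the choix vector computed above

table(choix[1:30])                        # choices among the first 30 plays
p2 = cumsum(choix[-n]==2)/seq_len(n-1)    # running share of "switch" choices
plot(p2, type="l", xlab="play", ylab="share of switch choices")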

Monty Hall (oh no, not again)

Quite frequently, someone on the internet discovers the Monty Hall paradox, and becomes so enthusiastic that it suddenly seems urgent to publish an article – or a post – about it. The latest example is http://www.bbc.co.uk/news/magazine-24045598. I won’t blame them, I did the same a few years ago (see http://freakonometrics.hypotheses.org/776, or http://freakonometrics.hypotheses.org/775, in French).

My point today is that the Monty Hall paradox raises an important question about information: how come something that sounds non-informative can actually be extremely informative? I will not go back to the blue eyes paradox (see http://freakonometrics.hypotheses.org/1963, in French) or the exam paradox (see http://freakonometrics.hypotheses.org/2328, in French, one more time), which are related to information, but not through a probabilistic approach. I will stay close to Monty Hall’s paradox today.

This morning, in my probability class, we were looking at a simple exercise (I say simple because it is only the second lecture of the session). The problem was the following:

Consider a first urn, with 15 blue balls and 10 red balls, and a second urn, with 10 blue balls and 15 red balls. We select one urn at random (with probability 50% for each urn).
We draw a ball, which turns out to be blue, and we put it back in the urn. Now, we draw a (second) ball. What is the probability that this (second) ball is blue?

Please, take your time to read that carefully…

Ready? Your first thought should be that, since we put the ball back after the first draw, it does not change the probabilities, right? So, why did we mention it? Is it necessary? (about the last question: yes, when something is mentioned in an exercise, we should use it).

As an introduction to this problem, let’s forget about the second ball for a moment. What was, actually, the probability for the first ball to be blue? Trivially, it was
\mathbb{P}(B_1=\text{blue})=\mathbb{P}(\text{blue}\mid\text{urn }1)\cdot\frac{1}{2}+\mathbb{P}(\text{blue}\mid\text{urn }2)\cdot\frac{1}{2}
i.e.
\mathbb{P}(B_1=\text{blue})=\frac{15}{25}\cdot\frac{1}{2}+\frac{10}{25}\cdot\frac{1}{2}=\frac{1}{2}
Let us run some code to check that, using simulations:

> n=1000000
> set.seed(1)

First, let us pick an urn at random (for each of the n simulations)

> urn=sample(1:2,size=n,replace=TRUE)

Then, let us draw the first and the second ball,

> urns=matrix(c(15,10,10,15),2,2)
> colnames(urns)=c("blue","red")
> sample.urn=(urns[urn,])
> prob.urn=sample.urn/apply(sample.urn,1,sum)
> u1=c("blue","red")[1+(runif(n)<prob.urn[,1])]
> u2=c("blue","red")[1+(runif(n)<prob.urn[,1])]

The probability that the first ball was blue is here

> sum(u1=="blue")/n
[1] 0.499953

and for the second one

> sum(u2=="blue")/n
[1] 0.499221

So, indeed, the probability of drawing a blue ball is 50%. Now, what was the question? Given that the first ball was blue, what is the probability that the second one is blue? Here, in our simulations, it is

> sum(u2[u1=="blue"]=="blue")/sum(u1=="blue")
[1] 0.5194088

which is close to 52%. And if you run more simulations, you get

> f=function(seed){
+ set.seed(seed)
+ urns=matrix(c(15,10,10,15),2,2)
+ colnames(urns)=c("blue","red")
+ sample.urn=(urns[urn,])
+ prob.urn=sample.urn/apply(sample.urn,1,sum)
+ u1=c("blue","red")[1+(runif(n)<prob.urn[,1])]
+ u2=c("blue","red")[1+(runif(n)<prob.urn[,1])]
+ return(sum(u2[u1=="blue"]=="blue")/
+ sum(u1=="blue"))
+ }
> Vectorize(f)(1:20)
 [1] 0.5194088 0.5200931 0.5203338 0.5192104 0.5196960 0.5206121 0.5195453
 [8] 0.5184580 0.5203755 0.5200154 0.5196557 0.5179276 0.5188652 0.5204724
[15] 0.5197437 0.5209244 0.5205770 0.5208725 0.5206228 0.5190711

The probability is always close to 52%, and is (significantly) different from 50%.

Still not convinced that we have some information here that should be used? Imagine that the first urn contains 1 blue ball and 24 red balls, and the opposite for the second one. In that case, if we are told that the first ball was blue, it is very likely that the urn chosen was the second one. Let’s look at it by running some simulations

> set.seed(1)
> urns=matrix(c(1,24,24,1),2,2)
> colnames(urns)=c("blue","red")
> sample.urn=(urns[urn,])
> prob.urn=sample.urn/apply(sample.urn,1,sum)
> u1=c("blue","red")[1+(runif(n)<prob.urn[,1])]
> u2=c("blue","red")[1+(runif(n)<prob.urn[,1])]

As before, the (unconditional) probability that the second ball is blue is 50% (because of the symmetry, actually)

> sum(u2=="blue")/n
[1] 0.500362

But if I tell you that the first one was blue, the probability that the second one is blue becomes

> sum(u2[u1=="blue"]=="blue")/sum(u1=="blue")
[1] 0.9236433

So even if – somehow – we do not change much by replacing the ball in its urn, we do have some information here, since it was mentioned that the ball was blue. And we should use it. Again, the important point is that the sentence was not “we draw a ball and we put it back”, but “we draw a blue ball, and we put it back”. Now, if we do the maths, everything becomes simple, and clear (as usual).

The question is here to compute
\mathbb{P}(B_2=\text{blue}\mid B_1=\text{blue})
and according to Bayes formula, it is
\frac{\mathbb{P}(B_1=\text{blue},B_2=\text{blue})}{\mathbb{P}(B_1=\text{blue})}
Now, to compute those two probabilities, we have to condition on the urn,
\mathbb{P}(B_1=\text{blue},B_2=\text{blue})=\sum_{u}\mathbb{P}(B_1=\text{blue},B_2=\text{blue}\mid U=u)\,\mathbb{P}(U=u)
Given the urn, since we replace the ball, the two draws are independent,
\mathbb{P}(B_1=\text{blue},B_2=\text{blue}\mid U=u)=\mathbb{P}(\text{blue}\mid U=u)^2
i.e.
\mathbb{P}(B_2=\text{blue}\mid B_1=\text{blue})=\frac{\sum_u\mathbb{P}(\text{blue}\mid U=u)^2\,\mathbb{P}(U=u)}{\sum_u\mathbb{P}(\text{blue}\mid U=u)\,\mathbb{P}(U=u)}
So if we substitute the numerical probabilities of getting a blue ball in the previous formula, we get
\frac{(15/25)^2\cdot\frac{1}{2}+(10/25)^2\cdot\frac{1}{2}}{(15/25)\cdot\frac{1}{2}+(10/25)\cdot\frac{1}{2}}=\frac{(15/25)^2+(10/25)^2}{(15/25)+(10/25)}
which is not the same as
\mathbb{P}(B_2=\text{blue})=\frac{1}{2}
Here, we get

> {(15/25)^2+(10/25)^2}/((15/25)+(10/25))
[1] 0.52

which confirms our empirical 52%, and note that in the second case (where there was only 1 blue ball in one urn, and 24 in the second one)

> {(24/25)^2+(1/25)^2}/((24/25)+(1/25))
[1] 0.9232

which again is close to the empirical 92.3% we got.
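As a side note (a sketch of mine, not in the original post; the function p_blue2 is a hypothetical helper), the formula above can be wrapped into a function that gives the exact conditional probability for any two equally likely urns

# exact P(second ball blue | first ball blue) for two equally likely urns,
# given the number of blue and red balls in each urn (the 1/2 priors cancel)
p_blue2 = function(blue, red){
  p = blue/(blue + red)   # P(blue | urn), one entry per urn
  sum(p^2)/sum(p)
}
p_blue2(blue=c(15,10), red=c(10,15))   # 0.52
p_blue2(blue=c(1,24),  red=c(24,1))    # 0.9232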

I strongly believe that the faulty intuition we might have here is close to the one we observe in the Monty Hall paradox. And unless you write things down properly, it is difficult to conclude anything…

PS [48 hours later]: thanks @mikeandallie for the animated version of my post

Probabilities, and opening doors (or boxes)

Recently, while we were in the car to Québec City with some PhD students, someone mentioned the Monty Hall paradox, and we discussed possible extensions… The Monty Hall paradox is usually presented through a TV game show,

Craig F. Whitaker wrote the problem as follows, in Parade Magazine, September 1990: « Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice? ». Actually, Bertrand proposed essentially the same problem, with boxes instead of doors…

Assume that the candidate chooses door 1 (without loss of generality, since the problem is clearly symmetric). A priori, the probability that the car is behind each door is 1/3. The host can then open either door 2 or door 3:

  • if the car is behind door 3, he has to open door 2,
  • if the car is behind door 2, he has to open door 3,
  • if the car is behind door 1, he can open either door 2 or door 3, and we assume that each is opened with probability 1/2,

 

Assume that the host says “the second box is empty”: what should the candidate do?
To formalize the problem, let C_i denote the event that the car is behind door i, and H_j the event that the host opens door j. So, if he opens door 2, the probability that the car is behind door 3 is
\mathbb{P}(C_3\mid H_2)=\frac{\mathbb{P}(H_2\mid C_3)\,\mathbb{P}(C_3)}{\mathbb{P}(H_2\mid C_1)\,\mathbb{P}(C_1)+\mathbb{P}(H_2\mid C_2)\,\mathbb{P}(C_2)+\mathbb{P}(H_2\mid C_3)\,\mathbb{P}(C_3)}
where
\mathbb{P}(H_2\mid C_3)=1
from the previous discussion, since he cannot open door 1 (the candidate chose it) and he cannot open door 3 (the car is behind it). Further,
\mathbb{P}(H_2\mid C_1)=\frac{1}{2}
from equiprobability. And for \mathbb{P}(H_2\mid C_2) we get, similarly,
\mathbb{P}(H_2\mid C_2)=0
Thus,
\mathbb{P}(C_3\mid H_2)=\frac{1\times\frac{1}{3}}{\frac{1}{2}\times\frac{1}{3}+0\times\frac{1}{3}+1\times\frac{1}{3}}=\frac{2}{3}
while
\mathbb{P}(C_1\mid H_2)=\frac{\frac{1}{2}\times\frac{1}{3}}{\frac{1}{2}\times\frac{1}{3}+0\times\frac{1}{3}+1\times\frac{1}{3}}=\frac{1}{3}
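A quick numerical check of that Bayes computation (a sketch of mine, not in the original post), with the candidate choosing door 1 and the host opening door 2

prior = rep(1/3, 3)            # P(C_1), P(C_2), P(C_3)
lik   = c(1/2, 0, 1)           # P(H_2 | C_1), P(H_2 | C_2), P(H_2 | C_3)
posterior = lik*prior/sum(lik*prior)
posterior                      # 1/3, 0, 2/3: switching to door 3 doubles the chances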
So the optimal strategy is to switch, and pick the third door (even though I initially chose the first one)… It is usually seen as a paradox, but consider a much larger number of doors (say 4), where the host opens 2 of them: should we still switch, and pick the door that is left closed? The higher the number of doors, the higher the probability that the car is behind that remaining door…

For instance, with 4 doors (or boxes), if the candidate still chose the first door, and the host opens doors 2 and 3, then the probability that the car is behind the fourth one is
\mathbb{P}(C_4\mid H_{\{2,3\}})=\frac{\mathbb{P}(H_{\{2,3\}}\mid C_4)\,\mathbb{P}(C_4)}{\sum_{i=1}^4\mathbb{P}(H_{\{2,3\}}\mid C_i)\,\mathbb{P}(C_i)}
i.e.
\mathbb{P}(C_4\mid H_{\{2,3\}})=\frac{1\times\frac{1}{4}}{\frac{1}{3}\times\frac{1}{4}+0+0+1\times\frac{1}{4}}=\frac{3}{4}
(the \frac{1}{3} appears in the denominator since, when the car is behind door 1, the host picks the pair \{2,3\} among three equally likely pairs), while
\mathbb{P}(C_1\mid H_{\{2,3\}})=\frac{\frac{1}{3}\times\frac{1}{4}}{\frac{1}{3}\times\frac{1}{4}+0+0+1\times\frac{1}{4}}=\frac{1}{4}
Once again, we have that
\mathbb{P}(C_1\mid H_{\{2,3\}})=\mathbb{P}(C_1)=\frac{1}{4}
i.e. the opening of the doors brings us no information about our own initial choice, but it does change the conditional probability for the remaining closed door.
More generally, with n doors, if the host opens n-2 of them (all the doors except the candidate’s and one other), then
\mathbb{P}(\text{car behind the chosen door}\mid\text{opening})=\frac{1}{n}
while
\mathbb{P}(\text{car behind the remaining closed door}\mid\text{opening})=\frac{n-1}{n}
And here the result is even more intuitive: we should pick the door that was left closed. Actually, it is possible to extend this to the case where there are N doors (or boxes), the candidate chooses k of them, and the host opens m doors (out of the N-k remaining ones). Then, for each of the k doors chosen by the candidate, the probability does not change,
\mathbb{P}(C_i\mid\text{opening})=\frac{1}{N},\quad i=1,\cdots,k,
while, for each of the N-k-m doors that were not chosen and remain closed,
\mathbb{P}(C_i\mid\text{opening})=\frac{N-k}{N\,(N-k-m)}.
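To double-check those general formulas numerically, here is a quick Monte Carlo sketch (mine, not in the original post), with, for instance, N=10 doors, k=2 doors chosen by the candidate, and m=5 doors opened by the host

set.seed(123)
N = 10; k = 2; m = 5; nsim = 1e5
win_keep = win_switch = rep(NA, nsim)
for(s in 1:nsim){
  car = sample(1:N, 1)                        # door hiding the car
  chosen = 1:k                                # by symmetry, the candidate picks doors 1..k
  closed_others = setdiff(1:N, chosen)        # the N-k doors not chosen
  openable = setdiff(closed_others, car)      # the host never opens the car's door
  opened = sample(openable, m)                # the host opens m of them
  remaining = setdiff(closed_others, opened)  # unchosen doors left closed
  win_keep[s]   = car %in% chosen
  win_switch[s] = car %in% remaining
}
mean(win_keep)/k           # per chosen door: should be close to 1/N = 0.1
mean(win_switch)/(N-k-m)   # per remaining closed door: (N-k)/(N*(N-k-m)) = 0.2667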