Tag Archives: probability

Generating a Markov chain vs. computing the transition matrix

A couple of days ago, we had a quick chat on Karl Broman‘s blog, about snakes and ladders (see http://kbroman.wordpress.com/…) with Karl and Corey (see http://bayesianbiologist.com/….), and the use of Markov chains. I do believe that this application is truly awesome: the example is understandable by anyone, and computations (almost any kind, from what we’ve tried) are easy to perform. At the same time, some French students asked me specific details regarding some old lecture notes on Markov chains, and on some introductory example I used as a possible motivation: the stepping stone algorithm. In the notes, I just mentioned the idea of this popular generic algorithm (introduced in Sawyer (1976)) and I used simulations to show – visually – how it works. Again, it was just to motivate the course, which actually did focus on the theory of Markov chains. But those students wanted more, like how I got the transition matrix, for instance. And that is actually not a simple question, from a computational perspective. I mean, I can easily generate this Markov chain, but writing the transition matrix explicitly, that was another story. Which took me a bit longer. In a very specific case…

But let us get back to the roots, and to the stepping stone algorithm. At least, one of them (the one I used in my notes) because it looks like there are several algorithms. We consider a grid, say $h\times h$, with some colors inside, say $k$ possible colors. Each cell of the grid has a given color. Then, at some stage, we select randomly one cell in the grid, and it will take the color of one of its neighbors (some kind of absorption, or mutation). This is, more or less, what is also detailed in some lecture notes by James Propp (see also Sato (1983) or Zähle et al. (2005) for more theoretical details about that Markov chain). This is extremely simple to generate (that’s what I did in my notes, with very big grids, and a lot of colors – see the sketch below). But what if we want to write the transition matrix ?
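(For the record, generating the chain itself is indeed the easy part. A minimal sketch, with an arbitrary grid size, number of colors and number of iterations – and not the code from the original notes – could look like the following.)

h=100; k=8; niter=100000                        # arbitrary grid size, colors, iterations
grid=matrix(sample(1:k,h^2,replace=TRUE),h,h)   # random initial coloring
for(s in 1:niter){
  i=sample(1:h,1); j=sample(1:h,1)              # pick one cell at random
  ni=i+sample(-1:1,1); nj=j+sample(-1:1,1)      # pick one of its (at most 8) neighbors
  if(ni>=1 & ni<=h & nj>=1 & nj<=h & !(ni==i & nj==j))
    grid[i,j]=grid[ni,nj]                       # the cell absorbs the neighbor's color
}
image(grid,col=rainbow(k))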

First of all, we need to define the state space. Basically, we have $h^2$ cells, and each of them has one color, chosen among $k$. Which gives us $k^{h^2}$ possible states…. And that can be large. I mean, if we consider the smallest possible grid (that might be interesting), say $3\times 3$, and only $2$ colors, then we talk about $2^9=512$ possible states. That is large, not huge. But we should keep in mind that we have to compute a transition matrix, that would be a matrix with $512^2$ (i.e. more than 260,000) elements. More generally, we talk about writing down matrices with $\left(k^{h^2}\right)^2$ elements. If we want black and white $4\times 4$ grids, that would mean a matrix with $\left(2^{16}\right)^2$ elements, which means more than 4 billion elements ! And if we consider a red-green-blue $3\times 3$ grid, we have to write down explicitly a matrix with $\left(3^{9}\right)^2$, i.e. almost 400 million, elements. So, let’s face it: we can only work with $3\times 3$ bi-color grids.

So let’s try… The good thing is that it can be related to work I’ve been doing recently on binomial recombining trees (binomial being related to bi-color). First of all, our grid will be described as follows

> h=3
> M=matrix(1:(h^2),h,h)
> M
     [,1] [,2] [,3]
[1,]    1    4    7
[2,]    2    5    8
[3,]    3    6    9

with two colors

> color=c("red","blue")

Then, we should look for neighbors, or derive a neighborhood matrix,

> d=function(i,j) dist(rbind(c((i-1)%/%h,(i-1)%%h),
+                            c((j-1)%/%h,(j-1)%%h)))
> Neighb=matrix(Vectorize(d)(rep(1:(h^2),each=h^2),
+                            rep(1:(h^2),h^2)),h^2,h^2)
> trunc(Neighb*100)/100
      [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
 [1,] 0.00 1.00 2.00 1.00 1.41 2.23 2.00 2.23 2.82
 [2,] 1.00 0.00 1.00 1.41 1.00 1.41 2.23 2.00 2.23
 [3,] 2.00 1.00 0.00 2.23 1.41 1.00 2.82 2.23 2.00
 [4,] 1.00 1.41 2.23 0.00 1.00 2.00 1.00 1.41 2.23
 [5,] 1.41 1.00 1.41 1.00 0.00 1.00 1.41 1.00 1.41
 [6,] 2.23 1.41 1.00 2.00 1.00 0.00 2.23 1.41 1.00
 [7,] 2.00 2.23 2.82 1.00 1.41 2.23 0.00 1.00 2.00
 [8,] 2.23 2.00 2.23 1.41 1.00 1.41 1.00 0.00 1.00
 [9,] 2.82 2.23 2.00 2.23 1.41 1.00 2.00 1.00 0.00
> Neighb=(Neighb<2)&(Neighb>0)
> Neighb
       [,1]  [,2]  [,3]  [,4]  [,5]  [,6]  [,7]  [,8]  [,9]
 [1,] FALSE  TRUE FALSE  TRUE  TRUE FALSE FALSE FALSE FALSE
 [2,]  TRUE FALSE  TRUE  TRUE  TRUE  TRUE FALSE FALSE FALSE
 [3,] FALSE  TRUE FALSE FALSE  TRUE  TRUE FALSE FALSE FALSE
 [4,]  TRUE  TRUE FALSE FALSE  TRUE FALSE  TRUE  TRUE FALSE
 [5,]  TRUE  TRUE  TRUE  TRUE FALSE  TRUE  TRUE  TRUE  TRUE
 [6,] FALSE  TRUE  TRUE FALSE  TRUE FALSE FALSE  TRUE  TRUE
 [7,] FALSE FALSE FALSE  TRUE  TRUE FALSE FALSE  TRUE FALSE
 [8,] FALSE FALSE FALSE  TRUE  TRUE  TRUE  TRUE FALSE  TRUE
 [9,] FALSE FALSE FALSE FALSE  TRUE  TRUE FALSE  TRUE FALSE

Now, let us write down our 512 possible states.

> n=h^2
> states=function(x){
+   Base.b=rep(0,n)
+   ndigits=(floor(logb(x,base=length(color)))+1)
+   for(i in 1:ndigits){
+     Base.b[n-i+1]=(x%%length(color))
+     x=(x %/% length(color))}
+   return(Base.b)}
> M=Vectorize(states)(1:(length(color)^n-1))
> liststates=data.frame(rbind(rep(0,h^2),t(M)))
> head(liststates)
  X1 X2 X3 X4 X5 X6 X7 X8 X9
1  0  0  0  0  0  0  0  0  0
2  0  0  0  0  0  0  0  0  1
3  0  0  0  0  0  0  0  1  0
4  0  0  0  0  0  0  0  1  1
5  0  0  0  0  0  0  1  0  0
6  0  0  0  0  0  0  1  0  1

(for the first six, with 0/1 digits instead of colors). For instance, if we look at a specific one, it is possible to plot the grid, using

> plotsteps=function(u){
+   plot(0:h,0:h,col="white",xlab="",ylab="",axes=FALSE)
+   for(i in 0:(h^2-1)){
+   x=i%/%h
+   y=i%%h
+   polygon(x+c(1,.1,.1,1),y+c(1,1,.1,.1),
+   col=color[as.numeric(u)[i+1] + 1])
+   text(x+.45,y+.45,i)
+   }}

Here,

> plotsteps(liststates[100,])

Then, given one state, let us see what could happen next,

  • let us compute all connected states: all states we can end up in if we change one cell
  • we have to check, for each connected state, which cell did change
  • we should compute the probabilities to reach those 9 states, based on the fact that each cell is chosen with the same probability, and the fact that the probability of changing the color depends on the colors around
  • if some states cannot be reached (if a cell is surrounded by cells of the same color, so it cannot change its color), then we should remove them from the list of reachable (possible) states.

The code will be something like the following

> listneighbour=function(i){
+   start=liststates[i,]
+   difference2only=function(j) {
+     w=which(liststates[j,]!=liststates[i,])
+     return((length(w)==1))}
+   possible=which( Vectorize(difference2only)(1:nrow(liststates))==TRUE )
+   P=function(j){   
+     L=liststates[i,which(Neighb[which(liststates[j,]!=liststates[i,]),]==TRUE)]
+     T=table(as.numeric(L))
+     T=T[as.character(0:(length(color)-1))]
+     T[is.na(T)]=0
+     return(as.numeric(T)/sum(T))
+   }
+   probability=Vectorize(P)(possible)
+   W=NULL
+   for(j in possible) W=c(W,which(liststates[j,]!=liststates[i,]))
+   I=1-liststates[i,W]+1
+   vp=diag(probability[as.numeric(I),])
+   vproba=0*vp
+   if(sum(vp)!=0) vproba=vp/sum(vp)
+   return(list(
+     color=liststates[i,W],
+     absorb=W,
+     possible=possible,
+     probability=probability,
+     prob=vproba))
+ }

For instance, if we start from state 100 (here, on the right)

> listneighbour(100)
$color
    X3 X4 X8 X9 X7 X6 X5 X2 X1
100  1  1  1  1  0  0  0  0  0

$absorb
[1] 3 4 8 9 7 6 5 2 1

$possible
[1]  36  68  98  99 104 108 116 228 356

$probability
     [,1] [,2] [,3]   [,4]   [,5] [,6] [,7] [,8]   [,9]
[1,]    1  0.8  0.6 0.6667 0.3333  0.4  0.5  0.6 0.6667
[2,]    0  0.2  0.4 0.3333 0.6667  0.6  0.5  0.4 0.3333

$prob
[1] 0.17964072 0.14371257 0.10778443 0.11976048 0.11976048
[6] 0.10778443 0.08982036 0.07185629 0.05988024

Let us look more specifically at the 99th state (which appears above as a state that could be reached from the 100th),

> liststates[99,]
   X1 X2 X3 X4 X5 X6 X7 X8 X9
99  0  0  1  1  0  0  0  1  0

If we plot it (here on the right, again), we get

> plotsteps(liststates[99,])
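(As a quick side check, the colors of the three neighbors of cell 9, in state 100, can be extracted directly from the objects defined above:)

table(as.numeric(liststates[100,which(Neighb[9,])]))   # two neighbors with color 0, one with color 1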

Clearly, here, the cell in the upper corner (number 9) changed from blue to red. Now, about the probability… The probability to select cell 9 is 1/9, and given that cell 9 is chosen, the probability to go from blue to red is 2/3 (the cell is surrounded by 2 red cells, and 1 blue cell). The probability to remain blue is then 1/3. Those are the probabilities computed by our function (the table with two rows, one per color). In order to get a better understanding of the meaning of the last line (with some sort of probabilities), let us look at the following (simpler) example.

> liststates[2,]
  X1 X2 X3 X4 X5 X6 X7 X8 X9
2  0  0  0  0  0  0  0  0  1

that can be visualized on the right. Here,

> listneighbour(2)
$color
  X9 X8 X7 X6 X5 X4 X3 X2 X1
2  1  0  0  0  0  0  0  0  0

$absorb
[1] 9 8 7 6 5 4 3 2 1

$possible
[1]   1   4   6  10  18  34  66 130 258

$probability
     [,1] [,2] [,3] [,4]  [,5] [,6] [,7] [,8] [,9]
[1,]    1  0.8    1  0.8 0.875    1    1    1    1
[2,]    0  0.2    0  0.2 0.125    0    0    0    0

$prob
[1] 0.65573770 0.13114754 0.00000000 0.13114754 0.08196721 
[6] 0.00000000 0.00000000 0.00000000 0.00000000

Things are pretty simple here

  • if we chose one of the cells $\{1,2,3,4,7\}$, then nothing changes, since all the neighbors have the same color. So if we want to focus on changes (or, say, run the algorithm until the first color change), then choosing those cells is a waste of time
  • if we chose one of the cells $\{5,6,8\}$, then it could be possible to change the color. And actually, $\{5\}$ is different from $\{6,8\}$ (since it has more neighbors)
  • if we chose cell $\{9\}$, then the color will definitely change, since all its neighbors have the other color here,

Now, the probability to select cell $k$, given that there was a color change, would be, if $k$ is in $\{9\}$,

$$\mathbb{P}(k)\propto\frac{3}{3}=1$$

while if $k$ is in $\{6,8\}$, then there are 4 out of 5 neighbors that are red, so

$$\mathbb{P}(k)\propto\frac{1}{5}$$

and if $k$ is in $\{5\}$, then only one neighbor has a different color, out of 8, so

$$\mathbb{P}(k)\propto\frac{1}{8}$$

And for the others, $\mathbb{P}(k)\propto 0$. So it comes – since we assume that cells are drawn independently, and with the same probability – that if $k$ is in $\{9\}$,

$$\mathbb{P}(k)=\frac{1\cdot\frac{1}{9}}{\left(1+2\times\frac{1}{5}+\frac{1}{8}+5\times 0\right)\cdot\frac{1}{9}}=\frac{40}{61}$$

while if $k$ is in $\{6,8\}$, then there are 4 out of 5 neighbors that are red, so

$$\mathbb{P}(k)=\frac{\frac{1}{5}\cdot\frac{1}{9}}{\left(1+2\times\frac{1}{5}+\frac{1}{8}+5\times 0\right)\cdot\frac{1}{9}}=\frac{8}{61}$$

and if $k$ is in $\{5\}$, then only one neighbor has a different color, out of 8, so

$$\mathbb{P}(k)=\frac{\frac{1}{8}\cdot\frac{1}{9}}{\left(1+2\times\frac{1}{5}+\frac{1}{8}+5\times 0\right)\cdot\frac{1}{9}}=\frac{5}{61}$$

Which are exactly the probabilities computed above. The point is that we compute probabilities given that a color change will actually occur. The good point is that it should speed up convergence to some limiting distribution. If any exists.

What about our transition matrix ? Well, using a simple loop, we should get it easily

> M=matrix(0,nrow(liststates),nrow(liststates))
+ for(i in 1:nrow(liststates)){
+ L=listneighbour(i)
+ if(sum(L$prob)!=0){
+ j=L$possible
+ M[i,j]=L$prob
+ }
+ if(sum(L$prob)==0){
+ j=i
+ M[i,j]=1
+ }
+ }

One can check that this matrix satisfies some properties of transition matrices. For instance, the sum per row is one,

> sum(apply(M,1,sum)!=1)
[1]  0

Remember that this matrix is big, so I will not print it here. But trust me, it works (it might take a while on an old laptop, but anyone can do it). Now, if we want to visualize some paths of that chain, we can use the following algorithm. First, we need a starting point, that can be chosen randomly,

> j=sample(1:nrow(liststates),size=1)

or using a given colored grid, say

> j=100

Then we plot it,

> plotsteps(liststates[j,])

Now, the code within the loop is here

> d=rep(0,nrow(liststates))
> d[j]=1
> d=d%*%M
> j=sample(1:nrow(M),size=1,prob=d)
> plotsteps(liststates[j,])

Here are some examples. And indeed, we end up either with all cells in blue, or all cells in red.

Now, do we have to compute that transition matrix to produce those graphs (and to generate that Markov chain) ? No. Of course not… At each step, I use a Dirac measure, and use the transition matrix just to get the probabilities of the next state. Actually, one can write a faster and more intuitive code to generate the same chain… But I should probably keep that for another post…

From Simpson’s paradox to pies

Today, I wanted to publish a post on economics, and decision theory. And probability too… Those who do follow my blog should know that I am a big fan of Simpson’s paradox. I also love to mention it in my econometric classes. It does raise important questions, that I relate to multicollinearity, and interpretations of regression models, with multiple (negatively correlated) explanatory variables. This paradox has amazing pedagogical virtues. I did mention it several times on this blog (I should probably mention that I discovered this paradox via Marco Scarsini, who taught me a lot of things, in decision theory and in probability). For those who do not know this paradox, here is an example that Marco gave in one of his talks, a few years ago. Consider the following statistics, when healthy people entered two hospitals

hospital      total   survivors   deaths   survival rate
hospital A      600         590       10             98%
hospital B      900         870       30             97%

while, when sick people entered the same hospitals

hospital      total   survivors   deaths   survival rate
hospital A      400         210      190             53%
hospital B      100          30       70             30%

Somehow, whatever your health situation, you should choose hospital A. Now, if we aggregate

hospital      total   survivors   deaths   survival rate
hospital A     1000         800      200             80%
hospital B     1000         900      100             90%

i.e. without any doubt, people should choose hospital B.
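A quick way to see the reversal in R, just redoing the arithmetic from the three tables above:

surv_healthy=c(A=590/600,B=870/900)             # 98% vs 97% : A looks better
surv_sick   =c(A=210/400,B= 30/100)             # 53% vs 30% : A looks better
surv_all    =c(A=(590+210)/(600+400),
               B=(870+ 30)/(900+100))           # 80% vs 90% : but B looks better overall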

Actually, Simpson’s paradox is called Simpson’s paradox because Colin Blyth named it that way in 1972, in his paper entitled On Simpson’s paradox and the sure-thing principle (an economic article in a statistical journal), that can be downloaded from http://www.stat.cmu.edu/~fienberg/…. He found this paradox in a paper published in 1951 by Edward Simpson, even if other papers actually did mention it earlier. The most popular application is probably admissions to Berkeley’s graduate programs, and sex bias, see Bickel, Hammel & O’Connell (1975), that can be downloaded from http://www.unc.edu/~nielsen/…. I also mentioned a geometric interpretation of this paradox a few years ago on my blog, which is so simple to understand that the paradox is no longer a paradox, actually, since on the example above, we had

$$\frac{590}{600}>\frac{870}{900}$$

and

$$\frac{210}{400}>\frac{30}{100}$$

while

$$\frac{800}{1000}<\frac{900}{1000}$$

With symbolic notations, one can have at the same time

$$\frac{a_1}{b_1}>\frac{c_1}{d_1}$$

and

$$\frac{a_2}{b_2}>\frac{c_2}{d_2}$$

with also

$$\frac{a_1+a_2}{b_1+b_2}<\frac{c_1+c_2}{d_1+d_2}$$

as shown on the graph below

There should be a connection between Simpson’s paradox and the ecological fallacy (which is an issue I recently discovered and that I found extremely interesting, related again to difficulties of interpreting regressions). But that’s another story. My point today is that Colin Blyth did mention another nice paradox, that is related, this time, to stochastic orderings. The idea is the following. Consider the three spinners drawn below (imagine some arrows in those circles)

  • spinner A: no matter where the arrow stops, the gain is 3,
  • spinner B: 56% chances to gain 2, 22% chances to gain 4, and 22% chances to gain 6,
  • spinner C: 51% chances to gain 1, 49% chances to gain 5.

Instead of spinners, it is also possible to consider three different lotteries,

You play against a friend, you pick a spinner, while the friend picks another. Everyone flicks his arrow, the highest number wins (no matter the difference). Let us compute the odds. First case, A against B, from A’s perspective

          B-2              B-4              B-6
A-3   56% (+1, win)   22% (-1, lose)   22% (-3, lose)

In that case, A has 56% chance of beating B. Second case, A against C, from A’s perspective,

          C-1              C-5
A-3   51% (+1, win)   49% (-2, lose)

In that case, A has 51% chance of beating C. Third (and final) case, B against C, from B’s perspective. Assuming independence between the spinners, joint probabilities can easily be computed,
          C-1                 C-5
B-2   28.56% (+1, win)   27.44% (-3, lose)
B-4   11.22% (+3, win)   10.78% (-1, lose)
B-6   11.22% (+5, win)   10.78% (+1, win)

In that case, B has 61.78% chance of beating C. So, if we try to summarize,
  • A is the best choice, since it beats both with – always – more than 50% chance,
  • C is the worst choice, since it is beaten by both with – always – more than 50% chance,
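Those pairwise odds can be double-checked with a couple of lines of R (a small sketch, with the gains and probabilities of the spinners hard coded):

pB=c(.56,.22,.22); gB=c(2,4,6)              # spinner B
pC=c(.51,.49);     gC=c(1,5)                # spinner C (and A always returns 3)
sum(pB[gB<3])                               # P(A beats B) = 0.56
sum(pC[gC<3])                               # P(A beats C) = 0.51
sum(outer(pB,pC)*outer(gB,gC,">"))          # P(B beats C) = 0.6178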
Now, assume that you play not against one friend, but two friends. And everyone picks a different spinner. Let us compute the odds, one more time. First case, A against B and C, from A’s perspective
          B-2 / C-1           B-2 / C-5           B-4 / C-1           B-4 / C-5           B-6 / C-1           B-6 / C-5
A-3   28.56% (+1, win)   27.44% (-2, lose)   11.22% (-1, lose)   10.78% (-1, lose)   11.22% (-3, lose)   10.78% (-3, lose)

In that case, A has 28.56% chance of beating B and C. Second case, B against A and C, from B’s perspective,
          A-3 / C-1           A-3 / C-5
B-2   28.56% (-1, lose)   27.44% (-2, lose)
B-4   11.22% (+1, win)    10.78% (-1, lose)
B-6   11.22% (+3, win)    10.78% (+1, win)

In that case, B has 33.22% chance of beating A and C. Third (and final) case, C against A and B, from C’s perspective,
          A-3 / B-2           A-3 / B-4           A-3 / B-6
C-1   28.56% (-2, lose)   11.22% (-3, lose)   11.22% (-5, lose)
C-5   27.44% (+2, win)    10.78% (+1, win)    10.78% (-1, lose)

In that case, C has 38.22% chance of beating A and B. So, if we try to summarize, this time

  • C is the best choice, since it has (strictly) more than a 1/3 chance to win, which is the highest probability
  • A is the worst choice, since it has (strictly) less than a 1/3 chance to win, which is the lowest probability
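Here again, the three-player odds can be double-checked directly (same sketch as above, with spinner A always returning 3):

pB=c(.56,.22,.22); gB=c(2,4,6)
pC=c(.51,.49);     gC=c(1,5)
pBC=outer(pB,pC)                                    # joint probabilities (independence)
sum(pBC*outer(gB,gC,function(b,c) 3>pmax(b,c)))     # P(A wins) = 0.2856
sum(pBC*outer(gB,gC,function(b,c) b>pmax(3,c)))     # P(B wins) = 0.3322
sum(pBC*outer(gB,gC,function(b,c) c>pmax(3,b)))     # P(C wins) = 0.3822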

Odd, isn’t it ? Now, is there an interpretation of that paradox ? Yes, Martin Gardner, in his paper on induction and probability, mentioned the case of drug testing. The value we had with the spinner is the health level, rated from 1 to 6. Thus, taking drug A, you always get a health level of 3. With drug C, on the other hand, you get either very sick (level 1) or very well (level 5). Consider now a doctor who wants to maximize the patient’s chance of being well. If only pills A and C are available, then the doctor should choose A. This is what we’ve seen in the first part. Assume now that a company delivers a third pill, called drug B. Then the doctor should find C more interesting…. Odd, isn’t it ?

Colin Blyth gave a more amusing application. Assume that you like to go to the restaurant, and you like to get a dessert there. Dessert A – the apple pie – is the average one, with a standard level, that you rank 3 (on a scale from 1 to 6). Dessert C – the cheese cake – can either be awful (ranked 1) or delicious (ranked 5). You’d better go for the apple pie if you want to maximize the probability of not being disappointed (i.e. maximizing your “best chance” according to Colin Blyth, but I guess it can be interpreted as regret minimization too). Now assume that dessert B – the blueberry pie – is available (with ranks given by the spinner). Then you should go for the cheese cake. I let you imagine the discussion that you can have, then, with your favorite waitress

– Hi Mr Freakonometrics, do you want a piece of apple pie ? (yes, actually she also comes frequently to my blog, and knows me from my pseudo…)

– Probably. But actually, I was wondering if you did have your blueberry pie today ?

– Yes, in fact we do….

– Great, in that case, I’ll go for the cheese cake.

She’ll probably think that I am a freak… so I hope she’ll come and read my post, to understand that, actually, it does make a lot of sense to go for what was supposed to be my worst choice.

Pills, half pills and probabilities

Yesterday, I was uploading some old posts to complete the migration (I get back to my old posts, one by one, to check links to pictures, reformatting R code, etc). And I re-discovered a post published almost 2 years ago, on nuns and Hell’s Angels in an airplane.

It reminded me of an old probability problem (that might be known as one of Feynman’s problems): suppose that you have a prescription to take half a pill a day for 6 days. Unfortunately the pharmacist was a bit lazy (or just wanted to help me write a mathematical problem), and he gives you 3 (full) pills in a small box. Day 1, you take a pill, break it in two parts, eat one, and return the other half to the box. Day 2, you draw randomly ‘something’ from the box, i.e. either half a pill, or a full pill. If it’s a half one, then you eat it. If it is a full one, you break it in two, eat one half, and return the other half to the box. Etc. On Day 6, if my story was well explained, you should know that there can only be one half pill left. So far, so good. But what about Day 5 ? There were either two half pills, or one full pill. But what was the probability that there was a full pill in the box on Day 5 ?

Nice problem, isn’t it ?
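Before that, here is a quick Monte Carlo sketch of the Day 5 question (a hypothetical helper, with 3 full pills to start, not the code used below):

set.seed(123)
lastdays=function(n=3){
  full=n; half=0
  for(day in 1:(2*n-2)){                 # the draws made before Day 2n-1
    if(runif(1)<full/(full+half)){       # a full pill is drawn : eat half, return half
      full=full-1; half=half+1
    } else half=half-1                   # a half pill is drawn : eat it
  }
  return(full==1)                        # is there a full pill left in the box ?
}
mean(replicate(100000,lastdays(3)))      # should be close to the 38.89% derived below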

The good thing is that it can be modeled as a Markovian model. Assume that we have $n$ pills. After $2n$ days, the box will be empty. Consider the pair $(c,h)$ denoting the number of complete pills and the number of half pills. $c$ can take all values from 0 to $n$, and $h$ will be a nonnegative integer with $h\leq n-c$. Thus, the number of states – possible pairs from Day 1 till Day $2n$ – will be $\sum_{c=0}^{n}(n-c+1)$, i.e. $(n+1)(n+2)/2$. More precisely, define those states in a dataframe,

> n=3
> COMPLETE=HALF=NULL
> for(i in n:0){
+ HALF=c(0:(n-i),HALF)
+ COMPLETE=c(rep(i,length(0:(n-i))),COMPLETE)
+ }
> k=length(COMPLETE)
> state=data.frame(s=1:k,nc=rev(COMPLETE),nh=rev(HALF))
> state
s nc nh
1   1  3  0
2   2  2  1
3   3  2  0
4   4  1  2
5   5  1  1
6   6  1  0
7   7  0  3
8   8  0  2
9   9  0  1
10 10  0  0

Now, we can play to derive the transition matrix of the Markov chain.

> attach(state)
> P=matrix(0,k,k)
> for(i in 1:k){
+ C=state$nc[i]
+ H=state$nh[i]
+ if((C>0)&(H>0)){
+ P[i,state[(nc==C-1)&(nh==H+1),"s"]]= C/(C+H)
+ P[i,state[(nc==C)&(nh==H-1),"s"]]= H/(C+H)}
+ if((C>0)&(H==0)){
+ P[i,state[(nc==C-1)&(nh==H+1),"s"]]=1}
+ if((C==0)&(H>0)){
+ P[i,state[(nc==C)&(nh==H-1),"s"]]=1}
+ if((C==0)&(H==0)){
+ P[i,state[(nc==C)&(nh==H),"s"]]=1}
+ }

We do have a transition matrix (or a probability matrix) since all elements are nonnegative, and the sum per row is 1,

> apply(P,1,sum)
[1] 1 1 1 1 1 1 1 1 1 1

Here, the transition matrix is the following

> P
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,]    0    1 0.00 0.00 0.00  0.0 0.00  0.0    0     0
[2,]    0    0 0.33 0.66 0.00  0.0 0.00  0.0    0     0
[3,]    0    0 0.00 0.00 1.00  0.0 0.00  0.0    0     0
[4,]    0    0 0.00 0.00 0.66  0.0 0.33  0.0    0     0
[5,]    0    0 0.00 0.00 0.00  0.5 0.00  0.5    0     0
[6,]    0    0 0.00 0.00 0.00  0.0 0.00  0.0    1     0
[7,]    0    0 0.00 0.00 0.00  0.0 0.00  1.0    0     0
[8,]    0    0 0.00 0.00 0.00  0.0 0.00  0.0    1     0
[9,]    0    0 0.00 0.00 0.00  0.0 0.00  0.0    0     1
[10,]   0    0 0.00 0.00 0.00  0.0 0.00  0.0    0     1

In order to get our probability, let us start from state 1 – i.e. $(c,h)=(3,0)$ – with probability 1, and let us look at the distribution at different periods,

> dist=c(1,rep(0,k-1))
> MatDist=matrix(NA,2*n+1,k)
> MatDist[1,]=dist
> for(i in 1:(2*n)){dist=as.vector(t(dist)%*%P)
+ MatDist[i+1,]=dist
+ }

(one can check that after $2n$ days, the box is empty). The probability we are looking for is given in row $2n-1$, and we just have to check which column corresponds to the pair $(c,h)=(1,0)$,

> vs=state[which(MatDist[2*n-1,]>0),]
> proba=MatDist[2*n-1,vs[vs$nc==1,"s"]]
> proba
[1] 0.3888889

Here the probability of having a full pill in the box on Day 5 is 38.89%.

Actually, it is possible to study the evolution of this probability as a function of $n$,

> computeproba=function(n=3){
+ COMPLETE=HALF=NULL
+ for(i in n:0){
+ HALF=c(0:(n-i),HALF)
+ COMPLETE=c(rep(i,length(0:(n-i))),COMPLETE)
+ }
+ k=length(COMPLETE)
+ state=data.frame(s=1:k,nc=rev(COMPLETE),nh=rev(HALF))
+ P=matrix(0,k,k)
+ for(i in 1:k){
+ C=state$nc[i]
+ H=state$nh[i]
+ if((C>0)&(H>0)){
+ P[i,state[(state$nc==C-1)&(state$nh==H+1),"s"]]= C/(C+H)
+ P[i,state[(state$nc==C)&(state$nh==H-1),"s"]]= H/(C+H)}
+ if((C>0)&(H==0)){
+ P[i,state[(state$nc==C-1)&(state$nh==H+1),"s"]]=1}
+ if((C==0)&(H>0)){
+ P[i,state[(state$nc==C)&(state$nh==H-1),"s"]]=1}
+ if((C==0)&(H==0)){
+ P[i,state[(state$nc==C)&(state$nh==H),"s"]]=1}
+ }
+ dist=c(1,rep(0,k-1))
+ MatDist=matrix(NA,2*n+1,k)
+ MatDist[1,]=dist
+ for(i in 1:(2*n)){dist=as.vector(t(dist)%*%P)
+ MatDist[i+1,]=dist
+ }
+ vs=state[which(MatDist[2*n-1,]>0),]
+ proba=MatDist[2*n-1,vs[vs$nc==1,"s"]]
+ return(proba)
+ }

If we plot the probability as a function of $n$, we get

> P=Vectorize(computeproba)(2:40)
> plot(2:40,P,ylim=c(0,.5))

One can observe that the probability is decreasing. But slowly, extremely slowly. With a log scale on the y-axis, we have

> plot(2:40,P,ylim=c(0,.5),log="y")

If we look for ‘high’ values, we can get

> computeproba(100)
[1] 0.14218

I do not know if this limit goes to 0 as $n$ goes to infinity. Actually, since we do have to compute a matrix with $\left[(n+1)(n+2)/2\right]^2$ entries, i.e. roughly $n^4/4$, $n$ cannot be that large… Too bad. If anyone knows how this probability behaves as a function of $n$, when $n$ is large, I’d be glad to know…
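That said, the full matrix is not really needed for that question: after $t$ draws, if $c$ full pills remain, there are exactly $2n-t-2c$ half pills in the box, so the distribution can be propagated with a vector of length $n+1$ only. A possible sketch (a hypothetical computeproba2, not the function above) is

computeproba2=function(n=3){
  prob=c(rep(0,n),1)                     # prob[c+1] = P(c full pills) ; we start at c = n
  for(t in 1:(2*n-2)){
    newprob=rep(0,n+1)
    for(c in 0:n){
      h=2*n-(t-1)-2*c                    # number of half pills before draw t
      if(prob[c+1]>0 & c+h>0){
        if(c>0) newprob[c]  =newprob[c]  +prob[c+1]*c/(c+h)
        if(h>0) newprob[c+1]=newprob[c+1]+prob[c+1]*h/(c+h)
      }
    }
    prob=newprob
  }
  return(prob[2])                        # P(exactly one full pill left) on Day 2n-1
}

It should return the same values as above (0.3888889 for n=3, 0.14218 for n=100), and values of n in the thousands become feasible.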

UEFA, is that it ?

Following my previous post, a few more things. As mentioned by Frédéric, it is – indeed – possible to compute the probability of all pairs. More precisely, all pairs are not as likely to occur: some teams can play against (almost) everyone, while others cannot. From the previous table, it is possible to compute the probability that the last team plays against team 1. Or team 2 (numbers are from the xls file mentioned previously). To make it simple

> table(M[,2*n])/length(M[,2*n])*100

       1        2        3        5        7       10       11 
11.82500 12.61212 12.61212 13.25279 19.31173 18.70767 11.67856

Here, the last team (as I did rank them) has an 11.8% chance to play against team 1, and a 19.3% chance to play against team 7. If we compute all the probabilities, we obtain

> S
       1     2     3     5     7    10    11    13
4   0.00 14.16 14.16  0.00 22.22 21.25 13.05 15.13
6  12.52 13.19 13.19 14.11 20.13  0.00 12.35 14.47
8  18.78  0.00 19.54 21.50  0.00  0.00 18.39 21.76
9  18.78 19.54  0.00 21.50  0.00  0.00 18.39 21.76
12 14.68 15.54 15.54 16.56  0.00 23.19 14.47  0.00
14 11.64 12.37 12.37 13.05 18.96 18.25  0.00 13.34
15 11.77 12.55 12.55  0.00 19.36 18.59 11.64 13.50
16 11.82 12.61 12.61 13.25 19.31 18.70 11.67  0.00

that can be visualized below

White areas cannot be reached, while red ones are more likely. Here, we compute the probability that the home team (given on the x-axis) plays against some visiting team (on the y-axis). The fact that those probabilities are not uniform seems odd. But I guess it comes from those constraints…

Another weird point: it is possible to reach a deadlock. At least with the technique I have been using. So far, I did not count them. But we can, simply using the following code

> na=0
> s=0
> U=c(4,6,8,9,12,14,15,16)
> a1=U[1]
> b1=U[2]
> c1=U[3]
> d1=U[4]
> e1=U[5]
> f1=U[6]
> g1=U[7]
> h1=U[8]
> a2=b2=c2=d2=e2=f2=g2=h2=NA
> posa2=(1:n)%notin%c(LISTEIMPOSSIBLE[,a1])
> if(length(posa2)==0){na=na+1}
> for(a2 in posa2){
+ posb2=(1:n)%notin%c(LISTEIMPOSSIBLE[,b1],a2)
+ if(length(posb2)==0){na=na+1}
+ for(b2 in posb2){
+ posc2=(1:n)%notin%c(LISTEIMPOSSIBLE[,c1],a2,b2)
+ if(length(posc2)==0){na=na+1}
+ for(c2 in posc2){
+ posd2=(1:n)%notin%c(LISTEIMPOSSIBLE[,d1],
+ a2,b2,c2)
+ if(length(posd2)==0){na=na+1}
+ for(d2 in posd2){
+ pose2=(1:n)%notin%c(LISTEIMPOSSIBLE[,e1],
+ a2,b2,c2,d2)
+ if(length(pose2)==0){na=na+1}
+ for(e2 in pose2){
+ posf2=(1:n)%notin%c(LISTEIMPOSSIBLE[,f1],
+ a2,b2,c2,d2,e2)
+ if(length(posf2)==0){na=na+1}
+ for(f2 in posf2){
+ posg2=(1:n)%notin%c(LISTEIMPOSSIBLE[,g1],
+ a2,b2,c2,d2,e2,f2)
+ if(length(posg2)==0){na=na+1}
+ for(g2 in posg2){
+ posh2=(1:n)%notin%c(LISTEIMPOSSIBLE[,h1],
+ a2,b2,c2,d2,e2,f2,g2)
+ if(length(posh2)==0){na=na+1}
+ for(h2 in posh2){
+ s=s+1
+ V=c(a1,a2,b1,b2,c1,c2,d1,d2,e1,e2,f1,f2,g1,g2,h1,h2)
+ }}}}}}}}

On the initial ordering of home team, the number of deadlocks was

> na
[1] 657

The probability of obtaining a deadlock is then

> 657/(657+5463)
[1] 0.1073529

(657 scenarios ended in a dead end, while 5463 ended well). The worst case was obtained when we considered

 [1]    6    4   16   14   12   15    8    9

In that case, the probability of obtaining a deadlock was

> 4047/(4047+5463)
[1] 0.4255521

Here, it clearly depends on the ordering. So if we draw – randomly – the order of the home teams, i.e.

> Urandom=sample(U,size=8)

the distribution of the probability of having a deadlock is

All those computations were based on my understanding of the drawings. But Kristof (aka @ciebiera), on his blog krzysztofciebiera.blogspot.ca/… obtained different results. For instance, based on my previous computations, the probability to obtain identical pairs was 0.0183049% (1 chance out of 5463), but Kristof obtained – based on the UEFA procedure (as he called it) – a probability of 0.0181337%. Which is not – strictly – the same, but both computations yield relatively close results…

UEFA, what were the odds ?

Ok, I was supposed to take a break, but Frédéric, professor in Tours, came back to me this morning with a tickling question. He asked me what were the odds that the Champions League draw produces exactly the same pairings in the practice draw and the official one (see e.g. dailymail.co.uk/…).

To be honest, I don’t know much about soccer, so here is what happened, with the practice draw (on the left, on December 19th) and the official one (on the right, on December 20th),

UEFA

Clearly, the pairs are identical, but not the order. Actually, at first, I was surprised that even which team plays at home first was identical. But (it seems that) teams that play at home first are the ones that ended second after the previous stage of the competition.

And to be more specific about those draws, those pairs were obtained using real urns, real balls, so it is pure randomness (again, as far as I understood). But with very specific rules. For instance, two teams from the same country cannot play together (or one against the other) at this stage. Or teams that ended first after the previous turn can only play with (or against) teams that ended second. Actually, Frederic sent me an xls file, with a possibility matrix.

Let us find all possible pairs, regardless of which team plays at home first (again, we do not care here since the order is defined by the rule mentioned above). Doing the maths might have been a bit complicated, with all those constraints. With a small code, it is possible to list all possible pairs, for those eight games. Let us import our possibility matrix,

 > n=16
 > uefa=read.table(
 + "http://freakonometrics.blog.free.fr/public/data/uefa.csv",
 + sep=",",header=TRUE)
 > LISTEIMPOSSIBLE=matrix(
 + (rep(1:n,n))*(uefa[1:n,2:(n+1)]=="NON"),n,n)

I can fix the first team (in my list, the fourth one is the first team that ended second). Then, I look at all possible second ones (that will play against the first one),

 > a1=1
 > "%notin%" <- function(x, table){x[match(x, table, nomatch = 0) == 0]}
 > posa2=((a1+1):n)%notin%LISTEIMPOSSIBLE[,a1]

Then, consider the second team that ended second (the sixth one in my list). And look at all possible opponents for that second game, i.e. excluding the ones that were already drawn, and those that are not possible,

 > b1=6
 > posb2=(1:n)%notin%c(LISTEIMPOSSIBLE[,b1],a2)

Etc. So, given the list of home teams,

 > a1=4
 > b1=6
 > c1=8
 > d1=9
 > e1=12
 > f1=14
 > g1=15
 > h1=16

consider the following loops,

 > s=0
 > M=NULL
 > posa2=(1:n)%notin%c(LISTEIMPOSSIBLE[,a1])
 > for(a2 in posa2){
 + posb2=(1:n)%notin%c(LISTEIMPOSSIBLE[,b1],a2)
 + for(b2 in posb2){
 + posc2=(1:n)%notin%c(LISTEIMPOSSIBLE[,c1],a2,b2)
 + for(c2 in posc2){
 + posd2=(1:n)%notin%c(LISTEIMPOSSIBLE[,d1],a2,b2,c2)
 + for(d2 in posd2){
 + pose2=(1:n)%notin%c(LISTEIMPOSSIBLE[,e1],a2,b2,c2,d2)
 + for(e2 in pose2){
 + posf2=(1:n)%notin%c(LISTEIMPOSSIBLE[,f1],a2,b2,c2,d2,e2)
 + for(f2 in posf2){
 + posg2=(1:n)%notin%c(LISTEIMPOSSIBLE[,g1],a2,b2,c2,d2,e2,f2)
 + for(g2 in posg2){
 + posh2=(1:n)%notin%c(LISTEIMPOSSIBLE[,h1],a2,b2,c2,d2,e2,f2,g2)
 + for(h2 in posh2){
 + s=s+1
 + V=c(a1,a2,b1,b2,c1,c2,d1,d2,e1,e2,f1,f2,g1,g2,h1,h2)
 + cat(s,V,"\n") 
 + M=rbind(M,V)
 + }}}}}}}}

With the print option, we end up with

5461 4 13 6 11 8 5 9 2 12 10 14 3 15 7 16 1 
5462 4 13 6 11 8 5 9 2 12 10 14 7 15 1 16 3 
5463 4 13 6 11 8 5 9 2 12 10 14 7 15 3 16 1

i.e.

> nrow(M)
[1] 5463

possible pairs (the list can be found here, where numbers are the same as the ones in the csv file). Which was the probability mentioned in a comment in the article mentioned previously, dailymail.co.uk/…. So the probability to have exactly the same output after the practice and the official draws was (in %)

> 100/nrow(M)
[1] 0.01830496

Which is not that small when we think about it….

And if someone has a mathematical expression for this probability, I am interested. The only reliable method I found was to list all possible pairs (the csv file is available if someone wants to check). But I am not satisfied….
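For the counting part, at least, there is a compact answer (a sketch, not something used above): the number of admissible sets of eight pairs is the permanent of the 8×8 0/1 compatibility matrix $A$, where $A[i,j]=1$ if runner-up $i$ may face group winner $j$, and a permanent can be computed with Ryser's formula,

perm=function(A){
  n=nrow(A); total=0
  for(k in 1:(2^n-1)){
    S=which(as.integer(intToBits(k))[1:n]==1)        # subset of columns
    total=total+(-1)^length(S)*prod(rowSums(A[,S,drop=FALSE]))
  }
  return((-1)^n*total)}

With the compatibility matrix built from LISTEIMPOSSIBLE (restricted to the eight runners-up and the eight group winners), perm(A) should return 5463, and the probability of identical draws is then 1/perm(A), provided all admissible outcomes are equally likely.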

Actuariat 1, ACT2121, huitième cours

For the eighth session of Actuarial Science 1 (ACT2121, preparation for the SOA’s Exam P), we will continue the exercises started last week. I am nevertheless putting a few additional exercises online, for those who wish to practice more (the file is online here). As a reminder (?), the final exam will take place next week (and not in two weeks), and will cover all of the material. As always, 30 questions, 3 hours, starting at 1pm (do I need to say it?). This time, I will provide the “official” SOA table.

 


Actuariat 1, ACT2121, septième cours

Still as part of the preparation for the SOA’s Exam P, another set of exercises. Since there are three weeks left (in addition to the final exam), we will try to finish reviewing all the notions. The 50 exercises are online here. I will put online (very soon) the questions and solutions of Monday’s midterm exam (with, as last time, the response statistics for each question).

Bayes is playing Russian roulette

There was (once again) a nice puzzle in http://www.futilitycloset.com/. Bayes and a good friend are playing Russian roulette. The revolver has six chambers. He puts two bullets in two adjacent chambers, spins the cylinder, holds the gun to his friend’s head, and pulls the trigger. It clicks. So it is now Bayes’s turn: he can choose either to spin the cylinder again or leave it as it is. Which is better? Hopefully, Bayes knows his theorem: if he does spin it, the probability of getting killed is 2 out of 6 (two loaded chambers out of six), but if he does not, since his friend is still alive, then the hammer should be next to one of the four chambers shown in red, below


So here, there are 3 chances out of 4 to survive, i.e. the probability of getting killed is 1 out of 4 (while it was 1 out of 3 when spinning). So Bayes should not spin. And as always, it is possible to see that it is a more general result: more generally, in a revolver with $n$ chambers, if there are $k$ bullets in $k$ adjacent chambers, and if the first player survives, the probability of getting killed is $k$ over $n$ when spinning, while it would be $1$ over $n-k$ if we don’t. Not spinning is better if and only if

$$\frac{1}{n-k}\leq\frac{k}{n}$$

i.e.

$$n\leq k(n-k)$$

So you’d better not spin, unless there was one bullet in the revolver, i.e. $k=1$… or $k=n$ (in that case, it might not be a good idea actually to play the game).
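A tiny enumeration confirms the six-chamber case (a sketch, assuming the bullets sit in chambers 1 and 2, and that each trigger pull moves to the next chamber):

n=6
loaded=c(1,2)                   # the two adjacent bullets
empty=setdiff(1:n,loaded)       # chambers where the first pull could have clicked
nextch=empty%%n+1               # chamber fired if the cylinder is not spun again
mean(nextch%in%loaded)          # P(killed | no spin) = 1/4
length(loaded)/n                # P(killed | spin)    = 1/3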

Proving tautological versus trivial results in mathematics

There is something that might be fun in mathematics, which is the connection between trivial, tautological and difficult questions. Sometimes, things are so intuitive that they seem to be obvious. But mathematicians aren’t jedis, and they should not trust their intuition too much… I mean, intuition is fine, but it is not a proof. It is like those standard results we learn in topology courses, e.g. “the closure of an open ball is not necessarily the closed ball”. The other thing is that, after a while, you try to prove something, until someone makes you realize that it is the definition…

And this morning, while I was trying to make a coffee, @renaudjf came with a simple question (yes, it always starts like that). Consider the standard algorithm to generate a conditional random variable. Assume that $\Theta$ has a priori distribution $\pi$, and that $X$, given $\Theta$, has (conditional) distribution $F_{X|\Theta}$.

The standard Monte Carlo idea, to generate values of $X$, is
  •  step 1: generate $\theta$ from the distribution of $\Theta$
  •  step 2: given that generation of $\theta$, generate $X$ from the conditional distribution $F_{X|\theta}$
“Can we prove that we actually generate from the (true, maybe hard to characterize) non-conditional distribution of $X$ ? Or is it just trivial ?”. After having those previous philosophical questions, we came to the point that if it was trivial, then we should be able to prove it. A standard way of writing the algorithm is to use the quantile based technique
  •   $\Theta=F_{\Theta}^{-1}(U_1)$ with $U_1\sim\mathcal{U}([0,1])$,
  •   $X=F_{X|\Theta}^{-1}(U_2)$ with $U_2\sim\mathcal{U}([0,1])$,
For instance, to generate a negative binomial distribution
n=1
theta=rgamma(n,3,3)
X=rpois(n,lambda=theta)
Thus, let $X=F_{X|\Theta}^{-1}(U_2)$ with $\Theta=F_{\Theta}^{-1}(U_1)$, where $U_1$ and $U_2$ are two independent random variables with a uniform distribution on the unit interval. Let us try to derive its distribution, i.e.
$$\mathbb{P}(X\leq x)=\mathbb{P}\left(F_{X|\Theta}^{-1}(U_2)\leq x\right)=\mathbb{E}\left[\mathbb{P}\left(U_2\leq F_{X|\Theta}(x)\,\big|\,\Theta\right)\right]=\mathbb{E}\left[F_{X|\Theta}(x)\right]$$
so
$$\mathbb{P}(X\leq x)=\int_0^1 F_{X|F_{\Theta}^{-1}(u)}(x)\,du$$
if we consider the following change of variable $\theta=F_{\Theta}^{-1}(u)$,
$$\mathbb{P}(X\leq x)=\int F_{X|\theta}(x)\,dF_{\Theta}(\theta)$$
which is exactly the non-conditional distribution of $X$.
And then, you’re quite happy because you’ve been able to prove a trivial result ! But next time, I promise, we’ll try to derive an amazing theorem that will change humanity… but next time only, first, let us prove trivial results.
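For the record, a quick numerical check of that (trivial) result on the example above: with a Gamma(3,3) prior, the mixture should be a negative binomial distribution with size 3 and probability 3/4,

set.seed(1)
n=1e5
theta=rgamma(n,3,3)                      # step 1 : generate theta
X=rpois(n,lambda=theta)                  # step 2 : generate X given theta
rbind(empirical  =table(factor(X,levels=0:4))/n,
      theoretical=dnbinom(0:4,size=3,prob=3/4))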

Eating chocolate, an Easter problem

Assume that there are (say) 100 chocolate eggs in a basket, 20 are dark chocolate, while 80 are milk chocolate. Unfortunately, eggs are wrapped, and there is no way you can distinguish them. My daughter has the following algorithm for eating them (and she actually plans to eat all of them)

  1. if there are eggs in her basket, she picks one – at random – looks if it is either dark or milk chocolate, writes it down on a piece of paper (just to remember how many of each kind are left), eats it, and moves to step 2.
  2. if there are eggs in her basket, she picks one – at random – looks if it is either dark or milk chocolate, writes it down on a piece of paper and:
  • if it is the same kind as the one she got before, then she eats it, and goes again to step 2.
  • if it is not the same kind as the one she got before, she wraps it back, and goes again to step 1.

At the end, if there is only one egg left, the probability that it is a milk chocolate egg is exactly 1/2… Nice, isn’t it ?

It is a simple rejection technique algorithm. It is possible to run some code to check the answer. The algorithm, which returns the numbers of dark and milk chocolate eggs left at the end, is

> lastchocolate=function(dark=80,milk=20){
+ s=1
+ while(dark+milk>1){
+ if(s==1){
+ (eatnow=sample(c("D","M"),prob=c(dark,milk),size=1))
+ if(eatnow=="D"){dark=dark-1};
+ if(eatnow=="M"){milk=milk-1};
+ eatbefore=eatnow;s=2}
+ if(s==2){
+ if(dark+milk>1){
+ s=1;
+ eatnow=sample(c("D","M"),prob=c(dark,milk),size=1)
+ if(eatnow==eatbefore){s=2
+ eat=eatnow;
+ if(eatbefore=="D"){dark=dark-1};
+ if(eatbefore=="M"){milk=milk-1}}
+ }}
+ }
+ return(c(dark,milk))}

If we run it 2,000 times, we obtain

> set.seed(1)
> m=lastchocolate(dark=80,milk=20)
> for(s in 1:1999){m=cbind(m,lastchocolate(dark=80,milk=20))}
> apply(m,1,sum)
[1] 1022 978

So it looks like there is a one-in-two chance to end up with a dark chocolate egg, and a one-in-two chance to end up with a milk chocolate egg.

Let us prove that result… Let $m$ denote the number of milk chocolate eggs and $d$ the number of dark chocolate eggs, when we start. Consider an inductive proof (on $m+d$) of the fact that the probability that the last egg is a milk one has to be $1/2$. The first step is when $m=d=1$. Then

out of the two chocolates, the probability to pick (and eat) the milk one first, leaving the dark one at the end, is $1/2$. Assume now that the probability is $1/2$ for all pairs $(m',d')$ such that $m'+d'<m+d$ and $m',d'\geq 1$.
Starting from $(m,d)$, run the algorithm until we get back to step 1 (eggs of one single kind are eaten in the meantime). The probability that all the milk eggs get eaten in that streak is $\frac{m!\,d!}{(m+d)!}$. Similarly, the probability that all the dark eggs get eaten is $\frac{d!\,m!}{(m+d)!}$, i.e. exactly the same.
So, the probability that both counts are still strictly positive when we get back to step 1 is $1-\frac{2\,m!\,d!}{(m+d)!}$. Then we can use our inductive assumption (again, at least one egg has been eaten). Thus, the overall probability that the last egg is a milk one is
$$\frac{d!\,m!}{(m+d)!}\times 1+\left(1-\frac{2\,m!\,d!}{(m+d)!}\right)\times\frac{1}{2}+\frac{m!\,d!}{(m+d)!}\times 0$$
where the part on the left is the probability that only milk eggs remain, and the second one is the probability that both kinds remain. This probability is exactly one half (straightforward).

Playing with fire (or water)

A few days ago, http://www.futilitycloset.com/ published a short post based on the fourth problem of the 1987 Canadian Mathematical Olympiad (based on a problem from the 6th All Soviet Union Mathematical Competition in Voronezh, 1966). The problem is simple (as always). It is about water pistol duels (with an odd number of players)

The answer is nice, and can be read on the blog.

What puzzled me in this problem is the following: we know, for sure, that at least one player won’t get wet, but we don’t know exactly how many of them won’t get wet (assuming that each player shoots at the closest one, and hits him for sure). It is simple to run simulations, e.g. assuming that players are uniformly distributed over a square,

NOTWET=function(n){
x=runif(n)
y=runif(n)
(d=as.matrix(dist(cbind(x,y), method = "euclidean",upper=TRUE)))
diag(d)=999999
dmin=apply(d,2,which.min)
notwet=n-length(table(dmin))
return(notwet)}

It is then rather simple to get the distribution of the number of player that did not get wet,

NSim=25000
N25=Vectorize(NOTWET)(n=rep(25,NSim))
T=table(N25)
plot(as.numeric(names(T)),T/NSim,type="b")

The graph for different values for the total number of players is the following (based on 25,000 simulations)

If we investigate further, say with 51 players, the distribution of the total number of players that did not get wet looks very much like a Gaussian distribution,

NSim=25000
N51=Vectorize(NOTWET)(n=rep(51,NSim))
T=table(N51)
plot(as.numeric(names(T)),T/NSim,type="b",col="blue")
u=seq(0,51,by=.1)
lines(u,dnorm(u,mean(N51),sd(N51)),col="red",lty=2)

If anyone has an intuition (not to say a proof) for that, I’d be glad to hear it…

Sunday evening, stupid games…

This evening, while I was about to wash the dishes, I heard my elders starting a game (call them Him and Her)
Him: “I have picked – in my head – a number, lower than 50. Try to guess…”
Her: “No way, too difficult…”
Him: “You can try five different numbers…”
Her: “… um … No, no way…”
Me: “Wait… each time we suggest a number, you tell us if yours is either above, or below ?”
You can see me coming clearly, can’t you ? Using a simple subdivision rule, we have a fast algorithm (and indeed, if I have to choose between washing the dishes and playing with the kids…)
Him: “um…. ok”
Her: “Daddy, are you sure we will win ?”
Me: “Well… I cannot promise that we will win… but I am rather sure [sic] that we will win quite frequently: more gains than losses…” (I guess).
Her: “Great ! I am playing with daddy…”

Him: “um…. wait, is it one of your tricks, again ? I don’t want to play anymore… Do you want to see the books we’ve chosen at the library ?”
Her: “Sure…”
Me: “What ? no one wants to see if I was right ? that we have indeed more than 50% chances to win…”
Him and her: “No !”
The point of that story ? If we listen to kids, science will not go forward, trust me. But I am curious… I want to see if my intuition was correct. Actually, the intuition was based on the fact that

> 2^5
[1] 32 
> 2^6
[1] 64

so in 5 or 6 steps the algorithm of subdivision should converge. I guess… I mean, I do not know for sure, since 50 is not a power of 2, so it might be difficult, each time, to split in two: we have to deal only with integers here…
To be sure, let us substitute my laptop for my son… to pick numbers, randomly (yes, sometimes I feel like I am Doctor Tenma, 天馬博士). The algorithm is simple: there are bounds, and at each step I should suggest the middle of the interval. If the middle is not an integer, I suggest either the integer below or the integer above (with equal probabilities).

cutinhalf=function(a,b){
m=(a+b)/2
if(m %% 1 == 0){m=m}
if(m %% 1 != 0){m=sample(c(m-.5,m+.5),size=1)}
return(round(m))}

The following function runs 100,000 simulations, and tells us how often, out of 5 numbers suggested, we get the right one.

winning=function(lower=1,upper=50,tries=5,NS=100000){
SIM=rep(NA,NS)
for(simul in 1:NS){
interval=c(lower,upper)
(unknownnumber=sample(lower:upper,size=1))
success=FALSE
for(i in 1:tries){
picknumber=cutinhalf(interval[1],interval[2])
if(picknumber==unknownnumber){success=TRUE}
if(picknumber>unknownnumber){interval[2]=picknumber}
if(picknumber<unknownnumber){interval[1]=picknumber}
#print(c(unknownnumber,picknumber,success,interval))
};SIM[simul]=success};return(mean(SIM))}

It looks like the probability that we got the good number is higher than 60%,

> winning()
[1] 0.61801

Which is not bad. And if the upper limit was not 50, but something else, the probability of winning would have been the following.

VWN=function(n){winning(upper=n)}
V=Vectorize(VWN)(seq(25,100,by=5))
plot(seq(25,100,by=5),V,type="b",col="red",ylim=c(0,1))


Actually, after losing a couple of times, I am rather sure that my son would have told us that we can suggest only four numbers. In that case, the probability would have been close to 30%, as shown on the blue curve below (where four numbers only can be suggested)
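For comparison, a simple theoretical bound: with $k$ adaptive guesses one can identify at most $2^k-1$ values, so the best possible winning probability is $\min(2^k-1,n)/n$,

(2^5-1)/50          # 0.62 : close to the 61.8% obtained with the (randomized) midpoint rule
(2^4-1)/50          # 0.30 : the 'four suggestions only' case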

Anyway, as intuited, with five possible suggestions, we were quite likely to win frequently. Actually with a probability of almost 2 out of 3… and 1 out of 3 if my son had decided to pick a number between 1 and 100, or with only 4 possible suggestions… Those are quite large actually, when we think about it. It reminds me of that MacGyver story I mentioned a few months ago… Anyway, calculating probabilities is nice, but I still have to wash the dishes…

Ruin probability and infinite time

A couple of weeks ago, I had a discussion with a practitioner, working in some financial company, about ruin, and infinite time. And it reminded me of a weird result. Well, not a weird result, but a result I found disturbing, at first, when I was a student (that I rediscovered with the eyes of someone dealing with computational issues, seeing here a difficult theoretical question). Consider a simple ruin problem. A player has wealth $x$. Then he flips a coin: tails, he has a gain of 1, heads, he experiences a loss of 1. At time $n$, his wealth is $x+X_1+\cdots+X_n$ where $X_i$ is associated with the $i$th coin: $X_i$ is equal to $+1$ with probability $p$ (tails), and $-1$ with probability $1-p$ (heads). It is also possible to write

$$S_n=X_1+\cdots+X_n$$

where $S_n$ can be interpreted as the net gain of the player. In order to get a good understanding of the results that can be obtained, assume $S_n$ to be given. Let $h$ denote the number of heads and $t$ the number of tails. Then $h+t=n$, while $t-h=S_n$. Let $N$ denote the number of paths to go from point A (wealth $x_A$ at time $n_A$) to point B (wealth $x_B$ at time $n_B$). Note that this is a Markovian problem, that can be modeled using Markov chains

But here, we will focus on combinatorial results. Hence,

$$N=\binom{n_B-n_A}{\frac{(n_B-n_A)+(x_B-x_A)}{2}}$$

In order to derive probabilities to reach $0$, let $N_n(x,y)$ denote the number of paths going from $x$ (at time $0$) to $y$ (at time $n$). And let $N_n^0(x,y)$ denote the number of paths going from $x$ to $y$ that do reach $0$ at some point between $0$ and $n$. Using a simple reflection property, then if $x$ and $y$ are positive,

$$N_n^0(x,y)=N_n(-x,y)$$

Based on those reflections, two results can be derived (focusing on probabilities, instead of counting paths). First, we can obtain that

$$\mathbb{P}(S_n=y)=\binom{n}{\frac{n+y}{2}}p^{\frac{n+y}{2}}(1-p)^{\frac{n-y}{2}}$$

(given that $n$ and $y$ have the same parity). The second result we can obtain is that

$$\mathbb{P}(S_1\neq 0,\ldots,S_{n-1}\neq 0,S_n=y)=\frac{\vert y\vert}{n}\,\mathbb{P}(S_n=y)$$

Based on those two expressions, if $\tau$ denotes the first time the wealth becomes null, given initial wealth $x$,

$$\tau=\inf\{n\,:\,x+S_n=0\}=\inf\{n\,:\,S_n=-x\}$$

then

$$\mathbb{P}(\tau=n)=\frac{x}{n}\,\mathbb{P}(S_n=-x)=\frac{x}{n}\binom{n}{\frac{n+x}{2}}(1-p)^{\frac{n+x}{2}}p^{\frac{n-x}{2}}$$

This can be computed easily,

> x=10
> p=.55
> ProbN=function(n){
+ pb=0
+ if(abs(n-x) %% 2 == 0)
+ pb=x/n*choose(n,(n+x)/2)*(1-p)^((n+x)/2)*(p)^((n-x)/2)
+ return(pb)}
> plot(Vectorize(ProbN)(1:1000),type="s")

That looks nice… But if we look closer, we can wonder what

$$\sum_{n\geq 1}\mathbb{P}(\tau=n)$$

would be ? Since $\mathbb{P}(\tau=\cdot)$ should be a probability measure, we might expect one. But here

> sum(Vectorize(ProbN)(1:1000))
[1] 0.134385

And it is not due to calculation mistakes that we do not get 1 here. Actually, we should write

$$\sum_{n\geq 1}\mathbb{P}(\tau=n)=\mathbb{P}(\tau<\infty)$$

which might be interpreted as the probability of ruin, starting from $x$, that we denote $\psi(x)$ from now on. The term on the left can be approximated using Monte Carlo simulations

> p=.55
> x=10
> m=1000
> simul=10000
> S=sample(c(-1,1),size=m*simul,replace=TRUE,prob=c(1-p,p))
> MS=matrix(S,simul,m)
> for(k in 2:m) MS[,k]=MS[,k]+MS[,k-1]
> T0=function(vm) which(vm<=(-x))[1]
> MTmin=apply(MS,1,T0)
> mean(is.na(MTmin)==FALSE)
[1] 0.1328

To check the validity of the relationship above, a simple (theoretical) recursive formula can be derived for the term on the right (the ruin probability), namely

$$\psi(x)=p\,\psi(x+1)+(1-p)\,\psi(x-1)$$

with boundary conditions $\psi(0)=1$, and $\lim_{x\to\infty}\psi(x)=0$ (when $p>1/2$). Then it comes that

$$\psi(x)=\left(\frac{1-p}{p}\right)^x$$

Note that it might be tricky to check using Monte Carlo simulation… since we cannot have an infinite number of runs. And we’re dealing precisely with things that do occur when time is infinite. Actually, we can still check convergence, considering an upper limit $m$ for the number of runs, and then letting $m$ go to infinity. Note that an explicit formula can then be derived (using the additional border condition $\psi(m)=0$)

$$\psi_m(x)=\frac{\left(\frac{1-p}{p}\right)^x-\left(\frac{1-p}{p}\right)^m}{1-\left(\frac{1-p}{p}\right)^m}$$

Using the following code, it is possible to calculate the ruin probability, in order to estimate $\psi(x)$.

> MSmin=apply(MS,1,min)
> mean(MSmin<=(-x))
[1] 0.1328
> (((1-p)/p)^x-((1-p)/p)^m)/(1-((1-p)/p)^m)
[1] 0.1344306
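Letting $m$ go to infinity in the last expression (with $p>1/2$) gives back the infinite-horizon ruin probability $\left(\frac{1-p}{p}\right)^x$; as a quick check,

((1-p)/p)^x          # ~ 0.1344, close to both estimates above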

The following graph shows the evolution of ruin probability as a function of initial wealth (with monte carlo simulation, with a fixed horizon – including a confidence interval – versus the analytical expression)

Hence, with stopping times, one should remember that

$$\sum_{n\geq 1}\mathbb{P}(\tau=n)=\mathbb{P}(\tau<\infty)\ \leq\ 1$$
and that those two terms can be approximated simply using simulations or standard approximations.

when Nuns or Hells Angels get in a plane

Today, at lunch, Matthieu told us a nice story (or call it a paradox if you like) about the probability of finding your seat empty when you get in a plane.

  • a plane full of nuns

Assume that you are in the line to get in the airplane, and you are the 100th in the line. The first one is scatterbrained, he has his head in the clouds, and when he gets in the airplane, he cannot remember where he should sit. His strategy is then extremely simple: he sits randomly in the plane. So he picks a seat at random, and he waits.

Then come 98 nuns (one by one). And nuns are extremely polite: if there is someone in their seat (the one that is on the ticket they have) then they do not complain, and pick another seat randomly (among those available, of course). Then you arrive. The question is simple: what is the probability that someone is seated in your seat ?

Any idea…?

Maybe I should give more time to do the maths… and tell another story…

  • a plane full of Hells Angels

Consider almost the same problem as the one mentioned above. Except that now, it is not 98 nuns that are getting in the plane, but 98 Hells Angels. So the problem here is that Hells Angels are slightly less polite than nuns. When they find someone sitting in the seat they should have, they do not shyly move to another seat, but they grunt, and then our scatterbrained man (who is actually sitting in their seat) has to move somewhere else. And the question is the same: you are the 100th person to get in the plane, what is the probability that someone is seated in your seat ? Any idea…?

The important point is that the problem is exactly the same (at least from a mathematical point of view, maybe not for the stewardess, or for the guy who enters the plane first). The point is that, at each time, there can be at most one person sitting in a seat which is not his or hers (in the sense that if we compare the list of passengers at any time, and the list of seats taken, there should be only one – or less – difference). The difference in the two stories is that in the first case, it will be a nun, while in the second one, it will be our shy guy.

  • Let us run simulations

If we do not see how to get that probability analytically, let us run some R code,

> set.seed(1)
> n=100; TEST=rep(NA,100000)
> for(s in 1:100000){
+ OCCUPIED=rep(FALSE,n)
+ OCCUPIED[sample(1:n,size=1)]=TRUE
+ for(j in 2:(n-1)){
+ FREE=which(OCCUPIED==FALSE)
+ if(OCCUPIED[j]==TRUE){OCCUPIED[sample(FREE,size=1)]=TRUE}
+ if(OCCUPIED[j]==FALSE){OCCUPIED[j]=TRUE}
+ }
+ TEST[s]=OCCUPIED[n]==TRUE
+ }
> mean(TEST)
[1] 0.49878

Here, we clearly see that the problem is the same (either with nuns or Hells Angels): we do not care about who will change his/her seat, but we just look at seats that are available… So the program is valid for the two problems (and the solution will then be the same). Another point is that the probability looks extremely simple: one over two !

  • an analytical expression

Consider the Hells Angels problem (for notations). Let $p_i$ denote the probability that, after the $i$th passenger’s entrance, our shy guy is sitting in my seat. When he gets in the plane, the probability that he gets to my seat is

$$p_1=\frac{1}{100}$$

(since the initial proof was not correct, I removed it, see below for a nice proof). One can get that

$$p_{i+1}=p_i+\frac{1}{101-i}\times\frac{1}{100-i},\qquad\text{i.e.}\qquad p_i=\frac{1}{101-i}$$

So, we can get the probability that, when I get in, our guy is sitting in my own seat as

$$p_{99}=\frac{1}{101-99}$$

$$=\frac{1}{2}$$

Hence, there is one chance out of two that my seat will be free… (which is what we got with Monte Carlo simulations).

But a faster proof is to observe that, in the Hells Angels case, our guy will be kicked out until he reaches either his seat, or mine. Since those two events are equiprobable, there is one chance out of two that he ends up in my seat (and since no Hells Angel will sit in mine, only this first guy can). So the probability that someone is in my seat when I get in is one half.

Nice isn’t it ? And thanks Matthieu for the problem  (with his friend Claude’s solution with the Hells Angels, and Olivier and Renaud for their comments) !

Could it really be a coincidence ?!?

Last week, on http://www.guardian.co.uk, there was a nice post on coincidences, concluding with “a decent knowledge of mathematics shows you that most coincidences are just that: coincidence“, asking for a better knowledge of basic probabilities….

The only problem is that the first line is not correct… “Today is my birthday, which means that there is a 50% probability that one of the first 23 people who read this blog entry will share their birthday with me”.

There is a well known “paradox” in probability related to this: the correct sentence should be “there is a 50% probability that, if we consider the first 23 people who read this blog entry, at least two of them will share their birthday”… but not necessarily with you… Indeed, the probability that at least 2 people out of 23 share a birthday is

> 1-prod(365:(365-22))/(365^23)
[1] 0.5072972

But if you want to find someone who shares his/her birthday with you, you need to wait for the first 253 readers if you are looking for a 50% chance,

> 1-(364/365)^253
[1] 0.5004772

But dealing with probability is not that simple. At least, we get the message… And I will be the last one to blame you: yesterday, a colleague got me, asking “simple” questions about the probability of finding a pair of socks in several drawers… And I could not get it right…