
Las Vegas and financial institutions

Exactly one month ago, I entered the Bellagio casino to gamble at roulette. It was actually a request from my daughter's godfather (who happens to be a probabilist). In a comment on a previous post, he suggested the following deal,

In the Bellagio you put $10 for me on the 33 and $10 for you as well. If 33 shows up, you take me to a French "3 étoiles" restaurant next time you stop by in France. If 33 doesn't show up, I take you to McDonald's…

I have to admit that I like eating in French "3 étoiles" restaurants, so I did gamble. Well, I could not remember the terms of the agreement very well (neither the number to pick, nor the amount to put on the table). So I asked my daughter which number I was supposed to pick, and she chose 22. Anyway, the number that came up was neither 33 nor 22, so we lost. Also, roulette tables at the Bellagio had a $15 minimum (from what I remember, I was supposed to play $5 or $10), and I saw tables with a $100 minimum (probably more, but I am not sure, and I could not take many pictures inside)! So I played $15 (I kept the chips as souvenirs), and I have to admit that I was excited for a few seconds. I really enjoyed that thrilling sensation! And I was playing only $15!

Later on, in the hotel room, while we were watching TV, we saw some poker games where people were putting $200,000 on the table (there was almost a million in the pot)! I tried to explain to my kids that this was one reason why there were so many signs on the walls saying that kids were not allowed in casinos, and so many ads about gambling being an addiction. It is not reasonable to put so much on a table! $200,000 on the table? This is probably more than everything I own!

I thought about all that yesterday, when I discovered the following table, about leverage ratios of banks (see http://fool.com/investing/…),

Company            Leverage Ratio (Assets-to-Equity), 2007
Bear Stearns       34:1
Morgan Stanley     33:1
Merrill Lynch      32:1
Lehman Brothers    31:1
Goldman Sachs      26:1

(with similar values in U.K., according to http://voxeu.org/…)

What does 30:1 mean? It means that a company with $1 in capital holds $30 in (various) financial positions (see http://newleftreview.org/II/… for a discussion). If you think about it, with a relative decline of only 3.5% in the value of those positions, the absolute loss exceeds the capital held by the company, since 30 × 3.5% > 100%. Now, if we forget about Lehman, and focus on me, gambling in Las Vegas, we can try to illustrate this 30:1 leverage ratio as follows. The way I see it is that, if I were a bank with $200,000 in equity (equity being, from my understanding, everything I own), I would be able to borrow 30 times this amount, and put this money on some table in Las Vegas. OK, there might be a big difference: in Vegas, on average I will lose money, while most models in finance claim that (on average) we should gain money (somehow, since it might depend on your reference level). And no one really owns the casino in real life. But still. A 30:1 leverage ratio means that I would be playing more than $6 million on a table in Las Vegas! How should I understand that leverage ratio? Am I really such a small player? Are banks really playing that big? Or perhaps they simply do not hold enough capital to play that big…
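A quick back-of-the-envelope computation in R, just to fix ideas (the numbers are simply the ones from the example above),

> equity   = 200000            # what I own (my "capital")
> leverage = 30                # assets-to-equity ratio
> position = equity*leverage   # what I would be allowed to put on the table
> position
[1] 6e+06
> loss = position*0.035        # a 3.5% relative decline on the position...
> loss/equity                  # ...wipes out more than 100% of my capital
[1] 1.05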

Playing cards in Vegas?

In a previous post, a few weeks ago, I mentioned that I would be in Las Vegas by the end of July, and I took the opportunity to write a post on roulette(s). Since some colleagues told me I should take some time to play poker there, I guess I have to understand how to play poker… so I went back to basics on cards, and shuffling techniques.

Now, I have to confess that I was surprised, while looking for mathematical models of shuffling, to find so many deterministic techniques (and results related to algebra, and cycles).

On http://mathworld.wolfram.com/ for instance, one can find nice articles on so-called in-shuffle and out-shuffle techniques. There is also a great article, Golomb (1961), mainly on algebraic properties of permutations obtained by cutting and shuffling, as well as Diaconis, Graham and Kantor's The Mathematics of Perfect Shuffles. And if you look at Monge's shuffle, you can find a deterministic recursive relationship. As a statistician (or applied probabilist), I should confess that I did not find an answer to the question I wanted to ask: how long should we shuffle before the cards in our hands are randomly sorted?

  • Randomness (from a statistician's perspective)

First, I need to define (as properly as possible) a notion of "cards randomly sorted". Consider a game with 32 cards. Why 32? Mathematicians will tell you that 32 is a great number, since it is a power of 2, so there might be interesting (algebraic) properties when shuffling. From a computational point of view, 32 is smaller than 52, so my random generations will run faster. This is basically why I used 32. 10 would have been even better, but not realistic with cards.

So, our 32 cards can be seen as a vector, or a list, of 32 items, say $(X_1,X_2,\dots,X_{32})$.

In order to assess whether my cards are randomly sorted, let us get back to properties of (real-valued) numbers. If there were 10 cards, the list could be seen as an element of the set $\{0,1,2,\dots,9\}^{10}$

(or to be more specific, of a subset of that set, since the numbers all have to be different, it has to be a permutation, so we cannot have duplicates; we'll get back to that point in a few seconds). Let us see this list as a decimal number, with 10 digits. More precisely, $x=\sum_{i=1}^{10} X_i\cdot 10^{-i}$, i.e. the number $0.X_1X_2\dots X_{10}$.

Now, it is natural to say that the cards are randomly sorted if this number is uniformly distributed on the unit interval, isn't it? (if we use the same shuffle many times, with the same starting point)

Well, if we think about it twice, uniform on the unit interval is probably not the proper distribution, since (as mentioned above) all digits have to be different. For instance, the smallest number would be 0.0123456789 and the largest 0.9876543210. But as we will see, this uniform assumption might not be too strong, actually.
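Just to check those two bounds (a one-line computation, nothing more),

> sum((0:9)*10^(-(1:10)))   # smallest value, digits in increasing order
[1] 0.01234568
> sum((9:0)*10^(-(1:10)))   # largest value, digits in decreasing order
[1] 0.9876543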

And if we want to get back to our initial problem, with 32 cards, we simply have to use a decomposition in base 32, i.e. $x=\sum_{i=1}^{32} X_i\cdot 32^{-i}$.

So if we have an algorithm to shuffle cards, we just have to run it several times (with the same starting value) and see when $x$ starts to be uniformly distributed. We start with a Dirac distribution, we have some kind of transition matrix, we expect our limiting distribution to be uniform, and we wonder when the limiting distribution is reached… And from a statistical point of view, that should not be too difficult to assess, since we have several goodness-of-fit tests that can be used.

Actually, it is possible to check whether our technique passes the test of a uniform distribution, when digits are randomly generated (without replacement). The code to generate $x$ is

> j = 32
> X3 = (0:(j-1))[sample(1:j)]  # random permutation of the digits 0,1,...,31
> x3 = sum(j^(-(1:j))*X3)      # the associated number, written in base 32

If we run it a few times, and check if the assumption of a uniform distribution is valid (on samples with, say, 500 observations),

> P3=NULL
> for(i in 1:10000){
+   U3=NULL
+   for(s in 1:500){
+     X3 =(0:(j-1))[sample(1:j)] 
+     x3 =sum(j^(-(1:j))*X3)
+     U3 =c(U3,x3)}
+   P3 =c(P3,ks.test(U3,punif)$p.value)
+ }

in 95% of the scenarios, the p-value exceeds 5%,

> mean(P3>.05)
[1] 0.9529

(which is what we should have under the null). More precisely, we can check that the p-value is uniformly distributed on the unit interval,

> hist(P3,freq=FALSE)

So assuming that our number is uniform on the unit interval might be a good notion for “cards are randomly sorted“.

What we need now is some shuffling algorithm. Or to be more specific, some feasible shuffling algorithm. I mean here that I am just starting to play with cards, so it should be a technique I am able to perform, to understand how it works… So you will have to wait a few weeks before I start talking about the riffle or dovetail shuffle (you know, the kind of shuffle in which half of the deck is held in each hand, and then cards are released by the thumbs so that they fall on the table, interleaved… like in the movies)!

  • Top in at random shuffle, and related (simple) algorithm

My first algorithm is simple: the top-in at random shuffle. We start with the following ordering

    N=1:m

There are $m$ cards, and $n$ denotes the position where the card on top will go.

    n=sample(2:m,size=1)
    if(n<m)  N=c(N[2:n],N[1],N[(n+1):m])  
    if(n==m) N=c(N[2:n],N[1])

Then, we repeat that transfer of the card on top several times.

schuffle1=function(m,ns=10){
  N=1:m
  for(i in 1:ns)
    {
    n=sample(2:m,size=1)
    if(n<m)  N=c(N[2:n],N[1],N[(n+1):m])  
    if(n==m) N=c(N[2:n],N[1])
    }
return(N)}

Now, it is also possible to consider a bottom-in at random shuffle. The idea is the same, the only difference is that you start from the card at the bottom of the deck. And it behaves like the previous one (in terms of time before reaching randomness),

    n=sample(1:(m-1),size=1)
    if(n>1)  N=c(N[1:(n-1)],N[m],N[n:(m-1)])  
    if(n==1) N=c(N[m],N[1:(m-1)])

Why not mix the two? Randomly. Call it the randomly mixed top-bottom in at random shuffle. You start either with the card on top, or the one at the bottom of the deck (with identical probability), and then move that card somewhere,

     card=sample(c("top","bottom"),size=1)
     if(card=="top"){
       n=sample(2:m,size=1)
       if(n<m)  N=c(N[2:n],N[1],N[(n+1):m])  
       if(n==m) N=c(N[2:n],N[1])}
     if(card=="bottom"){
       n=sample(1:(m-1),size=1)
       if(n>1)  N=c(N[1:(n-1)],N[m],N[n:(m-1)])  
       if(n==1) N=c(N[m],N[1:(m-1)])}

All those codes can be put together (within the same function),

schuffle1=function(m,ns=10,which="top"){
  N=1:m
if(which=="top"){
  for(i in 1:ns)
    {
    n=sample(2:m,size=1)
    if(n<m)  N=c(N[2:n],N[1],N[(n+1):m])  
    if(n==m) N=c(N[2:n],N[1])
    }}
if(which=="bottom"){
  for(i in 1:ns)
    {
    n=sample(1:(m-1),size=1)
    if(n>1)  N=c(N[1:(n-1)],N[m],N[n:(m-1)])  
    if(n==1) N=c(N[m],N[1:(m-1)])
    }}
  if(which=="mixed"){
    for(i in 1:ns)
    {card=sample(c("top","bottom"),size=1)
     if(card=="top"){
       n=sample(2:m,size=1)
       if(n<m)  N=c(N[2:n],N[1],N[(n+1):m])  
       if(n==m) N=c(N[2:n],N[1])
       }
     if(card=="bottom"){
       n=sample(1:(m-1),size=1)
       if(n>1)  N=c(N[1:(n-1)],N[m],N[n:(m-1)])  
       if(n==1) N=c(N[m],N[1:(m-1)])
       }
    }}  
  return(N)}

But why do we move only one card? It would not be more complex to take 2. Or 3. Or more.

  • Tops in at random shuffle, and related (mixed) algorithm

Yes, I used tops to say that we now take several cards from the top of the deck. Say a random number of cards. And then, the strategy is the same, so the previous code is (slightly) adapted, as follows

     k=sample(1:(m-1),size=1)                   # number of cards taken from the top
     n=sample((k+1):m,size=1); if(k==m-1) n=m   # guard: sample(m,1) would draw from 1:m
     if(n<m)  N=c(N[(k+1):n],N[1:k],N[(n+1):m])  
     if(n==m) N=c(N[(k+1):n],N[1:k])

The idea is the following: we cut off the top $k$ cards, and insert that whole packet at a (random) position $n$ further down in the deck.
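For instance, on a small deck of 10 cards, one such move (with hypothetical values $k=3$ and $n=7$, just to make it concrete) would give

> N = 1:10
> k = 3; n = 7    # take the 3 top cards, insert them after position 7
> c(N[(k+1):n],N[1:k],N[(n+1):10])
[1]  4  5  6  7  1  2  3  8  9 10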

As earlier, it is possible to take cards from the bottom of the deck, or, once again, to use a mixed strategy. The codes would be

     card=sample(c("top","bottom"),size=1)
     if(card=="top"){
		 k=sample(1:(m-1),size=1)
		 n=sample((k+1):m,size=1); if(k==m-1) n=m
		 if(n<m)  N=c(N[(k+1):n],N[1:k],N[(n+1):m])  
		 if(n==m) N=c(N[(k+1):n],N[1:k])}
     if(card=="bottom"){
		 k=sample(2:m,size=1)
		 n=sample(1:(k-1),size=1); if(k==1) n=1
		 if(n>1)  N=c(N[1:(n-1)],N[k:m],N[n:(k-1)])  
		 if(n==1) N=c(N[k:m],N[n:(k-1)])}

Again, it is possible to have all those codes in the same function,

schuffle2=function(m,ns=10,which="top"){
  N=1:m
  if(which=="top"){
    for(i in 1:ns)
    {
      k=sample(1:(m-1),size=1)
      n=sample((k+1):m,size=1); if(k==m-1) n=m
      if(n<m)  N=c(N[(k+1):n],N[1:k],N[(n+1):m])  
      if(n==m) N=c(N[(k+1):n],N[1:k])
    }}
  if(which=="bottom"){
    for(i in 1:ns)
    {
      k=sample(2:m,size=1)
      n=sample(1:(k-1),size=1); if(k==1) n=1
      if(n>1)  N=c(N[1:(n-1)],N[k:m],N[n:(k-1)])  
      if(n==1) N=c(N[k:m],N[n:(k-1)])
    }}
  if(which=="mixed"){
    for(i in 1:ns)
    {card=sample(c("top","bottom"),size=1)
     if(card=="top"){
		 k=sample(1:(m-1),size=1)
		 n=sample((k+1):m,size=1); if(k==m-1) n=m
		 if(n<m)  N=c(N[(k+1):n],N[1:k],N[(n+1):m])  
		 if(n==m) N=c(N[(k+1):n],N[1:k])
     }
     if(card=="bottom"){
		 k=sample(2:m,size=1)
		 n=sample(1:(k-1),size=1); if(k==1) n=1
		 if(n>1)  N=c(N[1:(n-1)],N[k:m],N[n:(k-1)])  
		 if(n==1) N=c(N[k:m],N[n:(k-1)])
     }
    }}  
  return(N)}

  • How long should we shuffle before the cards are randomly sorted?

With the codes mentioned above, it is possible to run generations of shuffles,

distu=function(k=100,j=32){
	U1B=U1T=U1M=U2B=U2T=U2M=U3=NULL
	for(s in 1:100){
		X1T=(0:(j-1))[schuffle1(j,k,"top")] 
		X1B=(0:(j-1))[schuffle1(j,k,"bottom")] 
		X1M=(0:(j-1))[schuffle1(j,k,"mixed")] 
		X2T=(0:(j-1))[schuffle2(j,k,"top")] 
		X2B=(0:(j-1))[schuffle2(j,k,"bottom")] 
		X2M=(0:(j-1))[schuffle2(j,k,"mixed")]
		X3 =(0:(j-1))[sample(1:j)] 

		x1T=sum(j^(-(1:j))*X1T)
		x1B=sum(j^(-(1:j))*X1B)
		x1M=sum(j^(-(1:j))*X1M)
		x2T=sum(j^(-(1:j))*X2T)
		x2B=sum(j^(-(1:j))*X2B)
		x2M=sum(j^(-(1:j))*X2M)
		x3 =sum(j^(-(1:j))*X3)

		U1T=c(U1T,x1T)
		U1B=c(U1B,x1B)
		U1M=c(U1M,x1M)
		U2T=c(U2T,x2T)
		U2B=c(U2B,x2B)
		U2M=c(U2M,x2M)
		U3 =c(U3,x3)
    }
	B=list(U1T=U1T,U1B=U1B,U1M=U1M,U2T=U2T,U2B=U2B,U2M=U2M,U3=U3)
	return(B)
}

and then, we run tests to see if the samples can be assumed to be uniformly distributed on the unit interval. E.g., for the very first kind of shuffle described above, it would be

ks.test(B$U1T,punif)$p.value

More precisely, we use the following function, to estimate the proportion of scenarios where the p-value exceeds 5%,

PV=function(k){
	P1B=P1T=P1M=P2B=P2T=P2M=P3=NULL
	for(i in 1:10000){
        B=distu(k,j=32)
		P1T=c(P1T,ks.test(B$U1T,punif)$p.value)
		P1M=c(P1M,ks.test(B$U1M,punif)$p.value)
		P2T=c(P2T,ks.test(B$U2T,punif)$p.value)
		P2M=c(P2M,ks.test(B$U2M,punif)$p.value)
		P3 =c(P3,ks.test(B$U3,punif)$p.value)}
	return(list(
		p1T=mean(P1T>.05),
		p1M=mean(P1M>.05),
		p2T=mean(P2T>.05),
		p2M=mean(P2M>.05),				
		p3=mean(P3>.05)))}

If we plot the results, we have

K=1:100
MP=Vectorize(PV)(K)
plot(K,MP[1,],col="red",type="b",ylim=0:1,pch=1)
lines(K,MP[2,],type="b",pch=19,col="red")
lines(K,MP[3,],col="blue",type="b",pch=1)
lines(K,MP[4,],type="b",pch=19,col="blue")
lines(K,MP[5,],type="b",pch=3,col="black")

Here, we look at the proportion of p-values that exceed 5%. We can claim that we have a uniform distribution if that proportion is close to 95%. So basically, we just have to see when we reach the 95% region for the first time, zooming in on the upper part of the graph.
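Rather than eyeballing the graph, one can also read those first-passage times off the matrix MP computed above. A small helper, first95, which I introduce here just for convenience (it returns Inf, with a warning, if the 95% level is never reached),

> first95 = function(i) min(K[unlist(MP[i,]) >= 0.95])
> sapply(1:5, first95)   # one value per strategy plotted above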

With 32 cards,

  • with the top in at random shuffle, we have to repeat the move about 70 or 80 times before getting a randomly sorted deck. Which is a lot, but quite intuitive: one can imagine that it takes a while before the cards at the bottom move much higher in the pack,
  • with the randomly mixed top-bottom in at random strategy, it is slightly faster (we no longer have that problem of cards at the bottom staying at the bottom), since it takes about 60 or 70 rounds,
  • with the tops in at random, it is faster again, with about 35 rounds,
  • with the randomly mixed tops-bottoms in at random, it takes about 10 to 15 rounds.

Those results were obtained from tests on samples of size 100. The same code ran on a server over the weekend, with samples of size 500, and the output is rather close,

Note that those algorithms were mentioned because they are feasible, not only from a computational point of view, but also when playing with real, paper cards. Like with kids. I can actually ask my kids to perform those shuffling techniques next time we play cards. And the good thing about the randomly mixed tops-bottoms in at random technique: kids can do it 10 times, and the cards should be randomly ordered in the deck…

Now, for those willing to see more algorithms, there is the so-called Fisher-Yates (or Knuth) shuffle. But may I keep that for another post?

In three months, I’ll be in Vegas (trying to win against the house)

In fact, I'm going there with my family and some friends, including two probabilists (I mean professionals; I am merely an amateur), with this incredible challenge: will I be able to convince two probabilists to play at the casino?

Actually, I also want to study them carefully, to understand how we should play optimally. For example, I hope I can make them play roulette. Roulette is simple. The French (or European) roulette is probably the simplest: if I bet on black, I win if one of the 18 black numbers comes up, and I lose if one of the 18 red numbers, or zero (which is green), comes up. This gives a winning probability of 18/37, i.e. a 48.64% chance. But in Vegas, I think it is mostly American roulette that can be found in casinos, with a zero and a double zero (both favorable to the house). Here, the probability of winning is 18/38, i.e. a 47.36% chance.
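Just to double-check those two numbers, and the house edge they imply on a simple even-money bet,

> 18/37          # European roulette: probability of winning when betting on black
[1] 0.4864865
> 18/38          # American roulette, with the double zero
[1] 0.4736842
> 2*(18/38)-1    # expected gain per $1 bet, with an even-money payout
[1] -0.05263158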

Now, let us discuss optimal strategies a little bit. For instance, suppose I go to Las Vegas with an initial wealth $s$ (say $100). The goal is to find the strategy which maximizes the probability of leaving Las Vegas with $2s$ (here $200). Should I play big, or small?

Assume that I bet $x$ (which will be, here, for convenience, a fraction of $s$). With probability $p$, I will get $2x$ back, and with probability $1-p$, I will get $0$ (and lose my $x$). As mentioned above, $p$ is (a little) smaller than 50%. The casino must win (actually, we will see that this assumption has a very strong impact on the strategy).

Suppose my goal is to double my initial sum, as mentioned in the introduction of this post. Maybe there is an optimal value for $x$ that maximizes the probability of doubling my wealth. To make it simple, the game ends either because I go broke, or because I manage to double my initial wealth… Assume further that $x$ is fixed, and that I do not revise my bets. One can use Monte Carlo simulations to get an intuitive idea…

> bet=function(s=1,t=2*s,x=s/4,p=.4736,nsim=100000){
+     vp=rep(0,nsim)
+     for(i in 1:nsim){
+       w=s                   # current wealth
+       while((w>0)&(w<t)){   # play until broke, or until the target t is reached
+          ux=sample(c(min(x,t-w),-x),size=1,prob=c(p,1-p))
+          w=w+ux             # win just enough to reach t at most, or lose x
+       }
+       vp[i]=(w>=t)}         # did this scenario reach the target?
+     return(mean(vp))
+ }

If we plot this probability as a function of $x$, we have the following

> BET=function(x) bet(x=x)
> vx=1/(1:20)
> px= Vectorize(BET)(vx)
> plot(vx,px,log="x")

Let us see if we can do the maths, and actually compute those probabilities.

For example, if $x=s$, I play everything I have, and I double my wealth with probability $p$. That one was simple. And indeed, on the graph above, the point on the right is at probability $p$ (the red horizontal line).

Assume now that I bet $x=s/2$; then I will play at least two rounds,

  • with probability $(1-p)^2$, I will lose both rounds (and the game is over),
  • with probability $p^2$, I will win both rounds, and double my wealth (and the game is also over),
  • with probability $2p(1-p)$, I will lose one round and win the other, so I will find myself again with my (initial) wealth $s$, and the game will start again…

To make the story short, the probability of doubling my wealth is the geometric sum

$$p^2\left(1+2p(1-p)+[2p(1-p)]^2+\cdots\right)=\frac{p^2}{1-2p(1-p)}$$
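Plugging in the American roulette probability, betting in two halves already lowers my chances, from 47.36% (betting everything at once) down to about 44.7%,

> p=.4736
> p^2/(1-2*p*(1-p))   # probability of doubling, betting s/2 at each round
[1] 0.4473468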

Let's try something more general: I have initial wealth $s$, I can bet $x$, and the goal is to reach $2s$ (or, more generally, some target $t$). Now, the probability to reach $t$ from $s$, betting (always) $x$, is exactly the same as the probability to reach $t/x$ from $s/x$, betting only 1. Let $P_b(a)$ denote the probability to go from $a$ to $b$, betting 1 each time (let us use generic parameters). We can easily get the following equation

$$P_b(a)=p\cdot P_b(a+1)+(1-p)\cdot P_b(a-1)$$

Thus, we can write

$$p\cdot(P_b(a+1)-P_b(a))=(1-p)\cdot(P_b(a)-P_b(a-1))$$

or equivalently

$$P_b(a+1)-P_b(a)=\frac{1-p}{p}\cdot(P_b(a)-P_b(a-1))$$

so that, iterating down to 0,

$$P_b(a+1)-P_b(a)=\left(\frac{1-p}{p}\right)^a\cdot(P_b(1)-P_b(0))$$

Now, observe that $P_b(0)=0$ (since I cannot win anything once I have no money left).

Let us write $P_b(a+1)-P_b(0)$ using a domino (telescoping) technique:

$$[P_b(a+1)-P_b(a)]+[P_b(a)-P_b(a-1)]+\cdots+[P_b(1)-P_b(0)]$$

i.e.

$$\left(\frac{1-p}{p}\right)^a P_b(1)+\left(\frac{1-p}{p}\right)^{a-1}P_b(1)+\cdots+\left(\frac{1-p}{p}\right)^0 P_b(1)$$

so this geometric sum can also be written

$$\left(1-\left[\frac{1-p}{p}\right]^{a+1}\right)\left(1-\left[\frac{1-p}{p}\right]\right)^{-1}P_b(1)$$

Finally, we can write

$$P_b(a)=\left(1-\left[\frac{1-p}{p}\right]^{a}\right)\left(1-\left[\frac{1-p}{p}\right]\right)^{-1}\cdot P_b(1)$$

Here, there is still $P_b(1)$ to make explicit. The idea is to observe that $P_b(b)=1$ (once the target is reached, the game is over), thus

$$P_b(a)=\left(1-\left[\frac{1-p}{p}\right]^{a}\right)\left(1-\left[\frac{1-p}{p}\right]^{b}\right)^{-1}$$

So finally,

$$\mathbb{P}(\text{gain})=\left(1-\left[\frac{1-p}{p}\right]^{s/x}\right)\left(1-\left[\frac{1-p}{p}\right]^{2s/x}\right)^{-1}$$

Nice, isn't it? But to be honest, there is nothing new here. This is actually an old result, discovered by Christiaan Huygens in 1657, then extended by Jacob Bernoulli in 1680, and finally properly established by Abraham de Moivre in 1711. It is possible to plot this probability as a function of $x$,

> bet2=function(s=1,t=2*s,x=s/4,p=.4736){
+     vp=(1-((1-p)/p)^(s/x))/(1-((1-p)/p)^(t/x))  # closed form: reach t before ruin
+     return(vp)
+ }
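A quick sanity check, comparing the closed form with the Monte Carlo estimate obtained with the bet function above (the two should agree up to simulation noise),

> bet2(x=1/2)
[1] 0.4473468
> bet(x=1/2)    # Monte Carlo estimate; should return a value close to 0.447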

The graph is the same as the one obtained with Monte Carlo simulations (hopefully). Observe, looking carefully at the function above, that the probability is decreasing with $s/x$, the number of (unit) bets needed to reach the target. Which makes sense… Further, the probability is decreasing with the target $t$: the greedier I am, the smaller my chance of winning.

Now, the interesting part is what is plotted on the graphs above: the smaller $x$ (the size of the bet at each round), the smaller the chance of winning. If I want to win, it is important not to play like a small player: I must bet everything I have! Actually, the funny thing is that if the probability of winning were (slightly) larger than 1/2, then, on the contrary, I should bet as little as possible

So far, there is nothing new. Everything mentioned in this post can be related to a fundamental result of Lester Dubins and Leonard Savage, in "How to Gamble if You Must: Inequalities for Stochastic Processes" (published in 1965); see also Sudderth (1972). Of course, I could try another strategy, a little less reasonable I think, sometimes called the d'Alembert martingale. I believe more in luck than in coincidence, so, when I win, I lower my bet (do not tempt fate), but when I lose, I increase my bet (I must win someday). But let's keep that for another post, someday…
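(For the record, a minimal sketch of that d'Alembert rule, in the same even-money setting as above; the stopping rule, going broke or doubling the initial wealth, is my own assumption, chosen to match the rest of this post,)

> dalembert=function(s=100,x0=10,p=.4736,nsim=10000){
+     vp=rep(0,nsim)
+     for(i in 1:nsim){
+       w=s; x=x0
+       while((w>0)&(w<2*s)){
+          b=min(x,w)                             # cannot bet more than I have
+          if(runif(1)<p){ w=w+b; x=max(x-x0,x0)  # win: lower the bet
+          } else        { w=w-b; x=x+x0 }        # loss: raise the bet
+       }
+       vp[i]=(w>=2*s)}
+     return(mean(vp))   # estimated probability of doubling the initial wealth
+ }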

Again, that's the theory. I guess we should try, and see how it works. I'll try to upload pictures on the blog during the road trip, so if by the beginning of August nothing has been posted on the blog, please send a rescue team to save me at the Bellagio…