Tag Archives: Markov

Fairness and discrimination, PhD Course, #5 Models and Data

For the fifth course, we will discuss machine learning and the standard techniques used to obtain predictive models, and to assess the accuracy of those models.

GLM (possibly constrained)

Classically, we use a penalized version of least squares (but this can be adapted to GLMs, when penalizing the negative log-likelihood). Because of the Karush–Kuhn–Tucker conditions, having a constraint on the parameter is equivalent to the following penalized problem, when the constraint is on the \ell_2 norm of \boldsymbol{\beta},

We can also consider the \ell_1 norm of \boldsymbol{\beta},

Those two approaches can be seen as a trade-off between accuracy (here the empirical risk on the left) and complexity of the model (on the right). And we can also consider a mixture of the two norms,
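As an illustration, here is a minimal sketch with the glmnet package, on simulated data (everything below is a toy example, not the data used in the course): alpha = 0 gives the \ell_2 (ridge) penalty, alpha = 1 the \ell_1 (lasso) penalty, and intermediate values the mixture,

library(glmnet)
set.seed(1)
x = matrix(rnorm(200 * 10), 200, 10)                   # design matrix (simulated)
y = as.numeric(x %*% c(2, -1, rep(0, 8)) + rnorm(200)) # response (simulated)
ridge   = glmnet(x, y, alpha = 0)                      # ell_2 penalty on beta
lasso   = glmnet(x, y, alpha = 1)                      # ell_1 penalty on beta
elastic = glmnet(x, y, alpha = .5)                     # mixture of the two norms
cv = cv.glmnet(x, y, alpha = 1)                        # penalty chosen by cross-validation
coef(cv, s = "lambda.min")                             # accuracy / complexity trade-off
# for a GLM, penalizing the negative log-likelihood: add family = "binomial", etc.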

As we will see, it will also be possible to consider some penalty related to fairness and discrimination measures (in-processing).

Classifier and ROC Curves

We will also recall metrics used in the context of classification, such as the ROC curve

Each point of the curve corresponds to two areas under the distributions of the scores (in the two groups), for the same threshold – namely the false positive rate and the true positive rate

Based on the ROC curve, we can define the AUC, the area under the ROC curve,
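As an illustration, a minimal sketch (on simulated scores and 0/1 labels, purely hypothetical) of the empirical ROC curve and its AUC, computed directly from the false and true positive rates,

set.seed(1)
n = 1000
y = rbinom(n, 1, .4)                           # observed 0/1 outcome (simulated)
score = runif(n)^(2 - y)                       # some imperfect score (simulated)
thresholds = c(Inf, sort(unique(score), decreasing = TRUE))
FPR = sapply(thresholds, function(t) mean(score[y == 0] >= t))
TPR = sapply(thresholds, function(t) mean(score[y == 1] >= t))
plot(FPR, TPR, type = "l", xlab = "false positive rate", ylab = "true positive rate")
abline(0, 1, lty = 2)
AUC = sum(diff(FPR) * (head(TPR, -1) + tail(TPR, -1)) / 2)   # trapezoidal rule
AUC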

But for classifiers, the important challenge is to have calibrated scores, meaning that we want the score to be interpreted as the true underlying probability.

Calibration

Well-calibration is defined as follows

or (with different notations)

It is a well-known property in several applications.

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-07.png

The plot on the right is the calibration plot,

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-10.png

We can easily get that plot

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-09.png
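For instance, a minimal sketch on simulated (mis)calibrated scores: bin the predicted probabilities, compare them with observed frequencies, and add a smoothed curve,

set.seed(1)
n = 1000
p = runif(n)                                   # true underlying probabilities
y = rbinom(n, 1, p)                            # observed 0/1 outcome
score = plogis(1.5 * qlogis(p))                # an overconfident, miscalibrated score
bins = cut(score, breaks = seq(0, 1, by = .1), include.lowest = TRUE)
pred = tapply(score, bins, mean)               # average prediction per bin
obs  = tapply(y, bins, mean)                   # observed frequency per bin
plot(pred, obs, xlim = 0:1, ylim = 0:1,
     xlab = "predicted probability", ylab = "observed frequency")
abline(0, 1, lty = 2)                          # the diagonal = well-calibrated
lines(lowess(score, y), col = "red")           # smoothed calibration curve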

This concept is related to the question “do probabilities returned by some model represent real probabilities?” For instance, below, we have pictures generated as some sort of geodesic between two pictures, with a woman on the top left, and a man in the bottom right, published in the New York Times. And below, “probabilities” given by https://www.picpurify.com/demo-face-gender-age.html.

We could agree that it is rather strange that the probabilities (of being a man) do not increase continuously, but also that, on top, with extremely high confidence, the model predicts that the picture is that of a woman, while below, also with extremely high confidence, it predicts that the person is a man…

Data, observations vs. experiments

Then, after those concepts and notations related to models, we will talk about data, and more specifically about the distinction between observations and experiments.

Another popular classification is the one discussed by Judea Pearl.

So we will talk about association, correlation, causal inference, and counterfactuals.

“Correlated variables” or proxies

One important issue is that, with massive data, one can easily get a (good) proxy of almost any sensitive variable.

The concept is related to comonotonicity, or perfect correlation.

But this is clearly too strong, so we will discuss dependence measures, too.

Independence properties

Recall that independence is defined as follows

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-11.png

and we can consider a weaker form, based on null-covariance

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-12.png

or null-correlation

(sidenote, this correlation measure is bounded, and those bounds are related to Hardy-Littlewood inequality and optimal transport)

An interesting measure is the maximal correlation

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-13.png

or we can consider a weaker version, where we do not consider all possible transformations, but only a subset

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-14.png
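For instance, a minimal sketch (on simulated data) where the transformations are restricted to linear combinations of spline basis functions, so that this restricted maximal correlation reduces to a first canonical correlation,

library(splines)
set.seed(1)
n = 1000
x = rnorm(n)
y = x^2 + rnorm(n, sd = .5)          # strongly dependent, but almost uncorrelated
cor(x, y)                            # close to 0
Bx = bs(x, df = 6)                   # spline basis for transformations of x
By = bs(y, df = 6)                   # spline basis for transformations of y
cancor(Bx, By)$cor[1]                # much larger, close to the maximal correlation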

Another important concept is the one of conditional independence

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-16.png

(the latter will be used in the context of causal graphs).

Causality

Before talking about causality, recall what non-independence means…

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-17.png

We can then construct causal graphs, or “directed acyclic graphs”

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-20.png

where the nodes are the variables used in the model, plus the outcome (usually at the end of the causal graph). Then we define paths

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-18.png

and the concept of d-separation

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-19.png

This concept is related to the statistical property of conditional independence

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-21.png

More precisely, we have the following Markov property on causal graphs

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-22.png

For example, for such a graphical model,

the joint distribution is \mathbb{P}[x_1,x_2,x_3,x_4]=\mathbb{P}[x_1]\times \mathbb{P}[x_2|x_1]\times \mathbb{P}[x_3|x_2]\times \mathbb{P}[x_4|x_3], and for the graphical model below

we have \mathbb{P}[x_1,x_2,x_3,x_4]=\mathbb{P}[x_1]\times \mathbb{P}[x_2]\times \mathbb{P}[x_3|x_1,x_2]\times \mathbb{P}[x_4|x_3]. Those graphs can be related to structural models (with idiosyncratic noise denoted U), since

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-23.png
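For instance, a minimal sketch (with hypothetical linear structural equations, not the ones on the slides) where each variable is generated from its parents in the DAG and its own idiosyncratic noise U, following the factorization \mathbb{P}[x_1]\times\mathbb{P}[x_2]\times\mathbb{P}[x_3|x_1,x_2]\times\mathbb{P}[x_4|x_3],

set.seed(1)
n = 10000
u1 = rnorm(n); u2 = rnorm(n); u3 = rnorm(n); u4 = rnorm(n)   # idiosyncratic noises
x1 = u1                                # x1 has no parent
x2 = u2                                # x2 has no parent
x3 = x1 + x2 + u3                      # x3 depends on x1 and x2
x4 = 2 * x3 + u4                       # x4 depends on x3 only
summary(lm(x4 ~ x3 + x1 + x2))$coefficients   # x1, x2 not significant given x3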

Potential outcome

Another important concept is that of counterfactuals, and potential outcomes. In an ideal world, we would have observed the outcome in both cases, with and without the treatment

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-24.png

but in real life, it’s only one of them,

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-25.png

And the goal will be, somehow, to estimate what the non-observed outcome would have been. And then, classical quantities we wish to estimate are the average treatment effect, and its conditional version, based on some covariates.

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-26.png

This concept will actually be related to counterfactual fairness, when the “treatment” is the sensitive attribute.

Twin network representation of the counterfactual

Finally, we will consider a so-called “twin network representation”. Consider a DAG, associated with some simple structural model

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-27.png

Based on the structural model, we can recover the values of the idiosyncratic noise components

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-28.png

Then, we use those values in the twin representation, where the treatment is no longer 0, but 1. Counterfactuals are created by using the same noises

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-29.png

The difference between the two outcomes is the treatment effect, or the disparate treatment

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-30.png

or more generally, we write

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-31.png
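For instance, a minimal sketch (again with a hypothetical structural equation): recover the noise from the observed world, then reuse it in the twin world where the treatment is switched,

set.seed(1)
n = 10000
u = rnorm(n)                           # idiosyncratic noise
t = rbinom(n, 1, .5)                   # treatment actually received
y = 1 + 2 * t + u                      # observed outcome, y = f(t, u)
u_hat = y - 1 - 2 * t                  # noise recovered from the observed world
y1 = 1 + 2 * 1 + u_hat                 # counterfactual outcome under treatment 1
y0 = 1 + 2 * 0 + u_hat                 # counterfactual outcome under treatment 0
mean(y1 - y0)                          # treatment effect (here constant, equal to 2)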

This is an idea used in Plecko & Meinshausen, 2019, in the context of fairness, but we will discuss this more, later on…

Optimal Claiming Strategies in Bonus Malus Systems and Implied Markov Chains

With Arthur David and Romuald Elie, we just wrote a short paper on bonus malus, and optimal strategies to claim a loss (or not)

In this paper, we investigate the impact of the claim reporting strategy of drivers, within a bonus malus system. We exhibit the induced modification of the corresponding class level transition matrix and derive the optimal reporting strategy for rational drivers. The hunger for bonuses induces optimal thresholds under which drivers do not claim their losses. A numerical algorithm is provided for computing such thresholds and realistic numerical applications are discussed.

The paper is now online on http://papers.ssrn.com/id=2790583 and https://hal.archives-ouvertes.fr/hal-01326798.

Note that we do not discuss legal issues here (in some contracts, it is compulsory to claim all losses, even small ones), but economic incentives and mathematical issues. Some popular magazines in France did mention that issue of not claiming small losses (see http://leparticulier.fr/), but those standard computations (see below) are based on a naive model that we improve in our paper,

Generating a Markov chain vs. computing the transition matrix

A couple of days ago, we had a quick chat on Karl Broman‘s blog, about snakes and ladders (see http://kbroman.wordpress.com/…) with Karl and Corey (see http://bayesianbiologist.com/….), and the use of Markov chains. I do believe that this application is truly awesome: the example is understandable by anyone, and computations (almost any kind, from what we’ve tried) are easy to perform. At the same time, some French students asked me specific details regarding some old lecture notes on Markov chains, and on some introductory example I used as a possible motivation: the stepping stone algorithm. In the notes, I just mentioned the idea of this popular generic algorithm (introduced in Sawyer (1976)) and I used simulations to show – visually – how it works. Again, it was just to motivate the course, which actually did focus on the theory of Markov chains. But those students wanted more, like how I got the transition matrix, for instance. And that is actually not a simple question, from a computational perspective. I mean, I can easily generate this Markov chain, but writing the transition matrix explicitly, that was another story. Which took me a bit longer. In a very specific case…

But let us get back to the roots, and to the stepping stone algorithm. At least, to one of them (the one I used in my notes), because it looks like there are several algorithms. We consider an h×h grid, with some colors inside, say c possible colors. Each cell of the grid has a given color. Then, at some stage, we select randomly one cell in the grid, and it will take the color of one of its neighbors (some kind of absorption, or mutation). This is, more or less, what is also detailed in some lecture notes by James Propp (see also Sato (1983) or Zähle et al. (2005) for more theoretical details about that Markov chain). This is extremely simple to generate (that’s what I did in my notes, with very big grids, and a lot of colors). But what if we want to write the transition matrix?

First of all, we need to define the state space. Basically, we have h^2 cells, and each of them has one color, chosen among the c possible ones. Which gives us c^{h^2} possible states… And that can be large. I mean, if we consider the smallest grid that might be interesting, say 3×3, and only 2 colors, then we talk about 2^9 = 512 possible states. That is large, not huge. But we should keep in mind that we have to compute a transition matrix, that would be a matrix with 512^2 = 262,144 elements. More generally, we talk about writing down matrices with (c^{h^2})^2 elements. If we want black and white 4×4 grids, that would mean a matrix with (2^{16})^2 = 2^{32}, which means about 4 billion elements! And if we consider a red-green-blue 3×3 grid, we have to write down a matrix with (3^9)^2 elements, i.e. almost 400 million elements. So, let’s face it: we can only work with 3×3 bi-color grids.
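Just to fix the orders of magnitude, a quick check of those counts,

c(states_3x3_2col  = 2^(3^2),        # 512 states for a 3x3 bi-color grid
  entries_3x3_2col = (2^(3^2))^2,    # 262,144 entries in the transition matrix
  entries_4x4_2col = (2^(4^2))^2,    # about 4.3 billion entries
  entries_3x3_3col = (3^(3^2))^2)    # almost 400 million entries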

So let’s try… The good thing is that it can be related to work I’ve been doing recently on binomial recombining trees (binomial being related to bi-color). First of all, our grid will be described as follows

> h=3
> M=matrix(1:(h^2),h,h)
> M
     [,1] [,2] [,3]
[1,]    1    4    7
[2,]    2    5    8
[3,]    3    6    9

with two colors

> color=c("red","blue")

Then, we should look for neighbors, or derive a neighborhood matrix,

> d=function(i,j) dist(rbind(c((i-1)%/%h,(i-1)%%h),
+                            c((j-1)%/%h,(j-1)%%h)))
> Neighb=matrix(Vectorize(d)(rep(1:(h^2),each=h^2),
+                            rep(1:(h^2),h^2)),h^2,h^2)
> trunc(Neighb*100)/100
      [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
 [1,] 0.00 1.00 2.00 1.00 1.41 2.23 2.00 2.23 2.82
 [2,] 1.00 0.00 1.00 1.41 1.00 1.41 2.23 2.00 2.23
 [3,] 2.00 1.00 0.00 2.23 1.41 1.00 2.82 2.23 2.00
 [4,] 1.00 1.41 2.23 0.00 1.00 2.00 1.00 1.41 2.23
 [5,] 1.41 1.00 1.41 1.00 0.00 1.00 1.41 1.00 1.41
 [6,] 2.23 1.41 1.00 2.00 1.00 0.00 2.23 1.41 1.00
 [7,] 2.00 2.23 2.82 1.00 1.41 2.23 0.00 1.00 2.00
 [8,] 2.23 2.00 2.23 1.41 1.00 1.41 1.00 0.00 1.00
 [9,] 2.82 2.23 2.00 2.23 1.41 1.00 2.00 1.00 0.00
> Neighb=(Neighb<2)&(Neighb>0)
> Neighb
       [,1]  [,2]  [,3]  [,4]  [,5]  [,6]  [,7]  [,8]  [,9]
 [1,] FALSE  TRUE FALSE  TRUE  TRUE FALSE FALSE FALSE FALSE
 [2,]  TRUE FALSE  TRUE  TRUE  TRUE  TRUE FALSE FALSE FALSE
 [3,] FALSE  TRUE FALSE FALSE  TRUE  TRUE FALSE FALSE FALSE
 [4,]  TRUE  TRUE FALSE FALSE  TRUE FALSE  TRUE  TRUE FALSE
 [5,]  TRUE  TRUE  TRUE  TRUE FALSE  TRUE  TRUE  TRUE  TRUE
 [6,] FALSE  TRUE  TRUE FALSE  TRUE FALSE FALSE  TRUE  TRUE
 [7,] FALSE FALSE FALSE  TRUE  TRUE FALSE FALSE  TRUE FALSE
 [8,] FALSE FALSE FALSE  TRUE  TRUE  TRUE  TRUE FALSE  TRUE
 [9,] FALSE FALSE FALSE FALSE  TRUE  TRUE FALSE  TRUE FALSE

Now, let us list our 512 possible states explicitly.

> n=h^2
> states=function(x){
+   Base.b=rep(0,n)
+   ndigits=(floor(logb(x,base=length(color)))+1)
+   for(i in 1:ndigits){
+     Base.b[n-i+1]=(x%%length(color))
+     x=(x %/% length(color))}
+   return(Base.b)}
> M=Vectorize(states)(1:(length(color)^n-1))
> liststates=data.frame(rbind(rep(0,h^2),t(M)))
> head(liststates)
  X1 X2 X3 X4 X5 X6 X7 X8 X9
1  0  0  0  0  0  0  0  0  0
2  0  0  0  0  0  0  0  0  1
3  0  0  0  0  0  0  0  1  0
4  0  0  0  0  0  0  0  1  1
5  0  0  0  0  0  0  1  0  0
6  0  0  0  0  0  0  1  0  1

(for the first six, with 0/1 digits instead of colors). For instance, if we look at a specific one, it is possible to plot the grid, using

> plotsteps=function(u){
+   plot(0:h,0:h,col="white",xlab="",ylab="",axes=FALSE)
+   for(i in 0:(h^2-1)){
+   x=i%/%h
+   y=i%%h
+   polygon(x+c(1,.1,.1,1),y+c(1,1,.1,.1),
+   col=color[as.numeric(u)[i+1] + 1])
+   text(x+.45,y+.45,i)
+   }}

Here,

> plotsteps(liststates[100,])

Then, given one state, let us see what could happen next,

  • let us compute all connected states: all states we can end up in if we change one cell
  • we have to check, for each connected state, which cell did change
  • we should compute the probabilities to reach those 9 states, based on the fact that each cell is chosen with the same probability, and that the probability of changing the color depends on the colors of the neighboring cells
  • if some states cannot be reached (if a cell is surrounded by cells of the same color, it cannot change its color), then we should remove them from the list of reachable (possible) states.

The code will be something like the following

> listneighbour=function(i){
+   start=liststates[i,]
+   difference2only=function(j) {
+     w=which(liststates[j,]!=liststates[i,])
+     return((length(w)==1))}
+   possible=which( Vectorize(difference2only)(1:nrow(liststates))==TRUE )
+   P=function(j){   
+     L=liststates[i,which(Neighb[which(liststates[j,]!=liststates[i,]),]==TRUE)]
+     T=table(as.numeric(L))
+     T=T[as.character(0:(length(color)-1))]
+     T[is.na(T)]=0
+     return(as.numeric(T)/sum(T))
+   }
+   probability=Vectorize(P)(possible)
+   W=NULL
+   for(j in possible) W=c(W,which(liststates[j,]!=liststates[i,]))
+   I=1-liststates[i,W]+1
+   vp=diag(probability[as.numeric(I),])
+   vproba=0*vp
+   if(sum(vp)!=0) vproba=vp/sum(vp)
+   return(list(
+     color=liststates[i,W],
+     absorb=W,
+     possible=possible,
+     probability=probability,
+     prob=vproba))
+ }

For instance, if we start from state 100 (here, on the right)

> listneighbour(100)
$color
    X3 X4 X8 X9 X7 X6 X5 X2 X1
100  1  1  1  1  0  0  0  0  0

$absorb
[1] 3 4 8 9 7 6 5 2 1

$possible
[1]  36  68  98  99 104 108 116 228 356

$probability
     [,1] [,2] [,3]   [,4]   [,5] [,6] [,7] [,8]   [,9]
[1,]    1  0.8  0.6 0.6667 0.3333  0.4  0.5  0.6 0.6667
[2,]    0  0.2  0.4 0.3333 0.6667  0.6  0.5  0.4 0.3333

$prob
[1] 0.17964072 0.14371257 0.10778443 0.11976048 0.11976048
[6] 0.10778443 0.08982036 0.07185629 0.05988024

Let us look more specifically at the 99th state (which appears above as a state that could be reached from the 100th),

> liststates[99,]
   X1 X2 X3 X4 X5 X6 X7 X8 X9
99  0  0  1  1  0  0  0  1  0

If we plot it (here on the right, again), we get

> plotsteps(liststates[99,])

Clearly, here, the cell in the upper corner (number 9) changed from blue to red. Now, about the probability… The probability to select cell 9 is 1/9, and given that cell 9 is chosen, the probability to go from blue to red is 2/3 (the cell is surrounded by 2 red cells, and 1 blue cell). The probability to remain blue is then 1/3. Those are the probabilities computed by our function (the table with two rows, one per color). In order to get a better understanding of the meaning of the last line (with some sort of probabilities), let us look at the following (simpler) example.

> liststates[2,]
  X1 X2 X3 X4 X5 X6 X7 X8 X9
2  0  0  0  0  0  0  0  0  1

which can be visualized on the right. Here,

> listneighbour(2)
$color
  X9 X8 X7 X6 X5 X4 X3 X2 X1
2  1  0  0  0  0  0  0  0  0

$absorb
[1] 9 8 7 6 5 4 3 2 1

$possible
[1]   1   4   6  10  18  34  66 130 258

$probability
     [,1] [,2] [,3] [,4]  [,5] [,6] [,7] [,8] [,9]
[1,]    1  0.8    1  0.8 0.875    1    1    1    1
[2,]    0  0.2    0  0.2 0.125    0    0    0    0

$prob
[1] 0.65573770 0.13114754 0.00000000 0.13114754 0.08196721 
[6] 0.00000000 0.00000000 0.00000000 0.00000000

Things are pretty simple here

  • if we choose cells \{1,2,3,4,7\}, then nothing changes, since all their neighbors have the same color. So if we want to focus on changes (or, say, run the algorithm until the first color change), then choosing those cells is a waste of time
  • if we choose cells \{5,6,8\}, then it could be possible to change the color. And actually, \{5\} is different from \{6,8\} (since it has more neighbors)
  • if we choose cell \{9\}, then the color will definitely change, since all its neighbors have the other color here.

Now, the probability to select cell k, given that there was a color change, would be, if k is in \{9\},

\mathbb{P}(k)\propto\frac{3}{3}=1

while if k is in \{6,8\}, then 4 out of the 5 neighbors are red, so

\mathbb{P}(k)\propto\frac{1}{5}

and if k is \{5\}, then only one neighbor has a different color, out of 8, so

\mathbb{P}(k)\propto\frac{1}{8}

And for the others, \mathbb{P}(k)\propto 0. So it comes – since we assume that cells are drawn independently, and with the same probability – that if k is in \{9\},

\mathbb{P}(k)=\frac{1\cdot\frac{1}{9}}{\left(1+2\times\frac{1}{5}+\frac{1}{8}+5\times 0\right)\cdot\frac{1}{9}}=\frac{40}{61}

while if k is in \{6,8\},

\mathbb{P}(k)=\frac{\frac{1}{5}\cdot\frac{1}{9}}{\left(1+2\times\frac{1}{5}+\frac{1}{8}+5\times 0\right)\cdot\frac{1}{9}}=\frac{8}{61}

and if k is \{5\},

\mathbb{P}(k)=\frac{\frac{1}{8}\cdot\frac{1}{9}}{\left(1+2\times\frac{1}{5}+\frac{1}{8}+5\times 0\right)\cdot\frac{1}{9}}=\frac{5}{61}

These are exactly the probabilities computed above. The point is that we compute probabilities given that a color change will actually occur. The good point is that it should give faster convergence to some limiting distribution. If any.

What about our transition matrix? Well, using a simple loop, we should get it easily

> M=matrix(0,nrow(liststates),nrow(liststates))
> for(i in 1:nrow(liststates)){
+ L=listneighbour(i)
+ if(sum(L$prob)!=0){
+ j=L$possible
+ M[i,j]=L$prob
+ }
+ if(sum(L$prob)==0){
+ j=i
+ M[i,j]=1
+ }
+ }

One can check that this matrix satisfies some properties of transition matrices. For instance, the sum per row is one,

> sum(apply(M,1,sum)!=1)
[1]  0

Remember that this matrix is big, so I will not print it here. But trust me, it works (it might take a while on an old laptop, but anyone can do it). Now, if we want to visualize some paths of that chain, we can use the following algorithm. First, we need a starting point, that can be chosen randomly,

> j=sample(1:nrow(liststates),size=1)

or using a given colored grid, say

> j=100

Then we plot it,

> plotsteps(liststates[j,])

Now, the code within the loop is here

> d=rep(0,nrow(liststates))
> d[j]=1
> d=d%*%M
> j=sample(1:nrow(M),size=1,prob=d)
> plotsteps(liststates[j,])

Here are some examples. And indeed, we end up either with all cells in blue, or all cells in red.

Now, do we have to compute that transition matrix to produce those graphs (and to generate that Markov chain)? No. Of course not… At each step, I use a Dirac measure, and use the transition matrix just to get the probabilities to generate the next state. Actually, one can write faster and more intuitive code to generate the same chain… But I should probably keep that for another post…
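In the meantime, here is one possible sketch of such a direct simulation (reusing h, Neighb, color and plotsteps from above); redrawing pairs until a color actually changes reproduces the conditioned-on-a-change dynamics encoded in the matrix M,

cells = sample(0:1, h^2, replace = TRUE)        # random initial coloring
while(length(unique(cells)) > 1){               # until a monochrome (absorbing) state
  repeat{                                       # redraw until the move changes a color
    i = sample(1:(h^2), 1)                      # pick a cell at random
    j = sample(which(Neighb[i,]), 1)            # pick one of its neighbors at random
    if(cells[i] != cells[j]) break
  }
  cells[i] = cells[j]                           # the cell absorbs its neighbor's color
}
plotsteps(cells)                                # all cells end up red, or all blue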

Pills, half pills and probabilities

Yesterday, I was uploading some old posts to complete the migration (I go back to my old posts, one by one, to check links to pictures, reformat R code, etc). And I re-discovered a post published almost 2 years ago, on nuns and Hell’s Angels in an airplane.

It reminded me of an old probability problem (that might be known as one of Feynman’s problems): suppose that you have a prescription to take half pills for 6 days. Unfortunately the pharmacist was a bit lazy (or just wanted to help me write a mathematical problem), and he gives you 3 (full) pills in a small box. Day 1, you take a pill, break it in two parts, eat one, and return the other half to the box. Day 2, you draw randomly ‘something’ from the box, i.e. either half a pill, or a full pill. If it’s a half one, then you eat it. If it is a full one, you break it in two, eat one half, and return the other half to the box. Etc. On Day 6, if my story was well explained, you should know that there can only be one half pill. So far, so good. But what about Day 5? There were either two half pills, or one full pill. But what was the probability that there was a full pill in the box on Day 5?

Nice problem, isn’t it ?

The good thing is that it can be described by a Markovian model. Assume that we have n pills. After 2n days, the box will be empty. Consider the pair (nc, nh), denoting the number of complete pills and the number of half pills. nc can take all values from 0 to n, and nh will be positive, with nh ≤ n - nc. Thus, the number of states – possible pairs from Day 1 till Day 2n – will be (n+1)(n+2)/2, i.e. 10 here. More precisely, define those states in a dataframe,

> n=3
> COMPLETE=HALF=NULL
> for(i in n:0){
+ HALF=c(0:(n-i),HALF)
+ COMPLETE=c(rep(i,length(0:(n-i))),COMPLETE)
+ }
> k=length(COMPLETE)
> state=data.frame(s=1:k,nc=rev(COMPLETE),nh=rev(HALF))
> state
s nc nh
1   1  3  0
2   2  2  1
3   3  2  0
4   4  1  2
5   5  1  1
6   6  1  0
7   7  0  3
8   8  0  2
9   9  0  1
10 10  0  0

Now, we can play to derive the transition matrix of the Markov chain.

> attach(state)
> P=matrix(0,k,k)
> for(i in 1:k){
+ C=state$nc[i]
+ H=state$nh[i]
+ if((C>0)&(H>0)){
+ P[i,state[(nc==C-1)&(nh==H+1),"s"]]= C/(C+H)
+ P[i,state[(nc==C)&(nh==H-1),"s"]]= H/(C+H)}
+ if((C>0)&(H==0)){
+ P[i,state[(nc==C-1)&(nh==H+1),"s"]]=1}
+ if((C==0)&(H>0)){
+ P[i,state[(nc==C)&(nh==H-1),"s"]]=1}
+ if((C==0)&(H==0)){
+ P[i,state[(nc==C)&(nh==H),"s"]]=1}
+ }

We do have a transition matrix (or a probability matrix) since all elements are positive, and the sum per line is 1,

> apply(P,1,sum)
[1] 1 1 1 1 1 1 1 1 1 1

Here, the transition matrix is the following

> P
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,]    0    1 0.00 0.00 0.00  0.0 0.00  0.0    0     0
[2,]    0    0 0.33 0.66 0.00  0.0 0.00  0.0    0     0
[3,]    0    0 0.00 0.00 1.00  0.0 0.00  0.0    0     0
[4,]    0    0 0.00 0.00 0.66  0.0 0.33  0.0    0     0
[5,]    0    0 0.00 0.00 0.00  0.5 0.00  0.5    0     0
[6,]    0    0 0.00 0.00 0.00  0.0 0.00  0.0    1     0
[7,]    0    0 0.00 0.00 0.00  0.0 0.00  1.0    0     0
[8,]    0    0 0.00 0.00 0.00  0.0 0.00  0.0    1     0
[9,]    0    0 0.00 0.00 0.00  0.0 0.00  0.0    0     1
[10,]   0    0 0.00 0.00 0.00  0.0 0.00  0.0    0     1

In order to get our probability, let us start from state 1 – i.e. (nc, nh) = (3, 0) – with probability 1, and let us look at the distribution at different periods,

> dist=c(1,rep(0,k-1))
> MatDist=matrix(NA,2*n+1,k)
> MatDist[1,]=dist
> for(i in 1:(2*n)){dist=as.vector(t(dist)%*%P)
+ MatDist[i+1,]=dist
+ }

(one can check that after 2n days, the box is empty). The probability is given in row 2n-1, and we just have to check which column corresponds to the pair (nc, nh) = (1, 0),

> vs=state[which(MatDist[2*n-1,]>0),]
> proba=MatDist[2*n-1,vs[vs$nc==1,"s"]]
> proba
[1] 0.3888889

Here, the probability of having a full pill in the box on Day 5 is 38.89%.
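As a quick sanity check, a small Monte Carlo simulation of the box (a sketch, with the same rules as above, for n = 3 pills) gives roughly the same value,

set.seed(1)
day5full = replicate(1e5, {
  box = rep(1, 3)                        # 1 = full pill, 0.5 = half pill
  for(day in 1:4){                       # Days 1 to 4
    k = sample(length(box), 1)           # draw one item from the box
    if(box[k] == 1) box[k] = 0.5 else box = box[-k]
  }
  sum(box == 1) == 1                     # is there a full pill left on Day 5 ?
})
mean(day5full)                           # close to the 0.3889 obtained above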

Actually, it is possible to study the evolution of this probability as a function of n,

> computeproba=function(n=3){
+ COMPLETE=HALF=NULL
+ for(i in n:0){
+ HALF=c(0:(n-i),HALF)
+ COMPLETE=c(rep(i,length(0:(n-i))),COMPLETE)
+ }
+ k=length(COMPLETE)
+ state=data.frame(s=1:k,nc=rev(COMPLETE),nh=rev(HALF))
+ P=matrix(0,k,k)
+ for(i in 1:k){
+ C=state$nc[i]
+ H=state$nh[i]
+ if((C>0)&(H>0)){
+ P[i,state[(state$nc==C-1)&(state$nh==H+1),"s"]]= C/(C+H)
+ P[i,state[(state$nc==C)&(state$nh==H-1),"s"]]= H/(C+H)}
+ if((C>0)&(H==0)){
+ P[i,state[(state$nc==C-1)&(state$nh==H+1),"s"]]=1}
+ if((C==0)&(H>0)){
+ P[i,state[(state$nc==C)&(state$nh==H-1),"s"]]=1}
+ if((C==0)&(H==0)){
+ P[i,state[(state$nc==C)&(state$nh==H),"s"]]=1}
+ }
+ dist=c(1,rep(0,k-1))
+ MatDist=matrix(NA,2*n+1,k)
+ MatDist[1,]=dist
+ for(i in 1:(2*n)){dist=as.vector(t(dist)%*%P)
+ MatDist[i+1,]=dist
+ }
+ vs=state[which(MatDist[2*n-1,]>0),]
+ proba=MatDist[2*n-1,vs[vs$nc==1,"s"]]
+ return(proba)
+ }

If we plot the probability as a function of n, we get

> P=Vectorize(computeproba)(2:40)
> plot(2:40,P,ylim=c(0,.5))

One can observe that the probability is decreasing. But slowly, extremely slowly. With a log scale on the y-axis, we have

> plot(2:40,P,ylim=c(0,.5),log="y")

If we look at ‘higher’ values of n, we get

> computeproba(100)
[1] 0.14218

I do not know if this probability goes to 0 as n goes to infinity. Actually, since we have to compute a matrix with ((n+1)(n+2)/2)^2 entries, i.e. roughly n^4/4, n cannot be that large… Too bad. If anyone knows how this probability behaves as a function of n, when n is large, I’d be glad to know…

Which die roll should you hope for in Snakes and Ladders?

Last night, I mentioned the use of Markov chains for the game of Snakes and Ladders. As Jean-Philippe pointed out to me, strangely enough, kids are always much happier after rolling a 6 than after rolling a 1. But is that really the optimal value for the die? It depends on the position, of course. For instance, on the first roll, we can ask what the expected number of turns needed to finish the game becomes, conditional on each of the 6 possible rolls. We can compute, conditional on the 6 possible die values (and on the positions where we would then land), the expected number of turns to finish the game,

esperance=function(h0){
INITIAL = as.numeric(which(M[h0+1,]>0))-1
ESPERANCE=rep(NA,length(INITIAL))
names(ESPERANCE)=INITIAL
for(k in 1:length(INITIAL)){
initial=rep(0,n+1); initial[INITIAL[k]]=1
distrib=initial%*%M
game=rep(NA,1000)
for(h in 1:length(game)){
game[h]=distrib[n+1]
distrib=distrib%*%M}
ESPERANCE[k]=sum(1-game)}
return(ESPERANCE)}

(using the code posted online yesterday), e.g. for the first roll,

> esperance(0)
1        2        3        5        6       14
32.16499 31.99947 31.82348 31.47954 31.36441 29.83020

where the value indicated on top is the position where we could land (the last one corresponds to rolling a 4). Note that the best possible first roll is the one that takes us to the first ladder, i.e. the 4th cell,

If we look cell by cell, we get the following values of the “best” die rolls (not necessarily unique) as a function of the position on the board

(with snakes in red and ladders in blue). Fun, isn’t it?
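For reference, here is one possible sketch of that cell-by-cell computation (it reuses M, n and the starting/ending vectors from the “Basics on Markov Chain” post below, and simply minimizes the conditional expectation returned by esperance),

move = function(pos, die){                      # position reached after one roll
  p = min(pos + die, n)                         # reaching or overshooting 100 ends the game
  if(p %in% starting) p = ending[which(starting == p)]
  return(p)}
best_roll = function(h0){
  E = esperance(h0)                             # expected remaining turns, per landing cell
  val = sapply(1:6, function(d) E[as.character(move(h0, d))])
  return(as.numeric(which.min(val)))}           # die value minimizing that expectation
best_roll(0)                                    # 4, the roll reaching the first ladder
sapply(0:9, best_roll)                          # best rolls for the first ten cells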

Basics on Markov Chain (for parents)

Markov chains are a very interesting and powerful tool. Especially for parents. Because if you think about it quickly, most of the games our kids play are Markovian. For instance, snakes and ladders…

It is extremely easy to write down the transition matrix: one just needs to define all the snakes and ladders. For the one above, we have,

n=100
M=matrix(0,n+1,n+1+6)
rownames(M)=0:n
colnames(M)=0:(n+6)
for(i in 1:6){diag(M[,(i+1):(i+1+n)])=1/6}
M[,n+1]=apply(M[,(n+1):(n+1+6)],1,sum)
M=M[,1:(n+1)]
starting=c(4,9,17,20,28,40,51,54,62,
64,63,71,93,95,92)
ending  =c(14,31,7,38,84,59,67,34,19,
60,81,91,73,75,78)
for(i in 1:length(starting)){
v=M[,starting[i]+1]
ind=which(v>0)
M[ind,starting[i]+1]=0
M[ind,ending[i]+1]=M[ind,ending[i]+1]+v[ind]}

So, why is it important to have a Markov chain? Because, once you’ve noticed that you have a Markov chain game, you can derive anything you want. For instance, you can get the distribution after some turns,

powermat=function(P,h){
Ph=P
if(h>1){
for(k in 2:h){
Ph=Ph%*%P}}
return(Ph)}
initial=c(1,rep(0,n))
COLOR=rev(heat.colors(101))
u=1:sqrt(n)
boxes=data.frame(
index=1:n,
ord=rep(u,each=sqrt(n)),
abs=rep(c(u,rev(u)),sqrt(n)/2))
position=function(h=1){
D=initial%*%powermat(M,h)
plot(0:10,0:10,col="white",axes=FALSE,
xlab="",ylab="",main=paste("Position after",h,"turns"))
segments(0:10,rep(0,11),0:10,rep(10,11))
segments(rep(0,11),0:10,rep(10,11),0:10)
for(i in 1:n){
polygon(boxes$abs[i]-c(0,0,1,1),
boxes$ord[i]-c(0,1,1,0),
col=COLOR[min(1+trunc(500*D[i+1]),101)],
border=NA)}
text(boxes$abs-.5,boxes$ord-.5,
boxes$index,cex=.7)
segments(c(0,10),rep(0,2),c(0,10),rep(10,2))
segments(rep(0,2),c(0,10),rep(10,2),c(0,10))}

Here, we have the following (note that I assume that once 100 is reached, the game is over)

Assume for instance, that after 10 turns, your daughter accidentally drops her pawn out of the game. Here is the theoretical (unconditional) position of her pawn after 10 turns,

 so, if she claims she was either on 58, 59 or 60, here are the theoretical probabilities to be in each cell after 10 turns,

> h=10
> (initial%*%powermat(M,h))[59:61]/
+ sum((initial%*%powermat(M,h))[59:61])
[1] 0.1597003 0.5168209 0.3234788

i.e. it is more likely she was on 59 (60th cell of the vector since we start in 0). You can also look at the distribution of the number of turns (at first with only one player).

distrib=initial%*%M
game=rep(NA,1000)
for(h in 1:length(game)){
game[h]=distrib[n+1]
distrib=distrib%*%M}
plot(1-game[1:200],type="l",lwd=2,col="red",
ylab="Probability to be still playing")

Once you have that survival distribution, you have the expected number of turns to finish the game,

> sum(1-game)
[1] 32.16499

i.e. it takes about 32 turns, on average, for your daughter to reach cell 100. But in half of the games, she is still playing after 29 turns,

> max(which(1-game>.5))
[1] 29

But assuming that you are playing with your daughter, and that the game is over once one player reaches the 100 cell, it is possible to get the survival distribution of the first time one of us reaches the 100 cell,

plot((1-game[1:200])^2,type="l",lwd=2,col="blue",
ylab="Probability to be still playing (2 players)")

Here, the expected number of turns before ending the game is

> sum((1-game)^2)
[1] 23.40439

And if you ask your son to join the game, the survival distribution function is

plot((1-game[1:200])^3,type="l",lwd=2,col="purple",
ylab="Probability to be still playing (3 players)")

i.e. the expected number of turns before the end is now

> sum((1-game)^3)
[1] 20.02098

Rainy summers in Brittany? A statistical reality…

To complement the previous post (here), we can ask how Brittany differs from the other French regions… We saw here the average precipitation level, day after day during the summer months, in Brittany. In Rennes. In Paris, on the other hand, we get the following average,

which we can compare with Marseille,

or with Strasbourg,

In the figure below, we see that the probability of having rain in Paris (meaning at least 0.1 mm of water during the day, thick blue line, and at least 2 mm of water during the day, thin blue line) is higher than the probability of having rain in Rennes (respectively thick and thin light blue lines)

We are certainly well above Marseille,

but well below Strasbourg,

But beyond the marginal distributions, these cities differ from Brittany when we look at the transition matrices.

  • Day-to-day transitions

For Rennes, if we look day by day, we get the following counts

day n \ day n+1     nice weather    rain    total
nice weather                1955     612     2567
rain                         606     723     1329

which gives the following transition probabilities,

day n \ day n+1     nice weather    rain
nice weather              76.15%   23.85%
rain                      45.60%   54.40%

For Paris, the day-to-day transition counts are the following

day n \ day n+1     nice weather    rain    total
nice weather                2689     959     3648
rain                         946    1466     2412

which gives the following transition probabilities,

day n \ day n+1     nice weather    rain
nice weather              73.71%   26.29%
rain                      39.22%   60.78%

For Marseille, the day-to-day transition counts are the following

day n \ day n+1     nice weather    rain    total
nice weather                2527     375     2902
rain                         362     216      578

which gives the following transition probabilities,

day n \ day n+1     nice weather    rain
nice weather              87.08%   12.92%
rain                      62.63%   37.37%

For Strasbourg, the day-to-day transition counts are the following

day n \ day n+1     nice weather    rain    total
nice weather                  31     128      159
rain                         132    1464     1596

which gives the following transition probabilities,

day n \ day n+1     nice weather    rain
nice weather              19.50%   80.50%
rain                       8.27%   91.73%
  • Week-to-week transitions

If, on the other hand, we look at the week-by-week transition matrices, we get rather different results. A good week means no day with more than 2 mm of rain.
For Rennes, if we look week after week, we get

week n \ week n+1   nice weather    rain    total
nice weather                 379      25      404
rain                          26       7       33

which gives the following transition probabilities,

week n \ week n+1   nice weather    rain
nice weather              93.81%    6.19%
rain                      78.79%   21.21%

For Paris, the week-to-week transition counts are the following

week n \ week n+1   nice weather    rain    total
nice weather                 576      46      622
rain                          53       4       57

which gives the following transition probabilities,

week n \ week n+1   nice weather    rain
nice weather              92.60%    7.40%
rain                      92.98%    7.02%

For Marseille, the week-to-week transition counts are the following

week n \ week n+1   nice weather    rain    total
nice weather                 274      59      333
rain                          47       9       56

which gives the following transition probabilities,

week n \ week n+1   nice weather    rain
nice weather              82.28%   17.72%
rain                      83.93%   16.07%

For Strasbourg, the week-to-week transition counts are the following

week n \ week n+1   nice weather    rain    total
nice weather                1494     614     2108
rain                         613     939     1552

which gives the following transition probabilities,

week n \ week n+1   nice weather    rain
nice weather              70.87%   29.13%
rain                      39.50%   60.50%

The chi-square tests for independence from one week to the next give

  • in Rennes, a chi-square statistic of 8.054, i.e. a p-value of 0.45%
  • in Paris, a chi-square statistic of 0.025, i.e. a p-value of 87.26%
  • in Marseille, a chi-square statistic of 0.012, i.e. a p-value of 91.24%
  • in Strasbourg, a chi-square statistic of 0.7649, i.e. a p-value of 38.18%

in other words, the independence hypothesis is accepted everywhere, except in Rennes…
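As a side note, here is a sketch of how such weekly transition tables, transition probabilities and chi-square tests can be obtained in R, from a daily rain indicator (the rain vector below is a simulated placeholder, not the actual weather series),

set.seed(1)
rain = rbinom(62 * 30, 1, .3)                  # placeholder: 30 summers of 62 days
nw   = length(rain) %/% 7                      # number of (complete) weeks
week = rep(1:nw, each = 7)
wet  = tapply(rain[1:(7 * nw)], week, function(x) any(x == 1))   # rainy-week indicator
TAB  = table(previous = head(wet, -1), current = tail(wet, -1))
TAB                                            # week-to-week transition counts
prop.table(TAB, 1)                             # week-to-week transition probabilities
chisq.test(TAB)                                # chi-square test of independence
# (strictly, transitions across two different summers should be discarded)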

  • The moral?

Somewhat paradoxically, Brittany is said to have changeable weather and, to echo the title of the previous post, indeed, in Brittany the weather can be nice several times a day. But in the long run, from one week to the next, the weather is on the contrary strongly correlated, unlike in the other regions. In Paris, Marseille or Strasbourg, whether the previous week was nice or rainy gives no information about the probability of having rain the week you come on holiday… But not in Brittany: clearly, there are rotten summers, when it can rain every single week, and gorgeous summers when it never rains…

In Brittany, the weather is nice several times a day

well, and occasionally it can rain a little… So, to plan one’s holidays, it may be interesting to compute the probability of having rain.

The computations were done on rainfall data for Rennes, available online on the eca&d website, here (free, good-quality data).

  • Probability of rain during the summer holidays

Consider here the series of daily precipitation levels, in units of 0.1 mm,

and the probability of having rain during the day (logistic regression),

In short, it can rain in Rennes in the summer. But that does not really help to plan one’s holidays: for all we know, there may be summers without any rain, and summers where it rains all the time.

  • Dynamics and transition matrices (per day and per week)

In other words, instead of looking at the marginal distributions (such as the probability of having rain during the day), we can look at the dynamics of the series, modeled as a Markov chain. If we look day by day, over the months of July and August of the 30 years between 1980 and 2009, we get the following counts

day n \ day n+1     nice weather    rain    total
nice weather                 871     298     1169
rain                         292     335      627

which gives the following transition probabilities,

day n \ day n+1     nice weather    rain
nice weather              74.51%   25.49%
rain                      46.57%   53.43%

in other words, if the weather is nice today, there are 3 chances out of 4 that it will also be nice tomorrow.
If we look week by week, where the focus is on weeks without any rain, we get

week n \ week n+1   nice weather    rain    total
nice weather                  13      26       39
rain                          23     140      163

with, here again, the following transition probabilities,

week n \ week n+1   nice weather    rain
nice weather              33.33%   66.67%
rain                      14.11%   85.89%

If we look week by week, where the focus is now on weeks with six days without rain (we allow one rainy day), we get

week n \ week n+1   nice weather    rain    total
nice weather                  36      36       72
rain                          30     100      130

which gives the following transition probabilities,

week n \ week n+1   nice weather    rain
nice weather              50.00%   50.00%
rain                      23.08%   76.92%

Now, if we consider that 2 mm of rain in a day is just a bit of moisture in the air, the transition matrices are noticeably different; in the case where we allow one day in the week with a bit of moisture and still call it a nice week, we get

week n \ week n+1   nice weather    rain    total
nice weather                 106      36      142
rain                          31      29       60

which gives the following transition probabilities,

week n \ week n+1   nice weather    rain
nice weather              74.65%   25.35%
rain                      51.67%   48.33%

In other words, we get back a matrix close to the one we had on the daily data: if the weather was nice the week before you come to Brittany, there are 3 chances out of 4 of having nice weather. On the other hand, if the weather was bad, there is a one-in-two chance of having nice weather the following week.
If our definition of rain is even more lax (there must have been a downpour at some point during the week, namely more than 2 cm of water in a single day), then this time we get

week n \ week n+1   nice weather    rain    total
nice weather                 171      12      183
rain                          14       5       19

which gives the following transition probabilities,

week n \ week n+1   nice weather    rain
nice weather              93.44%    6.56%
rain                      73.68%   26.32%

In other words, if there was at least one really bad day last week, there is a 1 in 4 chance of having one again the following week. On the other hand, if the weather was nice the whole week, there is more than a 93% chance of having nice weather the following week. The moral: whatever definition we use for nice weather, the weather in Rennes is not independent from one week to the next: there clearly are rotten summers, when it can rain every week, and gorgeous summers when it never rains…