Are parallel computations worth it?

Yesterday, Daniel Marcelino published an interesting post on his blog, entitled Parallel Processing: When does it worth? I was asking myself the same question for a chapter I am currently writing, and I liked his approach, so I tried to do the same on my computer. I used three packages to run parallel R code,

> library(multicore)
> library(snow)
> library(snowfall)

and one to quantify the time needed to run the code,

> library(microbenchmark)

I ran the code on my Mac, at the office,

> all=detectCores(all.tests=TRUE)
> all
[1] 4

which is a standard computer, with four cores. To run some code, I had to generate datasets. Here, I consider a data frame with $n$ rows and 100 columns. I generate values using a Gaussian distribution,

> gen=function(n) data.frame(matrix(rnorm(n*100),n,100))

The goal, here, will be to compute quantiles (or to be more specific, quartiles) per column, and to replicate that 100 times. Here, the standard technique is to use lapply. But (at least) two parallel versions of that function can be found. So, let us use them,

> base=gen(n=100)
> microbenchmark(
+ mlapp=data.frame(lapply(base, quantile, probs = 1:3/4 )),
+ mclapp=data.frame(mclapply(base, quantile, probs = 1:3/4 , mc.cores = all)),
+ sflapp=data.frame(sfLapply(base, quantile, probs = 1:3/4 )),
+ times=100) -> m

For instance, with 100 rows, we have

> m
Unit: milliseconds
    expr      min       lq   median       uq       max
1 mclapp 50.19290 55.90364 57.99185 64.10619 266.88692
2  mlapp 26.94146 29.49396 31.20571 49.54824  75.60251
3 sflapp 27.54857 30.10224 31.41864 47.10688  59.28925

And with 500,000 rows, we have

> m
Unit: seconds
    expr       min         lq     median        uq      max
1 mclapp 42.999504 103.873919 161.989876 258.66887 660.2953
2  mlapp  3.720542   3.770319   4.070116  11.90181 166.9461
3 sflapp  3.587703   3.770399   4.027876  10.62654 181.0093

So yes, using parallel code could be very interesting! Especially with very large datasets (I could not run it with 1 million rows). If we consider a loop, to see the evolution of the median time for each of those three functions, we can plot the time it took as a function of the number of rows.
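The matrix db used in the plotting code below is built beforehand; a minimal sketch of how it can be constructed (three columns per value of the grid vk, the rows being the 25%, 50% and 75% quartiles of the computation time, the functions appearing in the order given by summary()) would be

> vk=seq(1,6,by=.2)
> db=NULL
> for(k in vk){
+   base=gen(n=round(10^k))
+   m=microbenchmark(
+     mlapp=data.frame(lapply(base, quantile, probs = 1:3/4 )),
+     mclapp=data.frame(mclapply(base, quantile, probs = 1:3/4 , mc.cores = all)),
+     sflapp=data.frame(sfLapply(base, quantile, probs = 1:3/4 )),
+     times=20)
+   s=summary(m)                          # one row per expression
+   db=cbind(db, rbind(s$lq, s$median, s$uq))
+ }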

> i=1;vk=seq(1,6,by=.2)
> col=seq(i,3*length(vk),by=3)
> plot(10^vk,db[2,col],ylim=range(db),col="white",log="x",
+     xlab="Number of rows",ylab="Time")
> polygon(c(10^vk,rev(10^vk)),c(db[1,col],rev(db[3,col])),col="light blue",border=NA)
> lines(10^vk,db[2,col],col="blue",lwd=2)

Here, we have the following, with the standard lapply on the left (the line is the median time, with the 25% and 75% quartiles), the multicore function in the middle, and the snowfall function on the right,

If we zoom in, for small datasets (less than 10,000 rows and 100 columns), we do observe a gain, since the code ran two times faster.

So, clearly, it might be interesting to write code that distributes computations over several cores. But here, I used a simple function (computing quantiles on the columns of a dataset). I should try with a more complex function…

On the other hand, I should mention that, usually, while I have one (or two) codes running, I can do something else: look for recent papers for ongoing research projects, answer emails that I should have answered a few weeks ago, check for typos in the book and update the tex file, type parts of future blog posts, etc. The problem I got yesterday afternoon, when I ran the code, was that suddenly, all the cores on my computer were dedicated to that R code. I could not even finish an email I had started before running the code… So finally I left earlier, decided to pick up the kids after school, and went to the park, to enjoy the sunny day we had! So I have to admit that running parallel codes can have advantages you would not think of!

Generating a Markov chain vs. computing the transition matrix

A couple of days ago, we had a quick chat on Karl Broman‘s blog, about snakes and ladders (see http://kbroman.wordpress.com/…) with Karl and Corey (see http://bayesianbiologist.com/….), and the use of Markov chains. I do believe that this application is truly awesome: the example is understandable by anyone, and computations (almost any kind, from what we’ve tried) are easy to perform. At the same time, some French students asked me for specific details regarding some old lecture notes on Markov chains, and on an introductory example I used as a possible motivation: the stepping stone algorithm. In the notes, I just mentioned the idea of this popular generic algorithm (introduced in Sawyer (1976)) and I used simulations to show – visually – how it works. Again, it was just to motivate the course, which actually focused on the theory of Markov chains. But those students wanted more, like how did I get the transition matrix, for instance. And that is actually not a simple question, from a computational perspective. I mean, I can easily generate this Markov chain, but writing down the transition matrix explicitly, that was another story. Which took me a bit longer. In a very specific case…

But let us get back to the roots, and to the stepping stone algorithm. At least, one of them (the one I used in my notes), because it looks like there are several algorithms with that name. We consider a grid, say $n\times n$, with some colors inside, say $k$ possible colors. Each cell of the grid has a given color. Then, at some stage, we select randomly one cell in the grid, and it will take the color of one of its neighbors (some kind of absorption, or mutation). This is, more or less, what is also detailed in some lecture notes by James Propp (see also Sato (1983) or Zähle et al. (2005) for more theoretical details about that Markov chain). This is extremely simple to generate (that’s what I did in my notes, with very big grids, and a lot of colors). But what if we want to write down the transition matrix?

First of all, we need to define the state space. Basically, we have $n^2$ cells, each of them with one color chosen among $k$. Which gives us $k^{n^2}$ possible states… And that can be large. I mean, if we consider the smallest grid that might be interesting, say $3\times 3$, and only $k=2$ colors, then we talk about $2^9=512$ possible states. That is large, not huge. But we should keep in mind that we have to compute a transition matrix, that would be a matrix with $512^2=262{,}144$ elements. More generally, we talk about writing down matrices with $k^{2n^2}$ elements. If we want black and white $4\times 4$ grids, that would mean a matrix with $(2^{16})^2$ elements, which means more than 4 billion elements! And if we consider a red-green-blue $3\times 3$ grid, we have to write down a matrix with $(3^9)^2$ elements, i.e. almost 400 million. So, let’s face it: we can only work with $3\times 3$ bi-color grids.
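Just to visualize those orders of magnitude, the counts above are easy to check,

> k=2; n=3
> k^(n^2)           # number of states for a bi-color 3x3 grid
[1] 512
> (k^(n^2))^2       # number of elements of the transition matrix
[1] 262144
> (2^(4^2))^2       # bi-color 4x4 grid
[1] 4294967296
> (3^(3^2))^2       # red-green-blue 3x3 grid
[1] 387420489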

So let’s try… The good thing is that it can be related to work I’ve been doing recently on binomial recombining trees (binomial being related to bi-color). First of all, our grid will be described as follows

> h=3
> M=matrix(1:(h^2),h,h)
> M
     [,1] [,2] [,3]
[1,]    1    4    7
[2,]    2    5    8
[3,]    3    6    9

with two colors

> color=c("red","blue")

Then, we should look for neighbors, or derive a neighborhood matrix,

> d=function(i,j) dist(rbind(c((i-1)%/%h,(i-1)%%h),
+                            c((j-1)%/%h,(j-1)%%h)))
> Neighb=matrix(Vectorize(d)(rep(1:(h^2),each=h^2),
+                            rep(1:(h^2),h^2)),h^2,h^2)
> trunc(Neighb*100)/100
      [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
 [1,] 0.00 1.00 2.00 1.00 1.41 2.23 2.00 2.23 2.82
 [2,] 1.00 0.00 1.00 1.41 1.00 1.41 2.23 2.00 2.23
 [3,] 2.00 1.00 0.00 2.23 1.41 1.00 2.82 2.23 2.00
 [4,] 1.00 1.41 2.23 0.00 1.00 2.00 1.00 1.41 2.23
 [5,] 1.41 1.00 1.41 1.00 0.00 1.00 1.41 1.00 1.41
 [6,] 2.23 1.41 1.00 2.00 1.00 0.00 2.23 1.41 1.00
 [7,] 2.00 2.23 2.82 1.00 1.41 2.23 0.00 1.00 2.00
 [8,] 2.23 2.00 2.23 1.41 1.00 1.41 1.00 0.00 1.00
 [9,] 2.82 2.23 2.00 2.23 1.41 1.00 2.00 1.00 0.00
> Neighb=(Neighb<2)&(Neighb>0)
> Neighb
       [,1]  [,2]  [,3]  [,4]  [,5]  [,6]  [,7]  [,8]  [,9]
 [1,] FALSE  TRUE FALSE  TRUE  TRUE FALSE FALSE FALSE FALSE
 [2,]  TRUE FALSE  TRUE  TRUE  TRUE  TRUE FALSE FALSE FALSE
 [3,] FALSE  TRUE FALSE FALSE  TRUE  TRUE FALSE FALSE FALSE
 [4,]  TRUE  TRUE FALSE FALSE  TRUE FALSE  TRUE  TRUE FALSE
 [5,]  TRUE  TRUE  TRUE  TRUE FALSE  TRUE  TRUE  TRUE  TRUE
 [6,] FALSE  TRUE  TRUE FALSE  TRUE FALSE FALSE  TRUE  TRUE
 [7,] FALSE FALSE FALSE  TRUE  TRUE FALSE FALSE  TRUE FALSE
 [8,] FALSE FALSE FALSE  TRUE  TRUE  TRUE  TRUE FALSE  TRUE
 [9,] FALSE FALSE FALSE FALSE  TRUE  TRUE FALSE  TRUE FALSE

Now, let us write down our 512 possible states.

> n=h^2
> states=function(x){
+   Base.b=rep(0,n)
+   ndigits=(floor(logb(x,base=length(color)))+1)
+   for(i in 1:ndigits){
+     Base.b[n-i+1]=(x%%length(color))
+     x=(x %/% length(color))}
+   return(Base.b)}
> M=Vectorize(states)(1:(length(color)^n-1))
> liststates=data.frame(rbind(rep(0,h^2),t(M)))
> head(liststates)
  X1 X2 X3 X4 X5 X6 X7 X8 X9
1  0  0  0  0  0  0  0  0  0
2  0  0  0  0  0  0  0  0  1
3  0  0  0  0  0  0  0  1  0
4  0  0  0  0  0  0  0  1  1
5  0  0  0  0  0  0  1  0  0
6  0  0  0  0  0  0  1  0  1

(for the first six, with 0/1 digits instead of colors). For instance, if we look at a specific one, it is possible to plot the grid, using

> plotsteps=function(u){
+   plot(0:h,0:h,col="white",xlab="",ylab="",axes=FALSE)
+   for(i in 0:(h^2-1)){
+   x=i%/%h
+   y=i%%h
+   polygon(x+c(1,.1,.1,1),y+c(1,1,.1,.1),
+   col=color[as.numeric(u)[i+1] + 1])
+   text(x+.45,y+.45,i)
+   }}

Here,

> plotsteps(liststates[100,])

Then, given one state, let us see what could happen next,

  • let us compute all connected states: all states we can end up in if we change one cell,
  • we have to check, for each connected state, which cell did change,
  • we should compute the probabilities to reach those 9 states, based on the fact that each cell is chosen with the same probability, and that the probability to change the color depends on the colors of the surrounding cells,
  • if some states cannot be reached (if a cell is surrounded by cells of the same color, it cannot change its color), then we should remove them from the list of reachable (possible) states.

The code will be something like the following

> listneighbour=function(i){
+   start=liststates[i,]
+   difference2only=function(j) {
+     w=which(liststates[j,]!=liststates[i,])
+     return((length(w)==1))}
+   possible=which( Vectorize(difference2only)(1:nrow(liststates))==TRUE )
+   P=function(j){   
+     L=liststates[i,which(Neighb[which(liststates[j,]!=liststates[i,]),]==TRUE)]
+     T=table(as.numeric(L))
+     T=T[as.character(0:(length(color)-1))]
+     T[is.na(T)]=0
+     return(as.numeric(T)/sum(T))
+   }
+   probability=Vectorize(P)(possible)
+   W=NULL
+   for(j in possible) W=c(W,which(liststates[j,]!=liststates[i,]))
+   I=1-liststates[i,W]+1
+   vp=diag(probability[as.numeric(I),])
+   vproba=0*vp
+   if(sum(vp)!=0) vproba=vp/sum(vp)
+   return(list(
+     color=liststates[i,W],
+     absorb=W,
+     possible=possible,
+     probability=probability,
+     prob=vproba))
+ }

For instance, if we start from state 100 (here, on the right)

> listneighbour(100)
$color
    X3 X4 X8 X9 X7 X6 X5 X2 X1
100  1  1  1  1  0  0  0  0  0

$absorb
[1] 3 4 8 9 7 6 5 2 1

$possible
[1]  36  68  98  99 104 108 116 228 356

$probability
     [,1] [,2] [,3]   [,4]   [,5] [,6] [,7] [,8]   [,9]
[1,]    1  0.8  0.6 0.6667 0.3333  0.4  0.5  0.6 0.6667
[2,]    0  0.2  0.4 0.3333 0.6667  0.6  0.5  0.4 0.3333

$prob
[1] 0.17964072 0.14371257 0.10778443 0.11976048 0.11976048
[6] 0.10778443 0.08982036 0.07185629 0.05988024

Let us look more specifically at the 99th state (which appears above as a state that can be reached from the 100th),

> liststates[99,]
   X1 X2 X3 X4 X5 X6 X7 X8 X9
99  0  0  1  1  0  0  0  1  0

If we plot it (here on the right, again), we get

> plotsteps(liststates[99,])

Clearly, here, the cell in the upper corner (number 9) changed from blue to red. Now, about the probability… The probability to select cell 9 is 1/9, and given that cell 9 is chosen, the probability to go from blue to red is 2/3 (the cell is surrounded by 2 red cells, and 1 blue cell). The probability to remain blue is then 1/3. Those are the probabilities computed by our function (the table with two rows, one per color). In order to get a better understanding of the meaning of the last line (with some sort of probabilities), let us look at the following (simpler) example.

> liststates[2,]
  X1 X2 X3 X4 X5 X6 X7 X8 X9
2  0  0  0  0  0  0  0  0  1

that can be visualized on the right. Here,

> listneighbour(2)
$color
  X9 X8 X7 X6 X5 X4 X3 X2 X1
2  1  0  0  0  0  0  0  0  0

$absorb
[1] 9 8 7 6 5 4 3 2 1

$possible
[1]   1   4   6  10  18  34  66 130 258

$probability
     [,1] [,2] [,3] [,4]  [,5] [,6] [,7] [,8] [,9]
[1,]    1  0.8    1  0.8 0.875    1    1    1    1
[2,]    0  0.2    0  0.2 0.125    0    0    0    0

$prob
[1] 0.65573770 0.13114754 0.00000000 0.13114754 0.08196721 
[6] 0.00000000 0.00000000 0.00000000 0.00000000

Things are pretty simple here

  • if we choose one of the cells $\{1,2,3,4,7\}$, then nothing changes, since all the neighbors have the same color. So if we want to focus on changes (or, say, run the algorithm until the first color change), then choosing those cells is a waste of time,
  • if we choose one of the cells $\{5,6,8\}$, then it could be possible to change the color. And actually, $\{5\}$ is different from $\{6,8\}$ (since it has more neighbors),
  • if we choose cell $\{9\}$, then the color will definitely change, since all its neighbors have the other color here.

Now, the probability to select cell $k$, given that there was a color change, would be the following: if $k$ is in $\{9\}$,

$$\mathbb{P}(k)\propto \frac{3}{3}=1$$

while if $k$ is in $\{6,8\}$, then 4 out of its 5 neighbors are red (only one is blue), so

$$\mathbb{P}(k)\propto \frac{1}{5}$$

and if $k$ is in $\{5\}$, then only one neighbor out of 8 has a different color, so

$$\mathbb{P}(k)\propto \frac{1}{8}$$

And for the other cells, $\mathbb{P}(k)\propto 0$. So – since we assume that cells are drawn independently, and with the same probability – it comes that, if $k$ is in $\{9\}$,

$$\mathbb{P}(k)= \frac{1 \cdot \frac{1}{9}}{\left(1+2\times \frac{1}{5}+ \frac{1}{8}+5\times 0\right)\cdot \frac{1}{9}}=\frac{40}{61}$$

while if $k$ is in $\{6,8\}$,

$$\mathbb{P}(k)= \frac{\frac{1}{5} \cdot \frac{1}{9}}{\left(1+2\times \frac{1}{5}+ \frac{1}{8}+5\times 0\right)\cdot \frac{1}{9}}=\frac{8}{61}$$

and if $k$ is in $\{5\}$,

$$\mathbb{P}(k)= \frac{\frac{1}{8} \cdot \frac{1}{9}}{\left(1+2\times \frac{1}{5}+ \frac{1}{8}+5\times 0\right)\cdot \frac{1}{9}}=\frac{5}{61}$$

Which are exactly the probabilities computed above. The point is that we compute probabilities given that a color change will actually occur. The good point is that this should speed up the convergence towards the limiting distribution. If any.
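As a quick sanity check, those fractions, listed in the order of the $absorb vector above (cells 9, 8, …, 1), do match the $prob vector returned by listneighbour(2),

> c(40, 8, 0, 8, 5, 0, 0, 0, 0)/61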

What about our transition matrix? Well, using a simple loop, we should get it easily

> M=matrix(0,nrow(liststates),nrow(liststates))
+ for(i in 1:nrow(liststates)){
+ L=listneighbour(i)
+ if(sum(L$prob)!=0){
+ j=L$possible
+ M[i,j]=L$prob
+ }
+ if(sum(L$prob)==0){
+ j=i
+ M[i,j]=1
+ }
+ }

One can check that this matrix satisfies some properties of transition matrices. For instance, the sum per row is one,

> sum(apply(M,1,sum)!=1)
[1]  0

Remember that this matrix is big, so I will not print it here. But trust me, it works (it might take a while on an old laptop, but anyone can do it). Now, if we want to visualize some paths of that chain, we can use the following algorithm. First, we need a starting point, which can be chosen randomly,

> j=sample(1:nrow(liststates),size=1)

or using a given colored grid, say

> j=100

Then we plot it,

> plotsteps(liststates[j,])

Now, the code within the loop is the following,

> d=rep(0,nrow(liststates))
> d[j]=1
> d=d%*%M
> j=sample(1:nrow(M),size=1,prob=d)
> plotsteps(liststates[j,])
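To make the iteration explicit, one can simply wrap that step in a loop, and stop once an absorbing state (all cells with the same color) is reached; something like the following sketch should do,

> j=100
> for(s in 1:200){
+   d=rep(0,nrow(liststates))
+   d[j]=1
+   d=d%*%M                               # distribution of the next state
+   j=sample(1:nrow(M),size=1,prob=d)     # draw the next state
+   plotsteps(liststates[j,])
+   if(sum(liststates[j,])%in%c(0,h^2)) break   # all red, or all blue
+ }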

Here are some examples. And indeed, we end up either with all cells in blue, or all cells in red.

Now, do we have to compute that transition matrix to produce those graphs (and to generate that Markov chain)? No. Of course not… At each step, I use a Dirac measure, and use the transition matrix just to get the probabilities used to generate the next state. Actually, one can write a faster and more intuitive code to generate the same chain… But I should probably keep that for another post…

Playing cards in Vegas?

In a previous post, a few weeks ago, I mentioned that I would be in Las Vegas at the end of July. And I took the opportunity to write a post on roulette(s). Since some colleagues told me I should take some time to play poker there, I guess I have to understand how to play poker… so I went back to basics on cards, and shuffling techniques.

Now, I have to confess that I was surprised, while looking for mathematical models of shuffling, to find so many deterministic techniques (and results related to algebra, and cycles).

On http://mathworld.wolfram.com/ for instance, one can find nice articles on the so-called in-shuffle or out-shuffle techniques. There is also a great article, Golomb (1961), mainly on algebraic properties of permutations obtained by cutting and shuffling, as well as Diaconis, Kantor and Graham’s The Mathematics of Perfect Shuffles. And if you look at Monge’s shuffle, you can find a deterministic recursive relationship. As a statistician (or applied probabilist), I should confess that I did not find an answer to the question I wanted to ask: how long should we shuffle before getting cards randomly sorted in our hands?

  • Randomness (from a statistician’s perspective)

First, I need to define (as properly as possible) a notion of “cards randomly sorted”. Consider a game with 32 cards. Why 32? Mathematicians will tell you that 32 is a great number, since it is a power of 2, so there might be interesting (algebraic) properties when shuffling. From a computational point of view, 32 is smaller than 52, so my random generations will run faster. This is basically why I used 32. 10 would have been better, but not realistic with cards.

So, our 32 cards can be seen as a vector, or a list, of 32 items, say $(X_1,X_2,\dots,X_{32})$.

In order to assess whether my cards are randomly sorted, let us get back to number properties (real valued numbers). If there were 10 cards, the list could be seen as an element of the set $\{0,1,\dots,9\}^{10}$

(or to be more specific, a subset of that set, since the numbers have to be different – it has to be a permutation – we cannot have duplicates; we’ll get back to that point in a few seconds). Let us see this list as a decimal number, with 10 digits. More precisely, $x=\sum_{i=1}^{10} X_i\,10^{-i}$.

Now, it is natural to say that cards are randomly sorted if this number is uniformly distributed on the unit interval, isn’t it? (if we use the same shuffle many times, with the same starting point)

Well, if we think about it twice, uniform on the unit interval is probably not the proper distribution, since (as mentioned above) all digits have to be different. For instance, the smallest number would be 0.0123456789 and the largest 0.9876543210. But as we will see, this uniform assumption might not be too strong, actually.
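Those two bounds are easy to check,

> print(sum((0:9)*10^-(1:10)), digits=10)   # cards in increasing order, 0.0123456789
> print(sum((9:0)*10^-(1:10)), digits=10)   # cards in decreasing order, 0.9876543210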

And if we want to get back to our initial problem, with 32 cards, we simply use the decomposition in base 32, i.e. $x=\sum_{i=1}^{32} X_i\,32^{-i}$.

So if we have an algorithm to shuffle cards, we just have to run it several times (with the same starting value) and see when $x$ starts to be uniformly distributed. We start with a Dirac distribution, we have some kind of transition matrix, we expect our limiting distribution to be uniform, and we wonder when the limiting distribution is reached… And from a statistical point of view, that should not be too difficult to assess, since we do have several goodness of fit tests that can be used.

Actually, it is possible to check whether this criterion passes the test of a uniform distribution when the digits are randomly generated (without replacement). The code to generate such a number is

> j = 32
> X3 = (0:(j-1))[sample(1:j)] 
> x3 = sum(j^(-(1:j))*X3)

If we run it a few times, and check if the assumption of a uniform distribution is valid (on samples with, say, 500 observations),

> P3=NULL
> for(i in 1:10000){
+   U3=NULL
+   for(s in 1:500){
+     X3 =(0:(j-1))[sample(1:j)] 
+     x3 =sum(j^(-(1:j))*X3)
+     U3 =c(U3,x3)}
+   P3 =c(P3,ks.test(U3,punif)$p.value)
+ }

in 95% of the scenarios, the p-value exceeds 5%

> mean(P3>.05)
[1] 0.9529

(which is something we should expect under the null). More precisely, we can check that the p-value is uniformly distributed on the unit interval.

> hist(P3,freq=FALSE)

So assuming that our number is uniform on the unit interval might be a good notion of “cards are randomly sorted”.

What we need now are some shuffling algorithms. Or to be more specific, some feasible shuffling algorithms. I mean that I am just starting to play with cards, so they should be techniques that I am able to perform, to understand how they work… So you will have to wait a few weeks before I start talking about the riffle or dovetail shuffle (you know, the kind of shuffle in which half of the deck is held in each hand, and then cards are released by the thumbs so that they fall to the table interleaved… like in the movies)!

  • Top in at random shuffle, and related (simple) algorithm

My first algorithm is simple: the top-in at random shuffle. We start with the following ordering

    N=1:m

There are m cards, and n denotes the place where the card on top will go.

    n=sample(2:m,size=1)
    if(n<m)  N=c(N[2:n],N[1],N[(n+1):m])  
    if(n==m) N=c(N[2:n],N[1])

Then, we repeat that transfer of the card on top several times.

schuffle1=function(m,ns=10){
  N=1:m
  for(i in 1:ns)
    {
    n=sample(2:m,size=1)
    if(n<m)  N=c(N[2:n],N[1],N[(n+1):m])  
    if(n==m) N=c(N[2:n],N[1])
    }
return(N)}

Now, it is also possible to consider a bottom-in at random shuffle. The idea is the same, the only difference is that you start from the card at the bottom of the deck. But that would behave the same as the one before (in terms of time before reaching randomness)

    n=sample(1:(m-1),size=1)
    if(n>1)  N=c(N[1:(n-1)],N[m],N[n:(m-1)])  
    if(n==1) N=c(N[m],N[1:(m-1)])

Why not mix both? Randomly. Call it the randomly mixed top-bottom in at random shuffle. You start either with the card on top, or the card at the bottom of the deck (with identical probability), and then move that card somewhere,

     card=sample(c("top","bottom"),size=1)
     if(card=="top"){
       n=sample(2:m,size=1)
       if(n<m)  N=c(N[2:n],N[1],N[(n+1):m])  
       if(n==m) N=c(N[2:n],N[1])}
     if(card=="bottom"){
       n=sample(1:(m-1),size=1)
       if(n>1)  N=c(N[1:(n-1)],N[m],N[n:(m-1)])  
       if(n==1) N=c(N[m],N[1:(m-1)])}

All those codes can be put together (within the same function),

schuffle1=function(m,ns=10,which="top"){
  N=1:m
if(which=="top"){
  for(i in 1:ns)
    {
    n=sample(2:m,size=1)
    if(n<m)  N=c(N[2:n],N[1],N[(n+1):m])  
    if(n==m) N=c(N[2:n],N[1])
    }}
if(which=="bottom"){
  for(i in 1:ns)
    {
    n=sample(1:(m-1),size=1)
    if(n>1)  N=c(N[1:(n-1)],N[m],N[n:(m-1)])  
    if(n==1) N=c(N[m],N[1:(m-1)])
    }}
  if(which=="mixed"){
    for(i in 1:ns)
    {card=sample(c("top","bottom"),size=1)
     if(card=="top"){
       n=sample(2:m,size=1)
       if(n<m)  N=c(N[2:n],N[1],N[(n+1):m])  
       if(n==m) N=c(N[2:n],N[1])
       }
     if(card=="bottom"){
       n=sample(1:(m-1),size=1)
       if(n>1)  N=c(N[1:(n-1)],N[m],N[n:(m-1)])  
       if(n==1) N=c(N[m],N[1:(m-1)])
       }
    }}  
  return(N)}

But why do we move only one card? It would not be more complex to take 2. Or 3. Or more.

  • Tops in at random shuffle, and related (mixed) algorithm

Yes, I used tops to say that we now take several cards from the top of the deck. Say a random number of cards. And then, the strategy is the same, so the previous code is (slightly) adapted, as follows

     k=sample(1:(m-1),size=1)
     n=sample((k+1):m,size=1); if(k==m-1) n=m
     if(n<m)  N=c(N[(k+1):n],N[1:k],N[(n+1):m])  
     if(n==m) N=c(N[(k+1):n],N[1:k])

The idea is the same as before, except that a whole block of cards from the top is moved at once.

As earlier, it is possible to take cards from the bottom of the deck, or, once again, to use a mixed strategy. The codes would be

     card=sample(c("top","bottom"),size=1)
     if(card=="top"){
		 k=sample(1:(m-1),size=1)
		 n=sample((k+1):m,size=1); if(k==m-1) n=m
		 if(n<m)  N=c(N[(k+1):n],N[1:k],N[(n+1):m])  
		 if(n==m) N=c(N[(k+1):n],N[1:k])}
     if(card=="bottom"){
		 k=sample(2:m,size=1)
		 n=sample(1:(k-1),size=1); if(k==1) n=1
		 if(n>1)  N=c(N[1:(n-1)],N[k:m],N[n:(k-1)])  
		 if(n==1) N=c(N[k:m],N[n:(k-1)])}

Again, it is possible to have all those codes in the same function,

schuffle2=function(m,ns=10,which="top"){
  N=1:m
  if(which=="top"){
    for(i in 1:ns)
    {
      k=sample(1:(m-1),size=1)
      n=sample((k+1):m,size=1); if(k==m-1) n=m
      if(n<m)  N=c(N[(k+1):n],N[1:k],N[(n+1):m])  
      if(n==m) N=c(N[(k+1):n],N[1:k])
    }}
  if(which=="bottom"){
    for(i in 1:ns)
    {
      k=sample(2:m,size=1)
      n=sample(1:(k-1),size=1); if(k==1) n=1
      if(n>1)  N=c(N[1:(n-1)],N[k:m],N[n:(k-1)])  
      if(n==1) N=c(N[k:m],N[n:(k-1)])
    }}
  if(which=="mixed"){
    for(i in 1:ns)
    {card=sample(c("top","bottom"),size=1)
     if(card=="top"){
		 k=sample(1:(m-1),size=1)
		 n=sample((k+1):m,size=1); if(k==m-1) n=m
		 if(n<m)  N=c(N[(k+1):n],N[1:k],N[(n+1):m])  
		 if(n==m) N=c(N[(k+1):n],N[1:k])
     }
     if(card=="bottom"){
		 k=sample(2:m,size=1)
		 n=sample(1:(k-1),size=1); if(k==1) n=1
		 if(n>1)  N=c(N[1:(n-1)],N[k:m],N[n:(k-1)])  
		 if(n==1) N=c(N[k:m],N[n:(k-1)])
     }
    }}  
  return(N)}

  • How long should we shuffle before having cards randomly sorted?

With the codes mentioned above, it is possible to generate samples of shuffled decks,

distu=function(k=100,j=32){
	U1B=U1T=U1M=U2B=U2T=U2M=U3=NULL
	for(s in 1:100){
		X1T=(0:(j-1))[schuffle1(j,k,"top")] 
		X1B=(0:(j-1))[schuffle1(j,k,"bottom")] 
		X1M=(0:(j-1))[schuffle1(j,k,"mixed")] 
		X2T=(0:(j-1))[schuffle2(j,k,"top")] 
		X2B=(0:(j-1))[schuffle2(j,k,"bottom")] 
		X2M=(0:(j-1))[schuffle2(j,k,"mixed")]
		X3 =(0:(j-1))[sample(1:j)] 

		x1T=sum(j^(-(1:j))*X1T)
		x1B=sum(j^(-(1:j))*X1B)
		x1M=sum(j^(-(1:j))*X1M)
		x2T=sum(j^(-(1:j))*X2T)
		x2B=sum(j^(-(1:j))*X2B)
		x2M=sum(j^(-(1:j))*X2M)
		x3 =sum(j^(-(1:j))*X3)

		U1T=c(U1T,x1T)
		U1B=c(U1B,x1B)
		U1M=c(U1M,x1M)
		U2T=c(U2T,x2T)
		U2B=c(U2B,x2B)
		U2M=c(U2M,x2M)
		U3 =c(U3,x3)
    }
	B=list(U1T=U1T,U1B=U1B,U1M=U1M,U2T=U2T,U2B=U2B,U2M=U2M,U3=U3)
	return(B)
}

and then, we run tests to see if the samples can be assumed to be uniformly distributed on the unit interval, e.g. for the very first kind of shuffle described above, it would be

ks.test(B$U1T,punif)$p.value

More precisely, we use the following function, to estimate the proportion of scenarios where the p-value exceeds 5%,

PV=function(k){
	P1B=P1T=P1M=P2B=P2T=P2M=P3=NULL
	for(i in 1:10000){
        B=distu(k,j=32)
		P1T=c(P1T,ks.test(B$U1T,punif)$p.value)
		P1M=c(P1M,ks.test(B$U1M,punif)$p.value)
		P2T=c(P2T,ks.test(B$U2T,punif)$p.value)
		P2M=c(P2M,ks.test(B$U2M,punif)$p.value)
		P3 =c(P3,ks.test(B$U3,punif)$p.value)}
	return(list(
		p1T=mean(P1T>.05),
		p1M=mean(P1M>.05),
		p2T=mean(P2T>.05),
		p2M=mean(P2M>.05),				
		p3=mean(P3>.05)))}

If we plot the results, we have

K=1:100
MP=Vectorize(PV)(K)
plot(K,MP[1,],col="red",type="b",ylim=0:1,pch=1)
lines(K,MP[2,],type="b",pch=19,col="red")
lines(K,MP[3,],col="blue",type="b",pch=1)
lines(K,MP[4,],type="b",pch=19,col="blue")
lines(K,MP[5,],type="b",pch=3,col="black")

Here, we look at the proportion of p-values that exceed 5%. We can pretend that we have a uniform distribution if that proportion is close to 95%. So basically, we just have to see when we reach, for the first time, the 95% region. If we zoom in on the upper part of the graph, we get

With 32 cards,

  • with a top in at random shuffle, we have to repeat the operation about 70 or 80 times before having a randomly sorted deck of cards. Which is a lot, but quite intuitive: one can imagine that it might take a while before the cards at the bottom move much higher in the pack,
  • with a randomly mixed top-bottom in at random strategy, it is slightly faster (we no longer have that problem of cards at the bottom staying at the bottom), since it takes about 60 or 70 rounds,
  • with a tops in at random shuffle, it goes faster again, with about 35 rounds,
  • with a randomly mixed tops-bottoms in at random shuffle, it takes about 10 to 15 rounds.

Those results were obtained from tests on samples of size 100. The same code ran on a server over the week-end, with samples of size 500, and the output is rather close,

Note that those algorithms were mentioned because they are feasible, not only from a computational point of view, but also when playing with a real deck of cards. Like with kids. I can actually ask my kids to perform those shuffling techniques next time we play with cards. The good thing about the randomly mixed tops-bottoms in at random shuffle technique is that kids can do it 10 times, and the cards should be randomly ordered in the deck…

Now, for those willing to see more algorithms, there is the so-called Fisher-Yates (or Knuth) shuffle. But may I keep that for another post?

From a random generator to a sample function

This week-end, I wrote a post since I had some trouble generating a random sample with R, to reproduce one obtained by a co-author with SAS (generated using the Fishman and Moore (1982) algorithm used in the function RANUNI). I was lucky since another contributor to that book, Christophe Dutang, got the answer to the last question I asked: is it possible to reproduce the random generator? Yes, we can. And it is quite simple, if you use the appropriate library and the appropriate function,

> library(randtoolbox)
> a <- 397204094
> b <- 0
> m <- 2^(31)-1
> set.generator(name="congruRand", mod=m, mult=a, incr=b, seed=123) 
> runif(10)
 [1] 0.7503961 0.3209120 0.1783896 0.9060334 0.3571171
 [6] 0.2211140 0.7864383 0.3980819 0.1246652 0.1876858

If you check the previous post, this is exactly what SAS gave us (and what I could not reproduce by myself). But that was only one part of my problem, since the goal was actually to reproduce the indices of a training subsample, for credit scoring purposes.

I have to admit that I had never thought about it before: how should we write a sample function? If values can be replaced, that is fine, we just have to split the unit interval correctly. Like

> set.seed(95)
> (U=runif(10))
 [1] 0.15171155 0.57584087 0.05309844 0.07044485 0.48887914 0.15276707
 [7] 0.37405684 0.30006096 0.96997126 0.30373498
> set.seed(95)
> (R=sample(0:99,size=10,replace=TRUE))
 [1] 15 57  5  7 48 15 37 30 96 30

Here, we just truncate the values obtained from the random generator. And that is just fine. But how do we write a code to sample without replacement? I mean, how do we get this:

> (S=sample(0:99,size=10,replace=FALSE))
 [1] 15 57  5  6 46 14 35 27 89 92

My initial idea was to use the following technique. The first value is easy to get: we just split the unit interval into 100 subdivisions (as before, since for the first value, replacement or not, we don’t care) and see in which interval the random value falls. And we remove that value from our sample. Then, we split the unit interval into 99 subdivisions, and see in which interval the random value falls. Is it in the 10th? Fine, then our second value is the 10th of our remaining sample (the first value has been removed). Then we split the unit interval into 98 subdivisions, etc. The code I wrote to implement that algorithm was the following,

> mysample1=function(N,unif){
+  n=length(N)
+  size=length(unif)
+  V0=N[trunc(unif[1]*n)+1]
+  N=N[-which(N==V0)]
+  V=V0
+  for(i in 2:length(unif)){
+    V0=N[trunc(unif[i]*(n-i+1))+1]
+    N=N[-which(N==V0)]
+    V=c(V,V0)}
+ return(V)}

Unfortunately, I could not reproduce the sample obtained with the R function,

> mysample1(0:99,unif=U)
 [1] 15 58  5  7 49 17 39 31 97 32
> S
 [1] 15 57  5  6 46 14 35 27 89 92

Since Christophe is an expert on random generators, I asked him, one more time. And he came up with the following code,

> mysample2=function(N,unif){
+   integerset=1:length(N)
+   result=rep(NA,length(unif))
+   for(i in 1:length(unif)){
+     intchosen=integerset[ceiling(unif[i]*(length(N)-i+1))]
+     integerset[intchosen]=length(integerset)
+     integerset=integerset[-length(integerset)]
+     result[i]=intchosen}
+   return(N[result])}

which works just fine!

> mysample2(0:99,unif=U)
 [1] 15 57  5  6 46 14 35 27 89 92
> S
 [1] 15 57  5  6 46 14 35 27 89 92

So now, not only can we reproduce random numbers obtained with other software, we can also obtain the same sample indices, with or without replacement! Thanks Christophe!

[May, 15th] Note that this is not the generator used in SAS, actually. In order to reproduce the sample function of SAS, the algorithm is much simpler, but clearly not that efficient, since we generate a random sample of size 100 (if we have 100 observations), and then keep the values associated with the indices of the 10 smallest (if we want a sample of size 10). The code could be

> mysample3=function(N,unif,size){
+ q=sort(unif)[size]
+ return(N[unif<=q])}

> library(randtoolbox)
> a <- 397204094
> b <- 0
> m <- 2^(31)-1
> set.generator(name="congruRand", mod=m, mult=a, incr=b, seed=123) #OK

> U=runif(100)
> mysample3(1:100,U,size=10)
 [1] 27 37 47 59 60 71 75 82 84 87

Thanks Jean-Philippe for the idea (which works).

Back to the franc (back to the 70’s)

The other day, while looking at toy catalogues with the kids, it struck me that the prices of toys (in Canadian dollars) reminded me of the prices of toys (in francs) when I was a kid. I remembered a Christmas when my father asked me whether I really wanted the thing-I-have-since-forgotten because, after all, it was expensive, it cost “more than 100 Fr”. And this morning, during a discussion between tekool and tanx_xx (on Twitter), I saw the following message go by: “Between the 70s and today you simply (seriously) convert from francs to euros”, followed by “I mean, one franc in 1970 is one euro today”. Which would confirm my intuition (taking into account the euro-dollar exchange rate, since my childhood was rather the 80s).

To check this story, I went looking for inflation rates and price index series. For monthly data, http://bdm.insee.fr/… is enough, but the data only go back to the 1990s. On the other hand, http://france-inflation.com/… goes back to 1901 (just a tiny bit before I was born, as my daughter would say), but with annual data. The good thing is that the two series are consistent,

> T1=ts(insee,start=1990,frequency=12)
> T2=ts(inffrance,start=1901,frequency=1)
> plot(lag(T1,6),xlim=c(1970,2012),ylim=c(20,130),col="red",lwd=2)
> lines(T2,col="blue")

and if we look at the variation of the price index, we also find comparable inflation levels

> plot(diff(T2)/T2*100,col="blue",ylab="Inflation (%)",xlim=c(1960,2012),ylim=c(0,15))
> lines(lag(diff(T1,12)/T1*100,6),col="red")

The idea will be to use the annual index, possibly interpolated (linearly, to keep things simple), to see when the price level was 6.5597 times lower than today. We can use the small function

> interpol.linear = function(u,n){
+ v=u[1]
+ for(i in 1:(length(u)-1)){
+ v=c(v,seq(u[i],u[i+1],length=n+1)[-1])}
+ return(v)}

We then create a finer price index vector (from the annual values),

> h=50
> T2b=interpol.linear(T2,n=50)

We then look for the time when, with 1 Fr., one had the equivalent of 1 € (today)

> last=T2b[length(T2b)]
> euro=6.5597
> level=last/euro
> max(which(T2b<=level))/h+1900
[1] 1969.92

Indeed, at the beginning of 1970, prices should match. If anyone has an old mail-order catalogue from that period, I think we could check the nominal amounts. So that makes about 42 years…

> date=1901+seq(0,by=1/h,length=length(T2b))
> when=function(D){	
+ last=T2b[max(which(date<=D))]
+ euro=6.5597
+ level=last/euro
+ return(D-max(which(T2b<=level))/h-1900)}
> when(2012)
[1] 42.08

Is that a lot, or not? Prices go up, that is a secret to no one (or almost no one).

If we had a (constant, annualized) inflation rate of 5%, we would have to wait $T$ years before prices are multiplied by 6.5597, with $T$ the solution of

$$(1.05)^T=6.5597$$

i.e.

$$T=\frac{\log 6.5597}{\log 1.05}\approx 38.5517$$

as confirmed by the computation

> log(6.5597)/log(1.05)
[1] 38.55172

In short, about forty years is a plausible order of magnitude (since inflation has not been constant over the last 40 years in France, far from it). Actually, we can do the computation for past dates too, to see how long one had to wait, in the past, before observing such a price increase,

> VD=seq(1985,2012,by=.5)
> VW=Vectorize(when)(VD)
> plot(VD,VW,type="l",col="purple",ylab="Nombre d'années",lwd=2)

We recover here the fact that inflation has dropped significantly. When France switched to the euro, in 2002, (nominal) prices were the same as those displayed 35 years earlier

> when(2002)
[1] 35.3

Funny, isn’t it? I now leave it to discounting enthusiasts to interpret the fact that the waiting time has been linear for the last fifteen years or so…

Playing cards, with R

In my courses on R, I usually show how to insert a picture as a background for a graph. But it is also possible to see the picture as an object, and to insert it in a graph wherever we like, as explained on the awesome blog http://rsnippets.blogspot.ca/… (in a post published in January 2012). I wanted to insert cards in a graph. Cards can be found, e.g., on Wikipedia, even French versions, like the ones I used to play with when I was a kid (see e.g. the Jack of clubs, http://commons.wikimedia.org/…, or the Queen of hearts, http://commons.wikimedia.org/…). But those images are in svg format. First, we have to export them to ppm, either using gimp, or online, with http://www.sciweavers.org/… for instance. Here, I have a copy of the 32 cards, and the code to read one of them, in R, is

library(pixmap)
card=read.pnm("1000px_10_of_clubs.ppm")

Then, I can plot the card using

plot(card,add=TRUE)

(on a predefined graph). The interesting part is that it is possible to plot the picture within a given box, but it has to be specified when we read the image file, using

card=read.pnm("1000px_10_of_clubs.ppm",bbox=c(300,200,800,1100))
plot(card,add=TRUE)
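To make the whole sequence explicit, a minimal example (the empty plot only sets up a coordinate system, and the bounding box is given in those user coordinates) could be

plot(0:1,0:1,col="white",axes=FALSE,xlab="",ylab="")      # predefined (empty) graph
card=read.pnm("1000px_10_of_clubs.ppm",bbox=c(0,0,1,1))   # box in user coordinates
plot(card,add=TRUE)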

If we want to visualize all the cards, first we have to store the pictures (the cards) in some R format, in a list, then check their dimensions, and then we can write a code to plot any of them, anywhere we like (again, the location has to be specified when we read the file, which might take a while)

L=list(cards="french cards")
L2=list(cards="french cards")
color=c("spades","clubs","hearts","diamonds")
nb=c("07","08","09","10","Jack","Queen","King","01")
N=1:32
for(n in N){
  i=trunc((n-1)/4)+1  #number
  j=(n-1)%%4+1        #color
  name_card=paste("1000px_",nb[i],"_of_",color[j],".ppm",sep="")
L[[n+1]]=read.pnm(name_card)  
L2[[n+1]]=name_card
}

Now, if we want to plot one specific card (out of those 32), we can use

card_plot=function(id,loc){
usr <- par("usr")
pin <- par("pin")
card=L[[id+1]]
x.asp <- (card@size[2] * (usr[2] - usr[1]) / pin[1])
y.asp <- (card@size[1] * (usr[4] - usr[3]) / pin[2])
card.height <-.9
card.width <- card.height * x.asp / y.asp
y.0 <- loc[2]
x.0 <- loc[1]
bbox <- c(x.0, y.0, x.0 + card.width, y.0 + card.height)
card <- read.pnm(L2[[id+1]],bbox = bbox)
plot(card,add=TRUE)
}

Note that, here, we first read the file to check the dimensions, and then read it again, using the appropriate box (with a given height, here 0.9). Now, it is possible to plot the 32 cards on the same graph, for a given ordering

seq_card_plot=function(seq_id){
  X=seq(0,7*.5,by=.5)
  Y=0:4
  table = plot(0:4,0:4,ylim=c(0,4),
  axes=FALSE,xlab="",ylab="",col="white")    
  for(n in 1:length(seq_id)){
  i=trunc((n-1)/4)+1  #number
  j=(n-1)%%4+1         #color
    card_plot(id=seq_id[n],loc=c(X[i],Y[j])) 
  }}

If we did not shuffle the cards, it would be

seq_card_plot(N)

But it is possible to shuffle the cards, of course,

set.seed(1)
seq_card_plot(sample(N))

Now, to be honest, I am a bit disappointed, because I did not take advantage of the fact that I had vector-based images here. So it should be possible to get much nicer images, I guess…

Reproducibility and randomness

With Stéphane Tufféry, we were working this week on a chapter of a book entitled Statistical Learning in Actuarial Science. The chapter should be based on R functions, and we wanted to reproduce some outputs he had previously obtained with SAS. The good thing is that even complex functions (logistic regression, regression trees, etc.) produce the same kind of outputs. But we found a problem that we could not fix: generating identical training subsets of observations… Out of 1,000 lines, we subsample about 600 lines. The problem is that we could not generate the same sets of indices with R and SAS. Even using similar random generators… (except if we want to extract 1 or 2 lines, no more).

Let us try to explain what’s going on (based on code produced by Stéphane). According to Eubank (2010), there are (at least) two random number generators in SAS: the (older) RANUNI function, a congruential generator, and the (more recent) RAND function.

For instance, for the RAND function, if we generate a Gaussian sample – with the Mersenne-Twister algorithm – the code should be

%LET SEED =6;
%LET NREP=10;
DATA TESTRANDOM ;
  CALL STREAMINIT(&SEED);
  DO REP = 1 TO &NREP;
   X = RAND ('NORMAL'); 
   OUTPUT;
  END;
RUN;
PROC PRINT DATA = TESTRANDOM ;
RUN ;

And we get here

Obs. REP X
1 1 2.10680
2 2 -0.25604
3 3 0.28692
4 4 -0.22806
5 5 1.34569
6 6 0.16341
7 7 -0.27788
8 8 0.02133
9 9 1.24050
10 10 1.01054


If we want a Uniform sample, it should be

%LET SEED =6;
%LET NREP=10;
DATA TESTRANDOM ;
CALL STREAMINIT(&SEED);
DO REP = 1 TO &NREP;
  X = RAND ('UNIFORM'); 
  OUTPUT;
END;
RUN;
PROC PRINT DATA = TESTRANDOM ;
RUN ;

Obs. REP X
1 1 0.66097
2 2 0.48044
3 3 0.87849
4 4 0.19916
5 5 0.04838
6 6 0.19966
7 7 0.81353
8 8 0.53807
9 9 0.01105
10 10 0.53753

One good thing (so far) about the latter is that the Mersenne-Twister generator is also available in R, through the RNGkind function

> RNGkind("Mersenne")
> set.seed(6)
> runif(10)
 [1] 0.64357277 0.91590888 0.09523258 0.29537280
 [5] 0.76993170 0.25589353 0.51789573 0.67784993
 [9] 0.14722782 0.70052604

But the output is different, even though we are supposed to start, here, with the same seed. Now, if we want to understand what is going on, let us write our own code for the Fishman and Moore (1982) algorithm (in order to reproduce the SAS RANUNI output). The R version is

> a = 397204094      # RANUNI multiplier
> seed = 123         # seed
> n = 10             # sample size
> m = (2^31) - 1     # period
> for (i in (1:n-1)) {
+ seed = (a*seed)%%m
+ u = seed / m
+ print(u)
+ }
[1] 0.7503961
[1] 0.3209121
[1] 0.3453204
[1] 0.2683455
[1] 0.241798
[1] 0.9888181
[1] 0.3943279
[1] 0.9710172
[1] 0.001632214
[1] 0.921537

Let us now run a similar code with SAS,

%LET SEED =123;
%LET NREP=10;
DATA FRANUNI (KEEP = x) ;
seed = &SEED ;
DO REP = 1 TO &NREP;
  CALL RANUNI(seed, x);
  OUTPUT;
END;
RUN;
PROC PRINT DATA = FRANUNI ;
RUN ;

and we get the following output

Obs. x
1 0.75040
2 0.32091
3 0.17839
4 0.90603
5 0.35712
6 0.22111
7 0.78644
8 0.39808
9 0.12467
10 0.18769

It looks like, indeed, we do start with the same seed, since the first two numbers generated are similar. But after that, it looks like we get completely different random numbers… If we change the seed, the first two numbers are similar, but that’s all.

We might be missing something trivial here, but we did not see it. So if anyone has a clue about reproducibility issues when generating random samples with R and SAS, we are interested!
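One possible culprit (just a guess): in the loop above, a*seed can be as large as 8×10^17, which exceeds the 2^53 precision of double precision numbers, so the modulo is computed on an already rounded value; that would explain why only the first couple of terms match. A sketch using exact integer arithmetic, for instance with the gmp package, would avoid that issue,

> library(gmp)
> a = as.bigz(397204094)
> m = as.bigz(2)^31 - 1
> seed = as.bigz(123)
> for (i in 1:10) {
+   seed = (a * seed) %% m
+   print(as.numeric(seed) / as.numeric(m))
+ }

and should give back the 0.75040, 0.32091, 0.17839, … column obtained with SAS.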

Poisson regression on non-integers

In the course on claims reserving techniques, I mentioned the use of Poisson regression, even though incremental payments were not integers. For instance, we considered the incremental triangle

>  source("https://perso.univ-rennes1.fr/arthur.charpentier/bases.R")
>  INC=PAID
>  INC[,2:6]=PAID[,2:6]-PAID[,1:5]
>  INC
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,] 3209 1163   39   17    7   21
[2,] 3367 1292   37   24   10   NA
[3,] 3871 1474   53   22   NA   NA
[4,] 4239 1678  103   NA   NA   NA
[5,] 4929 1865   NA   NA   NA   NA
[6,] 5217   NA   NA   NA   NA   NA

On those payments, it is natural to use a Poisson regression, to predict future payments

>  Y=as.vector(INC)
>  D=rep(1:6,each=6)
>  A=rep(2001:2006,6)
>  base=data.frame(Y,D,A)
>  reg=glm(Y~as.factor(D)+as.factor(A),data=base,family=poisson(link="log"))
>  Yp=predict(reg,type="response",newdata=base)
>  matrix(Yp,6,6)
       [,1]   [,2] [,3] [,4] [,5] [,6]
[1,] 3155.6 1202.1 49.8 19.1  8.2 21.0
[2,] 3365.6 1282.0 53.1 20.4  8.7 22.3
[3,] 3863.7 1471.8 60.9 23.4 10.0 25.7
[4,] 4310.0 1641.8 68.0 26.1 11.2 28.6
[5,] 4919.8 1874.1 77.6 29.8 12.8 32.7
[6,] 5217.0 1987.3 82.3 31.6 13.5 34.7

and the total amount of reserves would be

>  sum(Yp[is.na(Y)==TRUE])
[1] 2426.985

Here, payments were in ‘000 euros. What if they were in ‘000’000 euros?

> a=1000
> INC/a
      [,1]  [,2]  [,3]  [,4]  [,5]  [,6]
[1,] 3.209 1.163 0.039 0.017 0.007 0.021
[2,] 3.367 1.292 0.037 0.024 0.010    NA
[3,] 3.871 1.474 0.053 0.022    NA    NA
[4,] 4.239 1.678 0.103    NA    NA    NA
[5,] 4.929 1.865    NA    NA    NA    NA
[6,] 5.217    NA    NA    NA    NA    NA

We can still run a regression here

> reg=glm((Y/a)~as.factor(D)+as.factor(A),data=base,family=poisson(link="log"))
> Yp=predict(reg,type="response",newdata=base)
> sum(Yp[is.na(Y)==TRUE])*a
[1] 2426.985

and the prediction is exactly the same. Actually, it is possible to change the currency, or to multiply by any kind of constant: the Poisson regression will always return the same prediction, as long as we use a log link function,

>  homogeneity=function(a=1){
+  reg=glm((Y/a)~as.factor(D)+as.factor(A), data=base,family=poisson(link="log"))
+  Yp=predict(reg,type="response",newdata=base)
+  return(sum(Yp[is.na(Y)==TRUE])*a)
+  }
>  Vectorize(homogeneity)(10^(seq(-3,5)))
[1] 2426.985 2426.985 2426.985 2426.985 2426.985 2426.985 2426.985 2426.985 2426.985

The trick here comes from the fact that we like the Poisson interpretation. But GLMs simply mean that we want to solve a first order condition. It is possible to solve that first order condition explicitly, and it was obtained without any requirement that the values be integers. To run a simple code, the intercept should be related to the last value of the matrix, not the first one.
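To be explicit, with a log link the first order condition (the standard score equation) is

$$\sum_{i} \left(Y_i - \exp(\boldsymbol{X}_i'\boldsymbol{\beta})\right)\boldsymbol{X}_i = \boldsymbol{0}$$

and nothing in that condition requires the $Y_i$’s to be integers.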

> base$D=relevel(as.factor(base$D),"6")
> base$A=relevel(as.factor(base$A),"2006")
> reg=glm(Y~as.factor(D)+as.factor(A), data=base,family=poisson(link="log"))
> summary(reg)

Call:
glm(formula = Y ~ as.factor(D) + as.factor(A), family = poisson(link = "log"), 
    data = base)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-2.3426  -0.4996   0.0000   0.2770   3.9355  

Coefficients:
                 Estimate Std. Error z value Pr(>|z|)    
(Intercept)       3.54723    0.21921  16.182  < 2e-16 ***
as.factor(D)1     5.01244    0.21877  22.912  < 2e-16 ***
as.factor(D)2     4.04731    0.21896  18.484  < 2e-16 ***
as.factor(D)3     0.86391    0.22827   3.785 0.000154 ***
as.factor(D)4    -0.09254    0.25229  -0.367 0.713754    
as.factor(D)5    -0.93717    0.32643  -2.871 0.004092 ** 
as.factor(A)2001 -0.50271    0.02079 -24.179  < 2e-16 ***
as.factor(A)2002 -0.43831    0.02045 -21.433  < 2e-16 ***
as.factor(A)2003 -0.30029    0.01978 -15.184  < 2e-16 ***
as.factor(A)2004 -0.19096    0.01930  -9.895  < 2e-16 ***
as.factor(A)2005 -0.05864    0.01879  -3.121 0.001799 ** 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 46695.269  on 20  degrees of freedom
Residual deviance:    30.214  on 10  degrees of freedom
  (15 observations deleted due to missingness)
AIC: 209.52

The first idea is to run a gradient-based (Newton-Raphson) descent, as follows (the starting point will be the coefficients from a linear regression on the log of the observations),

> YNA <- Y
> XNA=matrix(0,length(Y),1+5+5)
> XNA[,1]=rep(1,length(Y))
>   for(k in 1:5) XNA[(k-1)*6+1:6,k+1]=k
>   u=(1:(length(Y))%%6); u[u==0]=6
>   for(k in 1:5) XNA[u==k,k+6]=k 
>     YnoNA=YNA[is.na(YNA)==FALSE]
>     XnoNA=XNA[is.na(YNA)==FALSE,]
>     beta=lm(log(YnoNA)~0+XnoNA)$coefficients
>     for(s in 1:50){
+     Ypred=exp(XnoNA%*%beta)
+     gradient=t(XnoNA)%*%(YnoNA-Ypred)
+     omega=matrix(0,nrow(XnoNA),nrow(XnoNA));diag(omega)=exp(XnoNA%*%beta) 
+     hessienne=-t(XnoNA)%*%omega%*%XnoNA
+     beta=beta-solve(hessienne)%*%gradient}
> beta
             [,1]
 [1,]  3.54723486
 [2,]  5.01244294
 [3,]  2.02365553
 [4,]  0.28796945
 [5,] -0.02313601
 [6,] -0.18743467
 [7,] -0.50271242
 [8,] -0.21915742
 [9,] -0.10009587
[10,] -0.04774056
[11,] -0.01172840

We are not too far away from the values given by R (the design matrix coded above uses the value k instead of a 0/1 indicator, hence the rescaled coefficients). Actually, it is just fine if we focus on the predictions

> matrix(exp(XNA%*%beta),6,6)
       [,1]   [,2] [,3] [,4] [,5] [,6]
[1,] 3155.6 1202.1 49.8 19.1  8.2 21.0
[2,] 3365.6 1282.0 53.1 20.4  8.7 22.3
[3,] 3863.7 1471.8 60.9 23.4 10.0 25.7
[4,] 4310.0 1641.8 68.0 26.1 11.2 28.6
[5,] 4919.8 1874.1 77.6 29.8 12.8 32.7
[6,] 5217.0 1987.3 82.3 31.6 13.5 34.7

which are exactly the ones obtained above. And here, we clearly see that there is no assumption such as “the response variable should be an integer”. It is also possible to remember that the first order condition is the same as the one we have with a weighted least squares model. The problem is that the weights are functions of the predictions. But using an iterative algorithm, we should converge,
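To be explicit, at each step we regress the working response on the covariates, with weights equal to the current fitted means,

$$z_i=\boldsymbol{X}_i'\boldsymbol{\beta}+\frac{Y_i-\widehat{\mu}_i}{\widehat{\mu}_i},\qquad w_i=\widehat{\mu}_i=\exp(\boldsymbol{X}_i'\boldsymbol{\beta})$$

which is exactly what the loop below does.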

> beta=lm(log(YnoNA)~0+XnoNA)$coefficients
>  for(i in 1:50){
+ Ypred=exp(XnoNA%*%beta)
+  z=XnoNA%*%beta+(YnoNA-Ypred)/Ypred
+  REG=lm(z~0+XnoNA,weights=Ypred)
+  beta=REG$coefficients
+ }
> 
> beta
     XnoNA1      XnoNA2      XnoNA3      XnoNA4      XnoNA5      XnoNA6
 3.54723486  5.01244294  2.02365553  0.28796945 -0.02313601 -0.18743467
     XnoNA7      XnoNA8      XnoNA9     XnoNA10     XnoNA11 
-0.50271242 -0.21915742 -0.10009587 -0.04774056 -0.01172840

which are the same values as the ones we got previously. Here again, the prediction is the same as the one we got from this so-called Poisson regression,

> matrix(exp(XNA%*%beta),6,6)
       [,1]   [,2] [,3] [,4] [,5] [,6]
[1,] 3155.6 1202.1 49.8 19.1  8.2 20.9
[2,] 3365.6 1282.0 53.1 20.4  8.7 22.3
[3,] 3863.7 1471.8 60.9 23.4 10.0 25.7
[4,] 4310.0 1641.8 68.0 26.1 11.2 28.6
[5,] 4919.8 1874.1 77.6 29.8 12.8 32.7
[6,] 5217.0 1987.3 82.3 31.6 13.5 34.7

Again, it works just fine because GLMs are mainly conditions on the first two moments, and numerical computations are based on the first order condition, which has fewer constraints than the interpretation in terms of a Poisson model.
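As a side check, since only the first two moments matter, fitting the same model with the quasipoisson family gives exactly the same coefficients and predictions (only the estimated dispersion changes),

> reg2=glm(Y~as.factor(D)+as.factor(A),data=base,family=quasipoisson(link="log"))
> sum(predict(reg2,type="response",newdata=base)[is.na(Y)==TRUE])
[1] 2426.985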

The mother-in-law and the game of war

This evening, the kids wanted to start a game of war ten minutes before dinner. Faced with my lack of enthusiasm (you never really know when that kind of game will end), my mother-in-law suggested that instead of playing with two players (as the two older kids wanted), we should play with four, and that way, it would go faster.

What if my mother-in-law was right? And what if she was wrong? I am willing to believe that the average duration of a game (a game ending when there is a winner, i.e. someone who has collected all the cards of the deck), or even the median duration of a game, depends on the number of players. A priori, though, I would rather believe the opposite, namely that it takes longer. Indeed, intuitively, if we play with three players, at some point one player will have lost all his cards, and – on average – the deck is then shared between the two remaining players. And we are back to a game with two players. So the more players, the longer it should last. Now, we can also note that with 30 cards and 10 players, it is possible that the same player wins 3 times in a row (about one chance in a thousand with a simplified game, admittedly), right at the start, so the game is over in 3 rounds. Which cannot happen with 5 players, for instance. So with many players, it might be possible to finish a game much faster. Damned… mothers-in-law are a pain!

Not having the courage to do the maths (there were only ten minutes left before dinner), I settled for writing a small program to run simulations. OK, I admit, I consider a very simplified version of war. There are $n$ players and $m$ cards, with $m$ a multiple of $n$. All the cards are dealt at the first round, i.e. $m/n$ cards per player. I will then assume that there are no “battles” (ties). That is a bit lame, I admit (since that is the name of the game). In other words, at each round we draw the winner at random among the remaining players, with equal probability, i.e. a player with many cards does not have – a priori – better cards than the others. The winner then takes the cards in play (one per active player), and the other players (the losers) each lose one card.

A game can then be simulated as follows,

Time=function(np=2,nc=36){
t=0
N=rep(nc/np,np)
VN=N
P=1:np
while(sum(N==0)<(np-1)){
i=sample(P,1)
N=pmax(N-1,0)
N[i]=N[i]+length(P)
P=which(N!=0)
t=t+1
VN=rbind(VN,N)
}
return(list(time=t,traj=VN))
}

For instance, with two players, the evolution of the number of cards per player is the following (on a deck containing 60 cards),

set.seed(1)
T=Time()
barplot(t(T$traj),col=c("blue","red"),border=NA)

and with three players (still with our 60 cards)

T=Time(np=3,nc=36)
barplot(t(T$traj),col=c("blue","red","green"),border=NA)

Here, it seems that the game with three players lasted much longer than the one with two. And that when the third player lost all his cards, the two remaining players had as many cards as each other (and we were then back to the initial position). But is that always the case? Let us run a few simulations to get a more precise idea… We can use

game=function(np=2,ns=5000,nc=60){
T=rep(NA,ns)
for(i in 1:ns){T[i]=Time(np=np,nc=nc)$time}
return(T)
}

And we can look at simulations with 2, 3, 4, 5, etc. players,

G1=game(np=2,nc=60)
G2=game(np=3,nc=60)
G3=game(np=4,nc=60)
G4=game(np=5,nc=60)
G5=game(np=6,nc=60)
G6=game(np=10,nc=60)
G7=game(np=12,nc=60)
G8=game(np=15,nc=60)
G9=game(np=20,nc=60)
G10=game(np=30,nc=60)
G=data.frame(G1,G2,G3,G4,G5,G6,G7,G8,G9,G10)
boxplot(G,names=c(2,3,4,5,6,10,12,15,20,30))

If we look at the raw picture, we have

in other words, it seems that the distribution of the duration of a game is independent of the number of players (here on the x-axis). We can look at the average durations

> trunc(apply(G,2,mean))
 G1  G2  G3  G4  G5  G6  G7  G8  G9 G10 
896 925 929 922 919 918 913 908 909 873

with, possibly, a 95% confidence interval on the average duration of a game, with 60 cards

> rbind(trunc(apply(G,2,mean)-2/sqrt(5000)*apply(G,2,sd)),
+ trunc(apply(G,2,mean)),
+ trunc(apply(G,2,mean)+2/sqrt(5000)*apply(G,2,sd)))
      G1  G2  G3  G4  G5  G6  G7  G8  G9 G10
[1,] 875 904 908 901 898 897 893 887 888 852
[2,] 896 925 929 922 919 918 913 908 909 873
[3,] 917 946 950 943 940 939 934 929 930 894

or the quartiles

> trunc(apply(G,2,quantile))
       G1   G2   G3   G4   G5   G6   G7   G8   G9  G10
0%     46   48   50   42   38   33   36    4    3    2
25%   379  403  410  409  407  402  393  391  388  358
50%   681  706  720  710  701  701  706  692  709  660
75%  1192 1233 1249 1209 1203 1221 1229 1194 1195 1178
100% 6746 6927 5656 8135 7392 6377 7926 8542 7528 7062

Just to reassure ourselves, note that the last minimum values make sense, given our initial remark. Visually, we obtain

with, in red, the average value (and the confidence interval around it), still as a function of the number of players (and still with the same number of cards). With 5 times as many simulations, we get the same story, with the following average durations

> rbind(trunc(apply(G,2,mean)-2/sqrt(25000)*apply(G,2,sd)),
+ trunc(apply(G,2,mean)),
+ trunc(apply(G,2,mean)+2/sqrt(25000)*apply(G,2,sd)))
      G1  G2  G3  G4  G5  G6  G7  G8  G9 G10
[1,] 893 909 917 912 911 912 902 899 895 860
[2,] 902 919 926 922 920 922 911 908 904 869
[3,] 912 928 936 931 930 931 920 918 913 878

Personally, despite the very slight concavity that can be seen in the graph above (barely significant), my feeling is that we always get the same distribution, whatever the number of players. Surprising, isn't it? I am waiting for someone to explain to me that this result is intuitive, and could be obtained simply by doing some maths (I am not saying that the average duration is the same, I am saying that the distribution of the duration of a game does not depend on the number of players, which seems much harder to establish). In the meantime, I will try to improve my model to take actual battles into account. Or explain to my kids that their father is a nerd, since he prefers typing code on his computer to playing War with them… Shame on me!

May the 4th (be with you)

Today is a special day, for all of us who grew up with Star Wars,

And on the internet, one can easily find serious posts related to Star Wars. For instance, there is a series of posts on quantifying Star Wars, for statisticians, with part I, http://theskepticalstatistician.blogspot.ca/…, part II, http://theskepticalstatistician.blogspot.ca/… and http://theskepticalstatistician.blogspot.ca/…, and finally part III, http://theskepticalstatistician.blogspot.ca/…. One can also read a nice post on Star Wars economics, http://legalinsurrection.com/…

LaTeX in R graphs

A nice post was recently published on the rsnippets blog, about the tikzDevice R package. This package is – indeed – awesome, even if it has been removed from the CRAN website. Of course, it can be downloaded from the archive folder, on http://cran.r-project.org/…, but also (for a more recent version) from http://download.r-forge.r-project.org/…. But first, it is necessary to install the following package.

> install.packages("filehash")

Then, we download one of the tikzDevice archives, and install it, e.g. using (on a Mac)
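
The install command itself is not reproduced here; a minimal sketch, assuming a source archive of the package has been downloaded into the working directory (the file name below is only an example), would be

> install.packages("tikzDevice_0.6.3.tar.gz", repos = NULL, type = "source")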

Then, we can load the library

> library("tikzDevice")

If we want to use nice LaTeX formulas, it might be necessary to load some (LaTeX) packages and to specify the encoding format

> "options(tikzMetricPackages = c("\\usepackage[utf8]{inputenc}",
+ "\\usepackage[T1]{fontenc}", "\\usetikzlibrary{calc}", "\\usepackage{amssymb}"))

(this is detailed, e.g. in http://yihui.name/…). Then, we write the code to plot a graph. The idea is to produce a .tex file which contains the graph, or more precisely, which will produce a pdf graph once compiled. We start with

> tikz("normal-dist.tex", width = 8, height = 4, 
+ standAlone = TRUE,
+ packages = c("\\usepackage{tikz}",
+ "\\usepackage[active,tightpage,psfixbb]{preview}",
+ "\\PreviewEnvironment{pgfpicture}",
+ "\\setlength\\PreviewBorder{0pt}",
+ "\\usepackage{amssymb}"))

We will produce an 8×4 graph (dimensions are in inches). The graph is the following,

> u=seq(-3,3,by=.01)
> plot(u,dnorm(u),type="l",axes=FALSE,xlab="",ylab="",col="white")
> axis(1)
> I=which((u>=0)&(u<=1))
> polygon(c(u[I],rev(u[I])),c(dnorm(u)[I],rep(0,length(I))),col="red",border=NA)
> lines(u,dnorm(u),lwd=2,col="blue")

We can add text (or TeX based text)

> text(-1.5, dnorm(-1.5)+.17, "$\\textcolor{blue}{X\\sim\\mathcal{N}(0,1)}$", cex = 1.5)
> text(1.75, dnorm(1.75)+.25, 
+ "$\\textcolor{red}{\\mathbb{P}(X\\in[0,1])=\\displaystyle{\\int_0^1 \\varphi(x)dx}}$", cex = 1.5)

And we end the file with a standard

> dev.off()

This will produce a .tex file. If we compile this file, we get a pdf graph that can be inserted in lecture notes, slides or articles.
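
For instance, assuming a standard LaTeX distribution is installed, the compilation step can simply be run from the command line,

pdflatex normal-dist.tex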

Nice, isn’t it ?

Animation, from R to LaTeX

Just a short post, to share some code used to generate animated graphs with R. Assume that we would like to illustrate the law of large numbers, and the convergence of the empirical mean of a binomial (Bernoulli) sample. We can generate samples https://latex.codecogs.com/gif.latex?X_{i,j}\sim\mathcal{B}(1/2) using

> n=200
> k=1000
> set.seed(1)
> X=matrix(sample(0:1,size=n*k,replace=TRUE),n,k)

Each row https://latex.codecogs.com/gif.latex?\boldsymbol{X}_{i}=(X_{i,1},\cdots,X_{i,n},\cdots) will be a trajectory of heads and tails. For each trajectory, define the running mean https://latex.codecogs.com/gif.latex?\bar{X}_{i,n}=n^{-1}(X_{i,1}+\cdots+X_{i,n}), i.e. the mean of the first https://latex.codecogs.com/gif.latex?n values. The matrix of those running means can be computed using

> cummean=function(M){
+ U=matrix(M[,1],nrow(M),1)
+ 	for(i in 2:ncol(M)){
+ 	U=cbind(U,(U[,i-1]*(i-1)+M[,i])/i)}
+ return(U)
+ }
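
As an aside, the same matrix of running means could be obtained in a more vectorized way (a small sketch, equivalent to the loop above),

> cummean2=function(M) t(apply(M,1,cumsum))/
+ matrix(rep(1:ncol(M),each=nrow(M)),nrow(M),ncol(M))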

Define then

> Xbar=cummean(X)

Now, to generate an animated gif, the way I usually do it is to generate graphs (png graphs) using a loop,

> S=trunc(10^seq(1,3,by=.05))
> for(j in 1:length(S)){
+ 	s=S[j]
+ 	Xhist=hist(Xbar[,s],breaks=seq(0,1,by=.05),plot=FALSE)
+ 	nfile=paste("LLN-",100+j,".png",sep="")
+ 	png(nfile,600,350)
+ 	layout(matrix(c(3,0,1,2),2,2,byrow=TRUE), c(3,1), c(1,3), TRUE)
+ 	plot(1:s,Xbar[1,1:s],type="l",col="light blue",ylim=0:1,xlab="",ylab="",axes=FALSE,
+ 		 xlim=c(10,k),log="x")
+ 	axis(1)
+ 	axis(2)
+ 	for(i in 2:(n-1)) lines(1:s,Xbar[i,1:s],col="light blue")
+ 	lines(1:s,Xbar[n,1:s],col="red",lwd=2)
+ 	abline(v=s)
+ 	barplot(Xhist$counts, axes=TRUE,horiz=TRUE,col="light green",xlim=c(0,n/2*1.05))
+ 	dev.off()
+ }

I start at 100 because, afterwards, when merging the files, it is safer to have file numbers of the same length: sometimes the lexical order is used, and then after 1 comes 10, then 100, etc. Then I use Terminal commands
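
The exact command is not reproduced here, but a typical ImageMagick call (a sketch, assuming the png files sit in the working directory) would be

convert -delay 15 -loop 0 LLN-*.png LLN.gif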

Here, the delay is in 1/100 of a second, and I use an infinite loop. The animated graph is here

It is also possible to use the animation package, putting the plotting loop inside the braces of saveGIF,

> library(animation)
> ani.options(interval=.15)
> saveGIF({      })
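
A minimal sketch of what this could look like, with a simplified plot inside the braces (and assuming ImageMagick is installed, since saveGIF relies on it),

> library(animation)
> ani.options(interval=.15)
> saveGIF({
+ for(s in trunc(10^seq(1,3,by=.05))){
+ hist(Xbar[,s],breaks=seq(0,1,by=.05),main=paste("n =",s),xlab="",col="light green")
+ }
+ },movie.name="LLN.gif")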

But the loop can also be used to generate a sequence of graphs, in order to produce an animated graph in a pdf document (slides, or lecture notes). The idea is to use the same code, except that the output is now a pdf graph for each frame.

> S=trunc(10^seq(1,3,by=.1))
> for(j in 1:length(S)){
+ s=S[j]
+ Xhist=hist(Xbar[,s],breaks=seq(0,1,by=.05),plot=FALSE)
+ 	nfile=paste("LLN-",j,".pdf",sep="")
+ 	pdf(nfile,10,6)
+ layout(matrix(c(3,0,1,2),2,2,byrow=TRUE), c(3,1), c(1,3), TRUE)
+ plot(1:s,Xbar[1,1:s],type="l",col="light blue",ylim=0:1,xlab="",ylab="",axes=FALSE,
+ xlim=c(10,k),log="x")
+ axis(1)
+ axis(2)
+ for(i in 2:(n-1)) lines(1:s,Xbar[i,1:s],col="light blue")
+ lines(1:s,Xbar[n,1:s],col="red",lwd=2)
+ abline(v=s)
+ barplot(Xhist$counts, axes=TRUE,horiz=TRUE,col="light green",xlim=c(0,n/2*1.05))
+ 	dev.off()
+ }

We can then import them in LaTeX,

\documentclass[a4paper]{article}
\usepackage{graphicx}
\usepackage{animate}
\begin{document}
\begin{center}
\animategraphics[height=3.1in,palindrome]{1}{/Users/UQAM/LLN-}{1}{21}
\end{center}
\end{document}

This will generate the following pdf file. The animate package is described in several forums, e.g. http://www.geogebra.org/…