Category Archives: Probability

Probability and geometry

One of the most important formulas in probability (I think) is the "law of total probability", which simply says that

$\mathbb{P}(A)=\sum_i \mathbb{P}(A\cap B_i)$

where $\{B_i\}$ is a partition of the sample space, and which can also be written, using Bayes' formula,

$\mathbb{P}(A)=\sum_i \mathbb{P}(A\vert B_i)\cdot\mathbb{P}(B_i)$

One of the consequences of this result is the "law of total expectation", often called the double projection theorem,

$\mathbb{E}(Y)=\sum_x \mathbb{E}(Y\vert X=x)\cdot\mathbb{P}(X=x)$

often written in the shorthand form $\mathbb{E}(Y)=\mathbb{E}(\mathbb{E}(Y\vert X))$ (on the right-hand side, the first symbol is an expectation, that is an integral, and hence a real number, while the second one indicates that we are working with a conditional expectation, which is a random variable). But as always with simplified relations, one has to know what one is talking about.

The proof is simple: in the discrete case,

$\mathbb{E}\big(\mathbb{E}(Y\vert X)\big)=\sum_x \mathbb{E}(Y\vert X=x)\cdot\mathbb{P}(X=x)$

that is

$\sum_x \sum_y y\,\mathbb{P}(Y=y\vert X=x)\cdot\mathbb{P}(X=x)$

or finally

$\sum_y y\,\mathbb{P}(Y=y)=\mathbb{E}(Y)$

quite simply.

The interpretation is even simpler (to use an example I like more and more): if $Y$ is the weight of a person and $X$ the sex, then

$\mathbb{E}(Y)= \mathbb{E}(Y\vert X=\text{H})\cdot\mathbb{P}(X=\text{H})+\mathbb{E}(Y\vert X=\text{F})\cdot\mathbb{P}(X=\text{F})$

that is, the average weight of a randomly chosen person is a barycenter (damned, I am already doing geometry) of the average weight of men and the average weight of women, the weights in the barycenter being the proportions of men and women. As simple as that.
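To make the barycenter interpretation concrete, here is a small R sketch (the proportions, group means and distributions below are made up, purely for illustration):

p_H <- 0.48; p_F <- 0.52       # hypothetical proportions of men and women
m_H <- 75;   m_F <- 62         # hypothetical average weights (kg)
m_H * p_H + m_F * p_F          # barycenter of the two conditional means
# simulation-based check of E(Y) = E(Y|H)P(H) + E(Y|F)P(F)
set.seed(1)
n <- 1e6
sex <- sample(c("H", "F"), n, replace = TRUE, prob = c(p_H, p_F))
y <- ifelse(sex == "H", rnorm(n, m_H, 10), rnorm(n, m_F, 8))
mean(y)                                              # overall average weight
sum(tapply(y, sex, mean) * prop.table(table(sex)))   # weighted group means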

There is another classical relation in probability (known in statistics as the "variance decomposition formula") which says that

$\text{Var}(Y)=\mathbb{E}\big(\text{Var}(Y\vert X)\big)+\text{Var}\big(\mathbb{E}(Y\vert X)\big)$

Here again, the notation is a bit of a shortcut, and we will try to unpack it a little. To do so, the simplest thing is to do a bit of geometry. Because yes, in spaces of random variables, one can do geometry. In particular (orthogonal) projections. But before doing projections, we need distances, angles, a notion of orthogonality, etc.

  • Geometry refresher

To talk about orthogonality, we need a notion of inner product. As a reminder, an inner product on a space $\mathcal E$ is defined by

  • symmetry, $\langle\vec x,\vec y\rangle=\langle\vec y,\vec x\rangle$
  • bilinearity, $\langle\alpha \vec x+\beta \vec y,\vec z\rangle=\alpha\langle\vec x,\vec z\rangle+\beta\langle\vec y,\vec z\rangle$
  • positivity, $\langle\vec x,\vec x\rangle \geq 0$
  • and the inner product is said to be definite if $\langle\vec x,\vec x\rangle=0$ implies $\vec x=\vec 0$

From this inner product, one can deduce a norm,

$\|\vec x\|=\sqrt{\langle\vec x,\vec x\rangle}$

Indeed, we recover the homogeneity property, $\|\lambda\vec x\|=\vert\lambda\vert\,\|\vec x\|$ (this is why we take the square root), and $\|\vec x\|=0$ if and only if $\vec x=\vec 0$. We also have the Cauchy-Schwarz inequality,

$\vert\langle\vec x,\vec y\rangle\vert \leq \|\vec x\|\cdot\|\vec y\|$

which implies the triangle inequality,

$\|\vec x+\vec y\|\leq \|\vec x\|+\|\vec y\|$

We will say that two vectors are orthogonal, denoted $\vec x\perp\vec y$, if $\langle\vec x,\vec y\rangle=0$. In that case, we have the Pythagorean relation,

$\|\vec x+\vec y\|^2= \|\vec x\|^2+\|\vec y\|^2$

Another interesting object is the orthogonal projection. Recall first that if $\mathcal F\subset \mathcal E$, we say that $\vec x\perp\mathcal F$ if $\vec x\perp\vec y$ for all $\vec y\in\mathcal F$. And we will write

$\mathcal F^\perp=\{\vec x\in\mathcal E;\ \vec x\perp\mathcal F\}$

We then have some interesting results. In particular, for every $\vec y\in\mathcal E$, there exists a unique $\vec y_\star\in\mathcal F\subset \mathcal E$ such that

$\|\vec y-\vec y_\star\|= \inf\{\|\vec y-\vec z\|,\ \vec z\in\mathcal F\}$

One can also show that $\vec y-\vec y_\star \in\mathcal F^\perp$. We then speak of the orthogonal projection of $\vec y$ onto $\mathcal F$, and we write $\vec y_\star=\Pi_{\mathcal F}(\vec y)$. All of this is fairly standard in finite-dimensional spaces (think of $\mathbb{R}^n$ to get some intuition). We know a lot about orthogonal projections. For instance, if $\mathcal F_1\subset \mathcal F_2$, then

$\Pi_{\mathcal F_1}(\vec y)=\Pi_{\mathcal F_1}\big(\Pi_{\mathcal F_2}(\vec y)\big)$

This relation is true in $\mathbb{R}^n$, but also in more general spaces. This is what is called the double projection theorem (which we should see reappear with random variables).

  • Geometry in spaces of random variables

The space of random variables with finite variance – $L_2$ – can be equipped with such an operation, an inner product,

$\langle X,Y\rangle=\mathbb{E}(XY)$

The norm is then $\|X\|=\sqrt{\mathbb{E}(X^2)}$. Actually, we can see that we are on thin ice here, because to have an inner product we would need $\langle X,X\rangle=0$ if and only if $X=0$, and to have a norm we would need $\|X\|=0$ if and only if $X=0$. The technical issue here is that $\mathbb{E}(X^2)=0$ means that $\mathbb{P}(X=0)=1$, not that $X(\omega)=0$ for all $\omega$. In short, equality has to be understood in the almost-everywhere sense. But this is a technical detail (here). Technically, as Williams (1991) explains in chapter 6, $L_2$ is precisely the space $\mathcal{L}^2$ (classical in probability) quotiented by this equivalence relation (equality almost surely).

A subspace of $L_2$ is the space of constants, $\mathbb{R}$, which I will also denote $s\{\boldsymbol{1}\}$. Note that, as recalled in every course I teach,

$\mathbb{E}(Y) =\underset{c\in\mathbb{R}}{\text{argmin}}\{\mathbb{E}([Y-c]^2)\}$

that is,

$\mathbb{E}(Y)=\Pi_{s\{\boldsymbol{1}\}}(Y)$

The expectation is the orthogonal projection onto the space of constants. And

$\text{Var}(Y) =\underset{c\in\mathbb{R}}{\text{min}}\{\mathbb{E}([Y-c]^2)\}$
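As a quick illustration of this projection-on-constants idea, here is a small R sketch of mine (on simulated data, any distribution would do), checking numerically that the minimizing constant is the empirical mean and that the minimum is the empirical variance:

# the constant c minimizing E[(Y-c)^2] is the mean, and the minimum is the variance
set.seed(1)
y <- rexp(1e5, rate = 1/10)
opt <- optimize(function(c) mean((y - c)^2), interval = range(y))
opt$minimum    # close to mean(y), the projection on the constants
mean(y)
opt$objective  # close to the empirical variance of y
var(y)         # (up to the n/(n-1) correction)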

Another subspace of $L_2$ that will be of interest is the following. If $X\in L_2$, let

$s\{X\}=\{\tilde X=\psi(X)\in L_2,\ \psi:\mathbb{R}\rightarrow\mathbb{R}\}$

For instance, we can consider a Bernoulli variable, $X=\boldsymbol{1}_A$ with $A\subset \mathbb{R}$. In that case, this subspace is equivalent to the subspace of linear combinations,

$sl\{X\}=\{\tilde X=\beta_0+\beta_1 X;\ (\beta_0,\beta_1)\in \mathbb{R}^2\}\subset L_2$

Technically, I think that

$\overline{\sigma\{X\}} \subset s\{X\}$

but it would seem that we actually have equality... But since I want to talk about projections without talking about the space $\Omega$, let us stick with my heuristic interpretation, and I will not talk about $\sigma\{X\}$. I project onto subspaces of $L_2$, not onto $\sigma$-algebras.

If $Y\in L_2$, we set

$\mathbb{E}(Y\vert X)=\Pi_{s\{X\}}(Y)$

which is the solution of the least squares problem

$\mathbb{E}(Y\vert X) =\underset{X_\star\in s\{X\}}{\text{argmin}}\{\mathbb{E}([Y-X_\star]^2)\}$

Note that $\mathbb{E}(Y\vert X) \in s\{X\}$, and we recover here the standard formulation of a regression model, $\mathbb{E}(Y\vert X) =\psi(X)$. We then speak of the conditional expectation. Which is here a random variable, by construction. In fact, uniqueness of that random variable is possible precisely because, earlier, we defined equality in the almost sure sense.

We recalled the double projection theorem earlier. And since $s\{\boldsymbol{1}\}\subset s\{X\}$,

$\Pi_{s\{\boldsymbol{1}\}}(Y)=\Pi_{s\{\boldsymbol{1}\}}\big(\Pi_{s\{X\}}(Y)\big)$

which translates into the well-known relation

$\mathbb{E}(Y)= \mathbb{E}\big(\mathbb{E}(Y\vert X)\big)$

In (linear) econometrics and in introductory time series courses, another operator is introduced, because we do not project onto spaces as large as $s\{X\}$. One often introduces the linear expectation operator, with

$\text{EL}(Y\vert X)=\Pi_{sl\{X\}}(Y)$

where (as introduced earlier)

$sl\{X\}=\{\tilde X=\beta_0+\beta_1 X;\ (\beta_0,\beta_1)\in \mathbb{R}^2\}\subset L_2$

and, for stationary time series,

$\text{EL}(X_t\vert X_{t-1},X_{t-2},\cdots)=\Pi_{\overline{sl\{\boldsymbol{X}\}}}(X_t)$

Yes, if the process is stationary, I can regress on the whole past, but in that case I think the space has to be closed in order to project onto it...

Now, we can see that my geometric interpretation is a bit shaky... indeed, I can only define my conditional expectation (or even my plain expectation) through orthogonal projections if I am in a space equipped with such an operator. And while $L_2$ is a Hilbert space, this is not the case for $L_1$. Yet there is no need to be in $L_2$ to define a conditional expectation... I think the trick is to note that if $X$ belongs to $L_1$ but not to $L_2$, we can still manage, because $L_1$ is a Banach space and we still have notions of convergence. And $L_2$ is dense in $L_1$. In short, by taking limits, one can extend what we have just done to variables that are not square integrable...

Visually (maybe it is time to draw some pictures, no?) we have

A whole bunch of relations appear on this picture, double projections as well as the Pythagorean theorem. But let us take our time...

  • what about the constant?

Now, if we look at the details, there are some strange things. For instance, $X\perp Y$ translates here into $\mathbb{E}(XY)=0$. But this is not the notion of orthogonality we classically use. We are more used to seeing

$\text{Cov}(X,Y)=\mathbb{E}(XY)-\mathbb{E}(X)\mathbb{E}(Y)=0$

What did we miss? The idea is to set $\vec x=X-\mathbb{E}(X)$ and $\vec y=Y-\mathbb{E}(Y)$, that is, to work with the centered variables. We can note that

$\text{Var}(X)=\mathbb{E}([X-\mathbb{E}(X)]^2)=\| \vec x \|^2$

and we define the covariance between X and Y from the translated vectors,

$\text{Cov}(X,Y)=\langle\vec x,\vec y\rangle$

and the correlation as

$\text{corr}(X,Y)=\frac{\langle\vec x,\vec y\rangle}{\| \vec x \|\cdot \| \vec y \|} =\cos(\theta)$

where $\theta$ is the angle between the vectors $\vec x$ and $\vec y$. We will say that $X\perp Y$ if $\vec x \perp \vec y$ (in the geometric sense of the word), which corresponds to the classical relation $\text{corr}(X,Y)=0$. The Pythagorean theorem tells us that if $\vec x \perp \vec y$, then

$\| \vec x +\vec y \|^2=\| \vec x \|^2+\| \vec y \|^2$

which translates into

$\text{Var}(X+Y)=\text{Var}(X)+\text{Var}(Y)$

if the variables are orthogonal.

Now, we went a bit further earlier, with variables corresponding to conditional expectations. For instance, the variance decomposition formula. We can use the equality from the Pythagorean theorem again, with

$\vec x=\mathbb{E}(Y\vert X)-\mathbb{E}(Y)$

and

$\vec y=Y-\mathbb{E}(Y\vert X)$

It is easy to see that $\vec x \perp \vec y$, since $\vec x\in s\{X\}$ while $\vec y\in s\{X\}^\perp$, by construction of the orthogonal projection. So the Pythagorean theorem tells us that

$\| Y-\mathbb{E}(Y) \|^2=\| \mathbb{E}(Y\vert X)-\mathbb{E}(Y) \|^2+\| Y-\mathbb{E}(Y\vert X) \|^2$

For the left-hand term, this is rather easy,

$\| Y-\mathbb{E}(Y) \|^2= \text{Var}(Y)$

For the first term on the right, here again it is easy, since $\mathbb{E}[\mathbb{E}(Y\vert X)]=\mathbb{E}(Y)$ by the double projection theorem, and thus

$\| \mathbb{E}(Y\vert X)-\mathbb{E}(Y) \|^2=\text{Var}(\mathbb{E}(Y\vert X))$

For the last term, on the right, it is trickier. Let us say that we identify it with a conditional variance (I still have not formally defined a random variable called the "conditional variance"), i.e.

$\| Y-\mathbb{E}(Y\vert X) \|^2 = \mathbb{E}(\text{Var}(Y\vert X))$

We then recover the variance decomposition formula, into within-group and between-group variance.
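Here is a small R sketch of mine (with an arbitrary three-group mixture, chosen only for illustration) checking the decomposition numerically:

# numerical check of Var(Y) = Var(E(Y|X)) + E(Var(Y|X))
set.seed(1)
n <- 1e6
x <- sample(1:3, n, replace = TRUE, prob = c(.5, .3, .2))
y <- rnorm(n, mean = c(0, 2, 5)[x], sd = c(1, 2, 3)[x])
m_x <- tapply(y, x, mean)    # E(Y | X = x)
v_x <- tapply(y, x, var)     # Var(Y | X = x)
p_x <- prop.table(table(x))  # P(X = x)
sum(m_x^2 * p_x) - sum(m_x * p_x)^2 + sum(v_x * p_x)  # between + within
var(y)                                                # total variance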

This formula is classical in statistics, in regression (the residuals are orthogonal to $sl\{\boldsymbol{X}\}$, so part of the variance will be explained by our explanatory variables $\boldsymbol{X}$, and part will be said to be unexplained), or in credibility theory (this time I can refer to the dartboard example of Philbrick (1982)). I promise, we will talk about geometry again this winter, when I cover multivariate distributions in class (this time with notions of invariance, rotations, symmetries, etc.)

Generating functions

Today, I wanted to publish a post on generating functions, based on discussions I had with Jean-Francois while having our coffee after lunch a couple of times already. The other reason is that I am publishing this post just as my students have finished their Probability exam (and there were a few questions on generating functions).

  • A short introduction (back on a specific exercise)

In the Probability exam, I included an exercise we had seen in class, last week. The question is the following (question 16 in the form – in French). Let $F(x)=0$ for $x<0$ and $F(x)=1-e^{-x}/3$ for $x\geq 0$ be the cumulative distribution function of some random variable $X$, i.e. $F(x)=\mathbb{P}(X\leq x)$. What is the moment generating function of $X$, i.e. $M_X(t)=\mathbb{E}(e^{tX})$?

Consider some $t$ (we will see later on whether additional constraints are necessary). The tricky part of this exercise appears extremely fast, actually: how could you write $\mathbb{E}(e^{tX})$? I mean, in any probability textbook, the standard answer is

  • if $X$ is discrete, $\mathbb{E}(e^{tX})=\displaystyle\sum_x e^{tx}\,\mathbb{P}(X=x)$

  • if $X$ is (absolutely) continuous, $\mathbb{E}(e^{tX})=\displaystyle\int e^{tx} f(x)\,dx$

where $f$ is the density of $X$. Here, $X$ is clearly not a discrete variable. But is it (absolutely) continuous? My (strong) belief is that you need to plot that distribution function to see what it looks like, $x\mapsto F(x)$, for all $x$,

(following recent discussions with Philippe Reka, I will try to post more hand-made graphs)

Ooops. It looks like we have a discontinuity at 0. So we have to be a bit careful here: $X$ is neither continuous nor discrete. Let us use the double projection formula,

$\mathbb{E}(e^{tX})=\mathbb{E}\big(\mathbb{E}(e^{tX}\vert \boldsymbol{1}_{\{X>0\}})\big)$

which can also be written, if $\mathbb{P}(X=0)>0$,

$\mathbb{E}(e^{tX})=\mathbb{E}(e^{tX}\vert X=0)\cdot\mathbb{P}(X=0)+\mathbb{E}(e^{tX}\vert X>0)\cdot\mathbb{P}(X>0)$

This is simply the idea of saying that the overall average is a barycenter of the average per subgroup. Here, $\mathbb{P}(X=0)=F(0)-F(0^-)=2/3$ while $\mathbb{P}(X>0)=1/3$ (note that the two probabilities sum to one). Let us consider the three different components.

First, $\mathbb{E}(e^{tX}\vert X=0)=e^{t\cdot 0}=1$ (since, given $X=0$, it is a real-valued constant), and here $\mathbb{P}(X=0)=2/3$. So finally, we should compute $\mathbb{E}(e^{tX}\vert X>0)$. Observe that $X$ given $X>0$ is an (absolutely) continuous random variable, with a density. To get it, observe that for all $x>0$,

$\mathbb{P}(X>x\vert X>0)=\frac{\mathbb{P}(X>x)}{\mathbb{P}(X>0)}=\frac{e^{-x}/3}{1/3}=e^{-x}$

i.e. $X$ given $X>0$ has an exponential distribution.

Hence, $X$ is a mixture between an exponential variable and a Dirac mass at $0$. This was actually the tricky part of the question, since it is not obvious when we see (only) the formula above.

From now on, it is just high-school level computations,

$\mathbb{E}(e^{tX}\vert X>0)=\int_0^\infty e^{tx}\,e^{-x}\,dx=\frac{1}{1-t}$

if $t<1$ (for the first time, we see that the function is not defined everywhere). If we put all the expressions together,

$M_X(t)=\frac{2}{3}+\frac{1}{3}\cdot\frac{1}{1-t}=\frac{3-2t}{3-3t}$

  • Monte Carlo computations

If we are lazy (and trust me, I am extremely lazy), it is possible to use Monte Carlo simulations to compute that function,

> F=function(x) ifelse(x<0,0,1-exp(-x)/3)
> Finv=function(u) uniroot(function(x) F(x)-u,c(-1e-9,1e4))$root

or (to avoid the problem of the discontinuity)

> Finv=function(u) ifelse(3*u>1,0,uniroot(function(x)
+ 1-F(x)-u,c(-1e-9,1e4))$root)

Here, the inverse is simple to get, so we can speed up the code using

> Finv=function(u) ifelse(3*u>1,0,-log(3*u))

Then, we use

> rF=function(n) Vectorize(Finv)(runif(n))
> M=function(t,n=10000) mean(exp(t*rF(n)))
> Mtheo=function(t) (3-2*t)/(3-3*t)
> u=seq(-2,1 ,by=.1)
> v=Vectorize(M)(u)
> plot(u,v,type="b",col='blue')
> lines(u,Mtheo(u),col="red")
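As a quick sanity check (mine, not in the original post), the simulated sample should indeed put mass $2/3$ at $0$ and be standard exponential on the positive part:

set.seed(1)
x <- rF(1e5)
mean(x == 0)     # should be close to P(X = 0) = 2/3
mean(x[x > 0])   # conditional on X > 0, exponential with mean 1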

The problem with Monte Carlo simulations is that they should be used only if they are valid. I mean, I can compute

> set.seed(1)
> M(3)
[1] 5748134

A finite sum can always be computed, numerically. Even if, here, $\mathbb{E}(e^{3X})$ does not exist (or, to be more precise, is not finite). It is like the average of a Cauchy sample… I can always compute it, even if the expected value does not exist…

> set.seed(1)
> mean(rcauchy(1000000))
[1] 0.006069028

This is related to questions I tried to ask a few years ago in a paper, where I wanted to test whether $\mathbb{E}(X)$ is finite (or not). Almost all the tests I know are actually based on that assumption… But this is not the point here. My point is that those generating functions are interesting, when they exist. And perhaps working with characteristic functions is a better idea.

  • Generating functions

Now, to get back to the beginning of the last course, generating functions are interesting for a lot of reasons. But first of all, let us define those functions properly.

The moment generating function $M_X(t)=\mathbb{E}(e^{tX})$ exists if it is finite on a neighbourhood of $0$ (there is an $a>0$ such that for all $t\in[-a,+a]$, $M_X(t)<\infty$). In that case, there exists some (open) interval $(a,b)\subset\overline{\mathbb{R}}$ such that for all $t\in(a,b)$, $M_X(t)<\infty$, called the convergence strip of the moment generating function.

This function is said to be moment generating, since if $M_X(\cdot)$ exists (as defined in the previous paragraph), then all moments exist: for all $k\in \mathbb{N}\backslash\{0\}$, $\mathbb{E}\left(\vert X\vert^k\right)<\infty$. This is basically due to the fact that, for all $k\in \mathbb{N}\backslash\{0\}$, $x^k\exp(-\vert t\vert x)\rightarrow 0$ as $x\rightarrow\infty$, so, for all $x$ large enough, $x^k \leq \exp(\vert t\vert x)$. And before that, it is always possible to use a multiplicative constant,

$x^k \leq K\exp(\vert t\vert x)\text{ for all }x\geq 0$

for some $K$. Thus,

$\mathbb{E}\left(\vert X\vert^k\right)\leq K\,\mathbb{E}\left(e^{\vert t\vert\,\vert X\vert}\right)\leq K\left[M_X(t)+M_X(-t)\right]<\infty$

if $t$ is small enough (namely $[-t,+t]$ belongs to the convergence strip).

Now, if we use Taylor's expansion,

$e^{tX}=\sum_{k=0}^\infty \frac{t^k X^k}{k!}$

and

$M_X(t)=\mathbb{E}\left(e^{tX}\right)=\sum_{k=0}^\infty \frac{t^k\,\mathbb{E}(X^k)}{k!}$

If we look at the value of the derivatives of that function at point 0, then

$M_X^{(k)}(0)=\mathbb{E}(X^k)$
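As a quick numerical illustration (my own sketch, using the exponential distribution, whose moment generating function is $M_X(t)=1/(1-t)$), finite differences at $0$ recover the first two moments:

# for X ~ Exp(1), M(t) = 1/(1-t); finite differences at 0 give the raw moments
M <- function(t) 1 / (1 - t)
h <- 1e-3
(M(h) - M(-h)) / (2 * h)          # ~ E(X)   = 1
(M(h) - 2 * M(0) + M(-h)) / h^2   # ~ E(X^2) = 2
set.seed(1)
x <- rexp(1e6)
c(mean(x), mean(x^2))             # Monte Carlo cross-check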

As we have seen last week in class, it is possible to define a moment generating function in higher dimension, for some random vector $\boldsymbol{X}=(X_1,\cdots,X_d)$,

$M_{\boldsymbol{X}}(\boldsymbol{t})=\mathbb{E}\left(e^{\boldsymbol{t}^{\top}\boldsymbol{X}}\right)$

for some $\boldsymbol{t}\in\mathbb{R}^d$. It is again a moment generating function since crossed derivatives (taken at point $\boldsymbol{0}$) are cross-moments. For instance,

$\frac{\partial^2 M_{\boldsymbol{X}}}{\partial t_i\,\partial t_j}(\boldsymbol{0})=\mathbb{E}(X_i X_j)$

So, moment generating functions are interesting if you want to derive moments of a given distribution. Another interesting feature is that this moment generating function (under certain conditions) fully characterizes the distribution of the random variable, in the sense that if, for some $h>0$, $M_X(t)=M_Y(t)$ for all $t\in(-h,+h)$, then $X\overset{\mathcal{L}}{=}Y$.

  • From moment generating functions to characteristic functions

The problem with the moment generating function is that it is defined (only) on some neighborhood of $0$, and we should be careful. The other problem is that it only exists for distributions with sufficiently light (exponentially decaying) tails. Which might be a strong assumption.

Thus, an interesting idea is to consider $\mathbb{E}\left( e^{tX} \right)$ not on the real line, but on the imaginary line.

Thus, let $\phi_X(t)=\mathbb{E}\left( e^{i tX} \right)$ for some $t\in\mathbb{R}$. Actually, not some, but all $t\in\mathbb{R}$, since

$\vert e^{itX}\vert=1,\text{ so that }\vert\phi_X(t)\vert\leq\mathbb{E}\left(\vert e^{itX}\vert\right)=1$

so the characteristic function always exists. Paul Lévy proved in 1925 that the characteristic function completely characterizes the distribution.

Now, if we look at it quickly, it looks like we did not change a lot of things here, and we should be able to write

$\phi_X(t)=M_X(i t)$

If we want to do things properly, let us look at Gut (2005) for instance. Assume that $M_X(\cdot)$ is defined on some interval $(-a,+a)$. It is then possible to define a function (this time, it is no longer a real-valued function) as

$\Gamma_X(z)=\mathbb{E}\left(e^{zX}\right),\quad z\in\mathbb{C}$

which is well defined on the strip $\{z\in\mathbb{C}:\text{Re}(z)\in(-a,+a)\}$.
$\phi_X(\cdot)$ and $M_X(\cdot)$ are then restrictions of that function to the imaginary line and the real line, respectively. That function $\Gamma_X(\cdot)$ is clearly holomorphic, and thus the values it takes on such a strip are fully determined by the values it takes on the real interval $(-a,+a)$. Thus, the moment generating function completely characterizes the distribution.

But it has to be defined on some neighbourhood of $0$. Which is not trivial actually… I mean, in nonlife insurance, we see a lot of Pareto distributions.

  • Fast Fourier Transform

Recall Euler's formula,

$e^{i\theta}=\cos(\theta)+i\sin(\theta)$

Thus, we should not be surprised to see Fourier's transform appear. From this formula, we can write

$\phi_X(t)=\mathbb{E}\left(\cos(tX)\right)+i\,\mathbb{E}\left(\sin(tX)\right)$

Using some results from Fourier analysis, we can prove that the probability function satisfies (if the random variable has a Dirac mass at $x$)

$\mathbb{P}(X=x)=\lim_{T\rightarrow\infty}\frac{1}{2T}\int_{-T}^{+T}e^{-itx}\phi_X(t)\,dt$

A similar relationship can be obtained if the distribution is absolutely continuous at point $x$,

$f(x)=\frac{1}{2\pi}\int_{-\infty}^{+\infty}e^{-itx}\phi_X(t)\,dt$

It is also possible to get the cumulative distribution function using Gil-Pelaez's inversion formula, obtained in 1951,

$F(x)=\frac{1}{2}-\frac{1}{\pi}\int_0^{+\infty}\frac{\text{Im}\left[e^{-itx}\phi_X(t)\right]}{t}\,dt$

Nice, isn't it? It means that anyone working on financial markets knows those formulas, used to price options (see Carr & Madan (1999) for instance). And the good thing is that any mathematical or statistical software can be used to compute them.
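For instance, here is a rough R sketch of Gil-Pelaez's formula (the Gamma(2,1) example and the truncation of the integral are my own choices, only there to check the inversion against a closed-form benchmark):

# Gil-Pelaez inversion for a Gamma(2,1), whose characteristic function
# is phi(t) = (1 - it/beta)^(-alpha); truncating the integral at t = 100 is arbitrary
phi <- function(t, alpha = 2, beta = 1) (1 - 1i * t / beta)^(-alpha)
F_cf <- function(x) 1/2 - integrate(function(t) Im(exp(-1i * t * x) * phi(t)) / t,
                                    lower = 1e-10, upper = 100)$value / pi
F_cf(3)          # inversion of the characteristic function
pgamma(3, 2, 1)  # closed-form benchmark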

  • Characteristic function and actuarial science

Now, what is the interest of all that in actuarial science? Characteristic functions are interesting when we deal with sums of independent random variables, since the characteristic function of the sum is simply the product of the characteristic functions. They are also interesting when dealing with compound sums1. Consider the problem of computing the 99.5% quantile of a compound sum of Gamma random variables, i.e.

$S=\sum_{i=1}^N X_i$

where the $X_i\sim\mathcal{G}(\alpha,\beta)$ are i.i.d. and $N\sim\mathcal{P}(\lambda)$. The strategy is to discretize the loss amounts,

> n <- 2^20; 
> p <- diff(pgamma(0:n-.5,alpha,beta))

Then, to compute $\tilde f(s)=\mathbb{P}(S\in[s\pm 1/2])$, we use

> f <- Re(fft(exp(lambda*(fft(p)-1)),inverse=TRUE))/n
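This one-liner is just the compound Poisson characteristic function in disguise (a remark of mine, to make it explicit): for $N\sim\mathcal{P}(\lambda)$ independent of the $X_i$, $\varphi_S(t)=\mathbb{E}\big(\varphi_X(t)^N\big)=\exp\big(\lambda[\varphi_X(t)-1]\big)$, and the code applies exactly this transformation to the discrete Fourier transform of the discretized severity distribution p, before inverting it to recover the distribution of $S$.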

To compute the 99.5% quantile, we just use

> sum(cumsum(f)<.995)

That's extremely simple, isn't it? Want me to do it for real? Consider the following loss amounts

> set.seed(1)
> X <- rexp(200,rate=1/100)
> print(X[1:5])
[1] 75.51818 118.16428 14.57067 13.97953 43.60686

Let us fit a gamma distribution. We can use

> library(MASS)
> fitdistr(X,"gamma")
      shape         rate    
  1.309020256   0.013090411 
 (0.117430137) (0.001419982)

or

> f <- function(x) log(x)-digamma(x)-log(mean(X))+mean(log(X))
> alpha <- uniroot(f,c(1e-8,1e8))$root
> beta <- alpha/mean(X)
> alpha
[1] 1.308995
> beta
[1] 0.01309016

Anyway, we have the parameters of our Gamma distribution for individual losses. And assume that the mean of the Poisson counting variable is

> lambda <- 100

Again, it is possible to use Monte Carlo simulations, if we can easily generate a compound sum. We can use the following generic code: first, we need functions to generate the two kinds of variables of interest,

> rN.P <- function(n) rpois(n,lambda)
> rX.G <- function(n) rgamma(n,alpha,beta)

then, we can use (see here for a discussion on possible codes)

> rcpd4 <- function(n,rN=rN.P,rX=rX.G){
+ return(sapply(rN(n), function(x) sum(rX(x))))}

If we generate one million variables, we can get an estimator for the quantile,

> set.seed(1)
> quantile(rcpd4(1e6),.995)
   99.5% 
13651.64

Another idea is to use a property of the Gamma distribution: a sum of independent Gamma random variables is still Gamma (with additional assumptions on the parameters, but here we consider identical Gamma distributions). Thus, it is possible to compute the cumulative distribution function of the compound sum,

> F <- function(x,lambda=100,nmax=1000) {n <- 0:nmax
+ sum(pgamma(x,n*alpha,beta)*dpois(n,lambda))}

(or at least an approximation). If we invert that function, we get our quantile

> uniroot(function(x) F(x)-.995,c(1e-8,1e8))$root
[1] 13654.43

which is consistent with our Monte Carlo computation. Now, we can also use the fast Fourier transform here,

> n <- 2^20; lambda <- 100
> p <- diff(pgamma(0:n-.5,alpha,beta))
> f <- Re(fft(exp(lambda*(fft(p)-1)),inverse=TRUE))/n
> sum(cumsum(f)<.995)
[1] 13654

Now, if it is simple, is it efficient ? Let us compare for instance computation time to get those three outputs,

> system.time(quantile(rcpd4(1e5),.995))
       user      system     elapsed 
      2.453       0.106       2.611 
> system.time(uniroot(function(x) F(x)-.995,c(1e-8,1e8))$root)
       user      system     elapsed
      0.041       0.012       0.361 
> system.time(sum(cumsum(Re(fft(exp(lambda*(fft(p)-1)),inverse=TRUE))/n)<.995))
       user      system     elapsed
      0.527       0.020       0.560

Computations here are comparable with the (numerical) inversion of the cumulative distribution function. Except that here, we were lucky: if the distribution is not Gamma but log normal, the second algorithm cannot be used.
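For what it's worth, the FFT approach itself carries over unchanged: below is a sketch with lognormal losses (the meanlog and sdlog values are arbitrary, only there to illustrate that the algorithm does not rely on the Gamma assumption).

# same FFT algorithm with (arbitrary) lognormal losses, where no closed
# form exists for the distribution of the compound sum
n <- 2^20; lambda <- 100
meanlog <- 4; sdlog <- 1
p <- diff(plnorm(0:n - .5, meanlog, sdlog))   # discretized severity distribution
f <- Re(fft(exp(lambda * (fft(p) - 1)), inverse = TRUE)) / n
sum(cumsum(f) < .995)                         # 99.5% quantile of the compound sum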

1. This numerical example is taken from the first chapter of Computational Actuarial Science with R, to appear in a few months.

Halloween and candies (a ballot problem)

This year, for Halloween, a post on candies (I promise, next year I will write another post on zombies). But I don't want to focus on the kids' problems (last year, we tried to minimize their walking distance to collect as many candies as possible, with part 1 and part 2), I want to discuss my own problems. Because usually, the kids wear their costumes, and they go in the streets, they knock on the doors, while I stay at home. So I'm the one, with a bag full of candies, waiting for kids to knock on our door, and then I give them some candies (if they wear a costume). Consider the following problem. Assume that we start with $r$ red candies, and $b$ black ones, with $b<r$. The thing is that no one likes those black candies. What could be the probability that, for all the kids that will get candies after knocking at my door (assuming there are exactly $b+r$ of them for convenience, but we will also consider the more general case where I have too many candies, i.e. fewer than $b+r$ kids, later on), the probability to get a red candy is always larger than the probability to get a black candy? This is somehow related to the popular ballot problem, proposed (and solved) by Whitworth in 1878, but he wrote it only in the fourth edition of Choice and Chance, in 1886 (this is what the legend tells us). In 1887, Joseph Bertrand proposed a similar problem, and Désiré André introduced the reflection principle to solve it. The problem is simple: consider an election between two candidates, A (who receives $m$ votes) and B (who receives $n$ votes). A wins the election ($m>n$). If the ballots are cast one at a time, what is the probability that A will lead throughout the voting? For those who don't remember the conclusion, the probability is here quite simple,

$\frac{m-n}{m+n}$

Observe that some geometric proofs were given, later on, by Aebly and Mirimanoff, both in 1923, as well as by Howard Grossman in the 1950s (see the discussion on http://academiclogbook.blogspot.ca/…). Actually, http://futilitycloset.com// produced the following geometric proof (with no clear reference),

ballot box lattice diagram

We start at O, where no votes have been cast. Each vote for A moves us one point east and each vote for B moves us one point north until we arrive at E, the final count, (m, n). If A is to lead throughout the contest, then our path must steer consistently east of the diagonal line OD, which represents a tie score. Any path that starts by going north, through (0,1), must cut OD on its way to E.

If any path does touch OD, let it be at C. The group of such paths can be paired off as p and q, reflections of each other in the line OD that meet at C and continue on a common track to E.

This means that the total number of paths that touch OD is twice the number of paths p that start their journey to E by going north. Now, the first segment of any path might be up to m units east or up to n units north, so the proportion of paths that start by going north is n/(m + n), and twice this number is 2n/(m + n). The complementary probability — the probability of a path not touching OD — is (m – n)/(m + n).
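A quick Monte Carlo sanity check of that (m – n)/(m + n) formula (a sketch of mine, with arbitrary vote counts, not taken from the original post):

# Monte Carlo check of the ballot probability (m-n)/(m+n)
m <- 30; n <- 20
one_count <- function(...){
  ballots <- sample(c(rep("A", m), rep("B", n)))
  all(cumsum(ballots == "A") > cumsum(ballots == "B"))  # A strictly ahead throughout
}
set.seed(1)
mean(sapply(1:100000, one_count))
(m - n) / (m + n)   # theoretical value, here 0.2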

But let's try to solve our problem. Let $B_k$ and $R_k$ denote the number of black and red candies, respectively, left after the $k$th kid got his (or her) candy. Yes, one at a time. Here, $B_0=b$ and $R_0=r$. What we want is

$\mathbb{P}\left(R_k\geq B_k\text{ for all }k=0,1,\cdots,b+r\right)$

Using this formulation, we recognize the ballot problem. Almost. Actually, in the original ballot problem (see Bertrand (1887)), we have to compute the probability that one candidate remains strictly ahead of the other one throughout the count. With the strict condition, we get the well-known probability (given previously), here $(r-b)/(r+b)$.

Here, ties are allowed, and we can prove (easily) that the probability is

$\frac{r+1-b}{r+1}$

(again, there is a nice geometric interpretation of that result). It is also possible to get that value numerically, using the following function, which will generate a trajectory and return some indicators (with or without ties)

> red_black=function(sd){
+ set.seed(sd)
+ vectcandy=sample(c(rep("R",r),rep("B",b)))
+ v1=rev(cumsum(rev(vectcandy)=="R"))<rev(cumsum(rev(vectcandy)=="B"))
+ v2=cumsum(rev(vectcandy)=="R")<= cumsum(rev(vectcandy)=="B") 
+ return(list(evol=cbind(rev(cumsum(rev(vectcandy)=="R")),
+ rev(cumsum(rev(vectcandy)=="B")),v1),list=vectcandy,test=(sum(v1)==0),
+ ballot=(sum(v2)==0),when=min(which(v1==1))))}

(here I compute the ballot-type index, where ties are not allowed, and the candy-type index). If we generate 100,000 scenarios, starting with 50 red and 25 black candies, we get

> r=50 
> b=25 
> M=sapply(1:100000,red_black)
> mean(unlist(M[3,]))
[1] 0.50967

which can be compared with the theoretical value

> (r+1-b)/(r+1)
[1] 0.5098039

We can also get the distribution of the first time we have more black candies than red ones left (given that this event occurs)

> Z=unlist(M[5,])
> Z=Z[Z<Inf]
> hist(Z,breaks=seq(0,80),probability=TRUE,col="light blue", 
+ border=NA,xlab="",main="")

There might be some analytical formula that can be derived, but I have to confess that I am becoming extremely lazy,

Assume now that this year, some kids do not show up at my door (for some reason). Assume that only $n$ kids show up. We can see how the probability

$\mathbb{P}\left(R_k\geq B_k\text{ for all }k=0,1,\cdots,n\right)$

will change, with $n\leq b+r$,

> r=50
> b=25
> impact_n = function(n){
+ red_black=function(sd,nb=n){
+ set.seed(sd)
+ vectcandy=sample(c(rep("R",r),rep("B",b)))
+ v=(rev(cumsum(rev(vectcandy)=="R"))<rev(cumsum(rev(vectcandy)=="B")))[1:nb]
+ return(list(list=vectcandy,test=(sum(v)==0),when=min(which(v==1))))}
+ M=sapply(1:10000,red_black)
+ return(mean(unlist(M[2,])))}

Yes, not only am I too lazy to derive analytic formulas, I am so lazy that I do not even try to optimize my code. Here, the evolution of the probability, as a function of $n$, is

> V=Vectorize(impact_n)(25:75)
> plot(25:75,V)

Fun, isn't it? But now, I have to conclude my post, and work a little bit on my make-up: I have learnt so many things at the Montreal Zombie Walk a few days ago that the kids who knock at my door will be scared to death. I guess I will keep all the candies for myself this year!

Poisson distribution

Continuation of the ACT2121 course, preparing for the SOA's Exam P (probability). A new set of exercises, on topic 6 (as classified in Jacques Labelle's book, which will serve as the reference for this course).

As a reminder, the midterm on September 27 will cover topics 1-6, that is, everything covered in the problem sets posted online.

Monty Hall (oh no, not again)

Quite frequently, someone on the internet discovers the Monty Hall paradox, and becomes so enthusiastic that it becomes urgent to publish an article – or a post – about it. The latest example can be found at http://www.bbc.co.uk/news/magazine-24045598. I won't blame them, I did the same a few years ago (see http://freakonometrics.hypotheses.org/776, or http://freakonometrics.hypotheses.org/775, in French).

My point today is that the Monty Hall paradox raises an important question about information. How come something that sounds non-informative can actually be extremely informative? I will not get back to the blue eyes paradox (see http://freakonometrics.hypotheses.org/1963, in French) or the exam paradox (see http://freakonometrics.hypotheses.org/2328, in French one more time), which are related to information, but not with a probabilistic approach. I will stay close to Monty Hall's paradox today.

This morning, in my probability class, we were looking at a simple exercise (I say simple because it is only the second class of the session). The problem was the following

Consider an urn $A$, with 15 blue balls and 10 red balls, and an urn $B$, with 10 blue balls and 15 red balls. We select one urn at random (with probability 50% for each urn).
We draw a ball, which turns out to be blue, and we put it back in the urn. Now, we draw a (second) ball. What is the probability that this (second) ball is blue?

Please, take your time to read that carefully…

Ready? Your first thought should be that since we put back the ball after the first draw, it does not change the probabilities, right? So, why did we say that? Is it necessary? (about the last question, yes, when something is mentioned in an exercise, we should use it).

Let's forget about this second ball story, as an introduction to this problem. What was, actually, the probability for the first ball to be blue? Trivially, it was

$\mathbb{P}(\text{blue}_1)=\mathbb{P}(\text{blue}_1\vert A)\cdot\mathbb{P}(A)+\mathbb{P}(\text{blue}_1\vert B)\cdot\mathbb{P}(B)$

i.e.

$\mathbb{P}(\text{blue}_1)=\frac{15}{25}\cdot\frac{1}{2}+\frac{10}{25}\cdot\frac{1}{2}=\frac{1}{2}$
Let us run a code to get that, using simulations:

> n=1000000
> set.seed(1)

First, let us draw the urn, randomly

> urn=sample(1:2,size=n,replace=TRUE)

Then, let us draw the first, and the second ball,

> urns=matrix(c(15,10,10,15),2,2)
> colnames(urns)=c("blue","red")
> sample.urn=(urns[urn,])
> prob.urn=sample.urn/apply(sample.urn,1,sum)
> u1=c("blue","red")[1+(runif(n)<prob.urn[,1])]
> u2=c("blue","red")[1+(runif(n)<prob.urn[,1])]

The probability that the first ball was blue is here

> sum(u1=="blue")/n
[1] 0.499953

and for the second one

> sum(u2=="blue")/n
[1] 0.499221

So, indeed, the probability to have a blue ball is 50%. Now, what was the question? Given that the first ball was blue, what is the probability that the second one is blue? Here, in our simulations, it is

> sum(u2[u1=="blue"]=="blue")/sum(u1=="blue")
[1] 0.5194088

which is close to 52%. And if you run more simulations, you get

> f=function(seed){
+ set.seed(seed)
+ urns=matrix(c(15,10,10,15),2,2)
+ colnames(urns)=c("blue","red")
+ sample.urn=(urns[urn,])
+ prob.urn=sample.urn/apply(sample.urn,1,sum)
+ u1=c("blue","red")[1+(runif(n)<prob.urn[,1])]
+ u2=c("blue","red")[1+(runif(n)<prob.urn[,1])]
+ return(sum(u2[u1=="blue"]=="blue")/
+ sum(u1=="blue"))
+ }
> Vectorize(f)(1:20)
 [1] 0.5194088 0.5200931 0.5203338 0.5192104 0.5196960 0.5206121 0.5195453
 [8] 0.5184580 0.5203755 0.5200154 0.5196557 0.5179276 0.5188652 0.5204724
[15] 0.5197437 0.5209244 0.5205770 0.5208725 0.5206228 0.5190711

The probability is always close to 52%, and is (significantly) different from 50%.

Still not convinced that we have some information here that should be used? Imagine that in the first urn, we put 1 blue ball and 24 red balls, and the opposite in the second one. In that case, if the first ball drawn was blue, it means that it is very likely that the urn chosen was the second one. Let's look at it by running some simulations

> set.seed(1)
> urns=matrix(c(1,24,24,1),2,2)
> colnames(urns)=c("blue","red")
> sample.urn=(urns[urn,])
> prob.urn=sample.urn/apply(sample.urn,1,sum)
> u1=c("blue","red")[1+(runif(n)<prob.urn[,1])]
> u2=c("blue","red")[1+(runif(n)<prob.urn[,1])]

As before, the probability that the second ball is blue is 50% (because of the symmetry actually)

> sum(u2=="blue")/n
[1] 0.500362

But if I tell you that the first one was blue, the probability that the second one is blue becomes

> sum(u2[u1=="blue"]=="blue")/sum(u1=="blue")
[1] 0.9236433

So even if – somehow – we do not change much by replacing the ball in its urn, we do have here some information, since it was mentioned that the ball was blue. And we should use it. Again, the important point is that the sentence was not "we draw a ball and we put it back", but "we draw a blue ball, and we put it back". Now, if we do the maths, everything becomes simple, and clear (as usual).

The question is here to compute

$\mathbb{P}(\text{blue}_2\vert\text{blue}_1)$, where $\text{blue}_i$ is the event "the $i$th ball drawn is blue",

and according to Bayes' formula, it is

$\mathbb{P}(\text{blue}_2\vert\text{blue}_1)=\frac{\mathbb{P}(\text{blue}_1\cap\text{blue}_2)}{\mathbb{P}(\text{blue}_1)}$

Now, to compute those two probabilities, we have to condition on the urn,

$\mathbb{P}(\text{blue}_1)=\mathbb{P}(\text{blue}_1\vert A)\cdot\frac{1}{2}+\mathbb{P}(\text{blue}_1\vert B)\cdot\frac{1}{2}$

Given the urn, since we replace the ball, the two draws are independent, so

$\mathbb{P}(\text{blue}_1\cap\text{blue}_2)=\mathbb{P}(\text{blue}_1\vert A)^2\cdot\frac{1}{2}+\mathbb{P}(\text{blue}_1\vert B)^2\cdot\frac{1}{2}$

i.e.

$\mathbb{P}(\text{blue}_2\vert\text{blue}_1)=\frac{\mathbb{P}(\text{blue}_1\vert A)^2+\mathbb{P}(\text{blue}_1\vert B)^2}{\mathbb{P}(\text{blue}_1\vert A)+\mathbb{P}(\text{blue}_1\vert B)}$

So if we substitute the numerical probabilities of getting a blue ball into the previous formula, we get

$\frac{(15/25)^2+(10/25)^2}{(15/25)+(10/25)}$

which is not the same as the unconditional probability

$\frac{1}{2}\cdot\frac{15}{25}+\frac{1}{2}\cdot\frac{10}{25}=\frac{1}{2}$
Here, we get

> {(15/25)^2+(10/25)^2}/((15/25)+(10/25))
[1] 0.52

which confirms our empirical 52%, and note that in the second case (where there was only 1 blue ball in one urn, and 24 in the second one)

> {(24/25)^2+(1/25)^2}/((24/25)+(1/25))
[1] 0.9232

which again is close to the empirical 92.3% we got.

I strongly believe that the mis-intuition we might have here is close to the one we observe in the Monty Hall paradox. And unless you write things down properly, it is difficult to conclude anything….

PS [48  hours later] thanks @mikeandallie for the animated version of my post

Discrete random variables

Continuation of the ACT2121 course, preparing for the SOA's Exam P (probability). A new set of exercises, on topics 4-5 (as classified in Jacques Labelle's book, which will serve as the reference for this course)

  • Law of total probability, and Bayes' formula, #4, and discrete distributions #5 ACT2121-A2013-45.pdf

We will do exercises on the Poisson distribution next week, and the midterm on September 27 will cover topics 1-6. We will start continuous random variables in early October. To be continued…

 

Las Vegas and financial institutions

Exactly one month ago, I entered the Bellagio casino to gamble at the roulette table. It was actually a request from my daughter's godfather (who happens to be a probabilist). In a comment on a previous post, he suggested the following deal,

In the Bellagio you put 10$ for me on the 33 and 10$ for you as well. If 33 shows up, you bring me to a French "3 étoiles" restaurant next time you stop by in France. If 33 doesn't show up, I bring you to MacDonald…

I have to admit that I like eating in French "3 étoiles" restaurants, so I did gamble. Well, I could not remember the terms of the agreement very well (neither the number to select, nor the amount to put on the table). So I asked my daughter which number I was supposed to pick, and she chose 22. Anyway, the number that came up was neither the 33 nor the 22, so we lost. And roulette tables at the Bellagio were down to a $15 minimum (from what I remember, I was supposed to play $5 or $10). And I have seen tables with a $100 minimum (probably more, but I am not sure, and I could not take many pictures inside)! So I did play $15 (I kept the chips as souvenirs), and I have to admit that I was excited for a few seconds. I really enjoyed that thrilling sensation! And I was playing only $15!

Later on, in the hotel room – while we were watching TV – we saw some poker games where people were putting $200,000 on the table (there was almost a million in the pot)! I tried to explain to my kids that this was a reason why there were so many signs on the walls claiming that kids were not supposed to enter casinos, and so many ads about gambling being an addiction. It is not reasonable to put so much on a table! $200,000 on the table? This is probably more than all I could possibly own!

I thought about all that yesterday, when I discovered the following table, about leverage ratios of banks, see http://fool.com/investing/…

Company / Leverage Ratio (Assets-to-Equity), 2007

Bear Stearns 34:1
Morgan Stanley 33:1
Merrill Lynch 32:1
Lehman Brothers 31:1
Goldman Sachs 26:1

(with similar values in U.K., according to http://voxeu.org/…)

What does 30 mean? It means that a company with $1 in capital holds $30 in (various) financial positions (see http://newleftreview.org/II/… for a discussion). If you think about it, with a relative decline of 3.5% in asset value, the absolute loss is larger than the capital held by the company… Now, if we forget about Lehman, and focus on me, gambling in Las Vegas, we can try to illustrate this 30:1 leverage ratio as follows. The way I see it is that, if I were a bank with $200,000 equity (equity being, from my understanding, everything I own), I would be able to borrow 30 times this amount, and put this money on some table in Las Vegas. OK, there might be a big difference, since in Vegas, on average, I will lose money, while most models in finance claim that (on average) we should gain money (somehow, since it might depend on your reference level). And no one really owns the casino in real life. But still. A 30:1 leverage ratio means that I would be playing more than $6 million on a table in Las Vegas! How should I understand that 30:1 leverage ratio? Am I really such a small player? Are banks really too big players? Or perhaps they do not hold enough capital to play that big….
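Just to make the arithmetic explicit (a back-of-the-envelope sketch of mine, reusing the $200,000 figure from the poker anecdote above):

# with 30:1 leverage, a 3.5% decline in asset value exceeds the equity
equity <- 200000          # the poker-pot figure used above
assets <- 30 * equity     # assets held at a 30:1 leverage ratio
loss   <- 0.035 * assets  # a 3.5% relative decline
loss / equity             # 1.05 > 1: the loss is larger than the capital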

Bayes, credit scoring and terrorism

Once again, my neighbor Corey published a very interesting post on his blog http://bayesianbiologist.com/… on how likely the NSA program is to catch a terrorist (a real one). I was working on something similar in recent weeks, with Stéphane Tufféry, for our chapter, entitled Statistical Learning in Actuarial Science. The idea was to present credit scoring techniques, from logistic regression to classification trees, random forests, etc. Of course, it is more boring, since we talk about loans and not terrorism. In credit scoring, we consider possible loans, and we have to predict whether someone is more likely to be a bad guy or a good guy. The idea is the same: based on some covariates, we need to build a score function, which can be related to the probability of being bad. The higher the score, the more likely the person is to be a bad guy. Then, of course, we have to discuss errors, namely false positives (good guys that we think are bad) and false negatives (bad guys that we think are good). From the company's point of view, you do not want bad guys in your portfolio, and from everyone else's point of view (since everyone believes he is with the good ones, this is a classical optimism bias), we do not want to be confused with those bad guys. Then we can spend hours on classification curves, and criteria to assess whether our classifier is good or not, etc. While I was writing the introduction of the chapter, I remember that I found it hard to find the proper words (to describe that 0/1 problem). But I did use (like everyone else) the terms good and bad. Like in terrorism. Except that to use this terminology (bad and good), we have to be more specific. In credit scoring, a bad guy is someone who did not pay back, at least once, for instance. But in terrorism, I think it is more difficult to say what a terrorist is.

I mean, in France, we experienced terrorism too, a few years ago. In December 1996, I was in an RER train, going south, and we had reached Cité Universitaire when a bomb exploded at Port Royal. The train following mine, I guess. I remember that a couple of days later, I was traveling across Paris by bus, carrying with me a nice plant of… a plant that you're not supposed to grow. Say I was carrying sandwiches, according to Ted Mosby. So in order to avoid trouble (since I was not supposed to have this kind of plant species), I put it in a large box. I remember that people were staring at me, and some actually asked me what was in the box. So, for some reason, people try to build their own terrorist classifier, based on what they think might be covariates. And dirty trousers, not being well shaved, long hair (yes, I used to have long hair) and a box in the bus were obviously some of them. Note that I don't blame them, I do the same! After reading Corey's post this morning, I took the bus. And I saw someone with a ninja sword.

At first, my terrorist classifier put her (yes, I try to have a gender-free terrorist model) in the bad guy class. Then I understood it was an umbrella. So I put her in the super cool geeky category (that only a few can reach).

When I started to teach non-life insurance in Paris, the last part of the course was dedicated to large risks, natural catastrophes, and a hot topic: terrorism. I was giving this course (probably my best experience, ever) in tandem with François Bucchini, who was working at that time for AXA France. The two of us were giving the course together, interacting: I was the boring guy doing the maths, and François was sharing his experience. And by that time, he was involved in the creation of GAREAT, a market structure launched in France in 2002 to propose reinsurance against terrorism (for French companies). And one of the first claims was from the CAV (which is a pun for Comité d'Action Viticole), considered as a terrorist group. So, as he told us, be careful of prejudices when you think about terrorism. Cool wine drinkers can be dangerous terrorists…

Actually, I would love to see the covariates used by the NSA to predict whether you're a bad guy, or a potentially dangerous terrorist. Let us have a guess… You have asked for a visa for Pakistan? or Afghanistan? or Libya (not Libya, not yet, bad guys still have good friends there)? You have an NRA membership? You bought some heavy metal on iTunes? You still have a stop-ACTA sticker on your blog? You have a blog? You wrote a post including the word terrorist in it?

Note: I am supposed to be in Chicago next week. So if I cannot enter the U.S., we'll probably know more about potential covariates.

The mother-in-law and the game of war (bataille)

Tonight, the kids wanted to start a game of war (bataille) ten minutes before dinner. Seeing my lack of enthusiasm (you never quite know when that kind of game will end), my mother-in-law suggested that instead of playing with two players (as the two oldest wanted), we should play with four, and that way it would go faster.

What if my mother-in-law was right? What if she was wrong? I am willing to believe that the mean duration of a game (a game ending when there is a winner, that is, someone who has collected all the cards in the deck), or even the median duration of a game, depends on the number of players. A priori, I would however tend to believe that, on the contrary, it should get longer. Indeed, intuitively, if we play with three players, at some point one player will have lost all his cards, and – on average – the deck is then shared between the two remaining players. And we are then back to a game with two players. So the more players there are, the longer it should last. Now, we can also note that with 30 cards and 10 players, it is possible for the same player to win 3 times in a row (about a one-in-a-thousand chance with a simplified game, admittedly), right at the start, and the game is then over in 3 rounds. This cannot happen with 5 players, for instance. So with many players, it would be possible to finish a game much faster. Damned… mothers-in-law are a pain!

Not having the courage to do the maths (there were ten minutes left before dinner), I settled for writing a small program to run simulations. OK, I admit, I consider a very simplified version of the game. There are $n$ players and $c$ cards, with $c$ a multiple of $n$. All the cards are dealt, that is $c/n$ cards per player, at the first round. I will then assume that there are no "battles" (ties). That is a bit lame, I agree (since that is the name of the game). In other words, at each round a winner is drawn at random among the remaining players, uniformly, meaning that a player with many cards does not – a priori – have better cards than the others. The winner then takes the cards played (one per active player), and the other players (the losers) each lose one card.

A game can then be simulated as follows,

Time=function(np=2,nc=36){
t=0
N=rep(nc/np,np)      # cards per player
VN=N
P=1:np               # players still in the game
while(sum(N==0)<(np-1)){
i=sample(P,1)        # draw the winner of the round, uniformly
N=pmax(N-1,0)        # every active player puts one card on the table
N[i]=N[i]+length(P)  # the winner collects them
P=which(N!=0)
t=t+1
VN=rbind(VN,N)
}
return(list(time=t,traj=VN))
}

For instance, with two players, the evolution of the number of cards per player looks as follows (on a deck containing 36 cards, the default in the code above),

set.seed(1)
T=Time()
barplot(t(T$traj),col=c("blue","red"),border=NA)

and with three players (still with 36 cards)

T=Time(np=3,nc=36)
barplot(t(T$traj),col=c("blue","red","green"),border=NA)

Here, it seems that the game with three players lasted much longer than the one with two players. And that when the third player lost all his cards, the two remaining players had the same number of cards (so we were back to the initial position). But is that always the case? Let us run a few simulations to get a more precise idea… We can use

game=function(npsim=2,ns=5000,ncsim=60){
T=rep(NA,ns)
for(i in 1:ns){T[i]=Time(np=npsim,nc=ncsim)$time}
return(T)
}

And we can look at simulations with 2, 3, 4, 5, etc. players,

G1=game(np=2,nc=60)
G2=game(np=3,nc=60)
G3=game(np=4,nc=60)
G4=game(np=5,nc=60)
G5=game(np=6,nc=60)
G6=game(np=10,nc=60)
G7=game(np=12,nc=60)
G8=game(np=15,nc=60)
G9=game(np=20,nc=60)
G10=game(np=30,nc=60)
G=data.frame(G1,G2,G3,G4,G5,G6,G7,G8,G9,G10)
boxplot(G,names=c(2,3,4,5,6,10,12,15,20,30))

If we look at the raw picture, we get

in other words, it seems that the distribution of the duration of a game is independent of the number of players (here on the x-axis). We can look at the average durations

> trunc(apply(G,2,mean))
 G1  G2  G3  G4  G5  G6  G7  G8  G9 G10 
896 925 929 922 919 918 913 908 909 873

along with a 95% confidence interval for the mean duration of a game, with 60 cards

> rbind(trunc(apply(G,2,mean)-2/sqrt(5000)*apply(G,2,sd)),
+ trunc(apply(G,2,mean)),
+ trunc(apply(G,2,mean)+2/sqrt(5000)*apply(G,2,sd)))
      G1  G2  G3  G4  G5  G6  G7  G8  G9 G10
[1,] 875 904 908 901 898 897 893 887 888 852
[2,] 896 925 929 922 919 918 913 908 909 873
[3,] 917 946 950 943 940 939 934 929 930 894

or the quartiles

> trunc(apply(G,2,quantile))
       G1   G2   G3   G4   G5   G6   G7   G8   G9  G10
0%     46   48   50   42   38   33   36    4    3    2
25%   379  403  410  409  407  402  393  391  388  358
50%   681  706  720  710  701  701  706  692  709  660
75%  1192 1233 1249 1209 1203 1221 1229 1194 1195 1178
100% 6746 6927 5656 8135 7392 6377 7926 8542 7528 7062

To reassure ourselves, note that the last minimum values make sense, given our initial remark. Visually, we obtain

with, in red, the mean value (and the confidence interval around it), still as a function of the number of players (and still the same number of cards). With 5 times more simulations, we get the same story, with the following mean durations

> rbind(trunc(apply(G,2,mean)-2/sqrt(25000)*apply(G,2,sd)),
+ trunc(apply(G,2,mean)),
+ trunc(apply(G,2,mean)+2/sqrt(25000)*apply(G,2,sd)))
      G1  G2  G3  G4  G5  G6  G7  G8  G9 G10
[1,] 893 909 917 912 911 912 902 899 895 860
[2,] 902 919 926 922 920 922 911 908 904 869
[3,] 912 928 936 931 930 931 920 918 913 878

Personally, despite the very slight concavity that can be observed on the graph above (barely significant), I have the impression that we always get the same distribution, whatever the number of players. Surprising, isn't it? I am waiting for someone to explain to me that this result is intuitive, and could be obtained simply, by doing some maths (I am not saying that the mean duration is the same, I am saying that the distribution of the duration of a game is independent of the number of players, which seems harder to establish). In the meantime, I will try to improve my model to handle actual battles (ties). Or explain to my kids that their father is a nerd, since he would rather type code on his computer than play war with them… Shame on me!

In three months, I’ll be in Vegas (trying to win against the house)

In fact, I’m going there with my family and some friends, including two probabilists (I mean professionals; I am merely an amateur), with this incredible challenge: will I be able to convince the probabilists to go and play at the Casino?

Actually, I also want to study them carefully, to understand how we should play optimally. For example, I hope I can make them play the roulette. Roulette is simple. With a French (or European) roulette, it is probably the simplest: if I bet on black, I win if one of the 18 black numbers comes out, and I lose if one of the 18 red numbers – or zero (which is green) – comes out. This gives a winning probability of 18/37, i.e. a 48.65% chance. But in Vegas, I think it’s mostly American Roulette that can be found in casinos, in which there is a zero and a double zero (both favorable to the bank). Here, the probability of winning is 18/38, i.e. a 47.37% chance. The two roulettes are

Now, let us discuss a little bit about optimal strategies. For instance, suppose I go to Las Vegas with an initial wealth s (say $100). The goal is to find the strategy which maximizes the probability to leave Las Vegas with https://latex.codecogs.com/gif.latex%20?2s (here $200). Should I play big, or small?

Assume that I can bet https://latex.codecogs.com/gif.latex%20?x (that will be, here, for convenience, a fraction of s). With probability https://latex.codecogs.com/gif.latex%20?p, I will get https://latex.codecogs.com/gif.latex%20?2x, and with probability https://latex.codecogs.com/gif.latex%20?1-p, I will get https://latex.codecogs.com/gif.latex%20?0 (and lose my https://latex.codecogs.com/gif.latex%20?x). As mentioned above, https://latex.codecogs.com/gif.latex%20?p is (a little) smaller than 50%. The casino must win (actually, we will see that this assumption has a very strong impact on the strategy).

Suppose my goal is to double my initial sum, as mentioned in the introduction of this post. Maybe there is an optimal value for https://latex.codecogs.com/gif.latex%20?x, to maximize the probability of doubling my wealth. To make it simple, the game ends either because I did not, or because I did, manage to double my initial wealth… Assume further that https://latex.codecogs.com/gif.latex%20?x is fixed, and that I do not revise my bets. One can use Monte Carlo simulations, to get an intuitive idea…

> bet=function(s=1,t=2*s,x=s/4,p=.4736,nsim=100000){
+     vp=rep(0,nsim); #vw=s
+     for(i in 1:nsim){
+       w=s;
+       while((w>0)&(w<t)){
+          ux=sample(c(min(x,t-w),-x),size=1,prob=c(p,1-p))
+          w=w+ux
+       }
+       vp[i]=(w>=t)}
+     return(mean(vp))
+ }

If we plot this probability as a function of x, we have the following

> BET=function(x) bet(x=x)
> vx=1/(1:20)
> px= Vectorize(BET)(vx)
> plot(vx,px,log="x")

Let us see if we can do the maths, and actually compute those probabilities.

For example, if https://latex.codecogs.com/gif.latex%20?x%20=%20s, I play everything I have, and I double with probability https://latex.codecogs.com/gif.latex%20?p. That one was simple.  And indeed, on the graph above, the point on the right is probability  https://latex.codecogs.com/gif.latex%20?p (the red horizontal line).

Assume now that I can bet https://latex.codecogs.com/gif.latex%20?x%20=%20s%20/%202, and that I will play, at least, two rounds

  • with probability (1-p)^2, I will lose both rounds (and the game is over)
  • with probability p^2, I will win both rounds, and I double my wealth (and the game is also over)
  • with probability 2p(1-p), I will lose once, and double once. Either way, I will find myself again with my (initial) wealth s. So the game will start again…

To make the story short, the probability of doubling my initial wealth is

https://latex.codecogs.com/gif.latex%20?p%20^%202%20\left%20(1%20+2%20p%20(1-p)%20+%20[2p%20(1-p)]%20^%202%20+%20\cdots%20\right)%20=%20\frac%20{p%20^%202}%20{1-2p%20(1-p)}
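
Just to make sure the geometric sum was handled properly, here is a quick numerical check (truncating the series at 100 terms, which is more than enough here),

p=.4736
sum(p^2*(2*p*(1-p))^(0:100))   # truncated series
p^2/(1-2*p*(1-p))              # closed form, both around 0.447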

Let’s try something more general: I have initial wealth s, I can bet https://latex.codecogs.com/gif.latex%20?x and the goal is to reach 2s (or, more generally, say, some larger target). Now, the probability to reach 2s from s betting (always) https://latex.codecogs.com/gif.latex%20?x is exactly the same as the probability to reach b=2s/x from a=s/x betting only 1. Let P_b(a) denote the probability to go from a to b betting 1 (let us use generic parameters). We can easily get the following equation

https://latex.codecogs.com/gif.latex%20?P_b(a)%20=%20p\cdot%20P_b(a+1)%20+%20(1-p)%20\cdot%20P_b(a-1)

Thus, we can write

https://latex.codecogs.com/gif.latex%20?p\cdot%20(P_b(a+1)-P_b(a))%20=%20(1-p)\cdot%20(P_b(a)-P_b(a-1))

or equivalently

https://latex.codecogs.com/gif.latex%20?(P_b(a+1)-P_b(a))%20=\frac{1-p}{p}\cdot%20(P_b(a)-P_b(a-1))

and thus, iterating, the difference between two consecutive terms is

https://latex.codecogs.com/gif.latex%20?\left(\frac{1-p}{p}\right)^a\cdot%20(P_b(1)-P_b(0))

Now, observe that https://latex.codecogs.com/gif.latex%20?P_b(0)=0 (since I cannot have a gain without any money).

Let us write https://latex.codecogs.com/gif.latex%20?P_b(a+1)-P_b(0) using a domino technique :

https://latex.codecogs.com/gif.latex%20?[P_b(a+1)-P_b(a)]+[P_b(a)-P_b(a-1)]+\cdots+[P_b(1)-P_b(0)]

i.e.

https://latex.codecogs.com/gif.latex%20?\left(\frac{1-p}{p}\right)^a%20P_b(1)+\left(\frac{1-p}{p}\right)^{a-1}%20P_b(1)+\cdots+%20\left(\frac{1-p}{p}\right)^0%20P_b(1)

so, factoring out P_b(1), this geometric sum can also be written

https://latex.codecogs.com/gif.latex%20?\left(1%20-\left[\frac{1-p}{p}\right]^{a+1}%20\right)%20\left(1%20-\left[\frac{1-p}{p}\right]%20\right)^{-1}

Finally, we can write

https://latex.codecogs.com/gif.latex%20?P_b(a)=\left(1%20-\left[\frac{1-p}{p}\right]^{a}%20\right)\left(1%20-\left[\frac{1-p}{p}\right]%20\right)^{-1}\cdot%20P_b(1)

Here, there is still P_b(1) that I have to make explicit. The idea is to observe that P_b(b)=1 (if I start at the target level b, I have already won), thus

https://latex.codecogs.com/gif.latex%20?P_b(a)=\left(1%20-\left[\frac{1-p}{p}\right]^{a}%20\right)\left(1%20-\left[\frac{1-p}{p}\right]^{b}%20\right)^{-1}

So finally,

https://latex.codecogs.com/gif.latex%20?\mathbb{P}(gain)=\left(1%20-\left[\frac{1-p}{p}\right]^{s/x}%20\right)\left(1%20-\left[\frac{1-p}{p}\right]^{2s/x}%20\right)^{-1}

Nice, isn’t it? But to be honest, there is nothing new here. This is actually an old theorem, discovered by Christiaan Huygens in 1657, then extended by Jacob Bernoulli in 1680 and finally properly established by Abraham de Moivre in 1711. It is possible to plot this probability, as a function of x,

> bet2=function(s=1,t=2*s,x=s/4,p=.4736){
+     vp=(1-((1-p)/p)^(s/x))/(1-((1-p)/p)^(t/x))
+     return(vp)
+ }
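
As a quick sanity check (a sketch, assuming the Monte Carlo function bet() defined earlier is still in the workspace), one can compare the closed form with simulated probabilities, for a few bet sizes,

vx=c(1,1/2,1/4,1/10)
cbind(x=vx,
      closed_form=sapply(vx,function(x) bet2(x=x)),
      monte_carlo=sapply(vx,function(x) bet(x=x,nsim=20000)))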

The graph is the same as the one obtained with the Monte Carlo simulations (hopefully). Observe, looking carefully at the function above, that the probability is decreasing with the ratio (1-p)/p, i.e. increasing with p. Which makes sense… Further, the probability is decreasing with the target t: the greedier I am, the smaller my chance of winning.

Now, the interesting part is what is plotted on the graphs above: the smaller x (the size of the bet at each round), the smaller the chance of winning: if I want to win, it is important not to play small! I should bet everything I have! Actually, the funny thing is that if the probability of winning were (slightly) larger than 1/2 then, on the contrary, I should bet as little as possible.

So far, there is nothing new. Everything mentioned in this post can be related to a fundamental result of Lester Dubins and Leonard Savage, in “How to Gamble if You Must : Inequalities for Stochastic Processes” (published in 1965), see also Sudderth (1972). Of course, I can try another strategy, a little less reasonable, I think, which is sometimes called Martingale of D’Alembert. I believe more in luck than coincidence, so, when I win, I drop my bet (do not tempt fate) but when I lose, I increase my bet (I must win someday). But let’s keep it for another post, someday…

Again, that’s the theory. I guess we should try, and see how it works. I’ll try to upload pictures on the blog during the road trip, so if by the beginning of August nothing has been posted on the blog, please send a rescue team to save me at the Bellagio…

From Simpson’s paradox to pies

Today, I wanted to publish a post on economics, and decision theory. And probability too… Those who do follow my blog should know that I am a big fan of Simpson’s paradox. I also love to mention it in my econometrics classes. It does raise important questions, that I do relate to multicollinearity, and interpretations of regression models, with multiple (negatively correlated) explanatory variables. This paradox has amazing pedagogical virtues. I did mention it several times on this blog (I should probably mention that I discovered this paradox via Marco Scarsini, who taught me a lot of things, in decision theory and in probability). For those who do not know this paradox, here is an example that Marco gave in one of his talks, a few years ago. Consider the following statistics, when healthy people entered some hospitals

hospital     total   survivors   deaths   survival rate
hospital A     600         590       10             98%
hospital B     900         870       30             97%

while, when sick people entered the same hospitals,

hospital     total   survivors   deaths   survival rate
hospital A     400         210      190             53%
hospital B     100          30       70             30%

Somehow, whatever your health situation, you should choose hospital A. Now, if we aggregate the two tables,

hospital     total   survivors   deaths   survival rate
hospital A    1000         800      200             80%
hospital B    1000         900      100             90%

i.e. without any doubt, people should choose hospital B.
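
A short check of that reversal, with the numbers above,

healthy=matrix(c(590,600,870,900),2,dimnames=list(c("surv","total"),c("A","B")))
sick   =matrix(c(210,400, 30,100),2,dimnames=list(c("surv","total"),c("A","B")))
healthy["surv",]/healthy["total",]   # A wins: 98% vs 97%
sick["surv",]/sick["total",]         # A wins: 53% vs 30%
(healthy["surv",]+sick["surv",])/(healthy["total",]+sick["total",])   # B wins: 80% vs 90%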

Actually, Simpson’s paradox is called Simpson’s paradox because Colin Blyth named it that way in 1972, in his paper entitled on Simpson’s paradox and the sure-thing principle (an economic article in a statistical journal), that can be downloaded from http://www.stat.cmu.edu/~fienberg/…. He found this paradox in a paper published in 1951 by Edward Simpson, even if other papers actually did mention it earlier. The most popular application is probably admission to Berkeley’s graduate studies programs, and sex bias, see Bickel, Hammel & O’Connell (1975), that can be downloaded from http://www.unc.edu/~nielsen/…. I also mentioned a geometric interpretation of this paradox a few years ago on my blog, which is so simple to understand that the paradox is no longer a paradox actually, since on the example above, we had

590/600 > 870/900

and

210/400 > 30/100

while

800/1000 < 900/1000

With symbolic notations, one can have at the same time

a1/b1 > a2/b2

and

c1/d1 > c2/d2

with also

(a1+c1)/(b1+d1) < (a2+c2)/(b2+d2)

as shown on the graph below

There should be a connection between Simpson’s paradox and the ecological fallacy (an issue I discovered recently, and found extremely interesting, related again to the difficulties of interpreting regressions). But that’s another story. My point today is that Colin Blyth did mention another nice paradox, related, this time, to stochastic orderings. The idea is the following. Consider the three spinners drawn below (imagine some arrows in those circles),

  • spinner A: no matter where the arrow stops, the gain is 3,
  • spinner B: 56% chances to gain 2, 22% chances to gain 4, and 22% chances to gain 6,
  • spinner C: 51% chances to gain 1, 49% chances to gain 5.

Instead of spinners, it is also possible to consider three different lotteries,

You play against a friend: you pick a spinner, while the friend picks another. Everyone flicks his arrow, and the highest number wins (no matter the difference). Let us compute the odds. First case, A against B, from A’s perspective,

          B-2            B-4            B-6
A-3    56%  +1  win   22%  -1  lose   22%  -3  lose

In that case, A has 56% chance of beating B. Second case, A against C, from A’s perspective,

          C-1            C-5
A-3    51%  +1  win   49%  -2  lose

In that case, A has 51% chance of beating C. Third (and final) case, B against C, from B’s perspective. Assuming independence between the spinners, joint probabilities can easily be computed,

          C-1                 C-5
B-2    28.56%  +1  win     27.44%  -3  lose
B-4    11.22%  +3  win     10.78%  -1  lose
B-6    11.22%  +5  win     10.78%  +1  win

In that case, B has 61.78% chance of beating C. So, if we try to summarize,
  • A is the best choice, since it beats both with – always – more than 50% chance,
  • C is the worst choice, since it is beaten by both with – always – more than 50% chance,
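
To double-check those three pairwise probabilities, here is a minimal sketch (the values and probabilities are simply the ones of the three spinners listed above),

A=list(v=3,        p=1)
B=list(v=c(2,4,6), p=c(.56,.22,.22))
C=list(v=c(1,5),   p=c(.51,.49))
win=function(X,Y) sum(outer(X$p,Y$p)*outer(X$v,Y$v,">"))
c(AvsB=win(A,B), AvsC=win(A,C), BvsC=win(B,C))
# 0.56, 0.51 and 0.6178, as in the tables above
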
Now, assume that you play not against one friend, but against two friends. And everyone picks a different spinner. Let us compute the odds, one more time. First case, A against B and C, from A’s perspective,

                     A-3
B-2, C-1     28.56%   +1   win
B-2, C-5     27.44%   -2   lose
B-4, C-1     11.22%   -1   lose
B-4, C-5     10.78%   -2   lose
B-6, C-1     11.22%   -3   lose
B-6, C-5     10.78%   -3   lose

In that case, A has 28.56% chance of beating B and C. Second case, B against A and C, from B’s perspective,

           A-3, C-1              A-3, C-5
B-2    28.56%   -1   lose    27.44%   -3   lose
B-4    11.22%   +1   win     10.78%   -1   lose
B-6    11.22%   +3   win     10.78%   +1   win

In that case, B has 33.22% chance of beating A and C. Third (and final) case, C against A and B, from C’s perspective,

           A-3, B-2              A-3, B-4              A-3, B-6
C-1    28.56%   -2   lose    11.22%   -3   lose    11.22%   -5   lose
C-5    27.44%   +2   win     10.78%   +1   win     10.78%   -1   lose

In that case, C has 38.22% chance of beating A and B. So, if we try to summarize, this time,

  • C is the best choice, since it has (strictly) more than a 1/3 chance to win, which is the highest probability,
  • A is the worst choice, since it has (strictly) less than a 1/3 chance to win, which is the lowest probability.
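
Here again, a small sketch to double-check those three probabilities (redefining the three spinners as above, and looking for the strictly highest value),

A=list(v=3,        p=1)
B=list(v=c(2,4,6), p=c(.56,.22,.22))
C=list(v=c(1,5),   p=c(.51,.49))
g=expand.grid(ia=seq_along(A$v), ib=seq_along(B$v), ic=seq_along(C$v))
pr=A$p[g$ia]*B$p[g$ib]*C$p[g$ic]
va=A$v[g$ia]; vb=B$v[g$ib]; vc=C$v[g$ic]
c(A=sum(pr*(va>vb & va>vc)),
  B=sum(pr*(vb>va & vb>vc)),
  C=sum(pr*(vc>va & vc>vb)))
# 0.2856, 0.3322 and 0.3822 : C is now the best pick, and A the worst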

Odd, isn’t it? Now, is there an interpretation of that paradox? Yes. Martin Gardner, in his paper on induction and probability, mentioned the case of drug testing. The value we had with the spinner is the health level, rated from 1 to 6. Thus, taking drug A, you always get an average health level of 3. With drug C, on the other hand, you get either very sick (level 1) or very well (level 5). Consider now a doctor who wants to maximize the patient’s chance of being well. If only pills A and C are available, then the doctor should choose A. This is what we saw in the first part. Assume now that a company releases a third pill, called drug B. Then the doctor should find C more interesting… Odd, isn’t it?

Colin Blyth gave a more amusing application. Assume that you like to go to the restaurant, and you like to get a dessert there. Dessert A – the apple pie – is the average one, with a standard level, that you rank 3 (on a scale from 1 to 6). Dessert C – the cheese cake – can either be awful (ranked 1) or delicious (ranked 5). You’d better go for the apple pie if you want to maximize the probability of not being disappointed (i.e. maximizing your “best chance” according to Colin Blyth, but I guess it can be interpreted as regret minimization too). Now assume that dessert B – the blueberry pie – is available (with ranks given by the spinner). Then you should go for the cheese cake. I let you imagine the discussion that you could then have with your favorite waitress,

– Hi Mr Freakonometrics, do you want a piece of apple pie ? (yes, actually she also comes frequently on my blog, and knows me from my pseudo…)

– Probably. But actually, I was wondering if you did have your blueberry pie today ?

– Yes, in fact we do….

– Great, in that case, I’ll go for the cheese cake.

She’ll probably think that I am a freak… so I hope she’ll come and read my post, to understand that, actually, it does make a lot of sense to go for what was supposed to be my worst case.

Pills, half pills and probabilities

Yesterday, I was uploading some old posts to complete the migration (I am getting back to my old posts, one by one, to check the links to pictures, reformat R code, etc). And I re-discovered a post published almost 2 years ago, on nuns and Hell’s Angels in an airplane.

It reminded me of an old probability problem (that might be known as one of Feynman’s problems): suppose that you have a prescription to take half pills for 6 days. Unfortunately, the pharmacist was a bit lazy (or just wanted to help me write a mathematical problem), and gave me 3 (full) pills in a small box. Day 1, you take a pill, break it in two parts, eat one half, and return the other half to the box. Day 2, you draw ‘something’ randomly from the box, i.e. either half a pill, or a full pill. If it is a half one, you eat it. If it is a full one, you break it in two, eat one half, and return the other half to the box. Etc. On Day 6, if my story was well explained, you should know that there can only be one half pill left. So far, so good. But what about Day 5? There were either two half pills, or one full pill. So what was the probability that there was a full pill in the box on Day 5?

Nice problem, isn’t it ?

The good thing is that it can be described with a Markov model. Assume that we do have n pills. After 2n days, the box will be empty. Consider the pair (nc,nh) denoting the number of complete pills and half pills. nc can take all values, from 0 to n, and nh will be non-negative, with nc+nh at most n. Thus, the number of states – the possible pairs from Day 1 till Day 2n – will be 1+2+...+(n+1), i.e. (n+1)(n+2)/2. More precisely, let us define those states in a dataframe,

> n=3
> COMPLETE=HALF=NULL
> for(i in n:0){
+ HALF=c(0:(n-i),HALF)
+ COMPLETE=c(rep(i,length(0:(n-i))),COMPLETE)
+ }
> k=length(COMPLETE)
> state=data.frame(s=1:k,nc=rev(COMPLETE),nh=rev(HALF))
> state
s nc nh
1   1  3  0
2   2  2  1
3   3  2  0
4   4  1  2
5   5  1  1
6   6  1  0
7   7  0  3
8   8  0  2
9   9  0  1
10 10  0  0

Now, we can play to derive the transition matrix of the Markov chain.

> attach(state)
> P=matrix(0,k,k)
> for(i in 1:k){
+ C=state$nc[i]
+ H=state$nh[i]
+ if((C>0)&(H>0)){
+ P[i,state[(nc==C-1)&(nh==H+1),"s"]]= C/(C+H)
+ P[i,state[(nc==C)&(nh==H-1),"s"]]= H/(C+H)}
+ if((C>0)&(H==0)){
+ P[i,state[(nc==C-1)&(nh==H+1),"s"]]=1}
+ if((C==0)&(H>0)){
+ P[i,state[(nc==C)&(nh==H-1),"s"]]=1}
+ if((C==0)&(H==0)){
+ P[i,state[(nc==C)&(nh==H),"s"]]=1}
+ }

We do have a transition matrix (or a probability matrix) since all elements are non-negative, and each row sums to 1,

> apply(P,1,sum)
[1] 1 1 1 1 1 1 1 1 1 1

Here, the transition matrix is the following

> P
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,]    0    1 0.00 0.00 0.00  0.0 0.00  0.0    0     0
[2,]    0    0 0.33 0.66 0.00  0.0 0.00  0.0    0     0
[3,]    0    0 0.00 0.00 1.00  0.0 0.00  0.0    0     0
[4,]    0    0 0.00 0.00 0.66  0.0 0.33  0.0    0     0
[5,]    0    0 0.00 0.00 0.00  0.5 0.00  0.5    0     0
[6,]    0    0 0.00 0.00 0.00  0.0 0.00  0.0    1     0
[7,]    0    0 0.00 0.00 0.00  0.0 0.00  1.0    0     0
[8,]    0    0 0.00 0.00 0.00  0.0 0.00  0.0    1     0
[9,]    0    0 0.00 0.00 0.00  0.0 0.00  0.0    0     1
[10,]   0    0 0.00 0.00 0.00  0.0 0.00  0.0    0     1

In order to get our probability, let us start from state 1 – i.e. (nc,nh)=(3,0) – with probability 1, and let us look at the distribution at the different periods,

> dist=c(1,rep(0,k-1))
> MatDist=matrix(NA,2*n+1,k)
> MatDist[1,]=dist
> for(i in 1:(2*n)){dist=as.vector(t(dist)%*%P)
+ MatDist[i+1,]=dist
+ }

(one can check that after 2n days, the box is empty). The probability we are looking for is given in row 2n-1, and we just have to check which column corresponds to the pair (nc,nh)=(1,0),

> vs=state[which(MatDist[2*n-1,]>0),]
> proba=MatDist[2*n-1,vs[vs$nc==1,"s"]]
> proba
[1] 0.3888889

Here, the probability of having a full pill in the box on Day 5 is 38.89%.
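
As a cross-check of that 38.89%, one can also simulate the box directly (a small sketch, with the same 3 pills),

set.seed(1)
ns=100000
full_on_day5=rep(NA,ns)
for(i in 1:ns){
  box=rep(1,3)                    # 1 = full pill, .5 = half pill
  for(day in 1:4){                # the four draws before Day 5
    j=sample(length(box),1)
    if(box[j]==1){box[j]=.5} else {box=box[-j]}
  }
  full_on_day5[i]=any(box==1)
}
mean(full_on_day5)                # should be close to 0.3889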

Actually, it is possible to study the evolution of this probability as a function of n,

> computeproba=function(n=3){
+ COMPLETE=HALF=NULL
+ for(i in n:0){
+ HALF=c(0:(n-i),HALF)
+ COMPLETE=c(rep(i,length(0:(n-i))),COMPLETE)
+ }
+ k=length(COMPLETE)
+ state=data.frame(s=1:k,nc=rev(COMPLETE),nh=rev(HALF))
+ P=matrix(0,k,k)
+ for(i in 1:k){
+ C=state$nc[i]
+ H=state$nh[i]
+ if((C>0)&(H>0)){
+ P[i,state[(state$nc==C-1)&(state$nh==H+1),"s"]]= C/(C+H)
+ P[i,state[(state$nc==C)&(state$nh==H-1),"s"]]= H/(C+H)}
+ if((C>0)&(H==0)){
+ P[i,state[(state$nc==C-1)&(state$nh==H+1),"s"]]=1}
+ if((C==0)&(H>0)){
+ P[i,state[(state$nc==C)&(state$nh==H-1),"s"]]=1}
+ if((C==0)&(H==0)){
+ P[i,state[(state$nc==C)&(state$nh==H),"s"]]=1}
+ }
+ dist=c(1,rep(0,k-1))
+ MatDist=matrix(NA,2*n+1,k)
+ MatDist[1,]=dist
+ for(i in 1:(2*n)){dist=as.vector(t(dist)%*%P)
+ MatDist[i+1,]=dist
+ }
+ vs=state[which(MatDist[2*n-1,]>0),]
+ proba=MatDist[2*n-1,vs[vs$nc==1,"s"]]
+ return(proba)
+ }

If we plot the probability as a function of n, we get

> P=Vectorize(computeproba)(2:40)
> plot(2:40,P,ylim=c(0,.5))

One can observe that the probability is decreasing. But slowly, extremely slowly. With a log scale on the y-axis, we have

> plot(2:40,P,ylim=c(0,.5),log="y")

If we look for ‘high’ values, we can get

> computeproba(100)
[1] 0.14218

I do not know if this limit goes to 0 as n goes to infinity. Actually, since we do have to compute a matrix with [(n+1)(n+2)/2]^2 entries, i.e. roughly n^4/4 terms, n cannot be that large… Too bad. If anyone knows how this probability behaves as a function of n, when n is large, I’d be glad to know…

The law of small numbers

In insurance, the law of large numbers (named loi des grands nombres initially by Siméon Poisson, see e.g. http://en.wikipedia.org/…) is usually mentioned to legitimate large portfolios, because of pooling and diversification: the larger the pool, the more ‘predictable’ the losses will be (in a given period). Of course, this holds under standard statistical assumptions, namely finite expected value, and independence (see http://freakonometrics.blog.free.fr/…. for a discussion, in French). But in insurance, catastrophes are usually rare – and extremely costly – and actuaries might be interested in modeling the occurrence of that small number of events (see e.g. Aldous’ book on that specific topic, that can be downloaded from http://stat.berkeley.edu/…). The theorem behind this is sometimes called the law of small numbers (from the book published by Ladislaus Bortkiewicz, but we’ll get back to that story later on; see also Whitaker (1914) http://biomet.oxfordjournals.org/… or the book recently published by Michael Falk, Jürg Hüsler and Rolf-Dieter Reiss).

  • The Poisson distribution

The so-called Poisson distribution (see http://en.wikipedia.org/…) was introduced by Siméon Poisson in 1837 (in Recherches sur la Probabilité des Jugements en Matière Criminelle et en Matière Civile, Précédées des Règles Générales du Calcul des Probabilités, see http://gallica.bnf.fr/…). But it had been defined more than a century before, by Abraham De Moivre, in 1711, in De Mensura Sortis seu; de Probabilitate Eventuum in Ludis a Casu Fortuito Pendentibus (see e.g. the review in http://www.jstor.org/…). Let https://latex.codecogs.com/gif.latex?N denote a counting random variable; then it is said to be Poisson distributed if there is https://latex.codecogs.com/gif.latex?\lambda\in(0,\infty) such that

https://latex.codecogs.com/gif.latex?\mathbb{P}(N=k)=e^{-\lambda}\frac{\lambda^k}{k!},\forall%20k\in\mathbb{N}

De Moivre obtained that distribution from an approximation of the binomial distribution. Recall that the binomial distribution is a standard distribution in actuarial science, for instance to model the number of deaths among https://latex.codecogs.com/gif.latex?n insured. If individual death probabilities are identical, say https://latex.codecogs.com/gif.latex?p, and if deaths are independent events, then

https://latex.codecogs.com/gif.latex?\mathbb{P}(N=k)=\binom{n}{k}p^k(1-p)^{n-k},\forall%20k\in\{0,1,\cdots,n\}

And if https://latex.codecogs.com/gif.latex?n\rightarrow\infty and https://latex.codecogs.com/gif.latex?np\rightarrow%20\lambda, then

https://latex.codecogs.com/gif.latex?\mathbb{P}(N=k)\rightarrow%20e^{-\lambda}\frac{\lambda^k}{k!}

Again, this is an asymptotic theorem, which is valid when we have a lot of observations (https://latex.codecogs.com/gif.latex?n\rightarrow\infty), but also when the probability of occurrence is extremely small (since https://latex.codecogs.com/gif.latex?p\sim\lambda/n), which is why the term small numbers is used. Siméon Poisson was not interested in mathematical approximations: his main point was to get a distribution with nice goodness-of-fit properties for the data he was working on. He wanted to get a better understanding of cours d’assises (jury panels might be a valid translation of the French term). A jury consisted of 12 jurors who voted to determine whether a defendant was guilty. When guilt was predominant, with at least 8 votes against 4, the defendant was convicted (which happened in 47% of the criminal cases). With 7 votes against 5, the opinion of professional judges was requested (in 11% of the criminal trials). Using these statistics, one can show that the probability that a defendant brought before an assize court is guilty is of the order of 68%, and that the probability that a juror is not wrong when voting (condemning an innocent or releasing a culprit) was about 54%. He sought to calculate the probability that a defendant is wrongfully convicted, and got 2%. And 28% of exonerated defendants are in fact guilty. Siméon Poisson introduced this law to compute such probabilities easily. But the law he considered turned out to be central in probability…
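
A quick numerical illustration of De Moivre’s approximation (with arbitrary values, n=1000 and np=5, just for the sketch),

n=1000; p=5/n; k=0:15
round(rbind(binomial=dbinom(k,n,p), poisson=dpois(k,n*p)),4)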

  • The law of small numbers

The heuristic of the main theorem, related to the Poisson distribution is the following: let https://latex.codecogs.com/gif.latex?X_1,%20\cdots,X_n denote i.i.d random variables taking values in https://latex.codecogs.com/gif.latex?%20\mathbb{R}^d (in a general setting, one component can be the time, the other one an upper region of interest, where some stochastic process might be). Let https://latex.codecogs.com/gif.latex?\mathcal{A}_n\subset\mathbb{R}^d. If  https://latex.codecogs.com/gif.latex?\mathbb{P}(X_i%20\in%20\mathcal{A}_n)\rightarrow%200 as https://latex.codecogs.com/gif.latex?n\rightarrow\infty (or https://latex.codecogs.com/gif.latex?\mathbb{P}(X_i%20\in%20\mathcal{A}_n)=O(n^{-1}) to be a little bit more specific about the assumptions), let https://latex.codecogs.com/gif.latex?N denote the (random variable characterizing) count of events https://latex.codecogs.com/gif.latex?\{X_i%20\in%20\mathcal{A}_n\}, then https://latex.codecogs.com/gif.latex?N can be approximated by a Poisson distribution with parameter https://latex.codecogs.com/gif.latex?\lambda%20=n%20\times%20\mathbb%20P(X_i%20\in%20\mathcal{A}_n).
The heuristic is that if we consider a large number of observations, and if we count how many are in a given (small) region, then the number of such observations is Poisson distributed.

n=1000
X=runif(n)*10-1.5
Y=runif(n)*10-1.5
plot(X,Y,axes=FALSE,cex=.6)
u=seq(-1,1,by=.01)
v=sqrt(1-u^2)
polygon(c(u,rev(u)),c(v,rev(-v)),col="yellow",border=NA)
I=(X^2+Y^2)<1
points(X[I],Y[I],cex=.6,pch=19,col="red")

If we run some simulations,

>  n=1000
>  ns=100000
>  N=rep(NA,ns)
> for(s in 1:ns){
+ X=runif(n)*10-1.5
+ Y=runif(n)*10-1.5
+ I=(X^2+Y^2)<1
+ N[s]=sum(I)
+ }
> hist(N,breaks=0:60,probability=TRUE,col="yellow")
> mean(N)
[1] 31.41257

The parameter of the Poisson distribution is n times the ratio between the area of the yellow disk and the area of the square, i.e.

> (lambda=10*pi)
[1] 31.41593
> lines(0:60-.5,dpois(0:60,lambda),type="b",col="red")

To get an interpretation related to insurance modeling, let https://latex.codecogs.com/gif.latex?\mathcal{A} denote an upper layer in a reinsurance contract, i.e. https://latex.codecogs.com/gif.latex?\mathcal{A}=\{x%3Ed\} for some deductible https://latex.codecogs.com/gif.latex?d. Let https://latex.codecogs.com/gif.latex?X_i‘s denote individual losses. Then the number of claims that hit this upper layer can be modeled with a Poisson distribution. More precisely, if deductible https://latex.codecogs.com/gif.latex?d becomes extremely large (and https://latex.codecogs.com/gif.latex?\mathbb{P}(X_i%20\in%20\mathcal{A})\rightarrow%200), we obtain the peaks-over-threshold model in extreme value theory (see e.g. http://brale.math.hr/~iugrina/… or http://fire.nist.gov/bfrlpubs/…): if https://latex.codecogs.com/gif.latex?N has a Poisson distribution and, conditionally on https://latex.codecogs.com/gif.latex?N, https://latex.codecogs.com/gif.latex?X_1,\cdots,X_N are independent identically distributed generalized Pareto random variables, then https://latex.codecogs.com/gif.latex?\max\{X_1,\cdots,X_N\} has the generalized extreme value distribution. Thus, exceedance models (for rare events) are closely related to Poisson processes.

  • The Poisson process

As mentioned above, the Poisson distribution appears when events occur somehow randomly and independently, over time. It is then natural to study the time between two occurrences (or between two claims, in an insurance context): under the Poisson assumption, those waiting times are independent and exponentially distributed.
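
For instance, a small sketch (with an arbitrary intensity) showing that counting arrivals generated by exponential waiting times does give Poisson counts,

lambda=2; ns=10000
N=rep(NA,ns)
for(i in 1:ns){
  arrivals=cumsum(rexp(50,rate=lambda))   # 50 inter-arrival times are more than enough here
  N[i]=sum(arrivals<=1)                   # number of arrivals before time 1
}
rbind(empirical=table(factor(N,levels=0:8))/ns,
      poisson  =dpois(0:8,lambda))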

  • Poisson distribution, and claims occurrence

It is neither Siméon Poisson nor De Moivre, but Ladislaus von Bortkiewicz who first referred to the Poisson distribution as the law of small numbers. In 1898 (see https://archive.org/…), he studied the number of soldiers killed by horse kicks, from 1875 till 1894, in 200 corps-years (more precisely, 10 corps over 20 years).

He did obtain the following distribution (here, the parameter of the Poisson distribution is 0.61, i.e. the average number of deaths per year),

number of deaths per year   empirical counts   Poisson distribution
0                                        109                 108.67
1                                         65                  66.21
2                                         22                  20.22
3                                          3                   4.11
4                                          1                   0.63
5 and more                                 0                   0.08
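
The fitted column above can essentially be recovered with one line of code (a sketch, with lambda=0.61 over 200 corps-years),

round(200*dpois(0:4,0.61),2)          # expected counts for 0, 1, 2, 3 and 4 deaths
round(200*(1-ppois(4,0.61)),2)        # the '5 and more' cell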

It is possible to find a lot of cases where the Poisson distribution fits extremely well. For instance, if we consider the number of hurricanes that made landfall in Florida after 1850,

number of hurricanes per year   empirical frequency   Poisson frequency
0                                                30               27.16
1                                                48               47.99
2                                                37               42.41
3                                                29               24.98
4                                                 8               11.03
5                                                 3                3.90
6                                                 3                1.15
7                                                 1                0.29
8 and more                                        0                0.08

  • Poisson distribution, and return period

The return period was introduced by Emil Gumbel, in hydrology, to link probabilities and durations (see e.g. http://freakonometrics.blog.free.fr/…). A decennial event has an occurrence probability of 1/10; 10 years is then the average waiting time before occurrence. This does not mean that the event will not occur before 10 years, nor that it has to occur within 10 years. Consider a return period https://latex.codecogs.com/gif.latex?T (in years); then the yearly probability of non-occurrence is https://latex.codecogs.com/gif.latex?1-(1/T).

And the probability of observing at least one event over https://latex.codecogs.com/gif.latex?n years is then https://latex.codecogs.com/gif.latex?1-[1-(1/T)]^n. It is standard to summarize this property with the following table,

(probability of observing at least one catastrophe within n years of observation, for a given return period T, in years)

                          return period T
number of years n      10      20      50     100     200
 10                 65.1%   40.1%   18.3%    9.6%    4.9%
 20                 87.8%   64.2%   33.2%   18.2%    9.5%
 50                 99.5%   92.3%   63.6%   39.5%   22.5%
100                 99.9%   99.4%   86.7%   63.4%   39.5%
200                 99.9%   99.9%   98.2%   86.6%   63.3%
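
The table can be reproduced with a single call to outer() (a small sketch),

Tr=c(10,20,50,100,200)   # return periods
ny=c(10,20,50,100,200)   # number of years of observation
round(100*outer(ny,Tr,function(n,T) 1-(1-1/T)^n),1)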

The diagonal in the table above is extremely interesting. It looks like there is some kind of convergence towards a limiting value (here 63.2%). Indeed, the number of events observed over n years has a Binomial distribution, with probability https://latex.codecogs.com/gif.latex?1/T=1/n, which converges towards the Poisson distribution with parameter 1. The probability of having at least one catastrophe is then https://latex.codecogs.com/gif.latex?1-\exp(-1), which is equal to 0.632.

  • Rare probabilities and the Poisson distribution

The Poisson distribution keeps appearing when computing probabilities of rare events. For instance, the probability to have at least one incident in a nuclear plant in France, over a 50 year period. Assume that the annual probability https://latex.codecogs.com/gif.latex?p of an incident in a reactor is small, e.g. 0.005% (the value used in the code below). Assume further that reactors are independent of one another, and over time. The probability to have at least one incident over 80 reactors in 50 years is (exactly)

https://latex.codecogs.com/gif.latex?\mathbb{P}(N\neq0)=1-(1-p)^{50%20\times%2080}

Of course, a linear approximation is not correct (even if it was mentioned in some French newspaper, as explained in an old post http://freakonometrics.blog.free.fr/…)

https://latex.codecogs.com/gif.latex?\mathbb%20P(N\neq%200)\neq%2050\times%2080\times%20p

On the other hand

https://latex.codecogs.com/gif.latex?\mathbb%20P(N\neq 0)=1-(1-p)^{50\times80%20}%20\sim1-\exp\left(-50\times80\times%20p%20\right)

> p=0.00005
> 1-(1-p)^(50*80)
[1] 0.1812733
> 1-exp(-50*80*p)
[1] 0.1812692

which is the probability that https://latex.codecogs.com/gif.latex?N is not null when https://latex.codecogs.com/gif.latex?N has a Poisson distribution with parameter https://latex.codecogs.com/gif.latex?\lambda=50\times80\times%20p. We clearly see here an application of De Moivre’s approximation in risk management.

Another way of looking at this problem is based on the following idea: given the fact that in 45 years of observations on 450 reactors worldwide (roughly), three major accidents were observed, including Three Mile Island (1979) and Fukushima (2011), the average time between accidents can be estimated at 16 years. For a single reactor, we can assume that the average time to wait before an incident is 450 times 16 years, i.e. 7200 years. In other words, the probability to have one incident, over one year, for one reactor is 1 over 7200 (this is the idea behind the return period concept). If we assume that accidents occur randomly and independently of each other (as defined above), then the number of major accidents observed over a period of 50 years in France follows a Poisson distribution with parameter 50/(7200/80). Also, the probability of having at least one major accident over 50 years, with 80 reactors, can be estimated by

https://latex.codecogs.com/gif.latex?1-\exp(-50\times%2080/7200)

i.e.

> 1-exp(-50*80/7200)
[1] 0.4262466

(keeping in mind all the uncertainty around the estimated waiting time before a major accident to a single reactor!).

UEFA, is that it ?

Following my previous post, a few more things. As mentioned by Frédéric, it is – indeed – possible to compute the probability of each pair. More precisely, all pairs are not equally likely to occur: some teams can play against (almost) everyone, while others cannot. From the previous table, it is possible to compute the probability that the last team plays against team 1. Or against team 2 (the numbers are the ones from the xls file mentioned previously). To make it simple,

> table(M[,2*n])/length(M[,2*n])*100

       1        2        3        5        7       10       11 
11.82500 12.61212 12.61212 13.25279 19.31173 18.70767 11.67856

Here, the last team (as I did rank them) has an 11.8% chance to play against team 1, and a 19.3% chance to play against team 7. If we compute all the probabilities, we obtain

> S
       1     2     3     5     7    10    11    13
4   0.00 14.16 14.16  0.00 22.22 21.25 13.05 15.13
6  12.52 13.19 13.19 14.11 20.13  0.00 12.35 14.47
8  18.78  0.00 19.54 21.50  0.00  0.00 18.39 21.76
9  18.78 19.54  0.00 21.50  0.00  0.00 18.39 21.76
12 14.68 15.54 15.54 16.56  0.00 23.19 14.47  0.00
14 11.64 12.37 12.37 13.05 18.96 18.25  0.00 13.34
15 11.77 12.55 12.55  0.00 19.36 18.59 11.64 13.50
16 11.82 12.61 12.61 13.25 19.31 18.70 11.67  0.00

that can be visualized below

White areas cannot be reached, while red ones are more likely. Here, we compute the probability that the home team (given on the x-axis) plays against some visiting team (on the y-axis). The fact that those probabilities are not uniform seems odd. But I guess it comes from those constraints…
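
For completeness, here is one possible way to draw such a graph (a sketch only: it assumes that S is the 8 by 8 matrix of probabilities printed above, with home teams as row names and visitor teams as column names; the colors are an arbitrary choice),

image(1:8,1:8,as.matrix(S),col=c("white",rev(heat.colors(12))),
      xlab="home team",ylab="visitor team",axes=FALSE)
axis(1,at=1:8,labels=rownames(S))
axis(2,at=1:8,labels=colnames(S))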

Another weird point: it is possible to reach a deadlock, at least with the technique I have been using. So far, I did not count them. But we can, simply with the following code (initializing the deadlock counter na, and the scenario counter s, first),

> na=0
> s=0
> U=c(4,6,8,9,12,14,15,16)
> a1=U[1]
> b1=U[2]
> c1=U[3]
> d1=U[4]
> e1=U[5]
> f1=U[6]
> g1=U[7]
> h1=U[8]
> a2=b2=c2=d2=e2=f2=g2=h2=NA
> posa2=(1:n)%notin%c(LISTEIMPOSSIBLE[,a1])
> if(length(posa2)==0){na=na+1}
> for(a2 in posa2){
+ posb2=(1:n)%notin%c(LISTEIMPOSSIBLE[,b1],a2)
+ if(length(posb2)==0){na=na+1}
+ for(b2 in posb2){
+ posc2=(1:n)%notin%c(LISTEIMPOSSIBLE[,c1],a2,b2)
+ if(length(posc2)==0){na=na+1}
+ for(c2 in posc2){
+ posd2=(1:n)%notin%c(LISTEIMPOSSIBLE[,d1],
+ a2,b2,c2)
+ if(length(posd2)==0){na=na+1}
+ for(d2 in posd2){
+ pose2=(1:n)%notin%c(LISTEIMPOSSIBLE[,e1],
+ a2,b2,c2,d2)
+ if(length(pose2)==0){na=na+1}
+ for(e2 in pose2){
+ posf2=(1:n)%notin%c(LISTEIMPOSSIBLE[,f1],
+ a2,b2,c2,d2,e2)
+ if(length(posf2)==0){na=na+1}
+ for(f2 in posf2){
+ posg2=(1:n)%notin%c(LISTEIMPOSSIBLE[,g1],
+ a2,b2,c2,d2,e2,f2)
+ if(length(posg2)==0){na=na+1}
+ for(g2 in posg2){
+ posh2=(1:n)%notin%c(LISTEIMPOSSIBLE[,h1],
+ a2,b2,c2,d2,e2,f2,g2)
+ if(length(posh2)==0){na=na+1}
+ for(h2 in posh2){
+ s=s+1
+ V=c(a1,a2,b1,b2,c1,c2,d1,d2,e1,e2,f1,f2,g1,g2,h1,h2)
+ }}}}}}}}

With the initial ordering of the home teams, the number of deadlocks was

> na
[1] 657

The probability of obtaining a deadlock is then

> 657/(657+5463)
[1] 0.1073529

(657 scenarios ended in a dead end, while 5463 ended well). The worst case was obtained when we considered

 [1]    6    4   16   14   12   15    8    9

In that case, the probability of obtaining a deadlock was

> 4047/(4047+5463)
[1] 0.4255521

Here, it clearly depends on the ordering. So if we draw – randomly – the order of the home teams, i.e.

> Urandom=sample(U,size=8)

the distribution of the probability of having a deadlock is

All those computations were based on my understanding of the draw. But Kristof (aka @ciebiera), on his blog krzysztofciebiera.blogspot.ca/…, obtained different results. For instance, based on my previous computations, the probability to obtain identical pairs was 0.0183049% (1 chance out of 5463), but Kristof obtained – based on the UEFA procedure (as he called it) – a probability of 0.0181337%. Which is not – strictly – the same, but both computations yield relatively close results…

UEFA, what were the odds ?

Ok, I was supposed to take a break, but Frédéric, a professor in Tours, came back to me this morning with a tickling question. He asked me what were the odds that the Champions League draw would produce exactly the same pairings in the practice draw and in the official one (see e.g. dailymail.co.uk/…).

To be honest, I don’t know much about soccer, so here is what happened, with the practice draw (on the left, on December 19th) and the official one (on the right, on December 20th),

Clearly, the pairs are identical, but not the order. Actually, at first, I was surprised that even which team plays at home first was identical. But (it seems that) the teams that play at home first are the ones that ended second after the previous stage of the competition.

And to be more specific about those draws, the pairs were obtained using real urns, real balls, so it is pure randomness (again, as far as I understood). But with very specific rules. For instance, two teams from the same country cannot play together (i.e. one against the other) at this stage. And teams that ended first after the previous round can only play with (or against) teams that ended second. Actually, Frédéric sent me an xls file, with a possibility matrix.

Let us find all possible pairs, regardless of which team plays at home first (again, we do not care here, since the order is defined by the rule mentioned above). Doing the maths might have been a bit complicated, with all those constraints. With a small code, it is possible to list all possible pairs, for those eight games. Let us import our possibility matrix,

 > n=16
 > uefa=read.table(
 + "http://freakonometrics.blog.free.fr/public/data/uefa.csv",
 + sep=",",header=TRUE)
 > LISTEIMPOSSIBLE=matrix(
 + (rep(1:n,n))*(uefa[1:n,2:(n+1)]=="NON"),n,n)

I can fix the first team (in my list, the fourth one is the first team that ended second). Then, I look at all its possible opponents (the teams that can play against that first one),

 > a1=1
 > "%notin%" <- function(x, table){x[match(x, table, nomatch = 0) == 0]}
 > posa2=((a1+1):n)%notin%LISTEIMPOSSIBLE[,a1]

Then, consider the second team that ended second (the sixth one in my list). And look at all its possible opponents for that second game, i.e. excluding the ones that were already drawn, and those that are not possible,

 > b1=6
 > posb2=(1:n)%notin%c(LISTEIMPOSSIBLE[,b1],a2)

Etc. So, given the list of home teams,

 > a1=4
 > b1=6
 > c1=8
 > d1=9
 > e1=12
 > f1=14
 > g1=15
 > h1=16

consider the following loops (initializing the counter s, and the matrix M that will store the solutions),

 > s=0
 > M=NULL
 > posa2=(1:n)%notin%c(LISTEIMPOSSIBLE[,a1])
 > for(a2 in posa2){
 + posb2=(1:n)%notin%c(LISTEIMPOSSIBLE[,b1],a2)
 + for(b2 in posb2){
 + posc2=(1:n)%notin%c(LISTEIMPOSSIBLE[,c1],a2,b2)
 + for(c2 in posc2){
 + posd2=(1:n)%notin%c(LISTEIMPOSSIBLE[,d1],a2,b2,c2)
 + for(d2 in posd2){
 + pose2=(1:n)%notin%c(LISTEIMPOSSIBLE[,e1],a2,b2,c2,d2)
 + for(e2 in pose2){
 + posf2=(1:n)%notin%c(LISTEIMPOSSIBLE[,f1],a2,b2,c2,d2,e2)
 + for(f2 in posf2){
 + posg2=(1:n)%notin%c(LISTEIMPOSSIBLE[,g1],a2,b2,c2,d2,e2,f2)
 + for(g2 in posg2){
 + posh2=(1:n)%notin%c(LISTEIMPOSSIBLE[,h1],a2,b2,c2,d2,e2,f2,g2)
 + for(h2 in posh2){
 + s=s+1
 + V=c(a1,a2,b1,b2,c1,c2,d1,d2,e1,e2,f1,f2,g1,g2,h1,h2)
 + cat(s,V,"\n") 
 + M=rbind(M,V)
 + }}}}}}}}

With the print option, we end up with

5461 4 13 6 11 8 5 9 2 12 10 14 3 15 7 16 1 
5462 4 13 6 11 8 5 9 2 12 10 14 7 15 1 16 3 
5463 4 13 6 11 8 5 9 2 12 10 14 7 15 3 16 1

i.e.

> nrow(M)
[1] 5463

possible pairs (the list can be found here, where the numbers are the same as the ones in the csv file). Which was the probability mentioned in a comment in the article mentioned previously, dailymail.co.uk/…. So the probability to have exactly the same output in the practice and the official draws was (in %)

> 100/nrow(M)
[1] 0.01830496

Which is not that small when we think about it….

And if someone has a mathematical expression for this probability, I am interested. The only reliable method I found was to list all possible pairs (the csv file is available if someone wants to check). But I am not satisfied….