Tag Archives: odds

UEFA, is that it?

Following my previous post, a few more things. As mentioned by Frédéric, it is – indeed – possible to compute the probability of each pair. More precisely, all pairs are not equally likely to occur: some teams can play against (almost) everyone, while others cannot. From the previous table, it is possible to compute the probability that the last team plays against team 1. Or team 2 (numbers are from the xls file mentioned previously). To keep it simple,

> table(M[,16])/length(M[,16])*100

       1        2        3        5        7       10       11 
11.82500 12.61212 12.61212 13.25279 19.31173 18.70767 11.67856

Here, the last team (as I ranked them) has an 11.8% chance of playing against team 1, and a 19.3% chance of playing against team 7. If we compute all the probabilities, we obtain

> S
       1     2     3     5     7    10    11    13
4   0.00 14.16 14.16  0.00 22.22 21.25 13.05 15.13
6  12.52 13.19 13.19 14.11 20.13  0.00 12.35 14.47
8  18.78  0.00 19.54 21.50  0.00  0.00 18.39 21.76
9  18.78 19.54  0.00 21.50  0.00  0.00 18.39 21.76
12 14.68 15.54 15.54 16.56  0.00 23.19 14.47  0.00
14 11.64 12.37 12.37 13.05 18.96 18.25  0.00 13.34
15 11.77 12.55 12.55  0.00 19.36 18.59 11.64 13.50
16 11.82 12.61 12.61 13.25 19.31 18.70 11.67  0.00

that can be visualized below

White areas cannot be reached, while red ones are more likely. Here, we compute the probability that a home team (on the x-axis) plays against some visiting team (on the y-axis). The fact that those probabilities are not uniform seems odd. But I guess it comes from those constraints…
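For those who want to reproduce the figure, here is a minimal sketch (mine, not from the original post) of one way to draw such a heatmap, assuming S is the 8×8 matrix of probabilities printed above,

# sketch: heatmap of the matrix S printed above (rows = home teams,
# columns = visitors); impossible pairs (probability 0) are shown in white
home    = c(4,6,8,9,12,14,15,16)
visitor = c(1,2,3,5,7,10,11,13)
S = as.matrix(S)
cols = c("white", rev(heat.colors(24)))
image(1:8, 1:8, S, breaks=seq(0, max(S), length=26), col=cols,
      axes=FALSE, xlab="home team", ylab="visitor team")
axis(1, at=1:8, labels=home)
axis(2, at=1:8, labels=visitor)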

Another weird point: it is possible to reach a deadlock, at least with the technique I have been using. So far, I did not count them. But we can count them, with the following code (the counters na and s start at 0),

> na=s=0
> U=c(4,6,8,9,12,14,15,16)
> a1=U[1]
> b1=U[2]
> c1=U[3]
> d1=U[4]
> e1=U[5]
> f1=U[6]
> g1=U[7]
> h1=U[8]
> a2=b2=c2=d2=e2=f2=g2=h2=NA
> posa2=(1:n)%notin%c(LISTEIMPOSSIBLE[,a1])
> if(length(posa2)==0){na=na+1}
> for(a2 in posa2){
+ posb2=(1:n)%notin%c(LISTEIMPOSSIBLE[,b1],a2)
+ if(length(posb2)==0){na=na+1}
+ for(b2 in posb2){
+ posc2=(1:n)%notin%c(LISTEIMPOSSIBLE[,c1],a2,b2)
+ if(length(posc2)==0){na=na+1}
+ for(c2 in posc2){
+ posd2=(1:n)%notin%c(LISTEIMPOSSIBLE[,d1],
+ a2,b2,c2)
+ if(length(posd2)==0){na=na+1}
+ for(d2 in posd2){
+ pose2=(1:n)%notin%c(LISTEIMPOSSIBLE[,e1],
+ a2,b2,c2,d2)
+ if(length(pose2)==0){na=na+1}
+ for(e2 in pose2){
+ posf2=(1:n)%notin%c(LISTEIMPOSSIBLE[,f1],
+ a2,b2,c2,d2,e2)
+ if(length(posf2)==0){na=na+1}
+ for(f2 in posf2){
+ posg2=(1:n)%notin%c(LISTEIMPOSSIBLE[,g1],
+ a2,b2,c2,d2,e2,f2)
+ if(length(posg2)==0){na=na+1}
+ for(g2 in posg2){
+ posh2=(1:n)%notin%c(LISTEIMPOSSIBLE[,h1],
+ a2,b2,c2,d2,e2,f2,g2)
+ if(length(posh2)==0){na=na+1}
+ for(h2 in posh2){
+ s=s+1
+ V=c(a1,a2,b1,b2,c1,c2,d1,d2,e1,e2,f1,f2,g1,g2,h1,h2)
+ }}}}}}}}

With the initial ordering of the home teams, the number of deadlocks was

> na
[1] 657

The probability of obtaining a deadlock is then

> 657/(657+5463)
[1] 0.1073529

(657 scenarios ended in a deadlock, while 5463 ended well). The worst case was obtained with the ordering

 [1]    6    4   16   14   12   15    8    9

In that case, the probability of obtaining a deadlock was

> 4047/(4047+5463)
[1] 0.4255521

Here, it clearly depends on the ordering. So if we draw – randomly – the order of the home teams, i.e.

> Urandom=sample(U,size=8)

the distribution of the probability of having a deadlock is
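A minimal sketch of how that distribution can be obtained (assuming the nested loops above are wrapped in a hypothetical helper deadlock_prob(U), returning na/(na+s) for a given ordering U of the home teams):

# hypothetical helper: deadlock_prob(U) runs the nested loops above for
# the ordering U and returns na/(na+s), the proportion of deadlocks
nsim = 1000
PD = rep(NA, nsim)
for(sim in 1:nsim){
  PD[sim] = deadlock_prob(sample(U, size=8))
}
hist(PD, xlab="probability of a deadlock", main="")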

All those computations were based on my understanding of the draw. But Kristof (aka @ciebiera), on his blog krzysztofciebiera.blogspot.ca/…, obtained different results. For instance, based on my previous computations, the probability of obtaining identical pairs was 0.0183050% (1 chance out of 5463), but Kristof obtained – based on the UEFA procedure (as he called it) – a probability of 0.0181337%. Which is not – strictly – the same, but both computations yield relatively close results…

UEFA, what were the odds?

Ok, I was supposed to take a break, but Frédéric, a professor in Tours, came back to me this morning with an intriguing question. He asked me what the odds were that the Champions League draw would produce exactly the same pairings in the practice draw and in the official one (see e.g. dailymail.co.uk/…).

To be honest, I don’t know much about soccer, so here is what happened, with the practice draw (on the left, on December 19th) and the official one (on the right, on December 20th),


Clearly, the pairs are identical, but not their order. Actually, at first, I was surprised that even which team plays at home first was identical. But (it seems that) the teams that play at home first are the ones that ended second in the previous stage of the competition.

And to be more specific about those draws, the pairs were obtained using real urns and real balls, so it is pure randomness (again, as far as I understood). But with very specific rules. For instance, two teams from the same country cannot play against each other at this stage. And teams that ended first in the previous round can only play against teams that ended second. Actually, Frédéric sent me an xls file, with a possibility matrix.

Let us find all possible pairs, regardless of which team plays at home first (again, we do not care here, since the order is defined by the rule mentioned above). Doing the maths might have been a bit complicated, with all those constraints. With a bit of code, it is possible to list all possible pairs, for those eight games. Let us import our possibility matrix,

 > n=16
 > uefa=read.table(
 + "http://freakonometrics.blog.free.fr/public/data/uefa.csv",
 + sep=",",header=TRUE)
 > LISTEIMPOSSIBLE=matrix(
 + (rep(1:n,n))*(uefa[1:n,2:(n+1)]=="NON"),n,n)

I can fix the first team (in my list, the fourth one is the first team that ended second). Then, I look at all its possible opponents (the team that will play against it),

 > a1=1
 > "%notin%" <- function(x, table){x[match(x, table, nomatch = 0) == 0]}
 > posa2=((a1+1):n)%notin%LISTEIMPOSSIBLE[,a1]

Then, consider the second team that ended second (the sixth one in my list), and look at all its possible opponents (the fourth team drawn, which will play this second game), i.e. excluding the ones that were already drawn, and those that are not possible,

 > b1=6
 > posb2=(1:n)%notin%c(LISTEIMPOSSIBLE[,b1],a2)

Etc. So, given the list of home teams,

 > a1=4
 > b1=6
 > c1=8
 > d1=9
 > e1=12
 > f1=14
 > g1=15
 > h1=16

consider the following loops,

 > s=0
 > M=NULL
 > posa2=(1:n)%notin%c(LISTEIMPOSSIBLE[,a1])
 > for(a2 in posa2){
 + posb2=(1:n)%notin%c(LISTEIMPOSSIBLE[,b1],a2)
 + for(b2 in posb2){
 + posc2=(1:n)%notin%c(LISTEIMPOSSIBLE[,c1],a2,b2)
 + for(c2 in posc2){
 + posd2=(1:n)%notin%c(LISTEIMPOSSIBLE[,d1],a2,b2,c2)
 + for(d2 in posd2){
 + pose2=(1:n)%notin%c(LISTEIMPOSSIBLE[,e1],a2,b2,c2,d2)
 + for(e2 in pose2){
 + posf2=(1:n)%notin%c(LISTEIMPOSSIBLE[,f1],a2,b2,c2,d2,e2)
 + for(f2 in posf2){
 + posg2=(1:n)%notin%c(LISTEIMPOSSIBLE[,g1],a2,b2,c2,d2,e2,f2)
 + for(g2 in posg2){
 + posh2=(1:n)%notin%c(LISTEIMPOSSIBLE[,h1],a2,b2,c2,d2,e2,f2,g2)
 + for(h2 in posh2){
 + s=s+1
 + V=c(a1,a2,b1,b2,c1,c2,d1,d2,e1,e2,f1,f2,g1,g2,h1,h2)
 + cat(s,V,"\n") 
 + M=rbind(M,V)
 + }}}}}}}}

With the print option, we end up with

5461 4 13 6 11 8 5 9 2 12 10 14 3 15 7 16 1 
5462 4 13 6 11 8 5 9 2 12 10 14 7 15 1 16 3 
5463 4 13 6 11 8 5 9 2 12 10 14 7 15 3 16 1

i.e.

> nrow(M)
[1] 5463

possible pairings (the list can be found here, where the numbers are the same as in the csv file). This matches the probability mentioned in a comment in the article cited previously, dailymail.co.uk/…. So the probability of obtaining exactly the same outcome in the practice and the official draws was (in %)

> 100/nrow(M)
[1] 0.01830496

Which is not that small when we think about it….

And if someone has a mathematical expression for this probability, I am interested. The only reliable method I found was to list all possible pairs (the csv file is available if someone wants to check). But I am not satisfied….
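One possible closed form, for what it is worth (my own remark, not in the original post): the number of admissible pairings is the number of perfect matchings of the bipartite compatibility graph, i.e. the permanent of the 8×8 0/1 matrix A with A[i,j]=1 when home team i may face visitor j. A small sketch, by expansion along the first row (if A is built consistently with LISTEIMPOSSIBLE, perm(A) should return 5463, though I have not checked this against the xls file):

# permanent of a 0/1 matrix, by expansion along the first row;
# for a bipartite compatibility matrix, this counts perfect matchings
perm = function(A){
  if(nrow(A) == 1) return(A[1,1])
  s = 0
  for(j in which(A[1,] > 0)){
    s = s + A[1,j] * perm(A[-1, -j, drop=FALSE])
  }
  s
}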

Comments on probabilities

The only thing I remember from the probability courses I took a few years ago is that we always have to clearly define the event whose probability we want to compute. On the Freakonomics blog, last week, the Israeli lottery was mentioned (here; see also there, where I mentioned it, along with odd facts from the French lottery),

Yesterday, Andrew Gelman claimed (here) that there was a probability error… Well, since Andrew is really a statistician (and a good one… while I am barely an economist), I tried to do the maths… and to understand where the error was coming from…

Since 6 numbers are drawn out of a pool of numbers from 1 to 37, the total number of combinations at each lottery draw is

$$n=\binom{37}{6}$$
> (n=choose(37,6))
[1] 2324784

Over 8 lotteries (since there are two draws per week, we can assume there are 8 draws per month), the probability of having no two identical draws is

$$\frac{n(n-1)\cdots(n-7)}{n^{8}}=\prod_{i=0}^{7}\frac{n-i}{n}$$

Here is the R code for those who want to check, again,

> prod(n-0:7)/n^8
[1] 0.999988

Each month, the probability of a “coincidence” (I define a “coincidence” as the event “over 8 draws, at least twice, we obtained the same 6-tuple”, or more precisely (as mentioned here) “over one calendar month, at least twice, we obtained the same 6-tuple”) is

> (p=1-(prod(n-0:7)/n^8))
[1] 1.204407e-05

The occurrence of a coincidence each month has a Geometric distribution, with probability p. And it is classical, following Gumbel’s definition (here), to consider 1/p, called the “return period”, i.e. the expected number of months we have to wait until we observe a coincidence (i.e. a repetition within the same month), since, for a geometric distribution,

$$\mathbb{E}[N]=\frac{1}{p}$$
> 1/p/(12)
[1] 6919.034

Here, the (expected) return period is 6919 years.

From my point of view, this is “the incident of six numbers repeating themselves within a calendar month”, an event that occurs, on average, once every 6919 years. On the other hand, the median of a geometric distribution is (up to integer rounding)

$$\mathrm{median}(N)=\frac{-\log 2}{\log(1-p)}$$
> -log(2)/log(1-p)/(12)
[1] 4795.88

which means that we have a 50% chance of observing such a coincidence within 4796 years.

Of course, we can instead consider a longer period, say 100 draws, i.e. roughly one year (here I define a “coincidence” as the event “over 100 draws, at least twice, we obtained the same 6-tuple”); in red is the expected return period, and in blue the median of the geometric distribution,

> M=E=rep(NA,100)
> for(i in 2:100){
+ p=1-exp((sum(log(n-0:(i-1)))-i*log(n)))
+ E[i]=1/p/(100/i)
+ M[i]=-log(2)/log(1-p)/(100/i)
+ }
> plot(1:100,E,ylim=c(0,10000),type="l",col="red",lwd=2)
> lines(1:100,M,col="blue",lwd=2)
> abline(v=8,lty=2)
> points(8,E[8],pch=19,col="red")
> points(8,M[8],pch=19,col="blue")

or, below, a log-scaled version,

As Xi’an did (here), assume now that a similar lottery is run in each of 100 countries. Here I define a “coincidence” as the event “over the k lottery draws in each of the 100 countries around the world, at least twice, we obtained the same 6-tuple”, and the previous graph becomes (with k on the x-axis)

Here, there is about a 12% chance of a coincidence, if we consider the probability of having identical numbers somewhere in the world over a month…

But there, one 6-tuple can come out in Israel, and the other one in Egypt, say… If we want the same 6-tuple to come out twice in the same country, the graph is now

i.e. each month there is about one chance in a thousand…

> i=8
> p=1-exp((sum(log(n-0:(i-1)))-i*log(n)))
> 1-(1-p)^100
[1] 0.001203689

Note: actually, Xi’an mentioned that the probability that this coincidence [of two identical draws over 188 draws] occurred in at least one out of 100 lotteries (there are hundreds of similar lotteries across the World) is 53%! And I got the same,
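(The vector P is not defined above; presumably it collects the coincidence probabilities as a function of the number of draws, computed as in the earlier loop, e.g.)

# coincidence probability P[i] over i draws (same formula as above)
P = rep(NA, 200)
for(i in 2:200){
  P[i] = 1 - exp(sum(log(n - 0:(i-1))) - i*log(n))
}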

> 1-(1-P[188])^100
[1] 0.5305219

Lottery, and martingales

I recently got a comment on a post I published one year ago, here, about the fact that in September 2009, on the 6th and the 10th, the same 6 numbers came out in the lottery, in Bulgaria (but I do not understand the question: the author of the comment asks about the order in which the numbers came out…). Xi'an also published a post on that topic, there, since last week the same thing happened in Israel. All that reminded me of a discussion I had with a colleague about another post (here) where I mentioned that I had found a strange distribution of numbers in the French lottery (the old one, actually). For those who want to check, all the historical draws are here, in a zip file. My colleague was wondering if I had found the martingale to win the lottery…

First, I do not like that term, since a martingale is something quite different from a mathematical point of view… Second, let us see whether it would have been possible to make some money… (a free lunch?)

> loto=read.table("D:\\loto.csv",dec=",",header=TRUE,sep=";")
> ntirage=nrow(loto)
> loto=loto[51:ntirage,]
> ntirage=nrow(loto)
> N=as.matrix(loto[,c("boule_1","boule_2","boule_3","boule_4","boule_5","boule_6")])
> n=as.vector(N)
> length(n)
[1] 28848
> (TN=table(n))
n
1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19  20
607 576 571 618 579 598 608 582 588 590 562 577 577 580 591 630 558 567 594 608
21  22  23  24  25  26  27  28  29  30  31  32  33  34  35  36  37  38  39  40
578 562 579 583 574 589 602 572 550 598 604 582 545 646 597 618 599 636 609 588
41  42  43  44  45  46  47  48  49
576 589 577 585 618 596 560 571 604

So, it might look nice, but we have to compare that distribution with the one we would get from “independent” draws. We cannot simply use a discrete uniform distribution: the six numbers within a draw are not independent. Each day, the 49 balls go back into the urn, but within a day, the draws are not independent (balls are sampled without replacement). Hence, with 4808 lottery draws, each number cannot be obtained more than 4808 times. So, let us use Monte Carlo techniques to look at the theoretical distribution,

> M=matrix(NA,49,1000)
> for(s in 1:1000){
+ B=NA
+ for(i in 1:ntirage){B=c(B,sample(1:49,size=6,replace=FALSE))}
+ B=B[-1]
+ M[,s]=sort(table(B))
+ }
> q50=function(x){quantile(x,.5)}
> Q50=apply(M,1,q50)
> q10=function(x){quantile(x,.1)}
> Q10=apply(M,1,q10)
> q90=function(x){quantile(x,.9)}
> Q90=apply(M,1,q90)
> # a plot has to be opened before adding the simulation bands
> plot(1:49,sort(TN),pch=19,type="b",ylim=range(c(Q10,Q90,TN)))
> polygon(c(1:49,49:1),c(Q10,rev(Q90)),col="light blue",border=NA)
> lines(1:49,Q10,col="red",lty=2)
> lines(1:49,Q90,col="red",lty=2)
> lines(1:49,Q50,col="red",lwd=2)
> points(1:49,sort(TN),pch=19,type="b")

Looking at the graph, it looks like some numbers came out more often than the simulations suggest, especially among the numbers that appeared the least (bottom left). So, since I have removed the last 50 draws, let us see if we could have used that information, somehow…

> nb=names(sort(TN))
> loto=read.table("D:\\loto.csv",dec=",",header=TRUE,sep=";")
> loto=loto[1:50,]
> N=as.matrix(loto[,c("boule_1","boule_2","boule_3","boule_4","boule_5","boule_6")])
> n=as.vector(N)
> TN=table(n)
> TN[nb]
> barplot(TN[nb])

Unfortunately, numbers that came out too frequently over the first 4800 draws did not appear that frequently over the last 50. Playing the top numbers might not have been a great strategy.

(numbers that came out frequently are on the right, while those we did not see much are on the left)… What about the worst numbers: if I had decided to play the 6 numbers that did not come out very frequently (we’ve seen earlier that they should have appeared even less, actually), would it have paid off? As we can see, our top 2 numbers over the last 50 draws were numbers that had not appeared frequently earlier (29 and 47 appeared 10 and 11 times, respectively, over those 50 draws)…
Over 50 draws of 6 balls, the expected frequency of 6 given numbers is around 36.7,

> S=rep(NA,10000)
> for(s in 1:10000){
+ B=NA
+ for(i in 1:50){B=c(B,sample(1:49,size=6,replace=FALSE))}
+ B=B[-1]
+ S[s]=sum(B%in%(1:6))
+ }
> mean(S)
[1] 36.7694
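
(This simulated value matches the exact computation: each of the 6 chosen numbers comes out in a given draw with probability 6/49, so the expected total over 50 draws is 50×6×6/49,

> 50*6*6/49
[1] 36.73469

which is indeed about 36.7.)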

But here for the top 6, we have

> z=TN[nb]
> sum(rev(z)[1:6])
[1] 29

i.e. the top 6 appeared 29 times over 50 draws of 6 balls (which looks low), while for the worst 6, it is a bit higher,

>  sum(z[1:6])
[1] 38

If we look at the theoretical density of the frequency of 6 given numbers, we have
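
(The figure can be reproduced from the simulations above with something like the following sketch, where 29 and 38 are the counts obtained for the top 6 and the worst 6:)

# density of the simulated frequency S, with the observed counts of the
# top 6 (29, in blue) and the worst 6 (38, in green)
plot(density(S), xlab="frequency of 6 given numbers over 50 draws", main="")
abline(v=29, col="blue", lwd=2)
abline(v=38, col="green", lwd=2)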

i.e. our worst 6 numbers sit nicely in the middle of the distribution (in green), while the top 6 did not appear frequently this time (in blue)! So we could not have used that information…
Anyway, if some of you are interested in using statistics to get a free lunch with the nouveau loto, I did not see any strange pattern (the data can be downloaded here).

I am terribly sorry, but I cannot help anyone win the French lottery…

But who knows the logistic distribution?

Some students got scared when the logistic distribution showed up in the econometrics of qualitative variables course: while I had spent quite some time, during the refresher sessions, on the Gaussian distribution, the Student t distribution, the Poisson distribution… I had never presented the logistic distribution. But who actually knows the logistic distribution?

  • The logistic distribution (as a probability distribution)

In maths, functions of the form

$$f(t)=\frac{K}{1+e^{-r(t-a)}}$$

where K and r are positive real numbers and a is any real number, are introduced quite early on; they are what we call the (continuous-time) solutions of the Verhulst model.

It was Pierre François Verhulst who coined the name “logistic curve”, explaining in 1845 that « nous donnerons le terme de logistique à cette courbe » (“we will give the name logistic to this curve”). As always, it is worth going back to the etymological roots: “logistique” has the same root as logarithm, and logistikos means “computation” in Greek (more information can be found here, or there).

These functions were introduced by Pierre François Verhulst, a student of Adolphe Quetelet, who had to study a model of population growth that would not be exponential. The curve was then used in population studies from the 1920s on by Raymond Pearl and Lowell Jacob Reed, who only credited Verhulst with the discovery in 1922. The term “logistic” itself does not reappear until 1924, in a correspondence between George Yule and Reed. Note that it was Joseph Berkson who defended the idea of fitting certain curves with a logistic function (the logit model) rather than with the Gaussian cumulative distribution function (the probit model, introduced by Bliss in 1934, but that is another story, which I will surely tell some day…).

In population dynamics, the Verhulst model is a growth model proposed around 1840. To borrow the Wikipedia account on the subject (itself inspired by several books, including the Handbook of the Logistic Distribution): Verhulst proposed this model in response to Malthus’s model, which assumed a constant, unchecked growth rate, leading to exponential population growth. Verhulst instead assumed that the birth rate and the death rate are affine functions of population size, respectively decreasing and increasing. In other words, as the population grows, its birth rate decreases and its death rate increases.

Let y denote the size of the population, m(y) the death rate and n(y) the birth rate. The size of the population follows the differential equation

$$\frac{dy}{dt}=\big(n(y)-m(y)\big)\,y$$

If m and n are affine functions, respectively increasing and decreasing (a somewhat strong assumption, but one that has long been used, as we saw in the M1 actuarial course), then n − m is a decreasing affine function, and the equation can be written

$$\frac{dy}{dt}=(a-b\,y)\,y$$

with a and b two positive real numbers, if the growth is positive when y is close to 0. Setting K=a/b, the equation becomes

$$\frac{dy}{dt}=a\,y\left(1-\frac{y}{K}\right)$$

with a > 0 and K > 0. So, to formalize things, we are looking for strictly positive functions, defined on $\mathbb{R}_+$, satisfying $y(0)=y_0$ and

$$y'(t)=a\,y(t)\left(1-\frac{y(t)}{K}\right)$$

which leads to the so-called logistic solution

$$y(t)=\frac{K}{1+\left(\frac{K}{y_0}-1\right)e^{-at}}$$
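
As a quick sanity check (a sketch of mine, with illustrative values a=1, K=10, y0=0.5), a crude Euler integration of the differential equation can be compared with this closed-form solution:

# Euler scheme for y' = a*y*(1-y/K) versus the closed-form logistic solution
a = 1; K = 10; y0 = 0.5
h = 0.01; t = seq(0, 10, by=h)
y = numeric(length(t)); y[1] = y0
for(i in 2:length(t)) y[i] = y[i-1] + h*a*y[i-1]*(1 - y[i-1]/K)
yexact = K / (1 + (K/y0 - 1)*exp(-a*t))
max(abs(y - yexact))   # small: the two curves coincide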
  • Odds instead of probabilities

So much for the distribution… but how does it show up in econometrics? Recall that we want to model the probability that some event occurs. But there are other tools than the probability itself. In particular, the odds of an event – the “odds in favor of an event” – is the quantity

$$\frac{p}{1-p}$$

where p is the probability that the event occurs.
This quantity is well known to people who bet on horses…. In that (betting) context, the probability is not a historical probability; it is rather a risk-neutral probability, as we say in finance: it is the probability extracted from prices. Consider a simple example (where bets are placed on the winner only):

  • Bettor A wagers €10 on horse number 1.
  • Bettor B wagers €5 on horse number 2.
  • Bettor C wagers €20 on horse number 3.
  • Bettor D wagers €15 on horse number 4.

Assume that the amounts wagered reflect the bettors’ beliefs (and that they all have the same risk aversion). In other words, the bettors think that horse 1 is twice as likely to win as horse 2. The odds of a horse are obtained by dividing the total amount wagered on the other horses by the amount wagered on the chosen horse (or the other way around, depending on how you want to present it), i.e. here

  • For horse 1, odds of [5+20+15]/10=4 (we say 4 against 1)
  • For horse 2, odds of [10+20+15]/5=9 (we say 9 against 1)
  • For horse 3, odds of [10+5+15]/20=3/2 (we say 1.5 against 1)
  • For horse 4, odds of [10+5+20]/15=7/3

The same exists in football (as shown by the betting slip opposite, on which the odds are printed). In short, odds are a very natural way of talking about probabilities. The big advantage is that a probability is defined on [0,1], whereas odds can take any positive value.
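
A small R sketch of the computation above (the variable names are mine): the odds follow from the stakes, and the implied probabilities can be recovered as 1/(1+odds),

# stakes of bettors A, B, C, D on horses 1 to 4
stakes = c(10, 5, 20, 15)
odds = (sum(stakes) - stakes)/stakes   # money on the others / money on the horse
odds                                   # 4, 9, 1.5, 2.333
p = 1/(1 + odds)                       # implied (risk-neutral) probabilities
p                                      # 0.2, 0.1, 0.4, 0.3, summing to 1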

  • Logistic regression

We thus want to model the odds using explanatory variables. But since a linear score can be negative while odds must remain positive, it is quite “natural” to take a logarithm (as in Poisson regression, for instance). This explains the following form,

$$\log\left(\frac{p}{1-p}\right)=\boldsymbol{x}^{\top}\boldsymbol{\beta}$$

or equivalently

$$p=\frac{\exp(\boldsymbol{x}^{\top}\boldsymbol{\beta})}{1+\exp(\boldsymbol{x}^{\top}\boldsymbol{\beta})}$$

And the function that appears on the left-hand side of the first equation is precisely the quantile function of the logistic distribution. Quod erat demonstrandum…
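
In R, these are exactly the functions qlogis and plogis (the quantile and cumulative distribution functions of the logistic distribution); a quick check,

# the logit is the quantile function of the logistic distribution,
# and plogis is its inverse
p = seq(0.1, 0.9, by=0.1)
all.equal(qlogis(p), log(p/(1-p)))   # TRUE
all.equal(plogis(log(p/(1-p))), p)   # TRUE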

  • A special case of GLM

The logistic transformation also appears very naturally when using GLMs (generalized linear models). Indeed, the distribution has to be written in the exponential-family form (as I discussed here or there, for instance). Using the notation of the actuarial science 2 course, we have to write the binomial distribution

$$\mathbb{P}(Y=y)=\binom{n}{y}p^{y}(1-p)^{n-y},\qquad y=0,1,\dots,n$$

in the form

$$f(y;\theta,\varphi)=\exp\left(\frac{y\theta-b(\theta)}{\varphi}+c(y,\varphi)\right)$$

i.e. here (there is no overdispersion parameter, $\varphi=1$)

$$f(y;\theta)=\exp\big(y\theta-b(\theta)+c(y)\big)$$

Noting that the binomial distribution can be rewritten as

$$\mathbb{P}(Y=y)=\binom{n}{y}\exp\left(y\log\frac{p}{1-p}+n\log(1-p)\right)$$

or

$$\mathbb{P}(Y=y)=\exp\left(y\log\frac{p}{1-p}+n\log(1-p)+\log\binom{n}{y}\right)$$

we deduce that the natural parameter is

$$\theta=\log\frac{p}{1-p}$$

and that

$$b(\theta)=n\log\left(1+e^{\theta}\right)$$

(after a bit of rewriting). In short, there is a natural relation (we speak of a canonical link function) between the usual parameter of the binomial distribution and the natural parameter of the exponential family,

$$\theta=\log\frac{p}{1-p}$$

or equivalently

$$p=\frac{e^{\theta}}{1+e^{\theta}}$$

The cumulative distribution function of the logistic distribution appears there again, almost as if by magic…. well, almost…
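
To see this canonical link at work in R, a minimal sketch on simulated data (not from the original post): family=binomial uses the logit link by default,

# logistic regression: glm with family=binomial uses the canonical
# (logit) link, i.e. log(p/(1-p)) = beta0 + beta1*x
set.seed(1)
x = rnorm(1000)
p = exp(1 + 2*x)/(1 + exp(1 + 2*x))   # true model: logit(p) = 1 + 2x
y = rbinom(1000, size=1, prob=p)
reg = glm(y ~ x, family=binomial)      # same as binomial(link="logit")
coef(reg)                              # estimates close to (1, 2)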