Category Archives: M1-statistique

On the Frequency of Hurricanes in the United States

For the end of the statistics course, we finally get to play with data. After the pass rates at the brevet des collèges (the French junior high school exam), we worked a bit on hurricane occurrences in the United States. The dataset used is the following

> base=read.table(
"http://freakonometrics.free.fr/ouragan.csv",
sep=";",header=TRUE,dec=",")
> base=base[1:207,1:7]

The annual number of hurricanes is the following series

>  TB <- table(base$Year)
>  years <- as.numeric(names(TB))
>  counts <- as.numeric(TB)
>  years0=(1900:2005)[which(!(1900:2005)%in%years)]
>  db <- data.frame(years=c(years,years0),
         counts=c(counts,rep(0,length(years0))))
>  X=db$counts
>  plot(db,type="h")

We can compare the observed frequencies with what a Poisson distribution would give (the parameter being obtained by maximum likelihood)

>  lambda=mean(X)
>  n=length(X)
>  cbind(table(X),dpois(0:6,lambda)*n)
  [,1]      [,2]
0   16 15.038430
1   34 29.367500
2   27 28.674870
3   13 18.665717
4    5  9.112744
5    6  3.559128
6    5  1.158396

We can then run a goodness-of-fit test,

>  library(vcd)
>  fit=goodfit(X,"poisson")
>  plot(fit)

>  summary(fit)
 
	 Goodness-of-fit test for poisson distribution
 
                      X^2 df   P(> X^2)
Likelihood Ratio 14.17686  5 0.01452415

We see that the goodness-of-fit test for the Poisson distribution is not very conclusive here. Note that neither the binomial distribution

>  fit=goodfit(X,"binomial")
>  plot(fit)
>  summary(fit)
 
	 Goodness-of-fit test for binomial distribution
 
                      X^2 df     P(> X^2)
Likelihood Ratio 46.21196  5 8.222959e-09

nor the negative binomial distribution seems to work.

>  fit=goodfit(X,"nbinomial")
>  plot(fit)
>  summary(fit)
 
	 Goodness-of-fit test for nbinomial distribution
 
                      X^2 df   P(> X^2)
Likelihood Ratio 10.91507  4 0.02753523

We can test the temporal stability of the parameter of a Poisson distribution, using the test of Przyborowski & Wilenski (1940). An old package used to implement this test, but fortunately another function can be used, with a different version of the test (see also Krishnamoorthy & Thomson (2004) to go further). The idea of the first test is quite simple: note that if, over the first period (of length $T_1$), we have a Poisson process with parameter $\lambda_1$, and over the second period (of length $T_2$) one with parameter $\lambda_2$, then

$$X_1\sim\mathcal{P}(\lambda_1 T_1)$$

and

$$X_2\sim\mathcal{P}(\lambda_2 T_2).$$

A test of $H_0:\lambda_1=\lambda_2$ against $H_1:\lambda_1\neq\lambda_2$ can be written in terms of $X_1$ and $X_1+X_2$. Note that the conditional distribution of $X_1$ given $X_1+X_2$ is binomial, so its probability parameter is a function of the ratio of the two rates. More precisely, conditional on $X_1+X_2=m$, $X_1$ follows a binomial distribution $\mathcal{B}(m,p)$ with

$$p=\frac{\lambda_1 T_1}{\lambda_1 T_1+\lambda_2 T_2}.$$

From there, it is easy to set up a testing strategy. Another strategy is to use

>  seuil=1960
>  X1=db[db$years<seuil,"counts"]
>  X2=db[db$years>=seuil,"counts"]
>  N1=length(db$years<seuil)
>  N2=length(db$years>=seuil)
>  poisson.test(c(sum(X1),sum(X2)),c(N1,N2))
 
	Comparison of Poisson rates
 
data:  c(sum(X1), sum(X2)) time base: c(N1, N2)
count1 = 85, expected count1 = 103.5, p-value = 0.01216
alternative hypothesis: true rate ratio is not equal to 1
95 percent confidence interval:
 0.5218662 0.9265666
sample estimates:
rate ratio 
 0.6967213
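
For comparison, the first strategy (the conditional binomial test) can also be sketched directly, reusing X1 and X2 from above and taking the number of years in each period as the exposure (a minimal sketch),

>  T1=sum(db$years< seuil)
>  T2=sum(db$years>=seuil)
>  binom.test(sum(X1),sum(X1)+sum(X2),p=T1/(T1+T2))

since, under $H_0$, conditional on the total, the first-period count is $\mathcal{B}(X_1+X_2,T_1/(T_1+T_2))$, which is exactly what binom.test checks here.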

 

In short, we can accept that the annual number of hurricanes is not a homogeneous Poisson process over time…

More Data for the Statistics Course

A few more datasets for a statistics lab. These are data on hurricane costs, in the United States,

> library(gdata)
> db=read.xls(
+     "http://sciencepolicy.colorado.edu/publications/special/public_data_may_2007.xls",
+     sheet=1)
> stupidcomma = function(x){
+  x=as.character(x)
+  for(i in 1:10){x=sub(",","",as.character(x))}
+  return(as.numeric(x))}
> base=db[,1:4]
> base$Base.Economic.Damage=
Vectorize(stupidcomma)(db$Base.Economic.Damage)
> base$Normalized.PL05=Vectorize(stupidcomma)(db$Normalized.PL05)
> base$Normalized.CL05=Vectorize(stupidcomma)(db$Normalized.CL05)

Reading the file is a bit painful, because of commas lying around as thousands separators. No matter, the little function above cleans up the dataset.

  • Hurricane frequency

The first step could be to work on the frequency. One can fit Poisson, binomial, or negative binomial distributions, and test the stability of the parameter over time (cutting the period into 4 or 5 blocks, for instance; see the sketch after the code below).

> TB <- table(base$Year)
> years <- as.numeric(names(TB))
> counts <- as.numeric(TB)
> years0=(1900:2005)[which(!(1900:2005)%in%years)]
> db <- data.frame(years=c(years,years0),
+       counts=c(counts,rep(0,length(years0))))
> db[88:93,]
   years counts
88  2003      3
89  2004      6
90  2005      6
91  1902      0
92  1905      0
93  1907      0
> plot(years,counts,type='h')
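
As an illustration of such a stability test, here is a minimal sketch (the choice of 4 blocks is arbitrary), based on a chi-square test of homogeneity of the block totals: under the null of a constant annual rate, the expected total of each block is proportional to its number of years.

> blocks=cut(db$years,breaks=4)              # cut the period into 4 blocks
> totals=tapply(db$counts,blocks,sum)        # total number of hurricanes per block
> nyears=tapply(db$counts,blocks,length)     # number of years per block
> chisq.test(totals,p=nyears/sum(nyears))    # H0: constant annual rate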

  • Hurricane costs

In a second step, one can work on the cost of hurricanes

> plot(base$Normalized.PL05/1e9,type="h")

One can try to fit distributions (lognormal, gamma, Pareto, etc.), run goodness-of-fit tests, and test the stability of the parameters over time (splitting into 2 blocks, for example, or more). A possible starting point is sketched below.
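
For instance, a minimal sketch with a lognormal fit and a Kolmogorov-Smirnov goodness-of-fit test (just an illustration; since the parameters are estimated from the data, the p-value is only indicative),

> library(MASS)
> Y=base$Normalized.PL05/1e9                # normalized cost, in billions
> (fit.ln=fitdistr(Y,"lognormal"))          # maximum likelihood fit
> ks.test(Y,"plnorm",fit.ln$estimate[1],fit.ln$estimate[2])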

Data for the Statistics Course

For the last statistics lectures, I had sent out a sheet with a few exercises to look at. I will take this opportunity to post the exercises on inference, and the ones on tests, online. I am also posting a table containing the results of five schools in Rennes at the brevet des collèges. These data can be used to do a bit of statistics.

Data for the ANOVA Lab

Tomorrow, in the mathematical statistics course, we will cover analysis of variance, with one and two factors. The data we will use are durations of cell phone calls,

> download.file(
"http://freakonometrics.free.fr/celulaire.Rdata","cellulaire")
> load("cellulaire")
> head(cellulaire)
  Jour Duree  EntSor
1  LUN    30 Sortant
2  LUN    47 Entrant
3  LUN     6 Entrant
4  LUN     6 Sortant
5  LUN    17 Sortant
6  LUN   116 Entrant
> tail(cellulaire)
   Jour Duree  EntSor
60  SAM    46 Sortant
61  DIM     4 Sortant
62  DIM    44 Sortant
63  DIM    10 Sortant
64  DIM    63 Sortant
65  DIM   193 Entrant
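
As a preview, a minimal sketch of the one-factor and two-factor analyses that will be discussed (standard aov calls, with an additive model, without interaction, for the second one),

> summary(aov(Duree~Jour,data=cellulaire))          # one factor: day of the week
> summary(aov(Duree~Jour+EntSor,data=cellulaire))   # two factors: day and direction of the call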

Profile Likelihood

Consider some simulated data

> set.seed(1)
> x=exp(rnorm(100))

Assume that those data are observed i.i.d. random variables with a $\Gamma(\alpha,\beta)$ distribution, where $\theta=(\alpha,\beta)$ is unknown. The natural idea is to consider the maximum likelihood estimator

$$\widehat{\theta}=\underset{\theta}{\text{argmax}}\ \log\mathcal{L}(\theta;x_1,\dots,x_n).$$

For instance, consider the maximum likelihood estimator obtained numerically,

> library(MASS)
> (F=fitdistr(x,"gamma"))
     shape       rate   
  1.4214497   0.8619969 
 (0.1822570) (0.1320717)
> F$estimate[1]+c(-1,1)*1.96*F$sd[1]
[1] 1.064226 1.778673

Here, we have an approximate confidence interval for $\alpha$ (approximate, since the maximum likelihood estimator has an asymptotic Gaussian distribution). We can use a numerical optimization routine to get the maximum of the log-likelihood function

> log_lik=function(theta){
+   a=theta[1]
+   b=theta[2]
+   logL=sum(log(dgamma(x,a,b)))
+   return(-logL)
+ }

> optim(c(1,1),log_lik)
$par
[1] 1.4214116 0.8620311
 
$value
[1] 146.5909

And we have the same value.

Now, what if we care only about $\alpha$, and not about $\beta$? Then we can use the profile likelihood. The idea is to solve

$$\widehat{\alpha}=\underset{\alpha}{\text{argmax}}\ \left\{\max_{\beta}\ \log\mathcal{L}(\alpha,\beta;x_1,\dots,x_n)\right\}$$

i.e.

$$\widehat{\alpha}=\underset{\alpha}{\text{argmax}}\ \ell_p(\alpha)\quad\text{where}\quad \ell_p(\alpha)=\log\mathcal{L}(\alpha,\widehat{\beta}_\alpha;x_1,\dots,x_n)\ \text{ and }\ \widehat{\beta}_\alpha=\underset{\beta}{\text{argmax}}\ \log\mathcal{L}(\alpha,\beta;x_1,\dots,x_n)$$

or, equivalently, in R,

> prof_log_lik=function(a){
+   b=(optim(1,function(z) -sum(log(dgamma(x,a,z)))))$par
+   return(-sum(log(dgamma(x,a,b))))
+ }

> vx=seq(.5,3,length=101)
> vl=-Vectorize(prof_log_lik)(vx)
> plot(vx,vl,type="l")
> optim(1,prof_log_lik)
$par
[1] 1.421094
 
$value
[1] 146.5909

A few weeks ago, we mentioned the likelihood ratio test, based on the fact that

$$2\big[\log\mathcal{L}(\widehat{\theta})-\log\mathcal{L}(\theta_0)\big]\xrightarrow{\ \mathcal{L}\ }\chi^2(\dim\theta)\ \text{ under }H_0:\theta=\theta_0.$$

The analogue can be obtained here, since

$$2\big[\ell_p(\widehat{\alpha})-\ell_p(\alpha)\big]\xrightarrow{\ \mathcal{L}\ }\chi^2(1)$$

(the 1 comes from the fact that $\alpha$ is a one-dimensional parameter). The (technical) proof can be found in Suhasini Subba Rao's notes (see also Section 4.5.2 in Anthony Davison's Statistical Models). From that property, we can easily obtain a confidence interval for $\alpha$,

$$\left\{\alpha:\ \ell_p(\alpha)\geq \ell_p(\widehat{\alpha})-\frac{1}{2}q_{\chi^2(1)}(0.95)\right\}$$

where $q_{\chi^2(1)}(0.95)$ is the 95% quantile of the $\chi^2(1)$ distribution.

Hence, from our sample, we get the following 95% confidence interval,

> abline(v=optim(1,prof_log_lik)$par,lty=2)
> abline(h=-optim(1,prof_log_lik)$value)
> abline(h=-optim(1,prof_log_lik)$value-qchisq(.95,1)/2)
 
> segments(F$estimate[1]-1.96*F$sd[1],
-170,F$estimate[1]+1.96*F$sd[1],-170,lwd=3,col="blue")
> borne=-optim(1,prof_log_lik)$value-qchisq(.95,1)/2
> (b1=uniroot(function(z) Vectorize(prof_log_lik)(z)+borne,c(.5,1.5))$root)
[1] 1.095726
> (b2=uniroot(function(z) Vectorize(prof_log_lik)(z)+borne,c(1.25,2.5))$root)
[1] 1.811809

that can be visualized below,

> segments(b1,-168,b2,-168,lwd=3,col="red")

In blue, the interval obtained using the asymptotic Gaussian property of the maximum likelihood estimator, and in red, the interval obtained using the asymptotic chi-square distribution of the (profile) log-likelihood ratio.

Cite this article as: Arthur Charpentier, "Profile Likelihood," in Freakonometrics, 16/11/2015, http://freakonometrics.hypotheses.org/20573.

Applications of Chi-Square Tests

This morning, in our mathematical statistics class, we've seen the use of the chi-square test. The first one was related to the goodness of fit of a multinomial distribution. Assume that $\boldsymbol N=(N_1,\dots,N_k)\sim\mathcal{M}(n,\boldsymbol p)$. In order to test $H_0:\boldsymbol p=\boldsymbol p_0$ against $H_1:\boldsymbol p\neq\boldsymbol p_0$, use the statistic

$$Q=\sum_{i=1}^k\frac{(N_i-np_{0,i})^2}{np_{0,i}}.$$

Under $H_0$, $Q\xrightarrow{\ \mathcal{L}\ }\chi^2(k-1)$. For instance, we have the number of weddings, in a large city, per season,

> n=c(301,356,413,262)

We want to test if weddings are celebrated uniformly over the year, i.e. $H_0:\boldsymbol p_0=\left(\frac14,\frac14,\frac14,\frac14\right)$.

> np=rep(sum(n)/4,4)
> cbind(n,np)
       n  np
[1,] 301 333
[2,] 356 333
[3,] 413 333
[4,] 262 333
> Q=sum( (n-np)^2/np  )
> Q
[1] 39.02102

This quantity should be compared with the quantile of the chi-square distribution

> qchisq(.95,df=4-1)
[1] 7.814728

but it is also possible to compute the p-value,

> 1-pchisq(Q,df=4-1)
[1] 1.717959e-08

Here, we reject the assumption that weddings are celebrated uniformly over the year.
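
For comparison, the same test can be obtained with the built-in chisq.test function (with its default uniform probabilities),

> chisq.test(n)     # same statistic Q = 39.021 and p-value as above, with df = 3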

Continue reading Applications of Chi-Square Tests

Inference for the Multinomial Distribution

This morning, in our mathematical statistics class, we've seen, briefly, the multinomial distribution, and statistical inference. $\boldsymbol N=(N_1,\dots,N_k)$ has a $\mathcal{M}(n,\boldsymbol p)$ distribution if its probability function is

$$\mathbb{P}(N_1=n_1,\dots,N_k=n_k)=\frac{n!}{n_1!\cdots n_k!}\,p_1^{n_1}\cdots p_k^{n_k}$$

with $n_1+\cdots+n_k=n$ and $p_1+\cdots+p_k=1$.

The maximum likelihood estimator is then the optimum of

$$\log\mathcal{L}(\boldsymbol p)=\text{constant}+\sum_{i=1}^k n_i\log p_i\quad\text{subject to}\quad \sum_{i=1}^k p_i=1.$$

We use a Lagrange multiplier to solve this constrained optimization problem,

$$\mathcal{H}(\boldsymbol p,\lambda)=\sum_{i=1}^k n_i\log p_i+\lambda\Big(1-\sum_{i=1}^k p_i\Big).$$

First order conditions are here

$$\frac{\partial\mathcal{H}}{\partial p_i}=\frac{n_i}{p_i}-\lambda=0$$

and

$$\frac{\partial\mathcal{H}}{\partial\lambda}=1-\sum_{i=1}^k p_i=0.$$

Thus,

$$p_i=\frac{n_i}{\lambda}.$$

From

$$\sum_{i=1}^k p_i=\sum_{i=1}^k\frac{n_i}{\lambda}=\frac{n}{\lambda}=1$$

we can easily get that the Lagrange multiplier is $\lambda=n$. And then

$$\widehat{p}_i=\frac{N_i}{n}.$$

One can easily get that this maximum likelihood estimator is unbiased, since $\mathbb{E}(N_i)=np_i$. Actually, we can easily prove that

$$\text{Var}(\widehat{p}_i)=\frac{p_i(1-p_i)}{n}$$

and that $\text{Cov}(\widehat{p}_i,\widehat{p}_j)=-\dfrac{p_ip_j}{n}$ for $i\neq j$, while $\text{Var}(N_i)=np_i(1-p_i)$. The trick to get the covariance is simple,

$$N_i+N_j\sim\mathcal{B}(n,p_i+p_j)$$

so that $\text{Var}(N_i+N_j)=n(p_i+p_j)(1-p_i-p_j)$. Expanding $\text{Var}(N_i+N_j)=\text{Var}(N_i)+\text{Var}(N_j)+2\,\text{Cov}(N_i,N_j)$, we can easily get the covariance. From those terms, we can write that

$$\sqrt{n}\,(\widehat{\boldsymbol p}-\boldsymbol p)\xrightarrow{\ \mathcal{L}\ }\mathcal{N}(\boldsymbol 0,\boldsymbol\Sigma)$$

with

$$\boldsymbol\Sigma_{i,i}=p_i(1-p_i)$$

while

$$\boldsymbol\Sigma_{i,j}=-p_ip_j,\quad i\neq j.$$
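
A quick numerical check of those formulas by simulation (a minimal sketch; the values of n and p below are arbitrary),

> n=1000
> p=c(.1,.3,.6)
> N=rmultinom(10000,size=n,prob=p)   # 10,000 multinomial samples
> Phat=N/n                           # maximum likelihood estimates
> n*cov(t(Phat))                     # empirical covariance of sqrt(n)*(phat-p)
> diag(p)-p%*%t(p)                   # theoretical Sigma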

Statistical Tests: Asymptotic, Exact, or Based on Simulations?

This morning, in our mathematical statistics course, we've been discussing the 'proportion test', i.e. given a sample $\{x_1,\dots,x_n\}$ of Bernoulli trials, with $X_i\sim\mathcal{B}(p)$, we want to test

$$H_0:p=p_0$$

against

$$H_1:p\neq p_0.$$

A natural test (which can be related to the maximum likelihood ratio test) is based on the statistic

$$T=\frac{\sqrt{n}\,(\overline{x}-p_0)}{\sqrt{\overline{x}(1-\overline{x})}}.$$

The test function is here

$$\varphi(\boldsymbol x)=\mathbf{1}\big(T\notin[c_1,c_2]\big),$$

i.e. we reject $H_0$ whenever $T$ falls outside the acceptance region $[c_1,c_2]$. To get the bounds of the acceptance region, we need the distribution of $T$, under $H_0$. Consider here a numerical application

n=20
p=.5
set.seed(1)
echantillon=sample(0:1,size=n,
            prob=c(1-p,p),
            replace=TRUE)
  • the asymptotic distribution

The first (and standard) idea is to use the central limit theorem, since

$$\frac{\sqrt{n}\,(\overline{x}-p)}{\sqrt{p(1-p)}}\xrightarrow{\ \mathcal{L}\ }\mathcal{N}(0,1).$$

So, under $H_0$,

$$T\approx\mathcal{N}(0,1).$$

Then $c_1=\Phi^{-1}(\alpha/2)$ while $c_2=\Phi^{-1}(1-\alpha/2)$ (here with $\alpha=5\%$). The acceptance region is then between the two red lines, below,

T=sqrt(n)*(mean(echantillon)-.5)/
  sqrt(mean(echantillon)*
  (1-mean(echantillon)))
u=seq(-3,3,by=.01)
v=dnorm(u)
plot(u,v,type="l",lwd=2)
abline(v=qnorm(.025),col="red")
abline(v=qnorm(.975),col="red")
abline(v=T,col="blue")

  • the exact distribution

Here we use the fact that, under $H_0$,

$$n\overline{x}=\sum_{i=1}^n x_i\sim\mathcal{B}(n,p_0).$$

Using this transformation of the 'density', we can (at least numerically) compute the (exact) distribution of

$$T_0=\frac{\sqrt{n}\,(\overline{x}-p_0)}{\sqrt{p_0(1-p_0)}}.$$

u=seq(-3,3,by=.01)
v=sqrt(.5*(1-.5))*n*dbinom(round(
  (sqrt(.5*(1-.5))*u/sqrt(n)+.5)*n),
  size=n,prob=.5)/sqrt(n)

Here I used a rounded value; I guess it would be better with a floor function, but here the graph looks symmetric (which is something I like)

abline(v=sqrt(n)*(qbinom(.025,size=n,prob=.5)/n-.5)/sqrt(.5*(1-.5)),col="red")
abline(v=sqrt(n)*(qbinom(.975,size=n,prob=.5)/n-.5)/sqrt(.5*(1-.5)),col="red")
lines(u,v,type="s")

  • distribution based on Monte Carlo simulations

Probably more interesting, here we do not use the fact that we might know the distribution of the mean. We just generate random samples, under $H_0$, and then compute $T$,

T=rep(NA,1000)
for(i in 1:1000){
x=sample(0:1,size=n,
         prob=c(1-.5,.5),
         replace=TRUE)
m=mean(x)
T[i]=(m-.5)/sqrt(m*(1-m))*sqrt(n)}
lines(density(T),lwd=2)
abline(v=quantile(T,.025),col="red")
abline(v=quantile(T,.975),col="red")
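
For comparison, note that an exact test of $p=0.5$ can also be obtained directly with the built-in binom.test function (a two-sided exact binomial test; just a cross-check of the ideas above),

binom.test(sum(echantillon),n,p=.5)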

Where does that 2 come from in the likelihood ratio test?

This afternoon, in class, we've seen the Wald test, the likelihood-ratio test, and finally the score test. All of them rely on the same idea,

$$\sqrt{n}\,(\widehat{\theta}-\theta_0)\xrightarrow{\ \mathcal{L}\ }\mathcal{N}\big(0,I(\theta_0)^{-1}\big)\ \text{ under }H_0:\theta=\theta_0,$$

and then, use that if $Z\sim\mathcal{N}(0,\sigma^2)$ with $\sigma^2>0$, we can write

$$\frac{Z^2}{\sigma^2}\sim\chi^2(1).$$

Or, slightly more interesting, if $\boldsymbol Z\sim\mathcal{N}(\boldsymbol 0,\boldsymbol\Sigma)$, then

$$\boldsymbol Z^{\top}\boldsymbol\Sigma^{-1}\boldsymbol Z\sim\chi^2(\dim\boldsymbol Z).$$

Then one can get that

$$n\,(\widehat{\theta}-\theta_0)^2\,I(\theta_0)\xrightarrow{\ \mathcal{L}\ }\chi^2(1).$$

Based on that property, we can derive the Wald statistic,

$$W=n\,(\widehat{\theta}-\theta_0)^2\,I(\widehat{\theta}),$$

that can be visualized below.

The score test is a test on the square of the slope of the log-likelihood at $\theta_0$,

$$S=\frac{1}{n\,I(\theta_0)}\left(\frac{\partial\log\mathcal{L}(\theta_0)}{\partial\theta}\right)^2.$$

The idea for the likelihood ratio test is to consider

$$LR=2\big[\log\mathcal{L}(\widehat{\theta})-\log\mathcal{L}(\theta_0)\big].$$

Observe that $\log\mathcal{L}(\theta_0)$ can be written, using Taylor's expansion,

$$\log\mathcal{L}(\theta_0)=\log\mathcal{L}(\widehat{\theta})+(\theta_0-\widehat{\theta})\,\frac{\partial\log\mathcal{L}(\widehat{\theta})}{\partial\theta}+\frac{1}{2}\,(\theta_0-\widehat{\theta})^2\,\frac{\partial^2\log\mathcal{L}(\tilde{\theta})}{\partial\theta^2}$$

for some $\tilde{\theta}$ between $\theta_0$ and $\widehat{\theta}$. The first derivative term is null, since the maximum likelihood estimator is precisely at the maximum of the (log) likelihood. So

$$2\big[\log\mathcal{L}(\widehat{\theta})-\log\mathcal{L}(\theta_0)\big]=-(\widehat{\theta}-\theta_0)^2\,\frac{\partial^2\log\mathcal{L}(\tilde{\theta})}{\partial\theta^2}.$$

That's more or less where the 2 comes from (it cancels the $\frac12$ of the quadratic term). Then observe that

$$-\frac{1}{n}\,\frac{\partial^2\log\mathcal{L}(\tilde{\theta})}{\partial\theta^2}\xrightarrow{\ \mathbb{P}\ }I(\theta_0)$$

and therefore

$$2\big[\log\mathcal{L}(\widehat{\theta})-\log\mathcal{L}(\theta_0)\big]\approx n\,(\widehat{\theta}-\theta_0)^2\,I(\theta_0)\xrightarrow{\ \mathcal{L}\ }\chi^2(1).$$

This test will be discussed further next week (since it is related to Neyman-Pearson's theorem), but also, that result can be used to derive confidence intervals. With a log-likelihood as follows, it is possible to get a confidence interval for the parameter by looking for the $\theta$'s such that

$$\log\mathcal{L}(\theta)\geq\log\mathcal{L}(\widehat{\theta})-\frac{1}{2}\,q_{\chi^2(1)}(1-\alpha).$$
We will discuss that idea later on, in the context of profile likelihood.
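
As a quick numerical illustration of that interval (a minimal sketch, on simulated exponential data that have nothing to do with the course, compared with the Wald-type interval),

set.seed(1)
x=rexp(100,rate=2)                      # simulated sample, true rate 2
logL=function(theta) sum(dexp(x,theta,log=TRUE))
theta.hat=1/mean(x)                     # maximum likelihood estimator of the rate
theta.hat+c(-1,1)*1.96*theta.hat/10     # Wald-type 95% interval, since I(theta)=1/theta^2 and n=100
borne=logL(theta.hat)-qchisq(.95,1)/2   # threshold on the log-likelihood
uniroot(function(z) logL(z)-borne,c(.5,theta.hat))$root   # lower bound of the likelihood-based interval
uniroot(function(z) logL(z)-borne,c(theta.hat,5))$root    # upper bound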

Tests, Power and Significance

In the mathematical statistics course today, we started talking about tests, and decision rules. To illustrate all the concepts introduced today, we considered the case where we have a sample $\{x_1,\dots,x_n\}$ with $X_i\sim\mathcal{U}([0,\theta])$. And we want to test

$$H_0:\theta=\theta_0\ \text{ against }\ H_1:\theta>\theta_0.$$

In the course, we've seen that we could use a test based on the order statistic $\max\{x_i\}$. The test would be

$$\varphi(\boldsymbol x)=\mathbf{1}\big(\max\{x_i\}>c_1\big),$$

i.e. if $\max\{x_i\}>c_1$ we choose $H_1$, and if $\max\{x_i\}\leq c_1$, we choose $H_0$.

From the definition of the first order risk,

$$\alpha=\mathbb{P}_{\theta_0}\big(\max\{x_i\}>c_1\big)=1-\left(\frac{c_1}{\theta_0}\right)^n,$$

we can easily get that

$$c_1=\theta_0\,(1-\alpha)^{1/n}.$$

Thus, the power is then

$$\pi_1(\theta)=\mathbb{P}_{\theta}\big(\max\{x_i\}>c_1\big)=1-\left(\frac{\theta_0}{\theta}\right)^n(1-\alpha),\qquad\theta>c_1.$$

To visualize it, use the following parameters

n=5
alpha=.1
theta0=1

Then

C1=theta0*(1-alpha)^(1/n)
theta=seq(0,2,by=.01)
P1=(1-(theta0/theta)^n*(1-alpha))*(theta>C1)
plot(theta,P1,type="l",lwd=2,col="blue",xlab="",ylab="Power")

Note that, so far, we never used the observed maximum of our sample. Assume that the observed maximum is $x^{\max}$, then we can compute the $p$-value,

$$p=\mathbb{P}_{\theta_0}\big(\max\{X_i\}>x^{\max}\big)=1-\left(\frac{x^{\max}}{\theta_0}\right)^n.$$

Here it is, as a function of the observed maximum (with $\theta_0=1$),

PV=(1-theta^n)*(theta<=1)
plot(theta,PV,type="l",lwd=2,col="blue",xlab="",ylab="p-value")

Now, why not consider another test, based on the minimum (since we also know the distribution of the minimum of a sample from a uniform distribution). The test is the same as before,

$$\varphi(\boldsymbol x)=\mathbf{1}\big(\min\{x_i\}>c_2\big),$$

but here, the threshold is

$$c_2=\theta_0\,(1-\alpha^{1/n}),$$

since $\mathbb{P}_{\theta_0}(\min\{x_i\}>c_2)=(1-c_2/\theta_0)^n=\alpha$. The power of the test is here

$$\pi_2(\theta)=\mathbb{P}_{\theta}\big(\min\{x_i\}>c_2\big)=\left(1-\frac{\theta_0}{\theta}\,(1-\alpha^{1/n})\right)^n,\qquad\theta>c_2.$$

This test has the same significance level (by construction), but the power of the test is clearly lower than the one we got using the maximum of our sample, when $\theta>\theta_0$,

C2=theta0*(1-alpha^(1/n))
P2=(1-(theta0/theta)*(1-alpha^(1/n)))^n*(theta>C2)
lines(theta,P2,type="l",lwd=2,col="red")

Why not consider a test based on $\overline{x}$? The problem is that we need the distribution (more specifically the survival function) of $\overline{x}$ under $H_0$. We can compute it, numerically. But that might be painful. An alternative is to consider some approximation, based on the central limit theorem, i.e.

$$2\overline{x}\approx\mathcal{N}\!\left(\theta,\frac{\theta^2}{3n}\right)$$

(since $\mathbb{E}(X_i)=\theta/2$ and $\text{Var}(X_i)=\theta^2/12$). Our test is based on $2\overline{x}$ (an unbiased estimator of $\theta$), and to get the same significance as before, use

$$c_3=\Phi^{-1}_{\theta_0,\,\theta_0^2/(3n)}(1-\alpha).$$

The power of the test is then (approximating the variance by its value under $H_0$)

$$\pi_3(\theta)\approx 1-\Phi_{\theta,\,\theta_0^2/(3n)}(c_3),\qquad\theta>c_3.$$

Here it is

mu=2*(theta0/2)
s2=2^2*(theta0^2/12)/n
C3=qnorm(1-alpha,mu,sqrt(s2))
P=(1-pnorm(C3,theta,sqrt(s2)))*(theta>C3)
lines(theta,P)

Observe here that the test based on the maximum is not always more powerful than the one based on the average (I just wonder if it could be due to the Gaussian approximation…).
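
To check whether that crossing is indeed an artefact of the Gaussian approximation, here is a minimal Monte Carlo sketch of the actual power of the mean-based test, with the same threshold C3 (the number of simulations, 10000, is arbitrary),

power_sim=function(theta,ns=10000){
  xbar2=2*colMeans(matrix(runif(n*ns,0,theta),n,ns))   # 2 * mean of n uniforms on [0,theta]
  mean(xbar2>C3)                                       # proportion of rejections
}
lines(theta,Vectorize(power_sim)(theta),lty=2)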

Computing Fisher Information

Last week, we did a few computations to obtain the Fisher information for some classical distributions. I just wanted to write up cleanly the computations for distributions with several parameters. For the Gamma distribution, with density

$$f(x;\alpha,\beta)=\frac{\beta^\alpha}{\Gamma(\alpha)}\,x^{\alpha-1}e^{-\beta x},\qquad x>0,$$

the log-likelihood is written

$$\log\mathcal{L}(\alpha,\beta)=n\alpha\log\beta-n\log\Gamma(\alpha)+(\alpha-1)\sum_{i=1}^n\log x_i-\beta\sum_{i=1}^n x_i,$$

so that

$$\frac{\partial^2\log\mathcal{L}}{\partial\alpha^2}=-n\,\psi'(\alpha),\qquad\frac{\partial^2\log\mathcal{L}}{\partial\alpha\,\partial\beta}=\frac{n}{\beta},\qquad\frac{\partial^2\log\mathcal{L}}{\partial\beta^2}=-\frac{n\alpha}{\beta^2}$$

(where $\psi$ denotes the digamma function). Here, we do not even need to take an expectation, since the Hessian is constant (it does not depend on the observations), and

$$I(\alpha,\beta)=n\begin{pmatrix}\psi'(\alpha)&-1/\beta\\-1/\beta&\alpha/\beta^2\end{pmatrix}.$$
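
A quick numerical check of that matrix (a minimal sketch, on simulated data with arbitrary parameters, comparing the theoretical information with the numerical Hessian returned by optim at the maximum likelihood estimate),

> set.seed(1)
> n=1000; a=2; b=3
> x=rgamma(n,a,b)
> negloglik=function(theta) -sum(dgamma(x,theta[1],theta[2],log=TRUE))
> optim(c(1,1),negloglik,hessian=TRUE)$hessian    # observed information, evaluated at the MLE
> n*matrix(c(trigamma(a),-1/b,-1/b,a/b^2),2,2)    # theoretical Fisher information at (a,b)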

Continue reading Computing Fisher Information

Heuristics on bias and variance for kernel density estimators

Consider the simple case of a moving histogram (which is a very simple kernel). The idea is to recall that

$$f(x)\approx\frac{F(x+h)-F(x-h)}{2h}$$

where

$$\frac{F(x+h)-F(x-h)}{2h}$$

is the slope of the cumulative distribution function close to point $x$.

Then we use the empirical cumulative distribution function to approximate the slope, i.e.

$$\widehat{f}_h(x)=\frac{\widehat{F}_n(x+h)-\widehat{F}_n(x-h)}{2h}$$

which can also be written

$$\widehat{f}_h(x)=\frac{1}{2nh}\sum_{i=1}^n\mathbf{1}\big(x_i\in[x-h,x+h]\big).$$

Consider now the density estimator seen as a random variable,

$$\widehat{f}_h(x)=\frac{1}{2nh}\sum_{i=1}^n Y_i$$

where the $Y_i=\mathbf{1}(X_i\in[x-h,x+h])$'s are i.i.d. $\mathcal{B}(p_x)$ variables, with

$$p_x=\mathbb{P}(X\in[x-h,x+h])=F(x+h)-F(x-h).$$

Thus, observe that

$$\mathbb{E}\big(\widehat{f}_h(x)\big)=\frac{F(x+h)-F(x-h)}{2h},$$

but that's not what we're looking for… From Taylor's expansion,

$$F(x\pm h)=F(x)\pm h\,f(x)+\frac{h^2}{2}\,f'(x)\pm\frac{h^3}{6}\,f''(x)+o(h^3),$$

thus

$$\mathbb{E}\big(\widehat{f}_h(x)\big)=f(x)+\frac{h^2}{6}\,f''(x)+o(h^2),$$

where the bias comes from the approximation of the density by the slope of a chord of the cumulative distribution function. About the variance,

$$\text{Var}\big(\widehat{f}_h(x)\big)=\frac{p_x(1-p_x)}{4nh^2},$$

thus, since $p_x\approx 2h\,f(x)$ when $h\downarrow0$,

$$\text{Var}\big(\widehat{f}_h(x)\big)\approx\frac{2h\,f(x)\big(1-2h\,f(x)\big)}{4nh^2}$$

i.e.

$$\text{Var}\big(\widehat{f}_h(x)\big)\approx\frac{f(x)}{2nh}.$$

We can observe that the bias,

$$\frac{h^2}{6}\,f''(x),$$

is decreasing as $h\downarrow0$, while the variance is increasing as $h\downarrow0$. This is the standard bias-variance tradeoff in statistics.
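
A small simulation to visualize that tradeoff, with the moving histogram estimator at a single point (a minimal sketch; standard Gaussian data and an arbitrary grid of bandwidths),

> f_hat=function(x,X,h) mean(abs(X-x)<=h)/(2*h)   # moving histogram estimator at x
> x0=0; n=100; h=c(.05,.2,.5,1)
> M=replicate(5000,sapply(h,function(b) f_hat(x0,rnorm(n),b)))
> apply(M,1,mean)-dnorm(x0)    # bias at x0, growing (in absolute value) with h
> apply(M,1,var)               # variance, decreasing with h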

Convergence and Asymptotic Results

Last week, in our mathematical statistics course, we've seen the law of large numbers (that was proven in the probability course), claiming that

$$\overline{X}_n=\frac{1}{n}\sum_{i=1}^n X_i\xrightarrow{\ \mathbb{P}\ }\mathbb{E}(X)$$

given a collection $\{X_1,\dots,X_n\}$ of i.i.d. random variables, with finite expectation.

To visualize that convergence, we can use

> m=100
> mean_samples=function(n=10){
+   X=matrix(rnorm(n*m),nrow=m,ncol=n)
+   return(apply(X,1,mean))
+ }
> B=matrix(NA,100,20)
> for(i in 1:20){
+   B[,i]=mean_samples(i*10)
+ }
> colnames(B)=as.character(seq(10,200,by=10))
> boxplot(B)

It is also possible to visualize the $\pm1.96/\sqrt{n}$ bounds (used in the central limit theorem to get a non degenerate limiting distribution)

> u=seq(0,21,by=.2)
> v=sqrt(u*10)
> lines(u,1.96/v,col="red")
> lines(u,-1.96/v,col="red")

Yesterday, we’ve been discussing properties of the empirical cumulative distribution function,

We’ve seen Glivenko-Cantelli theorem, which states that (under mild assumptions)

To visualize that convergence use the following code. Here I use the trick

to get the maximum (componentwise) between two matrices

> m=100
> inf_sample=function(n=10){
+ X=matrix(rnorm(n*m),nrow=m,ncol=n)
+ Xs=t(apply(X,1,sort))
+ Pe_inf=matrix(rep((0:(n-1))/n,
+ each=m),nrow=m,ncol=n)
+ Pe_sup=matrix(rep((0:n)/n,each=m),
+ nrow=m,ncol=n)
+ Pt=pnorm(Xs)
+ D1=abs(Pe_inf-Pt)
+ D2=abs(Pe_sup-Pt)
+ Df=(D1+D2)/2+abs(D2-D1)/2
+ return(apply(Df,1,max))
+ }
> B=matrix(NA,100,20)
> for(i in 1:20){
+   B[,i]=inf_sample(i*10)
+ }
> colnames(B)=as.character(seq(10,200,by=10))
> boxplot(B)

We have also discussed the pointwise asymptotic normality of the empirical cumulative distribution function,

$$\sqrt{n}\,\big(\widehat{F}_n(x)-F(x)\big)\xrightarrow{\ \mathcal{L}\ }\mathcal{N}\big(0,F(x)(1-F(x))\big).$$

Here again, it is possible to visualize it. The first step is to compute several trajectories for empirical cumulative distribution function

> u=seq(-3,3,by=.1)
> plot(u,u,ylim=c(0,1),col="white")
> M=matrix(NA,length(u),1000)
> for(m in 1:1000){
+ n=100
+ x=rnorm(n)
+ Femp=Vectorize(function(t) mean(x<=t))
+ v=Femp(u)
+ M[,m]=v
+ lines(u,v,col='light blue',type="s")
+ }

Note that we can compute (pointwise) confidence bands

> lines(u,apply(M,1,mean),col="red",type="l")
> lines(u,apply(M,1,function(x) quantile(x,.05)),
+ col="red",type="s")
> lines(u,apply(M,1,function(x) quantile(x,.95)),
+ col="red",type="s")

Now, if we focus on one specific point, we can visualize the asymptotic normality (i.e. the near normality when we have a sample of size 100)

> x0=-1
> y=M[which(u==x0),]
> hist(y,probability=TRUE,
+ breaks=seq(.015,0.55,by=.01))
> vu=seq(0,1,by=.001)
> lines(vu,dnorm(vu,pnorm(x0),
+ sqrt((pnorm(x0)*(1-pnorm(x0)))/100)),
+ col="red")

A Small Probability Exercise

Last Tuesday, we went through a series of probability exercises, and I wanted to come back to one exercise for which I had suggested a check on the computer.

To solve the exercise, I had suggested the following approach. $X$ is drawn (uniformly, and independently of $Y$) from $\{2,5,8,\dots,299\}$, while $Y$ is drawn from $\{3,7,11,\dots,399\}$. One notes fairly easily that the smallest number belonging to both sets is $11$, and since the least common multiple of $3$ and $4$ is $12$, the common values are

$$\{11,23,35,\dots,299\}$$

(I leave the computations showing that, indeed, beyond that value we are no longer in the initial sets). In short, only $25$ numbers belong to both sets. The probability we are looking for is then given by

$$\mathbb{P}(X=Y)=\sum_{k}\mathbb{P}(X=k)\cdot\mathbb{P}(Y=k)$$

where we sum over those $25$ numbers. We then have

$$\mathbb{P}(X=Y)=\sum_{k}\frac{1}{100}\cdot\frac{1}{100}$$

where each probability is worth $1/100$ (there is one chance in $100$ of drawing any given number), and we have a sum of $25$ terms. So the probability we are looking for is

$$\mathbb{P}(X=Y)=\frac{25}{10000}=0.0025.$$

To check this, we can use the following small piece of R code

> list_X=seq(2,by=3,length=100)
> list_Y=seq(3,by=4,length=100)
> n=1e8
> x=sample(list_X,size=n,replace=TRUE)
> y=sample(list_Y,size=n,replace=TRUE)
> sum(x==y)/n
[1] 0.00250221

which confirms the small computation we just did.
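
Alternatively, the exact probability can be obtained by brute force, simply counting the matching pairs among the $100\times100$ equally likely couples,

> sum(outer(list_X,list_Y,"=="))/(100*100)    # 25 matching pairs, i.e. 25/10000 = 0.0025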