Tag Archives: R-english

Consecutive number and lottery

Recently, I have been reading odd things about strategies to win at the lottery. E.g.

or

I wrote something a long time ago, but maybe it would be better to write another post. First, it is easy to get data on the French lotteries, including draws, number of winners and gains,

loto=read.table("http://freakonometrics.blog.free.fr/public/data/loto.csv",
sep=";",header=TRUE)
balls=loto[,c("boule_1","boule_2","boule_3",
"boule_4","boule_5","boule_6")]
q=function(x){quantile(x,(0:5)/5)}   # with 6 values, this simply returns them sorted
sortballs=balls
consec=balls[,-1]
sortconsec=consec
for(i in 1:nrow(balls)){sortballs[i,]=q(balls[i,])     # sorted balls
consec[i,]=sortballs[i,2:6]-sortballs[i,1:5]           # gaps between sorted balls (a gap of 1 = consecutive numbers)
sortconsec[i,]=sort(consec[i,])}                       # sorted gaps
winner1=loto[,"nombre_de_gagnant_au_rang1"]
gain1=as.numeric(as.character(loto[,"rapport_du_rang1"]))
winner2=loto[,"nombre_de_gagnant_au_rang2"]
gain2=as.numeric(as.character(loto[,"rapport_du_rang2"]))
winner3=loto[,"nombre_de_gagnant_au_rang3"]
gain3=as.numeric(as.character(loto[,"rapport_du_rang3"]))
winner4=loto[,"nombre_de_gagnant_au_rang4"]
gain4=as.numeric(as.character(loto[,"rapport_du_rang4"]))
winner5=loto[,"nombre_de_gagnant_au_rang5"]
gain5=as.numeric(as.character(loto[,"rapport_du_rang5"]))
which1=(sortconsec[,1]==1)   # at least one gap equal to 1, i.e. at least two consecutive numbers
which2=(sortconsec[,2]==1)   # at least two gaps equal to 1 (e.g. three consecutive numbers)
which3=(sortconsec[,3]==1)
which4=(sortconsec[,4]==1)
which5=(sortconsec[,5]==1)   # all six numbers consecutive

There are several ways to define “winning at the lottery” (2 out of 6, 3 out of 6, 4 out of 6, etc.) and to define “having consecutive numbers” (it can be 2 out of 6, 3 out of 6, etc.). For instance,
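As a quick check (my own computation, not the author's figure), the share of draws satisfying each definition can be obtained directly from the indicators defined above,

mean(which1)   # at least one pair of consecutive numbers
mean(which2)   # at least two gaps equal to 1
mean(which5)   # all six numbers drawn are consecutive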

It is also possible to compare the number of winners for medium prizes (3 out of 6, the so-called vainqueurs de rang 4) when there were 2 consecutive numbers

> t.test(winner4[which1==TRUE],winner4[which1==FALSE])

Welch Two Sample t-test

data:  winner4[which1 == TRUE] and winner4[which1 == FALSE]
t = -3.2132, df = 4792.491, p-value = 0.001321
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-6864.430 -1662.123
sample estimates:
mean of x mean of y
33887.82  38151.10

With a simple mean-comparison test, we see that there is a significant difference in the average number of winners depending on whether at least 2 of the 6 balls drawn were consecutive. And actually, the average number of winners was lower when there were consecutive numbers. And if we look at the average gain, we also have a significant difference

> t.test(gain4[which1==TRUE],gain4[which1==FALSE])

Welch Two Sample t-test

data:  gain4[which1 == TRUE] and gain4[which1 == FALSE]
t = 5.8926, df = 3675.361, p-value = 4.143e-09
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
11.06189 22.09337
sample estimates:
mean of x mean of y
173.9788  157.4012

Here we see that if we play consecutive numbers, on average, the gain is larger. Perhaps it would be better to look at that on graphs.

which0=which1
WIN=c(mean(winner5[which0==TRUE]),mean(winner5[which0==FALSE]),
mean(winner4[which0==TRUE]),mean(winner4[which0==FALSE]),
mean(winner3[which0==TRUE]),mean(winner3[which0==FALSE]),
mean(winner2[which0==TRUE]),mean(winner2[which0==FALSE]),
mean(winner1[which0==TRUE]),mean(winner1[which0==FALSE]))
MWIN=matrix(WIN,2,5)

plot(1:5,MWIN[1,],type="b",col="red",log="y",
ylim=c(1,1000000),xlab="two consecutive numbers",
ylab="number of winners (log scale)")
lines(1:5,MWIN[2,],type="b",col="blue",pch=4)

If we focus on the case where “having consecutive numbers” means two consecutive numbers, we have below the number of winners, with first rank (6 out of 6), then second rank (5 out of 6), etc,

Note that the y-axis is on a log scale, and that draws with consecutive balls are in red, and no consecutive balls are in blue. If we focus on average gains, curves are in opposite order,
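The curves for the average gains can presumably be obtained by replacing the winner counts with the gain variables in the code above (a sketch; na.rm is added in case gains are missing when there is no winner at a given rank),

GAIN=c(mean(gain5[which0==TRUE],na.rm=TRUE),mean(gain5[which0==FALSE],na.rm=TRUE),
mean(gain4[which0==TRUE],na.rm=TRUE),mean(gain4[which0==FALSE],na.rm=TRUE),
mean(gain3[which0==TRUE],na.rm=TRUE),mean(gain3[which0==FALSE],na.rm=TRUE),
mean(gain2[which0==TRUE],na.rm=TRUE),mean(gain2[which0==FALSE],na.rm=TRUE),
mean(gain1[which0==TRUE],na.rm=TRUE),mean(gain1[which0==FALSE],na.rm=TRUE))
MGAIN=matrix(GAIN,2,5)
plot(1:5,MGAIN[1,],type="b",col="red",log="y",
xlab="two consecutive numbers",ylab="average gain (log scale)")
lines(1:5,MGAIN[2,],type="b",col="blue",pch=4)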

But if we consider the case of three consecutive balls, we have, for the number of winners,

or for average gains

Here it starts to get slightly different: there are more “big winners” when there are at least three consecutive numbers. And with four consecutive numbers, it is clearly the opposite
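Those alternative definitions are obtained by simply changing the indicator used above, e.g. (a sketch)

which0=which2   # at least two gaps of 1 (e.g. three consecutive numbers)
which0=which3   # at least three gaps of 1 (e.g. four consecutive numbers)
# then re-run the WIN / MWIN block above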

Here we see that there are many more winners with four consecutive numbers (actually, it might be a triplet and a pair). So I have to confess that I am not convinced by the conclusion: actually, a lot of people pick consecutive numbers… If we look at the draws with the most winners, we can clearly see that a lot of players like consecutive numbers (perhaps not as much as playing birthdays, since most numbers are lower than 31),

loto[loto$"nombre_de_gagnant_au_rang1">50,
c("combinaison_gagnante_en_ordre_croissant",
"nombre_de_gagnant_au_rang1","date_de_tirage")]
combinaison_gagnante gagnant_au_rang1
3189      2-4-13-16-28-31              103
3475       1-5-9-10-12-25               64
4018       4-5-7-14-15-17               63
4396    26-27-28-35-36-37               64
4477     7-11-15-27-33-44               53
4546      2-9-12-14-19-24               60
4685      2-8-10-12-14-16               96
date_de_tirage
3189       19930626
3475       19920212
4018       19880504
4396       19840919
4477       19830914
4546       19820519
4685       19790919

In September 1979, there were 5 consecutive even numbers (OK, they cannot strictly be considered consecutive numbers) and 96 winners with 6 numbers out of 6! And if we look at the others, even if they are not strictly consecutive, there is a lot of regularity. So I believe that picking consecutive numbers might not be a great strategy if you want to win a lot of money!

Short selling, volatility and bubbles

Yesterday, I wrote a post (in French) about short-selling in financial markets, since some journalists claimed that it was well known that short-selling increases volatility in financial markets. And not only in French-speaking media, actually, since we can read on http://www.forbes.com that « in a market with restrictions on short-selling, volatility is reduced ». But things are not that simple. For instance, http://www.optionsatoz.com/ explains it from a theoretical point of view. But we can also look at the data. For instance, we can compare the stock price of Air China, traded in Shanghai, in blue (where short-selling is forbidden) and in Hong Kong, in red (where short-selling is allowed), since @Igor gave me the tickers of those stocks

library(tseries)
X<-get.hist.quote("0753.HK")
Y<-get.hist.quote("601111.SS")
plot(Y[,4],col="blue",ylim=c(0,30))
lines(X[,4],col="red")

But as @alea_ pointed out, one asset is expressed here in yuan renminbi, and the other one in Hong Kong dollars. So I downloaded the exchange rate from http://www.oanda.com/

Z=read.table("http://freakonometrics.blog.free.fr/public/data/change-cny-hkd.csv",
header=TRUE,sep=";",dec=",")
D=as.Date(as.character(Z$date),"%d/%m/%y")
z=as.numeric(Z$CNY.HKD)
plot(D,z,type="l")
X2=X[,4]
for(t in 1:length(X2)){
X2[t]=X2[t]*z[D==time(X2[t])]}   # convert the Hong Kong prices with the exchange rate
plot(Y[,4],col="blue",ylim=c(0,30))
lines(X2,col="red")

Now both stocks are expressed in the same currency. To compare returns volatility, a first idea can be to use GARCH models,

RX=diff(log(X2))
RY=diff(log(Y[,4]))
Xgarch = garch(as.numeric(RX))
SIGMAX=predict(Xgarch)
Ygarch = garch(as.numeric(RY))
SIGMAY=predict(Ygarch)
plot(time(Y)[-1],SIGMAY[,1],col="blue",type="l")
lines(time(X2)[-1],SIGMAX[,1],col="red")

But the volatility is here too erratic. So an alternative can be to use exponentially weighted moving averages, where simple recursive relationships are considered

https://perso.univ-rennes1.fr/arthur.charpentier/latex/vol-04.png

or equivalently

https://perso.univ-rennes1.fr/arthur.charpentier/latex/vol-05.png
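In case the formula images above do not display, the recursions are presumably the usual exponentially weighted moving average ones, consistent with the weights used in the code below (with smoothing parameter \lambda, set to 0.97 later):

\mu_t = \lambda\,\mu_{t-1} + (1-\lambda)\,x_t, \qquad \sigma_t^2 = \lambda\,\sigma_{t-1}^2 + (1-\lambda)\,(x_t-\mu_t)^2

or, written as a weighted sum over past observations,

\sigma_t^2 = (1-\lambda)\sum_{j\geq 0}\lambda^{j}\,(x_{t-j}-\mu_t)^2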

The code is not great, but it is easy to understand,

moy.ew=function(x,r){              # exponentially weighted mean, with decay parameter r
m=rep(NA,length(x))
for(i in 1:length(x)){
m[i]=weighted.mean(x[1:i],rev(r^(0:(i-1))))}
return(m)}

sd.ew=function(x,r,m){             # exponentially weighted variance, around the weighted mean m
sd=rep(NA,length(x))
for(i in 1:length(x)){
sd[i]=weighted.mean((x[1:i]-m[i])^2,rev(r^(0:(i-1))))}
return(sd)}
q=.97
MX=moy.ew(RX,q)
SX=sd.ew(RX,q,MX)
MY=moy.ew(RY,q)
SY=sd.ew(RY,q,MY)
plot(time(Y)[-1],SY,col="blue",type="l")
lines(time(X2)[-1],SX,col="red")

And now we have something less erratic, so we can focus on the interpretation.
It is also possible to look at the difference between those two series of volatility: areas in blue mean that in Shanghai (again, where short-selling is forbidden) returns are more volatile than in Hong Kong, and areas in red are periods where returns are more volatile in Hong Kong,

a=time(X2)[which(time(X2)%in%time(Y))]
b=SY[which(time(Y)%in%time(X2))]-
  SX[which(time(X2)%in%time(Y))]
n=length(a)
a=a[-n];b=b[-n]
plot(a,b,col="black",type="l")
polygon(c(a,rev(a)),c(pmax(b,0),rep(0,length(a))),
        col="blue",border=NA)
polygon(c(a,rev(a)),c(pmin(b,0),rep(0,length(a))),
        col="red",border=NA)

So clearly, nothing definitive can be said… Sometimes volatility is higher in Hong Kong, and sometimes it is higher in Shanghai. But if we look at the prices, instead of looking at volatility,

a=time(X2)[which(time(X2)%in%time(Y))]
b=as.numeric(Y[which(time(Y)%in%time(X2)),4])- 
  as.numeric(X2[which(time(X2)%in%time(Y))])
n=length(a)
a=a[-n];b=b[-n]
plot(a,b,col="black",type="l")
polygon(c(a,rev(a)),c(pmax(b,0),rep(0,length(a))),
        col="blue",border=NA)
polygon(c(a,rev(a)),c(pmin(b,0),rep(0,length(a))),
        col="red",border=NA)

Here, it looks like bans on short-selling create bubbles. That might not be a good thing.

We keep breaking records? So what?… Get statistical perspective….

This summer, we have been told that some financial series broke some records (here, in French)

For instance, the French CAC40 had negative returns for 11 consecutive days (which had never been seen before).

> library(tseries)
> x<-get.hist.quote("^FCHI")
> Y=x$Close
> Z=diff(log(Y))
> RUN=rle(as.character(Z>=0))$lengths
> n=length(RUN)
> LOSS=RUN[seq(2,n,by=2)]
> GAIN=RUN[seq(1,n,by=2)]
> TG=sort(table(GAIN))
> TG[as.character(1:13)]
GAIN
   1    2    3    4    5    6    7    8    9 <NA> <NA> <NA>   13
 645  336  170   72   63   21    7    3    4   NA   NA   NA    1
 
> TL=sort(table(LOSS))
> TL[as.character(1:15)]
LOSS
   1    2    3    4    5    6    7    8    9 <NA>   11 <NA> <NA> 
 664  337  186   68   42   14    5    3    1   NA    1   NA   NA 
 
> TR=sort(table(RUN))
> TR[as.character(1:15)]
RUN
   1    2    3    4    5    6    7    8    9 <NA>   11 <NA>   13 
1309  673  356  140  105   35   12    6    5   NA    1   NA    1

Indeed, 11 consecutive days of negative returns is a record. But one should keep in mind that the real record for runs is 13 consecutive days of positive returns…
But what does that mean? Can we still assume time independence of log-returns (since, even today, a lot of financial models are based on that assumption)?
Actually, if financial series were time independent, such a probability should indeed be rather small, at least for runs of 10 or 11 days. Something like

http://freakonometrics.blog.free.fr/public/perso3/cacindp03.gif

(assuming that each day, the probability of observing a negative return is 50%). But maybe not over 25 years (6,250 trading days): the probability of observing a sub-sequence of 10 consecutive negative values (with daily probability one half) over 6,250 observations will be much larger. My guess is that it would be

http://freakonometrics.blog.free.fr/public/perso3/cacindp02.gif

where the numerator counts the favourable cases and the denominator the total number of cases. In the numerator, the first term is the number of cases where the first 10 values (at least) are negative; for the second one, we count the cases where the first value is positive and the next 10 (at least) are negative (then the second is positive and the next 10 are negative, the third is positive, etc). For those interested in more details (and a more general formula on runs), an answer can be found here.
But note that the probability is quite large… So it is not that unlikely to observe such a sequence over 25 years.
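A quick Monte Carlo sanity check (my own computation, not the author's formula) gives an idea of how large that probability is: simulate 6,250 i.i.d. fair-coin days and look for a run of at least 10 consecutive negative days,

> set.seed(1)
> neg10=replicate(10000,{x=sample(c(TRUE,FALSE),6250,replace=TRUE)   # TRUE = a negative-return day
+ r=rle(x); any(r$lengths[r$values]>=10)})
> mean(neg10)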
A classical idea when looking at time series is to look at the autocorrelation function of the returns,

which might suggest that there is no correlation with past returns. But it should be possible to do more advanced tests.
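The autocorrelation plot above was presumably produced along these lines (an assumption, since the code is not shown in the post),

> acf(as.numeric(Z),na.action=na.pass)   # autocorrelation of the daily log-returns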
On the CAC40 series, we can run an independence run test on the latest 100 consecutive days, and look at the p-value,

> library(lawstat)
> u=as.vector(Z[(n-100):n])
> runs.test(u,plot=TRUE)
 
	Runs Test - Two sided
 
data:  u 
Standardized Runs Statistic = -0.4991, p-value = 0.6177

The B’s here are returns lower than the median (almost null, so they might be considered as negative returns). With such a high p-value, we accept the null hypothesis, i.e. time independence.
If we consider a moving-time window

we can see that we accept the assumption of independence most of the time.
Actually, here, the time window is 100 days (+/- 50 days). But it is possible to consider 200 days,

or even 400 days,

So, except if we focus on 2006, it looks like we should reject the idea of time dependence in financial markets.
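For completeness, here is a minimal sketch of how such a rolling runs test can be computed (the window construction is my own; the figures above may have been produced differently),

library(lawstat)
z=na.omit(Z)                   # log-returns of the CAC40, without missing values
u=as.numeric(z)
h=50                           # half-window, i.e. a 100-day rolling window
pv=rep(NA,length(u))
for(t in (h+1):(length(u)-h)) pv[t]=runs.test(u[(t-h):(t+h)])$p.value
plot(time(z),pv,type="l",ylab="p-value of the runs test")
abline(h=.05,lty=2,col="red")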
It is also possible to look more carefully at the distribution of runs, and to compare it with the case of independent samples (here we consider Monte Carlo generation of sequences of the same size),

> m=length(Z)
> ns=100000
> HIST=matrix(NA,ns,15)
> for(j in 1:ns){
+ XX=sample(c("A","B"),size=m,replace=TRUE)
+ RUNX=rle(as.character(XX))$lengths
+ S=sort(table(RUNX))
+ HIST[j,]=S[as.character(1:15)]
+ }
> meana=function(x){sum(x[is.na(x)==FALSE])/length(x)}
> cbind(TR[as.character(1:15)],apply(HIST,2,meana),
+       round(m/(2^(1+1:15))))
 
     [,1]       [,2] [,3]
1    1309 1305.12304 1305
2     673  652.46513  652
3     356  326.21119  326
4     140  163.05101  163
5     105   81.52366   82
6      35   40.74539   41
7      12   20.38198   20
8       6   10.16383   10
9       5    5.09871    5
10     NA    2.56239    3
11      1    1.26939    1
12     NA    0.63731    1
13      1    0.31815    0
14     NA    0.15812    0
15     NA    0.08013    0

The first column above is the empirical frequency of runs of length 1, 2, 3, etc. The second one is the average frequency obtained on random simulations of independent samples. The third one is the theoretical frequency: with independent fair draws, run lengths follow a geometric distribution, and the expected number of runs of length k is approximately m/2^(k+1).

Here again, it looks like our time series behaves like an independent sample. Here is also a nice paper by Mark Schilling on the longest run of heads.
So it is not that odd to observe such a series of losses on financial markets….

Multivariate probit regression using (direct) maximum likelihood estimators

Consider a random pair http://freakonometrics.hypotheses.org/files/2015/12/biv-prob-01.gif of binary responses, i.e. http://freakonometrics.hypotheses.org/files/2015/12/biv-prob-02.gif with http://freakonometrics.hypotheses.org/files/2015/12/biv-prob-03.gif taking values 1 or 2. Assume that probability http://freakonometrics.hypotheses.org/files/2015/12/biv-prob-04.gif can be a function of some covariates.

  • The Gaussian vector latent structure

A standard model is based on a latent Gaussian structure, i.e. there exists some random vector http://freakonometrics.hypotheses.org/files/2015/12/biv-prob-06.gif such that http://freakonometrics.hypotheses.org/files/2015/12/biv-prob-07.gif if http://freakonometrics.hypotheses.org/files/2015/12/biv-prob-08.gif is lower than a given threshold, and 1 otherwise.
As in standard probit models, assume that

http://freakonometrics.hypotheses.org/files/2015/12/biv-prob-09.gif

where we can assume that http://freakonometrics.hypotheses.org/files/2015/12/biv-prob-10.gif is a Gaussian random vector. This assumption can be used to derive the likelihood of a sample http://freakonometrics.hypotheses.org/files/2015/12/biv-prob-11.gif.

> logV=function(parameter){
+ CORRELATION=parameter[1]
+ BETA=matrix(parameter[2:length(parameter)],ncol(Y),ncol(X))
+ z=cbind(X%*%(BETA[1,]),X%*%(BETA[2,]))
+ sigma=matrix(c(1,CORRELATION,CORRELATION,1),2,2)
+     a11=pmnorm(z[1,],rep(0,ncol(Y)),varcov=sigma)
+ for(i in 2:nrow(z)){a11=c(a11,pmnorm(z[i,],rep(0,ncol(Y)),varcov=sigma))}
+     a10=pnorm(z[1,1],sd=sqrt(sigma[1,1]))-pmnorm(z[1,],varcov=sigma)
+ for(i in
+ 2:nrow(z)){a10=c(a10,pnorm(z[i,1],sd=sqrt(sigma[1,1]))-pmnorm(z[i,],varcov=sigma))}
+     a01=pnorm(z[1,2],sd=sqrt(sigma[2,2]))-pmnorm(z[1,],varcov=sigma)
+ for(i in
+ 2:nrow(z)){a01=c(a01,pnorm(z[i,2],sd=sqrt(sigma[2,2]))-pmnorm(z[i,],varcov=sigma))}
+     a00=1-a10-a01-a11
+ -sum(((Y[,1]==1)&(Y[,2]==1))*log(a11) +
+     ((Y[,1]==0)&(Y[,2]==1))*log(a01) +
+     ((Y[,1]==1)&(Y[,2]==0))*log(a10) +
+     ((Y[,1]==0)&(Y[,2]==0))*log(a00) )
+ }
> OPT=optim(fn=logV,par=c(0,1,1,1,1,1,1),method="BFGS")$par

(the code is a bit long since I had trouble working properly with matrices – or more precisely vectorizing my functions – so I used loops… I am sure it is possible to write better code).
It is possible to generate samples (based on that specific model) to check that we can actually derive proper maximum likelihood estimators,

> library(mnormt)
> set.seed(1)
> n=1000
> r=0.5
> X1=runif(n)
> X2=rnorm(n)
> Y1S=1+5*X1
> Y2S=8-5*X1
> RES=rmnorm(n,mean=c(0,0),varcov=matrix(c(1,r,r,1),2,2))
> YS=cbind(Y1S,Y2S)+RES
> Y1=(YS[,1]>quantile(YS[,1],.5))*1
> Y2=(YS[,2]>quantile(YS[,2],.5))*1
> base=data.frame(i=1:n,Y1,Y2,X1,X2,YS)
> head(base)
  i Y1 Y2        X1          X2      Y1S      Y2S
1 1  0  0 0.2655087  0.07730312 3.177587 5.533884
2 2  0  0 0.3721239 -0.29686864 1.935307 5.089524
3 3  1  0 0.5728534 -1.18324224 4.757848 5.172584
4 4  1  0 0.9082078  0.01129269 4.600029 3.878225
5 5  0  1 0.2016819  0.99160104 2.547362 6.743714
6 6  1  0 0.8983897  1.59396745 5.309974 4.421523

(the two columns on the right are latent observations, which cannot be used since theoretically they are unobservable). Note that it is a simple regression; one of the components is here only to bring some noise. First of all, let us look at marginal regressions (note that family=binomial in glm uses the logit link by default; a probit fit would use family=binomial(link="probit"))

>  reg1=glm(Y1~X1+X2,data=base,family=binomial)
>  reg2=glm(Y2~X1+X2,data=base,family=binomial)
> summary(reg1)
 
Call:
glm(formula = Y1 ~ X1 + X2, family = binomial, data = base)
 
Deviance Residuals:
Min        1Q    Median        3Q       Max
-2.90570  -0.50126  -0.00266   0.49162   2.78256
 
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -4.291725   0.267149 -16.065   <2e-16 
X1           8.656836   0.510153  16.969   <2e-16 ***
X2           0.007375   0.090530   0.081    0.935
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 1386.29  on 999  degrees of freedom
Residual deviance:  726.48  on 997  degrees of freedom
AIC: 732.48

Number of Fisher Scoring iterations: 5
> summary(reg2)
Call:
glm(formula = Y2 ~ X1 + X2, family = binomial, data = base)
Deviance Residuals:
Min        1Q    Median        3Q       Max
-2.74682  -0.51814  -0.00001   0.57969   2.58565
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept)  3.91709    0.24399  16.054   <2e-16 ***
X1          -7.89703    0.46277 -17.065   <2e-16 ***
X2           0.18360    0.08758   2.096    0.036 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
(Dispersion parameter for binomial family taken to be 1)

Null deviance: 1386.29  on 999  degrees of freedom
Residual deviance:  777.61  on 997  degrees of freedom
AIC: 783.61
Number of Fisher Scoring iterations: 5
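Note that logV uses X and Y as global objects; their construction is not shown in the post, but presumably (an assumption on my part) they are the design matrix, including an intercept, and the matrix of binary responses:

> X=cbind(1,X1,X2)   # assumed design matrix (intercept + covariates)
> Y=cbind(Y1,Y2)     # assumed matrix of binary responses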

Here, the optimization yields,

> OPT=optim(fn=logV,par=c(0,1,1,1,1,1,1),method="BFGS")$par
> OPT[1]
[1] 0.5261382
> matrix(OPT[2:7],2,3)
          [,1]      [,2]       [,3]
[1,] -2.451721  4.908633 0.01600769
[2,]  2.241962 -4.539946 0.10614807

Note that the coefficients we have obtained are almost identical to the ones obtained with the standard R procedure,

>  library(Zelig)
>  REG= zelig(list(mu1=Y1~X1+X2,
+             mu2=Y2~X1+X2,
+     rho=~1),
+     model="bprobit",data=base)
>  summary(REG)
 
Call:
zelig(formula = list(mu1 = Y1 ~ X1 + X2, mu2 = Y2 ~ X1 + X2,
    rho = ~1), model = "bprobit", data = base)
 
Pearson Residuals:
                 Min        1Q     Median      3Q     Max
probit(mu1) -10.5442 -0.377243  0.0041803 0.36709 8.60398
probit(mu2)  -7.8547 -0.376888  0.0083715 0.42923 5.88264
rhobit(rho) -13.8322 -0.091502 -0.0080544 0.37218 0.85101
 
Coefficients:
                  Value Std. Error   t value
(Intercept):1 -2.451699   0.135369 -18.11116
(Intercept):2  2.241964   0.125072  17.92536
(Intercept):3  1.169461   0.189771   6.16249
X1:1           4.908617   0.252683  19.42602
X1:2          -4.539951   0.233632 -19.43203
X2:1           0.015992   0.050443   0.31703
X2:2           0.106154   0.049092   2.16235
 
Number of linear predictors:  3
 
Names of linear predictors: probit(mu1), probit(mu2), rhobit(rho)
Dispersion Parameter for binom2.rho family:   1
 
Residual Deviance: 1460.355 on 2993 degrees of freedom
 
Log-likelihood: -730.1774 on 2993 degrees of freedom
 
Number of Iterations: 3

> matrix(coefficients(REG)[c(1:2,4:7)],2,3)
          [,1]      [,2]       [,3]
[1,] -2.451699  4.908617 0.01599183
[2,]  2.241964 -4.539951 0.10615443

The correlation here is also the same

> (exp(summary(REG)@coef3[3])-1)/(exp(summary(REG)@coef3[3])+1)
[1] 0.5260951

That procedure works well and can be extended to ordinal responses (not only binary ones), or to three-dimensional problems,

logV=function(beta){
BETA=matrix(beta[4:(3+ncol(Y)*ncol(X))],ncol(Y),ncol(X))
z=cbind(X%*%(BETA[1,]),X%*%(BETA[2,]),X%*%(BETA[3,]))
r12=beta[1]
r23=beta[2]
r31=beta[3]
s1=s2=s3=1
sigma=matrix(c(s1^2,r12*s1*s2,r31*s1*s3,
               r12*s1*s2,s2^2,r23*s2*s3,
               r31*s1*s3,r23*s2*s3,s3^2),3,3)
sigma1=matrix(c(s2^2,r23*s2*s3,
                r23*s2*s3,s3^2),2,2)
sigma2=matrix(c(s1^2,r31*s1*s3,
                r31*s1*s3,s3^2),2,2)
sigma3=matrix(c(s1^2,r12*s1*s2,
                r12*s1*s2,s2^2),2,2)
    a111=pmnorm(z[1,],rep(0,ncol(Y)),varcov=sigma)
for(i in 2:nrow(z)){a111=c(a111,pmnorm(z[i,],rep(0,ncol(Y)),varcov=sigma))}
    a011=pmnorm(z[1,2:3],varcov=sigma1)-pmnorm(z[1,],varcov=sigma)
for(i in 2:nrow(z)){a011=c(a011,pmnorm(z[i,2:3],varcov=sigma1)-pmnorm(z[i,],varcov=sigma))}
    a101=pmnorm(z[1,c(1,3)],varcov=sigma2)-pmnorm(z[1,],varcov=sigma)
for(i in 2:nrow(z)){a101=c(a101,pmnorm(z[i,c(1,3)],varcov=sigma2)-pmnorm(z[i,],varcov=sigma))}
    a110=pmnorm(z[1,1:2],varcov=sigma3)-pmnorm(z[1,],varcov=sigma)
for(i in 2:nrow(z)){a110=c(a110,pmnorm(z[i,1:2],varcov=sigma3)-pmnorm(z[i,],varcov=sigma))}
    a100=pnorm(z[1,1],sd=s1)-pmnorm(z[1,c(1,2)],varcov=sigma3)-pmnorm(z[1,c(1,3)],varcov=sigma2)+pmnorm(z[1,],rep(0,ncol(Y)),varcov=sigma)
for(i in 2:nrow(z)){a100=c(a100,pnorm(z[i,1],sd=s1)-pmnorm(z[i,c(1,2)],varcov=sigma3)-pmnorm(z[i,c(1,3)],varcov=sigma2)+pmnorm(z[i,],rep(0,ncol(Y)),varcov=sigma))}
    a010=pnorm(z[1,2],sd=s2)-pmnorm(z[1,c(1,2)],varcov=sigma3)-pmnorm(z[1,c(2,3)],varcov=sigma1)+pmnorm(z[1,],rep(0,ncol(Y)),varcov=sigma)
for(i in 2:nrow(z)){a010=c(a010,pnorm(z[i,2],sd=s2)-pmnorm(z[i,c(1,2)],varcov=sigma3)-pmnorm(z[i,c(2,3)],varcov=sigma1)+pmnorm(z[i,],rep(0,ncol(Y)),varcov=sigma))}
    a001=pnorm(z[1,3],sd=s3)-pmnorm(z[1,c(2,3)],varcov=sigma1)-pmnorm(z[1,c(1,3)],varcov=sigma2)+pmnorm(z[1,],rep(0,ncol(Y)),varcov=sigma)
for(i in 2:nrow(z)){a001=c(a001,pnorm(z[i,3],sd=s3)-pmnorm(z[i,c(2,3)],varcov=sigma1)-pmnorm(z[i,c(1,3)],varcov=sigma2)+pmnorm(z[i,],rep(0,ncol(Y)),varcov=sigma))}
    a000=1-a111-a011-a101-a110-a001-a010-a100
 
a111[a111<=0]=1e-50
a110[a110<=0]=1e-50
a101[a101<=0]=1e-50
a011[a011<=0]=1e-50
a100[a100<=0]=1e-50
a010[a010<=0]=1e-50
a001[a001<=0]=1e-50
a000[a000<=0]=1e-50
 
-sum(((Y[,1]==0)&(Y[,2]==0)&(Y[,3]==0))*log(a111) +
    ((Y[,1]==1)&(Y[,2]==0)&(Y[,3]==0))*log(a011) +
    ((Y[,1]==0)&(Y[,2]==1)&(Y[,3]==0))*log(a101) +
    ((Y[,1]==0)&(Y[,2]==0)&(Y[,3]==1))*log(a110) +
    ((Y[,1]==1)&(Y[,2]==1)&(Y[,3]==0))*log(a001) +
    ((Y[,1]==1)&(Y[,2]==0)&(Y[,3]==1))*log(a010) +
    ((Y[,1]==0)&(Y[,2]==1)&(Y[,3]==1))*log(a100) +
    ((Y[,1]==1)&(Y[,2]==1)&(Y[,3]==1))*log(a000) )
}

A strong assumption in that bivariate model is that residuals have a Gaussian structure. It is possible to change that assumption

  • marginally: for instance if we use a logistic cumulative distribution function, then we will have a bivariate logit regression
  • in terms of dependence structure: it is possible to consider another copula than the gaussian one, e.g. Gumbel’s copula (also called the bivariate logistic copula), or Clayton’s

Here, the following code can be used to extend the model to non-Gaussian structures,

> F=function(x,r){pmnorm(x,rep(0,length(x)),
+                 varcov=matrix(c(1,r,r,1),2,2))}
> Fx=function(x1){F(c(x1,1e40),0)}
> Fy=function(x2){Fx(x2)}
> 
> logVgen=function(parameter){
+ CORRELATION=parameter[1]
+ BETA=matrix(parameter[2:length(parameter)],ncol(Y),ncol(X))
+ z=cbind(X%*%(BETA[1,]),X%*%(BETA[2,]))
+     a11=F(z[1,],r=CORRELATION)
+ for(i in 2:nrow(z)){a11=c(a11,F(z[i,],r=CORRELATION))}
+     a10=Fx(z[1,1])-F(z[1,],r=CORRELATION)
+ for(i in 2:nrow(z)){a10=c(a10,Fx(z[i,1])-F(z[i,],r=CORRELATION))}
+     a01=Fy(z[1,2])-F(z[1,],r=CORRELATION)
+ for(i in 2:nrow(z)){a01=c(a01,Fy(z[i,2])-F(z[i,],r=CORRELATION))}
+     a00=1-a10-a01-a11
+ -sum(((Y[,1]==1)&(Y[,2]==1))*log(a11) +
+     ((Y[,1]==0)&(Y[,2]==1))*log(a01) +
+     ((Y[,1]==1)&(Y[,2]==0))*log(a10) +
+     ((Y[,1]==0)&(Y[,2]==0))*log(a00) )
+ }
>
> beta0=c(0,1,1,1,1,1,1)
> (OPT=optim(fn=logVgen,par=beta0,method="BFGS")$par)
[1]  0.52613820 -2.45172059  2.24196154  4.90863292 -4.53994592  0.01600769
[7]  0.10614807
There were 23 warnings (use warnings() to see them)

E.g.

> library(copula)
> F=function(x,r){pcopula(pnorm(x),
+          claytonCopula(param=r,dim=2))}   # Clayton copula, with dependence parameter r
> Fx=function(x1){F(c(x1,1e40),0)}
> Fy=function(x2){Fx(x2)}

  • An application to school tests

Consider the following dataset,

hsb2=read.table("http://freakonometrics.free.fr/hsb2.csv",
        header=TRUE, sep=",")
math_male=hsb2$math[hsb2$female==0]
write_male=hsb2$write[hsb2$female==0]
math_female=hsb2$math[hsb2$female==1]
write_female=hsb2$write[hsb2$female==1]
plot(math_female, write_female, type="p",
     pch=19,col="red",xlab="maths",ylab="writing",cex=.8)
points(math_male, write_male, cex=1.2, col="blue")

with maths versus writing here, girls in red and boys in blue, where the variables are

  female :
    0: male
    1: female
  race :
    1: hispanic
    2: asian
    3: african-amer
    4: white
  ses :
    1: low
    2: middle
    3: high
  schtyp : type of school
    1: public
    2: private
  prog : type of program
    1: general
    2: academic
    3: vocation
  read : reading score
  write : writing score
  math : math score
  science : science score
  socst : social studies score

We can try to understand the correlation between math and writing skills. Covariates can be the sex of the child, and his or her reading skills. The question will then be: are good students in maths and writing simply students that can read well?

Here the code is simply

> W=hsb2$write>=50
> M=hsb2$math>=50
> base=data.frame(Y1=W,Y2=M,
+             X1=hsb2$female,X2=hsb2$read)
>
> library(Zelig)
> REG= zelig(list(mu1=Y1~X1+X2,
+             mu2=Y2~X1+X2,
+     rho=~1),
+     model="bprobit",data=base)
> summary(REG)
 
Call:
zelig(formula = list(mu1 = Y1 ~ X1 + X2, mu2 = Y2 ~ X1 + X2,
    rho = ~1), model = "bprobit", data = base)
 
Pearson Residuals:
                Min        1Q  Median      3Q    Max
probit(mu1) -4.7518 -0.502594 0.15038 0.53038 1.8592
probit(mu2) -3.4243 -0.653537 0.23673 0.67011 2.6072
rhobit(rho) -4.9821  0.010481 0.13500 0.40776 2.9171
 
Coefficients:
                  Value Std. Error  t value
(Intercept):1 -5.484711   0.787101 -6.96825
(Intercept):2 -4.061384   0.633781 -6.40818
(Intercept):3  1.332187   0.322175  4.13497
X1:1           1.125924   0.233550  4.82092
X1:2           0.167258   0.202498  0.82598
X2:1           0.103997   0.014662  7.09286
X2:2           0.082739   0.012026  6.88017
 
Number of linear predictors:  3
 
Names of linear predictors: probit(mu1), probit(mu2), rhobit(rho)
 
Dispersion Parameter for binom2.rho family:   1
 
Residual Deviance: 364.51 on 593 degrees of freedom
 
Log-likelihood: -182.255 on 593 degrees of freedom
 
Number of Iterations: 3
> (exp(summary(REG)@coef3[3])-1)/(exp(
summary(REG)@coef3[3])+1)
[1] 0.5824045

with a remaining correlation among residuals of 0.58. So with only the sex of the student, and his or her reading skill, we cannot explain the correlation between maths and writing skills. With our previous code, we have here

> beta0=c((exp(summary(REG)@coef3[3])-1)/(exp(summary(REG)@coef3[3])+1),
+      summary(REG)@coef3[c(1:2,4:7),1])
> beta0
              (Intercept):1 (Intercept):2          X1:1          X1:2
0.58240446   -5.48471133   -4.06138412    1.12592427    0.16725842
X2:1          X2:2
0.10399668    0.08273879
> (OPT=optim(fn=logV,par=beta0,method="BFGS")$par)
(Intercept):1 (Intercept):2          X1:1          X1:2
0.5824045    -5.4847113    -4.0613841     1.1259243     0.1672584
X2:1          X2:2
0.1039967     0.0827388

i.e. we obtain (almost) exactly the same estimators. But here I have used the estimators given by R as starting values for the optimization procedure. If we change them, hopefully we still have a robust maximum likelihood estimator,

> (OPT=optim(fn=logV,par=beta0/2,method="BFGS")$par)
              (Intercept):1 (Intercept):2          X1:1          X1:2
   0.58233360   -5.49428984   -4.06839571    1.12696594    0.16760347
         X2:1          X2:2
   0.10417767    0.08287409
There were 12 warnings (use warnings() to see them)

So once again, it is possible to optimize numerically a likelihood function, and it works.


Playing with robots

My son would be extremely proud if I told him I can spend hours building robots. Well, my robots are not as fancy as Dr Tenma’s, but they usually do what I ask them to do. For instance, it is extremely simple to build a robot with R to extract data from websites. I have mentioned it here (on tennis matches), but it failed there (on the NY Marathon). To illustrate the use of robots, assume that one wants to build his own dataset to study prices of airline tickets. First, we have to choose a departure city (e.g. Paris) and an arrival city (e.g. Montreal). Then, one wants to look at all possible dates from April 1st (I ran it last month) till the end of December (so we create a vector with all departure dates, namely one vector for the day, one for the month, and one for the year). Then, we choose a return date (say 3 days later).

DEP="Paris"
ARR="Montreal"
DATE1D=rep(c(1:30,1:31,1:30,1:31,1:31,1:30,1:31,1:30,
1:31,1:31,1:29),3)
DATE1M=rep(c(rep(4,30),rep(5,31),rep(6,30),rep(7,31),
rep(8,31),rep(9,30),rep(10,31),rep(11,30),rep(12,31),
rep(1,31),rep(2,29)),3)
DATE1Y=rep(c(rep(2011,30+31+30+31+31+30+31+
30+31+31+28),rep(2012,31+29)),3)
k=3
DATE3D=c((1+k):30,1:31,1:30,1:31,1:31,1:30,1:31,
1:30,1:31,1:31,1:29,1:k)
DATE3M=c(rep(4,30-k),rep(5,31),rep(6,30),rep(7,31),rep(8,31),
rep(9,30),rep(10,31),rep(11,30),rep(12,31),rep(1,31),rep(2,29),
rep(3,k))
DATE3Y=c(rep(2011,30+31+30+31+31+30+31+30+31+
31+28-k),rep(2012,31+29+k))

It is also possible (for a nice robot) to skip all prior dates

skip=max(as.numeric(Sys.Date()-as.Date("2011-04-01")),1)

Then, we need a website where requests can be written nicely (with cities and dates appearing explicitly). Here, I cannot mention the website that I used, since it is stated on the website that it is strictly forbidden to run automatic requests… Anyway, consider a loop that creates a URL address (actually I chose the dates randomly, since I had been told that those websites have memory: if you ask too many times for the same thing during a short period of time, prices go up),

URL=paste("http://www.♦♦♦♦/dest.dll?qscr=fx&flag=q&city1=",
DEP,"&citd1=",ARR,"&",
"date1=",DATE1D[s],"/",DATE1M[s],"/",DATE1Y[s],
"&date2=",DATE3D[s],"/",DATE3M[s],"/",DATE3Y[s],
"&cADULT=1",sep="")

then, we just have to scan the webpage, looking for ticket prices (just looking for some specific names)

page=as.character(scan(URL,what="character"))
I=which(page%in%c("Price0","Price1","Price2"))
if(length(I)>0){
PRIX=substr(page[I+1],2,nchar(page[I+1]))
if(PRIX[1]=="1"){PRIX=paste(PRIX,page[I+2],sep="")}
if(PRIX[1]=="2"){PRIX=paste(PRIX,page[I+2],sep="")}
}

Here, we have to be a bit cautious if prices exceed 1000 (the price string is then split in two, hence the concatenation above). Then, it is possible to start a statistical study. For instance, if we compare two destinations (from Paris), e.g. Montréal and New York, we obtain the following patterns (with high prices during holidays),

It is also possible to run the code twice (here it was run last month, and a couple of days ago), for the same destination (from Paris to Montréal),

Of course, it would be great if I could run that code say every week, to build up a nice dataset, and to study the dynamic of prices…

The problem is that it is forbidden to do this. In fact, on the website, it is mentioned that if we want to extract data (for an academic purpose), it is possible to ask for an extraction. But if we tell them that we are studying specific prices, the data might be biased. So a good idea would be to use several servers, to make several requests, randomly, and to collect them (changing dates and destinations). But here, my computing skills – unfortunately – reach their limit….

Oscar awards: good actor versus good actress

I am not a big fan of those ceremonies, where some actors pretend that they are extremely happy to be there, and then some win a trophy, some don’t, and those who win start to cry, and those who did not get a trophy try to pretend that they are not affected, etc. The other reason is that, since I have several kids, I do not go to see the movies that often (I mean apart from Shrek, Toy Story… Harry Potter is probably the only movie I’ve seen with real actors – or at least human actors).

But I remember being surprised when I looked at the nominees in newspapers,

Actresses are beautiful and look young, while actors are more experienced. So I tried to see how old those who won an Oscar were, as best actor (here) or best supporting actor (there), and best actress (here) or best supporting actress (there).

OSCAR=read.table("http://freakonometrics.blog.free.fr/public/data/OSCAR.csv",
sep=",",header=TRUE,dec=".")
actor=OSCAR[,1]
suppactor=OSCAR[,2]
actress=OSCAR[,3]
suppactress=OSCAR[,4]
actor=actor[is.na(actor)==FALSE]
actor=actor[actor>0]
actress=actress[is.na(actress)==FALSE]
actress=actress[actress>0]
suppactor=suppactor[is.na(suppactor)==FALSE]
suppactor=suppactor[suppactor>0]
suppactress=suppactress[is.na(suppactress)==FALSE]
suppactress=suppactress[suppactress>0]
 
boxplot(actor,suppactor,actress,suppactress,col=c("blue","blue","red","red"),
names=c("actor","supp. actor","actress","supp. actress"))

On average, a best actress is 36 years old, while a best actor is 44 years old.  Which is quite a difference… Perhaps because it takes more time for an actor to become a good one? Assuming that they start acting at 18, it takes 18 more years for an actress to be recognized as a good one (here, the best one), and 26 for an actor. Or perhaps it is simply because leading actresses have to look young…
The oldest actor who won an Oscar was Henry Fonda (at the age of 76) and the oldest actress was Jessica Tandy (nearing 81). Tatum O’Neal became the youngest person to win the best supporting actress award, at the age of 10 (she was 8 when she was acting). The youngest best actress was Marlee Matlin, at 21. The distributions can be seen below, with actors in blue, and actresses in red, supporting roles in dotted lines, and leading roles in plain lines,

plot(density(actor),xlim=c(10,80),axes=FALSE,
col="blue",main="",ylab="",xlab="",ylim=c(0,.051))
lines(density(suppactor),col="blue",lty=2)
lines(density(actress),col="red")
lines(density(suppactress),col="red",lty=2)
axis(1)

Note that supporting-role winners are older than leading-role ones. E.g. the average age of supporting actors winning an Oscar is 50, while it is 44 for leading actors. Similarly, it is 40 for supporting actresses, and 36 for leading actresses.

> mean(suppactor)
[1] 50.23762
 
> mean(actor)
[1] 44.29982
 
> mean(suppactress)
[1] 40.55766
 
> mean(actress)
[1] 36.39733

Here, I have to admit that I was surprised. I always thought that being a supporting actor was a first step before being a leading one. So winners of supporting awards should have been younger than winners of leading ones. But this is not the case.

And the dynamic here is rather stable, with actors,

and actresses,

except that the age difference between supporting roles and leading roles increased in the 80s for actors, while it decreased in the 80s for actresses.

Beta kernel and transformed kernel

This Thursday I will give a talk at Laval University, on “Beta kernel and transformed kernel: applications to copula density estimation and quantile estimation“. This time, I will talk at the department of Mathematics and Statistics (13:30 at the pavillon Adrien-Pouliot). “Because copulas have bounded support (the unit square in dimension 2), standard kernel based estimators of densities are (multiplicatively) biased on borders and in corners of the support. Two techniques can be used to avoid that underestimation: beta kernels and the transformed kernel. We will describe and discuss those two techniques in the first part of the talk. Then, we will see that it is possible to combine those two techniques to get nice estimators of several quantities (e.g. quantiles): transform the data to get on the unit interval – using a transformed kernel – then estimate the (transformed) quantile on [0,1] using a beta kernel, then get back on the initial support. As we will see on simulations, that technique can be better than standard quantile estimators, especially when data are heavy tailed.” Slides can be downloaded here.

  • kernel based density estimation

Kernel based estimation is a popular (and natural) technique to estimate densities. It is simply an extension of the moving histogram:

so we count how many observations are in the neighborhood of the point where we want to estimate the density. Then it is natural to consider a smoothing function, i.e. instead of a step function (either observations are close enough, or not), it is possible to give weights to observations, which will be a decreasing function of the distance,

With a smooth kernel, we have a smooth estimation of the density

http://freakonometrics.blog.free.fr/public/perso3/kernel-f-01.gif

Then it is possible to play with the bandwidth, either to get a more accurate, but less smooth, estimation of the density (small bias but large variance),

or a smoother one (large bias, but small variance),

In R, it is simply

> X=rnorm(100)
> (D=density(X))
 
Call:
	density.default(x = X)
 
Data: X (100 obs.);	Bandwidth 'bw' = 0.3548
 
       x                   y            
 Min.   :-3.910799   Min.   :0.0001265  
 1st Qu.:-1.959098   1st Qu.:0.0108900  
 Median :-0.007397   Median :0.0513358  
 Mean   :-0.007397   Mean   :0.1279645  
 3rd Qu.: 1.944303   3rd Qu.:0.2641952  
 Max.   : 3.896004   Max.   :0.3828215  
 
> plot(D$x,D$y)

  • Beta kernel

The idea of Beta kernel is to consider kernels having support [0,1]. In the univariate case,

http://freakonometrics.blog.free.fr/public/perso3/kernel-f-06.gif

where http://freakonometrics.blog.free.fr/public/perso3/kernel-f-07.gif is the density of a Beta distribution, i.e.

http://freakonometrics.blog.free.fr/public/perso3/beta-distribution.gif
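For instance, in the univariate case, a minimal sketch (the helper function is mine, mirroring the bivariate code below: a Beta kernel is associated with each observation, and evaluated at the points where the density is needed),

beta.kernel.density=function(u,x,b){
sapply(u,function(t) mean(dbeta(t,x/b,(1-x)/b)))}
x=rbeta(500,2,5)                        # simulated sample on (0,1)
u=seq(.01,.99,by=.01)
plot(u,beta.kernel.density(u,x,b=.05),type="l")
lines(u,dbeta(u,2,5),lty=2,col="red")   # true density, for comparison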

For additional material, I have uploaded some R code to fit copula densities using beta kernels,

library(copula)
beta.kernel.copula.surface = function (u,v,bx,by,p) {
s = seq(1/p, len=(p-1), by=1/p)
mat = matrix(0,nrow = p-1, ncol = p-1)
for (i in 1:(p-1)) {
a = s[i]
for (j in 1:(p-1)) {
b = s[j]
mat[i,j] = sum(dbeta(a,u/bx,(1-u)/bx) *
dbeta(b,v/by,(1-v)/by)) / length(u)
} }
return(data.matrix(mat)) }

Then we can use it to see what we get on a simulated sample

library(copula)
COPULA = frankCopula(param=5, dim = 2)
X = rcopula(n=1000,COPULA)
p0 = 26
Z= beta.kernel.copula.surface(X[,1],X[,2],bx=.01,by=.01,p=p0)
u = seq(1/p0, len=(p0-1), by=1/p0)
persp(u,u,Z,theta=30,col="green",shade=TRUE,
box=FALSE,zlim=c(0,6))

http://freakonometrics.free.fr/copula-kernel-beta.gif
(yes, the surface is changing… to illustrate the impact of the bandwidth on the estimation).

  • transformed kernel estimation

In the talk, I will also mention the transformed kernel estimate, as introduced in the book on L1 density estimation by Luc Devroye and Laszlo Györfi (the book can be downloaded here). I will probably spend a few minutes on the original chapter, in order to provide another application of that technique (not only to estimate copula densities, but here to estimate quantiles of heavy tailed distributions). In the univariate case, the R code is the following (here I consider two transformations: the quantile function of the Gaussian distribution, and the quantile function of the Student distribution with 3 degrees of freedom),

set.seed(1)
sample=rbeta(100,4,3)
 
transfN = function(x){
Y=qnorm(sample)                    # transformed sample, on the real line
f=density(Y,from=-4,to=4,n=2001)   # Gaussian kernel estimate of the transformed density
ny=sum(f$x<=qnorm(x))
  g=f$y[ny]/dnorm(qnorm(x))        # back-transform: f(x) = f_Y(qnorm(x)) / dnorm(qnorm(x))
return(g)
}
 
df0=3
 
transfT = function(x){
Y=qt(sample,df=df0)
f=density(Y,from=-4,to=4,n=2001)
ny=sum(f$x<=qt(x,3)); 
  g=f$y[ny]/dt(qt(x,df=df0),df=df0)
return(g)
}
 
tN=Vectorize(transfN)
tT=Vectorize(transfT)
 
u=seq(.01,.99,by=.01)
vN=tN(u)
vT=tT(u)
plot(u,vN,type="l",lwd=3,col="blue")
lines(u,vT,lwd=3,col="green")
lines(u,dbeta(u,4,3),col="red",lty=2)

The density estimation is the following,

(the red dotted line is the true density, since we work on a simulated sample). Now, let us get back to the initial chapter,

In the book, this is introduced as follows,

The original idea we had was to use this kernel based estimator for copulas, i.e. since we can estimate densities in high dimension with unbounded support, using

http://freakonometrics.blog.free.fr/public/perso3/kernel-f-02.gif

the idea is to transform marginal observations,

http://freakonometrics.blog.free.fr/public/perso3/kernel-f-10.gif

and to use the fact that the associated copula density can be written

http://freakonometrics.blog.free.fr/public/perso3/kernel-f-12.gif

to derive an intuitive estimator for the copula density

http://freakonometrics.blog.free.fr/public/perso3/kernel-f-13.gif
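A bivariate sketch of that estimator (using MASS::kde2d for the Gaussian kernel part; the simulated sample, the bandwidth and the grid are my own choices, not the author's),

library(MASS)
set.seed(1)
Z=mvrnorm(1000,mu=c(0,0),Sigma=matrix(c(1,.5,.5,1),2,2))
U=cbind(rank(Z[,1]),rank(Z[,2]))/1001        # pseudo-observations, on the unit square
X=qnorm(U)                                   # transformed observations, on the plane
fXY=kde2d(X[,1],X[,2],n=51,lims=c(-3,3,-3,3))
chat=fXY$z/outer(dnorm(fXY$x),dnorm(fXY$y))  # copula density: f(qnorm(u),qnorm(v))/(phi*phi)
persp(pnorm(fXY$x),pnorm(fXY$y),chat,theta=30,col="green",shade=TRUE)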

An important issue is how to choose the transformation

And Luc Devroye and Laszlo Györfi mention that this can be used to deal with extremes.

Well, extremes are introduced through bumps (which is not the way I would have dealt with extremes)

and note that several results can be derived on those bumps,

e.g.

Then, there is an interesting discussion about estimating the optimal transformation

and I will show that this can be an extremely interesting idea, for instance to estimate quantiles of heavy tailed distributions, if we also use the beta kernel estimator on the unit interval. This idea was developed in a paper with Abder Oulidi, online here.

Remark: actually, in the book, an additional reference is mentioned,

but I have never been able to find a copy… if anyone has one, I’d be glad to read it…

Time horizon in forecasting, and rules of thumb

I recently received an email (in French) about forecasting and rules of thumb: “In the profession […] an empirical rule is passed around, according to which one should use a history twice as long as the forecast horizon: 20 years of data for a 10-year forecast, etc. I would like to know whether this rule might have some theoretical foundation, even if the ratio turned out not to be 2 to 1, but 3 to 1 or 1 to 1, for example.” To summarize briefly, the rule is to consider a 2-1 ratio for the period of observation vs. forecast horizon. And the interesting question is whether there are justifications for such a rule…

At first, I remembered a rule of thumb, from the book by Box and Jenkins, which states that it is meaningless to look at autocorrelations when lags exceed the sample size divided by 6. So with 12 years of data, autocorrelations at lags higher than two years are useless. But that is not what is mentioned here. So I looked at some datasets, and some standard time series models.

  • It depends on the series

It might be obvious… but if it is the case, it means that it will be difficult to have a general rule of thumb. Consider e.g. the number of airline passengers,

library(forecast)
X = AirPassengers
ETS = ets(X)
plot(forecast(ETS,h=length(X)/2))

or some sales in a big store,

or car casualties in France, or the temperature in Nottingham Castle,

or the water level of Lake Huron, or the flow of the Nile river,

or see also here for forecasting techniques in demography. Actually, in the case of life insurance, actuaries have to forecast future demography, i.e. try to assess death rates of those who currently purchase retirement contracts, who might be 20 years old. So they have to forecast death rates until 2100, say. On the one hand, it sounds difficult to make forecasts over a century (it is already difficult for climate, I guess it is even more complex for human life). On the other hand, a 2-1 ratio means that we would have to use data from 1800… Here again, it is difficult to justify that mortality in the 1850s could say anything about mortality in 2050. So I guess it will be difficult to justify the use of general rules of thumb….
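The series mentioned above (the Nottingham temperatures, the Lake Huron levels and the Nile flow) are standard built-in R datasets, so the same exercise can be reproduced along the following lines (a sketch; the figures above may have used different settings),

library(forecast)
for(X in list(nottem,LakeHuron,Nile)){
plot(forecast(ets(X),h=length(X)/2))}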

  • It depends on the model

Consider the following (simulated) series. Several models can be fitted, and the shape of the forecast (and the forecast error) will depend on the model considered. The benchmark can be the model without any dynamics, i.e. we assume that observations are i.i.d. Or, more classically, assume that it is simply a white noise, i.e. an i.i.d. centered process. Then the forecast is the following,

With that kind of assumption, we see that the 2-1 ratio is useless since we can get forecasts up to any horizon…. But that does not seem very robust. For instance, if we consider exponential smoothing techniques, we can obtain

Which is rather different. And with the 2-1 ratio, obviously, there is a lot of uncertainty at the end! It would be even worse if we assumed we were looking at a random walk. Because actually a dozen models – at least – can be considered, from ARIMA, seasonal ARIMA, Holt-Winters, exponential smoothing, etc…
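To illustrate, here is a sketch of that kind of comparison on a simulated series (the simulated AR process and the models fitted are my own choices, not necessarily the ones used for the figures),

library(forecast)
set.seed(1)
X=ts(arima.sim(list(ar=.7),n=100))
plot(forecast(Arima(X,order=c(0,0,0)),h=50))   # i.i.d. (white noise) benchmark
plot(forecast(ets(X),h=50))                    # exponential smoothing
plot(forecast(Arima(X,order=c(0,1,0)),h=50))   # random walk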

http://freakonometrics.blog.free.fr/public/perso2/animationforecast.gif

So I do not see any theoretical justification for that rule of thumb. Obviously, the maximum horizon cannot be extremely far away if the series is non-stationary, with a very irregular pattern, and with a lot of noise… So we’re back at the beginning. If anyone is willing to share his or her experience, comments are open.

Circular or spherical data, and density estimation

A few years ago, while I was working on kernel based density estimation for distributions with compact support (like copulas), I went through a series of papers on circular distributions. At the time, I thought it was something for mathematicians working on weird spaces… but during the past weeks, I saw several potential applications of those estimators.

  • circular data density estimation

Consider the density of an angle say, i.e. a function http://freakonometrics.hypotheses.org/files/2015/12/circ-01.gif such that

http://freakonometrics.hypotheses.org/files/2015/12/circ-02.gif

with a circular relationship, i.e. http://freakonometrics.hypotheses.org/files/2015/12/circ-03.gif. It can be seen as an invariance by rotation.
von Mises proposed a parametric model in 1918 (see here or there), assuming that

http://freakonometrics.hypotheses.org/files/2015/12/circ-04.gif

where http://freakonometrics.hypotheses.org/files/2015/12/circ-05.gif is Bessel modified function of order 1,

http://freakonometrics.hypotheses.org/files/2015/12/circ-06.gif

(which is simply a normalization parameter). There are two parameters here, http://freakonometrics.hypotheses.org/files/2015/12/circ-07.gif (some concentration parameter) and mu a direction.
From a series of observed angles http://freakonometrics.hypotheses.org/files/2015/12/circ-08.gif, the maximum likelihood estimator for kappa is the solution of

http://freakonometrics.hypotheses.org/files/2015/12/circ-09.gif

where

http://freakonometrics.hypotheses.org/files/2015/12/circ-10.gif

and

http://freakonometrics.hypotheses.org/files/2015/12/circ-11.gif

and where http://freakonometrics.hypotheses.org/files/2015/12/circ-12.gif, those functions being modified Bessel functions. Well, that estimator is biased, but it is possible to improve it (see here or there). This can be done easily in R (actually Jeff Gill – here – used that package in several applications). But I am not a big fan of that technique….
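For instance, a minimal sketch with the circular package (I assume this is the package referred to; mle.vonmises includes a bias-correction option),

library(circular)
set.seed(1)
theta=rvonmises(200,mu=circular(pi/4),kappa=2)   # simulated sample of angles
mle.vonmises(theta,bias=TRUE)                    # estimates of mu and kappa (bias corrected)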

  • density estimation for hours on simulated data

A nice application can be the estimation of the daily density of temporal events (e.g. phone calls, as we’ll see later on, or email arrival times). Let http://freakonometrics.hypotheses.org/files/2015/12/circ-13.gif be the time (in hours) of the http://freakonometrics.hypotheses.org/files/2015/12/circ-14.gifth observation (the http://freakonometrics.hypotheses.org/files/2015/12/circ-14.gifth phone call received). Then set

http://freakonometrics.hypotheses.org/files/2015/12/circ-15.gif

The time is now seen as an angle. It is possible to consider the equivalent of a histogram,

set.seed(1)
library(circular)
X=rbeta(100,shape1=2,shape2=4)*24
Omega=2*pi*X/24
Omegat=2*pi*trunc(X)/24
H=circular(Omega,type="angle",units="radians",rotation="clock")
Ht=circular(Omegat,type="angle",units="radians",rotation="clock")
plot(Ht, stack=FALSE, shrink=1.3, cex=1.03,
axes=FALSE,tol=0.8,zero=c(rad(90)),bins=24,ylim=c(0,1))
points(Ht, rotation = "clock", zero =c(rad(90)),
col = "1", cex=1.03, stack=TRUE )

rose.diag(Ht-pi/2,bins=24,shrink=0.33,xlim=c(-2,2),ylim=c(-2,2),
axes=FALSE,prop=1.5)

or a kernel based estimation of the density (the gray line on the right).

circ.dens = density(Ht+3*pi/2,bw=20)
plot(Ht, stack=TRUE, shrink=.35, cex=0, sep=0.0,
axes=FALSE,tol=.8,zero=c(0),bins=24,
xlim=c(-2,2),ylim=c(-2,2), ticks=TRUE, tcl=.075)
lines(circ.dens, col="darkgrey", lwd=3)
text(0,0.8,"24", cex=2); text(0,-0.8,"12",cex=2);
text(0.8,0,"6",cex=2); text(-0.8,0,"18",cex=2)

The code looks rather simple. But I am not very comfortable using code that I do not completely understand. So I wrote my own. The first step was to get a graph similar to the one on the right, except that I prefer my own kernel based estimator. The idea is that instead of estimating the density on http://freakonometrics.hypotheses.org/files/2015/12/Xi.gif, we estimate it on the sample http://freakonometrics.hypotheses.org/files/2015/12/circular-density-3.gif. Then we multiply by 3 to get the density only on http://freakonometrics.hypotheses.org/files/2015/12/0-24.gif. For the bandwidth, I took the same one we would have taken on http://freakonometrics.hypotheses.org/files/2015/12/Xi.gif

The code is simply the following

U=seq(0,1,by=1/250)
O=U*2*pi
U12=seq(0,1,by=1/24)
O12=U12*2*pi
X=rbeta(100,shape1=2,shape2=4)*24
OM=2*pi*X/24
XL=c(X-24,X,X+24)
d=density(X)
d=density(XL,bw=d$bw,n=1500)
I=which((d$x>=6)&(d$x<=30))
Od=d$x[I]/24*2*pi-pi/2
Dd=d$y[I]/max(d$y)+1

plot(cos(O),-sin(O),xlim=c(-2,2),ylim=c(-2,2),
type="l",axes=FALSE,xlab="",ylab="")
for(i in pi/12*(0:12)){
abline(a=0,b=tan(i),lty=1,col="light yellow")}
segments(.9*cos(O12),.9*sin(O12),1.1*cos(O12),1.1*sin(O12))
lines(Dd*cos(Od),-Dd*sin(Od),col="red",lwd=1.5)
text(.7,0,"6"); text(-.7,0,"18")
text(0,-.7,"12"); text(0,.7,"24")
R=1/24/max(d$y)/3+1
lines(R*cos(O),R*sin(O),lty=2)

Note that it is possible to put more (visual) stress on hours with few phone calls, or many (compared with a homogeneous Poisson process), e.g.

plot(cos(O),-sin(O),xlim=c(-2,2),ylim=c(-2,2),
type="l",axes=FALSE,xlab="",ylab="")
for(i in pi/12*(0:12)){
abline(a=0,b=tan(i),lty=1,col="light yellow")}
segments(2*cos(O12),2*sin(O12),1.1*cos(O12),1.1*sin(O12), col="light grey")
segments(.9*cos(O12),.9*sin(O12),1.1*cos(O12),1.1*sin(O12))
text(.7,0,"6")
text(-.7,0,"18")
text(0,-.7,"12")
text(0,.7,"24")
R=1/24/max(d$y)/3+1
lines(R*cos(O),R*sin(O),lty=2)
AX=R*cos(Od);AY=-R*sin(Od)
BX=Dd*cos(Od);BY=-Dd*sin(Od)
COUL=rep("blue",length(AX))
COUL[R<Dd]="red"
CM=cm.colors(200)
a=trunc(100*Dd/R)
COUL=CM[a]
segments(AX,AY,BX,BY,col=COUL,lwd=2)
lines(Dd*cos(Od),-Dd*sin(Od),lwd=2)

We get here those two graphs,

To be honest, I do not really like that representation – even if it looks nice. If we compare that circular representation to a more classical one (from 0:00 till 23:59 on the graph on the left, below), I have a problem interpreting the areas in blue and pink.

On the left, we compare two densities, so the area in pink is the same as the area in blue. But here, it is no longer the case: the area in pink is always larger than the one in blue. So it might help to see when there is a difference, but there is a scaling issue that we cannot discuss further… But let us see if we can use that estimation technique on several problems.

  • density of wind direction

A standard application when studying angles is wind direction. For instance, in Montréal, it is possible to find hourly observations, starting in 1974 (we just need an R robot to pick up the information, but I’ll tell more about that in another post, someday). Here, we directly have an angle. So we can use code rather similar to the one used above to estimate the distribution of wind direction in Montréal.

Note that our estimate is consistent with several graphs that can be found on meteorological websites (e.g. the one above on the right, which was found here).

  • density of 911 phone calls

In a recent post (here) I wanted to check the “midnight crime” myth, using hours of 911 phone calls in Montréal.

That was for all phone calls. But if we look more specifically, for burglaries we have the distribution on the left, and for conflicts the one on the right,

while for gun shots, we have the distribution on the left, and for “troubles” (basically people making too much noise at parties) or “noise” the one on the right.

We clearly observe that gun shots occur a bit before midnight. See also here for another study, this time in NYC (thanks @PAC for the link).

  • density of earth temperatures, or earthquakes

Of course it is also possible to work in higher dimensions. Before, we went from densities on http://freakonometrics.hypotheses.org/files/2015/12/circ-16.gif to densities on the unit circle http://freakonometrics.hypotheses.org/files/2015/12/circ-18.gif. Similarly, it is possible to go from http://freakonometrics.hypotheses.org/files/2015/12/circ-17.gif to the unit sphere http://freakonometrics.hypotheses.org/files/2015/12/circ-19.gif. A nice application is global climate studies,

The idea being that points on the left above are extremely close to the ones on the right. An application can be e.g. earthquake occurrences. Data can be found here.

library(ks)
X=cbind(EQ$Longitude,EQ$Latitude)
Hpi1 = Hpi(x = X)
DX=kde(x = X, H = Hpi1)
library(maps)
map("world")
plot(DX,add=TRUE,col="red")
points(X,cex=.2,col="blue")
Y=rbind(cbind(X[,1],X[,2]),cbind(X[,1]+360,X[,2]),
cbind(X[,1]-360,X[,2]),cbind(X[,1],X[,2]+180),
cbind(X[,1]+360,X[,2]+180),cbind(X[,1]-360,X[,2]+180),
cbind(X[,1],X[,2]-180),cbind(X[,1]+360,X[,2]-180),
cbind(X[,1]-360,X[,2]-180))
DY=kde(x = Y, H = Hpi1)
plot(DY,add=TRUE,col="purple")

Without any correction, we get the red level curves. The purple ones include the correction.

Want to say one thing and the exact opposite with strong confidence?

No need to go into politics. Just take a statistics course. And I am not talking about misinterpretation of statistics here, but about the mathematical foundations of statistical tests.
Consider the following parametric test, with a one-dimensional parameter: http://freakonometrics.blog.free.fr/public/perso2/test-lies-01.gif versus http://freakonometrics.blog.free.fr/public/perso2/test-lies-02.gif, for some (fixed) http://freakonometrics.blog.free.fr/public/perso2/test-lies-03.gif. A standard way of doing such a test is to consider a rejection region http://freakonometrics.blog.free.fr/public/perso2/test-lies-05.gif. The test works as follows: consider a sample http://freakonometrics.blog.free.fr/public/perso2/test-lies-06.gif,

  • if http://freakonometrics.blog.free.fr/public/perso2/test-lies-07.gif, then we accept http://freakonometrics.blog.free.fr/public/perso2/test-H0.gif
  • if http://freakonometrics.blog.free.fr/public/perso2/test-lies-09.gif, then we reject http://freakonometrics.blog.free.fr/public/perso2/test-H0.gif

For instance, consider the case of a Bernoulli sample, with probability http://freakonometrics.blog.free.fr/public/perso2/test-lies-62.gif. The standard idea is to define

http://freakonometrics.blog.free.fr/public/perso2/test-lies-13.gif

The rejection region is then based on statistic http://freakonometrics.blog.free.fr/public/perso2/test-lies-210.gif,

  • if http://freakonometrics.blog.free.fr/public/perso2/test-lies-25.gif, then we accept http://freakonometrics.blog.free.fr/public/perso2/test-H0.gif
  • if http://freakonometrics.blog.free.fr/public/perso2/test-lies-22.gif, then we reject http://freakonometrics.blog.free.fr/public/perso2/test-H0.gif

where the threshold http://freakonometrics.blog.free.fr/public/perso2/test-lies-26.gif is chosen so that the probability of making a type I error is http://freakonometrics.blog.free.fr/public/perso2/test-lies-28.gif (say 5%), using the Gaussian approximation for z. Here

http://freakonometrics.blog.free.fr/public/perso2/test-lies-30.gif

Thus, the acceptance region is the green area below, while the rejection region is the red one, for http://freakonometrics.blog.free.fr/public/perso2/test-lies-210.gif.
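Just to fix ideas, here is a small numerical sketch (with my own notations) of such a threshold, for a one-sided test of theta being at most 50% against theta exceeding 50%, on the sample frequency scale, with n=10 observations and a 5% type I error,

theta0=.5; n=10; alpha=.05
theta0+qnorm(1-alpha)*sqrt(theta0*(1-theta0)/n)
# roughly 0.76: we reject the null when the sample frequency exceeds this value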

Consider now the exact opposite test (with the same http://freakonometrics.blog.free.fr/public/perso2/test-lies-03.gif), http://freakonometrics.blog.free.fr/public/perso2/test-lies-51.gif versus http://freakonometrics.blog.free.fr/public/perso2/test-lies-52.gif. Here, we use the same statistic, and the test is

  • if http://freakonometrics.blog.free.fr/public/perso2/test-lies-22.gif, then we accept http://freakonometrics.blog.free.fr/public/perso2/test-H0.gif
  • if http://freakonometrics.blog.free.fr/public/perso2/test-lies-25.gif, then we reject http://freakonometrics.blog.free.fr/public/perso2/test-H0.gif

where now

http://freakonometrics.blog.free.fr/public/perso2/test-lies-50.gif

Now, the acceptance region is the green area below, while the rejection region is the red one.

So if we summarize what we just said,

  • in the region on the left below, both tests agree that http://freakonometrics.blog.free.fr/public/perso2/test-lies-55.gif
  • in the region on the right below, both tests agree that http://freakonometrics.blog.free.fr/public/perso2/test-lies-57.gif
  • and in the blue region, in the middle, the two tests disagree (one claims that http://freakonometrics.blog.free.fr/public/perso2/test-lies-55.gif, and the other one that http://freakonometrics.blog.free.fr/public/perso2/test-lies-57.gif)

Here is the evolution of those regions as a function of http://freakonometrics.blog.free.fr/public/perso2/test-lies-56.gif (the sample size) when the sample frequency is 20%. With a small sample size, we can hardly say anything.

n=seq(1,100)
p=0.2                                   # observed sample frequency
x1=p+qnorm(.95)*sqrt(p*(1-p)/n)         # upper bound, Gaussian approximation
x2=p+qnorm(.05)*sqrt(p*(1-p)/n)         # lower bound
plot(n,x1,type="l",ylim=c(0,1))
polygon(c(n,rev(n)),c(x1,rev(x2)),col="light blue",border=NA)
lines(n,x1,lwd=2,col="red")
lines(n,x2,lwd=2,col="red")

One might argue that those bounds are based on a Gaussian approximation, which is not valid when http://freakonometrics.blog.free.fr/public/perso2/test-lies-56.gif is too small. So we can compute exact bounds,
y1=qbinom(.95,size=n,prob=p)/n          # exact binomial bounds
y2=qbinom(.05,size=n,prob=p)/n
polygon(c(n,rev(n)),c(y1,rev(y2)),col="blue",border=NA)
lines(n,y1,lwd=2,col="red")
lines(n,y2,lwd=2,col="red")

and we get

This is what we observe if we use R’s testing procedures, either the asymptotic one,

> prop.test(2,10,.5,alternative="less")
 
1-sample proportions test with continuity correction
 
data:  2 out of 10, null probability 0.5
X-squared = 2.5, df = 1, p-value = 0.05692
alternative hypothesis: true p is less than 0.5
95 percent confidence interval:
0.0000000 0.5100219
sample estimates:
p
0.2
 
> prop.test(2,10,.5,alternative="greater")
 
1-sample proportions test with continuity correction
 
data:  2 out of 10, null probability 0.5
X-squared = 2.5, df = 1, p-value = 0.943
alternative hypothesis: true p is greater than 0.5
95 percent confidence interval:
0.04368507 1.00000000
sample estimates:
p
0.2

or a more accurate one

> binom.test(2,10,.5,alternative="less")
 
Exact binomial test
 
data:  2 and 10
number of successes = 2, number of trials = 10, p-value = 0.05469
alternative hypothesis: true probability of success is less than 0.5
95 percent confidence interval:
0.0000000 0.5069013
sample estimates:
probability of success
0.2
 
> binom.test(2,10,.5,alternative="greater")
 
Exact binomial test
 
data:  2 and 10
number of successes = 2, number of trials = 10, p-value = 0.9893
alternative hypothesis: true probability of success is greater than 0.5
95 percent confidence interval:
0.03677144 1.00000000
sample estimates:
probability of success
0.2

Here, when the sample frequency is 20% and http://freakonometrics.blog.free.fr/public/perso2/test-lies-56.gif is equal to 10, we accept at the same time that theta is higher than 50% and that theta is lower than 50%.
And obviously it is not only a theoretical problem: it has some strong practical implications. This morning, a good friend mentioned a post published some months ago, online here, about discrimination, and the lack of women in academic positions in mathematics, in France. As the author of the post claims, “At Paris VI, the best French university according to its president, out of 11 maître de conférences positions, 5 women ranked first. So there are excellent women? In Toulouse, out of 4 positions, 2 women first. Perfect parity. But next to that, Bordeaux, 4 positions, 0 women first. Littoral, 3 positions, 0 women. Nice, 5 positions, 0 women. Rennes, 7 positions, 0 women…”.
Consider the last one: in Rennes, out of 7 people hired last year, not a single woman. So in some sense, it looks obvious that there is some kind of discrimination! Zero out of seven! Well, if we take into account the fact that around 30% of PhD theses in mathematics were defended by women in those years, we can also try to see whether there is any “positive discrimination”, i.e. test http://freakonometrics.blog.free.fr/public/perso2/test-lies-60.gif where theta is the probability of hiring a woman (just to be a little bit provocative).

> prop.test(0,7,.3,alternative="less")
 
1-sample proportions test with continuity correction
 
data:  0 out of 7, null probability 0.3
X-squared = 1.7415, df = 1, p-value = 0.09347
alternative hypothesis: true p is less than 0.3
95 percent confidence interval:
0.0000000 0.3719021
sample estimates:
p
0
 
Warning message:
In prop.test(0, 7, 0.3, alternative = "less") :
Chi-squared approximation may be incorrect
> binom.test(0,7,.3,alternative="less")
 
Exact binomial test
 
data:  0 and 7
number of successes = 0, number of trials = 7, p-value = 0.08235
alternative hypothesis: true probability of success is less than 0.3
95 percent confidence interval:
0.0000000 0.3481637
sample estimates:
probability of success
0

With no woman hired that year, we can still pretend that there was some kind of “positive discrimination”. And note that we do accept – with even more confidence – the assumption of “positive discrimination” if we look at all the universities together,

> prop.test(5+2,11+4+4+3+5+7,.3,alternative="less")
 
1-sample proportions test with continuity correction
 
data:  5 + 2 out of 11 + 4 + 4 + 3 + 5 + 7, null probability 0.3
X-squared = 1.021, df = 1, p-value = 0.1561
alternative hypothesis: true p is less than 0.3
95 percent confidence interval:
0.0000000 0.3556254
sample estimates:
p
0.2058824
 
> binom.test(5+2,11+4+4+3+5+7,.3,alternative="less")
 
Exact binomial test
 
data:  5 + 2 and 11 + 4 + 4 + 3 + 5 + 7
number of successes = 7, number of trials = 34, p-value = 0.1558
alternative hypothesis: true probability of success is less than 0.3
95 percent confidence interval:
0.0000000 0.3521612
sample estimates:
probability of success
0.2058824

So obviously, with small samples, almost anything can be claimed!

Playing with quantiles, part 1

A standard idea in extreme value theory (see e.g. here, in French unfortunately) is that to estimate the 99.5% quantile (say), we just need to estimate a quantile of level 95% for observations exceeding the 90% quantile.

In extreme value theory, we assume that the 90% quantile (of the initial distribution) can be obtained easily, e.g. as the empirical quantile, and then, for the exceedances, we fit a Pareto distribution (a Generalized Pareto one, to be precise), and get a parametric quantile of level 95% for those observations. I.e.

http://freakonometrics.blog.free.fr/public/perso2/quant01.gif

which can be written

http://freakonometrics.blog.free.fr/public/perso2/quant02.gif

So, an estimation of the cumulative distribution function is

http://freakonometrics.blog.free.fr/public/perso2/quant03.gif

and if we invert it, we get the popular expression for high level quantiles,

http://freakonometrics.blog.free.fr/public/perso2/quant04b.gif

Hence, we do not really care about observations in the core of the distribution.
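To make that formula more concrete, here is a minimal sketch on a simulated sample, fitting the Generalized Pareto by maximum likelihood with optim (so no dedicated package is needed; the lognormal sample is just an arbitrary example),

set.seed(1)
x=rlnorm(10000)                     # some arbitrary simulated sample
u=quantile(x,.90)                   # threshold: empirical 90% quantile
z=x[x>u]-u                          # exceedances above the threshold
nll=function(par){                  # negative log-likelihood of the GPD
xi=par[1]; beta=par[2]
if(beta<=0 || any(1+xi*z/beta<=0)) return(1e10)
length(z)*log(beta)+(1+1/xi)*sum(log(1+xi*z/beta))
}
fit=optim(c(.1,1),nll)
xi=fit$par[1]; beta=fit$par[2]
p=.995
u+beta/xi*(((1-p)/(1-.90))^(-xi)-1) # GPD-based estimate of the 99.5% quantile
quantile(x,p)                       # compare with the empirical quantile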

And I was wondering whether this could be transposed to quantile regressions. Hence, I would like to get a quantile regression of level 90% (say) of http://freakonometrics.blog.free.fr/public/perso2/qqq06.gif given http://freakonometrics.blog.free.fr/public/perso2/qqqo5.gif, based on observations http://freakonometrics.blog.free.fr/public/perso2/qqq04.gif‘s, but where all observations such that http://freakonometrics.blog.free.fr/public/perso2/qqq07.gif for some http://freakonometrics.blog.free.fr/public/perso2/qqq08.gif are missing. More precisely, I have the following sample (here half of the observations are missing),

Assume that we know that we only have observations below the http://freakonometrics.blog.free.fr/public/perso2/qqq06.gif quantile of level 25%, and above the http://freakonometrics.blog.free.fr/public/perso2/qqq06.gif quantile of level 75%.
If I want to get the lower quantile regression (and, by symmetry, the upper one), the code is simply the following. Note the level tau=.05*2 used on the truncated sample: since only half of the observations are kept, the 5% quantile of the original distribution is the 10% quantile of the truncated one,

library(mnormt)
library(quantreg)
library(splines)
set.seed(1)
mu=c(0,0)
r=0
Sigma <- matrix(c(1,r,r,1), 2, 2)
Z=rmnorm(2500,mu,Sigma)
X=Z[,1]
Y=Z[,2]
 
base=data.frame(X,Y)
plot(X,Y,col="blue",cex=.7)
I=(Y>qnorm(.25))&(Y<qnorm(.75))
baseI=base[I==FALSE,]
points(X[I],Y[I],col="light blue",cex=.7)
abline(h=qnorm(.25),lty=2,col="blue")
abline(h=qnorm(.75),lty=2,col="blue")
u=seq(-5,5,by=.02)
reg=rq(Y~X,data=base,tau=.05)
lines(u,predict(reg,newdata=data.frame(X=u)),lty=2)
reg=rq(Y~X,data=baseI,tau=.05*2)
lines(u,predict(reg,newdata=data.frame(X=u)))

The graph is the following

Dotted lines – in black – are the theoretical ones (obtained if I had all the observations), and plain lines are the ones obtained when half of the sample is missing. Instead of a standard linear quantile regression, it is also possible to try a spline regression, see the sketch below,
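Here is a possible version, reusing the objects base and baseI defined above, with a B-spline basis (the exact specification behind the original graph may differ),

library(splines)
uu=seq(-3,3,by=.02)                        # stay within the range of the data
reg=rq(Y~bs(X,df=5),data=base,tau=.05)     # full sample, 5% quantile
lines(uu,predict(reg,newdata=data.frame(X=uu)),lty=2)
reg=rq(Y~bs(X,df=5),data=baseI,tau=.05*2)  # truncated sample, rescaled level
lines(uu,predict(reg,newdata=data.frame(X=uu)))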

So obviously, if I miss something in the middle, that is no big deal: dotted and plain lines are extremely close here.
But what if observations http://freakonometrics.blog.free.fr/public/perso2/qqqo5.gif and http://freakonometrics.blog.free.fr/public/perso2/qqq06.gif were correlated? Consider a Gaussian random vector http://freakonometrics.blog.free.fr/public/perso2/qqq09.gif with correlation http://freakonometrics.blog.free.fr/public/perso2/qqq10.gif (here 0.6).
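The corresponding simulation is, presumably, just the code above with a nonzero correlation, something like

r=.6
Sigma=matrix(c(1,r,r,1),2,2)
Z=rmnorm(2500,mu,Sigma)      # mu was defined above
X=Z[,1]; Y=Z[,2]
# then rerun the same truncation and quantile regressions as above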

It looks like we overestimate the slope for the high quantile, but not for the lower ones. So if observations are correlated, we have to be cautious with that technique.
But why could that be interesting? Well, because I wanted to run a quantile regression on marathon results. But I could not get the full dataset (since I had to import observations manually, and I have to admit that it was a bit boring). So I extracted the finish times of the first 10% of the athletes, and of the last 10%. And I was wondering if that was enough to look at the 5% and 95% quantiles, as a function of the age of the runner… To be continued.

A Million Random Digits: review of reviews

Recently on his blog (here), Robin mentioned an amazing book, called “A Million Random Digits”, published by the RAND Corporation. The book was initially published in 1955, but RAND has published a nice (and expensive) second edition.

A great thing is that on Amazon, there are several extremely interesting reviews of the book. E.g.

4.0 out of 5 stars Didn’t like the ending, February 10, 2009  By Damien Katz

Even though I didn’t really see it coming, the ending was kind of anti-climatic. But overall the book held my attention and I really liked the “10034 56429 234088” part. It’s nice to know I’m not the only one who feels that way.

5.0 out of 5 stars I found a typo, September 14, 2007  By fanfan

To whom do I write to report typographical errors? I noticed that the first “7” on the third line page 48 should be a “3”. The “7” that’s printed there now isn’t random. Other than that, this is really an excellent book.

5.0 out of 5 stars Superb and original plot, April 21, 2007  By Herr Tarquin Biskuitfaß

This one has a very unpredictable plot, sublime character development in a style that stubbornly defies any sort of development in its rare and iconoclastic brilliance, and is told remarkably with numbers instead of letters. Take, for example, this passage on page 202, “98783 24838 39793 80954”. I’m speechless. The symmetry is reminiscent of the I Ching, and it approaches a rare spiritual niveau lacking in American literature. It not only reads well, but it looks great too. I have a tattoo of page 214 on my arm, and I’m hoping to get 202 on my belly to celebrate my next birthday. It is an injustice that Rand Corporation has not received the Nobel Prize for Literature, nor even a Pulitzer.

3.0 out of 5 stars A serious reference work?, October 16, 2006  By BJ

For a supposedly serious reference work the omission of an index is a major impediment. I hope this will be corrected in the next edition.

1.0 out of 5 stars Not Nearly A Million, September 3, 2006  By Liron

This book does not even come close to delivering on its promise of one million random digits. My expectations were high after reading the first sentence, which contained ten unique digits. However, the author seems to have exhasted his creativity in this initial burst, because the other 99.999% of the book is filler in which those same ten digits are shamelessly reused!  If you are looking for a larger offering of numerals in various bases, I highly recommend “Peter Rabbit’s ABC and 123”.

3.0 out of 5 stars Wait for the audiobook version, October 19, 2006  By R. Rosini “Newtype”

While the printed version is good, I would have expected the publisher to have an audiobook version as well. A perfect companion for one’s Ipod.

5.0 out of 5 stars Wait for it…, February 10, 2009  By Cranky Yankee

It started off slow, single digit slow in the beginning but I stuck with it. I eventually learned all about the different numbers, 1,2,3,4,5,6,7,8,9 and 0 and their different combinations.  The author introduced them all a bit too quickly for my taste. I would have been perfectly happy with just 1,2,3,4 and 5 for the first 20,000 digits, but then again, I’m not a famous random-number author, am I?  After a while, patterns emerged and the true nature of the multiverse was revealed to me, and the jokes were kinda funny. I don’t want to spoil anything but you will LOVE the twist ending!  Like 4352204 said to 64231234, “2242 6575 0013 2829!”

Ok, I have to admit I tried to check a few of them (that’s my freaky part). For instance, the first one is a fake: the first two numbers mentioned never show up together (consecutively),

> DIGIT=read.table("
+ http://freakonometrics.blog.free.fr/public/data/digits.txt")
> DIGIT=DIGIT[,2:11]
> k=1
> I=apply(DIGIT[,1:2]==c(10034,56429),1,sum)==2
> for(k in 2:9){
+ I=cbind(I,apply(DIGIT[,k+0:1]==c(10034,56429),1,sum)==2)
+ }
> I0=which(apply(I,1,sum)>0)
> DIGIT[I0,]
 [1] V2  V3  V4  V5  V6  V7  V8  V9  V10 V11
<0 rows> (or 0-length row.names)

Nevertheless, I did have some fun reading those reviews. As for the book itself, I unfortunately have to confess that I stopped after 99998 appeared (the first time).

when Nuns or Hells Angels get in a plane

Today, at lunch, Matthieu told us a nice story (or call it a paradox if you like) about the probability of finding your seat empty when you get on a plane.

  • a plane full of nuns

Assume that you are in the line to get on the airplane, and that you are the 100th in line. The first passenger is scatterbrained, he has his head in the clouds, and when he gets on the airplane, he cannot remember where he should sit. His strategy is then extremely simple: he sits somewhere at random in the plane. So he picks a seat randomly, and he waits.

Then come 98 nuns (one by one). And nuns are extremely polite: if there is someone in their seat (the one printed on their ticket), they do not complain, and they pick another seat at random (among those available, of course). Then you arrive. The question is simple: what is the probability that someone is sitting in your seat?

Any idea…?

Maybe I should give you some time to do the maths… and tell another story…

  • a plane full of Hells Angels

Consider almost the same problem as the one mentioned above, except that now, it is not 98 nuns getting on the plane, but 98 Hells Angels. The problem here is that Hells Angels are slightly less polite than nuns: when they find someone sitting in the seat they should have, they do not shyly move to another seat; they grunt, and our scatterbrained man (who is actually sitting in their seat) has to move somewhere else. And the question is the same: you are the 100th person to get on the plane, what is the probability that someone is sitting in your seat? Any idea…?

The important point is that the problem is exactly the same (at least from a mathematical point of view; maybe not for the stewardess, or for the guy who enters the plane first). The point is that, at each time, there can be at most one person sitting in a seat which is not his or hers (in the sense that if we compare the list of passengers at any time with the list of seats taken, there should be at most one difference). The difference between the two stories is that in the first case it will be a nun, while in the second one it will be our scatterbrained guy.

  • Let us run simulations

If we do not see how to get that probability analytically, let us run some R code,

> set.seed(1)
> n=100; TEST=rep(NA,100000)
> for(s in 1:100000){
+ OCCUPIED=rep(FALSE,n)
+ OCCUPIED[sample(1:n,size=1)]=TRUE
+ for(j in 2:(n-1)){
+ FREE=which(OCCUPIED==FALSE)
+ if(OCCUPIED[j]==TRUE){OCCUPIED[sample(FREE,size=1)]=TRUE}
+ if(OCCUPIED[j]==FALSE){OCCUPIED[j]=TRUE}
+ }
+ TEST[s]=OCCUPIED[n]==TRUE
+ }
> mean(TEST)
[1] 0.49878

Here, we clearly see that the problem is the same (either with nuns or Hells Angels): we do not care about who changes his or her seat, we just look at the seats that are available… So the program is valid for the two problems (and the solution will then be the same). Another point is that the probability looks extremely simple: one out of two!

  • an analytical expression

Consider the Hells Angels problem (for the notations). Let http://freakonometrics.blog.free.fr/public/perso2/nonnes1.gif denote the probability that, at time http://freakonometrics.blog.free.fr/public/perso2/nonne6.gif, our scatterbrained guy is sitting in my seat. When he gets on the plane, the probability that he takes my seat is

http://freakonometrics.blog.free.fr/public/perso2/nonne2.gif
Then, the probability that, after the ith passenger’s entrance, our guy is sitting in my seat can be computed (since my initial proof was not correct, I removed it; see below for a nicer proof). One can get that

http://freakonometrics.blog.free.fr/public/perso2/avion-ec-01.gif
So, we can get the probability that, when I get in, our guy is sitting in my own seat as
http://freakonometrics.blog.free.fr/public/perso2/avion-ec-07.gif

http://freakonometrics.blog.free.fr/public/perso2/avion-ec-08.gif

Hence, there is one chance out of two that my seat will be free… (which is what we got with Monte Carlo simulations).

But a faster proof is to observe that, in the Hells Angels case, our guy will be kicked out until he reaches either his own seat, or mine. Since those two events are equiprobable, there is one chance out of two that he ends up in my seat (and since no Hells Angel will sit in mine, only this first guy can). So the probability that someone is in my seat when I get on is one half.
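And just to double-check that argument, here is a quick simulation of the Hells Angels dynamics, tracking only the wandering first passenger (same setting as above),

set.seed(1)
n=100; TEST2=rep(NA,100000)
for(s in 1:100000){
guy=sample(1:n,size=1)                      # the first guy picks a seat at random
for(j in 2:(n-1)){                          # the 98 Hells Angels enter, one by one
if(guy==j) guy=sample(c(1,(j+1):n),size=1)  # kicked out, he picks a free seat
}
TEST2[s]=(guy==n)                           # is he sitting in my seat (the 100th)?
}
mean(TEST2)                                 # close to 1/2, as expected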

Nice, isn’t it? And thanks Matthieu for the problem (and his friend Claude’s solution with the Hells Angels), and Olivier and Renaud for their comments!

Does the Student-based confidence interval have any interest in practice?

Friday, in the statistics course, we started the section on confidence intervals, and as always, I got a bit confused with the degrees of freedom of the Student t distribution (should it be http://freakonometrics.blog.free.fr/public/perso2/IC-std-6.gif or http://freakonometrics.blog.free.fr/public/perso2/IC-std-5.gif ?) and with which empirical variance to use (should we consider the one where we divide by http://freakonometrics.blog.free.fr/public/perso2/IC-std-6.gif or the one with http://freakonometrics.blog.free.fr/public/perso2/IC-std-5.gif ?).
And each time I start to get confused, the students obviously see it, and start asking tricky questions… So let us make it clear now. The correct formula is the following: let

http://freakonometrics.blog.free.fr/public/perso2/IC-std-4.gif

then

http://freakonometrics.blog.free.fr/public/perso2/IC-std-1.gif

is a confidence interval for the mean of a Gaussian i.i.d. sample.
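As a quick numerical sketch, on a truly Gaussian simulated sample, this textbook interval (with n-1 degrees of freedom, and the variance estimator dividing by n-1) coincides with the one returned by t.test(),

set.seed(1)
n=20; X=rnorm(n)
mean(X)+c(-1,1)*qt(.975,df=n-1)*sd(X)/sqrt(n)  # sd() divides by n-1
t.test(X)$conf.int                             # same 95% interval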
But the important thing is neither the n-1 that appears as degrees of freedom nor the http://freakonometrics.blog.free.fr/public/perso2/IC-std-6.gif that appears in the estimation of the standard error. As always with mathematical results, the most important part is not mentioned here: observations have to be i.i.d. and normally distributed. And not “almost” normally distributed….
Consider the following case: we have http://freakonometrics.blog.free.fr/public/perso2/IC-std-6.gif=20 observations that are almost normally distributed. Here, I consider a Student t distribution

n=20; X=rt(n,df=3)

An Anderson-Darling normality test fails to reject normality in about 2 cases out of 3.

library(nortest)
pv=rep(NA,10000)
for(s in 1:10000){
X=rt(n,df=3)
pv[s]=ad.test(X)$p.value
}
mean(pv>.05)
[1] 0.6799

With a true normal distribution it would be 95% of the cases, so in some sense, I can pretend that I am generating almost normal samples.
For those samples, we can look at bounds of the 90% confidence interval for the mean, with three different formulas,

http://freakonometrics.blog.free.fr/public/perso2/IC-std-1.gif

i.e. the correct one, or the one where I considered http://freakonometrics.blog.free.fr/public/perso2/IC-std-6.gif degrees of freedom instead of http://freakonometrics.blog.free.fr/public/perso2/IC-std-5.gif,

http://freakonometrics.blog.free.fr/public/perso2/IC-std-2.gif

and the one where we considered a Gaussian quantile instead of a Student t one,

http://freakonometrics.blog.free.fr/public/perso2/IC-std-3.gif

(and one might also think of looking at the biased estimator of the variance).

m=IC1=IC2=IC3=rep(NA,10000)
for(s in 1:10000){
X=rt(n,df=3)
m[s]=mean(X)
sd=sqrt(var(X))
IC1[s]=m[s]-qt(.95,df=n-1)*sd/sqrt(n)
IC2[s]=m[s]-qt(.95,df=n)*sd/sqrt(n)
IC3[s]=m[s]-qnorm(.95)*sd/sqrt(n)
}

On the graph below are plotted the distributions of the values obtained as the lower bound of the 90% confidence interval,

(the curves with http://freakonometrics.blog.free.fr/public/perso2/IC-std-6.gif and http://freakonometrics.blog.free.fr/public/perso2/IC-std-5.gif degrees of freedom in quantiles are the same, here).
The dotted vertical line is the true lower bound of the 90%-confidence interval, given the true distribution (which was not a Gaussian one).
If I go back to the standard procedure found in any statistics textbook, since the sample is almost Gaussian, the lower bound of the confidence interval should be (using the Student t quantile)

mean(IC1)
[1] -0.605381

instead of

mean(IC3)
[1] -0.5759391

(obtained with a Gaussian distribution instead of a Student one). Actually, both of them are quite different from the correct one which was

quantile(m,.05)
       5% 
-0.623578

As I mentioned in a previous post (here), an important issue is that if we do not know a parameter and substitute an estimator, there is usually a cost (which usually means that the confidence interval should be wider). And this is what we observe here. From a teacher’s point of view, it is an important issue that should be mentioned in statistics courses….

But another important point is that this confidence interval is valid only if the underlying distribution is Gaussian. Not almost Gaussian, but really Gaussian. So since with http://freakonometrics.blog.free.fr/public/perso2/IC-std-6.gif=20 observations everything might look Gaussian, I was wondering what should be done in practice… Because in some sense, using a Student quantile based confidence interval on an almost Gaussian sample is as wrong as using a Gaussian quantile based confidence interval on a Gaussian sample…

What is the optimal strategy to marry the best one ?

Valentine’s day is a nice opportunity to post on hot and sexy topics… Well, it is also an important day that I should not miss, probably as much as Saint Patrick’s day… I mean, my wife’s birthday. And as I mentioned last week (here), it is difficult to find the distribution of the age at marriage on the internet… So maybe we can build a small model, to understand when girls decide to get married… Consider a young girl who knows that she will not meet thousands of men willing to marry her (actually, one can consider the opposite point of view, with a young man who can find only http://freakonometrics.hypotheses.org/files/2015/12/mariage01.png girls willing to marry him; the problem can be assumed to be symmetric, especially if I do not want to get feminist leagues on my back).

Assume that http://freakonometrics.hypotheses.org/files/2015/12/mariage01.png men agree to marry her. Of course, among those http://freakonometrics.hypotheses.org/files/2015/12/mariage01.png men, our girl wants to marry the “best” one (assume that men can be ranked objectively). Of course, she cannot meet the “best” guy immediately, so men are met randomly, and after each “interview”, either she rejects him (forever, since we assume she cannot go back and admit she made a mistake), or she agrees to marry him. An important assumption is that rejected men cannot be recalled.

From a mathematical point of view, we need to find the optimal stopping time. Here, the problem is slightly different compared with that one (the optimal time to get a bonus) or this one (the optimal time to sit in a bar and have a beer). Here, we do not give “grades” to the guys. The only thing that is observed is their relative ranks. Our girl cannot know whether she is meeting the best of all men (out of http://freakonometrics.hypotheses.org/files/2015/12/mariage01.png), but she knows whether he is better than the ones she has already met. From a mathematical point of view, at time http://freakonometrics.hypotheses.org/files/2015/12/mariage02.png, she knows the relative rank of http://freakonometrics.hypotheses.org/files/2015/12/mariage02.png (compared with the first http://freakonometrics.hypotheses.org/files/2015/12/mariage04.png), not his absolute rank. We also assume that http://freakonometrics.hypotheses.org/files/2015/12/mariage01.png is known.

The optimal strategy is that she has to automatically reject the first http://freakonometrics.hypotheses.org/files/2015/12/mariage04.png (some kind of calibration period), and then, starting at time http://freakonometrics.hypotheses.org/files/2015/12/mariage02.png, she will marry the first one who is better than all the ones she has already met.
So assume that our girl has already met http://freakonometrics.hypotheses.org/files/2015/12/mariage04.png guys, and has decided to reject all of them. Now she is trying to see whether http://freakonometrics.hypotheses.org/files/2015/12/mariage02.png can be the optimal time to stop, and to start looking seriously…. For an arbitrary cut-off http://freakonometrics.hypotheses.org/files/2015/12/mariage02.png, the probability that the best applicant will show up at some time http://freakonometrics.hypotheses.org/files/2015/12/wedd01.gif is http://freakonometrics.hypotheses.org/files/2015/12/wedding01.gif

http://freakonometrics.hypotheses.org/files/2015/12/wedding02.gif

i.e.

http://freakonometrics.hypotheses.org/files/2015/12/wdeeing03.gif

The http://freakonometrics.hypotheses.org/files/2015/12/wedd02.gif term is there because there is only one “best” guy, and http://freakonometrics.hypotheses.org/files/2015/12/wedd03.gif is the probability that he shows up at time http://freakonometrics.hypotheses.org/files/2015/12/wedd01.gif (this can be visualized below)

Thus, we can write

http://freakonometrics.hypotheses.org/files/2015/12/wedding04.gif

i.e.

http://freakonometrics.hypotheses.org/files/2015/12/wedding05.gif

Thus, the minimum of http://freakonometrics.hypotheses.org/files/2015/12/mariage18.png is obtained when http://freakonometrics.hypotheses.org/files/2015/12/mariage19.png, which gives the optimal time to stop (or here, to start looking seriously), i.e. 36.7%.

Hence, the best strategy is to automatically reject the first http://freakonometrics.hypotheses.org/files/2015/12/mariage20.png=37% of the candidates (where the function above reaches its optimum), and then to select the first one (if possible) who is better than all previous candidates.
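As a quick check of that 37% rule, one can use the closed form of the probability of picking the best candidate with cut-off m, namely (m/n) times the sum of 1/(k-1) for k ranging from m+1 to n,

n=100
P=function(m) (m/n)*sum(1/(m:(n-1)))  # probability of marrying the best one
p=sapply(1:(n-1),P)
which.max(p)                          # optimal cut-off, 37 here, close to n/e
max(p)                                # probability of success, about 37% as well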

Consider the following Monte Carlo procedure: assume that she rejects – automatically – the first http://freakonometrics.hypotheses.org/files/2015/12/mariage02.png candidates (we consider a loop with all possible values for http://freakonometrics.hypotheses.org/files/2015/12/mariage02.png), and then marries the first one who is better than all the ones she met during the calibration period (equivalently, the best one seen so far),

n=100
ns=1000000
MOY1=MOY2=rep(NA,n)
for(m in 2:(n-1)){
WHICH=rep(NA,ns); MARIAGE=rep(0,ns)
for(s in 1:ns){
Z=sample(1:n,size=n,replace=FALSE)
mx=max(Z[1:m])
STOP=FALSE
for(k in (m+1):n){
if((Z[k]>mx)&(STOP==FALSE)){
WHICH[s]=k
STOP=TRUE
MARIAGE[s]=1
}
}
}
HIS=WHICH[is.na(WHICH)==FALSE]
TH=table(HIS)
MOY1[m]=mean(HIS)
MOY2[m]=mean(HIS)*mean(MARIAGE)
THH=rep(NA,100)
THH[as.numeric(names(TH))]=as.numeric(TH)/ns
}

If we run it over all possible http://freakonometrics.hypotheses.org/files/2015/12/mariage02.png we get

http://freakonometrics.hypotheses.org/files/2015/12/mariage-anim.gif

The “distribution” (in green) can be seen as the probability of marrying the guy of level http://freakonometrics.hypotheses.org/files/2015/12/mariage06.png, given that the first http://freakonometrics.hypotheses.org/files/2015/12/mariage02.png were rejected. The sum is not one since there is a non-null probability of marrying no one (she marries no one precisely when the overall best candidate is among the first http://freakonometrics.hypotheses.org/files/2015/12/mariage02.png rejected). Actually, the probability of getting married is the following

The longer she waits, the smaller the probability of getting married. But on the other hand, the longer she waits, the “better” the husband…. On the graph below is plotted the rank of the guy she marries, if she gets married (it was actually the vertical plain line in red in the animation)

So there is a trade-off. If not getting married gives a satisfaction of 0 (lower than finally marrying anyone), and if marrying the guy with rank http://freakonometrics.hypotheses.org/files/2015/12/mariage06.png gives her satisfaction http://freakonometrics.hypotheses.org/files/2015/12/mariage06.png, we have

(it was the vertical dotted line in red in the animation). So it looks like it is optimal to test the first 35–38% of the men, and then to marry the best one she finds (if he is better than the best one she met during the “testing” procedure). So our previous analysis looks correct…

Now, to go further, I have to admit that this model is known in the academic literature as the secretary problem. In 1989, Thomas Ferguson wrote a nice paper in Statistical Science entitled “Who solved the secretary problem?” (here). Anthony Mucci also published an article in the Annals of Probability on possible extensions, in 1973 (here), as did Thomas Lorenzen (there) in 1981. This problem is definitely an interesting one!