Tag Archives: tests

On brevet success rates in five middle schools in Rennes

This morning, we worked on data about success rates at the brevet (the French national exam at the end of middle school) for schools in Rennes, as part of the statistics course. We have data for 5 middle schools in Rennes, over a dozen or so years (between 2001 and 2015).

  • Analysis of the evolution over time, aggregating the schools

We started with something simple: looking at the evolution over time of the success rate,

> base=read.table("http://freakonometrics.free.fr/brevet_rennes.csv",
+   header=TRUE,sep=";",dec=",")
> idx=seq(4,ncol(base),by=3)                      # columns containing the success rates
> annees=as.numeric(substr(names(base)[idx],2,5)) # years, extracted from column names
> College=6
> plot(annees,as.numeric(base[College,idx]),type="b")
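To compare the five schools at once, here is a minimal sketch (assuming, as above, that each of the first five rows of `base` is one school, that the columns in `idx` contain the success rates, and that the first column holds the school names):

> matplot(annees,t(base[1:5,idx]),type="b",pch=1:5,col=1:5,
+   xlab="annees",ylab="taux de reussite")
> legend("bottomright",legend=base[1:5,1],pch=1:5,col=1:5)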


Statistical Tests: Asymptotic, Exact, or Based on Simulations?

This morning, in our mathematical statistics course, we’ve been discussing the ‘proportion test‘, i.e. given a sample of Bernoulli trials $\{X_1,\dots,X_n\}$, with $X_i\sim\mathcal{B}(p)$, we want to test

$H_0: p = p_\star$ against $H_1: p \neq p_\star$

A natural test (which can be related to the maximum likelihood ratio test) is based on the statistic

$$T_n=\sqrt{n}\,\frac{\overline{X}_n-p_\star}{\sqrt{\overline{X}_n(1-\overline{X}_n)}}$$

The test function is here

$$\psi(T_n)=\mathbf{1}\{T_n\notin[c_{\alpha/2},\,c_{1-\alpha/2}]\}$$

i.e. we reject $H_0$ when $T_n$ falls outside the acceptance region. To get the bounds of the acceptance region, we need the distribution of $T_n$, under $H_0$. Consider here a numerical application

n=20
p=.5
set.seed(1)
echantillon=sample(0:1,size=n,
            prob=c(1-p,p),
            replace=TRUE)
  • the asymptotic distribution

The first (and standard) idea is to use the central limit theorem, since

$$\sqrt{n}\,\frac{\overline{X}_n-p}{\sqrt{p(1-p)}}\ \xrightarrow{\ \mathcal{L}\ }\ \mathcal{N}(0,1)$$

So, under $H_0$,

$$T_n\ \xrightarrow{\ \mathcal{L}\ }\ \mathcal{N}(0,1)$$

(the sample variance in the denominator converges to $p_\star(1-p_\star)$, so Slutsky's theorem applies). Then $c_{\alpha/2}=\Phi^{-1}(\alpha/2)\approx-1.96$ while $c_{1-\alpha/2}=\Phi^{-1}(1-\alpha/2)\approx 1.96$, for $\alpha=5\%$. The acceptance region is then between the two red lines, below,

T=sqrt(n)*(mean(echantillon)-.5)/
  sqrt(mean(echantillon)*
  (1-mean(echantillon)))
u=seq(-3,3,by=.01)
v=dnorm(u)
plot(u,v,type="l",lwd=2)
abline(v=qnorm(.025),col="red")
abline(v=qnorm(.975),col="red")
abline(v=T,col="blue")
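As a quick complement (not in the original code), the asymptotic p-value can be computed directly from the Gaussian cdf,

# two-sided asymptotic p-value, based on the N(0,1) approximation
2*(1-pnorm(abs(T)))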

  • the exact distribution

Here we use the fact that

$$n\overline{X}_n=\sum_{i=1}^n X_i\sim\mathcal{B}(n,p_\star)\ \text{ under }H_0$$

Using this transformation of the ‘density’, we can (at least numerically) compute the (exact) distribution of

$$T_n=\sqrt{n}\,\frac{\overline{X}_n-p_\star}{\sqrt{p_\star(1-p_\star)}}$$

(note that, here, the variance in the denominator is evaluated at $p_\star=1/2$, as in the code below).

u=seq(-3,3,by=.01)
# invert t -> k=n*(p+t*sqrt(p*(1-p))/sqrt(n)), then rescale
# the binomial pmf by the Jacobian sqrt(n*p*(1-p))
v=sqrt(.5*(1-.5))*n*dbinom(round(
  (sqrt(.5*(1-.5))*u/sqrt(n)+.5)*n),
  size=n,prob=.5)/sqrt(n)

Here I used the round function; I guess it would be better with a floor function, but here the graph looks symmetric (which is something I like)

abline(v=sqrt(n)*(qbinom(.025,size=n,prob=.5)/n-.5)/sqrt(.5*(1-.5)),col="red")
abline(v=sqrt(n)*(qbinom(.975,size=n,prob=.5)/n-.5)/sqrt(.5*(1-.5)),col="red")
lines(u,v,type="s")
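For comparison (again, a complement, not in the original post), the exact two-sided p-value can be obtained directly with base R,

# exact binomial test, based on the same B(n,1/2) distribution
binom.test(sum(echantillon),n,p=.5)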

  • distribution based on Monte Carlo simulations

Probably more interesting, here we do not use the fact that we might know the distribution of the mean. We just generate random samples under $H_0$, and then compute the statistic $T_n$,

T=rep(NA,1000)
for(i in 1:1000){
x=sample(0:1,size=n,
         prob=c(1-.5,.5),
         replace=TRUE)
m=mean(x)
T[i]=(m-.5)/sqrt(m*(1-m))*sqrt(n)}
lines(density(T),lwd=2)
abline(v=quantile(T,.025),col="red")
abline(v=quantile(T,.975),col="red")
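And, as a complement, a Monte Carlo p-value, namely the proportion of simulated statistics at least as extreme as the observed one (here `T_obs` recomputes the observed statistic, since `T` now contains the simulated values),

# Monte Carlo p-value
T_obs=sqrt(n)*(mean(echantillon)-.5)/
  sqrt(mean(echantillon)*(1-mean(echantillon)))
mean(abs(T)>=abs(T_obs))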

Estimation and prediction for time series

For the end of the forecasting-models course, here are some slides on the identification and estimation of SARIMA models, some complements on tests (unit roots and non-stationarity, as well as seasonality), and finally some directions for building predictions (with a quantification of the uncertainty), with R code. The slides are online here (even if the cover page is identical to the others, this is new material). Now that the slides are finished (and online), the next posts will focus on modelling and on computational aspects.
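As a minimal illustration of the kind of R code discussed in the slides (a sketch only: the series, the orders and the seasonal period below are placeholders, not taken from the course material),

# fit a SARIMA model with base R, then forecast 24 steps ahead
y <- co2  # a monthly series shipped with R
fit <- arima(y, order=c(1,1,1),
             seasonal=list(order=c(0,1,1), period=12))
pred <- predict(fit, n.ahead=24)
# 95% prediction intervals, to quantify the uncertainty
upper <- pred$pred + 1.96*pred$se
lower <- pred$pred - 1.96*pred$se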

We keep breaking records? So what?… Get some statistical perspective…

This summer, we have been told that some financial series broke some records (here, in French)

For instance, the French CAC40 had negative returns for 11 consecutive days (which had never been seen before).

> library(tseries)
> x<-get.hist.quote("^FCHI")
> Y=x$Close
> Z=diff(log(Y))
> RUN=rle(as.character(Z>=0))$lengths
> n=length(RUN)
> LOSS=RUN[seq(2,n,by=2)]
> GAIN=RUN[seq(1,n,by=2)]
> TG=sort(table(GAIN))
> TG[as.character(1:13)]
GAIN
   1    2    3    4    5    6    7    8    9 <NA> <NA> <NA>   13
 645  336  170   72   63   21    7    3    4   NA   NA   NA    1
 
> TL=sort(table(LOSS))
> TL[as.character(1:15)]
LOSS
   1    2    3    4    5    6    7    8    9 <NA>   11 <NA> <NA> 
 664  337  186   68   42   14    5    3    1   NA    1   NA   NA 
 
> TR=sort(table(RUN))
> TR[as.character(1:15)]
RUN
   1    2    3    4    5    6    7    8    9 <NA>   11 <NA>   13 
1309  673  356  140  105   35   12    6    5   NA    1   NA    1

Indeed, 11 consecutive days of negative returns is a record. But one should keep in mind that the real record for runs is 13 consecutive days of positive returns…
But what does that mean? Can we still assume time independence of log-returns (since, today, a lot of financial models are still based on that assumption)?
Actually, if financial series were time-independent, such a probability should indeed be rather small, at least for runs of 10 or 11. Something like

$$\left(\frac{1}{2}\right)^{10}\approx 0.0977\%$$

(assuming that, each day, the probability to observe a negative return is 50%). But maybe not over 25 years (6250 trading days): the probability to observe a sub-sequence of 10 consecutive negative values (with daily probability of one half) somewhere in 6250 observations will be much larger. My guess is that it would be

$$\frac{2^{6240}+6240\times 2^{6239}}{2^{6250}}$$

where the numerator counts the favourable cases and the denominator the total number of cases. At the numerator, the first term is the number of cases where (at least) the first 10 returns are negative; for the second term, we count the cases where the first return is positive and the next 10 (at least) are negative (then the cases where the second is positive and the next 10 are negative, the third is positive, etc.). For those interested in more details (and a more general formula on runs), an answer can be found here.
But note that the probability is quite large… So it is not that unlikely to observe such a sequence over 25 years.
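A quick way to check this order of magnitude (a sketch, not in the original post) is to simulate independent sequences of 6250 fair coin flips and look at the longest run of negative values,

> set.seed(1)
> ns=1000
> longest=rep(NA,ns)
> for(i in 1:ns){
+ x=sample(c(-1,1),size=6250,replace=TRUE)
+ r=rle(x<0)
+ longest[i]=max(r$lengths[r$values])}
> mean(longest>=10)  # proportion of 25-year periods with a run of 10+ losses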
A classical idea when looking at time series is to look at the autocorrelation function of the returns, which might suggest that there is no correlation with past returns. But it should be possible to do more advanced tests.
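The corresponding one-liner (where `Z` is the series of log-returns computed above),

> # sample autocorrelation function of the log-returns
> acf(as.numeric(Z),na.action=na.pass)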
On the CAC40 series, we can run a runs test of independence on the latest 100 consecutive days, and look at the p-value,

> library(lawstat)
> u=as.vector(Z[(length(Z)-100):length(Z)])  # latest 100 days (n was redefined above)
> runs.test(u,plot=TRUE)
 
	Runs Test - Two sided
 
data:  u 
Standardized Runs Statistic = -0.4991, p-value = 0.6177

The B’s here are returns lower than the median (almost null, so they might be considered as negative returns). With such a high p-value, we accept the null hypothesis, i.e. time independence.
If we consider a moving-time window (a sketch of the computation is given below),

we can see that we accept the assumption of independence most of the time.
Actually, here, the time window is 100 days (+/- 50 days). But it is possible to consider 200 days,

or even 400 days,

So, except if we focus on 2006, it looks like we should reject the idea of time dependence in financial markets.
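Here is the minimal sketch of the moving-window computation mentioned above (assuming a window of +/- 50 days, i.e. 101 observations, around each date),

> # p-value of the runs test over a centered moving window
> h=50
> pv=rep(NA,length(Z))
> for(i in (h+1):(length(Z)-h)){
+ u=as.vector(Z[(i-h):(i+h)])
+ pv[i]=runs.test(u)$p.value}
> plot(pv,type="l")
> abline(h=.05,col="red")  # 5% significance level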
It is also possible to look more carefully at the distribution of runs, and to compare it with the case of independent samples (here we consider Monte Carlo generation of sequences of the same size),

> m=length(Z)
> ns=100000
> HIST=matrix(NA,ns,15)
> for(j in 1:ns){
+ XX=sample(c("A","B"),size=m,replace=TRUE)
+ RUNX=rle(as.character(XX))$lengths
+ S=sort(table(RUNX))
+ HIST[j,]=S[as.character(1:15)]
+ }
> meana=function(x){sum(x[is.na(x)==FALSE])/length(x)}  # a run length never observed counts as 0
> cbind(TR[as.character(1:15)],apply(HIST,2,meana),
+       round(m/(2^(1+1:15))))
 
     [,1]       [,2] [,3]
1    1309 1305.12304 1305
2     673  652.46513  652
3     356  326.21119  326
4     140  163.05101  163
5     105   81.52366   82
6      35   40.74539   41
7      12   20.38198   20
8       6   10.16383   10
9       5    5.09871    5
10     NA    2.56239    3
11      1    1.26939    1
12     NA    0.63731    1
13      1    0.31815    0
14     NA    0.15812    0
15     NA    0.08013    0

The first column above is the empirical frequency of runs of length 1, 2, 3, etc. The second one is the average frequency obtained over random simulations of independent samples. The third one is the theoretical frequency: with independent signs, run lengths follow a geometric distribution, and the expected number of runs of length $k$ among $m$ observations is approximately $m/2^{k+1}$.

Here again, it looks like our time series behaves like an independent sample. There is also a nice paper by Mark Schilling on the longest run of heads.
So it is not that odd to observe such a series of losses on financial markets….