Tag Archives: R-english

I really need to find hot (and sexy) topics

50 days ago (here), I was quite optimistic about the probability that I could reach one million page views on this blog (over a bit more than two years). Unfortunately, the wind has changed and today, that probability is quite low…

 base=read.table("millionb.csv",sep=";",header=TRUE)
X1=cumsum(base$nombre)
base=read.table("million2b.csv",sep=";",header=TRUE)
X2=cumsum(base$nombre)
X=X1+X2
 D=as.Date(as.character(base$date),"%m/%d/%Y")
kt=which(D==as.Date("01/06/2010","%d/%m/%Y"))
D0=as.Date("08/11/2008","%d/%m/%Y")
D=D0+1:length(X1)
P=rep(NA,(length(X)-kt)+1)
for(h in 0:(length(X)-kt)){
model  <- arima(X[1:(kt+h)],c(7 1,7),method="CSS") 
 forecast <- predict(model,200)
u=max(D[1:kt+h])+1:300
k=which(u==as.Date("01/01/2011","%d/%m/%Y"))
(P[h+1]=1-pnorm(1000000,forecast$pred[k],forecast$se[k]))
}
plot( D[length(D)-length(P)]+1:220,c(P,rep(NA,220-length(P))),
ylab="Probability to reach 1,000,000",xlab="",
type="l",col="red",ylim=c(0,1))
So, I guess my posts on multiple internal rates of return, or on Young’s inequality, will have to wait until next year… I really need to find sexier posts to attract readers… Challenge accepted !

Is it that stupid to make extremely long-term forecasts when studying mortality ?

I recently received a comment from FCA (here) raising an important question about forecasts in dynamic mortality models. (S)he mentioned that, from his (her) point of view, the econometric models I considered were “good to predict for the next, say, 3 or 4 years. Not for the next 50 years…”. This was the message I tried to stress last year in a conference about retirement in France (here). But from a quantitative point of view, how inconsistent were forecasts made 35 years ago, or 60 years ago ?

Consider here the Lee-Carter model, fitted on the periods 1816-1950 (in black below), 1816-1975 (in red) and 1816-2000 (in blue). Unfortunately, it is difficult to compare the $\kappa_t$'s directly, since we have identifiability problems here. Nevertheless, we can consider an affine transformation so that the $\kappa_t$'s are equal in 1900 and 1950 (say). A minimal sketch of that rescaling (K and Kref are two estimated $\kappa_t$ series, indexed by the years Y and Yref; the function name is hypothetical):
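rescale=function(K,Y,Kref,Yref,y1=1900,y2=1950){
 b=(Kref[Yref==y2]-Kref[Yref==y1])/(K[Y==y2]-K[Y==y1])   # slope of the affine map
 a=Kref[Yref==y1]-b*K[Y==y1]                             # intercept
 a+b*K                                                   # rescaled series, equal to Kref in y1 and y2
}

With that rescaling, we obtain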

On that graph, we considered an ETS(AAN) forecast. If we do not consider the entire series for forecasting, but only observations following WWII (1945), we obtain

For sketches of the R code,

library(gnm)       # for the gnm() function below
library(forecast)  # for ets() and auto.arima()
T=1980
base0=data.frame(D,E,A,Y,a=as.factor(A),
y=as.factor(Y))
base=base0[base0$Y<=T,]
LC2=gnm(D~a+Mult(a,y),offset=log(E),family=
poisson,data=base)                                  # Lee-Carter model, as a Poisson regression
A=LC2$coefficients[1]+LC2$coefficients[2:110]       # alpha_x
B=LC2$coefficients[111:220]                         # beta_x
K0=LC2$coefficients[221:length(LC2$coefficients)]   # kappa_t
Y=as.numeric(K0)
K1=c(K0,forecast(ets(Y,model="AAN"),h=240)$mean)            # ETS(AAN) forecast of kappa_t
K2=c(K0,forecast(auto.arima(Y,allowdrift=TRUE),h=240)$mean) # ARIMA forecast of kappa_t
MU=matrix(NA,length(A),length(K1))
MU1=MU2=MU
for(i in 1:length(A)){
for(j in 1:length(K1)){
MU1[i,j]=exp(A[i]+B[i]*K1[j])
MU2[i,j]=exp(A[i]+B[i]*K2[j])
}}
x=40                             # age of the insured in year 2000
s=seq(0,109-x-1)
t=2000
Pxt1=cumprod(exp(-diag(MU1[x+1+s,t+s-base1$Year[1]-1])))   # survival probabilities, ETS forecast
Pxt2=cumprod(exp(-diag(MU2[x+1+s,t+s-base1$Year[1]-1])))   # survival probabilities, ARIMA forecast
r=.035                           # discount rate
m=70                             # age at which the annuity starts
h=seq(0,39)
V1=1/(1+r)^(m-x+h)*Pxt1[m-x+h]   # discounted expected cash-flows
V2=1/(1+r)^(m-x+h)*Pxt2[m-x+h]
M=cbind(V1,V2)
apply(M,2,sum)                   # the two annuity values

Actually, it is not that bad…. even if it is only a qualitative intuition. Again, I am not a demographer, and my interest is more in actuarial science… so if we look at the estimation of annuities (still the same insurance contract as here) for an insured of age 40 in 2000, we get the following graph (where the forecasts of the $\kappa_t$'s were obtained on the complete series, i.e. from 1816 until the year we consider),

(here it means that in 1900, I had to forecast mortality for someone of age 40 in 2000… so we had to forecast mortality with a 150-year horizon). Obviously, even if we are able to forecast improvements of mortality rates, it is not enough since it looks like, each year, improvements are always higher than what was expected. Note that if we run it twice (since there might be problems with initial values in the econometric procedure), we obtain something similar,

So, the output is consistent. And if we change the way we predict future values, e.g. by focusing only on the past 50 years, i.e.

K1=c(K0,forecast(ets(Y[(length(Y)-50):length(Y)],
model="AAN"),h=240)$mean)
K2=c(K0,forecast(auto.arima(Y[(length(Y)-50):length(Y)],
allowdrift=TRUE),h=240)$mean)

we obtain the following graph for the annuity associated to an insurance contract sold in 2000,

so that relative changes compared with 1980 are (in %)

Hence, over a bit more than 25 years, we underestimated annuities by 25%. If we start to take into account possible investment returns, it is not so bad, I think….  don’t you think ?

 

Finding roots of functions in actuarial science

The following simple code can be used to find roots of functions (based on the secant algorithm),

secant=function(fun, x0, x1, tolerance=1e-07, niter=500){
	# secant method: iterate x2 = x1 - f(x1)*(x1-x0)/(f(x1)-f(x0)) until |f(x2)| is small
	for ( i in 1:niter ) {
		x2 <- x1-fun(x1)*(x1-x0)/(fun(x1)-fun(x0))
		if (abs(fun(x2)) < tolerance)
			return(x2)
		x0 <- x1
		x1 <- x2
	}
	warning("no convergence after niter iterations")
}

It can be interesting in actuarial science, e.g. to find the actuarial rate such that two present values are equal. For instance, consider the following capital payments, paid only if the insured is still alive (this example was initially considered here). We would like to find the rate such that the expected present value is 600,

> Lx=read.table("https://perso.univ-rennes1.fr/arthur.charpentier/TV8890.csv",
+ header=TRUE,sep=";")
> capital=c(100,100,125,125,150,150)
> n=length(capital)
> X=45                                    # age of the insured
> f=function(x){                          # expected present value at rate x
+ capital.act=capital*(1/(1+x))^(1:n)     # discounted cash-flows
+ PROBA=Lx[((Lx[,1]>X)*(Lx[,1]<=(X+n)))==1,2]/Lx[(Lx[,1]==X)==1,2]  # survival probabilities
+ return(sum(capital.act*PROBA))}
>
> f1=function(x){f(x)-600}
> secant(f1,0,0.1)
[1] 0.06022313
> f(0.06022313)
[1] 600
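As a sanity check, the same root can be obtained with base R's built-in uniroot() function (a sketch, using the f1 defined above and assuming the root lies in the interval (0, 0.1)),

> uniroot(f1, interval=c(0,0.1))$root   # should agree with the secant result above (about 0.0602)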


Comments on probabilities

The only thing I remember from the probability courses I took a few years ago is that we also have to clearly define the event whose probability we want to calculate. On the Freakonomics blog, last week, the Israeli lottery was mentioned (here; see also there, where I mentioned that, along with odd facts from the French lottery),

Yesterday, Andrew Gelman claimed (here) that there was a probability error… Well, since Andrew is really a statistician (and a good one… while I am barely an economist), I tried to do the maths… and to understand where the error was coming from…

Since 6 numbers are drawn out of a pool of numbers from 1 to 37, the total number of combinations at each lottery draw is

$n=\binom{37}{6}=2{,}324{,}784$
> (n=choose(37,6))
[1] 2324784

Over 8 lottery draws (since there are two draws per week, we can assume there are 8 draws per month), the probability that no two draws are identical is

$\dfrac{n(n-1)(n-2)\cdots(n-7)}{n^8}$

Here is the R code for those who want to check, again,

> prod(n-0:7)/n^8
[1] 0.999988

Each month, the probability of a “coincidence” (where I define “coincidence” as the event “over 8 draws, at least two times, we obtained the same 6-uplet”, or more precisely (as mentioned here) “over one calendar month, at least two times, we obtained the same 6-uplet”) is

> (p=1-(prod(n-0:7)/n^8))
[1] 1.204407e-05

The occurrence of a coincidence each month has a Geometric distribution, with probability p. And it is classical, following Gumbel’s definition (here), to consider 1/p, called the “return period”, i.e. the number of months we have to wait until we observe a coincidence (i.e. a repetition within the same month), since for a geometric distribution

$\mathbb{E}(N)=\dfrac{1}{p}$
> 1/p/(12)
[1] 6919.034

Here, the (expected) return period is 6919 years.

From my point of view, this is “the incident of six numbers repeating themselves within a calendar month”, and this is an event that occurs once in 6,919 years. On the other hand, the median of a geometric distribution is

$\text{median}(N)=\left\lceil\dfrac{-\log 2}{\log(1-p)}\right\rceil$
> -log(2)/log(1-p)/(12)
[1] 4795.88

which means that we have a 50% chance of getting such a coincidence over 4,796 years.

Of course, we can instead look at a longer period, say 100 draws, i.e. one year (here I define “coincidence” as the event “over 100 draws, at least two times, we obtained the same 6-uplet”). Below, in red, is the expected return period, and in blue the median of the geometric distribution,

> M=E=rep(NA,100)
> for(i in 2:100){
+ p=1-exp((sum(log(n-0:(i-1)))-i*log(n)))
+ E[i]=1/p/(100/i)
+ M[i]=-log(2)/log(1-p)/(100/i)
+ }
> plot(1:100,E,ylim=c(0,10000),type="l",col="red",lwd=2)
> lines(1:100,M,col="blue",lwd=2)
> abline(v=8,lty=2)
> points(8,E[8],pch=19,col="red")
> points(8,M[8],pch=19,col="blue")

or, below, a log-scaled version,

As Xi’an did (here), assume now that similar lotteries are run in 100 countries. Here I define “coincidence” as the event “over the k draws of each of the 100 lotteries around the world, at least two identical 6-uplets were obtained”, and then the previous graph becomes (with the level of k on the x-axis)

Here, there is a 12% chance of observing identical numbers somewhere in the world within a month…

But here, we can have one 6-uplet in Israel, and the other one in Egypt, say… If we require the same 6-uplet to appear twice in the same country, the graph becomes

i.e. each month there is about a one-in-a-thousand chance…

> i=8
> p=1-exp((sum(log(n-0:(i-1)))-i*log(n)))
> 1-(1-p)^100
[1] 0.001203689

Note: actually, Xi’an mentioned that the probability that this coincidence [two identical draws within 188 draws] occurred in at least one out of 100 lotteries (there are hundreds of similar lotteries across the world) is 53%! And I got the same,

> 1-(1-P[188])^100
[1] 0.5305219
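For completeness, here is a sketch of how that quantity could be computed (P[188] is not defined in the code shown above; here p is the probability of at least one repetition within the 188 draws of a single lottery),

> i=188
> p=1-exp(sum(log(n-0:(i-1)))-i*log(n))   # P(at least two identical draws among 188)
> 1-(1-p)^100                             # over 100 independent lotteries, i.e. the 53% above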

Names of villages, in France

Keith Briggs published a post here on the distribution of English place name elements, which contains almost twenty maps, like the one for names ending in -bourn, bourne, burn (here) or -head (there). Actually, it is possible (Robin mentioned that already here) to do similar things in France… Consider the dataset containing the 35,250 commune names (here): it is an xls file containing the official name, the latitude, and the longitude. To start with something simple, it is possible to look at villages containing “saint” in their name,

There are a lot of them, and there is no obvious geographic trend. For a simple geographic trend, it is possible to see where the villages whose names end with “sur mer” (meaning literally “on the sea”) are, below on the left. Obviously, we cannot find such places in the Alps. Similarly, names ending with “Seine” are clearly located along the Seine river, on the right,

> ville=read.table("D:\\r-data\\ville.csv",sep=";",header=TRUE)
> nrow(ville)
[1] 35376
> ville$maj=as.character(ville$Nom.Ville)
> n=nchar(ville$maj)
> I=substr(ville$maj,pmax(0,n-8),n)   # last 9 characters of each name
> Ind=I=="-sur-Mer "                  # names ending with "-sur-Mer" (with a trailing space, as stored in the file)
> sum(Ind)
[1] 98
> library(maps)
> map('france', fill = FALSE)
> X=ville[Ind,]
> x=as.numeric(as.character(X$Longitude))
> y=as.numeric(as.character(X$Latitude))
> points(x,y,pch=19,col="blue",cex=.6)

In order to continue with some geographic patterns, consider the ends of the names, such as “-gny” (below on the left, in red) or “-ac” (below on the right, in blue)

Some claim that “-ac” comes from Gaulish, and it can be found in Celtic regions (here in Brittany). Obviously, there is also an origin in Occitania (south-west of France), and in the Oïl regions (North and North-East) it became “-gny”. All those maps are obtained along the same lines; a minimal sketch of the code (assuming the ville data frame and the maps package loaded as above, and names stored without trailing whitespace):
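> plot.suffix=function(suffix,col){
+ Ind=grepl(paste(suffix,"$",sep=""),ville$maj)   # names ending with the given suffix
+ map('france', fill = FALSE)
+ X=ville[Ind,]
+ points(as.numeric(as.character(X$Longitude)),
+ as.numeric(as.character(X$Latitude)),pch=19,col=col,cex=.6)
+ }
> plot.suffix("gny","red")
> plot.suffix("ac","blue")

Consider similarly names ending with “-an” (below on the left, in red) or “-ey” (below on the right, in blue),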

Still about the ends of the names, it is also possible to look for villages ending either with “-a” (below on the left, in red) or with “-o” (below on the right, in blue),

We are now in the southern part of France…. “-a” in Corsica and the Pyrénées, while “-o” can be found in Corsica and in Brittany. For names beginning with “ker-” or “lan-” (below on the left, in red) or “castel-” (below on the right, in blue),

“ker-” appears in 18,000 place names (as mentioned here), but only in some village names. The same holds for “castel-” in the southern part of France.
To go a bit further, 40 years ago, Georges Brassens sang a song entitled “La ballade des gens qui sont nés quelque part” (the ballad of people who were born somewhere).

He says that people are usually extremely proud of their villages…. Actually, there are more people proud of living over something than under something: below are villages containing “sous” (i.e. under; below on the left, in red) or “sur” (i.e. over; below on the right, in blue),

On the other hand, villages containing “grand” or “grande” (i.e. tall or big; below on the left, in red) and those containing “petit” or “petite” (i.e. small; below on the right, in blue) seem to be correlated: close to a village with “grand” in its name, there is often a village with “petit” in it. For instance Virieu-le-Grand and Virieu-le-Petit, or Essigny-le-Grand and Essigny-le-Petit.

And finally, I found it surprising to see so many villages containing “montagne” (i.e. mountain; below on the left, in orange) or starting with “Mont” (below on the right, in purple) that are far from any mountains,

You do not need to live close to mountains to have “mountain” in your village name. Even in Brittany you can find dozens of villages starting with “Mont”….

Extracting information from a keyboard…

Yesterday, Baptiste published a post on “ethno-photography” (here). As he mentioned, in Paris 8 they experience a real absence of serious cleaning of office equipment. He then shows the keyboard of the only computer the sociology department can use (for forty researchers),

Apart from the fact that everyone in France should be ashamed to see how little is spent on universities (which is the first piece of information we get from that picture), we should also be able to guess in which language people in this department work.
I considered three books (two in French, one in English) and I looked at the frequency of each letter,

  • Mauss, manuel d’éthnographie (here), 1926
  • Durkheim, Livre II: Les croyances élémentaires in Les formes élémentaires de la vie religieuse (here), 1912
  • Ferri, Criminal Sociology (here), 1896

Those three books are in rich text format; I converted them to plain text files… Then, it is easy to count the occurrences of each letter. E.g. for Mauss,

> library(corpora)
> textfile=scan("MAUSS-manuel.txt",
+ what="char", sep="\n")
Read 1550 items
> textfile<-tolower(textfile)
> M=NA
> for(i in 1:length(textfile)){
+ line=textfile[i]
+ M=c(M,strsplit(as.character(line),"")[[1]])
+ }
> T=table(M)
> T
M
    '     -           \t     !     "     %     &     (     )     ,     .     / 
 5308  1049 86589    44     3     3     2     2   370   391  6609  4909    12 
    :     ;     ?     @     ]     _     ~     ’     =     «     »     ¬     ° 
  819  1178   113     1     1     4     1    39     1   108   107   823     3 
    …     0     1     2     3     4     5     6     7     8     9     a     à 
    1    69   213    83    73    34    48    33    28    64   151 30559  1651 
    â     ä     b     c     ç     d     e     é     è     ê     ë     f     g 
  224     3  3562 14678   110 17713 63955 10354  1798  1000     5  4555  4911 
    h     i     î     ï     j     k     l     m     n     ñ     o     º     ô 
 4359 30851   226    47  1147   247 24792 12844 32525     6 25562     2   151 
    ö     œ     p     q     r     s     t     u     ù     û     ü     v     w 
   12    52 12696  4667 28237 37630 32945 25001   211    40     9  4787   164 
    x     y     z 
 1996  1222   343

Then, we can summarize it to see the proportions of the 26 standard letters. A minimal sketch of that computation, keeping only the unaccented letters a-z from the table T above:
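> prop=T[names(T) %in% letters]   # counts for a to z only
> prop=prop/sum(prop)             # proportions
> barplot(sort(prop,decreasing=TRUE),las=2)

We then have, for Mauss,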

and for Durkheim,

If we compare the two, we have almost the same proportions,

If we look at our book in English now, we have

i.e., if we compare with Mauss for instance

So we have much more E in French than in English; but still, people writing in English use the E a lot. So looking at the E should not give us any clue…. But we can see that in English, the H is as common as the L, or the C. Not in French, where the L is much more frequent than the H. And on the picture, the C is more worn than the H. We can also look at the U, which is common in French but not in English… Here, on the keyboard, it is clearly worn… so I guess people use it frequently.
So I would say that, on that computer, they write more in French than they write in English.
Actually, the same idea was used a long time ago on calculators to check that Benford’s law works: some digit keys are really used more than others (just as, according to the legend, some pages in logarithm books were never used….); see here or there. So Baptiste, if one day the keyboard is cleaned up, please send me another picture after a few weeks to see if things have changed….
And for those who cannot imagine what it is like to work in some universities in France, just look at his blog (here). The pictures are unbelievable…. Good luck Baptiste….

Lottery, and martingales

I recently got a comment on a post I published one year ago, here, about the fact that in September 2009, on the 6th and the 10th, the same 6 numbers came out at the lottery in Bulgaria (but I do not understand the question: the author of the comment asks about the order in which the numbers came out…).
Xi’an also published a post on that topic, there, since last week the same thing happened in Israel.
All that reminded me of a discussion I had with a colleague about another post (here), where I mentioned that I had found a strange distribution of numbers in the French lottery (the old one, actually). For those who want to check, all historical draws are here, in a zip file. My colleague was wondering if I had found the martingale to win the lottery…

First, I do not like that term, since a martingale is something different from a mathematical point of view… Second, let us see whether it would have been possible to make some money… (a free lunch ?)

> loto=read.table("D:\\loto.csv",dec=",",header=TRUE,sep=";")
> ntirage=nrow(loto)
> loto=loto[51:ntirage,]
> ntirage=nrow(loto)
>   N=as.matrix(loto[,c("boule_1","boule_2","boule_3","boule_4","boule_5","boule_6")])
> n=as.vector(N)
> length(n)
[1] 28848
> (TN=table(n))
n
1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19  20
607 576 571 618 579 598 608 582 588 590 562 577 577 580 591 630 558 567 594 608
21  22  23  24  25  26  27  28  29  30  31  32  33  34  35  36  37  38  39  40
578 562 579 583 574 589 602 572 550 598 604 582 545 646 597 618 599 636 609 588
41  42  43  44  45  46  47  48  49
576 589 577 585 618 596 560 571 604

So, it might look nice, but we have to compare that distribution with the one we should observe with “independent” draws. We cannot simply use a discrete uniform distribution: the six numbers of a given draw are not independent. Each day, the 49 balls are back in the urn, but within a day, we do not have independent draws (it is a sample without replacement). Hence, with 4808 lottery draws, each number cannot be obtained more than 4808 times. So, let us use Monte Carlo techniques to look at the theoretical distribution,

> M=matrix(NA,49,1000)
> for(s in 1:1000){
+ B=NA
+ for(i in 1:ntirage){B=c(B,sample(1:49,size=6,replace=FALSE))}
+ B=B[-1]
+ M[,s]=sort(table(B))
+ }
> q50=function(x){quantile(x,.5)}
> Q50=apply(M,1,q50)
> plot(1:49,sort(TN),type="n",xlab="",ylab="",ylim=range(c(TN,M)))  # open the plot (empirical counts are added at the end)
> lines(1:49,Q50,col="red",lwd=2)
> q10=function(x){quantile(x,.1)}
> Q10=apply(M,1,q10)
> q90=function(x){quantile(x,.9)}
> Q90=apply(M,1,q90)
> polygon(c(1:49,49:1),c(Q10,rev(Q90)),col="light blue",border=NA)
> lines(1:49,Q10,col="red",lty=2)
> lines(1:49,Q90,col="red",lty=2)
> lines(1:49,Q50,col="red",lwd=2)
> points(1:49,sort(TN),pch=19,type="b")

Looking at the graph, it looks like some numbers appeared too frequently, especially the least frequent ones (bottom left): even the numbers that came out the least still came out more often than the simulations suggest. So, since I have removed the last 50 draws, let us see if we could have used that information, somehow…

> nb=names(sort(TN))
> loto=read.table("D:\\loto.csv",dec=",",header=TRUE,sep=";")
> loto=loto[1:50,]
> N=as.matrix(loto[,c("boule_1","boule_2","boule_3","boule_4","boule_5","boule_6")])
> n=as.vector(N)
> TN=table(n)
> TN[nb]
> barplot(TN[nb])

Unfortunately, numbers that came out too frequently over the first 4800 draws did not appear that frequently over the last 50. Playing the top numbers might not have been a great strategy.

(numbers that came out frequently are on the right, while those we did not see much are on the left)… What about the worst numbers: if I had decided to play the 6 numbers that did not come out very frequently (we’ve seen earlier that they should have appeared even less often, actually), would it have been interesting ? As we can see, our top 2 numbers over the last 50 draws were numbers that did not appear frequently earlier (29 and 47 appeared respectively 10 and 11 times over those 50 draws)….
Over 50 draws of 6 balls, the expected total frequency of 6 given numbers is around 36.7,

> S=rep(NA,10000)
> for(s in 1:10000){
+ B=NA
+ for(i in 1:50){B=c(B,sample(1:49,size=6,replace=FALSE))}
+ B=B[-1]
+ S[s]=sum(B%in%(1:6))
+ }
> mean(S)
[1] 36.7694
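That value can also be obtained in closed form: each of the 6 chosen numbers appears in a given draw with probability 6/49, so over 50 draws the expected total count is

> 50*6*(6/49)
[1] 36.73469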

But here for the top 6, we have

> z=TN[nb]
> sum(rev(z)[1:6])
[1] 29

i.e. the top 6 appeared 29 times over 50 draws of 6 balls (which looks low), and for the worst 6, it is a bit higher,

>  sum(z[1:6])
[1] 38

If we look at the theoretical density of the frequency of 6 given numbers, we have

i.e. our worst 6 is nicely average (in green), while the top 6 did not appear frequently this time (here in blue) ! So we could not have used that information….
Anyway, if some of you are interested in using statistics to get a free lunch with the nouveau loto, I did not see any strange pattern (data can be downloaded here).

I am terribly sorry, but I cannot help anyone winning at the French Lottery….

Margin of error, and comparing proportions in the same sample

I recently tried to answer a simple question, asked by @adelaigue. Actually, I thought that the answer would be obvious… but it is a little bit more complex than what I thought. In a recent survey about elections in Brazil, a French newspaper mentioned that “Mme Rousseff, 62, gets 46.8% of voting intentions and José Serra, 68, 42.7%” (i.e. proportions obtained from the survey). It is also mentioned that the margin of error of the survey is 2.2%, which means (for the journalist) that there is a “large probability” that the two candidates are tied.
Usually, in sampling theory, we look at the margin of error of a single proportion. The idea is that the variance of $\widehat{p}$, obtained from a sample of size $n$, is

$\text{Var}(\widehat{p})=\dfrac{p(1-p)}{n}$

thus, the standard error is

$\sqrt{\dfrac{p(1-p)}{n}}$

The standard 95% confidence interval, derived from a Gaussian approximation of the binomial distribution is

$\left[\widehat{p}\pm 1.96\sqrt{\dfrac{\widehat{p}(1-\widehat{p})}{n}}\right]$

The largest value is obtained when p is 1/2, and then we have a worst-case confidence interval (an upper bound), which is

$\left[\widehat{p}\pm \dfrac{1.96}{2\sqrt{n}}\right]$

So a margin of error of $\varepsilon=\frac{1.96}{2\sqrt{n}}\approx\frac{1}{\sqrt{n}}$ means that $n\approx\frac{1}{\varepsilon^2}$. Hence, a 5% margin of error means that n=400, while 2.2% means that n≈2000:
> 1/.022^2
[1] 2066.116
Classically, we compare proportions between two samples: surveys at two different dates, surveys in different regions, surveys paid for by two different newspapers, etc. But here, we wish to compare proportions within the same sample. This has been considered in an “old” paper published in 1993 in The American Statistician,

It contains nice figures to illustrate the difference between the standard approach,

and the one we would like to study here.

This point is also mentioned in Kish’s book, Survey Sampling (thanks Benoit for the reference),


Let $\widehat{p}_1$ and $\widehat{p}_2$ denote the empirical frequencies obtained from the sample, based on $n$ observations. Then, since

$\text{Var}(\widehat{p}_1)=\dfrac{p_1(1-p_1)}{n}$
$\text{Var}(\widehat{p}_2)=\dfrac{p_2(1-p_2)}{n}$

and

$\text{Cov}(\widehat{p}_1,\widehat{p}_2)=-\dfrac{p_1\,p_2}{n}$

we have

$\text{Var}(\widehat{p}_1-\widehat{p}_2)=\dfrac{p_1(1-p_1)}{n}+\dfrac{p_2(1-p_2)}{n}+2\,\dfrac{p_1\,p_2}{n}$

Thus, a natural margin of error on the difference between the two proportions is here

$1.96\sqrt{\dfrac{(p_1+p_2)-(p_1-p_2)^2}{n}}$

which is here 4 points
> n=2000
> p1=46.8/100
> p2=42.7/100
> 1.96*sqrt((p1+p2)-(p1-p2)^2)/sqrt(n)
[1] 0.04142327
Which is exactly the difference we have here ! Hence, the probability of reaching such a value is quite small (2%)
> s=sqrt(p1*(1-p1)/n+p2*(1-p2)/n+2*p1*p2/n)
> (p1-p2)/s
[1] 1.939972
> 1-pnorm(p1-p2,mean=0,sd=sqrt((p1+p2)-(p1-p2)^2)/sqrt(n))
[1] 0.02619152
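As a quick sanity check of that variance formula, we can simulate the survey as multinomial draws (a sketch; the third category collects the remaining answers) and compare the empirical standard error of the difference with the closed-form expression,

> sim=rmultinom(10000,size=n,prob=c(p1,p2,1-p1-p2))
> d=(sim[1,]-sim[2,])/n            # simulated values of the difference of proportions
> sd(d)                            # empirical standard error
> sqrt((p1+p2-(p1-p2)^2)/n)        # theoretical standard error used above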

Actually, we can compare the three margins of error we have so far,

  • the upper bound
$\dfrac{1.96}{2\sqrt{n}}$
  • the “average one”
$1.96\sqrt{\dfrac{p(1-p)}{n}}$

where

$p=\dfrac{p_1+p_2}{2}$
  • the more accurate one we just obtained,
$1.96\sqrt{\dfrac{2p-\delta^2}{n}}$

where $\delta=p_1-p_2$.
> p=seq(0,.5,by=.01)
> ic1=rep(1.96/sqrt(4*n),length(p))
> ic2=1.96*sqrt(p*(1-p))/sqrt(n)
> delta=.01
> ic31=1.96*sqrt(2*p-delta^2)/sqrt(n)
> delta=.2
> ic32=1.96*sqrt(2*p-delta^2)/sqrt(n)
> plot(p,ic32,type="l",col="blue")
> lines(p,ic31,col="red")
> lines(p,ic2)
> lines(p,ic1,lty=2)
So on the graph below, the dotted line is the standard upper bound, the plain black line being a more accurate one when the probability is $p$ (the x-axis). The red line is the true margin of error with a large difference between candidates (20 points), and the blue line with a small difference (1 point).


Remark: an alternative is to consider a chi-square test, comparing the observed proportions $(\widehat{p}_1,\widehat{p}_2)$ with the expected ones $(p,p)$, where $p$ is the average proportion, i.e. 44.75%. Then

$Q=n\left(\dfrac{(\widehat{p}_1-p)^2}{p}+\dfrac{(\widehat{p}_2-p)^2}{p}\right)$

i.e. $Q$=3.71
> p=(p1+p2)/2
> (x2=n*((p1-p)^2/p+(p2-p)^2/p))
[1] 3.756425
> 1-pchisq(x2,df=1)
[1] 0.05260495
Under the null hypothesis, $Q$ should have a chi-square distribution with one degree of freedom (since the average is fixed here). Here, the probability of reaching that level is around 5% (which can be compared with the 2% we obtained before).

So finally, I would think that here, stating that there is a “large probability” that the two candidates are tied is not correct…

Too large datasets for regression ? What about subsampling….

Recently, a classmate working in an insurance company told me he had datasets too large to run simple regressions (GLMs, which involve optimization routines), and that they were thinking of a reward for whoever writes the best (or at least the fastest) R code. My first idea was to use subsampling techniques: 10 regressions on 100,000 observations can take less time than one regression on 1,000,000 observations. And perhaps also provide better results…

  • Time to run a regression, as a function of the number of observations

Here, I generate a dataset as follows

$Y_i\sim\mathcal{P}(\exp(\lambda_i)),\quad \lambda_i=0.2\,X_{5,i}-4\,f_{2,5}(X_{3,i})+X_{1,i}+\mathbf{1}(X_{2,i}=A)-2\,\mathbf{1}(X_{2,i}=B)-5\,\mathbf{1}(X_{2,i}=C)$, where $f_{2,5}$ is the Beta(2,5) density,

and we fit

$Y_i\sim\mathcal{P}(\mu_i),\quad \log\mu_i=s(X_{1,i})+\beta_{X_{2,i}}+\beta_3X_{3,i}+\beta_4X_{4,i}+\beta_5X_{5,i}+\beta_6X_{6,i}+\log E_i$

where $s(\cdot)$ is a spline function (just to make it as general as possible, since in insurance ratemaking we include continuous variates that do not influence claims frequency linearly in the score). Yes, there may also be useless variables, including one that is strongly correlated with a variable that does have an impact in the regression. The code to generate the dataset is simply

> library(mnormt)    # for rmnorm() below
> library(splines)   # for bs() in the regressions below
> n=10000
> X1=rexp(n)
> X2=sample(c("A","B","C"),size=n,replace=TRUE)
> X3=runif(n)
> Z=rmnorm(n,c(0,0),matrix(c(1,0.8,.8,1),2,2))
> X4=Z[,1]
> X5=Z[,2]
> X6=X1^2
> E=runif(n)
> lambda=.2*X5-4*dbeta(X3,2,5)+X1+
+1*(X2=="A")-2*(X2=="B")-5*(X2=="C")
> Y=rpois(n,exp(lambda))
> base=data.frame(Y,X1,X2,X3,X4,X5,X6,E)

We would like to study the time it takes to run a regression, as a function of the size (i.e. the number of lines $n$) of the dataset.

> system.time( glm(Y~bs(X1)+X2+X3+X4+
+ X5+X6+offset(log(E)),family=poisson,
+ data=base) )
utilisateur     système      écoulé
0.25        0.00        0.25

Here, the time I look at is the last one (elapsed time). So far, this was rather simple, but it is not the best model I can get. Let us use a stepwise (backward) variable selection,

> system.time( step(glm(Y~bs(X1)+X2+X3+
+ X4+X5+X6+offset(log(E)),family=poisson,
+ data=base)) )
Start:  AIC=2882.1
Y ~ bs(X1) + X2 + X3 + X4 + X5 + X6 + offset(log(E))
Step:  AIC=2882.1
Y ~ bs(X1) + X2 + X3 + X4 + X5 + offset(log(E))
Df Deviance    AIC
<none>        2236.0 2882.1
- X5      1   2240.1 2884.2
- X4      1   2244.1 2888.2
- X3      1   4783.2 5427.3
- X2      2   5311.4 5953.5
- bs(X1)  3   6273.7 6913.8
utilisateur     système      écoulé
1.82        0.03        1.86
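The timing curves below were obtained by repeating this on simulated datasets of various sizes; a minimal sketch (the helper simu(n), which would wrap the data-generation code above for a dataset of size n, is hypothetical):

> sizes=trunc(10^seq(3,5.5,by=.25))
> elapsed=sapply(sizes,function(n){
+ base=simu(n)                       # hypothetical helper regenerating a dataset of size n
+ system.time(glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
+ family=poisson,data=base))["elapsed"]})
> plot(sizes,elapsed,log="xy")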

Finally, from the first regression, we have points in black (based on 200 simulated datasets), and with a stepwise procedure, we have the points in red.

i.e. it might look linear (proportional), but if it were linear, then on a log-log scale we should also have straight lines, with slope 1,

Actually, it looks like a convex function.

That convexity might lead to misinterpretation. On the graph below on the left, a dataset two times bigger than the previous one (black point) will take less than two times longer to run, while on the right, it will take more than two times longer,

Convexity can simply be interpreted as “too large datasets take time, and too small ones do too…”. Which is a first step: it should be interesting, in some cases, to run several regressions on smaller datasets….

  • Running 100 regressions on 100 lines, or running 1 regression on 10,000 lines ?

Here, we have datasets with $n$=200,000 lines. The question is how long it will take if we subdivide the dataset into $k$ subsamples (of equal size), and run $k$ regressions.

> nk=trunc(n/k); classe=rep(1:k,each=nk); nt=nk*k
> base=data.frame(Y[1:nt],X1[1:nt],
+ X2[1:nt],X3[1:nt],X4[1:nt],X5[1:nt],
+ X6[1:nt],E[1:nt],classe)
> system.time( for(j in 1:k){
+  glm(Y~bs(X1)+X2+X3+X4+X5+
+ X6+offset(log(E)),family=poisson
+ ,data=base,subset=classe==j) })
utilisateur     système      écoulé
1.31        0.00        1.31
> system.time( for(j in 1:k){
+      step(glm(Y~bs(X1)+X2+X3+
+ X4+X5+X6+offset(log(E)),family=
+ poisson,data=base,subset=classe==j)) })
Start:  AIC=183.97
Y ~ bs(X1) + X2 + X3 + X4 + X5 + X6 + offset(log(E))

[…]

  Df Deviance    AIC
<none>        117.15 213.04
- X2      2   250.15 342.04
- X3      1   251.00 344.89
- X4      1   420.63 514.53
- bs(X1)  3   626.84 716.74
utilisateur     système      écoulé
11.97        0.03       12.31

On the graph below, we have the time (y-axis, here on a log scale) it took to run $k$ regressions on samples of size $n/k$, as a function of $k$ (x-axis), including the time it took to run the regression on the full dataset of size $n$, which is the concentration of dots on the left (i.e. $k$=1), both on the 6 regressors (in black) and with a stepwise procedure (in red). One has to keep in mind that I did not remove the printing option in the stepwise procedure, so it might be difficult to compare the two clouds (black vs. red). Nevertheless, we clearly see that if we run $k$ regressions on samples of size $n/k$, when $k$ is not too large, i.e. less than 10 or 15, it is not longer than the single regression on $n$=200,000 lines.

So here we see that running 100 regressions on 2,000 lines takes longer than running 1 regression on 200,000 lines… But maybe we are not comparing things that are actually comparable: what if it takes a bit longer, but we strongly improve the quality of our estimators ?

  • What about the quality of the output ?

Here, we consider only one dataset, with $n$=100,000 lines (just to make it run a bit faster), and $k$=20 subsets. Recall that the generated dataset is from

$Y_i\sim\mathcal{P}(\exp(\lambda_i)),\quad \lambda_i=0.2\,X_{5,i}-4\,f_{2,5}(X_{3,i})+X_{1,i}+\mathbf{1}(X_{2,i}=A)-2\,\mathbf{1}(X_{2,i}=B)-5\,\mathbf{1}(X_{2,i}=C)$

and we fit

$Y_i\sim\mathcal{P}(\mu_i),\quad \log\mu_i=s(X_{1,i})+\beta_{X_{2,i}}+\beta_3X_{3,i}+\beta_4X_{4,i}+\beta_5X_{5,i}+\beta_6X_{6,i}+\log E_i$

Here, we plot the estimate $\widehat{\beta}$ of one of the regression coefficients, together with a confidence interval defined as

$\left[\widehat{\beta}\pm 1.96\,\widehat{\mathrm{se}}(\widehat{\beta})\right]$

The light blue segment is the initial estimator, while the blue one is obtained from the stepwise procedure. The grey area represents the estimate on the overall sample, while the $k$ segments on the right are the $k$ estimators (each obtained on a sample of size $n/k$).

We can see that those $k$ estimators are much more volatile, but their averages (horizontal dotted lines) are not so bad… The true value (i.e. the one used to generate the dataset) is the dotted black horizontal line.
And if we repeat that on 1,000 simulated datasets, we obtain the following distribution for $\widehat{\beta}$ (blue line), so we have an unbiased estimator of our parameter (the vertical line being the true value), here including the stepwise procedure,

But if we add, in red, the distribution of the average of the $k$ estimators obtained on the subsamples (the previous curve now being the light blue line in the back), we see that averaging estimators computed on subsamples is not bad at all, on the contrary,

and for those who think that the stepwise procedure is a mistake, here is what we get without it,

So what we can see is that running 20 regressions can take (a little) more time (from what we’ve seen earlier) than running only one on the whole dataset…. but it provides better estimates. So the tradeoff is not that simple, and maybe running several regressions on subsamples of huge datasets can be a proper alternative.
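To make the averaging idea concrete, here is a minimal sketch (assuming base, classe and k are defined as above; the coefficient name "X4" is just an example):

> coefs=sapply(1:k,function(j){
+ fit=glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
+ family=poisson,data=base,subset=(classe==j))
+ coef(fit)["X4"]})
> mean(coefs)    # averaged estimator over the k subsamples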

from two to three…

A short post to give more details about the final remark in the Financial Econometrics course, and more precisely about the formula that can be found in Philip Jorion’s book,

Note that this formula can be found (perhaps written with slight changes) in several papers, e.g. in the following sentence (on the http://www.bis.org/ website),

or the following formula, on documents from the Bank of England website,

I recently published (in French, here) a paper on the Value-at-Risk, including the following graph,

Usually, three times the average over the past 60 trading days is the larger component, but during the financial crisis, it turned out that the daily component was almost three times higher than the average value over the past months (this fact was mentioned by Paul Embrechts at a conference on risk measures in Paris).
An interpretation of the multiplicative coefficient k (which is between 2 and 3 in some publications, or exceeds 3 in others) has been proposed in a paper by Gerhard Stahl, entitled “Three cheers”. The idea is to use the Bienaymé-Tchebychev inequality. For random variables with finite variance,

$\mathbb{P}(|X-\mathbb{E}(X)|>x)\leq\dfrac{\text{Var}(X)}{x^2}$

Recall that this inequality is simply a corollary of Markov’s inequality,

$\mathbb{P}(|X|\geq x)\leq\dfrac{\mathbb{E}(|X|)}{x}$

or, for any increasing positive function $h$,

$\mathbb{P}(X\geq x)\leq\dfrac{\mathbb{E}(h(X))}{h(x)}$

(taking $h(x)=x^2$, applied to $X-\mathbb{E}(X)$). This upper bound can be far away from the true probability; see e.g. the Gaussian case below, i.e. if $X\sim\mathcal{N}(0,1)$,

$\mathbb{P}(|X|>z)=2\Phi(-z)\leq\dfrac{1}{z^2}$

 

> z = seq(0,3,by=.01)
> P = 2*(1-pnorm(z))   # true tail probability P(|X|>z) for a standard Gaussian
> U = 1/z^2            # Chebyshev upper bound
> plot(z,P,type="l",lwd=2,col="red",xlab="",ylab="")

The ratio between the two is given below,

> plot(z,U/P,type="l",lwd=2,col="purple",xlab="",ylab="",ylim=c(0,10))

Note that it is possible to interpret the x-axis values as probabilities, taking quantiles of the Gaussian distribution,

> plot(pnorm(z),U/P,type="l",lwd=2,col="purple",xlab="",
+ ylab="",ylim=c(0,10),xlim=c(.9,1))
> abline(h=3,lty=2)

The interpretation is that, at the 99% probability level, the upper bound is about 3 times higher than the true value for the $\mathcal{N}(0,1)$ distribution (the ratios below can be checked numerically, see the sketch after the list).
Note that

  • if z is the 95% quantile of the $\mathcal{N}(0,1)$ distribution, the ratio is 2 (1.92)
  • if z is the 99% quantile of the $\mathcal{N}(0,1)$ distribution, the ratio is 3 (3.04)
  • if z is the 99.5% quantile of the $\mathcal{N}(0,1)$ distribution, the ratio is almost 4 (3.88)
  • if z is the 99.75% quantile of the $\mathcal{N}(0,1)$ distribution, the ratio is 5 (5.04)
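A sketch of that numerical check, interpreting the ratios as the Chebyshev upper bound on the quantile, $\sigma/\sqrt{2(1-\alpha)}$, divided by the Gaussian quantile (this is my reading of the list above; the derivation at the end of the post gives the same numbers):

> alpha=c(.95,.99,.995,.9975)
> round((1/sqrt(2*(1-alpha)))/qnorm(alpha),2)
[1] 1.92 3.04 3.88 5.04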

A more formal explanation is to assume that X is symmetric and centred; then

$\mathbb{P}(X>x)=\dfrac{1}{2}\,\mathbb{P}(|X|>x)\leq\dfrac{\text{Var}(X)}{2x^2}$

Thus, if $x=\sigma/\sqrt{2(1-\alpha)}$, then $\mathbb{P}(X>x)\leq 1-\alpha$, and we have an upper bound for the Value-at-Risk at level $\alpha$,

$\text{VaR}_{\alpha}(X)\leq\dfrac{\sigma}{\sqrt{2(1-\alpha)}}$

where the right-hand side is an upper bound on the Value-at-Risk at level $\alpha$ for any centred distribution with finite variance $\sigma^2$.
If $\alpha=99\%$, then $1/\sqrt{2(1-\alpha)}\approx 7.07$, i.e. $\text{VaR}_{99\%}(X)\leq 7.07\,\sigma$. But since $\text{VaR}_{99\%}\approx 2.33\,\sigma$ for a $\mathcal{N}(0,\sigma^2)$ distribution, then

$\text{VaR}_{99\%}(X)\leq 7.07\,\sigma\approx 3.04\times 2.33\,\sigma$

and further

$\text{VaR}_{99\%}(X)\lesssim 3\times\text{VaR}_{99\%}\big(\mathcal{N}(0,\sigma^2)\big)$

Nikkei’s past experience vs. SP500 (in euros)

Following Michael’s idea (here), I wanted to go further, based on his intuition (and the dataset that he kindly sent me, there). If we consider the two series of the Nikkei index and the SP500 index in euros, we have the following graph,

The code is simply the following (the merging function is simply here to avoid problems with different trading days: since we look at the index and not the return, it is the simplest way to deal with it).

> library(RODBC)
> base = odbcConnectExcel(
+ "https://perso.univ-rennes1.fr/arthur.charpentier/spx_nky_eurusd.xls", 
+ readOnly = TRUE)
> series1 = sqlQuery(base,query="select * from [Tabelle1$A2:B8837]") # SPX
> series2 = sqlQuery(base,query="select * from [Tabelle1$D2:E8631]") # NKY
> series3 = sqlQuery(base,query="select * from [Tabelle1$G2:H8945]") # EURUSD
> odbcCloseAll()
> series4=merge(series1,series3)
> series4$SPEUR=series4$SPX/series4$EURUSD
> series5=merge(series4,series2)
> x=(as.Date(series5[,1])-as.Date("01/01/0000","%d/%m/%Y"))/365.25
> yl=range(series5[,4])
> xl=c(1975,2010)
> plot(x,series5[,4],axes=FALSE,xlab="",ylab="",type="l",
+ lwd=3,col="red",xlim=xl,ylim=yl)
> axis(1)
> axis(2, col="red")
> par(new=TRUE)
> yl=range(series5[,5])
> plot(x,series5[,5],axes=FALSE,xlab="",ylab="",type="l",
+ lwd=3,col="blue",xlim=xl,ylim=yl)
> axis(4, col="blue")
> mtext("SP500 in Euros", 2, line=2, col="red", cex=1.2)
> mtext("NKY", 4, line=2, col="blue", cex=1.2)

Those two series seem to have a similar pattern, so an idea can be to shift the SP500 to the left (by roughly ten years, i.e. 2500 trading days),

Interesting, isn’t it ? Suppose that we want to forecast (or foresee ?) the SP500 in euros for the next 10 years…

People who enjoy charts would have here a nice tool…

Those two series are extremely correlated, with a correlation of 0.9572,

> n=nrow(series5)
> X1=series5[2501:n,4]
> X2=series5[1:(n-2500),5]
> cor(X1,X2)
[1] 0.9572484

But are the two series cointegrated (see here, here or there for material on cointegration) ? Well, using the standard procedure, we first have to show that the two series are integrated. First, let us look at the autocorrelograms,

At first sight, we confirm the economic intuition that those indices should be integrated. Standard tests confirm that intuition,

> acf(X2,lag=1000,col="light green")
> acf(X1,lag=1000,col="light green")
> library(tseries)
> adf.test(X1)
        Augmented Dickey-Fuller Test
data:  X1 
Dickey-Fuller = -1.0768, Lag order = 17, p-value = 0.9264
alternative hypothesis: stationary 
> adf.test(X2)
        Augmented Dickey-Fuller Test
data:  X2 
Dickey-Fuller = -1.2905, Lag order = 17, p-value = 0.8788
alternative hypothesis: stationary

But if we want to go further, we have to find the cointegration relationship between the two series. From a heuristic point of view, a linear regression should be a good proxy,

> reg=lm(X1~X2)
> plot(residuals(reg))

> acf(residuals(reg),lag=1000,col="light green")

> adf.test(residuals(reg))
        Augmented Dickey-Fuller Test
data:  residuals(reg) 
Dickey-Fuller = -5.176, Lag order = 17, p-value = 0.01
alternative hypothesis: stationary 
Message d'avis :
In adf.test(residuals(reg)) : p-value smaller than printed p-value
> pp.test(residuals(reg))
        Phillips-Perron Unit Root Test
data:  residuals(reg) 
Dickey-Fuller Z(alpha) = -46.9775, Truncation lag parameter = 11,
p-value = 0.01
alternative hypothesis: stationary 
Message d'avis :
In pp.test(residuals(reg)) : p-value smaller than printed p-value

When we look at the autocorrelation function, it looks like we do have a stationary series.
This idea is, more or less, the idea of the Engle-Granger two-step procedure. But actually, we cannot directly use Dickey-Fuller’s test on the residuals to see if they are stationary. This was proved in Phillips and Ouliaris (1990), who also proposed a test (see e.g. here),

> library(tseries); po.test(cbind(X1,X2))
        Phillips-Ouliaris Cointegration Test
data:  cbind(X1, X2) 
Phillips-Ouliaris demeaned = -53.1766, Truncation lag parameter = 57,
p-value = 0.01
Message d'avis :
In po.test(cbind(X1, X2)) : p-value smaller than printed p-value
Another similar function can be found in R
> library(urca)
> summary(ca.po(cbind(X1,X2)))
######################################## 
# Phillips and Ouliaris Unit Root Test # 
######################################## 
Test of type Pu 
detrending of series none 
Call:
lm(formula = z[, 1] ~ z[, -1] - 1)
Value of test-statistic is: 45.2032 
Critical values of Pu are:
                  10pct    5pct    1pct
critical values 20.3933 25.9711 38.3413

Thus, we have to admit that those series are cointegrated.

Based on that idea, it is possible to model the stationary component, and forecast it for the next ten years, based on the assumption that we know the behavior of one time series (a sketch is given at the end of this post). Hence, if we add the confidence interval due to the uncertainty of the stationary component, we have the following graph,

 Of course, again, only uncertainty related to the stationary process is considered here….
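For the record, a sketch of how such a forecast could be built (assumptions: the forecast package is used for the stationary residual, and thanks to the 2500-day shift, the Nikkei values needed for the next ten years of SP500 have already been observed):

> library(forecast)
> res=residuals(reg)
> fit=auto.arima(res)                  # model for the stationary component
> fc=forecast(fit,h=2500)              # roughly ten years of trading days
> NKY.future=series5[(n-2500+1):n,5]   # Nikkei values matching the forecast horizon
> SP.future=coef(reg)[1]+coef(reg)[2]*NKY.future+fc$mean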

Séminaire Probabilité et Statistique, UBO, Brest

Talk at the statistics seminar of the Université de Bretagne Occidentale, in Brest, Tuesday May 5th (rather than Wednesday May 6th, as initially announced), 2 pm (in 10 days), on “multivariate extremes”. Slides can be found here.

The talk will give a detailed introduction on multivariate extremes and related concepts. Then the case of Archimedean copula will be fully described (following the paper with Johan Segers).

[04/05/2009]: some applications in risk management will be shown at the end of the talk, as well as some new things on spatial correlation.

And in order to illustrate tail convergence of Archimedean copulas, I have uploaded two animations, with tail independence below,

with tail dependence (or asymptotic dependence),