
Margin of error, and comparing proportions in the same sample

I recently tried to answer a simple question, asked by @adelaigue. Actually, I thought that the answer would be obvious… but it is a little bit more complex than what I thought. In a recent survey about elections in Brazil, a French newspaper mentioned that “Mme Rousseff, 62 ans, de 46,8% des intentions de vote et José Serra, 68 ans, de 42,7%” (i.e. the proportions obtained from the survey: 46.8% for Ms Rousseff and 42.7% for Mr Serra). It is also mentioned that “la marge d’erreur du sondage est de 2,2%”, i.e. the margin of error is 2.2%, which means (for the journalist) that there is a “grande probabilité que les 2 candidats soient à égalité” (a “large probability” that the two candidates are tied).
Usually, in sampling theory, we look at the margin of error of a single proportion. The idea is that the variance of $\widehat{p}$, obtained from a sample of size $n$, is

$\text{Var}(\widehat{p})=\frac{p(1-p)}{n}$

thus, the standard error is

$\sqrt{\frac{p(1-p)}{n}}$

The standard 95% confidence interval, derived from a Gaussian approximation of the binomial distribution is

$\left[\widehat{p}\pm 1.96\sqrt{\frac{\widehat{p}(1-\widehat{p})}{n}}\right]$

The largest value is obtained when p is 1/2, and then we have a worst case confidence interval (an upper bound) which is

$\left[\widehat{p}\pm \frac{1.96}{2\sqrt{n}}\right]\approx\left[\widehat{p}\pm \frac{1}{\sqrt{n}}\right]$

So a margin of error $m$ (i.e. $m=1/\sqrt{n}$) means that $n=1/m^2$. Hence, a 5% margin of error means that n=400, while 2.2% means that n is roughly 2000:
> 1/.022^2
[1] 2066.116
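
Conversely, for a given sample size, the worst-case margin of error can be computed directly; a quick sketch (the margin function is just a name I use here, not something from the survey report):
> margin=function(n) 1.96/(2*sqrt(n))   # worst-case margin of error, p=1/2
> margin(2000)   # close to the 2.2% of the survey
> margin(400)    # about 5%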
Classically, we compare proportions between two samples: surveys at two different dates, surveys in different regions, surveys paid by two different newspapers, etc. But here, we wish to compare proportions within the same sample. This has been considered in an “old” paper published in 1993 in The American Statistician,

It contains nice figures to illustrate the difference between the standard approach,

and the one we would like to study here.

This point is mentioned in Kish’s book, Survey Sampling (thanks Benoit for the reference),


Let $\widehat{p}_1$ and $\widehat{p}_2$ denote the empirical frequencies we have obtained from the sample, based on $n$ observations. Then since

$\text{Var}(\widehat{p}_1)=\frac{p_1(1-p_1)}{n}$ and $\text{Var}(\widehat{p}_2)=\frac{p_2(1-p_2)}{n}$

and

$\text{Cov}(\widehat{p}_1,\widehat{p}_2)=-\frac{p_1 p_2}{n}$

we have

$\text{Var}(\widehat{p}_1-\widehat{p}_2)=\frac{p_1(1-p_1)+p_2(1-p_2)+2p_1p_2}{n}=\frac{(p_1+p_2)-(p_1-p_2)^2}{n}$

Thus, a natural margin of error on the difference between the two proportions is here

$1.96\sqrt{\frac{(p_1+p_2)-(p_1-p_2)^2}{n}}$

which here is about 4 points,
> n=2000
> p1=46.8/100
> p2=42.7/100
> 1.96*sqrt((p1+p2)-(p1-p2)^2)/sqrt(n)
[1] 0.04142327
which is almost exactly the difference we have here! Hence, the probability of reaching such a value, under the assumption that the two true proportions are equal, is quite small (around 2.6%),
> s=sqrt(p1*(1-p1)/n+p2*(1-p2)/n+2*p1*p2/n)
> (p1-p2)/s
[1] 1.939972
> 1-pnorm(p1-p2,mean=0,sd=sqrt((p1+p2)-(p1-p2)^2)/sqrt(n))
[1] 0.02619152

Actually, we can compare the three margins of error we have so far,

  • the upper bound
$\frac{1.96}{2\sqrt{n}}$
  • the “average” one
$1.96\sqrt{\frac{p(1-p)}{n}}$

where

$p=\frac{p_1+p_2}{2}$
  • the more accurate one we just obtained,
$1.96\sqrt{\frac{2p-\delta^2}{n}}$

where $\delta=p_1-p_2$.
> p=seq(0,.5,by=.01)
> ic1=rep(1.96/sqrt(4*n),length(p))
> ic2=1.96*sqrt(p*(1-p))/sqrt(n)
> delta=.01
> ic31=1.96*sqrt(2*p-delta^2)/sqrt(n)
> delta=.2
> ic32=1.96*sqrt(2*p-delta^2)/sqrt(n)
> plot(p,ic32,type="l",col="blue")
> lines(p,ic31,col=”red”)
> lines(p,ic2)
> lines(p,ic1,lty=2)
So on the graph below, the dotted line is the standard upper bound, while the plain black line is the more accurate one when the probability is $p$ (the x-axis). The red line is the true margin of error with a small difference between candidates (1 point), and the blue line with a large difference (20 points).


Remark: an alternative is to consider a chi-square test, comparing the observed frequencies $(\widehat{p}_1,\widehat{p}_2)$ with the expected ones $(p,p)$, where $p$ is the average proportion, i.e. 44.75%. Then

$Q=n\left[\frac{(\widehat{p}_1-p)^2}{p}+\frac{(\widehat{p}_2-p)^2}{p}\right]$

i.e. $Q\approx 3.76$,
> p=(p1+p2)/2
> (x2=n*((p1-p)^2/p+(p2-p)^2/p))
[1] 3.756425
> 1-pchisq(x2,df=1)
[1] 0.05260495
Under the null hypothesis, $Q$ should have a chi-square distribution with one degree of freedom (since the average is fixed here). Here the probability of reaching that level is around 5% (which can be compared with the 2.6% we had before).
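
Note that $Q$ is essentially the square of the Gaussian statistic used above (its denominator uses $2p$ instead of $2p-\delta^2$), which is why the 5% here is roughly twice the 2.6% obtained before, the chi-square test being two-sided. A quick check:
> z=(p1-p2)/sqrt(((p1+p2)-(p1-p2)^2)/n)
> z^2              # close to the chi-square statistic x2 above
> 2*(1-pnorm(z))   # two-sided p-value, roughly twice the 2.6% above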

So finally, I would think that here, stating that there is a “large probability” is not correct…

Too large datasets for regression ? What about subsampling….

Recently, a classmate working in an insurance company told me he had datasets that were too large to run simple regressions (GLMs, which involve optimization issues), and that they were thinking of a reward for the one who would write the best R code (or at least the fastest). My first idea was to use subsampling techniques: 10 regressions on 100,000 observations can take less time than one regression on 1,000,000 observations. And perhaps also provide better results…

  • Time to run a regression, as a function of the number of observations

Here, I generate a dataset as follows

$Y_i\sim\mathcal{P}(\lambda_i)$ with $\lambda_i=\exp\Big(0.2\,X_{5,i}-4\,f_{2,5}(X_{3,i})+X_{1,i}+\mathbf{1}(X_{2,i}=\text{A})-2\,\mathbf{1}(X_{2,i}=\text{B})-5\,\mathbf{1}(X_{2,i}=\text{C})\Big)$

(where $f_{2,5}$ denotes the density of the Beta(2,5) distribution), and we fit

$Y_i\sim\mathcal{P}(\mu_i)$ with $\log\mu_i=\beta_0+s(X_{1,i})+\beta_{X_{2,i}}+\beta_3X_{3,i}+\beta_4X_{4,i}+\beta_5X_{5,i}+\beta_6X_{6,i}+\log E_i$

where $s(\cdot)$ is a spline function (just to make it as general as possible, since in insurance ratemaking, we include continuous covariates that do not influence the claims frequency linearly in the score). Yes, there are also useless variables, including one which is strongly correlated with one that does have an impact in the regression. The code to generate the dataset is simply

> library(mnormt)    # for rmnorm() below
> library(splines)   # for bs() in the regressions
> n=10000
> X1=rexp(n)
> X2=sample(c("A","B","C"),size=n,replace=TRUE)
> X3=runif(n)
> Z=rmnorm(n,c(0,0),matrix(c(1,0.8,.8,1),2,2))
> X4=Z[,1]
> X5=Z[,2]
> X6=X1^2
> E=runif(n)
> lambda=.2*X5-4*dbeta(X3,2,5)+X1+
+ 1*(X2=="A")-2*(X2=="B")-5*(X2=="C")
> Y=rpois(n,exp(lambda))
> base=data.frame(Y,X1,X2,X3,X4,X5,X6,E)

We would like to study the time it takes to run a regression, as a function of the size (i.e. the number of lines $n$) of the dataset.

> system.time( glm(Y~bs(X1)+X2+X3+X4+
+ X5+X6+offset(log(E)),family=poisson,
+ data=base) )
utilisateur     système      écoulé
0.25        0.00        0.25

Here, the time I look at is the last one (the elapsed time). So far the model is rather simple, but it is not the best one I can get. Let us use a stepwise (backward) variable selection,

> system.time( step(glm(Y~bs(X1)+X2+X3+
+ X4+X5+X6+offset(log(E)),family=poisson,
+ data=base)) )
Start:  AIC=2882.1
Y ~ bs(X1) + X2 + X3 + X4 + X5 + X6 + offset(log(E))
Step:  AIC=2882.1
Y ~ bs(X1) + X2 + X3 + X4 + X5 + offset(log(E))
Df Deviance    AIC
<none>        2236.0 2882.1
- X5      1   2240.1 2884.2
- X4      1   2244.1 2888.2
- X3      1   4783.2 5427.3
- X2      2   5311.4 5953.5
- bs(X1)  3   6273.7 6913.8
utilisateur     système      écoulé
1.82        0.03        1.86
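
As a rough sketch of how such timings can be collected (this is not necessarily the exact code used for the figures below; the simul helper is mine and simply wraps the data generation above):
> simul=function(n){
+   X1=rexp(n); X2=sample(c("A","B","C"),size=n,replace=TRUE)
+   X3=runif(n); Z=rmnorm(n,c(0,0),matrix(c(1,.8,.8,1),2,2))
+   X4=Z[,1]; X5=Z[,2]; X6=X1^2; E=runif(n)
+   lambda=.2*X5-4*dbeta(X3,2,5)+X1+1*(X2=="A")-2*(X2=="B")-5*(X2=="C")
+   data.frame(Y=rpois(n,exp(lambda)),X1,X2,X3,X4,X5,X6,E)
+ }
> sizes=c(5000,10000,20000,50000,100000)
> times=sapply(sizes,function(n){
+   b=simul(n)
+   system.time(glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
+     family=poisson,data=b))[3]})
> plot(sizes,times,log="xy",xlab="n",ylab="elapsed time")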

Finally, from the first regression, we have points in black (based on 200 simulated datasets), and with a stepwise procedure, we have the points in red.

i.e. it might look linear (proportional), but if it were linear, then on a log-log scale, we should also have straight lines, with slope 1,

Actually, it looks like a convex function.

The interpretation of that convexity might lead to misinterpretation. On the graph below, on the left, a regression on a dataset twice as big as the previous one (black point) will take less than twice as long to run, while on the right, it will take more than twice as long,

Convexity can simply be interpreted as “too large datasets take time, and too small ones do too…”. Which is a first step: it should be interesting, in some cases, to run several regressions on smaller datasets….

  • Running 100 regressions on 100 lines, or running 1 regression on 10,000 lines ?

Here, we have datasets with $n$ = 200,000 lines. The question is how long it will take if we subdivide the dataset into $k$ subsamples (of equal size), and run $k$ regressions?

> # (the variables above are regenerated with n=200,000 lines)
> k=10   # number of subsamples (k varies in the experiment below)
> nk=trunc(n/k); classe=rep(1:k,each=nk); nt=nk*k
> base=data.frame(Y=Y[1:nt],X1=X1[1:nt],
+ X2=X2[1:nt],X3=X3[1:nt],X4=X4[1:nt],X5=X5[1:nt],
+ X6=X6[1:nt],E=E[1:nt],classe)
> system.time( for(j in 1:k){
+  glm(Y~bs(X1)+X2+X3+X4+X5+
+ X6+offset(log(E)),family=poisson
+ ,data=base,subset=classe==j) })
utilisateur     système      écoulé
1.31        0.00        1.31
> system.time( for(j in 1:k){
+      step(glm(Y~bs(X1)+X2+X3+
+ X4+X5+X6+offset(log(E)),family=
+ poisson,data=base,subset=classe==j)) })
Start:  AIC=183.97
Y ~ bs(X1) + X2 + X3 + X4 + X5 + X6 + offset(log(E))

[…]

  Df Deviance    AIC
<none>        117.15 213.04
- X2      2   250.15 342.04
- X3      1   251.00 344.89
- X4      1   420.63 514.53
- bs(X1)  3   626.84 716.74
utilisateur     système      écoulé
11.97        0.03       12.31

On the graph below, we have the time (y-axis, here on a log scale) it took to run $k$ regressions on samples of size $n/k$, as a function of $k$ (x-axis), including the time it took to run the regression on the dataset of size $n$, which is the concentration of dots on the left (i.e. $k$ = 1), both with the 6 regressors – in black – and with a stepwise procedure – in red. One has to keep in mind that I did not remove the printing option in the stepwise procedure, so it might be difficult to compare the two clouds (black vs. red). Nevertheless, we clearly see that if we run $k$ regressions on samples of size $n/k$, when $k$ is not too large, i.e. less than 10 or 15, it is not longer than the single regression on the $n$ = 200,000 lines.

So here we see that running 100 regressions on 2,000 lines is longer than running 1 regression on 200,000 lines… But maybe we are not comparing things that are actually comparable: what if it takes a bit longer, but strongly improves the quality of our estimators?

  • What about the quality of the output ?

Here, we consider only one dataset, with $n$ = 100,000 lines (just to make it run a bit faster), and $k$ = 20 subsets. Recall that the generated dataset is from

$Y_i\sim\mathcal{P}(\lambda_i)$ with $\lambda_i=\exp\Big(0.2\,X_{5,i}-4\,f_{2,5}(X_{3,i})+X_{1,i}+\mathbf{1}(X_{2,i}=\text{A})-2\,\mathbf{1}(X_{2,i}=\text{B})-5\,\mathbf{1}(X_{2,i}=\text{C})\Big)$

and we fit

$Y_i\sim\mathcal{P}(\mu_i)$ with $\log\mu_i=\beta_0+s(X_{1,i})+\beta_{X_{2,i}}+\beta_3X_{3,i}+\beta_4X_{4,i}+\beta_5X_{5,i}+\beta_6X_{6,i}+\log E_i$

Here, we plot $\widehat{\beta}$, the estimator of one of those regression coefficients, together with a confidence interval, defined as

$\left[\widehat{\beta}\pm 1.96\,\widehat{\text{se}}(\widehat{\beta})\right]$

The light blue segment is the initial estimator, while the blue one is obtained from the stepwise procedure. The grey area represents the estimation on the overall sample, while the $k$ segments on the right are the $k$ estimators (each obtained on a sample of size $n/k$).

We can see that there is much more volatility among those $k$ estimators, but their average (horizontal dotted lines) is not so bad… The true value (i.e. the one used to generate the dataset) is the dotted black horizontal line.
And if we repeat that on 1,000 simulated datasets, we obtain the following distribution for $\widehat{\beta}$ (blue line), so we have an unbiased estimator of our parameter (the vertical line being the true value), here including a stepwise procedure,

But if we add the red curve, i.e. the distribution of the average of the $k$ estimators obtained on the subsamples (the previous one now being the light blue line in the back), we see that taking the average of estimators on subsamples is not bad at all, on the contrary,

and for those who think that the stepwise procedure is a mistake, here is what we get without it,

So what we can see is that running 20 regressions can take (a little) more time (from what we’ve seen earlier) than running only one on the whole dataset… but it provides better estimates. So the tradeoff is not that simple, and maybe running several regressions on subsamples of huge datasets can be a proper alternative.
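
As a minimal sketch of that comparison (assuming the base data frame above, with $n$ = 100,000 lines; looking at the coefficient of X4 is an arbitrary choice, for illustration only):
> k=20
> grp=rep(1:k,length.out=nrow(base))   # assign each line to one of the k subsamples
> coefs=rep(NA,k)
> for(j in 1:k){
+   regj=glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
+     family=poisson,data=base,subset=grp==j)
+   coefs[j]=coef(regj)["X4"] }
> full=glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
+   family=poisson,data=base)
> coef(full)["X4"]   # estimator on the whole dataset
> mean(coefs)        # average of the k estimators on the subsamples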

from two to three…

A short post to give more details about the final remark in the Financial Econometrics course, and more precisely about the formula that can be found in Philip Jorion’s book,

Note that this formula can be found (perhaps written with slight changes) in several papers, e.g. in the following sentence (on the http://www.bis.org/ website),

or the following formula, on documents from the Bank of England website,

I recently published (in French, here) a paper on the Value-at-Risk, including the following graph,

Usually, three times the average over the last 60 trading days is the larger component, but during the financial crisis, it turned out that the daily component was almost three times higher than the average value over the past 60 trading days (this fact was mentioned by Paul Embrechts at a conference on risk measures in Paris).
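
For reference, the capital charge formula alluded to above is usually written along the following lines (a sketch; the exact wording and notation vary across regulatory documents):

$\text{MRC}_t=\max\left(\text{VaR}_{t-1}(99\%),\;k\cdot\frac{1}{60}\sum_{i=1}^{60}\text{VaR}_{t-i}(99\%)\right),\qquad k\geq 3$
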
The interpretation of the multiplicative coefficient k (which is from 2 to 3 in some publications, or which exceeds 3 in others) has been proposed in a paper by Gerhard Stahl, entitled Three Cheers. The idea is to use the Bienaymé-Tchebychev inequality: for a random variable with finite variance,

$\mathbb{P}(|X-\mathbb{E}(X)|>z)\leq\frac{\text{Var}(X)}{z^2}$

Recall that this inequality is simply a corollary of Markov’s inequality,

$\mathbb{P}(|Y|\geq a)\leq\frac{\mathbb{E}(|Y|)}{a}$

or, for any positive increasing function $h$,

$\mathbb{P}(Y\geq a)\leq\frac{\mathbb{E}(h(Y))}{h(a)}$

(taking the function $h(x)=x^2$, applied to $Y=|X-\mathbb{E}(X)|$). This upper bound can be far away from the true probability, see e.g. the Gaussian case below: if $X\sim\mathcal{N}(0,1)$,

$\mathbb{P}(|X|>z)=2\Phi(-z)\leq\frac{1}{z^2}$

 

> z = seq(0,3,by=.01)
> P = 2*(1-pnorm(z))   # true probability P(|X|>z) in the Gaussian case
> U = 1/z^2            # Chebyshev upper bound
> plot(z,P,type="l",lwd=2,col="red",xlab="",ylab="")
> lines(z,U,lwd=2)     # the upper bound, for comparison

The ratio between the two is given below,

> plot(z,U/P,type="l",lwd=2,col="purple",xlab="",ylab="",ylim=c(0,10))

Note that it is possible to interpret the x-axis values as probability values, taking quantiles of the Gaussian distribution,

> plot(pnorm(z),U/P,type="l",lwd=2,col="purple",xlab="",
+ ylab="",ylim=c(0,10),xlim=c(.9,1))
> abline(h=3,lty=2)

The interpretation, in terms of quantiles (see the derivation below), is that the upper bound is about 3 times higher than the true Gaussian quantile at the 99% probability level.
Note that

  • at the 95% level of the $\mathcal{N}(0,1)$ distribution, the ratio between the bound and the true quantile is about 2 (1.92)
  • at the 99% level, the ratio is about 3 (3.04)
  • at the 99.5% level, the ratio is almost 4 (3.88)
  • at the 99.75% level, the ratio is about 5 (5.04)
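
Those ratios can be checked directly, assuming (as derived below) that the quantile bound is $\sigma/\sqrt{2\alpha}$, here for a centred variable with unit variance:
> alpha=c(.05,.01,.005,.0025)
> (1/sqrt(2*alpha))/qnorm(1-alpha)   # about 1.92, 3.04, 3.88 and 5.04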

A more formal explanation is to assume that $X$ is symmetric (centred, with variance $\sigma^2$), and then

$\mathbb{P}(X<-z)=\frac{1}{2}\mathbb{P}(|X|>z)\leq\frac{\sigma^2}{2z^2}$

Thus, if $\frac{\sigma^2}{2z^2}=\alpha$, i.e. $z=\frac{\sigma}{\sqrt{2\alpha}}$, we have an upper bound for the $(1-\alpha)$ Value-at-Risk,

$\text{VaR}_{1-\alpha}\leq\frac{\sigma}{\sqrt{2\alpha}}$

which is an upper bound valid for any centred distribution with finite variance.
If $\alpha=1\%$, then $\frac{1}{\sqrt{2\alpha}}\approx 7.07$, i.e. $\text{VaR}_{99\%}\leq 7.07\,\sigma$. But since $\text{VaR}_{99\%}\approx 2.33\,\sigma$ for a $\mathcal{N}(0,\sigma^2)$ distribution, we have

$\frac{\sigma}{\sqrt{2\alpha}}\approx 3\times 2.33\,\sigma$

and further

$\text{VaR}_{99\%}\leq 3\times\text{VaR}_{99\%}^{\mathcal{N}(0,\sigma^2)}$

i.e. the upper bound is about three times the Gaussian Value-at-Risk, which gives an interpretation of the multiplier 3.

Nikkei’s past experience vs. SP500 (in euros)

Following Michael’s idea (here), I wanted to go further, based on his intuition (and the dataset that he kindly sent me, there). If we consider the two series of the Nikkei index and the SP500 index in euros, we have the following graph,

The code is simply the following (the merging function is here to avoid problems with different trading days: since we look at the index and not the return, it is the simplest way to deal with it).

> library(RODBC)
> base = odbcConnectExcel(
+ "https://perso.univ-rennes1.fr/arthur.charpentier/spx_nky_eurusd.xls", 
+ readOnly = TRUE)
> series1 = sqlQuery(base,query="select * from [Tabelle1$A2:B8837]") # SPX
> series2 = sqlQuery(base,query="select * from [Tabelle1$D2:E8631]") # NKY
> series3 = sqlQuery(base,query="select * from [Tabelle1$G2:H8945]") # EURUSD
> odbcCloseAll()
> series4=merge(series1,series3)
> series4$SPEUR=series4$SPX/series4$EURUSD
> series5=merge(series4,series2)
> x=(as.Date(series5[,1])-as.Date("01/01/0000","%d/%m/%Y"))/365.25
> yl=range(series5[,4])
> xl=c(1975,2010)
> plot(x,series5[,4],axes=FALSE,xlab="",ylab="",type="l",
+ lwd=3,col="red",xlim=xl,ylim=yl)
> axis(1)
> axis(2, col="red")
> par(new=TRUE)
> yl=range(series5[,5])
> plot(x,series5[,5],axes=FALSE,xlab="",ylab="",type="l",
+ lwd=3,col="blue",xlim=xl,ylim=yl)
> axis(4, col="blue")
> mtext("SP500 in Euros", 2, line=2, col="red", cex=1.2)
> mtext("NKY", 4, line=2, col="blue", cex=1.2)

Those two series seem to have a similar pattern, so an idea can be to translate the SP500 series to the left (here by 2,500 trading days, i.e. about ten years),

Interesting, isn’t it? Suppose that we want to forecast (or foresee?) the SP500 in euros for the next 10 years…

People who enjoy charts would have here a nice tool…

Those two series are extremely correlated, with a correlation of 0.9572,

> n=nrow(series5)
> X1=series5[2501:n,4]
> X2=series5[1:(n-2500),5]
> cor(X1,X2)
[1] 0.9572484

But are the two series cointegrated (see here, here or there for material on cointegration)? Well, using the standard procedure, we first have to prove that the two series are integrated. First, let us look at the autocorrelograms,

At first sight, we confirm the economic intuition that those indices should be integrated. Standard tests confirm that intuition,

> acf(X2,lag=1000,col="light green")
> acf(X1,lag=1000,col="light green")
> library(tseries)
> adf.test(X1)
        Augmented Dickey-Fuller Test
data:  X1 
Dickey-Fuller = -1.0768, Lag order = 17, p-value = 0.9264
alternative hypothesis: stationary 
> adf.test(X2)
        Augmented Dickey-Fuller Test
data:  X2 
Dickey-Fuller = -1.2905, Lag order = 17, p-value = 0.8788
alternative hypothesis: stationary

But if we want to go further, we have to find the cointegration relationship between the two series. From a heuristic point of view, a linear regression should be a good proxy,

> reg=lm(X1~X2)
> plot(residuals(reg))

> acf(residuals(reg),lag=1000,col="light green")

> adf.test(residuals(reg))
        Augmented Dickey-Fuller Test
data:  residuals(reg) 
Dickey-Fuller = -5.176, Lag order = 17, p-value = 0.01
alternative hypothesis: stationary 
Message d'avis :
In adf.test(residuals(reg)) : p-value smaller than printed p-value
> pp.test(residuals(reg))
        Phillips-Perron Unit Root Test
data:  residuals(reg) 
Dickey-Fuller Z(alpha) = -46.9775, Truncation lag parameter = 11,
p-value = 0.01
alternative hypothesis: stationary 
Message d'avis :
In pp.test(residuals(reg)) : p-value smaller than printed p-value

When we look at the autocorrelation function, it looks like the residuals are indeed stationary.
This is – more or less – the idea of the Engle-Granger two-step procedure. But actually, we cannot directly use Dickey-Fuller’s test on those residuals: since they come from an estimated cointegration regression, the usual critical values are not valid. This was shown in Phillips and Ouliaris (1990), who also proposed a test (see e.g. here),

> library(tseries); po.test(cbind(X1,X2))
        Phillips-Ouliaris Cointegration Test
data:  cbind(X1, X2) 
Phillips-Ouliaris demeaned = -53.1766, Truncation lag parameter = 57,
p-value = 0.01
Message d'avis :
In po.test(cbind(X1, X2)) : p-value smaller than printed p-value
Another similar function can be found in R
> library(urca)
> summary(ca.po(cbind(X1,X2)))
######################################## 
# Phillips and Ouliaris Unit Root Test # 
######################################## 
Test of type Pu 
detrending of series none 
Call:
lm(formula = z[, 1] ~ z[, -1] - 1)
Value of test-statistic is: 45.2032 
Critical values of Pu are:
                  10pct    5pct    1pct
critical values 20.3933 25.9711 38.3413

Thus, we have to admit that those series are cointegrated.
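
Once cointegration is accepted, the second step of the Engle-Granger procedure mentioned above would be to fit an error-correction model; a minimal sketch, using the residuals of the regression above as the error-correction term:
> dX1=diff(X1)
> dX2=diff(X2)
> ect=head(residuals(reg),-1)   # lagged error-correction term
> ecm=lm(dX1~dX2+ect)
> summary(ecm)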

Based on that idea, it is possible to model the stationary component, and forecast it for the next ten years, based on the assumption that we know the behavior of one time series. Hence, if we add the confidence interval due to the stationary component uncertainty, we have the following graph,

 Of course, again, only uncertainty related to the stationary process is considered here….
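
As a minimal sketch of that forecasting idea (assuming a simple AR(1) structure for the stationary component, which is only an illustration):
> u=residuals(reg)
> fit=arima(u,order=c(1,0,0))   # a simple AR(1) for the stationary component
> h=2500                        # roughly ten years of trading days
> fc=predict(fit,n.ahead=h)
> # given future values X2future of the translated Nikkei (known here, since that
> # series was shifted back by ten years), the forecast of the SP500 in euros would
> # be coef(reg)[1]+coef(reg)[2]*X2future+fc$pred, with a band fc$pred +/- 1.96*fc$se
> # for the uncertainty of the stationary component only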

Séminaire Probabilité et Statistique, UBO, Brest

Talk at the Probability and Statistics seminar at the Université de Bretagne Occidentale, in Brest, on Tuesday May 5th (and not Wednesday May 6th, as initially announced), at 14:00 (in 10 days), on “multivariate extremes”. Slides can be found here.

The talk will give a detailed introduction to multivariate extremes and related concepts. Then the case of Archimedean copulas will be fully described (following the paper written with Johan Segers).

[04/05/2009]: some applications in risk management will be shown at the end of the talk, as well as some new things on spatial correlation.

and in order to illustrate tail convergence of Archimedean copulas, I have uploaded two animations, with tail independence below,

with tail dependence (or asymptotic dependence),