Tag Archives: R-english

a short word on profile likelihood

Profile likelihood is an interesting technique to visualize and compute confidence intervals for estimators (see e.g. Venzon & Moolgavkar (1988)). As we will use it here, we will plot

http://freakonometrics.blog.free.fr/public/perso5/proflike01.gif

But more generally, it is possible to consider

http://freakonometrics.blog.free.fr/public/perso5/profilik06.gif

where http://freakonometrics.blog.free.fr/public/perso5/profilik03.gif. Then (under suitable regularity conditions)

http://freakonometrics.blog.free.fr/public/perso5/profilik05.gif

which can be used to derive confidence intervals.
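More explicitly, this is the classical likelihood-ratio construction: an (asymptotic) 95% confidence interval for the parameter of interest $\theta$ is

$$\left\{\theta:\ \log\mathcal{L}_p(\theta)\ \geq\ \log\mathcal{L}(\widehat{\theta})-\frac{1}{2}\,q_{\chi^2(1)}(0.95)\right\},$$

where $\log\mathcal{L}_p$ denotes the profile log-likelihood; this is exactly the qchisq(p=.95,df=1)/2 threshold used in the code below.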

> base1=read.table(
+ "http://freakonometrics.free.fr/danish-univariate.txt",
+ header=TRUE)
> library(evir)
> X=base1$Loss.in.DKM
> u=5

The function to draw the profile likelihood for the tail index parameter is then

> Y=X[X>u]-u
> loglikelihood=function(xi,beta){
+ sum(log(dgpd(Y,xi,mu=0,beta))) }
> XIV=(1:300)/100;L=rep(NA,300)
> for(i in 1:300){
+ XI=XIV[i]
+ profilelikelihood=function(beta){
+ -loglikelihood(XI,beta) }
+ L[i]=-optim(par=1,fn=profilelikelihood)$value }
> plot(XIV,L,type="l")

It is possible to use that profile likelihood function to derive a confidence interval,

> PL=function(XI){
+ profilelikelihood=function(beta){
+ -loglikelihood(XI,beta) }
+ return(optim(par=1,fn=profilelikelihood)$value)}
> (OPT=optimize(f=PL,interval=c(0,3)))
$minimum
[1] 0.6315989

$objective
[1] 754.1115
> up=OPT$objective
> abline(h=-up)
> abline(h=-up-qchisq(p=.95,df=1)/2,col="red")
> I=which(L>=-up-qchisq(p=.95,df=1)/2)
> lines(XIV[I],rep(-up-qchisq(p=.95,df=1)/2,length(I)),
+ lwd=5,col="red")
> abline(v=range(XIV[I]),lty=2,col="red")

The same graph can be obtained directly with the following code

> library(ismev)
> gpd.profxi(gpd.fit(X,5),xlow=0,xup=3)

On linear models with no constant and R2

In econometrics courses we always tell our students that “if you fit a linear model with no constant, then you might have trouble. For instance, you might have a negative R-squared”. So I tried to find datasets on the internet such that, when we run a linear regression, we actually obtain a negative R-squared. I generated hundreds of random datasets that should exhibit such a property, in R. With no success. So, to be more specific, let me explain what might happen if we do not include a constant in a linear model. Consider the following dataset, where points are on a straight line, with a negative slope, far from the origin, symmetric with respect to the first diagonal.

> x=1:3
> y=3:1
> plot(x,y)

Points are on a straight line, so it is actually possible to get a perfect linear fit. But only if we include a constant in our model. This is related to the fact that the correlation between our two variates is -1,

> cor(x,y)
[1] -1

The least-squares problem is here

http://freakonometrics.blog.free.fr/public/perso5/olssc01b.gif

i.e. the estimate of the slope is

http://freakonometrics.blog.free.fr/public/perso5/olcsc02.gif

Numerically, we obtain

> (b=sum(x*y)/sum(x^2))
[1] 0.7142857

which is the actual slope on the illustration above. If we compute the sum of squares of errors (as a function of the slope), we have here

> ssr=function(b){sum((y-b*x)^2)}
> SSR=Vectorize(ssr)
> B=seq(-1,3,by=.1)
> plot(B,SSR(B),ylim=c(0,ssr(3)),cex=.6,type="b")

so the value we have computed is actually the minimum of the sum of squared errors. But note that this sum of squared errors always exceeds the total sum of squares (the red line on the graph above),

> ssr(b)
[1] 6.857143
> sum((y-mean(y))^2)
[1] 2

Recall that the “coefficient of determination“, denoted http://freakonometrics.blog.free.fr/public/perso5/R2.gif, is defined as

http://freakonometrics.blog.free.fr/public/perso5/olsnc04.gif

i.e.

> 1-ssr(b)/sum((y-mean(y))^2)
[1] -2.428571

which is negative. It is also sometimes defined as “the square of the sample correlation coefficient between the outcomes and their predicted values“. Here it would be related to

> cor(b*x,y)
[1] -1

so we would have a unit http://freakonometrics.blog.free.fr/public/perso5/R2.gif . So obviously, using the http://freakonometrics.blog.free.fr/public/perso5/R2.gif in a model without a constant would give odd results. But the weird part is that if we run that regression with R, we get

> summary(lm(y~0+x))

Call:
lm(formula = y ~ 0 + x)

Residuals:
1       2       3
2.2857  0.5714 -1.1429

Coefficients:
Estimate Std. Error t value Pr(>|t|)
x   0.7143     0.4949   1.443    0.286

Residual standard error: 1.852 on 2 degrees of freedom
Multiple R-squared: 0.5102,	Adjusted R-squared: 0.2653
F-statistic: 2.083 on 1 and 2 DF,  p-value: 0.2857

Here, the estimation is correct. But the http://freakonometrics.blog.free.fr/public/perso5/R2.gif we obtain tells us that the model is not that bad… So if anyone knows what R computes, I’d be glad to know. The value given by R (thanks Vincent for asking me to look carefully at the R source code) is obtained using Pythagoras’s theorem to compute the total sum of squares,

> sum((b*x)^2)/(sum((b*x)^2)+sum((y-b*x)^2))
[1] 0.5102041
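This is indeed how summary.lm() computes the R-squared when the model has no intercept: the residual sum of squares is compared with the raw (non-centered) sum of squares of the response. A quick check on the toy dataset above,

reg0=lm(y~0+x)
summary(reg0)$r.squared              # 0.5102041, the value reported above
1-sum(residuals(reg0)^2)/sum(y^2)    # same quantity, using the Pythagorean decomposition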

So be careful: the http://freakonometrics.blog.free.fr/public/perso5/R2.gif might look good, but be meaningless!

Tail index estimation

These data were collected at Copenhagen Reinsurance and comprise 2167 fire losses over the period 1980 to 1990. They have been adjusted for inflation to reflect 1985 values and are expressed in millions of Danish Krone. Note that it is also possible to work with a multivariate version of the same data, where the total claim has been divided into a building loss, a loss of contents and a loss of profits.

> base1=read.table(
+ "http://freakonometrics.free.fr/danish-univariate.txt",
+ header=TRUE)
> base2=read.table(
+ "http://freakonometrics.free.fr/danish-multivariate.txt",
+ header=TRUE)

Consider here the first dataset (we deal – so far – with univariate extremes),

> X=base1$Loss.in.DKM
> D=as.Date(as.character(base1$Date),"%m/%d/%Y")
> plot(D,X,type="h")

The graph is the following,

A natural idea is then to plot

http://freakonometrics.hypotheses.org/files/2015/12/hill01.gif

i.e.

> Xs=sort(X)
> logXs=rev(log(Xs))
> n=length(X)
> plot(log(Xs),log((n:1)/(n+1)))

Points are on a straight line here. The slope can be obtained using a linear regression,

> B=data.frame(X=log(Xs),Y=log((n:1)/(n+1)))
> reg=lm(Y~X,data=B)
> summary(reg)

Call:
lm(formula = Y ~ X, data = B)

Residuals:
Min       1Q   Median       3Q      Max
-0.59999 -0.00777  0.00878  0.02461  0.20309

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.089442   0.001572   56.88   <2e-16 ***
X           -1.382181   0.001477 -935.55   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.04928 on 2165 degrees of freedom
Multiple R-squared: 0.9975,	Adjusted R-squared: 0.9975
F-statistic: 8.753e+05 on 1 and 2165 DF,  p-value: < 2.2e-16

> reg=lm(Y~X,data=B[(n-500):n,])
> summary(reg)

Call:
lm(formula = Y ~ X, data = B[(n - 500):n, ])

Residuals:
Min       1Q   Median       3Q      Max
-0.48502 -0.02148 -0.00900  0.01626  0.35798

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.186188   0.010033   18.56   <2e-16 ***
X           -1.432767   0.005105 -280.68   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.07751 on 499 degrees of freedom
Multiple R-squared: 0.9937,	Adjusted R-squared: 0.9937
F-statistic: 7.878e+04 on 1 and 499 DF,  p-value: < 2.2e-16

> reg=lm(Y~X,data=B[(n-100):n,])
> summary(reg)

Call:
lm(formula = Y ~ X, data = B[(n - 100):n, ])

Residuals:
Min       1Q   Median       3Q      Max
-0.33396 -0.03743  0.02279  0.04754  0.62946

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.67377    0.06777   9.942   <2e-16 ***
X           -1.58536    0.02240 -70.772   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.1299 on 99 degrees of freedom
Multiple R-squared: 0.9806,	Adjusted R-squared: 0.9804
F-statistic:  5009 on 1 and 99 DF,  p-value: < 2.2e-16

The slope here is somehow related to the tail index of the distribution. Consider some heavy-tailed distribution, i.e. http://freakonometrics.hypotheses.org/files/2015/12/hill03.gif, so that http://freakonometrics.hypotheses.org/files/2015/12/hill27.gif, where http://freakonometrics.hypotheses.org/files/2015/12/hill28.gif is some slowly varying function. Equivalently, there exists a slowly varying function http://freakonometrics.hypotheses.org/files/2015/12/hill29.gif such that http://freakonometrics.hypotheses.org/files/2015/12/hill30.gif. Then

http://freakonometrics.hypotheses.org/files/2015/12/hill33.gif

i.e. since a natural estimator for http://freakonometrics.hypotheses.org/files/2015/12/hill35.gif is the order statistic http://freakonometrics.hypotheses.org/files/2015/12/hill36.gif, the slope of the straight line is the opposite of tail index http://freakonometrics.hypotheses.org/files/2015/12/hill98.gif. The estimator of the slope is (considering only the http://freakonometrics.hypotheses.org/files/2015/12/hill99.gif largest observations)

http://freakonometrics.hypotheses.org/files/2015/12/hill39.gif

Hill‘s estimator is based on the assumption that the denominator above is almost 1 (which means that  http://freakonometrics.hypotheses.org/files/2015/12/hill15.gif, as http://freakonometrics.hypotheses.org/files/2015/12/hill16.gif), i.e.

http://freakonometrics.hypotheses.org/files/2015/12/hill02.gif
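As a quick illustration (a minimal sketch; the number k of upper order statistics, here 100, is an arbitrary choice), Hill's estimator of the tail index can be computed by hand from the Danish losses loaded above,

k=100
mean(log(Xs[(n-k+1):n]))-log(Xs[n-k])   # Hill estimate based on the k largest observations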

Note that, if http://freakonometrics.hypotheses.org/files/2015/12/hill14.gif, but not too fast, i.e. http://freakonometrics.hypotheses.org/files/2015/12/hill15.gif as http://freakonometrics.hypotheses.org/files/2015/12/hill16.gif, then http://freakonometrics.hypotheses.org/files/2015/12/hill12.gif (one can even get http://freakonometrics.hypotheses.org/files/2015/12/hill11.gif with stronger convergence assumptions). Further

http://freakonometrics.hypotheses.org/files/2015/12/hill04.gif

Based on that (asymptotic) distribution, it is possible to get an (asymptotic) confidence interval for http://freakonometrics.hypotheses.org/files/2015/12/hill98.gif

> xi=1/(1:n)*cumsum(logXs)-logXs
> xise=1.96/sqrt(1:n)*xi
> plot(1:n,xi,type="l",ylim=range(c(xi+xise,xi-xise)),
+ xlab="",ylab="",)
> polygon(c(1:n,n:1),c(xi+xise,rev(xi-xise)),
+ border=NA,col="lightblue")
> lines(1:n,xi+xise,col="red",lwd=1.5)
> lines(1:n,xi-xise,col="red",lwd=1.5)
> lines(1:n,xi,lwd=1.5)
> abline(h=0,col="grey")
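For comparison, the hill() function of the evir package should produce essentially the same Hill plot; a one-line sketch:

library(evir)
hill(X,option="xi")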

It is also possible to work with http://freakonometrics.hypotheses.org/files/2015/12/hill06.gif, then http://freakonometrics.hypotheses.org/files/2015/12/hill05.gif. And similarly http://freakonometrics.hypotheses.org/files/2015/12/hill13.gif as http://freakonometrics.hypotheses.org/files/2015/12/hill14.gif (and again http://freakonometrics.hypotheses.org/files/2015/12/hill10.gif with additional assumptions on the rate of convergence), and

http://freakonometrics.hypotheses.org/files/2015/12/hill09.gif

(obtained using the delta-method). Again, we can use that result to derive (asymptotic) confidence intervals

> alpha=1/xi
> alphase=1.96/sqrt(1:n)/xi
> YL=c(0,3)
> plot(1:n,alpha,type="l",ylim=YL,xlab="",ylab="",)
> polygon(c(1:n,n:1),c(alpha+alphase,rev(alpha-alphase)),
+ border=NA,col="lightblue")
> lines(1:n,alpha+alphase,col="red",lwd=1.5)
> lines(1:n,alpha-alphase,col="red",lwd=1.5)
> lines(1:n,alpha,lwd=1.5)
> abline(h=0,col="grey")

The Dekkers-Einmahl-de Haan estimator is

http://freakonometrics.hypotheses.org/files/2015/12/hill25.gif

where

http://freakonometrics.hypotheses.org/files/2015/12/hill21.gif

Then (given again conditions on the speed of convergence i.e. http://freakonometrics.hypotheses.org/files/2015/12/hill14.gif, with http://freakonometrics.hypotheses.org/files/2015/12/hill15.gif as http://freakonometrics.hypotheses.org/files/2015/12/hill16.gif),

http://freakonometrics.hypotheses.org/files/2015/12/hill42.gif

Finally, Pickands‘ estimator

http://freakonometrics.hypotheses.org/files/2015/12/hill26.gif

it is possible to prove that, as http://freakonometrics.hypotheses.org/files/2015/12/hill14.gif,

http://freakonometrics.hypotheses.org/files/2015/12/hill41.gif

Here the code is

> Xs=rev(sort(X))
> xi=1/log(2)*log( (Xs[seq(1,length=trunc(n/4),by=1)]-
+ Xs[seq(2,length=trunc(n/4),by=2)])/
+ (Xs[seq(2,length=trunc(n/4),by=2)]-Xs[seq(4,
+ length=trunc(n/4),by=4)]) )
> xise=1.96/sqrt(seq(1,length=trunc(n/4),by=1))*
+sqrt( xi^2*(2^(xi+1)+1)/((2*(2^xi-1)*log(2))^2))
> plot(seq(1,length=trunc(n/4),by=1),xi,type="l",
+ ylim=c(0,3),xlab="",ylab="",)
> polygon(c(seq(1,length=trunc(n/4),by=1),rev(seq(1,
+ length=trunc(n/4),by=1))),c(xi+xise,rev(xi-xise)),
+ border=NA,col="lightblue")
> lines(seq(1,length=trunc(n/4),by=1),
+ xi+xise,col="red",lwd=1.5)
> lines(seq(1,length=trunc(n/4),by=1),
+ xi-xise,col="red",lwd=1.5)
> lines(seq(1,length=trunc(n/4),by=1),xi,lwd=1.5)
> abline(h=0,col="grey")

It is also possible to use maximum likelihood techniques to fit a GPD distribution over a high threshold.

> library(evd)
> library(evir)
> gpd(X,5)
$n
[1] 2167

$threshold
[1] 5

$p.less.thresh
[1] 0.8827873

$n.exceed
[1] 254

$method
[1] "ml"

$par.ests
xi      beta
0.6320499 3.8074817

$par.ses
xi      beta
0.1117143 0.4637270

$varcov
[,1]        [,2]
[1,]  0.01248007 -0.03203283
[2,] -0.03203283  0.21504269

$information
[1] "observed"

$converged
[1] 0

$nllh.final
[1] 754.1115

attr(,"class")
[1] "gpd"

or equivalently (or almost)

> gpd.fit(X,5)
$threshold
[1] 5

$nexc
[1] 254

$conv
[1] 0

$nllh
[1] 754.1115

$mle
[1] 3.8078632 0.6315749

$rate
[1] 0.1172127

$se
[1] 0.4636270 0.1116136

The interest of the latter function is that it is possible to visualize the profile likelihood of the tail index,

> gpd.profxi(gpd.fit(X,5),xlow=0,xup=3)

or

> gpd.profxi(gpd.fit(X,20),xlow=0,xup=3)

Hence, it is possible to plot the maximum likelihood estimator of the tail index, as a function of the threshold (including a confidence interval),

> GPDE=Vectorize(function(u){gpd(X,u)$par.ests[1]})
> GPDS=Vectorize(function(u){
+ gpd(X,u)$par.ses[1]})
> u=c(seq(2,10,by=.5),seq(11,25))
> XI=GPDE(u)
> XIS=GPDS(u)
> plot(u,XI,ylim=c(0,2))
> segments(u,XI-1.96*XIS,u,XI+
+ 1.96*XIS,lwd=2,col="red")
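Note that the evir package (loaded above) has a shape() function which is meant to produce this kind of threshold-stability plot directly; a one-line sketch, keeping the default settings (which may need tuning):

shape(X)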

Finally, it is possible to use block-maxima techniques.

> gev.fit(X)
$conv
[1] 0

$nllh
[1] 3392.418

$mle
[1] 1.4833484 0.5930190 0.9168128

$se
[1] 0.01507776 0.01866719 0.03035380

The estimator of the tail index is here the last coefficient, on the right.
Since it is rather difficult to install packages in classrooms, here is the source of the R code used here (to fit a GPD on exceedances)

> source("http://freakonometrics.blog.free.fr/public/code/gpd.R")

Next time, we will discuss how to use those estimators.

It is “simply” the average value

For some obscure reasons, simple things are usually supposed to be simple. Recently, on the internet, I saw a lot of posts on the “average time in which you hold a stock“, and two rather different values are mentioned

  • Take any stock in the United States. The average time in which you hold a stock is – it’s gone up from 20 seconds to 22 seconds in the last year” (Michael Hudson on http://www.telegraph.co.uk/) or “The founder of Tradebot, in Kansas City, Mo., told students in 2008 that his firm typically held stocks for 11 seconds” (on http://www.nytimes.com/) among many others
  • Based on the NYSE index data, the mean duration of holding period by US investors was around 7 years in 1940. This stayed the same for the next 35 years.  The average holding period had fallen to under 2 years by the time of the 1987 crash. By the turn of the century it had fallen to below one year. It was around 7 months by 2007” (on http://topforeignstocks.com/ see also the graph below) or “Two-thirds [of the managers of more than 800 institutional funds interviewed in a study] had higher turnover than they predicted […] Even though most are judged by performance over three-year horizons, their average holding period was about 17 months, and 19% of the managers held the typical stock for one year or less” (mentioned on http://online.wsj.com/) again among many others

How come, on the one hand, some people talk about less than 20 sec. for the “average time in which you hold a stock“, and on the other, around a year? How can we have such a difference? We are talking about an average time here, not a rare event probability…

To understand what might be wrong, consider the following case, with a market and two stocks: one is kept over a year (52 weeks) while the other is traded – and exchanged – every week (52 times per year). What is the “average time in which you hold a stock”? Is it

  • 26.5 weeks ? the average time for the first stock is 52 weeks, while it is 1 for the second one, i.e. 53 over 2
  • 1.96 weeks ? over a year the first stock has been traded once, while it was exchanged 52 times for the second one, i.e. 104 over 53 (total time over the total number of transactions)

Obviously, there is a selection bias in that study (see here for an illustration of that concept, in French). In order to get a better understanding, consider the following simple model, with a large number of simulated stocks. At each transaction, they can be held by 3 types of investors,

  • with probability 70%, hold – on average – for 20 sec.
  • with probability 20%, hold – on average – for 15 days
  • with probability 10%, hold – on average – for 10 years

As claimed by Warren Buffett, “my favorite time frame for holding a stock is forever“, so it might not be absurd to consider investors who keep a stock for a long period of time. Assume further that the time frame for holding a stock is exponentially distributed (the rate depending on the kind of investor). Assume that those stocks are observed during a period of time of 20 years (which might sound reasonable). Several techniques can be used to estimate the “average time in which you hold a stock

  • The first one is to calculate the mean, per stock, of the holding time, and to consider the average over all the stocks. Maybe it would be a good idea to exclude the last observation (since data were censored),
  • The second one is to divide the (total) period of time by the (total) number of investors that hold the stock during that time frame (or number of transactions)
  • A third idea might be to use the first method, but instead of removing the last one, to use an estimator of the mean based on Kaplan-Meier estimate
  • A fourth idea is to look at what happened at a specific date (say after 10 years), i.e. which investor had the stock, and how long he kept it.

The code to generate that process is the following (the snippet sits inside a loop over simulated stocks, indexed by s; see the sketch below)

> set.seed(1)
> invest=sample(size=ns,c("A","B","C"),
+ prob=c(.7,.2,.1),replace=TRUE)
> lambda=(invest=="A")*20/(365*24*60*60)+
+        (invest=="B")*15/365+
+        (invest=="C")*10
> E=rexp(ns,rate=1/lambda)
> T=cumsum(E)
> T=T[T<20]
> plot(c(T,50),0:length(T),type="s",xlim=c(0,20),col="blue")

with the following trajectory for the number of investors that held that specific stock between time 0 and time 20.
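Note that this snippet is only the core of the simulation: the number of simulated stocks, the number ns of potential transactions per stock, and the storage vectors M1, N2, D2, M3 and M4 are not shown. A minimal sketch of that missing setup (all values below are arbitrary assumptions, not taken from the post) could be

library(survival)   # for Surv() and survfit(), used in method 3 below
set.seed(1)
nsim=1000           # number of simulated stocks (assumption)
ns=10000            # number of potential transactions per stock (assumption)
M1=M3=M4=N2=D2=rep(NA,nsim)
for(s in 1:nsim){
invest=sample(size=ns,c("A","B","C"),prob=c(.7,.2,.1),replace=TRUE)
lambda=(invest=="A")*20/(365*24*60*60)+(invest=="B")*15/365+(invest=="C")*10
E=rexp(ns,rate=1/lambda)
T=cumsum(E)
T=T[T<20]
# methods 1 to 4 below are then computed here, inside the loop,
# and stored in M1[s], N2[s], D2[s], M3[s] and M4[s]
}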

Then, the different techniques are the following,

# method 1
> E1=diff(T)
> m1=mean(E1)
> M1[s]=m1

for the first one (means of time length, per stock),

# method 2
> if(length(T)>1){
+ n2=length(T)-1
+ d2=T[length(T)]-T[1]
+ N2[s]=n2; D2[s]=d2
+ }

for the second one (time length and number of transactions),

# method 3
> T3=c(T,20)
> C3=c(rep(0,length(T)-1),1)
> km=survfit(Surv(diff(T3), 1-C3)~1)
> m3=summary(km,rmean='individual')$table[5]
> M3[s]=m3

for the third one (based on a prediction of the expected mean, from Kaplan-Meier estimate) and

# method 4
> T0=c(0,T,20)
> m4=min(T0[T0>10])-max(T0[T0<10])
> M4[s]=m4

for the fourth one (based on what happened at time 10). Using Monte Carlo simulations, we get very different quantities, that can all be interpreted as the “average time in which you hold a stock”,

> sum(D2,na.rm=TRUE)/sum(N2,na.rm=TRUE)
[1] 0.3692335
> mean(M1,na.rm=TRUE)
[1] 0.5469591
> mean(M3,na.rm=TRUE)
[1] 1.702908
> mean(M4,na.rm=TRUE)
[1] 12.40229

If we change the probabilities (and assume that high frequency investors are much more numerous than long-term ones), e.g.

> invest=sample(size=ns,c("A","B","C"),
+ prob=c(.9,.09,.01),replace=TRUE)

then the first three estimates are rather different. But not the last one.

> sum(D2,na.rm=TRUE)/sum(N2,na.rm=TRUE)
[1] 0.04072227
> mean(M1,na.rm=TRUE)
[1] 0.06393767
> mean(M3,na.rm=TRUE)
[1] 0.2504322
> mean(M4,na.rm=TRUE)
[1] 12.05508

So I have to confess that the “average time in which you hold a stock“ can be almost anything from 10 sec. to 10 years: it clearly depends on the way the average is calculated. The second point is that if the proportion of high frequency trading is extremely high, it should not affect the last one (which is, from my point of view, the most interesting one, and might also be improved by taking the censoring into account). So I guess people should be careful when discussing such quantities… And if anyone is willing to share data on that topic, I’d be glad to look at them…

Even odds

This evening, I found a nice probabilistic puzzle on http://www.futilitycloset.com/: “A bag contains 16 billiard balls, some white and some black. You draw two balls at the same time. It is equally likely that the two will be the same color as different colors. What is the proportion of colors within the bag?”
To be honest, I did not understand the answer given on the blog, but if we write it down, we want to solve

http://freakonometrics.blog.free.fr/public/perso5/futil-01.gif

Let us count: if http://freakonometrics.blog.free.fr/public/perso5/futil-04.gif is the total number of balls, and if http://freakonometrics.blog.free.fr/public/perso5/futil-03.gif is the number of white balls http://freakonometrics.blog.free.fr/public/perso5/futil-05.gif, then

http://freakonometrics.blog.free.fr/public/perso5/cccccwwwwwww.gif

I.e. we want to solve a polynomial equation (of order 2) in http://freakonometrics.blog.free.fr/public/perso5/futil-07.gif, or to be more precise, in  http://freakonometrics.blog.free.fr/public/perso5/futil-00.gif

http://freakonometrics.blog.free.fr/public/perso5/futil-06.gif
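Written explicitly (denoting by $n$ the total number of balls and by $k$ the number of white ones), the condition reads

$$\frac{k(k-1)+(n-k)(n-k-1)}{n(n-1)}=\frac{1}{2}\quad\Longleftrightarrow\quad k^2-nk+\frac{n(n-1)}{4}=0,$$

whose roots are $k=\frac{n\pm\sqrt{n}}{2}$; hence a solution exists only when $n$ is a perfect square.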

If http://freakonometrics.blog.free.fr/public/perso5/futil-04.gif is equal to 16, then http://freakonometrics.blog.free.fr/public/perso5/futil-03.gif is either 6 or 10. It can be visualized below

> balls=function(n=16){
+ NB=rep(NA,n)
+ for(k in 2:(n-2)){
+ NB[k]=(k*(k-1)+(n-k)*(n-k-1))
+ }
+ k=which(NB==n*(n-1)/2)
+ if(length(k)>0){
+ plot(1:n,NB,type="b")
+ abline(h=n*(n-1)/2,col="red")
+ points((1:n)[k],NB[k],pch=19,col="red")}
+ return((1:n)[k])}
> balls()
[1]  6 10

But more generally, we can seek other http://freakonometrics.blog.free.fr/public/perso5/futil-04.gif‘s and other pairs of solutions to such a problem. I am not good at arithmetic, so let us run some code. And what we get is quite nice: if http://freakonometrics.blog.free.fr/public/perso5/futil-10.gif admits a pair of solutions, then http://freakonometrics.blog.free.fr/public/perso5/futil-10.gif is the square of another integer, say http://freakonometrics.blog.free.fr/public/perso5/futil-13.gif. Further, the difference between http://freakonometrics.blog.free.fr/public/perso5/futil-11.gif and http://freakonometrics.blog.free.fr/public/perso5/futil-12.gif is precisely http://freakonometrics.blog.free.fr/public/perso5/futil-13.gif. And http://freakonometrics.blog.free.fr/public/perso5/futil-12.gif will be one of the answers when the total number of balls is http://freakonometrics.blog.free.fr/public/perso5/futil-20.gif. Thus, recursively, it is extremely simple to get all possible answers. Below, we have http://freakonometrics.blog.free.fr/public/perso5/futil-04.gif, http://freakonometrics.blog.free.fr/public/perso5/futil-03.gif, http://freakonometrics.blog.free.fr/public/perso5/futil-21.gif and the difference between http://freakonometrics.blog.free.fr/public/perso5/futil-21.gif and http://freakonometrics.blog.free.fr/public/perso5/futil-03.gif,

> for(s in 4:1000){
+ b=balls(s)
+ if(length(b)>0) print(c(s,b,diff(b)))
+ }
[1] 9 3 6 3
[1] 16  6 10  4
[1] 25 10 15  5
[1] 36 15 21  6
[1] 49 21 28  7
[1] 64 28 36  8
[1] 81 36 45  9
[1] 100  45  55  10
[1] 121  55  66  11
[1] 144  66  78  12
[1] 169  78  91  13
[1] 196  91 105  14
[1] 225 105 120  15
[1] 256 120 136  16
[1] 289 136 153  17
[1] 324 153 171  18
[1] 361 171 190  19
[1] 400 190 210  20
[1] 441 210 231  21
[1] 484 231 253  22
[1] 529 253 276  23
[1] 576 276 300  24
[1] 625 300 325  25
[1] 676 325 351  26
[1] 729 351 378  27
[1] 784 378 406  28
[1] 841 406 435  29
[1] 900 435 465  30
[1] 961 465 496  31

Thus, given http://freakonometrics.blog.free.fr/public/perso5/futil-22.gif, consider an urn with http://freakonometrics.blog.free.fr/public/perso5/futil-23.gif balls. We draw two balls at the same time. It is equally likely that the two will be the same color as different colors. Then the numbers of balls of each color within the bag are respectively

http://freakonometrics.blog.free.fr/public/perso5/futil-24.gif

Finally, observe that the http://freakonometrics.blog.free.fr/public/perso5/futil-11.gif‘s are well known, from Pascal’s triangle,

also known as triangular numbers,

http://freakonometrics.blog.free.fr/public/perso5/tr-pascal.gif

Maths can be magic, sometimes…

MAT8886 Extremes and sums (of i.i.d. random variables)

Yesterday, we briefly discussed sums and maxima of i.i.d. random variables using the concept of subexponential distributions. Today, we will introduce the concept of regular variation: a positive function is said to be regularly varying (at infinity), denoted http://freakonometrics.blog.free.fr/public/perso5/subexp-30.gif, for some http://freakonometrics.blog.free.fr/public/perso5/subexp-31.gif, if

http://freakonometrics.blog.free.fr/public/perso5/subexp-33.gif
for all http://freakonometrics.blog.free.fr/public/perso5/subexo_34.gif. And this concept can be related to sums and maxima (see section 6.2.6 in Embrechts et al. (1997)). Consider i.i.d. positive random variables http://freakonometrics.blog.free.fr/public/perso5/subsexp-01.gif: let http://freakonometrics.blog.free.fr/public/perso5/subexp-2.gif and http://freakonometrics.blog.free.fr/public/perso5/subexp-3.gif. Then it can be shown easily that

  • http://freakonometrics.blog.free.fr/public/perso5/subexp-20.gif if and only if

http://freakonometrics.blog.free.fr/public/perso5/subexp-10.gif

  • http://freakonometrics.blog.free.fr/public/perso5/subexp-21.gif for some http://freakonometrics.blog.free.fr/public/perso5/subexp-23.gif if and only if there exists a non-degenerate variable http://freakonometrics.blog.free.fr/public/perso5/Z.gif such that

http://freakonometrics.blog.free.fr/public/perso5/subexp-13.gif

  • http://freakonometrics.blog.free.fr/public/perso5/subexp-21.gif with http://freakonometrics.blog.free.fr/public/perso5/subexp-22.gif if and only if

http://freakonometrics.blog.free.fr/public/perso5/subexp-14.gif
Even if it is not that simple to check such convergences analytically, it is still possible to use graphs to study the behavior of the empirical versions of those quantities. Consider the following function to visualize the convergence of the empirical ratios,

CONVERGENCE=function(g,p=1,n=500000){
set.seed(1)
X=g(n);X1=g(n);X2=g(n);X3= g(n);X4=g(n)
Tp =cummax(X^p)/cumsum(X^p)
Tp1=cummax(X1^p)/cumsum(X1^p)
Tp2=cummax(X2^p)/cumsum(X2^p)
Tp3=cummax(X3^p)/cumsum(X3^p)
Tp4=cummax(X4^p)/cumsum(X4^p)
plot(Tp4,type="l",ylim=c(0,1),log="x",
xlim=c(100,n),ylab="",col="light blue",xlab="")
lines(Tp1,col="light green")
lines(Tp2,col="yellow")
lines(Tp3,col="pink")
lines(Tp,lwd=2)
abline(h=0:1,col="red",lty=2)
}

or the following to study the “asymptotic” distribution of the ratio on simulated samples

LIMITDIST=function(g,p=1,n=500000,ns=1000){
set.seed(1)
T=rep(NA,ns)
for(i in 1:ns){
X=g(n)
T[i]=max(X^p)/sum(X^p)
}
hist(T,breaks=seq(0,1,by=.05),probability=TRUE,
col="light green",ylab="",xlab="",main="")
}

In the case of exponentially distributed variables, we have

CONVERGENCE(rexp)

For variables with a lognormal distribution,

CONVERGENCE(rlnorm)

And finally, consider the case of a Pareto distribution

rpareto=function(n){runif(n)^(-1/1.5)-1}
CONVERGENCE(rpareto)

Here, it looks like those three distributions have a finite mean (and actually, they do). To go one step further, for http://freakonometrics.blog.free.fr/public/perso5/subexp00.gif, define http://freakonometrics.blog.free.fr/public/perso5/suuuuuubexp.gif and http://freakonometrics.blog.free.fr/public/perso5/subexp-5.gif. Then analogous results can be derived,

  • http://freakonometrics.blog.free.fr/public/perso5/subexp-99.gif if and only if

http://freakonometrics.blog.free.fr/public/perso5/subexp-11.gif

  • http://freakonometrics.blog.free.fr/public/perso5/subexp-21.gif for some http://freakonometrics.blog.free.fr/public/perso5/subexp-25.gif if and only if there exists a non-degenerate variable http://freakonometrics.blog.free.fr/public/perso5/Zk.gif such that

http://freakonometrics.blog.free.fr/public/perso5/subexp-12.gif

  • http://freakonometrics.blog.free.fr/public/perso5/subexp-21.gif with http://freakonometrics.blog.free.fr/public/perso5/subexp-22.gif if and only if

http://freakonometrics.blog.free.fr/public/perso5/subexp-15.gif
Again, it is possible to use the function defined above,

CONVERGENCE(rexp,p=2)

or

CONVERGENCE(rexp,p=3)

or even

CONVERGENCE(rexp,p=10)

If the power is not too high, it looks like the ratio goes to zero. But when it becomes larger, it looks like more simulations might be necessary to say something relevant.

CONVERGENCE(rlnorm,p=2)

or

CONVERGENCE(rlnorm,p=3)

Here also, it looks like we have a light tailed distribution (and actually, it is the case). And finally, if we consider the case of a Pareto distribution

CONVERGENCE(rpareto,p=2)

Then it looks like it is a heavy-tailed distribution. In order to get a better understanding, plot the distribution of the ratio obtained from 1,000 simulated samples (of size 500,000),

LIMITDIST(rpareto,p=1)

versus

LIMITDIST(rpareto,p=2)

So obviously, something is going on between 1 and 2 (recall that the tail index of the Pareto distribution used here is 1.5).

Gold price and fear

Via @theEconomist, I understood that there might be connections between the price of gold (which is said to be extremely high nowadays) and the S&P500 VIX index (the option volatility index, i.e. the so-called “fear index“, as discussed – in French – a few months ago). This has also been discussed on several blogs, e.g. http://etfdailynews.com/ or http://blogs.marketwatch.com/. Via Yahoo quotes, it is easy to get the S&P500 VIX index.

> library(tseries)
> X=get.hist.quote("^VIX")
> T=time(X)
> Y=as.POSIXlt(T)$year+1900
> X2011=X[Y==2011,]
> VIX=X2011[,4]
> VIX100=as.numeric(VIX)/VIX[1]*100
> T2011=T[Y==2011]
> plot(T2011,VIX100,lwd=2,col="red",type="l",
+ xlab="",ylab="",ylim=c(60,290))

And a huge xls file can give us the price of gold (on a daily basis). But we can extract only one series (with the price in USD, which is the series of interest here)

> goldprice=read.table(
+ "http://freakonometrics.blog.free.fr/public/data/goldpriceUSD.csv",
+ header=TRUE,sep=";",dec=",")
> T=as.Date(goldprice$Name,"%d/%m/%y")
> GP=goldprice$USdollar
> Y=as.POSIXlt(T)$year+1896
> GP2011=GP[Y==2011]
> GP100=GP2011/GP2011[1]*100
> T2011=T[Y==2011]
> lines(T2011-4*365.25,GP100,lwd=2,col="blue")

We can see that scales are quite different on those two series (starting at 100 at the beginning of January 2011),

An alternative might be not to consider the price of gold, but something more psychological, like Internet searches. It is possible to download the csv file of queries for “gold price” on Google, via Google Insights.

 
> google=read.table(
+ "http://freakonometrics.blog.free.fr/public/data/google.csv",
+ skip=4,header=TRUE,sep=",",nrows=51)
> W=as.Date(substr(as.character(google$Semaine),1,10))
> G=google$gold.price
> G100=G/G[1]*100
> lines(W,G100,lwd=2,col="blue")

which gives the following graph (again, starting at 100 at the beginning of January 2011),

Here, we can clearly observe that the two series are related, maybe cointegrated. Nice, isn’t it?

Infidelity and econometrics

On http://www.bakadesuyo.com, there was recently an interesting discussion about infidelity, the key question being “at what ages are men and women most likely to have affairs?” The discussion is based on some graphs, e.g.

The source is a paper by Donald Cox, based on a sample of 3,432 respondents (the NHSLS dataset). And to be honest, I have been surprised by the shape of the curves. Especially for men… In order to compare, it is possible to use another dataset that can be found in R,

> library(Ecdat)
> data(Fair)
> tail(Fair)
sex age   ym child religious education occupation
596   male  47 15.0   yes         3        16          4
597   male  22  1.5   yes         1        12          2
598 female  32 10.0   yes         2        18          5
599   male  32 10.0   yes         2        17          6
600   male  22  7.0   yes         3        18          6
601 female  32 15.0   yes         3        14          1
rate nbaffairs
596    2         7
597    5         1
598    4         7
599    5         2
600    2         2
601    5         1

with 601 observations (from Fair (1977)). It is possible to run a Poisson regression to describe the number of affairs in the past year. E.g. for men

> library(splines)
> regM=glm(nbaffairs~bs(age),family=poisson,
+ data=Fair[Fair$sex=="male",])
> a=seq(20,60)
> N=predict(regM,newdata=data.frame(age=a),type="response")
> plot(a,N,type="l",lwd=2,col="red")

or for women,

> regF=glm(nbaffairs~bs(age),family=poisson,
+ data=Fair[Fair$sex=="female",])
> N=predict(regF,newdata=data.frame(age=a),type="response")
> plot(a,N,type="l",lwd=2,col="red",lty=2)

On that (larger) dataset, we obtain curves that are more intuitive… But maybe the Poisson distribution is not an appropriate model. For instance, having no affairs does not mean that the person did not want to… So perhaps a more interesting model would be a zero-inflated Poisson model, i.e. some people are honest and do not want to have affairs (and appear as 0), while some do want to have affairs, and for them the number of affairs is Poisson distributed (and can take the value 0). If we focus on people who do not want to have affairs, the model (and the prediction) is the following, where we plot the probability of not being interested in having an affair,
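Formally, this zero-inflated Poisson model assumes that, with probability $\pi_i$, individual $i$ is a structural zero, and with probability $1-\pi_i$ the number of affairs is Poisson distributed with mean $\lambda_i$, so that

$$\mathbb{P}(N_i=0)=\pi_i+(1-\pi_i)e^{-\lambda_i},\qquad \mathbb{P}(N_i=k)=(1-\pi_i)\frac{\lambda_i^k e^{-\lambda_i}}{k!},\quad k=1,2,\ldots$$

with here a logistic link for $\pi_i$ and a log link for $\lambda_i$, both being (spline) functions of the age. In R, this can be fitted as follows,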

> library(pscl)
> regM0=zeroinfl(nbaffairs~bs(age)|bs(age),dist="poisson",
+ link="logit",data=Fair[Fair$sex=="male",])
> N0=predict(regM0,newdata=data.frame(age=a),type="zero")
> plot(a,N0,type="l",lwd=2,col="blue")

For those willing to have an affair, here is the parameter of the Poisson distribution of the number of affairs,

> Nc=predict(regM0,newdata=data.frame(age=a),type="count")
> plot(a,Nc,type="l",lwd=2,col="purple")

The same can be done for women, with the probability of not being willing to have an affair,

and the Poisson rate for women willing to have an affair,
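The corresponding code for women is not reproduced here; a sketch, under the assumption that exactly the same specification as for men is used, would be

regF0=zeroinfl(nbaffairs~bs(age)|bs(age),dist="poisson",
link="logit",data=Fair[Fair$sex=="female",])
NF0=predict(regF0,newdata=data.frame(age=a),type="zero")    # probability of not being willing
NFc=predict(regF0,newdata=data.frame(age=a),type="count")   # Poisson rate, for those willing
plot(a,NF0,type="l",lwd=2,col="blue",lty=2)
plot(a,NFc,type="l",lwd=2,col="purple",lty=2)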

If we focus on people willing to have an affair, the curves are the following,

i.e. men below 40 are more interested, but after 40, the probability drops, while women become more and more likely to be willing to have an affair. On the other hand, young women having affairs might be fewer, but they usually have many more affairs than men…

Fisher-Tippett theorem and limiting distribution for the maximum

Tomorrow, we will discuss the Fisher-Tippett theorem. The idea is that there are only three possible limiting distributions for normalized versions of the maxima of i.i.d. samples https://freakonometrics.hypotheses.org/files/2018/02/max-00.gif. For bounded distributions, consider e.g. the uniform distribution on the unit interval, i.e. https://freakonometrics.hypotheses.org/files/2018/02/max-09.gif. Let https://freakonometrics.hypotheses.org/files/2018/02/max-10.gif and https://freakonometrics.hypotheses.org/files/2018/02/max-11.gif. Then, for all https://freakonometrics.hypotheses.org/files/2018/02/max-12.gif and https://freakonometrics.hypotheses.org/files/2018/02/max-13.gif,

https://freakonometrics.hypotheses.org/files/2018/02/max-14.gif

i.e. the limiting distribution of the maximum is Weibull’s.

set.seed(1)
s=1000000
n=100
M=matrix(runif(s),n,s/n)
V=apply(M,2,max)
bn=1
an=1/n
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(-7,1),main="",breaks=seq(-20,10,by=.25))
u=seq(-10,0,by=.1)
v=exp(u)
lines(u,v,lwd=3,col="red")

For heavy-tailed distributions, or Pareto-type tails, consider Pareto samples, with distribution function https://freakonometrics.hypotheses.org/files/2018/02/max-05.gif. Let https://freakonometrics.hypotheses.org/files/2018/02/max-06.gif and https://freakonometrics.hypotheses.org/files/2018/02/max-07.gif, then

https://freakonometrics.hypotheses.org/files/2018/02/max-08.gif

which means that the limiting distribution is Fréchet’s.

library(evd)   # for dfrechet() below
set.seed(1)
s=1000000
n=100
M=matrix((runif(s))^(-1/2),n,s/n)
V=apply(M,2,max)
bn=0
an=n^(1/2)
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(0,7),main="",breaks=seq(0,max(U)+1,by=.25))
u=seq(0,10,by=.1)
v=dfrechet(u,shape=2)
lines(u,v,lwd=3,col="red")

For light-tailed distributions, or exponential tails, consider e.g. a sample of exponentially distributed variates, with common distribution function https://freakonometrics.hypotheses.org/files/2018/02/max-01.gif. Let https://freakonometrics.hypotheses.org/files/2018/02/max-02.gif and https://freakonometrics.hypotheses.org/files/2018/02/max-03.gif, then

https://freakonometrics.hypotheses.org/files/2018/02/max-04.gif

i.e. the limiting distribution for the maximum is Gumbel’s distribution.

library(evd)
set.seed(1)
s=1000000
n=100
M=matrix(rexp(s,1),n,s/n)
V=apply(M,2,max)
(bn=qexp(1-1/n))
log(n)
an=1
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(-2,7),ylim=c(0,.39),main="",breaks=seq(-5,15,by=.25))
u=seq(-5,15,by=.1)
v=dgumbel(u)
lines(u,v,lwd=3,col="red")

Consider now a Gaussian https://freakonometrics.hypotheses.org/files/2018/02/max-17.gif sample. We can use the following approximation of the cumulative distribution function (based on l’Hopital’s rule)

https://freakonometrics.hypotheses.org/files/2018/02/max-15.gif

as https://freakonometrics.hypotheses.org/files/2018/02/max-16.gif. Let https://freakonometrics.hypotheses.org/files/2018/02/max-18.gif and https://freakonometrics.hypotheses.org/files/2018/02/max-19.gif. Then we can get

https://freakonometrics.hypotheses.org/files/2018/02/max-20.gif

as https://freakonometrics.hypotheses.org/files/2018/02/max-21.gif. I.e. the limiting distribution of the maximum of a Gaussian sample is Gumbel’s. But what we do not see here is that for a Gaussian sample, the convergence is extremely slow, i.e., with 100 observations, we are still far away from Gumbel distribution,

and it is only slightly better with 1,000 observations,

set.seed(1)
s=10000000
n=1000
M=matrix(rnorm(s,0,1),n,s/n)
V=apply(M,2,max)
(bn=qnorm(1-1/n,0,1))
an=1/bn
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(-2,7),ylim=c(0,.39),main="",breaks=seq(-5,15,by=.25))
u=seq(-5,15,by=.1)
v=dgumbel(u)
lines(u,v,lwd=3,col="red")

Even worse, consider lognormal observations. In that case, recall that if we consider an (increasing) transformation of the variates, we stay in the same domain of attraction. Hence, since https://freakonometrics.hypotheses.org/files/2018/02/max-22.gif, if

https://freakonometrics.hypotheses.org/files/2018/02/max-23.gif

then

https://freakonometrics.hypotheses.org/files/2018/02/max-24.gif

i.e. using a Taylor approximation of the term on the right,

https://freakonometrics.hypotheses.org/files/2018/02/max-25.gif

This gives us normalizing coefficients we should use here.

set.seed(1)
s=10000000
n=1000
M=matrix(rlnorm(s,0,1),n,s/n)
V=apply(M,2,max)
bn=exp(qnorm(1-1/n,0,1))
an=exp(qnorm(1-1/n,0,1))/(qnorm(1-1/n,0,1))
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(-2,7),ylim=c(0,.39),main="",breaks=seq(-5,40,by=.25))
u=seq(-5,15,by=.1)
v=dgumbel(u)
lines(u,v,lwd=3,col="red")

Which die roll should you hope for in Snakes and Ladders?

Last night, I mentioned the use of Markov chains for the game of Snakes and Ladders. As Jean-Philippe pointed out to me, rather strangely, kids are always much happier after rolling a 6 than after rolling a 1. But is it really the optimal value for the die? It obviously depends on the position. For instance, on the first roll, we can wonder what the average number of turns needed to finish the game becomes, conditionally on the 6 possible rolls. We can compute, conditionally on the 6 possible die values (and on the positions where we would end up), the expected number of turns to finish the game,

esperance=function(h0){
INITIAL = as.numeric(which(M[h0+1,]>0))-1
ESPERANCE=rep(NA,length(INITIAL))
names(ESPERANCE)=INITIAL
for(k in 1:length(INITIAL)){
initial=rep(0,n+1); initial[INITIAL[k]]=1
distrib=initial%*%M
game=rep(NA,1000)
for(h in 1:length(game)){
game[h]=distrib[n+1]
distrib=distrib%*%M}
ESPERANCE[k]=sum(1-game)}
return(ESPERANCE)}

(using the code posted yesterday), e.g. for the first roll,

> esperance(0)
1        2        3        5        6       14
32.16499 31.99947 31.82348 31.47954 31.36441 29.83020

where the values indicated are the positions where we could end up (the last one corresponds to rolling a 4). Note that the best possible first roll is the one that takes us to the first ladder, i.e. the 4th cell,

If we look cell by cell, we get the following “best” die rolls (not necessarily unique) as a function of the position on the board

(with snakes in red and ladders in blue). Fun, isn’t it?
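As a rough sketch of how such a map can be computed (the helpers below are mine, not from the original post; they reuse the esperance() function above, together with the objects n, M, starting and ending defined in the transition-matrix code of the “Basics on Markov Chain” post below):

landing=function(h0,d){
s=min(h0+d,n)                                      # overshooting is collapsed onto cell 100
if(s %in% starting) s=ending[which(starting==s)]   # follow the snake or the ladder
return(s)}
best_roll=function(h0){
E=esperance(h0)
land=sapply(1:6,function(d) landing(h0,d))
which.min(E[as.character(land)])}                  # die value minimizing the expected remaining turns (first one in case of ties)
best_roll(0)                                       # should point to the roll of 4, leading to the ladder towards cell 14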

Maps with R, and polygon boundaries

With R, it is extremely easy to draw maps. Let us start with something simple, like French regions. Baptiste mentioned on his blog that shapefiles can be downloaded from the http://ign.fr/ website. Hence, if you extract the zip file, it is possible to plot claim frequencies per region (as done in the course ACT2040),

> library(maptools)
> library(maps)
> departements<-readShapeSpatial("DEPARTEMENT.SHP")
> region<-tapply(baseFREQ[,"nbre"],
+ as.factor(baseFREQ[,"region"]),sum)/
+ tapply(baseFREQ[,"exposition"],
+ as.factor(baseFREQ[,"region"]),sum)
> depFREQ=rep(NA,nrow(departements))
> names(depFREQ)=as.character(
+ departements$CODE_REG)
> for(nom in names(region)){
+ depFREQ[names(depFREQ)==nom] =
+ region[nom]}
> plot(departements,col=gray((depFREQ-.05)*20))
> legend(166963,6561753,legend=seq(1,0,by=-.1)/20+.05,
+ fill=gray(seq(1,0,by=-.1)),cex=1.25, bty="n")

Another application is on earthquakes. It is possible to use shapefiles of tectonic plate contours, and to relate earthquakes to plates. Shapefiles can be found on http://www.colorado.edu/ (here).

http://freakonometrics.blog.free.fr/public/perso4/plate-tekto.gif

First, we can extract the shapes of the tectonic plates

> plates = readShapePoly("plates.shp",
+ proj4string=CRS("+proj=longlat"))
> PP=SpatialPolygons2PolySet(plates)

Consider Montreal,

> montreal=c(-73.600,45.500)

Given that specific location, it is possible to use the following code to get the associated plate,

> PLATE.loc=function(pt){
+ K=NA
+ for(k in 1:17){
+ c=point.in.polygon(pt[1], pt[2],
+ PP[PP$PID==k,c("X")],PP[PP$PID==k,c("Y")],
+ mode.checked=FALSE)
+ if(c>0){K=k}
+ }
+ return(K)}
> abline(v=montreal[1],col="red")
> abline(h=montreal[2],col="red")
> PLATE.loc(montreal)
[1] 1

and then to plot the associated tectonic plate very easily

> PLATE=function(k0){
+ library(maps)
+ map("world")
+ polygon(PP[PP$PID==k0,c("X")],PP[PP$PID==k0,c("Y")],
+ col="red")
+ for(k in (1:17)[-k0]){polygon(PP[PP$PID==k,c("X")],
+ PP[PP$PID==k,c("Y")],col="light blue")}
+ map("world",add=TRUE)}
> PLATE(PLATE.loc(montreal))

That code was used in the paper written with Mathieu, which will be presented on January 30th at the Geotop seminar.

Basics on Markov Chain (for parents)

Markov chains are a very interesting and powerful tool. Especially for parents. Because if you think about it quickly, most of the games our kids play are Markovian. For instance, snakes and ladders…

It is extremely easy to write down the transition matrix: one just needs to define all snakes and ladders. For the board above, we have,

n=100
M=matrix(0,n+1,n+1+6)
rownames(M)=0:n
colnames(M)=0:(n+6)
for(i in 1:6){diag(M[,(i+1):(i+1+n)])=1/6}
M[,n+1]=apply(M[,(n+1):(n+1+6)],1,sum)
M=M[,1:(n+1)]
starting=c(4,9,17,20,28,40,51,54,62,
64,63,71,93,95,92)
ending  =c(14,31,7,38,84,59,67,34,19,
60,81,91,73,75,78)
for(i in 1:length(starting)){
v=M[,starting[i]+1]
ind=which(v>0)
M[ind,starting[i]+1]=0
M[ind,ending[i]+1]=M[ind,ending[i]+1]+v[ind]}
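As a quick sanity check (not strictly needed, but reassuring), every row of the transition matrix should still sum to one after the snakes and ladders have been plugged in:

range(apply(M,1,sum))   # both values should equal 1 (up to rounding)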

So, why is it important to have a Markov Chain ? Because, once you’ve noticed that you had a Markov Chain game, you can derive anything you want. For instance, you can get the distribution after some turns,

powermat=function(P,h){
Ph=P
if(h>1){
for(k in 2:h){
Ph=Ph%*%P}}
return(Ph)}
initial=c(1,rep(0,n))
COLOR=rev(heat.colors(101))
u=1:sqrt(n)
boxes=data.frame(
index=1:n,
ord=rep(u,each=sqrt(n)),
abs=rep(c(u,rev(u)),sqrt(n)/2))
position=function(h=1){
D=initial%*%powermat(M,h)
plot(0:10,0:10,col="white",axes=FALSE,
xlab="",ylab="",main=paste("Position after",h,"turns"))
segments(0:10,rep(0,11),0:10,rep(10,11))
segments(rep(0,11),0:10,rep(10,11),0:10)
for(i in 1:n){
polygon(boxes$abs[i]-c(0,0,1,1),
boxes$ord[i]-c(0,1,1,0),
col=COLOR[min(1+trunc(500*D[i+1]),101)],
border=NA)}
text(boxes$abs-.5,boxes$ord-.5,
boxes$index,cex=.7)
segments(c(0,10),rep(0,2),c(0,10),rep(10,2))
segments(rep(0,2),c(0,10),rep(10,2),c(0,10))}

Here, we have the following (note that I assume that once 100 is reached, the game is over)

Assume for instance, that after 10 turns, your daughter accidentally drops her pawn out of the game. Here is the theoretical (unconditional) position of her pawn after 10 turns,

 so, if she claims she was either on 58, 59 or 60, here are the theoretical probabilities to be in each cell after 10 turns,

> h=10
> (initial%*%powermat(M,h))[59:61]/
+ sum((initial%*%powermat(M,h))[59:61])
[1] 0.1597003 0.5168209 0.3234788

i.e. it is more likely she was on 59 (60th cell of the vector since we start in 0). You can also look at the distribution of the number of turns (at first with only one player).

distrib=initial%*%M
game=rep(NA,1000)
for(h in 1:length(game)){
game[h]=distrib[n+1]
distrib=distrib%*%M}
plot(1-game[1:200],type="l",lwd=2,col="red",
ylab="Probability to be still playing")

Once you have that survival distribution, you have the expected number of turns to finish the game,

> sum(1-game)
[1] 32.16499

i.e. in 33 turns, on average, your daughter reaches the 100 cell. But in 50% of the games, it takes less than 29,

> max(which(1-game>.5))
[1] 29

But assuming that you are playing with your daughter, and that the game is over once one player reaches the 100 cell, it is possible to get the survival distribution of the first time one of us reaches the 100 cell,

plot((1-game[1:200])^2,type="l",lwd=2,col="blue",
ylab="Probability to be still playing (2 players)")

Here, the expected number of turns before ending the game is

> sum((1-game)^2)
[1] 23.40439

And if you ask your son to join the game, the survival distribution function is

plot((1-game[1:200])^3,type="l",lwd=2,col="purple",
ylab="Probability to be still playing (3 players)")

i.e. the expected number of turns before the end is now

> sum((1-game)^3)
[1] 20.02098

PhD defense on copulas

This Wednesday I will be at Université Paris 1 Sorbonne as a member of the jury of the PhD thesis of Pierre-André Maugis, on conditional correlation and vine copula.

Vine copulas were born in 2002 with the paper of Tim Bedford and Roger M. Cooke, Vines – a new graphical model for dependent random variables. The idea is to use the following decomposition for a multivariate density

(from Bayes formula, with synthetic notations). Then using the relationship between a bivariate density and its copula (density)

thus

Using again Bayes formula,

and we can write

Since  and , the previous expression becomes

or to stress on the most important part (as I see it)

It is common then to assume that this conditional copula does not depend on the conditioning parameter. The more detailed expression of that joint trivariate density is
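In standard pair-copula notation (the symbols below are a conventional choice, not necessarily the ones used in the post), it can be written as

$$f(x_1,x_2,x_3)=f_1(x_1)\,f_2(x_2)\,f_3(x_3)\;c_{12}\big(F_1(x_1),F_2(x_2)\big)\;c_{13}\big(F_1(x_1),F_3(x_3)\big)\;c_{23|1}\big(F(x_2|x_1),F(x_3|x_1)\big).$$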

The (parametric) inference algorithm is defined in Cooke, Joe and Aas (2010) as follows

The important assumption in vine copula models is that conditional copulas are constant. And this assumption might be relevant in some cases. For instance, in the Gaussian case (the observations have a Gaussian joint distribution – or at least copula – and we fit a vine model with Gaussian bivariate copulas).

The code to fit a vine copula is the following,

> library(CDVine)
> library(mnormt)
> SIGMA=matrix(c(1,.6,.7,.6,1,.8,.7,.8,1),3,3)
> X=rmnorm(n=100000,varcov=SIGMA)
> CDVineSeqEst(dat=X, family = c(1,1,1),
+ type = 1, method = "mle")
$par
[1] 0.6001505 0.7023699 0.6698215
 
$par2
[1] 0 0 0
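As a quick sanity check (this is a property of the Gaussian case, not something from the original post), the third estimated parameter should be close to the partial correlation of the second and third components given the first one:

(SIGMA[2,3]-SIGMA[1,2]*SIGMA[1,3])/sqrt((1-SIGMA[1,2]^2)*(1-SIGMA[1,3]^2))
# about 0.665, to be compared with the 0.6698 estimated above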

Note that it is consistent with the following algorithm, where conditional copulas are fitted. In the following, for all values of the conditioning component, we fit a Gaussian copula to the remaining conditional pair,

> library(copula)
> U=pnorm(X)
> U1U2=U[,1:2]
> U1U3=U[,c(1,3)]
> GaussCop = normalCopula(param=.5, dim = 2)
> fit12.mpl = fitCopula(GaussCop, U1U2, method="mpl")@estimate
> fit13.mpl = fitCopula(GaussCop, U1U3, method="mpl")@estimate
> fit12.mpl
[1] 0.5984932
> fit13.mpl
[1] 0.7005185
> fit23a=fit23b=rep(NA,99)
> for(i in 4:96){
+ x=i/100
+ C12=pcopula(normalCopula(param=fit12.mpl, dim = 2),U1U2)
+ C13=pcopula(normalCopula(param=fit13.mpl, dim = 2),U1U3)
+ U12=rank(C12)/(nrow(U)+1)
+ U13=rank(C13)/(nrow(U)+1)
+ U23=cbind(U12[abs(U[,1]-x)<.02],U13[abs(U[,1]-x)<.02])
+ V23=cbind(rank(U23[,1])/(nrow(U23)+1),
+ rank(U23[,2])/(nrow(U23)+1))
+ fit23.mpl = fitCopula(GaussCop, V23, method="mpl")@estimate
+ fit23a[i]=fit23.mpl
+ }
> plot((1:99)/100,fit23a,col="red")

It looks like assuming a constant conditional copula was a valid assumption here

But note that if the true distribution is not Gaussian, then assuming a constant conditional copula is not valid anymore (here a trivariate Clayton copula was generated)

Climate change and insurance

I will be in Lyon next Monday to give a talk on “Modeling heat-waves: return period for non-stationary extremes” in a workshop entitled “Changement climatique et gestion des risques“. An interesting reference might be some pages from Le Monde (2010). The talk will be more a discussion about modeling series of temperatures (daily temperatures). A starting point might be the IPCC Third Assessment graph which illustrates the effect on extreme temperatures when (a) the mean temperature increases, (b) the variance increases, and (c) when both the mean and variance increase for a normal distribution of temperature.

I will add here some code used to generate some of the graphs I will comment on. The graph below is the daily minimum temperature,

TEMP=read.table("http://freakonometrics.blog.free.fr/public/data/TN_STAID000038.txt",
header=TRUE,sep=",")
D=as.Date(as.character(TEMP$DATE),"%Y%m%d")
T=TEMP$TN/10
day=as.POSIXlt(D)$yday+1
an=trunc(TEMP$DATE/10000)
plot(D,T,col="light blue",xlab="Minimum daily temperature in Paris",
ylab="",cex=.5)
R=lm(T~D)   # linear trend
abline(R,lwd=2,col="red")

We can clearly see an increasing linear trend. But we do not care (too much) here about the increase of the average temperature; we care more about dispersion, and tails. Here are decennial box-plots

or quantile-regressions

library(quantreg)
PENTESTD=PENTE=rep(NA,99)
for(i in 1:99){
R=rq(T~D,tau=i/100)
PENTE[i]=R$coefficients[2]
PENTESTD[i]=summary(R)$coefficients[2,2]
}
m=lm(T~D)$coefficients[2]
plot((1:99)/100,(PENTE/m-1)*100,type="b")
segments((1:99)/100,((PENTE-2*PENTESTD)/m-1)*100,
(1:99)/100,((PENTE+2*PENTESTD)/m-1)*100,
col="light blue",lwd=3)
points((1:99)/100,(PENTE/m-1)*100,type="b")
abline(h=0,lty=2,col="red")

In order to get a better understanding of the graph above, here are the slopes of quantile regressions associated with different probabilities,

The annual maxima (of the minimum temperature, i.e. the warmest night of the year)

i.e. the regression on yearly maxima.

tail index of a Generalized Pareto distribution

Instead of looking at observations over a century (the trend is obviously linear), we can focus on seasonal behavior,

B=data.frame(Y=rep(T,3),X=c(day,day-365,day+365),
A=rep(an,3))
library(quantreg)
library(splines)
Q50=rq(Y~bs(X,10),data=B,tau=.5)
Q95=rq(Y~bs(X,10),data=B,tau=.95)
Q05=rq(Y~bs(X,10),data=B,tau=.05)
YP95=predict(Q95,newdata=data.frame(X=1:366))
YP05=predict(Q05,newdata=data.frame(X=1:366))
I=(T>predict(Q95))|(T<predict(Q05))
YP50=predict(Q50,newdata=data.frame(X=1:366))
plot(day[I],T[I],col="light blue",cex=.5)
lines(1:365,YP95[1:365],col="blue")
lines(1:365,YP05[1:365],col="blue")
lines(1:365,YP50[1:365],col="blue",lwd=3)

with in red the series from 1900 till 1920, and in purple the series from 1990 till 2010. If we remove the linear trend and the seasonal cycle, here are the residuals, assumed to be stationary,

or within the year,

Obviously, something has been missed,

The graph below is the volatility of the residual series, within the year,

Instead of looking at volatility, we can focus on tails, with tail index per month,

mois=as.POSIXlt(D)$mon+1
Pmax=Dmax=matrix(NA,12,2)
for(s in 1:12){
X=T3[mois==s]
FIT=gpd(X,5)
Pmax[s,1:2]=FIT$par.ests
Dmax[s,1:2]=FIT$par.ses
}
plot(1:12,Pmax[,1],type="b",col="blue",
ylim=c(-.6,0))
segments(1:12,Pmax[,1]+2*Dmax[,1],1:12,Pmax[,1]-
2*Dmax[,1],col="light blue",lwd=2)
points(1:12,Pmax[,1],col="blue")
text(1:12,rep(-.5,12),c("JAN","FEV","MARS",
"AVR","MAI","JUIN","JUIL","AOUT","SEPT",
"OCT","NOV","DEV"),cex=.7)

At the end of the talk, I will also mention multiple city models, e.g. Paris and Marseille,

If we look at residuals (once we have removed the linear trend and the seasonal cycle) we observe some positive dependence

In order to study (strong) tail dependence, define

http://freakonometrics.hypotheses.org/files/2017/07/Llatex2png.2.php_.png

for lower left tail and

http://freakonometrics.hypotheses.org/files/2017/07/Clatex2png.2.php_.png

for upper right tail, where http://freakonometrics.hypotheses.org/files/2017/07/toclatex2png-12.2.php_.png is the survival copula associated to http://freakonometrics.hypotheses.org/files/2017/07/toclatex2png-13.2.php_.png, i.e.
http://freakonometrics.hypotheses.org/files/2017/01/toclatex2png-14.2.php_.png

and

http://freakonometrics.hypotheses.org/files/2017/01/toclatex2png-15.2.php_.png

It looks like there is no tail dependence (in the upper tail). But it is also possible to study weaker tail dependence, through

http://freakonometrics.hypotheses.org/files/2017/01/toc2latex2png.3.php_.png

and

http://freakonometrics.hypotheses.org/files/2017/01/toc2latex2png.4.php_.png


Slides can be visualized below, I will upload them soon,

Confidence interval for predictions with GLMs

Consider a (simple) Poisson regression http://freakonometrics.hypotheses.org/files/2016/11/poiss01.gif. Given a sample http://freakonometrics.hypotheses.org/files/2016/11/poiss02.gif where http://freakonometrics.hypotheses.org/files/2016/11/poiss03.gif, the goal is to derive a 95% confidence interval for http://freakonometrics.hypotheses.org/files/2016/11/poiss04.gif given http://freakonometrics.hypotheses.org/files/2016/11/poiss05.gif, where http://freakonometrics.hypotheses.org/files/2016/11/poiss04.gif is the prediction. Hence, we want to derive a confidence interval for the prediction, not the potential observation, i.e. the dot on the graph below

> r=glm(dist~speed,data=cars,family=poisson)
> P=predict(r,type="response",
+ newdata=data.frame(speed=seq(-1,35,by=.2)))
> plot(cars,xlim=c(0,31),ylim=c(0,170))
> abline(v=30,lty=2)
> lines(seq(-1,35,by=.2),P,lwd=2,col="red")
> P0=predict(r,type="response",se.fit=TRUE,
+ newdata=data.frame(speed=30))
> points(30,P0$fit,pch=4,lwd=3)

i.e.

Let http://freakonometrics.hypotheses.org/files/2016/11/poiss06.gif denote the maximum likelihood estimator of http://freakonometrics.hypotheses.org/files/2016/11/poiss07.gif. Then
http://freakonometrics.hypotheses.org/files/2016/11/poiss40.gif
where http://freakonometrics.hypotheses.org/files/2016/11/poiss101.gif is the Fisher information of http://freakonometrics.hypotheses.org/files/2016/11/poiss06.gif (from standard maximum likelihood theory). Recall that
http://freakonometrics.hypotheses.org/files/2016/11/poiss13.gif
where computation of those values is based on the following calculations
http://freakonometrics.blog.free.fr/public/latex/poiss21.gif
In the case of the log-Poisson regression
http://freakonometrics.hypotheses.org/files/2016/11/poiss36.gif
Let us get back to our initial problem.

  • confidence interval for the linear combination

A first idea to get a confidence interval for http://freakonometrics.hypotheses.org/files/2016/11/poiss49.gif is to get a confidence interval for http://freakonometrics.hypotheses.org/files/2016/11/poiss100.gif (by taking exponential values of bounds, since the exponential is a monotone function). Asymptotically, we know that
http://freakonometrics.hypotheses.org/files/2016/11/poiss40.gif

thus, an approximation for the variance matrix of http://freakonometrics.hypotheses.org/files/2016/11/poiss06.gif will be based on http://freakonometrics.hypotheses.org/files/2016/11/poiss45.gif, obtained by plugging estimators of the parameters.
Then, since http://freakonometrics.hypotheses.org/files/2016/11/poiss06.gif has an asymptotic multivariate Gaussian distribution, any linear combination of the parameters will also be (asymptotically) normal, i.e.
http://freakonometrics.hypotheses.org/files/2016/11/poiss47.gif has a normal distribution, centered on http://freakonometrics.hypotheses.org/files/2016/11/poiss49.gif, with variance http://freakonometrics.hypotheses.org/files/2016/11/poiss102.gif where http://freakonometrics.hypotheses.org/files/2016/11/Poiss110.gif is the variance of http://freakonometrics.hypotheses.org/files/2016/11/poiss06.gif. All those quantities can be easily computed. First, we can get the variance of the estimators

> i1=sum(predict(r,type="response"))
> i2=sum(cars$speed*predict(r,type="response"))
> i3=sum(cars$speed^2*predict(r,type="response"))
> I=matrix(c(i1,i2,i2,i3),2,2)
> V=solve(I)

Hence, if we compare with the output of the regression,

> summary(r)$cov.unscaled
(Intercept)         speed
(Intercept)  0.0066870446 -3.474479e-04
speed       -0.0003474479  1.940302e-05
> V
[,1]          [,2]
[1,]  0.0066871228 -3.474515e-04
[2,] -0.0003474515  1.940318e-05

Based on those values, it is easy to derive the standard deviation for the linear combination,

> x=30
> P2=predict(r,type="link",se.fit=TRUE,
+ newdata=data.frame(speed=x))
> P2
$fit
1
5.046034

$se.fit
[1] 0.05747075

$residual.scale
[1] 1

> sqrt(V[1,1]+2*x*V[2,1]+x^2*V[2,2])
[1] 0.05747084
> sqrt(t(c(1,x))%*%V%*%c(1,x))
[,1]
[1,] 0.05747084

And once we have the standard deviation, and normality (at least asymptotically), confidence intervals can be derived; then, taking the exponential of the bounds, we get a confidence interval

> segments(30,exp(P2$fit-1.96*P2$se.fit),
+ 30,exp(P2$fit+1.96*P2$se.fit),col="blue",lwd=3)

Based on that technique, confidence intervals are no longer centered on the prediction. But who cares ?

  • delta method

Actually, those who like to use “more or less” expressions for confidence intervals will not like non-centered intervals. So, an alternative is to use the delta method. Instead of writing (again) something on the theory, we can use a package which computes that method,

> estmean=t(c(1,x))%*%coef(r)
> var=t(c(1,x))%*%summary(r)$cov.unscaled%*%c(1,x)
> library(msm)
> deltamethod (~ exp(x1), estmean, var)
[1] 8.931232
> P1=predict(r,type="response",se.fit=TRUE,
+ newdata=data.frame(speed=30))
> P1
$fit
1
155.4048

$se.fit
1
8.931232

$residual.scale
[1] 1

The delta method gives us (asymptotic) normality, so once we have a standard deviation, we get the confidence interval.

> segments(30,P1$fit-1.96*P1$se.fit,30,
+ P1$fit+1.96*P1$se.fit,col="blue",lwd=3)

Note that those quantities – obtained with two different approaches – are rather close here

> exp(P2$fit-1.96*P2$se.fit)
1
138.8495
> P1$fit-1.96*P1$se.fit
1
137.8996
> exp(P2$fit+1.96*P2$se.fit)
1
173.9341
> P1$fit+1.96*P1$se.fit
1
172.9101
  • bootstrap techniques

And a third method (but far from what I expect to teach in that course) is to use bootstrap techniques to avoid relying on results based on asymptotic normality (we have only 50 observations). The idea is to sample from our dataset, to run a log-Poisson regression on those new samples, and to repeat that a lot of times,
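A minimal sketch of such a bootstrap (assumptions: resampling the 50 observations with replacement, 1,000 replications, and a prediction at speed=30, as above):

set.seed(1)
nboot=1000
pred=rep(NA,nboot)
for(b in 1:nboot){
idx=sample(1:nrow(cars),replace=TRUE)
regb=glm(dist~speed,data=cars[idx,],family=poisson)
pred[b]=predict(regb,newdata=data.frame(speed=30),type="response")
}
quantile(pred,c(.025,.975))   # bootstrap confidence interval for the prediction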