Regression, explanatory variables and geometry

Regression (like any computation of a conditional expectation) is a projection problem.

An interesting result is the so-called Frisch-Waugh theorem, which helps to understand the fundamental difference between a multiple regression model and simple regression models. We will come back to this point in class, when briefly discussing Simpson's paradox. The formulation is the following: we want to regress $Y$ on $\boldsymbol{X}_1$ and $\boldsymbol{X}_2$, two (a priori disjoint) sets of explanatory variables,

$$Y=\boldsymbol{X}_1\boldsymbol{\beta}_1+\boldsymbol{X}_2\boldsymbol{\beta}_2+\varepsilon$$

The normal equations (associated with the minimization of the sum of squared errors) are

$$\begin{cases}\boldsymbol{X}_1^\top\boldsymbol{X}_1\widehat{\boldsymbol{\beta}}_1+\boldsymbol{X}_1^\top\boldsymbol{X}_2\widehat{\boldsymbol{\beta}}_2=\boldsymbol{X}_1^\top Y\\ \boldsymbol{X}_2^\top\boldsymbol{X}_1\widehat{\boldsymbol{\beta}}_1+\boldsymbol{X}_2^\top\boldsymbol{X}_2\widehat{\boldsymbol{\beta}}_2=\boldsymbol{X}_2^\top Y\end{cases}$$

so that, at the optimum, the two sets of estimated parameters are related through

$$\widehat{\boldsymbol{\beta}}_1=(\boldsymbol{X}_1^\top\boldsymbol{X}_1)^{-1}\boldsymbol{X}_1^\top Y-(\boldsymbol{X}_1^\top\boldsymbol{X}_1)^{-1}\boldsymbol{X}_1^\top\boldsymbol{X}_2\widehat{\boldsymbol{\beta}}_2$$

The first part corresponds to the regression of $Y$ on $\boldsymbol{X}_1$ alone, but a second term remains as soon as $\boldsymbol{X}_1$ and $\boldsymbol{X}_2$ are not orthogonal. Let $H_1$ denote the (orthogonal) projection matrix onto the space spanned by $\boldsymbol{X}_1$, and $M_1$ the projection onto its orthogonal complement,

$$H_1=\boldsymbol{X}_1(\boldsymbol{X}_1^\top\boldsymbol{X}_1)^{-1}\boldsymbol{X}_1^\top,\qquad M_1=\mathbb{I}-H_1$$

Then we can write

$$M_1Y=M_1\boldsymbol{X}_2\boldsymbol{\beta}_2+M_1\varepsilon$$

Setting

$\widetilde{Y}=M_1Y$ and $\widetilde{\boldsymbol{X}}_2=M_1\boldsymbol{X}_2$,

we are back to a standard linear model (provided we work with the projections of the variables onto the orthogonal complement of the space spanned by $\boldsymbol{X}_1$, i.e. with transformed variables),

$$\widetilde{Y}=\widetilde{\boldsymbol{X}}_2\boldsymbol{\beta}_2+\eta$$

To go further on the geometry of least squares, and on the Frisch-Waugh theorem, I refer to my lecture notes and a few slides.

The graph below corresponds to the case where the explanatory variables are orthogonal; in that case, the multiple regression is equivalent to two simple regressions,

http://freakonometrics.hypotheses.org/files/2016/05/FW1.gif

The graph below corresponds to the non-orthogonal case,

http://freakonometrics.hypotheses.org/files/2016/05/FW2.gif

The code to check these successive projections is relatively simple. First, we import the data and look at the full model,

> chicago=read.table(
+ "http://freakonometrics.free.fr/chicago.txt",
+ header=TRUE,sep=";")
> Y=chicago$Fire
> X1=chicago$X_1
> X2=chicago$X_2
> X3=chicago$X_3
> base=data.frame(Y,X1,X2,X3)
> tail(base)
Y    X1 X2     X3
42  4.8 0.152 19 13.323
43 10.4 0.408 25 12.960
44 15.6 0.578 28 11.260
45  7.0 0.114  3 10.080
46  7.1 0.492 23 11.428
47  4.9 0.466 27 13.731
> regression=lm(Y~X1+X2+X3)
> summary(regression)

Call:
lm(formula = Y ~ X1 + X2 + X3)

Residuals:
Min     1Q Median     3Q    Max
-9.737 -4.565 -1.479  3.751 16.079

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 22.07525    6.19447   3.564 0.000910 ***
X1          -0.62764    5.28130  -0.119 0.905953
X2           0.22378    0.06161   3.632 0.000744 ***
X3          -1.55059    0.38195  -4.060 0.000204 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 6.527 on 43 degrees of freedom
Multiple R-squared: 0.4417,	Adjusted R-squared: 0.4027
F-statistic: 11.34 on 3 and 43 DF,  p-value: 1.314e-05

> n=length(Y)
> X=matrix(c(rep(1,n),X1,X2,X3),n,4)
> X[1:5,]
[,1]  [,2] [,3]   [,4]
[1,]    1 0.604   29 11.744
[2,]    1 0.765   44  9.323
[3,]    1 0.735   36  9.948
[4,]    1 0.669   37 10.656
[5,]    1 0.814   53  9.730

Then, we project only on the first two variables (and the constant),

> FWX1=X[,1:3]
> regression12=lm(Y~X1+X2)
> solve(t(FWX1)%*%FWX1)%*%t(FWX1)%*%Y
[,1]
[1,]  0.08069764
[2,] 11.56913900
[3,]  0.15108490
> summary(regression12)$coefficients
Estimate Std. Error    t value   Pr(>|t|)
(Intercept)  0.08069764 3.49182154 0.02311047 0.98166664
X1          11.56913900 5.05014782 2.29085156 0.02681947
X2           0.15108490 0.06854408 2.20420043 0.03278652
>
>
> FWX2=X[,4]
> H1=FWX1%*%solve(t(FWX1)%*%FWX1)%*%t(FWX1)
> M1=diag(rep(1,n))-H1
> FWX2s=M1%*%FWX2
> FWYs =M1%*%Y
> (beta2=solve(t(FWX2s)%*%FWX2s)%*%t(FWX2s)%*%FWYs)
[,1]
[1,] -1.550594
> summary(regression)$coefficients[4]
[1] -1.550594
>
> (beta1=solve(t(FWX1)%*%FWX1)%*%t(FWX1)%*%Y-
+        solve(t(FWX1)%*%FWX1)%*%t(FWX1)%*%FWX2%*%beta2)
[,1]
[1,] 22.0752495
[2,] -0.6276442
[3,]  0.2237765
> summary(regression)$coefficients[1:3]
[1] 22.0752495 -0.6276442  0.2237765

We do recover the estimator of the last parameter using the Frisch-Waugh theorem, and then, using the normal equations, we deduce the first three.
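As a side note, the same check can be performed with two calls to lm() only: by the Frisch-Waugh theorem, regressing the residuals of Y (given the constant, X1 and X2) on the residuals of X3 (given the same variables) should return the coefficient of X3 in the full model, namely -1.550594. This is a minimal sketch, reusing the objects defined above,

> eY =residuals(lm(Y~X1+X2))
> eX3=residuals(lm(X3~X1+X2))
> lm(eY~0+eX3)$coefficients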

Tail index estimation

These data were collected at Copenhagen Reinsurance and comprise 2167 fire losses over the period 1980 to 1990. They have been adjusted for inflation to reflect 1985 values and are expressed in millions of Danish kroner. Note that it is also possible to work with the same data where the total claim has been divided into a building loss, a loss of contents and a loss of profits.

> base1=read.table(
+ "http://freakonometrics.free.fr/danish-univariate.txt",
+ header=TRUE)
> base2=read.table(
+ "http://freakonometrics.free.fr/danish-multivariate.txt",
+ header=TRUE)

Consider here the first dataset (we deal – so far – with univariate extremes),

> X=base1$Loss.in.DKM
> D=as.Date(as.character(base1$Date),"%m/%d/%Y")
> plot(D,X,type="h")

The graph is the following,

A natural idea is then to plot

$$\left\{\left(\log x_{(i)},\ \log\left(1-\frac{i}{n+1}\right)\right)\right\}_{i=1,\ldots,n}$$

i.e.

> Xs=sort(X)
> logXs=rev(log(Xs))
> n=length(X)
> plot(log(Xs),log((n:1)/(n+1)))

Points are on a straight line here. The slope can be obtained using a linear regression,

> B=data.frame(X=log(Xs),Y=log((n:1)/(n+1)))
> reg=lm(Y~X,data=B)
> summary(reg)

Call:
lm(formula = Y ~ X, data = B)

Residuals:
Min       1Q   Median       3Q      Max
-0.59999 -0.00777  0.00878  0.02461  0.20309

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.089442   0.001572   56.88   <2e-16 ***
X           -1.382181   0.001477 -935.55   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.04928 on 2165 degrees of freedom
Multiple R-squared: 0.9975,	Adjusted R-squared: 0.9975
F-statistic: 8.753e+05 on 1 and 2165 DF,  p-value: < 2.2e-16

> reg=lm(Y~X,data=B[(n-500):n,])
> summary(reg)

Call:
lm(formula = Y ~ X, data = B[(n - 500):n, ])

Residuals:
Min       1Q   Median       3Q      Max
-0.48502 -0.02148 -0.00900  0.01626  0.35798

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.186188   0.010033   18.56   <2e-16 ***
X           -1.432767   0.005105 -280.68   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.07751 on 499 degrees of freedom
Multiple R-squared: 0.9937,	Adjusted R-squared: 0.9937
F-statistic: 7.878e+04 on 1 and 499 DF,  p-value: < 2.2e-16

> reg=lm(Y~X,data=B[(n-100):n,])
> summary(reg)

Call:
lm(formula = Y ~ X, data = B[(n - 100):n, ])

Residuals:
Min       1Q   Median       3Q      Max
-0.33396 -0.03743  0.02279  0.04754  0.62946

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.67377    0.06777   9.942   <2e-16 ***
X           -1.58536    0.02240 -70.772   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.1299 on 99 degrees of freedom
Multiple R-squared: 0.9806,	Adjusted R-squared: 0.9804
F-statistic:  5009 on 1 and 99 DF,  p-value: < 2.2e-16

The slope here is related to the tail index of the distribution. Consider some heavy tailed distribution, i.e. one such that $\mathbb{P}(X>x)=\overline{F}(x)=x^{-\alpha}\,\mathcal{L}(x)$, where $\mathcal{L}$ is some slowly varying function. Equivalently, there exists a slowly varying function $\mathcal{L}^{\star}$ such that the quantile function satisfies $F^{-1}(1-u)=u^{-1/\alpha}\,\mathcal{L}^{\star}(u)$. Then

$$\log\overline{F}(x)=-\alpha\log x+\log\mathcal{L}(x)\sim-\alpha\log x\quad\text{as }x\rightarrow\infty$$

i.e., since a natural estimator of the quantile $F^{-1}\left(1-\frac{i}{n+1}\right)$ is the order statistic $x_{(n-i+1)}$, the slope of the straight line is the opposite of the tail index $\alpha$. In what follows, we work with $\xi=1/\alpha$. A regression-based estimator of $\xi$ (i.e. of minus the reciprocal of the slope), considering only the $k$ largest observations, is

$$\widehat{\xi}=\frac{\displaystyle\frac1k\sum_{i=1}^{k}\left[\log x_{(n-i+1)}-\log x_{(n-k)}\right]}{\displaystyle\frac1k\sum_{i=1}^{k}\left[\log\left(\frac{k+1}{n+1}\right)-\log\left(\frac{i}{n+1}\right)\right]}$$

Hill's estimator is based on the fact that the denominator above is almost 1 (asymptotically, when $k\rightarrow\infty$ with $k/n\rightarrow0$ as $n\rightarrow\infty$), i.e.

$$\widehat{\xi}^{\text{Hill}}_k=\frac1k\sum_{i=1}^{k}\log x_{(n-i+1)}-\log x_{(n-k)}$$

Note that, if $k\rightarrow\infty$, but not too fast, i.e. $k/n\rightarrow0$ as $n\rightarrow\infty$, then $\widehat{\xi}_k\rightarrow\xi$ in probability (one can even get almost sure convergence with stronger assumptions). Further,

$$\sqrt{k}\left(\widehat{\xi}_k-\xi\right)\xrightarrow{\ \mathcal{L}\ }\mathcal{N}(0,\xi^2)$$

Based on that (asymptotic) distribution, it is possible to get an (asymptotic) confidence interval for $\xi$,

> xi=1/(1:n)*cumsum(logXs)-logXs
> xise=1.96/sqrt(1:n)*xi
> plot(1:n,xi,type="l",ylim=range(c(xi+xise,xi-xise)),
+ xlab="",ylab="",)
> polygon(c(1:n,n:1),c(xi+xise,rev(xi-xise)),
+ border=NA,col="lightblue")
> lines(1:n,xi+xise,col="red",lwd=1.5)
> lines(1:n,xi-xise,col="red",lwd=1.5)
> lines(1:n,xi,lwd=1.5)
> abline(h=0,col="grey")
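As a quick cross-check (not in the original code), the evir package, which is loaded further below for gpd(), also provides a hill() function producing the same kind of Hill plot,

> library(evir)
> hill(X,option="xi")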

It is also possible to work with $\widehat{\alpha}_k=1/\widehat{\xi}_k$, which estimates the tail index $\alpha=1/\xi$. Similarly, $\widehat{\alpha}_k\rightarrow\alpha$ in probability as $k\rightarrow\infty$ (and again almost surely with additional assumptions on the rate of convergence), and

$$\sqrt{k}\left(\widehat{\alpha}_k-\alpha\right)\xrightarrow{\ \mathcal{L}\ }\mathcal{N}(0,\alpha^2)$$

(obtained using the delta-method). Again, we can use that result to derive (asymptotic) confidence intervals

> alpha=1/xi
> alphase=1.96/sqrt(1:n)/xi
> YL=c(0,3)
> plot(1:n,alpha,type="l",ylim=YL,xlab="",ylab="",)
> polygon(c(1:n,n:1),c(alpha+alphase,rev(alpha-alphase)),
+ border=NA,col="lightblue")
> lines(1:n,alpha+alphase,col="red",lwd=1.5)
> lines(1:n,alpha-alphase,col="red",lwd=1.5)
> lines(1:n,alpha,lwd=1.5)
> abline(h=0,col="grey")

The Dekkers-Einmahl-de Haan estimator (the so-called moment estimator) is

$$\widehat{\xi}^{\text{DEdH}}_k=M^{(1)}_k+1-\frac12\left(1-\frac{\big(M^{(1)}_k\big)^2}{M^{(2)}_k}\right)^{-1}$$

where, for $j=1,2$,

$$M^{(j)}_k=\frac1k\sum_{i=1}^{k}\left[\log x_{(n-i+1)}-\log x_{(n-k)}\right]^j$$

Then (given again conditions on the speed of convergence, i.e. $k\rightarrow\infty$, with $k/n\rightarrow0$ as $n\rightarrow\infty$),

$$\sqrt{k}\left(\widehat{\xi}^{\text{DEdH}}_k-\xi\right)\xrightarrow{\ \mathcal{L}\ }\mathcal{N}\left(0,1+\xi^2\right)\qquad\text{(when }\xi>0\text{)}$$
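No code was given above for that estimator, so here is a possible sketch (it reuses logXs, the log-observations sorted in decreasing order, and n, both defined above; the range of k and the ylim value are arbitrary choices),

> k=2:(n-1)
> M1=sapply(k,function(k) mean(logXs[1:k]-logXs[k+1]))
> M2=sapply(k,function(k) mean((logXs[1:k]-logXs[k+1])^2))
> xiDEdH=M1+1-1/(2*(1-M1^2/M2))
> plot(k,xiDEdH,type="l",ylim=c(0,1),xlab="",ylab="")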

Finally, Pickands' estimator is

$$\widehat{\xi}^{\text{P}}_k=\frac{1}{\log 2}\log\left(\frac{x_{(n-k+1)}-x_{(n-2k+1)}}{x_{(n-2k+1)}-x_{(n-4k+1)}}\right)$$

and it is possible to prove that, as $k\rightarrow\infty$,

$$\sqrt{k}\left(\widehat{\xi}^{\text{P}}_k-\xi\right)\xrightarrow{\ \mathcal{L}\ }\mathcal{N}\left(0,\frac{\xi^2\left(2^{2\xi+1}+1\right)}{\big(2(2^{\xi}-1)\log 2\big)^2}\right)$$

Here the code is

> Xs=rev(sort(X))
> xi=1/log(2)*log( (Xs[seq(1,length=trunc(n/4),by=1)]-
+ Xs[seq(2,length=trunc(n/4),by=2)])/
+ (Xs[seq(2,length=trunc(n/4),by=2)]-Xs[seq(4,
+ length=trunc(n/4),by=4)]) )
> xise=1.96/sqrt(seq(1,length=trunc(n/4),by=1))*
+ sqrt( xi^2*(2^(2*xi+1)+1)/((2*(2^xi-1)*log(2))^2))
> plot(seq(1,length=trunc(n/4),by=1),xi,type="l",
+ ylim=c(0,3),xlab="",ylab="",)
> polygon(c(seq(1,length=trunc(n/4),by=1),rev(seq(1,
+ length=trunc(n/4),by=1))),c(xi+xise,rev(xi-xise)),
+ border=NA,col="lightblue")
> lines(seq(1,length=trunc(n/4),by=1),
+ xi+xise,col="red",lwd=1.5)
> lines(seq(1,length=trunc(n/4),by=1),
+ xi-xise,col="red",lwd=1.5)
> lines(seq(1,length=trunc(n/4),by=1),xi,lwd=1.5)
> abline(h=0,col="grey")

It is also possible to use maximum likelihood techniques to fit a GPD distribution over a high threshold.

> library(evd)
> library(evir)
> gpd(X,5)
$n
[1] 2167

$threshold
[1] 5

$p.less.thresh
[1] 0.8827873

$n.exceed
[1] 254

$method
[1] "ml"

$par.ests
xi      beta
0.6320499 3.8074817

$par.ses
xi      beta
0.1117143 0.4637270

$varcov
[,1]        [,2]
[1,]  0.01248007 -0.03203283
[2,] -0.03203283  0.21504269

$information
[1] "observed"

$converged
[1] 0

$nllh.final
[1] 754.1115

attr(,"class")
[1] "gpd"

or equivalently (or almost), with the gpd.fit() function (from the ismev package, which can also be sourced directly, as mentioned at the end of this post),

> gpd.fit(X,5)
$threshold
[1] 5

$nexc
[1] 254

$conv
[1] 0

$nllh
[1] 754.1115

$mle
[1] 3.8078632 0.6315749

$rate
[1] 0.1172127

$se
[1] 0.4636270 0.1116136

The interest of the latter function is that it is possible to visualize the profile likelihood of the tail index,

> gpd.profxi(gpd.fit(X,5),xlow=0,xup=3)

or

> gpd.profxi(gpd.fit(X,20),xlow=0,xup=3)

Hence, it is possible to plot the maximum likelihood estimator of the tail index, as a function of the threshold (including a confidence interval),

> GPDE=Vectorize(function(u){gpd(X,u)$par.ests[1]})
> GPDS=Vectorize(function(u){
+ gpd(X,u)$par.ses[1]})
> u=c(seq(2,10,by=.5),seq(11,25))
> XI=GPDE(u)
> XIS=GPDS(u)
> plot(u,XI,ylim=c(0,2))
> segments(u,XI-1.96*XIS,u,XI+
+ 1.96*XIS,lwd=2,col="red")

Finally, it is possible to use block-maxima techniques.

> gev.fit(X)
$conv
[1] 0

$nllh
[1] 3392.418

$mle
[1] 1.4833484 0.5930190 0.9168128

$se
[1] 0.01507776 0.01866719 0.03035380

The estimator of the tail index is here the last coefficient, on the right.
Since it is rather difficult to install a package in classrooms, here is the source of the R code used here (to fit a GPD to exceedances),

> source("http://freakonometrics.blog.free.fr/public/code/gpd.R")

Next time, we will discuss how to use those estimators.

Local Utility and Multivariate Risk Aversion

The paper with Alfred Galichon and Marc Henry, on Local Utility and Multivariate Risk Aversion, is now available online on http://papers.ssrn.com/,

We revisit Machina's local utility as a tool to analyze attitudes to multivariate risks. Using martingale embedding techniques, we show that for non-expected utility maximizers choosing between multivariate prospects, aversion to multivariate mean preserving increases in risk is equivalent to the concavity of the local utility functions, thereby generalizing Machina's result in Machina (1982). To analyze comparative risk attitudes within the multivariate extension of rank dependent expected utility of Galichon and Henry (2011), we extend Quiggin's monotone mean and utility preserving increases in risk and show that the useful characterization given in Landsberger and Meilijson (1994) still holds in the multivariate case.

It is “simply” the average value

For some obscure reason, simple things are usually supposed to be simple. Recently, on the internet, I saw a lot of posts on the "average time in which you hold a stock", and two rather different values are mentioned:

  • "Take any stock in the United States. The average time in which you hold a stock is – it's gone up from 20 seconds to 22 seconds in the last year" (Michael Hudson on http://www.telegraph.co.uk/) or "The founder of Tradebot, in Kansas City, Mo., told students in 2008 that his firm typically held stocks for 11 seconds" (on http://www.nytimes.com/), among many others
  • "Based on the NYSE index data, the mean duration of holding period by US investors was around 7 years in 1940. This stayed the same for the next 35 years. The average holding period had fallen to under 2 years by the time of the 1987 crash. By the turn of the century it had fallen to below one year. It was around 7 months by 2007" (on http://topforeignstocks.com/, see also the graph below) or "Two-thirds [of the managers of more than 800 institutional funds interviewed in a study] had higher turnover than they predicted […] Even though most are judged by performance over three-year horizons, their average holding period was about 17 months, and 19% of the managers held the typical stock for one year or less" (mentioned on http://online.wsj.com/), again among many others

How come that, on the one hand, some people talk about less than 20 seconds for the "average time in which you hold a stock", and, on the other hand, about a year? How can we have such a difference? We are talking about an average time here, not a rare event probability…

To understand what might be wrong, consider the following case, with a market and two stocks: one is kept over a year (52 weeks) while the other is traded – and exchanged – every week (52 times per year). What is the "average time in which you hold a stock"? Is it

  • 26.5 weeks? the average holding time is 52 weeks for the first stock and 1 week for the second one, i.e. 53 over 2
  • 1.96 weeks? over a year the first stock has been traded once, while the second one was exchanged 52 times, i.e. 104 over 53 (total holding time over the total number of transactions) – see the quick check below
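Both computations are elementary; here is a two-line check in R on this toy example (52 and 1 are the holding times, in weeks, of the two stocks),

> mean(c(52,1))        # average of the two per-stock average holding times
[1] 26.5
> (52+52)/(1+52)       # total holding time divided by total number of holdings
[1] 1.962264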

Obviously, there is a selection bias in those numbers (see here for an illustration of that concept, in French). In order to get a better understanding, consider the following simple model, with a large number of simulated stocks. At each transaction, they can be held by 3 types of investors,

  • with probability 70%, hold – on average – for 20 sec.
  • with probability 20%, hold – on average – for 15 days
  • with probability 10%, hold – on average – for 10 years

As claimed by Warren Buffett, “my favorite time frame for holding a stock is forever“, so it might not be absurd to consider investors who keep a stock for a long period of time. Assume further that the time frame for holding a stock is exponentially distributed (the rate depending on the kind of investor). Assume that those stocks are observed during a period of time of 20 years (which might sound reasonable). Several techniques can be used to estimate the “average time in which you hold a stock

  • The first one is to calculate the mean, per stock, of the holding time, and to consider the average over all the stocks. Maybe it would be a good idea to exclude the last observation (since data were censored),
  • The second one is to divide the (total) period of time by the (total) number of investors that held the stock during that time frame (or number of transactions)
  • A third idea might be to use the first method, but instead of removing the last one, to use an estimator of the mean based on Kaplan-Meier estimate
  • A fourth idea is to look at what happened at a specific date (say after 10 years), i.e. which investor had the stock, and how long he kept it.

The code to generate that process is the following

> set.seed(1)
> ns=1000   # maximum number of transactions simulated per stock (value not given in the original post)
> invest=sample(size=ns,c("A","B","C"),
+ prob=c(.7,.2,.1),replace=TRUE)
> lambda=(invest=="A")*20/(365*24*60*60)+
+        (invest=="B")*15/365+
+        (invest=="C")*10
> E=rexp(ns,rate=1/lambda)
> T=cumsum(E)
> T=T[T<20]
> plot(c(T,50),0:length(T),type="s",xlim=c(0,20),col="blue")

with the following trajectory for the number of investor that did hold that specific stock between time 0 and time 20.

Then, the different techniques are the following,

# method 1
> E1=diff(T)
> m1=mean(E1)
> M1[s]=m1

for the first one (means of time length, per stock),

# method 2
> if(length(T)>1){
+ n2=length(T)-1
+ d2=T[length(T)]-T[1]
+ N2[s]=n2; D2[s]=d2
+ }

for the second one (time length and number of transactions),

+ # method 3
+ T3=c(T,20)
+ C3=c(rep(0,length(T)-1),1)
+ km=survfit(Surv(diff(T3), 1-C3)~1)
+ m3=summary(km,rmean='individual')$table[5]
+ M3[s]=m3

for the third one (based on a prediction of the expected mean, from Kaplan-Meier estimate) and

# method 4
> T0=c(0,T,20)
> m4=min(T0[T0>10])-max(T0[T0<10])
> M4[s]=m4

for the fourth one (based on what happened at time 10). Using Monte Carlo simulations, we get very different quantities, which can all be interpreted as the "average time in which you hold a stock",

> sum(D2,na.rm=TRUE)/sum(N2,na.rm=TRUE)
[1] 0.3692335
> mean(M1,na.rm=TRUE)
[1] 0.5469591
> mean(M3,na.rm=TRUE)
[1] 1.702908
> mean(M4,na.rm=TRUE)
[1] 12.40229
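For completeness, here is one possible way to assemble the snippets above into a full simulation loop (the number of simulated stocks, and the initialisation of the storage vectors M1, N2, D2, M3 and M4, are assumptions, since they were not shown in the original post),

> library(survival)
> nsim=1000     # hypothetical number of simulated stocks
> ns=1000       # maximum number of transactions per stock, as above
> M1=M3=M4=N2=D2=rep(NA,nsim)
> for(s in 1:nsim){
+ invest=sample(size=ns,c("A","B","C"),prob=c(.7,.2,.1),replace=TRUE)
+ lambda=(invest=="A")*20/(365*24*60*60)+(invest=="B")*15/365+(invest=="C")*10
+ E=rexp(ns,rate=1/lambda)
+ T=cumsum(E); T=T[T<20]
+ # ... then apply methods 1 to 4, exactly as detailed above ...
+ }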

If we change the probabilities (and assume that high-frequency investors are much more numerous than long-term ones), e.g.

> invest=sample(size=ns,c("A","B","C"),
+ prob=c(.9,.09,.01),replace=TRUE)

then the first two estimates are rather different. But not the last two.

> sum(D2,na.rm=TRUE)/sum(N2,na.rm=TRUE)
[1] 0.04072227
> mean(M1,na.rm=TRUE)
[1] 0.06393767
> mean(M3,na.rm=TRUE)
[1] 0.2504322
> mean(M4,na.rm=TRUE)
[1] 12.05508

So I have to confess that the "average time in which you hold a stock" can be almost anything from 10 seconds to 10 years: it clearly depends on the way the average is calculated. The second point is that even if the proportion of high-frequency trading is extremely high, it should not affect the last estimate (which is, from my point of view, the most interesting one, and might also be improved by integrating a censored variate here as well). So I guess people should be careful when discussing such quantities… And if anyone is willing to share data on that topic, I'd be glad to look at them…

Even odds

This evening, I found a nice probabilistic puzzle on http://www.futilitycloset.com/: "A bag contains 16 billiard balls, some white and some black. You draw two balls at the same time. It is equally likely that the two will be the same color as different colors. What is the proportion of colors within the bag?"
To be honest, I did not understand the answer given on the blog, but if we write it down, we want to solve

$$\mathbb{P}(\text{same color})=\mathbb{P}(\text{different colors})=\frac12$$

Let us count: if $n$ is the total number of balls, and if $k$ is the number of white balls, then

$$\frac{\binom{k}{2}+\binom{n-k}{2}}{\binom{n}{2}}=\frac12,\qquad\text{i.e.}\qquad k(k-1)+(n-k)(n-k-1)=\frac{n(n-1)}{2}$$

I.e. we want to solve a polynomial equation (of order 2) in $k$, for a given total number of balls $n$,

$$k^2-nk+\frac{n(n-1)}{4}=0,\qquad\text{i.e.}\qquad k=\frac{n\pm\sqrt{n}}{2}$$

If $n$ is equal to 16, then $k$ is either 6 or 10 (since $\sqrt{16}=4$). It can be visualized below

> balls=function(n=16){
+ NB=rep(NA,n)
+ for(k in 2:(n-2)){
+ NB[k]=(k*(k-1)+(n-k)*(n-k-1))
+ }
+ k=which(NB==n*(n-1)/2)
+ if(length(k)>0){
+ plot(1:n,NB,type="b")
+ abline(h=n*(n-1)/2,col="red")
+ points((1:n)[k],NB[k],pch=19,col="red")}
+ return((1:n)[k])}
> balls()
[1]  6 10

But more generally, we can look for other values of $n$, and other pairs of solutions of such a problem. I am not good at arithmetic, so let us run some code. And what we get is quite nice: if $n$ admits a pair of solutions $(k_1,k_2)$, then $n$ is the square of another integer, say $n=m^2$. Further, the difference between $k_2$ and $k_1$ is precisely $m$. And $k_2$ will be one of the answers when the total number of balls is $(m+1)^2$. Thus, recursively, it is extremely simple to get all possible answers. Below, we have $n$, $k_1$, $k_2$, and the difference between $k_2$ and $k_1$,

> for(s in 4:1000){
+ b=balls(s)
+ if(length(b)>0) print(c(s,b,diff(b)))
+ }
[1] 9 3 6 3
[1] 16  6 10  4
[1] 25 10 15  5
[1] 36 15 21  6
[1] 49 21 28  7
[1] 64 28 36  8
[1] 81 36 45  9
[1] 100  45  55  10
[1] 121  55  66  11
[1] 144  66  78  12
[1] 169  78  91  13
[1] 196  91 105  14
[1] 225 105 120  15
[1] 256 120 136  16
[1] 289 136 153  17
[1] 324 153 171  18
[1] 361 171 190  19
[1] 400 190 210  20
[1] 441 210 231  21
[1] 484 231 253  22
[1] 529 253 276  23
[1] 576 276 300  24
[1] 625 300 325  25
[1] 676 325 351  26
[1] 729 351 378  27
[1] 784 378 406  28
[1] 841 406 435  29
[1] 900 435 465  30
[1] 961 465 496  31

Thus, given an integer $m$, consider an urn with $m^2$ balls: if we draw two balls at the same time, it is equally likely that the two will be the same color as different colors. The numbers of balls of each color within the bag are then, respectively,

$$\frac{m(m-1)}{2}\qquad\text{and}\qquad\frac{m(m+1)}{2}$$

Finally, observe that those numbers, $\binom{m}{2}$ and $\binom{m+1}{2}$, are well known from Pascal's triangle, and are also known as triangular numbers,

http://freakonometrics.blog.free.fr/public/perso5/tr-pascal.gif

Maths can be magic, sometimes…

MAT8886 Extremes and sums (of i.i.d. random variables)

Yesterday, we briefly discussed sums and maxima of i.i.d. random variables, using the concept of subexponential distributions. Today, we will introduce the concept of regular variation: a positive function $f$ is said to be regularly varying (at infinity), denoted $f\in\mathcal{RV}_{\beta}$, for some $\beta\in\mathbb{R}$, if

$$\lim_{t\rightarrow\infty}\frac{f(tx)}{f(t)}=x^{\beta}$$
for all $x>0$. And this concept can be related to sums and maxima (see section 6.2.6 in Embrechts et al. (1997)). Consider i.i.d. positive random variables $X_1,X_2,\ldots$: let $S_n=X_1+\cdots+X_n$ and $M_n=\max\{X_1,\ldots,X_n\}$. Then it can be shown easily that

  • $\mathbb{E}[X_1]<\infty$ if and only if

$$\frac{M_n}{S_n}\rightarrow0\text{ almost surely, as }n\rightarrow\infty$$

  • $\overline{F}\in\mathcal{RV}_{-\alpha}$ for some $\alpha\in(0,1)$ if and only if there exists a non-degenerate variable $Z$ such that

$$\frac{M_n}{S_n}\xrightarrow{\ \mathcal{L}\ }Z\text{, as }n\rightarrow\infty$$

  • $\overline{F}\in\mathcal{RV}_{0}$ (i.e. $\overline{F}$ is slowly varying) if and only if

$$\frac{M_n}{S_n}\rightarrow1\text{ in probability, as }n\rightarrow\infty$$
It is not that simple to check such convergences formally, but it is still possible to use graphs to study the behavior of the empirical version of those quantities. Consider the following function to visualize the convergence of the empirical ratios,

CONVERGENCE=function(g,p=1,n=500000){
set.seed(1)
X=g(n);X1=g(n);X2=g(n);X3= g(n);X4=g(n)
Tp =cummax(X^p)/cumsum(X^p)
Tp1=cummax(X1^p)/cumsum(X1^p)
Tp2=cummax(X2^p)/cumsum(X2^p)
Tp3=cummax(X3^p)/cumsum(X3^p)
Tp4=cummax(X4^p)/cumsum(X4^p)
plot(Tp4,type="l",ylim=c(0,1),log="x",
xlim=c(100,n),ylab="",col="light blue",xlab="")
lines(Tp1,col="light green")
lines(Tp2,col="yellow")
lines(Tp3,col="pink")
lines(Tp,lwd=2)
abline(h=0:1,col="red",lty=2)
}

or the following to study the “asymptotic” distribution of the ratio on simulated samples

LIMITDIST=function(g,p=1,n=500000,ns=1000){
set.seed(1)
T=rep(NA,ns)
for(i in 1:ns){
X=g(n)
T[i]=max(X^p)/sum(X^p)
}
hist(T,breaks=seq(0,1,by=.05),probability=TRUE,
col="light green",ylab="",xlab="",main="")
}

In the case of exponentially distributed variables, we have

CONVERGENCE(rexp)

For variables with a lognormal distribution,

CONVERGENCE(rlnorm)

And finally, consider the case of a Pareto distribution

rpareto=function(n){runif(n)^(-1/1.5)-1}
CONVERGENCE(rpareto)

Here, it looks like those three distributions have a finite mean (and actually, they do). To go one step further, for $p>0$, define $S_n(p)=X_1^p+\cdots+X_n^p$ and $M_n(p)=\max\{X_1^p,\ldots,X_n^p\}$. Then analogous results can be derived,

  • $\mathbb{E}[X_1^p]<\infty$ if and only if

$$\frac{M_n(p)}{S_n(p)}\rightarrow0\text{ almost surely, as }n\rightarrow\infty$$

  • $\overline{F}\in\mathcal{RV}_{-\alpha}$ for some $\alpha\in(0,p)$ if and only if there exists a non-degenerate variable $Z_p$ such that

$$\frac{M_n(p)}{S_n(p)}\xrightarrow{\ \mathcal{L}\ }Z_p\text{, as }n\rightarrow\infty$$

  • $\overline{F}\in\mathcal{RV}_{0}$ if and only if

$$\frac{M_n(p)}{S_n(p)}\rightarrow1\text{ in probability, as }n\rightarrow\infty$$
Again, it is possible to use the function defined above,

CONVERGENCE(rexp,p=2)

or

CONVERGENCE(rexp,p=3)

or even

CONVERGENCE(rexp,p=10)

If the power is not too high, it looks like the ratio goes to zero. But when it becomes larger, it looks like more simulations might be necessary to say something relevant.

CONVERGENCE(rlnorm,p=2)

or

CONVERGENCE(rlnorm,p=3)

Here also, it looks like all the moments are finite, so the distribution behaves here like a light tailed one (and actually, all the moments of the lognormal distribution are indeed finite). And finally, if we consider the case of a Pareto distribution

CONVERGENCE(rpareto,p=2)

Then it looks like it is a heavy tailed distribution. In order to get a better understanding, plot the distribution of the ratio obtained from 1,000 simulated samples (of size 500,000),

LIMITDIST(rpareto,p=1)

versus

LIMITDIST(rpareto,p=2)

So obviously, something is going on between 1 and 2 (recall that the tail index of the simulated Pareto distribution is 1.5).
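A possible numerical complement (a small sketch reusing the simulation design of LIMITDIST above, but returning the simulated ratios instead of plotting them): for that Pareto distribution, the average ratio should be close to 0 for p=1 (finite mean) and bounded away from 0 for p=2 (infinite second moment),

LIMITVAL=function(g,p=1,n=500000,ns=1000){
set.seed(1)
T=rep(NA,ns)
for(i in 1:ns){
X=g(n)
T[i]=max(X^p)/sum(X^p)
}
return(T)
}
mean(LIMITVAL(rpareto,p=1))
mean(LIMITVAL(rpareto,p=2))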

Can everyone come out ahead in a bet?

A little paradox for today (borrowed from http://www.futilitycloset.com/). Following a silly bet with a colleague, we each agreed to wear a tie, and the one who paid the most for his tie will have to give it to the other (something we did not know before choosing said tie). A priori, each of us is as likely to win as the other. If I lose, I lose the price of my tie, say $x$. Whereas if I win, I win a tie that is worth more than mine, say $y$, with $y>x$. Hence, my expected gain is

$$\frac12\times(-x)+\frac12\times y>0$$

so, on average, I should come out ahead in this bet. Except that my colleague can make exactly the same reasoning… So, on average, we both win… Surprising, isn't it?

Gold price and fear

Via @theEconomist, I understood that there might be connections between the price of gold (which is said to be extremely high nowadays) and the VIX S&P 500 index (the option volatility index, i.e. the so-called "fear index", as discussed – in French – a few months ago). This has also been discussed on several blogs, e.g. http://etfdailynews.com/ or http://blogs.marketwatch.com/. Via Yahoo quotes, it is also easy to get the S&P 500 VIX index,

> library(tseries)
> X=get.hist.quote("^VIX")
> T=time(X)
> Y=as.POSIXlt(T)$year+1900
> X2011=X[Y==2011,]
> VIX=X2011[,4]
> VIX100=as.numeric(VIX)/VIX[1]*100
> T2011=T[Y==2011]
> plot(T2011,VIX100,lwd=2,col="red",type="l",
+ xlab="",ylab="",ylim=c(60,290))

And a huge xls file can give us the price of gold (on a daily basis). But we can extract only one series (with the price in USD, which is the series of interest here)

> goldprice=read.table(
+ "http://freakonometrics.blog.free.fr/public/data/goldpriceUSD.csv",
+ header=TRUE,sep=";",dec=",")
> T=as.Date(goldprice$Name,"%d/%m/%y")
> GP=goldprice$USdollar
> Y=as.POSIXlt(T)$year+1896
> GP2011=GP[Y==2011]
> GP100=GP2011/GP2011[1]*100
> T2011=T[Y==2011]
> lines(T2011-4*365.25,GP100,lwd=2,col="blue")

We can see that scales are quite different on those two series (starting at 100 at the beginning of January 2011),

An alternative might be to consider not the price of gold, but something more psychological, like Internet searches. It is possible to download the csv file of queries for "gold price" on Google, via Google Insights.

 
> google=read.table(
+ "http://freakonometrics.blog.free.fr/public/data/google.csv",
+ skip=4,header=TRUE,sep=",",nrows=51)
> W=as.Date(substr(as.character(google$Semaine),1,10))
> G=google$gold.price
> G100=G/G[1]*100
> lines(W,G100,lwd=2,col="blue")

which gives the following graph (again, starting at 100 at the beginning of January 2011),

Here, we can clearly observe that the two series are related, maybe cointegrated. Nice isn’t it ?

Normality of the estimators in a regression

This week, in class, we discussed the normality of the estimators in a linear regression. Assuming normality of the residuals, we saw in class that $\widehat{\boldsymbol{\beta}}=(\boldsymbol{X}^\top\boldsymbol{X})^{-1}\boldsymbol{X}^\top Y$ is a Gaussian estimator. In particular, each of the estimators is then Gaussian, in the sense that $\widehat{\beta}_k\sim\mathcal{N}\big(\beta_k,\sigma^2[(\boldsymbol{X}^\top\boldsymbol{X})^{-1}]_{k,k}\big)$, which can be visualized on the following graph (the intercept is on the x-axis, and the slope on the y-axis), with a 95% confidence interval,

Actually, since the variance is unknown (but can be estimated), if we replace the variance of the residuals by its estimator, the distribution of the (standardized) estimator is a Student $t$ distribution. The same holds for the estimator of the slope of the regression line,

$$\frac{\widehat{\beta}_k-\beta_k}{\sqrt{\widehat{\sigma}^2[(\boldsymbol{X}^\top\boldsymbol{X})^{-1}]_{k,k}}}\sim\mathcal{S}t(n-2)$$

Since the pair of estimators is also (jointly) Gaussian, we can construct not a confidence interval (we are no longer in dimension 1) but a confidence ellipse. We use an elliptical shape because it is the smallest region in which the pair will lie with probability 95% (as discussed in an older post).

But to better understand this notion of confidence ellipse, the simplest approach is to resample from the dataset. Indeed, since we only have one dataset, we only have one estimate, and the discussion about the distribution of our estimator is purely theoretical. To visualize that distribution, we would need hundreds, or even thousands, of similar datasets. Which we do not have. The solution is then to draw points from our sample at random (with replacement), i.e. to bootstrap,

set.seed(1)
COEF=matrix(NA,10000,2)
for(s in 1:nrow(COEF)){
I=sample(1:nrow(cars),nrow(cars),replace=TRUE)
COEF[s,]=lm(dist~speed,data=cars[I,])$coefficients
}

If we look at what happens, draw after draw, we generate a bunch of samples, and for each sample, we fit a regression line.

http://freakonometrics.blog.free.fr/public/perso5/BOOOT.gif

The distributions – over all those samples – of the intercept and of the slope do indeed look Gaussian,

hist(COEF[,1],col="light blue",prob=TRUE)
u=seq(min(COEF[,1]),max(COEF[,1]),length=500)
v=dnorm(u,mean(COEF[,1]),sd(COEF[,1]))
lines(u,v,lwd=3,col="red")
hist(COEF[,2],col="light blue",prob=TRUE)
u=seq(min(COEF[,2]),max(COEF[,2]),length=500)
v=dnorm(u,mean(COEF[,2]),sd(COEF[,2]))
lines(u,v,lwd=3,col="red")

for the distribution of the estimator of the slope, while for the estimator of the intercept, we get the following distribution,

If we now look at the joint distribution,

we recover an elliptical shape for the cloud of points (and a strong negative correlation between the two estimators). But if we dig a little further,

library(ellipse)
reg=lm(dist~speed,data=cars)
e=ellipse(reg)
plot(e,type="l",lwd=2)
polygon(e,col="light blue")
points(COEF,cex=.5)

the ellipse does not (quite) coincide with the one obtained theoretically. Which suggests that the assumption of normality of the residuals might have to be reconsidered…
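A quick way to quantify that discrepancy (a small sketch, using the objects defined above) is to compare the empirical covariance matrix of the bootstrapped coefficients with the theoretical, Gaussian-based one,

cov(COEF)
vcov(reg)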

Fisher-Tippett theorem with an historical perspective

A couple of weeks ago, Rafael asked me if I had something on the history of extreme value theory. Since I will get back to fundamental results about extremes in my course, I promised I will write down a short post on all that issue.

To start from the beginning, in 1928, Ronald Fisher and Leonard Tippett formulated the three types of limiting distributions for the maximum term of a random sample (Fisher & Tippett (1928)). The problem was to characterize the functions $G$ such that

$$\lim_{n\rightarrow\infty}\mathbb{P}\left(\frac{M_n-b_n}{a_n}\leq x\right)=G(x)$$

for some normalizing sequences $a_n>0$ and $b_n$, where $M_n=\max\{X_1,\ldots,X_n\}$ and the $X_i$'s are i.i.d. with cumulative distribution function $F$. They had supporting arguments, but no (rigorous) proof. Nevertheless, they obtained that the only possible types for $G$ were

$$G(x)=\exp\left(-x^{-\alpha}\right),\qquad x>0,$$

i.e. Fréchet type (Pareto-type tails), or

$$G(x)=\exp\left(-(-x)^{\alpha}\right),\qquad x<0,$$

i.e. Weibull type (bounded distribution type), or

$$G(x)=\exp\left(-e^{-x}\right),\qquad x\in\mathbb{R},$$

i.e. Gumbel type (exponential-type tails). Emil Gumbel has been intensively using the so-called Gumbel distribution on river flows, since (as he explained in 1958), "it seems that the rivers know the theory. It only remains to convince the engineers of the validity of this analysis".
Independently of that work (published in 1928), Maurice Fréchet considered in 1927 (in Sur la loi de probabilité de l'écart maximum) possible limits of

$$\mathbb{P}\left(\frac{M_n}{a_n}\leq x\right)$$

and obtained only the Fréchet distribution, $\exp(-x^{-\alpha})$, as a possible limit. Richard von Mises gave in 1936 sufficient, but not necessary, conditions for the (max) domains of attraction, i.e. a characterization of the functions $F$ such that the (normalized) maxima converge to some specific function $G$ (von Mises (1936)). E.g. he noticed that a sufficient condition on $F$ to be in the (max) domain of attraction of the Gumbel distribution is that

$$\lim_{x\rightarrow x_F}\frac{d}{dx}\left(\frac{1-F(x)}{f(x)}\right)=0$$

where $f=F'$ and $x_F$ denotes the upper endpoint of the support of $F$.

Then in 1943, Boris Gnedenko gave a complete characterization of those three types, with explicit conditions for two of them (heavy tails, i.e. Fréchet type, and bounded support, i.e. Weibull), but his necessary and sufficient condition for the third one was based on a function that was not explicitly defined (see Gnedenko (1943)). Laurens de Haan, in the 70's, derived checkable conditions for Gumbel's type.
Boris Gnedenko proved (in Section 4 of his paper) that $F$ is in the (max) domain of attraction of the Fréchet distribution $\exp(-x^{-\alpha})$ if and only if $1-F$ is regularly varying at infinity, with index $-\alpha$ (even if the term "regular variation" was not mentioned in the paper). Similar results were derived to characterize functions in the (max) domain of attraction of the Weibull distribution. For the (max) domain of attraction of the Gumbel distribution, Boris Gnedenko obtained that a necessary and sufficient condition was that there exists a function $A$ such that $A(t)$ goes to 0 at infinity and

$$\lim_{t\rightarrow x_F}\frac{1-F\big(t+x\,A(t)\big)}{1-F(t)}=e^{-x}$$

Several papers have discussed what that function $A$ could be, e.g. David Mejzler in 1949 (in Russian, but see also his 1965 paper), and Laurens de Haan in 1970 and 1971 (following the dramatic flood in the Netherlands in 1953, researchers in the Netherlands focused on dikes, and on extreme value applications).

Mejzler’s idea was to work on quantiles, and not on the cumulative distribution function. I.e. define

http://freakonometrics.hypotheses.org/files/2015/12/ext-21.gif

Then a necessary and sufficient condition for F to be in the (max) domain of attraction of http://freakonometrics.hypotheses.org/files/2015/12/ext-18.gif is that

http://freakonometrics.hypotheses.org/files/2015/12/ext-23.gif

Laurens de Haan proved in 1971 that the function $A$ can be – in general – given by

$$A(t)=\frac{\displaystyle\int_t^{x_F}\big(1-F(s)\big)\,ds}{1-F(t)}$$

And in 1976, Laurens de Haan obtained a three-type convergence working on quantile function http://freakonometrics.hypotheses.org/files/2015/12/ext-26.gif (with a much shorter proof).
There have been many many papers extending Fisher-Tippett’s theorem, e.g. on non-independent sequences, like exchangeable ones (in a paper by Simeon Berman in 1962, or on stationary Gaussian sequences in 1964).

Infidelity and econometrics

On http://www.bakadesuyo.com, there was recently an interesting discussion about infidelity, the key question being “at what ages are men and women most likely to have affairs?” The discussion is based on some graphs, e.g.

The source is a paper by Donald Cox, based on a sample of 3,432 respondents (the NHSLS dataset). And to be honest, I have been surprised by the shape of the curves, especially for men… In order to compare, it is possible to use another dataset that can be found in R,

> library(Ecdat)
> data(Fair)
> tail(Fair)
sex age   ym child religious education occupation
596   male  47 15.0   yes         3        16          4
597   male  22  1.5   yes         1        12          2
598 female  32 10.0   yes         2        18          5
599   male  32 10.0   yes         2        17          6
600   male  22  7.0   yes         3        18          6
601 female  32 15.0   yes         3        14          1
rate nbaffairs
596    2         7
597    5         1
598    4         7
599    5         2
600    2         2
601    5         1

with 601 observations (from Fair (1977)). It is possible to run a Poisson regression to describe the number of affairs in the past year, e.g. for men,

> library(splines)
> regM=glm(nbaffairs~bs(age),family=poisson,
+ data=Fair[Fair$sex=="male",])
> a=seq(20,60)
> N=predict(regM,newdata=data.frame(age=a),type="response")
> plot(a,N,type="l",lwd=2,col="red")

or for women,

> regF=glm(nbaffairs~bs(age),family=poisson,
+ data=Fair[Fair$sex=="female",])
> N=predict(regF,newdata=data.frame(age=a),type="response")
> plot(a,N,type="l",lwd=2,col="red",lty=2)

On that (larger) dataset, we obtain curves that are more intuitive… But maybe the Poisson distribution is not an appropriate model. For instance, having no affairs does not mean that the person did not want to… So perhaps a more interesting model would be a Poisson model with zero-inflation, i.e. some people are honest and do not want to have affairs (and appear as 0), while some do want to have affairs, and for them the number of affairs is Poisson distributed (and can take the value 0). If we focus on people who do not want to have affairs, the model (and the prediction) is the following, where we plot the probability of not being interested in having an affair,

> library(pscl)
> regM0=zeroinfl(nbaffairs~bs(age)|bs(age),dist="poisson",
+ link="logit",data=Fair[Fair$sex=="male",])
> N0=predict(regM0,newdata=data.frame(age=a),type="zero")
> plot(a,N0,type="l",lwd=2,col="blue")

For those willing to have an affair, here is the parameter of the Poisson distribution of the number of affairs,

> Nc=predict(regM0,newdata=data.frame(age=a),type="count")
> plot(a,Nc,type="l",lwd=2,col="purple")

The same can be done for women, starting with the probability of not being willing to have an affair,

and then the Poisson rate for women willing to have an affair (a possible sketch is given below),
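Here is a possible sketch of that code for women, simply mirroring the male model above (regF0, N0F and NcF are names introduced here),

> regF0=zeroinfl(nbaffairs~bs(age)|bs(age),dist="poisson",
+ link="logit",data=Fair[Fair$sex=="female",])
> N0F=predict(regF0,newdata=data.frame(age=a),type="zero")
> plot(a,N0F,type="l",lwd=2,col="blue",lty=2)
> NcF=predict(regF0,newdata=data.frame(age=a),type="count")
> plot(a,NcF,type="l",lwd=2,col="purple",lty=2)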

If we focus on people willing to have an affair, the curves are the following,

i.e. men below 40 are more interested, but after 40, the probability drops, while women become more and more likely to be willing to have an affair as they get older. On the other hand, young women having affairs might be fewer, but they usually have many more affairs than men…

Fisher-Tippett theorem and limiting distribution for the maximum

Tomorrow, we will discuss the Fisher-Tippett theorem. The idea is that there are only three possible limiting distributions for normalized versions of the maxima of i.i.d. samples, $M_n=\max\{X_1,\ldots,X_n\}$. For a bounded distribution, consider e.g. the uniform distribution on the unit interval, i.e. $F(x)=x$ on $[0,1]$. Let $b_n=1$ and $a_n=1/n$. Then, for all $n$ and all $x\leq0$,

$$\mathbb{P}\left(\frac{M_n-b_n}{a_n}\leq x\right)=\mathbb{P}\big(n(M_n-1)\leq x\big)=\left(1+\frac{x}{n}\right)^n\rightarrow e^{x}=\exp\big(-(-x)\big)$$

i.e. the limiting distribution of the maximum is Weibull’s.

set.seed(1)
s=1000000
n=100
M=matrix(runif(s),n,s/n)
V=apply(M,2,max)
bn=1
an=1/n
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(-7,1),main="",breaks=seq(-20,10,by=.25))
u=seq(-10,0,by=.1)
v=exp(u)
lines(u,v,lwd=3,col="red")

For a heavy tailed distribution, or Pareto-type tails, consider Pareto samples, with distribution function $F(x)=1-x^{-\alpha}$, $x\geq1$ (with $\alpha=2$ in the simulation below). Let $b_n=0$ and $a_n=n^{1/\alpha}$, then

$$\mathbb{P}\left(\frac{M_n-b_n}{a_n}\leq x\right)=\mathbb{P}\left(\frac{M_n}{n^{1/\alpha}}\leq x\right)=\left(1-\frac{x^{-\alpha}}{n}\right)^n\rightarrow\exp\left(-x^{-\alpha}\right)$$

which means that the limiting distribution is Fréchet’s.

library(evd)   # for dfrechet()
set.seed(1)
s=1000000
n=100
M=matrix((runif(s))^(-1/2),n,s/n)
V=apply(M,2,max)
bn=0
an=n^(1/2)
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(0,7),main="",breaks=seq(0,max(U)+1,by=.25))
u=seq(0,10,by=.1)
v=dfrechet(u,shape=2)
lines(u,v,lwd=3,col="red")

For a light tailed distribution, or exponential tails, consider e.g. a sample of exponentially distributed variates, with common distribution function $F(x)=1-e^{-x}$. Let $b_n=\log n$ and $a_n=1$, then

$$\mathbb{P}\left(M_n-\log n\leq x\right)=\left(1-\frac{e^{-x}}{n}\right)^n\rightarrow\exp\left(-e^{-x}\right)$$

i.e. the limiting distribution for the maximum is Gumbel’s distribution.

library(evd)
set.seed(1)
s=1000000
n=100
M=matrix(rexp(s,1),n,s/n)
V=apply(M,2,max)
(bn=qexp(1-1/n))
log(n)
an=1
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(-2,7),ylim=c(0,.39),main="",breaks=seq(-5,15,by=.25))
u=seq(-5,15,by=.1)
v=dgumbel(u)
lines(u,v,lwd=3,col="red")

Consider now a Gaussian $\mathcal{N}(0,1)$ sample. We can use the following approximation of the cumulative distribution function (based on l'Hopital's rule),

$$1-\Phi(x)\sim\frac{\varphi(x)}{x}$$

as $x\rightarrow\infty$. Let $b_n=\Phi^{-1}(1-1/n)$ and $a_n=1/b_n$. Then we can get

$$\mathbb{P}\left(\frac{M_n-b_n}{a_n}\leq x\right)\rightarrow\exp\left(-e^{-x}\right)$$

as $n\rightarrow\infty$. I.e. the limiting distribution of the maximum of a Gaussian sample is Gumbel's. But what we do not see here is that, for a Gaussian sample, the convergence is extremely slow, i.e., with 100 observations, we are still far away from the Gumbel distribution,
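Here is one possible way to see it (a sketch that simply mirrors the code given below for 1,000 observations, with n=100 and the corresponding normalizing constants),

set.seed(1)
s=1000000
n=100
M=matrix(rnorm(s,0,1),n,s/n)
V=apply(M,2,max)
(bn=qnorm(1-1/n,0,1))
an=1/bn
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(-2,7),ylim=c(0,.39),main="",breaks=seq(-5,15,by=.25))
u=seq(-5,15,by=.1)
v=dgumbel(u)
lines(u,v,lwd=3,col="red")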

and it is only slightly better with 1,000 observations,

set.seed(1)
s=10000000
n=1000
M=matrix(rnorm(s,0,1),n,s/n)
V=apply(M,2,max)
(bn=qnorm(1-1/n,0,1))
an=1/bn
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(-2,7),ylim=c(0,.39),main="",breaks=seq(-5,15,by=.25))
u=seq(-5,15,by=.1)
v=dgumbel(u)
lines(u,v,lwd=3,col="red")

Even worse, consider lognormal observations. In that case, recall that if we apply an (increasing) transformation to the variates, we stay in the same domain of attraction. Hence, since $Y=e^{Z}$ is lognormal when $Z$ is Gaussian, if

$$\mathbb{P}\left(\frac{M_n^{Z}-b_n}{a_n}\leq x\right)\rightarrow\exp\left(-e^{-x}\right)$$

then

$$\mathbb{P}\left(\frac{M_n^{Y}-e^{b_n}}{a_n\,e^{b_n}}\leq x\right)\rightarrow\exp\left(-e^{-x}\right)$$

i.e. using Taylor's approximation on the right term,

$$e^{\,b_n+a_nx}\approx e^{b_n}\left(1+a_nx\right)$$

This gives us the normalizing coefficients we should use here, $\tilde{b}_n=e^{b_n}$ and $\tilde{a}_n=a_ne^{b_n}=e^{b_n}/b_n$.

set.seed(1)
s=10000000
n=1000
M=matrix(rlnorm(s,0,1),n,s/n)
V=apply(M,2,max)
bn=exp(qnorm(1-1/n,0,1))
an=exp(qnorm(1-1/n,0,1))/(qnorm(1-1/n,0,1))
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(-2,7),ylim=c(0,.39),main="",breaks=seq(-5,40,by=.25))
u=seq(-5,15,by=.1)
v=dgumbel(u)
lines(u,v,lwd=3,col="red")

Building a ROC curve

Just before the break, Jean-Pierre Liégeois, a young reader from the Var, asked me by email, "starting from a logistic regression (or a 2×2 confusion matrix), how can one write, in R, a program that builds the associated ROC curve?" Before going further (and answering the question), let me point to an older post on confusion matrices. The idea is that we have a predictor of a variable taking values 0 and 1 (or, using the classical terminology, "positive" and "negative"), for instance a logistic model. Formally, for each observation we have an observed value $Y$ and (as I explained in another post) a score $\widehat{S}$. And it is this score that we will use to build the ROC curve. The score is used to predict $\widehat{Y}$. The classification rule is then simple: we fix a threshold $s$, and

  • if $\widehat{S}>s$, then $\widehat{Y}$ is "positive"
  • if $\widehat{S}\leq s$, then $\widehat{Y}$ is "negative"

We can then build a so-called confusion matrix, which is simply a contingency table (columns give the observed value $Y$, rows the predicted value $\widehat{Y}$),

                        observed "positive"   observed "negative"
predicted "positive"            TP                    FP
predicted "negative"            FN                    TN

where TP stands for the true positives, TN for the true negatives, FP for the false positives (or type I errors, in the terminology of decision theory, or of hypothesis testing), and FN for the false negatives (or type II errors).
What about the implementation in R? Let us start by generating some data, and estimating a (logistic) regression model.

set.seed(1)
n=50
X=rnorm(n)
Y=rbinom(n,size=1,prob=
exp(2*X-1)/(1+exp(2*X-1)))
B=data.frame(Y,X)
reg=glm(Y~X,family=binomial,data=B)
S=predict(reg,type="response")

We now have our observations (a variable taking values 0 or 1) and our score. We can then pick several possible values for the threshold, and visualize the true positive rate as a function of the false positive rate.

plot(0:1,0:1,xlab="False Positive Rate",
ylab="True Positive Rate",cex=.5)
for(s in seq(0,1,by=.01)){
Ps=(S>s)*1
FP=sum((Ps==1)*(Y==0))/sum(Y==0)
TP=sum((Ps==1)*(Y==1))/sum(Y==1)
points(FP,TP,cex=.5,col="red")
}

We then get the following graph,

If we connect the points, we get the ROC curve,

FP=TP=rep(NA,101)
plot(0:1,0:1,xlab="False Positive Rate",
ylab="True Positive Rate",cex=.5)
for(s in seq(0,1,by=.01)){
Ps=(S>s)*1
FP[1+s*100]=sum((Ps==1)*(Y==0))/sum(Y==0)
TP[1+s*100]=sum((Ps==1)*(Y==1))/sum(Y==1)
}
lines(c(FP),c(TP),type="s",col="red")

Actually, the code is rather simple, and it can be found in various R packages, e.g.

library(ROCR)
pred=prediction(S,Y)
perf=performance(pred,"tpr", "fpr")
plot(perf,colorize = TRUE)
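And to get the area under the curve (AUC) from the same ROCR objects, a natural companion to the ROC curve,

performance(pred,"auc")@y.values[[1]]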

We can also have fun bootstrapping the sample to build confidence intervals, or fitting theoretical models,

library(verification)
roc.plot(Y,S, xlab = "False Positive Rate",
ylab = "True Positive Rate", main = "", CI = TRUE,
n.boot = 100, plot = "both", binormal = TRUE)

or (again with confidence bands obtained by bootstrap),

library(pROC)
PROC=plot.roc(Y,S,main="", percent=TRUE,
ci=TRUE)
SE=ci.se(PROC,specificities=seq(0, 100, 5))
plot(SE, type="shape", col="light blue")