Tag Archives: R-english

Non transitivity of correlation for random vectors in dimension 3

Dependence in dimension 2 is difficult. But one has to admit that dimension 2 is way simpler than dimension 3! I recently rediscovered a nice paper, Langford, Schwertman & Owens (2001), on transitivity of the property of being positively correlated (which inspired the odd title of this post). And more recently, Castro Sotos, Vanhoof, Van Den Noortgate & Onghena (2001) conducted a study which confirmed that there are strong misconceptions of correlation (and I guess, not only because probabilistic reasoning is extremely weak, as mentioned in Stock & Gross (1989)) and association, or correlation (as already stated in Estepa & Batanero (1996), or Batanero, Estepa, Godino and Green (1996)). My understanding is that it is possible to get almost anything… even counterintuitive results. For instance, if we want to mix independence and comonotonicity (i.e. perfect positive dependence), all the theorems you might think of will probably turn out to be incorrect. Consider the following claim (based on some old examples I have been using in my courses 5 or 6 years ago, see e.g. here)

“If X and Y are comonotonic, and if Y and Z are comonotonic, then X and Z are comonotonic”

Well, this result seems to be intuitive, and probably valid. But it is not. Consider the following triplet,

Projections on bivariate planes of the three dimensional vector are

Here, X and Y are comonotonic, so are Y and Z, but X and Z are independent… Weird, isn’t it ? Another one ?
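
Since the original figures are not reproduced here, here is a minimal sketch of one possible construction of such a triplet (not necessarily the one used in the course): take X and Z independent Bernoulli variables, and Y = X + Z; any change of Y can only go in the same direction as X and as Z,

# one possible construction: X and Z independent, Y = X + Z
set.seed(1)
n=1e5
X=rbinom(n,size=1,prob=.5)
Z=rbinom(n,size=1,prob=.5)
Y=X+Z
# two variables are comonotonic if, on the support, increments never have opposite signs
comonotonic=function(U,V){
S=unique(cbind(U,V))
all(outer(S[,1],S[,1],"-")*outer(S[,2],S[,2],"-")>=0)}
comonotonic(X,Y)   # TRUE
comonotonic(Y,Z)   # TRUE
comonotonic(X,Z)   # FALSE, X and Z are actually independent
table(X,Z)/n       # close to the product of the margins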

“If X and Y are comonotonic, and if Y and Z are independent, then X and Z are independent”

Again, even if it is intuitive, it is not correct… Consider for instance the following 3 dimensional distribution,

Here, X and Y are comonotonic, while Y and Z are independent, but X and Z are countermonotonic (perfect negative dependence). It is also possible to consider the following distribution,

that can be visualized below,

In that case, X and Y are comonotonic, while Y and Z are independent, but X and Z are comonotonic (perfect positive dependence). So obviously, we should be able to construct any kind of counterexample, for any kind of result we might think of as intuitive.
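
Here again, just a sketch of possible constructions (not necessarily the distributions plotted above), reusing the comonotonic() check from the sketch above: start from Y and Z independent Bernoulli variables, and take either X = Y - Z or X = Y + Z,

# Y and Z independent; X = Y - Z or X = Y + Z
Y=rbinom(n,size=1,prob=.5)
Z=rbinom(n,size=1,prob=.5)
countermonotonic=function(U,V){
S=unique(cbind(U,V))
all(outer(S[,1],S[,1],"-")*outer(S[,2],S[,2],"-")<=0)}
X=Y-Z
comonotonic(X,Y)        # TRUE, while Y and Z are independent...
countermonotonic(X,Z)   # ...but X and Z are countermonotonic
X=Y+Z
comonotonic(X,Y)        # TRUE
comonotonic(X,Z)        # TRUE, X and Z are now comonotonic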

To be honest, the problem with intuition is that it usually comes from the Gaussian case, and from the perception that dependence is related to correlation — Pearson's linear correlation. Consider the case of a 3-dimensional random vector, with correlation matrix

$$\begin{pmatrix}1 & a & b\\ a & 1 & c\\ b & c & 1\end{pmatrix}$$

Given two of the pairwise correlations, say $a$ and $b$, what could we say about the third one, $c$? For instance, the intuition is that if $a$ and $b$ are positive, then $c$ is likely to be positive too (perhaps). The only property (at least the most important) we have on that correlation matrix is that it should be positive semi-definite. So if we play with eigenvalues, it should be possible to derive inequalities satisfied by $c$. Langford, Schwertman & Owens (2001) claim (in Theorem 3) that the correlations have to satisfy a condition like

$$1-a^2-b^2-c^2+2abc\geq 0,\qquad\text{i.e.}\qquad ab-\sqrt{(1-a^2)(1-b^2)}\leq c\leq ab+\sqrt{(1-a^2)(1-b^2)}$$

which is simply the fact that the determinant of the correlation matrix has to be positive; that property was already mentioned in Kendall (1948), as an exercise,

But is that a sufficient and necessary condition? Since I am extremely lazy, let us run some numerical calculations to visualize the possible values for $c$, as a function of $a$ and $b$. Consider the following code

U=seq(-1,1,by=.1)
V=seq(-1,1,by=.001)
FSUP=function(a,b){
# largest c keeping the correlation matrix positive semi-definite
DF=function(c){min(eigen(matrix(c(1,a,b,a,1,c,b,c,1),3,3))$values)}
V[max(which(Vectorize(DF)(V)>0))]}
FINF=function(a,b){
# smallest admissible c
DF=function(c){min(eigen(matrix(c(1,a,b,a,1,c,b,c,1),3,3))$values)}
V[min(which(Vectorize(DF)(V)>0))]}
MSUP=outer(U,U,Vectorize(FSUP))
MINF=outer(U,U,Vectorize(FINF))
library(RColorBrewer)
clr=rev(brewer.pal(6, "RdBu"))
U=U[2:20]
MSUP=MSUP[2:20,2:20]
MINF=MINF[2:20,2:20]
persp(U,U,MSUP,col="green",shade=TRUE)
image(U,U,MSUP,breaks=((-3):3)/3,col=clr)
persp(U,U,MINF,col="green",shade=TRUE)
image(U,U,MINF,breaks=((-3):3)/3,col=clr)

Here, we can derive the lower and the upper bounds for $c$, as functions of $a$ and $b$.

In the dark blue area, the bound for the correlation can be really low, while in the dark red area, it is very high (either the lower bound on the left, or the upper bound on the right). Since it might be hard to read, it is possible to fix, for instance, $b$, and to derive bounds for $c$, as a function of $a$.
V=seq(-1,1,by=.001)
U=seq(-1,1,by=.1)
U=U[2:(length(U)-1)]
V=V[2:(length(V)-1)]
U=c(-.9999,U,.9999)
V=c(-.99999,V,.99999)
FSUP=function(a){
DF=function(c){min(eigen(matrix(c(1,a,-.7,a,1,c,-.7,c,1),3,3))$values)}
V[max(which(Vectorize(DF)(V)>0))]}
FINF=function(a){
DF=function(c){min(eigen(matrix(c(1,a,-.7,a,1,c,-.7,c,1),3,3))$values)}
V[min(which(Vectorize(DF)(V)>0))]}

VS=Vectorize(FSUP)(U)
VI=Vectorize(FINF)(U)
plot(c(U,U),c(VS,VI),col="white")
polygon(c(U,rev(U)),c(VS,rev(VI)),
col="yellow",border=NA)
lines(U,VS,lwd=2,col="red")
lines(U,VI,lwd=2,col="red")

On the graphs below, we have the bounds when $b$ is negative (on the left, with $b=-0.7$) and when $b$ is positive (on the right, here $b=+0.7$),

We do observe here extremely nice ellipses… Consider the case of a null correlation $b=0$: then the region of possible values for $a$ and $c$ is the unit disk, bounded by the unit circle ($a^2+c^2\leq 1$).
The interpretation is that if $b$ is null, and so is $a$, then $c$ can take any value between $-1$ and $+1$ (under the assumption that the marginal distributions allow such values, e.g. with Gaussian margins). On the other hand, if $a$ is either $-1$ or $+1$ (perfect negative/positive correlation), then $c$ has to be null…
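
As a sanity check, the determinant condition above gives closed-form bounds, $c\in[ab-\sqrt{(1-a^2)(1-b^2)},\ ab+\sqrt{(1-a^2)(1-b^2)}]$, which can be compared with the numerical (eigenvalue-based) bounds, here with $b=-0.7$ fixed as in the code above (just a quick sketch),

cbound=function(a,b) a*b+c(-1,1)*sqrt((1-a^2)*(1-b^2))
cbound(.6,-.7)
# numerical bounds, with b=-.7 hard-coded in FINF and FSUP above
c(FINF(.6),FSUP(.6))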

Finding Waldo, a flag on the moon and multiple choice tests, with R

I have to admit, first, that finding Waldo has been a difficult task. And I did not succeed. Neither could I correctly spot his shirt (because actually, that was what I was looking for). You know, that red-and-white striped shirt. I guess it should have been possible to look for Waldo's face (assuming that his face does not change), but I still have problems with the size factor (and resolution issues too). The problem is not that simple. At the http://mlsp2009.conwiz.dk/ conference, a prize was offered for writing an algorithm in Matlab. And one can even find Mathematica codes online. But most of those algorithms are based on the idea that we look for similarities with Waldo's face, as described in problem 3 on http://www1.cs.columbia.edu/~blake/'s webpage. You can find papers on that problem, e.g. Friendly & Kwan (2009) (based on statistical techniques, but Waldo is here a pretext to discuss other issues actually), or more recently (but more complex) Garg et al. (2011) on matching people in images of crowds.

What about codes in R? On http://stackoverflow.com/, some ideas can be found (and thanks to Robert Hijmans for his help with his package). So let us try to do something here, on our own. Consider the following picture,

With the following code (based on the following file) it is possible to import the picture, and to extract the colors (based on an RGB decomposition),

> library(raster)
> waldo=brick(system.file("DepartmentStoreW.grd",
+ package="raster"))
> waldo
class       : RasterBrick
dimensions  : 768, 1024, 786432, 3 (nrow,ncol,ncell,nlayer)
resolution  : 1, 1  (x, y)
extent      : 0, 1024, 0, 768  (xmin, xmax, ymin, ymax)
coord. ref. : NA
values      : C:\R\win-library\raster\DepartmentStoreW.grd
min values  : 0 0 0
max values  : 255 255 255

My strategy is simple: try to spot areas with white and red stripes (horizontal stripes). Note that here, I ran the code on a Windows machine; the package is not working well on Mac. In order to get a better understanding of what could be done, let us start with something much simpler: the picture below, with Waldo (and Waldo only). Here, it is possible to extract the three colors (red, green and blue),

> plot(waldo,useRaster=FALSE)

It is possible to extract the red zones (already on the graph above, since red is a primary color), as well as the white ones (green zones on the graphs means a white region on the picture, on the left)

# white component
white = min(waldo[[1]] , waldo[[2]] , waldo[[3]])>220
focalswhite = focal(white, w=3, fun=mean)
plot(focalswhite,useRaster=FALSE)

# red component
red = (waldo[[1]]>150)&(max(  waldo[[2]] , waldo[[3]])<90)
focalsred = focal(red, w=3, fun=mean)
plot(focalsred,useRaster=FALSE)

i.e. here we have the graphs below, with the white regions, and the red ones,

From those two parts, it has been possible to extract the red-and-white stripes from the picture, i.e. some regions that were red above, and white below (or the reverse),

# striped component
striped = red; n=length(values(striped)); h=5
values(striped)=0
values(striped)[(h+1):(n-h)]=(values(red)[1:(n-2*h)]==TRUE)&(values(red)[(2*h+1):n]==TRUE)
focalsstriped = focal(striped, w=3, fun=mean)
plot(focalsstriped,useRaster=FALSE)

So here, we can easily spot Waldo, i.e. the guy with the red-white stripes (with two different sets of thresholds for the RGB decomposition)

Let us try something slightly more complicated, with a zoom on the large picture of the department store (since, to be honest, I know where Waldo is…).

Here again, we can spot the white part (on the left) and the red one (on the right), with some thresholds for the RGB decomposition

Note that we can try to be (much) more selective, playing with thresholds. Here, it is not very convincing: I cannot clearly identify the region where Waldo might be (the two graphs below were obtained playing with thresholds)
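
One way to make this tuning less painful is to wrap the RGB decomposition in a small helper function, so that thresholds can be changed in one place (just a sketch, the threshold values below are arbitrary examples),

# a helper returning a smoothed mask of "red-ish" pixels, given RGB thresholds
colormask=function(pic,rmin=150,gmax=100,bmax=100){
mask=(pic[[1]]>rmin)&(pic[[2]]<gmax)&(pic[[3]]<bmax)
focal(mask, w=3, fun=mean)}
plot(colormask(waldo,rmin=180,gmax=80,bmax=80),useRaster=FALSE)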

And if we look at the overall picture, it is worse. Here are the white zones, and the red ones,

and again, playing with RGB thresholds, I cannot spot Waldo,

Maybe I was a bit optimistic, or ambitious. Let us try something simpler, like finding a flag on the moon. Consider the picture below on the left, and let us see if we can spot an American flag,

Again, on the left, let us identify white areas, and on the right, red ones

Then as before, let us look for horizontal stripes

Wow, I did it! That's one small step for man, one giant leap for R-coders! Or at least for me… So, why might it be interesting to identify areas on pictures? I mean, I am not Chloe O'Brian, I don't have to spot flags in a crowd, nor Waldo, nor terrorists (who might wear striped shirts). This might be fun if you want to grade your exams automatically. Consider the two following scans, the template, and a filled copy,

A first step is to identify regions where we expect to find some “red” part (I assume here that students have to use a red pencil). Let us start to check on the template and the filled form if we can identify red areas,

exam = stack("C:\\Users\\exam-blank.png")
red = (exam[[1]]>150)&(max(  exam[[2]] , exam[[3]])<150)
focalsred = focal(red, w=3, fun=mean)
plot(focalsred,useRaster=FALSE) 
exam = stack("C:\\Users\\exam-filled.png")
red = (exam[[1]]>150)&(max(  exam[[2]] , exam[[3]])<150)
focalsred = focal(red, w=3, fun=mean)
plot(focalsred,useRaster=FALSE)

First, we have to identify areas where students have to fill the blanks. So in the template, identify black boxes, and get the coordinates (here manually)

exam = stack("C:\\Users\\exam-blank.png")
black = max(  exam[[1]] ,exam[[2]] , exam[[3]])<50
focalsblack = focal(black, w=3, fun=mean)
plot(focalsblack,useRaster=FALSE)
correct=locator(20)
coordinates=locator(20)
X1=c(73,115,156,199,239)
X2=c(386,428.9,471,510,554)
Y=c(601,536,470,405,341,276,210,145,79,15)
LISTX=c(rep(X1,each=10),rep(X2,each=10))
LISTY=rep(Y,10)
points(LISTX,LISTY,pch=16,col="blue")

The blue points above are where we look for students’ answers. Then, we have to define the vector of correct answers,

CORRECTX=c(X1[c(2,4,1,3,1,1,4,5,2,2)],
X2[c(2,3,4,2,1,1,1,2,5,5)])
CORRECTY=c(Y,Y)
points(CORRECTX, CORRECTY,pch=16,col="red",cex=1.3)
UNCORRECTX=c(X1[rep(1:5,10)[-(c(2,4,1,3,1,1,4,5,2,2)
+seq(0,length=10,by=5))]],
X2[rep(1:5,10)[-(c(2,3,4,2,1,1,1,2,5,5)
+seq(0,length=10,by=5))]])
UNCORRECTY=c(rep(Y,each=4),rep(Y,each=4))

Now, let us get back on red areas in the form filled by the student, identified earlier,

exam = stack("C:\\Users\\exam-filled.png")
red = (exam[[1]]>150)&(max(  exam[[2]] , exam[[3]])<150)
focalsred = focal(red, w=5, fun=mean)

Here, we simply have to compare what the student answered with the areas where we expect to find some red,

ind=which(values(focalsred)>.3)
yind=750-trunc(ind/610)
xind=ind-trunc(ind/610)*610
points(xind,yind,pch=19,cex=.4,col="blue")
points(CORRECTX, CORRECTY,pch=1,
col="red",cex=1.5,lwd=1.5)

Crosses on the graph on the right below are the answers identified as correct (here 13),

> icorrect=values(red)[(750-CORRECTY)*
+ 610+(CORRECTX)]
> points(CORRECTX[icorrect], CORRECTY[icorrect],
+ pch=4,col="black",cex=1.5,lwd=1.5)
> sum(icorrect)
[1] 13

If there are negative points for incorrect answers, we can also count how many incorrect answers we have. Here 4.

> iuncorrect=values(red)[(750-UNCORRECTY)*610+
+ (UNCORRECTX)]
> sum(iuncorrect)
[1] 4
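
If, say, each correct answer is worth one point and each incorrect one costs half a point (the weights are arbitrary here), the final grade is then simply

> grade=sum(icorrect)-.5*sum(iuncorrect)
> grade
[1] 11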

So I have not been able to find Waldo, but at least, that will probably save me hours next time I have to mark exams…

Correlations, dimension, and risk measure

Yesterday, while I was attending the IFM2 conference, at HEC Montreal, I heard a nice talk about credit risk, and a comparison between contagion (or at least default correlation), for corporate and retail companies (in the US). And it was mentioned that default correlation was much lower for retail companies than for corporate ones. In a discussion that followed those slides, it was mentioned that banks in the US should actually have been working more with those small firms, since contagion risk was much lower.

A problem here is that the link between correlation, risk and dimension is rather complicated:

  • corporate means a small number of firms, high correlation (and possible large individual losses)
  • retail means a large number of firms (even perhaps extremely large), lower correlation (and small individual losses)

A simple model for defaults is based on the assumption that we deal with an exchangeable portfolio (as in a previous post). With the following code, given an (individual) default probability, a default correlation, and a number of firms, it is possible to calculate the probability of having at most (and hence, more than) a given number of defaults.

 proba=function(s,a,m,n){
 b=a/m-a
 choose(n,s)*integrate(function(t){t^s*(1-t)^(n-s)*
 dbeta(t,a,b)},lower=0,upper=1,subdivisions=1000,
 stop.on.error =  FALSE)$value}

CDF=function(x=10,r=.4,m=.1,n=50){
a=m*(1-r)/r ;
V=rep(NA,n+1)
 for(i in 0:n){
 V[i+1]=proba(i,a,m,n)}
 V=V/sum(V);
 return(sum(V[1:(x+1)])) }
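
For instance, with an individual default probability of 10%, a default correlation of 50%, and 20 firms, the probability of having at most 4 defaults (and the probability of having more than 4) are simply

CDF(x=4,r=.5,m=.1,n=20)     # P(at most 4 defaults out of 20)
1-CDF(x=4,r=.5,m=.1,n=20)   # P(more than 4 defaults)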

It is possible to calculate, for a large range of correlations, the probability to have – at least – 20% of default in the portfolio (in order to compare things that are comparable).

A=seq(.01,.99,by=.01)
VQ=matrix(NA,length(A),2)
for(i in 1:length(A)){
VQ[i,1]=1-CDF(r=A[i],x=4,n=20)
VQ[i,2]=1-CDF(r=A[i],x=200,n=1000)}

With 20 firms (corporate) we want at least 4 defaults, while with 1000 firms (retail) there should be at least 200 defaults. As mentioned in the previous post, the relationship between correlation and quantiles of sums is not simple; hence, it might not be monotone. The dotted line is the probability of having at least 4 defaults (out of 20) when the default correlation is 50% (around 10%). The plain line is the probability of having at least 200 defaults (out of 1000), as a function of the correlation,

plot(A,VQ[,2],type="l",col="red",ylim=c(0,.22))
abline(h=VQ[50,1],lty=2,col="red")

In that case, with a correlation of only 10% among retail firms, the probability of having 20% of defaults is the same as that probability for corporate firms with 50% correlation… One should remember that, in portfolio analysis, the link between correlation, dimension and risk measures is a sensitive issue…

Open data and ecological fallacy

A couple of days ago, on Twitter, @alung mentioned an old post I published on this blog about open data, explaining how difficult it was to get access to data in France (the post, published almost 18 months ago, can be found here, in French). And @alung was wondering if it was still that hard to access nice datasets. My first answer was that actually, people were more receptive, and I now have more people willing to share their data. And on the internet, amazing datasets can now be found very easily. For instance in France, some detailed information can be found about qualifications, houses and jobs, by small geographical areas, on http://www.recensement.insee.fr (thanks @coulmont for the link). And that is great for researchers (and anyone actually willing to check things by themselves).

But one should be aware that those aggregate data might not be sufficient to build econometric models, and to infer individual behaviors. Thinking that relationships observed for groups necessarily hold for individuals is a common fallacy (the so-called “ecological fallacy”).

In a popular paper, Robinson (1950) discussed “ecological inference”, stressing the difference between ecological correlations (on groups) and individual correlations (see also Thorndike (1937)). He considered two aggregated quantities, per American state: the percent of the population that was foreign-born, and the percent that was literate. One dataset used in the paper was the following

> library(eco)
> data(forgnlit30)
> tail(forgnlit30)
             Y          X         W1          W2 ICPSR
43 0.076931986 0.03097168 0.06834300 0.077206504    66
44 0.006617641 0.11479052 0.03568792 0.002847920    67
45 0.006991899 0.11459207 0.04151310 0.002524065    68
46 0.012793782 0.18491515 0.05690731 0.002785916    71
47 0.007322475 0.13196654 0.03589512 0.002978594    72
48 0.007917342 0.18816461 0.02949187 0.002916866    73

The correlation between  foreign-born and literacy was

> cor(forgnlit30$X,1-forgnlit30$Y)
[1] 0.2069447

So it seems that there is a positive correlation, so a quick interpretation could be that in the '30s, Americans were illiterate, but fortunately, literate immigrants got the idea to come to the US. But here, it is like in Simpson's paradox, because actually, the sign should be negative, as obtained in individual studies. In the state-level study, the correlation was positive mainly because foreign-born people tended to live in states where the native-born are relatively literate…

Hence, the problem is clearly how individuals were grouped. Consider the following set of individual observations,

> library(mnormt)
> n=1000
> r=-.5
> Z=rmnorm(n,c(0,0),matrix(c(1,r,r,1),2,2))
> X=Z[,1]
> E=Z[,2]
> Y=3+2*X+E
> cor(X,Y)
[1] 0.8636764

Consider now some regrouping, e.g.

> I=cut(Z[,2],qnorm(seq(0,1,by=.05)))
> Yg=tapply(Y,I,mean)
> Xg=tapply(X,I,mean)

Then the correlation is rather different,

>  cor(Xg,Yg)
[1] 0.1476422

Here we have a strong positive individual correlation, and a small (but positive) correlation on grouped data, but almost anything is possible.
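
To illustrate that last claim, grouping on another (arbitrary) combination of the underlying variables can even flip the sign of the aggregated correlation; for instance, with the grouping variable W below (introduced here only for the sake of illustration), the correlation on grouped data typically becomes clearly negative,

> W=X/2+2*E
> J=cut(W,quantile(W,seq(0,1,by=.05)),include.lowest=TRUE)
> cor(tapply(X,J,mean),tapply(Y,J,mean))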

Models with random coefficients have been used to make ecological inferences. But that is a long story, and I will probably come back with a more detailed post on that topic, since I am still working on this with @coulmont (following some comments by @frbonnet on his post on recent French elections on http://coulmont.com/blog/).

Short versus long papers, in academic journals

This Monday, during my talk on quantile regressions (at the Montreal R-meeting), we’ve seen how those nice graphs could be interpreted, with the evolution of the slope of the linear regression, as a function of the probability level. One illustration was on large hurricanes, from Elsner, Kossin & Jagger (2008). The other one was on birthweight, from Abrevaya (2001).

It is also possible to use that technique on academic publications, e.g. on the length of papers, over time. Actually, the data we can extract from Scopus are quite similar to the ones used on hurricanes. For several journals, it is possible to look at the length of articles. Since Scopus is quite expensive ($60,000 per year for the campus, as far as I remember, so I can imagine the penalty I might have to pay for sharing such a dataset), I will not share the data here.

base=read.table("/home/scopus.csv",
header=TRUE,sep=",")
pages=base$Page.end-base$Page.start
year=base$Year

Again, a first idea can be to look at boxplots, and regression on (nonparametric) quantiles, here for Econometrica,

boxplot(pages~as.factor(year),col="light blue")
Q=function(p=.9) as.vector(by(pages,as.factor(year),
function(x) quantile(x,p)))
u=1:16
p=.9
points(u,Q(p),pch=19,col="blue")
abline(lm(Q(p)~u,weights=table(year)),lwd=2,col="blue")

Consider now (as in the slides in the previous post) a quantile regression (instead of a regression on quantiles), for instance in the Annals of Probability,

library(quantreg)
u=seq(.05,.95,by=.01)
coefstd=function(u) summary(rq(pages~year,
tau=u))$coefficients[,2]
coefest=function(u) summary(rq(pages~year,
tau=u))$coefficients[,1]
CS=Vectorize(coefstd)(u)
CE=Vectorize(coefest)(u)
k=2
plot(u,CE[k,],ylim=c(min(CE[k,]-2*CS[k,]),
max(CE[k,]+2*CS[k,])))
polygon(c(u,rev(u)),c(CE[k,]+1.96*CS[k,],
rev(CE[k,]-1.96*CS[k,])),
col="light green",border=NA)
lines(u,CE[k,],lwd=2,col="red")
abline(h=0)

We have the following slope, for the year, as a function of the probability level,

The slope is always positive, so the size of papers is increasing with time, for both short and long papers. But the influence of time is much larger for long papers than for short ones: for short papers (lower decile), the size keeps increasing, with one more page every three years; for long papers (upper decile), it is two more pages every three years.

If we look now at the Annals of Statistics, we have

and for the evolution of the slope of the quantile regression,

Again the impact is positive: papers are longer in 2010 than 15 years ago. But the trend is reversed: short papers (lower decile) get longer much faster, by almost one more page every year, while long papers (upper decile) increase by only one more page every two years… Initially, I wanted to run such a study over a much longer term, with quantile regressions and splines, to see when there might have been a change, both in the lower and upper tails. Unfortunately, as suggested by some colleagues, there might have been some changes in the format of the journals (columns, margins, fonts, etc). That's a shame, because I rediscovered nice short papers of 5-10 pages published 20 or 30 years ago. They are nice to read (and also potentially interesting for a post on the blog). 5 pages, that's perfect, but 40 pages, that's way too long. I wonder if I am the only one having this feeling, missing those short but extremely interesting papers…

Talk on quantiles at the R Montreal group

This afternoon, I will be giving a two-hour talk at McGill on quantiles, quantile regressions, confidence regions, bagplots and outliers. Before defining (properly) quantile regressions, we will mention regression on (local) quantiles, as on the graph below, on hurricanes,

In order to illustrate quantile regression, consider the following natality database,

base=read.table(
"http://freakonometrics.free.fr/natality2005.txt",
header=TRUE,sep=";")

We can use it to produce those nice graphs we can find in several papers, modeling the weight of newborns,

library(quantreg)
u=seq(.05,.95,by=.01)
coefstd=function(u) summary(rq(WEIGHT~SEX+
SMOKER+WEIGHTGAIN+BIRTHRECORD+AGE+ BLACKM+
BLACKF+COLLEGE,data=base,tau=u))$coefficients[,2]
coefest=function(u) summary(rq(WEIGHT~SEX+
SMOKER+WEIGHTGAIN+BIRTHRECORD+AGE+ BLACKM+
BLACKF+COLLEGE,data=base,tau=u))$coefficients[,1]
CS=Vectorize(coefstd)(u)
CE=Vectorize(coefest)(u)
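
The evolution of one coefficient, as a function of the probability level, can then be plotted exactly as in the paper-length example above (a sketch, here for the k-th coefficient, with k=2; which covariate that corresponds to depends on how the model matrix is coded),

k=2
plot(u,CE[k,],ylim=c(min(CE[k,]-2*CS[k,]),
max(CE[k,]+2*CS[k,])))
polygon(c(u,rev(u)),c(CE[k,]+1.96*CS[k,],
rev(CE[k,]-1.96*CS[k,])),col="light green",border=NA)
lines(u,CE[k,],lwd=2,col="red")
abline(h=0)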

The slides can be downloaded on the blog, as well as the R-code.

Nonconvexity, and playing indoor paintball

Following the two previous posts (here and there), on the number of people that don't get wet while playing with water pistols, consider now an indoor version, in a non-convex room (i.e. players behind a wall are now, somehow, protected). In the previous posts, players were playing on a square field, and I briefly mentioned that if the field was a disk, results would have been (roughly) the same: so far, the shape of the field was not an issue. But what if the field is no longer convex?

library(sp)
plot(0:2,0:2,col="white",xlab="",ylab="")
MAP=Polygon(cbind(c(0,0,1,1,2,2,0),
c(0,2,2,1,1,0,0)))
polygon(MAP@coords,col="light blue")

and players hidden behind the wall cannot be reached (red lines above are impossible hits). As earlier, it is still possible to look at the closest neighbor; we just have to exclude pairs that can no longer hit each other.

And again, it is possible to plot safe zones in green.

Once again, it is possible to look more closely at those supposed-to-be “safe zones”, i.e. by looking at the distribution of the location of players that were dry at the end of the game. With 11 players, we obtain


What about the distribution of the number of dry players, over a game ?

# TRUE if the segment joining the two players stays inside the room (no wall in between)
touch=function(x1,y1,x2,y2,n=251){
X=seq(x1,x2,length=n)
Y=seq(y1,y2,length=n)
sum(point.in.polygon(X,Y,MAP@coords[,1],
MAP@coords[,2], mode.checked=FALSE)==0)==0
}

NOTWETnc=function(n,p){
sx=runif(50)*2;sy=runif(50)*2
IN=which(point.in.polygon(sx,sy,MAP@coords[,1],
MAP@coords[,2], mode.checked=FALSE)==1)
Sx=sx[IN];Sy=sy[IN]
Sx=Sx[1:n];Sy=Sy[1:n]
IN=IN[1:n]
MI=matrix(NA,n,n)
for(i in 1:(n-1)){
for(j in (i+1):(n)){
MI[j,i]=MI[i,j]=touch(Sx[i],Sy[i],Sx[j],Sy[j])
}}
(d=as.matrix(dist(cbind(Sx,Sy),
method = "euclidean",upper=TRUE)))
diag(d)=999999
dpossible=d
dpossible[MI==FALSE]=999999
dmin=apply(dpossible,2,which.min)
#whonotwet=( (1:n) %notin% names(table(dmin)) )
notwet=n-length(table(dmin))
return(notwet)}

NOTWET=function(n){
x=runif(n)
y=runif(n)
(d=as.matrix(dist(cbind(x,y),
method = "euclidean",upper=TRUE)))
diag(d)=999999
dmin=apply(d,2,which.min)
notwet=n-length(table(dmin))
return(notwet)}

NSim=10000
Nnc=Vectorize(NOTWETnc)(n=rep(11,NSim))
Nc=Vectorize(NOTWET)(n=rep(11,NSim))
T=table(Nc)
Tn=table(Nnc)
plot(as.numeric(names(Tn)),
Tn/NSim,type="b",col="blue")
lines(as.numeric(names(T)),
T/NSim,type="b",col="red",pch=4)

With 11 players, we have the same distribution as the one obtained on a square field. So convexity is not a key issue here…

Strange, isn't it? And with an odd number of players, not only is there at least one dry player, but at least half of the players (maybe minus one) have to be wet…

Where hiding if you don’t want to get wet ?

Following the previous post, two additional remarks. Following a comment by @cosi, I quickly investigated a binomial fit to the distribution of the number of people not getting wet, with a fixed number of players on the field. It looks like it should be a binomial distribution with a fixed probability (2/3) and with a size parameter affine in the number of players. @guigui suggested some connection with the “birds on a wire” problem (see e.g. http://www.cut-the-knot.org/)

n=p=rep(NA,20)
for(i in 1:20){
NSim=10000
N=Vectorize(NOTWET)(n=rep(3+2*i,NSim))
n[i]=mean(N)/(1-var(N)/mean(N))   # implied binomial size (method of moments)
p[i]=1-var(N)/mean(N)             # implied binomial probability
}
plot(seq(5,43,by=2),n,col="red",type="b")

for the implied size parameter above, and below the implied probability parameter.

plot(seq(5,43,by=2),p,col="blue",type="b")

(as functions of the number of players). I’d be glad to get more details on that 2/3 probability.

Now, let us investigate another question sent by email: “Where should you hide if you don’t want to get wet ?” A first idea could be the following: given that some players are already on the field, where should I go if I do not want to get wet ? Below are some simulations for 7 or 25 players (already on the field). The red area is the area so that I will become someone’s target (perhaps even the target of two players…). The green area is the safe zone.

(with 7 players above, and 25 below)

It looks like, on the border, it might be safer than in the middle of the field. But we have to confirm that intuition… or at least see if that intuition is valid.

Based on what was done the other day, it is possible to look at where the people that did not get wet were located (instead of just counting dry players, as done in the previous function). So here, we simply look at where the non-wet players were standing,

NOTWET=function(n,p){
x=runif(n)
y=runif(n)
(d=as.matrix(dist(cbind(x,y), method = "euclidean",upper=TRUE)))
diag(d)=999999
dmin=apply(d,2,which.min)
whonotwet=( (1:n) %notin% names(table(dmin)) )
#plot(x[-whonotwet],y[-whonotwet],pch=19,col="blue",type="p")
#points(x[whonotwet],y[whonotwet],pch=19,col="red")
M=matrix(NA,p,p);u=seq(0,1,by=1/p)
for(i in 1:p){
for(j in 1:p){
M[i,j]=sum((x[whonotwet]>=u[i])&(x[whonotwet]<u[i+1])&
(y[whonotwet]>=u[j])&(y[whonotwet]<u[j+1]))
}}
return(M)}

based on function

"%notin%" <- function(x, y) x[!x %in% y]

On a given grid, we count the people playing the game that ended up dry (which might avoid boundary bias in nonparametric smoothed estimates of the distribution, as we'll see later on). For instance with 11 players,

M11=matrix(0,25,25);
for(s in 1:100000){
M11=M11+NOTWET(11,25)
}

Then we can plot the distribution, on the field,

COL=rev(heat.colors(101)); p=25
u=seq(0,1,by=1/p)
plot(0:1,0:1,col="white",xlab="",ylab="")
for(i in 1:p){
for(j in 1:p){
polygon(c(u[i],u[i],u[i+1],u[i+1]),
 c(u[j],u[j+1],u[j+1],u[j]),border=NA,
col=COL[trunc(M11[i,j]/max(M11)*100)+1])
}}

Red means a lot of non-wet people (i.e. safer zones). Graphs below are with 7 and 11 players respectively (from left to right),

with the following distribution on the diagonal: corners are almost 4 times safer than the middle of the field, with 7 players,

Below are plotted distributions of locations of non-wet players when the total number of players was either 25 (on the left) and 101 (on the right)

with again on the diagonal

Hence, the border is rather safe, but next to the border, it is no longer safe: if someone is standing right on the border, he will probably shoot at you, since there is no one behind him! This explains the strange behavior on the borders (and corners, thanks JP for the intuitive explanation).
But would it be completely different with a field shaped as a disk?

using the previous technique of working on a fixed grid (or correcting for boundary bias, since the disk might cover only a fraction of each grid square), or keeping the coordinates of non-wet players, and using a standard kernel-based estimator of the distribution (the light yellow circle outside the disk is simply due to the bias of the kernel estimator on the border),

NOTWET=function(n){
x=(runif(n*20)*2-1)*1
y=(runif(n*20)*2-1)*1
I=which((x^2+y^2<1))
x=x[I];y=y[I]
x=x[1:n];y=y[1:n]
(d=as.matrix(dist(cbind(x,y),
method = "euclidean",upper=TRUE)))
diag(d)=999999
dmin=apply(d,2,which.min)
whonotwet=( (1:n) %notin% names(table(dmin)) )
return(cbind(x[whonotwet],y[whonotwet]))
}

M=t(c(0,0))
for(s in 1:10000){
M=rbind(M,NOTWET(25))
}
M=M[-1,]

library(ks)
HP=matrix(c(.001,0,0,.001),2,2)
K=kde(x=M, H=HP)
image(K$eval.points[[1]],K$eval.points[[2]],K$estimate,
col=rev(heat.colors(101)),xlim=c(-1,1),ylim=c(-1,1))

 

And note that the distribution of the number of players ending the game dry is the same, for a square field, or a disk,

NOTWET2=function(n){
x=(runif(n*20)*2-1)*1
y=(runif(n*20)*2-1)*1
I=which((x^2+y^2<1))
x=x[I];y=y[I]
x=x[1:n];y=y[1:n]
(d=as.matrix(dist(cbind(x,y), 
method = "euclidean",upper=TRUE)))
diag(d)=999999
dmin=apply(d,2,which.min)
notwet=n-length(table(dmin))
return(notwet)}

NOTWET=function(n){
x=runif(n)
y=runif(n)
(d=as.matrix(dist(cbind(x,y), 
method = "euclidean",upper=TRUE)))
diag(d)=999999
dmin=apply(d,2,which.min)
notwet=n-length(table(dmin))
return(notwet)}

NSim=100000
Nsquare=Vectorize(NOTWET)(n=rep(25,NSim))
Ndisk=Vectorize(NOTWET2)(n=rep(25,NSim))
Tsq=table(Nsquare)
Tdk=table(Ndisk)
plot(as.numeric(names(Tsq)),Tsq/NSim,
type="b",col="red")
lines(as.numeric(names(Tdk)),Tdk/NSim,
type="b",pch=4,col="blue")


But so far, it was still simple… I wonder what it might become if we consider a non-convex place, with walls, where players might hide… Next time, a post on indoor paintball!

Playing with fire (or water)

A few days ago, http://www.futilitycloset.com/ published a short post based on the fourth problem of the 1987 Canadian Mathematical Olympiad (itself based on a problem from the 6th All Soviet Union Mathematical Competition in Voronezh, 1966). The problem is simple (as always). It is about water pistol duels (with an odd number of players).

The answer is nice, and can be read on the blog.

What puzzled me in this problem is the following: we know, for sure, that at least one player won't get wet, but we don't know exactly how many of them won't get wet (assuming that each player shoots at the closest one, and hits him for sure). It is simple to run simulations, e.g. assuming that players are uniformly distributed over a square,

NOTWET=function(n){
x=runif(n)
y=runif(n)
(d=as.matrix(dist(cbind(x,y), method = "euclidean",upper=TRUE)))
diag(d)=999999
dmin=apply(d,2,which.min)
notwet=n-length(table(dmin))
return(notwet)}

It is then rather simple to get the distribution of the number of player that did not get wet,

NSim=25000
N25=Vectorize(NOTWET)(n=rep(25,NSim))
T=table(N25)
plot(as.numeric(names(T)),T/NSim,type="b")

The graph for different values for the total number of players is the following (based on 25,000 simulations)

If we investigate further, say with 51 players, we have a distribution for the total number of players that did not get wet which looks exactly like the Gaussian distribution,

NSim=25000
N51=Vectorize(NOTWET)(n=rep(51,NSim))
T=table(N51)
plot(as.numeric(names(T)),T/NSim,type="b",col="blue")
u=seq(0,51,by=.1)
lines(u,dnorm(u,mean(N51),sd(N51)),col="red",lty=2)

If anyone has an intuition (not to say a proof) for that, I’d be glad to hear it…

Sunday evening, stupid games…

This evening, while I was about to wash the dishes, I heard my elders starting a game (call them Him and Her)
Him: “I have picked – in my head – a number, lower than 50. Try to guess…”
Her: “No way, too difficult…”
Him: “You can try five different numbers…”
Her: “… um … No, no way…”
Me: “Wait… each time we suggest a number, you tell us if yours is either above, or below ?”
You can see me coming clearly, can’t you ? Using a simple subdivision rule, we have a fast algorithm (and indeed, if I have to choose between washing the dishes and playing with the kids…)
Him: “um…. ok”
Her: “Daddy, are you sure we will win ?”
Me: “Well… I cannot promise that we will win… but I am rather sure [sic] that we will win quite frequently: more gains than losses…” (I guess).
Her: “Great ! I am playing with daddy…”

Him: “um…. wait, is it one of your tricks, again? I don't want to play anymore… Do you want to see the books we've chosen at the library?”
Her: “Sure…”
Me: “What ? no one wants to see if I was right ? that we have indeed more than 50% chances to win…”
Him and her: “No !”
The point of that story ? If we listen to kids, science will not go forward, trust me. But I am curious… I want to see if my intuition was correct. Actually, the intuition was based on the fact that

> 2^5
[1] 32 
> 2^6
[1] 64

so in 5 or 6 steps the algorithm of subdivision should converge. I guess… I mean, I do not know for sure, since 50 is not a power of 2, so it might be difficult, each time, to split in two: we have to deal only with integers here…
To be sure, let us substitute my laptop for my son… to pick numbers, randomly (yes, sometimes I feel like I am Doctor Tenma, 天馬博士). The algorithm is simple: there are bounds, and at each step I should suggest the middle of the interval. If the middle is not an integer, I suggest either the integer below or the integer above (with equal probabilities).

cutinhalf=function(a,b){
m=(a+b)/2
if(m %% 1 == 0){m=m}
if(m %% 1 != 0){m=sample(c(m-.5,m+.5),size=1)}
return(round(m))}

The following function runs 100,000 simulations, and tells us how often, out of 5 numbers suggested, we got the right one.

winning=function(lower=1,upper=50,tries=5,NS=100000){
SIM=rep(NA,NS)
for(simul in 1:NS){
interval=c(lower,upper)
(unknownnumber=sample(lower:upper,size=1))
success=FALSE
for(i in 1:tries){
picknumber=cutinhalf(interval[1],interval[2])
if(picknumber==unknownnumber){success=TRUE}
if(picknumber>unknownnumber){interval[2]=picknumber}
if(picknumber<unknownnumber){interval[1]=picknumber}
#print(c(unknownnumber,picknumber,success,interval))
};SIM[simul]=success};return(mean(SIM))}

It looks like the probability that we get the right number is higher than 60%,

> winning()
[1] 0.61801
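
That 62% is consistent with a simple counting argument: with five guesses and the higher/lower feedback, a bisection strategy can distinguish at most 2^5 - 1 = 31 numbers, so with a number drawn uniformly in 1,…,50 the probability of winning is (at most, and here almost exactly)

> (2^5-1)/50
[1] 0.62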

Which is not bad. And if the upper limit was not 50, but something else, the probability of winning would have been the following.

VWN=function(n){winning(upper=n)}
V=Vectorize(VWN)(seq(25,100,by=5))
plot(seq(25,100,by=5),V,type="b",col="red",ylim=c(0,1))


Actually, after losing a couple of times, I am rather sure that my son would have told us that we could suggest only four numbers. In that case, the probability would have been close to 30%, as shown on the blue curve below (where only four numbers can be suggested),

Anyway, as intuited, with five possible suggestions, we were quite likely to win frequently. Actually with a probability of almost 2 out of 3… and 1 out of 3 if my son had decided to pick a number between 1 and 100, or if only 4 suggestions were allowed… Those are quite large actually, when we think about it. It reminds me of that MacGyver story I mentioned a few months ago… Anyway, calculating probabilities is nice, but I still have to wash the dishes…

Maths can be cool (to impress your kids)

Just imagine that your kids need some help, to prepare fishes for April 1st, like

Her: “please, Daddy, help us to draw some fishes”

Me: “Sure, Daddy is a champion, actually, I do that everyday at work: drawing fishes – and more generally nice stuff – is exactly Daddy’s job”.

OK, no need to talk about Talbot's curves, ellipse negative pedal curves, or Burleigh's ovals, unless you want to scare them, e.g.

t=seq(0,2*pi,length=100) 
b=.8
c=sqrt(1-b^2)
x=cos(t)-c*sin(t)^2
y=(1-2*c^2+c*cos(t))*sin(t)/b
plot(x,y)

From now on, it is rather simple to draw fishes,

t=seq(0,2*pi,length=100)
y=cos(t)-sin(t)^2/sqrt(2)
x=cos(t)*sin(t)
plot(x,y,type="l",axes=FALSE,xlab="",ylab="")
polygon(c(-2,-2,2,2),
c(-2,2,2,-2),col="light blue",border=NA)
polygon(x,y,col="red",border=NA)
axis(1)
axis(2)
lines(x,y,type="l")

so we can easily get nice fishes,

Ruin probability and infinite time

A couple of weeks ago, I had a discussion with a practitioner, working in some financial company, about ruin, and infinite time. And it reminded me of a weird result. Well, not a weird result, but a result I found disturbing, at first, when I was a student (that I rediscovered with the eyes of someone dealing with computational issues, seeing here a difficult theoretical question). Consider a simple ruin problem. A player has wealth $x$. Then he flips a coin: tails, he has a gain of 1; heads, he experiences a loss of 1. At time $n$, his wealth is $S_n=x+X_1+\cdots+X_n$, where $X_i$ is associated to the $i$th coin: $X_i$ is equal to $+1$ with probability $p$ (tails), and $-1$ with probability $1-p$ (heads). It is also possible to write

$S_n=x+M_n$, where $M_n=X_1+\cdots+X_n$ can be interpreted as the net gain of the player. In order to get a good understanding of the results that can be obtained, assume $n$ to be given. Let $H$ denote the number of heads and $T$ the number of tails. Then $H+T=n$, while $M_n=T-H$. Let $N(A,B)$ denote the number of paths to go from point A (wealth $x_A$ at time $n_A$) to point B (wealth $x_B$ at time $n_B$). Note that this is a Markovian problem, that can be modeled using Markov chains

But here, we will focus on combinatorial results. Hence, the number of paths going from $A$ to $B$ is

$$N(A,B)=\binom{n_B-n_A}{\frac{(n_B-n_A)+(x_B-x_A)}{2}}$$
In order to derive probabilities to reach $0$, let $N(A,B)$ denote the number of paths going from $A$ to $B$, and let $N_0(A,B)$ denote the number of paths going from $A$ to $B$ that do reach $0$ at some point between $n_A$ and $n_B$. Using a simple reflection property, if $x_A$ and $x_B$ are positive,

$$N_0(A,B)=N(A',B),\qquad\text{where }A'=(n_A,-x_A)$$
Based on those reflections, two results can be derived (focusing on probabilities, instead of counting paths). First, we can obtain that

$$\mathbb{P}(M_n=x)=\binom{n}{\frac{n+x}{2}}p^{\frac{n+x}{2}}(1-p)^{\frac{n-x}{2}}$$
(given that $n$ and $x$ have the same parity). The second result we can obtain is that

$$\mathbb{P}(M_1>0,\ldots,M_{n-1}>0,M_n=x)=\frac{x}{n}\,\mathbb{P}(M_n=x)$$
Based on those two expressions, if $\tau$ denotes the first time $S_n$ becomes null, given $S_0=x$,

$$\tau=\inf\{n>0:\,S_n=0\}$$

then

$$\mathbb{P}(\tau=n)=\frac{x}{n}\binom{n}{\frac{n+x}{2}}(1-p)^{\frac{n+x}{2}}p^{\frac{n-x}{2}}$$
This can be computed easily,

> x=10
> p=.55
> ProbN=function(n){
+ pb=0
+ if(abs(n-x) %% 2 == 0)
+ pb=x/n*choose(n,(n+x)/2)*(1-p)^((n+x)/2)*(p)^((n-x)/2)
+ return(pb)}
> plot(Vectorize(ProbN)(1:1000),type="s")

That looks nice… But if we look closer, we can wonder what

$$\sum_{n\geq 1}\mathbb{P}(\tau=n)$$

would be. Since it is the distribution of a probability measure, we might expect the sum to be one. But here

> sum(Vectorize(ProbN)(1:1000))
[1] 0.134385

And it is not due to calculation mistakes that we do not get 1 here. Actually, we should write

$$\sum_{n\geq 1}\mathbb{P}(\tau=n)=\mathbb{P}(\tau<\infty)$$

which might be interpreted as the probability of ruin, starting from $x$, that we denote $\psi(x)$ from now on. The term on the left can be approximated using Monte Carlo simulations,

> p=.55
> x=10
> m=1000
> simul=10000
> S=sample(c(-1,1),size=m*simul,replace=TRUE,prob=c(1-p,p))
> MS=matrix(S,simul,m)
> for(k in 2:m) MS[,k]=MS[,k]+MS[,k-1]
> T0=function(vm) which(vm<=(-x))[1]
> MTmin=apply(MS,1,T0)
> mean(is.na(MTmin)==FALSE)
[1] 0.1328

To check the validity of the relationship above, a simple (theoretical) recursive formula can be derived for the term on the right (the ruin probability), namely

$$\psi(x)=p\,\psi(x+1)+(1-p)\,\psi(x-1)$$

with boundary conditions $\psi(0)=1$ and $\lim_{x\to\infty}\psi(x)=0$ (since $p>1/2$). Then it comes that

$$\psi(x)=\left(\frac{1-p}{p}\right)^x$$

Note that it might be tricky to check this using Monte Carlo simulation… since we cannot have an infinite number of runs. And we're dealing precisely with things that do occur when time is infinite. Actually, we can still check convergence, considering an upper limit $m$, and then letting $m$ go to infinity. Note that an explicit formula can then be derived (using the additional border condition $\psi(m)=0$),

$$\psi_m(x)=\frac{\left(\frac{1-p}{p}\right)^x-\left(\frac{1-p}{p}\right)^m}{1-\left(\frac{1-p}{p}\right)^m}$$
Using the following code, it is possible to calculate that ruin probability, in order to estimate $\psi(x)$.

> MSmin=apply(MS,1,min)
> mean(MSmin<=(-x))
[1] 0.1328
> (((1-p)/p)^x-((1-p)/p)^m)/(1-((1-p)/p)^m)
[1] 0.1344306
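
And since ((1-p)/p)^m is negligible here, letting m go to infinity gives (almost exactly) the same value,

> ((1-p)/p)^x
[1] 0.1344306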

The following graph shows the evolution of the ruin probability as a function of the initial wealth (Monte Carlo simulations with a fixed horizon – including a confidence interval – versus the analytical expression)

Hence, with stopping times, one should remember that

$$\sum_{n\geq 1}\mathbb{P}(\tau=n)=\mathbb{P}(\tau<\infty)\leq 1$$

(and the inequality can be strict), and that those two terms can be approximated simply using simulations or standard approximations.

Visualization in regression analysis

Visualization is a key to success in regression analysis. This is one of the (many) reasons I am also suspicious when I read an article with a quantitative (econometric) analysis without any graph. Consider for instance the following dataset, obtained from http://data.worldbank.org/, with, for each country, the GDP per capita (in some common currency) and the infant mortality rate (deaths before the age of 5),

> library(gdata)
> XLS1=read.xls("http://api.worldbank.org/datafiles/NY.GDP.PCAP.PP.CD_Indicator_MetaData_en_EXCEL.xls", sheet = 1)
> data1=XLS1[-(1:28),c("Country.Name","Country.Code","X2010")]
> names(data1)[3]="GDP"
> XLS2=read.xls("http://api.worldbank.org/datafiles/SH.DYN.MORT_Indicator_MetaData_en_EXCEL.xls", sheet = 1)
> data2=XLS2[-(1:28),c("Country.Code","X2010")]
> names(data2)[2]="MORTALITY"
> data=merge(data1,data2)
> head(data)
Country.Code         Country.Name       GDP MORTALITY
1          ABW                Aruba        NA        NA
2          AFG          Afghanistan  1207.278     149.2
3          AGO               Angola  6119.930     160.5
4          ALB              Albania  8817.009      18.4
5          AND              Andorra        NA       3.8
6          ARE United Arab Emirates 47215.315       7.1

If we estimate a simple linear regression – $y_i=\beta_0+\beta_1 x_i+\varepsilon_i$, where $y_i$ is the mortality rate and $x_i$ the GDP per capita – we get

> regBB=lm(MORTALITY~GDP,data=data)
> summary(regBB)

Call:
lm(formula = MORTALITY ~ GDP, data = data)

Residuals:
Min     1Q Median     3Q    Max
-45.24 -29.58 -12.12  16.19 115.83

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 67.1008781  4.1577411  16.139  < 2e-16 ***
GDP         -0.0017887  0.0002161  -8.278 3.83e-14 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 39.99 on 167 degrees of freedom
(47 observations deleted due to missingness)
Multiple R-squared: 0.2909,	Adjusted R-squared: 0.2867
F-statistic: 68.53 on 1 and 167 DF,  p-value: 3.834e-14

We can look at the scatter plot, including the linear regression line, and some confidence bounds,

> plot(data$GDP,data$MORTALITY,xlab="GDP per capita",
+ ylab="Mortality rate (under 5)",cex=.5)
> text(data$GDP,data$MORTALITY,data$Country.Name,pos=3)
> x=seq(-10000,100000,length=101)
> y=predict(regBB,newdata=data.frame(GDP=x),
+ interval="prediction",level = 0.9)
> lines(x,y[,1],col="red")
> lines(x,y[,2],col="red",lty=2)
> lines(x,y[,3],col="red",lty=2)

We should be able to do a better job here. For instance, if we look at the Box-Cox profile likelihood,

> library(MASS)
> boxcox(regBB)

it looks like taking the logarithm of the mortality rate should be better, i.e. $\log(y_i)=\beta_0+\beta_1 x_i+\varepsilon_i$, or equivalently $y_i=\exp(\beta_0+\beta_1 x_i+\varepsilon_i)$:

> regLB=lm(log(MORTALITY)~GDP,data=data)
> summary(regLB)

Call:
lm(formula = log(MORTALITY) ~ GDP, data = data)

Residuals:
Min      1Q  Median      3Q     Max
-1.3035 -0.5837 -0.1138  0.5597  3.0583

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  3.989e+00  7.970e-02   50.05   <2e-16 ***
GDP         -6.487e-05  4.142e-06  -15.66   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.7666 on 167 degrees of freedom
(47 observations deleted due to missingness)
Multiple R-squared: 0.5949,	Adjusted R-squared: 0.5925
F-statistic: 245.3 on 1 and 167 DF,  p-value: < 2.2e-16

> plot(data$GDP,data$MORTALITY,xlab="GDP per capita",
+ ylab="Mortality rate (under 5) log scale",cex=.5,log="y")
> text(data$GDP,data$MORTALITY,data$Country.Name)
> x=seq(300,100000,length=101)
> y=exp(predict(regLB,newdata=data.frame(GDP=x)))*
+ exp(summary(regLB)$sigma^2/2)
> lines(x,y,col="red")
> y=qlnorm(.95, meanlog=predict(regLB,newdata=data.frame(GDP=x)),
+ sdlog=summary(regLB)$sigma)
> lines(x,y,col="red",lty=2)
> y=qlnorm(.05, meanlog=predict(regLB,newdata=data.frame(GDP=x)),
+ sdlog=summary(regLB)$sigma)
> lines(x,y,col="red",lty=2)

on the log scale or

> plot(data$GDP,data$MORTALITY,xlab="GDP per capita",
+ ylab="Mortality rate (under 5) log scale",cex=.5)

on the standard scale. Here we use quantiles of the log-normal distribution to derive confidence intervals.

But why shouldn't we also take the logarithm of the GDP? We can fit a model $\log(y_i)=\beta_0+\beta_1\log(x_i)+\varepsilon_i$, or equivalently $y_i=e^{\beta_0}x_i^{\beta_1}e^{\varepsilon_i}$.

> regLL=lm(log(MORTALITY)~log(GDP),data=data)
> summary(regLL)

Call:
lm(formula = log(MORTALITY) ~ log(GDP), data = data)

Residuals:
Min       1Q   Median       3Q      Max
-1.13200 -0.38326 -0.07127  0.26610  3.02212

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 10.50192    0.31556   33.28   <2e-16 ***
log(GDP)    -0.83496    0.03548  -23.54   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.5797 on 167 degrees of freedom
(47 observations deleted due to missingness)
Multiple R-squared: 0.7684,	Adjusted R-squared: 0.767
F-statistic:   554 on 1 and 167 DF,  p-value: < 2.2e-16

> plot(data$GDP,data$MORTALITY,xlab="GDP per capita ",
+ ylab="Mortality rate (under 5)",cex=.5,log="xy")
> text(data$GDP,data$MORTALITY,data$Country.Name)
> x=exp(seq(1,12,by=.1))
> y=exp(predict(regLL,newdata=data.frame(GDP=x)))*
+ exp(summary(regLL)$sigma^2/2)
> lines(x,y,col="red")
> y=qlnorm(.95, meanlog=predict(regLL,newdata=data.frame(GDP=x)),
+ sdlog=summary(regLL)$sigma)
> lines(x,y,col="red",lty=2)
> y=qlnorm(.05, meanlog=predict(regLL,newdata=data.frame(GDP=x)),
+ sdlog=summary(regLL)$sigma)
> lines(x,y,col="red",lty=2)

on the log scales or

> plot(data$GDP,data$MORTALITY,xlab="GDP per capita ",
+ ylab="Mortality rate (under 5)",cex=.5)

on the standard scale. If we compare the last two predictions, we have

with, in blue, the log model, and in red, the log-log model (I did not include the first one, for obvious reasons).
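
For completeness, here is a sketch of how such a comparison can be drawn (predictions from the log model in blue, from the log-log model in red, both with the lognormal mean correction used above),

> plot(data$GDP,data$MORTALITY,xlab="GDP per capita",
+ ylab="Mortality rate (under 5)",cex=.5)
> x=seq(300,100000,length=101)
> lines(x,exp(predict(regLB,newdata=data.frame(GDP=x)))*
+ exp(summary(regLB)$sigma^2/2),col="blue")
> lines(x,exp(predict(regLL,newdata=data.frame(GDP=x)))*
+ exp(summary(regLL)$sigma^2/2),col="red")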

Tests on tail index for extremes

Since several students got the intuition that natural catastrophes might be non-insurable (underlying distributions with infinite mean), I will post some comments on testing procedures for extreme value models.

A natural idea is to use a likelihood ratio test (for composite hypotheses). Let $\theta$ denote the parameter (of our parametric model, e.g. the tail index), and we would like to know whether $\theta$ is smaller or larger than some threshold $\theta_0$ (where, in the context of finite versus infinite mean, $\theta_0=1$). I.e. either $\theta$ belongs to the set $\Theta_0$ or to its complement $\Theta_1$. Consider the (unconstrained) maximum likelihood estimator $\widehat{\theta}$, i.e.

$$\widehat{\theta}=\arg\max_{\theta\in\Theta_0\cup\Theta_1}\log\mathcal{L}(\theta)$$

Let $\widehat{\theta}_0$ and $\widehat{\theta}_1$ denote the constrained maximum likelihood estimators, on $\Theta_0$ and $\Theta_1$ respectively,

$$\widehat{\theta}_0=\arg\max_{\theta\in\Theta_0}\log\mathcal{L}(\theta)$$

$$\widehat{\theta}_1=\arg\max_{\theta\in\Theta_1}\log\mathcal{L}(\theta)$$

Either $\widehat{\theta}\in\Theta_0$, and then $\widehat{\theta}_0=\widehat{\theta}$ (on the left), or $\widehat{\theta}\in\Theta_1$, and then $\widehat{\theta}_1=\widehat{\theta}$ (on the right).

So the likelihood ratios

$$\frac{\mathcal{L}(\widehat{\theta}_0)}{\mathcal{L}(\widehat{\theta})}\qquad\text{and}\qquad\frac{\mathcal{L}(\widehat{\theta}_1)}{\mathcal{L}(\widehat{\theta})}$$

are either equal to

$$1\qquad\text{and}\qquad\frac{\mathcal{L}(\widehat{\theta}_1)}{\mathcal{L}(\widehat{\theta})}$$

(when $\widehat{\theta}\in\Theta_0$), or to

$$\frac{\mathcal{L}(\widehat{\theta}_0)}{\mathcal{L}(\widehat{\theta})}\qquad\text{and}\qquad 1$$

(when $\widehat{\theta}\in\Theta_1$).

If we use the code mentioned in the post on profile likelihood, it is easy to derive that ratio. The following graph is the evolution of that ratio, based on a GPD assumption, for different thresholds,

> base1=read.table(
+ "http://freakonometrics.free.fr/danish-univariate.txt",
+ header=TRUE)
> library(evir)
> X=base1$Loss.in.DKM
> U=seq(2,10,by=.2)
> LR=P=ES=SES=rep(NA,length(U))
> for(j in 1:length(U)){
+ u=U[j]
+ Y=X[X>u]-u
+ loglikelihood=function(xi,beta){
+ sum(log(dgpd(Y,xi,mu=0,beta))) }
+ XIV=(1:300)/100;L=rep(NA,300)
+ for(i in 1:300){
+ XI=XIV[i]
+ profilelikelihood=function(beta){
+ -loglikelihood(XI,beta) }
+ L[i]=-optim(par=1,fn=profilelikelihood)$value }
+ plot(XIV,L,type="l")
+ PL=function(XI){
+ profilelikelihood=function(beta){
+ -loglikelihood(XI,beta) }
+ return(optim(par=1,fn=profilelikelihood)$value)}
+ (L0=(OPT=optimize(f=PL,interval=c(0,10)))$objective)
+ profilelikelihood=function(beta){
+ -loglikelihood(1,beta) }
+ (L1=optim(par=1,fn=profilelikelihood)$value)
+ LR[j]=L1-L0
+ P[j]=1-pchisq(L1-L0,df=1)
+ G=gpd(X,u)
+ ES[j]=G$par.ests[1]
+ SES[j]=G$par.ses[1]
+ }
>
> plot(U,LR,type="b",ylim=range(c(0,LR)))
> abline(h=qchisq(.95,1),lty=2)

with on top the values of the ratio (the dotted line is the quantile of a chi-square distribution with one degree of freedom) and below the associated p-value

> plot(U,P,type="b",ylim=range(c(0,P)))
> abline(h=.05,lty=2)

In order to compare, it is also possible to look at confidence interval for the tail index of the GPD fit,

> plot(U,ES,type="b",ylim=c(0,1))
> lines(U,ES+1.96*SES,type="h",col="red")
> abline(h=1,lty=2)

To go further, see Falk (1995), Dietrich, de Haan & Hüsler (2002), Hüsler & Li (2006) with the following table, or Neves & Fraga Alves (2008). See also here or there (for the latex based version) for an old paper I wrote on that topic.

the Dirichlet distribution

In the course, since we are still introducing some concepts of dependent distributions, we will talk about the Dirichlet distribution, which is a distribution over the simplex of $\mathbb{R}_+^d$. Let $\mathcal{G}(\alpha,\theta)$ denote the Gamma distribution with density (on $\mathbb{R}_+$)

$$f(x)=\frac{x^{\alpha-1}e^{-x/\theta}}{\theta^{\alpha}\Gamma(\alpha)}$$

Let $X_1,\ldots,X_d$ denote independent $\mathcal{G}(\alpha_i,\theta)$ random variables, with $\alpha_i>0$. Then $\boldsymbol{S}=(S_1,\ldots,S_d)$, where

$$S_i=\frac{X_i}{X_1+\cdots+X_d},\qquad i=1,\ldots,d,$$

has a Dirichlet distribution with parameter

$$\boldsymbol{\alpha}=(\alpha_1,\ldots,\alpha_d)$$

Note that $\boldsymbol{S}$ has a distribution in the simplex of $\mathbb{R}_+^d$,

$$\mathcal{S}_d=\{(s_1,\ldots,s_d)\in[0,1]^d:\ s_1+\cdots+s_d=1\}$$

and has density

$$f(s_1,\ldots,s_d)=\frac{\Gamma(\alpha_1+\cdots+\alpha_d)}{\Gamma(\alpha_1)\cdots\Gamma(\alpha_d)}\,s_1^{\alpha_1-1}\cdots s_d^{\alpha_d-1}$$

We will write $\boldsymbol{S}\sim\mathrm{Dirichlet}(\boldsymbol{\alpha})$.
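
The construction above can be checked by simulation (a quick sketch): normalizing independent Gamma random variables should give the same distribution as rdirichlet from the MCMCpack package,

> library(MCMCpack)
> alpha=c(2,2,5)
> n=10000
> G=cbind(rgamma(n,alpha[1]),rgamma(n,alpha[2]),rgamma(n,alpha[3]))
> S=G/rowSums(G)          # normalized Gamma vector
> D=rdirichlet(n,alpha)   # direct simulation
> apply(S,2,mean)         # both close to alpha/sum(alpha)
> apply(D,2,mean)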

The density, for different values of the parameter $\boldsymbol{\alpha}$, can be visualized below, e.g. $\boldsymbol{\alpha}=(2,2,2)$, with some kind of symmetry,
http://freakonometrics.hypotheses.org/files/2017/07/dirichlet222.gif
or with asymmetric parameters (e.g. $\boldsymbol{\alpha}=(5,2,2)$), below,
http://freakonometrics.hypotheses.org/files/2017/07/dirichlet522.gif
and finally, below, yet another choice of parameters,


Note that marginal distributions are also Dirichlet, in the sense that if

$$\boldsymbol{S}\sim\mathrm{Dirichlet}(\boldsymbol{\alpha})$$

then

$$(S_1,\ldots,S_k,S_{k+1}+\cdots+S_d)\sim\mathrm{Dirichlet}(\alpha_1,\ldots,\alpha_k,\alpha_{k+1}+\cdots+\alpha_d)$$

if $k<d$, and, if we aggregate all components but one, the $S_i$'s have Beta distributions,

$$S_i\sim\mathrm{Beta}(\alpha_i,\alpha_1+\cdots+\alpha_d-\alpha_i)$$

See Devroye (1986), section XI.4, or Frigyik, Kapila & Gupta (2010). This distribution might also be called the multivariate Beta distribution. In R, this function can be used as follows

> library(MCMCpack)
> alpha=c(2,2,5)
> x=seq(0,1,by=.05)
> vx=rep(x,length(x))
> vy=rep(x,each=length(x))
> vz=1-vx-vy
> V=cbind(vx,vy,vz)
> D=ddirichlet(V, alpha)
> persp(x,x,matrix(D,length(x),length(x)))

(to plot the density, as in the figures above). Note that we will come back to that distribution later on, for the so-called Liouville copulas (see also Gupta & Richards (1986)).