
Short versus long papers, in academic journals

This Monday, during my talk on quantile regressions (at the Montreal R-meeting), we saw how those nice graphs can be interpreted, looking at the evolution of the slope of the linear regression, as a function of the probability level. One illustration was on large hurricanes, from Elsner, Kossin & Jagger (2008). The other one was on birthweight, from Abrevaya (2001).

It is also possible to illustrate that technique on academic publications, e.g. on the length of papers over time. Actually, the data we can extract from Scopus are quite similar to the ones used on hurricanes. For several journals, it is possible to look at the length of articles. Scopus is quite expensive ($60,000 per year for the campus, as far as I remember), so I can imagine the penalty I might have to pay for sharing such a dataset; hence, only the code is given below.

# import the Scopus extract (local file, the dataset itself cannot be shared)
base=read.table("/home/scopus.csv",
header=TRUE,sep=",")
# length of each article, from its first and last page
pages=base$Page.end-base$Page.start
year=base$Year

Again, a first idea can be to look at boxplots, and at a regression on (nonparametric) quantiles, here for Econometrica,

boxplot(pages~as.factor(year),col="light blue")
# empirical quantile of level p of the article length, year by year
Q=function(p=.9) as.vector(by(pages,as.factor(year),
function(x) quantile(x,p)))
u=1:16   # 16 years of publications here
p=.9
points(u,Q(p),pch=19,col="blue")
# linear regression on those annual quantiles, weighted by the
# number of papers per year
abline(lm(Q(p)~u,weights=table(year)),lwd=2,col="blue")

Consider now (as in the slides in the previous post) a quantile regression (instead of a regression on quantiles), for instance in the Annals of Probability,

library(quantreg)
u=seq(.05,.95,by=.01)
# estimate and standard error of the coefficients of the quantile
# regression of level tau (se="nid" forces summary.rq to return
# standard errors, whatever the sample size)
coefstd=function(u) summary(rq(pages~year,
tau=u),se="nid")$coefficients[,2]
coefest=function(u) summary(rq(pages~year,
tau=u),se="nid")$coefficients[,1]
CS=Vectorize(coefstd)(u)
CE=Vectorize(coefest)(u)
k=2   # k=2 is the slope (k=1 would be the intercept)
plot(u,CE[k,],ylim=c(min(CE[k,]-2*CS[k,]),
max(CE[k,]+2*CS[k,])))
# 95% confidence band around the estimated slope
polygon(c(u,rev(u)),c(CE[k,]+1.96*CS[k,],
rev(CE[k,]-1.96*CS[k,])),
col="light green",border=NA)
lines(u,CE[k,],lwd=2,col="red")
abline(h=0)

We have the following slope, for the year, as a function of the probability level,

The slope is always positive, so the size of papers is increasing with time, for short as well as long papers. But the influence of time is much larger for long papers than for short ones: for short papers (lower decile), the size keeps increasing, with one more page every three years; for long papers (upper decile), it is two more pages every three years.
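To read those numbers off the fitted model, one can extract the estimated slope at the lower and upper deciles from the CE matrix computed above (a small sketch; the one-third and two-thirds figures are the ones quoted above, read from the graph, not guaranteed outputs),

# slope of the quantile regression at the lower and upper deciles
CE[2,which.min(abs(u-.10))]   # short papers: about 1/3 page per year
CE[2,which.min(abs(u-.90))]   # long papers: about 2/3 page per year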

If we look now at the Annals of Statistics, we have

and for the evolution of the slope of the quantile regression,

Again the impact is positive: papers are longer in 2010 than 15 years ago. But the trend is the reverse: short papers (lower decile) got much longer, by almost one more page every year, while long papers (upper decile) increased only by one more page every two years… Initially, I wanted to run such a study over a much longer term, with quantile regressions and splines, to see when there might have been a change, both in the lower and upper tails (see the sketch below). Unfortunately, as suggested by some colleagues, there might have been some changes in the format of the journal (columns, margins, fonts, etc). That's a shame, because I rediscovered nice short papers of 5-10 pages, published 20 or 30 years ago. They are nice to read (and also potentially interesting for a post on the blog). 5 pages, that's perfect, but 40 pages, that's way too long. I wonder if I am the only one having this feeling, missing those short but extremely interesting papers….
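For the record, here is a minimal sketch of what I had in mind, assuming a long series of years is available, and using a B-spline basis for the time effect (the df=5 choice is arbitrary),

library(quantreg)
library(splines)
# quantile regressions with a spline effect of time, in both tails,
# to look for a possible change point (df=5 is an arbitrary choice)
reg10=rq(pages~bs(year,df=5),tau=.10)
reg90=rq(pages~bs(year,df=5),tau=.90)
yr=seq(min(year),max(year),length=100)
plot(year,pages,cex=.5,col="grey")
lines(yr,predict(reg10,newdata=data.frame(year=yr)),col="blue")
lines(yr,predict(reg90,newdata=data.frame(year=yr)),col="red")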

Playing with quantiles, part 1

A standard idea in extreme value theory (see e.g. here, in French unfortunately) is that, to estimate the 99.5% quantile (say), we just need to estimate a quantile of level 95% for observations exceeding the 90% quantile: conditionally on exceeding the 90% quantile, the level 99.5% indeed becomes (0.995-0.90)/(1-0.90)=0.95.

In extreme value theory, we assume that the 90% quantile (of the initial distribution) can be obtained easily, e.g. using the empirical quantile; then, for the exceedances, we fit a Pareto distribution (a Generalized Pareto one, to be precise), which yields a parametric estimator of the conditional 95% quantile, and hence of the initial 99.5% one. I.e.

$$\mathbb{P}(X>x)=\mathbb{P}(X>u)\cdot\mathbb{P}(X>x\,|\,X>u)$$

which can be written

$$\overline{F}(x)=\overline{F}(u)\cdot\overline{F}_u(x-u)$$

where $\overline{F}_u$ denotes the survival function of the exceedances above the threshold $u$, approximated by that of a Generalized Pareto distribution.

So, an estimation of the cumulative distribution function is

$$\widehat{F}(x)=1-\frac{N_u}{n}\left(1+\widehat{\xi}\,\frac{x-u}{\widehat{\beta}}\right)^{-1/\widehat{\xi}},\quad x>u,$$

where $N_u$ denotes the number of exceedances above $u$, out of $n$ observations,

and if we invert it, we get the popular expression for high level quantiles,

$$\widehat{Q}(p)=u+\frac{\widehat{\beta}}{\widehat{\xi}}\left[\left(\frac{n}{N_u}\,(1-p)\right)^{-\widehat{\xi}}-1\right]$$

Hence, we do not really care about observations in the core of the distribution.
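As a side note, here is a minimal numerical illustration of that formula, on simulated data (everything below is an assumption for the sketch: the Pareto sample, the threshold, and a hand-coded maximum likelihood fit of the Generalized Pareto distribution),

set.seed(1)
x=runif(10000)^(-1/2)   # Pareto sample, P(X>x)=x^(-2), so Q(99.5%)=14.14
u=quantile(x,.90)       # empirical 90% quantile, used as threshold
z=x[x>u]-u              # exceedances above the threshold
# negative log-likelihood of the Generalized Pareto distribution
nll=function(p){
xi=p[1]; beta=p[2]
if(beta<=0 || any(1+xi*z/beta<=0)) return(Inf)
length(z)*log(beta)+(1+1/xi)*sum(log(1+xi*z/beta))
}
est=optim(c(.1,1),nll)$par
xi=est[1]; beta=est[2]
n=length(x); Nu=length(z)
# plug into the inverted expression above, for the 99.5% quantile
u+beta/xi*((n/Nu*(1-.995))^(-xi)-1)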

And I was wondering if this can be transposed to quantile regressions. Hence, I would like to get a quantile regression of level 90% (say) of $Y$ given $X$, based on observations $(X_i,Y_i)$'s, but where all observations such that $Y_i\in[a,b]$, for some $a<b$, are missing. More precisely, I have the following sample (here half of the observations are missing),

Assume that we know that I have observations below the quantile of $Y$ of level 25%, and above the quantile of $Y$ of level 75%.
If I want to get the 5% quantile regression (and similarly the 95% one), the code is simply,

library(mnormt)
library(quantreg)
library(splines)
set.seed(1)
# simulate a Gaussian vector (X,Y) with correlation r
mu=c(0,0)
r=0
Sigma <- matrix(c(1,r,r,1), 2, 2)
Z=rmnorm(2500,mu,Sigma)
X=Z[,1]
Y=Z[,2]

base=data.frame(X,Y)
plot(X,Y,col="blue",cex=.7)
# remove all observations with Y between the 25% and 75% quantiles
I=(Y>qnorm(.25))&(Y<qnorm(.75))
baseI=base[I==FALSE,]
points(X[I],Y[I],col="light blue",cex=.7)
abline(h=qnorm(.25),lty=2,col="blue")
abline(h=qnorm(.75),lty=2,col="blue")
u=seq(-5,5,by=.02)
# 5% quantile regression on the full sample (dotted line)
reg=rq(Y~X,data=base,tau=.05)
lines(u,predict(reg,newdata=data.frame(X=u)),lty=2)
# on the truncated sample, the 5% quantile of the full distribution
# is the 10% quantile, since half of the observations are missing,
# hence tau=.05*2 (plain line)
reg=rq(Y~X,data=baseI,tau=.05*2)
lines(u,predict(reg,newdata=data.frame(X=u)))

The graph is the following

Dotted lines – in black – are the theoretical lines (if I had all the observations), and plain lines are obtained on the truncated sample (where half of the observations are missing). Instead of a standard linear quantile regression, it is also possible to try a spline regression,
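for instance with a minimal sketch like the following, where the B-spline basis (from the splines package, already loaded) and the choice df=8 are my own assumptions,

# spline quantile regressions, on the full and truncated samples
# (grid restricted to the range of X, to avoid extrapolating the basis)
v=seq(-3,3,by=.02)
reg=rq(Y~bs(X,df=8),data=base,tau=.05)
lines(v,predict(reg,newdata=data.frame(X=v)),lty=2)
reg=rq(Y~bs(X,df=8),data=baseI,tau=.05*2)
lines(v,predict(reg,newdata=data.frame(X=v)))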

So obviously, if I miss something in the middle, that's no big deal: dotted and plain lines are extremely close here.
But what if observations $X$ and $Y$ were correlated? Consider a Gaussian random vector $(X,Y)$ with correlation $r$ (here 0.6).
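To reproduce that experiment, it suffices to rerun the simulation above with a nonzero correlation, and to refit the two quantile regressions, e.g.

# same experiment as above, with correlated components
r=.6
Sigma <- matrix(c(1,r,r,1), 2, 2)
Z=rmnorm(2500,mu,Sigma)
X=Z[,1]
Y=Z[,2]
base=data.frame(X,Y)
# Y is still N(0,1) marginally, so the thresholds are unchanged
I=(Y>qnorm(.25))&(Y<qnorm(.75))
baseI=base[I==FALSE,]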

It looks like we overestimate the slope for high quantiles, but not for lower ones. So if observations are correlated, we have to be cautious with that technique.
But why could that be interesting? Well, because I wanted to run a quantile regression on marathon results. But I could not get the overall dataset (I had to import observations manually, and I have to admit that it was a bit boring). So I extracted the finish times of the first 10% of athletes, and of the last 10%. And I was wondering if it was enough to look at the 5% and 95% quantiles, as a function of the age of the runner… To be continued.