
Circular or spherical data, and density estimation

A few years ago, while I was working on kernel-based density estimation for distributions with compact support (like copulas), I went through a series of papers on circular distributions. At the time, I thought it was something for mathematicians working on weird spaces… but during the past weeks, I saw several potential applications of those estimators.

  • circular data density estimation

Consider the density of an angle, i.e. a function $f$ such that

$$\int_0^{2\pi} f(\omega)\,d\omega = 1$$

with a circular relationship, i.e. $f(\omega+2\pi)=f(\omega)$. It can be seen as invariance by rotation.
von Mises proposed a parametric model in 1918 (see here or there), assuming that

$$f(\omega) = \frac{\exp\{\kappa\cos(\omega-\mu)\}}{2\pi I_0(\kappa)}$$

where $I_0(\kappa)$ is the modified Bessel function of the first kind of order 0,

$$I_0(\kappa) = \frac{1}{2\pi}\int_0^{2\pi} \exp\{\kappa\cos(\omega)\}\,d\omega$$

(which is simply a normalizing constant). There are two parameters here: $\kappa$, a concentration parameter, and $\mu$, a direction.
From a series of observed angles $\omega_1,\ldots,\omega_n$, the maximum likelihood estimator of $\kappa$ is the solution of

$$A(\widehat{\kappa}) = \overline{R}$$

where

$$A(\kappa) = \frac{I_1(\kappa)}{I_0(\kappa)}$$

and

$$\overline{R} = \sqrt{\overline{C}^2 + \overline{S}^2}$$

and where $\overline{C} = \frac{1}{n}\sum_{i=1}^n \cos(\omega_i)$ and $\overline{S} = \frac{1}{n}\sum_{i=1}^n \sin(\omega_i)$, $I_0$ and $I_1$ being modified Bessel functions of the first kind. Well, that estimator is biased, but it is possible to improve it (see here or there). This can be done easily in R, e.g. with the circular package (actually Jeff Gill – here – used that package in several applications). But I am not a big fan of that technique…
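As a quick illustration (a minimal sketch on simulated angles, not real data), the circular package – also used below – provides rvonmises to draw from a von Mises distribution, and mle.vonmises to estimate $\mu$ and $\kappa$ by maximum likelihood, with an optional bias correction for $\kappa$,

library(circular)
set.seed(1)
theta=rvonmises(n=200,mu=circular(pi/4),kappa=2)
fit=mle.vonmises(theta,bias=TRUE)   # bias=TRUE asks for the bias-corrected estimate of kappa
fit$mu
fit$kappa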

  • density estimation for hours on simulated data

A nice application is the estimation of the daily density of temporal events (e.g. phone calls, as we'll see later on, or email arrival times). Let $X_i$ be the time (in hours) of the $i$th observation (the $i$th phone call received). Then set

$$\Omega_i = 2\pi\frac{X_i}{24}$$

The time is now seen as an angle. It is possible to consider the equivalent of a histogram,

set.seed(1)
library(circular)
X=rbeta(100,shape1=2,shape2=4)*24   # simulated times of day, in hours
Omega=2*pi*X/24                     # hours converted into angles
Omegat=2*pi*trunc(X)/24             # same, rounded down to the hour
H=circular(Omega,type="angle",units="radians",rotation="clock")
Ht=circular(Omegat,type="angle",units="radians",rotation="clock")
plot(Ht, stack=FALSE, shrink=1.3, cex=1.03,
axes=FALSE,tol=0.8,zero=c(rad(90)),bins=24,ylim=c(0,1))
points(Ht, rotation = "clock", zero =c(rad(90)),
col = "1", cex=1.03, stack=TRUE )

rose.diag(Ht-pi/2,bins=24,shrink=0.33,xlim=c(-2,2),ylim=c(-2,2),
axes=FALSE,prop=1.5)

or a kernel-based estimate of the density (the gray line on the right).

circ.dens = density(Ht+3*pi/2,bw=20)
plot(Ht, stack=TRUE, shrink=.35, cex=0, sep=0.0,
axes=FALSE,tol=.8,zero=c(0),bins=24,
xlim=c(-2,2),ylim=c(-2,2), ticks=TRUE, tcl=.075)
lines(circ.dens, col="darkgrey", lwd=3)
text(0,0.8,"24", cex=2); text(0,-0.8,"12",cex=2);
text(0.8,0,"6",cex=2); text(-0.8,0,"18",cex=2)

The code looks rather simple. But I am not very comfortable using code that I do not completely understand. So I wrote my own. The first step was to get a graph similar to the one on the right, except that I prefer my own kernel-based estimator. The idea is that instead of estimating the density on the $X_i$'s, we estimate it on the augmented sample $\{X_i-24,\,X_i,\,X_i+24\}$. Then we multiply by 3 to get a density on $[0,24)$ only. For the bandwidth, I took the same one we would have used on the $X_i$'s.

The code is simply the following

U=seq(0,1,by=1/250)
O=U*2*pi
U12=seq(0,1,by=1/24)
O12=U12*2*pi
X=rbeta(100,shape1=2,shape2=4)*24
OM=2*pi*X/24
XL=c(X-24,X,X+24)            # replicate the sample on [-24,0) and [24,48)
d=density(X)                 # only used to get the reference bandwidth
d=density(XL,bw=d$bw,n=1500) # kernel estimate on the augmented sample
I=which((d$x>=6)&(d$x<=30))  # keep a 24-hour window
Od=d$x[I]/24*2*pi-pi/2       # hours converted into angles
Dd=d$y[I]/max(d$y)+1         # rescaled density, plotted outside the unit circle

plot(cos(O),-sin(O),xlim=c(-2,2),ylim=c(-2,2),
type="l",axes=FALSE,xlab="",ylab="")
for(i in pi/12*(0:12)){
abline(a=0,b=tan(i),lty=1,col="light yellow")}
segments(.9*cos(O12),.9*sin(O12),1.1*cos(O12),1.1*sin(O12))
lines(Dd*cos(Od),-Dd*sin(Od),col="red",lwd=1.5)
text(.7,0,"6"); text(-.7,0,"18")
text(0,-.7,"12"); text(0,.7,"24")
R=1/24/max(d$y)/3+1
lines(R*cos(O),R*sin(O),lty=2)
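As a quick sanity check (reusing the objects d and I computed above), the restriction of the wrapped estimate to a 24-hour window should carry about one third of the total mass, so multiplying by 3 gives – approximately – a proper density on $[0,24)$,

sum(d$y[I])*diff(d$x)[1]*3   # numerical integral over [6,30], times 3, should be close to 1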

Note that it is possible to put more (visual) emphasis on hours with few phone calls, or with many (compared with a homogeneous Poisson process), e.g.

plot(cos(O),-sin(O),xlim=c(-2,2),ylim=c(-2,2),
type="l",axes=FALSE,xlab="",ylab="")
for(i in pi/12*(0:12)){
abline(a=0,b=tan(i),lty=1,col="light yellow")}
segments(2*cos(O12),2*sin(O12),1.1*cos(O12),1.1*sin(O12), col="light grey")
segments(.9*cos(O12),.9*sin(O12),1.1*cos(O12),1.1*sin(O12))
text(.7,0,"6")
text(-.7,0,"18")
text(0,-.7,"12")
text(0,.7,"24")
R=1/24/max(d$y)/3+1
lines(R*cos(O),R*sin(O),lty=2)
AX=R*cos(Od);AY=-R*sin(Od)             # points on the reference circle (uniform density)
BX=Dd*cos(Od);BY=-Dd*sin(Od)           # points on the estimated density
CM=cm.colors(200)                      # colour scale, from blue (below uniform) to pink (above)
a=trunc(100*Dd/R)
COUL=CM[a]
segments(AX,AY,BX,BY,col=COUL,lwd=2)   # fill the gap between the two curves
lines(Dd*cos(Od),-Dd*sin(Od),lwd=2)

We get here those two graphs,

To be honest, I do not really like that representation – even if it looks nice. If we compare that circular representation with a more classical one (from 0:00 till 23:59, on the graph on the left, below), I have trouble interpreting the areas in blue and pink. On the left, we compare two densities, so the area in pink is the same as the area in blue. But here, it is no longer the case: the area in pink is always larger than the one in blue. So it might help to see where the two differ, but there is a scaling issue that we cannot really discuss further… But let us see if we can use that estimation technique on several other problems.

  • density of wind direction

A standard application when studying angles is wind direction. For instance, in Montréal, it is possible to find hourly observations, starting in 1974 (we just need an R robot to pick up the information, but I'll tell more about that in another post, someday). Here, we directly have an angle, so we can use a code rather similar to the one used above to estimate the distribution of wind direction in Montréal, e.g. something like the sketch below.
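For instance, assuming the hourly wind directions have been stored in a vector, say wind, in degrees (this is just a sketch; the actual scraping and cleaning of the data is left for that other post), one could use

library(circular)
W=circular(wind,units="degrees",template="geographics")   # 0 = North, clockwise
plot(W,stack=TRUE,shrink=1.3,cex=.5)
lines(density(W,bw=40),col="red",lwd=2)                   # kernel-based density of directions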

Note that our estimate is consistent with several graphs that can be found on meteorological websites (e.g. the one above on the right, which was found here).

  • density of 911 phone calls

In a recent post (here) I wanted to check the "midnight crime" myth, using the hours of 911 phone calls in Montréal.

That was for all phone calls. But if we look more specifically, for burglaries we have the distribution on the left and for conflicts the one on the right, while for gun shots we have the distribution on the left and for "troubles" (basically people making too much noise at parties) or "noise" the one on the right. We clearly observe that gun shots occur a bit before midnight. See also here for another study, this time in NYC (thanks @PAC for the link).

  • density of earth temperatures, or earthquakes

Of course, it is also possible to work in higher dimension. Before, we went from densities on $\mathbb{R}$ to densities on the unit circle $\mathcal{S}^1$. Similarly, it is possible to go from $\mathbb{R}^2$ to the unit sphere $\mathcal{S}^2$. A nice application being global climate studies,

The idea being that points on the left above are extremely close to the ones on the right. An application can be, e.g., on earthquake occurrences. Data can be found here.

library(ks)
# EQ: data frame of earthquake locations (longitude, latitude), see the link above
X=cbind(EQ$Longitude,EQ$Latitude)
Hpi1 = Hpi(x = X)
DX=kde(x = X, H = Hpi1)
library(maps)
map("world")
plot(DX,add=TRUE,col="red")
points(X,cex=.2,col="blue")
# replicate the sample by shifting longitudes (+/- 360) and latitudes (+/- 180),
# so that points close on the sphere remain close for the estimation
Y=rbind(cbind(X[,1],X[,2]),cbind(X[,1]+360,X[,2]),
cbind(X[,1]-360,X[,2]),cbind(X[,1],X[,2]+180),
cbind(X[,1]+360,X[,2]+180),cbind(X[,1]-360,X[,2]+180),
cbind(X[,1],X[,2]-180),cbind(X[,1]+360,X[,2]-180),
cbind(X[,1]-360,X[,2]-180))
DY=kde(x = Y, H = Hpi1)
plot(DY,add=TRUE,col="purple")

Without any correction, we get the red level curves. The purple ones take the correction into account.

In statistics, having too much information might not be a good thing

A common idea in statistics is that if we don't know something, and we use an estimator of that something (instead of the true value), then there will be some additional uncertainty. For instance, consider an i.i.d. random sample from a Gaussian distribution. Then, a confidence interval for the mean is

$$\bar{x} \pm z_{1-\alpha/2}\,\frac{\sigma}{\sqrt{n}}$$

where $z_{1-\alpha/2}$ is the quantile of probability level $1-\alpha/2$ of the standard normal distribution $\mathcal{N}(0,1)$. But the standard deviation $\sigma$ (the something I was talking about earlier) is usually unknown. So we substitute an estimate of the standard deviation, e.g.

$$\widehat{\sigma} = \sqrt{\frac{1}{n-1}\sum_{i=1}^n (x_i-\bar{x})^2}$$

and the cost we have to pay is that the new confidence interval is

$$\bar{x} \pm t_{n-1,1-\alpha/2}\,\frac{\widehat{\sigma}}{\sqrt{n}}$$

where now $t_{n-1,1-\alpha/2}$ is the quantile of the Student $t$ distribution, of probability level $1-\alpha/2$, with $n-1$ degrees of freedom.
We call it a cost since the new confidence interval is now larger (the Student distribution has higher upper-quantiles than the Gaussian distribution).
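As a small numerical illustration (a minimal sketch on a made-up Gaussian sample, with true standard deviation 2), one can compare the interval based on the known $\sigma$ with the one based on $\widehat{\sigma}$ and the Student quantile,

set.seed(1)
x=rnorm(20,mean=5,sd=2)
n=length(x); alpha=.05
mean(x)+c(-1,1)*qnorm(1-alpha/2)*2/sqrt(n)            # sigma known (here, 2)
mean(x)+c(-1,1)*qt(1-alpha/2,df=n-1)*sd(x)/sqrt(n)    # sigma estimated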
So usually, if we substitute an estimate for the true value, there is a price to pay.
A few years ago, with Jean-David Fermanian and Olivier Scaillet, we were writing a survey on copula density estimation (using kernels, here). At the end, we wanted to add a small paragraph on the following fact: we assumed that we wanted to fit a copula on a sample $(U_{1,i},U_{2,i})$, i.i.d. with distribution $C$, a copula, but in practice, we start from a sample $(X_{1,i},X_{2,i})$ with joint distribution $F$ (assumed to have continuous margins, and a – unique – copula $C$). But since the margins are usually unknown, there should be a price to pay for not observing them.
To be more formal, in a perfect world, we would consider

$$(U_{1,i},U_{2,i}) = \big(F_1(X_{1,i}),\,F_2(X_{2,i})\big)$$

but in the real world, we have to consider

$$(\widehat{U}_{1,i},\widehat{U}_{2,i}) = \big(\widehat{F}_1(X_{1,i}),\,\widehat{F}_2(X_{2,i})\big)$$

where it is standard to consider ranks, i.e. $\widehat{F}_1$ and $\widehat{F}_2$ are the empirical cumulative distribution functions.
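In R, those pseudo-observations are simply rescaled ranks; a minimal sketch, assuming the observations are stored in a two-column matrix X, would be

n=nrow(X)
U.hat=apply(X,2,rank)/(n+1)   # pseudo-observations (divided by n+1 rather than n, to stay inside the unit square)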
My point is that when I ran simulations for the survey (the idea was more to give illustrations of several estimation techniques than proofs of technical theorems), we observed that the price to pay… was negative! I.e. the variance of the estimator of the density (anywhere on the unit square) was smaller on the pseudo-sample $(\widehat{U}_{1,i},\widehat{U}_{2,i})$ than on the perfect sample $(U_{1,i},U_{2,i})$.
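A rough simulation sketch of that comparison (this is not the code from the survey: the Gaussian-copula model, the evaluation point and the naive product-kernel estimator below are only placeholders) could be

library(MASS)                    # for mvrnorm
set.seed(1)
n=500; nsim=200; r=.5
u0=c(.3,.7)                      # arbitrary point of the unit square
est=function(U,u0,h=.1){
  # naive product-kernel density estimate at u0 (no boundary correction, for illustration only)
  mean(dnorm((u0[1]-U[,1])/h)*dnorm((u0[2]-U[,2])/h))/h^2}
res=replicate(nsim,{
  Z=mvrnorm(n,c(0,0),matrix(c(1,r,r,1),2,2))
  U=pnorm(Z)                     # perfect world: the true (Gaussian) margins are known
  U.hat=apply(Z,2,rank)/(n+1)    # real world: pseudo-observations from ranks
  c(perfect=est(U,u0),ranks=est(U.hat,u0))})
apply(res,1,var)                 # compare the variances of the two estimators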
At that time, we could not understand why we got that counter-intuitive result: even if we do know the true (marginal) distributions, it is better not to use them, and to use instead a nonparametric estimator. Our interpretation was based on the concept of discrepancy, and was related to the Latin hypercube construction:

With ranks, the data are more regular, and the marginal distributions are exactly uniform on the unit interval. So there is less variance.
This was our heuristic interpretation.
A couple of weeks ago, Christian Genest and Johan Segers proved that intuition in an article published in JMVA,

Well, we observed something for finite $n$, but Christian and Johan obtained an analytical result. Hence, if we denote

$$C_n(u_1,u_2) = \frac{1}{n}\sum_{i=1}^n \mathbf{1}\big(U_{1,i}\le u_1,\,U_{2,i}\le u_2\big)$$

the empirical copula in the perfect world (with known margins) and

$$\widehat{C}_n(u_1,u_2) = \frac{1}{n}\sum_{i=1}^n \mathbf{1}\big(\widehat{U}_{1,i}\le u_1,\,\widehat{U}_{2,i}\le u_2\big)$$

the one constructed from the pseudo sample, they obtained that, everywhere

$$\lim_{n\to\infty}\mathrm{Var}\big[\sqrt{n}\,\widehat{C}_n(u_1,u_2)\big] \;\le\; \lim_{n\to\infty}\mathrm{Var}\big[\sqrt{n}\,C_n(u_1,u_2)\big]$$

with nice graphs illustrating that comparison,

So I was very happy last week when Christian showed me their results, and to learn that our intuition was correct. Nevertheless, it is still a very counter-intuitive result… If anyone has seen similar things, I'd be glad to hear about it!