Tag Archives: spherical

Overview on Multivariate Distributions

In June 2016, with Olivier L’Haridon, we will organize a (small) conference in Rennes on risk models in a multi-attribute framework. In order to fully enjoy the conference (more to come on the blog), we will organize an internal workshop on that topic every month. We will start tomorrow afternoon, 13:00-14:30: I will give a brief talk on multivariate distributions, with an emphasis on spherical / elliptical distributions, distributions on the simplex, and copulas. Slides are now online.

Visualising a Circular Density

This afternoon, Jean-Luc asked me for some help with an old post I published, minuit, l’heure du crime ("midnight, the hour of crime"), and with some graphs published a few days later, where I used a different visualisation, in another post.

The idea is that the hour can be seen as circular, in the sense that 23:58 is actually very close to 00:03. So when we use a nonparametric kernel estimator of time events, we have to take that property into account. More specifically, consider the density of an angle, i.e. a function f(\cdot) such that \int_0^{2\pi}f(\omega)d\omega=1, with a circular relationship, in the sense that f(\omega+2\pi)=f(\omega).

In the dataset sent by Jean-Luc, we have some thefts in a big city in France. The dataset is a simple spreadsheet with one column, containing times such as "00:20" or "17:45". Those are the (more or less precisely) reported times of thefts, as declared to the police.

B=read.table("Temp_Heures_VV.csv",header=TRUE,
  sep=";")
HM=as.character(B[,1])                  # times, as "HH:MM" strings
H=substr(HM,1,(nchar(HM)-3))            # hours
M=substr(HM,(nchar(HM)-1),(nchar(HM)))  # minutes
X=as.numeric(H)+as.numeric(M)/60        # time of day, on [0,24)

The time is a number from 0 to 24.

U=seq(0,1,by=1/250)
O=U*2*pi                     # fine grid of angles, to draw the circle
U12=seq(0,1,by=1/24)
O12=U12*2*pi                 # one tick per hour
OM=2*pi*X/24                 # hours seen as angles
XL=c(X-24,X,X+24)            # mirrored sample: yesterday, today, tomorrow
d=density(X)                 # only used to get the bandwidth on the original sample
d=density(XL,bw=d$bw,n=1500) # kernel estimator on the mirrored sample
I=which((d$x>=6)&(d$x<=30))  # keep one 24-hour window
Od=d$x[I]/24*2*pi-pi/2       # angles
Dd=d$y[I]/max(d$y)+1         # radii (rescaled density)

The idea to get a nice density estimate is to use a simple mirror technique: we take three copies of the data, one for today, one for yesterday, and one for tomorrow. Of course, we have to use a shorter bandwidth than the default one on that tripled sample: here, the one computed on the original data.
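As a quick sanity check (a sketch, using the objects defined above), the mass of the mirrored estimator on a single 24-hour window should be about 1/3, so multiplying by 3 should give approximately 1,

sum(d$y[I])*diff(d$x)[1]*3   # should be close to 1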

R=1/24/max(d$y)/3+1          # radius of the uniform density (the 1/3 compensates for the tripled sample)
plot(cos(O),-sin(O),xlim=c(-2,2),ylim=c(-2,2),
     type="l",axes=FALSE,xlab="",ylab="")
for(i in pi/12*(0:12)){      # grey diameters, one per hour
  segments(-cos(i),-sin(i),cos(i),sin(i),col="grey")} 
segments(.9*cos(O12),.9*sin(O12),   # hour ticks on the circle
         1.1*cos(O12),1.1*sin(O12))
text(.7,0,"6")
text(-.7,0,"18")
text(0,-.7,"12")
text(0,.7,"24")
lines(R*cos(O),R*sin(O),lty=2)      # dashed circle: the uniform distribution
AX=R*cos(Od);AY=-R*sin(Od)          # segments from the uniform circle...
BX=Dd*cos(Od);BY=-Dd*sin(Od)        # ...to the estimated density
CM=cm.colors(200)                   # graded palette, blueish below the circle, pinkish above
a=trunc(100*Dd/R)
COUL=CM[a]
segments(AX,AY,BX,BY,col=COUL,lwd=2)
lines(Dd*cos(Od),-Dd*sin(Od),lwd=2) # the kernel density estimate, in bold

The dotted line would be a uniform distribution over the day. The true distribution is the black bold line. The purple area is where we have more crimes, and the blue area where we have fewer. The blue area is equal to the purple one. There is a clear symmetry in the evening, around midnight (but not during the day: 6 am is not the same as 6 pm). This graph is the circular visualisation of the kernel density estimator, in the same way the rose diagram is the circular visualisation of the histogram.

Moving the North Pole to the Equator

I am still working with @3wen on visualizations of the North Pole. So far, it was not that difficult to generate maps, but we started to have problems with the ice region in the Arctic. More precisely, it was complicated to compute the area of this region (even if we can easily get a shapefile). Consider the globe,

library(ggplot2)
library(maps)
world.df <- map_data("world")   # assuming world.df is built from map_data()
worldmap <- ggplot() + 
geom_polygon(data = world.df, aes(x = long, y = lat, group = group)) +
scale_y_continuous(breaks = (-2:2) * 30) +
scale_x_continuous(breaks = (-4:4) * 45)

and then, add three points in the northern hemisphere, and plot the associated triangle

P1 <- worldmap + geom_polygon(data = triangle, aes(x = long, y = lat, group = group), 
fill ="blue", alpha = 0.6, col = "light blue", size = .8)+
geom_point(data = triangle, aes(x = long, y = lat, group = group),colour = "red")+

for some given projection, e.g.

coord_map("ortho", orientation=c(61, -74, 0))

This can be done with the following function

proj1=function(x=75){
# triangle with two fixed vertices at 60N, and an apex that moves north with x;
# when x exceeds 90, the apex goes over the Pole, onto the opposite meridian
triangle <- data.frame(long=c(-70,-110,-90*(x<90)+90*(x>90)),
lat=c(60,60,x*(x<90)+(90-(x-90))*(x>90)),group=1, region=1)
worldmap <- ggplot() + 
geom_polygon(data = world.df, aes(x = long, y = lat, group = group)) +
scale_y_continuous(breaks = (-2:2) * 30) +
scale_x_continuous(breaks = (-4:4) * 45)
P1 <- worldmap + geom_polygon(data = triangle, aes(x = long, y = lat, group = group), 
fill ="blue", alpha = 0.6, col = "light blue", size = .8)+
geom_point(data = triangle, aes(x = long, y = lat, group = group),colour = "red")+
coord_map("ortho", orientation=c(61, -74, 0)) 
print(P1)
}
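For instance (a usage sketch), proj1(75) keeps the apex at 75°N, on the same side as the base, while proj1(110) pushes it past the Pole, to 70°N on the opposite meridian,

proj1(75)
proj1(110)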


I am not sure I understand why the projection of the triangle is not convex on the graph above, but let's say it is not a big deal here. Actually, our interest is in regions (polygons, from a geometrical point of view) that do contain the North Pole. And here, it starts to get messy. I can easily move the upper point to the other side of the globe, but the polygon is no longer correct,

I do understand that this is a non-trivial problem, but it means that computing the area of a polygon (a region) that contains the North Pole should not be that simple. Which is exactly what we observed in our computations. And I believe that one heuristic interpretation is related to the following graph

My skills in geometry are extremely poor, so do not expect that I will go through the code of the function that computes the area of a polygon! Actually, my idea is the following: if the problem is that the North Pole is in the region, let's consider some rotation, to shift the North Pole onto the Equator. The following code takes longitudes and latitudes, and returns the new longitudes and latitudes after a rotation around the y-axis (the North Pole goes down, along the Greenwich meridian),

rotation=function(Z,theta){
lon=Z[,1]/180*pi; lat=Z[,2]/180*pi   # degrees to radians
x=cos(lon)*cos(lat)                  # points on the unit sphere
y=sin(lon)*cos(lat)
z=sin(lat)
pt1=cbind(x,y,z)
M=matrix(c(cos(theta),0,-sin(theta),0,1,0,sin(theta),0,cos(theta)),3,3)  # rotation around the y-axis
pt2=t(M%*%t(pt1))
lat=asin(pt2[,3])*180/pi             # back to degrees
lon=atan2(pt2[,2],pt2[,1])*180/pi
return(cbind(lon,lat))}

With a rotation going from \theta=0 (no change) to \theta=\pi/2 (the North Pole on the Equator), we get
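As a quick check (a sketch), the North Pole, at longitude 0 and latitude 90, should indeed land on the Equator, on the Greenwich meridian,

rotation(cbind(0,90),pi/2)   # returns lon 0, lat 0, up to numerical noise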

From now on, it is possible to compute the area of any region containing the North Pole! One simply has to apply the rotation function to all databases generated from shapefiles (and then the opposite rotation, to get back a proper location)! We can then compute the centroid of the ice region, for example,

library(gpclib)   # for gpc.poly and area.poly
# centroid() is assumed to come from another package (e.g. geosphere)
r.glace=glace
r.glace[,1:2]=rotation(glace[,1:2],pi/2)   # move the North Pole onto the Equator
M=matrix(NA,length(unique(glace$id)),3)
j=0
for(i in unique(glace$id)){
  j=j+1
  Polyglace <- as(r.glace[r.glace$id==i,c("long","lat")],"gpc.poly")
  M[j,1]=area.poly(Polyglace)              # area of each polygon
  M[j,2:3]=centroid(r.glace[r.glace$id==i,c("long","lat")])
}
# area-weighted centroid, rotated back to its proper location
Z=c(weighted.mean(M[,2],M[,1]),weighted.mean(M[,3],M[,1]))
rotation(rbind(Z),-pi/2)[1,]

And we get

And below, we can visualise all the locations of the centroid of the ice region over the past 25 years

Circular or spherical data, and density estimation

A few years ago, while I was working on kernel based density estimation for distributions with compact support (like copulas), I went through a series of papers on circular distributions. At that time, I thought it was something for mathematicians working on weird spaces... But during the past weeks, I saw several potential applications of those estimators.

  • circular data density estimation

Consider the density of an angle, say, i.e. a function f(\cdot) such that

\int_0^{2\pi}f(\omega)d\omega=1

with a circular relationship, i.e. f(\omega+2\pi)=f(\omega). It can be seen as an invariance by rotation.
von Mises proposed a parametric model in 1918 (see here or there), assuming that

f(\omega)=\frac{\exp[\kappa\cos(\omega-\mu)]}{2\pi I_0(\kappa)}

where I_0(\cdot) is the modified Bessel function of the first kind, of order 0,

I_0(\kappa)=\frac{1}{2\pi}\int_0^{2\pi}\exp[\kappa\cos(\omega)]d\omega

(which simply gives the normalization constant). There are two parameters here: \kappa (a concentration parameter) and \mu (a direction).
From a series of observed angles \omega_1,\dots,\omega_n, the maximum likelihood estimator of \kappa is the solution of

A(\hat\kappa)=\overline{R}

where

A(\kappa)=\frac{I_1(\kappa)}{I_0(\kappa)}

and

\overline{R}^2=\left(\frac{1}{n}\sum_{i=1}^n\cos(\omega_i)\right)^2+\left(\frac{1}{n}\sum_{i=1}^n\sin(\omega_i)\right)^2

and where I_p(\cdot) denotes the modified Bessel function of the first kind, of order p. Well, that estimator is biased, but it is possible to improve it (see here or there). This can be done easily in R, with the circular package (actually Jeff Gill – here – used that package in several applications). But I am not a big fan of that technique...
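For instance, a minimal sketch with the circular package, on simulated angles, using the bias-corrected version of the estimator,

library(circular)
set.seed(1)
omega=rvonmises(n=200,mu=circular(pi/4),kappa=2)   # simulated von Mises angles
fit=mle.vonmises(omega,bias=TRUE)                  # bias-corrected estimation
fit$mu; fit$kappa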

  • density estimation for hours on simulated data

A nice application can be the estimation of the daily density of temporal events (e.g. phone calls, as we will see later on, or email arrival times). Let X_i denote the time (in hours) of the ith observation (the ith phone call received, say). Then set

\Omega_i=2\pi\frac{X_i}{24}

The time is now seen as an angle. It is possible to consider the equivalent of a histogram,

set.seed(1)
library(circular)
X=rbeta(100,shape1=2,shape2=4)*24   # simulated times, in hours
Omega=2*pi*X/24                     # hours seen as angles
Omegat=2*pi*trunc(X)/24             # rounded to the hour
H=circular(Omega,type="angle",units="radians",rotation="clock")
Ht=circular(Omegat,type="angle",units="radians",rotation="clock")
plot(Ht, stack=FALSE, shrink=1.3, cex=1.03,
axes=FALSE,tol=0.8,zero=c(rad(90)),bins=24,ylim=c(0,1))
points(Ht, rotation = "clock", zero =c(rad(90)),
col = "1", cex=1.03, stack=TRUE )

rose.diag(Ht-pi/2,bins=24,shrink=0.33,xlim=c(-2,2),ylim=c(-2,2),
axes=FALSE,prop=1.5)

or a kernel based estimation of the density (the gray line on the right).

circ.dens = density(Ht+3*pi/2,bw=20)
plot(Ht, stack=TRUE, shrink=.35, cex=0, sep=0.0,
axes=FALSE,tol=.8,zero=c(0),bins=24,
xlim=c(-2,2),ylim=c(-2,2), ticks=TRUE, tcl=.075)
lines(circ.dens, col="darkgrey", lwd=3)
text(0,0.8,"24", cex=2); text(0,-0.8,"12",cex=2);
text(0.8,0,"6",cex=2); text(-0.8,0,"18",cex=2)

The code looks rather simple. But I am not very comfortable using code that I do not completely understand. So I wrote my own. The first step was to get a graph similar to the one on the right, except that I prefer my own kernel based estimator. The idea is that instead of estimating the density on the X_i's, we estimate it on the mirrored sample \{X_i-24, X_i, X_i+24\}. Then we multiply by 3, to get the density on [0,24] only. For the bandwidth, I took the one we would have used on the X_i's.

The code is simply the following

U=seq(0,1,by=1/250)
O=U*2*pi
U12=seq(0,1,by=1/24)
O12=U12*2*pi
X=rbeta(100,shape1=2,shape2=4)*24
OM=2*pi*X/24
XL=c(X-24,X,X+24)            # mirrored sample
d=density(X)                 # bandwidth from the original sample
d=density(XL,bw=d$bw,n=1500)
I=which((d$x>=6)&(d$x<=30))  # one 24-hour window
Od=d$x[I]/24*2*pi-pi/2
Dd=d$y[I]/max(d$y)+1

plot(cos(O),-sin(O),xlim=c(-2,2),ylim=c(-2,2),
     type="l",axes=FALSE,xlab="",ylab="")
for(i in pi/12*(0:12)){
  abline(a=0,b=tan(i),lty=1,col="light yellow")}
segments(.9*cos(O12),.9*sin(O12),1.1*cos(O12),1.1*sin(O12))
lines(Dd*cos(Od),-Dd*sin(Od),col="red",lwd=1.5)
text(.7,0,"6"); text(-.7,0,"18")
text(0,-.7,"12"); text(0,.7,"24")
R=1/24/max(d$y)/3+1
lines(R*cos(O),R*sin(O),lty=2)

Note that it is possible to stress, visually, hours with few phone calls, or with a lot of them (compared with a homogeneous Poisson process), e.g.

plot(cos(O),-sin(O),xlim=c(-2,2),ylim=c(-2,2),
     type="l",axes=FALSE,xlab="",ylab="")
for(i in pi/12*(0:12)){
  abline(a=0,b=tan(i),lty=1,col="light yellow")}
segments(2*cos(O12),2*sin(O12),1.1*cos(O12),1.1*sin(O12),col="light grey")
segments(.9*cos(O12),.9*sin(O12),1.1*cos(O12),1.1*sin(O12))
text(.7,0,"6")
text(-.7,0,"18")
text(0,-.7,"12")
text(0,.7,"24")
R=1/24/max(d$y)/3+1          # radius of the uniform density
lines(R*cos(O),R*sin(O),lty=2)
AX=R*cos(Od);AY=-R*sin(Od)   # segments from the uniform circle...
BX=Dd*cos(Od);BY=-Dd*sin(Od) # ...to the estimated density
CM=cm.colors(200)            # graded palette, blueish below the circle, pinkish above
a=trunc(100*Dd/R)
COUL=CM[a]
segments(AX,AY,BX,BY,col=COUL,lwd=2)
lines(Dd*cos(Od),-Dd*sin(Od),lwd=2)

We then get those two graphs,

To be honest, I do not really like that representation, even if it looks nice. If we compare that circular representation to a more classical one (from 0:00 till 23:59, on the graph on the left, below), I have trouble interpreting the areas in blue and pink.

On the left, we compare two densities, so the area in pink is the same as the area in blue. But here, it is no longer the case: the area in pink is always larger than the one in blue. So it might help to see when there is a difference, but there is a scaling issue that we cannot discuss further... But let us see if we can use that estimation technique on several problems.

  • density of wind direction

A standard application when studying angles is wind direction. For instance, in Montréal, it is possible to find hourly observations, starting in 1974 (we just need an R robot to pick up the information, but I will tell more about that in another post, someday). Here, we directly have an angle, so we can use code rather similar to the one above to estimate the distribution of wind direction in Montréal, as in the sketch below.
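A minimal sketch, assuming a (hypothetical) vector wind of hourly wind directions, in degrees,

WL=c(wind-360,wind,wind+360)   # same mirror trick, with a period of 360 degrees
d=density(wind)                # bandwidth computed on the original sample
d=density(WL,bw=d$bw,n=1500)
I=which((d$x>=0)&(d$x<=360))   # keep one full turn
O=seq(0,2*pi,length=251)
Od=d$x[I]/360*2*pi-pi/2        # directions seen as angles
Dd=d$y[I]/max(d$y)+1
plot(cos(O),-sin(O),xlim=c(-2,2),ylim=c(-2,2),
     type="l",axes=FALSE,xlab="",ylab="")
lines(Dd*cos(Od),-Dd*sin(Od),lwd=2)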

Note that our estimate is consistent with several graphs that can be found on meteorological websites (e.g. the one above on the right, that was found here).

  • density of 911 phone calls

In a recent post (here) I wanted to check the "midnight crime" myth, using hours of 911 phone calls in Montréal.

That was for all phone calls. But if we look more specifically, for burglaries, we have the distribution on the left, and for conflicts the one on the right

while for gun shots, we have the distribution on the left, and for "troubles" (basically people making too much noise at parties) or "noise" the one on the right. We clearly observe that gun shots occur a bit before midnight. See also here for another study, this time in NYC (thanks @PAC for the link).

  • density of earth temperatures, or earthquakes

Of course, it is also possible to work in higher dimension. Before, we went from densities on \mathbb{R} to densities on the unit circle \mathcal{S}_1. But similarly, it is possible to go from \mathbb{R}^2 to the unit sphere \mathcal{S}_2. A nice application being global climate studies,

The idea being that points on the left, above, are extremely close to the ones on the right. An application can be, e.g., earthquake occurrences. Data can be found here.

library(ks)
X=cbind(EQ$Longitude,EQ$Latitude)   # EQ: the earthquake dataset mentioned above
Hpi1 = Hpi(x = X)                   # plug-in bandwidth matrix
DX=kde(x = X, H = Hpi1)             # kernel estimator, without correction
library(maps)
map("world")
plot(DX,add=TRUE,col="red")
points(X,cex=.2,col="blue")
# mirror the sample in longitude (+/-360) and latitude (+/-180)
Y=rbind(cbind(X[,1],X[,2]),cbind(X[,1]+360,X[,2]),
cbind(X[,1]-360,X[,2]),cbind(X[,1],X[,2]+180),
cbind(X[,1]+360,X[,2]+180),cbind(X[,1]-360,X[,2]+180),
cbind(X[,1],X[,2]-180),cbind(X[,1]+360,X[,2]-180),
cbind(X[,1]-360,X[,2]-180))
DY=kde(x = Y, H = Hpi1)             # kernel estimator on the mirrored sample
plot(DY,add=TRUE,col="purple")

Without any correction, we get the red level curves; the pink ones include the correction.