
Radial Graphs for Time Series

On the How to: Weather Radials post, there was a nice visualisation of temperatures. Since I am too old-fashioned for ggplot2, I wanted to reproduce a similar graph with base plot functions.

Assume that the daily temperatures are stored in a vector X (e.g. temperatures in Montréal, QC, in 2009). To get a radial plot, use

> n=length(X)
> theta=seq(0,1-1/n,length=n)*2*pi
> r=30+X
> plot(r*cos(pi/2-theta),r*sin(pi/2-theta),type="l",xlab="",ylab="",axes=FALSE)
> for(t in 1:n){
+   if(X[t]>0) CL=rgb(0,0,1,.4)
+   if(X[t]<0) CL=rgb(1,0,0,.4)
+   if(X[t]==0) CL="white"
+   segments((30+X[t])*cos(pi/2-theta[t]),(30+X[t])*sin(pi/2-theta[t]),30*cos(pi/2-theta[t]),30*sin(pi/2-theta[t]),col=CL)
+ }
> for(r in 10*seq(0,6)) lines(r*cos(pi/2-theta),r*sin(pi/2-theta),type="l",col="light blue")

Where to hide if you don’t want to get wet?

Following the previous post, two additional remarks. First, following a comment by @cosi, I quickly investigated a binomial fit to the distribution of the number of people not getting wet, for a fixed number of players on the field. It looks like it should be a binomial distribution with a fixed probability (2/3) and a size parameter that is affine in the number of players. @guigui suggested a connection with the “birds on a wire” problem (see e.g. http://www.cut-the-knot.org/).

n=p=rep(NA,20)
for(i in 1:20){
NSim=10000
N=Vectorize(NOTWET)(n=rep(3+2*i,NSim))   # NOTWET(n) returns the number of dry players (defined below)
n[i]=mean(N)/(1-var(N)/mean(N))          # implied binomial size parameter (method of moments)
p[i]=1-var(N)/mean(N)                    # implied binomial probability
}
plot(seq(5,43,by=2),n,col="red",type="b")

for the implied size parameter (shown above), and below, the implied probability parameter,

plot(seq(5,43,by=2),p,col="blue",type="b")

(as functions of the number of players). I’d be glad to get more details on that 2/3 probability.

Now, let us investigate another question, sent by email: “Where should you hide if you don’t want to get wet?” A first idea could be the following: given that some players are already on the field, where should I go if I do not want to get wet? Below are some simulations with 7 and 25 players (already on the field). The red area is where I would become someone’s target (perhaps even the target of two players…). The green area is the safe zone.

(with 7 players above, and 25 below)
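Such pictures can be reproduced with something like the following sketch (an assumption on my side, not necessarily the exact code used for the figures): each player shoots at his nearest neighbour, so a newcomer standing at a given location gets wet whenever he is closer to some player than that player’s current nearest neighbour. The grid resolution and the colours are arbitrary choices.

set.seed(1)
n=7                                        # number of players already on the field
x=runif(n); y=runif(n)
d=as.matrix(dist(cbind(x,y)))
diag(d)=Inf
dnn=apply(d,1,min)                         # distance from each player to his nearest neighbour
u=seq(0,1,length=201)
grid=expand.grid(gx=u,gy=u)
target=apply(grid,1,function(z)
  any(sqrt((x-z[1])^2+(y-z[2])^2)<dnn))    # TRUE if the newcomer becomes someone's target
plot(grid$gx,grid$gy,pch=15,cex=.4,
  col=ifelse(target,rgb(1,0,0,.4),rgb(0,1,0,.4)),
  xlab="",ylab="",axes=FALSE)
points(x,y,pch=19)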

It looks like the border might be safer than the middle of the field. But we have to confirm that intuition… or at least see whether it is valid.

Based on what was done the other day, it is possible to look at where the players who stayed dry were located (instead of simply counting them, as in the previous function). So here, we record where the non-wet players were standing,

NOTWET=function(n,p){
  x=runif(n)
  y=runif(n)
  d=as.matrix(dist(cbind(x,y), method = "euclidean", upper=TRUE))
  diag(d)=999999
  dmin=apply(d,2,which.min)                       # index of each player's nearest neighbour (his target)
  whonotwet=( (1:n) %notin% names(table(dmin)) )  # players who are nobody's target stay dry
  #plot(x[-whonotwet],y[-whonotwet],pch=19,col="blue",type="p")
  #points(x[whonotwet],y[whonotwet],pch=19,col="red")
  M=matrix(NA,p,p); u=seq(0,1,by=1/p)
  for(i in 1:p){
    for(j in 1:p){
      M[i,j]=sum((x[whonotwet]>=u[i])&(x[whonotwet]<u[i+1])&
                 (y[whonotwet]>=u[j])&(y[whonotwet]<u[j+1]))  # count dry players in grid cell (i,j)
    }}
  return(M)}

based on the function

"%notin%" <- function(x, y) x[!x %in% y]

On a given grid, we count the players who ended the game dry (which might avoid the boundary bias of nonparametric smooth estimators of the distribution, as we’ll see later on). For instance, with 11 players,

M11=matrix(0,25,25)
for(s in 1:100000){
M11=M11+NOTWET(11,25)   # accumulate counts of dry players over a 25x25 grid
}

Then we can plot the distribution, on the field,

COL=rev(heat.colors(101)); p=25
u=seq(0,1,by=1/p)
plot(0:1,0:1,col="white",xlab="",ylab="")
for(i in 1:p){
for(j in 1:p){
polygon(c(u[i],u[i],u[i+1],u[i+1]),
 c(u[j],u[j+1],u[j+1],u[j]),border=NA,
col=COL[trunc(M11[i,j]/max(M11)*100)+1])   # colour proportional to the count in cell (i,j)
}}

Red means a lot of non-wet people (i.e. safer zones). The graphs below are with 7 and 11 players respectively (from left to right),

with the following distribution on the diagonal: with 7 players, the corners are almost 4 times safer than the middle of the field,
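Such a diagonal plot can be obtained from the grid counts with a sketch like the one below (here using the M11 matrix computed above; the matrices for 7, 25 or 101 players are obtained in exactly the same way),

p=25
u=seq(0,1,by=1/p)
plot(u[-1]-1/(2*p),diag(M11)/sum(M11),type="b",
  xlab="position on the diagonal",ylab="proportion of dry players")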

Below are plotted the distributions of the locations of non-wet players when the total number of players was either 25 (on the left) or 101 (on the right),

again with the distribution on the diagonal,

Hence, the border is rather safe, but right next to the border it is no longer safe: if someone is standing right on the border, he will probably shoot at you, since there is no one behind him! This explains the strange behaviour near the borders (and corners; thanks JP for the intuitive explanation).
But would it be completely different with a field shaped as a disk ?

We can use the previous technique of working on a fixed grid (correcting for boundary bias, since the disk might cover only a fraction of a grid square), or keep the coordinates of the non-wet players and use a standard kernel-based estimator of the distribution (the light yellow circle outside the disk is simply due to the bias of the kernel estimator at the border),

NOTWET=function(n){
  x=(runif(n*20)*2-1)       # oversample points in the square [-1,1]x[-1,1]
  y=(runif(n*20)*2-1)
  I=which((x^2+y^2<1))      # keep only points inside the unit disk
  x=x[I];y=y[I]
  x=x[1:n];y=y[1:n]         # keep the first n of them
  d=as.matrix(dist(cbind(x,y), method = "euclidean", upper=TRUE))
  diag(d)=999999
  dmin=apply(d,2,which.min)
  whonotwet=( (1:n) %notin% names(table(dmin)) )
  return(cbind(x[whonotwet],y[whonotwet]))   # coordinates of the dry players
}

M=t(c(0,0))
for(s in 1:10000){
M=rbind(M,NOTWET(25))   # stack the coordinates of dry players over simulations
}
M=M[-1,]

library(ks)
HP=matrix(c(.001,0,0,.001),2,2)   # fixed (small) bandwidth matrix
K=kde(x=M, H=HP)
image(K$eval.points[[1]],K$eval.points[[2]],K$estimate,
col=rev(heat.colors(101)),xlim=c(-1,1),ylim=c(-1,1))

 

And note that the distribution of the number of players ending the game dry is the same for a square field and for a disk,

NOTWET2=function(n){
x=(runif(n*20)*2-1)*1
y=(runif(n*20)*2-1)*1
I=which((x^2+y^2<1))
x=x[I];y=y[I]
x=x[1:n];y=y[1:n]
(d=as.matrix(dist(cbind(x,y), 
method = "euclidean",upper=TRUE)))
diag(d)=999999
dmin=apply(d,2,which.min)
notwet=n-length(table(dmin))
return(notwet)}

NOTWET=function(n){
x=runif(n)
y=runif(n)
(d=as.matrix(dist(cbind(x,y), 
method = "euclidean",upper=TRUE)))
diag(d)=999999
dmin=apply(d,2,which.min)
notwet=n-length(table(dmin))
return(notwet)}

NSim=100000
Nsquare=Vectorize(NOTWET)(n=rep(25,NSim))
Ndisk=Vectorize(NOTWET2)(n=rep(25,NSim))
Tsq=table(Nsquare)
Tdk=table(Ndisk)
plot(as.numeric(names(Tsq)),Tsq/NSim,
type="b",col="red")
lines(as.numeric(names(Tdk)),Tdk/NSim,
type="b",pch=4,col="blue")


But so far, it was still simple… I wonder what it might become if we consider a non-convex field, with walls, where players might hide… Next time, a post on indoor paint-ball!

Circular or spherical data, and density estimation

A few years ago, while I was working on kernel-based density estimation for distributions with compact support (like copulas), I went through a series of papers on circular distributions. At the time, I thought it was something for mathematicians working on weird spaces… but during the past weeks, I have seen several potential applications of those estimators.

  • circular data density estimation

Consider the density of an angle, i.e. a function $f$ such that

$$\int_0^{2\pi} f(\omega)\,d\omega = 1$$

with a circular relationship, i.e. $f(\omega+2\pi)=f(\omega)$. It can be seen as an invariance by rotation.
von Mises proposed a parametric model in 1918 (see here or there), assuming that

$$f(\omega;\mu,\kappa)=\frac{\exp\{\kappa\cos(\omega-\mu)\}}{2\pi I_0(\kappa)}$$

where $I_0(\kappa)$ is the modified Bessel function of the first kind of order 0,

$$I_0(\kappa)=\frac{1}{2\pi}\int_0^{2\pi}\exp\{\kappa\cos\omega\}\,d\omega$$

(which is simply a normalization constant). There are two parameters here: $\kappa$, a concentration parameter, and $\mu$, a direction.
From a series of observed angles $\omega_1,\ldots,\omega_n$, the maximum likelihood estimator of $\kappa$ is the solution of

$$A(\widehat{\kappa})=\sqrt{\overline{C}^2+\overline{S}^2}$$

where

$$\overline{C}=\frac{1}{n}\sum_{i=1}^n\cos\omega_i$$

and

$$\overline{S}=\frac{1}{n}\sum_{i=1}^n\sin\omega_i$$

and where $A(\kappa)=I_1(\kappa)/I_0(\kappa)$, the $I_j$’s being modified Bessel functions of the first kind. Well, that estimator is biased, but it is possible to improve it (see here or there). This can be done easily in R (actually Jeff Gill – here – used that package in several applications). But I am not a big fan of that technique…
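For instance, with the circular package, something like the following sketch should do it (the simulated angles and parameter values below are purely illustrative; mle.vonmises() returns estimates of $\mu$ and $\kappa$, and bias=TRUE asks for the bias-corrected version of the concentration estimate),

library(circular)
set.seed(1)
W=rvonmises(n=200,mu=circular(pi/3),kappa=2)   # simulated angles
fit=mle.vonmises(W,bias=TRUE)                  # maximum likelihood, with bias correction
fit$mu                                         # estimated direction
fit$kappa                                      # estimated concentration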

  • density estimation for hours on simulated data

A nice application can be the estimation of the daily density of temporal events (e.g. phone calls, as we’ll see later on, or email arrival times). Let $X_i$ denote the time (in hours) of the $i$th observation (the $i$th phone call received). Then set

$$\Omega_i=2\pi\frac{X_i}{24}$$

The time is now seen as an angle. It is possible to consider the equivalent of a histogram,

set.seed(1)
library(circular)
X=rbeta(100,shape1=2,shape2=4)*24
Omega=2*pi*X/24
Omegat=2*pi*trunc(X)/24
H=circular(Omega,type="angle",units="radians",rotation="clock")
Ht=circular(Omegat,type="angle",units="radians",rotation="clock")
plot(Ht, stack=FALSE, shrink=1.3, cex=1.03,
axes=FALSE,tol=0.8,zero=c(rad(90)),bins=24,ylim=c(0,1))
points(Ht, rotation = "clock", zero =c(rad(90)),
col = "1", cex=1.03, stack=TRUE )

rose.diag(Ht-pi/2,bins=24,shrink=0.33,xlim=c(-2,2),ylim=c(-2,2),
axes=FALSE,prop=1.5)

or a kernel-based estimate of the density (the gray line on the right).

circ.dens = density(Ht+3*pi/2,bw=20)
plot(Ht, stack=TRUE, shrink=.35, cex=0, sep=0.0,
axes=FALSE,tol=.8,zero=c(0),bins=24,
xlim=c(-2,2),ylim=c(-2,2), ticks=TRUE, tcl=.075)
lines(circ.dens, col="darkgrey", lwd=3)
text(0,0.8,"24", cex=2); text(0,-0.8,"12",cex=2);
text(0.8,0,"6",cex=2); text(-0.8,0,"18",cex=2)

The code looks rather simple. But I am not very comfortable using code that I do not completely understand, so I wrote my own. The first step was to get a graph similar to the one on the right, except that I prefer my own kernel-based estimator. The idea is that instead of estimating the density on the $X_i$’s, we estimate it on the replicated sample $\{X_i-24, X_i, X_i+24\}$. Then we multiply by 3 to get the density only on $[0,24)$. For the bandwidth, I took the same one we would have used on the $X_i$’s.

The code is simply the following

U=seq(0,1,by=1/250)
O=U*2*pi
U12=seq(0,1,by=1/24)
O12=U12*2*pi
X=rbeta(100,shape1=2,shape2=4)*24   # simulated hours
OM=2*pi*X/24
XL=c(X-24,X,X+24)                   # replicated sample, to handle periodicity
d=density(X)                        # only used to get the bandwidth on the original sample
d=density(XL,bw=d$bw,n=1500)
I=which((d$x>=6)&(d$x<=30))         # keep one full 24-hour window
Od=d$x[I]/24*2*pi-pi/2              # angles
Dd=d$y[I]/max(d$y)+1                # radii (rescaled density, plotted outside the unit circle)

plot(cos(O),-sin(O),xlim=c(-2,2),ylim=c(-2,2),
type="l",axes=FALSE,xlab="",ylab="")
for(i in pi/12*(0:12)){
abline(a=0,b=tan(i),lty=1,col="light yellow")}
segments(.9*cos(O12),.9*sin(O12),1.1*cos(O12),1.1*sin(O12))
lines(Dd*cos(Od),-Dd*sin(Od),col="red",lwd=1.5)
text(.7,0,"6"); text(-.7,0,"18")
text(0,-.7,"12"); text(0,.7,"24")
R=1/24/max(d$y)/3+1
lines(R*cos(O),R*sin(O),lty=2)

Note that it is possible to visually emphasize the hours with few phone calls, or with a lot of them (compared with a homogeneous Poisson process), e.g.

plot(cos(O),-sin(O),xlim=c(-2,2),ylim=c(-2,2),
type="l",axes=FALSE,xlab="",ylab="")
for(i in pi/12*(0:12)){
abline(a=0,b=tan(i),lty=1,col="light yellow")}
segments(2*cos(O12),2*sin(O12),1.1*cos(O12),1.1*sin(O12), col="light grey")
segments(.9*cos(O12),.9*sin(O12),1.1*cos(O12),1.1*sin(O12))
text(.7,0,"6")
text(-.7,0,"18")
text(0,-.7,"12")
text(0,.7,"24")
R=1/24/max(d$y)/3+1            # level of the homogeneous (uniform) density, on the same scale
lines(R*cos(O),R*sin(O),lty=2)
AX=R*cos(Od);AY=-R*sin(Od)     # points on the uniform-density circle
BX=Dd*cos(Od);BY=-Dd*sin(Od)   # points on the estimated density
CM=cm.colors(200)
a=trunc(100*Dd/R)              # colour according to the ratio estimated/uniform
COUL=CM[a]
segments(AX,AY,BX,BY,col=COUL,lwd=2)
lines(Dd*cos(Od),-Dd*sin(Od),lwd=2)

We get here those two graphs,

To be honest, I do not really like that representation, even if it looks nice. If we compare that circular representation to a more classical one (from 0:00 till 23:59, on the graph on the left, below), I have trouble interpreting the areas in blue and pink.

On the left, we compare two densities, so the area in pink is the same as the area in blue. But here, it is no longer the case: the area in pink is always larger than the one in blue. So it might help to see where there is a difference, but there is a scaling issue that we cannot discuss further… But let us see if we can use that estimation technique on several problems.

  • density of wind direction

A standard application when studying angles is wind direction. For instance, in Montréal, it is possible to find hourly observations, starting in 1974 (we just need an R robot to pick up the information, but I’ll tell more about that in another post, someday). Here, we directly have an angle, so we can use code rather similar to the one above to estimate the distribution of wind direction in Montréal.
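A minimal sketch of that estimation, assuming the hourly directions are stored in a vector WindDir, in degrees with 0° pointing North and angles measured clockwise (the data below are just a placeholder; the replication trick is the same as for the hours, on [0,360) instead of [0,24)),

WindDir=runif(1000,0,360)               # placeholder for the real hourly observations
U=seq(0,2*pi,length=251)
XL=c(WindDir-360,WindDir,WindDir+360)   # replicated sample, to handle periodicity
d0=density(WindDir)
d=density(XL,bw=d0$bw,n=1500)
I=which((d$x>=0)&(d$x<360))
Od=d$x[I]/360*2*pi                      # directions as angles
Dd=d$y[I]/max(d$y)+1                    # rescaled density, plotted outside the unit circle
plot(cos(U),sin(U),type="l",axes=FALSE,xlab="",ylab="",
  xlim=c(-2,2),ylim=c(-2,2))
lines(Dd*cos(pi/2-Od),Dd*sin(pi/2-Od),col="red",lwd=1.5)
text(0,.7,"N"); text(.7,0,"E"); text(0,-.7,"S"); text(-.7,0,"W")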

Note that our estimate is consistent with several graphs that can be found on meteorological websites (e.g. the one above on the right, which was found here).

  • density of 911 phone calls

In a recent post (here) I wanted to check the “midnight crime” myth, using the hours of 911 phone calls in Montréal.

That was for all phone calls. But if we look more specifically, for burglaries, we have the distribution on the left, and for conflicts the one on the right

while for gun shots we have the distribution on the left, and for “troubles” (basically people making too much noise at parties) or “noise” the one on the right. We do clearly observe that gun shots occur a bit before midnight. See also here for another study, this time in NYC (thanks @PAC for the link).

  • density of earth temperatures, or earthquakes

Of course, it is also possible to work in higher dimension. Before, we went from densities on $\mathbb{R}$ to densities on the unit circle $\mathcal{S}^1\subset\mathbb{R}^2$. But similarly, it is possible to go from $\mathbb{R}^2$ to the unit sphere $\mathcal{S}^2\subset\mathbb{R}^3$. A nice application is global climate studies,

The idea is that points on the far left of the map above are extremely close to those on the far right. An application can be, e.g., earthquake occurrences. Data can be found here.

library(ks)
X=cbind(EQ$Longitude,EQ$Latitude)
Hpi1 = Hpi(x = X)
DX=kde(x = X, H = Hpi1)
library(maps)
map("world")
plot(DX,add=TRUE,col="red")
points(X,cex=.2,col="blue")
Y=rbind(cbind(X[,1],X[,2]),cbind(X[,1]+360,X[,2]),
cbind(X[,1]-360,X[,2]),cbind(X[,1],X[,2]+180),
cbind(X[,1]+360,X[,2]+180),cbind(X[,1]-360,X[,2]+180),
cbind(X[,1],X[,2]-180),cbind(X[,1]+360,X[,2]-180),
cbind(X[,1]-360,X[,2]-180))   # replicate the data to handle periodicity at the edges of the map
DY=kde(x = Y, H = Hpi1)
plot(DY,add=TRUE,col="purple")

Without any correction, we get the red level curves. The pink ones incorporate the correction.