Category Archives: Graphics

Visualizing the First-Round Results

Just a few lines of code, to visualize the results of the first round of the French presidential election. The idea is to draw a fairly minimalist map, with circles centred on the centroids of the départements. We start by getting the data for the base map, a 7z file from the IGN website.

download.file("https://wxs-telechargement.ign.fr/oikr5jryiph0iwhw36053ptm/telechargement/inspire/GEOFLA_THEME-DEPARTEMENTS_2016$GEOFLA_2-2_DEPARTEMENT_SHP_LAMB93_FXX_2016-06-28/file/GEOFLA_2-2_DEPARTEMENT_SHP_LAMB93_FXX_2016-06-28.7z",destfile = "dpt.7z")

This file contains information about the centroids

library(maptools)
library(maps)
departements<-readShapeSpatial("DEPARTEMENT.SHP")
plot(departements)
points(departements@data$X_CENTROID,departements@data$Y_CENTROID,pch=19,col="red")

Since this does not work very well, let us do it by hand, for instance for Ille-et-Vilaine,

pos=which(departements@data[,"CODE_DEPT"]==35)
Poly_35=departements[pos,]
plot(departements)
plot(Poly_35,col="yellow",add=TRUE)
departements@data[pos,c("X_CENTROID","Y_CENTROID")]
points(departements@data[pos,c("X_CENTROID","Y_CENTROID")],pch=19,col="red")
library(rgeos)
(ctd=gCentroid(Poly_35,byid=TRUE))
points(ctd,pch=19,col="blue")

Since this works better, we will use these centroids.

ctd=as.data.frame(gCentroid(departements,byid=TRUE))
plot(departements)
points(ctd,pch=19,col="blue")

Now, we need the election results, by département. We can scrape the website of the Ministry of the Interior. There is one page per département, so it is easy to loop over them. However, the URL also requires the region code. Being a bit lazy, instead of building a lookup table, we simply try all the region codes until one works. The idea is to extract the number of votes obtained by one of the candidates.

candidat="M. Emmanuel MACRON"
library(XML)
voix=function(no=35){
testurl=FALSE
i=1
vect_reg=c("084","027","053","024","094","044","032","028","075","076","052",
"093","011","001","002","003","004")
region=NA
while((testurl==FALSE)&(i<=20)){
reg=vect_reg[i]
nodpt=paste("0",no,sep="")
# if(!is.na(as.numeric(no))){if(as.numeric(no)<10) nodpt=paste("00",no,sep="")}
url=paste("http://elections.interieur.gouv.fr/presidentielle-2017/",reg,"/",nodpt,"/index.html",sep="")
test=try(htmlParse(url),silent =TRUE)
if(!inherits(test, "try-error")){testurl=TRUE
region=reg}
i=i+1
}
tabs=readHTMLTable(url)
tab=tabs[[2]]
nb=tab[tab[,1]==candidat,"Voix"]
a<-unlist(strsplit(as.character(nb)," "))
as.numeric(paste(a,collapse=""))}

We can then test it

> voix(35)
[1] 84648

Since it seems to work, we do it for all the départements

liste_dpt=departements@data$CODE_DEPT
nbvoix=Vectorize(voix)(liste_dpt)

We can then visualize the results on a map.

plot(departements,border=NA)
points(ctd,pch=19,col=rgb(1,0,0,.25),cex=nbvoix/50000)
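Since readers tend to compare the areas of the circles rather than their radii, a small variant is to make the area proportional to the number of votes (the scaling constant below is arbitrary):

plot(departements,border=NA)
points(ctd,pch=19,col=rgb(1,0,0,.25),cex=sqrt(nbvoix/20000))  # area ~ number of votes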

And we can also try with another candidate,

candidat="Mme Marine LE PEN"

and we obtain the following map

plot(departements,border=NA)
points(ctd,pch=19,col=rgb(0,0,1,.25),cex=nbvoix/50000)

Evolution of the Number of Job Seekers

Let us continue with the projects handed in for the R project of the Data Science pour l'Actuariat program: Meryem Schalck proposed a nice animation of the job-seeker statistics. On http://dares.travail-emploi.gouv.fr/, one can download the series describing the evolution of the number of job seekers, by département (among other breakdowns), as Excel files. To keep things simple, I put a csv copy online,

ufc<- read.csv("http://freakonometrics.free.fr/DemandeV2.csv", header=TRUE, sep = ";")

To load the packages, Meryem suggests a nice little function

Chargepackages <- function(x){
x <- as.character(x)
if(!require(x, character.only = TRUE)){
install.packages(x, dependencies = TRUE)
require(x, character.only = TRUE)
}
}
packages <- list("sp", "rgdal", "animation", "rgeos", "mapproj", "maptools", "dplyr",
"ggplot2", "RColorBrewer", "classInt", "PBSmapping", "ggmap", "splancs", "osmar",
"maps", "scales", "geosphere", "RCurl", "bitops", "XML", "htmltools")
lapply(packages, Chargepackages)

Let us first rework the data a little

x<-ufc$nombre
y<-ufc$demande
dt<-as.Date(ufc$date,"%d/%m/%Y")
n<-length(x)

and then we build the animation,

colfunc <- colorRampPalette(c("green","yellow", "red"))
YlOrBr <- data.frame(col=colfunc(length(unique(y))),num=unique(y)[order(unique(y))])
ufc %>%
left_join(YlOrBr, by=c("demande"="num")) -> YlOrBr
oopt = ani.options(interval = 0.5)
library(scales)
YlOrBr$Date<-as.Date(YlOrBr$date,"%d/%m/%Y")
ani.record(reset = TRUE) # clear history before recording
for (i in 1:length(unique(dt))) {
YlOrBr[i>=x,] %>%
ggplot() +  geom_point(aes(x=Date,y=demande),colour=as.character(YlOrBr$col[i>=x]))+
scale_x_date(breaks=date_breaks("6 month"),limits = c(min(dt),max(dt))) +
ylim(c(1000,max(y))) +
ggtitle ("Nombre de demandeurs d'emploi par mois entre 01/2000 et 04/2016") +
xlab("Date") +
ylab("Nombre de demandeurs d'emploi cat A ensemble") +
theme(axis.text.x  = element_text(angle=45, vjust=0.5, size=5)) -> p
print(p)
ani.record()
}
oopts = ani.options(interval = 0.2)
ani.record()
saveHTML(ani.replay(),img.name = "record_plot")
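If an animated gif is preferred to the HTML output, the recorded frames can also be replayed into saveGIF (assuming ImageMagick is installed on the machine):

saveGIF(ani.replay(), movie.name = "demandeurs.gif")  # same frames, gif output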

and voilà

Inter-relationships in a matrix

Last week, I wanted to display inter-relationships between data in a matrix. My friend Fleur, from AXA, mentioned an interesting possible application, to car accidents. In car-against-car accidents, it might be interesting to see which parts of the cars were involved. On https://www.data.gouv.fr/fr/, we can find such a dataset, with a lot of information on car accidents involving bodily injuries (in France, a police report is mandatory, and all of them are collected in a big dataset… actually several datasets, with information on the people involved, the cars, the locations, etc). For 2014 claims, the dataset is

> base = read.csv("https://www.data.gouv.fr/s/resources/base-de-donnees-accidents-corporels-de-la-circulation-sur-6-annees/20150806-153355/vehicules_2014.csv")

Let us keep only claims involving two vehicles,

> T=table(base$Num_Acc)
> idx=names(T)[which(T==2)]

For 2014, we have 32,222 claims.

> length(idx)
[1] 32222

In this dataset, we have information about where cars were hit,

plus ‘9’ for multiple hits (e.g. in rollover accidents), while ‘0’ stands for missing information.

> nom=c("NA","Front","Front R",'Front L',"Back","Back R","Back L","Side R","Side L","Multiple")

Now, we simply have to go through our dataset, and get the matrix. My first idea was to get a symmetric one,

> B=base[base$Num_Acc %in% idx,]  
> B=B[order(B$Num_Acc),]
> M=matrix(0,10,10)
> for(i in seq(1,nrow(B),by=2)){
+   a=B$choc[i]+1
+   b=B$choc[i+1]+1
+   M[a,b]=M[a,b]+1
+   M[b,a]=M[b,a]+1
+ }
> rownames(M)=nom
> colnames(M)=nom

The problem, when we ask for a symmetric chord diagram, is that we cannot have Front – Front claims (since values on the diagonal are removed)

> library(circlize)
> chordDiagramFromMatrix(M,symmetric=TRUE)

So let’s pretend that there could be some distinction in the dataset between the first and the second row: say the first one is the ‘responsible’ driver, or, for an insurer, the first one is your insured. Just to avoid this symmetry problem.

> M=matrix(0,10,10)
> for(i in seq(1,nrow(B),by=2)){
+   a=B$choc[i]+1
+   b=B$choc[i+1]+1
+ M[a,b]=M[a,b]+1
+ }
> rownames(M)=paste("A",nom,sep=" ")
> colnames(M)=paste("B",nom,sep=" ")

If we visualize the chord diagram, this time it is more complex to analyze,

> chordDiagram(M)

Below we have the first row (say our driver, letter A) and on top, the second row (say the other driver, letter B),

In bodily injury claims, we observe a large proportion of Front – Front claims, as well as Front – Back. And, as expected, Back – Back claims are not that common…
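One possible tweak to the diagram above is to colour the sectors by prefix, so that the two drivers are easier to tell apart (a sketch using chordDiagram’s grid.col argument; the two colours are arbitrary):

> cols = c(rep("firebrick", 10), rep("steelblue", 10))
> names(cols) = c(rownames(M), colnames(M))
> chordDiagram(M, grid.col = cols)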

Minimalist Maps

This week, I mentioned a series of maps, on Twitter,

Friday evening, just before leaving the office to pick up the kids after their first week back in class, Matthew Champion sent me an email, asking for more details. He wanted to know if I had produced those graphs, and if he could mention them in a post. The truth is, I have no idea who produced those graphs, but I told him one could easily reproduce them. For instance, for the cities, in R, use

> library(maps)
> data("world.cities")
> plot(world.cities$lon,world.cities$lat,
+ pch=19,cex=.7,axes=FALSE,xlab="",ylab="")

It is possible to get a more minimalist one by plotting only cities with more than 100,000 inhabitants, e.g.,

> world.cities2 = world.cities[
+ world.cities$pop>100000,]
> plot(world.cities2$lon,world.cities2$lat,
+ pch=19,cex=.7,axes=FALSE,xlab="",ylab="")

For the airports, it was slightly more complex since on http://openflights.org/data.html#airport, 6,977 airports are listed. But on http://www.naturalearthdata.com/, I found another dataset with only 891 airports.

> library(maptools)
> shape <- readShapePoints(
+ "~/data/airport/ne_10m_airports.shp")
> plot(shape,pch=19,cex=.7)
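Note that maptools and its readShape* functions have since been retired from CRAN; the same map can be drawn with the sf package (a sketch, assuming the same shapefile path):

> library(sf)
> shape <- st_read("~/data/airport/ne_10m_airports.shp")
> plot(st_geometry(shape), pch = 19, cex = .7)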

On the same website, one can find a dataset for ports,

> shape <- readShapePoints(
+ "~/data/airport/ne_10m_ports.shp")
> plot(shape,pch=19,cex=.7)

This is for graphs based on points. For those based on lines, for instance rivers, shapefiles can be downloaded from https://github.com/jjrom/hydre/tree/, and then, use

> require(maptools)
> shape <- readShapeLines(
+ "./data/river/GRDC_687_rivers.shp")
> plot(shape,col="blue")

For roads, the shapefile can be downloaded from http://www.naturalearthdata.com/

> shape <- readShapeLines(
+ "./data/roads/ne_10m_roads.shp")
> plot(shape,lwd=.5)

Last, but not least, for lakes, we need the polygons,

> shape <- readShapePoly(
+ "./data/lake/ne_10m_lakes.shp")
> plot(shape,col="blue",border="blue",lwd=2)

Nice, isn’t it? See See the world differently with these minimalist maps for Matthew Champion’s post.

Shapefiles from Isodensity Curves

Recently, with @3wen, we wanted to play with isodensity curves. The problem is that it is difficult to get – numerically – the equation of the contour (even if we can easily plot it). Consider the following surface (just for fun, in order to illustrate the idea)

> f=function(x,y) x*y+(1-x)*(1-y)
> u=seq(0,1,length=21)
> v=seq(0,1,length=11)
> f=outer(u,v,f)
> angle=30  # viewing angle
> persp(u,v,f,theta=angle,phi=10,box=TRUE,
+ shade=TRUE,ticktype="detailed",xlab="",
+ ylab="",zlab="",col="yellow")

For instance, assume that we want to locate the areas where the density exceeds 0.7 (here in the lower left corner, SW, and the upper right corner, NE)

> image(u,v,f)
> contour(u,v,f,add=TRUE,levels=.7)
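If the coordinates of the isodensity curve are needed numerically (which was the initial problem), one option is grDevices::contourLines, which returns the points of each contour level as a list (a quick sketch, on the surface above):

> cl = contourLines(u, v, f, levels = .7)
> length(cl)       # one component per closed curve
> head(cl[[1]]$x)  # x-coordinates along the first contour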

Continue reading Shapefiles from Isodensity Curves

Kernel Density Estimation with Ripley’s Circumferential Correction

The revised version of the paper Kernel Density Estimation with Ripley’s Circumferential Correction is now online, on hal.archives-ouvertes.fr/.

In this paper, we investigate (and extend) Ripley’s circumference method to correct the bias of density estimation near the edges (or frontiers) of regions. The idea of the method was theoretical and difficult to implement. We provide a simple technique — based on properties of Gaussian kernels — to efficiently compute weights to correct border bias on frontiers of the region of interest, with an automatic selection of an optimal radius for the method. We illustrate the use of that technique to visualize hot spots of car accidents and campsite locations, as well as the location of bike thefts.

There are new applications, and new graphs, too

Most of the codes can be found on https://github.com/ripleyCorr/Kernel_density_ripley (as well as datasets).

Generating Hurricanes with a Markov Spatial Process

The National Hurricane Center (NHC) collects datasets with all storms in the North Atlantic, the North Atlantic Hurricane Database (HURDAT). For all storms, we have the location of the storm, every six hours (at midnight, six a.m., noon and six p.m.). Note that we also have the date, the maximal wind speed – over a 6-hour window – and the pressure in the eye of the storm.

It is possible to run the following function

library(XML)
extract.track=function(year=2012,p=TRUE){

Continue reading Generating Hurricanes with a Markov Spatial Process

Allez les Bleus !

In almost three weeks, the (FIFA) World Cup will start, in Brazil. I have to admit that I am not a big fan of soccer, so I will not talk too much about it. Actually, I wanted to talk about colors, and variations on some colors. For instance, there are a lot of blues. In order to visualize standard blues, let us consider the following figure, inspired by the well-known chart of R colors,

BLUES=colors()[grep("blue",colors())]
RGBblues=col2rgb(BLUES)
library(grDevices)
HSVblues=rgb2hsv( RGBblues[1,], RGBblues[2,], RGBblues[3,])
HueOrderBlue=order( HSVblues[1,], HSVblues[2,], HSVblues[3,] )
SetTextContrastColor=function(color) ifelse( mean(col2rgb(color)) > 127, "black", "white")
TextContrastColor=unlist( lapply(BLUES, SetTextContrastColor) )
plot(0, type="n", ylab="", xlab="",axes=FALSE, ylim=c(0,11), xlim=c(0,6))
for (j in 1:11){
  for (i in 1:6){
  k=(j-1)*6 + i
rect(i-1,j-1,i,j, border=NA, col=BLUES[ HueOrderBlue[k] ])
text(i-.5,j-.5,paste(BLUES[ HueOrderBlue[k] ]), cex=0.75, col=TextContrastColor[ HueOrderBlue[k] ])}}

All the color names that contain “blue” are listed here.

Having the choice between several possible colors is interesting, but it can also be interesting to get a whole palette of blues. What we can get is the following

library(RColorBrewer)
blues=colorRampPalette(brewer.pal(9,"Blues"))(100)
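Just to see what the palette looks like (a one-line check, nothing specific to the data used below):

image(matrix(1:100), col = blues, axes = FALSE)  # the 100 blues, from light to dark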

In order to illustrate the use of palette colors, consider some data on soccer players (officially registered ones). The dataset – lic-2012-v1.csv – can be downloaded from http://data.gouv.fr/fr/dataset/… (I will also use a dataset we have on the location of all towns in France, with latitudes and longitudes)

base1=read.csv(
"http://freakonometrics.free.fr/popfr19752010.csv",
header=TRUE)
base1$cp=base1$dep*1000+base1$com
base2=read.csv("lic-2012-v1.csv", header=TRUE)
base2=base2[base2$fed_2012==111,]
names(base2)[1]="cp"
base2$cp=as.numeric(as.character(base2$cp))

The problem with France (I should probably say one of the many problems) is that regions and départements are not well coded in the standard functions. To identify the départements, let us use the dept.rda file; we can then build a matching between the R names and the standard (administrative) ones,

base21=base2[,c("cp","l_2012","pop_2010")]
base21$dpt=trunc(base21$cp/1000)
library(maps)
load("dept.rda")
base21$nomdpt=dept$dept[match(as.numeric(base21$dpt),dept$CP)]
L=aggregate(base21$l_2012,by=list(Category=base21$nomdpt),FUN=sum)
P=aggregate(base21$pop_2010,by=list(Category=base21$nomdpt),FUN=sum)
base=data.frame(D=P$Category,Y=L$x/P$x,C=trunc(L$x/P$x/.0006))
france=map(database="france")
matche=match.map(france,base$D,exact=TRUE)
map(database="france", fill=TRUE,col=blues[base$C[matche]],resolution=0)

Here are the rates of soccer players (with respect to the total population). It is also possible to look at the rate not by département, but by town,

base10=base1[,c("cp","long","lat","pop_2010")]
base20=base2[,c("cp","l_2012")]
base=merge(base10,base20)
Y=base$l_2012/base$pop_2010
QY=as.numeric(cut(Y,c(0,quantile(Y,(1:99)/100),10),labels=1:100))
library(maps)
map("france",xlim=c(-1,1),ylim=c(46,48))
points(base$long,base$lat,cex=.4,pch=19,col=blues[QY])

The darker the dot, the more players. We can also zoom in, to get a better understanding, on the northern part of France, for instance, or on the southern part,

We can obtain a map which is not (too) far away from the one mentioned a few months ago on http://slate.fr/france/78502/.

Conditional densities, on one single graph

With Stéphane Tufféry we have been working on credit scoring [1], and we have been using the popular German credit dataset,

> myVariableNames <- c("checking_status","duration","credit_history",
+ "purpose","credit_amount","savings","employment","installment_rate",
+ "personal_status","other_parties","residence_since","property_magnitude",
+ "age","other_payment_plans","housing","existing_credits","job",
+ "num_dependents","telephone","foreign_worker","class")
> credit = read.table(
+ "http://archive.ics.uci.edu/ml/machine-learning-databases/statlog/german/german.data",
+ header=FALSE,col.names=myVariableNames)
> credit$class <- credit$class-1

We wanted to get a nice code to produce a graph like the one below,

Yesterday, Stéphane came up with the following code, that can easily be adapted

> library(RColorBrewer)
> CL=brewer.pal(6, "RdBu")
> varQuanti = function(base,y,x)
+ {
+ layout(matrix(c(1, 2), 2, 1, byrow = TRUE),heights=c(3, 1))
+	par(mar = c(2, 4, 2, 1))
+	base0 <- base[base[,y]==0,]
+	base1 <- base[base[,y]==1,]
+	xlim1 <- range(c(base0[,x],base1[,x]))
+	ylim1 <- c(0,max(max(density(base0[,x])$y),max(density(base1[,x])$y)))
+	plot(density(base0[,x]),main=" ",col=CL[1],ylab=paste("Density of ",x),
+		 xlim = xlim1, ylim = ylim1 ,lwd=2)
+	par(new = TRUE)
+	plot(density(base1[,x]),col=CL[6],lty=1,lwd=2,
+		 xlim = xlim1, ylim = ylim1,xlab = '', ylab = '',main=' ')
+	legend("topright",c(paste(y," = 0"),paste(y," = 1")),
+		   lty=1,col=CL[c(1,6)],lwd=2)
+	texte <- c("Kruskal-Wallis'Chi² = \n\n",
+       round(kruskal.test(base[,x]~base[,y])$statistic*1000)/1000)
+	text(xlim1[2]*0.8, ylim1[2]*0.5, texte,cex=0.75)
+	boxplot(base[,x]~base[,y],horizontal = TRUE,xlab= y,col=CL[c(2,5)])
+}
> varQuanti(credit,"class","duration")
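The same call works for any other quantitative covariate of the dataset, for instance

> varQuanti(credit, "class", "age")
> varQuanti(credit, "class", "credit_amount")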

The code is not complex, but since I usually waste a lot of time on my graphs, I will try to upload more frequently short posts, dedicated to graphs, in R (without ggplot).

[1] For a chapter on statistical learning in the forthcoming Computational Actuarial Science with R.

Visualizing densities of spatial processes

We recently uploaded on http://hal.archives-ouvertes.fr/hal-00725090 a revised version of our work, with Ewen Gallic (a.k.a. @3wen) on Visualizing spatial processes using Ripley’s correction: an application to bodily-injury car accident location

In this paper, we investigate (and extend) Ripley’s circumference method to correct the bias of density estimation near the edges (or frontiers) of regions. The idea of the method was theoretical and difficult to implement. We provide a simple technique – based on properties of Gaussian kernels – to efficiently compute weights to correct border bias on frontiers of the region of interest, with an automatic selection of an optimal radius for the method. An illustration on the location of bodily-injury car accidents (and hot spots) in the western part of France is discussed, where a lot of accidents occur close to large cities, next to the sea.

Sketches of the R code can be found in the paper, to produce maps, and to describe the impact of our boundary correction. For instance, in Finistère, the distribution of car accidents is the following (with a standard kernel on the left, and with the correction on the right), with 186 claims (involving bodily injury)

and in Morbihan with 180 claims, observed in a specific year (2008 as far as I remember),

The code is the same as the one mentioned last year, except perhaps the plotting functions. First, one needs to define a color scale and associated breaks

breaks <- seq(min(result$ZNA, na.rm = TRUE) * 0.95, max(result$ZNA, na.rm = TRUE) * 1.05, length = 21)
col <- rev(heat.colors(20))

to finally plot the estimation

image.plot(result$X, result$Y, result$ZNA, xlim = range(pol[, 1]), ylim = range(pol[, 2]),
  breaks = breaks, col = col, xlab = "", ylab = "", xaxt = "n", yaxt = "n", bty = "n",
  zlim = range(breaks), horizontal = TRUE)

It is possible to add a contour, the observations, and the border of the polygon

contour(result$X, result$Y, result$ZNA, add = TRUE, col = "grey")
points(X[, 1], X[, 2], pch = 19, cex = 0.2, col = "dodgerblue")
polygon(pol, lwd = 2)

Now, if one wants to improve the aesthetics of the map, by adding a Google Maps base map, the first thing to do – after loading the ggmap package – is to get the base map

theMap <- get_map(location = c(left = min(pol[, 1]), bottom = min(pol[, 2]),
  right = max(pol[, 1]), top = max(pol[, 2])),
  source = "google", messaging = FALSE, color = "bw")

Of course, data need to be put in the right format

getMelt <- function(smoothed){
  res <- melt(smoothed$ZNA)
  res[, 1] <- smoothed$X[res[, 1]]
  res[, 2] <- smoothed$Y[res[, 2]]
  names(res) <- list("X", "Y", "ZNA")
  return(res)
}
smCont <- getMelt(result)

Breaks and labels should be prepared

theLabels <- round(breaks, 2)
indLabels <- floor(seq(1, length(theLabels), length.out = 5))
indLabels[length(indLabels)] <- length(theLabels)
theLabels <- as.character(theLabels[indLabels])
theLabels[theLabels == "0"] <- "0.00"

Now, the map can be built

P <- ggmap(theMap)
P <- P + geom_point(aes(x = X, y = Y, col = ZNA), alpha = .3,
  data = smCont[!is.na(smCont$ZNA), ], na.rm = TRUE)

It is possible to add a contour

P <- P + geom_contour(data = smCont[!is.na(smCont$ZNA), ],
  aes(x = X, y = Y, z = ZNA), alpha = 0.5, colour = "white")

and colors need to be updated

P <- P + scale_colour_gradient(name = "", low = "yellow", high = "red",
  breaks = breaks[indLabels], limits = range(breaks), labels = theLabels)

To remove the axis legends and labels, the theme should be updated

P <- P + theme(panel.grid.minor = element_line(colour = NA),
  panel.background = element_rect(fill = NA, colour = NA),
  axis.text.x = element_blank(), axis.text.y = element_blank(),
  axis.ticks.x = element_blank(), axis.ticks.y = element_blank(),
  axis.title = element_blank(), rect = element_blank())

The final step, in order to draw the border of the polygon

polDF <- data.frame(pol)
colnames(polDF) <- list("lon", "lat")
(P <- P + geom_polygon(data = polDF, mapping = aes(x = lon, y = lat), colour = "black", fill = NA))

Then, we applied that methodology to estimate the road network density in those two regions, in order to understand whether a high accident intensity means that an area is dangerous, or whether it is simply because there is a lot of traffic (more traffic, more accidents),

We have been using the dataset obtained from the Geofabrik website, which provides OpenStreetMap data. Each observation is a section of a road, and contains a few points, identified by their geographical coordinates, that allow us to draw lines. We have used those points to estimate a proxy of road intensity, with weights going from 10 (highways) to 1 (service roads).

splitroad <- function(listroad, h = 0.0025){
  # (excerpt; see Ewen's blog for the full function)
  pts = NULL
  weights <- types.weights[match(unique(listroad$type), types.weights$type), "weight"]
  for(i in 1:(length(listroad) - 1)){
    d = diag(as.matrix(dist(listroad[[i]]))[, 2:nrow(listroad[[i]])])
  }
  return(pts)
}

See Ewen’s blog for more details on the code, http://editerna.free.fr/blog/…. Note that Ewen published a poster about the paper (in French) for the http://r2013-lyon.sciencesconf.org/ conference, which will be organized in Lyon on June 27th-28th.

All comments are welcome…

Playing cards, with R

In my courses on R, I usually show how to insert a picture as the background of a graph. But it is also possible to see the picture as an object, and to insert it anywhere we like in a graph, as explained on the awesome blog http://rsnippets.blogspot.ca/… (in a post published in January 2012). I wanted to insert cards in a graph. Cards can be found, e.g., on Wikipedia, even French versions, like the ones I used to play with as a kid (see e.g. the Jack of clubs, http://commons.wikimedia.org/…, or the Queen of hearts, http://commons.wikimedia.org/…). But those images are in svg format. So first, we have to export them to ppm, either using gimp, or online, with http://www.sciweavers.org/… for instance. Here, I have a copy of the 32 cards, and the code to read one of them, in R, is

library(pixmap)
card=read.pnm("1000px_10_of_clubs.ppm")

Then, I can plot the card using

plot(card,add=TRUE)

(on a predefined graph) The interesting part is that it is possible to plot the picture within a given box, but this has to be specified when we read the image file, using

card=read.pnm("1000px_10_of_clubs.ppm",bbox=c(300,200,800,1100))
plot(card,add=TRUE)

If we want to visualize all the cards, we first have to store the pictures (the cards) in some R format, in a list, then to check their dimensions, and then we can write a code to plot any of them, anywhere we like (again, the bounding box has to be specified when we read the file, which might take a while)

L=list(cards="french cards")
L2=list(cards="french cards")
color=c("spades","clubs","hearts","diamonds")
nb=c("07","08","09","10","Jack","Queen","King","01")
N=1:32
for(n in N){
  i=trunc((n-1)/4)+1  #number
  j=(n-1)%%4+1        #color
  name_card=paste("1000px_",nb[i],"_of_",color[j],".ppm",sep="")
L[[n+1]]=read.pnm(name_card)  
L2[[n+1]]=name_card
}

Now, if we want to plot one specific card (out of those 32), we can use

card_plot=function(id,loc){
usr <- par("usr")
pin <- par("pin")
card=L[[id+1]]
x.asp <- (card@size[2] * (usr[2] - usr[1]) / pin[1])
y.asp <- (card@size[1] * (usr[4] - usr[3]) / pin[2])
card.height <-.9
card.width <- card.height * x.asp / y.asp
y.0 <- loc[2]
x.0 <- loc[1]
bbox <- c(x.0, y.0, x.0 + card.width, y.0 + card.height)
card <- read.pnm(L2[[id+1]],bbox = bbox)
plot(card,add=TRUE)
}
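For instance, once the list L has been built, a single card can be placed on an empty canvas (the coordinates below are arbitrary):

plot(0, 0, col="white", xlim=c(0,4), ylim=c(0,4),
     axes=FALSE, xlab="", ylab="")   # empty canvas
card_plot(id=1, loc=c(0,0))          # draw the first card in the lower-left corner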

Note that, here, we first read the file to check the dimensions, and then we read it again, using the appropriate bounding box (with a given height, here 0.9). Now, it is possible to plot the 32 cards on the same graph, for a given ordering

seq_card_plot=function(seq_id){
  X=seq(0,7*.5,by=.5)
  Y=0:4
  table = plot(0:4,0:4,ylim=c(0,4),
  axes=FALSE,xlab="",ylab="",col="white")    
  for(n in 1:length(seq_id)){
  i=trunc((n-1)/4)+1  #number
  j=(n-1)%%4+1         #color
    card_plot(id=seq_id[n],loc=c(X[i],Y[j])) 
  }}

If we did not shuffle the cards, it would be

seq_card_plot(N)

But it is possible to shuffle the cards, of course,

set.seed(1)
seq_card_plot(sample(N))

Now, to be honest, I am a bit disappointed because I did not use the fact that I have vector based images here. So it should be possible to get much nicer images, I guess…

LaTeX in R graphs

A nice post was recently published on the rsnippets blog, about the tikzDevice R package. This package is – indeed – awesome, even if it has been removed from the CRAN website. Of course, it can be downloaded from the archive folder, on http://cran.r-project.org/…, but also (for a more recent version) from http://download.r-forge.r-project.org/…. But first, it is necessary to install the following package.

> install.packages("filehash")

Then, we download one of the tikzDevice.zip files, and load it, e.g. using (on Mac)

Then, we can load the library

> library("tikzDevice")

If we want to use nice LaTeX formulas, it might be necessary to load some (LaTeX) packages and to specify the encoding format

> "options(tikzMetricPackages = c("\\usepackage[utf8]{inputenc}",
+ "\\usepackage[T1]{fontenc}", "\\usetikzlibrary{calc}", "\\usepackage{amssymb}"))

(this is detailed, e.g., in http://yihui.name/…). Then, we write the code to plot a graph. The idea is to produce a tex file which contains the graph, or more precisely which will produce a pdf graph when we compile it. We start with

> tikz("normal-dist.tex", width = 8, height = 4, 
+ standAlone = TRUE,
+ packages = c("\\usepackage{tikz}",
+ "\\usepackage[active,tightpage,psfixbb]{preview}",
+ "\\PreviewEnvironment{pgfpicture}",
+ "\\setlength\\PreviewBorder{0pt}",
+ "\\usepackage{amssymb}"))

We will produce an 8×4 graph. The graph is the following,

> u=seq(-3,3,by=.01)
> plot(u,dnorm(u),type="l",axes=FALSE,xlab="",ylab="",col="white")
> axis(1)
> I=which((u>=0)&(u<=1))
> polygon(c(u[I],rev(u[I])),c(dnorm(u)[I],rep(0,length(I))),col="red",border=NA)
> lines(u,dnorm(u),lwd=2,col="blue")

We can add text (or TeX based text)

> text(-1.5, dnorm(-1.5)+.17, "$\\textcolor{blue}{X\\sim\\mathcal{N}(0,1)}$", cex = 1.5)
> text(1.75, dnorm(1.75)+.25, 
+ "$\\textcolor{red}{\\mathbb{P}(X\\in[0,1])=\\displaystyle{\\int_0^1 \\varphi(x)dx}}$", cex = 1.5)

And we end the file with a standard

> dev.off()

This will produce a .tex file. If we compile this file, we can generate a pdf file, that can be inserted in lecture notes, slides or articles,
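Assuming a LaTeX distribution (with pdflatex) is available, the compilation can also be run from R itself:

> tools::texi2pdf("normal-dist.tex")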

Nice, isn’t it ?

Happy St Patrick’s Day

I love Saint Patrick’s Day for, at least, two reasons. The first one is that, on March 17th, you can play The Pogues out loud; the second one is that it’s the only day in the year when I really enjoy getting a Guinness in a pub. And Guinness is important in statistical science (I did mention a couple of hours ago – on this blog – that beers were important for social reasons in the academic world, but that was for other reasons…)

> theta=seq(0,pi/2,length=101)
> leaf=sin(2*theta)+.25*sin(6*theta)
> plot(0,0,col="white",xlim=c(-1.5,1.5),ylim=c(-1.5,1.5),
+ axes=FALSE,xlab="",ylab="")  # empty canvas for the shamrock
> for(k in 0:3)
+ polygon(leaf*cos(theta+k*pi/2),leaf*sin(theta+k*pi/2),col="green")

As mentioned in all my statistics and econometrics courses, the history of statistics (I mean here mathematical statistics) is closely related to Guinness.

A long time ago, there was a Guinness Brewing Company of Dublin, which – as its name suggests – was an Irish brewing company. And the boss, who was to inherit the family business, decided to attract young students, trained in chemistry at Cambridge or Oxford.

In 1899, William Sealy Gosset, who had obtained a double degree in maths and chemistry, left Oxford for Dublin. And to be quite honest, being a graduate in maths meant that he had studied differential equations and astronomy. Basically, mathematics was useless for Guinness, and he got there with his expertise in chemistry. In fact, William turned out to be also a very good administrator, but this has nothing to do with our story.

William had good memories of his studies in math, and he wondered if he could find a problem to look at. He started studies on workmanship, noting that conditions varied so much (temperature, hops, malt, manufacturing conditions…) that there was only little consistent data. The “law of errors” (the central limit theorem) could not apply under these conditions.

In short, Bill (now that we know each other a little, we’ll call him Bill) took many measurements, and noticed that the Poisson distribution could be an interesting model to work with. To make a long story short, Bill managed to use statistical techniques to control the variance of the production, meaning that he was able to lower losses in the production of beer.

A nice application like this one deserved publication in a scientific journal… Well, of course the Poisson distribution had long been known (it was 1904, and a few months earlier Von Bortkiewicz had found elegant applications of this law, as discussed in a post a few weeks ago). But there was a disclosure issue: Bill’s contract prohibited him from disclosing secrets to competitors.

Meanwhile, Bill had met Karl Pearson, who was then the editor of Biometrika, and who encouraged him to publish his results. In 1906, Bill – who had helped Guinness earn a lot of money (doing applied mathematics can be useful) – managed to take a sabbatical to work with Pearson at the Galton biometrics laboratory. Bill and Karl decided to publish the work under the pseudonym “Student”. The legend claims that they had hesitated to use “pupil”.

And for almost 30 years, “Mr Gosset”, honorable Guinness employee, led a dissolute life, publishing in statistical journals (after work at the brewery), always under the pseudonym “Student”. Of course, it might not have been that simple. I mean, Bill had a family life, too. And his wife was the captain of the national hockey team. So I can hardly imagine Bill playing the smart ass and doing mathematical computations when it was time to wash the dishes or iron his shirt…

In 1908, he wrote a remarkable paper, “The probable error of a mean”, noticed, at least, by Ronald Fisher. In fact, Bill had found an interesting distribution, but – like the normal – it was difficult to manipulate in order to obtain confidence intervals. Without a computer, he had the idea of using Monte Carlo methods to tabulate quantiles and construct his tables. And he was probably the first one to look carefully at the problem of small samples, unlike Karl Pearson, who always put the focus on the asymptotic case.

In fact, looking at his small samples, he saw in the denominator quantities very close to those Karl specifically worked with, in particular the square root of a chi-square distributed variable. Well, of course, the normality assumption remained, but at least we had some results for finite samples!

For the record, William Gosset suggested using the letter z for his statistic, the ratio between the mean and the (empirical) standard deviation. But a few years later, statisticians became accustomed to using this letter for the Gaussian distribution (i.e. when the variance is known), and it became the standard to use the letter t. Hence, finally, the present name of the “Student-t distribution”, and, in regression outputs, the “t-test”.

A legend (told by Harold Hotelling in his memoirs) claims that the Guinness family discovered this double life on the day of William Gosset’s death, in 1937, when mathematicians requested financial assistance to print a volume of the works of their employee. But another legend claims that Mr Guinness himself suggested the nickname when Gosset expressed his intention to publish his research… So I guess we’ll never know. But at least, I’ll think about Bill when I get my first Guinness tonight (though I will probably not be able to tell this story anymore when I reach the fourth…)

Overdispersion with different exposures

In actuarial science, and insurance ratemaking, taking the exposure into account can be a nightmare (in datasets, some clients have been there for a few years – we call that exposure – while others have been there for a few months, or weeks). Somehow, simple results become more complicated to compute just because we have to take into account the fact that exposure is an heterogeneous variable.

The exposure in insurance ratemaking can be seen as a problem of censored data (in my dataset, the exposure is always smaller than 1 since observations are contracts, not policyholders),

  • the number of claims $N_i$ on the period $[0,1]$ is unobserved
  • the number of claims $Y_i$ on $[0,E_i]$ is observed (as well as $E_i$)

And as always, the variable of interest is the unobserved one, because we have to price insurance contract with a cover period of one (full) year. So we have to model the yearly frequency of insurance claims.

https://f.hypotheses.org/wp-content/blogs.dir/253/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-09.30.00.png

In our dataset, we have the $(Y_i,E_i)$'s – or more generally also some additional covariates, the $(Y_i,E_i,\boldsymbol{X}_i)$'s. For ratemaking, we need to estimate $\mathbb{E}(N\vert\boldsymbol{X}=\boldsymbol{x})$ and perhaps also $\text{Var}(N\vert\boldsymbol{X}=\boldsymbol{x})$ (for instance to test whether the Poisson assumption is valid, or not). To estimate the expected value, a natural estimate of $\mathbb{E}(N)$ (forget about covariates as a start) is
$$m_N=\frac{\sum_{i=1}^n Y_i}{\sum_{i=1}^n E_i}$$
which is also the weighted average of the annualized individual counts,
$$m_N=\sum_{i=1}^n \frac{E_i}{\sum_{i=1}^n E_i}\cdot\frac{Y_i}{E_i}$$
We consider the ratio of the total number of claims to the total exposure-to-risk. This estimate appears, for instance, if we consider a Poisson process, so that $N\sim\mathcal{P}(\lambda)$ while $Y\sim\mathcal{P}(\lambda\cdot E)$. Then, the likelihood is

$$\mathcal{L}(\lambda,\boldsymbol{Y},\boldsymbol{E})=\prod_{i=1}^n \frac{e^{-\lambda E_i}[\lambda E_i]^{Y_i}}{Y_i!}$$

i.e.

$$\log\mathcal{L}(\lambda,\boldsymbol{Y},\boldsymbol{E})=-\lambda\sum_{i=1}^n E_i+\sum_{i=1}^n Y_i\log[\lambda E_i]-\log\left(\prod_{i=1}^n Y_i!\right)$$

The first-order condition is here

$$\frac{\partial}{\partial\lambda}\log\mathcal{L}(\lambda,\boldsymbol{Y},\boldsymbol{E})=-\sum_{i=1}^n E_i+\frac{1}{\lambda}\sum_{i=1}^n Y_i=0$$

which is satisfied if

$$\widehat{\lambda}=\frac{\sum_{i=1}^n Y_i}{\sum_{i=1}^n E_i}$$

So, we do have an estimator for the expected value, and a natural estimator for $\mathbb{E}(N\vert\boldsymbol{X}=\boldsymbol{x})$ is then (if we consider categorical covariates)
$$m_{N\vert\boldsymbol{x}}=\frac{\sum_{i,\boldsymbol{X}_i=\boldsymbol{x}}Y_i}{\sum_{i,\boldsymbol{X}_i=\boldsymbol{x}}E_i}$$
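In R, given a vector of claim counts Y, a vector of exposures E and a categorical covariate X (the names anticipate the example below), this estimator is a one-liner:

m_Nx <- tapply(Y, X, sum) / tapply(E, X, sum)   # annualized claim frequency, per level of X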

Now, we need an estimate of the variance, or more precisely of the conditional variance. Assume (as a starting point) that all policies have the same exposure $E$. For instance, if $E$ is one half, the insured were observed only during the first six months. Then $N=Y+Y'$ with $Y\overset{\mathcal{L}}{=}Y'$ ($Y$ is the number of claims during the first six months, while $Y'$ is the number of claims during the last six months), i.e. $\text{Var}(N)=\text{Var}(Y)+\text{Var}(Y')$ if we assume independent increments, i.e. $\text{Var}(N)=2\text{Var}(Y)$, or conversely $E\cdot\text{Var}(N)=\text{Var}(Y)$. More generally, it is reasonable to assume that
$$\text{Var}(Y)=E\cdot\text{Var}(N)$$
for all values of $E$. And then
$$\text{Var}\left(\frac{Y}{E}\right)=\frac{1}{E}\cdot\text{Var}(N)$$
Thus, it seems legitimate to assume that the empirical variance of $N$ can be written
$$S_N^2=E\cdot S_{Y/E}^2$$
Since the average of the $Y_i/E$ is $\overline{N}=m_N$, then
$$S_N^2=E\cdot\frac{1}{n}\sum_{i=1}^n\left[\frac{Y_i}{E}-\overline{N}\right]^2=\frac{1}{n}\sum_{i=1}^n E\left[\frac{Y_i}{E}-\overline{N}\right]^2$$
or equivalently
$$S_N^2=\frac{1}{n}\sum_{i=1}^n\frac{E}{E^2}\left[Y_i-\overline{N}\cdot E\right]^2=\frac{1}{n}\sum_{i=1}^n\frac{1}{E}\left[Y_i-\overline{N}\cdot E\right]^2$$
i.e.
$$S_N^2=\frac{\sum_{i=1}^n\left[Y_i-\overline{N}\cdot E\right]^2}{nE}$$
Thus, with different $E_i$'s, it would be legitimate (I guess) to consider
$$S_N^2=\frac{\sum_{i=1}^n\left[Y_i-\overline{N}\cdot E_i\right]^2}{\sum_{i=1}^n E_i}$$
Thus, an estimator of $\text{Var}(N\vert\boldsymbol{X}=\boldsymbol{x})$ is
$$S_{N\vert\boldsymbol{x}}^2=\frac{\sum_{i,\boldsymbol{X}_i=\boldsymbol{x}}\left[Y_i-\overline{N}\cdot E_i\right]^2}{\sum_{i,\boldsymbol{X}_i=\boldsymbol{x}}E_i}$$

This can be used to test whether the Poisson assumption is valid to model the claims frequency. Consider the following dataset,

>  sinistre=read.table("http://freakonometrics.free.fr/sinistreACT2040.txt",
+  header=TRUE,sep=";")
>  sinistres=sinistre[sinistre$garantie=="1RC",]
>  sinistres=sinistres[sinistres$cout>0,]
>  contrat=read.table("http://freakonometrics.free.fr/contractACT2040.txt",
+  header=TRUE,sep=";")
>  T=table(sinistres$nocontrat)
>  T1=as.numeric(names(T))
>  T2=as.numeric(T)
>  nombre1 = data.frame(nocontrat=T1,nbre=T2)
>  I = contrat$nocontrat%in%T1
>  T1= contrat$nocontrat[I==FALSE]
>  nombre2 = data.frame(nocontrat=T1,nbre=0)
>  nombre=rbind(nombre1,nombre2)
>  baseFREQ = merge(contrat,nombre)

Here, we do have our two variables of interest, the exposure, per contract,

>  E <- baseFREQ$exposition

and the (observed) number of claims (during that time frame)

>  Y <- baseFREQ$nbre

It is possible to compute without covariates, the average (yearly) number of claims, per contract, and the associated variance

> (mean=weighted.mean(Y/E,E))
[1] 0.07279295
> (variance=sum((Y-mean*E)^2)/sum(E)) 
[1] 0.08778567

It looks like the variance is (slightly) larger than the average (we’ll see in a few weeks how to test it, more formally). It is possible to add covariates, for instance the density of population, in the area where the policyholder lives,

>  X=as.factor(baseFREQ$densite)
>  meani=variancei=expoi=rep(NA,length(levels(X)))
>  for(i in 1:length(levels(X))){
+ 	   Ei=E[X==levels(X)[i]]
+ 	   Yi=Y[X==levels(X)[i]]
+ 	   meani[i]=weighted.mean(Yi/Ei,Ei)              # average
+ 	   variancei[i]=sum((Yi-meani[i]*Ei)^2)/sum(Ei)  # variance
+ 	   expoi[i]=sum(Ei)                              # total exposure of the group
+ 	   cat("Density, zone",levels(X)[i],"average =",meani[i]," variance =",variancei[i],"\n")
+ }
Density, zone 11 average = 0.07962411  variance = 0.08711477 
Density, zone 21 average = 0.05294927  variance = 0.07378567 
Density, zone 22 average = 0.09330982  variance = 0.09582698 
Density, zone 23 average = 0.06918033  variance = 0.07641805 
Density, zone 24 average = 0.06004009  variance = 0.06293811 
Density, zone 25 average = 0.06577788  variance = 0.06726093 
Density, zone 26 average = 0.0688496   variance = 0.07126078 
Density, zone 31 average = 0.07725273  variance = 0.09067 
Density, zone 41 average = 0.03649222  variance = 0.03914317 
Density, zone 42 average = 0.08333333  variance = 0.1004027 
Density, zone 43 average = 0.07304602  variance = 0.07209618 
Density, zone 52 average = 0.06893741  variance = 0.07178091 
Density, zone 53 average = 0.07725661  variance = 0.07811935 
Density, zone 54 average = 0.07816105  variance = 0.08947993 
Density, zone 72 average = 0.08579731  variance = 0.09693305 
Density, zone 73 average = 0.04943033  variance = 0.04835521 
Density, zone 74 average = 0.1188611   variance = 0.1221675 
Density, zone 82 average = 0.09345635  variance = 0.09917425 
Density, zone 83 average = 0.04299708  variance = 0.05259835 
Density, zone 91 average = 0.07468126  variance = 0.3045718 
Density, zone 93 average = 0.08197912  variance = 0.09350102 
Density, zone 94 average = 0.03140971  variance = 0.04672329

Perhaps graphs would be a nice tool to play with, to visualize that information

> plot(meani,variancei,cex=sqrt(expoi)/20,col="grey",pch=19,  # circle area ~ total exposure (arbitrary scaling)
+ xlab="Empirical average",ylab="Empirical variance")
> points(meani,variancei,cex=sqrt(expoi)/20)

https://f.hypotheses.org/wp-content/blogs.dir/253/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-10.51.26.png

The size of the circles is related to the size of the group (the area is proportional to the total exposure within the group). The first diagonal corresponds to the Poisson model, i.e. the variance should be equal to the mean. It is also possible to consider other covariates, like the gas type

https://f.hypotheses.org/wp-content/blogs.dir/253/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-10.52.02.png

or the car brand,

https://f.hypotheses.org/wp-content/blogs.dir/253/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-10.50.49.png

It is also possible to consider the age of the driver as a categorical variate

https://f.hypotheses.org/wp-content/blogs.dir/253/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-10.51.40.png

Actually, the age is interesting: we can observe in that dataset a feature that Jean-Philippe Boucher also observed on his own datasets. Let us look more carefully at where the different ages are,

https://f.hypotheses.org/wp-content/blogs.dir/253/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-10.55.17.png

On the right, we can observe young (inexperienced) drivers. That was expected. But some classes are below the first diagonal: the expected frequency is large, but not the variance. We know for sure that young drivers have more car accidents, but this is not a heterogeneous class; on the contrary, young drivers can be seen as a relatively homogeneous class, with a high frequency of car accidents.

With the original dataset (here, I use only a subset with 50,000 clients), we do obtain the following graph:

https://f.hypotheses.org/wp-content/blogs.dir/253/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-01-a%CC%80-11.27.04.png

Even if we do not observe underdispersion for young drivers, observe that those are incredibly homogeneous classes, with a clear impact of experience, since the circles move downward from age 18 to 25.

Another disturbing story (this was – one more time – a suggestion from Jean-Philippe) is that it might be possible to consider the exposure as a standard explanatory variable, and see if its coefficient is actually equal to 1. Without any covariate,

>  reg=glm(Y~log(E),family=poisson("log"))
>  summary(reg)

Call:
glm(formula = Y ~ log(E), family = poisson("log"))

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-0.3988  -0.3388  -0.2786  -0.1981  12.9036  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -2.83045    0.02822 -100.31   <2e-16 ***
log(E)       0.53950    0.02905   18.57   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 12931  on 49999  degrees of freedom
Residual deviance: 12475  on 49998  degrees of freedom
AIC: 16150

Number of Fisher Scoring iterations: 6

i.e. the parameter is clearly strictly smaller than 1. And it is neither related to significance,

> library(car)
> linearHypothesis(reg,"log(E)",1)
Linear hypothesis test

Hypothesis:
log(E) = 1

Model 1: restricted model
Model 2: Y ~ log(E)

  Res.Df Df  Chisq Pr(>Chisq)    
1  49999                         
2  49998  1 251.19  < 2.2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

nor to the fact that I did not take into account covariates,

> reg=glm(nbre~log(exposition)+carburant+as.factor(ageconducteur)+as.factor(densite),family=poisson("log"),data=baseFREQ)
>  summary(reg)

Call:
glm(formula = nbre ~ log(exposition) + carburant + as.factor(ageconducteur) + 
    as.factor(densite), family = poisson("log"), data = baseFREQ)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-0.7114  -0.3200  -0.2637  -0.1896  12.7104  

Coefficients:
                              Estimate Std. Error z value Pr(>|z|)    
(Intercept)                  -14.07321  181.04892  -0.078 0.938042    
log(exposition)                0.56781    0.03029  18.744  < 2e-16 ***
carburantE                    -0.17979    0.04630  -3.883 0.000103 ***
as.factor(ageconducteur)19    12.18354  181.04915   0.067 0.946348    
as.factor(ageconducteur)20    12.48752  181.04902   0.069 0.945011

(etc.). So it might be too strong an assumption to consider the exposure as an exogenous variate here. But that's another story!

Fractals and Kronecker product

A few years ago, I went to listen to Roger Nelsen, who was giving a talk about copulas with fractal support. Roger is amazing when he gives a talk (I am also a huge fan of his books and articles), and I really wanted to play with that concept (which he did publish later on, with Gregory Fredricks and José Antonio Rodriguez-Lallena). I mentioned that idea in a paper, written with Alessandro Juri, just to mention some cases where deriving fixed point theorems is not that simple (since the limit may not exist).

The idea in the initial article was to start with something quite simple, the so-called transformation matrix, e.g.

$$T=\frac{1}{8}\left(\begin{matrix}1&0&1\\0&4&0\\1&0&1\end{matrix}\right)$$
Here, in all areas with mass, we spread it uniformly (say), i.e. the support of $T(C^\perp)$ is the one below: $1/8$th of the mass is located in each corner, and $1/2$ is in the center. So if we spread the mass to obtain a copula (with uniform margins), we have to consider squares on the intervals $[0,1/4]$, $[1/4,3/4]$ and $[3/4,1]$,

The idea, then, is to consider $T^2=\otimes^2 T$, where $\otimes^2 T$ is the tensor product (also called Kronecker product) of $T$ with itself. Here, the support of $T^2(C^\perp)$ is

Then, consider $T^3=\otimes^3 T$, where $\otimes^3 T$ is the tensor product of $T$ with itself, three times. And the support of $T^3(C^\perp)$ is

Etc. Here, it is computationally extremely simple to do it, using this Kronecker product. Recall that if $\mathbf{A}=(a_{i,j})$, then

$$\mathbf{A}\otimes\mathbf{B}=\begin{pmatrix}a_{11}\mathbf{B}&\cdots&a_{1n}\mathbf{B}\\\vdots&\ddots&\vdots\\a_{m1}\mathbf{B}&\cdots&a_{mn}\mathbf{B}\end{pmatrix}$$

So, we need a transformation matrix: consider the following $4\times 4$ matrix,

> k=4
> M=matrix(c(1,0,0,1,
+            0,1,1,0,
+            0,1,1,0,
+            1,0,0,1),k,k)
> M
[,1] [,2] [,3] [,4]
[1,]    1    0    0    1
[2,]    0    1    1    0
[3,]    0    1    1    0
[4,]    1    0    0    1

Once we have it, we just consider the Kronecker product of this matrix with itself, which yields a $4^2\times 4^2$ matrix,

> N=kronecker(M,M)
> N[,1:4]
[,1]  [,2] [,3] [,4]
[1,]     1    0    0    1
[2,]     0    1    1    0
[3,]     0    1    1    0
[4,]     1    0    0    1
[5,]     0    0    0    0
[6,]     0    0    0    0
[7,]     0    0    0    0
[8,]     0    0    0    0
[9,]     0    0    0    0
[10,]    0    0    0    0
[11,]    0    0    0    0
[12,]    0    0    0    0
[13,]    1    0    0    1
[14,]    0    1    1    0
[15,]    0    1    1    0
[16,]    1    0    0    1

And then, we continue,

> for(s in 1:3){N=kronecker(N,M)}

After only a couple of loops, we have a $4^5\times 4^5$ matrix. And we can plot it simply to visualize the support,

> image(N,col=c("white","blue"))

As we zoom in, we can visualize this fractal property,
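One quick way to zoom is simply to subset the matrix: the first $4^3\times 4^3$ block shows the same pattern at a smaller scale, which is exactly the self-similarity we are after (just a sketch):

> image(N[1:4^3, 1:4^3], col = c("white", "blue"))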