Equidistant points on a map

This morning, I got a comment on a recent post, regarding a graph I uploaded to the blog, extracted from a paper now online (see http://hal.archives-ouvertes.fr/hal-00871883). Jo (from KUL, I guess I can share that piece of information) asked me

I was wondering whether you would want to share the R code for plotting figures 1 and 14? W.r.t. the former, the figure-in-figure is a nice touch; as to the latter, I am curious to know how you translated distance in km to the size parameters of the graph (par(“usr”)) for plotting the corresponding concentric circles (and the arrow indicating the radius) on top of your map.

At first, I thought I had made a mistake in my plot. I mean, each time I get a question, I become suspicious, and I start to wonder whether what I did was valid or not. Here was the graph

Let’s make it clear: I do not draw circles here. So yes, I believe that what I did is valid. What I did is simple. First, I get the background map,

library(maps)
map("world",xlim=c(130,150),ylim=c(25,45),fill=TRUE,col="light green")

Then, I need some function to compute distance from coordinates. The functions I use are

deg2rad = function(deg) return(deg*pi/180)
# great-circle distance (in km) using the spherical law of cosines;
# inputs are expected in radians
DISTANCEDEG = function(long1, lat1, long2, lat2) {
R=6371 # Earth radius, in km
d=acos(sin(lat1)*sin(lat2) + cos(lat1)*cos(lat2) * cos(long2-long1)) * R
return(d)
}
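Note that the spherical law of cosines can be numerically unstable for points that are very close to each other; a haversine version (a sketch only, not the function used for the figures below) would be

# haversine distance (in km), inputs in radians; less prone to rounding
# issues than acos() for nearby points (sketch, not used for the original figures)
DISTANCEHAV = function(long1, lat1, long2, lat2) {
R=6371 # Earth radius, in km
dlat=lat2-lat1
dlong=long2-long1
a=sin(dlat/2)^2 + cos(lat1)*cos(lat2)*sin(dlong/2)^2
return(2*R*asin(pmin(1,sqrt(a))))
}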

The center here will be Tokyo (東京),

X=139+45/60
Y=35+40/60

The idea now is simple: I generate a grid (here 501×501)

VX=seq(X-10,X+10,length=501)
VY=seq(Y-10,Y+10,length=501)
VtX=rep(VX,each=length(VY))
VtY=rep(VY,length(VX))
ZDeg=deg2rad(cbind(VtX,VtY))
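As an aside, the same grid of points can be built with expand.grid, which might be slightly more readable (a sketch, equivalent to the rep() calls above),

# equivalent construction of the 501x501 grid of (longitude, latitude) pairs;
# the first column of expand.grid varies fastest, matching the rep() calls above
G=expand.grid(y=VY,x=VX)
VtX=G$x
VtY=G$y
ZDeg=deg2rad(cbind(VtX,VtY))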

I compute the distance from all those points to Tokyo, and check whether the distance is larger or smaller than a given value,

L=500
D1=DISTANCEDEG(ZDeg[,1],ZDeg[,2],deg2rad(X),deg2rad(Y))<L

If the distance is smaller than 500km, then I put a blue dot on the graph,

points(VtX[D1],VtY[D1],pch=19,cex=.2,col="light blue")

Then I use the same procedure for 250 km (obviously, it is more convenient to start from larger distances and move to smaller ones)

L=250
D=DISTANCEDEG(ZDeg[,1],ZDeg[,2],deg2rad(X),deg2rad(Y))<L
points(VtX[D],VtY[D],pch=19,cex=.2,col="light yellow")

Then, I drew an arrow to illustrate the largest distance

k=which.max(VtX[D1])
arrows((VtX[D1])[k],(VtY[D1])[k],X,Y,code=3,length=.1)
text(((VtX[D1])[k]+X)/2,Y+.35,"500 km")

And now, I have the graph.
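Since Jo's question was about concentric circles: instead of colouring grid points, one could also trace the boundary of each disc explicitly, using the standard destination-point formula on a sphere (a sketch, assuming the same spherical Earth with R = 6371 km; this is not how the figure above was produced),

# trace the boundary of the disc of radius d km around (long, lat), given in degrees
circleboundary = function(long, lat, d, n=360){
R=6371
brg=seq(0,2*pi,length=n) # bearings, in radians
lat1=deg2rad(lat); long1=deg2rad(long)
lat2=asin(sin(lat1)*cos(d/R)+cos(lat1)*sin(d/R)*cos(brg))
long2=long1+atan2(sin(brg)*sin(d/R)*cos(lat1),cos(d/R)-sin(lat1)*sin(lat2))
cbind(long2,lat2)*180/pi # back to degrees
}
lines(circleboundary(X,Y,500),lwd=.5)
lines(circleboundary(X,Y,250),lwd=.5)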

Now, the point is that it should depend on the kind of projection we use, right? So here is a function that can be used for different kinds of projections (some slight changes are necessary, since the map is now centered on some point, and we cannot use standard coordinates)

library(mapproj)
mapjapan = function(pr="conic",pm=45){
map("world","japan",fill=TRUE,col="light green",projection=pr, par=pm)
MP=mapproject(data.frame(x=X,y=Y),projection="")
Xp=MP$x
Yp=MP$y
VX=seq(X-10,X+10,length=501)
VY=seq(Y-10,Y+10,length=501)
VtX=rep(VX,each=length(VY))
VtY=rep(VY,length(VX))
MP=mapproject(data.frame(x=VtX,y=VtY),projection="")
VtXp=MP$x
VtYp=MP$y
ZDeg=deg2rad(cbind(VtX,VtY))
L=500
D1=DISTANCEDEG(ZDeg[,1],ZDeg[,2],deg2rad(X),deg2rad(Y))<L
points(VtXp[D1],VtYp[D1],pch=19,cex=.2,col="light blue")
L=250
D=DISTANCEDEG(ZDeg[,1],ZDeg[,2],deg2rad(X),deg2rad(Y))<L
points(VtXp[D],VtYp[D],pch=19,cex=.2,col="light yellow")
L=100
D=DISTANCEDEG(ZDeg[,1],ZDeg[,2],deg2rad(X),deg2rad(Y))<L
points(VtXp[D],VtYp[D],pch=19,cex=.2,col="light blue")
L=50
D=DISTANCEDEG(ZDeg[,1],ZDeg[,2],deg2rad(X),deg2rad(Y))<L
points(VtXp[D],VtYp[D],pch=19,cex=.2,col="light yellow")
points(Xp,Yp,pch=19,cex=.4,col="red")
map("world","japan",projection=pr, par=pm,add=TRUE)
}

The default function here produces a map based on a conic projection,

mapjapan()

But we can also use a Bonne projection (a pseudo-conic one, named after Rigobert Bonne)

mapjapan("bonne")

or a Lagrange projection,

mapjapan("lagrange",NULL)

and, finally, an Albers projection,

mapjapan("albers",c(30,40))

Of course, many more projections are possible!

We do not see much here, right? So let us play with a larger country to visualize something. Like Canada. And the distance to, say, Winnipeg.

The first thing to do is to define the coordinates of Winnipeg,

X=-(97+08/60)
Y=(49+53/60)

Then, we slightly change our function

mapcanada = function(pr="conic",pm=45){
map("world","canada",fill=TRUE,col="light green",projection=pr, par=pm)
MP=mapproject(data.frame(x=X,y=Y),projection="")
Xp=MP$x
Yp=MP$y
VX=seq(X-30,X+30,length=501)
VY=seq(Y-30,Y+30,length=501)
VtX=rep(VX,each=length(VY))
VtY=rep(VY,length(VX))
MP=mapproject(data.frame(x=VtX,y=VtY),projection="")
VtXp=MP$x
VtYp=MP$y
ZDeg=deg2rad(cbind(VtX,VtY))
L=2000
D1=DISTANCEDEG(ZDeg[,1],ZDeg[,2],deg2rad(X),deg2rad(Y))<L
points(VtXp[D1],VtYp[D1],pch=19,cex=.2,col="light blue")
L=1000
D=DISTANCEDEG(ZDeg[,1],ZDeg[,2],deg2rad(X),deg2rad(Y))<L
points(VtXp[D],VtYp[D],pch=19,cex=.2,col="light yellow")
L=500
D=DISTANCEDEG(ZDeg[,1],ZDeg[,2],deg2rad(X),deg2rad(Y))<L
points(VtXp[D],VtYp[D],pch=19,cex=.2,col="light blue")
L=200
D=DISTANCEDEG(ZDeg[,1],ZDeg[,2],deg2rad(X),deg2rad(Y))<L
points(VtXp[D],VtYp[D],pch=19,cex=.2,col="light yellow")
points(Xp,Yp,pch=19,cex=.4,col="red")
map("world","canada",projection=pr, par=pm,add=TRUE)
}

Now, we can have some fun

mapcanada()

mapcanada("bonne",45)

mapcanada("albers",c(30,40))

mapcanada("lagrange",NULL)

Fun, isn’t it? Changing the projection will change the shape of equidistant curves.
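As a side note, mapjapan and mapcanada differ only in the region, the size of the grid window, and the radii, so everything could be factored into one generic function (a hedged sketch, with a hypothetical name mapdist, not from the original post; it reuses deg2rad and DISTANCEDEG defined above),

# generic version: region to draw, center (long, lat) in degrees, radii in km
# (give the radii in decreasing order, so smaller discs are plotted on top)
mapdist = function(region, X, Y, radii, width=10, pr="conic", pm=45){
map("world",region,fill=TRUE,col="light green",projection=pr, par=pm)
VX=seq(X-width,X+width,length=501)
VY=seq(Y-width,Y+width,length=501)
VtX=rep(VX,each=length(VY))
VtY=rep(VY,length(VX))
MP=mapproject(data.frame(x=VtX,y=VtY),projection="")
ZDeg=deg2rad(cbind(VtX,VtY))
D=DISTANCEDEG(ZDeg[,1],ZDeg[,2],deg2rad(X),deg2rad(Y))
cols=rep(c("light blue","light yellow"),length=length(radii))
for(i in seq_along(radii))
points(MP$x[D<radii[i]],MP$y[D<radii[i]],pch=19,cex=.2,col=cols[i])
CT=mapproject(data.frame(x=X,y=Y),projection="")
points(CT$x,CT$y,pch=19,cex=.4,col="red")
map("world",region,projection=pr, par=pm,add=TRUE)
}
mapdist("japan", 139+45/60, 35+40/60, radii=c(500,250,100,50))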

Somewhere else, part 82

Some writings worth reading

along with a few articles in French for this first half of the week,

Did I miss something?

Generating your own normal distribution table

It might sound incredibly old fashioned, but for the exam of the ACT2121 probability course (preparing for Exam P of the Society of Actuaries), I will provide a standard normal distribution table. The problem is that it is never the one we are looking for (sometimes it is the survival function, sometimes the cumulative distribution function, sometimes only positive values are considered, etc). Here is the one that will be given for the exam, this Friday.

Now, here is the code I used to generate the table (in LaTeX format),

> u=seq(0,3.09,by=0.01)
> p=pnorm(u)
> m=matrix(p,ncol=10,byrow=TRUE)

We now have the values we want to display in the table,

> options(digits=4)
> m
        [,1]   [,2]   [,3]   [,4]   [,5]   [,6]   [,7]   [,8]   [,9]  [,10]
 [1,] 0.5000 0.5040 0.5080 0.5120 0.5160 0.5199 0.5239 0.5279 0.5319 0.5359
 [2,] 0.5398 0.5438 0.5478 0.5517 0.5557 0.5596 0.5636 0.5675 0.5714 0.5753
 [3,] 0.5793 0.5832 0.5871 0.5910 0.5948 0.5987 0.6026 0.6064 0.6103 0.6141
 [4,] 0.6179 0.6217 0.6255 0.6293 0.6331 0.6368 0.6406 0.6443 0.6480 0.6517
 [5,] 0.6554 0.6591 0.6628 0.6664 0.6700 0.6736 0.6772 0.6808 0.6844 0.6879
 [6,] 0.6915 0.6950 0.6985 0.7019 0.7054 0.7088 0.7123 0.7157 0.7190 0.7224
 [7,] 0.7257 0.7291 0.7324 0.7357 0.7389 0.7422 0.7454 0.7486 0.7517 0.7549
 [8,] 0.7580 0.7611 0.7642 0.7673 0.7704 0.7734 0.7764 0.7794 0.7823 0.7852
 [9,] 0.7881 0.7910 0.7939 0.7967 0.7995 0.8023 0.8051 0.8078 0.8106 0.8133
[10,] 0.8159 0.8186 0.8212 0.8238 0.8264 0.8289 0.8315 0.8340 0.8365 0.8389
[11,] 0.8413 0.8438 0.8461 0.8485 0.8508 0.8531 0.8554 0.8577 0.8599 0.8621
[12,] 0.8643 0.8665 0.8686 0.8708 0.8729 0.8749 0.8770 0.8790 0.8810 0.8830
[13,] 0.8849 0.8869 0.8888 0.8907 0.8925 0.8944 0.8962 0.8980 0.8997 0.9015
[14,] 0.9032 0.9049 0.9066 0.9082 0.9099 0.9115 0.9131 0.9147 0.9162 0.9177
[15,] 0.9192 0.9207 0.9222 0.9236 0.9251 0.9265 0.9279 0.9292 0.9306 0.9319
[16,] 0.9332 0.9345 0.9357 0.9370 0.9382 0.9394 0.9406 0.9418 0.9429 0.9441
[17,] 0.9452 0.9463 0.9474 0.9484 0.9495 0.9505 0.9515 0.9525 0.9535 0.9545
[18,] 0.9554 0.9564 0.9573 0.9582 0.9591 0.9599 0.9608 0.9616 0.9625 0.9633
[19,] 0.9641 0.9649 0.9656 0.9664 0.9671 0.9678 0.9686 0.9693 0.9699 0.9706
[20,] 0.9713 0.9719 0.9726 0.9732 0.9738 0.9744 0.9750 0.9756 0.9761 0.9767
[21,] 0.9772 0.9778 0.9783 0.9788 0.9793 0.9798 0.9803 0.9808 0.9812 0.9817
[22,] 0.9821 0.9826 0.9830 0.9834 0.9838 0.9842 0.9846 0.9850 0.9854 0.9857
[23,] 0.9861 0.9864 0.9868 0.9871 0.9875 0.9878 0.9881 0.9884 0.9887 0.9890
[24,] 0.9893 0.9896 0.9898 0.9901 0.9904 0.9906 0.9909 0.9911 0.9913 0.9916
[25,] 0.9918 0.9920 0.9922 0.9925 0.9927 0.9929 0.9931 0.9932 0.9934 0.9936
[26,] 0.9938 0.9940 0.9941 0.9943 0.9945 0.9946 0.9948 0.9949 0.9951 0.9952
[27,] 0.9953 0.9955 0.9956 0.9957 0.9959 0.9960 0.9961 0.9962 0.9963 0.9964
[28,] 0.9965 0.9966 0.9967 0.9968 0.9969 0.9970 0.9971 0.9972 0.9973 0.9974
[29,] 0.9974 0.9975 0.9976 0.9977 0.9977 0.9978 0.9979 0.9979 0.9980 0.9981
[30,] 0.9981 0.9982 0.9982 0.9983 0.9984 0.9984 0.9985 0.9985 0.9986 0.9986
[31,] 0.9987 0.9987 0.9987 0.9988 0.9988 0.9989 0.9989 0.9989 0.9990 0.9990
> rownames(m)=seq(0,3,by=.1)
> colnames(m)=seq(0,.09,by=.01)

To put it in a nice latex format, we can use

> library(xtable)
> newm=xtable(m,digits=4)
> print.xtable(newm, type="latex", file="nor1.tex")

We now have a simple tex file containing a table.

\begin{table}[ht]
\centering
\begin{tabular}{rrrrrrrrrrr}
  \hline
 & 0 & 0.01 & 0.02 & 0.03 & 0.04 & 0.05 & 0.06 & 0.07 & 0.08 & 0.09 \\ 
  \hline
0 & 0.5000 & 0.5040 & 0.5080 & 0.5120 & 0.5160 & 0.5199 & 0.5239 & 0.5279 & 0.5319 & 0.5359 \\ 
  0.1 & 0.5398 & 0.5438 & 0.5478 & 0.5517 & 0.5557 & 0.5596 & 0.5636 & 0.5675 & 0.5714 & 0.5753 \\ 
  0.2 & 0.5793 & 0.5832 & 0.5871 & 0.5910 & 0.5948 & 0.5987 & 0.6026 & 0.6064 & 0.6103 & 0.6141 \\ 
  0.3 & 0.6179 & 0.6217 & 0.6255 & 0.6293 & 0.6331 & 0.6368 & 0.6406 & 0.6443 & 0.6480 & 0.6517 \\ 
  0.4 & 0.6554 & 0.6591 & 0.6628 & 0.6664 & 0.6700 & 0.6736 & 0.6772 & 0.6808 & 0.6844 & 0.6879 \\ 
  0.5 & 0.6915 & 0.6950 & 0.6985 & 0.7019 & 0.7054 & 0.7088 & 0.7123 & 0.7157 & 0.7190 & 0.7224 \\ 
  0.6 & 0.7257 & 0.7291 & 0.7324 & 0.7357 & 0.7389 & 0.7422 & 0.7454 & 0.7486 & 0.7517 & 0.7549 \\ 
  0.7 & 0.7580 & 0.7611 & 0.7642 & 0.7673 & 0.7704 & 0.7734 & 0.7764 & 0.7794 & 0.7823 & 0.7852 \\ 
  0.8 & 0.7881 & 0.7910 & 0.7939 & 0.7967 & 0.7995 & 0.8023 & 0.8051 & 0.8078 & 0.8106 & 0.8133 \\ 
  0.9 & 0.8159 & 0.8186 & 0.8212 & 0.8238 & 0.8264 & 0.8289 & 0.8315 & 0.8340 & 0.8365 & 0.8389 \\ 
  1 & 0.8413 & 0.8438 & 0.8461 & 0.8485 & 0.8508 & 0.8531 & 0.8554 & 0.8577 & 0.8599 & 0.8621 \\ 
  1.1 & 0.8643 & 0.8665 & 0.8686 & 0.8708 & 0.8729 & 0.8749 & 0.8770 & 0.8790 & 0.8810 & 0.8830 \\ 
  1.2 & 0.8849 & 0.8869 & 0.8888 & 0.8907 & 0.8925 & 0.8944 & 0.8962 & 0.8980 & 0.8997 & 0.9015 \\ 
  1.3 & 0.9032 & 0.9049 & 0.9066 & 0.9082 & 0.9099 & 0.9115 & 0.9131 & 0.9147 & 0.9162 & 0.9177 \\ 
  1.4 & 0.9192 & 0.9207 & 0.9222 & 0.9236 & 0.9251 & 0.9265 & 0.9279 & 0.9292 & 0.9306 & 0.9319 \\ 
  1.5 & 0.9332 & 0.9345 & 0.9357 & 0.9370 & 0.9382 & 0.9394 & 0.9406 & 0.9418 & 0.9429 & 0.9441 \\ 
  1.6 & 0.9452 & 0.9463 & 0.9474 & 0.9484 & 0.9495 & 0.9505 & 0.9515 & 0.9525 & 0.9535 & 0.9545 \\ 
  1.7 & 0.9554 & 0.9564 & 0.9573 & 0.9582 & 0.9591 & 0.9599 & 0.9608 & 0.9616 & 0.9625 & 0.9633 \\ 
  1.8 & 0.9641 & 0.9649 & 0.9656 & 0.9664 & 0.9671 & 0.9678 & 0.9686 & 0.9693 & 0.9699 & 0.9706 \\ 
  1.9 & 0.9713 & 0.9719 & 0.9726 & 0.9732 & 0.9738 & 0.9744 & 0.9750 & 0.9756 & 0.9761 & 0.9767 \\ 
  2 & 0.9772 & 0.9778 & 0.9783 & 0.9788 & 0.9793 & 0.9798 & 0.9803 & 0.9808 & 0.9812 & 0.9817 \\ 
  2.1 & 0.9821 & 0.9826 & 0.9830 & 0.9834 & 0.9838 & 0.9842 & 0.9846 & 0.9850 & 0.9854 & 0.9857 \\ 
  2.2 & 0.9861 & 0.9864 & 0.9868 & 0.9871 & 0.9875 & 0.9878 & 0.9881 & 0.9884 & 0.9887 & 0.9890 \\ 
  2.3 & 0.9893 & 0.9896 & 0.9898 & 0.9901 & 0.9904 & 0.9906 & 0.9909 & 0.9911 & 0.9913 & 0.9916 \\ 
  2.4 & 0.9918 & 0.9920 & 0.9922 & 0.9925 & 0.9927 & 0.9929 & 0.9931 & 0.9932 & 0.9934 & 0.9936 \\ 
  2.5 & 0.9938 & 0.9940 & 0.9941 & 0.9943 & 0.9945 & 0.9946 & 0.9948 & 0.9949 & 0.9951 & 0.9952 \\ 
  2.6 & 0.9953 & 0.9955 & 0.9956 & 0.9957 & 0.9959 & 0.9960 & 0.9961 & 0.9962 & 0.9963 & 0.9964 \\ 
  2.7 & 0.9965 & 0.9966 & 0.9967 & 0.9968 & 0.9969 & 0.9970 & 0.9971 & 0.9972 & 0.9973 & 0.9974 \\ 
  2.8 & 0.9974 & 0.9975 & 0.9976 & 0.9977 & 0.9977 & 0.9978 & 0.9979 & 0.9979 & 0.9980 & 0.9981 \\ 
  2.9 & 0.9981 & 0.9982 & 0.9982 & 0.9983 & 0.9984 & 0.9984 & 0.9985 & 0.9985 & 0.9986 & 0.9986 \\ 
  3 & 0.9987 & 0.9987 & 0.9987 & 0.9988 & 0.9988 & 0.9989 & 0.9989 & 0.9989 & 0.9990 & 0.9990 \\ 
   \hline
\end{tabular}
\end{table}

and the following code to get a graph, illustrating what was actually computed in the table (see a previous post for more details)

> library("tikzDevice")
> options(tikzMetricPackages = c("\\usepackage[utf8]{inputenc}",
+ "\\usepackage[T1]{fontenc}", "\\usetikzlibrary{calc}", "\\usepackage{amssymb}"))
> tikz("normal-dist.tex", width = 8, height = 4, 
+ standAlone = TRUE,
+ packages = c("\\usepackage{tikz}",
+ "\\usepackage[active,tightpage,psfixbb]{preview}",
+ "\\PreviewEnvironment{pgfpicture}",
+ "\\setlength\\PreviewBorder{0pt}",
+ "\\usepackage{amssymb}"))
> u=seq(-3,3,by=.01)
> plot(u,dnorm(u),type="l",axes=FALSE,xlab="",ylab="",col="white")
> axis(1)
> I=which((u<=1))
> polygon(c(u[I],rev(u[I])),c(dnorm(u)[I],rep(0,length(I))),col="red",border=NA)
> lines(u,dnorm(u),lwd=2,col="blue")
> text(-1.5, dnorm(-1.5)+.17, "$\\textcolor{blue}{X\\sim\\mathcal{N}(0,1)}$", cex = 1.5)
> text(1.75, dnorm(1.75)+.25, 
+ "$\\textcolor{red}{\\mathbb{P}(X\\leq x)=\\displaystyle{
+ \\int_{-\\infty}^x \\varphi(t)dt}}$", cex = 1.5)
> dev.off()

Now we have the graph in another tex file. It is possible to embed the code in a tex file, or to compile the tex file to get a pdf file. I generated the pdf file.

Here is the tex file I finally get. It is now extremely simple to generate your own normal distribution table. I guess it could also be possible to use Sweave, or knitr. Once I get a copy of Yihui’s book, I will try to use it to generate distribution tables for my courses!
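As mentioned at the beginning of this post, the table students need is sometimes the survival function rather than the cumulative distribution function; the same few lines can be adapted (a sketch, writing to a hypothetical file nor2.tex),

# survival function table, P(X > x), for x = 0.00, 0.01, ..., 3.09
u = seq(0, 3.09, by = 0.01)
m2 = matrix(1 - pnorm(u), ncol = 10, byrow = TRUE)
rownames(m2) = seq(0, 3, by = .1)
colnames(m2) = seq(0, .09, by = .01)
library(xtable)
print.xtable(xtable(m2, digits = 4), type = "latex", file = "nor2.tex")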

Overdispersion and count data

This week, in the non-life insurance course, we will discuss overdispersion, which will close the part of the course on claim frequency modeling. The slides are online. But before talking about overdispersion, we will finish the presentation of GLMs. I am also linking to Chapter 15 of John Fox's book Applied regression analysis and generalized linear models, as well as James K. Lindsey's book Applying Generalized Linear Models. I would also like to point to Germán Rodríguez's lecture notes, with notes on Poisson regression (and a short complement on the notion of overdispersion).
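(As a minimal illustration, on simulated data and not taken from the course material: negative binomial counts fitted with a Poisson GLM, where the Pearson-based estimate of the dispersion parameter ends up well above 1, the signature of overdispersion.)

# simulated claim counts with overdispersion (negative binomial), fitted by a Poisson GLM
set.seed(1)
n = 1000
x = rnorm(n)
y = rnbinom(n, size = 1.5, mu = exp(0.2 + 0.5 * x))   # variance larger than the mean
reg = glm(y ~ x, family = poisson)
# Pearson-based estimate of the dispersion parameter (close to 1 without overdispersion)
sum(residuals(reg, type = "pearson")^2) / reg$df.residual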

Instructions for the second assignment will be sent by email.

Panel on academic blogging

This Monday, I joined a panel discussion in Montréal on “Minor forms of academic communication: revamping the relationship between science and society?“, at the World Social Science Forum. The Forum was organized by the ISSC, i.e. UNESCO. Yes, Monday was Thanksgiving in Canada. I have to admit that I was not used to celebrating Thanksgiving, in Europe, but since I now live in North America, I try to enjoy it. As you can learn on http://en.wikipedia.org/wiki/Thanksgiving, at Thanksgiving, you’re supposed to be “Spending Time with Family“. I find it odd that UNESCO organizes a three-day conference on that weekend (the conference actually started on Sunday). It looks like you’re not supposed to have a family life when you’re an academic…

Anyway, it was interesting to join that panel, since I had the opportunity to have interesting discussions with its other members. I was glad to meet Loïc Le Pape, the editor of a great blog on religion and politics, http://politicsofreligion.hypotheses.org/. It was interesting to see that, even if we blog on very different topics, with very different styles, we both – as academic bloggers – share the same feelings, the same joy and the same fear about blogging. I also enjoyed meeting – finally – the legendary André Gunthert (see http://andre.gunthert.fr/). You probably know André from his blog http://culturevisuelle.org/icones/. The text of his talk is now online there, and I really appreciated it (anyone interested in academic blogging should probably read it). I was also glad to discuss Boulet‘s incredible post entitled “notre Toyota était fantastique“ (we both love Boulet’s work, and more generally graphical novels, what we call bande dessinée in French, the ninth art actually). Boulet is a comic book writer who has been publishing on a blog for years, and I have to admit that I prefer reading the published books to the blog (his Notes are the paper version of the blog). But last week, I saw something on his blog that I will never be able to read in a book

It is still a cartoon, probably more an autobiographical graphic novel, but those animations are great. For the first time, I really see the interest of blogging in graphical novels. Please, have a look at “notre Toyota était fantastique“, it is really something you should experience… And I was glad to share André’s point of view, since this is exactly his expertise (as a researcher). I hope that we’ll find some time to discuss more about data visualization, since I believe I have a lot to learn from him.

It was also great to finally meet Marin Dacos, one of the founders of http://hypotheses.org/, the platform now hosting my blog. Hearing a more macro-level vision of academic blogging was interesting, and complementary to the bloggers’ experiences I had been reading about over the past days to prepare my talk. I was also glad to discuss legal aspects related to blogging, since one of the benefits of being hosted by an academic platform is to have some kind of backup and support. One of the issues I should still work on is adding credits when I use a picture on my blog. I am aware that it is not fair to omit citations for pictures when I upload them on my blog. A few months back, I decided to add mentions, explaining where each picture was borrowed from. Then, since the owner of the rights to some pictures was checking on the internet using robots (or simply Google), within a week, I got an email asking me to remove those pictures, since it was illegal. The email was not friendly at all. So I removed the pictures, and all the citations and mentions. So, if some authors of pictures want me to remove a picture from my blog, or want me to add a mention below, I’d be glad to do so! Please, just send me an email.

And finally, another interesting experience related to this panel was that, for the first time, I discovered what live tweeting was. So far, it was only a legend that I could read about here and there (see e.g. http://www.theguardian.com/higher-education-network/blog/…). I had already experienced the single tweet mentioning a talk, like

but here, I guess for the first time, I experienced live tweeting,

I found that awesome! I mean, at first, I found it odd to see myself mentioned so many times in consecutive posts on twitter. But it is great to see what people in the room got from my talk (I have to admit that it was not live for me since I do not have a smartphone – not even a phone actually – I discovered it later, in the evening, when I got back home, after some beers with André and Loïc). I really enjoyed what Ewen did on his blog a few months ago, with a detailed summary of the R conference he went to (see http://editerna.free.fr/wp/?p=56). Here, it is the tweet version. And I found it great… I don’t know why it is the first time I see this (I don’t know if it is something you see more in social science than in mathematics, or just due to the fact that I have not attended many conferences since I have had three kids), but I loved it… Thanks again Ravi.

Somewhere else, part 81

Three very interesting posts, this week, somewhere else on the internet

among other writings worth reading

of course, you need to know the following (quite popular on the internet) to find it funny

and, as always, a few articles and posts in French,

Did I miss something?

Earthquake dynamics

I just uploaded on http://hal.archives-ouvertes.fr/hal-00871883 a joint paper entitled Modeling earthquake dynamics.

In this paper, we investigate questions arising in Parsons & Geist (2012). Pseudo-causal models connecting magnitudes and waiting times are considered, through generalized regression. We use conditional models (magnitude given previous waiting time, and conversely) as an extension of the joint distribution model described in Nikoloulopoulos & Karlis (2008). On the one hand, we fit a Pareto distribution for earthquake magnitudes, where the tail index is a function of the waiting time following the previous earthquake; on the other hand, waiting times are modeled using a Gamma or a Weibull distribution, whose parameters are functions of the magnitude of the previous earthquake. We use those two models, alternately, to generate the dynamics of earthquake occurrence, and to estimate the probability of occurrence of several earthquakes within a year, or a decade.
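To give an idea of the alternating mechanism (a sketch only, with made-up parameter functions, not the ones estimated in the paper): simulate a waiting time given the previous magnitude, then a magnitude given that waiting time, and iterate.

# sketch of the alternating dynamics: waiting time | previous magnitude (Gamma),
# then magnitude | waiting time (Pareto, by inversion), with hypothetical parameter functions
set.seed(1)
n = 200
M = numeric(n); W = numeric(n)
M[1] = 6                                   # arbitrary starting magnitude
for (i in 2:n) {
  shape = 1.2                              # hypothetical Gamma parameters,
  rate  = 0.05 * M[i-1]                    # depending on the previous magnitude
  W[i]  = rgamma(1, shape = shape, rate = rate)
  alpha = 2 + 0.01 * W[i]                  # hypothetical tail index, increasing with the waiting time
  M[i]  = 5 * runif(1)^(-1/alpha)          # Pareto(scale = 5, shape = alpha) by inversion
}
# e.g. number of simulated events with magnitude above 7
sum(M > 7)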

Moment computations (expectation and variance)

On Friday, we continue the ACT2121 course, preparing for the SOA's Exam P (probability), with a new set of exercises, on topics 8 and 9 (as classified in Jacques Labelle's book, which serves as a reference for this course).

Next week, a new exam, covering the 4 topics discussed this week and last week.

 

The role of blogging in academia

In a few days, I will participate in a panel discussion in Montréal, chaired by Marin Dacos, entitled “Minor forms of academic communication: revamping the relationship between science and society?“, at the World Social Science Forum. I do not have much expertise (compared with the colleagues involved in the panel) even if I frequently observe the community of academic bloggers, and I regularly interact with some of them. For this panel discussion, Marin asked me to share my experience as an academic blogger. So, let’s try to describe the Freakonometrics adventure…

  1. The origins: why and how the blog started?
  2. The practice: how do I blog?
  3. The future: why is it still worth blogging, in academia?

I will try to organize my post according to these three items (note that you can get directly to each of them if you want to skip some parts). But to be honest, my post will probably get soon very messy…

  • How blogging started – from the experience at Université de Rennes to ‘Freakonometrics’

The first version of the blog started at Université de Rennes 1, following a request from the IT department. In 2007 (as far as I remember), someone came up with the idea that all researchers should have a web page, or at least a page explaining their area of expertise, with links to papers, and lecture notes. But a lot of researchers were reluctant, and that IT person thought that blogs might be an interesting alternative. I was (extremely) skeptical, but since I had just arrived in Rennes at that time, I thought it could be fun. I did have web pages for my courses (see a relic of the past, http://193.51.89.161/st/ for a course on time series I gave 10 years ago, in Paris), and I did have a webpage with weekly updates, that could be called a blog. So I did have some kind of experience. But still.

From a technical point of view, blog is a contracted form of weblog, a website made up of ongoing entries, that we will call posts. And those posts are published in reverse chronological order. That makes it difficult for students to follow, unless they go on the blog frequently. There might be tags and categories, that can be used to distinguish posts related to conferences, publications, and teaching. This first blog was a great experience. Teaching was fun, students did like the idea of the blog, and comments became a place to discuss. The blog was some kind of (open) forum, there were a lot of comments. I became blog addicted by that time.

Then I started to be recognized in conferences, and I wanted to stop having an eponymous blog (coincidence, or not, it was also around the time I moved to Montréal). Actually, several academic blogs are eponymous, it is rather common (we’ll get back to that later on). And that makes sense since most bloggers tend to identify their blogs as both personal and professional. Consider for instance Ann Althouse, professor at University of Wisconsin Law School (since 1984), who met her husband through blogging (see http://nytimes.com/…). I guess you can find all her life (personal and professional) online. Boundaries between professional, academic and personal life may be difficult to establish, mainly because all those aspects intertwine constantly in the lives of scholars. After almost three years, the Freakonometrics adventure started officially.

This blog is clearly an academic blog. Because of the editor, because of the contents, and because it is now hosted by hypotheses.org, a “platform for academic blogs in the humanities and social sciences“. It is one academic blog among many others1. John Quiggin explained in 2006, “with the arguable exception of law, economics is the academic discipline where blogging has been embraced most enthusiastically“. This might explain why there is such an active – and enthusiastic – community (see more recently another discussion, by John Quiggin). About the academic blogosphere, Jacob Halford said that “the situation within academic blogging seems to be that we are currently a bunch of islands that are vaguely connected but not really arranged into continents and groups. We are all spread out across the digital world with a fragmented network between us.” What can we find in this community? Some economists like to use their name, like Greg Mankiw’s blog for instance (with subtitle “random observations for students of economics“). But most of them prefer to hide themselves behind a short title, like Confessions of a Supply-Side Liberal (by Miles Kimball), The Conscience of a Liberal (by Paul Krugman), or a longer one, like Statistical Modeling, Causal Inference, and Social Science (by Andrew Gelman). The editor is never hidden, and we usually find a short bio, including a picture (most of the time), as well as a link to a webpage hosted by some university. Other examples might be I’m a bandit, with subtitle “random topics on optimization, probability and statistics. by Sébastien Bubeck“, or what’s new, with subtitle “updates on my research and expository papers, discussion of open problems, and other maths-related topics. by Terence Tao“. Some blogs use puns (it is a feature that you can find on almost any blog: most of them use humor, just to explain that this is just a blog) like Hyndsight, “a blog by Rob Hyndman“. But the reference I like most is perhaps one of those blogs where the (true) name appears, but only slightly… In those blogs, the name of the editor appears only as the author of posts, like Marginal Revolution, where the contributors are Tyler Cowen and Alex Tabarrok, or Normal Deviate (with subtitle “Thoughts on Statistics and Machine Learning“) by Larry Wasserman (you simply have to click on the About hyperlink, but it is the only place you’ll find the name of the editor of the blog). Now, to explain the name of my blog, in a few lines, I should probably spend some time discussing a major influence.

  • How blogging started – Influences

In 2005, University of Chicago economist Steven Levitt and New York Times journalist Stephen Dubner published a collection of ‘economic’ articles, claiming that economics is, at root, the study of incentives. This is how http://freakonomics.com/blog/ started (the first post was published in September 2005). My blog is more about econometrics than economics, and I did borrow (not to say steal) the name freakonometrics from a colleague of mine at the Ecole Polytechnique. According to Francis Kramartz, there were two different approaches in econometric courses: see econometrics as an application of mathematical statistics, where the linear model is a projection of the variable(s) of interest onto the subspace of linear combinations of possible explanatory variables, and you derive properties, and then you discuss possible applications. Or you start with applications, with data, and then you try to find a (possibly) predictive model. Francis called this second approach freakonometrics. I find the maths behind econometrics nice (especially when you can mention geometry and projections), but I also love playing with datasets! I love the feeling you have when you try to extract information, and think about visualization issues. I thought freakonometrics was a proper description of what would be in the blog.

Another important point when I started blogging was related to a so-called open community. I started blogging a few years after discovering R (a free programming language, and a software environment, for statistical computing and graphics). The community of R users is based on the idea that we should share notes, codes, and tips. Since blogging is sharing some knowledge, it became natural to blog, including codes. And I have to confess that it has always been thrilling to see people willing to re-use what I’ve done in a blog post (as long as they don’t make money off it). Of course, there are alternatives to blogging, such as being an active member on a forum (like stackoverflow), or answering on mailing lists (see the paper by Timothy Stephen and Teresa Harrison on the Comserve experience). I truly admire contributors on forums or mailing lists. And somehow, we do the same kind of things. Except that on my blog, I am in charge.

  •  What does the blog look like?

This might be a stupid question: since you read this post, you obviously can look around, and see what the blog looks like. On the left, I try to give a short description of my blog. I pretend that it is an “unpretentious academic blog“. It is an academic blog from its contents, not only because it is written by someone within academia. And it is unpretentious because I want my blog to be casual (but we’ll get back to this point later on). About me, I pretend to be “a surreptitious economist and born-again mathematician“. This is from my background. I studied Mathematics in France, then I discovered Economics. I got a master’s degree in Economics and Mathematics, and a PhD in Mathematics. Then I chose to join an Economics department, in France, for my first position, and finally got a position in a Mathematics department, in Montréal. Currently, I am rediscovering Mathematics, but I still love Economics. Econometrics is neither Mathematics, nor Economics. It happens to be somewhere in between. A macroeconomist will analyse and compare transportation prices. A microeconomist will try to understand why people decide to take their bike, or a car, to go to work. They will try to explain why return tickets are usually cheaper than one-way tickets. An econometrician will try to get datasets with ticket prices, for different dates, different destinations, etc, and then try to quantify the price difference. Not necessarily explain it. This is what I do in my blog. I explain how to model, and I usually skip the interpretation part.

I claim that I am “a blog activist, and an actuary, too“. About the second part, in Europe, no one knows what actuarial science is, so usually, I do not mention it. In North America, it is much more popular. And yes, I am an actuary. I did publish books on the mathematics of insurance, and I am about to edit a book on computational aspects of actuarial science. I am not proud of being an actuary, but the truth is, I find insurance problems puzzling and challenging, for mathematicians and economists. For the first part, yes, I keep writing on my blog that academics should blog. They do have legitimacy to comment and explain, so they should use it. I can even go to a conference on a holiday to talk about blogging!

On the right part of my blog, I try to get legitimacy. There are three different pieces of information,

I mention my academic publications since I believe they give me legitimacy, when I talk about econometric models, or probability. This is how academics judge other academics within academia.

I believe that those websites give me some credibility, not as a researcher, but as a blogger. I do also mention top-site mentions, such as “100 Savvy Sites on Statistics and Quantitative Analysis“. I should probably mention that it might look like my academic profile gives me credibility as a blogger. But somehow, I have the feeling that the causality has changed direction: I now have credibility in my research because of my blog. Some editors asked me to referee submitted articles because they had read some posts on my blog (as they told me explicitly in their email). Some colleagues invited me because they know me2 from my blog, not from my research papers.

  • How do I blog, and what do I blog about ? 

My research and teaching activities are related to economics, mathematics, actuarial science, etc. I do blog about those subjects, using a less formal medium than academic journals, even if I write to an audience that I am usually talking to (students or researchers). And I cannot pretend that I write for non-economists, or non-mathematicians. Even if I want to, jargon comes naturally (even if I pretend to be casual). Different sorts of posts can be published on the blog. If we try to distinguish, there are

  • post to mention events and news
    • about forthcoming conferences, thesis prizes, etc. This was done on the blog when I started, but this information is now shared with microblogging (through Twitter, via @freakonometrics). Nevertheless, there might be some recent examples on the blog, such as a PhD defense or a PhD prize, or to mention the panel of the WSSF.
    • I should probably mention that I use micro-blogging to share readings I found interesting. Interesting tweets are now posted in a dedicated category, twice a week
  • posts to mention amusing situations to explain (possibly) difficult subjects
    • about game theory: when should I (optimally) shoot at my son when playing with water guns? We have empty guns, we rush to fill them: the sooner I stop, the more likely he will get wet before I do (which is good, from my point of view); the longer I wait, the more water I will get, and the lower the probability of missing him. See also a discussion about optimal strategies to get married
    • about Markov Chains: what is the transition probability of a Markov Chain? Can I derive it in a simple case, like the Snakes and Ladders game? Here, I explained how to model that game, and how to use simple Markov chains to see where you might be after rolling a die ten or twenty times.

see also some posts on genetic algorithms,

    • about demography issues: what is the age of the oldest person you know (from a TV commercial)? How old are the popes, or the Members of Parliament (twice actually)? Based on some open dataset, I compare the distribution of the age of MPs, and the distribution of the age of people who might be able to vote.

    • about probability: if I go play roulette, and I wish to maximize the probability of doubling my initial wealth, should I play small or should I play big? Or the probability of winning when playing cards: the more players there are, the longer the game? Some of the posts were published before I went to Las Vegas (for holidays).
    • about number theory, and complexity of strategies: on a Sunday evening game with kids (I pick a number, and you should find it), or a story about MacGyver in an Afghan jail. In this post, I use a nice theoretical result in group theory in a MacGyver story, trying to explain that yes, group theory can be fun, too.
    • about geometry: what could be the distance between points, and what are the connections with pigeonholes; is it simple to get your own Escher-type graph; on breaking pieces of wood (part 1, part 2 and part 3) or other techniques to share a pizza fairly? Most of those posts are based on discussions with my kids, and I try to go further, and to illustrate difficult geometric concepts, or results, using simple discussions I had with my (young) kids
    • about magic tricks: how to sort a matrix, by row. This post is based on the mathematical explanation of some magic trick.
    • about classification, and playing pétanque: following some discussions with students of mine, a few years back, I suggested a dual version of a popular French sport,
    • about econometrics: based on Playmates’ measurements, I try to explain the importance of working with individual observations (versus time-related ones). Based on those measurements, I try to see if there is some correlation between chest and hip measurements, for young women. It turned out that there was no correlation at all, over 60 years, because we cannot treat those observations as individual independent observations, since there was a strong temporal evolution. Actually, chest and hip measurements had opposite trends with time, which tends to hide the true correlation

    • about stochastic processes, random walks (and option pricing): on the arcsine law, and the drunkard’s walk. In those posts, I try to answer the drunkard’s important questions, and relate them to standard questions in mathematical finance
  • posts to react to articles, discovered online, somewhere else
    • about breaking records: how come every year is the most expensive one, in terms of natural catastrophes, or about financial records. In this post, I try to see if, over almost 20 years, it is an outstanding event to have 11 consecutive days where the index went down (and to compute that probability)

    • about the 100% chance of a nuclear incident called statistical certainty, about an article published in a French newspaper. In that article, two engineers explain that there is a 100% chance of having a nuclear accident, in France, over 30 years, simply because they made a basic mistake in a probability computation,

    • about e-cigarettes, and confusion in some French newspapers about a scientific study, where it was claimed that there was a difference, but (statistically) not a significant one.
     
    • about the probability of having an 11-hour tennis match, about the probability of having twice the same numbers in a lottery, or in UEFA series. In 2010, there was the longest tennis match ever, and using some extreme value theory results, I try to estimate the odds of having such a match

    • about the traveling salesman (more a book review): inspired by William Cook’s “In Pursuit of the Traveling Salesman“, some codes were proposed to solve a (difficult) mathematical problem (in the context of collecting candies at Halloween)
    • about surveys, and polls: what does it mean if 75% of the people interviewed in a survey claim that they do not ‘believe’ in surveys, and opinion polls; and some code to predict the winner of some elections (based on several polls)
    • about insurance and bargaining: why it is – sometimes – rational for an insurance company to bargain with their insured, based on an old research paper published in the 70s.
    • about financial issues: what does it mean that a financial stock is held, on average, 8 seconds? What would a bunker full of gold be like, how large can it actually be? It started a discussion about the (difficult) estimation of what should be a (simple) average time
  • posts to discuss a question asked by a student, or a colleague, that puzzled me (it is then more a discussion, without answers)
    • about the interpretation of a parameter in a model: can weights in weighted least squares be understood as frequencies?
    • about subadditivity and risk measures: why statements in discussion papers regarding insurance company solvency might be incorrect, and lead to counterintuitive situations.
  • a presentation (and if possible an explanation) about a paradox,
    • about the Monty Hall paradox (see also a discussion on a similar topic, about computing probabilities with respect to some information, or a funnier one)
    • about Simpson’s paradox, and pies choice,
    • about selection bias: why we should not listen to students and policemen, or why there are always more buses on the opposite side of the road.
    • about probabilities, like nuns and Hell’s Angels in an airplane, some nice puzzles, here and there, and probabilities of having brothers and sisters: do boys have more brothers, or more sisters?
    • about events that will occur at some infinite time, with a strictly positive probability
  • posts to share some experiences with students (or by myself) to investigate a model, a dataset, or a computer function
    • about airline tickets: when is it optimal to buy – online – an airline ticket? I used a dataset mentioned in a study on a similar topic, and I try to explain that this question is related to some risk aversion measure: are we looking for the date where, on average, the price is the lowest, or a date where, with 90% chance, the price is the lowest?
    • about graphs: with Ewen Gallic, what are the connections among twitter accounts of Members of Parliament (in France)? The idea was to learn how to play with the Twitter API, and to get a nice visualization,

There was also a post on hours of tweets (where I tried to see how long I could survive away from Twitter).

    • about graphical functions: finding Waldo in a picture, using the red and white striped shirt, or enclaves in maps. Discovering image processing functions
    • about circular density estimation: how to make sure (for hourly data) that 23:50 is close to 00:10? That (for spatial data) -170 degrees (west) is close to 170 degrees (east)? With applications to earthquake locations, or calls to 911. Actually, for 911 calls, the first post was entitled “minuit, l’heure du crime“, where I tried to figure out whether there were more crimes at midnight (which is precisely the time of discontinuity),
    • about text mining, and letter frequency in languages (and books): including La Disparition, a book written in French with no E (how different is the letter frequency, compared with the conditional one, when E is removed?). Discovering text mining functions.

see also text extraction from tweets,

    • about first names, in France, per year. Using counts of births per first name, region, and year, I try to get visualizations of spatial and temporal patterns associated with some common first names,
    • about car speed, or car accidents (based on some dataset I got). I try to study the links between the speeds of two consecutive cars (following a discussion I had with my wife while I was driving, where I tried to explain that if I drive too fast, it might be because the driver in front of me is driving too fast),
    • about sharing datasets: we generated a dataset linking zip codes and spatial coordinates, and another with counts of births in France, per day.
  • posts with a more historical perspective on a theory I discuss in a course
    • about the history of extreme value theory: what is the story behind the Fisher-Tippett theorem, and the law of three types, with the Gumbel, Weibull and Fréchet distributions? Did those people (really) work on extreme values? Who got which result? See also the history of the return period concept.
    • about the Student t test: who is this Student, and what are the connections with Guinness?
    • about the chi-square distribution: what did Pearson discover first, the chi or the chi-square distribution?
    • about discounting: Leonardo Fibonacci and discounting
    • about the law of small numbers: why is the Poisson distribution so important?
    • about financial market efficiency (in French): who said that asset prices could be modeled as random walks (and are therefore ‘purely’ random)? What is a martingale? See also an application to temperature time series in Montréal.
    • about optimization: did Newton and Raphson (from the so-called Newton-Raphson algorithm) really invent gradient descent?
  • codes, references and slides related to some courses, conferences or research papers

    • about demography: codes and tables generated for the Appendices of a book on Bodily Injury, Insurance and Legal issues,
    • about climate change: codes and graphs related to a talk given in Lyon on extremal events, and climate change.

  • a more general discussion, about science and dissemination (not to say teaching)
    • about teaching computer codes to kids: on hacker generation, and why kids should learn how to code
    • about research mythology: how journalists discuss research issues (based on two experiences)
    • about humanities versus sciences: visiting the Guggenheim museum in NYC versus MoMath (the Museum of Mathematics), with kids

 

  • How do I blog, and what do I not blog about ? 

Now, I should admit that I will not blog on all topics. For instance, I was involved in some discussions, where a student of mine asked interesting questions, about religion and education. I wanted to share things I had heard (actually, read), but a lawyer told me that I should avoid doing so. And I know there are things that I should not post on my blog if I do not want to ruin my career. So for legal reasons, there are things that I will not write on my blog. Similarly, I try to avoid libel actions, so when I write that someone published something stupid in a post, or in an article, I try to say that as nicely as I can. Again, I claim that my blog is “unpretentious“, so I am not here to fight, nor to be preachy, just to have fun! As we’ll discuss in a couple of paragraphs, I do fight every morning on my bike, and I do fight when I arrive at work. My blog is still a peaceful place, and I want to keep it like that…

Another reason why I will not blog about everything is that I do not have the legitimacy to blog on everything. I mean, I know a bit of mathematics, but when I post something about a property in geometry, I feel like an impostor. Similarly when I write something about the history of some statistical concept, about regulation in insurance, about a simple game theory result, etc. I try to publish on topics that are either related to my research, or to my teaching activities. Even tonight, when writing this post, it looks like a big fraud to me, and I find it extremely hard to write some exegesis about my blogging activity.

Finally, I should confess that I do not blog about everything because I try to keep some ideas for my research, which will (hopefully) end up as a publication in a peer-reviewed journal. Blogging does not get much credit in an academic career, let’s be pragmatic… Blogging is not a substitute for other academic writings! But they can clearly coexist (see a discussion on the insidehighered blog). If we compare a blog post and a (standard) academic article, it takes more time to write a paper, mainly because full referencing is necessary, and because it is necessary to convince the editors, as well as the referee(s), that you wrote something original, probably a major contribution. It is much faster to publish a post! And probably most importantly, while blogging, you can explore a question; you do not need to answer it.

  • How do I blog? Somewhere between an academic paper and a journalistic article?

As mentioned previously, some academics do publish posts in blogs hosted by newspapers, such as The Conscience of a Liberal (by Paul Krugman). Somehow, those journals (here the New York Times) host academics the same way newspapers hosted opinion pages, where academics were invited to give their point of view, a few years back. But, as John Quiggin explained in 2006, “newspapers are generally reluctant to report on academic working papers and similar publications unless the conclusions are obviously newsworthy“. Economics in newspapers has to be related to macroeconomics, and science has to be related… to medicine or technology (the science page is now a nice advertising page for the most recent smartphone applications, or the electronic cigarette, as mentioned previously). And, as mentioned by Robert Cottrell, a couple of decades ago, experts (not to say scholars) “have functioned as sources for newspaper journalists. Their opinions would emerge often mangled and simplified, always truncated, in articles over which they had no final control.” Now, with blogs, it is possible to read them directly, in a style that is easier to understand, compared with (standard) academic publications. “The general reader has access to expertise that was easily available, a decade ago, only to the insider or the specialist“. Writing a post in an academic blog is neither pretending to be a journalist, nor writing an academic article. In the traditional process of research, we discuss with colleagues, possibly in conferences, but only the final publication remains. False starts and heuristics are skipped, because they might appear unnecessary to get an understanding of the article. And I truly believe that this is exactly where blogging becomes interesting.

Why am I still blogging? and why I will probably do it for a long time…

  • Why am I still blogging ? a peaceful island within academia ?

One of the main reasons why I am still blogging after 6 years, with enthusiasm, is because it is still a lot of fun. And it is a place that I appreciate all the more because academia is currently a nightmare. I might sound incredibly cynical (and I have to confess that I think I am becoming cynical), but I interact with the blogging community because I want to. I interact with students and colleagues because I have to. There are two important issues in academia, when talking about money: rising tuition fees, and the decrease of public funding for research. I have the feeling that (undergraduate) students are consumers. And consumers are the real bosses, you know that, right? And as a professor, I am like the seller in the store. I can try to give some advice, but I am useless. Most of my students are no longer interested in the story behind a model, they want some recipes for their future job. They want to know what to use, and when. And since universities evaluate their professors, they ask students to fill in evaluation forms. And professors do everything to get good evaluations. That is a simple and extremely rational game. And about the colleagues, I can tell you so many stories, that I have experienced, or heard about. Everyone is now suspicious. And again, it is rational. The less money there is to do some research, the more competition. If there are one or two grants in my field of research, in Québec, I have no more colleagues and friends, I have only competitors. This is not (only) a feeling I have, it is truly something that I have observed. Many times, I have been with colleagues, and we have been working with doors closed, not because we did not want to be disturbed, but to avoid someone stealing our idea. We have even gone to work in some coffee shop, outside the university. With students, it is a commercial world, while with colleagues, it is a world of paranoia (most of the time for good reasons). I always find it odd, when I fill in a form for a grant, the section about my ‘main contributions‘. When I arrived in Canada, some colleagues were paternalistic (one more time, for good reasons) and they told me that I had to mention the impact of my research. Like my research could save the world, or help to cure some awful diseases… No, what I do is theoretical, probably useless, and I cannot expect too much. Unfortunately, I am not Alvin Roth (who proved that mathematics can save lives, literally). And yes, when I have three consecutive hours to do some research, it is probably because either I missed a course, or because I forgot about a meeting…

Compared with those (standard) academic activities, blogging is fun. Within the blogosphere, I do not see competition, just motivation and stimulation. You can interact with other bloggers, learn from them, and so far, it is still a pleasure to blog. Some bloggers claim that it is a shame that blogging is not recognized (formally) within academia, but I think it is actually a great opportunity. We do blog because we want to, it is not another required task. So it can still be fun… And when talking about the impact of my activities, I believe that my blog has much more impact than my teaching (in a class room) or my research.

  • Why am I still blogging? Who am I blogging for?

I have to confess that I blog mainly for myself, in the sense that I do not want to have a readership waiting for me to post something every day! Also in the sense that my blog gives me complete freedom to talk about things I find fun, with a whole-person style (and actually I do use my blog to develop my own “writing voice“, to use Jill Walker’s words), discussing personal issues. I remember when I started blogging. At first, I thought that no one was reading my blog, and then friends and colleagues told me that they were. It is actually thrilling (not to say scary) to have 5,000 readers for a blog post, when you think about the number of readers of academic articles. As claimed by Brendan O’Connor, “blogs are a more effective medium for intellectual influence than journal articles“. Somehow, it looks like academic journals try to avoid exposure. I mean, publishing an article in the Journal of Narrowly Focused Hyper Specialized Field Studies is a great place to hide your research.

If I wanted to be provocative, I would say that research is a social activity, where we need to keep, and to create, interactions with various researchers, reading papers, keeping our mind open. On the other hand, blogging is definitively a personal activity. And since I am an old bear, usually reluctant about standard social activities, blogging is perfect for me.

As mentioned in some previous posts, I use my blog as a notebook, to keep traces of ideas, and codes. So yes, blogging is personal. But it is open, and anyone can access it. So I use my blog to promote my work, and my scholarship. Using Melissa Gregg‘s quote, I see academic “blogging as conversational scholarship“. Blogs are great to encourage conversation! Blogs are coffee houses (in the 17th-century English sense). In blogs, we connect to other blogs, using comments, reactions, and hyperlinks. Actually, Derek de Solla Price explained in 1963 (see e.g. Doug Horne’s paper) that “the prototype of the modern scientific paper is a social device rather than a technique for accumulating quanta of information“. So having informal discussions is probably the best way to work, as an academic. This is also the idea of Diana Crane, “the growth of scientific knowledge is a kind of diffusion process in which ideas are transmitted from person to person“. Using blogs, we can develop and connect a network of various people, from PhD students to practitioners in the industry, as well as more experienced academics who might share common interests. The blog is read by students, former students, colleagues, probably the dean, and even the department secretary. My kids too, someday…

  • Why am I still blogging ? cost-benefit analysis

With a simple cost-benefit analysis, I will probably keep blogging if the benefits outweigh the costs. One component is related to the time issue: is blogging costing, or saving, time? I frequently receive emails asking for explanations on a technical question (from students, former students, or anyone actually) that will need a detailed email answer. I still believe that a “reply to public” is possible, with a blog post. Similarly, while teaching, the same question is asked twice a year. The first year, I can write an answer in a blog post, and then I can integrate it into my notes (blog posts can even be more interesting than lecture notes). I cannot believe that blogging is a waste of time, since I see my blog as a long-term memory (I do have an extremely short memory, unfortunately). And just to be naughty, I do see a lot of academics that “do not have time to waste blogging” who can write extremely long, detailed (and most of the time, nicely structured) replies. If I write a detailed reply to a specific question (because I found the question interesting), I find it stupid not to share it. All the more because other people might also be interested in interacting… With blogs, dissemination is immediate, as well as comments and feedback.

A lot of researchers within academia still reject blogs because they’re not serious, and not peer reviewed. Not serious, I can live with. I am a big fan of the Ig Nobel prizes: yes, we can do serious research without being too serious. But rejecting blogs because they are not peer reviewed… you’ve got to be kidding me! Comments are open, and unless you want to sell Louis Vuitton bags or Viagra, I publish all of them. In blogs, comments can be more constructive than the comments you get from referees in a so-called peer-reviewed journal. Blog posts are published on the (open) web, not in some journal so expensive that no one can actually read it. Yes, commenting is not as formal or rigorous as peer review, but having open comments may contribute to establishing the quality and credibility of a blog. Having comments from the community is a great benefit. Further, blogging is interesting since I believe it has improved my teaching, in the sense that writing posts helped me (many times) to clarify my ideas. And I believe that my lectures are then better. So I guess I will keep blogging for a long, long time…

I guess I will stop here, even if I might have tons of other things I would like to add. Based on this post, I now have to think of something interesting to share in the panel, next week. But I also have to keep in mind that "academic blogging can be an important medium, when it avoids the meta-narcissistic onanism of blogging about how important academic blogging is", as claimed by Chris Parr.

1. To mention only some of them, see Edwin Chen’s http://blog.echen.me/, Michael Giberson and Lynne Kiesling’s http://knowledgeproblem.com/, Christopher Long’s http://angrystatistician.blogspot.ca/, Josh Hendrickson’s http://everydayecon.wordpress.com/, Rob Hyndman’s http://robjhyndman.com/hyndsight/, Andrew Gelman’s http://andrewgelman.com/, Christian Robert’s http://xianblog.wordpress.com/, David Stern’s http://stochastictrend.blogspot.ca/, John Cook’s http://www.johndcook.com/blog, Greg Mankiw’s http://gregmankiw.blogspot.com/, Mark Thoma’s http://economistsview.typepad.com/, Tony Cookson’s http://blog.thisyoungeconomist.com/, John Mount and Nina Zumel’s http://www.win-vector.com/blog/, John Myles White’s http://www.johnmyleswhite.com/, Bill McBrid’s http://www.calculatedriskblog.com/, Alex Singleton’s http://www.alex-singleton.com/, Liyun Chen’s http://blog.cloudlychen.net/, Eeshan Malhotra’s http://www.hotdamndata.com/, Percy Beach’s http://scepticalacademic.blogspot.ca/, Miles Kimball’s http://blog.supplysideliberal.com/, Brad DeLong’s http://delong.typepad.com/sdj/, Honglang Wang’s http://honglangwang.wordpress.com/, Matt Asher’s http://www.statisticsblog.com/, Cosma Rohilla Shalizi’s http://vserver1.cscs.lsa.umich.edu/~crshalizi/weblog/, Denis Haine’s http://denishaine.wordpress.com/blog-2/, Dimiter Toshkov’s http://rulesofreason.wordpress.com/, Christopher Gandrud’s http://christophergandrud.blogspot.ca/, Jodi Beggs’s http://www.economistsdoitwithmodels.com/, Eric Nguyen’s http://blog.datapunks.com/, Matt Bogard’s http://econometricsense.blogspot.ca/, Andrew Ziem’s http://heuristically.wordpress.com/, Dave Giles’s http://davegiles.blogspot.ca/, Jeff Ely and Sandeep Baliga’s http://cheaptalk.org/, Tyler Cowen and Alex Tabarrok’s http://marginalrevolution.com/, Kevin Bryan’s http://afinetheorem.wordpress.com/, Eran Raviv’s http://eranraviv.com/category/blog/, Gianluca Baio’s http://gianlubaio.blogspot.ca/, Ulrich Matter’s http://giventhedata.blogspot.ca/, Gregor Gorjanc’s http://ggorjan.blogspot.ca/, Nate Silver’s http://fivethirtyeight.blogs.nytimes.com/, Jesse Anttila-Hughes and Solomon Hsiang’s http://fight-entropy.com/, Francis Smart’s http://econometricsbysimulation.com/, Corey Chivers’s http://bayesianbiologist.com/, Steve Walker’s http://stevencarlislewalker.wordpress.com/,  Sebastien Bubeck’s https://blogs.princeton.edu/imabandit/, James Hamilton and Menzie Chinn’s http://www.econbrowser.com/

2. I won’t have time to discuss this point today, but I have been discussing with a lot of people who truly believe that they know me because they frequently read my blog. And, to be honest, that might be true, and it is a strange feeling. I mean, I remember some dinners where people told me that they had been on my blog, and they remembered what I posted, and I am like “great, but I don’t know you at all… I have never read any of your research papers, I don’t know what you might be working on…” This asymmetry puts me in some awkward situations.

Please, never use my codes without checking twice (at least)!

I wanted to get back to an interesting experience, following a discussion I had with Carlos after my class, this morning. Let me simplify the problem, and also change the dataset. Consider the following dataset

> db = read.table("http://freakonometrics.free.fr/db2.txt",header=TRUE,sep=";")

Let me also change one little thing (in the course, we use the age of people as explanatory variables, so let us consider rounded figures too),

> db$X1=round(db$X1*10)
> db$X2=round(db$X2*10)

Assume that you want to work with factors, because you don’t see why there should be some linear model (and you did not look at the awesome posts on smoothing techniques)

> db$X1F=cut(db$X1,c(-12,45,75,120))
> db$X2F=cut(db$X2,c(100,200,300))

Then you run your regression,

> reg = glm(Y~X1F+X2F+X3,family=binomial,data=db)

So far, nothing wrong, you can try, no error, no warning. Then Carlos wanted to use a ROC curve to see how the model was performing… so he did use some code I uploaded on the blog, something like

> reg = glm(Y~X1F+X2F+X3,family=binomial,data=db)
> S = predict(reg,type="response")
> Y = db$Y
> plot(0:1,0:1,xlab="False Positive Rate",ylab="True Positive Rate",cex=.5)
> for(s in seq(0,1,by=.01)){
+   Ps=(S>s)*1
+   FP=sum((Ps==1)*(Y==0))/sum(Y==0)
+   TP=sum((Ps==1)*(Y==1))/sum(Y==1)
+   points(FP,TP,cex=.5,col="red")
+ }

To make it nicer, let us use the following code

> ROCcurve=function(s){
+ Ps=(S>s)*1
+ FP=sum((Ps==1)*(Y==0))/sum(Y==0)
+ TP=sum((Ps==1)*(Y==1))/sum(Y==1)
+ return(c(FP,TP))}
> u=seq(0,1,by=.001)
> vectROC=Vectorize(ROCcurve)(u)
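To actually draw the curve from vectROC, a couple of extra lines are enough (the plotting commands below are mine, not part of the original snippet),

> plot(vectROC[1,],vectROC[2,],type="l",xlab="False Positive Rate",ylab="True Positive Rate")
> abline(a=0,b=1,lty=2)   # the diagonal, i.e. what a random classifier would give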

Here, I should mention that I got warnings, but to be honest, when you see the graph, you are so puzzled that you forget about them…

There were 50 or more warnings (use warnings() to see the first 50)

Carlos ran the code, showed me the graph, and asked me “can my model be that bad?”,

My first answer is “no, your model cannot be that bad! you use (almost) all your explanatory variables, you cannot have such a bad model…“. So, where does the problem come from? My first guess was that this is what you get when, somewhere, one of the two vectors has been sorted (or shuffled). For instance, if you use the code above on

> reg2 = glm(Y~X1+X2+X3,family=binomial,data=db)
> S = sort(predict(reg2,type="response"))
> Y = db$Y

Here, the predictions are sorted (while the observations are not), and we get

This confirms the idea that with a random classifier, the ROC curve lies on the diagonal. But here, when you look at the code, you do not see any sorting operation.

So, again, what went wrong? The problem is actually very simple. The dataset looks like

> head(db)
  Y X1  X2 X3      X1F       X2F
1 1 33 163  B (-12,45] (100,200]
2 1 64 185  D  (45,75] (100,200]
3 1 53 166  B  (45,75] (100,200]
4 1 55 197  C  (45,75] (100,200]
5 1 41 184  C (-12,45] (100,200]
6 1 78 196  C (75,120] (100,200]

If we look at the factor variable, the lower bound is not included in the first interval. More precisely,

> levels(db$X1F)
[1] "(-12,45]" "(45,75]"  "(75,120]"

and if we look more closely at the range of the first variable, we get

> range(db$X1)
[1] -12 120

Wait… the minimum is -12, but it does not appear in the factor (and we do not get any warning)? Yes,

> db[which.min(db$X1)+(-2):2,]
    Y  X1  X2 X3      X1F       X2F
429 1  76 227  A (75,120] (200,300]
430 0  35 186  D (-12,45] (100,200]
431 0 -12 109  E     <NA> (100,200]
432 1  76 225  B (75,120] (200,300]
433 1  61 206  A  (45,75] (200,300]
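A quick way to spot this kind of problem (a suggestion of mine, not something we ran this morning) is to tabulate the factor while keeping track of missing values, and to count the NA’s explicitly,

> table(db$X1F,useNA="ifany")
> sum(is.na(db$X1F))

Any nonzero count of <NA> means that cut() silently mapped some observations outside the breaks.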

Now we start to see more clearly what went wrong… there is a missing value in the factor variable. And in the vector of predictions, there is no missing value: the observation with the missing factor level has simply been dropped.

> length(predict(reg))
[1] 999
> nrow(db)
[1] 1000

So when we compare the predictions and the observed values… there is a problem, since the two vectors do not have the same length. Actually, it was mentioned in the output of the regression (but we did not look at it)

> summary(reg)

Call:
glm(formula = Y ~ X1F + X2F + X3, family = binomial, data = db)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-2.9432   0.1627   0.2900   0.4823   1.1377  

Coefficients:
             Estimate Std. Error z value Pr(>|z|)    
(Intercept)   0.22696    0.23206   0.978 0.328067    
X1F(45,75]    1.86569    0.23397   7.974 1.53e-15 ***
X1F(75,120]   2.97463    0.65071   4.571 4.85e-06 ***
X2F(200,300]  1.11643    0.32695   3.415 0.000639 ***
X3B          -0.06131    0.31076  -0.197 0.843609    
X3C           0.75013    0.35825   2.094 0.036268 *  
X3D           0.13846    0.31399   0.441 0.659226    
X3E          -0.13277    0.31859  -0.417 0.676853    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 802.34  on 998  degrees of freedom
Residual deviance: 607.61  on 991  degrees of freedom
  (1 observation deleted due to missingness)
AIC: 623.61

Number of Fisher Scoring iterations: 7

Yes, there is a tiny little sentence,

  (1 observation deleted due to missingness)

So that was it? Yes! If you drop that missing value, you get something more realistic,

> S= predict(reg,type="response")
> Y=db$Y[-which.min(db$X1)]

or (probably better) change the left endpoint of the first interval,

> db$X1F=cut(db$X1,c(-13,45,75,120))
> reg = glm(Y~X1F+X2F+X3,family=binomial,data=db)
> S= predict(reg,type="response")
> Y= db$Y

(the output would have been almost the same).
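Yet another option (again, a suggestion of mine, not what we did this morning) is to ask glm to keep the missing value in the predictions, using na.exclude, so that the two vectors stay aligned,

> reg = glm(Y~X1F+X2F+X3,family=binomial,data=db,na.action=na.exclude)
> S = predict(reg,type="response")   # padded with an NA where X1F is missing

The NA then has to be handled explicitly when computing the rates (with na.rm=TRUE, say), but at least S and db$Y have the same length.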

Observe that if we had used a dedicated package, we would not have encountered this problem. For instance (starting with the initial values of the vectors)

> library(ROCR)
> prediction(S,Y)
Error in prediction(S, Y) : 
  Number of predictions in each run must be equal to the number of labels for each run.

We do have the answer here: both vectors do not have the same length…
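And for the record, once the two vectors do have the same length, the standard ROCR workflow is something like

> pred = prediction(S,Y)
> perf = performance(pred,"tpr","fpr")
> plot(perf)
> abline(a=0,b=1,lty=2)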

So, what is my point? R is great, because of all the packages, but as a teacher, I do not feel comfortable asking my students to use those functions as black boxes. So I try to write my own code, to get the same output. So yes, I write code to explain what the black box is doing, to simplify the algorithm, and to show what’s going on. When working on a (forthcoming) book as the editor, we had a discussion with Rob Hyndman about that issue: I wanted the contributors to explain the core of the code, with a simplified algorithm, whenever a dedicated package was used. I do truly believe that simplified code can help to understand things better. Until you start to have problems. Because I write code to deal with one specific problem, there is no check for possible errors. And once the code is understood, please, please do not use it! Use R functions that can handle errors…

Somewhere else, part 80

A series of writings worth reading

and, as always, a few articles and posts in French,

Did I miss something?

Some heuristics about spline smoothing

Let us continue our discussion on smoothing techniques in regression. Assume that $y_i = m(x_i) + \varepsilon_i$, where $m(\cdot)$ is some unknown function, assumed to be sufficiently smooth. For instance, assume that $m$ is continuous, that $m'$ exists and is continuous, that $m''$ exists and is also continuous, etc. If $m$ is smooth enough, Taylor’s expansion can be used: for $x\in(\alpha,\beta)$,

$$m(x) = m(\alpha) + \frac{m'(\alpha)}{1!}(x-\alpha) + \cdots + \frac{m^{(d)}(\alpha)}{d!}(x-\alpha)^d + \int_\alpha^x \frac{(x-t)^d}{d!}\,m^{(d+1)}(t)\,dt$$

which can also be written as

$$m(x) = \sum_{k=0}^d a_k x^k + \int_\alpha^x \frac{(x-t)^d}{d!}\,m^{(d+1)}(t)\,dt$$

for some $a_k$’s. The first part is simply a polynomial.

The second part is some integral. Using a Riemann sum, observe that it can be approximated by

$$\int_\alpha^x \frac{(x-t)^d}{d!}\,m^{(d+1)}(t)\,dt \approx \sum_{i} b_i\,(x-x_i)_+^d$$

for some $b_i$’s and some knots $x_i$, where $(u)_+$ denotes $\max\{u,0\}$. Thus,

$$m(x) \approx \sum_{k=0}^d a_k x^k + \sum_{i} b_i\,(x-x_i)_+^d$$

Nice! We have our linear regression model. A natural idea is then to consider a regression of $Y$ on $\boldsymbol{X}$ where

$$\boldsymbol{X} = (1,X,X^2,\cdots,X^d,(X-x_1)_+^d,\cdots,(X-x_k)_+^d)$$

given some knots $\{x_1,\cdots,x_k\}$. To make things easier to understand, let us work with our previous dataset,

plot(db)

http://freakonometrics.hypotheses.org/files/2013/10/Selection_146.png

If we consider one knot, and an expansion of order 1,

attach(db)
library(splines)
B=bs(xr,knots=c(3),Boundary.knots=c(0,10),degree=1)
reg=lm(yr~B)
lines(xr[xr<=3],predict(reg)[xr<=3],col="red")
lines(xr[xr>=3],predict(reg)[xr>=3],col="blue")

The prediction obtained with this spline can be compared with the regressions on the two subsets (the dotted lines)

reg=lm(yr~xr,subset=xr<=3)
lines(xr[xr<=3],predict(reg)[xr<=3],col="red",lty=2)
reg=lm(yr~xr,subset=xr>=3)
lines(xr[xr>=3],predict(reg),col="blue",lty=2)

http://freakonometrics.hypotheses.org/files/2013/10/Selection_160.png

It is different, since we have here three parameters (and not four, as for the regressions on the two subsets). One degree of freedom is lost, when asking for a continuous model. Observe that it is possible to write, equivalently

reg=lm(yr~bs(xr,knots=c(3),Boundary.knots=c(0,10),degree=1),data=db)
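And just to connect this with the truncated power functions above, we can build the basis by hand (a small check of mine, with the knot at 3 used previously): the columns differ from those returned by bs(), but they span the same space, so the fitted values coincide,

Bmanual=cbind(xr,pmax(xr-3,0))   # x and (x-3)_+ ; the intercept is added by lm
reg2=lm(yr~Bmanual)
lines(xr,predict(reg2),col="purple",lty=3)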

So, what happened here?

B=bs(xr,knots=c(2,5),Boundary.knots=c(0,10),degree=1)
matplot(xr,B,type="l")
abline(v=c(0,2,5,10),lty=2)

Here, the functions that appear in the regression are the following

http://freakonometrics.hypotheses.org/files/2013/10/Selection_161.png

Now, if we run the regression on those components, we get the prediction shown below.

If we add one knot, we get

http://freakonometrics.hypotheses.org/files/2013/10/Selection_162.png

the prediction is

reg=lm(yr~B)
lines(xr,predict(reg),col="red")

http://freakonometrics.hypotheses.org/files/2013/10/Selection_147.png

Of course, we can choose many more knots,

B=bs(xr,knots=1:9,Boundary.knots=c(0,10),degree=1)
reg=lm(yr~B)
lines(xr,predict(reg),col="red")

http://freakonometrics.hypotheses.org/files/2013/10/Selection_148.png

We can even get a confidence interval

reg=lm(yr~B)
P=predict(reg,interval="confidence")
plot(db,col="white")
polygon(c(xr,rev(xr)),c(P[,2],rev(P[,3])),col="light blue",border=NA)
points(db)
lines(xr,P[,1],col="red")
abline(v=c(0,2,5,10),lty=2)

http://freakonometrics.hypotheses.org/files/2013/10/Selection_149.png

And if we keep the two knots we chose previously, but consider Taylor’s expansion of order 2, we get

B=bs(xr,knots=c(2,5),Boundary.knots=c(0,10),degree=2)
matplot(xr,B,type="l")
abline(v=c(0,2,5,10),lty=2)

http://freakonometrics.hypotheses.org/files/2013/10/Selection_163.png

So, what’s going on? If we consider the constant and the first component of the spline basis matrix, we get

reg=lm(yr~B)   # refit the regression on the quadratic spline basis
k=2
plot(db)
B=cbind(1,B)
lines(xr,B[,1:k]%*%coefficients(reg)[1:k],col=k-1,lty=k-1)

http://freakonometrics.hypotheses.org/files/2013/10/Selection_164.png

If we add the constant term, the first term and the second term, we get the part on the left, before the first knot,

k=3
lines(xr,B[,1:k]%*%coefficients(reg)[1:k],col=k-1,lty=k-1)

http://freakonometrics.hypotheses.org/files/2013/10/Selection_165.png

and with three terms from the spline basis matrix, we can get the part between the two knots,

k=4
lines(xr,B[,1:k]%*%coefficients(reg)[1:k],col=k-1,lty=k-1)

http://freakonometrics.hypotheses.org/files/2013/10/Selection_166.png

and finally, when we sum all the terms, we get the part on the right, after the last knot,

k=5
lines(xr,B[,1:k]%*%coefficients(reg)[1:k],col=k-1,lty=k-1)

http://freakonometrics.hypotheses.org/files/2013/10/Selection_167.png

This is what we get using a quadratic spline regression, with two (fixed) knots. And we can even get confidence intervals, as before,

reg=lm(yr~B)
P=predict(reg,interval="confidence")
plot(db,col="white")
polygon(c(xr,rev(xr)),c(P[,2],rev(P[,3])),col="light blue",border=NA)
points(db)
lines(xr,P[,1],col="red")
abline(v=c(0,2,5,10),lty=2)

http://freakonometrics.hypotheses.org/files/2013/10/Selection_168.png

The great idea here is to use the functions $(x-x_i)_+$, which ensure continuity at the knot $x_i$.
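To see why continuity is preserved, observe that each truncated power function is itself continuous at its knot (even if, for degree 1, its derivative is not),

$$\lim_{x\uparrow x_i}(x-x_i)_+^d = 0 = \lim_{x\downarrow x_i}(x-x_i)_+^d$$

so any linear combination $\sum_k a_k x^k+\sum_i b_i(x-x_i)_+^d$ is continuous; more generally, with degree $d$, the first $d-1$ derivatives are continuous at the knots.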

Of course, we can use those splines on our Dexter application,

http://freakonometrics.hypotheses.org/files/2013/10/Selection_170.png

Here again, using linear spline functions, it is possible to impose a continuity constraint,

plot(data$no,data$mu,ylim=c(6,10))
abline(v=12*(0:8)+.5,lty=2)
reg=lm(mu~bs(no,knots=c(12*(1:7)+.5),Boundary.knots=c(0,97),
degree=1),data=db)
lines(c(1:94,96),predict(reg),col="red")

http://freakonometrics.hypotheses.org/files/2013/10/Selection_172.png

But we can also consider some quadratic splines,

plot(data$no,data$mu,ylim=c(6,10))
abline(v=12*(0:8)+.5,lty=2)
reg=lm(mu~bs(no,knots=c(12*(1:7)+.5),Boundary.knots=c(0,97),
degree=2),data=db)
lines(c(1:94,96),predict(reg),col="red")

http://freakonometrics.hypotheses.org/files/2013/10/Selection_171.png

Some heuristics about local regression and kernel smoothing

In a standard linear model, we assume that $y_i = \beta_0 + \beta_1 x_i + \varepsilon_i$. Alternatives can be considered when the linear assumption is too strong.

  • Polynomial regression

A natural extension might be to assume some polynomial function, say

$$y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \cdots + \beta_d x_i^d + \varepsilon_i$$

Again, in the standard linear model approach (with a conditionally normal distribution, using the GLM terminology), parameters can be obtained using least squares, where a regression of $Y$ on $\boldsymbol{X}=(1,X,X^2,\cdots,X^d)$ is considered.

Even if this polynomial model is not the true one, it might still be a good approximation for $\mathbb{E}(Y\vert X=x)$. Actually, from the Stone-Weierstrass theorem, if $m(\cdot)$ is continuous on some interval, then there is a uniform approximation of $m$ by polynomial functions.

Just to illustrate, consider the following (simulated) dataset

set.seed(1)
n=10
xr = seq(0,n,by=.1)
yr = sin(xr/2)+rnorm(length(xr))/2
db = data.frame(x=xr,y=yr)
plot(db)

with the standard regression line

reg = lm(y ~ x,data=db)
abline(reg,col="red")

Consider some polynomial regression. If the degree of the polynomial function is large enough, any kind of pattern can be obtained,

reg=lm(y~poly(x,5),data=db)

But if the degree is too large, then too many ‘oscillations’ are obtained,

reg=lm(y~poly(x,25),data=db)

and the estimation might be seen as no longer robust: if we change one point, there might be important (local) changes

plot(db)
attach(db)
lines(xr,predict(reg),col="red",lty=2)
yrm=yr;yrm[31]=yr[31]-2 
regm=lm(yrm~poly(xr,25)) 
lines(xr,predict(regm),col="red")

  • Local regression

Actually, if our interest is to have locally a good approximation of $m(x)$, why not use a local regression?

This can be done easily using a weighted regression where, in the least squares formulation, we consider

$$\widehat{\boldsymbol{\beta}}(x)=\text{argmin}\left\{\sum_{i=1}^n \omega_i(x)\big[y_i-\beta_0-\beta_1x_i\big]^2\right\}$$

(it is possible to consider weights in the GLM framework, but let’s keep that for another post). Two comments here:

  • here I consider a linear model, but any polynomial model can be considered, even a constant one. In that case, the optimization problem becomes

$$\min\left\{\sum_{i=1}^n \omega_i(x)\big[y_i-\beta_0\big]^2\right\}$$

which can be solved explicitly, since

$$\widehat{\beta}_0(x)=\frac{\sum_{i=1}^n \omega_i(x)\,y_i}{\sum_{i=1}^n \omega_i(x)}$$

  • so far, nothing was mentioned about the weights. The idea is simple: if we want a good prediction at point $x$, then the weight $\omega_i(x)$ should depend on the distance between $x$ and $x_i$; if $x_i$ is too far from $x$, then it should not have too much influence on the prediction.

For instance, if we want a prediction at some point $x_0$, consider $\omega_i(x_0)=\mathbf{1}(\vert x_i-x_0\vert<1)$. With those weights, we simply remove observations that are too far away,

Actually, here, it is the same as

reg=lm(yr~xr,subset=which(abs(xr-x0)<1))
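Equivalently (my rewriting, just to prepare the transition to kernels below), this is a weighted least squares regression with 0/1 weights, since observations with a zero weight do not contribute to the fit,

w=as.numeric(abs(xr-x0)<1)   # for some given point x0
reg=lm(yr~xr,weights=w)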

A more general idea is to consider some kernel function $k(\cdot)$ that gives the shape of the weight function, and some bandwidth (usually denoted $h$) that gives the length of the neighborhood, so that

$$\omega_i(x_0)=k\!\left(\frac{x_i-x_0}{h}\right)\qquad\text{and}\qquad \widehat{m}(x_0)=\frac{\sum_{i=1}^n k\!\left(\frac{x_i-x_0}{h}\right)y_i}{\sum_{i=1}^n k\!\left(\frac{x_i-x_0}{h}\right)}$$

This is actually the so-called Nadaraya-Watson estimator of function $m(\cdot)$.
In the previous case, we did consider a uniform kernel $k(u)=\mathbf{1}(\vert u\vert\leq 1)$, with bandwidth $h=1$.

But using a weight function with such a strong discontinuity may not be the best idea… Why not a Gaussian kernel,

$$k(u)=\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{u^2}{2}\right)$$

This can be done using

fitloc0 = function(x0){
w=dnorm((xr-x0))
reg=lm(y~1,data=db,weights=w)
return(predict(reg,newdata=data.frame(x=x0)))}

On our dataset, we can plot

ul=seq(0,10,by=.01)
vl0=Vectorize(fitloc0)(ul)
u0=seq(-2,7,by=.01)
linearlocalconst=function(x0){
w=dnorm((xr-x0))
plot(db,cex=abs(w)*4)
lines(ul,vl0,col="red")
axis(3)
axis(2)
reg=lm(y~1,data=db,weights=w)
u=seq(0,10,by=.02)
v=predict(reg,newdata=data.frame(x=u))
lines(u,v,col="red",lwd=2)
abline(v=c(0,x0,10),lty=2)
}
linearlocalconst(2)

Here, we want a local regression at point 2. The horizontal line below is the (locally constant) regression, and the size of each point is proportional to its weight. The curve, in red, is the evolution of the local regression

Let us use an animation to visualize the construction of the curve. One can use

library(animate)

but for some reason, I cannot install the package easily on Linux. And it is not a big deal. We can still use a loop to generate some graphs

vx0=seq(1,9,by=.1)
vx0=c(vx0,rev(vx0))
graphloc=function(i){
name=paste("local-reg-",100+i,".png",sep="")
png(name,600,400)
linearlocalconst(vx0[i])
dev.off()}

for(i in 1:length(vx0)) graphloc(i)

and then, in a terminal, I simply use

    convert -delay 25 /home/freak/local-reg-1*.png /home/freak/local-reg.gif

Of course, it is possible to consider a linear model, locally,

fitloc1 = function(x0){
w=dnorm((xr-x0))
reg=lm(y~poly(x,degree=1),data=db,weights=w)
return(predict(reg,newdata=data.frame(x=x0)))}

or even a quadratic (local) regression,

fitloc2 = function(x0){
w=dnorm((xr-x0))
reg=lm(y~poly(x,degree=2),data=db,weights=w)
return(predict(reg,newdata=data.frame(x=x0)))}
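To visualise those local fits, we can evaluate them on the grid used previously (ul and vl0 were defined above; the plotting lines here are mine),

vl1=Vectorize(fitloc1)(ul)
vl2=Vectorize(fitloc2)(ul)
plot(db)
lines(ul,vl0,col="red")      # local constant
lines(ul,vl1,col="blue")     # local linear
lines(ul,vl2,col="purple")   # local quadratic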

Of course, we can change the bandwidth
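For instance, the bandwidth can be made explicit in the weights (the name h below is mine; in the functions above, h was implicitly equal to 1, the standard deviation used by dnorm),

fitloc2h=function(x0,h){
w=dnorm((xr-x0)/h)   # Gaussian kernel with bandwidth h
reg=lm(y~poly(x,degree=2),data=db,weights=w)
return(predict(reg,newdata=data.frame(x=x0)))}
lines(ul,Vectorize(fitloc2h)(ul,h=.5),col="orange")     # small bandwidth, wiggly fit
lines(ul,Vectorize(fitloc2h)(ul,h=2),col="darkgreen")   # large bandwidth, smooth fit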

To conclude the technical part of this post, observe that, in practice, we have to choose the shape of the weight function (the so-called kernel), and there are (simple) techniques to select an “optimal” bandwidth $h$. The idea of cross validation is to consider

$$\sum_{i=1}^n \big[y_i - \widehat{m}_{h}^{(-i)}(x_i)\big]^2$$

where $\widehat{m}_{h}^{(-i)}(x_i)$ is the prediction at $x_i$ obtained with a local regression technique, with bandwidth $h$, estimated on the sample where the $i$th observation was removed; the optimal bandwidth is the one minimizing this quantity. But again, that is not the main point of this post, so let’s keep that for another one…
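Still, just to fix ideas, here is a very rough (and inefficient) leave-one-out sketch, for a local linear fit; this is my own illustration, not a proper bandwidth-selection routine,

cvscore=function(h){
err=rep(NA,nrow(db))
for(i in 1:nrow(db)){
w=dnorm((db$x[-i]-db$x[i])/h)
reg=lm(y~x,data=db[-i,],weights=w)
err[i]=db$y[i]-predict(reg,newdata=db[i,])}
return(sum(err^2))}
vh=seq(.2,3,by=.1)
vh[which.min(Vectorize(cvscore)(vh))]   # bandwidth minimising the criterion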

Perhaps we can try on some real data? Inspired by a great post by François Briatte, http://f.briatte.org/teaching/ida/092_smoothing.html, consider the Global Episode Opinion Survey on some TV show, http://geos.tv/index.php/index?sid=189 , like Dexter.

library(XML)
library(downloader)
file = "geos-tww.csv"
html = htmlParse("http://www.geos.tv/index.php/list?sid=189&collection=all")
html = xpathApply(html, "//table[@id='collectionTable']")[[1]]
data = readHTMLTable(html)
data = data[,-3]
names(data)=c("no",names(data)[-1])
data=data[-(61:64),]

Let us reshape the dataset,

data$no = 1:96
data$mu = as.numeric(substr(as.character(data$Mean), 0, 4))
data$se =  sd(data$mu,na.rm=TRUE)/sqrt(as.numeric(as.character(data$Count)))
data$season = 1 + (data$no - 1)%/%12
data$season = factor(data$season)
plot(data$no,data$mu,ylim=c(6,10))
segments(data$no,data$mu-1.96*data$se,
data$no,data$mu+1.96*data$se,col="light blue")

As done by François, we compute some kind of standard error, just to reflect uncertainty. But we won’t really use it.

plot(data$no,data$mu,ylim=c(6,10))
abline(v=12*(0:8)+.5,lty=2)
for(s in 1:8){reg=lm(mu~no,data=data,subset=season==s)
lines((s-1)*12+1:12,predict(reg)[1:12],col="red") }

Here, we assume that all seasons should be considered as completely independent… which might not be a great assumption.

db = data
NW = ksmooth(db$no,db$mu,kernel = "normal",bandwidth=5)
plot(data$no,data$mu)
lines(NW,col="red")

We can try to look at the curve with a larger bandwidth. The problem is that there is a missing value, at the end. If we (arbitrarily) fill it in, we can run a kernel regression,

db$mu[95]=7
NW = ksmooth(db$no,db$mu,kernel = "normal",bandwidth=12) 
plot(data$no,data$mu,ylim=c(6,10)) 
lines(NW,col="red")