# Some sort of Otto Neurath (isotype picture) map

Yesterday evening, I was walking in Budapest, and I saw a nice map, in some sort of Otto Neurath (isotype) style. It was hand-made, but I thought it should be possible to produce it in R, automatically.

A few years ago, Baptiste Coulmont published a nice blog post on the osmar package, which can be used to import OpenStreetMap objects (polygons, lines, etc.) into R. We can start from there. More precisely, consider the city of Douai, in France.

The code to read information from OpenStreetMap is the following

```r
library(osmar)
src <- osmsource_api()
bb <- center_bbox(3.07758808135, 50.37404355, 1000, 1000)
ua <- get_osm(bb, source = src)
```

We can extract a lot of things, like buildings, parks, churches, roads, etc. Tags can be matched either on their key or on their value, so we will use two functions

```r
listek = function(vc, type="polygons"){   # match on tag keys
  nat_ids <- find(ua, way(tags(k %in% vc)))
  nat_ids <- find_down(ua, way(nat_ids))
  nat <- subset(ua, ids = nat_ids)
  nat_poly <- as_sp(nat, type)
}

listev = function(vc, type="polygons"){   # match on tag values
  nat_ids <- find(ua, way(tags(v %in% vc)))
  nat_ids <- find_down(ua, way(nat_ids))
  nat <- subset(ua, ids = nat_ids)
  nat_poly <- as_sp(nat, type)
}
```

For instance, to get rivers, use

```r
W = listek(c("waterway"))
```

and to get buildings

```r
M = listek(c("building"))
```

We can also get churches

```r
C = listev(c("church","chapel"))
```

but also train stations, airports, universities, hospitals, etc. It is also possible to get streets, or roads

```r
H1 = listek(c("highway"), "lines")
H2 = listev(c("residential","pedestrian","secondary","tertiary"), "lines")
```

but those will be more difficult to use afterwards, so let's forget about them.
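The other layers used in the plotting and labelling code below (parks P, universities U, hospitals B, train stations T) can be built with the same two helpers. The exact queries were not shown in the original code, so this is only a sketch, assuming OpenStreetMap's usual keys and values:

```r
# hypothetical tag choices -- adjust to whatever is actually mapped in the area
P = listek(c("leisure"))      # green areas (key "leisure": parks, gardens, etc.)
U = listev(c("university"))   # amenity=university
B = listev(c("hospital"))     # amenity=hospital
T = listev(c("station"))      # railway=station (note: T masks TRUE here)
```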

We can check that we have everything we need

```r
plot(M)
plot(W, add=TRUE, col="blue")
plot(P, add=TRUE, col="green")
if(!is.null(B)) plot(B, add=TRUE, col="red")
if(!is.null(C)) plot(C, add=TRUE, col="purple")
if(!is.null(T)) plot(T, add=TRUE, col="red")
```

Now, let us consider a rectangular grid. If there is a river in a cell, I want a river icon. If there is a church, I want a church icon, etc. Since there will be one (and only one) picture per cell, we need priorities. But first, we have to compute the intersections between our grid cells and the OpenStreetMap polygons.

```r
library(sp)
library(raster)
library(rgdal)
library(rgeos)
library(maptools)

identification = function(xy, h, PLG){
  # build the square cell centered at xy, with half-width h
  b = data.frame(x = rep(c(xy[1]-h, xy[1]+h), each=2),
                 y = c(xy[2]-h, xy[2]+h, xy[2]+h, xy[2]-h))
  pb1 = Polygon(b)
  Pb1 = list(Polygons(list(pb1), ID=1))
  SPb1 = SpatialPolygons(Pb1, proj4string = CRS("+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs +towgs84=0,0,0"))
  UC = gUnionCascaded(PLG)
  return(gIntersection(SPb1, UC))
}
```

and then we identify the content of each cell, as follows

```r
whichidtf = function(xy, h){
  h = .7*h
  label = "EMPTY"
  # order matters: later tests have higher priority
  if(!is.null(identification(xy, h, M))) label = "HOUSE"
  if(!is.null(identification(xy, h, P))) label = "PARK"
  if(!is.null(identification(xy, h, W))) label = "WATER"
  if(!is.null(identification(xy, h, U))) label = "UNIVERSITY"
  if(!is.null(identification(xy, h, C))) label = "CHURCH"
  return(label)
}
```

Let us use colored rectangles to make sure it works.
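One ingredient is missing from the snippets: the grid breakpoints vx and vy (and the half-width h) are never constructed. A minimal sketch, assuming we simply cut the bounding box of the buildings layer into an arbitrary number of cells:

```r
# hypothetical grid construction (not shown in the original post)
bbM = sp::bbox(M)                                 # bounding box of the buildings
n   = 30                                          # cells per side, arbitrary
vx  = seq(bbM[1, 1], bbM[1, 2], length = n + 1)   # longitude breakpoints
vy  = seq(bbM[2, 1], bbM[2, 2], length = n + 1)   # latitude breakpoints
h   = diff(vx)[1] / 2                             # half-width of a cell
```

With those breakpoints, we can now loop over the cells: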

```r
nx = length(vx)
vx = as.numeric((vx[2:nx] + vx[1:(nx-1)])/2)   # cell midpoints
ny = length(vy)
vy = as.numeric((vy[2:ny] + vy[1:(ny-1)])/2)
plot(M, border="white")
for(i in 1:(nx-1)){
  for(j in 1:(ny-1)){
    lb = whichidtf(c(vx[i], vy[j]), h)
    if(lb=="HOUSE")  rect(vx[i]-h, vy[j]-h, vx[i]+h, vy[j]+h, col="grey")
    if(lb=="PARK")   rect(vx[i]-h, vy[j]-h, vx[i]+h, vy[j]+h, col="green")
    if(lb=="WATER")  rect(vx[i]-h, vy[j]-h, vx[i]+h, vy[j]+h, col="blue")
    if(lb=="CHURCH") rect(vx[i]-h, vy[j]-h, vx[i]+h, vy[j]+h, col="purple")
  }}
```

As a first draft, let us agree that it works. For the pictograms, I borrowed icons from https://fontawesome.com/. For instance, we can have a tree

```r
library(png)
library(grid)
download.file("http://freakonometrics.hypotheses.org/files/2018/05/tree.png", "tree.png")
tree <- readPNG("tree.png")
```

Unfortunately, the color is not right (it is black), but that is easy to fix using the RGBA decomposition returned by that package

```r
rev_tree = tree
rev_tree[,,2] = tree[,,4]   # copy the alpha channel into the green channel
```

We can do the same for houses, churches and water actually

 download.file("http://freakonometrics.hypotheses.org/files/2018/05/angle-double-up.png","angle-double-up.png") download.file("http://freakonometrics.hypotheses.org/files/2018/05/home.png","home.png") download.file("http://freakonometrics.hypotheses.org/files/2018/05/church.png","curch.png") water &lt;- readPNG("angle-double-up.png") rev_water=water rev_water[,,3]=water[,,4] home &lt;- readPNG("home.png") rev_home=home rev_home[,,4]=home[,,4]*.5 church &lt;- readPNG("church.png") rev_church=church rev_church[,,1]=church[,,4]*.5 rev_church[,,3]=church[,,4]*.5

and that's almost it. We can then add the icons to the map

```r
plot(M, border="white")
for(i in 1:(nx-1)){
  for(j in 1:(ny-1)){
    lb = whichidtf(c(vx[i], vy[j]), h)
    if(lb=="HOUSE")  rasterImage(rev_home,   vx[i]-h*.8, vy[j]-h*.8, vx[i]+h*.8, vy[j]+h*.8)
    if(lb=="PARK")   rasterImage(rev_tree,   vx[i]-h*.9, vy[j]-h*.8, vx[i]+h*.9, vy[j]+h*.8)
    if(lb=="WATER")  rasterImage(rev_water,  vx[i]-h*.8, vy[j]-h*.8, vx[i]+h*.8, vy[j]+h*.8)
    if(lb=="CHURCH") rasterImage(rev_church, vx[i]-h*.8, vy[j]-h*.8, vx[i]+h*.8, vy[j]+h*.8)
  }}
```

Nice, isn’t it? (At least as a first draft, done during the lunch break of the R conference in Budapest, today.)

# European R Users Meeting

Wednesday, I will give a talk at the European R Users Meeting about our recent work (with Ewen Gallic) on the use of collaborative data in demography. The slides (actually a longer version of them) are now online (including a 16:9 version that should fit the screen better).

This Tuesday, I will be giving the second part of the (crash) graduate course on advanced tools for econometrics. It will take place in Rennes, in the IMAPP room, and I have been told that there will be a video link with Nantes and Angers. Slides for the morning are online, as well as slides for the afternoon.

In the morning, we will talk about variable selection and penalization; in the afternoon, we will discuss changing the loss function (quantile regression).

# When “learning Python” becomes “practicing R” (spoiler)

Fifteen years ago, a student of mine told me that I should start learning Python, that it was really a great language. Students started to learn it, but I kept postponing. A few years ago, I also started Python for Kids with my son, which is really nice actually. That was fun, but not really challenging. A few weeks ago, I also started a crash course in Python, taught by Pierre. The truth is, I think I will probably give up. I keep telling myself that (1) I can do almost anything much faster in R, and (2) Python is not intuitive, especially when you have been practicing R for almost 20 years… Last week, I also had to link Python and R for our pricing game: Ali wrote some template code in Python, and I had to translate it into R. And it was difficult…

Anyway, since it was a school break this week, I suggested to my son that we practice together, on a nice challenge. For those willing to try it, you'd better stop here, because I will spoil it.

# Using convolutions (S3) vs distributions (S4)

Usually, to illustrate the difference between S3 and S4 classes in R, I mention glm (from base R) and vglm (from VGAM), which produce similar outputs, but the former is based on S3 code while the latter is based on S4 code. Another way to illustrate the difference is to manipulate distributions.
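A quick way to see the difference from the console (a small illustration of mine, not from the original post):

```r
# glm() returns an S3 object, vglm() an S4 one
fit3 = glm(dist ~ speed, data = cars)
isS4(fit3)    # FALSE: S3, dispatch is based on the class attribute
library(VGAM)
fit4 = vglm(dist ~ speed, uninormal, data = cars)
isS4(fit4)    # TRUE: S4, a formal class with slots
```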

Consider the case where we want to sum (independent) random variables, for instance two lognormal distributions. Let us try to compute the median of the sum.

The distribution function of the sum of two independent (positive) random variables is $F_{S_2}(x)=\int_0^x F_{X_1}(x-y)\,dF_{X_2}(y)$

```r
pSum2 = function(x) integrate(function(y) plnorm(x-y,1,2)*dlnorm(y,2,1),0,x)$value
```

Let us visualize that cumulative distribution function

```r
vx = seq(0.1,50,by=.1)
vy = Vectorize(pSum2)(vx)
plot(vx,vy,type="l",ylim=c(0,1))
abline(h=.5,lty=2)
```

Let us find an upper bound, to compute quantiles in a decent time

```r
pSum2(350)
[1] 0.99195
```

and then use the uniroot function to invert that function

```r
qSum = function(u) uniroot(function(x) Vectorize(pSum2)(x)-u, interval=c(0,350))$root
vu = seq(.01,.99,by=.01)
vv = Vectorize(qSum)(vu)
```

The median is here

```r
qSum(.5)
[1] 14.155
```

Why not consider the sum of three (independent) random variables? Its cumulative distribution function can be written using our previous function, $F_{S_3}(x)=\int_0^x F_{S_2}(x-y)\,dF_{X_3}(y)$

```r
pSum3 = function(x) integrate(function(y) pSum2(x-y)*dlnorm(y,2,2),0,x)$value
```

If we look at some values, we get

```r
pSum3(4)
[1] 0.015624
pSum3(5)
Error in integrate(function(y) plnorm(x - y, 1, 2) * dlnorm(y, 2, 1), :
  maximum number of subdivisions reached
```

So obviously, there are computational issues here. Let us consider the following alternative expression, $F_{S_3}(x)=\int_0^x F_{X_3}(x-y)\,dF_{S_2}(y)$. Of course, it is necessary here to compute the density of the sum of two variables

```r
dSum2 = function(x) integrate(function(y) dlnorm(x-y,1,2)*dlnorm(y,2,1),0,x)$value
pSum3 = function(x) integrate(function(y) dlnorm(x-y,2,2)*dSum2(y),0,x)$value
```

Again, let us compute some values

```r
pSum3(4)
[1] 0.0090285
pSum3(5)
[1] 0.01186
```

This one seems to work quite well. But it is just an illusion.

```r
pSum3(9)
Error in integrate(function(y) dlnorm(x - y, 1, 2) * dlnorm(y, 2, 1), :
  maximum number of subdivisions reached
```

Clearly, with those S3-type functions, it will be complicated to run computations with 3 variables, or more. Let us consider distributions in the S4-type format of the following package

```r
library(distr)
X1 = Lnorm(mean=1,sd=2)
X2 = Lnorm(mean=2,sd=1)
S2 = X1+X2
```

To compute the median, we simply have to use

```r
distr::q(S2)(.5)
[1] 14.719
```

We can also visualize it easily

```r
plot(q(S2))
```

which looks (very) close to what we got, manually. But here, it is also possible to work with the sum of 3 (independent) random variables

```r
X3 = Lnorm(mean=2,sd=2)
S3 = X1+X2+X3
```

To compute the median, use

```r
distr::q(S3)(.5)
[1] 33.208
```

The function is here

```r
plot(q(S3))
```

# (Advanced) R Crash Course, for Actuaries

The fourth year of the Data Science for Actuaries program started this morning. I will be there for the introduction to R. The slides are available online (created with slidify; the .Rmd file is also available). A (standard) markdown version is also available (as well as its .Rmd file). I have to thank Ewen for his help with slidify (especially for the online quiz, and the integration of leaflet maps and the animated rgl graph…).

# Visualizing effects of a categorical explanatory variable in a regression

Recently, I have been working on two problems that might be related to semiotic issues in predictive modeling (i.e. instead of a standard regression table, how can we plot coefficient values in a regression model?). To be more specific, I have a variable of interest $Y$ that is observed for several individuals $i$, with explanatory variables $\mathbf{x}_i$, year $t$, in a specific region $z_i\in\{A,B,C,D,E\}$. Suppose that we have a simple (standard) linear model (forget about time here)
$$y_i=\beta_0+\beta_1x_{1,i}+\cdots+\beta_kx_{k,i}+\sum_j \alpha_j \mathbf{1}(z_i\in j)+\varepsilon_i$$

Let us forget the temporal effect to focus on the spatial effect today. And consider some simulated dataset. There will be only one (continuous) explanatory variable. And I will generate correlated covariates, just to be more realistic.

```r
n = 1000
library(mnormt)
r = .5
Sigma = matrix(c(1,r,r,1), 2, 2)
set.seed(1)
X = rmnorm(n, c(0,0), Sigma)
X1 = cut(X[,1], c(-100, quantile(X[,1], c(.1,.4,.7,.85)), 100), labels=LETTERS[1:5])
X2 = X[,2]
Y = 5 + X[,1] - X[,2] + rnorm(n)/2
db = data.frame(Y, X1, X2)
```

Here we have
$$y_i=\beta_0+\beta_1x_{1,i}+\sum_{j\in\{A,B,C,D,E\}} \alpha_j \mathbf{1}(z_i\in j)+\varepsilon_i$$
The goal here is to get a graph to visualize the vector $\hat\alpha=(\hat\alpha_A,\cdots,\hat\alpha_E)$.
Let us run the linear regression

```r
reg1 = lm(Y~X1+X2, data=db)
idx = which(substr(names(reg1$coefficients), 1, 2)=="X1")
v1 = reg1$coefficients[idx]
names(v1) = LETTERS[2:5]
barplot(v1, col=rgb(0,0,1,.4))
```

Note that it is possible to add some sort of "confidence interval" to discuss significance (or to avoid spending hours discussing differences in bar heights that are not significantly different)

```r
library(Hmisc)
sv1 = summary(reg1)$coefficients[idx,2]
(bp1 = barplot(v1, ylim=range(c(0, v1+2*sv1))))
errbar(bp1[,1], v1, v1-2*sv1, v1+2*sv1, add=TRUE)
```

My main concern here is the "reference" category that is considered. Should $A$ be the reference? Why not $B$?

```r
db$X1 = relevel(db$X1, "B")
reg1 = lm(Y~X1+X2, data=db)
idx = which(substr(names(reg1$coefficients), 1, 2)=="X1")
v1 = reg1$coefficients[idx]
names(v1) = LETTERS[c(1,3:5)]
library(Hmisc)
sv1 = summary(reg1)$coefficients[idx,2]
(bp1 = barplot(v1))
errbar(bp1[,1], v1, v1-2*sv1, v1+2*sv1, add=TRUE)
```

Why not the smallest one? Why not the largest one? What if there is no simple way to choose? Furthermore, let us get back to the original point, which is that there might be some temporal aspects. More precisely, we can have $\hat\alpha^{(t)}=(\hat\alpha_A^{(t)},\cdots,\hat\alpha_E^{(t)})$. If we also have $\hat\alpha^{(t+1)}$ and we get another plot, how do we interpret it? If for $E$ the bar is taller, it means that, relative to $A$, the difference has increased. I have the feeling that the interpretation is complicated because we do not see, on that graph, changes in $\hat\alpha^{(t)}_A$ itself. Let us try something else. First, let us get back to the original setting

```r
db$X1 = relevel(db$X1, "A")
```

Consider here the regression without the intercept, so that all five coefficients appear

```r
reg1 = lm(Y~0+X1+X2, data=db)
idx = which(substr(names(reg1$coefficients), 1, 2)=="X1")
v1 = reg1$coefficients[idx]
names(v1) = LETTERS[1:5]
barplot(v1)
```

It can be hard to read, especially if $Y$ takes (very) large values, and you think that barplots should start at 0. But still, having those 5 values is nice. Why not rescale that graph? A natural idea may be to consider the case where no spatial component is included, and to look at the differences with that reference.

```r
reg1 = lm(Y~1+X2, data=db)
reg2 = lm(Y~0+X1+X2, data=db)
idx = which(substr(names(reg2$coefficients), 1, 2)=="X1")
v1 = reg2$coefficients[idx]
v2 = v1 - reg1$coefficients["(Intercept)"]
barplot(v2, col=rgb(0,0,1,.4))
sv2 = summary(reg2)$coefficients[idx,2]
(bp2 = barplot(v2, ylim=range(c(v2-2*sv2, v2+2*sv2))))
errbar(bp2[,1], v2, v2-2*sv2, v2+2*sv2, add=TRUE)
```

I like that graph, I should admit it. Now, I still have some remaining questions. For instance, can we ensure that, when only the intercept is considered, the value of $\hat\beta_0$ is somewhere between $\hat\beta_A,\cdots,\hat\beta_E$? Is it possible that $\hat\beta_A-\hat\beta_0,\cdots,\hat\beta_E-\hat\beta_0$ are all positive? In that case, I would find the graph hard to interpret. Actually, if I really want values that can be compared to some average, why not consider a (weighted) average of $\hat\beta_A,\cdots,\hat\beta_E$? (weights being the proportions of each class, i.e. each region)

```r
w = table(db$X1)
v3 = v1 - sum(w*v1)/sum(w)
sv3 = sv2   # approximation: reuse the standard errors from the no-intercept regression
(bp3 = barplot(v3, ylim=range(c(v3-2*sv3, v3+2*sv3))))
errbar(bp3[,1], v3, v3-2*sv3, v3+2*sv3, add=TRUE)
```

I like that one. But what if, instead of normalizing at the end, we normalized the original dependent variable? By "normalize", I mean "rescale", to have a centered variable.

```r
db$Y0 = db$Y - mean(db$Y)
reg3 = lm(Y0~0+X1+X2, data=db)
v3 = reg3$coefficients[idx]
sv3 = summary(reg3)$coefficients[idx,2]
(bp3 = barplot(v3, ylim=range(c(v3-2*sv3, v3+2*sv3))))
errbar(bp3[,1], v3, v3-2*sv3, v3+2*sv3, add=TRUE)
```

This one is nice, because it is extremely simple to explain. But what if, instead of a linear regression, we had a logistic one (with $Y\in\{0,1\}$)? Or a Poisson regression…
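To see why, here is a quick sketch on a (hypothetical) binary version of the simulated response: "centering" it is no longer even admissible,

```r
# a binary version of the response, purely for illustration
db$Z  = (db$Y > median(db$Y)) * 1
db$Z0 = db$Z - mean(db$Z)
# glm(Z0 ~ 0 + X1 + X2, data = db, family = binomial)
# ...fails: the binomial family needs a response in [0,1], and with a logit
# (or log) link, centering the response has no natural meaning anyway
```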

So maybe it cannot be the best solution here. Let us try something else… In insurance ratemaking, people like to use a "zonier" (a spatial rating factor). It is a two-stage regression: first, run a regression without any spatial component, then regress the residuals on the spatial variables. Here, it would be something like

```r
reg3 = lm(Y~1+X2, data=db)                        # no spatial component
reg4 = lm(residuals(reg3)~0+X1, data=db)          # residuals on the spatial variable
```

Since we focus on residuals, those are centered, and we have an easy interpretation of the respective values

```r
v4 = reg4$coefficients
sv4 = summary(reg4)$coefficients[,2]
(bp4 = barplot(v4, names.arg=LETTERS[1:5]))
errbar(bp4[,1], v4, v4-2*sv4, v4+2*sv4, add=TRUE)
```

I guess it can also be used in generalized linear models, with Pearson (or deviance) residuals.
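A sketch of what that could look like, on the (hypothetical) binary version of the response used above:

```r
# two-stage "zonier" idea for a logistic model, with Pearson residuals
greg = glm(Z ~ 1 + X2, data = db, family = binomial)  # no spatial component
ereg = lm(residuals(greg, type = "pearson") ~ 0 + X1, data = db)
barplot(ereg$coefficients, names.arg = LETTERS[1:5])
```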

Another possible idea is the following. Again, the goal is not to display the true values, but to visualize on a graph how regions differ. Here, all of them are significantly different. And in region $A$, $Y$ is smaller, ceteris paribus (other things being equal, in the sense that we have taken $x_1$ into account). And in region $E$ it is larger. Here, the graph helps to "see" those differences.

Why not consider a completely different graph? What if we plot the vector $a$ instead of $\alpha$, where $a_A$ can be interpreted as the value of the coefficient if we consider region $A$ against "not region $A$"? What if we consider 5 regressions where dichotomous versions of $Z$ are used: $Z_j=\mathbf{1}_{Z=j}$?

```r
v5 = sv5 = rep(NA, 5)
names(v5) = LETTERS[1:5]
for(k in 1:5){
  reg = lm(Y~I(X1==LETTERS[k])+X2, data=db)
  v5[k] = reg$coefficients[2]
  sv5[k] = summary(reg)$coefficients[2,2]
}
```

We can plot that sequence of values, including some confidence intervals (that would be related to significance with respect to all other regions)

```r
(bp5 = barplot(v5, ylim=range(c(v5-2*sv5, v5+2*sv5))))
errbar(bp5[,1], v5, v5-2*sv5, v5+2*sv5, add=TRUE)
```

Looking at the values themselves does not give intuitive results, but I have the feeling that it is easy to explain what we plot (we compare each region to "the rest of the world"), and the ordering of $a$ seems to be consistent with that of $\alpha$ (although I could not prove it).
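That consistency can at least be checked numerically on the simulated data, using the vectors computed above,

```r
# compare the ordering of the no-intercept coefficients (v1, one per region)
# with the ordering of the one-vs-rest coefficients (v5)
rank(v1)
rank(v5)
cor(rank(v1), rank(v5), method = "spearman")   # 1 if the orderings agree
```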

Those are some ideas I had. I should be able to come up with other graphs, but I would love to discuss with anyone interested in that topic, to find a proper and nice way to visualize the effects of a categorical explanatory variable in a regression model (which could be a logistic one). Comments are open…

# Holt-Winters with a Quantile Loss Function

Exponential smoothing is an old technique, but it can perform extremely well on real time series, as discussed in Hyndman, Koehler, Ord & Snyder (2008):

> when Gardner (2005) appeared, many believed that exponential smoothing should be disregarded because it was either a special case of ARIMA modeling or an ad hoc procedure with no statistical rationale. As McKenzie (1985) observed, this opinion was expressed in numerous references to my paper. Since 1985, the special case argument has been turned on its head, and today we know that exponential smoothing methods are optimal for a very general class of state-space models that is in fact broader than the ARIMA class.

Furthermore, I like it because I think it has nice pedagogical features. Consider simple exponential smoothing, $$L_{t}=\alpha Y_{t}+(1-\alpha)L_{t-1}$$ where $\alpha\in(0,1)$ is the smoothing weight. It is locally constant, in the sense that ${}_{t}\hat Y_{t+h} = L_{t}$

```r
library(datasets)
X = as.numeric(Nile)
SimpleSmooth = function(a){
  T = length(X)
  L = rep(NA, T)
  L[1] = X[1]
  for(t in 2:T){ L[t] = a*X[t]+(1-a)*L[t-1] }
  return(L)
}
plot(X, type="b", cex=.6)
lines(SimpleSmooth(.2), col="red")
```

When using the standard R function, we get

```r
hw = HoltWinters(X, beta=FALSE, gamma=FALSE, l.start=X[1])
hw$alpha
[1] 0.2465579
```

Of course, one can replicate that optimal value

```r
V = function(a){
  T = length(X)
  L = erreur = rep(NA, T)
  erreur[1] = 0
  L[1] = X[1]
  for(t in 2:T){
    L[t] = a*X[t]+(1-a)*L[t-1]
    erreur[t] = X[t]-L[t-1]
  }
  return(sum(erreur^2))
}
optim(.5, V)$par
[1] 0.2464844
```

Here, the optimal value for $\alpha$ is the one that minimizes the one-step prediction error, for the $\ell_2$ loss function, i.e. $$\sum_{t=2}^n(Y_t-{}_{t-1}\hat Y_t)^2$$ where here ${}_{t-1}\hat Y_t = L_{t-1}$. But one can consider another loss function, for instance the quantile loss function, $$\ell_{\tau}(\varepsilon)=\varepsilon(\tau-\mathbb{I}_{\varepsilon\leq 0})$$ The optimal coefficient is then obtained using

```r
HWtau = function(tau){
  loss = function(e) e*(tau-(e<=0)*1)
  V = function(a){
    T = length(X)
    L = erreur = rep(NA, T)
    erreur[1] = 0
    L[1] = X[1]
    for(t in 2:T){
      L[t] = a*X[t]+(1-a)*L[t-1]
      erreur[t] = X[t]-L[t-1]
    }
    return(sum(loss(erreur)))
  }
  optim(.5, V)$par
}
```

Here is the evolution of $\alpha^\star_\tau$ as a function of $\tau$ (the level of the quantile considered).

```r
T = (1:49)/50
HW = Vectorize(HWtau)(T)
plot(T, HW, type="l")
abline(h=hw$alpha, lty=2, col="red")
```

Note that the optimal $\alpha$ is decreasing with $\tau$. I wonder how general this result can be…
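One quick and purely informal way to probe that question is to rerun the same curve on another classic series (suggestive only: AirPassengers is trended and seasonal, unlike the Nile data),

```r
# the functions above use the global X, so swap the series temporarily
X = as.numeric(AirPassengers)
HW2 = Vectorize(HWtau)(T)
plot(T, HW2, type = "l")
X = as.numeric(Nile)   # restore the series used in the rest of the post
```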

Of course, one can consider more general exponential smoothing, for instance the double one, with $$L_t=\alpha Y_t+(1-\alpha)[L_{t-1}+B_{t-1}]$$ and $$B_t=\beta[L_t-L_{t-1}]+(1-\beta)B_{t-1}$$ so that the prediction is now ${}_{t}\hat Y_{t+h} = L_{t}+hB_t$ (it is now locally linear, and no longer locally constant).

```r
hw = HoltWinters(X, gamma=FALSE, l.start=X[1])
hw$alpha
    alpha 
0.4200241 
hw$beta
      beta 
0.05973389 
```

The code to compute the smoothed series is the following

```r
DoubleSmooth = function(a, b){
  T = length(X)
  L = B = rep(NA, T)
  L[1] = X[1]; B[1] = 0
  for(t in 2:T){
    L[t] = a*X[t]+(1-a)*(L[t-1]+B[t-1])
    B[t] = b*(L[t]-L[t-1])+(1-b)*B[t-1]
  }
  return(L+B)
}
```

Here also it is possible to replicate R's results, using the $\ell_2$ loss function

```r
V = function(A){
  a = A[1]
  b = A[2]
  T = length(X)
  L = B = erreur = rep(NA, T)
  erreur[1] = 0
  L[1] = X[1]; B[1] = X[2]-X[1]
  for(t in 2:T){
    L[t] = a*X[t]+(1-a)*(L[t-1]+B[t-1])
    B[t] = b*(L[t]-L[t-1])+(1-b)*B[t-1]
    erreur[t] = X[t]-(L[t-1]+B[t-1])
  }
  return(sum(erreur^2))
}
optim(c(.5,.05), V)$par
[1] 0.41904510 0.05988304
```

(up to numerical optimization approximation, I guess). But here also, a quantile loss function can be considered

```r
HWtau = function(tau){
  loss = function(e) e*(tau-(e<=0)*1)
  V = function(A){
    a = A[1]
    b = A[2]
    T = length(X)
    L = B = erreur = rep(NA, T)
    erreur[1] = 0
    L[1] = X[1]; B[1] = X[2]-X[1]
    for(t in 2:T){
      L[t] = a*X[t]+(1-a)*(L[t-1]+B[t-1])
      B[t] = b*(L[t]-L[t-1])+(1-b)*B[t-1]
      erreur[t] = X[t]-(L[t-1]+B[t-1])
    }
    return(sum(loss(erreur)))
  }
  optim(c(.5,.05), V)$par
}
```

and we can plot those values on a graph

```r
T = (1:49)/50
HW = Vectorize(HWtau)(T)
plot(HW[1,], HW[2,], type="l")
abline(v=hw$alpha, lwd=.4, lty=2, col="red")
abline(h=hw$beta, lwd=.4, lty=2, col="red")
points(hw$alpha, hw$beta, pch=19, col="red")
```

(with $\alpha$ on the $x$-axis and $\beta$ on the $y$-axis). So here, it is extremely simple to change the loss function, but so far it has to be done manually. Of course, one could also do it for the seasonal exponential smoothing model.
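For completeness, here is a sketch of what the additive seasonal (triple) recursions would look like, with deliberately naive initializations. The Nile series is not seasonal, so this is only a template:

```r
# additive seasonal smoothing, period s (e.g. s=12 for monthly data)
TripleSmooth = function(a, b, g, X, s = 12){
  T = length(X)
  L = B = rep(NA, T); S = rep(NA, T)
  L[s] = mean(X[1:s]); B[s] = 0
  S[1:s] = X[1:s] - mean(X[1:s])
  for(t in (s + 1):T){
    L[t] = a * (X[t] - S[t - s]) + (1 - a) * (L[t - 1] + B[t - 1])
    B[t] = b * (L[t] - L[t - 1]) + (1 - b) * B[t - 1]
    S[t] = g * (X[t] - L[t]) + (1 - g) * S[t - s]
  }
  return(L + B + S)   # smoothed values, in the spirit of DoubleSmooth above
}
```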

# The myth of interpretability of econometric models

There are important discussions nowadays about data modeling, and the choice between the "two cultures" (as mentioned in Breiman (2001)), i.e. econometric models versus machine/statistical learning models. We discussed this issue recently in Econométrie et Machine Learning (so far only in French) with Emmanuel Flachaire and Antoine Ly. One argument often used by econometricians is the interpretability of econometric models, or at least the attempt to obtain an interpretable model.

We also have this discussion in actuarial science, for instance in ratemaking (or insurance pricing). Machine learning based models usually perform better (for some a priori chosen metric), but actuaries claim that econometric models are more easily interpretable. In the actuarial literature, we assume that claim frequency $Y$ is driven by some non-observable risk factor $\Theta$, and therefore we do have heterogeneous risks in our portfolio. And it can be seen as legitimate to differentiate prices. Assume that this risk factor $\Theta$ is strongly correlated with $X_1$, the age of the driver, because in our portfolio old drivers tend to have more accidents. Here, we could pretend to have a "causal story" (as defined in Freedman (2009)) because of a possible interpretation of the model. So it is natural here to consider a regression model of $Y$ on $X_1$ to derive our actuarial pricing model. But assume that, possibly, the risk factor $\Theta$ is also strongly correlated with $X_2$, which can be related to spatial features (say latitude, denoting a north/south position), because in our portfolio drivers living in the south tend to have more accidents (roads are known to be more dangerous there). Here, we could pretend to have a second "causal story".

Of course, since $\Theta$ is strongly correlated with both $X_1$ and $X_2$, it means that $X_1$ and $X_2$ are strongly correlated with each other. Here also, this correlation can be interpreted (not in a causal way as previously, but still), since we know that old people like to live in southern regions. So, what should we do here? Let us run some simulations to illustrate.

```r
set.seed(123)
n = 1e5
Theta = rnorm(n)
X1 = Theta + rnorm(n)/8
X2 = Theta + rnorm(n)/8
L = exp(-3 + Theta)
Y = rpois(n, L)
B = data.frame(Y, X1, X2)
```

Our first idea was to consider a model where $Y$ is “explained” by the first variable $X_1$,

```r
g1 = glm(Y~X1, data=B, family=poisson)
summary(g1)

Coefficients:
          Estimate Std. Error z value Pr(>|z|)    
(Inter.) -2.97778    0.01544 -192.88   <2e-16 ***
X1        0.97926    0.01092   89.64   <2e-16 ***
```

As expected, our variable is "significant", but also, probably more interestingly, $X_2$ has no impact on the residuals

```r
B$e1 = residuals(g1, type="pearson")
g1e = lm(e1~X2, data=B)
summary(g1e)

Coefficients:
          Estimate Std. Error t value Pr(>|t|)
(Inter.) 0.0003618  0.0031696   0.114    0.909
X2       0.0028601  0.0031467   0.909    0.363
```

The interpretation is that once we have corrected claim frequency for the age of the drivers, there is no spatial effect left. So, a good model should be based only on the age of the drivers. But we can also consider the other story: a model where $Y$ is "explained" by the second variable $X_2$,

```r
g2 = glm(Y~X2, data=B, family=poisson)
summary(g2)

Coefficients:
          Estimate Std. Error z value Pr(>|z|)    
(Inter.) -2.97724    0.01544 -192.81   <2e-16 ***
X2        0.97915    0.01093   89.56   <2e-16 ***
```

Here also we have a valid model, which can be interpreted, and here also $X_1$ has no impact on the residuals

```r
B$e2 = residuals(g2, type="pearson")
g2e = lm(e2~X1, data=B)
summary(g2e)

Coefficients:
          Estimate Std. Error t value Pr(>|t|)
(Inter.) 0.0004863  0.0031733   0.153    0.878
X1       0.0027979  0.0031504   0.888    0.374
```

The story is similar here: if we correct for the spatial pattern, claim frequency does not depend on the age of the driver.

So, what should we do now? We have two models, and each of them is as interpretable as the other. Note that we cannot use any statistical tool to distinguish between the two: they are perfectly comparable

```r
AIC(g1)
[1] 51013.39
AIC(g2)
[1] 51013.15
```

Why not incorporate the two explanatory variables $X_1$ and $X_2$ at the same time in our regression model, and let "the model" decide what to do…?

```r
g = glm(Y~X1+X2, data=B, family=poisson)
summary(g)

Coefficients:
          Estimate Std. Error  z value Pr(>|z|)    
(Inter.) -2.98132    0.01547 -192.723   <2e-16 ***
X1        0.49310    0.06226    7.920 2.38e-15 ***
X2        0.49375    0.06225    7.931 2.17e-15 ***
```

It looks like we completely lost the interpretability of the model, since our two explanatory variables are (strongly) correlated. Actually, instead of saying "use one, and drop the other one (since it brings no further information)", the model says "use both, each one will explain half of the variable". Strange interpretation, isn't it? So why not try some LASSO here?

```r
library(glmnet)
fit = glmnet(x=as.matrix(B[,c("X1","X2")]), y=B$Y, family="poisson")
plot(fit, xvar="lambda")
```

Here also, it says that we either keep both variables, or none. So LASSO cannot be used here for variable selection (which is an important motivation for using that technique). So, what should we do if we have several interpretable models, but no way to choose between them? Because usually, we claim that we prefer a model with an interpretation. But what should be done here?

# Optimal Portfolios, or sort of…

Last week, we had our first class on portfolio optimization. We have seen Markowitz's theory, where expected returns and the covariance matrix are given,

```r
> download.file(url="http://freakonometrics.free.fr/portfolio.r",destfile = "portfolio.r")
> source("portfolio.r")
> library(zoo)
> library(FRAPO)
> library(IntroCompFinR)
> library(rrcov)
> data( StockIndex )
> pzoo = zoo ( StockIndex , order.by = rownames ( StockIndex ) )
> rzoo = ( pzoo / lag ( pzoo , k = -1) - 1 ) * 100
> Moments <- function ( x , method = c ( "CovClassic" , "CovMcd" , "CovMest" , "CovMMest" , "CovMve" , "CovOgk" , "CovSde" , "CovSest" ) , ... ) {
+ method <- match.arg ( method )
+ ans <- do.call ( method , list ( x = x , ... ) )
+ return ( getCov ( ans ) )}
> covmat=Moments(as.matrix(rzoo),"CovClassic")
> (covmat=round(covmat,1))
        SP500 N225 FTSE100 CAC40 GDAX  HSI
SP500    17.8 12.7    13.8  17.8 19.5 18.9
N225     12.7 36.6    10.8  15.0 16.2 16.7
FTSE100  13.8 10.8    17.3  18.8 19.4 19.1
CAC40    17.8 15.0    18.8  30.9 29.9 22.8
GDAX     19.5 16.2    19.4  29.9 38.0 26.1
HSI      18.9 16.7    19.1  22.8 26.1 58.1
> er=apply(as.matrix(rzoo),2,mean)
> (er=round(er,1))
  SP500    N225 FTSE100   CAC40    GDAX     HSI 
    0.6    -0.2     0.4     0.5     0.8     1.0 
> ef <- efficient.frontier(er, covmat, alpha.min=-2.5, alpha.max=2.5, nport=50)
```

We can now visualize the efficient frontier (and admissible portfolios) below

```r
> u=c(12,ef$sd,12,12)
> v=c(5,ef$er,-1,5)
> plot(ef$sd,ef$er,type="l",xlab="Standard Deviation",ylab="Expected Return", xlim=c(3.5,11),ylim=c(0,2.5),col="red",lwd=1.5)
> points(sqrt(diag(covmat)),er,pch=19,col="blue")
> text(sqrt(diag(covmat)),er,names(er),pos=4, col="blue",cex=.6)
> polygon(u,v,border=NA,col=rgb(0,0,1,.3))
```

That was the starting point of our class. We also mentioned that something important was actually hard to visualize on that graph: the correlation between returns. It is not in the points (which are univariate, with expected return and standard deviation), but in the efficient frontier. For instance, here is our correlation matrix

```r
> (cormat=covmat/(sqrt(diag(covmat) %*% t(diag(covmat)))))
        SP500 N225 FTSE100 CAC40 GDAX  HSI
SP500    1.00 0.50    0.79  0.76 0.75 0.59
N225     0.50 1.00    0.43  0.45 0.43 0.36
FTSE100  0.79 0.43    1.00  0.81 0.76 0.60
CAC40    0.76 0.45    0.81  1.00 0.87 0.54
GDAX     0.75 0.43    0.76  0.87 1.00 0.56
HSI      0.59 0.36    0.60  0.54 0.56 1.00
```

We can actually change the correlation between SP500 and FTSE100 (which is here 0.786)

```r
courbe=function(r=.786){
  R=cormat
  R[1,3]=R[3,1]=r
  covmat2=(sqrt(diag(covmat) %*% t(diag(covmat))))*R
  ef <- efficient.frontier(er, covmat2, alpha.min=-2.5, alpha.max=2.5, nport=50)
  plot(ef$sd,ef$er,type="l",xlab="Standard Deviation",ylab="Expected Return", xlim=c(3.5,11),ylim=c(0,2.5),col="red",lwd=1.5)
  points(sqrt(diag(covmat)),er,pch=19,col=c("blue","red")[c(2,1,2,1,1,1)])
  text(sqrt(diag(covmat)),er,names(er),pos=4,col=c("blue","red")[c(2,1,2,1,1,1)],cex=.6)
  polygon(u,v,border=NA,col=rgb(0,0,1,.3))
}
```

For instance, with a correlation of 0.6, we get the following efficient frontier

```r
> courbe(.6)
```

and with a stronger correlation

```r
> courbe(.9)
```

So clearly, correlation does matter.
A lot. But more importantly, one should keep in mind that expected returns and covariances are not given, but estimated. Previously, we used the standard estimator for the variance matrix, but another (more robust) estimator can be considered

```r
covmat=Moments(as.matrix(rzoo),"CovSde")
er=apply(as.matrix(rzoo),2,mean)
ef <- efficient.frontier(er, covmat, alpha.min=-2.5, alpha.max=2.5, nport=50)
plot(ef$sd,ef$er,type="l",xlab="Standard Deviation",ylab="Expected Return",xlim=c(3.5,11),ylim=c(0,2.5),col="red",lwd=1.5)
points(sqrt(diag(covmat)),er,pch=19,col="blue")
text(sqrt(diag(covmat)),er,names(er),pos=4,col="blue",cex=.6)
polygon(u,v,border=NA,col=rgb(0,0,1,.3))
```

It did influence the (horizontal) position of the points, since variances are now different, as well as the efficient frontier, with clearly much lower variances that can be reached.

And to illustrate a last point, namely the fact that our estimators are based on observed returns, what if we had observed different ones? A way to get an idea of what might have happened is to use the bootstrap, e.g. on daily returns.

```r
> covmat=Moments(as.matrix(rzoo),"CovClassic")
> er=apply(as.matrix(rzoo),2,mean)
> ef <- efficient.frontier(er, covmat, alpha.min=-2.5, alpha.max=2.5, nport=50)
> a=sqrt(diag(covmat))
> b=er
> k=1
> plot(ef$sd,ef$er,type="l",xlab="Standard Deviation",ylab="Expected Return", xlim=c(3.5,11),ylim=c(0,2.5),col="white",lwd=1.5)
> polygon(u,v,border=NA,col=rgb(0,0,1,.3))
> for(i in 1:100){
+ id=sample(nrow(rzoo),replace=TRUE)
+ covmat=Moments(as.matrix(rzoo)[id,],"CovClassic")
+ er=apply(as.matrix(rzoo)[id,],2,mean)
+ points(sqrt(diag(covmat))[k],er[k],cex=.5)
+ }
```

or for another asset. Here is what we get on the (estimated) efficient frontier

```r
> covmat=Moments(as.matrix(rzoo),"CovClassic")
> er=apply(as.matrix(rzoo),2,mean)
> ef <- efficient.frontier(er, covmat, alpha.min=-2.5, alpha.max=2.5, nport=50)
> plot(ef$sd,ef$er,type="l",xlab="Standard Deviation",ylab="Expected Return", xlim=c(3.5,11),ylim=c(0,2.5),col="white",lwd=1.5)
> points(sqrt(diag(covmat)),er,pch=19,col="blue")
> text(sqrt(diag(covmat)),er,names(er),pos=4, col="blue",cex=.6)
> polygon(u,v,border=NA,col=rgb(0,0,1,.3))
> for(i in 1:100){
+ id=sample(nrow(rzoo),replace=TRUE)
+ covmat=Moments(as.matrix(rzoo)[id,],"CovClassic")
+ er=apply(as.matrix(rzoo)[id,],2,mean)
+ ef <- efficient.frontier(er, covmat, alpha.min=-2.5, alpha.max=2.5, nport=50)
+ lines(ef$sd,ef$er,col="red")
+ }
```

Thus, it is somehow rather difficult to assess whether a portfolio is optimal, or not… At least from a statistical perspective…

# Traffic Flow of Kota Kinabalu (with R)

This morning, we had our first practicals on network flows, using an example mentioned in some papers published by Noraini Abdullah and Ting Kien Hua: Max Flow Min Cut Theorem to Minimize Traffic Congestion in Kota Kinabalu and Application of the Shortest Path and Maximum Flow with Bottleneck in Traffic Flow of Kota Kinabalu.
From the roads mentioned in the articles, I did my best to locate the nodes on a map,

```r
m=matrix(c(0,5.995910, 116.105520,
           1,5.992737, 116.093718,
           2,5.992066, 116.109883,
           3,5.976947, 116.095760,
           4,5.985766, 116.091580,
           5,5.988940, 116.080112,
           6,5.968318, 116.080764,
           7,5.977454, 116.075460,
           8,5.974226, 116.073604,
           9,5.969651, 116.073753,
           10,5.972341, 116.069270,
           11,5.978818, 116.072880),3,12)
```

which can be visualized below

```r
library(OpenStreetMap)
map = openmap(c(lat= 6.000, lon= 116.06), c(lat= 5.960, lon= 116.12))
map = openproj(map)
plot(map)
points(t(m[3:2,]),col="black", pch=19, cex=3 )
text(t(m[3:2,]),c("s",1:10,"t"),col="white")
```

If the source is realistic (up north), I do not feel very comfortable with the location of the sink (on the west). But let's pretend it's fine (to do the maths, at least). To extract information about edge capacities on that network, use the following code, which will extract the three tables from the paper

```r
library(devtools)
install_github("ropensci/tabulizer")
library(tabulizer)
location <- 'http://www.jistm.com/PDF/JISTM-2017-04-06-02.pdf'
out <- extract_tables(location)
```

On Windows, it seems to be necessary to download another package first

```r
library(devtools)
install_github("ropensci/tabulizerjars")
install_github("ropensci/tabulizer")
library(tabulizer)
location <- 'http://www.jistm.com/PDF/JISTM-2017-04-06-02.pdf'
out <- extract_tables(location)
```

Now we can get our data frame with capacities

```r
B1=as.data.frame(out[[2]])
B2=as.data.frame(out[[3]])
E=data.frame(from=B1[3:20,"V3"], to=B1[3:20,"V4"])
E=E[-c(6,8),]
capacity=as.character(B2$V3[-1])
capacity[6]="843"
capacity[4]="2913"
E$capacity=as.numeric(capacity)
```

We can add those edges on our map (without the arrows indicating the direction; they would make it too heavy to read)

```r
plot(map)
points(t(m[3:2,]),col="black", pch=19, cex=3 )
B=data.frame(i=as.character(c("s",paste("V",1:10,sep=""),"t")), x=m[3,], y=m[2,])
for(i in 1:nrow(E)){
  i1=which(B$i==as.character(E$from[i]))
  i2=which(B$i==as.character(E$to[i]))
  segments(B[i1,"x"],B[i1,"y"],B[i2,"x"],B[i2,"y"],lwd=3)
}
text(t(m[3:2,]),c("s",1:10,"t"),col="white")
```

To get the graph with capacities, an alternative is to use

```r
library(igraph)
g=graph_from_data_frame(E)
E(g)$label=E$capacity
plot(g)
```

but it does not respect the geographical locations of the nodes. That can actually be fixed using

```r
plot(g, layout=as.matrix(B[,c("x","y")]))
```

To get a better understanding of the capacities of the roads, use

```r
plot(g, layout=as.matrix(B[,c("x","y")]), edge.width=E$capacity/200)
```

From that network with capacities, the goal is to determine the maximum flow on that network, from the source to the sink. This can be done with R using

```r
> (m=max_flow(graph=g, source="s", target="t"))
$value
[1] 2571

$flow
 [1] 1191 1380 1422 1380  231    0  231    0 1149 1422 1149    0    0 1149 1422
[16] 1149
```

Our maximum flow is here 2571, which is different from what is actually claimed in both papers, Max Flow Min Cut Theorem to… and Application of the Shortest Path… ("the maximum flow for the capacitated network with 12 nodes and 16 edges of the selected scope in this study was 2598 vehicles per hour"), where there are clearly typos, since the values in the table and on the graph are different. Here I used the ones from the tables.
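Since the papers invoke the max-flow/min-cut theorem, a natural sanity check is to compute the minimum cut directly; it should match the flow value obtained above,

```r
# by the max-flow/min-cut theorem, this should also return 2571
min_cut(g, source = "s", target = "t", value.only = TRUE)
```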

The resulting flow can be added to the graph

```r
E$flux1=m$flow
E(g)$label=E$flux1
plot(g, layout=as.matrix(B[,c("x","y")]), edge.width=E$flux1/200)
```

That is nice, but rather odd. Actually, a much simpler flow can be considered, with the same global value

```r
E$flux2=c(1422,1149,1422,1149,0,0,0,0, 1149,1422,1149,0,0,1149,1422,1149)
E(g)$label=E$flux2
plot(g, layout=as.matrix(B[,c("x","y")]), edge.width=E$flux2/200)
```

Nice, isn't it? It is actually possible to do exactly the same with another paper they wrote, on the same city, Traffic Congestion Problem of Road Networks in Kota Kinabalu.

```r
location <- 'http://www.worldresearchlibrary.org/up_proc/pdf/999-150486366625-30.pdf'
out <- extract_tables(location)
dim(out[[3]])
B1=as.data.frame(out[[3]])
E=data.frame(from=B1[2:61,"V2"], to=B1[2:61,"V3"], capacity=B1[2:61,"V4"])
E$capacity=as.numeric(as.character(E$capacity))
library(igraph)
g=graph_from_data_frame(E)
m=max_flow(graph=g, source="S", target="T")
E$flux1=m$flow
E(g)$label=E$flux1
plot(g, edge.width=E$flux1/200, edge.arrow.size=0.15)
```

Here the value of the maximal flow is 4017, just as they found in the original paper

# Multinomial Logit as an Iterated Logit Regression

For the second section of the course at ENSAE yesterday, we saw how to run a multinomial logistic regression model. It is simply an extension of the binomial logistic regression. But actually, it is also possible to consider iterated binomial regressions.

Consider here a response variable $Y$ with a multinomial distribution (3 factors, to have something more general than the binomial), taking values in $\{A,B,C\}$, with respective probabilities $\mathbf{p}=(p_A,p_B,p_C)$. Here is a function to generate such multinomial variables

```r
msample = function(A, B, C){
  # draw B observations from the values in A, observation i using probabilities C[i,]
  Y = rep(NA, B)
  for(i in 1:B){ Y[i] = sample(A, size=1, prob=C[i,]) }
  return(Y)
}
```

and here is a function to generate a dataset with $n$ rows,

```r
generate3 = function(n, x, pb=c(-2,0)){
  set.seed(x)
  X1 = runif(n)
  X2 = runif(n)
  X3 = runif(n)
  s1 = pb[1]+X1+X2
  s2 = pb[2]-X1+X2
  P1 = exp(s1)/(1+exp(s1)+exp(s2))
  P2 = exp(s2)/(1+exp(s1)+exp(s2))
  Y = msample(0:2, n, cbind(1-P1-P2, P1, P2))
  df = data.frame(Y=Y, X1=X1, X2=X2, X3=X3)
  return(df)
}
```

Let us generate a training dataset and a validation one

```r
pb = c(.31,.42)
DF1 = generate3(1000, 1, pb=pb)
DF2 = generate3(500, 2, pb=pb)
```

With a multinomial logistic regression
$$\mathbb{P}[Y=A|\mathbf{x}]=\frac{\exp[\mathbf{x}^{\text{T}}\mathbf{\alpha}]}{1+\exp[\mathbf{x}^{\text{T}}\mathbf{\alpha}]+\exp[\mathbf{x}^{\text{T}}\mathbf{\beta}]}$$
$$\mathbb{P}[Y=B|\mathbf{x}]=\frac{\exp[\mathbf{x}^{\text{T}}\mathbf{\beta}]}{1+\exp[\mathbf{x}^{\text{T}}\mathbf{\alpha}]+\exp[\mathbf{x}^{\text{T}}\mathbf{\beta}]}$$
$$\mathbb{P}[Y=C|\mathbf{x}]=\frac{1}{1+\exp[\mathbf{x}^{\text{T}}\mathbf{\alpha}]+\exp[\mathbf{x}^{\text{T}}\mathbf{\beta}]}$$

For convenience, consider the most popular level of the response in our training dataset

```r
modalite = names(sort(table(DF1$Y), decreasing = TRUE))
```

Consider a regression model on the simulated dataset (with several covariates); let us estimate it, and get predictions.

```r
library(nnet)
reg = multinom(as.factor(Y) ~ ., data = DF1)
mp1 = predict(reg, DF1, "probs")
mp2 = predict(reg, DF2, "probs")
```

An alternative can be the following. Consider a first regression model on the Bernoulli variable $Y_A=\mathbf{1}(Y=A)$. Actually, we will use the most popular level, but for convenience, assume that it is $A$:
$$\mathbb{P}[Y_A=1|\mathbf{x}]=\frac{\exp[\mathbf{x}^{\text{T}}\mathbf{a}]}{1+\exp[\mathbf{x}^{\text{T}}\mathbf{a}]}$$
On our dataset, estimate that model, and get predictions. In the case where $Y\neq A$, define another Bernoulli variable $Y_B=\mathbf{1}(Y=B|Y\neq A)$. We can estimate that model and derive two probabilities, $\mathbb{P}(Y=B|Y\neq A)$ and $\mathbb{P}(Y=C|Y\neq A)$ (their sum being equal to 1). Based on those two models, it is possible to compute the three probabilities we are looking for: $\mathbb{P}[Y=A]$ is obtained from the first model, and we can derive the other two from $\mathbb{P}[Y=B|Y\neq A]\cdot\mathbb{P}[Y\neq A]$ and $\mathbb{P}[Y=C|Y\neq A]\cdot\mathbb{P}[Y\neq A]$.

```r
reg1 = glm((Y==modalite[1]) ~ ., data=DF1, family=binomial)
reg2 = glm((Y==modalite[2]) ~ ., data=DF1[-which(DF1$Y==modalite[1]),], family=binomial)
p11 = predict(reg1, newdata=DF1, type="response")
p12 = predict(reg2, newdata=DF1, type="response")
p21 = predict(reg1, newdata=DF2, type="response")
p22 = predict(reg2, newdata=DF2, type="response")
mmp1 = cbind(p11, (1-p11)*p12, (1-p11)*(1-p12))
mmp2 = cbind(p21, (1-p21)*p22, (1-p21)*(1-p22))
colnames(mmp1) = colnames(mmp2) = modalite
```

Let us compare the predicted probabilities, on the same dataset (here the training dataset)

```r
> mmp1[1:9,c("0","1","2")]
           0         1         2
1 0.19728737 0.4991805 0.3035321
2 0.17244580 0.5648537 0.2627005
3 0.19291753 0.5971058 0.2099767
4 0.09087176 0.7787304 0.1303978
5 0.23400225 0.4083022 0.3576955
6 0.18063647 0.6637352 0.1556283
7 0.13188881 0.7402710 0.1278401
8 0.13776970 0.6524959 0.2097344
9 0.12325864 0.6790336 0.1977078
> mp1[1:9,c("0","1","2")]
           0         1         2
1 0.19691036 0.5022692 0.3008205
2 0.17123189 0.5680647 0.2607034
3 0.19293066 0.5984402 0.2086291
4 0.08821851 0.7813318 0.1304497
5 0.23470739 0.4109990 0.3542936
6 0.18249687 0.6602168 0.1572863
7 0.13128711 0.7400898 0.1286231
8 0.13525341 0.6553618 0.2093848
9 0.12090016 0.6815915 0.1975084
```

The two are very close. So yes, it is possible to see the multinomial regression as a sequence of binomial regressions.
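To quantify "very close", one can look at the largest discrepancy between the two sets of predicted probabilities, on both the training and the validation datasets,

```r
# maximum absolute gap between multinomial and sequential-logit predictions
# (columns of mmp1/mmp2 are reordered to match those of mp1/mp2)
max(abs(mp1 - mmp1[, colnames(mp1)]))
max(abs(mp2 - mmp2[, colnames(mp2)]))
```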

# Traveling Salesman

In the second part of the course on graphs and networks, we will focus on economic applications, and flows. The first series of slides is on the traveling salesman problem. Slides are available online.

# Networks with R

In order to practice with network data in R, we have been playing with the Padgett (1994) Florentine wedding dataset (discussed in the lecture). The dataset is available from

```r
> library(network)
> data(flo)
> nflo=network(flo,directed=FALSE)
> plot(nflo, displaylabels = TRUE, boxed.labels = FALSE)
```

The next step was to move from the network package to igraph. Since we have the adjacency matrix, we can use it

```r
> library(igraph)
> iflo=graph_from_adjacency_matrix(flo, mode = "undirected")
> plot(iflo)
```

The good thing is that a lot of functions are available, for instance we can get shortest paths, between two specific nodes. And we can give appropriate colors to the nodes that we’ll cross

```r
> AP=all_shortest_paths(iflo, from="Peruzzi", to="Ginori")
> L=AP$res[[1]]
> V(iflo)$color="yellow"
> V(iflo)$color[L[2:4]]="light blue"
> V(iflo)$color[L[c(1,5)]]="blue"
> plot(iflo)
```

We can also highlight the edges along that path, but I found it slightly more complicated (one has to extract the edges from the output)

```r
> liens=c(paste(as.character(L)[1:4],"--",as.character(L)[2:5],sep=""),
+         paste(as.character(L)[2:5],"--",as.character(L)[1:4],sep=""))
> df=as.data.frame(ends(iflo,E(iflo)))
> names(df)=c("src","target")
> lstn=sort(unique(c(as.character(df[,1]),as.character(df[,2]),"Pucci")))
> Eliens=paste(as.numeric(factor(df[,1],levels=lstn)),"--",
+              as.numeric(factor(df[,2],levels=lstn)),sep="")
> EU=unlist(lapply(Eliens,function(x) x%in%liens))
> E(iflo)$color=c("grey","black")[1+EU]
> plot(iflo)
```

But it works. It is also possible to use some D3.js visualization

```r
> library( networkD3 )
> simpleNetwork(df)
```

Then the next question was to add a vertex to the network. The simplest way to do it is probably through the adjacency matrix

```r
> flo2=flo
> flo2["Pucci","Bischeri"]=1
> flo2["Bischeri","Pucci"]=1
> nflo2=network(flo2,directed=FALSE)
> plot(nflo2, displaylabels = TRUE, boxed.labels = FALSE)
```

Then, we have been playing with centrality measures.

```r
> plot(iflo,vertex.size=betweenness(iflo))
```

The goal was to see how related those measures are. Here, for all of them, "Medici" is the central node. But what about the others?

```r
> B=betweenness(iflo)
> C=closeness(iflo)
> D=degree(iflo)
> E=eigen_centrality(iflo)$vector
> base=data.frame(betw=B,close=C,deg=D,eig=E)
> cor(base)
           betw     close       deg       eig
betw  1.0000000 0.5763487 0.8333763 0.6737162
close 0.5763487 1.0000000 0.7572778 0.7989789
deg   0.8333763 0.7572778 1.0000000 0.9404647
eig   0.6737162 0.7989789 0.9404647 1.0000000
```

Those measures are quite correlated. It is also possible to use a hierarchical clustering to visualize how close those centrality measures are

```r
> H=hclust(dist(t(base)), method="ward")
> plot(H)
```

Instead of looking at the values of the centrality measures, it is possible to look at the ranks

```r
> rbase=base
> for(i in 1:4) rbase[,i]=rank(base[,i])
> H=hclust(dist(t(rbase)), method="ward")
> plot(H)
```

Here the eigenvector measure is very close to the degree of vertices.

Finally, it is possible to look for clusters (in the context of coalitions here, in case a war should start between those families)

```r
> kc <- fastgreedy.community ( iflo )
```

Here we have 3 classes (+1 for the node that is disconnected from the other families)

```r
> V(iflo)$color=c("yellow","orange","light blue")[membership( kc )]
> plot(iflo)
> plot(kc,iflo)
```

# I Got The Feelin’

Last week, I was going through my CD collection, trying to find records I had not been listening to for a while. And I got the feeling that the music I listen to nowadays is slower than what I was listening to in my 20s. I was wondering whether that was an age issue, or whether music in the 90s was simply "faster" than the music released in 2015. So I tried to scrape the BPM database to get a more appropriate answer to this "feeling" I have. I extracted two pieces of information: the BPM (beats per minute) and the year (of release). Here is a function to extract information from the website,

```r
> library(XML)
> extractbpm = function(VBP,P){
+ url=paste("https://www.bpmdatabase.com/music/search/?artist=&title=&bpm=",VBP,"&genre=&page=",P,sep="")
+ download.file(url,destfile = "page.html")
+ tables=readHTMLTable("page.html")
+ return(tables)}
```

For instance

```r
> extractbpm(115,13)$`track-table`
   Artist                             Title
1  Eros Ramazzotti y Claudio Guidetti Dimelo A Mi
2  Everclear                          Volvo Driving Soccer Mom
3  Evils Toy                          Dear God
4  Expose                             In Walked Love
5  Fabolous ft. 2 Chainz              When I Feel Like It
6  Fabolous ft. 2 Chainz              When I Feel Like It
7  Fabolous ft. 2 Chainz              When I Feel Like It
8  Fanny Lu                           Fanfarron
9  Featurecast                        Ain't My Style
10 Fem 2 Fem                          Obsession
11 Fernando Villalona                 Mi Delito
12 Fever Ray                          Triangle Walks
13 Firstlove                          Freaky
14 Fito Blanko                        Pegadito Suavecito
15 Flechazo Del Norte                 Mariposa Traicionera
16 Fluke                              Switch/Twitch
17 Flyleaf                            Something Better
18 FM Static                          The Next Big Thing
19 Fonseca                            Eres Mi Sueno
20 Fonseca ft. Maffio & Nayer         Eres Mi Sueno
21 Francesca Battistelli              Have Yourself A Merry Little Christmas
22 Frankie Ballard                    Young & Crazy
23 Frankie J.                         More Than Words
24 Frank Sinatra                      The Hucklebuck
25 Franz Ferdinand                    The Dark Of The Matinée
   Mix                   BPM Genre          Label                           Year
1  —                     115 —              Sony                            2009
2  —                     115 —              Capitol Records                 2003
3  —                     115 —              —                               —
4  —                     115 —              Arista Records                  1994
5  Explicit              115 Urban          Def Jam/Island Def Jam          2013
6  —                     115 Urban          Def Jam/Island Def Jam          2013
7  Radio Edit            115 Urban          Def Jam/Island Def Jam          2013
8  —                     115 Latin Pop      Universal Latino                2011
9  Psychemagik Dub       115 —              Jalapeno                        2012
10 —                     115 —              Critique Records                1993
11 —                     115 —              Mt&vi Records/caminante Records 2001
12 Rex The Dog Remix     115 —              Little Idiot/Mute               2012
13 —                     115 —              Jwp Music                       2000
14 —                     115 Merengue Mambo Crown Loyalty                   2012
15 —                     115 —              Hacienda                        2010
16 Album Version         115 —              One Little Indian Records       2004
17 —                     115 Alternative    A&M/Octone                      2013
18 —                     115 —              Tooth & Nail Records            2007
19 —                     115 Merengue Mambo 10                              2012
20 Urban Version         115 —              10                              2012
21 —                     115 —              Word/Fervent/Warner Bros        2009
22 —                     115 Country        Warner Bros                     2015
23 Mynt Rocks Radio Edit 115 —              Columbia                        2005
24 —                     115 Jazz           Columbia                        1950
25 —                     115 New Wave       —                               2004
```

We have here one of the few old songs: a 1950 tune by Frank Sinatra. Let us scrape the website with a simple loop (where the bpm runs from 40 to 200). Start with

```r
> BASE=NULL
> vbp=40
> p=1
```

and then, a loop based on

```r
> while(vbp<=200){
+ F=extractbpm(vbp,p)
+ if(length(F)==1){
+ BASE=rbind(BASE,F[[1]][,c("Artist","Title","BPM","Year")])
+ p=p+1}
+ if(length(F)==0){
+ vbp=vbp+1
+ p=1}}
```

Then we should clean the dataset

```r
BASE=BASE[!duplicated(BASE),]
BASE=BASE[-which(BASE$Year=="—"),]
BASE$y=as.numeric(as.character(BASE$Year))
BASE$bpm=as.numeric(as.character(BASE$BPM))
BASE=BASE[BASE$y>=1940,]
```

and we end up with almost 50,000 tunes.

```r
boxplot(BASE$bpm~as.factor(BASE$y), col="light blue")
```

Over the past 20 years, it looks like the speed of tunes has declined (let us forget the tunes of 2017; clearly, we have a problem with those…)

```r
library(mgcv)
plot(BASE$y, BASE$bpm)
reg=gam(bpm~s(y), data=BASE)
B=data.frame(y=1950:2017)
p=predict(reg, newdata=B)
lines(B$y, p, lwd=3, col="red")
```

which is confirmed with a (smoothed) regression

```r
p2=predict(reg, newdata=B, se.fit=TRUE)
plot(B$y, p2$fit, lwd=3, col="red", type="l", ylim=c(90,140))
lines(B$y, p2$fit+p2$se.fit)
lines(B$y, p2$fit-p2$se.fit)
```

even when incorporating the confidence band. Bumps are probably related to smoothing parameters, but indeed, it looks like the average speed of music tunes has decreased, from 110-115 in the 90s to less than 100 nowadays. Now, to be honest, I would love to have access to personal information from iTunes, Deezer or Spotify, to get a better understanding (e.g. when in the week, or in the day, do we listen to faster music?). But so far, I could not get access to such data. Too bad…