Tag Archives: R-english

The U.S. Has Been At War 222 Out of 239 Years

This morning, I discovered an interesting statistic, America Has Been At War 93% of the Time – 222 Out of 239 Years – Since 1776, i.e. the U.S. has been at peace for less than 20 years in total since its birth. I wanted to check it, get a better understanding, and look at other countries in the world.

As always, we can try to extract the information from Wikipedia, since there are pages dedicated to that topic,

url="https://en.wikipedia.org/wiki/List_of_wars_involving_the_United_States"
download.file(url,destfile = "warUS.html")
url="https://en.wikipedia.org/wiki/List_of_wars_involving_France"
download.file(url,destfile = "warFR.html")
url="https://fr.wikipedia.org/wiki/Liste_des_guerres_de_la_France#Premi.C3.A8re_R.C3.A9publique"
download.file(url,destfile = "guerre.html")
url="https://en.wikipedia.org/wiki/List_of_wars_involving_Canada"
download.file(url,destfile = "warCAN.html")

If we look at the US page, the information is stored in tables, so it should be easy to extract. For instance,

Even if the war lasted only one day, we will say that the US were at war in 1811. The claim we want to check is then: “there were only 17 full years – from January 1st till December 31st – during which the US were not, at any point, at war”. Most of the time, we have something like

I.e. there is a beginning (here 1775) and an end (1783). So here, the US are said to be at war in 1775, 1776, 1777, 1778, 1779, 1780, 1781, 1782 and 1783. To extract the information, we look, in the first column, for regular expressions matching numbers with 4 digits.


Well, sometimes it can be a bit tricky: some rows contain 3 dates, e.g. 1941, 1945 and (in the legend) 1944. But if we consider the minimal and the maximal dates, we get our range of years.
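For instance, on a string like the one below (an illustrative example, not the actual cell content), the extraction would be a sketch like

library(stringr)
x = "American Revolutionary War (1775–1783)"
str_extract_all(x, "[0-9]{4}")
# [[1]] "1775" "1783"  -- we then keep the full range seq(1775,1783)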

Now that we know how to extract the information, let’s do it. The code will be

library(stringr)
ext_date=function(x){
  dates12="[0-9]{4}"  # regular expression for a 4-digit number (a year)
  L=str_extract_all(as.character(x),dates12)
  return_L=list()
  if(length(L)>0){
    for(j in 1:length(L)){
      if(length(L[[j]])==1) return_L[[j]]=as.numeric(L[[j]])
      if(length(L[[j]])>=2) return_L[[j]]=seq(min(as.numeric(L[[j]])),max(as.numeric(L[[j]])))
    }
  }
  return(return_L)}

For the US, we get the following years

library(XML)
tables=readHTMLTable("warUS.html")
list_dates=list()
for(i in 1:length(tables)){
  if(!is.null(dim(tables[[i]]))){
    if(ncol(tables[[i]])>1){
      col1=tables[[i]][,1]
      list_dates[[i]]=lapply(col1,ext_date)
    }
  }}
d=unique(unlist(list_dates))
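The timeline graph itself is not reproduced here, but a minimal sketch of how it could be drawn from the vector d (assuming we look at the 239 years from 1776 to 2014) would be

years=1776:2014
at_war=years %in% d
# red squares for years at war, green squares for years without war
plot(years,rep(1,length(years)),col=ifelse(at_war,"red","green"),
     pch=15,cex=2,yaxt="n",xlab="year",ylab="")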

(red means at war, while green means no war) and indeed,

> length(d)
[1] 222

there were 222 years with war. Now, what about another country, like France? Here I use the French wiki page, since the information is not stored in tables on the English one.

tables=readHTMLTable("guerre.html")
list_dates=list()
for(i in 1:length(tables)){
  if(!is.null(dim(tables[[i]]))){
    if(ncol(tables[[i]])>1){
      col1=tables[[i]][,1]
      col2=tables[[i]][,2]
      col12=paste(col1,col2)
      list_dates[[i]]=lapply(col12,ext_date)
    }
  }}
d=unique(unlist(list_dates))

Over the same period of time (starting in 1775), France was also at war most of the time.

Less than the US, but still: 185 years with war,

> length(d[d>=1775])
[1] 185

And over a longer period of time? Why not start, say, around the Hundred Years’ War,

meaning that since 1337, there were (only) 174 years without a single war involving France.
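The code for that count is not reproduced above, but it can be obtained directly from the vector d extracted from the French page; a quick sketch (assuming we stop the clock in 2016) is

# years between 1337 and 2016 during which France was not involved in any war
sum(!(1337:2016 %in% d))

which should return the 174 years mentioned above.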

Let’s try another one. Like Canada,

tables=readHTMLTable("warCAN.html")
list_dates=list()
for(i in 1:length(tables)){
  if(!is.null(dim(tables[[i]]))){
    if(ncol(tables[[i]])>1){
      col1=tables[[i]][,1]
      list_dates[[i]]=lapply(col1,ext_date)
    }
  }}
d=unique(unlist(list_dates))

Guess what… there’s a lot of green on that graph. Surprised?

Install R Packages on the Ubuntu Virtual Machine

For the (Advanced) R Crash Course of the Data Science for Actuaries program, we will use the Ubuntu virtual machine. There might be some issues when installing some packages… One trick can be to open a terminal

and then to use the sudo command, to install some packages,

(after entering the password). Just type (or copy/paste)

sudo apt-get install libcurl4-openssl-dev
sudo apt-get install libxml2-dev
sudo apt-get install openjdk-8-*
sudo update-alternatives --config java
sudo apt-get install aptitude
sudo aptitude install libgdal-dev
sudo aptitude install libproj-dev
sudo apt-get install build-essential libcurl4-gnutls-dev libxml2-dev libssl-dev
sudo apt-get install curl
sudo apt-get build-dep r-cran-rgl
sudo apt-get install r-cran-plyr r-cran-xml r-cran-reshape r-cran-reshape2 r-cran-rmysql
sudo apt-get install r-cran-rjava
sudo apt-get install r-cran-glmnet
sudo apt-get build-dep r-cran-boot
sudo apt-get build-dep r-cran-class
sudo apt-get build-dep r-cran-cluster
sudo apt-get build-dep r-cran-codetools
sudo apt-get build-dep r-cran-foreign
sudo apt-get build-dep r-cran-kernsmooth
sudo apt-get build-dep r-cran-lattice
sudo apt-get build-dep r-cran-mass
sudo apt-get build-dep r-cran-matrix
sudo apt-get build-dep r-cran-mgcv
sudo apt-get build-dep r-cran-nlme
sudo apt-get build-dep r-cran-nnet
sudo apt-get build-dep r-cran-rpart
sudo apt-get build-dep r-cran-spatial
sudo apt-get build-dep r-cran-survival
sudo apt-get build-dep r-cran-rodbc
sudo apt-get build-dep

Then, in RStudio, enter

install.packages("RCurl")
install.packages("xml")
install.packages("rJava")
install.packages("rgdal")
install.packages("xlsx")
install.packages("devtools")

It should be fine…
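A quick way to check that everything went well is to try loading the packages; if one of the calls below returns FALSE, the corresponding system library above is probably missing (just a sketch):

for(pkg in c("RCurl","XML","rJava","rgdal","xlsx","devtools"))
  print(require(pkg, character.only = TRUE))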

Third Actuarial Pricing Game

With the support of the ACTINFO Chair and the (French) Institute of Actuaries, our Third Actuarial Pricing Game starts today! There is a toolbox file available online, with

  • a description of the game: the rules, the dates, and a description of the datasets
  • 3 datasets: one underwriting database and one claims database for year 0 (training data), and one underwriting dataset to enter the game

Anyone can play: students from various programs around the world, as well as practitioners, are welcome. Playing in teams is possible, and there is no limit on team size. There is no registration either: to start playing, teams simply have to submit a dataset before the deadline (end of February) to pricing-game@univ-rennes1.fr.

Forecasting Natural Catastrophes (is rather difficult)

Following my previous post, I wanted to spend more time on the time series of “global weather-related disaster losses as a proportion of global GDP” over the period 1990-2016 that Roger Pielke sent me last night.

db=data.frame(year=1990:2016,
ratio=c(.23,.27,.32,.37,.22,.26,.29,.15,.40,.28,.14,.09,.24,.18,.29,.51,.13,.17,.25,.13,.21,.29,.25,.2,.15,.12,.12))

In my previous post, I spent some time explaining that we should provide some sort of ‘confidence interval’ when we try to predict a pattern. That was what we call ‘model uncertainty’. But there are two (important) issues that I did not mention. (1) It is a time series, so why not use techniques dedicated to time series objects? (2) We do not really care about ‘model uncertainty’ (unless we want to assess whether a decreasing trend is significant, or not); we care more about real prediction uncertainty: in the next ten years, what could be the range for this ratio, with some given probability (say 95%)? Could we say that, with 95% chance, the global weather-related disaster losses as a proportion of global GDP should be (each year) between 0 and 0.35, or between 0 and 0.7?

A first idea might be to use exponential smoothing techniques (without a seasonal component here).

ratio=ts(db$ratio,start=1990,frequency=1)
plot(ratio,xlim=c(1990,2030))
hw=HoltWinters(ratio,gamma=FALSE)
phw=predict(hw,n.ahead=15,prediction.interval = TRUE)
plot(hw,phw,xlim=c(1990,2030))
polygon(c(2017:2031,rev(2017:2031)), c(phw[,2],rev(phw[,3])),border=NA,col=rgb(0,0,1,.2))

The decreasing trend comes from the fact that exponential smoothing is here a linear regression, with weights exponentially decaying with time (the older the observation, the smaller the weight). But we cannot use that prediction as such, since the ratio (obviously) cannot be negative. So why not consider, here, the logarithm of the ratio,

plot(db$year,db$ratio,type="l",xlim=c(1990,2030),ylim=c(-.2,.7),xlab="year",ylab="ratio")
hw=HoltWinters(log(ratio),gamma=FALSE)
phw=predict(hw,n.ahead=15,prediction.interval = TRUE)
abline(v=2016,lty=2,col="grey")
lines(2017:2031,exp(phw[,2]),col="blue")
lines(2017:2031,exp(phw[,3]),col="blue")
lines(c(1992:2016,2017:2031),c(exp(hw$fitted[,1]),exp(phw[,1])),col="red")
polygon(c(2017:2031,rev(2017:2031)),exp(c(phw[,2],rev(phw[,3]))),border=NA,col=rgb(0,0,1,.2))
abline(h=0,lty=2)

The confidence band is huge here. What if we consider some ARIMA model instead?

library(forecast)
fit=auto.arima(ratio)
farma=forecast(fit,h=15)
# columns: point forecast, 80% bounds, 95% bounds
farma=cbind(as.numeric(farma$mean),as.numeric(farma$lower[,1]),as.numeric(farma$upper[,1]),as.numeric(farma$lower[,2]),as.numeric(farma$upper[,2]))
plot(db$year,db$ratio,type="l",xlim=c(1990,2030),ylim=c(-.2,.7),xlab="year",ylab="ratio")
abline(v=2016,lty=2,col="grey")
lines(2017:2031,farma[,4],col="blue")
lines(2017:2031,farma[,5],col="blue")
lines(2017:2031,farma[,1],col="red")
polygon(c(2017:2031,rev(2017:2031)),c(farma[,4],rev(farma[,5])),border=NA,col=rgb(0,0,1,.2))
abline(h=0,lty=2)

Here, there is an intercept, but no dynamics for the time series (which is considered, here, as pure white noise). We get exactly the same thing if we simply consider the average value of the series,

fit=lm(ratio~1,data=db)
s=summary(fit)$sigma
plot(db$year,db$ratio,type="l",xlim=c(1990,2030),ylim=c(-.2,.7),xlab="year",ylab="ratio")
abline(v=2016,lty=2,col="grey")
ndb=data.frame(year=2017:2031)
pf=predict(fit,newdata=ndb)
farma=cbind(pf,pf-1.96*s,pf+1.96*s)
lines(2017:2031,farma[,2],col="blue")
lines(2017:2031,farma[,3],col="blue")
lines(1990:2031,c(predict(fit),farma[,1]),col="red")
polygon(c(2017:2031,rev(2017:2031)),c(farma[,2],rev(farma[,3])),border=NA,col=rgb(0,0,1,.2))
abline(h=0,lty=2)

Here, we get back to my previous post, if we want to consider a possible trend (and not only an intercept)

fit=lm(ratio~year,data=db)
s=summary(fit)$sigma
plot(db$year,db$ratio,type="l",xlim=c(1990,2030),ylim=c(-.2,.7),xlab="year",ylab="ratio")
abline(v=2016,lty=2,col="grey")
ndb=data.frame(year=2017:2031)
pf=predict(fit,newdata=ndb)
farma=cbind(pf,pf-1.96*s,pf+1.96*s)
lines(2017:2031,farma[,2],col="blue")
lines(2017:2031,farma[,3],col="blue")
lines(1990:2031,c(predict(fit),farma[,1]),col="red")
polygon(c(2017:2031,rev(2017:2031)),c(farma[,2],rev(farma[,3])),border=NA,col=rgb(0,0,1,.2))
abline(h=0,lty=2)

Again, the confidence region is not based on inference-related (estimation) error, but on prediction uncertainty: we try to visualize where future observations might lie with (say) 95% chance. Note that we can also consider (why not?) a quadratic regression,

fit=lm(ratio~poly(year,2),data=db)
s=summary(fit)$sigma
plot(db$year,db$ratio,type="l",xlim=c(1990,2030),ylim=c(-.2,.7),xlab="year",ylab="ratio")
abline(v=2016,lty=2,col="grey")
ndb=data.frame(year=2017:2031)
pf=predict(fit,newdata=ndb)
farma=cbind(pf,pf-1.96*s,pf+1.96*s)
lines(2017:2031,farma[,2],col="blue")
lines(2017:2031,farma[,3],col="blue")
lines(1990:2031,c(predict(fit),farma[,1]),col="red")
polygon(c(2017:2031,rev(2017:2031)),c(farma[,2],rev(farma[,3])),border=NA,col=rgb(0,0,1,.2))
abline(h=0,lty=2)

I am usually not a huge fan of those polynomial regressions, but recently I’ve seen that a lot in economics papers (along the lines of “if it’s not linear, add a squared version of the explanatory variable”, which is a rather odd strategy; I’ll publish some posts on that issue this year).

Here again, it might be more clever to consider a logarithmic transformation of the ratio, to ensure that the ratio remains positive,

fit=lm(log(ratio)~year,data=db)
s=summary(fit)$sigma
plot(db$year,db$ratio,type="l",xlim=c(1990,2030),ylim=c(-.2,.7),xlab="year",ylab="ratio")
abline(v=2016,lty=2,col="grey")
ndb=data.frame(year=2017:2031)
pf=predict(fit,newdata=ndb)
farma=cbind(exp(pf+s^2/2),exp(pf-1.96*s),exp(pf+1.96*s))  # exp(pf+s^2/2): mean of the lognormal, i.e. prediction on the original scale
lines(2017:2031,farma[,2],col="blue")
lines(2017:2031,farma[,3],col="blue")
lines(1990:2031,c(exp(predict(fit)+s^2/2),farma[,1]),col="red")
polygon(c(2017:2031,rev(2017:2031)),c(farma[,2],rev(farma[,3])),border=NA,col=rgb(0,0,1,.2))
abline(h=0,lty=2)

Observe that the future trend is mainly driven by the three most recent observations, which were rather low (compared with older ones). What if we remove them?

dbna=db
dbna$ratio[25:27]=NA   # remove the last three observations (2014-2016)
fit=lm(ratio~1,data=dbna)
s=summary(fit)$sigma
plot(dbna$year,dbna$ratio,type="l",xlim=c(1990,2030),ylim=c(-.2,.7),xlab="year",ylab="ratio")
abline(v=2016-3,lty=2,col="grey")
ndb=data.frame(year=2014:2031)
pf=predict(fit,newdata=ndb)
farma=cbind(pf,pf-1.96*s,pf+1.96*s)
lines(2014:2031,farma[,2],col="blue")
lines(2014:2031,farma[,3],col="blue")
lines(1990:2031,c(predict(fit)[1:24],farma[,1]),col="red")
polygon(c(2014:2031,rev(2014:2031)),c(farma[,2],rev(farma[,3])),border=NA,col=rgb(0,0,1,.2))
abline(h=0,lty=2)

Funnier still, if we consider a quadratic regression, we obtain an increasing trend for the future,

fit=lm(ratio~poly(year,2),data=dbna)
s=summary(fit)$sigma
plot(dbna$year,dbna$ratio,type="l",xlim=c(1990,2030),ylim=c(-.2,.7),xlab="year",ylab="ratio")
abline(v=2016-3,lty=2,col="grey")
ndb=data.frame(year=2014:2031)
pf=predict(fit,newdata=ndb)
farma=cbind(pf,pf-1.96*s,pf+1.96*s)
lines(2014:2031,farma[,2],col="blue")
lines(2014:2031,farma[,3],col="blue")
lines(1990:2031,c(predict(fit)[1:24],farma[,1]),col="red")
polygon(c(2014:2031,rev(2014:2031)),c(farma[,2],rev(farma[,3])),border=NA,col=rgb(0,0,1,.2))
abline(h=0,lty=2)

As we can see, it is rather difficult to get relevant predictions for the future based on 27 observations… If anyone has a suggestion, comments are open…

 

What is a Linear Trend, by the way?

I had a very strange discussion on twitter (yes, another one), about regression curves. I think it started with a tweet based on some xkcd picture (just for fun, because it was New Year’s Day)

There were comments on that picture, by econometricians, mainly about ‘significant’ trends when datasets are very noisy. And I mentioned a graph that I had seen a couple of days earlier

Let us reproduce that graph (Roger kindly sent me the dataset)

db=data.frame(year=1990:2016,
ratio=c(.23,.27,.32,.37,.22,.26,.29,.15,.40,.28,.14,.09,.24,.18,.29,.51,.13,.17,.25,.13,.21,.29,.25,.2,.15,.12,.12))
library(ggplot2)

The graph is here (with the same aesthetic conventions as Roger’s initial graph, i.e. using some sort of barplot)

ggplot(db, aes(year, ratio)) +
geom_bar(stat="identity") +
stat_smooth(method = "lm", se = FALSE)

My point was that the ‘confidence band’ of the regression was missing

In R, at least, it is quite natural to get it (and actually, it is the default option of the graph function)

ggplot(db, aes(year, ratio)) +
geom_bar(stat="identity") +
stat_smooth(method = "lm", se = TRUE)

It is hard to claim that the ‘regression line’ is significant (in the sense “significantly non-horizontal”). To be more specific, if we look at the output of the regression model, we get

summary(lm(ratio~year,data=db))

Coefficients:
              Estimate Std. Error t value Pr(>|t|)  
(Intercept)   9.158531   4.549672   2.013    0.055 .
year         -0.004457   0.002271  -1.962    0.061 .
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(which is exactly what Roger used in his graph to plot his red straight line). The p-value associated with the slope, in this linear regression model, is around 6%. But I found Roger’s point puzzling


First of all, let us get back to a more standard graph, with a scatterplot, and not bars,

ggplot(db, aes(year, ratio)) +
stat_smooth(method = "lm") +
geom_point()

Here, we observe points $\{y_{1990},y_{1991},\cdots,y_{2016}\}$. In order to draw that blue line, we assume (Econometrics 101, actually) that those observations are realizations of random variables $\{Y_{1990},Y_{1991},\cdots,Y_{2016}\}$. Randomness here does not come from a survey, or from ‘balls in an urn’. Randomness is because hurricanes and floods are themselves seen as realizations of random events. Yes, there might be measurement errors, but that’s not where randomness comes from (here). When we talk about ‘randomness’, it should be related to ‘model error’, i.e. the error we make if we consider a linear model (here), that is

$Y_t=\beta_0+\beta_1 t+\varepsilon_t$

Even if observations are not obtained from balls in an urn, there is some kind of randomness here. Randomness means that we might have errors (random errors) around the estimated value (that is, on the blue curve), $y_t=\widehat{y}_t+\widehat{\varepsilon}_t$. One might consider a nonlinear model to reduce the error,

ggplot(db, aes(year, ratio)) +
geom_point() +
geom_smooth()

but in that case, the danger is overfitting.

So yes, when we fit a linear model, there is always some kind of randomness, and it is possible to get a ‘confidence band’ that will be very useful for predictions (e.g. for reinsurance purposes here).

Non-Uniform Population Density in some European Countries

A few months ago, I did mention that France was a country with strong inequalities, especially when you look at higher education, and research teams. Paris has almost 50% of the CNRS researchers, while only 3% of the population lives there.

It looks like Paris is the only city, in France. And I wanted to check that, indeed, France is a country with strong inequalities, when we look at population density.

Using data from sedac.ciesin.columbia.edu, it is possible to get population density on a small granularity level,

> rm(list=ls())
> base=read.table(
+      "/home/charpentier/glp00ag.asc",
+      skip=6)
> X=t(as.matrix(base,ncol=8640))
> X=X[,ncol(X):1]

The scales for latitudes and longitudes can be obtained from the text file,

> #ncols         8640
> #nrows         3432
> #xllcorner     -180
> #yllcorner     -58
> #cellsize      0.0416666666667

Hence, we have

> library(maps)
> world=map(database="world")
> vx=seq(-180,180,length=nrow(X)+1)
> vx=(vx[2:length(vx)]+vx[1:(length(vx)-1)])/2
> vy=seq(-58,85,length=ncol(X)+1)
> vy=(vy[2:length(vy)]+vy[1:(length(vy)-1)])/2

If we plot our density, as in a previous post, on Where People Live,

> I=seq(1,nrow(X),by=10)
> J=seq(1,ncol(X),by=10)
> image(vx[I],vy[J],log(1+X[I,J]),
+ col=rev(heat.colors(101)))
> lines(world[[1]],world[[2]])

we can see that there is a match between the big population matrix and the polygons of the countries.

Consider France, for instance. We can download the contour polygon with higher precision,

> library(rgdal)
> fra=download.file(
"http://biogeo.ucdavis.edu/data/gadm2.8/rds/FRA_adm0.rds",
+ "fr.rds")
> Fra=readRDS("fr.rds")
> n=length(Fra@polygons[[1]]@Polygons)
> L=rep(NA,n)
> for(i in 1:n) L[i]=nrow(Fra@polygons[[1]]@Polygons[[i]]@coords)
> idx=which.max(L)
> polygon_Fr=
+       Fra@polygons[[1]]@Polygons[[idx]]@coords
> min_poly=apply(polygon_Fr,2,min)
> max_poly=apply(polygon_Fr,2,max)
> idx_i=which((vx>min_poly[1])&(vx<max_poly[1]))
> idx_j=which((vy>min_poly[2])&(vy<max_poly[2]))
> sub_X=X[idx_i,idx_j]
> image(vx[idx_i],vy[idx_j],
+       log(sub_X+1),col=rev(heat.colors(101)),
+       xlab="",ylab="")
> lines(polygon_Fr)

We are now able to extract information about the population of France only (actually, only mainland France: islands are not considered here, to avoid complicated computations).

> library(sp)
> xy=expand.grid(x = vx[idx_i], y = vy[idx_j])
> dim(xy)
[1] 65730     2

Here, we have 65,730 small squares in the bounding box around France.

> pip=point.in.polygon(xy[,1],xy[,2],
+     polygon_Fr[,1],polygon_Fr[,2])>0
> dim(pip)=dim(sub_X)
> Fr=sub_X[pip]
> sum(Fr)
[1] 58105272

Observe that the total population within the French polygon is close to 60 million people, which is consistent with actual figures. Now, if we look more carefully at the distribution of the population over the French territory,

> library(ineq)
> Gini(Fr)
[1] 0.7296936

The Gini coefficient is rather high (over 70%), but it is also possible to visualize the Lorenz curve,

> LcF=Lc(Fr)
> plot(LcF)

Observe that in 5% of the territory, we can find almost 54% of the population

> 1-min(LcF$L[LcF$p>.95])
[1] 0.5462632

In order to compare with other countries, consider the following function,

> LC=function(rds="fr.rds"){
+ Fra=readRDS(rds)
+ n=length(Fra@polygons[[1]]@Polygons)
+ L=rep(NA,n)
+ for(i in 1:n)
+ L[i]=nrow(Fra@polygons[[1]]@Polygons[[i]]@coords)
+ idx=which.max(L)
+ polygon_Fr=
+      Fra@polygons[[1]]@Polygons[[idx]]@coords
+ min_poly=apply(polygon_Fr,2,min)
+ max_poly=apply(polygon_Fr,2,max)
+ idx_i=which((vx>min_poly[1])&(vx<max_poly[1]))
+ idx_j=which((vy>min_poly[2])&(vy<max_poly[2]))
+ sub_X=X[idx_i,idx_j]
+ xy=expand.grid(x = vx[idx_i], y = vy[idx_j])
+ pip=point.in.polygon(xy[,1],xy[,2],
+     polygon_Fr[,1],polygon_Fr[,2])>0
+ dim(pip)=dim(sub_X)
+ Fr=sub_X[pip]
+ return(list(gini=Gini(Fr),LC=Lc(Fr)))
+ }
> FRA=LC()

For instance, consider Germany, or Italy

> deu=download.file(
"http://biogeo.ucdavis.edu/data/gadm2.8/rds/DEU_adm0.rds","deu.rds")
> DEU=LC("deu.rds")
> ita=download.file(
"http://biogeo.ucdavis.edu/data/gadm2.8/rds/ITA_adm0.rds","ita.rds")
> ITA=LC("ita.rds")

It is possible to plot the Lorenz curves together,

> plot(FRA$LC,col="blue")
> lines(DEU$LC,col="black")
> lines(ITA$LC,col="red")

Observe that France is clearly below the other ones. Compared with Germany, there is a significant difference

> FRA$gini
[1] 0.7296936
> DEU$gini
[1] 0.5088853

More precisely, while 54% of the French population lives on 5% of the territory, the figure is only about 40% for Italians, and 32% for Germans,

> 1-min(FRA$LC$L[FRA$LC$p>.95])
[1] 0.5462632
> 1-min(ITA$LC$L[ITA$LC$p>.95])
[1] 0.3933227
> 1-min(DEU$LC$L[DEU$LC$p>.95])
[1] 0.3261124

How long could it take to run a regression

This afternoon, while I was discussing with Montserrat (aka @mguillen_estany), we were wondering how long it might take to run a regression model. More specifically, how long it might take if we use a Bayesian approach. My guess was that the time should probably be linear in n, the number of observations. But I thought it would be good to check.

Let us generate a big dataset, with one million rows,

> n=1e6
> X=runif(n)
> Y=2+5*X+rnorm(n)
> B=data.frame(X,Y)

Consider as a benchmark the standard linear regression,

> lm_freq = function(n){
+   idx = sample(1:1e6,size=n)
+   reg = lm(Y~X,data=B[idx,])
+   summary(reg)
+ }

Here the regression is run on a subset of smaller size. We can do the same with a Bayesian approach, using stan,

> stan_lm ="
+ data {
+ int N;
+ vector[N] x;
+ vector[N] y;
+ }
+ parameters {
+ real alpha;
+ real beta;
+ real tau;
+ }
+ transformed parameters {
+ real sigma;
+ sigma <- 1 / sqrt(tau);
+ }
+ model{
+ y ~ normal(alpha + beta * x, sigma);
+ alpha ~ normal(0, 10);
+ beta ~ normal(0, 10);
+ tau ~ gamma(0.001, 0.001);
+ }
+ "

We then define (and compile) the model,

> library(rstan)
> system.time( 
  stanmodel <<- stan_model(model_code = stan_lm))
       user      system     elapsed 
      0.043       0.000       0.043

We want to see how long it might take to run a regression,

> lm_bayes = function(n){
+   idx = sample(1:1e6,size=n)
+   fit = sampling(stanmodel,
+       data = list(N=n,
+                   x=X[idx],
+                   y=Y[idx]),
+       iter = 1000, warmup=200)
+   summary(fit)
+ }

We use the following package to see how long it takes

> library(microbenchmark)
> time_lm = function(n){
+  M = microbenchmark(lm_freq(n),
+      lm_bayes(n),times=50)
+  return(tapply(M$time,M$expr,mean))  # average time per expression, in nanoseconds
+ }

We can now compare the time it took with ten, one hundred, one thousand, and ten thousand observations,

> vN = c(10,100,1000,10000)
> T = Vectorize(time_lm)(vN)

we can then plot it

> plot(vN,T[2,]/1e6,log="xy",col="red",type="b",
+      xlab="Number of Observations",ylab="Time")
> lines(vN,T[1,]/1e6,col="blue",type="b")

It looks like (if we forget about the very small samples) the time it takes to run a regression is linear in the sample size, for the two techniques (the frequentist and the Bayesian one).
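To make that ‘linear’ claim slightly more formal, one can regress the log of the computing time on the log of the sample size; a slope close to 1 would suggest linear scaling (a quick sketch, using the vN and T computed above),

> summary(lm(log(T[1,])~log(vN)))$coefficients  # frequentist
> summary(lm(log(T[2,])~log(vN)))$coefficients  # Bayesian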

And actually, the same story holds for logistic regressions. Consider the following dataset,

> n=1e6
> X=runif(n)
> S=-3+2*X+rnorm(n)
> Y=rbinom(n,size=1,prob=exp(S)/(1+exp(S)))
> B=data.frame(X,Y)

The frequentist version of the logistic regression is

> glm_freq = function(n){
+   idx = sample(1:1e6,size=n)
+   reg = glm(Y~X,data=B[idx,],family=binomial)
+   summary(reg)
+ }

and the Bayesian one, using stan,

> stan_glm = "
+ data {
+ int N;
+ vector[N] x;
+ int<lower=0,upper=1> y[N];
+ }
+ parameters {
+ real alpha;
+ real beta;
+ }
+ model {
+ alpha ~ normal(0, 10);
+ beta ~ normal(0, 10);
+ y ~ bernoulli_logit(alpha + beta * x);
+ }
+ "
> stanmodel = stan_model(model_code = stan_glm)
> glm_bayes = function(n){
+   idx = sample(1:1e6,size=n)
+   fit = sampling(stanmodel,
+        data = list(N=n,
+        x = X[idx],
+        y = Y[idx]),
+        iter = 1000, warmup=200)
+   summary(fit)
+ }

Again, we can see how long it takes to run those regression models

> time_glm = function(n){
+   M = microbenchmark(glm_freq(n),
+   glm_bayes(n),times=50)
+   return(tapply(M$time,M$expr,mean))
+ }
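The timing comparison itself is not shown here, but it would presumably be run exactly as before; a sketch (T_glm is just a name used for illustration),

> T_glm = Vectorize(time_glm)(vN)
> plot(vN,T_glm[2,]/1e6,log="xy",col="red",type="b",
+      xlab="Number of Observations",ylab="Time")
> lines(vN,T_glm[1,]/1e6,col="blue",type="b")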

 

Where People Live, part 2

Following my previous post, I wanted to use another dataset to visualize where people live, on Earth. The dataset comes from sedac.ciesin.columbia.edu. Once you register, you can download the database,

> base=read.table("glp00ag15.asc",skip=6)

The database is a ‘big’ 1440×572 matrix; each cell (a latitude-longitude grid point) contains the population,

>  X=t(as.matrix(base,ncol=1440))
>  dim(X)
[1] 1440  572

The dataset looks like

> image(seq(-180,180,length=nrow(X)),
+ seq(-90,90,length=ncol(X)),
+ log(X+1)[,ncol(X):1],col=rev(heat.colors(101)),
+ axes=FALSE,xlab="",ylab="")

Now, if we keep only the places where people actually live (i.e. removing cold deserts and oceans), we get

> M=X>0
> image(seq(-180,180,length=nrow(X)),
+ seq(-90,90,length=ncol(X)),
+ M[,ncol(X):1],col=c("white","light green"),
+ axes=FALSE,xlab="",ylab="")

Then, we can visualize where 50% of the population lives,

> Order=matrix(rank(X,ties.method="average"),
+ nrow(X),ncol(X))
> idx=cumsum(sort(as.numeric(X),
+ decreasing=TRUE))/sum(X)
> M=(X>0)+(Order>length(X)-min(which(idx>.5)))
> image(seq(-180,180,length=nrow(X)),
+ seq(-90,90,length=ncol(X)),
+ M[,ncol(X):1],col=c("white","light green","red"),
+ axes=FALSE,xlab="",ylab="")

50% of the population lives in the red area, and 50% in the green area. More precisely, 50% of the population lives on 0.75% of the Earth,

> table(M)/length(X)*100
M
         0          1          2 
69.6233974 29.6267968  0.7498057

And 90% of the population lives in the red area below (5% of the surface of the Earth)

> M=(X>0)+(Order>length(X)-min(which(idx>.9)))
> table(M)/length(X)*100
M
        0         1         2 
69.623397 25.512335  4.864268 
> image(seq(-180,180,length=nrow(X)),
+ seq(-90,90,length=ncol(X)),
+ M[,ncol(X):1],col=c("white",
+ "light green",col="red"),
+ axes=FALSE,xlab="",ylab="")

Classification on the German Credit Database

In our data science course, this morning, we used random forests to improve predictions on the German Credit dataset. The dataset is

> url="http://freakonometrics.free.fr/german_credit.csv"
> credit=read.csv(url, header = TRUE, sep = ",")

Almost all variables are treated as numeric but, actually, most of them are factors,

> str(credit)
'data.frame':	1000 obs. of  21 variables:
 $ Creditability   : int  1 1 1 1 1 1 1 1 1 1 ...
 $ Account.Balance : int  1 1 2 1 1 1 1 1 4 2 ...
 $ Duration        : int  18 9 12 12 12 10 8  ...
 $ Purpose         : int  2 0 9 0 0 0 0 0 3 3 ...

(etc.) Let us convert the categorical variables to factors,

> F=c(1,2,4,5,7,8,9,10,11,12,13,15,16,17,18,19,20)
> for(i in F) credit[,i]=as.factor(credit[,i])

Let us now create our training/calibration and validation/testing datasets, with 2/3 of the observations for calibration and 1/3 for validation,

> i_test=sample(1:nrow(credit),size=333)
> i_calibration=(1:nrow(credit))[-i_test]

The first model we can fit is a logistic regression, on selected covariates

> LogisticModel <- glm(Creditability ~ Account.Balance + Payment.Status.of.Previous.Credit + Purpose + 
Length.of.current.employment + 
Sex...Marital.Status, family=binomial, 
data = credit[i_calibration,])

Based on that model, it is possible to draw the ROC curve, and to compute the AUC (on the validation dataset),

> fitLog <- predict(LogisticModel,type="response",
+                   newdata=credit[i_test,])
> library(ROCR)
> pred = prediction( fitLog, credit$Creditability[i_test])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)
> AUCLog1=performance(pred, measure = "auc")@y.values[[1]]
> cat("AUC: ",AUCLog1,"\n")
AUC:  0.7340997

An alternative is to consider a logistic regression on all explanatory variables

> LogisticModel <- glm(Creditability ~ ., 
+  family=binomial, 
+  data = credit[i_calibration,])

We might overfit, here, and we should observe that on the ROC curve

> fitLog <- predict(LogisticModel,type="response",
+                   newdata=credit[i_test,])
> pred = prediction( fitLog, credit$Creditability[i_test])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)
> AUCLog2=performance(pred, measure = "auc")@y.values[[1]]
> cat("AUC: ",AUCLog2,"\n")
AUC:  0.7609792

There is a slight improvement here,  compared with the previous model, where only five explanatory variables were considered.

Consider now a classification tree (on all covariates),

> library(rpart)
> ArbreModel <- rpart(Creditability ~ ., 
+  data = credit[i_calibration,])

We can visualize the tree using

> library(rpart.plot)
> prp(ArbreModel,type=2,extra=1)

The ROC curve for that model is

> fitArbre <- predict(ArbreModel,
+                     newdata=credit[i_test,],
+                     type="prob")[,2]
> pred = prediction( fitArbre, credit$Creditability[i_test])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)
> AUCArbre=performance(pred, measure = "auc")@y.values[[1]]
> cat("AUC: ",AUCArbre,"\n")
AUC:  0.7100323

As expected, a single tree has a lower performance than the logistic regression. A natural idea is then to grow several trees using some bootstrap procedure, and then to aggregate the predictions.

> library(randomForest)
> RF <- randomForest(Creditability ~ .,
+ data = credit[i_calibration,])
> fitForet <- predict(RF,
+                     newdata=credit[i_test,],
+                     type="prob")[,2]
> pred = prediction( fitForet, credit$Creditability[i_test])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)
> AUCRF=performance(pred, measure = "auc")@y.values[[1]]
> cat("AUC: ",AUCRF,"\n")
AUC:  0.7682367

Here this model is (slightly) better than the logistic regression. Actually, if we create many training/validation samples, and compare the AUC, we can observe that – on average – random forests perform better than logistic regressions,

> AUC=function(i){
+   set.seed(i)
+   i_test=sample(1:nrow(credit),size=333)
+   i_calibration=(1:nrow(credit))[-i_test]
+   LogisticModel <- glm(Creditability ~ ., 
+    family=binomial, 
+    data = credit[i_calibration,])
+   summary(LogisticModel)
+   fitLog <- predict(LogisticModel,type="response",
+                     newdata=credit[i_test,])
+   library(ROCR)
+   pred = prediction( fitLog, credit$Creditability[i_test])
+   AUCLog2=performance(pred, measure = "auc")@y.values[[1]] 
+   RF <- randomForest(Creditability ~ .,
+   data = credit[i_calibration,])
+   fitForet <- predict(RF,
+                       newdata=credit[i_test,],
+                       type="prob")[,2]
+   pred = prediction( fitForet, credit$Creditability[i_test])
+   AUCRF=performance(pred, measure = "auc")@y.values[[1]]
+   return(c(AUCLog2,AUCRF))
+ }
> A=Vectorize(AUC)(1:200)
> plot(t(A))
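To summarize the 200 replications, one can also look at the average AUC of each method, or at boxplots (a small sketch; the first row of A corresponds to the logistic regression, the second one to the random forest),

> apply(A,1,mean)
> boxplot(t(A),names=c("logistic","random forest"),ylab="AUC")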

Forecasts with ARIMA Models

In our time series class this morning, I was discussing forecasts with ARIMA models. Consider a simple simulated stationary AR(1) time series,

> n=95
> set.seed(1)
> E=rnorm(n)
> X=rep(0,n)
> phi=.85
> for(t in 2:n) X[t]=phi*X[t-1]+E[t]
> plot(X,type="l")

If we fit an AR(1) model,

> model=arima(X,order=c(1,0,0),
+             include.mean = FALSE)
> P=predict(model,n.ahead=20)
> plot(P$pred)
> lines(P$pred+2*P$se,col="red")
> lines(P$pred-2*P$se,col="red")
> abline(h=0,lty=2)
> abline(h=2*P$se[20],lty=2,col="red")
> abline(h=-2*P$se[20],lty=2,col="red")

we observe the exponential decay of the forecast towards 0, and the widening confidence interval (the variance increases from the variance of the white noise to the variance of the stationary time series). Solid lines are the conditional forecasts (given our latest observation, since the AR(1) is a first-order Markov process), and dotted lines are the unconditional ones. Let us store some values, to use them as a benchmark,

> s=P$se[20]
> y=P$pred
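Before moving on, a quick numerical check of the claim about the variance (a sketch, using the model and P objects above): the one-step-ahead forecast error variance should match the innovation variance, while at long horizons it should be close to the variance of the stationary AR(1), i.e. sigma^2/(1-phi^2),

> model$sigma2                           # innovation variance
> P$se[1]^2                              # forecast error variance at horizon 1
> model$sigma2/(1-coef(model)["ar1"]^2)  # stationary variance of the AR(1)
> P$se[20]^2                             # forecast error variance at horizon 20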

If we fit a MA(1) model

> model=arima(X,order=c(0,0,1),
+             include.mean = FALSE)
> P=predict(model,n.ahead=20)
> plot(P$pred)
> lines(P$pred+2*P$se,col="red")
> lines(P$pred-2*P$se,col="red")
> abline(h=0,lty=2)
> abline(h=2*s,lty=2,col="red")
> abline(h=-2*s,lty=2,col="red")
> lines(y,col="grey")

beyond the first forecasting horizon, the forecast is null, and the (conditional) variance remains constant. But if we consider a moving average process of higher order,

> model=arima(X,order=c(0,0,14),
+             include.mean = FALSE)
> P=predict(model,n.ahead=20)
> plot(P$pred)
> lines(P$pred+2*P$se,col="red")
> lines(P$pred-2*P$se,col="red")
> abline(h=0,lty=2)
> abline(h=2*s,lty=2,col="red")
> abline(h=-2*s,lty=2,col="red")
> lines(y,col="grey")

we get an output that can be compared with the AR(1) process. Which makes sense, since our AR(1) process can also be seen as an MA(∞) process, i.e. a moving average of infinite order.
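That MA(∞) representation can actually be checked directly: the coefficients are simply the powers of φ (below with the true value φ=0.85, used for the simulation), and they can be compared with the estimated coefficients of the MA(14) model above,

> ARMAtoMA(ar=.85,ma=0,lag.max=14)  # psi-weights: .85, .85^2, ..., .85^14
> coef(model)                       # estimated MA coefficients, fitted on X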

But if we think that our time series is not stationary, and we fit an integrated model,

> model=arima(X,order=c(0,1,0),
+             include.mean = FALSE)
> P=predict(model,n.ahead=20)
> plot(P$pred)
> lines(P$pred+2*P$se,col="red")
> lines(P$pred-2*P$se,col="red")
> abline(h=0,lty=2)
> abline(h=2*s,lty=2,col="red")
> abline(h=-2*s,lty=2,col="red")
> lines(y,col="grey")

we observe the (standard) martingale property: the forecast is flat, the confidence interval keeps widening and, actually, the variance increases towards infinity (at a linear rate). So one should be very careful when differencing a time series… it will have a huge impact on the forecasts…
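That linear growth of the forecast error variance can be checked directly on the last fitted model (a quick sketch): for the integrated model, the variance at horizon h is simply h times the innovation variance,

> plot(1:20,P$se^2,xlab="horizon",ylab="forecast error variance")
> abline(0,model$sigma2,lty=2)  # line through the origin with slope sigma^2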