Tents, Tweets, and Events: Ongoing Protests and Social Media

Our paper, entitled Tents, Tweets, and Events: The Interplay Between Ongoing Protests and Social Media, written with Marco Toledo Bastos and Dan Mercea, just appeared in the Journal of Communication.

Recent protest movements have fuelled deliberations about the extent to which social media ignite protests. In this paper we compare time-series data of Twitter, Facebook, and onsite protest activity to test the hypothesis of Granger-causality between social media streams and protestors attending demonstrations during the Indignados in Spain, the Occupy movement in the U.S., and the Vinegar protests in Brazil. After applying a Gaussianization procedure to the time series, we confirmed the hypothesis that contentious communication on Twitter and Facebook was Granger-causal of onsite protest activity during the Indignados and the Occupy protests, with bidirectional causality between online and onsite protest activity in the Occupy series. The Vinegar protests in Brazil presented Granger-causality only between Facebook and Twitter and between protestors and injured or arrested protestors. The results indicate that the causal relationship between online and onsite political activity varies considerably across different socioeconomic contexts with different levels of Internet penetration.
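
The Granger-causality machinery is easy to reproduce on simulated data. Here is a minimal sketch (the series and their names are purely illustrative, not the paper's data), with a rank-based Gaussianization step, followed by Granger tests in both directions,

library(lmtest)
set.seed(1)
tweets <- as.numeric(arima.sim(list(ar=.5), n=200))   # simulated "tweets" series
onsite <- c(0, .7*head(tweets,-1)) + rnorm(200)       # driven by lagged tweets
gauss <- function(u) qnorm(rank(u)/(length(u)+1))     # rank-based Gaussianization
grangertest(gauss(onsite) ~ gauss(tweets), order=1)   # do tweets help predict onsite activity?
grangertest(gauss(tweets) ~ gauss(onsite), order=1)   # and the other direction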

Supervised Classification, Logistic and Multinomial

In our Data Science course, we will start discussing classification techniques (in the context of supervised models). Consider the following case, with 10 points and two classes (red and blue),

> clr1 <- c(rgb(1,0,0,1),rgb(0,0,1,1))
> clr2 <- c(rgb(1,0,0,.2),rgb(0,0,1,.2))
> x <- c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
> y <- c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
> z <- c(1,1,1,1,1,0,0,1,0,0)
> df <- data.frame(x,y,z)
> plot(x,y,pch=19,cex=2,col=clr1[z+1])

To get a prediction, i.e. a partition of the space into two parts, consider a logistic regression

> reg=glm(z~x+y,data=df,family=binomial)
> summary(reg)
 
Call:
glm(formula = z ~ x + y, family = binomial, data = df)
 
Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-1.6593  -0.4400   0.2564   0.5830   1.5374  
 
Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)   -1.706      1.999  -0.854    0.393
x             -5.489      5.360  -1.024    0.306
y              8.568      5.515   1.554    0.120
 
(Dispersion parameter for binomial family taken to be 1)
 
    Null deviance: 13.4602  on 9  degrees of freedom
Residual deviance:  8.1445  on 7  degrees of freedom
AIC: 14.144
 
Number of Fisher Scoring iterations: 5

Given some point, the predicted class is obtained using

> pred_1 <- function(x,y){
+ predict(reg,newdata=data.frame(x=x,
+ y=y),type="response")>.5
+ }

(here, the predicted class is simply the most likely one). To visualize it, use

> x_grid<-seq(0,1,length=101)
> y_grid<-seq(0,1,length=101)
> z_grid <- outer(x_grid,y_grid,pred_1)
> image(x_grid,y_grid,z_grid,col=clr2)
> points(x,y,pch=19,cex=2,col=clr1[z+1])

Since the logistic regression is a (generalized) linear model, the line that separates the two regions is a straight line.
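
As a sanity check, that boundary can be drawn explicitly: it is the set of points where the linear predictor is zero, i.e. where the predicted probability is exactly 1/2. A short sketch, using the fitted coefficients,

> beta <- coefficients(reg)
> # the boundary solves beta[1] + beta[2]*x + beta[3]*y = 0
> abline(a = -beta[1]/beta[3], b = -beta[2]/beta[3], lwd=2)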

Continue reading Supervised Classification, Logistic and Multinomial

Modeling Earthquake Dynamics

In 2012, with Marilou Durand, a student at UQAM, I started working on the seismic gap hypothesis, see e.g. McCann et al. (1978) or Kagan & Jackson (1991), or, to be more specific, on the dynamics between earthquake magnitudes (or seismic moments) and inter-occurrence durations. Our paper should appear soon in the Journal of Seismology.

In this paper, we investigate questions arising in Parsons & Geist (2012). Pseudo-causal models connecting magnitudes and waiting times are considered, through generalized regressions. We use conditional models (magnitude given the previous waiting time, and conversely) as an extension of the joint distribution model described in Nikoloulopoulos & Karlis (2008). On the one hand, we fit a Pareto distribution for earthquake magnitudes, where the tail index is a function of the waiting time following the previous earthquake; on the other hand, waiting times are modeled using a Gamma or a Weibull distribution, whose parameters are functions of the magnitude of the previous earthquake. We use those two models, alternately, to generate the dynamics of earthquake occurrence, and to estimate the probability of occurrence of several earthquakes within a year, or a decade.
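
To give a flavour of one of the two conditional models, here is a minimal sketch on simulated data (not the estimation procedure of the paper): waiting times regressed on the magnitude of the previous event, with a Gamma GLM and a log link,

set.seed(1)
n <- 500
m_prev <- 5 + rexp(n)                                # magnitude of the previous event (illustrative)
w <- rgamma(n, shape=2, scale=exp(-2+.5*m_prev)/2)   # waiting time, with mean exp(-2+.5*m_prev)
reg_w <- glm(w ~ m_prev, family=Gamma(link="log"))
summary(reg_w)                                       # should recover (roughly) the intercept -2 and slope .5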

The paper is online at https://hal.archives-ouvertes.fr/.

Feldstein and Horioka, Mother of all puzzles

The joint paper The “Mother of all puzzles” at thirty: A Meta-Analysis is finally out.

This paper provides a meta-analysis of 1651 point estimates of the Feldstein and Horioka saving retention coefficient, from 49 peer-reviewed papers published over three decades. We get two main results. First, correcting for publication bias, we find a consistent underlying coefficient lying between 0.56 and 0.67 for studies using the original paper. Second, the heterogeneity reported in the estimates of the Feldstein and Horioka coefficient can be explained by a few main factors. In particular, we find evidence that the saving retention coefficient is systematically underestimated in models written in first differences, models using the saving ratio or the current account ratio as the dependent variable instead of the investment ratio, and models including indicators of the public deficit or of country size as additional explanatory variables.

John Snow, and Google Maps

In my previous post, I discussed how to use OpenStreetMap (and standard R plotting functions) to visualize John Snow's dataset. But it is also possible to use Google Maps (and ggplot2-style graphics).

library(ggmap)
get_london <- get_map(c(-.137,51.513), zoom=17)
london <- ggmap(get_london)

Again, the tricky part is that the coordinate reference system used here is not the same as the one used on Robin Wilson's blog.

> library(maptools)
> setwd("/cholera/")
> deaths <- readShapePoints("Cholera_Deaths")
> head(deaths@coords)
coords.x1 coords.x2
0  529308.7  181031.4
1  529312.2  181025.2
2  529314.4  181020.3
3  529317.4  181014.3
4  529320.7  181007.9
5  529336.7  181006.0
> X <- deaths@coords

or, use the file X_deaths.RData. Now, we have to change the coordinate reference system

df_deaths <- data.frame(X)
library(sp)
library(rgdal)
coordinates(df_deaths) = ~coords.x1+coords.x2
proj4string(df_deaths) = CRS("+init=epsg:27700")   # British National Grid, the CRS of the shapefile
df_deaths = spTransform(df_deaths, CRS("+proj=longlat +datum=WGS84"))   # convert to longitude/latitude

Here, we have the same coordinate system as the one used in Google Maps. Now, we can add a layer, with the points,

london + geom_point(aes(x=coords.x1, y=coords.x2),data=data.frame(df_deaths@coords),col="red")

Again, it is possible to add the density, as an additional layer,

london + geom_point(aes(x=coords.x1, y=coords.x2),
  data=data.frame(df_deaths@coords), col="red") +
  geom_density2d(data=data.frame(df_deaths@coords),
    aes(x=coords.x1, y=coords.x2), size=0.3) +
  stat_density2d(data=data.frame(df_deaths@coords),
    aes(x=coords.x1, y=coords.x2, fill=..level.., alpha=..level..),
    size=0.01, bins=16, geom="polygon") +
  scale_fill_gradient(low="green", high="red", guide=FALSE) +
  scale_alpha(range=c(0, 0.3), guide=FALSE)

 

John Snow, and OpenStreetMap

While preparing a training session on data visualization, I wanted to get a nice visual for John Snow's cholera dataset. This dataset can actually be found in a great package of famous historical datasets.

library(HistData)
data(Snow.deaths)
data(Snow.streets)

One can easily visualize the deaths, on a simplified map, with the streets (here simple grey segments, see Vincent Arel-Bundock’s post)

plot(Snow.deaths[,c("x","y")], col="red", pch=19, cex=.7,xlab="", ylab="", xlim=c(3,20), ylim=c(3,20))
slist <- split(Snow.streets[,c("x","y")],as.factor(Snow.streets[,"street"]))
invisible(lapply(slist, lines, col="grey"))
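
A natural next step, sketched here with MASS::kde2d (not necessarily what the full post uses), is to overlay a kernel density estimate of the deaths on that simplified map,

library(MASS)
kde <- kde2d(Snow.deaths$x, Snow.deaths$y, n=100, lims=c(3,20,3,20))
contour(kde, add=TRUE, col="blue")   # density contours on top of the current plot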

Continue reading John Snow, and OpenStreetMap

Visualizing Clusters

Consider the following dataset, with (only) ten points

x=c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y=c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
plot(x,y,pch=19,cex=2)

We want to get – say – two clusters. Or more specifically, two sets of observations, each of them sharing some similarities.

Since the number of observations is rather small, it is actually possible to get an exhaustive list of all partitions, and to minimize some criterion, such as the within-cluster variance. Given a vector of cluster labels (0 or 1), we compute the within-cluster sum of squares using

within_var = function(I){
I0 = which(I==0)
I1 = which(I==1)
xbar0 = mean(x[I0])
xbar1 = mean(x[I1])
ybar0 = mean(y[I0])
ybar1 = mean(y[I1])
# within-cluster sum of squares, summed over the two clusters
w = sum( (x[I0]-xbar0)^2+(y[I0]-ybar0)^2 )+
    sum( (x[I1]-xbar1)^2+(y[I1]-ybar1)^2 )
return(c(I,w))
}

Then, to compute all possible partitions, use

base2 = function(z, n=10){
  # binary representation of z, over n digits
  Base.b = rep(0, n)
  ndigits = (floor(logb(z, base=2))+1)
  for(i in 1:ndigits){
    Base.b[n-i+1] = (z%%2)
    z = (z%/%2)}
  return(Base.b)}
L = function(z) within_var(base2(z))
S = sapply(1:(2^10-1), L)

The cluster labels at the minimum are obtained with

n = 10
I = S[1:n, which.min(S[n+1,])]

To visualize those clusters, use

cluster_viz = function(indices){
library(RColorBrewer)
CL2palette = rev(brewer.pal(n = 9, name = "RdYlBu"))
CL2f = CL2palette[c(1,9)]
# colour each point according to its cluster label
plot(x,y,pch=19,xlab="",ylab="",xlim=0:1,ylim=0:1,cex=2,col=CL2f[1+indices])
CL2c = CL2palette[c(3,7)]
I0 = which(indices==0)
I1 = which(indices==1)
xbar0 = mean(x[I0])
xbar1 = mean(x[I1])
ybar0 = mean(y[I0])
ybar1 = mean(y[I1])
# link each point to its cluster centroid, then draw the two centroids
segments(x[I0],y[I0],xbar0,ybar0,col=CL2c[1])
segments(x[I1],y[I1],xbar1,ybar1,col=CL2c[2])
points(xbar0,ybar0,pch=19,cex=1.5,col=CL2c[1])
points(xbar1,ybar1,pch=19,cex=1.5,col=CL2c[2])}

and then, simply

cluster_viz(I)
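
On such a small dataset, the exhaustive search can be cross-checked against the standard kmeans function, which minimizes the same within-cluster sum of squares. A quick sanity check, with many random restarts to avoid a local minimum,

km <- kmeans(cbind(x,y), centers=2, nstart=50)
km$cluster         # should match I, up to a relabelling of the two clusters
sum(km$withinss)   # the minimized within-cluster sum of squares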

But that was possible only because $n$ is not too large (since the total number of scenarios – with only 2 clusters – is $2^n$, or $2^{n-1}$ if we identify each labelling with the one obtained by swapping zeros and ones).

Continue reading Visualizing Clusters

k-means clustering and Voronoi sets

In the context of $k$-means, we want to partition the space of our observations into $k$ classes. Each observation belongs to the cluster with the nearest mean. Here, "nearest" is in the sense of some norm, usually the $\ell_2$ (Euclidean) norm.

Consider the case where we have 2 classes, the means being the 2 black dots. If we partition based on the nearest mean, with the $\ell_2$ (Euclidean) norm we get the graph on the left, and with the $\ell_1$ (Manhattan) norm, the one on the right,

Points in the red region are closer to the mean in the upper part, while points in the blue region are closer to the mean in the lower part. Here, we will always use the standard $\ell_2$ (Euclidean) norm. Note that the graph above is related to Voronoi diagrams (or Voronoy, from Вороний in Ukrainian, or Вороно́й in Russian) with 2 points, the 2 means.
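
Such a two-mean partition is easy to reproduce. Here is a minimal sketch, with two arbitrary (purely illustrative) means, colouring each point of a grid according to its nearest mean, under each norm,

m1 <- c(.5,.7); m2 <- c(.5,.3)   # two illustrative means
xg <- seq(0,1,length=201)
yg <- seq(0,1,length=201)
# TRUE where the first mean is nearer, under each norm
near2 <- outer(xg, yg, function(x,y) (x-m1[1])^2+(y-m1[2])^2 < (x-m2[1])^2+(y-m2[2])^2)
near1 <- outer(xg, yg, function(x,y) abs(x-m1[1])+abs(y-m1[2]) < abs(x-m2[1])+abs(y-m2[2]))
par(mfrow=c(1,2))
image(xg, yg, near2, col=c(rgb(0,0,1,.2),rgb(1,0,0,.2)), main="Euclidean")
points(rbind(m1,m2), pch=19)
image(xg, yg, near1, col=c(rgb(0,0,1,.2),rgb(1,0,0,.2)), main="Manhattan")
points(rbind(m1,m2), pch=19)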

Continue reading k-means clustering and Voronoi sets

Readings on Inequalities

Here are some readings for the course on inequalities,

Inequalities and Quantile Regression

In the course on inequality measures, we've seen how to compute various (standard) inequality indices, based on some sample of incomes (that can be binned into various categories). On Thursday, we discussed the fact that incomes can be related to other variables (e.g. experience), and that comparing income inequalities between countries can be biased if they have very different age structures.

So we've seen quantile regressions. I can mention some old slides (used in a crash course at McGill three years ago), as well as a more technical discussion on ties, and the non-uniqueness of the regression line.

In order to illustrate, consider the following dataset

> salary <- read.table("http://data.princeton.edu/wws509/datasets/salary.dat",header=TRUE)
> plot(salary$yd,salary$sl)
> abline(lm(sl~yd,data=salary),col="blue")

We have here the standard regression line, obtained using ordinary least squares: the expected income given experience. But we can also use a quantile regression,

$Q_\tau(Y\vert\boldsymbol{X})=\boldsymbol{X}^{\mathsf{T}}\boldsymbol{\beta}$

> library(quantreg)
> Q10 <- rq(sl~yd,data=salary,tau=.1)
> Q90 <- rq(sl~yd,data=salary,tau=.9)
> abline(Q10,col="red")
> abline(Q90,col="purple")

A classical tool to describe inequalities is the ratio of the 90% quantile over the 10% quantile (among many others),

> ratio9010 = function(age){
+   predict(Q90,newdata=data.frame(yd=age))/
+   predict(Q10,newdata=data.frame(yd=age))
+ }

For instance, among people with 5 years of experience, this inequality index is

> ratio9010(5)
1.401749

while for people with 30 years of experience, it would be

> ratio9010(30)
1.9488

If we plot the evolution of this 90-10 ratio as a function of experience, we get the following increasing trend

> A=0:30
> plot(A,Vectorize(ratio9010)(A),type="l",ylab="90-10 quantile ratio")
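
A related picture, just a sketch reusing the objects above, shows the whole family of quantile regression lines, for a grid of probability levels,

> plot(salary$yd, salary$sl)
> # one regression line per quantile level, from 10% to 90%
> for(tau in seq(.1, .9, by=.1)) abline(rq(sl~yd, data=salary, tau=tau), col="grey")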

So clearly, comparing inequalities between two groups should be done carefully, ceteris paribus, probably by including some covariates.

Data Analysis and Maps

On Friday, in class, we will continue with (unsupervised) classification, in particular hierarchical methods, and trees. We will use the following dataset, with the results of the 2012 presidential elections (first round),

elections2012 <- read.csv("http://komodo.regardscitoyens.org/public/presidentielles2012/resultats/resultats_departements_final_T1.csv",
sep=";",header=TRUE,dec=",")

The dataset we keep contains only the votes cast, as percentages

voix <- which(substr(names(elections2012),1,12)=="X..Voix.Exp.")
X <- as.matrix(elections2012[,voix])
colnames(X) <- substr(names(elections2012)[voix],13,nchar(names(elections2012)[voix]))
rownames(X) <- elections2012[,3]

Sometimes, it is appropriate to normalize the data. Here, across départements, it is not necessary. Across candidates, however, it would be (as with PCA, actually). To visualize the distances (between départements), we can use

heatmap(X)

We can run an agglomerative hierarchical clustering on our matrix of distances between rows

cah <- hclust(dist(X))
plot(cah,cex=.6)

and if we want to keep 5 groups (say), we use

rect.hclust(cah,k=5)
groups.5 <- cutree(cah,5)

We can also visualize the clusters with

library(dendroextras)
plot(colour_clusters(cah,k=5))

To understand who ends up in which group, we can use

aggregate(X,list(groups.5),mean)

which returns the "average votes" for each candidate, in each group. Now, to finish with the visualization, we can use the following code (admittedly a bit long) to display the different groups on a map

carte_classe <- function(groupes){
library(stringr)
# normalize département names in the election data (lowercase, strip spaces and punctuation)
elections2012$dep <- elections2012$Libellé.du.département
elections2012$dep <- tolower(elections2012$dep)
elections2012$dep <- str_replace_all(elections2012$dep, pattern = " |-|'|/", replacement = "")
library(maps)
# normalize the names used by the france map the same way
france <- map(database="france")
france$dep <- france$names
france$dep <- tolower(france$dep)
france$dep <- str_replace_all(france$dep, pattern = " |-|'|/", replacement = "")
corresp_noms <- elections2012[, c("Libellé.du.département", "dep")]
corresp_noms$dep[which(corresp_noms$dep %in% "corsesud")] <- "corsedusud"
# one colour per cluster, matched to the map polygons by département name
col2001 <- groupes+1
names(col2001) <- corresp_noms$dep[match(names(col2001), corresp_noms$Libellé.du.département)]
color <- col2001[match(france$dep, names(col2001))]
map(database="france", fill=TRUE, col=color)
}

carte_classe(groups.5)

With these functions, we should be able to try different methods for building the groups.
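
For instance, a quick sketch (assuming the matrix X above is still in memory) to compare the groups obtained with different linkage criteria,

for(m in c("complete", "average", "ward.D2")){
  cah_m <- hclust(dist(X), method=m)
  print(table(cutree(cah_m, k=5)))   # sizes of the 5 groups, for each linkage
}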

Inequality, Poverty and Welfare

For the fourth course on Inequalities, we will get back to quantile regression, and discuss welfare functions, as well as poverty indices. Slides are now online.

To illustrate, we will use the following datasets

uk88 <- read.csv("http://www.vcharite.univ-mrs.fr/pp/lubrano/cours/fes88.csv",sep=";",header=FALSE)$V1
uk92 <- read.csv("http://www.vcharite.univ-mrs.fr/pp/lubrano/cours/fes92.csv",sep=";",header=FALSE)$V1
uk96 <- read.csv("http://www.vcharite.univ-mrs.fr/pp/lubrano/cours/fes96.csv",sep=";",header=FALSE)$V1
cpi <- c(421.7, 546.4, 602.4)   # price index for each year, used to deflate incomes
y88 <- uk88/cpi[1]
y92 <- uk92/cpi[2]
y96 <- uk96/cpi[3]
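
Before turning to the slides, a quick first look at the data. This minimal sketch, assuming the ineq package is installed, compares the Gini index and the Lorenz curves of the three deflated samples,

library(ineq)
sapply(list(uk88=y88, uk92=y92, uk96=y96), Gini)   # Gini index for each year
plot(Lc(y88), main="Lorenz curves")
lines(Lc(y92)$p, Lc(y92)$L, col="blue")
lines(Lc(y96)$p, Lc(y96)$L, col="red")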

and for the part on applications of quantile regression

salary <- read.table("http://data.princeton.edu/wws509/datasets/salary.dat",header=TRUE)

Big Data: Moving from Correlation Analysis to Causal Interpretation

The role of an actuary in an insurance company is quite often to estimate the probability that some event will occur, or its possible financial consequences, as a function of so-called "explanatory" variables. We do observe that some variables are "statistically correlated" with the occurrence of an accident within the year, but claiming that we thereby have an "explanation" is somewhat dangerous. These were nevertheless the questions raised during the debates on the use of gender in insurance pricing: if women have, on average, fewer accidents than men, why not use this variable in pricing? The problem is that stopping at a study of correlations does not really allow us to understand what actually lies behind a phenomenon. In a previous post, we noted that such studies could lead to paradoxical, and erroneous, interpretations.
As Dubuisson (2008) noted, "as actuaries themselves acknowledge, establishing a causal link between the chosen criterion and the variation in claims experience is akin to the quest for the Holy Grail". Massive data, big data, may give access to more information, and help us better understand what can cause a risk. A classic example is a paradox, about infant mortality, babies' birth weight and maternal smoking, that took epidemiologists a long time to understand. We revisit this example here, and analyze how the use of massive data made it possible to better understand what could actually cause excess infant mortality.

Continue reading Big Data: Moving from Correlation Analysis to Causal Interpretation
