Tag Archives: Ewen
Can historical demography benefit from the collaborative data of genealogy websites?
Our article Can historical demography benefit from the collaborative data of genealogy websites? (La démographie historique peut-elle tirer profit des données collaboratives des sites de généalogie ?), written with Ewen Gallic, has just appeared in Population, the INED journal. The R code is still online at https://3wen.github.io/genealogie_fr/
Modeling Joint Lives within Families
With Olivier Cabrignac and Ewen Gallic, we recently uploaded a research paper, entitled “Modeling Joint Lives within Families”
Family history is usually seen as a significant factor that insurance companies look at when someone applies for a life insurance policy. Where it is used, a family history of cardiovascular disease, death by cancer, or high blood pressure and diabetes can result in higher premiums or no coverage at all. In this article, we use massive (historical) data to study dependencies between life lengths within families. While joint life contracts (between a husband and a wife) have long been studied in the actuarial literature, little is known about the dependence between children and parents. We illustrate those dependencies using 19th century family trees in France, and quantify their implications for annuity computations. For parents and children, we observe a modest but significant positive association between life lengths. It yields different estimates for remaining life expectancy, present values of annuities, or whole life insurance guarantees, given information about the parents (such as the number of parents alive). A similar but weaker pattern is observed when using information on grandparents.
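To make that last point concrete, here is a generic sketch (with made-up mortality rates, not the paper's estimates or model) of how information that shifts death probabilities affects the present value of a whole-life annuity:

# generic illustration, not the paper's model: expected present value of a
# whole-life annuity-due, from a vector of one-year survival probabilities px
annuity = function(px, i = .02){
  kpx = cumprod(c(1, px))          # k-year survival probabilities, k = 0, 1, ...
  v   = (1 + i)^-(0:length(px))    # discount factors v^k
  sum(v * kpx)
}
qx = pmin(.01 * 1.1^(0:60), 1)     # toy Gompertz-like one-year death probabilities
annuity(1 - qx)                    # baseline annuity value
annuity(1 - .9 * qx)               # e.g. if family information lowers qx by 10%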
The paper is online at https://arxiv.org/abs/2006.08446.
L’assurance aujourd’hui
A very nice issue of the Annales des Mines on insurance today, coordinated by Pierre-Charles Pradier.
Pierre-Charles was kind enough to ask me for two short contributions,
including one with Laurence Barry and Ewen Gallic on predictive probabilities.
Insurance data science : Text
At the Summer School of the Swiss Association of Actuaries, in Lausanne, I will start talking about text-based data and NLP this Thursday. Slides are available online
Ewen Gallic (AMSE) will present a tutorial on tweets. I might also upload a few additional slides on LSTMs (recurrent neural networks).
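To fix ideas, here is a tiny toy example (mine, not the course material) of the kind of bag-of-words preprocessing that text-based data usually requires before any modeling:

# toy example: from raw text to word counts, in base R
txt = c("Insurance data science is fun",
        "text data need some cleaning before any model is fitted")
words = unlist(strsplit(tolower(txt), "[^a-z]+"))   # lowercase and tokenize
sort(table(words), decreasing = TRUE)               # simple term frequencies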
Insurance data science : Pictures
At the Summer School of the Swiss Association of Actuaries, in Lausanne, following Jean-Philippe Boucher's (UQAM) part on telematics data, I will start talking about pictures this Wednesday. Slides are available online
Ewen Gallic (AMSE) will present a tutorial on satellite pictures, and a simple classification problem related to the detection of Alzheimer's disease.
We will try to identify what is in the following pictures, starting with the car
(we will see that the car is indeed identified)
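For instance, such a classification could be run with a pretrained network; here is a sketch of how one might do it in R with the keras package (my own illustration, not necessarily the course material; the ResNet50 model and the file name car.jpg are assumptions):

# classify one picture with a pretrained ResNet50 (keras R package assumed installed)
library(keras)
model = application_resnet50(weights = "imagenet")
img   = image_load("car.jpg", target_size = c(224, 224))   # hypothetical file name
x     = image_to_array(img)
x     = array_reshape(x, c(1, dim(x)))
x     = imagenet_preprocess_input(x)
pred  = predict(model, x)
imagenet_decode_predictions(pred, top = 3)   # top 3 predicted labels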
We will also discuss previous pictures from the summer school
Insurance data science : use and value of unusual data #1
Next week, I will be at the Summer School of the Swiss Association of Actuaries, in Lausanne, with Jean-Philippe Boucher (UQAM) and Ewen Gallic (AMSE).
I will give an introductory talk on Monday morning, and the slides are now available
There will be some hands-on applications in R; I will share some code in the slides.
7èmes Rencontres R
At the end of this week, the 7th Rencontres R are being held in Rennes. Ewen will give a (short) presentation of our work in demography on Friday morning… The slides are online (and so is the article).
European R Users Meeting
Wednesday, I will give a talk at the European R Users Meeting about our recent work (with Ewen Gallic) on the use of collaborative data in demography. The slides (actually a longer version of them) are now online, including a 16:9 version that should fit the screen better.
Historical demography using collaborative genealogical data
For several months now, Ewen Gallic and I have been working on genealogical data. The first paper, Étude de la démographie française du XIXe siècle à partir de données collaboratives de généalogie, is finished. It is a methodological note describing how we reconstructed the family trees of 2.45 million people (701 million records that needed cleaning up), corresponding to the descendants, over 3 generations, of people born in France between 1800 and 1804.
To illustrate the value of these rich data, we started by studying mortality over the 19th century
and noted that, while we underestimate the mortality of those under 20 and of the very old, our data are overall consistent with what we expected (perhaps less so for birth rates). We also started to study migration, from generation to generation
(here, the proportion of descendants born in the same département as their ancestors). Many other results can be found in the paper, online on hal, and many more on the github page created by Ewen.
Is Big Data Good or Evil?
Tonight, with Ewen Gallic.
Shapefiles from Isodensity Curves
Recently, with @3wen, we wanted to play with isodensity curves. The problem is that it is difficult to get – numerically – the equation of the contour (even if we can easily plot it). Consider the following surface (just for fun, in order to illustrate the idea)
> f=function(x,y) x*y+(1-x)*(1-y)
> u=seq(0,1,length=21)
> v=seq(0,1,length=11)
> f=outer(u,v,f)
> persp(u,v,f,theta=30,phi=10,box=TRUE,
+ shade=TRUE,ticktype="detailed",xlab="",
+ ylab="",zlab="",col="yellow")
For instance, assume that we want to locate areas where the density exceeds 0.7 (here in the lower left corner, SW, and the upper right corner, NE)
> image(u,v,f)
> contour(u,v,f,add=TRUE,levels=.7)
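To actually recover the contour numerically, one possibility (a minimal sketch, not the full construction of the original post) is contourLines(), which returns the (x, y) coordinates of each piece of the level curve; those coordinates can then be turned into polygons or shapefiles:

# extract the level curve f = 0.7 as a list of coordinate paths (u, v, f as above)
cl = contourLines(u, v, f, levels = .7)
length(cl)                          # two pieces here: the SW and NE corners
head(cbind(cl[[1]]$x, cl[[1]]$y))   # numerical coordinates of the first piece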
Kernel Density Estimation with Ripley’s Circumferential Correction
The revised version of the paper Kernel Density Estimation with Ripley’s Circumferential Correction is now online, on hal.archives-ouvertes.fr/.
In this paper, we investigate (and extend) Ripley's circumference method to correct the bias of density estimation at the edges (or frontiers) of regions. The idea of the method was theoretical and difficult to implement. We provide a simple technique, based on properties of Gaussian kernels, to efficiently compute the weights needed to correct border bias on the frontier of the region of interest, with an automatic selection of an optimal radius for the method. We illustrate the use of that technique to visualize hot spots of car accidents and campsite locations, as well as locations of bike thefts.
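To give a rough idea of what border-bias correction does, here is a simplified sketch on the unit square, using the classical renormalisation weights with Gaussian kernels (an illustration only, not the circumferential method developed in the paper):

# simplified illustration only: weighted KDE on [0,1]^2, where each point is
# weighted by the inverse of the Gaussian kernel mass remaining inside the square
set.seed(1)
x = runif(500); y = runif(500)   # uniform sample, true density is 1 everywhere
h = .1                           # bandwidth
w = 1 / ((pnorm((1 - x)/h) - pnorm(-x/h)) * (pnorm((1 - y)/h) - pnorm(-y/h)))
grid = seq(0, 1, length = 51)
dens = outer(grid, grid, Vectorize(function(gx, gy)
  mean(w * dnorm(gx, x, h) * dnorm(gy, y, h))))
image(grid, grid, dens)          # roughly flat, instead of dropping near the borders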
There are new applications, and new graphs, too
Most of the codes can be found on https://github.com/ripleyCorr/Kernel_density_ripley (as well as datasets).
Date of death, birthday and Elvis Presley
10 days ago, a study published on http://www.annalsofepidemiology.org/ mentioned that “Death has a preference for birthdays” (as claimed in the title). The conclusion of the paper is that, in general, birthdays do not evoke a postponement mechanism but appear to end up in a lethal way more frequently than expected (“anniversary reaction”). Well, this is not new, and several previous articles have mentioned that point, e.g. Angermeyer et al. (1987).
I found the idea interesting since, in demography, there is a large literature trying to extrapolate death rates from discrete to continuous time. Extrapolations are usually extremely smooth, and none of them integrates that specific aspect of mortality on the birthday itself. The problem is that it is rather difficult to say anything, since datasets with individual observations are rarely available online.
But yesterday, @coulmont sent me a tweet mentioning a website. I do not know if this is legal (even if some explanations are given), but I will mention it, courtesy of http://ssdmf.info/. It is the so-called Social Security Death Master File, which contains individual information about deaths in the US, as well as geographic information (as described on http://www.ssa.gov/), for people who had a social security number.
With R, it is possible to work on those files (even if they are huge, with tens of millions of observations). For instance, we can check who is inside.
> elvis=scan("ssdm2",skip=22371720,n=1,what="character",sep=",")
> elvis
[1] " 409522002PRESLEY ELVIS 0800197701081935 "
If you believe that Elvis is dead, you might agree that this database can be accurate (or at least, not too bad). We can also see here how to read the record: the last 8 digits give the date of birth (January 8, 1935), and the 8 digits before give the date of death (August 1977; the day is coded "00", even though Elvis died on August 16, 1977). So there are obviously some problems with the dataset, and we first remove all the observations that do not give us proper dates. Then, the idea is to assume that the person died in 2000 (any year would do, since the point is to focus on days and months only). We then count the number of days between the day of death and the birthday in 2001 (which would have been after) and the one in 2000 (which was either before or after the death), so that we can derive the number of days after the birthday,
# base holds the raw fixed-width records read from the Death Master File
dates=substr(base,66,81)
death=as.Date(substr(dates,1,8),"%m%d%Y")
birth=as.Date(substr(dates,9,16),"%m%d%Y")
indice=is.na(death)|is.na(birth)
mean(indice)
mdeath=substr(dates,1,2)
ddeath=substr(dates,3,4)
mbirth=substr(dates,9,10)
dbirth=substr(dates,11,12)
indice=which(ddeath!="00")
birth1=as.Date(paste(mbirth[indice],dbirth[indice],"2000",sep=""),"%m%d%Y")
birth2=as.Date(paste(mbirth[indice],dbirth[indice],"2001",sep=""),"%m%d%Y")
death=as.Date(paste(mdeath[indice],ddeath[indice],"2000",sep=""),"%m%d%Y")
k=length(indice)
diffday=cbind((as.numeric(death-birth1))[1:k],
(as.numeric(death-birth2))[1:k])
# keep the smallest non-negative difference
DIFF=apply(diffday,1,function(x) {min(x[x>=0])})
What we have here is the number of days following the previous birthday. If we look at the distribution of that number of days, we obtain
counts=table(DIFF)
plot(as.numeric(names(counts)),
as.numeric(counts))
> counts["0"]/(mean(counts[100:200]))
       0 
1.121261 
Thus, the death excess on the day of birth was around 12%, which is rather close to the figure obtained from the Swiss mortality statistics 1969–2008 (in Ajdacic-Gross et al. (2012)). Note that here, we are just playing with a small subset of the entire dataset.
That database is probably extremely interesting, except that it suffers from a huge selection bias: only dead people are in it. So it might be useless if we wish to study the life expectancy of people named Bill versus people named Georges (which was something I wanted to investigate initially). But we'll see what else we can do with it (since Ewen has been able to write some code to go through that huge dataset).
Do you still have time to sleep?
Last week, @3wen (Ewen) helped me write nice R functions to extract tweets in R and build datasets containing a lot of information. I had tried a couple of times on my own: once on tweet contents (but it was not convincing) and once on Twitter activity following an event (e.g. the death of someone famous). I have to admit that I am not a big fan of the databases that can be generated using standard functions to study tweets. For instance, we can only extract tweets, not re-tweets (which are also an important indicator of Twitter activity). @3wen suggested using
require("RJSONIO")
The first step is to extract some information from a tweet, and store it in a dataset (details can be found on https://dev.twitter.com/)
obtenir_ligne <- function(unTweet){
  date_courante=unTweet$created_at
  id_courant=unTweet$id_str
  text=unTweet$text
  nb_followers=unTweet$user$followers_count
  nb_amis=unTweet$user$friends_count
  utc_offset=unTweet$user$utc_offset
  listeMentions=unTweet$entities$user_mentions
  return(c(list(c(id_courant,date_courante,text,
    nb_followers,nb_amis,utc_offset)),
    list(do.call("rbind",lapply(listeMentions,
    function(x,id_courant) c(id_courant,
    x$screen_name),unTweet$id_str)))))
}
Now that we have the code to extract information from one tweet, let us find several tweets, from one user, say my account,
nom="Freakonometrics"
The (small) problem here is that we have a limitation: we can only get 100 tweets per call of the function
n=100
tweets_courants=scan(paste(
"http://api.twitter.com/1/statuses/user_timeline.json?include_entities=true&include_rts=true&screen_name=",
nom,"&count=",n,sep=""),what = "character",
encoding="latin1")
tweets_courants=paste(tweets_courants[
1:length(tweets_courants)],collapse=" ")
tweets_courants=fromJSON(tweets_courants,
method = "C")
Then, we use our function to build a database with 100 lines,
extracTweets <- lapply(tweets_courants,
obtenir_ligne)
mentions=do.call("rbind",lapply(extracTweets,
function(x) x[[2]]))
colnames(mentions)=list("id","screen_name")
res=t(sapply(extracTweets,function(x) x[[1]]))
colnames(res) <- list("id","date","text",
"nb_followers","nb_amis","utc_offset")
The idea then is simply to use a loop, based on the latest id observed
dernier_id=tweets_courants[[length(tweets_courants)]]$id_str
So, here we go,
compteurLimite=100
while(compteurLimite<4100){
  tweets_courants=scan(paste(
  "http://api.twitter.com/1/statuses/user_timeline.json?include_entities=true&include_rts=true&screen_name=",
  nom,"&count=",n,"&max_id=",dernier_id,sep=""),
  what = "character", encoding="latin1")
  tweets_courants=paste(tweets_courants[
  1:length(tweets_courants)],collapse=" ")
  tweets_courants=fromJSON(tweets_courants,
  method = "C")
  extracTweets <- lapply(tweets_courants[
  2:length(tweets_courants)],obtenir_ligne)
  mentions=rbind(mentions,do.call("rbind",
  lapply(extracTweets,function(x) x[[2]])))
  res=rbind(res,t(sapply(extracTweets,function(x) x[[1]])))
  dernier_id=tweets_courants[[length(
  tweets_courants)]]$id_str
  compteurLimite=compteurLimite+100
}
resFreakonometrics=res=
data.frame(res,stringsAsFactors=FALSE)
All the information about my own tweets (and re-tweets) is stored in a nice dataset. Actually, we have even more, since we have also extracted the names of people mentioned in the tweets,
mentionsFreakonometrics=data.frame(mentions)
We can look at people I mention in my tweets
gazouillis=sapply(split(mentionsFreakonometrics,
mentions$screen_name),nrow)
gazouillis=gazouillis[order(gazouillis,
decreasing=TRUE)]
plot(gazouillis)
plot(gazouillis,log="xy")
> gazouillis[1:20]
        tomroud freakonometrics       adelaigue       dmonniaux 
            155              84              77              56 
    J_P_Boucher         embruns      SkyZeLimit        coulmont 
             42              39              35              31 
     Fabrice_BM            3wen          obouba          msotod 
             31              30              29              27 
         StatFr     nholzschuch        renaudjf        squintar 
             26              25              23              23 
        Vicnent        pareto35        romainqc        valatini 
             23              22              22              22 
If we plot those frequencies, we can clearly observe a standard Pareto distribution.
Now, let us spend some time with the dates and times of tweets (that was the initial goal of this post)… Once again, there is a (small) technical problem to deal with: language. We need a function to convert dates in English (on Twitter) to dates in French (since I have a French version of R),
changer_date_anglais <- function(date_courante){
  mois <- c("Jan","Fév", "Mar", "Avr", "Mai", "Jui",
  "Jul", "Aoû", "Sep", "Oct", "Nov", "Déc")
  months <- c("Jan", "Feb", "Mar", "Apr", "May", "Jun",
  "Jul", "Aug", "Sep", "Oct", "Nov", "Dec")
  jours <- c("Lun","Mar","Mer","Jeu",
  "Ven","Sam","Dim")
  days <- c("Mon","Tue","Wed","Thu",
  "Fri","Sat","Sun")
  leJour <- substr(date_courante,1,3)
  leMois <- substr(date_courante,5,7)
  return(paste(jours[match(leJour,days)]," ",
  mois[match(leMois,months)],substr(
  date_courante,8,nchar(date_courante)),sep=""))
}
So now, it is possible to plot the times where I am online, tweeting,
DATE=Vectorize(changer_date_anglais)(res$date)
DATE2=strptime(as.character(DATE),
"%a %b %d %H:%M:%S %z %Y")
lt=as.POSIXlt(DATE2, origin="1970-01-01")
heure=lt$hour+lt$min/60
plot(DATE2,heure)
On this graph, we can see that I am clearly not online for almost 6 hours a day (or at least not on Twitter). It is possible to visualize more precisely the periods of the day when I might be on Twitter,
hist(heure,breaks=0:24,col="light green",proba=TRUE)
X=c(heure-24,heure,heure+24)
d=density(X,n = 512, from=0, to=24,bw=1)
lines(d$x,d$y*3,lwd=3,col="red")
or, if we want to illustrate with some kind of heat plot,
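here is a minimal sketch of what such a plot could look like (my own reconstruction from the lt object above, not necessarily the figure originally shown), counting tweets per day of the week and hour of the day:

# a possible heat plot: tweets per day of the week and hour of the day
jour   = factor(lt$wday, levels = 0:6)     # 0 = Sunday, ..., 6 = Saturday
heures = factor(lt$hour, levels = 0:23)
tab    = table(jour, heures)               # 7 x 24 table of counts
z      = matrix(tab, nrow = 7)             # plain matrix, for image()
image(0:6, 0:23, z, col = heat.colors(12),
      xlab = "day of the week (0 = Sunday)", ylab = "hour of the day")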
Note that we did it for my Twitter account, but we can also run the code on (almost) anyone on Twitter. Consider e.g. @adelaigue. Since Alexandre is tweeting in France, we have to play with time-zones,
# extractR() presumably wraps the extraction steps above, for a given screen name
res=extractR("adelaigue")
DATE=Vectorize(changer_date_anglais)(res$date)
DATE2=strptime(as.character(DATE),
"%a %b %d %H:%M:%S %z %Y",tz = "GMT")+2*60*60
or I can also look at @skythelimit, who is usually tweeting from Singapore (I am in Montréal). We can clearly see when we might overlap,
res=extractR("skythelimit")
Nice, isn't it? But it is possible to do much better… for instance, for those who do not specifically ask not to be geolocated, we can see where they tweet during the day, and during the night… I am quite sure a dozen posts with those functions can be written…