Category Archives: Databases

Le sport en France

I wanted to take advantage of the new term to put online a few posts on data science (as people say), based in particular on R projects from the Data Science for Actuaries program. Last year, I had already published a post on sport (“le sport, une activité de riches“). This time, following what Benoit proposed, we will look at who the licence holders of the various sports federations are, and where they live. As always in R, we start by loading the libraries we are going to use…

library(rgdal)
library(sp)
library(reshape2)
library(data.table)
library(ggplot2)
library(gridExtra)
library(ggmap)
library(RColorBrewer)
library(classInt)
library(backports)
library(OpenStreetMap)

A quick aside: in practice, we rarely know in advance which packages will be needed… ex post, we moved all these library calls to the top. I actually think it would be better to load each package right where it is used. Right, next we need the data

Url_Licences = "https://www.data.gouv.fr/s/resources/recensement-des-licences-et-clubs-aupres-des-federations-sportives-agreees-par-le-ministere-charge-d/20180131-163516/Licences_2015.csv"
Licences_2015 = read.csv(file=Url_Licences, header=TRUE, sep=",",stringsAsFactors = FALSE) 
Url_Federation = "http://freakonometrics.free.fr/Projet_R/Code_federation.csv"
Code_Fede = read.csv(Url_Federation, sep=";",header=FALSE, skip=3)
colnames(Code_Fede) = c("Code_Federation","Libelle_Federation")

We renamed the columns above (it will be simpler later on), and here we keep only the rows of interest

Code_Fede = Code_Fede[c(1:31,33:92),c(1:2)]

We then need the coordinates of the towns, to draw a map

Commune = read.csv(file="https://www.data.gouv.fr/fr/datasets/r/554590ab-ae62-40ac-8353-ee75162c05ee", sep=";", header=TRUE)

Actually, only the latitude and the longitude are of interest

Geocod = colsplit(Commune$coordonnees_gps, ",", c("Latitude", "Longitude"))
Commune = data.frame(Commune,Geocod)

A bit of cleaning will not hurt

Commune$Ligne_5 = NULL
Commune$coordonnees_gps = NULL
doublons = which(duplicated(Commune$Code_commune_INSEE)) # detect rows with a duplicated commune code
Commune_Indiv = Commune[-doublons,]

We now add a label for each sport (federation)

Licences_2015 = merge(x=Licences_2015, y=Code_Fede, by.x="fed_2014", by.y="Code_Federation", all.y=TRUE)

And we also drop the rows where the commune code is missing (since those rows cannot be used)

Licences_2015 = Licences_2015[!is.na(Licences_2015$newcog2),]

We need to pay a bit of attention to Paris and Marseille, since some data there are given by arrondissement,

for (i in 1:nrow(Licences_2015)){
  if (Licences_2015[i,c("newcog2")]=="75056") {
    (Licences_2015[i,c("newcog2")] = "75101")}
  if (Licences_2015[i,c("newcog2")]=="13055") {
    (Licences_2015[i,c("newcog2")] = "13101")}}
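# note: an equivalent, vectorized version of the same recoding (a small sketch of
# my own, not from the original code; re-running it here is a harmless no-op
# since the codes have already been replaced by the loop above):
Licences_2015$newcog2[Licences_2015$newcog2=="75056"] = "75101"
Licences_2015$newcog2[Licences_2015$newcog2=="13055"] = "13101"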
Licences_2015 = merge(x=Licences_2015, y=Commune_Indiv, by.x="newcog2", by.y="Code_commune_INSEE", all.x=TRUE)

Almost there. We now create the licence-rate variable (number of licences divided by the population) for each commune

Licences_2015$Taux_Licencies = ifelse(Licences_2015$pop_2014 != 0,Licences_2015$l_2015/Licences_2015$pop_2014,0)

Now we can play! Or almost… a few aggregations remain to be done, depending on what we want to display.

df_Nb_Lic_Agg_Fed = aggregate(data.frame(
Nb_Licence = Licences_2015$l_2015,
Nb_hommes = Licences_2015$l_h_2015,
Nb_femmes = Licences_2015$l_f_2015,
NbLicences_0_4_Ans=Licences_2015$l_0_4_2015,
NbLicences_5_9_Ans=Licences_2015$l_5_9_2015,
NbLicences_10_14_Ans=Licences_2015$l_10_14_2015,
NbLicences_15_19_Ans=Licences_2015$l_15_19_2015,
NbLicences_20_29_Ans=Licences_2015$l_20_29_2015,
NbLicences_30_44_Ans=Licences_2015$l_30_44_2015,
NbLicences_45_59_Ans=Licences_2015$l_45_59_2015,
NbLicences_60_74_Ans=Licences_2015$l_60_74_2015,
NbLicences_75_Ans=Licences_2015$l_75_2015,
Nb_0_4_Ans=Licences_2015$pop_0_4_2014,
Nb_5_9_Ans=Licences_2015$pop_5_9_2014,
Nb_10_14_Ans=Licences_2015$pop_10_14_2014,
Nb_15_19_Ans=Licences_2015$pop_15_19_2014,
Nb_20_29_Ans=Licences_2015$pop_20_29_2014,
Nb_30_44_Ans=Licences_2015$pop_30_44_2014,
Nb_45_59_Ans=Licences_2015$pop_45_59_2014,
Nb_60_74_Ans=Licences_2015$pop_60_74_2014,
Nb_75_Ans=Licences_2015$pop_75_2014,
Pop_femmes=Licences_2015$popf_2014,
Pop_hommes=Licences_2015$poph_2014,
Pop_Totale=Licences_2015$pop_2014), 
by = list(Federation = Licences_2015$Libelle_Federation), sum, na.rm = TRUE)

We can then compute the “feminization rate” of each sport

df_Nb_Lic_Agg_Fed$tx_femmes = ifelse(df_Nb_Lic_Agg_Fed$Nb_Licence!=0,df_Nb_Lic_Agg_Fed$Nb_femmes/df_Nb_Lic_Agg_Fed$Nb_Licence,0)

or the breakdown by age class of the number of licence holders per federation

df_Nb_Lic_Agg_Fed$Nb_Licence_Norme = 
  df_Nb_Lic_Agg_Fed$NbLicences_0_4_Ans+
  df_Nb_Lic_Agg_Fed$NbLicences_5_9_Ans+
  df_Nb_Lic_Agg_Fed$NbLicences_10_14_Ans+
  df_Nb_Lic_Agg_Fed$NbLicences_15_19_Ans+
  df_Nb_Lic_Agg_Fed$NbLicences_20_29_Ans+
  df_Nb_Lic_Agg_Fed$NbLicences_30_44_Ans+
  df_Nb_Lic_Agg_Fed$NbLicences_45_59_Ans+
  df_Nb_Lic_Agg_Fed$NbLicences_60_74_Ans+
  df_Nb_Lic_Agg_Fed$NbLicences_75_Ans

For the 0-14 age class, we then set

df_Nb_Lic_Agg_Fed$Tx_Licences_0_14_Ans = ifelse(df_Nb_Lic_Agg_Fed$Nb_Licence_Norme != 0,      (df_Nb_Lic_Agg_Fed$NbLicences_0_4_Ans+df_Nb_Lic_Agg_Fed$NbLicences_5_9_Ans+df_Nb_Lic_Agg_Fed$NbLicences_10_14_Ans)/df_Nb_Lic_Agg_Fed$Nb_Licence_Norme,0)

and for the 15-29 age class

df_Nb_Lic_Agg_Fed$Tx_Licences_15_29_Ans = ifelse(df_Nb_Lic_Agg_Fed$Nb_Licence_Norme != 0,
(df_Nb_Lic_Agg_Fed$NbLicences_15_19_Ans+
df_Nb_Lic_Agg_Fed$NbLicences_20_29_Ans)/df_Nb_Lic_Agg_Fed$Nb_Licence_Norme,0)

for the 30-44 age class

df_Nb_Lic_Agg_Fed$Tx_Licences_30_44_Ans = ifelse(df_Nb_Lic_Agg_Fed$Nb_Licence_Norme != 0,(df_Nb_Lic_Agg_Fed$NbLicences_30_44_Ans)/df_Nb_Lic_Agg_Fed$Nb_Licence_Norme,0)

for the 45-59 age class

df_Nb_Lic_Agg_Fed$Tx_Licences_45_59_Ans = ifelse(df_Nb_Lic_Agg_Fed$Nb_Licence_Norme != 0,                                        (df_Nb_Lic_Agg_Fed$NbLicences_45_59_Ans)/df_Nb_Lic_Agg_Fed$Nb_Licence_Norme,0)

for the 60-and-over age class (you get the idea)

df_Nb_Lic_Agg_Fed$Tx_Licences_60_Ans = ifelse(df_Nb_Lic_Agg_Fed$Nb_Licence_Norme != 0, (df_Nb_Lic_Agg_Fed$NbLicences_60_74_Ans+ df_Nb_Lic_Agg_Fed$NbLicences_75_Ans)/df_Nb_Lic_Agg_Fed$Nb_Licence_Norme,0)

We now turn to the top 25 federations by number of licence holders

dt_Nb_Lic_Agg_Fed = data.table(df_Nb_Lic_Agg_Fed)
setorder(dt_Nb_Lic_Agg_Fed,-Nb_Licence,na.last=TRUE)
dt_Nb_Lic_Agg_Main_Fed = dt_Nb_Lic_Agg_Fed[1:25,]
graph1 = ggplot(data=dt_Nb_Lic_Agg_Main_Fed, aes(x=reorder(Federation,Nb_Licence), y=Nb_Licence)) + 
  geom_bar(stat="identity",fill = "blue")+
  geom_text(aes(label=Nb_Licence),check_overlap = TRUE, vjust=0.5, hjust=0, color="blue")+
  ggtitle("TOP 25 des fédérations sportives en termes de licenciés")+
  ylim(0, 2500000)+
  xlab("Fédérations") + ylab("Nombre de licences")
graph1+coord_flip()

We then sort by the share of women,

setorder(dt_Nb_Lic_Agg_Main_Fed,-tx_femmes,na.last=TRUE)
graph2 = ggplot(data=dt_Nb_Lic_Agg_Main_Fed) +
  aes(x =reorder(Federation,tx_femmes), y = tx_femmes) + geom_bar(stat="identity",fill = "pink")+
geom_text(aes(label=paste(round(100*tx_femmes, 0), "%", sep="")),check_overlap = TRUE, vjust=0.5, hjust=0.5, color="black")+
xlab("Fédération") + ylab("part des licenciées femmes")+
ggtitle("la pratique sportive féminine par fédération")  
graph2+coord_flip()

And finally, we look at the breakdown by age class

df_Nb_Lic_Agg_Main_Fed = data.frame(dt_Nb_Lic_Agg_Main_Fed)
Licence_Age = melt(df_Nb_Lic_Agg_Main_Fed, id=c("Federation"), measure.vars=c("Tx_Licences_0_14_Ans","Tx_Licences_15_29_Ans", "Tx_Licences_30_44_Ans", "Tx_Licences_45_59_Ans","Tx_Licences_60_Ans"))
Licence_Age_Clean = Licence_Age[(Licence_Age$variable=="Tx_Licences_0_14_Ans" |       Licence_Age$variable=="Tx_Licences_15_29_Ans" | Licence_Age$variable=="Tx_Licences_30_44_Ans" |
Licence_Age$variable=="Tx_Licences_45_59_Ans" |
Licence_Age$variable=="Tx_Licences_60_Ans"),]  
dt_Licence_Age_Clean = data.table(Licence_Age_Clean)
setorder(dt_Licence_Age_Clean,-variable,na.last=TRUE)
setorder(Licence_Age_Clean,variable,na.last=TRUE)
graph3 = ggplot(data=Licence_Age_Clean, aes(x=Federation, y=value, fill=variable)) +
geom_bar(stat="identity")+
xlab("Fédération") + ylab("répartition par classe d'âge")+
ggtitle("Répartition des licenciés par classe d'âge")  
graph3+coord_flip()+scale_fill_brewer(palette="Paired")

Reading the graph above, sports could be grouped into three categories (a quick way to flag them from the aggregated table is sketched right after the list):

  • the “young people’s sports”: more than half of their licence holders are under 15; this includes gymnastics, judo, handball, swimming, and sailing.
  • the “old people’s sports”: unsurprisingly, we find hiking, cycle touring, golf, pétanque, shooting, and underwater sports; three quarters of their licence holders are over 45.
  • the “sports for everyone”, i.e. those not mentioned yet, for which the age classes are more balanced.
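To make that reading a bit more operational, here is a small sketch (my own, not from Benoit's code) that flags these groups directly from the aggregated table, using the thresholds mentioned in the list:

df_Nb_Lic_Agg_Fed$categorie = with(df_Nb_Lic_Agg_Fed,
  ifelse(Tx_Licences_0_14_Ans > .5, "young people's sport",
  ifelse(Tx_Licences_45_59_Ans + Tx_Licences_60_Ans > .75, "old people's sport",
         "sport for everyone")))
table(df_Nb_Lic_Agg_Fed$categorie)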

Finally, we can look at a few sports on a map

map.France = get_map(location = c(lon=1.75, lat=46.70), zoom = 6)
Rugby_2015 = Licence_Max_2015[Licence_Max_2015$fed_2014=="133",]
Voile_2015 = Licence_Max_2015[Licence_Max_2015$fed_2014=="128",]
Ski_2015 = Licence_Max_2015[Licence_Max_2015$fed_2014=="121",]
PetanQ_2015 = Licence_Max_2015[Licence_Max_2015$fed_2014=="242",]
Rugby = ggmap(map.France, extent = "normal") +
  geom_point(aes(x = Longitude, y = Latitude), data = Rugby_2015, colour="red", alpha = 0.5, size=2.0, na.rm=TRUE)+
  theme_nothing(legend = TRUE) +
  theme(legend.position = "bottom")+
  ggtitle("Rugby")+
  theme(plot.title = element_text(size = 10, face = "bold", hjust=0.5, color="red"))
Voile = ggmap(map.France, extent = "normal") +
  geom_point(aes(x = Longitude, y = Latitude), data = Voile_2015, colour="blue", alpha = 0.5, size=2.0, na.rm=TRUE)+
  theme_nothing(legend = TRUE) +
  theme(legend.position = "bottom")+
  ggtitle("Voile")+
  theme(plot.title = element_text(size = 10, face = "bold", hjust=0.5, color="blue"))
Ski = ggmap(map.France, extent = "normal") +
  geom_point(aes(x = Longitude, y = Latitude), data = Ski_2015, colour="grey", alpha = 0.5, size=2.0, na.rm=TRUE)+
  theme_nothing(legend = TRUE) +
  theme(legend.position = "bottom")+
  ggtitle("Ski")+
  theme(plot.title = element_text(size = 10, face = "bold", hjust=0.5, color="grey"))
Petanque = ggmap(map.France, extent = "normal") +
  geom_point(aes(x = Longitude, y = Latitude), data = PetanQ_2015, colour="chocolate3", alpha = 0.5, size=2.0, na.rm=TRUE)+
  theme_nothing(legend = TRUE) +
  theme(legend.position = "bottom")+
  ggtitle("pétanque et jeu provençal")+
  theme(plot.title = element_text(size = 10, face = "bold", hjust=0.5, color="chocolate3"))
grid.arrange(Rugby,Voile,Ski,Petanque, ncol=2, nrow = 2,top="visualisation géographique de sports \n à fort ancrage régional")

Fun, isn't it?

Le sport, une activité de riches ?

A short, light post to start the Christmas break. Still about the R projects of the Data Science for Actuaries program, Cyril Legrand proposed to merge two databases: one with the average (fiscal) household income per commune, and one with the number of licence holders in sports clubs. Some reprocessing of the INSEE codes was needed for the join, since, for instance, Marseille is coded by arrondissement in one database and as a single city in the other.
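To give an idea of the kind of recoding involved, here is a rough sketch (with a hypothetical toy data frame and column name, not Cyril's actual code): the Marseille arrondissement codes are mapped back to the city code before the join.

# hypothetical toy table with INSEE codes, including Marseille arrondissements
sport = data.frame(code_insee = c("13201","13208","35238"), n_licences = c(120,80,45))
arr_marseille = sprintf("132%02d", 1:16)                  # "13201", ..., "13216"
sport$code_insee[sport$code_insee %in% arr_marseille] = "13055"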

Continue reading Le sport, une activité de riches ?

A quelle distance d’une banque habite-t-on ?

As part of the R projects of the Data Science for Actuaries program, I will keep putting online pieces of code that can be useful in a spatial context. The previous post, on mapping the Brexit vote, was taken up (and much improved) on the site of our neighbours at rgeomatic. Today, I will build on the work of Etienne Flichy, who combines the distribution of the population over the territory with the location of bank branches.

We talk about banks here, but with a database of hairdressers, bakeries, etc., one could do exactly the same! (needless to say, we will have some fun when the SIRENE database is opened up, in the coming weeks). We will assume that we have a database with all the bank branches geocoded. For the exercise, we will use the location of bank branches, using data from cbanque.com. It is fairly easy to scrape the site, given how the pages are built, e.g. http://cbanque.com/pratique/agences/credit-cooperatif/35/. There we retrieve the (postal) addresses, and we can then use https://adresse.data.gouv.fr/csv/ (or various other tools) to geocode them.
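Once the addresses have been run through the csv geocoder, one gets back the same file with coordinates appended; the downstream part then looks roughly like this (a sketch, with hypothetical file and column names):

agences = read.csv("agences_geocodees.csv")   # hypothetical output of the geocoder
plot(agences$longitude, agences$latitude, pch = 19, cex = .3,
     xlab = "longitude", ylab = "latitude")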

Continue reading A quelle distance d’une banque habite-t-on ?

Working with “large” datasets, with dplyr and data.table

A few months ago, I was doing some training on data science for actuaries, and I started to get interesting, puzzling questions. For instance, Fleur was working on telematics data, and she has been challenging my (rudimentary) knowledge of R. As claimed by Donald Knuth, “we should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil“. So usually, in my courses and my training sessions, the code is very basic and easy to understand, but usually poorly efficient. Since I was challenged to work on very large datasets, we have been working on R functions to manipulate those possibly (very) large datasets, and to run some simple operations as fast as possible (simple filtering and aggregation functions).

In order to illustrate, let us generate our “large” telematics dataset. Assume that we have 10,000 drivers, each of them drives about 200 times, and each time, we have, say, 80 locations. That means around 160 million observations. It is “large”, but not huge.

> rm(list=ls())
> N_id=10000
> N_tr=200
> T_tr=80

In order to have a code as general as possible, assume that we have some kind of randomness,

> set.seed(1)
> N=rpois(N_id,N_tr)
> N_traj=rpois(sum(N),T_tr)

By “observation”, we mean a driver id, a trajectory id, and a location (latitude and longitude) at some specific dates (e.g. every 15 sec.). Again, just because we want some dataset to illustrate, we will draw each driver's home randomly (here uniformly on some square)

> origin_lat=runif(N_id,-5,5)
> origin_lon=runif(N_id,-5,5)

And, then, from those locations, we generate a 2-dimensional random walk,

> lat=lon=Traj_Id=rep(NA,sum(N_traj))
> Pers_Id=rep(NA,length(N_traj))
> s=1
> for(i in 1:N_id){Pers_Id[s:(s+N[i]-1)]=i;s=s+N[i]}
> s=1
> for(i in 1:length(N_traj)){lat[s:(s+N_traj[i]-1)]=origin_lat[Pers_Id[i]]+
+  cumsum(c(0,rnorm(N_traj[i]-1,0,sd=.2)));
+  lon[s:(s+N_traj[i]-1)]=origin_lon[Pers_Id[i]]+
+  cumsum(c(0,rnorm(N_traj[i]-1,0,sd=.2)));
+  s=s+N_traj[i]}
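
Just to give an idea of where this is heading (the rest of the post benchmarks dplyr and data.table on these data), here is a small sketch, not in the excerpt above, that stacks the simulated vectors into a data.table and counts observations per driver:

> library(data.table)
> DT=data.table(driver=rep(Pers_Id,N_traj),lat=lat,lon=lon)
> DT[,.(n_obs=.N),by=driver][1:3]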

We have something which looks like

Continue reading Working with “large” datasets, with dplyr and data.table

Names in the U.S., from James Smith to Jose Rodriguez

Two weeks ago, @mona published an interesting post on her blog, about a difficult question, What’s The Most Common Name In America? There were stats about first names, in the U.S., and last names, too. That information is – somehow – easy to get. But usually, it is more complicated to get the first and the last name together. For confidentiality reasons! Datasets – the ones I deal with – are supposed to be anonymized, so I never see the first and the last names. In a previous post, a few years ago, I mentioned the so-called Social Security Death Master File. In that file, we have Social Security numbers, with the date of birth, the date of death, as well as the first and the last name. So I used those files to get stats about the first and the last names of American citizens. Of course, it is very restrictive: I only have U.S. citizens who had a Social Security number (which is not compulsory in the U.S., as far as I understood) and who passed away (as mentioned in the name of the dataset: the death master file). Another great thing about that dataset is that I have the date of birth, so I can look at some cohort effect (see opendata.stackexchange for an interesting discussion on that dataset).

Continue reading Names in the U.S., from James Smith to Jose Rodriguez

Extracting datasets from excel files in a zipped folder

The title of the post is a bit long, but that’s the problem I was facing this morning: importing datasets from files available online. I mean, it was not really a “problem” (since I can always download and extract the files manually), more a challenge (I should be able to do it in R, directly). The files are located on ressources-actuarielles.net, in a zip file. Those are mortality tables used in French-speaking African countries, and I guess that one problem came from special characters, such as “é” or “è”… When you open the zip file, you see a folder

and in that folder, several files that I would like to import
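The generic approach, before dealing with the accented file names, is sketched below (hypothetical URL and file pattern, not the actual files from ressources-actuarielles.net):

> library(readxl)
> tmp = tempfile(fileext = ".zip")
> download.file("http://example.com/tables.zip", tmp, mode = "wb")  # hypothetical URL
> unzip(tmp, exdir = tempdir())
> files = list.files(tempdir(), pattern = "\\.xls", recursive = TRUE, full.names = TRUE)
> tables = lapply(files, read_excel)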

Continue reading Extracting datasets from excel files in a zipped folder

How to import some parts of a large database

In the introduction of Computational Actuarial Science with R, there was a short paragraph on how we could import only some parts of a large database, by selecting specific variables. The trick was to use the following

> read.table.select.columns=function(datatablename,I,sep=";"){
+ datanc=read.table(datatablename,header=TRUE,
+ sep=sep,skip=0,nrows=1)
+ mycols=rep("NULL",ncol(datanc))
+ names(mycols)=names(datanc)
+ mycols[I]=NA
+ datat=read.table(datatablename,header=TRUE,
+ sep=sep,colClasses=mycols)
+ return(datat)}

For instance, if we use the same dataset as in the introduction, we can import only two variables of interest,

> loc="http://myweb.fsu.edu/jelsner/extspace/extremedatasince1899.csv"
> dt1=read.table.select.columns(loc,c("Region","Wmax"),sep=",")
> head(dt1,10)
    Region      Wmax
1    Basin 105.56342
2    Basin  40.00000
3    Basin  35.41822
4    Basin  51.06743
5  Florida  87.34328
6    Basin  96.64138
7     Gulf  35.41822
8       US  35.41822
9       US  87.34328
10      US 106.35318
> dim(dt1)
[1] 2100    2
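
Note that, nowadays, the same column selection can be obtained directly with data.table's fread (a quick alternative, not in the book):

> library(data.table)
> dt2=fread(loc,select=c("Region","Wmax"))
> dim(dt2)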

Continue reading How to import some parts of a large database

R package for Computational Actuarial Science

A webpage for the book is now hosted on

http://cas.uqam.ca/

So far, it is a very basic page, but information regarding the package can be found there. For instance, to install the package, with all the datasets, the R code is

> install.packages("CASdatasets", repos = "http://cas.uqam.ca/pub/R/")

The reference manual provides a description of all datasets.
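
Once installed, the package is loaded and used as any other (a quick sketch; freMTPLfreq is, if I remember correctly, one of the French motor datasets included, but see the reference manual for the full list):

> library(CASdatasets)
> data(freMTPLfreq)
> dim(freMTPLfreq)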

Regression on categorical variables

This morning, Stéphane asked me a tricky question about extracting coefficients from a regression with categorical explanatory variates. More precisely, he asked me if it was possible to store the coefficients in a nice table, with information on the variable and the modality (those two pieces of information being in two different columns). Here is some code I wrote to produce the table he was looking for, but I guess that some (much) smarter techniques can be used (comments – see below – are open). Consider the following dataset

> base
   x sex   hair
1  1   H  Black
2  4   F  Brown
3  6   F  Black
4  6   H  Black
5 10   H  Brown
6  5   H Blonde

with two factors,

> levels(base$hair)
[1] "Black"  "Blonde" "Brown" 
> levels(base$sex)
[1] "F" "H"

Let us run a (standard linear) regression,

> reg=lm(x~hair+sex,data=base)

which is here

> summary(reg)

Call:
lm(formula = x ~ hair + sex, data = base)

Residuals:
         1          2          3          4          5          6 
-3.714e+00 -2.429e+00  2.429e+00  1.286e+00  2.429e+00 -2.220e-16 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   3.5714     3.4405   1.038    0.408
hairBlonde    0.2857     4.8655   0.059    0.959
hairBrown     2.8571     3.7688   0.758    0.528
sexH          1.1429     3.7688   0.303    0.790

Residual standard error: 4.071 on 2 degrees of freedom
Multiple R-squared: 0.2352,	Adjusted R-squared: -0.9121 
F-statistic: 0.205 on 3 and 2 DF,  p-value: 0.886

If we want to extract the names of the factors (assuming here that there are no numbers in the name of the factor), and the values of the associated modality, one can use

> VARIABLE=c("",gsub("[-^0-9]", "", names(unlist(reg$xlevels))))
> MODALITY=c("",as.character(unlist(reg$xlevels)))
> names=data.frame(VARIABLE,MODALITY,NOMVAR=c(
+ "(Intercept)",paste(VARIABLE,MODALITY,sep="")[-1]))
> regression=data.frame(NOMVAR=names(coefficients(reg)),
+ COEF=as.numeric(coefficients(reg)))
> merge(names,regression,all.x=TRUE)
       NOMVAR VARIABLE MODALITY      COEF
1 (Intercept)                   3.5714286
2   hairBlack     hair    Black        NA
3  hairBlonde     hair   Blonde 0.2857143
4   hairBrown     hair    Brown 2.8571429
5        sexF      sex        F        NA
6        sexH      sex        H 1.1428571

or, if we want the modalities excluding the reference levels,

> merge(names,regression)
       NOMVAR VARIABLE MODALITY      COEF
1 (Intercept)                   3.5714286
2  hairBlonde     hair   Blonde 0.2857143
3   hairBrown     hair    Brown 2.8571429
4        sexH      sex        H 1.1428571
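
If we prefer the reference modalities (first table) to show a coefficient equal to 0 instead of NA, a small optional tweak,

> tab=merge(names,regression,all.x=TRUE)
> tab$COEF[is.na(tab$COEF)]=0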

In order to reproduce the table Stéphane sent me, let us use the following code to produce an html table,

> library(xtable)
> htmltable <- xtable(merge(names,regression))
> print(htmltable,type="html")
       NOMVAR VARIABLE MODALITY COEF
1 (Intercept)                   3.57
2  hairBlonde     hair   Blonde 0.29
3   hairBrown     hair    Brown 2.86
4        sexH      sex        H 1.14

So yes, it is possible to build a table with the variables, modalities, and coefficients. This trick can be interesting for prospective mortality models, where we have a large number of modalities per factor (years, ages and years of birth). Consider the following datasets

> DEATH=read.table(
+ "http://freakonometrics.free.fr/DeathsSwitzerland.txt",
+ header=TRUE,skip=2)
> EXPOSURE=read.table(
+ "http://freakonometrics.free.fr/ExposuresSwitzerland.txt",
+ header=TRUE,skip=2)
> DEATH$Age=as.numeric(as.character(DEATH$Age))
> DEATH=DEATH[-which(is.na(DEATH$Age)),]
> EXPOSURE$Age=as.numeric(as.character(EXPOSURE$Age))
> EXPOSURE=EXPOSURE[-which(is.na(EXPOSURE$Age)),]
> base=data.frame(year=as.factor(DEATH$Year),age=as.factor(DEATH$Age),
+ cohort=as.factor(DEATH$Year-DEATH$Age),D=DEATH$Total,E=EXPOSURE$Total)
> base=base[base$E>0,]

and the following nonlinear model, based on the Lee-Carter model (including a cohort effect),

$N_{x,t}\sim\mathcal{P}(E_{x,t}\cdot\exp[\alpha_x+\beta_x\kappa_t+\gamma_x\delta_{t-x}])$

can be estimated using

> library(gnm)
> reg=gnm(D~age+Mult(age,year)+Mult(age,cohort),offset=log(E),family=poisson,data=base)

In order to extract the 671 coefficients from the regression,

> length(coefficients(reg))
[1] 671

(as cleanly as possible), we have to be careful: the names of the coefficients are not that simple to handle. For instance, we can see things like

> coefficients(reg)[200]
Mult(., year).age98 
         0.04203519

In order to extract them, define

> na=length((reg$xlevels)$age)
> ny=length((reg$xlevels)$year)
> nc=length((reg$xlevels)$cohort)
> VARIABLElong=c("",rep("age",na),rep("Mult(., year).age",na),
+ rep("Mult(age, .).year",ny),
+ rep("Mult(., cohort).age",na),rep("Mult(age, .).cohort",nc))
> VARIABLEshort=c("",rep("age",na),rep("age",na),rep("year",ny),
+ rep("age",na),rep("cohort",nc))
> MODALITY=c("",(reg$xlevels)$age,(reg$xlevels)$age,
+ (reg$xlevels)$year,(reg$xlevels)$age,(reg$xlevels)$cohort)
> names=data.frame(VARIABLElong,VARIABLEshort,
+ MODALITY,NOMVAR=c("(Intercept)",paste(VARIABLElong,MODALITY,sep="")[-1]))
> regression=data.frame(NOMVAR=names(coefficients(reg)),
+ COEF=as.numeric(coefficients(reg)))

Here we go, now we have the coefficients from the regression in a nice table,

> outputreg=merge(names,regression)
> outputreg[1:10,]
        NOMVAR VARIABLElong VARIABLEshort MODALITY        COEF
1  (Intercept)                                     -8.22225458
2         age1          age           age        1 -0.87495451
3        age10          age           age       10 -1.67145704
4       age100          age           age      100  4.91041650
5        age11          age           age       11 -1.00186990
6        age12          age           age       12 -1.05953497
7        age13          age           age       13 -0.90952859
8        age14          age           age       14  0.02880668
9        age15          age           age       15  0.42830738
10       age16          age           age       16  1.35961403

It is now possible to plot all the coefficients, as functions of the age, the year of observation, or the year of birth. For instance, for the standard average age effect (namely $\alpha_x$ as a function of the age $x$), we can use

> typevariable=as.character(unique(outputreg$VARIABLElong))
> basegraph=outputreg[outputreg$VARIABLElong==typevariable[2],]
> x=as.numeric(as.character(basegraph$MODALITY))
> y=basegraph$COEF
> plot(x,y,type="p",col="blue",xlab="Age")


while the cohort effect ($\delta$, as a function of the year of birth) is obtained using

> basegraph=outputreg[outputreg$VARIABLElong==typevariable[5],]
> x=as.numeric(as.character(basegraph$MODALITY))
> y=basegraph$COEF
> plot(x,y,type="p",col="blue",xlab="Cohort (year of birth)",ylim=c(0,10))


Open data and ecological fallacy

A couple of days ago, on Twitter, @alung mentioned an old post I published on this blog about open data, explaining how difficult it was to get access to data in France (the post, published almost 18 months ago, can be found here, in French). And @alung was wondering if it was still that hard to access nice datasets. My first answer was that actually, people are more receptive, and I now have more people willing to share their data. And on the internet, amazing datasets can now be found very easily. For instance in France, detailed information can be found about qualifications, housing and jobs, by small geographical areas, on http://www.recensement.insee.fr (thanks @coulmont for the link). And that is great for researchers (and anyone actually willing to check things by themselves).

But one should be aware that those aggregate data might not be sufficient to build econometric models, and to infer individual behaviors. Thinking that relationships observed for groups necessarily hold for individuals is a common fallacy (the so-called “ecological fallacy”).

In a popular paper, Robinson (1950) discussed “ecological inference“, stressing the difference between ecological correlations (on groups) and individual correlations (see also Thorndike (1937)). He considered two aggregated quantities, per American state: the percentage of the population that was foreign-born, and the percentage that was literate. One dataset used in the paper was the following

> library(eco)
> data(forgnlit30)
> tail(forgnlit30)
             Y          X         W1          W2 ICPSR
43 0.076931986 0.03097168 0.06834300 0.077206504    66
44 0.006617641 0.11479052 0.03568792 0.002847920    67
45 0.006991899 0.11459207 0.04151310 0.002524065    68
46 0.012793782 0.18491515 0.05690731 0.002785916    71
47 0.007322475 0.13196654 0.03589512 0.002978594    72
48 0.007917342 0.18816461 0.02949187 0.002916866    73

The correlation between  foreign-born and literacy was

> cor(forgnlit30$X,1-forgnlit30$Y)
[1] 0.2069447

So it seems that there is a positive correlation, and a quick interpretation could be that in the 30’s, Americans were illiterate while, luckily, literate immigrants decided to come to the US. But here, it is like in Simpson’s paradox, because actually, the sign should be negative, as obtained in individual studies. In the state-level study, the correlation was positive mainly because foreign-born people tended to live in states where the native-born are relatively literate…

Hence, the problem is clearly how individuals were grouped. Consider the following set of individual observations,

> library(mnormt)
> n=1000
> r=-.5
> Z=rmnorm(n,c(0,0),matrix(c(1,r,r,1),2,2))
> X=Z[,1]
> E=Z[,2]
> Y=3+2*X+E
> cor(X,Y)
[1] 0.8636764

Consider now some regrouping, e.g.

> I=cut(Z[,2],qnorm(seq(0,1,by=.05)))
> Yg=tapply(Y,I,mean)
> Xg=tapply(X,I,mean)

Then the correlation is rather different,

>  cor(Xg,Yg)
[1] 0.1476422

Here we have a strong positive individual correlation, and a small (positive) correlation on the grouped data, but almost anything is possible.
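Indeed, with a different grouping, the sign of the grouped correlation can even flip; a small sketch of my own (grouping on a mix of the covariate and the noise; the exact value will depend on the seed, but it should come out negative):

> S=.25*X+E
> I2=cut(S,quantile(S,seq(0,1,by=.05)),include.lowest=TRUE)
> cor(tapply(X,I2,mean),tapply(Y,I2,mean))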

Models with random coefficients have been used to make ecological inferences. But that is a long story, and I will probably come back with a more detailed post on that topic, since I am still working on this with @coulmont (following some comments by @frbonnet on his post on the recent French elections on http://coulmont.com/blog/).

Open data might be a false good opportunity…

I am always surprised to see many people on Twitter tweeting about #opendata, e.g. @data4all, @usdatagov, @datapublicatwit, @ProPublica or @open3, among so many others… Initially, I was also very enthusiastic, but I have to admit that open data are rarely raw data. Which is what I am usually looking for, as a statistician…
Consider the following example: I was wondering (Valentine’s day is approaching) when a man born in 1975 (say) will get married – if he ever gets married? More technically, I was looking for the distribution of the age at first marriage (given the year of birth), including the proportion of men who will never get married, for that specific cohort.

The only data I found on the internet is the following, on statistics.gov.uk/

Note that we can also focus on women (e.g. here). Is it possible to use that open data to get an estimate of the distribution of the age at first marriage for some specific cohort? (and to answer the question I asked). Here, we have two dimensions: in rows, the year $t$ (of the marriage), and in columns, the age $x$ of the man when he gets married. Assume that those were raw data, i.e. that we have $N_{x,t}$, the number of marriages of men of age $x$ during year $t$.

We are interested in a longitudinal reading of the table, i.e. consider some man born in year $c$: we want to estimate (or predict) the age at which he will get married, if he gets married. With raw data, we can do it… The first step is to build up triangles (to get a cohort vs. age reading of the data), and then to consider a model, e.g.

$N_{x,c}\sim\mathcal{P}(\exp[\beta_c+\gamma_x])$

where $\beta_c$ is a year-of-birth (cohort) effect, and $\gamma_x$ an age effect.

base=read.table("http://freakonometrics.free.fr/mariage-age-uk.csv",
sep=";",header=TRUE)
m=base[1:16,]
m=m[,3:10]
m=as.matrix(m)
triangle=matrix(NA,nrow(m),ncol(m))
n=ncol(m)
for(i in 1:nrow(m)){
for(j in 1:n){
if(i+j-1<=nrow(m)) triangle[i,j]=m[i+j-1,j]
}
}
 
triangle
      [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
 [1,]   12  104  222  247  198  132   51   34
 [2,]    8   89  228  257  202  102   75   49
 [3,]    4   80  209  247  168  129   92   50
 [4,]    4   73  196  236  181  140   88   45
 [5,]    3   78  242  206  161  114   68   47
 [6,]   11  150  223  199  157  105   73   39
 [7,]   12  117  194  183  136   96   61   36
 [8,]   11  118  202  175  122   92   62   40
 [9,]   15  147  218  162  127   98   72   48
[10,]   20  185  204  171  138  112   82   NA
[11,]   31  197  240  209  172  138   NA   NA
[12,]   34  196  233  202  169   NA   NA   NA
[13,]   35  166  210  199   NA   NA   NA   NA
[14,]   26  139  210   NA   NA   NA   NA   NA
[15,]   18  104   NA   NA   NA   NA   NA   NA
[16,]   10   NA   NA   NA   NA   NA   NA   NA
 
Y=as.vector(triangle)
YEARS=seq(1918,1993,by=5)
AGES=seq(22,57,by=5)
X1=rep(YEARS,length(AGES))
X2=rep(AGES,each=length(YEARS))
reg=glm(Y~as.factor(X1)+as.factor(X2),family="poisson")
summary(reg)
 
Call:
glm(formula = Y ~ as.factor(X1) + as.factor(X2), family = "poisson")
 
Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-5.4502  -1.1611  -0.0603   1.0471   4.6214  
 
Coefficients:
                    Estimate Std. Error z value Pr(>|z|)    
(Intercept)        2.8300461  0.0712160  39.739  < 2e-16 ***
as.factor(X1)1923  0.0099503  0.0446105   0.223 0.823497    
as.factor(X1)1928 -0.0212236  0.0449605  -0.472 0.636891    
as.factor(X1)1933 -0.0377019  0.0451489  -0.835 0.403686    
as.factor(X1)1938 -0.0844692  0.0456962  -1.848 0.064531 .  
as.factor(X1)1943 -0.0439519  0.0452209  -0.972 0.331082    
as.factor(X1)1948 -0.1803236  0.0468786  -3.847 0.000120 ***
as.factor(X1)1953 -0.1960149  0.0470802  -4.163 3.14e-05 ***
as.factor(X1)1958 -0.1199103  0.0461237  -2.600 0.009329 ** 
as.factor(X1)1963 -0.0446620  0.0458508  -0.974 0.330020    
as.factor(X1)1968  0.1192561  0.0450437   2.648 0.008107 ** 
as.factor(X1)1973  0.0985671  0.0472460   2.086 0.036956 *  
as.factor(X1)1978  0.0356199  0.0520094   0.685 0.493423    
as.factor(X1)1983  0.0004365  0.0617191   0.007 0.994357    
as.factor(X1)1988 -0.2191428  0.0981189  -2.233 0.025520 *  
as.factor(X1)1993 -0.5274610  0.3241477  -1.627 0.103689    
as.factor(X2)27    2.0748202  0.0679193  30.548  < 2e-16 ***
as.factor(X2)32    2.5768802  0.0667480  38.606  < 2e-16 ***
as.factor(X2)37    2.5350787  0.0671736  37.739  < 2e-16 ***
as.factor(X2)42    2.2883203  0.0683441  33.482  < 2e-16 ***
as.factor(X2)47    1.9601540  0.0704276  27.832  < 2e-16 ***
as.factor(X2)52    1.5216903  0.0745623  20.408  < 2e-16 ***
as.factor(X2)57    1.0060665  0.0822708  12.229  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 
 
(Dispersion parameter for poisson family taken to be 1)
 
    Null deviance: 5299.30  on 99  degrees of freedom
Residual deviance:  375.53  on 77  degrees of freedom
  (28 observations deleted due to missingness)
AIC: 1052.1
 
Number of Fisher Scoring iterations: 5

Here, we have been able to derive $\hat{\beta}_c$ and $\hat{\gamma}_x$, where $c$ now denotes the cohort.
We can now predict the number of marriages per age, for each cohort

$\hat{N}_{x,c}=\exp[\hat{\beta}_c+\hat{\gamma}_x]$

Here, given the cohort $c$, the shape of $x\mapsto \hat{N}_{x,c}$ is the following

Yp=predict(reg,type="response")
tYp=matrix(Yp,nrow(m),ncol(m))
tYp[16,]
[1]  10.00000 222.94525 209.32773 159.87855 115.06971  42.59102
[7]  18.70168 148.92360

The errors (Pearson residuals) look like that

Ep=residuals(reg,type="pearson")
 

(where the darker the blue, the smaller the residuals, and the darker the red, the higher the residuals). Obviously, we are missing something here, like a diagonal effect. But this is not the main problem here…

I guess that the study here is not valid. The problem is that we deal with open data, and the numbers of marriages are not given here: what is given is the proportion of marriages of men of age $x$ during year $t$, with a yearly normalization. There is a constraint on each row, i.e. we observe

$P_{x,t}=1000\cdot N_{x,t}\big/\sum_x N_{x,t}$

so that

$\sum_x P_{x,t}=1000$, for every year $t$.

This is mentioned in the title of the table.

It is still possible to run a Poisson regression on the $P_{x,t}$'s, but unfortunately, I do not think any interpretation is valid (unless demography did not change over the last century). For instance, the following sum

$\sum_x \hat{P}_{x,c}$

looks like that

apply(tYp,1,sum)
 [1] 919.948 838.762 846.301 816.552 943.559 930.280 857.871 896.113
 [9] 905.086 948.087 895.862 853.738 826.003 816.192 813.974 927.437

i.e. if we look at the graph

But I do not think we can interpret that sum as the probability (if we divide by 1,000) that a man in that cohort gets married…. And more basically, I cannot do anything with that dataset…

So open data might be interesting. The problem is that most of the time, the data are somehow normalized (or aggregated). And then, it becomes difficult to use them…

So I will have to work further to be able to write something (mathematically valid) on marriage strategy before Valentine’s day…. to be continued.

Too large datasets for regression ? What about subsampling….

Recently, a classmate working in an insurance company told me that his datasets were too large to run simple regressions (GLMs, which involve optimization issues), and that they were thinking of a reward for whoever writes the best (at least the fastest) R code. My first idea was to use subsampling techniques: 10 regressions on 100,000 observations can take less time than one regression on 1,000,000 observations. And perhaps even give better results…

  • Time to run a regression, as a function of the number of observations

Here, I generate a dataset as follows

$Y_i\sim\mathcal{P}(\exp[\lambda_i])$, where $\lambda_i=0.2X_{5,i}-4\,\mathrm{dbeta}(X_{3,i},2,5)+X_{1,i}+\mathbf{1}(X_{2,i}=A)-2\cdot\mathbf{1}(X_{2,i}=B)-5\cdot\mathbf{1}(X_{2,i}=C)$

and we fit

$\mathbb{E}[Y_i\mid\boldsymbol{X}_i]=E_i\cdot\exp[s(X_{1,i})+\beta_2(X_{2,i})+\beta_3X_{3,i}+\beta_4X_{4,i}+\beta_5X_{5,i}+\beta_6X_{6,i}]$

where $s(\cdot)$ is a spline function (just to make it as general as possible, since in insurance ratemaking we include continuous variates that do not influence the claims frequency linearly in the score). Yes, there are also useless variables, including one which is strongly correlated with one that does have an impact in the regression. The code to generate the dataset is simply

> library(mnormt)
> n=10000
> X1=rexp(n)
> X2=sample(c("A","B","C"),size=n,replace=TRUE)
> X3=runif(n)
> Z=rmnorm(n,c(0,0),matrix(c(1,0.8,.8,1),2,2))
> X4=Z[,1]
> X5=Z[,2]
> X6=X1^2
> E=runif(n)
> lambda=.2*X5-4*dbeta(X3,2,5)+X1+
+1*(X2=="A")-2*(X2=="B")-5*(X2=="C")
> Y=rpois(n,exp(lambda))
> base=data.frame(Y,X1,X2,X3,X4,X5,X6,E)

We would like to study the time it takes to run a regression, as a function of the size $n$ (i.e. the number of lines) of the dataset.

> library(splines)
> system.time( glm(Y~bs(X1)+X2+X3+X4+
+ X5+X6+offset(log(E)),family=poisson,
+ data=base) )
utilisateur     système      écoulé
0.25        0.00        0.25

Here, the time I look at is the last one (the elapsed time). So far, it was rather simple, but this is not the best model I can get. Let us use a stepwise (backward) variable selection,

> system.time( step(glm(Y~bs(X1)+X2+X3+
+ X4+X5+X6+offset(log(E)),family=poisson,
+ data=base)) )
Start:  AIC=2882.1
Y ~ bs(X1) + X2 + X3 + X4 + X5 + X6 + offset(log(E))
Step:  AIC=2882.1
Y ~ bs(X1) + X2 + X3 + X4 + X5 + offset(log(E))
Df Deviance    AIC
<none>        2236.0 2882.1
- X5      1   2240.1 2884.2
- X4      1   2244.1 2888.2
- X3      1   4783.2 5427.3
- X2      2   5311.4 5953.5
- bs(X1)  3   6273.7 6913.8
utilisateur     système      écoulé
1.82        0.03        1.86
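
The graphs below are based on repeating those timings on many simulated datasets and several sample sizes; a rough sketch of such a loop (my own reconstruction, not the exact code used for the figures):

> library(splines)
> timing=function(n){
+   idx=sample(1:nrow(base),size=n)
+   system.time(glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
+     family=poisson,data=base[idx,]))[3]}
> sapply(c(1000,2000,5000,10000),timing)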

Finally, from the first regression, we get the points in black below (based on 200 simulated datasets), and with the stepwise procedure, the points in red.

It might look linear (proportional), but if it were linear, then on a log-log scale we should also get straight lines, with slope 1,

Actually, it looks like a convex function.

The interpretation of that convexity might lead to misinterpretation. On the graph below on the left, a dataset twice as big as the previous one (black point) takes less than twice as long to run, while on the right, it takes more than twice as long,

Convexity can simply be interpreted as “too large datasets take time, and too small ones too…”. Which is a first step: it should be interesting, in some cases, to run several regressions on smaller datasets….

  • Running 100 regressions on 100 lines, or running 1 regression on 10,000 lines ?

Here, we have datasets with $n$=200,000 lines. The question is how long it will take if we subdivide them into $k$ subsamples (of equal size), and run $k$ regressions?

> nk=trunc(n/k); classe=rep(1:k,each=nk); nt=nk*k   # for a given number of subsamples k
> base=data.frame(Y=Y[1:nt],X1=X1[1:nt],
+ X2=X2[1:nt],X3=X3[1:nt],X4=X4[1:nt],X5=X5[1:nt],
+ X6=X6[1:nt],E=E[1:nt],classe)
> system.time( for(j in 1:k){
+  glm(Y~bs(X1)+X2+X3+X4+X5+
+ X6+offset(log(E)),family=poisson
+ ,data=base,subset=classe==j) })
utilisateur     système      écoulé
1.31        0.00        1.31
> system.time( for(j in 1:k){
+      step(glm(Y~bs(X1)+X2+X3+
+ X4+X5+X6+offset(log(E)),family=
+ poisson,data=base,subset=classe==j)) })
Start:  AIC=183.97
Y ~ bs(X1) + X2 + X3 + X4 + X5 + X6 + offset(log(E))

[…]

  Df Deviance    AIC
<none>        117.15 213.04
- X2      2   250.15 342.04
- X3      1   251.00 344.89
- X4      1   420.63 514.53
- bs(X1)  3   626.84 716.74
utilisateur     système      écoulé
11.97        0.03       12.31

On the graph below, we have the time (y-axis, here on a log scale) it took to run $k$ regressions on samples of size $n/k$, as a function of $k$ (x-axis), including the time it took to run the regression on the dataset of size $n$, which is the concentration of dots on the left (i.e. $k$=1), both with the 6 regressors (in black) and with a stepwise procedure (in red). One has to keep in mind that I did not remove the printing option in the stepwise procedure, so it might be difficult to compare the two clouds (black vs. red). Nevertheless, we clearly see that if we run $k$ regressions on samples of size $n/k$, when $k$ is not too large, i.e. less than 10 or 15, it is not longer than the regression on $n$=200,000 lines.

So here we see that running 100 regressions on 2,000 lines is longer than running 1 regression on 200,000 lines… But maybe we are not comparing things that are actually comparable: what if it takes a bit longer, but we strongly improve the quality of our estimators?

  • What about the quality of the output ?

Here, we consider only one dataset, with $n$=100,000 lines (just to make it run a bit faster), and $k$=20 subsets. Recall that the generated dataset is from

$Y_i\sim\mathcal{P}(\exp[\lambda_i])$, with the same $\lambda_i$ as above,

and we fit

$\mathbb{E}[Y_i\mid\boldsymbol{X}_i]=E_i\cdot\exp[s(X_{1,i})+\beta_2(X_{2,i})+\beta_3X_{3,i}+\beta_4X_{4,i}+\beta_5X_{5,i}+\beta_6X_{6,i}]$

Here, we plot $\hat{\beta}$ (the estimator of one of the coefficients) and a confidence interval, defined as

$\hat{\beta}\pm 1.96\,\widehat{\mathrm{se}}(\hat{\beta})$

The light blue segment is the initial estimator, while the blue one is obtained from the stepwise procedure. The grey area represents the estimate on the overall sample, while the $k$ segments on the right are the $k$ estimators (each on a sample of size $n/k$).

We can see that there is much more volatility among those $k$ estimators, but their average (horizontal dotted lines) is not so bad… The true value (i.e. the one used to generate the dataset) is the dotted black horizontal line.
And if we repeat that on 1,000 simulated datasets, we obtain the following distribution for $\hat{\beta}$ (blue line), so we have an unbiased estimator of our parameter (the vertical line being the true value), here including a stepwise procedure,

But if we add the red curve, which is the average of the $\hat{\beta}_j$'s obtained on the subsamples (the previous density being now the light blue line in the back), we see that taking the average of estimators on subsamples is not bad at all, on the contrary,

and for those who think that the stepwise procedure is a mistake, here is what we get without it,

So what we can see is that running 20 regressions can take (a little) more time (from what we’ve seen earlier) than running only one on the whole dataset…. but it provides better estimates. So the tradeoff is not that simple, and maybe running several regressions on huge datasets can be a proper alternative.
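
For completeness, a minimal sketch of the averaging idea on a single coefficient (my own reconstruction, looking at the coefficient on X4, which is an arbitrary choice; not the exact code used for the figures):

> library(splines)
> k=20; classe=rep(1:k,length.out=nrow(base))
> beta_full=coef(glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
+   family=poisson,data=base))["X4"]
> beta_sub=sapply(1:k,function(j) coef(glm(Y~bs(X1)+X2+X3+X4+X5+X6+
+   offset(log(E)),family=poisson,data=base,subset=classe==j))["X4"])
> c(beta_full,mean(beta_sub))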

Les tables de mortalité

As promised, a short post explaining the main mortality tables used,

  • The TD and TV 88-90 tables

These tables are a bit old, and if I keep mentioning them in class, it is because they are simple to use (and to keep pretending that I have not aged since my own studies). They are actually serious enough to appear in the law (here), in a decree from April 1993. The so-called TD 88-90 table (D for Décès, death) was built by INSEE from observations made between 1988 and 1990 on a male population. It was used to compute the premiums of death-benefit insurance contracts. The so-called TV 88-90 table (V for Vie, life) was built by INSEE from observations made between 1988 and 1990 on a female population. It was used to compute the premiums of contracts paying benefits in case of survival. These tables can be downloaded with the following code,

> TD=read.table("https://perso.univ-rennes1.fr/arthur.charpentier/TD8890.csv",
+ sep=";",header=TRUE)
> TV=read.table("https://perso.univ-rennes1.fr/arthur.charpentier/TV8890.csv",
+ sep=";",header=TRUE)
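
From these tables, one can then compute, for instance, one-year death probabilities; a quick sketch, assuming the file has an age column Age and a column Lx with the number of survivors (hypothetical column names, to be adjusted to the actual file):

> qx = 1 - TD$Lx[-1]/TD$Lx[-nrow(TD)]
> plot(TD$Age[-nrow(TD)], qx, type="l", xlab="age", ylab="qx")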

These tables have been replaced by the so-called TH and TF tables, respectively.

  • The TH and TF 00-02 tables

These tables were built from INSEE data on the French population between 2000 and 2002, and were smoothed. They are generational tables, which require an age correction to account for mortality differences between generations. They apply from January 1st, 2006. The Institut des actuaires published a “user guide” online here, and Cimon had discussed it on his blog.

  • The TPRV 93 table

The TPRV 93 table (for Table Prospective de Rente Viagère) is an extract from the so-called floor table for the pricing of life annuity contracts. It was published in the decree of July 28, 1993 (here, without the appendices), and corresponds to a prospective table tracking the mortality of the generations 1887 to 1993 (prospective tables are on the Master 2 syllabus).

The TPRV 93 is the complete table for the 1950 generation. The table is online here (in csv), and can be read into R with the following code,

> TPRV=read.table("https://perso.univ-rennes1.fr/arthur.charpentier/TPRV.csv",
+ sep=";",header=TRUE)

But this is only the first step. An age shift (“décalage d’âge”) is then applied.

We discussed this point in the tutorial session (but on mortality rates); it is also what is called the Rueff hypothesis, which translates the decrease of mortality rates into a rejuvenation (i.e. a gain in ages).

Note that the age shift actually depends on the level of interest rates. All the age-shift tables are online here.

  • The TGH and TGF 05 tables

Here again, these tables are based on a population of annuitants, as for the TPRV (here for the 38 pages of the official text). Hence, they were built on a different population from the TH and TF tables (which are based on the whole French population). The methodology is described here.