All posts by Arthur Charpentier

Arthur Charpentier, professor of Actuarial Science in Montréal. Former assistant professor at ENSAE Paristech, associate professor at Ecole Polytechnique, and assistant professor in Economics at Université de Rennes 1. Graduate of ENSAE, Master in Mathematical Economics (Paris Dauphine), PhD in Mathematics (KU Leuven), and Fellow of the French Institute of Actuaries.

SIdE Summer School, day 1

This morning, we start the SIdE (Italian Econometric Association) Summer School on Machine Learning Algorithms for Econometricians. Emmanuel Flachaire will start with a presentation of nonparametric econometric techniques. I will then get back to the geometry of (standard) econometric techniques, to introduce kernels. The first series of slides is online.

I will then spend more time on the (popular) idea of “least squares” and mention other loss functions. Slides are online.
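To make that idea concrete, here is a tiny sketch (not taken from the slides; the data and loss functions are made up) fitting the same linear model under a squared loss and under an absolute loss, with optim so that the loss function stays explicit:

# compare two loss functions on the same simulated regression problem
set.seed(1)
x = runif(100)
y = 1 + 2 * x + rt(100, df = 2)    # heavy-tailed noise
# estimate (intercept, slope) by minimising an arbitrary loss on the residuals
loss_fit = function(loss)
  optim(c(0, 0), function(b) sum(loss(y - b[1] - b[2] * x)))$par
rbind(least_squares  = loss_fit(function(e) e^2),
      least_absolute = loss_fit(function(e) abs(e)))

With heavy-tailed noise, the absolute-loss fit is typically less affected by extreme observations than the least-squares one.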

Optimal transport on large networks

With Alfred Galichon and Lucas Vernet, we recently uploaded a paper entitled Optimal Transport on Large Networks on arXiv.

This article presents a set of tools for modeling a spatial allocation problem in a large geographic market, and gives examples of applications. In our setting, the market is described by a network that maps the cost of travel between each pair of adjacent locations. Two types of agents are located at the nodes of this network. Buyers choose the most competitive sellers depending on their prices and on the cost of reaching them; their utility is assumed to be additive in both quantities. Each seller, taking the other sellers' prices as given, sets her own price so that her demand equals the one observed in the data. We give a linear programming formulation of the equilibrium conditions. After formally introducing our model, we apply it to two examples: prices offered by petrol stations, and the quality of services provided by maternity wards (only the latter is described here, for privacy reasons). These examples illustrate the applicability of our model to aggregate demand, rank prices and estimate cost structures over the network. We insist on the possibility of applying it to large-scale data sets using modern linear programming solvers such as Gurobi.

Demand for gas at gas stations in Brittany, and demand for maternity wards in France (with border correction)

In addition to this paper, we released an R toolbox implementing our results, together with an online tutorial, optimalnetwork.github.io.
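For readers who just want to see what the linear programming side looks like, here is a toy transport problem on a tiny network (made-up supplies, demands and travel costs; this is not the toolbox above), solved with the lpSolve package:

library(lpSolve)
# two sellers (supplies) and three buyer locations (demands)
supply = c(40, 60)
demand = c(30, 30, 40)
# cost[i, j] = travel cost from seller i to buyer location j
cost = matrix(c(1, 2, 3,
                4, 1, 2), nrow = 2, byrow = TRUE)
sol = lp.transport(cost, "min",
                   row.signs = rep("<=", 2), row.rhs = supply,
                   col.signs = rep(">=", 3), col.rhs = demand)
sol$solution   # optimal shipment plan
sol$objval     # minimal total transport cost

The equilibrium problem in the paper adds prices and observed demands on top of this structure, but the resulting program remains linear, which is what makes solvers such as Gurobi usable at scale.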

Pareto Models for Top Incomes

This week, the Society for the Study of Economic Inequality (ECINEQ) organised the Eighth ECINEQ Meeting 2019 in Paris, hosted by the Paris School of Economics and the World Inequality Lab. Emmanuel Flachaire was there to present our joint work on Pareto Models for Top Incomes. Slides are also available.


The paper is still available on hal, and the package (TopIncomes) is also available from github,

library(devtools)                              # provides install_github()
install_github("freakonometrics/TopIncomes")   # install the package from github
library(TopIncomes)

On my way to Manizales (Colombia)

Next week, I will be in Manizales, Colombia, for the Third International Congress on Actuarial Science and Quantitative Finance. I will be giving a lecture on Wednesday, with Jed Frees and Emiliano Valdez.

I will give my course on Algorithms for Predictive Modeling on Thursday morning (after Jed and Emil's lectures). Unfortunately, my computer locked itself last week and I could not unlock it (neither could the IT team at the university, who hold the internal EFI password). So I will not be able to work further on the slides, and the course will be based on the current version (clearly still in progress).


Fifteen years…

Almost fifteen years ago, to the day, in June 2004, the first volume of Mathématiques de l'Assurance Non-Vie arrived in bookstores…

Fifteen years already! While the second volume is a bit dated today (it sticks to GLMs and GAMs, and many other statistical techniques could have been presented had we taken the time to prepare a truly new edition), the first one still seems up to date to me. In hindsight, I would put the more economics-oriented section in the first volume, and add a section on competition… but fifteen years on, I am still proud to see this book sitting on the shelves of colleagues, as well as of many practitioners…

Pareto Models for Top Incomes

With Emmanuel Flachaire, we uploaded on hal a paper on Pareto Models for Top Incomes,

Top incomes are often related to the Pareto distribution. To date, economists have mostly used the Pareto Type I distribution to model the upper tail of income and wealth distributions. It is a parametric distribution with an attractive property that can easily be linked to economic theory. In this paper, we first show that modelling top incomes with the Pareto Type I distribution can lead to severe over-estimation of inequality, even with millions of observations. We then show that the Generalized Pareto distribution, and even more so the Extended Pareto distribution, are much less sensitive to the choice of threshold, and thus provide more reliable results. We discuss different types of bias that can be encountered in empirical studies and provide some guidance for practice. To illustrate, two applications are investigated: the distribution of income in South Africa in 2012, and the distribution of wealth in the United States in 2013.
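As a rough illustration of the point (this is not the paper's code, and the TopIncomes package mentioned below is the proper tool), one can compare a Pareto Type I fit and a Generalized Pareto fit above a threshold, on simulated Pareto data:

set.seed(1)
x = exp(rexp(1e5, rate = 2))    # simulated incomes, Pareto(alpha = 2) tail
u = quantile(x, .95)            # threshold: top 5%
z = x[x > u]
# Pareto Type I: tail index alpha by maximum likelihood (Hill estimator)
alpha_hat = 1 / mean(log(z / u))
# Generalized Pareto: (xi, sigma) by maximum likelihood on the exceedances
nll_gpd = function(par, y){
  xi = par[1]; sigma = exp(par[2])
  if(any(1 + xi * y / sigma <= 0)) return(Inf)
  length(y) * log(sigma) + (1 / xi + 1) * sum(log(1 + xi * y / sigma))
}
fit = optim(c(.5, log(sd(z - u))), nll_gpd, y = z - u)
c(alpha_pareto1 = alpha_hat, xi_gpd = fit$par[1], sigma_gpd = exp(fit$par[2]))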

This paper was presented at UCSB and in several workshops this spring, and this summer Emmanuel will present it at ECINEQ.

Note that an R package, TopIncomes, is also available on github.

Données Agrégées et Variables Compositionnelles

With Enora Belz, we have just put online a methodological note, Données Agrégées et Variables Compositionnelles (aggregated data and compositional variables), on hal.

The reform of personal data law in Europe makes access to individual data difficult (even when the data are not nominative), especially when one is looking for data considered sensitive (and income often falls into that category). A solution often considered is to release spatially aggregated data. Such data nevertheless raise two technical problems. The first is that categorical variables become compositions. The second is related to the ecological fallacy, which says that it is dangerous to infer individual econometric relationships from aggregated data. We show here how to work with compositional data (possibly just to validate a classical linear regression approach, which is simpler to interpret). We also mention the second problem, which unfortunately remains too general to be treated in a satisfactory way.
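As a minimal sketch of what working with compositional covariates can look like (simulated data and a simple centred log-ratio transform; this is not the methodology of the note):

set.seed(1)
n = 200
# shares of three income brackets in each spatial unit (they sum to 1)
raw    = matrix(rgamma(3 * n, shape = 2), n, 3)
shares = raw / rowSums(raw)
y = 1 + 2 * shares[, 3] + rnorm(n, sd = .1)    # aggregated outcome per unit
# centred log-ratio transform of the composition
clr = log(shares) - rowMeans(log(shares))
# the clr coordinates sum to zero, so one of them is dropped in the regression
summary(lm(y ~ clr[, 1] + clr[, 2]))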

Extended Scale-Free Networks

With Emmanuel Flachaire, we recently uploaded a short article, Extended Scale-Free Networks, on arXiv.

Recently, Broido & Clauset (2019) mentioned that (strict) Scale-Free networks were rare, in real life. This might be related to the statement of Stumpf, Wiuf & May (2005), that sub-networks of scale-free networks are not scale-free. In the later, those sub-networks are asymptotically scale-free, but one should not forget about second-order deviation (possibly also third order actually). In this article, we introduce a concept of extended scale-free network, inspired by the extended Pareto distribution, that actually is maybe more realistic to describe real network than the strict scale free property. This property is consistent with Stumpf, Wiuf & May (2005): sub-network of scale-free larger networks are not strictly scale-free, but extended scale-free.
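To see the phenomenon on simulated data, here is a rough sketch (igraph's preferential-attachment graph stands in for a scale-free network, and the tail index of the degree distribution is estimated with a crude Hill-type formula):

library(igraph)
set.seed(1)
G  = sample_pa(1e4, power = 1, m = 3, directed = FALSE)   # "scale-free" network
Gs = induced_subgraph(G, sample(vcount(G), 5e3))          # random sub-network
# crude tail index of the degree distribution, above a minimal degree
tail_index = function(g, dmin = 5){
  d = degree(g)
  d = d[d >= dmin]
  1 + 1 / mean(log(d / dmin))
}
c(full = tail_index(G), sub = tail_index(Gs))

The estimated tail behaviour of the sub-network typically differs from that of the full network, which is the kind of deviation the extended scale-free family is meant to capture.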

Estimates on training vs. validation samples

Before moving to cross-validation, it was natural to say “I will burn 50% (say) of my data to train a model, and then use the remaining observations to fit it”. For instance, we can use the training data for variable selection (e.g. using some stepwise procedure in a logistic regression), and then, once the variables have been selected, fit the model on the remaining set of observations. A natural question is “does it really matter?”.

In order to visualize this problem, consider my (simple) dataset

# myocardial infarction data: 7 covariates and a prognosis variable PRONO
MYOCARDE = read.table("http://freakonometrics.free.fr/saporta.csv",
                      header = TRUE, sep = ";")

Let us generate 100 training samples (keeping about 50% of the observations in each). On each of them, we run a stepwise procedure and keep the estimates of the selected variables (and their standard errors, actually); we then refit the selected model on the corresponding validation sample.

n = nrow(MYOCARDE)
M = matrix(NA, 100, ncol(MYOCARDE))
colnames(M) = c("(Intercept)", names(MYOCARDE)[1:7])
S1 = S2 = M1 = M2 = M
for(i in 1:100){
  # keep about 50% of the observations as the training sample
  idx = which(sample(0:1, size = n, replace = TRUE) == 1)
  # stepwise (AIC) variable selection on the training sample, for a logistic
  # regression (family = binomial is made explicit here)
  reg = step(glm(PRONO == "DECES" ~ ., data = MYOCARDE[idx,], family = binomial))
  nm = names(reg$coefficients)
  M1[i, nm] = reg$coefficients                 # estimates (training)
  S1[i, nm] = summary(reg)$coefficients[, 2]   # standard errors (training)
  # refit the selected model on the validation sample
  f = paste("PRONO=='DECES'~", paste(nm[-1], collapse = "+"), sep = "")
  reg = glm(f, data = MYOCARDE[-idx,], family = binomial)
  M2[i, nm] = reg$coefficients                 # estimates (validation)
  S2[i, nm] = summary(reg)$coefficients[, 2]   # standard errors (validation)
}

Then, for the 7 covariates (and the constant), we can look at the value of the coefficient in the model fitted on the training sample, and the value in the model fitted on the validation sample (of course, only when the variable remained in the model after the stepwise selection).

for(j in 1:8){
  idx = which(!is.na(M1[, j]))   # samples where variable j was selected
  plot(M1[idx, j], M2[idx, j],
       xlab = "estimate (training)", ylab = "estimate (validation)")
  abline(a = 0, b = 1, lty = 2, col = "gray")
  # +/- 2 standard errors: horizontal segments for the training sample,
  # vertical segments for the validation sample
  segments(M1[idx, j] - 2 * S1[idx, j], M2[idx, j],
           M1[idx, j] + 2 * S1[idx, j], M2[idx, j])
  segments(M1[idx, j], M2[idx, j] - 2 * S2[idx, j],
           M1[idx, j], M2[idx, j] + 2 * S2[idx, j])
}

For instance, with the intercept, we have the following

[Figure: intercept estimated on the training sample vs. on the validation sample, with ±2 standard-error segments]

where the horizontal segments are confidence intervals for the parameter in the model fitted on the training sample, and the vertical ones in the model fitted on the validation sample. The green region indicates some sort of consistency, while the red one means that the coefficient was negative with one model and positive with the other, which is odd (but in that case, observe that the coefficients are rarely significant).
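A quick way to quantify that last remark, using the matrices computed above (this check is an addition, not part of the original discussion): count how often the two estimates disagree in sign, and whether those coefficients were significant on the training sample.

# sign disagreement between training and validation estimates,
# against rough significance (|estimate| > 2 standard errors) on training
disagree     = sign(M1) != sign(M2)
signif_train = abs(M1 / S1) > 2
table(disagree, signif_train)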

We can also visualize the joint distribution of the two estimators,

library(ks)   # kernel smoothing tools (Hpi, kde)
for(j in 1:8){
  idx  = which(!is.na(M1[, j]))
  Z    = cbind(M1[idx, j], M2[idx, j])
  H    = Hpi(x = Z)            # plug-in bandwidth matrix
  fhat = kde(x = Z, H = H)     # bivariate kernel density estimate
  image(fhat$eval.points[[1]], fhat$eval.points[[2]], fhat$estimate)
  abline(a = 0, b = 1, lty = 2, col = "gray")
  abline(v = 0, lty = 2)
  abline(h = 0, lty = 2)
}

which are here, almost on the diagonal,

meaning that the intercept on the two samples is (more or less) the same. We can then look at other parameters (which is actually more interesting).

On that variable, the coefficient seems to be significant on the training dataset (somehow consistent with the fact that it remains in the model after the stepwise procedure) but not, or hardly, significant on the validation sample.

Others are much more consistent (with some possible outliers)

[Figures: training vs. validation estimates for other covariates]

On the next one, we have again significance on the training sample, but not on the validation sample,

[Figures: training vs. validation estimates for the next covariates]

and probably more interesting

where the two are very consistent.