All posts by Arthur Charpentier

Arthur Charpentier, professor of Actuarial Science in Montréal. Former assistant professor at ENSAE ParisTech, associate professor at Ecole Polytechnique, and assistant professor in Economics at Université de Rennes 1. Graduate of ENSAE, Master in Mathematical Economics (Paris Dauphine), PhD in Mathematics (KU Leuven), and Fellow of the French Institute of Actuaries.

Douce France

Yesterday, on Twitter, I mentioned a map published by Times Higher Education about the “100 best universities” (worldwide). I no longer remember how I came across those maps, by country, showing where the 100 best universities are located (usually we only get a ranking, but here, for the first time, we had maps),

The first tweet, in English, showing the map went fairly unnoticed. Not so the second one, in French, which was widely shared, and in which I asked the question more directly: “will you be able to spot a geographic peculiarity of French higher education?”. Because I find the French map rather shocking. Someone put it rather elegantly: the “ville lumière” casts a shadow over all the other French cities. And that is putting it mildly! In every other comparable country, with 4, 5 or 6 universities in this ranking, there is some relative dispersion across the territory. But in France, there is Paris. And that’s it.

It is amusing, because this week I started the book Unknown Quantity by John Derbyshire, which tells the history of algebra in a very accessible way (and therefore perfect for me). And in one of the first chapters (we are still in Mesopotamia) there is a paragraph that strongly reminded me of this map

Indeed, I should dig a little deeper, but I have a hard time seeing what will come out of this typically French geographic mega-concentration of higher education. Nothing good, I fear…

Copulas and Financial Time Series

I was recently asked to write a survey on copulas for financial time series. The paper is, so far, unfortunately only in French, and is available on https://hal.archives-ouvertes.fr/. It describes various models, including some graphs and statistical outputs obtained from real data.

To illustrate, I’ve been using weekly log-returns of (crude) oil prices: Brent, Dubai and Maya.

The dataset is available as an Excel file, oil.xls (I thought it was possible to load it directly from the internet, but it did not work… so I suggest downloading the file first, and then loading it)

> library(xlsx)
> temp <- tempfile()
> download.file(
+ "http://freakonometrics.free.fr/oil.xls",temp)
trying URL 'http://freakonometrics.free.fr/oil.xls'
Content type 'application/vnd.ms-excel' length 99328 bytes (97 KB)
downloaded 97 KB
> oil=read.xlsx(temp,sheetName="DATA",dec=",")
Error in .jcall("RJavaTools", "Ljava/lang/Object;", "invokeMethod", cl,  : 
  java.io.IOException: block[ 0 ] already removed - does your POIFS have circular or duplicate block references?
> oil=read.xlsx("D:\\home\\acharpen\\mes documents\\oil.xls",sheetName="DATA")
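
As a side note, a possible workaround (my own suggestion, not the code used in the post) is the readxl package, which avoids the Java dependency of xlsx; the call below is only a sketch:

> library(readxl)
> download.file("http://freakonometrics.free.fr/oil.xls",
+ destfile="oil.xls",mode="wb")
> # read_excel returns a tibble, hence the as.data.frame()
> oil=as.data.frame(read_excel("oil.xls",sheet="DATA"))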

Then we can plot those three time series

> head(oil)
        Date      WTI    brent   Dubai     Maya
1 1997-01-10  2.73672  2.25465  3.3673   1.5400
2 1997-01-17 -3.40326 -6.01433 -3.8249  -4.1076
3 1997-01-24 -4.09531 -1.43076 -6.6375  -4.6166
4 1997-01-31 -0.65789  0.34873  0.7326  -1.5122
5 1997-02-07 -3.14293 -1.97765 -0.7326  -1.8798
6 1997-02-14 -5.60321 -7.84534 -7.6372 -11.0549

> Time=as.Date(oil$Date,"%Y-%m-%d")
> plot(Time,oil[,3],type="l",ylab="Brent, weekly log returns",ylim=range(oil[,3:5]))
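
The two other series can be overlaid on the same graph (a small addition of mine, with arbitrary colours):

> lines(Time,oil[,4],col="blue")
> lines(Time,oil[,5],col="red")
> legend("topleft",c("Brent","Dubai","Maya"),
+ col=c("black","blue","red"),lty=1,bty="n")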

The idea is to use multivariate ARMA-GARCH processes here. The heuristic is that the ARMA part models the dynamics of the conditional mean of the time series, while the GARCH part models the dynamics of the conditional variance. Two kinds of models are considered in the paper

  • a multivariate GARCH process (i.e. a model for the dynamics of the variance matrix) on the residuals from the ARMA models
  • a multivariate model (based on copulas) on the residuals of the ARMA-GARCH processes (a minimal sketch of this second approach is given below)
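
To give an idea of the second approach, here is a minimal sketch, not the code from the paper: univariate ARMA(1,1)-GARCH(1,1) models are fitted with the rugarch package, and a Gaussian copula is then fitted on the standardized residuals with the copula package (the packages, the orders and the Gaussian copula are my own choices, for illustration only).

> library(rugarch)
> library(copula)
> # same ARMA(1,1)-GARCH(1,1) specification for the three series
> spec=ugarchspec(mean.model=list(armaOrder=c(1,1)),
+ variance.model=list(model="sGARCH",garchOrder=c(1,1)))
> fits=lapply(oil[,c("brent","Dubai","Maya")],
+ function(x) ugarchfit(spec,data=x))
> # standardized residuals, mapped to pseudo-observations in (0,1)
> Z=sapply(fits,function(f) as.numeric(residuals(f,standardize=TRUE)))
> U=pobs(Z)
> # unstructured Gaussian copula fitted by maximum pseudo-likelihood
> fit_cop=fitCopula(normalCopula(rep(.3,3),dim=3,dispstr="un"),
+ U,method="mpl")
> summary(fit_cop)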

Continue reading Copulas and Financial Time Series

Review of ‘Computational Actuarial Science with R’

Andrey Kosteko recently published a review of the book Computational Actuarial Science with R in JRSS-A. As mentioned in the review, we still need to improve the GitHub repository, where the code is supposed to be uploaded. And as mentioned in a previous post, the package that contains all the datasets is not hosted on CRAN but can be found at http://cas.uqam.ca/. Hence, use

> install.packages("CASdatasets", repos = "http://cas.uqam.ca/pub/R/", type="source")
> library(CASdatasets)
> ?CASdatasets

Segmentation et Mutualisation, les deux faces d’une même pièce ?

This post is co-written with Michel Denuit and Romuald Elie. It is the preliminary version of an article that will soon be submitted for publication.

Insurance fundamentally rests on the idea that pooling risks across policyholders is possible. This pooling, which can be seen as an actuarial rereading of the law of large numbers, only makes sense within a population of “homogeneous” risks (Charpentier [2011]). This (actuarial) condition forces insurers to segment, as several economic studies confirm. With the explosion in the amount of data, and therefore of possible rating variables, some insurers are raising the idea of an individual premium, which seems to call into question the very idea of risk pooling. Between this force pushing towards segmentation and the restoring force that tends (for social but also actuarial reasons, or at least for statistical robustness[1]) to impose a minimal solidarity among policyholders, what equilibrium will emerge, in a context of strong competition between insurance companies?

Continue reading Segmentation et Mutualisation, les deux faces d’une même pièce ?

Working with “large” datasets, with dplyr and data.table

A few months ago, I was doing some training on data science for actuaries, and I started to get interesting, puzzling questions. For instance, Fleur was working on telematic data, and she has been challenging my (rudimentary) knowledge of R. As Donald Knuth claimed, “we should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil”. So usually, in my courses and training sessions, the code is very basic and easy to understand, but often poorly efficient. Since I was challenged to work on very large datasets, we have been working on R functions to manipulate those (possibly very) large datasets, and to run simple filter and aggregation functions as fast as possible.

In order to illustrate, let us generate our “large” telematic dataset. Assume that we have 10,000 drivers, each of them drives about 200 times, and each time, we have, say, 80 locations. That means around 160 million observations. It is “large”, but not huge.

> rm(list=ls())
> N_id=10000
> N_tr=200
> T_tr=80

In order to have a code as general as possible, assume that we have some kind of randomness,

> set.seed(1)
> N=rpois(N_id,N_tr)
> N_traj=rpois(sum(N),T_tr)

By “observation”, we mean a driver Id, a trajectory Id, and a location (latitude and longitude) at specific dates (e.g. every 15 sec.). Again, just because we want some dataset to illustrate, we will draw the drivers’ home locations randomly (here uniformly on some square)

> origin_lat=runif(N_id,-5,5)
> origin_lon=runif(N_id,-5,5)

And, then, from those locations, we generate a 2-dimensional random walk,

> lat=lon=Traj_Id=rep(NA,sum(N_traj))
> Pers_Id=rep(NA,length(N_traj))
> s=1
> for(i in 1:N_id){Pers_Id[s:(s+N[i]-1)]=i;s=s+N[i]}
> s=1
> for(i in 1:length(N_traj)){lat[s:(s+N_traj[i]-1)]=origin_lat[Pers_Id[i]]+
+  cumsum(c(0,rnorm(N_traj[i]-1,0,sd=.2)));
+  lon[s:(s+N_traj[i]-1)]=origin_lon[Pers_Id[i]]+
+  cumsum(c(0,rnorm(N_traj[i]-1,0,sd=.2)));
+  s=s+N_traj[i]}

We have something which looks like
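
Since the post is about dplyr and data.table, here is a minimal sketch of the kind of simple aggregation discussed, run on a per-observation table built from the vectors above (the data frame df, the driver column and the counting query are my own illustration, not necessarily the benchmarks of the post):

> library(dplyr)
> library(data.table)
> df=data.frame(driver=rep(Pers_Id,N_traj),lat=lat,lon=lon)
> # dplyr: number of recorded locations per driver
> df %>% group_by(driver) %>% summarise(n_obs=n())
> # data.table: the same aggregation, usually faster on large tables
> dt=as.data.table(df)
> dt[,.(n_obs=.N),by=driver]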

Continue reading Working with “large” datasets, with dplyr and data.table

I Fought the (distribution) Law (and the Law did not win)

A few days ago, I was asked whether we should spend a lot of time choosing the distribution we use, in GLMs, for (actuarial) ratemaking. On that topic, I usually claim that the family is not the most important choice in the regression model. Consider the following dataset

> db <- data.frame(x=c(1,2,3,4,5),y=c(1,2,4,2,6))
> plot(db,xlim=c(0,6),ylim=c(-1,8),pch=19)

To visualize a regression model, use the following code

> nd=data.frame(x=seq(0,6,by=.1))
> add_predict = function(reg){
+ prd1=predict(reg,newdata=nd,se.fit = TRUE,type="response")
+ y1=prd1$fit
+ y1_upp=prd1$fit+prd1$residual.scale*1.96*prd1$se.fit
+ y1_low=prd1$fit-prd1$residual.scale*1.96*prd1$se.fit
+ polygon(c(nd$x,rev(nd$x)),c(y1_upp,rev(y1_low)),
+ col="light green",angle=90,density=40,border=NA)
+ lines(nd$x,y1,col="red",lwd=2)
+ }

For instance, with a Poisson regression (with a log link function) we get

> plot(db)
> reg1=glm(y~x,family=poisson(link="log"),
+ data=db)
> add_predict(reg1)

while, with a Gaussian regression (but still with a log link function), we get

> plot(db)
> reg2=glm(y~x,family=gaussian(link="log"),
+ data=db)
> add_predict(reg2)

If we just care about the expected value of our prediction, the output is more or less the same

> plot(db)
> lines(nd$x,predict(reg1,newdata=nd,
+ type="response"),col="red",lwd=1.5)
> lines(nd$x,predict(reg2,newdata=nd,
+ type="response"),col="blue",lwd=1.5)

So, indeed, forget about the (distribution) law when running a GLM. Not convinced? Consider – on the same dataset – a Poisson regression (with an identity link function this time)

> plot(db)
> reg1=glm(y~x,family=poisson(link="identity"),
+ data=db)
> add_predict(reg1)

while, with a Gaussian regression (but still with an identity link function), we get

> plot(db)
> reg2=glm(y~x,family=gaussian(link="identity"),
+ data=db)
> add_predict(reg2)

Again, if we just plot the expected value of our prediction, the output is more or less the same

> plot(db)
> lines(nd$x,predict(reg1,newdata=nd,
+ type="response"),col="red",lwd=1.5)
> lines(nd$x,predict(reg2,newdata=nd,
+ type="response"),col="blue",lwd=1.5)

So clearly, the simplistic message that you should not care too much about the (distribution) law seems to be valid: with a given link function, the family mainly changes the width of the confidence bands (through the variance assumption), while the predicted means remain more or less the same…
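
As a quick numerical check (a small sketch of mine, not in the original post), one can also compare the estimated coefficients and fitted values of the last two regressions:

> coef(reg1)
> coef(reg2)
> cbind(poisson=fitted(reg1),gaussian=fitted(reg2))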

Continue reading I Fought the (distribution) Law (and the Law did not win)

Visualising a Classification in High Dimension, part 2

A few weeks ago, I published a post on Visualising a Classification in High Dimension, based on the use of a principal component analysis to get a projection on the first two components. Following that post, I was wondering what could be done in the context of a classification on categorical covariates. A natural idea would be to consider a (multiple) correspondence analysis, and to run a similar code.

Consider here the dataset used in a recent post,

> source("http://freakonometrics.free.fr/import_data_credit.R")

If we consider a multiple correspondence analysis, we get

> library(FactoMineR)
> acm=MCA(train.db,quali.sup = 
+ which(names(train.db,)=="class"),ncp=10)

For the covariates (including the variable we want to model, considered here as a supplementary variable), the visualisation on the first two components is

and for the individuals
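
As a hedged sketch (my own code, not necessarily the post's), the individuals' projection can also be redrawn by hand and coloured by the class variable, assuming class is a two-level factor in train.db:

> plot(acm$ind$coord[,1:2],
+ col=c("blue","red")[train.db$class],pch=19,
+ xlab="Dim 1",ylab="Dim 2")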

Continue reading Visualising a Classification in High Dimension, part 2