All posts by Arthur Charpentier

Arthur Charpentier, professor of Actuarial Science in Montréal. Former assistant professor at ENSAE Paristech, associate professor at Ecole Polytechnique, and assistant professor in Economics at Université de Rennes 1. Graduate of ENSAE, with a Master in Mathematical Economics (Paris Dauphine) and a PhD in Mathematics (KU Leuven), and Fellow of the French Institute of Actuaries.

An Update on Boosting with Splines

In my previous post, An Attempt to Understand Boosting Algorithm(s), I was puzzled by the convergence of boosting when using spline functions (more specifically, piecewise-linear, continuous regression functions). I was using

> library(splines)
> fit=lm(y~bs(x,degree=1,df=3),data=df)

The problem with that spline function is that the knots are fixed (at quantiles of the covariate). The iterative boosting algorithm is

  • start with some regression model $m_1(\boldsymbol{x})$,
  • compute the residuals, including some shrinkage parameter, $\varepsilon_1=y-\nu\, m_1(\boldsymbol{x})$,

then the strategy is to model those residuals

  • at step $k$, consider the regression $m_k(\boldsymbol{x})$ of $\varepsilon_{k-1}$ on $\boldsymbol{x}$,
  • update the residuals, $\varepsilon_k=\varepsilon_{k-1}-\nu\, m_k(\boldsymbol{x})$,

and to loop. Then set $\widetilde{m}(\boldsymbol{x})=\nu\, m_1(\boldsymbol{x})+\nu\, m_2(\boldsymbol{x})+\cdots+\nu\, m_k(\boldsymbol{x})$.
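In R, one step of that scheme, with the fixed-knot spline above as weak learner, looks roughly as follows (a minimal sketch only; the dataset df and the shrinkage v=.05 are the ones used below, and the full free-knot version is given further down),

> v=.05
> library(splines)
> fit=lm(y~bs(x,degree=1,df=3),data=df)  # weak learner, fixed-knot linear spline
> yp=predict(fit,newdata=df)
> df$yr=df$y - v*yp                      # shrunken residuals, modelled at the next step
> YP=v*yp                                # running sum of the shrunken predictions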

I thought that boosting would work well if, at step $k$, it was possible to change the knots. But the output was quite disappointing: boosting does not improve the prediction here, and the knots do not seem to change. Actually, if we select the ‘best’ knots at each step, the output is much better. The dataset is still

> n=300
> set.seed(1)
> u=sort(runif(n)*2*pi)
> y=sin(u)+rnorm(n)/4
> df=data.frame(x=u,y=y)

For an optimal choice of knot locations, we can use

> library(freeknotsplines)
> xy.freekt=freelsgen(df$x, df$y, degree = 1, 
+ numknot = 2, 555)
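The selected knot locations are stored in the optknot slot of the returned object, and can be inspected before being passed to bs(),

> xy.freekt@optknot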

The code of the previous post can simply be updated

> v=.05
> library(splines)
> xy.freekt=freelsgen(df$x, df$y, degree = 1, 
+ numknot = 2, 555)
> fit=lm(y~bs(x,degree=1,knots=
+ xy.freekt@optknot),data=df)
> yp=predict(fit,newdata=df)
> df$yr=df$y - v*yp
> YP=v*yp
>  for(t in 1:200){
+    xy.freekt=freelsgen(df$x, df$yr, degree = 1,
+    numknot = 2, 555)
+ fit=lm(yr~bs(x,degree=1,knots=
+     xy.freekt@optknot),data=df)
+    yp=predict(fit,newdata=df)
+    df$yr=df$yr - v*yp
+    YP=cbind(YP,v*yp)
+  }
>  nd=data.frame(x=seq(0,2*pi,by=.01))
>  viz=function(M){
+    if(M==1)  y=YP[,1]
+    if(M>1)   y=apply(YP[,1:M],1,sum)
+    plot(df$x,df$y,ylab="",xlab="")
+    lines(df$x,y,type="l",col="red",lwd=3)
+    fit=lm(y~bs(x,degree=1,df=3),data=df)
+    yp=predict(fit,newdata=nd)
+    lines(nd$x,yp,type="l",col="blue",lwd=3)
+    lines(nd$x,sin(nd$x),lty=2)}
 
>  viz(100)

I like that graph. I had the intuition that boosting with (simple) splines should work, and indeed, we get a very smooth prediction.

Variable Selection using Cross-Validation (and Other Techniques)

A natural technique to select variables in the context of generalized linear models is to use a stepwise procedure. It is natural, but controversial, as discussed by Frank Harrell in a great post, clearly worth reading. Frank mentions about ten points against stepwise procedures (a minimal illustration of such a procedure is sketched after the list).

  • It yields R-squared values that are badly biased to be high.
  • The F and chi-squared tests quoted next to each variable on the printout do not have the claimed distribution.
  • The method yields confidence intervals for effects and predicted values that are falsely narrow (see Altman and Andersen (1989)).
  • It yields p-values that do not have the proper meaning, and the proper correction for them is a difficult problem.
  • It gives biased regression coefficients that need shrinkage (the coefficients for remaining variables are too large; see Tibshirani (1996)).
  • It has severe problems in the presence of collinearity.
  • It is based on methods (e.g., F tests for nested models) that were intended to be used to test prespecified hypotheses.
  • Increasing the sample size does not help very much (see Derksen and Keselman (1992)).
  • It allows us to not think about the problem.
  • It uses a lot of paper.
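For the record, here is a minimal sketch of such a stepwise procedure in R, on simulated data, purely for illustration (the cross-validation alternative is discussed in the rest of the post),

> set.seed(123)
> X=matrix(rnorm(200*10),200,10)
> y=rbinom(200,1,plogis(X[,1]-X[,2]))
> db=data.frame(y=y,X)
> reg=glm(y~.,data=db,family=binomial)
> step(reg,direction="both",trace=0)  # AIC-based stepwise selection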

Continue reading Variable Selection using Cross-Validation (and Other Techniques)

An Attempt to Understand Boosting Algorithm(s)

Last Tuesday, at the annual meeting of the French Economic Association, I was having lunch with Alfred, and while we were chatting about modeling issues (econometric models versus machine learning predictions), he asked me what boosting was. Since I could not be very specific, we had a look at the Wikipedia page.

Boosting is a machine learning ensemble meta-algorithm for reducing bias primarily and also variance in supervised learning, and a family of machine learning algorithms which convert weak learners to strong ones

One should admit that it is not very informative. But at least, there is the idea that ‘weak learners’ can be used to provide a good predictor. Now, to be honest, I guess I understand the concept. But I still can’t reproduce what I got with standard ‘boosting’ packages.

There are a lot of publications about the concept of ‘boosting’. In 1988, Michael Kearns published Thoughts on Hypothesis Boosting, which is probably the oldest one. On the algorithmic side, several references can be found. Consider for instance Improving Regressors using Boosting Techniques, by Harris Drucker, or The Boosting Approach to Machine Learning: An Overview by Robert Schapire, among many others. In order to illustrate the use of boosting in the context of regression (and not classification, since I believe it provides a better visualisation), consider the section in Dong-Sheng Cao’s The boosting: A new idea of building models.

Continue reading An Attempt to Understand Boosting Algorithm(s)

‘Variable Importance Plot’ and Variable Selection

Classification trees are nice. They provide an interesting alternative to logistic regression. I started to include them in my courses maybe 7 or 8 years ago. The question is nice (how to get an optimal partition), the algorithmic procedure is nice (the trick of splitting according to one variable, and only one, at each node, and then moving forward, never backward), and the visual output is just perfect (with that tree structure). But the prediction can be rather poor. The performance of that algorithm can hardly compete with a (well-specified) logistic regression.

Then I discovered forests (see Leo Breiman’s page for a detailed presentation). Being a huge fan of bootstrap procedures, I loved the idea. In regression models, I usually mention the bootstrap to avoid asymptotic approximations: we bootstrap the rows (the observations). In the case of random forests, I have to admit that the idea of randomly selecting a set of possible variables at each node is very clever. The performance is much better, but interpretation is usually more difficult. And there is something I love when there are a lot of covariates: the variable importance plot, which is something we can hardly get with econometric models (please let me know if I’m wrong).

In order to illustrate, let us generate a large dataset. Not necessarily huge, but large, so that we really have to select variables.  Since it is more interesting if we have possibly correlated variables, we need a covariance matrix. There is a nice package in R to randomly generate covariance matrices.

> set.seed(1)
> n=500
> library(clusterGeneration)
> library(mnormt)
> S=genPositiveDefMat("eigen",dim=15)
> S=genPositiveDefMat("unifcorrmat",dim=15)
> X=rmnorm(n,varcov=S$Sigma)
> library(corrplot)
> corrplot(cor(X), order = "hclust")

See Ghosh & Henderson (2003) for more details on the methodology.
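To give an idea of what comes next, here is a minimal sketch of a random forest fit and its variable importance plot on those simulated covariates (the response y below is an arbitrary function of two columns, introduced only for this illustration, not the construction used later in the post),

> y=1+X[,1]-2*X[,2]+rnorm(n)/2
> db=data.frame(y=y,X)
> library(randomForest)
> rf=randomForest(y~.,data=db,importance=TRUE)
> varImpPlot(rf)   # %IncMSE and IncNodePurity, for each covariate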

Continue reading ‘Variable Importance Plot’ and Variable Selection

p-hacking, or cheating on a p-value

Yesterday evening, I discovered some interesting slides on False-Positives, p-Hacking, Statistical Power, and Evidential Value, via a post on Twitter. More precisely, there was this slide on how to cheat (because that’s basically what it is) to get a ‘good’ model (by targeting the p-value).

As mentioned by @david_colquhoun, one should be careful when reading the slides: some statisticians might have a heart attack when they read some of them.

But still, there are interesting points in that slide.

Continue reading p-hacking, or cheating on a p-value

Who interacts on Twitter during a conference (#JDSLille)

Disclaimer: This is a joint post with Avner Bar-Hen, a.k.a. @a_bh, Benjamin Guedj, a.k.a. @bguedj, and Nathalie Villa, a.k.a. @Natty_V2

Organised annually since 1970 by the French Society of Statistics (SFdS), the Journées de Statistique (JdS) are the most important scientific event of the French statistical community. More than 400 researchers, teachers and practitioners attend each edition. In 2015, JdS took place in Lille, France.

The SFdS regularly tweets (from the account @Statfr), and for the first time a live-tweet was organized during JdS. The hashtag was #JDSLille. The aim of this post is a (brief) statistical analysis of that live-tweet.

Continue reading Who interacts on Twitter during a conference (#JDSLille)

Probit Transformation for Nonparametric Kernel Estimation of the Copula Density, Lille

This Monday I will be in Lille to give a talk at the Journées de Statistique. The talk will be based on joint work with Gery Geenens and Davy Paindaveine, on “Probit transformation for nonparametric kernel estimation of the copula density”. The paper can be found online at http://arxiv.org/abs/1404.4414

“Copula modelling has become ubiquitous in modern statistics. Here, the problem of nonparametrically estimating a copula density is addressed. Arguably the most popular nonparametric density estimator, the kernel estimator is not suitable for the unit-square-supported copula densities, mainly because it is heavily affected by boundary bias issues. In addition, most common copulas admit unbounded densities, and kernel methods are not consistent in that case. In this paper, a kernel-type copula density estimator is proposed. It is based on the idea of transforming the uniform marginals of the copula density into normal distributions via the probit function, estimating the density in the transformed domain, which can be accomplished without boundary problems, and obtaining an estimate of the copula density through back-transformation. Although natural, a raw application of this procedure was, however, seen not to perform very well in the earlier literature. Here, it is shown that, if combined with local likelihood density estimation methods, the idea yields very good and easy to implement estimators, fixing boundary issues in a natural way and able to cope with unbounded copula densities. The asymptotic properties of the suggested estimators are derived, and a practical way of selecting the crucially important smoothing parameters is devised. Finally, extensive simulation studies and a real data analysis evidence their excellent performance compared to their main competitors.”
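To fix ideas, the raw transformation idea (without the local likelihood refinement the paper is actually about) can be sketched in a few lines of R; the bivariate sample below is simulated, purely for illustration,

> library(MASS)
> set.seed(1)
> n=1000
> x=mvrnorm(n,c(0,0),matrix(c(1,.7,.7,1),2,2))  # some dependent bivariate sample
> u=apply(x,2,rank)/(n+1)                       # pseudo-observations, with uniform marginals
> z=qnorm(u)                                    # probit transform, to the Gaussian scale
> fz=kde2d(z[,1],z[,2],n=51,lims=c(-3,3,-3,3))  # standard kernel estimate, no boundary issue
> chat=fz$z/outer(dnorm(fz$x),dnorm(fz$y))      # back-transformation of the density
> contour(pnorm(fz$x),pnorm(fz$y),chat)         # estimated copula density, on the unit square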

The slides are available on Dropbox (it is a 54 MB file with animated pictures, which do not appear in the version below).

Douce France

Yesterday, on Twitter, I mentioned a map published by Times Higher Education about the “100 best universities” (worldwide). I do not remember how I came across those maps, by country, showing where the 100 best universities are located (usually we only get a ranking, but here, for the first time, there were maps),

The first tweet, in English, showing the map went largely unnoticed. Not so the second one, in French, which was widely shared, and in which I asked the question more directly: “can you spot a geographical peculiarity of French higher education?”. Because I find the French map rather shocking. Someone put it rather elegantly: the “City of Light” casts a shadow over the other French cities. And that is an understatement! In every other comparable country, with 4, 5 or 6 universities in the ranking, they are relatively spread out across the territory. But in France, there is Paris. And that is all.

It is funny, because this week I started John Derbyshire’s book Unknown Quantity, which tells the history of algebra in a very accessible way (and is therefore perfect for me). And in one of the first chapters (we are still in Mesopotamia), there is a paragraph that strongly reminded me of this map.

Indeed, I should dig a little deeper, but I have a hard time seeing what could come out of this typically French geographical mega-concentration of higher education. Nothing good, I fear…

Copulas and Financial Time Series

I was recently asked to write a survey on copulas for financial time series. The paper is, so far, unfortunately, only in French, and is available on https://hal.archives-ouvertes.fr/. It describes various models, including some graphs and statistical outputs obtained from real data.

To illustrate, I’ve been using weekly log-returns of (crude) oil prices, Brent, Dubaï and Maya.

The dataset is available as an Excel file, oil.xls (I thought it was possible to load it directly from the internet, but it did not work… so I suggest downloading the file first, and then loading it).

> library(xlsx)
> temp <- tempfile()
> download.file(
+ "http://freakonometrics.free.fr/oil.xls",temp)
trying URL 'http://freakonometrics.free.fr/oil.xls'
Content type 'application/vnd.ms-excel' length 99328 bytes (97 KB)
downloaded 97 KB
> oil=read.xlsx(temp,sheetName="DATA",dec=",")
Error in .jcall("RJavaTools", "Ljava/lang/Object;", "invokeMethod", cl,  : 
  java.io.IOException: block[ 0 ] already removed - does your POIFS have circular or duplicate block references?
> oil=read.xlsx("D:\\home\\acharpen\\mes documents\\oil.xls",sheetName="DATA")

Then we can plot those three time series

> head(oil)
        Date      WTI    brent   Dubai     Maya
1 1997-01-10  2.73672  2.25465  3.3673   1.5400
2 1997-01-17 -3.40326 -6.01433 -3.8249  -4.1076
3 1997-01-24 -4.09531 -1.43076 -6.6375  -4.6166
4 1997-01-31 -0.65789  0.34873  0.7326  -1.5122
5 1997-02-07 -3.14293 -1.97765 -0.7326  -1.8798
6 1997-02-14 -5.60321 -7.84534 -7.6372 -11.0549

> Time=as.Date(oil$Date,"%Y-%m-%d")
> plot(Time,oil[,3],type="l",ylab="Brent, weekly log returns",ylim=range(oil[,3:5]))

The idea here is to use multivariate ARMA-GARCH processes. The heuristic is that the ARMA part models the dynamics of the conditional mean of the time series, while the GARCH part models the dynamics of the conditional variance. Two kinds of models are considered in the paper:

  • a multivariate GARCH process (i.e. a model for the dynamics of the variance matrix) on the residuals from the ARMA models
  • a multivariate model (based on copulas) on the residuals of the ARMA-GARCH processes (a minimal sketch of this second approach is given after the list)
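As an illustration of the second approach, here is a minimal sketch with two of the series, using the fGarch and copula packages (an assumption here, not necessarily the packages used in the paper): fit an ARMA-GARCH model on each margin, extract the standardized residuals, and fit a copula on their pseudo-observations,

> library(fGarch)
> library(copula)
> fit.b=garchFit(~arma(1,1)+garch(1,1),data=oil$brent,trace=FALSE)
> fit.d=garchFit(~arma(1,1)+garch(1,1),data=oil$Dubai,trace=FALSE)
> zb=residuals(fit.b,standardize=TRUE)  # standardized residuals, Brent
> zd=residuals(fit.d,standardize=TRUE)  # standardized residuals, Dubai
> u=pobs(cbind(zb,zd))                  # pseudo-observations, on the unit square
> fitCopula(gumbelCopula(),u)           # e.g. a Gumbel copula on those residuals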

Continue reading Copulas and Financial Time Series

Review of ‘Computational Actuarial Science with R’

Andrey Kosteko recently published a review of the book Computational Actuarial Science with R in JRSS-A. As mentioned in the review, we still need to improve the GitHub repository, where the codes are supposed to be uploaded. And as mentioned in a previous post, the package that contains all the datasets is not hosted on CRAN, but can be found at http://cas.uqam.ca/. Hence, use

> install.packages("CASdatasets", repos = "http://cas.uqam.ca/pub/R/", type="source")
> library(CASdatasets)
> ?CASdatasets
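Once installed, the datasets can be loaded by name; for instance (assuming the freMTPLfreq dataset, one of the French motor third-party liability tables shipped with the package),

> data("freMTPLfreq")
> str(freMTPLfreq)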

Segmentation et Mutualisation, les deux faces d’une même pièce ?

This post is co-written with Michel Denuit and Romuald Elie. It is the preliminary version of an article that will soon be submitted for publication.

Insurance fundamentally rests on the idea that pooling risks among policyholders is possible. This pooling, which can be seen as an actuarial rereading of the law of large numbers, only makes sense within a population of “homogeneous” risks (Charpentier [2011]). This (actuarial) condition forces insurers to segment, a point confirmed by several studies in economics. With the explosion in the amount of data, and hence of possible rating variables, some insurers are now raising the idea of an individual rate, which seems to call into question the very idea of risk pooling. Between this force pushing towards segmentation and the restoring force that tends (for social reasons, but also actuarial ones, or at least for statistical robustness[1]) to impose a minimal solidarity among policyholders, what equilibrium will emerge, in a context of strong competition between insurance companies?

Continue reading Segmentation et Mutualisation, les deux faces d’une même pièce ?