Probit Transformation for Nonparametric Kernel Estimation of the Copula Density, Lille

This Monday I will be in Lille to give a talk at the Journées de Statistiques. The talk is based on joint work with Gery Geenens and Davy Paindaveine, “Probit transformation for nonparametric kernel estimation of the copula density”. The paper is available online at http://arxiv.org/abs/1404.4414

“Copula modelling has become ubiquitous in modern statistics. Here, the problem of nonparametrically estimating a copula density is addressed. Arguably the most popular nonparametric density estimator, the kernel estimator is not suitable for the unit-square-supported copula densities, mainly because it is heavily affected by boundary bias issues. In addition, most common copulas admit unbounded densities, and kernel methods are not consistent in that case. In this paper, a kernel-type copula density estimator is proposed. It is based on the idea of transforming the uniform marginals of the copula density into normal distributions via the probit function, estimating the density in the transformed domain, which can be accomplished without boundary problems, and obtaining an estimate of the copula density through back-transformation. Although natural, a raw application of this procedure was, however, seen not to perform very well in the earlier literature. Here, it is shown that, if combined with local likelihood density estimation methods, the idea yields very good and easy to implement estimators, fixing boundary issues in a natural way and able to cope with unbounded copula densities. The asymptotic properties of the suggested estimators are derived, and a practical way of selecting the crucially important smoothing parameters is devised. Finally, extensive simulation studies and a real data analysis evidence their excellent performance compared to their main competitors.”
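To fix ideas, here is a minimal sketch of the naive probit-transformation idea described in the abstract (not the local-likelihood estimators studied in the paper): map the pseudo-observations to the normal scale with qnorm, use a plain product Gaussian kernel there, and back-transform. The bandwidths are crude rule-of-thumb values, and the toy data at the end are only there for illustration.

naive_probit_copula_density <- function(u, v, U, V){
  s_obs <- qnorm(U); t_obs <- qnorm(V)        # sample mapped to the probit (normal) scale
  h_s <- bw.nrd(s_obs); h_t <- bw.nrd(t_obs)  # crude rule-of-thumb bandwidths
  s <- qnorm(u); t <- qnorm(v)                # evaluation point, transformed
  # plain product-Gaussian kernel estimate of the density of the transformed pair at (s,t)
  f_st <- mean(dnorm((s - s_obs) / h_s) * dnorm((t - t_obs) / h_t)) / (h_s * h_t)
  # back-transformation: divide by the product of standard normal densities
  f_st / (dnorm(s) * dnorm(t))
}
# toy illustration: Gaussian copula with correlation 0.5; the true density at (0.5, 0.5) is about 1.15
U <- runif(500)
V <- pnorm(0.5 * qnorm(U) + sqrt(0.75) * rnorm(500))
naive_probit_copula_density(0.5, 0.5, U, V)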

The slides are available on Dropbox (it is a 54 MB file with animated pictures, which do not appear in the version below).

Douce France

Yesterday, on Twitter, I mentioned a map published by Times Higher Education about the “100 best universities” (worldwide). I no longer remember how I came across these maps, one per country, showing where the 100 best universities are located (usually we only get a ranking, but here, for the first time, we had maps).

The first tweet, in English, showing the map went mostly unnoticed. Not so for the second one, in French, which was widely shared, and in which I asked the question more directly: “can you spot a geographical peculiarity of French higher education?” Because I find the French map quite shocking. Someone put it rather elegantly: the “City of Light” casts a shadow over all the other French cities. And that is saying something! In every other comparable country, with 4, 5 or 6 universities in the ranking, they are relatively spread out over the territory. But in France, there is Paris. And that is it.

It is funny, because this week I started reading Unknown Quantity by John Derbyshire, which tells, in a very accessible way (and therefore perfect for me), the history of algebra. And in one of the first chapters (we are still in Mesopotamia) there is a paragraph that strongly reminded me of this map.

Indeed, I should dig a little deeper, but I have a hard time seeing what will come out of this typically French geographic mega-concentration of higher education. Nothing good, I fear…

Copulas and Financial Time Series

I was recently asked to write a survey on copulas for financial time series. The paper is, so far, unfortunately, only in French, and is available at https://hal.archives-ouvertes.fr/. It describes various models, including some graphs and statistical outputs obtained from real data.

To illustrate, I have been using weekly log-returns of (crude) oil prices: Brent, Dubai and Maya.

The dataset is available as an Excel file, oil.xls (I thought it was possible to load it directly from the internet, but it did not work… so I suggest downloading the file first, and then loading it)

> library(xlsx)
> temp <- tempfile()
> download.file(
+ "http://freakonometrics.free.fr/oil.xls",temp)
trying URL 'http://freakonometrics.free.fr/oil.xls'
Content type 'application/vnd.ms-excel' length 99328 bytes (97 KB)
downloaded 97 KB
> oil=read.xlsx(temp,sheetName="DATA",dec=",")
Error in .jcall("RJavaTools", "Ljava/lang/Object;", "invokeMethod", cl,  : 
  java.io.IOException: block[ 0 ] already removed - does your POIFS have circular or duplicate block references?
> oil=read.xlsx("D:\\home\\acharpen\\mes documents\\oil.xls",sheetName="DATA")

Then we can plot those three time series

> head(oil)
        Date      WTI    brent   Dubai     Maya
1 1997-01-10  2.73672  2.25465  3.3673   1.5400
2 1997-01-17 -3.40326 -6.01433 -3.8249  -4.1076
3 1997-01-24 -4.09531 -1.43076 -6.6375  -4.6166
4 1997-01-31 -0.65789  0.34873  0.7326  -1.5122
5 1997-02-07 -3.14293 -1.97765 -0.7326  -1.8798
6 1997-02-14 -5.60321 -7.84534 -7.6372 -11.0549

> Time=as.Date(oil$Date,"%Y-%m-%d")
> plot(Time,oil[,3],type="l",ylab="Brent, weekly log returns",ylim=range(oil[,3:5]))
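To overlay the three series on the same graph, a small (hedged) variant of the line above, relying on the column ordering shown by head(), could be

plot(Time, oil[, 3], type = "l", col = "black",
     ylab = "Weekly log-returns", ylim = range(oil[, 3:5]))
lines(Time, oil[, 4], col = "blue")          # Dubai
lines(Time, oil[, 5], col = "red")           # Maya
legend("topleft", legend = c("Brent", "Dubai", "Maya"),
       col = c("black", "blue", "red"), lty = 1, bty = "n")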

The idea here is to use multivariate ARMA-GARCH processes. The heuristic is that the ARMA part models the dynamics of the conditional mean of the time series, while the GARCH part models the dynamics of its conditional variance. Two kinds of models are considered in the paper (a minimal sketch of the second one is given after the list below)

  • a multivariate GARCH process (i.e. a model for the dynamics of the variance matrix) on the residuals of the ARMA models
  • a multivariate model (based on copulas) on the residuals of the ARMA-GARCH processes
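As an illustration of the second approach, here is a minimal sketch (not the code from the paper): it assumes the rugarch and copula packages, fits a univariate ARMA(1,1)-GARCH(1,1) to each series, and then fits a copula to the pseudo-observations of the standardized residuals.

library(rugarch)
library(copula)
# same ARMA(1,1)-GARCH(1,1) specification for each series (a simplifying assumption)
spec <- ugarchspec(mean.model     = list(armaOrder = c(1, 1)),
                   variance.model = list(model = "sGARCH", garchOrder = c(1, 1)))
fit_brent <- ugarchfit(spec, data = oil$brent)
fit_dubai <- ugarchfit(spec, data = oil$Dubai)
fit_maya  <- ugarchfit(spec, data = oil$Maya)
# standardized residuals, treated as (approximately) i.i.d. inputs for the copula step
Z <- cbind(residuals(fit_brent, standardize = TRUE),
           residuals(fit_dubai, standardize = TRUE),
           residuals(fit_maya,  standardize = TRUE))
U <- pobs(as.matrix(Z))                  # pseudo-observations, on the unit cube
fitCopula(normalCopula(dim = 3), U)      # exchangeable Gaussian copula, as a simple benchmark

The first approach would instead put a multivariate GARCH structure (e.g. a DCC-type model) directly on the residuals of the ARMA models.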


Review of ‘Computational Actuarial Science with R’

Andrey Kosteko recently published a review of the book Computational Actuarial Science with R in JRSS-A. As mentioned in the review, we should still improve the GitHub repository, where the codes are supposed to be uploaded. And as mentioned in a previous post, the package that contains all the datasets is not hosted on CRAN but can be found at http://cas.uqam.ca/. Hence, use

> install.packages("CASdatasets", repos = "http://cas.uqam.ca/pub/R/", type="source")
> library(CASdatasets)
> ?CASdatasets
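Once the package is loaded, individual datasets are accessed with data(); for instance (assuming the freMTPLfreq dataset, French motor third-party liability claim frequencies, is still shipped with the package),

data("freMTPLfreq")   # French motor third-party liability, claim frequencies
str(freMTPLfreq)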

Segmentation and Mutualization, Two Sides of the Same Coin?

This post is co-written with Michel Denuit and Romuald Elie. It is a preliminary version of an article that will soon be submitted for publication.

Insurance fundamentally rests on the idea that risks can be pooled among policyholders. This pooling, which can be seen as an actuarial reading of the law of large numbers, only makes sense within a population of “homogeneous” risks (Charpentier [2011]). This (actuarial) condition forces insurers to segment, as several studies in economics confirm. With the explosion of the amount of data, and hence of possible rating variables, some insurers are now raising the idea of an individual premium, which seems to call into question the very idea of risk pooling. Between this force pushing towards segmentation and the restoring force that tends (for social reasons, but also for actuarial ones, or at least for statistical robustness[1]) to impose a minimal solidarity among policyholders, what equilibrium will emerge, in a context of strong competition between insurance companies?


Working with “large” datasets, with dplyr and data.table

A few months ago, I was doing some training on data science for actuaries, and I started to get interesting, puzzling questions. For instance, Fleur was working on telematic data, and she has been challenging my (rudimentary) knowledge of R. As Donald Knuth claimed, “we should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil”. So usually, in my courses and my training sessions, the code is very basic and easy to understand, but usually not very efficient. Since I was challenged to work on very large datasets, we have been working on R functions to manipulate those possibly (very) large datasets, and to run some simple operations (filtering and aggregation) as fast as possible.

In order to illustrate, let us generate our “large” telematic dataset. Assume that we have 10,000 drivers, each of them drives about 200 times, and for each trip we have, say, 80 locations. That means around 160 million observations. It is “large”, but not huge.

> rm(list=ls())
> N_id=10000
> N_tr=200
> T_tr=80

In order to keep the code as general as possible, assume that we have some randomness in the number of trips per driver and in the number of locations per trip,

> set.seed(1)
> N=rpois(N_id,N_tr)
> N_traj=rpois(sum(N),T_tr)

By “observation”, we mean a driver id, a trajectory id, and a location (latitude and longitude) at specific dates (e.g. every 15 sec.). Again, just because we want some dataset to illustrate, we will draw the drivers’ homes randomly (here uniformly on some square)

> origin_lat=runif(N_id,-5,5)
> origin_lon=runif(N_id,-5,5)

And then, from those locations, we generate two-dimensional random walks,

> lat=lon=Traj_Id=rep(NA,sum(N_traj))
> Pers_Id=rep(NA,length(N_traj))
> s=1
> for(i in 1:N_id){Pers_Id[s:(s+N[i]-1)]=i;s=s+N[i]}
> s=1
> for(i in 1:length(N_traj)){lat[s:(s+N_traj[i]-1)]=origin_lat[Pers_Id[i]]+
+  cumsum(c(0,rnorm(N_traj[i]-1,0,sd=.2)));
+  lon[s:(s+N_traj[i]-1)]=origin_lon[Pers_Id[i]]+
+  cumsum(c(0,rnorm(N_traj[i]-1,0,sd=.2)));
+  s=s+N_traj[i]}

We end up with something that looks like a cloud of short random-walk trajectories scattered around each driver’s home.
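Since the point is to compare dplyr and data.table on this kind of data, here is a minimal, hedged sketch (not the benchmark code of the post; the object names below are illustrative) of a very simple aggregation, counting the number of recorded locations per driver, with both packages:

library(dplyr)
library(data.table)
Obs_Pers <- rep(Pers_Id, times = N_traj)   # driver id, repeated for each location of each trip
df <- data.frame(id = Obs_Pers, lat = lat, lon = lon)
DT <- as.data.table(df)
# dplyr version: number of recorded locations per driver
df %>% group_by(id) %>% summarise(n_obs = n())
# data.table version of the same aggregation
DT[, .(n_obs = .N), by = id]

On a table of this size, the data.table syntax usually gives a substantial speed-up over a naive base-R approach.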
