Category Archives: Seminar

Talk in Caen

At the beginning of the week, I will be in Caen for a talk on “Understanding the Choice Negotiated vs. Court Settlements in Bodily Injury Claim Compensations”, based on ongoing work with Enora Belz, Pierre-Yves Geoffard and Julien Tomas.

In car accidents involving bodily injuries, a no-fault system was instituted in 1985, the so-called ‘loi Badinter‘. Following the accident (and after consolidation of the victims’ injuries), the insurer of the driver of the car must offer compensation to all victims, covering health expenditures up to healing or recovery, as well as additional compensation for temporary incapacity, loss of professional earnings, temporary functional deficit, etc. The victim can either accept that compensation, or choose to go to court; a judge then settles the claim, and the insurer has to pay that compensation. Using the official data of AGIRA (association pour la gestion des informations sur le risque automobile), with more than 111,000 victims injured between 1999 and 2014, we try to explain the amounts obtained. The challenge here is that we only observe the final settlement and, if the victim goes to court, the amount offered by the insurance company. Using Maddala’s (1983) limited dependent variable framework, we model those two amounts, and then investigate the victim’s choice to go to court.
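
To fix ideas, here is a minimal sketch of such a switching regression in R, on simulated data (the sampleSelection package, and the variable names court, offer, award, age, severity, are my own choices for illustration, not the actual AGIRA variables or the estimation code of the paper):

library(sampleSelection)   # switching regression / selection models
set.seed(1)
n = 1000
age = rnorm(n)
severity = rexp(n)
# latent decision to go to court, and the two potential (log) compensations
court = 0.5 * severity - 0.3 * age + rnorm(n) > 0
offer = 1 + 0.5 * age + severity + rnorm(n)          # negotiated settlement (log scale)
award = 1.2 + 0.5 * age + 1.5 * severity + rnorm(n)  # court award (log scale)
# switching regression (Tobit-5 type), in the spirit of Maddala (1983):
# first outcome equation used when court is FALSE, second one when court is TRUE
fit = selection(court ~ age + severity,
                list(offer ~ age + severity, award ~ age + severity))
summary(fit)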

Quantile and Expectile Regressions

Tomorrow afternoon, since Pavel Shevchenko is currently in Rennes, there will be a small workshop. I will present some recent work with Amadou Barry and Karim Oualkacha on quantile and expectile regressions (our work is more specifically on panel regressions, with random-effect models, quantile QRRE and expectile ERRE), but tomorrow it will be more of an introduction. Slides are available online.
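
As a (very) simplified illustration of the two objects, here is a small sketch on simulated cross-sectional data, using the quantreg package for the quantile regression and a basic asymmetric least squares loop for the expectile regression (the QRRE and ERRE panel estimators of the paper are not reproduced here):

library(quantreg)
set.seed(1)
n = 500
x = runif(n)
y = 1 + 2 * x + (1 + x) * rnorm(n)     # heteroscedastic noise
# quantile regression, 90% quantile
fit_q = rq(y ~ x, tau = 0.9)
# expectile regression, asymmetric least squares via iteratively reweighted lm
expectile_reg = function(y, x, tau = 0.9, n_iter = 50) {
  beta = coef(lm(y ~ x))
  for (i in 1:n_iter) {
    fitted = as.vector(cbind(1, x) %*% beta)
    w = ifelse(y > fitted, tau, 1 - tau)
    beta = coef(lm(y ~ x, weights = w))
  }
  beta
}
fit_e = expectile_reg(y, x, tau = 0.9)
coef(fit_q)
fit_e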

Multiattribute Copula Utility Functions

In June, with Olivier L’Haridon, we will organize a (small) conference in Rennes on risk models in a multi-attribute framework. In order to fully enjoy the workshop (more to come on the blog), we organized this year an internal workshop on that topic. I gave an overview in September on multivariate distributions, with an emphasis on spherical / elliptical distributions, distributions on the simplex, and copulas. This time, following recent presentations made by Olivier, I will present Ali E. Abbas’s (recent) contributions on copula-type multiattribute utility functions. Slides are online, and the presentation will be this Thursday.
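
To give a rough idea of the construction, here is a toy example of my own (not one of Abbas’s specific constructions): two normalized marginal utilities aggregated with a Frank copula, the copula playing the role of the multiattribute aggregation function.

# normalized marginal utilities on [0,1]
u1 = function(x) (1 - exp(-2 * x)) / (1 - exp(-2))   # attribute 1, constant risk aversion
u2 = function(y) sqrt(y)                             # attribute 2
# Frank copula, used here as the aggregation function
frank = function(u, v, theta = 5)
  -log(1 + (exp(-theta * u) - 1) * (exp(-theta * v) - 1) / (exp(-theta) - 1)) / theta
# copula-type bivariate utility U(x,y) = C(u1(x), u2(y))
U = function(x, y) frank(u1(x), u2(y))
x = y = seq(0, 1, length = 26)
persp(x, y, outer(x, y, U), theta = 30, phi = 30,
      xlab = "x", ylab = "y", zlab = "U(x,y)")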

As discussed in the introduction, one (nice) application can be the choice of a seat in a theatre.

Multiattribute, Risk and Utility

As discussed already, in June 2016, with Olivier L’Haridon, we will organize a (small) conference in Rennes on risk models in a multi-attribute framework. Related to that conference, we have a working group on related topics. At the end of September, I gave a brief survey on multivariate distributions. And yesterday, Olivier gave the first part of a survey on multivariate decision making. The second part will be in two weeks.

Overview on Multivariate Distributions

In June 2016, with Olivier L’Haridon, we will organize a (small) conference in Rennes on risk models in a multi-attribute framework. In order to fully enjoy the workshop (more to come on the blog), we will organize every month an internal workshop on that topic. We will start tomorrow afternoon, 13:00-14:30, and I will give a brief talk on multivariate distributions, with an emphasis on spherical / elliptical distributions, distributions on the simplex, and copulas. Slides are now online.
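
For those who want to play with the objects mentioned above, here is a small simulation sketch in R (with arbitrary parameters): an elliptical distribution (a bivariate Student t obtained as a Gaussian scale mixture), a Dirichlet distribution on the simplex, and a Gaussian copula.

set.seed(1)
n = 1000
Sigma = matrix(c(1, .7, .7, 1), 2, 2)
Z = matrix(rnorm(2 * n), n, 2) %*% chol(Sigma)   # correlated Gaussian vectors
# elliptical example: bivariate Student t (4 df), as a Gaussian scale mixture
X_t = Z * sqrt(4 / rchisq(n, df = 4))
# distribution on the simplex: Dirichlet(2, 3, 5), from normalized Gammas
G = cbind(rgamma(n, 2), rgamma(n, 3), rgamma(n, 5))
X_dir = G / rowSums(G)                           # each row sums to 1
# Gaussian copula: uniform margins, Gaussian dependence structure
U = pnorm(Z)
plot(U, xlab = "u", ylab = "v")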

Back in Leuven, for a talk on Nonparametric Estimation

I am currently in Leuven for a few days. It is always a pleasure to be back at the place where I defended my PhD, a few years ago.

I will give a talk, tomorrow at noon, on nonparametric (and kernel-related) inference for quantiles and risk measures, inspired by recent work with Emmanuel Flachaire. Our first paper, on log-transform kernel density estimation of income distribution, is online at http://papers.ssrn.com/id=2514882, and should appear soon. Another one should be finalized soon.
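
The basic idea of the log-transform kernel estimator can be sketched in a few lines of R, on simulated lognormal ‘incomes’ (this is only the transformation trick with the default Gaussian kernel, not the full methodology of the paper):

set.seed(1)
income = rlnorm(500, meanlog = 9, sdlog = 1)   # simulated incomes
# kernel density estimation on the log scale, then back-transformation
fit_log = density(log(income))
x = exp(fit_log$x)
f_hat = fit_log$y / x                          # Jacobian: f(x) = f_log(log x) / x
plot(x, f_hat, type = "l", log = "x", xlab = "income", ylab = "density")
lines(density(income), col = "red")            # naive estimator, on the raw scale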

Big Data, in Paris

On Wednesday morning, after a last meeting in Brussels, I will take the train to be in Paris around lunchtime, for a (quick) talk on big data for the big data working group of the Institut des Actuaires. I believe the talk is scheduled at 12:30, at Optimind, rue de la Boëtie. I have tried to put a few thoughts in the slides, to get some debates started!

I will spend the evening in Paris (after a few meetings in the afternoon), before flying out again on Thursday morning.

Modeling Dynamic Incentives: Application to Basketball

I will give a talk on “Modeling Dynamic Incentives: Application to Basketball” at the GERAD (Groupe d’études et de recherche en analyse des décisions) on June 6th (initially planned on June 10th). This is joint work with Nathalie Colombier and Romuald Elie.

An important aspect of the strategy of most organizations is the provision of incentives to employees to meet the organization’s objectives. Typically this implies tying pay to performance (see Prendergast, 1999). In order to reward employees for their effort, firms spend considerable resources on performance evaluations. In many cases, evaluation consists of comparing actual performance to a pre-defined individual target. Another frequently used format is relative performance evaluation. Relative performance evaluation may motivate employees to work harder. But it may also be demoralizing and create an excessively competitive workplace, which may hinder overall performance; see Lazear (1989). Determining the overall impact of relative performance evaluation is crucial for companies. Economic research on relative performance evaluation has mainly focused on the comparison of final performances between competitors, as in tournament theory, and on quantitative and subjective performance ratings (Lazear and Gibbs, 2009). In contrast, what happens during a competition and the impact of feedback frequency on effort have so far received little attention. Following Berger and Pope (2011), we use a basketball application to get a better understanding of the role of feedback information. Sports datasets allow one to observe scores and team behavior continuously (during a game but also during the season), which can be used as a proxy of effort. Berger and Pope asked “can losing lead to winning?” by looking at the impact of the halftime score difference on the winning probability in NCAA (college) and NBA (pro) games. More precisely, they studied whether a team losing at halftime is more likely to win than expected, using a logit model. They find that, usually, the higher the score difference, the more likely a team is to win. But if the halftime score difference is around 0, they observe a discontinuity: losing by a small margin (e.g. down by 1 point) can lead to increased effort and to winning the game. In this paper we try to answer the question “when does losing lead to winning?”.
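
To illustrate the kind of logit regression used in that literature, here is a small sketch on simulated game-level data (the variable names and effect sizes are made up, and the model below is only a caricature of the actual specification):

set.seed(1)
n = 5000
diff_half = round(rnorm(n, 0, 8))     # halftime score difference
behind_1  = (diff_half == -1)         # down by exactly one point at halftime
# simulated outcome: winning probability increases with the halftime lead,
# with a small extra 'effort' effect when barely losing
p = plogis(0.15 * diff_half + 0.3 * behind_1)
win = rbinom(n, 1, p)
fit = glm(win ~ diff_half + behind_1, family = binomial)
summary(fit)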

Talk at CIMAT, Guanajuato, Mexico

I will be back in Guanajuato, Mexico, this week, to visit Victor Rivero. And I will give a talk at the Centro de Investigacion en Matematicas (CIMAT) this Wednesday on “Multivariate Archimax Copulas“. The slides are already online.

(there is a lot of material on copulas, as requested, to provide an introduction for students not familiar with this concept).
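
For those who want to see the object numerically, here is a small sketch of a bivariate Archimax copula, combining a Clayton generator with a Gumbel (logistic) stable tail dependence function (the parameters below are arbitrary):

# Clayton Archimedean generator and its inverse
theta = 2
psi = function(t) (1 + t)^(-1 / theta)
psi_inv = function(u) u^(-theta) - 1
# Gumbel (logistic) stable tail dependence function
r = 3
ell = function(x, y) (x^r + y^r)^(1 / r)
# Archimax copula: C(u,v) = psi( ell( psi_inv(u), psi_inv(v) ) )
C_archimax = function(u, v) psi(ell(psi_inv(u), psi_inv(v)))
u = v = seq(.01, .99, by = .01)
contour(u, v, outer(u, v, C_archimax), xlab = "u", ylab = "v")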

Predictive Modeling

Tomorrow, around noon, I will be giving a talk on predictive modeling for actuaries. In the introduction, I will briefly come back to the idea that a prediction is usually a best estimate, in the sense of an expected value. And because

$$\mathbb{E}(X)=\underset{c\in\mathbb{R}}{\text{argmin}}\left\{\mathbb{E}\left([X-c]^2\right)\right\}=\underset{c\in\mathbb{R}}{\text{argmin}}\left\{\|X-c\|_{L_2}^2\right\}$$

it is natural to use least-squares ideas. In order to illustrate all those concepts, we will use a simple dataset, with the sex, the height and the weight of a person, as well as the declared weight.

Davis=read.table(
"http://socserv.socsci.mcmaster.ca/jfox/Books/Applied-Regression-2E/datasets/Davis.txt")

Since there is a typo in this dataset, we have to invert two figures

Davis[12,c(2,3)]=Davis[12,c(3,2)]

but it’s not a big deal. The variable of interest, here, is someone’s weight

attach(Davis)
Y=weight*2.204622

(here in pounds). We will use explanatory variables such as the sex of that person, or his/her height

X=Davis$height / 30.48

(in feet). So, we will start with the (standard) linear model, just to make sure that we all talk about the same thing.
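
As a quick numerical check of the argmin identity recalled in the introduction, one can verify that the empirical mean of the weights is (numerically) the constant minimizing the sum of squared deviations:

optimize(function(c) sum((Y - c)^2), interval = range(Y))$minimum
mean(Y)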

The goal will be to use (possible) explanatory variables to improve our prediction. We will start with the standard linear model, but we will see that nonlinear models can also easily be obtained.
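
For instance, a small sketch of both fits, on the height and weight variables defined above (the spline basis and the number of degrees of freedom are arbitrary choices):

library(splines)
reg_lin = lm(Y ~ X)                  # standard linear model
reg_spl = lm(Y ~ bs(X, df = 5))      # cubic regression splines
plot(X, Y, xlab = "height (feet)", ylab = "weight (pounds)")
u = seq(min(X), max(X), length = 101)
lines(u, predict(reg_lin, newdata = data.frame(X = u)), col = "blue")
lines(u, predict(reg_spl, newdata = data.frame(X = u)), col = "red")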

Nonlinearities will be discussed. But those models are Gaussian (as mentioned above), and homoscedastic. So we will see how generalized linear models can be used to model the mean and the variance at the same time. For instance, with a Poisson regression (below), the variance will increase with the expected value.
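
A small sketch (the weight is rounded to the nearest pound here, only so that a Poisson model makes sense on this dataset):

# Poisson regression, log link: the (conditional) variance equals the mean
reg_pois = glm(round(Y) ~ X, family = poisson(link = "log"))
summary(reg_pois)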

After this general introduction, we will spend some time on 0-1 variables. We will see how to use a logistic regression, and also discuss more generally which kinds of models can be used for classification. ROC curves will be presented, and explained.
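
A sketch of such a logistic regression, predicting the sex from the height and the weight in the Davis dataset, with an ROC curve computed by hand:

reg_logit = glm((sex == "F") ~ height + weight, data = Davis, family = binomial)
S = predict(reg_logit, type = "response")      # estimated probability of being female
# ROC curve: false / true positive rates over a grid of thresholds
roc = t(sapply(seq(0, 1, by = .01), function(s) c(
  FPR = mean(S[Davis$sex == "M"] > s),
  TPR = mean(S[Davis$sex == "F"] > s))))
plot(roc[, "FPR"], roc[, "TPR"], type = "s", xlab = "FPR", ylab = "TPR")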

Then, we will also see an alternative to the logistic model, namely classification trees and CART techniques.
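
For instance, with the rpart package (default settings):

library(rpart)
tree = rpart(factor(sex) ~ height + weight, data = Davis)
plot(tree)
text(tree)
head(predict(tree, type = "prob"))   # predicted class probabilities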

We will also discuss random forests, bagging and boosting techniques.
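
A small sketch with the randomForest package; bagging is obtained as the special case where all the predictors are candidates at each split (boosting is not illustrated here):

library(randomForest)
set.seed(1)
rf  = randomForest(factor(sex) ~ height + weight, data = Davis, ntree = 500)
bag = randomForest(factor(sex) ~ height + weight, data = Davis, ntree = 500, mtry = 2)
rf
bag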

A pdf version of the slides can be downloaded.

SOA Webinar on Predictive Modeling

I will give, with Qichun Xu, a joint webinar for the Reinsurance Council and the Futurism Council of the Society of Actuaries, on Perspectives of Predictive Modeling with Case Studies, in a few days. The slides of my talk are now available (I do recommend opening the pdf version of the slides with Acrobat, since they contain animated pictures that cannot be visualized in the embedded version). The Society of Actuaries asked specifically for a PowerPoint document, so I will use screenshots of the slides for the webinar. I do encourage you to open and read the pdf file for better quality… Sorry for the inconvenience. I will soon upload the code to reproduce most of the graphs. All comments and remarks are welcome.