Category Archives: Seminar

Back in Leuven, for a talk on Nonparametric Estimation

I am currently in Leuven for a few days. It is always a pleasure to be back at the place where I defended my PhD, a few years ago.

I will give a talk tomorrow, at noon, on nonparametric (and kernel related) inference for quantiles and risk measures, inspired by recent work with Emmanuel Flachaire. Our first paper, on log-transform kernel density estimation of income distributions, is online at http://papers.ssrn.com/id=2514882, and should appear soon. Another one should be finalised soon.

Big Data, in Paris

Wednesday morning, after one last meeting in Brussels, I will take the train to Paris, to be there around lunchtime and give a (quick) talk on big data for the big data working group of the Institut des Actuaires. I believe the talk is scheduled at 12:30, at Optimind, rue de la Boëtie. I have tried to put a few thoughts in the slides, to get some discussions started!

I will spend the evening in Paris (after a few meetings in the afternoon), before flying back on Thursday morning.

Modeling Dynamic Incentives: Application to Basketball

I will give a talk on “Modeling Dynamic Incentives: Application to Basketball” at the GERAD (Groupe d’études et de recherche en analyse des décisions) on June 6th (initially scheduled for June 10th). This is joint work with Nathalie Colombier and Romuald Elie.

An important aspect of the strategy of most organizations is the provision of incentives to the employees to meet the organization’s objectives. Typically this implies tying pay to performance (see Prendergast, 1999). In order to reward employees for their effort, firms spend considerable resources on performance evaluations. In many cases, evaluation consists of comparing actual performance to a pre-defined individual target. Another frequently used format is relative performance evaluation. Relative performance evaluation may motivate employees to work harder. But it may also be demoralizing and create an excessively competitive workplace, which may hinder overall performance; see Lazear (1989). Determining the overall impact of relative performance evaluation is crucial for companies. Economic research on relative performance evaluation has mainly focused on the comparison of final performances between competitors, as in tournament theory, and on quantitative and subjective performance ratings (Lazear and Gibbs, 2009). In contrast, what happens during a competition and the impact of feedback frequency on effort have so far received little attention. Following Berger and Pope (2011), we decided to use a basketball application to get a better understanding of the role of feedback information. Sports datasets allow us to observe the score and team behavior continuously (during a game but also during the season), which can be used as a proxy of the effort. Berger and Pope (2010) asked “can losing lead to winning?”, looking at the impact of the halftime score difference on the winning probability in NCAA (college) and NBA (pro) games. More precisely, they studied whether a team losing at halftime is more likely to win than expected, using a logit model. They find that, usually, the larger the score difference, the more likely a team is to win. But if the halftime score difference is around 0 they observe a discontinuity: losing by a small margin (e.g. down by 1 point) can increase the effort and lead to winning the game. In this paper we try to answer the question “when does losing lead to winning?”.
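To make the type of model concrete, here is a hedged sketch in R, on purely simulated data (not the actual NCAA/NBA data used in the paper): a logit model for the probability of winning given the halftime score difference, with an indicator for being behind, to capture a possible discontinuity around 0,

set.seed(123)
n = 5000
halftime.diff = round(rnorm(n, 0, 8))          # simulated halftime score difference
# hypothetical "motivation" bump when narrowly behind, just to generate data
p.win = plogis(.15 * halftime.diff + .3 * (halftime.diff %in% c(-1, -2)))
win = rbinom(n, 1, p.win)
reg = glm(win ~ halftime.diff + I(halftime.diff < 0), family = binomial)
summary(reg)   # a positive coefficient on the indicator suggests that (narrowly) losing can lead to winning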

Talk at CIMAT, Guanajuato, Mexico

I will be back in Guanajuato, Mexico, this week, to visit Victor Rivero. And I will give a talk at the Centro de Investigacion en Matematicas (CIMAT) this Wednesday on “Multivariate Archimax Copulas“. The slides are already online.

(there is a lot of material on copulas, as requested, to provide an introduction for students not familiar with this concept).

Predictive Modeling

Tomorrow, around noon, I will be giving a talk on predictive modeling for actuaries. In the introduction, I will briefly return to the idea that a prediction is usually a best estimate, in the sense of an expected value. And because

$$\mathbb{E}(X)=\underset{c\in\mathbb{R}}{\text{argmin}}\{\mathbb{E}\left([X-c]^2\right)\}=\underset{c\in\mathbb{R}}{\text{argmin}}\{\mathbb{E}\left(\Vert X-c\Vert_{L_2}\right)\}$$

it is natural to use least squares ideas. In order to illustrate all those concepts, we will use a simple dataset, with the sex, the height and the (measured) weight of a person, as well as the declared weight.
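(A quick numerical side note, not from the talk: one can check that the mean is indeed the constant minimizing the empirical squared loss,

x = rnorm(1000, mean = 2)
mse = function(c) mean((x - c)^2)        # empirical version of E([X-c]^2)
optimize(mse, interval = c(-10, 10))$minimum
mean(x)                                  # both values should almost coincide

and both values indeed almost coincide.)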

Davis=read.table(
"http://socserv.socsci.mcmaster.ca/jfox/Books/Applied-Regression-2E/datasets/Davis.txt")

Since there is a typo in this dataset, we have to invert two figures

Davis[12,c(2,3)]=Davis[12,c(3,2)]

but it’s not a big deal. The variable of interest, here, is someone’s weight

attach(Davis)
Y=weight*2.204622

(here in pounds). We will use explanatory variables such as the sex of that person, or his/her height

X=Davis$height / 2.54

(in inches). So, we will start with the (standard) linear model, just to make sure that we all talk about the same thing.

The goal will be to use (possibly several) explanatory variables to improve our prediction. We will start with the standard linear model, but we will see that nonlinear models can also easily be obtained,
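for instance with a polynomial regression (a small sketch on the Davis data loaded above, not necessarily the specification used in the slides),

reg.lin = lm(Y ~ X)                       # standard linear model, weight (lbs) vs height (in)
reg.poly = lm(Y ~ poly(X, 3))             # a simple nonlinear (polynomial) alternative
u = seq(min(X), max(X), length = 100)
plot(X, Y)
lines(u, predict(reg.lin, newdata = data.frame(X = u)), col = "red")
lines(u, predict(reg.poly, newdata = data.frame(X = u)), col = "blue")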

Nonlinearities will be discussed. But those models are Gaussian (as mentioned above), and homoscedastic. So we will see how generalized linear models can be used to model the mean and the variance at the same time. For instance, with a Poisson regression (below), the variance increases with the expected value.
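A purely illustrative version on the same data (the response has to be a count, so the weight in pounds is rounded; this is only to show the syntax, not a sensible model for weights):

reg.pois = glm(round(Y) ~ X, family = poisson(link = "log"))
summary(reg.pois)
# with a Poisson model, the (conditional) variance equals the (conditional) mean
u = seq(min(X), max(X), length = 100)
plot(X, round(Y))
lines(u, predict(reg.pois, newdata = data.frame(X = u), type = "response"), col = "green")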

After this general introduction, we will spend some time on 0-1 variables. We will see how to use a logistic regression, and also discuss more generally which kinds of models can be used for classification. ROC curves will be presented, and explained.
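A hedged sketch of that part, still on the Davis data, where the 0-1 variable is (say) the sex of the person, and where the ROC curve is computed by hand (the slides may rely on a dedicated package instead):

Y0 = (Davis$sex == "F") * 1                          # 0-1 response
reg.logit = glm(Y0 ~ height + weight, data = Davis, family = binomial)
S = predict(reg.logit, type = "response")            # predicted probabilities
roc.point = function(s) {                            # one point of the ROC curve, for threshold s
  Yhat = (S > s) * 1
  c(FPR = sum((Yhat == 1) & (Y0 == 0)) / sum(Y0 == 0),
    TPR = sum((Yhat == 1) & (Y0 == 1)) / sum(Y0 == 1))
}
roc = t(sapply(seq(0, 1, by = .01), roc.point))
plot(roc, type = "s", xlab = "False positive rate", ylab = "True positive rate")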

Then, we will also see an alternative to the logistic model, namely classification trees and CART techniques
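for instance with the rpart package (again, just a sketch on the Davis data, classifying the sex of the person from height and weight),

library(rpart)
tree = rpart(as.factor(sex) ~ height + weight, data = Davis)
plot(tree)
text(tree)            # the tree splits the (height, weight) plane into rectangles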

We will also discuss random forests, bagging and boosting techniques.
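As a sketch, a random forest (an ensemble of trees grown on bootstrap samples) with the randomForest package:

library(randomForest)
rf = randomForest(as.factor(sex) ~ height + weight, data = Davis,
                  ntree = 500, na.action = na.omit)
rf                    # out-of-bag error rate and confusion matrix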

A pdf version of the slides can be downloaded.

SOA Webinar on Predictive Modeling

I will give, with Qichun Xu, a joint webinar for the Reinsurance Council and the Futurism Council of the Society of Actuaries, on Perspectives of Predictive Modeling with Case Studies, in a few days. The slides of my talk are now available (I do recommend opening the pdf version of the slides with Acrobat, since there are animated pictures in the slides that cannot be visualized below, for instance). The Society of Actuaries asked specifically for a powerpoint document, so I will use screenshots of the slides for the webinar. I do encourage you to open and read the pdf file, for better quality… Sorry for the inconvenience. I will soon upload the lines of code needed to reproduce most of the graphs. All comments and remarks are welcome.

Beta kernel and transformed kernel

This Thursday I will give a talk at Laval University, on “Beta kernel and transformed kernel: applications to copula density estimation and quantile estimation“. This time, I will talk at the department of Mathematics and Statistics (13:30 at the pavillon Adrien-Pouliot). “Because copulas have bounded support (the unit square in dimension 2), standard kernel based estimators of densities are (multiplicatively) biased on the borders and in the corners of the support. Two techniques can be used to avoid that underestimation: Beta kernels and transformed kernels. We will describe and discuss those two techniques in the first part of the talk. Then, we will see that it is possible to combine them to get nice estimators of several quantities (e.g. quantiles): transform the data to get on the unit interval – using a transformed kernel – then estimate the (transformed) quantile on [0,1] using a beta kernel, then get back on the initial support. As we will see in simulations, that technique can be better than standard quantile estimators, especially when data are heavy tailed.” Slides can be downloaded here.

  • kernel based density estimation

Kernel based estimators are a popular (and natural) technique to estimate densities. It is simply an extension of the moving histogram:

so we count how many observations are in the neighborhood of the point where we want to estimate the density of the distribution. Then it is natural to consider a smoothing function, i.e. instead of a step function (either observations are close enough, or not), it is possible to give weights to observations, which will be a decreasing function of the distance,

With a smooth kernel, we have a smooth estimation of the density

$$\hat{f}(x)=\frac{1}{nh}\sum_{i=1}^{n}K\left(\frac{x-x_i}{h}\right)$$

Then it is possible to play on the bandwidth, either to get a more accurate estimation of the density, but not that smooth (small bias but large variance),

or a smoother one (large bias, but small variance),

In R, it is simply

> X=rnorm(100)
> (D=density(X))
 
Call:
	density.default(x = X)
 
Data: X (100 obs.);	Bandwidth 'bw' = 0.3548
 
       x                   y            
 Min.   :-3.910799   Min.   :0.0001265  
 1st Qu.:-1.959098   1st Qu.:0.0108900  
 Median :-0.007397   Median :0.0513358  
 Mean   :-0.007397   Mean   :0.1279645  
 3rd Qu.: 1.944303   3rd Qu.:0.2641952  
 Max.   : 3.896004   Max.   :0.3828215  
 
> plot(D$x,D$y)
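To connect this with the bias/variance discussion above, one can also change the bandwidth, for instance through the adjust argument (a small sketch, not the code used to produce the original graphs):

plot(density(X, adjust = .5))   # smaller bandwidth: less smooth, small bias but large variance
plot(density(X, adjust = 2))    # larger bandwidth: smoother, large bias but small variance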
  • Beta kernel

The idea of Beta kernels is to consider kernels with support [0,1]. In the univariate case,

$$\hat{f}(x)=\frac{1}{n}\sum_{i=1}^{n}k_{\beta}\left(x;\frac{x_i}{b},\frac{1-x_i}{b}\right)$$

where $k_{\beta}(\cdot;\alpha,\beta)$ is the density of a Beta distribution, i.e.

$$k_{\beta}(x;\alpha,\beta)=\frac{x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha,\beta)},\qquad x\in[0,1]$$

For additional material, I have uploaded some R code to fit copula densities using beta kernels,

library(copula)
beta.kernel.copula.surface = function (u, v, bx, by, p) {
  # estimate the copula density on a (p-1)x(p-1) grid of the unit square,
  # using beta kernels with bandwidths bx and by
  s = seq(1/p, len = (p-1), by = 1/p)
  mat = matrix(0, nrow = p-1, ncol = p-1)
  for (i in 1:(p-1)) {
    a = s[i]
    for (j in 1:(p-1)) {
      b = s[j]
      # average of products of beta kernels, with shapes driven by the observations
      mat[i,j] = sum(dbeta(a, u/bx, (1-u)/bx) *
                     dbeta(b, v/by, (1-v)/by)) / length(u)
    }
  }
  return(data.matrix(mat))
}

Then we can use it to see what we get on a simulated sample

library(copula)
COPULA = frankCopula(param = 5, dim = 2)
X = rCopula(n = 1000, copula = COPULA)   # rCopula(n, copula) in current versions of the copula package
p0 = 26
Z = beta.kernel.copula.surface(X[,1], X[,2], bx = .01, by = .01, p = p0)
u = seq(1/p0, len = (p0-1), by = 1/p0)
persp(u, u, Z, theta = 30, col = "green", shade = TRUE,
      box = FALSE, zlim = c(0, 6))

http://freakonometrics.free.fr/copula-kernel-beta.gif
(yes, the surface is changing… to illustrate the impact of the bandwidth on the estimation).

  • transformed kernel estimation

In the talk, I will also mention the transformed kernel estimate, as introduced in the book on L1 density estimation by Luc Devroye and Laszlo Györfi (the book can be downloaded here). I will probably spend a few minutes on the original chapter, in order to provide another application of that technique (not only to estimate copula densities, but here to estimate quantiles of heavy tailed distributions). In the univariate case, the R code is the following (here I consider two transformations: the quantile function of the Gaussian distribution, and the quantile function of the Student distribution with 3 degrees of freedom),

set.seed(1)
sample = rbeta(100, 4, 3)

# transformed kernel estimate, with the Gaussian quantile function as transformation
transfN = function(x){
  Y = qnorm(sample)                      # transform the data to the real line
  f = density(Y, from = -4, to = 4, n = 2001)
  ny = sum(f$x <= qnorm(x))
  g = f$y[ny] / dnorm(qnorm(x))          # change of variable, back on the unit interval
  return(g)
}

df0 = 3

# same idea, with the quantile function of the Student t distribution (df0 degrees of freedom)
transfT = function(x){
  Y = qt(sample, df = df0)
  f = density(Y, from = -4, to = 4, n = 2001)
  ny = sum(f$x <= qt(x, df = df0))
  g = f$y[ny] / dt(qt(x, df = df0), df = df0)
  return(g)
}

tN = Vectorize(transfN)
tT = Vectorize(transfT)

u = seq(.01, .99, by = .01)
vN = tN(u)
vT = tT(u)
plot(u, vN, type = "l", lwd = 3, col = "blue")
lines(u, vT, lwd = 3, col = "green")
lines(u, dbeta(u, 4, 3), col = "red", lty = 2)     # true density, for comparison

The density estimation is the following,

(the red dotted line is the true density, since we work on a simulated sample). Now, let us get back to the original chapter,

In the book, this is introduced as follows,

The original idea we had was to use this kernel based estimator for copulas, i.e. since we can estimate densities in high dimension with unbounded support, using

$$\hat{f}(\boldsymbol{y})=\frac{1}{nh^d}\sum_{i=1}^{n}K\left(\frac{\boldsymbol{y}-\boldsymbol{Y}_i}{h}\right)$$

the idea is to transform marginal observations,

$$Y_{i,j}=\Phi^{-1}\left(U_{i,j}\right),\qquad j=1,\dots,d,$$

and to use the fact that the associated copula density can be written

$$c(u_1,\dots,u_d)=\frac{f\left(\Phi^{-1}(u_1),\dots,\Phi^{-1}(u_d)\right)}{\prod_{j=1}^{d}\phi\left(\Phi^{-1}(u_j)\right)}$$

to derive an intuitive estimator for the copula density

$$\hat{c}(u_1,\dots,u_d)=\frac{\hat{f}\left(\Phi^{-1}(u_1),\dots,\Phi^{-1}(u_d)\right)}{\prod_{j=1}^{d}\phi\left(\Phi^{-1}(u_j)\right)}$$
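As an illustration, here is a sketch of that estimator in dimension 2 (with a Gaussian transformation of the margins, a standard bivariate Gaussian kernel from the MASS package, and a Frank copula sample simulated as above; not necessarily the exact code behind the slides):

library(MASS)
library(copula)
X = rCopula(1000, frankCopula(param = 5, dim = 2))   # simulated sample, as above
Y = qnorm(X)                                         # transform the (uniform) margins
f = kde2d(Y[,1], Y[,2], n = 51, lims = c(-3, 3, -3, 3))  # bivariate kernel density estimate
u = pnorm(f$x)                                       # grid, back on the unit square
c.hat = f$z / (dnorm(f$x) %o% dnorm(f$y))            # divide by the product of normal densities
persp(u, pnorm(f$y), c.hat, theta = 30, col = "green", shade = TRUE)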

An important issue is how to choose the transformation.

And Luc Devroye and Laszlo Györfi mention that this can be used to deal with extremes.

Well, extremes are introduced through bumps (which is not the way I would have dealt with extremes),

and note that several results can be derived on those bumps.

Then, there is an interesting discussion about estimating the optimal transformation

and I will show that this can be an extremely interesting idea, for instance to estimate quantiles of heavy tailed distributions, if we also use the beta kernel estimator on the unit interval. This idea was developed in a paper with Abder Oulidi, online here.
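To give a rough idea of the technique, here is a minimal sketch, with a lognormal transformation instead of the generalized Champernowne distribution used in the paper (the names F.hat, u.star, etc. are only for this illustration):

set.seed(1)
X = exp(rnorm(200, 0, 1.5))                 # simulated heavy(ish) tailed sample
m = mean(log(X)); s = sd(log(X))            # fit a lognormal, used as transformation
U = plnorm(X, m, s)                         # transformed data, on the unit interval
b = .05                                     # bandwidth of the beta kernel
F.hat = function(u) mean(pbeta(u, U/b, (1-U)/b))   # smoothed cdf of the transformed data
p = .95
u.star = uniroot(function(u) F.hat(u) - p, c(1e-6, 1 - 1e-6))$root
qlnorm(u.star, m, s)                        # quantile estimate, back on the original scale
quantile(X, p)                              # empirical quantile, for comparison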

Remark: actually, in the book, an additional reference is mentioned,

but I have never been able to find a copy… if anyone has one, I’d be glad to read it…

Séminaire Probabilité et Statistique, UBO, Brest

Talk at the statistical seminar of the Université de Bretagne Occidentale, in Brest, on Tuesday May 5th (initially announced for Wednesday May 6th), at 2 pm (in 10 days), on “multivariate extremes”. Slides can be found here.

The talk will give a detailed introduction to multivariate extremes and related concepts. Then the case of Archimedean copulas will be fully described (following the paper with Johan Segers).

[04/05/2009]: some applications in risk management will be shown at the end of the talk, as well as some new things on spatial correlation.

and in order to illustrate tail convergence of Archimedean copulas, I have uploaded two animations, with tail independence below,

with tail dependence (or asymptotic dependence),
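and, as a static counterpart (a small sketch, not the actual animations), one can look at the behaviour of C(u,u)/u as u goes to 0, which tends to the (lower) tail dependence coefficient, here for a Clayton copula (tail dependent) and a Frank copula (tail independent),

library(copula)
u = 10^seq(-4, -0.3, length = 50)
lambda.clayton = pCopula(cbind(u, u), claytonCopula(2)) / u   # tends to 2^(-1/2): tail dependence
lambda.frank   = pCopula(cbind(u, u), frankCopula(5)) / u     # tends to 0: tail independence
plot(u, lambda.clayton, type = "l", log = "x", ylim = c(0, 1),
     col = "blue", ylab = "C(u,u)/u")
lines(u, lambda.frank, col = "red")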

Workshop in Sao Paulo

Talk at the workshop in Sao Paulo, Thursday, on “estimation of quantile related risk measures“. The workshop also invited Claudia Kluppelberg, Richard Davis, and  Ermanno Pitacco to talk. Slides can be found here. And maybe to explain a bit more where this idea of beta-kernels and transformed kernel comes from, I should mention those slides (see here for a more detailed version of the slides, and there for the full version of the paper with Jean David Fermanian and Olivier Scaillet).

Statistical seminar at Belo Horizonte

Talk at the statistical seminar at the university of Belo Horizonte, Wednesday, on multivariate extremes. Slides can be downloaded here.

The talk will give a detailed introduction to multivariate extremes and related concepts. Then the case of Archimedean copulas will be fully described (following the paper with Johan Segers).

Many thanks to Renato Martins Assunção (here) for inviting me for a couple of days in Belo Horizonte! Thanks also for your interest in my blog… and since I understood that some people who do not speak French might be interested in my blog, I started to write it in English (or at least in a language that should not be too far from English). There is a nice discussion about language on this blog (here, unfortunately in French…)

Talk in Toulouse on (nonparametric) quantile estimation

Talk at Toulouse 1, on nonparametric quantile estimation.

In this talk we propose several nonparametric estimators of quantiles, based on Beta kernels and applied to data transformed by the generalized Champernowne distribution initially fitted to the data. A Monte Carlo based study will show that those estimators improve efficiency, not only for light tailed distributions, but mainly for heavy tailed ones, when the probability level is close to 1. Another application will be presented, on portfolio optimization in the mean-VaR context.