Tag Archives: MAT8595

Independence and correlation

A short post to get back to a property mentioned briefly in the MAT8595 class in January, and again in the MAT8181 class this week (to illustrate the distinction between weak and strong white noises).

Recall that (real-valued) random variables X and Y are independent if, for all x and y,

\mathbb{P}(X\leq x,Y\leq y)=\mathbb{P}(X\leq x)\cdot\mathbb{P}(Y\leq y)

Another characterization, for integrable variables, is that for all (bounded, measurable) functions f and g,

\mathbb{E}[f(X)g(Y)]=\mathbb{E}[f(X)]\cdot\mathbb{E}[g(Y)]

which can be written, if the variables are square integrable,

\text{corr}(f(X),g(Y))=0

The idea to prove this characterization is to observe that, if X and Y are independent, the first equality holds when f and g are indicator functions of half-lines. Using a standard argument in integration theory, the equality is then valid for step functions (not only indicators), then for positive measurable functions, and finally for integrable functions. Proving this result is not that difficult.

Observe that Rényi (1959) – inspired by Gebelein (1947) – followed by Sarmanov (1958) introduced the concept of maximal correlation, which can be related to this result,

r^\star(X,Y)=\sup_{f,g}\ \text{corr}(f(X),g(Y))

where the supremum is taken over all functions f and g such that the correlation exists. Actually, it is possible to consider only transformations such that \mathbb{E}[f(X)]=0 and \text{Var}[f(X)]=1 (and similarly for g(Y)); the idea is that we simply center and scale, which does not impact the correlation. Thus, X and Y are independent if and only if r^\star(X,Y)=0.

Algorithms to estimate that coefficient are interesting. The problem can be written, equivalently,

\min_{f,g}\ \mathbb{E}\left[\left(f(X)-g(Y)\right)^2\right]

And if the minimization is considered over f, assuming that g is fixed, then the optimal transformation is

f^\star(x)\propto\mathbb{E}[g(Y)\,|\,X=x]

(after centering and scaling), and similarly for g. So, using an iterative algorithm, it is possible to get f^\star and g^\star (see Breiman and Friedman (1985) for more details). Actually, those functions appear in nonlinear canonical analysis. As mentioned in Lancaster (1957), for a Gaussian random vector (X,Y) with correlation r, the maximal correlation is |r|, and in that case f^\star and g^\star are affine functions. This can be related to Hermite polynomials and to the expansion of the bivariate Gaussian density. I still hope that someone will go further for the project in the MAT8181 course.
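To illustrate, the maximal correlation can be estimated with the alternating conditional expectations (ACE) algorithm of Breiman and Friedman (1985). A minimal sketch in R, assuming the acepack package (one implementation of ACE, not used elsewhere in this post) is installed:

library(acepack)
set.seed(1)
x=rnorm(1000)
y=x^2+rnorm(1000)     # uncorrelated with x, but clearly not independent
cor(x,y)              # (linear) correlation, close to 0
fit=ace(x,y)          # alternating conditional expectations
cor(fit$tx,fit$ty)    # estimated maximal correlation, far from 0

Here the linear correlation is almost null while the estimated maximal correlation is not: the two variables are uncorrelated, but far from independent.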

Bivariate Densities with N(0,1) Margins

This Monday, in the ACT8595 course, we came back to elliptical distributions and conditional independence (here is an old post on de Finetti’s theorem, and the extension to Hewitt-Savage’s). I have shown simulations to illustrate those two concepts of dependent variables, but I wanted to spend some time visualizing densities. More specifically, what could the joint density be if we assume that the margins are \mathcal{N}(0,1) distributions?

  • The Bivariate Gaussian distribution

Here, we consider a Gaussian random vector, with \mathcal{N}(0,1) margins, and with correlation r. This is the standard graph, with elliptical isodensity curves

r=.5
library(mnormt)
S=matrix(c(1,r,r,1),2,2)
f=function(x,y) dmnorm(cbind(x,y),varcov=S)
vx=seq(-3,3,length=201)
vy=seq(-3,3,length=201)
z=outer(vx,vy,f)
set.seed(1)
X=rmnorm(1500,varcov=S)
xhist <- hist(X[,1], plot=FALSE)
yhist <- hist(X[,2], plot=FALSE)
top <- max(c(xhist$density, yhist$density,dnorm(0)))
nf <- layout(matrix(c(2,0,1,3),2,2,byrow=TRUE), c(3,1), c(1,3), TRUE)
par(mar=c(3,3,1,1))
image(vx,vy,z,col=rev(heat.colors(101)))
contour(vx,vy,z,col="blue",add=TRUE)
points(X,cex=.2)
par(mar=c(0,3,1,1))
barplot(xhist$density, axes=FALSE, ylim=c(0, top), space=0,col="light green")
lines((density(X[,1])$x-xhist$breaks[1])/diff(xhist$breaks)[1],
dnorm(density(X[,1])$x),col="red")
par(mar=c(3,0,1,1))
barplot(yhist$density, axes=FALSE, xlim=c(0, top), space=0, 
horiz=TRUE,col="light green")
lines(dnorm(density(X[,2])$x),(density(X[,2])$x-yhist$breaks[1])/
diff(yhist$breaks)[1],col="red")

That was the simple part.

  • The Bivariate Student-t distribution

Consider now another elliptical distribution. But we want here to normalize the margins. Thus, instead of a pair (X,Y) with a bivariate Student-t distribution, we would like to consider the pair (\Phi^{-1}(T_k(X)),\Phi^{-1}(T_k(Y))), where T_k denotes the cdf of the Student-t distribution with k degrees of freedom, so that the marginal distributions are \mathcal{N}(0,1). The new density is obtained simply, since the transformation is a one-to-one increasing transformation. Here, we have

k=3
r=.5
G=function(x) qnorm(pt(x,df=k))
dg=function(x) dt(x,df=k)/dnorm(qnorm(pt(x,df=k)))
Ginv=function(x) qt(pnorm(x),df=k)
S=matrix(c(1,r,r,1),2,2)
# change of variables: the Jacobian term dg (i.e. G') is evaluated at Ginv(x) and Ginv(y)
f=function(x,y) dmt(cbind(Ginv(x),Ginv(y)),S=S,df=k)/(dg(Ginv(x))*dg(Ginv(y)))
vx=seq(-3,3,length=201)
vy=seq(-3,3,length=201)
z=outer(vx,vy,f)
set.seed(1)
Z=rmt(1500,S=S,df=k)
X=G(Z)
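The density and the simulated sample can then be plotted exactly as in the Gaussian case (the same lines as above; the marginal-histogram layout could be reused too),

image(vx,vy,z,col=rev(heat.colors(101)))
contour(vx,vy,z,col="blue",add=TRUE)
points(X,cex=.2)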

Because we considered a nonlinear transformation of the margins, the level curves are no longer elliptical. But there is still some kind of symmetry.

  • The Exchangeable Case with Conditionally Independent Random Variables

We did consider the case where X_1 and X_2 are independent random variables given \Theta, both being exponentially distributed with parameter \Theta. As we’ve seen in class, it might be difficult to visualize that sample, unless we use log scales on both axes. But instead of a log transformation, why not consider a transformation such that the margins will be \mathcal{N}(0,1)? The only technical problem is that we do not have a closed form for the (unconditional) distributions of the margins. Well, we do have them, but they are integral based. From a computational point of view, that’s not a big deal… Computations might take a while, but we can visualize the density using the following code (here, we assume that \Theta is Gamma distributed)

a=.6
b=1
h=.0001
# unconditional cdf of X (mixing the exponential over the Gamma distribution of Theta),
# composed with qnorm so that G(X) has a N(0,1) distribution
G=function(x) qnorm(ifelse(x<0,0,integrate(function(z) pexp(x,z)*
dgamma(z,a,b),lower=0,upper=Inf)$value))
Ginv=function(x) uniroot(function(z) G(z)-x,lower=-40,upper=1e5)$root
# numerical derivative of Ginv (central finite differences)
dg=function(x) (Ginv(x+h)-Ginv(x-h))/2/h
# joint (unconditional) density of the two conditionally independent exponentials
H=function(xy) integrate(function(z) dexp(xy[2],z)*dexp(xy[1],z)*
dgamma(z,a,b),lower=0,upper=Inf)$value
# density of the pair (G(X1),G(X2)), by a change of variables
f=function(x,y) H(c(Ginv(x),Ginv(y)))*(dg(x)*dg(y))
vx=seq(-3,3,length=151)
vy=seq(-3,3,length=151)
z=matrix(NA,length(vx),length(vy))
for(i in 1:length(vx)){
for(j in 1:length(vy)){
z[i,j]=f(vx[i],vy[j])}}
set.seed(1)
Theta=rgamma(1500,a,b)
Z=cbind(rexp(1500,Theta),rexp(1500,Theta))
X=cbind(Vectorize(G)(Z[,1]),Vectorize(G)(Z[,2]))

There is a small technical problem, but no big deal.
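To double check that the transformed margins are indeed standard Gaussian, a quick sanity check on the simulated sample (a minimal sketch),

qqnorm(X[,1]); qqline(X[,1])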

Here, the joint distribution is quite different. Margins are – one more time – standard Gaussian, but the shape of the joint density has changed, with an asymmetry between the lower (left) tail and the upper (right) tail. More details when we introduce copulas. The only difference will be that the margins will be uniform on the unit interval, and not standard Gaussian.

Bias of Hill Estimators

In the MAT8595 course, we saw Hill’s estimator of the tail index yesterday. To be more specific, we did see that if https://latex.codecogs.com/gif.latex?\overline{F}(x)=C%20x^{-\alpha}, with https://latex.codecogs.com/gif.latex?\alpha%3E0, then Hill estimators for https://latex.codecogs.com/gif.latex?\alpha are given by

https://latex.codecogs.com/gif.latex?\widehat{\alpha}_k%20=%20\left[\frac{1}{k}\sum_{i=0}^{k-1}%20\log%20X_{n-i,n}%20-\log%20X_{n-k,n}\right]^{-1}
for https://latex.codecogs.com/gif.latex?k\in\{1,2,\cdots,n\}. Then we did say that https://latex.codecogs.com/gif.latex?\widehat{\alpha}_k satisfies a consistency property, in the sense that https://latex.codecogs.com/gif.latex?\widehat{\alpha}_k%20\overset{\mathbb{P}}{\rightarrow}%20\alpha if https://latex.codecogs.com/gif.latex?k\rightarrow\infty, but not too fast, i.e. https://latex.codecogs.com/gif.latex?k/n\rightarrow0 (under additional assumptions on the rate of convergence, it is possible to prove that https://latex.codecogs.com/gif.latex?\widehat{\alpha}_k%20\overset{a.s.}{\rightarrow}%20\alpha). Further, under additional technical conditions,

https://latex.codecogs.com/gif.latex?\sqrt{k}\left(\widehat{\alpha}_k-\alpha\right)\overset{\mathcal%20L}{\rightarrow}\mathcal{N}(0,\alpha^2)

In order to illustrate this point, consider the following code. First, let us consider a Pareto survival function, and the associated quantile function

> alpha=1.5
> S=function(x){ifelse(x>1,x^(-alpha),1)}
> Q=function(p){uniroot(function(x) S(x)-(1-p),lower=1,upper=1e+9)$root}
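Note that, for this pure Pareto case, the quantile function also has a closed form; a one-liner (equivalent to the root-finding version above, for p in (0,1)) would simply be

> Qexact=function(p){(1-p)^(-1/alpha)}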

The root-finding code above is obviously too complicated here, since this power function can easily be inverted (as the one-liner shows). But later on, we will consider a more complex survival function. Here are the survival function, and the quantile function,

> u=seq(0,5,by=.01)
> plot(u,Vectorize(S)(u),type="l",col="red")
> u=seq(0,99/100,by=.01)
> plot(u,Vectorize(Q)(u),type="l",col="blue",ylim=c(0,20))

Here, we need the quantile function to generate a random sample from this distribution,

> n=500
> set.seed(1)
> X=Vectorize(Q)(runif(n))

Hill plot is here

> library(evir)
> hill(X)
> abline(h=alpha,col="blue")

We can now generate thousands of random samples, and see how those estimators behave (for some specific https://latex.codecogs.com/gif.latex?k‘s).

> ns=10000
> HillK=matrix(NA,ns,10)
> for(s in 1:ns){
+ X=Vectorize(Q)(runif(n))
+ H=hill(X,plot=FALSE)
+ hillk=function(k) H$y[H$x==k]
+ HillK[s,]=Vectorize(hillk)(15*(1:10))
+ }

and if we compute the average,

> plot(15*(1:10),apply(HillK,2,mean))

we do get a series of estimators that can be considered as unbiased.
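We can also look at the dispersion of those estimators: from the asymptotic normality stated above, the standard deviation of https://latex.codecogs.com/gif.latex?\widehat{\alpha}_k should be of the order of \alpha/\sqrt{k}. A quick, hedged check,

> plot(15*(1:10),apply(HillK,2,sd))
> lines(15*(1:10),alpha/sqrt(15*(1:10)),col="blue")

The more order statistics we use, the less volatile Hill’s estimator, which is one side of the trade-off discussed below.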

So far, so good. Now, recall that being in the max-domain of attraction of the Fréchet distribution does not mean that https://latex.codecogs.com/gif.latex?\overline{F}(x)=C%20x^{-\alpha}, with https://latex.codecogs.com/gif.latex?\alpha%3E0, but it means that

https://latex.codecogs.com/gif.latex?\overline{F}(x)=%20x^{-\alpha}%20\mathcal{L}(x)

for some slowly varying function https://latex.codecogs.com/gif.latex?\mathcal{L}, not necessarily constant! In order to understand what could happen, we have to be slightly more specific. And this can be done only by looking at the second order regular variation property of the survival function. Assume here that there is some auxiliary function https://latex.codecogs.com/gif.latex?a such that

https://latex.codecogs.com/gif.latex?\lim_{t\rightarrow\infty}\frac{\overline{F}(xt)/\overline{F}(t)-x^{-\alpha}}{a(t)}=x^{-\alpha}\frac{1-x^{-\beta}}{\beta}{}

This (positive) constant https://latex.codecogs.com/gif.latex?\beta is – somehow – related to the speed of convergence of the ratio of the survival functions to the power function (see e.g. Geluk et al. (2000) for some examples).

To be more specific, assume that

https://latex.codecogs.com/gif.latex?\overline{F}(x)=\underbrace{C(1+x^{-\beta})}_{\mathcal{L}(x)}\cdot%20%20x^{-\alpha}

then the second order regular variation property is obtained with https://latex.codecogs.com/gif.latex?a(t)=\beta%20t^{-\beta}, and if https://latex.codecogs.com/gif.latex?k goes to infinity too fast, the estimator will be biased. More precisely (see Chapter 6 in Embrechts et al. (1997)), if https://latex.codecogs.com/gif.latex?k=O(n^{2\beta/(\alpha+2\beta)}), then, for some https://latex.codecogs.com/gif.latex?\lambda%3E0,

https://latex.codecogs.com/gif.latex?\sqrt{k}\left(\widehat{\alpha}_k-\alpha\right)\overset{\mathcal%20L}{\rightarrow}\mathcal{N}\left(\frac{\alpha^3}{\beta-\alpha}\lambda,\alpha^2\right)

The intuitive interpretation of this result is that if https://latex.codecogs.com/gif.latex?k is too large, and if the underlying distribution is not exactly a Pareto distribution (and we do have this second order property), then Hill’s estimator is biased. This is what we mean when we say

  • if https://latex.codecogs.com/gif.latex?k is too large, https://latex.codecogs.com/gif.latex?\widehat{\alpha}_k is a biased estimator
  • if https://latex.codecogs.com/gif.latex?k is too small, https://latex.codecogs.com/gif.latex?\widehat{\alpha}_k is a volatile estimator

(the latter comes from properties of a sample mean: the more observations, the lower the volatility of the mean).

Let us run some simulations to get a better understanding of what’s going on. Using the previous code, it is actually extremely simple to generate a random sample with survival function

https://latex.codecogs.com/gif.latex?\overline{F}(x)=\underbrace{C(1+x^{-\beta})}_{\mathcal{L}(x)}\cdot%20%20x^{-\alpha}

> beta=.5
> S=function(x){ifelse(x>1,.5*x^(-alpha)*(1+x^(-beta)),1)}
> Q=function(p){uniroot(function(x) S(x)-(1-p),lower=1,upper=1e+9)$root}

We can then use the same code as above. Here, with

> n=500
> set.seed(1)
> X=Vectorize(Q)(runif(n))

the Hill plot becomes

> library(evir)
> hill(X)
> abline(h=alpha,col="blue")

But it’s based on one sample only. Again, consider thousands of samples, and let us see how Hill’s estimator behaves.
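Reusing the simulation loop from the pure Pareto case above (same n, same grid of https://latex.codecogs.com/gif.latex?k's), a minimal sketch is

> ns=10000
> HillK=matrix(NA,ns,10)
> for(s in 1:ns){
+ X=Vectorize(Q)(runif(n))
+ H=hill(X,plot=FALSE)
+ hillk=function(k) H$y[H$x==k]
+ HillK[s,]=Vectorize(hillk)(15*(1:10))
+ }
> plot(15*(1:10),apply(HillK,2,mean))
> abline(h=alpha,col="blue")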

The (empirical) mean of those estimators now moves away from the true value https://latex.codecogs.com/gif.latex?\alpha as https://latex.codecogs.com/gif.latex?k increases: unlike in the pure Pareto case, Hill’s estimator is biased when https://latex.codecogs.com/gif.latex?k is too large.

Likelihood Based Methods, for Extremes

This week, in the MAT8595 course, we will start the section on inference for extreme values. To start with something simple, we will use maximum likelihood techniques on a Generalized Pareto Distribution (we’ve seen the Pickands-Balkema-de Haan theorem on Monday).

  • Maximum Likelihood Estimation

In the context of parametric models, the standard technique is to consider the maximum of the likelihood (or of the log-likelihood). Let \boldsymbol{\theta} denote the parameter (with \boldsymbol{\theta}\in\Theta\subset\mathbb{R}^d). Given some – standard – technical assumptions, such as \boldsymbol{\theta}_0 being in the interior of \Theta, or the log-likelihood being sufficiently smooth on some neighbourhood of \boldsymbol{\theta}_0, then

\sqrt{n}\left(\widehat{\boldsymbol{\theta}}_n-\boldsymbol{\theta}_0\right)\overset{\mathcal{L}}{\rightarrow}\mathcal{N}\left(\boldsymbol{0},I(\boldsymbol{\theta}_0)^{-1}\right)

where https://latex.codecogs.com/gif.latex%20?I denotes the Fisher information matrix (see any textbook on mathematical statistics). Consider here some i.i.d. sample, from a Generalized Pareto Distribution, with parameter https://latex.codecogs.com/gif.latex?\boldsymbol{\theta}=(\xi,\sigma), so that

https://latex.codecogs.com/gif.latex?%20%20%20%20%20F_{(\xi,\sigma)}(x)%20=%20\begin{cases}%201%20-%20\left(1+%20\frac{\xi%20x}{\sigma}\right)^{-1/\xi}%20&,%20\xi%20\neq%200%20\\%201%20-%20\exp%20\left(-\frac{x}{\sigma}\right)%20&,%20\xi%20=%200%20\end{cases}

If we solve (numerically) the first order condition of the maximum likelihood, we get an estimator  https://latex.codecogs.com/gif.latex?\widehat{\boldsymbol{\theta}}_n=(\widehat{\xi}_n,\widehat{\sigma%20}_n) which satisfies

https://latex.codecogs.com/gif.latex?\sqrt{n}\left(\left[\begin{array}{c}\widehat{\xi}_n\\\widehat{\sigma%20}_n\end{array}\right]-\left[\begin{array}{c}\xi_0\\\sigma_0%20\end{array}\right]\right)\rightarrow%20\mathcal{N}\left(\left[\begin{array}{c}0\\0\end{array}\right],\left[\begin{array}{cc}(1+\xi_0)^2%20&%20\sigma_0[1+\xi_0]\\%20\sigma_0%20[1+\xi_0]%20&%202\sigma^2_0(1+\xi_0)%20\end{array}\right]\right)

The idea of this asymptotic normality is the following: if the true distribution of the sample is a GPD, then, if https://latex.codecogs.com/gif.latex%20?n is large enough, https://latex.codecogs.com/gif.latex?\widehat{\boldsymbol{\theta}}_n=(\widehat{\xi}_n,\widehat{\sigma%20}_n) will have a joint normal distribution. So if we generate a lot of samples (each sufficiently large, say 200 observations), then the scatterplot of the estimators should look like the scatterplot of a Gaussian distribution,

> library(evir)
> n=200
> param=matrix(NA,1000,2)
> for(s in 1:1000){
+ x=rgpd(n,xi=1/1.5,beta=1)
+ param[s,]=gpd(x,0)$par.ests
+ }
> m=apply(param,2,mean)
> S=var(param)
> library(mnormt)
> x=seq(min(param[,1])-.05,max(param[,1])+.05,length=101)
> y=seq(min(param[,2])-.05,max(param[,2])+.05,length=101)
> vx=rep(x,each=length(y))
> vy=rep(y,length(x))
> vz=dmnorm(cbind(vx,vy),m,S)
> z=matrix(vz,length(y),length(x))
> COL=rev(heat.colors(100))
> image(x,y,z,col=COL)
> points(param)

and to get a 3d representation

> x=seq(min(param[,1])-.05,max(param[,1])+.05,length=31)
> y=seq(min(param[,2])-.05,max(param[,2])+.05,length=31)
> vx=rep(x,each=length(y))
> vy=rep(y,length(x))
> vz=dmnorm(cbind(vx,vy),m,S)
> z=matrix(vz,length(y),length(x))
> persp(x,y,t(z),shade=TRUE,col="green",theta=-30,phi=20,ticktype="detailed",
+ xlab="xi",ylab="sigma")

With 200 observations, if the true underlying distribution is a GPD, then, indeed, the joint distribution of https://latex.codecogs.com/gif.latex?\widehat{\boldsymbol{\theta}}_n=(\widehat{\xi}_n,\widehat{\sigma%20}_n) seems to be normal. It would be interesting to generate some confidence intervals, for instance, or to define some tests.
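For instance, a Wald-type 95% confidence interval for https://latex.codecogs.com/gif.latex%20?\xi can be obtained directly from one fitted sample. A minimal sketch, using the last simulated sample x, and assuming (as documented for evir::gpd) that par.ests and par.ses store the parameter estimates and their standard errors,

> fit=gpd(x,0)
> fit$par.ests["xi"]+c(-1,1)*qnorm(.975)*fit$par.ses["xi"]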

To go further, see any standard textbook on mathematical statistics, e.g. Casella & Berger (2002).

  • Delta Method

Another important property is the so-called delta-method (we’ve seen in class on Monday that it is obtained easily using a first order Taylor expansion). The idea is that if https://latex.codecogs.com/gif.latex%20?\widehat{\boldsymbol{\theta}}_n is asymptotically normal, and if h is sufficiently smooth, then https://latex.codecogs.com/gif.latex%20?h(\widehat{\boldsymbol{\theta}}_n) will also be asymptotically Gaussian. More precisely (see also the header of this blog),

\sqrt{n}\left(h(\widehat{\boldsymbol{\theta}}_n)-h(\boldsymbol{\theta}_0)\right)\overset{\mathcal{L}}{\rightarrow}\mathcal{N}\left(0,\nabla h(\boldsymbol{\theta}_0)^{\top}I(\boldsymbol{\theta}_0)^{-1}\nabla h(\boldsymbol{\theta}_0)\right)

From this property, we can get the normality of https://latex.codecogs.com/gif.latex%20?\widehat{\alpha}_n=\widehat{\xi}_n^{-1} (which is another parametrization used in extreme value models), or of any quantile, https://latex.codecogs.com/gif.latex%20?\widehat{Q}_u=F^{-1}_{\widehat{\boldsymbol{\theta}}_n}(u)=h_u(\widehat{\xi}_n,\widehat{\sigma}_n). Let us run some simulations, one more time, to check that we actually have joint normality.

> library(evir)
> n=200
> param=riskm=matrix(NA,1000,2)
> for(s in 1:1000){
+ x=rgpd(n,xi=1/1.5,beta=1)
+ param[s,]=gpd(x,0)$par.ests
+ xihat=param[s,1]
+ shat=param[s,2]
+ q=shat * (.01^(-xihat) - 1)/xihat
+ tvar=q+(shat + xihat * q)/(1 - xihat)
+ riskm[s,]=c(1/xihat,q)
+ }
> m=apply(riskm,2,mean)
> S=var(riskm)
> library(mnormt)
> x=seq(min(riskm[,1])-.05,max(riskm[,1])+.05,length=101)
> y=seq(min(riskm[,2])-.05,max(riskm[,2])+.05,length=101)
> vx=rep(x,each=length(y))
> vy=rep(y,length(x))
> vz=dmnorm(cbind(vx,vy),m,S)
> z=matrix(vz,length(y),length(x))
> image(x,y,t(z),col=COL)
> points(riskm)

As we can see below, with samples of size 200, we cannot use this asymptotic result: it looks like we do not have enough data. But if we run the same code with

> n=5000

We get the joint normality of https://latex.codecogs.com/gif.latex%20?\widehat{\alpha}_n and https://latex.codecogs.com/gif.latex%20?\widehat{Q}_n(u). This is what we can get from this result, called the delta-method in statistical textbooks. See again Casella & Berger (2002) for more details.
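As a small worked example, take h(\xi)=1/\xi, so that h'(\xi)=-1/\xi^2: combined with the asymptotic variance (1+\xi_0)^2 of \sqrt{n}(\widehat{\xi}_n-\xi_0) given above, the delta-method suggests that the variance of \widehat{\alpha}_n should be close to (1+\xi_0)^2/(n\xi_0^4). A quick, hedged check, to be run after the simulations above (with n=5000),

> xi0=1/1.5
> c(empirical=var(riskm[,1]),delta.method=(1+xi0)^2/(n*xi0^4))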

  • Profile Likelihood

Another interesting tool is the concept of profile likelihood. This is interesting here since the main parameter of interest is the tail index https://latex.codecogs.com/gif.latex%20?\xi, https://latex.codecogs.com/gif.latex%20?\sigma being here some kind of auxiliary (nuisance) parameter. See Venzon & Moolgavkar (1988) for more details. Here, we will plot

\xi\longmapsto\log\mathcal{L}_p(\xi)=\max_{\sigma}\log\mathcal{L}(\xi,\sigma)

But more generally, it is possible to consider

\mathcal{L}_p(\boldsymbol{\theta}_I)=\max_{\boldsymbol{\theta}_N}\mathcal{L}(\boldsymbol{\theta}_I,\boldsymbol{\theta}_N)

where \boldsymbol{\theta}_I is the set of interesting parameters (and \boldsymbol{\theta}_N the nuisance ones). Then (under standard suitable conditions) we can prove that

2\left[\log\mathcal{L}(\widehat{\boldsymbol{\theta}}_n)-\log\mathcal{L}_p(\boldsymbol{\theta}_I)\right]\overset{\mathcal{L}}{\rightarrow}\chi^2\left(\dim(\boldsymbol{\theta}_I)\right)

which can be used to derive confidence intervals. In the GPD case, for each https://latex.codecogs.com/gif.latex%20?\xi, we have to find an optimal https://latex.codecogs.com/gif.latex%20?\sigma^\star(\xi). We compute the (profile) likelihood, i.e. https://latex.codecogs.com/gif.latex%20?\mathcal{L}(\xi,\sigma^\star(\xi)), and then we can compute the maximum of this profile likelihood. This two-stage optimization is not necessarily equivalent, numerically, to the (global) maximization of the likelihood, as computed below

>  n=500
>  set.seed(1)
>  x=rgpd(n,xi=1/1.5,beta=1)
>  loglikelihood=function(xi,beta){
+  sum(log(dgpd(x,xi,mu=0,beta))) }
>  XIV=(1:300)/100;L=rep(NA,300)
>  for(i in 1:300){
+  XI=XIV[i]
+  profilelikelihood=function(beta){
+  -loglikelihood(XI,beta) }
+  L[i]=-optim(par=1,fn=profilelikelihood)$value }
>  plot(XIV,L,type="l")
>  XIV[which.max(L)]
[1] 0.67
>  gpd(x,0)$par.ests
       xi      beta 
0.6730145 0.9725483

We are not far away. Actually, if we want to compute the maximum of the profile likelihood (and not only compute the values of the profile likelihood on a grid, as before), we use

>  PL=function(XI){
+  profilelikelihood=function(beta){
+  -loglikelihood(XI,beta) }
+  return(optim(par=1,fn=profilelikelihood)$value)}
>  (OPT=optimize(f=PL,interval=c(0,3)))
$minimum
[1] 0.6731025

$objective
[1] 822.5574

Observe that, indeed, we are not far away from the maximum likelihood estimator of https://latex.codecogs.com/gif.latex%20?\xi (I believe that it’s mainly a computational issue here, and that the two are similar, here… actually, I’d be glad to hear about cases where the maximum of the profile likelihood is not the same as the maximum of the likelihood). The interesting point is that we can use this technique to compute a confidence interval, and even visualize it on a graph

>  up=OPT$objective
>  abline(h=-up)
>  # the profile log-likelihood drops by qchisq(.95,df=1)/2 at the bounds of a 95% confidence interval
>  abline(h=-up-qchisq(p=.95,df=1)/2,col="red")
>  I=which(L>=-up-qchisq(p=.95,df=1)/2)
>  lines(XIV[I],rep(-up-qchisq(p=.95,df=1)/2,length(I)),
+  lwd=5,col="red")
>  abline(v=range(XIV[I]),lty=2,col="red")

The vertical lines are the lower and the upper bound of a 95% confidence interval for parameter https://latex.codecogs.com/gif.latex%20?\xi.

To go further, see Murphy, S.A & van der Vaart, A.W. (2000). On Profile Likelihood.

Central Limit Theorem

This week, in the MAT8595 course, before proving the Fisher-Tippett theorem, we will get back to the proof of the Central Limit Theorem, and to the class of stable distributions (in Lévy’s sense). In order to illustrate the problem of heavy tails on the behavior of the mean, consider a sequence of i.i.d. Gaussian random variables https://latex.codecogs.com/gif.latex?X_i's. On top, we visualize the sequence, and below, we visualize the associated random walk

https://latex.codecogs.com/gif.latex?S_n=\sum_{i=1}^n%20X_i

(the central limit theorem gives a Gaussian limiting distribution for https://latex.codecogs.com/gif.latex?\sqrt{n}(n^{-1}S_n-\mu), with \mu=\mathbb{E}[X_i], in the case where the variance of the https://latex.codecogs.com/gif.latex?X_i's is finite)

If we consider a sequence of i.i.d. random variables https://latex.codecogs.com/gif.latex?X_i's with heavier tails (possibly with infinite variance), we can still define https://latex.codecogs.com/gif.latex?S_n, but as we can see below, https://latex.codecogs.com/gif.latex?S_n can be quite erratic.
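To reproduce those graphs, a minimal sketch (using Cauchy innovations, as one example of an infinite-variance distribution, for the heavy-tailed case),

n=1000
set.seed(1)
X=rnorm(n)
plot(X,type="h")              # the Gaussian sequence
plot(cumsum(X),type="l")      # the associated random walk
X=rcauchy(n)
plot(X,type="h")              # a heavy-tailed (Cauchy) sequence
plot(cumsum(X),type="l")      # the associated random walk, much more erratic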

As we will see this Thursday, the key to deriving stable distributions for the central limit theorem, or possible limiting distributions for the maximum, is Cauchy’s functional equation. I strongly recommend looking at the proof.

Copulas and Extreme Values, Syllabus

The syllabus for the MAT8595 Copules et Valeurs Extrêmes course is online. The evaluation agreement will be signed during the first class, this Monday at 9:00 (room SH-2140). More posts will be published in the coming days, with a few exercises, and the articles that will serve as the basis for the projects, on http://freakonometrics.hypotheses.org/courses/copulas-and-extremes.

Graduate Course on Copulas and Extreme Values

This Winter, I will be giving a (graduate) course on extreme values and copulas (more generally, on multivariate models and dependence), MAT8595. It is an ISM course, and even if it will probably be given in French, I will upload information here, in English. I will upload the (detailed) syllabus of the course during the Christmas holidays. But to give an overview, for those willing to register, the first part of the course will focus on extreme value theory. The references will be

The second part of the course will be on multivariate distributions. The references will be

Specific references and more details about the chapters will be given during the course. I will upload exercises this winter, as well as a list of articles that will be used for projects. Examples will be illustrated using R functions from dedicated packages.

Grades will be based on exercises (homework), a report (based on a published paper) and a final written exam.