# “A 99% TVaR is generally a 99.6% VaR”

Almost 6 years ago, I posted a brief comment on a sentence I found surprising at the time, discovered in a report claiming that

the expected shortfall […] at the 99 % level corresponds quite closely to the […] value-at-risk at a 99.6% level

which was inspired by a remark in the Swiss Experience report,

expected shortfall […] on a 99% confidence level […] corresponds to approximately 99.6% to 99.8% Value at Risk
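For what it is worth, a quick numerical check is consistent with that rule of thumb, at least for a Gaussian risk (a minimal sketch of mine, not necessarily the model underlying those reports): the 99% expected shortfall of a standard Gaussian variable corresponds to a quantile at roughly the 99.6% level,

```
alpha=.99
(ES=dnorm(qnorm(alpha))/(1-alpha))  # 99% expected shortfall of a N(0,1), about 2.665
pnorm(ES)                           # level of the VaR matching that amount, about 0.996
```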

# More significant? So what…

Following my non-life insurance class this morning, I had an interesting question from a student, which I will try to illustrate and reformulate as accurately as possible. Consider a simple regression model, with one variable of interest and one possible explanatory variable. Assume that we have two possible models, with the following output (yes, I do hide the interesting parts here, but it is to get quickly to my student’s point)

```
Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.92883    0.06391  14.534   <2e-16 ***
X           -0.12499    0.06108  -2.046   0.0421 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```

for the first model – a GLM with some distribution, and some link function – and

```
Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.92901    0.06270  14.817   <2e-16 ***
X           -0.09883    0.05816  -1.699   0.0909 .
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```

for the second one – another GLM, with another distribution, but the same link function (I guess I could have changed it, but it does not really matter here). Then I got the following statement: “I would like to choose the first model, because the explanatory variable is more significant, and therefore this model should have stronger predictive power.”

That’s a nice idea, isn’t it? Actually, I guess this is why I love teaching, because I would never have thought of such an idea by myself. And when you look at that statement, somehow it could make sense. Except that, from my point of view, it is not valid at all. My first thought was to recall a standard example in statistical inference: you cannot claim that a distribution is better than another one just by looking at the parameter estimates.

```
> library(MASS)   # for fitdistr
> fitdistr(Y,"normal")
      mean          sd
  0.93685011   0.90700830
 (0.06413517) (0.04535042)
> fitdistr(Y,"exponential")
      rate
  1.06740661
 (0.07547704)
```

Can I claim that the Gaussian distribution is better than the exponential one because the parameter estimates have smaller standard errors? Because, somehow, this is what we did when we claimed previously that the first model was better than the second one.
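If one really wanted to compare the two fits, comparing log-likelihoods (or AIC) would make more sense than comparing standard errors. A minimal sketch, on the same sample (my addition, not in the original post),

```
> library(MASS)
> fitdistr(Y,"normal")$loglik       # log-likelihood of the Gaussian fit
> fitdistr(Y,"exponential")$loglik  # log-likelihood of the exponential fit
```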

Let me get back to the outputs of the two regressions, and let me explain what I did. Actually, I wanted to have a story close to the one on the Gaussian versus exponential fit. So I generated some exponential random variables,

```
> set.seed(5)
> n=200
> U=runif(n)
> Y=-log(U)
```

Here, we can visualize the histogram of this sample, as well as the density of the fitted exponential distribution

```
> hist(Y,proba=TRUE,col="light green",border="white",
+ lwd=2,breaks=seq(0,16/3,by=1/3))
> x=seq(0,6,by=.02)
> lines(x,dexp(x,1/mean(Y)),col="red",lty=2)
```

On top of that, let us fit a gamma distribution, using a GLM (where the regression is on a constant only), just for practice, because later on we will use a gamma regression on that variable

```
> reg0=glm(Y~1,family=Gamma(link="identity"))
> a=reg0$coefficients
> b=summary(reg0)$dispersion
> lines(x,dgamma(x,shape=1/b,scale=a*b),col="blue")
```

Now, we need a covariate, to run some regressions. What I wanted is a variable slightly correlated with our previous variable. Slightly, just to make sure that our $p$-value in the regression will be close to 5% or 10%. So here, I generated a variable so that the pair has a Clayton copula, with coefficient 0.1 (which is small, extremely small)

```
> a=.1
> set.seed(5)
> n=200
> U=runif(n)
> V=(U^(-a)*(runif(n)^(-a/(1+a))-1)+1)^(-1/a)
> Y=-log(U)
> X=qnorm(V)
```

To visualize the copula of the variables, we can use

```
> cop=function(u,v){
+ (a+1)*(u*v)^(-(a+1))*
+ (u^(-a)+v^(-a)-1)^(-(2*a+1)/a) }
> x=y=seq(.05,.95,by=.05)
> z=outer(x,y,cop)
> persp(x,y,z,
+ ticktype="detailed",zlab="")
```

We should not be far from independence (actually, there is a negative, and significant, Pearson correlation). Now, consider two models,

• a Gaussian model (here a standard linear model)
• a gamma model, with an identity link function

The outputs are the following (you will recognize the outputs given previously)

```
> reg1=lm(Y~X)
> summary(reg1)

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.92883    0.06391  14.534   <2e-16 ***
X           -0.12499    0.06108  -2.046   0.0421 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.9021 on 198 degrees of freedom
Multiple R-squared:  0.02071,	Adjusted R-squared:  0.01576
F-statistic: 4.187 on 1 and 198 DF,  p-value: 0.04206

> reg2=glm(Y~X,family=Gamma(link="identity"))
> summary(reg2)

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.92901    0.06270  14.817   <2e-16 ***
X           -0.09883    0.05816  -1.699   0.0909 .
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for Gamma family taken to be 0.9086447)

    Null deviance: 229.72  on 199  degrees of freedom
Residual deviance: 226.58  on 198  degrees of freedom
AIC: 379.22

Number of Fisher Scoring iterations: 10
```

And here are the two predictions,

So, which model should we use? As usual, my answer will be “let’s have a look at the data” instead of looking only at tables of figures. Using some code posted a few days ago, let us visualize the two regressions. The Gaussian model is here

(for the lower part, I do not go below 0, since we do have, here, a positive variable that we would like to model) while the gamma one is here
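(the figures themselves are not reproduced here; a minimal sketch of such a graph, with the two fitted means and approximate 95% confidence bands, could be the following, where the blue curves are the Gaussian model and the red ones the gamma model)

```
> u=seq(min(X),max(X),by=.1)
> p1=predict(reg1,newdata=data.frame(X=u),se.fit=TRUE)
> p2=predict(reg2,newdata=data.frame(X=u),type="response",se.fit=TRUE)
> plot(X,Y)
> lines(u,p1$fit,col="blue")
> lines(u,p1$fit+2*p1$se.fit,col="blue",lty=2)
> lines(u,p1$fit-2*p1$se.fit,col="blue",lty=2)
> lines(u,p2$fit,col="red")
> lines(u,p2$fit+2*p2$se.fit,col="red",lty=2)
> lines(u,p2$fit-2*p2$se.fit,col="red",lty=2)
```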

And if we believe that the explanatory variable has no predictive power (since we can claim that the parameter is not significant in the regression), and we remove it from the regression, we get

Here, I do believe that the gamma (not to say the exponential) model is better, because it is clearly more coherent with the properties of the variable of interest. I trust the confidence intervals obtained above with the gamma model more than the ones obtained with a Gaussian distribution. Even if the parameter in the regression is “more significant”.

# Fisher-Tippett theorem and limiting distribution for the maximum

Tomorrow, we will discuss the Fisher-Tippett theorem. The idea is that there are only three possible limiting distributions for normalized versions of the maxima of i.i.d. samples $X_1,\dots,X_n$. For bounded distributions, consider e.g. the uniform distribution on the unit interval, i.e. with distribution function $F(x)=x$ on the unit interval. Let $b_n=1$ and $a_n=1/n$. Then, for all $n$ and all $x\in[-n,0]$,

$$\mathbb{P}\left(\frac{\max\{X_1,\dots,X_n\}-b_n}{a_n}\leq x\right)=\left(1+\frac{x}{n}\right)^n\rightarrow e^{x}$$

i.e. the limiting distribution of the maximum is Weibull’s.

```
set.seed(1)
s=1000000
n=100
M=matrix(runif(s),n,s/n)
V=apply(M,2,max)
bn=1
an=1/n
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(-7,1),main="",breaks=seq(-20,10,by=.25))
u=seq(-10,0,by=.1)
v=exp(u)
lines(u,v,lwd=3,col="red")
```

For heavy tailed distributions, or Pareto-type tails, consider Pareto samples, with distribution function $F(x)=1-x^{-2}$, $x\geq 1$ (such variates can be obtained as $U^{-1/2}$, with $U$ uniform). Let $b_n=0$ and $a_n=n^{1/2}$, then

$$\mathbb{P}\left(\frac{\max\{X_1,\dots,X_n\}-b_n}{a_n}\leq x\right)=\left(1-\frac{x^{-2}}{n}\right)^n\rightarrow \exp(-x^{-2})$$

which means that the limiting distribution is Fréchet’s.

```
library(evd)   # for dfrechet
set.seed(1)
s=1000000
n=100
M=matrix((runif(s))^(-1/2),n,s/n)
V=apply(M,2,max)
bn=0
an=n^(1/2)
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(0,7),main="",breaks=seq(0,max(U)+1,by=.25))
u=seq(0,10,by=.1)
v=dfrechet(u,shape=2)
lines(u,v,lwd=3,col="red")
```

For light tailed distributions, or exponential tails, consider e.g. a sample of exponentially distributed variates, with common distribution function $F(x)=1-e^{-x}$. Let $b_n=\log(n)$ and $a_n=1$, then

$$\mathbb{P}\left(\frac{\max\{X_1,\dots,X_n\}-b_n}{a_n}\leq x\right)=\left(1-\frac{e^{-x}}{n}\right)^n\rightarrow \exp(-e^{-x})$$

i.e. the limiting distribution for the maximum is Gumbel’s distribution.

```
library(evd)
set.seed(1)
s=1000000
n=100
M=matrix(rexp(s,1),n,s/n)
V=apply(M,2,max)
(bn=qexp(1-1/n))   # equals log(n)
log(n)
an=1
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(-2,7),ylim=c(0,.39),main="",breaks=seq(-5,15,by=.25))
u=seq(-5,15,by=.1)
v=dgumbel(u)
lines(u,v,lwd=3,col="red")
```

Consider now a Gaussian sample. We can use the following approximation of the cumulative distribution function (based on l’Hôpital’s rule),

$$1-\Phi(x)\sim\frac{\varphi(x)}{x}$$

as $x\rightarrow\infty$. Let $b_n=\Phi^{-1}(1-1/n)$ and $a_n=1/b_n$. Then we can get

$$\mathbb{P}\left(\frac{\max\{X_1,\dots,X_n\}-b_n}{a_n}\leq x\right)\rightarrow \exp(-e^{-x})$$

as $n\rightarrow\infty$, i.e. the limiting distribution of the maximum of a Gaussian sample is Gumbel’s. But what we do not see here is that, for a Gaussian sample, the convergence is extremely slow: with 100 observations, we are still far away from the Gumbel distribution,
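(the corresponding figure is not reproduced here; a sketch of the n=100 experiment, using the same normalization as the n=1000 code below,)

```
set.seed(1)
s=1000000
n=100
M=matrix(rnorm(s,0,1),n,s/n)
V=apply(M,2,max)
(bn=qnorm(1-1/n,0,1))
an=1/bn
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(-2,7),ylim=c(0,.39),main="",breaks=seq(-5,15,by=.25))
u=seq(-5,15,by=.1)
v=dgumbel(u)   # from the evd package, loaded above
lines(u,v,lwd=3,col="red")
```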

and it is only slightly better with 1,000 observations,

```
set.seed(1)
s=10000000
n=1000
M=matrix(rnorm(s,0,1),n,s/n)
V=apply(M,2,max)
(bn=qnorm(1-1/n,0,1))
an=1/bn
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(-2,7),ylim=c(0,.39),main="",breaks=seq(-5,15,by=.25))
u=seq(-5,15,by=.1)
v=dgumbel(u)
lines(u,v,lwd=3,col="red")
```

Even worse, consider lognormal observations. In that case, recall that if we consider (increasing) transformations of variates, we remain in the same domain of attraction. Hence, since $e^X$ is lognormal when $X$ is Gaussian, if

$$\mathbb{P}\left(\frac{\max\{X_1,\dots,X_n\}-b_n}{a_n}\leq x\right)\rightarrow \exp(-e^{-x})$$

then

$$\mathbb{P}\left(\max\{e^{X_1},\dots,e^{X_n}\}\leq e^{a_nx+b_n}\right)\rightarrow \exp(-e^{-x})$$

i.e., using Taylor’s approximation $e^{a_nx+b_n}\approx e^{b_n}+a_ne^{b_n}x$ on the right term,

$$\mathbb{P}\left(\frac{\max\{e^{X_1},\dots,e^{X_n}\}-e^{b_n}}{a_ne^{b_n}}\leq x\right)\rightarrow \exp(-e^{-x})$$

This gives us the normalizing coefficients we should use here: $e^{b_n}$ and $a_ne^{b_n}$.

```
set.seed(1)
s=10000000
n=1000
M=matrix(rlnorm(s,0,1),n,s/n)
V=apply(M,2,max)
bn=exp(qnorm(1-1/n,0,1))
an=exp(qnorm(1-1/n,0,1))/(qnorm(1-1/n,0,1))
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(-2,7),ylim=c(0,.39),main="",breaks=seq(-5,40,by=.25))
u=seq(-5,15,by=.1)
v=dgumbel(u)
lines(u,v,lwd=3,col="red")
```

# Will I ever be a bayesian statistician? (part 2)

A few weeks ago, I started a series of posts on the magic of bayesian statistics through the eyes of a muggle (see http://freakonometrics.hypotheses.org/2191). It might be time to go a bit further… And today, I wanted to discuss the choice of the a priori distribution of the parameter (which was mentioned in the comments on the previous post). As far as I understood, there are several houses with different ideas on how to choose it.

• The conjugate house

The first idea (used in the previous post, here) is to consider a distribution from the exponential family. To be formal, those distributions can be written as

$$f(x|\theta)=h(x)\exp\left(\eta(\theta)\cdot T(x)-A(\theta)\right)$$

(in a form as general as possible), i.e.

$$f(x|\theta)=h(x)g(\theta)\exp\left(\eta(\theta)\cdot T(x)\right)$$

Here $\eta(\theta)$ is somehow the new parameter of the distribution. Then, a conjugate prior for the parameter $\theta$ of the exponential family is given by

$$\pi(\theta)\propto g(\theta)^{\lambda}\exp\left(\eta(\theta)\cdot\mu\right)$$

where $\mu\in\mathbb{R}^{k}$ (where $k$ is the dimension of $\eta(\theta)$).

The conjugate prior is interesting since, when combined with the likelihood (and normalized), it produces a posterior distribution of the same type as the prior. And a lot of standard distributions have a conjugate prior. E.g.

• For a Bernoulli distribution, i.e. the $X_i$’s are i.i.d. with distribution $\mathcal{B}(p)$, assume that the a priori distribution of $p$ is $Beta(\alpha,\beta)$; then the a posteriori distribution of $p$ is still Beta, with parameters $\alpha+\sum x_i$ and $\beta+n-\sum x_i$ (a small numerical illustration is given after this list)
• For a binomial distribution, $X\sim\mathcal{B}(m,p)$, assume that the a priori distribution of $p$ is $Beta(\alpha,\beta)$; then the a posteriori distribution of $p$ is still Beta, with parameters $\alpha+x$ and $\beta+m-x$
• For a negative binomial distribution, $X\sim\mathcal{NB}(r,p)$, assume that the a priori distribution of $p$ is $Beta(\alpha,\beta)$; then the a posteriori distribution of $p$ is still Beta, with parameters $\alpha+rn$ and $\beta+\sum x_i$
• For a Poisson distribution, $X\sim\mathcal{P}(\lambda)$, assume that the a priori distribution of $\lambda$ is $Gamma(\alpha,\beta)$; then the a posteriori distribution of $\lambda$ is still gamma, with parameters $\alpha+\sum x_i$ and $\beta+n$
• For a geometric distribution, $X\sim\mathcal{G}(p)$, assume that the a priori distribution of $p$ is $Beta(\alpha,\beta)$; then the a posteriori distribution of $p$ is still Beta, with parameters $\alpha+n$ and $\beta+\sum x_i$
• For an exponential distribution, $X\sim\mathcal{E}(\lambda)$, assume that the a priori distribution of $\lambda$ is $Gamma(\alpha,\beta)$; then the a posteriori distribution of $\lambda$ is still Gamma, with parameters $\alpha+n$ and $\beta+\sum x_i$
• For a Gaussian distribution, $X\sim\mathcal{N}(\mu,\sigma^2)$ with $\sigma^2$ known, assume that $\mu\sim\mathcal{N}(\mu_0,\sigma_0^2)$; then

$$\mu|\boldsymbol{x}\sim\mathcal{N}\left(\frac{\sigma^2\mu_0+\sigma_0^2\sum x_i}{\sigma^2+n\sigma_0^2},\frac{\sigma^2\sigma_0^2}{\sigma^2+n\sigma_0^2}\right)$$

• For a gamma distribution, $X\sim Gamma(k,\lambda)$ with $k$ known, assume that the a priori distribution of $\lambda$ is $Gamma(\alpha,\beta)$; then the a posteriori distribution of $\lambda$ is still Gamma, with parameters $\alpha+nk$ and $\beta+\sum x_i$
• For a Pareto distribution, $X\sim Pareto(\alpha)$ with density $\alpha x^{-\alpha-1}$ on $[1,\infty)$, assume that the a priori distribution of $\alpha$ is $Gamma(a,b)$; then the a posteriori distribution of $\alpha$ is still Gamma, with parameters $a+n$ and $b+\sum\log x_i$
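To illustrate the first case numerically, here is a small Beta-Bernoulli sketch (my addition, not from the original post), showing how the a posteriori density concentrates once the observations come in,

```
set.seed(1)
n=20
x=rbinom(n,size=1,prob=.3)   # a Bernoulli sample, with true p=.3
alpha=2; beta=2              # a priori, p ~ Beta(2,2)
u=seq(0,1,by=.01)
plot(u,dbeta(u,alpha+sum(x),beta+n-sum(x)),type="l",col="red")  # a posteriori
lines(u,dbeta(u,alpha,beta),col="blue",lty=2)                   # a priori
```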

• The non-informative or vague house

So far, the choice of the prior was not neutral, in the sense that the a priori of the statistician will have an influence on the a posteriori distribution (we’ll discuss that point further later on). We could be interested in the case where the prior is somehow neutral. A famous example is Laplace’s study of the probability of the birth of a boy, which is binomial. Between 1745 and 1784, Pierre Simon Laplace observed 393,386 births of boys versus 377,555 births of girls (or 251,527 boys versus 241,945 girls if we consider the initial article, for births before 1770). He wanted to quantify the probability that p, the probability of having a boy, exceeds 1/2. He assumed that, a priori, p was uniform on the unit interval, claiming that this was as neutral as possible. But it is not quite correct.
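With that uniform (i.e. Beta(1,1)) prior, the a posteriori distribution of p is Beta(1+393386, 1+377555), and Laplace’s probability can be computed in one line (a quick sketch of the computation, using the counts above),

```
1-pbeta(1/2,1+393386,1+377555)  # P(p>1/2 | data), numerically indistinguishable from 1
```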
The idea of a noninformative prior is that we should get an equivalent result when considering a transformed parameter. So assume that the parameter is no longer $\theta$, but $\tau$, where $\theta=h(\tau)$ (for some bijective transformation $h$). The distribution (density) of $\tau$ is then

$$\tilde{\pi}(\tau)=\pi(h(\tau))\,|h'(\tau)|$$

Let $I(\theta)$ denote the Fisher information of parameter $\theta$,

$$I(\theta)=\mathbb{E}\left[\left(\frac{\partial\log f(X|\theta)}{\partial\theta}\right)^2\right]$$

Then, the Fisher information of parameter $\tau$ is

$$\tilde{I}(\tau)=\mathbb{E}\left[\left(\frac{\partial\log f(X|h(\tau))}{\partial\tau}\right)^2\right]$$

which can be written

$$\tilde{I}(\tau)=I(h(\tau))\,[h'(\tau)]^2$$

So if we want a prior invariant under transformation of the parameter (i.e. such that $\tilde{\pi}(\tau)\propto\sqrt{\tilde{I}(\tau)}$ as well), it seems natural to consider

$$\pi(\theta)\propto\sqrt{I(\theta)}$$

or at least something proportional to that square root, since we want to get a density.
Thus, from Jeffreys’ principle, the prior distribution for a single parameter is noninformative if it is taken proportional to the square root of Fisher’s information measure. For those who want to go further, see Noninformative Priors Do Not Exist or A Catalog of Noninformative Priors.

• For the Poisson distribution, the Jeffreys prior for the rate parameter $\lambda$ is $\pi(\lambda)\propto\lambda^{-1/2}$
• For the Bernoulli distribution, the Jeffreys prior for the probability parameter $p$ is $\pi(p)\propto p^{-1/2}(1-p)^{-1/2}$

This is the arcsine distribution, i.e. a beta distribution with both parameters equal to 1/2.
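That Beta(1/2,1/2) prior can be visualized with a couple of lines (a small sketch of mine), showing the mass pushed towards 0 and 1,

```
u=seq(.01,.99,by=.01)
plot(u,dbeta(u,.5,.5),type="l",lwd=2,col="red")  # Jeffreys (arcsine) prior for p
```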

• The expert house

The idea is quite simple: we need a prior distribution that encodes what the expert already knows about the parameter. Assume, for instance, that we have already seen a similar problem before. I remember Eric Parent mentioning the case of river flow models: if we have two similar rivers, then it might be interesting to use the information on one river as a priori information for the second one, e.g. taking the a posteriori distribution obtained on the first river as the a priori distribution for the second.
I guess it is also possible to use meta-regression to get an aggregation of experts’ opinions.

To go further on bayesian statistics, I suggest going to Albus Dumbledore’s blog, here, or the blog of some PhD (and postdoc) students at Hogwarts, there. Or, if you can wait, a dozen other posts will come soon (well, let’s hope so). The next one will probably be on a posteriori calculations (which is the natural step, since we’ve seen the choice of a priori distributions here).