“A 99% TVaR is generally a 99.6% VaR”

Almost 6 years ago, I posted a brief comment on a sentence I had found surprising, discovered at the time in a report claiming that

the expected shortfall […] at the 99 % level corresponds quite closely to the […] value-at-risk at a 99.6% level

which was inspired by a remark in the Swiss Experience report,

expected shortfall […] on a 99% confidence level […] corresponds to approximately 99.6% to 99.8% Value at Risk

Recall that

$$\text{TVaR}_p(X)=\mathbb{E}\big[X\mid X>\text{VaR}_p(X)\big]$$

while

$$\text{VaR}_p(X)=F^{-1}(p)=\inf\{x\in\mathbb{R}:F(x)\geq p\}.$$

For any absolutely continuous, strictly increasing cumulative distribution function $F$, both the VaR and the TVaR are continuous and strictly increasing in the probability level, so it is possible to relate any TVaR to some VaR at a different (higher) level. I.e. there is some level $q(p)>p$ such that

$$\text{TVaR}_p(X)=\text{VaR}_{q(p)}(X),\qquad\text{where}\quad q(p)=F\big(\text{TVaR}_p(X)\big).$$

Which is not the same as claiming that, say, $q(99\%)=99.6\%$ for all distributions: the level $q(p)$ depends on the distribution of $X$, not only on $p$.

Consider for instance the lognormal distribution. Even though the expected shortfall admits a closed-form expression, it is easy to approximate it by Monte Carlo simulation, and then to use the cumulative distribution function to get the associated level for the value at risk,

> n=1e7
> TVaR_VaR_LN=function(p){
+     X=rlnorm(n)
+     E=mean(X[X>qlnorm(p)]) # Monte Carlo estimate of the TVaR at level p
+     return(plnorm(E))      # probability level of the VaR matching that TVaR
+ }

E.g.

> TVaR_VaR_LN(.99)
[1] 0.9967621
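
As a sanity check, the expected shortfall of the standard lognormal distribution has the standard closed-form expression $\mathbb{E}[X\mid X>F^{-1}(p)]=e^{1/2}\,\Phi\big(1-\Phi^{-1}(p)\big)/(1-p)$, so the associated VaR level can also be obtained without simulation (the function name below is just for illustration),

> TVaR_VaR_LN_exact=function(p){
+     E=exp(1/2)*pnorm(1-qnorm(p))/(1-p) # closed-form expected shortfall
+     return(plnorm(E))
+ }

For p = 99%, this returns approximately 0.99677, consistent with the simulated value above.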

In order to plot it, define

> prob=c(seq(.8,.99,by=.01),.995)
> P_ln=unlist(lapply(prob,TVaR_VaR_LN))

Now, if we consider a distribution with lighter tails, like the exponential distribution

> TVaR_VaR_exp=function(p){
+     X=rexp(n)
+     E=mean(X[X>qexp(p)])
+     return(pexp(E))
+ }
> P_exp=unlist(lapply(prob,TVaR_VaR_exp))

or a distribution with heavier tails, like the Pareto one,

> # Pareto distribution with survival function x^(-a), for x >= 1
> qpareto=function(u,a=2){(1-u)^(-1/a)}
> rpareto=function(n,a=2){qpareto(runif(n),a)}
> ppareto=function(x,a=2){1-1/x^a}
> TVaR_VaR_par=function(p){
+     X=rpareto(n)
+     E=mean(X[X>qpareto(p)])
+     return(ppareto(E))
+ }
> P_pareto=unlist(lapply(prob,TVaR_VaR_par))

we have different probability levels.

> plot(prob,P_ln,type="l",xlab="TVaR probability level",ylab="VaR probability level")
> lines(prob,P_pareto,type="l",col="red")
> lines(prob,P_exp,type="l",col="blue")
> legend("topleft",
+        c("Pareto","Log Normal","Exponential"),
+        col=c("red","black","blue"),lty=1)
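
The exponential and Pareto levels can also be confirmed in closed form: by memorylessness, the unit exponential satisfies $\mathbb{E}[X\mid X>q_p]=q_p+1$, so the associated level is $1-(1-p)e^{-1}$, while for the Pareto distribution above $\mathbb{E}[X\mid X>q_p]=\frac{a}{a-1}\,q_p$, so the level is $1-\big(\tfrac{a-1}{a}\big)^a(1-p)$, i.e. $1-(1-p)/4$ when $a=2$. A quick check,

> 1-(1-.99)*exp(-1) # exponential, exact level
[1] 0.9963212
> 1-(1-.99)/4       # Pareto (a=2), exact level
[1] 0.9975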

Hence, the heavier the tail, the higher the probability level. So, systematically approximating a 99% TVaR with a 99.6% VaR might work in some cases, e.g.

> TVaR_VaR_exp(.99)
[1] 0.9963071

but it is usually an optimistic approximation.



Cite this blog post
Arthur Charpentier (2015, August 29). “A 99% TVaR is generally a 99.6% VaR”. Freakonometrics. Retrieved March 19, 2024, from https://doi.org/10.58079/ov0a

4 thoughts on ““A 99% TVaR is generally a 99.6% VaR””

  1. Hi Arthur,

    Do you know any technical articles that study this relationship between the Expected Shortfall and the parametric VaR?
    I read the Swiss report you mentioned, but they don’t give any explanation on these numbers.

    Thank you,

    Leticia

  2. Hi Arthur

    I just read your post on R-bloggers about the relationship between TVaR and VaR. While at my company, my role was to build a simulation model to estimate the regulatory capital requirement for two syndicates. We typically observed that the 99.5% VaR corresponded to the 98.5% TVaR. This is very close to the observation you read about elsewhere.

    In your post you compare the VaR and TVaR results for various distributions and conclude that the relationship may be optimistic. I find this interesting.

    In my case, I formed my impression on the relationship between VaR and TVaR when comparing sample statistics derived from aggregate loss results – i.e. results based on adding the simulation output from all lines of business in which we wrote (re)insurance. In contrast, your conclusions are based on observations of univariate distributions.

    My gut feel is that your conclusions are valid. However, I think that many actuaries/others deploy aggregate models with limited tail dependence and the rule of thumb quoted is an artefact of the dependency structure. Would be keen to hear your thoughts on this.

    For example, one extension of your post might be to create three aggregate loss models. Each model would consist of the same (say) three heavy-tailed univariate distributions. Based on your assertion, using heavy-tailed distributions should (on a standalone basis) invalidate the rule.

    Now combine the three distributions using i) a Gaussian copula; ii) a t copula with high degrees of freedom; and iii) a t copula with low degrees of freedom. Across these three aggregate models, the level of tail dependence should be increasing.

    What will be interesting to see is: “does the rule reassert itself when applied to aggregate models? Only when there is limited tail dependence? Or in some other situation?”

    Anyway, hope you don’t mind me providing some thoughts.
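
A minimal sketch of the experiment suggested in this comment, assuming three Pareto(2) marginals coupled by a Gaussian copula simulated with MASS::mvrnorm (the function name agg_TVaR_VaR and the correlation 0.5 are illustrative choices only), could look like

> library(MASS)
> agg_TVaR_VaR=function(n=1e6,rho=.5,a=2,p=.99){
+     S=matrix(rho,3,3); diag(S)=1   # exchangeable correlation matrix
+     U=pnorm(mvrnorm(n,rep(0,3),S)) # Gaussian copula: normals to uniforms
+     X=rowSums((1-U)^(-1/a))        # Pareto(a) marginals, aggregated loss
+     E=mean(X[X>quantile(X,p)])     # empirical TVaR at level p
+     return(mean(X<=E))             # VaR level matching that TVaR
+ }

Replacing the multivariate normal draws by multivariate Student t draws (dividing by $\sqrt{\chi^2_\nu/\nu}$ and using pt instead of pnorm) would give the t copula versions, with tail dependence increasing as the degrees of freedom decrease.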
