Tag Archives: VaR

“A 99% TVaR is generally a 99.6% VaR”

Almost 6 years ago, I posted a brief comment on a sentence I found surprising at the time, discovered in a report claiming that

the expected shortfall […] at the 99 % level corresponds quite closely to the […] value-at-risk at a 99.6% level

which was inspired by a remark in the Swiss Experience report,

expected shortfall […] on a 99% confidence level […] corresponds to approximately 99.6% to 99.8% Value at Risk
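As a quick sanity check (a sketch, not the computation behind those reports), the two quantities can be compared for a Gaussian risk, for which the expected shortfall has a closed form,

> p=.99
> dnorm(qnorm(p))/(1-p)   # 99% expected shortfall of a N(0,1), approximately 2.665
> qnorm(.996)             # 99.6% Value-at-Risk of a N(0,1), approximately 2.652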

Continue reading “A 99% TVaR is generally a 99.6% VaR”

Vector Autoregressive Models

Consider here some https://latex.codecogs.com/gif.latex?VAR(1) model,

https://latex.codecogs.com/gif.latex?\begin{bmatrix}Y_{1,t}%20\\%20Y_{2,t}\end{bmatrix}%20=%20\begin{bmatrix}A_{1,1}&A_{1,2}%20\\%20A_{2,1}&A_{2,2}\end{bmatrix}\begin{bmatrix}Y_{1,t-1}%20\\%20Y_{2,t-1}\end{bmatrix}%20+%20\begin{bmatrix}\varepsilon_{1,t}%20\\%20\varepsilon_{2,t}\end{bmatrix}

We’ve seen in class that stationarity of that time series, in the sense that https://latex.codecogs.com/gif.latex?\mathbb{E}[\boldsymbol{Y}_t]=\boldsymbol{\mu} and https://latex.codecogs.com/gif.latex?\text{Var}[\boldsymbol{Y}_t,\boldsymbol{Y}_{t-h}]=\boldsymbol{\Gamma}(h), was valid if the roots (in https://latex.codecogs.com/gif.latex?\mathbb{C}) of the characteristic polynomial – https://latex.codecogs.com/gif.latex?P(z)=\text{det}(\mathbb{I}-\boldsymbol{A}z) – were outside the unit circle.

To visualize this point, consider the following time series

https://latex.codecogs.com/gif.latex?\begin{bmatrix}Y_{1,t}%20\\%20Y_{2,t}\end{bmatrix}%20=%20\begin{bmatrix}0.7&0.4%20\\%200.2&0.3\end{bmatrix}\begin{bmatrix}Y_{1,t-1}%20\\%20Y_{2,t-1}\end{bmatrix}%20+%20\begin{bmatrix}\varepsilon_{1,t}%20\\%20\varepsilon_{2,t}\end{bmatrix}

To generate that time series, we need to generate a bivariate white noise, i.e. https://latex.codecogs.com/gif.latex?\text{Var}(\boldsymbol{\varepsilon}_t)=\boldsymbol{\Sigma} (not necessarily a diagonal matrix), and https://latex.codecogs.com/gif.latex?\text{Var}(\boldsymbol{\varepsilon}_t,\boldsymbol{\varepsilon}_{t-h})=\boldsymbol{0}. For instance

> n=500
> r=0.7
> set.seed(1)
> Z1=rnorm(n)
> Z2=rnorm(n)
> E1=Z1
> E2=r*Z1+sqrt(1-r^2)*Z2
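As a quick check (a sketch), the empirical correlation of that bivariate noise should be close to the target value,

> cor(E1,E2)   # should be close to r=0.7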

Now, to generate our time series, use

> A=matrix(c(.7,.2,.4,.3),2,2)
> X1=X2=rep(0,n)
> for(t in 2:n){
+   X1[t]=A[1,1]*X1[t-1]+A[1,2]*X2[t-1]+E1[t]
+   X2[t]=A[2,1]*X1[t-1]+A[2,2]*X2[t-1]+E2[t]  
+ }

Here, we have

> plot(X1,type="l",col="red")
> lines(X2,col="blue")

Those two time series seem to be stationary. And, indeed,

> polyroot(c(1,-sum(diag(A)),det(A)))
[1] 1.18+0i 6.51-0i
> Mod(polyroot(c(1,-sum(diag(A)),det(A))))
[1] 1.18 6.51
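Equivalently (a small sketch), since the roots of https://latex.codecogs.com/gif.latex?\text{det}(\mathbb{I}-\boldsymbol{A}z) are the reciprocals of the eigenvalues of https://latex.codecogs.com/gif.latex?\boldsymbol{A}, stationarity can also be checked by verifying that all eigenvalues of the matrix lie inside the unit circle,

> Mod(eigen(A)$values)   # approximately 0.85 and 0.15, both smaller than 1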

Continue reading Vector Autoregressive Models

Risk Measures with Extreme Value Models

We’ve seen on Monday, in the MAT8595 course, how to use the Generalized Pareto Distribution to estimate some downside risk measures, given a sample (assumed to be i.i.d.; I will not mention here properties of extremes for stochastic processes) with distribution https://latex.codecogs.com/gif.latex?F. The cumulative distribution function of the Generalized Pareto distribution is here

https://latex.codecogs.com/gif.latex?G_{(\xi,\sigma)}(x)=1-\left(1+\frac{\xi%20x}{\sigma}\right)^{-1/\xi}

For some threshold https://latex.codecogs.com/gif.latex?u, and https://latex.codecogs.com/gif.latex?x\geq%20u, we can write

https://latex.codecogs.com/gif.latex?1-F(x)=[1-F(u)]\cdot[1-F_u(x-u)]

where https://latex.codecogs.com/gif.latex?F_u denotes the distribution function of the excesses over the threshold.

From the Pickands–Balkema–de Haan theorem, if https://latex.codecogs.com/gif.latex?u is large enough, then

https://latex.codecogs.com/gif.latex?F_u(x)\approx%20G_{(\xi,\sigma)}(x)

Given our sample https://latex.codecogs.com/gif.latex?\{x_1,\cdots,x_n\}, let https://latex.codecogs.com/gif.latex?N_u denote the number of observations over the threshold https://latex.codecogs.com/gif.latex?u. Then we can write

https://latex.codecogs.com/gif.latex?\widehat{F}(x)=\frac{n-N_u}{n}+\frac{N_u}{n}G_{(\xi,\sigma)}(x-u)

or equivalently

https://latex.codecogs.com/gif.latex?\widehat{F}(x)=1-\frac{N_u}{n}\left(1+\frac{\xi(x-u)}{\sigma}\right)^{-1/\xi}

If we invert this function, we get the quantile of level https://latex.codecogs.com/gif.latex?p,

https://latex.codecogs.com/gif.latex?\widehat{Q}(p)=u+\frac{\sigma}{\xi}\left(\left(\frac{n}{N_u}(1-p)\right)^{-\xi}-1\right)

Actually, instead of fixing a threshold (and then considering the implied number of observations exceeding that threshold), it is possible to fix the number of observations to keep, and then the associated threshold will be the corresponding order statistic.

The density of the Generalized Pareto distribution is here

https://latex.codecogs.com/gif.latex?%20%20%20%20%20g_{(\xi,\sigma)}(x)%20=%20\frac{1}{\sigma}\left(1%20+%20\frac{\xi%20x}{\sigma}\right)^{\left(-\frac{1}{\xi}%20-%201\right)}

which is here a function of two parameters, https://latex.codecogs.com/gif.latex?%20%20\xi and https://latex.codecogs.com/gif.latex?\sigma. As discussed in the course, it is possible to use the Delta method to derive the asymptotic distribution of any quantile, and then get an approximate (asymptotic) confidence interval.

But since https://latex.codecogs.com/gif.latex?\sigma is usually not a parameter of interest, why not consider a reparametrization of our density, as a function of https://latex.codecogs.com/gif.latex?%20%20\xi and https://latex.codecogs.com/gif.latex?Q(p) (for some probability https://latex.codecogs.com/gif.latex?p that will be considered as fixed from now on)? We can easily get (assuming that https://latex.codecogs.com/gif.latex?\xi\neq%200) that

https://latex.codecogs.com/gif.latex?g_{\xi,Q(p)}(x)=\frac{\displaystyle{\left(\frac{n}{N_u}(1-p)\right)^{-\xi}-1}}{\xi[Q(p)-u]}\left(1+\frac{\displaystyle{\left(\frac{n}{N_u}(1-p)\right)^{-\xi}-1}}{[Q(p)-u]}\cdot%20x\right)^{-\frac{1}{\xi}-1}

This expression is rather simple, and it can be used to derive the likelihood (based on the observations exceeding the threshold)

https://latex.codecogs.com/gif.latex?\log\mathcal{L}(\xi,Q(p);\boldsymbol{x})=\sum_{i=0}^{N_u-1}%20\log%20g_{\xi,Q(p)}(x_{n-i:n})

Numerically, let us write (and plot) that function. Consider some real data here (the danish fire losses dataset, from the evir package),

> library(evir)
> data(danish)
> X=as.numeric(danish)
> Xs=sort(X,decreasing=TRUE)
> n=length(X)
> u=10
> nu=sum(X>u)

Consider, say, the 99.9% quantile,

> p=.999

The empirical quantile is here

> quantile(X,p)
   99.9% 
131.5519

The density and the loglikelihood functions are here

> gq=function(x,xi,q){
+ ( (n/nu*(1-p) ) ^ (-xi)-1)/(xi*(q-u))*
+ (1+((n/nu*(1-p))^(-xi)-1)/(q-u)*x)^(-1/xi-1)}

> loglik=function(param){
+ xi=param[2];q=param[1]
+ lg=function(i) log(gq(Xs[i],xi,q))
+ return(-sum(Vectorize(lg)(1:nu)))
+ }
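As a quick check (a sketch, with arbitrary starting values), one can also maximize this log-likelihood directly over the two parameters, guarding against non-finite values,

> negloglik=function(param){v=loglik(param); if(is.finite(v)) v else 1e10}
> opt=optim(par=c(100,.5),fn=negloglik)
> opt$par   # estimates of (Q(p), xi)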

We can try to plot this likelihood using

> h=201
> Q=seq(50,300,length=h)
> XI=seq(.1,1,length=h)
> XIQ=as.matrix(expand.grid(Q,XI))
> M=mapply(loglik,XIQ)

Unfortunately, it was not working, so I used the old style

> M=matrix(NA,h,h)
> for(i in 1:h){for(j in 1:h){M[i,j]=loglik(c(Q[i],XI[j]))}}

The level curves of the log-likelihood are here

> hc=heat.colors(100)
> image(Q,XI,-M,col=hc)
> contour(Q,XI,-M,add=TRUE)

Again, since our interest is in the quantile, we can draw the profile likelihood and get the maximum of that function

> PL=function(Q){
+ profilelikelihood=function(xi){
+ loglik(c(Q,xi))}
+ return(optim(par=.8,fn=profilelikelihood)$value)}
> (OPT=optimize(f=PL,interval=c(100,500)))

$minimum
[1] 111.1055

$objective
[1] 454.6481

and the graph is

> XQ=seq(50,300,length=101)
> L=Vectorize(PL)(XQ)
> plot(XQ,-L,type="l")
> up=OPT$objective
> abline(h=-up)
> abline(h=-up-qchisq(p=.95,df=1),col="red")
> I=which(-L>=-up-qchisq(p=.95,df=1))
> lines(XQ[I],rep(-up-qchisq(p=.95,df=1),length(I)),
+ lwd=5,col="red")
> abline(v=range(XQ[I]),lty=2,col="red")

which can be seen as an alternative to

> gpd.q(tailplot(gpd(X,u)),.999)
 Lower CI  Estimate  Upper CI 
 64.66184  94.28956 188.91752 

If we want to focus on another downside risk measure, that shouldn't be too difficult. For instance, the expected shortfall https://latex.codecogs.com/gif.latex?ES(p) can be estimated as

https://latex.codecogs.com/gif.latex?ES(p)=Q(p)+e(Q(p))

where https://latex.codecogs.com/gif.latex?e(\cdot) denotes the mean excess function, which can be written, with a Generalized Pareto Distribution,

https://latex.codecogs.com/gif.latex?e(x)=\frac{\sigma+\xi(x-u)}{1-\xi}

Thus, a natural estimator for the expected shortfall is

https://latex.codecogs.com/gif.latex?\widehat{ES}(p)=\frac{\widehat{Q}(p)+\widehat{\sigma}-\widehat{\xi}%20u}{1-\widehat{\xi}}

Once again, it is possible to reparametrize the density of the Generalized Pareto distribution, using https://latex.codecogs.com/gif.latex?ES(p) instead of https://latex.codecogs.com/gif.latex?\sigma. Here, we get

https://latex.codecogs.com/gif.latex?g_{\xi,ES(p)}(x)=\frac{\displaystyle{\xi+\left(\frac{n}{N_u}(1-p)\right)^{-\xi}-1}}{\xi(1-\xi)[ES(p)-u]}\left(1+\frac{\displaystyle{\left(\frac{n}{N_u}(1-p)\right)^{-\xi}-1}}{(1-\xi)[ES(p)-u]}\cdot%20x\right)^{-\frac{1}{\xi}-1}

The code to get the associated log-likelihood is here

> ge=function(x,xi,es){
+ (xi+(n/nu*(1-p))^(-xi)-1)/(xi*(1-xi)*(es-u))*(1+(xi+(n/nu*(1-p))^(-xi)
+ -1)/((es-u)*(1-xi))*x)^(-1/xi-1)
+ }
> loglik=function(param){
+ xi=param[2];es=param[1]
+ lg=function(i) log(ge(Xs[i],xi,es))
+ return(-sum(Vectorize(lg)(1:nu)))
+ }

and again, we can plot it
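For instance, a minimal sketch reusing the grid settings from above (with an illustrative range for the expected shortfall),

> ES=seq(100,500,length=h)
> XI2=seq(.1,.9,length=h)
> M=matrix(NA,h,h)
> for(i in 1:h){for(j in 1:h){M[i,j]=loglik(c(ES[i],XI2[j]))}}
> image(ES,XI2,-M,col=hc)
> contour(ES,XI2,-M,add=TRUE)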

and the profile (log) likelihood is here (for the 99.9% expected shortfall)

> PL=function(ES){
+ profilelikelihood=function(xi){
+ loglik(c(ES,xi))}
+ return(optim(par=.8,fn=profilelikelihood)$value)}
> (OPT=optimize(f=PL,interval=c(100,500)))
$minimum
[1] 143.66

$objective
[1] 454.6481

which could be compared with

> gpd.sfall(tailplot(gpd(X,u)),.999)
 Lower CI  Estimate  Upper CI 
 96.64625 191.36972 394.87555

Financial model complexity

Today, Olivier Scaillet gave a great talk on fast recursive projections. The idea was great, and the talk was amazing. A great plenary-session talk, actually. And after lunch, while we were having coffee, we started to discuss financial model complexity, mentioning that sometimes traders and quants are lost, and that it might be good to spend more time on basics than on very advanced stuff (which was, in fact, the general idea of my own talk, yesterday). And Olivier recalled that story about how a rookie Excel error led JPMorgan to misreport its VaR for years, published on the blog http://zerohedge.com/…. The short story is that JPMorgan's reported VaR rose by some 93% year over year, from 2011 to 2012 (while it was decreasing for all competitors). The reason is explained on the very last page of the JPM task force report

… a decision was made to stop using the Basel II.5 model and not to rely on it for purposes of reporting CIO VaR in the Firm’s first-quarter Form 10-Q. Following that decision, further errors were discovered in the Basel II.5 model, including, most significantly, an operational error in the calculation of the relative changes in hazard rates and correlation estimates. Specifically, after subtracting the old rate from the new rate, the spreadsheet divided by their sum instead of their average, as the modeler had intended. This error likely had the effect of muting volatility by a  factor of two and of lowering the VaR…. It also remains unclear when this error was introduced in the calculation.

(Tyler Durden did highlight some parts, and I keep it like that). Let's admit it: we did make fun of practitioners (actually, there was also a quant sitting with the two of us). But on the other hand, it is a bit scary to see that we spend so much time implementing complex algorithms and speeding up computations, and finally we end up with a mistake in a spreadsheet….
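To see how dividing by the sum rather than by the average mutes the computed relative change by a factor of two, here is a toy illustration (the hazard rates are made-up numbers),

> old=.010; new=.012   # hypothetical old and new rates
> (new-old)/((new+old)/2)   # relative change with respect to the average, about 0.18
> (new-old)/(new+old)       # dividing by the sum, as in the spreadsheet error: about 0.09, half as large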

Bounding sums of random variables, part 2

It is possible to go further, much further actually, on bounding sums of random variables (mentioned in the previous post). For instance, while everything in that previous post was defined for distributions on the positive half-line, it is possible to extend the bounds to distributions on the whole real line, especially if we deal with quantiles. Everything we've seen remains valid. Consider for instance two standard Gaussian distributions. Using the previous code, it is possible to compute bounds for the quantiles of the sum of two Gaussian variates. And one has to remember that those bounds are sharp.

> Finv=function(u) qnorm(u,0,1)
> Ginv=function(u) qnorm(u,0,1)
> n=1000
> Qinf=Qsup=rep(NA,n-1)
> for(i in 1:(n-1)){
+ J=0:i
+ Qinf[i]=max(Finv(J/n)+Ginv((i-J)/n))
+ J=(i-1):(n-1)
+ Qsup[i]=min(Finv((J+1)/n)+Ginv((i-1-J+n)/n))
+ }

Actually, it is possible to compare here with two simple cases: the independent case, where the sum has a https://latex.codecogs.com/gif.latex?\mathcal{N}(0,2) distribution, and the comonotonic case, where the sum has a https://latex.codecogs.com/gif.latex?\mathcal{N}(0,4) distribution.

> x=(1:(n-1))/n
> lines(x,qnorm(x,sd=sqrt(2)),col="blue",lty=2)
> lines(x,qnorm(x,sd=2),col="blue",lwd=2)

On the graph below, the comonotonic case (usually considered as the worst-case scenario) is the solid blue line (with an animation here to illustrate the convergence of the numerical algorithm)

Below that (strong) blue line, risks are sub-additive for the Value-at-Risk, i.e.

https://latex.codecogs.com/gif.latex?\text{VaR}[X+Y]\leq\text{VaR}[X]+\text{VaR}[Y]

but above, risks are super-additive for the Value-at-Risk, i.e.

https://latex.codecogs.com/gif.latex?\text{VaR}[X+Y]\geq\text{VaR}[X]+\text{VaR}[Y]

(since for comonotonic variates, the quantile of the sum is the sum of the quantiles). It is possible to visualize those two cases above: in green, the area where risks are super-additive, while the yellow area is where risks are sub-additive.

Recall that for a Gaussian random vector with correlation https://latex.codecogs.com/gif.latex?r, the quantile of the sum is the quantile of a centered Gaussian random variable with variance https://latex.codecogs.com/gif.latex?2(1+r). Thus, on the graph below, we can visualize the cases that can be obtained with a Gaussian copula. Here the yellow area can be obtained with Gaussian copulas, the upper and the lower bounds being respectively the comonotonic and the countermonotonic cases.

https://freakonometrics.hypotheses.org/files/2019/05/sum-norm-G-bounds2.gif
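A minimal sketch: the Gaussian-copula case, for some intermediate correlation (r=0.5 is an arbitrary choice here), can be added on top of the previous graph,

> r=.5
> x=(1:(n-1))/n
> lines(x,qnorm(x,sd=sqrt(2*(1+r))),col="green")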

But the green area can also be obtained when we sum two Gaussian variables! We just have to go outside the Gaussian world, and consider another copula.

Another point is that, in the previous post, https://latex.codecogs.com/gif.latex?C^- was the lower Fréchet-Hoeffding bound on the set of copulas. But all the previous results remain valid if https://latex.codecogs.com/gif.latex?C^- is a lower bound on the set of copulas of interest. Especially

https://latex.codecogs.com/gif.latex?\tau_{C^-,L}(F,G)\leq%20\sigma_{C,L}(F,G)\leq\rho_{C^-,L}(F,G)

for all https://latex.codecogs.com/gif.latex?C such that https://latex.codecogs.com/gif.latex?C\geq%20C^-. For instance, if we assume that the copula should have positive dependence, i.e. https://latex.codecogs.com/gif.latex?C\geq%20C^\perp, then

https://latex.codecogs.com/gif.latex?\tau_{C^\perp,L}(F,G)\leq%20\sigma_{C,L}(F,G)\leq\rho_{C^\perp,L}(F,G)

This means that we should get sharper bounds. Numerically, it is possible to compute those sharper bounds for quantiles. The upper bound becomes

https://latex.codecogs.com/gif.latex?\inf_{u\in[x,1]}\left\{F^{-1}(u)+G^{-1}\left(\frac{x}{u}\right)\right\}

while the lower bound is

https://latex.codecogs.com/gif.latex?\sup_{u\in[0,x]}\left\{F^{-1}(u)+G^{-1}\left(\frac{x-u}{1-u}\right)\right\}

Again, one can easily compute those quantities on a grid of the unit interval,

> Qinfind=Qsupind=rep(NA,n-1)
> for(i in 1:(n-1)){
+  J=1:(i)
+  Qinfind[i]=max(Finv(J/n)+Ginv((i-J)/n/(1-J/n)))
+  J=(i):(n-1)
+  Qsupind[i]=min(Finv(J/n)+Ginv(i/J))
+ }
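To visualize both pairs of bounds on the same picture, a minimal plotting sketch,

> x=(1:(n-1))/n
> plot(x,Qinf,type="l",col="red",ylim=range(c(Qinf,Qsup),finite=TRUE),
+ xlab="probability level",ylab="quantile of the sum")
> lines(x,Qsup,col="red")
> lines(x,Qinfind,col="blue")
> lines(x,Qsupind,col="blue")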

We get the graph below (the blue area is here to illustrate how much sharper those bounds get under the assumption of positive dependence, that area being attained only with copulas exhibiting some negative dependence)

For high quantiles, the upper bound is rather close to the one we had before, since the worst case is probably obtained under positive dependence. But the assumption strongly impacts the lower bound. For instance, it now becomes impossible to have a negative quantile when the probability exceeds 75%, if we do have positive dependence…

> Qinfind[x==.75]
[1] 0

Bounding sums of random variables, part 1

For the last course MAT8886 of this (long) winter session, on copulas (and extremes), we will discuss risk aggregation. The course will mainly be about the problem of bounding the distribution (or some risk measure, say the Value-at-Risk) of the sum of two random variables with given marginal distributions. For instance, if we have two Gaussian risks, what could be the worst-case scenario for the 99% quantile of the sum? Note that I mention implications in terms of risk management, but of course, those questions are extremely important in terms of statistical inference, see e.g. Fan & Park (2006).

This problem is sometimes related to a question asked by Kolmogorov almost one hundred years ago, as mentioned in Makarov (1981). One year later, Rüschendorf (1982) also suggested a proof of the bound calculation. Here, we focus on dimension 2. As usual, it is the simple case. But as mentioned recently in Kreinovich & Ferson (2005), in dimension 3 (or higher), “computing the best-possible bounds for arbitrary n is an NP-hard (computationally intractable) problem”. So let us focus on the case where we sum (only) two random variables (for those interested in higher dimensions, Puccetti & Rüschendorf (2012) provided interesting results for a dual version of those optimal bounds).

Let https://latex.codecogs.com/gif.latex?\Delta denote the set of univariate continuous distribution functions, left-continuous, on https://latex.codecogs.com/gif.latex?\mathbb{R}, and https://latex.codecogs.com/gif.latex?\Delta^+ the set of distributions on https://latex.codecogs.com/gif.latex?\mathbb{R}^+. Thus, https://latex.codecogs.com/gif.latex?F\in\Delta^+ if https://latex.codecogs.com/gif.latex?F\in\Delta and https://latex.codecogs.com/gif.latex?F(0)=0. Consider now two distributions https://latex.codecogs.com/gif.latex?F,G\in\Delta^+. In a very general setting, it is possible to consider operators on https://latex.codecogs.com/gif.latex?\Delta^+\times%20\Delta^+. Thus, let https://latex.codecogs.com/gif.latex?T:[0,1]\times[0,1]\rightarrow[0,1] denote an operator, increasing in each component, such that https://latex.codecogs.com/gif.latex?T(1,1)=1. And consider some function https://latex.codecogs.com/gif.latex?L:\mathbb{R}^+\times\mathbb{R}^+\rightarrow\mathbb{R}^+ assumed to be also increasing in each component (and continuous). For such functions https://latex.codecogs.com/gif.latex?T and https://latex.codecogs.com/gif.latex?L, define the following (general) operator, https://latex.codecogs.com/gif.latex?\tau_{T,L}(F,G), as

https://latex.codecogs.com/gif.latex?\tau_{T,L}(F,G)(x)=\sup_{L(u,v)=x}\{T(F(u),G(v))\}

One interesting case can be obtained when https://latex.codecogs.com/gif.latex?T is a copula, https://latex.codecogs.com/gif.latex?C. In that case,

https://latex.codecogs.com/gif.latex?\tau_{C,L}(F,G):\Delta^+\times\Delta^+\rightarrow\Delta^+

and further, it is possible to write

https://latex.codecogs.com/gif.latex?\tau_{C,L}(F,G)(x)=\sup_{(u,v)\in%20L^{-1}(x)}\{C(F(u),G(v))\}

It is also possible to consider other (general) operators, e.g. based on the sum

https://latex.codecogs.com/gif.latex?\sigma_{C,L}(F,G)(x)=\int_{(u,v)\in%20L^{-1}(x)}%20dC(F(u),G(v))

or on the minimum,

https://latex.codecogs.com/gif.latex?\rho_{C,L}(F,G)(x)=\inf_{(u,v)\in%20L^{-1}(x)}\{C^\star(F(u),G(v))\}

where https://latex.codecogs.com/gif.latex?C^\star is the dual of the copula https://latex.codecogs.com/gif.latex?C, i.e. https://latex.codecogs.com/gif.latex?C^\star(u,v)=u+v-C(u,v). Note that those operators can be used to define distribution functions, i.e.

https://latex.codecogs.com/gif.latex?\sigma_{C,L}(F,G):\Delta^+\times\Delta^+\rightarrow\Delta^+

and similarly

https://latex.codecogs.com/gif.latex?\rho_{C,L}(F,G):\Delta^+\times\Delta^+\rightarrow\Delta^+

All that seems too theoretical? An application can be the case of the sum, i.e. https://latex.codecogs.com/gif.latex?L(x,y)=x+y; in that case https://latex.codecogs.com/gif.latex?\sigma_{C,+}(F,G) is the distribution of the sum of two random variables with marginal distributions https://latex.codecogs.com/gif.latex?F and https://latex.codecogs.com/gif.latex?G, and copula https://latex.codecogs.com/gif.latex?C. Thus, https://latex.codecogs.com/gif.latex?\sigma_{C^\perp,+}(F,G) is simply the convolution of two distributions,

https://latex.codecogs.com/gif.latex?\sigma_{C^\perp,+}(F,G)(x)=\int_{u+v=x}%20dC^\perp(F(u),G(v))

The important result (that can be found in Chapter 7, in Schweizer and Sklar (1983)) is that given an operator https://latex.codecogs.com/gif.latex?L, then, for any copula https://latex.codecogs.com/gif.latex?C, one can find a lower bound for https://latex.codecogs.com/gif.latex?\sigma_{C,L}(F,G)

https://latex.codecogs.com/gif.latex?\tau_{C^-,L}(F,G)\leq%20\tau_{C,L}(F,G)\leq\sigma_{C,L}(F,G)

as well as an upper bound

https://latex.codecogs.com/gif.latex?\sigma_{C,L}(F,G)\leq%20\rho_{C,L}(F,G)\leq\rho_{C^-,L}(F,G)

Those inequalities come from the fact that, for every copula https://latex.codecogs.com/gif.latex?C, https://latex.codecogs.com/gif.latex?C\geq%20C^-, where https://latex.codecogs.com/gif.latex?C^- is itself a copula (in dimension 2). Since this function is not a copula in higher dimensions, one can easily imagine that getting those bounds in higher dimensions will be much more complicated…

In the case of the sum of two random variables, with marginal distributions https://latex.codecogs.com/gif.latex?F and https://latex.codecogs.com/gif.latex?G, bounds for the distribution of the sum https://latex.codecogs.com/gif.latex?H(x)=\mathbb{P}(X+Y\leq%20x), where https://latex.codecogs.com/gif.latex?X\sim%20F and https://latex.codecogs.com/gif.latex?Y\sim%20G, can be written

https://latex.codecogs.com/gif.latex?H^-(x)=\tau_{C^-%20,+}(F,G)(x)=\sup_{u+v=x}\{%20\max\{F(u)+G(v)-1,0\}%20\}

for the lower bound, and

https://latex.codecogs.com/gif.latex?H^+(x)=\rho_{C^-%20,+}(F,G)(x)=\inf_{u+v=x}\{%20\min\{F(u)+G(v),1\}%20\}

for the upper bound. And those bounds are sharp, in the sense that, for all https://latex.codecogs.com/gif.latex?t\in(0,1), there is a copula https://latex.codecogs.com/gif.latex?C_t such that

https://latex.codecogs.com/gif.latex?\tau_{C_t,+}(F,G)(x)=\tau_{C^-%20,+}(F,G)(x)=t

and there is (another) copula https://latex.codecogs.com/gif.latex?C_t such that

https://latex.codecogs.com/gif.latex?\sigma_{C_t,+}(F,G)(x)=\rho_{C^-%20,+}(F,G)(x)=t

Thus, using those results, it is possible to bound cumulative distribution functions. But actually, all that can also be done on quantiles (see Frank, Nelsen & Schweizer (1987)). For all https://latex.codecogs.com/gif.latex?F\in\Delta^+, let https://latex.codecogs.com/gif.latex?F^{-1} denote its generalized inverse, left continuous, and let https://latex.codecogs.com/gif.latex?\nabla^+ denote the set of those quantile functions. Define then the dual versions of our operators,

https://latex.codecogs.com/gif.latex?\tau^{-1}_{T,L}(F^{-1},G^{-1})(x)=\inf_{(u,v)\in%20T^{-1}(x)}\{L(F^{-1}(u),G^{-1}(v))\}

and

https://latex.codecogs.com/gif.latex?\rho^{-1}_{T,L}(F^{-1},G^{-1})(x)=\sup_{(u,v)\in%20{T^\star}^{-1}(x)}\{L(F^{-1}(u),G^{-1}(v))\}

Those definitions are really dual versions of the previous ones, in the sense that https://latex.codecogs.com/gif.latex?\tau^{-1}_{T,L}(F^{-1},G^{-1})=[\tau_{T,L}(F,G)]^{-1} and https://latex.codecogs.com/gif.latex?\rho^{-1}_{T,L}(F^{-1},G^{-1})=[\rho_{T,L}(F,G)]^{-1}.

Note that if we focus on sums of bivariate distributions, the upper bound for the quantile of the sum is

https://latex.codecogs.com/gif.latex?\tau^{-1}_{C^{-},+}(F^{-1},G^{-1})(x)=\inf_{\max\{u+v-1,0\}=x}\{F^{-1}(u)+G^{-1}(v)\}

while the lower bound is

https://latex.codecogs.com/gif.latex?\rho^{-1}_{C^{-},+}(F^{-1},G^{-1})(x)=\sup_{\min\{u+v,1\}=x}\{F^{-1}(u)+G^{-1}(v)\}

A great thing is that it should not be too difficult to compute those quantities numerically. It is perhaps a bit more complicated for cumulative distribution functions, since they are not defined on a bounded support; but still, if the goal is simply to plot those bounds on a finite interval, say [0,10] here, it works. The code is the following, for the sum of two LN(0,1) lognormal distributions.

> F=function(x) plnorm(x,0,1)
> G=function(x) plnorm(x,0,1)
> n=100
> X=seq(0,10,by=.05)
> Hinf=Hsup=rep(NA,length(X))
> for(i in 1:length(X)){
+ x=X[i]
+ U=seq(0,x,by=1/n); V=x-U
+ Hinf[i]=max(pmax(F(U)+G(V)-1,0))
+ Hsup[i]=min(pmin(F(U)+G(V),1))}

If we plot those bounds, we obtain

> plot(X,Hinf,ylim=c(0,1),type="s",col="red")
> lines(X,Hsup,type="s",col="red")

But somehow, it is even simpler to work with quantiles, since they are defined on a bounded support. The quantile functions are here

> Finv=function(u) qlnorm(u,0,1)
> Ginv=function(u) qlnorm(u,0,1)

The idea will be to consider a discretized version of the unit interval as discussed in Williamson (1989), in a much more general setting. Again the idea is to compute, for instance

https://latex.codecogs.com/gif.latex?\sup_{u\in[0,x]}\{F^{-1}(u)+G^{-1}(x-u)\}

The idea is to consider https://latex.codecogs.com/gif.latex?x=i/n and https://latex.codecogs.com/gif.latex?u=j/n, and the bound for the quantile function at point https://latex.codecogs.com/gif.latex?i/n is then

https://latex.codecogs.com/gif.latex?\sup_{j\in\{0,1,\cdots,i\}}\left\{F^{-1}\left(\frac{j}{n}\right)+G^{-1}\left(\frac{i-j}{n}\right)\right\}

The code to compute those bounds, for a given https://latex.codecogs.com/gif.latex?n is here

> n=1000
> Qinf=Qsup=rep(NA,n-1)
> for(i in 1:(n-1)){
+ J=0:i
+ Qinf[i]=max(Finv(J/n)+Ginv((i-J)/n))
+ J=(i-1):(n-1)
+ Qsup[i]=min(Finv((J+1)/n)+Ginv((i-1-J+n)/n))
+ }

Here we have (several values of https://latex.codecogs.com/gif.latex?n were considered, so that we can visualize the convergence of that numerical algorithm),

Here, we have a simple code to visualize bounds for quantiles for the sum of two risks. But it is possible to go further…

Exchangeability, credit risk and risk measures

Exchangeability is an extremely useful concept, since (most of the time) analytical expressions can be derived. But it can also be used to observe some unexpected behaviors, which we will discuss later on in a more general setting. For instance, in an old post, I discussed connections between correlation and risk measures (using simulations to illustrate, but in the context of exchangeable risks, calculations can be performed more accurately). Consider again the standard credit risk problem, where the quantity of interest is the number of defaults in a portfolio. Consider a homogeneous portfolio of exchangeable risks. The quantity of interest is here

http://freakonometrics.hypotheses.org/files/2016/11/credit-01.gif

or perhaps the quantile function of the sum (since the Value-at-Risk is the standard risk measure). We have seen yesterday that – given the latent factor – http://freakonometrics.hypotheses.org/files/2016/11/exch67.gif (either the company defaults, or not), so that

http://freakonometrics.hypotheses.org/files/2016/11/exch66.gif

i.e. we can derive the (unconditional) distribution of the sum

http://freakonometrics.hypotheses.org/files/2016/11/exch60.gif

so that the probability function of the sum is, assuming that http://freakonometrics.hypotheses.org/files/2016/11/exch76.gif

http://freakonometrics.hypotheses.org/files/2016/11/exch68.gif

Thus, the following code can be used to calculate the quantile function

> proba=function(s,a,m,n){
+ b=a/m-a
+ choose(n,s)*integrate(function(t){t^s*(1-t)^(n-s)*
+ dbeta(t,a,b)},lower=0,upper=1,subdivisions=1000,
+ stop.on.error =  FALSE)$value
+ }
> QUANTILE=function(p=.99,a=2,m=.1,n=500){
+ V=rep(NA,n+1)
+ for(i in 0:n){
+ V[i+1]=proba(i,a,m,n)}
+ V=V/sum(V)
+ return(min(which(cumsum(V)>p))) }

Now observe that, since the variates are exchangeable, it is possible to calculate explicitly the correlation between default indicators. Here

http://freakonometrics.hypotheses.org/files/2016/11/exch70.gif

i.e.

http://freakonometrics.hypotheses.org/files/2016/11/exch71.gif

Thus, the correlation between two default indicators is then

http://freakonometrics.hypotheses.org/files/2016/11/exch73.gif

http://freakonometrics.hypotheses.org/files/2016/11/exch75.gif

Under the assumption that the latent factor is beta distributed

http://freakonometrics.hypotheses.org/files/2016/11/exch78.gif

we get

http://freakonometrics.hypotheses.org/files/2016/11/exch80.gif
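With the parametrization used in the code above (where b=a/m-a, so that the latent factor has mean m), this correlation is Var(Theta)/[m(1-m)], which reduces to 1/(a+b+1). A small sketch to compute it,

> corr_default=function(a,m){
+ b=a/m-a
+ (a*b/((a+b)^2*(a+b+1)))/(m*(1-m))   # Var(Theta)/[m(1-m)], i.e. 1/(a+b+1)
+ }
> corr_default(a=2,m=.1)   # about 0.048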

Thus, as a function of the parameter of the beta distribution (we consider beta distributions with the same mean, i.e. the same marginal distributions, so we have only one parameter left, which drives the correlation between default indicators), it is possible to plot the quantile function,

> PICTURE=function(P){
+ A=seq(.01,2,by=.01)
+ VQ=matrix(NA,length(A),5)
+ for(i in 1:length(A)){
+ VQ[i,1]=QUANTILE(a=A[i],p=.9,m=P)
+ VQ[i,2]=QUANTILE(a=A[i],p=.95,m=P)
+ VQ[i,3]=QUANTILE(a=A[i],p=.975,m=P)
+ VQ[i,4]=QUANTILE(a=A[i],p=.99,m=P)
+ VQ[i,5]=QUANTILE(a=A[i],p=.995,m=P)
+ }
+ plot(A,VQ[,5],type="s",col="red",ylim=
+ c(0,max(VQ)),xlab="",ylab="")
+ lines(A,VQ[,4],type="s",col="blue")
+ lines(A,VQ[,3],type="s",col="black")
+ lines(A,VQ[,2],type="s",col="blue",lty=2)
+ lines(A,VQ[,1],type="s",col="red",lty=2)
+ lines(A,rep(500*P,length(A)),col="grey")
+ legend("topright",c("quantile 99.5%","quantile 99%",
+ "quantile 97.5%","quantile 95%","quantile 90%","mean"),
+ col=c("red","blue","black","blue","red","grey"),
+ lty=c(1,1,1,2,2,1),bty="n")
+}

e.g. with a (marginal) default probability of 15%,

> PICTURE(.15)

On this graph, we observe that the stronger the correlation (the more we move to the left), the higher the quantile… Note that the same graph can be plotted with the correlation on the X-axis (see the sketch below),
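A minimal sketch of that variant, reusing the QUANTILE function above and the implied correlation 1/(a+b+1)=m/(a+m) computed earlier (only the 99.5% quantile is drawn, to keep it short),

> A=seq(.01,2,by=.01)
> m=.15
> RHO=m/(A+m)   # implied default correlation
> Q995=Vectorize(function(a) QUANTILE(a=a,p=.995,m=m,n=500))(A)
> o=order(RHO)
> plot(RHO[o],Q995[o],type="s",col="red",xlab="default correlation",ylab="99.5% quantile")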


This is quite intuitive, somehow. But if the marginal probability of default decreases, increasing the correlation might decrease the risk (i.e. the quantile),

> PICTURE(.05)

(with the modified code to visualize the quantile as a function of the underlying default correlation) or even worse,

> PICTURE(.0075)

And it becomes all the more counterintuitive as the default probability decreases! So in the case of a portfolio of not-very-risky bond issuers (with high ratings), assuming a very strong correlation will lower the risk-based capital!

MAT8886 from tail estimation to risk measure(s) estimation

This week, we conclude the part on extremes with an application of extreme value theory to risk measures. We have seen last week that, if we assume that above a threshold http://freakonometrics.blog.free.fr/public/perso5/qt01.gif, a Generalized Pareto Distribution will fit nicely, then we can use it to derive an estimator of the quantile function (for probability levels such that the quantile is larger than the threshold)

http://freakonometrics.blog.free.fr/public/perso5/qt03.gif

If the threshold is http://freakonometrics.blog.free.fr/public/perso5/qt02.gif, i.e. we keep the http://freakonometrics.blog.free.fr/public/perso5/qt04.gif largest observations to fit a GPD, then this estimator can be written

http://freakonometrics.blog.free.fr/public/perso5/qt06.gif

The code we wrote last week was the following (here based on log-returns of the SP500 index, and we focus on large losses, i.e. large values of the opposite of log returns, plotted below)

> library(tseries)
> X=get.hist.quote("^GSPC")
> T=time(X)
> D=as.POSIXlt(T)
> Y=X$Close
> R=diff(log(Y))
> D=D[-1]
> X=-R
> plot(D,X)
> library(evir)
> GPD=gpd(X,quantile(X,.975))
> xi=GPD$par.ests[1]
> beta=GPD$par.ests[2]
> u=GPD$threshold
> QpGPD=function(p){
+ u+beta/xi*((100/2.5*(1-p))^(-xi)-1)
+ }
> QpGPD(1-1/250)
97.5%
0.04557386
> QpGPD(1-1/2500)
97.5%
0.08925095

This is consistent with the following output, for the return period of a yearly event (one observation out of 250 trading days)

> gpd.q(tailplot(gpd(X,quantile(X,.975))), 1-1/250, ci.type =
+ "likelihood", ci.p = 0.95,like.num = 50)
Lower CI   Estimate   Upper CI
0.04172534 0.04557655 0.05086785

or the decennial one

> gpd.q(tailplot(gpd(X,quantile(X,.975))), 1-1/2500, ci.type =
+ "likelihood", ci.p = 0.95,like.num = 50)
Lower CI   Estimate   Upper CI
0.07165395 0.08925558 0.13636620

Note that it is also possible to derive an estimator for another population risk measure (the quantile is simply the so-called Value-at-Risk), the expected shortfall (or Tail Value-at-Risk), i.e.

http://freakonometrics.blog.free.fr/public/perso5/qt10.gif

The idea is to write that expression

http://freakonometrics.blog.free.fr/public/perso5/qt11.gif

so that we recognize the mean excess function (discussed earlier). Thus, assuming again that above http://freakonometrics.blog.free.fr/public/perso5/qt01.gif (and therefore above that high quantile) a GPD will fit, we can write

http://freakonometrics.blog.free.fr/public/perso5/qt12.gif

or equivalently

http://freakonometrics.blog.free.fr/public/perso5/qt13.gif

If we substitute estimators for the unknown quantities in that expression, we get

http://freakonometrics.blog.free.fr/public/perso5/qt09.gif

The code is here

> EpGPD=function(p){
+ u-beta/xi+beta/xi/(1-xi)*(100/2.5*(1-p))^(-xi)
+ }
> EpGPD(1-1/250)
97.5%
0.06426508
> EpGPD(1-1/2500)
97.5%
0.1215077

An alternative is to use Hill’s approach (used to derive Hill’s estimator). Assume here that http://freakonometrics.blog.free.fr/public/perso5/qt20.gif, where http://freakonometrics.blog.free.fr/public/perso5/qt21.gif is a slowly varying function. Then, for all http://freakonometrics.blog.free.fr/public/perso5/qt23.gif,

http://freakonometrics.blog.free.fr/public/perso5/qt24.gif

Since http://freakonometrics.blog.free.fr/public/perso5/qt21.gif is a slowly varying function, it seems natural to assume that this ratio is almost 1 (which is true asymptotically). Thus

http://freakonometrics.blog.free.fr/public/perso5/qt25.gif

i.e. if we invert that function, we derive an estimator for the quantile function

http://freakonometrics.blog.free.fr/public/perso5/qt26.gif

which can also be written

http://freakonometrics.blog.free.fr/public/perso5/qt07.gif

(which is close to the relation we derived using a GPD model). Here the code is

> k=trunc(length(X)*.025)
> Xs=rev(sort(as.numeric(X)))
> xiHill=mean(log(Xs[1:k]))-log(Xs[k+1])
> u=Xs[k+1]
> QpHill=function(p){
+ u+u*((100/2.5*(1-p))^(-xiHill)-1)
+ }

with the following Hill plot

For yearly and decennial events, we have here

> QpHill(1-1/250)
[1] 0.04580548
> QpHill(1-1/2500)
[1] 0.1010204

Those quantities seem consistent, since they are quite close. But they differ from the empirical quantiles,

> quantile(X,1-1/250)
99.6%
0.04743929
> quantile(X,1-1/2500)
99.96%
0.09054039

Note that it is also possible to use dedicated functions, such as riskmeasures from the evir package, to derive estimators of those quantities,

> riskmeasures(gpd(X,quantile(X,.975)),1-1/250)
p   quantile      sfall
[1,] 0.996 0.04557655 0.06426859
> riskmeasures(gpd(X,quantile(X,.975)),1-1/2500)
p   quantile     sfall
[1,] 0.9996 0.08925558 0.1215137

(in this application, we have assumed that log-returns were independent and identically distributed… which might be a rather strong assumption).

Millennium bridge, endogeneity and risk management

Last week, within less than 48 hours, two friends mentioned the Millennium Bridge as an illustration of a risk management concept. There are several documents using that example, here (for the initial idea of using the Millennium Bridge to illustrate issues in risk management), here or there, e.g.

When we mention resonance effects on bridges, we usually think of the Tacoma Narrows Bridge (where strong winds set the bridge oscillating) or the Basse-Chaîne Bridge (in France, which collapsed on April 16, 1850, when 478 French soldiers marched across it in lockstep). In the first case, there is nothing we can do about it, but for the second one, this is why soldiers are required to break step on bridges.

But for the Millennium bridge, a ‘positive feedback‘ phenomenon (known as Synchronous Lateral Excitation in physics) has been observed: the natural sway motion of people walking caused small sideways oscillations in the bridge, which in turn caused people on the bridge to sway in step, increasing the amplitude of the oscillations and continually reinforcing the effect. That has been described in a nice paper in 2005 (here). In the initial paper by Jon Danielsson and Hyun Song Shin, they note that “what is the probability that a thousand people walking at random will end up walking exactly in step? It is tempting to say “close to zero”, or “negligible”. After all, if each person’s step is an independent event, then the probability of everyone walking in step would be the product of many small numbers – giving us a probability close to zero. Presumably, this is the reason why Arup – the bridge engineers – did not take this into account. However, this is exactly where endogenous risk comes in. What we must take into account is the way that people react to their environment. Pedestrians on the bridge react to how the bridge is moving. When the bridge moves under your feet, it is a natural reaction for people to adjust their stance to regain balance. But here is the catch. When the bridge moves, everyone adjusts his or her stance at the same time. This synchronized movement pushes the bridge that the people are standing on, and makes the bridge move even more. This, in turn, makes the people adjust their stance more drastically, and so on. In other words, the wobble of the bridge feeds on itself. When the bridge wobbles, everyone adjusts their stance, which sets off an even worse wobble, which makes the people adjust even more, and so on. So, the wobble will continue and get stronger even though the initial shock (say, a gust of wind) has long passed. It is an example of a force that is generated and amplified within the system. It is an endogenous response. It is very different from a shock that comes from a storm or an earthquake which are exogenous to the system.”

And to go further, they point out that this event is rather similar to what is observed in financial markets (here), by quoting The Economist from October 12th 2000: “So-called value-at-risk models (VaR) blend science and art. They estimate how much a portfolio could lose in a single bad day. If that amount gets too large, the VAR model signals that the bank should sell. The trouble is that lots of banks have similar investments and similar VAR models. In periods when markets everywhere decline, the models can tell everybody to sell the same things at the same time, making market conditions much worse. In effect, they can, and often do, create a vicious feedback loop.”

from two to three…

A short post to give more details about the final remark in the course of Financial Econometrics, and more precisely the formula that can be found in the book of Philip Jorion,

Note that this formula can be found (perhaps written with slight changes) in several papers, e.g. the following sentence (on the http://www.bis.org/ website),

or the following formula, on documents from the Bank of England website,

I recently published (in French, here) a paper on the Value-at-Risk, including the following graph,

Usually, three times the average over 60 trading days is the larger component, but during the financial crisis, it turned out that the daily component was almost three times higher than the average value over the past months (this fact was mentioned by Paul Embrechts in some conference in Paris on risk measures).
The interpretation of the multiplicative k coefficient (which ranges from 2 to 3 in some publications, or exceeds 3 in others) has been proposed in a paper by Gerhard Stahl, entitled Three cheers. The idea is to use the Bienaymé-Tchebychev inequality: for random variables with finite variance,

https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-01.png

Recall that this inequality is simply a corollary of Markov's inequality

https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-02.png

or for any increasing function https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-99.png

https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-03.png

(taking function https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-04.png, applied to https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-05.png). This upper bound can be far away from the true probability, see e.g. the gaussian case below, i.e. if  https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-06.png,

https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-07.png

 

> z = seq(0,3,by=.01)
> P = 2*(1-pnorm(z))   # true probability P(|X|>z) for a N(0,1) risk
> U = 1/z^2            # Chebyshev upper bound
> plot(z,P,type="l",lwd=2,col="red",xlab="",ylab="")
> lines(z,U,lwd=2,col="blue")

The ratio between the two is given below,

> plot(z,U/P,type="l",lwd=2,col="purple",xlab="",ylab="",ylim=c(0,10))

Note that it is possible to interpret the values on the axis as probability levels, taking quantiles of the Gaussian distribution. For a probability level p, the Chebyshev inequality yields (for a centred, symmetric risk with unit variance, see the derivation below) the upper bound https://latex.codecogs.com/gif.latex?1/\sqrt{2(1-p)} for the quantile, which can be compared with the Gaussian quantile,

> plot(pnorm(z),1/sqrt(2*(1-pnorm(z)))/z,type="l",lwd=2,col="purple",xlab="",
+ ylab="",ylim=c(0,10),xlim=c(.9,1))
> abline(h=3,lty=2)

The interpretation is that the upper bound is about 3 times higher than the true (Gaussian) quantile when z is the quantile of the N(0,1) distribution associated with probability level 99%.
Note that

  • if z is the 95% quantile of the N(0,1) distribution, the ratio is 2 (1.92)
  • if z is the 99% quantile of the N(0,1) distribution, the ratio is 3 (3.04)
  • if z is the 99.5% quantile of the N(0,1) distribution, the ratio is almost 4 (3.88)
  • if z is the 99.75% quantile of the N(0,1) distribution, the ratio is 5 (5.04)

(those ratios can also be computed directly, see the short sketch below)
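A quick sketch to compute those ratios (the Chebyshev-implied quantile bound over the Gaussian quantile) directly,

> p=c(.95,.99,.995,.9975)
> cbind(level=p,chebyshev=1/sqrt(2*(1-p)),gaussian=qnorm(p),
+ ratio=1/sqrt(2*(1-p))/qnorm(p))   # ratios of about 1.92, 3.04, 3.88 and 5.04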

A more formal explanation is to assume that X is symmetric, and then

https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-09.png

Thus, if https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-10.png, i.e. https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-11.png, we have an upper bound for the https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-12.png Value-at-Risk,

https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-20.png

where the upper bound is the upper bound for the https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-12.png Value-at-Risk for any centred distribution with finite variance.
If https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-31.png, then https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-32.png, i.e. https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-33.png. But since https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-33.png for a https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-36.png distribution, then

https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-21.png

and further

https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-22.png

Generalities are generally false

Following a comment on an old post (here), together with a request to give a lecture for the ERM training program offered by the Institut des Actuaires, I went back to SCR (solvency capital requirement) calculations. The more I read the technical documentation, the more I am surprised by the importance of the normality assumption! I went back to my Loss Models, which is the reference book for the non-life actuarial exam in English-speaking countries, and I saw dozens of strange distributions… but nothing about the normal distribution…

  • on the normality assumption in the aggregation formula

As I mentioned in an old post (here), the validity of the SCR aggregation formula can only be established in the Gaussian case (or more precisely in the elliptical case, see here). The proof is actually quite simple. I refer to a very recent paper by Laurent Devineau and Stéphane Loisel on the subject, here.

  • where the normality assumption keeps showing up

In fact, the normality assumption is really everywhere in SCR calculations. But of course, without saying so. For instance, in a paper by SCOR,

https://blogperso.univ-rennes1.fr/arthur.charpentier/public/perso2/.doc14-1_m.jpg

As I was saying the other day (here), the problem with “professional” documents (as opposed to “academic” ones) is that, since we want the CEO to read them, we avoid talking about assumptions, or worse, putting in a bit of mathematics…. In short, we learn that “generally” (a term of great scientific rigor, which I will reuse some day in my own proofs), “the expected shortfall […] at the 99% level corresponds quite closely to the […] value-at-risk at a 99.6% level”. But the very same sentence can be found in a report from EDHEC

https://blogperso.univ-rennes1.fr/arthur.charpentier/public/perso2/.SCR-edhec_m.jpg

There again, no source… Damned! Digging a little, Actuaris put me on the right track (here), since we learn that it is the “Swiss authority” that stated this result, which I would call surprising1.

https://blogperso.univ-rennes1.fr/arthur.charpentier/public/perso2/.actuaris-var_m.jpg

Indeed, one can track down the original document,

https://blogperso.univ-rennes1.fr/arthur.charpentier/public/perso2/.doc13-1_m.jpg

I admit I tried (without much success) to read Appendix D. I did not understand everything, but I did read an “assuming that the distribution functions follows a normal law”,

https://blogperso.univ-rennes1.fr/arthur.charpentier/public/perso2/.doc13-2_m.jpg

So, if I understood correctly, “generally”, “the distribution functions follows a normal law”! Well, I guess I can send an email to Stuart, Harry and Gordon to explain that they can drop a few pages from their book! Or perhaps to Jean-Luc Besson (for whom I have immense esteem!), who presents many distributions useful in insurance in the book he co-wrote with Christian Partrat…
But perhaps the claim is correct…? One could argue that, in actuarial science, we used to present the “normal power” approximation, and perhaps this result holds for other distributions….
Let us have a quick look at a few distributions….. For instance, for the lognormal distribution, if we look at it as a function of the standard deviation of the underlying normal distribution,

Indeed, the 99% TVaR sticks to the 99.6% quantile. The y-axis is on a logarithmic scale so that the picture is readable, with the VaR as dashed lines (exact formula, at 99.6% in red and at 99.8% in blue), and the circles are estimates of the 99% TVaR. However, still for the lognormal distribution, if we look at it as a function of the standard deviation of the underlying normal distribution, the larger the volatility, the worse the approximation.
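A minimal sketch of that comparison for the lognormal case, using the closed-form expression TVaR(p)=exp(s^2/2)*pnorm(s-qnorm(p))/(1-p) for a LN(0,s^2) risk,

> tvar_ln=function(s,p=.99) exp(s^2/2)*pnorm(s-qnorm(p))/(1-p)   # 99% TVaR of a LN(0,s^2)
> s=seq(.1,3,by=.1)
> plot(s,sapply(s,tvar_ln),log="y",xlab="sd of the underlying normal",ylab="")
> lines(s,qlnorm(.996,0,s),col="red",lty=2)    # 99.6% VaR
> lines(s,qlnorm(.998,0,s),col="blue",lty=2)   # 99.8% VaR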

Well, let us try a Gamma distribution (a classic in insurance, since it belongs to the exponential family, and is often compared with the lognormal distribution).

Apparently, here too the approximation seems to work…. but recall that the Gamma distribution has a thin tail… Let us go all the way and compare with the case of a Pareto distribution…

This time, the x-axis reads in the other direction: on the left is the riskiest case (with an infinite variance when the tail index is between 1 and 2). Note that for the heaviest risks, here again the TVaR is underestimated when the VaR (at a higher level) is used instead. We actually get exactly the same kind of graph with a Student t distribution,

In short, some will say I am nitpicking; nevertheless, in my opinion, companies “generally” underestimate the TVaR (assuming that this is the risk measure they care about) when they rely on a normality assumption…

1 as a teacher, this kind of conclusion always bothers me, since I wear myself out giving courses on risk measures, trying to present the notions as properly as possible, explaining that assumptions matter, that one has to define different notions of risk measures (convex ones, coherent ones, dynamic ones, etc.), and then, all of a sudden, we are told that “generally” it is all the same….

Quantile of sums, or sum of quantiles?

I come back here to a comment I heard repeatedly in my experience as an actuary in a previous life (and which can be read, most often between the lines, in some risk managers' reactions). Quantiles are the basis of risk measures, in particular through the VaR in finance, but when I was an actuary, we computed “90/10” figures to designate 90% quantiles of the losses. So none of this is particularly new. Recall that the quantile function is simply

https://latex.codecogs.com/gif.latex?Q_X(\alpha)=F_X^{-1}(\alpha)=\inf\{x\in\mathbb{R},F_X(x)\geq\alpha\}

where https://latex.codecogs.com/gif.latex?F_X denotes the distribution function of https://latex.codecogs.com/gif.latex?X.

Quantiles are computed by type of risk (in risk management), and inevitably, at the end, one wants a quantile of the sum of the risks, and it has long been customary to take the sum of the quantiles. I will therefore briefly discuss the following relation,

https://latex.codecogs.com/gif.latex?Q_{X+Y}(\alpha)\overset{?}{=}Q_{X}(\alpha)+Q_{Y}(\alpha)

  • The equality does not hold in the independent case

Due to a kind of occupational bias (because of the variance), many people believe that, in the independent case,

https://latex.codecogs.com/gif.latex?Q_{X+Y}(\alpha)=Q_{X}(\alpha)+Q_{Y}(\alpha)

but this result is false…. An example is in fact given below, in the case of exponential variables.

  • The equality does hold in the comonotonic case…

In fact, the previous relation is true in the comonotonic case. Comonotonicity corresponds to perfect positive dependence, to maximal correlation (if the correlation exists), i.e. https://latex.codecogs.com/gif.latex?Y=h(X) with https://latex.codecogs.com/gif.latex?h a strictly increasing function. Then

https://latex.codecogs.com/gif.latex?Q_{X+Y}(\alpha)=Q_{X}(\alpha)+Q_{Y}(\alpha)

  • … but it is not a “worst case”

Second oddity: comonotonicity is not the worst case. Comonotonicity is often seen as a worst-case scenario, but this is not so (this idea can be found in QIS 3 for instance, here).
Formally, if https://latex.codecogs.com/gif.latex?(X,Y)\in\mathcal{F}(F,G) (the Fréchet class of pairs with marginal distributions https://latex.codecogs.com/gif.latex?F and https://latex.codecogs.com/gif.latex?G), then

https://latex.codecogs.com/gif.latex?\max\{F(x)+G(y)-1,0\}\leq\mathbb{P}(X\leq%20x,Y\leq%20y)

and

https://latex.codecogs.com/gif.latex?\mathbb{P}(X\leq%20x,Y\leq%20y)\leq\min\{F(x),G(y)\}
Moving to dimension 2 (the case where the lower bound is actually a genuine distribution function), if https://latex.codecogs.com/gif.latex?(X^-,Y^-) and https://latex.codecogs.com/gif.latex?(X^+,Y^+) denote respectively an antimonotonic and a comonotonic version of https://latex.codecogs.com/gif.latex?(X,Y), then one can wonder, considering an arbitrary risk measure https://latex.codecogs.com/gif.latex?R(\cdot), whether

https://latex.codecogs.com/gif.latex?R(X^-+Y^-)\leq%20R(X+Y)\leq%20R(X^++Y^+)

In fact, André Tchen proved in 1980 a result showing that comonotonicity is indeed a worst case, but only in some very restricted settings.
If https://latex.codecogs.com/gif.latex?h is a supermodular function, i.e.

https://latex.codecogs.com/gif.latex?h(x_1+\epsilon_1,x_2+\epsilon_2)+h(x_1,x_2)\geq%20h(x_1+\epsilon_1,x_2)+h(x_1,x_2+\epsilon_2)

for all https://latex.codecogs.com/gif.latex?x_1,x_2 and https://latex.codecogs.com/gif.latex?\epsilon_1,\epsilon_2\geq%200, then, for all https://latex.codecogs.com/gif.latex?(X,Y)\in\mathcal{F}(F,G),

https://latex.codecogs.com/gif.latex?\mathbb{E}[h(X^-,Y^-)]\leq\mathbb{E}[h(X,Y)]\leq\mathbb{E}[h(X^+,Y^+)]

(the proof can be found here). I refer to the papers by Michel Denuit, Jan Dhaene or Marc Goovaerts (here or there) for applications to pricing on two lives, for instance, or to stop-loss premiums in reinsurance.

  • Bounding (numerically) the quantile of a sum

Let us start with an example, with two exponential distributions, since it is the simplest case, and above all since the bounds can be computed explicitly (in the general case, we will settle for numerical methods, as Khalil did here).
If https://latex.codecogs.com/gif.latex?X and https://latex.codecogs.com/gif.latex?Y are two exponentially distributed risks where, to be more precise,

then the bounds

are valid for all https://latex.codecogs.com/gif.latex?x, whatever the dependence structure between https://latex.codecogs.com/gif.latex?X and https://latex.codecogs.com/gif.latex?Y, where

Then, taking inverses (since these functions are strictly increasing), we obtain an inequality in terms of quantiles, or Value-at-Risk,

for all https://latex.codecogs.com/gif.latex?\alpha.
Recall that in the independent case the distribution of the sum is obtained by convolution, whereas in the comonotonic case the quantile of the sum is simply the sum of the quantiles. The figure below shows the possible values of the distribution function of the sum, with the independent case in blue and the comonotonic case in red.
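A small numerical sketch, say with two unit exponential risks (an illustrative choice of parameters), using the same discretization as in the posts above,

> Finv=function(u) qexp(u); Ginv=function(u) qexp(u)
> n=1000; x=(1:(n-1))/n
> Qinf=Qsup=rep(NA,n-1)
> for(i in 1:(n-1)){
+ J=0:i
+ Qinf[i]=max(Finv(J/n)+Ginv((i-J)/n))
+ J=(i-1):(n-1)
+ Qsup[i]=min(Finv((J+1)/n)+Ginv((i-1-J+n)/n))
+ }
> plot(x,Qsup,type="l",col="red",ylim=range(c(Qinf,Qsup),finite=TRUE),
+ xlab="probability level",ylab="quantile of the sum")
> lines(x,Qinf,col="red")
> lines(x,qgamma(x,2,1),col="blue")    # independent case: a Gamma(2,1) sum
> lines(x,2*qexp(x),col="blue",lwd=2)  # comonotonic case: sum of the quantiles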

One can also visualize the possible values of the quantiles of the sum. In a general setting, this problem was treated by Makarov in 1981, and by Frank, Nelsen and Schweizer in 1987 (here), while remaining in dimension 2. In fact, they worked on bounds for the distribution function of a sum, but it amounts to the same thing, as we just saw. They proved the following result: if https://latex.codecogs.com/gif.latex?(X,Y)\in\mathcal{F}(F,G), then for all https://latex.codecogs.com/gif.latex?x,

https://latex.codecogs.com/gif.latex?\tau_{C^-,+}(F,G)(x)\leq%20\mathbb{P}(X+Y\leq%20x)\leq\rho_{C^-,+}(F,G)(x)

where https://latex.codecogs.com/gif.latex?C^- denotes the countermonotonic copula (which is a copula in dimension 2), setting

https://latex.codecogs.com/gif.latex?\tau_{C^-,+}(F,G)(x)=\sup_{u+v=x}\{\max\{F(u)+G(v)-1,0\}\}

and

https://latex.codecogs.com/gif.latex?\rho_{C^-,+}(F,G)(x)=\inf_{u+v=x}\{\min\{F(u)+G(v),1\}\}

Finer results can be obtained by digging in the direction of arithmetic probability, for instance with Williamson's PhD thesis (here; this thesis is remarkable, and unfortunately far too rarely cited). I am only discussing dimension 2 here; the extension to higher dimensions is delicate (I refer to Paul Embrechts' work on the subject, here or there, for example).

SCR calculations, Solvency Capital Requirements

To recall the general context, Solvency II (the analogue, for insurers, of the CRD directive for banks*) relies on 3 pillars,

  1. define quantitative thresholds for the calculation of technical provisions and own funds, thresholds which will eventually become regulatory, namely the MCR (Minimum Capital Requirement, the minimum level of own funds below which the intervention of the supervisory authority is automatic) and the SCR (Solvency Capital Requirement, the target capital needed to absorb the shock caused by an exceptional loss experience),
  2. set qualitative standards for internal risk monitoring within companies, and define how the supervisory authority should exercise its supervisory powers in this context. Note that, in principle, supervisory authorities will be able to require “too risky” companies to hold more capital than the amount suggested by the SCR calculation, and will be able to force them to reduce their risk exposure,
  3. define the set of information that supervisory authorities will deem necessary to exercise their supervisory power.

This pillar structure can be illustrated as follows

Under the first pillar, insurers and reinsurers will have to measure their risks, and will have to make sure they hold enough capital to cover them. In practice, CEIOPS and the European Commission have retained a ruin probability of 0.5%. Capital calculations can then be carried out in one of two ways,

  1. use a standard formula. The formula, as well as the calibration of the parameters, has been addressed through the QIS.
  2. use an internal model. On that point, CEIOPS is studying the assessment procedures.

In April 2007, QIS3 was launched, in order to propose a standard formula for the calculation of the MCR and the SCR, while studying the specific issue of groups. In particular, the documents contain the following formula (for a basic SCR calculation)

This formula comes from QIS3, but similar things can be found in Sandström (2004), for example,

With a strong constraint on the form of the SCR, he then obtains

Where does this formula come from? Some have attempted partial answers, for example

Unfortunately, this result is not very convincing, since nothing is ever said about the dependence between the components, which is troubling. Sandström writes something similar, even though for him “normality” is understood in a multivariate framework.

An explanation can be found in a paper by Dietmar Pfeifer and Doreen Straßburger (here), published in the Scandinavian Actuarial Journal (downloadable here). They try to explain how to compute the SCR,

They note, and this is indeed the intuition we had, that in a (multivariate) Gaussian world, this formula works, both for a VaR-based and for a TVaR-based SCR. In particular, they cite a book by Sven Koryciorz, corresponding to his doctoral thesis, entitled “Sicherheitskapitalbestimmung und –allokation in der Schadenversicherung. Eine risikotheoretische Analyse auf der Basis des Value-at-Risk und des Conditional Value-at-Risk”, published in 2004.
Otherwise, to go a little further, one can also note some rather troubling statements in the CEIOPS reports, for example

Yet it is easy to show that this is not the case (even if it is indeed what the “standard formula” recommends). The graph below shows the evolution of the VaR of a sum of correlated (exchangeable) risks as a function of the underlying correlation: in this example, very highly correlated risks turn out to be less risky than moderately correlated ones.

(the underlying dependence structure is a Student t copula). On the other hand, for the TVaR, on the same example, the TVaR of the sum is indeed an increasing function of the correlation,
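A minimal sketch of that kind of computation, by simulation (the Gaussian margins and the 3 degrees of freedom of the t copula are arbitrary choices here),

> library(mvtnorm)
> riskmeasures_sum=function(r,p=.995,nsim=1e5,df=3){
+ set.seed(1)
+ Z=rmvt(nsim,sigma=matrix(c(1,r,r,1),2,2),df=df)
+ U=pt(Z,df=df)                  # Student t copula
+ S=qnorm(U[,1])+qnorm(U[,2])    # illustrative Gaussian margins
+ c(VaR=quantile(S,p),TVaR=mean(S[S>quantile(S,p)]))
+ }
> R=seq(-.9,.9,by=.1)
> M=sapply(R,riskmeasures_sum)
> plot(R,M[1,],type="l",xlab="correlation parameter",ylab="risk measure of the sum")
> lines(R,M[2,],col="red")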


(more details in the slides from the summer school in Lyon last summer, here).

* To borrow from the Wikipedia page (here), the European CRD directive (Capital Requirements Directive) transposes into European law the recommendations of the Basel II accords, aiming to compute the capital requirements for financial institutions (i.e. directives 2006/48/CE and 2006/49/CE).

A (popular science) article on risk measures

Publication of an article on the history of risk measures, in the journal Risques, going from the debates around vaccination up to the adoption of the VaR (Value-at-Risk, i.e. a statistical quantile) under pressure from JP Morgan. In particular, I wanted to recall, in this debate, the link with the “probability of ruin”, which has been studied in actuarial science for more than a century and is the main historical risk measure, since it is explicitly related to the solvency of an insurance company.

Allocations for Value-at-Risk portfolio optimization

Publication of the article Estimating allocations for Value-at-Risk portfolio optimization in Mathematical Methods of Operations Research.
Value-at-Risk, despite being adopted as the standard risk measure in finance, suffers severe objections from a practical point of view, due to a lack of convexity, and since it does not reward diversification (which is an essential feature in portfolio optimization). Furthermore, it is also known for having poor behavior in risk estimation (which has been used to justify the use of parametric models, but which then induces model errors). The aim of this paper is not to choose in favor of or against the use of VaR, but to add some more information to this discussion, especially from the estimation point of view. Here we propose a simple method not only to estimate the optimal allocation based on a Value-at-Risk minimization constraint, but also to derive empirical confidence intervals, based on the fact that the underlying distribution is unknown and can be estimated from past observations.
Value-at-Risk, despite being adopted as the standard risk measure in finance, suffers severe objections from a practical point of view, due to a lack of convexity, and since it does not reward diversification (which is an essential feature in portfolio optimization). Furthermore, it is also known as having poor behavior in risk estimation (which has been justified to impose the use of parametric models, but which induces then model errors). The aim of this paper is to chose in favor or against the use of VaR but to add some more information to this discussion, especially from the estimation point of view. Here we propose a simple method not only to estimate the optimal allocation based on a Value-at-Risk minimization constraint, but also to derive— empirical—confidence intervals based on the fact that the underlying distribution is unknown, and can be estimated based on past observations.