Tag Archives: copulas

MultiAttribute Copula Utility Functions

In June, with Olivier L’Haridon, we will organize a (small) conference, in Rennes, on risk models in a multi-attribute framework. In order to fully enjoy the workshop (more to come on the blog), we organized this year an internal workshop on that topic. I gave an overview in September on multivariate distributions, with an emphasis on spherical / elliptical distributions, distributions on the simplex, and copulas. This time, following recent presentations made by Olivier, I will present Ali E. Abbas’s (recent) contributions on copula-type multiattribute utility functions. Slides are online, and the presentation will be this Thursday.

As discussed in the introduction, one (nice) application can be the choice of a seat in a theatre, see

Overview on Multivariate Distributions

In June 2016, with Olivier L’Haridon, we will organize a (small) conference, in Rennes, on risk models in a multi-attribute framework. In order to fully enjoy the workshop (more to come on the blog), we will organize every month an internal workshop on that topic. We will start tomorrow afternoon, 13:00-14:30, and I will give a brief talk on multivariate distributions, with an emphasis on spherical / elliptical distributions, distributions on the simplex, and copulas. Slides are now online,

Probit Transformation for Nonparametric Kernel Estimation of the Copula Density, Lille

This Monday I will be in Lille to give a talk at the Journées de Statistiques. The talk will be based on the joint work with Gery Geenens and Davy Paindaveine, “Probit transformation for nonparametric kernel estimation of the copula density”. The paper can be found online, at http://arxiv.org/abs/1404.4414

“Copula modelling has become ubiquitous in modern statistics. Here, the problem of nonparametrically estimating a copula density is addressed. Arguably the most popular nonparametric density estimator, the kernel estimator is not suitable for the unit-square-supported copula densities, mainly because it is heavily affected by boundary bias issues. In addition, most common copulas admit unbounded densities, and kernel methods are not consistent in that case. In this paper, a kernel-type copula density estimator is proposed. It is based on the idea of transforming the uniform marginals of the copula density into normal distributions via the probit function, estimating the density in the transformed domain, which can be accomplished without boundary problems, and obtaining an estimate of the copula density through back-transformation. Although natural, a raw application of this procedure was, however, seen not to perform very well in the earlier literature. Here, it is shown that, if combined with local likelihood density estimation methods, the idea yields very good and easy to implement estimators, fixing boundary issues in a natural way and able to cope with unbounded copula densities. The asymptotic properties of the suggested estimators are derived, and a practical way of selecting the crucially important smoothing parameters is devised. Finally, extensive simulation studies and a real data analysis evidence their excellent performance compared to their main competitors.”

The slides are available on Dropbox (it is a 54 MB file with animated pictures that do not appear in the version below).
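To fix ideas, here is a minimal sketch of the naive probit-transformation idea described in the abstract (a kernel density estimate on the probit scale, then back-transformation to the unit square), not the local likelihood estimator studied in the paper; the simulated sample and the grid are arbitrary choices,

library(MASS)     # mvrnorm, kde2d
library(copula)   # pobs
set.seed(1)
XY=mvrnorm(1000,mu=c(0,0),Sigma=matrix(c(1,.6,.6,1),2,2))
U=pobs(XY)                                         # pseudo-observations on the unit square
S=qnorm(U)                                         # probit transformation
fhat=kde2d(S[,1],S[,2],n=101,lims=c(-3,3,-3,3))    # standard kernel estimate, no boundary issue
u=pnorm(fhat$x); v=pnorm(fhat$y)
chat=fhat$z/outer(dnorm(fhat$x),dnorm(fhat$y))     # back-transformation to the copula density
contour(u,v,chat,xlab="u",ylab="v")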

Copulas and Financial Time Series

I was recently asked to write a survey on copulas for financial time series. The paper is, so far, unfortunately, in French, and is available on https://hal.archives-ouvertes.fr/. There is a description of various models, including some graphs and statistical outputs, obtained from real data.

To illustrate, I’ve been using weekly log-returns of (crude) oil prices, Brent, Dubaï and Maya.

The dataset is available as an Excel file, oil.xls (I thought it was possible to load it directly from the internet, but it did not work… so I suggest downloading the file first, and then loading it)

> library(xlsx)
> temp <- tempfile()
> download.file(
+ "http://freakonometrics.free.fr/oil.xls",temp)
trying URL 'http://freakonometrics.free.fr/oil.xls'
Content type 'application/vnd.ms-excel' length 99328 bytes (97 KB)
downloaded 97 KB
> oil=read.xlsx(temp,sheetName="DATA",dec=",")
Error in .jcall("RJavaTools", "Ljava/lang/Object;", "invokeMethod", cl,  : 
  java.io.IOException: block[ 0 ] already removed - does your POIFS have circular or duplicate block references?
> oil=read.xlsx("D:\\home\\acharpen\\mes documents\\oil.xls",sheetName="DATA")

Then we can plot those three time series

> head(oil)
        Date      WTI    brent   Dubai     Maya
1 1997-01-10  2.73672  2.25465  3.3673   1.5400
2 1997-01-17 -3.40326 -6.01433 -3.8249  -4.1076
3 1997-01-24 -4.09531 -1.43076 -6.6375  -4.6166
4 1997-01-31 -0.65789  0.34873  0.7326  -1.5122
5 1997-02-07 -3.14293 -1.97765 -0.7326  -1.8798
6 1997-02-14 -5.60321 -7.84534 -7.6372 -11.0549

> Time=as.Date(oil$Date,"%Y-%m-%d")
> plot(Time,oil[,3],type="l",ylab="Brent, weekly log returns",ylim=range(oil[,3:5]))

The idea is to use some multivariate ARMA-GARCH processes here. The heuristic is that the ARMA part is used to model the dynamics of the (conditional) mean of the time series, while the GARCH part is used to model the dynamics of the (conditional) variance. Two kinds of models are considered in the paper (a small sketch of the second one is given right after this list):

  • a multivariate GARCH process (or a model on the dynamics of the variance matrix) on the residuals from the ARMA models
  • a multivariate model (based on copulas) on the residuals of the ARMA-GARCH process
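
To give an idea of the second approach, here is a minimal sketch (not the code used in the paper): ARMA(1,1)-GARCH(1,1) models are fitted to each series with the rugarch package, and a Gaussian copula (an arbitrary choice here, with unstructured correlation) is then fitted on the pseudo-observations of the standardized residuals,

library(rugarch)   # univariate ARMA-GARCH fitting
library(copula)    # copula fitting
spec=ugarchspec(variance.model=list(model="sGARCH",garchOrder=c(1,1)),
                mean.model=list(armaOrder=c(1,1)))
fits=lapply(oil[,c("brent","Dubai","Maya")],function(x) ugarchfit(spec,data=x))
Z=sapply(fits,function(f) as.numeric(residuals(f,standardize=TRUE)))  # standardized residuals
U=pobs(Z)                                                             # pseudo-observations in (0,1)
fitCopula(normalCopula(rep(.5,3),dim=3,dispstr="un"),U,method="mpl")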

Continue reading Copulas and Financial Time Series

The Pay-for-Performance Myth

Last week, Eric Chemi and Ariana Giorgi published an interesting article on “The Pay-for-Performance Myth”.

With all the public chatter about exorbitant executive compensation and income inequality, it’s useful to look at the relationship between chief executive officer pay and corporate performance. Typically, when the subject of their big pay packages arises, CEOs—usually through their spokespeople—say they are paid for performance. Does data back that up?

An analysis of compensation data publicly released by Equilar shows little correlation between CEO pay and company performance. Equilar ranked the salaries of 200 highly paid CEOs. When compared to metrics such as revenue, profitability, and stock return, the scattering of data looks pretty random, as though performance doesn’t matter. The comparison makes it look as if there is zero relationship between pay and performance.

In the article, they produce a copula-type plot (since only ranks are considered). Ariana kindly sent me the dataset (the one used in The Pay at the Top) to play with it

> base=read.table("ceo.csv",sep=";",header=TRUE)

Here I normalize (dividing by the size of the dataset) to have a uniform distribution on the unit interval (instead of working with ranks, i.e. integers). If we remove that scaling factor, the scatterplot is the same as the one mentioned in The Pay-for-Performance Myth.

> n=nrow(base)
> U=rank(base[,1])/(n+1)
> V=rank(base[,2])/(n+1)
> plot(U,V,xlab="Rank CEO Pay",
+ ylab="Rank Stock Return")

This is the copula type representation.

If we visualize the density of the copula (using the algorithm described in the joint paper with Gery and Davy), we get either

> library("copula")
> library("ks")
> library("MASS")
> library("locfit")
> UVs=cbind(U,V)
> n.res=32
> ctilde1=probtranscopkde(UVs,p=1,
+ u.out=seq(1/(2*n.res+1),1-1/(2*n.res+1),
+ length=n.res),plots=TRUE)

Continue reading The Pay-for-Performance Myth

On Hoeffding’s identity

In 1940, Wassily Hoeffding published Masstabinvariante Korrelationstheorie, which was an impressive paper. For those (like me) who unfortunately barely speak German, an English translation can be found in The Collected Works of Wassily Hoeffding, published a few years ago. As I keep saying in my courses about copulas, almost everything was already in that paper by Wassily Hoeffding. For instance, we can see the following graph, of a cumulative distribution function,

What is the difference with a copula? A copula (in dimension 2) is the cumulative distribution function of a random pair with uniform margins on $[0,1]$, as defined by Abe Sklar

But Wassily Hoeffding considered a random pair with uniform margins on $[-1/2,1/2]$. Everything else is the same. He could even derive the level curves of the density of the Gaussian copula,

> library(mnormt)
> r=.6
> dc=function(u,v) return(
+ as.numeric(dmnorm(cbind(qnorm(u),qnorm(v)),varcov=
+ matrix(c(1,r,r,1),2,2))/dnorm(qnorm(u))/dnorm(qnorm(v))))
> n=500
> vectu=seq(1/n,1-1/n,length=n-1)
> matdc=outer(vectu,vectu,dc)
> contour(vectu,vectu,matdc,levels=
+ c(.325,.944,1.212,1.250,1.290,1.656,3.85),lwd=2)

 

But another interesting point is that there is the so-called Hoeffding identity,

$$\text{cov}(X,Y)=\int_{\mathbb{R}}\int_{\mathbb{R}}\big[F_{X,Y}(x,y)-F_X(x)F_Y(y)\big]\,dx\,dy$$

which is interesting, and quite important, actually, to understand that the covariance (or the correlation) can be seen as some ‘distance‘ to the independence. More precisely, observe that

$$\text{cov}(X,Y)=\int_{\mathbb{R}}\int_{\mathbb{R}}\big[F_{X,Y}(x,y)-F_{X\perp Y}(x,y)\big]\,dx\,dy$$

where $F_{X\perp Y}(x,y)=F_X(x)F_Y(y)$ would be the joint cumulative distribution function of some independent variables, with the same marginal distributions.
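
As a quick numerical sanity check (not in the original post), consider a Gaussian pair with correlation r=0.6, for which the covariance is exactly r; the double integral can be approximated on a (truncated) grid,

library(mnormt)   # pmnorm, the joint Gaussian cdf
r=.6
H=function(x,y) pmnorm(cbind(x,y),varcov=matrix(c(1,r,r,1),2,2))
x=seq(-6,6,by=.05)
D=outer(x,x,Vectorize(function(u,v) H(u,v)-pnorm(u)*pnorm(v)))   # F(x,y)-F(x)G(y) on the grid
sum(D)*.05^2   # should be close to cov(X,Y)=r=0.6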

Of course, it is not exactly a distance, since it can be negative. But still. Now, the thing is that the proof is not trivial. But it uses interesting identities. For instance, in 1885, Franklin wrote a nice paper, Proof of a Theorem of Tchebycheff's on Definite Integrals, in the American Journal of Mathematics. To get some heuristics about the identity, consider some (finite) sequences $(a_i)$ and $(b_i)$; then one can prove that

$$n\sum_{i=1}^n a_ib_i-\left(\sum_{i=1}^n a_i\right)\left(\sum_{i=1}^n b_i\right)=\sum_{i<j}(a_i-a_j)(b_i-b_j)$$

And there is a continuous version of that identity. Consider two bounded functions $f$ and $g$, on some interval $[\alpha,\beta]$; then

$$(\beta-\alpha)\int_\alpha^\beta f(x)g(x)\,dx-\int_\alpha^\beta f(x)\,dx\int_\alpha^\beta g(x)\,dx$$

is equal to

$$\frac{1}{2}\int_\alpha^\beta\int_\alpha^\beta \big(f(x)-f(y)\big)\big(g(x)-g(y)\big)\,dx\,dy$$
In 1979, in Monotone Regression and Covariance Structure, Gerald Shea gave a more probabilistic interpretation of that result, using a different measure. More precisely, assume now that the functions $f$ and $g$ are integrable, with respect to some measure $\mu$, on some set $\Omega$. Then

$$\mu(\Omega)\int_\Omega f(x)g(x)\,d\mu(x)-\int_\Omega f(x)\,d\mu(x)\int_\Omega g(x)\,d\mu(x)$$

is equal to

$$\frac{1}{2}\int_\Omega\int_\Omega \big(f(x)-f(y)\big)\big(g(x)-g(y)\big)\,d\mu(x)\,d\mu(y)$$
In the case where $\mu$ is a probability measure (i.e. $\mu(\Omega)=1$), this equality is the one used by Wassily Hoeffding, in 1940. The interpretation in terms of random variables is simply that

$$2\,\text{cov}\big(f(\boldsymbol{X}_1),g(\boldsymbol{X}_1)\big)=\mathbb{E}\big[\big(f(\boldsymbol{X}_1)-f(\boldsymbol{X}_2)\big)\big(g(\boldsymbol{X}_1)-g(\boldsymbol{X}_2)\big)\big]$$

(with standard assumptions on the existence of those quantities) where $\boldsymbol{X}_1$ and $\boldsymbol{X}_2$ are two independent vectors, with identical distribution, $\mu$. Actually, this relationship can also be found in Some Concepts of Dependence, by E. L. Lehmann, published in 1966. Oh, and by the way, the connection with the Chebyshev inequality (claimed in the title of the seminal paper by Franklin) comes from the fact that if $f$ and $g$ are monotonic (in the same direction), then the left part of the identity is positive, and thus

$$\mathbb{E}\big[f(\boldsymbol{X}_1)\,g(\boldsymbol{X}_1)\big]\geq\mathbb{E}\big[f(\boldsymbol{X}_1)\big]\,\mathbb{E}\big[g(\boldsymbol{X}_1)\big]$$

But let's get back to Hoeffding's result. How do we get it from that lemma? The idea is to write

$$2\,\text{cov}(X,Y)=\mathbb{E}\big[(X_1-X_2)(Y_1-Y_2)\big]$$

as

$$\mathbb{E}\left[\left(\int_{\mathbb{R}}\big[\mathbf{1}(X_2\leq x)-\mathbf{1}(X_1\leq x)\big]\,dx\right)\left(\int_{\mathbb{R}}\big[\mathbf{1}(Y_2\leq y)-\mathbf{1}(Y_1\leq y)\big]\,dy\right)\right]$$

i.e.

$$\mathbb{E}\left[\int_{\mathbb{R}}\int_{\mathbb{R}}\big[\mathbf{1}(X_2\leq x)-\mathbf{1}(X_1\leq x)\big]\big[\mathbf{1}(Y_2\leq y)-\mathbf{1}(Y_1\leq y)\big]\,dx\,dy\right]$$

We can then interchange the integral and the expectation, use the fact that

$$\mathbb{E}\big[\big(\mathbf{1}(X_2\leq x)-\mathbf{1}(X_1\leq x)\big)\big(\mathbf{1}(Y_2\leq y)-\mathbf{1}(Y_1\leq y)\big)\big]=2\big[F_{X,Y}(x,y)-F_X(x)F_Y(y)\big]$$

and then some integral calculus can be used to rewrite that expression as

$$2\int_{\mathbb{R}}\int_{\mathbb{R}}\big[F_{X,Y}(x,y)-F_X(x)F_Y(y)\big]\,dx\,dy$$
So we get here Hoeffding's identity. Actually, as mentioned by Ben Derrett about the equality above, it can be observed (see http://math.stackexchange.com/105713) that $2\,\text{cov}(X,Y)=2\big(\mathbb{E}[XY]-\mathbb{E}[X]\mathbb{E}[Y]\big)$ can also be written

$$\mathbb{E}\big[(X_1-X_2)(Y_1-Y_2)\big]$$

where again, $(X_1,Y_1)$ and $(X_2,Y_2)$ are two independent vectors, with identical distribution, $F_{X,Y}$. The latter can be written

Copula Density Estimation

The joint paper, written with Gery Geenens and Davy Paindaveine, entitled “Probit transformation for nonparametric kernel estimation of the copula density”, is now online at http://arxiv.org/abs/1404.4414

“Copula modelling has become ubiquitous in modern statistics. Here, the problem of nonparametrically estimating a copula density is addressed. Arguably the most popular nonparametric density estimator, the kernel estimator is not suitable for the unit-square-supported copula densities, mainly because it is heavily affected by boundary bias issues. In addition, most common copulas admit unbounded densities, and kernel methods are not consistent in that case. In this paper, a kernel-type copula density estimator is proposed. It is based on the idea of transforming the uniform marginals of the copula density into normal distributions via the probit function, estimating the density in the transformed domain, which can be accomplished without boundary problems, and obtaining an estimate of the copula density through back-transformation. Although natural, a raw application of this procedure was, however, seen not to perform very well in the earlier literature. Here, it is shown that, if combined with local likelihood density estimation methods, the idea yields very good and easy to implement estimators, fixing boundary issues in a natural way and able to cope with unbounded copula densities. The asymptotic properties of the suggested estimators are derived, and a practical way of selecting the crucially important smoothing parameters is devised. Finally, extensive simulation studies and a real data analysis evidence their excellent performance compared to their main competitors.”

Multivariate Archimax copulas

Our paper, written jointly with Anne-Laure Fougères, Christian Genest and Johanna Nešlehová, entitled Multivariate Archimax Copulas, should appear some day in the Journal of Multivariate Analysis.

A multivariate extension of the bivariate class of Archimax copulas was recently proposed by Mesiar & Jagr (2013), who asked under which conditions it holds. This paper answers their question and provides a stochastic representation of multivariate Archimax copulas. A few basic properties of these copulas are explored, including their minimum and maximum domains of attraction. Several non-trivial examples of multivariate Archimax copulas are also provided.

In this paper, we extend the class of Archimax copulas, introduced in dimension 2 in Bivariate Distributions with Given Extreme Value Attractor, by Philippe Capéraà, Anne-Laure Fougères and Christian Genest, inspired by some ideas mentioned in a paper published in Kybernetika a few years ago. I will try to post additional material, soon…

Conditional dependence measures

This week, I am spending some time at the Workshop on Nonparametric Curve Smoothing, at Concordia. Yesterday afternoon, Noël Veraverbeke showed an interesting graph, to illustrate conditional copulas (and the derivation of conditional dependence measures, such as Kendall's tau, or Spearman's rho). A long time ago, in my PhD thesis (mainly on conditional copulas) I did try to derive conditional dependence measures (in a dedicated chapter). In my PhD, I was interested in describing the dependence of a pair $(Y_1,Y_2)$ given $(Y_1,Y_2)\in\mathcal{V}$, where $\mathcal{V}$ is a region of interest, such as tails. So I wanted to study the behavior of $(Y_1,Y_2)$ given $\{Y_1>t,Y_2>t\}$. This has an interpretation when studying large risks, but also in joint life mortality.

In the paper Noël mentioned, they want to describe the dependence of a pair $(Y_1,Y_2)$ given a covariate $X=x$. And he came up with this very nice example: consider expected lifetimes, for males and females, in various countries. You can get zipped files with the data for males and females, and we can use the GDP per capita as our covariate. Here is the code to visualize life expectancies,

b1=read.table("sp.dyn.le00.fe.in_Indicator_en_csv_v2.csv",header=TRUE,sep=",",skip=2)
b2=read.table("sp.dyn.le00.ma.in_Indicator_en_csv_v2.csv",header=TRUE,sep=",",skip=2)
b3=read.table("ny.gdp.pcap.cd_Indicator_en_csv_v2.csv",header=TRUE,sep=",",skip=2)
b1b=b1[,c(1,2,55)]
b2b=b2[,c(1,2,55)]
b3b=b3[,c(1,2,55)]
names(b1b)[3]="LEF"
names(b2b)[3]="LEM"
names(b3b)[3]="GPD"
b=merge(b1b,b2b)
b=merge(b,b3b)
plot(b$LEM,b$LEF,xlab="Life Expectancy (male vs. female)")

With this graph, we cannot visualize the link with the covariate,

b$cgpd=cut(b$GPD,quantile(b$GPD,seq(0,1,by=1/6),na.rm=TRUE))
levels(b$cgpd)=as.character(1:6)
library(RColorBrewer)
CL=brewer.pal(6, "RdBu")	
plot(b$LEM,b$LEF,xlab="Life Expectancy (male vs. female)",pch=19,col=CL[as.numeric(b$cgpd)])

Here, poor countries are in red, and rich countries in blue,

Clearly, life expectancy is connected to the wealth of the country,

plot(b$GPD,b$LEF,xlab="(Female) Life Expectancy vs. GPD (log scale)",pch=19,col=CL[as.numeric(b$cgpd)],log="x")
plot(b$GPD,b$LEM,xlab="(Male) Life Expectancy vs. GPD (log scale)",pch=19,col=CL[as.numeric(b$cgpd)],log="x")

The idea here is to consider the conditional dependence structure, given the wealth. If we want something smooth (this is actually the goal of the workshop, but I’d like to make that quickly) consider some weighted version of Kendall’s tau, based on the idea mentioned in a post on http://stackoverflow.com/

The idea is to use concordance and discordance counts, with replications of the data, based on the weights

# P: weighted number of concordant pairs in a two-way contingency table t
P = function(t) {   
  r_ndx = row(t)
  c_ndx = col(t)
  sum(t * mapply(function(r, c){sum(t[(r_ndx > r) & (c_ndx > c)])},
    r = r_ndx, c = c_ndx))}
# Q: weighted number of discordant pairs in a two-way contingency table t
Q = function(t) {
  r_ndx = row(t)
  c_ndx = col(t)
  sum(t * mapply( function(r, c){
      sum(t[(r_ndx > r) & (c_ndx < c)])
  },
    r = r_ndx, c = c_ndx) )
}
# Kendall's tau-c (Stuart's tau-c), computed from the contingency table
kendall_tau_c = function(t){
    t = as.matrix(t) 
    m = min(dim(t))
    n = sum(t)
    ks_tauc = (m*2*(P(t)-Q(t)))/((n*n)*(m-1))
}
I=is.na(b$GPD)
bw=density(log(b$GPD[!I]))$bw
# conditional Kendall's tau at GDP level x: observations are replicated
# proportionally to a Gaussian weight on the log-GDP distance to x
kendall.weight=function(x){
df=data.frame(Y1=b$LEF, Y2=b$LEM, freq=trunc(dnorm(log(b$GPD)-log(x),sd=bw)*100))
df=df[!is.na(df$freq),]
dfrep=data.frame( lapply(df, function(x){rep(x, df$freq)}))
t=xtabs(~ Y1+Y2, dfrep)
return(kendall_tau_c(t))}

Here, I use weights from a Gaussian kernel on the logarithm of the GDP per capita (the standard deviation of the Gaussian weight being equal to the bandwidth of the Gaussian kernel density estimate of the log GDP per capita). Then, we can compute various conditional Kendall's taus,

T=exp(seq(6,11.5,length=50))
K=Vectorize(kendall.weight)(T)

and plot them,

plot(T,K,type="l",xlab="Conditional Kendall's tau vs. GPD (log scale)")

There is more “correlation” between the lifetimes of men and women in poor countries than in rich countries (which is also what Noël observed). Now, we can also play with time, because we have those statistics for several years.

Graduate Course on Copulas and Extreme Values

This Winter, I will be giving a (graduate) course on extreme values, and copulas (more generally multivariate models and dependence), MAT8595. It is an ISM course, and even if it will probably be given in French, I will upload information here, in English. I will upload the (detailed) syllabus of the course during the Christmas holidays. But to give an overview, for those willing to register, the first part of the course will focus on extreme value theory. The references will be

The second part of the course will be on multivariate distributions. The references will be

Specific references and more details about the chapters will be given during the course. I will upload exercises this winter, as well as a list of articles that will be used for projects. Examples will be illustrated using R functions from dedicated packages.

Grades will be based on exercises (homework), a report (based on a published paper) and a final written exam.

Fractals and Kronecker product

A few years ago, I went to listen to Roger Nelsen who was giving a talk about copulas with fractal support. Roger is amazing when he gives a talk (I am also a huge fan of his books, and articles), and I really wanted to play with that concept (that he did publish later on, with Gregory Fredricks and José Antonio Rodriguez-Lallena). I did mention that idea in a paper, written with Alessandro Juri, just to mention some cases where deriving fixed point theorems is not that simple (since the limit may not exist).

The idea in the initial article was to start with something quite simple, the so-called transformation matrix, e.g.

$$T=\frac{1}{8}\left(\begin{matrix}1 & 0 & 1 \\ 0 & 4 & 0 \\ 1 & 0 & 1\end{matrix}\right)$$

Here, in all areas with mass, we spread it uniformly (say), i.e. the support of $T(C^\perp)$ is the one below, i.e. $1/8$th of the mass is located in each corner, and $1/2$ is in the center. So if we spread the mass to have a copula (with uniform margins), we have to consider squares on the intervals $[0,1/4]$, $[1/4,3/4]$ and $[3/4,1]$,
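
For the record, that transformation matrix is straightforward to enter in R (a small aside, not from the original post),

T3=matrix(c(1,0,1,
            0,4,0,
            1,0,1),3,3)/8   # 1/8 of the mass in each corner, 1/2 in the center
sum(T3)                     # total mass is 1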

The idea, then, is to consider $T^2=\otimes^2T$, where $\otimes^2T$ is the tensor product (also called Kronecker product) of $T$ with itself. Here, the support of $T^2(C^\perp)$ is

Then, consider $T^3=\otimes^3T$, where $\otimes^3T$ is the tensor product of $T$ with itself, three times. And the support of $T^3(C^\perp)$ is

Etc. Here, it is computationally extremely simple to do it, using this Kronecker product. Recall that if $\mathbf{A}=(a_{i,j})$, then

$$\mathbf{A}\otimes\mathbf{B} = \begin{pmatrix} a_{11} \mathbf{B} & \cdots & a_{1n}\mathbf{B} \\ \vdots & \ddots & \vdots \\ a_{m1} \mathbf{B} & \cdots & a_{mn} \mathbf{B} \end{pmatrix}$$

So, we need a transformation matrix: consider the following $4\times4$ matrix,

> k=4
> M=matrix(c(1,0,0,1,
+            0,1,1,0,
+            0,1,1,0,
+            1,0,0,1),k,k)
> M
[,1] [,2] [,3] [,4]
[1,]    1    0    0    1
[2,]    0    1    1    0
[3,]    0    1    1    0
[4,]    1    0    0    1

Once we have it, we just consider the Kronecker product of this matrix with itself, which yields a $4^2\times4^2$ matrix,

> N=kronecker(M,M)
> N[,1:4]
[,1]  [,2] [,3] [,4]
[1,]     1    0    0    1
[2,]     0    1    1    0
[3,]     0    1    1    0
[4,]     1    0    0    1
[5,]     0    0    0    0
[6,]     0    0    0    0
[7,]     0    0    0    0
[8,]     0    0    0    0
[9,]     0    0    0    0
[10,]    0    0    0    0
[11,]    0    0    0    0
[12,]    0    0    0    0
[13,]    1    0    0    1
[14,]    0    1    1    0
[15,]    0    1    1    0
[16,]    1    0    0    1

And then, we continue,

> for(s in 1:3){N=kronecker(N,M)}

After only a couple of loops, we have a $4^5\times4^5$ matrix. And we can plot it simply to visualize the support,

> image(N,col=c("white","blue"))

As we zoom in, we can visualize this fractal property,

Bounding sums of random variables, part 1

For the last course MAT8886 of this (long) winter session, on copulas (and extremes), we will discuss risk aggregation. The course will mainly be about the problem of bounding the distribution (or some risk measure, say the Value-at-Risk) of the sum of two random variables with given marginal distributions. For instance, if we have two Gaussian risks, what could be the worst-case scenario for the 99% quantile of the sum? Note that I mention implications in terms of risk management, but of course, those questions are extremely important in terms of statistical inference, see e.g. Fan & Park (2006).

This problem is sometimes related to a question asked by Kolmogorov almost one hundred years ago, as mentioned in Makarov (1981). One year later, Rüschendorf (1982) also suggested a proof of the bounds calculation. Here, we focus on dimension 2. As usual, it is the simplest case. But as mentioned recently in Kreinovich & Ferson (2005), in dimension 3 (or higher), “computing the best-possible bounds for arbitrary n is an NP-hard (computationally intractable) problem“. So let us focus on the case where we sum (only) two random variables (for those interested in higher dimensions, Puccetti & Rüschendorf (2012) provided interesting results for a dual version of those optimal bounds).

Let $\Delta$ denote the set of univariate continuous distribution functions, left-continuous, on $\mathbb{R}$, and $\Delta^+$ the set of distributions on $\mathbb{R}^+$. Thus, $F\in\Delta^+$ if $F\in\Delta$ and $F(0)=0$. Consider now two distributions $F,G\in\Delta^+$. In a very general setting, it is possible to consider operators on $\Delta^+\times\Delta^+$. Thus, let $T:[0,1]\times[0,1]\rightarrow[0,1]$ denote an operator, increasing in each component, such that $T(1,1)=1$. And consider some function $L:\mathbb{R}^+\times\mathbb{R}^+\rightarrow\mathbb{R}^+$, assumed to be also increasing in each component (and continuous). For such functions $T$ and $L$, define the following (general) operator, $\tau_{T,L}(F,G)$, as

$$\tau_{T,L}(F,G)(x)=\sup_{L(u,v)=x}\{T(F(u),G(v))\}$$

One interesting case can be obtained when $T$ is a copula, $C$. In that case,

$$\tau_{C,L}(F,G):\Delta^+\times\Delta^+\rightarrow\Delta^+$$

and further, it is possible to write

$$\tau_{C,L}(F,G)(x)=\sup_{(u,v)\in L^{-1}(x)}\{C(F(u),G(v))\}$$

It is also possible to consider other (general) operators, e.g. based on the sum,

$$\sigma_{C,L}(F,G)(x)=\int_{(u,v)\in L^{-1}(x)} dC(F(u),G(v))$$

or on the minimum,

$$\rho_{C,L}(F,G)(x)=\inf_{(u,v)\in L^{-1}(x)}\{C^\star(F(u),G(v))\}$$

where $C^\star$ is the dual of the copula $C$, i.e. $C^\star(u,v)=u+v-C(u,v)$. Note that those operators can be used to define distribution functions, i.e.

$$\sigma_{C,L}(F,G):\Delta^+\times\Delta^+\rightarrow\Delta^+$$

and similarly

$$\rho_{C,L}(F,G):\Delta^+\times\Delta^+\rightarrow\Delta^+$$

All that seems too theoretical? An application can be the case of the sum, i.e. $L(x,y)=x+y$. In that case, $\sigma_{C,+}(F,G)$ is the distribution of the sum of two random variables with marginal distributions $F$ and $G$, and copula $C$. Thus, $\sigma_{C^\perp,+}(F,G)$ is simply the convolution of two distributions,

$$\sigma_{C^\perp,+}(F,G)(x)=\int_{u+v=x} dC^\perp(F(u),G(v))$$
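
For instance, with the lognormal margins used at the end of this post, this convolution (the independence case) can be approximated with a simple Riemann sum; this is only a sketch, not code from the original post,

F=function(x) plnorm(x,0,1)
g=function(x) dlnorm(x,0,1)
conv=function(x,n=1000){
  u=seq(0,x,length=n)
  sum(F(x-u)*g(u))*x/n}    # P(X+Y<=x) = int_0^x F(x-u) g(u) du under independence
X=seq(.05,10,by=.05)
Hindep=Vectorize(conv)(X)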

The important result (that can be found in Chapter 7 of Schweizer and Sklar (1983)) is that, given an operator $L$, then, for any copula $C$, one can find a lower bound for $\sigma_{C,L}(F,G)$,

$$\tau_{C^-,L}(F,G)\leq \tau_{C,L}(F,G)\leq\sigma_{C,L}(F,G)$$

as well as an upper bound,

$$\sigma_{C,L}(F,G)\leq \rho_{C,L}(F,G)\leq\rho_{C^-,L}(F,G)$$

Those inequalities come from the fact that, for all copulas $C$, $C\geq C^-$, where $C^-(u,v)=\max\{u+v-1,0\}$ is the Fréchet lower bound, which is a copula in dimension 2. Since this function is not a copula in higher dimension, one can easily imagine that getting those bounds in higher dimension will be much more complicated…

In the case of the sum of two random variables, with marginal distributions $F$ and $G$, bounds for the distribution of the sum, $H(x)=\mathbb{P}(X+Y\leq x)$ where $X\sim F$ and $Y\sim G$, can be written

$$H^-(x)=\tau_{C^-,+}(F,G)(x)=\sup_{u+v=x}\{\max\{F(u)+G(v)-1,0\}\}$$

for the lower bound, and

$$H^+(x)=\rho_{C^-,+}(F,G)(x)=\inf_{u+v=x}\{\min\{F(u)+G(v),1\}\}$$

for the upper bound. And those bounds are sharp, in the sense that, for all $t\in(0,1)$, there is a copula $C_t$ such that

$$\tau_{C_t,+}(F,G)(x)=\tau_{C^-,+}(F,G)(x)=t$$

and there is (another) copula $C_t$ such that

$$\sigma_{C_t,+}(F,G)(x)=\tau_{C^-,+}(F,G)(x)=t$$

Thus, using those results, it is possible to bound cumulative distribution functions. But actually, all that can also be done on quantiles (see Frank, Nelsen & Schweizer (1987)). For all $F\in\Delta^+$, let $F^{-1}$ denote its generalized inverse, left-continuous, and let $\nabla^+$ denote the set of those quantile functions. Define then the dual versions of our operators,

$$\tau^{-1}_{T,L}(F^{-1},G^{-1})(x)=\inf_{(u,v)\in T^{-1}(x)}\{L(F^{-1}(u),G^{-1}(v))\}$$

and

$$\rho^{-1}_{T,L}(F^{-1},G^{-1})(x)=\sup_{(u,v)\in (T^\star)^{-1}(x)}\{L(F^{-1}(u),G^{-1}(v))\}$$

Those definitions are really dual versions of the previous ones, in the sense that $\tau^{-1}_{T,L}(F^{-1},G^{-1})=[\tau_{T,L}(F,G)]^{-1}$ and $\rho^{-1}_{T,L}(F^{-1},G^{-1})=[\rho_{T,L}(F,G)]^{-1}$.

Note that if we focus on sums of bivariate distributions, the lower bound for the quantile of the sum is

$$\tau^{-1}_{C^{-},+}(F^{-1},G^{-1})(x)=\inf_{\max\{u+v-1,0\}=x}\{F^{-1}(u)+G^{-1}(v)\}$$

while the upper bound is

$$\rho^{-1}_{C^{-},+}(F^{-1},G^{-1})(x)=\sup_{\min\{u+v,1\}=x}\{F^{-1}(u)+G^{-1}(v)\}$$

A great thing is that it should not be too difficult to compute those quantities numerically. Perhaps a little bit more difficult for cumulative distribution functions, since they are not defined on a bounded support. But still, if the goal is to plot those bounds on $[0,10]$, for instance, the code is the following, for the sum of two lognormal LN(0,1) distributions.

> F=function(x) plnorm(x,0,1)
> G=function(x) plnorm(x,0,1)
> n=100
> X=seq(0,10,by=.05)
> Hinf=Hsup=rep(NA,length(X))
> for(i in 1:length(X)){
+ x=X[i]
+ U=seq(0,x,by=1/n); V=x-U
+ Hinf[i]=max(pmax(F(U)+G(V)-1,0))
+ Hsup[i]=min(pmin(F(U)+G(V),1))}

If we plot those bounds, we obtain

> plot(X,Hinf,ylim=c(0,1),type="s",col="red")
> lines(X,Hsup,type="s",col="red")

But somehow, it is even simpler to work with quantiles, since they are defined on a bounded support. Quantiles are here

> Finv=function(u) qlnorm(u,0,1)
> Ginv=function(u) qlnorm(u,0,1)

The idea will be to consider a discretized version of the unit interval as discussed in Williamson (1989), in a much more general setting. Again the idea is to compute, for instance

$$\sup_{u\in[0,x]}\{F^{-1}(u)+G^{-1}(x-u)\}$$

The idea is to consider $x=i/n$ and $u=j/n$; the bound for the quantile function at point $i/n$ is then

$$\sup_{j\in\{0,1,\cdots,i\}}\left\{F^{-1}\left(\frac{j}{n}\right)+G^{-1}\left(\frac{i-j}{n}\right)\right\}$$

The code to compute those bounds, for a given $n$, is here

> n=1000
> Qinf=Qsup=rep(NA,n-1)
> for(i in 1:(n-1)){
+ J=0:i
+ Qinf[i]=max(Finv(J/n)+Ginv((i-J)/n))
+ J=(i-1):(n-1)
+ Qsup[i]=min(Finv((J+1)/n)+Ginv((i-1-J+n)/n))
+ }

Here we have the following (several values of $n$ were considered, so that we can visualize the convergence of that numerical algorithm),
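
A simple way to plot those quantile bounds against the probability level (a sketch, the original graph is not reproduced here) is

p=(1:(n-1))/n
plot(p,Qinf,type="l",col="blue",ylim=range(c(Qinf,Qsup),finite=TRUE),
     xlab="probability level",ylab="quantile of the sum")
lines(p,Qsup,col="red")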

Here, we have a simple code to visualize bounds for quantiles for the sum of two risks. But it is possible to go further…