A short post to get back to a property I mentioned briefly in the MAT8595 class in January, and again in the MAT8181 class this week (to illustrate the distinction between weak and strong white noises). Recall that (real-valued) random variables $X$ and $Y$ are independent if, for all $x,y\in\mathbb{R}$,
$$\mathbb{P}(X\le x,\,Y\le y)=\mathbb{P}(X\le x)\cdot\mathbb{P}(Y\le y).$$
Another characterization, for integrable variables, is that
$$\mathbb{E}[f(X)g(Y)]=\mathbb{E}[f(X)]\cdot\mathbb{E}[g(Y)]$$
for all bounded measurable functions $f$ and $g$, which can be written, if the variables are square integrable,
$$\text{cov}(f(X),g(Y))=0.$$
The idea to prove this characterization is to observe that if $X$ and $Y$ are independent, then, for indicator functions $f=\mathbf{1}_A$ and $g=\mathbf{1}_B$,
$$\mathbb{E}[\mathbf{1}_A(X)\mathbf{1}_B(Y)]=\mathbb{P}(X\in A,\,Y\in B)$$
can be written
$$\mathbb{P}(X\in A)\cdot\mathbb{P}(Y\in B)=\mathbb{E}[\mathbf{1}_A(X)]\cdot\mathbb{E}[\mathbf{1}_B(Y)].$$
Using a standard argument in integration theory, the equality remains valid for step functions (not only indicators), then extends to positive measurable functions, and finally to integrable functions. So proving this result is not that difficult. Observe that Rényi (1959) – inspired by Gebelein (1947) – and followed by Sarmanov (1958) introduced the concept of maximal correlation, which can be related to this result,
$$\rho^\star(X,Y)=\max_{f,g}\left\{\text{corr}(f(X),g(Y))\right\}$$
where the maximum is taken over all functions $f$ and $g$ such that the correlation exists. Actually, it is possible to consider only transformations such that $\mathbb{E}[f(X)]=0$ and $\text{Var}(f(X))=1$ (and similarly for $g(Y)$); the idea is that we simply center and scale, which does not impact the correlation. Thus, $X$ and $Y$ are independent if and only if
$$\rho^\star(X,Y)=0.$$
Algorithms to estimate that coefficient are interesting. The problem can be written, equivalently, as minimizing
$$\mathbb{E}\left([f(X)-g(Y)]^2\right).$$
And if the minimization is considered over $f$, assuming that $g$ is fixed, then the optimal transformation is
$$f^\star(x)=\mathbb{E}[g(Y)\mid X=x]$$
(then recentered and rescaled). And similarly for $g$. So, using an iterative algorithm, it is possible to get $f^\star$ and $g^\star$ (see Breiman and Friedman (1985) for more details); a small sketch is given below. Actually, those functions appear in nonlinear canonical analysis. As mentioned in Lancaster (1957), for a Gaussian random vector $(X,Y)$ with correlation $r$,
$$\rho^\star(X,Y)=|r|$$
and in that case $f^\star$ and $g^\star$ are affine functions. This can be related to Hermite polynomials and to the expansion of the bivariate Gaussian density. I still hope that someone will go further for the project in the MAT8181 course.
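As an illustration of the iterative algorithm, here is a minimal R sketch of the alternating conditional expectations idea, on simulated data, where the conditional expectations are replaced by loess smoothers (this is only a sketch; the acepack package contains an actual implementation of the algorithm of Breiman and Friedman (1985)),

# alternating conditional expectations (sketch): iterate
#   f(x) <- E[g(Y)|X=x]  and  g(y) <- E[f(X)|Y=y],
# centering and scaling at each step
set.seed(1)
n = 1000
X = rnorm(n)
Y = X^2 + rnorm(n)               # dependent, but almost uncorrelated with X
g = as.numeric(scale(Y))         # initial transformation, centered and scaled
for(i in 1:50){
  f = as.numeric(scale(fitted(loess(g ~ X))))   # f(x) ~ E[g(Y)|X=x]
  g = as.numeric(scale(fitted(loess(f ~ Y))))   # g(y) ~ E[f(X)|Y=y]
}
cor(X, Y)   # close to zero
cor(f, g)   # estimated maximal correlation, much larger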
Non transitivity of correlation for random vectors in dimension 3
Dependence in dimension 2 is difficult. But one has to admit that dimension 2 is way simpler than dimension 3! I recently rediscovered a nice paper, Langford, Schwertman & Owens (2001), on the transitivity of the property of being positively correlated (which inspired the odd title of this post). And more recently, Castro Sotos, Vanhoof, Van Den Noortgate & Onghena (2009) conducted a study which confirmed that there are strong misconceptions about correlation (and I guess not only because probabilistic reasoning is extremely weak, as mentioned in Stock & Gross (1989)) and association (as already stated in Estepa & Batanero (1996), or Batanero, Estepa, Godino and Green (1996)). My understanding is that it is possible to have almost anything… even counterintuitive results. For instance, if we want to mix independence and comonotonicity (i.e. perfect positive dependence), all the theorems you might think of should probably be incorrect. Consider the following statement (based on some old examples I have been using in my courses 5 or 6 years ago, see e.g. here),
“If X and Y are comonotonic, and if Y and Z are comonotonic, then X and Z are comonotonic”
Well, this result seems to be intuitive, and probably valid. But it is not. Consider the following triplet,
Projections on bivariate planes of the three dimensional vector are
Here, X and Y are comonotonic, so are Y and Z, but X and Z are independent… Weird, isn’t it? Another one?
“If X and Y are comonotonic, and if Y and Z are independent, then X and Z are independent”
Again, even if it is intuitive, it is not correct… Consider for instance the following 3 dimensional distribution,
Here, X and Y are comonotonic, while Y and Z are independent, but here X and Z are countercomonotonic (perfect negative dependence). It is also possible to consider the following distribution,
that can be visualized below,
In that case, X and Y are comonotonic, while Y and Z are independent, but here X and Z are comonotonic (perfect positive dependence). So obviously, we should be able to construct any kind of counterexample, for any kind of result we might think of as intuitive.
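To make one such construction explicit, here is a small R check on a hypothetical discrete triplet of my own (not necessarily the one displayed above), taking values in {0,1}³, for which X and Y are comonotonic, Y and Z are independent, and X and Z are countercomonotonic. The check compares each bivariate cdf with, respectively, the comonotone upper bound min(u,v), the independence product uv, and the countercomonotone lower bound max(u+v-1,0). All probabilities are dyadic, so exact comparisons are safe,

# joint distribution of (X,Y,Z) on {0,1}^3 (hypothetical example)
S = data.frame(x = c(0,1,1,1,1),
               y = c(0,0,0,1,1),
               z = c(1,0,1,0,1),
               p = c(1/4,1/8,1/8,1/8,3/8))
# univariate and bivariate cdfs, F(a) = P(U<=a), F(a,b) = P(U<=a, V<=b)
F1 = function(u, p) function(a) sum(p[u <= a])
F2 = function(u, v, p) function(a, b) sum(p[u <= a & v <= b])
FX = F1(S$x, S$p); FY = F1(S$y, S$p); FZ = F1(S$z, S$p)
FXY = F2(S$x, S$y, S$p); FYZ = F2(S$y, S$z, S$p); FXZ = F2(S$x, S$z, S$p)
g = expand.grid(a = 0:1, b = 0:1)
# (X,Y) comonotonic: joint cdf equals the upper Frechet bound min(F(a),F(b))
all(apply(g, 1, function(r) FXY(r[1], r[2]) == min(FX(r[1]), FY(r[2]))))
# (Y,Z) independent: joint cdf equals the product of the margins
all(apply(g, 1, function(r) FYZ(r[1], r[2]) == FY(r[1]) * FZ(r[2])))
# (X,Z) countercomonotonic: joint cdf equals the lower bound max(F(a)+F(b)-1,0)
all(apply(g, 1, function(r) FXZ(r[1], r[2]) == max(FX(r[1]) + FZ(r[2]) - 1, 0)))

All three checks return TRUE, so the second statement above does fail on this triplet.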
To be honest, the problem with intuition is that it usually comes from the Gaussian case, and from the perception that dependence is related to correlation – Pearson’s linear correlation. Consider the case of a 3-dimensional random vector $(X,Y,Z)$, with correlation matrix
$$R=\begin{pmatrix}1 & r_{XY} & r_{XZ}\\ r_{XY} & 1 & r_{YZ}\\ r_{XZ} & r_{YZ} & 1\end{pmatrix}.$$
Given two of the correlations, $r_{XY}$ and $r_{XZ}$, what could we say about the third one, $r_{YZ}$? For instance, the intuition is that if $r_{XY}$ and $r_{XZ}$ are positive, then $r_{YZ}$ is likely to be positive too (perhaps). The only property (at least the most important one) we have on that correlation matrix is that it should be positive semi-definite. So if we play with the eigenvalues, it should be possible to derive inequalities satisfied by $r_{YZ}$. Langford, Schwertman & Owens (2001) claim (in Theorem 3) that the correlations have to satisfy
$$r_{XY}r_{XZ}-\sqrt{(1-r_{XY}^2)(1-r_{XZ}^2)}\;\le\; r_{YZ}\;\le\; r_{XY}r_{XZ}+\sqrt{(1-r_{XY}^2)(1-r_{XZ}^2)}$$
which is simply the fact that the determinant of the correlation matrix has to be nonnegative; that property was already mentioned in Kendall (1948), as an exercise.
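As a quick sanity check of that closed-form bound, we can compare it with a brute-force search on the smallest eigenvalue of the correlation matrix (a small sketch, with arbitrary values for the two given correlations),

# closed-form bounds on r_YZ, given r_XY=a and r_XZ=b, from det(R) >= 0
a = .6; b = -.3
c(a*b - sqrt((1-a^2)*(1-b^2)), a*b + sqrt((1-a^2)*(1-b^2)))
# brute force: values of c for which the correlation matrix is PSD
V = seq(-1, 1, by=.0001)
psd = sapply(V, function(cc) min(eigen(
  matrix(c(1,a,b,a,1,cc,b,cc,1),3,3))$values) >= 0)
range(V[psd])   # matches the closed-form interval (up to grid resolution)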
But is that a necessary and sufficient condition? Since I am extremely lazy, let us run some numerical calculations to visualize the possible values for $r_{YZ}$, as a function of $r_{XY}$ and $r_{XZ}$. Consider the following code,
U=seq(-1,1,by=.1)
V=seq(-1,1,by=.001)
# upper bound for r_YZ, given r_XY=a and r_XZ=b
FSUP=function(a,b){
  DF=function(c){min(eigen(matrix(c(1,a,b,a,1,c,b,c,1),3,3))$values)}
  V[max(which(Vectorize(DF)(V)>0))]}
# lower bound for r_YZ
FINF=function(a,b){
  DF=function(c){min(eigen(matrix(c(1,a,b,a,1,c,b,c,1),3,3))$values)}
  V[min(which(Vectorize(DF)(V)>0))]}
MSUP=outer(U,U,Vectorize(FSUP))
MINF=outer(U,U,Vectorize(FINF))
library(RColorBrewer)
clr=rev(brewer.pal(6,"RdBu"))
U=U[2:20]
MSUP=MSUP[2:20,2:20]
MINF=MINF[2:20,2:20]
persp(U,U,MSUP,col="green",shade=TRUE)
image(U,U,MSUP,breaks=((-3):3)/3,col=clr)
persp(U,U,MINF,col="green",shade=TRUE)
image(U,U,MINF,breaks=((-3):3)/3,col=clr)
Here, we can derive the lower and the upper bounds for $r_{YZ}$, as functions of $r_{XY}$ and $r_{XZ}$ (the surfaces and level plots generated above). We can also fix $r_{XZ}$, say at $-0.7$, and visualize the set of admissible pairs $(r_{XY},r_{YZ})$,
V=seq(-1,1,by=.001)
U=seq(-1,1,by=.1)
U=U[2:(length(U)-1)]
V=V[2:(length(V)-1)]
U=c(-.9999,U,.9999)
V=c(-.99999,V,.99999)
# upper and lower bounds for r_YZ, with r_XZ fixed at -0.7
FSUP=function(a){
  DF=function(c){min(eigen(matrix(c(1,a,-.7,a,1,c,-.7,c,1),3,3))$values)}
  V[max(which(Vectorize(DF)(V)>0))]}
FINF=function(a){
  DF=function(c){min(eigen(matrix(c(1,a,-.7,a,1,c,-.7,c,1),3,3))$values)}
  V[min(which(Vectorize(DF)(V)>0))]}
VS=Vectorize(FSUP)(U)
VI=Vectorize(FINF)(U)
plot(c(U,U),c(VS,VI),col="white")
polygon(c(U,rev(U)),c(VS,rev(VI)),col="yellow",border=NA)
lines(U,VS,lwd=2,col="red")
lines(U,VI,lwd=2,col="red")
We do observe here extremely nice ellipses… Consider the case of a null correlation $r_{XZ}=0$: then the region of possible values for $(r_{XY},r_{YZ})$ is the unit circle. The interpretation is that if $r_{XZ}$ is null, and so is $r_{XY}$, then $r_{YZ}$ might take any value between $-1$ and $+1$ (under the assumption that the marginal distributions allow such values, e.g. Gaussian marginal distributions). On the other hand, if $r_{XY}$ is either $-1$ or $+1$ (perfect negative/positive correlation), then $r_{YZ}$ has to be null…
We keep breaking records? So what?… Get statistical perspective…
This summer, we were told that some financial series broke records (here, in French).
For instance, the French CAC40 had negative returns for 11 consecutive days (which had never been seen before).
> library(tseries)
> x<-get.hist.quote("^FCHI")
> Y=x$Close
> Z=diff(log(Y))
> RUN=rle(as.character(Z>=0))$lengths   # lengths of runs of gains/losses
> n=length(RUN)
> LOSS=RUN[seq(2,n,by=2)]
> GAIN=RUN[seq(1,n,by=2)]
> TG=sort(table(GAIN))
> TG[as.character(1:13)]
GAIN
   1    2    3    4    5    6    7    8    9 <NA> <NA> <NA>   13
 645  336  170   72   63   21    7    3    4   NA   NA   NA    1
> TL=sort(table(LOSS))
> TL[as.character(1:15)]
LOSS
   1    2    3    4    5    6    7    8    9 <NA>   11 <NA> <NA>
 664  337  186   68   42   14    5    3    1   NA    1   NA   NA
> TR=sort(table(RUN))
> TR[as.character(1:15)]
RUN
   1    2    3    4    5    6    7    8    9 <NA>   11 <NA>   13
1309  673  356  140  105   35   12    6    5   NA    1   NA    1
Indeed, 11 consecutive days of negative returns is a record. But one should keep in mind that the real record for runs is 13 consecutive days with positive returns…
But what does that mean? Can we still assume time independence of log-returns (since a lot of financial models are still based on that assumption)?
Actually, if financial series were time-independent, such a probability should indeed be rather small, at least for a single run of 10 or 11 days. Something like

(assuming that, each day, the probability to observe a negative return is 50%). But maybe not over 25 years (6250 trading days): the probability to observe a sub-sequence of 10 consecutive negative values (with daily probability one half) somewhere within 6250 observations will be much larger. My guess is that it would be

where at the numerator we have the number of favourable cases, and at the denominator the total number of cases. At the numerator, the first term is the number of cases where (at least) the first 10 values are negative; for the second term, we count the cases where the first value is positive and the (at least) 10 following ones are negative (then the second is positive and the next 10 are negative, then the third is positive, etc). For those interested in more details (and a more general formula on runs), an answer can be found here.
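Note that this crude count double-counts paths containing several such runs, so it is only a rough order of magnitude. A quick Monte Carlo check gives the same message (a sketch, assuming 6250 iid days, each negative with probability one half),

# Monte Carlo: probability of at least one run of 10+ negative days
# among 6250 iid days (each negative with probability 1/2)
set.seed(123)
ns = 1000
hit = replicate(ns, {
  r = rle(sample(c("neg","pos"), 6250, replace=TRUE))
  any(r$lengths[r$values == "neg"] >= 10)
})
mean(hit)   # large: such a run is quite likely over 25 years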
But note that the probability is quite large… So it is not that unlikely to observe such a sequence over 25 years.
A classical idea when looking at time series is to look at the autocorrelation function of the returns, which might suggest that there is no correlation with past returns. But it should be possible to do more advanced tests.
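For instance, the autocorrelogram can be obtained with (using the Z series defined above),

# sample autocorrelations of the CAC40 daily log-returns
acf(as.vector(Z), na.action = na.pass, main = "CAC40 log-returns")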
On the CAC40 series, we can run a runs test of independence on the latest 100 consecutive days, and look at the p-value,
> library(lawstat)
> n=length(Z)
> u=as.vector(Z[(n-100):n])
> runs.test(u,plot=TRUE)

        Runs Test - Two sided

data:  u
Standardized Runs Statistic = -0.4991, p-value = 0.6177
The B’s here are returns lower than the median (which is almost null, so they might be considered as negative returns). With such a high p-value, we cannot reject the null hypothesis of time independence.
If we consider a moving time window, we can see that we accept the assumption of independence most of the time.
Actually, here, the time window is 100 days (+/- 50 days). But it is possible to consider 200 days,
or even 400 days,
So, unless we focus on 2006, it looks like we should reject the idea of time dependence in financial markets.
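The moving-window p-values can be computed along the following lines (a sketch; the exact window sizes and graphics behind the figures above may differ),

# p-value of the runs test over a moving window of 100 days (+/- 50 days)
n = length(Z)
w = 50
pv = sapply((w+1):(n-w), function(i)
  runs.test(as.vector(Z[(i-w):(i+w)]))$p.value)
plot(pv, type="l", ylab="p-value of the runs test")
abline(h=.05, lty=2)   # 5% significance level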
It is also possible to look more carefully at the distribution of runs, and to compare it with the case of independent samples (here we consider Monte Carlo generation of sequences having the same size),
> m=length(Z)
> ns=100000
> HIST=matrix(NA,ns,15)
> for(j in 1:ns){
+   XX=sample(c("A","B"),size=m,replace=TRUE)
+   RUNX=rle(as.character(XX))$lengths
+   S=sort(table(RUNX))
+   HIST[j,]=S[as.character(1:15)]
+ }
> meana=function(x){sum(x[is.na(x)==FALSE])/length(x)}
> cbind(TR[as.character(1:15)],apply(HIST,2,meana),
+       round(m/(2^(1+1:15))))
   [,1]       [,2] [,3]
1  1309 1305.12304 1305
2   673  652.46513  652
3   356  326.21119  326
4   140  163.05101  163
5   105   81.52366   82
6    35   40.74539   41
7    12   20.38198   20
8     6   10.16383   10
9     5    5.09871    5
10   NA    2.56239    3
11    1    1.26939    1
12   NA    0.63731    1
13    1    0.31815    0
14   NA    0.15812    0
15   NA    0.08013    0
The first column above gives the empirical frequencies of runs of length 1, 2, 3, etc. The second one gives the average frequencies obtained on random simulations of independent samples. The third one gives the theoretical frequencies: under independence, run lengths follow a geometric distribution with parameter one half, so out of roughly $m/2$ runs, the expected number of runs of length $k$ is $m/2^{k+1}$.
Here again, it looks like our time series behaves like an independent sample. Here is also a nice paper by Mark Schilling on the longest run of heads.
So it is not that odd to observe such a series of losses on financial markets….
Independence assumption in insurance
Two weeks ago, I received an email in my ENSAE mailbox (which I rarely check) that asked a real question. “For the collective risk model S=X1+X2+…+XN, we assume that the Xi and N are independent (claim amounts and number of claims). Question: in practice (for instance in motor insurance), how can this independence be explained?”
Regarding the form of the question, theory and practice are mixed up a little: it is the theory that imposes this assumption and, indeed, it should be confronted with practice before using the results derived from the collective model (in particular in terms of solvency; I will come back to this at the end). But it does raise a very interesting question.
As a reminder, this assumption was made by Filip Lundberg when he proposed the first results in 1903 (or even earlier), at a time when the study of stochastic processes was not very developed… Charles Spearman only introduced his correlation measure in 1904, for instance (even if related ideas can be found in Francis Galton in 1888, and then in Karl Pearson in 1892¹). In other words, the first model relied on the simplest possible assumptions (for the time). Since then, the study of point processes has developed considerably, and a huge number of results have been obtained. In other words, there exist theoretical results that make it possible to move beyond this independence assumption.
A small remark in passing: in the books with Michel Denuit, we had insisted at length on this independence assumption… and it is precisely on this point that Claude Bébéar opened the preface (here) and Hans Bühlmann closed the postface (there). It was, by the way, Hans Bühlmann who noted in 1963 that “the independence hypothesis is so common to be made that many authors forget to mention it”.
To be complete, there are three distinct independence assumptions among the hypotheses of the collective model (a small simulation sketch of the model under these assumptions is given after the list):
- independence between claim arrivals; more properly, one speaks of a homogeneous Poisson process (the durations between occurrences are independent exponential variables),
- independence between the (individual) claim costs,
- independence between the claim costs and the claim arrivals.
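For instance, here is a minimal R sketch of the collective model under these three independence assumptions (the Poisson and lognormal parameters are arbitrary, chosen only for illustration),

# collective risk model S = X1 + ... + XN under independence:
# N ~ Poisson(lambda), costs Xi iid lognormal, N independent of the Xi's
set.seed(1)
lambda = 100      # expected annual number of claims (arbitrary)
ns = 10000        # number of simulated years
S = replicate(ns, sum(rlnorm(rpois(1, lambda), meanlog=0, sdlog=1)))
mean(S)                 # empirical mean of the aggregate loss
lambda * exp(1/2)       # E[N]*E[X], valid under the independence assumption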
It is mostly the first assumption that has been relaxed. Examples can be found in the following papers,
- Malinovskii, V.K. (1998). Non-Poissonian claims’ arrivals and calculation of the probability of ruin. Insurance: Mathematics and Economics, 22, 2, 123-138.
- Gyllenberg, M. & Silvestrov, D.S. (2000). Cramér-Lundberg approximation for nonlinearly perturbed risk processes. Insurance: Mathematics and Economics, 26, 1, 75-90.
On the practical side, this phenomenon shows up in particular for natural catastrophes, which are strongly cyclical. The best known and most studied example is hurricanes,
- Parisi, F. & Lund, R. (2000). Seasonality and return periods of landfalling Atlantic basin hurricanes. Australian and New Zealand Journal of Statistics, 42, 3, 271-282.
But the same thing occurs in motor insurance, because of rain and fog (in France at least²).
For the second assumption, some results can be found in the recent papers by Stéphane Loisel (here).
But it is the last point that was raised in the email, and only that one… With the same dichotomy as before, there are theoretical results, in particular in the papers by Hansjörg Albrecher,
- Albrecher, H. & Boxma, O.J. (2004). A ruin model with dependence between claim sizes and claim intervals. Insurance: Mathematics and Economics, 35, 2, 245-254.
On the practical side, the most classical and most studied example is earthquakes. To put it very simply, an earthquake is caused by an accumulation of energy: the longer we wait, the larger the next earthquake is likely to be. In short, from a practical point of view, this assumption can be delicate to make. In a paper with David Sibaï (here), we had used an ACD model with a marked point process for river floods: we modelled a joint process taking into account the durations of the floods, the durations between floods, and the magnitudes of the floods. There too, intensities and durations are not independent.
But beyond the explanation, one should also – perhaps – try to test this independence. If anyone has data or results on this topic, I am interested.
To conclude, I would say that this independence assumption is fundamental. If we remove it, we may lose many classical results, on solvency for instance. To take up a common idea shared by many people, in the Encyclopaedia of Financial Engineering and Risk Management (here), we learn that “reinsurance is able to offer additional underwriting capacity for cedants, but also to reduce the probability of a direct insurer’s ruin”. In fact, this result becomes false if the independence assumption is removed: buying reinsurance can then increase the probability of ruin! Reinsurance then reduces the solvency of the insurance company (or at least it can). This is in a paper I submitted recently, so I will keep all that for a post in the coming months… to be continued.
1 On Pearson’s correlation – which I have criticized a lot on this blog – I refer to a very nice paper that I have (only) just discovered, by Joseph Rodgers and Alan Nicewander, entitled Thirteen Ways to Look at the Correlation Coefficient (here).
2 If anyone has data to illustrate this point, I am interested!