Tag Archives: normal

C'est normal! (part 1) – what if normality did not exist?

A few weeks ago, I was chatting with a colleague who is a lawyer (while trying to glean some legal statistics), and as we were discussing the slowness of pre-trial investigations, or the reversal of the burden of proof (I cannot remember which), I was surprised to hear her say "c'est normal" ("that's normal"). I know that a lawyer and a statistician do not necessarily give the same meaning to words, and the remark bothered me because a situation that may look "normal" (since it is observed regularly) is not necessarily "just" (this is Hume's is/ought problem, but I will come back to it in another post).

The starting point is to understand what "empirical" normality is, as observed in a population – what a statistician might call "normal". To begin with, I wanted to revisit an example told in The End of Average by Todd Rose, who tries to show, with supporting examples, that the average man does not exist.

  • Quételet's average man

In the 19th century, when several astronomers measured the speed of the same celestial object, they would (often) obtain several different measurements. To decide "which one" to use in their computations, the idea of using "the method of means" quickly became standard – as recalled by Stahl (2006), and above all Sheynin (1973) – this "mean" having a higher precision than any other quantity (we would now say "statistic").

Adolphe Quetelet was, it seems, the first to apply this averaging to human measurements, introducing his famous concept of the "average man". As I discussed in a previous post, the mean is a very particular quantity, whose meaning is not necessarily clear. If we define the mean through the minimization of a quadratic error, we get an interpretation in terms of prediction (this is the notion of elicitable measure I mentioned in my last course): the average height is the height that a person drawn at random "should" measure (up to some random – and unpredictable – variation). In 1846, in a letter (published in Lettres sur la théorie des probabilités, appliquée aux sciences morales), Adolphe Quetelet used the image of the statue of the gladiator to explain what the average man could be.
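
To see the connection between the mean and quadratic loss, here is a minimal R sketch (the simulated heights are just for illustration): minimizing the sum of squared deviations numerically recovers the sample mean, while minimizing absolute deviations recovers the median.

> x=rnorm(100,170,10)                                    # simulated heights
> optimize(function(m) sum((x-m)^2),c(100,250))$minimum  # close to mean(x)
> optimize(function(m) sum(abs(x-m)),c(100,250))$minimum # close to median(x)
> c(mean(x),median(x))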

  • Francis Galton's interpretation

This average man was very popular at the time, in particular within the English eugenics school, led at the time by Francis Galton, even though the latter was mostly interested in deviations from that norm (deviations upwards and deviations downwards). As recalled by Bulmer (2004), "the deviations from that average—upwards towards genius, and downwards towards stupidity—must follow the law that governs deviations from all true averages". Galton's work aimed at understanding those deviations. While Florence Nightingale claimed that "the Average Man is God's Will", Galton was more interested in their hereditary nature. But does this "average man" make any sense?

  • The average human being does not exist

An interesting anecdote is that of two statues in Cleveland, Norma and Normman. The artist Abram Belskie and the obstetrician Robert Latou Dickinson created these statues together, in 1943. Their particularity is that no model posed for them: the idea was to represent a woman and a man with the average body measurements of the time.

Once the statues were completed, a contest was organized to find out whom they could possibly represent. Several thousand people from Ohio sent in their measurements, but none matched those of the statues. Sure, several hundred had the right height, and several hundred had the right chest size, but nobody had all the right measurements. Because, as Todd Rose explains, a human being is not one-dimensional: we measure people along several dimensions.

And trying to summarize a person by a single one-dimensional quantity is far too reductive. This is what he shows in his book with intelligence tests, for instance, where the same IQ can correspond to two very different people.

The same goes for deciding whether to hire someone: focusing on a single indicator makes little sense. The trouble, when working in a multivariate setting, is that the mean loses much of its meaning. To quote the title of a post published six months ago, being average can be extraordinary.

  • The curse of dimensionality (in high dimension, space is very empty…)

In fact, this problem is well known to statisticians as the "curse of dimensionality". Take a simple example: suppose that a quantity of interest follows a normal distribution https://latex.codecogs.com/gif.latex?\mathcal{N}(\mu,\sigma^2), for instance weight, height, chest size, etc. One could say that the norm is to lie in the interval https://latex.codecogs.com/gif.latex?[\mu\pm%201.5%20\cdot\sigma]. With a normal distribution, this happens in about 87% of cases,

> (1-2*pnorm(-1.5))
[1] 0.8663856

and about 13% of observations will be seen as "abnormal". They can be abnormally small, or abnormally large. That is what the picture below shows: here we look at a single dimension.

We can now look at two dimensions, weight and height, for example. The norm here would be to lie, in both dimensions, within the interval https://latex.codecogs.com/gif.latex?[\mu\pm%201.5%20\cdot\sigma]. If the two quantities are independent, the probability that both are "normal" is 75%,

> (1-2*pnorm(-1.5))^2
[1] 0.750624

In dimension two, 75% of the observations are "normal", and 25% are "abnormal".

In dimension 3, we drop to 65%,

> (1-2*pnorm(-1.5))^3
[1] 0.6503298

that is, 35% of "abnormal" observations (more than a third).

And so on. In dimension five, we drop below 50%,

> (1-2*pnorm(-1.5))^5
[1] 0.4881532

in other words, being within the norm in all 5 dimensions is no longer something the majority achieves. And in dimension twenty, those who are "normal" are actually atypical, with a proportion of the order of 5%,

> (1-2*pnorm(-1.5))^20
[1] 0.0567838
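
A quick way to visualize this decay, assuming independent Gaussian characteristics (a minimal sketch):

> d=1:20
> p=(1-2*pnorm(-1.5))^d
> plot(d,p,type="b",xlab="dimension",ylab="proportion of 'normal' observations")
> abline(h=.5,lty=2)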

In short, normality is a particularly strange concept from an empirical point of view, because it is intuitively associated with the idea of a majority. Yet that is not the case: normality is, precisely, atypical.

Generating your own normal distribution table

It might sound incredibly old-fashioned, but for the ACT2121 probability course exam (which prepares students for Exam P of the Society of Actuaries), I will provide a standard normal distribution table. The problem is that it is never the one we are looking for (sometimes it is the survival function, sometimes the cumulative distribution function, sometimes only positive values are tabulated, etc). Here is the one that will be given for the exam, this Friday.

Now, here is the code I used to generate the table (in a LaTeX format),

> u=seq(0,3.09,by=0.01)
> p=pnorm(u)
> m=matrix(p,ncol=10,byrow=TRUE)

This matrix contains the values we want to put in the table,

> options(digits=4)
> m
        [,1]   [,2]   [,3]   [,4]   [,5]   [,6]   [,7]   [,8]   [,9]  [,10]
 [1,] 0.5000 0.5040 0.5080 0.5120 0.5160 0.5199 0.5239 0.5279 0.5319 0.5359
 [2,] 0.5398 0.5438 0.5478 0.5517 0.5557 0.5596 0.5636 0.5675 0.5714 0.5753
 [3,] 0.5793 0.5832 0.5871 0.5910 0.5948 0.5987 0.6026 0.6064 0.6103 0.6141
 [4,] 0.6179 0.6217 0.6255 0.6293 0.6331 0.6368 0.6406 0.6443 0.6480 0.6517
 [5,] 0.6554 0.6591 0.6628 0.6664 0.6700 0.6736 0.6772 0.6808 0.6844 0.6879
 [6,] 0.6915 0.6950 0.6985 0.7019 0.7054 0.7088 0.7123 0.7157 0.7190 0.7224
 [7,] 0.7257 0.7291 0.7324 0.7357 0.7389 0.7422 0.7454 0.7486 0.7517 0.7549
 [8,] 0.7580 0.7611 0.7642 0.7673 0.7704 0.7734 0.7764 0.7794 0.7823 0.7852
 [9,] 0.7881 0.7910 0.7939 0.7967 0.7995 0.8023 0.8051 0.8078 0.8106 0.8133
[10,] 0.8159 0.8186 0.8212 0.8238 0.8264 0.8289 0.8315 0.8340 0.8365 0.8389
[11,] 0.8413 0.8438 0.8461 0.8485 0.8508 0.8531 0.8554 0.8577 0.8599 0.8621
[12,] 0.8643 0.8665 0.8686 0.8708 0.8729 0.8749 0.8770 0.8790 0.8810 0.8830
[13,] 0.8849 0.8869 0.8888 0.8907 0.8925 0.8944 0.8962 0.8980 0.8997 0.9015
[14,] 0.9032 0.9049 0.9066 0.9082 0.9099 0.9115 0.9131 0.9147 0.9162 0.9177
[15,] 0.9192 0.9207 0.9222 0.9236 0.9251 0.9265 0.9279 0.9292 0.9306 0.9319
[16,] 0.9332 0.9345 0.9357 0.9370 0.9382 0.9394 0.9406 0.9418 0.9429 0.9441
[17,] 0.9452 0.9463 0.9474 0.9484 0.9495 0.9505 0.9515 0.9525 0.9535 0.9545
[18,] 0.9554 0.9564 0.9573 0.9582 0.9591 0.9599 0.9608 0.9616 0.9625 0.9633
[19,] 0.9641 0.9649 0.9656 0.9664 0.9671 0.9678 0.9686 0.9693 0.9699 0.9706
[20,] 0.9713 0.9719 0.9726 0.9732 0.9738 0.9744 0.9750 0.9756 0.9761 0.9767
[21,] 0.9772 0.9778 0.9783 0.9788 0.9793 0.9798 0.9803 0.9808 0.9812 0.9817
[22,] 0.9821 0.9826 0.9830 0.9834 0.9838 0.9842 0.9846 0.9850 0.9854 0.9857
[23,] 0.9861 0.9864 0.9868 0.9871 0.9875 0.9878 0.9881 0.9884 0.9887 0.9890
[24,] 0.9893 0.9896 0.9898 0.9901 0.9904 0.9906 0.9909 0.9911 0.9913 0.9916
[25,] 0.9918 0.9920 0.9922 0.9925 0.9927 0.9929 0.9931 0.9932 0.9934 0.9936
[26,] 0.9938 0.9940 0.9941 0.9943 0.9945 0.9946 0.9948 0.9949 0.9951 0.9952
[27,] 0.9953 0.9955 0.9956 0.9957 0.9959 0.9960 0.9961 0.9962 0.9963 0.9964
[28,] 0.9965 0.9966 0.9967 0.9968 0.9969 0.9970 0.9971 0.9972 0.9973 0.9974
[29,] 0.9974 0.9975 0.9976 0.9977 0.9977 0.9978 0.9979 0.9979 0.9980 0.9981
[30,] 0.9981 0.9982 0.9982 0.9983 0.9984 0.9984 0.9985 0.9985 0.9986 0.9986
[31,] 0.9987 0.9987 0.9987 0.9988 0.9988 0.9989 0.9989 0.9989 0.9990 0.9990
> rownames(m)=seq(0,3,by=.1)
> colnames(m)=seq(0,.09,by=.01)

To put it in a nice LaTeX format, we can use

> library(xtable)
> newm=xtable(m,digits=4)
> print.xtable(newm, type="latex", file="nor1.tex")

We now have a simple tex file containing a table.

\begin{table}[ht]
\centering
\begin{tabular}{rrrrrrrrrrr}
  \hline
 & 0 & 0.01 & 0.02 & 0.03 & 0.04 & 0.05 & 0.06 & 0.07 & 0.08 & 0.09 \\ 
  \hline
0 & 0.5000 & 0.5040 & 0.5080 & 0.5120 & 0.5160 & 0.5199 & 0.5239 & 0.5279 & 0.5319 & 0.5359 \\ 
  0.1 & 0.5398 & 0.5438 & 0.5478 & 0.5517 & 0.5557 & 0.5596 & 0.5636 & 0.5675 & 0.5714 & 0.5753 \\ 
  0.2 & 0.5793 & 0.5832 & 0.5871 & 0.5910 & 0.5948 & 0.5987 & 0.6026 & 0.6064 & 0.6103 & 0.6141 \\ 
  0.3 & 0.6179 & 0.6217 & 0.6255 & 0.6293 & 0.6331 & 0.6368 & 0.6406 & 0.6443 & 0.6480 & 0.6517 \\ 
  0.4 & 0.6554 & 0.6591 & 0.6628 & 0.6664 & 0.6700 & 0.6736 & 0.6772 & 0.6808 & 0.6844 & 0.6879 \\ 
  0.5 & 0.6915 & 0.6950 & 0.6985 & 0.7019 & 0.7054 & 0.7088 & 0.7123 & 0.7157 & 0.7190 & 0.7224 \\ 
  0.6 & 0.7257 & 0.7291 & 0.7324 & 0.7357 & 0.7389 & 0.7422 & 0.7454 & 0.7486 & 0.7517 & 0.7549 \\ 
  0.7 & 0.7580 & 0.7611 & 0.7642 & 0.7673 & 0.7704 & 0.7734 & 0.7764 & 0.7794 & 0.7823 & 0.7852 \\ 
  0.8 & 0.7881 & 0.7910 & 0.7939 & 0.7967 & 0.7995 & 0.8023 & 0.8051 & 0.8078 & 0.8106 & 0.8133 \\ 
  0.9 & 0.8159 & 0.8186 & 0.8212 & 0.8238 & 0.8264 & 0.8289 & 0.8315 & 0.8340 & 0.8365 & 0.8389 \\ 
  1 & 0.8413 & 0.8438 & 0.8461 & 0.8485 & 0.8508 & 0.8531 & 0.8554 & 0.8577 & 0.8599 & 0.8621 \\ 
  1.1 & 0.8643 & 0.8665 & 0.8686 & 0.8708 & 0.8729 & 0.8749 & 0.8770 & 0.8790 & 0.8810 & 0.8830 \\ 
  1.2 & 0.8849 & 0.8869 & 0.8888 & 0.8907 & 0.8925 & 0.8944 & 0.8962 & 0.8980 & 0.8997 & 0.9015 \\ 
  1.3 & 0.9032 & 0.9049 & 0.9066 & 0.9082 & 0.9099 & 0.9115 & 0.9131 & 0.9147 & 0.9162 & 0.9177 \\ 
  1.4 & 0.9192 & 0.9207 & 0.9222 & 0.9236 & 0.9251 & 0.9265 & 0.9279 & 0.9292 & 0.9306 & 0.9319 \\ 
  1.5 & 0.9332 & 0.9345 & 0.9357 & 0.9370 & 0.9382 & 0.9394 & 0.9406 & 0.9418 & 0.9429 & 0.9441 \\ 
  1.6 & 0.9452 & 0.9463 & 0.9474 & 0.9484 & 0.9495 & 0.9505 & 0.9515 & 0.9525 & 0.9535 & 0.9545 \\ 
  1.7 & 0.9554 & 0.9564 & 0.9573 & 0.9582 & 0.9591 & 0.9599 & 0.9608 & 0.9616 & 0.9625 & 0.9633 \\ 
  1.8 & 0.9641 & 0.9649 & 0.9656 & 0.9664 & 0.9671 & 0.9678 & 0.9686 & 0.9693 & 0.9699 & 0.9706 \\ 
  1.9 & 0.9713 & 0.9719 & 0.9726 & 0.9732 & 0.9738 & 0.9744 & 0.9750 & 0.9756 & 0.9761 & 0.9767 \\ 
  2 & 0.9772 & 0.9778 & 0.9783 & 0.9788 & 0.9793 & 0.9798 & 0.9803 & 0.9808 & 0.9812 & 0.9817 \\ 
  2.1 & 0.9821 & 0.9826 & 0.9830 & 0.9834 & 0.9838 & 0.9842 & 0.9846 & 0.9850 & 0.9854 & 0.9857 \\ 
  2.2 & 0.9861 & 0.9864 & 0.9868 & 0.9871 & 0.9875 & 0.9878 & 0.9881 & 0.9884 & 0.9887 & 0.9890 \\ 
  2.3 & 0.9893 & 0.9896 & 0.9898 & 0.9901 & 0.9904 & 0.9906 & 0.9909 & 0.9911 & 0.9913 & 0.9916 \\ 
  2.4 & 0.9918 & 0.9920 & 0.9922 & 0.9925 & 0.9927 & 0.9929 & 0.9931 & 0.9932 & 0.9934 & 0.9936 \\ 
  2.5 & 0.9938 & 0.9940 & 0.9941 & 0.9943 & 0.9945 & 0.9946 & 0.9948 & 0.9949 & 0.9951 & 0.9952 \\ 
  2.6 & 0.9953 & 0.9955 & 0.9956 & 0.9957 & 0.9959 & 0.9960 & 0.9961 & 0.9962 & 0.9963 & 0.9964 \\ 
  2.7 & 0.9965 & 0.9966 & 0.9967 & 0.9968 & 0.9969 & 0.9970 & 0.9971 & 0.9972 & 0.9973 & 0.9974 \\ 
  2.8 & 0.9974 & 0.9975 & 0.9976 & 0.9977 & 0.9977 & 0.9978 & 0.9979 & 0.9979 & 0.9980 & 0.9981 \\ 
  2.9 & 0.9981 & 0.9982 & 0.9982 & 0.9983 & 0.9984 & 0.9984 & 0.9985 & 0.9985 & 0.9986 & 0.9986 \\ 
  3 & 0.9987 & 0.9987 & 0.9987 & 0.9988 & 0.9988 & 0.9989 & 0.9989 & 0.9989 & 0.9990 & 0.9990 \\ 
   \hline
\end{tabular}
\end{table}

and the following code to get a graph, illustrating what was actually computed in the table (see a previous post for more details),

> library("tikzDevice")
> options(tikzMetricPackages = c("\\usepackage[utf8]{inputenc}",
+ "\\usepackage[T1]{fontenc}", "\\usetikzlibrary{calc}", "\\usepackage{amssymb}"))
+ tikz("normal-dist.tex", width = 8, height = 4, 
+ standAlone = TRUE,
+ packages = c("\\usepackage{tikz}",
+ "\\usepackage[active,tightpage,psfixbb]{preview}",
+ "\\PreviewEnvironment{pgfpicture}",
+ "\\setlength\\PreviewBorder{0pt}",
+ "\\usepackage{amssymb}"))
> u=seq(-3,3,by=.01)
> plot(u,dnorm(u),type="l",axes=FALSE,xlab="",ylab="",col="white")
> axis(1)
> I=which((u<=1))
> polygon(c(u[I],rev(u[I])),c(dnorm(u)[I],rep(0,length(I))),col="red",border=NA)
> lines(u,dnorm(u),lwd=2,col="blue")
> text(-1.5, dnorm(-1.5)+.17, "$\\textcolor{blue}{X\\sim\\mathcal{N}(0,1)}$", cex = 1.5)
> text(1.75, dnorm(1.75)+.25, 
+ "$\\textcolor{red}{\\mathbb{P}(X\\leq x)=\\displaystyle{
+ \\int_{-\\infty}^x \\varphi(t)dt}}$", cex = 1.5)
> dev.off()

Now we have the graph in another tex file. It is possible to embed the code in a larger tex file, or to compile the tex file to get a pdf file. I generated the pdf file.

Here is the tex file I finally get. It is now extremely simple to generate your own normal distribution table. I guess it would also be possible to use Sweave, or knitr: once I get a copy of Yihui's book, I'll try to use it to generate distribution tables for my courses! A minimal sketch in that direction is given below.
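
Here is what such a knitr document could look like (a rough sketch, with chunk options that are only a guess at a sensible setup):

\documentclass{article}
\begin{document}
<<normal-table, echo=FALSE, results='asis'>>=
library(xtable)
u <- seq(0, 3.09, by = 0.01)
m <- matrix(pnorm(u), ncol = 10, byrow = TRUE)
rownames(m) <- seq(0, 3, by = .1)
colnames(m) <- seq(0, .09, by = .01)
print(xtable(m, digits = 4))
@
\end{document}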

Taking logarithms in linear models

A quick post to complement and illustrate the use of the logarithm in a linear model (which we will discuss this week in class). The starting point is the linear model, where we assume that, conditionally on https://latex.codecogs.com/gif.latex?X, the variable https://latex.codecogs.com/gif.latex?Y follows a normal distribution. As a reminder, if https://latex.codecogs.com/gif.latex?Y\sim\mathcal{N}(\mu,\sigma^2), then https://latex.codecogs.com/gif.latex?\mathbb{E}[Y]=\mu and https://latex.codecogs.com/gif.latex?\text{Var}[Y]=\sigma^2. The 90% and 95% confidence intervals are symmetric around the mean (which is also the median, incidentally),

In a Gaussian model with homoscedasticity, https://latex.codecogs.com/gif.latex?Y|X=x\sim\mathcal{N}(\beta_0+\beta_1x,\sigma^2), i.e. https://latex.codecogs.com/gif.latex?\mathbb{E}[Y|X=x]=\beta_0+\beta_1x while https://latex.codecogs.com/gif.latex?\text{Var}[Y|X=x]=\sigma^2. We then obtain the following confidence bands, for a linear regression model,

Now, what happens if we take the exponential? For the normal distribution, recall that we obtain a lognormal distribution, i.e. https://latex.codecogs.com/gif.latex?\exp(Y)\sim\mathcal{LN}(\mu,\sigma^2), the two parameters being those of the underlying normal distribution, since now

https://latex.codecogs.com/gif.latex?\mathbb{E}[\exp(Y)]=\exp(\mu+\sigma^2/2)

while

https://latex.codecogs.com/gif.latex?\text{Var}[\exp(Y)]=(\exp(\sigma^2)-1)\exp(2\mu+\sigma^2)

Graphically, we obtain the following distribution, with the 90% and 95% confidence intervals shown below. The black dot is https://latex.codecogs.com/gif.latex?\exp(\mu) while the blue dot is the expectation of the lognormal distribution.

Note that a quantile of the lognormal distribution is the exponential of the corresponding quantile of the normal distribution. Indeed, if https://latex.codecogs.com/gif.latex?\mathbb{P}(Y\leq%20q)=\alpha then https://latex.codecogs.com/gif.latex?\mathbb{P}(\exp(Y)\leq%20\exp(q))=\alpha. In particular, https://latex.codecogs.com/gif.latex?\exp(\mu) is not the mean of https://latex.codecogs.com/gif.latex?\exp(Y), but its median (since https://latex.codecogs.com/gif.latex?\mu was the median of https://latex.codecogs.com/gif.latex?Y).

But it is not rare to see confidence intervals of the Gaussian form, that is, the mean plus or minus 1.96 standard deviations (symmetric around the mean). Here, we would get the following coverage levels,
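
To get an idea of the actual coverage of such a symmetric "mean ± 1.96 standard deviations" interval for a lognormal variable, here is a minimal sketch (with https://latex.codecogs.com/gif.latex?\mu=0 and a few values of https://latex.codecogs.com/gif.latex?\sigma; the function name is mine):

> coverage=function(sigma,z=1.96){
+ m=exp(sigma^2/2)                      # mean of the LN(0,sigma^2) distribution
+ s=sqrt((exp(sigma^2)-1)*exp(sigma^2)) # its standard deviation
+ plnorm(m+z*s,0,sigma)-plnorm(pmax(m-z*s,0),0,sigma)
+ }
> coverage(c(.2,.5,1,2))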

Note that there is no reason here to end up with probability https://latex.codecogs.com/gif.latex?1-\alpha of being inside the confidence interval obtained with the quantiles https://latex.codecogs.com/gif.latex?q_{1-\alpha/2} of the normal distribution.

Now, if we take the exponential of a linear model (i.e. the logarithm of the variable of interest is modeled by a linear model), we get

https://latex.codecogs.com/gif.latex?\mathbb{E}[\exp(Y)|X=x]=\exp(\beta_0+\beta_1x+\sigma^2/2)

with a (conditional) variance that depends on the explanatory variable,

https://latex.codecogs.com/gif.latex?\text{Var}[\exp(Y)|X=x]=(\exp(\sigma^2)-1)\exp(2(\beta_0+\beta_1x)+\sigma^2)

Here again, the most natural choice is to use, as bounds of the confidence interval, the quantiles of the lognormal distribution,

but it is not rare to see Gaussian-type intervals used instead,

and we again lose the interpretation, since the bounds no longer have anything to do with quantiles.
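
On simulated data, the two kinds of bands can be compared as follows (a minimal sketch, where the variable names and the simulated model are mine): the dashed red curves are the 5% and 95% quantiles of the fitted lognormal distribution, while the dotted blue curves are the symmetric "mean ± 1.96 standard deviations" bands.

> set.seed(1)
> n=200
> x=runif(n,0,10)
> y=exp(1+.2*x+rnorm(n,0,.5))           # log(y) is linear in x
> reg=lm(log(y)~x)
> s=summary(reg)$sigma
> u=seq(0,10,by=.1)
> mu=predict(reg,newdata=data.frame(x=u))
> plot(x,y,cex=.5)
> lines(u,exp(mu+s^2/2),col="red")                        # conditional mean of y
> lines(u,qlnorm(.05,meanlog=mu,sdlog=s),col="red",lty=2) # 5% quantile
> lines(u,qlnorm(.95,meanlog=mu,sdlog=s),col="red",lty=2) # 95% quantile
> m=exp(mu+s^2/2)
> v=(exp(s^2)-1)*exp(2*mu+s^2)
> lines(u,m-1.96*sqrt(v),col="blue",lty=3)                # Gaussian-type bands
> lines(u,m+1.96*sqrt(v),col="blue",lty=3)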

Bounding sums of random variables, part 1

For the last lecture of the MAT8886 course of this (long) winter session, on copulas (and extremes), we will discuss risk aggregation. The course will mainly be about the problem of bounding the distribution (or some risk measure, say the Value-at-Risk) of the sum of two random variables with given marginal distributions. For instance, if we have two Gaussian risks, what could be the worst-case scenario for the 99% quantile of the sum? Note that I mention implications in terms of risk management, but of course those questions are also extremely important in terms of statistical inference, see e.g. Fan & Park (2006).

This problem is sometimes related to a question asked by Kolmogorov almost one hundred years ago, as mentioned in Makarov (1981). One year later, Rüschendorf (1982) also suggested a proof for the computation of the bounds. Here, we focus on dimension 2, which is, as usual, the simple case. But as mentioned recently in Kreinovich & Ferson (2005), in dimension 3 (or higher), "computing the best-possible bounds for arbitrary n is an NP-hard (computationally intractable) problem". So let us focus on the case where we sum (only) two random variables (for those interested in higher dimensions, Puccetti & Rüschendorf (2012) provide interesting results on a dual version of those optimal bounds).

Let https://latex.codecogs.com/gif.latex?\Delta denote the set of univariate continuous distribution functions, left-continuous, on https://latex.codecogs.com/gif.latex?\mathbb{R}, and https://latex.codecogs.com/gif.latex?\Delta^+ the set of distributions on https://latex.codecogs.com/gif.latex?\mathbb{R}^+. Thus, https://latex.codecogs.com/gif.latex?F\in\Delta^+ if https://latex.codecogs.com/gif.latex?F\in\Delta and https://latex.codecogs.com/gif.latex?F(0)=0. Consider now two distributions https://latex.codecogs.com/gif.latex?F,G\in\Delta^+. In a very general setting, it is possible to consider operators on https://latex.codecogs.com/gif.latex?\Delta^+\times%20\Delta^+. Thus, let https://latex.codecogs.com/gif.latex?T:[0,1]\times[0,1]\rightarrow[0,1] denote an operator, increasing in each component, such that https://latex.codecogs.com/gif.latex?T(1,1)=1. And consider some function https://latex.codecogs.com/gif.latex?L:\mathbb{R}^+\times\mathbb{R}^+\rightarrow\mathbb{R}^+, also assumed to be increasing in each component (and continuous). For such functions https://latex.codecogs.com/gif.latex?T and https://latex.codecogs.com/gif.latex?L, define the following (general) operator, https://latex.codecogs.com/gif.latex?\tau_{T,L}(F,G), as

https://latex.codecogs.com/gif.latex?\tau_{T,L}(F,G)(x)=\sup_{L(u,v)=x}\{T(F(u),G(v))\}

One interesting case can be obtained when https://latex.codecogs.com/gif.latex?T is a copula, https://latex.codecogs.com/gif.latex?C. In that case,

https://latex.codecogs.com/gif.latex?\tau_{C,L}(F,G):\Delta^+\times\Delta^+\rightarrow\Delta^+

and further, it is possible to write

https://latex.codecogs.com/gif.latex?\tau_{C,L}(F,G)(x)=\sup_{(u,v)\in%20L^{-1}(x)}\{C(F(u),G(v))\}

It is also possible to consider other (general) operators, e.g. based on the sum

https://latex.codecogs.com/gif.latex?\sigma_{C,L}(F,G)(x)=\int_{(u,v)\in%20L^{-1}(x)}%20dC(F(u),G(v))

or on the minimum,

https://latex.codecogs.com/gif.latex?\rho_{C,L}(F,G)(x)=\inf_{(u,v)\in%20L^{-1}(x)}\{C^\star(F(u),G(v))\}

where https://latex.codecogs.com/gif.latex?C^\star is the survival copula associated with https://latex.codecogs.com/gif.latex?C, i.e. https://latex.codecogs.com/gif.latex?C^\star(u,v)=u+v-C(u,v). Note that those operators can be used to define distribution functions, i.e.

https://latex.codecogs.com/gif.latex?\sigma_{C,L}(F,G):\Delta^+\times\Delta^+\rightarrow\Delta^+

and similarly

https://latex.codecogs.com/gif.latex?\rho_{C,L}(F,G):\Delta^+\times\Delta^+\rightarrow\Delta^+

All that seems too theoretical? An application can be the case of the sum, i.e. https://latex.codecogs.com/gif.latex?L(x,y)=x+y; in that case, https://latex.codecogs.com/gif.latex?\sigma_{C,+}(F,G) is the distribution of the sum of two random variables with marginal distributions https://latex.codecogs.com/gif.latex?F and https://latex.codecogs.com/gif.latex?G, and copula https://latex.codecogs.com/gif.latex?C. Thus, https://latex.codecogs.com/gif.latex?\sigma_{C^\perp,+}(F,G) is simply the convolution of two distributions,

https://latex.codecogs.com/gif.latex?\sigma_{C^\perp,+}(F,G)(x)=\int_{u+v=x}%20dC^\perp(F(u),G(v))

The important result (which can be found in Chapter 7 of Schweizer and Sklar (1983)) is that, given an operator https://latex.codecogs.com/gif.latex?L, for any copula https://latex.codecogs.com/gif.latex?C, one can find a lower bound for https://latex.codecogs.com/gif.latex?\sigma_{C,L}(F,G),

https://latex.codecogs.com/gif.latex?\tau_{C^-,L}(F,G)\leq%20\tau_{C,L}(F,G)\leq\sigma_{C,L}(F,G)

as well as an upper bound

https://latex.codecogs.com/gif.latex?\sigma_{C,L}(F,G)\leq%20\rho_{C,L}(F,G)\leq\rho_{C^-,L}(F,G)

Those inequalities come from the fact that, for any copula https://latex.codecogs.com/gif.latex?C, https://latex.codecogs.com/gif.latex?C\geq%20C^-, where https://latex.codecogs.com/gif.latex?C^- is a copula (in dimension 2). Since this function is no longer a copula in higher dimension, one can easily imagine that getting those bounds in higher dimension will be much more complicated…

In the case of the sum of two random variables, with marginal distributions https://latex.codecogs.com/gif.latex?F and https://latex.codecogs.com/gif.latex?G, bounds for the distribution of the sum https://latex.codecogs.com/gif.latex?H(x)=\mathbb{P}(X+Y\leq%20x), where https://latex.codecogs.com/gif.latex?X\sim%20F and https://latex.codecogs.com/gif.latex?Y\sim%20G, can be written

https://latex.codecogs.com/gif.latex?H^-(x)=\tau_{C^-%20,+}(F,G)(x)=\sup_{u+v=x}\{%20\max\{F(u)+G(v)-1,0\}%20\}

for the lower bound, and

https://latex.codecogs.com/gif.latex?H^+(x)=\rho_{C^-%20,+}(F,G)(x)=\inf_{u+v=x}\{%20\min\{F(u)+G(v),1\}%20\}

for the upper bound. And those bounds are sharp, in the sense that, for all https://latex.codecogs.com/gif.latex?t\in(0,1), there is a copula https://latex.codecogs.com/gif.latex?C_t such that

https://latex.codecogs.com/gif.latex?\tau_{C_t,+}(F,G)(x)=\tau_{C^-%20,+}(F,G)(x)=t

and there is (another) copula https://latex.codecogs.com/gif.latex?C_t such that

https://latex.codecogs.com/gif.latex?\sigma_{C_t,+}(F,G)(x)=\tau_{C^-%20,+}(F,G)(x)=t

Thus, using those results, it is possible to bound the cumulative distribution function. But actually, all of that can also be done with quantiles (see Frank, Nelsen & Schweizer (1987)). For all https://latex.codecogs.com/gif.latex?F\in\Delta^+, let https://latex.codecogs.com/gif.latex?F^{-1} denote its generalized inverse, left-continuous, and let https://latex.codecogs.com/gif.latex?\nabla^+ denote the set of those quantile functions. Define then the dual versions of our operators,

https://latex.codecogs.com/gif.latex?\tau^{-1}_{T,L}(F^{-1},G^{-1})(x)=\inf_{(u,v)\in%20T^{-1}(x)}\{L(F^{-1}(u),G^{-1}(v))\}

and

https://latex.codecogs.com/gif.latex?\rho^{-1}_{T,L}(F^{-1},G^{-1})(x)=\sup_{(u,v)\in%20T^\star^{-1}(x)}\{L(F^{-1}(u),G^{-1}(v))\}

Those definitions are really dual versions of the previous ones, in the sense that https://latex.codecogs.com/gif.latex?\tau^{-1}_{T,L}(F^{-1},G^{-1})=[\tau_{T,L}(F,G)]^{-1} and https://latex.codecogs.com/gif.latex?\rho^{-1}_{T,L}(F^{-1},G^{-1})=[\rho_{T,L}(F,G)]^{-1}.

Note that if we focus on sums of bivariate distributions, the lower bound for the quantile of the sum is

https://latex.codecogs.com/gif.latex?\tau^{-1}_{C^{-},+}(F^{-1},G^{-1})(x)=\inf_{\max\{u+v-1,0\}=x}\{F^{-1}(u)+G^{-1}(v)\}

while the upper bound is

https://latex.codecogs.com/gif.latex?\rho^{-1}_{C^{-},+}(F^{-1},G^{-1})(x)=\sup_{\min\{u+v,1\}=x}\{F^{-1}(u)+G^{-1}(v)\}

A great thing is that it should not be too difficult to compute those quantities numerically. Perhaps it is a bit harder for cumulative distribution functions, since they are not defined on a bounded support. But still, the goal may simply be to plot those bounds on a bounded interval, say [0,10], for instance. The code is the following, for the sum of two lognormal LN(0,1) distributions.

> F=function(x) plnorm(x,0,1)
> G=function(x) plnorm(x,0,1)
> n=100
> X=seq(0,10,by=.05)
> Hinf=Hsup=rep(NA,length(X))
> for(i in 1:length(X)){
+ x=X[i]
+ U=seq(0,x,by=1/n); V=x-U
+ Hinf[i]=max(pmax(F(U)+G(V)-1,0))
+ Hsup[i]=min(pmin(F(U)+G(V),1))}

If we plot those bounds, we obtain

> plot(X,Hinf,ylim=c(0,1),type="s",col="red")
> lines(X,Hsup,type="s",col="red")
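
As a quick sanity check (a small addition, with independence chosen just for illustration), the distribution of the sum under any specific dependence structure should lie between those two curves; for instance, for independent risks,

> set.seed(1)
> S=rlnorm(1e5,0,1)+rlnorm(1e5,0,1)   # sum of two independent LN(0,1) risks
> lines(X,ecdf(S)(X),col="blue")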

But somehow, it is even simpler to work with quantiles, since they are defined on a bounded support. Quantile functions are here

> Finv=function(u) qlnorm(u,0,1)
> Ginv=function(u) qlnorm(u,0,1)

The idea is to consider a discretized version of the unit interval, as discussed in Williamson (1989), in a much more general setting. Again, the idea is to compute, for instance,

https://latex.codecogs.com/gif.latex?\sup_{u\in[0,x]}\{F^{-1}(u)+G^{-1}(x-u)\}

The idea is to consider https://latex.codecogs.com/gif.latex?x=i/n and https://latex.codecogs.com/gif.latex?u=j/n, and the bound for the quantile function at point https://latex.codecogs.com/gif.latex?i/n is then

https://latex.codecogs.com/gif.latex?\sup_{j\in\{0,1,\cdots,i\}}\left\{F^{-1}\left(\frac{j}{n}\right)+G^{-1}\left(\frac{i-j}{n}\right)\right\}

The code to compute those bounds, for a given https://latex.codecogs.com/gif.latex?n is here

> n=1000
> Qinf=Qsup=rep(NA,n-1)
> for(i in 1:(n-1)){
+ J=0:i
+ Qinf[i]=max(Finv(J/n)+Ginv((i-J)/n))
+ J=(i-1):(n-1)
+ Qsup[i]=min(Finv((J+1)/n)+Ginv((i-1-J+n)/n))
+ }

Here is what we get (several values of https://latex.codecogs.com/gif.latex?n were considered, so that we can visualize the convergence of this numerical algorithm),
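
To plot the two bounds against the probability level, reusing the Qinf and Qsup vectors computed above, a minimal sketch is

> plot((1:(n-1))/n,Qinf,type="s",col="red",
+ xlab="probability level",ylab="quantile of the sum")
> lines((1:(n-1))/n,Qsup,type="s",col="red")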

So we have a simple code to visualize bounds on the quantiles of the sum of two risks. But it is possible to go further…

Visualization in regression analysis

Visualization is a key to success in regression analysis. This is one of the (many) reasons I am also suspicious when I read an article with a quantitative (econometric) analysis without any graph. Consider for instance the following dataset, obtained from http://data.worldbank.org/, with, for each country, the GDP per capita (in some common currency) and the infant mortality rate (deaths before the age of 5),

> library(gdata)
> XLS1=read.xls("http://api.worldbank.org/datafiles/NY.GDP.PCAP.PP.CD_Indicator_MetaData_en_EXCEL.xls", sheet = 1)
> data1=XLS1[-(1:28),c("Country.Name","Country.Code","X2010")]
> names(data1)[3]="GDP"
> XLS2=read.xls("http://api.worldbank.org/datafiles/SH.DYN.MORT_Indicator_MetaData_en_EXCEL.xls", sheet = 1)
> data2=XLS2[-(1:28),c("Country.Code","X2010")]
> names(data2)[2]="MORTALITY"
> data=merge(data1,data2)
> head(data)
Country.Code         Country.Name       GDP MORTALITY
1          ABW                Aruba        NA        NA
2          AFG          Afghanistan  1207.278     149.2
3          AGO               Angola  6119.930     160.5
4          ALB              Albania  8817.009      18.4
5          AND              Andorra        NA       3.8
6          ARE United Arab Emirates 47215.315       7.1

If we estimate a simple linear regression – http://freakonometrics.blog.free.fr/public/perso5/logormal01.gif  – we get

> regBB=lm(MORTALITY~GDP,data=data)
> summary(regBB)

Call:
lm(formula = MORTALITY ~ GDP, data = data)

Residuals:
Min     1Q Median     3Q    Max
-45.24 -29.58 -12.12  16.19 115.83

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 67.1008781  4.1577411  16.139  < 2e-16 ***
GDP         -0.0017887  0.0002161  -8.278 3.83e-14 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 39.99 on 167 degrees of freedom
(47 observations deleted due to missingness)
Multiple R-squared: 0.2909,	Adjusted R-squared: 0.2867
F-statistic: 68.53 on 1 and 167 DF,  p-value: 3.834e-14

We can look at the scatter plot, including the linear regression line, and some confidence bounds,

> plot(data$GDP,data$MORTALITY,xlab="GDP per capita",
+ ylab="Mortality rate (under 5)",cex=.5)
> text(data$GDP,data$MORTALITY,data$Country.Name,pos=3)
> x=seq(-10000,100000,length=101)
> y=predict(regBB,newdata=data.frame(GDP=x),
+ interval="prediction",level = 0.9)
> lines(x,y[,1],col="red")
> lines(x,y[,2],col="red",lty=2)
> lines(x,y[,3],col="red",lty=2)

We should be able to do a better job here. For instance, if we look at the Box-Cox profile likelihood,

> library(MASS)
> boxcox(regBB)

it looks like taking the logarithm of the mortality rate should be better, i.e. http://freakonometrics.blog.free.fr/public/perso5/lognormal02.gif or http://freakonometrics.blog.free.fr/public/perso5/lognormal05.gif:

> regLB=lm(log(MORTALITY)~GDP,data=data)
> summary(regLB)

Call:
lm(formula = log(MORTALITY) ~ GDP, data = data)

Residuals:
Min      1Q  Median      3Q     Max
-1.3035 -0.5837 -0.1138  0.5597  3.0583

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  3.989e+00  7.970e-02   50.05   <2e-16 ***
GDP         -6.487e-05  4.142e-06  -15.66   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.7666 on 167 degrees of freedom
(47 observations deleted due to missingness)
Multiple R-squared: 0.5949,	Adjusted R-squared: 0.5925
F-statistic: 245.3 on 1 and 167 DF,  p-value: < 2.2e-16

> plot(data$GDP,data$MORTALITY,xlab="GDP per capita",
+ ylab="Mortality rate (under 5) log scale",cex=.5,log="y")
> text(data$GDP,data$MORTALITY,data$Country.Name)
> x=seq(300,100000,length=101)
> y=exp(predict(regLB,newdata=data.frame(GDP=x)))*
+ exp(summary(regLB)$sigma^2/2)
> lines(x,y,col="red")
> y=qlnorm(.95, meanlog=predict(regLB,newdata=data.frame(GDP=x)),
+ sdlog=summary(regLB)$sigma)
> lines(x,y,col="red",lty=2)
> y=qlnorm(.05, meanlog=predict(regLB,newdata=data.frame(GDP=x)),
+ sdlog=summary(regLB)$sigma)
> lines(x,y,col="red",lty=2)

on the log scale or

> plot(data$GDP,data$MORTALITY,xlab="GDP per capita",
+ ylab="Mortality rate (under 5) log scale",cex=.5)

on the standard scale. Here we use quantiles of the log-normal distribution to derive confidence intervals.

But why not also take the logarithm of the GDP? We can fit a model http://freakonometrics.blog.free.fr/public/perso5/lognormal03.gif or, equivalently, http://freakonometrics.blog.free.fr/public/perso5/lognormal04.gif.

> regLL=lm(log(MORTALITY)~log(GDP),data=data)
> summary(regLL)

Call:
lm(formula = log(MORTALITY) ~ log(GDP), data = data)

Residuals:
Min       1Q   Median       3Q      Max
-1.13200 -0.38326 -0.07127  0.26610  3.02212

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 10.50192    0.31556   33.28   <2e-16 ***
log(GDP)    -0.83496    0.03548  -23.54   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.5797 on 167 degrees of freedom
(47 observations deleted due to missingness)
Multiple R-squared: 0.7684,	Adjusted R-squared: 0.767
F-statistic:   554 on 1 and 167 DF,  p-value: < 2.2e-16

> plot(data$GDP,data$MORTALITY,xlab="GDP per capita ",
+ ylab="Mortality rate (under 5)",cex=.5,log="xy")
> text(data$GDP,data$MORTALITY,data$Country.Name)
> x=exp(seq(1,12,by=.1))
> y=exp(predict(regLL,newdata=data.frame(GDP=x)))*
+ exp(summary(regLL)$sigma^2/2)
> lines(x,y,col="red")
> y=qlnorm(.95, meanlog=predict(regLL,newdata=data.frame(GDP=x)),
+ sdlog=summary(regLL)$sigma)
> lines(x,y,col="red",lty=2)
> y=qlnorm(.05, meanlog=predict(regLL,newdata=data.frame(GDP=x)),
+ sdlog=summary(regLL)$sigma)
> lines(x,y,col="red",lty=2)

on the log scales or

> plot(data$GDP,data$MORTALITY,xlab="GDP per capita ",
+ ylab="Mortality rate (under 5)",cex=.5)

on the standard scale. If we compare the last two predictions, we have

where the blue curve is the log model, and the red one the log-log model (I did not include the first model, for obvious reasons).
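
One way to overlay those two fitted curves on the same (standard-scale) scatterplot, reusing the regLB and regLL models estimated above (a sketch, with colors matching the description):

> plot(data$GDP,data$MORTALITY,xlab="GDP per capita",
+ ylab="Mortality rate (under 5)",cex=.5)
> x=seq(300,100000,length=101)
> lines(x,exp(predict(regLB,newdata=data.frame(GDP=x)))*
+ exp(summary(regLB)$sigma^2/2),col="blue")
> lines(x,exp(predict(regLL,newdata=data.frame(GDP=x)))*
+ exp(summary(regLL)$sigma^2/2),col="red")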

Fisher-Tippett theorem and limiting distribution for the maximum

Tomorrow, we will discuss the Fisher-Tippett theorem. The idea is that there are only three possible limiting distributions for normalized maxima of i.i.d. samples http://freakonometrics.blog.free.fr/public/perso5/max-00.gif. For bounded distributions, consider e.g. the uniform distribution on the unit interval, i.e. http://freakonometrics.blog.free.fr/public/perso5/max-09.gif. Let http://freakonometrics.blog.free.fr/public/perso5/max-10.gif and http://freakonometrics.blog.free.fr/public/perso5/max-11.gif. Then, for all http://freakonometrics.blog.free.fr/public/perso5/max-12.gif and http://freakonometrics.blog.free.fr/public/perso5/max-13.gif,

http://freakonometrics.blog.free.fr/public/perso5/max-14.gif

i.e. the limiting distribution of the maximum is Weibull’s.

set.seed(1)
s=1000000
n=100
M=matrix(runif(s),n,s/n)
V=apply(M,2,max)
bn=1
an=1/n
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(-7,1),main="",breaks=seq(-20,10,by=.25))
u=seq(-10,0,by=.1)
v=exp(u)
lines(u,v,lwd=3,col="red")

For heavy-tailed distributions, or Pareto-type tails, consider Pareto samples, with distribution function http://freakonometrics.blog.free.fr/public/perso5/max-05.gif. Let http://freakonometrics.blog.free.fr/public/perso5/max-06.gif and http://freakonometrics.blog.free.fr/public/perso5/max-07.gif; then

http://freakonometrics.blog.free.fr/public/perso5/max-08.gif

which means that the limiting distribution is Fréchet’s.

library(evd)
set.seed(1)
s=1000000
n=100
M=matrix((runif(s))^(-1/2),n,s/n)
V=apply(M,2,max)
bn=0
an=n^(1/2)
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(0,7),main="",breaks=seq(0,max(U)+1,by=.25))
u=seq(0,10,by=.1)
v=dfrechet(u,shape=2)
lines(u,v,lwd=3,col="red")

For light-tailed distributions, or exponential tails, consider e.g. a sample of exponentially distributed variates, with common distribution function http://freakonometrics.blog.free.fr/public/perso5/max-01.gif. Let http://freakonometrics.blog.free.fr/public/perso5/max-02.gif and http://freakonometrics.blog.free.fr/public/perso5/max-03.gif; then

http://freakonometrics.blog.free.fr/public/perso5/max-04.gif

i.e. the limiting distribution for the maximum is Gumbel’s distribution.

library(evd)
set.seed(1)
s=1000000
n=100
M=matrix(rexp(s,1),n,s/n)
V=apply(M,2,max)
(bn=qexp(1-1/n))
log(n)
an=1
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(-2,7),ylim=c(0,.39),main="",breaks=seq(-5,15,by=.25))
u=seq(-5,15,by=.1)
v=dgumbel(u)
lines(u,v,lwd=3,col="red")

Consider now a Gaussian http://freakonometrics.blog.free.fr/public/perso5/max-17.gif sample. We can use the following approximation of the cumulative distribution function (based on l’Hopital’s rule)

http://freakonometrics.blog.free.fr/public/perso5/max-15.gif

as http://freakonometrics.blog.free.fr/public/perso5/max-16.gif. Let http://freakonometrics.blog.free.fr/public/perso5/max-18.gif and http://freakonometrics.blog.free.fr/public/perso5/max-19.gif. Then we can get

http://freakonometrics.blog.free.fr/public/perso5/max-20.gif

as http://freakonometrics.blog.free.fr/public/perso5/max-21.gif, i.e. the limiting distribution of the maximum of a Gaussian sample is Gumbel's. But what we do not see here is that, for a Gaussian sample, the convergence is extremely slow: with 100 observations, we are still far away from the Gumbel distribution,
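
as the following sketch (simply the n = 100 version of the block given just below) suggests,

library(evd)
set.seed(1)
s=1000000
n=100
M=matrix(rnorm(s,0,1),n,s/n)
V=apply(M,2,max)
bn=qnorm(1-1/n,0,1)
an=1/bn
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(-2,7),ylim=c(0,.39),main="",breaks=seq(-5,15,by=.25))
u=seq(-5,15,by=.1)
lines(u,dgumbel(u),lwd=3,col="red")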

and it is only slightly better with 1,000 observations,

set.seed(1)
s=10000000
n=1000
M=matrix(rnorm(s,0,1),n,s/n)
V=apply(M,2,max)
(bn=qnorm(1-1/n,0,1))
an=1/bn
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(-2,7),ylim=c(0,.39),main="",breaks=seq(-5,15,by=.25))
u=seq(-5,15,by=.1)
v=dgumbel(u)
lines(u,v,lwd=3,col="red")

Even worse, consider lognormal observations. In that case, recall that if we consider an (increasing) transformation of the variates, we remain in the same domain of attraction. Hence, since http://freakonometrics.blog.free.fr/public/perso5/max-22.gif, if

http://freakonometrics.blog.free.fr/public/perso5/max-23.gif

then

http://freakonometrics.blog.free.fr/public/perso5/max-24.gif

i.e. using Taylor’s approximation on the right term,

http://freakonometrics.blog.free.fr/public/perso5/max-25.gif

This gives us the normalizing coefficients we should use here.

set.seed(1)
s=10000000
n=1000
M=matrix(rlnorm(s,0,1),n,s/n)
V=apply(M,2,max)
bn=exp(qnorm(1-1/n,0,1))
an=exp(qnorm(1-1/n,0,1))/(qnorm(1-1/n,0,1))
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(-2,7),ylim=c(0,.39),main="",breaks=seq(-5,40,by=.25))
u=seq(-5,15,by=.1)
v=dgumbel(u)
lines(u,v,lwd=3,col="red")

In statistics, having too much information might not be a good thing

A common idea in statistics is that if we don't know something, and we use an estimator of that something (instead of the true value), then there will be some additional uncertainty. For instance, consider an i.i.d. random sample from a Gaussian distribution. Then, a confidence interval for the mean is

http://freakonometrics.blog.free.fr/public/perso2/IC-cout-06.gif

where http://freakonometrics.blog.free.fr/public/perso2/inc-out-8.gif is the quantile of probability level http://freakonometrics.blog.free.fr/public/perso2/IC-cout-05.gif of the standard normal distribution http://freakonometrics.blog.free.fr/public/perso2/inc-out-09.gif. But the standard deviation http://freakonometrics.blog.free.fr/public/perso2/inc-cout-10.gif (the "something" I was talking about earlier) is usually unknown. So we substitute an estimate of the standard deviation, e.g.

http://freakonometrics.blog.free.fr/public/perso2/IC-cout-02.gif

and the cost we have to pay is that the new confidence interval is

http://freakonometrics.blog.free.fr/public/perso2/IC-cout-01.gif

where now http://freakonometrics.blog.free.fr/public/perso2/IC-cout-03.gif is the quantile of the Student distribution, of probability level http://freakonometrics.blog.free.fr/public/perso2/IC-cout-05.gif, with http://freakonometrics.blog.free.fr/public/perso2/IC-cout-04.gif degrees of freedom.
We call it a cost since the new confidence interval is now larger (the Student distribution has higher upper-quantiles than the Gaussian distribution).
So usually, if we substitute an estimation to the true value, there is a price to pay.
A few years ago, with Jean-David Fermanian and Olivier Scaillet, we wrote a survey on copula density estimation (using kernels, here). At the end, we wanted to add a small paragraph on the following point: we assumed that we wanted to fit a copula to a sample http://freakonometrics.blog.free.fr/public/perso2/ic-cout_11.gif, i.i.d. with distribution http://freakonometrics.blog.free.fr/public/perso2/ic-cout_13.gif, a copula; but in practice, we start from a sample http://freakonometrics.blog.free.fr/public/perso2/ic-cout_12.gif with joint distribution http://freakonometrics.blog.free.fr/public/perso2/ic-cour_14.gif (assumed to have continuous margins, and a – unique – copula http://freakonometrics.blog.free.fr/public/perso2/ic-cout_13.gif). But since the margins are usually unknown, there should be a price to pay for not observing them.
To be more formal, in a perfect world, we would consider

http://freakonometrics.blog.free.fr/public/perso2/ic-cout-15.gif

but in the real world, we have to consider

http://freakonometrics.blog.free.fr/public/perso2/ic-cout-16.gif

where it is standard to consider ranks, i.e. http://freakonometrics.blog.free.fr/public/perso2/ic-cout_109.gif are empirical cumulative distribution functions.
My point is that when I ran simulations for the survey (the idea was more to give illustrations of several estimation techniques, rather than proofs of technical theorems), we observed that the price to pay… was negative! I.e. the variance of the estimator of the density (anywhere on the unit square) was smaller with the pseudo-sample http://freakonometrics.blog.free.fr/public/perso2/ic-cout-17.gif than with the perfect sample http://freakonometrics.blog.free.fr/public/perso2/ic-cout_18.gif.
At that time, we could not understand why we got this counter-intuitive result: even if we do know the true margins, it is better not to use them, and to use ranks instead. Our interpretation was based on the concept of discrepancy, and related to the Latin hypercube construction:

With ranks, the data are more regular, and the marginal distributions are exactly uniform on the unit interval. So there is less variance.
This was our heuristic interpretation.
A couple of weeks ago, Christian Genest and Johan Segers proved that intuition in an article published in JMVA,

Well, we observed something for finite http://freakonometrics.blog.free.fr/public/maths/mariage01.png, but Christian and Johan obtained an analytical result. Hence, if we denote

http://freakonometrics.blog.free.fr/public/perso2/JSCG-1.gif

the empirical copula in the perfect world (with known margins) and

http://freakonometrics.blog.free.fr/public/perso2/JSCG-2.gif

the one constructed from the pseudo sample, they obtained that, everywhere

http://freakonometrics.blog.free.fr/public/perso2/JSCG-6.gif

with nice graphs of http://freakonometrics.blog.free.fr/public/perso2/JSCG-7.gif,

So I was very happy last week when Christian showed me their results, and to learn that our intuition was correct. Nevertheless, it is still a very counter-intuitive result… If anyone has seen similar things, I'd be glad to hear about it!
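
For readers who want to play with the phenomenon, here is a small simulation sketch (mine, and much cruder than the JMVA result): it compares the variability of the empirical copula at one point, computed once with the true (known) margins and once with ranks.

set.seed(1)
n=100; r=.5; u=c(.3,.7); ns=2000
C.true=C.rank=rep(NA,ns)
for(i in 1:ns){
Z1=rnorm(n); Z2=r*Z1+sqrt(1-r^2)*rnorm(n)   # Gaussian pair with correlation r
U1=pnorm(Z1); U2=pnorm(Z2)                  # perfect world: known margins
R1=rank(Z1)/(n+1); R2=rank(Z2)/(n+1)        # real world: pseudo-observations
C.true[i]=mean(U1<=u[1] & U2<=u[2])
C.rank[i]=mean(R1<=u[1] & R2<=u[2])
}
var(C.true); var(C.rank)                    # compare the two variances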

from two to three…

A short post to give more details about the final remark in the Financial Econometrics course, and more precisely about the formula that can be found in Philip Jorion's book,

Note that this formula can be found (perhaps written with slight changes) in several papers, e.g. in the following sentence (on the http://www.bis.org website),

or the following formula, on documents from the Bank of England website,

I recently published (in French, here) a paper on the Value-at-Risk, including the following graph,

Usually, three times the average over the last 60 trading days is the larger component, but during the financial crisis, it turned out that the daily component was almost three times higher than the average over the previous 60 trading days (this fact was mentioned by Paul Embrechts at a conference on risk measures in Paris).
The interpretation of the multiplicative coefficient k (which is between 2 and 3 in some publications, or exceeds 3 in others) has been proposed in a paper by Gerhard Stahl, entitled "Three cheers". The idea is to use the Bienaymé-Chebyshev inequality: for random variables with finite variance,

https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-01.png

Recall that this inequality is simply a corollary of Markov's inequality,

https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-02.png

or for any increasing function https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-99.png

https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-03.png

(taking the function https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-04.png, applied to https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-05.png). This upper bound can be far away from the true probability; see e.g. the Gaussian case below, i.e. if https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-06.png,

https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-07.png

 

> z=seq(0.01,3,by=.01)
> P=2*(1-pnorm(z))
> U=1/z^2
> plot(z,P,type="l",lwd=2,col="red",xlab="",ylab="")

The ratio between the two is given below,

> plot(z,U/P,type="l",lwd=2,col="purple",xlab="",ylab="",ylim=c(0,10))

Note that it is possible to interpret the values on the horizontal axis as probabilities, taking quantiles of the Gaussian distribution,

> plot(pnorm(z),U/P,type="l",lwd=2,col="purple",xlab="",
+ ylab="",ylim=c(0,10),xlim=c(.9,1))
> abline(h=3,lty=2)

The interpretation becomes clearer if we compare quantiles rather than probabilities: the Chebyshev-type upper bound on the quantile is about 3 times larger than the true Gaussian quantile when we work at the 99% probability level. More precisely (see the short computation after the list below),

  • if z is the 95% quantile of the N(0,1) distribution, the ratio is about 2 (1.92)
  • if z is the 99% quantile of the N(0,1) distribution, the ratio is about 3 (3.04)
  • if z is the 99.5% quantile of the N(0,1) distribution, the ratio is almost 4 (3.88)
  • if z is the 99.75% quantile of the N(0,1) distribution, the ratio is about 5 (5.04)
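
Those ratios can be recovered with a one-liner (a minimal sketch, assuming a symmetric distribution with unit variance, so that the Chebyshev bound on the quantile at probability level alpha is 1/sqrt(2(1-alpha))):

> alpha=c(.95,.99,.995,.9975)
> (1/sqrt(2*(1-alpha)))/qnorm(alpha)   # roughly 1.92, 3.04, 3.88, 5.04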

A more formal explanation is to assume that X is symmetric; then

https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-09.png

Thus, if https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-10.png, i.e. https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-11.png, we have an upper bound for the  https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-12.pngValue-at-Risk,

https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-20.png

where this upper bound holds for the https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-12.png Value-at-Risk of any centered distribution with finite variance.
If https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-31.png, then https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-32.png, i.e. https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-33.png. But since https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-33.png holds for a https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-36.png distribution, then

https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-21.png

and further

https://perso.univ-rennes1.fr/arthur.charpentier/latex/BTC-22.png