Tag Archives: Extremes

Can we diversify extremal events?

This post was originally written in French and translated below.

In a financial context, diversifying risks means investing in a variety of assets, sectors, or geographic regions to avoid having the poor performance of a single investment significantly affect the overall portfolio. Diversification allows for risk reduction, or, in its mathematical formulation, the reduction of variance. But what happens when we encounter large risks, infinite variance? Or worse, infinite expectation?

Extreme Risks and Infinite Expectation?

Formalizing quantities related to random and uncertain outcomes is a complex exercise. Probabilities, in the sense the word is often understood, are defined as limits of frequencies observed through repeated events. The probability of rolling a 3 with a die is 1/6 because, by rolling the die a million times [1], or a billion times, the observed frequency will be as close as desired to 1/6. This is what the law of large numbers states, in its weakest form. Saying that the probability it will rain today is 1/6 is entirely different, because it is a unique event. If I get drenched by a shower today, it will not prove that the probability was not 1/6, nor will it disprove the meteorological model. This is just a reminder that, when modeling, we try to assign small probabilities to rare events, and it is unfortunately very difficult to validate them.

When modeling large risks, very large risks, it is not uncommon to suggest that the risks have infinite variance or expectation. The notion of infinite expectation is both strange and probably counterintuitive [2]. If we consider a positive random variable X (for simplicity), and let S(x)=\mathbb{P}(X>x) be the survival function and f(x) the density function (corresponding to the opposite of the derivative of S), we can show that the empirical mean of a million or a billion draws of this variable will approach a value, called the mathematical expectation:

\mathbb{E}(X) =\int_0^\infty S(x)dx= \int_0^\infty xf(x)dx

There is nothing surprising here; this is still the law of large numbers, stated as early as 1713 by Jacob Bernoulli (the “golden theorem” of Raper (2018)) and especially by Pierre-Simon Laplace in 1814. However, this integral must be finite, which is not guaranteed. For example, the Pareto distribution with index a satisfies S(x)=\mathbb{P}(X>x)=x^{-a}. As early as 1925, Karl-Gustaf Hagstroem noted that this distribution seemed particularly suited for modeling large risks, and thus for reinsurance [3]. For a variable following a Pareto distribution with index 1, the expectation is, mathematically, infinite.

What does this infinite expectation mean? There will be no “claim of infinite cost,” and it will always be possible to calculate an empirical average over n observations. However, this average will tend toward infinity as n increases. Louis Bachelier, discussing the St. Petersburg paradox (a game with infinite expected gain), reminds us that “a paradoxical result in mathematical sciences necessarily stems from a flaw in our understanding, incapable of deciphering a too complex whole, unable to represent the infinitely large. Common sense cannot be invoked in delicate matters; it does not allow us to recognize whether the area between a curve and its asymptote is finite or not, whether a series is convergent or divergent.” Saying that the average tends toward infinity as n increases means that we can be sure the average will eventually exceed any value we can imagine. This can be visualized at the top of Figure 1 with 10 simulations of 100,000 values. On the left, the case where the variance is infinite and the expectation is finite; on the right, both are infinite.

Figure 1: Evolution of the average n\mapsto (x_1+\cdots+x_n)/n for samples generated from a distribution with finite expected value (and infinite variance) on the left, and infinite expected value on the right.
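To make this concrete, here is a minimal R sketch (not the code used for the figures, just an illustration): Pareto draws are obtained by inverting the survival function, and the running mean is plotted for a=1.5 (finite expectation, infinite variance) and a=1 (infinite expectation).

> set.seed(1)
> n=1e5
> rpareto=function(n,a) runif(n)^(-1/a)      # since S(x)=x^(-a), X=U^(-1/a) is Pareto(a)
> x15=rpareto(n,1.5)                         # finite mean, infinite variance
> x10=rpareto(n,1)                           # infinite mean
> plot(cumsum(x15)/(1:n),type="l",log="x",ylab="running mean")
> lines(cumsum(x10)/(1:n),col="red")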

Another interesting quantity is the ratio of the maximum of n observations to their sum. For variables with infinite expectation, this ratio does not tend towards 0. If the x_i represent claim costs, it is quite possible that, with 100,000 claims drawn from a distribution with infinite expectation, the largest claim represents more than 90% of the total burden.

Figure 2 : Evolution of the ratio n\mapsto \max\{x_1,\cdots,x_n\}/(x_1+\cdots+x_n) with a distribution of finite expectation (and infinite variance) on the left, and a distribution of infinite expectation on the right.
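A similar sketch for the ratio of the running maximum to the running sum, reusing the samples simulated above:

> plot(cummax(x15)/cumsum(x15),type="l",log="x",ylim=c(0,1),ylab="max / sum")
> lines(cummax(x10)/cumsum(x10),col="red")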

As we can see, this property is important, but it is difficult to identify because it is a fundamental property of the underlying model, related to the distribution of observations, since it is always possible to calculate the average. For example, the following sequence corresponds to eight values obtained by randomly drawing from a Pareto distribution with index 1 (and thus theoretically of infinite expectation):

1.657442 || 4.138543 || 15.592108 || 1.429090

1.684843 || 1.186745 || 1.341435 || 3.308316

How can we tell if a set of claim costs follows a distribution of finite expectation or not? The classic approach, presented for example in Zajdenweber (1996, 2000), is to use the so-called Pareto plot, with the logarithm of the costs on the x-axis, and the logarithm of the survival probability on the y-axis. If the points are aligned along a straight line with slope -a, then the Pareto distribution with parameter a is perfectly adapted. Indeed, if \mathbb{P}(X>x)=x^{-a}, then, taking the logarithm of both quantities, and ordering the sample (x_1\leq x_2\leq\cdots\leq x_n), we have

\log\left(\frac{n-i}{n}\right)=-a\cdot \log(x_i)

And if the slope is too moderate, greater than -1, then the costs have infinite expectation.

Figure 3: Pareto plot, with \log((n-i)/n) on the y-axis and \log(x_i) on the x-axis. The points are aligned along a line with slope -a, corresponding to a Pareto distribution with index a. a\leq 1 means that the risks have infinite expectation.
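The Pareto plot itself is straightforward to produce; here is a minimal sketch, using the infinite-expectation sample simulated above, with the slope estimated by least squares purely for illustration:

> x=sort(x10)
> logS=log((n-(1:n))/n)                      # log of the empirical survival probability
> plot(log(x[-n]),logS[-n],xlab="log(x)",ylab="log(S(x))")    # drop the largest point, where S=0
> coef(lm(logS[-n]~log(x[-n])))              # slope should be close to -a, here -1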

This hypothesis of a Pareto index close to 1 is not unrealistic when we talk about natural or industrial disasters:

  • hurricanes, Hsieh (1999), a\sim1.5
  • company fires, Biffis et al. (2014), a\sim1.25
  • business interruption, Zajdenweber (1996), a\sim1
  • earthquakes, Sornette et al. (1996), a\sim1
  • tsunamis, Embrechts et al. (2024),  a\sim1
  • operational risk, Moscadelli (2004) and Chavez-Demoulin et al. (2006) a\sim1
  • cyber risk, Eling et al. (2019) a\sim1
  • nuclear risk, Hofert et al. (2012), a\in (0.6;0.7)

On the Diversification of Large Risks

Instead of working by risk type, we can consider the aggregation of these risks together. Heuristically, having portfolios with flood, earthquake, or drought risks could offer some “diversification.” The concept of “diversification” can be introduced with the law of large numbers, as previously mentioned, and it will be very close to the idea of insurance, of risk pooling. Smith & Kane (1994), for example, remind us that the contribution of an n+1-th independent risk in a group of n risks, fairly priced, generally allows for a marginal reduction in risk, which reinforces the insurer’s risk pooling. This diversification effect still works even if the risks are correlated (but not perfectly correlated, and the diversification gains decrease with correlation, as Charpentier (2011) pointed out).

Often, when we talk about “diversification,” we think of the work of Harry Markowitz or Andrew D. Roy in finance in the 1950s, which laid the foundation for portfolio theory. This theory shows how rational investors can use diversification, corresponding to the correlation between assets, to optimize their financial portfolio. In this approach, it is generally assumed that investors’ preference for a risk/return trade-off can be described by a quadratic utility function. In other words, only the expected return (the expected gain) and the volatility (the standard deviation) or variance are the parameters considered by the investor. This literature shows that an investor can reduce the risk of their portfolio simply by holding assets that are not (or only slightly) correlated, thus diversifying their investments. They can then achieve the same expected return while reducing the variability of their portfolio.

But what happens if the variance no longer exists? This question challenges the use of the normal distribution to model financial returns. The normal distribution was interesting partly because it satisfies a property of stability by summation[4]. Keeping this property while considering a distribution with more extremes than the normal distribution amounts to using “stable” distributions studied by Paul Lévy, as proposed[5] by Benoit Mandelbrot in the 1960s.

In cases where the variance is infinite, it is necessary to use a more general risk measure than the standard deviation, and heuristically, “diversification” is related to the sub-additivity of the risk measure: a portfolio containing the average of the holdings of two other portfolios has a lower risk than the average of the risks of the two other portfolios. Daníelsson et al. (2013) remind us that in the presence of large risks (infinite expectation), diversification no longer works. This property was described and discussed by Paul Samuelson as early as 1967, Stephen Ross in 1976, and more recently by Rustam Ibragimov, Dwight Jaffee, Johan Walden, Paul Embrechts, or Ruodu Wang, among others. The introduction by Ibragimov et al. (2015) explains it well, “there are limitations to diversification with such risk distributions [heavy-tailed distribution]. Specifically, whereas diversification is preferred by risk-averse agents when risks are thin-tailed (the traditional case that has been extensively studied), it may actually be hurtful for agents to diversify when risks are heavy-tailed […] nondiversification traps may arise when risk distributions have heavy left tails and insurance providers have limited liability.” These properties, widely discussed from a mathematical perspective, are difficult to accept because they are theoretical and counter-intuitive. Moreover, it is often difficult to determine for whom diversification becomes dangerous, since there are several stakeholders: the insured, insurers, reinsurers, and the state. Ibragimov et al. (2011) provide some answers, “when these risks are thin-tailed, risk-sharing is always optimal for both individual intermediaries and society. But, with moderately heavy-tailed risks, risk-sharing may be suboptimal for society, although individual intermediaries still benefit from it […] and it is well-known that diversification may be suboptimal in the extremely heavy-tailed case.”
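To illustrate this failure of diversification, here is a small simulation sketch (purely illustrative, using the same inverse-transform Pareto generator as above): compare a high quantile of a single Pareto risk with the same quantile of a 50/50 pool of two independent copies, for a tail index above and below 1.

> set.seed(1)
> ns=1e6
> VaR=function(a,p=.99){
+ x1=runif(ns)^(-1/a); x2=runif(ns)^(-1/a)
+ c(single=quantile(x1,p),pooled=quantile((x1+x2)/2,p)) }
> VaR(a=2)      # finite variance: pooling should lower the 99% quantile
> VaR(a=0.8)    # infinite expectation: pooling should increase it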

Over the past twenty years, there have been many examples where diversification does not work, and practitioners are aware of them. Fabozzi et al. (2014), discussing the financial crisis, remind us that “the financial crisis has clearly shown that when you need diversification most, it may not work.” When considering risks related to climate disasters, we see that these risks are extreme, and potentially uninsurable because their expectation may be infinite. Uninsurability mainly means that a market mechanism does not make sense without state intervention. One might also think that it could be interesting to diversify risks by offering multi-peril coverage (as proposed by the current cat-nat mechanism), or by considering geographical diversification, for example at the European level, as recently suggested by Carlo Cimbri, Thierry Derez, and Philippe Lallemand. But the scientific literature reminds us that this kind of diversification is dangerous, or in any case unimaginable without strong and clear state intervention.

References

Biffis, E., & Chavez, E. (2014). Tail risk in commercial property insurance. Risks, 2(4), 393–410.

Charpentier, A. (2011). La loi des grands nombres et le théorème central limite comme base de l’assurabilité ? Risques, 86.

Chavez-Demoulin, V., Embrechts, P., & Nešlehová, J. (2006). Quantitative models for operational risk: extremes, dependence and aggregation. Journal of Banking & Finance, 30(10), 2635-2658.

Chen, Y., Embrechts, P., & Wang, R. (2024). An unexpected stochastic dominance: Pareto distributions, dependence, and diversification. Operations Research.

Cimbri, C., Derez, T. & Lallemand, P. (2024). Mutualisons l’assurance pour offrir aux Européens une protection à la hauteur des risques actuels ! La Tribune, 23 mai.

Daníelsson, J., Jorgensen, B. N., Samorodnitsky, G., Sarma, M., & de Vries, C. G. (2013). Fat tails, VaR and subadditivity. Journal of econometrics, 172(2), 283-291

Eling, M., & Wirfs, J. (2019). What are the actual costs of cyber risk events? European Journal of Operational Research, 272(3), 1109–1119.

Embrechts, P., Hofert, M., & Chavez-Demoulin, V. (2024). Risk Revealed: Cautionary Tales, Understanding and Communication. Cambridge University Press.

Fabozzi, F. J., Focardi, S. M., & Jonas, C. (2014). Investment Management: A Science to Teach or an Art to Learn? CFA Institute Research Foundation.

Fama, E. F. (1965). Portfolio analysis in a stable Paretian market. Management science, 11(3), 404-419.

Hagstroem, K.-G. (1925). Pareto and reinsurance. Scandinavian Actuarial Journal, 216–248

Hofert, M., & Wüthrich, M. V. (2012). Statistical review of nuclear power accidents. Asia-Pacific Journal of Risk and Insurance, 7(1).

Hsieh, P.-H. (1999). Robustness of tail index estimation. Journal of Computational and Graphical Statistics, 8(2), 318–332.

Ibragimov, R., & Walden, J. (2007). The limits of diversification when losses may be large. Journal of banking & finance, 31(8), 2551-2569.

Ibragimov, R., Jaffee, D., & Walden, J. (2011). Diversification disasters. Journal of financial economics, 99(2), 333-348.

Ibragimov, M., Ibragimov, R., & Walden, J. (2015). Heavy-tailed distributions and robustness in economics and finance (Vol. 214). Springer.

Lévy, Paul (1925). Calcul des probabilités. Paris: Gauthier-Villars.

Mandelbrot, B. (1960). The Pareto–Lévy Law and the Distribution of Income. International Economic Review. 1 (2): 79–106.

Markowitz, H. (1952). Portfolio Selection, Journal of Finance, 7 (1), 77-91.

Markowitz, H. (1971). Portfolio selection : efficient diversification of investments. Yale University Press.

Moscadelli, M. (2004). The modelling of operational risk: experience with the analysis of the data collected by the Basel committee. Technical Report 517, Banca d’Italia

Raper, S. (2018). Turning points: Bernoulli’s golden theorem. Significance, 15(4), 26-29.

Ross, S. A. (1976). A note on a paradox in portfolio theory. Unpublished Mimeo, University of Pennsylvania.

Roy, A. D. (1952). Safety first and the holding of assets. Econometrica, 431-449.

Samuelson, P. A. (1967). Efficient portfolio selection for Pareto-Lévy investments. Journal of financial and quantitative analysis, 2(2), 107-122.

Sornette, D., Knopoff, L., Kagan, Y. Y., & Vanneste, C. (1996). Rank‐ordering statistics of extreme events: Application to the distribution of large earthquakes. Journal of Geophysical Research: Solid Earth, 101(B6), 13883-13893.

Smith, M. L., & Kane, S. A. (1994). The law of large numbers and the strength of insurance. In Insurance, Risk Management, and Public Policy: Essays in Memory of Robert I. Mehr (pp. 1-27). Dordrecht: Springer Netherlands.

Zajdenweber, D. (1996). Extreme values in business interruption insurance. Journal of Risk and Insurance, 95-110.

Zajdenweber, D. (2000). Économie des extrêmes. Flammarion.

1. The case of dice is somewhat peculiar because the geometry of the cube, particularly its regularity (we refer to it as a regular hexahedron with six faces), allows us to infer the probability without any experimentation

2. The theoretical literature on probabilities is largely built on the idea of finite expectation variables, and it is very hard to do without them (making any reasoning “on average” impossible)

3. It was not until the 1970s, with the work of Guus Balkema and Laurens de Haan, that we had a mathematical proof of this result. The Dutch school of statistics made significant advances in the analysis of extreme events following the 1953 North Sea flood, which had major and disastrous consequences in the Netherlands, as recalled by Embrechts et al. (2024)

4. The sum (or average) of independent normal variables also follows a normal distribution

5. He calls these laws Pareto-Lévy to emphasize the shape of the distribution tails, corresponding to Pareto-type laws, on extreme losses (on the left) and extreme gains (on the right)


Brief talk on non-diversification of extreme risks, for France Stratégie

Tomorrow, I will be giving a (brief) talk at our working group, at France Stratégie, on the (non-)diversification of extreme risks. Slides are online, and the results are related to recent papers by Paul Embrechts and Ruodu Wang. More precisely, here are some references

But first, before discussing large risks, I need to get back (quickly) to the Pareto distribution,

To visualize Pareto tails, one can consider the Pareto plot. If the points are on a straight line with (negative) slope -\alpha, then the observations are Pareto distributed, with tail index precisely \alpha. Depending on the slope (compared with -1), risks have either a finite or an infinite mean.

An infinite mean is actually not that common. It is hard to visualize what it means, because for any (finite) n, the empirical average \displaystyle{\overline{x}=\frac{1}{n}\sum_{i=1}^nx_i} always exists. To visualize what’s going on, we can plot the ratio of \max\{x_i\} to the sum. That can be related to the concept of “top share” in inequality.

On the left, risks with finite variance (and of course finite mean). In the middle, infinite variance but finite mean. After a while, it is quite rare for the maximum to account for more than 1% of the total sum. With an infinite mean, on the right, it does (and we are not too far from the limit here, since \alpha is 0.95 – a finite mean means that \alpha exceeds one).

Now, if we get back to risks and insurance, recall basic things on stochastic dominance,

Then we have the following results (that is actually the most important slide)

I did include a slide with the mathematical proof (that is quite lovely actually, and straightforward)

Risk Measures with Extreme Value Models

We’ve seen Monday, in the MAT8595 course, how to use the Generalized Pareto Distribution to estimate some downside risk measures, given a sample (assumed to be i.i.d., I will not mention here properties on extremes for stochastic processes) with distribution F. The cumulative distribution function of the Generalized Pareto distribution is here

G_{\xi,\sigma}(x) = 1-\left(1 + \frac{\xi x}{\sigma}\right)^{-1/\xi}

For some threshold u, and x\geq u, we can write

\mathbb{P}(X\leq x)=F(u)+[1-F(u)]\cdot F_u(x-u)

where F_u denotes the distribution of the excesses over the threshold u. From the Pickands–Balkema–de Haan theorem, if u is large enough, then

F_u(x-u)\approx G_{\xi,\sigma}(x-u)

Given our sample \{x_1,\cdots,x_n\}, let N_u denote the number of observations over the threshold u. Then we can write

\widehat{F}(x)=1-\frac{N_u}{n}\left(1+\widehat{\xi}\,\frac{x-u}{\widehat{\sigma}}\right)^{-1/\widehat{\xi}}

or equivalently

\widehat{\mathbb{P}}(X>x)=\frac{N_u}{n}\left(1+\widehat{\xi}\,\frac{x-u}{\widehat{\sigma}}\right)^{-1/\widehat{\xi}}

If we invert this function, we get the quantile of level p,

\widehat{Q}(p)=u+\frac{\widehat{\sigma}}{\widehat{\xi}}\left[\left(\frac{n}{N_u}(1-p)\right)^{-\widehat{\xi}}-1\right]

Instead of fixing a threshold, and then using the implied number of observations exceeding that threshold, it is also possible to fix the number of (upper) observations, and the associated threshold will then be the corresponding order statistic.
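Just to fix ideas, this closed-form quantile can be coded directly; the fitted values of \xi and \sigma used below are purely hypothetical, only there to show how the quantities combine:

> Qhat=function(p,u,xi,sigma,n,nu){ u+sigma/xi*((n/nu*(1-p))^(-xi)-1) }
> Qhat(p=.999,u=10,xi=.5,sigma=7,n=2000,nu=100)    # hypothetical fitted values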

The density of the Pareto distribution is here

g_{(\xi,\sigma)}(x) = \frac{1}{\sigma}\left(1 + \frac{\xi x}{\sigma}\right)^{-\frac{1}{\xi} - 1}

which is here a function of two parameters, \xi and \sigma. As discussed in the course, it is possible to use the Delta method to derive the asymptotic distribution of any quantile, and then get an approximated (asymptotic) confidence interval.

But since \sigma is usually not a parameter of interest, why not consider a reparametrization of our density, as a function of \xi and Q(p) (for some probability p that will be considered as fixed from now on)? We can easily get (assuming that \xi\neq 0) that

g_{\xi,Q(p)}(x)=\frac{\displaystyle{\left(\frac{n}{N_u}(1-p)\right)^{-\xi}-1}}{\xi[Q(p)-u]}\left(1+\frac{\displaystyle{\left(\frac{n}{N_u}(1-p)\right)^{-\xi}-1}}{[Q(p)-u]}\cdot x\right)^{-\frac{1}{\xi}-1}

This expression is simple, and can be used to derive the likelihood (on the observations exceeding the threshold)

\log\mathcal{L}(\xi,Q(p);\boldsymbol{x})=\sum_{i=0}^{N_u-1} \log g_{\xi,Q(p)}(x_{n-i:n})

Numerically, let us write (and plot) that function. Consider some real data here

> library(evir)    # assuming 'danish' is the Danish fire losses dataset from the evir package
> X=as.numeric(danish)
> Xs=sort(X,decreasing=TRUE)
> n=length(X)
> u=10
> nu=sum(X>u)

Consider, say, the 99.9% quantile,

> p=.999

The empirical quantile is here

> quantile(X,p)
   99.9% 
131.5519

The density and the loglikelihood functions are here

> gq=function(x,xi,q){
+ ( (n/nu*(1-p) ) ^ (-xi)-1)/(xi*(q-u))*
+ (1+((n/nu*(1-p))^(-xi)-1)/(q-u)*x)^(-1/xi-1)}

> loglik=function(param){
+ xi=param[2];q=param[1]
+ lg=function(i) log(gq(Xs[i],xi,q))
+ return(-sum(Vectorize(lg)(1:nu)))
+ }

We can try to plot this likelihood using

> h=201
> Q=seq(50,300,length=h)
> XI=seq(.1,1,length=h)
> XIQ=as.matrix(expand.grid(Q,XI))
> M=mapply(loglik,XIQ)

Unfortunately, it was not working, so I used the old style

> M=matrix(NA,h,h)
> for(i in 1:h){for(j in 1:h){M[i,j]=loglik(c(Q[i],XI[j]))}}

The level curves of the log-likelihood are here

> hc=heat.colors(100)
> image(Q,XI,-M,col=hc)
> contour(Q,XI,-M,add=TRUE)

Again, since our interest is in the quantile, we can draw the profile likelihood and get the maximum of that function

> PL=function(Q){
+ profilelikelihood=function(xi){
+ loglik(c(Q,xi))}
+ return(optim(par=.8,fn=profilelikelihood)$value)}
> (OPT=optimize(f=PL,interval=c(100,500)))

$minimum
[1] 111.1055

$objective
[1] 454.6481

and the graph is

> XQ=seq(50,300,length=101)
> L=Vectorize(PL)(XQ)
> plot(XQ,-L,type="l")
> up=OPT$objective
> abline(h=-up)
> abline(h=-up-qchisq(p=.95,df=1),col="red")
> I=which(-L>=-up-qchisq(p=.95,df=1))
> lines(XQ[I],rep(-up-qchisq(p=.95,df=1),length(I)),
+ lwd=5,col="red")
> abline(v=range(XQ[I]),lty=2,col="red")

which can be seen as an alternative to

> gpd.q(tailplot(gpd(X,u)),.999)
 Lower CI  Estimate  Upper CI 
 64.66184  94.28956 188.91752 

If we want to focus on another downside risk measure, that shouldn’t be too difficult. For instance, the expected shortfall ES(p)=\mathbb{E}(X\vert X>Q(p)) can be estimated as

\widehat{ES}(p)=\widehat{Q}(p)+e(\widehat{Q}(p))

where e(\cdot) denotes the mean excess function, which can be written, with a Generalized Pareto Distribution,

e(x)=\frac{\sigma+\xi(x-u)}{1-\xi},\qquad \xi<1

Thus, a natural estimator for the expected shortfall is

\widehat{ES}(p)=\frac{\widehat{Q}(p)}{1-\widehat{\xi}}+\frac{\widehat{\sigma}-\widehat{\xi}u}{1-\widehat{\xi}}
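As a sketch, and with the same (purely hypothetical) values as above, the plug-in expected shortfall can be computed directly, assuming \widehat{\xi}<1 so that it is finite:

> EShat=function(p,u,xi,sigma,n,nu){
+ q=u+sigma/xi*((n/nu*(1-p))^(-xi)-1)        # plug-in quantile
+ q/(1-xi)+(sigma-xi*u)/(1-xi) }             # plug-in expected shortfall
> EShat(p=.999,u=10,xi=.5,sigma=7,n=2000,nu=100)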

One more time, it is possible to re-parametrize the density of the Pareto distribution, using ES(p) instead of \sigma. Here, we get

g_{\xi,ES(p)}(x)=\frac{\displaystyle{\xi+\left(\frac{n}{N_u}(1-p)\right)^{-\xi}-1}}{\xi(1-\xi)[ES(p)-u]}\left(1+\frac{\displaystyle{\left(\frac{n}{N_u}(1-p)\right)^{-\xi}-1}}{(1-\xi)[ES(p)-u]}\cdot x\right)^{-\frac{1}{\xi}-1}

The code to get the associated log-likelihood is here

> ge=function(x,xi,es){
+ (xi+(n/nu*(1-p))^(-xi)-1)/(xi*(1-xi)*(es-u))*(1+(xi+(n/nu*(1-p))^(-xi)
+ -1)/((es-u)*(1-xi))*x)^(-1/xi-1)
+ }
> loglik=function(param){
+ xi=param[2];es=param[1]
+ lg=function(i) log(ge(Xs[i],xi,es))
+ return(-sum(Vectorize(lg)(1:nu)))
+ }

and again, we can plot it

and the profile (log) likelihood is here (for the 99.9% expected shortfall)

> PL=function(ES){
+ profilelikelihood=function(xi){
+ loglik(c(ES,xi))}
+ return(optim(par=.8,fn=profilelikelihood)$value)}
> (OPT=optimize(f=PL,interval=c(100,500)))
$minimum
[1] 143.66

$objective
[1] 454.6481

which could be compared with

> gpd.sfall(tailplot(gpd(X,u)),.999)
 Lower CI  Estimate  Upper CI 
 96.64625 191.36972 394.87555

Bias of Hill Estimators

In the MAT8595 course, we’ve seen yesterday Hill’s estimator of the tail index. To be more specific, we did see that if \overline{F}(x)=C x^{-\alpha}, with \alpha>0, then Hill estimators for \alpha are given by

\widehat{\alpha}_k = \left[\frac{1}{k}\sum_{i=0}^{k-1} \log X_{n-i,n} -\log X_{n-k,n}\right]^{-1}

for k\in\{1,2,\cdots,n\}. Then we did say that \widehat{\alpha}_k satisfies some consistency, in the sense that \widehat{\alpha}_k \overset{\mathbb{P}}{\rightarrow} \alpha if k\rightarrow\infty, but not too fast, i.e. k/n\rightarrow 0 (under additional assumptions on the rate of convergence, it is possible to prove that \widehat{\alpha}_k \overset{a.s.}{\rightarrow} \alpha). Further, under additional technical conditions,

\sqrt{k}\left(\widehat{\alpha}_k-\alpha\right)\overset{\mathcal{L}}{\rightarrow}\mathcal{N}(0,\alpha^2)
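The estimator is easy to code by hand; here is a short sketch of the formula above, for a given k (this is not the evir implementation used below, just a direct transcription):

> hill_alpha=function(x,k){
+ xs=sort(x,decreasing=TRUE)                  # order statistics, largest first
+ 1/(mean(log(xs[1:k]))-log(xs[k+1])) }       # Hill estimator of the tail index alpha

Once a sample X has been simulated (below), something like hill_alpha(X,k=150) returns the estimate based on the 150 largest observations.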

In order to illustrate this point, consider the following code. First, let us consider a Pareto survival function, and the associated quantile function

> alpha=1.5
> S=function(x){ifelse(x>1,x^(-alpha),1)}
> Q=function(p){uniroot(function(x) S(x)-(1-p),lower=1,upper=1e+9)$root}

The code here is obviously too complicated, since this power function can easily be inverted. But later on, we will consider a more complex survival function. Here are the survival function, and the quantile function,

> u=seq(0,5,by=.01)
> plot(u,Vectorize(S)(u),type="l",col="red")
> u=seq(0,99/100,by=.01)
> plot(u,Vectorize(Q)(u),type="l",col="blue",ylim=c(0,20))

Here, we need the quantile function to generate a random sample from this distribution,

> n=500
> set.seed(1)
> X=Vectorize(Q)(runif(n))

Hill plot is here

> library(evir)
> hill(X)
> abline(h=alpha,col="blue")

We can now generate thousands of random samples, and see how those estimators behave (for some specific k’s).

> ns=10000
> HillK=matrix(NA,ns,10)
> for(s in 1:ns){
+ X=Vectorize(Q)(runif(n))
+ H=hill(X,plot=FALSE)
+ hillk=function(k) H$y[H$x==k]
+ HillK[s,]=Vectorize(hillk)(15*(1:10))
+ }

and if we compute the average,

> plot(15*(1:10),apply(HillK,2,mean))

we do get a series of estimators that can be considered as unbiased.

So far, so good. Now, recall that being in the max-domain of attraction of the Fréchet distribution does not mean that \overline{F}(x)=C x^{-\alpha}, with \alpha>0, but it means that

\overline{F}(x)= x^{-\alpha}\,\mathcal{L}(x)

for some slowly varying function \mathcal{L}, not necessarily constant! In order to understand what could happen, we have to be slightly more specific. And this can be done only by looking at the second order regular variation property of the survival function. Assume here that there is some auxiliary function a such that

\lim_{t\rightarrow\infty}\frac{\overline{F}(xt)/\overline{F}(t)-x^{-\alpha}}{a(t)}=x^{-\alpha}\frac{1-x^{-\beta}}{\beta}

This (positive) constant \beta is – somehow – related to the speed of convergence of the ratio of the survival functions to the power function (see e.g. Geluk et al. (2000) for some examples).

To be more specific, assume that

\overline{F}(x)=\underbrace{C(1+x^{-\beta})}_{\mathcal{L}(x)}\cdot x^{-\alpha}

then the second order regular variation property is obtained using a(t)=\beta t^{-\beta}, and if k goes to infinity too fast, the estimator will be biased. More precisely (see Chapter 6 in Embrechts et al. (1997)), if k=O(n^{2\beta/(\alpha+2\beta)}), then, for some \lambda>0,

\sqrt{k}\left(\widehat{\alpha}_k-\alpha\right)\overset{\mathcal{L}}{\rightarrow}\mathcal{N}\left(\frac{\alpha^3}{\beta-\alpha}\lambda,\alpha^2\right)

The intuitive interpretation of this result is that if k is too large, and if the underlying distribution is not exactly a Pareto distribution (and we do have this second order property), then Hill’s estimator is biased. This is what we mean when we say

  • if k is too large, \widehat{\alpha}_k is a biased estimator
  • if k is too small, \widehat{\alpha}_k is a volatile estimator

(the latter comes from properties of a sample mean: the more observations, the less the volatility of the mean).

Let us run some simulations to get a better understanding of what’s going on. Using the previous code, it is actually extremely simple to generate a random sample with survival function

\overline{F}(x)=\underbrace{C(1+x^{-\beta})}_{\mathcal{L}(x)}\cdot x^{-\alpha}

> beta=.5
> S=function(x){
+ ifelse(x>1,.5*x^(-alpha)*(1+x^(-beta)),1) }
> Q=function(p){uniroot(function(x) S(x)-(1-p),lower=1,upper=1e+9)$root}

If we use the code above, with

> n=500
> set.seed(1)
> X=Vectorize(Q)(runif(n))

the Hill plot becomes

> library(evir)
> hill(X)
> abline(h=alpha,col="blue")

But it’s based on one sample only. Again, consider thousands of samples, and let us see how Hill’s estimator behaves,

so that the (empirical) mean of those estimators is

How old is the oldest person you know?

Last week, we had a discussion with some colleagues about the fact that – in order to prepare for the SOA exams – we did not have time (so far) to mention results on extreme values in our actuarial program. I did give an introduction in my nonlife actuarial models class, but it was only an introduction, in three hours, in order to illustrate reinsurance pricing. And I told my students that if they wanted to know more about extreme values, they should start a master program in actuarial science and finance, since I will give a course on extremes (and copulas) next winter.

But actually, extreme values are everywhere! For instance, there is a Prudential TV commercial where people place large, round stickers on a number line to represent the age of the oldest person they know. This forms some kind of histogram. The message is that Prudential can prepare you to have adequate money for all these years. And actually, anyone can add his or her own sticker at the Prudential website.

Patrick Honner, on his blog (http://mrhonner.com/…), did mention this interesting representation. But this idea is not new, as mentioned in a post published three years ago. In 1932, Emil Gumbel gave a talk in France on the “âge limite” (the limiting age). As he wrote, “we can therefore assume that the distribution of the limiting age – that is, the probability that this age takes a given value – is Gaussian”. At the time, not being aware of Fisher and Tippett’s work, he thought that the limiting distribution of a maximum would be Gaussian. But a few years later, he read about Fisher’s work, and observed that “the distribution of an extreme value can, for a sufficient number of observations, be represented by the doubly exponential formula, provided that the initial distribution behaves asymptotically like an exponential. The formula becomes rigorous if the initial distribution is exponential”, as he wrote in 1935. And in 1937, he wrote a paper on “les centenaires” (the centenarians) that can also be related to the work of Bortkiewicz on rare events. One should also mention one of the most important papers in extreme value theory, published in 1974 by Balkema and de Haan, on Residual Life Time at Great Age.

Because in this experiment, the question is “How Old is the Oldest Person You Know?”, it is the distribution of a maximum. And from the Fisher-Tippett theorem, if we assume that the age is bounded (that there exists some finite upper limit), then the limiting distribution of the maxima (or, to be more rigorous, of an affine transformation of the maxima) should be a Weibull distribution. And this is what it looks like

> x=seq(0,10,by=.01)     # grid for the (reversed) Weibull density; the range is arbitrary
> plot(-x,dweibull(x,2.25,4),type="l",lwd=2)

As an actuary, the only thing I know about demography, is the distribution of the age of death. For instance, consider the following French life table

> alive <- read.table(
+ "https://perso.univ-rennes1.fr/arthur.charpentier/TV8890.csv",
+ sep=";",header=TRUE)$Lx
> nb= -diff(alive)
> ages=0:110
> plot(ages,nb,type="h")

This is the distribution of the age at death in a given population, which is not the same as the distribution mentioned above! What we look for is the following: given that someone is alive, what could be the distribution of his or her age? Actually, if we assume that the yearly number of births is constant over time (as well as the death probabilities), then we can easily compute the number of people of age x: we take everyone born (exactly) x years ago, and remove all those who died at age x, x-1, etc. So the function should be

> probadeath=nb/sum(nb)
> nbx=function(x) 1-sum(probadeath[1:(x+1)])
> surv=Vectorize(nbx)(ages)
> distrage=surv/sum(surv)

which looks like

But this assumption of a constant number of births is not that relevant. Actually, what we need is the distribution of the age within a population… This is a population pyramid, actually. The French one can be downloaded from http://www.insee.fr/fr/ppp/bases-de-donnees/….

> population <- read.table("popinsee2007.csv",sep=";",header=TRUE)$POPTOT07
> ages=0:107
> plot(ages,population/sum(population),type="h")

(the red line being the one obtained previously, using some natality assumptions). Now, let us use this population to generate acquaintances.

> agemax=function(nsim=1000,size=20){
+ agemax=rep(NA,nsim)
+ for(i in 1:nsim){
+ X=sample(ages,prob=population/sum(population),size=size,replace=TRUE)
+ agemax[i]=max(X)}
+ return(agemax)}

Here, we assume that everyone knows 20 other people, randomly chosen in the entire population, and we return the age of the oldest. We do that for 10,000 people. Here is the distribution we obtain,

> XS=agemax(10000,20)
> plot(table(XS)/length(XS),type="h",xlim=c(0,108))

where the red line is a Weibull distribution (a transformed one, actually, since in extreme value theory, the distance to the upper bound of the distribution has a Weibull density),

> library(MASS)
> fit=fitdistr(108-XS,dweibull,list(shape=1,scale=1))
> lines(ages,dweibull(108-ages,fit$estimate[1],fit$estimate[2]),col="red")

Which is quite close to the distribution obtained in the commercial, don’t you think ? But still, it should be possible to be more accurate, since people should think of their parents, or grandparents. So I guess it could be possible to build a more accurate algorithm, to get something closer to the distribution obtained on the Prudential website. But first, let us wait to have more stickers, more observations… and then I’ll be back to play with it !

Large claims, and ratemaking

During the course, we have seen that it is natural to assume that not only the individual claims frequency can be explained by some covariates, but individual costs too. Of course, appropriate families should be considered to model the distribution of the cost Y, given some covariates \boldsymbol{X}. Here is the dataset we’ll use,

>  sinistre=read.table("http://freakonometrics.free.fr/sinistreACT2040.txt",
+  header=TRUE,sep=";")
>  sinistres=sinistre[sinistre$garantie=="1RC",]
>  sinistres=sinistres[sinistres$cout>0,]
>  contrat=read.table("http://freakonometrics.free.fr/contractACT2040.txt",
+  header=TRUE,sep=";")
>  couts=merge(sinistres,contrat)
> tail(couts)
     nocontrat    no garantie    cout exposition zone puissance agevehicule
1919   6104006 11933      1RC 5376.04       0.37    E         6           1
1920   6107355 12349      1RC   51.63       0.74    E         4           1
1921   6108364 13229      1RC 1320.00       0.74    B         9           1
1922   6109171 11567      1RC 1320.00       0.74    B        13           1
1923   6111208 14161      1RC  970.20       0.49    E        10           5
1924   6111650 14476      1RC 1940.40       0.48    E         4           0
     ageconducteur bonus marque carburant densite region
1919            32    57     12         E      93     10
1920            45    57     12         E      72     10
1921            32   100     12         E      83      0
1922            56    50     12         E      93     13
1923            30    90     12         E      53      2
1924            69    50     12         E      93     13

Here, each line is a claim. Usual families to model the cost are the Gamma distribution, or the inverse Gaussian. Or the lognormal distribution (which is not in the exponential family, but one can assume that the logarithm of the cost can be modeled with a Gaussian distribution). Consider here only one covariate, e.g. the age of the car, and two different models: a Gamma one, and a lognormal one.

> age=0:20
> reggamma.sp <- glm(cout~agevehicule,family=Gamma(link="log"),
+ data=couts)
> Pgamma <- predict(reggamma.sp,newdata=data.frame(agevehicule=age),type="response")

For the Gamma regression, it is a simple GLM, so it is not difficult. For a lognormal distribution, one should remember that the expected value of a lognormal distribution is not the exponential of the mean of the underlying Gaussian distribution. A correction should be made here to get an unbiased estimator for the average cost,

> reglm.sp <- lm(log(cout)~agevehicule,data=couts)
> sigma <- summary(reglm.sp)$sigma
> mu <- predict(reglm.sp,newdata=data.frame(agevehicule=age))
> Pln <- exp(mu+sigma^2/2)

We can plot those two predictions on a single graph,

> plot(age,Pgamma,xlab="",ylab="",col="red",type="b",pch=4)
> lines(age,Pln,col="blue",type="b")

Here it is,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-14.18.56.png

Observe that it is also possible to use splines, since there might be no reason for the age to appear here in a multiplicative way,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-14.25.52.png

Here, the two models are rather close. Nevertheless, one should remember that the Gamma model can be extremely sensitive to large claims (I mean here really large claims). On the other hand, with the log-transformation for the lognormal model, it seems that this model is less sensitive to large events. Actually, if I use the complete dataset, the regressions are the following,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-14.19.44.png

i.e. with a lognormal distribution, the average cost is decreasing with the age of the car, while it is increasing with a Gamma model. The main reason here is that there is one large (not to say huge) claim in the dataset,

> couts[which.max(couts$cout),]
         cout exposition zone puissance agevehicule ageconducteur
7842  4024601       0.22    B         9          13            19
     marque carburant densite region
7842      2         E      93     24

One young driver got a $4 million claim, with a 13-year-old car. This is an outlier for the Gamma regression, and it clearly influences the estimation (the second largest claim is only one third of this one). Since large claims have such a clear influence on the estimation of the average cost, a natural idea might be to remove them. Or perhaps to see them as different from normal claims: normal claims can be explained by some covariates, but perhaps those large claims should be shared not only within their own class, but among all the insured in the portfolio. To formalize this idea, observe that we can write

https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|\boldsymbol{X})%20=%20{\color{Blue}%20{\underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq%20s)}_{A}%20\cdot%20{\underbrace{\mathbb{P}(Y\leq%20s|\boldsymbol{X})}_{B}}}}+{\color{Red}%20{{\underbrace{\mathbb{E}(Y|Y%3E%20s,%20\boldsymbol{X})%20}_{C}}\cdot%20{\underbrace{\mathbb{P}(Y%3E%20s|%20\boldsymbol{X})}_{B}}}}

where the blue part is associated with normal-sized claims, while large ones correspond to the red part. It is then possible to run three regressions: one on normal-sized claims, one on large claims, and one on the indicator of having a large claim, given that a claim occurred. The code is something like the following: a large claim – here – is one above $10,000 (a threshold one has to fix),

> s= 10000
> couts$normal=(couts$cout<=s)
> mean(couts$normal)
[1] 0.9818087

i.e. large claims represent about 2% of the claims in our dataset. We can run 3 sets of regressions, with smoothed regressions on the age of the car. The first one models the individual cost of large claims,

> indice = which(couts$cout>s)
> mean(couts$cout[indice])
[1] 34471.59
> library(splines)
> regB=glm(cout~bs(agevehicule),data=couts,
+ subset=indice,family=Gamma(link="log"))
> ypB=predict(regB,newdata=data.frame(agevehicule=age),type="response")
> ypB2=mean(couts$cout[indice])

the second one models the individual cost of normal claims,

> indice = which(couts$cout<=s)
> mean(couts$cout[indice])
[1] 1335.878
> regA=glm(cout~bs(agevehicule),data=couts,
+ subset=indice,family=Gamma(link="log"))
> ypA=predict(regA,newdata=data.frame(agevehicule=age),type="response")
> ypA2=mean(couts$cout[indice])

And finally, a third one models the probability of having a normal-sized claim, given that a claim occurred,

> regC=glm(normal~bs(agevehicule),data=couts,family=binomial)
> ypC=predict(regC,newdata=data.frame(agevehicule=age),type="response")
> regC2=glm(normal~1,data=couts,family=binomial)
> ypC2=predict(regC2,newdata=data.frame(agevehicule=age),type="response")

Note that, each time, we have something that can be interpreted either as https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|\boldsymbol{X},Y\gtrless%20%20s), or https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|Y\gtrless%20%20s) – i.e. no covariate is considered in the latter. On the graph below, we plot

https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|\boldsymbol{X})%20=%20{\color{Blue}%20{\underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq%20s)}_{A}%20\cdot%20{\underbrace{\mathbb{P}(Y\leq%20s|\boldsymbol{X})}_{B}}}}+{\color{Red}%20{{\underbrace{\mathbb{E}(Y|Y%3E%20s,%20\boldsymbol{X})%20}_{C}}\cdot%20{\underbrace{\mathbb{P}(Y%3E%20s|%20\boldsymbol{X})}_{B}}}}

where Gamma regressions – with splines – are considered for the average costs, while logistic regressions – again with splines – are considered to model probabilities.
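
In R, the combination can be written, for instance, as follows (reusing ypA, ypB and ypC from above; the name ypABC on the left-hand side is mine), since ypC is the predicted probability of a normal-sized claim,

> ypABC = ypC*ypA + (1-ypC)*ypB
> plot(age,ypABC,type="b",xlab="",ylab="")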

http://freakonometrics.hypotheses.org/files/2013/02/ecret-ABC-v2.gif

(but be careful with splines: near the borders, since we do not have a lot of observations, the behavior can be… odd, and adjustments should be made to obtain an adequate level of premium). If it is legitimate to assume that normal-sized claims can be explained by some covariates, perhaps large claims (or extremely large ones) are just purely random, i.e. not a function of any covariate at all. I.e.

https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|\boldsymbol{X})%20=%20{\color{Blue}%20{\underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq%20s)}_{A}%20\cdot%20{\underbrace{\mathbb{P}(Y\leq%20s|\boldsymbol{X})}_{B}}}}+{\color{Red}%20{{\underbrace{\mathbb{E}(Y|Y%3E%20s)%20}_{C%27}}\cdot%20{\underbrace{\mathbb{P}(Y%3E%20s|%20\boldsymbol{X})}_{B}}}}

http://freakonometrics.hypotheses.org/files/2013/02/ecret-AB2C-v2.gif

To go one step further, it might also be possible to assume that not only is the size of the claim (given that it is a large one) not a function of any covariate, but neither is the probability of having an extremely large claim,

https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|\boldsymbol{X})%20=%20{\color{Blue}%20{\underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq%20s)}_{A}%20\cdot%20{\underbrace{\mathbb{P}(Y\leq%20s)}_{B%27}}}}+{\color{Red}%20{{\underbrace{\mathbb{E}(Y|Y%3E%20s)%20}_{C%27}}\cdot%20{\underbrace{\mathbb{P}(Y%3E%20s)}_{B%27}}}}

http://freakonometrics.hypotheses.org/files/2013/02/ecret-AB2C2-v2.gif
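
The simplified versions can be obtained in the same way, replacing the regression-based predictions by the overall averages ypB2 and ypC2 computed above (again, the names on the left-hand side are mine),

> ypAB2C  = ypC*ypA  + (1-ypC)*ypB2    # large claim severity not explained by covariates
> ypAB2C2 = ypC2*ypA + (1-ypC2)*ypB2   # neither severity nor probability of a large claim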

From the first part, we’ve seen that the distribution considered had an impact on the prediction, and in the second, we’ve seen that the definition of large claims (and how to deal with them) also has an impact. So clearly, actuaries have some leverage when working on ratemaking…

Tests on tail index for extremes

Since several students got the intuition that natural catastrophes might be non-insurable (underlying distributions with infinite mean), I will post some comments on testing procedures for extreme value models.

A natural idea is to use a likelihood ratio test (for composite hypotheses). Let http://freakonometrics.blog.free.fr/public/perso5/lrtest21.gif denote the parameter (of our parametric model, e.g. the tail index), and suppose we would like to know whether http://freakonometrics.blog.free.fr/public/perso5/lrtest21.gif is smaller or larger than http://freakonometrics.blog.free.fr/public/perso5/lrtest22.gif (where, in the context of finite versus infinite mean, http://freakonometrics.blog.free.fr/public/perso5/lrtest23.gif). I.e. either http://freakonometrics.blog.free.fr/public/perso5/lrtest21.gif belongs to the set http://freakonometrics.blog.free.fr/public/perso5/lrtest-10.gif or to its complement http://freakonometrics.blog.free.fr/public/perso5/lrtest-11.gif. Consider the maximum likelihood estimator http://freakonometrics.blog.free.fr/public/perso5/lrtest24.gif, i.e.

http://freakonometrics.blog.free.fr/public/perso5/lrtest-9.gif

Let http://freakonometrics.blog.free.fr/public/perso5/lrtest25.gif and http://freakonometrics.blog.free.fr/public/perso5/lrtest-3.gif denote the constrained maximum likelihood estimators on http://freakonometrics.blog.free.fr/public/perso5/lrtest26.gif and http://freakonometrics.blog.free.fr/public/perso5/lrtest27.gif respectively,

http://freakonometrics.blog.free.fr/public/perso5/lrtest-12.gif

http://freakonometrics.blog.free.fr/public/perso5/lrtest-2.gif

Either http://freakonometrics.blog.free.fr/public/perso5/lrtest-13.gif and http://freakonometrics.blog.free.fr/public/perso5/lrtest-6.gif (on the left), or http://freakonometrics.blog.free.fr/public/perso5/lrtest-14.gif and http://freakonometrics.blog.free.fr/public/perso5/lrtest-7.gif (on the right)

So likelihood ratios

http://freakonometrics.blog.free.fr/public/perso5/lrtest-15.gif      http://freakonometrics.blog.free.fr/public/perso5/lrtest-16.gif

 are either equal to

http://freakonometrics.blog.free.fr/public/perso5/lrtest-19.gif      http://freakonometrics.blog.free.fr/public/perso5/lrtest-18.gif

or

http://freakonometrics.blog.free.fr/public/perso5/lrtest-20.gif        http://freakonometrics.blog.free.fr/public/perso5/lrtest-17.gif

If we use the code mentioned in the post on profile likelihood, it is easy to derive that ratio. The following graph is the evolution of that ratio, based on a GPD assumption, for different thresholds,

> base1=read.table(
+ "http://freakonometrics.free.fr/danish-univariate.txt",
+ header=TRUE)
> library(evir)
> X=base1$Loss.in.DKM
> U=seq(2,10,by=.2)
> LR=P=ES=SES=rep(NA,length(U))
> for(j in 1:length(U)){
+ u=U[j]
+ Y=X[X>u]-u
+ loglikelihood=function(xi,beta){
+ sum(log(dgpd(Y,xi,mu=0,beta))) }
+ XIV=(1:300)/100;L=rep(NA,300)
+ for(i in 1:300){
+ XI=XIV[i]
+ profilelikelihood=function(beta){
+ -loglikelihood(XI,beta) }
+ L[i]=-optim(par=1,fn=profilelikelihood)$value }
+ plot(XIV,L,type="l")
+ PL=function(XI){
+ profilelikelihood=function(beta){
+ -loglikelihood(XI,beta) }
+ return(optim(par=1,fn=profilelikelihood)$value)}
+ (L0=(OPT=optimize(f=PL,interval=c(0,10)))$objective)
+ profilelikelihood=function(beta){
+ -loglikelihood(1,beta) }
+ (L1=optim(par=1,fn=profilelikelihood)$value)
+ LR[j]=2*(L1-L0)   # likelihood ratio statistic, twice the log-likelihood difference
+ P[j]=1-pchisq(LR[j],df=1)
+ G=gpd(X,u)
+ ES[j]=G$par.ests[1]
+ SES[j]=G$par.ses[1]
+ }
>
> plot(U,LR,type="b",ylim=range(c(0,LR)))
> abline(h=qchisq(.95,1),lty=2)

with, on top, the values of the ratio (the dotted line is the 95% quantile of the chi-square distribution with one degree of freedom) and, below, the associated p-values

> plot(U,P,type="b",ylim=range(c(0,P)))
> abline(h=.05,lty=2)

For comparison, it is also possible to look at the confidence interval for the tail index of the GPD fit,

> plot(U,ES,type="b",ylim=c(0,1))
> lines(U,ES+1.96*SES,type="h",col="red")
> abline(h=1,lty=2)

To go further, see Falk (1995), Dietrich, de Haan & Hüsler (2002), Hüsler & Li (2006) with the following table, or Neves & Fraga Alves (2008). See also here or there (for the latex based version) for an old paper I wrote on that topic.

Tail index estimation

These data were collected at Copenhagen Reinsurance and comprise 2167 fire losses over the period 1980 to 1990. They have been adjusted for inflation to reflect 1985 values and are expressed in millions of Danish kroner. Note that it is also possible to work with a multivariate version of the same data, where the total claim has been divided into a building loss, a loss of contents and a loss of profits.

> base1=read.table(
+ "http://freakonometrics.free.fr/danish-univariate.txt",
+ header=TRUE)
> base2=read.table(
+ "http://freakonometrics.free.fr/danish-multivariate.txt",
+ header=TRUE)

Consider here the first dataset (we deal – so far – with univariate extremes),

> X=base1$Loss.in.DKM
> D=as.Date(as.character(base1$Date),"%m/%d/%Y")
> plot(D,X,type="h")

The graph is the following,

A natural idea is then to plot

http://freakonometrics.hypotheses.org/files/2015/12/hill01.gif

i.e.

> Xs=sort(X)
> logXs=rev(log(Xs))
> n=length(X)
> plot(log(Xs),log((n:1)/(n+1)))

The points are roughly on a straight line here. The slope can be obtained using a linear regression,

> B=data.frame(X=log(Xs),Y=log((n:1)/(n+1)))
> reg=lm(Y~X,data=B)
> summary(reg)

Call:
lm(formula = Y ~ X, data = B)

Residuals:
Min       1Q   Median       3Q      Max
-0.59999 -0.00777  0.00878  0.02461  0.20309

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.089442   0.001572   56.88   <2e-16 ***
X           -1.382181   0.001477 -935.55   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.04928 on 2165 degrees of freedom
Multiple R-squared: 0.9975,	Adjusted R-squared: 0.9975
F-statistic: 8.753e+05 on 1 and 2165 DF,  p-value: < 2.2e-16

> reg=lm(Y~X,data=B[(n-500):n,])
> summary(reg)

Call:
lm(formula = Y ~ X, data = B[(n - 500):n, ])

Residuals:
Min       1Q   Median       3Q      Max
-0.48502 -0.02148 -0.00900  0.01626  0.35798

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.186188   0.010033   18.56   <2e-16 ***
X           -1.432767   0.005105 -280.68   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.07751 on 499 degrees of freedom
Multiple R-squared: 0.9937,	Adjusted R-squared: 0.9937
F-statistic: 7.878e+04 on 1 and 499 DF,  p-value: < 2.2e-16

> reg=lm(Y~X,data=B[(n-100):n,])
> summary(reg)

Call:
lm(formula = Y ~ X, data = B[(n - 100):n, ])

Residuals:
Min       1Q   Median       3Q      Max
-0.33396 -0.03743  0.02279  0.04754  0.62946

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.67377    0.06777   9.942   <2e-16 ***
X           -1.58536    0.02240 -70.772   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.1299 on 99 degrees of freedom
Multiple R-squared: 0.9806,	Adjusted R-squared: 0.9804
F-statistic:  5009 on 1 and 99 DF,  p-value: < 2.2e-16

The slope here is somehow related to the tail index of the distribution. Consider some heavy tailed distribution, i.e. http://freakonometrics.hypotheses.org/files/2015/12/hill03.gif, so that http://freakonometrics.hypotheses.org/files/2015/12/hill27.gif, where http://freakonometrics.hypotheses.org/files/2015/12/hill28.gif is some slowly varying function. Equivalently, there exists a slowly varying function http://freakonometrics.hypotheses.org/files/2015/12/hill29.gif such that http://freakonometrics.hypotheses.org/files/2015/12/hill30.gif. Then

http://freakonometrics.hypotheses.org/files/2015/12/hill33.gif

i.e. since a natural estimator for http://freakonometrics.hypotheses.org/files/2015/12/hill35.gif is the order statistic http://freakonometrics.hypotheses.org/files/2015/12/hill36.gif, the slope of the straight line is the opposite of the tail index http://freakonometrics.hypotheses.org/files/2015/12/hill98.gif. The estimator of the slope is then (considering only the http://freakonometrics.hypotheses.org/files/2015/12/hill99.gif largest observations)

http://freakonometrics.hypotheses.org/files/2015/12/hill39.gif

Hill‘s estimator is based on the assumption that the denominator above is almost 1 (which means that  http://freakonometrics.hypotheses.org/files/2015/12/hill15.gif, as http://freakonometrics.hypotheses.org/files/2015/12/hill16.gif), i.e.

http://freakonometrics.hypotheses.org/files/2015/12/hill02.gif

Note that, if http://freakonometrics.hypotheses.org/files/2015/12/hill14.gif, but not too fast, i.e. http://freakonometrics.hypotheses.org/files/2015/12/hill15.gif as http://freakonometrics.hypotheses.org/files/2015/12/hill16.gif, then http://freakonometrics.hypotheses.org/files/2015/12/hill12.gif (one can even get http://freakonometrics.hypotheses.org/files/2015/12/hill11.gif with stronger convergence assumptions). Further

http://freakonometrics.hypotheses.org/files/2015/12/hill04.gif

Based on that (asymptotic) distribution, it is possible to get an (asymptotic) confidence interval for http://freakonometrics.hypotheses.org/files/2015/12/hill98.gif

> xi=1/(1:n)*cumsum(logXs)-logXs
> xise=1.96/sqrt(1:n)*xi
> plot(1:n,xi,type="l",ylim=range(c(xi+xise,xi-xise)),
+ xlab="",ylab="",)
> polygon(c(1:n,n:1),c(xi+xise,rev(xi-xise)),
+ border=NA,col="lightblue")
> lines(1:n,xi+xise,col="red",lwd=1.5)
> lines(1:n,xi-xise,col="red",lwd=1.5)
> lines(1:n,xi,lwd=1.5)
> abline(h=0,col="grey")
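
For comparison, the hill function from the evir package computes the same estimator and plots it against the number of upper order statistics, with confidence bands (a quick check, not in the original post),

> library(evir)
> hill(X,option="xi")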

It is also possible to work with http://freakonometrics.hypotheses.org/files/2015/12/hill06.gif, then http://freakonometrics.hypotheses.org/files/2015/12/hill05.gif. And similarly http://freakonometrics.hypotheses.org/files/2015/12/hill13.gif as http://freakonometrics.hypotheses.org/files/2015/12/hill14.gif (and again http://freakonometrics.hypotheses.org/files/2015/12/hill10.gif with additional assumptions on the rate of convergence), and

http://freakonometrics.hypotheses.org/files/2015/12/hill09.gif

(obtained using the delta-method). Again, we can use that result to derive (asymptotic) confidence intervals

> alpha=1/xi
> alphase=1.96/sqrt(1:n)/xi
> YL=c(0,3)
> plot(1:n,alpha,type="l",ylim=YL,xlab="",ylab="",)
> polygon(c(1:n,n:1),c(alpha+alphase,rev(alpha-alphase)),
+ border=NA,col="lightblue")
> lines(1:n,alpha+alphase,col="red",lwd=1.5)
> lines(1:n,alpha-alphase,col="red",lwd=1.5)
> lines(1:n,alpha,lwd=1.5)
> abline(h=0,col="grey")

The Dekkers-Einmahl-de Haan estimator (also known as the moment estimator) is

http://freakonometrics.hypotheses.org/files/2015/12/hill25.gif

where, for

http://freakonometrics.hypotheses.org/files/2015/12/hill21.gif

Then (again under conditions on the speed of convergence, i.e. http://freakonometrics.hypotheses.org/files/2015/12/hill14.gif, with http://freakonometrics.hypotheses.org/files/2015/12/hill15.gif as http://freakonometrics.hypotheses.org/files/2015/12/hill16.gif),

http://freakonometrics.hypotheses.org/files/2015/12/hill42.gif
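
No R code was given above for this estimator, so here is a minimal sketch of what it could look like on the Danish data, as a function of the number k of upper order statistics (reusing X and n defined earlier; the names Xdec, M1, M2 and xiDEdH are mine),

> Xdec = rev(sort(X))
> K = 2:trunc(n/4)
> xiDEdH = sapply(K,function(k){
+ logexc = log(Xdec[1:k])-log(Xdec[k+1])
+ M1 = mean(logexc); M2 = mean(logexc^2)
+ M1+1-.5/(1-M1^2/M2)})
> plot(K,xiDEdH,type="l",ylim=c(0,2),xlab="",ylab="")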

Finally, Pickands‘ estimator

http://freakonometrics.hypotheses.org/files/2015/12/hill26.gif

it is possible to prove that, as http://freakonometrics.hypotheses.org/files/2015/12/hill14.gif,

http://freakonometrics.hypotheses.org/files/2015/12/hill41.gif

Here the code is

> Xs=rev(sort(X))
> xi=1/log(2)*log( (Xs[seq(1,length=trunc(n/4),by=1)]-
+ Xs[seq(2,length=trunc(n/4),by=2)])/
+ (Xs[seq(2,length=trunc(n/4),by=2)]-Xs[seq(4,
+ length=trunc(n/4),by=4)]) )
> xise=1.96/sqrt(seq(1,length=trunc(n/4),by=1))*
+sqrt( xi^2*(2^(xi+1)+1)/((2*(2^xi-1)*log(2))^2))
> plot(seq(1,length=trunc(n/4),by=1),xi,type="l",
+ ylim=c(0,3),xlab="",ylab="",)
> polygon(c(seq(1,length=trunc(n/4),by=1),rev(seq(1,
+ length=trunc(n/4),by=1))),c(xi+xise,rev(xi-xise)),
+ border=NA,col="lightblue")
> lines(seq(1,length=trunc(n/4),by=1),
+ xi+xise,col="red",lwd=1.5)
> lines(seq(1,length=trunc(n/4),by=1),
+ xi-xise,col="red",lwd=1.5)
> lines(seq(1,length=trunc(n/4),by=1),xi,lwd=1.5)
> abline(h=0,col="grey")

It is also possible to use maximum likelihood techniques to fit a GPD distribution over a high threshold.

> library(evd)
> library(evir)
> gpd(X,5)
$n
[1] 2167

$threshold
[1] 5

$p.less.thresh
[1] 0.8827873

$n.exceed
[1] 254

$method
[1] "ml"

$par.ests
xi      beta
0.6320499 3.8074817

$par.ses
xi      beta
0.1117143 0.4637270

$varcov
[,1]        [,2]
[1,]  0.01248007 -0.03203283
[2,] -0.03203283  0.21504269

$information
[1] "observed"

$converged
[1] 0

$nllh.final
[1] 754.1115

attr(,"class")
[1] "gpd"

or equivalently (or almost), with the gpd.fit function from the ismev package,

> library(ismev)
> gpd.fit(X,5)
$threshold
[1] 5

$nexc
[1] 254

$conv
[1] 0

$nllh
[1] 754.1115

$mle
[1] 3.8078632 0.6315749

$rate
[1] 0.1172127

$se
[1] 0.4636270 0.1116136

The interest of the latter function is that it is possible to visualize the profile likelihood of the tail index,

> gpd.profxi(gpd.fit(X,5),xlow=0,xup=3)

or

> gpd.profxi(gpd.fit(X,20),xlow=0,xup=3)

Hence, it is possible to plot the maximum likelihood estimator of the tail index, as a function of the threshold (including a confidence interval),

> GPDE=Vectorize(function(u){gpd(X,u)$par.ests[1]})
> GPDS=Vectorize(function(u){
+ gpd(X,u)$par.ses[1]})
> u=c(seq(2,10,by=.5),seq(11,25))
> XI=GPDE(u)
> XIS=GPDS(u)
> plot(u,XI,ylim=c(0,2))
> segments(u,XI-1.96*XIS,u,XI+
+ 1.96*XIS,lwd=2,col="red")

Finally, it is possible to use block-maxima techniques.

> gev.fit(X)
$conv
[1] 0

$nllh
[1] 3392.418

$mle
[1] 1.4833484 0.5930190 0.9168128

$se
[1] 0.01507776 0.01866719 0.03035380

The estimator of the tail index is here the last coefficient, on the right.
Since it is rather difficult to install a package in classrooms, here is the source of the R code used here (to fit a GPD for exceedances),

> source("http://freakonometrics.blog.free.fr/public/code/gpd.R")
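
As a side note, gev.fit was applied above to the full sample; a more standard block-maxima approach would fit the GEV distribution to block (e.g. yearly) maxima. A minimal sketch, reusing the dates D defined earlier (the name maxyear is mine),

> maxyear = as.numeric(tapply(X,format(D,"%Y"),max))
> gev.fit(maxyear)

Of course, with only about ten yearly maxima such a fit would be extremely unstable; this is only to illustrate the idea.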

Next time, we will discuss how to use those estimators.

MAT8886 Extremes and sums (of i.i.d. random variables)

Yesterday, we briefly discussed sums and maxima of i.i.d. random variables using the concept of subexponential distributions. Today, we will introduce the concept of regular variation: a positive function is said to be regularly varying (at infinity), denoted http://freakonometrics.blog.free.fr/public/perso5/subexp-30.gif, for some http://freakonometrics.blog.free.fr/public/perso5/subexp-31.gif, if

http://freakonometrics.blog.free.fr/public/perso5/subexp-33.gif
for all http://freakonometrics.blog.free.fr/public/perso5/subexo_34.gif. And this concept can be related to sums and maxima (see Section 6.2.6 in Embrechts et al. (1997)). Consider i.i.d. positive random variables http://freakonometrics.blog.free.fr/public/perso5/subsexp-01.gif: let http://freakonometrics.blog.free.fr/public/perso5/subexp-2.gif and http://freakonometrics.blog.free.fr/public/perso5/subexp-3.gif. Then it can easily be shown that

  • http://freakonometrics.blog.free.fr/public/perso5/subexp-20.gif if and only if

http://freakonometrics.blog.free.fr/public/perso5/subexp-10.gif

  • http://freakonometrics.blog.free.fr/public/perso5/subexp-21.gif for some http://freakonometrics.blog.free.fr/public/perso5/subexp-23.gif if and only if there exists a non-degenerate variable http://freakonometrics.blog.free.fr/public/perso5/Z.gif such that

http://freakonometrics.blog.free.fr/public/perso5/subexp-13.gif

  • http://freakonometrics.blog.free.fr/public/perso5/subexp-21.gif with http://freakonometrics.blog.free.fr/public/perso5/subexp-22.gif if and only if

http://freakonometrics.blog.free.fr/public/perso5/subexp-14.gif
Even if it is not that simple to check such convergences formally, it is still possible to use graphs to study the behavior of the empirical version of those quantities. Consider the following function to visualize the convergence of the empirical ratios,

CONVERGENCE=function(g,p=1,n=500000){
set.seed(1)
# five independent samples of size n drawn from the generator g
X=g(n);X1=g(n);X2=g(n);X3=g(n);X4=g(n)
# running ratio max(X^p)/sum(X^p) as the sample size grows
Tp =cummax(X^p)/cumsum(X^p)
Tp1=cummax(X1^p)/cumsum(X1^p)
Tp2=cummax(X2^p)/cumsum(X2^p)
Tp3=cummax(X3^p)/cumsum(X3^p)
Tp4=cummax(X4^p)/cumsum(X4^p)
plot(Tp4,type="l",ylim=c(0,1),log="x",
xlim=c(100,n),ylab="",col="light blue",xlab="")
lines(Tp1,col="light green")
lines(Tp2,col="yellow")
lines(Tp3,col="pink")
lines(Tp,lwd=2)
abline(h=0:1,col="red",lty=2)
}

or the following to study the “asymptotic” distribution of the ratio on simulated samples

LIMITDIST=function(g,p=1,n=500000,ns=1000){
set.seed(1)
T=rep(NA,ns)
# ratio max(X^p)/sum(X^p) on ns independent samples of size n
for(i in 1:ns){
X=g(n)
T[i]=max(X^p)/sum(X^p)
}
hist(T,breaks=seq(0,1,by=.05),probability=TRUE,
col="light green",ylab="",xlab="",main="")
}

In the case of exponentially distributed variables, we have

CONVERGENCE(rexp)

For variables with a lognormal distribution,

CONVERGENCE(rlnorm)

And finally, consider the case of a Pareto distribution

rpareto=function(n){runif(n)^(-1/1.5)-1}  # inverse transform: (shifted) Pareto with tail index 1.5
CONVERGENCE(rpareto)

Here, it looks like those three distributions have a finite expectation (and indeed they do, even the Pareto one, since its tail index is 1.5). To go one step further, for http://freakonometrics.blog.free.fr/public/perso5/subexp00.gif, define http://freakonometrics.blog.free.fr/public/perso5/suuuuuubexp.gif and http://freakonometrics.blog.free.fr/public/perso5/subexp-5.gif. Then analogous results can be derived,

  • http://freakonometrics.blog.free.fr/public/perso5/subexp-99.gif if and only if

http://freakonometrics.blog.free.fr/public/perso5/subexp-11.gif

  • http://freakonometrics.blog.free.fr/public/perso5/subexp-21.gif for some http://freakonometrics.blog.free.fr/public/perso5/subexp-25.gif if and only if there exists a non-degenerate variable http://freakonometrics.blog.free.fr/public/perso5/Zk.gif such that

http://freakonometrics.blog.free.fr/public/perso5/subexp-12.gif

  • http://freakonometrics.blog.free.fr/public/perso5/subexp-21.gif with http://freakonometrics.blog.free.fr/public/perso5/subexp-22.gif if and only if

http://freakonometrics.blog.free.fr/public/perso5/subexp-15.gif
Again, it is possible to use the function defined above,

CONVERGENCE(rexp,p=2)

or

CONVERGENCE(rexp,p=3)

or even

CONVERGENCE(rexp,p=10)

If the power is not too high, it looks like the ratio goes to zero. But when it becomes larger, it looks like more simulations might be necessary to say something relevant.

CONVERGENCE(rlnorm,p=2)

or

CONVERGENCE(rlnorm,p=3)

Here also, it looks like we have a light tailed distribution (and actually, it is the case). And finally, if we consider the case of a Pareto distribution

CONVERGENCE(rpareto,p=2)

Then it looks like it is a heavy tailed distribution. In order to get a better understanding, we can plot the distribution of the ratio obtained from 1,000 simulated samples (of size 500,000),

LIMITDIST(rpareto,p=1)

versus

LIMITDIST(rpareto,p=2)

So obviously, something is going on between 1 and 2 (recall that the tail index of the Pareto distribution used here is 1.5).
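
Out of curiosity, one could also look at the boundary case, taking the power equal to the tail index (a suggestion, not in the original post),

LIMITDIST(rpareto,p=1.5)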

Fisher-Tippett theorem with an historical perspective

A couple of weeks ago, Rafael asked me if I had something on the history of extreme value theory. Since I will get back to fundamental results about extremes in my course, I promised I would write a short post on that issue.

To start from the beginning, in 1928, Ronald Fisher and Leonard Tippett formulated the three types of limiting distributions for the maximum term of a random sample (Fisher & Tippett (1928)). The problem was to characterize the functions http://freakonometrics.hypotheses.org/files/2015/12/ext-01.gif such that

http://freakonometrics.hypotheses.org/files/2015/12/ext-2.gif

where http://freakonometrics.hypotheses.org/files/2015/12/ext-3.gif, and the http://freakonometrics.hypotheses.org/files/2015/12/ext-4.gif‘s are i.i.d. with cumulative distribution function http://freakonometrics.hypotheses.org/files/2015/12/ext-5.gif. They had supporting arguments, but no (rigorous) proof. Nevertheless, they obtained that the only possible types for G were

http://freakonometrics.hypotheses.org/files/2015/12/ext-6.gif

i.e. Fréchet type (Pareto-type tails), or

http://freakonometrics.hypotheses.org/files/2015/12/ext-7.gif

i.e. Weibull type (bounded distribution type), or

http://freakonometrics.hypotheses.org/files/2015/12/ext-8.gif

i.e. Gumbel type (exponential-type tails). Emil Gumbel intensively used the so-called Gumbel distribution on river flows, since (as he explained in 1958) “it seems that the rivers know the theory. It only remains to convince the engineers of the validity of this analysis“.
Independently of that work (published in 1928), Maurice Fréchet considered in 1927 (in Sur la loi de probabilité de l’écart maximum) possible limits of

http://freakonometrics.hypotheses.org/files/2015/12/ext-9.gif

and obtained only http://freakonometrics.hypotheses.org/files/2015/12/ext-10.gif as a possible limit. In 1936, Richard von Mises gave sufficient, but not necessary, conditions for the (max) domains of attraction, i.e. a characterization of the functions http://freakonometrics.hypotheses.org/files/2015/12/ext-11.gif such that the maxima converge to some specific function http://freakonometrics.hypotheses.org/files/2015/12/ext-01.gif (von Mises (1936)). E.g. he noticed that a sufficient condition on http://freakonometrics.hypotheses.org/files/2015/12/ext-11.gif to be in the (max) domain of attraction of the Gumbel distribution is that

http://freakonometrics.hypotheses.org/files/2015/12/ext-13.gif

Then, in 1943, Boris Gnedenko proved that these three types are the only possible limits, with an explicit characterization of the domains of attraction for two of them (heavy tails, i.e. Fréchet type, and bounded support, i.e. Weibull type), but for the third one his necessary and sufficient condition was based on a function that was not explicitly defined (see Gnedenko (1943)). Laurens de Haan derived a checkable condition for Gumbel’s type in the 70’s.
Boris Gnedenko proved (in Section 4 of his paper) that F is in the (max) domain of attraction of http://freakonometrics.hypotheses.org/files/2015/12/ext-10.gif if and only if http://freakonometrics.hypotheses.org/files/2015/12/ext-16.gif is regularly varying at infinity, with index http://freakonometrics.hypotheses.org/files/2015/12/ext-17.gif (even if the term “regular variation” was not mentioned in the paper). Similar results were derived to characterize functions in the (max) domain of attraction of the Weibull type. For the (max) domain of attraction of http://freakonometrics.hypotheses.org/files/2015/12/ext-18.gif, Boris Gnedenko obtained that a necessary and sufficient condition was that there exists a function http://freakonometrics.hypotheses.org/files/2015/12/ext-19.gif such that http://freakonometrics.hypotheses.org/files/2015/12/ext-19.gif goes to 0 at infinity and

http://freakonometrics.hypotheses.org/files/2015/12/ext-20.gif

Several papers have discussed what that function http://freakonometrics.hypotheses.org/files/2015/12/ext-19.gif could be, e.g. David Mejzler in 1949 (in Russian, but see also his 1965 paper), and Laurens de Haan in 1970 and 1971 (following the dramatic flood in the Netherlands in 1953, researchers in the Netherlands focused on dikes and on extreme value applications).

Mejzler’s idea was to work on quantiles, and not on the cumulative distribution function. I.e. define

http://freakonometrics.hypotheses.org/files/2015/12/ext-21.gif

Then a necessary and sufficient condition for F to be in the (max) domain of attraction of http://freakonometrics.hypotheses.org/files/2015/12/ext-18.gif is that

http://freakonometrics.hypotheses.org/files/2015/12/ext-23.gif

Laurens de Haan proved in 1971 that the function http://freakonometrics.hypotheses.org/files/2015/12/ext-19.gif can – in general – be given by

http://freakonometrics.hypotheses.org/files/2015/12/ext-25.gif

And in 1976, Laurens de Haan obtained the three-type convergence result by working on the quantile function http://freakonometrics.hypotheses.org/files/2015/12/ext-26.gif (with a much shorter proof).
There have been many, many papers extending the Fisher-Tippett theorem, e.g. to non-independent sequences, like exchangeable ones (in a paper by Simeon Berman in 1962), or to stationary Gaussian sequences (in 1964).