Category Archives: Publications

Selection bias in insurance: why portfolio-specific fairness fails to extend market-wide

With Marie-Pier Côté and Olivier Côté, we recently uploaded a short note, Selection bias in insurance: why portfolio-specific fairness fails to extend market-wide, now available on SSRN.

Fairness centres on people. In insurance, the scope of fairness should be the entire insured population, not solely an insurer’s clients. However, each insurance company’s portfolio represents a possibly skewed subsample. Models fit to these selection-biased data do not generalise well for the broader population of insureds. Two biases stem from portfolio composition: representation bias, when large prediction errors are made on individuals from subpopulations infrequently observed, and selection bias, when underwriting and marketing skew the portfolio away from the insured population. We examine how portfolio composition affects fair premium methodologies for mitigating direct and indirect discrimination on a protected attribute. We illustrate how unfairness mitigation based on a selection-biased portfolio does not yield a fair market from the perspective of insureds. Relying on causal inference and a portfolio composition indicator, we describe the selection mechanism and determine conditions under which each bias affects various fairness-adjusted premiums. We propose a method to recover the population-wide fairness-adjusted premiums from selection-biased data, by using a (third-party provided) unbiased estimate of the prohibited attribute distribution. We show that this approach effectively mitigates selection bias but leads to overall premiums that are not balanced. In a limiting case, we show that portfolio-specific fairness-aware premiums can lead to a market-wide unawareness strategy: portfolio composition opens the back door to proxy discrimination.
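The note itself should be consulted for the exact estimator, but the general idea of correcting a portfolio-level quantity with an external, unbiased estimate of the protected-attribute distribution can be illustrated with a generic importance-weighting sketch. Everything below (the variable names, the binary attribute, the reweighting scheme) is an illustrative assumption, not the methodology of the paper.

```python
import numpy as np
import pandas as pd

# Hypothetical portfolio: a binary protected attribute s and model premiums.
rng = np.random.default_rng(42)
n = 10_000
s = rng.binomial(1, 0.30, size=n)                       # 30% of s = 1 in the (biased) portfolio
premium = rng.gamma(shape=2.0, scale=np.where(s == 1, 500.0, 350.0))
portfolio = pd.DataFrame({"s": s, "premium": premium})

# Third-party, assumed unbiased, estimate of the attribute distribution in the insured population.
population_share_s1 = 0.50

# Importance weights: population share over portfolio share, for each level of s.
portfolio_share_s1 = portfolio["s"].mean()
weights = np.where(portfolio["s"] == 1,
                   population_share_s1 / portfolio_share_s1,
                   (1 - population_share_s1) / (1 - portfolio_share_s1))

print(f"portfolio mean premium:  {portfolio['premium'].mean():.2f}")
print(f"reweighted mean premium: {np.average(portfolio['premium'], weights=weights):.2f}")
```

The reweighted average gives a more population-wide view but, as the abstract stresses, the resulting premiums need not be balanced on the portfolio itself.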

(to be continued…)

How to Go Beyond the Coldness of Numbers and Take Action?

 This article was written with Nicolas Marescaux, originally in French.

Today, our modern life relies largely on numbers. They guide most collective decisions and many individual choices. For Lord Kelvin [1], “If you cannot measure it, you cannot improve it.” In other words, to make a good decision, you must first measure well. But is that enough? IPCC reports have been compiling data and figures for decades, announcing a short-term catastrophe. And yet, nothing happens. “The modern man scorns imagination,” stated Stéphane Mallarmé in 1897. Isn’t it this subjectivity of our imagination that could save us? Continue reading How to Go Beyond the Coldness of Numbers and Take Action?


Probabilistic Scores of Classifiers, Calibration is not Enough

Our paper “Probabilistic Scores of Classifiers, Calibration is not Enough”, written with Agathe Fernandes Machado, Emmanuel Flachaire, Ewen Gallic and François Hu, is now available at https://arxiv.org/abs/2408.03421

In binary classification tasks, accurate representation of probabilistic predictions is essential for various real-world applications such as predicting payment defaults or assessing medical risks. The model must then be well-calibrated to ensure alignment between predicted probabilities and actual outcomes. However, when score heterogeneity deviates from the underlying data probability distribution, traditional calibration metrics lose reliability, failing to align score distribution with actual probabilities. In this study, we highlight approaches that prioritize optimizing the alignment between predicted scores and true probability distributions over minimizing traditional performance or calibration metrics. When employing tree-based models such as Random Forest and XGBoost, our analysis emphasizes the flexibility these models offer in tuning hyperparameters to minimize the Kullback-Leibler (KL) divergence between predicted and true distributions. Through extensive empirical analysis across 10 UCI datasets and simulations, we demonstrate that optimizing tree-based models based on KL divergence yields superior alignment between predicted scores and actual probabilities without significant performance loss. In real-world scenarios, the reference probability is determined a priori as a Beta distribution estimated through maximum likelihood. Conversely, minimizing traditional calibration metrics may lead to suboptimal results, characterized by notable performance declines and inferior KL values. Our findings reveal limitations in traditional calibration metrics, which could undermine the reliability of predictive models for critical decision-making.
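As a rough illustration of the kind of comparison discussed in the abstract, the sketch below fits a Beta distribution to (here, simulated) true probabilities by maximum likelihood and measures a discretised Kullback-Leibler divergence between a model’s score distribution and that reference. The data, the noise model and the binning are placeholders, not the paper’s experimental setup.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder "true" probabilities and noisy model scores; in practice these come from data and a model.
true_p = rng.beta(2.0, 5.0, size=5_000)
scores = np.clip(true_p + rng.normal(0.0, 0.08, size=5_000), 1e-6, 1 - 1e-6)

# Reference distribution: a Beta fitted to the true probabilities by maximum likelihood.
a_hat, b_hat, _, _ = stats.beta.fit(true_p, floc=0, fscale=1)

# Discretised KL divergence between the score histogram and the fitted Beta density.
bins = np.linspace(0, 1, 51)
p_scores, _ = np.histogram(scores, bins=bins, density=True)
mid = 0.5 * (bins[:-1] + bins[1:])
q_beta = stats.beta.pdf(mid, a_hat, b_hat)
width = bins[1] - bins[0]
mask = (p_scores > 0) & (q_beta > 0)
kl = np.sum(width * p_scores[mask] * np.log(p_scores[mask] / q_beta[mask]))
print(f"fitted Beta({a_hat:.2f}, {b_hat:.2f}); discretised KL(scores || Beta) = {kl:.4f}")
```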

Sequential Conditional Transport on Probabilistic Graphs for Interpretable Counterfactual Fairness

Our paper “Sequential Conditional Transport on Probabilistic Graphs for Interpretable Counterfactual Fairness”, written with Agathe Fernandes Machado and Ewen Gallic, is now online.

In this paper, we link two existing approaches to derive counterfactuals: adaptations based on a causal graph, as suggested in Plečko and Meinshausen (2020) and optimal transport, as in De Lara et al. (2024). We extend “Knothe’s rearrangement” Bonnotte (2013) and “triangular transport” Zech and Marzouk (2022) to probabilistic graphical models, and use this counterfactual approach, referred to as sequential transport, to discuss individual fairness. After establishing the theoretical foundations of the proposed method, we demonstrate its application through numerical experiments on both synthetic and real datasets.
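The sketch below is only a toy, two-variable illustration of the idea of transporting variables one at a time along a causal order (here s \to x_1 \to x_2), using empirical quantiles for the first variable and a crude linear-plus-residual stand-in for the conditional step; it is not the sequential transport algorithm developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Toy data along the causal order s -> x1 -> x2, with a binary sensitive attribute s.
s = rng.binomial(1, 0.5, n)
x1 = rng.normal(loc=np.where(s == 1, 1.0, 0.0), scale=1.0)
x2 = 0.5 * x1 + rng.normal(loc=np.where(s == 1, 0.5, 0.0), scale=1.0)

def quantile_map(x, source, target):
    """Map values x from the source sample's distribution to the target's, via empirical quantiles."""
    u = np.searchsorted(np.sort(source), x, side="right") / len(source)
    return np.quantile(target, np.clip(u, 0.0, 1.0))

# Step 1: transport x1 of group s=0 onto the distribution of x1 in group s=1.
x1_0, x1_1 = x1[s == 0], x1[s == 1]
x1_cf = quantile_map(x1_0, x1_0, x1_1)

# Step 2: transport x2 conditionally on x1, here via a linear conditional mean plus
# quantile-mapped residuals (a crude stand-in for conditional quantile transport).
b0 = np.polyfit(x1_0, x2[s == 0], 1)
b1 = np.polyfit(x1_1, x2[s == 1], 1)
resid_0 = x2[s == 0] - np.polyval(b0, x1_0)
resid_1 = x2[s == 1] - np.polyval(b1, x1_1)
x2_cf = np.polyval(b1, x1_cf) + quantile_map(resid_0, resid_0, resid_1)

print(f"x1: group-1 mean {x1_1.mean():.3f} vs counterfactual mean {x1_cf.mean():.3f}")
print(f"x2: group-1 mean {x2[s == 1].mean():.3f} vs counterfactual mean {x2_cf.mean():.3f}")
```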

Measuring and mitigating biases in motor insurance pricing

Our paper, Measuring and mitigating biases in motor insurance pricing, written with Mulah Moriah and Franck Vermet, has recently been published in the European Actuarial Journal.

The non-life insurance sector operates within a highly competitive and tightly regulated framework, confronting a pivotal juncture in the formulation of pricing strategies. Insurers are compelled to harness a range of statistical methodologies and available data to construct optimal pricing structures that align with the overarching corporate strategy while accommodating the dynamics of market competition. Given the fundamental societal role played by insurance, premium rates are subject to rigorous scrutiny by regulatory authorities. Consequently, the act of pricing transcends mere statistical calculations and carries the weight of strategic and societal factors. These multifaceted concerns may drive insurers to establish equitable premiums with respect to various variables. For instance, regulations mandate equitable premiums with respect to policyholder gender, and mutualist groups, in accordance with their corporate strategies, can implement age-based premium fairness. In certain insurance domains, the presence of serious illnesses or disabilities is emerging as a new dimension for evaluating fairness. Regardless of the factor prompting an insurer to adopt fairer pricing strategies for a specific variable, the insurer must be able to define, measure, and ultimately mitigate any fairness biases inherent in its pricing practices while upholding standards of consistency and performance. This study provides a comprehensive set of tools for these endeavors and assesses their effectiveness through practical application in the context of automobile insurance. Results show that fairness bias can be found in historical data and models, and that fairer outcomes can be obtained with more fairness-aware approaches.
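As a minimal example of the “measure” step, on entirely made-up data (the variables, the premium model and the magnitudes below are illustrative assumptions), one can compare the distribution of predicted premiums across levels of a protected attribute:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 20_000

# Made-up portfolio with a protected attribute and model-predicted pure premiums.
gender = rng.choice(["F", "M"], size=n)
risk = rng.gamma(2.0, 1.0, size=n)
premium = 300 + 80 * risk + np.where(gender == "M", 40, 0) + rng.normal(0, 20, n)
df = pd.DataFrame({"gender": gender, "premium": premium})

# A simple group-fairness diagnostic: mean predicted premium per group, and the gap.
group_means = df.groupby("gender")["premium"].mean()
print(group_means.round(2))
print("demographic-parity gap (M - F):", round(group_means["M"] - group_means["F"], 2))

# A distributional variant: the gap between group-wise premium quantiles.
q = np.linspace(0.1, 0.9, 9)
gap_q = (df.loc[df.gender == "M", "premium"].quantile(q).to_numpy()
         - df.loc[df.gender == "F", "premium"].quantile(q).to_numpy())
print("quantile gaps (M - F):", gap_q.round(1))
```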

Generalized Oversampling for Learning from Imbalanced Datasets and Associated Theory

Our paper, Generalized Oversampling for Learning from Imbalanced Datasets and Associated Theory: Application in Regression, written with Samuel Stocksieker and Denys Pommeret, has been accepted for publication in TMLR (Transactions on Machine Learning Research).

In supervised learning, it is quite frequent to be confronted with real imbalanced datasets. This situation leads to a learning difficulty for standard algorithms. Research and solutions in imbalanced learning have mainly focused on classification tasks. Despite its importance, very few solutions exist for imbalanced regression. In this paper, we propose a data augmentation procedure, the GOLIATH algorithm, based on kernel density estimates and especially dedicated to the problem of imbalanced data. This general approach encompasses two large families of synthetic oversampling: those based on perturbations, such as Gaussian Noise, and those based on interpolations, such as SMOTE. It also provides an explicit form of such machine learning algorithms. New synthetic data generators are deduced. We apply GOLIATH in imbalanced regression combining such generator procedures with a new wild-bootstrap resampling technique for the target values. We evaluate the performance of the GOLIATH algorithm in imbalanced regression where we compare our approach with state-of-the-art techniques.
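The sketch below is not GOLIATH itself, but it illustrates one member of the family it generalises: a smoothed-bootstrap (Gaussian-kernel perturbation) generator applied to the rare, large-target region of a regression dataset. The threshold, bandwidth rule and sample sizes are arbitrary choices made for the illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy imbalanced regression data: large values of the target y are rare.
n = 2_000
x = rng.uniform(0, 10, size=(n, 2))
y = np.exp(0.4 * x[:, 0]) + rng.normal(0, 1, n)

# Oversample the rare region (top 5% of targets) with Gaussian-kernel perturbations,
# i.e. a smoothed bootstrap on both covariates and target.
threshold = np.quantile(y, 0.95)
rare = np.where(y >= threshold)[0]
n_synth = 500
seeds = rng.choice(rare, size=n_synth, replace=True)

def bandwidth(v):
    """Silverman-type rule of thumb for a Gaussian kernel."""
    return 1.06 * v.std(ddof=1) * len(v) ** (-1 / 5)

hx = np.array([bandwidth(x[rare, j]) for j in range(x.shape[1])])
hy = bandwidth(y[rare])

x_synth = x[seeds] + rng.normal(0.0, hx, size=(n_synth, x.shape[1]))
y_synth = y[seeds] + rng.normal(0.0, hy, size=n_synth)

print("rare observations:", len(rare), "| synthetic rare observations:", len(y_synth))
```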


Can we diversify extremal events?

This post was originally written in French and translated below.

In a financial context, diversifying risks means investing in a variety of assets, sectors, or geographic regions to avoid having the poor performance of a single investment significantly affect the overall portfolio. Diversification allows for risk reduction, or, in its mathematical formulation, the reduction of variance. But what happens when we encounter large risks, infinite variance? Or worse, infinite expectation?

Extreme Risks and Infinite Expectation?

Formalizing quantities related to random and uncertain events is a complex exercise. Probabilities, in the sense the word is often understood, are defined as limits of frequencies observed through repeated events. The probability of rolling a 3 on a die is 1/6 because, by rolling a die a million times [1], a billion times, the observed frequency will be as close as desired to 1/6. This is what the law of large numbers states, in its weakest form. Saying that the probability it will rain today is 1/6 is entirely different, because it is a unique event. If I get drenched by a shower today, it will not prove that the probability was not 1/6, nor will it disprove the meteorological model. All of this is a reminder that, when modeling, we try to assign small probabilities to rare events, and that it is unfortunately very difficult to validate them.

When modeling large risks, very large risks, it is not uncommon to suggest that the risks have infinite variance or expectation. The notion of infinite expectation is both strange and probably counterintuitive [2]. If we consider a positive random variable X (for simplicity), and let S(x)=\mathbb{P}(X>x) be the survival function, and f(x) the density function (corresponding to the opposite of the derivative of S), we can show that the empirical mean of a million or a billion draws of this variable will approach a value, called the mathematical expectation:

\mathbb{E}(X) =\int_0^\infty S(x)dx= \int_0^\infty xf(x)dx

There is nothing surprising here; this is still the law of large numbers, stated as early as 1713 by Jacob Bernoulli (the “golden theorem” of Raper (2018)) and especially by Pierre-Simon Laplace in 1814. However, this integral must be finite, which is not guaranteed. For example, the Pareto distribution with index a satisfies S(x)=\mathbb{P}(X>x)=x^{-a}. As early as 1925, Karl-Gustaf Hagstroem noted that this distribution seemed particularly suited for modeling large risks, and thus in reinsurance [3]. For a variable following a Pareto distribution with index 1, its expectation is, mathematically, infinite.

What does this infinite expectation mean? There will be no “claim of infinite cost,” and it will always be possible to calculate an empirical average over n observations. However, this average will tend toward infinity as n increases. Louis Bachelier, in discussing the St. Petersburg paradox (a game with infinite expected gain), reminds us that “a paradoxical result in mathematical sciences necessarily stems from a flaw in our understanding, incapable of deciphering a too complex whole, unable to represent the infinitely large. Common sense cannot be invoked in delicate matters; it does not allow us to recognize whether the area between a curve and its asymptote is finite or not, whether a series is convergent or divergent.” This average will tend toward infinity as n increases, meaning that we can be sure the average will always exceed any value we can imagine. This can be visualized at the top of Figure 1 with 10 simulations of 100,000 values. On the left, the case where variance is infinite, and expectation is finite; on the right, both are infinite.

Figure 1: Evolution of the average n\mapsto (x_1+\cdots+x_n)/n for generated samples from a distribution with finite expectation (and infinite variance) on the left, and infinite expectation on the right.
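The behaviour in Figure 1 is easy to reproduce. Assuming Pareto samples drawn by inverse transform (with survival function x^{-a}), the following sketch tracks the running mean for a=1.5 (finite expectation, infinite variance) and a=1 (infinite expectation); the seed and sample size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(123)
n = 100_000

def pareto(a, size):
    """Pareto draws with survival function P(X > x) = x^(-a), x >= 1 (inverse-transform sampling)."""
    return rng.uniform(size=size) ** (-1.0 / a)

for a in (1.5, 1.0):                       # finite expectation / infinite expectation
    x = pareto(a, n)
    running_mean = np.cumsum(x) / np.arange(1, n + 1)
    print(f"a = {a}: running mean after 10^3, 10^4, 10^5 draws:",
          f"{running_mean[999]:.1f}, {running_mean[9_999]:.1f}, {running_mean[99_999]:.1f}")
```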

Another interesting quantity is the ratio of the maximum over n observations to the sum. For variables with infinite expectation, this ratio does not tend towards 0. If the variables x_i represent claim costs, it is quite possible that, out of 100,000 claims with infinite expectation, the largest claim alone represents more than 90% of the total burden.

Figure 2 : Evolution of the ratio n\mapsto \max\{x_1,\cdots,x_n\}/(x_1+\cdots+x_n) with a distribution of finite expectation (and infinite variance) on the left, and a distribution of infinite expectation on the right.
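The same kind of simulation reproduces the max-to-sum ratio of Figure 2: with a=1 the ratio stays far from zero even after 100,000 draws (again, the seed and sample size below are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(321)
n = 100_000

for a in (1.5, 1.0):                       # infinite variance / infinite expectation
    x = rng.uniform(size=n) ** (-1.0 / a)  # Pareto draws, survival function x^(-a)
    ratio = np.maximum.accumulate(x) / np.cumsum(x)
    print(f"a = {a}: max/sum after 10^3, 10^4, 10^5 draws:",
          f"{ratio[999]:.3f}, {ratio[9_999]:.3f}, {ratio[99_999]:.3f}")
```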

As we can see, this property is important, but it is difficult to identify because it is a fundamental property of the underlying model, related to the distribution of observations, since it is always possible to calculate the average. For example, the following sequence corresponds to eight values obtained by randomly drawing from a Pareto distribution with index 1 (and thus theoretically of infinite expectation):

1.657442, 4.138543, 15.592108, 1.429090, 1.684843, 1.186745, 1.341435, 3.308316

How can we tell if a set of claim costs follows a distribution of finite expectation or not? The classic approach, presented for example in Zajdenweber (1996, 2000), is to use the so-called Pareto plot, with the logarithm of costs on the x-axis, and the logarithm of the survival probability on the y-axis. If the points are aligned along a straight line with slope -a, then the Pareto distribution with parameter a is perfectly adapted. Indeed, if \mathbb{P}(X>x)=x^{-a}, then, taking the logarithm of both quantities, and ordering the sample (x_1\leq x_2\leq\cdots\leq x_n), we have
\log\left(\frac{n-i}{n}\right)=-a\cdot \log(x_i)

And if the slope is too moderate, greater than -1, then the costs have infinite expectation.

Figure 3: Pareto plot, with \log((n-i)/n) on the y-axis and \log(x_i) on the x-axis. The points are aligned along a line with slope -a, corresponding to a Pareto distribution with index a. a\leq 1 means that the risks have infinite expectation.
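A minimal version of such a Pareto plot, on simulated data, can be obtained by regressing the log survival probabilities on the log ordered costs; the least-squares slope then gives a rough estimate of -a (this is only a quick diagnostic, not a substitute for proper tail-index estimation).

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
x = np.sort(rng.uniform(size=n) ** (-1.0 / 1.0))   # simulated Pareto costs with index a = 1

# Pareto plot coordinates: log of ordered costs vs log of the empirical survival probability.
log_x = np.log(x[:-1])                             # drop the largest point, where (n - i)/n = 0
log_surv = np.log((n - np.arange(1, n)) / n)

# The least-squares slope estimates -a; a slope above -1 suggests infinite expectation.
slope, intercept = np.polyfit(log_x, log_surv, 1)
print(f"estimated Pareto index a is about {-slope:.2f}")
```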

This hypothesis of a Pareto index close to 1 is not unrealistic when we talk about natural or industrial disasters:

  • hurricanes, Hsieh (1999), a\sim1.5
  • company fires, Biffis et al. (2014), a\sim1.25
  • business interruption, Zajdenweber (1996), a\sim1
  • earthquakes, Sornette et al. (1996), a\sim1
  • tsunamis, Embrechts et al. (2024),  a\sim1
  • operational risk, Moscadelli (2004) and Chavez-Demoulin et al. (2006) a\sim1
  • cyber risk, Eling et al. (2019) a\sim1
  • nuclear risk, Hofert et al. (2012), a\in (0.6;0.7)

On the Diversification of Large Risks

Instead of working by risk type, we can consider the aggregation of these risks together. Heuristically, having portfolios with flood, earthquake, or drought risks could offer some “diversification.” The concept of “diversification” can be introduced with the law of large numbers, as previously mentioned, and it will be very close to the idea of insurance, of risk pooling. Smith & Kane (1994), for example, remind us that the contribution of an n+1-th independent risk in a group of n risks, fairly priced, generally allows for a marginal reduction in risk, which reinforces the insurer’s risk pooling. This diversification effect still works even if the risks are correlated (but not perfectly correlated, and the diversification gains decrease with correlation, as Charpentier (2011) pointed out).

Often, when we talk about “diversification,” we think of the work of Harry Markowitz or Arthur Roy in finance in the 1950s, which laid the foundation for portfolio theory. This theory shows how rational investors can use diversification, corresponding to the correlation between assets, to optimize their financial portfolio. In this approach, it is generally assumed that investors’ preference for a risk/return trade-off can be described by a quadratic utility function. In other words, only the expected return (the expected gain) and the volatility (the standard deviation) or variance are the parameters considered by the investor. This literature shows that an investor can reduce the risk of their portfolio simply by holding assets that are not (or only slightly) correlated, thus diversifying their investments. They can then achieve the same expected return while reducing the variability of their portfolio.

But what happens if the variance no longer exists? This question challenges the use of the normal distribution to model financial returns. The normal distribution was interesting partly because it satisfies a property of stability by summation[4]. Keeping this property while considering a distribution with more extremes than the normal distribution amounts to using “stable” distributions studied by Paul Lévy, as proposed[5] by Benoit Mandelbrot in the 1960s.

In cases where the variance is infinite, it is necessary to use a more general risk measure than the standard deviation, and heuristically, “diversification” is related to the sub-additivity of the risk measure: a portfolio containing the average of the holdings of two other portfolios has a lower risk than the average of the risks of the two other portfolios. Daníelsson et al. (2013) remind us that in the presence of large risks (infinite expectation), diversification no longer works. This property was described and discussed by Paul Samuelson as early as 1967, Stephen Ross in 1976, and more recently by Rustam Ibragimov, Dwight Jaffee, Johan Walden, Paul Embrechts, and Ruodu Wang, among others. The introduction of Ibragimov et al. (2015) explains it well: “there are limitations to diversification with such risk distributions [heavy-tailed distribution]. Specifically, whereas diversification is preferred by risk-averse agents when risks are thin-tailed (the traditional case that has been extensively studied), it may actually be hurtful for agents to diversify when risks are heavy-tailed […] nondiversification traps may arise when risk distributions have heavy left tails and insurance providers have limited liability.” These properties, widely discussed from a mathematical perspective, are difficult to accept because they are theoretical and counter-intuitive. Moreover, it is often difficult to determine for whom diversification becomes dangerous, since there are several stakeholders: the insured, insurers, reinsurers, and the state. Ibragimov et al. (2011) provide some answers: “when these risks are thin-tailed, risk-sharing is always optimal for both individual intermediaries and society. But, with moderately heavy-tailed risks, risk-sharing may be suboptimal for society, although individual intermediaries still benefit from it […] and it is well-known that diversification may be suboptimal in the extremely heavy-tailed case.”
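The failure of sub-additivity can be checked numerically. In the purely illustrative simulation below (Value-at-Risk is used as the risk measure, sample sizes and the confidence level are arbitrary), for Pareto risks with index 0.8 (infinite expectation) the 99% Value-at-Risk of the sum of two independent risks exceeds the sum of the individual Values-at-Risk, whereas it does not for index 2.

```python
import numpy as np

rng = np.random.default_rng(2024)
n, q = 1_000_000, 0.99

for a in (2.0, 0.8):                                # finite vs infinite expectation
    x = rng.uniform(size=n) ** (-1.0 / a)           # two independent Pareto(a) risks
    y = rng.uniform(size=n) ** (-1.0 / a)
    var_x = np.quantile(x, q)
    var_sum = np.quantile(x + y, q)
    print(f"a = {a}: VaR99(X) + VaR99(Y) = {2 * var_x:10.1f}   VaR99(X + Y) = {var_sum:10.1f}")
```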

Over the past twenty years, there have been many examples where diversification does not work, and practitioners are aware of them. Fabozzi et al. (2014), discussing the financial crisis, remind us that “the financial crisis has clearly shown that when you need diversification most, it may not work.” When considering risks related to climate disasters, we see that these risks are extreme, and potentially uninsurable because their expectation may be infinite. Uninsurability mainly means that a market mechanism does not make sense without state intervention. One might also think that it could be interesting to diversify risks by offering multi-peril coverage (as proposed by the current cat-nat mechanism), or by considering geographical diversification, for example at the European level, as recently suggested by Carlo Cimbri, Thierry Derez, and Philippe Lallemand. But the scientific literature reminds us that this diversification is dangerous, and in any case unimaginable without strong and clear state intervention.

References

Biffis, E., & Chavez, E. (2014). Tail risk in commercial property insurance. Risks, 2(4), 393–410.

Charpentier, A. (2011). La loi des grands nombres et le théorème central limite comme base de l’assurabilité ? Risques, 86.

Chavez-Demoulin, V., Embrechts, P., & Nešlehová, J. (2006). Quantitative models for operational risk: extremes, dependence and aggregation. Journal of Banking & Finance, 30(10), 2635-2658.

Chen, Y., Embrechts, P., & Wang, R. (2024). An unexpected stochastic dominance: Pareto distributions, dependence, and diversification. Operations Research.

Cimbri, C., Derez, T. & Lallemand, P. (2024). Mutualisons l’assurance pour offrir aux Européens une protection à la hauteur des risques actuels ! La Tribune, 23 mai.

Daníelsson, J., Jorgensen, B. N., Samorodnitsky, G., Sarma, M., & de Vries, C. G. (2013). Fat tails, VaR and subadditivity. Journal of econometrics, 172(2), 283-291

Eling, M., & Wirfs, J. (2019). What are the actual costs of cyber risk events? European Journal of Operational Research, 272(3), 1109–1119.

Embrechts, P., Hofert, M., & Chavez-Demoulin, V. (2024). Risk Revealed: Cautionary Tales, Understanding and Communication. Cambridge University Press.

Fabozzi, F. J., Focardi, S. M., & Jonas, C. (2014). Investment Management: A Science to Teach or an Art to Learn? CFA Institute Research Foundation.

Fama, E. F. (1965). Portfolio analysis in a stable Paretian market. Management science, 11(3), 404-419.

Hagstroem, K.-G. (1925). Pareto and reinsurance. Scandinavian Actuarial Journal, 216–248

Hofert, M., & Wüthrich, M. V. (2012). Statistical review of nuclear power accidents. Asia-Pacific Journal of Risk and Insurance, 7(1).

Hsieh, P.-H. (1999). Robustness of tail index estimation. Journal of Computational and Graphical Statistics, 8(2), 318–332.

Ibragimov, R., & Walden, J. (2007). The limits of diversification when losses may be large. Journal of banking & finance, 31(8), 2551-2569.

Ibragimov, R., Jaffee, D., & Walden, J. (2011). Diversification disasters. Journal of financial economics, 99(2), 333-348.

Ibragimov, M., Ibragimov, R., & Walden, J. (2015). Heavy-tailed distributions and robustness in economics and finance (Vol. 214). Springer.

Lévy, Paul (1925). Calcul des probabilités. Paris: Gauthier-Villars.

Mandelbrot, B. (1960). The Pareto–Lévy Law and the Distribution of Income. International Economic Review. 1 (2): 79–106.

Markowitz, H. (1952). Portfolio Selection, Journal of Finance, 7 (1), 77-91.

Markowitz, H. (1971). Portfolio selection : efficient diversification of investments. Yale University Press.

Moscadelli, M. (2004). The modelling of operational risk: experience with the analysis of the data collected by the Basel committee. Technical Report 517, Banca d’Italia

Raper, S. (2018). Turning points: Bernoulli’s golden theorem. Significance, 15(4), 26-29.

Ross, S. A. (1976). A note on a paradox in portfolio theory. Unpublished Mimeo, University of Pennsylvania.

Roy, A. D. (1952). Safety first and the holding of assets. Econometrica, 431-449.

Samuelson, P. A. (1967). Efficient portfolio selection for Pareto-Lévy investments. Journal of financial and quantitative analysis, 2(2), 107-122.

Sornette, D., Knopoff, L., Kagan, Y. Y., & Vanneste, C. (1996). Rank‐ordering statistics of extreme events: Application to the distribution of large earthquakes. Journal of Geophysical Research: Solid Earth, 101(B6), 13883-13893.

Smith, M. L., & Kane, S. A. (1994). The law of large numbers and the strength of insurance. In Insurance, Risk Management, and Public Policy: Essays in Memory of Robert I. Mehr (pp. 1-27). Dordrecht: Springer Netherlands.

Zajdenweber, D. (1996). Extreme values in business interruption insurance. Journal of Risk and Insurance, 95-110.

Zajdenweber, D. (2000). Économie des extrêmes. Flammarion.

1. The case of dice is somewhat peculiar because the geometry of the cube, particularly its regularity (we refer to it as a regular hexahedron with six faces), allows us to infer the probability without any experimentation

2. The theoretical literature on probabilities is largely built on the idea of finite expectation variables, and it is very hard to do without them (making any reasoning “on average” impossible)

3. It was not until the 1970s, with the work of Guus Balkema and Laurens de Haan, that we had a mathematical proof of this result. The Dutch school of statistics made significant advances in the analysis of extreme events following the 1953 North Sea flood, which had major and disastrous consequences in the Netherlands, as recalled by Embrechts et al. (2024)

4. The sum (or average) of independent normal variables also follows a normal distribution

5. He calls these laws Pareto-Lévy to emphasize the shape of the distribution tails, corresponding to Pareto-type laws, on extreme losses (on the left) and extreme gains (on the right)


Croissance, décroissance, de quoi parle-t-on ?

This short post was co-written with Ewen Gallic, originally in French.

“Fin du monde, fin du mois, même combat” (end of the world, end of the month, same fight) can regularly be read on signs and banners at various demonstrations, and it was also the title of the inaugural lecture at the Collège de France given by the economist Christian Gollier, reminding us that climate change and the economy face each other in a battle that promises to be bloody. “Growth” seems to be a key element in this battle, but the fight will probably remain futile as long as this term is not clearly discussed, allowing us to leave trenches that are often dogmatic.
Continue reading Croissance, décroissance, de quoi parle-t-on ?

Oaxaca-Blinder decomposition of changes in means and inequality

Our paper, Oaxaca-Blinder decomposition of changes in means and inequality: A simultaneous approach, written with Emmanuel Flachaire, was just published in the Economics Bulletin.

In this paper, we show that a decomposition of changes in inequality, with the mean log deviation index, can be obtained directly from the Oaxaca-Blinder decompositions of changes in means of incomes and log-incomes. It allows practitioners to simultaneously conduct empirical analyses to explain which factors account for changes in means and in inequality indices between two distributions with strictly positive values.
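For readers less familiar with the baseline tool, here is a standard two-fold Oaxaca-Blinder decomposition of a difference in means on simulated data, splitting the gap into an “explained” (composition) part and an “unexplained” (coefficients) part; the simultaneous mean-and-inequality decomposition of the paper is not implemented here, and all numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5_000

# Two groups with different covariate distributions and different returns to the covariate.
x_a = rng.normal(12.5, 2.0, n)                      # e.g. years of schooling, group A
x_b = rng.normal(11.5, 2.0, n)
y_a = 1.0 + 0.08 * x_a + rng.normal(0, 0.3, n)      # log-income, group A
y_b = 0.9 + 0.06 * x_b + rng.normal(0, 0.3, n)

def ols(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]     # (intercept, slope)

beta_a, beta_b = ols(x_a, y_a), ols(x_b, y_b)
xbar_a = np.array([1.0, x_a.mean()])
xbar_b = np.array([1.0, x_b.mean()])

gap = y_a.mean() - y_b.mean()
explained = (xbar_a - xbar_b) @ beta_b              # composition (endowment) effect
unexplained = xbar_a @ (beta_a - beta_b)            # coefficient ("unexplained") effect
print(f"gap = {gap:.4f}  =  explained {explained:.4f}  +  unexplained {unexplained:.4f}")
```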

Note that the Oaxaca-Blinder decomposition actually originated in the work of Evelyn Kitagawa in the 1950s, to quantify gender discrimination in labour economics.

Kitagawa, E. M. (1955). Components of a difference between two rates. Journal of the American Statistical Association, 50 (272), 1168–1194.

Geospatial Disparities: A Case Study on Real Estate Prices in Paris

Our paper, Geospatial Disparities: A Case Study on Real Estate Prices in Paris, written with Agathe Fernandes Machado, François Hu, Philipp Ratz and Ewen Gallic, is now online on arXiv.

Driven by an increasing prevalence of trackers, ever more IoT sensors, and the declining cost of computing power, geospatial information has come to play a pivotal role in contemporary predictive models. While enhancing prognostic performance, geospatial data also has the potential to perpetuate many historical socio-economic patterns, raising concerns about a resurgence of biases and exclusionary practices, with their disproportionate impacts on society. Addressing this, our paper emphasizes the crucial need to identify and rectify such biases and calibration errors in predictive models, particularly as algorithms become more intricate and less interpretable. The increasing granularity of geospatial information further introduces ethical concerns, as choosing different geographical scales may exacerbate disparities akin to redlining and exclusionary zoning. To address these issues, we propose a toolkit for identifying and mitigating biases arising from geospatial data. Extending classical fairness definitions, we incorporate an ordinal regression case with spatial attributes, deviating from the binary classification focus. This extension allows us to gauge disparities stemming from data aggregation levels and advocates for a less interfering correction approach. Illustrating our methodology using a Parisian real estate dataset, we showcase practical applications and scrutinize the implications of choosing geographical aggregation levels for fairness and calibration measures.
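A toy illustration of why the aggregation level matters (not the paper’s toolkit, and with entirely made-up spatial units and prices): the same model error, summarised at a coarse versus a fine spatial scale, can suggest very different levels of spatial disparity.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(11)
n = 20_000

# Made-up data: a fine spatial unit (IRIS-like cell), a coarse one (district-like),
# observed prices per square metre, and model predictions with a spatially structured error.
fine = rng.integers(0, 400, n)
coarse = fine // 20                                  # 20 fine units per coarse unit
price = 8_000 + 15 * fine + rng.normal(0, 800, n)
predicted = price + 400 * np.sin(fine / 30) + rng.normal(0, 500, n)

df = pd.DataFrame({"fine": fine, "coarse": coarse, "y": price, "yhat": predicted})

# Unit-level bias at each aggregation level, then the dispersion of those biases:
# a larger spread of unit-level biases indicates stronger spatial disparities.
for level in ["coarse", "fine"]:
    unit_bias = (df["yhat"] - df["y"]).groupby(df[level]).mean()
    print(f"{level:>6} units: mean |bias| = {unit_bias.abs().mean():6.1f}, "
          f"spread (std) of unit bias = {unit_bias.std():6.1f}")
```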