
Some updates about the insurance datasets package (CASdataset)

Ten years ago, Computational Actuarial Science with R was published. With Christophe Dutang, we created an R package at the same time, collecting the datasets used in the book. It was mainly meant to give access to the datasets, so that readers could reproduce the applications, since the functions used in the different chapters came from other R packages. Then, we started adding more and more datasets, not used in the book, but that could be used by researchers and students. We are quite happy to see that those datasets are now considered a benchmark in the actuarial and insurance literature (and also outside the community, actually).

The maintenance was a bit complicated since the package could not be hosted on CRAN (the Comprehensive R Archive Network), so it lived either on Christophe’s GitHub repository or on a dedicated website at UQAM. Christophe’s repository

https://dutangc.github.io/CASdatasets/

is under construction (or undergoing a major refresh, with Ewen Gallic), and several vignettes will be added. We encourage colleagues and students who have used datasets from the package to share their code, since we can now host such applications. The package is also available from the following repository,

https://entrepot.recherche.data.gouv.fr/

Hence, the dataset now has an official DOI, doi:10.57745/P0KHAG, which makes it easier to cite, and the following BibTeX entry can be used:

@data{P0KHAG_2024,
author = {Dutang, Christophe and Charpentier, Arthur},
publisher = {Recherche Data Gouv},
title = {{Insurance dataset}},
year = {2024},
version = {V1},
doi = {10.57745/P0KHAG},
url = {https://doi.org/10.57745/P0KHAG}
}
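For completeness, here is a minimal R sketch of accessing the package. The installation call is an assumption (it supposes the package can be installed directly from Christophe’s GitHub repository), and freMTPL2freq is one of the motor third-party liability datasets bundled with the package.

# install.packages("remotes")
# remotes::install_github("dutangc/CASdatasets")   # assumed installation route, adapt if needed
library(CASdatasets)
data("freMTPL2freq")     # French motor third-party liability claim frequencies
str(freMTPL2freq)        # quick look at the available variables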

Talk at the 27th International Congress on Insurance: Mathematics and Economics

On Wednesday morning, I will be chairing our session “Discrimination-free Insurance Pricing” at the Insurance: Mathematics and Economics conference, in Chicago. With Olivier Côté, Lydia Gabric and Hong Beng Lim, we will be four speakers, just before lunchtime. My talk will be a mix of recent work on quantifying and mitigating discrimination in scores (in insurance). Slides are available online.

 

Talk on collaborative insurance, unfairness and discrimination

On Monday, I will be giving a short course at the workshop on Decentralized Insurance and Risk Sharing (SAC 161), in Chicago:

  • Decentralized Finance and Blockchain: Implications for the Insurance Industry, by Marco Mirabella
  • Decentralized risk sharing: definitions, properties, and characterizations, by Jan Dhaene
  • Collaborative insurance, unfairness, and discrimination, by Arthur Charpentier
  • Decentralized insurance: bridging the gap between industry practice and academic theory, by Runhuan Feng

My slides are available online.

In this course, we will get back to mathematical properties of risk sharing on networks, with reciprocal contracts. We will discuss conditions related to stochastic dominance, proving that policyholders might have an interest in sharing risks with “friends”.
Then, we will try to address fairness issues for such risk-sharing mechanisms. While fairness has recently been intensively studied, either through group or individual fairness, there is not yet much literature about fairness on networks. It is important to address those issues, since perceived discrimination is usually associated with networks. We will see why the topology of the network is important, both to design peer-to-peer schemes to share risks, and to see whether perceived discrimination is associated with global disparate treatment.
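As a complement, here is a toy numerical sketch in R of reciprocal risk sharing on a random network (not the construction discussed in the course): each pair of connected policyholders signs a reciprocal contract, every policyholder keeping an equal share of their own loss and ceding the remaining shares to their friends. The network model, the loss distribution and the sharing rule are illustrative assumptions only.

set.seed(1)
n <- 200                                   # number of policyholders
A <- matrix(rbinom(n * n, 1, 0.05), n, n)  # random friendship network (Erdos-Renyi style)
A[lower.tri(A, diag = TRUE)] <- 0
A <- A + t(A)                              # symmetric adjacency matrix, no self-loops
X <- rexp(n, rate = 1 / 100)               # individual losses (illustrative distribution)
deg <- rowSums(A)                          # number of friends of each policyholder
share <- X / (deg + 1)                     # each node keeps one share, cedes one to each friend
retained <- share + as.vector(A %*% share) # kept share plus shares received from friends
c(sd(X), sd(retained))                     # dispersion of retained losses is (typically) reduced

The total loss is conserved (shares ceded are exactly the shares received), but the dispersion of individual retained losses shrinks, which is the heuristic benefit of sharing risks with “friends”.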

Insurance, Biases, Discrimination and Fairness

Insurance, Biases, Discrimination and Fairness was published a few weeks ago. I still plan to spend some time this summer on the R package, including data and some functions…

This book offers an introduction to the technical foundations of discrimination and equity issues in insurance models, catering to undergraduates, postgraduates, and practitioners. It is a self-contained resource, accessible to those with a basic understanding of probability and statistics. Designed as both a reference guide and a means to develop fairer models, the book acknowledges the complexity and ambiguity surrounding the question of discrimination in insurance. In insurance, proposing differentiated premiums that accurately reflect policyholders’ true risk—termed “actuarial fairness” or “legitimate discrimination”—is economically and ethically motivated. However, such segmentation can appear discriminatory from a legal perspective. By intertwining real-life examples with academic models, the book incorporates diverse perspectives from philosophy, social sciences, economics, mathematics, and computer science. Although discrimination has long been a subject of inquiry in economics and philosophy, it has gained renewed prominence in the context of “big data,” with an abundance of proxy variables capturing sensitive attributes, and “artificial intelligence” or specifically “machine learning” techniques, which often involve less interpretable black box algorithms.

The book distinguishes between models and data to enhance our comprehension of why a model may appear unfair. It reminds us that while a model may not be inherently good or bad, it is never neutral and often represents a formalization of a world seen through potentially biased data. Furthermore, the book equips actuaries with technical tools to quantify and mitigate potential discrimination, featuring dedicated chapters that delve into these methods.

Samuel has arrived in Yokohama

After defending his PhD last week, Samuel has just arrived in Yokohama, for the International Joint Conference on Neural Networks (IJCNN’24), which takes place within the IEEE World Congress on Computational Intelligence (WCCI).

He will present our recent work on Boarding for ISS: Imbalanced Self-Supervised Discovery of a Scaled Autoencoder for Mixed Tabular Datasets,

The field of imbalanced self-supervised learning, especially in the context of tabular data, has not been extensively studied. Existing research has predominantly focused on image datasets. This paper aims to fill this gap by examining the specific challenges posed by data imbalance in self-supervised learning in the domain of tabular data, with a primary focus on autoencoders. Autoencoders are widely employed for learning and constructing a new representation of a dataset, particularly for dimensionality reduction. They are also often used for generative model learning, as seen in variational autoencoders. When dealing with mixed tabular data, qualitative variables are often encoded using a one-hot encoder with a standard loss function (MSE or Cross Entropy). In this paper, we analyze the drawbacks of this approach, especially when categorical variables are imbalanced. We propose a novel metric to balance learning: a Multi-Supervised Balanced MSE. This approach reduces the reconstruction error by balancing the influence of variables. Finally, we empirically demonstrate that this new metric, compared to the standard MSE: i) outperforms when the dataset is imbalanced, especially when the learning process is insufficient, and ii) provides similar results in the opposite case.
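To fix ideas, here is a toy R illustration of the underlying issue, using a generic inverse-frequency reweighting of the reconstruction error (not the Multi-Supervised Balanced MSE of the paper): with a standard MSE, a rare category of a one-hot encoded variable contributes almost nothing to the loss, while a frequency-based weighting rebalances its influence.

set.seed(42)
x <- factor(sample(c("A", "B", "C"), 1000, replace = TRUE,
                   prob = c(0.90, 0.08, 0.02)))             # imbalanced categorical variable
X <- model.matrix(~ x - 1)                                  # one-hot encoding
Xhat <- matrix(colMeans(X), nrow(X), ncol(X), byrow = TRUE) # naive constant "reconstruction"
mse_standard <- mean((X - Xhat)^2)                          # standard MSE, dominated by frequent categories
w <- 1 / (ncol(X) * colMeans(X))                            # inverse-frequency weights
mse_balanced <- mean(sweep((X - Xhat)^2, 2, w, `*`))        # reweighted (balanced) reconstruction error
c(standard = mse_standard, balanced = mse_balanced)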

Contribution of machine learning in modeling rare values and imbalanced data

This morning (Montréal time), Samuel Stocksieker defended his PhD thesis entitled “Contribution of machine learning in modeling rare values and imbalanced data”. Cécile Capponi, Marianne Clausel, Julie Josse, Frédéric Planchet, Anne Sabourin, Christian-Yann Robert and Stéphane Loisel were on the jury.

The work is structured around two major axes: Imbalanced Features and Imbalanced Regression. The first axis addresses the issue of feature imbalance, that is, imbalance in the attributes rather than in the variable to be explained. The first solution involves adjusting the distribution of a continuous covariate relative to a given target distribution, combining weighted resampling and synthetic data generators. This strategy notably makes it possible to deal with selection bias, when the distribution of the covariate in the training sample is significantly different from that of the population. A second solution is proposed in the context of multi-supervised learning, particularly with autoencoders. It relies on a new metric aimed at balancing the influence of variables during learning, applicable not only to supervised and unsupervised models, but also to generative models such as variational autoencoders. The second axis deals with regression from imbalanced data. Various preprocessing solutions, including synthetic data generation, are proposed. First, the initial data space is explored, with new generators and methodologies designed for the specific case of regression. The data are then immersed in a latent space, providing a framework more conducive to synthetic data generation.
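As an illustration of the first solution, here is a minimal R sketch of weighted resampling of a continuous covariate towards a target distribution; the training and target distributions below are purely hypothetical, and only serve to show the mechanism.

set.seed(1)
x_train <- rnorm(2000, mean = 0, sd = 1)            # covariate in the (biased) training sample
f_train <- approxfun(density(x_train))               # estimated density of the training covariate
f_target <- function(x) dnorm(x, mean = 1, sd = 1)   # assumed target (population) density
w <- f_target(x_train) / f_train(x_train)            # importance weights
idx <- sample(seq_along(x_train), 2000, replace = TRUE, prob = w)
x_resampled <- x_train[idx]                          # resampled covariate, closer to the target
c(mean(x_train), mean(x_resampled))                  # the resampled mean shifts towards the target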

Generalized Oversampling for Learning from Imbalanced datasets and Associated Theory

Our paper, Generalized Oversampling for Learning from Imbalanced datasets and Associated Theory: Application in Regression, written with Samuel Stocksieker and Denys Pommeret, has been accepted for publication in TMLR (Transactions on Machine Learning Research)

In supervised learning, it is quite frequent to be confronted with real imbalanced datasets. This situation leads to a learning difficulty for standard algorithms. Research and solutions in imbalanced learning have mainly focused on classification tasks. Despite its importance, very few solutions exist for imbalanced regression. In this paper, we propose a data augmentation procedure, the GOLIATH algorithm, based on kernel density estimates and especially dedicated to the problem of imbalanced data. This general approach encompasses two large families of synthetic oversampling: those based on perturbations, such as Gaussian Noise, and those based on interpolations, such as SMOTE. It also provides an explicit form of such machine learning algorithms. New synthetic data generators are deduced. We apply GOLIATH in imbalanced regression combining such generator procedures with a new wild-bootstrap resampling technique for the target values. We evaluate the performance of the GOLIATH algorithm in imbalanced regression where we compare our approach with state-of-the-art techniques.
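To make the idea concrete, here is a generic perturbation-based generator in R (a smoothed bootstrap with Gaussian noise, in the spirit of one of the families that GOLIATH encompasses, not the GOLIATH algorithm itself); the data-generating process is made up for the example.

set.seed(1)
x <- runif(500, 0, 10)
y <- exp(0.4 * x) + rnorm(500)                     # right-skewed target: large values of y are rare
rare <- y > quantile(y, 0.9)                       # oversample the rare (large) targets
k <- 5 * sum(rare)                                 # number of synthetic observations
id <- sample(which(rare), k, replace = TRUE)
x_syn <- x[id] + rnorm(k, sd = bw.nrd0(x[rare]))   # Gaussian perturbation with a KDE bandwidth
y_syn <- y[id] + rnorm(k, sd = bw.nrd0(y[rare]))
augmented <- data.frame(x = c(x, x_syn), y = c(y, y_syn))   # original plus synthetic observations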

 

Can we diversify extremal events?

This post was originally written in French and translated below.

In a financial context, diversifying risks means investing in a variety of assets, sectors, or geographic regions to avoid having the poor performance of a single investment significantly affect the overall portfolio. Diversification allows for risk reduction, or, in its mathematical formulation, the reduction of variance. But what happens when we encounter large risks, infinite variance? Or worse, infinite expectation?

Extreme Risks and Infinite Expectation?

Formalizing quantities related to random and uncertain events is a complex exercise. Probabilities, in the sense the word is often understood, are defined as the limits of frequencies observed through repeated events. The probability of rolling a 3 on a die is 1/6 because, by rolling a die a million times [1], or a billion times, the observed frequency will be as close as desired to 1/6. This is what the law of large numbers states, in its weakest form. Saying that the probability it will rain today is 1/6 is entirely different, because it is a unique event. If I get drenched by a shower today, it will not prove that the probability was not 1/6, nor will it disprove the meteorological model. This is just a reminder that, when modeling, we try to assign small probabilities to rare events, and it is unfortunately very difficult to validate them.
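This frequentist statement is easy to illustrate numerically; a short R simulation (the seed and sample size are arbitrary):

set.seed(6)
rolls <- sample(1:6, 1e6, replace = TRUE)   # one million rolls of a fair die
mean(rolls == 3)                            # observed frequency, close to 1/6 (about 0.1667)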

When modeling large risks, very large risks, it is not uncommon to suggest that the risks have infinite variance or expectation. The notion of infinite expectation is both strange and probably counterintuitive [2]. If we consider a positive random variable X (for simplicity), and let S(x)=\mathbb{P}(X>x) be the survival function and f(x) the density function (the negative of the derivative of S), we can show that the empirical mean of a million or a billion draws of this variable will approach a value, called the mathematical expectation:

\mathbb{E}(X)=\int_0^\infty S(x)\,dx=\int_0^\infty x f(x)\,dx

There is nothing surprising here; this is still the law of large numbers, stated as early as 1713 by Jacob Bernoulli (the “golden theorem” of Raper (2018)) and especially by Pierre-Simon Laplace in 1814. However, this integral must be finite, which is not guaranteed. For example, the Pareto distribution with index a satisfies S(x)=\mathbb{P}(X>x)=x^{-a}, for x\geq 1. As early as 1925, Karl-Gustaf Hagstroem noted that this distribution seemed particularly suited for modeling large risks, and thus for reinsurance [3]. For a variable following a Pareto distribution with index 1, the expectation is, mathematically, infinite.

What does this infinite expectation mean? There will be no “claim of infinite cost,” and it will always be possible to calculate an empirical average over n observations. However, this average will tend toward infinity as n increases. Louis Bachelier, discussing the St. Petersburg paradox (a game with infinite expected gain), reminds us that “a paradoxical result in mathematical sciences necessarily stems from a flaw in our understanding, incapable of deciphering a too complex whole, unable to represent the infinitely large. Common sense cannot be invoked in delicate matters; it does not allow us to recognize whether the area between a curve and its asymptote is finite or not, whether a series is convergent or divergent.” In other words, as n increases, we can be sure that the average will sooner or later exceed any value we can imagine. This can be visualized at the top of Figure 1, with 10 simulations of 100,000 values. On the left, the case where the variance is infinite and the expectation is finite; on the right, both are infinite.

Figure 1: Evolution of the average n\mapsto (x_1+\cdots+x_n)/n for generated samples from a distribution with finite expected value (and infinite variance) on the left, and infinite expected value on the right.
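The experiment of Figure 1 can be sketched in a few lines of R, simulating Pareto variables by inverse transform from S(x)=x^{-a}, x\geq 1 (the sample size and the two indices below are illustrative):

set.seed(123)
n <- 1e5
rpareto <- function(n, a) runif(n)^(-1 / a)     # inverse transform: S(x) = x^(-a), x >= 1
par(mfrow = c(1, 2))
plot(cumsum(rpareto(n, a = 1.5)) / seq_len(n), type = "l", log = "x",
     xlab = "n", ylab = "running mean", main = "a = 1.5 (finite expectation)")
plot(cumsum(rpareto(n, a = 1)) / seq_len(n), type = "l", log = "x",
     xlab = "n", ylab = "running mean", main = "a = 1 (infinite expectation)")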

Another interesting measure is the ratio of the maximum of n observations to their sum. For variables with infinite expectation, this ratio does not tend towards 0. If the x variables represent claim costs, it is possible that, out of 100,000 claims drawn from a distribution with infinite expectation, the largest claim alone represents more than 90% of the total burden.

Figure 2: Evolution of the ratio n\mapsto \max\{x_1,\cdots,x_n\}/(x_1+\cdots+x_n) with a distribution of finite expectation (and infinite variance) on the left, and a distribution of infinite expectation on the right.
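The same simulation scheme gives the ratio of Figure 2 (again a sketch, with illustrative parameters; a = 0.8 is used for the infinite-expectation panel):

set.seed(123)
n <- 1e5
rpareto <- function(n, a) runif(n)^(-1 / a)     # S(x) = x^(-a), x >= 1
ratio <- function(x) cummax(x) / cumsum(x)      # running maximum over running sum
par(mfrow = c(1, 2))
plot(ratio(rpareto(n, a = 1.5)), type = "l", log = "x",
     xlab = "n", ylab = "max / sum", main = "a = 1.5: the ratio vanishes")
plot(ratio(rpareto(n, a = 0.8)), type = "l", log = "x",
     xlab = "n", ylab = "max / sum", main = "a = 0.8: the ratio does not vanish")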

As we can see, this property is important, but it is difficult to identify because it is a fundamental property of the underlying model, related to the distribution of observations, since it is always possible to calculate the average. For example, the following sequence corresponds to eight values obtained by randomly drawing from a Pareto distribution with index 1 (and thus theoretically of infinite expectation):

1.657442, 4.138543, 15.592108, 1.429090, 1.684843, 1.186745, 1.341435, 3.308316

How can we tell if a set of claim costs follows a distribution of finite expectation or not? The classic approach, presented for example in Zajdenweber (1996, 2000), is to use the so-called Pareto plot, with the logarithm of costs on the x-axis, and the logarithm of the survival probability on the y-axis. If the points are aligned along a straight line with slope -a, then the Pareto distribution with parameter a is perfectly adapted. Indeed, if \mathbb{P}(X>x)=x^{-a}, then, taking the logarithm of both quantities, and ordering the sample (x_1\leq x_2\leq\cdots\leq x_n), we have
\log\left(\frac{n-i}{n}\right)=-a\cdot \log(x_i)

And if the slope is too moderate, greater than -1 (that is, smaller than 1 in absolute value), then the costs have infinite expectation.

Figure 3: Pareto plot, with \log((n-i)/n) on the y-axis and \log(x_i) on the x-axis. The points are aligned along a line with slope -a, corresponding to a Pareto distribution with index a. A slope with a\leq 1 means that the risks have infinite expectation.
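A short R sketch of such a Pareto plot, on simulated data with index a = 1 (on real claim data, x would simply be the vector of observed costs):

set.seed(123)
x <- sort(runif(1000)^(-1))                  # simulated Pareto sample with index a = 1
n <- length(x)
surv <- (n - seq_len(n)) / n                 # empirical survival probabilities (n - i)/n
plot(log(x[-n]), log(surv[-n]),              # drop the last point, where log(0) = -Inf
     xlab = "log(x_i)", ylab = "log((n - i)/n)")
fit <- lm(log(surv[-n]) ~ log(x[-n]))        # fitted slope is close to -a, here about -1
abline(fit, col = "red")
coef(fit)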

This hypothesis of a Pareto index close to 1 is not unrealistic when we talk about natural or industrial disasters:

  • hurricanes, Hsieh (1999), a\sim1.5
  • company fires, Biffis et al. (2014), a\sim1.25
  • business interruption, Zajdenweber (1996), a\sim1
  • earthquakes, Sornette et al. (1996), a\sim1
  • tsunamis, Embrechts et al. (2024), a\sim1
  • operational risk, Moscadelli (2004) and Chavez-Demoulin et al. (2006), a\sim1
  • cyber risk, Eling et al. (2019), a\sim1
  • nuclear risk, Hofert et al. (2012), a\in (0.6;0.7)

On the Diversification of Large Risks

Instead of working by risk type, we can consider the aggregation of these risks together. Heuristically, having portfolios with flood, earthquake, or drought risks could offer some “diversification.” The concept of “diversification” can be introduced with the law of large numbers, as previously mentioned, and it is very close to the idea of insurance, of risk pooling. Smith & Kane (1994), for example, remind us that adding an (n+1)-th independent risk to a pool of n fairly priced risks generally yields a marginal reduction in risk, which strengthens the insurer’s risk pooling. This diversification effect still works even if the risks are correlated (but not perfectly correlated, and the diversification gains decrease with correlation, as Charpentier (2011) pointed out).
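Both claims (pooling reduces risk, but the gain shrinks as correlation increases) can be made explicit with a standard variance computation: for n identically distributed risks with variance \sigma^2 and common pairwise correlation \rho,

\operatorname{Var}\left(\frac{X_1+\cdots+X_n}{n}\right)=\frac{\sigma^2}{n}+\frac{n-1}{n}\,\rho\,\sigma^2=\sigma^2\left(\frac{1-\rho}{n}+\rho\right)\;\longrightarrow\;\rho\,\sigma^2 \text{ as } n\to\infty,

which vanishes when \rho=0 (the full pooling benefit), but keeps a floor \rho\sigma^2>0 as soon as the risks are positively correlated.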

Often, when we talk about “diversification,” we think of the work of Harry Markowitz or Arthur Roy in finance in the 1950s, which laid the foundation for portfolio theory. This theory shows how rational investors can use diversification, corresponding to the correlation between assets, to optimize their financial portfolio. In this approach, it is generally assumed that investors’ preference for a risk/return trade-off can be described by a quadratic utility function. In other words, only the expected return (the expected gain) and the volatility (the standard deviation) or variance are the parameters considered by the investor. This literature shows that an investor can reduce the risk of their portfolio simply by holding assets that are not (or only slightly) correlated, thus diversifying their investments. They can then achieve the same expected return while reducing the variability of their portfolio.

But what happens if the variance no longer exists? This question challenges the use of the normal distribution to model financial returns. The normal distribution was interesting partly because it satisfies a property of stability under summation [4]. Keeping this property while allowing for more extreme values than the normal distribution amounts to using the “stable” distributions studied by Paul Lévy, as proposed [5] by Benoit Mandelbrot in the 1960s.

In cases where the variance is infinite, it is necessary to use a more general risk measure than the standard deviation, and heuristically, “diversification” is related to the sub-additivity of the risk measure: a portfolio containing the average of the holdings of two other portfolios has a lower risk than the average of the risks of the two other portfolios. Daníelsson et al. (2013) remind us that in the presence of large risks (infinite expectation), diversification no longer works. This property was described and discussed by Paul Samuelson as early as 1967, Stephen Ross in 1976, and more recently by Rustam Ibragimov, Dwight Jaffee, Johan Walden, Paul Embrechts, or Ruodu Wang, among others. The introduction by Ibragimov et al. (2015) explains it well, “there are limitations to diversification with such risk distributions [heavy-tailed distribution]. Specifically, whereas diversification is preferred by risk-averse agents when risks are thin-tailed (the traditional case that has been extensively studied), it may actually be hurtful for agents to diversify when risks are heavy-tailed […] nondiversification traps may arise when risk distributions have heavy left tails and insurance providers have limited liability.” These properties, widely discussed from a mathematical perspective, are difficult to accept because they are theoretical and counter-intuitive. Moreover, it is often difficult to determine for whom diversification becomes dangerous, since there are several stakeholders: the insured, insurers, reinsurers, and the state. Ibragimov et al. (2011) provide some answers, “when these risks are thin-tailed, risk-sharing is always optimal for both individual intermediaries and society. But, with moderately heavy-tailed risks, risk-sharing may be suboptimal for society, although individual intermediaries still benefit from it […] and it is well-known that diversification may be suboptimal in the extremely heavy-tailed case.”
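A heuristic R check of this phenomenon (a sketch, with an illustrative Pareto index a = 0.8, hence infinite expectation, and the 99% quantile as risk measure): the quantile of an equally weighted average of two independent copies exceeds that of a standalone risk, so “diversification” increases the measured risk.

set.seed(123)
n <- 1e6
rpareto <- function(n, a) runif(n)^(-1 / a)   # S(x) = x^(-a), x >= 1
a <- 0.8                                      # infinite expectation
x1 <- rpareto(n, a); x2 <- rpareto(n, a)      # two independent risks
quantile(x1, 0.99)                            # 99% quantile (VaR) of a standalone risk
quantile((x1 + x2) / 2, 0.99)                 # VaR of the "diversified" position: typically larger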

Over the past twenty years, there have been many examples where diversification does not work, and practitioners are aware of them. Fabozzi et al. (2014), discussing the financial crisis, remind us that “the financial crisis has clearly shown that when you need diversification most, it may not work.” When considering risks related to climate disasters, we see that these risks are extreme, and potentially uninsurable because their expectation may be infinite. Uninsurability mainly means that a market mechanism does not make sense without state intervention. One might also think that it could be interesting to diversify risks by offering multi-peril coverage (as proposed by the current cat-nat mechanism), or by considering geographical diversification, for example at the European level, as recently suggested by Carlo Cimbri, Thierry Derez, and Philippe Lallemand. But the scientific literature reminds us that such diversification is dangerous and, in any case, unimaginable without strong and clear state intervention.

References

Biffis, E., & Chavez, E. (2014). Tail risk in commercial property insurance. Risks, 2(4), 393–410.

Charpentier, A. (2011). La loi des grands nombres et le théorème central limite comme base de l’assurabilité ? Risques, 86.

Chavez-Demoulin, V., Embrechts, P., & Nešlehová, J. (2006). Quantitative models for operational risk: extremes, dependence and aggregation. Journal of Banking & Finance, 30(10), 2635-2658.

Chen, Y., Embrechts, P., & Wang, R. (2024). An unexpected stochastic dominance: Pareto distributions, dependence, and diversification. Operations Research.

Cimbri, C., Derez, T., & Lallemand, P. (2024). Mutualisons l’assurance pour offrir aux Européens une protection à la hauteur des risques actuels ! La Tribune, May 23.

Daníelsson, J., Jorgensen, B. N., Samorodnitsky, G., Sarma, M., & de Vries, C. G. (2013). Fat tails, VaR and subadditivity. Journal of Econometrics, 172(2), 283-291.

Eling, M., & Wirfs, J. (2019). What are the actual costs of cyber risk events? European Journal of Operational Research, 272(3), 1109–1119.

Embrechts, P., Hofert, M., & Chavez-Demoulin, V. (2024). Risk Revealed: Cautionary Tales, Understanding and Communication. Cambridge University Press.

Fabozzi, F. J., Focardi, S. M., & Jonas, C. (2014). Investment Management: A Science to Teach or an Art to Learn? CFA Institute Research Foundation.

Fama, E. F. (1965). Portfolio analysis in a stable Paretian market. Management science, 11(3), 404-419.

Hagstroem, K.-G. (1925). Pareto and reinsurance. Scandinavian Actuarial Journal, 216–248

Hofert, M., & Wüthrich, M. V. (2012). Statistical review of nuclear power accidents. Asia-Pacific Journal of Risk and Insurance, 7(1).

Hsieh, P.-H. (1999). Robustness of tail index estimation. Journal of Computational and Graphical Statistics, 8(2), 318–332.

Ibragimov, R., & Walden, J. (2007). The limits of diversification when losses may be large. Journal of banking & finance, 31(8), 2551-2569.

Ibragimov, R., Jaffee, D., & Walden, J. (2011). Diversification disasters. Journal of financial economics, 99(2), 333-348.

Ibragimov, M., Ibragimov, R., & Walden, J. (2015). Heavy-tailed distributions and robustness in economics and finance (Vol. 214). Springer.

Lévy, P. (1925). Calcul des probabilités. Paris: Gauthier-Villars.

Mandelbrot, B. (1960). The Pareto–Lévy law and the distribution of income. International Economic Review, 1(2), 79–106.

Markowitz, H. (1952). Portfolio Selection, Journal of Finance, 7 (1), 77-91.

Markowitz, H. (1971). Portfolio selection : efficient diversification of investments. Yale University Press.

Moscadelli, M. (2004). The modelling of operational risk: experience with the analysis of the data collected by the Basel committee. Technical Report 517, Banca d’Italia

Raper, S. (2018). Turning points: Bernoulli’s golden theorem. Significance, 15(4), 26-29.

Ross, S. A. (1976). A note on a paradox in portfolio theory. Unpublished Mimeo, University of Pennsylvania.

Roy, A. D. (1952). Safety first and the holding of assets. Econometrica, 431-449.

Samuelson, P. A. (1967). Efficient portfolio selection for Pareto-Lévy investments. Journal of financial and quantitative analysis, 2(2), 107-122.

Sornette, D., Knopoff, L., Kagan, Y. Y., & Vanneste, C. (1996). Rank‐ordering statistics of extreme events: Application to the distribution of large earthquakes. Journal of Geophysical Research: Solid Earth, 101(B6), 13883-13893.

Smith, M. L., & Kane, S. A. (1994). The law of large numbers and the strength of insurance. In Insurance, Risk Management, and Public Policy: Essays in Memory of Robert I. Mehr (pp. 1-27). Dordrecht: Springer Netherlands.

Zajdenweber, D. (1996). Extreme values in business interruption insurance. Journal of Risk and Insurance, 95-110.

Zajdenweber, D. (2000). Économie des extrêmes. Flammarion.

1. The case of dice is somewhat peculiar because the geometry of the cube, particularly its regularity (we refer to it as a regular hexahedron with six faces), allows us to infer the probability without any experimentation

2. The theoretical literature on probabilities is largely built on the idea of finite expectation variables, and it is very hard to do without them (making any reasoning “on average” impossible)

3. It was not until the 1970s, with the work of Guus Balkema and Laurens de Haan, that we had a mathematical proof of this result. The Dutch school of statistics made significant advances in the analysis of extreme events following the 1953 North Sea flood, which had major and disastrous consequences in the Netherlands, as recalled by Embrechts et al. (2024)

4. The sum (or average) of independent normal variables also follows a normal distribution

5. He calls these laws Pareto-Lévy to emphasize the shape of the distribution tails, corresponding to Pareto-type laws, on extreme losses (on the left) and extreme gains (on the right)

"sendo l'intento mio scrivere cosa utile a chi la intende…"