This post was written with Laurence Barry and Ewen Gallic, in French, in November 2019 (see hal-02350006)
Insurance policies are classic examples of random contracts. This forces insurers to regularly quantify the underlying uncertainty, calculating probabilities in order to propose "fair" premiums for the commitments they make. Isn't it time to question this practice, at a time when artificial intelligence is exploding and offering predictive algorithms of unprecedented precision? At a time when big data / big brother could mean the disappearance of uncertainty itself?
Insuring a heterogeneous population, or the importance of classification
Grouping risks according to various pieces of information, such as the age of the insured, their state of health or even their profession, constitutes what is called risk classification. This practice of segmentation is justified (for eligibility purposes but also for pricing purposes) by the assumption that risks are placed in relatively homogeneous groups, within which the probabilities of occurrence are similar. For Schauer (2006), this "generalization", which aims to see the individual through the prism of his or her risk class and to generalize his or her behavior on the basis of a few explanatory variables, is probably the raison d'être of the actuary: "To be an actuary is to be a specialist in generalization, and actuaries engage in a form of decision-making that is sometimes called actuarial." Statistically, we are looking for a classification method that is as "discriminatory" as possible (in the statistical sense of the word, as introduced by Fisher (1936)), bearing in mind that discrimination is forbidden, which makes the exercise perilous and often criticized (we will come back to this later).
Insurers often use two arguments to justify segmentation. The first is that it is made economically necessary by competition: not classifying leads to adverse selection, as the high risks end up concentrated with the insurers who do not segment. In such a situation, market equilibrium would not be possible, since the low risks would be with a competitor who has segmented. If the risk factor were observable by both policyholders and insurers, there would be a self-selection phenomenon, with low-risk policyholders holding the cheapest policies. This situation constitutes a Nash equilibrium. But if the risk factor is unobservable, a suboptimal equilibrium may be reached, resulting from a negative externality of this unavailable information, in the manner of Wilson (1977), as described in Cummins et al. (1982) for life insurance contracts. That said, Kleindorfer & Kunreuther (1980) show that access to more information does not necessarily lead to an improvement in consumer welfare. Moreover, if classification is not allowed, the equilibrium is maintained, with low risks subsidizing high risks.
The second argument put forward to justify segmentation is that it (and therefore adjusting premiums to risk) would be fair and equitable. But this view of fairness has not always prevailed, and it seems to be driven by technical developments. Thus, classification has become increasingly refined, multiplying the classes of risk and leading to "personalized" rates. In addition to statistical advances, economic factors could justify this sophistication: the increasingly strong competition in certain branches.
Insurance uncertainty
There are several ways to characterize uncertainty in insurance. As is often the case when making forecasts, it is necessary to distinguish between the uncertainty associated with the estimation of probabilities and the real uncertainty of the result (the randomness of the event). For the second notion, Hacking (1975) speaks of structural probability, and this is the one that is often used to introduce the concepts of probability, for example with dice or card games: the probabilities are known, only the outcome of the game is uncertain. For example, I know that the probability of getting a 6 by throwing a die is 1/6 (given the geometry of the cube). From a statistical point of view, the probability is measured when we can observe a frequency, i.e. a repetition of similar risks. Statisticians have thus defined a notion of empirical probability, based on repetition. In this frequentist approach, and in particular for Ronald Fisher and Richard von Mises, the probability of a single event (a "one-shot") is meaningless. If, in throwing a thousand dice, I get the face 6 a total of 173 times, the empirical probability of getting a 6 is 17.3%. The law of large numbers assures us that this frequency tends towards the true value as the experiment is repeated, and the central limit theorem allows us to control the fluctuations. This is the first uncertainty we mentioned at the beginning of this section, which we would call the estimation error.
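To fix ideas, here is a minimal simulation sketch (in Python, with numpy and an arbitrary seed, not code from the original post) of this frequentist estimation: the empirical frequency of the face 6 over repeated throws of a fair die, with the order of magnitude of the fluctuations given by the central limit theorem.

```python
import numpy as np

rng = np.random.default_rng(42)  # arbitrary seed, for reproducibility

# Estimate P(face = 6) by its empirical frequency over n throws of a fair die.
# The law of large numbers says the frequency converges to 1/6 as n grows,
# and the central limit theorem gives the size of the fluctuations around it.
for n in (60, 1_000, 100_000):
    throws = rng.integers(1, 7, size=n)      # faces 1 to 6, equally likely
    p_hat = (throws == 6).mean()             # empirical frequency
    se = np.sqrt(p_hat * (1 - p_hat) / n)    # CLT-based standard error
    print(f"n = {n:>7,}: frequency = {p_hat:.4f}  (±{1.96 * se:.4f} at 95%)")
```

As the number of throws grows, the frequency gets closer to 1/6 and the confidence band shrinks at the rate 1/sqrt(n), which is precisely the estimation error mentioned above.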
In addition, we can mention two further notions. The first is conditional probability. This idea was introduced into insurance by de Moivre, or de Witt, when they noted that to estimate a probability of death, it was necessary to consider people of the same age. This is the idea we find when we consider a classification: we want homogeneous risks, similar without being identical. The probability we obtain is then conditional on this common factor that characterizes the observed class. In our dice example, this amounts to saying that one should not throw a thousand dice, but rather throw the same die a thousand times — or, failing that, similar dice (of the same manufacture).
The second is subjective probability, formalized by Bruno de Finetti and Leonard Savage (and, more philosophically, by Frank Ramsey) to understand and model decision making. Subjective probabilities are relatively popular in the economics of uncertainty, but difficult to implement in the context of pricing automobile or home insurance contracts. A subjective probability is a judgement, which cannot be confronted with reality, but which can be envisaged for the insurance of risks that are still poorly known (for example, the first aviation insurance contracts, as mentioned in McGrayne (2012)). A Bayesian approach then consists in combining this subjective probability with the probability as the observed frequency of a phenomenon: starting from an a priori belief, one refines the estimate by progressive updating as the experiments are repeated. Classically, the probability of getting the face 6 will be an average between our belief (1 chance in 6) and a so-called historical probability obtained by making a few throws (3 sixes in 20 throws, for example). The weights attributed to the two depend on the number of experiments performed: we will give more credit to the experiment if we make a thousand throws than if we make sixty.
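As an illustration of this weighting mechanism, here is a minimal sketch (in Python) of a conjugate Beta-Binomial update for the dice example above; the prior mean is set to 1/6, but the prior strength (12 pseudo-throws here) is an arbitrary assumption, not a value from the post.

```python
# Beta-Binomial updating of the probability of throwing a 6.
# Prior Beta(a, b) with mean a / (a + b) = 1/6; the prior "strength"
# a + b = 12 pseudo-throws is an illustrative assumption.
a, b = 2.0, 10.0
k, n = 3, 20                       # observed: 3 sixes in 20 throws

a_post, b_post = a + k, b + (n - k)
posterior_mean = a_post / (a_post + b_post)

# The same value, written explicitly as a weighted average of the prior
# belief (1/6) and the empirical frequency (3/20), with weights driven by
# the prior strength and the number of throws.
w = n / (n + a + b)
blended = w * (k / n) + (1 - w) * (a / (a + b))

print(posterior_mean, blended)     # both equal 5/32 = 0.15625
```

With a thousand throws instead of twenty, the weight w would be close to 1 and the prior belief would hardly matter, which is exactly the point made above.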
Certainty of outcome, or fundamental randomness
Predictive probabilities, used to calculate the premium of an insurance policy, are the first step in a classification problem. A classical tool to judge the relevance of a classifier is the ROC curve, described in Kuhn & Johnson (2018): one compares the individual probability (a priori, as produced by the classification model) to a threshold between 0 and 1; if the probability is lower than the threshold, the estimate is that the person survives, otherwise that the person dies. This estimate is then compared to the realizations (ex-post) of survival and death. For each threshold, we can consider the classical confusion matrix of decision theory: it consists in dividing the observations according to the observed outcome (in columns) and the estimate resulting from the model (in rows, according to the estimated probability for the individual and the threshold that we have set). We can thus divide the population between correct classifications and errors (including "false positives" if the person survived despite an estimated probability of death higher than the threshold, and "false negatives" if the person died despite an estimated probability lower than the threshold).
Figure 1: ROC curve with strongly unbalanced data (20 deaths out of 1,000 observed people). For a 1.5% threshold, there are 445 predicted survivals (440 of them correct) and 555 predicted deaths (15 of them correct).
The ROC curve is obtained by varying the threshold. Each threshold corresponds to a point on the curve, graphically reporting the rates of false positives (on the abscissa) and true positives (on the ordinate), as in Figure 1.
Consider a group of 1000 insureds, where 20 people died last year.
Assuming a model in which the population is perfectly homogeneous, the estimated probability of death is 2% for everyone. In this case, for any threshold higher than 2%, we estimate that the whole population survives: we will have a false positive rate of 0% and a true positive rate of 0%, hence a point (0,0) on the graph. Conversely, for any threshold lower than 2%, we estimate that the entire population dies: we will have a false positive rate of 100% and a true positive rate of 100%, hence a point (1,1) on the graph. The ROC curve of this uniform 2% model is therefore the diagonal of the square in Figure 1. But one can also imagine that there is some heterogeneity, with, for example, a probability of death of 1% for one half of the population and 3% for the other half, or that the model produces probabilities between 1% and 3% in a non-dichotomous way. The data simulated to construct the black curve in Figure 1 assume that the population has varying probabilities of death, ranging from 1% to 3%, obtained by logistic regression. Errors are made, and the nature of the error varies with the chosen threshold, which changes the false positive and false negative rates.
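The black curve of Figure 1 cannot be reproduced exactly here, but the following sketch (in Python with numpy, using probabilities drawn uniformly between 1% and 3% as a stand-in for the logistic-regression scores, and an arbitrary seed) shows how the confusion matrix and the ROC points are obtained from such a simulation.

```python
import numpy as np

rng = np.random.default_rng(1)     # arbitrary seed

# 1,000 insureds with individual death probabilities between 1% and 3%
# (a uniform draw, standing in for the logistic-regression scores of the post),
# and realized deaths drawn from those probabilities.
n = 1_000
p = rng.uniform(0.01, 0.03, size=n)
death = rng.binomial(1, p)

def confusion(p, death, threshold):
    """Confusion counts when 'predicted death' means p >= threshold."""
    pred_death = p >= threshold
    tp = np.sum(pred_death & (death == 1))    # predicted death, actually died
    fp = np.sum(pred_death & (death == 0))    # predicted death, survived
    fn = np.sum(~pred_death & (death == 1))   # predicted survival, actually died
    tn = np.sum(~pred_death & (death == 0))   # predicted survival, survived
    return tp, fp, fn, tn

# One point of the ROC curve per threshold: false positive rate (x-axis)
# against true positive rate (y-axis).
for t in np.linspace(0.01, 0.03, 9):
    tp, fp, fn, tn = confusion(p, death, t)
    print(f"threshold {t:.3%}: FPR = {fp / (fp + tn):.2f}, TPR = {tp / (tp + fn):.2f}")
```

Varying the threshold from below 1% to above 3% traces a curve joining (1,1) to (0,0), sitting somewhere above the diagonal, like the black curve in Figure 1.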
The extreme case would be a model that had correctly assigned a probability of 100% to the 20 people who actually died. This is the red curve in Figure 1. Such a split is possible ex-post, once the hazard is realized: a posteriori, there is certainty of death for those who actually died. However, this is hardly realistic in insurance, unless one imagines that the actuary is an oracle who knows with certainty who will die and who will survive. The reality is rather an intermediate situation between the red curve and the diagonal, before reaching the hatched region, where the error rate is low but not zero: one cannot predict, with certainty, who will die. Insurance is only possible if this upper bound is not too high. A fundamental question for the survival of insurance is to know where this upper bound lies: how far can we go between the two extreme cases (a homogeneous population with a 2% probability for all, and a highly discriminated population, with 2% of the population having a 100% chance of dying and the other 98% a 0% chance)? And what does this bound depend on? In particular, do more complex models, such as very deep neural networks, really improve prediction? And will data enrichment, as seen with connected objects and fusion with all sorts of external information, move the upper bound upward?
While deep learning — see Goodfellow et al. (2018) — makes it possible to build image classifiers with an error rate close to 0%, it is hard to imagine that it will be possible to predict, almost a year in advance (when the contract is signed), who will die within a year, who will get the flu, who will have water damage, etc. More complex models allow for improved predictions, taking into account non-linearities and cross effects between rating variables, but not to the point of eliminating the hazard. And as long as insurance is designed ex-ante (the premium is set at the beginning of the coverage period), it is difficult to imagine that adding information will make the hazard disappear. Genetic tests, for example, explain only a (small) part of the risk of cancer. And adding data often means adding noise, which makes the analysis more complex. However, it is clear that more complex models and richer data do tend to "improve" the prediction, by raising the ROC curve. But are we asking the right questions? What does it really mean to have a bound that is very far from the homogeneous case, on the diagonal?
Homogeneity, fairness and causality
As we have seen, insurance pricing relies on a division of risks (contracts) into categories, within which the distribution of losses can be estimated, in order to set a premium level. The distribution is based on the characteristics of the insured and the insured property. By tracing the history of insurance, Ewald (1986) shows that the mechanisms of foresight were set up by shifting the burden of work-related accidents onto society: the idea of individual responsibility for accidents is abandoned in favour of solidarity. Insurance distinguishes “between the damage suffered by a particular individual — a matter of chance or misfortune — and the loss linked to the damage, which is always attributed collectively and socially”. This principle of social solidarity, of risk pooling, means that risk (in insurance) is always thought of collectively.
Today, rates are considered "fair" or "actuarially equitable" if each premium corresponds to the expected loss (in the mathematical sense of expectation) for each insured. In this perception of fairness, an essential assumption is that the classes are "homogeneous". Indeed, under the opposite assumption, the less risky individuals subsidize the more risky ones, which is perceived as socially unjust.
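In symbols (with standard actuarial notation, not taken from the original post), writing S for the random loss of an insured and X for the rating variables, the "actuarially fair" premium for the class defined by X = x would be the conditional expectation

\[
\pi(x) \;=\; \mathbb{E}\left[ S \mid X = x \right].
\]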
This version of actuarial fairness can be described using the variance decomposition formula. The overall variance can be decomposed into two terms, the inter-class variance and the intra-class variance: "actuarial fairness" aims to ensure that the risk classes are relatively distinct from one another, and therefore have a high inter-class variance, accompanied by homogeneity within the classes, and therefore a low intra-class variance. From a statistical point of view, since the total variance is fixed, trying to increase one is equivalent to decreasing the other. This mechanism is not always clear to uninformed observers; for example, in Manhart, one of the most documented cases on gender discrimination in insurance, Justice Stevens states: "We focus on fairness to individuals rather than on fairness to classes … even a true generalization about a class is an insufficient reason for disqualifying an individual to whom the generalization does not apply" (quoted in Anzalone (2016)). In other words, for the courts, a statistical criterion of the type "true generalization" cannot be applied to an individual.
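In formula form (again with standard notation, not from the original post), writing Y for the loss and C for the risk class, the variance decomposition reads

\[
\operatorname{Var}(Y)
\;=\;
\underbrace{\operatorname{Var}\big(\mathbb{E}[Y \mid C]\big)}_{\text{inter-class variance}}
\;+\;
\underbrace{\mathbb{E}\big[\operatorname{Var}(Y \mid C)\big]}_{\text{intra-class variance}},
\]

so that, the total variance being fixed, any increase in the inter-class term is indeed an equivalent decrease in the intra-class term.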
Another important criticism, found in the debates on the "gender directive", is the link between discrimination and causality. Indeed, statistically, actuaries will look for classification factors that are strongly correlated with claims experience. But it is possible that these factors are only a proxy for the true causal variable, which remains unobserved, leading to a poor estimate of risk for some. As noted by Antonio & Charpentier (2017), gender has thus long been used in automobile insurance because it is highly correlated with variables associated with driving style and with other variables that were historically unobservable (but which are now observable thanks to connected objects, such as mileage, driving hours, types of roads used, etc.).
This link with causal mechanisms is relatively deep, and Hacking (1975) sees in it a connection with the "probabilistic revolution": we can easily highlight correlations, but the causes, if they exist, remain more opaque. Laplace, at the beginning of the 19th century, declared that "probability is relative in part to our knowledge, in part to our ignorance", linking probabilities both to a deterministic Newtonian vision of the world and to our inability to know it perfectly. The latter component means that we cannot predict the exact date of death of an individual, but that, statistically, in a homogeneous group, we can predict the number of deaths in a year. And to return to the causal relationship: smoking, for example, does not necessarily cause premature death, but smoking will be seen as dangerous because it increases the probability of death over a given period. Thus, as shown in Hacking (1975), causality is today thought of in a probabilistic context, and no longer in a deterministic one.
References
Antonio, K. & Charpentier, A. (2017). La tarification par genre en assurance, corrélation ou causalité ? Risques, 109.
Anzalone, C.A. (2016). U.S. Supreme Court Cases on Gender and Sexual Equality. Routledge.
Bailey, H., Hutchison, T. & Narber, G. (1975). The regulatory challenge to life insurance classification. Drake Law Review Insurance Law Annual, 4: 779-827.
Barry L. (2019). Justice ou justesse ? L’équité de l’assurance. Working paper, #15, chaire PARI.
Charpentier, A. & Denuit, M. (2004). Mathématiques de l’Assurance Non-Vie : Principes Généraux de Théorie du Risque. Economica.
Cummins, J.D., Smith, B.D., Vance, R.N. & VanDerhei, J.L. (1982). Risk Classification in Life Insurance. Kluwer-Nijhoff Publishing.
Ewald F. (1986). L’État providence. Grasset.
Fisher, R.A. (1936). The Use of Multiple Measurements in Taxonomic Problems. Annals of Eugenics, 7(2): 179-188.
Frézal, S. & Barry, L. (2019). Fairness in Uncertainty: Some Limits and Misinterpretations of Actuarial Fairness. Journal of Business Ethics.
Goodfellow, I., Bengio, Y. & Courville, A. (2018). Deep Learning. MIT Press.
Hacking, I. (1975) The Emergence of Probability. Cambridge University Press.
Kleindorfer, P. & Kunreuther, H. (1980). Misinformation and Equilibrium in Insurance Markets. In Economic Analysis of Regulated Markets, Jörg Finsinger (Ed.), Springer Verlag, 67-90.
Kuhn, M. & Johnson, K. (2018). Applied Predictive Modeling. Springer Verlag.
McGrayne, S.B. (2012). The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy. Yale University Press.
Ramsey, F.P. (1926). Truth and Probability.
Schauer, F. (2006) Profiles, Probabilities, and Stereotypes. Harvard University Press.
Von Mises, R. (1957). Probability, Statistics and Truth. Dover Publications.
Wilson, C. (1977). A model of insurance markets with incomplete information. Journal of Economic Theory, 16(2): 167-207.