The report Insurance, biaises, discrimination and fairness is now officially online on the website of the Institut Louis Bachelier.
The performance of machine learning algorithms on massive data has turned insurance and actuarial science upside down. Actuaries are pushed to doubt and distrust by the issues these new tools have raised in other contexts, from predictive justice (or “actuarial justice” as Harcourt (2008) calls it) and the debates on “fake news” to autonomous vehicles and predictive medicine. Kranzberg (1986) asserted that “technology is neither good nor bad; nor is it neutral”, pointing out that, even in the absence of bad intentions, learning algorithms can be unfair. Correcting these possible injustices is complex. For Nielsen (2020), “technology does not necessarily self-regulate, via either market or social pressures” (the invisible hand of the market or social pressure may not be enough). In this context, we review the issues of bias, discrimination and fairness in the predictive models used in insurance. These changes, in both the data and the models, observed over the past decade or so, have already challenged insurance’s very existence.
For Loffler (2016), “this leads to demutualization and a focus on predicting and managing individual risks rather than communities”: the increasing individualization of premiums forces us to question the future of mutualization and solidarity between policyholders. The problems of discrimination must be considered in this context of loss of solidarity. Ironically, discrimination only makes sense when the individual is considered as a member of a group characterised by a shared trait (women, people of foreign origin, the elderly, etc.).
This principle of risk pooling translates into the fact that insurance is “the contribution of the many to the misfortune of the few”. Because of the inversion of the production cycle, the insurer sells the policyholder a promise of future compensation for a random risk, in exchange for the payment of a “fair” contribution, presumably proportional to the policyholder’s risk (Thiery et al. (2006) speak of “actuarial fairness”). Since the real underlying risk factor is unobservable when the contract is signed, the insurer develops predictive algorithms from the available information, to predict the frequency and cost of claims, but also the probability of fraud, or the probability of taking out additional cover, for example. By no longer seeing a group of policyholders as a perfectly homogeneous mutuality, actuaries have used increasingly refined algorithms to create more homogeneous subgroups. With the development of machine learning techniques, the idea of personalization and individualization (very present in the computer science community for several years, as pointed out by Adomavicius (2005) with individualized “profiles”) is making headway and pushing insurers to demutualize ever further. “At the core of insurance business lies discrimination between risky and non-risky insureds”, stated Avraham (2017). Thus, the insurance operation is technical and has a fundamentally collective dimension, based on risk mutualization within homogeneous risk groups. Insurance classification systems rest on the assumption that individuals match the average (in some way stereotyped) characteristics of a group to which they belong. This is discrimination in the statistical sense (implemented by statistical and then econometric tools). However, the insurance contract is a matter of law, and has an individual dimension. In this sense, an individual cannot be treated differently because they belong to a specific group, particularly a group they have not chosen to be in; otherwise it is discrimination in the legal sense of the term. And in the context of increasingly massive data and increasingly complex predictive algorithms (not to use the term “black box”), it has become more and more difficult to ensure that insurers are asking a “fair” contribution from policyholders.
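As a purely illustrative sketch (synthetic data, invented rating factors and coefficients, not the models discussed in the report), the following Python snippet shows the kind of claim-frequency regression an insurer might use to split a portfolio into more homogeneous risk classes, and how finer segmentation translates into different expected premiums:

```python
# Hypothetical sketch: a Poisson regression of claim counts on two rating
# factors, used to segment policyholders into more homogeneous risk classes.
# All data and coefficients are simulated for illustration only.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
n = 10_000

# Two invented rating factors: driver age (years) and annual mileage (1000 km)
age = rng.integers(18, 80, size=n)
mileage = rng.gamma(shape=2.0, scale=6.0, size=n)

# Simulated "true" claim frequency: young drivers and high mileage are riskier
lam = np.exp(-2.0 + 0.8 * (age < 25) + 0.03 * mileage)
claims = rng.poisson(lam)

X = np.column_stack([(age < 25).astype(float), mileage])
model = PoissonRegressor(alpha=0.0).fit(X, claims)

# Expected claim frequency (hence the "pure premium") differs across
# sub-groups: the finer the segmentation, the weaker the mutualization.
young_high = model.predict(np.array([[1.0, 20.0]]))
older_low = model.predict(np.array([[0.0, 5.0]]))
print(f"young driver, 20k km: {young_high[0]:.3f} expected claims per year")
print(f"older driver,  5k km: {older_low[0]:.3f} expected claims per year")
```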
Thinking about the equal treatment of policyholders leads to questioning the very possibility of taking out a contract, with a view to coverage, but also the idea of asking for a non-prohibitive, non-dissuasive premium. As opposed to what financial mathematics teaches us under the hypothesis of complete markets (Froot (1995)), there is no such thing as the law of one price in insurance: the price of a risk is seen through the eyes of a mutuality of policyholders, and of a pricing model. Moreover, policyholders do not buy “insurance”, but a “guarantee” of coverage against certain risks. If certain coverages are subscribed to mostly by some populations, and not by others, the price difference does not necessarily correspond to discrimination per se. It is within this context that we will discuss bias, discrimination and fairness in insurance.
The increasingly massive amount of data poses many challenges. First, regulations seek to protect so-called “sensitive” or “protected” information, sometimes prohibiting the collection and storage of certain variables. The risk is that it then becomes difficult to ensure that a model does not discriminate according to a criterion that is not observed: masking certain characteristics is not enough to make a model fair, and only serves to hide a potential problem. Another challenge is that of the innumerable biases in data collected from all kinds of sources (questionnaires, connected objects, data obtained from third parties, etc.). Among these are missing variable bias, definition or interpretation bias, measurement bias, survival bias, feedback bias, etc. These “dark data” (to use the term of Hand (2020)) force us to question the relevance of a risk classification, as discrimination is sometimes perceived on the basis of biased, possibly misinterpreted, information. What is the relevance, for the insurer, of the main driver’s gender in a straight couple sharing a car? This leads us to the difficulty of defining variables, which is well known to statisticians.
Let’s return to Simpson’s paradox and the ecological fallacy, where the absence of certain variables can lead to a false interpretation of the meaning of a potential discrimination. In the context of insurance, telematic data and incentive mechanisms of the “gamification” type raise questions about feedback biases, as insurers have the possibility of directly influencing the behaviour of certain policyholders on the basis of data arriving in real time. There is also a form of selection bias, which simply means that historical data has been collected on people who chose to take out a policy and who were previously accepted by an insurer (potentially on the basis of an earlier model). Likewise, fraud analysis cannot be done in the same way if fraud-related investigations are conducted randomly or if they are triggered by a prior fraud detection model. Here we find the typical debates between experimental (often randomized) data and administrative or observational data.
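To make the Simpson’s paradox point concrete, here is a toy numerical illustration (all figures invented): a group can look riskier in aggregate while being safer within every sub-population, once a confounding variable such as the area of residence is taken into account.

```python
# Toy illustration of Simpson's paradox with invented figures: within each
# area, group A has the higher claim rate (0.30 vs 0.25 urban, 0.10 vs 0.05
# rural), yet in aggregate group B looks riskier (0.23 vs 0.12), simply
# because B's policies are concentrated in the riskier urban area.
import pandas as pd

data = pd.DataFrame({
    "group":    ["A", "A", "B", "B"],
    "area":     ["urban", "rural", "urban", "rural"],
    "policies": [100, 900, 900, 100],
    "claims":   [30, 90, 225, 5],
})

data["rate"] = data["claims"] / data["policies"]
print(data)  # per-area rates: A is riskier in both urban and rural

aggregate = data.groupby("group")[["claims", "policies"]].sum()
aggregate["rate"] = aggregate["claims"] / aggregate["policies"]
print(aggregate)  # aggregate rates: B appears riskier once "area" is dropped
```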
As already mentioned above, a central notion is discrimination, a particularly ambiguous term, since actuaries use its statistical version, as in the linear discriminant analysis introduced by Ronald Fisher, whereas jurists see it as unequal and unfavourable treatment applied to some people because of certain criteria. Even if there are cultural differences between countries, there will often be a number of protected characteristics (by moral code or by law) such as a person’s gender or sex, race or national or ethnic origin, disability, any genetic information, etc. These criteria are sometimes presented as “clubs” into which one is born, to use the expression of Macnicol (2006) (which also echoes the concepts of “veil of ignorance” and “genetic lottery”). Other criteria, such as age, are more complex, since a policyholder will pass through all ages in the course of his or her life: if there is “discrimination” against young people, the policyholder will suffer from it at the age of 20, when he or she is in the disadvantaged group, before progressively moving into the privileged group (without mentioning a possible inter-generational solidarity). Finally, some criteria are more a matter of more or less conscious choice. A first challenge is that most kinds of discrimination are not intentional. Furthermore, contrary to what may exist in the traditional literature on discrimination, where proxies are potentially used instead of a sensitive variable (e.g. redlining, when a city’s neighbourhoods are a proxy for ethnic and racial information), in insurance some sensitive variables (e.g. gender) have long been used as proxies for information that is difficult to access (such as behavioural information concerning driving). Another difficulty lies in a typical high-dimensional problem, and in the multicollinearity of the predictor variables. This can lead to proxy discrimination, sometimes referred to as statistical discrimination, or indirect discrimination in the European directives related to discrimination, which consists of using a variable that is highly correlated with the protected variable. The extensive use of (undetected) proxies in model development has raised concerns about fairness, and data enrichment adds more and more variables that can be seen as generating indirect discrimination.
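A minimal sketch of this proxy effect, on synthetic data with invented coefficients: the protected variable p is excluded from the model (“fairness through unawareness”), yet a strongly correlated admissible variable carries its signal, and predictions still differ systematically between the two groups.

```python
# Sketch of proxy (indirect) discrimination on simulated data: dropping the
# protected attribute is not enough when a correlated predictor remains.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 20_000

p = rng.integers(0, 2, size=n)              # protected attribute (binary)
proxy = p + rng.normal(0.0, 0.3, size=n)    # admissible variable, highly correlated with p
other = rng.normal(0.0, 1.0, size=n)        # an unrelated rating factor

# Simulated outcome depends on p and on the legitimate factor
y = 1.0 + 2.0 * p + 0.5 * other + rng.normal(0.0, 1.0, size=n)

# "Fairness through unawareness": fit without p, but keep the proxy
X = np.column_stack([proxy, other])
pred = LinearRegression().fit(X, y).predict(X)

print("mean prediction, p=0:", round(pred[p == 0].mean(), 2))
print("mean prediction, p=1:", round(pred[p == 1].mean(), 2))
# The gap between groups persists: masking p only hides the problem.
```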
Finally, we outline the concept of fairness of a predictive model. After a brief overview of the concepts of justice, we present the typical fairness measures that can be used to quantify the extent of a possible discrimination. Formalizing briefly, we have a triplet (y, \boldsymbol{x}, p), where y is a variable of interest (number of claims, annual cost, number of doctor visits, etc.), \boldsymbol{x} a set of admissible explanatory variables used to predict y, and p a sensitive, or protected, variable (assumed unique here). Building a predictive model \widehat{y}=m(\boldsymbol{x}) using only the \boldsymbol{x} variables and not p is not enough to guarantee that the model will not discriminate according to p, simply because p can be quite correlated with some characteristics in \boldsymbol{x} (we find again the idea of a proxy). Barocas et al. (2019) note that the main principles associated with fairness translate into (1) a notion of independence between \widehat{y} and p, in other words the prediction has nothing to do with the group defined by p, (2) a notion of separation: \widehat{y} is independent of p given y, and (3) a notion of sufficiency: y is independent of p given \widehat{y}. These principles give rise to different notions of group fairness, the most popular being demographic parity and equal opportunity. These (so-called group) notions, which are very popular and widely used (e.g. in the labour market in the United States), are to be distinguished from individual approaches emerging in the scientific literature, inspired by causal inference techniques and aiming to find a counterfactual to answer the question: “What would have happened if the insured had had the characteristic p=1 instead of p=0?” (assuming that the protected variable is binary, p\in\{0,1\}). A causal relationship between the sensitive variable p and the risk variable y is what could legitimise statistical discrimination, as suggested by the European Commission, which proposed allowing proportionate differences in premiums and benefits for individuals when the use of sex is a determining factor in the assessment of risk, on the basis of relevant and accurate actuarial and statistical data. Nevertheless, the presence of proxies poses many challenges, as the usual counterfactual approach (consisting in changing the protected variable p only, ceteris paribus) does not make sense in high dimension, in the presence of proxies strongly correlated with the sensitive variable. An intervention (conceptual and fictitious) on the sensitive variable p must have an impact on one or more predictor variables \boldsymbol{x}, and thus on the prediction.
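As an illustration (synthetic data, illustrative names and thresholds), the two group-fairness notions just mentioned can be computed as simple gaps between groups: demographic parity compares acceptance rates P(\widehat{y}=1 \mid p) across the groups defined by p, while equal opportunity compares true positive rates P(\widehat{y}=1 \mid y=1, p).

```python
# Sketch of the two group-fairness measures on a synthetic binary decision.
import numpy as np

def demographic_parity_gap(y_hat, p):
    """Difference in acceptance rates between the two groups defined by p."""
    return abs(y_hat[p == 1].mean() - y_hat[p == 0].mean())

def equal_opportunity_gap(y_hat, y, p):
    """Difference in true positive rates between the two groups."""
    tpr_1 = y_hat[(p == 1) & (y == 1)].mean()
    tpr_0 = y_hat[(p == 0) & (y == 1)].mean()
    return abs(tpr_1 - tpr_0)

rng = np.random.default_rng(2)
n = 50_000
p = rng.integers(0, 2, size=n)                       # protected variable
y = rng.binomial(1, 0.3 + 0.1 * p)                   # observed outcome
score = 0.3 + 0.1 * p + rng.normal(0, 0.2, size=n)   # model score correlated with p
y_hat = (score > 0.4).astype(int)                    # binary decision

print("demographic parity gap:", round(demographic_parity_gap(y_hat, p), 3))
print("equal opportunity gap :", round(equal_opportunity_gap(y_hat, y, p), 3))
```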
Other concepts will also be reviewed here, without being the subject of specific chapters, such as responsibility. Indeed, if an algorithm reproduces what it observes in the data, can it be considered responsible for reproducing social biases? From an epistemological point of view, models were historically required to “describe reality” (or let us say reality as it appears in the data; we speak of “accuracy” in statistical learning), i.e. “what is”, whereas by introducing a moral and ethical dimension, the model is asked to agree with “what should be”, according to an ethical norm (the famous “is–ought” opposition of Hume (1739), or the opposition between statistical “normality” and the moral norm). The other concern is that in order to quantify fairness, it is necessary to have access to personal, private and sensitive data, which brings us back to the discussions on privacy and compliance.
Finally, as we will see throughout the document, these discussions on discrimination, bias and fairness are very close to those concerning the interpretation of predictive models and the notion of explainability. This narrative aspect of model building is important, especially when creating directed causal graphs to understand the relationships between the protected variable p, the possible predictor variables \boldsymbol{x} and the variable of interest y. But in high dimension, this exercise quickly becomes impossible. By affirming that “all models are wrong but some models are useful”, George Box insisted on the narrative aspect of modelling and the interpretation that follows from it. A detailed understanding of data and models is fundamental today, as the era of cold and objective (or supposedly objective) calculations by actuaries seems to be over.