Tag Archives: fairness

Fairness and discrimination, PhD Course, #2 Insurance and risk classes

For the second course, we will return briefly to insurance pricing in the context of heterogeneous portfolios and risk classification (slides are still online on the GitHub repository). The starting point will be the pure premium.

See our online textbook, with Michel Denuit, Non Life Insurance Mathematics, for additional motivation. If we have some risk-related variables \boldsymbol{x}=(x_1,\cdots,x_k), the pure premium will be the conditional expectation,
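That is, writing Y for the loss, and \pi(\boldsymbol{x}) for the premium seen as a function of the characteristics (the notation is just a convenient shorthand), \pi(\boldsymbol{x})=\mathbb{E}\big[Y\big\vert \boldsymbol{X}=\boldsymbol{x}\big].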

Here also, we have a law of large numbers for the conditional expected value,
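Informally, in the discrete case (this is only the empirical-average heuristic, not a formal statement), \frac{1}{n_{\boldsymbol{x}}}\sum_{i:\boldsymbol{x}_i=\boldsymbol{x}} y_i\ \longrightarrow\ \mathbb{E}\big[Y\big\vert\boldsymbol{X}=\boldsymbol{x}\big], \text{ where } n_{\boldsymbol{x}}=\#\{i:\boldsymbol{x}_i=\boldsymbol{x}\}.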

This relationship, which defines the conditional expected value as the limiting value of a conditional frequency, cannot be used to properly define \mathbb{P}[Y|\boldsymbol{X}=\boldsymbol{x}] and \mathbb{E}[Y|\boldsymbol{X}=\boldsymbol{x}]. One can consider a limit,

\mathbb{P}\big(Y\in \mathcal{A}\big\vert X = x\big)=\lim_{\epsilon\to0}\frac{\mathbb{P}(\{Y\in \mathcal{A}\}\cap\{|X -x|\leq \epsilon\})}{\mathbb{P}(\{|X -x|\leq \epsilon\})}

or

\mathbb{P}\big(Y\in \mathcal{A}\big\vert X = x\big)=\lim_{\epsilon\to0}\mathbb{P}\big(Y\in \mathcal{A}\big\vert |X -x|\leq \epsilon\big),

as in the law of the unconscious statistician, or as Proschan and Presnell (1998) wrote it,

statisticians make liberal use of conditioning arguments to shorten what would otherwise be long proofs

We can now compute conditional frequencies, given some risk characteristics, for some quantity of interest y, such as the age at death in life insurance contracts.

Demographic risk and heterogeneity

First, we will see some gender-based life tables, starting with the one obtained by Nicolaas Struyck (see e.g. Alberts et al. (2014))

More recently, in France, some wealth-based life tables were obtained, for various quantiles

And finally, we will see some life tables obtained 50 years ago in the US, with a racial distinction

Mean and variance decomposition

About pure premiums, an important property is the law of total expectation, and a desirable property is what we will call the “balance property”.
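In symbols, the law of total expectation states that \mathbb{E}\big[\mathbb{E}[Y\vert\boldsymbol{X}]\big]=\mathbb{E}[Y], and the balance property is its portfolio counterpart: over the (training) portfolio, premiums should add up to losses, \sum_i \widehat{y}_i=\sum_i y_i (a property satisfied, for instance, by GLMs with the canonical link function).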

We will also mention the variance, and its decomposition, depending on whether we take heterogeneity into account or not. With homogeneous pricing, everyone is charged the same premium, \mathbb{E}[Y], and the variance to consider is simply \text{Var}[Y].

If we use the “true” underlying risk factor, \Theta, we have the standard variance decomposition, also called the law of total variance, \text{Var}[Y]=\mathbb{E}\big[\text{Var}[Y\vert\Theta]\big]+\text{Var}\big[\mathbb{E}[Y\vert\Theta]\big], splitting the total variance into a within-class and a between-class component.

And finally, if we do not observe \Theta, but we have a collection of covariates, \boldsymbol{X}=(X_1,\cdots,X_k),
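The same decomposition then holds with \boldsymbol{X} in place of \Theta, \text{Var}[Y]=\mathbb{E}\big[\text{Var}[Y\vert\boldsymbol{X}]\big]+\text{Var}\big[\mathbb{E}[Y\vert\boldsymbol{X}]\big], and if the covariates are informative about Y only through \Theta, the explained part \text{Var}[\mathbb{E}[Y\vert\boldsymbol{X}]] is at most \text{Var}[\mathbb{E}[Y\vert\Theta]].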

Some historical perspectives

In the textbook, Insurance: Biases, Discrimination and Fairness, I have several paragraphs about a historical perspective, starting with insurance as clubs, without segmentation. Then segmentation started, with risk classes and groups. For example, according to Issues And Needed Improvements In State Regulation Of The Insurance Business, by Harry Havens, in 1979,

The price which a person pays for automobile insurance depends on age, sex, marital status, place of residence and other factors. This risk classification system produces widely differing prices for the same coverage for different people. Questions have been raised about the fairness of this system, and especially about its reliability as a predictor of risk for a particular individual. While we have not tried to judge the propriety of these groupings, and the resulting price differences, we believe that the questions about them warrant careful consideration by the State insurance departments. In most States the authority to examine classification plans is based on the requirement that insurance rates are neither inadequate, excessive, nor unfairly discriminatory. The only criterion for approving classifications in most States is that the classifications be statistically justified — that is, that they reasonably reflect loss experience. Relative rates with respect to age, sex, and marital status are based on the analysis of national data. A youthful male driver, for example, is charged twice as much as an older driver all over the country (…) It has also been claimed that insurance companies engage in redlining – the arbitrary denial of insurance to everyone living in a particular neighborhood. Community groups and others have complained that State regulators have not been diligent in preventing redlining and other forms of improper discrimination that make insurance unavailable in certain areas. In addition to outright refusals to insure, geographic discrimination can include such practices as: selective placement of agents to reduce business in some areas, terminating agents and not renewing their book of business, pricing insurance at unaffordable levels, and instructing agents to avoid certain areas. We reviewed what the State insurance departments were doing in response to these problems. To determine if redlining exists, it is necessary to collect data on a geographic basis. Such data should include current insurance policies, new policies being written, cancellations, and non-renewals. It is also important to examine data on losses by neighborhoods within existing rating territories because marked discrepancies within territories would cast doubt on the validity of territorial boundaries. Yet, not even a fifth of the States collect anything other than loss data, and that data is gathered on a territory-wide basis.

According to The Role of Risk Classification in Property and Casualty Insurance: A Study of the Risk Assessment Process : Final Report, by Barbara Casey, Jacques Pezier and Carl Spetzler, in 1976,

On the other hand, the opinion that distinctions based on sex, or any other group variable, necessarily violate individual rights reflects ignorance of the basic rules of logical inference in that it would arbitrarily forbid the use of relevant information. It would be equally fallacious to reject a classification system based on socially acceptable variables because the results appear discriminatory. For example, a classification system may be built on use of car, mileage, merit rating, and other variables, excluding sex. However, when verifying the average rates according to sex one may discover significant differences between males and females. Refusing to allow such differences would be attempting to distort reality by choosing to be selectively blind. The use of rating territories is a case in point. Geographical divisions, however designed, are often correlated with socio-demographic factors such as income level and race because of natural aggregation or forced segregation according to these factors. Again we conclude that insurance companies should be free to delineate territories and assess territorial differences as well as they can. At the same time, insurance companies should recognize that it is in their best interest to be objective and use clearly relevant factors to define territories lest they be accused of invidious discrimination by the public. (…) One possible standard does exist for exception to the counsel that particular rating variables should not be proscribed. What we have called the “equal treatment” standard of fairness may precipitate a societal decision that the process of differentiating among individuals on the basis of certain variables is discriminatory and intolerable. This type of decision should be made on a specific, statutory basis. Once taken, it must be adhered to in private and public transactions alike and enforced by the insurance regulator. This is, in effect, a standard for conduct that by design transcends and preempts economic considerations. Because it is not applied without economic cost, however, insurance regulators and the industry should participate in and inform legislative deliberations that would ban the use of particular rating variables as discriminatory.

And then, more recently, we started to talk about personalization, as in Barry and Charpentier (2020). And next week, we will start talking about predictive modeling, and machine learning.

Fairness and discrimination, PhD Course, #1 Motivation

This week, we will start our MAT998P course, in Montréal, entitled “équité et discrimination des modèles prédictifs” (fairness and discrimination of predictive models). It will mainly be based on the forthcoming textbook,

I can also mention the R package

library(devtools) # needed for install_github()
install_github("freakonometrics/InsurFair")

And because it is the first course, I will start with some motivations this week… First of all, let me recall a definition, from Schauer (2006)

To be an actuary is to be a specialist in generalization, and actuaries engage in a form of decision making that is sometimes called actuarial. Actuaries guide insurance companies in making decisions about large categories that have the effect of attributing to the entire category certain characteristics that are probabilistically indicated by membership in the category, but that still may not be possessed by a particular member of the category.

Motivation #1 Redlining

In 1937, the HOLC (Home Owners’ Loan Corporation) produced the following map of Philadelphia, related to “residential security”

These maps were related to the concept of “redlining”. According to the Merriam-Webster dictionary,

to redline is (1) to withhold home-loan funds or insurance from neighborhoods considered poor economic risks; (2) to discriminate against in housing or insurance.

On the (fictitious) maps below, three variables are plotted:

  • some red and green areas (risky vs. non-risky)
  • an unsanitary index (on a 0-100 scale)
  • the proportion of Black inhabitants per neighborhood

In an insurance context, risky areas (with a higher premium) should be correlated with the unsanitary index (or any risk-related variable), and those variables are legitimate predictors. But they can also be related to less legitimate variables, such as race here. The challenge is that many variables are correlated…
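To fix ideas, here is a small simulation sketch in R (purely made-up numbers): the premium only uses a legitimate variable, yet average premiums still differ across groups, because that variable is correlated with the sensitive one.

# hypothetical simulation: premiums based only on a legitimate variable z
# still differ, on average, across groups of the sensitive attribute s,
# because z and s are correlated
set.seed(123)
n <- 1e5
s <- rbinom(n, 1, 0.3)                     # sensitive attribute (group membership)
z <- rnorm(n, mean = 50 + 10 * s, sd = 15) # "unsanitary index", correlated with s
y <- rpois(n, lambda = exp(-2 + 0.02 * z)) # claim frequency, driven by z only
premium <- predict(glm(y ~ z, family = poisson), type = "response")
tapply(premium, s, mean)                   # average premiums differ between groups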

I could mention here that, for Glenn (2000), the insurer's risk selection process has two sides:

  • the one presented to regulators and policyholders (numbers, statistics and objectivity),
  • the other presented to underwriters (stories, character and subjective judgment).

The rhetoric of insurance exclusion – numbers, objectivity and statistics – forms what Brian Glenn calls

the myth of the actuary (…) a powerful rhetorical situation in which decisions appear to be based on objectively determined criteria when they are also largely based on subjective ones (…) or the subjective nature of a seemingly objective process.

Glenn (2003) claimed that there are many ways to rate accurately. Insurers can rate risks in many different ways, depending on the stories they tell about which characteristics are important and which are not.

The fact that the selection of risk factors is subjective and contingent upon narratives of risk and responsibility has in the past played a far larger role than whether or not someone with a wood stove is charged higher premiums (…) virtually every aspect of the insurance industry is predicated on stories first and then numbers

Motivation #2. “Gender directive”, 2004/113/EC

From the Treaty on European Union (26.10.2012)

Art. 2 The Union is founded on the values of respect for human dignity, freedom, democracy, equality, the rule of law and respect for human rights, including the rights of persons belonging to minorities. These values are common to the Member States in a society in which pluralism, non-discrimination, tolerance, justice, solidarity and equality between women and men prevail.

Art. 3 (…) It shall combat social exclusion and discrimination, and shall promote social justice and protection, equality between women and men, solidarity between generations and protection of the rights of the child.

from the Charter of Fundamental Rights of the European Union (18.12.2000)

Art. 21 (Non discrimination): Any discrimination based on any ground such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation shall be prohibited.

Art. 23 (Equality between men and women) Equality between men and women must be ensured in all areas, including employment, work and pay. The principle of equality shall not prevent the maintenance or adoption of measures providing for specific advantages in favour of the under-represented sex.

and from the EU Directive 2004/113/EC (2004 version)

Art. 5 (Actuarial factors)

1. Member States shall ensure that in all new contracts concluded after 21 December 2007 at the latest, the use of sex as a factor in the calculation of premiums and benefits for the purposes of insurance and related financial services shall not result in differences in individuals’ premiums and benefits.

2. Notwithstanding paragraph 1, Member States may decide before 21 December 2007 to permit proportionate differences in individuals’ premiums and benefits where the use of sex is a determining factor in the assessment of risk based on relevant and accurate actuarial and statistical data. The Member States concerned shall inform the Commission and ensure that accurate data relevant to the use of sex as a determining actuarial factor are compiled, published and regularly updated.

There was initially (2004) an opt-out clause (Article 5(2)): where gender is a determining factor in the assessment of risk, based on relevant and accurate actuarial and statistical data, proportionate differences in individual premiums or benefits were allowed.

In March 2011, the European Court of Justice issued its judgment in the “Test-Achats” case. The ECJ ruled that Article 5(2) was invalid. Thus, insurers were no longer able to use gender as a risk factor when pricing policies.

Other legal documents in Europe can be mentioned, such as the “Ten Oever” judgment (Gerardus Cornelis Ten Oever v Stichting Bedrijfspensioenfonds voor het Glazenwassers- en Schoonmaakbedrijf). In April 1993, Advocate General Van Gerven argued that (see De Baere (2012))

the fact that women generally live longer than men has no significance at all for the life expectancy of a specific individual and it is not acceptable for an individual to be penalized on account of assumptions which are not certain to be true in his specific case,

which could be related to the concept of “injustice by generalization”.

Motivation #3. Colorado (September 27, 2023)

On September 27, 2023, the Colorado Division of Insurance released a new proposed regulation entitled Concerning Quantitative Testing of External Consumer Data and Information Sources, Algorithms, and Predictive Models Used for Life Insurance Underwriting for Unfairly Discriminatory Outcomes.

Section 4 (Definitions) Bayesian Improved First Name Surname Geocoding, or “BIFSG” means, for the purposes of this regulation, the statistical methodology developed by the RAND corporation for estimating race and ethnicity.

External Consumer Data and Information Source, or “ECDIS” means, for the purposes of this regulation, a data source or an information source that is used by a life insurer to supplement or supplant traditional underwriting factors. This term includes credit scores, credit history, social media habits, purchasing habits, home ownership, educational attainment, licensures, civil judgments, court records, occupation that does not have a direct relationship to mortality, morbidity or longevity risk, consumer-generated Internet of Things data, biometric data, and any insurance risk scores derived by the insurer or third-party from the above listed or similar data and/or information source.

Then we have different sections, where insurers are asked to “estimate” the race or ethnicity of policyholders

Section 5 (Estimating Race and Ethnicity) : Insurers shall estimate the race or ethnicity of all proposed insureds that have applied for coverage on or after the insurer’s initial adoption of the use of ECDIS, or algorithms and predictive models that use ECDIS, including a third party acting on behalf of the insurer that used ECDIS, or algorithms and predictive models that used ECDIS, in the underwriting decision-making process, by utilizing:

1. BIFSG and the insureds’ or proposed insureds’ name and geolocation information included in the application(s) for life insurance shall be used to estimate the race and ethnicity of each insured or proposed insured.

2. For the purposes of BIFSG, the following racial and ethnic categories shall be used: Hispanic, Black, Asian Pacific Islander (API), and White.

Section 6 (Application Approval Decision Testing Requirements) : Using the BIFSG estimated race and ethnicity of proposed insureds and the following methodology, insurers shall calculate whether Hispanic, Black, and API proposed insureds are disapproved at a statistically significant different rate relative to White applicants for whom the insurer, or a third party acting on behalf of the insurer, used ECDIS, or an algorithm or predictive model that used ECDIS, in the underwriting decision-making process.

1. Logistic regression shall be used to model the binary underwriting outcome of either approved or denied.

2. The following factors may be accounted for as control variables in the regression model: policy type, face amount, age, gender, and tobacco use.

3. The estimated race or ethnicity of the proposed insureds shall be accounted for by including Hispanic, Black, and Asian Pacific Islander (API) as separate dummy variables in the regression model.

4. Determine if there is a statistically significant difference in approval rates for each BIFSG estimated race or ethnicity variable as indicated by a p-value of less than .05.

a. If there is not a statistically significant difference in approval rates, no further testing is required.

b. If there is a statistically significant difference in approval rates, the insurer shall determine whether the difference in approval rates is five (5) percentage points or greater as indicated by the marginal effects value of each BIFSG estimated race or ethnicity variable. (…)

or

Section 7 (Premium Rate Testing Requirements) : Using the insureds’ BIFSG estimated race and ethnicity, insurers shall determine if there is a statistically significant difference in the premium rate per $1,000 of face amount for policies issued to Hispanic, Black, and API insureds relative to White insureds for whom the insurer, or a third party acting on behalf of the insurer, used ECDIS, or an algorithm or predictive model that used ECDIS, in the underwriting decision-making process.

1. Linear regression shall be used to model the continuous numerical outcome of premium rate per $1,000 of face amount.

2. The following factors may be accounted for as control variables in the regression model: policy type, face amount, age, gender, and tobacco use.

3. The estimated race or ethnicity of the proposed insureds shall be accounted for by including Hispanic, Black, and Asian Pacific  Islander (API) as separate dummy variables in the regression model.

4. Determine if there is a statistically significant difference in the premium rate per $1,000 of face amount for each BIFSG estimated race or ethnicity variable as indicated by a p-value of less than .05.

a. If there is not a statistically significant difference in premium rate per $1,000 of face amount, no further testing is required.

b. If there is a statistically significant difference in premium rate per $1,000 of face amount, determine whether the premium rate per $1,000 of face amount is at least 5% more than the average premium rate per $1,000 for all policies.

i. If the difference in premium rate per $1,000 of face amount is less than 5%, no further testing is required.

ii. If the difference in premium rate per $1,000 of face amount is 5% or greater, further testing is required as described in Section 8.

(etc). To illustrate, we can use some data from the region of Atlanta


We can change people’s first and last names (keeping other relevant information, including the ZIP code) and compare the “predictions” of race (White, Black, Hispanic, Asian, etc.)
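To make the procedure more concrete, here is a minimal R sketch of the Section 6 approval-rate test, on simulated and entirely hypothetical data (the data frame and column names are mine, not the regulator's).

# simulated, hypothetical underwriting data
set.seed(1)
n <- 5000
df <- data.frame(
  race_bifsg  = factor(sample(c("White", "Black", "Hispanic", "API"), n, replace = TRUE)),
  policy_type = factor(sample(c("term", "whole"), n, replace = TRUE)),
  face_amount = 1000 * round(runif(n, 50, 500)),
  age         = sample(25:70, n, replace = TRUE),
  gender      = factor(sample(c("F", "M"), n, replace = TRUE)),
  tobacco     = rbinom(n, 1, 0.2)
)
df$approved   <- rbinom(n, 1, plogis(2 - 0.03 * df$age - 0.5 * df$tobacco))
df$race_bifsg <- relevel(df$race_bifsg, ref = "White")   # White as the reference category
# logistic regression of the approval decision, with the allowed control variables
fit <- glm(approved ~ race_bifsg + policy_type + face_amount + age + gender + tobacco,
           family = binomial, data = df)
coef_table <- summary(fit)$coefficients
coef_table[grep("race_bifsg", rownames(coef_table)), ]   # p-values of the race/ethnicity dummies
# any dummy with p < .05 would then be checked against the 5-percentage-point threshold,
# using marginal effects (e.g. via the margins package)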

Motivation #4. Motor Insurance in the U.S.

In the context of motor insurance in the U.S., recall that legal restrictions are set state by state, and we can observe some diversity in what “sensitive” could mean (via thezebra)

(etc). We will also discuss Avraham et al. (2013), which provides a long discussion across US states.

Motivation #5. Graduate Admission (UC Berkeley)

Another motivation is the popular article by Bickel, Hammel, and O’Connell (1975)

The dataset mentioned in the article is the following

the bias in the aggregated data stems not from any pattern of discrimination on the part of admissions committees, which seems quite fair on the whole, but apparently from prior screening at earlier levels of the educational system. Women are shunted by their socialization and education toward fields of graduate study that are generally more crowded, less productive of completed degrees, and less well funded, and that frequently offer poorer professional employment prospects

As we can see, if we formalize, we have (almost)
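Schematically, writing A for admission, W and M for gender, and d for the department, \mathbb{P}[A\vert W]<\mathbb{P}[A\vert M] overall, while \mathbb{P}[A\vert W,d]\geq \mathbb{P}[A\vert M,d] in (almost) every department d.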

This is Simpson’s paradox. Another simple example is related to mortality: the (overall) mortality rate for women (picked at random in the entire population) was 0.812% in Costa Rica, lower than the 0.929% observed in Sweden. But as we can see on the left, below, at any age, mortality rates are lower in Sweden than in Costa Rica.

The paradox can easily be explained if we look at age structures in both countries. Long story short, in Costa Rica, picking someone randomly means that the person is very likely to be (very) young, with a low mortality rate; in Sweden, the person is more likely to be older, with a higher mortality rate.
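A tiny numerical sketch (made-up rates and age structures, not the actual Costa Rican and Swedish figures) shows how the reversal arises:

# country A has a young population, country B an older one, and
# lower mortality than A at every age, yet a higher overall rate
rate_A <- c(young = 0.0010, old = 0.040)  # age-specific mortality rates, country A
rate_B <- c(young = 0.0005, old = 0.030)  # lower at every age, country B
pop_A  <- c(young = 0.85, old = 0.15)     # age structure, country A
pop_B  <- c(young = 0.50, old = 0.50)     # age structure, country B
sum(rate_A * pop_A)                       # overall rate, country A: 0.00685
sum(rate_B * pop_B)                       # overall rate, country B: 0.01525 > 0.00685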

Motivation #6. Propublica, Actuarial Justice

We will also mention actuarial justice, and Angwin et al. (2016)

Hence, looking at the same data from different perspectives can lead to different conclusions. More robust conclusions can be obtained when looking at distributions of scores (instead of simple binary predictions)

and we can also consider temporal processes (again, instead of simple binary variables, with temporal censoring)

Motivation #7. Insurance in Québec

Two final motivations, based this time on French-language legal texts. In Québec, there is the Charte des droits et libertés de la personne (C-12), with a very clear definition of what “discrimination” means,

Art. 10 Every person has a right to full and equal recognition and exercise of his human rights and freedoms, without distinction, exclusion or preference based on race, colour, sex, gender identity or expression, pregnancy, sexual orientation, civil status, age except as provided by law, religion, political convictions, language, ethnic or national origin, social condition, a handicap or the use of any means to palliate a handicap.

Discrimination exists where such a distinction, exclusion or preference has the effect of nullifying or impairing that right.

But, interestingly, insurers can almost do anything they want,

Art. 20.1 In an insurance or pension contract, a social benefits plan, a retirement, pension or insurance plan, or a universal pension or insurance plan, a distinction, exclusion or preference based on age, sex or civil status is deemed non-discriminatory where its use is legitimate and the ground therefor is a factor used in determining risk, based on actuarial data.

Motivation #8. Intention

And finally, I can mention that in many countries (such as France), “indirect discrimination” is also considered discriminatory, so “intention” has nothing to do with the problem… The Loi no 2008-496 du 27 mai 2008 states that

Art. 1 Indirect discrimination consists of any provision, criterion or practice that is neutral in appearance, but liable to entail, for one of the grounds mentioned in the first paragraph, a particular disadvantage for some persons compared with others, unless that provision, criterion or practice is objectively justified by a legitimate aim and the means of achieving that aim are necessary and appropriate.

This law is an extension of Loi no. 72-546 du 1er juillet 1972, which abolished the requirement for specific intent.

Again, following Avraham (2017), keep in mind that insurance is very specific when it comes to discrimination

What is unique about insurance is that even statistical discrimination, which by definition is absent of any malicious intentions, poses significant moral and legal challenges. Why? Because on the one hand, policy makers would like insurers to treat their insureds equally, without discriminating based on race, gender, age, or other characteristics, even if it makes statistical sense to discriminate (…) On the other hand, at the core of insurance business lies discrimination between risky and non-risky insureds. But riskiness often statistically correlates with the same characteristics policy makers would like to prohibit insurers from taking into account.

That will be the topic of the course…

Melting contestation: insurance fairness and machine learning

A nice review of our paper with Laurence Barry, on montrealethics.ai,

Machine learning tends to replace the actuary in the selection of features and the building of pricing models. However, avoiding subjective judgments thanks to automation does not necessarily mean that biases are removed. Nor does the absence of bias warrant fairness. This paper critically analyzes discrimination and insurance fairness with machine learning.


Fondation SCOR, Fairness of predictive models: an application to insurance markets

The Scientific Council of the SCOR Foundation has decided to fund the research project “Fairness of predictive models: an application to insurance markets” until its anticipated completion in three years (2023-2025). The project will be led by the University of Quebec and directed by Arthur Charpentier, professor in the mathematics department of the University of Quebec in Montreal. This project aims to propose corrections to the automatic artificial intelligence algorithms that can be used to determine the optimal pricing of individual policies, in order to remove or limit the biases likely to generate inequities, or even discrimination based on gender, race, religion, origin, etc., in the coverage offered by insurers or reinsurers to policyholders. The subject is of both theoretical interest (better control of the black boxes constituted by models based on artificial intelligence algorithms) and practical interest (reduction of the risks of discrimination and inequity). From this point of view, it is very topical for insurers and reinsurers facing major reputational challenges in the context of the growing importance of social networks. In addition to his role at the University of Quebec, Arthur Charpentier is a member of the Institute of Actuaries, an internationally recognized expert in actuarial science, and the author of numerous academic articles published in renowned actuarial journals, both national and international.

Fairness Explainability using Optimal Transport with Applications in Image Classification

A revised version of our paper “Fairness Explainability using Optimal Transport with Applications in Image Classification” is now online, with more discussion about counterfactuals

Ensuring trust and accountability in Artificial Intelligence systems demands explainability of its outcomes. Despite significant progress in Explainable AI, human biases still taint a substantial portion of its training data, raising concerns about unfairness or discriminatory tendencies. Current approaches in the field of Algorithmic Fairness focus on mitigating such biases in the outcomes of a model, but few attempts have been made to try to explain why a model is biased. To bridge this gap between the two fields, we propose a comprehensive approach that uses optimal transport theory to uncover the causes of discrimination in Machine Learning applications, with a particular emphasis on image classification. We leverage Wasserstein barycenters to achieve fair predictions and introduce an extension to pinpoint bias-associated regions. This allows us to derive a cohesive system which uses the enforced fairness to measure each feature’s influence on the bias. Taking advantage of this interplay of enforcing and explaining fairness, our method holds significant implications for the development of trustworthy and unbiased AI systems, fostering transparency, accountability, and fairness in critical decision-making scenarios across diverse domains.

Parametric Fairness with Statistical Guarantees

Our paper Parametric Fairness with Statistical Guarantees is now available on ArXiv.

Algorithmic fairness has gained prominence due to societal and regulatory concerns about biases in Machine Learning models. Common group fairness metrics like Equalized Odds for classification or Demographic Parity for both classification and regression are widely used and a host of computationally advantageous post-processing methods have been developed around them. However, these metrics often limit users from incorporating domain knowledge. Despite meeting traditional fairness criteria, they can obscure issues related to intersectional fairness and even replicate unwanted intra-group biases in the resulting fair solution. To avoid this narrow perspective, we extend the concept of Demographic Parity to incorporate distributional properties in the predictions, allowing expert knowledge to be used in the fair solution. We illustrate the use of this new metric through a practical example of wages, and develop a parametric method that efficiently addresses practical challenges like limited training data and constraints on total spending, offering a robust solution for real-life applications.
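For context, the standard univariate Wasserstein-barycenter post-processing (a common baseline for Demographic Parity, not the parametric method proposed in the paper) can be sketched in a few lines of R: each score is replaced by the weighted average of the group-wise quantile functions, evaluated at the individual's within-group rank.

# baseline barycenter repair of a one-dimensional score (sketch, not the paper's method)
fair_score <- function(score, group) {
  group <- as.factor(group)
  w <- as.numeric(prop.table(table(group)))              # group weights
  u <- ave(score, group, FUN = function(s) ecdf(s)(s))   # within-group rank, in (0,1]
  # weighted average of the group quantile functions, evaluated at u
  Q <- sapply(levels(group), function(g) quantile(score[group == g], probs = u, names = FALSE, type = 1))
  as.numeric(Q %*% w)
}
set.seed(42)
s <- c(rnorm(600, 0.4, 0.1), rnorm(400, 0.6, 0.1))       # scores differ across two groups
g <- rep(c("A", "B"), c(600, 400))
tapply(fair_score(s, g), g, mean)                        # nearly equal group means after repair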

Presentation on fairness and discrimination in insurance, for Intact

Tomorrow morning, with Olivier and Marie-Pier Côté, we will be at the insurer Intact to talk about fairness and discrimination. Olivier will present his recent work on the use of causal models to propose “fair” models in insurance. The paper (A Fair Price to Pay: Exploiting Directed Acyclic Graphs for Fairness in Insurance) will be available soon!

A Fair price to pay: exploiting directed acyclic graphs for fairness in insurance

Tonight (Montréal time), Marie-Pier Côté will give a talk on “a fair price to pay: exploiting directed acyclic graphs for fairness in insurance” based on recent joint work with our PhD student, Olivier Côté, in Melbourne, Australia

Many jurisdictions have laws or guidelines stipulating that insurance companies must not discriminate on some specified policyholder characteristics. Omission of the prohibited variables from the models removes direct discrimination, but does not prevent proxy discrimination — a phenomenon especially prevalent when powerful predictive algorithms are fed with an abundance of allowed covariates. In the actuarial literature, there remains some confusion on the definition of indirect discrimination: this impedes the understanding of the goals of each fairness methodology and their comparison. In the causal inference literature, many tools, such as directed acyclic graphs (DAGs), help uncover various types of biases. A DAG describes the causal relationships between variables of interest and has clear dependence implications. We exploit this tool for fairness to formally define direct and indirect discrimination, to discuss potential sources of bias, and to understand the properties of different fairness methodologies. Four families of fair scores (best-estimate, unaware, aware and corrective) are placed in the DAG representing the insurance pricing problem. This allows us to study their behaviour in terms of direct and indirect discrimination. A comprehensive pedagogical example illustrates our findings.
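As a toy illustration (the graph below is mine, not the one from the paper), direct and indirect (proxy) discrimination can be read off the paths of a small causal graph, for instance with the dagitty package in R.

# s: sensitive attribute, x: allowed rating factor (a proxy for s), price: technical premium
library(dagitty)
g <- dagitty("dag{ s -> x ; x -> price ; s -> price }")
paths(g, from = "s", to = "price")$paths
# "s -> price" is the direct path, "s -> x -> price" the indirect (proxy) path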

More to come soon…

Insurance, biases, discrimination and fairness, v2

In the Summer of 2022, my report Insurance, biases, discrimination and fairness (v1) was officially uploaded on the website of the Institut Louis Bachelier. I spent another year adding illustrations and examples, and I sent the manuscript to the publisher at the beginning of the Summer of 2023. Because of delays, the book is not out yet, but the publisher allowed me to upload version 2 of Insurance, biases, discrimination and fairness. Note that it will serve as the lecture notes of the doctoral course I will give this Winter at ENSAE, in Paris, France.

The R functions (and package) will be uploaded on https://github.com/freakonometrics/InsurFair soon.

Talk at the ESSEC Risk Seminar

Thursday, I will be at La Défense, in Paris (France), to give a talk at the ESSEC Risk Seminar, entitled Causal Inference and Counterfactuals with Optimal Transport, with Applications in Fairness and Discrimination. Slides are now available, and the talk will be based on some recent papers, starting with Mitigating Discrimination in Insurance with Wasserstein Barycenters (presented last weekend at BIAS 2023), but also Fairness in Multi-Task Learning via Wasserstein Barycenters, and A Sequentially Fair Mechanism for Multiple Sensitive Attributes. There is also the textbook, which should appear before the winter.

Melting contestation: insurance fairness and machine learning

Our paper, Melting contestation: insurance fairness and machine learning, with Laurence Barry, is now published (in Ethics and Information Technology).

With their intensive use of data to classify and price risk, insurers have often been confronted with data-related issues of fairness and discrimination. This paper provides a comparative review of discrimination issues raised by traditional statistics versus machine learning in the context of insurance. We first examine historical contestations of insurance classification, showing that three types of bias (pure stereotypes, non-causal correlations, and causal effects that a society chooses to protect against) have been the main sources of dispute. The lens of this typology then allows us to look anew at the potential biases in insurance pricing implied by big data and machine learning, showing that despite utopic claims, social stereotypes continue to plague data, thus threatening to unconsciously reproduce these discriminations in insurance. To counter these effects, algorithmic fairness attempts to define mathematical indicators of non-bias. We argue that this may prove insufficient, since it assumes the existence of specific protected groups, which could only be made visible through public debate and contestation. These are less likely if the right to explanation is realized through personalized algorithms, which could reinforce the individualized perception of the social that blocks rather than encourages collective mobilization.

Fairness in Multi-Task Learning via Wasserstein Barycenters, at ECML PKDD 2023

Today, presentation of our paper Fairness in Multi-Task Learning via Wasserstein Barycenters at ECML PKDD, in Torino, by François. Slides are available online (and a poster can be found below)

The paper was actually published in Machine Learning and Knowledge Discovery in Databases: Research Track (295–312), available here.