This week, we will start our MAT998P course in Montréal, entitled “équité et discrimination des modèles prédictifs” (fairness and discrimination in predictive models). It will mainly be based on the forthcoming textbook.
I can also mention the R package InsurFair, which can be installed directly from GitHub:
> library(devtools)                               # provides install_github()
> install_github("freakonometrics/InsurFair")     # install the package from its GitHub repository
And because it is the first lecture this week, I will start with some motivations… First of all, let me recall a definition, from Schauer (2006)
To be an actuary is to be a specialist in generalization, and actuaries engage in a form of decision making that is sometimes called actuarial. Actuaries guide insurance companies in making decisions about large categories that have the effect of attributing to the entire category certain characteristics that are probabilistically indicated by membership in the category, but that still may not be possessed by a particular member of the category.
Motivation #1. Redlining
In 1937, the HOLC (Home Owners’ Loan Corporation) produced the following map of Philadelphia, related to “residential security”
These maps were related to the concept of “redlining”. According to the Merriam-Webster dictionary,
to redline is (1) to withhold home-loan funds or insurance from neighborhoods considered poor economic risks; (2) to discriminate against in housing or insurance.
On the (fictitious) maps below, three variables are plotted:
- red and green areas (risky / non-risky),
- an unsanitary index (on a 0-100 scale),
- the proportion of Black inhabitants per neighborhood.
In an insurance context, risky areas (with a higher premium) should be correlated with the unsanitary index (or any risk-related variable), and those are legitimate predictive variables. But they can also be correlated with less legitimate variables, here possibly racial ones. The challenge is that many variables are correlated…
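To fix ideas, here is a small simulated illustration (all numbers are made up, and the variable names are mine): the premium depends only on the “legitimate” unsanitary index, but since that index is correlated with the sensitive variable, average premiums still differ between groups.
> set.seed(1)
> n <- 1e4
> minority <- rbinom(n, 1, 0.3)                               # sensitive attribute (toy)
> unsanitary <- rnorm(n, mean = 40 + 20 * minority, sd = 10)  # legitimate variable, correlated with it
> premium <- 500 + 5 * unsanitary + rnorm(n, sd = 50)         # premium uses only the legitimate variable
> cor(minority, unsanitary)                                   # the two variables are correlated
> tapply(premium, minority, mean)                             # yet average premiums differ by group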
I could mention here that, for Glenn (2000), the insurer’s risk selection process has two sides:
- the one presented to regulators and policyholders (numbers, statistics and objectivity),
- the other presented to underwriters (stories, character and subjective judgment).
The rhetoric of insurance exclusion – numbers, objectivity and statistics – forms what Brian Glenn calls
the myth of the actuary (…) a powerful rhetorical situation in which decisions appear to be based on objectively determined criteria when they are also largely based on subjective ones (…) or the subjective nature of a seemingly objective process.
Glenn (2003) claimed that there are many ways to rate accurately: insurers can rate risks in many different ways, depending on the stories they tell about which characteristics are important and which are not.
The fact that the selection of risk factors is subjective and contingent upon narratives of risk and responsibility has in the past played a far larger role than whether or not someone with a wood stove is charged higher premiums (…) virtually every aspect of the insurance industry is predicated on stories first and then numbers
Motivation #2. “Gender directive”, 2004/113/EC
From the Treaty on European Union (26.10.2012)
Art. 2 The Union is founded on the values of respect for human dignity, freedom, democracy, equality, the rule of law and respect for human rights, including the rights of persons belonging to minorities. These values are common to the Member States in a society in which pluralism, non-discrimination, tolerance, justice, solidarity and equality between women and men prevail.
Art. 3 (…) It shall combat social exclusion and discrimination, and shall promote social justice and protection, equality between women and men, solidarity between generations and protection of the rights of the child.
from the Charter of Fundamental Rights of the European Union (18.12.2000)
Art. 21 (Non discrimination): Any discrimination based on any ground such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation shall be prohibited.
Art. 23 (Equality between men and women) Equality between men and women must be ensured in all areas, including employment, work and pay. The principle of equality shall not prevent the maintenance or adoption of measures providing for specific advantages in favour of the under-represented sex.
and from the EU Directive 2004/113/EC (2004 version)
Art. 5 (Actuarial factors)
1. Member States shall ensure that in all new contracts concluded after 21 December 2007 at the latest, the use of sex as a factor in the calculation of premiums and benefits for the purposes of insurance and related financial services shall not result in differences in individuals’ premiums and benefits.
2. Notwithstanding paragraph 1, Member States may decide before 21 December 2007 to permit proportionate differences in individuals’ premiums and benefits where the use of sex is a determining factor in the assessment of risk based on relevant and accurate actuarial and statistical data. The Member States concerned shall inform the Commission and ensure that accurate data relevant to the use of sex as a determining actuarial factor are compiled, published and regularly updated.
There was initially (2004) an opt-out clause (Article 5(2)): where gender is a determining factor in the assessment of risk, based on relevant and accurate actuarial and statistical data, proportionate differences in individuals’ premiums and benefits were allowed.
In March 2011, the European Court of Justice issued its judgment in the “Test-Achats” case. The ECJ ruled that Article 5(2) was invalid; thus, insurers were no longer able to use gender as a risk factor when pricing policies.
Other legal documents in Europe can be mentioned, such as the “Ten Oever” judgment (Gerardus Cornelis Ten Oever v Stichting Bedrijfspensioenfonds voor het Glazenwassers- en Schoonmaakbedrijf). In April 1993, Advocate General Van Gerven argued (see De Baere (2012)) that
the fact that women generally live longer than men has no significance at all for the life expectancy of a specific individual and it is not acceptable for an individual to be penalized on account of assumptions which are not certain to be true in his specific case,
which could be related to the concept of “injustice by generalization”.
Motivation #3. Colorado (September 27, 2023)
On September 27, 2023, the Colorado Division of Insurance released a proposed regulation entitled Concerning Quantitative Testing of External Consumer Data and Information Sources, Algorithms, and Predictive Models Used for Life Insurance Underwriting for Unfairly Discriminatory Outcomes.
Section 4 (Definitions) : Bayesian Improved First Name Surname Geocoding, or “BIFSG” means, for the purposes of this regulation, the statistical methodology developed by the RAND corporation for estimating race and ethnicity.
External Consumer Data and Information Source, or “ECDIS” means, for the purposes of this regulation, a data source or an information source that is used by a life insurer to supplement or supplant traditional underwriting factors. This term includes credit scores, credit history, social media habits, purchasing habits, home ownership, educational attainment, licensures, civil judgments, court records, occupation that does not have a direct relationship to mortality, morbidity or longevity risk, consumer-generated Internet of Things data, biometric data, and any insurance risk scores derived by the insurer or third-party from the above listed or similar data and/or information source.
Then we have different sections, where insurers are asked to “estimate” the race or ethnicity of policyholders
Section 5 (Estimating Race and Ethnicity) : Insurers shall estimate the race or ethnicity of all proposed insureds that have applied for coverage on or after the insurer’s initial adoption of the use of ECDIS, or algorithms and predictive models that use ECDIS, including a third party acting on behalf of the insurer that used ECDIS, or algorithms and predictive models that used ECDIS, in the underwriting decision-making process, by utilizing:
1. BIFSG and the insureds’ or proposed insureds’ name and geolocation information included in the application(s) for life insurance shall be used to estimate the race and ethnicity of each insured or proposed insured.
2. For the purposes of BIFSG, the following racial and ethnic categories shall be used: Hispanic, Black, Asian Pacific Islander (API), and White.
Section 6 (Application Approval Decision Testing Requirements) : Using the BIFSG estimated race and ethnicity of proposed insureds and the following methodology, insurers shall calculate whether Hispanic, Black, and API proposed insureds are disapproved at a statistically significant different rate relative to White applicants for whom the insurer, or a third party acting on behalf of the insurer, used ECDIS, or an algorithm or predictive model that used ECDIS, in the underwriting decision-making process.
1. Logistic regression shall be used to model the binary underwriting outcome of either approved or denied.
2. The following factors may be accounted for as control variables in the regression model: policy type, face amount, age, gender, and tobacco use.
3. The estimated race or ethnicity of the proposed insureds shall be accounted for by including Hispanic, Black, and Asian Pacific Islander (API) as separate dummy variables in the regression model.
4. Determine if there is a statistically significant difference in approval rates for each BIFSG estimated race or ethnicity variable as indicated by a p-value of less than .05.
a. If there is not a statistically significant difference in approval rates, no further testing is required.
b. If there is a statistically significant difference in approval rates, the insurer shall determine whether the difference in approval rates is five (5) percentage points or greater as indicated by the marginal effects value of each BIFSG estimated race or ethnicity variable. (…)
or
Section 7 (Premium Rate Testing Requirements) : Using the insureds’ BIFSG estimated race and ethnicity, insurers shall determine if there is a statistically significant difference in the premium rate per $1,000 of face amount for policies issued to Hispanic, Black, and API insureds relative to White insureds for whom the insurer, or a third party acting on behalf of the insurer, used ECDIS, or an algorithm or predictive model that used ECDIS, in the underwriting decision-making process.
1. Linear regression shall be used to model the continuous numerical outcome of premium rate per $1,000 of face amount.
2. The following factors may be accounted for as control variables in the regression model: policy type, face amount, age, gender, and tobacco use.
3. The estimated race or ethnicity of the proposed insureds shall be accounted for by including Hispanic, Black, and Asian Pacific Islander (API) as separate dummy variables in the regression model.
4. Determine if there is a statistically significant difference in the premium rate per $1,000 of face amount for each BIFSG estimated race or ethnicity variable as indicated by a p-value of less than .05.
a. If there is not a statistically significant difference in premium rate per $1,000 of face amount, no further testing is required.
b. If there is a statistically significant difference in premium rate per $1,000 of face amount, determine whether the premium rate per $1,000 of face amount is at least 5% more than the average premium rate per $1,000 for all policies.
i. If the difference in premium rate per $1,000 of face amount is less than 5%, no further testing is required.
ii. If the difference in premium rate per $1,000 of face amount is 5% or greater, further testing is required as described in Section 8.
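Sections 6 and 7 thus describe a fairly standard regression-based disparity test. As a minimal sketch of the Section 6 test, on simulated data (the variable names, the sample size, and the way the average marginal effect is computed are my own choices, not the regulation’s), one could write:
> set.seed(42)
> n <- 5000
> db <- data.frame(
+   approved = rbinom(n, 1, 0.8),                             # underwriting outcome (approved / denied)
+   race = sample(c("White", "Black", "Hispanic", "API"), n, replace = TRUE),  # BIFSG estimate
+   age = round(runif(n, 25, 70)),
+   gender = sample(c("F", "M"), n, replace = TRUE),
+   tobacco = rbinom(n, 1, 0.2),
+   face = sample(c(1e5, 2.5e5, 5e5), n, replace = TRUE))
> db$race <- relevel(factor(db$race), ref = "White")          # White applicants as the reference group
> reg <- glm(approved ~ race + age + gender + tobacco + face,
+   family = binomial, data = db)                             # logistic regression with control variables
> summary(reg)$coefficients[c("raceBlack", "raceHispanic", "raceAPI"), ]  # p-values (step 4)
> p1 <- predict(reg, newdata = transform(db, race = "Black"), type = "response")
> p0 <- predict(reg, newdata = transform(db, race = "White"), type = "response")
> mean(p1 - p0)                                               # average marginal effect, to compare with 5 points (step 4b)
The Section 7 test follows the same logic, with a linear regression (lm()) on the premium rate per $1,000 of face amount instead of a logistic regression.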
The regulation then continues with further sections. In order to illustrate, we can use some data from the Atlanta region.
We can change the first and last names of people (while keeping other relevant information, including the ZIP code) and compare the “predictions” of race (White, Black, Hispanic, Asian, etc.).
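As a rough sketch, something similar can be done with the wru R package, which implements a related Bayesian surname-and-geolocation approach (it is not the RAND BIFSG methodology itself, and the surnames below are made up):
> library(wru)
> people <- data.frame(surname = c("Washington", "Martinez", "Nguyen", "Miller"),
+   state = "GA")                                             # Atlanta is in Georgia
> predict_race(voter.file = people, surname.only = TRUE)      # posterior probabilities per category
The output adds one column of estimated probabilities per racial or ethnic category; swapping the surname while keeping everything else fixed changes these probabilities, which is precisely the exercise described above.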
Motivation #4. Motor Insurance in the U.S.
In the context of motor insurance in the U.S., recall that legal restrictions are set at the state level, and we can observe some diversity in what “sensitive” can mean (via thezebra).
(etc.) We will also discuss Avraham et al. (2013), which provides a long discussion across US states.
Motivation #5. Graduate Admission (UC Berkeley)
Another motivation is the popular article by Bickel, Hammel and O’Connell (1975).
The dataset mentioned in the article is the following
the bias in the aggregated data stems not from any pattern of discrimination on the part of admissions committees, which seems quite fair on the whole, but apparently from prior screening at earlier levels of the educational system. Women are shunted by their socialization and education toward fields of graduate study that are generally more crowded, less productive of completed degrees, and less well funded, and that frequently offer poorer professional employment prospects
As we can see, if we formalize, we have (almost) the following: overall, P(admitted | woman) < P(admitted | man), while, within (almost) every department, P(admitted | woman, dept) ≥ P(admitted | man, dept).
This is Simpson’s paradox. Another simple example is related to mortality: the (overall) mortality rate for women (picked at random in the entire population) was 0.812% in Costa Rica, lower than 0.929% in Sweden. But as we can see on the left, below, at any age, mortality rates are lower in Sweden than in Costa Rica.
The paradox can easily be explained if we look at age structures in both countries. Long story short, in Costa Rica, picking someone randomly means that the person is very likely to be (very) young, with a low mortality rate; in Sweden, the person is more likely to be older, with a higher mortality rate.
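Coming back to the Berkeley example, the data ships with base R as the UCBAdmissions table, so the paradox can be checked in a few lines:
> tab <- apply(UCBAdmissions, c(1, 2), sum)                   # aggregate over departments
> prop.table(tab, margin = 2)                                 # overall admission rates by gender
> sapply(dimnames(UCBAdmissions)$Dept, function(d)
+   prop.table(UCBAdmissions[, , d], margin = 2)["Admitted", ])  # admission rates, department by department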
Motivation #6. ProPublica, Actuarial Justice
We will also mention actuarial justice, and Angwin et al. (2016)
Hence, looking at the same data from different perspectives can lead to different conclusions. More robust conclusions can be obtained when looking at distributions of scores (instead of simple binary predictions), and we can also consider the temporal process (again, instead of simply binary variables subject to temporal censoring).
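As a toy illustration (simulated scores, not the actual COMPAS data), one can contrast the binary view at a fixed threshold with the full score distributions:
> set.seed(123)
> score_A <- rbeta(1000, 2, 5)                                # simulated risk scores, group A
> score_B <- rbeta(1000, 3, 5)                                # simulated risk scores, group B
> c(A = mean(score_A > .5), B = mean(score_B > .5))           # binary view: share classified "high risk"
> plot(density(score_A), lwd = 2, main = "simulated score distributions")
> lines(density(score_B), lwd = 2, lty = 2)                   # the full distributions tell a richer story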
Motivation #7. Insurance in Québec
Two final motivations, based on French-language legal texts. In Québec, there is the Charte des droits et libertés de la personne (C-12), with a very clear definition of what “discrimination” means,
Art. 10 Every person has a right to full and equal recognition and exercise of his human rights and freedoms, without distinction, exclusion or preference based on race, colour, sex, gender identity or expression, pregnancy, sexual orientation, civil status, age except as provided by law, religion, political convictions, language, ethnic or national origin, social condition, a handicap or the use of any means to palliate a handicap.
Discrimination exists where such a distinction, exclusion or preference has the effect of nullifying or impairing such right.
But, interestingly, insurers can do almost anything they want,
Art. 20.1 In an insurance or pension contract, a social benefits plan, a retirement, pension or insurance plan, or a public pension or public insurance plan, a distinction, exclusion or preference based on age, sex or civil status is deemed non-discriminatory where the use thereof is warranted and the basis therefor is a risk determination factor based on actuarial data.
Motivation #8. Intention
And finally, I can mention that in many countries (such as France), “indirect discrimination” is considered discriminatory, so “intention” has nothing to do with the problem… Loi no 2008-496 du 27 mai 2008 states that
Art. 1 Indirect discrimination is constituted by a provision, criterion or practice that is neutral in appearance, but likely to entail, on one of the grounds mentioned in the first paragraph, a particular disadvantage for some persons compared with others, unless that provision, criterion or practice is objectively justified by a legitimate aim and the means of achieving that aim are necessary and appropriate.
This law is an extension of Loi no. 72-546 du 1er juillet 1972, which abolished the requirement for specific intent.
Again, following Avraham (2017), keep in mind that insurance is very specific with regard to discrimination
What is unique about insurance is that even statistical discrimination which by definition is absent of any malicious intentions, poses significant moral and legal challenges. Why? Because on the one hand, policy makers would like insurers to treat their insureds equally, without discriminating based on race, gender, age, or other characteristics, even if it makes statistical sense to discriminate (…) On the other hand, at the core of insurance business lies discrimination between risky and non-risky insureds. But riskiness often statistically correlates with the same characteristics policy makers would like to prohibit insurers from taking into account.
That will be the topic of the course…