In our previous post, we discussed “group fairness”. I might have gone a bit fast, so I decided to add some material about sensitive attributes, and proxies.
Sensitive attributes?
Almost everywhere, we can find a list of variables that are considered sensitive by law, since using them could lead to discrimination. As mentioned earlier, sensitive variables might change over time, and across regions…
Another issue with black boxes is that it might be hard to assess whether they rely on sensitive attributes. To extract information from pictures, in order to classify them or to detect objects, algorithms might use information that could be considered sensitive. First, recall the popular wolf vs. husky classifier, which actually detected snow in the background (since wolves appeared on snow in the training sample).
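As a toy illustration (synthetic 8×8 “images”, not the original wolf-husky study), here is a small sketch where the animal patch is identical in every picture and only the background differs: the classifier ends up putting almost all its weight on the background pixels.

```python
# Toy sketch of the "snow detector" issue (synthetic data, not the original study):
# the animal patch is the same in every image, only the background ("snow" vs.
# "forest") differs, so the classifier can only learn from the background.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, size = 500, 8                               # 500 tiny 8x8 "images"
animal = rng.normal(0, 1, (size, size))        # the same animal patch everywhere

def make_image(snow):
    img = (0.9 if snow else 0.1) + rng.normal(0, 0.1, (size, size))
    img[2:6, 2:6] = animal[2:6, 2:6]           # paste the identical animal patch
    return img.ravel()

y = rng.integers(0, 2, n)                      # 1 = "wolf", always photographed on snow
X = np.array([make_image(snow=(label == 1)) for label in y])

model = LogisticRegression(max_iter=1000).fit(X, y)
weights = np.abs(model.coef_).reshape(size, size)
mask = np.zeros((size, size), dtype=bool)
mask[2:6, 2:6] = True                          # pixels covered by the animal
print(f"mean |weight| on animal pixels    : {weights[mask].mean():.3f}")
print(f"mean |weight| on background pixels: {weights[~mask].mean():.3f}")
```

Since the animal pixels carry no information about the label, the fitted weights concentrate on the background, exactly like the snow in the wolf-husky story.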
This can also be the case for health issues, where classifiers can be influenced by the color of the skin (or possibly some other unexpected information).
Racism
The first sensitive attribute is probably race, which has been discussed in insurance for decades.
One should keep in mind that race is a social construct and, most of the time, it is based on self-identification.
This self-identification is what underlies the popular maps of race in the U.S.
Racism is often related to “colourism”, that is, discrimination based on skin tone.
Is it relevant in the context of insurance and risk?
It has been observed that African Americans in the U.S. were usually charged higher insurance premiums.
Keep in mind that discrimination has nothing to do with intention, as mentioned previously. Insurance pricing can be racist without any intention to be so. An important issue when quantifying that problem is that we actually need to observe that variable.
Sexism
Sexism is another popular example of discrimination, related to sex or gender.
Actuaries have been using gender-specific life tables for more than 300 years. And indeed, it seems that women live longer than men.
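As a quick illustration, here is a minimal sketch (using made-up Gompertz-type death probabilities, not an actual life table) of how gender-specific mortality translates into different life expectancies.

```python
# Curtate life expectancy at birth from annual death probabilities q_x
# (illustrative Gompertz-type rates, not a real life table).
import numpy as np

def curtate_life_expectancy(qx):
    survival = np.cumprod(1 - qx)   # probability of surviving to age 1, 2, ...
    return survival.sum()

ages = np.arange(0, 110)
qx_men   = np.minimum(1.0, 1e-4 * np.exp(0.085 * ages))
qx_women = np.minimum(1.0, 1e-4 * np.exp(0.080 * ages))   # slightly lower mortality

print(f"life expectancy, men  : {curtate_life_expectancy(qx_men):.1f}")
print(f"life expectancy, women: {curtate_life_expectancy(qx_women):.1f}")
```

With lower mortality at every age, the female table mechanically yields a higher life expectancy.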
Ageism
Age is another possible sensitive attribute, but it is more complicated. First, it is not a “club” (everyone moves through ages over their lifetime), and second, it is (somehow) clearly related to risk.
In datasets, there can also be selection bias related to age. For instance, during the COVID pandemic, triage was based on the age of patients. Treatments and tests can also be related to the age of patients. So this bias will probably have an impact on observed risks.
Genetics
Another important sensitive variable is related to “genetic information”.
Such information is classified as sensitive almost everywhere.
To conclude, I wanted to mention that several important variables considered sensitive have little to do with genetics, and much more to do with social construction.
Let us now discuss proxies that can be related to those sensitive variables.
Names and language
The first one was discussed in the introduction: names contain information about race and ethnic origin.
Text and discussion can also reveal sensitive information.
Pictures
Pictures can also provide information. That was already discussed 150 years ago, when researchers tried to identify criminals using pictures alone.
Some insurers have, at some point, tried to detect diseases from facial pictures. And it is possible to infer information from pictures, possibly the age and the gender.
One can also use satellite pictures, or pictures from Google Street View, to infer, for instance, the wealth of a neighborhood, and possibly sensitive information, such as the presence of an access ramp for disabled people.
Credit Scoring
Credit scores are also used by insurers, and they can be related to variables considered sensitive. Clearly, a bad credit score will have a big impact not only on mortgages and loans, but also on insurance rates! As we explained here, it costs a lot to be poor.
Networks
Finally, insurers can use information related to friends or family to assess risk. And network data capture a lot of sensitive information.
We will talk a little bit about networks, to explain why using your friends' risks to assess your own risk might not be a great idea…
It is an extension of the friendship paradox.
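The paradox can be checked numerically. Here is a small sketch on a simulated scale-free network (a Barabási–Albert graph, purely illustrative): on average, a randomly chosen friend has more friends than a randomly chosen individual.

```python
# Friendship paradox on a simulated network (illustrative, not real insurance data).
import networkx as nx
import numpy as np

G = nx.barabasi_albert_graph(n=1000, m=3, seed=42)

degrees = np.array([d for _, d in G.degree()])
# average number of friends of a random individual
mean_degree = degrees.mean()
# average number of friends of a random *friend* (degree-biased sampling)
mean_friend_degree = np.mean([G.degree(v) for u in G for v in G.neighbors(u)])

print(f"average degree of a node  : {mean_degree:.2f}")
print(f"average degree of a friend: {mean_friend_degree:.2f}")   # noticeably larger
```

The gap comes from the fact that sampling a friend is degree-biased: well-connected individuals are over-represented among friends, which is one reason why your friends' behaviour is not a neutral proxy for your own.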
Proxies
Finally, we will conclude by showing that removing a sensitive attribute from the training dataset is not enough to mitigate discrimination.
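To fix ideas, here is a minimal simulation (purely illustrative, with a hypothetical proxy variable): the sensitive attribute s is dropped from the training data, but a correlated proxy lets the model reproduce the gap in predicted risk between the two groups.

```python
# "Fairness through unawareness" failing: s is not in the training data,
# but a correlated proxy carries the same information (simulated data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
s = rng.integers(0, 2, n)                 # sensitive attribute (0/1)
proxy = s + rng.normal(0, 0.5, n)         # e.g. a zip-code-like variable correlated with s
x = rng.normal(0, 1, n)                   # legitimate risk factor
p = 1 / (1 + np.exp(-(0.8 * x + 1.5 * s - 1)))
y = (rng.random(n) < p).astype(int)       # outcome depends on s in the data-generating process

# train without s, but with the proxy
model = LogisticRegression(max_iter=1000).fit(np.column_stack([x, proxy]), y)
scores = model.predict_proba(np.column_stack([x, proxy]))[:, 1]

print(f"average predicted risk, s=0: {scores[s == 0].mean():.3f}")
print(f"average predicted risk, s=1: {scores[s == 1].mean():.3f}")   # still markedly higher
```

Dropping s only hides the information from the modeller; as long as proxies remain in the data, the predictions still differ across groups.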