Tag Archives: SCOR

SCOR Foundation for Science Webinar, ML and Econometrics

This week, I will give a talk at the SCOR Foundation for Science webinar (slides are available online). I was asked to talk about econometrics vs. AI (or machine learning).

Of course, the two concepts are related, and there is a continuum between them.

As we wrote in Charpentier et al. (2017),

Econometrics and machine learning seem to have one common goal: to construct a predictive model, for a variable of interest, using explanatory variables (or features).

For the purposes of this presentation, we will begin by contrasting the two, emphasizing the differences, and then showing the connections that exist.

Long story short, in between, we have computational statistics, or statistical learning, corresponding to computational techniques with mathematical (probabilistic) guarantees.


Confidence and Fairness: Scientific Foundations in AI and Risk, Workshop in Paris

Tomorrow, we are organizing our workshop Confidence and Fairness: Scientific Foundations in AI and Risk at the SCOR headquarters in Paris. I will give the keynote address for the day, presenting the work we have carried out over the past 18 months (out of the 3 years of funding), while laying the foundations for the concepts we will be discussing throughout the day.


9:00 – Registration
9:20 – Introduction speech
9:30 – Arthur Charpentier – “Fairness of predictive models: an application to insurance markets”
10:15 – Coffee break
10:45 – Toon Calders – “Unfair, You Say? Explain Yourself!”
11:30 – Isabel Valera – “Society-centered AI: An Integrative Perspective on Algorithmic Fairness”
12:15 – Lunch break
13:15 – Jean Michel Loubes – “Beyond fairness measures, discovering the bias in the algorithm”
14:00 – Evgeny Chzhen – “An optimization approach to post-processing for classification with system constraints”
14:45 – Michele Loi – “From Facts to Fairness: Diagnostic Models in Algorithmic Decision-Making”
15:30 – Coffee break
16:00 – Aurélie Lemmens – “Fair Active Learning for Personalized Policies”
16:45 – François Hu and Antoine Ly – “Fairness and Confidence in Insurance Markets, a Practitioners Perspective”
17:30 – Closing cocktail

Confidence and Fairness: Scientific Foundations in AI and Risk (mid-May in Paris)

Mid-May, we are organizing, with the SCOR Foundation for Science, a one-day workshop on Confidence and Fairness: Scientific Foundations in AI and Risk. Registrations are now open! The agenda will be

9:00 – registration
9:20 – introduction speech
9:30 – Arthur Charpentier
10:15 – coffee break
10:45 – Toon Calders
11:30 – Isabel Valera
12:15 – lunch break
13:15 – Jean Michel Loubes
14:00 – Evgeny Chzhen
14:45 – Michele Loi
15:30 – coffee break
16:00 – Aurélie Lemmens
16:45 – François Hu and Antoine Ly
17:30 – closing cocktail

SCOR Project Newsletter #3

The third newsletter related to the SCOR research project is now available. It is a brief summary of the third six-month block, from October to the end of March (i.e. fall and winter). The first one is available here and the second one there. For the first time, we have also written one in French. I’d like to take this opportunity to thank all those involved in the project!


Projet SCOR, Infolettre #3

The third newsletter associated with the project funded by the SCOR Foundation for Science is finally available! It is a short, illustrated summary of our activities over the past six months, from October to the end of March (in other words, fall and winter). What is new is that we are inaugurating the French version of these newsletters; the very first one is online here (in English), and the second one there. For the third one, an English version is also available, of course. Thanks again to everyone taking part in the project’s work!


SCOR Project Newsletter #2

The second newsletter related to the SCOR research project is now available. It is a brief summary of the second six-month block, from April to the end of September (the first one is available here).

As explained, over the past six months we have had several interns, including Noé Bosc-Haddad, Florent Crouzet, Julien Siharath, Ana María Patrón Piñerez and Cassandra Mussard, as well as visits from Laurence Barry and Fei Huang. Philipp Ratz and Samuel Stocksieker defended their PhDs, François Hu finished his postdoctoral fellowship, while Marouane Il Idrissi and Arsene Zotsa have just arrived. Agathe Fernandes Machado and Olivier Côté (co-supervised with Ewen Gallic and Marie-Pier Côté) finished their PhD coursework and are now working full time on their research… We wrote papers, gave talks… Thank you to all those who have supported us, and continue to support us. At least two more years to work on insurance and predictive models, fairness, calibration, discrimination, trust, explainability, interpretability, market equilibria, competition, generative models, and so much more… We still have a lot of work to do, and plenty of enthusiasm!

SCOR Foundation – Scope and limits of Artificial intelligence

On May 15, 2024, the SCOR Foundation for Science hosted a webinar titled “Scope and limits of Artificial intelligence”, delivered by Arthur Charpentier. A professor in the Department of Mathematics at the University of Quebec in Montreal and a member of the Institute of Actuaries, Arthur Charpentier is an internationally recognized expert in actuarial science and the author of numerous academic articles published in the best actuarial academic journals worldwide.

During the webinar, Arthur Charpentier discussed the research project “Fairness of predictive models: an application to insurance markets”, which is supported by the SCOR Foundation for Science. This project addresses biases within the automatic artificial intelligence algorithms utilized to determine optimal pricing in individual policies. Its aim is to mitigate or eliminate such biases, which could lead to inequities or discriminatory practices based on factors such as gender, race, religion, or origin in the coverage provided by insurers or reinsurers to policyholders.

“Scope and limits of artificial intelligence” at the SCOR foundation monthly webinar

This morning, I will give a talk on “scope and limits of artificial intelligence” at the SCOR Foundation monthly webinar. As discussed previously, we currently have ongoing research on discrimination and fairness funded by the Foundation (newsletter #1 is online).

Insurance (and further motivations)

Since we will talk about fairness, I will start with a couple of motivations. The first one is about COMPAS.

Interestingly, we have the data to analyse that one. In the original analysis, conditional on not re-offending, the proportions of individuals wrongly classified as high risk differ significantly between the two protected groups, so the algorithm was deemed racist.

The answer was that, actually, conditional on being classified as high risk, the probabilities of re-offending in the two protected groups are not significantly different, so the algorithm is not racist.

So clearly, we can already see that it will not be so easy: using the same data and the same models, two different conclusions can be reached.
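To make the two viewpoints concrete, here is a minimal sketch (in Python, on purely synthetic data, not the actual COMPAS files; all names are hypothetical) computing both quantities for each protected group: the false positive rate among non-re-offenders (the separation view) and the re-offense rate among those predicted high risk (the sufficiency view).

```python
import numpy as np

def separation_vs_sufficiency(y_true, y_pred, group):
    """Per protected group: false positive rate (the separation view) and
    precision / PPV (the sufficiency view)."""
    out = {}
    for g in np.unique(group):
        yt, yp = y_true[group == g], y_pred[group == g]
        out[g] = {
            "FPR": yp[yt == 0].mean(),  # P(predicted high risk | did not re-offend)
            "PPV": yt[yp == 1].mean(),  # P(re-offended | predicted high risk)
        }
    return out

# purely synthetic illustration, not the COMPAS data
rng = np.random.default_rng(0)
group = rng.integers(0, 2, 10_000)
y_true = rng.binomial(1, np.where(group == 1, 0.5, 0.3))  # different base rates
y_pred = rng.binomial(1, 0.2 + 0.6 * y_true)              # one imperfect classifier
print(separation_vs_sufficiency(y_true, y_pred, group))
# here the FPR happens to be (roughly) equal across groups while the PPV is not:
# with different base rates, an imperfect classifier cannot equalize both at once
```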

We will also discuss legal aspects.

This idea of a “determining actuarial factor” has been removed in Europe, but we can still find it in Québec.

I can also mention some recent projects in Colorado, where insurers are asked to predict race and ethnicity (that specific topic is on our agenda for the summer).

And finally, I should stress that discrimination does not have much to do with the intention of the statistician. This is the idea of indirect discrimination.

I should also mention “redlining”. About 100 years ago, in the US, we started to see maps created by the HOLC (Home Owners’ Loan Corporation, based on City Survey Files, 1935-1940). Those maps contained “red” areas and “green” areas. Bankers were supposed to avoid the red areas, because they were considered too risky.

As a side note, we nowadays see some blue-lining related to climate risks:

“Blue-lining,” from the consumer’s perspective, is when banks or mortgage lenders draw lines of risk around certain streets or neighborhoods, often without clear disclosure.

Finally, I just want to recall that algorithms just tend to reproduce what can be observed in data. If there is a difference between men and women, they will reproduce it.

A bit more on insurance

I should also stress an important problem (related to a paper we wrote, in French, a few years ago). Classically, when modeling categorical variables, such as a binary variable y\in\{0,1\}, practitioners just care about getting the right category. On the left, we have pictures of cats and dogs used to train a model, and then we try it on a new picture, which is either a cat or a dog. Somehow, there is a ground truth, and it is possible to see if we are right or wrong. The same holds if we want to detect a disease on medical images. Now, let us move to the right. In the middle, we have a model that predicts whether it will rain or not. But here, what we probably care about is actually the probability of rain. On the right, we have the actuarial problem of modeling claim frequencies. We do not want to predict who will claim a loss; we want a good estimator of the probability of claiming a loss. The challenge, clearly, is that we cannot observe that probability. We cannot observe the latent risk factor. We only observe whether people had an accident or not. But some people with a very small probability can still claim a loss, and very bad drivers can actually be very lucky and have no accident in a given year.
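To illustrate that point, here is a small simulation sketch (in Python, with hypothetical claim probabilities): we draw a latent annual claim probability for each policyholder, then observe only the 0/1 outcome, and even the riskiest drivers frequently end the year without a claim.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
p_true = rng.beta(2, 18, n)          # latent (unobservable) annual claim probabilities
claims = rng.binomial(1, p_true)     # what we actually observe: 0/1 outcomes

risky = p_true > 0.30                # the "very bad drivers"
print(f"share of risky drivers with no claim this year: {1 - claims[risky].mean():.2f}")
# the actuarial goal is to recover p_true from the data, not to classify who will claim
```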

Again, in insurance, we care more about the score, the estimate of the probability, than about the class \widehat{y}. So we can slightly modify standard fairness definitions, so that they are based not on predicted classes \widehat{y}, but on the score m(\boldsymbol{x},s). As we will discuss, there are usually three general definitions of so-called “group fairness”.

Quantifying unfairness with optimal transport

Let us start with demographic parity. A weak version asks that, on average, scores in the two groups be identical (or close). An alternative is the strong version, asking for equality in distribution: for any set \mathcal{I}\subset[0,1], the probability that the score falls in \mathcal{I} (e.g. between 40% and 60%) should be the same in the two groups.
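A minimal sketch of both checks (in Python; score and group are hypothetical arrays, and the interval is the 40%-60% example above): the weak version compares average scores, while the strong version compares the full distributions, here through the probability of falling in a given interval and a two-sample Kolmogorov-Smirnov statistic.

```python
import numpy as np
from scipy import stats

def demographic_parity_checks(score, group, interval=(0.4, 0.6)):
    """Weak DP: equal average scores across groups.
    Strong DP: equal score distributions (interval probabilities, KS statistic)."""
    s0, s1 = score[group == 0], score[group == 1]
    lo, hi = interval
    return {
        "mean gap": abs(s0.mean() - s1.mean()),
        "P(score in I | group 0)": np.mean((s0 >= lo) & (s0 <= hi)),
        "P(score in I | group 1)": np.mean((s1 >= lo) & (s1 <= hi)),
        "KS statistic": stats.ks_2samp(s0, s1).statistic,
    }
```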

Mathematically, we need a distance between the distributions of scores in the two groups. A popular choice is the Wasserstein distance, which is related to optimal transport.

The empirical version is perhaps easier to understand, since the mapping is based on a matching of individuals.
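On the real line, this empirical Wasserstein distance has a simple form: sort (or take quantiles of) the scores in each group and match them rank by rank. A minimal sketch, assuming two hypothetical score vectors s0 and s1:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def wasserstein_1d(s0, s1, n_grid=1000):
    """W1 between two empirical distributions on the real line:
    average gap between matched quantiles (rank-by-rank matching)."""
    u = (np.arange(n_grid) + 0.5) / n_grid
    return np.mean(np.abs(np.quantile(s0, u) - np.quantile(s1, u)))

rng = np.random.default_rng(1)
s0 = rng.beta(2, 5, 500)   # hypothetical scores, group 0
s1 = rng.beta(3, 4, 800)   # hypothetical scores, group 1
print(wasserstein_1d(s0, s1), wasserstein_distance(s0, s1))  # should be close
```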

As a cultural side note, a couple of slides explain why this has to do with “optimal transport”, going back to Monge (1781)‘s problem. It is all about transporting sand, grain by grain, from the hole to the pile. Below, we have a (purely) random transport, which is not efficient at all…

and then the optimal version (for a strictly convex cost function): the leftmost grain in the hole goes to the leftmost part of the pile, etc.

Mitigation

For mitigation (once we have observed that there is discrimination, as discussed previously), heuristically, we want to be somewhere in between the distributions of the two subgroups.

Being “in between” can be interpreted locally: for someone in group A, the corrected score should be a weighted average (with weights related to the proportions of the two groups) of the prediction obtained as someone in group A and some sort of counterfactual in the other group, namely the prediction that person would have obtained had she been in group B, at the same probability level.

For the other group, it is the opposite.
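A minimal sketch of this construction (in Python; score and group are hypothetical arrays): each individual’s score is replaced by the weighted average of their own score and the score at the same probability level in the other group, with weights equal to the group proportions, along the lines of the Wasserstein-barycenter idea sketched above.

```python
import numpy as np

def mitigated_scores(score, group):
    """For an individual in group A with score m, return
    p_A * m + p_B * quantile_B(F_A(m)), and symmetrically for group B."""
    s0, s1 = score[group == 0], score[group == 1]
    p0, p1 = len(s0) / len(score), len(s1) / len(score)

    def ecdf(sample, x):
        # empirical cdf of `sample` evaluated at the points x
        return np.searchsorted(np.sort(sample), x, side="right") / len(sample)

    fair = np.array(score, dtype=float)
    fair[group == 0] = p0 * s0 + p1 * np.quantile(s1, ecdf(s0, s0))
    fair[group == 1] = p1 * s1 + p0 * np.quantile(s0, ecdf(s1, s1))
    return fair
```

After this transformation, the two groups share (approximately) the same score distribution, the barycenter of the two original ones, so strong demographic parity holds by construction.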

Beyond demographic parity

If we get back to our COMPAS example, demographic parity, in the standard classification-based definition, would require the proportion of individuals classified as high risk to be the same in the two groups.

If we get back to the original motivation we gave, it had nothing to do with demographic parity: the first slide had to do with separation, or equalized odds, while the second one had to do with sufficiency, or calibration.

More generally, if we consider weak versions of the independence criteria, we have equality of moments within each protected subgroup.

Let us say a bit more about calibration. Calibration is deeply related to the interpretation of the “probabilities” returned by models as “real probabilities”. In machine learning, it is hard to define properly what those “probabilities” are.

Calibration is related to the following idea, discussed above: if we consider all cases where the predicted probability was 40% (or, say, close to 40%), then the proportion of 1’s should be close to 40%.
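A minimal sketch of that check (in Python; y_true and y_prob are hypothetical arrays of observed 0/1 outcomes and predicted probabilities): bin the predictions and compare, within each bin, the average predicted probability with the observed frequency of 1’s, i.e. a reliability diagram in table form.

```python
import numpy as np

def calibration_table(y_true, y_prob, n_bins=10):
    """Reliability table: within each bin of predicted probabilities, the average
    prediction and the observed frequency of 1's should roughly match
    (e.g. about 40% of 1's among predictions close to 40%)."""
    edges = np.linspace(0, 1, n_bins + 1)
    idx = np.clip(np.digitize(y_prob, edges) - 1, 0, n_bins - 1)
    rows = []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            rows.append({"bin": (edges[b], edges[b + 1]),
                         "avg predicted": y_prob[mask].mean(),
                         "observed freq of 1s": y_true[mask].mean(),
                         "count": int(mask.sum())})
    return rows
```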

To conclude this digression, I can mention the following example, highlighting why we should be concerned about the probabilities returned by machine learning algorithms. Consider some pictures generated by some algorithm, and more precisely, a flow of pictures morphing from a woman to a man.

Below, we can see probabilities given by some online application that returns the probability of being a woman, given a picture. Can’t we agree that it is surprising that those probabilities (of being a woman) do not decrease continuously, from the picture in the top left corner to the one in the bottom right corner?

Finally, I can also mention “individual fairness”, or “counterfactual fairness”. Here also, optimal transport can be used, to quantify counterfactual unfairness. But I won’t be too long here.

Finally, an opening for next year’s agenda: interpretability. Interpretability is a very important issue in actuarial science, which is not as objective as people might think, despite the popular

let the data speak for itself

In insurance, interpretation is very important, probably more important than model assumptions.

Interpretation becomes a key concept when dealing with multiple sensitive attributes.

To conclude, just a final reminder that dealing with mitigation is a complex philosophical problem….

Tomorrow, we will discuss this further at our workshop in Québec City.

Fondation SCOR, Fairness of predictive models: an application to insurance markets

The Scientific Council of the SCOR Foundation has decided to fund the research project “Fairness of predictive models: an application to insurance markets” until its anticipated completion date in three years (2023-2025). The project will be led by the University of Quebec and directed by Arthur Charpentier, professor in the mathematics department of the University of Quebec in Montreal. This project aims to propose corrections to the automatic artificial intelligence algorithms that can be used to determine the optimal pricing of individual policies, in order to remove or limit the biases likely to generate inequities or even discrimination based on gender, race, religion, origin, etc. in the coverage offered by insurers or reinsurers to policyholders. The subject is of both theoretical interest (better control of the black boxes constituted by models based on artificial intelligence algorithms) and practical interest (reduction of the risks of discrimination and inequity). From this point of view, it is very topical for insurers and reinsurers facing major reputational challenges in the context of the growing importance of social networks. In addition to his role at the University of Quebec, Arthur Charpentier is a member of the Institute of Actuaries, an internationally recognized expert in actuarial science, and the author of numerous academic articles published in renowned actuarial academic journals, both national and international.