Category Archives: Conferences

Talk at the 27th International Congress on Insurance: Mathematics and Economics

On Wednesday morning, I will be chairing our session “Discrimination-free Insurance Pricing” at the Insurance: Mathematics & Economics Conference, in Chicago. With Olivier Côté, Lydia Gabric and Hong Beng Lim, we will be four speakers, just before lunchtime. My talk will be a mix of recent work on quantifying and mitigating discrimination in scores (in insurance). Slides are available online.

 

Talk on collaborative insurance, unfairness and discrimination

On Monday, I will be giving a short course at the workshop on Decentralized Insurance and Risk Sharing (SAC 161), in Chicago:

  • Decentralized Finance and Blockchain: Implications for the Insurance Industry, by Marco Mirabella
  • Decentralized risk sharing: definitions, properties, and characterizations, by Jan Dhaene
  • Collaborative insurance, unfairness, and discrimination, by Arthur Charpentier
  • Decentralized insurance: bridging the gap between industry practice and academic theory, by Runhuan Feng

My slides are available online.

In this course, we will get back to the mathematical properties of risk sharing on networks, with reciprocal contracts. We will discuss conditions related to stochastic dominance, proving that policyholders may have an interest in sharing risks with “friends”.
Then, we will try to address fairness issues for such risk-sharing mechanisms. While fairness has recently been studied intensively, through either group or individual fairness, there is not yet much literature on fairness on networks. It is important to address those issues, since perceived discrimination is usually associated with networks. We will see why the topology of the network matters, both to design peer-to-peer schemes to share risks, and to see whether perceived discrimination is associated with global disparate treatment.
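To give a rough idea of why sharing with “friends” helps, here is a small simulation sketch (my own toy example, assuming i.i.d. gamma losses, a ring network and equal reciprocal shares, not the scheme studied in the course): each policyholder keeps a fraction of its own loss and takes the same fraction of each neighbour's loss, which leaves the expected loss unchanged but lowers its variance, the intuition behind the convex-order (stochastic dominance) argument.

# toy peer-to-peer risk sharing on a ring network (illustrative only)
import numpy as np

rng = np.random.default_rng(42)
n, k = 200, 4                                                  # policyholders, neighbours on each side
losses = rng.gamma(shape=2.0, scale=500.0, size=(10_000, n))   # i.i.d. yearly losses

# adjacency matrix of the ring network (k neighbours on each side, with wrap-around)
A = np.zeros((n, n))
for j in range(1, k + 1):
    A += np.eye(n, k=j) + np.eye(n, k=-j) + np.eye(n, k=n - j) + np.eye(n, k=-(n - j))

# reciprocal sharing: everyone keeps 1/(d+1) of their own loss and takes 1/(d+1)
# of each neighbour's loss, where d = 2k is the number of neighbours
d = 2 * k
W = (A + np.eye(n)) / (d + 1)
shared = losses @ W.T

print("stand-alone :  mean", losses.mean().round(1), " std", losses.std(axis=0).mean().round(1))
print("with sharing:  mean", shared.mean().round(1), " std", shared.std(axis=0).mean().round(1))

With eight neighbours plus oneself, the mean is unchanged while the standard deviation is roughly divided by three, so every risk-averse policyholder is better off.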

Samuel has arrived in Yokohama

After defending his PhD last week, Samuel just arrived in Yokohama (横浜市), at the International Joint Conference on Neural Networks (IJCNN’24), which will take place during the IEEE World Congress on Computational Intelligence (WCCI).

He will present our recent work, Boarding for ISS: Imbalanced Self-Supervised Discovery of a Scaled Autoencoder for Mixed Tabular Datasets:

The field of imbalanced self-supervised learning, especially in the context of tabular data, has not been extensively studied. Existing research has predominantly focused on image datasets. This paper aims to fill this gap by examining the specific challenges posed by data imbalance in self-supervised learning in the domain of tabular data, with a primary focus on autoencoders. Autoencoders are widely employed for learning and constructing a new representation of a dataset, particularly for dimensionality reduction. They are also often used for generative model learning, as seen in variational autoencoders. When dealing with mixed tabular data, qualitative variables are often encoded using a one-hot encoder with a standard loss function (MSE or Cross Entropy). In this paper, we analyze the drawbacks of this approach, especially when categorical variables are imbalanced. We propose a novel metric to balance learning: a Multi-Supervised Balanced MSE. This approach reduces the reconstruction error by balancing the influence of variables. Finally, we empirically demonstrate that this new metric, compared to the standard MSE: i) outperforms when the dataset is imbalanced, especially when the learning process is insufficient, and ii) provides similar results in the opposite case.
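To illustrate the kind of reweighting the abstract alludes to, here is a small numerical sketch (my own simplified version, with illustrative names, not the Multi-Supervised Balanced MSE of the paper): the reconstruction error of each observation on a one-hot block is weighted by the inverse frequency of its true category, so that a decoder that merely reproduces the majority pattern is heavily penalised.

# illustrative balanced reconstruction loss for an imbalanced one-hot variable
import numpy as np

def balanced_onehot_mse(x_true, x_rec, eps=1e-8):
    """x_true, x_rec: (n, K) one-hot targets and reconstructions of one categorical variable."""
    freq = x_true.mean(axis=0)                    # empirical frequency of each category
    w = 1.0 / (x_true @ freq + eps)               # row weight = 1 / frequency of its true category
    w = w / w.mean()                              # normalise so the weights average to one
    return float((w * ((x_true - x_rec) ** 2).mean(axis=1)).mean())

# toy check: an imbalanced binary category (95% / 5%) and a "lazy" decoder that
# always outputs the marginal frequencies, whatever the input
rng = np.random.default_rng(0)
labels = (rng.random(1000) < 0.05).astype(int)
x_true = np.eye(2)[labels]
x_rec = np.tile([0.95, 0.05], (1000, 1))
plain = float(((x_true - x_rec) ** 2).mean())
print(plain, balanced_onehot_mse(x_true, x_rec))  # the balanced loss penalises the lazy decoder far more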

Talk in Stockholm, Sweden, at the Insurance Data Science Conference

This week, I will attend the Insurance Data Science conference in Sweden. It has been a while… I was a keynote speaker at the one in London, ten years ago (to give a talk I still get feedback about – Getting into Bayesian Wizardry… (with the eyes of a muggle actuary) – at that time, the conference was “R in Insurance”), and then we organized the one in Paris, back in 2017. Then we had the online events, but it was… different.

This time, I will get back to our recent paper A Sequentially Fair Mechanism for Multiple Sensitive Attributes, with François Hu and Philipp Ratz, and the equipy package, written with Agathe Fernandes-Machado and Suzie Grondin. The slides are available online.
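As a quick reminder of the mechanism behind the paper (and behind equipy), here is a bare-bones numpy sketch, under my own simplifying choices and with illustrative function names, not the equipy API: for one sensitive attribute, each group's scores are mapped to the Wasserstein barycenter of the group distributions through their quantile functions, and with several sensitive attributes the same operation is applied sequentially, one attribute after the other.

# illustrative sequential quantile-based fairness adjustment
import numpy as np

def fair_one_attribute(scores, attr):
    """Map the scores of each group of `attr` to the empirical Wasserstein barycenter."""
    out = np.empty_like(scores, dtype=float)
    groups, counts = np.unique(attr, return_counts=True)
    weights = counts / counts.sum()
    for g in groups:
        idx = attr == g
        u = (np.argsort(np.argsort(scores[idx])) + 0.5) / idx.sum()   # ranks in (0, 1)
        # barycenter quantile = weighted average of the group quantile functions at level u
        out[idx] = sum(w * np.quantile(scores[attr == h], u) for h, w in zip(groups, weights))
    return out

def sequentially_fair(scores, sensitive):          # sensitive: list of attribute arrays
    for attr in sensitive:                         # one attribute after the other
        scores = fair_one_attribute(scores, attr)
    return scores

# toy example with two binary sensitive attributes
rng = np.random.default_rng(1)
s1, s2 = rng.integers(0, 2, 5000), rng.integers(0, 2, 5000)
scores = rng.beta(2 + 2 * s1, 5 - 2 * s2)          # scores that depend on both attributes
fair = sequentially_fair(scores, [s1, s2])
print([fair[s1 == g].mean().round(3) for g in (0, 1)],
      [fair[s2 == g].mean().round(3) for g in (0, 1)])  # group means are now (almost) equal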

Trip in (Northern) Europe

For the next two weeks, I will be in (Northern) Europe, with a first stop in Brussels (to visit colleagues), then Leuven (where I will give a talk on Monday at KU Leuven), then København (where I will give a talk on Friday at Københavns Universitet), and finally Stockholm (at Stockholm University, for the Insurance Data Science conference).

In the Fall, I will be back in Europe, with stops in Lisbon (for the European Actuarial Journal conference), in France (for the Cerisy Colloques), and in Warsaw, Poland, where I will give a two-day course on Insurance, Biases, Discrimination and Fairness.

More to come soon…

Insurability, towards new ways of sharing risks, Congrès des Actuaires

On Thursday, I will take part, remotely, in the 23rd congress of the Institut des Actuaires, in France, with Florence Picard and Laurence Barry.

Our talk is titled “Insurability: towards new ways of sharing risks?”. I will talk a bit about natural catastrophes… and about drought risk, or rather the “RGA” risk (retrait-gonflement des argiles, i.e. clay shrinkage and swelling).


Workshop on Trustworthy AI, in Montreal

This Monday, May 27, 2024, a Workshop on Trustworthy AI will be held in Montreal.

We will be there with Agathe and Olivier, to chat with people who might be interested.

Here are our posters. I will talk about discrimination and insurance,

Agathe will explain why calibration of scores is important,

and finally, Olivier will talk about building (causal) graphs for fairness.

 

Presentation in Bordeaux, at the Journées de Statistique

This week, Sam – Samuel Stocksieker – will be in Bordeaux, at the Journées de Statistique, to talk about “smoothed bootstrap” and synthetic data generation for the modeling of extremes (a paper co-written with Denys Pommeret).

In supervised learning, it is quite common to be confronted with data exhibiting imbalanced distributions. This situation often makes learning difficult for standard algorithms. Research and solutions on learning from imbalanced distributions have mainly focused on classification tasks. Despite its importance, very few solutions exist for imbalanced regression. In this paper, we propose a data augmentation procedure, called DENIS, based on kernel density estimates. This approach provides an expression for the conditional densities of the generators. We apply DENIS to imbalanced regression and propose combining it with a new type of wild-bootstrap generator to simulate the target variable, conditionally on the new synthetic data. We evaluate the performance of the DENIS algorithm in imbalanced regression settings. We empirically evaluate and compare our approach, and demonstrate a significant improvement over existing techniques.
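To give the flavour of the approach, here is a loose sketch of smoothed-bootstrap augmentation with synthetic targets, under my own simplifying choices and with illustrative names; it is not an implementation of DENIS.

# illustrative smoothed-bootstrap augmentation for imbalanced regression
import numpy as np

rng = np.random.default_rng(0)

# a toy imbalanced regression set: few observations in the right tail of x
x = rng.exponential(1.0, 500)
y = np.sin(x) + 0.1 * rng.normal(size=500)

def smoothed_bootstrap_augment(x, y, n_new=200, bandwidth=0.1):
    """Oversample the sparse region of x with a smoothed bootstrap, then build y
    from a nearby observed point plus noise (a crude wild-bootstrap-like step)."""
    # sampling weights inversely proportional to a Gaussian kernel density estimate of x
    dens = np.array([np.mean(np.exp(-0.5 * ((xi - x) / bandwidth) ** 2)) for xi in x])
    w = 1.0 / (dens + 1e-8)
    w = w / w.sum()
    idx = rng.choice(len(x), size=n_new, p=w)
    x_new = x[idx] + bandwidth * rng.normal(size=n_new)        # smoothed bootstrap on x
    nn = np.abs(x_new[:, None] - x[None, :]).argmin(axis=1)    # nearest observed point
    y_new = y[nn] + 0.1 * rng.normal(size=n_new)               # noisy target around that neighbour
    return x_new, y_new

x_aug, y_aug = smoothed_bootstrap_augment(x, y)
print(np.mean(x > np.quantile(x, 0.95)), np.mean(x_aug > np.quantile(x, 0.95)))  # the tail is far better covered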

2024 Optimization Days, (algorithmic) collusion in games

Tomorrow, I will attend the 2024 Optimization Days, in Montréal. I will present some work we did last Fall with Philipp Ratz and Suzie Grondin, on (algorithmic) collusion in games, “Market Pricing with Reinforcement Learning” (the paper will be available soon).

Several recent articles have attempted to gain a better understanding of algorithmic collusion (Calvano et al. (2020), Klein (2021), Banchio & Mantegazza (2022), Rocher et al. (2023)). For example, in Calvano et al. (2020), a simulation study showed that, in a simplified market environment, basic Q-learning agents can learn to collude tacitly, in order to propose higher prices and increase their combined profit. Inspired by the Iterated Prisoner's Dilemma, we derive a reinforcement learning algorithm to investigate and discuss several recent results and their robustness, and explain how reinforcement learning differs from simpler strategies and which conditions lead to unfavorable outcomes from a consumer perspective. In particular, we first describe the reinforcement learning problem in a more general manner and investigate the influence of the hyper-parameters. We then consider two situations separately. One, similar in spirit to Rocher et al. (2023), assumes that the market is in equilibrium and that a general agent tries to exploit the pricing strategy of an incumbent agent. The second, more general, approach consists of an agent continuously updating their own policy.

The starting point was Calvano et al. (2020).

For classical games, the mathematical framework is the following

for example, with the prisoner’s dilemma
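For readers without the slides, the textbook payoff bimatrix of the prisoner's dilemma (with a standard choice of values, not necessarily the one used in the talk) is

\begin{array}{c|cc}
 & \text{cooperate} & \text{defect} \\ \hline
\text{cooperate} & (R,R)=(3,3) & (S,T)=(0,5) \\
\text{defect} & (T,S)=(5,0) & (P,P)=(1,1)
\end{array}
\qquad \text{with } T>R>P>S \text{ and } 2R>T+S.

Defecting is a dominant strategy for both players, even though mutual cooperation would give each of them a higher payoff.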

Then, consider repeated games, and possible collusion

The next step is to include randomness, with (dynamic) stochastic games

and standard equations

(I describe quickly the different concepts). Finally, we can move from here to reinforcement learning, and Q-learning
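For completeness, the textbook Q-learning update (in standard notation, not necessarily the exact one used on the slides) is

Q_{t+1}(s_t,a_t) = (1-\alpha)\,Q_t(s_t,a_t) + \alpha\Big(r_{t+1} + \gamma\,\max_{a'} Q_t(s_{t+1},a')\Big),

where \alpha is the learning rate and \gamma the discount factor.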

The idea will be to play (or to interact) to learn that matrix

with the following interpretations, for the different parameters

Then, we will play a little bit with the framework introduced to present the prisoner's dilemma, for instance to understand the importance of \beta, used in the \epsilon-greedy approach, with \epsilon_t=\exp(-\beta t).
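Here is a minimal two-agent illustration of that \epsilon-greedy schedule on the repeated prisoner's dilemma (my own toy code, with illustrative payoffs, state space and hyper-parameters, not the implementation used in the paper):

# two Q-learning agents on the repeated prisoner's dilemma, with epsilon_t = exp(-beta t)
import numpy as np
from itertools import product

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
ACTIONS = ["C", "D"]
STATES = list(product(ACTIONS, ACTIONS))          # state = previous joint action

def play(beta=1e-4, alpha=0.1, gamma=0.95, T=200_000, seed=0):
    rng = np.random.default_rng(seed)
    Q = [{(s, a): 0.0 for s in STATES for a in ACTIONS} for _ in range(2)]
    state, history = ("C", "C"), []
    for t in range(T):
        eps = np.exp(-beta * t)                   # decaying exploration rate
        acts = []
        for i in range(2):
            if rng.random() < eps:                # explore
                acts.append(rng.choice(ACTIONS))
            else:                                 # exploit the current Q matrix
                acts.append(max(ACTIONS, key=lambda a: Q[i][(state, a)]))
        acts = tuple(acts)
        rewards = PAYOFF[acts]
        for i in range(2):                        # standard Q-learning update
            best_next = max(Q[i][(acts, a)] for a in ACTIONS)
            Q[i][(state, acts[i])] += alpha * (rewards[i] + gamma * best_next
                                               - Q[i][(state, acts[i])])
        state = acts
        history.append(acts)
    return history

h = play()
print("share of mutual cooperation over the last 10,000 rounds:",
      np.mean([a == ("C", "C") for a in h[-10_000:]]))

Playing with \beta changes how long the agents keep exploring, which is the kind of sensitivity discussed in the talk.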

That is our first approach to the concept of collusion: agents don't need to “cooperate” to collude.

Then, we will use the experiment of Calvano et al. (2020) to move to more complex discussions…