# Insurance against Natural Catastrophes: Balancing Actuarial Fairness and Social Solidarity

Our research paper, Insurance against Natural Catastrophes: Balancing Actuarial Fairness and Social Solidarity, written with Molly James and Laurence Barry, is now published in the Geneva Papers on Risk and Insurance.

Natural disasters offer a special case for the study of the private and public insurance mix. Indeed, the experience accumulated over the past decades has made it possible to transform poorly known hazards, long considered uninsurable, into risks that can be assessed with some precision. They exemplify, however, the limits of the risk-based premium method, as it might imply unaffordability for some. The French scheme reflects such ideas and offers wide coverage for moderate premiums to all, but is shaken by climate change: we show that some wealthier areas, which were not perceived as “at risk” in the past, will become exposed to submersion risk in the future. This singularly makes some well-off properties the potential main beneficiaries of a scheme that was historically designed to protect the worst-off. Acknowledging that some segmentation might become desirable, we examine several models for flood risk and the disparity in premiums they entail.

# Mathematics for Public Health (MfPH)

It is now official: a few of us, researchers in Montréal, will join the group of researchers from Fields, PIMS, AARMS and the CRM working on infectious disease modelling. Following the Fields announcement, there was a nice press release on the Université de Montréal website.

# “The more egalitarian and prosperous a country, the fewer women in science”

I came across this sentence this afternoon, while reading an article in La Presse,

“The more egalitarian and prosperous a country is, the fewer women there are in science. It is hard to understand.”

and I admit I was surprised. Surprised that this relationship could be so strong, and so significant, that it can be stated as a law. I do not know how equality and prosperity are measured, but I tried life expectancy at birth, from http://data.uis.unesco.org/, together with “researchers (FTE)” (the number of researchers in full-time equivalents), women divided by the total, to get the ratio of women in “Science, technology and innovation”. I have trouble seeing the decreasing relationship…

Unless we remove about ten countries with low life expectancy at birth (including Burundi, Ethiopia, the Gambia, India, Madagascar, Pakistan and Togo).

If anyone knows how to obtain this surprising decreasing relationship, I am interested!
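For what it is worth, here is a rough sketch of the kind of plot I had in mind; the file names and column names below are made up (they have to be adapted to the actual exports from http://data.uis.unesco.org/ and from the life expectancy tables),

```r
# rough sketch, with hypothetical file and column names (to be adapted
# to the actual exports from data.uis.unesco.org)
library(dplyr)
researchers = read.csv("researchers_fte_by_sex.csv")    # country, sex, fte
life_exp    = read.csv("life_expectancy_at_birth.csv")  # country, life_exp
ratio = researchers %>%
  group_by(country) %>%
  summarise(share_women = sum(fte[sex == "Female"]) / sum(fte))
db = merge(ratio, life_exp, by = "country")
plot(db$life_exp, db$share_women,
     xlab = "life expectancy at birth",
     ylab = "share of women among researchers (FTE)")
abline(lm(share_women ~ life_exp, data = db), col = "red")
```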

# Reinforcement Learning in Economics and Finance, a state-of-the-art

Our joint paper with Romuald Elie and Carl Remlinger, entitled Reinforcement Learning in Economics and Finance, has just appeared in Computational Economics,

Reinforcement learning algorithms describe how an agent can learn an optimal action policy in a sequential decision process, through repeated experience. In a given environment, the agent's policy provides him with some running and terminal rewards. As in online learning, the agent learns sequentially. As in multi-armed bandit problems, when an agent picks an action, he cannot infer ex-post the rewards induced by other action choices. In reinforcement learning, his actions have consequences: they influence not only rewards, but also future states of the world. The goal of reinforcement learning is to find an optimal policy, that is, a mapping from the states of the world to the set of actions, in order to maximize the cumulative reward, which is a long-term strategy. Exploring might be sub-optimal on a short-term horizon but could lead to optimal long-term strategies. Many problems of optimal control, popular in economics for more than forty years, can be expressed in the reinforcement learning framework, and recent advances in computational science, provided in particular by deep learning algorithms, can be used by economists in order to solve complex behavioral problems. In this article, we propose a state-of-the-art review of reinforcement learning techniques, and present applications in economics, game theory, operations research and finance.
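To make the framework a little more concrete, here is a minimal sketch (mine, not from the paper) of tabular Q-learning on a tiny made-up problem with two states and two actions; rewards, transitions and parameters are purely illustrative,

```r
# minimal tabular Q-learning sketch, on a made-up 2-state / 2-action problem
set.seed(1)
n_states  = 2
n_actions = 2
reward     = matrix(c(0, 1,        # reward[s, a]    : immediate reward
                      2, 0), 2, 2, byrow = TRUE)
transition = matrix(c(1, 2,        # transition[s, a]: next state (deterministic)
                      1, 2), 2, 2, byrow = TRUE)
Q     = matrix(0, n_states, n_actions)
alpha = 0.1   # learning rate
gamma = 0.9   # discount factor
eps   = 0.1   # exploration probability (epsilon-greedy)
s = 1
for (t in 1:5000) {
  a = if (runif(1) < eps) sample(1:n_actions, 1) else which.max(Q[s, ])
  r = reward[s, a]
  s_next = transition[s, a]
  # temporal-difference update of the state-action value
  Q[s, a] = Q[s, a] + alpha * (r + gamma * max(Q[s_next, ]) - Q[s, a])
  s = s_next
}
Q   # estimated state-action values (the greedy policy is the argmax per row)
```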

# IVADO day on collaborative digital intelligence

On Wednesday, I will take part in a day organized at IVADO on collaborative digital intelligence. I will come back to the recent AIcrowd competition that we organized, to get a better understanding of competition on insurance markets.

# Big data, the tech giants, and insurance

A few months ago, I published a short article, Big data, the tech giants, and insurance, in the Annales des Mines. The original article was in French, but the editors shared an English version,

Technology and insurance companies seem like polar opposites in every possible way. The tech giants, agile and fast-acting, are obsessed with the future, whereas insurers, conservative and reflexive, are fascinated with the data that the tech giants collect. However, these two sectors are now eyeing each other and have started forming partnerships as they come to understand that, in both cases, their core business is data.

to be continued…

# Dynamic Programming in Distributional Reinforcement Learning

Last summer, I supervised a summer intern, Cédric Odin, a student at the École Normale in Ker Lann, France, working on Dynamic Programming in Distributional Reinforcement Learning. A state of the art is now available online: https://hal.archives-ouvertes.fr/hal-03168889

The classic approach to reinforcement learning is limited in that it only predicts the expected return. The specialized literature has long tried to remedy this problem by studying risk-sensitive models, but the distributional approach did not emerge until 2017. Since the seminal article of Bellemare, Dabney, and Munos (2017) and the state-of-the-art performance of the C51 algorithm on the ATARI 2600 suite of benchmark tasks (Bellemare, Naddaf, et al. 2013), research has focused on understanding the behaviour of distributional algorithms. In this paper we place Bellemare's original results on distributional dynamic programming in parallel with the classic results.
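As a toy illustration of the distributional point of view (my own sketch, not taken from the report): on a small Markov reward process, the classical value function is a single expectation per state, whereas the distributional approach keeps track of the whole distribution of the discounted return, approximated here by Monte Carlo,

```r
# toy illustration: expected return vs. return distribution,
# on a made-up 2-state Markov reward process
set.seed(1)
gamma = 0.9
P = matrix(c(0.8, 0.2,    # transition probabilities P[s, s']
             0.1, 0.9), 2, 2, byrow = TRUE)
r = c(0, 1)               # reward received in each state
# classical value function: solve v = r + gamma * P v
v = solve(diag(2) - gamma * P, r)
# distributional view: simulate many discounted returns from state 1
sim_return = function(s, horizon = 200) {
  G = 0
  for (t in 0:(horizon - 1)) {
    G = G + gamma^t * r[s]
    s = sample(1:2, 1, prob = P[s, ])
  }
  G
}
G1 = replicate(5000, sim_return(1))
c(expected = v[1], mc_mean = mean(G1), mc_sd = sd(G1))
hist(G1, breaks = 50, main = "distribution of the return, from state 1")
```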

# Insurance and autonomous vehicles

On Monday morning, I will speak in a seminar organized by Stève Bernardin, on the theme “insurance and autonomous vehicles”, with Philippe Talleux, Romain Cros, David Zuby and Jonathan Charak as the other participants. As an introduction, I can point to two articles written with Rodolphe Bigot, Repenser la responsabilité, et la causalité and especially Quelle responsabilité pour les algorithmes ?

# Louis Bachelier Fellowship

I am delighted and honored to have been inducted as an Academic Fellow of the Institut Louis Bachelier. It is a great opportunity to join such a distinguished group of scholars!

Louis Bachelier Fellows

# From multinomial regression to binary classification on some Siamese data

There are two kinds of people in the world: people who think there are two kinds of people in the world and people who don’t

(borrowed from Menand (2018)). Because things are always simpler when we face only a binary choice, aren't they? But consider here the case where multiple options are possible, and let us see if we cannot get back to simpler binary choices. Consider a collection of observations $(y_i,\boldsymbol{x}_i)$ where $y_i$ is some categorical variable, $y_i\in\mathcal{A}$ where $\mathcal{A}=\lbrace A_1,\cdots,A_\kappa \rbrace$, with $\kappa$ possible categories. Let $\mathcal{I}_k=\lbrace i:y_i= A_k \rbrace$.

In a classical multinomial logistic regression, suppose that $A_1$ is the reference category; then $$\mathbb{P}[Y=A_j|\boldsymbol{X}=\boldsymbol{x}]=\frac{\exp[\boldsymbol{x}^\top\boldsymbol{\beta}_j]}{1+\exp[\boldsymbol{x}^\top\boldsymbol{\beta}_2]+\cdots+\exp[\boldsymbol{x}^\top\boldsymbol{\beta}_\kappa]}$$ With a lot of categories, and a small number of observations, inference can be complicated, and non-robust.

• the Siamese dataset

The name Siamese I use, here, comes from Siamese Networks. Or sort of… As we say in French, it is an « histoire de l’homme qui a vu l’homme qui a vu l’ours » (story of the man who saw the man who saw the bear). A few years ago, a student tried to explain to me the idea of Siamese Networks and this is what I understood. I might be completely wrong, but the idea I got from it did make sense, in my mind at least. That is the story of that blog post…

The idea of the siamese algorithm will be to consider all pairs of observations, $(y_i,\boldsymbol{x}_i)$ and $(y_j,\boldsymbol{x}_j)$ :

1. $\tilde y_{i,j}=\boldsymbol{1}(y_i=y_j)$ indicating if individuals $i$ and $j$ are in the same category
2. $\tilde{\boldsymbol{x}}_{i,j}$ is a collection of $p-1$ variables,
• $\tilde {x}_{k:i,j}={x}_{k:i}-{x}_{k:j}$ if $x_k$ is continuous, or $\tilde {x}_{k:i,j}=|{x}_{k:i}-{x}_{k:j}|$ (we can use another metric, e.g. $\tilde {x}_{k:i,j}=|{x}_{k:i}-{x}_{k:j}|^2$, and this is why I decided to use some GAM model in the logistic regression on the Siamese dataset)
• $\tilde {x}_{k:i,j}=({x}_{k:i},{x}_{k:j})\in\mathcal{X}_k\times\mathcal{X}_k$ if $x_k$ is a categorical variable (taking values in the set $\mathcal{X}_k$), or $\tilde {x}_{k:i,j}=\boldsymbol{1}({x}_{k:i}\neq{x}_{k:j})\in\{0,1\}$

The original dataset was an $n\times p$ matrix, and (if there are no categorical variables) it becomes an $n(n-1)/2\times p$ matrix. The key point is that, while the original variable $y_i$ was multinomial, $\tilde y_{i,j}$ is now binomial. For instance, if our initial dataset was the following, with two covariates, one continuous and one categorical,

its siamese counterpart is the following
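Since the two small tables are not reproduced here, here is a minimal sketch (with made-up data) of how such a Siamese dataset can be built, following the rules above: absolute differences for the continuous covariate, and a mismatch indicator for the categorical one,

```r
# minimal sketch: building the Siamese dataset from a made-up toy dataset
set.seed(1)
n  = 5
df_toy = data.frame(y  = sample(LETTERS[1:3], n, replace = TRUE),
                    x1 = round(rnorm(n), 2),                          # continuous
                    x2 = sample(c("blue", "red"), n, replace = TRUE)) # categorical
pairs = t(combn(n, 2))     # the n(n-1)/2 pairs (i, j)
siamese = data.frame(
  y_tilde  = as.numeric(df_toy$y[pairs[,1]] == df_toy$y[pairs[,2]]),  # same class ?
  x1_tilde = abs(df_toy$x1[pairs[,1]] - df_toy$x1[pairs[,2]]),        # |difference|
  x2_tilde = as.numeric(df_toy$x2[pairs[,1]] != df_toy$x2[pairs[,2]]))# mismatch
siamese
```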

• Classification step

On the dataset $(\tilde y_{i,j},\tilde{\boldsymbol{x}}_{i,j})_{i<j}$, fit a logistic regression, $$\mathbb{P}[\tilde Y=1|\tilde{\boldsymbol{X}}=\tilde{\boldsymbol{x}}]=\frac{\exp[\tilde{\boldsymbol{x}}^\top\boldsymbol{\beta}]}{1+\exp[\tilde{\boldsymbol{x}}^\top\boldsymbol{\beta}]}$$ (or any classification model – CART, random forest, etc.). But that is the easy part (unless $n$ is large, because the siamese dataset has (roughly) $n^2/2$ rows). The difficult task is the prediction step.

• Prediction step

Consider a new input variable $\boldsymbol{x}_{\cdot}$, and define its siamese version, $\tilde{\boldsymbol{x}}_{\cdot}=(\tilde{\boldsymbol{x}}_{\cdot,j})_j$, i.e. a database with $n$ rows. Then compute
$$p_{\cdot,j}=\mathbb{P}[\tilde Y=1|\tilde{\boldsymbol{X}}=\tilde{\boldsymbol{x}}_{\cdot,j}]=\frac{\exp[\tilde{\boldsymbol{x}}_{\cdot,j}^\top\boldsymbol{\beta}]}{1+\exp[\tilde{\boldsymbol{x}}_{\cdot,j}^\top\boldsymbol{\beta}]}$$ where $p_{\cdot,j}$ is the probability that $(y_j,\boldsymbol{x}_{j})$ and $(y_{\cdot},\boldsymbol{x}_{\cdot})$ are in the same category, as well as $$p_{i,j}=\mathbb{P}[\tilde Y=1|\tilde{\boldsymbol{X}}=\tilde{\boldsymbol{x}}_{i,j}]=\frac{\exp[\tilde{\boldsymbol{x}}_{i,j}^\top\boldsymbol{\beta}]}{1+\exp[\tilde{\boldsymbol{x}}_{i,j}^\top\boldsymbol{\beta}]}$$
Let $\boldsymbol{p}_{\cdot}=(p_{\cdot,j})$, and similarly $\boldsymbol{p}_{i}=(p_{i,j})$. Then several techniques can be used to predict $y_{\cdot}$.

1. $\widehat{y}_{\cdot}=y_{j^\star}$ where ${j^\star}=\underset{j=1,\cdots,n}{\text{argmax}}\{p_{\cdot,j}\}$: the predicted class is the one of the observation most likely to be in the same class
2. $\widehat{y}_{\cdot}=A_{\ell^\star}$ where ${\ell^\star}=\underset{\ell=1,\cdots,\kappa}{\text{argmax}}\{\overline{p}_{\ell}\}$, where$$\overline{p}_{\ell} = \frac{1}{|\mathcal{I}_s|}\sum_{j\in\mathcal{I}_s} \boldsymbol{1}(y_j =A_{\ell}),\text{ where }\mathcal{I}_s=\lbrace j:p_{\cdot,j}>s\rbrace$$i.e. consider only the observations with a sufficiently high probability of being in the same class, and predict the most frequent class among them (majority rule)
3. $\widehat{y}_{\cdot}=y_{j^\star}$ where $${j^\star}=\underset{i=1,\cdots,n}{\text{argmax}}\{\theta_i\}$$ where $$\theta_i=\cos(\boldsymbol{p}_{\cdot},\boldsymbol{p}_{i})=\displaystyle{\frac{\boldsymbol{p}_{\cdot}\cdot\boldsymbol{p}_{i}}{\|\boldsymbol{p}_{\cdot}\|\|\boldsymbol{p}_{i}\|}}$$
4. $\widehat{y}_{\cdot}=y_{i^\star}$ where $${i^\star}=\underset{i=1,\cdots,n}{\text{argmin}}\{KL_{\cdot|i}\}$$ and $KL_{\cdot|i}=\displaystyle{\sum_{j=1}^n p_{\cdot,j}\log\frac{p_{\cdot,j}}{p_{i,j}}}$ (but one can select another divergence)
5. $\widehat{y}_{\cdot}=y_{j^\star}$ where ${j^\star}=\underset{j\in\mathcal{J}}{\text{argmax}}\{p_{\cdot,j}\}$ and $\mathcal{J}$ is a random sample of $\kappa$ observations, one from each group (one-shot procedure): the predicted class is the one of the observation, among those drawn, most likely to be in the same class

Heuristically, it can be related to some $k$-nearest-neighbors strategy: we assign the label that most neighbors share, where the total distance is a weighted sum of the componentwise distances (for the logistic regression). A toy implementation of some of these rules is sketched below.
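To fix ideas, here is a toy implementation of rules 1, 2 and 3 (my own sketch, with made-up probabilities: in practice they would come from the model fitted on the Siamese dataset),

```r
# toy implementation of prediction rules 1, 2 and 3, with made-up probabilities
set.seed(1)
n = 8
y = sample(LETTERS[1:3], n, replace = TRUE)   # labels of the n training points
p_new   = runif(n)                  # p_{.,j} : same-class probability, new obs. vs obs. j
P_train = matrix(runif(n*n), n, n)  # p_{i,j} : same-class probabilities, training pairs

# rule 1: class of the most similar observation
rule1 = function(p_new, y) y[which.max(p_new)]
# rule 2: majority rule among observations with probability above a threshold s
rule2 = function(p_new, y, s = 0.5) names(which.max(table(y[p_new > s])))
# rule 3: class of the observation whose probability profile is the closest
# to the new one, in the cosine-similarity sense
rule3 = function(p_new, P_train, y) {
  cos_sim = apply(P_train, 1, function(p_i)
    sum(p_new * p_i) / (sqrt(sum(p_new^2)) * sqrt(sum(p_i^2))))
  y[which.max(cos_sim)]
}
c(rule1(p_new, y), rule2(p_new, y), rule3(p_new, P_train, y))
```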

• Simulation study

In order to test this technique, let us generate a multinomial model where $y$ has 10 possible labels, with 6 (independent) covariates $x_1,\cdots,x_6$, and $\mathbb{P}[Y=A_k|\boldsymbol{X}=\boldsymbol{x}]\propto \exp[\boldsymbol{x}^\top\boldsymbol{\beta}_k]$ (where the coefficients $\boldsymbol{\beta}_k$ were generated randomly) for $k\in\{1,2,\cdots,10\}$, with $n=700$ observations.

```r
n = 700
X1 = rnorm(n)
X2 = rnorm(n)
X3 = rnorm(n)
X4 = rnorm(n)
X5 = rnorm(n)
X6 = rnorm(n)
X = cbind(1, X1, X2, X3, sqrt(abs(X4)), X5*X1, X6)
k = 10
PARAM = matrix(rnorm(k*6), k, 6)
PARAM[,1] = PARAM[,1] - 1
PARAM = cbind(PARAM, 0)
P = matrix(NA, n, k-1)
for(j in 1:(k-1)) P[,j] = X %*% (PARAM[j,]) + rnorm(n)
P = cbind(P, 0)
S = apply(exp(P), 1, sum)
Pb = exp(P)/S
tirage = function(i){
  sample(1:10, size=1, prob = Pb[i,])
}
Y = LETTERS[Vectorize(tirage)(1:n)]
dbase = data.frame(Y=as.factor(Y), X1, X2, X3, X4, X5)
```

In the previous paragraph, I suggested taking the most likely class. Being wrong means that it was not the first choice. But perhaps being the second or the third choice is not that bad, actually. So in my simulations, I look at the proportion of predictions where the true class is the predicted one (top 1), where it is either the most likely or the second most likely (top 2), or where it is in the top 3. That will be on my $x$-axis. I draw some lines, but we simply have three points (top 1, top 2 and top 3). I compute the proportion of correct predictions, using 10-fold cross-validation. The black lines correspond to the methods described above. The red one is the standard multinomial model (with a logistic link function). For the Siamese model, I tried several models. I tried a logistic regression, and some smooth version (GAM) on top

and a classification tree, on the left, as well as some random forest on the right, below.

It looks like the multinomial approach always performs better than any Siamese one… and to be honest, I am disappointed.
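As a side note, the top-$k$ metric used on the $x$-axis can be computed from a matrix of predicted class probabilities; here is a minimal sketch (not the code used for the figures above),

```r
# minimal sketch: top-k accuracy from a matrix of predicted class probabilities
top_k_accuracy = function(prob, y_true, k = 1) {
  # prob   : n x K matrix of probabilities, with columns named by class
  # y_true : vector of observed classes
  hits = sapply(seq_along(y_true), function(i) {
    y_true[i] %in% names(sort(prob[i, ], decreasing = TRUE))[1:k]
  })
  mean(hits)
}
# small example with made-up probabilities for 3 classes
prob = matrix(c(.6, .3, .1,
                .2, .5, .3), 2, 3, byrow = TRUE,
              dimnames = list(NULL, c("A", "B", "C")))
top_k_accuracy(prob, c("A", "C"), k = 1)   # 0.5: the first is a hit, not the second
top_k_accuracy(prob, c("A", "C"), k = 2)   # 1: both are in the top 2
```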

Here is the code I did use when I considered a logistic regression on the Siamese dataset,

```r
library(nnet)   # for multinom()
credit = dbase  # use the simulated dataset above (the original code works with a dataset named credit)

set.seed(1)
kfold = sample(rep(1:10, n/10))

KFOLDglm = function(i){
  i_test        = which(kfold == i)
  i_calibration = which(kfold != i)
  y = credit[i_calibration, "Y"]
  tirage = function(){
    v = c(sample(i_calibration[y == levels(y)[1]],  size=1),
          sample(i_calibration[y == levels(y)[2]],  size=1),
          sample(i_calibration[y == levels(y)[3]],  size=1),
          sample(i_calibration[y == levels(y)[4]],  size=1),
          sample(i_calibration[y == levels(y)[5]],  size=1),
          sample(i_calibration[y == levels(y)[6]],  size=1),
          sample(i_calibration[y == levels(y)[7]],  size=1),
          sample(i_calibration[y == levels(y)[8]],  size=1),
          sample(i_calibration[y == levels(y)[9]],  size=1),
          sample(i_calibration[y == levels(y)[10]], size=1))
    names(v) = levels(y)
    return(v)
  }
  # benchmark: standard multinomial regression on the calibration fold
  LogisticModel <- multinom(Y ~ ., data = credit[i_calibration,], trace=FALSE)
  # build the Siamese dataset on the calibration fold
  comparaisonx = function(base, x=base[1,]){
    mix_base = base
    for(j in 1:ncol(base)){
      xj = as.numeric(x[j]) - base[,j]
      mix_base[,j] = (xj)
    }
    mix_base
  }
  comparaisony = function(base, y=base[1]){
    as.factor(base == y)
  }
  creditx = credit[, -which(names(credit) == "Y")]
  nc = length(i_calibration)
  B = comparaisonx(base = creditx[i_calibration[2:nc],], x=creditx[i_calibration[1],])
  B$Y = comparaisony(base = credit[i_calibration[2:nc],"Y"], y=credit[i_calibration[1],"Y"])
  for(i in 2:(nc-1)){
    B0 = comparaisonx(base = creditx[i_calibration[(i+1):nc],], x=creditx[i_calibration[i],])
    B0$Y = comparaisony(base = credit[i_calibration[(i+1):nc],"Y"], y=credit[i_calibration[i],"Y"])
    B = rbind(B, B0)
  }
  credit_mix = B
  # binary (logistic) model on the Siamese dataset
  OneShotLogisticModel <- glm(Y ~ ., data = credit_mix, family=binomial)
  A_ref = table(credit[i_calibration,"Y"]) / length(i_calibration)
  vect_oneshot = function(i){
    B2 = comparaisonx(base = creditx[i_calibration,], x=creditx[i,])
    predict(OneShotLogisticModel, type="response", newdata=B2)
  }
  prediction_oneshot = function(i, type=1){
    B2 = comparaisonx(base = creditx[i_calibration,], x=creditx[i,])
    p = predict(OneShotLogisticModel, type="response", newdata=B2)
    y = credit[i_calibration,"Y"]
    base = data.frame(p, y)
    base = base[rev(order(base$p)),]
    if(type==1){T = table(base$y[1:11])
      return(names(which.max(T)))}
    if(type==2){return(base$y[1])}
    if(type==3){A = table(base$y[1:10])/10
      T = A/A_ref
      return(names(which.max(T)))}
    if(type==4){
      costheta = rep(NA, length(i_calibration))
      for(j in 1:length(i_calibration)){
        vecteur_proba = vect_oneshot(i_calibration[j])
        costheta[j] = sum(vecteur_proba*p)/(sqrt(sum(vecteur_proba^2))*sqrt(sum(p^2)))
      }
      return(y[which.max(costheta)])}
    if(type==5){
      kl = rep(NA, length(i_calibration))
      for(j in 1:length(i_calibration)){
        vecteur_proba = vect_oneshot(i_calibration[j])
        kl[j] = sum(p*log(vecteur_proba/p))
      }
      return(y[which.max(as.vector(kl))])}
    if(type==6){ ## one shot: randomly draw one observation per class, and see which one is the most credible
      y = credit[i_calibration,"Y"]
      tirage = function(){
        v = c(sample(i_calibration[y == levels(y)[1]],  size=1),
              sample(i_calibration[y == levels(y)[2]],  size=1),
              sample(i_calibration[y == levels(y)[3]],  size=1),
              sample(i_calibration[y == levels(y)[4]],  size=1),
              sample(i_calibration[y == levels(y)[5]],  size=1),
              sample(i_calibration[y == levels(y)[6]],  size=1),
              sample(i_calibration[y == levels(y)[7]],  size=1),
              sample(i_calibration[y == levels(y)[8]],  size=1),
              sample(i_calibration[y == levels(y)[9]],  size=1),
              sample(i_calibration[y == levels(y)[10]], size=1))
        names(v) = levels(y)
        return(v)
      }
      pd = rep(NA, 101)
      for(ix in 1:101){
        ids = tirage()
        B2 = comparaisonx(base = creditx[ids,], x=creditx[i,])
        p = predict(OneShotLogisticModel, type="response", newdata=B2)
        pd[ix] = levels(y)[which.max(p)]
      }
      levels(y)[which.max(table(pd))]
    }
  }
  PRED0  = as.character(credit[i_test,"Y"])
  PRED1  = as.character(predict(LogisticModel, type = "class", newdata=credit[i_test,]))
  PRED21 = as.character(Vectorize(function(i) prediction_oneshot(i,type=1))(i_test))
  PRED22 = as.character(Vectorize(function(i) prediction_oneshot(i,type=2))(i_test))
  PRED23 = as.character(Vectorize(function(i) prediction_oneshot(i,type=3))(i_test))
  PRED24 = as.character(Vectorize(function(i) prediction_oneshot(i,type=4))(i_test))
  PRED25 = as.character(Vectorize(function(i) prediction_oneshot(i,type=5))(i_test))
  PRED26 = as.character(Vectorize(function(i) prediction_oneshot(i,type=6))(i_test))
  B = data.frame(PRED0, PRED1, PRED21, PRED22, PRED23, PRED24, PRED25, PRED26)
  B
}

pb = txtProgressBar(min = 0, max = 1, style = 3)  # progress bar used below
PREDICTION = KFOLDglm(1)                          # first fold; the loop below adds folds 2 to 10
s = 1/100; setTxtProgressBar(pb, s*2)
for(i in 2:10){
  PREDICTION = rbind(PREDICTION, KFOLDglm(i))
  s = s + 1/100; setTxtProgressBar(pb, s*2)}
for(j in 1:5) PREDICTION[,j] = as.character(PREDICTION[,j])
L = list()
v = mean(PREDICTION[,1] != PREDICTION[,2])
names(v) = "logistic"
L[["logistic"]] = v
v = c(mean(PREDICTION[,1] != PREDICTION[,3]),
      mean(PREDICTION[,1] != PREDICTION[,4]),
      mean(PREDICTION[,1] != PREDICTION[,5]),
      mean(PREDICTION[,1] != PREDICTION[,6]),
      mean(PREDICTION[,1] != PREDICTION[,7]),
      mean(PREDICTION[,1] != PREDICTION[,8]))
names(v) = c("top10", "max", "10norm", "cos", "KL", "OS")
L[["glm"]] = v
```

# Autocalibration for Insurance Pricing with Machine Learning

With Michel Denuit and Julien Trufin, we recently uploaded a joint paper on ArXiv, entitled Autocalibration and Tweedie-dominance for Insurance Pricing with Machine Learning.

Boosting techniques and neural networks are particularly effective machine learning methods for insurance pricing. Often in practice, there are nevertheless endless debates about the choice of the right loss function to be used to train the machine learning model, as well as about the appropriate metric to assess the performance of competing models. Also, the sum of fitted values can depart from the observed totals to a large extent, and this often confuses actuarial analysts. The lack of balance inherent to training models by minimizing deviance outside the familiar GLM with canonical link setting has been empirically documented in Wüthrich (2019, 2020), who attributes it to the early stopping rule in gradient descent methods for model fitting. The present paper aims to further study this phenomenon when learning proceeds by minimizing Tweedie deviance. It is shown that minimizing deviance involves a trade-off between the integral of weighted differences of lower partial moments and the bias measured on a specific scale. Autocalibration is then proposed as a remedy. This new method to correct for bias adds an extra local GLM step to the analysis. Theoretically, it is shown that it implements the autocalibration concept in pure premium calculation and ensures that balance also holds on a local scale, not only at portfolio level as with existing bias-correction techniques. The convex order appears to be the natural tool to compare competing models, shedding new light on the diagnostic graphs and associated metrics proposed by Denuit et al. (2019).

In this paper, we started with a simple observation: with a GLM (like a Poisson regression) we have unbiased predictions, in the sense that $\widehat{y}_1+\cdots+\widehat{y}_n=y_1+\cdots+y_n$ (on the training dataset; we should expect $\widehat{y}_1+\cdots+\widehat{y}_n\approx y_1+\cdots+y_n$ on a validation dataset). But with any machine learning algorithm, this is no longer the case. For instance, with a boosting algorithm, we can end up with a significant bias. This is an application on an insurance dataset from the CASdatasets package, with a GLM (Poisson regression) on the left, an additive smooth version (GAM) in the middle, and some boosting algorithm on the right. Here is the dispersion of the predictions, the $\widehat{y}_i$'s,

Let $\widehat{\pi}$ denote a fitted model. If $\widehat{\pi}$ were close to the true parameter of the Poisson model, $\mu$ where $\mu(\boldsymbol{x})=\mathbb{E}[Y|\boldsymbol{X}=\boldsymbol{x}]$, then the function $s\mapsto\mathbb{E}[Y|\widehat{\pi}(\boldsymbol{X})=s]$ should be very close to the first diagonal (since $\mathbb{E}[Y|\mu(\boldsymbol{X})=s]=s$). And this is what we observe below, for the GLM. But the boosting algorithm has a significant bias.
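As a small self-contained illustration of these two diagnostics (my own sketch, on simulated Poisson data, not the CASdatasets application used for the figures; gbm is used here as one possible boosting implementation): compare the sum of fitted values with the observed total, and estimate $s\mapsto\mathbb{E}[Y|\widehat{\pi}(\boldsymbol{X})=s]$ by a local regression of $y$ on the predicted values,

```r
# sketch on simulated Poisson data: balance check and calibration curve
library(gbm)      # one possible boosting implementation
library(locfit)   # local regression, to estimate E[Y | prediction = s]
set.seed(1)
n  = 5000
x1 = rnorm(n); x2 = rnorm(n)
y  = rpois(n, exp(-1 + x1 - 0.5*x2))
db = data.frame(y, x1, x2)
reg_glm = glm(y ~ x1 + x2, family = poisson, data = db)
reg_gbm = gbm(y ~ x1 + x2, data = db, distribution = "poisson",
              n.trees = 100, interaction.depth = 2, shrinkage = 0.05)
p_glm = predict(reg_glm, type = "response")
p_gbm = predict(reg_gbm, newdata = db, n.trees = 100, type = "response")
# balance: the GLM matches the observed total, the boosting model usually does not
c(observed = sum(y), glm = sum(p_glm), gbm = sum(p_gbm))
# calibration curves, to be compared with the first diagonal
plot(locfit(y ~ p_glm), xlab = "prediction", ylab = "E[Y | prediction]")
lines(locfit(y ~ p_gbm), col = "red")
abline(a = 0, b = 1, lty = 2)
```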

Furthermore, the bias is not the same everywhere. We can see it more precisely on a quantile version of the $x$-axis

that is $u\mapsto\mathbb{E}[Y|\widehat{\pi}(\boldsymbol{X})=F_{\widehat{\pi}}^{-1}(u)]$. Here is a multiplicative version of that bias, $u\mapsto\mathbb{E}[Y|\widehat{\pi}(\boldsymbol{X})=F_{\widehat{\pi}}^{-1}(u)]/F_{\widehat{\pi}}^{-1}(u)$

The idea of autocalibration is to use that multiplicative factor to correct for the bias. Thus, if $\widehat{\pi}(\boldsymbol{x})$ is close to the 75% upper quantile, then the true value should be 20% larger for the boosting algorithm. Here is the new distribution of the predictions, with that correction

and we can observe that $u\mapsto\mathbb{E}[Y|\widehat{\pi}(\boldsymbol{X})=F_{\widehat{\pi}}^{-1}(u)]$ is now much closer to the first diagonal (below is the multiplicative bias)

We discuss in the paper this correction for local bias, so that $u\mapsto\mathbb{E}[Y|\widehat{\pi}(\boldsymbol{X})=F_{\widehat{\pi}}^{-1}(u)]$ becomes as close as possible to the first diagonal… see https://arxiv.org/abs/2103.03635 for a complete version of the paper, and https://github.com/freakonometrics/autocalibration/ for the complete version of the R codes
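In the meantime, here is a minimal sketch of the multiplicative correction idea, on simulated data (my own illustration, not the code from the repository, and with a plain local regression instead of the local GLM step used in the paper),

```r
# sketch of the multiplicative correction, on simulated Poisson data
# (illustration only; the paper uses an extra local GLM step)
library(gbm)
library(locfit)
set.seed(1)
n  = 5000
x1 = rnorm(n); x2 = rnorm(n)
y  = rpois(n, exp(-1 + x1 - 0.5*x2))
db = data.frame(y, x1, x2)
reg = gbm(y ~ x1 + x2, data = db, distribution = "poisson",
          n.trees = 100, interaction.depth = 2, shrinkage = 0.05)
pi_hat = predict(reg, newdata = db, n.trees = 100, type = "response")
# local estimate of E[Y | pi_hat(X) = s], evaluated at each predicted value
calib  = locfit(y ~ pi_hat)
e_y    = predict(calib, newdata = pi_hat)
ratio  = e_y / pi_hat        # multiplicative bias, as a function of the score
pi_cor = pi_hat * ratio      # corrected predictions
# global balance before / after the correction
c(raw = sum(pi_hat), corrected = sum(pi_cor), observed = sum(y))
```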

# Spring break

I will be off for a few days…

# Unusual data for insurance, joint research initiative

We recently started a joint research initiative, funded by the AXA Research Fund, to work on unusual data for insurance

Insurers sometimes lack information at the time of claim submission, such as the structure of a building, the presence of health risks common to a group of people, or the spatial diffusion of a pandemic. Unusual data, such as satellite images, personal network connections, and tweets can be used to populate this information gap. In this joint research initiative, we will use images, network data, and texts for risk analysis from an actuarial perspective. Specifically, the project will explore how using unusual data can contribute to smoother claims assessment and reduced data quality risk, allowing for better risk selection and pricing. The project will look at three types of unusual data: pictures/satellite images, network data, and text data.

More information will be shared via a dedicated website (https://jridata.github.io/), and I will also mention interesting papers, conferences and open-source code on this blog…

# Principal Component Analysis: A Generalized Gini Approach

Our paper with Stéphane Mussard and Téa Ouraga, entitled Principal Component Analysis: A Generalized Gini Approach, is finally out in the European Journal of Operational Research.

A principal component analysis based on the generalized Gini correlation index is proposed (Gini PCA). The Gini PCA generalizes the standard PCA based on the variance. It is shown, in the Gaussian case, that the standard PCA is equivalent to the Gini PCA. It is also proven that the dimensionality reduction based on the generalized Gini correlation matrix, that relies on city-block distances, is robust to outliers. Monte Carlo simulations and an application on cars data (with outliers) show the robustness of the Gini PCA and provide different interpretations of the results compared with the variance PCA.

# Some general thoughts on Partial Dependence Plots with correlated covariates

The partial dependence plot is a nice tool to analyse the impact of some explanatory variables when using nonlinear models, such as a random forest or some gradient boosting. The idea (in dimension 2) is the following: given a model $m(x_1,x_2)$ for $\mathbb{E}[Y|X_1=x_1,X_2=x_2]$, the partial dependence plot for variable $x_1$ in model $m$ is the function $p_1$ defined as $x_1\mapsto\mathbb{E}_{\mathbb{P}_{X_2}}[m(x_1,X_2)]$. This can be approximated, using some dataset, by $$\widehat{p}_1(x_1)=\frac{1}{n}\sum_{i=1}^n m(x_1,x_{2,i})$$ My concern here is the interpretation of that plot when there are some (strongly) correlated covariates. Let us generate some dataset to start with,

```r
n = 1000
library(mnormt)
r = .7
set.seed(1234)
X = rmnorm(n, mean = c(0,0), varcov = matrix(c(1,r,r,1),2,2))
Y = 1 + X[,1] - 2*X[,2] + rnorm(n)/2
df = data.frame(Y=Y, X1=X[,1], X2=X[,2])
```

As we can see, the true model here is $y_i=\beta_0+\beta_1 x_{1,i}+\beta_2x_{2,i}+\varepsilon_i$ where $\beta_1 =1$, but the two variables are positively correlated, and the second one has a strong negative impact. Note that here

```r
reg = lm(Y~., data=df)
summary(reg)

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  1.01414    0.01601   63.35   <2e-16 ***
X1           1.02268    0.02305   44.37   <2e-16 ***
X2          -2.03248    0.02342  -86.80   <2e-16 ***
```

If we estimate a wrongly specified model $y_i=b_0+b_1 x_{1,i}+\eta_i$, we would get

```r
reg1 = lm(Y~X1, data=df)
summary(reg1)

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  1.03522    0.04680  22.121   <2e-16 ***
X1          -0.44148    0.04591  -9.616   <2e-16 ***
```

Thus, on the properly specified model, $\widehat{\beta}_1\approx+1.02$, while $\widehat{b}_1\approx-0.44$ on the misspecified one.

Now, let us look at the partial dependence plot of the correct model, using standard dedicated R packages,

```r
library(pdp)
pdp::partial(reg, pred.var = "X1", plot = TRUE, plot.engine = "ggplot2")
```

which is the straight line $y=1+x$, corresponding to $y=\beta_0+\beta_1x$.

```r
library(DALEX)
plot(DALEX::single_variable(DALEX::explain(reg, data=df), variable = "X1", type = "pdp"))
```

which corresponds to the previous graph. Here, it is also possible to create our own function to compute that partial dependence plot,

```r
pdp1 = function(x1){
  nd = data.frame(X1=x1, X2=df$X2)
  mean(predict(reg, newdata=nd))
}
```

which gives the straight line below (the dotted line is the theoretical one, $y=1+x$),

```r
vx = seq(-3.5, 3.5, length=101)
vpdp1 = Vectorize(pdp1)(vx)
plot(vx, vpdp1, type="l")
abline(a=1, b=1, lty=2)
```

which is very different from the univariate regression on $x_1$,

```r
abline(reg1, col="red")
```

Actually, the latter is very consistent with a local regression, only on $x_1$,

```r
library(locfit)
lines(locfit(Y~X1, data=df), col="blue")
```

Now, to get back to the definition of the partial dependence plot, $x_1\mapsto\mathbb{E}_{\mathbb{P}_{X_2}}[m(x_1,X_2)]$, in the context of correlated variables, I was wondering if it would not make more sense to consider some local version, something like $x_1\mapsto\mathbb{E}_{\mathbb{P}_{X_2|X_1}}[m(x_1,X_2)]$. My intuition was that, somehow, it did not make much sense to consider any $X_2$ while $X_1$ was fixed (and equal to $x_1$); it would make more sense to look only at the $X_2$'s that are plausible given the value of $X_1$. A natural estimate could then be based on the $k$ nearest neighbors, i.e. $$\tilde{p}_1(x_1)=\frac{1}{k}\sum_{i\in\mathcal{V}_k(x_1)} m(x_1,x_{2,i})$$ where $\mathcal{V}_k(x_1)$ is the set of indices of the $k$ observations with $x_{1,i}$ closest to $x_1$, i.e.

```r
lpdp1 = function(x1){
  nd = data.frame(X1=x1, X2=df$X2)
  idx = rank(abs(df$X1-x1))
  mean(predict(reg, newdata=nd[idx<50,]))
}
vlpdp1 = Vectorize(lpdp1)(vx)
lines(vx, vlpdp1, col="darkgreen", lwd=2)
```

Surprisingly (?), this local partial dependence plot gives a curve that corresponds to the simple regression… which actually makes sense here: with $(X_1,X_2)$ jointly Gaussian and correlation $r=0.7$, we have $\mathbb{E}[X_2|X_1=x_1]=r\,x_1$, so the local version is close to $\beta_0+(\beta_1+r\beta_2)x_1=1-0.4\,x_1$, which is precisely the (theoretical) slope of the misspecified univariate regression.
