
Discounting the Future?

This post was written with Béatrice Cherrier (Research Director, CNRS-ENSAE / CREST).

The first lessons in insurance and financial mathematics address discounting and the value of time, to borrow Christian Gollier’s expression, because insurers must account for this temporal dimension in medium-term annuity calculations. But do these discounting calculations, used for centuries to reflect individual decisions (of policyholders, investors, companies), still make sense when used to guide public policy decisions with long-term consequences, like climate policies?
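As a reminder (a standard textbook formula, not specific to the post), the value today of future cash flows F_t, discounted at annual rate r, is

\displaystyle PV=\sum_{t=1}^{T}\frac{F_t}{(1+r)^t}

and over long horizons the choice of r dominates everything else: one dollar a century from now is worth about 37 cents at r=1\% but barely 5 cents at r=3\%, which is why the rate used in climate cost-benefit analyses is so contentious.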

When Kenneth Arrow joined the IPCC team in 1993, he expressed this concern to the coordinator of certain chapters: discounting in climate economics is as necessary as it is controversial. He wrote: “Your outline is very complete, with one exception. There needs to be discussion of discount rates. To a considerable extent, suggested policies require present costs (reduced carbon consumption) to prevent future disutilities and costs. Clearly, the tradeoff between present and future is very important, controversial though it be” (Cherrier and Duarte 2024).

The history of this transfer of a mathematical tool from the individual to the collective dimension since the 1930s, summarized here, is rich with lessons.

Julien Trufin, on “Predictive Modeling and Balance Property through Autocalibration”

This Thursday, Julien Trufin will give a talk at the CANSSI SSC Seminar, live from Montréal.

Machine learning techniques provide actuaries with predictors exhibiting high correlation with claim frequencies and severities. However, these predictors generally fail to achieve financial equilibrium and thus do not qualify as pure premiums. Autocalibration effectively addresses this issue since it ensures that every group of policyholders paying the same premium is on average self-financing. This talk proposes to look at recent results concerning autocalibration. In particular, we present a new characterization of autocalibration which enables us to identify whether a predictor is autocalibrated or not, we study a method (called balance correction) for obtaining an autocalibrated predictor from any regression model, we highlight the effect of balance correction on resulting pure premiums, and finally we go through some performance criteria that are particularly relevant for autocalibrated predictors.
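To make the balance property concrete, here is a minimal sketch (our own illustration, not taken from the talk): autocalibration requires E[Y|\widehat{\pi}(X)=s]=s for every premium level s, which we can eyeball by regressing observed claims on predicted premiums with a local regression,

set.seed(1)
n = 10000
x = runif(n)
y = rpois(n, exp(-1 + x))      # true claim frequencies depend on x
pi_hat = exp(-1 + .5 * x)      # a deliberately miscalibrated predictor
fit = loess(y ~ pi_hat)        # estimate E[Y | pi_hat = s]
s = seq(min(pi_hat), max(pi_hat), length = 101)
plot(s, predict(fit, data.frame(pi_hat = s)), type = "l")
abline(a = 0, b = 1, lty = 2)  # an autocalibrated predictor sits on the diagonal

Roughly speaking, the balance correction mentioned in the abstract replaces the predictor by an estimate of E[Y | pi_hat], which is autocalibrated by construction.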

Julien is actually with us for the entire week.

The role of government versus private sector provision of insurance

A short paper, “The role of government versus private sector provision of insurance”, has just been published in the Journal of Risk and Insurance.

Insurance markets are important for managing risk and promoting economic stability, since they play a key role in mitigating financial losses from unpredictable events such as natural disasters, cyberattacks, and health crises. However, these markets often face challenges, including market failures, information asymmetries, and correlated risks that can destabilize private insurers. In response, governments frequently intervene in insurance markets, either by providing insurance directly or by acting as a reinsurer of last resort. The interaction between government and private sector provision of insurance raises interesting and important questions about the appropriate role of each player in ensuring market efficiency and protecting individuals and businesses from catastrophic risks.

Selection bias in insurance: why portfolio-specific fairness fails to extend market-wide

With Marie-Pier Côté and Olivier Côté, we recently uploaded a short note, “Selection bias in insurance: why portfolio-specific fairness fails to extend market-wide”, now available on SSRN.

Fairness centres on people. In insurance, the scope of fairness should be the entire insured population, not solely an insurer’s clients. However, each insurance company’s portfolio represents a possibly skewed subsample. Models fit to these selection-biased data do not generalise well for the broader population of insureds. Two biases stem from portfolio composition: representation bias, when large prediction errors are made on individuals from subpopulations infrequently observed, and selection bias, when underwriting and marketing skew the portfolio away from the insured population. We examine how portfolio composition affects fair premium methodologies for mitigating direct and indirect discrimination on a protected attribute. We illustrate how unfairness mitigation based on a selection-biased portfolio does not yield a fair market from the perspective of insureds. Relying on causal inference and a portfolio composition indicator, we describe the selection mechanism and determine conditions under which each bias affects various fairness-adjusted premiums. We propose a method to recover the population-wide fairness-adjusted premiums from selection-biased data, by using a (third-party provided) unbiased estimate of the prohibited attribute distribution. We show that this approach effectively mitigates selection bias but leads to overall premiums that are not balanced. In a limiting case, we show that portfolio-specific fairness-aware premiums can lead to a market-wide unawareness strategy: portfolio composition opens the back door to proxy discrimination.
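As a toy illustration of the selection mechanism (ours, not taken from the note), suppose underwriting over-samples one level of a protected attribute; the flat premium computed on the portfolio then differs from the population-wide one,

set.seed(42)
n = 1e5
s = rbinom(n, 1, .5)                    # protected attribute, in the population
y = rgamma(n, shape = 2, rate = 2 - s)  # losses correlated with s
idx = sample(n, 2e4, prob = ifelse(s == 1, 4, 1))  # underwriting skews towards s=1
c(population = mean(y), portfolio = mean(y[idx]))
# the portfolio-based flat premium does not extend market-wide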

(to be continued…)

Algorithms and Bureaucracy

Last week, I attended a series of talks on algorithms, AI, inequalities, injustice, and so on, and I had the feeling that many people were fighting the wrong battle, blaming far too many things “on the algorithms”… I will take this opportunity to dig out an article I wrote a few years ago, “L’intelligence artificielle dilue-t-elle la responsabilité ?” (does artificial intelligence dilute responsibility?).

We are being led to believe that artificial intelligence is a revolution. What if it were nothing of the sort? Couldn’t we simply see it as the logic of a process going back at least fifty years? Bureaucracy has pushed us to set up, in every area of daily life, simple procedures that allow everyone to shed all responsibility, to no longer have to exercise intelligence. Algorithms are frightening; we wonder where the “human” is in these decision-making procedures… What if it had already disappeared long ago?

to be continued…

Talk at the Financial Conduct Authority, UK

This morning (Montréal time), I will give a talk for the Financial Conduct Authority in London, in the UK, on “Demystify fairness and discrimination in insurance, and avoid some pitfalls”.

What’s unique about insurance is that even statistical discrimination, which by definition is devoid of malicious intent, poses significant challenges. On the one hand, policymakers would like insurers to treat their policyholders equally, without discrimination based on race, gender, age or other characteristics, even if it could make (statistical) sense to (indirectly) discriminate. On the other hand, discrimination, between risky and less risky policyholders, lies at the core of actuaries’ activities. And risk is often statistically correlated with sensitive characteristics that regulation would like to prohibit insurers from taking into account. The analysis of possible discrimination in decision rules, whether human or algorithmic, is an old subject; most of the concepts date back at least to the 1950s, but recent developments in artificial intelligence have brought these issues back into the spotlight. Massive data facilitate statistical or proxy discrimination, and black-box algorithms do not facilitate understanding. Not to mention the various regulations that make it difficult to collect sensitive information, and ultimately to test whether decisions are discriminatory, especially indirectly.
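To fix ideas, here is a hypothetical sketch of proxy discrimination (mine, not part of the talk): even when the sensitive attribute s is excluded from the model, a correlated, innocuous-looking variable x picks up its signal,

set.seed(123)
n = 1e4
s = rbinom(n, 1, .5)        # sensitive attribute, excluded from the model
x = rnorm(n, mean = s)      # innocuous-looking variable, correlated with s
y = rpois(n, exp(-2 + s))   # claim frequency actually driven by s
summary(glm(y ~ x, family = poisson))$coefficients
# x gets a significant coefficient only because it proxies s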

More neurons in the hidden layer than predictive features in neural nets

This week, we talked about neural networks for the first time, and I mentioned that, in many illustrations of neural networks, there is a layer with fewer neurons than predictive variables,

but sometimes, it can make sense to have more neurons in the layer than predictive variables.

To illustrate, consider a simple example with a single variable x, and a binary outcome y\in\{0,1\},

set.seed(12345)
n = 100
x = c(runif(n), 1 + runif(n), 2 + runif(n))  # three groups along the x-axis
y = rep(c(0, 1, 0), each = n)                # the middle group is labelled 1

We should ensure that observations are in the [0,1] interval,

minmax = function(z) (z-min(z))/(max(z)-min(z))
xm = minmax(x)
df = data.frame(x=xm,y=y)

as we can visualize below

plot(df$x,rep(0,3*n),col=1+df$y)

Here, the blue and the red dots (when y is either 0 or 1) are not linearly separable. The standard activation function in neural nets is the sigmoid

sigmoid = function(x) 1 / (1 + exp(-x))

Let us fit a neural network, with two neurons in the hidden layer

library(nnet)
set.seed(1234)
model_nnet = nnet(y~x,size=2,data=df)

We can then get the weights (neuralweights is from the NeuralNetTools package), and we can visualize the two neurons

library(NeuralNetTools)
w = neuralweights(model_nnet)
x1 = cbind(1, df$x) %*% w$wts$"hidden 1 1"
x2 = cbind(1, df$x) %*% w$wts$"hidden 1 2"
b = w$wts$`out 1`
plot(sigmoid(x1), sigmoid(x2), col = 1 + df$y)


Now, the blue and the red dots (when y is either 0 or 1) are actually linearly separable.

abline(a=-b[1]/b[3],b=-b[2]/b[3])

If we do not specify the seed of the random number generator, we can get a different outcome since, obviously, this model is not identifiable (permuting the two hidden neurons, for instance, yields an equivalent network).

If we now have

set.seed(12345)
n=100
x=c(runif(n),1+runif(n),2+runif(n),3+runif(n))
y=rep(c(0,1,0,1),each=n)
xm = minmax(x)
df = data.frame(x=xm,y=y)
plot(df$x,rep(0,4*n),col=1+df$y)

then we need more neurons (one more, at least)

set.seed(321)
model_nnet = nnet(y~x,size=3,data=df)
w = neuralweights(model_nnet)
x1 = cbind(1,df$x)%*%w$wts$"hidden 1 1"
x2 = cbind(1,df$x)%*%w$wts$"hidden 1 2"
x3 = cbind(1,df$x)%*%w$wts$"hidden 1 3"
b = w$wts$`out 1`
library(scatterplot3d)
s3d = scatterplot3d(x = sigmoid(x1), y = sigmoid(x2),
                    z = sigmoid(x3), color = 1 + df$y)

Once again, we have been able to (linearly) separate the blue and the red points (just imagine the plane; I did not manage to add it to the 3D scatterplot).
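For what it is worth, the object returned by scatterplot3d exposes a plane3d method, so a possible sketch for drawing that separating plane, using the output weights b above (the plane solves b_1+b_2z_1+b_3z_2+b_4z_3=0):

# solving b[1] + b[2]*z1 + b[3]*z2 + b[4]*z3 = 0 for z3
s3d$plane3d(Intercept = -b[1]/b[4], x.coef = -b[2]/b[4],
            y.coef = -b[3]/b[4], lty = "dashed")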

Finally, consider

set.seed(123)
n=500
x1=runif(n)*3-1.5
x2=runif(n)*3-1.5
y = (x1^2+x2^2)<=1
x1m = minmax(x1)
x2m = minmax(x2)
df = data.frame(x1=x1m,x2=x2m,y=y)
plot(df$x1,df$x2,col=1+df$y)

and again, with three neurons (for two explanatory variables), we can, linearly, separate the blue and the red points

set.seed(1234)
model_nnet = nnet(y~x1+x2,size=3,data=df)
w = neuralweights(model_nnet)
x1 = cbind(1,df$x1,df$x2)%*%w$wts$"hidden 1 1"
x2 = cbind(1,df$x1,df$x2)%*%w$wts$"hidden 1 2"
x3 = cbind(1,df$x1,df$x2)%*%w$wts$"hidden 1 3"
b = w$wts$`out 1`
library(scatterplot3d)
s3d = scatterplot3d(x = sigmoid(x1), y = sigmoid(x2),
                    z = sigmoid(x3), color = 1 + df$y)

Here, neural networks play the role of the kernel trick, as coined in Koutroumbas, K. & Theodoridis, S. (2008), Pattern Recognition, Academic Press.

The m=√p rule for random forests

A couple of days ago, in our lab session, we discussed random forests and, since the lab was based on the example in ISLR, we had a quick discussion about the random choice of features, and the “m=\sqrt{p}” rule.

Interestingly, we can play a bit with that rule: try all possible choices of m, and then do it all again, each time on a different train/test split,

library(randomForest)
library(ISLR2)
set.seed(123)

sim = function(t){
  # one replication (t is just the replication index): a new 70/30 split
  train = sample(nrow(Boston), size = nrow(Boston)*.7)
  subsim = function(i){
    # fit a random forest drawing mtry = i features at each node
    rf.boston = randomForest(medv ~ ., data = Boston,
                             subset = train, mtry = i)
    yhat.rf = predict(rf.boston, newdata = Boston[-train, ])
    mean((yhat.rf - Boston[-train, "medv"])^2)
  }
  Vectorize(subsim)(2:12)
}
M = Vectorize(sim)(1:499)  # 499 replications; M is 11 x 499

and now we can plot the MSE on the test dataset as a function of m, the number of features selected at each node,

boxplot(t(M))

or more clearly

vm=apply(M,1,mean)
plot(2:12,vm,type="b",pch=19,ylim=c(10.5,15))
abline(v=sqrt(12),col="red")

Even if, here, the “m=\sqrt{p}” rule might not be optimal, we can see that using a random forest, where m<p features are drawn at each node, instead of a bagging strategy, where m=p, could improve predictions (and not only make the code run faster).

CASdatasets 1.2.0

Nearly ten years ago, Christophe Dutang and I launched a curated collection of datasets featured in Computational Actuarial Science with R, bundled in the CASdatasets R package. This package now offers an extensive range of actuarial datasets, serving as a vital resource for students, educators, and researchers alike. We’re excited to unveil version 1.2.0, which includes new vignettes created this summer with Ewen Gallic and Julien Siharath. Explore the latest additions at

https://dutangc.github.io/CASdatasets/index.html

and feel free to contribute your own applications of these datasets! Note that a DOI has been assigned to the package:

https://doi.org/10.57745/P0KHAG
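Once the package is installed (installation instructions are on the page above), loading one of the classical datasets is, for instance,

library(CASdatasets)
data(freMTPL2freq)  # French motor third-party liability, claim frequencies
str(freMTPL2freq)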

How to Go Beyond the Coldness of Numbers and Take Action?

 This article was written with Nicolas Marescaux, originally in French.

Today, our modern life relies largely on numbers. They guide most collective decisions and many individual choices. For Lord Kelvin [1], “If you cannot measure it, you cannot improve it.” In other words, to make a good decision, you must first measure well. But is that enough? IPCC reports have been compiling data and figures for decades, announcing a short-term catastrophe. And yet, nothing happens. “The modern man scorns imagination,” stated Stéphane Mallarmé in 1897. Isn’t it this subjectivity of our imagination that could save us?


Calculating an LOOCV MSE by hand

Last week, we had a “mid-term” exam for our introduction to statistical learning course. The question is simple: consider three points (x_i,y_i), here \{(0,2),(2,2),(3,1)\}. Considering linear models estimated using least squares, what would be the leave-one-out cross-validation MSE?

I like this exercise since we can compute everything easily, by hand. Since at each step we remove one single observation, only two observations remain in the sample. With two points, fitting a linear model is straightforward (whatever the technique considered): here, it is simply the straight line that passes through the two remaining points. And since we have the straight line (without even having to minimize a sum of squared errors), we have the error committed on the omitted observation. This is exactly what we see in the drawing below.

In other words, the LOOCV MSE is here \operatorname{MSE}=\frac{1}{n}\sum_{i=1}^{n}\left(Y_i-\hat{Y}_i^{(-i)}\right)^2, where, intuitively, \hat{Y}_i^{(-i)} denotes the prediction at x_i from the model fitted on the other n-1 observations. Removing each point in turn, the line through the two remaining points misses the omitted one by 2, 2/3 and 1, respectively, so that \operatorname{MSE}=\frac{1}{3}\left(2^2+\frac{2^2}{3^2}+1^2\right)=\frac{1}{27}\left(36+4+9\right)=\frac{49}{27}. Note that we can also use R to compute that quantity,

x = c(0,2,3)
y = c(2,2,1)
df = data.frame(x=x, y=y)
yp = rep(NA,3)
for(i in 1:3){
  # fit on the two remaining points, predict the omitted one
  reg = lm(y~x, data=df[-i,])
  yp[i] = predict(reg, newdata=df)[i]
}
1/3*sum((yp-y)^2)
# [1] 1.814815

which is precisely what we obtained, by hand.

"sendo l'intento mio scrivere cosa utile a chi la intende…"