I grew up in Europe, in France, in a world where the address seemed to go without saying. A house was first located on a street, then placed along that street by a number. In town, one learned very early to navigate with what could be called the ordinary grammar of urban space: numbers increase from the beginning of the street, even numbers are on one side, odd numbers on the other, and the increasing order generally follows the distance from the center. To go from number 56 on a street to number 72, you know you will walk past 16 houses (or buildings) in between. In more rural areas, I also encountered another logic, less intuitive at first but ultimately very telling: one where the number no longer gives the rank of a house, but a distance. Number 72 is now 16 meters further away; in other words, it may well be the house next door. French addressing guides thus speak of metric numbering, where the number corresponds to the distance, in meters, between the beginning of the road and the dwelling, with a “zero point” that may be the town hall or the village church (as recalled in the Guide pratique pour un bon adressage for the communes of Charente).
Personalized Insurance Premiums Cheaper Thanks to AI? Here’s Why It’s a Slippery Slope
This article was initially written in French.
Insurance is based on a principle of solidarity that is undermined by the algorithms tasked with building our profiles. As algorithms become more precise, the bill becomes more personalized, and various “at-risk” profiles may find themselves excluded from insurance schemes because the costs become prohibitive. Personalization has an obvious legitimacy, but it must be reconciled with equitable access to insurance.
It must first be understood that insurance is marked by a fundamental paradox. On the one hand, its very principles assume a collective mechanism in which everyone contributes according to their capacity and benefits from solidarity in the event of a loss. On the other hand, technological advances, ever-larger datasets, and increasingly precise actuarial methods push toward ever greater individualization of premiums.
The challenge is nothing less than reconciling actuarial refinement with the values of redistribution and solidarity on which the insurance profession was founded.
To this tension is added an increasingly demanding legal framework, which prohibits any form of discrimination based on sensitive data, sometimes correlated with risk factors that are nevertheless relevant.
Pricing segmentation
Insurance companies have long used classification as the pillar of their economic model: age, sex, profession, geographic area, claims history…
In 1662, the English statistician John Graunt published his Observations on the Bills of Mortality, a first statistical analysis of London’s death registers. In 1693, the English astronomer Edmond Halley built the first quantified mortality table, allowing the calculation of life expectancy at each age. These works laid the foundations for pricing differentiated by age and sex, which long remained the two main segmentation criteria in life insurance.
At the same time, after the Great Fire of London in 1666, the first fire insurance contracts appeared: companies collected data on the type of construction materials and urban density. In the 18th–19th centuries, premiums were segmented according to the proximity of neighboring buildings and the presence of fire services, giving rise to the first “high-risk zones” and “low-risk zones.”
With the rise of the automobile in the 1910s–1920s, American insurers began systematically recording the number of claims, and the age and sex of drivers. As early as the 1920s, several pricing “classes” were distinguished: young drivers, women drivers, experienced drivers, making it possible to set variable premiums depending on the profile.
Today, actuaries have sophisticated algorithms, machine learning tools, and a flood of data: onboard telematics, connected objects, geolocation, driving or lifestyle behavior… For the insurer, refining segmentation makes it possible to charge each policyholder “their true risk level,” reducing the cross-subsidization effect from good risks to bad ones, while improving overall profitability.
But overly fine pricing reduces pooling; it can make insurance very expensive, even inaccessible for certain high-risk segments. Hence today, actuaries seek a subtle balance, aiming to capture the right information to differentiate profiles, while preserving the viability of the insured community.
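To make the tension concrete, here is a minimal sketch, with purely illustrative numbers: two risk groups, and the premium each one pays under full pooling versus full segmentation.
# two groups: 90% low risk (claim prob. 5%), 10% high risk (claim prob. 25%)
# average cost of a claim: 10,000 (illustrative numbers)
p_low = .05; p_high = .25; w_high = .10; cost = 10000
# full pooling: everyone pays the portfolio's expected loss
((1-w_high)*p_low + w_high*p_high)*cost
[1] 700
# full segmentation: each group pays its own expected loss
c(p_low, p_high)*cost
[1]  500 2500
Segmentation lowers the bill for 90% of the policyholders (from 700 to 500), but raises it from 700 to 2,500 for the remaining 10%: exactly the access problem described above.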
Policyholders and the illusion of win-win personalization
In Europe, the legislative proposal FiDA (Framework for Financial Data Access) would open regulated access for insurers to individuals’ financial data. Its purpose is to refine the understanding of spending and repayment behaviors. In this context, the promise of ultra-personalized pricing arouses both hopes of lower premiums and fears of excessive profiling and significant exclusions.
Faced with this new influx of data, many clients perceive personalization as a win-win approach: if I manage my budget better, I will benefit from a discount; if my saving and repayment habits are judged virtuous, my health premium will decrease; if my financial profile improves, my home insurance will become lighter.
This “pay-as-you-live” or “pay-how-you-drive” logic appeals: individuals believe themselves masters of their insurance cost through their lifestyle choices.
Yet several points deserve to be highlighted.
The principle of pooling is not neutralized: those who cannot adopt the most virtuous behaviors remain dependent on the solidarity of others. Indeed, even if higher-risk individuals pay more individually, those who are less at risk nevertheless continue to bear part of the costs thanks to the principle of pooling.
The asymmetry of information is reinforced: the insurer knows the statistics better than the client does. Personalized offers often rest on correlations, sometimes tenuous, whose scope the client does not grasp.
Very fine personalization can force the most at-risk to over-insure, or on the contrary to give up insurance, weakening the pool.
Thus, even strengthened by access to financial data, “personalization” is not necessarily synonymous with “empowerment” for the consumer.
The legal framework: when the fight against discrimination is required
The development of big data in insurance raises important ethical and legal questions: how far can sensitive variables be exploited to predict risk?
In France and in the European Union, legislation explicitly prohibits discrimination based on protected criteria: ethnic origin, gender, sexual orientation, disability, religious beliefs, etc. The Solvency II Directive (EU) requires insurers to use “transparent” and non-discriminatory risk models.
Unlike the European Union—which bans differentiated pricing based on protected criteria (gender, origin, disability)—the Quebec model offers a more permissive framework. While the Charter of Human Rights and Freedoms of Quebec also prohibits discrimination, it provides exemptions specific to insurers: they can, when a factor is statistically relevant, base pricing on age, sex, or marital status.
This usage, authorized solely on the basis of a correlation, raises questions.
Ethics and social responsibility of insurers
Beyond mere legal compliance, insurers are increasingly judged on their ethical practices and social responsibility by consumer associations and the media, which relay incidents of algorithmic discrimination and exert reputational pressure.
In recent years, insurers have therefore had to ask themselves, collectively, how to guarantee equitable access to their products for vulnerable populations without sacrificing the financial viability of their portfolios. Some innovative models propose “solidarity” formulas or capped premiums to avoid exclusion.
Insurers are also required to show more and more transparency: they must clearly explain their pricing criteria and make the calculation keys accessible, to avoid any feeling of arbitrariness. Finally, they must integrate data protection and privacy from the design stage of their offers (“privacy by design”), in order to preserve trust.
Insurers that manage to reconcile personalization, fairness and inclusion will become reference players for clients concerned with ethics.
Reconciling solidarity and data: a crucial challenge
The challenge, as we see, is considerable.
It is nothing less than reconciling actuarial precision with the values of redistribution and solidarity that founded the insurance profession.
It is in resolving this tension that the future of insurance will be decided: neither pure price discrimination nor simple illusory personalization, but rather a balance allowing each to contribute according to their risk and to benefit in fair measure from the pooling of life’s uncertainties.
The (non-)Ethics of Capitalism
Back in 2018, Gallup ran a survey about honesty and ethical standards, by profession:
More than four in five Americans (84%) again rate the honesty and ethical standards of nurses as “very high” or “high,” earning them the top spot among a diverse list of professions for the 17th consecutive year. At the same time, members of Congress are again held in the lowest esteem, with a majority of Americans (58%) saying they have “low” or “very low” ethical standards. Telemarketers join members of Congress as having a majority of low/very low ratings.

One might wonder whether there is a correlation between ethics and salary. Using the U.S. Bureau of Labor Statistics (BLS) National Occupational Employment and Wage Estimates, one can easily get the average salary in the U.S. for some professions. For the others, one has to dig a bit more.
- Nurses (Registered Nurses) — 84% — $94,480 — BLS OEWS 2023, 29-1141
- Medical doctors (Physicians, all) — 67% — $263,840 — BLS OEWS 2023, 29-1210
- Pharmacists — 66% — $132,750 — BLS OEWS 2023, 29-1051
- High school teachers (Secondary school teachers, except special/CATE) — 60% — $73,800 — BLS OEWS 2023, 25-2031
- Police officers (Police & Sheriff’s Patrol Officers) — 54% — $76,550 — BLS OEWS 2023, 33-3051
- Accountants (Accountants & Auditors) — 42% — $90,780 — BLS OEWS 2023, 13-2011
- Funeral directors (Morticians, Undertakers, and Funeral Arrangers) — 39% — $58,020 — BLS OEWS 2023, 39-4031
- Clergy — 37% — $63,720 — BLS OEWS 2023, 21-2011
- Journalists (News analysts, reporters, journalists) — 33% — $101,430 — BLS OEWS 2023, 27-3023
- Building contractors (Construction Managers) — 29% — $116,960 — BLS OEWS 2023, 11-9021
- Bankers (Loan Officers as proxy) — 27% — $84,490 — BLS OEWS 2023, 13-2072
- Real estate agents (Sales Agents) — 25% — $69,610 — BLS OEWS 2023, 41-9022
- Labor union leaders (Labor Relations Specialists as proxy) — 21% — $82,420 — BLS OEWS 2023, 13-1075
- Lawyers — 19% — $176,470 — BLS OEWS 2023, 23-1011
- Business executives (Chief Executives) — 17% — $258,900 — BLS OEWS 2023, 11-1011
- Stockbrokers (Securities/Financial Services Sales Agents) — 14% — $109,710 — BLS OEWS 2023, 41-3031
- Advertising practitioners (Advertising Sales Agents) — 13% — $75,820 — BLS OEWS 2023, 41-3011
- Telemarketers — 9% — $36,680 — BLS OEWS 2023, 41-9041
- Car salespeople (Retail Salespersons in Automobile Dealers) — 8% — $62,380 — BLS OEWS 2023, NAICS 441100 industry table
- Members of Congress — 8% — $174,000 — Congressional Research Service RL30064
Store all this information in a csv file, together with the honesty/ethics “very high/high” percentages, and load it into R,
download.file("https://freakonometrics.hypotheses.org/files/2025/08/gallup_ethics_salaries.csv", destfile = "data.csv")
df = read.csv("data.csv", stringsAsFactors = FALSE)
and we can get a plot.
plot(df$percent_high, df$avg_salary_usd,
     xlab = "Honesty/Ethics rated 'Very high/High' (%)",
     ylab = "Average salary (USD)",
     pch = 19)
# label the points (labels may overlap a bit)
text(df$percent_high, df$avg_salary_usd, labels = df$profession, pos = 4, cex = 0.6)

At best, there is no link between salaries and honesty / ethics.
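To quantify this, we can also compute correlations, using the same column names as above (Spearman’s rank correlation being more robust to the salary outliers, such as physicians and chief executives); as a sketch,
cor(df$percent_high, df$avg_salary_usd, method = "pearson")
cor(df$percent_high, df$avg_salary_usd, method = "spearman")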
Kyoto (京都), Japan, vs. Montréal, Canada, a first comparison
We just arrived in Kyoto (京都), Japan, from Montréal, Canada. Everything seems very different. I still have in mind the time I spent in Hong Kong (more than a year), but that was 25 years ago… Just to compare, I used the wikipedia pages of Kyoto and Montréal. Quite naturally, I used the pages in French, and that was not a silly choice: the pages in English are more complex to read (with temperatures in both °C and °F, precipitation in both mm and inches, etc).
urlM="https://fr.wikipedia.org/wiki/Montr%C3%A9al"
urlK="https://fr.wikipedia.org/wiki/Kyoto"
download.file(urlM,destfile = "tempMtrl.html")
download.file(urlK,destfile = "tempKt.html")
library(XML)
Then we extract the tables with the important information
tables = readHTMLTable("tempKt.html")
TK = tables[[6]]
TK[] <- lapply(TK, function(col) {
  if (is.character(col)) {
    col <- gsub(",", ".", col)   # decimal commas to dots
    col <- gsub("\\s", "", col)  # drop spaces
  }
  suppressWarnings(as.numeric(col))
})
tables = readHTMLTable("tempMtrl.html")
TM = tables[[7]][,-1]
TM[] <- lapply(TM, function(col) {
  if (is.character(col)) {
    col <- gsub("\u2212", "-", col)  # unicode minus signs
    col <- gsub(",", ".", col)
    col <- gsub("\\s", "", col)
  }
  suppressWarnings(as.numeric(col))
})
and we plot them
cols <- paste0("V", 2:13)
mois <- c("JAN","FEB","MAR","APR","MAY","JUN","JUL","AUG","SEP","OCT","NOV","DEC")
yminK <- suppressWarnings(as.numeric(TK[2, cols]))
ymaxK <- suppressWarnings(as.numeric(TK[3, cols]))
yminM <- suppressWarnings(as.numeric(TM[2, cols]))
ymaxM <- suppressWarnings(as.numeric(TM[3, cols]))
y0K <- pmin(yminK, ymaxK, na.rm = TRUE)
y1K <- pmax(yminK, ymaxK, na.rm = TRUE)
y0M <- pmin(yminM, ymaxM, na.rm = TRUE)
y1M <- pmax(yminM, ymaxM, na.rm = TRUE)
x <- seq_along(cols)
plot(NA,
     xlim = c(0.5, length(cols) + 0.5),
     ylim = range(y0M, y1M, y0K, y1K, na.rm = TRUE),
     xaxt = "n", xlab = "", ylab = "",
     main = "Temperatures min-max (°C, averages)")
axis(1, at = x, labels = mois)
w <- 0.8
for (i in x) {
  if (!is.na(y0K[i]) && !is.na(y1K[i])) {
    rect(i - w/2, y0K[i], i + w/2, y1K[i],
         col = "lightblue", border = "steelblue", lwd = 1.2)
  }
}
for (i in x) {
  if (!is.na(y0M[i]) && !is.na(y1M[i])) {
    rect(i - w/2, y0M[i], i + w/2, y1M[i],
         col = "lightcoral", border = "firebrick", lwd = 1.2)
  }
}
grid(nx = NA, ny = NULL, col = "gray85")
with Kyoto in blue, Montréal in red,

Of course, we can do the same for humidity

or daylight

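The code for these is the same as for the temperatures, only the row of the wikipedia table changes; a minimal sketch (the row index below is an assumption, to be adjusted after inspecting TK and TM, since it depends on how the climate tables are laid out),
rowH = 5   # hypothetical row index for relative humidity: inspect TK and TM first
humK <- suppressWarnings(as.numeric(TK[rowH, cols]))
humM <- suppressWarnings(as.numeric(TM[rowH, cols]))
plot(NA, xlim = c(0.5, length(cols) + 0.5),
     ylim = range(humK, humM, na.rm = TRUE),
     xaxt = "n", xlab = "", ylab = "",
     main = "Relative humidity (%, monthly averages)")
axis(1, at = x, labels = mois)
lines(x, humK, type = "b", pch = 19, col = "steelblue")
lines(x, humM, type = "b", pch = 15, col = "firebrick")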
暑いですね (atsui desu ne, lit. “It’s so hot, isn’t it?”)
CASdatasets 1.2.0
Nearly ten years ago, Christophe Dutang and I launched a curated collection of datasets, featured in Computational Actuarial Science with R, bundled in the CASdatasets R package. Today, this package offers an extensive range of actuarial datasets, serving as a vital resource for students, educators, and researchers alike. We’re excited to unveil version 1.2.0, which includes new vignettes created this summer with Ewen Gallic and Julien Siharath. Explore the latest additions at
https://dutangc.github.io/CASdatasets/index.html
and feel free to contribute your own applications of these datasets! Note that a DOI has been assigned to the package: doi:10.57745/P0KHAG.
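To give it a try, a minimal sketch (the installation command is an assumption; check the page above for up-to-date instructions — freMTPLfreq is one of the French motor datasets of the package):
# installation: see the package page (the repository below is an assumption)
# install.packages("CASdatasets", repos = "https://dutangc.github.io/CASdatasets/", type = "source")
library(CASdatasets)
data(freMTPLfreq)   # French motor third-party liability, claim frequencies
str(freMTPLfreq)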
Calculating an LOOCV MSE by hand
Last week, we had a “mid-term” exam for our introduction to statistical learning course. The question is simple: consider three points (x_i,y_i), here \{(0,2),(2,2),(3,1)\}. Consider some linear models, estimated using least squares: what would the leave-one-out cross-validation MSE be?

I like this exercise since we can compute everything easily, by hand. Since at each step we remove one single observation, only two observations remain in the sample. And with two points, fitting a linear model is straightforward (whatever the technique considered): we simply consider the straight line that passes through the other two points. And since we have the straight line (without even having to minimize a sum of squared errors), we have the error committed on the omitted observation. This is exactly what we see in the drawing below

In other words, the LOOCV MSE is here \operatorname{MSE}=\frac{1}{n}\sum_{i=1}^{n}\left(Y_{i}-\hat{Y}_{i}^{(-i)}\right)^{2}, where, intuitively, \hat{Y}_{i}^{(-i)} denotes the prediction at x_i obtained with the model fitted on the other n-1 observations. Thus, here, \operatorname{MSE}=\frac{1}{3}\left(2^2+\frac{2^2}{3^2}+1^2\right)=\frac{1}{27}\left(36+4+9\right)=\frac{49}{27}. Note that we can also use R to compute that quantity,
> x = c(0,2,3)
> y = c(2,2,1)
> df = data.frame(x=x,y=y)
> yp = rep(NA,3)
> for(i in 1:3){
+ reg = lm(y~x, data=df[-i,])
+ yp[i] = predict(reg,newdata=df)[i]
+ }
> 1/3*sum((yp-y)^2)
[1] 1.814815
which is precisely what we obtained, by hand.
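And indeed, as a last sanity check,
> 49/27
[1] 1.814815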
Some updates about the insurance datasets package (CASdatasets)
Ten years ago, Computational Actuarial Science with R was published. With Christophe Dutang, we created, at the same time, an R package collecting the datasets used in the book. The main goal was to give readers access to the datasets, so they could reproduce the applications, since the functions used in the different chapters came from other R packages. Then, we started adding more and more datasets, not used in the book, but that could be used by researchers and students. We are quite happy to see that those datasets are now considered a benchmark in the actuarial and insurance literature (and also outside the community, actually).
Maintenance was a bit complicated, since the package could not be hosted on CRAN (the Comprehensive R Archive Network), so it lived either in Christophe’s github repo, or on a dedicated website at UQAM. Christophe’s repo
https://dutangc.github.io/CASdatasets/
is under construction (or major refreshing, with Ewen Gallic), and several vignettes will be added. Actually, we encourage colleagues, or students, who used datasets from the package to share their code: we can now host the applications. And there is also the following repository,
https://entrepot.recherche.data.gouv.fr/
Hence, the dataset now has an official DOI, which makes it easier to cite: doi:10.57745/P0KHAG. The following bib entry can be used,
@data{P0KHAG_2024,
author = {Dutang, Christophe and Charpentier, Arthur},
publisher = {Recherche Data Gouv},
title = {{Insurance dataset}},
year = {2024},
version = {V1},
doi = {10.57745/P0KHAG},
url = {https://doi.org/10.57745/P0KHAG}
}
IDSC’24, Insurance Data Science Conference, in Stockholm

Great time at IDSC’24, the Insurance Data Science Conference, in Stockholm, these past two days…

I am glad to see so many people using the datasets of the CASdatasets package… Good news: we are working with Christophe Dutang, Julien Siharath and Ewen Gallic this summer to enrich it, with fresh new data, and with vignettes! More about it this fall!
Discrimination by proxy (a real case study)
Yesterday, with Laurence Barry, we posted a blog post, “Who benefits from data sharing?”, explaining why data sharing, in insurance, could end mutualization. Actually, it can also be harmful in the context of discrimination. Consider the same dataset, with claim occurrence, from a real insurance portfolio,
library(InsurFair)
library(randomForest)
Consider a version of this dataset without the gender, and use variable importance to get a list of variables we can use in a predictive model
subfrenchmotor = frenchmotor[,-which(names(frenchmotor)=="sensitive")]
RF = randomForest(y~. ,data=subfrenchmotor)
vi = varImpPlot(RF , sort = TRUE)
We sort variables based on variable importance (the first one is the “most important” one), and add splines for three continuous variables
dfvi = data.frame(nom = names(subfrenchmotor)[-15], g = as.numeric(vi))
dfvi = dfvi[rev(order(dfvi$g)),]
nom = dfvi$nom
nom[1] = "bs(LicAge)"
nom[3] = "bs(DrivAge)"
nom[7] = "bs(BonusMalus)"
Then, the idea is simple: at stage k, we keep the k most important variables, and run a logistic regression on those k variables. Again, I should stress that the gender of the driver is not among those k variables. Then, we compute the average predicted claim frequency, for men and women.
n=nrow(subfrenchmotor)
library(splines)
idx_F = which(frenchmotor$sensitive == "Female")
idx_M = which(frenchmotor$sensitive == "Male")
metric_gender = function(k = 3){
  if(k == 0){
    reg = glm(y~1, family=binomial, data=subfrenchmotor)
  }
  if(k > 0){
    vr = paste(nom[1:k], collapse = " + ")
    fm = paste("y ~ ", vr, sep="")
    reg = glm(fm, family=binomial, data=subfrenchmotor)
  }
  yp = predict(reg, type="response")
  yp_F = yp[idx_F]
  yp_M = yp[idx_M]
  sortie = c(mean(yp_F), mean(yp_M), quantile(yp_F,c(.1,.9)), quantile(yp_M,c(.1,.9)))
  names(sortie)[1:2] = c("mean_F","mean_M")
  sortie
}
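For instance, with the three most important variables,
metric_gender(3)   # returns mean_F, mean_M, and the 10%-90% quantiles for each gender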
Let us now compute it for all variables
N = 0:15
M = Vectorize(metric_gender)(N)
and plot it
COLORS = c("black","red","blue")   # colour palette (assumption: not defined in the original post)
plot(N, M[1,]*100, xlab="Number of predictive variables (without gender)",
     ylab="Average predicted claims frequency (%)",
     type="b", pch=19, col=COLORS[2], ylim=c(8.12,9))
lines(N, M[2,]*100, type="b", pch=15, col=COLORS[3])

Interestingly, we can clearly see that, with 15 explanatory variables, even if our model is gender-blind (since gender is not in the training dataset), it reproduces the difference we observe in the data: the annual claim frequency is almost 9% for men and 8.2% for women.
Actually, it is not possible to predict the gender from our 15 variables (below is the ROC curve of the logistic regression used to predict the gender)
library(ROCR)   # for prediction() and performance()
metric_gender_2 = function(k = 3){
  if(k == 0){
    reg = glm((sensitive=="Female")~1, family=binomial, data=frenchmotor)
  }
  if(k > 0){
    vr = paste(nom[1:k], collapse = " + ")
    fm_genre = paste('(sensitive=="Female") ~ ', vr, sep="")
    reg = glm(fm_genre, family=binomial, data=frenchmotor)
  }
  pred = prediction(predict(reg, type="response"), (frenchmotor$sensitive=="Female"))
  performance(pred, "tpr", "fpr")
}
plot(metric_gender_2(15))

but still, when using 15 variables, we obtain discrimination in our portfolio, since the average predictions for men and women are significantly different (even if our models are, per se, gender-blind).
Fairness in Multi-Task Learning via Wasserstein Barycenters
Our new paper, with François Hu and Philipp Ratz, Fairness in Multi-Task Learning via Wasserstein Barycenters, is now available.
Algorithmic Fairness is an established field in machine learning that aims to reduce biases in data. Recent advances have proposed various methods to ensure fairness in a univariate environment, where the goal is to de-bias a single task. However, extending fairness to a multi-task setting, where more than one objective is optimised using a shared representation, remains underexplored. To bridge this gap, we develop a method that extends the definition of Strong Demographic Parity to multi-task learning using multi-marginal Wasserstein barycenters. Our approach provides a closed form solution for the optimal fair multi-task predictor including both regression and binary classification tasks. We develop a data-driven estimation procedure for the solution and run numerical experiments on both synthetic and real datasets. The empirical results highlight the practical value of our post-processing methodology in promoting fair decision-making.

It will be presented in September, at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2023), in Torino.
Walking on eggshells with ChatGPT
With the release of GPT-4, and as a follow-up to my post La société du bullshit, here is some news about GPT. While I was talking about “bullshit”, the idea of “hallucinations” has resurfaced lately; it was properly defined in Hallucinations in Neural Machine Translation, already 5 years ago, and a recent article discussed it again this week. To illustrate, and inspired by discussions with Louis Abraham during my visit to Paris, I tried to get some culinary advice

I tried other types of eggs: rabbit eggs

or whale eggs,

Louis pointed out to me that GPT-3 could easily be misled, unlike ChatGPT… so I gave it a try,

whether for rabbit or beluga eggs, ChatGPT gives strange advice

And it does not stop there

I cannot resist sharing this little conclusion

At that moment, I wondered whether it had not been caught up in my absurd delirium…

I even tried pig eggs

I ended up asking the question point blank,

and that was the end of it: it had stepped out of my delirium, and it was impossible to play any further…
I also had some fun asking questions about family relationships. It is quite easy to trap GPT-3

Impossible to get an “I don’t know”. Here, it picks the only female first name that was mentioned… In short, it has no representation of what a family can be, and building one will be an essential step for the tool to work properly.
Monty Hall problem, with Thompson sampling
We all know the Monty Hall problem. Recently, Jason Rosenhouse published a book on that topic (entitled The Monty Hall Problem, The Remarkable Story of Math’s Most Contentious Brain Teaser). The game is more or less described by the following question
Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?
While I was preparing some slides for a lecture on Bayesian modeling and thinking, I wanted to find an illustration of what is sometimes called the Bayesian brain, which can be related to updating beliefs as we gain experience. And I was looking for examples of Thompson sampling. Actually, it is possible to learn that switching is the optimal strategy, in the Monty Hall problem, just by playing the game sequentially, and learning from previous strategies. The following code is used to choose the door with the prize (the car), and the one we first select
set.seed(1)
n = 5000
listdoor = matrix(1:3,3,n)
door = listdoor
win = sample(1:3, size=n, replace=TRUE)
pick1 = sample(1:3, size=n, replace=TRUE)
Then, the presenter picks one, that is neither the car, nor the one we chose initially. The following trick can be used, to get the list of available choices
door[win+(0:(n-1))*3] = NA
door[pick1+(0:(n-1))*3] = NA
door[,1:10]
     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,]   NA   NA   NA    1   NA   NA    1   NA    1    NA
[2,]    2    2   NA   NA    2    2    2   NA   NA     2
[3,]    3   NA    3    3   NA   NA   NA    3   NA    NA
Then, the presenter picks one
presenter = apply(door, 2, function(x) sample(x[!is.na(x)], size=1))
presenter[win != pick1] = apply(door, 2, function(x) x[!is.na(x)])[win != pick1]
presenter = unlist(presenter)
presenter[1:10]
 [1] 3 2 3 1 2 2 2 3 1 2
Now, let us consider the Monty Hall problem. We have two possible strategies. The first one is to keep the door we chose, initially
pick2a = pick1
gaina = (pick2a == win)
mean(gaina)
[1] 0.3392
As expected, on average, we win with (about) 1 chance out of 3. The second one is to (always) pick the other door (the one left). The code is close to the one we used before
door = listdoor
door[pick1+(0:(n-1))*3] = NA
door[presenter+(0:(n-1))*3] = NA
pick2b = apply(door, 2, function(x) x[!is.na(x)])
gainb = (pick2b == win)
mean(gainb)
[1] 0.6608
If you know the Monty Hall problem, you know that the probability of winning is now 2 out of 3 (which is what the maths tells us), and that is indeed what we obtain with the simulations.
Now, what if we do not know how to do the maths, and do not want to compute it? We can use Thompson sampling to explore, and exploit. In a general context, we have to choose among K alternatives (here K=2, since we can either keep our initial choice, or pick the other door), and the output is \boldsymbol{X}=(X_1,\cdots, X_K), where X_k\sim\mathcal{B}(\theta_k), but \theta_k is unknown, and we will play the game, and learn. From the previous computations, we know that \theta_1=1/3 while \theta_2=2/3.
We use some prior distribution, \theta_k\sim\mathcal{B}eta(\alpha_k,\beta_k), since the Beta distribution is the conjugate of the Bernoulli. At time t, we draw K (independent) Beta variables B_k\sim\mathcal{B}eta(\alpha_k,\beta_k), and pick k^\star = \displaystyle{\underset{k=1,\cdots,K}{\text{argmax}}\{B_k\}}. Here the code will be
set.seed(2)
X = cbind(pick2a == win, pick2b == win)*1
AB1 = AB2 = tirage = matrix(NA,n,2)
choix = rep(NA,n)
k = 1
AB1[k,] = AB2[k,] = c(1,1)
for(k in 1:(n-1)){
  tirage[k,] = c(rbeta(1,AB1[k,1],AB1[k,2]),
                 rbeta(1,AB2[k,1],AB2[k,2]))
  choix[k] = which.max(tirage[k,])
  if(choix[k] == 1){
    AB1[k+1,] = AB1[k,] + c(X[k,1],1-X[k,1])
    AB2[k+1,] = AB2[k,]
  }
  if(choix[k] == 2){
    AB1[k+1,] = AB1[k,]
    AB2[k+1,] = AB2[k,] + c(X[k,2],1-X[k,2])
  }
}
Before showing some graphs, let us check that indeed, we select more the second strategy (which is here to select the other door)
AB1[n,]
[1]  5 13
AB2[n,]
[1] 3292 1693
Indeed, since the mean of a Beta distribution \mathcal{B}eta(\alpha,\beta) is \alpha/(\alpha+\beta),
AB2[n,1]/(sum(AB2[n,]))
[1] 0.6603811
i.e. the probability of winning with this second strategy is about 2/3 (as obtained previously). We can visualize this on the animation below, with the first strategy (keep your initial choice) in red and the second one (select the other door) in green, and 0 or 1 depending on whether we win or not. Then we can visualize the evolution of \alpha_2 and \beta_2 on top, and of \alpha_1 and \beta_1 below (the index is time t). Finally, we have the two drawn variables B_1 and B_2,

Of course, another simulation would have given different B_1's and B_2's, but in the end, we learn that the second strategy is better, and we learn it quite fast…

Here is another one (just to confirm)

So clearly, even if we do not know which strategy is optimal (keeping our initial choice, or switching), a player who has played the game about 30 times should be able to figure out that switching is the better strategy.
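As a side note, the posterior mean for the first strategy, based on the few plays it received (5+13=18 draws, from the output above), also points towards the theoretical value \theta_1=1/3, even though the estimate is much noisier:
AB1[n,1]/(sum(AB1[n,]))
[1] 0.2777778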
Lilliefors, Kolmogorov-Smirnov and cross-validation
In statistics, the Kolmogorov–Smirnov test is a popular procedure to test whether a sample \{x_1,\cdots,x_n\} is drawn from a distribution F, or usually from F_{\theta_0}, where F_{\theta} is some parametric distribution. For instance, we can test H_0:X_i\sim\mathcal{N}(0,1) (where \theta_0=(\mu_0,\sigma_0^2)=(0,1)) using that test. More specifically, I wanted to discuss p-values today. Given n, let us draw \mathcal{N}(0,1) samples of size n, and compute the p-values of the Kolmogorov–Smirnov tests
n = 300
p = rep(NA,1e5)
for(s in 1:1e5){
  X = rnorm(n,0,1)
  p[s] = ks.test(X,"pnorm",0,1)$p.value
}
We can visualise the distribution of the p-values below (I added a Beta distribution fit here)
library(fitdistrplus)
fit.dist = fitdist(p,"beta")
hist(p, probability = TRUE, main="", xlab="", ylab="")
vu = seq(0,1,by=.01)
vv = dbeta(vu, shape1 = fit.dist$estimate[1], shape2 = fit.dist$estimate[2])
lines(vu, vv, col="dark red", lwd=2)

It looks like it is quite uniform (theoretically, the p-value is uniform). More specifically, the p-value was lower than 5% in 5% of the samples
mean(p<=.05)
[1] 0.0479
i.e. we wrongly reject H_0:X_i\sim\mathcal{N}(0,1) in 5% of the samples.
As discussed previously on this blog, in many cases we care about the distribution, and not really the parameters, so we wish to test something like H_0:X_i\sim\mathcal{N}(\mu,\sigma^2), for some \mu and \sigma^2. A natural idea is therefore to test H_0:X_i\sim\mathcal{N}(\hat\mu,\hat\sigma^2), for some estimates of \mu and \sigma^2. That is the idea of the Lilliefors test. More specifically, the Lilliefors test uses the Kolmogorov–Smirnov statistic, but corrects the p-value. Indeed, if we draw many samples, and use the Kolmogorov–Smirnov statistic and its classical p-value to test H_0:X_i\sim\mathcal{N}(\hat\mu,\hat\sigma^2),
n = 300
p = rep(NA,1e5)
for(s in 1:1e5){
  X = rnorm(n,0,1)
  p[s] = ks.test(X,"pnorm",mean(X),sd(X))$p.value
}
we see clearly that the distribution of p-values is no longer uniform
fit.dist = fitdist(p,"beta")
hist(p, probability = TRUE, main="", xlab="", ylab="")
vu = seq(0,1,by=.01)
vv = dbeta(vu, shape1 = fit.dist$estimate[1], shape2 = fit.dist$estimate[2])
lines(vu, vv, col="dark red", lwd=2)

More specifically, if the x_i's are actually drawn from some Gaussian distribution, there is almost no chance of rejecting H_0, the p-value being almost never below 5%
mean(p<=.05)
[1] 0.00012
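This is precisely what Lilliefors’ correction fixes. As a quick check (a sketch, assuming the nortest package is installed), the corrected test should give p-values that are roughly uniform again, with about 5% of them below 5%,
library(nortest)   # provides lillie.test(), the Lilliefors-corrected KS test
p = rep(NA,1e4)
for(s in 1:1e4){
  X = rnorm(n,0,1)
  p[s] = lillie.test(X)$p.value
}
mean(p<=.05)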
Usually, to interpret this over-acceptance, the heuristic is that \hat\mu and \hat\sigma^2 are both based on the sample, while, previously, 0 and 1 were based on prior knowledge. Somehow, this reminded me of the classical problem we mention when introducing cross-validation, namely Goodhart’s law
When a measure becomes a target, it ceases to be a good measure
i.e. we cannot assess goodness of fit using the same data as the one used to estimate the parameters. So here, why not use some hold-out (or cross-validation) procedure: split the dataset in two parts, \{x_1,\cdots,x_k\} (with k<n) to estimate the parameters \mu and \sigma^2, and then use \{x_{k+1},\cdots,x_n\} and the Kolmogorov–Smirnov statistic to test whether the x_i's are drawn from some Gaussian distribution. More precisely, will the p-value computed using the standard Kolmogorov–Smirnov procedure be valid here? I tried two scenarios, k/n being either 1/3 or 2/3,
p = matrix(NA,1e5,4)
for(s in 1:1e5){
  X = rnorm(n,0,1)
  p[s,1] = ks.test(X,"pnorm",0,1)$p.value
  p[s,2] = ks.test(X,"pnorm",mean(X),sd(X))$p.value
  p[s,3] = ks.test(X[1:200],"pnorm",mean(X[201:300]),sd(X[201:300]))$p.value
  p[s,4] = ks.test(X[201:300],"pnorm",mean(X[1:200]),sd(X[1:200]))$p.value
}
Again, we can visualize the distributions of p-values, in the case where 1/3 of the data is used to estimate \mu and \sigma^2, and 2/3 of the data is used to test
fit.dist = fitdist(p[,3],"beta")
hist(p[,3], probability = TRUE, main="", xlab="", ylab="")
vu = seq(0,1,by=.01)
vv = dbeta(vu, shape1 = fit.dist$estimate[1], shape2 = fit.dist$estimate[2])
lines(vu, vv, col="dark red", lwd=2)

and in the case where 2/3 of the data is used to estimate \mu and \sigma^2, and 1/3 of the data is used to test
fit.dist = fitdist(p[,4],"beta")
hist(p[,4], probability = TRUE, main="", xlab="", ylab="")
vu = seq(0,1,by=.01)
vv = dbeta(vu, shape1 = fit.dist$estimate[1], shape2 = fit.dist$estimate[2])
lines(vu, vv, col="dark red", lwd=2)

Observe here that we (wrongly) reject H_0 too frequently, since the p-values are below 5% in about 24% of the scenarios in the first case (less data used to estimate), and in about 9% of the scenarios in the second case (less data used to test)
mean(p[,3]<=.05)
[1] 0.24168
mean(p[,4]<=.05)
[1] 0.09334
We can actually compute that probability as a function of k/n
n = 300
p = matrix(NA,1e4,99)
for(s in 1:1e4){
  X = rnorm(n,0,1)
  # prop = proportion of the sample used for testing (argument renamed to avoid shadowing p)
  KS = function(prop) ks.test(X[1:(prop*n)], "pnorm",
                              mean(X[(prop*n+1):n]), sd(X[(prop*n+1):n]))$p.value
  p[s,] = Vectorize(KS)((1:99)/100)
}
The evolution of the probability is the following
prob5pc = apply(p, 2, function(x) mean(x<=.05))
plot((1:99)/100, prob5pc)
so it looks like we can use some sort of hold-out procedure to test H_0:X_i\sim\mathcal{N}(\mu,\sigma^2), for some \mu and \sigma^2, using the Kolmogorov–Smirnov test with \mu=\hat\mu and \sigma^2=\hat\sigma^2, but the proportion of data used to estimate those quantities should be (much) larger than the proportion used to compute the statistic. Otherwise, we clearly reject H_0 too frequently.
Insurance Pricing Game
Would you like to put your data science skills to the test?
Imperial College London, the Université du Québec à Montréal (UQAM), actuarial institutes in Singapore, the UK (including the IFoA) and Australia, ASTIN, and the Casualty Actuarial Society are co-organising a global data science competition.
Would you like to accurately predict the cost of insurance by putting your data science skills to the test? We are hosting two competitions with separate datasets: a loss prediction competition on Kaggle with synthetic workers’ compensation data, and a pricing competition in a simulated market hosted on AI Crowd with real-world motor insurance contracts. Code can be written in either R or Python. The competition is sponsored by a number of different organisations, with a total of US$12,000 in cash prizes to be won. For more information about how to take part, please visit www.pricing-game.com
Hiding values in the output of the summary function for a (linear) regression
Since our Fall 2020 session will be 100% online (and off-site), I have to work hard this summer to prepare online quizzes and exams. I have started playing intensively with Achim’s awesome R/exams package. But there are still a few things that I wanted to add, so I will post a series of posts on my blog to keep track of the updated functions I write. Most of them are modifications of internal R functions, so the code is hard to read. Here is the file, which I will update frequently
url = "http://freakonometrics.free.fr/onlineExams.R"
source(url)
I have updated the summary function (more precisely the summary.lm function). To see how it works, consider a simple regression
library(car)
reg = lm(prestige ~ women, data=Prestige)
my_summary(reg)

Call:
lm(formula = prestige ~ women, data = Prestige)

Residuals:
    Min      1Q  Median      3Q     Max
-33.444 -12.391  -4.126  13.034  39.185

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept) 48.69300    2.30760  21.101   <2e-16 ***
women       -0.06417    0.05385  -1.192    0.236
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 17.17 on 100 degrees of freedom
Multiple R-squared:  0.014,    Adjusted R-squared:  0.004143
F-statistic:  1.42 on 1 and 100 DF,  p-value: 0.2362
A classical question I ask in my quizzes is to hide the p-value of the F-test, and to ask what it is (to make sure that students understand the equivalence between the F and the t tests, in a simple regression). To hide the p-value, use
my_summary(reg, Fisher=TRUE)

Call:
lm(formula = prestige ~ women, data = Prestige)

Residuals:
    Min      1Q  Median      3Q     Max
-33.444 -12.391  -4.126  13.034  39.185

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept) 48.69300    2.30760  21.101   <2e-16 ***
women       -0.06417    0.05385  -1.192    0.236
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 17.17 on 100 degrees of freedom
Multiple R-squared:  0.014,    Adjusted R-squared:  0.004143
F-statistic:  1.42 on 1 and 100 DF,  p-value: ■■■■■
(and then, in a multiple choice exam, I ask whether it is 1%, 5%, 12%, 23% or 47%, for example). That one was easy, since all those lines are based on the cat function, so I just modify it, if necessary
if(Fisher) cat("\nF-statistic:", formatC(x$fstatistic[1L], digits = digits),
    "on", x$fstatistic[2L], "and", x$fstatistic[3L], "DF,  p-value:", "■■■■■")
if(!Fisher) cat("\nF-statistic:", formatC(x$fstatistic[1L], digits = digits),
    "on", x$fstatistic[2L], "and", x$fstatistic[3L], "DF,  p-value:",
    format.pval(pf(x$fstatistic[1L], x$fstatistic[2L], x$fstatistic[3L],
                   lower.tail = FALSE), digits = digits))
(here I use the unicode ‘black square’ symbol to hide the numbers). Of course, I can hide the value of \sigma, the (adjusted or not) R^2, etc.
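For example, hiding \sigma could follow exactly the same pattern as the Fisher statistic above (a sketch, with a hypothetical Sigma flag; this is not the actual code of my function):
# hypothetical 'Sigma' flag, by analogy with the 'Fisher' one above
# (rdf is x$df[2L], as in print.summary.lm)
if(Sigma) cat("\nResidual standard error:", "■■■■■",
    "on", rdf, "degrees of freedom")
if(!Sigma) cat("\nResidual standard error:",
    format(signif(x$sigma, digits)), "on", rdf, "degrees of freedom")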
Now, something a little trickier: what if we want to change the regression table itself, with the coefficients, their standard deviations, etc.? It is tricky since those values are numeric, with an appropriate format (not too many digits), but it can be done easily since that formatting is handled by the printCoefmat function. So in my code, I have an internal function where I ask to put some ‘black squares’ (with the right number of them, to keep a readable format) at some specific locations. Consider a more complex regression
reg = lm(prestige ~ ., data=Prestige)
and assume that we want to hide the value of the intercept, \widehat{\beta}_0 (i.e. located at (1,1) in the coefficient matrix), and the p-value of the t-test for the fourth row (i.e. located at (4,4) in the matrix: on that row, the first column is \widehat{\beta}_3, the second one its standard deviation, the third one the t value, and the fourth one the p-value of the test). I use the following two vectors
vligne = c(1,4)
vcolonne = c(1,4)
with the rows and columns in the matrix (of course, the two should have the same length). The good thing is that the printCoefmat function converts numerical values into character strings (so that things actually look like columns). So we simply have to replace the digits with squares
Cf2 = Cf
if(length(vligne)>0){
  for(i in 1:length(vligne)){
    long = nchar(Cf[vligne[i],vcolonne[i]])
    Cf2[vligne[i],vcolonne[i]] = paste(rep("■",long), collapse = "")
  }
}
Then, we print the updated version of the table
print.default(Cf2, quote = quote, right = right, na.print = na.print, ...)
For example, here, it would be
my_summary(reg, vligne=c(1,4), vcolonne=c(1,4))

Call:
lm(formula = prestige ~ ., data = Prestige)

Residuals:
     Min       1Q   Median       3Q      Max
-12.9863  -4.9813   0.6983   4.8690  19.2402

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) ■■■■■■■■■■  8.018e+00  -1.513  0.13380
education    3.933e+00  6.535e-01   6.019 3.64e-08 ***
income       9.946e-04  2.601e-04   3.824  0.00024 ***
women        1.310e-02  3.019e-02   0.434  ■■■■■■■
census       1.156e-03  6.183e-04   1.870  0.06471 .
typeprof     1.077e+01  4.676e+00   2.303  0.02354 *
typewc       2.877e-01  3.139e+00   0.092  0.92718
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 7.037 on 91 degrees of freedom
  (4 observations deleted due to missingness)
Multiple R-squared:  0.841,    Adjusted R-squared:  0.8306
F-statistic: 80.25 on 6 and 91 DF,  p-value: < 2.2e-16
Of course, it is handmade, and I do not check for typos (you should not ask to put squares in the seventh column, for instance), but it works well enough to generate random regressions in a quiz (or identical regressions on subsamples of a large dataset), and to hide values.

