Last talk of the year…

This Saturday, I will be giving a talk (from home, since I won't be able to be in London in person) at CMStatistics 2021 (the 14th International Conference on Computational and Methodological Statistics), on dependencies among lifelengths in families. The talk is based on our paper, Modeling Joint Lives within Families (available on arXiv), and the slides are now online.

Some additional figures are available in the slides.

Talk on climate models and insurance

Tomorrow morning, at 7 am, I will give a talk at the Actuarial Conference #62 on climate models and insurance, returning to two recent papers.

  • Flood, French’s Nat Cat System and Fairness

The first paper is Insurance against Natural Catastrophes: Balancing Actuarial Fairness and Social Solidarity, with Laurence Barry (PARI) and Molly James (EURIA).

Based on official risk areas (PPRL and PPRI), we will investigate the prices of houses and apartments, and discuss connections between risk and wealth.

  • Subsidence and predictions

The second paper is Predicting Drought and Subsidence Risks in France, with Hani Ali (Willis Re) and Molly James (EURIA).

We tried several models to predict subsidence frequency, GLMs as well as random forests (a quick sketch of both is given below). We then obtained frequency predictions, for 2017 and for 2018, from which we were able to derive risk maps. We also obtained cost predictions, for 2017, for 2019 and for 2020.
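For concreteness, here is a minimal sketch of those two model families in Python, on simulated data with hypothetical covariates (a drought index and a clay-soil share; the actual data and code from the paper are not reproduced here),

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from sklearn.ensemble import RandomForestRegressor

# Simulated portfolio: one row per area-year, with hypothetical
# covariates (the data used in the paper are not public)
rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "drought_index": rng.gamma(2, 1, n),
    "clay_share": rng.uniform(0, 1, n),   # share of clay soils
    "year": rng.choice([2017, 2018], n),
})
lam = np.exp(-1 + 0.5 * df["drought_index"] + 1.2 * df["clay_share"])
df["n_claims"] = rng.poisson(lam)

# Poisson GLM for claim frequency
glm = smf.glm("n_claims ~ drought_index + clay_share + C(year)",
              data=df, family=sm.families.Poisson()).fit()

# Random forest on the same covariates
X = df[["drought_index", "clay_share", "year"]]
rf = RandomForestRegressor(n_estimators=300, random_state=1).fit(X, df["n_claims"])

# Predicted frequencies (e.g. to be mapped by area, per year)
df["freq_glm"] = glm.predict(df)
df["freq_rf"] = rf.predict(X)
```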

I still wonder how to take climate change into account in this approach (except that we are more and more likely to end up in the upper left corner, hot and dry).

  • Extensions (wildfires in Québec and RL)

Finally, I will (very briefly) discuss two recent works: the first one, with Amirouche Benchallal (UQAM) and Yacine Bouroubi (Sherbrooke), on wildfires in Québec,

and the second one, with Nouri Sakr (Columbia) and Mennatalla Mohamed Hassan (American University in Cairo), on government intervention in the context of natural catastrophes.

Talk at Sherbrooke University on bias and (well) calibration

This Tuesday, I will be at Sherbrooke University to give a talk at the statistics seminar, on autocalibration. According to the Handbook of Statistical Methods,

Accuracy is a qualitative term referring to whether there is agreement between a measurement made on an object and its true (target or reference) value. Bias is a quantitative term describing the difference between the average of measurements made on the same object and its true value
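With my notation (not the Handbook's): if \widehat{\theta} denotes a measurement, or an estimator, of some quantity \theta, the bias is \mathbb{E}[\widehat{\theta}]-\theta, which is zero for an unbiased estimator, while accuracy refers to \widehat{\theta} being close to \theta.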

As mentioned on scikit-learn's page,

Well calibrated classifiers are probabilistic classifiers for which the output can be directly interpreted as a confidence level. For instance, a well calibrated (binary) classifier should classify the samples such that among the samples to which it gave a [predicted probability] value close to 0.8, approximately 80% actually belong to the positive class.
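This can be visualized directly, for instance with scikit-learn's calibration_curve, here a minimal sketch on simulated data,

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.calibration import calibration_curve

# Simulated binary data and a probabilistic classifier
X, y = make_classification(n_samples=5000, random_state=42)
proba = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]

# Among samples with predicted probability close to p, what
# fraction actually belongs to the positive class? For a well
# calibrated classifier, frac_pos is close to mean_pred.
frac_pos, mean_pred = calibration_curve(y, proba, n_bins=10)
print(np.c_[mean_pred, frac_pos])
```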

We can find that idea in Dawid (1982),

Suppose that a forecaster sequentially assigns probabilities to events. He is well calibrated if, for example, of those events to which he assigns a probability 30 percent, the long-run proportion that actually occurs turns out to be 30 percent

and, with the same words, in Nate Silver's The Signal and the Noise,

Out of all the times you said there was a 40 percent chance of rain, how often did rain actually occur? If, over the long run, it really did rain about 40 percent of the time, that means your forecasts were well calibrated

Or, according to Kuhn & Johnson (2013),

we desire that the estimated class probabilities are reflective of the true underlying probability of the sample

Finally, Krüger & Ziegel (2020) gave the definition of autocalibration,

the forecast X of Y is an auto-calibrated forecast of Y if \mathbb{E}(Y|X)=X almost surely

With my notation, it means that \mathbb{E}(Y|\widehat{Y}=y)=y, \forall y. See also Van Calster et al. (2019) for a discussion.
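One simple way to look at this empirically (a sketch, not the methodology of the talk) is to estimate \mathbb{E}(Y|\widehat{Y}) by local averaging, for instance within quantile bins of the predictions, and to compare it with the diagonal,

```python
import numpy as np

def autocalibration_curve(y, y_hat, n_bins=10):
    """Estimate E[Y | Y_hat] by averaging Y within quantile
    bins of the predictions y_hat, and compare bin by bin."""
    edges = np.quantile(y_hat, np.linspace(0, 1, n_bins + 1))
    idx = np.digitize(y_hat, edges[1:-1])  # bin index, 0..n_bins-1
    y_hat_bin = np.array([y_hat[idx == k].mean() for k in range(n_bins)])
    y_bin = np.array([y[idx == k].mean() for k in range(n_bins)])
    # autocalibration means y_bin is (approximately) y_hat_bin
    return y_hat_bin, y_bin
```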

Of course, we will get back to what models are, starting with (generalized) linear models and the extension to additive models with splines. Then, we can consider neural nets and, finally, ensemble methods, such as bagging and boosting (a quick sketch of those families is given below).
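To fix ideas, here is a minimal sketch of those model families in Python, with scikit-learn on simulated data (illustrative only, not the code used in the talk),

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import SplineTransformer   # scikit-learn >= 1.0
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import BaggingRegressor, GradientBoostingRegressor

X, y = make_regression(n_samples=1000, n_features=5, noise=10, random_state=42)

models = {
    "linear model": LinearRegression(),
    # additive-in-splines: spline basis expansion + linear fit
    "splines": make_pipeline(SplineTransformer(n_knots=5), LinearRegression()),
    "neural net": MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000),
    "bagging": BaggingRegressor(n_estimators=200, random_state=0),
    "boosting": GradientBoostingRegressor(n_estimators=200),
}
for name, model in models.items():
    print(name, model.fit(X, y).score(X, y))
```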