This Tuesday morning (7 am), I will give a talk at the ASTIN reading club, on auto-calibration. More precisely, we will look at the topic of ensuring the calibration of machine learning models for non-life pricing. Slides are available here.
Talk at Sherbrooke University on bias and (well) calibration
This Tuesday, I will be at Sherbrooke University to give a talk at the statistics seminar, on autocalibration. According to the Handbook of Statistical Methods,
Accuracy is a qualitative term referring to whether there is agreement between a measurement made on an object and its true (target or reference) value. Bias is a quantitative term describing the difference between the average of measurements made on the same object and its true value.
As mentioned on scikit-learn's page,
Well calibrated classifiers are probabilistic classifiers for which the output can be directly interpreted as a confidence level. For instance, a well calibrated (binary) classifier should classify the samples such that among the samples to which it gave a [predicted probability] value close to 0.8, approximately 80% actually belong to the positive class.
We can find that idea in Dawid (1982),
Suppose that a forecaster sequentially assigns probabilities to events. He is well calibrated if, for example, of those events to which he assigns a probability 30 percent, the long-run proportion that actually occurs turns out to be 30 percent.
and, with the same words, in Nate Silver's The Signal and the Noise
Out of all the times you said there was a 40 percent chance of rain, how often did rain actually occur? If, over the long run, it really did rain about 40 percent of the time, that means your forecasts were well calibrated
Or according to Kuhn & Johnson (2013)
we desire that the estimated class probabilities are reflective of the true underlying probability of the sample
Finally, Kruger & Ziegel (2020) gave the definition of autocalibration
the forecast X of Y is an auto-calibrated forecast of Y if \mathbb{E}(Y|X)=X almost surely,
With my notation, it means that \mathbb{E}(Y|\widehat{Y}=y)=y, \forall y. See also Van Calster et al. (2019) for a discussion.
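To make that definition concrete, here is a minimal sketch in R (on simulated data, not taken from the talk): bin the predictions \widehat{y}, and compare, within each bin, the average of the observations with the average prediction; for an auto-calibrated forecast the two should match.

```r
# minimal sketch (simulated data, not from the talk): empirical check of
# autocalibration, E[Y | Y_hat = y] = y, by binning the predictions
set.seed(123)
n    <- 1e4
x    <- runif(n)
y    <- rpois(n, exp(-1 + x))   # simulated claim counts
yhat <- exp(-1 + x)             # a (here perfectly specified) forecast
bins <- cut(yhat, breaks = quantile(yhat, probs = seq(0, 1, by = .1)),
            include.lowest = TRUE)
# for an auto-calibrated forecast, the two columns should (roughly) coincide
aggregate(cbind(observed = y, predicted = yhat), list(bin = bins), FUN = mean)
```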
Of course, we will get back to what those models are, starting with the (generalized) linear model
then the extension to additive models, with splines
Then, we can consider neural nets
Finally, we consider ensemble methods, such as bagging
and boosting (a rough sketch of how such models can be fitted is given below)
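For a rough idea of what fitting such models looks like in R, here is a sketch only, with a hypothetical dataset db and hypothetical column names (this is not the code behind the slides, and exposure is ignored for simplicity); neural nets and bagging would be fitted along the same lines, with, e.g., the nnet or ranger packages.

```r
# sketch only (hypothetical dataset 'db' and column names, ignoring exposure):
# a Poisson GLM, a GAM with splines, and a boosted model for claim counts
library(mgcv)   # gam() with spline terms s()
library(gbm)    # gradient boosting
reg_glm <- glm(nb_claims ~ driver_age + density,
               family = poisson, data = db)
reg_gam <- gam(nb_claims ~ s(driver_age) + s(density),
               family = poisson, data = db)
reg_bst <- gbm(nb_claims ~ driver_age + density,
               distribution = "poisson", data = db,
               n.trees = 500, interaction.depth = 3, shrinkage = .05)
```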
United As One, IME 2021
Next week, colleagues from the University of Illinois Urbana-Champaign and the Pennsylvania State University in the United States, Ulm University in Germany, and the University of New South Wales (UNSW Sydney) in Australia are organizing the 24th International Congress on Insurance: Mathematics and Economics, aka IME2021. I will present our joint work with Michel Denuit and Julien Trufin, Autocalibration for Insurance Pricing with Machine Learning, and Philipp Ratz will present our joint work on a peer-to-peer insurance model. I will also chair some sessions, and participate in a round table on Thursday morning (Montréal time).
Talk at the annual conference of the Société Canadienne de Statistique
Conference season continues. This week, I will present the paper with Michel Denuit and Julien Trufin, Autocalibration for Insurance Pricing with Machine Learning. The code is online on github, and additional examples will be added soon…
ASTIN Online Colloquium
Next week, I will present at the 2021 ASTIN Online Colloquium (online, of course, since it will not be possible to meet in person in Florida). I will present the joint paper with Michel Denuit and Julien Trufin, Autocalibration for Insurance Pricing with Machine Learning. Earlier on Wednesday, Philipp Ratz will present our work Collaborative Insurance Sustainability and Network Structure.
Autocalibration for Insurance Pricing with Machine Learning
With Michel Denuit and Julien Trufin, we recently uploaded a joint paper on ArXiv, entitled Autocalibration and Tweedie-dominance for Insurance Pricing with Machine Learning.
Boosting techniques and neural networks are particularly effective machine learning methods for insurance pricing. Often in practice, there are nevertheless endless debates about the choice of the right loss function to be used to train the machine learning model, as well as about the appropriate metric to assess the performances of competing models. Also, the sum of fitted values can depart from the observed totals to a large extent, and this often confuses actuarial analysts. The lack of balance inherent to training models by minimizing deviance outside the familiar GLM with canonical link setting has been empirically documented in Wüthrich (2019, 2020), who attributes it to the early stopping rule in gradient descent methods for model fitting. The present paper aims to further study this phenomenon when learning proceeds by minimizing Tweedie deviance. It is shown that minimizing deviance involves a trade-off between the integral of weighted differences of lower partial moments and the bias measured on a specific scale. Autocalibration is then proposed as a remedy. This new method to correct for bias adds an extra local GLM step to the analysis. Theoretically, it is shown that it implements the autocalibration concept in pure premium calculation and ensures that balance also holds on a local scale, not only at portfolio level as with existing bias-correction techniques. The convex order appears to be the natural tool to compare competing models, putting a new light on the diagnostic graphs and associated metrics proposed by Denuit et al. (2019).
In this paper, we started with a simple observation: with a GLM (like a Poisson regression) we have unbiased predictions, in the sense that \widehat{y}_1+...+\widehat{y}_n=y_1+...+y_n on the training dataset (we should expect to have \widehat{y}_1+...+\widehat{y}_n\approx y_1+...+y_n on the validation dataset). But with machine learning algorithms, this is no longer the case. For instance, with a boosting algorithm, we can end up with a significant bias. This is an application on an insurance dataset from the CASdatasets package, with a GLM (Poisson regression) on the left, a smooth additive version (GAM) in the middle, and a boosting algorithm on the right. Here is the dispersion of the predictions, the \widehat{y}_i's,
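As a stand-alone illustration of that balance issue, here is a small simulated sketch (not the CASdatasets application of the paper): for the Poisson GLM, the fitted values sum exactly to the observed total, while for the boosting algorithm they typically do not.

```r
# sketch, on simulated data: the balance property holds for the Poisson GLM,
# but not necessarily for the boosting algorithm
library(gbm)
set.seed(123)
n  <- 5000
db <- data.frame(x1 = runif(n), x2 = runif(n))
db$y <- rpois(n, exp(-1 + db$x1 - db$x2))
reg_glm  <- glm(y ~ x1 + x2, family = poisson, data = db)
reg_bst  <- gbm(y ~ x1 + x2, distribution = "poisson", data = db,
                n.trees = 300, interaction.depth = 2, shrinkage = .05)
pred_glm <- predict(reg_glm, type = "response")
pred_bst <- predict(reg_bst, newdata = db, n.trees = 300, type = "response")
c(glm = sum(pred_glm), boosting = sum(pred_bst), observed = sum(db$y))
```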
Let \widehat{\pi} denote a fitted model. If \widehat{\pi} were close to the true parameter of the Poisson model, \mu, where \mu(\boldsymbol{x})=\mathbb{E}[Y|\boldsymbol{X}=\boldsymbol{x}], then the function s\mapsto\mathbb{E}[Y|\widehat{\pi}(\boldsymbol{X})=s] should be very close to the first diagonal (since \mathbb{E}[Y|\mu(\boldsymbol{X})=s]=s). And this is what we observe below, for the GLM. But the boosting algorithm has a significant bias.
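One simple way to estimate that function is to smooth the observations against the predictions. The sketch below continues the simulated example above and uses a kernel regression (the paper, and the code on github, rely on local regression instead), plotting the estimated curve against the first diagonal.

```r
# sketch: estimated calibration curve s -> E[Y | pi_hat(X) = s], obtained by
# smoothing the observations against the boosted predictions
grid  <- seq(min(pred_bst), max(pred_bst), length = 101)
E_Y_s <- ksmooth(pred_bst, db$y, kernel = "normal",
                 bandwidth = diff(range(pred_bst)) / 10, x.points = grid)$y
plot(grid, E_Y_s, type = "l",
     xlab = "prediction s", ylab = "E[Y | prediction = s]")
abline(a = 0, b = 1, lty = 2)   # first diagonal: no bias
```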
Furthermore, the bias is not the same everywhere. We can see it more precisely on a quantile version of the x-axis
that is u\mapsto\mathbb{E}[Y|\widehat{\pi}(\boldsymbol{X})=F_{\widehat{\pi}}^{-1}(u)]. Here is a multiplicative version of that bias, u\mapsto\mathbb{E}[Y|\widehat{\pi}(\boldsymbol{X})=F_{\widehat{\pi}}^{-1}(u)]/F_{\widehat{\pi}}^{-1}(u)
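In the simulated sketch, the quantile version is obtained by evaluating the same estimated curve at the quantiles of the predictions, and the multiplicative bias is the ratio of the estimated conditional expectation to the corresponding quantile.

```r
# sketch: quantile version of the calibration curve, and multiplicative bias
# u -> E[Y | pi_hat(X) = F^{-1}(u)] / F^{-1}(u)
u      <- seq(.05, .95, by = .01)
q_pred <- quantile(pred_bst, probs = u)      # F_pi^{-1}(u)
E_Y_q  <- ksmooth(pred_bst, db$y, kernel = "normal",
                  bandwidth = diff(range(pred_bst)) / 10, x.points = q_pred)$y
plot(u, E_Y_q / q_pred, type = "l",
     xlab = "probability level u", ylab = "multiplicative bias")
abline(h = 1, lty = 2)   # no multiplicative bias
```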
The idea of autocalibration is to use that multiplicative factor to correct for the bias (a small code sketch is given below). Thus, if \widehat{\pi}(\boldsymbol{x}) is close to the 75% quantile of the predictions, then the true value should be about 20% larger for the boosting algorithm. Here is the new distribution of the predictions, with that correction
and we can observe that u\mapsto\mathbb{E}[Y|\widehat{\pi}(\boldsymbol{X})=F_{\widehat{\pi}}^{-1}(u)] is now much closer to the first diagonal (below is the multiplicative bias)
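In terms of code (still the simulated sketch, not the implementation used in the paper, which is on the github repository), the correction amounts to multiplying each prediction by the estimated local bias factor, i.e. replacing \widehat{\pi}(\boldsymbol{x}) by the estimated value of \mathbb{E}[Y|\widehat{\pi}(\boldsymbol{X})=\widehat{\pi}(\boldsymbol{x})].

```r
# sketch: auto-calibrated predictions, multiplying each prediction by the
# estimated local bias factor E[Y | pi_hat(X) = s] / s at s = pi_hat(x)
bias_factor <- approx(grid, E_Y_s / grid, xout = pred_bst, rule = 2)$y
pred_cal    <- pred_bst * bias_factor
# the corrected total should be much closer to the observed total
c(boosting = sum(pred_bst), corrected = sum(pred_cal), observed = sum(db$y))
```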
In the paper, we discuss this correction for local bias, so that u\mapsto\mathbb{E}[Y|\widehat{\pi}(\boldsymbol{X})=F_{\widehat{\pi}}^{-1}(u)] becomes as close as possible to the first diagonal… See https://arxiv.org/abs/2103.03635 for the complete version of the paper, and https://github.com/freakonometrics/autocalibration/ for the complete version of the R codes.