This Tuesday, I will be at Sherbrooke University to give a talk on autocalibration at the statistics seminar. According to the Handbook of Statistical Methods,
Accuracy is a qualitative term referring to whether there is agreement between a measurement made on an object and its true (target or reference) value. Bias is a quantitative term describing the difference between the average of measurements made on the same object and its true value
As mentioned on scikit-learn's page,
Well calibrated classifiers are probabilistic classifiers for which the output can be directly interpreted as a confidence level. For instance, a well calibrated (binary) classifier should classify the samples such that among the samples to which it gave a [predicted probability] value close to 0.8, approximately 80% actually belong to the positive class.
We can find that idea in Dawid (1982),
Suppose that a forecaster sequentially assigns probabilities to events. He is well calibrated if, for example, of those events to which he assigns a probability 30 percent, the long-run proportion that actually occurs turns out to be 30 percent
and, with the same words, in Nate Silver's The Signal and the Noise,
Out of all the times you said there was a 40 percent chance of rain, how often did rain actually occur? If, over the long run, it really did rain about 40 percent of the time, that means your forecasts were well calibrated
Or according to Kuhn & Johnson (2013)
we desire that the estimated class probabilities are reflective of the true underlying probability of the sample
Finally, Krüger & Ziegel (2020) gave the definition of autocalibration,
the forecast X of Y is an auto-calibrated forecast of Y if \mathbb{E}(Y|X)=X almost surely,
With my notation, it means that \mathbb{E}(Y|\widehat{Y}=y)=y, \forall y. See also Van Calster et al. (2019) for a discussion.
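To make that definition concrete, here is a minimal sketch (in Python, purely illustrative, not taken from the slides): it bins the predicted values \widehat{Y} and compares, within each bin, the average prediction to the average outcome, which should be close if \mathbb{E}(Y|\widehat{Y}=y)=y. The helper name calibration_table and the simulated data are my own choices for the example.

```python
# Illustrative check of E(Y | Y_hat = y) = y: bin the predictions and compare,
# in each bin, the mean prediction to the observed mean of Y.
import numpy as np

def calibration_table(y_true, y_pred, n_bins=10):
    """Return, for each bin of predicted values, (mean prediction, mean outcome)."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(y_pred, bins) - 1, 0, n_bins - 1)
    rows = []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            rows.append((y_pred[mask].mean(), y_true[mask].mean()))
    return np.array(rows)

# Toy usage: a forecast that is auto-calibrated by construction
rng = np.random.default_rng(0)
p = rng.uniform(size=10_000)
y = rng.binomial(1, p)
print(calibration_table(y, p))  # the two columns should be close
```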
Of course, we will get back to what models are, starting with a (generalized) linear model.
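For instance, a minimal scikit-learn sketch of such a model (a plain logistic regression on simulated data, not the model from the talk), reusing the calibration_table helper above:

```python
# A (generalized) linear model: plain logistic regression on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(5_000, 3))
true_p = 1 / (1 + np.exp(-(0.5 + X @ np.array([1.0, -2.0, 0.5]))))
y = rng.binomial(1, true_p)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
glm = LogisticRegression().fit(X_train, y_train)
p_hat_glm = glm.predict_proba(X_test)[:, 1]      # predicted probabilities
print(calibration_table(y_test, p_hat_glm))      # helper from the sketch above
```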
Next, the extension to additive models, with splines.
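One possible sketch of that extension (one option among several, and only illustrative) uses scikit-learn's SplineTransformer to build a spline basis before the logistic link, on the same simulated data as above:

```python
# Additive-model flavour: spline basis expansion of each feature, then a logistic link.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import LogisticRegression

gam_like = make_pipeline(
    SplineTransformer(degree=3, n_knots=5),      # cubic splines, feature by feature
    LogisticRegression(max_iter=1000),
)
gam_like.fit(X_train, y_train)                   # same simulated data as above
p_hat_spl = gam_like.predict_proba(X_test)[:, 1]
print(calibration_table(y_test, p_hat_spl))
```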
Then, we can consider neural nets.
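Again only as an illustration (a small multilayer perceptron via scikit-learn, not the architecture discussed in the talk):

```python
# A small neural network (multilayer perceptron), still on the simulated data above.
from sklearn.neural_network import MLPClassifier

nn = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=1)
nn.fit(X_train, y_train)
p_hat_nn = nn.predict_proba(X_test)[:, 1]
print(calibration_table(y_test, p_hat_nn))
```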
Finally, we can consider ensemble methods, such as bagging,
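here sketched with bootstrap-aggregated decision trees (scikit-learn's BaggingClassifier, still on the simulated data above, just one possible implementation):

```python
# Bagging: bootstrap-aggregated decision trees.
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=200, random_state=1)
bag.fit(X_train, y_train)
p_hat_bag = bag.predict_proba(X_test)[:, 1]
print(calibration_table(y_test, p_hat_bag))
```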
and boosting.
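For boosting, a similar illustrative sketch with gradient boosted trees:

```python
# Boosting: gradient boosted trees.
from sklearn.ensemble import GradientBoostingClassifier

boost = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05, random_state=1)
boost.fit(X_train, y_train)
p_hat_boost = boost.predict_proba(X_test)[:, 1]
print(calibration_table(y_test, p_hat_boost))   # raw boosted scores are often miscalibrated
```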