Machine learning increasingly replaces the actuary in selecting features and building pricing models. However, replacing subjective judgment with automation does not necessarily remove biases, and the absence of bias does not guarantee fairness. This paper critically analyzes discrimination and insurance fairness with machine learning.
Ask a group of people which biases in machine learning should be reduced, and you are likely to receive a flood of suggestions, making it difficult to decide where to start. To enable an objective discussion, we study a way to remove biases sequentially and propose a tool that efficiently analyzes how the order of correction affects outcomes.
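To make the order-of-correction idea concrete, here is a minimal sketch, not the tool proposed in the paper: two naive group-mean corrections are applied to synthetic premiums along two correlated protected attributes, in both possible orders, and the residual disparities are compared. The data, the `equalize_group_means` helper, and the disparity metric are all illustrative assumptions.

```python
# Illustrative sketch only: correcting two biases sequentially and measuring
# how the order of correction changes the remaining disparities.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic premiums with disparities along two correlated protected attributes.
n = 10_000
gender = rng.integers(0, 2, n)                              # 0/1, illustrative
age_band = (rng.random(n) < 0.3 + 0.4 * gender).astype(int)  # correlated with gender
premium = 500 + 40 * gender + 60 * age_band + rng.normal(0, 20, n)

def equalize_group_means(y, group):
    """One naive 'bias correction': recentre every group on the overall mean."""
    y = y.copy()
    target = y.mean()
    for g in np.unique(group):
        y[group == g] += target - y[group == g].mean()
    return y

def gap(y, group):
    """Disparity metric: absolute difference of group means."""
    return abs(y[group == 1].mean() - y[group == 0].mean())

# Order 1: correct gender first, then age band.
y1 = equalize_group_means(equalize_group_means(premium, gender), age_band)
# Order 2: correct age band first, then gender.
y2 = equalize_group_means(equalize_group_means(premium, age_band), gender)

print("gender then age :", gap(y1, gender), gap(y1, age_band))
print("age then gender :", gap(y2, gender), gap(y2, age_band))
```

Because the attributes are correlated, the attribute corrected last ends up with no gap while the one corrected first sees part of its disparity reintroduced, which is exactly the kind of order effect a systematic analysis needs to expose.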
Sacrificing predictive performance is often viewed as unacceptable in machine learning. However, satisfying a fairness objective can degrade predictive performance excessively, especially with generic fair algorithms. We therefore develop a more suitable and practical framework based on autoencoder techniques.
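As a rough illustration of how an autoencoder can mediate this trade-off, the sketch below (assumptions only, not the framework developed in the paper) trains an autoencoder whose latent code is penalized for correlating with a protected attribute; the penalty weight `lam`, the synthetic data, and the correlation penalty are all illustrative choices.

```python
# Illustrative sketch: an autoencoder whose latent representation is penalised
# for carrying the protected attribute, trading a little reconstruction
# accuracy for a fairer downstream pricing input.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic policyholder features; column 0 plays the protected attribute.
n, d, k = 2000, 10, 4
X = torch.randn(n, d)
s = (X[:, 0] > 0).float()                       # protected attribute (illustrative)

encoder = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, k))
decoder = nn.Sequential(nn.Linear(k, 16), nn.ReLU(), nn.Linear(16, d))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

lam = 5.0                                       # fairness/accuracy trade-off weight
for epoch in range(200):
    z = encoder(X)
    recon_loss = ((decoder(z) - X) ** 2).mean()
    # Crude fairness penalty: squared correlation between each latent
    # dimension and the protected attribute (zero when z ignores s).
    zc = z - z.mean(dim=0)
    sc = s - s.mean()
    corr = (zc * sc.unsqueeze(1)).mean(dim=0) / (zc.std(dim=0) * sc.std() + 1e-8)
    loss = recon_loss + lam * (corr ** 2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The debiased representation encoder(X) would then feed the pricing model.
```

Raising or lowering `lam` moves the solution along the fairness–accuracy frontier, which is the dial a practical framework needs to expose rather than imposing a single generic compromise.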