Who benefits from data sharing?

This post was co-written with Laurence Barry, originally in French.

Recently, the European Commission has laid the groundwork for a new framework for accessing financial data (FIDA, or Financial Data Access), allowing consumers and businesses to authorize third parties to access their data held by financial institutions, including insurers.

One of the main arguments in favor of this regulation is transparency, or as the texts put it, ‘promoting financial transparency.’ It is, however, difficult to argue against transparency unless one has something to hide. This is the famous ‘nothing to hide’ argument! As Solove (2011) reminds us, the British government used it to justify installing surveillance cameras: ‘if you’ve got nothing to hide, you’ve got nothing to fear.’ Shoshana Zuboff is far more reserved, stating, ‘if you have nothing to hide, then you are nothing…’ Sharing personal data without limits, and without accountability for how the information is used, is dangerous, both for the individual who shares it and for society as a whole. We focus here on how insurers could use this additional information: opening up data access in this way seriously undermines the very idea of risk pooling and sharing.

Sharing data, what for?

Sharing and pooling information is, in principle, a sensible idea.

In 1988, the FVI (the road accident victims’ compensation file) was established to meet the obligations resulting from Article 26 of the law of July 5, 1985 (known as the Badinter law), for all victims of road accidents. Since then, insurers and magistrates have been required to disclose the amounts of compensation paid for bodily injuries sustained in car accidents. The objective was to enable every victim to know the amounts of compensation they could claim. But more importantly, collectively, it allows better monitoring and understanding, for instance, of the discrepancies observed between Courts of Appeal in comparable situations. More generally, opening up data related to road accidents helps road safety and prevention. The impact of a change in speed limits on road fatalities can be quantified without insurer data; but to analyze bodily injuries, finer data, held by insurers, are needed. Using insurer data, Retting (2017) showed, for example, that SUVs cause much more severe injuries in accidents involving pedestrians.

Similarly, climate models allow predicting the medium-term climate and its consequences on the severity of winter storms, the risk of drought, or the number of heatwave days. Predicting the economic consequences, however, requires other data, data that insurers hold this time. Insurers also have data that would, for instance, allow a robust analysis of the impact of specific urban planning rules. During the recent COVID-19 pandemic, there were numerous debates about the release of health data[1] in order to better forecast and anticipate the spread of the epidemic. In public health matters, the sharing of securely collected data would genuinely enable prevention and benefit everyone.

Sharing data thus allows scientific studies to better map risks and, in turn, reduce them. The danger, of course, lies in sharing sensitive data, or more subtle data from which sensitive information can be inferred, as the notion of proxy discrimination reminds us. The FIDA proposal explicitly excludes health data from the scope of open data, but indirect indicators, such as purchases in tobacco shops or payments for healthcare services, can be used instead to infer people’s pathologies. Similarly, by observing where a person uses their payment card, for example which restaurants they frequent, it is possible to infer their religious practices; knowing their browsing history allows their political leanings to be inferred… As Daniel Solove points out, seemingly innocuous data can thus reveal sensitive information. But above all, sharing this data with third parties who could sell insurance coverage and “improve” their pricing is not without danger for the balance of the system, especially regarding financial accessibility for all.

A Brief History of Tariff Segmentation

The collegia veteranorum of Roman legionnaires are an example of the earliest forms of insurance mechanisms, as recounted by Ginsburg (1940). In 17th century France, the welfare funds of the guilds developed until their abolition in 1791 by the Le Chapelier law, before later being reorganized into “mutual aid societies.” Each member pays a contribution, their premium, to their mutual aid society, and the accumulated funds are used to compensate the damage suffered by any of the members. In England, the “friendly societies” played this role. In the second half of the 19th century, these mutual aid societies developed on an occupational and geographical basis. The Lyon silk weavers (canuts) grouped together as early as 1828 in the Mutual Duty Society, which they joined by paying an entrance fee of 3 francs and a monthly subscription of one franc, as recounted by Bron (1968). A similar functioning can be found in the Typographic Society of Paris. As Da Silva (2020) explains, “mutual aid societies operate on a principle of solidarity insofar as contributions are not necessarily linked to the level of risk.”

But the idea of introducing a more scientific approach took hold, as evidenced by Hubbard (1852). To quote Da Silva (2020), “while mutualists see in the figure of the actuary the programmed end of solidarity, republicans see in it the rationalization of aid.” Risk-based pricing gradually became established during the 20th century, with differentiated premiums built on a segmentation into a few simple tariff classes, defined from easily collected information (for example, a questionnaire or a quick medical examination). Above all, this practice was theoretically legitimized by the concept of actuarial fairness.

About Actuarial Fairness

Ever since the Kansas regulator stated in 1909 that pricing must be both “adequate” (sufficiently high to avoid the insurer’s ruin) and “not unfairly discriminatory,” discussions have been lively in the actuarial world about what this fairness means. Initially, the view was that two identical risks should not pay different premiums: in this sense, if all insureds in a portfolio pay the same premium regardless of their risk, the pricing is entirely fair. But with competition, the discussion quickly shifted to the definition of “identical” risks: segmentation is necessary to avoid adverse selection, but to remain fair it must be able to identify “identical” risks… hence the need for ever more granular variables to delimit the contours of this risk identity, as Barry (2020) reminds us.

Rational choice theory, which gained momentum after the war with the work of von Neumann and Morgenstern, was quickly applied to insurance by Friedman and Savage, and especially by Arrow, and it transformed this notion of actuarial fairness. Whereas insurance had until then been a matter of pooling premiums, the notion of the insured as homo oeconomicus overturned this approach: if, as rational choice theory describes, every person is capable of attaching probabilities to future events, then adjusting the premium to the risk becomes a communication tool between insurer and insured. The premium as a “risk signal” becomes a prevention tool, and any deviation from this “exact” premium is a form of injustice toward the insured, since it prevents them from making the right decisions under uncertainty (see Barry 2023). Segmentation, previously presented as a competitive constraint, is now justified on fairness grounds.

The mathematical theory of segmentation

Drawing on De Wit & Van Eeghen (1984), it is possible to show that more information increases segmentation and the spread of premiums (with riskier individuals contributing more and less risky ones contributing less). Let p(\boldsymbol{x}) be the predicted accident frequency (the score) for an insured with characteristics \boldsymbol{x}, in other words p(\boldsymbol{x})=E[Y|\boldsymbol{X}=\boldsymbol{x}], and let \theta be the underlying risk parameter[2] that perfectly captures the insured’s risk. In this case, the variance of the scores is Var[p(\boldsymbol{X})]=Var[Y]-(E[Var[Y|\Theta]]+E[Var[E[Y|\Theta]|\boldsymbol{X}]]), where the first two terms are incompressible: Var[Y] is the variance of risk in the portfolio, and E[Var[Y|\Theta]] is the fundamental variability linked to the underlying risk. The last term, E[Var[E[Y|\Theta]|\boldsymbol{X}]], measures how much of the information about \theta the explanatory variables \boldsymbol{x} fail to capture; the more explanatory variables we have, the closer we get a priori to \theta and the smaller this term becomes. In other words, the variance of the scores mechanically increases with the number of explanatory variables[3].
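The decomposition above follows from applying the law of total variance twice; as a minimal sketch, under the assumption that \Theta perfectly captures the risk (so that E[Y|\Theta,\boldsymbol{X}]=E[Y|\Theta], and therefore E[E[Y|\Theta]|\boldsymbol{X}]=E[Y|\boldsymbol{X}]=p(\boldsymbol{X})):

Var[Y]=E[Var[Y|\Theta]]+Var[E[Y|\Theta]], conditioning on the underlying risk \Theta;

Var[E[Y|\Theta]]=Var[E[E[Y|\Theta]|\boldsymbol{X}]]+E[Var[E[Y|\Theta]|\boldsymbol{X}]]=Var[p(\boldsymbol{X})]+E[Var[E[Y|\Theta]|\boldsymbol{X}]], conditioning on \boldsymbol{X}.

Substituting the second identity into the first and solving for Var[p(\boldsymbol{X})] yields the formula above; only the last term depends on the covariates, and it can only shrink as variables are added.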

A case study

But it is possible to go further than this analysis of the variance of the pure premium. Using insurer data for automobile insurance, we can simply focus on the frequency of accidents involving liability coverage[4]. The explanatory variables are ordered by importance (the most important being driving-license seniority; among the least important is, for example, the driver’s marital status). On the left of Figure 1, we see the previous result visually: adding predictive variables increases the variance of the scores, the predicted accident frequencies. But above all, the distribution of the scores becomes increasingly spread out, as seen on the right. While the overall annual accident frequency is 8.5%, the proportion of insureds with a predicted frequency above 17% (translated into pure premium, this is the proportion of insureds paying twice the average premium) rises from less than 1% with 2 predictive variables to over 5% with 16 variables. And this percentage, obtained with a logistic regression, is very conservative compared to a more complex machine learning algorithm (a random forest), for which 15% of the population is predicted to have such a high frequency.

Figure 1: On the left, evolution of the variance of the predicted accident frequency (in gray, a classic logistic regression; in black, with smoothing of three continuous variables); on the right, the distribution of scores for models with 2, 7, and 16 explanatory variables.
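To make the mechanics concrete, here is a minimal sketch in Python (the actual analysis, on the insurer data, is the R code linked in note [4]; the data frame df, the claim column and the variable names below are purely hypothetical): fit logistic regressions with an increasing number of covariates and track both the variance of the scores and the share of insureds predicted above twice the portfolio average.

# Minimal sketch on a hypothetical dataset (not the R code from the repository)
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# df: one row per policy, a binary column "claim" (at least one liability
# claim during the year), and rating factors ordered by decreasing importance
variables = ["license_seniority", "vehicle_power", "driver_age",
             "vehicle_age", "region", "marital_status"]

def score_dispersion(df, k):
    # fit a logistic regression on the first k variables; return the variance
    # of the scores and the share of insureds whose predicted frequency
    # exceeds twice the portfolio average
    X = pd.get_dummies(df[variables[:k]], drop_first=True)
    y = df["claim"]
    scores = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    return scores.var(), np.mean(scores > 2 * y.mean())

for k in range(1, len(variables) + 1):
    v, s = score_dispersion(df, k)
    print(f"{k:2d} variables: Var(score) = {v:.5f}, share above twice the mean = {s:.1%}")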

We can also look at the relative difference between the 90th and 10th percentiles as a function of the number of explanatory variables, as shown on the left of Figure 2. With only one explanatory variable, the riskiest 10% pay 50% more than the least risky 10%; with 15 variables they pay four times more (and eight times more with a machine learning algorithm). If we compare the average premium between the two deciles (the riskiest 10% and the least risky 10%), the ratio goes from almost 1 to 8 (13 if we compare the riskiest 5%). The highest risks are charged, on average, thirteen times more than the lowest risks[5]. Here again, the econometric models are very conservative compared to the machine learning models.

Figure 2: On the left, evolution of the ratio between the 90th and 10th percentiles of the predicted accident frequency; on the right, evolution of the ratio between the average frequencies in the two deciles. In gray, a classic logistic regression; in black, with smoothing of three continuous variables.
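Continuing the same illustrative sketch, the two quantities plotted in Figure 2 can be computed directly from the vector of scores of each fitted model:

def premium_spread(scores):
    # ratio of the 90th to the 10th percentile of the scores, and ratio of the
    # average score in the riskiest decile to that in the least risky decile
    q10, q90 = np.quantile(scores, [0.10, 0.90])
    top = scores[scores >= q90].mean()
    bottom = scores[scores <= q10].mean()
    return q90 / q10, top / bottom

Applied to the successive models, this would produce curves analogous to those of Figure 2 (the exact values, of course, depend on the data and models used).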

While the numerical values depend strongly on the database used[6], it is essential to remember that adding explanatory variables relevant for predicting risk mechanically increases the dispersion of premiums and reduces risk pooling in the portfolio. And if certain variables were not collected by the insurer when constructing its pricing, it may be because they were considered sensitive, or even prohibited. By collecting personal data from other sources, an insurer may thus end up practicing proxy discrimination (for example, by starting to use information highly correlated with a disability).

Who will benefit from data openness?

Claiming that consumers will benefit from this opening of data overlooks the fact that the benefits will not be evenly distributed at all. Insurance was fundamentally a story of pooling and therefore[7] of “sharing the cake”: if there are winners, there will be losers. As we have just seen, low risks are very likely to see their premiums decrease as more information becomes available, while high risks are very likely to be excluded, unable to find affordable coverage. Actuarially, this customization of premiums may make sense, and it may flatter consumers convinced that they are good risks (and that if they have an accident, it is someone else’s fault); but with the opening of data, the idea of risk pooling melts away. And if, collectively, we still attach some value to solidarity and justice, we must acknowledge that this regulatory approach is not heading in the right direction.

[1] The draft regulation (https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:52023PC0360) currently excludes “health data” (without clearly defining them, which will inevitably raise many questions in the years to come – for example, does purchasing wigs or books on cancer reveal information about me or a family member?)

[2] If the probability of having an accident is perfectly known (for example, 11.847%), the realization of the risk during the coverage year remains fundamentally random and unpredictable.

[3] In linear models, this is the well-known result that, for nested models, the model with the most explanatory variables has the highest R^2 or, equivalently, the largest variance of predictions.

[4] The code is available online at https://github.com/freakonometrics/fida/ based on data used in Charpentier (2014).

[5] In the bonus-malus system, the maximum bonus is 0.50, or 50%, and the maximum malus is 3.50, or 350%, resulting in a ratio of 7 between the two bounds.

[6] In particular, the plateau observed to the right of the curves is artificial and stems from the fact that the variables are ordered by importance: the last variables added here have low predictive power (but are available in the database).

[7] To use the classic expression in welfare economics, as in Brams & Taylor (1996).

References

Arrow, K. J. (1951). Alternative approaches to the theory of choice in risk-taking situations. Econometrica, 404-437.

Barry, L. (2020). Insurance, big data and changing conceptions of fairness. European Journal of Sociology/Archives Européennes de Sociologie, 61(2), 159-184.

Barry, L. (2023). From small to big data: (mis)uses of insurance premium for the government of hazards. Journal of Cultural Economy, 1-16.

Brams, S. J., & Taylor, A. D. (1996). Fair Division: From cake-cutting to dispute resolution. Cambridge University Press.

Bron, J. (1968). Histoire du mouvement ouvrier français. Éditions Ouvrières.

Charpentier, A. (Ed.). (2014). Computational actuarial science with R. CRC Press.

Charpentier, A. (2024). Insurance, biases, discrimination and fairness. Springer.

Da Silva, N. (2023). La bataille de la Sécu: une histoire du système de santé. La fabrique éditions.

De Wit, G. W., & Van Eeghen, J. (1984). Rate making and society’s sense of fairness. ASTIN Bulletin: The Journal of the IAA, 14(2), 151-163.

Fernandes, T., & Pereira, N. (2021). Revisiting the privacy calculus: Why are consumers (really) willing to disclose personal data online?. Telematics and Informatics, 65, 101717.

Frezal, S., & Barry, L. (2020). Fairness in uncertainty: Some limits and misinterpretations of actuarial fairness. Journal of Business Ethics, 167, 127-136.

Friedman, M., & Savage, L. J. (1948). The utility analysis of choices involving risk. Journal of Political Economy, 56(4), 279-304.

Ginsburg, M. (1940). Roman military clubs and their social functions. Transactions and Proceedings of the American Philological Association 71, 149-156.

Hubbard, N. G. (1852). De l’organisation des sociétés de prévoyance: ou, de secours mutuels, et des bases scientifiques sur lesquelles elles doivent être établies; avec une table de maladie et une table de mortalité dressées sur des documents spéciaux. Guillaumin et cie.

Retting, R. (2017). Pedestrian traffic fatalities by state. Governors Highway Safety Association: Washington, DC, USA.

Schneier, B. (2015). Secrets and lies: digital security in a networked world. John Wiley & Sons.

Solove, D. J. (2011). Nothing to hide: The false tradeoff between privacy and security. Yale University Press.

von Neumann, J., & Morgenstern, O. (1944). Theory of Games and Economic Behavior. Princeton University Press.

Zuboff, S. (2019). The age of surveillance capitalism. PublicAffairs.



