Rubinstein (2012) claimed that “in economic theory, like Harry Potter, the Emperor’s New Clothes, or King Solomon’s Tales, we play in imaginary worlds. Economic theory invents tales and calls them models. An economic model is also somewhere between fantasy and reality (…) The word ‘model’ sounds more scientific than the word ‘fable’ or ‘tale’, but I think we are talking about the same thing“. Today, very often, learning models will build a model, based on learning data, and the actuary’s job will be to make sense of it, to find the story – the fable – that it is possible to tell.
Explaining a model?
The Villani report (2018) mentioned the importance of the explicability of machine learning algorithms, thus taking up a neologism that corresponds to a requirement observed for half a century in complex systems. In the 1980s, as Swartout et al. (1991) recall, the Strategic Computing Initiative of the US Department of Defense launched the acronym EES (Explainable Expert Systems), the adjective “explainable” giving rise to the noun “explainability”. More recently, in 2016, the General Data Protection Regulation (GDPR) imposed a requirement to provide “meaningful information about the underlying logic” of any automated decision. Around the same time, the “Lemaire” law imposed an obligation on the French administration to communicate to the individual the “rules defining any [automatic] processing and the main characteristics of its implementation”. In 2018, “Convention 108” (the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data) gave individuals the right to obtain knowledge of “the reasoning behind” the automated processing. While the terms interpretability and explicability are not mentioned, the idea (with which the concepts of transparency, “auditability” or accountability of algorithms will soon be associated) is clearly present.
Recently, Miller (2019) has attempted a definition of this “explicability”. As he points out, defining the term “explanation” has mobilized a good number of philosophers, and all of them seem to emphasize the importance of causality in explanation, i.e. that an explanation necessarily refers to causes. All sorts of terms seem to be used to describe this idea, from “interpretation” to “justification”. Often, then, explanation is a mode by which an observer can gain understanding. A justification explains why a decision is good, but does not necessarily aim to provide an explanation of the decision-making process.
The need for explanation, or trust
“Young children almost endlessly ask, ‘And why…?’”[i] There are many reasons why people may ask for explanations. Curiosity is one of the main motivations, but more pragmatic arguments can be invoked. To illustrate our (natural) need for explanations, Johnston (2021) gives the simple example of a visit to the garage: my car breaks down, I take it to the garage, and the mechanic fixes it and asks me for 900 euros. But before handing over the money, I ask what was wrong, and he explains that the inlet valve of the catalytic converter was blocked, which caused the bearings of the injector motor to wear out and allowed dust to get into the discharge valve, which could clog the outlet pipe, so he replaced the bearings and put in a new pipe. This technical mumbo-jumbo is invented, but the point is that the explanation is often not really useful, since most of us have only a (very) limited knowledge of how cars work. What was useful was that the mechanic actually fixed the car. Going further, such explanations have a comforting power: we feel confident if the explanation seems relevant (as Kästner et al. (2021) remind us). And if the repair works, I don’t necessarily want to know more. But if the problem persists, I want to understand. It is often the analysis of errors that matters. In 2017, during one of the first debates at the NeurIPS conference to discuss algorithmic fairness, it was pointed out that “if we wish to make AI systems deployed on self-driving cars safe, straightforward black-box models will not suffice, as we need methods of understanding their rare but costly mistakes”. During that conference, Yann LeCun pointed out that when users were presented with two models (one extremely explainable and 90% accurate, the other behaving like a black box but with a higher accuracy of 99%), they always chose the more accurate one. In other words, “people don’t really care about interpretability but just want some sort of reassurances from the working model”. Put differently, interpretability does not matter much if you are convinced that the model works well, in the conditions in which it is supposed to work. Isn’t that what happens when I get on a plane, or undergo a surgical operation?
Justice as an example to follow?
Almost a century ago, Ernest Burgess began tracking a cohort of 3,000 convicts released on parole, and managed to identify twenty-two parameters that would distinguish between those who would succeed on parole and those who would fail. In the 1950s, Glueck and other researchers continued his analysis, multiplying studies devoted to recidivism factors in order to build multiple risk-prediction scales. These instruments rely on statistical methods inspired by insurance practices to determine the levels of risk associated with a group of offenders sharing common characteristics and, based on these correlations, to predict the future criminal behavior of a specific individual, as Harcourt (2008) relates.
This “actuarial justice” allowed the development of decision-support tools that provide judges with recidivism or dangerousness scores. But these tools are only an aid: judges must then justify their decision and provide an explanation. In judicial matters, the individualization of the sentence is one of the fundamental principles of French criminal law, as Saleilles (1897) emphasized. For Dadoun (2018), in order for the requirements of a fair trial to be respected, “the accused must be able to understand the verdict that has been rendered”.
This motivation of judgments and rulings can be demonstrative, narrative or peremptory, as shown by Cottin et al. (2020). In most cases, the motivation is an explanation of the decision, except for peremptory motivation, which is more an assertion than an explanation[ii]. The narrative version contains an explanation of the context of the offence: it is about telling the story. This is probably the approach closest to what we have in mind when we ask for an explanation[iii]. Can justice be used as an example to understand what explicability should be? In justice, motivation is based on facts and expert opinions. As Coche (2011) points out, “in order to assess the dangerousness of persons prosecuted or convicted, the legislator has increased the use of expert reports. However, expert assessments of dangerousness are not only unreliable, but they cannot become so. They therefore create the illusion, constantly disappointed, of a scientific assessment of dangerousness”. This assessment by experts is called “clinical assessment”, and as Dubourg & Gautron (2015) note, hundreds of studies, historically in the United States, consider that, when it comes to assessing a risk of recidivism, these unstructured clinical assessments produce estimates close to chance, criticizing clinicians for overestimating recidivism risks. The same criticism is made in France, where “the use of psychoanalytical concepts remains predominant both in general psychiatry and in the context of expert assessments. However, these concepts have no theoretical link with the criminal behaviour to be predicted. Thus, the method based on unstructured clinical judgment is a subjective assessment, not scientifically validated, and based on intuitive correlations”. It is therefore no longer a question of trying to understand or explain. In the majority of expert reports, the scientific basis of the explanation is that there is no smoke without fire. Dubourg & Gautron (2015) show that the hypothesis of dangerousness and/or risk of recidivism is very often validated, with 75% of convicts being given an unfavorable prognosis by at least one expert during the process. Delacrausaz & Gasser (2012) observe that the expert simply “extracts this or that element of observation to deduce all sorts of reasoning, without explaining either the motivations for his choices, or the theoretical foundations on which he bases them”. And very often, from the same factual elements present in the file, the defense lawyer and the prosecutor will offer two radically different explanations.
The two cultures
The world of justice, with judges, prosecutors, and lawyers who are overwhelmingly, in France at least, women and men of letters, is very far from the statistical culture of data. At the end of the 1950s, Baron Charles Percy Snow argued that the intellectual life of Western society was essentially split into two distinct cultures, that of the sciences and that of the humanities, and that the culture shared by the two was disappearing. The world of numbers and the world of stories.
Glenn (2000) took up this idea, explaining that an insurer’s risk-selection process has two faces (like the Roman god Janus): the one presented to regulators and policyholders, and the one presented to underwriters. On one side, the face of numbers, statistics and objectivity; on the other, the face of stories, character and subjective judgment. Paul Meehl, in 1954, spoke of “mechanical prediction” to describe actuarial models. The rhetoric of insurance exclusion (based on objective numbers) forms what Brian Glenn calls “the actuary’s myth”, namely “a powerful rhetorical situation in which decisions appear to be based on objectively determined criteria when they are also largely based on subjective criteria”. Glenn (2003) went further, stating that “insurers may assess risk in many different ways depending on the stories they tell about which characteristics are important and which are not (…) The fact that the selection of risk factors is subjective and dependent on risk and liability narratives has historically played a far more important role than the fact that someone with a wood stove is charged higher premiums (…) Virtually every aspect of the insurance industry is story-based first and numbers-based second”. This importance of storytelling echoes George Box’s famous phrase, “all models are wrong but some models are useful”. In other words, models are, at best, an interesting fiction.
Machine learning and black boxes
But we must not be mistaken about what we are trying to explain. Explaining what a learning algorithm does is quite simple: it tries to minimize an objective (the discrepancy between what it predicts and what is observed) using more or less complicated optimization routines (a Newton-Raphson algorithm for a simple logistic regression, or back-propagation on batches of the training data for deep learning, i.e. neural networks with several hidden layers). A so-called “nearest neighbor” algorithm says that the predicted car-accident frequency of an individual will be the average frequency of the people closest to this individual (in terms of characteristics: same driving experience, same type of vehicle, same distance driven, etc.). This algorithm is simple to explain. The hard part is interpreting the constructed model, making the predictions intelligible. Pasquale (2015) pointed out that machine learning algorithms are characterized by their opacity and “incomprehensibility”, sometimes called “black box properties”. In response, or concomitantly, there has been a demand for transparency of algorithms, or even of computer code, as in Citron and Pasquale (2014) or Mittelstadt et al. (2016). That said, the “nearest neighbors” algorithm is transparent and simple; it is the data that matter: it is impossible to make a prediction without access to the data (unlike a linear regression, which yields a numerical formula). Burrell (2016) and Laat (2017) noted that the lack of transparency is partly due to the behavior of developers and users who refuse to disclose algorithms, or even just the programmed decision rules and criteria, to external parties, for reasons of trade-secret protection, copyright protection, data protection (when computer systems contain personal data of third parties), or out of caution against targeted behavioral adjustments by data subjects. This last point was highlighted in the case of the Facebook algorithm, as told in Charpentier (2021).
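To make this concrete, here is a minimal sketch of the “nearest neighbor” idea described above, written in Python with scikit-learn. The portfolio, the two rating variables and the choice of 50 neighbors are purely illustrative assumptions, not taken from any real insurance data; the point is only that the prediction is nothing more than the average claim frequency of the most similar policyholders.

```python
# Minimal illustration of the "nearest neighbor" idea: the predicted claim
# frequency of a driver is the average frequency observed among the k most
# similar policyholders. Synthetic data, purely illustrative.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1000
# Characteristics: driving experience (years), annual distance (1,000 km)
X = np.column_stack([rng.uniform(0, 40, n), rng.uniform(2, 30, n)])
# Synthetic annual claim counts: frequency decreasing with experience,
# increasing with distance driven
lam = 0.05 + 0.002 * X[:, 1] - 0.001 * X[:, 0]
y = rng.poisson(np.clip(lam, 0.01, None))

# Distances must be computed on comparable scales
scaler = StandardScaler().fit(X)
knn = KNeighborsRegressor(n_neighbors=50).fit(scaler.transform(X), y)

# Prediction for a new driver: 5 years of experience, 20,000 km per year
new_driver = scaler.transform([[5, 20]])
print(knn.predict(new_driver))  # average frequency of the 50 closest profiles
```

As argued above, the recipe itself is perfectly transparent, yet no prediction can be produced without access to the data themselves.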
The linear model as a “white box” reference?
One of the concerns is that the simplest model, the linear model, is often described as an “interpretable” model, but, by construction, this interpretation is fallacious. Indeed, when estimating a linear model linking two variables x and y, it is the correlation between the two variables that matters, and that correlation is often read as a causal relationship (for simplicity of narrative).
Figure 1: Cyclists in Stockholm, with average temperature and number of cyclists, by day.
Figure 1 shows two views of the same dataset, where x_t is the number of cyclists on a street in Stockholm on day t (of 2014), and y_t is the average temperature on day t. On the left, (y_{t-1}, x_t) is plotted together with a linear model (x=\alpha_0+\alpha_1y+\eta), assumed to be relevant when the temperature is above 0°C. The slope is significantly non-zero, the R^2 exceeds 75%, and the interpretation would be “the number of cyclists on the road increases with temperature: one degree more brings 750 more cyclists per day onto the road”. Or, more succinctly, “people in Stockholm prefer to cycle when it is warm”. On the right, (x_{t-1}, y_t) has been plotted, and again a linear model (y=\beta_0+\beta_1x+\epsilon) seems to make sense, excluding the leftmost part of the curve (when x is low). Again, the slope is significantly non-zero, the R^2 is slightly below 75%, and the interpretation would be “the temperature increases with the number of cyclists on the road: each thousand additional cyclists increases the temperature by 1°C”. Again, simplifying, “we can fight global warming by limiting the number of bicycles on the road”. From the data, and the data only, can I say that one of these explanations is more valid than the other?
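The symmetry behind these two contradictory stories is easy to check numerically. The sketch below (Python, with statsmodels; the data are simulated so as to loosely mimic the shape of Figure 1, since the original Stockholm counts are not reproduced here) fits the regression in both directions: both slopes come out highly significant, and the two R^2 are identical, because in a simple linear regression R^2 is just the squared correlation, which is symmetric and says nothing about the direction of causality.

```python
# Both regressions are "significant" with the same R^2: the data alone
# cannot tell which of the two stories (if either) is the causal one.
# Simulated data, loosely mimicking Figure 1; not the actual Stockholm counts.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
temp = rng.uniform(0, 25, 200)                           # daily mean temperature (°C)
cyclists = 2000 + 750 * temp + rng.normal(0, 2000, 200)  # daily cyclist count

# Story 1: "people cycle more when it is warm"  (cyclists ~ temperature)
m1 = sm.OLS(cyclists, sm.add_constant(temp)).fit()
# Story 2: "more cyclists make it warmer"       (temperature ~ cyclists)
m2 = sm.OLS(temp, sm.add_constant(cyclists)).fit()

print(m1.params, m1.rsquared)   # slope close to 750, high R^2
print(m2.params, m2.rsquared)   # slope also significant, same R^2
print(np.corrcoef(temp, cyclists)[0, 1] ** 2)  # R^2 = squared correlation
```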
Wishful thinking?
We ask to be able to understand and interpret any algorithmic prediction, but isn’t that too ambitious? As Saint Augustine put it, “If no one asks me what time is, I know; but if I am asked and want to explain it, I no longer know”. This was also noted more recently by Kahneman (2011), who introduced the notions of System 1 and System 2, the two systems of thought. System 1 is used for quick decision-making: it allows us to recognize people and objects, helps us direct our attention, and encourages us to fear spiders. It is based on knowledge stored in memory and accessible without intention and without effort. It can be contrasted with System 2, which allows for more complex decision-making, requiring discipline and sequential thinking. In most cases, we make decisions without really being able to explain them, and without this being a cause for concern. It is probably something else that is sought in explicability. We mentioned trust, but there is also the importance of fairness, partly mentioned in the legal context: the explanation hardly matters if the decision seems fair.
References
Biran, O. & Cotton, C. (2017). Explanation and justification in machine learning: a survey. IJCAI-17 Workshop on Explainable AI (XAI), 8(1), 8-13.
Burrell, J. (2016). How the machine ‘thinks’: understanding opacity in machine learning algorithms. Big Data & Society, 3.
Charpentier, A. (2018). L’intelligence artificielle dilue-t-elle la responsabilité ? Risques, 114, 145-150.
Charpentier, A. (2021). Une mesure ne peut pas être un objectif. Risques, 125, 122-127.
Citron, D. K. & Pasquale, F. (2014). The scored society: due process for automated predictions. Washington Law Review, 89.
Coche, A. (2011). Faut-il supprimer les expertises de dangerosité ? Revue de science criminelle et de droit pénal comparé, 1, 21-35.
Cottin, D. Z., Perrocheau, V. & Milburn, P. (2020). L’obligation de motivation des décisions criminelles en France : de la loi aux pratiques. Analyse empirique de la motivation des décisions de cours d’assises. Revue Juridique Thémis, 54(1).
Dadoun, A. (2018). L’obligation constitutionnelle de motivation des peines. Revue de science criminelle et de droit pénal comparé, 4, 805-827.
Delacrausaz, P. & Gasser, J. (2012). La place des instruments d’évaluation du risque de récidive dans la pratique de l’expertise psychiatrique pénale : l’exemple lausannois. L’information psychiatrique, 88(6), 439-443.
Dubourg, É. & Gautron, V. (2015). La rationalisation des outils et méthodes d’évaluation : de l’approche clinique au jugement actuariel. Criminocorpus. Revue d’Histoire de la justice, des crimes et des peines.
Elkus, A. (2015). You Can’t Handle the (Algorithmic) Truth. Slate, May 20.
Ginet, C. (2008). In defense of a non-causal account of reasons explanations. The Journal of Ethics, 12(3), 229-237.
Glenn, B. J. (2000). The shifting rhetoric of insurance denial. Law and Society Review, 779-808.
Glenn, B. J. (2003). Postmodernism: the basis of insurance. Risk Management and Insurance Review, 6(2), 131-143.
Harcourt, B. E. (2008). Against Prediction. University of Chicago Press.
Heider, F. (1958). The Psychology of Interpersonal Relations. Wiley.
Johnston, D. (2021). Explainable Models Are Overrated. LinkedIn, January 25, 2021.
Kahneman, D. (2011). Système 1 / Système 2 : les deux vitesses de la pensée. Flammarion.
Kästner, L., Langer, M., Lazar, V., Schomäcker, A., Speith, T. & Sterz, S. (2021). On the relation of trust and explainability: why to engineer for trustworthiness. 2021 IEEE 29th International Requirements Engineering Conference Workshops (REW), 169-175.
Laat, P. B. de (2017). Big data and algorithmic decision-making: can transparency restore accountability? ACM Computers and Society, 47(3), 39-53.
Lakkaraju, H. et al. (2019). Faithful and customizable explanations of black box models. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (AIES).
Lipton, Z. C. (2018). The mythos of model interpretability: in machine learning, the concept of interpretability is both important and slippery. Queue, 16(3), 31-57.
Lombrozo, T. (2006). The structure and function of explanations. Trends in Cognitive Sciences, 10, 464-470.
Malle, B. F. (2006). How the Mind Explains Behavior: Folk Explanations, Meaning, and Social Interaction. MIT Press.
Manguel, A. (2020). Monstres fabuleux. Actes Sud.
Miller, T. (2019). Explanation in artificial intelligence: insights from the social sciences. Artificial Intelligence, 267, 1-38.
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S. & Floridi, L. (2016). The ethics of algorithms: mapping the debate. Big Data & Society.
Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.
Rubinstein, A. (2012). Economic Fables. Open Book Publishers.
Saleilles, R. (1897). L’individualisation de la peine : étude de criminalité sociale. F. Alcan.
Shiller, R. J. (2020). Narrative Economics: How Stories Go Viral and Drive Major Economic Events. Princeton University Press.
Snow, C. P. (1956). The two cultures. New Statesman, 6 October.
Swartout, W., Paris, C. & Moore, J. (1991). Explanations in knowledge systems: design for explainable expert systems. IEEE Expert, 6(3), 58-64.
Villani, C. (2018). Donner un sens à l’intelligence artificielle : pour une stratégie nationale et européenne.
Woodward, J. (2005). Making Things Happen: A Theory of Causal Explanation. Oxford University Press.
[i] According to the pediatric literature, between the ages of 3 and 4 (longer if we are lucky enough to have children with insatiable curiosity).
[ii] “These peremptory assertions do not constitute true motivation, but are, on the contrary, the negation of motivation,” said Cottin et al. (2020).
[iii] The practice is, of course, more complex: in many cases, motivation is not given unless one of the parties appeals. This is reminiscent of Manguel (2020), “Alice knows instinctively that logic is the way for us to make sense of what has no meaning and to discover its secret rules, and she applies it ruthlessly, even to her elders and superiors, whether she is facing the Duchess or the Mad Hatter. And when arguments prove unavailing, she insists on, at the very least, making the absurdity of the situation obvious. When the Queen of Hearts demands that the court render “the sentence first…and the judgment afterwards,” Alice rightly replies, “But that’s nonsense!” That is indeed the only response most of the nonsense in our world deserves.”