This post was initially written in French with Rodolphe Bigot (lecturer at the University of Picardie Jules Verne) in the winter of 2020, and follows a previous post entitled Rethinking responsibility and causality.
Historically, algorithms were content to provide decision support, leaving a human being to make the decision. But experiments are now underway with autonomous systems that make the decision themselves, whether car driving systems or predictive-justice algorithms, as shown by Huss et al. (2018). This autonomy, which basically means the “ability to act freely”, also refers to the idea of “governing oneself by one’s own laws”. But what is the responsibility of the decision maker when a prediction leads to harm?
Understanding and predicting
In Bigot & Charpentier (2018) we questioned the evolution of the notion of responsibility in the light of the developments of the last two centuries, but an essential point is that, fundamentally, a man[i] is, in the majority of cases, responsible for his acts. Why is this so? Probably because a man is supposed to be able to imagine (in the sense of anticipating), understand and foresee that his actions have causes and consequences. Animals are not considered responsible for their actions (but their masters, owners or guardians are[ii]). In his experiments, Ivan Pavlov conditioned dogs, which began to salivate when a bell rang announcing the arrival of a meal. They associated the bell with the meal, but there was no causal mechanism, simply a form of instinctive understanding that all animals have.
To understand is to connect knowledge and deduce forms of universal laws, as in physics. We try to build a theory that explains facts, connects them to the rest of knowledge and then allows us to anticipate. This is the principle of abstraction. Abstraction is a fundamental process in the understanding of a phenomenon; observation is rarely enough. When Galileo, in the 17th century, stated the principle of inertia (postulating that in the absence of force, bodies move in a straight line at a constant speed), he went against all the experiments done on Earth. At the time (and it is probably still true today), perception was closer to what Aristotle had stated, namely that force is necessary to maintain motion. One can think of the thought experiment proposed by Galileo on the fall of bodies, to contradict the Aristotelian theory of motion, according to which the speed of a falling body is proportional to its weight (he proposed dropping two bodies of different mass in a vacuum, connected by a string). When he states that all bodies fall at the same speed, this law is not a synthesis of known empirical facts, but an abstract understanding of the phenomena. As Weber (1905) stated, “the attribution of effects to causes takes place through a process of thought which includes a series of abstractions. The first and most decisive one takes place when we conceive that one or more of the causal components are modified in a certain direction and we ask ourselves whether, under the conditions thus modified, the same effect […] or another effect ‘would be expected’”.
But it is possible to understand without being able to predict. Charpentier (2018a) explained how to generate chaos in a deterministic way. In Figure 1 we see the evolution of two sequences defined by recurrence, with two initial values differing by 1 in 10,000 at the starting point. Very (very) quickly the two series diverge, and can then be considered statistically independent. Poincaré (1908) said (speaking of natural laws) that if “this allows us to predict the subsequent situation with the same approximation, that is all we need: we say that the phenomenon has been predicted; but this is not always the case; it can happen that small differences in the initial conditions generate very large ones in the final phenomena […] Prediction becomes impossible and we have a fortuitous phenomenon”.
Figure 1: simulation of (pseudo) random numbers by Sedgewick’s method, with $u_n = x_n / m$ where $x_n = (a\,x_{n-1} + c) \bmod m$; the first series (red solid line) starts with $x_1 = 6{,}000{,}000$, i.e. $u_1 = 0.6$, and the second (dotted line) with $u_1 = 0.60001$. Source: authors.
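To make this sensitivity to initial conditions concrete, here is a minimal Python sketch of the recurrence behind Figure 1. The multiplier a and increment c are assumptions on our part (Sedgewick’s classical multiplier); the exact constants used for the figure are not stated in the post.

```python
# A minimal sketch of the recurrence behind Figure 1:
#   x_n = (a * x_{n-1} + c) mod m,  u_n = x_n / m.
# The constants a and c are assumptions (Sedgewick's classical multiplier);
# the exact values used for the figure are not stated.
a, c, m = 31415821, 1, 10**7

def sequence(u1, n=25):
    """Iterate the recurrence from an initial value u1 in [0, 1)."""
    x = int(round(u1 * m))
    values = [x / m]
    for _ in range(n - 1):
        x = (a * x + c) % m
        values.append(x / m)
    return values

s1 = sequence(0.6)      # first series:  u_1 = 0.6 (x_1 = 6,000,000)
s2 = sequence(0.60001)  # second series: u_1 = 0.60001

for i, (u, v) in enumerate(zip(s1, s2), start=1):
    print(f"n={i:2d}  u_n={u:.5f}  u'_n={v:.5f}  |diff|={abs(u - v):.5f}")
```

After a handful of iterations the two printed columns have nothing in common, which is exactly the divergence visible in Figure 1.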
Conversely, it is possible to predict without understanding. To understand is often to state a general law and, starting from it, to assume that the same causes produce the same effects.
But as Maxwell (1876) noted, “to make this maxim intelligible, we must define what we mean by the same causes and the same effects, since it is manifest that no event ever happens more than once, so that the causes and effects cannot be the same in all respects.” And indeed, faced with a given situation, an autonomous car will look for similar situations that it may have experienced. We are often tempted to see causal links where there are only correlations, “cum hoc ergo propter hoc” (with this, therefore because of this), or even simple coincidences. On correlation: if x causes both y and z, then y and z will be correlated without any causal link between them. The classic elementary-school example is the correlation between the number of mistakes in a dictation (y) and shoe size (z), the common cause being the age of the pupils (x). Coincidences are all the easier to obtain in large dimensions: if we have a variable of interest and a hundred or so variables that are independent of it, about 5 of these variables (on average) will appear “significantly correlated” with our variable of interest at the 95% level. “To tell the truth, Big Data means above all the crossing of a threshold from which we would be forced (by the quantity, the complexity, the speed of proliferation of data) to abandon the ambitions of modern rationality, consisting of linking phenomena to their causes, in favor of a rationality that could be called post-modern: indifferent to causality, purely inductive statistics, limited to identifying patterns, that is to say, motifs formed by the correlations observed between data, independently of any causal explanation. The repetition of these patterns within large quantities of data gives them a predictive value,” wrote Sauvé (2014).
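The point about coincidences in large dimensions is easy to check by simulation. Below is a minimal sketch on purely artificial data; the sample size and the seed are arbitrary choices for illustration.

```python
# Spurious correlations: a variable of interest y and 100 covariates
# generated independently of it. Testing each correlation at the 5% level,
# we expect about 5 covariates to come out as "significant" by pure chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p = 200, 100
y = rng.normal(size=n)          # variable of interest
X = rng.normal(size=(n, p))     # covariates, independent of y by construction

significant = sum(stats.pearsonr(X[:, j], y)[1] < 0.05 for j in range(p))
print(f"{significant} out of {p} independent covariates test as 'significant'")
```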
Which algorithms, which machines?
An algorithm is simply a finite set of operating rules for solving a problem. One can think of the analogy with kitchen recipes or bureaucratic procedures, as in Charpentier (2018b). However, we must distinguish between an automation algorithm and a learning algorithm, as Godefroy (2017) reminds us. The scores of banks or insurers are (still) often of the first type, which makes it possible to explain to a customer why a mortgage was refused: a score is constructed as a weighted average[iii] of different quantities (such as age, salary, length of employment, etc.), which is compared to a threshold. These algorithms, classical in insurance, have the advantage of being understandable, with good predictive power. The second class of algorithms provides greater accuracy in predictions, but the price to pay is the construction of black boxes (or machines that are too “intelligent” to be intelligible).
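To illustrate the first type, here is a minimal sketch of such a scoring rule; the weights, threshold and variable names are purely hypothetical (in practice the weights would be estimated by regression on historical data, see footnote [iii]).

```python
# A minimal sketch of an automation-type score: a weighted average of
# applicant characteristics compared to a threshold. Weights, threshold
# and variables are purely hypothetical, for illustration only.
WEIGHTS = {"age": 0.2, "salary": 0.5, "seniority": 0.3}  # hypothetical weights
THRESHOLD = 0.5                                           # hypothetical cutoff

def score(applicant: dict) -> float:
    """Weighted average of (normalised) characteristics."""
    return sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)

def decision(applicant: dict) -> str:
    # The rule is transparent: for a refusal, one can point at the exact
    # variables and weights that pushed the score below the threshold.
    return "accept" if score(applicant) >= THRESHOLD else "refuse"

applicant = {"age": 0.4, "salary": 0.6, "seniority": 0.5}  # values scaled to [0, 1]
print(decision(applicant), round(score(applicant), 2))
```

Because the rule is a fixed weighted sum, the refusal can be explained variable by variable, which is precisely what a black-box learner does not offer.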
Learning algorithms “learn” by induction, looking for correlations that improve the prediction, with constant, repeated back and forth (one can think of cross-validation), which makes it difficult to understand the selected process. This inductive approach is the strength, but also the weakness, of these algorithms. As Domingos (2012) noted, “induction is a vastly more powerful lever than deduction, requiring much less input knowledge to produce useful results, but it still needs more than zero input knowledge to work. And, as with any lever, the more we put in, the more we can get out […] Machine learning is not magic; it cannot get something from nothing. What it does is get more from less.” In learning algorithms, we do not find fixed decision trees (if… then…), but an evolutionary construction, as Reigeluth (2016) reminded us, which endows them with three faculties: “memory, adaptation, generalization”. One can think of reinforcement learning algorithms, which look at situations (or similar states of nature), the actions that were taken, and the consequences that followed. One can then try again, or explore and try something else (and learn more)[iv]. These are the algorithms that we see arriving in so-called autonomous machines.
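The “try again or explore” logic can be made concrete with a minimal epsilon-greedy sketch on a toy problem; the reward probabilities and the exploration rate below are hypothetical.

```python
# Minimal epsilon-greedy sketch of the 'try again or explore' logic: the
# agent tracks the average reward of each action, mostly repeats the
# best-known action, and occasionally explores another one to learn more.
# Reward probabilities and exploration rate are hypothetical toy values.
import random

REWARD_PROBS = [0.2, 0.5, 0.8]   # hidden payoff of each action (unknown to the agent)
EPSILON = 0.1                     # exploration rate

counts = [0] * len(REWARD_PROBS)
values = [0.0] * len(REWARD_PROBS)  # running average reward per action

for step in range(10_000):
    if random.random() < EPSILON:
        action = random.randrange(len(REWARD_PROBS))              # explore
    else:
        action = max(range(len(REWARD_PROBS)), key=lambda a: values[a])  # exploit
    reward = 1.0 if random.random() < REWARD_PROBS[action] else 0.0
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]  # incremental mean

print("estimated values:", [round(v, 2) for v in values])
print("action counts:", counts)
```

Nothing here looks like a fixed decision tree: the preferred action emerges from accumulated experience (memory), shifts if the rewards change (adaptation), and, in richer state-based variants, the estimated values extend to situations never seen before (generalization).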
But these machines, if we can call them autonomous, have no will of their own, no free will: they make decisions that maximize a so-called “objective” function, while respecting a set of constraints.
If such a machine can adapt to a new unknown, it gives the impression of understanding, but as the saying goes, an algorithm that can identify objects in a picture can recognize a dog, yet the machine does not know what a dog is[v]. It is this indeterminacy in the autonomous decision-making process that raises questions about the responsibility of so-called autonomous machines. In 2016, the European Parliament noted that “in the event that a robot can make decisions autonomously, the usual rules would not be sufficient to establish legal liability for damage caused by a robot.”
On the liability of machines: the case of autonomous vehicles
Before going too far, it is perhaps appropriate to recall that, today, the autonomous vehicle does not really exist: to date, various forms of delegation of driving are authorized and experimented with, leaving a more or less large place to technology, to the passenger, or to a person outside the cockpit. The SAE (Society of Automotive Engineers) classification system has 6 levels, shown in Figure 2.
Figure 2: SAE (Society of Automotive Engineers) classification system
We can also recall a subtlety, mentioned in Bigot & Charpentier (2018): scientific causality is not legal causality. As Radé (2012) noted, legal causality results from the legal qualification of events, whereas scientific causality assumes an automatic succession of events, without intervention of the will, without intention. The questioning of scientists about the interpretability of models is then only one link in the chain. And if the scientists are perplexed, the jurists will be even more so[vi].
Learning algorithms raise concerns because of the indeterminacy of liability in case of damage, when there is no design error or misuse. In 2018, a (fictitious) trial took place in France, as Prévost (2018) recalls, raising the question of liability following an (imaginary) accident. As Figure 2 reminds us, all systems leave (or impose?) a role for the “driver” (who remains a person identified as such, with the possibility of switching back to a so-called “manual” driving mode), and the responsibility would fall on him. As Noguéro (2019) noted, liability for things is enshrined in Article 1242 of the Civil Code, which states that one “is liable not only for the damage that one causes by one’s own act, but also for that which is caused by the act of persons for whom one is responsible, or of things that one has in one’s custody.” The concern is that the very purpose of these cars is to leave their users the freedom never to have to worry about driving once the destination is entered. It is then difficult to understand how, at the same time, they could still be designated as having the power of use, control and direction over the vehicle.
Bensoussan (2015) noted that in the United States, in some states (e.g., Nevada), robots have been recognized as having certain attributes of a legal person, though they are not referred to as such. They are registered and listed in a specially dedicated file, and above all they are assigned a capital, whose primary function is essentially to insure them directly, so that they can answer for the damage they would cause in their interactions in an open environment, as Coulon (2016) reminds us. But who, upstream, would assign a capital likely to compensate for the damage of a serious accident, which could represent several million or even tens of millions of euros in compensation? The manufacturer? The seller? In any case, we would be far from the guarantee provided by compulsory liability insurance, which is, in France, unlimited[vii]. This proposal raises the problem of the “disempowerment of the participants: whatever the hypothesis, designers, integrators or users will know that they will never be held responsible and that, in the end, the insurance will pay through a guarantee fund financed by the robotics companies,” according to Touati (2017).
And the accidents that occurred during tests are often revealing: in one of the accidents involving a Google car, the problem came from the fact that the passenger of the semi-autonomous vehicle, doubting its efficiency, had himself made a bad decision by suddenly pressing the brake pedal. The robot stopped earlier than expected because the algorithm controlling the intensity and distance of the braking had been altered. It is often the interaction between man and machine that causes problems.
Some, like Harari (2018), are convinced that “a driver who predicts a pedestrian’s intentions, a banker who assesses the credibility of a potential borrower, and a lawyer who gauges the mood at the negotiating table are not relying on witchcraft. Instead, unbeknownst to them, their brains recognize biochemical patterns by analyzing facial expressions, tones of voice, hand movements and even body odor.” An AI equipped with the right sensors could do all this much more accurately and reliably than a human being. By bringing “Mozart into the machine,” autonomous vehicles would eliminate the main risk factors behind fatal accidents (alcohol abuse, speeding and distraction). It is then argued that “although they may have their own problems and limitations, and some accidents are unavoidable, the replacement of all human drivers by computers is expected to reduce the number of deaths and injuries on the road by about 90 percent.” In other words, “the shift to autonomous vehicles is likely to save one million lives each year,” according to Harari’s (2018) count. So, are we moving towards residual liability? As a counterpoint, AI could also learn the concept of merit, which can run counter to justice. Duru-Bellat (2019) reminded us that “merit is popular. With it comes the idea that everyone is responsible for what happens to them, for their successes as well as their failures, and the hope that by rewarding talent and effort, we will produce a fair and efficient society. The constant emphasis on merit, without taking inequalities (social, gender, origin, etc.) into account, is anything but harmless. It generates many perverse effects.” Worse, AI could also learn human lies, as McEwan (2019) imagines, and make mistakes artificially, even intentionally, not to mention, of course, bugs, hacking and criminal acts that could then generate mass damage intensified by networking.
What response(s) from the legislator?
The legislator is struggling: the so-called LOM law (loi d’orientation des mobilités), voted on December 24, 2019, postpones the problem and announces upcoming ordinances concerning, in particular, certain provisions relating to compensation for road accidents and to the liability of autonomous and connected vehicles. It thus plans, before the end of 2021, the adaptation of “the legislation, including the Highway Code, to the case of traffic on the public highway of land motor vehicles whose driving functions are, under conditions (notably of time, place and traffic) that are predefined, partially or fully delegated to an automated driving system, including defining the applicable liability regime. In this respect, provision of appropriate information or training may be required prior to the availability of vehicles with delegated driving, at the time of sale or rental of such vehicles” (Article 31). Furthermore, before the end of 2020, the law intends to “make available, in the event of a road accident, the data from accident data recording devices and the data on the status of driving delegation recorded in the period preceding the accident to officers and agents of the judicial police for the purpose of determining responsibility, as well as to the bodies in charge of the technical and safety investigations provided for in Article L. 1621-2 of the Transport Code” (Article 32). In the meantime, it is advisable to continue to anticipate the risks, and their coverage, through insurance. To this end, some argue that in France, as it stands, the Badinter law is perfectly suited to govern accidents involving autonomous vehicles in the future: “It is sufficient to consider the algorithm guiding the vehicle as its driver in the sense of the law. Beyond this easy recognition, the application of the Badinter law to autonomous vehicles is made necessary by the logic of compensation and would also have many advantages,” as Duméry (2019) reminded us.
Larcher (2010) had shown that the law of July 5, 1985 on traffic accidents gives judges the resources to extend the scope of liability to the vehicle manufacturer, from the stage of the obligation to that of the debt, and to allocate the final burden of reparation equitably between the manufacturer, the driver and the owner of a vehicle equipped with a substitutive driving aid.
To this end, it has been proposed that judges apply the theory of the dissociation of custody of structure and custody of behavior, on the basis of the law of July 5, 1985, known as the Badinter law. In the fictitious trial mentioned earlier (see Prévost (2018)), the lawyer for the civil parties noted that the “intelligent vehicle leaves its occupants at the mercy of the algorithms, [therefore] the AI owes the users an obligation of prudence, safety and reliability”. It is then the manufacturer, or the designer, of the algorithm who owes this obligation: by putting such a vehicle into circulation, the obligation rests on their shoulders.
Several other options are possible. First of all, personal liability could be invoked: it concerns “any act of man” (Article 1240 of the French Civil Code), which can include any designer of AI. Liability for things is less well adapted, since it only concerns inanimate things. At the cost of an unfortunate confusion between human beings and property, some would adapt liability for animals in one’s custody, or for minors for whom one is responsible. The regime of contractual civil liability raises an important difficulty of application in the absence of a contractual relationship with the victim. Liability for defective products (as defined in the Civil Code) is more appropriate, but it would then be advisable to restrict the grounds of exoneration that the producer may invoke, in particular the development-risk defence, following the example of the existing limit for elements of the human body or products derived from it (Civil Code, art. 1245-11). It is worrying that products, or vehicles, can be put into circulation over which the manufacturer/producer, in the broad sense (with the subsidiary parties designated by the legislator in Article 1245-6 of the French Civil Code, such as the seller, the hirer or any other professional supplier, who are liable under the same conditions as a producer who cannot be identified), does not have full control, a fortiori for a “product” like AI that evolves over time independently of any external intervention. If AI is unmanageable in its own development, a source of “adolescent” crises, then the simplest thing is not to authorize it on the market, or at most only as an experiment (as the law of August 17, 2015 intended). As Noguéro (2019) put it, “among the speculations launched is that relating to intelligent things or superior robots which, whatever their cognitive, not merely reactive, capacities, are legally things designed, manufactured, sold, held, connected and used, after instructions.”
Other possibilities are put forward, such as the implementation of a no-fault liability regime, with compulsory insurance and a guarantee fund, or the sharing of responsibility between the designer, the user and the manufacturer, with the difficulty of determining the share of responsibility attributable to each. In this respect, “there is nothing to prevent the legislator from creating presumptions to facilitate and accelerate the procedure for identifying those responsible, especially for the benefit of direct victims. The basis of the risk to be attributed can also guide a policy of strict liability,” noted Noguéro (2019). For Bensamoun (2016), “the solution is probably not to be found in a total reinvention of the law, which would require a clean slate and the construction of new rules ex nihilo.” And insurers undeniably have a role to play, in the interest of their policyholders, by disentangling the respective responsibilities of policyholder “drivers” and of manufacturers.
References
Barraud, B. (2019). Le droit en datas : comment l’intelligence artificielle redessine le monde juridique – Partie I : La dictature des algorithmes ou l’intelligence artificielle à la source du droit. RLDI 2019/164, n° 5604, p. 49 et s.
Bensamoun, A. (2016). Des robots et du droit…, Dalloz IP/IT, juin 2016, p. 282 et s.
Bensoussan, A. (2015). Le droit de la robotique : aux confins du droit des biens et du droit des personnes. Revue des Juristes de Sciences Po, mars 2015.
Bigot, R. & Charpentier, A. (2018). Repenser la responsabilité, et la causalité. Risques, 120.
Bigot, R. (2019). Le comportement de l’assuré. Colloque relatif à l’« intensification de la fonction normative de la responsabilité civile. Acte II de la réforme du livre III du Code civil », UFR Droit de Metz, 17 mai 2019.
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
Broussy, C. (2016). Histoire du contrat d’assurance (XVIe-XXe siècles). Thèse de doctorat, Université de Montpellier.
Brun, P. (2007). Causalité juridique et causalité scientifique. Revue Lamy Droit Civil, 40 :2630.
Charpentier, A. (2018a). Histoire du hasard et de la simulation. Risques, 116.
Charpentier, A. (2018b). L’intelligence artificielle dilue-t-elle la responsabilité ? Risques, 114.
Charpentier, A. & Cherrier, B. (2019). La valeur de la vie. Risques, 118.
Coulon, C. (2016). Du robot en droit de la responsabilité civile : à propos des dommages causés par les choses intelligentes. Responsabilité Civile et Assurances, LexisNexis, avril 2016.
Delvaux, M. (2016). Projet de rapport contenant des recommandations à la Commission concernant des règles de droit civil sur la robotique. Parlement Européen, 2015/2103(INL).
Domingos, P. (2012). A few useful things to know about machine learning. Communications of the ACM, 55:10.
Duméry, A. (2019). Pour l’application de la loi Badinter aux véhicules autonomes. RLDC 2019/174, n° 6665.
Duru-Bellat, M. (2019). Le mérite contre la justice. Presses de Sciences Po.
Godefroy, L. (2017). Les algorithmes : quel statut juridique pour quelles responsabilités ?
Harari, Y. N. (2018). 21 leçons pour le XXIe siècle. Albin Michel.
Huss, J.-V., Legrand, L. & Sentis, T. (2018). Le livre blanc sur la justice prédictive. École de droit de Sciences Po.
Larcher, F. (2010). Aides à la conduite automobile et droit français de la responsabilité civile. Thèse de doctorat, Le Mans Université.
McEwan, I. (2019). Une machine comme moi. Gallimard, coll. Du monde entier.
Maxwell, J.C. (1876). Matter and Motion.
Mnih, V., Kavukcuoglu, K., Silver, D. & Rusu, A. (2015). Human-level control through deep reinforcement learning. Nature, 518:7540.
Noguéro, D. (2019). Assurance et véhicules connectés – regard de l’universitaire français. Dalloz IP/IT, no 11, 16-21.
Noguéro, D. & Vingiano-Viricel, I. (2019). Intelligence artificielle et véhicules autonomes, in Loiseau G. & Bensamoun A. (dir.), Droit de l’intelligence artificielle, Lextenso, coll. “Les intégrales”.
Poincaré, H. (1908). Science et méthode.
Pothier, R.-J. (1767). Traité des contrats aléatoires. Paris, Debure, nouvelle éd. 1775, p. 312, n° 38.
Prévert, J. (1966). Fatras. Gallimard.
Prévost, S. (2018). Procès de la voiture autonome : l’humain innocenté, l’IA condamnée. Dalloz IP/IT.
Radé, C. (2012). Causalité juridique et causalité scientifique : de la distinction à la dialectique. Revue générale de droit médical, 16, 45-56.
Reigeluth, T. (2016). L’algorithme a ses comportements que le comportement ne connaît pas. Multitudes, 62:1.
Ribeiro, M.T., Singh, S. & Guestrin, C. (2016). ‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier. arXiv:1602.04938.
SAE (2016). Les véhicules automatisés au Canada, SAE (Society of Automotive Engineers) classification.
Sauvé, J.-M. (2014). Le numérique et les droits fondamentaux. Conseil d’État.
Touati, A. (2017). Il n’existe pas, à l’heure actuelle, de régime adapté pour gérer les dommages causés par des robots. RLDC 2017/145, n° 6279.
Weber, M. (1905). Counterfactual Thought Experiments and Singular Causal Analysis in History. Philosophy of Science, 76 (2009), 712-723.
[i] Throughout this article, 'man' designates a human being: it is opposed not to 'woman' but to 'machine' or 'robot'.
[ii] Article 1243 of the French Civil Code: "The owner of an animal, or the one who uses it, while it is in his use, is responsible for the damage that the animal has caused, whether the animal was under his care, or whether it was lost or escaped".
[iii] The weights are fixed from the banker's point of view; in statistical terminology, they are estimated from regression models on historical data (the computer scientist would speak of training the algorithm on a learning set).
[iv] Conceptually, these algorithms are not new: the formalism was established at the end of the 1980s. However, their power was revealed when machines beat the best Go players with this strategy, and won at video games without having learned the "rules" (Mnih et al. (2015)).
[v] One can think of the experiment of Ribeiro et al. (2016), which aimed to build an algorithm distinguishing a dog from a wolf in photos; it had high predictive power, but its strategy turned out to be relatively simple: if there is snow in the photo, it is a wolf (the training photos all showed wolves in the snow). This gives an idea of the problems that might arise in predictive justice with the use of photos.
[vi] Although Brun (2007) notes that legal causation looks for the "most reasonable" cause in order to "make the most just decision", which may in a sense be simpler than the goal of understanding that scientists set for themselves.
[vii] According to Article R. 211-7 of the French Insurance Code: "Insurance must be taken out without limit of sum as regards personal injury and for a sum at least equal to that fixed by order of the Minister of the Economy, which may not be less than 1 million euros, per claim and whatever the number of victims, as regards damage to property".