The ethics of modelling in a world where normality no longer exists

(this article was originally written in French – part one and two – and published in Risques)

The mechanism for covering natural disasters in France was created to compensate for “direct uninsurable material damage caused by the abnormal intensity of a natural agent” (article L. 125-1, paragraph 3, of the Insurance Code). Still on the legal level, the Court of Cassation formulated, in November 1986, the principle according to which “no one must cause others an abnormal neighbourhood disturbance”. And to be entitled to compensation following pre-trial detention, the plaintiff must prove that the detention caused him “manifestly abnormal and particularly serious harm” (article 149 of the Code of Criminal Procedure). But what does “abnormality” mean in all these articles? According to the dictionary, the abnormal is “contrary to the usual order of things” (an empirical, statistical notion), “contrary to the just order of things” (a notion of “just” that probably calls for a normative definition), but also “not in conformity with the model”. Defining a norm is already not simple if we restrict ourselves to the descriptive, empirical aspect, as actuaries do when they build a model (especially in high dimension where, as we shall see, normality no longer exists); but if we also bring in a dimension of justice and ethics, one may wonder whether the task is not impossible…

The average man from Quetelet and Galton

In the 19th century, when several astronomers measured the speed of the same celestial object, they often obtained several different measurements. In order to know which one to use in their calculations, the idea of the “method of averages” quickly became established – as Stahl [2006] recalls, and especially Sheynin [1973] – the average having greater precision than any other quantity (or statistic, as we would now say). From a set of observations \{x_1,...,x_n\}, we set
\bar x=\frac{x_1+\cdots+x_n}{n}=\frac{1}{n}\sum_{i=1}^n x_i

We can note that this quantity is also the solution of the optimization problem

\bar x=\underset{m}{\text{argmin}}\left\lbrace\sum_{i=1}^n(x_i-m)^2\right\rbrace

which shows the importance of “least squares”.
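A quick numerical check (a minimal sketch in R, with made-up numbers) confirms that the mean is indeed the value minimizing the sum of squared deviations:

x = c(2.1, 2.4, 1.9, 2.6, 2.0)
f = function(m) sum((x - m)^2)      # sum of squared deviations from m
optimize(f, interval = range(x))$minimum   # numerical minimizer, about 2.2
mean(x)                                    # 2.2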
Adolphe Quételet was, it seems, the first to apply this calculation of averages to human measurements, introducing his famous concept of the “average man”. If we define the mean through quadratic-error minimization, we get an interpretation in terms of prediction: the mean height is the height we should predict for a randomly drawn person (up to a random – and unpredictable – deviation). In 1846, in a letter, Adolphe Quételet used the image of the statue of a gladiator to explain what the average man might be:

Suppose a thousand statues were used to copy the gladiator with all the care imaginable. Your Highness certainly does not think that the thousand copies that will have been made will each reproduce exactly the model, and that by measuring them successively, the thousand measures that I would obtain would be as concordant as if I had taken them all on the statue of the gladiator himself. To the first chances of error would be added the inaccuracies of the copyists; so that the probable error would perhaps be very great. Despite this, if the copyists have not worked with preconceived ideas, exaggerating or reducing certain proportions according to school prejudices, and if their inaccuracies are only accidental, the thousand measures, grouped in order of magnitude, will still present a remarkable regularity and will follow one another in the order assigned to them by the law of possibility. I see Your Highness smiling; you will no doubt tell me that such assertions will not compromise me, since we will not be willing to try the experiment. And why not? Perhaps I will surprise you by saying that the experiment is ready-made. Yes indeed, more than a thousand copies of a statue have been measured, and though I will not guarantee that it is that of the gladiator, it is in any case not far from it: these copies were even alive, so that the measurements were taken with every possible chance of error; I will add, moreover, that the copies could have been distorted by a host of accidental causes. One must therefore expect, here, to find a very appreciable probable error.

This average man was very popular at the time, especially within the English eugenicist school led by Francis Galton, even though Galton was mainly interested in deviations from this norm (upward deviations and downward deviations). As Bulmer [2004] recalls, “the deviations from that average – upwards towards genius, and downwards towards stupidity – must follow the law that governs deviations from all true averages”. Galton’s work aimed at understanding these deviations. While Florence Nightingale stated that “the average man is God’s will”, Galton was more interested in the hereditary character of the deviations than in the average itself. But does this average man actually mean anything?

Looking for the “average” person

Rose [2016] presents two examples in his book The End of Average. The first is drawn from problems encountered by the US military in the 1950s. When designing the cockpits of fighter aircraft, engineers had used the dimensions of more than 4,000 pilots to optimally position the seat relative to the pedals and the joystick, to set the height of the windscreen, but also the shape of the seat, of the helmet, etc. These measurements made it possible to compute the measurements of the “median” pilot in about ten dimensions. For example, the average pilot height was 179 cm, so “average height” was defined as between 175 and 185 cm. While a majority of the pilots were of average height, none of the 4,000 pilots was “average” in all dimensions. As Daniels [1952] put it, “designing a cockpit for the average pilot was in fact not designing one for any pilot”.

The second example is linked to two statues, Norma and Normman (historically on display in Cleveland, now in the Harvard Library). The artist Abram Belskie and the obstetrician Robert Latou Dickinson made these statues together in 1943. Their particularity is that no model posed for them: they were meant to represent the woman and the man with the average measurements of the time (based on measurements taken on thousands of subjects). Once the statues were made, a contest was held to find out whom they could represent. Several thousand people from Ohio sent in their measurements, but none matched those of the statues. Of course, several hundred had the right height. Several hundred had the right chest circumference. But none had all the right measurements. Because, as Todd Rose explains, a person is not one-dimensional: we measure people along several dimensions, and trying to summarize them in a single one-dimensional quantity is far too reductive. This is what he shows with intelligence tests, for example, where the same IQ can correspond to two very different people. It makes no sense to focus on a single indicator when deciding to recruit someone. The concern, when working in a multivariate context, is that the average loses its meaning. In fact, from a probabilistic point of view, being average can be extraordinary.

The curse of dimensionality

In fact, this problem is well known to statisticians as the “curse of dimensionality”. Let us take a simple example: suppose that a quantity of interest follows a normal law N(\mu,\sigma^2) – weight, height, chest circumference, etc. One could say that being within the norm means being in the interval [\mu\pm1.5\sigma]. For a normal law, this occurs in about 87% of cases, and the remaining 13% will be seen as “abnormal” – abnormally small, or abnormally large. This is the top drawing of Figure 1. We can now look at two dimensions, weight and height, for example. Being within the norm here would mean being in the interval [\mu\pm1.5\sigma] in both dimensions. If the quantities are independent, the probability that both are “normal” is 75%, since 0.87^2\sim 0.75.

In other words, in dimension two, 75% of the observations are globally normal and 25% are abnormal. In dimension 3, we drop to 65%, that is, more than a third of the observations are abnormal (bottom of Figure 1, the red points being the abnormal ones).


Figure 1: Proportion of “average” individuals in dimensions 1, 2 and 3

In dimension five, we drop below 50%; in other words, being within the norm in all five dimensions is no longer the case for the majority. And in dimension twenty, those who are normal are rather atypical, with a proportion of the order of 5%. Thus, in high dimension, normality is no longer associated with the idea of a majority. This is a problem actuaries face today when using very large data sets, in pricing for example: it becomes very difficult to characterize a rate class (to say what the average insured in that class looks like).
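These proportions are easy to reproduce (a small sketch in R, assuming independent Gaussian components in each dimension):

p = pnorm(1.5) - pnorm(-1.5)   # probability of being "normal" in one dimension, about 0.87
d = c(1, 2, 3, 5, 20)
round(p^d, 3)                  # 0.866 0.751 0.650 0.488 0.057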

Normality, statistics and standards

From an empirical, descriptive point of view, being within the norm means nothing other than being close to the average, not deviating too much from it. We then tend to define the norm as the frequency of what happens most often, as the attitude most frequently encountered or the preference most regularly expressed. But this normality is not normativity, and “being up to the norm”, being exemplary, is a different dimension, which no longer relates to a description of reality but to an identification of what it should tend towards. We thus move from the register of what is to that of what ought to be, from “is” to “ought”, to use Hume’s [1739] terminology. It is indeed difficult to envisage the model (or normality) without sliding towards the second meaning of the concept of norm, which has a strictly normative dimension. This vision leads to confusion between norms and laws, even if not all normativity is exhausted by laws. Hume thus notes that, in all moral systems, authors move from statements of fact, that is, statements of the “there is” type, to propositions that include a normative expression, such as “one must” or “one ought”. What Hume disputes is the shift from one type of statement to the other: for him, these are two types of statements that have nothing to do with each other and therefore cannot be logically linked, in particular an empirical norm with a normative rule. For Hume, an assertion that is not normative cannot give rise to a normative conclusion. Hume’s claim has given rise to numerous comments and interpretations, particularly because, as it stands, it seems to be an obstacle to any attempt at a naturalization of morality – as MacIntyre [1959] or Rescher [1990] detail. In this sense, there is a strong distinction between the norm as regularity (normality) and the rule (normativity).

Statistical laws, from micro to macro

The statistical law is about what “is” because it has been observed (for example, “men are taller than dogs”). Human law (divine or judicial) is about what “is” because it has been decreed, and therefore “must be” (“men are free and equal” or “man is good”). Finally, the physical law is about what “is” because we can demonstrate it (“the planets are attracted to each other”), often within the framework of hypotheses. The three concepts can be linked: Kepler’s law was historically established from observations (and thus fell into the first category), before being demonstrated within the Copernican model (moving it into the third). A concept of equilibrium can also be associated with this law, this “norm”. However, as Hilpinen [1971] points out, probabilistic laws raise many questions; one need only think of dice throws or waiting times: what is meant by “it is normal to wait five minutes for the bus”, or, more ethically disturbing, “it is normal for a person remanded in custody to be imprisoned for eighteen months”?

The norm can be seen as a regularity of cases, observed through frequencies (or averages), for example on the height of individuals or the duration of sleep – in other words, the data that constitute the description of individuals. Anthropometric data have thus made it possible to define an average height of individuals in a given population, according to their age; relative to this average height, a difference of 20% more or less determines gigantism or dwarfism. If we think of road accidents, having a road accident in a given year can be considered abnormal at the individual (micro) level, because a majority of drivers have no accident. Nevertheless, from the insurer’s (macro) point of view, the norm is that 10% of drivers have an accident. It would therefore be abnormal for no one to have an accident.
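The gap between the micro and macro viewpoints is easy to quantify (a small sketch, assuming a 10% individual accident probability and independent drivers):

n = c(1, 10, 100)
round(0.9^n, 5)   # probability that none of the n drivers has an accident
# 0.90000 0.34868 0.00003 -> with 100 insured drivers, "no accident at all" is essentially impossible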

This is the argument found in Durkheim [1897]. From the singular act of suicide, considered from the point of view of the individual who commits it, Durkheim moves to suicide as a social act, a genuine regularity within a given society. From then on, according to Durkheim, suicide becomes a normal phenomenon. Statistics then make it possible to quantify the tendency to suicide in a given society, as soon as we no longer observe the irregularity that appears in the singularity of an individual story, but a social normality of suicide.

Standard, convention and ethical aspects

If we take an evolutionary view, what is normal is what is most capable of adapting, of responding to needs, of providing a model for the resolution of situations (nature making abnormality disappear); normality then tends towards normativity, and it becomes difficult to distinguish the two aspects. David Hume addresses this point in the well-known example of the rowers, who get into the same boat to cross a river and row in rhythm (an example discussed at length in Mackie [1980]). The two rowers gradually adjust their strokes, each relative to the other, and no explicit agreement (which would formulate the norm) is necessary for them to comply. The law, which consists in imposing a norm, can be useful in case of conflict (if one of the rowers refuses to row, or if the two rowers have very different physical capacities), but very often it is not necessary to state explicitly the norm inherent in their conduct. An external observer will observe a regularity (once the cruising rhythm is reached) that he can model, but this observed normal rhythm is not necessarily imposed by a law. In the case of the rowers, we recover the notion of equilibrium mentioned previously. To build a model is to extract the signal from the noise (to use Silver’s [2015] distinction); it is to look for a norm, in the statistical sense. But it goes further if a predictive model is constructed: reality must then conform to the model, as actuaries often hope.

Blackburn P., de Rijke M. & Venema Y., Modal Logic, Cambridge University Press, 2002.

Bulmer M., Francis Galton: Pioneer of Heredity and Biometry. Johns Hopkins University Press, 2004.

Daniels G., “The Average Man”, Air Force Aerospace Medical Research Lab, vol. 53, n° 7, 1952.

Durkheim E., Le suicide, 1897.

Hilpinen R., Deontic Logic: Introductory and Systematic Readings, 1971, Dordrecht, D. Reidel Publishing Company.

Hume D., Traité de la nature humaine. Tome III : de la morale, 1739.

MacIntyre A.C., “Hume on ‘Is’ and ‘Ought’”, The Philosophical Review, vol. 68, n° 4, 1959, pp. 451-468, Duke University Press.

Mackie J.L., Hume’s Moral Theory, Routledge & Kegan Paul Books, 1980.

Rescher N., “How Wide Is the Gap Between Facts and Values?”, Philosophy and Phenomenological Research, vol. 50, 1990, pp. 297-319.

Silver N., The Signal and the Noise: Why So Many Predictions Fail – But Some Don’t, Penguin Press, 2015.

Rose T., The End of Average: How We Succeed in a World That Values Sameness, HarperOne, 2016.

Sheynin O., “Mathematical Treatment of Astronomical Observations (A Historical Essay)”. Archive for History of Exact Sciences, vol. 11, 1973, pp. 97-126.

Stahl S., “The Evolution of the Normal Distribution”, Mathematics Magazine, vol. 79, 2006, pp. 96-113.

The excesses of the precautionary principle

(this article was initially written in French, and published in Risques)

« Dans le doute, abstiens-toi » (“when in doubt, abstain”), says popular wisdom. The precautionary principle (in German Vorsorgeprinzip) arose from the idea that it is appropriate to accept that there is doubt, or (scientific) uncertainty, in the knowledge of risks. A little over 20 years ago, in France, the Barnier Act introduced the precautionary principle into French law for the “risk of serious and irreversible damage to the environment”, commonly known as “environmental risk”. A little more than 10 years ago, it was enshrined in the French Constitution, approved by 531 members of Parliament, expressing a very broad political – and probably also social – consensus. Today, however, the precautionary principle is invoked in contexts as diverse as the risk of terrorist acts, but also in civil or criminal proceedings. What are the consequences of this drift in the use of the precautionary principle?

Prudence, prevention, precaution

It is often accepted that precaution is distinguished from prevention by the absence of identified risks. Prevention can be associated with protection against identified risks, while precaution questions the possible actions to take in the face of risks not yet identified. As Ewald, Gollier and de Sadeleer (2009) point out, the precautionary principle in environmental matters involves three imperatives: reducing risks and avoiding emissions even when there are no short-term effects, formulating environmental quality objectives, and defining an ecological approach to environmental management. Therefore, even in the absence of certainty (and whatever the scientific knowledge at the time), the precautionary principle aims at not delaying the adoption of effective and proportionate measures to prevent irreversible damage at an acceptable cost. Other principles can complement it, such as the principle of information, or of participation, which postulates that every citizen must have access to information relating to his environment.

As Hunyadi (2004) noted, these three notions are closely related but different:

  • Prudence refers to proven risks, those whose existence is demonstrated or known empirically, to the point that the frequency of occurrence can be estimated. Probability makes the risk insurable. This category includes alcohol consumption, or playing Russian roulette.
  • Prevention targets proven risks, those whose existence is demonstrated or known empirically, without however the frequency of occurrence being estimable. Nuclear risk probably falls into this category. The uncertainty is not about the risk itself, but about its probability of occurrence. The absence of probabilities normally makes the risk uninsurable for the traditional insurance industry.
  • Precaution refers to risks for which neither the magnitude nor the probability of occurrence can be calculated with certainty, given current knowledge. One example that has been much debated is genetically modified organisms, but we can also include all the risks associated with nanotechnologies.

Precaution, prevention and forecasts

In the words of Hohmann (1994), precaution arises from “the will to free oneself from the assimilative approach in order to replace it with an anticipatory approach”. Anticipation and forecasting are thus at the core of precaution: the precautionary principle implies anticipating risks, together with the dimension of uncertainty that accompanies them. Much like insurance – except that insurance generally assumes that the hazard is beyond the control of agents, whereas precaution is concerned with endogenous risks, which should be anticipated.

And when precaution becomes a legal principle, we can fall into excesses well known to science fiction fans. “I’m placing you under arrest for the future murder of Sarah Marks and Donald Doobin that was to take place today,” police officer John Anderton tells the potential murderer in Philip K. Dick’s 1956 short story The Minority Report. Can one arrest someone preventively, or worse, cautiously, in the name of the precautionary principle?

Yet we are not far from this science fiction scenario when we look at how the fight against terrorism works. In November 2015, under the state of emergency, “administrative” searches were conducted. “The objective is not to be accused of doing nothing when we had information. It is a kind of precautionary principle applied to terrorism,” a police officer told Libération. Before the summer of 2016, Eric Ciotti, member of Parliament for the Alpes-Maritimes in France, said of people with “S files” (the French security watchlist) that “a precautionary principle must apply to these people today; they must be placed in detention. They can no longer be free, because they constitute a threat” (reported by Le Monde (2016)). Is that really the precautionary principle?

Justice, uncertainty, hazard and precaution

The precautionary principle requires the practice of doubt, which may give the impression of “hyperbolic doubt”, and which lawyers sometimes attach to the concept of the reversal of the burden of proof. The adage “guilty until proven innocent” is replacing the age-old legal maxim “innocent until proven guilty”, as noted by van den Belt (2003) and by Flückiger (2003), who questions the concept of proof when put to the test of the precautionary principle.

Strangely, the words proof and probability derive from the same Latin adjective, probus. Probus ager is the field where seeds germinate and grow. What is probus is what meets an expectation, produces, nourishes. Applied to man, it implies goodness, honesty (probity). It is then a positive response to an expectation (by contrast, chance would be associated with the improbable, with a negative response). Probus gave probatio, the proof, later called proba. Approbare then means “to prove”, or even “to approve”, and the derivative probabilis would mean “approvable”. It is this meaning that Cicero seems to have retained when he used probabilia to evoke possible conjectures. The probable thus contains a large psychological part, with a judgment of ethics (we will approve) but also a judgment of truth (we will prove). As Gaston Bachelard said, “it is through confusion between the psychological domain and the real domain that we incorporate the notion of probability” (Bachelard (1927)). Probability, however complicated the calculation behind the value may be, measures only our expectation.

The right decision in an uncertain world

We ask the courts to rule on all sorts of problems – sometimes to establish a truth that scientists cannot establish (as with environmental risk). But in everyday life, judges make many decisions. And they make mistakes. In decision theory, this can be summarized as in Table 1. In criminal justice, the judge – or a jury – must decide whether a person is guilty or innocent. Two types of errors are possible: false negatives (guilty people who are acquitted) and false positives (innocent people who are sent to prison).

Table 1: Decision-making mechanism and errors (in French)

In children’s stories, the false positive is the boy who cries wolf when there is no wolf (and who ends up tiring everyone, so that when the wolf actually comes, nobody believes him). In justice, a false positive is an innocent person sent to prison. It is not uncommon for one type of error to be “economically” more interesting than the other. A cost-benefit analysis in marketing, for example, shows that soliciting someone who will refuse a product is generally less expensive than missing a good customer. In other applications, one type of error will be “socially preferable”, as in public health: during a quarantine, we may prefer to wrongly confine healthy people rather than let one contaminated person leave the contained area (and contaminate the rest of the population). The great difficulty in decision theory is precisely choosing one’s errors (or rather one’s error acceptance rates).

The right decision and mistakes

Voltaire wrote in 1747, about Zadig, chosen by the King to become Prime Minister: “it is from him that nations derive this great principle: that it is better to risk saving a guilty man than to condemn an innocent one”. This principle has long seemed correct (it corresponds to the second scenario in Table 2).

Table 2: Example of decisions rendered 1) Realistic case 2) All innocent 3) All guilty

As Table 2 shows, let us start from a realistic (or supposedly realistic) situation, with a judicial system that makes some mistakes: out of 100 people tried, 10 guilty were acquitted while 15 innocent people were sent to prison. That is 25% errors overall, even if the two kinds of errors are arguably not of the same order. By declaring everyone innocent, 70% of the decisions are mistakes, which is obviously too high. But by finding everyone guilty, 30% are mistakes. To some extent, the risks of the two types of errors are inversely related: reducing one will generally be to the detriment of the other. We must recognize that there is an inevitable compromise in the design of our criminal justice system. In economics, the balance depends on our estimate of the economic (and other) costs associated with each of the two types of error. But in judicial matters, one can imagine that the moral aspect matters more.
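These orders of magnitude can be checked directly (a small sketch reproducing the three scenarios of Table 2, assuming 70 guilty and 30 innocent among the 100 people tried):

guilty = 70; innocent = 30            # composition of the 100 people tried
e_realistic    = (10 + 15) / 100      # 10 guilty acquitted, 15 innocent convicted: 25% errors
e_all_innocent = guilty / 100         # acquitting everyone: 70% errors
e_all_guilty   = innocent / 100       # convicting everyone: 30% errors
c(e_realistic, e_all_innocent, e_all_guilty)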

In a civilized justice system, the risk of errors of the first type (innocent people convicted) is minimized as far as possible – at least, that is what is meant by “innocent until proven guilty”. There is a price to pay for this cautious and civilized approach, namely that many wrongdoers end up acquitted for “lack of sufficient evidence”.

Error and liability

The judge has experience built on a sub-population that is probably not representative of the French population: this selection bias arises because the police had to convince a prosecutor to pursue the investigation, at least to reach the trial stage. This could almost legitimize a judge – or, in a way, anyone gravitating around the world of justice – having such a negative prior that, in doubt, for fear of committing an error, he follows his instinct, built from a biased population. Worse still: if a judge gets into the habit of convicting (on the basis that there is no smoke without fire), he may become convinced that he has only ever faced guilty defendants. Yet it is normal to make mistakes: “errare humanum est”, said Saint Augustine. The danger is that the decision-maker often confuses “fault” and “error”, and for fear of committing a fault for which he could be held responsible, he does not dare to decide. The difficulty is admitting one’s mistakes and learning from them. For it must be remembered that the error of sending an innocent to prison is a double error: if there is an innocent man languishing in prison, there is also – probably – a criminal still at large.

Victims at the centre of justice

With the precautionary principle, power makes society the potential victim, and invites us to see ourselves as such. After the Second World War, as Rechtmann (2005) showed, psychiatrists’ attention shifted from the trauma to the victim. And this idea then imposed itself on society as a whole: everyone must, in order to exist, express his suffering and arouse compassion. While “victimology” thus made its entry into psychiatry, the place of the victim was at the same time strengthened in law. Lévy (2004) has shown that today, victims are granted excessive rights, especially when they fall into certain categories (children, victims of sexual violence, of acts of terrorism, etc.). In such cases the victim’s word is sacralized and the defense of the accused becomes impossible, everyone being unconsciously convinced of his guilt. In La Société des victimes, Guillaume Erner denounces a new moral order being established, which confers on the victim an almost sacred status, a “secularized version of martyrs and saints”.

Lévy (2004) also recalled that psychiatry seems to consider that the psychological disorders resulting from certain offenses are perfectly compatible with normal personalities. Thus, “it is considered that the absence of any visible sign of trauma not only does not exclude the crime, but in some cases may constitute an additional indication of its reality”. This is J. Edgar Hoover’s idea, as legend has it: if wiretaps confirmed the suspicions, individuals were classified as “subversive”; if the wiretaps were inconclusive, they were classified as “cunning subversives”.

We are a long way from the time when Voltaire could claim to prefer a guilty man at large to an innocent man in prison, demanding the lowest possible rate of errors of the first type (innocent people convicted). By placing the victim at the centre of justice, we now demand a zero rate of errors of the second type (guilty people acquitted). This amounts to denying the presumption of innocence – “guilty until proven innocent” – relying abusively on the sacrosanct “precautionary principle”.

The precautionary principle is also a risk

It is now clear that the precautionary principle has set a new standard for judging responsibility and has extended its ethical space. As François Ewald noted, “the one who introduces the risk must foresee it”. By not taking sufficient precaution – in particular by not abstaining – one can be held responsible. In the name of this principle, certain festive events are cancelled, as reported by La Voix du Nord and France 3 when the Lille braderie (the big street market) was cancelled last September. The precautionary principle frightens decision-makers and imposes inertia, an unbounded conservatism. Perhaps it is time to be more cautious in applying this principle.

Bachelard, G. 1927, Essai sur la connaissance approchée, Vrin.

Erner, G. 2006. La Société des victimes, La Découverte.

Ewald, F. Gollier, C. et de Sadeleer, N. 2009, Le principe de précaution, Que sais-je ?

Flückiger, A. 2003. La preuve juridique à l’épreuve du principe de précaution. Revue Européenne des Sciences Sociales, 41, 107-127.

France 3. 2016. Lille : après la braderie, le semi-marathon et le 10 km annulés http://bit.ly/2dpU8eY

Hunyadi, M. 2004. La logique du raisonnement de précaution. Revue Européenne des Sciences Sociales, 42, 9-33.

La Voix du Nord, 2016. Le principe de précaution appliqué à la lettre à la braderie du centre http://bit.ly/2dVE6tS

Le Monde, 2016. Interner tous les djihadistes présumés « fichés S », le retour d’une proposition inapplicable. 14 juin 2016, http://bit.ly/2c9zHF4

Lévy, T. 2004. Éloge de la barbarie judiciaire, Odile Jacob.

Libération, 2015. Les perquisitions, « un principe de précaution ». 19 novembre 2015, http://bit.ly/2cQfxhY

Rechtmann, R. 2005. « Du traumatisme à la victime », in D. Fassin et P. Bourdelais (dir.), Les Constructions de l’intolérable. Études d’anthropologie et d’histoire sur les frontières de l’espace moral, La Découverte.

Van den Belt, H. 2003. Debating the Precautionary Principle: “Guilty until Proven Innocent” or “Innocent until Proven Guilty”? Plant Physiology, 132(3), 1122–1126.

Voltaire, 1747. Zadig ou la destinée.

Machines, procedures and avoiding responsibility

Some people are trying to make us believe that artificial intelligence is a “revolution”. What if it weren’t? Can we not simply see in it the logic of a process that goes back at least fifty years? Bureaucracy has pushed us to put in place, in all areas of everyday life, simple procedures that allow everyone to shed any responsibility, to no longer have to exercise intelligence. Algorithms are scary, and we wonder where the “human” is in these decision-making procedures… What if he had already disappeared long ago?


Can predictive models be fair?

In Nosedive, the first episode of season 3 of the television series Black Mirror, we discover the dystopia of a society governed by a “personal rating”, a score ranging from 0 to 5. In this world, each person rates the others, and the best rated have access to better services (priority in services, better rates, better prices, etc.). Will this tendency to construct scores in all sorts of fields (historically for credit, but today for criminal or even civic behaviour in some countries) not lead to a world that would be an endless popularity contest? And how would such a world be compatible with social justice, which is presumably desirable?

Credit scores and social networks

A credit score is, from an actuarial point of view, a quantity proportional to the probability that a borrower will not honour his or her commitments. That may mean being unable to meet monthly payments for three consecutive months, or simply being late. In real life, as always, things are a little more complicated. In the United States or Great Britain, it is not uncommon for students to go into debt for decades in order to follow the courses that interest them (even if the motivation is mainly to obtain a degree at the end). But above all, as soon as they reach the age of 18, credit rating companies monitor all their movements, often without their knowledge. And if one day a consumer credit or a mortgage is refused, the reasons are never given. Is it a delay in paying rent? Forgotten library fines? An unpaid water bill from years ago?
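To fix ideas, here is a minimal sketch (on simulated data, with entirely made-up variables and coefficients) of what such a score is: a monotone transform of an estimated default probability.

set.seed(42)
n = 5000
arrears = rpois(n, 0.3)                           # number of past late payments
income  = rnorm(n)                                # standardized income proxy
p       = plogis(-2 + 0.8*arrears - 0.5*income)   # "true" default probability
default = rbinom(n, 1, p)
fit   = glm(default ~ arrears + income, family = binomial)
score = round(100 * (1 - predict(fit, type = "response")))   # higher = safer
table(default, score > 80)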

Credit rating companies in the United States, but also in China, are beginning to explore the use of social media data to improve credit scores. Couldn’t counting the number of times a user writes “wasted” in what they post online reveal information about debt repayment? This is at least what the American credit scorer FICO claims: “If you look at how many times a person says ‘wasted’ in their profile, it has some value in predicting whether they’re going to repay their debt (…) It’s not much, but it’s more than zero” (quoted in McLannahan (2015)). In China, the peer-to-peer lender Jubao revealed that it was more likely to give “bonuses” to borrowers if they were Facebook friends with celebrities, as Botsman (2017) tells us.

For the moment, credit rating companies still use the data they know well (utility bills and credit cards), but they imagine that a lot of interesting information must be accessible, one way or another, on social networks. But the data are still scarce, and difficult to analyze. What about the sarcastic or humorous component of a tweet using the word “wasted”? As is often the case, the difficulty is that truly relevant data are hard to obtain. If it is possible to have information on rent payments when a tenant goes through an agency, what about transactions between two individuals? And if that were possible, how would you handle the case of roommates? Not getting credit because a former roommate didn’t pay on time becomes disturbing – all the more so if the issue is perhaps a cell phone bill wrongfully claimed by the telephone company after the subscription had been cancelled.

But the big penalty in the credit score is often never having had a credit card. One might think that a person who did not need a credit card (and was satisfied with a debit card, allowing purchases from merchants, like most bank cards in France) is a prudent person, who does not need credit for daily expenses. But for credit institutions, this person is not reliable, because they do not know him. And it is up to him to prove that he is reliable (we return to the recurring practice of reversing the burden of proof mentioned in Charpentier (2016)). This is strangely what happens today when you want to enter US soil without having a Facebook page.

In a world of widespread surveillance

What if credit institutions weren’t the only ones interested in our lives? What would the world be like if, in addition to knowing whether I pay my bills on time, some people wanted to know about my networks of friends, which newspapers I read, whether I prefer to buy whole milk or semi-skimmed milk? Visiting the Stasi Museum in Berlin, we discover that this world existed, that 1 person out of 63 was an agent (or informer) of the Stasi (counting occasional informers, the proportion could reach one person out of 6). The museum describes a total panopticism, each person being permanently observed, as described by Foucault (1975). But doesn’t this nightmare correspond to today’s world of permanent, more or less consented-to surveillance? Surveillance via cell phones (geolocation for the most common function, but sometimes also audio recordings made without the user’s knowledge by certain applications), via connected objects, but also via surveillance cameras coupled with increasingly powerful facial recognition algorithms. At the end of 2017, 170 million cameras were installed in China, and the 300 million mark should be reached by 2020. During an experiment conducted by the BBC[1], it took 7 minutes to find the journalist John Sudworth walking in the streets.

The danger is that you never know who is in control. More and more private security companies have partnered with governments. Email providers read our messages to detect spam, but also to resell certain information. Thus, in the Privacy Policy attached to Gmail’s Terms of Use (Google), we read: “Our automated systems analyze your content (including email) to provide you with custom product features, such as (…) custom advertising”. Insurers are increasingly considering installing GPS boxes in cars, but through external service providers. Beyond the ownership of data (discussed in Charpentier & Suire (2016)), we can wonder about their resale, and their use. Knowing that someone regularly visits a blood transfusion centre is potentially interesting information, especially when coupled with other data.

Since 2014, the Chinese government has been working on an evaluation system for its own citizens, scheduled for implementation in 2020, as Trujillo (2017) tells us. This “social credit system” aims to create a “citizen score” (to use the expression of Galeon & Bergan (2017)), in order to predict and prevent potential dangers, normalizing individual behaviour through panoptic devices (such as video surveillance), inducing self-defence and self-control reflexes. As Foucault (1975) put it, it is a question of “ensuring that surveillance is permanent in its effects, even if it is discontinuous in its action; that the perfection of power tends to make the actuality of its exercise useless” (although today it is more and more continuous in its action). Some scores of this kind are used by police forces to decide where to patrol in order to reduce crime, as with PredPol. But on closer inspection, the predictions say, in substance, that crimes will take place (mostly) in the (historically) most criminogenic areas of the city. The boundary between banality and tautology is narrow. And the real danger is that scores often transform probabilities into near-certainties, and suspicion becomes proof, as Supiot (2015) noted.

Predictive justice and actuarial methods

In June 2010, a report from the Academy of Medicine called for “improving the practice of expert assessments of the dangerousness of sex offenders by teaching and disseminating actuarial methods”. These “actuarial methods” are quite simply scoring techniques, “profiling” as defined in the European regulation on personal data of 27 April 2016 (GDPR). Angèle Christin has studied the algorithms that estimate the probability of recidivism in the American criminal justice system. As she has shown, these techniques raise many questions: discriminatory biases, an opacity that makes appeal difficult, and above all the difficulty of understanding what is actually being calculated. In the State of Virginia, a score between 1 and 10 is returned, a convention also adopted by Compas (Correctional Offender Management Profiling for Alternative Sanctions), which in addition offers a colour code predicting the risk of violent recidivism. It is a decision-support tool: a machine cannot, on its own, place a person in detention (Christin et al. (2015)).

The conclusions of a predictive score depend on two key elements: the model used, and the data. In the majority of cases in the United States, the code of the models remains opaque (and therefore impossible to challenge), and few people have seen the data used to calibrate them. But one may ask whether court decisions are not also relatively opaque. Judges must certainly give reasons for their decisions, which makes them open to criticism and appeal; but if the process were so transparent, shouldn’t the outcome of a (human) trial then be more predictable? Finally, the various biases are quite simple to understand. Suppose that being rich means having a good lawyer, and that having a good lawyer means avoiding certain convictions. In that case, a wealth variable (the type of vehicle owned, for example) will be positively related to not being found guilty, and will lower the dangerousness score. The other danger of selection biases is that they are sometimes complex to understand, even paradoxical. A classic example is shown in Figure 1. During the Second World War, engineers and statisticians were asked how to reinforce bombers facing enemy fire.

Figure 1: Damaged locations of returned aircraft (source: McGeddon 2016)

The statistician Abraham Wald began collecting data on the impacts on the fuselage, as reported by Mangel & Samaniego (1984). To everyone’s surprise, he recommended shielding the areas of the aircraft that showed the least damage. Indeed, the sample of aircraft had a significant bias: only returning aircraft were taken into account. If they were able to return with holes at the tips of the wings, it is because those parts are sufficiently solid. And since no aircraft returned with holes in the engines, those were the parts that needed reinforcing.
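Wald’s argument is easy to reproduce by simulation (a toy sketch, with entirely made-up survival probabilities): looking only at returning planes, the most vulnerable zone seems the least hit.

set.seed(123)
zone  = sample(c("wings", "fuselage", "engine"), 10000, replace = TRUE)
fatal = (zone == "engine") & (runif(10000) < 0.8)   # suppose 80% of engine hits are fatal
round(table(zone[!fatal]) / sum(!fatal), 3)
# engine hits look rare among returning planes, precisely because the engine is the weak spot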

Another danger lies in reversed causal relationships. What about the doctor who prescribes a powerful neuroleptic to a patient under investigation, for fear that justice would blame him for not having seen his patient’s dangerousness – while justice, conversely, relies on this prescription to prove that the patient is dangerous? A poorly designed algorithm could misread the direction of such causal relationships.

But predictive models in judicial matters are not only on the judges’ side. In the event of a road traffic accident, the Badinter Act (of 5 July 1985) provides a “right to compensation” for any victim of a traffic accident involving a land motor vehicle. When the driver’s insurance company offers compensation, the victim makes a quick cost-benefit analysis to decide whether to go to court. Without formally constructing a predictive model, the victim tries to assess, from the elements at his or her disposal, the costs of asking a judge to decide on the amount of compensation, and its (potential) benefits.

Another important point is that lawyers call these “predictive” models “actuarial” models. The first function of actuaries was to discount, to compute the value of time. And judicial time often has disastrous consequences. How would an imperfect human decision, taken after 5 years of proceedings, be “better” than an automatic decision taken in 15 days by a machine? Many people who have experienced years-long proceedings ending in a dismissal dream of accelerated procedures. Because “lost time” has a value; actuaries know this well.

What, then, of the efficiency of algorithmic models? Justice must be efficient, but this constraint must not make us forget its central objective, which is to render justice. What happens if efficiency becomes an objective, not to say the main objective? That is the question posed by predictive models: what is the objective we are trying to maximize? And how is it formulated in a simple way?

Decision support, or justification for decision making?

In the United States, many judges have been accused of justifying a judgment using decision-support tools, which leaves some doubt as to the real function of these tools. The original idea was to help. Recently, several systems put in place in past years have been called into question. For example, in Australia, the STMP (Suspect Targeting Management Plan) proposed to identify whether or not pre-adolescents should be monitored. This model is similar to any actuarial model, i.e. a risk assessment and prediction tool, focusing either on repeat offenders or on those suspected of committing a future crime. A recent report, however, showed that its use had “no observable impact on crime prevention”[2]. At the same time in the United States, Compas (Correctional Offender Management Profiling for Alternative Sanctions) was criticized in Dressel & Farid (2018): “Advocates of these systems argue that data and advanced machine learning make these analyses more accurate and less biased than those of humans. However, we show that the widely used Compas risk assessment software is no more accurate or fair than predictions made by people with little or no criminal justice expertise”. The experiment asked people recruited on the Internet, with no legal skills, to predict whether or not individuals would commit another crime within the next two years. Compas was wrong in 34.8% of cases, the Internet users in 33% of cases. That said, one may wonder what “being wrong” means here. What is measured is not recidivism itself, but conviction for recidivism. What if the models (or the people) were not wrong, but the judges were?

Predict and make mistakes

What if part of the problem came from what we ask of a predictive tool? To predict is (basically) to assign a probability to a future event. As was pointed out in a debate on polls and elections, can we say that we were wrong if we announce that an event can happen with a 5% chance, and it actually happens? To know whether a forecasting technique is good, one needs to collect a set of forecasts and compare them with observations. This is what meteorologists have been doing for about fifteen years, and it was formalised by Gneiting et al. (2007). Their idea is that a model produces a set of predictive distributions \{\hat F_t,\hat F_{t+1},...,\hat F_{t+h}\}, and that these distributions should be compared with the observations \{y_t,y_{t+1},...,y_{t+h}\} – not with point forecasts \{\hat y_t,\hat y_{t+1},...,\hat y_{t+h}\}. One then needs a distance between predictive distributions and observations. In a physical system, one can hope to understand the various causal relationships, and thus to predict. But in human affairs (and justice is a perfect example), nothing is as simple, as automatic, as the laws of fluid mechanics that make it possible to model meteorological phenomena.
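One such distance is the CRPS (continuous ranked probability score), used precisely in the literature around Gneiting et al. (2007). A minimal sketch, assuming Gaussian predictive distributions (for which the CRPS has a closed form):

crps_gaussian = function(y, mu, sigma){
  # closed-form CRPS of the predictive law N(mu, sigma^2) at observation y
  z = (y - mu) / sigma
  sigma * (z * (2*pnorm(z) - 1) + 2*dnorm(z) - 1/sqrt(pi))
}
set.seed(1)
y = rnorm(1000)                            # observations
mean(crps_gaussian(y, mu = 0, sigma = 1))  # calibrated predictive law: low score
mean(crps_gaussian(y, mu = 0, sigma = 3))  # overdispersed predictive law: higher (worse) score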

References

Binet, Jacques-Louis, 2010, La prévention médicale de la récidive chez les délinquants sexuels. Académie de Médecine.

Botsman, Rachel. 2017. Who Can You Trust?: How Technology Brought Us Together – and Why It Could Drive Us Apart. Portfolio Penguin

Charpentier, Arthur & Suire, Raphaël 2016. Données et santé: valeurs, acteurs et santé. Risques, 107

Charpentier, Arthur. 2016. Les dérives du principe de précaution. Risques. 108

Christin, Angèle, Rosenblat, Alex & Boyd, Danah 2015. Courts and Predictive Algorithms. Datacivilrights

Dressel, Julia & Farid, Hany 2018. The accuracy, fairness, and limits of predicting recidivism. Science Advances

Foucault, Michel 1975 Surveiller et punir, naissance de la prison. Gallimard

Galeon, Dom & Bergan, Brad 2017. China’s “Social Credit System” Will Rate How Valuable You Are as a Human. Futurism

Gneiting, Balabdaoui & Raftery 2007. Probabilistic forecasts, calibration and sharpness. JRSS-B, 69, 243–268.

Mangel, Marc & Samaniego, Francisco 1984. Abraham Wald’s work on aircraft survivability. Journal of the American Statistical Association, vol. 79, no 386, 259–267

McLannahan, 2015. Being ‘wasted’ on Facebook may damage your credit score. Financial Times, October 2015

Supiot, Alain 2015. La gouvernance par les nombres : cours au Collège de France, 2012-2014. Fayard.

Trujillo, Elsa 2017. La Chine met en place un système de notation de ses citoyens pour 2020. Le Figaro, décembre 2017

[1] In « In Your Face: China’s all-seeing state » http://www.bbc.com/news/av/world-asia-china-42248056/in-your-face-china-s-all-seeing-state

[2] https://www.numerama.com/politique/300907-un-algorithme-teste-par-la-police-pour-anticiper-les-crimes-des-jeunes-inquiete-laustralie.html

Convex hull of randomly drawn points

Last weekend, Jean-Baptiste, who was passing by the house, showed me an amusing geometry problem, related to a paper posted online last year, Monotonicity of facet numbers of random convex hulls. In that paper, the authors show that if n points are drawn at random (in a d-dimensional space), then P_n, the expected number of facets of the convex hull, is strictly increasing in n. We did not look at the proof (we had better things to do), but Jean-Baptiste told me that this very simple-looking problem is actually very hard. And of course, since it intrigued me, I wanted to take a closer look – in dimension d=2, drawing points uniformly on the unit square (yes, I kept things very, very simple).

Drawing points at random and extracting the convex hull is fairly simple, for example with 15 points

library(sp)        # for point.in.polygon(), used below
library(geosphere) # for areaPolygon(), used below
n = 15
UV = matrix(runif(2*n), n, 2)   # n points drawn uniformly on the unit square
CH = chull(UV)                  # indices of the extreme points
PLCH = UV[c(CH, CH[1]), ]       # closed polygon (repeat the first vertex)
plot(c(0,0,1,1), c(1,0,0,1))
polygon(PLCH, border="blue", col=rgb(0,0,1,.3))
points(UV, pch=19, cex=2, col="red")

As we can see, the convex hull here is a polygon with 8 sides (or 8 vertices). We can then write a small function that draws n points at random, builds the convex hull, and returns some information (number of extreme points, area of the convex hull, whether or not certain points – here on the diagonal – lie inside, etc.)

simu = function(n, isplot=FALSE){
  UV = matrix(runif(2*n), n, 2)
  CH = chull(UV)
  PLCH = UV[c(CH, CH[1]), ]
  nb_ex = length(CH)              # number of extreme points
  # does the point (u,u), on the diagonal, lie inside the hull?
  p_in = function(u) point.in.polygon(u, u, PLCH[,1], PLCH[,2])
  pts_in = Vectorize(p_in)(seq(.5, .95, by=.05))
  if(isplot == TRUE) lines(PLCH, col=rgb(0,0,1,.25))
  # note: areaPolygon() treats coordinates as lon/lat, so the "area" is only used for comparisons
  return(list(nb=nb_ex, area=areaPolygon(PLCH), pts=pts_in))}

for example

plot(c(0,0,1,1),c(1,0,0,1),col="white")
for(s in 1:1000){S=simu(5,isplot=TRUE)}

(we could of course store lots of other things)

or with n = 20 points instead of 5

plot(c(0,0,1,1),c(1,0,0,1),col="white")
for(s in 1:1000){S=simu(20,isplot=TRUE)}

Now let us loop over n

Np = c(3,4,5,6,7,8,10,15,20,30,40,50,75,100,200)
VN = VA = rep(NA, 15)        # mean number of extreme points, mean area
NN = matrix(NA, 20000, 15)   # number of extreme points, per simulation
VPT = matrix(NA, 15, 10)     # proportion of times (u,u) lies inside the hull
for(i in 1:15){
  N = A = rep(NA, 20000)
  PT = matrix(NA, 20000, 10)
  np = Np[i]
  for(s in 1:20000){
    S = simu(np, isplot=FALSE)
    N[s] = S$nb
    PT[s,] = S$pts
    A[s] = S$area
  }
  NN[,i] = N
  VN[i] = mean(N, na.rm=TRUE)
  VA[i] = mean(A, na.rm=TRUE)
  VPT[i,] = apply(PT, 2, function(x) mean(x, na.rm=TRUE))
}

This time we store lots of things. We can start with a boxplot of the number of extreme points as a function of the sample size

VV=rep(Np,each=20000)
boxplot(as.vector(NN)~as.factor(VV))

Yes, on average, it does seem to increase. More amusingly, if we plot the mean as a function of \log(n)

plot(Np,VN,type="l",log="x",col="blue")

we get… a nice straight line! The mean number of extreme points grows in \log(n). We can even get the slope of this line,

> lm(VN~log(Np))

Coefficients:
(Intercept)      log(Np)
    0.05224      2.58718

I leave it to the braver readers to make sense of this 2.587… If we push on a little, we can look at the probability that the point (u,u) lies inside the hull, for several values of u.

plot(Np, VPT[,10], type="l", log="x", col="blue")  # u = 0.95
lines(Np, VPT[,8])                                 # u = 0.85
lines(Np, VPT[,5], col="red", lwd=2)               # u = 0.70

Again we find increasing functions of n, but the convexity seems to depend on where the point u is located. Amusing, isn’t it?