This post was initially written in French, and published in April 2020.
In a talk given on February 13, 2020[i], entitled Against the Method, Didier Raoult stated: “I have never done randomized trials […] to do that on infectious diseases, it makes no sense”. This view was repeated in a more detailed article, in which Didier Raoult defended (what he called) “the morality [and] the humanism” of the Hippocratic oath against “the method” (and “mathematics”). As he puts it, using a control group means “telling the patient that we are going to give him at random either the drug we know works or the drug we do not know whether it works” (Raoult (2020a, 2020b)). At a time when randomized experiments are hailed in every discipline – as the Nobel Prize in Economics awarded in 2019 to Esther Duflo, Michael Kremer and Abhijit Banerjee reminds us – how can a researcher take such a position today?
Observation, experimentation, statistics and causality
Statistics can be seen as the set of mathematical tools used to extract information from data. For example, we may ask whether, following a surgical operation, a patient should recover at home or in hospital (for a faster or more effective recovery). The statistician will naturally collect data on as many operations as possible, find out whether people recovered at home or in hospital, perhaps gather some additional variables, such as the nature of the operation, the age of the patient, or the distance to the nearest hospital (in case of complications), and look for an indicator of success, such as whether the operation had to be redone in the following months. This may seem natural, but does it answer the question? In such data, known as observational data, shouldn’t we expect a selection bias? Weren’t the people who were sent home healthier than those who stayed longer in the hospital? The question the statistician is really asking is “what would have happened if the person had chosen the other option?” This is causal inference, and the patient chooses one option, never both: there is no way to know what would have happened had the other option been chosen. The causal effect of an option, or treatment, is never observed, because it is the difference between two potential outcomes.
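In the standard potential-outcomes notation (a textbook formalism that the paragraph above describes in words; the notation is not taken from the original post), the point can be written compactly:

```latex
% Y_i(1): outcome of patient i if sent home;  Y_i(0): outcome if kept in hospital.
% The individual causal effect is the difference between the two potential outcomes,
\[
\tau_i \;=\; Y_i(1) - Y_i(0),
\]
% but only one of Y_i(1), Y_i(0) is ever observed, so \tau_i is never observed directly.
% Comparing the groups that actually chose each option gives
\[
\mathbb{E}[Y \mid T=1] - \mathbb{E}[Y \mid T=0]
\;=\;
\underbrace{\mathbb{E}[Y(1)-Y(0) \mid T=1]}_{\text{effect on the treated}}
\;+\;
\underbrace{\mathbb{E}[Y(0) \mid T=1] - \mathbb{E}[Y(0) \mid T=0]}_{\text{selection bias}}.
\]
% Randomizing T makes it independent of (Y(0), Y(1)): the selection-bias term
% vanishes, and the naive comparison of groups estimates the causal effect.
```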
The simplest method is to take two patients who are close (if not identical) and see how they respond to the two options, or treatments. John Stuart Mill referred to this as the “method of difference”. Proximity is assessed using covariates such as age, gender, or other observed characteristics. The idea of randomized experiments is not simply to observe, but to choose the option, or treatment, at random (rather than letting the patient or the doctor choose). Each person is then assigned to a group regardless of his or her characteristics. Ideally, John Stuart Mill wanted to compare identical people across alternative choices: there should be no unmeasured pre-treatment differences between the people receiving the treatment and those in the control group. While the philosophical concept is laudable, John Stuart Mill did not provide a method for creating this ideal situation. During the 20th century, statistics showed that random assignment to treatment groups (one could imagine more than two alternatives) was very promising.
This is called an experiment because random assignment rarely occurs in a natural setting: assignments usually reflect the judgment (and biases) of the person making the decision. When choosing between an aggressive and a milder treatment, it would seem natural to treat less aggressively a patient whose disease is less severe, to treat more aggressively a patient whose disease is more severe but who might survive, or perhaps to treat less aggressively a patient who has no hope of survival. If severely ill patients receive the aggressive treatment (and less severely ill patients the milder one), then the aggressive treatment may appear harmful when it is in fact beneficial, since severely ill patients are, a priori, those most likely to die. To borrow John Stuart Mill’s idea, we would like to compare the two options fairly, but fair comparisons are rare in nature, as Rosenbaum (2017) reminds us.
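To make this concrete, here is a minimal simulation sketch (in Python; the severity model, the probabilities and the variable names are all invented for illustration, not taken from any study): when the sickest patients are the ones who receive the aggressive treatment, the naive comparison of death rates makes that treatment look harmful, while random assignment recovers its true, beneficial effect.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Latent severity of the disease (higher = sicker); purely illustrative.
severity = rng.uniform(0, 1, n)

def death_prob(severity, aggressive):
    # Hypothetical model: the aggressive treatment lowers the probability of
    # death by 0.15, but sicker patients are more likely to die either way.
    return 0.20 + 0.5 * severity - 0.15 * aggressive

def estimated_effect(treated):
    """Observed death rate among treated minus death rate among untreated."""
    deaths = rng.random(n) < death_prob(severity, treated)
    return deaths[treated == 1].mean() - deaths[treated == 0].mean()

# Observational setting: doctors reserve the aggressive treatment for the sickest half.
obs_treated = (severity > 0.5).astype(int)

# Randomized setting: a coin flip decides, regardless of severity.
rct_treated = rng.integers(0, 2, n)

print("True effect on the death rate:         -0.150")
print(f"Observational estimate (biased):       {estimated_effect(obs_treated):+.3f}")
print(f"Randomized estimate (close to truth):  {estimated_effect(rct_treated):+.3f}")
```

Under this (made-up) model, the observational comparison is positive (the treatment looks like it increases mortality) simply because it is given to the sickest patients, whereas the randomized comparison is close to the true −0.15.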
And randomization is one of the simplest methods to implement[ii], precisely because drawing lots favors neither treatment. The concepts of equity and justice we are referring to here are simply concepts of symmetry (Paternotte (2020) and Ferry-Danini (2020) discuss the importance of ethics, a point we will return to later). This symmetry in the assignment means that, a priori, every patient has the same chance as any other of receiving a given treatment. In the 1920s, this was referred to as “uniformity trials”. Randomized experiments were initially proposed in agriculture, on experimental farms. The farm was divided into several plots, and the treatment (a fertilizer or an insecticide, say) was assigned at random. Some plots served as controls, and by comparing the two one could quantify the effectiveness of the treatment relative to the control reference (Dehue (2001) traces the first controlled experiments, and Hacking (1988) the experiments of the 1880s designed to find out whether telepathy works).
Social experiments and public policy evaluation
This approach is now standard for evaluating the impact of a public policy: two groups are formed completely at random, a test group (which will benefit from the policy) and a control group (which will not). The policy can be an increase in the salary of certain civil servants via a bonus, loans to women in certain communities, etc. At the end of the trial, the two groups are compared to see whether or not there has been a beneficial effect, possibly refining by subgroup to understand who benefits from the policy and who does not. We can then quantify the impact (if it is deemed significant), though rarely explain the reasons behind it. This does not tell us whether another policy would have had different effects, or whether another lever of action would have been more effective. It can be shown, for example, that encouraging a visit to the doctor to be vaccinated lowers mortality in certain developing countries. Esther Duflo, Michael Kremer and Abhijit Banerjee have advocated the use of these methods to measure the impact of policies on poverty.
This is because randomized controlled trials are the most rigorous way to determine whether there is a causal relationship between a treatment and an outcome, and to quantify the impact of the treatment. The main feature is randomization between intervention groups, but it is not the only one. In medicine, it is often required that neither patients nor researchers know which treatment has been administered until the study is complete (so-called double-blind studies). This condition often cannot be met in the case of public policy. Recent advances make it possible to go further by relaxing several assumptions, for example by allowing group sizes to be adapted in repeated experiments: if a treatment appears to have an effect but the experiment needs to be extended over time, more subjects can be included. To use the terminology of Charpentier et al. (2020), it is possible to strike the right balance between exploration and exploitation, as sketched below.
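To illustrate what this trade-off can mean in an adaptive trial, here is a minimal sketch (in Python, with invented success probabilities and a generic two-armed setting; this Thompson-sampling scheme is only one standard way of formalizing exploration versus exploitation, not necessarily the specific algorithms studied in Charpentier et al. (2020)): as evidence accumulates that one arm works better, new subjects are allocated to it more and more often, while the other arm is still explored occasionally.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical (and unknown to the experimenter) success probabilities of two arms.
true_p = {"control": 0.30, "treatment": 0.45}

# Beta(1, 1) priors on each arm's success probability, updated after every subject.
successes = {arm: 1 for arm in true_p}
failures = {arm: 1 for arm in true_p}
allocation = {arm: 0 for arm in true_p}

for _ in range(2_000):
    # Thompson sampling: draw a plausible success rate for each arm from its
    # posterior, and give the next subject the arm with the highest draw.
    draws = {arm: rng.beta(successes[arm], failures[arm]) for arm in true_p}
    arm = max(draws, key=draws.get)
    allocation[arm] += 1

    # Observe the (simulated) outcome and update the posterior of that arm.
    if rng.random() < true_p[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

print("Subjects allocated to each arm:", allocation)
print("Estimated success rates:",
      {arm: round(successes[arm] / (successes[arm] + failures[arm]), 3) for arm in true_p})
```

The allocation drifts toward the better arm (exploitation) while never completely abandoning the other (exploration), which is exactly the balance mentioned above.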
Experimentation and medicine: the case of polio
Strangely, economists have long claimed that these randomized experimental methods came from medicine, yet the first major public health experiments date only from the 1950s. In 1954, more than 400,000 American children took part in a randomized experiment to quantify the effects of a vaccine, developed by Jonas Salk, to prevent polio (poliomyelitis). In less than a year, this experiment definitively settled the question, a first for a public health issue. In the states that took part in the study, participation was not mandatory. Just over 200,000 children, selected at random, were given the vaccine, while an almost equal number were given a placebo, in this case a salt-water solution; just over 300,000 children, although eligible, did not participate. One might question the ethics of such a procedure, which consists in giving children salt water when they could have received the vaccine. But this view is distorted, because it relies on the result of the trial: at the time, no one knew whether the vaccine was effective, nor, in particular, whether it might have harmful side effects.
This principle of uncertainty (Freedman (1987) calls it equipoise) is often considered an ethical prerequisite for launching an experiment. In the vaccinated group (using the figures given by Brownlee (1955), Meier (1990) and Meldrum (1998)), 16 children per 100,000 developed paralytic polio, compared with 57 per 100,000 in the placebo group. Statistically, such a difference can be considered “significant”.
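To see why, note that with roughly 200,000 children per arm, rates of 16 and 57 per 100,000 correspond to about 32 and 114 cases. A simple two-proportion comparison (here a chi-square test in Python, using these rounded counts rather than the exact published figures) shows that such a difference is extremely unlikely to be due to chance alone.

```python
from scipy.stats import chi2_contingency

# Approximate counts, reconstructed from the rates quoted above
# (16 and 57 cases per 100,000, roughly 200,000 children per arm);
# the exact published figures differ slightly.
n_per_arm = 200_000
cases_vaccine, cases_placebo = 32, 114

table = [
    [cases_vaccine, n_per_arm - cases_vaccine],   # vaccine arm
    [cases_placebo, n_per_arm - cases_placebo],   # placebo arm
]

chi2, p_value, _, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, p-value = {p_value:.1e}")
# The p-value is far below any conventional threshold, so the difference
# between 16 and 57 cases per 100,000 is statistically significant.
```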
For the record, this randomized experiment was not the original plan. The initial idea was to give the vaccine to all second graders and to use the first and third graders as a control group. But several researchers objected, noting that promotion from first to second grade was based on grades, so the best pupils (academically) would receive the vaccine. Another concern was that, if the grouping variable were made public, results could be manipulated: the doctors administering the vaccine knew the children’s grade level, and their opinion of the vaccine could have pushed the results one way or the other, for instance simply by encouraging some children not to participate in the experiment. In the randomized version, the choice to participate or not was made before the groups were formed. Participation in the experiment was related to some variables (in particular, mothers with lower socioeconomic status more often thought that vaccines were dangerous, and more often withdrew their children), but assignment to the groups was completely random, and the two groups could be considered comparable. This self-selection therefore caused no bias in the analysis, unlike in the non-randomized case[iii]. This first large randomized experiment established unambiguously the efficacy of the vaccine in preventing polio, and was an important first step toward the eradication of the disease (at least in developed countries). And yes, polio is an infectious disease: this experiment was a fundamental step in scientific research, showing the importance of the method whenever it can be applied.
When randomization is impossible
In an observational study, i.e., a study of the effects of a treatment without random assignment of treatments, an association between the treatment received and the observed outcome is usually ambiguous, as discussed above. This association could reflect an effect caused by the treatment (which is what one hopes for when launching the study), an unmeasured bias in the way treatments were assigned, or a combination of both. While Ronald Fisher laid the mathematical foundations of randomized experimentation, William Cochran formalized sampling methods and explained how to analyze observational studies.
Observational data have an undeniable appeal, as noted by Moses (1995) and Benson & Hartz (2000): lower cost, faster results, and often a larger number of patients. But they can lead to misconceptions. Before 2002, physicians routinely prescribed hormone replacement therapy to postmenopausal women to prevent myocardial infarction, on the basis of observational studies. Yitschaky et al. (2011) recall that randomized experiments conducted between 2002 and 2004 found that some of these women had a higher rate of myocardial infarction than women on placebo, and that hormone replacement therapy (estrogen only) did not reduce the incidence of coronary heart disease. As Sibbald & Martin (1998) had already explained, only a randomized experiment can correct a practice that is nevertheless accepted by the profession. One can also think of the MRC Vitamin Study Research Group (1991), which revisited a (non-randomized) experiment aimed at establishing that vitamin intake during pregnancy could prevent neural tube defects in children. In the 1980s, ethics committees had not wanted to deprive patients of this potentially useful treatment, a decision that had unexpected, harmful side effects and made the results difficult to analyze – it took more than ten years to show that folic acid was the effective component of the multivitamin cocktail given to pregnant women, as Sibbald & Martin (1998) recall.
In practice, observational studies are used mainly to identify risk factors or when randomized controlled trials would be impossible or unethical.
For example, Larroque et al. (1995) investigated the impact of alcohol consumption during pregnancy by examining children a few years later (between 4 and 5 years of age). They compared moderately low and moderately high levels of alcohol consumption and found that children whose mothers had consumed the equivalent of four or more glasses of wine per day performed significantly worse on a variety of cognitive assessments. It was also noted that mothers who drank more were different from those who drank less: the heavier drinkers were less educated, older, and smoked cigarettes more often. These differences imply substantial biases in any comparison. The authors proposed methods to assess these biases, but a randomized experiment was not feasible here: one cannot force some mothers to drink, or others to stop drinking. As the U.S. Centers for Disease Control and Prevention noted in 2016[iv], even if causal inference is impossible on this question, “why take the risk?” Sometimes the precautionary principle is necessary.
The practice of randomization
It is surprising to see randomized experiments rejected outright, as Raoult (2020a, 2020b) does. They are nowadays an essential tool in the human sciences, as Imai (2017) reminds us. But their practice is not without pitfalls. As Stegenga (2018) shows, randomized experiments play a fundamental role not “in medicine” but in the regulatory procedures that frame the practice of medicine. For a drug (or, say, a treatment) to be approved by the U.S. Food and Drug Administration (FDA), there must typically be two randomized clinical trials suggesting that the drug is superior to a placebo. There is no constraint on the number of trials performed, and because negative trials often go unpublished, this practice tends to overestimate the benefits of a treatment, a phenomenon known as publication bias (illustrated below). But in the current health crisis, refusing to use such a technique is neither serious nor reasonable.
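A small simulation sketch (again in Python, with invented numbers: a drug whose true benefit is tiny, and trials of 50 patients per arm) illustrates the publication-bias point: if only the trials that happen to reach statistical significance are published, the published estimates of the benefit are systematically much larger than the true effect.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

# Hypothetical setting: the drug improves the outcome by only 0.05 standard
# deviations, and each trial enrolls 50 patients per arm.
true_effect, n_per_arm, n_trials = 0.05, 50, 10_000

all_effects, published_effects = [], []

for _ in range(n_trials):
    placebo = rng.normal(0.0, 1.0, n_per_arm)
    drug = rng.normal(true_effect, 1.0, n_per_arm)
    estimate = drug.mean() - placebo.mean()
    all_effects.append(estimate)
    # Only "positive" trials (significant, in the right direction) get published.
    if ttest_ind(drug, placebo).pvalue < 0.05 and estimate > 0:
        published_effects.append(estimate)

print(f"True effect:                          {true_effect:.2f}")
print(f"Average effect over all trials:       {np.mean(all_effects):.2f}")
print(f"Average effect in published trials:   {np.mean(published_effects):.2f}")
```

In this toy setting the published trials suggest an effect an order of magnitude larger than the true one, simply because the negative trials never appear.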
References
Benson, K. & Hartz, A.J. (2000) A Comparison of Observational Studies and Randomized, Controlled Trials. New England Journal of Medicine, 342:1878-1886
Boring, E. G. (1954) The Nature and History of Experimental Control. American Journal of Psychology, vol. 67, no. 4, pp. 573–589.
Brownlee, K.A. (1955). Statistics of the 1954 polio vaccine trials. Journal of the American Statistical Association 50, 1005–1013
Charpentier, A., Elie, R. & Remlinger, C. (2020). Reinforcement Learning in Economics and Finance. ArXiv: 2003.10014.
Deaton A. (2010). Instruments, randomization, and learning about development. Journal of Economic Literature, vol. 48, n° 2 : 424-455.
Dehue T. (2001). Establishing the experimenting society: The historical origin of social experimentation according to the randomized controlled design. American Journal of Psychology, 114-2, p. 283-302.
Ferry-Danini, J. (2020). Petite introduction à l’éthique des essais cliniques [A short introduction to the ethics of clinical trials]. Medium.
Freedman, B. (1987). Equipoise and the ethics of clinical research. New England Journal of Medicine, 317: 141–145
Hacking, I. (1988). Telepathy: Origins of Randomization in Experimental Design. Isis, 79(3), 427-451
Headlam, J.W. (1891), Election by Lot at Athens. Cornell University Press.
Imai, K. (2017). Quantitative Social Science: An Introduction. Princeton University Press.
Larroque, B., Kaminski, M., Dehaene, P., Subtil, D., Delfosse, M.J. & Querleu, D. (1995). Moderate prenatal alcohol exposure and psychomotor development at preschool age. American Journal of Public Health, 85, 1654–1661.
Meier, P. (1990). Polio trial: An early efficient clinical trial. Statistics in Medicine, 9, 13–16.
Meldrum, M. (1998). A calculated risk: The Salk polio vaccine field trials of 1954. British Medical Journal, 317, 1233–1236.
Moses, L. (1995). Measuring effects without randomized trials? Options, problems, challenges. Medical Care, 33, AS8–AS14.
MRC Vitamin Study Research Group (1991). Prevention of neural tube defects: Results of the Medical Research Council Vitamin Study. Lancet, 338, 131–137.
Paternotte, C. (2020). Contre la méthode ? [Against the method?]. Medium.
Raoult, D. (2020a). « Le médecin peut et doit réfléchir comme un médecin, et non pas comme un méthodologiste » [“The physician can and must think like a physician, not like a methodologist”]. Le Monde, March 25, 2020.
Raoult, D. (2020b). « L’éthique du traitement contre l’éthique de la recherche », le Pr Didier Raoult critique les « dérives » de la méthodologie [“The ethics of treatment versus the ethics of research”: Prof. Didier Raoult criticizes the “excesses” of methodology]. Le Quotidien du médecin, April 2, 2020.
Rosenbaum P. (2017), Observation and Experiment: An Introduction to Causal Inference. Harvard University Press.
Sibbald, B. & Martin, R. (1998). Understanding controlled trials: Why are randomised controlled trials important? British Medical Journal, 316, 201.
Stegenga, J. (2018). Medical Nihilism. Oxford University Press.
Yitschaky, O., Yitschaky, M. & Zadik, Y. (2011) Case report on trial: Do you, Doctor, swear to tell the truth, the whole truth and nothing but the truth? Journal of Medical Case Reports 5, 179.
[i] Online at https://www.youtube.com/watch?v=7TI3Re57X2Y
[ii] As Headlam (1891) reminds us, the drawing of lots was long used in ancient Greece to designate representatives; it was also used to constitute popular juries. The method led some to see in it the possibility of divine intervention.
[iii] In fact, some states opted for this approach. More than 200,000 second graders were vaccinated (while about 125,000 eligible children did not participate). All first and third graders were included in the experiment as a (non-vaccinated) control group. Among the vaccinated second graders, 17 cases of paralytic polio per 100,000 were observed (comparable to the 16 per 100,000 in the randomized trial), but only 46 cases per 100,000 were observed in this supposed control group, versus 57 per 100,000 in the randomized placebo group. The estimated risk ratio thus moves from 16/57 ≈ 28% in the randomized design to 17/46 ≈ 37% here, so the two designs give far from comparable results.
[iv] https://www.cdc.gov/vitalsigns/pdf/2016-02-vitalsigns.pdf