(A brief) history of randomness, and simulation techniques

Hearing that “there is a 10% chance of rain today” or that “the medical test has a positive predictive value of 75%” reminds us that probabilities are now everywhere. A probability is a quantity that is difficult to grasp, yet essential when trying to theorize and measure chance, or randomness. And while the mathematical theory ultimately arrived very late, as Hacking (2006) points out, this did not prevent insurance from developing quite early, nor the first (actuarial) mortality tables from existing even before the “probability of death” or “life expectancy” had a mathematical basis. In the same way, many techniques were invented to “generate randomness”, before the explosion of the so-called Monte Carlo methods, in parallel with the development of computing (and the fact that a machine could generate chance).

Why didn’t the Greeks invent the theory of probability?

The question may come as a surprise, but given the Greeks' knowledge of geometry, between the time of Pythagoras (550 BC) and that of Euclid (300 BC), one may wonder why no theory of chance was ever proposed. It should perhaps be remembered that in the Greek mathematical tradition, the hypotheses alone must suffice to force the deduction. Arithmetic, for example, is “knowledge without action”: as Plato put it in the Statesman, quoted by Goldschmidt (2003), “are not arithmetic and some other arts of the same family stripped of any attachment to action, limited as they are to providing knowledge? Those concerning carpentry or any other manual construction, on the contrary, have their science bound, so to speak, from the outset to action”. One can hardly imagine, then, a mathematical theory of chance built on dice rolls.

But beyond this epistemological constraint, rereading the great Greek texts, fatalism is very present, both in mythology and in philosophy. The oracle of Delphi, and more generally divination, affirms the existence of unbreakable causal chains, through which the present predetermines the future (what we would now call hard determinism), as the myth of Oedipus shows. Even statistics could be frightening. Around 150 AD, Ptolemy, the great astronomer of Alexandria, fought against measurement errors without any theory of error, forcing him to compromise several times to reconcile his theories with discordant observations, as Gingerich (1997) recalls. Going further, it could almost be said that Greek mathematicians would have considered probabilities a sophism, σόφισμα, an attempt to produce knowledge out of ignorance. The frequentist approach may seem objective, but Bayesian probabilities are fundamentally subjective, which would seem to support this ancient view. Plato thought that mathematical objects and ideas were real, while the physical world was made of shadows. Time itself was excluded from this reality, so one cannot imagine chance becoming a mathematical object worthy of interest. The atomists could have imagined a theory of probability, but their research program did not pursue it. That theory would have to wait many centuries, though this did not prevent people from seeking to produce chance, or even to infer laws from it (the term “law” recalls the empirical nature of probabilities, namely a stability observable through experiments, as with the “law of large numbers”, comparable to the “law of universal gravitation” or the “law of energy conservation”, unlike the theorems of geometry or arithmetic).

The historical importance of dice

Dice are found throughout the history of humanity. One of their first uses was as a divination tool in religious ceremonies, even though the earliest dice were often bones (or ossicles). Their natural asymmetry posed problems of credibility: even a pious believer will wonder about the true will of the gods if the same side keeps coming up. The objectivity of symmetrical dice quickly became apparent. This symmetry also allowed the development of games of chance, making the games fair for the various players. Several archaeological sites in Europe and the Near East have uncovered many examples of astragali, the small bone above the heel, often considered a primitive form of dice. The bones of hoofed animals (cattle or goats) were preferred because they were the most symmetrical (more developed feet have more irregular shapes). Some of them also had worked faces, giving an almost cubic shape, and many bore signs on their faces. A six-sided, hand-carved, cubic die with slightly rounded edges, dating from 2700 BC, was discovered in Iran. Cubes have the advantage of rolling in a rather unpredictable way (even if many dice have irregularities in density).

In antiquity, betting on dice was particularly widespread, as the first chapter of David (1962) recalls. And the lure of winning led some players to look more closely at how to maximize their gains. Symmetry and regularity were the best guarantees against cheaters, ensuring a form of equity and regularity, with the possibility of obtaining “laws”. But in Roman times, most dice were very imperfect, unbalanced cubes. It was at the end of the Middle Ages (and especially during the Renaissance) that the shape of dice stabilized, and the first developments of probability theory were an attempt to provide conditions of fairness for gambling, and to a large extent for dice games. We know in particular the work of Huygens, Fermat and Galileo on the throwing of several dice. The latter showed that when rolling three dice, the sum of the faces is more likely to equal 10 than 9. Dice are still widely used to introduce probability calculations in schools.

Dungeons and Dragons, dice and insurance

While six-sided dice are the best known, it is now possible to find other shapes of dice, such as the octahedron – the so-called \text{D}8 – or the ten-sided \text{D}10 (a pentagonal trapezohedron, since no regular decahedron exists).

For symmetry reasons, regular polyhedra are preferred; they have been known since Greek times, between Pythagoras, who described the tetrahedron (pyramid, 4 faces), the hexahedron (cube, 6 faces) and the dodecahedron (12 faces), and Theaetetus of Athens, who described the other two: the octahedron (8 faces) and the icosahedron (20 faces). But rather than rolling a single die, players have long used the idea that several dice can also be rolled simultaneously. In full generality, one can roll n dice with k faces each. The random variable corresponding to the sum of the faces is then denoted n\text{D}k. Its expectation is \mathbb{E}[n\text{D}k]=n(k+1)/2, and its variance is \text{Var}[n\text{D}k]=n(k^2-1)/12. Figure 1 shows the distribution of n\text{D}k in the particular case where nk=100 (corresponding to the maximum that can be obtained), for different types of dice. To obtain a result that could be read as a percentage, some role-playing games considered the product of two \text{D}10s. But the resulting distribution is unusual and very asymmetrical (with, for example, an 80% chance of getting less than 50), as the sketch below confirms.
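As a sanity check, here is a minimal Python sketch (the function name nDk and the number of trials are my own choices, not from the original) that estimates the moments of n\text{D}k for two of the nk=100 cases, and the skewness of the \text{D}10 product:

```python
import random

def nDk(n, k, trials=100_000):
    """Simulate 'trials' draws of nDk: the sum of n fair k-sided dice."""
    return [sum(random.randint(1, k) for _ in range(n)) for _ in range(trials)]

for n, k in [(10, 10), (5, 20)]:  # two of the nk = 100 cases of Figure 1
    draws = nDk(n, k)
    mean = sum(draws) / len(draws)
    var = sum((x - mean) ** 2 for x in draws) / len(draws)
    print(f"{n}D{k}: mean {mean:.2f} (theory {n * (k + 1) / 2}), "
          f"variance {var:.2f} (theory {n * (k**2 - 1) / 12:.2f})")

# the product of two D10s, in contrast, is far from uniform on 1..100:
prod = [random.randint(1, 10) * random.randint(1, 10) for _ in range(100_000)]
print("P(D10 x D10 < 50) =", sum(x < 50 for x in prod) / len(prod))  # ~0.79
```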

Figure 1: distribution of n\text{D}k, the sum of n dice with k faces (author's calculations).

In the 1970s, Gary Gygax worked for an insurance company, while being a great lover of war games (with miniature plastic figurines). These medieval battles were relatively realistic, until Gygax introduced elves and wizards. He found his work as an insurer very close to the fantasy game he had just invented: Dungeons and Dragons. “If you squint at a character sheet, you’ll see that it’s an actuarial table,” noted Buchanan (2008). “Insurers consider events in a person’s life, and build statistics to know when they will die, and from what. Gary said: let’s do this for a dragon; we’re going to roll a die for probabilities.” In the game, for any given situation, there is an α% chance that an event will happen: for example, if I am a barbarian fighting an evil wizard with a sword, there is a 30% chance that I will kill him, and a 70% chance that I will miss him. And if my sword has magical powers, my chance of missing him is halved. Gygax proposed using dice to simulate these random events, and even using another die to model the severity of the damage caused, for example by taking into account the size of the sword. Actuaries will recognize traditional property and casualty insurance models, combining a frequency component (does a claim occur?) with a severity component (how large is it?); a small simulation in this spirit is sketched below.
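Here is a hedged sketch of that frequency/severity logic in Python: the 30% hit probability comes from the example above, while the \text{D}8 damage die is an illustrative assumption of mine, not a rule from the game.

```python
import random

def expected_damage(p_hit=0.30, damage_die=8, trials=100_000):
    """Frequency/severity sketch: the attack lands with probability p_hit
    (frequency); a hit then deals 1..damage_die points (severity)."""
    total = 0
    for _ in range(trials):
        if random.random() < p_hit:                  # does the event occur?
            total += random.randint(1, damage_die)   # how severe is it?
    return total / trials

# "pure premium" analogue: frequency x mean severity = 0.30 * 4.5 = 1.35
print(expected_damage())
```

The expected damage per attack plays exactly the role of the pure premium in a property and casualty model: the probability of a claim multiplied by its average cost.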

As several booklets of the game attest, in particular Gygax (1989), from the beginning Gary Gygax wanted to reproduce the Gaussian distribution, the famous “bell curve”, if possible with as few dice as possible. Using 20-sided dice (then very uncommon, and difficult to obtain) was the most natural solution, and it probably revolutionized role-playing. As shown in Figure 1, 5 rolls of such a die give an approximately normal distribution (with a very satisfactory degree of accuracy, by the central limit theorem), while keeping the calculations easy to do; the comparison below makes this concrete. But this is not the method used by actuaries for scenario generators.
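To see how good the approximation is, one can compare the empirical distribution of 5\text{D}20 with a Gaussian having the same mean and variance, using the moment formulas given earlier (a quick sketch; the evaluation points are arbitrary choices of mine):

```python
import math
import random
from collections import Counter

n, k, trials = 5, 20, 200_000
mu = n * (k + 1) / 2                      # 52.5
sigma = math.sqrt(n * (k**2 - 1) / 12)    # about 12.9

counts = Counter(sum(random.randint(1, k) for _ in range(n))
                 for _ in range(trials))

# empirical frequency of each sum vs. the matching Gaussian density
for s in (30, 40, 52, 65, 75):
    empirical = counts[s] / trials
    normal = (math.exp(-(s - mu) ** 2 / (2 * sigma**2))
              / (sigma * math.sqrt(2 * math.pi)))
    print(f"P(5D20 = {s}): empirical {empirical:.4f}, normal {normal:.4f}")
```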

Generating randomness

Generating randomness has become a very popular technique, used today by researchers to deal with a range of problems, from behaviour at the molecular level to sampling a population, or solving certain systems of equations. Actuaries use it daily to quantify uncertainty. Until a century ago, people who needed random numbers could toss coins, draw balls from well-shaken urns, or roll dice, as we saw earlier. But as Mlodinow (2008) points out, when gambling was banned (during Prohibition, in the 1920s), other techniques had to be found to generate random numbers. One method was to use figures published by the American government, provided they involved large amounts, such as the U.S. Treasury balance, keeping the last 3 digits when a number between 1 and 999 was wanted: from a published debt of $8,995,800,515,946, one would use 946. While the first digits were (more or less) predictable, the intuition was that it was impossible to predict the last ones (without going into a theory of the generation of randomness, the fundamental underlying idea is that it must be impossible to predict a value; we find the very similar notion of a “martingale”, very popular in financial mathematics, and also in gambling). In 1927, the British statistician Leonard Tippett used a similar idea to publish a book with 41,600 “random numbers”, obtained by taking the middle digits of the areas of English parishes (in a format similar to that of Figure 2).
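The “last digits” trick is easy to mimic (a sketch only: the amounts below are made-up placeholders, with just the first one echoing the figure quoted above, not actual Treasury data):

```python
# keep only the last three digits of large published figures, on the
# assumption that the leading digits are predictable but the trailing
# ones are not (the amounts are invented, for illustration only)
published_amounts = [8_995_800_515_946, 7_244_103_882_317, 9_061_557_240_553]

random_digits = [amount % 1000 for amount in published_amounts]
print(random_digits)  # [946, 317, 553]
```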

Figure 2: 60 random numbers, uniformly drawn between 1 and 99999 (source: author).

The same year (1927), a Soviet statistician, Evgueni (Eugene) Slutsky, showed a result that can be seen as the converse of Tippett's: where Tippett used empirical series to generate randomness, Slutsky showed that random series could be used to generate all kinds of economic series. At the beginning of the 20th century, many researchers believed that unpredictable events such as wars, crop failures or technological innovations should play a role in economic cycles. But no one really understood how crucial random processes (what we nowadays call “stochastic” processes) are to understanding how the economy works, until Eugene Slutsky published his work on “cyclical phenomena” (to use the title of his article) and showed that very simple manipulations of random series (in this case, numbers obtained from government lottery draws) could generate undulating patterns that could not be distinguished from economic cycles. Or, as Slutsky put it, any economic series could be seen as a stochastic process obtained as “the sum of random causes”. As Chetverikov (1959) notes, Slutsky then enthusiastically wrote to his wife, who had stayed in Kiev, that he was “lucky to reach a fairly considerable conclusion, to discover the secret of… these undulating movements observed in social phenomena”. These techniques were widely used in physics a few years later, under the name “Monte Carlo methods”, but it was not until the 1970s, with the massive use of moving averages, that the importance of Slutsky's work in economics was recognized; the manipulation itself is sketched below.
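The manipulation Slutsky applied is straightforward to reproduce: take pure noise and smooth it with a moving average, and slow, cycle-like swings appear (a minimal sketch of what is now called the Slutsky effect; the window length of 10 and the zero-crossing diagnostic are arbitrary choices of mine):

```python
import random

random.seed(1)
noise = [random.gauss(0, 1) for _ in range(500)]   # the "random causes"

# moving average: each point is the mean of the previous 10 shocks
window = 10
smoothed = [sum(noise[t - window:t]) / window
            for t in range(window, len(noise))]

def sign_changes(series):
    """Crude cyclicality indicator: how often the series crosses zero."""
    return sum(series[i] * series[i + 1] < 0 for i in range(len(series) - 1))

print("zero crossings, raw noise:     ", sign_changes(noise))     # about 250
print("zero crossings, moving average:", sign_changes(smoothed))  # far fewer
```

The averaged series crosses zero far less often because neighbouring points now share most of their shocks, which is exactly why it looks like a slow economic cycle rather than noise.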

Modelling and random generation have enabled major scientific advances in recent years, and will probably continue to do so in the coming years, for instance with the quantum computer (if one accepts seeing the indeterminacy principle as a problem of probability). Yet probability is a concept that is difficult to manipulate, and a source of many paradoxes, as Charpentier (2014) pointed out. And if probabilities are still presented in their frequentist version (based on the law of large numbers), assuming an infinite repetition of experiments (such as dice rolls), how should we understand a “predictive probability” used by underwriting actuaries, especially when insurance claims to be “individualized”?


Buchanan, Leigh (2008). Legacy: Gary Gygax, 1938–2008. Inc. Magazine, June 2008.

Charpentier, Arthur (2014). Interprétation, intuition et probabilités. Risques, 99.

Chetverikov, N.S. (1959). The Life and Scientific Work of Slutsky.

David, Florence Nightingale (1962). Games, Gods, and Gambling: A History of Probability and Statistical Ideas. Charles Griffin & Co.

Gingerich, Owen (1997). The Eye of Heaven: Ptolemy, Copernicus, Kepler. American Institute of Physics.

Goldschmidt, Victor (2003). Le paradigme dans la dialectique platonicienne. Vrin.

Gygax, Gary (1989). Advanced Dungeons & Dragons, Dungeon Masters Guide. TSR.

Hacking, Ian (2006). The Emergence of Probability: A Philosophical Study of Early Ideas about Probability, Induction and Statistical Inference. Cambridge University Press.

Mlodinow, Leonard (2008) The Drunkard’s Walk: How Randomness Rules our Lives. Vintage Books.

Slutsky, Eugene (1927). The Summation of Random Causes as the Source of Cyclic Processes. English translation (1937) in Econometrica, 5:2, 105–146.

Tippett, Leonard (1927). Random Sampling Numbers. Cambridge University Press.

