“Ethics in Quantitative Finance”

Just before going to the workshop on dependencies in finance and insurance, Tim Johnson (also known as @TCJUK on Twitter), researcher at Heriot-Watt University in Edinburgh and blogger at http://magic-maths-money.blogspot, sent me a copy of his manuscript entitled Ethics in Quantitative Finance: a pragmatic theory of markets. Opening the book, one thinks of Peter L. Bernstein and his masterpieces Capital Ideas (or the later Capital Ideas Evolving) as well as Against the Gods. But Tim’s book is quite different. It is not really about finance, but about financial valuation and actuarial science, and we can clearly see the deep interactions between financial mathematics and actuarial science. It is about uncertainty, prices and probabilities, and all those topics are embedded in a philosophical perspective.

The argument is presented that financial markets are radically uncertain environments, where correspondence theories of truth are meaningless since there are no matters of fact about an uncertain financial future. In the face of this uncertainty, markets are places where “the opinion which is fated to be ultimately agreed to by all who investigate” is sought and opinions are expressed through asset prices. This implies that markets are centres of communicative action and money is behaving as a language. Using Jürgen Habermas’ analysis, this implies that market prices ‒ statements of opinions ‒ must satisfy objective, subjective and social truth criteria. The argument presented is that reciprocity guarantees the objective truth, sincerity guarantees the subjective truth and charity guarantees the rightness of a price. This explains why reciprocity is embedded in financial mathematics.

The book takes a chronological perspective. We start with a chapter on the Genesis of money and its impact, followed by one on Finance and Ethics in Medieval Europe. I should also mention that the book is full of fascinating anecdotes (in those chapters, but also later on), for instance on gambling

Gambling is often regarded as an illicit activity today and is frequently outlawed, but in ancient societies, gambling was often associated with sacrificial practises (…) Gambling was important in archaic societies because it re-distributed resources in a non-subjective manner and so inhibited the formation of hierarchies.

as described in Altman (1985) in the context of contemporary Australian aboriginal groups. I have to admit that I downloaded a dozen articles mentioned in the book, to read them this summer. I agree with Tim when he mentions Fibonacci as a major reference in financial mathematics (and discounting, as discussed in a previous post on this blog, in French, unfortunately, published recently in Risques; see also Davide Castelvecchi’s recent post).

At the start of the thirteenth century Western Europe was going through a financial revolution and the creation and management of the poena, censii, prestiti, societas and Bills of Exchange in an environment of changing values and prices required complex negotiation and calculation. To cope with the situation, the merchants turned to Leonardo Bonacci, better known as Fibonacci, who would change European culture by changing western mathematics.

There is also a series of very interesting thoughts in those chapters on causality, truth and information, starting with Hume’s is/ought problem, as well as Kant’s perspective. All those ideas are very important when we keep in mind that prices in financial markets are related to beliefs and (subjective) information (we will get back to that issue later on).

We then have a nice chapter on the Philosophical Basis of Modernity, with Descartes, Spinoza or Locke. This is usually not an important topic in other books on the history of financial mathematics, but it is clearly an important one (just think of algorithmic trading and the ethical questions related to artificial intelligence; those points will be discussed in the last chapter). I was glad to see this chapter here. We finally reach the chapter on The financial revolution of the XVIIth Century, followed by The Enlightenment and l’homme éclair. The starting point is simple,

When trade was carried out over longer distances, more money was needed, the time-scales longer and the risks greater. In these circumstances the societas, partnerships of recognisable individuals, were inadequate and new types of commercial organisation, based on the idea of a corporation, emerged in the Italian city states to enable more people to pool their resources in larger scale commercial operations.

In those chapters, we discover how modern financial markets emerged a few centuries ago

When the value of an asset was uncertain it would be harder for a broker to find property-owners, who could agree a price. In these situations, liquidity was provided by ‘jobbers’ or ‘market-makers’ who ensured that when a property owner, usually dealing through a broker, wished to trade, the market had an opinion as to the price of the asset. Jobbers could form an opinion by trading in blanco, so they did not need the resources to buy the physical assets, and jobbers were associated with people with limited resources (…) The practice emerged of jobbers, today known as ‘dealers’ or ‘market-makers’, being required to simultaneously quote ‘bid’ prices, at which they would buy an asset, and ‘offer’ or ‘ask’ prices, at which they would sell, without knowing if the counter-party is seeking to buy or sell the asset ‒ though the quantity would affect the quoted price.

including some remarks on ‘public opinion’ that might be related to so-called prediction markets

The development of trust between the government and the market did not simply appear but was part of a process that saw power migrate from the aristocratic court to the ‘public opinion’ of the propertied middle classes. London’s coffee-houses became central in the formation of this public opinion (…) Like the Greek agora and the Roman forum, the London coffee-house acted as a focus of market practice and legal theory and the middle-classes, following Locke, came to believe that, like money, the law should be a universal, abstract entity.

The next chapter is Practical Mathematics: the development of probability theory, with some etymological perspective (that I love)

Huygens had to translate his Dutch text into Latin so that it satisfied the requirements of the universities. He struggled to find good Latin words for the terms he was using in Dutch, which originated in gambling. He had used the Dutch word kans (chance) for ‘expectation’, which would usually be translated into Latin as sors, and eventually he, or van Schooten, chose expectatio, giving the English term ‘expectation’. However, Huygens had also considered using the Latin word spes which was the classical term for the virtue ‘Hope’. In French, spes was chosen and today the French use espérance when referring to mathematical expectation, while the Dutch, faithful to Stevin’s precedent, use their own term, verwachting, meaning hope, promise, expectation.

Again, I love the actuarial perspective used in the book to motivate classical financial theories,

The Huygens brothers did not manage to value annuities. This problem would be solved in 1671 by another student of van Schooten’s Johan de Witt, in a report, Waerdye van Lyf-renten Naer Proportie van Los-Renten (On the Valuation of Annuities in Proportion to Redeemable Loans), for the Dutch government. De Witt used the age-old practice of employing the ‘law of one price’, or arbitrage, and argued that to calculate the expectations, and so value, of annuities he should use the value of equivalent debt contracts
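
In modern actuarial notation (a sketch of my own, not de Witt’s notation), the valuation amounts to discounting each annuity payment by both interest and survival,

\[ a_x \;=\; \sum_{t \geq 1} v^t \, {}_tp_x, \qquad v = \frac{1}{1+i}, \]

where {}_tp_x is the probability that a life aged x survives t more years, and where the interest rate i is read off the ‘equivalent debt contracts’, the redeemable loans of the report’s title.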

Usually, the ‘law of one price’ is mentioned from a financial mathematics perspective, and it is nice to have the actuarial one. There is also an interesting discussion (that is also in Ian Hacking’s Emergence of Probability) on connexions between beliefs, prices and so-called “probabilities”

The final section is the most significant but has proved problematic because Bernoulli considered situations where the sum of probabilities could be greater than one. To have probabilities summing to more than one is an issue if you think of chance as being a consequence of relative frequency, as discussed in the first parts of the Ars (…) Bernoulli considered situations where probabilities did not sum to one because he was working at a time when what was important was just treatment in financial contracts. It was not necessary that a probability summed to one, only unjust if it did not. Today, this can be understood in terms of gambling through a third party, where the probabilities, inferred from the cost of a game and the expected winnings, are never equal to one, and so indicate a lack of reciprocal justice: the book-maker or casino is taking turpe lucrum.

It is actually a very important point, which can be related (more generally) to prediction markets (see e.g. Wolfers & Zitzewitz (2004)),

The attitude that it is illogical for probabilities not to sum to one emerged out of a different conception of probability that was developed in the context of gaming by two Frenchmen, Pierre Rémond de Montmort and Abraham de Moivre, and would come to dominate representations of probability in the nineteenth century.
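
To make the point concrete, here is a toy example (my own numbers, not from the book): a bookmaker quoting decimal odds of 1.8 on each side of a two-outcome event is selling implied ‘probabilities’ of 1/1.8 on each outcome, so that

\[ \frac{1}{1.8} + \frac{1}{1.8} \;=\; \frac{10}{9} \;\approx\; 1.11 \;>\; 1. \]

A bettor backing both sides stakes 10/9 to receive 1 for certain, and the excess of roughly 11% is precisely the bookmaker’s margin, Bernoulli’s turpe lucrum.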

In the middle of the book, we reach the chapter on the ascendency of financial mathematics. This chapter starts with a very interesting parallel between the probability of observing no event over a given period of time (from the Poisson distribution, the law of small numbers), i.e. e^{-rT}, and the standard discounting factor used in continuous-time financial models, also e^{-rT}.
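
To spell out the coincidence (a standard computation, nothing specific to the book): if the number of events N_T over a horizon T follows a Poisson distribution with intensity r, then

\[ \mathbb{P}(N_T = 0) \;=\; e^{-rT} \, \frac{(rT)^0}{0!} \;=\; e^{-rT}, \]

which is also the time-0 price of a zero-coupon bond paying 1 at time T when the short rate is constant and equal to r. But then, we have the perspective of economists (rather than philosophers and mathematicians).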

Knight felt that economics had split into two strands. There was a mathematical science, which studied closed systems based on distorting assumptions, and a descriptive science, which could deduce nothing. Economics needed to take a middle path that was both realistic and informative (…) At the time, economic theory claimed that markets brought “the value [price] of economic goods to equality with their cost” but this equality was, in fact, only an “occasional accident”. Knight argued that the reason for the theory diverging from the practice arose out of the difference between a ‘known uncertainty’, which he termed a ‘risk’ and an ‘unknown uncertainty’, which he called ‘uncertainty’.

But there are still very interesting points and references on connexions between prices and probabilities

Early in his career, Keynes had written a Treatise on Probability where he had observed that in some cases cardinal probabilities of events could be deduced, in others, ordinal probabilities ‒ one event was more likely than another ‒ could be inferred, but there were a large class of problems that were not reducible to the concept of probability. Keynes’ argument was challenged by a young Cambridge mathematician, Frank Ramsey, who in Truth and Probability (1926) argued that probability relations between a premise and a conclusion could always exist. He defined ‘probability’ as simply ‘a degree of belief’ that could always be decided through a (betting) market. Keynes, a friend and mentor of Ramsey, appears to have been satisfied with the argument and came to believe that the only way to resolve ‘radical uncertainty’ was through discussion. Because Ramsey died young, at the age of 26 in 1930, his approach is more familiar through the Italian, Bruno de Finetti (published 1931) and the American statistician Leonard Savage (published 1954). Collectively these approaches are considered subjectivist or Bayesian, pointing to their relationship to the eighteenth-century Bayes’ Rule that could be used to update probabilities. De Finetti had enrolled at Milan Polytechnic in 1923 with a view to following in his father’s footsteps into railway engineering but transferred to mathematics and graduated from the University of Milan in 1927. He took a job at the Italian Central Statistical Institute but left to work for an insurance company in Trieste, Assicurazioni Generali, in 1931. He would work as an actuary for the next fifteen years, taking a couple of academic posts along the way. In 1947 he became a full-time academic, finishing his career at La Sapienza University in Rome. De Finetti asserted that “Probability does not exist” because it was merely an expression of an individual’s opinion. He employed the notation ‘Pr’ because it could mean ‘probability’, ‘price’ or ‘prevision’ and could not be tied down. De Finetti argued that in science there were two types of laws: deterministic “necessary and immutable laws; phenomena in nature are determined by their antecedents” and ‘truth-like’ or probable laws that express statistical regularities.

I was glad to see Bruno de Finetti here, because he is a major reference (and he deserves to be more popular in economics). Before reaching the end of the chapter (with a nice connexion between the “representative agent” and Quételet’s “homme moyen”), there is a brief introduction to portfolio selection, with Harry Markowitz and Arthur D. Roy, and I have to admit that I disagree when it says

The question of portfolio choice was one of balancing the risks of disaster against the opportunities for reward, a version of the Scholastic argument that without risk there could be no profit. The question Roy and Markowitz needed to answer was how should risk be measured. Both Markowitz and Roy chose to use the variance, a measure of the average distance of a sample point from the mean, as a proxy for risk. This is not obvious, since risks are colloquially associated with losses, while variance regards high gains as equally unattractive as high losses and reveals that they were thinking about profit being related to uncertainty.

I think that Roy’s Safety First and the Holding of Assets is much more general, and also probably more interesting (from a philosophical perspective, not a practical one). This safety-first criterion selects the portfolio that minimises the probability of the portfolio’s return falling below a minimum desired threshold. What I like about this perspective is that it relates risks to quantile levels, and to ruin probabilities used in actuarial mathematics. As mentioned in Safety First and the Holding of Assets, assuming that returns are normally distributed means that the risk of the portfolio is related to the variance; but in the philosophical framework, we are not interested in the “average distance from the mean”, we are interested in a quantile level.
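
A minimal sketch of the criterion (with made-up numbers, not taken from Roy’s paper): under Gaussian returns, minimising the shortfall probability P(R < d) amounts to maximising the ratio (mu - d)/sigma, so the safety-first ranking is a statement about a quantile, not about variance per se.

from scipy.stats import norm

d = 0.00  # minimum acceptable ("disaster") return; an assumption of the example

# hypothetical candidate portfolios: (expected return, standard deviation)
portfolios = {"A": (0.06, 0.12), "B": (0.08, 0.20), "C": (0.05, 0.08)}

for name, (mu, sigma) in portfolios.items():
    # under normality, P(R < d) = Phi((d - mu) / sigma) ...
    shortfall = norm.cdf((d - mu) / sigma)
    # ... so minimising it is the same as maximising (mu - d) / sigma
    print(f"{name}: P(R < d) = {shortfall:.3f}, (mu - d)/sigma = {(mu - d) / sigma:.2f}")

best = min(portfolios, key=lambda k: norm.cdf((d - portfolios[k][0]) / portfolios[k][1]))
print("safety-first choice:", best)

Anyway, that is just a comment on a brief sentence.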

Then we have a detailed chapter on The Fundamental Theorem of Asset Pricing. It starts with a difficult mathematical question, with philosophical implications, related to the concept of measures

This solved the problem of worrying about outcomes but left the issue of identifying the probability of events. The most obvious ‘measure’ of an event is to count its elements or the relative size of different events, but this means you must identify each outcome in an event, which is impossible. In associating a probability with an abstract measure, Kolmogorov had freed it from being tied to concepts rooted in counting elements of event sets. (…) In the classical approach, a probability of zero implies impossibility, whereas a probability of one implies certainty. In Kolmogorov’s conception, this is not so straightforward.

Indeed, it is a rather important and complex question, which cannot be properly grasped without a significant mathematical background: think, for instance, of a random variable uniformly distributed on [0,1], where every single value has probability zero, and yet one of them always occurs. I also wanted to add a brief comment on a sentence that uses a word I do not like…

They made this assessment using a mathematical equation, the Gaussian copula, which had been identified in the 1950s.

This idea spread with articles in Wired or the Financial Times in 2009, based on David Li’s work on Gaussian models for credit risk (see also the paper by Donald MacKenzie and Taylor Spears entitled ‘The Formula That Killed Wall Street’? The Gaussian Copula and the Material Cultures of Modelling). The label “Gaussian copula” remained, but actually the underlying story is much simpler (in 1998, I was developing a credit risk model for a French firm based on that very idea). As in Merton’s model for default, a default occurs when the non-observable value of the firm crosses a threshold given by its debt, and this latent value is assumed to be Gaussian, which is some sort of probit model. With several companies, it is rather natural to consider a joint multivariate normal distribution for the latent values, which yields a multivariate probit model, just like in portfolio management. I am not a big fan of the copula terminology here, since copulas became popular in finance only in the late 90s, while this model is rather old: it is just a multivariate probit model. Nothing nerdy here, actually…
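
Here is a minimal simulation sketch of that latent-Gaussian story (a one-factor version with made-up parameters, not anything calibrated from Li’s work): each firm defaults when its latent Gaussian variable falls below a threshold fixed by its marginal default probability, and the common factor generates the joint dependence.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_firms, n_sims = 100, 10_000
p = 0.05        # marginal default probability of each firm (assumed)
rho = 0.3       # correlation induced by the common factor (assumed)
threshold = norm.ppf(p)  # default if the latent variable falls below this

# one-factor structure: X_i = sqrt(rho) * Z + sqrt(1 - rho) * eps_i
Z = rng.standard_normal((n_sims, 1))          # systematic factor
eps = rng.standard_normal((n_sims, n_firms))  # idiosyncratic noise
X = np.sqrt(rho) * Z + np.sqrt(1 - rho) * eps

defaults = (X < threshold).sum(axis=1)        # number of defaults per scenario
print("average number of defaults:", defaults.mean())       # close to n_firms * p
print("P(more than 20 defaults):", (defaults > 20).mean())  # the fat joint tail

There is then an inspiring paragraph on economics, mathematics, and modeling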

For economists, mathematics was “part of the plumbing” that supported economic theory, a view similar to the one they had of money as a neutral tool. Mathematicians are concerned with understanding the relationships between objects and mathematics can reveal connections or differences. Trygve Haavelmo had recognised the problem when he had been awarded the Nobel Prize for Economics in 1989. He reflected that his aspirations for introducing mathematics to economics had not been met. He identified the primary issue as being that the economic models that ‘econometricians’ had been trying to apply to the data were probably wrong. More fundamentally, economics never generated new mathematics ‒ ways of seeing relationships ‒ in the way that the physical sciences had stimulated developments in mathematics. Economists had simply adapted concepts from other fields to their own devices.

Then we reach the chapter (with an odd name) entitled Two Women and a Duck: a Pragmatic Theory of Markets, which starts with an interesting point on models

When an idea is taken to be true by a culture but is in fact an illusion it has become an ideology. An argument that goes back, employed by Marx amongst others, is that ideologies emerge out of an intent to deceive, which implies that there is a Laplacian will capable of persuading a community to accept an ideology. A less intentional explanation is that ideologies are simply convenient models

And then, we get back to our discussion on connexions between probabilities and prices. I reproduce here the complete page, which gives a good overview of the style and the perspective of Tim’s book

In general, the martingale measure specifies where the current price of an asset lies in the distribution of future prices. In being based on observed prices, the martingale measure represents an objective pricing measure that should be used in preference to any subjective measures. This idea that prices give probabilities was in Huygens’ Van Rekeningh of 1655 and was the approach de Witt had taken in pricing annuities in 1671. Probability measures based on historic prices yield subjective measures in that they relate to the past, not the future. Jacob Bernoulli, in Ars Conjectandi, considered situations where probabilities did not sum to 1. These were illogical in a frequentist approach to probability but meaningful in representing unfair arbitrages. The objectivity of probability does not arise from the materiality of counting possible outcomes but in the ethical concept of fairness. In markets, as Aristotle had observed, mathematics establishes the equality necessary for justice in exchange, contributing to social cohesion. BSM (Black-Scholes-Merton) guarantees the coherence of its prices on the basis that a price must preclude arbitrage opportunities. Specifically, if a market-maker offered a price that presented an arbitrage, other traders would exploit the market-maker’s obligation to be sincere in offering both bid and ask prices and bankrupt the market-maker. This practical observation had been shown in Frank Ramsey’s argument that probabilities exist for radically uncertain events. Ramsey noted that a standard way of measuring ‘degrees of belief’, or a probability, is through betting odds and went on to formulate some laws of probability, finishing with the observation that
“These are the laws of probability, … If anyone’s mental condition violated these laws, his choice would depend on the precise form in which the options were offered him, which would be absurd. He could have a book made against him by a cunning better and would then stand to lose in any event.”
This is the ‘Dutch Book’ argument and is an alternative to the ‘Golden Rule’ ‒ “Do to others as you would have them do to you” ‒ and re-emerges as Kant’s categorical imperative. It is founded on the moral concepts of fairness and reciprocity, not on material acts of dynamic hedging. Ramsey went on to argue that having any definite degree of belief implies a certain measure of consistency, namely willingness to bet on a given proposition at the same odds for any stake, the stakes being measured in terms of ultimate values. Having degrees of belief obeying the laws of probability implies a further measure of consistency, namely such a consistency between the odds acceptable on different propositions as shall prevent a book being made against you.
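
A tiny numerical illustration of the Dutch book (my own toy numbers, not Ramsey’s): suppose an agent’s degrees of belief in an event A and in its complement are both 0.6, summing to 1.2. A bookmaker who sells the agent a bet paying 1 if A occurs and another paying 1 if it does not, each priced at the agent’s stated belief, collects 1.2 and pays out exactly 1 in every state of the world.

# a toy Dutch-book computation (made-up numbers): degrees of belief that
# sum to more than one can be exploited for a riskless profit
belief_A, belief_not_A = 0.6, 0.6          # incoherent: 0.6 + 0.6 = 1.2 > 1
stake_collected = belief_A + belief_not_A  # what the agent pays for both bets

for A_occurs in (True, False):
    payout = 1.0  # exactly one of the two bets pays 1, whatever happens
    print(f"A occurs: {A_occurs}, bookmaker profit: {stake_collected - payout:.1f}")
# a sure profit of 0.2 in both states: a 'book' has been made against the agent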

Then we start a fascinating discussion on truth, and what could be this “true price” given by a mathematical model

The meaning of ‘true’ in relation to prices in markets is unclear because of the uncertainty in finance. The word ‘true’ derives from the Germanic triuwe meaning faithful, reliable or secure and, at its most basic, the truth of a statement rests on whether it corresponds to the facts: it is either true or not that the balance of births and deaths in an English parish in the year 1780 was x. In this conception, a belief is independent of the fact and is true only if it corresponds to the fact. These correspondence theories depend on a statement being verifiable and are central to logical positivism, but are impossible to employ in complex situations or those involving an uncertain future. To deal with this problem of correspondence theories being irrelevant to most human experience, coherence theories emerged out of idealism. For idealists, what was important was that beliefs formed a coherent whole that reflected the unity of knowledge. The problem with this approach is that a perfectly coherent set of beliefs might not correspond to the facts.

To be more specific

In response to the inadequacy of these two approaches to truth, the American philosopher Charles Sanders Peirce proposed a novel definition of truth in the late nineteenth century as “The opinion which is fated to be ultimately agreed to by all who investigate is what we mean by the truth”. This conception of truth rests on the idea of a ‘community’ that stands for the ‘all’ that comes to an agreement. A consequence is that knowledge need not be based on rigorous deductions; Peirce said it should resemble a cable of thin interweaving strands rather than a chain of strong links that is vulnerable to a single link failing. The three conceptions of truth ‒ correspondence, coherence and pragmatic ‒ are relevant to finance where they are characterised by three different types of agents.

We start seeing the word “pragmatic”, which actually appears in the subtitle of the book.

The word ‘pragmatism’ ‒ deriving from the Greek pragmatikos meaning ‘business like’ or effective ‒ emphasised experience and practice over the idealism and theory usually associated with philosophy.

and we finally have the explanation of the chapter’s title

Financial markets, made by market-makers making assertions as to the price of an asset that are challenged by traders, are primarily concerned with a community converging on agreement. The idea of markets as places where an understanding of prices is formed, rather than just a place where goods are exchanged, is captured in a Vietnamese proverb that “two women and a duck make a market”. There is nothing in the proverb that suggests either of the women owns the duck; what is implied is that the women will converse and during that discussion they will come to some agreement as to the value of the duck. This highlights that the value of the duck cannot be established based on either an objective valuation or the subjective belief of a single person but in, at least, a three-way interaction between a speaker, an interpreter and the object under discussion. The truth of an individual’s important beliefs can only be confirmed, or refuted, through discussion with others.

And finally, in a conclusion – Some Implications of a Pragmatic Approach to Finance – we have some thoughts about pragmatism, its connexions with models, and algorithmic implementation

While an algorithm can be objective ‒ and deliver reciprocity ‒ it is not so obvious that an algorithm can be sincere in the way that people understand sincerity. It is even more difficult to think of an algorithm as being capable of charity, the most intangible market norm that is also the most human norm. Consequently, the individual borrower is alienated from the lender and the banker’s role as a mentor of the entrepreneur disappears. The bank’s task of optimising the ‘harvesting’ of loans is a departure from the Quaker principle that asked a borrower how they intended to repay a loan in the time agreed.

which brings us back to the initial statements of the book, with a modern perspective.

Faced by radical uncertainty, those involved with finance cannot rely only on models rooted in physica to ensure equality between what is given and received. Rather, they must also conform to norms that ensure that their judgements can be trusted; they must ensure sincerity and charity as well as reciprocity. The fundamental concern of algorithmic trading is that, while it could ensure reciprocity and sincerity, it would be difficult to deliver charity. If prices, financial judgements, are determined by an algorithm, it would not stand for the ascendency of machines to human consciousness, but the descent of man to machine as charity disappears. This decline is avoided through the human sciences. These acknowledge the limitations of human understanding and the need to reinforce norms of behaviour through the repetition of stories, which enable individuals to imagine alternative futures and offer lessons of character to guide action. Since mathematics is neither part of physica nor practica the problem does not lie in the use of mathematics but the motivation behind that use. If the likelihood of financial crises is to be reduced, mathematical approaches to finance must be rooted in the human, not the physical, sciences.

This book is really inspiring, not only in the context of financial valuation, but more generally on the ethics of modeling in economics, on extracting information about ‘the truth’, and on giving a value, a price, to an uncertain random quantity. It is clearly worth reading. Thanks, Tim.

