
From betting to “prediction market”

This is the second part of a series on sports betting

Sports betting has long fascinated economists and statisticians. Griffith (1949) showed early on that horse-race bettors put too much money on horses that have little chance of winning, and too little on those that have the best chance of winning. This tendency to under-bet the most likely outcome has been observed in all forms of sports betting, where the “most likely outcome” is computed from recent statistics. And it can be explained in a fundamental way by the mechanics of pari-mutuel betting: the bettor pits his beliefs against those of the crowd, since the various bets determine the odds.

Predictions, before surveys

Today, in the months leading up to each election, we find ourselves drowning in polls, conducted every day (and commented on several times a day, as if the estimation noise deserved exegesis). As Frédéric Dabi (Deputy Director General of Ifop) pointed out in a debate organised by Risques magazine in 2017, “surveys are an indication of the electoral balance of power, not a prediction”, but they are nevertheless often used as if they were predictions.

But if we go back in time, Rhode & Strumpf (2008) remind us that other techniques were used before the idea of opinion polls became established, in particular betting. In 1549, Matteo Dandolo (ambassador of Venice) noted that “it is therefore more than clear that the traders are very well informed of the state of the election, and that the employees of the cardinals in conclave (i conclavisti) participate with them in betting, which therefore leads to several tens of thousands of crowns changing hands”, as Baumgartner (2003) tells us. Closer to home, betting markets on elections were popular in the United States until the Second World War. Rhode & Strumpf (2008) suggest several reasons for the loss of interest in the second half of the 20th century: improvements in sampling techniques… and the legalization of horse betting. But online betting sites have revived interest in betting, whatever its object. The sites we mentioned in a previous article are often not limited to sports betting, but also allow betting on the magnitude of an earthquake, the winner of an Oscar, or even the observation of the Higgs boson, as proposed by intrade.com, which was liquidated in 2015. As onlinebettingsites.com shows, one could bet on the French elections in 2017, or on the Brexit referendum (even if, for the latter, the prediction markets failed to reflect the beliefs of the crowd, as an article in The Economist recalled).

The mathematics of pari-mutuel betting

The pari-mutuel theory is not unlike the mutualisation of risks, an important foundation of the insurance mechanism, dear to actuaries. Working on horse betting markets, Edmund Eisenberg and David Gale obtained, in a short three-page article, Consensus of Subjective Probabilities, relatively general results, as long as the bet is static.

Suppose that I players can bet on J horses. Each player has a total wealth b_i, normalized so that b_i denotes the share of the total amount wagered (and therefore b_1 + … + b_I = 1). Player i can then bet the amount b_{i,j} on horse j (with b_{i,1} + … + b_{i,J} = b_i). When the bets are closed, let p_j denote the amount bet on horse j, in other words b_{1,j} + … + b_{I,j} = p_j. The budget constraint imposes that the sum of these amounts equals 1, which gives the p_j a probabilistic interpretation. We will come back to the use of these “prices” later on. We can also denote by q_j the payoff-odds, defined as q_j = p_j^{-1} - 1, so that p_j = (1 + q_j)^{-1}. If we assume that a fraction 1-a is kept by the bookmaker, then p_j = a(1 + q_j)^{-1} and q_j = (a - p_j)/p_j. At equilibrium, the expected return on each horse must equal the expected net return, where the expectation is computed under the probability p, so as to reflect the beliefs of all bettors, that is

p_j q_j + (1 - p_j)(-1) = a - 1
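As a quick numerical illustration (a minimal Python sketch, with hypothetical payoff-odds, not data from any actual race), the relation p_j = a(1+q_j)^{-1} lets us recover the implied probabilities, and the bookmaker’s take 1-a, from quoted odds:

import numpy as np

# hypothetical payoff-odds q_j quoted on three horses (net gain per unit bet)
q = np.array([0.8, 2.0, 5.0])

# raw inverse odds 1/(1+q_j); their sum exceeds 1 when the bookmaker keeps a margin
raw = 1 / (1 + q)
a = 1 / raw.sum()          # share actually redistributed to the bettors
p = a * raw                # implied probabilities, p_j = a/(1+q_j), summing to 1

print(np.round(p, 3), round(1 - a, 4))        # probabilities and bookmaker take 1-a
print(np.round(p * q - (1 - p), 4))           # all equal to a-1, the equilibrium condition above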

The key result of the Eisenberg & Gale model is to show that an equilibrium exists in this market. More precisely, the fraction bet on each horse must equal the market probability of that horse winning. To reach this equilibrium, it is often assumed that the equilibrium odds are found by an auctioneer (a role played here by the bookmaker). As Blough (2008) noted, the assumption that no wager is placed until the odds are balanced is indeed what happens in horse racing.

If we assume that each bettor is risk-neutral (and seeks to maximize his expected gain) and that his beliefs are described by a probability vector p_i = (p_{i1}, …, p_{iJ}) – in the sense that player i thinks that horse j will win with probability p_{ij} – then at equilibrium, if b_{i,j} > 0,

p_{ij} = p_j \max_s\{p_{is}/p_s\}

where \text{argmax}_s\{p_{is}/p_s\} = \text{argmax}_s\{p_{is}(q_s+1)\} is the horse on which bettor i should bet everything if he bets on a single horse. Blough (2008) elaborates at length on the description of this equilibrium, and extends it to the case where agents potentially have risk aversion (the same for all) and potentially different beliefs. This equilibrium is then described as a consensus of beliefs.

In an article entitled Interpreting the Predictions of Prediction Markets, Charles Manski proposed using this theory to interpret these prices, in conjunction with more traditional approaches in economics, such as Arrow-Debreu prices.

To illustrate this consensus, let us consider a world cup final that must end either with the victory of A or with the victory of B. Let us imagine a contract offering $1 if A wins, and let this contract be offered at price p_A. If no arbitrage is allowed, the law of one price holds, and we deduce that p_B = 1 - p_A. Imagine a player who thinks that the probability that A wins is greater than p_A, i.e., with the previous notation, p_{iA} > p_A. In this case, the player has an interest in betting all his money on the victory of A, that is, in buying this contract. The aggregate demand for this asset will then be

[b_1 P[p_{1A} > p_A] + … + b_I P[p_{IA} > p_A]] / p_A

and there will be an equilibrium if the aggregate demands for the two assets satisfy

[b_1 P[p_{1A} > p_A] + … + b_I P[p_{IA} > p_A]] / p_A = [b_1 P[p_{1A} < p_A] + … + b_I P[p_{IA} < p_A]] / p_B

so that

p_A = b_1 P[p_{1A} > p_A] + … + b_i P[p_{iA} > p_A] + … + b_I P[p_{IA} > p_A]

which allows the price to be written as an average of the players’ beliefs.
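To see the mechanism at work, here is a small Python sketch (with made-up beliefs p_{iA} and wealth shares b_i) that locates the consensus price as the fixed point p_A = b_1 P[p_{1A} > p_A] + … + b_I P[p_{IA} > p_A], found by bisection:

import numpy as np

# hypothetical beliefs p_iA of I bettors and their (normalized) wealth shares b_i
belief = np.array([0.30, 0.45, 0.55, 0.60, 0.80])
wealth = np.array([0.10, 0.25, 0.30, 0.20, 0.15])   # sums to 1

def demand_share(p):
    # total wealth of the bettors who think A is underpriced at price p
    return wealth[belief > p].sum()

# fixed point p_A = sum_i b_i * 1{p_iA > p_A}, found by bisection
lo, hi = 0.0, 1.0
for _ in range(50):
    mid = (lo + hi) / 2
    if demand_share(mid) > mid:
        lo = mid
    else:
        hi = mid

print(round((lo + hi) / 2, 3))   # consensus price of the contract paying 1 if A wins

With these (made-up) numbers, the price settles around 0.55, the point where the wealth of the “optimists” exactly matches the price.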

It should be noted here that the equilibrium is static, which allows the bookmaker simply to post the odds. Recently, Agrawal et al. (2014) proposed an algorithm to balance this market in continuous time. It may also be noted that this notion of equilibrium appears in many algorithms, for example in the so-called Fisher market.

The predictive power of prices

But this idea of seeing in prices an aggregation of players’ beliefs is not new! In 1655, in Van Rekeningh in Spelen van Geluck (published in Latin under the title De Ratiociniis in Ludo Aleae), Christiaan Huygens proposed extracting information on beliefs from prices. In 1671, Johan de Witt noted that, since the price of a contract paying an annuity until death could be seen as a weighted average of annuities (with fixed maturities), one could, by observing the prices of the various insurance contracts, extract probabilities that can be interpreted as survival probabilities.

These probabilities are “subjective”, as Bruno de Finetti and Frank Ramsey called them. The latter did not view probabilities from a frequentist angle, but saw them as a measure of the degree of belief, which could be measured through bets, in Truth and Probability (1926). This is, in the end, also what the theory presented by Kenneth Arrow in 1953, and further developed by Gérard Debreu in 1959, introducing the “Arrow-Debreu prices”, tells us.

Many websites use the odds to infer players’ beliefs, which are then (wrongly) presented as the probability that a team will win a competition. We can also mention the work carried out last summer by doctoral students at the University of Rennes, who compared the odds on online betting sites with the forecasts obtained by several algorithms (ranging from a naive Bayesian classifier to boosting, SVM or neural networks). A special report in The Economist, published in 2007, entitled The Future of Futurology, noted that “the most heeded futurists these days are not individuals, but prediction markets, where the informed guesswork of many is consolidated into hard probability”. This idea has now largely returned to the forefront, as anticipated in the article by Chen & Pennock (2010) published in AI Magazine.

Agrawal, Shipra, Delage, Erick, Peters, Mark, Wang, Zizhuo & Ye, Yinyu (2014). A Unified Framework for Dynamic Prediction Market Design. Operations Research.

Baron, Ken & Lange, Jeffrey (2006). Parimutuel Applications In Finance: New Markets for New Risks. Springer.

Baumgartner, Frederic (2003) Behind locked doors: a history of papal elections. Palgrave.

Blough, Stephen R. (2008) Differences of opinion at the racetrack. In Efficiency of Racetrack Betting Markets, 323-341, World Scientific.

Chen, Yiling & Pennock, David (2010). Designing Markets for Prediction. AI Magazine.

Decker, Wolfgang & Thuillier, Jean-Paul (2004). Le sport dans l’antiquité. Picard.

Eisenberg, Edmund & Gale, David (1959). Consensus of Subjective Probabilities: The Pari-Mutuel Method. Annals of Mathematical Statistics, 30:1, 165-168.

Griffith, RM (1949) Odds adjustments by American horse-race bettors. The American Journal of Psychology, 62, 290-294.

Manski, Charles (2005) Interpreting the Predictions of Prediction Markets. NBER 10359.

Rhode, Paul, W. & Strumpf, Koleman (2008) Historical Political Futures Markets: An International Perspective. NBER 14377.

[1] Baron & Lange (2006) discuss the comparison between the so-called “risk-neutral” valuation in finance (based on the law of one price and arbitrage) and the valuation underlying pari-mutuel betting. They speak of “self-hedging” because, in a pari-mutuel bet, the bettors share the winnings in proportion to their initial stakes. This is reminiscent of the way mutual insurance companies operate, where the money used to compensate victims must correspond to the total premiums collected.

 

The value of life

A short article, written jointly with Béatrice Cherrier… All comments are welcome!

In 1928, looking back at the Chinese revolution of 1925, André Malraux published his novel Les Conquérants, slipping in the line “I have learned that a life is worth nothing, but that nothing is worth a life”. However appealing the phrase may be, one imagines that it will not be of much help to a public decision-maker. In 2013, the Commissariat Général à la stratégie et à la prospective, in France, estimated the value of a life at 3 million euros. But where does this figure come from? And what does it really mean?

Is saving a life worth it?

Putting a figure on a life is indeed a problem that insurers, but also public decision-makers, face far more often than it might seem. After the collapse of the World Trade Center in 2001, the US Congress passed the Air Transportation Safety and System Stabilization Act. This new law provided for the creation of a special fund to compensate the victims of the attacks of September 11, 2001. The amount of the compensation, and the people entitled to it, would be decided by an all-powerful official. This “special master”, Kenneth Feinberg, explains in a book looking back on his experience (Feinberg 2006) that the government wanted to avoid an avalanche of personal-injury lawsuits, which could have plunged the air transport industry into turmoil.

A very strict framework was therefore put in place: only victims “who received hospital treatment within 72 hours of the attacks”, injured in the vicinity of the World Trade Center or the Pentagon, as well as their spouses and children – but not their parents – were declared eligible for compensation. The fund awarded more than 7 billion dollars to 5,560 victims and members of their families. Feinberg was legally required to calibrate the damages according to the “financial value” of the deceased victim. He thus had to explain to a firefighter’s wife, for example, that her husband was worth less than an insurance broker.

In France, the recent move to 80 km/h on two-lane roads was also partly justified by the lives saved. While the Prime Minister welcomed, in January, a toll of 116 lives spared, the journalist Alba Ventura (2019) asked on RTL: “if it is a matter of saving just one life, isn’t it worth it?” (« est-ce que ça ne vaut pas le coup ? »). The radio medium adds to the ambiguity: is she in fact asking whether it is worth the cost (« le coût »)? For the problem is precisely that of the methods used by the public authorities to put a price on a life, saved or lost.

The value of a human life as a marginal rate of substitution

At the end of the 1940s, the US Air Force sought to maximize the damage inflicted by its air raids on the Soviet Union. When a group of researchers at the RAND Corporation proposed flying a large number of cheap aircraft to decoy Soviet air defences, the Air Force generals rejected the idea, arguing that the cost of the lives of the sacrificed pilots did not appear in the calculations. As Spencer Banzhaf (2014) recalls, the defence economist Jack Hirshleifer then chose to compute the value of a pilot’s life by including the cost of his training and replacement. This answer had the advantage of using directly monetary, and easily quantifiable, magnitudes. In the 1960s, under the influence of work on human capital, it was suggested to use an estimate of the discounted net wages earned over a pilot’s lifetime, supposed to reflect the material utility of the occupation. These methods remained in line with those defined by Louis Dublin and Alfred Lotka for insurance companies in the inter-war period (Cavalin 2016).

Although its title borrows the advertising slogan of an insurance company, popularized by road-safety organizations (“The Life You Save May Be Your Own”), the article that Thomas Schelling, Nobel laureate in economics in 2005, published in 1968 breaks sharply with this tradition. He actually drew on the work of one of his students (and former military pilot), Jack Carlson, who sought to assess whether certain safety investments (for pilots) were “worth the cost”. The cost of an ejection system for B-58 aircraft was, for example, of the order of 80,000 dollars, for a substantial gain in the probability of survival. It is this idea of linking the value of life to the notion of risk that allowed Schelling to develop the concept of the “statistical value” of life.

Schelling’s major innovation was to involve citizens in the assessment of the value of their own lives. Since it was sterile to ask them point-blank to put a figure on their own life, one could instead adapt Carlson’s method by asking them, for example, how much they would be willing to spend on an airbag, or a medical treatment, that would reduce their mortality rate by 1%. Thus, working in a diagram with, on the horizontal axis, the probability of survival or the residual life expectancy and, on the vertical axis, wealth, as in Figure 1, one can construct indifference curves linking wealth and survival: what amount is one (marginally) willing to spend to gain, statistically, a little life, either by reducing one’s probability of death or by lengthening one’s life expectancy? In the example below, the value of life is simply the derivative of the indifference curve.

Figure 1: Trade-off between life expectancy and wealth.

The value of life is then not a constant quantity, but depends on the situation one is in. Thus VSL=\frac{d\omega}{d\text{E}} or \frac{d\omega}{d\text{p}}, depending on whether it is computed with respect to a variation in life expectancy or in the probability of death. If one has a larger or smaller residual life expectancy (to the left or to the right), or if one is more or less wealthy (at the top or at the bottom), the slope will not be the same. A classic example is that of Russian roulette, with a twelve-chamber revolver. Suppose there are 3 bullets: what amount would one be willing to pay to remove one bullet? What would that amount become if there were 9 bullets and one wished to remove several? Suppose the statistical value of life is 3 million euros. In the first case, the probability drops from 3/12 to 2/12, i.e. dp_1=1/12 (a decrease of one third). In the second case, to obtain the same decrease of one third, one would have to go from 9/12 to 6/12, i.e. dp_2=3/12. If we assume that the statistical value of life is constant, then d\omega_2/d\omega_1=d\text{p}_2/d\text{p}_1=3, and one should be willing to spend three times as much for the same relative decrease in the probability of death.

Heuristically, in the second case, one is in a somewhat desperate situation (there are 3 chances out of 4 of dying), so any solution is worth taking, whatever its price! This is what we find in the convexity of the right-hand curve in Figure 1: if my probability of death is high (to the right on the horizontal axis), I am willing to spend a lot for a small gain. This way of valuing one’s own life, proposed by Schelling, is often called the “gunpoint value”.
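A quick numerical check of these orders of magnitude (a minimal Python sketch, under the assumption, made above, of a constant statistical value of life of 3 million euros):

vsl = 3_000_000           # assumed constant statistical value of life, in euros

# twelve-chamber revolver: removing one bullet out of three, then three out of nine
dp_1 = 3/12 - 2/12        # reduction in the probability of death, first scenario
dp_2 = 9/12 - 6/12        # same relative reduction (one third), second scenario

wtp_1 = vsl * dp_1        # willingness to pay, roughly 250 000 euros
wtp_2 = vsl * dp_2        # roughly 750 000 euros, i.e. three times as much
print(wtp_1, wtp_2, wtp_2 / wtp_1)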

Saving my life, or someone else’s?

But does this approach really answer the initial question? The life saved by a costly and restrictive safety measure is rarely that of the person making the decision. This tension is particularly visible in the French debates on the measurement of the value of a life, since, unlike in the United States, this measurement is largely the work of engineer-economists recruited by the State to design public policies aimed at increasing the well-being of the population. The question of road safety is at the origin of a founding article on the subject, presented by two civil engineers from the Ponts et Chaussées, Claude Abraham and Jacques Thédié, at the annual operations research conference in Aix-en-Provence in 1960.

Answering the question “how much should a community spend to save a human life”, they point to two types of elements to quantify. The “objective” elements, of an “economic” nature, can be quantified by discounting the direct wage losses and the losses of production and consumption, using a pragmatic reasoning that mixes human capital and macroeconomic analysis. For example, a man aged 41-45 has a production value twice that of a man aged 56-60, and his consumption value is 50% higher. But the loss of a man over 65 is actually a gain, which shows the importance of including the “affective” elements. Since their evaluation is far more difficult, the authors fall back on the estimates made by the courts in matters of compensation for personal injury, in particular the award of a pretium doloris.

As Daniel Benamouzig (2005) relates, the theoretical, technical, ethical and metaphysical aspects of the principle and method of quantifying a value of life presented by Abraham and Thédié, and in particular the application of such methods to the field of health, were the subject of heated debates, which have still not been resolved by any consensus. Françoise Fabre (1970) notes, for example, that using the economic calculation of the value of a life-year to decide whether systematic screening for cervical cancer should be introduced can lead, by construction, to a negative answer: the market value of women’s work, which serves as the basis for the calculation, is much lower than that of men’s work, which creates inequalities of treatment between men and women.

Adopting an ethical and theoretical framework borrowed from social choice, Jacques Drèze (1962) proposed an alternative calculation method closer to the one developed by Schelling. A public decision should be based on the preferences of the community, obtained by aggregating the individual utilities for the decision in question. One solution to the problems of measurement and incommensurability raised by Abraham and Thédié is to ask citizens directly: “how much do you think the community should spend to save a life?” The utility of life can be computed by identifying the individual, subjective willingness to pay to prolong one’s life by removing a given risk, Drèze adds. He concludes that his method leads to an estimate of the value of life markedly higher than the one obtained by his colleagues. The sensitivity of the estimates to the calculation methods is, still today, a major problem.

Several methods, several values?

Biausque (2011) reviews several studies estimating the (statistical) value of life in the face of environmental, health and road risks, which can be summarized in Table 1.

                    Environment      Health      Road traffic
Number of studies            51         250                65
Mean (€)              2,455,982   2,574,149         4,884,853
Minimum (€)              24,427       4,450           267,615
Maximum (€)           7,641,706  22,100,000        17,500,000

Table 1: source Biausque (2011).

We can see that these calculations are complex, and give orders of magnitude that differ widely from one another. The variability between individuals was mentioned in Feinberg (2006), who explained that it could be economically sound to say that the life of a 25-year-old trader was “worth more” than that of a 45-year-old firefighter. But it is above all the variability between methods, which is also found in Hugonnier et al. (2018), that surprises and disturbs, with a factor ranging from 1 to 20 depending on the method used.

                                 Wealth quintile
Health status               0%-20%      40%-60%     80%-100%
Statistical   ‘fair’      4,380,000    4,400,000    7,890,000
              ‘very good’ 8,800,000    8,830,000   12,135,000
Gunpoint      ‘fair’        235,000      235,000      422,000
              ‘very good’   590,000      590,000      650,000
Human capital               250,000      390,000      525,000

Table 2: source Hugonnier et al. (2018)

Table 2 shows the statistical value (inspired by Drèze 1962), the value based on human-capital calculations, and a “gunpoint value”, according to the wealth of the person who dies (quintile levels) and his or her state of health (before death).

These tables show how difficult it is to value the lives of people involved in a fatal accident. One tries to imagine the value of the life of a “representative individual” (perhaps as a function of his or her state of health, age, or income). But how can a value be assigned to a life that does not yet exist? Many decisions taken today also affect “future generations”, that is, people who do not exist today… Is it possible to give a value to the lives of these people? Yet this is normally what must be done if one wants to implement a policy aimed at limiting global warming.

References

Abraham, C. & Thédié, J. 1960. Le prix d’une vie humaine dans les décisions économiques. Revue française de Recherche opérationnelle, 16: 157-168.

Banzhaf, Spencer H. 2014. Retrospectives: The Cold-War Origins of the Value of Statistical Life. Journal of Economic Perspectives, 28:4, 213-226.

Benamouzig, Daniel. 2005. La Santé au miroir de l’Economie. Paris : PUF

Biausque V. 2011, Valeur statistique de la vie humaine : une méta-analyse. OCDE

Cavalin, C. 2016. « La valeur d’une vie statistique : histoire américaine, histoire de la pensée économique. » Incidence 12.

Commissariat général à la stratégie et à la prospective 2013. Éléments pour une révision de la valeur de la vie humaine. http://www.strategie.gouv.fr/

Costa, Dora L. & Kahn Matthew E. 2004. Changes in the value of life, 1940-1980. Journal of Risk and Uncertainty, 29 :2, 159-180

Drèze, Jacques 1962. L’utilité sociale d’une vie humaine. Revue française de recherche opérationnelle 23 : 3 -28

Fabre, Françoise. 1970. « Une étude économique de la prévention et du dépistage précoce du cancer du col de l’utérus » Cahiers du Séminaire d’Econometrie 12, 121-143

Feinberg, Kenneth R. 2006. What Is Life Worth?: The Inside Story of the 9/11 Fund and Its Effort to Compensate the Victims of September 11th. Public Affairs.

Johansson, Per-Olov, 2000. Is there a meaningful definition of the value of statistical life? Journal of Health Economics, 20, 131-139

Hugonnier, J., Pelgrin, F. & St-Amour, P. 2018. Valuing Life as an Asset, as a Statistic and at Gunpoint. Swiss Finance Institute Research Paper 18-27

Lery, Simon 2004. Arbitrages : le prix de la vie. Alternatives Economiques, 223.

Mrozek, Janusz R. & Taylor, Laura O. 2002. What determines the value of life : a meta analysis. Journal of Policy Analysis and Management, 21 :2, 253–270

Schelling, T.C. 1968. ‘The life you save may be your own.’ In Problems in Public Expenditure Analysis, ed. Samuel B. Chase (Washington DC: Brookings Institution), 127–162.

Ventura, Alba. 2019. 80km/h : « S’il s’agit de ne sauver qu’une vie, est-ce que ça ne vaut pas le coup ? », RTL, 29 janvier 2019.

A brief history of sports betting

This article was originally published – in French – on variance.eu

A report by the American Gaming Association (May 2017) estimated that between $100 billion and $400 billion was bet each year, for an estimated gross revenue of between $5 billion and $20 billion, for sports betting alone. We will return here to a brief history of sports betting, emphasizing the concept of pari-mutuel betting. We will see, in a second article, the links between this principle and mathematical finance, and insurance.

From games to sports

Sports betting has been around for a long time, even if the origin of the first bet is impossible to date. We can think of the Greeks, inventors of the Olympic Games, where it was not uncommon for spectators to bet among themselves on the winners (Decker & Thuillier, 2004). Closer to home, as Georges Vigarello reminds us, “Under the Ancien Régime, gambling was the subject of a real passion. It took the form of either betting games or prize games.”

The former, the bets, were made between people from the same social world, between farmers or between nobles. The latter, the prize games, took place during parish celebrations and reflected different regional practices, with wrestling in Brittany or jumping contests in Provence. We can also think of the confrontations between villages at the soule, for example. Among the nobles, prize games were organized for special occasions (a birth or a wedding). These games were recreational and festive moments.

It was not until the end of the 19th century that games became sport, in line with the hygienist theories of the time. We can think of Baron Pierre de Coubertin, who wanted to “use all the means appropriate to develop our physical qualities to make them serve the collective good” through “sport”. We find the Baron again in 1887 with the creation of the Union of French Athletic Sports Societies, the official appearance of the notion of “sport”, replacing that of “game”, as Dietschy & Clastres (2006) point out, noting in passing that this Union was based on amateurism, in reaction against the cycling (from 1860) and walking (around 1870) companies, which carried on the traditions of prize and betting games. Around 1890, this union, dedicated to athletics, opened up to other sports (rugby, field hockey, fencing, swimming), which were represented by specialized commissions.

The first bookmakers and gambling

A little earlier, during the Industrial Revolution, horse betting organised by bookmakers developed. These bets were popular in the United Kingdom in the 16th and 17th centuries, but remained reserved for the aristocracy and the landed gentry. In reality, only horse owners were allowed to bet on the results of these private races, known as “matches”. One of these races, launched by the twelfth Earl of Derby (Edward Smith-Stanley) around 1780, also left its mark on sporting vocabulary. If these races were originally private, Charles II’s passion for them made them more ambitious, attracting huge crowds betting larger and larger sums. Innkeepers and pub owners were then major promoters of these races, which encouraged owners to organize them near their establishments. They naturally became the first bookmakers, organizing the first steeple-chases, a form of race (first created in Ireland) in which riders rode from one church tower to another, jumping everything in their path! In 1826, at the stables of St Albans, north of London, the idea of having the horses start and finish in the same place was launched, giving rise to modern racecourses.

Betting was not yet regulated, and betting on races operated on a credit system. And since gambling near a place where alcohol was available in large quantities could have dramatic consequences, the British government banned gambling in pubs, which led to the opening of betting shops, run by bookmakers, following the adoption of the Gaming Act in 1845. The bookmakers not only played the role of scribes, keeping track of transactions in registers; they also served as arbitrators in bets. The bookmaker became the intermediary with whom to bet: he received the bets, but did not bet against the player. The arbitrator did not only intervene at the end, in the event of a dispute, but above all to make the bet official. Indeed, cash bets were rare, and bookmakers decided whether the items wagered had the same value and, if not, what the difference was. One of the players then added money to a cap. The players put their hands into the cap and withdrew them, either to accept the assessment or to indicate their disagreement. This is what “hand in cap” refers to: the amount of money needed to ensure a fair bet. The word “handicap” was then commonly used in horse betting (to designate participants disadvantaged at the start of a race), before taking on a medical connotation from 1950 onwards.

Thereafter, bookmakers did not lack imagination, introducing cash bets, then offering fixed odds against each horse in a race. Parliament then backtracked with the Suppression of Betting Houses Act in 1853; credit betting and gambling on racecourses remained allowed. At the same time, in France, Léon Sari invented the “pari mutuel” in 1857 with Charles de Morny, owner of the Maisons-Laffitte racetrack (which acquired a grandstand building in June 1878). Joseph Oller, who co-founded the Moulin-Rouge, was the concession holder. As the Senate report on gambling in France reminds us, the law of June 2, 1891 legalized betting on horse races and established the principle of mutualization. As we will see later, this principle means that bettors play against each other and share the winnings (once the legal levies provided for by law have been deducted for the benefit of the State and the racing institution). In mathematical finance, we speak of a “self-hedging strategy”. In March 1931, the PMU (“pari mutuel urbain”) was born, and it was not until 1985 that the “sports lotto” arrived.

From horses to other sports

The “pool” has long referred in England to a card game played for collective stakes, its etymology going back to the French “poule” (“hen”), or rather to the old French word referring to young poultry (we find the Latin pulla, from pullus, the “young animal”), but also to “booty” or “loot”. Here we find the idea of playing for money. This use can be traced back to 1870 (in the sense of “collective betting”), before the term “pool” spread during the First World War, and then came to designate a group of people sharing skills. As early as 1920, the term “football pool” appeared, as recalled by Forrest (1999).

In Liverpool, England, John Moores founded Littlewoods in 1923, a retail company, before launching mail-order sales, while offering football bets. The most famous game was the “Treble Chance”, where players could choose to bet on 10, 11 or 12 football matches for the coming weekend. Anecdotally, as noted by Forrest & Pérez (2013), when a match could not take place (for example because of rain), a panel of experts appointed by Littlewoods had to model the match and provide a forecast. After the Second World War, in Europe, came the so-called 1X2 formulas, where the player must predict whether, in a set of 12 to 15 games, the home team will win (1), lose (2) or draw (X). It can be noted that these “football pools” could refer to any form of pari-mutuel betting, very strongly resembling a lotto. The main difference is that in a lottery the draw is supposed to be a purely random process, unlike football matches. And for the players, the difference is significant! In the 1980s, Littlewoods was one of the largest private companies in Europe, before declining with the birth of online betting sites…

Internet and online betting

Now, in addition to the betting shops that still exist in the United Kingdom, the strong point of bookmakers is their online presence. The first sites were created around 1995, for example Intertops, which relied on a law passed in 1994 by the island nation of Antigua and Barbuda (an officially independent Commonwealth member country) granting licences to companies wishing to provide gambling services over the Internet (they subsequently obtained licences from the Mohawk territory of Kahnawake in Quebec, and from Malta). Betting on sports events quickly became very popular.

In 2000, Betfair was launched and revolutionized the industry: Betfair itself did not take customers’ bets, but rather allowed customers to place bets with one another. This peer-to-peer betting quickly became very popular. In 2002, the first live betting was launched, offering bettors the opportunity to bet on a sporting event while it was taking place. Today, on the larger sites, all kinds of sports are available, whether team sports (football, basketball) or individual ones (tennis, boxing), possibly with competitions involving more than two players or teams (athletics, cycling). The player chooses an objective, which can be a final score (1X2 in football), a number of goals scored, etc., and then concludes the bet by choosing the amount he is willing to wager (the stake). On all sites, no fewer than 20,000 bets are available every day.

Decker, Wolfgang & Thuillier, Jean-Paul (2004). Le sport dans l’antiquité. Picard.

Dietschy, Paul & Clastres, Patrick (2006). Sport, société et culture en France du XIXe siècle à nos jours. Hachette, Carré Histoire.

Forrest, David (1999). The Past and Future of the British Football Pools. Journal of Gambling Studies, 15:2, 161-176.

Forrest, David & Pérez, Levi (2013) The Football Pools in The Oxford Handbook of the Economics of Gambling, 147-162

Vigarello, Georges (2004) Le sport est-il encore un jeu ? Sciences Humaines, no 152.

To be continued…. with a post on how bets, predictions and players’ beliefs are linked.

Annual UCSB InsurTech Summit

Just a quick post to mention that an Insurtech Summit will be organized in May 2019, on Friday 3rd, by Mike Ludkovski, and I will be there, with Francois Millard (Vitality Group), Adam Tashman (Carpe Data, Santa Barbara), Emil Valdez (University of Connecticut), and Howard Zail (Elucidor, LLC, New York City). That will be nice… I will actually also give a talk on the Monday before at the actuarial seminar !

Foundations of Machine Learning, part 5

This post is the ninth (and probably last) one of our series on the history and foundations of econometric and machine learning models. The first four were on econometric techniques. Part 8 is online here.

Optimization and algorithmic aspects

In econometrics, (numerical) optimization became omnipresent as soon as we left the Gaussian model. We briefly mentioned it in the section on the exponential family, with the use of the Fisher score (gradient descent) to solve the first-order condition \mathbf{X}^T \mathbf{W}(\beta)^{-1}[\mathbf{y}-\widehat{\mathbf{y}}]=\mathbf{0}. In learning, optimization is the central tool. And it is necessary to have effective optimization algorithms to solve problems (described previously) of the form: \widehat{\beta}\in\underset{\beta\in\mathbb{R}^p}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,\beta_0+\mathbf{x}_i^T\beta)+\lambda\Vert\boldsymbol{\beta}\Vert\right\rbrace In some cases, instead of global optimization, it is sufficient to consider optimization coordinate by coordinate (widely studied in Daubechies et al. (2004)). If f:\mathbb{R}^d\rightarrow\mathbb{R} is convex and differentiable, and if \mathbf{x} satisfies f(\mathbf{x}+h\mathbf{e}_i)\geq f(\mathbf{x}) for any h and any i\in\{1,\cdots, d\}, then f(\mathbf{x})=\min\{f\}, where (\mathbf{e}_i) is the canonical basis of \mathbb{R}^d. However, this property is no longer true in the non-differentiable case. But if we assume that the non-differentiable part is (additively) separable, it becomes true again. More specifically, if f(\mathbf{x})=g(\mathbf{x})+\sum_{i=1}^d h_i(x_i) with \left\lbrace\begin{array}{l}g: \mathbb{R}^d\rightarrow\mathbb{R}\text{ convex and differentiable}\\h_i: \mathbb{R}\rightarrow\mathbb{R}\text{ convex}\end{array}\right. This was the case for the Lasso regression, \beta\mapsto\| \mathbf{y}-\beta_0-\mathbf{X}\beta\|_{\ell_2 }+\lambda\|\beta\|_{\ell_1}, as shown by Tseng (2001). Going back to our initial notations, we can use a coordinate descent algorithm: from an initial value \mathbf{x}^{(0)}, we iterate x_j^{(k)}\in\underset{x_j}{\text{argmin}}\big\lbrace f(x_1^{(k)},\cdots,x_{j-1}^{(k)},x_j,x_{j+1}^{(k-1)},\cdots,x_d^{(k-1)})\big\rbrace for j=1,2,\cdots,d. These algorithmic problems and numerical issues may seem secondary to econometricians. However, they are essential in machine learning: a technique is interesting only if there is a stable and fast algorithm to obtain a solution. These optimization techniques can be transposed: for example, this coordinate descent technique can be used in the case of SVM methods (so-called “support vector” methods) when the space is not linearly separable and the classification error must be penalized (we will come back to this technique in the next section).
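As an illustration, here is a minimal Python sketch of cyclic coordinate descent for the Lasso, where each coordinate update is a soft-thresholding step (the data are simulated, and the intercept is omitted for simplicity):

import numpy as np

def soft_threshold(z, gamma):
    # proximal operator of gamma*|.|, the key step of each coordinate update
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    # minimize (1/2n)||y - X b||_2^2 + lam * ||b||_1 by cyclic coordinate descent
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]   # partial residual, without coordinate j
            rho = X[:, j] @ r_j / n
            beta[j] = soft_threshold(rho, lam) / col_ss[j]
    return beta

# simulated data: only the first two coefficients are non-zero
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = X @ np.r_[2.0, -1.0, np.zeros(8)] + rng.normal(size=200)
print(np.round(lasso_cd(X, y, lam=0.1), 2))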

In-sample, out-of-sample and cross-validation

These techniques seem intellectually interesting, but we have not yet discussed the choice of the penalty parameter \lambda. But this problem is actually more general, because comparing two parameters \widehat{\beta}_{\lambda_1} and \widehat{\beta}_{\lambda_2} amounts to comparing two models. In particular, if we use a Lasso method with different thresholds \lambda, we compare models that do not have the same dimension. Previously, we addressed the problem of model comparison from an econometric perspective (by penalizing overly complex models). In the learning literature, judging the quality of a model on the data used to construct it does not make it possible to know how the model will behave on new data. This is the so-called “generalization” problem. The traditional approach then consists in splitting the sample (of size n) into two parts: one that will be used to train the model (the training database, in-sample, of size m) and one that will be used to test the model (the testing database, out-of-sample, of size n-m). The latter then makes it possible to measure a real predictive risk. Suppose that the data are generated by a linear model y_i=\mathbf{x}_i^T \beta_0+\varepsilon_i where the \varepsilon_i are independent and centred draws. The in-sample empirical quadratic risk is here \frac{1}{m}\sum_{i=1}^m\mathbb{E}\big([\mathbf{x}_i^T \widehat{\beta}-\mathbf{x}_i^T \beta_0]^2\big)=\mathbb{E}\big([\mathbf{x}_i^T \widehat{\beta}-\mathbf{x}_i^T \beta_0]^2\big), for any observation i. Assuming the residuals \varepsilon Gaussian, we can show that this risk equals \sigma^2 \text{trace} (\Pi_X)/m, i.e. \sigma^2 p/m. On the other hand, the out-of-sample empirical quadratic risk is \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big) where \mathbf{x} is a new observation, independent of the others. It can be noted that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big\vert \mathbf{x}\big)=\text{Var}\big(\mathbf{x}^T \widehat{\beta}\big\vert \mathbf{x}\big)=\sigma^2\mathbf{x}^T(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{x}, and by integrating with respect to \mathbf{x}, \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T\beta_0]^2\big)=\sigma^2\text{trace}\big(\mathbb{E}[\mathbf{x}\mathbf{x}^T]\mathbb{E}\big[(\mathbf{X}^T\mathbf{X})^{-1}\big]\big). The expression is then different from the one obtained in-sample, and using the Groves & Rothenberg (1969) bound, we can show that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big) \geq \sigma^2\frac{p}{m}, which is rather intuitive once we start thinking about it. Except in some simple cases, there is no simple (explicit) formula. Note, however, that if \mathbf{X}\sim\mathcal{N}(0,\sigma^2 \mathbb{I}), then \mathbf{X}^T \mathbf{X} follows a Wishart distribution, and it can be shown that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big)=\sigma^2\frac{p}{m-p-1}. If we now look at the empirical version: if \widehat{\beta} is estimated on the first m observations, \widehat{\mathcal{R}}^{\text{ IS}}=\sum_{i=1}^m [y_i-\mathbf{x}_i^T\widehat{\beta}]^2\text{ and }\widehat{\mathcal{R}}^{\text{ OS}}=\sum_{i=m+1}^{n} [y_i-\mathbf{x}_i^T\widehat{\beta}]^2, and as Leeb (2008) noted, \widehat{\mathcal{R}}^{\text{IS}}-\widehat{\mathcal{R}}^{\text{OS}}\approx 2\cdot\nu where \nu represents the number of degrees of freedom, which is not unlike the penalty used in the Akaike criterion.
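A small simulation sketch (in Python, with a standard Gaussian design, so that the Wishart formula above applies) comparing the empirical in-sample and out-of-sample risks with \sigma^2 p/m and \sigma^2 p/(m-p-1):

import numpy as np

rng = np.random.default_rng(1)
m, p, sigma = 100, 10, 1.0
beta0 = rng.normal(size=p)

def one_run():
    X = rng.normal(size=(m, p))                    # training design
    y = X @ beta0 + sigma * rng.normal(size=m)
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    Xn = rng.normal(size=(m, p))                   # fresh (out-of-sample) observations
    is_risk = ((X @ (beta_hat - beta0)) ** 2).mean()
    os_risk = ((Xn @ (beta_hat - beta0)) ** 2).mean()
    return is_risk, os_risk

risks = np.array([one_run() for _ in range(500)])
print(risks.mean(axis=0))                          # ~ sigma^2 p/m  vs  ~ sigma^2 p/(m-p-1)
print(sigma**2 * p / m, sigma**2 * p / (m - p - 1))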

Figure 4 shows the respective evolution of \widehat{\mathcal{R}}^{\text{IS}} and \widehat{\mathcal{R}}^{\text{OS}} as a function of the complexity of the model (degree of a polynomial regression, number of knots in splines, etc.). The more complex the model, the smaller \widehat{\mathcal{R}}^{\text{IS}} (this is the red curve, below). But that is not what we are interested in here: we want a model that predicts well on new data (i.e. out-of-sample). As Figure 4 shows, if the model is too simple, it does not predict well (just as it does not on the in-sample data). But what we can also see is that if the model is too complex, we are in a situation of “overfitting”: the model starts to model the noise. Of course, this figure should remind us of the one we saw in our second post of this series.

Figure 4 : Generalization, under- and over-fitting

Instead of splitting the database in two, with some of the data used to calibrate the model and some to assess its performance, it is also possible to use cross-validation. To present the general idea, we can go back to the “jackknife”, introduced by Quenouille (1949) (and formalized by Quenouille (1956) and Tukey (1958)), commonly used in statistics to reduce bias. Indeed, if we assume that \{y_1,\cdots,y_n\} is a sample drawn from a distribution F_\theta, and that we have an estimator T_n (\mathbf{y})=T_n (y_1,\cdots,y_n), but that this estimator is biased, with \mathbb{E}[T_n (\mathbf{Y})]=\theta+O(n^{-1}), it is possible to reduce the bias by considering \widetilde{T}_n(\mathbf{y})=\frac{1}{n}\sum_{i=1}^n T_{n-1}(\mathbf{y}_{(i)})\text{ where }\mathbf{y}_{(i)}=(y_1,\cdots,y_{i-1},y_{i+1},\cdots,y_n). It can then be shown that \mathbb{E}[\widetilde{T}_n(\mathbf{Y})]=\theta+O(n^{-2}). The idea of cross-validation is based on the same idea of building an estimator by removing one observation. Since we want to build a predictive model, we compare the forecast obtained with the estimated model and the observation that was left out: \widehat{\mathcal{R}}^{\text{ CV}}=\frac{1}{n}\sum_{i=1}^n \ell(y_i,\widehat{m}_{(i)}(\mathbf{x}_i)). We then speak of the “leave-one-out” (loocv) method.

This technique is reminiscent of the traditional method used to find the optimal parameter in exponential smoothing methods for time series. In simple smoothing, we construct a forecast from a time series as {}_t\widehat{y}_{t+1} =\alpha\cdot{}_{t-1}\widehat{y}_t +(1-\alpha)\cdot y_t, where \alpha\in[0,1], and we consider as “optimal” \alpha^\star = \underset{\alpha\in[0,1]}{\text{argmin}}\left\lbrace \sum_{t=2}^T \ell({}_{t-1}\widehat{y}_{t},y_{t}) \right\rbrace as described in Hyndman et al. (2009).
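For instance, a minimal Python sketch of this grid search for the smoothing parameter, on a simulated series and with a quadratic loss (the recursion below follows the formula given above):

import numpy as np

def smooth_forecasts(y, alpha):
    # one-step-ahead forecasts, with  f[t+1] = alpha * f[t] + (1 - alpha) * y[t]
    f = np.empty_like(y)
    f[0] = y[0]
    for t in range(len(y) - 1):
        f[t + 1] = alpha * f[t] + (1 - alpha) * y[t]
    return f

def sse(y, alpha):
    f = smooth_forecasts(y, alpha)
    return ((y[1:] - f[1:]) ** 2).sum()   # quadratic loss on one-step forecast errors

rng = np.random.default_rng(2)
y = np.cumsum(rng.normal(size=200)) + rng.normal(size=200)   # a noisy random-walk series

grid = np.linspace(0.01, 0.99, 99)
alpha_star = grid[np.argmin([sse(y, a) for a in grid])]      # "optimal" smoothing parameter
print(round(alpha_star, 2))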

The main problem with the leave-one-out method is that it requires fitting n models, which can be problematic in large dimensions. An alternative method is cross-validation by k blocks (so-called “k-fold cross-validation”), which consists in using a partition of \{1,\cdots,n\} into k groups (or blocks) of the same size, \mathcal{I}_1,\cdots,\mathcal{I}_k; let us write \mathcal{I}_{\bar j}=\{1,\cdots,n\}\setminus \mathcal{I}_j. Writing \widehat{m}_{(j)} for the model built on the sample \mathcal{I}_{\bar j}, we then set \widehat{\mathcal{R}}^{k-\text{CV}}=\frac{1}{k}\sum_{j=1}^k \mathcal{R}_j\text{ where }\mathcal{R}_j=\frac{k}{n}\sum_{i\in\mathcal{I}_{{j}}} \ell(y_i,\widehat{m}_{(j)}(\mathbf{x}_i)). Standard cross-validation, where only one observation is removed each time (loocv), is a special case, with k=n. Using k=5 or 10 has a double advantage over k=n: (1) the number of models to be estimated is much smaller, 5 or 10 rather than n; (2) the samples used for estimation are less similar and therefore less correlated with each other, which tends to avoid excess variance, as recalled by James et al. (2013).
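A short Python sketch of k-fold cross-validation (here with ordinary least squares and a quadratic loss, on simulated data; taking k=n would give the leave-one-out criterion):

import numpy as np

def kfold_risk(X, y, k, fit, predict, loss):
    # k-fold cross-validation estimate of the predictive risk
    n = len(y)
    idx = np.random.default_rng(3).permutation(n)
    blocks = np.array_split(idx, k)
    risks = []
    for block in blocks:
        train = np.setdiff1d(idx, block)
        model = fit(X[train], y[train])              # estimate on the k-1 remaining blocks
        risks.append(loss(y[block], predict(model, X[block])).mean())
    return np.mean(risks)

# example with ordinary least squares and quadratic loss
fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
predict = lambda b, X: X @ b
loss = lambda y, yhat: (y - yhat) ** 2

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 5))
y = X @ np.ones(5) + rng.normal(size=200)
print(round(kfold_risk(X, y, k=10, fit=fit, predict=predict, loss=loss), 3))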

Another alternative is to use bootstrap samples. Let \mathcal{I}_b be a sample of size n obtained by drawing with replacement from \{1,\cdots,n\}, to determine which observations (y_i,\mathbf{x}_i) will be kept in the learning population (at each draw). Write \mathcal{I}_{\bar b}=\{1,\cdots,n\}\setminus\mathcal{I}_b. Writing \widehat{m}_{(b)} for the model built on the sample \mathcal{I}_b, we then set: \widehat{\mathcal{R}}^{\text{ B}}=\frac{1}{B}\sum_{b=1}^B \mathcal{R}_b\text{ where }\mathcal{R}_b=\frac{n_{\overline{b}}}{n}\sum_{i\in\mathcal{I}_{\overline{b}}} \ell(y_i,\widehat{m}_{(b)}(\mathbf{x}_i)), where n_{\bar b} is the number of observations that were not kept in \mathcal{I}_b. It should be noted that with this technique, on average, e^{-1}\approx 36.8\% of the observations do not appear in the bootstrap sample, and we recover the order of magnitude of the proportions used when creating a calibration sample and a test sample. In fact, as Stone (1977) showed, the minimization of AIC is comparable to the cross-validation criterion, and Shao (1997) showed that the minimization of BIC corresponds to k-fold cross-validation with k=n/\log n.
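The 36.8% figure is easy to check numerically (a minimal Python sketch of the out-of-bag fraction of a bootstrap draw):

import numpy as np

rng = np.random.default_rng(5)
n, B = 1000, 500
oob_fractions = []
for _ in range(B):
    draw = rng.integers(0, n, size=n)                # draw with replacement: the sample I_b
    kept = np.unique(draw)
    oob_fractions.append(1 - len(kept) / n)          # share of observations never drawn

print(round(np.mean(oob_fractions), 3), round(np.exp(-1), 3))   # both close to 0.368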

All these techniques are mentioned in the “machine learning” section since they rely on automatic, computational procedures, and no probabilistic foundations are necessary. In many cases we used the notation m^\star (at least in the first posts on “machine learning” techniques) to highlight the fact that we want some sort of “optimal” model – and to distinguish it from the estimators \widehat{m} considered earlier, when we had a probabilistic framework. But of course, it is possible (and necessary) to build bridges between those two cultures…

References are online here. As explained in the introduction, this is some sort of online version of an introduction to our joint paper with Emmanuel Flachaire and Antoine Ly, Econometrics and Machine Learning (initially written in French), which will appear soon in the journal Economics and Statistics (in English and in French).

Foundations of Machine Learning, part 1

This post is the fifth one of our series on the history and foundations of econometric and machine learning models. The first four were on econometric techniques. Part 4 is online here.

In parallel with these tools developed by, and for, economists, a whole literature has developed around similar issues, centered on the problems of prediction and forecasting. For Breiman (2001a), a first difference comes from the fact that statistics has developed around the principle of inference (i.e. explaining the relationship linking y to the variables \mathbf{x}), while another culture is primarily interested in prediction. In a discussion following the article, David Cox states very clearly that in statistics (and econometrics) “predictive success (…) is not the primary basis for model choice”. We will come back here to the roots of automatic learning techniques. The important point, as we will see, is that the main concern of machine learning is the generalization properties of a model, i.e. its performance – according to a criterion chosen a priori – on new data, and therefore on out-of-sample tests.

A learning machine

Today, we speak of “machine learning” to describe a whole set of techniques, often computational, as alternatives to the classical econometric approach. Before characterizing them as much as possible, it should be noted that, historically, other names were used. For example, Friedman (1997) proposed to link statistics (which closely resembles econometric techniques – hypothesis testing, ANOVA, linear regression, logistic regression, GLM, etc.) with what was then called “data mining” (which then included decision trees, nearest-neighbour methods, neural networks, etc.). The bridge between those two cultures corresponds to the “statistical learning” techniques described in Hastie et al. (2009). But one should keep in mind that machine learning is a very large field of research.

The so-called “natural” learning (as opposed to machine learning) is that of children, who learn to speak, read and play. Learning to speak means segmenting and categorizing sounds, and associating them with meanings. A child also learns simultaneously the structure of his or her mother tongue and acquires a set of words describing the world around him or her. Several techniques are possible, ranging from rote learning, generalization, discovery, more or less supervised or autonomous learning, etc. The idea in artificial intelligence is to take inspiration from the functioning of the brain in order to learn, to allow “artificial” or “automatic” learning by a machine. A first application was to teach a machine to play a game (tic-tac-toe, chess, go, etc.). An essential step is to explain the objective it must achieve to win. One historical approach has been to teach the machine the rules of the game: while this allows it to play, it does not help it play well. Assuming that the machine knows the rules of the game, and that it has a choice between several dozen possible moves, which one should it choose? The classical approach in artificial intelligence uses the so-called min-max algorithm with an evaluation function: in this algorithm, the machine searches forward in the tree of possible moves, as far as the computing resources allow (about ten moves ahead in chess, for example). Then it calculates various criteria (which have been previously indicated to it) for all positions (number of pieces taken or lost, occupancy of the centre, etc., in our chess example), and finally the machine plays the move that maximizes its gain. Another example is the classification and recognition of images or shapes. For example, the machine must identify a handwritten digit (cheques, ZIP codes on envelopes, etc.). It is a question of predicting the value of a variable y, knowing that a priori y\in\{0,1,2,\cdots,8,9\}. A classical strategy is to provide the machine with training databases, in other words here millions of labelled (identified) images of handwritten digits. A simple (and natural) strategy is to use a decision criterion based on the nearest neighbours whose labels are known (using a predefined metric).

The nearest-neighbour method (“k-nearest neighbours”) can be described as follows: we consider (as in the previous part) a set of n observations, i.e. pairs (y_i,\mathbf{x}_i) with \mathbf{x}_i\in\mathbb{R}^p. Let us consider a distance \Delta on \mathbb{R}^p (the Euclidean distance or the Mahalanobis distance, for example). Given a new observation \mathbf{x}\in\mathbb{R}^p, let us order the observations by their distance to \mathbf{x}, in the sense that \Delta(\mathbf{x}_1, \mathbf{x})\leq\Delta(\mathbf{x}_2, \mathbf{x})\leq\cdots\leq\Delta(\mathbf{x}_n, \mathbf{x}); then we can consider as prediction for y the average of the k nearest neighbours, \widehat{m}_k(\mathbf{x})=\frac{1}{k}\sum_{i=1}^k y_i. Learning here works by induction, based on a sample (called the learning – or training – sample).
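A minimal Python sketch of this k-nearest-neighbours predictor, on simulated data and with the Euclidean distance:

import numpy as np

def knn_predict(X, y, x_new, k=5):
    # average of the y's of the k nearest neighbours of x_new (Euclidean distance)
    d = np.sqrt(((X - x_new) ** 2).sum(axis=1))
    nearest = np.argsort(d)[:k]
    return y[nearest].mean()

rng = np.random.default_rng(6)
X = rng.uniform(-1, 1, size=(500, 2))
y = X[:, 0] ** 2 + X[:, 1] + rng.normal(scale=0.1, size=500)

x_new = np.array([0.2, -0.3])
print(round(knn_predict(X, y, x_new, k=10), 3))   # compare with the true m(x) = 0.04 - 0.3 = -0.26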

Automatic learning includes those algorithms that give computers the ability to learn without being explicitly programmed (as Arthur Samuel defined it in 1959). The machine will then explore the data with a specific objective (such as searching for the nearest neighbours in the example just described). Tom Mitchell proposed a more precise definition in 1998: a computer program is said to learn from experience E with respect to a task T and a performance measure P, if its performance on T, measured by P, improves with experience E. Task T can be a default-scoring task, for example, and performance P can be the percentage of errors made. The system learns if the share of correctly predicted defaults increases with experience.

As we can see, machine learning is basically a problem of optimizing a criterion based on data (from now on called training data). Many textbooks on machine learning techniques propose algorithms without ever mentioning any probabilistic model. In Watt et al. (2016), for example, the word “probability” is mentioned only once, with this footnote that will surprise and amuse any econometrician: “the logistic regression can also be interpreted from a probabilistic perspective” (page 86). But many recent books offer a review of machine learning approaches using probabilistic theories, following the work of Valiant and Vapnik. By proposing the “probably approximately correct” (PAC) learning paradigm, a probabilistic flavour has been added to what was previously a very computational approach, by quantifying the error of the learning algorithm (usually in a classification problem).

To be continued (references are online here)…

AI to predict riots?

A few weeks ago, I was contacted by a journalist who wanted to ask me some questions, following our article Tents, Tweets, and Events: The Interplay Between Ongoing Protests and Social Media. It was an opportunity to dive back into it… and to see what has been written since… And tonight, I discovered, somewhat by chance, that the article has been published, in the February issue of Science & Vie…

Probabilistic Foundations of Econometrics, part 3

This post is the third one of our series on the history and foundations of econometric and machine learning models. Part 2 is online here.

Exponential family and linear models

The Gaussian linear model is a special case of a large family of linear models, obtained when the conditional distribution of Y (given the covariates) belongs to the exponential family f(y_i|\theta_i,\phi)=\exp\left(\frac{y_i\theta_i-b(\theta_i)}{a(\phi)}+c(y_i,\phi)\right) with \theta_i=\psi(\mathbf{x}_i^T \beta). Functions a, b and c are specified according to the type of exponential distribution (studied extensively in statistics since Darmois (1935), as Brown (1986) reminds us), and \psi is a one-to-one mapping that the user must specify. The log-likelihood then has a simple expression \log\mathcal{L}(\mathbf{\theta},\phi|\mathbf{y}) =\frac{\sum_{i=1}^ny_i\theta_i-\sum_{i=1}^nb(\theta_i)}{a(\phi)}+\sum_{i=1}^n c(y_i,\phi) and the first-order condition is then written \frac{\partial \log \mathcal{L}(\mathbf{\theta},\phi|\mathbf{y})}{\partial \mathbf{\beta}} = \mathbf{X}^T\mathbf{W}^{-1}[\mathbf{y}-\widehat{\mathbf{y}}]=\mathbf{0}, based on Müller’s (2011) notations, where \mathbf{W} is a weight matrix (which depends on \beta). Given the link between \theta and the expectation of Y, instead of specifying the function \psi(\cdot), we will tend to specify the link function g(\cdot) defined by \widehat{y}=m(\mathbf{x})=\mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=g^{-1} (\mathbf{x}^T \beta). For the Gaussian linear regression we consider an identity link, while for the Poisson regression the natural (so-called canonical) link is the logarithmic link. Here, as \mathbf{W} depends on \beta (with \mathbf{W}=\text{diag}(\nabla g(\widehat{\mathbf{y}})\text{Var}[\mathbf{y}])), there is generally no explicit formula for the maximum likelihood estimator. But an iterative algorithm makes it possible to obtain a numerical approximation. By setting \mathbf{z}=g(\widehat{\mathbf{y}})+(\mathbf{y}-\widehat{\mathbf{y}})\cdot\nabla g(\widehat{\mathbf{y}}), corresponding to the error term of a first-order Taylor expansion of g, we obtain an algorithm of the form \widehat{\beta}_{k+1}=[\mathbf{X}^T \mathbf{W}_k^{-1} \mathbf{X}]^{-1} \mathbf{X}^T \mathbf{W}_k^{-1} \mathbf{z}_k. By iterating, we define \widehat{\beta}=\widehat{\beta}_{\infty}, and we can show that – with some additional technical assumptions (detailed in Müller (2011)) – this estimator is asymptotically Gaussian, with \sqrt{n}(\widehat{\beta} -\beta)\overset{\mathcal{L}}{\rightarrow} \mathcal{N}(\mathbf{0},I(\beta)^{-1}) where numerically I(\beta)=\varphi\cdot[\mathbf{X}^T \mathbf{W}_\infty^{-1} \mathbf{X}].
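As an illustration, here is a minimal Python sketch of this iteratively reweighted least-squares scheme in the Poisson case with the log link (the working weights then reduce to the fitted means), on simulated data:

import numpy as np

def poisson_irls(X, y, n_iter=25):
    # iteratively reweighted least squares for the Poisson regression with log link
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)                    # fitted values, g^{-1}(X beta)
        z = eta + (y - mu) / mu             # working response, g(mu) + (y - mu) * g'(mu)
        W = mu                              # working weights for the log link
        XtWX = X.T @ (X * W[:, None])
        beta = np.linalg.solve(XtWX, X.T @ (W * z))
    return beta

# simulated Poisson counts (an intercept column is included in X)
rng = np.random.default_rng(7)
X = np.c_[np.ones(500), rng.normal(size=(500, 2))]
beta_true = np.array([0.5, 0.8, -0.4])
y = rng.poisson(np.exp(X @ beta_true))
print(np.round(poisson_irls(X, y), 2))      # should be close to beta_true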

From a numerical point of view, the computer solves the first-order condition, and the law of Y does not really come into play. For example, one can estimate a “Poisson regression” even when observations are not integers (as long as they are non-negative). In other words, the law of Y is only an interpretation here, and the algorithm could be introduced in a different way (as we will see later on), without necessarily relying on an underlying probabilistic model.
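As a small illustration of this point, here is a minimal sketch (in Python; the function name poisson_irls and the simulated data are purely illustrative) of the iterative algorithm above, written for the Poisson case with log link, where \mathbf{W}_k^{-1}=\text{diag}(\widehat{\mathbf{y}}): the response used is positive but not integer-valued, and the algorithm runs without complaint.

```python
import numpy as np

def poisson_irls(X, y, n_iter=25, tol=1e-8):
    """Iteratively reweighted least squares for a Poisson regression with log link,
    following beta_{k+1} = (X' W_k^{-1} X)^{-1} X' W_k^{-1} z_k,
    where W_k^{-1} = diag(mu) and z_k = eta + (y - mu)/mu."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu        # working response
        w = mu                          # diagonal of W^{-1}
        XtW = X.T * w                   # X' W^{-1}
        beta_new = np.linalg.solve(XtW @ X, XtW @ z)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# The algorithm never checks that y contains integers: any positive
# response can be used, the Poisson law being only an interpretation.
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = np.exp(0.5 + 0.8 * X[:, 1]) * rng.gamma(shape=2, scale=0.5, size=n)  # positive, non-integer
print(poisson_irls(X, y))  # roughly (0.5, 0.8)
```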

Logistic Regression

Logistic regression is the generalized linear model obtained with a Bernoulli law, and a link function which is the quantile function of a logistic distribution (which corresponds to the canonical link in the sense of the exponential family). Taking into account the form of the Bernoulli law, econometrics proposes a model for y_i\in\{0,1\}, in which the logarithm of the odds follows a linear model: \log\left(\frac{\mathbb{P}[Y=1\vert \mathbf{X}=\mathbf{x}]}{\mathbb{P}[Y\neq 1\vert \mathbf{X}=\mathbf{x}]}\right)=\beta_0+\mathbf{x}^T\beta or \mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=\mathbb{P}[Y=1|\mathbf{X}=\mathbf{x}]=\frac{e^{\beta_0+\mathbf{x}^T\beta}}{1+ e^{\beta_0+\mathbf{x}^T\beta}}=H(\beta_0+\mathbf{x}^T\beta) where H(\cdot)=\exp(\cdot)/(1+\exp(\cdot)) is the cumulative distribution function of the logistic distribution. The estimation of (\beta_0,\beta) is performed by maximizing the likelihood: \mathcal{L}=\prod_{i=1}^n \left(\frac{e^{\mathbf{x}_i^T\mathbf{\beta}}}{1+e^{\boldsymbol{x}_i^T\mathbf{\beta}}}\right)^{y_i}\left(\frac{1}{1+e^{\mathbf{x}_i^T\mathbf{\beta}}}\right)^{1-y_i} It is said to be a linear model because the isoprobability curves here are the parallel hyperplanes \beta_0+\mathbf{x}^T\beta=\text{constant}. Rather than this model, popularized by Berkson (1944), some will prefer the probit model (see Berkson, 1951), introduced by Bliss (1934). In this model: \mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=\mathbb{P}[Y=1|\mathbf{X}=\mathbf{x}]=\Phi (\beta_0+\mathbf{x}^T\beta)

where \Phi denotes the cumulative distribution function of the standard normal distribution. This model has the advantage of having a direct link with the Gaussian linear model, since y_i=\mathbf{1}(y_i^\star>0) with y_i^\star=\beta_0+\mathbf{x}^T \beta+\varepsilon_i where the residuals are Gaussian, \mathcal{N}(0,\sigma^2). An alternative is to have centered residuals with unit variance, and to consider a latent model of the form y_i=\mathbf{1}(y_i^\star>\xi) (where \xi is fixed). As we can see, these techniques are fundamentally linked to an underlying stochastic model. In the body of the article, we present several alternative techniques, from the learning literature, for this classification problem (with two classes, here 0 and 1).
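A minimal sketch of the maximum likelihood estimation described above, by direct numerical maximization of the (log-)likelihood rather than by the IRLS algorithm of the previous section; the simulated data and names are purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(beta, X, y):
    """Negative log-likelihood of the logistic model:
    sum_i log(1 + exp(x_i'beta)) - y_i * x_i'beta."""
    eta = X @ beta
    return np.sum(np.logaddexp(0.0, eta) - y * eta)

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept + 2 covariates
true_beta = np.array([-0.3, 1.0, -0.5])
p = 1 / (1 + np.exp(-X @ true_beta))
y = rng.binomial(1, p)

res = minimize(neg_log_likelihood, x0=np.zeros(3), args=(X, y), method="BFGS")
print(res.x)   # should be close to (-0.3, 1.0, -0.5)
```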

Regression in high dimension

As mentioned earlier, the first-order condition \mathbf{X}^T (\mathbf{X}\widehat{\beta}-\mathbf{y})=\mathbf{0} is solved numerically by performing a QR decomposition, at a cost of O(np^2) operations (where p is the rank of \mathbf{X}^T \mathbf{X}). Numerically, this calculation can be long (either because p is large or because n is large), and a simpler strategy may be to sub-sample. Let n_s\ll n, and consider a sub-sample of size n_s drawn from \{1,\cdots,n\}. Then \widehat{\beta}_s=(\mathbf{X}_s^T \mathbf{X}_s )^{-1} \mathbf{X}_s^T\mathbf{y}_s is a good approximation of \beta, as shown by Dhillon et al. (2014). However, this algorithm is dangerous if some points have a high leverage (i.e. L_i=\mathbf{x}_i(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{x}_i^T). Tropp (2011) proposes to transform the data (in a linear way), but a more popular approach is to use non-uniform sub-sampling, with a probability related to the influence of the observations (defined by I_i=\widehat{\varepsilon}_iL_i/(1-L_i)^2, which unfortunately can only be computed once the model has been estimated).
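Below is a rough sketch of the sub-sampling idea, in Python. Since the influence I_i requires the fitted model, the sampling probabilities are here taken proportional to the leverage L_i only, as a crude proxy, and no reweighting of the sub-sample is applied; the simulated design and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n, p, n_s = 10000, 5, 500
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta = rng.normal(size=p)
y = X @ beta + rng.normal(size=n)

# Leverage of each observation, L_i = x_i (X'X)^{-1} x_i'
XtX_inv = np.linalg.inv(X.T @ X)
leverage = np.einsum("ij,jk,ik->i", X, XtX_inv, X)

# Non-uniform sub-sampling, with probability proportional to leverage
prob = leverage / leverage.sum()
idx = rng.choice(n, size=n_s, replace=False, p=prob)
Xs, ys = X[idx], y[idx]
beta_s = np.linalg.solve(Xs.T @ Xs, Xs.T @ ys)
print(np.round(np.c_[beta, beta_s], 3))   # true coefficients vs sub-sample estimate
```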

In general, we will talk about massive data when the data table (of size n\times p) does not fit in the RAM of the computer. This situation is often encountered in statistical learning nowadays, very often with p\ll n. This is why, in practice, many libraries of algorithms assimilated to machine learning use iterative methods to solve the first-order condition. When the parametric model to be calibrated is convex and semi-differentiable, it is possible to use, for example, the stochastic gradient descent method, as suggested by Bottou (2010). This method avoids, at each iteration, computing the gradient over every observation of the learning sample. Rather than making an average descent at each iteration, we start by drawing (without replacement) an observation \mathbf{x}_i among the n available. The model parameters are then corrected so that the prediction made from \mathbf{x}_i is as close as possible to the true value y_i. The method is then repeated until all the data have been reviewed. There are therefore as many iterations as there are observations. Unlike the gradient descent algorithm (or Newton’s method), at each iteration only one gradient vector is computed (and no longer n). However, it is sometimes necessary to run this algorithm several times to improve the convergence of the model parameters. If the objective is, for example, to minimize a loss function \ell between the estimator m_\beta (\mathbf{x}) and y (like the quadratic loss function, as in Gaussian linear regression), the algorithm can be summarized as follows:

  • Step 0: Mix the data
  • Iteration step: For t=1,\cdots, n, we draw i\in\{1,\cdots,n\} without replacement, and we set \beta^{t+1} = \beta^{t} - \gamma_t\frac{ \partial{\ell(y_i,m_{\beta^t}(\mathbf{x}_i)) } }{ \partial{ \beta}}

This whole algorithm can be repeated several times, depending on the user’s needs (a short sketch is given below). The advantage of this method is that at each iteration it is not necessary to compute the gradient over all observations (there is no longer a sum over the whole sample). It is therefore suitable for large databases. This algorithm is based on a convergence in probability towards a neighborhood of the optimum (and not the optimum itself).
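Here is a minimal sketch of that stochastic gradient descent, for the quadratic loss \ell(y,m_\beta(\mathbf{x}))=(y-\mathbf{x}^T\beta)^2/2; the constant step size and the function name sgd_quadratic are arbitrary choices made for the illustration.

```python
import numpy as np

def sgd_quadratic(X, y, gamma=0.01, n_epochs=5, seed=0):
    """Stochastic gradient descent for the quadratic loss (y - x'beta)^2 / 2,
    whose gradient, for a single observation, is -(y - x'beta) * x."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_epochs):                 # the whole pass can be repeated
        for i in rng.permutation(n):          # shuffle, then draw without replacement
            grad = -(y[i] - X[i] @ beta) * X[i]
            beta = beta - gamma * grad
    return beta

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.5, size=n)
print(sgd_quadratic(X, y))   # close to (1, 2, -1), up to SGD noise
```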

(references will be given in the very last post of that series) To be continued

Probabilistic Foundations of Econometrics, part 2

This post is the second one of our series on the history and foundations of econometric and machine learning models. Part 1 is online here.

Geometric Properties of this Linear Model

Let’s define the scalar product in \mathbb{R}^n, ⟨\mathbf{a},\mathbf{b}⟩=\mathbf{a}^T\mathbf{b}, and denote by \|\cdot\| the associated Euclidean norm, \|\mathbf{a}\|=\sqrt{\mathbf{a}^T\mathbf{a}} (denoted \|\cdot\|_{\ell_2} in the next post). Denote by \mathcal{E}_X the space generated by all linear combinations of the components of \mathbf{X} (adding the constant). If the explanatory variables are linearly independent, \mathbf{X} is a full (column) rank matrix and \mathcal{E}_X is a space of dimension p+1. Let’s assume from now on that the variables \mathbf{x} and y are centered. Note that no distributional assumption is made in this section; the geometric properties are derived from the properties of expectation and variance in the space of variables with finite variance.

With this notation, it should be noted that the linear model is written m(\mathbf{x})=⟨\mathbf{x},\beta⟩. The space H_z=\{\mathbf{x}\in\mathbb{R}^{p+1}:m(\mathbf{x})=z\} is an (affine) hyperplane that separates the space in two. Let’s define the orthogonal projection operator on \mathcal{E}_X, \Pi_X =\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1} \mathbf{X}^T. The forecast is then \widehat{\mathbf{y}}=\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1} \mathbf{X}^T\mathbf{y}=\Pi_X\mathbf{y}. Since \widehat{\varepsilon}=\mathbf{y}-\widehat{\mathbf{y}}=(\mathbb{I}-\Pi_X)\mathbf{y}=\Pi_{X^\perp}\mathbf{y}, we note that \widehat{\varepsilon}\perp\mathbf{x}, which will be interpreted as meaning that the residuals are an innovation term, unpredictable in the sense that \Pi_{X}\widehat{\varepsilon}=\mathbf{0}. The Pythagorean theorem is written here: \Vert \mathbf{y} \Vert^2=\Vert \Pi_{ {X}}\mathbf{y} \Vert^2+\Vert \Pi_{ {X}^\perp}\mathbf{y} \Vert^2=\Vert \Pi_{ {X}}\mathbf{y}\Vert^2+\Vert \mathbf{y}-\Pi_{ {X}}\mathbf{y}\Vert^2=\Vert\widehat{\mathbf{y}}\Vert^2+\Vert\widehat{\mathbf{\varepsilon}}\Vert^2 which is classically translated in terms of sums of squares: \underbrace{\sum_{i=1}^n y_i^2}_{n\times\text{total variance}}=\underbrace{\sum_{i=1}^n \widehat{y}_i^2}_{n\times\text{explained variance}}+\underbrace{\sum_{i=1}^n (y_i-\widehat{y}_i)^2}_{n\times\text{residual variance}} The coefficient of determination, R^2, is then interpreted as the square of the cosine of the angle \theta between \mathbf{y} and \Pi_X \mathbf{y}: R^2=\frac{\Vert \Pi_{{X}} \mathbf{y}\Vert^2}{\Vert \mathbf{y}\Vert^2}=1-\frac{\Vert \Pi_{ {X}^\perp} \mathbf{y}\Vert^2}{\Vert \mathbf {y}\Vert^2}=\cos^2(\theta). An important application was obtained by Frisch & Waugh (1933), when the explanatory variables are divided into two groups, \mathbf{X}=[\mathbf{X}_1 |\mathbf{X}_2], so that the regression becomes y=\beta_0+\mathbf{X}_1 \beta_1+\mathbf{X}_2 \beta_2+\varepsilon. Frisch & Waugh (1933) showed that two successive projections could be considered. Indeed, if \mathbf{y}_2^\star=\Pi_{X_1^\perp} \mathbf{y} and \mathbf{X}_2^\star=\Pi_{X_1^\perp}\mathbf{X}_2, we can show that \widehat{\beta}_2=[{\mathbf{X}_2^\star}^T \mathbf{X}_2^\star]^{-1}{\mathbf{X}_2^\star}^T \mathbf{y}_2^\star. In other words, the overall estimate is equivalent to the combination of independent estimates of the two models if \mathbf{X}_2^\star=\mathbf{X}_2, i.e. \mathbf{X}_2\in \mathcal{E}_{X_1}^\perp, which can be noted \mathbf{x}_1\perp\mathbf{x}_2. We obtain here the Frisch-Waugh theorem, which guarantees that if the explanatory variables in the two groups are orthogonal, then the overall estimate is equivalent to two independent regressions, on each of the sets of explanatory variables. This is a double projection theorem, on orthogonal spaces. Many results and interpretations are obtained through geometric interpretations (fundamentally related to the links between the conditional expectation and the orthogonal projection in the space of variables with finite variance).
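The Frisch-Waugh result can easily be checked numerically; here is a small sketch in Python (the helper ols and the simulated design are purely illustrative), comparing the coefficients of \mathbf{X}_2 in the full regression with those of the double-projection regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X1 = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
X2 = rng.normal(size=(n, 2))
y = X1 @ np.array([1.0, 0.5, -0.5]) + X2 @ np.array([2.0, -1.0]) + rng.normal(size=n)

def ols(X, y):
    return np.linalg.solve(X.T @ X, X.T @ y)

# Full regression on [X1 | X2]
beta_full = ols(np.column_stack([X1, X2]), y)

# Frisch-Waugh: project y and X2 on the orthogonal of E_{X1}, then regress
P1 = X1 @ np.linalg.solve(X1.T @ X1, X1.T)   # projection onto E_{X1}
M1 = np.eye(n) - P1                           # projection onto its orthogonal
beta2_fw = ols(M1 @ X2, M1 @ y)

print(beta_full[-2:])   # coefficients of X2 in the full regression
print(beta2_fw)         # identical, up to numerical error
```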

This geometric interpretation might help to get a better understanding of the problem of under-identification, i.e. the case where the real model would be y_i=\beta_0+ \mathbf{x}_1^T \beta_1+\mathbf{x}_2^T \beta_2+\varepsilon_i, but the estimated model is y_i=b_0+\mathbf{x}_1^T \mathbf{b}_1+\eta_i. The maximum likelihood estimator of \mathbf{b}_1 is \widehat{\mathbf{b}}_1=\mathbf {\beta}_1 + \underbrace{ (\mathbf {X}_1^T\mathbf {X}_1)^{-1} \mathbf {X}_1^T \mathbf {X}_{2} \mathbf{\beta}_2}_{\mathbf{\beta}_{12}}+\underbrace{(\mathbf{X}_1^{T}\mathbf{X}_1)^{-1} \mathbf{X}_1^T\varepsilon}_{\nu} so that \mathbb{E}[\widehat{\mathbf{b}}_1]=\beta_1+\beta_{12}, the bias (\beta_{12}) being null only when \mathbf{X}_1^T \mathbf{X}_2=\mathbf{0} (i.e. \mathbf{X}_1\perp \mathbf{X}_2): here we find a consequence of the Frisch-Waugh theorem.

On the other hand, over-identification corresponds to the case where the real model would be y_i=\beta_0+\mathbf{x}_1^T \beta_1+\varepsilon_i, but the estimated model is y_i=b_0+ \mathbf{x}_1^T \mathbf{b} _1+\mathbf{x}_2^T \mathbf{b}_2+\eta_i. In this case, the estimate is unbiased, in the sense that \mathbb{E}[\widehat{\mathbf{b}}_1]=\beta_1, but the estimator is not efficient. Later on, we will discuss an effective method for selecting variables (and avoiding over-identification).

From parametric to non-parametric

We can rewrite equation (4) in the form \widehat{\mathbf{y}}=\Pi_X\mathbf{y}, which helps us see the forecast directly as a linear transformation of the observations. More generally, a linear predictor can be obtained by considering m(\mathbf{x})=\mathbf{s}_{\mathbf{x}}^T \mathbf{y}, where \mathbf{s}_{\mathbf{x}} is a weight vector, which depends on \mathbf{x}, interpreted as a smoothing vector. Using the vectors \mathbf{s}_{\mathbf{x}_i}, calculated from the observations \mathbf{x}_i, we obtain a matrix \mathbf{S} of size n\times n, and \widehat{\mathbf{y}}=\mathbf{S}\mathbf{y}. In the case of the linear regression described above, \mathbf{s}_{\mathbf{x}}=\mathbf{X}[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{x}, and in that case \text{trace}(\mathbf{S}) is the number of columns of the \mathbf{X} matrix (the number of explanatory variables). In this context of more general linear predictors, \text{trace}(\mathbf{S}) is often seen as an equivalent of the number of parameters (or the complexity, or dimension, of the model), and \nu=n-\text{trace}(\mathbf{S}) is then the number of degrees of freedom (see Ruppert et al., 2003; Simonoff, 1996). The principle of parsimony says that we should minimize this dimension (the trace of the matrix \mathbf{S}) as much as possible. But in the general case, this dimension is harder to obtain explicitly.
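As a quick sketch of this quantity in the linear regression case (with a simulated design, for illustration only), the trace of the smoothing matrix is indeed the number of columns of \mathbf{X}:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])  # intercept + p covariates
y = rng.normal(size=n)

# Smoothing ("hat") matrix of the linear regression: S = X (X'X)^{-1} X'
S = X @ np.linalg.solve(X.T @ X, X.T)
print(np.trace(S))        # = p + 1, the number of columns of X
print(n - np.trace(S))    # residual degrees of freedom
```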

The estimator introduced by Nadaraya (1964) and Watson (1964), in the case of a simple non-parametric regression, is also written in this form since \widehat{m}_h(x)=\mathbf{s}_{x}^T\mathbf{y}=\sum_{i=1}^n \mathbf{s}_{x,i}y_i where \mathbf{s}_{x,i}=\frac{K_h(x-x_i)}{K_h(x-x_1)+\cdots+K_h(x-x_n)} where K(\cdot) is a kernel function, which assigns a weight that decreases as x_i moves away from x, and h>0 is the bandwidth. The introduction of this meta-parameter h is an important issue, as it should be chosen wisely. Using asymptotic expansions, we can show that if X has density f, \text{bias}[\widehat{m}_h(x)]=\mathbb{E}[\widehat{m}_h(x)]-m(x)\sim {h^2}\left(\frac{C_1 }{2}m''(x)+C_2 m'(x)\frac{f'(x)}{f(x)}\right) and \displaystyle{{\text{Var}[\widehat{m}_h(x)]\sim\frac{C_3}{{nh}}\frac{\sigma(x)}{f(x)}}} for some constants that can be estimated (see Simonoff (1996) for a discussion). These two terms evolve in opposite directions with h, as shown in Figure 1 (where the meta-parameter on the x-axis is actually h^{-1}). Keep in mind that we will see a similar graph in the context of machine learning models.

Figure 1. Choice of meta-parameter and the Goldilocks problem: it must not be too large (otherwise there is too much variance), nor too small (otherwise there is too much bias).
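A minimal sketch of the Nadaraya-Watson estimator (Gaussian kernel, hypothetical function name nadaraya_watson, simulated data), where varying h reproduces the Goldilocks problem of Figure 1: a bandwidth that is too small or too large both increase the error.

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, h):
    """Nadaraya-Watson estimator with a Gaussian kernel:
    m_h(x) = sum_i K_h(x - x_i) y_i / sum_i K_h(x - x_i)."""
    weights = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / h) ** 2)
    return (weights @ y) / weights.sum(axis=1)

rng = np.random.default_rng(7)
n = 300
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)
x_grid = np.linspace(0.05, 0.95, 50)

for h in (0.01, 0.1, 0.5):   # small h: wiggly (variance); large h: oversmoothed (bias)
    m_hat = nadaraya_watson(x_grid, x, y, h)
    err = np.mean((m_hat - np.sin(2 * np.pi * x_grid)) ** 2)
    print(h, round(err, 4))
```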

The natural idea is then to try to minimize the mean squared error, the MSE, defined as \text{bias}[\widehat{m}_h (x)]^2+\text{Var}[\widehat{m}_h (x)], and then to integrate over x, which gives an optimal value for h of the form h^\star=O(n^{-1/5}), and reminds us of Silverman’s rule (see Silverman (1986)). In higher dimensions, for continuous \mathbf{x} variables, a multivariate kernel with bandwidth matrix \mathbf{H} can be used, and \mathbb{E}[\widehat{m}_{\mathbf{H}}(\mathbf{x})]\sim m(\mathbf{x})+\frac{C_1}{2}\text{trace}\big(\mathbf{H}^Tm''(\mathbf{x})\mathbf{H}\big)+C_2\frac{m'(\boldsymbol{x})^T\mathbf{H}\mathbf{H}^T \nabla f(\mathbf{x})}{f(\mathbf{x})} while \text{Var}[\widehat{m}_{\mathbf{H}}(\mathbf{x})]\sim\frac{C_3}{n~\text{det}(\mathbf{H})}\frac{\sigma(\mathbf{x})}{f(\mathbf{x})}
If \mathbf{H} is a diagonal matrix, with the same term h on the diagonal, then h^\star=O(n^{-1/(4+\dim(\mathbf{x}))}). However, in practice, there will be more interest in the integrated version of the quadratic error, MISE(\widehat{m}_{h})=\mathbb{E}[MSE(\widehat{m}_{h}(X))]=\int MSE(\widehat{m}_{h}(x))dF(x), and we can prove that MISE[\widehat{m}_h]\sim \overbrace{\frac{h^4}{4}\left(\int x^2k(x)dx\right)^2\int\big[m''(x)+2m'(x)\frac{f'(x)}{f(x)}\big]^2dx}^{\text{bias}^2} +\overbrace{\frac{\sigma^2}{nh}\int k^2(x)dx \cdot\int\frac{dx}{f(x)}}^{\text{variance}} as n\rightarrow\infty and nh\rightarrow\infty. Here we find an asymptotic relationship that again recalls Silverman’s (1986) order of magnitude, h^\star =n^{-\frac{1}{5}}\left(\frac{C_1\int \frac{dx}{f(x)}}{C_2\int \big[m''(x)+2m'(x)\frac{f'(x)}{f(x)}\big]dx}\right)^{\frac{1}{5}} The main problem here, in practice, is that many of the terms in this expression are unknown. Machine learning offers computational techniques to deal with this, whereas the econometrician is used to looking for asymptotic (mathematical) properties.

To be continued (references mentioned above are online here)…

Probabilistic Foundations of Econometrics, part 1

In a series of posts, I wanted to get into the details of the history and foundations of econometric and machine learning models. It will be some sort of online version of our joint paper with Emmanuel Flachaire and Antoine Ly, Econometrics and Machine Learning (initially written in French), that will appear soon in the journal Economics and Statistics. This is the first one…

The importance of probabilistic models in economics is rooted in Working’s (1927) questions and the attempts to answer them in Tinbergen’s two volumes (1939). The latter have subsequently generated a great deal of work, as recalled by Duo (1993) in his book on the foundations of econometrics, and more particularly in its first chapter, “The Probability Foundations of Econometrics”. It should be recalled that Trygve Haavelmo was awarded the Nobel Prize in Economics in 1989 for his “clarification of the foundations of the probabilistic theory of econometrics”. Indeed, as Haavelmo (1944) showed (initiating a profound change in econometric theory in the 1930s, as recalled in Morgan’s Chapter 8 (1990)), econometrics is fundamentally based on a probabilistic model, for two main reasons. First, the use of statistical quantities (or “measures”) such as means, standard errors and correlation coefficients for inferential purposes can only be justified if the process generating the data can be expressed in terms of a probabilistic model. Second, the probability approach is relatively general, and is particularly well suited to the analysis of “dependent” and “non-homogeneous” observations, as they are often found in economic data. We will then assume that there is a probabilistic space (\Omega,\mathcal{F},\mathbb{P}) such that observations (y_i,\mathbf{x}_i) are seen as realizations of random variables (Y_i, \mathbf{X}_i). In practice, however, we are not very interested in the joint law of the couple (Y, \mathbf{X}): the law of \mathbf{X} is unknown, and it is the law of Y conditional on \mathbf{X} that we will be interested in. In the following, we will denote by x a single observation, \mathbf{x} a vector of observations, X a random variable, and \mathbf{X} a random vector. Abusively, \mathbf{X} may also designate the matrix of individual observations (denoted \mathbf{x}_i), depending on the context.

Foundations of mathematical statistics

As recalled in Vapnik’s (1998) introduction, inference in parametric statistics is based on the following belief: the statistician knows the problem to be analyzed well, in particular, he knows the physical law that generates the stochastic properties of the data, and the function to be found is written via a finite number of parameters[1]. To find these parameters, the maximum likelihood method is used. The purpose of the theory is to justify this approach (by discovering and describing its favorable properties). We will see that in learning, the philosophy is very different, since we do not have reliable a priori information on the statistical law underlying the problem, nor even on the function we would like to approximate (we will then propose methods to construct an approximation from the data at our disposal, as in Vapnik (1998)). A “golden age” of parametric inference, from 1930 to 1960, laid the foundations of mathematical statistics, which can still be found in all statistical textbooks today. As Vapnik (1998) states, the classical parametric paradigm is based on the following three beliefs:

  1. To find a functional relationship from the data, the statistician is able to define a set of functions, linear in their parameters, that contain a good approximation of the desired function. The number of parameters describing this set is small.
  2. The statistical law underlying the stochastic component of most real-life problems is the normal law. This belief has been supported by reference to the central limit theorem, which stipulates that under fairly general conditions the sum of a large number of random variables is approximately normally distributed.
  3. The maximum likelihood method is a good tool for estimating parameters.

In this section we will come back to the construction of the econometric paradigm, directly inspired by that of classical inferential statistics.

Conditional laws and likelihood

Linear econometrics has been constructed under the assumption of individual data, which amounts to assuming that the variables (Y_i, \mathbf{X}_i) are independent (it is possible to imagine temporal observations, in which case we would have a process (Y_t, \mathbf{X}_t), but we will not discuss time series here). More precisely, we will assume that, conditionally on the explanatory variables \mathbf{X}_i, the variables Y_i are independent. We will also assume that these conditional laws remain in the same parametric family, but that the parameter is a function of \mathbf{x}. In the Gaussian linear model it is assumed that: (Y\vert \mathbf{X}=\mathbf{x})\overset{\mathcal{L}}{\sim}\mathcal{N}(\mu(\mathbf{x}),\sigma^2)~~~~ (1) where \mu(\mathbf{x})=\beta_0+\mathbf{x}^T\mathbf{\beta} and \mathbf{\beta}\in\mathbb{R}^{p}.

It is usually called a ‘linear’ model since \mathbb{E}[Y\vert \mathbf{X}=\mathbf{x}]=\beta_0+\mathbf{x}^T\mathbf{\beta} is a linear combination of the covariates[2]. It is said to be a homoscedastic model if Var[Y|\mathbf{X}=\mathbf{x}]=\sigma^2, where \sigma^2 is a positive constant. To estimate the parameters, the traditional approach is to use the Maximum Likelihood estimator, as initially suggested by Ronald Fisher. In the case of the Gaussian linear model, the log-likelihood is written: \log\mathcal{L}(\beta_0, \mathbf{\beta},\sigma^2\vert \mathbf{y},\mathbf{x}) = -\frac{n}{2}\log[2\pi\sigma^2] - \frac{1}{2\sigma^2}\sum_{i=1}^n (y_i-\beta_0-\mathbf{x}_i^T\mathbf{\beta})^2 Note that the term on the right, measuring a distance between the data and the model, will be interpreted as the deviance in generalized linear models. We will then set: (\widehat{\beta}_0,\widehat{\mathbf{\beta}},\widehat{\sigma}^2)=\text{argmax}\left\lbrace\log\mathcal{L}(\beta_0, \mathbf{\beta},\sigma^2\vert \mathbf{y},\mathbf{x})\right\rbrace The maximum likelihood estimator is thus obtained by minimizing the sum of squared errors (the so-called “least squares” estimator), which we will find again in the “machine learning” approach.

The first order conditions allow us to find the normal equations, whose matrix form is \mathbf{X}^T[\mathbf{y}-\mathbf{X}\mathbf{\beta}]=\mathbf{0}, which can also be written (\mathbf{X}^T \mathbf{X})\mathbf{\beta}=\mathbf{X}^T \mathbf{y}. If \mathbf{X} is a full (column) rank matrix, then we find the classical estimator: \widehat{\mathbf{\beta}}=(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}=\mathbf{\beta}+(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^{T}\mathbf{\varepsilon}~~~(2) using the residual-based writing (as often in econometrics), \mathbf{y}=\mathbf{X}\mathbf{\beta}+\mathbf{\varepsilon}. The Gauss-Markov theorem ensures that this estimator is the unbiased linear estimator with minimum variance. It can then be shown that \widehat{\mathbf{\beta}}\sim\mathcal{N}(\mathbf{\beta},\sigma^2(\mathbf{X}^T\mathbf{X})^{-1}), and in particular, if we simply need the first two moments: \mathbb{E}[\widehat{\mathbf{\beta}}]=\mathbf{\beta}~~~Var[\widehat{\mathbf{\beta}}]=\sigma^2 [\mathbf{X}^T\mathbf{X}]^{-1}. In fact, the normality hypothesis makes it possible to establish a link with mathematical statistics, but it is possible to construct the estimator given by equation (2) without that Gaussian assumption. Hence, if we assume that Y|\mathbf{X} has the same distribution as \mathbf{x}^T\mathbf{\beta}+\varepsilon, where \mathbb{E}[\varepsilon]=0, Var[\varepsilon]=\sigma^2 and Cov[X_j,\varepsilon]=0 for all j, then \widehat{\mathbf{\beta}} is an unbiased estimator of \mathbf{\beta} with smallest variance[3] among unbiased linear estimators. Furthermore, even if normality does not hold at finite distance, this estimator is asymptotically Gaussian, with \sqrt{n}(\widehat{\mathbf{\beta}}-\mathbf{\beta})\overset{\mathcal{L}}{\rightarrow}\mathcal{N}(\mathbf{0},\mathbf{\Sigma}) as n\rightarrow\infty, for some matrix \mathbf{\Sigma}.
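A small Monte Carlo sketch (with an arbitrary simulated design) illustrating equation (2) and the two moments above, \mathbb{E}[\widehat{\beta}]=\beta and \text{Var}[\widehat{\beta}]=\sigma^2(\mathbf{X}^T\mathbf{X})^{-1}:

```python
import numpy as np

rng = np.random.default_rng(123)
n, sigma = 500, 2.0
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta = np.array([1.0, -2.0, 0.5])

# Monte Carlo check of E[beta_hat] = beta and Var[beta_hat] = sigma^2 (X'X)^{-1}
estimates = []
for _ in range(2000):
    y = X @ beta + rng.normal(scale=sigma, size=n)
    estimates.append(np.linalg.solve(X.T @ X, X.T @ y))   # normal equations
estimates = np.array(estimates)

print(estimates.mean(axis=0))                       # close to (1, -2, 0.5)
print(np.diag(np.cov(estimates.T)))                 # close to the diagonal of ...
print(np.diag(sigma**2 * np.linalg.inv(X.T @ X)))   # ... sigma^2 (X'X)^{-1}
```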
The condition of having a full rank \mathbf{X} matrix can be (numerically) demanding in large dimensions. If it is not satisfied, (\mathbf{X}^T \mathbf{X})^{-1}\mathbf{X}^T does not exist. If \mathbb{I} denotes the identity matrix, however, it should be noted that (\mathbf{X}^T \mathbf{X}+\lambda\mathbb{I})^{-1}\mathbf{X}^T always exists, whatever \lambda>0. This estimator is called the ridge estimator of level \lambda (introduced in the 1960s by Hoerl (1962), and associated with the regularization studied by Tikhonov (1963)). This estimator naturally appears in a Bayesian econometric context.
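A short sketch of the rank problem and of the ridge fix (the collinear design is artificial, and \lambda=1 is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100
x1 = rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, 2 * x1])   # third column collinear: X is not full rank
y = 1 + 3 * x1 + rng.normal(size=n)

# (X'X) is singular, so the OLS formula breaks down...
print(np.linalg.matrix_rank(X.T @ X))   # 2 < 3

# ...but (X'X + lambda I)^{-1} X'y always exists for lambda > 0
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
print(beta_ridge)
```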

Residuals

It is not uncommon to introduce the linear model from the distribution of the residuals, as we mentioned earlier. Thus, equation (1) is often written as: y_i=\beta_0+\mathbf{x}_i^T\mathbf{\beta}+\varepsilon_i~~~~(3) where the \varepsilon_i’s are realizations of independent and identically distributed (i.i.d.) random variables, from some \mathcal{N}(0,\sigma^2) distribution. With vector notation, we will write \mathbf{\varepsilon}\overset{\mathcal{L}}{\sim}\mathcal{N}(\mathbf{0},\sigma^2\mathbb{I}). The estimated residuals are defined as: \widehat{\varepsilon}_i =y_i-[\widehat{\beta}_0+\mathbf{x}_i^T\widehat{\mathbf{\beta}}] These (estimated) residuals are a basic tool for diagnosing the relevance of the model.

An extension of the model described by equation (1) has been proposed to take into account a possible heteroscedastic character: (Y\vert \mathbf{X}=\mathbf{x})\overset{\mathcal{L}}{\sim}\mathcal{N}(\mu(\mathbf{x}),\sigma^2(\mathbf{x})) where \sigma^2(\mathbf{x}) is a positive function of the explanatory variables. This model can be rewritten as: y_i=\beta_0+\mathbf{x}_i^T\mathbf{\beta}+\sigma(\mathbf{x}_i)\cdot\varepsilon_i where the residuals are always i.i.d., with unit variance, \varepsilon_i=\frac{y_i-[\beta_0+\mathbf{x}_i^T\mathbf{\beta}]}{\sigma(\mathbf{x}_i)}. While residual-based equations are popular in linear econometrics (when the dependent variable is continuous), this writing is no longer natural in counting models or logistic regression.

However, writing using an error term (as in equation (3)) raises many questions about the representation of an economic relationship between two quantities. For example, it can be assumed that there is a (linear, to begin with) relationship between the quantity of a traded good, q, and its price p. This allows us to imagine a supply equation q_i=\beta_0+\beta_1 p_i+u_i (u_i being an error term) where the quantity sold depends on the price, but in an equally legitimate way, one can imagine that the price depends on the quantity produced (what one could call a demand equation), p_i=\alpha_0+\alpha_1 q_i+v_i (v_i denoting another error term). Historically, the error term in equation (3) could be interpreted as an idiosyncratic error on the variable y, the so-called explanatory variables being assumed to be fixed, but this interpretation often makes the link between an economic relationship and a complicated economic model difficult, economic theory speaking abstractly about a relationship between magnitudes, while the econometric model imposes a specific form (which magnitude is y and which magnitude is x), as shown in more detail in Morgan (1990), Chapter 7.

(references mentioned above are online here). To be continued…

[1] This approach can be compared to structural econometrics, as presented for example in Kean (2010).

[2] Here, we will try to distinguish \beta_0, the intercept, and the other parameters \mathbf{\beta}, since they are considered differently in many extensions (e.g. regularization). Nevertheless, in many expressions \mathbf{\beta} will denote the joint vector (\beta_0, \mathbf{\beta}), for general formulas, to avoid too heavy notations.

[3] In the sense that the difference between variance matrices is a positive matrix.