The value of life

A short article, written jointly with Béatrice Cherrier… All comments are welcome!

In 1928, looking back on the Chinese revolution of 1925, André Malraux published his novel Les Conquérants, slipping in the line "I have learned that a life is worth nothing, but that nothing is worth a life". However appealing the formula, one suspects it will not be much help to a public decision-maker. In 2013, the Commissariat Général à la stratégie et à la prospective, in France, put the value of a life at 3 million euros. But where does this figure come from? And what does it really mean?

Is saving a life worth it?

Putting a figure on a life is a problem that insurers, but also public decision-makers, face far more often than it might seem. After the collapse of the World Trade Center in 2001, the US Congress passed the Air Transportation Safety and System Stabilization Act. This new law created a special fund to compensate the victims of the September 11, 2001 attacks. The amount of compensation, and who would be entitled to it, were to be decided by an all-powerful official. This "special master", Kenneth Feinberg, explains in a book recounting his experience (Feinberg 2006) that the government wanted to avoid an avalanche of personal-injury lawsuits, which could have thrown the airline industry into turmoil.

A very strict framework was therefore put in place: only victims "who received hospital treatment within 72 hours of the attacks", injured in the vicinity of the World Trade Center or the Pentagon, as well as their spouses and children (but not their parents), were declared eligible for compensation. The fund awarded more than 7 billion dollars to 5,560 victims and family members. Feinberg was legally required to calibrate the damages according to the "financial value" of the deceased victim. He thus had to explain to a firefighter's wife, for example, that her husband was worth less than an insurance broker.

In France, the recent switch to 80 km/h on two-way roads was also partly justified by the lives saved. While the Prime Minister welcomed, in January, a toll of 116 lives spared, the journalist Alba Ventura (2019) wondered on RTL: "if it only saves one life, isn't it worth it?" ("est-ce que ça ne vaut pas le coup ?"). The radio medium adds to the ambiguity: is she in fact asking whether it is worth the cost, "le coût" being a homophone of "le coup"? For the real problem is that of the methods used by the public authorities to put a price on a life, saved or lost.

The value of a human life as a marginal rate of substitution

At the end of the 1940s, the US Air Force sought to maximize the damage inflicted by its air raids on the Soviet Union. When a group of researchers at the RAND Corporation proposed flying a large number of cheap planes to decoy Soviet air defences, the Air Force generals rejected the idea, objecting that the cost of the lives of the sacrificed pilots did not appear in the calculations. As Spencer Banzhaf (2014) recounts, the defence economist Jack Hirshleifer then chose to compute the value of a pilot's life by including the cost of his training and replacement. This answer had the advantage of relying on directly monetary, and easily quantifiable, magnitudes. In the 1960s, under the influence of work on human capital, it was suggested to use instead an estimate of the discounted net wages earned over a pilot's career, supposed to reflect the material utility of the job. These methods remained in the tradition of those defined by Louis Dublin and Alfred Lotka for insurance companies in the interwar period (Cavalin 2016).

Although its title borrows an insurance company's advertising slogan, popularized by road-safety organizations ("The Life You Save May Be Your Own"), the article that Thomas Schelling, winner of the Nobel Prize in economics in 2005, published in 1968 breaks sharply with this tradition. He actually drew on the work of one of his students (and former military pilot), Jack Carlson, who had sought to assess whether certain safety investments (for pilots) were "worth the cost". The ejection system of the B-58 bomber, for example, cost on the order of 80,000 dollars, for a substantial gain in the probability of survival. It is this idea of linking the value of life to the notion of risk that allowed Schelling to develop the concept of the "statistical value" of life.

Schelling's major innovation was to involve citizens in the valuation of their own lives. Since it was pointless to ask them to put a figure on their own life point-blank, one could instead adapt Carlson's method and ask them, for example, how much they would be willing to spend on an airbag, or on a medical treatment, that would reduce their probability of dying by 1%. Thus, in a diagram with the probability of survival (or the residual life expectancy) on the x-axis and wealth on the y-axis, as in Figure 1, one can construct indifference curves linking wealth and survival: how much are we (marginally) willing to spend to gain a little life, statistically, either by reducing our probability of death or by lengthening our life expectancy? In the example below, the value of life is simply the derivative (the slope) of the indifference curve.

Figure 1: Trade-off between life expectancy and wealth.

The value of life is then not a constant, but depends on the situation one is in. Thus \text{VSL}=\frac{d\omega}{d\text{E}} or \text{VSL}=\frac{d\omega}{d\text{p}}, depending on whether it is computed with respect to a change in life expectancy \text{E} or in the probability of death \text{p}. If the residual life expectancy is larger or smaller (to the right or to the left), or if we are more or less wealthy (higher or lower), the slope will not be the same. A classic example is Russian roulette with a twelve-chamber revolver. Suppose there are 3 bullets: how much would we be willing to pay to remove one? And what would that amount become if there were 9 bullets and we wanted to remove several? Suppose the statistical value of life is 3 million euros. In the first case, the probability of death drops from 3/12 to 2/12, i.e. dp_1=1/12 (a relative drop of one third). In the second case, to obtain the same relative drop of one third, we would have to go from 9/12 to 6/12, i.e. dp_2=3/12. If the statistical value of life is assumed constant, then d\omega_2/d\omega_1=d\text{p}_2/d\text{p}_1=3, and we should be willing to spend three times as much for the same relative reduction in the probability of death.

Heuristically, in the second case we are in a rather desperate situation (a 3-in-4 chance of dying), so any way out is worth taking, whatever its price! This is what the convexity of the right-hand curve in Figure 1 captures: if my probability of death is high (to the right on the x-axis), I am willing to spend a lot for a small gain. This way of valuing one's own life, proposed by Schelling, is often called the "gunpoint value".
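
As a minimal numerical sketch of the arithmetic above (in R, with the constant 3-million-euro statistical value of life assumed in the text):

vsl <- 3e6           # assumed constant statistical value of life, in euros
dp1 <- 3/12 - 2/12   # removing 1 bullet out of 3
dp2 <- 9/12 - 6/12   # removing 3 bullets out of 9 (same relative drop of 1/3)
wtp1 <- vsl * dp1    # 250,000 euros
wtp2 <- vsl * dp2    # 750,000 euros
wtp2 / wtp1          # ratio of 3, as in the text

With the convex indifference curves of Figure 1, the second amount would actually be even larger than this constant-value calculation suggests.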

Saving my life, or someone else's?

But does this approach really answer the original question? The life saved by a costly and restrictive safety measure is rarely that of the person making the decision. This tension is particularly visible in the French debates on measuring the value of a life since, unlike in the United States, that measurement has largely been the work of state engineer-economists recruited to design public policies aimed at increasing the population's well-being. Road safety is at the origin of a founding article on the subject, presented by two ingénieurs des ponts et chaussées, Claude Abraham and Jacques Thédié, at the annual operations research conference held in Aix-en-Provence in 1960.

Answering the question "how much should a community spend to save a human life", they identify two types of elements to quantify. The "objective" elements, of an "economic" nature, can be quantified by discounting direct wage losses and losses of production and consumption, through a pragmatic reasoning that mixes human capital and macroeconomic analysis. For example, a man aged 41-45 has a production value twice that of a man aged 56-60, and a consumption value 50% higher. But the loss of a man over 65 is actually a gain, which shows the importance of including the "affective" elements. Since these are far harder to evaluate, the authors fall back on the amounts awarded by the courts in personal-injury cases, in particular the pretium doloris.
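
As a purely illustrative sketch of this discounting logic (the figures and the discount rate below are hypothetical, not those of Abraham and Thédié), the "economic" value amounts to a discounted sum of future production net of consumption:

production  <- 30000      # hypothetical annual net production, in euros
consumption <- 20000      # hypothetical annual consumption, in euros
discount    <- 0.04       # hypothetical discount rate
years <- 1:20             # remaining working years
sum((production - consumption) / (1 + discount)^years)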

As Daniel Benamouzig (2005) recounts, the theoretical, technical, ethical and metaphysical aspects of the principle and of the method of quantifying the value of a life presented by Abraham and Thédié, and in particular the application of such methods to health, were the subject of heated debates, which have still not been settled by consensus. Françoise Fabre (1970) notes, for example, that using the economic value of a life-year to decide whether systematic screening for cervical cancer should be introduced can, by construction, lead to a negative answer: the market value of women's work, which serves as the basis for the calculation, is much lower than that of men's work, which creates unequal treatment of men and women.

Adopting an ethical and theoretical framework borrowed from social choice, Jacques Drèze (1962) proposed an alternative method of calculation, closer to the one later developed by Schelling. A public decision should be based on the preferences of the community, obtained by aggregating individual utilities for the decision in question. One solution to the measurement and incommensurability problems raised by Abraham and Thédié is to ask citizens "how much should the community spend, in your view, to save a life?" The utility of life can be computed by identifying the individual's subjective willingness to pay to prolong his or her life by removing a given risk, Drèze adds. He concludes that his method leads to an estimate of the value of life much higher than the one reached by his colleagues. The sensitivity of the estimates to the calculation method is, still today, a major problem.

Several methods, several values?

Biausque (2011) reviews a large number of studies estimating the (statistical) value of life in the face of environmental, health and road risks, which can be summarized in Table 1.

                    Environment      Health      Road traffic
Number of studies            51         250                65
Mean (€)              2 455 982   2 574 149         4 884 853
Minimum (€)              24 427       4 450           267 615
Maximum (€)           7 641 706  22 100 000        17 500 000

Table 1: source Biausque (2011).

These calculations are clearly complex, and lead to very different orders of magnitude. The variability between individuals was already mentioned in Feinberg (2006), who explained that it could be economically sound to say that the life of a 25-year-old trader was "worth more" than that of a 45-year-old firefighter. But it is above all the variability across methods, also found in Hugonnier et al. (2018), that surprises, and disturbs, with a factor ranging from 1 to 20 depending on the method used.

                 Health status   Quintile 0%-20%   Quintile 40%-60%   Quintile 80%-100%
Statistical      ‘fair’                4 380 000          4 400 000           7 890 000
                 ‘very good’           8 800 000          8 830 000          12 135 000
Gunpoint         ‘fair’                  235 000            235 000             422 000
                 ‘very good’             590 000            590 000             650 000
Human capital                            250 000            390 000             525 000

Table 2: source Hugonnier et al. (2018).

Table 2 reports the statistical value (in the spirit of Drèze 1962), the value based on human-capital calculations, and a "gunpoint value", as a function of the wealth of the person who dies (quintiles) and of his or her health status (before death).

These tables show how difficult it is to put a value on the lives of the people involved in a fatal accident. One tries to imagine the value of the life of a "representative individual" (perhaps as a function of health, age, income). But how can we assign a value to a life that does not yet exist? Many decisions taken today also affect "future generations", that is, people who do not exist yet… Is it possible to put a value on the lives of these people? Yet that is precisely what must be done if we want to design a policy aimed at limiting global warming.

References

Abraham, C. & Thédié, J. 1960. Le prix d'une vie humaine dans les décisions économiques. Revue française de Recherche opérationnelle, 16: 157-168.

Banzhaf, Spencer H. 2014. Retrospectives: The Cold-War Origins of the Value of Statistical Life. Journal of Economic Perspectives, 28:4, 213-226.

Benamouzig, Daniel. 2005. La Santé au miroir de l’Economie. Paris : PUF

Biausque V. 2011, Valeur statistique de la vie humaine : une méta-analyse. OCDE

Cavalin, C. 2016. « La valeur d’une vie statistique : histoire américaine, histoire de la pensée économique. » Incidence 12.

Commissariat général à la stratégie et à la prospective 2013. Éléments pour une révision de la valeur de la vie humaine. http://www.strategie.gouv.fr/

Costa, Dora L. & Kahn, Matthew E. 2004. Changes in the value of life, 1940-1980. Journal of Risk and Uncertainty, 29:2, 159-180.

Drèze, Jacques 1962. L’utilité sociale d’une vie humaine. Revue française de recherche opérationnelle 23 : 3 -28

Fabre, Françoise. 1970. « Une étude économique de la prévention et du dépistage précoce du cancer du col de l’utérus » Cahiers du Séminaire d’Econometrie 12, 121-143

Feinberg, Kenneth R. 2006. What Is Life Worth?: The Inside Story of the 9/11 Fund and Its Effort to Compensate the Victims of September 11th. Public Affairs.

Hugonnier, J., Pelgrin, F. & St-Amour, P. 2018. Valuing Life as an Asset, as a Statistic and at Gunpoint. Swiss Finance Institute Research Paper 18-27.

Johansson, Per-Olov. 2000. Is there a meaningful definition of the value of statistical life? Journal of Health Economics, 20, 131-139.

Lery, Simon 2004. Arbitrages : le prix de la vie. Alternatives Economiques, 223.

Mrozek, Janusz R. & Taylor, Laura O. 2002. What determines the value of life : a meta analysis. Journal of Policy Analysis and Management, 21 :2, 253–270

Schelling, T.C. 1968. 'The life you save may be your own.' In Problems in Public Expenditure Analysis, ed. Samuel B. Chase (Washington DC: Brookings Institution), 127-162.

Ventura, Alba. 2019. « 80km/h : S'il s'agit de ne sauver qu'une vie, est-ce que ça ne vaut pas le coup ? », RTL, 29 January 2019.

A brief history of sports betting

This article was originally published – in French – on variance.eu.

A report by the American Gaming Association (May 2017) estimated that between $100 billion and $400 billion is bet each year on sports alone, for an estimated gross revenue of between $5 billion and $20 billion. Here we offer a brief history of sports betting, emphasizing the concept of pari-mutuel betting. We will see, in a second article, the links between this principle and mathematical finance, and insurance.

From games to sports

Sports betting has been around for a long time, even if the origin of the first bet is impossible to date. We can think of the Greeks, inventors of the Olympic Games, where it was not uncommon for spectators to bet among themselves on the winners (Decker & Thuillier, 2004). Closer to home, as Georges Vigarello reminds us, "under the Ancien Régime, gambling was the subject of a real passion. It took the form of either betting games or prize games."

The former, bets, were made between people from the same social world, between farmers or between nobles. The latter, the prize games, took place during parish celebrations, and reflected different regional practices, with wrestling in Brittany, or jumping in Provence. We can also think of the confrontations between villages at "la soule", for example. Among the nobles, prize games were organized for special occasions (a birth or a wedding). These games were recreational and festive moments.

It was not until the end of the 19th century that games became sports, in line with the hygienist theories of the time. We can think of Baron Pierre de Coubertin, who wanted to "use all the means appropriate to develop our physical qualities to make them serve the collective good" through "sport". We find the Baron again in 1887 with the creation of the Union of French Societies of Athletic Sports, the official appearance of the notion of "sport", replacing that of "game", as Dietschy & Clastres (2006) point out, noting in passing that this Union was based on amateurism, in reaction against the cycling (from 1860) and walking (around 1870) societies, which had taken up the tradition of prize and betting games. Around 1890, this union, dedicated to athletics, opened up to other sports (rugby, field hockey, fencing, swimming), which were represented by specialized commissions.

The first bookmakers and gambling

A little earlier, during the Industrial Revolution, horse betting organised by bookmakers developed. These bets had been popular in the United Kingdom in the 16th and 17th centuries, but remained reserved for the aristocracy and the landed gentry. And in reality, only horse owners were allowed to bet on the results of these private races, known as "matches". One of these races, launched by the twelfth Earl of Derby (Edward Smith-Stanley) in 1780, also left its mark on sporting vocabulary. If these races were originally private, Charles II's passion for racing had made them more ambitious, attracting huge crowds betting ever larger sums. Innkeepers and pub owners were then major promoters of these races, which encouraged owners to organize them near their establishments. They naturally became the first bookmakers, organizing the first steeple-chases, a form of race (first created in Ireland) where riders rode from one church tower to another, jumping everything in their path! In 1826, at the stables of St Albans, north of London, the idea of having horses start and finish in the same place was launched, giving rise to modern racecourses.

Betting was not yet regulated, and betting on races was based on a credit system. And since gambling near a place where alcohol was available in large quantities could have dramatic consequences, the British government banned gambling in pubs, which led to the opening of betting shops, run by bookmakers, following the adoption of the Gaming Act in 1845. The bookmakers not only played the role of scribes, keeping track of transactions in registers; they also served as arbitrators in bets. The bookmaker became the intermediary with whom to bet: he received the bets, but did not bet against the player. The arbitrator did not only act at the end, in the event of a dispute, but above all to make the bet official. Indeed, cash bets were rare, and bookmakers decided whether the items wagered had the same value and, if not, what the difference was. One of the players then added money to a cap. Players put their hands in the cap and removed them, either to agree with the assessment or to indicate their disagreement. This is the origin of "hand in cap", which refers to the amount of money needed to ensure a fair bet. The word "handicap" was then commonly used in horse racing (to designate participants disadvantaged at the start of a race), before taking on a medical connotation from 1950 onwards.

Thereafter, bookmakers did not lack imagination, introducing cash bets, then offering fixed odds against each horse in a race. Parliament then backtracked with the Suppression of Betting Houses Act in 1853: only credit betting and betting on racecourses remained allowed. At the same time, in France, Léon Sari invented the "pari mutuel" in 1857 with Charles de Morny, owner of the Maisons-Laffitte racetrack (which got a grandstand building in June 1878). Joseph Oller, who co-founded the Moulin-Rouge, was the concessionaire. As the French Senate report on gambling reminds us, the law of June 2, 1891 legalized betting on horse races and established the principle of mutualization. As we will see below, this principle means that bettors play against each other and share the winnings (once the legal levies provided for by law have been deducted, for the benefit of the State and of the racing institution). In mathematical finance, we would speak of a "self-hedging strategy". In March 1931, the PMU ("pari mutuel urbain") was born, and it was not until 1985 that the "sports lotto" arrived.
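
As a minimal sketch of the pari-mutuel principle described above (the stakes, the outcome and the 15% levy below are made-up numbers), the pool is simply redistributed to the winning bettors in proportion to their stakes:

stakes <- c(horse_A=5000, horse_B=3000, horse_C=2000)  # hypothetical bets, in euros
levy   <- 0.15                   # hypothetical legal levy (State + racing institution)
pool   <- sum(stakes)*(1-levy)   # net pool shared among the winners
winner <- "horse_B"
pool/stakes[winner]              # payout per euro bet on the winning horse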

From horses to other sports

In England, the "pool" has long referred to a card game played for collective stakes, drawing its etymology from the French "poule", or rather from the old French "poule" referring to a young fowl (we find the Latin pulla, from pullus, the "young animal"), but also to the "booty" or "loot". Here we find the idea of playing for money. This usage can be traced back to 1870 (in the sense of "collective betting"), before the word came, during the First World War, to designate a group of people pooling resources or skills. As early as 1920, the term "football pool" appeared, as recalled by Forrest (1999).

In Liverpool, England, John Moores founded Littlewoods in 1923, a retail company, before launching mail-order sales, while offering football bets. The most famous game was the "Treble Chance", where players could choose to bet on 10, 11 or 12 football matches for the coming weekend. Anecdotally, as noted by Forrest & Pérez (2013), when a match could not take place (for example because of rain), a panel of experts appointed by Littlewoods had to model the match and provide a forecast. After the Second World War, in Europe, came the so-called 1X2 formulas, where the player must predict whether, in a set of 12 to 15 games, the home team will win (1), lose (2) or draw (X). It can be noted that these "football pools" could refer to any form of pari-mutuel betting, strongly resembling a lottery. The main difference is that in a lottery the draw is supposed to be a purely random process, unlike football matches. And for the players, the difference is significant! In the 1980s, Littlewoods was one of the largest private companies in Europe, before declining with the birth of online betting sites…

Internet and online betting

Today, in addition to the betting shops that still exist in the United Kingdom, the bookmakers' strength lies in their online presence. The first sites were created around 1995, for example Intertops, which relied on a law passed in 1994 by the island nation of Antigua and Barbuda (an officially independent Commonwealth member country), granting licences to companies wishing to provide gambling services over the Internet (they subsequently obtained licences from the Mohawk territory of Kahnawake in Quebec, and from Malta). Betting on sports events quickly became very popular.

In 2000, Betfair was launched and revolutionized the industry: Betfair itself does not take customers' bets, but rather lets customers bet against each other. This peer-to-peer betting quickly became very popular. In 2002, the first live betting was launched, offering bettors the opportunity to bet on a sporting event while it is taking place. Today, on the larger sites, all kinds of sports are available, whether team sports (football, basketball) or individual ones (tennis, boxing), possibly with competitions involving more than two players or teams (athletics, cycling). The player chooses an outcome, which can be a final score (1X2 in football), a number of goals scored, etc., and then concludes the bet by choosing the amount he is willing to wager (the stake). On the main sites, no fewer than 20,000 bets are available every day.

Decker, Wolfgang & Thuillier, Jean-Paul (2004). Le sport dans l’antiquité. Picard.

Dietschy, Paul & Clastres, Patrick (2006). Sport, société et culture en France du XIXe siècle à nos jours. Hachette, Carré Histoire.

Forrest, David (1999). The Past and Future of the British Football Pools. Journal of Gambling Studies, 15:2, 161-176.

Forrest, David & Pérez, Levi (2013) The Football Pools in The Oxford Handbook of the Economics of Gambling, 147-162

Vigarello, Georges (2004) Le sport est-il encore un jeu ? Sciences Humaines, no 152.

To be continued…. with a post on how bets, predictions and players’ beliefs are linked.

What is the interpretation of the diagonal of a ROC curve?

Last Friday, we discussed the use of ROC curves to describe the goodness of a classifier. I did say that I would post a brief paragraph on the interpretation of the diagonal. If you look around, some say that it describes the "strategy of randomly guessing a class", that it is obtained with "a diagnostic test that is no better than chance level", or even by "making a prediction by tossing an unbiased coin".

Let us get back to ROC curves to illustrate those points. Consider a very simple dataset with 10 observations (that is not linearly separable)

x1 = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
x2 = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
y = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x1,x2=x2,y=as.factor(y))

here we can check that, indeed, it is not separable

plot(x1,x2,col=c("red","blue")[1+y],pch=19)

Consider a logistic regression (the course is on linear models)

reg = glm(y~x1+x2,data=df,family=binomial(link = "logit"))

but any model here can be used… We can use our own function

Y=df$y
S=predict(reg)
roc.curve=function(s,print=FALSE){
  Ps=(S>=s)*1
  FP=sum((Ps==1)*(Y==0))/sum(Y==0)   # false positive rate
  TP=sum((Ps==1)*(Y==1))/sum(Y==1)   # true positive rate
  if(print==TRUE){print(table(Observed=Y,Predicted=Ps))}
  vect=c(FP,TP)
  names(vect)=c("FPR","TPR")
  return(vect)}

or any R package actually

library(ROCR)

perf=performance(prediction(S,Y),"tpr","fpr")

We can plot the two simultaneously here

plot(performance(prediction(S,Y),"tpr","fpr"))
V=Vectorize(roc.curve)(seq(-5,5,length=251))
points(V[1,],V[2,])
segments(0,0,1,1,col="light blue")

So our code works just fine, here. Let us consider various strategies that should lead us to the diagonal.

The first one is : everyone has the same probability (say 50%)

S=rep(.5,10)
plot(performance(prediction(S,Y),"tpr","fpr"))

V=Vectorize(roc.curve)(seq(0,1,length=251))
points(V[1,],V[2,])

Indeed, we have the diagonal. But to be honest, we have only two points here : (0,0) and (1,1). Claiming that we have a straight line is not very satisfying… Actually, note that we have this situation whatever the probability we choose

S=rep(.2,10)
plot(performance(prediction(S,Y),"tpr","fpr"))

V=Vectorize(roc.curve)(seq(0,1,length=251))
points(V[1,],V[2,])

We can try another strategy, like "making a prediction by tossing an unbiased coin". This is what we obtain

set.seed(1)

S=sample(0:1,size=10,replace=TRUE)
plot(performance(prediction(S,Y),"tpr","fpr"))

V=Vectorize(roc.curve)(seq(0,1,length=251))
points(V[1,],V[2,])
segments(0,0,1,1,col="light blue")

We can also try some sort of "random classifier", where we choose the score randomly, say uniformly on the unit interval

set.seed(1)

S=runif(10)
plot(performance(prediction(S,Y),"tpr","fpr"))

V=Vectorize(roc.curve)(seq(0,1,length=251))
points(V[1,],V[2,])
segments(0,0,1,1,col="light blue")

Let us try to go further on that one. For convenience, let us consider another function to plot the ROC curve

V=Vectorize(roc.curve)(seq(0,1,length=251))

roc_curve=Vectorize(function(x) max(V[2,which(V[1,]<=x)]))

We have the same line as previously

x=seq(0,1,by=.025)

y=roc_curve(x)
lines(x,y,type="s",col="red")

But now, consider many scoring strategies, all randomly chosen

MY=matrix(NA,500,length(y))
for(i in 1:500){
  S=runif(10)
  V=Vectorize(roc.curve)(seq(0,1,length=251))
  MY[i,]=roc_curve(x)
}
plot(performance(prediction(S,df$y),"tpr","fpr"),col="white")
for(i in 1:500){lines(x,MY[i,],col=rgb(0,0,1,.3),type="s")}
lines(c(0,x),c(0,apply(MY,2,mean)),col="red",type="s",lwd=3)
segments(0,0,1,1,col="light blue")

The red line is the average of all random classifiers. It is not a straight line, but we observe oscillations around the diagonal.

Consider a dataset with more observations


myocarde = read.table("http://freakonometrics.free.fr/myocarde.csv",head=TRUE, sep=";")

myocarde$PRONO = (myocarde$PRONO=="SURVIE")*1

reg = glm(PRONO~.,data=myocarde,family=binomial(link = "logit"))

Y=myocarde$PRONO

S=predict(reg)
plot(performance(prediction(S,Y),"tpr","fpr"))

V=Vectorize(roc.curve)(seq(-5,5,length=251))
points(V[1,],V[2,])
segments(0,0,1,1,col="light blue")

Here is a “random classifier” where we draw scores randomly on the unit interval

S=runif(nrow(myocarde))
plot(performance(prediction(S,Y),"tpr","fpr"))

V=Vectorize(roc.curve)(seq(-5,5,length=251))
points(V[1,],V[2,])
segments(0,0,1,1,col="light blue")

And if we do that 500 times, we obtain, on average

MY=matrix(NA,500,length(y))
for(i in 1:500){
  S=runif(length(Y))
  V=Vectorize(roc.curve)(seq(0,1,length=251))
  MY[i,]=roc_curve(x)
}
plot(performance(prediction(S,Y),"tpr","fpr"),col="white")
for(i in 1:500){lines(x,MY[i,],col=rgb(0,0,1,.3),type="s")}
lines(c(0,x),c(0,apply(MY,2,mean)),col="red",type="s",lwd=3)
segments(0,0,1,1,col="light blue")

So, it looks like we might say that the diagonal is what we have, on average, when drawing scores randomly on the unit interval…

I did mention that an interesting visual tool could be related to the use of the Kolmogorov-Smirnov statistic on classifiers. We can plot the two empirical cumulative distribution functions of the scores, given the response Y

score=data.frame(yobs=Y,
                 ypred=predict(reg,type="response"))

f0=c(0,sort(score$ypred[score$yobs==0]),1)

f1=c(0,sort(score$ypred[score$yobs==1]),1)
plot(f0,(0:(length(f0)-1))/(length(f0)-1),col="red",type="s",lwd=2,xlim=0:1)
lines(f1,(0:(length(f1)-1))/(length(f1)-1),col="blue",type="s",lwd=2)
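
To complement the plot, a minimal sketch of the corresponding two-sample Kolmogorov-Smirnov statistic (the largest vertical gap between the two empirical cdfs above) could simply be

ks.test(score$ypred[score$yobs==0],score$ypred[score$yobs==1])$statistic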

we can also look at the distribution of the score, with the histogram (or density estimates)

S=score$ypred

hist(S[Y==0],col=rgb(1,0,0,.2),
     probability=TRUE,breaks=(0:10)/10,border="white")
hist(S[Y==1],col=rgb(0,0,1,.2),
     probability=TRUE,breaks=(0:10)/10,border="white",add=TRUE)
lines(density(S[Y==0]),col="red",lwd=2,xlim=c(0,1))
lines(density(S[Y==1]),col="blue",lwd=2)

The underlying idea is the following: we have a "perfect classifier" (reaching the top left corner) if the supports of the two score distributions do not overlap. Otherwise, we make errors. That is the case below: if the supports overlap slightly, we might have misclassification in, say, 10% of the cases, and the more the supports overlap, the more misclassification we get. Finally, we obtain the diagonal when the two conditional distributions of the scores are identical. Of course, that is only valid when n is very large; otherwise, it is only what we observe on average…
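
As a quick numerical check of that last claim (a hedged sketch: the score below is drawn independently of the class, so the two conditional distributions are identical by construction), the ROC curve of such a non-informative score computed on a large sample hugs the diagonal, and its AUC is close to 1/2:

library(ROCR)
set.seed(1)
n=10000
Y=rbinom(n,size=1,prob=.4)    # the class
S=runif(n)                    # a score carrying no information about the class
plot(performance(prediction(S,Y),"tpr","fpr"))
segments(0,0,1,1,col="light blue")
as.numeric(performance(prediction(S,Y),"auc")@y.values)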

On the poor performance of classifiers in insurance models

Each time we have a case study in my actuarial courses (with real data), students are surprised to have a hard time getting a "good" model, and they are always surprised to get a low AUC when trying to model the probability to claim a loss, to die, to fraud, etc. And each time, I keep saying, "yes, I know, and that's what we expect, because there is a lot of 'randomness' in insurance". To be more specific, I decided to run some simulations, and to compute AUCs to see what's going on. And because I don't want to waste time fitting models, we will assume that we have, each time, a perfect model. So I want to show that the upper bound of the AUC is actually quite low! So it's not a modeling issue, it is a fundamental issue in insurance!

By ‘perfect model’ I mean the following : \Omega denotes the heterogeneity factor, because people are different. We would love to get \mathbb{P}[Y=1|\Omega]. Unfortunately, \Omega  is unobservable ! So we use covariates (like the age of the driver of the car in motor insurance, or of the policyholder in life insurance, etc). Thus, we have data (y_i,\boldsymbol{x}_i)‘s and we use them to train a model, in order to approximate \mathbb{P}[Y=1|\boldsymbol{X}]. And then, we check if our model is good (or not) using the ROC curve, obtained from confusion matrices, comparing y_i‘s and \widehat{y}_i‘s where \widehat{y}_i=1 when \mathbb{P}[Y_i=1|\boldsymbol{x}_i] exceeds a given threshold. Here, I will not try to construct models. I will predict \widehat{y}_i=1 each time the true underlying probability \mathbb{P}[Y_i=1|\omega_i] exceeds a threshold ! The point is that it’s possible to claim a loss (y=1) even if the probability is 3% (and most of the time \widehat{y}=0), and to not claim one (y=0) even if the probability is 97% (and most of the time \widehat{y}=1). That’s the idea with randomness, right ?

So, here p(\omega_1),\cdots,p(\omega_n) denote the probabilities to claim a loss, to die, to fraud, etc. There is heterogeneity here, and this heterogeneity can be small, or large. Consider the graph below, to illustrate,

In both cases, there is, on average, a 25% chance to claim a loss. But on the left, there is more heterogeneity, more dispersion. To illustrate, I used the arrow, which is a classical 90% interval: 90% of the individuals have a probability to claim a loss in that interval (here 10%-40%), 5% are below 10% (low risk), and 5% are above 40% (high risk). Later on, we will say that we have 25% on average, with a dispersion of 30% (40% minus 10%). On the right, it's more like 25% on average, with a dispersion of 15%. What I call dispersion is the difference between the 95% and the 5% quantiles.

Consider now some dataset, with Bernoulli variables y, drawn with those probabilities p(\omega). Then, let us assume that we are able to get a perfect model: I do not estimate a model based on some covariates; here, I assume that I know the probability perfectly (which is true, because I did generate those data). More specifically, to generate a vector of probabilities, I use a Beta distribution with a given mean, and a given variance (to capture the heterogeneity I mentioned above)

a=m*(m*(1-m)/v-1)       # Beta parameters matching mean m and variance v
b=(1-m)*(m*(1-m)/v-1)
p=rbeta(n,a,b)          # vector of individual claim probabilities

from those probabilities, I generate occurrences of claims, or deaths,

Y=rbinom(n,size = 1,prob = p)

Then, I compute the AUC of my “perfect” model,

auc.tmp=performance(prediction(p,Y),"auc")

And then, I will generate many samples, to compute the average value of the AUC. And actually, we can do that for many values of the mean and of the dispersion of the Beta distribution. Here is the code

library(ROCR)
n=1000
ns=200
ab_beta = function(m,inter){
  a=uniroot(function(a) qbeta(.95,a,a/m-a)-qbeta(.05,a,a/m-a)-inter,
            interval=c(.0000001,1000000))$root
  b=a/m-a
  return(c(a,b))
}
Sim_AUC_mean_inter=function(m=.5,i=.05){
  V_auc=rep(NA,ns)
  b=-1
  essai = try(ab<-ab_beta(m,i),TRUE)
  if(inherits(essai,what="try-error")) a=-1
  if(!inherits(essai,what="try-error")){
    a=ab[1]
    b=ab[2]
  }
  if((a>=0)&(b>=0)){
    for(s in 1:ns){
      p=rbeta(n,a,b)
      Y=rbinom(n,size = 1,prob = p)
      auc.tmp=performance(prediction(p,Y),"auc")
      V_auc[s]=as.numeric(auc.tmp@y.values)}
    L=list(moy_beta=m,
           inter_beta=i,
           q05=qbeta(.05,a,b),
           q95=qbeta(.95,a,b),
           moy_AUC=mean(V_auc),
           sd_AUC=sd(V_auc),
           q05_AUC=quantile(V_auc,.05),
           q95_AUC=quantile(V_auc,.95))
    return(L)}
  if((a<0)|(b<0)){return(list(moy_AUC=NA))}}
Vm=seq(.025,.975,by=.025)
Vi=seq(.01,.5,by=.01)
V=outer(X = Vm,Y = Vi, Vectorize(function(x,y) 
Sim_AUC_mean_inter(x,y)$moy_AUC))
library("RColorBrewer")
image(Vm,Vi,V,
      xlab="Probability (Average)",
      ylab="Dispersion (Q95-Q5)",
      col=
        colorRampPalette(brewer.pal(n = 9, name = "YlGn"))(101))
contour(Vm,Vi,V,add=TRUE,lwd=2)

On the x-axis, we have the average probability to claim a loss. Of course, there is a symmetry here. And on the y-axis, we have the dispersion : the lower, the less heterogeneity in the portfolio. For instance, with a 30% chance to claim a loss on average, and 20% dispersion (meaning that in the portfolio, 90% of the insured have between 20% and 40% chance to claim a loss, or 15% and 35% chance), we have on average a 60% AUC. With a perfect model ! So with only a few covariates, having 55% should be great !

My point here is that with a low dispersion, we cannot expect to have a great AUC (again, even with a perfect model). In motor insurance, from my experience, 90% of the insured have between a 3% and a 20% chance to claim a loss! That's less than 20% dispersion! And in that case, even if the (average) probability is rather small, it is very difficult to get an AUC above 60% or 65%!
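
As a usage sketch of the functions above (the 10% average below is my own assumption; only the 3%-20% range comes from the discussion), one can look at the order of magnitude of that upper bound directly:

# hypothetical motor portfolio: about 10% average claim probability,
# with 90% of the insured between 3% and 20% (i.e. 17% dispersion)
Sim_AUC_mean_inter(m=.1,i=.17)$moy_AUC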

Variance decomposition and price segmentation in Insurance

Today, I was giving a talk at the Economics department, and I got a very interesting question about some tables I keep showing to explain why insurance companies like segmentation. The tables illustrate three different cases. Here, S stands for the individual (random) loss.

  • the first one is the case where the premium asked is the same for all the insured – i.e. the pure premium \mathbb{E}[S]

As explained, the loss is here on an individual basis, so, per policy, the insurer faces the (random) loss S-\mathbb{E}[S], which is, on average, null. That's the second line. For the last line, I keep saying that we look at the overall loss of the insurer, but that's not correct. Here, with a factor n, we would have the variance of the total loss for the insurance company. We just removed the n factor in the table.

  • then, we have perfectly observable heterogeneity : insured individuals have a risk factor \Omega, observable, and in that case, the ‘pure’ premium is \mathbb{E}[S|\Omega]

That's what we have below. Here again, on average, the insured should have a null profit. And the total variance (which was \text{Var}[S] in our previous example) is now split into two parts (that's basically Pythagoras' theorem).
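
In formula form, this is simply the law of total variance, \text{Var}[S]=\mathbb{E}[\text{Var}[S|\Omega]]+\text{Var}[\mathbb{E}[S|\Omega]], where the first term is the within-class variability that remains even with the 'perfect' premium \mathbb{E}[S|\Omega], and the second term is the between-class variability that segmentation removes.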

The interpretation is the following

And then, I usually mention the third and last case, more realistic

  • the risk factor \Omega is not observable, but segmentation is still possible using some proxy of the risk factor, obtained using some covariates, and the ‘pure’ premium is \mathbb{E}[S|\boldsymbol{X}]

And here also, there is a nice interpretation, because of the variance decomposition : there is one part that we observed previously, with some ‘perfect pricing’ and an additional part (that is positive) that is related to the fact that the covariates are just a proxy of the risk factor….

The term on the left is then a lower bound, obtained if, using the covariates available for pricing, we can actually recover the risk factor.

That was my story, but the fact that n (the portfolio size) was not mentioned in the tables was a bit confusing… So I decided to create some graphs to illustrate those three cases

  • same premium for everyone

Consider some simple simulations. On the graph on the left, we have the risk factor on the x-axis, and the loss on the y-axis (going roughly from 0 to 20). The pure premium is the average of those losses; here, it is 10. That's the plain red line (on the left). In the middle, the y-axis is the insured's profit/loss per policy: someone with a loss close to 0 means a gain of 10, someone with a loss close to 20 means a loss of 10. On average, there is no profit (that's the plain line). And then, on the right, we have the distribution of the profit/loss (per contract). Again, on average it is 0, with some variance.

  • premium based on covariates

Consider here a simple covariate x: assume that we've been able to create a binary variable that distinguishes the low risks from the high risks. Here, there are two levels for the premium: the low premium is close to 6, and the high one is close to 14. That's again the graph on the left

Then we have the profit/loss per policy for the insured, in the middle. Here, when the loss is close to 0, the gain is smaller: it is 6 (while it was 10 before). A loss close to 10, which previously meant a 0 profit, now means either a loss of 4 or a gain of 4. The profit/loss distribution is now on the right: there is less dispersion, less variance. That's the decrease in variance we discussed before. To summarize, segmentation reduces the variability of the result for the insurance company. That's what we observe on the right.

  • premium based on the risk factor

Assume now that \Omega is observable. And that we use it for our pricing. The premium is now continuous, and it is the red line, on the left. The profit/loss (in the middle) is the difference between the loss, and its expected value (conditional on the risk factor). And on the right, we have the distribution.

As expected, there is much less variability in the profit/loss distribution of the insurance company in that case. And actually, that's a lower bound for the variance of the result of the insurance company… I hope the graphs clarify what's going on here…
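
For readers who prefer code to graphs, here is a minimal simulation sketch of the three cases (the gamma-type loss model and the numbers below are an arbitrary choice of mine, not the ones used for the figures above):

set.seed(1)
n=1e5
omega=runif(n)                          # the (unobservable) risk factor
x=(omega>.5)*1                          # a binary proxy of the risk factor
S=rgamma(n,shape=2,scale=5*omega+2.5)   # individual losses, heterogeneous in omega
res1=mean(S)-S                          # same premium E[S] for everyone
res2=ave(S,x)-S                         # premium based on the covariate, E[S|X]
res3=2*(5*omega+2.5)-S                  # premium based on the risk factor, E[S|Omega]
c(var(res1),var(res2),var(res3))        # decreasing variances, as in the graphs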

Variance of the slope in a regression model

In my “applied linear models” exam, there was a tricky question (it was a multiple choice, so no details were asked). I was simply asking if the following statement was valid, or not

Consider a linear regression with one single covariate, y=\beta_0+\beta_1x_1+\varepsilon, and the least-squares estimates. The variance of the slope is \text{Var}[\widehat{\beta}_1]. Do we decrease this variance if we add one variable, and consider y=\beta_0+\beta_1x_1+\beta_2x_2+\varepsilon?

For the exam, the expected answer was simply “no”. In a nutshell, there are two cases where we should expect different changes,

  • if x_1 and x_2 are highly correlated, then we should expect the variance to increase
  • if x_1 and x_2 are not correlated, then we should expect the variance to decrease

We did briefly observe (and discuss) those points on examples during the lecture… but I wanted to go a bit further, since I couldn't find any analytical results. Let us generate a model y=\beta_0+\beta_1x_1+\beta_2x_2+\varepsilon, and then compare the variance \text{Var}[\widehat{\beta}_1] in the two fitted models, depending on the correlation between x_1 and x_2

library(mnormt)
n=200
s=function(r=0){
S=matrix(c(1,r,r,1),2,2)
X=rmnorm(n,c(0,0),S)
B=data.frame(y=-2+X[,1]+X[,2]+rnorm(n)/2,
x1=X[,1],
x2=X[,2])
reg12=lm(y~x1+x2,data=B)
reg1=lm(y~x1,data=B)
k=summary(reg12)$coefficients[2,2]/summary(reg1)$coefficients[2,2]
k}

Let us generate 500 samples for each value of the correlation, from -0.9 to +0.9

M=NULL
for(r in ((-9):9)/10) M=cbind(M,Vectorize(s)(rep(r,500)))

and let us plot the ratio of the two estimated standard errors of \widehat{\beta}_1 (with and without the second covariate)

plot(0:1,0:1,xlim=c(-1,1),ylim=c(0,2),col="white")
for(i in 1:19) points(rep((((-9):9)/10)[i],500),M[,i],col="light blue")
VM=apply(M,2,mean)
lines((((-9):9)/10),VM,col="red",lwd=2)
abline(h=1,lty=2)

If the ratio exceeds 1, the variance increases when adding a covariate.

Indeed, here, when the two variables are independent, the variance is divided by two. But when covariates are highly correlated, the variance is multiplied by two…
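
For what it is worth, this is consistent with the classical variance-inflation formula for the model with the two covariates, \text{Var}[\widehat{\beta}_1]=\frac{\sigma^2}{(1-r^2)\sum_{i=1}^n(x_{1,i}-\bar{x}_1)^2}, where \sigma^2 is the residual variance and r the empirical correlation between x_1 and x_2; the comparison with the single-covariate fit is less clean, though, because dropping x_2 also changes the residual variance.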

Now, what if, actually, x_2 is not a real explanatory variable : the true model we generate is y=\beta_0+\beta_1x_1+\varepsilon. In that case,

s=function(r=0){
S=matrix(c(1,r,r,1),2,2)
X=rmnorm(n,c(0,0),S)
B=data.frame(y=-2+X[,1]+rnorm(n)/2,
x1=X[,1],
x2=X[,2])
reg12=lm(y~x1+x2,data=B)
reg1=lm(y~x1,data=B)
k=summary(reg12)$coefficients[2,2]/summary(reg1)$coefficients[2,2]
k}

we get our samples as previously

M=NULL
for(r in ((-9):9)/10) M=cbind(M,Vectorize(s)(rep(r,500)))

and we plot those ratios

plot(0:1,0:1,xlim=c(-1,1),ylim=c(0,2),col="white")
for(i in 1:19) points(rep((((-9):9)/10)[i],500),M[,i],col="light blue")
VM=apply(M,2,mean)
lines((((-9):9)/10),VM,col="red",lwd=2)
abline(h=1,lty=2)

In the case where we add a useless variable x_2, whatever its correlation with x_1, adding it will always, on average, increase the variance of \widehat{\beta}_1.

Annual UCSB InsurTech Summit

Just a quick post to mention that an InsurTech Summit will be organized on Friday, May 3rd 2019, by Mike Ludkovski, and I will be there, with Francois Millard (Vitality Group), Adam Tashman (Carpe Data, Santa Barbara), Emiliano Valdez (University of Connecticut), and Howard Zail (Elucidor, LLC, New York City). That will be nice… I will actually also give a talk at the actuarial seminar on the Monday before!

Random thoughts on econometric models with (pure) random features

For my lectures on applied linear models, I wanted to illustrate the fact that the R^2 is never a good measure of the goodness of the model, since it’s quite easy to improve it. Consider the following dataset

n=100
df=data.frame(matrix(rnorm(n*n),n,n))
names(df)=c("Y",paste("X",1:99,sep=""))

with one variable of interest y, and 99 features x_j, all of them being (by construction) independent. And we have 100 observations… Consider here the regression on the first k features, and compute the R_k^2 of that regression

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  summary(model)$r.squared}

Let us see what’s going on…

plot(1:99,Vectorize(reg)(1:99))

(actually, it's not exactly what we have on the graph… we have the average obtained over 1,000 samples randomly generated, with 90% confidence bands). Observe that \mathbb{E}[R^2_k]=k/n, i.e. if we add some pure random noise, we keep increasing the R^2 (up to 1, actually).

Good news: as we've seen in the course, the adjusted R^2 – denoted \bar{R}^2 – might help. Observe that \mathbb{E}[\bar{R}^2_k]=0, so, in some sense, adding features does not help here…

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  summary(model)$adj.r.squared}
plot(1:99,Vectorize(reg)(1:99))

We can actually do the same with the Akaike criterion AIC_k and the Schwarz (Bayesian) criterion BIC_k.

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  AIC(model)}
plot(1:99,Vectorize(reg)(1:99))

For the AIC, the initial increase makes sense: we should not prefer the model with 10 covariates to the model with none. The strange thing is the behaviour on the far right: we prefer 80 random noise features to none! Which I find hard to interpret… For the BIC the code is simply

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  BIC(model)}
plot(1:99,Vectorize(reg)(1:99))

and here also, we have the same pattern, where we prefer a big model with just pure noise to nothing…

A last one to conclude (or not): what about the leave-one-out cross-validation mean squared error? More precisely, CV=\frac{1}{n}\sum_{i=1}^n\widehat{\varepsilon}^2_{-i}, where \widehat{\varepsilon}_{-i}=y_i-\widehat{y}_{-i} and \widehat{y}_{-i} is the predicted value obtained when the model is estimated with the ith observation deleted. One can prove that \widehat{\beta}_{-i}=\widehat{\beta}-(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{x}_i\hat\varepsilon_i(1-H_{i,i})^{-1}, where H is the classical hat matrix, and thus \widehat{\varepsilon}_{-i}=(1-H_{i,i})^{-1}\hat\varepsilon_i, i.e. we do not have to estimate n models (one per deleted observation)

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  h=lm.influence(model)$hat
  mean((residuals(model)/(1-h))^2)}
plot(1:99,Vectorize(reg)(1:99))

Here, it makes sense: adding noisy features yields overfitting, so the leave-one-out mean squared error keeps increasing!

That's all nice, but it might not be very realistic… Here, for my model with only one variable, I just picked one at random… In practice, we try to get the "best" one… So a more natural idea would be to order the variables according to their correlation with y,

df=data.frame(matrix(rnorm(n*n),n,n))
df=df[,rev(order(abs(cor(df)[1,])))]
names(df)=c("Y",paste("X",1:99,sep=""))

and as before, we can plot the evolution of R^2_k as a function of k the number of features considered,

which is increasing, with a steeper slope at the beginning… For \bar R^2_k, we might actually prefer a correlated noise to nothing (which actually makes sense). So here, since we somehow chose our variables, \bar R^2_k seems to be always positive…

For AIC_k, here also, there is an improvement, before coming back to the original situation (with about 80 features); and here also, we observe the drop on the far right part of the graph

The BIC_k might like the top three features, but soon there is a deterioration… even if, here also, we have the drop at the far right (with more than 95 features, for 100 observations).

Finally, observe that here again, our (leave-one-out) cross-validation has not been misled by our noisy variables: its error keeps increasing!

So it seems that cross-validation techniques are more robust than the AIC and BIC (even if we mentioned, in a previous post, connections between all those concepts) when we have a lot of noisy (non-relevant) features.

Do risk classes go beyond stereotypes?

Generalization, stereotypes and clichés

In Thinking, Fast and Slow, Daniel Kahneman discusses at length the importance of stereotypes in understanding many decision-making processes. A so-called System 1 is used for quick decision-making: it allows us to recognize people and objects, helps us focus our attention, and encourages us to fear spiders. It is based on knowledge stored in memory and accessible without intention, and without effort. It can be contrasted with System 2, which allows for more complex decision-making, requiring discipline and sequential reflection. In the first case, our brain uses the stereotypes that govern judgments of representativeness, and uses this heuristic to make decisions. If I cook fish for friends who are coming over for dinner, I open a bottle of white wine. The cliché "fish goes well with white wine" allows me to make a decision quickly, without having to think about it. Stereotypes are statements about a group that are accepted (at least provisionally) as facts about each member. Whether correct or not, stereotypes are the basic tools for thinking about categories in System 1. But in many cases, a more in-depth, more sophisticated reflection – corresponding to System 2 – will make it possible to make a more judicious, even optimal decision. Without picking just any red wine, a pinot noir could perhaps also be perfectly suitable for roasted red mullet.

"To generalize is to be an idiot, to particularize is the alone distinction of merit", wrote William Blake around 1800, annotating the Discourses of the painter Joshua Reynolds. Stigmatizing an entire population because of a minority in a decision-making process is a misleading generalization, often punished by society. Moral punishment, but sometimes also legal punishment (when hiring, for example), in a society that tends to be civilized and asks us not to draw erroneous conclusions about an individual from the statistics of a group to which he is attached. But isn't that what the actuary does every day?

The usual suspects

For Schauer (2009), this "generalization", condemned by William Blake, is probably the actuary's raison d'être: "to be an actuary is to be a specialist in generalization, and actuaries engage in a form of decision-making that is sometimes called actuarial". If I decide to insure a sports car, I am assigned the risky driving characteristics that probably belong to the majority of sports car owners, attributes that I may not share. And as we noted in the introduction, insurance companies, of course, are not the only ones that operate actuarially, according to Schauer's definition. We all do it, much more often than most of us would probably recognize. We do it when we choose airlines based on their safety record, punctuality or lost luggage. We do it when we associate personal characteristics (a visible tattoo, black or brightly coloured clothing) with behavioural characteristics (such as a propensity for violence) that these personal characteristics would seem to indicate. And we operate in this way when we engage in stereotypes that may be harmless when based on nationality, for example saying that French people are rude, or that Scots all wear kilts, while at the same time acknowledging that more pernicious stereotypes based on ethnic origin, gender or sexual orientation are all too widespread today! As the negative connotation of the word "prejudice" indicates, many people believe that it is unfair to make individual decisions based on non-universal group characteristics, even if the group assignment has a solid statistical basis. The big difference between actuarial science and everyday life is that actuaries have to use a large number of observations. On a personal level, I can decide not to travel with a given airline anymore because I have had two bad experiences in three trips. Before deciding that travel insurance deserves a higher premium when flying with that company, it takes more than three observations!

In fact, the question is often whether an insurance company's refusal to provide coverage, or its decision to increase the premiums it charges for the same coverage, is an injustice when it is based on an actuarially justified (but perhaps not universal) generalization. As Lemmens (2000) noted, the question was put to the legislator when insurers observed that Jewish women from Eastern Europe were particularly vulnerable to breast and ovarian cancer. At the end of 2012, the European Court of Justice put an end to all discrimination based on the gender of policyholders: insurers were no longer able to differentiate insurance prices according to whether the insured was male or female. But the use of age is still allowed. Indeed, age is often an indicator of a possible decrease in vision or hearing, slower reaction time (and increased risk of sudden disability), etc. And although there are many individual variations, the available data provide important empirical justification.

Machines, causality, and stereotypes

A major criticism of machine learning models is their lack of interpretability. But very often, the validation of econometric models requires a narrative built around stereotypes. And this narrative is essential, as Pearl & Mackenzie (2018) remind us. Indeed, in the "Ladder of Causation", there are three levels. At the first level, we find the notion of association (or correlation), or even of conditional probability, which serves as a basis for the constitution of stereotypes: if we observe

P[cavity | brushing your teeth] < P[cavity | not brushing your teeth]

brushing teeth will be associated with a decrease in the probability of having cavities. It is also the basis of regression methods, which rely on correlations between the variable of interest and others, wrongly called explanatory. In Figure 1, we can see the daily cycling traffic in Helsinki, and the average temperature. We will tend to prefer the graph on the left, showing the evolution of the number of cyclists as a function of temperature, suggesting that temperature could explain the number of cyclists, and not the other way around. But a stereotype doesn't necessarily rest on a causal link: if I see a lot of cyclists passing by the window, I'll tell myself it must be hot, or at least warm.

Figure 1: Näytä Data – Author’s visualization

The first level answers the question "what if I see…?" (e.g. "what cycling traffic should I expect if the temperature reaches 20°C?"), and this task can be perfectly accomplished by a machine. The second level is the one that makes it possible to understand an effect, an intervention: the question is then "what if I do…?". To use our example, we are trying to understand the importance of brushing our teeth on the appearance of cavities. What if brushing teeth is simply more natural for children with good teeth? Here the third level of the ladder appears, asking the question "what if I had done…?" and based on the idea of a counterfactual model. We are no longer content to measure correlations; we build a model explaining what would happen if we changed the causal variables: what would really happen if the child who did not brush his teeth began to do so? For Pearl & Mackenzie (2018), a human being (maybe even an actuary) can make these more advanced arguments in a way that a machine cannot (yet). And very often, these causal patterns are stereotyped. As Charpentier & Diago Barry (2015) point out, in epidemiology, researchers have long wondered how to explain the fact that small babies of smoking mothers have a higher probability of survival than those of non-smoking mothers. The intuition that something is wrong comes from the prejudices, the stereotypes, that we have, and that a machine cannot have.

When actuaries tell each other stories

As Antonio & Charpentier (2017) noted, the European “gender directive” unsettled many insurers who used gender to construct their rates, since gender was highly correlated with claim frequency. But once telematics data were introduced, gender was no longer significant in the regression. Gender had long been used as a proxy for effects that telematics data can now capture directly, a proxy that fed many sexist (and other) stereotypes.

But stories also make it possible to distinguish a spurious correlation from a correlation that can be meaningfully interpreted. In Figure 2, we have life expectancy at birth by French department, a variable that we might try to explain in a pension study, for example. In the centre and on the right, two variables taken more or less at random: the number of tennis licences and the number of advertising agencies. Stereotypes are what allow us to construct a causal graph, and hence to understand why these variables are so strongly correlated with life expectancy.

Figure 2: Life expectancy at birth for men, on the left. In the centre, number of tennis licences per 100,000 inhabitants (source: FFT). On the right, number of advertising agencies per 100,000 inhabitants (source: INSEE, code NAF 7311Z). Author’s visualization.
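As a hedged illustration of why such a causal narrative matters, the sketch below simulates a hidden confounder, a purely hypothetical socio-economic index per department, that drives both a “tennis licences” variable and life expectancy: the two end up strongly correlated even though neither causes the other. The numbers are simulated, not the FFT or INSEE figures.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 96  # roughly the number of metropolitan French departments

# Hidden confounder: a hypothetical socio-economic index per department
z = rng.normal(size=n)

# Both variables depend on z, but not on each other
tennis_licences = 800 + 150 * z + rng.normal(scale=60, size=n)   # per 100,000 inhabitants
life_expectancy = 79 + 1.2 * z + rng.normal(scale=0.5, size=n)   # years

r = np.corrcoef(tennis_licences, life_expectancy)[0, 1]
print(f"correlation: {r:.2f}")  # strong, yet neither variable causes the other
```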

Hyper-individualization as an answer?

While William Blake condemned stereotypes by saying “to generalize is to be an idiot”, he immediately went further, continuing with “to particularize is the alone distinction of merit”. This individualisation is also advocated by more and more insurers, and even desired by many insureds. But as Grace & Terry (2002) pointed out, many policyholders suffer from a significant optimism bias – “if I have an accident, it will not be my fault” – leading them to doubt the insurer’s classification – “I’m less risky than the others”. And moral intuition seems to prove them right, against the actuaries. Yet not only is generality not, in general, unjust, but justice itself can have considerable elements of generality. To the extent that justice is centred on equity, and to the extent that equity itself is closely linked to equality, then equity, and therefore justice, can be seen as itself based on the idea of generality. The just society is not necessarily a society in which each individual is treated as an isolated set of unique attributes, requiring individualized attention. On the contrary, in some cases, the just society is a society in which generality is not only unavoidable, but also necessary for justice itself. And pooling risks together is the natural response in an insurance context. And it might not be such a big deal if that class is not as homogeneous as it could be, or as we would have expected it to be…

Antonio, K. & Charpentier, A. (2017). La tarification par genre en assurance, corrélation ou causalité ? Risques, 110: 107-110.

Charpentier, A. & Diago Barry, A. (2015). Big data : passer d’une analyse de corrélation à une interprétation causale. Risques, 101: 107-111.

Grace, J. & Terry, M. (2002). Exploring the Causes of Comparative Optimism. Psychologica Belgica, 42: 65-98.

Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.

Lemmens, T. (2000). Selective Justice, Genetic Discrimination, and Insurance: Should We Single Out Genes in Our Laws? McGill Law Journal / Revue de droit de McGill, 45(2): 347-412.

Pearl, J. & Mackenzie, D. (2018). The Book of Why: The New Science of Cause and Effect. Basic Books.

Schauer, F.F. (2009). Profiles, Probabilities, and Stereotypes. Harvard University Press.

Foundations of Machine Learning, part 5

This post is the ninth (and probably last) one of our series on the history and foundations of econometric and machine learning models. The first four were devoted to econometric techniques. Part 8 is online here.

Optimization and algorithmic aspects

In econometrics, (numerical) optimization became omnipresent as soon as we left the Gaussian model. We briefly mentioned it in the section on the exponential family, with the use of Fisher scoring (a gradient-type algorithm) to solve the first order condition \mathbf{X}^T W(\beta)^{-1}[\mathbf{y}-\widehat{\mathbf{y}}]=\mathbf{0}. In machine learning, optimization is the central tool. And it is necessary to have efficient optimization algorithms to solve problems (described previously) of the form \widehat{\beta}\in\underset{\beta\in\mathbb{R}^p}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,\beta_0+\mathbf{x}_i^T\beta)+\lambda\Vert\beta\Vert\right\rbrace. In some cases, instead of a global optimization, it is sufficient to optimize coordinate by coordinate (widely studied in Daubechies et al. (2004)). If f:\mathbb{R}^d\rightarrow\mathbb{R} is convex and differentiable, and if \mathbf{x} satisfies f(\mathbf{x}+h\mathbf{e}_i)\geq f(\mathbf{x}) for any h\in\mathbb{R} and any i\in\{1,\cdots,d\}, then f(\mathbf{x})=\min\{f\}, where (\mathbf{e}_i) is the canonical basis of \mathbb{R}^d. However, this property is no longer true in the non-differentiable case. But if we assume that the non-differentiable part is (additively) separable, it becomes true again: more specifically, the result still holds if f(\mathbf{x})=g(\mathbf{x})+\sum_{i=1}^d h_i(x_i), with g: \mathbb{R}^d\rightarrow\mathbb{R} convex and differentiable and each h_i: \mathbb{R}\rightarrow\mathbb{R} convex. This is the case of the Lasso objective, \beta\mapsto\|\mathbf{y}-\beta_0-\mathbf{X}\beta\|_{\ell_2}^2+\lambda\|\beta\|_{\ell_1}, as shown by Tseng (2001). Going back to our initial notations, we can use a coordinate descent algorithm: from an initial value \mathbf{x}^{(0)}, we iterate x_j^{(k)}\in\underset{x_j}{\text{argmin}}\big\lbrace f(x_1^{(k)},\cdots,x_{j-1}^{(k)},x_j,x_{j+1}^{(k-1)},\cdots,x_d^{(k-1)})\big\rbrace for j=1,2,\cdots,d. These algorithmic and numerical issues may seem secondary to econometricians, but they are essential in machine learning: a technique is interesting only if there is a stable and fast algorithm to obtain a solution. These optimization techniques can be transposed: for example, coordinate descent can be used for SVM methods (support vector machines) when the space is not linearly separable, and the classification error must be penalized (we will come back to this technique in the next section).
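To make this concrete, here is a minimal Python sketch (numpy only, simulated data) of cyclic coordinate descent for the Lasso, where each one-dimensional sub-problem is solved in closed form by soft-thresholding; it is an illustration of the technique, not a reference implementation.

```python
import numpy as np

def soft_threshold(rho, lam):
    """Closed-form solution of the one-dimensional Lasso sub-problem."""
    return np.sign(rho) * np.maximum(np.abs(rho) - lam, 0.0)

def lasso_coordinate_descent(X, y, lam, n_iter=100):
    """Minimize (1/2n)||y - X beta||^2 + lam * ||beta||_1 by cyclic coordinate descent."""
    n, p = X.shape
    beta = np.zeros(p)
    z = (X ** 2).sum(axis=0) / n                     # coordinate-wise curvature
    for _ in range(n_iter):
        for j in range(p):
            # partial residual: remove the contribution of all coordinates but j
            r_j = y - X @ beta + X[:, j] * beta[j]
            rho_j = X[:, j] @ r_j / n
            beta[j] = soft_threshold(rho_j, lam) / z[j]
    return beta

# Small simulated example with sparse true coefficients
rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta_true = np.array([3.0, -2.0, 0, 0, 1.5, 0, 0, 0, 0, 0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

print(np.round(lasso_coordinate_descent(X, y, lam=0.1), 2))
```

The coordinate-wise update is attractive precisely because the penalty is separable: each coordinate can be updated in closed form while the others are held fixed.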

In-sample, out-of-sample and cross-validation

These techniques seem intellectually interesting, but we have not yet discussed the choice of the penalty parameter \lambda. This problem is actually more general, because comparing two parameters \widehat{\beta}_{\lambda_1} and \widehat{\beta}_{\lambda_2} amounts to comparing two models. In particular, if we use a Lasso method with different thresholds \lambda, we compare models that do not have the same dimension. Previously, we addressed the problem of model comparison from an econometric perspective (by penalizing overly complex models). In the learning literature, judging the quality of a model on the data used to construct it does not tell us how the model will behave on new data. This is the so-called “generalization” problem. The traditional approach consists in splitting the sample (of size n) into two parts: one used to train the model (the training database, in-sample, of size m) and one used to test the model (the testing database, out-of-sample, of size n-m). The latter makes it possible to measure a genuine predictive risk. Suppose that the data are generated by a linear model y_i=\mathbf{x}_i^T \beta_0+\varepsilon_i, where the \varepsilon_i are independent draws from a centred distribution. The empirical in-sample quadratic risk is here \frac{1}{m}\sum_{i=1}^m\mathbb{E}\big([\mathbf{x}_i^T \widehat{\beta}-\mathbf{x}_i^T \beta_0]^2\big)=\mathbb{E}\big([\mathbf{x}_i^T \widehat{\beta}-\mathbf{x}_i^T \beta_0]^2\big), for any observation i. Assuming the residuals \varepsilon are Gaussian, this risk can be shown to be \sigma^2 \text{trace}(\Pi_X)/m, i.e. \sigma^2 p/m, where \Pi_X is the projection (hat) matrix. On the other hand, the empirical out-of-sample quadratic risk is \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big), where \mathbf{x} is a new observation, independent of the others. It can be noted that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big\vert \mathbf{x}\big)=\text{Var}\big(\mathbf{x}^T \widehat{\beta}\big\vert \mathbf{x}\big)=\sigma^2\mathbf{x}^T(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{x}, and, integrating with respect to \mathbf{x}, \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T\beta_0]^2\big)=\sigma^2\text{trace}\big(\mathbb{E}[\mathbf{x}\mathbf{x}^T]\mathbb{E}\big[(\mathbf{X}^T\mathbf{X})^{-1}\big]\big). The expression is then different from the one obtained in-sample, and using the Groves & Rothenberg (1969) inequality we can show that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big) \geq \sigma^2\frac{p}{m}, which is rather intuitive once we start thinking about it. Except in some simple cases, there is no simple (explicit) formula. Note, however, that if \mathbf{x}\sim\mathcal{N}(\mathbf{0},\sigma^2 \mathbb{I}), then \mathbf{X}^T\mathbf{X} follows a Wishart distribution, and it can be shown that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big)=\sigma^2\frac{p}{m-p-1}. If we now look at the empirical version, where \widehat{\beta} is estimated on the first m observations, \widehat{\mathcal{R}}^{\text{IS}}=\sum_{i=1}^m [y_i-\mathbf{x}_i^T\widehat{\beta}]^2\text{ and }\widehat{\mathcal{R}}^{\text{OS}}=\sum_{i=m+1}^{n} [y_i-\mathbf{x}_i^T\widehat{\beta}]^2, then, as Leeb (2008) noted, \widehat{\mathcal{R}}^{\text{IS}}-\widehat{\mathcal{R}}^{\text{OS}}\approx 2\cdot\nu, where \nu represents the number of degrees of freedom, which is not unlike the penalty used in the Akaike criterion.
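As a rough Monte Carlo sketch of these two orders of magnitude (under the Gaussian assumptions above, on simulated data, numpy only), one can fit OLS on m observations many times and compare the empirical in-sample and out-of-sample quadratic risks with \sigma^2 p/m and \sigma^2 p/(m-p-1):

```python
import numpy as np

rng = np.random.default_rng(1)
m, p, sigma = 50, 20, 1.0
beta0 = rng.normal(size=p)

n_rep, risk_is, risk_os = 2000, 0.0, 0.0
for _ in range(n_rep):
    X = rng.normal(size=(m, p))
    y = X @ beta0 + rng.normal(scale=sigma, size=m)
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    d = beta_hat - beta0
    risk_is += np.mean((X @ d) ** 2)        # in-sample: (1/m)||X(beta_hat - beta0)||^2
    X_new = rng.normal(size=(m, p))         # fresh design points, same distribution
    risk_os += np.mean((X_new @ d) ** 2)    # out-of-sample quadratic risk

print(f"in-sample     : {risk_is / n_rep:.3f}  vs  sigma^2 p/m       = {sigma**2 * p / m:.3f}")
print(f"out-of-sample : {risk_os / n_rep:.3f}  vs  sigma^2 p/(m-p-1) = {sigma**2 * p / (m - p - 1):.3f}")
```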

Figure 4 shows the respective evolution of \widehat{\mathcal{R}}^{\text{IS}} and \widehat{\mathcal{R}}^{\text{OS}} as a function of the complexity of the model (the degree of a polynomial regression, the number of knots in splines, etc.). The more complex the model, the smaller \widehat{\mathcal{R}}^{\text{IS}} (the red curve, at the bottom). But that is not what we are interested in here: we want a model that predicts well on new data (i.e. out-of-sample). As Figure 4 shows, if the model is too simple, it predicts poorly (on new data as well as on the in-sample data). But if the model is too complex, we end up in a situation of overfitting: the model starts to fit the noise. Of course, this figure should remind us of the one seen in the second post of this series.

Figure 4: Generalization, under- and over-fitting
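One way to reproduce the typical shape of Figure 4 on simulated data (a noisy sine curve, so purely illustrative) is to fit polynomial regressions of increasing degree and track the in-sample and out-of-sample mean squared errors; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

def make_sample(n):
    x = rng.uniform(0, 1, n)
    return x, np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)

x_train, y_train = make_sample(20)     # small training sample, easy to overfit
x_test, y_test = make_sample(1000)     # large test sample

for degree in [1, 3, 5, 10, 15]:
    poly = np.polynomial.Polynomial.fit(x_train, y_train, degree)
    mse_is = np.mean((y_train - poly(x_train)) ** 2)
    mse_os = np.mean((y_test - poly(x_test)) ** 2)
    print(f"degree {degree:2d} : in-sample {mse_is:.3f}, out-of-sample {mse_os:.3f}")
# the in-sample error keeps decreasing with the degree,
# while the out-of-sample error eventually increases again (overfitting)
```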

Instead of splitting the database in two, with one part used to calibrate the model and the other to assess its performance, it is also possible to use cross-validation. To present the general idea, we can go back to the “jackknife”, introduced by Quenouille (1949) (and formalized by Quenouille (1956) and Tukey (1958)), widely used in statistics to reduce bias. Indeed, if we assume that \{y_1,\cdots,y_n\} is a sample drawn from a distribution F_\theta, and that we have an estimator T_n(\mathbf{y})=T_n(y_1,\cdots,y_n), but that this estimator is biased, with \mathbb{E}[T_n(\mathbf{Y})]=\theta+O(n^{-1}), it is possible to reduce the bias by considering the jackknife estimator \widetilde{T}_n(\mathbf{y})=n\,T_n(\mathbf{y})-\frac{n-1}{n}\sum_{i=1}^n T_{n-1}(\mathbf{y}_{(i)})\text{ where }\mathbf{y}_{(i)}=(y_1,\cdots,y_{i-1},y_{i+1},\cdots,y_n). It can then be shown that \mathbb{E}[\widetilde{T}_n(\mathbf{Y})]=\theta+O(n^{-2}). Cross-validation is based on the same idea of building an estimator by removing one observation. Since we want to build a predictive model, we compare the forecast obtained with the model estimated without observation i to that missing observation: \widehat{\mathcal{R}}^{\text{CV}}=\frac{1}{n}\sum_{i=1}^n \ell(y_i,\widehat{m}_{(i)}(\mathbf{x}_i)). We speak here of the “leave-one-out” (loocv) method.
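Below is a minimal leave-one-out sketch (ordinary least squares with squared loss, simulated data); for OLS the same quantity can be computed more cheaply through the hat matrix, but the explicit loop mirrors the definition above.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=n)
X = np.column_stack([np.ones(n), x])          # design matrix with intercept

loo_errors = []
for i in range(n):
    keep = np.arange(n) != i                  # leave observation i out
    beta_i = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    y_pred_i = X[i] @ beta_i                  # predict the held-out point
    loo_errors.append((y[i] - y_pred_i) ** 2)

print(f"LOOCV estimate of the quadratic risk: {np.mean(loo_errors):.3f}")
```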

This technique is reminiscent of the traditional method used to choose the optimal smoothing parameter in exponential smoothing for time series. In simple exponential smoothing, the forecast is constructed from a time series as {}_t\widehat{y}_{t+1} =\alpha\cdot{}_{t-1}\widehat{y}_t +(1-\alpha)\cdot y_t, where \alpha\in[0,1], and the “optimal” value is \alpha^\star = \underset{\alpha\in[0,1]}{\text{argmin}}\left\lbrace \sum_{t=2}^T \ell({}_{t-1}\widehat{y}_{t},y_{t}) \right\rbrace, as described in Hyndman et al. (2009).
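As a hedged illustration of that idea (simulated series, grid search over \alpha, same smoothing convention as above):

```python
import numpy as np

rng = np.random.default_rng(4)
T = 200
y = np.cumsum(rng.normal(size=T)) + rng.normal(scale=0.5, size=T)  # noisy random walk

def one_step_sse(alpha, y):
    """Sum of squared one-step-ahead errors for simple exponential smoothing."""
    y_hat = y[0]                                      # initialize the forecast
    sse = 0.0
    for t in range(1, len(y)):
        sse += (y[t] - y_hat) ** 2
        y_hat = alpha * y_hat + (1 - alpha) * y[t]    # same convention as in the text
    return sse

alphas = np.linspace(0.01, 0.99, 99)
alpha_star = alphas[np.argmin([one_step_sse(a, y) for a in alphas])]
print(f"selected smoothing parameter: {alpha_star:.2f}")
```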

The main problem with the leave-one-out method is that it requires fitting n models, which can be problematic in large dimensions. An alternative is k-block cross-validation (“k-fold cross-validation”), which uses a partition of \{1,\cdots,n\} into k groups (or blocks) of the same size, \mathcal{I}_1,\cdots,\mathcal{I}_k; let \mathcal{I}_{\bar j}=\{1,\cdots,n\}\setminus \mathcal{I}_j. Denoting by \widehat{m}_{(j)} the model fitted on the sample \mathcal{I}_{\bar j}, we then set \widehat{\mathcal{R}}^{k-\text{CV}}=\frac{1}{k}\sum_{j=1}^k \mathcal{R}_j\text{ where }\mathcal{R}_j=\frac{k}{n}\sum_{i\in\mathcal{I}_{j}} \ell(y_i,\widehat{m}_{(j)}(\mathbf{x}_i)). Standard cross-validation, where only one observation is removed at a time (loocv), is a special case, with k=n. Using k=5 or 10 has a double advantage over k=n: (1) the number of models to estimate is much smaller, 5 or 10 rather than n; (2) the samples used for estimation are less similar and therefore less correlated with each other, which tends to avoid excess variance, as recalled by James et al. (2013).
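A minimal k-fold sketch, again with ordinary least squares and squared loss on simulated data, could look like this:

```python
import numpy as np

def k_fold_risk(X, y, k=5, seed=0):
    """Empirical k-fold cross-validation risk for OLS with squared loss."""
    n = len(y)
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n), k)        # random partition into k blocks
    risks = []
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(n), test_idx)
        beta = np.linalg.lstsq(X[train_idx], y[train_idx], rcond=None)[0]
        risks.append(np.mean((y[test_idx] - X[test_idx] @ beta) ** 2))
    return np.mean(risks)

# Simple simulated linear model
rng = np.random.default_rng(5)
n = 200
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])
y = X @ np.array([2.0, 0.5]) + rng.normal(size=n)
print(f"5-fold CV risk: {k_fold_risk(X, y, k=5):.3f}")
```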

Another alternative is to use bootstrap samples. Let \mathcal{I}_b be a sample of size n obtained by drawing with replacement from \{1,\cdots,n\}, indicating which observations (y_i,\mathbf{x}_i) are kept in the learning sample (at each draw). Let \mathcal{I}_{\bar b}=\{1,\cdots,n\}\setminus\mathcal{I}_b. Denoting by \widehat{m}_{(b)} the model fitted on sample \mathcal{I}_b, we then set \widehat{\mathcal{R}}^{\text{B}}=\frac{1}{B}\sum_{b=1}^B \mathcal{R}_b\text{ where }\mathcal{R}_b=\frac{1}{n_{\overline{b}}}\sum_{i\in\mathcal{I}_{\overline{b}}} \ell(y_i,\widehat{m}_{(b)}(\mathbf{x}_i)), where n_{\bar b} is the number of observations that were not kept in \mathcal{I}_b. It should be noted that, with this technique, on average a proportion e^{-1}\approx 36.8\% of the observations do not appear in the bootstrap sample, which is of the same order of magnitude as the proportion usually set aside for a test sample. In fact, as Stone (1977) showed, minimizing AIC is comparable to the leave-one-out cross-validation criterion, and Shao (1997) showed that minimizing BIC corresponds to k-fold cross-validation, with k=n/\log n.
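A minimal out-of-bag version of this bootstrap estimate (simulated data, with the per-sample risk taken as the average loss over the out-of-bag observations) could be sketched as follows; the average out-of-bag share should indeed be close to e^{-1}:

```python
import numpy as np

def bootstrap_oob_risk(X, y, B=200, seed=0):
    """Out-of-bag estimate of the quadratic risk for OLS over B bootstrap samples."""
    n = len(y)
    rng = np.random.default_rng(seed)
    risks, oob_prop = [], []
    for _ in range(B):
        in_bag = rng.integers(0, n, size=n)              # indices drawn with replacement
        out_bag = np.setdiff1d(np.arange(n), in_bag)     # observations never drawn
        oob_prop.append(len(out_bag) / n)                # should be close to exp(-1) ~ 36.8%
        beta = np.linalg.lstsq(X[in_bag], y[in_bag], rcond=None)[0]
        risks.append(np.mean((y[out_bag] - X[out_bag] @ beta) ** 2))
    return np.mean(risks), np.mean(oob_prop)

rng = np.random.default_rng(6)
n = 200
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])
y = X @ np.array([2.0, 0.5]) + rng.normal(size=n)
risk, prop = bootstrap_oob_risk(X, y)
print(f"bootstrap risk estimate: {risk:.3f}, average out-of-bag share: {prop:.3f}")
```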

All the techniques mentioned here belong to the “machine learning” part of the series since they rely on automatic, computational procedures and require no probabilistic foundations. In many cases we used the notation m^\star (at least in the first posts on “machine learning” techniques) to highlight the fact that we are looking for some sort of “optimal” model, and to distinguish it from the estimators \widehat{m} considered earlier, within a probabilistic framework. But of course, it is possible (and necessary) to build bridges between these two cultures…

References are online here. As explained in the introduction, this is some sort of online version of an introduction to our joint paper with Emmanuel Flachaire and Antoine Ly, Econometrics and Machine Learning (initially written in French), which will appear soon in the journal Economics and Statistics (in English and in French).
