The book Recent Econometric Techniques for Macroeconomic and Financial Data, edited by Gilles Dufrénot and Takashi Matsuki, has been published. With Emmanuel Flachaire, we wrote a chapter on Pareto models for risk management.
In Thinking, Fast and Slow, Daniel Kahneman discusses at length the importance of stereotypes in understanding many decision-making processes. A so-called System 1 is used for quick decision-making: it allows us to recognize people and objects, helps us focus our attention, and encourages us to fear spiders. It is based on knowledge stored in memory and accessible without intention, and without effort. It can be contrasted with System 2, which allows for more complex decision-making, requiring discipline and sequential reflection. In the first case, our brain uses the stereotypes that govern judgments of representativeness, and uses this heuristic to make decisions. If I cook fish for friends coming over for dinner, I open a bottle of white wine. The cliché “fish goes well with white wine” allows me to make a decision quickly, without having to think about it. Stereotypes are statements about a group that are accepted (at least provisionally) as facts about each member. Whether correct or not, stereotypes are the basic tools for thinking about categories in System 1. But in many cases, a more in-depth, more sophisticated reflection – corresponding to System 2 – will make it possible to reach a more judicious, even optimal decision. Without picking just any red wine, a pinot noir could perhaps also be perfectly suitable for roasted red mullet.
“To generalize is to be an idiot, to particularize is the alone distinction of merit” wrote William Blake around 1800, annotating speeches by the painter Joshua Reynolds. Stigmatizing an entire population because of a minority in a decision-making process is a misleading generalization, often punished by society. The punishment is moral, but sometimes also legal (in hiring, for example), in a society that aims to be civilized and asks us not to draw erroneous conclusions about an individual from the statistics of a group to which he or she is attached. But isn’t that what the actuary does every day?
For Schauer (2009), this “generalization“, condemned by William Blake, is probably the actuary’s raison d’être: “to be an actuary is to be a specialist in generalization, and actuaries engage in a form of decision-making that is sometimes called actuarial“. If I decide to insure a sports car, I am assigned the risky driving characteristics that probably belong to the majority of sports car owners, attributes that I may not share. And as we noted in the introduction, insurance companies, of course, are not the only ones that operate actuarially, according to Schauer’s definition. We all do it, much more often than most of us would probably recognize. We do this when we choose airlines based on their safety record, punctuality or lost luggage. We do this when we associate personal characteristics (a visible tattoo, black or brightly coloured clothing) with behavioural characteristics (such as a propensity for violence) that these personal characteristics would seem to indicate. And we operate in this way when we engage in stereotypes that may be harmless on the basis of nationality, for example by saying that French people are rude, or that all Scots wear kilts, while at the same time acknowledging that more pernicious stereotypes on the basis of ethnic origin, gender or sexual orientation are too widespread today! As the negative connotation of the word “prejudice” indicates, many people believe that it is unfair to make individual decisions based on non-universal group characteristics, even if those group attributions have a solid statistical basis. Because the big difference between actuarial science and everyday life is that actuaries have to use a large number of observations. On a personal level, I can thus decide not to travel with a given airline anymore because I have had two bad experiences on three trips. Before deciding that travel insurance deserves a higher premium when flying with this company, it takes more than three observations!
In fact, the question is often whether an insurance company’s refusal to provide coverage, or its decision to increase the premiums it charges for the same coverage, is an injustice when it is based on an actuarially justified (but perhaps not universal) generalization. As Lemmens (2000) noted, the question was put to the legislator when insurers observed that Jewish women from Eastern Europe were particularly vulnerable to breast and ovarian cancer. At the end of 2012, the European Court of Justice put an end to all discrimination based on the gender of policyholders: insurers were no longer able to differentiate insurance prices according to whether the insured was male or female. But the use of age is still allowed. Indeed, age is often an indicator of a possible decrease in vision or hearing, slower reaction times (and an increased risk of sudden disability), etc. And although there are many individual variations, the available data provide important empirical justification.
A major criticism of machine learning models is their lack of interpretability. But very often, the validation of econometric models requires a narrative built around stereotypes. And this narrative is essential, as Pearl & Mackenzie (2018) remind us. Indeed, in their “Ladder of Causation“, there are three levels. At the first level, we find the notion of association (or correlation), or even conditional probability, which serves as a basis for the constitution of stereotypes: if we observe
P[cavities | brushing your teeth] < P[cavities | not brushing your teeth]
then brushing teeth will be associated with a decrease in the probability of having cavities. It is also the basis for regression methods, which rely on correlations between the variable of interest and others, wrongly called explanatory. In Figure 1, we can see the daily cycling traffic in Helsinki, and the average temperature. We will tend to prefer the panel on the left, showing the evolution of the number of cyclists as a function of temperature, suggesting that temperature could explain the number of cyclists, and not the other way around. But the stereotype does not necessarily rest on the causal link: if I see a lot of cyclists passing by the window, I will tell myself it must be hot, or at least warm.
Figure 1: Näytä Data – Author’s visualization
The first level answers the question “what if I see…?“ (e.g. “what cycling traffic should we expect if the temperature reaches 20°C?“), and this task can be perfectly accomplished by a machine. The second level is the one that makes it possible to understand an effect, an intervention. The question is then “what if I do…?“. To use our example, we are trying to understand the importance of brushing our teeth on the appearance of cavities. What if brushing your teeth is simply more natural for children with good teeth? We then see the third level of the ladder coming up, asking the question “what if I had done…?“, based on the idea of a counterfactual model. We are no longer content to measure correlations; we build a model explaining what would happen if we made a change in the causal variables: what would really happen if the child who did not brush his teeth began to do so? For Pearl & Mackenzie (2018), a human being (maybe even an actuary) can make these more advanced arguments, which a machine cannot (yet) do. And very often, these causal patterns are stereotyped. As Charpentier & Diago Barry (2015) point out, in epidemiology, researchers have long questioned how to explain the fact that small babies of smoking mothers have a higher probability of survival than small babies of non-smoking mothers. The intuition that something is wrong comes from the prejudices, the stereotypes that we have, and that a machine cannot have.
As Antonio & Charpentier (2017) noted, the European “gender directive” has confused many insurers who used gender to construct their rates, as gender was highly correlated with the frequency of claims. But once telematic data were introduced, gender was no longer significant in the regression. Gender had long been used as a proxy to capture an effect that can now be observed directly with telematic data, giving rise along the way to many sexist and other stereotypes.
But narratives also make it possible to distinguish between a spurious correlation and a correlation that can be interpreted. In Figure 2, we have life expectancy at birth by French department, a variable that we could try to explain, for example, in the context of a pension study. Alongside it, two variables taken at random: the number of tennis licences, and the number of advertising agencies. Stereotypes are what allow us to construct a causal graph, and to understand why there is such a strong correlation between these variables and life expectancy.
Figure 2: Life expectancy at birth for men, left. At the centre, number of tennis licenses per 100,000 inhabitants (source FFT). On the right, number of advertising agencies per 100,000 inhabitants (source INSEE, code NAF 7311Z). Visualization of the author.
While William Blake condemned stereotypes by saying “to generalize is to be an idiot“, he also clearly went further, continuing with “to particularize is the alone distinction of merit“. This individualisation is also advocated by more and more insurers, and even desired by many insureds. But as Grace & Terry (2002) pointed out, many policyholders suffer from a significant optimism bias – “if I have an accident, it will not be my fault” – leading them to doubt the insurer’s classification – “I’m less risky than the others“. And morality seems to prove them right, against the actuaries. Yet, not only is generality not, in general, unjust, but justice itself can have considerable elements of generality. To the extent that justice is centred on equity, and to the extent that equity itself is closely linked to equality, then equity, and therefore justice, can be seen as itself based on the idea of generality. The just society is not necessarily a society in which each individual is treated as an isolated set of unique attributes, requiring individualized attention. On the contrary, in some cases, the just society is a society in which generality is not only unavoidable, but also necessary for justice itself. And pooling risks together is the natural response in an insurance context. And it might not be such a big deal if that class is not as homogeneous as it could be, or as we would have expected it to be…
Antonio, K. & Charpentier, A. (2017). La tarification par genre en assurance, corrélation ou causalité ? Risques, 110: 107-110.
Charpentier, A. & Diago Barry, A. (2015). Big data : passer d’une analyse de corrélation à une interprétation causale. Risques, 101: 107-111.
Grace, J. & Terry, M. (2002). Exploring the Causes of Comparative Optimism. Psychologica Belgica, 42: 65-98.
Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
Lemmens, T. (2000). Selective Justice, Genetic Discrimination, and Insurance: Should We Single Out Genes in Our Laws? McGill Law Journal / Revue de droit de McGill, 45(2): 347-412.
Pearl, J. & Mackenzie, D. (2018). The Book of Why: The New Science of Cause and Effect. Basic Books.
Schauer, F.F. (2009). Profiles, Probabilities, and Stereotypes. Harvard University Press.
This post is the ninth (and probably last) one of our series on the history and foundations of econometric and machine learning models. The first four were on econometric techniques. Part 8 is online here.
In econometrics, (numerical) optimization became omnipresent as soon as we left the Gaussian model. We briefly mentioned it in the section on the exponential family, and the use of the Fisher score (gradient descent) to solve the first order condition \mathbf{X}^T W(\beta)^{-1}[\mathbf{y}-\widehat{\mathbf{y}}]=\mathbf{0}. In learning, optimization is the central tool. And it is necessary to have effective optimization algorithms, to solve problems (described previously) of the form \widehat{\beta}\in\underset{\beta\in\mathbb{R}^p}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,\beta_0+\mathbf{x}_i^T\beta)+\lambda\Vert\boldsymbol{\beta}\Vert\right\rbrace. In some cases, instead of global optimization, it is sufficient to consider optimization coordinate by coordinate (widely studied in Daubechies et al. (2004)). If f:\mathbb{R}^d\rightarrow\mathbb{R} is convex and differentiable, and if \mathbf{x} satisfies f(\mathbf{x}+h\mathbf{e}_i)\geq f(\mathbf{x}) for any h>0 and i\in\{1,\cdots, d\}, then f(\mathbf{x})=\min\{f\}, where \mathbf{e}=(\mathbf{e}_i) is the canonical basis of \mathbb{R}^d. However, this property is not true in the non-differentiable case. But if we assume that the non-differentiable part is (additively) separable, it becomes true again. More specifically, this is the case if f(\mathbf{x})=g(\mathbf{x})+\sum_{i=1}^d h_i(x_i) with \left\lbrace\begin{array}{l}g: \mathbb{R}^d\rightarrow\mathbb{R}\text{ convex and differentiable}\\h_i: \mathbb{R}\rightarrow\mathbb{R}\text{ convex}\end{array}\right. This was the case for the Lasso regression, \beta\mapsto\| \mathbf{y}-\beta_0-\mathbf{X}\beta\|_{\ell_2}+\lambda\|\beta\|_{\ell_1}, as shown by Tseng (2001). Getting back to our initial notations, we can use a coordinate descent algorithm: from an initial value \mathbf{x}^{(0)}, we iterate x_j^{(k)}\in\underset{x_j}{\text{argmin}}\big\lbrace f(x_1^{(k)},\cdots,x_{j-1}^{(k)},x_j,x_{j+1}^{(k-1)},\cdots,x_d^{(k-1)})\big\rbrace for j=1,2,\cdots,d. These algorithmic problems and numerical issues may seem secondary to econometricians. However, they are essential in machine learning: a technique is interesting only if there is a stable and fast algorithm to obtain a solution. These optimization techniques can be transposed: for example, this coordinate descent technique can be used in the case of SVM (support vector machine) methods, when the space is not linearly separable and the classification error must be penalized (we will come back to this technique in the next section).
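To make this coordinate descent idea concrete, here is a minimal sketch for the Lasso in R, using the soft-thresholding operator applied coordinate by coordinate (our own illustrative code, not the algorithm of any specific package; the names lasso_cd and soft are ours, and the data are simulated).

# soft-thresholding operator
soft <- function(z, gamma) sign(z) * pmax(abs(z) - gamma, 0)

# coordinate descent for the Lasso (centred y, no intercept)
lasso_cd <- function(X, y, lambda, n_iter = 100) {
  n <- nrow(X); p <- ncol(X)
  beta <- rep(0, p)
  for (k in 1:n_iter) {
    for (j in 1:p) {
      r_j <- y - X[, -j, drop = FALSE] %*% beta[-j]   # partial residual, without coordinate j
      beta[j] <- soft(sum(X[, j] * r_j) / n, lambda) / (sum(X[, j]^2) / n)
    }
  }
  beta
}

# small simulated example
set.seed(1)
n <- 200; p <- 5
X <- scale(matrix(rnorm(n * p), n, p))
y <- X[, 1] - 2 * X[, 2] + rnorm(n); y <- y - mean(y)
lasso_cd(X, y, lambda = 0.1)          # only the first two coordinates should remain large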
These techniques seem intellectually interesting, but we have not yet discussed the choice of the penalty parameter \lambda. But this problem is actually more general, because comparing two parameters \widehat{\beta}_{\lambda_1} and \widehat{\beta}_{\lambda_2} actually means comparing two models. In particular, if we use a Lasso method with different thresholds \lambda, we compare models that do not have the same dimension. Previously, we addressed the problem of model comparison from an econometric perspective (by penalizing overly complex models). In the learning literature, judging the quality of a model on the data used to construct it does not make it possible to know how the model will behave on new data. This is the so-called “generalization” problem. The traditional approach then consists in splitting the sample (of size n) into two parts: one part that will be used to train the model (the training database, in-sample, of size m) and one part that will be used to test the model (the testing database, out-of-sample, of size n-m). The latter then makes it possible to measure a real predictive risk. Suppose that the data are generated by a linear model y_i=\mathbf{x}_i^T \beta_0+\varepsilon_i where the \varepsilon_i are independent draws from a centred distribution. The empirical in-sample quadratic risk is here \frac{1}{m}\sum_{i=1}^m\mathbb{E}\big([\mathbf{x}_i^T \widehat{\beta}-\mathbf{x}_i^T \beta_0]^2\big)=\mathbb{E}\big([\mathbf{x}_i^T \widehat{\beta}-\mathbf{x}_i^T \beta_0]^2\big), for any observation i. Assuming the residuals \varepsilon are Gaussian, it can be shown that this risk equals \sigma^2 \text{trace} (\Pi_X)/m, i.e. \sigma^2 p/m. On the other hand, the empirical out-of-sample quadratic risk is \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big) where \mathbf{x} is a new observation, independent of the others. It can be noted that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big\vert \mathbf{x}\big)=\text{Var}\big(\mathbf{x}^T \widehat{\beta}\big\vert \mathbf{x}\big)=\sigma^2\mathbf{x}^T(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{x}, and by integrating with respect to \mathbf{x}, \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T\beta_0]^2\big)=\sigma^2\text{trace}\big(\mathbb{E}[\mathbf{x}\mathbf{x}^T]\mathbb{E}\big[(\mathbf{X}^T\mathbf{X})^{-1}\big]\big). The expression is then different from the one obtained in-sample, and using the Groves & Rothenberg (1969) bound, we can show that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big) \geq \sigma^2\frac{p}{m}, which is pretty intuitive once we start thinking about it. Except in some simple cases, there is no simple (explicit) formula. Note, however, that if \mathbf{x}\sim\mathcal{N}(0,\sigma^2 \mathbb{I}), then \mathbf{X}^T \mathbf{X} follows a Wishart distribution, and it can be shown that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big)=\sigma^2\frac{p}{m-p-1}. If we now look at the empirical version: if \widehat{\beta} is estimated on the first m observations, \widehat{\mathcal{R}}^{\text{IS}}=\sum_{i=1}^m [y_i-\mathbf{x}_i^T\widehat{\beta}]^2\text{ and }\widehat{\mathcal{R}}^{\text{OS}}=\sum_{i=m+1}^{n} [y_i-\mathbf{x}_i^T\widehat{\beta}]^2, and as Leeb (2008) noted, \widehat{\mathcal{R}}^{\text{IS}}-\widehat{\mathcal{R}}^{\text{OS}}\approx 2\cdot\nu where \nu represents the number of degrees of freedom, which is not unlike the penalty used in the Akaike criterion.
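The two theoretical orders of magnitude, \sigma^2 p/m in-sample and \sigma^2 p/(m-p-1) out-of-sample under a Gaussian design, can be checked with a small Monte Carlo experiment (a sketch only, with arbitrary values of m, p and \sigma chosen by us).

# Monte Carlo check of the in-sample vs out-of-sample quadratic risks
set.seed(1)
m <- 50; p <- 5; sigma <- 1
beta0 <- rep(1, p)
B <- 2000
risk_is <- risk_os <- numeric(B)
for (b in 1:B) {
  X  <- matrix(rnorm(m * p), m, p)                # Gaussian in-sample design
  y  <- X %*% beta0 + rnorm(m, sd = sigma)
  bh <- solve(t(X) %*% X, t(X) %*% y)             # OLS estimator
  Xn <- matrix(rnorm(m * p), m, p)                # new, independent observations
  risk_is[b] <- mean((X  %*% (bh - beta0))^2)
  risk_os[b] <- mean((Xn %*% (bh - beta0))^2)
}
c(in_sample  = mean(risk_is), theory_is = sigma^2 * p / m,
  out_sample = mean(risk_os), theory_os = sigma^2 * p / (m - p - 1))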
Figure 4 shows the respective evolution of \widehat{\mathcal{R}}^{\text{IS}} and \widehat{\mathcal{R}}^{\text{OS}} as a function of the complexity of the model (the degree of a polynomial regression, the number of nodes in splines, etc). The more complex the model, the more \widehat{\mathcal{R}}^{\text{IS}} will decrease (this is the red curve, below). But that is not what we are interested in here: we want a model that predicts well on new data (i.e. out-of-sample). As Figure 4 shows, if the model is too simple, it does not predict well (not even on in-sample data). But what we can see is that if the model is too complex, we are in a situation of overfitting: the model starts to model the noise. Of course, this figure should remind us of the one we have seen in our second post of that series.
Figure 4 : Generalization, under- and over-fitting
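A figure in the spirit of Figure 4 can be reproduced with a few lines of R, fitting polynomial regressions of increasing degree on a training sample and computing the quadratic error on a testing sample (an illustrative sketch, with simulated data and an arbitrary sine signal of our choosing).

# in-sample vs out-of-sample error as the complexity (polynomial degree) increases
set.seed(123)
n <- 200
x <- runif(n, 0, 10)
y <- sin(x) + rnorm(n, sd = .4)
idx  <- sample(1:n, n / 2)                        # training / testing split
degs <- 1:12
err  <- t(sapply(degs, function(d) {
  fit <- lm(y ~ poly(x, d), subset = idx)
  pr  <- predict(fit, newdata = data.frame(x = x))
  c(IS = mean((y[idx]  - pr[idx])^2),
    OS = mean((y[-idx] - pr[-idx])^2))
}))
matplot(degs, err, type = "b", pch = 19, col = c("red", "blue"),
        xlab = "degree of the polynomial", ylab = "quadratic error")
legend("topright", c("in-sample", "out-of-sample"),
       col = c("red", "blue"), lty = 1:2, pch = 19)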
Instead of splitting the database in two, with some of the data used to calibrate the model and some to study its performance, it is also possible to use cross-validation. To present the general idea, we can go back to the “jackknife”, introduced by Quenouille (1949) (and formalized by Quenouille (1956) and Tukey (1958)), commonly used in statistics to reduce bias. Indeed, if we assume that \{y_1,\cdots,y_n\} is a sample drawn from a distribution F_\theta, and that we have an estimator T_n (\mathbf{y})=T_n (y_1,\cdots,y_n), but that this estimator is biased, with \mathbb{E}[T_n (\mathbf{Y})]=\theta+O(n^{-1}), it is possible to reduce the bias by considering \widetilde{T}_n(\mathbf{y})=\frac{1}{n}\sum_{i=1}^n T_{n-1}(\mathbf{y}_{(i)})\text{ where }\mathbf{y}_{(i)}=(y_1,\cdots,y_{i-1},y_{i+1},\cdots,y_n). It can then be shown that \mathbb{E}[\widetilde{T}_n(\mathbf{Y})]=\theta+O(n^{-2}). The idea of cross-validation relies on the same principle of building an estimator by removing one observation. Since we want to build a predictive model, we compare the forecast obtained with the estimated model and the observation left out, \widehat{\mathcal{R}}^{\text{CV}}=\frac{1}{n}\sum_{i=1}^n \ell(y_i,\widehat{m}_{(i)}(\mathbf{x}_i)). We will speak here of the “leave-one-out” (loocv) method.
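A naive transcription of \widehat{\mathcal{R}}^{\text{CV}} could look as follows (a sketch with quadratic loss, refitting a linear model n times; the helper name loocv is ours, and the built-in cars dataset is only used as an example).

# naive leave-one-out cross-validation, quadratic loss
loocv <- function(formula, data) {
  n <- nrow(data)
  err <- numeric(n)
  for (i in 1:n) {
    fit <- lm(formula, data = data[-i, ])          # model estimated without observation i
    err[i] <- (data[i, all.vars(formula)[1]] -
               predict(fit, newdata = data[i, ]))^2
  }
  mean(err)
}
loocv(dist ~ speed, data = cars)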
This technique reminds us of the traditional method used to find the optimal parameter in exponential smoothing methods for time series. In simple smoothing, we will construct a forecast from a time series as {}_t\widehat{y}_{t+1} =\alpha\cdot{}_{t-1}\widehat{y}_t +(1-\alpha)\cdot y_t, where \alpha\in[0,1], and we will consider as “optimal” \alpha^\star = \underset{\alpha\in[0,1]}{\text{argmin}}\left\lbrace \sum_{t=2}^T \ell({}_{t-1}\widehat{y}_{t},y_{t}) \right\rbraceas described by Hyndman et al (2009).
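For instance, the “optimal” \alpha can be obtained by minimizing the sum of squared one-step-ahead forecast errors; a sketch below, using the parametrization of the formula above and the classical Nile series as an example (dedicated functions such as HoltWinters() do this in a more robust way, with the weights usually written the other way around).

# simple exponential smoothing: choose alpha by minimizing one-step-ahead errors
sse_alpha <- function(alpha, y) {
  f <- y[1]                                    # initial one-step-ahead forecast
  sse <- 0
  for (t in 2:length(y)) {
    sse <- sse + (y[t] - f)^2                  # squared one-step-ahead error
    f <- alpha * f + (1 - alpha) * y[t]        # update, parametrized as in the text
  }
  sse
}
y <- as.numeric(Nile)                          # a classical univariate time series
optimize(sse_alpha, interval = c(0, 1), y = y)$minimum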
The main problem with the leave-one-out method is that it requires the calibration of n models, which can be problematic in large dimensions. An alternative method is cross-validation by k blocks (called “k-fold cross validation”), which consists in using a partition of \{1,\cdots,n\} into k groups (or blocks) of the same size, \mathcal{I}_1,\cdots,\mathcal{I}_k; let us note \mathcal{I}_{\bar j}=\{1,\cdots,n\}\setminus \mathcal{I}_j. Denoting by \widehat{m}_{(j)} the model built on the sample \mathcal{I}_{\bar j}, we then set \widehat{\mathcal{R}}^{k-\text{CV}}=\frac{1}{k}\sum_{j=1}^k \mathcal{R}_j\text{ where }\mathcal{R}_j=\frac{k}{n}\sum_{i\in\mathcal{I}_{{j}}} \ell(y_i,\widehat{m}_{(j)}(\mathbf{x}_i)). Standard cross-validation, where only one observation is removed each time (loocv), is a special case, with k=n. Using k=5 or 10 has a double advantage over k=n: (1) the number of estimates to be made is much smaller, 5 or 10 rather than n; (2) the samples used for estimation are less similar and therefore less correlated with each other, which tends to avoid excess variance, as recalled by James et al. (2013).
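The k-fold version of the previous sketch is immediate (again a naive illustrative implementation, with a random partition into k blocks; the function name kfold_cv is ours).

# naive k-fold cross-validation, quadratic loss
kfold_cv <- function(formula, data, k = 10) {
  n <- nrow(data)
  folds <- sample(rep(1:k, length.out = n))     # random partition into k blocks
  err <- numeric(k)
  for (j in 1:k) {
    fit <- lm(formula, data = data[folds != j, ])
    pr  <- predict(fit, newdata = data[folds == j, ])
    err[j] <- mean((data[folds == j, all.vars(formula)[1]] - pr)^2)
  }
  mean(err)
}
kfold_cv(dist ~ speed, data = cars, k = 5)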
Another alternative is to use bootstrap samples. Let \mathcal{I}_b be a sample of size n obtained by drawing with replacement in \{1,\cdots,n\}, to know which observations (y_i,\mathbf{x}_i) will be kept in the learning population (at each draw). Note \mathcal{I}_{\bar b}=\{1,\cdots,n\}\setminus\mathcal{I}_b. Denoting by \widehat{m}_{(b)} the model built on sample \mathcal{I}_b, we then set \widehat{\mathcal{R}}^{\text{B}}=\frac{1}{B}\sum_{b=1}^B \mathcal{R}_b\text{ where }\mathcal{R}_b=\frac{n_{\overline{b}}}{n}\sum_{i\in\mathcal{I}_{\overline{b}}} \ell(y_i,\widehat{m}_{(b)}(\mathbf{x}_i)), where n_{\bar b} is the number of observations that have not been kept in \mathcal{I}_b. It should be noted that, with this technique, on average a proportion e^{-1}\approx 36.8\% of the observations do not appear in the bootstrap sample, and we recover an order of magnitude of the proportions used when creating a calibration sample and a test sample. In fact, as Stone (1977) had shown, the minimization of AIC is to be compared to the cross-validation criterion, and Shao (1997) showed that the minimization of BIC corresponds to k-fold cross-validation, with k=n/\log n.
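The out-of-bag version follows the same pattern: each model is fitted on a bootstrap sample and evaluated on the observations left aside (a sketch only, with an arbitrary number of bootstrap samples B and the same toy dataset as above).

# bootstrap (out-of-bag) estimate of the quadratic risk
oob_risk <- function(formula, data, B = 200) {
  n <- nrow(data)
  err <- numeric(B)
  for (b in 1:B) {
    idx <- sample(1:n, n, replace = TRUE)       # bootstrap sample (with replacement)
    oob <- setdiff(1:n, idx)                    # observations left aside (about 36.8%)
    fit <- lm(formula, data = data[idx, ])
    pr  <- predict(fit, newdata = data[oob, ])
    err[b] <- mean((data[oob, all.vars(formula)[1]] - pr)^2)
  }
  mean(err)
}
oob_risk(dist ~ speed, data = cars)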
All those techniques are mentioned here in the “machine learning” part since they rely on automatic, computational procedures, and no probabilistic foundations are necessary. In many cases we did use the notation m^\star (at least in the first posts on “machine learning” techniques) to highlight the fact that we want some sort of “optimal” model – and to make a distinction with the estimators \widehat{m} considered earlier, when we had some probabilistic framework. But of course, it is possible (and necessary) to build bridges between those two cultures…
References are online here. As explained in the introduction, it is some sort of online version of an introduction to our joint paper with Emmanuel Flachaire and Antoine Ly, Econometrics and Machine Learning (initially written in French), that will actually appear soon in the journal Economics and Statistics (in English and in French).
Our research chair ACTINFO, with our colleagues from Lyon at the DAMI, PREVENT’HORIZON and ACTUARIAT DURABLE chairs, will organize a two-day conference in Paris, on Insurance, Actuarial Science, Data & Models, in ten days.
We invited Katrien ANTONIO (KU Leuven), Alexandre BOUMEZOUED (Milliman Paris), Alfred GALICHON (New York University), Pierre-Yves GEOFFARD (Paris School of Economics), Meglena JELEVA (University of Paris Nanterre), Julie JOSSE (Ecole Polytechnique), Florence JUSOT (Paris Dauphine University), Michael LUDKOWSKI (University of California Santa Barbara), François PANNEQUIN (CREST and ENS Paris-Saclay), Florian PELGRIN (Edhec Business School), Dylan POSSAMAI (Columbia University) and Julien TRUFIN (ULB Brussels). More information (including the program) is online.
Yesterday, Andrew Lo spent some time on a nice graph, discussing attitudes towards risk. Here are four assets (thanks @TCJUK for improving the terminology), real data (no information here about time, but it’s the same scale for the four of them)
The question raised was quite simple
if you could invest in one, and only one, asset which one will you pick ?
Tuesday afternoon, I will give a lecture on Data Science & Big Data for Actuaries in Barcelona, for the Summer School of the Asociación Española de Gerencia de Riesgos y Seguros. For this two-hour lecture, there will be live streaming on UBtv. Slides are available online (as usual, download the pdf to get the animated version).
Our joint paper Local Utility and Multivariate Risk Aversion, written with Alfred Galichon and Marc Henry will appear soon in Mathematics of Operations Research.
I will present it, next Thursday in our Risk Workshop, in Rennes, organized by Olivier L’Haridon.
Tomorrow morning, I will give a crash course on risk measures at Louvain-la-Neuve, in Belgium. This is a crash course for PhD students (and researchers), with a long introduction on the univariate static framework (and some mathematical tools that will be interesting later on, such as the Fenchel transform and, more generally, convexity, as well as some results on optimal transport). I will also mention what was obtained in decision theory, inspired by Itzhak Gilboa‘s Theory of Decision under Uncertainty. Then I will mention extensions to derive multiple risk measures, based on Marc Henry and Alfred Galichon‘s work. Finally, I will conclude by introducing the difficulty of deriving dynamic risk measures.
The slides are based on a document I am still working on. And unfortunately, the deeper I get into explaining the roots of the axioms, or the assumptions, the more papers I discover (and need to read, and understand). So I guess I will need some time to finalize my survey. Note that I decided to skip details on technical issues when working on L^\infty, and on the weak topology on the dual of L^\infty. I will try to add additional references in the notes, but I wanted the slides to be as simple as possible. I also want to add more connections with statistical results, such as the Neyman-Pearson lemma, for instance (as mentioned in a paper by Alexander Schied). All my apologies for the typos, too.
In insurance, the law of large numbers (initially named loi des grands nombres by Siméon Poisson, see e.g. http://en.wikipedia.org/…) is usually mentioned to legitimate large portfolios, because of pooling and diversification: the larger the pool, the more ‘predictable’ the losses will be (in a given period). Of course, this holds under standard statistical assumptions, namely finite expected value and independence (see http://freakonometrics.blog.free.fr/…. for a discussion, in French). But in insurance, catastrophes are usually rare – and extremely costly – and actuaries might be interested in modelling the occurrence of that small number of events (see e.g. Aldous’ book on that specific topic, which can be downloaded from http://stat.berkeley.edu/…). The theorem behind it is sometimes called the law of small numbers (from the book published by Ladislaus Bortkiewicz, but we’ll get back to that story later on; see also Whitaker (1914) http://biomet.oxfordjournals.org/… or the book recently published by Michael Falk, Jürg Hüsler and Rolf-Dieter Reiss).
The so-called Poisson distribution (see http://en.wikipedia.org/…) was introduced by Siméon Poisson in 1837 (in Recherches sur la Probabilité des Jugements en Matière Criminelle et en Matière Civile, Précédées des Règles Générales du Calcul des Probabilités, see http://gallica.bnf.fr/…). But it had been defined more than a century before, by Abraham De Moivre, in 1711, in De Mensura Sortis seu; de Probabilitate Eventuum in Ludis a Casu Fortuito Pendentibus (see e.g. the review in http://www.jstor.org/…). Let N denote a counting random variable; it is said to be Poisson distributed if there is \lambda>0 such that \mathbb{P}(N=k)=e^{-\lambda}\frac{\lambda^k}{k!},\ k\in\mathbb{N}.
De Moivre obtained that distribution from an approximation of the binomial distribution. Recall that the binomial distribution is a standard distribution in actuarial science, for instance to model the number of deaths among n insured. If individual death probabilities are identical, say p, and if deaths are independent events, then \mathbb{P}(N=k)=\binom{n}{k}p^k(1-p)^{n-k},\ k\in\{0,\cdots,n\}. And if n\rightarrow\infty and np\rightarrow\lambda, then \mathbb{P}(N=k)\rightarrow e^{-\lambda}\frac{\lambda^k}{k!}.
Again, this is an asymptotic theorem, which is valid when we have a lot of observations (n\rightarrow\infty), but also when the probability of occurrence is extremely small (since p\sim\lambda/n), which is why the term small numbers is used. Siméon Poisson was not interested in mathematical approximations: his main point was to get a distribution with nice goodness-of-fit properties for the data he was working on. He wanted to get a better understanding of the cours d’assises (jury panels might be a valid translation of the French term). A jury consisted of 12 jurors who voted to determine whether a defendant was guilty. When guilt was predominant, with at least 8 votes against 4, the defendant was convicted (which happened in 47% of criminal cases). With 7 votes against 5, the opinion of professional judges was requested (11% of criminal trials). Using these statistics, one can show that a defendant brought before an assize court is guilty with a probability of the order of 68%, and that the probability that a juror is not wrong when voting (condemning an innocent or releasing a culprit) was about 54%. He sought to calculate the probability that a defendant is wrongfully convicted, and obtained about 2%. And 28% of exonerated defendants were in fact guilty. Siméon Poisson introduced this law to compute such probabilities easily. But the law he considered is central in probability….
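Before moving on, De Moivre’s approximation above is easy to check numerically: with n large and p small, binomial and Poisson probabilities are almost indistinguishable (a quick sketch, with arbitrary values of n and p).

# Binomial(n, p) vs Poisson(np), with n large and p small
n <- 1000
p <- 0.002                                   # so that lambda = np = 2
k <- 0:10
round(cbind(binomial = dbinom(k, n, p),
            poisson  = dpois(k, n * p)), 5)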
The heuristic of the main theorem related to the Poisson distribution is the following: let X_1,\cdots,X_n denote i.i.d. random variables taking values in \mathbb{R}^d (in a general setting, one component can be the time, the other one an upper region of interest, where some stochastic process might be). Let A_n\subset\mathbb{R}^d. If \mathbb{P}(X_i\in A_n)\rightarrow 0 as n\rightarrow\infty (or, to be a little bit more specific about the assumptions, n\,\mathbb{P}(X_i\in A_n)\rightarrow\lambda), let N denote the (random variable characterizing the) count of events \{X_i\in A_n\}; then N can be approximated by a Poisson distribution with parameter \lambda=n\,\mathbb{P}(X_i\in A_n).
The heuristic is that if we consider a large number of observations, and if we count how many are in a given (small) region, then the number of such observations is Poisson distributed.
n=1000
X=runif(n)*10-1.5
Y=runif(n)*10-1.5
plot(X,Y,axes=FALSE,cex=.6)
u=seq(-1,1,by=.01)
v=sqrt(1-u^2)
polygon(c(u,rev(u)),c(v,rev(-v)),col="yellow",border=NA)
I=(X^2+Y^2)<1
points(X[I],Y[I],cex=.6,pch=19,col="red")
If we run some simulations,
The parameter of the Poisson distribution is the expected number of points in the disk, i.e. the total number of points times the area of the yellow disk over the area of the square,
> (lambda=10*pi)
[1] 31.41593
> lines(0:60-.5,dpois(0:60,lambda),type="b",col="red")
To get an interpretation related to insurance modeling, let \mathcal{L} denote an upper layer in a reinsurance contract, i.e. \mathcal{L}=\{x:x>d\} for some deductible d. Let the X_i’s denote individual losses. Then the number of claims that hit this upper layer can be modeled with a Poisson distribution. More precisely, if the deductible d becomes extremely large (and \mathbb{P}(X_i>d)\rightarrow 0), we obtain the peaks-over-threshold model in extreme value theory (see e.g. http://brale.math.hr/~iugrina/… or http://fire.nist.gov/bfrlpubs/…): if N has a Poisson distribution and if, conditionally on N, X_1,\cdots,X_N are independent identically distributed generalized Pareto random variables, then \max\{X_1,\cdots,X_N\} has the generalized extreme value distribution. Thus, exceedance models (for rare events) are closely related to Poisson processes.
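The Poisson behaviour of the number of exceedances can be checked by simulation (a sketch, with a lognormal loss distribution and a deductible chosen so that the expected yearly number of exceedances is 20; those choices are arbitrary).

# yearly number of losses exceeding a high deductible, versus a Poisson law
set.seed(1)
n <- 10000                                        # individual losses per year
d <- qlnorm(1 - 20/n)                             # deductible such that n*P(X>d) = 20
N <- replicate(2000, sum(rlnorm(n) > d))          # simulated yearly exceedance counts
c(mean = mean(N), variance = var(N))              # both close to 20, as for a Poisson law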
As mentioned above, the Poisson distribution appears when events occur somehow randomly and independently, over time. It is then natural to study the time between two occurrences (or two claims, in an insurance context).
It is neither Siméon Poisson nor De Moivre, but Ladislaus Von Bortkiewicz who first mentioned the Poisson distribution as the law of small numbers. In 1898 (see https://archive.org/…), he studied the number of soldiers killed by horse kicks, from 1875 till 1894, over 200 corps-years (more precisely 10 corps over 20 years).
He obtained the following distribution (here, the parameter of the Poisson distribution is 0.61, i.e. the average number of deaths per corps per year),
number of deaths per year | empirical counts | Poisson distribution
0 | 109 | 108.67
1 | 65 | 66.21
2 | 22 | 20.22
3 | 3 | 4.11
4 | 1 | 0.63
5 and more | 0 | 0.08
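The fitted column can be reproduced directly from the empirical counts (a sketch below; depending on rounding, the values may differ slightly from the table above).

# Bortkiewicz's horse-kick data: empirical counts vs fitted Poisson counts
deaths <- 0:4
counts <- c(109, 65, 22, 3, 1)                    # 200 corps-years in total
lambda <- sum(deaths * counts) / sum(counts)      # 122/200 = 0.61
round(200 * dpois(0:5, lambda), 2)                # to be compared with the table above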
It is possible to find a lot of cases where the Poisson distribution fits extremely well. For instance, if we consider the number of hurricanes that made landfall in Florida after 1850,
number of hurricanes per year | empirical frequency | Poisson frequency
0 | 30 | 27.16
1 | 48 | 47.99
2 | 37 | 42.41
3 | 29 | 24.98
4 | 8 | 11.03
5 | 3 | 3.90
6 | 3 | 1.15
7 | 1 | 0.29
8 and more | 0 | 0.08
The return period was introduced by Emil Gumbel, in hydrology, to link probabilities and durations (see e.g. http://freakonometrics.blog.free.fr/…). A decennial event has an occurrence probability of 1/10; 10 years is then the average waiting time before occurrence. This does not mean that the event will not occur before 10 years, or has to occur within 10 years. Consider a return period T (in years); then the yearly probability of non-occurrence is 1-\frac{1}{T}, the probability of non-occurrence over n years is \left(1-\frac{1}{T}\right)^n, and hence the probability of at least one occurrence over n years is 1-\left(1-\frac{1}{T}\right)^n. It is standard to summarize this property with the following table,
number of years (n) \ return period (T) | 10 | 20 | 50 | 100 | 200
10 | 65.1% | 40.1% | 18.3% | 9.6% | 4.9%
20 | 87.8% | 64.2% | 33.2% | 18.2% | 9.5%
50 | 99.5% | 92.3% | 63.6% | 39.5% | 22.5%
100 | 99.9% | 99.4% | 86.7% | 63.4% | 39.5%
200 | 99.9% | 99.9% | 98.2% | 86.6% | 63.3%
The diagonal in the table above is extremely interesting. It looks like there is some kind of convergence towards a limiting value (here 63.2%). Indeed, when the number of years equals the return period (n=T), the number of events observed over n years has a binomial distribution with probability 1/n, which converges towards the Poisson distribution with parameter 1. The probability of having at least one catastrophe is then 1-e^{-1}, which is equal to 0.632.
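The table above (and the 63.2% limit on the diagonal) can be reproduced in a couple of lines,

# probability of at least one occurrence over n years, for a return period T
n  <- c(10, 20, 50, 100, 200)
Tp <- c(10, 20, 50, 100, 200)
round(100 * outer(n, Tp, function(n, Tp) 1 - (1 - 1/Tp)^n), 1)
1 - exp(-1)                                       # limit on the diagonal, 0.632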
The Poisson distribution keeps appearing when computing probabilities of rare events. For instance, consider the probability of having at least one incident in a nuclear plant in France, over a 50 year period. Assume that the annual probability p of an incident in a reactor is small, e.g. p=0.00005 (i.e. 0.005%). Assume further that reactors are independent of one another, and over time. The probability of having at least one incident over 80 reactors in 50 years is (exactly) 1-(1-p)^{80\times 50}.
Of course, a linear approximation is not correct (even if it was mentioned in some French newspaper, as explained in an old post http://freakonometrics.blog.free.fr/…)
On the other hand
> p=0.00005
> 1-(1-p)^(50*80)
[1] 0.1812733
> 1-exp(-50*80*p)
[1] 0.1812692
which is the probability that N is not null, i.e. 1-\mathbb{P}(N=0)=1-e^{-\lambda}, when N has a Poisson distribution with parameter \lambda=80\times 50\times p. We clearly see here an application of De Moivre’s approximation in risk management.
Another way of looking at this problem is based on the following idea: given the fact that in 45 years of observations on (roughly) 450 reactors worldwide, three major accidents were observed, including Three Mile Island (1979) and Fukushima (2011), the average time between accidents can be estimated at 16 years. For a single reactor, we can assume that the average time to wait before an incident is 450 times 16 years, i.e. 7200 years. In other words, the probability of having one incident, over one year, for one reactor is 1 over 7200 (this is the idea behind the return period concept). If we assume that accidents occur randomly and independently of each other (as defined above), then the number of major accidents observed over a period of 50 years in France follows a Poisson distribution with parameter 50/(7200/80). Thus, the probability of having at least one major accident over 50 years, with 80 reactors, can be estimated by 1-e^{-50\times 80/7200}, i.e.
> 1-exp(-50*80/7200)
[1] 0.4262466
(keeping in mind all the uncertainty around the estimated waiting time before a major accident to a single reactor!).
Marc Henry will present our paper Local Utility and Multivariate Risk Aversion (written with Alfred Galichon, still available online on http://papers.ssrn.com/) at the Risk Uncertainty and Decision Conference in Evanston, IL, at Northwestern University.
I will try to upload the slides soon…
I will be giving a talk on May 4th, at the Mathematical Finance Days, at HEC Montréal, on multivariate dynamic models for counts. The conference is organized by IFM2 (Institut de Finance Mathématique de Montréal). I will be chairing some session and I will give a talk based on the joint paper with Mathieu Boudreault.
The slides can be downloaded from the blog,
“In various situations in the insurance industry, in finance, in epidemiology, etc., one needs to represent the joint evolution of the number of occurrences of an event. In this paper, we present a multivariate integer‐valued autoregressive (MINAR) model, derive its properties and apply the model to earthquake occurrences across various pairs of tectonic plates. The model is an extension of Pedelis & Karlis (2011) where cross autocorrelation (spatial contagion in a seismic context) is considered. We fit various bivariate count models and find that for many contiguous tectonic plates, spatial contagion is significant in both directions. Furthermore, ignoring cross autocorrelation can underestimate the potential for high numbers of occurrences over the short‐term. An application to risk management and cat‐bond pricing will be discussed.”
Exchangeability is an extremely useful concept, since (most of the time) analytical expressions can be derived. But it can also be used to observe some unexpected behaviors, that we will discuss later on within a more general setting. For instance, in an old post, I discussed connections between correlation and risk measures (using simulations to illustrate, but in the context of exchangeable risks, calculations can be performed more accurately). Consider again the standard credit risk problem, where the quantity of interest is the number of defaults in a portfolio. Consider an homogeneous portfolio of exchangeable risks. The quantity of interest is here the sum of default indicators, S_n=Y_1+\cdots+Y_n, or perhaps the quantile function of the sum (since the Value-at-Risk is the standard risk measure). We have seen yesterday that – given the latent factor \Theta – the indicators Y_i are independent Bernoulli variables (either the company defaults, or not), so that, conditionally on \Theta=\theta, the sum has a binomial \mathcal{B}(n,\theta) distribution, i.e. we can derive the (unconditional) distribution of the sum, \mathbb{P}(S_n=s)=\binom{n}{s}\int_0^1\theta^s(1-\theta)^{n-s}\,dF_{\Theta}(\theta), so that the probability function of the sum is, assuming that \Theta has a beta distribution with parameters a and b, \mathbb{P}(S_n=s)=\binom{n}{s}\frac{B(a+s,b+n-s)}{B(a,b)}. Thus, the following code can be used to calculate the quantile function
> proba=function(s,a,m,n){
+ b=a/m-a
+ choose(n,s)*integrate(function(t){t^s*(1-t)^(n-s)*
+ dbeta(t,a,b)},lower=0,upper=1,subdivisions=1000,
+ stop.on.error = FALSE)$value
+ }
> QUANTILE=function(p=.99,a=2,m=.1,n=500){
+ V=rep(NA,n+1)
+ for(i in 0:n){
+ V[i+1]=proba(i,a,m,n)}
+ V=V/sum(V)
+ return(min(which(cumsum(V)>p))) }
Now observe that since the variates are exchangeable, it is possible to calculate explicitly the correlation between defaults. Here \mathbb{E}[Y_iY_j]=\mathbb{E}\big[\mathbb{E}[Y_iY_j\vert\Theta]\big]=\mathbb{E}[\Theta^2], i.e. \text{cov}(Y_i,Y_j)=\mathbb{E}[\Theta^2]-\mathbb{E}[\Theta]^2=\text{Var}(\Theta). Thus, the correlation between two default indicators is r(Y_i,Y_j)=\frac{\text{Var}(\Theta)}{\mathbb{E}[\Theta](1-\mathbb{E}[\Theta])}. Under the assumption that the latent factor is beta distributed, \Theta\sim\mathcal{B}(a,b), we get r(Y_i,Y_j)=\frac{1}{a+b+1}. Thus, as a function of the parameter a of the beta distribution (we consider beta distributions with the same mean, i.e. the same marginal distributions, so we have only one parameter left, which drives the correlation of the default indicators), it is possible to plot the quantile function,
> PICTURE=function(P){
+ A=seq(.01,2,by=.01)
+ VQ=matrix(NA,length(A),5)
+ for(i in 1:length(A)){
+ VQ[i,1]=QUANTILE(a=A[i],p=.9,m=P)
+ VQ[i,2]=QUANTILE(a=A[i],p=.95,m=P)
+ VQ[i,3]=QUANTILE(a=A[i],p=.975,m=P)
+ VQ[i,4]=QUANTILE(a=A[i],p=.99,m=P)
+ VQ[i,5]=QUANTILE(a=A[i],p=.995,m=P)
+ }
+ plot(A,VQ[,5],type="s",col="red",ylim=
+ c(0,max(VQ)),xlab="",ylab="")
+ lines(A,VQ[,4],type="s",col="blue")
+ lines(A,VQ[,3],type="s",col="black")
+ lines(A,VQ[,2],type="s",col="blue",lty=2)
+ lines(A,VQ[,1],type="s",col="red",lty=2)
+ lines(A,rep(500*P,length(A)),col="grey")
+ legend("topright",c("quantile 99.5%","quantile 99%",
+ "quantile 97.5%","quantile 95%","quantile 90%","mean"),
+ col=c("red","blue","black","blue","red","grey"),
+ lty=c(1,1,1,2,2,1),bty="n")
+ }
e.g. with a (marginal) default probability of 15%,
> PICTURE(.15)
On this graph, we observe that the stronger the correlation (the further to the left), the higher the quantile… Note that the same graph can be plotted with the correlation on the X-axis,
Which is quite intuitive, somehow. But if the marginal probability of default decreases, increasing the correlation might decrease the risk (i.e. the quantile function),
> PICTURE(.05)
(with the modified code to visualize the quantile as a function of the underlying default correlation) or even worse,
> PICTURE(.0075)
And it becomes all the more counterintuitive as the default probability decreases! So in the case of a portfolio of not-very-risky bond issuers (with high ratings), assuming a very strong correlation will lower the risk-based capital!
The Geneva Association just published on its website an interesting report on variable annuities and systemic risk (online here). Based on a definition of potentially systemically risky activities, on interconnectedness or substitutability, the report claims that since “none of the criteria is triggered”, variable annuities are “not a potentially systemically risky activity”. Even if “short-term effects are conceivable”. I guess it is a diplomatic way to say it…
Note that a series of slides can also be downloaded (there) on insurance and systemic risk. But that deserves a more detailed post.
As mentioned already here, while we were going to Québec City for the workshop, we had interesting discussions in the car, and Maciej mentioned an article recently published in The Actuary,
Hence, I wanted to discuss (extremely) rare event probabilities in tennis. The story is simple: in June 2010, at Wimbledon, Nicolas Mahut and John Isner played the longest match ever: 980 points, 11 hours and 5 minutes of play. But first of all, we need a dataset. Thanks to Duncan Murdoch, I have been able to run a short code to build up a dataset:
CITIES=c("berlin","madrid","paris","rolandgarros","wimbledon","sydney", "beijing","shanghai","singapore","tokyo","melbourne","melbourne-indoor") YEARS=1970:2009 BASE0=data.frame(YEAR=NA,TRNMT=NA,LENGTH=NA,SETS=NA) for(i in 1:length(CITIES)){ for(j in 1:length(YEARS)){ city=CITIES[i] year=YEARS[j] localization = paste("http://www.resultsfromtennis.com/", year,"/atp/",city,".html",sep="") essai = try(readLines(localization), silent=TRUE) ERROR404=FALSE if(inherits(essai, "try-error")){ERROR404=TRUE} if(ERROR404==FALSE){ B=scan(localization,"character") SETS=NA LENGTH=NA if(length(B)>270){ I=(substr(B,1,10)=="class=rez>") sum(I) X0=B[I] X3=as.numeric(substr(X0,11,13)) X2=as.numeric(substr(X0,11,12)) X1=as.numeric(substr(X0,11,11)) X0=X3 X0[is.na(X3)==TRUE]=X2[is.na(X3)==TRUE] X0[is.na(X2)==TRUE]=X1[is.na(X2)==TRUE] JL=c(which(substr(B,1,9)=="class=nl>"),length(B)) IL=which(substr(B,1,10)=="class=rez>") IC=cut(IL,JL) base=data.frame(IC,X0) LENGTH=as.numeric(tapply(X0,IC,sum)) SETS=as.numeric(tapply(X0,IC,length))/2} BASE=data.frame(YEAR=year,TRNMT=city,LENGTH,SETS) BASE0=rbind(BASE0,BASE)}}} write.table(BASE0,"BASE-TENNIS-TOTAL.txt")
Here I consider only tournaments where players have to win 3 sets (and actually more tournaments than those in the code above), and I have something like a bit more than 72,000 matches,
> I=is.na(TENNIS$LENGTH)==FALSE
> BT=TENNIS[I,]
> nrow(BT)
[1] 72754
> maxr=function(x){max(x,na.rm=TRUE)}
> T=paste(BT$TRNMT,BT$YEAR)
> DUREE=tapply(BT$SETS,T,maxr)
> LISTE=names(DUREE[DUREE>3])
> BT=BT[T%in%LISTE,]
so, if we look briefly at matches over 35 years, we have the following boxplot (one boxplot per year),
The red line being the epic Isner-Mahut match in June 2010 (4-6, 6-3, 7-6, 6-7, 70-68, i.e. 183 games, here for the score card).
If we look at the theory (e.g. from Paul Newton and Kamran Aslam), a lot of results can be obtained for the expected number of games, but if we want to study extremely rare events, we should generate Markov chains (with a lot of simulations, since the probability should be extremely small). But how many? Consider below the matches with more than 50 games,
The tail plot (over 50), i.e. the log-log Pareto plot indicates that it will be difficult to study tails,
and similarly with the Hill plot (assuming that tails are Pareto type….)
Anyway, if we want to study tails, we should consider a threshold high enough. For instance, with a threshold at 68 (we keep only 24 matches), we have
> library(evir)   # the gpd() function used below is assumed to come from the evir package
> seuil=68+0.25
> GPD1=gpd(X,seuil,method = "ml")
> GPD2=gpd(X,seuil,method = "pwm")
>
> xi=GPD1$par.ests[1]
> mu=seuil
> beta=GPD1$par.ests[2]
> x=180
> P=exp((-1/xi)*log(1 + (xi * (x - mu))/beta))
> as.numeric((1-GPD1$p.less.thresh)*P)
[1] 5.621281e-09
>
> xi=GPD2$par.ests[1]
> mu=seuil
> beta=GPD2$par.ests[2]
> x=180
> P=exp((-1/xi)*log(1 + (xi * (x - mu))/beta))
> as.numeric((1-GPD2$p.less.thresh)*P)
[1] 3.027095e-09
I.e. the probability that one match lasts more than 183 games is one chance over a billion… With, say, 2,500 matches per year, that gives us a return period of 400 years. So yes, we might say that this is a rare event… So perhaps, by generating several billions of chains, it should be possible to get a more precise estimation of the probability of playing 183 games in a single match…
In less than 48 hours, last week two friends mentioned the Millennium Bridge as an illustration of a risk management concept. There are several documents with that example, here (for the initial idea of using the Millennium Bridge to illustrate issues in risk management) here or there, e.g.
When we mention resonance effects on bridges, we usually think of the Tacoma Narrows Bridge (where strong winds set the bridge oscillating) or the Basse-Chaîne Bridge (in France, which collapsed on April 16, 1850, when 478 French soldiers marched across it in lockstep). In the first case, there is nothing we can do about it, but for the second one, this is why soldiers are required to break step on bridges.
But for the Millennium bridge, a ‘positive feedback‘ phenomenon (known as Synchronous Lateral Excitation in physics) has been observed: the natural sway motion of people walking caused small sideways oscillations in the bridge, which in turn caused people on the bridge to sway in step, increasing the amplitude of the oscillations and continually reinforcing the effect. That has been described in a nice paper in 2005 (here). In the initial paper by Jon Danielsson and Hyun Song Shin, they note that “what is the probability that a thousand people walking at random will end up walking exactly in step? It is tempting to say “close to zero”, or “negligible”. After all, if each person’s step is an independent event, then the probability of everyone walking in step would be the product of many small numbers – giving us a probability close to zero. Presumably, this is the reason why Arup – the bridge engineers – did not take this into account. However, this is exactly where endogenous risk comes in. What we must take into account is the way that people react to their environment. Pedestrians on the bridge react to how the bridge is moving. When the bridge moves under your feet, it is a natural reaction for people to adjust their stance to regain balance. But here is the catch. When the bridge moves, everyone adjusts his or her stance at the same time. This synchronized movement pushes the bridge that the people are standing on, and makes the bridge move even more. This, in turn, makes the people adjust their stance more drastically, and so on. In other words, the wobble of the bridge feeds on itself. When the bridge wobbles, everyone adjusts their stance, which sets off an even worse wobble, which makes the people adjust even more, and so on. So, the wobble will continue and get stronger even though the initial shock (say, a gust of wind) has long passed. It is an example of a force that is generated and amplified within the system. It is an endogenous response. It is very different from a shock that comes from a storm or an earthquake which are exogenous to the system.”
And to go further, they point out that this event is rather similar to what is observed in financial markets (here) by quoting The Economist from October 12th 2000 “So-called value-at-risk models (VaR) blend science and art. They estimate how much a portfolio could lose in a single bad day. If that amount gets too large, the VAR model signals that the bank should sell. The trouble is that lots of banks have similar investments and similar VAR models. In periods when markets everywhere decline, the models can tell everybody to sell the same things at the same time, making market conditions much worse. In effect, they can, and often do, create a vicious feedback loop. “