Tag Archives: risk

Workshop “Networks, Games and Risk”

On Monday, we had our workshop “Networks, Games and Risk”,

with Renaud Bourles (Centrale Marseille, Aix-Marseille School of Economics, and Institut Universitaire de France, France), Vincent Boucher (Université Laval, Québec, Canada), Federico Bobbio (Université de Montréal, Montréal, Canada), Leonie Baumann (McGill University, Montréal, Canada), Fallou Niakh (CREST, ENSAE, Institut Polytechnique de Paris, France) and Philipp Ratz (Université du Québec à Montréal, Montréal, Canada)

It was extremely interesting!

Networks, Games and Risk

On Monday, December 18th, 2023, we are organizing, at UQAM, a workshop on “Networks, Games and Risk”.

Decentralized risk-sharing markets are markets for risk exchange in which a pool of individuals agree to mutually insure each other, without recourse to a centralized insurance provider. Some important problems to examine in these markets are the following:

  • The coalitional stability of the pool, or the formation of risk-sharing networks (subcoalitions) within the pool.
  • The Pareto-efficiency of allocations along risk-sharing networks.
  • The structure of allocation mechanisms within networks, that is, the mappings that transform feasible allocations into other feasible allocations within networks.

Examining these problems requires an interdisciplinary approach, drawing from economic theory, insurance and actuarial science, game theory, and related fields of application. It is the aim of this workshop to bring together researchers from various fields, to discuss open problems in the theory of decentralized risk-sharing along networks, as well as potential interdisciplinary approaches to tackle these problems.

Do risk classes go beyond stereotypes?

Generalization, stereotypes and clichés

In Thinking, Fast and Slow, Daniel Kahneman discusses at length the importance of stereotypes in understanding many decision-making processes. A so-called System 1 is used for quick decision-making: it allows us to recognize people and objects, helps us focus our attention, and encourages us to fear spiders. It is based on knowledge stored in memory and accessible without intention and without effort. It can be contrasted with System 2, which allows for more complex decision-making, requiring discipline and sequential reflection. In the first case, our brain uses the stereotypes that govern judgments of representativeness, and uses this heuristic to make decisions. If I cook a fish for friends who have come over for dinner, I open a bottle of white wine. The cliché “fish goes well with white wine” allows me to make a decision quickly, without having to think about it. Stereotypes are statements about a group that are accepted (at least provisionally) as facts about each member. Whether correct or not, stereotypes are the basic tools for thinking about categories in System 1. But in many cases, a more in-depth, more sophisticated reflection – corresponding to System 2 – will make it possible to reach a more judicious, even optimal, decision. Without reaching for just any red wine, a pinot noir could perhaps also be perfectly suitable for roasted red mullet.

“To generalize is to be an idiot, to particularize is the alone distinction of merit”, wrote William Blake around 1800, annotating the discourses of the painter Joshua Reynolds. Stigmatizing an entire population because of a minority in a decision-making process is a misleading generalization, often punished by society – morally punished, but sometimes also legally (in hiring, for example), in a society that aspires to be civilized and asks us not to draw erroneous conclusions about an individual from the statistics of a group to which he or she is attached. But isn’t that what the actuary does every day?

The usual suspects

For Schauer (2009), this “generalization”, condemned by William Blake, is probably the actuary’s raison d’être: “to be an actuary is to be a specialist in generalization, and actuaries engage in a form of decision-making that is sometimes called actuarial”. If I decide to insure a sports car, I am given the risky driving characteristics that probably belong to the majority of sports car owners, attributes that I may not share. And as we noted in the introduction, insurance companies, of course, are not the only ones that operate actuarially, in Schauer’s sense. We all do it, much more often than most of us would probably admit. We do it when we choose airlines based on their safety record, punctuality or lost luggage. We do it when we associate personal characteristics (a visible tattoo, black or brightly coloured clothing) with behavioural characteristics (such as a propensity for violence) that these personal characteristics would seem to indicate. And we operate in this way when we engage in stereotypes that may be harmless, based on nationality for example, by saying that French people are rude, or that Scots all wear kilts, while acknowledging that more pernicious stereotypes based on ethnic origin, gender or sexual orientation are all too widespread today! As the pejorative connotation of the word “prejudice” indicates, many people believe that it is unfair to make individual decisions based on non-universal group characteristics, even if the group assignments have a solid statistical basis. The big difference between actuarial science and everyday life is that actuaries have to use a large number of observations. On a personal level, I may decide not to travel with a given airline anymore because, out of three trips, I had two bad experiences. Before deciding that travel insurance deserves a higher premium when flying with this company, it takes more than three observations!

In fact, the question is often whether an insurance company’s refusal to provide coverage, or the increase of the premiums it charges for the same coverage, is an injustice when it is based on an actuarially justified (but perhaps not universal) generalization. As Lemmens (2000) noted, the question was put to the legislator when insurers observed that Jewish women from Eastern Europe were particularly vulnerable to breast and ovarian cancer. At the end of 2012, the European Court of Justice put an end to all discrimination based on the gender of policyholders: insurers could no longer differentiate the prices of insurance products according to whether the insured is male or female. But the use of age is still allowed. Indeed, age is often an indicator of a possible decrease in vision or hearing, of slower reaction times (and an increased risk of sudden disability), etc. And although there are many individual variations, the available data provide an important empirical justification.

Machines, causality, and stereotypes

A major criticism of machine learning models is their lack of interpretability. But very often, the validation of econometric models requires a narrative built around stereotypes. And this narrative is essential, as Pearl & Mackenzie (2018) remind us. Indeed, in their “Ladder of Causation”, there are three levels. At the first level, we find the notion of association (or correlation), or even of conditional probability, which serves as a basis for the construction of stereotypes: if we observe

P[cavities | brushing your teeth] < P[cavities | not brushing your teeth]

then brushing teeth will be associated with a decrease in the probability of having cavities. This is also the basis of regression methods, which rely on correlations between the variable of interest and other variables, wrongly called explanatory. In Figure 1, we can see the daily cycling traffic in Helsinki, and the average temperature. We will tend to prefer the graph on the left, showing the number of cyclists as a function of temperature, suggesting that temperature could explain the number of cyclists, and not the other way around. But the stereotype does not necessarily rest on the causal link: if I see a lot of cyclists passing by the window, I will tell myself that it must be hot, or at least warm.

Figure 1: Näytä Data – Author’s visualization

The first level answers the question “what if I see…?” (e.g. “what cycling traffic should we expect if the temperature reaches 20°C?”), and this task can be perfectly accomplished by a machine. The second level is the one that makes it possible to understand an effect, an intervention. The question is then “what if I do…?”. To use our example, we are trying to understand the importance of brushing our teeth on the appearance of cavities. What if brushing teeth simply comes more naturally to children with good teeth? Here we see the third level of the ladder coming up, asking the question “what if I had done…?”, and based on the idea of a counterfactual model. We are no longer content to measure correlations; we build a model explaining what would happen if we made a change in the causal variables: what would really happen if the child who did not brush his teeth began to do so? For Pearl & Mackenzie (2018), a human being (maybe even an actuary) can make these more advanced arguments, which a machine cannot (yet) do. And very often, these causal models are based on stereotypes. As Charpentier & Diago Barry (2015) point out, in epidemiology, researchers have long wondered how to explain the fact that low-birth-weight babies of smoking mothers have a higher probability of survival than low-birth-weight babies of non-smoking mothers. The intuition that something is wrong comes from the prejudices and stereotypes that we have, and that a machine cannot have.

When actuaries tell each other stories

As Antonio & Charpentier (2017) noted, the European “gender directive” confused many insurers who used gender to construct their rates, as the latter was highly correlated with claim frequency. But once telematic data are introduced, gender is no longer significant in the regression. Gender had long been used as a proxy to capture an effect that can now be observed directly through telematic data, giving rise along the way to many sexist clichés and other stereotypes.

But stories also make it possible to distinguish between a spurious correlation and a correlation that can actually be interpreted. In Figure 2, we have life expectancy at birth (by French département), a variable that we could try to explain in a pension study context, for example. In the centre and on the right, two variables taken at random: the number of tennis club licences, and the number of advertising agencies. Stereotypes are what allow us to construct a causal graph, and to understand why there is such a strong correlation between these variables and life expectancy.

Figure 2: Life expectancy at birth for men, on the left. In the centre, number of tennis licences per 100,000 inhabitants (source: FFT). On the right, number of advertising agencies per 100,000 inhabitants (source: INSEE, code NAF 7311Z). Author’s visualization.

Hyper-individualization as an answer?

While William Blake condemned stereotypes by saying “to generalize is to be an idiot”, he also clearly went further, continuing with “to particularize is the alone distinction of merit”. This individualization is also advocated by more and more insurers, and even desired by many insureds. But as Grace & Terry (2002) pointed out, many policyholders suffer from a significant optimism bias – “if I have an accident, it will not be my fault” – leading them to doubt the insurer’s classification – “I am less risky than the others”. And public morality seems to side with them, against the actuaries. Yet not only is generality not, in general, unjust, but justice itself can have considerable elements of generality. To the extent that justice is centred on equity, and to the extent that equity itself is closely linked to equality, then equity, and therefore justice, can be seen as itself based on the idea of generality. The just society is not necessarily a society in which each individual is treated as an isolated set of unique attributes, requiring individualized attention. On the contrary, in some cases, the just society is a society in which generality is not only unavoidable, but also necessary for justice itself. And pooling risks together is the natural response in an insurance context. And it might not be such a big deal if that class is not as homogeneous as it could be, or as we would have expected it to be…

Antonio, K. & Charpentier, A. (2017). La tarification par genre en assurance, corrélation ou causalité ? Risques, 110: 107-110.

Charpentier, A. & Diago Barry, A. (2015). Big data : passer d’une analyse de corrélation à une interprétation causale. Risques, 101: 107-111.

Grace, J. & Terry, M. (2002). Exploring the Causes of Comparative Optimism. Psychologica Belgica. 42: 65–98

Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.

Lemmens, T. (2000). Selective Justice, Genetic Discrimination, and Insurance: Should We Single Out Genes in Our Laws? McGill Law Journal / Revue de droit de McGill, 45(2): 347-412.

Pearl, J. & Mackenzie, D. (2018). The Book of Why: The New Science of Cause and Effect. Basic Books.

Schauer, F.F. (2009). Profiles, Probabilities, and Stereotypes. Harvard University Press.

Foundations of Machine Learning, part 5

This post is the ninth (and probably last) one of our series on the history and foundations of econometric and machine learning models. The first four were on econometric techniques. Part 8 is online here.

Optimization and algorithmic aspects

In econometrics, (numerical) optimization became omnipresent as soon as we left the Gaussian model. We briefly mentioned it in the section on the exponential family, with the use of the Fisher score (gradient descent) to solve the first-order condition \mathbf{X}^T W(\beta)^{-1}[\mathbf{y}-\widehat{\mathbf{y}}]=\mathbf{0}. In machine learning, optimization is the central tool. And it is necessary to have effective optimization algorithms to solve problems (described previously) of the form

\widehat{\beta}\in\underset{\beta\in\mathbb{R}^p}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,\beta_0+\mathbf{x}_i^T\beta)+\lambda\Vert\boldsymbol{\beta}\Vert\right\rbrace

In some cases, instead of a global optimization, it is sufficient to optimize coordinate by coordinate (widely studied in Daubechies et al. (2004)). If f:\mathbb{R}^d\rightarrow\mathbb{R} is convex and differentiable, and if \mathbf{x} satisfies f(\mathbf{x}+h\mathbf{e}_i)\geq f(\mathbf{x}) for any h>0 and i\in\{1,\cdots,d\}, then f(\mathbf{x})=\min\{f\}, where (\mathbf{e}_i) is the canonical basis of \mathbb{R}^d. However, this property is no longer true in the non-differentiable case. But if we assume that the non-differentiable part is (additively) separable, it becomes true again; more specifically, if

f(\mathbf{x})=g(\mathbf{x})+\sum_{i=1}^d h_i(x_i)\quad\text{with}\quad\left\lbrace\begin{array}{l}g:\mathbb{R}^d\rightarrow\mathbb{R}\text{ convex and differentiable}\\h_i:\mathbb{R}\rightarrow\mathbb{R}\text{ convex}\end{array}\right.

This was the case for the Lasso objective, \beta\mapsto\|\mathbf{y}-\beta_0-\mathbf{X}\beta\|_{\ell_2}+\lambda\|\beta\|_{\ell_1}, as shown by Tseng (2001). Getting back to our initial notations, we can use a coordinate descent algorithm: from an initial value \mathbf{x}^{(0)}, we iterate

x_j^{(k)}\in\underset{x_j}{\text{argmin}}\big\lbrace f(x_1^{(k)},\cdots,x_{j-1}^{(k)},x_j,x_{j+1}^{(k-1)},\cdots,x_d^{(k-1)})\big\rbrace\quad\text{for }j=1,2,\cdots,d.

These algorithmic and numerical issues may seem secondary to econometricians. However, they are essential in machine learning: a technique is interesting only if there is a stable and fast algorithm to obtain the solution. These optimization techniques can be transposed: for example, this coordinate descent technique can be used for SVMs (support vector machines) when the space is not linearly separable and the classification error must be penalized (we will come back to this technique in the next section).
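
To make this concrete, here is a minimal R sketch of cyclic coordinate descent for the Lasso, using the classical soft-thresholding update (a toy implementation, with our own notations and simulated data; packages such as glmnet implement a far more efficient version of the same idea):

soft_thresh = function(z, gamma) sign(z) * pmax(abs(z) - gamma, 0)

lasso_cd = function(X, y, lambda, n_iter = 50) {
  # cyclic coordinate descent for (1/(2n)) ||y - X b||^2 + lambda ||b||_1
  # (y is assumed centred, and the columns of X standardized)
  n = nrow(X); p = ncol(X)
  beta = rep(0, p)
  for (k in 1:n_iter) {
    for (j in 1:p) {
      r_j = y - X[, -j, drop = FALSE] %*% beta[-j]   # partial residuals, without coordinate j
      z_j = sum(X[, j] * r_j) / n                    # univariate least-squares coefficient
      beta[j] = soft_thresh(z_j, lambda) / (sum(X[, j]^2) / n)
    }
  }
  beta
}

set.seed(1)
n = 200; p = 10
X = scale(matrix(rnorm(n * p), n, p))
y = X[, 1] - 2 * X[, 2] + rnorm(n); y = y - mean(y)
round(lasso_cd(X, y, lambda = .1), 3)   # only the first two coordinates should be clearly non-zero

Each coordinate update is available in closed form here, which is precisely what makes coordinate descent so attractive for the Lasso.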

In-sample, out-of-sample and cross-validation

These techniques seem intellectually interesting, but we have not yet discussed the choice of the penalty parameter \lambda. The problem is actually more general, because comparing two parameters \widehat{\beta}_{\lambda_1} and \widehat{\beta}_{\lambda_2} amounts to comparing two models. In particular, if we use a Lasso method with different thresholds \lambda, we compare models that do not have the same dimension. Previously, we addressed the problem of model comparison from an econometric perspective (by penalizing overly complex models). In the learning literature, judging the quality of a model on the data used to construct it does not tell us how the model will behave on new data. This is the so-called “generalization” problem. The traditional approach then consists in splitting the sample (of size n) into two parts: one part that will be used to train the model (the training database, in-sample, of size m) and one part that will be used to test the model (the testing database, out-of-sample, of size n-m). The latter then makes it possible to measure a genuine predictive risk. Suppose the data are generated by a linear model y_i=\mathbf{x}_i^T\beta_0+\varepsilon_i, where the \varepsilon_i are independent and centred realizations of some distribution. The empirical in-sample quadratic risk is here

\frac{1}{m}\sum_{i=1}^m\mathbb{E}\big([\mathbf{x}_i^T\widehat{\beta}-\mathbf{x}_i^T\beta_0]^2\big)=\mathbb{E}\big([\mathbf{x}_i^T\widehat{\beta}-\mathbf{x}_i^T\beta_0]^2\big),

for any observation i. Assuming the residuals \varepsilon are Gaussian, we can show that this risk equals \sigma^2\text{trace}(\Pi_X)/m, that is \sigma^2 p/m, where \Pi_X denotes the projection matrix onto the column space of \mathbf{X}. On the other hand, the empirical out-of-sample quadratic risk is \mathbb{E}\big([\mathbf{x}^T\widehat{\beta}-\mathbf{x}^T\beta_0]^2\big), where \mathbf{x} is a new observation, independent of the others. It can be noted that

\mathbb{E}\big([\mathbf{x}^T\widehat{\beta}-\mathbf{x}^T\beta_0]^2\big\vert\mathbf{x}\big)=\text{Var}\big(\mathbf{x}^T\widehat{\beta}\big\vert\mathbf{x}\big)=\sigma^2\mathbf{x}^T(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{x},

and, integrating with respect to \mathbf{x},

\mathbb{E}\big([\mathbf{x}^T\widehat{\beta}-\mathbf{x}^T\beta_0]^2\big)=\sigma^2\text{trace}\big(\mathbb{E}[\mathbf{x}\mathbf{x}^T]\mathbb{E}\big[(\mathbf{X}^T\mathbf{X})^{-1}\big]\big).

The expression is then different from the one obtained in-sample, and using the Groves & Rothenberg (1969) bound, we can show that

\mathbb{E}\big([\mathbf{x}^T\widehat{\beta}-\mathbf{x}^T\beta_0]^2\big)\geq\sigma^2\frac{p}{m},

which is rather intuitive once we start thinking about it. Except in some simple cases, there is no simple (explicit) formula. Note, however, that if \mathbf{X}\sim\mathcal{N}(0,\sigma^2\mathbb{I}), then \mathbf{X}^T\mathbf{X} follows a Wishart distribution, and it can be shown that

\mathbb{E}\big([\mathbf{x}^T\widehat{\beta}-\mathbf{x}^T\beta_0]^2\big)=\sigma^2\frac{p}{m-p-1}.

If we now look at the empirical version, with \widehat{\beta} estimated on the first m observations,

\widehat{\mathcal{R}}^{\text{IS}}=\sum_{i=1}^m [y_i-\mathbf{x}_i^T\widehat{\beta}]^2\quad\text{and}\quad\widehat{\mathcal{R}}^{\text{OS}}=\sum_{i=m+1}^{n} [y_i-\mathbf{x}_i^T\widehat{\beta}]^2,

and, as Leeb (2008) noted, \widehat{\mathcal{R}}^{\text{IS}}-\widehat{\mathcal{R}}^{\text{OS}}\approx 2\cdot\nu, where \nu represents the number of degrees of freedom, which is not unlike the penalty used in the Akaike criterion.

Figure 4 shows the respective evolution of \widehat{\mathcal{R}}^{\text{IS}} and \widehat{\mathcal{R}}^{\text{OS}} as a function of the complexity of the model (the degree of a polynomial regression, the number of knots in splines, etc). The more complex the model, the smaller \widehat{\mathcal{R}}^{\text{IS}} (this is the red curve, below). But that is not what we are interested in here: we want a model that predicts well on new data (i.e. out-of-sample). As Figure 4 shows, if the model is too simple, it does not predict well (not even on the in-sample data). But what we can also see is that if the model is too complex, we end up in a situation of “overfitting”: the model starts to fit the noise. A small simulation, given after the figure, illustrates this. Of course, this figure should remind us of the one we saw in our second post of that series.

Figure 4 : Generalization, under- and over-fitting
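
As an illustration of Figure 4, here is a small R simulation (the data-generating process, the sample sizes and the range of degrees are arbitrary choices) comparing the empirical in-sample and out-of-sample quadratic risks of polynomial regressions of increasing complexity:

set.seed(123)
n = 200; m = 100                               # total sample size, training size
x = runif(n)
y = sin(2 * pi * x) + rnorm(n, sd = .3)        # some (arbitrary) nonlinear model, plus noise
train = 1:m; test = (m + 1):n

degrees = 1:15
risk = sapply(degrees, function(d) {
  reg = lm(y ~ poly(x, d), subset = train)
  c(IS = mean((y[train] - predict(reg, newdata = data.frame(x = x[train])))^2),
    OS = mean((y[test]  - predict(reg, newdata = data.frame(x = x[test])))^2))
})
matplot(degrees, t(risk), type = "b", pch = 19, col = c("red", "blue"),
        xlab = "model complexity (polynomial degree)", ylab = "empirical quadratic risk")
legend("top", c("in-sample", "out-of-sample"), col = c("red", "blue"), lty = 1:2, pch = 19)

The in-sample risk keeps decreasing with the degree, while the out-of-sample risk eventually increases again, as in Figure 4.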

Instead of splitting the database in two, with part of the data used to calibrate the model and part used to assess its performance, it is also possible to use cross-validation. To present the general idea, we can go back to the “jackknife”, introduced by Quenouille (1949) (and formalized by Quenouille (1956) and Tukey (1958)), a technique commonly used in statistics to reduce bias. Indeed, if we assume that \{y_1,\cdots,y_n\} is a sample drawn from a distribution F_\theta, and that we have an estimator T_n(\mathbf{y})=T_n(y_1,\cdots,y_n), but that this estimator is biased, with \mathbb{E}[T_n(\mathbf{Y})]=\theta+O(n^{-1}), it is possible to reduce the bias by considering

\widetilde{T}_n(\mathbf{y})=n\,T_n(\mathbf{y})-\frac{n-1}{n}\sum_{i=1}^n T_{n-1}(\mathbf{y}_{(i)})\quad\text{ where }\mathbf{y}_{(i)}=(y_1,\cdots,y_{i-1},y_{i+1},\cdots,y_n).

It can then be shown that \mathbb{E}[\widetilde{T}_n(\mathbf{Y})]=\theta+O(n^{-2}). The idea of cross-validation is based on the same idea of building an estimator by removing one observation. Since we want to build a predictive model, we compare the forecast obtained with the estimated model and the removed observation,

\widehat{\mathcal{R}}^{\text{CV}}=\frac{1}{n}\sum_{i=1}^n \ell(y_i,\widehat{m}_{(i)}(\mathbf{x}_i)).

We speak here of the “leave-one-out” (loocv) method.
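
As a toy illustration of the jackknife (the example and the numbers are ours), consider the biased maximum-likelihood estimator of a variance:

set.seed(42)
n = 20
y = rnorm(n, mean = 0, sd = 2)               # true variance is 4

T_n = function(y) mean((y - mean(y))^2)      # ML estimator, biased: E[T_n] = (1 - 1/n) sigma^2
T_loo = sapply(1:n, function(i) T_n(y[-i]))  # leave-one-out versions T_{n-1}(y_(i))
T_jack = n * T_n(y) - (n - 1) * mean(T_loo)  # jackknife bias correction, as above
c(biased = T_n(y), jackknife = T_jack, unbiased = var(y))

For this particular estimator, the jackknife actually recovers the usual unbiased variance estimator exactly.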

This technique reminds us of the traditional method used to find the optimal parameter in exponential smoothing methods for time series. In simple smoothing, we construct a forecast from a time series as {}_t\widehat{y}_{t+1}=\alpha\cdot{}_{t-1}\widehat{y}_t+(1-\alpha)\cdot y_t, where \alpha\in[0,1], and we consider as “optimal”

\alpha^\star = \underset{\alpha\in[0,1]}{\text{argmin}}\left\lbrace \sum_{t=2}^T \ell({}_{t-1}\widehat{y}_{t},y_{t}) \right\rbrace,

as described in Hyndman et al. (2009).
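
A minimal R sketch of this idea, on an arbitrary simulated series (in practice, functions such as HoltWinters() or ets() do this, with a slightly different parametrization of \alpha):

set.seed(1)
y = as.numeric(10 + arima.sim(list(ar = .7), n = 100))    # some autocorrelated series

sse = function(alpha, y) {
  yhat = numeric(length(y))
  yhat[1] = y[1]                                          # initialization (a choice of ours)
  for (t in 1:(length(y) - 1))
    yhat[t + 1] = alpha * yhat[t] + (1 - alpha) * y[t]    # smoothing recursion, as written above
  sum((y[-1] - yhat[-1])^2)                               # sum of one-step-ahead squared errors
}

alpha_star = optimize(sse, interval = c(0, 1), y = y)$minimum
alpha_star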

The main problem with the leave-one-out method is that it requires the calibration of n models, which can be problematic in large dimensions. An alternative is k-fold cross-validation, which consists in using a partition of \{1,\cdots,n\} into k groups (or blocks, or folds) of the same size, \mathcal{I}_1,\cdots,\mathcal{I}_k. Let \mathcal{I}_{\bar j}=\{1,\cdots,n\}\setminus\mathcal{I}_j, and let \widehat{m}_{(j)} denote the model built on the sample \mathcal{I}_{\bar j}; we then set

\widehat{\mathcal{R}}^{k\text{-CV}}=\frac{1}{k}\sum_{j=1}^k \mathcal{R}_j\quad\text{ where }\mathcal{R}_j=\frac{k}{n}\sum_{i\in\mathcal{I}_{j}} \ell(y_i,\widehat{m}_{(j)}(\mathbf{x}_i)).

Standard cross-validation, where only one observation is removed each time (loocv), is a special case, with k=n. Using k=5 or 10 has a double advantage over k=n: (1) the number of estimations to perform is much smaller, 5 or 10 rather than n; (2) the samples used for estimation are less similar and therefore less correlated with each other, which tends to avoid excess variance, as recalled by James et al. (2013).
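
A compact R sketch of k-fold cross-validation, for a simple simulated linear model and the quadratic loss (with k=n we recover leave-one-out):

set.seed(1)
n = 200
df = data.frame(x = runif(n))
df$y = 1 + 2 * df$x + rnorm(n, sd = .5)

k = 10
fold = sample(rep(1:k, length.out = n))        # random partition of {1,...,n} into k blocks
R_j = sapply(1:k, function(j) {
  reg = lm(y ~ x, data = df[fold != j, ])      # model fitted without block j
  mean((df$y[fold == j] - predict(reg, newdata = df[fold == j, ]))^2)
})
mean(R_j)                                      # k-fold cross-validation estimate of the risk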

Another alternative is to use bootstrapped samples. Let \mathcal{I}_b be a sample of size n obtained by drawing with replacement from \{1,\cdots,n\}, to determine which observations (y_i,\mathbf{x}_i) will be kept in the learning population (at each draw). Let \mathcal{I}_{\bar b}=\{1,\cdots,n\}\setminus\mathcal{I}_b. Denoting by \widehat{m}_{(b)} the model built on the sample \mathcal{I}_b, we then set

\widehat{\mathcal{R}}^{\text{B}}=\frac{1}{B}\sum_{b=1}^B \mathcal{R}_b\quad\text{ where }\mathcal{R}_b=\frac{1}{n_{\overline{b}}}\sum_{i\in\mathcal{I}_{\overline{b}}} \ell(y_i,\widehat{m}_{(b)}(\mathbf{x}_i)),

where n_{\bar b} is the number of observations that were not kept in \mathcal{I}_b. Note that with this technique, on average, e^{-1}\sim 36.8\% of the observations do not appear in the bootstrap sample, and we recover the order of magnitude of the proportions used when creating a calibration sample and a test sample. In fact, as Stone (1977) showed, the minimization of AIC is to be compared with the cross-validation criterion, and Shao (1997) showed that the minimization of BIC corresponds to k-fold cross-validation, with k=n/\log n.
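
A quick R check of both the bootstrap (out-of-bag) risk estimate and the e^{-1}\approx 36.8\% figure, on the same kind of toy linear model:

set.seed(1)
n = 200
df = data.frame(x = runif(n))
df$y = 1 + 2 * df$x + rnorm(n, sd = .5)

B = 500
out_frac = numeric(B); R_b = numeric(B)
for (b in 1:B) {
  idx = sample(1:n, n, replace = TRUE)         # bootstrap sample I_b, drawn with replacement
  oob = setdiff(1:n, idx)                      # out-of-bag observations
  out_frac[b] = length(oob) / n
  reg = lm(y ~ x, data = df[idx, ])
  R_b[b] = mean((df$y[oob] - predict(reg, newdata = df[oob, ]))^2)
}
mean(out_frac)                                 # close to exp(-1), i.e. roughly 36.8%
mean(R_b)                                      # bootstrap (out-of-bag) estimate of the risk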

All the techniques mentioned here belong to the “machine learning” part since they rely on automatic, computational procedures, and no probabilistic foundations are necessary. In many cases we used the notation m^\star (at least in the first posts on “machine learning” techniques) to highlight the fact that we want some sort of “optimal” model – and to distinguish it from the estimators \widehat{m} considered earlier, when we had a probabilistic framework. But of course, it is possible (and necessary) to build bridges between those two cultures…

References are online here. As explained in the introduction, this is some sort of online version of an introduction to our joint paper with Emmanuel Flachaire and Antoine Ly, Econometrics and Machine Learning (initially written in French), that will actually appear soon in the journal Economics and Statistics (in English and in French).

Insurance, Actuarial Science, Data and Models

Our research chair ACTINFO, together with our colleagues from Lyon at the DAMI, PREVENT’HORIZON and ACTUARIAT DURABLE chairs, will organize a two-day conference in Paris on Insurance, Actuarial Science, Data & Models, in ten days.

We invited Katrien ANTONIO (KU Leuven), Alexandre BOUMEZOUED (Milliman Paris), Alfred GALICHON (New-York University), Pierre-Yves GEOFFARD (Paris School of Economics), Meglena JELEVA (University of Paris Nanterre), Julie JOSSE (Ecole Polytechnique), Florence JUSOT (Paris Dauphine University), Michael LUDKOWSKI (University of California Santa Barbara), François PANNEQUIN (CREST and ENS Paris-Saclay), Florian PELGRIN (Edhec Business School), Dylan POSSAMAI (Columbia University) and Julien TRUFIN (ULB Brussels). More information (including the program) is online.

Picking an asset to invest

Yesterday, Andrew Lo spent some time on a nice graph, discussing attitudes towards risk. Here are four assets (thanks for improving the terminology), with real data (no information here about time, but the scale is the same for the four of them).

The question raised was quite simple:

if you could invest in one, and only one, asset, which one would you pick?

Graduate Crash Course on Risk Measures

Tomorrow morning, I will give a crash course on risk measures at Louvain-la-Neuve, in Belgium. This is a crash course for PhD students (and researchers), with a long introduction on the univariate static framework (and some mathematical tools that will be useful later on, such as the Fenchel transform and, more generally, convexity, as well as some results on optimal transport). I will also mention what has been obtained in decision theory, inspired by Itzhak Gilboa‘s Theory of Decision under Uncertainty. Then I will mention extensions to derive multiple risk measures, based on Marc Henry and Alfred Galichon‘s work. Finally, I will conclude by introducing the difficulty of deriving dynamic risk measures.

The slides are based on a document I am still working on. And unfortunately, the deeper I dig into the roots of the axioms, or the assumptions, the more papers I discover (and need to read, and understand). So I guess I will need some time to finalize my survey. Note that I decided to skip details on technical issues when working on , and the weak topology on the dual of . I will try to add additional references in the notes, but I wanted the slides to be as simple as possible. I also want to add more connections with statistical results, such as the Neyman-Pearson lemma, for instance (as mentioned in a paper by Alexander Schied). All my apologies for the typos, too.

The law of small numbers

In insurance, the law of large numbers (initially named loi des grands nombres by Siméon Poisson, see e.g. http://en.wikipedia.org/…) is usually mentioned to legitimate large portfolios, because of pooling and diversification: the larger the pool, the more ‘predictable’ the losses will be (over a given period) – of course, under standard statistical assumptions, namely finite expected value and independence (see http://freakonometrics.blog.free.fr/…. for a discussion, in French). But in insurance, catastrophes are usually rare – and extremely costly – and actuaries might be interested in modelling the occurrence of that small number of events (see e.g. Aldous’ book on that specific topic, which can be downloaded from http://stat.berkeley.edu/…). The theorem behind it is sometimes called the law of small numbers (from the book published by Ladislaus Bortkiewicz, but we’ll get back to that story later on; see also Whitaker (1914) http://biomet.oxfordjournals.org/… or the book recently published by Michael Falk, Jürg Hüsler and Rolf-Dieter Reiss).

  • The Poisson distribution

The so-called Poisson distribution (see http://en.wikipedia.org/…) was introduced by Siméon Poisson in 1837 (in Recherches sur la Probabilité des Jugements en Matière Criminelle et en Matière Civile, Précédées des Règles Générales du Calcul des Probabilités, see http://gallica.bnf.fr/…). But it had been defined more than a century before, by Abraham De Moivre, in 1711, in De Mensura Sortis seu; de Probabilitate Eventuum in Ludis a Casu Fortuito Pendentibus (see e.g. the review in http://www.jstor.org/…). Let N denote a counting random variable; it is said to be Poisson distributed if there is some \lambda\in(0,\infty) such that

\mathbb{P}(N=k)=e^{-\lambda}\frac{\lambda^k}{k!},\quad\forall k\in\mathbb{N}

De Moivre obtained that distribution from an approximation of the binomial distribution. Recall that the binomial distribution is a standard distribution in actuarial science, for instance to model the number of deaths among n insured individuals. If individual death probabilities are identical, say p, and if deaths are independent events, then

\mathbb{P}(N=k)=\binom{n}{k}p^k(1-p)^{n-k},\quad\forall k\in\{0,1,\cdots,n\}

And if n\rightarrow\infty and np\rightarrow\lambda, then

\mathbb{P}(N=k)\rightarrow e^{-\lambda}\frac{\lambda^k}{k!}

Again, this is an asymptotic theorem, which is valid when we have a lot of observations (n\rightarrow\infty), but also when the probability of occurrence is extremely small (since p\sim\lambda/n), which is why we use the term small numbers. Siméon Poisson was not interested in mathematical approximations: his main point was to get a distribution with nice goodness-of-fit properties for the data he was working on. He wanted to get a better understanding of cours d’assises (jury panels might be a valid translation of the French term). A jury consisted of 12 jurors who voted to determine whether a defendant was guilty. When guilt was predominant, with at least 8 votes against 4, the defendant was convicted (which happened in 47% of criminal cases). With 5 votes against 7, the opinion of professional judges was requested (11% of criminal trials, again). Using these statistics, one can show that a defendant brought before an assize court is guilty with a probability of the order of 68%, and that the probability that a juror does not vote wrongly (convicting an innocent person or acquitting a guilty one) was about 54%. Poisson then sought to calculate the probability that a defendant is wrongfully convicted, and obtained 2%; and that 28% of acquitted defendants are in fact guilty. Siméon Poisson introduced this distribution to compute such probabilities easily. But the distribution he considered turned out to be central in probability theory…
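
The quality of De Moivre’s approximation is easy to check numerically; here is a quick comparison in R (the values of n and p are arbitrary):

n = 1000; p = 0.005                     # many trials, small probability, lambda = n*p = 5
k = 0:15
round(cbind(k,
            binomial = dbinom(k, size = n, prob = p),
            poisson  = dpois(k, lambda = n * p)), 4)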

  • The law of small numbers

The heuristic of the main theorem, related to the Poisson distribution, is the following: let X_1,\cdots,X_n denote i.i.d. random variables taking values in \mathbb{R}^d (in a general setting, one component can be the time, the other one an upper region of interest, where some stochastic process might live). Let \mathcal{A}_n\subset\mathbb{R}^d. If \mathbb{P}(X_i\in\mathcal{A}_n)\rightarrow 0 as n\rightarrow\infty (or \mathbb{P}(X_i\in\mathcal{A}_n)=O(n^{-1}), to be a little bit more specific about the assumptions), and if N denotes the (random) count of events \{X_i\in\mathcal{A}_n\}, then N can be approximated by a Poisson distribution with parameter \lambda=n\times\mathbb{P}(X_i\in\mathcal{A}_n).
The heuristic is that if we consider a large number of observations, and if we count how many are in a given (small) region, then the number of such observations is Poisson distributed.

n=1000
# simulate n points, uniformly over the square [-1.5, 8.5] x [-1.5, 8.5] (of area 100)
X=runif(n)*10-1.5
Y=runif(n)*10-1.5
plot(X,Y,axes=FALSE,cex=.6)
# draw the unit disk, centred at the origin
u=seq(-1,1,by=.01)
v=sqrt(1-u^2)
polygon(c(u,rev(u)),c(v,rev(-v)),col="yellow",border=NA)
# highlight the points falling inside the disk
I=(X^2+Y^2)<1
points(X[I],Y[I],cex=.6,pch=19,col="red")

If we run some simulations,

>  n=1000
>  ns=100000
>  N=rep(NA,ns)
> for(s in 1:ns){
+ X=runif(n)*10-1.5
+ Y=runif(n)*10-1.5
+ I=(X^2+Y^2)<1
+ N[s]=sum(I)
+ }
> hist(N,breaks=0:60,probability=TRUE,col="yellow")
> mean(N)
[1] 31.41257

The parameter of the Poisson distribution is the number of points, n=1000, times the ratio of the area of the yellow disk to the area of the square, i.e.

> (lambda=10*pi)
[1] 31.41593
> lines(0:60-.5,dpois(0:60,lambda),type="b",col="red")

http://freakonometrics.hypotheses.org/files/2013/01/Capture-d%E2%80%99e%CC%81cran-2013-01-28-a%CC%80-11.14.21.png

To get an interpretation related to insurance modeling, let \mathcal{A} denote an upper layer in a reinsurance contract, i.e. \mathcal{A}=\{x>d\} for some deductible d, and let the X_i‘s denote individual losses. Then the number of claims that hit this upper layer can be modeled with a Poisson distribution. More precisely, if the deductible d becomes extremely large (so that \mathbb{P}(X_i\in\mathcal{A})\rightarrow 0), we obtain the peaks-over-threshold model of extreme value theory (see e.g. http://brale.math.hr/~iugrina/… or http://fire.nist.gov/bfrlpubs/…): if N has a Poisson distribution and if, conditionally on N, X_1,\cdots,X_N are independent identically distributed generalized Pareto random variables, then \max\{X_1,\cdots,X_N\} has a generalized extreme value distribution. Thus, exceedance models (for rare events) are closely related to Poisson processes.
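
A small simulated check of this exceedance heuristic (the lognormal losses and the threshold are arbitrary choices): we count, over many simulated portfolios, how many losses exceed a high deductible, and compare the distribution of that count with the Poisson distribution.

set.seed(1)
n = 1000                                   # number of individual losses per portfolio
d = qlnorm(.995)                           # high deductible, crossed with probability 0.5%
ns = 10000                                 # number of simulated portfolios
N = replicate(ns, sum(rlnorm(n) > d))      # number of losses hitting the upper layer
lambda = n * .005
round(cbind(k = 0:12,
            empirical = as.numeric(table(factor(N, levels = 0:12))) / ns,
            poisson   = dpois(0:12, lambda)), 4)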

  • The Poisson process

As mentioned above, the Poisson distribution appears when events occur somehow randomly and independently, over time. It is then natural to study the time between two occurrences (or between two claims, in an insurance context).

  • Poisson distribution, and claims occurrence

It is neither Siméon Poisson nor De Moivre, but Ladislaus von Bortkiewicz who first referred to the Poisson distribution as the law of small numbers. In 1898 (see https://archive.org/…), he studied the number of soldiers killed by horse kicks in the Prussian army, from 1875 till 1894, over 200 corps-years (more precisely, 10 corps observed over 20 years).

He obtained the following distribution (here, the parameter of the Poisson distribution is 0.61, i.e. the average number of deaths per year):

number of deaths per year    empirical counts    Poisson distribution
0                            109                 108.67
1                             65                  66.21
2                             22                  20.22
3                              3                   4.11
4                              1                   0.63
5 and more                     0                   0.08

It is possible to find a lot of cases where the Poisson distribution fits extremely well. For instance, consider the number of hurricanes that made landfall in Florida since 1850:

number of hurricanes per year    empirical frequency    Poisson frequency
0                                30                     27.16
1                                48                     47.99
2                                37                     42.41
3                                29                     24.98
4                                 8                     11.03
5                                 3                      3.90
6                                 3                      1.15
7                                 1                      0.29
8 and more                        0                      0.08
  • Poisson distribution, and return period

The return period was introduced by Emil Gumbel, in hydrology, to link probabilities and durations (see e.g. http://freakonometrics.blog.free.fr/…). A decennial event has an occurrence probability of 1/10; 10 years is then the average waiting time before occurrence. This does not mean that the event will not occur before 10 years, or that it has to occur within 10 years. Consider a return period T (in years): the yearly probability of non-occurrence is 1-(1/T).

The probability of observing at least one event over n years is then 1-[1-(1/T)]^n. It is standard to summarize this property with the following table, giving the probability of observing at least one catastrophe over n years, for an event of return period T:

number of years (n)    return period T
                       10       20       50       100      200
10                     65.1%    40.1%    18.3%     9.6%     4.9%
20                     87.8%    64.2%    33.2%    18.2%     9.5%
50                     99.5%    92.3%    63.6%    39.5%    22.5%
100                    99.9%    99.4%    86.7%    63.4%    39.5%
200                    99.9%    99.9%    98.2%    86.6%    63.3%

The diagonal in the table above is extremely interesting. It looks like there is some kind of convergence towards a limiting value (here 63.2%). Indeed, the number of events observed over n years follows a binomial distribution, with probability 1/T=1/n, which converges towards the Poisson distribution with parameter 1. The probability of observing at least one catastrophe is then 1-\exp(-1), which is equal to 0.632.
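
The table above, and its diagonal, can be reproduced with a couple of lines of R:

Tp = c(10, 20, 50, 100, 200)       # return periods T
ny = c(10, 20, 50, 100, 200)       # number of years of observation n
P = outer(ny, Tp, function(n, T) 1 - (1 - 1/T)^n)   # probability of at least one catastrophe
rownames(P) = ny; colnames(P) = Tp
round(100 * P, 1)
1 - exp(-1)                        # limit along the diagonal, roughly 63.2%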

  • Rare probabilities and the Poisson distribution

The Poisson distribution keeps appearing when computing probabilities of rare events. For instance, consider the probability of having at least one incident in a nuclear plant in France, over a 50-year period. Assume that the annual probability p of an incident in a given reactor is small, say p = 0.005% (the value used in the computation below). Assume further that reactors are independent of one another, and across time. The probability of having at least one incident over 80 reactors in 50 years is (exactly)

\mathbb{P}(N\neq 0)=1-(1-p)^{50\times 80}

Of course, a linear approximation is not correct (even if it was mentioned in some French newspaper, as explained in an old post http://freakonometrics.blog.free.fr/…)

\mathbb{P}(N\neq 0)\neq 50\times 80\times p

On the other hand

\mathbb{P}(N\neq 0)=1-(1-p)^{50\times 80}\sim 1-\exp\left(-50\times 80\times p\right)

> p=0.00005
> 1-(1-p)^(50*80)
[1] 0.1812733
> 1-exp(-50*80*p)
[1] 0.1812692

which is the probability that N is not null, when N has a Poisson distribution with parameter \lambda=50\times 80\times p. We clearly see here an application of De Moivre’s approximation in risk management.

Another way of looking at this problem is based on the following idea: in (roughly) 45 years of observations on 450 reactors worldwide, three major accidents were observed, including Three Mile Island (1979) and Fukushima (2011), i.e. the average time between accidents can be estimated at 16 years. For a single reactor, we can then assume that the average waiting time before an incident is 450 times 16 years, i.e. 7200 years; equivalently, the probability of having one incident, over one year, for one reactor is 1 over 7200 (this is the idea behind the return-period concept). If we assume that accidents occur randomly and independently of each other (as defined above), then the number of major accidents observed over a period of 50 years in France follows a Poisson distribution with parameter 50/(7200/80). The probability of having at least one major accident over 50 years, with 80 reactors, can then be estimated by

1-\exp(-50\times 80/7200)

i.e.

> 1-exp(-50*80/7200)
[1] 0.4262466

(keeping in mind all the uncertainty around the estimated waiting time before a major accident for a single reactor!).

Talk on bivariate count times series in finance and risk management

I will be giving a talk on May 4th, at the Mathematical Finance Days, at HEC Montréal, on multivariate dynamic models for counts. The conference is organized by the IFM2 (Institut de Finance Mathématique de Montréal). I will be chairing a session, and I will give a talk based on a joint paper with Mathieu Boudreault.

The slides can be downloaded from the blog,

In various situations in the insurance industry, in finance, in epidemiology, etc., one needs to represent the joint evolution of the number of occurrences of an event. In this paper, we present a multivariate integer‐valued autoregressive (MINAR) model, derive its properties and apply the model to earthquake occurrences across various pairs of tectonic plates. The model is an extension of Pedeli & Karlis (2011), where cross autocorrelation (spatial contagion in a seismic context) is considered. We fit various bivariate count models and find that for many contiguous tectonic plates, spatial contagion is significant in both directions. Furthermore, ignoring cross autocorrelation can underestimate the potential for high numbers of occurrences over the short‐term. An application to risk management and cat‐bond pricing will be discussed.

http://freakonometrics.free.fr/ringfire.gif