Pre-conference day at CIMAT, Guanajuato (Mexico), for “Probability and Machine Learning“. Finalizing the last details with the incredible local team, financial services, etc. So far, no last-minute surprises, fingers crossed. At lunchtime, Hélène Guérin gives a talk at the probability seminar, and tomorrow morning the conference begins. Thanks again to our sponsors: the SCOR Foundation for Science, CIMAT, the probability lab of the Centre de recherches mathématiques (CRM), and the actuarial science lab, Quantact.
Workshop on Probability and Machine Learning, in Guanajuato, Mexico
Next week, we are organizing the first Montréal – Guanajuato Workshop on Probability and Machine Learning, at CIMAT, the Centro de Investigación en Matemáticas,
with Arturo Jaramillo Gil (CIMAT), Saraí Hernández-Torres (UNAM), Emilien Joly (CIMAT), Sandra Palau (UNAM), Courtney Paquette (McGill), Elliot Paquette (McGill), José Luis Pérez (CIMAT), James Melbourne (CIMAT), and Jean-François Renaud (UQAM) among the invited speakers.
The Centro de Investigación en Matemáticas (CIMAT), located in Guanajuato, Mexico, is a leading research institution focused on mathematics, statistics, and computer science. Part of Mexico’s National System of Public Research Centers (CONACYT), CIMAT excels in both theoretical and applied research, fostering innovation and solving complex real-world problems. Its vibrant academic environment supports advanced studies, offering master’s and doctoral programs, while promoting interdisciplinary collaboration. Housed in the picturesque city of Guanajuato, a UNESCO World Heritage Site, CIMAT attracts top researchers and students from around the globe, contributing significantly to scientific and technological advancement in Mexico and beyond.
The first goal of the workshop is to bring together researchers and scholars from Québec and Mexico working in probability theory and machine learning. With an emphasis on both theoretical foundations and practical applications, the conference will feature research presentations from speakers at a range of career stages at the faculty level. This will foster the exchange of ideas and opportunities for new collaborations.
The second goal is to encourage and promote student mobility between Mexico and Québec. The workshop will feature short talks by graduate students and postdoctoral fellows, who will have the opportunity to present their work and to exchange with different researchers. This will enable them to enrich their academic network, and may well open up mobility opportunities for them in the future. Already, Dante Mata Lopez is sharing the office with the team (Agathe, Marouane), and this summer two interns will join us: Allison Lara Nieva, from the Universidad Nacional Autónoma de México (Agathe Fernandes Machado will be involved in the supervision), and Fabian Dominguez Lopez, from the Universidad de Guanajuato, working with Hélène Guérin and Arsène Brice Zotsa Ngoufack, who will be involved in the supervision.
Picture credit for the poster: Yuko Nishikawa, a Brooklyn-based Japanese multidisciplinary artist and designer known for her organic, dreamlike works. She grew up in the seaside town of Chigasaki (茅ヶ崎市), south of Tokyo.
Conference on New Developments in Probability, in September
The next iteration of the Conference on New Developments in Probability (CNDP) will take place from September 26-28, 2024 [Thursday morning-Saturday noon] in Montreal, Quebec, Canada. The event will be hosted by Women in Probability and the Centre de Recherches Mathématiques (CRM)…
From Uncertainty to Precision: Enhancing Binary Classifier Performance through Calibration
Our paper, “From Uncertainty to Precision: Enhancing Binary Classifier Performance through Calibration”, written with Agathe Fernandes Machado, Emmanuel Flachaire, Ewen Gallic and François Hu, is now online on arXiv.
The assessment of binary classifier performance traditionally centers on discriminative ability using metrics such as accuracy. However, these metrics often disregard the model’s inherent uncertainty, especially when dealing with sensitive decision-making domains, such as finance or healthcare. Given that model-predicted scores are commonly seen as event probabilities, calibration is crucial for accurate interpretation. In our study, we analyze the sensitivity of various calibration measures to score distortions and introduce a refined metric, the Local Calibration Score. Comparing recalibration methods, we advocate for local regressions, emphasizing their dual role as effective recalibration tools and facilitators of smoother visualizations. We apply these findings in a real-world scenario, using a Random Forest classifier and regressor to predict credit default while simultaneously measuring calibration during performance optimization.
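As a generic illustration only (this is not the Local Calibration Score from the paper, and all names and parameter values below are hypothetical), a calibration curve can be visualized by locally regressing the observed binary outcomes on the predicted scores, for instance with loess in R:

# Minimal sketch: visualizing calibration with a local regression (loess).
# 'score' are hypothetical predicted probabilities, 'y' the observed 0/1 outcomes;
# this is a generic illustration, not the metric introduced in the paper.
set.seed(123)
n     <- 1000
score <- runif(n)                               # hypothetical predicted probabilities
y     <- rbinom(n, size=1, prob=score^1.3)      # slightly miscalibrated outcomes
fit   <- loess(y ~ score, span=0.75)            # local regression of outcomes on scores
u     <- seq(0, 1, by=.01)
plot(u, predict(fit, newdata=data.frame(score=u)), type="l",
     xlab="predicted score", ylab="observed frequency")
abline(a=0, b=1, lty=2)                         # perfect calibration line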
Probabilistic Foundations of Econometrics, part 1
In a series of posts, I want to get into the details of the history and foundations of econometric and machine learning models. It will be some sort of online version of our joint paper with Emmanuel Flachaire and Antoine Ly, Econometrics and Machine Learning (initially written in French), which will actually appear soon in the journal Economics and Statistics. This is the first one…
The importance of probabilistic models in economics is rooted in Working’s (1927) questions and the attempts to answer them in Tinbergen’s two volumes (1939). The latter have subsequently generated a great deal of work, as recalled by Duo (1993) in his book on the foundations of econometrics, and more particularly in the first chapter, “The Probability Foundations of Econometrics”. It should be recalled that Trygve Haavelmo was awarded the Nobel Prize in Economics in 1989 for his “clarification of the foundations of the probabilistic theory of econometrics”. Because, as Haavelmo (1944) (initiating a profound change in econometric theory in the 1930s, as recalled in Morgan’s Chapter 8 (1990)) showed, econometrics is fundamentally based on a probabilistic model, for two main reasons. First, the use of statistical quantities (or “measures”) such as means, standard errors and correlation coefficients for inferential purposes can only be justified if the process generating the data can be expressed in terms of a probabilistic model. Second, the probability approach is relatively general, and is particularly well suited to the analysis of “dependent” and “non-homogeneous” observations, as they are often found in economic data. We will then assume that there is a probabilistic space (\Omega,\mathcal{F},\mathbb{P}) such that observations (y_i,\mathbf{x}_i) are seen as realizations of random variables (Y_i, \mathbf{X}_i). In practice, however, we are not very interested in the joint law of the couple (Y, \mathbf{X}): the law of \mathbf{X} is unknown, and it is the law of Y conditional on \mathbf{X} that we will be interested in. In the following, we will denote by x a single observation, \mathbf{x} a vector of observations, X a random variable, and \mathbf{X} a random vector. Abusively, \mathbf{X} may also designate the matrix of individual observations (denoted \mathbf{x}_i), depending on the context.
Foundations of mathematical statistics
As recalled in Vapnik’s (1998) introduction, inference in parametric statistics is based on the following belief: the statistician knows the problem to be analyzed well; in particular, he knows the physical law that generates the stochastic properties of the data, and the function to be found is written via a finite number of parameters[1]. To find these parameters, the maximum likelihood method is used. The purpose of the theory is to justify this approach (by discovering and describing its favorable properties). We will see that in learning the philosophy is very different, since we do not have a priori reliable information on the statistical law underlying the problem, nor even on the function we would like to approximate (we will then propose methods to construct an approximation from the data at our disposal, as in Vapnik (1998)). A “golden age” of parametric inference, from 1930 to 1960, laid the foundations for mathematical statistics, which can be found in all statistical textbooks, even today. As Vapnik (1998) states, the classical parametric paradigm is based on the following three beliefs:
- To find a functional relationship from the data, the statistician is able to define a set of functions, linear in their parameters, that contains a good approximation of the desired function. The number of parameters describing this set is small.
- The statistical law underlying the stochastic component of most real-life problems is the normal law. This belief has been supported by reference to the central limit theorem, which states that, under broad conditions, the sum of a large number of random variables is approximately normally distributed.
- The maximum likelihood method is a good tool for estimating parameters.
In this section we will come back to the construction of the econometric paradigm, directly inspired by that of classical inferential statistics.
Conditional laws and likelihood
Linear econometrics has been constructed under the assumption of individual data, which amounts to assuming that the variables (Y_i, \mathbf{X}_i) are independent (it is possible to imagine temporal observations – then we would have a process (Y_t, \mathbf{X}_t) – but we will not discuss time series here). More precisely, we will assume that, conditionally on the explanatory variables \mathbf{X}_i, the variables Y_i are independent. We will also assume that these conditional laws remain in the same parametric family, but that the parameter is a function of \mathbf{x}. In the Gaussian linear model it is assumed that (Y\vert \mathbf{X}=\mathbf{x})\overset{\mathcal{L}}{\sim}\mathcal{N}(\mu(\mathbf{x}),\sigma^2)~~~~(1) where \mu(\mathbf{x})=\beta_0+\mathbf{x}^T\mathbf{\beta} and \mathbf{\beta}\in\mathbb{R}^{p}.
It is usually called a ‘linear’ model since \mathbb{E}[Y\vert \mathbf{X}=\mathbf{x}]=\beta_0+\mathbf{x}^T\mathbf{\beta} is a linear combination of the covariates[2]. It is said to be a homoscedastic model if Var[Y|\mathbf{X}=\mathbf{x}]=\sigma^2, where \sigma^2 is a positive constant. To estimate the parameters, the traditional approach is to use the maximum likelihood estimator, as initially suggested by Ronald Fisher. In the case of the Gaussian linear model, the log-likelihood is written \log\mathcal{L}(\beta_0, \mathbf{\beta},\sigma^2\vert \mathbf{y},\mathbf{x}) = -\frac{n}{2}\log[2\pi\sigma^2] - \frac{1}{2\sigma^2}\sum_{i=1}^n (y_i-\beta_0-\mathbf{x}_i^T\mathbf{\beta})^2. Note that the term on the right, measuring a distance between the data and the model, will be interpreted as the deviance in generalized linear models. We then set (\widehat{\beta}_0,\widehat{\mathbf{\beta}},\widehat{\sigma}^2)=\text{argmax}\left\lbrace\log\mathcal{L}(\beta_0, \mathbf{\beta},\sigma^2\vert \mathbf{y},\mathbf{x})\right\rbrace. The maximum likelihood estimator is thus obtained by minimizing the sum of squared errors (the so-called “least squares” estimator), which we will find again in the “machine learning” approach.
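To make the link between this likelihood and least squares concrete, here is a minimal R sketch, on simulated data (all parameter values are hypothetical), where the Gaussian log-likelihood is maximized numerically and the result is compared with lm():

# Minimal sketch, on simulated data: maximizing the Gaussian log-likelihood
# numerically gives (essentially) the least-squares estimates.
set.seed(1)
n <- 200
x <- rnorm(n)
y <- 1 + 2*x + rnorm(n, sd=.5)            # hypothetical true model
negloglik <- function(theta) {
  mu    <- theta[1] + theta[2]*x          # beta_0 + beta_1 x
  sigma <- exp(theta[3])                  # parameterized to stay positive
  -sum(dnorm(y, mean=mu, sd=sigma, log=TRUE))
}
opt <- optim(c(0,0,0), negloglik)
opt$par[1:2]                              # close to the least-squares coefficients
coef(lm(y ~ x))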
The first-order conditions yield the normal equations, whose matrix form is \mathbf{X}^T[\mathbf{y}-\mathbf{X}\mathbf{\beta}]=\mathbf{0}, which can also be written (\mathbf{X}^T \mathbf{X})\mathbf{\beta}=\mathbf{X}^T \mathbf{y}. If \mathbf{X} is a full (column) rank matrix, then we find the classical estimator \widehat{\mathbf{\beta}}=(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}=\mathbf{\beta}+(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^{T}\mathbf{\varepsilon}~~~(2) using the residual-based writing (as often in econometrics) \mathbf{y}=\mathbf{X}\mathbf{\beta}+\mathbf{\varepsilon}. The Gauss-Markov theorem ensures that this estimator is the unbiased linear estimator with minimum variance. It can then be shown that \widehat{\mathbf{\beta}}\sim\mathcal{N}(\mathbf{\beta},\sigma^2(\mathbf{X}^T\mathbf{X})^{-1}), and in particular, if we simply need the first two moments, \mathbb{E}[\widehat{\mathbf{\beta}}]=\mathbf{\beta} and Var[\widehat{\mathbf{\beta}}]=\sigma^2 [\mathbf{X}^T\mathbf{X}]^{-1}. In fact, the normality hypothesis makes it possible to make a link with mathematical statistics, but it is possible to construct the estimator given by equation (2) without that Gaussian assumption. Indeed, if we assume that Y|\mathbf{X} has the same distribution as \mathbf{x}^T\mathbf{\beta}+\varepsilon, where \mathbb{E}[\varepsilon]=0, Var[\varepsilon]=\sigma^2 and Cov[X_j,\varepsilon]=0 for all j, then \widehat{\mathbf{\beta}} is an unbiased estimator of \mathbf{\beta} with the smallest variance[3] among unbiased linear estimators. Furthermore, even if we cannot get normality in finite samples, this estimator is asymptotically Gaussian, with \sqrt{n}(\widehat{\mathbf{\beta}}-\mathbf{\beta})\overset{\mathcal{L}}{\rightarrow}\mathcal{N}(\mathbf{0},\mathbf{\Sigma}) as n\rightarrow\infty, for some matrix \mathbf{\Sigma}.
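A minimal R sketch of the closed-form expression in equation (2), on simulated data (hypothetical coefficients), compared with lm():

# Minimal sketch: the normal-equations estimator (X'X)^{-1} X'y on simulated data.
set.seed(1)
n <- 200
X <- cbind(1, rnorm(n), rnorm(n))                 # design matrix with an intercept column
y <- as.vector(X %*% c(1, 2, -1) + rnorm(n))      # hypothetical true coefficients
beta_hat <- solve(t(X) %*% X, t(X) %*% y)         # solves (X'X) b = X'y
cbind(beta_hat, coef(lm(y ~ X[,2] + X[,3])))      # both give the same estimates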
The condition of having a full rank matrix \mathbf{X} can be (numerically) strong in high dimension. If it is not satisfied, (\mathbf{X}^T \mathbf{X})^{-1}\mathbf{X}^T does not exist. If \mathbb{I} denotes the identity matrix, however, it should be noted that (\mathbf{X}^T \mathbf{X}+\lambda\mathbb{I})^{-1}\mathbf{X}^T always exists, for any \lambda>0. This estimator is called the ridge estimator of level \lambda (introduced in the 1960s by Hoerl (1962), and associated with a regularization studied by Tikhonov (1963)). This estimator naturally appears in a Bayesian econometric context.
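And a minimal sketch of that ridge estimator, on simulated data, for an arbitrary (purely illustrative) value of \lambda:

# Minimal sketch: ridge estimator (X'X + lambda I)^{-1} X'y, for an arbitrary lambda = 1.
set.seed(1)
n <- 200
X <- cbind(1, rnorm(n), rnorm(n))
y <- as.vector(X %*% c(1, 2, -1) + rnorm(n))
lambda <- 1
beta_ridge <- solve(t(X) %*% X + lambda*diag(ncol(X)), t(X) %*% y)
beta_ols   <- solve(t(X) %*% X, t(X) %*% y)
cbind(ridge=beta_ridge, ols=beta_ols)             # ridge slightly shrinks the estimates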
Residuals
It is not uncommon to introduce the linear model from the distribution of the residuals, as we mentioned earlier. Hence, equation (1) is often written as y_i=\beta_0+\mathbf{x}_i^T\mathbf{\beta}+\varepsilon_i~~~~(3) where the \varepsilon_i’s are realizations of independent and identically distributed (i.i.d.) random variables from some \mathcal{N}(0,\sigma^2) distribution. With a vector notation, we will write \mathbf{\varepsilon}\overset{\mathcal{L}}{\sim}\mathcal{N}(\mathbf{0},\sigma^2\mathbb{I}). The estimated residuals are defined as \widehat{\varepsilon}_i =y_i-[\widehat{\beta}_0+\mathbf{x}_i^T\widehat{\mathbf{\beta}}]. Those (estimated) residuals are basic tools for diagnosing the relevance of the model.
An extension of the model described by equation (1) has been proposed to take into account a possible heteroscedastic character: (Y\vert \mathbf{X}=\mathbf{x})\overset{\mathcal{L}}{\sim}\mathcal{N}(\mu(\mathbf{x}),\sigma^2(\mathbf{x})) where \sigma^2(\mathbf{x}) is a positive function of the explanatory variables. This model can be rewritten as y_i=\beta_0+\mathbf{x}_i^T\mathbf{\beta}+\sigma(\mathbf{x}_i)\cdot\varepsilon_i where the residuals are always i.i.d., with unit variance, \varepsilon_i=\frac{y_i-[\beta_0+\mathbf{x}_i^T\mathbf{\beta}]}{\sigma(\mathbf{x}_i)}. While residual-based equations are popular in linear econometrics (when the dependent variable is continuous), they are no longer natural in counting models or logistic regression.
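A minimal sketch of such a heteroscedastic model on simulated data, where the function \sigma(\mathbf{x}) is assumed known (a hypothetical choice), comparing ordinary and weighted least squares:

# Minimal sketch: simulated heteroscedastic data, with sigma(x) assumed known,
# comparing ordinary least squares with weighted least squares.
set.seed(1)
n <- 500
x <- runif(n, 1, 5)
sigma_x <- .5*x                                   # hypothetical variance function
y <- 1 + 2*x + sigma_x*rnorm(n)
fit_ols <- lm(y ~ x)                              # ignores heteroscedasticity
fit_wls <- lm(y ~ x, weights=1/sigma_x^2)         # weights proportional to 1/Var
rbind(ols=coef(fit_ols), wls=coef(fit_wls))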
However, writing using an error term (as in equation (3)) raises many questions about the representation of an economic relationship between two quantities. For example, it can be assumed that there is a relationship (linear to begin with) between the quantities of a traded good, q, and its price p. This allows us to imagine a supply equation q_i=\beta_0+\beta_1 p_i+u_i (u_i being an error term), where the quantity sold depends on the price, but in an equally legitimate way, one can imagine that the price depends on the quantity produced (what one could call a demand equation), p_i=\alpha_0+\alpha_1 q_i+v_i (v_i denoting another error term). Historically, the error term in equation (3) could be interpreted as an idiosyncratic error on the variable y, the so-called explanatory variables being assumed to be fixed. However, this interpretation often makes the link between an economic relationship and a complicated economic model difficult, since economic theory speaks abstractly about a relationship between magnitudes, while the econometric model imposes a specific form (which magnitude is y and which magnitude is x), as shown in more detail in Chapter 7 of Morgan (1990).
(references mentioned above are online here). To be continued…
[1] This approach can be compared to structural econometrics, as presented for example in Keane (2010).
[2] Here, we will try to distinguish \beta_0, the intercept, from the other parameters \mathbf{\beta}, since they are treated differently in many extensions (e.g. regularization). Nevertheless, in many general formulas \mathbf{\beta} will denote the joint vector (\beta_0, \mathbf{\beta}), to avoid overly heavy notation.
[3] In the sense that the difference between variance matrices is a positive matrix.
How to Estimate the Occurrence Probability of Natural Catastrophes
Tomorrow, the two-day workshop on “Natural Catastrophe Prevention and Insurance: Market and Policy Issues” starts at ETH Zürich. My slides, “How to Estimate the Occurrence Probability of Natural Catastrophes”, are now available online.
The “probability to win” is hard to estimate…
Real-time computation (or estimation) of the “probability to win” is difficult. We’ve seen that in soccer games, in elections… but actually, as a professor, I see that frequently when I grade my students.
Consider a classical multiple-choice exam. After each question, imagine that you try to compute the probability that the student will pass. Consider here the case where we have 50 questions. Students pass when they have 25 correct answers or more. Just for the simulations, I will assume that students simply flip a coin at each question… I have n students and 50 questions.
set.seed(1)
n=10
M=matrix(sample(0:1,size=n*50,replace=TRUE),50,n)
Let X_{i,j} denote the score of student i at question j. Let S_{i,j} denote the cumulative score, i.e. S_{i,j}=X_{i,1}+\cdots+X_{i,j}. At step j, I can get some sort of prediction of the final score, using \hat{T}_{i,j}=50\times S_{i,j}/j. Here is the code
SM=apply(M,2,cumsum)
NB=SM*50/(1:50)
We can actually plot it
plot(NB[,1],type="s",ylim=c(0,50))
abline(h=25,col="blue")
for(i in 2:n) lines(NB[,i],type="s",col="light blue")
lines(NB[,3],type="s",col="red")
But that’s simply the prediction of the final score, at each step. That is not the computation of the probability to pass!
Let’s try to see how we can do it… If after j questions the student already has 25 correct answers (i.e. S_{i,j}\geq 25), the probability should be 1, since he cannot fail anymore. Another simple case is the following: if, after j questions, the number of points he can still get, even with all correct answers until the end, is not sufficient, he will fail. That means that if S_{i,j}+(50-j)< 25, the probability should be 0. Otherwise, computing the probability of success is quite straightforward: it is the probability of obtaining at least 25-S_{i,j} correct answers out of the 50-j remaining questions, when the per-question probability of success is estimated by S_{i,j}/j. We recognize the survival probability of a binomial distribution. The code is then simply
PB=NB*NA    # matrix of NAs with the same dimensions as NB
for(i in 1:50){
  for(j in 1:n){
    if(SM[i,j]>=25) PB[i,j]=1
    if(SM[i,j]+(50-i)<25) PB[i,j]=0
    if((SM[i,j]<25)&(SM[i,j]+(50-i)>=25))
      PB[i,j]=1-pbinom(25-SM[i,j]-1,size=(50-i),prob=SM[i,j]/i)
  }}
So if we plot it, we get
plot(PB[,1],type="s",ylim=c(0,1))
abline(h=.5,col="blue")    # reference line at probability 1/2
for(i in 2:n) lines(PB[,i],type="s",col="light blue")
lines(PB[,3],type="s",col="red")
which is much more volatile than the previous curves we obtained! So yes, computing the “probability to win” is a complicated exercise! Don’t blame those who try: it is hard to do!
Of course, things are slightly different if my students don’t just flip a coin… this is what we obtain if half of the students are good (2/3 probability of getting a question correct) and half are not (1/3 chance); a sketch of the modified simulation is given below,
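Here is a minimal sketch of that modified simulation (the 2/3 and 1/3 probabilities are the ones mentioned above; the rest of the computation is unchanged):

# Minimal sketch: half the students answer correctly with probability 2/3,
# the other half with probability 1/3; SM, NB and PB are then computed as before.
set.seed(1)
n=10
p=rep(c(2/3,1/3),each=n/2)                          # per-student success probability
M=matrix(rbinom(n*50,size=1,prob=rep(p,each=50)),50,n)
SM=apply(M,2,cumsum)
NB=SM*50/(1:50)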
If we look at the probability to pass, we usually do not have to wait until the end (the 50 questions) to know who passed and who failed.
PS: I guess a less volatile solution could be obtained with a Bayesian approach… if I find some spare time this week, I will try to code it… (a rough sketch is given below)
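This is not (yet) the promised solution, but a rough sketch of what such a Bayesian approach could look like, assuming a uniform Beta(1,1) prior on each student’s success probability, and reusing SM, NB and n from the code above: after j questions with s correct answers, the posterior is Beta(1+s,1+j-s), and the probability of passing is obtained by integrating the binomial survival function against that posterior.

# Rough sketch, under a Beta(1,1) prior on each student's success probability.
prob_pass_bayes=function(s,j,total=50,pass=25){
  if(s>=pass) return(1)
  if(s+(total-j)<pass) return(0)
  integrate(function(p)
    (1-pbinom(pass-s-1,size=total-j,prob=p))*dbeta(p,1+s,1+j-s),
    0,1)$value
}
PB2=NB*NA
for(i in 1:50){ for(j in 1:n) PB2[i,j]=prob_pass_bayes(SM[i,j],i) }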
(A brief) history of randomness, and simulation techniques
Hearing “there is a 10% chance of rain today” or “the medical test has a positive predictive value of 75%” shows that probabilities are now everywhere. A probability is a quantity that is difficult to grasp, but essential when trying to theorize and measure chance, or randomness. And while the mathematical theory came very late, as Hacking (2006) points out, this did not prevent insurance from developing early enough, and from having the first (actuarial) mortality tables even before the “probability of death” or “life expectancy” had a mathematical basis. And in the same way, many techniques were invented to “generate randomness”, before the explosion of the so-called Monte Carlo methods, in parallel with the development of computing (and the fact that a machine could generate chance).
Articles for the Probability and Statistics Project
Here are some articles for the project of the graduate crash course on probability and statistics:
- The garden of forking paths: Why multiple comparisons can be a problem, even when there is no “fishing expedition” or “p-hacking” and the research hypothesis was posited ahead of time
- An investigation of the false discovery rate and the misinterpretation of p-values
- Final collapse of the Neyman-Pearson decision theoretic framework and rise of the neoFisherian
- P value, a true test of statistical significance? A cautionary note
- Statistical tests, p values, confidence intervals, and power: a guide to misinterpretations
- P Values and Statistical Practice
- Points of significance: Power and sample size
For those willing to work on datasets, consider the success rates per school in a national exam, over time (here is the file)
base=read.table("http://freakonometrics.free.fr/brevet_rennes.csv", header=TRUE,sep=";",dec=",")
Some datasets will be uploaded soon
Probability and Statistics
Next week, I will start a short course on probability and statistics. The slides of the course are now online. There will be more information soon about the exam and the projects.
The odds of a cluster of airplane accidents
Recently, there have been a lot of airplane accidents.
- July, 17th 2014, Hrabove, Ukraine, Malaysia Airlines, Boeing 777, fatalities 298 (/298)
- July, 23rd 2014, Magong, Taiwan, TransAsia Airways, ATR 72-500, fatalities 47 (/58)
- July, 24th 2014, Aguelhok, Mali, Air Algerie, McDonnell Douglas MD-83, fatalities 116 (/116)
It is simple to find a lot of datasets about airplane crashes. For instance on http://ntsb.gov/aviationquery. The dataset is nice, with a lot of information,
> planes=read.table(
+ "cbad3ca6-6b8f-4c98-9ee0-601faAviationData.txt",
+ sep="|",header=TRUE)
for instance the exact location of the crashes,
> library(maps)
> map("world", interior = FALSE)
> points(planes$Longitude,planes$Latitude,
+ pch=19,cex=planes$Total.Fatal.Injuries/50,
+ col="red")
Generating functions
Today, I wanted to publish a post on generating functions, based on discussions I had with Jean-François over coffee after lunch, a couple of times already. The other reason is that I am publishing this post while my students have just finished their probability exam (and there were a few questions on generating functions).
- A short introduction (back on a specific exercise)
In the probability exam, I included an exercise we had seen in class the week before. The question is the following (question 16 in the form – in French). Let F(x)=0 for x<0 and F(x)=1-\frac{e^{-x}}{3} for x\geq 0 be the cumulative distribution function of some random variable X, i.e. F(x)=\mathbb{P}(X\leq x). What is the moment generating function of X, i.e. M_X(t)=\mathbb{E}[e^{tX}]?
Consider some t (we’ll see later on whether some additional constraint is necessary). The tricky part of this exercise appears extremely fast, actually: how could you write \mathbb{E}[e^{tX}]? I mean, in any probability textbook, the standard answer is
- \mathbb{E}[e^{tX}]=\sum_x e^{tx}\,\mathbb{P}(X=x) if X is discrete,
- \mathbb{E}[e^{tX}]=\int e^{tx}f(x)\,dx if X is (absolutely) continuous, where f is the density of X.
Here, X is clearly not a discrete variable. But is it (absolutely) continuous? My (strong) belief is that you need to plot that distribution function, F(x) for all x, to see what it looks like (following recent discussions with Philippe Reka, I will try to post more hand-made graphs).
Ooops. It looks like we have a discontinuity in 0. So we have to be a bit careful here: X is neither continuous nor discrete. Let us use the double projection formula, \mathbb{E}[h(X)]=\mathbb{E}[\mathbb{E}[h(X)\vert A]], which can also be written, if \mathbb{P}(A)\in(0,1), \mathbb{E}[h(X)]=\mathbb{E}[h(X)\vert A]\,\mathbb{P}(A)+\mathbb{E}[h(X)\vert A^c]\,\mathbb{P}(A^c). This is simply the idea of saying that the overall average is a barycenter of the averages per subgroup. Here, h(x)=e^{tx}, and let A=\lbrace X=0\rbrace while A^c=\lbrace X>0\rbrace (note that \mathbb{P}(X<0)=0). Thus, \mathbb{E}[e^{tX}]=\mathbb{E}[e^{tX}\vert X=0]\,\mathbb{P}(X=0)+\mathbb{E}[e^{tX}\vert X>0]\,\mathbb{P}(X>0).
Let us consider the three different components. \mathbb{P}(X=0)=F(0)-F(0^-)=\frac{2}{3} and \mathbb{E}[e^{tX}\vert X=0]=e^{t\cdot 0}=1 (since it is a real-valued constant), and here \mathbb{P}(X>0)=\frac{1}{3}. So finally, we should compute \mathbb{E}[e^{tX}\vert X>0]. Observe that X given X>0 is an (absolutely) continuous random variable, with a density. To get it, observe that for all x>0, \mathbb{P}(X>x\vert X>0)=\frac{\mathbb{P}(X>x)}{\mathbb{P}(X>0)}=\frac{e^{-x}/3}{1/3}=e^{-x}, i.e. X given X>0 is an exponential distribution (with unit mean).
Hence, the law of X is a mixture between an exponential variable and a Dirac mass in 0. This was actually the tricky part of the question, since it is not obvious when we see (only) the formula above.
From now on, it is just high-school level computations, \mathbb{E}[e^{tX}\vert X>0]=\int_0^{\infty}e^{tx}e^{-x}\,dx=\frac{1}{1-t} if t<1 (for the first time, we see that the function is not defined everywhere). If we put all the expressions together, M_X(t)=\frac{2}{3}+\frac{1}{3}\cdot\frac{1}{1-t}=\frac{3-2t}{3-3t}, for t<1.
- Monte Carlo computations
If we are lazy (and trust me, I am extremely lazy), it is possible to use Monte Carlo simulations to compute that function,
> F=function(x) ifelse(x<0,0,1-exp(-x)/3)
> Finv=function(u) uniroot(function(x) F(x)-u,c(-1e-9,1e4))$root
or (to avoid the problem of the discontinuity, inverting the survival function instead – which is fine for simulation purposes, since 1-U is also uniformly distributed)
> Finv=function(u) ifelse(3*u>1,0,
+ uniroot(function(x) (1-F(x))-u,c(-1e-9,1e4))$root)
Here, the inverse is simple to get explicitly, so we can speed up the code using
> Finv=function(u) ifelse(3*u>1,0,-log(3*u))
Then, we use
> rF=function(n) Vectorize(Finv)(runif(n))
> M=function(t,n=10000) mean(exp(t*rF(n)))
> Mtheo=function(t) (3-2*t)/(3-3*t)
> u=seq(-2,1,by=.1)
> v=Vectorize(M)(u)
> plot(u,v,type="b",col='blue')
> lines(u,Mtheo(u),col="red")
The problem with Monte Carlo simulations is that they should be used only if they are valid. I mean, I can compute
> set.seed(1)
> M(3)
[1] 5748134
A finite sum can always be computed, numerically. Even if, here, M_X(3)=\mathbb{E}[e^{3X}] does not exist (or, to be more precise, is not finite). It is like the average of a Cauchy sample… I can always compute it, even if the expected value does not exist…
> set.seed(1)
> mean(rcauchy(1000000))
[1] 0.006069028
This is related to questions I tried to address a few years ago in a paper, where I wanted to test whether \mathbb{E}[X] is finite (or not). Almost all the tests I know are actually based on that assumption… But this is not the point here. My point is that those generating functions are interesting, when they exist. And perhaps working with characteristic functions is a better idea.
- Generating functions
Now, to get back to the beginning of the last course, generating functions are interesting for a lot of reasons. But first of all, let us define those functions properly.
The moment generating function M_X(t)=\mathbb{E}[e^{tX}] exists if it is finite on a neighbourhood of 0 (there is an h>0 such that for all t\in(-h,h), M_X(t)<\infty). In that case, there exists some (open) interval (a,b)\ni 0 such that for all t\in(a,b), M_X(t)<\infty, called the convergence strip of the moment generating function.
This function is said to be moment generating, since if M_X exists (as defined in the previous paragraph), then all moments exist: for all k\in\mathbb{N}, \mathbb{E}[X^k]<\infty. This is basically due to the fact that, for all k, \frac{x^k}{e^{tx}}\rightarrow 0 as x\rightarrow\infty, so, for all x large enough, x^k\leq e^{tx}. And before, it is always possible to use a multiplicative constant, x^k\leq C\,e^{tx} for some C. Thus, \mathbb{E}[X^k]\leq C\,\mathbb{E}[e^{tX}]<\infty if t is small enough (namely t belongs to the convergence strip).
Now, if we use Taylor’s expansion, e^{tX}=\sum_{k\geq 0}\frac{(tX)^k}{k!} and M_X(t)=\sum_{k\geq 0}\frac{t^k}{k!}\mathbb{E}[X^k]. If we look at the value of the derivatives of that function at point 0, then M_X^{(k)}(0)=\mathbb{E}[X^k].
As we’ve seen last week in class, it is possible to define a moment generating function in higher dimension, for some random vector \mathbf{X}, M_{\mathbf{X}}(\mathbf{t})=\mathbb{E}[e^{\mathbf{t}^T\mathbf{X}}] for some \mathbf{t}\in\mathbb{R}^d. It is again a moment generating function since crossed derivatives (taken at point \mathbf{0}) are cross-moments. For instance, \frac{\partial^2 M_{\mathbf{X}}}{\partial t_1\partial t_2}(\mathbf{0})=\mathbb{E}[X_1X_2].
So, moment generating functions are interesting if you want to derive the moments of a given distribution. Another interesting feature is that this moment generating function (under certain conditions) fully characterizes the distribution of the random variable, in the sense that if, for some h>0, M_X(t)=M_Y(t) for all t\in(-h,h), then X\overset{\mathcal{L}}{=}Y.
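As a quick illustration (my own check, with an assumed distribution): for an exponential variable with unit rate, M_X(t)=1/(1-t) for t<1, and numerical derivatives at 0 recover \mathbb{E}[X]=1 and \mathbb{E}[X^2]=2,

# Minimal sketch: numerical derivatives of the MGF of an Exp(1) variable at 0
# recover the first two moments (E[X]=1, E[X^2]=2).
Mexp <- function(t) 1/(1-t)          # MGF of an Exp(1) variable, for t<1
h <- 1e-4
(Mexp(h)-Mexp(-h))/(2*h)             # ~ E[X]   = 1
(Mexp(h)-2*Mexp(0)+Mexp(-h))/h^2     # ~ E[X^2] = 2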
- From moment generating functions to characteristic functions
The problem with the moment generating function is that it is defined (only) on some neighborhood of 0, and we should be careful. The other problem is that it exists only for distributions with finite exponential moments (in particular, all moments have to be finite), which might be a strong assumption.
Thus, an interesting idea is to consider that function not on the real line, but on the imaginary line.
Thus, let \varphi_X(t)=\mathbb{E}[e^{itX}] for some t\in\mathbb{R}. Actually, not some, but all t\in\mathbb{R}, since \vert e^{itX}\vert=1, so the characteristic function always exists. Paul Lévy proved in 1925 that the characteristic function completely characterizes the distribution.
Now, if we look at it quickly, it looks like we did not change a lot of things here, and we should be able to write \varphi_X(t)=M_X(it).
If we want to do things properly, let us look at Gut (2005) for instance. Assume that M_X is defined on some interval (-a,a). It is then possible to define a function (this time, it is no longer a real-valued function) as g(z)=\mathbb{E}[e^{zX}], which is well defined on the strip \lbrace z\in\mathbb{C}: -a<\text{Re}(z)<a\rbrace. \varphi_X and M_X are then restrictions of that function, respectively on the imaginary line and on the real line. That function g is clearly holomorphic, and thus the values it takes on such a strip are fully determined by the values it takes on the real interval (-a,a). Thus, the moment generating function will completely characterize the distribution.
But it has to be defined on some neighbourhood of 0. Which is not trivial actually… I mean, in nonlife insurance, we see a lot of Pareto distributions.
- Fast Fourier Transform
Recall Euler’s formula, e^{i\theta}=\cos(\theta)+i\sin(\theta).
Thus, we should not be surprised to see Fourier’s transform appear here. From this formula, we can write \varphi_X(t)=\mathbb{E}[\cos(tX)]+i\,\mathbb{E}[\sin(tX)].
Using some results from Fourier analysis, we can prove that, if the random variable has a Dirac mass at x, \mathbb{P}(X=x)=\lim_{T\rightarrow\infty}\frac{1}{2T}\int_{-T}^{+T}e^{-itx}\varphi_X(t)\,dt. And a similar relationship can be obtained if the distribution is absolutely continuous at point x, f(x)=\frac{1}{2\pi}\int_{-\infty}^{+\infty}e^{-itx}\varphi_X(t)\,dt.
Actually, since we work with real-valued random variables, the complex part was just a detour, and we can prove that f(x)=\frac{1}{\pi}\int_{0}^{+\infty}\text{Re}\left[e^{-itx}\varphi_X(t)\right]dt.
It is then possible to get the cumulative distribution function using Gil-Pelaez’s inversion formula, obtained in 1951, F(x)=\frac{1}{2}-\frac{1}{\pi}\int_{0}^{+\infty}\frac{\text{Im}\left[e^{-itx}\varphi_X(t)\right]}{t}\,dt.
Nice, isn’t it? It means that anyone working on financial markets knows those formulas, used to price options (see Carr & Madan (1999) for instance). And the good thing is that any mathematical or statistical software can be used to compute them.
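As an illustration (my own numerical check, not part of the original post), the Gil-Pelaez formula can be verified in R for a Gamma distribution with shape 2 and rate 1, whose characteristic function is (1-it)^{-2},

# Minimal sketch: numerical Gil-Pelaez inversion for a Gamma(shape=2, rate=1) variable.
phi <- function(t) (1-1i*t)^(-2)      # characteristic function of a Gamma(2,1) variable
F_gp <- function(x) {                 # Gil-Pelaez inversion formula
  integrand <- function(t) Im(exp(-1i*t*x)*phi(t))/t
  1/2 - (1/pi)*integrate(integrand, 0, Inf)$value
}
F_gp(3)                               # should be close to the exact value below
pgamma(3, shape=2, rate=1)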
- Characteristic function and actuarial science
Now, what is the interest of all that in actuarial science? Characteristic functions are interesting when we deal with sums of independent random variables, since the characteristic function of the sum is simply the product of the characteristic functions. They are also interesting when dealing with compound sums1. Consider the problem of computing the 99.5% quantile of the compound sum of Gamma random variables, i.e. S=\sum_{i=1}^N X_i, where the X_i's are i.i.d. \mathcal{G}(\alpha,\beta) and N\sim\mathcal{P}(\lambda). The strategy is to discretize the loss amounts,
> n <- 2^20
> p <- diff(pgamma(0:n-.5,alpha,beta))
Then, to compute the (discretized) distribution of S, we use
> f <- Re(fft(exp(lambda*(fft(p)-1)),inverse=TRUE))/n
To compute the 99.5% quantile, we just use
> sum(cumsum(f)<.995)
That’s extremely simple, isn’t it? Want me to do it for real? Consider the following loss amounts
> set.seed(1)
> X <- rexp(200,rate=1/100)
> print(X[1:5])
[1]  75.51818 118.16428  14.57067  13.97953  43.60686
Let us fit a gamma distribution. We can use
> library(MASS)
> fitdistr(X,"gamma")
      shape         rate
  1.309020256   0.013090411
 (0.117430137) (0.001419982)
or
> f <- function(x) log(x)-digamma(x)-log(mean(X))+mean(log(X))
> alpha <- uniroot(f,c(1e-8,1e8))$root
> beta <- alpha/mean(X)
> alpha
[1] 1.308995
> beta
[1] 0.01309016
Anyway, we have the parameters of our Gamma distribution for individual losses. And assume that the mean of the Poisson counting variable is
> lambda <- 100
Again, it is possible to use Monte Carlo simulations, if we can easily generate a compound sum. We can use the following generic code: first, we need functions to generate the two kinds of variables of interest,
> rN.P <- function(n) rpois(n,lambda)
> rX.G <- function(n) rgamma(n,alpha,beta)
then, we can use (see here for a discussion on possible codes)
> rcpd4 <- function(n,rN=rN.P,rX=rX.G){
+ return(sapply(rN(n), function(x) sum(rX(x))))}
If we generate one million variables, we can get an estimator for the quantile,
> set.seed(1)
> quantile(rcpd4(1e6),.995)
   99.5%
13651.64
Another idea is to remember a property of the Gamma distribution: a sum of independent Gamma variables is still Gamma (with additional assumptions on the parameters, but here we consider identical Gamma distributions). Thus, it is possible to compute the cumulative distribution function of the compound sum,
> F <- function(x,lambda=100,nmax=1000) {n <- 0:nmax
+ sum(pgamma(x,n*alpha,beta)*dpois(n,lambda))}
(or at least an approximation). If we invert that function, we get our quantile
> uniroot(function(x) F(x)-.995,c(1e-8,1e8))$root
[1] 13654.43
Which is consistent with our Monte Carlo computation. Now, we can also use the fast Fourier transform here,
> n <- 2^20; lambda <- 100
> p <- diff(pgamma(0:n-.5,alpha,beta))
> f <- Re(fft(exp(lambda*(fft(p)-1)),inverse=TRUE))/n
> sum(cumsum(f)<.995)
[1] 13654
Now, if it is simple, is it efficient? Let us compare, for instance, the computation times to get those three outputs,
> system.time(quantile(rcpd4(1e5),.995))
   user  system elapsed
  2.453   0.106   2.611
> system.time(uniroot(function(x) F(x)-.995,c(1e-8,1e8))$root)
   user  system elapsed
  0.041   0.012   0.361
> system.time(sum(cumsum(Re(fft(exp(lambda*(fft(p)-1)),inverse=TRUE))/n)<.995))
   user  system elapsed
  0.527   0.020   0.560
Computations here are comparable with the (numerical) inversion of the cumulative distribution function. Except that here we were lucky: if the distribution is not Gamma but lognormal, the second algorithm can no longer be used.
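To illustrate that last point (this is an addition, with arbitrary lognormal parameters): the FFT recipe only needs the discretized severity distribution, so the same code works if pgamma is replaced by plnorm, even though no closed-form expression is available for the distribution of a sum of lognormal variables,

# Minimal sketch: the same FFT recipe with lognormal severities (arbitrary
# meanlog/sdlog, chosen so that the mean is comparable to the Gamma example).
n <- 2^20; lambda <- 100
meanlog <- log(100)-.5; sdlog <- 1             # hypothetical severity parameters
p <- diff(plnorm(0:n-.5, meanlog, sdlog))      # discretized severity distribution
f <- Re(fft(exp(lambda*(fft(p)-1)), inverse=TRUE))/n
sum(cumsum(f) < .995)                          # 99.5% quantile of the compound sum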
1. This numerical example is taken from the first chapter of Computational Actuarial Science with R, to appear in a few months.
Pairs of Random Variables
On Friday, the ACT2121 course (preparation for the SOA Exam P, probability) continues. A new set of exercises, on topic 13 (as classified in Jacques Labelle’s book, which will serve as the reference for this course):
- Pairs of random variables #13 ACT2121-A2013-13.pdf
Probability, Midterm 1
A quick post to share last week’s midterm exam, together with some elements of correction (including answer statistics, as for the previous term). Any comments on my corrections are welcome…
Continuous Random Variables
The ACT2121 course (preparation for the SOA Exam P, probability) continues. A new set of exercises, on topics 7 and 8 (as classified in Jacques Labelle’s book, which will serve as the reference for this course):
- Continuous random variables #7 ACT2121-A2013-7.pdf
- Exponential distribution #8 ACT2121-A2013-8.pdf
Elements of correction for midterm 1 will be put online soon.