Category Archives: Computations

Reverse Engineering with Correlated Features

In econometric modeling, I usually have a problem with correlated features. A few weeks ago, I was discussing feature selection when features are correlated. This week, I was wondering about reverse engineering when features might be correlated (not to say highly correlated). The way I see reverse engineering is the following:

  1. someone has some dataset and, based on that dataset, a model was fitted; but we cannot see how it works…
  2. we can generate “fake data”, feed the model with those data, and get predictions
  3. based on those predictions, we would like to fit a model that should be close to the ‘true’ model that was used
  4. one way to measure how good our model is, is to compare its predictions on the initial data with the original responses (or with the initial ‘true’ values, if we use generated datasets) — a minimal sketch of this loop is given below.
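
To fix ideas, here is a minimal sketch of that loop, under purely illustrative assumptions: the hidden model is played by a random forest, the fake data are independent Gaussian draws, and the reverse-engineered model is a plain linear regression (all names and parameters here are my own choices, not necessarily those used in the post),

> library(randomForest)
> set.seed(1)
> n <- 1000
> X1 <- rnorm(n)
> X2 <- .8*X1 + sqrt(1-.8^2)*rnorm(n)       # two correlated features
> Y <- 1 + X1 - 2*X2 + rnorm(n)
> blackbox <- randomForest(Y ~ X1 + X2)     # step 1: the model we cannot see
> Xf1 <- rnorm(n); Xf2 <- rnorm(n)          # step 2: fake data, fed to the box
> Yf <- predict(blackbox, data.frame(X1=Xf1, X2=Xf2))
> clone <- lm(Yf ~ Xf1 + Xf2)               # step 3: our surrogate model
> mean((predict(clone, data.frame(Xf1=X1, Xf2=X2)) - Y)^2)  # step 4: error on initial data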

Continue reading Reverse Engineering with Correlated Features

Data Science pour l’Actuariat, partie 1

The Data Science pour l’Actuariat training program started this week. I gave the first lecture, an introductory course on advanced R, largely inspired by http://adv-r.had.co.nz/ (and by the introduction of the book Computational Actuarial Science, with R).

In the evening, Françoise Soulié Fogelman gave the inaugural lecture.

Computational Actuarial Science, with R

The book Computational Actuarial Science, with R is officially out. In the introduction of the book, and on the website of CRC, it is mentioned that the datasets can be found “in an R package on CRAN“, which is unfortunately incorrect. Some datasets are too large, so the package cannot be uploaded on CRAN. Fortunately, Christophe hosts the package on his website.

> install.packages("CASdatasets", repos = "http://dutangc.free.fr/pub/RRepos/")

or

> install.packages("CASdatasets", repos = "http://dutangc.free.fr/pub/RRepos/", 
type = "source")
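
Once installed, the package loads like any other; as a quick illustration (freMTPLfreq is, if I remember the naming correctly, one of the French motor third-party liability datasets shipped with the package),

> library(CASdatasets)
> data(freMTPLfreq)   # French motor claim frequency data
> str(freMTPLfreq)    # have a look at the available variables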

Here are the files:

Insurance datasets

A collection of datasets, originally assembled for the book ‘Computational Actuarial Science with R’ edited by Arthur Charpentier (CAS with R). The package now contains a large variety of actuarial datasets.

Version: 0.9-8
Published: 2014-05-21
Author: Christophe Dutang
Maintainer: Christophe Dutang <christophe.dutang at ensimag.fr>
License: GPL-2 | GPL-3 [expanded from: GPL (≥ 2)]
NeedsCompilation: no

Downloads:

Reference manual: CASdatasets.pdf
Package source: CASdatasets_0.9-8.tar.gz
Package installation: go to this page
Windows binaries: r-release: CASdatasets_0.9-8.zip
OS X Snow Leopard binaries: r-release: CASdatasets_0.9-8.tgz
OS X Mavericks binaries: r-release: CASdatasets_0.9-8.tgz
Old sources: CASdatasets archive

Bayesian Wizardry for Muggles

On Monday, I will be giving the closing talk of the R in Insurance Conference, in London, on Bayesian Computations for Actuaries, or to be more specific, Getting into Bayesian Wizardry… (with the eyes of a muggle actuary). The animated version of the slides (since we will spend some time on MCMC algorithms, I thought that animated graphs could be more informative) can be downloaded from here.

Those slides are based on the chapter written with Ben Escoto for the Computational Actuarial Science with R book, and on some previous work.

Generating functions

Today, I wanted to publish a post on generating functions, based on discussions I had with Jean-Francois over coffee after lunch, a couple of times already. The other reason is that I am publishing this post just as my students finish their Probability exam (which included a few questions on generating functions).

  • A short introduction (back to a specific exercise)

In the Probability exam, I included an exercise we had seen in class last week. The question is the following (question 16 in the form – in French). Let $F(x)=0$ for $x<0$ and $F(x)=1-e^{-x}/3$ for $x\geq 0$ be the cumulative distribution function of some random variable $X$, i.e. $F(x)=\mathbb{P}(X\leq x)$. What is the moment generating function of $X$, i.e. $M_X(t)=\mathbb{E}(e^{tX})$?

Consider some $t>0$ (we’ll see later on whether additional constraints are necessary). The tricky part of this exercise appears extremely fast, actually: how could you write $\mathbb{E}(e^{tX})$? I mean, in any probability textbook, the standard answer is

  • if $X$ is discrete, $M_X(t)=\sum_x e^{tx}\,\mathbb{P}(X=x)$,

  • if $X$ is (absolutely) continuous, $M_X(t)=\int_{-\infty}^{+\infty} e^{tx}\,f(x)\,dx$,

where $f$ is the density of $X$. Here, $X$ is clearly not a discrete variable. But is it (absolutely) continuous? My (strong) belief is that you need to plot that distribution function to see what it looks like, $x\mapsto F(x)$, for all $x$

(following recent discussions with Philippe Reka, I will try to post more hand-made graphs)

Ooops. It looks like we have a discontinuity at 0. So we have to be a bit careful here: $X$ is neither continuous nor discrete. Let us use the double projection formula,

$$\mathbb{E}(Z)=\mathbb{E}\big(\mathbb{E}(Z\mid Y)\big)$$

which can also be written, if $Y$ takes values in $\{0,1\}$,

$$\mathbb{E}(Z)=\mathbb{E}(Z\mid Y=0)\,\mathbb{P}(Y=0)+\mathbb{E}(Z\mid Y=1)\,\mathbb{P}(Y=1)$$

This is simply the idea of saying that the overall average is a barycenter of the averages per subgroup. Here, let $Z=e^{tX}$ while $Y=\boldsymbol{1}(X=0)$ (note that $\mathbb{P}(X=0)=F(0)-F(0^-)=2/3$). Thus,

$$\mathbb{E}\left(e^{tX}\right)=\mathbb{E}\left(e^{tX}\mid X=0\right)\mathbb{P}(X=0)+\mathbb{E}\left(e^{tX}\mid X>0\right)\mathbb{P}(X>0)$$

Let us consider the three different components. First,

$$\mathbb{P}(X=0)=\frac{2}{3}\quad\text{and}\quad\mathbb{P}(X>0)=\frac{1}{3}$$

and

$$\mathbb{E}\left(e^{tX}\mid X=0\right)=e^{t\cdot 0}=1$$

(since $X$ given $X=0$ is a real-valued constant, here $0$). So finally, we should compute $\mathbb{E}(e^{tX}\mid X>0)$. Observe that $X$ given $X>0$ is an (absolutely) continuous random variable, with a density. To get it, observe that for all $x>0$,

$$\mathbb{P}(X\leq x\mid X>0)=\frac{F(x)-F(0)}{1-F(0)}=1-e^{-x}$$

and $f_{X\mid X>0}(x)=e^{-x}$, i.e. $X$ given $X>0$ is an exponential distribution, $\mathcal{E}(1)$.

Hence, $X$ is a mixture between an exponential variable and a Dirac mass at $0$. This was actually the tricky part of the question, since it is not obvious when we see (only) the formula above.

From now on, it is just high-school level computations,

$$\mathbb{E}\left(e^{tX}\mid X>0\right)=\int_0^{+\infty}e^{tx}\,e^{-x}\,dx=\frac{1}{1-t}$$

if $t<1$ (for the first time, we see that the function is not defined everywhere). If we put all the expressions together,

$$M_X(t)=\frac{2}{3}+\frac{1}{3}\cdot\frac{1}{1-t}=\frac{3-2t}{3-3t},\quad t<1$$

  • Monte Carlo computations

If we are lazy (and trust me, I am extremely lazy), it is possible to use Monte Carlo simulations to compute that function,

> F=function(x) ifelse(x<0,0,1-exp(-x)/3)
> Finv=function(u) uniroot(function(x) F(x)-u,c(-1e-9,1e4))$root

or (to avoid the problem of the discontinuity)

> Finv=function(u) ifelse(3*u>1,0,uniroot(function(x)
+ F(x)-(1-u),c(-1e-9,1e4))$root)

(the root is taken on the survival side, $F(x)=1-u$, which is fine for simulation since $U$ and $1-U$ have the same distribution). Here, the inverse is simple to get in closed form, so we can speed up the code using

> Finv=function(u) ifelse(3*u>1,0,-log(3*u))

Then, we use

> rF=function(n) Vectorize(Finv)(runif(n))
> M=function(t,n=10000) mean(exp(t*rF(n)))
> Mtheo=function(t) (3-2*t)/(3-3*t)
> u=seq(-2,1,by=.1)
> v=Vectorize(M)(u)
> plot(u,v,type="b",col='blue')
> lines(u,Mtheo(u),col="red")

The problem with Monte Carlo simulations is that they should be used only when they are valid. I mean, here, I can compute

> set.seed(1)
> M(3)
[1] 5748134

A finite sum can always be computed, numerically. Even if, here, $\mathbb{E}(e^{3X})$ does not exist (or, to be more precise, is not finite). It is like the average of a Cauchy sample… I can always compute it, even if the expected value does not exist…

> set.seed(1)
> mean(rcauchy(1000000))
[1] 0.006069028

This is related to questions I tried to address a few years ago in a paper, where I wanted to test whether $\mathbb{E}(X)$ is finite (or not). Almost all the tests I know are actually based on that assumption… But this is not the point here. My point is that those generating functions are interesting, when they exist. And perhaps working with characteristic functions is a better idea.

  • Generating functions

Now, to get back to the beginning of the last course, generating functions are interesting for a lot of reasons. But first of all, let us define those functions properly.

The moment generating function $M_X(t)=\mathbb{E}(e^{tX})$ exists if it is finite on a neighbourhood of $0$ (there is an $a>0$ such that for all $t\in[-a,+a]$, $M_X(t)<\infty$). In that case, there exists some (open) interval $(a,b)\subset\overline{\mathbb{R}}$ such that for all $t\in(a,b)$, $M_X(t)<\infty$, called the convergence strip of the moment generating function.
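
For instance, if $X\sim\mathcal{E}(\lambda)$, then $M_X(t)=\lambda/(\lambda-t)$ is finite if and only if $t<\lambda$: the convergence strip is $(-\infty,\lambda)$.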

This function is said to be moment generating since, if $M_X(\cdot)$ exists (as defined in the previous paragraph), then all moments exist: for all $k\in\mathbb{N}\setminus\{0\}$, $\mathbb{E}\left(\vert X\vert^k\right)<\infty$. This is basically due to the fact that $x^k e^{-\vert t\vert x}\rightarrow 0$ as $x\rightarrow\infty$, so, for $x$ large enough, $x^k\leq e^{\vert t\vert x}$; and below that threshold, it is always possible to use a multiplicative constant,

$$\vert x\vert^k\leq K\,e^{\vert t\vert\,\vert x\vert}\text{ for all }x$$

for some $K$. Thus,

$$\mathbb{E}\left(\vert X\vert^k\right)\leq K\,\mathbb{E}\left(e^{\vert t\vert\,\vert X\vert}\right)\leq K\left[M_X(t)+M_X(-t)\right]<\infty$$

if $t$ is small enough (namely $[-t,+t]$ belongs to the convergence strip).

Now, if we use Taylor’s expansion,

$$e^{tX}=\sum_{k=0}^{\infty}\frac{t^k X^k}{k!}$$

and

$$M_X(t)=\mathbb{E}\left(e^{tX}\right)=\sum_{k=0}^{\infty}\frac{t^k}{k!}\,\mathbb{E}\left(X^k\right)$$

If we look at the value of the derivatives of that function at point 0, then

$$M_X^{(k)}(0)=\mathbb{E}\left(X^k\right)$$
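
As a quick sanity check, back on the exercise above: a central difference at $0$ of the Mtheo function defined earlier should return $\mathbb{E}(X)=\frac{2}{3}\cdot 0+\frac{1}{3}\cdot 1=1/3$ (a minimal numerical sketch),

> h=1e-5
> (Mtheo(h)-Mtheo(-h))/(2*h)   # numerical M'(0), should be close to 1/3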

As we’ve seen last week in class, it is possible to define a moment generating function in higher dimension, for some random vector $\boldsymbol{X}=(X_1,\cdots,X_d)$,

$$M_{\boldsymbol{X}}(\boldsymbol{t})=\mathbb{E}\left(e^{\boldsymbol{t}^\top\boldsymbol{X}}\right)$$

for some $\boldsymbol{t}\in\mathbb{R}^d$. It is again a moment generating function since crossed derivatives (taken at point $\boldsymbol{0}$) are cross-moments. For instance,

$$\frac{\partial^2 M_{\boldsymbol{X}}}{\partial t_1\,\partial t_2}(\boldsymbol{0})=\mathbb{E}\left(X_1 X_2\right)$$

So, moment generating functions are interesting if you want to derive the moments of a given distribution. Another interesting feature is that the moment generating function (under certain conditions) fully characterizes the distribution of the random variable, in the sense that if, for some $h>0$,
$M_X(t)=M_Y(t)$ for all $t\in(-h,+h)$, then $X\overset{\mathcal{L}}{=}Y$.

  • From moment generating functions to characteristic functions

The problem with the moment generating function is that it is defined (only) on some neighborhood of $0$, and we should be careful. The other problem is that it exists only for distributions with finite exponential moments — in particular, all moments have to be finite. Which might be a strong assumption.

Thus, an interesting idea is to consider $\mathbb{E}\left(e^{tX}\right)$ not on the real line, but on the imaginary line.

Thus, let $\phi_X(t)=\mathbb{E}\left(e^{itX}\right)$ for some $t\in\mathbb{R}$. Actually, not some, but all $t\in\mathbb{R}$, since

$$\left\vert\mathbb{E}\left(e^{itX}\right)\right\vert\leq\mathbb{E}\left(\left\vert e^{itX}\right\vert\right)=1$$

so the characteristic function always exists. Paul Lévy proved in 1925 that the characteristic function completely characterizes the distribution.
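
To echo the Cauchy example above, here is a minimal sketch of an empirical characteristic function, which stays perfectly well behaved on a sample whose moment generating function (and even whose mean) does not exist; for the standard Cauchy, the theoretical value is $\phi_X(t)=e^{-\vert t\vert}$,

> phi.emp=function(t,x) mean(exp(1i*t*x))   # always has modulus at most 1
> set.seed(1)
> x=rcauchy(1e6)
> Mod(phi.emp(1,x))   # should be close to exp(-1), i.e. 0.3679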

Now, if we look at it quickly, it looks like we did not change a lot of things here, and we should be able to write

$$\phi_X(t)=M_X(it)$$

If we want to do things properly, let us look at Gut (2005) for instance. Assume that $M_X(\cdot)$ is defined on some interval $(-a,+a)$. It is then possible to define a function $\Gamma_X(\cdot)$ (this time, no longer a real-valued function) as

$$\Gamma_X(z)=\mathbb{E}\left(e^{zX}\right)$$

which is well defined on the strip $\{z\in\mathbb{C}:\vert\operatorname{Re}(z)\vert<a\}$.
$\phi_X(\cdot)$ and $M_X(\cdot)$ are then the restrictions of that function to the imaginary line and to the real line, respectively. That function $\Gamma_X(\cdot)$ is clearly holomorphic, and thus the values it takes on such a strip are fully determined by the values it takes on the real interval $(-a,+a)$. Thus, the moment generating function will completely characterize the distribution.

But it has to be defined on some neighbourhood of $0$. Which is not trivial, actually… I mean, in nonlife insurance, we see a lot of Pareto distributions, for which the moment generating function is never finite for $t>0$.

  • Fast Fourier Transform

Recall Euler’s formula,

$$e^{ix}=\cos(x)+i\sin(x)$$

Thus, we should not be surprised to see Fourier’s transform appear. From this formula, we can write

$$\phi_X(t)=\mathbb{E}\left(\cos(tX)\right)+i\,\mathbb{E}\left(\sin(tX)\right)$$

Using some results in Fourier analysis, we can prove that the probability function satisfies (if the random variable has a Dirac mass at $x$)

$$\mathbb{P}(X=x)=\lim_{T\rightarrow\infty}\frac{1}{2T}\int_{-T}^{+T}e^{-itx}\,\phi_X(t)\,dt$$

And a similar relationship can be obtained if the distribution is absolutely continuous at point $x$,

$$f_X(x)=\frac{1}{2\pi}\int_{-\infty}^{+\infty}e^{-itx}\,\phi_X(t)\,dt$$

Actually, since we work with real-valued random variables, the detour through the complex plane was just that, a detour, and we can prove that

$$f_X(x)=\frac{1}{\pi}\int_{0}^{+\infty}\operatorname{Re}\left[e^{-itx}\,\phi_X(t)\right]dt$$

It is then possible to get the cumulative distribution function using Gil-Pelaez’s inversion formula, obtained in 1951,

$$F_X(x)=\frac{1}{2}-\frac{1}{\pi}\int_{0}^{+\infty}\frac{\operatorname{Im}\left[e^{-itx}\,\phi_X(t)\right]}{t}\,dt$$

Nice, isn’t it? It means that anyone working on financial markets knows those formulas, used to price options (see Carr & Madan (1999) for instance). And the good thing is that any mathematical or statistical software can be used to compute them.
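
To show how little code that requires, here is a minimal sketch of Gil-Pelaez’s formula in R, on a toy example where everything is known in closed form, the exponential distribution $\mathcal{E}(1)$ with $\phi_X(t)=(1-it)^{-1}$ (the choice of distribution, the function names, and the use of integrate are mine, for illustration; a finite upper bound may be needed if the numerical integration struggles),

> phi=function(t) 1/(1-1i*t)   # characteristic function of the E(1) distribution
> F.GP=function(x) .5-1/pi*integrate(function(t)
+ Im(exp(-1i*t*x)*phi(t))/t,0,Inf)$value
> F.GP(1)   # should be close to pexp(1), i.e. 0.6321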

  • Characteristic function and actuarial science

Now, what is the interest of all that in actuarial science? Characteristic functions are interesting when we deal with sums of independent random variables, since the characteristic function of the sum is simply the product of the characteristic functions. They are also interesting when dealing with compound sums [1]. Consider the problem of computing the 99.5% quantile of a compound Poisson sum of Gamma random variables, i.e.

$$S=\sum_{i=1}^{N}X_i$$

where the $X_i\sim\mathcal{G}(\alpha,\beta)$ are i.i.d. and $N\sim\mathcal{P}(\lambda)$ is independent of the $X_i$’s. The strategy is to discretize the loss amounts,

> n <- 2^20; 
> p <- diff(pgamma(0:n-.5,alpha,beta))

Then, to compute $\tilde f(s)=\mathbb{P}(S\in[s\pm 1/2])$, we use the fact that the Fourier transform of a compound Poisson distribution is $\exp(\lambda(\hat p-1))$, where $\hat p$ is the discrete Fourier transform of the discretized loss distribution $p$,

> f <- Re(fft(exp(lambda*(fft(p)-1)),inverse=TRUE))/n

To compute the 99.5% quantile (expressed on the discretization grid), we just use

> sum(cumsum(f)<.995)

That’s extremely simple, isn’t it? Want me to do it for real? Consider the following loss amounts,

> set.seed(1)
> X <- rexp(200,rate=1/100)
> print(X[1:5])
[1] 75.51818 118.16428 14.57067 13.97953 43.60686

Let us fit a Gamma distribution. We can use maximum likelihood, with the fitdistr function from the MASS package,

> library(MASS)
> fitdistr(X,"gamma")
      shape         rate    
  1.309020256   0.013090411 
 (0.117430137) (0.001419982)

or

> f <- function(x) log(x)-digamma(x)-log(mean(X))+mean(log(X))  # profiled score equation for the shape
> alpha <- uniroot(f,c(1e-8,1e8))$root
> beta <- alpha/mean(X)   # rate implied by the shape, at the MLE
> alpha
[1] 1.308995
> beta
[1] 0.01309016
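
As a quick check, we can also match the first two empirical moments (a method-of-moments sketch, using $\mathbb{E}(X)=\alpha/\beta$ and $\text{Var}(X)=\alpha/\beta^2$),

> alpha.mm <- mean(X)^2/var(X)   # shape
> beta.mm  <- mean(X)/var(X)     # rate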

Either way, we have the parameters of our Gamma distribution for individual losses. And assume that the mean of the Poisson counting variable is

> lambda <- 100

Again, it is possible to use Monte Carlo simulations, if we can easily generate a compound sum. We can use the following generic code: first, we need functions to generate the two kinds of variables of interest,

> rN.P <- function(n) rpois(n,lambda)
> rX.G <- function(n) rgamma(n,alpha,beta)

then, we can use (see here for a discussion on possible codes)

> rcpd4 <- function(n,rN=rN.P,rX=rX.G){
+ return(sapply(rN(n), function(x) sum(rX(x))))}

If we generate one million variables, we can get an estimator for the quantile,

> set.seed(1)
> quantile(rcpd4(1e6),.995)
   99.5% 
13651.64

Another idea is to remember a property of the Gamma distribution: a sum of independent Gamma variables is still Gamma (with additional assumptions on the parameters; but here we consider i.i.d. $\mathcal{G}(\alpha,\beta)$ variables, whose sum of $n$ terms is $\mathcal{G}(n\alpha,\beta)$). Thus, it is possible to compute the cumulative distribution function of the compound sum,

> F <- function(x,lambda=100,nmax=1000) {n <- 0:nmax
+ sum(pgamma(x,n*alpha,beta)*dpois(n,lambda))}

(or at least an approximation, since the sum over $n$ is truncated at nmax). If we invert that function, we get our quantile,

> uniroot(function(x) F(x)-.995,c(1e-8,1e8))$root
[1] 13654.43

which is consistent with our Monte Carlo computation. Now, we can also use the fast Fourier transform here,

> n <- 2^20; lambda <- 100
> p <- diff(pgamma(0:n-.5,alpha,beta))
> f <- Re(fft(exp(lambda*(fft(p)-1)),inverse=TRUE))/n
> sum(cumsum(f)<.995)
[1] 13654

Now, if it is simple, is it efficient? Let us compare, for instance, the computation times needed to get those three outputs,

> system.time(quantile(rcpd4(1e5),.995))
       user      system     elapsed 
      2.453       0.106       2.611 
> system.time(uniroot(function(x) F(x)-.995,c(1e-8,1e8))$root)
       user      system     elapsed
      0.041       0.012       0.361 
> system.time(sum(cumsum(Re(fft(exp(lambda*(fft(p)-1)),inverse=TRUE))/n)<.995))
       user      system     elapsed
      0.527       0.020       0.560

Computations here are comparable with the (numerical) inversion of the cumulative distribution function. Except that here, we were lucky: if the distribution is not Gamma but lognormal, the second algorithm can no longer be used.
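
To make that point concrete, here is a sketch of the same FFT computation with lognormal losses (the meanlog and sdlog values are arbitrary, chosen only for illustration): the mixture-of-Gammas shortcut disappears, but the FFT code is unchanged except for the discretization step,

> p <- diff(plnorm(0:n-.5, meanlog=4, sdlog=1))   # discretized lognormal severity
> f <- Re(fft(exp(lambda*(fft(p)-1)),inverse=TRUE))/n
> sum(cumsum(f)<.995)   # 99.5% quantile, on the discretization grid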

[1] This numerical example is taken from the first chapter of Computational Actuarial Science with R, to appear in a few months.

Compound Poisson and vectorized computations

Yesterday, I was asked how to write a code to generate compound Poisson variables, i.e. a series of random variables $S=X_1+\cdots+X_N$ where $N$ is a counting random variable (here Poisson distributed) and where the $X_i$’s are i.i.d. (and independent of $N$), with the convention $S=0$ when $N=0$. I came up with the following algorithm, but I was wondering if it was possible to get a better one…

>  rcpd=function(n,rN,rX){
+  N=rN(n)                 # number of terms in each of the n sums
+  X=rX(sum(N))            # all individual terms, generated at once
+  I=as.factor(rep(1:n,N)) # group index: which sum each term belongs to
+  S=tapply(X,I,sum)       # sum within each group
+  V=as.numeric(S[as.character(1:n)])
+  V[is.na(V)]=0           # convention: the sum is 0 when N=0
+  return(V)}

Here, consider – to illustrate – the case where $N\sim\mathcal{P}(5)$ and $X_i\sim\mathcal{E}(2)$,

>  rN.P=function(n) rpois(n,5)
>  rX.E=function(n) rexp(n,2)

We can generate a sample

>  S=rcpd(1000,rN=rN.P,rX=rX.E)

and check (using simulations) that $\mathbb{E}(S)=\mathbb{E}(N)\,\mathbb{E}(X)$,

> mean(S)
[1] 2.547033
> mean(rN.P(1000))*mean(rX.E(1000))
[1] 2.548309

and that $\text{Var}(S)=\mathbb{E}(N)\,\text{Var}(X)+\mathbb{E}(X)^2\,\text{Var}(N)$,

> var(S)
[1] 2.60393
> mean(rN.P(1000))*var(rX.E(1000))+
+ mean(rX.E(1000))^2*var(rN.P(1000))
[1] 2.621376

If anyone might think of a faster algorithm, I’d be glad to hear about it…
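
For what it is worth, here is one possible alternative based on cumulative sums, which avoids the factor manipulation of tapply; I have not benchmarked it carefully, so consider it a suggestion rather than a proven faster algorithm,

> rcpd.cs=function(n,rN,rX){
+  N=rN(n)
+  X=rX(sum(N))
+  cs=cumsum(c(0,X))        # cs[k+1] = sum of the first k terms
+  e=cumsum(N)              # index of the last term of each group
+  s=c(0,e[-n])             # index of the last term of the previous group
+  return(cs[e+1]-cs[s+1])} # group sums (0 when N=0, automatically)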

Convolution(s) et lois composées, aspects numériques

We saw in class that the basic tool to work with sums of random variables is the convolution,

$$(f\ast g)(t)\stackrel{\text{def}}{=}\int_{-\infty}^{\infty}f(\tau)\,g(t-\tau)\,d\tau=\int_{-\infty}^{\infty}f(t-\tau)\,g(\tau)\,d\tau$$

which gives the density or the cumulative distribution function of the sum of two independent variables.

In theory, this tool yields the distribution of a compound sum, i.e. a sum with a random number of terms: if $S=X_1+\cdots+X_N$ with i.i.d. $X_i$’s independent of $N$, then $F_S(x)=\sum_{n=0}^{\infty}\mathbb{P}(N=n)\,F_X^{\ast n}(x)$, where $F_X^{\ast n}$ denotes the $n$-fold convolution of $F_X$.

For numerical computation of convolutions, Vincent Goulet has implemented quite a few things in the actuar package (loaded below). Numerically, the claim size distribution has to be discretized. If we assume, for instance, that individual losses are discrete, we can obtain the aggregate distribution simply with

> library(actuar)
> fx <- c(0, 0.15, 0.2, 0.25, 0.125, 0.075,
+         0.05, 0.05, 0.05, 0.025, 0.025)   # discretized claim size distribution
> pn <- c(0.05, 0.1, 0.15, 0.2, 0.25, 0.15, 0.06, 0.03, 0.01)  # claim count distribution
> Fs <- aggregateDist("convolution", model.freq = pn,
+                     model.sev = fx, x.scale = 25)
> summary(Fs)
Aggregate Claim Amount Empirical CDF:
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
    0.0   150.0   275.0   314.5   400.0  2000.0 
> c(Fs(0), diff(Fs(25 * 0:21))) # probability mass function
 [1] 0.05000000 0.01500000 0.02337500 0.03467500 0.03257656 0.03578639
 [7] 0.03980787 0.04356232 0.04751800 0.04903380 0.05189806 0.05137886
[13] 0.05118691 0.05030486 0.04818189 0.04575882 0.04280890 0.03937836
[19] 0.03574568 0.03196808 0.02832446 0.02478833
> plot(Fs)

We can also use recursive computation methods (Panjer’s algorithm),

> Fs <- aggregateDist("recursive", model.freq = "poisson",
+                     model.sev = fx, lambda = 3, x.scale = 25)
> plot(Fs)

which gives the following graph.

Normal power approximation methods are also implemented, as well as simulation techniques.

> Fs <- aggregateDist("recursive", model.freq = "poisson",
+ model.sev = fx, lambda = 3, x.scale = 25)
> plot(Fs)

Finally, to connect with the homework assignment I will hand out on Wednesday, Vincent Goulet has also implemented a quantile estimation method, here illustrated with a Poisson frequency and Gamma losses,

> model.freq <- expression(data = rpois(3))
> model.sev <- expression(data = rgamma(100, 2))
> Fs <- aggregateDist("simulation", nb.simul = 1000,
+                     model.freq, model.sev)
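
If I read the actuar documentation correctly, the object returned by aggregateDist has a quantile method, so the quantiles needed for the homework can then be read off directly,

> quantile(Fs, 0.995)   # estimated 99.5% quantile of the aggregate distribution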