# Reinterpreting Lee-Carter Mortality Model

Last week, while I was giving my crash course on R for insurance, we discussed possible extensions of the Lee & Carter (1992) model. If we look at the seminal paper, the model is defined as follows: the log of the central death rate at age $x$ in year $t$ is written

$\ln \mu_{x,t} = \alpha_x + \beta_x \kappa_t + \varepsilon_{x,t}$

with the usual identification constraints $\sum_x\beta_x=1$ and $\sum_t\kappa_t=0$.
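As a reminder of how the $(\alpha_x,\beta_x,\kappa_t)$ components are usually estimated, here is a minimal sketch of the classical SVD-based fit, where logmu, a matrix of log central death rates (ages in rows, years in columns), is simulated here purely for illustration since no dataset is attached to this section,

> ages=0:90; years=1950:2010
> logmu=outer(-9+.09*ages,rep(1,length(years)))+        # hypothetical age profile
+ outer(.01+.0005*ages,-(years-1950)/60)+               # hypothetical improvement trend
+ matrix(rnorm(length(ages)*length(years),0,.02),length(ages))
> alpha=rowMeans(logmu)                   # alpha_x: average log-mortality at age x
> Z=sweep(logmu,1,alpha)                  # remove the age profile
> SVD=svd(Z)
> beta=SVD$u[,1]/sum(SVD$u[,1])           # normalized so that sum(beta)=1
> kappa=SVD$d[1]*sum(SVD$u[,1])*SVD$v[,1] # time index, sums (numerically) to 0
> plot(years,kappa,type="l")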

# Binomial regression model

Most of the time, when we introduce binomial models, such as the logistic or probit models, we discuss only Bernoulli variables, $Y_i\sim\mathcal{B}(p(\boldsymbol{X_i}))$. This year (as well as the year before), I discussed extensions to multinomial regressions $Y_i\sim\mathcal{M}(\boldsymbol{p}(\boldsymbol{X_i}))$ where $\boldsymbol{p}(\cdot)$ is a function with values in some simplex. The multinomial logistic model was mentioned here. The idea is to consider, for instance with three possible classes,

$\boldsymbol{p}(\boldsymbol{X_i})=(p_A(\boldsymbol{X_i}),p_B(\boldsymbol{X_i}),p_C(\boldsymbol{X_i}))\in\mathcal{S}_2$

the following model

$p_A(\boldsymbol{X_i})=\frac{\exp[\boldsymbol{X}_i'\boldsymbol{\alpha}]}{1+\exp[\boldsymbol{X}_i'\boldsymbol{\alpha}]+\exp[\boldsymbol{X}_i'\boldsymbol{\beta}]}$

$p_B(\boldsymbol{X_i})=\frac{\exp[\boldsymbol{X}_i'\boldsymbol{\beta}]}{1+\exp[\boldsymbol{X}_i'\boldsymbol{\alpha}]+\exp[\boldsymbol{X}_i'\boldsymbol{\beta}]}$

and

$p_C(\boldsymbol{X_i})=\frac{1}{1+\exp[\boldsymbol{X}_i'\boldsymbol{\alpha}]+\exp[\boldsymbol{X}_i'\boldsymbol{\beta}]}$
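In R, such a model can be fitted for instance with the multinom function from the nnet package; here is a minimal sketch on simulated data (the covariate, the sample size and the class probabilities are arbitrary choices of mine), where class C is taken as the reference level,

> library(nnet)
> set.seed(1)
> n=1000
> x=rnorm(n)
> y=factor(sample(c("A","B","C"),size=n,replace=TRUE,prob=c(.3,.3,.4)),
+ levels=c("C","A","B"))       # class C first, so it is the reference level
> fit=multinom(y~x)            # one set of coefficients per non-reference class
> predict(fit,newdata=data.frame(x=0),type="probs")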

# Exposure with binomial responses

Last week, we have seen how to take the exposure into account to compute nonparametric estimators of several quantities (empirical means and empirical variances). Let us now see what can be done if we want to model a binomial response. The model here is the following:

• the number of claims $N_i$ on the period $[0,1]$ is unobserved
• the number of claims $Y_i$ on $[0,E_i]$ is observed (as well as $E_i$)

which can be visualized below

Consider the case where the variable of interest is not the number of claims, but simply the indicator of the occurrence of a claim. Then we wish to model the event $\{N=0\}$ versus $\{N>0\}$, interpreted as non-occurrence versus occurrence. But we can only observe $\{Y=0\}$ versus $\{Y>0\}$, and having an inclusion between those events is not enough to derive a model. Actually, with a Poisson process model, we easily get that

$\mathbb{P}(Y=0) = \mathbb{P}(N=0)^E$

In words, it means that the probability of not having a claim in the first six months of the year is the square root of the probability of not having a claim over the whole year. Which makes sense. Assume now that the probability of not having a claim can be explained by some covariates, denoted $\boldsymbol{X}$, through some link function (using the GLM terminology),

$\mathbb{P}(N=0|\boldsymbol{X})=h(\boldsymbol{X}^{\text{\sffamily T}}\boldsymbol{\beta})$

Now, since we do observe $Y$ – and not $N$ – we have

$\mathbb{P}(Y=0|\boldsymbol{X},E)=h(\boldsymbol{X}^{\text{\sffamily T}}\boldsymbol{\beta})^E$
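The power relationship is easy to check under a Poisson assumption for the claim count: if $N\sim\mathcal{P}(\lambda)$ on $[0,1]$, then $Y\sim\mathcal{P}(\lambda E)$ on $[0,E]$, and $\mathbb{P}(Y=0)=e^{-\lambda E}=\mathbb{P}(N=0)^E$. A two-line numerical check, with arbitrary values of the intensity and of the exposure (of my own choosing),

> lambda=.3; expo=.5
> c(dpois(0,lambda*expo),dpois(0,lambda)^expo)
[1] 0.860708 0.860708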

The dataset we will use is always the same

> sinistre=read.table("http://freakonometrics.free.fr/sinistreACT2040.txt",
> sinistres=sinistre[sinistre$garantie=="1RC",]
> sinistres=sinistres[sinistres$cout>0,]
> T=table(sinistres$nocontrat)
> T1=as.numeric(names(T))
> T2=as.numeric(T)
> nombre1 = data.frame(nocontrat=T1,nbre=T2)
> I = contrat$nocontrat%in%T1
> T1= contrat$nocontrat[I==FALSE]
> nombre2 = data.frame(nocontrat=T1,nbre=0)
> nombre=rbind(nombre1,nombre2)
> sinistres = merge(contrat,nombre)
> sinistres$nonsin = (sinistres$nbre==0)

The first model we can consider is based on the standard logistic approach, i.e.

$\mathbb{P}(Y=0|\boldsymbol{X},E)=\left(\frac{\exp(\boldsymbol{X}^{\text{\sffamily T}}\boldsymbol{\beta})}{1+\exp(\boldsymbol{X}^{\text{\sffamily T}}\boldsymbol{\beta})}\right)^E$

That’s nice, but difficult to handle with standard functions. Nevertheless, it is always possible to compute numerically the maximum likelihood estimator of $\boldsymbol{\beta}$ given $(Y_i,\boldsymbol{X}_i,E_i)$.

> Y=sinistres$nonsin
> X=cbind(1,sinistres$ageconducteur)
> E=sinistres$exposition
> logL = function(beta){
+ 	pi=(exp(X%*%beta)/(1+exp(X%*%beta)))^E
+ 	-sum(log(dbinom(Y,size=1,prob=pi)))
+ }
> optim(fn=logL,par=c(-0.0001,-.001),
+ method="BFGS")
$par
[1] 2.14420560 0.01040707

$value
[1] 7604.073

$counts
function gradient 
      42       10 

$convergence
[1] 0

$message
NULL

> parametres=optim(fn=logL,par=c(-0.0001,-.001),
+ method="BFGS")$par

Now, let us look at alternatives based on standard regression models, for instance a binomial-log model. Because the exposure appears as a power of the annual probability, everything would be fine if $h$ were the exponential function (or if $h^{-1}$ were the log link function), since

$\mathbb{P}(Y=0|\boldsymbol{X},E)=\exp(E+\boldsymbol{X}^{\text{\sffamily T}}\boldsymbol{\beta})$

Now, if we try to code it, things quickly become problematic,

> reg=glm(nonsin~ageconducteur+offset(exposition),
+ data=sinistresI,family=binomial(link="log"))
Error: no valid set of coefficients has been found: please supply starting values

I tried (almost) everything I could, but I could not get rid of that error message,

> startglm=c(0,-.001)
> names(startglm)=c("(Intercept)","ageconducteur")
> etaglm=rep(-.01,nrow(sinistresI))
> etaglm[sinistresI$nonsin==0]=-10
> muglm=exp(etaglm)
> reg=glm(nonsin~ageconducteur+offset(exposition),
+ data=sinistresI,family=binomial(link="log"),
+ control = glm.control(epsilon=1e-5,trace=TRUE,maxit=50),
+ start=startglm,
+ etastart=etaglm,mustart=muglm)
Deviance = NaN Iterations - 1
Error: no valid set of coefficients has been found: please supply starting values

So I decided to give up. Almost. Actually, the problem comes from the fact that $\mathbb{P}(Y=0)$ is close to 1. I guess everything would be nicer if we could work with a probability close to 0. Which is possible, since

$\mathbb{P}(Y>0)=1-\mathbb{P}(Y=0) = 1-[1-\mathbb{P}(N>0)]^E$

where $\mathbb{P}(N>0)$ is close to 0. So we can use Taylor’s expansion,

$\mathbb{P}(Y>0)\sim 1-[1-E\cdot \mathbb{P}(N>0)]=E\cdot \mathbb{P}(N>0)$

Here, the exposure no longer appears as a power of the probability, but multiplicatively. Of course, there are higher order terms. But let us forget them (so far). If – one more time – we consider a log link function, then we can incorporate the exposure, or to be more specific, the logarithm of the exposure,

> regopp=glm((1-nonsin)~ageconducteur+offset(log(exposition)),
+ data=sinistresI,family=binomial(link="log"))

which now works perfectly. Now, to see a final model, perhaps we should get back to our Poisson regression model, since we do have a model for the probabilities $\mathbb{P}(Y=\cdot)$.

> regpois=glm(nbre~ageconducteur+offset(log(exposition)),
+ data=sinistres,family=poisson(link="log"))

We can now compare those three models. Perhaps we should also include the prediction without any explanatory variable. For the second model (which actually does run without any explanatory variable), we run

> regreff=glm((1-nonsin)~1+offset(log(exposition)),
+ data=sinistres,family=binomial(link="log"))

so that the prediction is here

> exp(coefficients(regreff))
(Intercept) 
 0.06776376 

This value is comparable with the logistic regression,

> logL2 = function(beta){
+ pi=(exp(beta)/(1+exp(beta)))^E
+ -sum(log(dbinom(Y,size=1,prob=pi)))}
> param=optim(fn=logL2,par=.01,method="BFGS")$par
> 1-exp(param)/(1+exp(param))
[1] 0.06747777

But it is quite different from the Poisson model,

> exp(coefficients(glm(nbre~1+offset(log(exposition)),
+ data=sinistres,family=poisson(link="log"))))
(Intercept) 
 0.07279295 

Let us produce a graph, to compare those models,

> age=18:100
> yml1=exp(parametres[1]+parametres[2]*age)/(1+exp(parametres[1]+parametres[2]*age))
> plot(age,1-yml1,type="l",col="purple")
> yp=predict(regpois,newdata=data.frame(ageconducteur=age,
+ exposition=1),type="response")
> yp1=1-exp(-yp)
> ydl=predict(regopp,newdata=data.frame(ageconducteur=age,
+ exposition=1),type="response")
> plot(age,ydl,type="l",col="red")
> lines(age,yp1,type="l",col="blue")
> lines(age,1-yml1,type="l",col="purple")
> abline(h=exp(coefficients(regreff)),lty=2)

Observe here that the three models are quite different. Actually, with those two regression models, it is possible to run more complex regressions, e.g. with splines, to visualize the impact of the age on the probability of having – or not – a car accident. If we compare the Poisson regression (still in red) and the log-binomial model, with Taylor’s expansion, we get
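The spline versions can be obtained, for instance, with bs() from the splines package (a sketch of mine, since the original graphs do not show which smoother was used; as discussed above, the log-binomial fit might require starting values),

> library(splines)
> regpois.s=glm(nbre~bs(ageconducteur)+offset(log(exposition)),
+ data=sinistres,family=poisson(link="log"))
> regopp.s=glm((1-nonsin)~bs(ageconducteur)+offset(log(exposition)),
+ data=sinistresI,family=binomial(link="log"))
> yp.s=1-exp(-predict(regpois.s,newdata=data.frame(ageconducteur=age,
+ exposition=1),type="response"))
> ydl.s=predict(regopp.s,newdata=data.frame(ageconducteur=age,
+ exposition=1),type="response")
> plot(age,yp.s,type="l",col="red",xlab="",ylab="")
> lines(age,ydl.s,col="blue")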

The next step is to see how to incorporate the exposure in a tree. But that’s another story…

# Exchangeability, credit risk and risk measures

Exchangeability is an extremely useful concept, since (most of the time) analytical expressions can be derived. But it can also be used to observe some unexpected behaviors, that we will discuss later on in a more general setting. For instance, in an old post, I discussed connections between correlation and risk measures (using simulations to illustrate, but in the context of exchangeable risks, calculations can actually be performed more accurately). Consider again the standard credit risk problem, where the quantity of interest is the number of defaults in a portfolio. Consider an homogeneous portfolio of exchangeable risks $X_1,\dots,X_n$ (default indicators). The quantity of interest is here the distribution of the sum $S=X_1+\cdots+X_n$,

or perhaps the quantile function of the sum (since the Value-at-Risk is the standard risk measure). We have seen yesterday that, given the latent factor $\Theta$, the default indicators are conditionally independent Bernoulli variables, $X_i|\Theta\sim\mathcal{B}(\Theta)$ (either the company defaults, or not), so that

$S|\Theta\sim\mathcal{B}(n,\Theta)$

i.e. we can derive the (unconditional) distribution of the sum,

$\mathbb{P}(S=s)=\binom{n}{s}\int_0^1\theta^s(1-\theta)^{n-s}\,dF(\theta)$

where $F$ denotes the distribution of $\Theta$, so that the probability function of the sum is, assuming that $\Theta\sim\mathcal{B}eta(a,b)$,

$\mathbb{P}(S=s)=\binom{n}{s}\int_0^1\theta^s(1-\theta)^{n-s}\,\frac{\theta^{a-1}(1-\theta)^{b-1}}{B(a,b)}\,d\theta$

Thus, the following code can be used to calculate the quantile function

> proba=function(s,a,m,n){
+ b=a/m-a
+ choose(n,s)*integrate(function(t){t^s*(1-t)^(n-s)*
+ dbeta(t,a,b)},lower=0,upper=1,subdivisions=1000,
+ stop.on.error =  FALSE)$value
+ }
> QUANTILE=function(p=.99,a=2,m=.1,n=500){
+ V=rep(NA,n+1)
+ for(i in 0:n){
+ V[i+1]=proba(i,a,m,n)}
+ V=V/sum(V)
+ return(min(which(cumsum(V)>p))) }

Now observe that, since the variables are exchangeable, it is possible to calculate explicitly the correlation between defaults. Here

$\mathbb{E}[X_iX_j]=\mathbb{E}\big[\mathbb{E}[X_iX_j|\Theta]\big]=\mathbb{E}[\Theta^2]$

i.e.

$\text{Cov}(X_i,X_j)=\mathbb{E}[\Theta^2]-\big(\mathbb{E}[\Theta]\big)^2=\text{Var}(\Theta)$

Thus, the correlation between two default indicators is then

$\text{corr}(X_i,X_j)=\frac{\text{Var}(\Theta)}{\mathbb{E}[\Theta]\,(1-\mathbb{E}[\Theta])}$

Under the assumption that the latent factor is Beta distributed, $\Theta\sim\mathcal{B}eta(a,b)$, we get

$\text{corr}(X_i,X_j)=\frac{1}{a+b+1}$

Thus, as a function of the parameter $a$ of the Beta distribution (we consider Beta distributions with the same mean $m$, i.e. the same marginal distributions, so we have only one parameter left, which drives the correlation of the default indicators, here $m/(m+a)$), it is possible to plot the quantile function,

> PICTURE=function(P){
+ A=seq(.01,2,by=.01)
+ VQ=matrix(NA,length(A),5)
+ for(i in 1:length(A)){
+ VQ[i,1]=QUANTILE(a=A[i],p=.9,m=P)
+ VQ[i,2]=QUANTILE(a=A[i],p=.95,m=P)
+ VQ[i,3]=QUANTILE(a=A[i],p=.975,m=P)
+ VQ[i,4]=QUANTILE(a=A[i],p=.99,m=P)
+ VQ[i,5]=QUANTILE(a=A[i],p=.995,m=P)
+ }
+ plot(A,VQ[,5],type="s",col="red",ylim=
+ c(0,max(VQ)),xlab="",ylab="")
+ lines(A,VQ[,4],type="s",col="blue")
+ lines(A,VQ[,3],type="s",col="black")
+ lines(A,VQ[,2],type="s",col="blue",lty=2)
+ lines(A,VQ[,1],type="s",col="red",lty=2)
+ lines(A,rep(500*P,length(A)),col="grey")
+ legend(3,max(VQ),c("quantile 99.5%","quantile 99%",
+ "quantile 97.5%","quantile 95%","quantile 90%","mean"),
+ col=c("red","blue","black",
+ "blue","red","grey"),
+ lty=c(1,1,1,2,2,1),border=n)
+ }

e.g. with a (marginal) default probability of 15%,

> PICTURE(.15)

On this graph, we observe that the stronger the correlation (the more on the left), the higher the quantile, which is quite intuitive, somehow. Note that the same graph can be plotted with the correlation on the X-axis. But if the marginal probability of default decreases, increasing the correlation might decrease the risk (i.e. the quantile function),

> PICTURE(.05)

(with the modified code to visualize the quantile as a function of the underlying default correlation) or even worse,

> PICTURE(.0075)

And this becomes all the more counterintuitive as the default probability decreases! So in the case of a portfolio of not-very-risky bond issuers (with high ratings), assuming a very strong correlation will lower the risk-based capital!

# de Finetti’s theorem and exchangeability

This week, we will start to work on multivariate models, and non-independence. The first idea to discuss non-independence will be to use the concept of exchangeability. A sequence of random variables $(X_1,X_2,\dots)$ is said to be exchangeable if, for all $n$,

$(X_1,\dots,X_n)\overset{\mathcal{L}}{=}(X_{\sigma(1)},\dots,X_{\sigma(n)})$

for any permutation $\sigma$ of $\{1,\dots,n\}$. A standard example is the Gaussian case, where $X_i\sim\mathcal{N}(0,1)$ and $\text{corr}(X_i,X_j)=\rho$ for $i\neq j$. Since

$\text{Var}(X_1+\cdots+X_n)=n\big[1+(n-1)\rho\big]\geq 0$

a necessary condition is that

$\rho\geq-\frac{1}{n-1}$

Since this inequality should hold for all $n$, it comes that necessarily $\rho\geq 0$.

de Finetti (1931): Let $(X_1,X_2,\dots)$ be an infinite sequence of random variables with values in $\{0,1\}$. The sequence is exchangeable if and only if there exists a distribution function $F$ on $[0,1]$ such that

$\mathbb{P}(X_1=x_1,\dots,X_n=x_n)=\int_0^1\theta^{s_n}(1-\theta)^{n-s_n}\,dF(\theta)$

where $s_n=x_1+\cdots+x_n$. Note that $F$ is the distribution function of the random variable

$\Theta=\lim_{n\to\infty}\frac{X_1+\cdots+X_n}{n}$

A nice proof of that result can be found in Heath & Sudderth (1995) – see also Schervish (1995), Chow & Teicher (1997) or Durrett (2010), and probably in several bayesian books, because that result has a strong interpretation in bayesian inference (as far as I understood, see e.g. Jaynes (1982)).

From the exchangeability condition, for any permutation $\sigma$ of $\{1,\dots,k\}$,

$\mathbb{P}(X_1=x_1,\dots,X_k=x_k)=\mathbb{P}(X_{\sigma(1)}=x_1,\dots,X_{\sigma(k)}=x_k)$

that can be inverted in

$\mathbb{P}(X_1=x_1,\dots,X_k=x_k)=\mathbb{P}(X_1=x_{\sigma(1)},\dots,X_k=x_{\sigma(k)})$

The idea is then to extend the size of the vector, i.e.
for all $N\geq k$, define $S_N=X_1+\cdots+X_N$, so that, if we condition on $S_N$,

$\mathbb{P}(X_1=x_1,\dots,X_k=x_k)=\sum_{s}\mathbb{P}(X_1=x_1,\dots,X_k=x_k|S_N=s)\,\mathbb{P}(S_N=s)$

but since, given the sum of the components of $(X_1,\dots,X_N)$, all possible rearrangements of the ones among the $N$ elements are equally likely, we can write

$\mathbb{P}(X_1=x_1,\dots,X_k=x_k|S_N=s)=\frac{\binom{N-k}{s-s_k}}{\binom{N}{s}}$

where $s_k=x_1+\cdots+x_k$. The first idea is to work on that conditional probability, and to invoke a theorem of approximation of the hypergeometric distribution by the binomial distribution, when $N$ becomes large. Then

$\frac{\binom{N-k}{s-s_k}}{\binom{N}{s}}\approx\left(\frac{s}{N}\right)^{s_k}\left(1-\frac{s}{N}\right)^{k-s_k}$

Let $\Theta_N=S_N/N$ and let $F_N$ denote the cumulative distribution function of $\Theta_N$. The idea is then to write the sum as an integral, with respect to that distribution,

$\mathbb{P}(X_1=x_1,\dots,X_k=x_k)\approx\int_0^1\theta^{s_k}(1-\theta)^{k-s_k}\,dF_N(\theta)$

The theorem is then obtained since $F_N\to F$, i.e.

$\mathbb{P}(X_1=x_1,\dots,X_k=x_k)=\int_0^1\theta^{s_k}(1-\theta)^{k-s_k}\,dF(\theta)$

In the case of non-binary sequences, there is an extension of the previous result.

Hewitt & Savage (1955): Let $(X_1,X_2,\dots)$ be a sequence of random variables with values in $\mathbb{R}$. The sequence is exchangeable if and only if there exists a probability measure $\pi$ on the set of probability measures on $\mathbb{R}$ such that

$\mathbb{P}(X_1\in A_1,\dots,X_n\in A_n)=\int m(A_1)\cdots m(A_n)\,\pi(dm)$

where $\pi$ is the distribution of the limit of the empirical measures,

$m=\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^n\delta_{X_i}$

For instance, in the Gaussian case mentioned earlier, conditionally on $\Theta$, the $X_i$’s are conditionally independent, with distribution $\mathcal{N}(\Theta,1-\rho)$, where $\Theta\sim\mathcal{N}(0,\rho)$. The proof can be found in Kingman (1978) and is based on martingale arguments. Note that, in the Gaussian case,

$X_i=\Theta+\varepsilon_i$

where the $\varepsilon_i$’s are i.i.d. random variables, independent of $\Theta$. To go further on exchangeability and related topics, see Aldous (1985) (see also here).

This construction can be used in credit risk, to model defaults in an homogeneous portfolio, see e.g. Frey (2001). Assume a Beta distribution for the latent factor; we can then derive the probability distribution of the sum $S=X_1+\cdots+X_n$. Since, given the latent factor $\Theta$, the default indicators are independent $\mathcal{B}(\Theta)$ variables (either the company defaults, or not), we have

$S|\Theta\sim\mathcal{B}(n,\Theta)$

Thus, we can derive the (unconditional) distribution of the sum,

$\mathbb{P}(S=s)=\binom{n}{s}\int_0^1\theta^s(1-\theta)^{n-s}\,\frac{\theta^{a-1}(1-\theta)^{b-1}}{B(a,b)}\,d\theta$

i.e.

> proba=function(s,a,m,n){
+ b=a/m-a
+ choose(n,s)*integrate(function(t){t^s*(1-t)^(n-s)*
+ dbeta(t,a,b)},lower=0,upper=1)$value
+ }

Based on that function, it is possible to plot the probability distribution of the sum over $\{0,1,\dots,n\}$. In the upper corner is plotted the density of the Beta distribution.

> a=2
> m=.2
> n=10
> V=rep(NA,n+1)
> for(i in 0:n){
+ V[i+1]=proba(i,a,m,n)}
> barplot(V,names.arg=0:10)
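As a sanity check (my own addition, not in the original post), the same distribution can be obtained by simulating from the latent-factor representation, drawing $\Theta$ from the Beta distribution, and then the $n$ conditionally independent default indicators,

> B=1e5
> b=a/m-a
> theta=rbeta(B,a,b)              # one latent factor per simulated portfolio
> S=rbinom(B,size=n,prob=theta)   # sum of the n exchangeable indicators
> round(cbind(analytic=V,simulated=as.numeric(table(factor(S,levels=0:n))/B)),3)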

Those two theorems are extremely close,

De Finetti’s theorem: a random (infinite) sequence $(X_i)$ of $\{0,1\}$-valued random variables is exchangeable if and only if the $X_i$’s are conditionally independent, conditionally on some random variable $\Theta$.

Hewitt-Savage’s theorem: a random (infinite) sequence $(X_i)$ is exchangeable if and only if the $X_i$’s are conditionally independent, conditionally on some sigma-algebra.

Olshen (1974) proposed an interesting discussion about those theorems; see also the corresponding entry in the Encyclopedia of Statistical Sciences.

The subtle difference between those two theorems is also discussed in Freedman (1965).
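To make the conditional-independence representation concrete in the Gaussian case, here is a small sketch (my own illustration): given $\Theta$, the $X_i$’s are i.i.d., and unconditionally the sequence is exchangeable, with constant correlation $\tau^2/(\tau^2+\sigma^2)$,

> set.seed(1)
> k=5; B=1e5
> tau=1; sigma=2
> Theta=rnorm(B,0,tau)
> X=Theta+matrix(rnorm(B*k,0,sigma),B,k)   # X_i = Theta + eps_i, i.i.d. given Theta
> round(cor(X),2)                          # off-diagonal terms close to 1/5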

# Will I ever be a bayesian statistician ? (part 1)

Last week, during the workshop on Statistical Methods for Meteorology and Climate Change (here), I discovered how powerful bayesian techniques could be, and that there were more and more bayesian statisticians. So, if I want to fully understand applied statisticians in conferences and workshops, I really have to learn the basics of bayesian statistics. I have published some time ago some posts on bayesian statistics applied to actuarial problems (here or there), but so far, I always thought that bayesian was a synonym for magician. To be honest, I am a Muggle, and I have not been trained as a bayesian. But I can be an opportunist…

So I decided to publish some posts on bayesian techniques, in order to prove that it is actually not that difficult to implement.

As far as I understand it, in bayesian statistics, the parameter is considered as a random variable (which is also the case in classical mathematical statistics). But here, we assume that this parameter does have a distribution of its own….
Consider a classical statistical problem: assume we have an i.i.d. sample $X_1,\dots,X_n$ with distribution $\mathcal{N}(\theta,\sigma^2)$, where $\sigma^2$ is assumed to be known. Here we write

$X_i|\theta\sim\mathcal{N}(\theta,\sigma^2)$

since the parameter $\theta$ is a random variable. The idea is to assume that $\theta$ has a (so-called a priori) distribution, e.g.

$\theta\sim\mathcal{N}(\mu,\tau^2)$

So far it was simple. The idea is then to consider the posterior distribution of $\theta$, given the observations $\boldsymbol{X}=(X_1,\dots,X_n)$. Thus, we need to compute the distribution of $\theta|\boldsymbol{X}$, which is here extremely simple (due to properties of the Gaussian distribution), i.e.

$\theta|\boldsymbol{X}\sim\mathcal{N}(\widetilde{\mu},\widetilde{\tau}^2)$

where

$\widetilde{\mu}=\frac{\tau^{-2}\mu+n\sigma^{-2}\overline{X}_n}{\tau^{-2}+n\sigma^{-2}}\qquad\text{and}\qquad\widetilde{\tau}^2=\left(\tau^{-2}+n\sigma^{-2}\right)^{-1}$

And then, it becomes extremely natural to consider the posterior mean $\widetilde{\mu}$ as an estimator of $\theta$ given our sample data (and thus, we also have a confidence interval, since we know the distribution of $\theta$ given the observations $\boldsymbol{X}$).
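As a quick numerical illustration (the values of $\mu$, $\tau$, $\sigma$ and the simulated sample below are arbitrary choices of mine), the posterior mean and standard deviation can be computed directly,

set.seed(1)
n=20; sigma=2
X=rnorm(n,mean=1,sd=sigma)                  # observed sample (true theta is 1)
mu=0; tau=10                                # prior: theta ~ N(mu,tau^2)
tau2post=1/(1/tau^2+n/sigma^2)              # posterior variance
mupost=tau2post*(mu/tau^2+sum(X)/sigma^2)   # posterior mean
c(mupost,sqrt(tau2post))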
In order to be sure that we understood, consider now a heads and tails problem, i.e. $X_i\sim\mathcal{B}(\theta)$ (Bernoulli variables). Note, first, that $\theta$ has support $[0,1]$. So we need a prior distribution on that support. Why not a Beta distribution? E.g. $\theta\sim\mathcal{B}eta(a,b)$.

Thus, the prior density is

$\pi(\theta)=\frac{\theta^{a-1}(1-\theta)^{b-1}}{B(a,b)}\mathbf{1}_{[0,1]}(\theta)$

and the likelihood of the sample is

$\mathbb{P}(X_1=x_1,\dots,X_n=x_n|\theta)=\theta^{y}(1-\theta)^{n-y}\quad\text{where }y=x_1+\cdots+x_n$

From Bayes formula,

$\pi(\theta|\boldsymbol{X})\propto\mathbb{P}(\boldsymbol{X}|\theta)\,\pi(\theta)$

and we get easily

$\pi(\theta|\boldsymbol{X})\propto\theta^{a+y-1}(1-\theta)^{b+n-y-1}$

which is the density of a Beta distribution, i.e. $\theta|\boldsymbol{X}\sim\mathcal{B}eta(a+y,b+n-y)$, so that prior and posterior densities can be compared with

prior=dbeta(u,a,b)
posterior=dbeta(u,a+y,n-y+b)

The estimator proposed is then the expected value of that conditional (posterior) distribution,

$\widehat{\theta}=\mathbb{E}[\theta|\boldsymbol{X}]=\frac{a+y}{a+b+n}$

Note that

$\frac{a+y}{a+b+n}=\frac{a+b}{a+b+n}\cdot\frac{a}{a+b}+\frac{n}{a+b+n}\cdot\frac{y}{n}$

i.e. the posterior mean is a weighted average of the prior mean $a/(a+b)$ and the maximum likelihood estimator $y/n$.

Further, it is possible to derive confidence intervals using quantiles of the posterior distribution.
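For instance (a one-line sketch, using the same a, b, y and n as above), a 95% interval can be obtained from the quantiles of the Beta posterior,

qbeta(c(.025,.975),a+y,n-y+b)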
On the graphs below, we consider the following heads/tails sample

A first idea is to consider a uniform prior distribution.

A second idea is to consider an asymmetric beta distribution. First, with an asymmetry on the left,

or on the right

Finally a third idea is simply to get back to the standard Gaussian approximation,

If we compare the four models, we obtain the following graphs (the plain black line is the Gaussian approximation of the distribution of the empirical mean, and the colored lines are obtained from the Beta prior distributions).

The code to generate those graphs is the following
u=seq(0,1,by=.01)       # grid on [0,1] (u was not defined in the original code)
a1=1; b1=1
a2=4; b2=2
a3=2; b3=4
set.seed(1)
S=sample(0:1,size=100,replace=TRUE)
COULEUR=rev(rainbow(120))
D1=D2=D3=D4=matrix(NA,101,length(u))
D1[1,]=dbeta(u,a1,b1)   # first row: prior densities
D2[1,]=dbeta(u,a2,b2)
D3[1,]=dbeta(u,a3,b3)
for(s in 1:100){
y=sum(S[1:s])
D1[s+1,]=dbeta(u,a1+y,s-y+b1)
D2[s+1,]=dbeta(u,a2+y,s-y+b2)
D3[s+1,]=dbeta(u,a3+y,s-y+b3)
D4[s+1,]=dnorm(u,y/s,sqrt(y/s*(1-y/s)/s))
plot(u,D1[1,],col="black",type="l",ylim=c(0,8),
xlab="",ylab="")
for(i in 1:s){lines(u,D1[1+i,],col=COULEUR[i])}
points(y/s,0,pch=3,cex=2)
plot(u,D2[1,],col="black",type="l",ylim=c(0,8),
xlab="",ylab="")
for(i in 1:s){lines(u,D2[1+i,],col=COULEUR[i])}
points(y/s,0,pch=3,cex=2)
plot(u,D3[1,],col="black",type="l",ylim=c(0,8),
xlab="",ylab="")
for(i in 1:s){lines(u,D3[1+i,],col=COULEUR[i])}
points(y/s,0,pch=3,cex=2)
plot(u,D4[1,],col="white",type="l",ylim=c(0,8),
xlab="",ylab="")
for(i in 1:s){lines(u,D4[1+i,],col=COULEUR[i])}
points(y/s,0,pch=3,cex=2)
plot(u,D4[s+1,],col="black",lwd=2,type="l",
ylim=c(0,8),xlab="",ylab="")
lines(u,D1[1+i,],col="blue")
lines(u,D2[1+i,],col="red")
lines(u,D3[1+i,],col="purple")
points(y/s,0,pch=3,cex=2)
}

Here, we can see that computations are simple if the prior distribution is conjugate to the distribution of the observations (see here for the list of standard prior and posterior distributions).
So far, I have two questions that naturally show up

• is it possible to start with a neutral, non-informative prior distribution ?
• what if we are no longer working with conjugate distributions ?

Well, I guess I have to work a bit more to answer those questions…. to be continued