Tag Archives: Poisson

Tweedie regression, or Poisson-Gamma regressions?

Yesterday, I was chatting with a young and enthusiastic actuary, who asked a nice (and classical) question: is it equivalent, or not, to use a Tweedie regression, or two separate regressions (Poisson for the frequency, and Gamma for the costs)? For the distributions, the two are equivalent, but when we have heterogeneity and explanatory variables, I really think that using all the information, and running two regressions, is much more interesting.

Homogeneous case

In the homogeneous case, without any explanatory variable, the Tweedie distribution and the compound Poisson-Gamma distribution are equivalent representations (i.e., it is simply a reparametrization).

Consider a Tweedie distribution, with variance function power p\in(1,2), mean \mu and scale parameter \phi, then it is a compound Poisson model,

  • N\sim\mathcal{P}(\lambda) with \lambda=\displaystyle{\frac{\phi \mu^{2-p}}{2-p}}
  • Y_i\sim\mathcal{G}(\alpha,\beta) with \alpha=\displaystyle{-\frac{p-2}{p-1}}\text{~and~}\beta=\displaystyle{\frac{\phi \mu^{1-p}}{p-1}}

Conversely, consider a compound Poisson model N\sim\mathcal{P}(\lambda) and Y_i\sim\mathcal{G}(\alpha,\beta), then

  • variance function power is p=\displaystyle{\frac{\alpha+2}{\alpha+1}}
  • mean is \mu=\displaystyle{\frac{\lambda \alpha}{\beta}}
  • scale (nuisance) parameter is
    \phi=\displaystyle{\frac{[\lambda\alpha]^{\frac{\alpha+2}{\alpha+1}-1}\beta^{2-\frac{\alpha+2}{\alpha+1}}}{\alpha+1}}

So the two are equivalent…
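
As a quick sanity check, one can code the two mappings above and verify that they are inverses of one another (a small sketch, with helper function names that are mine, just for the illustration),

tweedie_to_cpg = function(p, mu, phi){
  # Tweedie (p, mu, phi) -> compound Poisson-Gamma (lambda, alpha, beta), using the formulas above
  list(lambda = phi * mu^(2-p)/(2-p),
       alpha  = -(p-2)/(p-1),
       beta   = phi * mu^(1-p)/(p-1))
}
cpg_to_tweedie = function(lambda, alpha, beta){
  # compound Poisson-Gamma (lambda, alpha, beta) -> Tweedie (p, mu, phi)
  p = (alpha+2)/(alpha+1)
  list(p = p,
       mu = lambda*alpha/beta,
       phi = (lambda*alpha)^(p-1) * beta^(2-p) / (alpha+1))
}
do.call(cpg_to_tweedie, tweedie_to_cpg(p = 1.5, mu = 2, phi = 1))

which should return p = 1.5, mu = 2 and phi = 1.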

Heterogeneous case

Now, in the context of regression, N_i\sim\mathcal{P}(\lambda_i) with \lambda_i=\exp[\boldsymbol{x}_i^\top\boldsymbol{\beta}_{\lambda}], and Y_{j,i}\sim\mathcal{G}(\mu_i,\phi) with \mu_i=\exp[\boldsymbol{x}_i^\top\boldsymbol{\beta}_{\mu}]. Then S_i=Y_{1,i}+\cdots+Y_{N_i,i} has a Tweedie distribution where

  • variance function power is p=\displaystyle{\frac{\phi+2}{\phi+1}}
  • mean is \lambda_i \mu_i
  • scale parameter is \displaystyle{\frac{\lambda_i^{\frac{1}{\phi+1}-1}}{\mu_i^{\frac{\phi}{\phi+1}}}\left(\frac{\phi}{1+\phi}\right)}

There are 1+2\text{dim}(\boldsymbol{X}) degrees of freedom here. And for a Tweedie regression,

  • variance function power is p\in(1,2)
  • mean is \mu_i=\exp[\boldsymbol{x}_i^{\top}\boldsymbol{\beta}_{\text{Tweedie}}]
  • scale parameter is \phi

There are now 2+\text{dim}(\boldsymbol{X}) degrees of freedom.

In actuarial terminology,

  • N is the annual claim frequency
  • Y is the cost of single claims
  • S is the annual cost for a single insurance policy

As explained in our book, frequency and costs can be explained by different features, so that in itself is a motivation to consider two models. But consider the following simulated data,

n = 1e4
a=2
set.seed(123)
x = runif(n)
etan = exp(-2+a*x)
N = rpois(n,etan)
dfn = data.frame(y=N,x=x)
I=rep(1:n,N)
etaz = exp(2-a*x[I])
Z = rgamma(sum(N),etaz,20)
dfz = data.frame(y=Z,x=x[I])
S=tapply(Z,as.factor(I),sum)
V=as.numeric(S[as.character(1:n)])
V[is.na(V)]=0
dfy = data.frame(y=V,x=x)

We can run two regressions, for the frequency, and for the costs

regn = glm(y~x, family=poisson(link="log"),data=dfn)
regz = glm(y~x, family=Gamma(link="log"),data=dfz)

For the Tweedie regression, let us first find the optimal power parameter,

library(statmod)
library(tweedie)
glmtw = function(t){
m = glm(y~x, family=tweedie(var.power = t, link.power = 0),data=dfy)
d = NULL
if(t == 1) d = 1
AICtweedie(m, dispersion = d)
}
vt = seq(1.01,1.99,length=251)
vg = Vectorize(glmtw)(vt)
plot(vt,vg,log="y",type="l")
i=which.min(vg)

and consider the associated Tweedie regression.

regy = glm(y~x, family=tweedie(var.power = vt[i], link.power = 0),data=dfy)

For the frequency, there is a clear increase of the average frequency with x (and it is significant),

summary(regn)

Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.01508 0.04135 -48.73 <2e-16 ***
x            1.99036 0.05887  33.81 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for poisson family taken to be 1)

For the individual costs, there is a clear decline of the average cost with x (and it is highly significant),

summary(regz)

(the output is not reproduced here, but since the individual costs were generated with mean \exp[2-2x]/20, the estimated coefficient of x in the Gamma regression should be negative, close to -2)

Now, if we consider the average cost for the policy, we have

summary(regy)

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -3.00822 0.04101 -73.356 <2e-16 ***
x           -0.02226 0.07154  -0.311  0.756
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for Tweedie family taken to be 0.6516459)

I.e., the average annual cost for a single policy does not depend on x (the coefficient is clearly not significant): the increase of the frequency and the decrease of the individual costs cancel out, so the product of the two tells more or less the same story…
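
To visualise this, here is a small sketch (assuming the three regressions regn, regz and regy fitted above): compare the product of the frequency and cost predictions with the Tweedie prediction, on a grid of values of x,

vx = seq(0, 1, by = .1)
# annual frequency, average individual cost, and pure premium (Tweedie) predictions
pred_freq    = predict(regn, newdata = data.frame(x = vx), type = "response")
pred_cost    = predict(regz, newdata = data.frame(x = vx), type = "response")
pred_tweedie = predict(regy, newdata = data.frame(x = vx), type = "response")
cbind(product = pred_freq * pred_cost, tweedie = pred_tweedie)

The two columns should be roughly flat, and close to each other.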

Even if the overall outcome, the pure premium, is the same, one could argue that having the two regressions is much more informative for risk management (if one wants to introduce deductibles, for instance).

Model selection, AIC and Tweedie regression

Just some simple code to illustrate some points we will discuss this week, for the last course on GLMs, before the final exam. We have mentioned that the Gamma distribution belongs to the exponential family, so we can run a regression, and compute the associated AIC,

> set.seed(123)
> test.data = rgamma(n=2000, scale=1, shape=1)
> m1 = glm( test.data~1, family=Gamma(link=log))
> AIC(m1)
[1] 3997.332

The Gamma distribution is also a special case of the Tweedie distribution, with power 2

> library(statmod)
> library(tweedie)
> m2 = glm( test.data~1, family=tweedie(link.power=0, var.power=2) )
> AIC(m2)
[1] NA

Unfortunately, we cannot compute the AIC, and we need a trick (with the appropriate R function).

> AICtweedie(m2)
[1] 3997.332

Of course, we can do the same with the Poisson distribution, which also belongs to the exponential family

> test.data = rpois(n=2000, lambda=1)
> m3 = glm( test.data~1, family=poisson(link=log))
> m4 = glm( test.data~1, family=tweedie(link.power=0, var.power=1) )
> AIC(m3)
[1] 5124.61

Here, we have a problem with the AICtweedie function

> AICtweedie(m4)
[1] Inf

because we need to specify the dispersion parameter

> AICtweedie(m4, dispersion=1)
[1] 5124.61

We can now check: we generate a Gamma sample, and fit various Tweedie distributions, simply changing the variance function (which is a power function)

> set.seed(123)
> test.data = rgamma(n=2000, scale=1, shape=1)
> glmtw = function(t){
+ m1 = glm( test.data~1, family=tweedie(link.power=0, var.power=t) )
+ d = NULL
+ if(t == 1) d = 1
+ AICtweedie(m1, dispersion = d)
+ 
+ }
> vt = seq(1,2.7,length=100)
> vg = Vectorize(glmtw)(vt)
> plot(vt,vg,log="y",type="l")

The minimum of the AIC is close to 2, corresponding to the Gamma distribution

We can also try with a Poisson

> set.seed(123)
> test.data = rpois(n=2000, lambda=1)
> glmtw = function(t){
+ m1 = glm( test.data~1, family=tweedie(link.power=0, var.power=t) )
+ d = NULL
+ if(t == 1) d = 1
+ AICtweedie(m1, dispersion = d)
+ 
+ }
> vt = seq(1,2,length=100)
> vg = Vectorize(glmtw)(vt)
> plot(vt,vg,log="y",type="l")

The minimum is now close to 1, corresponding to the Poisson distribution (for which the variance is equal to the mean)

Let us now try some compound Poisson distribution,

> rcpd=function(n,lambda,shape,scale){
+ N=rpois(n,lambda)
+ X=rgamma(sum(N),shape=shape, scale=scale)
+ I=as.factor(rep(1:n,N))
+ S=tapply(X,I,sum)
+ V=as.numeric(S[as.character(1:n)])
+ V[is.na(V)]=0
+ return(V)}

Let us generate some compound Poisson random variables, with a Poisson distribution with mean 1, and Gamma jumps with mean and variance 1,

> set.seed(123)
> test.data = rcpd(n=2000, 1,1,1)
> glmtw = function(t){
+ m1 = glm( test.data~1, family=tweedie(link.power=0, var.power=t) )
+ d = NULL
+ if(t == 1) d = 1
+ AICtweedie(m1, dispersion = d)
+ }
> vt = seq(1.1,1.9,length=100)
> vg = Vectorize(glmtw)(vt)
> plot(vt,vg,log="y",type="l")

The optimal value for the power function is here 1.5, based on the AIC (relationships between Tweedie parameters and the compound Poisson ones are given in the slides)
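
Note that this is consistent with the relationship p=\displaystyle{\frac{\alpha+2}{\alpha+1}} between the Tweedie power and the Gamma shape: with jumps of shape \alpha=1, the theoretical power is 3/2, which is exactly where the AIC is minimized.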

We can now play a little bit with the variance of the jumps: they still have mean 1, but they now have a smaller variance

> set.seed(123)
> test.data = rcpd(n=2000, 1,3,1/3)
> vt = seq(1.05,1.95,length=100)
> vg = Vectorize(glmtw)(vt)
> plot(vt,vg,log="y",type="l")

The optimal power for the Tweedie is closer to one, i.e. closer to the Poisson case

while if we increase the variance of the jumps

> set.seed(123)
> test.data = rcpd(n=2000, 1,1/3,3)
> vt = seq(1.05,1.95,length=100)
> vg = Vectorize(glmtw)(vt)
> plot(vt,vg,log="y",type="l")

the optimal power is higher, closer to the Gamma distribution.
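
Again, this is consistent with p=\displaystyle{\frac{\alpha+2}{\alpha+1}}: with shape \alpha=3 the theoretical power is 5/4=1.25, while with \alpha=1/3 it is 7/4=1.75, on either side of the 1.5 obtained previously.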

STT5100, quiz (Poisson regression #1)

For the end of the week, I had given a small quiz, based on the following dataset, which gives the number of cyclists at an intersection in New York City,

download.file("http://freakonometrics.free.fr/NYCVelo.RData","velo.RData")
load("velo.RData")
str(base)
'data.frame':	214 obs. of  7 variables:
 $ Date    : chr  "1-Apr-17" "2-Apr-17" "3-Apr-17" "4-Apr-17" ...
 $ HIGH_T  : num  46 62.1 63 51.1 63 48.9 48 55.9 66 73.9 ...
 $ LOW_T   : num  37 41 50 46 46 41 43 39.9 45 55 ...
 $ PRECIP  : num  0 0 0.03 1.18 0 0.73 0.01 0 0 0 ...
 $ BB_COUNT: int  606 2021 2470 723 2807 461 1222 1674 2375 3324 ...
 $ DAY     : chr  "Sam" "Dim" "Lun" "Mar" ...
 $ DIFF_T  : num  9 21.1 13 5.1 17 7.9 5 16 21 18.9 ...

Using a Poisson regression, the goal was to predict how many cyclists will pass on a Sunday, with a maximum temperature of 85F, a minimum temperature of 70F, and no rain. And then to see what the prediction would be for a Monday.

newbase = data.frame(DAY=as.factor(c("Lun","Dim")),
 HIGH_T=c(85,85),LOW_T=c(70,70),
 PRECIP=c(0,0))

Let us fit a model with all the explanatory variables.

reg0 = glm(BB_COUNT~HIGH_T+LOW_T+PRECIP+DAY,data=base,family=poisson)

Let us also add an indicator for the days with no rain at all,

reg = glm(BB_COUNT~HIGH_T+LOW_T+PRECIP+I(PRECIP==0)+DAY,data=base,family=poisson)
summary(reg) 
 
Coefficients:
                     Estimate Std. Error z value Pr(>|z|)    
(Intercept)         6.8844970  0.0110463 623.241   <2e-16 ***
HIGH_T              0.0210950  0.0003133  67.328   <2e-16 ***
LOW_T              -0.0114006  0.0003351 -34.024   <2e-16 ***
PRECIP             -0.6570450  0.0071899 -91.384   <2e-16 ***
I(PRECIP == 0)TRUE  0.1303908  0.0033283  39.176   <2e-16 ***
DAYJeu              0.1683475  0.0049690  33.880   <2e-16 ***
DAYLun              0.1144129  0.0050480  22.665   <2e-16 ***
DAYMar              0.1655886  0.0049936  33.160   <2e-16 ***
DAYMer              0.1688035  0.0049190  34.317   <2e-16 ***
DAYSam              0.0466447  0.0051838   8.998   <2e-16 ***
DAYVen              0.1003536  0.0050919  19.709   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
(Dispersion parameter for poisson family taken to be 1)
 
    Null deviance: 70021  on 213  degrees of freedom
Residual deviance: 26493  on 203  degrees of freedom
AIC: 28580
 
Number of Fisher Scoring iterations: 4

Everything seems really significant. And it even seems to make sense (if we look at the signs, for instance). And if we are afraid of missing a nonlinear effect, we can put splines on all the continuous variables,

library(gam)
reggam = gam(BB_COUNT~bs(HIGH_T)+bs(LOW_T)+bs(PRECIP)+I(PRECIP==0)+DAY,data=base,family=poisson)
plot(reggam, se=TRUE)

for the maximum temperature, or for the minimum temperature,

and the following curve for the precipitation, with a linear smoothing between the largest observation (3) and the one just before (around 1.8).

We can also regress on the minimum temperature, and on the difference between the maximum and the minimum (in a linear model the two specifications are equivalent, but with nonlinear transformations, using the difference might give a simpler model),

library(gam)
reggam2 = gam(BB_COUNT~bs(DIFF_T)+bs(LOW_T)+bs(PRECIP)+I(PRECIP==0)+DAY,data=base,family=poisson)
plot(reggam2, se=TRUE)

We can now compare the four models, and their predictions. For instance, for the linear model (with the indicator variable for days with no rain),

P = predict(reg,newdata=newbase,type="response",se.fit=TRUE)

for Monday, we obtain the 95% confidence interval for \widehat{\lambda},

P$fit[1]+c(-2,2)*P$se.fit[1]
[1] 3349.842 3401.395

and for Sunday, the 95% confidence interval is

P$fit[2]+c(-2,2)*P$se.fit[2]
[1] 2987.497 3033.861
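
To see how much the intervals move from one model to another, one can do the same with the first model, reg0, without the no-rain indicator (a small sketch; the same could be done with the two gam models),

P0 = predict(reg0, newdata=newbase, type="response", se.fit=TRUE)
P0$fit[1]+c(-2,2)*P0$se.fit[1]  # 95% confidence interval for Monday
P0$fit[2]+c(-2,2)*P0$se.fit[2]  # 95% confidence interval for Sunday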

We can visualize the four confidence intervals (with, from top to bottom, the second gam model, the first one, the linear model with the no-rain indicator, and then the first linear model)

while for Sunday, we have

In other words, changing the model changes the confidence intervals on the prediction quite a lot (and the intervals are sometimes completely disjoint). Which is not necessarily good news.

Exotic link functions for GLMs

In my previous post on GLMs, I discussed power link functions. But there are many more link functions that can be used:

  • The square root link (for the Poisson model)

Consider some random variable Y with mean \mu and variance \sigma^2. Using Taylor’s expansion, g(Y)\sim g(\mu)+(Y-\mu)g'(\mu)+\frac{1}{2}(Y-\mu)^2g''(\mu), we can write \mathbb{E}[g(Y)]\sim g(\mu)+\frac{\sigma^2}{2}g''(\mu) and \text{Var}[g(Y)]\sim [g'(\mu)]^2\sigma^2.

Assume that Y\sim\mathcal{P}(\lambda), and consider a square root transformation, g(y)=\sqrt{y}; then the second equality becomes \text{Var}[\sqrt{Y}]\sim \left[\frac{1}{2\sqrt{\mathbb{E}[Y]}}\right]^2\text{Var}[Y]=\frac{1}{4}

So, somehow, with a square-root transformation, we have variance stability, which might be interpreted as some homoscedasticity.
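
A quick numerical check of that variance-stabilisation claim (a small sketch, not in the original derivation): simulate Poisson samples with various means, and look at the variance of the square root,

set.seed(1)
# empirical variance of sqrt(Y) for Poisson samples with increasing means
sapply(c(5,20,100), function(l) var(sqrt(rpois(1e5,l))))

The values should get closer and closer to 1/4 as the mean increases (for small means, the Taylor approximation is rougher).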

  • The complementary log-log function for the Bernoulli model

Assume that the true variable of interest is a Poisson one, N|\mathbf{X}=\mathbf{x}\sim\mathcal{P}(\lambda_{\mathbf{x}}) where \lambda_{\mathbf{x}}=\exp[\mathbf{x}^T\mathbf{\beta}]. Thus, \mathbb{P}[N=0|\mathbf{X}=\mathbf{x}]=\exp[-\lambda_{\mathbf{x}}]=\exp[-(\exp[\mathbf{x}^T\mathbf{\beta}])] while \mathbb{P}[N>0|\mathbf{X}=\mathbf{x}]=1-\exp[-(\exp[\mathbf{x}^T\mathbf{\beta}])]=H(\mathbf{x}^T\mathbf{\beta}), where H(s)=1-\exp[-\exp(s)]. Let Y=\mathbf{1}(N>0). The previous model seems like a Bernoulli regression with H as link function, \mathbb{P}[Y=1|\mathbf{X}=\mathbf{x}]=H(\mathbf{x}^T\mathbf{\beta}).

So, assume now that instead of observing N we observe Y=\boldsymbol{1}(N>0). In that case, running a Bernoulli regression with a complementary log-log link function would be the same (?) as first running a Poisson regression on the original data, and then using it on our binary variable, zero vs. non-zero. Let us generate some data, and see what’s going on. Let us compare e^{-\lambda_{\mathbf{x}}} and the probability p_{\mathbf{x}} of a zero obtained from a Bernoulli (probit) regression on the indicator Y=0,

n=563
set.seed(1)
base=data.frame(X1=rnorm(n),X2=rnorm(n))
lambda=base$X1+base$X2
base$Y=rpois(n,exp(lambda))
regPois = glm(Y~.,data=base,family=poisson(link="log"))
lambda = predict(regPois,type="response")
regBinom = glm((Y==0)~.,data=base,family=binomial(link="probit"))
prob = predict(regBinom, type="response")
plot(prob,exp(-lambda),xlim=0:1,ylim=0:1)
abline(a=0,b=1,lty=2,col="red")

What if p_{\mathbf{x}} was obtained from a Bernoulli regression, with a cloglog link function?

regBinom = glm((Y>0)~.,data=base,family=binomial(link="cloglog"))
prob = predict(regBinom, type="response")
plot(prob,1-exp(-lambda),xlim=0:1,ylim=0:1)
abline(a=0,b=1,lty=2,col="red")

It looks like the fit is very good here! Now, what if we have real data, like the dataset from A Theory of Extramarital Affairs, by Ray Fair, published in 1978 in the Journal of Political Economy (with 563 observations, and nine variables)

base = read.table("http://freakonometrics.free.fr/baseaffairs.txt",header=TRUE)
str(base)
x=base$SEX
base$SEX="M"
base$SEX[x=="0"]="F"
x=base$CHILDREN
base$CHILDREN="YES"
base$CHILDREN[x==0]="NO"
regPois = glm(Y~.,data=base,family=poisson(link="log"))
lambda = predict(regPois,type="response")
regBinom = glm((Y==0)~.,data=base,family=binomial(link="probit"))
prob = predict(regBinom, type="response")
plot(prob,exp(-lambda),xlim=0:1,ylim=0:1)
abline(a=0,b=1,lty=2,col="red")

In that case, the two models are very different. But actually, the same is true for the second comparison,

regBinom = glm((Y>0)~.,data=base,family=binomial(link="cloglog"))
prob = predict(regBinom, type="response")
plot(prob,1-exp(-lambda),xlim=0:1,ylim=0:1)
abline(a=0,b=1,lty=2,col="red")

How can we interpret that? Could it be because the Poisson model is not good? Actually, if we run a zero-inflated model here,

library(pscl)
regZIP = zeroinfl(Y ~ . | ., data = base)
summary(regZIP)
 
Count model coefficients (poisson with log link):
             Estimate Std. Error z value Pr(>|z|)    
(Intercept) -0.002274   0.048413  -0.047    0.963    
X1           1.019814   0.026186  38.945   <2e-16 ***
X2           1.004814   0.024172  41.570   <2e-16 *** 
Zero-inflation model coefficients (binomial with logit link): 
            Estimate Std. Error z value Pr(>|z|)  
(Intercept) -4.90190    2.07846  -2.358   0.0184 *
X1          -2.00227    0.86897  -2.304   0.0212 *
X2          -0.01545    0.96121  -0.016   0.9872  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Hence, we reject here the Poisson distribution assumption, because of the inflation of zeros… It looks like the cloglog link can be used to check if the Poisson distribution is a good model, or not…

Bailey (1963) and Poisson regression on two factors

Consider the following dataset, from A Theory of Extramarital Affairs, by Ray Fair, published in 1978 in the Journal of Political Economy, with 563 observations, and nine variables: eight covariates, and the variable of interest, the number of extramarital affairs, over a year,

base = read.table("http://freakonometrics.free.fr/baseaffairs.txt",header=TRUE)
str(base)
'data.frame':	563 obs. of  9 variables:
 $ SEX         : int  1 0 0 1 1 0 0 1 0 1 ...
 $ AGE         : num  37 27 32 57 22 32 22 57 32 22 ...
 $ YEARMARRIAGE: num  10 4 15 15 0.75 1.5 0.75 15 15 1.5 ...
 $ CHILDREN    : int  0 0 1 1 0 0 0 1 1 0 ...
 $ RELIGIOUS   : int  3 4 1 5 2 2 2 2 4 4 ...
 $ EDUCATION   : int  18 14 12 18 17 17 12 14 16 14 ...
 $ OCCUPATION  : int  7 6 1 6 6 5 1 4 1 4 ...
 $ SATISFACTION: int  4 4 4 5 3 5 3 4 2 5 ...
 $ Y           : int  0 0 0 0 0 0 0 0 0 0 ...

Let us focus on two categorical covariates, related to the importance of religion, and the occupation

df=data.frame(y=base$Y,
              religion=as.factor(base$RELIGIOUS),
              occupation=as.factor(base$OCCUPATION),
              expo = 1)
(E=xtabs(expo~religion+occupation,data=df))
        occupation
religion  1  2  3  4  5  6  7
       1  4  1  8  4 16  9  0
       2 23  3 11 17 56 36  6
       3 29  1 10 12 39 25  2
       4 38  7 12 21 59 44  2
       5 13  1  3 10 19 19  3
(N=xtabs(y~religion+occupation,data=df))
        occupation
religion  1  2  3  4  5  6  7
       1  4  1 13  3 13  7  0
       2  1  1 13 10 25 43 10
       3 15  0 12 11 34 35  1
       4 24  1  3 15 11  9 10
       5  6  0  0  6 11  7  0

The two tables above are the exposure (number of observations) and the number of extramarital affairs, here as contingency tables. Without any covariate, one can assume that N\sim\mathcal{P}(\lambda\cdot E), where \lambda would be

sum(N)/sum(E)
[1] 0.6305506

The idea with the margin method is to assume that N_{i,j}=E_{i,j}\cdot\lambda_{i,j} where \lambda_{i,j}=A_i\cdot B_j. Bailey (1963) added two series of constraints: per row, \sum_j N_{i,j}=\sum_j E_{i,j}\cdot A_i\cdot B_j for any i, and similarly, per column, \sum_i N_{i,j}=\sum_i E_{i,j}\cdot A_i\cdot B_j for any j. From the first series of constraints, write A_i=\frac{\sum_j N_{i,j}}{\sum_j E_{i,j}\cdot B_j}, and use the second series to write B_j=\frac{\sum_i N_{i,j}}{\sum_i E_{i,j}\cdot A_i}. Because we need the A_i‘s to compute the B_j‘s, and conversely, it is natural to consider some iterative procedure to solve it. Observe that we do not have uniqueness…

Consider here some starting values for A_i‘s and B_j‘s

A=rep(1,length(levels(df$religion)))
B=rep(1,length(levels(df$occupation)))*sum(N)/sum(E)
A
[1] 1 1 1 1 1
B
[1] 0.6305506 0.6305506 0.6305506 0.6305506 0.6305506 0.6305506 0.6305506

The predicted number of extramarital affairs would be \hat N_{i,j}=E_{i,j}\cdot\hat A_i\cdot \hat B_j

E * A%*%t(B)
        occupation
religion          1          2          3          4          5          6          7
       1  2.5222025  0.6305506  5.0444050  2.5222025 10.0888099  5.6749556  0.0000000
       2 14.5026643  1.8916519  6.9360568 10.7193606 35.3108348 22.6998224  3.7833037
       3 18.2859680  0.6305506  6.3055062  7.5666075 24.5914742 15.7637655  1.2611012
       4 23.9609236  4.4138544  7.5666075 13.2415631 37.2024867 27.7442274  1.2611012
       5  8.1971581  0.6305506  1.8916519  6.3055062 11.9804618 11.9804618  1.8916519
sum(B*E[1,])
[1] 26.48313
sum(B*E[2,])
[1] 95.84369
apply(t(B*t(E)),1,sum)
        1         2         3         4         5 
 26.48313  95.84369  74.40497 115.39076  42.87744 
sum(A*E[,1])
[1] 107
sum(A*E[,2])
[1] 13
apply(A*E,2,sum)
  1   2   3   4   5   6   7 
107  13  44  64 189 133  13

From expressions above, observe that one can very easily write expressions of A_i‘s and B_j‘s as functions of B_j‘s and A_i‘s respectively

A=apply(N,1,sum)/apply(t(B*t(E)),1,sum)
B=apply(N,2,sum)/apply(A*E,2,sum)

Let us iterate this one thousand times

for(i in 1:1000){
  A=apply(N,1,sum)/apply(t(B*t(E)),1,sum)
  B=apply(N,2,sum)/apply(A*E,2,sum)
}

We obtain here

A
        1         2         3         4         5 
1.5404346 1.0447195 1.4825650 0.6553159 0.6634763 
B
        1         2         3         4         5         6         7 
0.4685515 0.2629769 0.8454435 0.7245310 0.4889697 0.7770553 1.6753750 
E * A%*%t(B)
        occupation
religion          1          2          3          4          5          6          7
       1  2.8870914  0.4050987 10.4188024  4.4643702 12.0516123 10.7730250  0.0000000
       2 11.2586111  0.8242113  9.7157637 12.8678376 28.6068235 29.2249717 10.5017811
       3 20.1450811  0.3898804 12.5342484 12.8899708 28.2722423 28.8008726  4.9677044
       4 11.6678702  1.2063307  6.6483904  9.9707299 18.9053460 22.4055332  2.1957997
       5  4.0413463  0.1744790  1.6827951  4.8070914  6.1639760  9.7955975  3.3347148

That is our prediction, per category, of the number of affairs. Observe that here, sums per row are equal to observed numbers,

apply(N,1,sum)
  1   2   3   4   5 
 41 103 108  73  30 
apply(E * A%*%t(B),1,sum)
  1   2   3   4   5 
 41 103 108  73  30

as well as the sums per column,

apply(N,2,sum)
  1   2   3   4   5   6   7 
 50   3  41  45  94 101  21 
apply(E * A%*%t(B),2,sum)
  1   2   3   4   5   6   7 
 50   3  41  45  94 101  21

Now, why should I mention this here, in the section on Poisson regression in our course? Because actually, this is exactly what we get if we run a Poisson regression on those two covariates,

reg=glm(y~religion+occupation,data=df,family=poisson)
summary(reg)
Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -0.32604    0.21325  -1.529 0.126285    
religion2   -0.38832    0.18791  -2.066 0.038783 *  
religion3   -0.03829    0.18585  -0.206 0.836771    
religion4   -0.85470    0.19757  -4.326 1.52e-05 ***
religion5   -0.84233    0.24416  -3.450 0.000561 ***
occupation2 -0.57758    0.59549  -0.970 0.332083    
occupation3  0.59022    0.21349   2.765 0.005699 ** 
occupation4  0.43588    0.20603   2.116 0.034381 *  
occupation5  0.04265    0.17590   0.242 0.808399    
occupation6  0.50587    0.17360   2.914 0.003569 ** 
occupation7  1.27415    0.26298   4.845 1.27e-06 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

First of all, observe that the total sum of predictions equals the total sum of observations

yp = predict(reg,type="response")
sum(yp)
[1] 355
sum(df$y)
[1] 355

But actually, the predicted number of affairs, for our 35 classes, is exactly what we got using Bailey’s technique

xtabs(yp~df$religion+df$occupation)
           df$occupation
df$religion          1          2          3          4          5          6          7
          1  2.8870914  0.4050987 10.4188024  4.4643702 12.0516123 10.7730250  0.0000000
          2 11.2586112  0.8242113  9.7157637 12.8678376 28.6068235 29.2249717 10.5017811
          3 20.1450813  0.3898804 12.5342484 12.8899708 28.2722424 28.8008726  4.9677044
          4 11.6678703  1.2063307  6.6483904  9.9707300 18.9053460 22.4055332  2.1957997
          5  4.0413464  0.1744790  1.6827951  4.8070914  6.1639761  9.7955975  3.3347148
E * A%*%t(B)
        occupation
religion          1          2          3          4          5          6          7
       1  2.8870914  0.4050987 10.4188024  4.4643702 12.0516123 10.7730250  0.0000000
       2 11.2586111  0.8242113  9.7157637 12.8678376 28.6068235 29.2249717 10.5017811
       3 20.1450811  0.3898804 12.5342484 12.8899708 28.2722423 28.8008726  4.9677044
       4 11.6678702  1.2063307  6.6483904  9.9707299 18.9053460 22.4055332  2.1957997
       5  4.0413463  0.1744790  1.6827951  4.8070914  6.1639760  9.7955975  3.3347148

To be more specific, up to a multiplicative constant, the two series of coefficients are equal here, e.g. for the A_i‘s

a=exp(coefficients(reg)[1]+c(0,coefficients(reg)[2:5]))
a/a[1]
          religion2 religion3 religion4 religion5 
1.0000000 0.6781979 0.9624329 0.4254098 0.4307072 
A/A[1]
        1         2         3         4         5 
1.0000000 0.6781979 0.9624329 0.4254098 0.4307072

but also for B_j‘s

b=exp(coefficients(reg)[1]+c(0,coefficients(reg)[6:11]))
b/b[1]
            occupation2 occupation3 occupation4 occupation5 occupation6 occupation7 
  1.0000000   0.5612551   1.8043769   1.5463210   1.0435773   1.6584203   3.5756477 
B/B[1]
        1         2         3         4         5         6         7 
1.0000000 0.5612551 1.8043770 1.5463210 1.0435773 1.6584203 3.5756478
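
As a quick check of that statement (a small sketch): since a_i=\exp[\beta_0+\text{religion}_i] and b_j=\exp[\beta_0+\text{occupation}_j], the products a_i\cdot b_j, divided by \exp[\beta_0], should coincide with A_i\cdot B_j,

# difference between the rescaled GLM products and Bailey's products
range(a %*% t(b) / exp(coefficients(reg)[1]) - A %*% t(B))

which should be numerically close to zero.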

This will have major implications in non-life insurance models (for claims reserving).

Actuariat de l’Assurance Non-Vie #3

Tuesday, second part of the actuarial science course, with classification models in the morning; in the afternoon, we should start count models. The slides are online.

In the morning, I have to give a 15-minute talk at the IHP around 9 am, at a conference on Artificial Intelligence for Fintech and Insurtech: if the RER is a bit too slow between Luxembourg and Lozère, I might be 5 minutes late…

Econometrics vs. Machine Learning with Temporal Patterns

A few months ago, I published a (long) post entitled ‘some thoughts on economics, mathematics, econometrics, machine learning, etc‘. In that post, I was discussing possible differences between the foundations of econometrics and machine learning. I wanted to come back today to an important point, related to training/validation datasets, when we have temporal data.

This morning, I was discussing with a student of the Data Science for Actuaries program an interesting point related to claim frequency models, for insurance ratemaking. Since the goal is to predict the claims frequency (to assess the level of the insurance premium), he suggested to use old data to train the model, and more recent data to test it. The problem is that the model did not incorporate any temporal pattern, and we got surprising results.

Consider here a simple dataset,

> set.seed(1)
> n=50000
> X1=runif(n)
> T=sample(2000:2015,size=n,replace=TRUE)
> L=exp(-3+X1-(T-2000)/20)
> E=rbeta(n,5,1)
> Y=rpois(n,L*E)
> B=data.frame(Y,X1,L,T,E)

Claims frequency is driven by a Poisson process, with one covariate, X1, and we assume that the intensity decreases over time (at an exponential rate). Consider here a standard Poisson regression, without any time effect

> reg=glm(Y~X1+offset(log(E)),data=B,
+ family=poisson)

We can also compute the empirical annualized claims frequency

> u=seq(0,1,by=.01)
> v=predict(reg,newdata=data.frame(X1=u,E=1))
> p=function(x){
+   B=B[abs(B$X1-x)<.1,]
+   sum(B$Y)/sum(B$E)
+ }
> vp=Vectorize(p)(seq(.05,.95,by=.1))

and plot the two curves on the same graph,

> plot(seq(.05,.95,by=.1),vp,type="b")
> lines(u,exp(v),lty=2,col="red")

This is what we usually do in econometrics. In machine learning, and more specifically to assess the quality of the model, and for model selection, it is common to split the dataset into two parts: a training sample, and a validation sample. Consider some randomized training/validation split, then fit a model on the training sample, and finally use it to get a prediction,

> idx=sample(1:nrow(B),size=nrow(B)*7/8)
> B_a=B[idx,]
> B_t=B[-idx,]
> reg=glm(Y~X1+offset(log(E)),data=B_a,
+ family=poisson)
> u=seq(0,1,by=.01)
> v=predict(reg,newdata=data.frame(X1=u,E=1))
> p=function(x){
+   B=B_a[abs(B_a$X1-x)<.1,]
+   sum(B$Y)/sum(B$E)
+ }
> vp_a=Vectorize(p)(seq(.05,.95,by=.1))
> plot(seq(.05,.95,by=.1),vp_a,col="blue")
> lines(u,exp(v),lty=2)
> p=function(x){
+   B=B_t[abs(B_t$X1-x)<.1,]
+   sum(B$Y)/sum(B$E)
+ }
> vp_t=Vectorize(p)(seq(.05,.95,by=.1))
> lines(seq(.05,.95,by=.1),vp_t,col="red")

The blue curve is the prediction on the training sample (as we usually do in econometrics), but then the red curve is the prediction on the testing sample. Here, volatility probably comes from the small size of the testing sample (1 observation out of 8).

Now, what if we use the year as a splitting criterion: we fit a model on the old years, and we test it on the recent ones,

> B_a=subset(B,T<2014)
> B_t=subset(B,T>=2014)
> reg=glm(Y~X1+offset(log(E)),data=B_a,family=poisson)
> u=seq(0,1,by=.01)
> v=predict(reg,newdata=data.frame(X1=u,E=1))
> p=function(x){
+   B=B_a[abs(B_a$X1-x)<.1,]
+   sum(B$Y)/sum(B$E)
+ }
> vp_a=Vectorize(p)(seq(.05,.95,by=.1))
> plot(seq(.05,.95,by=.1),vp_a,col="blue")
> lines(u,exp(v),lty=2)
> p=function(x){
+   B=B_t[abs(B_t$X1-x)<.1,]
+   sum(B$Y)/sum(B$E)
+ }
> vp_t=Vectorize(p)(seq(.05,.95,by=.1))
> lines(seq(.05,.95,by=.1),vp_t,col="red")

Clearly, we miss something here…
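
The size of the gap can actually be quantified from the simulation itself: the intensity decreases like \exp[-(T-2000)/20], so between the middle of the training period (around 2006.5) and the validation years (2014-2015), the annual frequency is roughly multiplied by \exp[-8/20]\approx 0.67. A model without any temporal component therefore overestimates the recent frequency by about 50%.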

We were looking at such a graph this morning, and it took me some time to understand how training and validation samples were designed, and that there was a possible temporal effect (actually, this morning, it was based on a 3 year training sample, and a 1 year validation sample).

Since there is a temporal pattern, let us capture it. As an econometrician, let me use a regression model

> reg=glm(Y~X1+T+offset(log(E)),data=B,
+ family=poisson)
> C=coefficients(reg)
> u=seq(1999,2016,by=.1)
> v=exp(-(u-2000)/20-3)
> plot(2000:2015,exp(C[1]+C[3]*(2000:2015)))
> lines(u,v,lty=2,col="red")

(I focus only on the evolution of the temporal variate on that graph).

Here, we use a linear effect of time, but there is usually no reason to assume linearity. So we might consider splines

> library(splines)
> reg=glm(Y~X1+bs(T)+offset(log(E)),
+ data=B,family=poisson)
> u=seq(1999,2016,by=.1)
> v=exp(-(u-2000)/20-3)
> v2=predict(reg,newdata=data.frame(X1=0,
+ T=2000:2015,E=1))
> plot(2000:2015,exp(v2),type="b")
> lines(u,v,lty=2,col="red")

But here again, why should we assume that there is an underlying smooth function? There might be some breaks… So let us consider a regression on yearly factors

> reg=glm(Y~0+X1+as.factor(T)+offset(log(E)),
+ data=B,family=poisson)
> C=coefficients(reg)
> u=seq(1999,2016,by=.1)
> v=exp(-(u-2000)/20-3)
> plot(2000:2015,exp(C[2:17]),type="b")
> lines(u,v,lty=2,col="red")

An alternative might be to consider some more general model, like a regression tree

> library(rpart)
> reg=rpart(Y~X1+T+offset(log(E)),data=B,
+ method="poisson",cp=1e-4)
> p=function(t){
+   B=B[B$T==t,]
+   B$E=1
+   mean(predict(reg,newdata=B))
+ }
> y_m=Vectorize(function(t) p(t))(2000:2015)
> u=seq(1999,2016,by=.1)
> v=exp(-(u-2000)/20-3+.5)
> plot(2000:2015,y_m,ylim=c(.02,.085),type="b")
> lines(u,v,lty=2,col="red")

Here, it seems that something went wrong. I guess it is coming from the exposure. So consider a simpler model, on the annualized frequency, with weights related to the exposure

> reg=rpart(Y/E~X1+T,data=B,weights=B$E,cp=1e-4)
> p=function(t){
+   B=B[B$T==t,]
+   B$E=1
+   mean(predict(reg,newdata=B))
+ }
> y_m=Vectorize(function(t) p(t))(2000:2015)
> u=seq(1999,2016,by=.1)
> v=exp(-(u-2000)/20-3+.5)
> plot(2000:2015,y_m,ylim=c(.02,.085),type="b")
> lines(u,v,lty=2,col="red")

That was for the econometrician perspective. With a machine learning perspective, consider a training sample (here based on old data) and a validation sample (based on more recent ones)

> B_a=subset(B,T<2014)
> B_t=subset(B,T>=2014)

If we consider a model, it is easy to get a prediction on recent years, even if the model was designed to model older ones,

> reg_a=glm(Y~X1+T+offset(log(E)),
+ data=B_a,family=poisson)
> C=coefficients(reg_a)
> u=seq(1999,2016,by=.1)
> v=exp(-(u-2000)/20-3)
> plot(2000:2015,exp(C[1]+C[3]*c(2000:2013,
+ NA,NA)),type="b")
> lines(u,v,lty=2,col="red")
> points(2014:2015,exp(C[1]+C[3]*2014:2015),
+ pch=19,col="blue")

But if we use years as factors, things are more complicated.

> reg_a=glm(Y~0+X1+as.factor(T)+offset(log(E)),
+ data=B_a,family=poisson)
> C=coefficients(reg_a)
> RMSE=function(A){
+   L=exp(C[1]*B_t$X1+ A[1]*(B_t$T==2014) + A[2]*(B_t$T==2015))
+   Y_t=L*B_t$E
+   sum( (Y_t - B_t$Y )^2)}
> i=optim(c(.4,.4),RMSE)$par
> plot(2000:2015,c(exp(C[2:15]),NA,NA))
> u=seq(1999,2016,by=.1)
> v=exp(-(u-2000)/20-3)
> lines(u,v,lty=2,col="red")
> points(2014:2015,exp(i),pch=19,col="blue")

because we need to get a prediction for levels that were not in our training sample. Here, we minimize the RMSE to quantify the factor levels for the recent years. And the output is not that bad.

So yes, it is possible to get a training dataset on older data, and test it on recent years. But one should be careful, and take into account, properly, temporal patterns.

Simple Distributions for Mixtures?

The idea of GLMs is that, given some covariates \mathbf{X}, the response Y|\mathbf{X}=\mathbf{x} has a distribution in the exponential family (Gaussian, Poisson, Gamma, etc). But that does not mean that Y has a similar distribution… so there is no reason to test for a Gamma model for Y before running a Gamma regression, for instance. But are there cases where it might work? Where the non-conditional distribution is the same (same family at least) as the conditional ones?

For instance, if (X,Y) has a joint Gaussian distribution, then both marginals are Gaussian, but so is Y|X=x. So, in that case, if the covariate is normally distributed, it is possible to have a Gaussian distribution also for Y. The econometric interpretation is that with a standard Gaussian linear model, if X is normally distributed, not only is the conditional distribution Y|X=x Gaussian, but so is the non-conditional distribution of Y.

> set.seed(1)
> n=1e3
> X=rnorm(n,10,2)
> Y=1+3*X+rnorm(n)
> plot(X,Y,xlim=c(4,20))

Indeed, here the distribution of Y is also Gaussian

> library(nortest)
> ad.test(Y)

	Anderson-Darling normality test

data:  Y
A = 0.23155, p-value = 0.802

> shapiro.test(Y)

	Shapiro-Wilk normality test

data:  Y
W = 0.99892, p-value = 0.8293

(and not only from a statistical point of view: the theory of Gaussian random vectors confirms that the non-conditional distribution is actually Gaussian)

Here X is continuous. What if we consider a finite mixture here, i.e. X takes only a finite number of values? Actually, Teicher (1963) proved that it is not possible to have a non-conditional Gaussian distribution for Y. But in practice, would we really reject the Gaussian assumption for Y? If the number of classes is too small, yes. But with a large number of classes (a sufficiently large number of mixture components), it is possible,

> pv=function(k=2){
+ n=1e4
+ X=rnorm(n,10,2)
+ Q=quantile(X,(0:k)/k)
+ Q[1]=0
+ Xc=cut(X,Q,labels=1:k)
+ XcN=tapply(X,Xc,mean)
+ Xn=XcN[as.numeric(Xc)]
+ Y=1+3*Xn+rnorm(n)
+ ad.test(Y)$p.value}
 
> plot(2:100,Vectorize(pv)(2:100),type="l")
> abline(h=.05,col="red")

So here, it could be possible to also have a Gaussian distribution for Y. At least, to accept that assumption, statistically.

In the context of a Poisson regression, it is well known that it is not possible to have at the same time Y|X=x that is Poisson distributed (that’s a Poisson regression) and also Y that is Poisson distributed. That simply comes from the fact that \mathbb{E}[Y]=\mathbb{E}[\mathbb{E}[Y|X]], while \text{Var}[Y]=\mathbb{E}[\text{Var}[Y|X]]+\text{Var}[\mathbb{E}[Y|X]], and because of the conditional Poisson distribution, \text{Var}[Y|X]=\mathbb{E}[Y|X]. Thus, \text{Var}[Y]=\mathbb{E}[Y]+\text{Var}[\mathbb{E}[Y|X]]>\mathbb{E}[Y] as soon as \mathbb{E}[Y|X] is not constant. So Y cannot be Poisson distributed. But again, it could be possible, if heterogeneity is not too large, to accept the null assumption of a Poisson distribution for Y.
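
A small simulated illustration of that overdispersion (a sketch, with arbitrary parameter values): even if Y|X=x is Poisson, the unconditional variance of Y exceeds its mean,

set.seed(1)
n = 1e5
X = rnorm(n)
# conditionally Poisson, with intensity exp(1+X)
Y = rpois(n, exp(1+X))
c(mean(Y), var(Y))

The variance should be clearly larger than the mean, so the unconditional distribution of Y cannot be Poisson.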

More generally, it is very difficult to have a distribution family for Y|X=x that is also the distribution of the non-conditional variable Y. In the context of a finite mixture (X takes a finite number of values), Teicher (1963) proved that it was not possible, neither for the Gaussian distribution nor for the Gamma distribution. And to go further, check Monfrini (2002) (thanks Romuald for pointing out the reference).

Hence, as I keep saying, before running a regression model on Y|X=x with some given family, it is never a good idea to check whether the non-conditional distribution of Y belongs to the same family. Because there is no reason, usually, to remain in the same family.