This Tuesday, I will give a talk on fairness at the Akur8 ratemaking seminar. Slides are available online.
The myth of interpretability of econometric models
There are important discussions nowadays about data modeling, and about the choice between the “two cultures” (as described in Breiman (2001)), i.e. either econometric models or machine/statistical learning models. We recently discussed this issue in Econométrie et Machine Learning (so far only in French) with Emmanuel Flachaire and Antoine Ly. One argument often used by econometricians is the interpretability of econometric models, or at least the attempt to get an interpretable model.
We also have this discussion in actuarial science, for instance in ratemaking (or insurance pricing). Machine learning based models usually perform better (for some a priori chosen metric), but actuaries claim that econometric models are more easily interpretable. In the actuarial literature, we assume that the claim frequency Y is driven by some non-observable risk factor \Theta, and therefore, we do have heterogeneous risks in our portfolio, so it can be seen as legitimate to differentiate prices. Assume that this risk factor \Theta is strongly correlated with X_1, the age of the driver, because, in our portfolio, old drivers tend to have more accidents. Here, we could pretend to have a “causal story” (as defined in Freedman (2009)) because of a possible interpretation of the model. So it is natural here to consider a regression model of Y on X_1 to derive our actuarial pricing model. But assume that, possibly, the risk factor \Theta is also strongly correlated with X_2, which can be related to spatial features (say latitude, which denotes a north/south position), because, in our portfolio, drivers living in the south tend to have more accidents (roads are known to be more dangerous there). Here, we could pretend to have a second “causal story”.
Of course, since \Theta is strongly correlated with X_1 and X_2, it means that X_1 and X_2 are strongly correlated. Here also, this correlation can be interpreted (not in a causal way, as previously, but still), since we know that old people like to live in southern regions. So, what should we do here? Let us run some simulations to illustrate.
set.seed(123)
n=1e5
Theta=rnorm(n)
X1=Theta+rnorm(n)/8
X2=Theta+rnorm(n)/8
L=exp(-3+Theta)
Y=rpois(n,L)
B=data.frame(Y,X1,X2)
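As a quick check on the simulated data, a minimal sketch (nothing more than the empirical correlation) confirms that X_1 and X_2 are indeed strongly correlated,

cor(B$X1,B$X2)
# should be close to 1 (around 0.98 with this design, since both variables
# are Theta plus a small independent noise)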
Our first idea was to consider a model where Y is “explained” by the first variable X_1,
g1=glm(Y~X1,data=B,family=poisson)
summary(g1)

Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Inter.)    -2.97778    0.01544 -192.88   <2e-16 ***
X1           0.97926    0.01092   89.64   <2e-16 ***
As expected, our variable is “significant” but, probably more interestingly, X_2 has no impact on the residuals,
B$e1=residuals(g1,type="pearson")
g1e=lm(e1~X2,data=B)
summary(g1e)

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Inter.)    0.0003618  0.0031696   0.114    0.909
X2          0.0028601  0.0031467   0.909    0.363
The interpretation is that once we have corrected the claim frequency for the age of the driver, there is no spatial effect. So a good model should be based only on the age of the driver.
But we can also consider the other story. We can consider a model where Y is “explained” by the second variable X_2,
g2=glm(Y~X2,data=B,family=poisson)
summary(g2)

Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Inter.)    -2.97724    0.01544 -192.81   <2e-16 ***
X2           0.97915    0.01093   89.56   <2e-16 ***
Here also we have a valid model, that can be interpreted, and here also X_1 has no impact on the residuals,
B$e2=residuals(g2,type="pearson")
g2e=lm(e2~X1,data=B)
summary(g2e)

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Inter.)    0.0004863  0.0031733   0.153    0.878
X1          0.0027979  0.0031504   0.888    0.374
The story is similar here: if we correct for the spatial pattern, the claim frequency does not depend on the age of the driver.
So, what should we do now? We have two models, and each of them is as interpretable as the other. Note that we cannot use standard statistical tools to distinguish between the two: they are comparable,
AIC(g1)
[1] 51013.39
AIC(g2)
[1] 51013.15
Why not incorporate the two explanatory variables X_1 and X_2, at the same time, in our regression model, and let “the model” decide what to do…?
g=glm(Y~X1+X2,data=B,family=poisson)
summary(g)

Coefficients:
            Estimate Std. Error  z value Pr(>|z|)
(Inter.)    -2.98132    0.01547 -192.723  < 2e-16 ***
X1           0.49310    0.06226    7.920 2.38e-15 ***
X2           0.49375    0.06225    7.931 2.17e-15 ***
It looks like we completely lost the interpretability of the model, since our two explanatory variables are (strongly) correlated. Actually, instead of saying “use one, and drop the other one (since it brings no further information)”, it says “use both, each one will explain half of the effect”. Strange interpretation, isn’t it? So why not try some LASSO here?
library(glmnet)
fit=glmnet(x=as.matrix(B[,c("X1","X2")]),
y=B$Y,family="poisson")
plot(fit,xvar="lambda")
Here also, it says that we either keep both variables, or none. So it cannot be used for variable selection (which is an important motivation for using the LASSO technique). So, what should we do if we have several interpretable models, but no way to choose between them? Usually, we claim that we prefer to use a model with an interpretation. But what should be done here?
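As a side check, here is a minimal sketch (using cv.glmnet on the same simulated data; the object name cvfit is just for illustration) suggesting that, even at the cross-validated penalty, both variables are kept, with comparable weights,

library(glmnet)
set.seed(123)
# cross-validated LASSO for the Poisson regression of Y on (X1,X2)
cvfit=cv.glmnet(x=as.matrix(B[,c("X1","X2")]),y=B$Y,family="poisson")
# coefficients at the penalty minimizing the cross-validated deviance:
coef(cvfit,s="lambda.min")
# both X1 and X2 should get a non-zero coefficient of comparable size,
# instead of one of them being dropped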
Large claims, and ratemaking
During the course, we have seen that it is natural to assume that not only the individual claim frequency can be explained by some covariates, but the individual costs too. Of course, appropriate families should be considered to model the distribution of the cost Y, given some covariates \boldsymbol{X}. Here is the dataset we’ll use,
> sinistre=read.table("http://freakonometrics.free.fr/sinistreACT2040.txt",
+ header=TRUE,sep=";")
> sinistres=sinistre[sinistre$garantie=="1RC",]
> sinistres=sinistres[sinistres$cout>0,]
> contrat=read.table("http://freakonometrics.free.fr/contractACT2040.txt",
+ header=TRUE,sep=";")
> couts=merge(sinistres,contrat)
> tail(couts)
     nocontrat    no garantie    cout exposition zone puissance agevehicule
1919   6104006 11933      1RC 5376.04       0.37    E         6           1
1920   6107355 12349      1RC   51.63       0.74    E         4           1
1921   6108364 13229      1RC 1320.00       0.74    B         9           1
1922   6109171 11567      1RC 1320.00       0.74    B        13           1
1923   6111208 14161      1RC  970.20       0.49    E        10           5
1924   6111650 14476      1RC 1940.40       0.48    E         4           0
     ageconducteur bonus marque carburant densite region
1919            32    57     12         E      93     10
1920            45    57     12         E      72     10
1921            32   100     12         E      83      0
1922            56    50     12         E      93     13
1923            30    90     12         E      53      2
1924            69    50     12         E      93     13
Here, each line is a claim. Usual families to model the cost are the Gamma distribution, or the inverse Gaussian. Or the lognormal distribution (which is not in the exponential family, but one can assume that the logarithm of the cost can be modeled with a Gaussian distribution). Consider here only one covariate, e.g. the age of the car, and two different models: a Gamma one, and a lognormal one.
> age=0:20
> reggamma.sp <- glm(cout~agevehicule,family=Gamma(link="log"),
+ data=couts)
> Pgamma <- predict(reggamma.sp,newdata=data.frame(agevehicule=age),type="response")
The Gamma regression is a simple GLM, so it is not difficult. For a lognormal distribution, one should remember that the expected value of a lognormal distribution is not the exponential of the mean of the underlying Gaussian distribution: if \log Y\sim\mathcal{N}(\mu,\sigma^2), then \mathbb{E}[Y]=\exp(\mu+\sigma^2/2). A correction should thus be made here to get an unbiased estimator of the average cost,
> reglm.sp <- lm(log(cout)~agevehicule,data=couts)
> sigma <- summary(reglm.sp)$sigma
> mu <- predict(reglm.sp,newdata=data.frame(agevehicule=age))
> Pln <- exp(mu+sigma^2/2)
We can plot those two predictions on a single graph,
> plot(age,Pgamma,xlab="",ylab="",col="red",type="b",pch=4)
> lines(age,Pln,col="blue",type="b")
Here it is,
Observe that it is also possible to use splines, since there might be no reason for the age to appear here in a multiplicative way, as in the sketch below.
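Here is a minimal sketch of what such a model could look like (the names reggamma.bs and Pgamma.bs are just for illustration, and bs() is used with its defaults),

> # Gamma regression with a spline basis on the age of the car
> library(splines)
> reggamma.bs <- glm(cout~bs(agevehicule),family=Gamma(link="log"),
+ data=couts)
> Pgamma.bs <- predict(reggamma.bs,newdata=data.frame(agevehicule=age),type="response")
> # add the spline-based prediction to the previous graph
> lines(age,Pgamma.bs,col="red",lty=2)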
Here, the two models are rather close. Nevertheless, one should remember that the Gamma model can be extremely sensitive to large claims (I mean here really large claims). On the other hand, with the log-transformation for the lognormal model, it seems that this model is less sensitive to large events. Actually, if I use the complete dataset, the regressions are the following,
i.e. with a lognormal distribution, the average cost is decreasing with the age of the car, while it is increasing with a Gamma model. The main reason here is that there is one large (not to say huge) claim in the dataset,
> couts[which.max(couts$cout),]
        cout exposition zone puissance agevehicule ageconducteur
7842 4024601       0.22    B         9          13            19
     marque carburant densite region
7842      2         E      93     24
One young driver got a $4 million claim, with a 13-year-old car. This is an outlier for the Gamma regression, and it clearly influences the estimation (the second largest claim is only one third of this one). Since there is a clear influence of large claims on the estimation of the average cost, a natural idea might be to remove those large claims. Or perhaps to see them as different from normal claims: normal claims can be explained by some covariates, but perhaps large claims should be shared not only within their own class, but among all the insured in the portfolio. To formalize this idea, observe that we can write
\mathbb{E}[Y|\boldsymbol{X}]=\mathbb{E}[Y|\boldsymbol{X},Y\leq s]\cdot\mathbb{P}[Y\leq s|\boldsymbol{X}]+\mathbb{E}[Y|\boldsymbol{X},Y>s]\cdot\mathbb{P}[Y>s|\boldsymbol{X}]

where the first term is associated with normal-sized claims, while the second one corresponds to large claims. It is then possible to run three regressions: one on the cost of normal-sized claims, one on the cost of large claims, and one on the indicator of having a large claim, given that a claim occurred. The code here is something like this: a large claim is, here, one above $10,000 (a threshold one has to fix),
> s = 10000
> couts$normal = (couts$cout<=s)
> mean(couts$normal)
[1] 0.9818087
i.e. large claims represent about 2% of the claims in our dataset. We can run three sets of regressions, with smoothed regressions on the age of the car. The first one models the individual costs of large claims,
> indice = which(couts$cout>s)
> mean(couts$cout[indice])
[1] 34471.59
> library(splines)
> regB=glm(cout~bs(agevehicule),data=couts,
+ subset=indice,family=Gamma(link="log"))
> ypB=predict(regB,newdata=data.frame(agevehicule=age),type="response")
> ypB2=mean(couts$cout[indice])
the second one models the individual costs of normal-sized claims,
> indice = which(couts$cout<=s)
> mean(couts$cout[indice])
[1] 1335.878
> regA=glm(cout~bs(agevehicule),data=couts,
+ subset=indice,family=Gamma(link="log"))
> ypA=predict(regA,newdata=data.frame(agevehicule=age),type="response")
> ypA2=mean(couts$cout[indice])
and finally, a third one models the probability of having a normal-sized claim, given that a claim occurred,
> regC=glm(normal~bs(agevehicule),data=couts,family=binomial)
> ypC=predict(regC,newdata=data.frame(agevehicule=age),type="response")
> regC2=glm(normal~1,data=couts,family=binomial)
> ypC2=predict(regC2,newdata=data.frame(agevehicule=age),type="response")
Note that, each time, we have something that can be interpreted either conditionally on the covariates, e.g. \mathbb{E}[Y|\boldsymbol{X},Y>s], or unconditionally, e.g. \mathbb{E}[Y|Y>s] – i.e. no covariate is considered in the latter. On the graph below, we plot

\mathbb{E}[Y|\boldsymbol{X},Y\leq s]\cdot\mathbb{P}[Y\leq s|\boldsymbol{X}]+\mathbb{E}[Y|\boldsymbol{X},Y>s]\cdot\mathbb{P}[Y>s|\boldsymbol{X}]
where Gamma regressions – with splines – are considered for the average costs, while logistic regressions – again with splines – are considered to model probabilities.
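A minimal sketch of how the three components computed above (ypA, ypB and ypC) could be combined into this quantity (the object name yp is just for illustration; recall that ypC estimates the probability of a normal-sized claim),

> # E[Y|X,Y<=s].P[Y<=s|X] + E[Y|X,Y>s].P[Y>s|X], as a function of the age of the car
> yp = ypA*ypC + ypB*(1-ypC)
> plot(age,yp,xlab="age of the car",ylab="expected claim cost",type="b",col="purple")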
(but be careful with splines: on the borders, since we do not have a lot of observations, the behavior can be… odd, and adjustments should be made to obtain an adequate level of premium). If it is legitimate to assume that normal-sized claims can be explained by some covariates, perhaps large claims (or extremely large ones) are just purely random, i.e. not a function of any covariate at all, i.e.

\mathbb{E}[Y|\boldsymbol{X},Y>s]=\mathbb{E}[Y|Y>s]
To go one step further, it might also be possible to assume that not only is the size of the claim (given that it is a large one) not a function of any covariate, but neither is the probability of having an extremely large claim, i.e.

\mathbb{P}[Y>s|\boldsymbol{X}]=\mathbb{P}[Y>s]
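A minimal sketch of those two variants, reusing ypB2 (the flat average cost of large claims) and ypC2 (the constant probability of a normal-sized claim) computed above (the object names yp2 and yp3 are just for illustration),

> # variant 1: the cost of large claims does not depend on the covariates
> yp2 = ypA*ypC + ypB2*(1-ypC)
> # variant 2: neither the cost nor the probability of a large claim depends on them
> yp3 = ypA*ypC2 + ypB2*(1-ypC2)
> lines(age,yp2,type="b",col="blue")
> lines(age,yp3,type="b",col="red")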
From the first part, we’ve seen that the distribution considered had an impact on the prediction, and in the second, we’ve seen that the definition of large claims (and how to deal with them) also has an impact. So clearly, actuaries have some leverage when working on ratemaking…