A reinsurance case study for tomorrow’s class. The goal will be to price a nonproportional reinsurance contract for business interruption claims. Consider the following dataset,
> library(gdata)
> db=read.xls(
+ "https://perso.univ-rennes1.fr/arthur.charpentier/SIN_1985_2000-PE.xls",
+ sheet=1)
Content type 'application/vnd.ms-excel' length 183808 bytes (179 Kb)
open URL
==================================================
downloaded 179 Kb
As for any (standard) insurance contract, there are two parts in the pricing
- the expected number of claims
- the average cost of individual claims
Here, we do not have covariates (but it might be possible to use some, like the kind of industry, the location, etc).
Let us start with the expected number of claims, per year. Here is the daily frequency,
The data are rather old… but somehow, it is a good thing, since after ten years we can expect that most of the claims have been settled (we’ll discuss claims dynamics starting next week). To plot the graph above, we use
> date=db$DSUR
> D=as.Date(as.character(date),format="%Y%m%d")
> vD=seq(min(D),max(D),by=1)
> sD=table(D)
> d1=as.Date(names(sD))
> d2=vD[-which(vD%in%d1)]
> vecteur.date=c(d1,d2)
> vecteur.cpte=c(as.numeric(sD),rep(0,length(d2)))
> base=data.frame(date=vecteur.date,cpte=vecteur.cpte)
> plot(vecteur.date,vecteur.cpte,type="h",xlim=as.Date(as.character(
+ c(19850101,20111231)),format="%Y%m%d"))
Then, we can get a prediction of the daily number of business interruption claims, e.g. for any day in 2010 (assume that we had to price a reinsurance contract a few years ago), using a (standard) Poisson regression
> regdate=glm(cpte~date,data=base,family=poisson(link="log"))
> nd2010=data.frame(date=seq(as.Date(as.character(20100101),format="%Y%m%d"),
+ as.Date(as.character(20101231),format="%Y%m%d"),by=1))
> pred2010=predict(regdate,newdata=nd2010,type="response")
> sum(pred2010)
[1] 159.4757
Observe that using old data has drawbacks, since we get much more uncertainty if we use a regression on time (to include a possible trend).
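To get a feeling for that additional uncertainty, a possible sketch (only an illustration, reusing the base, regdate and nd2010 objects defined above) is to compare the trended prediction with an intercept-only Poisson model, and to plot pointwise (delta-method) bands around the daily predicted intensity for 2010,

> # intercept-only Poisson model (no time trend), for comparison
> regcst=glm(cpte~1,data=base,family=poisson(link="log"))
> sum(predict(regcst,newdata=nd2010,type="response"))
> # pointwise prediction bands for the daily intensity in 2010
> p2010=predict(regdate,newdata=nd2010,type="response",se.fit=TRUE)
> plot(nd2010$date,p2010$fit,type="l",ylim=range(p2010$fit-2*p2010$se.fit,
+ p2010$fit+2*p2010$se.fit))
> lines(nd2010$date,p2010$fit+2*p2010$se.fit,lty=2)
> lines(nd2010$date,p2010$fit-2*p2010$se.fit,lty=2)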
Say we have something like 160 claims over a given year, on average.
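As a quick sanity check (a sketch, reusing the vector D of claim dates defined above), one can also look at the empirical number of claims per year over the observation period, which will be lower than the 2010 projection if there is an upward trend,

> # number of claims per calendar year, and their average
> table(format(D,"%Y"))
> mean(table(format(D,"%Y")))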
> plot(D,db$COUTSIN,type="h")
Let us now focus on the cost of those claims. We have 2,400 claims in our dataset to fit a model (or at least to estimate how much a reinsurance contract might cost us). Assume that we would like to purchase a reinsurance contract for our very large claims, say the two largest claims per year. Over 16 years, the deductible should then be close to the cost of the 32nd largest claim, which was close to 15 million.
> quantile(db$COUTSIN,1-32/2400)/1e6
98.66667% 
 15.34579 
> abline(h=quantile(db$COUTSIN,1-32/2400),col="blue")
So consider some reinsurance contract with a deductible of 15 million. Unfortunately, we cannot find unlimited covers, so let us assume that a reinsurance company agrees on such a deductible, but with a limited cover of 35 million. The average cost (for the reinsurance company) is E[g(X)], where g(x) = min{(x - 15)_+, 35} is the indemnity function (in million), i.e. the part of each loss between 15 and 50 million.
A first idea is to look at the empirical average of that indemnity on our portfolio (the so-called burning cost). In R, the indemnity function is
> indemn=function(x) pmin((x-15)*(x>15),50-15)
We can check, on a few losses, that it is actually what we wish to compute,
> indemn(5)
[1] 0
> indemn(20)
[1] 5
> indemn(50)
[1] 35
Now, if we compute the average repayment by the reinsurance company over those 16 years, we get
> mean(indemn(db$COUTSIN/1e6))
[1] 0.1624292
So, per claim, the reinsurance company will pay, on average, about 162,430. With 160 claims per year, the pure premium should be close to 26 million,
> mean(indemn(db$COUTSIN/1e6))*160
[1] 25.98867
(again, for a 35 million cover, for claims that should occur, on average, twice a year). As we will see, a standard model in reinsurance is the Pareto distribution (or, to be more specific, a Generalized Pareto one), with distribution function

G(x) = 1 - [1 + ξ (x - μ)/β]^(-1/ξ), for x ≥ μ.
There are three parameters here
- the threshold μ (that we will consider as fixed here, but we will see its impact on reinsurance pricing; see also the mean excess plot sketched after this list)
- the scale parameter β (called beta in R)
- the tail index ξ (called xi in R)
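The choice of that threshold is usually discussed with a mean excess plot, which should be roughly linear above any level where the Generalized Pareto approximation is reasonable. A possible sketch, assuming the meplot() function from the evir package (the package consistent with the gpd(), dgpd() and pgpd() calls used below), is

> library(evir)
> # mean excess plot of the losses; the vertical line marks the 12 million threshold used below
> meplot(db$COUTSIN)
> abline(v=12e6,col="blue")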
The strategy is to consider a threshold below our deductible, e.g. 12 million. Then, given that the loss exceeds 12 million, we can fit a Generalized Pareto distribution,
> gpd.PL <- gpd(db$COUTSIN,12e6)$par.ests
> gpd.PL
          xi         beta 
7.004147e-01 4.400115e+06 
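A quick, informal check of that fit (a sketch, reusing gpd.PL and the pgpd() function) is to compare, above the 12 million threshold, the empirical survival function of the losses with the fitted Generalized Pareto one, for instance on a log-log scale,

> x=sort(db$COUTSIN[db$COUTSIN>12e6])
> n=length(x)
> # empirical survival probabilities (plotting positions), given the loss exceeds 12 million
> plot(x,(n:1)/(n+1),log="xy",xlab="loss",ylab="conditional survival probability")
> # fitted Generalized Pareto survival function
> lines(x,1-pgpd(x,gpd.PL[1],mu=12e6,beta=gpd.PL[2]),col="red")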
and compute the expected repayment of the layer, given that the loss exceeds the threshold (i.e. the expectation of min{(X - 15)_+, 35}, in million, under the fitted distribution of losses above 12 million), using
> E <- function(yinf,ysup,xi,beta,threshold){
+   as.numeric(integrate(function(x) (x-yinf)*dgpd(x,xi,mu=threshold,beta),
+   lower=yinf,upper=ysup)$value+
+   (1-pgpd(ysup,xi,mu=threshold,beta))*(ysup-yinf))
+ }
Here, given that a claim exceeds 12 million, the average repayment is close to 6 million
> E(15e6,50e6,gpd.PL[1],gpd.PL[2],12e6)
[1] 6058125
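As an informal sanity check of that numerical integration, one could compare it with a Monte Carlo estimate, simulating losses above the 12 million threshold from the fitted Generalized Pareto distribution with rgpd() (assuming, again, the evir package) and applying the indemnity of the layer; a possible sketch, which should return a value close to the one above, is

> set.seed(1)
> # simulate losses above the 12 million threshold, from the fitted model
> X=rgpd(1e6,gpd.PL[1],mu=12e6,beta=gpd.PL[2])
> # average repayment of the 35 million xs 15 million layer
> mean(pmin(pmax(X-15e6,0),35e6))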
Now, we have to take into account the probability of exceeding 12 million, which is here
> mean(db$COUTSIN>12e6)
[1] 0.02639296
So, if we summarize, we have on average 160 claims per year,
> p=sum(pred2010)
> p
[1] 159.4757
Only 2.6% will exceed 12 million
> mean(db$COUTSIN>12e6)
[1] 0.02639296
So, the yearly frequency of claims larger than 12 million is 4.2 claims
> p*mean(db$COUTSIN>12e6)
[1] 4.209036
And for a claim that exceeds 12 million, the average repayment is
> E(15e6,50e6,gpd.PL[1],gpd.PL[2],12e6)
[1] 6058125
So, the pure premium should be close to
> p*mean(db$COUTSIN>12e6)*E(15e6,50e6,gpd.PL[1],gpd.PL[2],12e6)
[1] 25498867
which (hopefully) is close to the empirical value we got. Actually, it is also possible to look at the impact of the threshold parameter, since it is clearly an intermediate value that could be changed. I mean, why 12 and not 10? Consider
> esp=function(threshold=12e6,p=sum(pred2010)){
+   (gpd.PL <- gpd(db$COUTSIN,threshold)$par.ests)
+   return(p*mean(db$COUTSIN>threshold)*E(15e6,50e6,gpd.PL[1],gpd.PL[2],threshold))
+ }
We can plot the pure premium as a function of that threshold,
> seuils=seq(1e6,15e6,by=1e6)
> plot(seuils,Vectorize(esp)(seuils),type="b",col="red")
which is between 24 and 26 million for large thresholds. Again, this is only the first step, and we could also price a higher reinsurance layer, like a reinsurance contract with a deductible of 50 million (we have our previous reinsurance contract for claims below that threshold), and a cover of 50 million, for instance. For those high layers, it becomes interesting to have a parametric model, which should be more robust than the empirical average.
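As a sketch (reusing the E() function, the value p and the Generalized Pareto fit above, with the same 12 million threshold), the pure premium of such a 50 million xs 50 million layer could be computed along the same lines,

> # expected yearly repayment for the layer from 50 to 100 million
> p*mean(db$COUTSIN>12e6)*E(50e6,100e6,gpd.PL[1],gpd.PL[2],12e6)

With only a handful of observed claims above 50 million, such a figure relies almost entirely on the fitted tail, which is precisely why a parametric model is useful for those high layers.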