Tag Archives: pricing

Talk at University of Illinois Urbana-Champaign

This Friday, it is our semester break in Montréal, so I will be giving a talk at the University of Illinois Urbana-Champaign, on fairness and discrimination in actuarial pricing. As mentioned in the abstract, the talk will be based on two recent papers, The Fairness of Machine Learning in Insurance: New Rags for an Old Man? and A fair pricing model via adversarial learning.

Slides are now online.

Variance decomposition and price segmentation in Insurance

Today, I was giving a talk at the Economics department, and I got a very interesting question about some tables I keep showing to explain why insurance companies like segmentation. The tables illustrate three different cases. Here, S stands for the individual (random) loss.

  • the first one is the case where the premium asked is the same for all the insured – i.e. the pure premium \mathbb{E}[S]

As explained, the loss is here on an individual basis, so, per policy, the insurer faces the (random) loss S-\mathbb{E}[S], which is, on average, null. That’s the second line. For the last line, I keep saying that we look at the overall loss of the insurer, but that’s not correct: with a factor n, we would have the variance of the total loss for the insurance company. We simply removed the n factor in the table.

  • then, we have perfectly observable heterogeneity: the insured have a risk factor \Omega, observable, and in that case, the ‘pure’ premium is \mathbb{E}[S|\Omega]

That’s what we have below. Here again, on average, the insured should have a null profit. And the total variance (which was \text{Var}[S] in our previous example) is now split into two parts (that’s basically Pythagoras’ theorem).
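To be explicit, this is simply the law of total variance, \text{Var}[S]=\mathbb{E}[\text{Var}[S|\Omega]]+\text{Var}[\mathbb{E}[S|\Omega]], where the first term is the variance of the insurer’s result per policy (since that result is S-\mathbb{E}[S|\Omega]), and the second term is the variance of the premiums across the portfolio.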

The interpretation is the following.

And then, I usually mention the third and last case, which is more realistic

  • the risk factor \Omega is not observable, but segmentation is still possible using some proxy of the risk factor, obtained from some covariates, and the ‘pure’ premium is \mathbb{E}[S|\boldsymbol{X}]

And here also, there is a nice interpretation, because of the variance decomposition: there is one part that we observed previously, with some ‘perfect pricing’, and an additional part (that is positive) that is related to the fact that the covariates are just a proxy of the risk factor…

The term on the left is then a lower bound, attained if, using the covariates available for pricing, we can actually recover the risk factor.
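More formally, under the natural assumption that, given the risk factor \Omega, the covariates \boldsymbol{X} carry no additional information on S, we have \mathbb{E}[S|\boldsymbol{X}]=\mathbb{E}[\mathbb{E}[S|\Omega]|\boldsymbol{X}], so that \text{Var}[\mathbb{E}[S|\boldsymbol{X}]]\leq\text{Var}[\mathbb{E}[S|\Omega]], and therefore \mathbb{E}[\text{Var}[S|\boldsymbol{X}]]\geq\mathbb{E}[\text{Var}[S|\Omega]]: proxy-based pricing cannot do better than ‘perfect pricing’.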

That was my story, but the fact that n (the portfolio size) was not mentioned in the tables was a bit confusing… So I decided to create some graphs to illustrate those three cases (a small simulation sketch, reproducing them, is given after the third case below)

  • same premium for everyone

Consider some simple simulations. On the graph on the left, we have on the x-axis the risk factor, and on the y-axis, the loss (going roughly from 0 to 20). The pure premium is the average of those losses; here, it is 10. That’s the plain red line (on the left). In the middle, the y-axis is the insured’s profit/loss per policy. Someone with a loss close to 0 means a gain of 10, someone with a loss close to 20 means a loss of 10. On average, there is no profit (that’s the plain line). And then, on the right, we have the distribution of the profit/loss (per contract). Again, on average it’s 0, with some variance.

  • premium based on covariates

Consider here a simple covariate x: assume that we’ve been able to create a binary variable that can distinguish the low risks from the high risks. Here, there are two levels for the premium. The low premium is close to 6, and the high one is close to 14. That’s again the graph on the left.

Then we have the profit/loss per policy for the insured, in the middle. Here, when the loss was close to 0, the gain is smaller: it is 6 (while it was 10 before). When the loss was close to 10, it previously meant a 0 profit, but now it is either a loss of 4 or a gain of 4. The profit/loss distribution is now on the right. There is less dispersion, and less variance. That’s the decrease in variance we discussed before. To summarize, segmentation does reduce the variability of the result for the insurance company. That’s what we observe on the right.

  • premium based on the risk factor

Assume now that \Omega is observable, and that we use it for our pricing. The premium is now continuous, and it is the red line, on the left. The profit/loss (in the middle) is the difference between the loss and its expected value (conditional on the risk factor). And on the right, we have the distribution.

As expected, there is much less variability in the profit/loss distribution of the insurance company in that case. And actually, that’s a lower bound for the variance of the result of the insurance company… I hope the graphs clarify what’s going on here…
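As promised, here is a minimal R sketch of those three cases (the risk factor and the loss distribution are hypothetical choices, not the ones used for the graphs above); the average premium is 10, and we compare the per-policy variance of the insurer’s result under the three pricing rules,

 set.seed(1)
 n=1e4
 Omega=runif(n)                             # (hypothetical) risk factor
 S=rgamma(n,shape=2,rate=2/(5+10*Omega))    # individual losses, E[S|Omega]=5+10*Omega
 X=(Omega>.5)                               # binary proxy of the risk factor
 p1=rep(mean(S),n)                          # same premium for everyone
 p2=ave(S,X)                                # premium based on the covariate (two levels)
 p3=5+10*Omega                              # premium based on the risk factor, E[S|Omega]
 c(var(S-p1),var(S-p2),var(S-p3))           # variance of the result, per policy

The three values should be decreasing, which is precisely the point of the tables.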

The myth of interpretability of econometric models

There are important discussions nowadays about data modeling, to choose between the “two cultures” (as mentioned in Breiman (2001)), i.e. either econometric models or machine/statistical learning models. We discussed this issue recently in Econométrie et Machine Learning (so far only in French) with Emmanuel Flachaire and Antoine Ly. One argument often used by econometricians is the interpretability of econometric models, or at least the attempt to get an interpretable model.

We also have this discussion in actuarial science, for instance in ratemaking (or insurance pricing). Machine learning based models usually perform better (for some a priori chosen metric), but actuaries claim that econometric models are more easily interpretable. In the actuarial literature, we assume that claim frequency Y is driven by some non-observable risk factor \Theta, and therefore, we do have heterogeneous risks in our portfolio. And it can be seen as legitimate to differentiate prices. Assume that this risk factor \Theta is strongly correlated with X_1, the age of the driver, because in our portfolio, old drivers tend to have more accidents. Here, we could pretend to have a “causal story” (as defined in Freedman (2009)) because of a possible interpretation of the model. So it is natural here to consider a regression model of Y on X_1 to derive our actuarial pricing model. But assume that, possibly, the risk factor \Theta is also strongly correlated with X_2, which can be related to spatial features (say latitude, which denotes a north/south position), because in our portfolio, drivers living in the south tend to have more accidents (roads are known to be more dangerous there). Here, we could pretend to have a second “causal story”.

Of course, since \Theta is strongly correlated with X_1 and X_2, it means that X_1 and X_2 are strongly correlated. Here also, this correlation can be interpreted (not in a causal way as previously, but still), since we know that old people like to live in southern regions. So, what should we do here? Let us run some simulations to illustrate.

 set.seed(123)
 n=1e5
 Theta=rnorm(n)          # non-observable risk factor
 X1=Theta+rnorm(n)/8     # first proxy (age)
 X2=Theta+rnorm(n)/8     # second proxy (latitude)
 L=exp(-3+Theta)         # Poisson intensity
 Y=rpois(n,L)            # claim counts
 B=data.frame(Y,X1,X2)

Our first idea was to consider a model where Y is “explained” by the first variable X_1,

 g1=glm(Y~X1,data=B,family=poisson)
 summary(g1)
 
Coefficients:
         Estimate Std. Error z value Pr(>|z|)    
(Inter.) -2.97778    0.01544 -192.88   <2e-16 ***
X1        0.97926    0.01092   89.64   <2e-16 ***

As expected, our variable is "significant", but also, probably more interestingly, X_2 has no impact on the residuals

 B$e1=residuals(g1,type="pearson")
 g1e=lm(e1~X2,data=B)
 summary(g1e)
 
Coefficients:
          Estimate Std. Error t value Pr(>|t|)
(Inter.) 0.0003618  0.0031696   0.114    0.909
X2       0.0028601  0.0031467   0.909    0.363

The interpretation is that once we corrected claim frequency for the age of the drivers, there is no spatial effect here. So, a good model should be based only on the age of the drivers.

But we can also consider the other story. We can consider a model where Y is “explained” by the second variable X_2,

 g2=glm(Y~X2,data=B,family=poisson)
summary(g2)
 
Coefficients:
         Estimate Std. Error z value Pr(>|z|)    
(Inter.) -2.97724    0.01544 -192.81   <2e-16 ***
X2        0.97915    0.01093   89.56   <2e-16 ***

Here also we have a valid model, that can be interpreted, and here also X_1 has no impact on the residuals

 B$e2=residuals(g2,type="pearson")
 g2e=lm(e2~X1,data=B)
 summary(g2e)
 
Coefficients:
          Estimate Std. Error t value Pr(>|t|)
(Inter.) 0.0004863  0.0031733   0.153    0.878
X1       0.0027979  0.0031504   0.888    0.374

The story is similar here. If we correct for the spatial pattern, claims frequency does not depend on the age of the driver.

So, what should we do now? We do have two models, and each of them is as interpretable as the other one. Note that we cannot use any statistical tool to distinguish the two: they are comparable

 AIC(g1)
[1] 51013.39
 AIC(g2)
[1] 51013.15

Why not incorporate the two explanatory variables X_1 and X_2, at the same time, in our regression model, and let “the model” decide what to do…?

 g=glm(Y~X1+X2,data=B,family=poisson)
 summary(g)
 
Coefficients:
         Estimate Std. Error  z value Pr(>|z|)    
(Inter.) -2.98132    0.01547 -192.723   <2e-16 ***
X1        0.49310    0.06226    7.920 2.38e-15 ***
X2        0.49375    0.06225    7.931 2.17e-15 ***

It looks like we completely lost the interpretability of the model, since our two explanatory variables are (strongly) correlated. Actually, instead of saying "use one, and drop the other one (since it brings no further information)", it says "use both, each one will explain half of the effect". Strange interpretation, isn’t it? So why not try some LASSO here?

library(glmnet)
fit=glmnet(x=as.matrix(B[,c("X1","X2")]), 
    y=B$Y,family="poisson")
plot(fit,xvar="lambda")

Here also, it says that we either keep both, or none. So it cannot be used for variable selection (which is an important motivation for using the LASSO technique). So, what should we do if we have several interpretable models, but no way to choose between them? Because usually, we claim that we prefer to use a model with an interpretation. But what should be done here?
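If one wants a specific value of the penalty, cross-validation can be used; a quick sketch, continuing the glmnet fit above (cvfit is just a name used here for illustration),

 cvfit=cv.glmnet(x=as.matrix(B[,c("X1","X2")]),
     y=B$Y,family="poisson")
 coef(cvfit,s="lambda.min")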

Pricing Game, the results

Thursday, I will be in Paris, to discuss the results we got from the pricing game. I will present the 12 or 13 models that were sent to me, and discuss what happened when I created a market where the models were competing. One or two models were clearly underestimating the losses, so with the premiums as they were sent, each time, one company got an 80% market share and an over 250% loss ratio. So I decided to normalize all the premiums, so that the average premium was the same for all the companies. Slides are now available.

Pricing Game

In November, with Romuald Elie and Jérémie Jakubowicz, we will organize a session during the 100% Actuaires day, in Paris, based on a “pricing game”. We provide two datasets (motor insurance, third party claims), with 2 years of experience, and 100,000 policies. Each ‘team’ has to submit premium proposals for 36,000 potential insured for the third year (third party, material + bodily injury).

We will act as a ‘price aggregator’ for all the teams, with simple matching rules (the cheapest insurer is chosen, or more complex rules, based on random selection among cheap insurers; a small sketch is given below). The complete description is available online.
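Just to fix ideas, here is a small (purely hypothetical) R sketch of such matching rules, where quotes would be a matrix with one row per potential insured and one column per competing team,

 cheapest=apply(quotes,1,which.min)      # rule 1: everyone picks the cheapest insurer
 near_cheap=function(p,eps=.05){         # rule 2: pick at random among the insurers
   ok=which(p<=min(p)*(1+eps))           # whose premium is within eps of the cheapest
   sample(rep(ok,2),1)}                  # rep() avoids sample()'s scalar behaviour
 chosen=apply(quotes,1,near_cheap)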

R codes to read the datasets are

> training <- read.csv2(
+ "http://freakonometrics.free.fr/training.csv")
> dim(training)
[1] 100021     20
> pricing <- read.csv2(
+ "http://freakonometrics.free.fr/pricing.csv")
> dim(pricing)
[1] 36311    15

Everyone is invited to play! The more, the merrier….

“Improving Segmentation” (using Lorenz curves, or sort of)

This afternoon, André sent me an interesting graph about the use of the Lorenz curve in the context of insurance pricing (and modeling).

It is some sort of Lorenz curve, upside-down, with on the x-axis the proportion of the population, and on the y-axis the proportion of the losses. The important point is that the population is sorted according to their risk, i.e. their premium. The code to generate such a curve is actually quite simple,

L <- function(u,varx="premium",vary="losses"){
  # 'base' is the portfolio dataset, one row per policy, with premiums and losses
  base=base[order(base[,varx],decreasing=TRUE),] # sort policies by decreasing premium
  base$cum=(1:nrow(base))/nrow(base)             # cumulative proportion of the population
return(sum(base[base$cum<=u,vary])/
sum(base[,vary]))}                               # proportion of the losses

vu=seq(0,1,by=.01)
vv=Vectorize(function(u) L(u))(vu)

My concern was more about two labels on the figure, with “perfect pricing” on the top-left and “average pricing” on the first diagonal. What could that possibly mean? Is there even such a thing as a “perfect pricing”? In order to understand what we plot here, let us generate some dataset, and fit some models, including things that might be seen as the “perfect model”: the price based on the parameters used to generate the data, and the model used to generate the data, fitted on the data.
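Here is a minimal sketch of the two benchmarks, with purely hypothetical data (theta is just an illustrative individual expected loss), reusing the L function above: the curve obtained when the premium is the ‘true’ expected loss used to generate the data, versus the curve obtained when everyone pays the same premium,

 set.seed(1)
 n=1e4
 theta=rexp(n)                        # (hypothetical) individual expected loss
 base=data.frame(losses=rpois(n,theta),
                 premium=theta)       # 'perfect pricing': premium = E[losses]
 vu=seq(0,1,by=.01)
 plot(vu,Vectorize(function(u) L(u))(vu),type="l")
 base$premium=mean(base$losses)       # 'average pricing': same premium for everyone
 lines(vu,Vectorize(function(u) L(u))(vu),col="red")

The first curve should be well above the diagonal, while the second one should stay close to it.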


Pricing Game (100% Actuaires)

In early November, with Romuald Elie and Jérémie Jakubowicz, we should be running a “pricing game” during the 100% Actuaires day. We provide a motor insurance dataset, with 2 years of observations, and 100,000 insurance policies. Each team has to send premium proposals for a bit more than 36,000 potential insured, for the third year, for motor third party liability (material and bodily injury, both pieces of information – frequency and cost – being in the datasets provided).

We will then play the role of a broker (or price comparison website) between the various teams (with a rather simple principle: the insured chooses the cheapest insurer, all of them offering the same coverage, or a few other variants to spice up the analysis a little). The complete description is online.

The R code to read the datasets is the following

> training <- read.csv2(
+ "http://freakonometrics.free.fr/training.csv")
> dim(training)
[1] 100021     20
> pricing <- read.csv2(
+ "http://freakonometrics.free.fr/pricing.csv")
> dim(pricing)
[1] 36311    15

Everyone can take part, no need to be registered for the day to send me something! However, as indicated in the description, I want a file with the policy number and the proposed premium, but also the code and a quick description of the methodology and of the variables used. See you in November for the analysis of the results of the game.

Pricing options on multiple assets

I am a big fan of trees. It is a very nice way to see how financial pricing works, for derivatives. And with a matrix-based language (R for instance), it is extremely simple to compute almost everything. Even options on multiple assets. Let us see how it works. But first, I have to assume that everyone knows about trees, and risk neutral probabilities, and is familiar with standard financial derivatives. Just in case, I can upload some old slides of the first course on asset pricing we gave a few years ago at École Polytechnique.

Let us get back to the pricing of (European) call options, with trees. The idea is simple. We have to fix the number of periods. Let us start with only one (as described in the slides above). The stock has price S and can go either up, and then have price S u, or go down, and have price S d. And the fundamental theorem of asset pricing says that we do not really care about probabilities of going up, or down. Assuming that we can buy or sell that stock, and that a risk free asset is available on the market, it is possible to price any contingent financial product, like a financial option. Since we know the final value of the option when the stock goes either up, or down, it is possible to replicate the payoff of that option using the stock and the risk free asset. And we can prove that the price of the option is simply C=e^{-rT}\left[p\,C_u+(1-p)\,C_d\right], where C_u and C_d denote the payoffs of the option when the stock goes up and down, and where the probability p is the so-called risk neutral probability, p=\dfrac{e^{rT}-d}{u-d}.

So, we’ve done it here with only one single period, but it is possible to extend it to multiple periods. The idea is to keep that multiplicative representation of possible values of the stock, and to get a recombining tree. At step 2, the stock can take only three different values: went up twice, went down twice, or went up and down (or the reverse, but we don’t care: this is the point of recombining). If we write things down, then we can prove that the price of the call with strike K is C=e^{-rT}\sum_{k=0}^{n}\binom{n}{k}p^k(1-p)^{n-k}\left(S\,u^k d^{n-k}-K\right)_+, for some probability parameter p (the so-called risk neutral probability, if it is unique). But we do not really care about that closed formula: the goal is to write an algorithm which computes the tree, and returns the price of a call option (say). But before starting, we have to make a connection between that model with up and down prices, and the parameters of the Black-Scholes diffusion for the stock price. The idea is to identify the first two moments of the returns over a period of length T/n (where, under the risk neutral probability, the trend is the risk free rate), which leads to the standard Cox-Ross-Rubinstein choice u_n=e^{\sigma\sqrt{T/n}}, d_n=u_n^{-1} and p_n=\dfrac{e^{rT/n}-d_n}{u_n-d_n}.
The code might look like that

n=5; T=1; r=0.05; sigma=.4; S=50; K=50
price=function(n){
u.n=exp(sigma*sqrt(T/n));
d.n=1/u.n
p.n=(exp(r*T/n)-d.n)/(u.n-d.n)
SJ=matrix(0,n+1,n+1)                 # stock values on the (recombining) tree
SJ[1,1]=S
for(i in(2:(n+1)))
{for(j in(1:i)){SJ[i,j]=S*u.n^(i-j)*d.n^(j-1)}}
OPT=matrix(0,n+1,n+1)                # option values on the tree
OPT[n+1,]=(SJ[n+1,]-K)*(SJ[n+1,]>K)  # payoff of the call at maturity
for(i in(n:1))                       # backward recursion
{for(j in(1:i)){OPT[i,j]=exp(-r*T/n)*(OPT[i+1,j]*p.n+
(1-p.n)*OPT[i+1,j+1])}}
return(OPT[1,1])
}

We can plot the evolution of the price, as a function of the number of time periods (or subdivision of the time interval, from now till maturity of the European option),

N=10:400
V=Vectorize(price)(N)
plot(N,V,type="l")

Note that we can compare with the Black-Scholes price of this call option, given by C=S\,\Phi(d_1)-Ke^{-rT}\Phi(d_2), where \Phi is the c.d.f. of the standard normal distribution, d_1=\dfrac{1}{\sigma\sqrt{T}}\left[\log(S/K)+\left(r+\dfrac{\sigma^2}{2}\right)T\right] and d_2=d_1-\sigma\sqrt{T},
d1=1/(sigma*sqrt(T))*(log(S/K)+(r+sigma^2/2)*T)
d2=d1-sigma*sqrt(T)
BS=S*pnorm(d1)-K*exp(-r*T)*pnorm(d2)
abline(h=BS,lty=2,col="red")
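As a side note, the backward recursion does not actually need the full matrix: a single vector is enough. A minimal sketch (price.vector is just an illustrative name, using the same global parameters as above),

price.vector=function(n){
u.n=exp(sigma*sqrt(T/n)); d.n=1/u.n
p.n=(exp(r*T/n)-d.n)/(u.n-d.n)
V=pmax(S*u.n^(n:0)*d.n^(0:n)-K,0)                 # payoffs of the call at maturity
for(i in(n:1)) V=exp(-r*T/n)*(p.n*V[1:i]+(1-p.n)*V[2:(i+1)])
V}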

The code is clearly not optimal, but at least, we see what’s going on. As illustrated in the sketch above, we do not need a matrix when we compute the price of the option by backward recursion: a single vector is enough. But this matrix is nice, because we can use it to price American options. For instance, with the code below, we compare the price of an American put option and the price of a European put option.

price.american=function(n,opt="put"){
u.n=exp(sigma*sqrt(T/n)); d.n=1/u.n
p.n=(exp(r*T/n)-d.n)/(u.n-d.n)
SJ=matrix(0,n+1,n+1)
SJ[1,1]=S
for(i in(2:(n+1)))
{for(j in(1:i)) {SJ[i,j]=S*u.n^(i-j)*d.n^(j-1)}}
OPTe=matrix(0,n+1,n+1)
OPTa=matrix(0,n+1,n+1)
if(opt=="call"){
OPTa[n+1,]=(SJ[n+1,]-K)*(SJ[n+1,]>K)
OPTe[n+1,]=(SJ[n+1,]-K)*(SJ[n+1,]>K)
}
if(opt=="put"){
OPTa[n+1,]=(K-SJ[n+1,])*(SJ[n+1,]<K)
OPTe[n+1,]=(K-SJ[n+1,])*(SJ[n+1,]<K)
}
for(i in(n:1))
{
for(j in(1:i))
{if(opt=="call"){
OPTa[i,j]=max((SJ[i,j]-K)*(SJ[i,j]>K),
exp(-r*T/n)*(OPTa[i+1,j]*p.n+
(1-p.n)*OPTa[i+1,j+1]))}
if(opt=="put"){
OPTa[i,j]=max((K-SJ[i,j])*(K>SJ[i,j]),
exp(-r*T/n)*(OPTa[i+1,j]*p.n+
(1-p.n)*OPTa[i+1,j+1]))}

OPTe[i,j]=exp(-r*T/n)*(OPTe[i+1,j]*p.n+
(1-p.n)*OPTe[i+1,j+1])}}
priceop=c(OPTe[1,1],OPTa[1,1])
names(priceop)=c("E","A")
return(priceop)}

It is possible to compare those prices, obtained on trees, with prices given by closed-form (approximate) formulas.

> d1=1/(sigma*sqrt(T))*(log(S/K)+(r+sigma^2/2)*T)
> d2=d1-sigma*sqrt(T)
> (BS=-S*pnorm(-d1)+K*exp(-r*T)*pnorm(-d2)  )
[1] 6.572947
> N=10:200
> M=Vectorize(price.american)(N)
> plot(N,M[1,],type='l',col='blue',ylim=range(M))
> lines(N,M[2,],type='l',col='red')
> abline(h=BS,lty=2,col='blue')
> library(fOptions)
> (am=BAWAmericanApproxOption(TypeFlag =
+ "p", S = S,X = K, Time = T, r = r,
+ b = r, sigma =sigma)@price)
[1] 6.840335
> abline(h=am,lty=2,col='red')

Another great thing with trees is that it becomes possible to plot the region where it is optimal to exercise our right to sell the stock.
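Here is one possible sketch to visualize that region (reusing the global parameters defined above, for the put option): at each node of the tree, early exercise is flagged as optimal when the intrinsic value is at least the continuation value,

n=200
u.n=exp(sigma*sqrt(T/n)); d.n=1/u.n
p.n=(exp(r*T/n)-d.n)/(u.n-d.n)
OPT=pmax(K-S*u.n^(n:0)*d.n^(0:n),0)      # put payoffs at maturity
ex_t=ex_S=NULL
for(i in(n:1)){
St=S*u.n^((i-1):0)*d.n^(0:(i-1))         # stock values at time (i-1)*T/n
cont=exp(-r*T/n)*(p.n*OPT[1:i]+(1-p.n)*OPT[2:(i+1)])
intr=pmax(K-St,0)
OPT=pmax(intr,cont)                      # value of the American put at that date
idx=which(intr>=cont & intr>0)           # nodes where early exercise is optimal
ex_t=c(ex_t,rep((i-1)*T/n,length(idx)))
ex_S=c(ex_S,St[idx])}
plot(ex_t,ex_S,pch=19,cex=.2,xlab="time",ylab="stock price")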

Let us move now to a model with two assets, as suggested by Rubinstein (1994). First, observe that a discretization of two independent Brownian motions will be based on two independent random walks, whose joint moves take values in \{(+1,+1),(+1,-1),(-1,+1),(-1,-1)\}, i.e. both went up (NW), both went down (SE), and one went up while the other went down (either NE or SW). With independent and symmetric random walks, the probabilities will be respectively 1/4. And if we move one step forward, we have the following tree.

Here it is still recombining. But the size will increase much faster than in the univariate case. Now, assume that there might be some correlation between the two assets. Then one can keep the same four joint moves, but choose the four (risk neutral) probabilities p_{uu}, p_{ud}, p_{du} and p_{dd} so as to reproduce a specific correlation \rho.

And again, the idea is then to identify the first two moments. With v_i=r-\sigma_i^2/2 and time step T/n, this gives us the following system of equations for the four respective (risk neutral) probabilities: p_{uu}+p_{ud}+p_{du}+p_{dd}=1, p_{uu}+p_{ud}-p_{du}-p_{dd}=\sqrt{T/n}\,\dfrac{v_1}{\sigma_1}, p_{uu}-p_{ud}+p_{du}-p_{dd}=\sqrt{T/n}\,\dfrac{v_2}{\sigma_2} and p_{uu}-p_{ud}-p_{du}+p_{dd}=\rho.

For those willing to do the maths, please do. The answer should be p_{uu}=\dfrac{1}{4}\left[1+\rho+\sqrt{T/n}\left(\dfrac{v_1}{\sigma_1}+\dfrac{v_2}{\sigma_2}\right)\right], p_{ud}=\dfrac{1}{4}\left[1-\rho+\sqrt{T/n}\left(\dfrac{v_1}{\sigma_1}-\dfrac{v_2}{\sigma_2}\right)\right], p_{du}=\dfrac{1}{4}\left[1-\rho+\sqrt{T/n}\left(-\dfrac{v_1}{\sigma_1}+\dfrac{v_2}{\sigma_2}\right)\right], and for the last one p_{dd}=\dfrac{1}{4}\left[1+\rho+\sqrt{T/n}\left(-\dfrac{v_1}{\sigma_1}-\dfrac{v_2}{\sigma_2}\right)\right].
The code here looks like that

price.spead=function(n){
T=1; r=0.05; K=0
S1=105
S2=100
sigma1=0.4
sigma2=0.3
rho=0.5
u1.n=exp(sigma1*sqrt(T/n)); d1.n=1/u1.n
u2.n=exp(sigma2*sqrt(T/n)); d2.n=1/u2.n

v1=r-sigma1^2/2; v2=r-sigma2^2/2
puu.n=(1+rho+sqrt(T/n)*(v1/sigma1+v2/sigma2))/4
pud.n=(1-rho+sqrt(T/n)*(v1/sigma1-v2/sigma2))/4
pdu.n=(1-rho+sqrt(T/n)*(-v1/sigma1+v2/sigma2))/4
pdd.n=(1+rho+sqrt(T/n)*(-v1/sigma1-v2/sigma2))/4
k=0:n
un=matrix(1,n+1,1)
SJ= (S1 * d1.n^k * u1.n^(n-k-1)) %*% t(un) -
un %*%t(S2 * d2.n^k * u2.n^(n-k-1))
OPT=(SJ)*(SJ>K)
for(k in(n:1))
{
OPT0=matrix(0,k,k)
for(i in(1:k))
{
for(j in(1:k))
{OPT0[i,j]=(OPT[i,j]*puu.n+OPT[i+1,j]*pdu.n+
OPT[i,j+1]*pud.n+OPT[i+1,j+1]*pdd.n)*exp(-r*T/n)}}
OPT=OPT0}
return(OPT[1,1])}

If we look at the details, consider two periods, like on the figure above: there are nine values for the spread,

> n=2
> SJ
[,1]      [,2]       [,3]
[1,]  32.02217  84.86869 119.443578
[2,] -47.84652   5.00000  39.574891
[3,] -93.20959 -40.36308  -5.788184

and the payoff of the option is here

> OPT
[,1]     [,2]      [,3]
[1,] 32.02217 84.86869 119.44358
[2,]  0.00000  5.00000  39.57489
[3,]  0.00000  0.00000   0.00000

So if we go backward one step, we have the following square of values

> k=n
> OPT0<-matrix(0,k,k)
> for(i in(1:k))
+ {
+   for(j in(1:k))
+   {
+     OPT0[i,j]=(OPT[i,j]*puu.n+OPT[i+1,j]*pdu.n+
+ OPT[i,j+1]*pud.n+OPT[i+1,j+1]*pdd.n)*exp(-r*T/n)
+ }
+ }
> OPT0
[,1]      [,2]
[1,] 22.2741190 58.421275
[2,]  0.5305465  5.977683

The idea is then to move backward once more,

> OPT=OPT0
> OPT0<-matrix(0,k,k)
> for(i in(1:k))
+ {
+   for(j in(1:k))
+   {
+     OPT0[i,j]=(OPT[i,j]*puu.n+OPT[i+1,j]*pdu.n+
+ OPT[i,j+1]*pud.n+OPT[i+1,j+1]*pdd.n)*exp(-r*T/n)
+ }
+ }
> OPT0
[,1]
[1,] 16.44106

Here calculations are much (much) longer,

> price.spead(250)
[1]  15.66496

and again, it is possible to use standard approximations to compare that price with a more standard one,

> (sp=SpreadApproxOption(TypeFlag =
+ "c", S1 = 105, S2 = 100, X = 0,
+ Time = 1, r = .05, sigma1 = .4,
+ sigma2 = .3, rho = .5)@price)
[1]  15.65077

Well, playing with trees is nice, but it might not be optimal for complex products. Next time, we’ll discuss other techniques…

Economics of uncertainty, finance and asset pricing

For the first lecture at the École Polytechnique, I am filling in for Alfred Galichon. The slides are online, as well as the exercise sheets. The lecture (as well as the tutorial session) will take place in Amphi Painlevé.

As a complement on exercise 2, on the APT, and for a clearer explanation, I refer to pages 162-171 of Cochrane’s Asset Pricing. In particular, the last question, on the approximate APT, is treated there (with the same notations, see also here).

Pricing catastrophe options in incomplete markets

Talk on Pricing catastrophe options in incomplete markets, at the Actuarial and Financial Mathematics Conference (interplay between Finance and Insurance), in Brussels.

This talk presented the problem of pricing options on catastrophe indices (in incomplete markets). A detailed version will appear in the Proceedings.