
On the “correlation” between a continuous and a categorical variable

Let us get back to the Titanic dataset,

loc_fichier = "http://freakonometrics.free.fr/titanic.RData"
download.file(loc_fichier, "titanic.RData")
load("titanic.RData")
base = base[!is.na(base$Age),]  # keep only the passengers with a known age

We consider two variables: the age x (the continuous one) and the survival indicator y (the categorical one),

X = base$Age
Y = base$Survived

It looks like the age might be a valid explanatory variable in the logistic regression,

summary(glm(Survived~Age,data=base,family=binomial))
 
Coefficients:
            Estimate Std. Error z value Pr(>|z|)  
(Intercept) -0.05672    0.17358  -0.327   0.7438  
Age         -0.01096    0.00533  -2.057   0.0397 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
(Dispersion parameter for binomial family taken to be 1)
 
    Null deviance: 964.52  on 713  degrees of freedom
Residual deviance: 960.23  on 712  degrees of freedom
AIC: 964.23

The significance test here has a p-value just below 4%. Actually, one can relate it to the deviances reported above (the null deviance and the residual deviance). Recall that $D=2\big(\log\mathcal{L}(\boldsymbol{y})-\log\mathcal{L}(\widehat{\boldsymbol{\mu}})\big)$ while $D_0=2\big(\log\mathcal{L}(\boldsymbol{y})-\log\mathcal{L}(\overline{y})\big)$. Under the assumption that x is worthless, $D_0-D$ tends to a $\chi^2$ distribution with 1 degree of freedom. And we can compute the p-value of that likelihood ratio test,

1-pchisq(964.52-960.23,1)
[1] 0.03833717
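
Equivalently, since the fitted glm object stores both deviances, the same likelihood ratio test can be computed directly from it, or obtained with anova(); a minimal sketch,

reg = glm(Survived~Age,data=base,family=binomial)
# likelihood ratio test: difference of deviances against a chi-square with 1 df
1-pchisq(reg$null.deviance-reg$deviance,1)
# same test, from the analysis-of-deviance table
anova(reg,test="Chisq")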

(which is consistent with the Gaussian test in the regression output). But if we consider a nonlinear transformation of the age, using a spline basis,

library(splines)
summary(glm(Survived~bs(Age),data=base,family=binomial))
 
Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept)   0.8648     0.3460   2.500 0.012433 *  
bs(Age)1     -3.6772     1.0458  -3.516 0.000438 ***
bs(Age)2      1.7430     1.1068   1.575 0.115299    
bs(Age)3     -3.9251     1.4544  -2.699 0.006961 ** 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
(Dispersion parameter for binomial family taken to be 1)
 
    Null deviance: 964.52  on 713  degrees of freedom
Residual deviance: 948.69  on 710  degrees of freedom

which seems to be “more significant”

1-pchisq(964.52-948.69,3)
[1] 0.001228712
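
Since the two models are nested (the default cubic spline basis, with no interior knots, contains the linear effect), the improvement brought by the spline can also be tested by comparing the two fits with anova(); a minimal sketch,

library(splines)
reg1 = glm(Survived~Age,data=base,family=binomial)
reg2 = glm(Survived~bs(Age),data=base,family=binomial)
# analysis of deviance between the two nested models
anova(reg1,reg2,test="Chisq")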

So it looks like the variable x is interesting here.

To visualize this non-null correlation, one can consider the conditional distribution of x given y=1, and compare it with the conditional distribution of x given y=0,

ks.test(X[Y==0],X[Y==1])
 
	Two-sample Kolmogorov-Smirnov test
 
data:  X[Y == 0] and X[Y == 1]
D = 0.088777, p-value = 0.1324
alternative hypothesis: two-sided

i.e. with a p-value above 10%, the two distributions are not significantly different.

F0 = function(x) mean(X[Y==0]<=x)  # empirical cdf of the age, non-survivors
F1 = function(x) mean(X[Y==1]<=x)  # empirical cdf of the age, survivors
vx = seq(0,80,by=.1)
vy0 = Vectorize(F0)(vx)
vy1 = Vectorize(F1)(vx)
plot(vx,vy0,col="red",type="s")
lines(vx,vy1,col="blue",type="s")

(we can also look at the densities, but it looks like there is not much to see)
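
For instance, with kernel density estimates of the age in the two groups (a minimal sketch, using R's default bandwidth),

# kernel density estimates of the age, given survival status
d0 = density(X[Y==0])
d1 = density(X[Y==1])
plot(d0,col="red",main="Age, by survival status",xlab="age")
lines(d1,col="blue")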

An alternative is to discretize the variable x and to use Pearson's chi-squared independence test,

k=5
LV = quantile(X,(0:k)/k)
LV[1] = 0   # so that the youngest ages fall inside the first interval
Xc = cut(X,LV)
table(Xc,Y)
           Y
Xc           0  1
  (0,19]    85 79
  (19,25]   92 45
  (25,31.8] 77 50
  (31.8,41] 81 63
  (41,80]   89 53
chisq.test(table(Xc,Y))
 
	Pearson's Chi-squared test
 
data:  table(Xc, Y)
X-squared = 8.6155, df = 4, p-value = 0.07146
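
Note that the statistic can be recomputed by hand from the contingency table, using the expected counts under independence; a minimal sketch,

CT = table(Xc,Y)
# expected counts under independence: (row total x column total) / grand total
E = outer(rowSums(CT),colSums(CT))/sum(CT)
# Pearson chi-square statistic and its p-value, with (5-1)*(2-1)=4 degrees of freedom
sum((CT-E)^2/E)
1-pchisq(sum((CT-E)^2/E),4)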

The p-value is here 7%, with five categories for the age. And actually, we can look at how that p-value changes with the number of categories,

pvalue = function(k=5){
LV = quantile(X,(0:k)/k)
LV[1] = 0
Xc = cut(X,LV)
chisq.test(table(Xc,Y))$p.value}
vk = 2:20
vp = Vectorize(pvalue)(vk)
plot(vk,vp,type="l")
abline(h=.05,col="red",lty=2)

which gives a p-value close to 5%, as soon as we have enough categories. In the slides of the course (STT5100), I claim that the age is actually an important variable when trying to predict whether a passenger survived. The tests mentioned here are not as conclusive, nevertheless…

Actuarial Pricing Game, with Reinsurance

The Third Actuarial Pricing Game is still open, and the deadline for submission is still February 25th. As mentioned in the instructions, for those willing to play in a market where reinsurance is available, here are the prices offered by a reinsurance company.

As mentioned in the description, the price is per insurance policy, per year. Players should send me their premiums in a csv file, gross of reinsurance, and mention in the email which treaty they want to purchase, treaty (A) for instance (and state explicitly in the subject of the email that they want to play in this specific market, where reinsurance is available).

Third Actuarial Pricing Game

With the support of the ACTINFO Chair and the (French) Institute of Actuaries, our Third Actuarial Pricing Game starts today! There is a toolbox file available online, with

  • a description of the game: the rules, the dates, and a description of the datasets
  • 3 datasets: an underwriting and a claims database for year 0 (training data), and an underwriting dataset to enter the game

Anyone can play. Students from various programs around the world, as well as practitioners, are welcome. Teams are allowed, and there is no limit on their size. And there is no registration: to start playing, teams simply have to submit a dataset before the deadline (end of February) to pricing-game@univ-rennes1.fr.

Teasing for the Third Actuarial Pricing Game

We will launch the Third Actuarial Pricing Game within the next few days. The goal will be to replicate the behavior of insurance markets over time. There will be a first stage (January-February) where players will have to build a pricing model for motor insurance policies on the basis of 50,000 contracts observed in Year 0 (characteristics of the contracts and policyholders, plus a claims dataset). Proposals for premiums for Year 1 will have to be provided for the same contracts (updated for the age, the location, the car model, etc.). Players will use only the underwriting datasets to provide premiums for Year 1.

Between February 25th (the deadline for the premium proposals) and March 1st, we will replicate an insurance market by creating competition among insurers (players), and by setting simple rules to match drivers and insurers (randomly among the cheapest at the beginning of the game, and then, as time goes by, we will add inertia, i.e. a tendency to stay with the same insurer if the latter is not too expensive). Once the drivers and the players are matched, we will provide claims information for the insureds that each player has won. A new premium proposal must then be submitted at the end of Year 1 (i.e. at the end of March). Etc.

We will try here to replicate a car insurance market over four years. The goal (for players) will be to maximize the profit of the insurance company over those four years. To make the game more realistic, insurers are assumed to have capital, and they can remain in the game only if their yearly loss ratio (claims over premiums) stays below 150%. More information next week (rules, and training datasets).

R in Insurance, 2017

Following the successful conferences in London (2013, 2014, 2016) and in Amsterdam (2015), the next edition will take place in Paris. R in Insurance 2017 will be held at ENSAE on June 8.

This one-day conference will focus again on applications in insurance and actuarial science that use R, the lingua franca for statistical computation. The intended audience includes both academics and practitioners who are active or interested in the applications of R in insurance. The two invited speakers are Katrien Antonio (KU Leuven) and Julie Seguela (Covea). It will be a nice event!