From Simpson’s paradox to pies

Today, I wanted to publish a post on economics, and decision theory. And probability too… Those who follow my blog should know that I am a big fan of Simpson’s paradox. I also love to mention it in my econometrics classes, since it raises important questions that I relate to multicollinearity and to the interpretation of regression models with multiple (negatively correlated) explanatory variables. This paradox has amazing pedagogical virtues. I have mentioned it several times on this blog (I should probably add that I discovered this paradox via Marco Scarsini, who taught me a lot, in decision theory and in probability). For those who do not know this paradox, here is an example that Marco gave in one of his talks, a few years ago. Consider the following statistics, for healthy people admitted to two hospitals,

hospital     total   survivors   deaths   survival rate
hospital A     600         590       10             98%
hospital B     900         870       30             97%

while, when sick people entered the same hospitals,

hospital     total   survivors   deaths   survival rate
hospital A     400         210      190             53%
hospital B     100          30       70             30%

Somehow, whatever your health situation, you should choose hospital A. Now, if we aggregate the two tables,

hospital     total   survivors   deaths   survival rate
hospital A    1000         800      200             80%
hospital B    1000         900      100             90%

i.e. without any doubt, people should choose hospital B.
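
(Just to check those numbers, here is a quick way to reproduce them in R; the matrices below simply re-encode the three tables, and the variable names are mine.)

# survivors and admissions, by hospital (rows) and health status (columns)
surv  = matrix(c(590, 870, 210, 30), 2, 2,
               dimnames = list(c("A", "B"), c("healthy", "sick")))
total = matrix(c(600, 900, 400, 100), 2, 2,
               dimnames = list(c("A", "B"), c("healthy", "sick")))
surv / total                    # hospital A has the higher survival rate in each subgroup
rowSums(surv) / rowSums(total)  # but hospital B has the higher rate on the aggregated data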

Actually, Simpson’s paradox is called Simpson’s paradox because Colin Blyth named it that way in 1972, in his paper entitled On Simpson’s paradox and the sure-thing principle (an economics article in a statistical journal), that can be downloaded from http://www.stat.cmu.edu/~fienberg/…. He found this paradox in a paper published in 1951 by Edward Simpson, even if other papers actually mentioned it earlier. The most popular application is probably admission to Berkeley’s graduate programs, and sex bias, see Bickel, Hammel & O’Connell (1975), that can be downloaded from http://www.unc.edu/~nielsen/…. I also mentioned a geometric interpretation of this paradox a few years ago on my blog, which is so simple to understand that the paradox is no longer a paradox, actually, since on the example above, we had

590/600 = 98% > 870/900 = 97%

and

210/400 = 53% > 30/100 = 30%

while

800/1000 = 80% < 900/1000 = 90%

With symbolic notations, one can have, at the same time,

a1/b1 > a2/b2

and

c1/d1 > c2/d2

with also

(a1 + c1)/(b1 + d1) < (a2 + c2)/(b2 + d2)

as shown on the graph below

There should be a connection between Simpson’s paradox and the ecological fallacy (an issue I discovered recently, and found extremely interesting, related again to the difficulty of interpreting regressions). But that’s another story. My point today is that Colin Blyth mentioned another nice paradox, related, this time, to stochastic orderings. The idea is the following. Consider the three spinners drawn below (imagine some arrows in those circles),

  • spinner A: no matter where the arrow stops, the gain is 3,
  • spinner B: a 56% chance to gain 2, a 22% chance to gain 4, and a 22% chance to gain 6,
  • spinner C: a 51% chance to gain 1, and a 49% chance to gain 5.

Instead of spinners, it is also possible to consider three different lotteries,

You play against a friend: you pick a spinner, while your friend picks another one. Everyone flicks his arrow, and the highest number wins (no matter by how much). Let us compute the odds. First case, A against B, from A’s perspective,

         B-2              B-4              B-6
A-3      56%, +1, win     22%, -1, lose    22%, -3, lose

In that case, A has a 56% chance of beating B. Second case, A against C, from A’s perspective,

         C-1              C-5
A-3      51%, +1, win     49%, -2, lose

In that case, A has a 51% chance of beating C. Third (and final) case, B against C, from B’s perspective. Assuming independence between the spinners, the joint probabilities can easily be computed,
         C-1                 C-5
B-2      28.56%, +1, win     27.44%, -3, lose
B-4      11.22%, +3, win     10.78%, -1, lose
B-6      11.22%, +5, win     10.78%, +1, win

In that case, B has a 61.78% chance of beating C. So, if we try to summarize (a short numerical check in R is given after the list),
  • A is the best choice, since it beats each of the others with – always – more than a 50% chance,
  • C is the worst choice, since it is beaten by each of the others with – always – more than a 50% chance.
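
Those pairwise probabilities are easy to check numerically, assuming independent spins. Here is a minimal sketch in R (the lists and the pwin() helper are mine, just for illustration),

# distributions of the three spinners (values and probabilities)
A = list(x = 3,          p = 1)
B = list(x = c(2, 4, 6), p = c(.56, .22, .22))
C = list(x = c(1, 5),    p = c(.51, .49))
# probability that spinner s1 shows a strictly higher value than spinner s2
pwin = function(s1, s2) sum(outer(s1$p, s2$p) * outer(s1$x, s2$x, ">"))
pwin(A, B)   # 0.56
pwin(A, C)   # 0.51
pwin(B, C)   # 0.6178
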
Now, assume that you play not against one friend, but against two friends, and everyone picks a different spinner. Let us compute the odds, one more time. First case, A against B and C, from A’s perspective (in each cell, the gain is the difference between A’s value and the best opponent’s value),
         C-1                 C-5
B-2      28.56%, +1, win     27.44%, -2, lose
B-4      11.22%, -1, lose    10.78%, -2, lose
B-6      11.22%, -3, lose    10.78%, -3, lose

In that case, A has a 28.56% chance of beating both B and C. Second case, B against A and C, from B’s perspective,
         A-3 & C-1           A-3 & C-5
B-2      28.56%, -1, lose    27.44%, -3, lose
B-4      11.22%, +1, win     10.78%, -1, lose
B-6      11.22%, +3, win     10.78%, +1, win

In that case, B has a 33.22% chance of beating both A and C. Third (and final) case, C against A and B, from C’s perspective,
         A-3 & B-2           A-3 & B-4           A-3 & B-6
C-1      28.56%, -2, lose    11.22%, -3, lose    11.22%, -5, lose
C-5      27.44%, +2, win     10.78%, +1, win     10.78%, -1, lose

In that case, C has a 38.22% chance of beating both A and B. So, if we try to summarize, this time (again, a short numerical check is given after the list),

  • C is the best choice, since it has (strictly) more than a 1/3 chance of winning, which is the highest probability,
  • A is the worst choice, since it has (strictly) less than a 1/3 chance of winning, which is the lowest probability.
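
Again, a quick numerical check, reusing the A, B and C lists defined above (the pwin3() helper is, again, only mine),

# probability that spinner s beats both opponents s1 and s2 (independent spins)
pwin3 = function(s, s1, s2) {
  g = expand.grid(i = seq_along(s$x), j = seq_along(s1$x), k = seq_along(s2$x))
  sum(s$p[g$i] * s1$p[g$j] * s2$p[g$k] * (s$x[g$i] > pmax(s1$x[g$j], s2$x[g$k])))
}
pwin3(A, B, C)   # 0.2856
pwin3(B, A, C)   # 0.3322
pwin3(C, A, B)   # 0.3822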

Odd, isn’t it? Now, is there an interpretation of that paradox? Yes. Martin Gardner, in his paper on induction and probability, mentioned the case of drug testing. The values we had on the spinners are health levels, rated from 1 to 6. Thus, taking drug A, you always get a health level of 3. With drug C, on the other hand, you get either very sick (level 1) or very well (level 5). Consider now a doctor who wants to prescribe the pill that is the most likely to leave the patient in the best health. If only pills A and C are available, then the doctor should choose A. This is what we have seen in the first part. Now assume that a company releases a third pill, called drug B. Then the doctor should find C more interesting…. Odd, isn’t it?

Colin Blyth gave a more amusing application. Assume that you like to go to the restaurant, and you like to get a dessert there. Dessert A – the apple pie – is the average one, with a standard level that you rank 3 (on a scale from 1 to 6). Dessert C – the cheese cake – can either be awful (ranked 1) or delicious (ranked 5). You’d better go for the apple pie if you want to maximize the probability of not being disappointed (i.e. maximizing your “best chance” according to Colin Blyth, but I guess it can be interpreted as regret minimization too). Now assume that dessert B – the blueberry pie – is available (with ranks given by the spinner). Then you should go for the cheese cake. I let you imagine the discussion you could have, then, with your favorite waitress,

– Hi Mr Freakonometrics, do you want a piece of apple pie? (yes, actually, she also comes frequently to my blog, and knows me from my pseudo…)

– Probably. But actually, I was wondering if you have your blueberry pie today?

– Yes, in fact we do….

– Great, in that case, I’ll go for the cheese cake.

She’ll probably think that I am a freak… so I hope she’ll come and read my post, to understand that, actually, it does make a lot of sense to go for what was supposed to be my worst choice.

Modeling individual losses with mixtures

Usually, the sentence that I keep repeating in my regression classes is “please, look at your data“. In our previous post, we were playing like most econometricians: we did not look at the data. Actually, if we look at the distribution of individual losses in the dataset, we see the following,

> n=nrow(couts)
> plot(sort(couts$cout),(1:n)/(n+1),xlim=c(0,10000),type="s",lwd=2,col="green")

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-16.10.26.png

It looks like there are fixed-cost claims in our database. How do we deal with them in the standard framework (e.g. in the Loss Models textbook)? We can use a mixture of – at least – three distributions here,

f(y) = p_1 f_1(y) + p_2 \delta_{\kappa}(y) + p_3 f_3(y)

with

  • a distribution for small claims, https://latex.codecogs.com/gif.latex?{\color{Blue}%20f_1(}\cdot{\color{Blue}%20)}, e.g. an exponential distribution
  • a Dirac mass in https://latex.codecogs.com/gif.latex?{\color{Magenta}%20\kappa}, i.e. https://latex.codecogs.com/gif.latex?{\color{Magenta}%20\delta_{\kappa}(}\cdot{\color{Magenta}%20)}
  • a distribution for larger claims, https://latex.codecogs.com/gif.latex?{\color{Red}%20f_3(}\cdot{\color{Red}%20)}, e.g. a Gamma, or a lognormal, distribution
>  I1=which(couts$cout<1120)
>  I2=which((couts$cout>=1120)&(couts$cout<1220))
>  I3=which(couts$cout>=1220)
>  (p1=length(I1)/nrow(couts))
[1] 0.3284823
>  (p2=length(I2)/nrow(couts))
[1] 0.4152807
>  (p3=length(I3)/nrow(couts))
[1] 0.256237
>  X=couts$cout
>  (kappa=mean(X[I2]))
[1] 1171.998
>  X0=X[I3]-kappa
>  u=seq(0,10000,by=20)
>  F1=pexp(u,1/mean(X[I1]))
>  F2= (u>kappa)
>  F3=plnorm(u-kappa,mean(log(X0)),sd(log(X0))) * (u>kappa)
>  F=F1*p1+F2*p2+F3*p3
>  lines(u,F)

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-16.13.43.png

In our previous post, we’ve discussed the idea that all parameters might be related to some covariates, i.e.

https://latex.codecogs.com/gif.latex?f(y|\boldsymbol{X})%20=%20p_1(\boldsymbol{X})%20{\color{Blue}%20f_1(}y|\boldsymbol{X}{\color{Blue}%20)}%20+%20p_2(\boldsymbol{X})%20{\color{Magenta}%20\delta_{\kappa}(}y{\color{Magenta}%20)}%20+%20p_3(\boldsymbol{X})%20{\color{Red}%20f_3(}y|\boldsymbol{X}{\color{Red}%20)}

which yields the following premium model,

https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|\boldsymbol{X})%20=%20{\color{Blue}%20{\underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq%20s_1)}_{A}%20\cdot%20{\underbrace{\mathbb{P}(Y\leq%20s_1|\boldsymbol{X})}_{D}}}}\\+{\color{Purple}%20{{\underbrace{\mathbb{E}(Y|Y\in(%20s_1,s_2],%20\boldsymbol{X})%20}_{B}}\cdot%20{\underbrace{\mathbb{P}(Y\in(%20s_1,s_2]|%20\boldsymbol{X})}_{D}}}}\\+{\color{Red}%20{{\underbrace{\mathbb{E}(Y|Y%3E%20s_2,%20\boldsymbol{X})%20}_{C}}\cdot%20{\underbrace{\mathbb{P}(Y%3E%20s_2|%20\boldsymbol{X})}_{D}}}}

For the https://latex.codecogs.com/gif.latex?{\color{Blue}%20A}, https://latex.codecogs.com/gif.latex?{\color{Magenta}%20B} and https://latex.codecogs.com/gif.latex?{\color{Red}%20C} terms, that’s easy: we can use the standard models we’ve seen in the course. For the probabilities, we should use a multinomial model. Recall that, for the logistic regression model, if https://latex.codecogs.com/gif.latex?(\pi,1-\pi)=(\pi_1,\pi_2), then

https://latex.codecogs.com/gif.latex?\log%20\frac{\pi}{1-\pi}=\log%20\frac{\pi_1}{\pi_2}%20=\boldsymbol{X}%27\boldsymbol{\beta}

i.e.

https://latex.codecogs.com/gif.latex?\pi_1%20=%20\frac{\exp(\boldsymbol{X}%27\boldsymbol{\beta})}{1+\exp(\boldsymbol{X}%27\boldsymbol{\beta})}

and

https://latex.codecogs.com/gif.latex?\pi_2%20=%20\frac{1}{1+\exp(\boldsymbol{X}%27\boldsymbol{\beta})}

To derive a multivariate extension, write

https://latex.codecogs.com/gif.latex?\pi_1%20=%20\frac{\exp(\boldsymbol{X}%27\boldsymbol{\beta}_1)}{1+\exp(\boldsymbol{X}%27\boldsymbol{\beta}_1)+\exp(\boldsymbol{X}%27\boldsymbol{\beta}_2)}

https://latex.codecogs.com/gif.latex?\pi_2%20=%20\frac{\exp(\boldsymbol{X}%27\boldsymbol{\beta}_2)}{1+\exp(\boldsymbol{X}%27\boldsymbol{\beta}_1)+\exp(\boldsymbol{X}%27\boldsymbol{\beta}_2)}

and

https://latex.codecogs.com/gif.latex?\pi_3%20=%20\frac{1}{1+\exp(\boldsymbol{X}%27\boldsymbol{\beta}_1)+\exp(\boldsymbol{X}%27\boldsymbol{\beta}_2)}
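
As a small numerical illustration of those three expressions (the helper function and the two linear predictor values below are mine, just for illustration),

# multinomial logit probabilities, from the two linear predictors
# eta1 = X'beta_1 and eta2 = X'beta_2 (the third category is the reference)
pi_multinom = function(eta1, eta2) {
  d = 1 + exp(eta1) + exp(eta2)
  c(pi1 = exp(eta1)/d, pi2 = exp(eta2)/d, pi3 = 1/d)
}
pi_multinom(0.2, -0.5)   # three probabilities summing to one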

Again, maximum likelihood techniques can be used, since

https://latex.codecogs.com/gif.latex?\mathcal{L}(\boldsymbol{\pi},\boldsymbol{y})\propto%20\prod_{i=1}^n%20\prod_{j=1}^3%20\pi_{i,j}^{Y_{i,j}}

where here, the variable https://latex.codecogs.com/gif.latex?Y_{i}  – which takes three levels – is split into three indicators (as for any categorical explanatory variable in a standard regression model). Thus,

https://latex.codecogs.com/gif.latex?\log%20\mathcal{L}(\boldsymbol{\beta},\boldsymbol{y})\propto%20\sum_{i=1}^n%20\sum_{j=1}^2%20\left(Y_{i,j}%20\boldsymbol{X}_i%27\boldsymbol{\beta}_j\right)%20-%20\log\left[1+\exp(\boldsymbol{X}_i%27\boldsymbol{\beta}_1)+\exp(\boldsymbol{X}_i%27\boldsymbol{\beta}_2)\right]

and, as for the logistic regression, Newton-Raphson’s algorithm can then be used to compute the maximum likelihood estimates numerically. In R, first we have to define the levels, e.g.

> seuils=c(0,1120,1220,1e+12)
> couts$tranches=cut(couts$cout,breaks=seuils,
+ labels=c("small","fixed","large"))
> head(couts,5)
  nocontrat    no garantie    cout exposition zone puissance agevehicule
1      1870 17219      1RC 1692.29       0.11    C         5           0
2      1963 16336      1RC  422.05       0.10    E         9           0
3      4263 17089      1RC  549.21       0.65    C        10           7
4      5181 17801      1RC  191.15       0.57    D         5           2
5      6375 17485      1RC 2031.77       0.47    B         7           4
  ageconducteur bonus marque carburant densite region tranches
1            52    50     12         E      73     13    large
2            78    50     12         E      72     13    small
3            27    76     12         D      52      5    small
4            26   100     12         D      83      0    small
5            46    50      6         E      11     13    large

Then, we can run a multinomial regression, from

> library(nnet)

using some selected covariates

> reg=multinom(tranches~ageconducteur+agevehicule+zone+carburant,data=couts)
# weights:  30 (18 variable)
initial  value 2113.730043 
iter  10 value 2063.326526
iter  20 value 2059.206691
final  value 2059.134802 
converged

The output is here

> summary(reg)
Call:
multinom(formula = tranches ~ ageconducteur + agevehicule + zone + 
    carburant, data = couts)

Coefficients:
      (Intercept) ageconducteur agevehicule      zoneB      zoneC
fixed  -0.2779176   0.012071029  0.01768260 0.05567183 -0.2126045
large  -0.7029836   0.008581459 -0.01426202 0.07608382  0.1007513
           zoneD      zoneE      zoneF   carburantE
fixed -0.1548064 -0.2000597 -0.8441011 -0.009224715
large  0.3434686  0.1803350 -0.1969320  0.039414682

Std. Errors:
      (Intercept) ageconducteur agevehicule     zoneB     zoneC     zoneD
fixed   0.2371936   0.003738456  0.01013892 0.2259144 0.1776762 0.1838344
large   0.2753840   0.004203217  0.01189342 0.2746457 0.2122819 0.2151504
          zoneE     zoneF carburantE
fixed 0.1830139 0.3377169  0.1106009
large 0.2160268 0.3624900  0.1243560

To visualize the impact of a single covariate, one can also use spline functions,

> library(splines)
> reg=multinom(tranches~agevehicule,data=couts)
# weights:  9 (4 variable)
initial  value 2113.730043 
final  value 2072.462863 
converged
> reg=multinom(tranches~bs(agevehicule),data=couts)
# weights:  15 (8 variable)
initial  value 2113.730043 
iter  10 value 2070.496939
iter  20 value 2069.787720
iter  30 value 2069.659958
final  value 2069.479535 
converged

For instance, if the covariate is the age of the car, we do have the following probabilities

> predict(reg,newdata=data.frame(agevehicule=5),type="probs")
    small     fixed     large 
0.3388947 0.3869228 0.2741825

and for all ages from 0 to 20,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-16.02.55.png
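
The figure above can be reproduced with something along the following lines (only a sketch; the colors and layout will not match the original graph),

agesnew = data.frame(agevehicule = 0:20)
probs   = predict(reg, newdata = agesnew, type = "probs")
matplot(agesnew$agevehicule, probs, type = "l", lty = 1, lwd = 2,
        xlab = "age of the car", ylab = "probability")
legend("topright", legend = colnames(probs), lty = 1, lwd = 2, col = 1:3)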

For instance, for new cars, the proportion of fixed costs is rather small (here in purple), and it keeps increasing with the age of the car. If the covariate is the density of population in the area where the driver lives, we obtain the following probabilities,

> reg=multinom(tranches~bs(densite),data=couts)
# weights:  15 (8 variable)
initial  value 2113.730043 
iter  10 value 2068.469825
final  value 2068.466349 
converged
> predict(reg,newdata=data.frame(densite=90),type="probs")
    small     fixed     large 
0.3484422 0.3473315 0.3042263

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-16.05.29.png

Based on those probabilities, it is then possible to derive the expected cost of a claim, given some covariates (e.g. the density). But first, let us define subsets of the whole dataset,

> sousbaseA=couts[couts$tranches=="small",]
> sousbaseB=couts[couts$tranches=="fixed",]
> sousbaseC=couts[couts$tranches=="large",]

with a threshold given by

> (k=mean(sousbaseB$cout))
[1] 1171.998

Then, let us run our four models,

> reg=multinom(tranches~bs(densite),data=couts)
> regA=glm(cout~bs(densite),data=sousbaseA,family=Gamma(link="log"))
> regB=glm(cout~1,data=sousbaseB,family=Gamma(link="log"))
> regC=glm((cout-k)~bs(densite),data=sousbaseC,family=Gamma(link="log"))

We can now compute predictions based on those models,

> nouveau=data.frame(densite=seq(10,100))
> proba=predict(reg,newdata=nouveau,type="probs")
> predA=predict(regA,newdata=nouveau,type="response")
> predB=predict(regB,newdata=nouveau,type="response")
> predC=predict(regC,newdata=nouveau,type="response")+k
> pred=cbind(predA,predB,predC)

To visualize the impact of each component on the premium, we can compute probabilities, as well as expected costs (given that the cost falls in each subset),

> cbind(proba,pred)[seq(10,90,by=10),]
       small     fixed     large    predA    predB    predC
10 0.3344014 0.4241790 0.2414196 423.3746 1171.998 7135.904
20 0.3181240 0.4471869 0.2346892 428.2537 1171.998 6451.890
30 0.3076710 0.4626572 0.2296718 438.5509 1171.998 5499.030
40 0.3032872 0.4683247 0.2283881 451.4457 1171.998 4615.051
50 0.3052378 0.4620219 0.2327404 463.8545 1171.998 3961.994
60 0.3136136 0.4417057 0.2446807 472.3596 1171.998 3586.833
70 0.3279413 0.4056971 0.2663616 473.3719 1171.998 3513.601
80 0.3464842 0.3534126 0.3001032 463.5483 1171.998 3840.078
90 0.3652932 0.2868006 0.3479061 440.4925 1171.998 4912.379

Now, it is possible to plot those figures in a graph,

> barplot(t(proba*pred))
> abline(h=mean(couts$cout),lty=2)

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-15-a%CC%80-11.50.47.png

(the dotted horizontal line is the average cost of a claim, in our dataset).
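
To make the connection with the premium formula explicit, the expected cost for a given density is simply the sum, over the three components, of probability times expected (component) cost. A minimal sketch (the object name prime is mine),

# pure premium per density level: sum of probability x expected cost over the components
prime = rowSums(proba * pred)
cbind(nouveau, prime)[seq(10, 90, by = 10), ]
mean(couts$cout)   # the overall average claim cost, for comparison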