Tag Archives: multinomial

Multinomial Logit as an Iterated Logit Regression

For the second section of the course at ENSAE, yesterday, we saw how to run a multinomial logistic regression model. It is simply an extension of the binomial logistic regression. But actually, it is also possible to consider iterated binomial regressions.

Consider here a response variable Y with a multinomial distribution (3 levels, to have something more general than the binomial case), taking values \{A,B,C\}, with respective probabilities \mathbf{p}=(p_A,p_B,p_C). Here is some code to generate such multinomial variables

msample=function(A,B,C){
# draw B observations from the set of levels A,
# row i of C giving the probabilities used for draw i
Y=rep(NA,B)
for(i in 1:B){Y[i]=sample(A,size=1,prob=C[i,])}
return(Y)
}

and here is some code to generate a dataset with n rows,

generate3=function(n,x,pb=c(-2,0)){
set.seed(x)
X1=runif(n)
X2=runif(n)
X3=runif(n)
s1=pb[1]+X1+X2
s2=pb[2]-X1+X2
P1=exp(s1)/(1+exp(s1)+exp(s2))
P2=exp(s2)/(1+exp(s1)+exp(s2))
Y=msample(0:2,n,cbind(1-P1-P2,P1,P2))
df=data.frame(Y=Y,X1=X1,X2=X2,X3=X3)
return(df)
}

Let us generate a training dataset and a validation one

pb=c(.31,.42)
DF1=generate3(1000,1,pb=pb)
DF2=generate3(500,2,pb=pb)

With a multinomial logistic regression, the probabilities are
\mathbb{P}[Y=A|\mathbf{x}]=\frac{\exp[\mathbf{x}^{\text{T}}\mathbf{\alpha}]}{1+\exp[\mathbf{x}^{\text{T}}\mathbf{\alpha}]+\exp[\mathbf{x}^{\text{T}}\mathbf{\beta}]}
\mathbb{P}[Y=B|\mathbf{x}]=\frac{\exp[\mathbf{x}^{\text{T}}\mathbf{\beta}]}{1+\exp[\mathbf{x}^{\text{T}}\mathbf{\alpha}]+\exp[\mathbf{x}^{\text{T}}\mathbf{\beta}]}
\mathbb{P}[Y=C|\mathbf{x}]=\frac{1}{1+\exp[\mathbf{x}^{\text{T}}\mathbf{\alpha}]+\exp[\mathbf{x}^{\text{T}}\mathbf{\beta}]}

For convenience, let us sort the levels of the factor by frequency in our training dataset (the most frequent one first)

modalite=names(sort(table(DF1$Y),decreasing = TRUE))

Let us estimate a multinomial regression model on the simulated dataset (with several covariates), and get predictions, both on the training and on the validation datasets.

library(nnet)
reg=multinom(as.factor(Y) ~ ., data = DF1)
mp1=predict(reg, DF1, "probs")
mp2=predict(reg, DF2, "probs")

An alternative can be the following: consider a first regression model on the Bernoulli variable Y_A=\mathbf{1}(Y=A). In practice, we will use the most frequent level, but for convenience, assume here that it is A.
\mathbb{P}[Y_A=1|\mathbf{x}]=\frac{\exp[\mathbf{x}^{\text{T}}\mathbf{a}]}{1+\exp[\mathbf{x}^{\text{T}}\mathbf{a}]}
On our dataset, we can estimate that model and get predictions. Then, on the observations where Y\neq A, define another Bernoulli variable Y_B=\mathbf{1}(Y=B). We can estimate that second model and derive the two probabilities \mathbb{P}(Y=B|Y\neq A) and \mathbb{P}(Y=C|Y\neq A) (their sum being equal to 1). Based on those two models, it is possible to compute the three probabilities we are looking for: \mathbb{P}[Y=A] is obtained from the first model, and the other two are \mathbb{P}[Y=B|Y\neq A]\cdot\mathbb{P}[Y\neq A] and \mathbb{P}[Y=C|Y\neq A]\cdot\mathbb{P}[Y\neq A].

# first model: P[Y = most frequent level]
reg1=glm((Y==modalite[1])~.,data=DF1,family=binomial)
# second model: P[Y = second level | Y is not the most frequent one]
reg2=glm((Y==modalite[2])~.,data=DF1[-which(DF1$Y==modalite[1]),],family=binomial)
p11=predict(reg1, newdata=DF1, type="response")
p12=predict(reg2, newdata=DF1, type="response")
p21=predict(reg1, newdata=DF2, type="response")
p22=predict(reg2, newdata=DF2, type="response")
# combine the two models to recover the three probabilities
mmp1=cbind(p11,(1-p11)*p12,(1-p11)*(1-p12))
mmp2=cbind(p21,(1-p21)*p22,(1-p21)*(1-p22))
colnames(mmp1)=colnames(mmp2)=modalite

Let us compare the predicted probabilities, on the same dataset (here the training dataset)

> mmp1[1:9,c("0","1","2")]
0 1 2
1 0.19728737 0.4991805 0.3035321
2 0.17244580 0.5648537 0.2627005
3 0.19291753 0.5971058 0.2099767
4 0.09087176 0.7787304 0.1303978
5 0.23400225 0.4083022 0.3576955
6 0.18063647 0.6637352 0.1556283
7 0.13188881 0.7402710 0.1278401
8 0.13776970 0.6524959 0.2097344
9 0.12325864 0.6790336 0.1977078
> mp1[1:9,c("0","1","2")]
0 1 2
1 0.19691036 0.5022692 0.3008205
2 0.17123189 0.5680647 0.2607034
3 0.19293066 0.5984402 0.2086291
4 0.08821851 0.7813318 0.1304497
5 0.23470739 0.4109990 0.3542936
6 0.18249687 0.6602168 0.1572863
7 0.13128711 0.7400898 0.1286231
8 0.13525341 0.6553618 0.2093848
9 0.12090016 0.6815915 0.1975084

The two sets of predictions are very close. So yes, it is possible to see the multinomial regression as a sequence of binomial regressions.
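As an additional check (not in the computations above), one can look at the largest absolute gap between the two sets of predicted probabilities, reordering the columns of the iterated-model predictions so that they match those of multinom,

max(abs(mmp1[,c("0","1","2")]-mp1[,c("0","1","2")]))
max(abs(mmp2[,c("0","1","2")]-mp2[,c("0","1","2")]))

which should be small on both the training and the validation datasets.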

Applications of Chi-Square Tests

This morning, in our mathematical statistics class, we saw the use of the chi-square test. The first application was related to the goodness of fit of a multinomial distribution. Assume that \mathbf{N}=(N_1,\dots,N_k)\sim\mathcal{M}(n,\mathbf{p}). In order to test H_0:\mathbf{p}=\mathbf{p}^\star against H_1:\mathbf{p}\neq\mathbf{p}^\star, use the statistic

Q=\sum_{j=1}^k\frac{(N_j-np_j^\star)^2}{np_j^\star}

Under H_0, Q converges (in distribution) to a \chi^2(k-1) distribution. For instance, we have the number of weddings, in a large city, per season,

> n=c(301,356,413,262)

We want to test whether weddings are celebrated uniformly over the year, i.e. H_0:\mathbf{p}^\star=(1/4,1/4,1/4,1/4).

> np=rep(sum(n)/4,4)
> cbind(n,np)
       n  np
[1,] 301 333
[2,] 356 333
[3,] 413 333
[4,] 262 333
> Q=sum( (n-np)^2/np  )
> Q
[1] 39.02102

This quantity should be compared with the 95% quantile of the chi-square distribution with k-1=3 degrees of freedom

> qchisq(.95,df=4-1)
[1] 7.814728

but it is also possible to compute the p-value,

> 1-pchisq(Q,df=4-1)
[1] 1.717959e-08

Here, we reject the assumption that weddings are celebrated uniformly over the year.
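Note that the same computation can be done directly with the chisq.test function from base R (equal cell probabilities being the default null hypothesis); the statistic and the p-value should match the ones obtained above,

> chisq.test(n)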

Continue reading Applications of Chi-Square Tests

Inference for the Multinomial Distribution

This morning, in our mathematical statistics class, we briefly discussed the multinomial distribution, and statistical inference. \mathbf{N}=(N_1,\dots,N_k) has a \mathcal{M}(n,\mathbf{p}) distribution if its probability function is

\mathbb{P}[\mathbf{N}=(n_1,\dots,n_k)]=\frac{n!}{n_1!\cdots n_k!}\,p_1^{n_1}\cdots p_k^{n_k}

with n_1+\cdots+n_k=n and p_1+\cdots+p_k=1.

The maximum likelihood estimator is then the optimum of

\log\mathcal{L}(\mathbf{p};\mathbf{N})=\text{constant}+\sum_{j=1}^k N_j\log p_j\quad\text{subject to}\quad\sum_{j=1}^k p_j=1

We use a Lagrange multiplier to solve this constrained optimization problem,

\log\mathcal{L}(\mathbf{p},\lambda)=\text{constant}+\sum_{j=1}^k N_j\log p_j+\lambda\Big(1-\sum_{j=1}^k p_j\Big)

First order conditions are here

\frac{\partial \log\mathcal{L}}{\partial p_j}=\frac{N_j}{p_j}-\lambda=0,\qquad j=1,\dots,k

and

\frac{\partial \log\mathcal{L}}{\partial \lambda}=1-\sum_{j=1}^k p_j=0

Thus,

p_j=\frac{N_j}{\lambda}

From

\sum_{j=1}^k p_j=\sum_{j=1}^k\frac{N_j}{\lambda}=\frac{n}{\lambda}=1

we can easily get that the Lagrange multiplier is \lambda=n. And then

\widehat{p}_j=\frac{N_j}{n}

One can easily get that this maximum likelihood estimator is unbiased, since \mathbb{E}[N_j]=np_j, so that \mathbb{E}[\widehat{p}_j]=p_j. Actually, we can easily prove that

\text{Var}[\widehat{p}_j]=\frac{p_j(1-p_j)}{n}

and that \text{Var}[N_j]=np_j(1-p_j), while \text{cov}[N_j,N_{j'}]=-np_jp_{j'} for j\neq j'. The trick to get the latter is simple,

N_j+N_{j'}\sim\mathcal{B}(n,p_j+p_{j'})

and \text{Var}[N_j+N_{j'}]=\text{Var}[N_j]+\text{Var}[N_{j'}]+2\,\text{cov}[N_j,N_{j'}]. Thus, we can easily get the covariance. From that term, we can write that

\sqrt{n}\,(\widehat{\mathbf{p}}-\mathbf{p})\xrightarrow{\;\mathcal{L}\;}\mathcal{N}(\mathbf{0},\Sigma)

with

\Sigma=\Delta_{\mathbf{p}}-\mathbf{p}\mathbf{p}^{\text{T}}

while

\Delta_{\mathbf{p}}=\text{diag}(p_1,\dots,p_k)
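As a quick numerical illustration of those results, one can check the unbiasedness and the covariance of the maximum likelihood estimator by simulation (a small sketch, with arbitrary values of n and \mathbf{p}),

> p=c(.3,.5,.2)
> n=1000
> N=rmultinom(10000,size=n,prob=p)   # each column is one sample of counts
> phat=N/n                           # maximum likelihood estimates
> rowMeans(phat)                     # should be close to p
> cov(t(phat))                       # should be close to (diag(p)-pp')/n
> (diag(p)-p%*%t(p))/n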

Supervised Classification, beyond the logistic

In our data science class, after discussing the limitations of the logistic regression, e.g. the fact that the decision boundary is a straight line, we mentioned possible natural extensions. Let us consider our (now) standard dataset

 clr1 <- c(rgb(1,0,0,1),rgb(0,0,1,1))
 clr2 <- c(rgb(1,0,0,.2),rgb(0,0,1,.2))
 x <- c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
 y <- c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
 z <- c(1,1,1,1,1,0,0,1,0,0)
 df <- data.frame(x,y,z)
 plot(x,y,pch=19,cex=2,col=clr1[z+1])

One can consider a quadratic function of the covariates (instead of a linear one)

 reg=glm(z~x+y+I(x^2)+I(y^2)+I(x*y),
     data=df,family=binomial)
 summary(reg)
 
 pred_1 <- function(x,y){
 predict(reg,newdata=data.frame(x=x,
 y=y),type="response")>.5 }
 
 x_grid<-seq(0,1,length=101)
 y_grid<-seq(0,1,length=101)
 z_grid <- outer(x_grid,y_grid,pred_1)
 image(x_grid,y_grid,z_grid,col=clr2)
 points(x,y,pch=19,cex=2,col=clr1[z+1])
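The decision boundary is then no longer a straight line. To make it visible, a small sketch (assuming the image above is still the active plot, and using an ad hoc helper p_prob that is not part of the original code) is to overlay the level curve where the predicted probability equals 1/2,

 p_prob <- function(x,y){
 predict(reg,newdata=data.frame(x=x,
 y=y),type="response") }
 p_grid <- outer(x_grid,y_grid,p_prob)
 contour(x_grid,y_grid,p_grid,levels=.5,add=TRUE,lwd=2)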

Continue reading Supervised Classification, beyond the logistic

Supervised Classification, Logistic and Multinomial

We will start, in our Data Science course,  to discuss classification techniques (in the context of supervised models). Consider the following case, with 10 points, and two classes (red and blue)

> clr1 <- c(rgb(1,0,0,1),rgb(0,0,1,1))
> clr2 <- c(rgb(1,0,0,.2),rgb(0,0,1,.2))
> x <- c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
> y <- c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
> z <- c(1,1,1,1,1,0,0,1,0,0)
> df <- data.frame(x,y,z)
> plot(x,y,pch=19,cex=2,col=clr1[z+1])

To get a prediction, i.e. a partition of the space into two parts, consider some logistic regression

> reg=glm(z~x+y,data=df,family=binomial)
> summary(reg)
 
Call:
glm(formula = z ~ x + y, family = binomial, data = df)
 
Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-1.6593  -0.4400   0.2564   0.5830   1.5374  
 
Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)   -1.706      1.999  -0.854    0.393
x             -5.489      5.360  -1.024    0.306
y              8.568      5.515   1.554    0.120
 
(Dispersion parameter for binomial family taken to be 1)
 
    Null deviance: 13.4602  on 9  degrees of freedom
Residual deviance:  8.1445  on 7  degrees of freedom
AIC: 14.144
 
Number of Fisher Scoring iterations: 5

Given some point, the predicted class is obtained using

> pred_1 <- function(x,y){
+ predict(reg,newdata=data.frame(x=x,
+ y=y),type="response")>.5
+ }

(here, the predicted class is simply the one that is the most likely). To visualize it use

> x_grid<-seq(0,1,length=101)
> y_grid<-seq(0,1,length=101)
> z_grid <- outer(x_grid,y_grid,pred_1)
> image(x_grid,y_grid,z_grid,col=clr2)
> points(x,y,pch=19,cex=2,col=clr1[z+1])

Since the logistic regression is a (generalized) linear model, the line that separates the two regions is a straight line.
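That line can actually be drawn explicitly: it is the set of points where the linear predictor is zero (i.e. where the predicted probability equals 1/2), so, as a small sketch, it can be added to the previous plot with

> b <- coef(reg)
> abline(a=-b["(Intercept)"]/b["y"], b=-b["x"]/b["y"], lwd=2)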

Continue reading Supervised Classification, Logistic and Multinomial

Modeling individual losses with mixtures

Usually, the sentence that I keep saying in my regression classes is “please, look at your data“. In our previous post, we’ve been playing like most econometricians: we did not look at the data. Actually, if we look at the distribution of individual losses, in the dataset, we see the following,

> n=nrow(couts)
> plot(sort(couts$cout),(1:n)/(n+1),xlim=c(0,10000),type="s",lwd=2,col="green")

https://f.hypotheses.org/wp-content/blogs.dir/253/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-16.10.26.png

It looks like there are fixed-cost claims in our database. How do we deal with them in the standard case (e.g. in the Loss Models textbook)? We can use a mixture of – at least – three distributions here,

f(y) = p_1\, f_1(y) + p_2\, \delta_{\kappa}(y) + p_3\, f_3(y)

with

  • a distribution for small claims, f_1(\cdot), e.g. an exponential distribution
  • a Dirac mass at \kappa, i.e. \delta_{\kappa}(\cdot)
  • a distribution for larger claims, f_3(\cdot), e.g. a Gamma, or a lognormal, distribution
>  I1=which(couts$cout<1120)
>  I2=which((couts$cout>=1120)&(couts$cout<1220))
>  I3=which(couts$cout>=1220)
>  (p1=length(I1)/nrow(couts))
[1] 0.3284823
>  (p2=length(I2)/nrow(couts))
[1] 0.4152807
>  (p3=length(I3)/nrow(couts))
[1] 0.256237
>  X=couts$cout
>  (kappa=mean(X[I2]))
[1] 1171.998
>  X0=X[I3]-kappa
>  u=seq(0,10000,by=20)
>  F1=pexp(u,1/mean(X[I1]))
>  F2= (u>kappa)
>  F3=plnorm(u-kappa,mean(log(X0)),sd(log(X0))) * (u>kappa)
>  F=F1*p1+F2*p2+F3*p3
>  lines(u,F)

https://f.hypotheses.org/wp-content/blogs.dir/253/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-16.13.43.png

In our previous post, we’ve discussed the idea that all parameters might be related to some covariates, i.e.

f(y|\boldsymbol{X}) = p_1(\boldsymbol{X})\, f_1(y|\boldsymbol{X}) + p_2(\boldsymbol{X})\, \delta_{\kappa}(y) + p_3(\boldsymbol{X})\, f_3(y|\boldsymbol{X})

which yields the following premium model,

\mathbb{E}(Y|\boldsymbol{X}) = \underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq s_1)}_{A}\cdot\underbrace{\mathbb{P}(Y\leq s_1|\boldsymbol{X})}_{D}+\underbrace{\mathbb{E}(Y|Y\in(s_1,s_2],\boldsymbol{X})}_{B}\cdot\underbrace{\mathbb{P}(Y\in(s_1,s_2]|\boldsymbol{X})}_{D}+\underbrace{\mathbb{E}(Y|Y>s_2,\boldsymbol{X})}_{C}\cdot\underbrace{\mathbb{P}(Y>s_2|\boldsymbol{X})}_{D}

For the A, B and C terms, that's easy, we can use the standard models we've seen in the course. For the probabilities (the D terms), we should use a multinomial model. Recall that for the logistic regression model, if (\pi,1-\pi)=(\pi_1,\pi_2), then

\log\frac{\pi}{1-\pi}=\log\frac{\pi_1}{\pi_2}=\boldsymbol{X}'\boldsymbol{\beta}

i.e.

\pi_1=\frac{\exp(\boldsymbol{X}'\boldsymbol{\beta})}{1+\exp(\boldsymbol{X}'\boldsymbol{\beta})}

and

\pi_2=\frac{1}{1+\exp(\boldsymbol{X}'\boldsymbol{\beta})}

To derive a multinomial extension, write

\pi_1=\frac{\exp(\boldsymbol{X}'\boldsymbol{\beta}_1)}{1+\exp(\boldsymbol{X}'\boldsymbol{\beta}_1)+\exp(\boldsymbol{X}'\boldsymbol{\beta}_2)}

\pi_2=\frac{\exp(\boldsymbol{X}'\boldsymbol{\beta}_2)}{1+\exp(\boldsymbol{X}'\boldsymbol{\beta}_1)+\exp(\boldsymbol{X}'\boldsymbol{\beta}_2)}

and

\pi_3=\frac{1}{1+\exp(\boldsymbol{X}'\boldsymbol{\beta}_1)+\exp(\boldsymbol{X}'\boldsymbol{\beta}_2)}

Again, maximum likelihood techniques can be used, since

\mathcal{L}(\boldsymbol{\pi},\boldsymbol{y})\propto\prod_{i=1}^n\prod_{j=1}^3\pi_{i,j}^{Y_{i,j}}

where the variable Y_i – which takes three levels – is split into three indicator variables (as for any categorical explanatory variable in a standard regression model). Thus,

\log\mathcal{L}(\boldsymbol{\beta},\boldsymbol{y})\propto\sum_{i=1}^n\left[\sum_{j=1}^2 Y_{i,j}\,\boldsymbol{X}_i'\boldsymbol{\beta}_j-\log\left(1+\exp(\boldsymbol{X}_i'\boldsymbol{\beta}_1)+\exp(\boldsymbol{X}_i'\boldsymbol{\beta}_2)\right)\right]

As for the logistic regression, the Newton-Raphson algorithm can then be used to compute the maximum likelihood numerically. In R, first we have to define the levels, e.g.

> seuils=c(0,1120,1220,1e+12)
> couts$tranches=cut(couts$cout,breaks=seuils,
+ labels=c("small","fixed","large"))
> head(couts,5)
  nocontrat    no garantie    cout exposition zone puissance agevehicule
1      1870 17219      1RC 1692.29       0.11    C         5           0
2      1963 16336      1RC  422.05       0.10    E         9           0
3      4263 17089      1RC  549.21       0.65    C        10           7
4      5181 17801      1RC  191.15       0.57    D         5           2
5      6375 17485      1RC 2031.77       0.47    B         7           4
  ageconducteur bonus marque carburant densite region tranches
1            52    50     12         E      73     13    large
2            78    50     12         E      72     13    small
3            27    76     12         D      52      5    small
4            26   100     12         D      83      0    small
5            46    50      6         E      11     13    large

Then, we can run a multinomial regression, from

> library(nnet)

using some selected covariates

> reg=multinom(tranches~ageconducteur+agevehicule+zone+carburant,data=couts)
# weights:  30 (18 variable)
initial  value 2113.730043 
iter  10 value 2063.326526
iter  20 value 2059.206691
final  value 2059.134802 
converged
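As a side note, the log-likelihood written above can also be maximised directly, e.g. with optim(). Here is a minimal, purely illustrative sketch (the negloglik function below is an ad hoc helper, not part of any package); its output should be close to coef(reg),

negloglik=function(theta,X,Y){
# Y: factor with three levels (first level taken as reference), X: design matrix
k=ncol(X)
b1=theta[1:k]; b2=theta[(k+1):(2*k)]
e1=exp(X%*%b1); e2=exp(X%*%b2)
den=as.vector(1+e1+e2)
P=cbind(1,e1,e2)/den   # probabilities of the three levels, row by row
-sum(log(P[cbind(seq_along(Y),as.integer(Y))]))
}
X=model.matrix(~ageconducteur+agevehicule+zone+carburant,data=couts)
opt=optim(rep(0,2*ncol(X)),negloglik,X=X,Y=couts$tranches,
method="BFGS",control=list(maxit=1000))
matrix(opt$par,nrow=2,byrow=TRUE)   # should be close to coef(reg)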

The output is here

> summary(reg)
Call:
multinom(formula = tranches ~ ageconducteur + agevehicule + zone + 
    carburant, data = couts)

Coefficients:
      (Intercept) ageconducteur agevehicule      zoneB      zoneC
fixed  -0.2779176   0.012071029  0.01768260 0.05567183 -0.2126045
large  -0.7029836   0.008581459 -0.01426202 0.07608382  0.1007513
           zoneD      zoneE      zoneF   carburantE
fixed -0.1548064 -0.2000597 -0.8441011 -0.009224715
large  0.3434686  0.1803350 -0.1969320  0.039414682

Std. Errors:
      (Intercept) ageconducteur agevehicule     zoneB     zoneC     zoneD
fixed   0.2371936   0.003738456  0.01013892 0.2259144 0.1776762 0.1838344
large   0.2753840   0.004203217  0.01189342 0.2746457 0.2122819 0.2151504
          zoneE     zoneF carburantE
fixed 0.1830139 0.3377169  0.1106009
large 0.2160268 0.3624900  0.1243560
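As a quick sanity check (not in the original output), the fitted probabilities can be recomputed by hand from those coefficients, using the formulas above with "small" as the reference level; for instance, for the first observation of the dataset,

> B=coef(reg)   # one row of coefficients per non-reference level ("fixed" and "large")
> x=model.matrix(~ageconducteur+agevehicule+zone+carburant,data=couts)[1,]
> eta=B%*%x     # the two linear predictors
> c(1,exp(eta))/(1+sum(exp(eta)))
> predict(reg,newdata=couts[1,],type="probs")

the last two lines returning (up to rounding) the same three probabilities.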

To visualize the impact of a single covariate, one can also use spline functions

> library(splines)
> reg=multinom(tranches~agevehicule,data=couts)
# weights:  9 (4 variable)
initial  value 2113.730043 
final  value 2072.462863 
converged
> reg=multinom(tranches~bs(agevehicule),data=couts)
# weights:  15 (8 variable)
initial  value 2113.730043 
iter  10 value 2070.496939
iter  20 value 2069.787720
iter  30 value 2069.659958
final  value 2069.479535 
converged

For instance, if the covariate is the age of the car, we do have the following probabilities

> predict(reg,newdata=data.frame(agevehicule=5),type="probs")
    small     fixed     large 
0.3388947 0.3869228 0.2741825

and for all ages from 0 to 20,

https://f.hypotheses.org/wp-content/blogs.dir/253/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-16.02.55.png
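A figure like the one above can be obtained with something like the following sketch (line types and colours being arbitrary here),

> agev=data.frame(agevehicule=0:20)
> probs=predict(reg,newdata=agev,type="probs")
> matplot(agev$agevehicule,probs,type="l",lty=1,lwd=2,
+ xlab="age of the car",ylab="probability")
> legend("topleft",legend=colnames(probs),lty=1,col=1:3)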

For instance, for new cars, the proportion of fixed costs is rather small (here in purple), and it keeps increasing with the age of the car. If the covariate is the density of population in the area where the driver lives, we obtain the following probabilities

> reg=multinom(tranches~bs(densite),data=couts)
# weights:  15 (8 variable)
initial  value 2113.730043 
iter  10 value 2068.469825
final  value 2068.466349 
converged
> predict(reg,newdata=data.frame(densite=90),type="probs")
    small     fixed     large 
0.3484422 0.3473315 0.3042263

https://f.hypotheses.org/wp-content/blogs.dir/253/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-16.05.29.png

Based on those probabilities, it is then possible to derive the expected cost of a claim, given some covariates (e.g. the density). But first, define subsets of the whole dataset

> sousbaseA=couts[couts$tranches=="small",]
> sousbaseB=couts[couts$tranches=="fixed",]
> sousbaseC=couts[couts$tranches=="large",]

with the fixed cost (the location of the Dirac mass) given by

> (k=mean(sousbaseB$cout))
[1] 1171.998

Then, let us run our four models,

> reg=multinom(tranches~bs(densite),data=couts)
> regA=glm(cout~bs(densite),data=sousbaseA,family=Gamma(link="log"))
> regB=glm(cout~1,data=sousbaseB,family=Gamma(link="log"))
> regC=glm((cout-k)~bs(densite),data=sousbaseC,family=Gamma(link="log"))

We can now compute predictions based on those models,

> nouveau=data.frame(densite=seq(10,100))
> proba=predict(reg,newdata=nouveau,type="probs")
> predA=predict(regA,newdata=nouveau,type="response")
> predB=predict(regB,newdata=nouveau,type="response")
> predC=predict(regC,newdata=nouveau,type="response")+k
> pred=cbind(predA,predB,predC)

To visualize the impact of each component on the premium, we can compute the probabilities, as well as the expected costs (given that the cost belongs to each subset),

> cbind(proba,pred)[seq(10,90,by=10),]
       small     fixed     large    predA    predB    predC
10 0.3344014 0.4241790 0.2414196 423.3746 1171.998 7135.904
20 0.3181240 0.4471869 0.2346892 428.2537 1171.998 6451.890
30 0.3076710 0.4626572 0.2296718 438.5509 1171.998 5499.030
40 0.3032872 0.4683247 0.2283881 451.4457 1171.998 4615.051
50 0.3052378 0.4620219 0.2327404 463.8545 1171.998 3961.994
60 0.3136136 0.4417057 0.2446807 472.3596 1171.998 3586.833
70 0.3279413 0.4056971 0.2663616 473.3719 1171.998 3513.601
80 0.3464842 0.3534126 0.3001032 463.5483 1171.998 3840.078
90 0.3652932 0.2868006 0.3479061 440.4925 1171.998 4912.379

Now, it is possible to plot those figures in a graph,

> barplot(t(proba*pred))
> abline(h=mean(couts$cout),lty=2)

https://f.hypotheses.org/wp-content/blogs.dir/253/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-15-a%CC%80-11.50.47.png

(the dotted horizontal line is the average cost of a claim, in our dataset).
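Equivalently, the pure premium as a function of the density is simply the sum of the three components, i.e. the row sums of proba*pred. A short sketch to compare it with the overall average claim cost,

> prime=rowSums(proba*pred)
> plot(nouveau$densite,prime,type="l",lwd=2,xlab="densite",ylab="expected cost")
> abline(h=mean(couts$cout),lty=2)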