Tag Archives: bayesian

Classification from scratch, linear discrimination 8/8

Eighth post of our series on classification from scratch. The previous one was on the SVM, and today, I want to get back to some very old stuff, again with a linear separation of the space, using Fisher's linear discriminant analysis.

Bayes (naive) classifier

Consider the following naive classification rule
m^\star(\mathbf{x})=\text{argmax}_y\{\mathbb{P}[Y=y\vert\mathbf{X}=\mathbf{x}]\}
or
m^\star(\mathbf{x})=\text{argmax}_y\left\{\frac{\mathbb{P}[\mathbf{X}=\mathbf{x}\vert Y=y]\,\mathbb{P}[Y=y]}{\mathbb{P}[\mathbf{X}=\mathbf{x}]}\right\}
(where \mathbb{P}[\mathbf{X}=\mathbf{x}] is the density in the continuous case).

In the case where y takes two values (the standard \{0,1\} here), one can rewrite the latter as
m^\star(\mathbf{x})=\begin{cases}1\text{ if }\mathbb{E}(Y\vert \mathbf{X}=\mathbf{x})>\displaystyle{\frac{1}{2}}\\0\text{ otherwise}\end{cases}
and the set
\mathcal{D}_S =\left\{\mathbf{x},\mathbb{E}(Y\vert \mathbf{X}=\mathbf{x})=\frac{1}{2}\right\}
is called the decision boundary.
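As a toy illustration (not from the original post), if the conditional probability \mathbb{P}[Y=1\vert X=x] were known – here an arbitrary logistic curve – the classifier above simply predicts 1 whenever that probability exceeds 1/2,

set.seed(1)
n = 1000
x = runif(n)
p = function(x) exp(-2+4*x)/(1+exp(-2+4*x))   # an (assumed known) P(Y=1|X=x)
y = rbinom(n, size=1, prob=p(x))
m_star = function(x) as.numeric(p(x) > 1/2)   # the classification rule above
mean(m_star(x) != y)                          # empirical misclassification rate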

Assume that \mathbf{X}\vert Y=0\sim\mathcal{N}(\mathbf{\mu}_0,\mathbf{\Sigma}_0) and \mathbf{X}\vert Y=1\sim\mathcal{N}(\mathbf{\mu}_1,\mathbf{\Sigma}_1); then explicit expressions can be derived:
m^\star(\mathbf{x})=\begin{cases}1\text{ if }r_1^2< r_0^2+2\displaystyle{\log\frac{\mathbb{P}(Y=1)}{\mathbb{P}(Y=0)}+\log\frac{\vert\mathbf{\Sigma}_0\vert}{\vert\mathbf{\Sigma}_1\vert}}\\0\text{ otherwise}\end{cases}
where r_y^2 is the Mahalanobis distance,
r_y^2 = [\mathbf{x}-\mathbf{\mu}_y]^{\text{T}}\mathbf{\Sigma}_y^{-1}[\mathbf{x}-\mathbf{\mu}_y]

Let \delta_y be defined as
\delta_y(\mathbf{x})=-\frac{1}{2}\log\vert\mathbf{\Sigma}_y\vert-\frac{1}{2}[{\color{blue}{\mathbf{x}}}-\mathbf{\mu}_y]^{\text{T}}\mathbf{\Sigma}_y^{-1}[{\color{blue}{\mathbf{x}}}-\mathbf{\mu}_y]+\log\mathbb{P}(Y=y)
The decision boundary of this classifier is \{\mathbf{x}\text{ such that }\delta_0(\mathbf{x})=\delta_1(\mathbf{x})\}, which is quadratic in {\color{blue}{\mathbf{x}}}. This is the quadratic discriminant analysis. This can be visualized below.

The decision boundary is here
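Such a picture can be reproduced, from scratch, on simulated data (the means and covariance matrices below are purely illustrative, not the ones used for the figures of this post): the boundary is simply the zero level curve of \delta_1-\delta_0,

library(MASS)
set.seed(123)
mu0 = c(0,0) ; mu1 = c(2,1)                       # illustrative means
Sigma0 = matrix(c(1,.5,.5,1),2)                   # illustrative covariance matrices
Sigma1 = matrix(c(2,-.5,-.5,.8),2)
X0 = mvrnorm(100, mu0, Sigma0)
X1 = mvrnorm(100, mu1, Sigma1)
delta = function(x,mu,Sigma,p) -log(det(Sigma))/2 -
  t(x-mu)%*%solve(Sigma)%*%(x-mu)/2 + log(p)
vx = seq(-4,6,length=101) ; vy = seq(-4,5,length=101)
vz = outer(vx, vy, Vectorize(function(x,y)
  as.numeric(delta(c(x,y),mu1,Sigma1,.5) - delta(c(x,y),mu0,Sigma0,.5))))
plot(rbind(X0,X1), pch=c(rep(1,100),rep(19,100)), xlab="", ylab="")
contour(vx, vy, vz, levels=0, add=TRUE, lwd=2, col="red")   # quadratic boundary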

But that can’t be the linear discriminant analysis, right? I mean, the frontier is not linear… Actually, in Fisher’s seminal paper, it was assumed that \mathbf{\Sigma}_0=\mathbf{\Sigma}_1.

In that case, actually,
\delta_y(\mathbf{x})={\color{blue}{\mathbf{x}}}^{\text{T}}\mathbf{\Sigma}^{-1}\mathbf{\mu}_y-\frac{1}{2}\mathbf{\mu}_y^{\text{T}}\mathbf{\Sigma}^{-1}\mathbf{\mu}_y+\log\mathbb{P}(Y=y)
and the decision frontier is now linear in {\color{blue}{\mathbf{x}}}. This is the linear discriminant analysis. This can be visualized below.

Here the two samples have the same variance matrix and the frontier is

Link with the logistic regression

Assume, as previously, that \mathbf{X}\vert Y=0\sim\mathcal{N}(\mathbf{\mu}_0,\mathbf{\Sigma}) and \mathbf{X}\vert Y=1\sim\mathcal{N}(\mathbf{\mu}_1,\mathbf{\Sigma}); then
\log\frac{\mathbb{P}(Y=1\vert \mathbf{X}=\mathbf{x})}{\mathbb{P}(Y=0\vert \mathbf{X}=\mathbf{x})}
is equal to
\mathbf{x}^{\text{T}}\mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0]-\frac{1}{2}[\mathbf{\mu}_1+\mathbf{\mu}_0]^{\text{T}}\mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0]+\log\frac{\mathbb{P}(Y=1)}{\mathbb{P}(Y=0)}
which is linear in \mathbf{x},
\log\frac{\mathbb{P}(Y=1\vert \mathbf{X}=\mathbf{x})}{\mathbb{P}(Y=0\vert \mathbf{X}=\mathbf{x})}=\beta_0+\mathbf{x}^{\text{T}}\mathbf{\beta}
Hence, when each group has a Gaussian distribution with the same variance matrix, LDA and the logistic regression lead to the same classification rule.

Observe furthermore that the slope is proportional to \mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0], as stated in Fisher's article. But to obtain such a relationship, he observed that the ratio of between and within variances (in the two groups) was
\frac{\text{variance between}}{\text{variance within}}=\frac{[\mathbf{\omega}^{\text{T}}\mathbf{\mu}_1-\mathbf{\omega}^{\text{T}}\mathbf{\mu}_0]^2}{\mathbf{\omega}^{\text{T}}\mathbf{\Sigma}_1\mathbf{\omega}+\mathbf{\omega}^{\text{T}}\mathbf{\Sigma}_0\mathbf{\omega}}
which is maximal when \mathbf{\omega} is proportional to \mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0], when \mathbf{\Sigma}_0=\mathbf{\Sigma}_1=\mathbf{\Sigma}.
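To illustrate that proportionality, here is a small sketch (not from the original post) on simulated Gaussian samples sharing a common covariance matrix: the logistic regression slopes should be close to \mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0] (they coincide in the population model),

library(MASS)
set.seed(42)
Sigma = matrix(c(1,.4,.4,1),2)                    # common (illustrative) covariance matrix
X = rbind(mvrnorm(1000,c(0,0),Sigma), mvrnorm(1000,c(1,2),Sigma))
y = rep(0:1, each=1000)
m0 = colMeans(X[y==0,]) ; m1 = colMeans(X[y==1,])
Sp = (var(X[y==0,]) + var(X[y==1,]))/2            # pooled covariance estimate
omega = solve(Sp) %*% (m1-m0)                     # Fisher's direction
beta  = coef(glm(y~X, family=binomial))[-1]       # logistic regression slopes
cbind(omega, beta, ratio=beta/omega)              # ratios close to one, up to sampling noise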

Homebrew linear discriminant analysis

To compute vector \mathbf{\omega}

m0 = apply(myocarde[myocarde$PRONO=="0",1:7],2,mean)
m1 = apply(myocarde[myocarde$PRONO=="1",1:7],2,mean)
Sigma = var(myocarde[,1:7])
omega = solve(Sigma)%*%(m1-m0)
omega
                 [,1]
FRCAR -0.012909708542
INCAR  1.088582058796
INSYS -0.019390084344
PRDIA -0.025817110020
PAPUL  0.020441287970
PVENT -0.038298291091
REPUL -0.001371677757

For the constant b – the decision boundary being \mathbf{\omega}^{\text{T}}\mathbf{x}=b – if the two classes are equiprobable, use

b = (t(m1)%*%solve(Sigma)%*%m1-t(m0)%*%solve(Sigma)%*%m0)/2
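As a quick sanity check (this assumes the myocarde dataset used throughout this series is loaded), observations can then be classified according to the side of the hyperplane they fall on,

score = as.matrix(myocarde[,1:7]) %*% omega      # projection on omega
pred  = ifelse(score > as.numeric(b), "1", "0")  # predict 1 when omega'x > b
table(pred, myocarde$PRONO)                      # confusion table of the homebrew classifier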

Application (on the small dataset)

In order to visualize what’s going on, consider the small dataset, with only two covariates,

x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x,x2=y,y=as.factor(z))
m0 = apply(df[df$y=="0",1:2],2,mean)
m1 = apply(df[df$y=="1",1:2],2,mean)
Sigma = var(df[,1:2])
omega = solve(Sigma)%*%(m1-m0)
omega
         [,1]
x1 -2.640613174
x2  4.858705676


Using the regular R function (lda, from the MASS package), we get

library(MASS)
fit_lda = lda(y ~x1+x2 , data=df)
fit_lda
 
Coefficients of linear discriminants:
            LD1
x1 -2.588389554
x2  4.762614663

which is, up to a scaling factor, the same vector of coefficients as the one we got with our own code (the lda function uses a different normalization of \mathbf{\omega}). For the constant, use

b = (t(m1)%*%solve(Sigma)%*%m1-t(m0)%*%solve(Sigma)%*%m0)/2

If we plot it, we get the red straight line

plot(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")])
abline(a=b/omega[2],b=-omega[1]/omega[2],col="red")


As we can see (with the blue point), our red line passes through the midpoint of the segment joining the two barycenters

points(m0["x1"],m0["x2"],pch=4)
points(m1["x1"],m1["x2"],pch=4)
segments(m0["x1"],m0["x2"],m1["x1"],m1["x2"],col="blue")
points(.5*m0["x1"]+.5*m1["x1"],.5*m0["x2"]+.5*m1["x2"],col="blue",pch=19)

Of course, we can also use the R function predict,

vu = seq(-.1,1.1,length=101)     # grid of values covering the unit square
predlda = function(x,y) as.numeric(predict(fit_lda, data.frame(x1=x,x2=y))$class=="1")
vv = outer(vu,vu,predlda)
contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5)


One can also consider quadratic discriminant analysis, since it might be difficult to argue that \mathbf{\Sigma}_0=\mathbf{\Sigma}_1.

fit_qda = qda(y ~x1+x2 , data=df)

The separation curve is here

plot(df$x1,df$x2,pch=19,
col=c("blue","red")[1+(df$y=="1")])
predqda = function(x,y) as.numeric(predict(fit_qda, data.frame(x1=x,x2=y))$class=="1")
vv = outer(vu,vu,predqda)
contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5)

How long could it take to run a regression

This afternoon, while I was discussing with Montserrat (aka @mguillen_estany), we were wondering how long it might take to run a regression model. More specifically, how long it might take if we use a Bayesian approach. My guess was that the time should probably be linear in n, the number of observations. But I thought it would be good to check.

Let us generate a big dataset, with one million rows,

> n=1e6
> X=runif(n)
> Y=2+5*X+rnorm(n)
> B=data.frame(X,Y)

Consider as a benchmark the standard linear regression,

> lm_freq = function(n){
+   idx = sample(1:1e6,size=n)
+   reg = lm(Y~X,data=B[idx,])
+   summary(reg)
+ }

Here the regression is run on a subset of smaller size. We can do the same with a Bayesian approach, using stan,

> stan_lm ="
+ data {
+ int N;
+ vector[N] x;
+ vector[N] y;
+ }
+ parameters {
+ real alpha;
+ real beta;
+ real tau;
+ }
+ transformed parameters {
+ real sigma;
+ sigma <- 1 / sqrt(tau);
+ }
+ model{
+ y ~ normal(alpha + beta * x, sigma);
+ alpha ~ normal(0, 10);
+ beta ~ normal(0, 10);
+ tau ~ gamma(0.001, 0.001);
+ }
+ "

Define then the model

> library(rstan)
> system.time( 
  stanmodel <<- stan_model(model_code = stan_lm))
   user  system elapsed 
  0.043   0.000   0.043

We want to see how long it might take to run a regression,

> lm_bayes = function(n){
+   idx = sample(1:1e6,size=n)
+   fit = sampling(stanmodel,
+       data = list(N=n,
+                   x=X[idx],
+                   y=Y[idx]),
+       iter = 1000, warmup=200)
+   summary(fit)
+ }

We use the following package to see how long it takes

> library(microbenchmark)
> time_lm = function(n){
+  M = microbenchmark(lm_freq(n),
+      lm_bayes(n),times=50)
+  return(apply( matrix(M$time,nrow=2),1,mean))
+ }

We can now compare the time it took with ten, one hundred, one thousand, and ten thousand observations,

> vN = c(10,100,1000,10000)
> T = Vectorize(time_lm)(vN)

We can then plot it,

> plot(vN,T[2,]/1e6,log="xy",col="red",type="b",
+      xlab="Number of Observations",ylab="Time")
> lines(vN,T[1,]/1e6,col="blue",type="b")

It looks like (if we forget about the very small samples) the time it takes to run a regression is linear in the number of observations, with both techniques (the frequentist and the Bayesian one).
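One way to (roughly) check that claim is to regress the logarithm of the computation time on the logarithm of the sample size: a slope close to 1 suggests linear complexity. A quick sketch, reusing the matrix T and the vector vN computed above,

> # estimated slope for each of the two rows of T (one per method)
> apply(T, 1, function(tt) coef(lm(log(tt) ~ log(vN)))[2])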

And actually, the same story holds for logistic regressions. Consider the following dataset

> n=1e6
> X=runif(n)
> S=-3+2*X+rnorm(n)
> Y=rbinom(n,size=1,prob=exp(S)/(1+exp(S)))
> B=data.frame(X,Y)

The frequentist version of the logistic regression is

> glm_freq = function(n){
+   idx = sample(1:1e6,size=n)
+   reg = glm(Y~X,data=B[idx,],family=binomial)
+   summary(reg)
+ }

and the Bayesian one, using stan,

> stan_glm = "
+ data {
+ int N;
+ vector[N] x;
+ int<lower=0,upper=1> y[N];
+ }
+ parameters {
+ real alpha;
+ real beta;
+ }
+ model {
+ alpha ~ normal(0, 10);
+ beta ~ normal(0, 10);
+ y ~ bernoulli_logit(alpha + beta * x);
+ }
+ "
> stanmodel = stan_model(model_code = stan_glm)
> glm_bayes = function(n){
+   idx = sample(1:1e6,size=n)
+   fit = sampling(stanmodel,
+        data = list(N=n,
+        x = X[idx],
+        y = Y[idx]),
+        iter = 1000, warmup=200)
+   summary(fit)
+ }

Again, we can see how long it takes to run those regression models

> time_glm = function(n){
+   M = microbenchmark(glm_freq(n),
+   glm_bayes(n),times=50)
+   return(apply( matrix(M$time,nrow=2),1,mean))
+ }
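We can then run the same comparison as before (this simply mirrors the linear regression case, reusing the vector vN defined above; T_glm is just a name introduced here),

> T_glm = Vectorize(time_glm)(vN)
> plot(vN,T_glm[2,]/1e6,log="xy",col="red",type="b",
+      xlab="Number of Observations",ylab="Time")
> lines(vN,T_glm[1,]/1e6,col="blue",type="b")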


Will I ever be a bayesian statistician ? (part 1)

Last week, during the workshop on Statistical Methods for Meteorology and Climate Change (here), I discovered how powerful bayesian techniques could be, and that there were more and more bayesian statisticians. So, if I want to fully understand applied statisticians in conferences and workshops, I really have to understand the basics of bayesian statistics. Some time ago, I published a few posts on bayesian statistics applied to actuarial problems (here or there), but so far, I have always thought that bayesian was a synonym for magician. To be honest, I am a Muggle, and I have not been trained as a bayesian. But I can be an opportunist…

So I decided to publish some posts on bayesian techniques, in order to prove that it is actually not that difficult to implement.

As far as I understand it, in bayesian statistics, the parameter is considered as a random variable (which is also the case, in classical mathematical statistics). But here, we assume that this parameter does have a parametric distribution…
Consider a classical statistical problem: assume we have a sample http://freakonometrics.free.fr/blog/bayy1.png i.i.d. with distribution http://freakonometrics.free.fr/blog/bayy2.png. Here we note

http://freakonometrics.free.fr/blog/bayy3.png

since parameter http://freakonometrics.free.fr/blog/bayyyyy001.png is a random variable. The idea is to assume that http://freakonometrics.free.fr/blog/bayyyyy001.png has a (so called a priori) distribution, e.g.

http://freakonometrics.free.fr/blog/bayy4.png

So far it was simple. The idea is then to consider the posterior distribution of http://freakonometrics.free.fr/blog/bayyyyy001.png, given the observations http://freakonometrics.free.fr/blog/bayyyyyy02.png. Thus, we need to compute the distribution of http://freakonometrics.free.fr/blog/bayyyyyy03.png which is here extremely simple (due to properties of the Gaussian distribution), i.e.

http://freakonometrics.free.fr/blog/bayyyyyy04.png

where

http://freakonometrics.free.fr/blog/bayyyyyy05.png

And then, it becomes extremely natural to consider http://freakonometrics.free.fr/blog/bayy20.png as an estimator of the parameter, given our sample data (and thus, we also have a confidence interval, since we know the distribution of http://freakonometrics.free.fr/blog/bayyyyy001.png given the observations http://freakonometrics.free.fr/blog/bayyyyyy02.png).
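Before moving to the discrete case, here is a minimal numerical sketch of that Gaussian computation (all the values below – the known variance, the prior parameters, the simulated sample – are purely illustrative): the posterior mean is a precision-weighted average of the prior mean and the sample mean,

set.seed(1)
sigma = 1                        # assumed known standard deviation of the observations
mu0 = 0 ; tau0 = 2               # assumed prior: theta ~ N(mu0, tau0^2)
x = rnorm(20, mean=1.5, sd=sigma)
n = length(x)
tau_n = 1/sqrt(n/sigma^2 + 1/tau0^2)            # posterior standard deviation
mu_n  = tau_n^2*(sum(x)/sigma^2 + mu0/tau0^2)   # posterior mean
c(posterior_mean=mu_n, posterior_sd=tau_n)
qnorm(c(.025,.975), mu_n, tau_n)                # 95% credible interval for theta
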
In order to be sure that we understood, consider now a heads and tails problem, i.e. http://freakonometrics.free.fr/blog/bayy5.png. Note, first, that \theta has support http://freakonometrics.free.fr/blog/bayy6.png. So we need a distribution on that support. Why not a beta distribution? E.g.

http://freakonometrics.free.fr/blog/bayy7.png

Thus,

http://freakonometrics.free.fr/blog/bayy8.png

and

http://freakonometrics.free.fr/blog/bayy9.png

From Bayes formula,

http://freakonometrics.free.fr/blog/bayy10.png

and we get easily

http://freakonometrics.free.fr/blog/bayy11.png

which is the density of a Beta distribution, i.e.

http://freakonometrics.free.fr/blog/bayy12.png
prior     = dbeta(u,a,b)          # prior density on the grid u
posterior = dbeta(u,a+y,n-y+b)    # posterior density, after y heads out of n tosses

The estimator proposed is then the expected value of that conditional distribution,

http://freakonometrics.blog.free.fr/public/perso/bayyyyyyyyyyy.png

Note that

http://freakonometrics.free.fr/blog/bayy13.png

Further, it is possible to derive confidence intervals using quantiles of the posterior distribution.
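To make this concrete, here is a minimal sketch (the prior parameters a and b and the sample below are arbitrary): with y heads out of n tosses and a Beta(a,b) prior, the posterior is a Beta(a+y,n-y+b) distribution, its mean is a weighted average of the prior mean and the empirical frequency, and a credible interval is obtained from its quantiles,

a = 2 ; b = 2                          # arbitrary prior Beta(a,b)
n = 20 ; y = 14                        # say, 14 heads out of 20 tosses
post_mean  = (a+y)/(a+b+n)             # posterior mean
prior_mean = a/(a+b)
w = n/(a+b+n)                          # weight given to the data
c(post_mean, (1-w)*prior_mean + w*y/n) # the two coincide
qbeta(c(.025,.975), a+y, n-y+b)        # 95% credible interval for theta
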
On the graphs below, we consider the following heads/tails sample

A first idea is to consider a uniform prior distribution.

http://freakonometrics.free.fr/blog/bayes-cv-1.gif

A second idea is to consider an asymmetric beta distribution. First, with an asymmetry on the left,

http://freakonometrics.free.fr/blog/bayes-cv-3.gif

or on the right
http://freakonometrics.free.fr/blog/bayes-cv-2.gif

Finally a third idea is simply to get back to the standard Gaussian approximation,

http://freakonometrics.free.fr/blog/bayes-cv-gauss.gif

If we compare the four models, we obtain the following graph (the plain black line is the Gaussian approximation of the distribution of the empirical mean, and the colored lines are the posterior distributions obtained from the beta priors),

http://freakonometrics.free.fr/blog/bayes-cv-all.gif

The code to generate those graphs is the following
u = seq(0,1,by=.01)                   # grid on [0,1] where the densities are evaluated
set.seed(1)
S = sample(0:1,size=100,replace=TRUE)
COULEUR = rev(rainbow(120))
D1=D2=D3=D4=matrix(NA,101,length(u))
a1=1; b1=1                            # uniform prior
D1[1,]=dbeta(u,a1,b1)
a2=4; b2=2                            # first asymmetric beta prior
D2[1,]=dbeta(u,a2,b2)
a3=2; b3=4                            # second asymmetric beta prior
D3[1,]=dbeta(u,a3,b3)
for(s in 1:100){
y=sum(S[1:s])
D1[s+1,]=dbeta(u,a1+y,s-y+b1)
D2[s+1,]=dbeta(u,a2+y,s-y+b2)
D3[s+1,]=dbeta(u,a3+y,s-y+b3)
D4[s+1,]=dnorm(u,y/s,sqrt(y/s*(1-y/s)/s))
plot(u,D1[1,],col="black",type="l",ylim=c(0,8),
xlab="",ylab="")
for(i in 1:s){lines(u,D1[1+i,],col=COULEUR[i])}
points(y/s,0,pch=3,cex=2)
plot(u,D2[1,],col="black",type="l",ylim=c(0,8),
xlab="",ylab="")
for(i in 1:s){lines(u,D2[1+i,],col=COULEUR[i])}
points(y/s,0,pch=3,cex=2)
plot(u,D3[1,],col="black",type="l",ylim=c(0,8),
xlab="",ylab="")
for(i in 1:s){lines(u,D3[1+i,],col=COULEUR[i])}
points(y/s,0,pch=3,cex=2)
plot(u,D4[1,],col="white",type="l",ylim=c(0,8),
xlab="",ylab="")
for(i in 1:s){lines(u,D4[1+i,],col=COULEUR[i])}
points(y/s,0,pch=3,cex=2)
plot(u,D4[s+1,],col="black",lwd=2,type="l",
ylim=c(0,8),xlab="",ylab="")
lines(u,D1[1+i,],col="blue")
lines(u,D2[1+i,],col="red")
lines(u,D3[1+i,],col="purple")
points(y/s,0,pch=3,cex=2)
}

Here, we can see that computations are simple when the prior distribution is conjugate to the distribution of the observations (see here for a list of standard prior and posterior distributions).
So far, two questions naturally show up

  • is it possible to start with a neutral, non-informative prior distribution?
  • what if we are no longer working with conjugate distributions?

Well, I guess I have to work a bit more to answer those questions…. to be continued