
On the conjugate function

In the MAT7381 course (graduate course on regression models), we will talk about optimization, and a classical tool is the so-called conjugate. Given a function f:\mathbb{R}^p\to\mathbb{R}, its conjugate is the function f^{\star}:\mathbb{R}^p\to\mathbb{R} such that f^{\star}(\boldsymbol{y})=\max_{\boldsymbol{x}}\lbrace\boldsymbol{x}^\top\boldsymbol{y}-f(\boldsymbol{x})\rbrace so, long story short, f^{\star}(\boldsymbol{y}) is the maximum gap between the linear function \boldsymbol{x}^\top\boldsymbol{y} and f(\boldsymbol{x}).

Just to visualize, consider a simple parabolic function (in dimension 1), f(x)=x^2/2; then f^{\star}(\color{blue}{2}) is the maximum gap between the line x\mapsto\color{blue}{2}x and the function f(x).

x = seq(-100,100,length=6001)     # grid used to compute the maximum
f = function(x) x^2/2
vf = Vectorize(f)(x)              # f evaluated on the grid
fstar = function(y) max(y*x-vf)   # numerical conjugate
vfstar = Vectorize(fstar)(x)

We can see it on the figure below.

viz = function(x0=1,YL=NA){
  idx = which(abs(x)<=3)
  par(mfrow=c(1,2))
  plot(x[idx],vf[idx],type="l",xlab="",ylab="",col="blue",lwd=2)
  abline(h=0,col="grey")
  abline(v=0,col="grey")
  idx2 = which(x0*x>=vf)
  polygon(c(x[idx2],rev(x[idx2])),c(vf[idx2],rev(x0*x[idx2])),col=rgb(0,1,0,.3),border=NA)
  abline(a=0,b=x0,col="red")
  i = which.max(x0*x-vf)
  segments(x[i],x0*x[i],x[i],f(x[i]),lwd=3,col="red")
  if(is.na(YL)) YL=range(vfstar[idx])
  plot(x[idx],vfstar[idx],type="l",xlab="",ylab="",col="red",lwd=1,ylim=YL)
  abline(h=0,col="grey")
  abline(v=0,col="grey")
  segments(x0,0,x0,fstar(x0),lwd=3,col="red")
  points(x0,fstar(x0),pch=19,col="red")
}
viz(1)

or

viz(1.5)

In that case, we can actually compute f^{\star}, since f^{\star}(y)=\max_{x}\lbrace xy-f(x)\rbrace=\max_{x}\lbrace xy-x^2/2\rbrace. The first order condition is here x^{\star}=y, and thus f^{\star}(y)=x^{\star}y-(x^{\star})^2/2=y^2-y^2/2=y^2/2. And actually, that can be related to two results. The first one is to observe that f(\boldsymbol{x})=\|\boldsymbol{x}\|_2^2/2, and in that case f^{\star}(\boldsymbol{y})=\|\boldsymbol{y}\|_2^2/2, from the following general result: if f(\boldsymbol{x})=\|\boldsymbol{x}\|_p^p/p with p>1, where \|\cdot\|_p denotes the standard \ell_p norm, then f^{\star}(\boldsymbol{y})=\|\boldsymbol{y}\|_q^q/q where \frac{1}{p}+\frac{1}{q}=1. The second one is the conjugate of a quadratic function. More specifically, if f(\boldsymbol{x})=\boldsymbol{x}^{\top}\boldsymbol{Q}\boldsymbol{x}/2 for some positive definite matrix \boldsymbol{Q}, then f^{\star}(\boldsymbol{y})=\boldsymbol{y}^{\top}\boldsymbol{Q}^{-1}\boldsymbol{y}/2 (the first order condition now gives \boldsymbol{x}^{\star}=\boldsymbol{Q}^{-1}\boldsymbol{y}). In our case, it was a univariate problem with \boldsymbol{Q}=1.
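Since we computed the conjugate numerically above, we can double-check that closed form on (part of) the grid; this is just a sanity check on the code, nothing more,

idx = which(abs(x)<=3)
max(abs(vfstar[idx]-x[idx]^2/2))   # should be numerically close to 0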

For the \ell_p case, i.e. f(x)=|x|^p/p, we can use the following code to visualize the conjugate

p = 3
f = function(x) abs(x)^p/p
vf = Vectorize(f)(x)
fstar = function(y) max(y*x-vf)
vfstar = Vectorize(fstar)(x)
viz(1.5)
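Here, with p=3, the conjugate exponent is q=p/(p-1)=3/2, and we can again check numerically that the computed conjugate is close to |y|^q/q (a quick sanity check, on a region where the maximum is attained inside the grid),

q = p/(p-1)
idx = which(abs(x)<=3)
max(abs(vfstar[idx]-abs(x[idx])^q/q))   # should be numerically close to 0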

or, with an exponent closer to 1,

p = 1.1
f = function(x) abs(x)^p/p
vf = Vectorize(f)(x)
fstar = function(y) max(y*x-vf)
vfstar = Vectorize(fstar)(x)
viz(1, YL=c(0,10))

Actually, in that case, we can almost visualize that if f(x)=|x| then \displaystyle{f^{\star}\left(y\right)={\begin{cases}0,&\left|y\right|\leq 1\\ +\infty,&\left|y\right|>1.\end{cases}}}
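Note that, numerically, +\infty cannot show up with our bounded grid: for |y|>1, the maximum of yx-|x| over x\in[-100,100] is attained at the boundary of the grid, and equals 100(|y|-1). For instance,

f = function(x) abs(x)
vf = Vectorize(f)(x)
fstar = function(y) max(y*x-vf)
fstar(.5)    # 0, since |y| <= 1
fstar(1.5)   # 50 = 100*(1.5-1), a (truncated) stand-in for +infinity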

To conclude, another popular case: if f(x)=\exp(x), then \displaystyle{f^{\star}\left(y\right)={\begin{cases}y\log(y)-y,&y>0\\0,&y=0\\ +\infty,&y<0.\end{cases}}} We can visualize that case below

f = function(x) exp(x)
vf = Vectorize(f)(x)
fstar = function(y) max(y*x-vf)
vfstar = Vectorize(fstar)(x)
viz(1,YL=c(-3,3))
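And we can check a couple of values against the closed form above, e.g. f^{\star}(1)=1\times\log(1)-1=-1,

fstar(1)                  # should be close to -1
fstar(2)-(2*log(2)-2)     # should be close to 0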

Will I ever be a bayesian statistician ? (part 2)

A few weeks ago, I started a series of posts on the magic of bayesian statistics from the eyes of a muggle (see http://freakonometrics.hypotheses.org/2191). It might be time to go a bit further…. And today, I wanted to discuss the choice of the a priori distribution of the parameter (which was mentioned in the comments on the previous post). As far as I understand, there are several houses with different ideas on how to choose it.

  • The conjugate house

The first idea (used in the previous post, here) is to consider a distribution from the exponential family. To be formal, those distributions can be written as

f(x|\boldsymbol{\theta})=h(x)\exp\lbrace\boldsymbol{\eta}(\boldsymbol{\theta})^\top\boldsymbol{T}(x)-A(\boldsymbol{\theta})\rbrace

(in a form as general as possible), i.e., in the canonical form,

f(x|\boldsymbol{\theta})=h(x)\exp\lbrace\boldsymbol{\theta}^\top\boldsymbol{T}(x)-A(\boldsymbol{\theta})\rbrace

Here \boldsymbol{\theta} is somehow the new (natural) parameter of the distribution. Then, a conjugate prior \pi(\boldsymbol{\theta}) for the parameter \boldsymbol{\theta} of the exponential family is given by

\pi(\boldsymbol{\theta}|\boldsymbol{\mu},\lambda)\propto\exp\lbrace\boldsymbol{\theta}^\top\boldsymbol{\mu}-\lambda A(\boldsymbol{\theta})\rbrace

where \boldsymbol{\mu}\in\mathbb{R}^d and \lambda>0 (where d is the dimension of \boldsymbol{\theta}).
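For instance (just to make that formalism concrete), the Bernoulli distribution belongs to that family, since p^x(1-p)^{1-x}=\exp\lbrace x\theta-A(\theta)\rbrace with natural parameter \theta=\log\frac{p}{1-p} and A(\theta)=\log(1+e^{\theta}); the conjugate prior above is then \propto\exp\lbrace\theta\mu-\lambda A(\theta)\rbrace and, once written as a density for p (taking the Jacobian of the transformation into account), it is \propto p^{\mu-1}(1-p)^{\lambda-\mu-1}, i.e. a Beta distribution, consistent with the first item of the list below.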

The conjugate prior is interesting since, when combined with the likelihood (and normalized), it produces a posterior distribution which is of the same type as the prior. And a lot of standard distributions have a conjugate prior. E.g.

  • For a Bernoulli distribution, i.e. x_1,\ldots,x_n i.i.d. with distribution \mathcal{B}(p), assume that the a priori distribution of p is \mathcal{B}eta(a,b); then the a posteriori distribution of p is still Beta, with parameters a+\sum x_i and b+n-\sum x_i.
  • For a binomial distribution, x\sim\mathcal{B}(m,p), assume that the a priori distribution of p is \mathcal{B}eta(a,b); then the a posteriori distribution of p is still Beta, with parameters a+x and b+m-x.
  • For a negative binomial distribution, x\sim\mathcal{NB}(r,p) (counting the number of failures before the r-th success), assume that the a priori distribution of p is \mathcal{B}eta(a,b); then the a posteriori distribution of p is still Beta, with parameters a+r and b+x.
  • For a Poisson distribution, x_1,\ldots,x_n i.i.d. \mathcal{P}(\lambda), assume that the a priori distribution of \lambda is \mathcal{G}(\alpha,\beta); then the a posteriori distribution of \lambda is still gamma, with parameters \alpha+\sum x_i and \beta+n (a numerical check of that case is given just after this list).
  • For a geometric distribution, \mathbb{P}(x=k)=p(1-p)^{k} (counting the number of failures before the first success), assume that the a priori distribution of p is \mathcal{B}eta(a,b); then the a posteriori distribution of p is still Beta, with parameters a+n and b+\sum x_i.
  • For an exponential distribution, x_1,\ldots,x_n i.i.d. \mathcal{E}(\lambda), assume that the a priori distribution of \lambda is \mathcal{G}(\alpha,\beta); then the a posteriori distribution of \lambda is still Gamma, with parameters \alpha+n and \beta+\sum x_i.
  • For a Gaussian distribution, x_1,\ldots,x_n i.i.d. \mathcal{N}(\mu,\sigma^2) with \sigma^2 known, assume that \mu\sim\mathcal{N}(\mu_0,\tau^2); then \mu|\boldsymbol{x} is still Gaussian, with mean \left(\frac{\mu_0}{\tau^2}+\frac{\sum x_i}{\sigma^2}\right)\left(\frac{1}{\tau^2}+\frac{n}{\sigma^2}\right)^{-1} and variance \left(\frac{1}{\tau^2}+\frac{n}{\sigma^2}\right)^{-1}.
  • For a gamma distribution, x_1,\ldots,x_n i.i.d. \mathcal{G}(k,\lambda) with known shape k, assume that the a priori distribution of \lambda is \mathcal{G}(\alpha,\beta); then the a posteriori distribution of \lambda is still Gamma, with parameters \alpha+nk and \beta+\sum x_i.
  • For a Pareto distribution with known scale x_m and unknown tail index \alpha, assume that the a priori distribution of \alpha is \mathcal{G}(a,b); then the a posteriori distribution of \alpha is still Gamma, with parameters a+n and b+\sum\log(x_i/x_m).
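And as mentioned in the Poisson item above, here is a small numerical check of the Poisson-gamma conjugacy (just a sketch, where the sample and the prior parameters alpha and beta are arbitrary choices),

set.seed(1)
x = rpois(20,lambda=3)                  # some Poisson observations
alpha = 2; beta = 1                     # gamma prior on lambda
l = seq(.01,8,by=.01)
post = sapply(l,function(lb) prod(dpois(x,lb)))*dgamma(l,alpha,beta)
post = post/sum(post*.01)               # normalized likelihood x prior
max(abs(post-dgamma(l,alpha+sum(x),beta+length(x))))   # close to 0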

  • The non-informative or vague house

So far, the choice of the prior was not neutral, in the sense that the a priori of the statistician will have an influence on the a posteriori distribution (we'll discuss that point further later on). We could be interested in the case where the prior is somehow neutral. A famous example is the case of the \mathcal{B}(p) distribution. Between 1745 and 1784, Pierre Simon Laplace observed 393,386 births of boys versus 377,555 births of girls (or 251,527 boys versus 241,945 girls if we consider the initial article, for births before 1770). He wanted to quantify the probability that p, the probability to have a boy, exceeds 1/2. He assumed that, a priori, p was uniform on the unit interval, claiming that it was as neutral as possible. But it is not that correct.
The idea of a noninformative prior is that we should get an equivalent result when considering a transformed parameter. So assume that the parameter is no longer \theta, but \tilde\theta=h(\theta) (for some bijective transformation h). The distribution (density) of \tilde\theta is then

\tilde\pi(\tilde\theta)=\pi(h^{-1}(\tilde\theta))\left\vert\frac{dh^{-1}(\tilde\theta)}{d\tilde\theta}\right\vert

Let I(\theta) denote the Fisher information of parameter \theta,

I(\theta)=\mathbb{E}\left[\left(\frac{\partial\log f(X|\theta)}{\partial\theta}\right)^2\right]

Then, the Fisher information of parameter \tilde\theta is

\tilde I(\tilde\theta)=I(h^{-1}(\tilde\theta))\left(\frac{dh^{-1}(\tilde\theta)}{d\tilde\theta}\right)^2

which can be written

\sqrt{\tilde I(\tilde\theta)}=\sqrt{I(\theta)}\left\vert\frac{d\theta}{d\tilde\theta}\right\vert

So if we want a prior distribution invariant by transformation of the parameter (\tilde\theta=h(\theta)), it seems natural to consider

\pi(\theta)\propto\sqrt{I(\theta)}

or at least something proportional to that square root, since we want to get a density.
Thus, from Jeffreys' principle, the prior distribution for a single parameter is noninformative if it is taken proportional to the square root of Fisher's information measure. For those who want to go further, see Noninformative Priors Do Not Exist or A Catalog of Noninformative Priors.

  • For the Poisson distribution, the Jeffreys prior for the rate parameter \lambda is \pi(\lambda)\propto\sqrt{1/\lambda} (since I(\lambda)=1/\lambda).
  • For the Bernoulli distribution, the Jeffreys prior for the probability parameter p is \pi(p)\propto\frac{1}{\sqrt{p(1-p)}} (since I(p)=\frac{1}{p(1-p)}).

This is the arcsine distribution, i.e. a beta distribution with both parameters equal to 1/2.
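And we can briefly check that the Beta(1/2,1/2) density is indeed proportional to 1/\sqrt{p(1-p)} (the normalizing constant being B(1/2,1/2)=\pi),

u = seq(.001,.999,length=501)
max(abs(dbeta(u,.5,.5)-1/(pi*sqrt(u*(1-u)))))   # should be numerically 0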

  • The expert house

The idea is quite simple. We need a prior distribution \pi(\theta) that reflects the knowledge some expert might have about the parameter.

But assume that we have already seen a similar problem before. For instance, I remember Eric Parent mentioning the case of river flow models. If we have two similar rivers, then it might be interesting to use information on one river as a priori information for the second one, something like using the a posteriori distribution of the parameter obtained on the first river as the a priori distribution for the second one.

I guess it is also possible to use meta-regression to get an aggregation of experts' opinions.

To go further on bayesian statistics, I suggest going to Albus Dumbledore's blog, here, or the blog of some PhD (and postdoc) students in Hogwarts, there. Or if you can wait, a dozen other posts will come soon (well, let's hope so). The next one will probably be on a posteriori calculations (which is the natural step since we've seen a priori choices here).

Will I ever be a bayesian statistician ? (part 1)

Last week, during the workshop on Statistical Methods for Meteorology and Climate Change (here), I discovered how powerful bayesian techniques could be, and that there were more and more bayesian statisticians. So, if I want to fully understand applied statisticians in conferences and workshops, I really have to understand the basics of bayesian statistics. I have published some time ago some posts on bayesian statistics applied to actuarial problems (here or there), but so far, I always thought that bayesian was a synonym for magician. To be honest, I am a Muggle, and I have not been trained as a bayesian. But I can be an opportunist…

So I decided to publish some posts on bayesian techniques, in order to prove that it is actually not that difficult to implement.

As far as I understand it, in bayesian statistics the parameter is considered as a random variable (which is also the case in classical mathematical statistics). But here, we assume that this parameter does have a parametric distribution….
Consider a classical statistical problem: assume we have a sample \boldsymbol{X}=\lbrace X_1,\ldots,X_n\rbrace i.i.d. with distribution \mathcal{N}(\theta,\sigma^2). Here we note

X_i|\Theta=\theta\sim\mathcal{N}(\theta,\sigma^2)

since parameter \Theta is a random variable. The idea is to assume that \Theta has a (so called a priori) distribution, e.g.

\Theta\sim\mathcal{N}(\mu_0,\tau^2)

So far it was simple. The idea is then to consider the posterior distribution of \Theta, given the observations X_1=x_1,\ldots,X_n=x_n. Thus, we need to compute the distribution of \Theta|X_1=x_1,\ldots,X_n=x_n, which is here extremely simple (due to properties of the Gaussian distribution), i.e.

\Theta|X_1=x_1,\ldots,X_n=x_n\sim\mathcal{N}(\mu^{\star},(\tau^{\star})^2)

where

\mu^{\star}=\left(\frac{\mu_0}{\tau^2}+\frac{\sum x_i}{\sigma^2}\right)(\tau^{\star})^2\quad\text{ and }\quad(\tau^{\star})^2=\left(\frac{1}{\tau^2}+\frac{n}{\sigma^2}\right)^{-1}

And then, it becomes extremely natural to consider the posterior mean \mu^{\star}=\mathbb{E}(\Theta|X_1=x_1,\ldots,X_n=x_n) as an estimator of \theta given our sample data (and thus, we also have a confidence interval since we know the distribution of \Theta given the observations).
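Just to make this explicit, here is a small R sketch of that Gaussian-Gaussian update (the values of sigma, mu0 and tau below are arbitrary choices, not taken from any real dataset),

set.seed(1)
sigma = 2
x = rnorm(15,1,sigma)                # a sample, with true theta = 1
mu0 = 0; tau = 3                     # a priori: Theta ~ N(mu0,tau^2)
n = length(x)
tau2star = 1/(1/tau^2+n/sigma^2)     # posterior variance
mustar = tau2star*(mu0/tau^2+sum(x)/sigma^2)   # posterior mean
mustar+c(-1,1)*qnorm(.975)*sqrt(tau2star)      # 95% interval for theta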
In order to be sure that we understood, consider now a heads and tails problem, i.e. X_1,\ldots,X_n i.i.d. \mathcal{B}(\theta). Note, first, that \theta has support [0,1]. So we need a distribution on that support. Why not a beta distribution? E.g.

\pi(\theta)=\frac{\theta^{a-1}(1-\theta)^{b-1}}{B(a,b)}

Thus,

f(x_1,\ldots,x_n|\theta)=\theta^{y}(1-\theta)^{n-y}\text{ where }y=\sum_{i=1}^n x_i

and

\pi(\theta)\propto\theta^{a-1}(1-\theta)^{b-1}

From Bayes formula,

\pi(\theta|\boldsymbol{x})\propto f(\boldsymbol{x}|\theta)\,\pi(\theta)

and we get easily

\pi(\theta|\boldsymbol{x})\propto\theta^{a+y-1}(1-\theta)^{b+n-y-1}

which is the density of a Beta distribution, i.e.

\theta|\boldsymbol{x}\sim\mathcal{B}eta(a+y,b+n-y)
u = seq(0,1,length=251)          # grid for theta
a = 2; b = 2; n = 20; y = 12     # prior parameters and data, for instance
prior = dbeta(u,a,b)
posterior = dbeta(u,a+y,n-y+b)

The estimator proposed is then the expected value of that conditional distribution,

\widehat{\theta}=\mathbb{E}(\theta|\boldsymbol{x})=\frac{a+y}{a+b+n}

Note that

\frac{a+y}{a+b+n}=\frac{a+b}{a+b+n}\cdot\frac{a}{a+b}+\frac{n}{a+b+n}\cdot\frac{y}{n}

i.e. the bayesian estimator is a weighted average of the prior mean and the empirical mean.

Further, it is possible to derive confidence intervals using quantiles of the posterior distribution.
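For instance, with the values used in the small numerical example above,

qbeta(c(.025,.975),a+y,n-y+b)   # 95% credible interval for theta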
On the graphs below, we consider the heads/tails sample generated in the code at the end of this post (100 tosses).

A first idea is to consider a uniform prior distribution.

http://freakonometrics.free.fr/blog/bayes-cv-1.gif

A second idea is to consider an asymmetric beta distribution. First, with an asymmetry on the left,

http://freakonometrics.free.fr/blog/bayes-cv-3.gif

or on the right
http://freakonometrics.free.fr/blog/bayes-cv-2.gif

Finally a third idea is simply to get back to the standard Gaussian approximation,

http://freakonometrics.free.fr/blog/bayes-cv-gauss.gif

If we compare the four models, we obtain the following graph, where the plain black line is the Gaussian approximation of the distribution of the empirical mean, and the blue, red and purple lines are the posterior densities obtained from the three beta priors,

http://freakonometrics.free.fr/blog/bayes-cv-all.gif

The code to generate those graphs is the following

u = seq(0,1,length=251)
D1 = D2 = D3 = D4 = matrix(NA,101,length(u))
a1 = 1; b1 = 1                   # uniform prior
D1[1,] = dbeta(u,a1,b1)
a2 = 4; b2 = 2                   # asymmetric beta prior (mean 2/3)
D2[1,] = dbeta(u,a2,b2)
a3 = 2; b3 = 4                   # asymmetric beta prior (mean 1/3)
D3[1,] = dbeta(u,a3,b3)
set.seed(1)
S = sample(0:1,size=100,replace=TRUE)
COULEUR = rev(rainbow(120))
for(s in 1:100){
y=sum(S[1:s])
D1[s+1,]=dbeta(u,a1+y,s-y+b1)
D2[s+1,]=dbeta(u,a2+y,s-y+b2)
D3[s+1,]=dbeta(u,a3+y,s-y+b3)
D4[s+1,]=dnorm(u,y/s,sqrt(y/s*(1-y/s)/s))
plot(u,D1[1,],col="black",type="l",ylim=c(0,8),
xlab="",ylab="")
for(i in 1:s){lines(u,D1[1+i,],col=COULEUR[i])}
points(y/s,0,pch=3,cex=2)
plot(u,D2[1,],col="black",type="l",ylim=c(0,8),
xlab="",ylab="")
for(i in 1:s){lines(u,D2[1+i,],col=COULEUR[i])}
points(y/s,0,pch=3,cex=2)
plot(u,D3[1,],col="black",type="l",ylim=c(0,8),
xlab="",ylab="")
for(i in 1:s){lines(u,D3[1+i,],col=COULEUR[i])}
points(y/s,0,pch=3,cex=2)
plot(u,D4[1,],col="white",type="l",ylim=c(0,8),
xlab="",ylab="")
for(i in 1:s){lines(u,D4[1+i,],col=COULEUR[i])}
points(y/s,0,pch=3,cex=2)
plot(u,D4[s+1,],col="black",lwd=2,type="l",
ylim=c(0,8),xlab="",ylab="")
lines(u,D1[1+i,],col="blue")
lines(u,D2[1+i,],col="red")
lines(u,D3[1+i,],col="purple")
points(y/s,0,pch=3,cex=2)
}

Here, we can see that computations are simple if the prior distribution is conjugate to the distribution of the observations (see here for the list of standard prior and posterior distributions).
So far, I have two questions that naturally show up

  • is it possible to start with a neutral, non-informative prior distribution?
  • what if we are no longer working with conjugate distributions?

Well, I guess I have to work a bit more to answer those questions…. to be continued