# Confidence vs. Credibility Intervals

Tomorrow, for the final lecture of the Mathematical Statistics course, I will try to illustrate, using Monte Carlo simulations, the difference between classical (frequentist) statistics and the Bayesian approach.

The (simple) way I see it is the following,

• for frequentists, a probability is a measure of the frequency of repeated events, so the interpretation is that parameters are fixed (but unknown), and data are random
• for Bayesians, a probability is a measure of the degree of certainty about values, so the interpretation is that parameters are random and data are fixed

Or, to quote *Frequentism and Bayesianism: A Python-driven Primer*, a Bayesian statistician would say “given our observed data, there is a 95% probability that the true value of $\theta$ falls within the credible region” while a Frequentist statistician would say “there is a 95% probability that when I compute a confidence interval from data of this sort, the true value of $\theta$ will fall within it”.

To get more intuition about those quotes, consider a simple problem, with Bernoulli trials, in the context of insurance claims. We want to derive a confidence interval for the probability of claiming a loss. There were $n=1047$ policies, and 159 claims.

Consider the standard (frequentist) confidence interval. What does it mean that $$\overline{x}\pm 1.96\sqrt{\frac{\overline{x}(1-\overline{x})}{n}}$$is the (asymptotic) 95% confidence interval? The way I see it is very simple. Let us generate samples of size $n$, with the same probability as the empirical one, i.e. $\widehat{\theta}$ (which is the meaning of “from data of this sort”). For each sample, compute the confidence interval with the relationship above. It is a 95% confidence interval because, in 95% of the scenarios, the empirical value lies in the confidence interval. From a computational point of view, the idea is the following,

> xbar <- 159   # number of claims (so the empirical mean is xbar/n)
> n <- 1047     # number of policies
> ns <- 100     # number of simulated scenarios
> M=matrix(rbinom(n*ns,size=1,prob=xbar/n),nrow=n)   # one sample per column

I generate ns = 100 samples of size $n$. For each sample, I compute the mean and the confidence interval, from the previous relationship,

> fIC=function(x) mean(x)+c(-1,1)*1.96*sqrt(mean(x)*(1-mean(x)))/sqrt(n)
> IC=t(apply(M,2,fIC))   # one confidence interval per sample
> MN=apply(M,2,mean)     # empirical mean of each sample

Then we plot all those confidence intervals, in red those that do not contain the empirical mean,

> k=(xbar/n<IC[,1])|(xbar/n>IC[,2])
> plot(MN,1:ns,xlim=range(IC),axes=FALSE,
+ xlab="",ylab="",pch=19,cex=.7,
+ col=c("blue","red")[1+k])
> axis(1)
> segments(IC[,1],1:ns,IC[,2],1:ns,
+ col=c("blue","red")[1+k])
> abline(v=xbar/n)
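
As a quick sanity check, we can also estimate the actual coverage, i.e. the proportion of intervals containing the empirical value; it should be close to 95%. Here I use many more scenarios (say ns2 = 10,000, with new names, so that we do not overwrite the objects reused below) to reduce the Monte Carlo noise,

> ns2 <- 10000
> M2 <- matrix(rbinom(n*ns2,size=1,prob=xbar/n),nrow=n)
> IC2 <- t(apply(M2,2,fIC))
> mean((xbar/n>=IC2[,1])&(xbar/n<=IC2[,2]))   # empirical coverage, close to .95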

Now, what about the Bayesian credible interval? Assume that the prior distribution for the probability of claiming a loss is a $\mathcal{B}(\alpha,\beta)$ (Beta) distribution. We have seen in the course that, since the Beta distribution is the conjugate prior of the Bernoulli likelihood, the posterior distribution will also be Beta. More precisely, it is

$$\mathcal{B}\left(\alpha+\sum x_i,\ \beta+n-\sum x_i\right)$$
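
Indeed, the posterior density is proportional to the prior density times the likelihood,

$$\pi(\theta\mid x_1,\dots,x_n)\propto \theta^{\alpha-1}(1-\theta)^{\beta-1}\cdot \theta^{\sum x_i}(1-\theta)^{n-\sum x_i}=\theta^{\alpha+\sum x_i-1}(1-\theta)^{\beta+n-\sum x_i-1}$$

which is, up to the normalizing constant, the density of that Beta distribution.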

Based on that property, the credible interval is obtained from quantiles of that (posterior) distribution. In the code below, a uniform prior is used, i.e. $\alpha=\beta=1$ (hence the 1+xbar and 1+n-xbar parameters),

> u=seq(.1,.2,length=501)
> v=dbeta(u,1+xbar,1+n-xbar)   # posterior density, with a uniform prior
> plot(u,v,axes=FALSE,type="l")
> I=u<qbeta(.025,1+xbar,1+n-xbar)
> polygon(c(u[I],rev(u[I])),c(v[I],
+ rep(0,sum(I))),col="red",density=30,border=NA)
> I=u>qbeta(.975,1+xbar,1+n-xbar)
> polygon(c(u[I],rev(u[I])),c(v[I],
+ rep(0,sum(I))),col="red",density=30,border=NA)
> axis(1)
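
Observe that the bounds of that 95% credible interval are simply posterior quantiles and, with that flat prior and more than a thousand observations, they are numerically very close to the frequentist confidence interval,

> qbeta(c(.025,.975),1+xbar,1+n-xbar)   # 95% credible interval
> xbar/n+c(-1,1)*1.96*sqrt((xbar/n)*(1-xbar/n)/n)   # frequentist interval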

What does it mean, here, that we have a 95% credible interval? Well, this time, we do not draw using the empirical mean, but using possible values of the probability, drawn from that posterior distribution (given the observations),

> pk <- rbeta(ns,1+xbar,1+n-xbar)

In green, below, we can visualize the histogram of those values

> hist(pk,prob=TRUE,col="light green",
+ border="white",axes=FALSE,
+ main="",xlab="",ylab="",lwd=3,xlim=c(.12,.18))
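
And we can add the posterior density computed above on top of that histogram (since the histogram was drawn with prob=TRUE, the two should match),

> lines(u,v,lwd=2)   # posterior density, on top of the histogram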

And here again, let us generate samples (one for each simulated probability), and compute the empirical means,

> M=matrix(rbinom(n*ns,size=1,prob=rep(pk,
+ each=n)),nrow=n)   # column j is generated with probability pk[j]
> MN=apply(M,2,mean)

Here, there is a 95% chance that the underlying probabilities (the pk's, drawn from the posterior) lie in the credible interval, defined using quantiles of the posterior distribution, and the empirical means of the simulated samples should be close to them. We can actually visualize all those values: in black, the probability used to generate each sample, and then, in blue or red, the averages obtained on those simulated samples,

> abline(v=qbeta(c(.025,.975),1+xbar,1+
+ n-xbar),col="red",lty=2)
> points(pk,seq(1,40,length=ns),pch=19,cex=.7)
> k=(MN<qbeta(.025,1+xbar,1+n-xbar))|
+ (MN>qbeta(.975,1+xbar,1+n-xbar))
> points(MN,seq(1,40,length=ns),
+ pch=19,cex=.7,col=c("blue","red")[1+k])
> segments(MN,seq(1,40,length=ns),
+ pk,seq(1,40,length=ns),col="grey")
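
Here again, a quick sanity check: the proportion of simulated probabilities pk inside the credible interval should be close to 95% (up to Monte Carlo noise, since ns is only 100), and it will be somewhat smaller for the empirical means, because of the additional binomial sampling variability,

> mean((pk>=qbeta(.025,1+xbar,1+n-xbar))&
+ (pk<=qbeta(.975,1+xbar,1+n-xbar)))   # close to .95
> mean(!k)   # same computation, for the empirical means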

More details and examples on Bayesian statistics, seen through the eyes of a (probably) non-Bayesian statistician, can be found in my slides from my talk in London, last summer.

## 12 thoughts on “Confidence vs. Credibility Intervals”

1. Vincent says:

Hi

Do you have a PDF version of this article because the equations don’t display anymore (there is an error message saying that you are “over quota”).

2. alex cristian varga says:

I wish to translate this.

3. AM says:

Very interesting & powerful

Time to publish all the slides… and even a MOOC?

4. Dominik says:

Dear Arthur,
thank you for that interesting post. It’s maybe a minor point, but why are you assuming the observed sample ratio is the true parameter when generating your artificial data? I would rather think the other way round: I would assume a true, unobserved parameter value giving rise to a population from which samples are repeatedly drawn. Then a confidence interval is computed for each sample. We expect that 95% of those intervals will cover the true value, i.e., a single interval has a probability of 0.95 of covering the true value.

1. in the frequentist approach, I believe that this is how it should be done: given your observed empirical parameter, you sample. I cannot use the “true” parameter… since it is unknown.
The 95% confidence interval is centered not on the true value, but on the empirical one! So I do believe that this is how frequentist statisticians think…

1. Dominik says:

If I understood your post correctly, you are sampling from a distribution with the empirical mean as parameter, and then constructing zillions of confidence intervals from those samples. I think that, in this case, 95% of those intervals will include the empirical mean. However, what we actually care about is the true parameter from which the observed data (used to compute this empirical mean) stem. And I think this true one will not lie in 95% of your intervals.
I cooked up some code that attempts to show this, using the standard normal as an example, and uploaded it here:
http://pastebin.com/Y954D73r

2. SN says:

No, this is not how frequentist statisticians think. Frequentist statisticians think in terms of repeating the experiment. They assume a true, unknown value of the mean of the sampling distribution (the “population mean”). Each time the experiment is repeated, a new sample mean is observed. What is the interest of saying “if I sample around the empirical mean, then…”?

5. Thanks, I may borrow this for my last class as well!

1. Thanks Xi’an! Glad you like it! If you want some more examples, in an actuarial context, you can find some in my slides (some graphs contain animations… since it is the most convenient way I have to explain simulation-based results).
