Tomorrow, for the final lecture of the Mathematical Statistics course, I will try to illustrate – using Monte Carlo simulations – the difference between classical (frequentist) statistics and the Bayesian approach.
The (simple) way I see it is the following,
- for frequentists, a probability is a measure of the frequency of repeated events, so the interpretation is that parameters are fixed (but unknown) and data are random
- for Bayesians, a probability is a measure of the degree of certainty about values, so the interpretation is that parameters are random and data are fixed
Or to quote Frequentism and Bayesianism: A Python-driven Primer, a Bayesian statistician would say “given our observed data, there is a 95% probability that the true value of \theta falls within the credible region” while a Frequentist statistician would say “there is a 95% probability that when I compute a confidence interval from data of this sort, the true value of \theta will fall within it”.
To get more intuition about those quotes, consider a simple problem, with Bernoulli trials, coming from insurance claims: we want to derive some confidence interval for the probability to claim a loss. There were n = 1047 policies, and 159 claims.
Consider the standard (frequentist) confidence interval. What does it mean that \overline{x}\pm 1.96\sqrt{\frac{\overline{x}(1-\overline{x})}{n}} is the (asymptotic) 95% confidence interval? The way I see it is very simple. Let us generate some samples, of size n, with the same probability as the empirical one, i.e. \widehat{\theta} (which is the meaning of “from data of this sort”). For each sample, compute the confidence interval with the relationship above. It is a 95% confidence interval because, in 95% of the scenarios, the empirical value lies in the confidence interval. From a computational point of view, it is the following idea,
> xbar <- 159
> n <- 1047
> ns <- 100
> M=matrix(rbinom(n*ns,size=1,prob=xbar/n),nrow=n)
I generate ns = 100 samples of size n. For each sample, I compute the mean, and the confidence interval, from the previous relationship,
> fIC=function(x) mean(x)+c(-1,1)*1.96*sqrt(mean(x)*(1-mean(x)))/sqrt(n)
> IC=t(apply(M,2,fIC))
> MN=apply(M,2,mean)
Then we plot all those confidence intervals, in red when they do not contain the empirical mean,
> k=(xbar/n<IC[,1])|(xbar/n>IC[,2])
> plot(MN,1:ns,xlim=range(IC),axes=FALSE,
+ xlab="",ylab="",pch=19,cex=.7,
+ col=c("blue","red")[1+k])
> axis(1)
> segments(IC[,1],1:ns,IC[,2],1:ns,
+ col=c("blue","red")[1+k])
> abline(v=xbar/n)
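As a quick sanity check (a small addition, reusing xbar, n and IC from the code above), we can also compute the proportion of simulated intervals that do contain the empirical mean; it should be close to 95%,

> # proportion of the simulated confidence intervals containing xbar/n
> mean((xbar/n>=IC[,1])&(xbar/n<=IC[,2]))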
Now, what about the Bayesian credible interval? Assume that the prior distribution for the probability to claim a loss is a Beta(1,1) (i.e. uniform) distribution. We’ve seen in the course that, since the Beta distribution is the conjugate of the Bernoulli one, the posterior distribution will also be Beta. More precisely, here, \theta\mid\mathbf{x}\sim\text{Beta}\left(1+\sum_{i=1}^n x_i,\,1+n-\sum_{i=1}^n x_i\right).
Based on that property, the credible interval is obtained from quantiles of that (posterior) distribution,
> u=seq(.1,.2,length=501)
> v=dbeta(u,1+xbar,1+n-xbar)
> plot(u,v,axes=FALSE,type="l")
> I=u<qbeta(.025,1+xbar,1+n-xbar)
> polygon(c(u[I],rev(u[I])),c(v[I],
+ rep(0,sum(I))),col="red",density=30,border=NA)
> I=u>qbeta(.975,1+xbar,1+n-xbar)
> polygon(c(u[I],rev(u[I])),c(v[I],
+ rep(0,sum(I))),col="red",density=30,border=NA)
> axis(1)
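For reference (a small addition, using only quantities already defined above), the bounds of that 95% credible interval can be obtained directly from the Beta quantile function,

> # lower and upper bounds of the 95% (equal-tail) credible interval
> qbeta(c(.025,.975),1+xbar,1+n-xbar)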
What does it mean, here, to have a 95% credible interval? Well, this time, we do not draw samples using the empirical mean, but using some possible probability drawn from that posterior distribution (given the observations),
> pk <- rbeta(ns,1+xbar,1+n-xbar)
In green, below, we can visualize the histogram of those values
> hist(pk,prob=TRUE,col="light green",
+ border="white",axes=FALSE,
+ main="",xlab="",ylab="",lwd=3,xlim=c(.12,.18))
And here again, let us generate samples, and compute the empirical probabilities,
> M=matrix(rbinom(n*ns,size=1,prob=rep(pk,
+ each=n)),nrow=n)
> MN=apply(M,2,mean)
Here, there is a 95% chance that those empirical means lie in the credible interval, defined using quantiles of the posterior distribution. We can actually visualize all those means: in black, the mean used to generate the sample, and then, in blue or red, the averages obtained on those simulated samples,
> abline(v=qbeta(c(.025,.975),1+xbar,
+ 1+n-xbar),col="red",lty=2)
> points(pk,seq(1,40,length=ns),pch=19,cex=.7)
> k=(MN<qbeta(.025,1+xbar,1+n-xbar))|
+ (MN>qbeta(.975,1+xbar,1+n-xbar))
> points(MN,seq(1,40,length=ns),
+ pch=19,cex=.7,col=c("blue","red")[1+k])
> segments(MN,seq(1,40,length=ns),
+ pk,seq(1,40,length=ns),col="grey")
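To quantify this (a small check, reusing the indicator k computed above), one can simply look at the proportion of simulated empirical means falling outside the credible interval,

> # proportion of simulated empirical means outside the 95% credible interval
> mean(k)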
More details and examples on Bayesian statistics, seen through the eyes of a (probably) not Bayesian statistician, can be found in my slides, from my talk in London last summer.
Dear Arthur,
Many thanks for the post! It was super-helpful for me in thinking through the differences between the two types of intervals.
I am puzzling over your Bayesian code, and think you may be double-counting the uncertainty. In particular, I think you’re actually “done” when you’ve computed pk (sampled from the posterior distribution). I believe your next step of using the sampled parameters (probabilities) in pk for sampling outcomes from the binomial likelihood is “compounding” (double-counting) the uncertainty. Apologies; I don’t know how to format code in comments. But, try replacing this:
M=matrix(rbinom(n*ns,size=1,prob=rep(pk,each=n)),nrow=n)
MN=apply(M,2,mean)
abline(v=qbeta(c(.025,.975),1+xbar,1+n-xbar),col="red",lty=2)
points(pk,seq(1,40,length=ns),pch=19,cex=.7)
k=(MN<qbeta(.025,1+xbar,1+n-xbar))|(MN>qbeta(.975,1+xbar,1+n-xbar))
points(MN,seq(1,40,length=ns),pch=19,cex=.7,col=c("blue","red")[1+k])
segments(MN,seq(1,40,length=ns),pk,seq(1,40,length=ns),col="grey")
With this:
abline(v=qbeta(c(.025,.975),1+xbar,1+n-xbar),col="red",lty=2)
k=(pk<qbeta(.025,1+xbar,1+n-xbar))|(pk>qbeta(.975,1+xbar,1+n-xbar))
library(scales) # for the alpha() transparency helper
points(pk,seq(1,40,length=ns),pch=19,cex=.7,col=alpha(c("blue","red")[1+k],0.1))
cat("% samples outside of CI:", sum(k)/ns*100, "\n")
Then, if you set ns to, say, 5000, you’ll see that we *are* getting ~5% of samples outside of the CI. (As you increase the number of sims via ns, you can get arbitrarily close to 5%.) Whereas with your code sum(k)/ns*100 is more like 16.5%.
One reason I can think of for using the binomial likelihood on the posterior parameters is to compute a “posterior predictive distribution” — that is, a distribution of *outcomes* (vs. of *parameters*). To get it, you just plug the posterior (parameters) into the likelihood function:
ppSamples = rbinom(ns, size=n, prob=pk)
Which gives us a posterior distribution of insurance claim numbers, centered around 159. This distribution accounts for both “parameter uncertainty” (via the posterior parameter distribution) and “outcome uncertainty” (via the likelihood function).
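For instance (just a small addition, assuming the ppSamples vector defined above), a 95% interval for the predicted number of claims can be read off directly:

# 95% equal-tail interval of the posterior predictive distribution
quantile(ppSamples, c(.025, .975))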
You certainly can take more than one “predictive” sample from the likelihood per sampled posterior parameter, but it doesn’t have any effect on the distribution. E.g., ppSamples2 = rbinom(ns*100, size=n, prob=rep(pk, 100)) gives the same distribution as ppSamples.
Hi
Do you have a PDF version of this article because the equations don’t display anymore (there is an error message saying that you are “over quota”).
Thx in advance for your answer
I would like to translate this post.
Very interesting & powerful
Time to publish all slides … and even a MOOC ?
Dear Arthur,
thank you for that interesting post. It’s maybe a minor point, but why are you treating the observed sample ratio as the true parameter for generating your artificial data? I would rather think of it the other way round: I would assume a true, unobserved parameter value giving rise to a population from which samples are repeatedly drawn. Then a confidence interval is computed for each sample. We expect that 95% of those intervals will cover the true value, i.e., a single interval has a probability of 0.95 of covering the true value.
In the frequentist setting, I believe that this is how it should be done: given your observed empirical parameter, you sample. I cannot use the “true” parameter… since it is unknown.
The 95% confidence interval is centered not on the true value, but on the empirical one! So I do believe that this is how frequentist statisticians think…
If I understood your post correctly, you are sampling from a distribution with the empirical mean as parameter and then constructing zillions of confidence intervals from those samples. I think in this case, 95% of those intervals will include the empirical mean. However, what we actually care about is the true parameter from which the observed data (used to compute this empirical mean) stem. And I think this true value will not lie in 95% of your intervals.
I cooked up some code that attempts to show this using the standard normal as an example and uploaded it here:
http://pastebin.com/Y954D73r
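The idea is roughly the following (a minimal sketch along those lines, with an arbitrary true mean chosen purely for illustration):

set.seed(1)
mu  <- 2        # the "true" mean, normally unknown
n   <- 100      # sample size
ns  <- 10000    # number of simulated samples

x0    <- rnorm(n, mean = mu)  # the one sample we actually observe
xbar0 <- mean(x0)             # its empirical mean

# sample around the empirical mean (as in the post) and build 95% CIs
M   <- matrix(rnorm(n * ns, mean = xbar0, sd = sd(x0)), nrow = n)
MN  <- apply(M, 2, mean)
SE  <- apply(M, 2, sd) / sqrt(n)
low <- MN - 1.96 * SE
up  <- MN + 1.96 * SE

mean((xbar0 >= low) & (xbar0 <= up)) # about 95% of the intervals cover the empirical mean
mean((mu >= low) & (mu <= up))       # typically fewer cover the true mean mu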
No, this is not how frequentist statisticians think. Frequentist statisticians think in terms of repeating the experiment. They assume a true, unknown value of the mean of the sampling distribution (the “population mean”). Each time the experiment is repeated, a new sample mean is observed. What is the interest of saying “if I sample around the empirical mean, then…”?
Thanks, I may borrow this for my last class as well!
Thanks Xi’an ! glad you like it…! if you want some more examples – in an actuarial context – you can find some in my slides (some graphs contain animations… since it is the most convenient way I have to explain simulation based results).