Confidence vs. Credibility Intervals

Tomorrow, for the final lecture of the Mathematical Statistics course, I will try to illustrate, using Monte Carlo simulations, the difference between classical statistics and the Bayesian approach.

The (simple) way I see it is the following,

  • for frequentists, a probability is a measure of the frequency of repeated events, so the interpretation is that parameters are fixed (but unknown) and data are random
  • for Bayesians, a probability is a measure of the degree of certainty about values, so the interpretation is that parameters are random and data are fixed

Or, to quote Frequentism and Bayesianism: A Python-driven Primer, a Bayesian statistician would say "given our observed data, there is a 95% probability that the true value of $\theta$ falls within the credible region", while a Frequentist statistician would say "there is a 95% probability that when I compute a confidence interval from data of this sort, the true value of $\theta$ will fall within it".

To get more intuition about those quotes, consider a simple problem, with Bernoulli trials, coming from insurance claims. We want to derive an interval estimate for the probability of claiming a loss. There were $n = 1047$ policies, and 159 claims.
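
To fix the numbers, here is the empirical claim frequency, together with the asymptotic 95% interval discussed just below (a quick computation using only those two values, with a throwaway variable p for the empirical frequency),

> p <- 159/1047                         # empirical claim frequency, roughly 0.152
> p + c(-1,1)*1.96*sqrt(p*(1-p)/1047)   # roughly (0.130, 0.174)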

Consider the standard (frequentist) confidence interval. What does it mean that

$$\overline{x}\pm 1.96\sqrt{\frac{\overline{x}(1-\overline{x})}{n}}$$

is the (asymptotic) 95% confidence interval? The way I see it is very simple. Let us generate samples of size $n$, with the same probability as the empirical one, i.e. $\widehat{\theta}$ (that is the meaning of "from data of this sort"). For each sample, compute the confidence interval using the relationship above. It is a 95% confidence interval because, in 95% of the scenarios, the empirical value lies in the confidence interval. From a computational point of view, the idea is the following,

> xbar <- 159    # number of claims
> n <- 1047      # number of policies
> ns <- 100      # number of simulated samples
> M=matrix(rbinom(n*ns,size=1,prob=xbar/n),nrow=n)   # one sample per column

I generate 100 samples of size $n$. For each sample, I compute the mean and the confidence interval, from the previous relationship

> fIC=function(x) mean(x)+c(-1,1)*1.96*sqrt(mean(x)*(1-mean(x)))/sqrt(n)
> IC=t(apply(M,2,fIC))   # one 95% confidence interval per sample (ns x 2 matrix)
> MN=apply(M,2,mean)     # empirical mean of each sample

Then we plot all those confidence intervals, in red when they do not contain the empirical mean,

> k=(xbar/n<IC[,1])|(xbar/n>IC[,2])   # TRUE when the interval misses the empirical value
> plot(MN,1:ns,xlim=range(IC),axes=FALSE,
+ xlab="",ylab="",pch=19,cex=.7,
+ col=c("blue","red")[1+k])
> axis(1)
> segments(IC[,1],1:ns,IC[,2],1:ns,
+ col=c("blue","red")[1+k])
> abline(v=xbar/n)                    # vertical line at the empirical claim frequency
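
As a quick sanity check, reusing the indicator k defined above, the proportion of simulated intervals that do contain the empirical value should be close to 95%,

> mean(!k)   # proportion of intervals containing xbar/n, should be close to 0.95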

Now, what about the Bayesian credible interval? Assume that the prior distribution for the probability to claim a loss is a Beta $\mathcal{B}(\alpha,\beta)$ distribution. We have seen in the course that, since the Beta distribution is the conjugate prior of the Bernoulli one, the posterior distribution is also a Beta distribution. More precisely, it is

$$\mathcal{B}\left(\alpha+\sum x_i,\;\beta+n-\sum x_i\right)$$
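
To see why, write the posterior density as the prior times the likelihood, up to a normalizing constant,

$$\pi(\theta\mid x_1,\dots,x_n)\propto\theta^{\alpha-1}(1-\theta)^{\beta-1}\cdot\theta^{\sum x_i}(1-\theta)^{n-\sum x_i}=\theta^{\alpha+\sum x_i-1}(1-\theta)^{\beta+n-\sum x_i-1}$$

which is, up to that constant, the density of the Beta distribution above.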

Based on that property, the credible interval is obtained from quantiles of that (posterior) distribution. In the code below, a uniform prior is used, i.e. $\alpha=\beta=1$, so the posterior is the $\mathcal{B}(1+159,\,1+1047-159)$ distribution,

> u=seq(.1,.2,length=501)
> v=dbeta(u,1+xbar,1+n-xbar)          # posterior density, with a uniform prior
> plot(u,v,axes=FALSE,type="l")
> I=u<qbeta(.025,1+xbar,1+n-xbar)     # lower 2.5% tail
> polygon(c(u[I],rev(u[I])),c(v[I],
+ rep(0,sum(I))),col="red",density=30,border=NA)
> I=u>qbeta(.975,1+xbar,1+n-xbar)     # upper 2.5% tail
> polygon(c(u[I],rev(u[I])),c(v[I],
+ rep(0,sum(I))),col="red",density=30,border=NA)
> axis(1)
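
The bounds of that 95% credible interval are simply the posterior quantiles,

> qbeta(c(.025,.975),1+xbar,1+n-xbar)   # 95% credible interval, numerically close to the confidence interval above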

What does it mean, here, to have a 95% credible interval? Well, this time, we do not generate samples using the empirical mean, but using possible values of the probability, drawn from the posterior distribution (given the observations)

> pk <- rbeta(ns,1+xbar,1+n-xbar)   # one probability per sample, drawn from the posterior

In green, below, we can visualize the histogram of those values

> hist(pk,prob=TRUE,col="light green",
+ border="white",axes=FALSE,
+ main="",xlab="",ylab="",lwd=3,xlim=c(.12,.18))

And here again, let us generate samples, and compute the empirical probabilities,

> M=matrix(rbinom(n*ns,size=1,prob=rep(pk,
+ each=n)),nrow=n)       # column j is generated with probability pk[j]
> MN=apply(M,2,mean)     # empirical mean of each simulated sample

Here, there is a 95% chance that the probability used to generate a sample lies in the credible interval, defined using quantiles of the posterior distribution. We can actually visualize all those values: in black, the probability used to generate each sample, and, in blue or red, the empirical means obtained on those simulated samples (in red when they fall outside the credible interval),

> abline(v=qbeta(c(.025,.975),1+xbar,1+
+ n-xbar),col="red",lty=2)            # bounds of the 95% credible interval
> points(pk,seq(1,40,length=ns),pch=19,cex=.7)   # posterior draws, in black
> k=(MN<qbeta(.025,1+xbar,1+n-xbar))|
+ (MN>qbeta(.975,1+xbar,1+n-xbar))    # TRUE when the empirical mean is outside the interval
> points(MN,seq(1,40,length=ns),
+ pch=19,cex=.7,col=c("blue","red")[1+k])
> segments(MN,seq(1,40,length=ns),
+ pk,seq(1,40,length=ns),col="grey")  # link each mean to the probability that generated it
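
And, as a quick check, reusing the probabilities pk drawn from the posterior above, about 95% of them should lie within the credible interval,

> mean(pk>=qbeta(.025,1+xbar,1+n-xbar) &
+ pk<=qbeta(.975,1+xbar,1+n-xbar))      # should be close to 0.95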

More details and examples on Bayesian statistics, seen through the eyes of a (probably) non-Bayesian statistician, can be found in my slides from my talk in London last summer.


But what happened during the First World War?

The short answer is that people died. A lot of them. That being said, it does not tell us much. We can compare age pyramids to better understand what may have happened. Just before the war (in 1913), the age pyramid looked like this (using data from mortality.org)

> EXPO <- read.table(
+ "http://freakonometrics.free.fr/Exposures-France.txt", header=TRUE,skip=2)
> EM=EXPO$Male
> EF=EXPO$Female
> Y=EXPO$Year
> A=EXPO$Age
> I=which(A=="110+")
> base=data.frame(Female=EF,Male=EM,Y=Y,Ages=A)
> base=base[-I,]                       # remove the open "110+" age class
> France1913=base[base$Y==1913,]
> France1919=base[base$Y==1919,]
> France1913$Ages=as.numeric(
+ as.character(France1913$Ages))
> France1919$Ages=as.numeric(
+ as.character(France1919$Ages))
> France1913=France1913[,c("Male","Female","Ages")]   # left side, right side, centre labels
> library(pyramid)
> plot(c(0,100), c(0,100), type="n", 
+ frame=FALSE, axes=FALSE, xlab="", ylab="",
+ main="Pyramide des Ages, France, 1913")
> pyramidf(France1913, frame=c(10, 75, 0, 90), 
+ Clab="", Lcol="skyblue", Rcol="pink",
+ Cstep=10, Laxis=0:4*60000, AxisFM="d")

On the other hand, just after the war (in 1919), the age pyramid looked like the one below.
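
It can be drawn exactly the same way, reusing the France1919 data frame prepared above (a minimal sketch, simply mirroring the 1913 call),

> France1919=France1919[,c("Male","Female","Ages")]
> plot(c(0,100), c(0,100), type="n",
+ frame=FALSE, axes=FALSE, xlab="", ylab="",
+ main="Pyramide des Ages, France, 1919")
> pyramidf(France1919, frame=c(10, 75, 0, 90),
+ Clab="", Lcol="skyblue", Rcol="pink",
+ Cstep=10, Laxis=0:4*60000, AxisFM="d")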


R training at CIMA, in Gabon

I will soon go to Gabon for a week, to the headquarters of CIMA, the Conférence Interafricaine des Marchés d'Assurance, for an R training course.

Since I want the course to be as interactive as possible, I have not prepared slides. I will nevertheless put some documents online, if necessary. Last week, I wrote a post explaining how to import mortality tables. On Monday, we will work on those datasets to learn how to handle the basic R functions.

As an introduction, I will also mention a few books or lecture notes, in pdf,

  • “R pour les débutants” by Emmanuel Paradis (pdf)
  • “Introduction à la programmation en R, Quatrième édition” by Vincent Goulet (pdf)
  • “Brise Glace-R (ouvrir la voie aux pôles statistiques)” by Andrew Robinson and Arnaud Schloesing (pdf)
  • “Introduction à R” by Julien Barnier (pdf)
  • “Aide mémoire R” by Mayeul Kauffmann (pdf)
  • “Lire ; Compter ; Tester... avec R” (pdf)
  • “L'Actuariat avec R” by Arthur Charpentier and Christophe Dutang (pdf)

and I can also mention the slides by Ewen Gallic,

(regarding the last set of slides, I doubt, however, that we will talk about ggplot2; I think we will stick to the base functions). To be continued...


Excel (and French people) are such a pain in the...

A few days ago, I published a post entitled extracting datasets from excel files in a zipped folder, because I wanted to use datasets that were online, in some (zipped) Excel format. The first difficult part was the folder with a non-standard character (the French é). Because next week I should be using those datasets in a crash course in Gabon (in Africa), I wanted to make sure that everything will go fine when we run the code. And discussing @3wen's trick on Day 1 was maybe not the best way to explain that R is a very simple tool that should be used for data analysis...

To make things easier, I uploaded the xlsx files on my webpage. I wanted to use the xlsx R package. Unfortunately, on my Linux laptop, I had trouble installing that package.
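
For the record, one possible workaround, which is not what the original post used, is the readxl package, which does not rely on Java; the file name below is purely hypothetical, just to illustrate the call,

> library(readxl)
> download.file("http://freakonometrics.free.fr/some_file.xlsx",   # hypothetical file name
+ "local.xlsx", mode="wb")
> base <- read_excel("local.xlsx", sheet=1)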


Le Mans Insurance & Finance Risk Colloquium

This Thursday and Friday, a Colloquium on Insurance and Finance risks will take place at the University of Le Mans. I will be giving a talk on non- and semi-parametric inference for risk measures, inspired by recent work with Emmanuel Flachaire. Our first paper, log-transform kernel density estimation of income distribution, is online at http://papers.ssrn.com/id=2514882, and should appear soon. The second one is still in progress, and codes are still running. I will upload the slides once the working paper is available...
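
To give the flavour of the log-transform idea (a minimal sketch on simulated data, not the estimator studied in the paper): estimate the density of the logarithm of the observations with a standard kernel, then map it back to the original scale, since f(x) = g(log x)/x when g is the density of log X,

> x <- rlnorm(1000)                 # simulated positive, income-like data
> d <- density(log(x))              # kernel density estimate on the log scale
> g <- approxfun(d$x, d$y)
> u <- seq(.1, 5, length=501)
> plot(u, g(log(u))/u, type="l")    # back-transformed density: f(x) = g(log x)/x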
