Confidence vs. Credibility Intervals

Tomorrow, for the final lecture of the Mathematical Statistics course, I will try to illustrate – using Monte Carlo simulations – the difference between classical statistics and the Bayesian approach.

The (simple) way I see it is the following,

  • for frequentists, a probability is a measure of the frequency of repeated events, so the interpretation is that parameters are fixed (but unknown) and data are random
  • for Bayesians, a probability is a measure of the degree of certainty about values, so the interpretation is that parameters are random and data are fixed

Or to quote Frequentism and Bayesianism: A Python-driven Primer,  a Bayesian statistician would say “given our observed data, there is a 95% probability that the true value of \theta falls within the credible region” while a Frequentist statistician would say “there is a 95% probability that when I compute a confidence interval from data of this sort, the true value of \theta will fall within it”.

To get more intuition about those quotes, consider a simple problem, with Bernoulli trials, here insurance claims. We want to derive a confidence interval for the probability of claiming a loss. There were n = 1047 policies and 159 claims, so the empirical claim frequency is \widehat{\theta}=159/1047\approx 15.2\%.

Consider the standard (frequentist) confidence interval. What does it mean that \overline{x}\pm 1.96\sqrt{\frac{\overline{x}(1-\overline{x})}{n}} is the (asymptotic) 95% confidence interval? The way I see it is very simple. Let us generate samples of size n, with the same probability as the empirical one, i.e. \widehat{\theta} (which is the meaning of “from data of this sort”). For each sample, compute the confidence interval with the relationship above. It is a 95% confidence interval because, in 95% of the scenarios, the empirical value used to generate the samples lies in the confidence interval. From a computational point of view, the idea is the following,

> xbar <- 159
> n <- 1047
> ns <- 100
> M=matrix(rbinom(n*ns,size=1,prob=xbar/n),nrow=n)

I generate ns = 100 samples of size n. For each sample, I compute the mean and the confidence interval, using the previous relationship,

> fIC=function(x) mean(x)+c(-1,1)*1.96*sqrt(mean(x)*(1-mean(x)))/sqrt(n)
> IC=t(apply(M,2,fIC))
> MN=apply(M,2,mean)

Then we plot all those confidence intervals, in red when they do not contain the empirical mean,

> k=(xbar/n<IC[,1])|(xbar/n>IC[,2])
> plot(MN,1:ns,xlim=range(IC),axes=FALSE,
+ xlab="",ylab="",pch=19,cex=.7,
+ col=c("blue","red")[1+k])
> axis(1)
> segments(IC[,1],1:ns,IC[,2],1:
+ ns,col=c("blue","red")[1+k])
> abline(v=xbar/n)
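
We can check that the announced coverage is indeed (roughly) there: the proportion of blue intervals, i.e. intervals containing the value used to generate the samples, should be close to 95%,

> mean(!k)   # proportion of intervals containing xbar/n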

Now, what about the Bayesian credible interval? Assume that the prior distribution for the probability of claiming a loss is a Beta distribution, \mathcal{B}(\alpha,\beta). We have seen in the course that, since the Beta distribution is the conjugate prior of the Bernoulli distribution, the posterior distribution will also be Beta. More precisely, it is

\mathcal{B}\left(\alpha+\sum x_i,\ \beta+n-\sum x_i\right)
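
This follows directly from Bayes’ formula, since the posterior density is proportional to the prior density times the likelihood,

\pi(\theta\mid x_1,\dots,x_n)\propto\theta^{\alpha-1}(1-\theta)^{\beta-1}\cdot\theta^{\sum x_i}(1-\theta)^{n-\sum x_i}=\theta^{\alpha+\sum x_i-1}(1-\theta)^{\beta+n-\sum x_i-1}

which is, up to the normalizing constant, the density of that Beta distribution.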

Based on that property, the credible interval is obtained from quantiles of that (posterior) distribution,

> u=seq(.1,.2,length=501)
> v=dbeta(u,1+xbar,1+n-xbar)
> plot(u,v,axes=FALSE,type="l")
> I=u<qbeta(.025,1+xbar,1+n-xbar)
> polygon(c(u[I],rev(u[I])),c(v[I],
+ rep(0,sum(I))),col="red",density=30,border=NA)
> I=u>qbeta(.975,1+xbar,1+n-xbar)
> polygon(c(u[I],rev(u[I])),c(v[I],
+ rep(0,sum(I))),col="red",density=30,border=NA)
> axis(1)
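
The bounds of the 95% credible interval are simply the 2.5% and 97.5% quantiles of that posterior (here with a uniform \mathcal{B}(1,1) prior, which is what the 1+xbar and 1+n-xbar terms correspond to). For comparison with the frequentist interval,

> qbeta(c(.025,.975),1+xbar,1+n-xbar)   # Bayesian credible interval
> xbar/n+c(-1,1)*1.96*sqrt((xbar/n)*(1-xbar/n)/n)   # frequentist confidence interval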

What does it mean, here, to have a 95% credible interval? Well, this time, we do not draw samples using the empirical mean, but using some possible probability, drawn from that posterior distribution (given the observations),

> pk <- rbeta(ns,1+xbar,1+n-xbar)

In green, below, we can visualize the histogram of those values

> hist(pk,prob=TRUE,col="light green",
+ border="white",axes=FALSE,
+ main="",xlab="",ylab="",lwd=3,xlim=c(.12,.18))

And here again, let us generate samples, and compute the empirical probabilities,

> M=matrix(rbinom(n*ns,size=1,prob=rep(pk,
+ each=n)),nrow=n)
> MN=apply(M,2,mean)

Here, there is a 95% chance that those empirical means lie in the credible interval, defined using quantiles of the posterior distribution. We can actually visualize all those means: in black, the mean used to generate each sample, and then, in blue or red, the averages obtained on those simulated samples,

> abline(v=qbeta(c(.025,.975),1+xbar,1+
+ n-xbar),col="red",lty=2)
> points(pk,seq(1,40,length=ns),pch=19,cex=.7)
> k=(MN<qbeta(.025,1+xbar,1+n-xbar))|
+ (MN>qbeta(.975,1+xbar,1+n-xbar))
> points(MN,seq(1,40,length=ns),
+ pch=19,cex=.7,col=c("blue","red")[1+k])
> segments(MN,seq(1,40,length=ns),
+ pk,seq(1,40,length=ns),col="grey")
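
Here again, we can check the claim numerically: the proportion of simulated means falling inside the credible interval should be close to 95%,

> mean(!k)   # proportion of simulated means inside the credible interval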

More details and examples on Bayesian statistics, seen through the eyes of a (probably) non-Bayesian statistician, can be found in my slides from my talk in London last summer,

But what happened during the First World War?

The short answer is that people died. A lot of them. That said, it does not tell us much. We can compare age pyramids to get a better picture of what happened. Just before the war (in 1913), the age pyramid looked like this (using data from mortality.org)

> EXPO  <- read.table(
+ "http://freakonometrics.free.fr/Exposures-France.txt", header=TRUE,skip=2)
> EM=EXPO$Male
> EF=EXPO$Female
> Y= EXPO$Year
> A= EXPO$Age
> I=which(A=="110+")
> base=data.frame(Female=EF,Male=EM,Y=Y,Ages=A)
> base=base[-I,]
> France1913=base[base$Y==1913,]
> France1919=base[base$Y==1919,]
> France1913$Ages=as.numeric(
+ as.character(France1913$Ages))
> France1919$Ages=as.numeric(
+ as.character(France1919$Ages))
> France1913=France1913[,c("Male","Female",
+ "Ages")]
> library(pyramid)
> plot(c(0,100), c(0,100), type="n", 
+ frame=FALSE, axes=FALSE, xlab="", ylab="",
+ main="Pyramide des Ages, France, 1913")
> pyramidf(France1913, frame=c(10, 75, 0, 90), 
+ Clab="", Lcol="skyblue", Rcol="pink",
+ Cstep=10, Laxis=0:4*60000, AxisFM="d")

On the other hand, just after the war (in 1919), the age pyramid looked quite different (a sketch of the analogous code is given right below). Continue reading But what happened during the First World War?
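
For completeness, here is a sketch of the analogous code for 1919, mirroring the 1913 plot above (the actual figure and discussion are in the full post),

> France1919=France1919[,c("Male","Female",
+ "Ages")]
> plot(c(0,100), c(0,100), type="n", 
+ frame=FALSE, axes=FALSE, xlab="", ylab="",
+ main="Pyramide des Ages, France, 1919")
> pyramidf(France1919, frame=c(10, 75, 0, 90), 
+ Clab="", Lcol="skyblue", Rcol="pink",
+ Cstep=10, Laxis=0:4*60000, AxisFM="d")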

R training at CIMA, in Gabon

I will soon be heading to Gabon for a week, to the headquarters of CIMA, the Conférence Interafricaine des Marchés d’Assurance, for an R training course.

Since I would like the course to be as interactive as possible, I have not planned any slides. I will, however, put documents online if necessary. Last week, I wrote a post explaining how to import the mortality tables. We will work on those datasets on Monday, to learn how to handle the basic R functions.

Otherwise, by way of introduction, I would mention a few books or lecture notes, in pdf,

  • “R pour les débutants” by Emmanuel Paradis (pdf)
  • “Introduction à la programmation en R, Quatrième édition” by Vincent Goulet (pdf)
  • “Brise Glace-R (ouvrir la voie aux pôles statistiques)” by Andrew Robinson and Arnaud Schloesing (pdf)
  • “Introduction à R” by Julien Barnier (pdf)
  • “Aide mémoire R” by Mayeul Kauffmann (pdf)
  • “Lire ; Compter ; Tester… avec R” (pdf)
  • “L’Actuariat avec R” by Arthur Charpentier and Christophe Dutang (pdf)

and I can also mention Ewen Gallic’s slides,

(regarding the last set of slides, though, I doubt we will talk about ggplot2; I think we will stick to base R functions). To be continued…

Excel (and French people) are such a pain in the…

A few days ago, I published a post entitled extracting datasets from excel files in a zipped folder, because I wanted to use datasets that were available online, in some (zipped) Excel format. The first difficult part was the folder with a non-standard character (the French é). Because next week I will be using those datasets in a crash course in Gabon (in Africa), I wanted to make sure that everything will go fine when we run the code. And discussing @3wen‘s trick on Day 1 was maybe not the best way to explain that R is a very simple tool that should be used for data analysis…

To make things easier, I uploaded the xlsx files to my webpage. I wanted to use the xlsx R package. Unfortunately, on my Linux laptop, I had trouble installing that package.
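
One possible workaround (just a sketch, and not necessarily the route I will actually take) is to use a package that does not rely on Java, such as openxlsx; the file name below is purely hypothetical,

> library(openxlsx)   # no rJava dependency, unlike the xlsx package
> # hypothetical file name, just to illustrate
> download.file("http://freakonometrics.free.fr/some_file.xlsx",
+ destfile="some_file.xlsx", mode="wb")
> df <- read.xlsx("some_file.xlsx", sheet=1)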

Continue reading Excel (and French people) are such a pain in the…

Dear reader, who are you?

While working on an updated (not to say revised) version of Blogging in Academia, a Personal Experience, I noticed that I now have a better understanding of myself as a blogger (this academic alter-ego, to use Vanessa Paz Dennen’s words), of the role of blogs within academia, and of the community of academic bloggers. But tonight, on the train, I was asking myself about the readers. I mean you. I know from the stats of my blog that a few thousand people will read this post, that you are 32% American, 21% French, 6% Brazilian and 3% Swedish, and that it is quite unlikely that you use a Linux computer. But I don’t know much more…

Since I have always claimed that my blog is a place to experiment, I wanted to try one! Dear reader, if you have 10 seconds, could you please post a comment with your age, your gender, and your background (I’d like to know if you’re a student, a colleague, an economist, a mathematician, a biologist, a journalist, an astronaut, a lion tamer, an actuary, a modeler, a model, a robot… ). If you want to spend more time explaining how you ended up on my blog, I’d be glad to know… But at least your age, your gender, and your ‘academic background’ (since I have some sort of academic blog), that would be great! If you come to my blog frequently, you should be starting to know me, and I thought it would be great if I could know more about you…

Le Mans Insurance & Finance Risk Colloquium

This Thursday and Friday, a Colloquium on Insurance and Finance Risks will take place at the University of Le Mans. I will be giving a talk on non- and semi-parametric inference for risk measures, inspired by recent work with Emmanuel Flachaire. Our first paper, log-transform kernel density estimation of income distribution, is online on http://papers.ssrn.com/id=2514882, and should appear soon. The second one is still in progress, and codes are still running. I will upload the slides once the working paper is available…
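
As a teaser, the idea behind the log-transform kernel estimator is simple: estimate a standard kernel density on the log of the (positive) observations, and transform it back, \widehat{f}_X(x)=\widehat{f}_{\log X}(\log x)/x. A minimal sketch, on simulated data (neither the code nor the data used in the paper),

> x <- rlnorm(1000)              # simulated, income-like positive data
> d <- density(log(x))           # kernel density estimate on the log scale
> fhat <- function(v) approx(d$x,d$y,xout=log(v))$y/v
> curve(fhat(x),from=.05,to=10,n=501)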

Shapefiles from Isodensity Curves

Recently, @3wen and I wanted to play with isodensity curves. The problem is that it is difficult to get – numerically – the equation of the contour (even if we can easily plot it). Consider the following surface (just for fun, in order to illustrate the idea)

> f=function(x,y) x*y+(1-x)*(1-y)
> u=seq(0,1,length=21)
> v=seq(0,1,length=11)
> f=outer(u,v,f)
> persp(u,v,f,theta=30,phi=10,box=TRUE,
+ shade=TRUE,ticktype="detailed",xlab="",
+ ylab="",zlab="",col="yellow")

For instance, assume that we want to locate the areas where the density exceeds 0.7 (here in the lower left corner, SW, and the upper right corner, NE)

> image(u,v,f)
> contour(u,v,f,add=TRUE,levels=.7)
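
To actually recover the coordinates of those curves numerically (which is what we need if we want to build polygons, or a shapefile), one natural starting point, as a sketch, is contourLines() from base R, which returns the x and y coordinates of each piece of the level curve,

> cl <- contourLines(u,v,f,levels=.7)
> length(cl)                        # one element per piece (here, the SW and NE corners)
> head(cbind(cl[[1]]$x,cl[[1]]$y))  # coordinates along the first curve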

Continue reading Shapefiles from Isodensity Curves

Power and (simple) statistical tests

To understand the notion of power in a statistical test, let us go back to Jacob Cohen’s Statistical power analysis for the behavioral sciences. As he recalls, Fisher (in The Design of Experiments) did not seek to “prove” that a hypothesis (denoted H_0) is valid; what he actually hoped for was to reject the hypothesis in question (as Hubbard and Bayarri also pointed out in 2003). In other words, suppose we want to run a medical test, to detect whether a person is sick (or not). If a person is generally sick when the level of some quantity is low (denoted \theta), we will want to test H_0:\theta_\star>\theta_0. In a statistical test, we hope to reject the null hypothesis H_0.

The power of a statistical test of a null hypothesis is the probability that it will lead to the rejection of the null hypothesis.
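
To make that concrete with a quick Monte Carlo sketch (the values \theta_0=0.5, \theta_\star=0.4 and n=100 below are made up, purely for illustration): we test H_0:\theta_\star>\theta_0 at the 5% level, rejecting when the sample mean is small enough, and the power is the proportion of rejections when sampling under the true \theta_\star,

> theta0 <- .5; theta_star <- .4; n <- 100; ns <- 1e5
> xbar <- rbinom(ns,size=n,prob=theta_star)/n
> # reject H0 when the sample mean falls below the 5% critical value
> reject <- xbar < theta0-qnorm(.95)*sqrt(theta0*(1-theta0)/n)
> mean(reject)                   # Monte Carlo estimate of the power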

Continue reading Power and (simple) statistical tests