Monthly Archives: October 2015
Actuariat de l'Assurance Non-Vie #4
For the fourth course on non-life insurance actuarial science at ENSAE, next week, we will discuss the notion of over-dispersion in count models, within the framework of generalized linear models. The slides are online (the downloadable pdf version is more complete than the one on slideshare).
2015, October 21
I am almost sure I have already lived that day before,
Statistical Tests: Asymptotic, Exact, or based on Simulations?
This morning, in our mathematical statistics course, we've been discussing the 'proportion test', i.e. given a sample $\{x_1,\dots,x_n\}$ of Bernoulli trials, with $X_i\sim\mathcal{B}(p)$, we want to test
\[ H_0:\ p=p_0 \]
against
\[ H_1:\ p\neq p_0. \]
A natural test (which can be related to the maximum likelihood ratio test) is based on the statistic
\[ T_n=\sqrt{n}\,\frac{\overline{x}_n-p_0}{\sqrt{\overline{x}_n(1-\overline{x}_n)}}. \]
The test function is here
\[ \psi(T_n)=\mathbf{1}\big(T_n\notin[c_{\alpha/2},\,c_{1-\alpha/2}]\big), \]
i.e. we reject $H_0$ when the statistic falls outside the acceptance region $[c_{\alpha/2},c_{1-\alpha/2}]$. To get the bounds of the acceptance region, we need the distribution of $T_n$, under $H_0$. Consider here a numerical application, with $p_0=1/2$,
n=20
p=.5
set.seed(1)
echantillon=sample(0:1,size=n,prob=c(1-p,p),replace=TRUE)
- the asymptotic distribution
The first (and standard) idea is to use the central limit theorem, since
\[ \sqrt{n}\,\frac{\overline{x}_n-p}{\sqrt{p(1-p)}}\ \xrightarrow{\mathcal{L}}\ \mathcal{N}(0,1)\qquad\text{as }n\to\infty. \]
So, under $H_0$,
\[ T_n\ \overset{\mathcal{L}}{\approx}\ \mathcal{N}(0,1). \]
Then $c_{\alpha/2}=\Phi^{-1}(\alpha/2)$ while $c_{1-\alpha/2}=\Phi^{-1}(1-\alpha/2)$. The acceptance region is then between the two red lines, below,
T=sqrt(n)*(mean(echantillon)-.5)/sqrt(mean(echantillon)*(1-mean(echantillon)))
u=seq(-3,3,by=.01)
v=dnorm(u)
plot(u,v,type="l",lwd=2)
abline(v=qnorm(.025),col="red")
abline(v=qnorm(.975),col="red")
abline(v=T,col="blue")
- the exact distribution
Here we use the fact that
\[ n\,\overline{X}_n=\sum_{i=1}^{n}X_i\ \sim\ \mathcal{B}(n,p), \]
so that, under $H_0$, the distribution of $n\overline{X}_n$ is the binomial $\mathcal{B}(n,p_0)$. Using a transformation of the 'density', we can (at least numerically) compute the (exact) distribution of $T_n$,
u=seq(-3,3,by=.01)
v=sqrt(.5*(1-.5))*n*dbinom(round((sqrt(.5*(1-.5))*u/sqrt(n)+.5)*n),size=n,prob=.5)/sqrt(n)
Here I used a rounded value; I guess it would be better with a floor function, but here the graph looks symmetric (which is something I like),
abline(v=sqrt(n)*(qbinom(.025,size=n,prob=.5)/n-.5)/sqrt(.5*(1-.5)),col="red")
abline(v=sqrt(n)*(qbinom(.975,size=n,prob=.5)/n-.5)/sqrt(.5*(1-.5)),col="red")
lines(u,v,type="s")
- distribution based on Monte Carlo simulations
Probably more interesting, here we do not use the fact that we might know the distribution of the mean. We just generate random samples, under $H_0$, and then compute $T_n$,
T=rep(NA,1000)
for(i in 1:1000){
  x=sample(0:1,size=n,prob=c(1-.5,.5),replace=TRUE)
  m=mean(x)
  T[i]=(m-.5)/sqrt(m*(1-m))*sqrt(n)}
lines(density(T),lwd=2)
abline(v=quantile(T,.025),col="red")
abline(v=quantile(T,.975),col="red")
Where does that 2 come from in the likelihood ratio test?
This afternoon, in class, we've seen the Wald test, the likelihood-ratio test, and finally the score test. All of them rely on the same idea: the asymptotic normality of the maximum likelihood estimator,
\[ \sqrt{n}\,\big(\widehat{\theta}_n-\theta_0\big)\ \xrightarrow{\mathcal{L}}\ \mathcal{N}\big(0,\,I(\theta_0)^{-1}\big)\qquad\text{under }H_0:\theta=\theta_0, \]
and then, use the fact that if $Z\sim\mathcal{N}(0,1)$, we can write
\[ Z^2\sim\chi^2_1. \]
Or – slightly more interesting – if $\boldsymbol{Z}\sim\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\Sigma})$ in dimension $d$, then
\[ (\boldsymbol{Z}-\boldsymbol{\mu})^{\top}\boldsymbol{\Sigma}^{-1}(\boldsymbol{Z}-\boldsymbol{\mu})\sim\chi^2_d. \]
Then one can get that
\[ n\,\big(\widehat{\theta}_n-\theta_0\big)^{\top} I(\theta_0)\,\big(\widehat{\theta}_n-\theta_0\big)\ \xrightarrow{\mathcal{L}}\ \chi^2_d. \]
Based on that property, we can derive the Wald statistic,
\[ W_n = n\,\big(\widehat{\theta}_n-\theta_0\big)^{\top} I(\widehat{\theta}_n)\,\big(\widehat{\theta}_n-\theta_0\big), \]
which can be visualized on the graph of the log-likelihood: the Wald test is based on the (squared, standardized) horizontal distance between $\widehat{\theta}_n$ and $\theta_0$.
The score test is a test on the square of the slope of the log-likelihood at $\theta_0$,
\[ S_n = \frac{1}{n\,I(\theta_0)}\left(\frac{\partial \log\mathcal{L}(\theta_0)}{\partial\theta}\right)^{2}, \]
which is also asymptotically $\chi^2_1$ under $H_0$.
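To make this more concrete, here is a small numerical sketch (my own illustration, not from the original post) of the Wald and score statistics, for the Bernoulli proportion example with $H_0:p=1/2$, where the Fisher information of one observation is $I(p)=1/(p(1-p))$,
# sketch (added illustration): Wald and score statistics for H0: p=1/2,
# with a simulated Bernoulli sample
set.seed(1)
n=20
x=sample(0:1,size=n,prob=c(.5,.5),replace=TRUE)
p0=.5
phat=mean(x)                     # maximum likelihood estimator
FI=function(p) 1/(p*(1-p))       # Fisher information of one observation
W=n*(phat-p0)^2*FI(phat)         # Wald statistic
U=sum(x/p0-(1-x)/(1-p0))         # slope of the log-likelihood at p0
S=U^2/(n*FI(p0))                 # score statistic
c(W,S,qchisq(.95,df=1))          # compare with the chi-square 95% quantile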
The idea of the likelihood ratio test is to consider
\[ T_n = 2\left[\log\mathcal{L}(\widehat{\theta}_n) - \log\mathcal{L}(\theta_0)\right]. \]
Observe that $\log\mathcal{L}(\theta_0)$ can be written, using Taylor's expansion around $\widehat{\theta}_n$,
\[ \log\mathcal{L}(\theta_0) = \log\mathcal{L}(\widehat{\theta}_n) + (\theta_0-\widehat{\theta}_n)\,\frac{\partial\log\mathcal{L}(\widehat{\theta}_n)}{\partial\theta} + \frac{1}{2}\,(\theta_0-\widehat{\theta}_n)^2\,\frac{\partial^2\log\mathcal{L}(\tilde{\theta})}{\partial\theta^2} \]
for some $\tilde{\theta}$ between $\theta_0$ and $\widehat{\theta}_n$. The first-order term is null, since the maximum likelihood estimator is precisely at the maximum of the (log) likelihood. So
\[ T_n \approx (\widehat{\theta}_n-\theta_0)^2\left[-\frac{\partial^2\log\mathcal{L}(\tilde{\theta})}{\partial\theta^2}\right]. \]
That's more or less where the 2 comes from: it exactly cancels the 1/2 of the quadratic term in the expansion. Then observe that
\[ -\frac{1}{n}\,\frac{\partial^2\log\mathcal{L}(\tilde{\theta})}{\partial\theta^2}\ \xrightarrow{\mathbb{P}}\ I(\theta_0) \]
and therefore
\[ T_n \approx n\,(\widehat{\theta}_n-\theta_0)^2\, I(\theta_0)\ \xrightarrow{\mathcal{L}}\ \chi^2_1\qquad\text{under }H_0. \]
This test will be discussed further next week (since it is related to the Neyman-Pearson theorem), but that result can also be used to derive confidence intervals. Given the graph of the log-likelihood, it is possible to get a confidence interval for the parameter by looking for values $\theta$ such that
\[ 2\left[\log\mathcal{L}(\widehat{\theta}_n)-\log\mathcal{L}(\theta)\right]\ \le\ q_{\chi^2_1}(1-\alpha), \]
i.e. by keeping all the $\theta$'s whose log-likelihood lies within $q_{\chi^2_1}(1-\alpha)/2$ of the maximum.
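As a quick numerical sketch (again, my own illustration, not from the original post), here is how that likelihood-based confidence interval can be obtained for the Bernoulli proportion, simply by scanning values of the parameter,
# sketch (added illustration): likelihood ratio statistic and likelihood-based
# 95% confidence interval for a Bernoulli proportion
set.seed(1)
n=20
x=sample(0:1,size=n,prob=c(.5,.5),replace=TRUE)
logL=function(p) sum(dbinom(x,size=1,prob=p,log=TRUE))
phat=mean(x)                                 # maximum likelihood estimator
2*(logL(phat)-logL(.5))>qchisq(.95,df=1)     # likelihood ratio test of H0: p=1/2
p=seq(.001,.999,by=.001)
keep=2*(logL(phat)-sapply(p,logL))<=qchisq(.95,df=1)
range(p[keep])                               # likelihood-based confidence interval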
We will discuss that idea later on, in the context of profile likelihood.
Tests, Power and Significance
In the mathematical statistics course today, we started talking about tests, and decision rules. To illustrate all the concepts introduced today, we considered the case where we have a sample $\{x_1,\dots,x_n\}$ with $X_i\sim\mathcal{U}([0,\theta])$. And we want to test
\[ H_0:\ \theta=\theta_0 \]
against
\[ H_1:\ \theta>\theta_0. \]
In the course, we've seen that we could use a test based on the order statistic $x_{(n)}=\max\{x_1,\dots,x_n\}$. The test would be
\[ \psi\big(x_{(n)}\big)=\mathbf{1}\big(x_{(n)}>c\big), \]
i.e. if $x_{(n)}\le c$, we choose $H_0$, and if $x_{(n)}>c$, we choose $H_1$.
From the definition of the type I error (the 'first kind' risk),
\[ \alpha=\mathbb{P}_{\theta_0}\big(X_{(n)}>c\big)=1-\left(\frac{c}{\theta_0}\right)^{n}, \]
we can easily get that
\[ c_1=\theta_0\,(1-\alpha)^{1/n}. \]
Thus, the power is then
\[ \pi_1(\theta)=\mathbb{P}_{\theta}\big(X_{(n)}>c_1\big)=1-\left(\frac{\theta_0}{\theta}\right)^{n}(1-\alpha),\qquad\text{for }\theta>c_1. \]
To visualize it, use the following parameters
n=5
alpha=.1
theta0=1
Then
C1=theta0*(1-alpha)^(1/n)
theta=seq(0,2,by=.01)
P1=(1-(theta0/theta)^n*(1-alpha))*(theta>C1)
plot(theta,P1,type="l",lwd=2,col="blue",xlab="",ylab="Power")
Note that, so far, we never considered the actual maximum of our sample. Assume that the observed maximum is $x_{(n)}$; then we can compute the $p$-value,
\[ p=\mathbb{P}_{\theta_0}\big(X_{(n)}>x_{(n)}\big)=1-\left(\frac{x_{(n)}}{\theta_0}\right)^{n}. \]
Here it is
PV=(1-theta^n)*(theta<=1)
plot(theta,PV,type="l",lwd=2,col="blue",xlab="",ylab="p-value")
Now, why not consider another test, based on the minimum (since we have the distribution of the minimum of a sample from a uniform distribution)? The test is the same as before,
\[ \psi\big(x_{(1)}\big)=\mathbf{1}\big(x_{(1)}>c\big), \]
but here, the threshold is
\[ c_2=\theta_0\big(1-\alpha^{1/n}\big). \]
The power of the test is here
\[ \pi_2(\theta)=\mathbb{P}_{\theta}\big(X_{(1)}>c_2\big)=\left(1-\frac{\theta_0}{\theta}\big(1-\alpha^{1/n}\big)\right)^{n},\qquad\text{for }\theta>c_2. \]
This test has the same significance level (by construction), but the power of the test is clearly lower than the one we got using the maximum of our sample, when $\theta>\theta_0$,
C2=theta0*(1-alpha^(1/n))
P2=(1-(theta0/theta)*(1-alpha^(1/n)))^n*(theta>C2)
lines(theta,P2,type="l",lwd=2,col="red")
Why not consider a test based on the sample mean $\overline{x}_n$ – or rather on $2\overline{x}_n$, which is an unbiased estimator of $\theta$? The problem is that we need the distribution (more specifically the survival function) of $2\overline{X}_n$. We can compute it, numerically. But that might be painful. An alternative is to consider some approximation, based on the central limit theorem, i.e.
\[ 2\overline{X}_n\ \overset{\mathcal{L}}{\approx}\ \mathcal{N}\!\left(\theta,\ \frac{\theta^{2}}{3n}\right). \]
Our test is based on $2\overline{x}_n$, and to get the same significance as before, use
\[ c_3=\theta_0+\frac{\theta_0}{\sqrt{3n}}\,\Phi^{-1}(1-\alpha). \]
The power of the test is then (approximately)
\[ \pi_3(\theta)\approx 1-\Phi\!\left(\frac{c_3-\theta}{\theta_0/\sqrt{3n}}\right), \]
where, as in the code below, the standard deviation is kept at its value under $\theta_0$.
Here it is
mu=2*(theta0/2)
s2=2^2*(theta0^2/12)/n
C3=qnorm(1-alpha,mu,sqrt(s2))
P=1-pnorm(C3,theta,sqrt(s2))
lines(theta,P)
Observe here that the test based on the maximum is not more powerful than the one based on the average (I just wonder if it could be due to the Gaussian approximation…).
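One way to check this (a quick sketch I add here, not in the original post) is to estimate the true power of the average-based test by Monte Carlo, keeping the same threshold: since the Gaussian approximation above freezes the standard deviation at its value under $\theta_0$, it might flatter the average-based test for large values of $\theta$,
# sketch (added illustration): Monte Carlo estimate of the true power of the
# test based on 2*mean, using the same threshold C3 as above
n=5; alpha=.1; theta0=1
C3=qnorm(1-alpha,theta0,theta0/sqrt(3*n))
power_mc=function(theta,nsim=1e4) mean(replicate(nsim,2*mean(runif(n,0,theta))>C3))
theta_g=seq(1,2,by=.05)
lines(theta_g,sapply(theta_g,power_mc),lty=2)   # add to the existing power plot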
Pricing Game Deadline, Oct. 21st
The pricing game (on insurance data) will end on October 21st. We need to get the predictions by that date…
Actuariat de l’Assurance Non-Vie #3
For the third course on non-life insurance actuarial science at ENSAE (yes, I am slightly ahead of schedule), we will discuss the modelling of claim frequency. The slides are online,
Visualising a Circular Density
This afternoon, Jean-Luc asked me for some help with an old post I published, minuit, l'heure du crime, and with some graphs published a few days later, where I used a different visualisation, in another post.
The idea is that the hour can be seen as circular, in the sense that 23:58 is actually very close to 00:03. So when we use a nonparametric kernel estimator of the times of events, we have to take that property into account. More specifically, consider the density of an angle, i.e. a function $f(\cdot)$ such that $\int_0^{2\pi}f(\omega)\,d\omega=1$, with a circular relationship, in the sense that $f(\omega+2\pi)=f(\omega)$.
In the dataset sent by Jean-Luc, we have some thefts in a big city in France. The dataset is a simple spreadsheet with one column, containing values such as '00:20' or '17:45'. Those are the (more or less precise) times of thefts, as declared to the police.
B=read.table("Temp_Heures_VV.csv",header=TRUE,sep=";")
HM=as.character(B[,1])
H=substr(HM,1,(nchar(HM)-3))
M=substr(HM,(nchar(HM)-1),(nchar(HM)))
X=as.numeric(H)+as.numeric(M)/60
The time is a number from 0 to 24.
U=seq(0,1,by=1/250)
O=U*2*pi
U12=seq(0,1,by=1/24)
O12=U12*2*pi
OM=2*pi*X/24
XL=c(X-24,X,X+24)
d=density(X)
d=density(XL,bw=d$bw,n=1500)
I=which((d$x>=6)&(d$x<=30))
Od=d$x[I]/24*2*3.141592-3.141592/2
Dd=d$y[I]/max(d$y)+1
The idea, to get a nice density estimate, is to use a simple mirror technique: we consider three versions of the data, one for today, one for yesterday, and one for tomorrow. Of course, we then have to use a shorter bandwidth (here, the bandwidth selected on the original data is reused on the replicated sample).
R=1/24/max(d$y)/3+1
plot(cos(O),-sin(O),xlim=c(-2,2),ylim=c(-2,2),type="l",axes=FALSE,xlab="",ylab="")
for(i in 3.14159/12*(0:12)){
  segments(-cos(i),-sin(i),cos(i),sin(i),col="grey")}
segments(.9*cos(O12),.9*sin(O12),1.1*cos(O12),1.1*sin(O12))
text(.7,0,"6")
text(-.7,0,"18")
text(0,-.7,"12")
text(0,.7,"24")
lines(R*cos(O),R*sin(O),lty=2)
AX=R*cos(Od);AY=-R*sin(Od)
BX=Dd*cos(Od);BY=-Dd*sin(Od)
COUL=rep("blue",length(AX))
COUL[R<Dd]="red"
CM=cm.colors(200)
a=trunc(100*Dd/R)
COUL=CM[a]
segments(AX,AY,BX,BY,col=COUL,lwd=2)
lines(Dd*cos(Od),-Dd*sin(Od),lwd=2)
The dotted line would be a uniform distribution over the day. The true distribution is the black bold line. The area in purple corresponds to hours with more crimes than under the uniform, and the blue area to hours with fewer crimes; the blue area is equal to the purple one. There is a clear symmetry in the evening, around midnight (but not during the day: 6 am is not the same as 6 pm). This graph is the circular visualisation of the kernel density estimator, in the same way the rose diagram is the circular visualisation of the histogram.
Fisher Information Computation(s)
Last week, we did some computations to obtain the Fisher information for classical distributions. I just wanted to write down properly the computations for distributions with several parameters. For the Gamma distribution, with density
\[ f(x;\alpha,\beta)=\frac{\beta^{\alpha}}{\Gamma(\alpha)}\,x^{\alpha-1}e^{-\beta x},\qquad x>0, \]
the log-likelihood is
\[ \log\mathcal{L}(\alpha,\beta)=n\big[\alpha\log\beta-\log\Gamma(\alpha)\big]+(\alpha-1)\sum_{i=1}^{n}\log x_i-\beta\sum_{i=1}^{n}x_i, \]
so that the Hessian is
\[ \nabla^{2}\log\mathcal{L}(\alpha,\beta)=\begin{pmatrix}-n\,\psi'(\alpha) & n/\beta\\ n/\beta & -n\,\alpha/\beta^{2}\end{pmatrix}, \]
where $\psi'$ denotes the trigamma function. Here, we do not even need to take an expectation, since the Hessian is constant (it does not depend on the observations), and the Fisher information of a single observation is simply
\[ I(\alpha,\beta)=\begin{pmatrix}\psi'(\alpha) & -1/\beta\\ -1/\beta & \alpha/\beta^{2}\end{pmatrix}. \]
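As a quick numerical check (a sketch I add here, not part of the original post), one can compare this matrix with the Hessian of the negative log-likelihood evaluated numerically, for arbitrarily chosen parameter values,
# sketch (added illustration): numerical check of the Fisher information of
# the Gamma distribution, with arbitrarily chosen parameters
set.seed(1)
a=2; b=3; n=1000
x=rgamma(n,shape=a,rate=b)
nlogL=function(par) -sum(dgamma(x,shape=par[1],rate=par[2],log=TRUE))
optimHess(c(a,b),nlogL)/n          # numerical Hessian of -log L, per observation
rbind(c(trigamma(a),-1/b),
      c(-1/b,a/b^2))               # analytic Fisher information matrix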
Actuariat de l’Assurance Non-Vie #2
For the second course on non-life insurance actuarial science at ENSAE, which will take place on Monday afternoon, the slides presenting the classical models used to predict categorical variables (classification) are online,
Actuariat de l’Assurance Non-Vie #1
Next Monday will be the first course on non-life insurance actuarial science at ENSAE. The slides, with a general introduction to valuation in property and casualty insurance, are online.
Mutualisation and Segmentation in Insurance
The paper Segmentation et Mutualisation, les deux faces d'une même pièce, written with Michel Denuit and Romuald Elie, will appear in the coming days, in an issue devoted to big data in insurance, with papers by Patrick Thourot on pay-as-you-drive pricing, by Lucie Taleyson on pricing in group insurance, and many other fascinating papers, by François-Xavier Hay and Arnaud Chaput, among others.
The paper (available in pdf) presents the impact of segmentation in a competitive environment. It is based on a simple, not to say simplistic, example. It will be refined with the pricing game organised in mid-November. To be continued…