Statistics, from theory to practice

To illustrate the course a little, I am putting a dataset online here. It consists of claim costs that exceeded $1,000 (sorted in increasing order, and expressed in thousands of dollars). To import the dataset in R, the code is simply,

> X=read.table(
"http://freakonometrics.blog.free.fr/public/data/sinistres.txt")$x
> X
 [1] 1.003 1.016 1.023 1.027 1.037 1.039 1.0
[13] 1.061 1.072 1.078 1.082 1.087 1.094 1.0
[25] 1.110 1.112 1.117 1.132 1.138 1.141 1.1
[37] 1.180 1.186 1.187 1.190 1.193 1.203 1.2
[49] 1.321 1.326 1.338 1.342 1.343 1.344 1.3
[61] 1.428 1.432 1.442 1.457 1.463 1.466 1.4
[73] 1.551 1.553 1.566 1.584 1.632 1.695 1.6
[85] 1.881 1.893 1.897 1.958 2.000 2.175 2.2
[97] 3.045 3.103 4.495 5.614

For reference, the histogram looks like this,
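A minimal sketch to reproduce a similar histogram (the break points below are just a guess, not necessarily those of the original figure):

> hist(X,breaks=seq(1,6,by=.25),probability=TRUE)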

We will assume that a Pareto distribution can be fitted, with survival function given by

$$\overline{F}(x)=\mathbb{P}(X>x)=x^{-\alpha},\qquad x\ge 1$$

and density

$$f(x)=\frac{\alpha}{x^{\alpha+1}},\qquad x\ge 1$$

The expected value is

$$\mathbb{E}(X)=\frac{\alpha}{\alpha-1}\qquad(\alpha>1)$$

and the variance is

$$\text{Var}(X)=\frac{\alpha}{(\alpha-1)^2(\alpha-2)}\qquad(\alpha>2)$$
  • Method of moments estimator
$$\widehat{\alpha}=\frac{\overline{X}}{\overline{X}-1}$$
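It is obtained by matching the theoretical mean with the empirical mean; spelling out the one-line derivation,

$$\frac{\alpha}{\alpha-1}=\overline{X}\quad\Longleftrightarrow\quad\alpha=\frac{\overline{X}}{\overline{X}-1}$$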

Numerically, we obtain

> mean(X)/(mean(X)-1)
[1] 2.946827
  • Maximum likelihood estimator

We can write down the likelihood, and then search, numerically, for its maximum (or rather for the minimum of the negative log-likelihood, since the optimization routine used here searches for a minimum),
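Writing it out (using the density above), the log-likelihood of the i.i.d. sample $x_1,\dots,x_n$ is

$$\log\mathcal{L}(\alpha)=\sum_{i=1}^n\log\!\left(\frac{\alpha}{x_i^{\alpha+1}}\right)=n\log\alpha-(\alpha+1)\sum_{i=1}^n\log x_i$$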

> f=function(x,a){
+ a/x^(a+1)
+ }
> LogV = function(a,echantillon){
+ -sum(log(f(echantillon,a)))
+ }
> optim(fn=LogV,par=2,echantillon=X,method="BFGS")$par
[1] 2.845093
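Incidentally (a sketch, not in the original post), since the optimization is one-dimensional, base R's optimize() could be used instead of optim(), over an arbitrary search interval; the value returned should be close to the one above,

> optimize(LogV,interval=c(1,10),echantillon=X)$minimum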

But we could have done better… since here the maximum likelihood estimator has a closed-form expression,

$$\widehat{\alpha}=\frac{n}{\displaystyle\sum_{i=1}^n\log X_i}$$
> length(X)/(sum(log(X)))
[1] 2.845093

(everything is fine, we find the same value).

  • Linear regression estimator

Note that

$$\log\overline{F}(x)=-\alpha\,\log x$$

which means that we can plot the log of the costs against the log of the empirical survival probabilities (the sample being already sorted, these are simply (n:1)/n), and if the points are aligned along a straight line (through the origin), its slope gives an estimator of our parameter. The simplest approach is to take the slope of the line passing through the origin and through the center of gravity of the cloud,

> Z=X
> Y=log((length(Z):1)/length(Z))
> LX=log(Z)
> plot(LX,Y)
> -mean(Y)/mean(LX)
[1] 2.753414
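Alternatively (a sketch, not in the original post), the slope could be obtained by least squares, regressing through the origin with lm() and a no-intercept formula, which gives a slightly different estimate,

> -lm(Y~0+LX)$coefficients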
  • Estimation based on the median

Finally, note that the median is

$$\text{median}(X)=2^{1/\alpha}$$

A natural estimator is then

$$\widehat{\alpha}=\frac{\log 2}{\log\big(\text{median}(X_1,\dots,X_n)\big)}$$

Numerically, this is then very simple,

> log(2)/log(quantile(X,.5))
     50% 
2.413719
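More generally (an aside, not in the original post), any quantile could be used in the same way: since $\mathbb{P}(X>x_p)=x_p^{-\alpha}=1-p$, we get $\alpha=-\log(1-p)/\log(x_p)$; for instance with the 75% quantile,

> -log(1-.75)/log(quantile(X,.75))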

Bottom line: we have four estimators, which give us four different numerical values

2.9468 2.8451 2.7534 2.4137

For those who want to find other estimators (since one can construct hundreds of them), an interesting reference is the following:
Quandt, R. E. (1966). Old and new methods of estimation and the Pareto distribution. Metrika, 10, 55-82.
The goal of mathematical statistics is to better understand the (fundamental) properties of these estimators, in order to choose the “best” one. And this is an important problem, for instance if one wants to set up a reinsurance treaty (and transfer the very large claims). For example, if we want to transfer claims exceeding $100,000, the probability of exceeding that threshold is about 10 times larger with the last estimator than with the first one…
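Indeed, with the costs expressed in thousands of dollars, under the fitted Pareto distribution,

$$\mathbb{P}(X>100)=\overline{F}(100)=100^{-\widehat{\alpha}}$$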

> 1/100^2.4137
[1] 1.48799e-05
 
> 1/100^2.9468
[1] 1.277615e-06
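As a quick check of that “ten times larger” claim (a computation not in the original post), the ratio of the two probabilities is

$$\frac{100^{-2.4137}}{100^{-2.9468}}=100^{\,2.9468-2.4137}\approx 11.6$$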

Bottom line: the theoretical results we are currently studying will help us answer very practical questions… to be continued.

When will my papers appear as references (if they do…)?

Following my post on citations in academic journals, I wanted to go one step further in understanding the dynamics of citations. Here, the dataset looks like this: for each article, we have the name of the journal, the year of publication (as well as the title of the article and the authors, but we do not use them here), and, more interestingly, the number of citations received in journals (any kind of academic journal) published in 1996, 1997, …, 2011. Of course, an article published in 1999 can only start being cited in 1999.

base[1000:1002,]
     Publication.Year
7188             1999
7191             1999
7195             1999
     Document.Title
7188 Sequential inspection 
7191 On equitable resource approach
7195 Method for strategic  
                                        Authors     ISSN       Journal.Title
7188                         Yao D.D., Zheng S. 0030364X Operations Research
7191                                    Luss H. 0030364X Operations Research
7195 Seshadri S., Khanna A., Harche F., Wyle R. 0030364X Operations Research
     Volume Issue X139 DEV1996 DEV1997 DEV1998 DEV1999 DEV2000 DEV2001 DEV2002
7188     47     3    0       0       0       0       0       1       0       2
7191     47     3    0       0       0       0       0       0       2       0
7195     47     3    0       0       0       0       0       0       0       0
     DEV2003 DEV2004 DEV2005 DEV2006 DEV2007 DEV2008 DEV2009 DEV2010 DEV2011
7188       0       0       0       1       0       0       0       0       0
7191       3       4       1       4       4       8       4       6       1
7195       0       1       2       2       1       0       1       0       0
     X130655 X0 X130794
7188       4  0       4
7191      37  0      37
7195       7  0       7

The first step is to aggregate the data: instead of looking at each article, we look at all papers published in 1999 (say). Then, we count the number of citations in the year of publication, the year after, two years after, etc. The data will appear as a triangle since, for articles published in 2010, there is only one possible year of citation (2010, since I removed 2011).

VOL=rev(unique(base$Volume))
VOL=VOL[is.na(VOL)==FALSE]     # volumes (one per publication year), oldest first
TRIANGLE=matrix(NA,16,16)      # run-off triangle: row = publication year, column = calendar year
k=0                            # initialize row index
for(v in VOL){
k=k+1
sb=base[base$Volume==v,9:24]   # citation counts DEV1996,...,DEV2011
sb=sb[is.na(sb[,1])==FALSE,]
TRIANGLE[k,1:(17-k)]=apply(sb,2,sum)[k:16]}

Then, a standard idea (at least in the insurance business, for claims payment development) is to consider that the data are Poisson distributed, and that the number of citations should depend on the year of publication of the article (a row effect) and on the development, i.e. how many years after publication we are looking at (a column effect). More formally, let $Y_{i,j}$ denote the number of citations of articles published in year $i$ received during year $i+j$ (i.e. after $j$ years). And we assume that $Y_{i,j}\sim\mathcal{P}\big(\exp[a_i+b_j]\big)$.

TRIANGLE=TRIANGLE[-16,]
TRIANGLE=TRIANGLE[,-16]
Y=as.vector(TRIANGLE)
YEAR=rep(1996:2010,15)
DEV =rep(1:15,each=15)
baseT=data.frame(Y,YEAR,DEV)
reg=glm(Y~as.factor(YEAR)+as.factor(DEV),
data=baseT,family=poisson)

Since those are incremental values, in order to look at the citation pattern over time, we need to sum them along each row. Thus, we can plot

$$j\ \longmapsto\ \sum_{j'=0}^{j}\widehat{Y}_{i,j'}=\sum_{j'=0}^{j}\exp\!\left(\widehat{a}_i+\widehat{b}_{j'}\right)$$

(because we used factors, the first component has been replaced by the constant in the regression), or a normalized version to compare journals, for instance scaling things as if 100 citations were received over the 15 years.

DYN=exp(c(reg$coefficients[1],reg$coefficients[1]+
reg$coefficients[16:29]))      # fitted citations per development year (1996 cohort)
DYNN=cumsum(DYN)/sum(DYN)      # normalized cumulative citation pattern
plot(0:14,DYNN)                # development years 0 to 14
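A small robustness note (a sketch, not in the original code): rather than relying on the coefficient positions 16:29, the development effects could be extracted by name,

DYN=exp(c(coef(reg)[1],coef(reg)[1]+
coef(reg)[grep("DEV",names(coef(reg)))]))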

And this is what we get, for several academic journals,

The patterns are rather different. For instance, in Health Economics, citation is a quick process: more than 40% of the citations obtained over 15 years were obtained during the first 4 years. On the other hand, in the Journal of Finance, this share is much smaller: less than 15% of the citations were obtained during the first 4 years (on average). So it means that comparing citation-based indices (namely g or h) is a difficult exercise, especially with young researchers in different areas. For a young researcher publishing either in Stochastic Processes and their Applications or in the Annals of Statistics, the same g (or h) index, after 3 years, can be 50% higher in one journal than in the other.


Now it is possible to look at things in more detail, with JRSS-B below (on applied statistics). Note that here, citations come extremely slowly… so it might not be a good “strategy” (assuming that a researcher’s target is simply to get, quickly, a high citation index) for a young researcher to publish in JRSS-B

On the other hand, Biometrika is much faster (both are on applied statistics, but we’ve seen here that they were not in the same cluster)

We can also observe that the Annals of Probability and Stochastic Processes and their Applications have (almost) similar patterns (SPA might be a bit faster). Anyway, I have been surprised to see that in theoretical journals citations are extremely fast, especially if we compare with the Journal of Finance for instance,

where I thought citations were extremely fast. But I might have an incorrect interpretation: it might simply mean that in the Journal of Finance it is common to cite old papers (published 10 or 15 years ago), maybe more common than in stochastic processes…
Anyway, all suggestions about the interpretation are welcome!

Talk at Laval University on natural catastrophes

On Tuesday, I will be giving a talk at the Département de finance, assurance et immobilier, at the Faculté des sciences de l’administration. The talk will be on natural catastrophes and on government intervention. The slides will be uploaded soon (since we are still revising the paper we wrote with Benoît, cf. here: actually, we did not look at EU maximizers but at RDEU maximizers with a quantile-based distortion). I will write a more detailed post once the working paper is finished.