Tag Archives: computer

Bias of Hill Estimators

In the MAT8595 course, we saw yesterday Hill's estimator of the tail index. To be more specific, we saw that if $\overline{F}(x)=C x^{-\alpha}$, with $\alpha>0$, then Hill's estimators of $\alpha$ are given by

$\widehat{\alpha}_k = \left[\frac{1}{k}\sum_{i=0}^{k-1} \log X_{n-i,n} - \log X_{n-k,n}\right]^{-1}$
for $k\in\{1,2,\cdots,n\}$. We then said that $\widehat{\alpha}_k$ is consistent, in the sense that $\widehat{\alpha}_k \overset{\mathbb{P}}{\rightarrow} \alpha$ if $k\rightarrow\infty$, but not too fast, i.e. $k/n\rightarrow 0$ (under additional assumptions on the rate of convergence, it is possible to prove that $\widehat{\alpha}_k \overset{a.s.}{\rightarrow} \alpha$). Further, under additional technical conditions,

$\sqrt{k}\left(\widehat{\alpha}_k-\alpha\right)\overset{\mathcal{L}}{\rightarrow}\mathcal{N}(0,\alpha^2)$

In order to illustrate this point, consider the following code. First, let us consider a Pareto survival function, and the associated quantile function

> alpha=1.5
> S=function(x){ifelse(x>1,x^(-alpha),1)}
> Q=function(p){uniroot(function(x) S(x)-(1-p),lower=1,upper=1e+9)$root}

The code here is obviously too complicated, since this power function can easily be inverted. But later on, we will consider a more complex survival function. Here are the survival function, and the quantile function,

> u=seq(0,5,by=.01)
> plot(u,Vectorize(S)(u),type="l",col="red")
> u=seq(0,99/100,by=.01)
> plot(u,Vectorize(Q)(u),type="l",col="blue",ylim=c(0,20))
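As a quick sanity check (a small sketch of mine, not in the original code): since $S(x)=x^{-\alpha}$ for $x>1$, the quantile function has the closed form $Q(p)=(1-p)^{-1/\alpha}$, and we can verify that the numerical inversion above agrees with it,

> Qexact=function(p){(1-p)^(-1/alpha)}   # closed-form inverse of the Pareto survival function
> max(abs(Vectorize(Q)(u)-Qexact(u)))    # differences are of the order of uniroot's tolerance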

Here, we need the quantile function to generate a random sample from this distribution,

> n=500
> set.seed(1)
> X=Vectorize(Q)(runif(n))

Hill plot is here

> library(evir)
> hill(X)
> abline(h=alpha,col="blue")

We can now generate thousands of random samples, and see how those estimators behave (for some specific $k$'s).

> ns=10000
> HillK=matrix(NA,ns,10)
> for(s in 1:ns){
+ X=Vectorize(Q)(runif(n))
+ H=hill(X,plot=FALSE)
+ hillk=function(k) H$y[H$x==k]
+ HillK[s,]=Vectorize(hillk)(15*(1:10))
+ }

and if we compute the average,

> plot(15*(1:10),apply(HillK,2,mean))

we do get a series of estimators that can be considered as unbiased.

So far, so good. Now, recall that being in the max-domain of attraction of the Fréchet distribution does not mean that $\overline{F}(x)=C x^{-\alpha}$, with $\alpha>0$; it means that

$\overline{F}(x)= x^{-\alpha}\, \mathcal{L}(x)$

for some slowly varying function $\mathcal{L}$, not necessarily constant! In order to understand what could happen, we have to be slightly more specific. And this can be done only by looking at the second order regular variation property of the survival function. Assume here that there is some auxiliary function $a$ such that

$\lim_{t\rightarrow\infty}\frac{\overline{F}(xt)/\overline{F}(t)-x^{-\alpha}}{a(t)}=x^{-\alpha}\frac{1-x^{-\beta}}{\beta}$

This (positive) constant $\beta$ is – somehow – related to the speed of convergence of the ratio of the survival functions to the power function (see e.g. Geluk et al. (2000) for some examples).

To be more specific, assume that

$\overline{F}(x)=\underbrace{C(1+x^{-\beta})}_{\mathcal{L}(x)}\cdot x^{-\alpha}$

then the second order regular variation property is obtained with $a(t)=\beta t^{-\beta}$, and if $k$ goes to infinity too fast, the estimator will be biased. More precisely (see Chapter 6 in Embrechts et al. (1997)), if $k=O(n^{2\beta/(\alpha+2\beta)})$, then, for some $\lambda>0$,

$\sqrt{k}\left(\widehat{\alpha}_k-\alpha\right)\overset{\mathcal{L}}{\rightarrow}\mathcal{N}\left(\frac{\alpha^3}{\beta-\alpha}\lambda,\alpha^2\right)$

The intuitive interpretation of this result is that if $k$ is too large, and if the underlying distribution is not exactly a Pareto distribution (and we do have this second order property), then Hill's estimator is biased. This is what we mean when we say

  • if $k$ is too large, $\widehat{\alpha}_k$ is a biased estimator
  • if $k$ is too small, $\widehat{\alpha}_k$ is a volatile estimator

(the latter comes from properties of the sample mean: the more observations, the smaller the volatility of the mean).

Let us run some simulations to get a better understanding of what's going on. Using the previous code, it is actually extremely simple to generate a random sample with survival function

$\overline{F}(x)=\underbrace{C(1+x^{-\beta})}_{\mathcal{L}(x)}\cdot x^{-\alpha}$

> beta=.5
> S=function(x){ifelse(x>1,.5*x^(-alpha)*(1+x^(-beta)),1)}
> Q=function(p){uniroot(function(x) S(x)-(1-p),lower=1,upper=1e+9)$root}

We can then simply reuse the code above. Here, with

> n=500
> set.seed(1)
> X=Vectorize(Q)(runif(n))

the Hill plot becomes

> library(evir)
> hill(X)
> abline(h=alpha,col="blue")

But it's based on one sample only. Again, consider thousands of samples, and let us see how Hill's estimator behaves,
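Here is a sketch, simply rerunning the simulation loop used above with the new quantile function,

> ns=10000
> HillK=matrix(NA,ns,10)
> for(s in 1:ns){
+ X=Vectorize(Q)(runif(n))   # samples from the survival function with the slowly varying term
+ H=hill(X,plot=FALSE)
+ hillk=function(k) H$y[H$x==k]
+ HillK[s,]=Vectorize(hillk)(15*(1:10))
+ }
> plot(15*(1:10),apply(HillK,2,mean))
> abline(h=alpha,col="blue")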

so that the (empirical) mean of those estimators is

Computer science (without a computer), part 1

During the holidays, my son and I had fun going through the first activities of the Informatique Sans Ordinateur booklet, available at http://csunplugged.org/ (where several additional activities can be downloaded, but in English only). For personal reasons, my son is increasingly asked to use a computer at school. And I must admit it bothers me to see him handle a tool he does not really master (see also a post by Dr Goulu, who said the same thing earlier this year). Let me be clear: I do not claim to master computer science either! But as I have already told on this blog, at his age I was coding my first games (often by copying BASIC listings), and I have the feeling that handling computers for almost 30 years has given me some perspective, in any case more than he has. And I must admit that the lack of computer culture of the generation between mine and my son's surprises me. In short, as I said in a previous post, it bothers me that a generation that will use computers so much knows so little about them. So when I discovered this little set of activities last month, I wanted to give it a try.

Initially, I thought of offering the activities to both my son and my daughter (who is 8), but she was absorbed in her drawings when we started, and I could not get her to stop. For those who want to give it a try, there is a logic to the sequence of activities, and I think it would be a pity to miss the first ones. So I did not ask my daughter to join us again. But we will see during the next holidays…

The booklet offers a dozen activities (more if you browse http://csunplugged.org/activities). The first part (which I will discuss here) is about the representation of information.

  • section 1: the binary system

The first activity is really well designed. You learn base 2, writing numbers with 0s and 1s, ASCII codes, and the notions of 32 and 64 bits. To illustrate, let me write a bit of code (on a computer, this time) to explain how it works.

> base2=function(x,n=8){
+ Base.b=rep(0,n)
+ ndigits=(floor(logb(x,base=2))+1)
+ for(i in 1:ndigits){
+ Base.b[n-i+1]=(x%%2)
+ x=(x %/% 2)}
+ plot(0:1,0:1,xlab="",ylab="",
+ axes=FALSE, xlim=c(0,n),ylim=c(0,1),col="white")
+ for(i in 1:n){
+ polygon(i-1+c(.1,.1,.9,.9),c(.1,.9,.9,.1),lwd=2,
+ col=c("white","red")[1+(Base.b[i]==1)])}
+ return(Base.b)}

We cut out pieces of cardboard and place them side by side, so as to write numbers.

Take the number 17, for instance. Red for a 1, and white for a 0.

> base2(17)
[1] 0 0 0 1 0 0 0 1

The first game is to write a few numbers, to get familiar with the idea. Then you see that even basic operations are easy. For instance, multiplying by 2 is easy: shift everything to the left, and add a white square on the far right,

> base2(17*2)
[1] 0 0 1 0 0 0 1 0

Fun, isn't it? We can also do additions; for instance, 12 is written

> base2(12)
[1] 0 0 0 0 1 1 0 0

and if we add 12 and 17, we get

> base2(12+17)
[1] 0 0 0 1 1 1 0 1

where we reason just as in base 10. Actually, in this example, we only need the facts that 0+1=1+0=1 and 0+0=0 (yes, there is no carry in this example). Then, letters are encoded by their position in the alphabet (A-1, B-2, C-3, etc.), and quite quickly we move on to ASCII codes. It is very playful! I must admit we took the opportunity for a digression on secret codes, but I will come back to that some other time. In short, this first activity won us over!
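Before moving on, here is a tiny sketch of my own (not in the booklet), linking a letter, its ASCII code, and the cards, using the base2 function above,

> utf8ToInt("A")        # the ASCII code of the letter A
[1] 65
> base2(utf8ToInt("A")) # and its 8-bit binary representation, with the cards
[1] 0 1 0 0 0 0 0 1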

  • section 2: pixels and drawings

A logical follow-up to the previous step: we then talked about pixelation. This activity spoke to me a lot, because it is precisely what I was doing when I was (almost) my son's age. On the family MO5, there were essentially two games, on cassettes: a car game, and Super Tennis. In the tennis game, the characters were fairly simple, with 4 possible positions: waiting, serving, forehand, and backhand. The figure was then simply translated (we had already discussed that point a few months ago).

But as much as this speaks to me, I wondered what it could mean today, since images are now incredibly smooth… We took a picture on my computer, and we zoomed, and zoomed…

… in vain. You can no longer really see the pixels in the pictures. Too bad! Fortunately, I eventually found a few examples to illustrate this (in the end rather theoretical) concept of pixelation. I did not feel very comfortable talking about smoothing with my son. These are things I occasionally explain to my students, and it bothers me that my 11-year-old son would know more about them than my students. To be honest, this dilemma nagged me throughout our activities… On the one hand, these are exercises for primary-school children; on the other hand, these are things some university students in mathematics ought to master.

Coming back to the activity, it was fun, and it foreshadowed the principle of compression (which would come with the next activity). But finding a mug, or a picture of Saturn, by colouring cells quickly bored my son. We ended up giving up without really finishing the activity. That said, I later discovered that there are fun activities online, at http://csunplugged.org/activities, including in particular a discussion of lines and circles (in English only).

  • section 3: compressing and zipping

This activity was… surprising. We played with compression algorithms, of the LZ77 type used to zip files. The principle is quite amusing… For instance, in the sentence

we find repeated blocks of letters. The idea (hard to follow if you stick to the instructions given) is that whenever a block of 2 or more letters repeats, we can use a pointer.

We then replace the second block by a piece of information explaining where to go and fetch it,

so the sentence becomes

In other words, we tell it to go back a certain number of characters, and to reuse a block of characters of a certain length. Fun, isn't it? On top of that, the scheme can be recursive, and we can point to letters that have not been written yet. As in the following nice example:

We go back 2 letters, and take a block of 3 letters. Fun, isn't it? It is easy to do, and it is easy to read a compressed file. On the other hand, compressing a text yourself is more complicated! We wondered for a long time how to proceed, since the algorithm is poorly explained. Apparently, you start from the left, and whenever you find a block of 2 characters (or more) that has already been read, you point to it. Apparently, according to what we read, you can go from 2,500 characters down to 500. It is a pity that the compression scheme in question is not described in more detail. On the other hand, you can easily imagine that the longer the text, the more blocks you will have already seen. And the idea of illustrating this with a poem is brilliant!
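To make the pointer idea concrete, here is a small decoding sketch of my own (the function and the "banana" input are mine, not from the booklet): each pointer says how far to go back and how many characters to copy, which is essentially what LZ77 does,

> decompress=function(tokens){
+ out=""
+ for(tk in tokens){
+ if(is.character(tk)){out=paste(out,tk,sep="")}  # a literal character
+ else{for(j in 1:tk[2]){                         # tk=c(back,length): copy, one character at a time
+ pos=nchar(out)-tk[1]+1
+ out=paste(out,substr(out,pos,pos),sep="")}}
+ }
+ return(out)}
> decompress(list("b","a","n",c(2,3)))            # go back 2 characters, copy a block of 3
[1] "banana"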

We had a lot of fun with this activity; we made a few attempts, but we do not know whether what we did was "optimal". That said, it made me want to read more, all the more so as several papers discuss this point (and the notions of information and entropy hide behind it, but we will get to them in a few sections).

  • section 4: handling errors, and the example of barcodes

Next comes a really fun activity, on error handling. It starts with a little magic trick, and binary writing… Consider the following grid (with the cards used in the first activity: one red side, one white side).

The colours were placed at random… we built a 5×5 grid by laying the cards at random (these are the cards on the blue mat). As for the cards around it… that is where the magic trick lies. Now, I let my son flip a card at random on the blue mat (actually, I think he can flip whichever one he wants).

While he was flipping it, I did not look, and now I have to find out which card was flipped… The trick to finding it is that the cards on the right and at the bottom were placed so that the number of red cards in each row, and in each column, is even!
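Here is a small simulation of the trick (my own sketch): build a random 5×5 grid, add a parity row and a parity column, flip one card, and recover its position from the parities,

> set.seed(1)
> M=matrix(sample(0:1,25,replace=TRUE),5,5)  # 1 = red card, 0 = white card
> M=rbind(M,colSums(M)%%2)                   # parity row: every column gets an even number of reds
> M=cbind(M,rowSums(M)%%2)                   # parity column: every row gets an even number of reds
> M[3,4]=1-M[3,4]                            # someone flips the card in row 3, column 4
> which(rowSums(M)%%2==1)                    # the only row with odd parity
[1] 3
> which(colSums(M)%%2==1)                    # the only column with odd parity
[1] 4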

Moreover, if two cards had been flipped, I could also have detected it. Fun, isn't it? The idea is really brilliant… Then, you learn how to check bank identification numbers, or the ISBN codes of books. Yes, the code printed on the back of books.

Well, the trouble is that there is clearly a problem with this barcode, as the code generated from it is not the right one. So the ISBN code we will actually use is the following

> isbn=1466592591
> checkcode=function(n){
+ l=as.character(n)
+ while(nchar(l)<10) l=paste("0",l,sep="")
+ a=substr(l,1,9)
+ y=as.numeric(substr(l,10,10))
+ x=as.numeric(unlist(strsplit(a,"")))
+ s=sum(x*(10:2))
+ z=11-(s%%11)
+ return(z==y)
+ }

In our 10-digit code, the last digit is used to check whether the first nine are correct or not. Everything is explained in the definition of the International Standard Book Number on Wikipedia. It is almost simple, provided you master remainders in the division by 11 (as the code above shows),

> checkcode(isbn)
[1] TRUE
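For the record, here is the intermediate computation for this ISBN, just unfolding the function step by step,

> x=as.numeric(unlist(strsplit("146659259","")))  # the first nine digits
> sum(x*(10:2))                                   # weighted sum
[1] 252
> 252%%11                                         # remainder of the division by 11
[1] 10
> 11-252%%11                                      # which gives back the tenth digit, 1
[1] 1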

We took a few books from the shelves, and I asked him to read the codes out to me… It was quite a fun example… But my son wanted to try a random number… and he found a valid ISBN code on his first try… Well, he had about one chance in 12.

> mean(Vectorize(checkcode)(trunc(1e10*runif(1000000))))
[1] 0.08327

  • section 5: the "guess the number I am thinking of" game, and the binary system

The last section was by far the most interesting! It touches on information (in Shannon's sense) and the (base 2) logarithm… I had discussed similar ideas in an old post, precisely after a game with the kids. In particular, you see the tree construction behind the bisection method… For instance, if I am thinking of an integer between 0 and 7, the most efficient method to find it is the following:

What is amusing is that we recover the binary decomposition of the numbers,

with first 2², then 2¹, and finally 2⁰. Fun, isn't it? We then ran a test (and my daughter came to join us). We started with "pick a number between 1 and 100", then "pick a number between 1 and 1000", taking turns. We saw that it takes about 7 guesses with 100 possible numbers, and about ten with 1000. Without talking about logarithms, we saw that you have to look for the powers of 2 needed to reach 100, or 1000.
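A quick way to recover those "about 7 guesses for 100, about 10 for 1000" figures (a small aside of mine, not part of the activity): with the halving strategy, the number of yes/no questions needed is the base-2 logarithm, rounded up,

> ceiling(log2(100))    # questions needed for a number between 1 and 100
[1] 7
> ceiling(log2(1000))   # and between 1 and 1000
[1] 10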

But the most interesting part: I went first, to show that you split the interval in two each time. I started with "greater than 50?", then "greater than 25?", then "greater than 12?", etc. My son opted for a stranger strategy, starting by cutting in the middle, "greater than 50?", and then (directly) "greater than 5?". Actually, after playing several rounds, I realized that his strategy could be (in some sense) optimal, because his sister clearly does not pick numbers uniformly between the two bounds: she tends to pick small, or very large, numbers. In one round, he was thus faster than my supposedly optimal strategy, winning in 5 guesses. We then had a long discussion about what the optimal strategy could be. It was a really interesting discussion… which continued with the next activity, on ranking and sorting. But more on that soon!

Given the resources available online, I do not understand why teaching computer science in primary school is not compulsory: it is incredibly playful! In short, we really had fun with these activities, and we learned a lot. Well, maybe the fact that it stayed around -30°C these past few days did not really leave room for alternatives…

Regression models and interaction(s) between factors

In a regression model, we want to write

$\mathbb{E}(Y|\boldsymbol{X}=\boldsymbol{x})=h(x_1,\cdots,x_k)$

If we restrict ourselves to a linear model, we write

$h(x_1,\cdots,x_k)=\beta_0+\beta_1x_1+\cdots+\beta_kx_k$

or again

$h(x_1,\cdots,x_k)=\beta_0+\sum_{j=1}^k\beta_jx_j$

But we suspect that we are missing something… in particular, we are missing all the possible interactions. We can cross the variables, and assume that

$h(x_1,\cdots,x_k)=\beta_0+\sum_{j=1}^k\beta_jx_j+\sum_{i<j}\beta_{i,j}x_ix_j$

which can be extended further, to order 3,

$h(x_1,\cdots,x_k)=\beta_0+\sum_{j=1}^k\beta_jx_j+\sum_{i<j}\beta_{i,j}x_ix_j+\sum_{i<j<l}\beta_{i,j,l}x_ix_jx_l$

or even beyond.

Assume that our variables are qualitative here, and more precisely binary. Let us take a simple example, with (classical) credit risk data1. The dataset can be obtained via

library(evtree)
db=GermanCredit

or directly via

myVariableNames = c("checking_status","duration","credit_history",
"purpose","credit_amount","savings","employment","installment_rate",
"personal_status","other_parties","residence_since","property_magnitude",
"age","other_payment_plans","housing","existing_credits","job",
"num_dependents","telephone","foreign_worker","class")

GermanCredit = read.table(
"http://archive.ics.uci.edu/ml/machine-learning-databases/statlog/german/german.data",
header=FALSE,col.names=myVariableNames)

To start with, let us keep three explanatory variables,

db=data.frame(Y=GermanCredit$class-1,
X1=GermanCredit$checking_status%in%c("A12","A13"),
X2=GermanCredit$credit_history%in%c("A30","A31"),
X3=GermanCredit$savings%in%c("A61","A62"))
reg=glm(Y~X1+X2+X3,data=db,family=binomial)
summary(reg)

The regression without interactions gives here

Call:
glm(formula = Y ~ X1 + X2 + X3, family = binomial, data = db)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-1.5431  -0.8421  -0.6295   1.3994   1.9999  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept)  -1.8544     0.1699 -10.915  < 2e-16 ***
X1TRUE        0.3363     0.1496   2.249   0.0245 *  
X2TRUE        1.3462     0.2347   5.735 9.76e-09 ***
X3TRUE        1.0001     0.1787   5.596 2.19e-08 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 1221.7  on 999  degrees of freedom
Residual deviance: 1143.6  on 996  degrees of freedom
AIC: 1151.6

Number of Fisher Scoring iterations: 4

Several interactions are possible here (let us restrict ourselves to pairs). This is what we observe when we run the regression

reg=glm(Y~X1+X2+X3+X1:X2+X1:X3+X2:X3,data=db,family=binomial)
summary(reg)

Call:
glm(formula = Y ~ X1 + X2 + X3 + X1:X2 + X1:X3 + X2:X3, family = binomial, 
    data = db)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-1.5369  -0.8281  -0.6439   1.3954   1.9638  

Coefficients:
              Estimate Std. Error z value Pr(>|z|)    
(Intercept)   -1.77109    0.20070  -8.825  < 2e-16 ***
X1TRUE         0.30296    0.33737   0.898 0.369186    
X2TRUE         0.88353    0.54255   1.628 0.103421    
X3TRUE         0.87709    0.22583   3.884 0.000103 ***
X1TRUE:X2TRUE -0.37917    0.49343  -0.768 0.442225    
X1TRUE:X3TRUE  0.09178    0.37278   0.246 0.805522    
X2TRUE:X3TRUE  0.80923    0.58185   1.391 0.164293    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 1221.7  on 999  degrees of freedom
Residual deviance: 1141.0  on 993  degrees of freedom
AIC: 1155

Number of Fisher Scoring iterations: 4

We can draw a picture to visualize those interactions: we have three vertices (our three variables), and we visualize the interactions along the edges,

indices=cbind(c(1,2,3),c(1,1,2),c(2,3,3))
k=3
theta=pi/2+2*pi*(0:(k-1))/k
sommetX=cos(theta)
sommetY=sin(theta)
plot(sommetX,sommetY,cex=1,axes=FALSE,xlab="",ylab="",
xlim=c(-1.5,1.5),ylim=c(-1.5,1.5))
for(i in 1:nrow(indices)){
segments(sommetX[indices[i,2]],sommetY[indices[i,2]],
sommetX[indices[i,3]],sommetY[indices[i,3]],col="grey")
text(mean(sommetX[indices[i,2:3]]),mean(sommetY[indices[i,2:3]]),
trunc(10000*coefficients(reg)[1+k+i])/10000)
}
points(sommetX,sommetY,cex=6,pch=19,col="yellow")
points(sommetX,sommetY,cex=6,pch=1)
text(sommetX,sommetY,1:k)

which gives here, for our three variables

This model might seem incomplete, since we only look at the interactions between the factors, by pairs. Actually, it is because the non-crossed variables are (visually) missing. We can add them if we want (at the risk of cluttering the picture),

cercle=function(c,r,cl) lines(c[1]+r*cos(seq(0,2*pi,length=501)),
c[2]+r*sin(seq(0,2*pi,length=501)),col=cl)

reg=glm(Y~X1+X2+X3+X1:X2+X1:X3+X2:X3,data=db,family=binomial)
indices=cbind(c(1,2,3),c(1,1,2),c(2,3,3))
k=3
theta=pi/2+2*pi*(0:(k-1))/k
sommetX=cos(theta)
sommetY=sin(theta)
plot(sommetX,sommetY,cex=1,axes=FALSE,xlab="",ylab="",xlim=c(-1.5,1.5),ylim=c(-1.5,1.5))
for(i in 1:nrow(indices)){
segments(sommetX[indices[i,2]],sommetY[indices[i,2]],
sommetX[indices[i,3]],sommetY[indices[i,3]],col="grey")
text(mean(sommetX[indices[i,2:3]]),mean(sommetY[indices[i,2:3]]),
trunc(10000*coefficients(reg)[1+k+i])/10000)
}
for(i in 1:k){
cercle(c(cos(theta)[i]*1.18,sin(theta)[i]*1.18),.18,"grey")
text(cos(theta)[i]*1.35,sin(theta)[i]*1.35,
trunc(10000*coefficients(reg)[1+i])/10000)
}
points(sommetX,sommetY,cex=6,pch=19,col="yellow")
points(sommetX,sommetY,cex=6,pch=1)
text(sommetX,sommetY,1:k)

that is, here

If we change the 'direction' of our variables (recoding them the other way around, swapping TRUE and FALSE), we get the following graph

dbinv=db
dbinv[,2:k]=1-dbinv[,2:k]
reg=glm(Y~X1+X2+X3+X1:X2+X1:X3+X2:X3,data=dbinv,family=binomial)
indices=cbind(c(1,2,3),c(1,1,2),c(2,3,3))
k=3
theta=pi/2+2*pi*(0:(k-1))/k
sommetX=cos(theta)
sommetY=sin(theta)
plot(sommetX,sommetY,cex=1,axes=FALSE,xlab="",ylab="",xlim=c(-1.5,1.5),ylim=c(-1.5,1.5))
for(i in 1:nrow(indices)){
segments(sommetX[indices[i,2]],sommetY[indices[i,2]],
sommetX[indices[i,3]],sommetY[indices[i,3]],col="grey")
text(mean(sommetX[indices[i,2:3]]),mean(sommetY[indices[i,2:3]]),
trunc(10000*coefficients(reg)[1+k+i])/10000)
}
for(i in 1:k){
cercle(c(cos(theta)[i]*1.18,sin(theta)[i]*1.18),.18,"grey")
text(cos(theta)[i]*1.35,sin(theta)[i]*1.35,
trunc(10000*coefficients(reg)[1+i])/10000)
}
points(sommetX,sommetY,cex=6,pch=19,col="yellow")
points(sommetX,sommetY,cex=6,pch=1)
text(sommetX,sommetY,1:k)

which can then be compared with the previous graph.

With 5 variables, the number of possible interactions increases… even if many of them are likely to be non-significant. We can already focus on the possible pairs of crossed interactions. To simplify the code, we will use two local helper functions,

vrepeach=function(x,e){
v=NULL
for(i in 1:length(e)){v=c(v,rep(x[i],each=e[i]))}
return(v)}
vreplength=function(x,l){
v=NULL
for(i in 1:length(l)){v=c(v,x[l[i]:length(x)])}
return(v)}
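To see what these two helpers return (a short check of mine, with k=5): together they enumerate all the pairs (i,j) with i<j,

k=5
vrepeach(1:(k-1),(k-1):1)  # 1 1 1 1 2 2 2 3 3 4
vreplength(2:k,1:(k-1))    # 2 3 4 5 3 4 5 4 5 5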

and then we adapt the previous code,

indices=cbind(1:(k*(k-1)/2),vrepeach(1:(k-1),(k-1):1),vreplength(2:k,1:(k-1)))
formule="Y~1"
for(i in 1:k) formule=paste(formule,"+X",i,sep="")
for(i in 1:nrow(indices)) formule=paste(formule,"+X",indices[i,2],":X",indices[i,3],sep="")
reg=glm(formule,data=db,family=binomial)
theta=pi/2+2*pi*(0:(k-1))/k
sommetX=cos(theta)
sommetY=sin(theta)
plot(sommetX,sommetY,cex=1,axes=FALSE,xlab="",ylab="",xlim=c(-1.5,1.5),ylim=c(-1.5,1.5))
for(i in 1:nrow(indices)){
segments(sommetX[indices[i,2]],sommetY[indices[i,2]],
sommetX[indices[i,3]],sommetY[indices[i,3]],col="grey")
text(mean(sommetX[indices[i,2:3]]),mean(sommetY[indices[i,2:3]]),
trunc(10000*coefficients(reg)[1+k+i])/10000)
}
for(i in 1:k){
cercle(c(cos(theta)[i]*1.18,sin(theta)[i]*1.18),.18,"grey")
text(cos(theta)[i]*1.35,sin(theta)[i]*1.35,
trunc(10000*coefficients(reg)[1+i])/10000)
}
points(sommetX,sommetY,cex=6,pch=19,col="yellow")
points(sommetX,sommetY,cex=6,pch=1)
text(sommetX,sommetY,1:k)

which gives a more complex diagram,

We can also take just 2 variables, with 3 and 4 categories respectively. We will extract two indicator variables for the first one (the remaining category being the reference one) and three for the second one,

db=data.frame(Y=GermanCredit$class-1,
X1=GermanCredit$checking_status=="A12",
X2=GermanCredit$checking_status=="A13",
X3=GermanCredit$checking_status=="A14",
X4=GermanCredit$employment%in%c("A72","A73"),
X5=GermanCredit$employment%in%c("A74","A75"))
k=5
indices=cbind(1:(k*(k-1)/2),vrepeach(1:(k-1),(k-1):1),vreplength(2:k,1:(k-1)))
formule="Y~1"
for(i in 1:k) formule=paste(formule,"+X",i,sep="")
for(i in 1:nrow(indices)) formule=paste(formule,"+X",indices[i,2],":X",indices[i,3],sep="")
reg=glm(formule,data=db,family=binomial)
theta=pi/2+2*pi*(0:(k-1))/k
sommetX=cos(theta)
sommetY=sin(theta)
plot(sommetX,sommetY,cex=1,axes=FALSE,xlab="",ylab="",xlim=c(-1.5,1.5),ylim=c(-1.5,1.5))
for(i in 1:nrow(indices)){
if(!is.na(coefficients(reg)[1+k+i])){
segments(sommetX[indices[i,2]],sommetY[indices[i,2]],
sommetX[indices[i,3]],sommetY[indices[i,3]],col="grey")
text(mean(sommetX[indices[i,2:3]]),mean(sommetY[indices[i,2:3]]),
trunc(10000*coefficients(reg)[1+k+i])/10000)
}}
for(i in 1:k){
cercle(c(cos(theta)[i]*1.18,sin(theta)[i]*1.18),.18,"grey")
text(cos(theta)[i]*1.35,sin(theta)[i]*1.35,
trunc(10000*coefficients(reg)[1+i])/10000)
}
points(sommetX,sommetY,cex=6,pch=19,col="yellow")
points(sommetX,sommetY,cex=6,pch=1)
text(sommetX,sommetY,1:k)

We see that several interactions are then no longer possible, on the left-hand side (the three levels of the same variable) and on the right-hand side.

We can also simplify the graphs, by displaying only the significant interactions.

indices=cbind(1:(k*(k-1)/2),vrepeach(1:(k-1),(k-1):1),vreplength(2:k,1:(k-1)))
formule="Y~1"
for(i in 1:k) formule=paste(formule,"+X",i,sep="")
for(i in 1:nrow(indices)) formule=paste(formule,"+X",indices[i,2],":X",indices[i,3],sep="")
reg=glm(formule,data=db,family=binomial)
theta=pi/2+2*pi*(0:(k-1))/k
sommetX=cos(theta)
sommetY=sin(theta)
plot(sommetX,sommetY,cex=1,axes=FALSE,xlab="",ylab="",xlim=c(-1.5,1.5),ylim=c(-1.5,1.5))
for(i in 1:nrow(indices)){
if(!is.na(coefficients(reg)[1+k+i])){
if(summary(reg)$coefficients[1+k+i,4]<.1){
segments(sommetX[indices[i,2]],sommetY[indices[i,2]],
sommetX[indices[i,3]],sommetY[indices[i,3]],col="grey")
text(mean(sommetX[indices[i,2:3]]),mean(sommetY[indices[i,2:3]]),
trunc(10000*coefficients(reg)[1+k+i])/10000)
}}}
for(i in 1:k){
if(summary(reg)$coefficients[1+i,4]<.1){
cercle(c(cos(theta)[i]*1.18,sin(theta)[i]*1.18),.18,"grey")
text(cos(theta)[i]*1.35,sin(theta)[i]*1.35,
trunc(10000*coefficients(reg)[1+i])/10000)
}}
points(sommetX,sommetY,cex=6,pch=19,col="yellow")
points(sommetX,sommetY,cex=6,pch=1)
text(sommetX,sommetY,1:k)

that is, here

Here, only one crossed interaction is significant, and almost all the variables are. And if we go back to the model with 5 factors,

db=data.frame(Y=GermanCredit$class-1,X1=GermanCredit$checking_status%in%c("A12","A13"),
X2=GermanCredit$credit_history%in%c("A30","A31"),
X3=GermanCredit$savings%in%c("A61","A62"),
X4=GermanCredit$employment%in%c("A71","A72"),
X5=GermanCredit$other_payment_plans=="A143")

indices=cbind(1:(k*(k-1)/2),vrepeach(1:(k-1),(k-1):1),vreplength(2:k,1:(k-1)))
formule="Y~1"
for(i in 1:k) formule=paste(formule,"+X",i,sep="")
for(i in 1:nrow(indices)) formule=paste(formule,"+X",indices[i,2],":X",indices[i,3],sep="")
reg=glm(formule,data=db,family=binomial)
theta=pi/2+2*pi*(0:(k-1))/k
sommetX=cos(theta)
sommetY=sin(theta)
plot(sommetX,sommetY,cex=1,axes=FALSE,xlab="",ylab="",xlim=c(-1.5,1.5),ylim=c(-1.5,1.5))
for(i in 1:nrow(indices)){
if(!is.na(coefficients(reg)[1+k+i])){
if(summary(reg)$coefficients[1+k+i,4]<.1){
segments(sommetX[indices[i,2]],sommetY[indices[i,2]],
sommetX[indices[i,3]],sommetY[indices[i,3]],col="grey")
text(mean(sommetX[indices[i,2:3]]),mean(sommetY[indices[i,2:3]]),
trunc(10000*coefficients(reg)[1+k+i])/10000)
}}}
for(i in 1:k){
if(summary(reg)$coefficients[1+i,4]<.1){
cercle(c(cos(theta)[i]*1.18,sin(theta)[i]*1.18),.18,"grey")
text(cos(theta)[i]*1.35,sin(theta)[i]*1.35,
trunc(10000*coefficients(reg)[1+i])/10000)
}}
points(sommetX,sommetY,cex=6,pch=19,col="yellow")
points(sommetX,sommetY,cex=6,pch=1)
text(sommetX,sommetY,1:k)

we get

I do not know whether my graphs are relevant or not. But I find them pretty. Actually, I stumbled somewhat by chance2 upon Taguchi tables, developed by Gen'ichi Taguchi (田口 玄一). The trouble is that I did not understand a thing… Well, let us say that I thought I understood, and then I kept drawing pictures… If anyone can explain Taguchi's graphs to me on my example, I am all ears! Because I doubt it is what I have been doing all along…

1. This dataset is used extensively in the fourth chapter of Computational Actuarial Science with R, to appear in the coming months.

2. In this case, chance came through @Benavent, who piqued my curiosity this morning by telling me about these tables, which I had never heard of before! At first I had even read the name as Taniguchi (谷口 ジロー), and could not see the connection with statistics…

Pricing reinsurance contracts, another case study

A reinsurance case study for tomorrow’s class. The goal will be to price some nonproportional reinsurance contract, for business interruption claims. Consider the following dataset,

> library(gdata)
>  db=read.xls(
+ "https://perso.univ-rennes1.fr/arthur.charpentier/SIN_1985_2000-PE.xls",
+  sheet=1)
Content type 'application/vnd.ms-excel' length 183808 bytes (179 Kb)
open URL
==================================================
downloaded 179 Kb

As for any (standard) insurance contract, there are two parts in the pricing

  • the expected number of claims
  • the average cost of individual claims

Here, we do not have covariates (but it might be possible to use some, like the kind of industry, the location, etc).

Let us start with the expected number of claims, per year. Here is the daily frequency,

The data are rather old… but somehow, it is a good thing since, after ten years, we can expect that most of the claims have been settled (we'll discuss claims dynamics starting next week). To plot the graph above, we use

> date=db$DSUR
> D=as.Date(as.character(date),format="%Y%m%d")
> vD=seq(min(D),max(D),by=1)
> sD=table(D)
> d1=as.Date(names(sD))
> d2=vD[-which(vD%in%d1)]
> vecteur.date=c(d1,d2)
> vecteur.cpte=c(as.numeric(sD),rep(0,length(d2)))
> base=data.frame(date=vecteur.date,cpte=vecteur.cpte)
> plot(vecteur.date,vecteur.cpte,type="h",xlim=as.Date(as.character(
+ c(19850101,20111231)),format="%Y%m%d"))

Then, we can get a prediction of the daily number of business interruption claims, e.g. for any day in 2010 (assume that we had to price a reinsurance contract a few years ago), using a (standard) Poisson regression

> regdate=glm(cpte~date,data=base,family=poisson(link="log"))
> nd2010=data.frame(date=seq(as.Date(as.character(20100101),format="%Y%m%d"),
+ as.Date(as.character(20101231),format="%Y%m%d"),by=1))
> pred2010 =predict(regdate,newdata=nd2010,type="response")
> sum(pred2010)
[1] 159.4757

Observe that using old data has drawbacks, since we get much more uncertainty if we use a regression on time (to include some possible trend).

Say we have something like 160 claims over a given year, on average.

> plot(D,db$COUTSIN,type="h")

Let us now focus on the cost of those claims. We have 2,400 claims in our dataset to fit a model (or at least to estimate how much a reinsurance contract might cost us). Assume that we would like to purchase a reinsurance contract for our very large claims. Say, the two largest per year. Over 16 years, the deductible should be close to the cost of the 32nd largest claim, which was close to 15 million.

> quantile(db$COUTSIN,1-32/2400)/1e6
98.66667% 
 15.34579 
> abline(h=quantile(db$COUTSIN,1-32/2400),col="blue")

So consider some reinsurance contract with a deductible of 15 million. Unfortunately, we cannot find unlimited covers. So let us assume that a reinsurance company agrees on such a deductible, but with a limited cover of 35 million. The average cost (for the reinsurance company) is $\mathbb{E}(g(X))$ where

$g(x)=\min\{35,\max\{x-15,0\}\}$

A first idea is to look at the first cost, i.e. the empirical average of that indemnity, on our portfolio. The indemnity function is

> indemn=function(x) pmin((x-15)*(x>15),50-15)

we can check on a few losses that it is actually what we wish to compute

> indemn(5)
[1] 0
> indemn(20)
[1] 5
> indemn(50)
[1] 35

Now, if we compute the average repayment by the reinsurance company, over 16 years, we get

> mean(indemn(db$COUTSIN/1e6))
[1] 0.1624292

So, per claim, the reinsurance company will pay, on average, 162,430. With 160 claims per year, the pure premium should be close to 26 million,

> mean(indemn(db$COUTSIN/1e6))*160
[1] 25.98867

(again, for a 35 million cover, for claims that should occur, on average, twice a year). As we will see, a standard model in reinsurance is the Pareto distribution (or to be more specific, a Generalized Pareto one), whose survival function above the threshold $\mu$ is

$\overline{F}(x)=\left(1+\xi\,\frac{x-\mu}{\sigma}\right)^{-1/\xi}$

There are three parameters here

  • the threshold $\mu$ (that we will consider as fixed, but we will see its impact on reinsurance pricing)
  • the scale parameter $\sigma$ (called $\beta$ in R)
  • the tail index $\xi$

The strategy is to consider a threshold below our deductible, e.g. 12 million. Then, given that the loss exceeds 12 million, we can fit a Generalized Pareto distribution,

> gpd.PL <- gpd(db$COUTSIN,12e6)$par.ests
> gpd.PL
          xi         beta 
7.004147e-01 4.400115e+06

and compute

>  E <- function(yinf,ysup,xi,beta,threshold){
+    as.numeric(integrate(function(x) (x-yinf)*dgpd(x,xi,mu=threshold,beta),
+    lower=yinf,upper=ysup)$value+
+    (1-pgpd(ysup,xi,mu=threshold,beta))*(ysup-yinf))
+  }

Here, given that a claim exceeds 12 million, the average repayment is close to 6 million

> E(15e6,50e6,gpd.PL[1],gpd.PL[2],12e6)
[1] 6058125

Now, we have to take into account the probability of exceeding 12 million, which is here

> mean(db$COUTSIN>12e6)
[1] 0.02639296

So, if we summarize, we have on average 160 claims per year,

> p
[1] 159.4757

Only 2.6% will exceed 12 million

> mean(db$COUTSIN>12e6)
[1] 0.02639296

So, the yearly frequency of claims larger than 12 million is 4.2 claims,

> p*mean(db$COUTSIN>12e6)
[1] 4.209036

And for a claim that exceeds 12 million, the average repayment is

> E(15e6,50e6,gpd.PL[1],gpd.PL[2],12e6)
[1] 6058125

So, the pure premium should be close to

> p*mean(db$COUTSIN>12e6)*E(15e6,50e6,gpd.PL[1],gpd.PL[2],12e6)
[1] 25498867

which (hopefully) is close to the empirical value we got. Actually, it is also possible to look at the impact of the threshold parameter, since it is clearly an intermediate value that could be changed. I mean, why 12 and not 10? Consider

> esp=function(threshold=12e6,p=sum(pred2010)){
+  (gpd.PL <- gpd(db$COUTSIN,threshold)$par.ests)
+  return(p*mean(db$COUTSIN>threshold)*E(15e6,50e6,gpd.PL[1],gpd.PL[2],threshold))
+  }

We can plot the pure premium as a function of that threshold,

> seuils=seq(1e6,15e6,by=1e6)
> plot(seuils,Vectorize(esp)(seuils),type="b",col="red")

which is between 24 and 26 for large thresholds. Again, that is only the first step, and we could price a higher reinsurance layer, like a reinsurance contract with a deductible of 50 million (we have our previous reinsurance contract for claims below that threshold), and a cover of 50 million, for instance. For those high layers, it becomes interesting to have a parametric model, which should be more robust than the empirical average.
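Just to sketch what the same machinery would give for such a (hypothetical) higher layer, say a deductible of 50 million and a cover of 50 million: only the bounds passed to the E function change, the frequency part is unchanged,

> p*mean(db$COUTSIN>12e6)*E(50e6,100e6,gpd.PL[1],gpd.PL[2],12e6)

Here, of course, the estimate relies on the fitted tail well beyond the observed deductible, which is precisely where a parametric model is useful.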

 

Generating your own normal distribution table

It might sound incredibly old fashioned, but for the exam of the ACT2121 probability course (to prepare for Exam P of the Society of Actuaries), I will provide a standard normal distribution table. The problem is that it is never the one we're looking for (sometimes it is the survival function, sometimes it is the cumulative distribution function, sometimes we consider only positive values, etc.). Here is the one that will be given for the exam, this Friday.

Now, here is the code I used to generate the table (in a LaTeX format),

> u=seq(0,3.09,by=0.01)
> p=pnorm(u)
> m=matrix(p,ncol=10,byrow=TRUE)

We now have the values that we wish to put in our table,

> options(digits=4)
> m
        [,1]   [,2]   [,3]   [,4]   [,5]   [,6]   [,7]   [,8]   [,9]  [,10]
 [1,] 0.5000 0.5040 0.5080 0.5120 0.5160 0.5199 0.5239 0.5279 0.5319 0.5359
 [2,] 0.5398 0.5438 0.5478 0.5517 0.5557 0.5596 0.5636 0.5675 0.5714 0.5753
 [3,] 0.5793 0.5832 0.5871 0.5910 0.5948 0.5987 0.6026 0.6064 0.6103 0.6141
 [4,] 0.6179 0.6217 0.6255 0.6293 0.6331 0.6368 0.6406 0.6443 0.6480 0.6517
 [5,] 0.6554 0.6591 0.6628 0.6664 0.6700 0.6736 0.6772 0.6808 0.6844 0.6879
 [6,] 0.6915 0.6950 0.6985 0.7019 0.7054 0.7088 0.7123 0.7157 0.7190 0.7224
 [7,] 0.7257 0.7291 0.7324 0.7357 0.7389 0.7422 0.7454 0.7486 0.7517 0.7549
 [8,] 0.7580 0.7611 0.7642 0.7673 0.7704 0.7734 0.7764 0.7794 0.7823 0.7852
 [9,] 0.7881 0.7910 0.7939 0.7967 0.7995 0.8023 0.8051 0.8078 0.8106 0.8133
[10,] 0.8159 0.8186 0.8212 0.8238 0.8264 0.8289 0.8315 0.8340 0.8365 0.8389
[11,] 0.8413 0.8438 0.8461 0.8485 0.8508 0.8531 0.8554 0.8577 0.8599 0.8621
[12,] 0.8643 0.8665 0.8686 0.8708 0.8729 0.8749 0.8770 0.8790 0.8810 0.8830
[13,] 0.8849 0.8869 0.8888 0.8907 0.8925 0.8944 0.8962 0.8980 0.8997 0.9015
[14,] 0.9032 0.9049 0.9066 0.9082 0.9099 0.9115 0.9131 0.9147 0.9162 0.9177
[15,] 0.9192 0.9207 0.9222 0.9236 0.9251 0.9265 0.9279 0.9292 0.9306 0.9319
[16,] 0.9332 0.9345 0.9357 0.9370 0.9382 0.9394 0.9406 0.9418 0.9429 0.9441
[17,] 0.9452 0.9463 0.9474 0.9484 0.9495 0.9505 0.9515 0.9525 0.9535 0.9545
[18,] 0.9554 0.9564 0.9573 0.9582 0.9591 0.9599 0.9608 0.9616 0.9625 0.9633
[19,] 0.9641 0.9649 0.9656 0.9664 0.9671 0.9678 0.9686 0.9693 0.9699 0.9706
[20,] 0.9713 0.9719 0.9726 0.9732 0.9738 0.9744 0.9750 0.9756 0.9761 0.9767
[21,] 0.9772 0.9778 0.9783 0.9788 0.9793 0.9798 0.9803 0.9808 0.9812 0.9817
[22,] 0.9821 0.9826 0.9830 0.9834 0.9838 0.9842 0.9846 0.9850 0.9854 0.9857
[23,] 0.9861 0.9864 0.9868 0.9871 0.9875 0.9878 0.9881 0.9884 0.9887 0.9890
[24,] 0.9893 0.9896 0.9898 0.9901 0.9904 0.9906 0.9909 0.9911 0.9913 0.9916
[25,] 0.9918 0.9920 0.9922 0.9925 0.9927 0.9929 0.9931 0.9932 0.9934 0.9936
[26,] 0.9938 0.9940 0.9941 0.9943 0.9945 0.9946 0.9948 0.9949 0.9951 0.9952
[27,] 0.9953 0.9955 0.9956 0.9957 0.9959 0.9960 0.9961 0.9962 0.9963 0.9964
[28,] 0.9965 0.9966 0.9967 0.9968 0.9969 0.9970 0.9971 0.9972 0.9973 0.9974
[29,] 0.9974 0.9975 0.9976 0.9977 0.9977 0.9978 0.9979 0.9979 0.9980 0.9981
[30,] 0.9981 0.9982 0.9982 0.9983 0.9984 0.9984 0.9985 0.9985 0.9986 0.9986
[31,] 0.9987 0.9987 0.9987 0.9988 0.9988 0.9989 0.9989 0.9989 0.9990 0.9990
> rownames(m)=seq(0,3,by=.1)
> colnames(m)=seq(0,.09,by=.01)

To put it in a nice LaTeX format, we can use

> library(xtable)
> newm=xtable(m,digits=4)
> print.xtable(newm, type="latex", file="nor1.tex")

We now have a simple tex file containing a table.

\begin{table}[ht]
\centering
\begin{tabular}{rrrrrrrrrrr}
  \hline
 & 0 & 0.01 & 0.02 & 0.03 & 0.04 & 0.05 & 0.06 & 0.07 & 0.08 & 0.09 \\ 
  \hline
0 & 0.5000 & 0.5040 & 0.5080 & 0.5120 & 0.5160 & 0.5199 & 0.5239 & 0.5279 & 0.5319 & 0.5359 \\ 
  0.1 & 0.5398 & 0.5438 & 0.5478 & 0.5517 & 0.5557 & 0.5596 & 0.5636 & 0.5675 & 0.5714 & 0.5753 \\ 
  0.2 & 0.5793 & 0.5832 & 0.5871 & 0.5910 & 0.5948 & 0.5987 & 0.6026 & 0.6064 & 0.6103 & 0.6141 \\ 
  0.3 & 0.6179 & 0.6217 & 0.6255 & 0.6293 & 0.6331 & 0.6368 & 0.6406 & 0.6443 & 0.6480 & 0.6517 \\ 
  0.4 & 0.6554 & 0.6591 & 0.6628 & 0.6664 & 0.6700 & 0.6736 & 0.6772 & 0.6808 & 0.6844 & 0.6879 \\ 
  0.5 & 0.6915 & 0.6950 & 0.6985 & 0.7019 & 0.7054 & 0.7088 & 0.7123 & 0.7157 & 0.7190 & 0.7224 \\ 
  0.6 & 0.7257 & 0.7291 & 0.7324 & 0.7357 & 0.7389 & 0.7422 & 0.7454 & 0.7486 & 0.7517 & 0.7549 \\ 
  0.7 & 0.7580 & 0.7611 & 0.7642 & 0.7673 & 0.7704 & 0.7734 & 0.7764 & 0.7794 & 0.7823 & 0.7852 \\ 
  0.8 & 0.7881 & 0.7910 & 0.7939 & 0.7967 & 0.7995 & 0.8023 & 0.8051 & 0.8078 & 0.8106 & 0.8133 \\ 
  0.9 & 0.8159 & 0.8186 & 0.8212 & 0.8238 & 0.8264 & 0.8289 & 0.8315 & 0.8340 & 0.8365 & 0.8389 \\ 
  1 & 0.8413 & 0.8438 & 0.8461 & 0.8485 & 0.8508 & 0.8531 & 0.8554 & 0.8577 & 0.8599 & 0.8621 \\ 
  1.1 & 0.8643 & 0.8665 & 0.8686 & 0.8708 & 0.8729 & 0.8749 & 0.8770 & 0.8790 & 0.8810 & 0.8830 \\ 
  1.2 & 0.8849 & 0.8869 & 0.8888 & 0.8907 & 0.8925 & 0.8944 & 0.8962 & 0.8980 & 0.8997 & 0.9015 \\ 
  1.3 & 0.9032 & 0.9049 & 0.9066 & 0.9082 & 0.9099 & 0.9115 & 0.9131 & 0.9147 & 0.9162 & 0.9177 \\ 
  1.4 & 0.9192 & 0.9207 & 0.9222 & 0.9236 & 0.9251 & 0.9265 & 0.9279 & 0.9292 & 0.9306 & 0.9319 \\ 
  1.5 & 0.9332 & 0.9345 & 0.9357 & 0.9370 & 0.9382 & 0.9394 & 0.9406 & 0.9418 & 0.9429 & 0.9441 \\ 
  1.6 & 0.9452 & 0.9463 & 0.9474 & 0.9484 & 0.9495 & 0.9505 & 0.9515 & 0.9525 & 0.9535 & 0.9545 \\ 
  1.7 & 0.9554 & 0.9564 & 0.9573 & 0.9582 & 0.9591 & 0.9599 & 0.9608 & 0.9616 & 0.9625 & 0.9633 \\ 
  1.8 & 0.9641 & 0.9649 & 0.9656 & 0.9664 & 0.9671 & 0.9678 & 0.9686 & 0.9693 & 0.9699 & 0.9706 \\ 
  1.9 & 0.9713 & 0.9719 & 0.9726 & 0.9732 & 0.9738 & 0.9744 & 0.9750 & 0.9756 & 0.9761 & 0.9767 \\ 
  2 & 0.9772 & 0.9778 & 0.9783 & 0.9788 & 0.9793 & 0.9798 & 0.9803 & 0.9808 & 0.9812 & 0.9817 \\ 
  2.1 & 0.9821 & 0.9826 & 0.9830 & 0.9834 & 0.9838 & 0.9842 & 0.9846 & 0.9850 & 0.9854 & 0.9857 \\ 
  2.2 & 0.9861 & 0.9864 & 0.9868 & 0.9871 & 0.9875 & 0.9878 & 0.9881 & 0.9884 & 0.9887 & 0.9890 \\ 
  2.3 & 0.9893 & 0.9896 & 0.9898 & 0.9901 & 0.9904 & 0.9906 & 0.9909 & 0.9911 & 0.9913 & 0.9916 \\ 
  2.4 & 0.9918 & 0.9920 & 0.9922 & 0.9925 & 0.9927 & 0.9929 & 0.9931 & 0.9932 & 0.9934 & 0.9936 \\ 
  2.5 & 0.9938 & 0.9940 & 0.9941 & 0.9943 & 0.9945 & 0.9946 & 0.9948 & 0.9949 & 0.9951 & 0.9952 \\ 
  2.6 & 0.9953 & 0.9955 & 0.9956 & 0.9957 & 0.9959 & 0.9960 & 0.9961 & 0.9962 & 0.9963 & 0.9964 \\ 
  2.7 & 0.9965 & 0.9966 & 0.9967 & 0.9968 & 0.9969 & 0.9970 & 0.9971 & 0.9972 & 0.9973 & 0.9974 \\ 
  2.8 & 0.9974 & 0.9975 & 0.9976 & 0.9977 & 0.9977 & 0.9978 & 0.9979 & 0.9979 & 0.9980 & 0.9981 \\ 
  2.9 & 0.9981 & 0.9982 & 0.9982 & 0.9983 & 0.9984 & 0.9984 & 0.9985 & 0.9985 & 0.9986 & 0.9986 \\ 
  3 & 0.9987 & 0.9987 & 0.9987 & 0.9988 & 0.9988 & 0.9989 & 0.9989 & 0.9989 & 0.9990 & 0.9990 \\ 
   \hline
\end{tabular}
\end{table}

and the following code to get a graph, illustrating what was actually computed in the table (see a previous post for more details),

> library("tikzDevice")
> options(tikzMetricPackages = c("\\usepackage[utf8]{inputenc}",
+ "\\usepackage[T1]{fontenc}", "\\usetikzlibrary{calc}", "\\usepackage{amssymb}"))
> tikz("normal-dist.tex", width = 8, height = 4, 
+ standAlone = TRUE,
+ packages = c("\\usepackage{tikz}",
+ "\\usepackage[active,tightpage,psfixbb]{preview}",
+ "\\PreviewEnvironment{pgfpicture}",
+ "\\setlength\\PreviewBorder{0pt}",
+ "\\usepackage{amssymb}"))
> u=seq(-3,3,by=.01)
> plot(u,dnorm(u),type="l",axes=FALSE,xlab="",ylab="",col="white")
> axis(1)
> I=which((u<=1))
> polygon(c(u[I],rev(u[I])),c(dnorm(u)[I],rep(0,length(I))),col="red",border=NA)
> lines(u,dnorm(u),lwd=2,col="blue")
> text(-1.5, dnorm(-1.5)+.17, "$\\textcolor{blue}{X\\sim\\mathcal{N}(0,1)}$", cex = 1.5)
> text(1.75, dnorm(1.75)+.25, 
+ "$\\textcolor{red}{\\mathbb{P}(X\\leq x)=\\displaystyle{
+ \\int_{-\\infty}^x \\varphi(t)dt}}$", cex = 1.5)
> dev.off()

Now we have the graph in another tex file. It is possible to embed the code in a tex file, or to compile the tex file to get a pdf file. I did generate the pdf file.

Here is the tex file I finally get. It is now extremely simple to generate your own normal distribution table. Now, I guess it could also be possible to use Sweave, or knitr. Once I get a copy of Yihui's book, I'll try to use it to generate distribution tables for my courses!

Diary of an addict

After four days offline (at least off my blog, see the previous post for more details), I have to face the truth: I am a computer addict. For sure. Here is the diary of the last four days, which were supposed to be spent without touching my computer, at work and at home. I tried to keep track of every time I had to go on my computer. At home, that was fine (I decided a few weeks ago that I should not check my email at home, in the evening and in the morning, so unless I want to read the news, check on Twitter what's going on, or write a post on my blog, I do not usually spend much time on our computer). But at the office, that's another story…

  • Tuesday, April 2nd

6:17 Wake up, first day of the challenge.

8:17 Time to check if the code used for data scraping (on some websites) did run… It's not like using my computer. It was just fixing problems for future research. One minute, just checking. Well, there was a problem in my code, I have to fix it (it takes much longer than I thought) and then I run it again (in order to extract some figures out of almost 200,000 internet pages). One code has been scraping a website for almost a week (and it looks like I have only one third of the data), and the other one I have to run every day, to back up some daily figures from several websites.

8:32 While going to get a coffee, JF shows me the website of the Antarctica Journal of Mathematics (where he was kindly invited to submit an article, and also to apply if he's willing to join the board) and we go through Rob Hyndman's warning on his blog, about junk journals. No offense to the web designer hired by this journal, but we had a lot of fun on that website… so… nineties.

8:47 I have to go online, on my email account, to download a paper I have to review. I had a reminder from an editor this weekend. Print the article. While looking for the email, I notice that I received during the night an invitation to give my opinion on another article, for another journal. Just go briefly through the paper. Even if my (personal) quota for 2013 is already exceeded, I decide to accept and write a referee report. Also check something with a co-author. Quickly. Also read four comments on the blog submitted during the night, and approve all of them. Damned! one of them mentioned a preprint related to something discussed in a post. Try to avoid reading the preprint. Go offline. For good. Turn on the music. Sacred music, namely In Seculum Longum.

9:39 After almost 30 minutes reading articles, I give up. I open LaTeX to type corrections for a couple of chapters I have to write (and send as soon as possible, the deadline was last week). So far, no internet. I chew some gum, I feel nervous…

9:44 Go online to check how to use \enquote{} in LaTeX (it looks like I have some damned babel problems!). Go on several forums. Spent the morning working on my LaTeX files.

12:54 Back from lunch, I turn off the sacred music, switch to Sandinista, by The Clash.

13:14 JF wants to check with me the schedule of my graduate courses for the Winter 2014 session, one on extreme values and copulas, and one on time series. Have to go on the website of the university to find what has been planned. Also spend some time digging through old emails, since the information is only partially online.

14:02 Have to go online, one more time, to buy a ticket for the French railway, for a colleague of mine.

14:17 Mathieu asks me to go check the emails received this morning, and to send an email to confirm that I will join a meeting about the organisation of a workshop.

14:40 Discussion with Mathieu about resubmitting an article. We need to go on my email account to check several (interesting) comments from the referees.

15:03 This time, I have to launch the internet browser. I have to find out how to avoid first names in a bibliography, using a bib file, and get only initials. Some more time on LaTeX forums.

15:08 Remember that I have to book a room for a colleague visiting me this August. Need to check the price on the website, and send emails.

15:19 Wrote a recommendation for a former student of mine. Send it.

15:26 Discussion by email with two co-authors, since we did plan to work on a joint paper this month.

15:48 Time to work on slides for a conference at the end of this month, and write codes to produce some graphs. So, still on my computer. But not (really) online.

Finally, I went back home early, to cook, and spend some time with the kids.

  • Wednesday, April 3rd

Second day.

8:17 Check my R code again. Rerun one of the scripts. Looks like there was a problem.

8:50 Go to give my course. Until noon, I spend the morning producing code, to show how to compute chain ladder estimates, and to explain the roots of the bootstrap in regression models.

13:03 Busy red light on my phone when I get back to my office: it looks like some people at the faculty tried to reach me. Usually I do not answer the phone. I hate my phone: if you want to reach me, send me an email. OK, in my mailbox, there are a couple of emails from the faculty: "please, call us back, regarding the conference you plan to organize". Damned, how can I tell them I am on a mission, that I am trying to avoid using my email (and that I have to deal with my phone-phobia at the same time).

13:16 Go online to check code for an R package I want to use to produce nice graphs

13:30 Quickly check my emails, delete 90% of them, answer one, postpone the others. Decide to go work in a coffee shop for the whole afternoon. While taking the elevator, I start chatting with a colleague. Looks like I missed an email about a meeting taking place in the afternoon. I want to work on my chapters. I go to the coffee shop. Nothing serious in the meeting, as far as I understood. I spent the afternoon reading, checking typos in chapters that should be sent soon, and articles on advanced methods in finance, based on trees… No computer for a few hours! I did it!

17:49 I have to go online at home, time to pay my HydroQuébec bill. Damned!

17:56 While I am about to log off, I receive an email from a former student of mine, with a link to a nice article (entitled "l'informatique, ça s'apprend"). I really want to share it. But I can't. Less than 36 hours into my mission, that would be a defeat! Do not go on Twitter!

18:02 Cooking for the kids. Will miss the Montreal Hackathon organized by R users. Wednesday evening is not a great time to join those social events (can geek meetings be called social?). My wife has a late course in the evening, and I am still trying to see how deeply addicted I can be. I finally decided that I will check my emails twice a day. But just to remove spam (or messages like "I am a student in an engineering program from India and I would like to start a PhD with you", or "the back door in one of the buildings will be locked during the weekend from 22:15 till 23:42 for security reasons") and to see if there might be important ones. I will probably have to answer some of them, but I'll try to postpone most of them.

18:41 Started a game of kid audio with the girls. It almost turned into a brawl when they started to argue about kettledrums and snare drums; I wanted to show them on YouTube, but finally I gave up (there is an old saying: never interfere in a girl-and-girl fight). Decided to ask the elder to read a story to the youngest, while I was washing the dishes (which is usually the perfect time for a DVD). Meanwhile, it looks like my son went online for his music assignment: his teacher is using online videos to help them practice. Argggg.

  • Thursday, April 4th

Third day.

8:29 As usual, checking the code for data scraping… reload the one that crashed (again) during the night

Error in substr(html, 8, 12) :
  invalid multibyte input string at '<e9>lair,'

Damned. Moving around 200,000 pages without being caught is difficult. Have to play some music. Gonzales, piano solo.

8:47 Have to quickly check my emails. The problem is that, on average, per weekday, I get a bit more than 100 emails (excluding official spam). If I do not scan them, I end up with a thousand emails very quickly… Need to moderate a comment on the blog.

8:50 Still online, checking my emails, bad news about funding for a student of mine, have to send a couple of emails to find a backup solution.

8:54 Quick discussion by email about copyrights for a chapter in a book

8:54 Also have to send emails to book a room for a colleague who will visit me in August.

8:55 Postpone a Skype discussion with a co-author, still trying to avoid unnecessary use of my computer.

8:57 Answer an email to schedule a meeting because a student asked for a grade revision, and an ad hoc committee is necessary. Looks like I am part of it.

9:17 Start to write recommendation letters for Christophe, who's applying for positions in several universities in France.

9:34 Back on the slides and the R code. On my computer, but offline.

10:11 Email, brief answer to a former student of mine, who might be interested in sharing some datasets, but it looks like there might be some confidentiality issues. I wanted to work on those data with a student in September. Have to find an alternative.

10:45 Discussion with a student in the master's program (face to face this time). Have to go on Dropbox to download a pdf file he wrote, and to download a couple of papers to check the proofs.

13:05 Work with Amadou, my PhD student; we need to find a pdf version of a book, since the property is clearly stated there, but the book is out of print (no way to get a paper copy). Also go online to find a reference on a complicated model.

14:40 Email from Fred, about a reserving technique that seems new, from a paper he just discovered.

15:03 Upload on SlideShare some (old) slides that do the same thing as this new paper (isn't anyone checking, before papers get published, that the results are really new?).

15:14 Play Rodrigo y Gabriela, need something punchy to finish my day

16:56 Received an email from the immigration department, and I have to go to their website to find a doctor for a medical examination of the whole family.

19:26 Request by email from financial services to get the exact amount (in Euros) taken from my credit card. Have to go on my bank account, online.

19:43 Discussion by email with Frédéric about one-year uncertainty and bootstrap with overdispersed Poisson models.

  • Friday, April 5th

Last day of the test. Fourth day.

06:18 My daughter wakes up and tells me it is unfair to have snow for her birthday. Have to go on http://meteomedia.com/weather/… to check for weather forecasts. Hopefully, we should have nice weather this afternoon…

08:16 Once again, checking R codes, still running this time! Great! Time to play some music. Air, Premiers Symptômes.

8:38 Email regarding next week’s jury, checking legal aspects

9:04 I have to print bank information that I downloaded yesterday, check orders placed on Amazon in the last 3 months, scan documents, and send them to financial services.

9:13 Have to check for an account number with financial services

9:15 Brief email to some contributors of a book that I should edit this year.

9:21 Launch Skype, have to discuss with coauthors in France.

11:54 Received an email about courses in September, looks like a quick answer is needed.

13:10 Update the syllabus for the course I will give in September. Decided to write that cellphones will not be allowed during class.

14:12 Work with Ben, a master student, on a paper. Need to scan the notes I have written down and send them to him by email (this time, we did work together without using my computer).

14:38 Finished my recommendation letters for Christophe (for positions in France, more than a dozen recommendations). Have to send them individually by email. Also have to find some email addresses that I do not have.

14:49 Received an email claiming that I cannot give my (graduate) course on extreme values in 2014, as it is not in the official program. Have to spend some time checking why. It seems that it has been removed from the list, and that the code has changed (fortunately JF was online to check that information much more efficiently than I would have).

14:59 Received an answer from one of the colleagues I just sent a recommendation to. We used to be students at the same time, a few years ago. Write a short email back to share some (personal) news.

16:19 Started to type sketches of what could be the final exam for my course on GLMs for actuarial science. So far 5 questions. I need about 40…. It will take a while.

Mission aborted.

I finally left the office later on, to pick up my son and bring him to his fencing course (to prepare for the Jeux de Montréal that will take place tomorrow). Then went back home for a cake with candles for my daughter’s birthday. Later on, I had to spend some time online, on the blog, for my students. And started to type this post. So, here was the story of the past four days. I have to admit that looking back at those four days is quite informative:

  1. I cannot work without a computer; I can hardly work offline. Not only for my research, to get help from forums on R and LaTeX, or to look for articles (the time I spent in Paris at the library making photocopies of old articles is clearly over).
  2. I do not only need a computer to write R code and produce LaTeX slides and articles. I need a computer… for everything. To plan meetings, for social interactions with colleagues, to find some help, to find a theorem, to book a hotel, etc.
  3. I understand more clearly why I am so unproductive in terms of research! It looks like I spend (I was about to say waste) a lot of time on administrative tasks. Small tasks. But once you add up a lot of small tasks, it becomes difficult to find 4 or 5 consecutive hours to work exclusively on a research project.

A generation of hackers?

I am regularly astounded when I hear the parents of my daughter’s friends (the youngest one) marvel that their child manages to unlock their cellphone “all by herself!”. The self-congratulation about one’s offspring I have known for more than a decade (and I think I have done my share of it too), but what gets me is rather the comment “really, this generation is so good with computers… can you believe it, she is only 3 years old!“.

It is time to set the historical record straight: I am getting old, and yet, when I was in primary school, we had computer classes. It was not like today, where my children have always had a computer at the back of the classroom (from kindergarten on, that is from the age of 2 in France), used to display photo slideshows or to look things up on the internet. In my day, we did not have a computer in every classroom, but there was a computer room with MO5s (which had just come out). And by computer classes, I mean that we learned to type lines of code (we did not have the internet, yes, I lived through those ancient times). It was nothing fancy: there was Logo, but above all Basic (it was part of the plan informatique pour tous). Surprisingly, we were active in front of a screen, producing things!

I remember writing my first games at my son’s age (he is 10; I was perhaps a bit older, now that I check the exact dates while typing this post, I see that my memory is playing tricks on me). Often we just copied lines of code, but still! How did the programming side get completely removed from schools, leaving only applications? and usually in an extremely rudimentary, not to say poor, form. For instance, I am regularly appalled that my son has not the slightest notion of computer security (and I am not even talking about warnings before creating a Facebook page… which would be the least one could do)! He regularly comes home from school telling me that friends told him about a new gaming website. But you have to register, create an account, etc. The other day, while chatting with him about passwords, we realized it could be amusing to create a website where people would enter their email address and a password. And that the password would most likely also give access to their email account (yes, he noticed that he always uses the same password). He could then play good pranks by sending messages in their place! (he is 10, his pranks are still fairly innocent). We had a long discussion at the beginning of the school break, and I realize that not much is missing to turn him into a real hacker (in the sense found on wikipedia, “a hacker is someone who likes to understand how a mechanism works, in order to be able to tinker with it“; it is a pity that in the collective imagination, by which I mean what one reads in the newspapers, the hacker has such a bad image, when it is just someone who is curious… curiosity has become a very bad flaw).

After hesitating for a long time (and because I could not find a day camp offering to teach computing), I took the plunge: my son is going to learn R. He is happy, because he often asks to be allowed to “do stuff” on the computer, and so am I, because I have the illusion that he will learn things that might be useful to him one day (at least to understand how a computer works). And as much as possible, I try to keep family activities separate from anything resembling work. Actually, I was thinking of buying python for kids (and take the opportunity to discover a language I should have learned 10 years ago). But the book is in English, and my son is not very comfortable with it. So I launched into R for kids (on my own). The goal was to learn to make drawings (somewhat in the spirit of Logo, I think). To understand that a drawing is a succession of basic shapes. I started (during my son’s fencing class) to type a few simple functions (square, line, triangle, disk, point, etc.), and to code the main colors (so that he could type them in French). Everything is hidden in the function

source("http://freakonometrics.free.fr/RforKIDZ.R")

Then we got started. The starting point is to make a drawing (on paper)! Yes, that is simpler. Then we define points, by giving their coordinates on the background grid

fond()

to create the background grid, and for the points

A=c(0,0)
B=c(4,12)
C=c(0,12)
D=c(2,15)

We can also plot these points (to check that they are in the right place)

point(A)
point(B)
point(C)
point(D)

Then we draw the shapes.

carre(A,B,"gris")
polygone(C,B,D,couleur="rouge")

(in the first version, I had not thought of writing a specific function for triangles) Then we add a flag,

E=c(2,18)

for the top of the mast, then for the rest

trait(D,E,"noir",ep=2)
F=c(2,16)
point(F)
G=c(6,17)
point(G)
polygone(E,F,G,couleur="jaune")

That is not really code, is it…. Well, it is, a little…. especially once we saw that we could translate a shape,

h=15
A=c(0+h,0)
B=c(4+h,12)
C=c(0+h,12)
D=c(2+h,15)
E=c(2+h,18)
carre(A,B,"gris")
polygone(C,B,D,couleur="rouge")
trait(D,E,"noir",ep=2)
F=c(2+h,16)
G=c(6+h,17)
polygone(E,F,G,couleur="jaune")

And there we go, we have two towers.

I think this is the heart of programming: understanding that there is a basic shape, and that afterwards we just repeat it. Then, in the middle, we built the wall, and we added the battlements, again by understanding that it was the same shape, translated several times…

A=c(4,0)
B=c(15,8)
carre(A,B,"gris")

A=c(5,8)
B=A+1
carre(A,B,"gris")

and then we repeat

A=c(7,8)
B=A+1
carre(A,B,"gris")

A=c(9,8)
B=A+1
carre(A,B,"gris")

etc…

(we saw in passing that if we no longer plan to use a point, we can give its name to another one) Finally, for the wall and the gate, we saw that we had a choice: the simplest option (after lengthy discussions) was to draw a rectangle, and then to make a square hole and a circle (in white).

A=c(13,8)
B=A+1
carre(A,B,"gris")

A=c(8,0)
B=c(12,4)
carre(A,B,"blanc")

C=c(10,4)
disque(C,2,"blanc")

Not bad, right? Still, something bothers me a little. Because I am not a good coder, and I am going to teach my son bad habits (or my children, since my daughter ended up joining in, but we will see later what she did).

The funniest part is that we saw how to make a movie: we built a car (a rather crude one, I admit; engineers will surely grumble when they see our shoebox with two wheels).

dessin=function(x){
fond()
A=c(2+x,2)
B=c(6+x,4)
C=c(3+x,2)
D=c(5+x,2)
carre(A,B,"vert")
disque(C,.75,"noir")
disque(D,.75,"noir")
}

(I am the one who coded the function, we will look at it in more detail another time). And then we made it move, from left to right: we start by typing

dessin(0)

then

dessin(1)

and

dessin(2)

etc… by going fast, we create movement… (a small loop doing this automatically is sketched below)
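
To avoid typing each frame by hand, here is a small sketch of a loop that redraws the car at successive positions; it relies on the dessin function defined above and on the helpers sourced from RforKIDZ.R, plus Sys.sleep to slow things down,

for(x in 0:10){     # redraw the car a little further to the right each time
  dessin(x)
  Sys.sleep(.2)     # short pause so that the eye perceives movement
}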

Fun, isn’t it?

My daughter went for a more traditional drawing… the famous “house with a tree and a rainbow”,

That is what we managed to do in a few hours… I think we could do better, and I am open to any suggestions: on how to teach coding, on the interface (we use RStudio: we code in the left pane, click the “run” icon and the drawing appears on the right), on possibly fun applications, or on experiments run by schoolteachers who want to teach their pupils the basics of computing. I find a huge amount of resources in English, such as the book python for kids I mentioned at the beginning (or the websites dedicated to games you code yourself, in python, such as inventwithpython.com/, which remind me of what I used to do when I was a kid), and I could also mention scratch. Because even if France seems to have been a pioneer in the 1980s, I do not see much these days in French… But maybe I just do not know how to search within the French-speaking community.

Big data, statistics and computer science

“Today, software and hardware together provide far more powerful factories than most statisticians realize, factories that many of today’s most able young people find exciting and worth learning about on their own. Their interest can help us greatly, if statistics starts to make much more nearly adequate use of the computer. However, if we fail to expand our uses, their interest in computers can cost us many of our best recruits, and set us back many years.” John Tukey, The technical tools of statistics, 1965, http://jstor.org/… via http://cm.bell-labs.com/…

https://f-origin.hypotheses.org/wp-content/blogs.dir/253/files/2013/02/102646212-05-04.jpeg

A random walk ? What else ?

Consider the following time series,

What does it look like? I know, this is a stupid game, but I keep using it in my time series courses. It does look like a random walk, doesn’t it? If we use the Phillips-Perron test, yes, it does,

> PP.test(x)

	Phillips-Perron Unit Root Test

data:  x 
Dickey-Fuller = -2.2421, Truncation lag parameter = 6, p-value = 0.4758

If we look at the autocorrelation function, we do observe some persistence,

> acf(x,100)

Perhaps this persistence can be related to long range dependence, or to some fractional random walk. A natural idea could be to estimate the Hurst parameter, using for instance Beran (1992)’s estimator – based on Whittle (1956) – where we assume that the autocorrelation function satisfies

https://latex.codecogs.com/gif.latex?\rho(h)\sim%20C%20h^{2H-2}

as https://latex.codecogs.com/gif.latex?h\rightarrow\infty, for some https://latex.codecogs.com/gif.latex?H\in(1/2,1) (the so-called Hurst index). But here, we start to observe unexpected outputs,

> library(longmemo)
> (d  <- WhittleEst(x))
'WhittleEst' Whittle estimator for  fractional Gaussian noise ('fGn');	 call:
WhittleEst(x = x)
	  time series of length  n = 759.

H = 0.9899335
coefficients 'eta' =
    Estimate Std. Error z value   Pr(>|z|)
H 0.98993350 0.02468323 40.1055 < 2.22e-16
 <==> d := H - 1/2 = 0.49 (0.025)

 $ vcov       : num [1, 1] 0.000609
  ..- attr(*, "dimnames")=List of 2
  .. ..$ : chr "H"
  .. ..$ : chr "H"
 $ periodogr.x: num [1:379] 1479.3 1077.3 371.7 287.2 51.2 ...
 $ spec       : num [1:379] 62.5 31.7 21.3 16.1 12.9 ...

or more precisely an unexpected value for the Hurst parameter, which should lie in https://latex.codecogs.com/gif.latex?(0,1),

> confint(d)
      2.5 %   97.5 %
H 0.9415553 1.038312

Oops, perhaps we did miss something, because it looks like there is extremely strong persistence in our time series,

> plot(d)

It is probably time to ask where I found that series… To be honest, I borrowed it from a great Canadian website http://climate.weatheroffice.gc.ca/climateData/. For instance, if you want the temperature we experienced a few days ago, you can use

> Y=2013
> M=1
> D=25
> url=paste("http://climate.weatheroffice.gc.ca/climateData/hourlydata_e.html?",
+ "timeframe=1&Prov=QC&StationID=5415&hlyRange=1953-01-01|2013-02-01",
+ "&Year=",Y,"&Month=",M,"&Day=",D,sep="")
> page=scan(url,what="character")

Yes, that series is the temperature we experienced in Montréal last month (an hourly time series). On the graph below, you can actually compare it with the temperature experienced in Januarys over the past 60 years,

So it is not that surprising to see long range dependence models appearing (I wrote a paper precisely on that topic a few years ago). What I find puzzling is that the persistence is large, extremely large. And the problem is that I do not see how we can explain the ‘jumps’ that we observe in that series. For instance the behavior of the series while I was in Europe, before January 20th: within 3 days, the temperature went down from 0°C to -20°C, up from -20°C to 0°C, and then down again from 0°C to -20°C (a nice И if we use cyrillic letters). Or how can we explain the oscillating behavior observed the week after, where the temperature went up from -25°C to (almost) +10°C in a few days? Within 10 days, we also observed two ‘jumps’ (or ‘crashes‘ if we want to use the terminology of financial time series) with a decrease of 25 degrees in less than 24 hours! Obviously, we need to find other classes of models to replicate the kind of behavior we observe in temperature series…

The law of small numbers

In insurance, the law of large numbers (initially named loi des grands nombres by Siméon Poisson, see e.g. http://en.wikipedia.org/…) is usually mentioned to justify large portfolios, because of pooling and diversification: the larger the pool, the more ‘predictable’ the losses will be (in a given period). Of course, this holds under standard statistical assumptions, namely finite expected value and independence (see http://freakonometrics.blog.free.fr/…. for a discussion, in French). In insurance, catastrophes are usually rare – and extremely costly – and actuaries might be interested in modeling the occurrence of that small number of events (see e.g. Aldous’ book on that specific topic, which can be downloaded from http://stat.berkeley.edu/…). The theorem behind it is sometimes called the law of small numbers (from the book published by Ladislaus Bortkiewicz, but we’ll get back to that story later on; see also Whitaker (1914) http://biomet.oxfordjournals.org/… or the book recently published by Michael Falk, Jürg Hüsler and Rolf-Dieter Reiss).

  • The Poisson distribution

The so-called Poisson distribution (see http://en.wikipedia.org/…) was introduced by Siméon Poisson in 1837 (in Recherches sur la Probabilité des Jugements en Matière Criminelle et en Matière Civile, Précédées des Règles Générales du Calcul des Probabilités, see http://gallica.bnf.fr/…). But it had been defined more than a century before, by Abraham De Moivre, in 1711, in De Mensura Sortis seu; de Probabilitate Eventuum in Ludis a Casu Fortuito Pendentibus (see e.g. the review in http://www.jstor.org/…). Let https://latex.codecogs.com/gif.latex?N denote a counting random variable; then it is said to be Poisson distributed if there is https://latex.codecogs.com/gif.latex?\lambda\in(0,\infty) such that

https://latex.codecogs.com/gif.latex?\mathbb{P}(N=k)=e^{-\lambda}\frac{\lambda^k}{k!},\forall%20k\in\mathbb{N}

De Moivre obtained that distribution from an approximation of the binomial distribution. Recall that the binomial distribution is a standard distribution in actuarial science, for instance to model the number of deaths among https://latex.codecogs.com/gif.latex?n insured. If individual death probabilities are identical, say https://latex.codecogs.com/gif.latex?p, and if deaths are independent events, then

https://latex.codecogs.com/gif.latex?\mathbb{P}(N=k)=\binom{n}{k}p^k(1-p)^{n-k},\forall%20k\in\{0,1,\cdots,n\}
And if https://latex.codecogs.com/gif.latex?n\rightarrow\infty and  https://latex.codecogs.com/gif.latex?np\rightarrow%20\lambda, then

https://latex.codecogs.com/gif.latex?\mathbb{P}(N=k)\rightarrow%20e^{-\lambda}\frac{\lambda^k}{k!}

Again, this is an asymptotic theorem, which is valid when we have a lot of observations (https://latex.codecogs.com/gif.latex?n\rightarrow\infty), but also when the probability of occurrence is extremely small (since https://latex.codecogs.com/gif.latex?p\sim\lambda/n), which is why the term small numbers is used. Siméon Poisson was not interested in mathematical approximations: his main point was to get a distribution with nice goodness-of-fit properties for the data he was working on. He wanted to get a better understanding of the cours d’assises (jury panel might be a valid translation of the French term). A jury consisted of 12 jurors who voted to determine whether a defendant was guilty. When guilt was predominant, with at least 8 votes against 4, the defendant was convicted (which happened in 47% of criminal cases). With 7 votes against 5, the opinion of the professional judges was requested (11% of criminal trials). Using these statistics, we can show that a defendant brought before an assize court is guilty with a probability of the order of 68%, and that the probability that a juror is not wrong when voting (convicting an innocent or acquitting a guilty person) was about 54%. He sought to calculate the probability that a defendant is wrongfully convicted, and got 2%. And 28% of acquitted defendants are in fact guilty. Siméon Poisson introduced this distribution to compute such probabilities easily. But the distribution he considered turned out to be central in probability theory…. (a short numerical illustration of De Moivre’s approximation is given below)
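
Here is a minimal sketch, in R, of De Moivre’s approximation; the values of https://latex.codecogs.com/gif.latex?n and https://latex.codecogs.com/gif.latex?p below are arbitrary, chosen so that https://latex.codecogs.com/gif.latex?np=2: the binomial and Poisson probabilities are then almost identical,

n=1000
p=2/n      # n large, p small, with n*p = 2 (illustrative values)
cbind(binomial=dbinom(0:5,size=n,prob=p),poisson=dpois(0:5,lambda=n*p))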

  • The law of small numbers

The heuristic of the main theorem, related to the Poisson distribution, is the following: let https://latex.codecogs.com/gif.latex?X_1,%20\cdots,X_n denote i.i.d. random variables taking values in https://latex.codecogs.com/gif.latex?%20\mathbb{R}^d (in a general setting, one component can be the time, the other one an upper region of interest, where some stochastic process might be observed). Let https://latex.codecogs.com/gif.latex?\mathcal{A}_n\subset\mathbb{R}^d. If https://latex.codecogs.com/gif.latex?\mathbb{P}(X_i%20\in%20\mathcal{A}_n)\rightarrow%200 as https://latex.codecogs.com/gif.latex?n\rightarrow\infty (or https://latex.codecogs.com/gif.latex?\mathbb{P}(X_i%20\in%20\mathcal{A}_n)=O(n^{-1}) to be a little bit more specific about the assumptions), and if https://latex.codecogs.com/gif.latex?N denotes the (random) count of events https://latex.codecogs.com/gif.latex?\{X_i%20\in%20\mathcal{A}_n\}, then https://latex.codecogs.com/gif.latex?N can be approximated by a Poisson distribution with parameter https://latex.codecogs.com/gif.latex?\lambda%20=n%20\times%20\mathbb%20P(X_i%20\in%20\mathcal{A}_n).
The heuristic is that if we consider a large number of observations, and if we count how many are in a given (small) region, then the number of such observations is Poisson distributed.

n=1000
X=runif(n)*10-1.5
Y=runif(n)*10-1.5
plot(X,Y,axes=FALSE,cex=.6)
u=seq(-1,1,by=.01)
v=sqrt(1-u^2)
polygon(c(u,rev(u)),c(v,rev(-v)),col="yellow",border=NA)
I=(X^2+Y^2)<1
points(X[I],Y[I],cex=.6,pch=19,col="red")

If we run some simulations,

>  n=1000
>  ns=100000
>  N=rep(NA,ns)
> for(s in 1:ns){
+ X=runif(n)*10-1.5
+ Y=runif(n)*10-1.5
+ I=(X^2+Y^2)<1
+ N[s]=sum(I)
+ }
> hist(N,breaks=0:60,probability=TRUE,col="yellow")
> mean(N)
[1] 31.41257

The parameter of the Poisson distribution is https://latex.codecogs.com/gif.latex?n times the ratio of the area of the yellow disk to the area of the square, i.e.

> (lambda=10*pi)
[1] 31.41593
> lines(0:60-.5,dpois(0:60,lambda),type="b",col="red")

https://f-origin.hypotheses.org/wp-content/blogs.dir/253/files/2013/01/Capture-d%E2%80%99e%CC%81cran-2013-01-28-a%CC%80-11.14.21.png

To get an interpretation related to insurance modeling, let https://latex.codecogs.com/gif.latex?\mathcal{A} denote an upper layer in a reinsurance contract, i.e. https://latex.codecogs.com/gif.latex?\mathcal{A}=\{x%3Ed\} for some deductible https://latex.codecogs.com/gif.latex?d. Let the https://latex.codecogs.com/gif.latex?X_i‘s denote individual losses. Then the number of claims that hit this upper layer can be modeled with a Poisson distribution (a small simulation is sketched below). More precisely, if the deductible https://latex.codecogs.com/gif.latex?d becomes extremely large (so that https://latex.codecogs.com/gif.latex?\mathbb{P}(X_i%20\in%20\mathcal{A})\rightarrow%200), we obtain the peaks-over-threshold model in extreme value theory (see e.g. http://brale.math.hr/~iugrina/… or http://fire.nist.gov/bfrlpubs/…): if https://latex.codecogs.com/gif.latex?N has a Poisson distribution and, conditionally on https://latex.codecogs.com/gif.latex?N, https://latex.codecogs.com/gif.latex?X_1,\cdots,X_N are independent identically distributed generalized Pareto random variables, then https://latex.codecogs.com/gif.latex?\max\{X_1,\cdots,X_N\} has the generalized extreme value distribution. Thus, exceedance models (for rare events) are closely related to Poisson processes.
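
Here is a minimal simulation sketch of that claim-count interpretation; the Pareto losses, the portfolio size and the deductible below are assumptions chosen for illustration only,

n=10000      # hypothetical number of individual losses
alpha=2      # hypothetical Pareto tail index
d=100        # hypothetical (large) deductible
ns=1000
N=rep(NA,ns)
for(s in 1:ns){
  X=(1-runif(n))^(-1/alpha)   # Pareto losses, with survival function x^(-alpha), x>1
  N[s]=sum(X>d)               # number of claims hitting the upper layer
}
mean(N)
n*d^(-alpha)                  # theoretical Poisson parameter, n*P(X>d)

The empirical average of those counts should be close to https://latex.codecogs.com/gif.latex?n\times\mathbb{P}(X_i%3Ed), and their histogram close to a Poisson distribution with that parameter.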

  • The Poisson process

As mentioned above, the Poisson distribution appears when events occur somehow randomly and independently, over time. It is then natural to study the time between two occurrences (or two claims, in an insurance context).

  • Poisson distribution, and claims occurrence

It is neither Siméon Poisson nor De Moivre, but Ladislaus von Bortkiewicz who first mentioned the Poisson distribution as the law of small numbers. In 1898 (see https://archive.org/…), he studied the number of soldiers killed by horse kicks, from 1875 till 1894, over 200 corps-year observations (more precisely 10 corps over 20 years).

He obtained the following distribution (here, the parameter of the Poisson distribution is 0.61, i.e. the average number of deaths per year)

number of deaths per year   empirical counts   Poisson distribution
0                                 109               108.67
1                                  65                66.21
2                                  22                20.22
3                                   3                 4.11
4                                   1                 0.63
5 and more                          0                 0.08
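
As a quick check, the Poisson column of the table above can be recovered (up to rounding) from the parameter 0.61 given in the text, over the 200 observations,

round(200*dpois(0:4,0.61),2)
round(200*(1-ppois(4,0.61)),2)    # the '5 and more' row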

It is possible to find a lot of cases where the Poisson distribution fits extremely well. For instance, if we consider the number of hurricanes that made landfall in Florida since 1850,

number of hurricanes per year   empirical frequency   Poisson frequency
0                                      30                  27.16
1                                      48                  47.99
2                                      37                  42.41
3                                      29                  24.98
4                                       8                  11.03
5                                       3                   3.90
6                                       3                   1.15
7                                       1                   0.29
8 and more                              0                   0.08
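
Here again, the fitted column can be reproduced; the total number of years (159) and the Poisson parameter (the empirical mean, about 1.77) are deduced from the empirical counts in the table,

k=0:8
n_k=c(30,48,37,29,8,3,3,1,0)      # empirical counts, from the table
n_years=sum(n_k)                  # 159 years of observations
lambda=sum(k*n_k)/n_years         # about 1.77 hurricanes per year
round(n_years*c(dpois(0:7,lambda),1-ppois(7,lambda)),2)
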
  • Poisson distribution, and return period

The return period was introduced by Emil Gumbel, in hydrology, to link probabilities and durations (see e.g. http://freakonometrics.blog.free.fr/…). A decennial event has an occurrence probability of 1/10; 10 years is then the average waiting time before it occurs. This does not mean that the event will not occur before 10 years, or has to occur before 10 years. Consider a return period https://latex.codecogs.com/gif.latex?T (in years); then the yearly probability of non-occurrence is https://latex.codecogs.com/gif.latex?1-(1/T).

And the probability of observing at least one catastrophe over https://latex.codecogs.com/gif.latex?n years is then https://latex.codecogs.com/gif.latex?1-[1-(1/T)]^n. It is standard to summarize this property with the following table, giving that probability for a return period https://latex.codecogs.com/gif.latex?T (columns) and a number of years https://latex.codecogs.com/gif.latex?n (rows),

                 return period T
number of
years n        10      20      50     100     200
10           65.1%   40.1%   18.3%    9.6%    4.9%
20           87.8%   64.2%   33.2%   18.2%    9.5%
50           99.5%   92.3%   63.6%   39.5%   22.5%
100          99.9%   99.4%   86.7%   63.4%   39.5%
200          99.9%   99.9%   98.2%   86.6%   63.3%

The diagonal of the table above is extremely interesting. It looks like there is some kind of convergence towards a limiting value (here 63.2%). Indeed, the number of events observed over n years has a Binomial distribution with probability https://latex.codecogs.com/gif.latex?1/T=1/n, which converges towards the Poisson distribution with parameter 1. The probability of having at least one catastrophe is then https://latex.codecogs.com/gif.latex?1-\exp(-1), which is equal to 0.632 (a sketch reproducing the table is given below).
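
The table above, and the limiting value on its diagonal, can be reproduced with a couple of lines of R (a small sketch, using the formula given earlier),

T=c(10,20,50,100,200)
n=c(10,20,50,100,200)
round(100*outer(n,T,function(n,T) 1-(1-1/T)^n),1)
1-exp(-1)      # the limit on the diagonal, about 63.2%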

  • Rare probabilities and the Poisson distribution

The Poisson distribution keeps appearing when computing probabilities of rare events. For instance, consider the probability of having at least one incident in a nuclear plant in France over a 50-year period. Assume that the annual probability https://latex.codecogs.com/gif.latex?p of an incident in a reactor is small, say 0.005% (the value used in the code below). Assume further that reactors are independent of one another, and over time. The probability of having at least one incident over 80 reactors in 50 years is (exactly)

https://latex.codecogs.com/gif.latex?\mathbb{P}(N\neq0)=1-(1-p)^{50%20\times%2080}

Of course, a linear approximation is not correct (even if it was mentioned in some French newspaper, as explained in an old post http://freakonometrics.blog.free.fr/…)

https://latex.codecogs.com/gif.latex?\mathbb%20P(N\neq%200)\neq%2050\times%2080\times%20p

On the other hand

https://latex.codecogs.com/gif.latex?\mathbb%20P(N\neq 0)=1-(1-p)^{50\times80%20}%20\sim1-\exp\left(-50\times80\times%20p%20\right)

> p=0.00005
> 1-(1-p)^(50*80)
[1] 0.1812733
> 1-exp(-50*80*p)
[1] 0.1812692

which is the probability that https://latex.codecogs.com/gif.latex?N is not null when https://latex.codecogs.com/gif.latex?N has a Poisson distribution with parameter https://latex.codecogs.com/gif.latex?\lambda=50\times80\times%20p. We clearly see here an application of De Moivre’s approximation in risk management.

Another way of looking at this problem is based on the following idea: given that in (roughly) 45 years of observations on 450 reactors worldwide, three major accidents were observed, including Three Mile Island (1979) and Fukushima (2011), the average time between accidents can be estimated at 16 years. For a single reactor, we can assume that the average waiting time before an incident is 450 times 16 years, i.e. 7200 years. In other words, the probability of having one incident, over one year, for one reactor is 1 over 7200 (this is the idea behind the return period concept). If we assume that accidents occur randomly and independently of each other (as defined above), then the number of major accidents observed over a period of 50 years in France follows a Poisson distribution with parameter 50/(7200/80). Hence, the probability of having at least one major accident over 50 years, with 80 reactors, can be estimated by

https://latex.codecogs.com/gif.latex?1-\exp(-50\times%2080/7200)

i.e.

> 1-exp(-50*80/7200)
[1] 0.4262466

(keeping in mind all the uncertainty around the estimated waiting time before a major accident for a single reactor!).

Amsterdam

I will be in Amsterdam for the end of this week. I will be on the jury for the PhD defense of Julien Tomas, whose thesis is entitled “Quantifying Biometric Life Insurance Risks With Non-Parametric Smoothing Methods” (the thesis will probably be online soon). But before that, I will give a talk at the actuarial seminar at UvA. My last visit was a real pleasure, and it should be the same this time too. I will give a talk this Thursday on “R for actuarial science“. The slides can be downloaded from here.

On the non-connectedness of the Vaucluse

The day before yesterday, José, a former colleague from Rennes, pointed out to me an oddity of cartography (and asked me about its impact on maps drawn with R). He actually made me discover that the département of Vaucluse is not connected. As we can see on the map opposite, there is the enclave des papes, a piece of land surrounded by the Drôme but administratively attached to the Vaucluse. Surprising, isn’t it?

Now, this kind of thing also exists in R. For instance, it is possible to work with islands, which are attached to one département or another. Let us look at what happens here with the standard R maps,

>  library(maps)
>  france = map(database="france")
>  france$names
  [1] "Nord"                                
  [2] "Pas-de-Calais"

(…)

 [92] "Gard"                                
 [93] "Vaucluse"                            
 [94] "Tarn-et-Garonne"                     
 [95] "Alpes-Maritimes"                     
 [96] "Vaucluse"                            
 [97] "Tarn"                                
[108] "Hautes-Pyrenees"                     
[109] "Var:Iles d'Hyeres:I. du Levant"      
[110] "Var:Iles d'Hyeres:I. de Porquerolles"
[111] "Var:Iles d'Hyeres:I. de Port Cros"   
[112] "Haute-Corse"                         
[113] "Pyrenees-Orientales"                 
[114] "Corse du Sud"

We see that Vaucluse appears twice in the list of départements. Islands are attached to a département but under a specific name (as we can see with the île de Porquerolles, for instance). Not so for the enclave des papes. Indeed, if we look for the Vaucluse, it shows up twice

>  which(substr(tolower(france$names),1,5)=="vaucl")
[1] 93 96

So, if we color the Vaucluse, it is the whole département (enclave included) that shows up,

The code is here

>  dpt="Vaucluse"
>  couleur="red"
>  match=match.map(france,dpt)
>  color=couleur[match]
>  map(database="france", fill=TRUE, col=color)

We can also highlight the enclave. To do so, we just have to ask for the two regions to be colored differently,

>  match[which(match==1)[2]]=2
>  couleur=c("blue","red")
>  color=couleur[match]
>  map(database="france", fill=TRUE, col=color)

Ah, the joy of maps with R…

 

Generating a non-homogeneous Poisson process

Consider a Poisson process https://latex.codecogs.com/gif.latex?(N_t)_{t\geq%200}, with non-homogeneous intensity https://latex.codecogs.com/gif.latex?\lambda(t). Here, we consider a deterministic function, not a stochastic intensity. Define the cumulated intensity

https://latex.codecogs.com/gif.latex?\Lambda(t)=\int_0^t%20\lambda(s)\,ds

in the sense that the number of events that occurred between time https://latex.codecogs.com/gif.latex?s and https://latex.codecogs.com/gif.latex?t is a random variable that is Poisson distributed with parameter https://latex.codecogs.com/gif.latex?\Lambda(t)-\Lambda(s).

For example, consider here a cyclical Poisson process, with intensity

   lambda=function(x) 100*(sin(x*pi)+1)

To compute the cumulated intensity, consider a very general function

   Lambda=function(t) integrate(f=lambda,lower=0,upper=t)$value
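
For this particular intensity, the integral actually has a closed form, https://latex.codecogs.com/gif.latex?\Lambda(t)=100\left[t+\frac{1-\cos(\pi%20t)}{\pi}\right], which can be used as a sanity check of the numerical integration (this closed form is derived here, it is not part of the original code),

   Lambda.closed=function(t) 100*(t+(1-cos(pi*t))/pi)
   Lambda(2)          # numerical integration
   Lambda.closed(2)   # closed form, both should return 200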

The idea is to generate a Poisson process on a finite interval https://latex.codecogs.com/gif.latex?[0,T_{\max}].

The first code is based on a proposition from Çinlar (1975),

  1. start with https://latex.codecogs.com/gif.latex?s=0
  2. generate https://latex.codecogs.com/gif.latex?u\sim\mathcal{U}([0,1])
  3. set https://latex.codecogs.com/gif.latex?s=s-\log(u)
  4. let https://latex.codecogs.com/gif.latex?t denote https://latex.codecogs.com/gif.latex?\Lambda^{-1}(s)=\inf\{v:\Lambda(v)\geq%20s\}
  5. deliver https://latex.codecogs.com/gif.latex?t
  6. go to step 2.

In order to get the infimum https://latex.codecogs.com/gif.latex?\inf\{v:\Lambda(v)\geq%20s\}, consider a code such as

   v=seq(0,Tmax,length=1000)
   t=min(v[which(Vectorize(Lambda)(v)>=s)])

(it might not be very efficient…. but it should work). Here, the code to generate that Poisson process is

   s=0; t=0; v=seq(0,Tmax,length=1000)
   X=numeric(0)
   while(t<=Tmax){
     u=runif(1)
     s=s-log(u)
     t=min(v[which(Vectorize(Lambda)(v)>=s)])
     X=c(X,t)
   }
   X=X[X<=Tmax]   # drop the last point, which falls beyond Tmax

Here, we get the following histogram,

   hist(X,breaks=seq(0,max(X)+1,by=.1),col="yellow")
   u=seq(0,max(X),by=.02)
   lines(u,lambda(u)/10,lwd=2,col="red")

Consider now another strategy. The idea is to use the conditional distribution of the waiting time before the next event, given that one occurred at time https://latex.codecogs.com/gif.latex?t,

  1. start with https://latex.codecogs.com/gif.latex?t=0
  2. generate https://latex.codecogs.com/gif.latex?x with distribution https://latex.codecogs.com/gif.latex?F_t(x)=1-e^{-[\Lambda(t+x)-\Lambda(t)]}
  3. set https://latex.codecogs.com/gif.latex?t=t+x
  4. deliver https://latex.codecogs.com/gif.latex?t
  5. go to step 2.

Here the algorithm is simple. On the computational side, at each step we have to compute https://latex.codecogs.com/gif.latex?F_t and then https://latex.codecogs.com/gif.latex?F_t^{-1}. To do so, since https://latex.codecogs.com/gif.latex?F_t is increasing with values in https://latex.codecogs.com/gif.latex?[0,1), we can use a dichotomic (bisection) algorithm,

   Ft=function(x) 1-exp(-Lambda(t+x)+Lambda(t))
   Ftinv=function(u){
     a=0
     b=Tmax
     for(j in 1:20){
       if(Ft((a+b)/2)<=u){binf=(a+b)/2;bsup=b}
       if(Ft((a+b)/2)>=u){bsup=(a+b)/2;binf=a}
       a=binf
       b=bsup
     }
   return((a+b)/2)
   }

Here the code is the following

   t=0; X=t
   while(X[length(X)]<=Tmax){
     Ft=function(x) 1-exp(-Lambda(t+x)+Lambda(t))
     Ftinv=function(u){
      a=0
      b=Tmax
      for(j in 1:20){
        if(Ft((a+b)/2)<=u){binf=(a+b)/2;bsup=b}
        if(Ft((a+b)/2)>=u){bsup=(a+b)/2;binf=a}
        a=binf
        b=bsup
      }
      return((a+b)/2)
     }
     x=Ftinv(runif(1))
     t=t+x
     X=c(X,t)
   }

The third code is based on a classical algorithm to generate a homogeneous Poisson process on a finite interval: first, we generate the number of events, then we draw uniform variates, and we sort them. Here, the strategy is similar, except that the draws won’t be uniform any longer.

  1. generate the number of events on the time interval https://latex.codecogs.com/gif.latex?[0,T_{\max}], https://latex.codecogs.com/gif.latex?n\sim\mathcal{P}(\Lambda(T_{\max}))
  2. generate independently https://latex.codecogs.com/gif.latex?U_1,\cdots,U_n where https://latex.codecogs.com/gif.latex?U_i\sim%20F with https://latex.codecogs.com/gif.latex?F(x)=\Lambda(x)/\Lambda(T_{\max})
  3. set https://latex.codecogs.com/gif.latex?t_i=U_{(i)}, i.e. the ordered values https://latex.codecogs.com/gif.latex?U_{(1)}\leq\cdots\leq%20U_{(n)}
  4. deliver the https://latex.codecogs.com/gif.latex?t_i‘s

This algorithm is extremely simple, and also very fast. There is only one function to invert, and it is not inside the loop,

   n=rpois(1,Lambda(Tmax))
   Ft=function(x) Lambda(x)/Lambda(Tmax)
   Ftinv=function(u){
     a=0
     b=Tmax
     for(j in 1:20){
       if(Ft((a+b)/2)<=u){binf=(a+b)/2;bsup=b}
       if(Ft((a+b)/2)>=u){bsup=(a+b)/2;binf=a}
       a=binf
       b=bsup
     }
     return((a+b)/2)
     }
   X0=rep(NA,n)
   for(i in 1:n){
     X0[i]=Ftinv(runif(1))
    }
   X=sort(X0)

Here is the associated histogram,

An alternative is based on a rejection (thinning) technique. Actually, it was the algorithm mentioned a few years ago on this blog (well, the previous one). We need an upper bound for the intensity; computations can then be much faster. Consider

  1. start with https://latex.codecogs.com/gif.latex?t=0
  2. generate https://latex.codecogs.com/gif.latex?u\sim\mathcal{U}([0,1])
  3. set https://latex.codecogs.com/gif.latex?t=t-\log(u)/\lambda_u
  4. generate https://latex.codecogs.com/gif.latex?v\sim\mathcal{U}([0,1]) (independent of https://latex.codecogs.com/gif.latex?u)
  5. if https://latex.codecogs.com/gif.latex?v\leq\lambda(t)/\lambda_u then deliver https://latex.codecogs.com/gif.latex?t
  6. go to step 2.

Here, consider a constant upper bound,

   lambdau=function(t) 200
   Lambdau=function(t) lambdau(t)*t

The code to generate a Poisson process is

   t=0
   X=numeric(0)
   while(t<=Tmax){
     u=runif(1)
     t=t-log(u)/lambdau(t)
     if(runif(1)<=lambda(t)/lambdau(t)) X=c(X,t)
   }
   X=X[X<=Tmax]   # drop any point falling beyond Tmax

The histogram is here

Finally, the last one is also based on a rejection technique, mixed with the second one, i.e. define, for the (constant) upper bound,

https://latex.codecogs.com/gif.latex?F_t^u(x)=1-e^{-[\Lambda_u(t+x)-\Lambda_u(t)]}=1-e^{-\lambda_u%20x}

The good thing is that this function can easily be inverted,

https://latex.codecogs.com/gif.latex?(F_t^u)^{-1}(u)=-\log(1-u)/\lambda_u

  1. start (as usual) with https://latex.codecogs.com/gif.latex?t=0
  2. generate https://latex.codecogs.com/gif.latex?x=(F_t^u)^{-1}(u) with https://latex.codecogs.com/gif.latex?u\sim\mathcal{U}([0,1])
  3. set https://latex.codecogs.com/gif.latex?t=t+x
  4. generate https://latex.codecogs.com/gif.latex?v\sim\mathcal{U}([0,1])
  5. if https://latex.codecogs.com/gif.latex?v\leq\lambda(t)/\lambda_u(t) then deliver https://latex.codecogs.com/gif.latex?t
  6. go to step 2.

Here, the algorithm is simply

   t=0
   X=numeric(0)
   while(t<=Tmax){
     Ftinvu=function(u) -log(1-u)/lambdau(t)
     x=Ftinvu(runif(1))
     t=t+x
     if(runif(1)<=lambda(t)/lambdau(t)) X=c(X,t)
   }
   X=X[X<=Tmax]   # drop any point falling beyond Tmax

Obviously those five codes work, the first one being much slower than the other four. But it might be because my strategy to seek the infimum is not great. And the last two worked well since there were not many rejections; I guess it could be worse…

All those algorithms are discussed in a nice survey written by Raghu Pasupathy, which can be downloaded from http://web.ics.purdue.edu/~pasupath/…. In that paper, non-homogeneous spatial Poisson processes are also mentioned…

 

Simulating time series

A quick post to share the code typed in class last week. Consider an autoregressive process of order 1, https://latex.codecogs.com/gif.latex?X_t=\phi_1%20X_{t-1}+\varepsilon_t, where https://latex.codecogs.com/gif.latex?(\varepsilon_t) is a white noise, assumed to be stationary, i.e. https://latex.codecogs.com/gif.latex?\phi_1 belongs to the interval https://latex.codecogs.com/gif.latex?(-1,1). The code to simulate such a process is

n=1000
bruit=rnorm(n)
phi1= .85
X=rep(NA,n)
X[1]=0
for(t in 2:n){X[t]=phi1*X[t-1]+bruit[t]}
plot(acf(X),lwd=5,col='blue')
plot(pacf(X),lwd=5,col='blue')

or with a negative first-order autocorrelation,

phi1= -0.7

We can also look at a second-order autoregressive process, https://latex.codecogs.com/gif.latex?X_t=\phi_1%20X_{t-1}+\phi_2%20X_{t-2}+\varepsilon_t, shown on the figure below (with, in the top left corner, the stationarity triangle for the pair of parameters; a small sketch of that triangle is also given after the code).

phi1=  0.3
phi2=  0.5
X=rep(NA,n)
X[1:2]=0
for(t in 3:n){
X[t]=phi1*X[t-1]+phi2*X[t-2]+bruit[t]}
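
As a side note, here is a small sketch to draw the stationarity triangle mentioned above: the AR(2) process is stationary when https://latex.codecogs.com/gif.latex?\phi_1+\phi_2%3C1, https://latex.codecogs.com/gif.latex?\phi_2-\phi_1%3C1 and https://latex.codecogs.com/gif.latex?|\phi_2|%3C1,

plot(0,0,type="n",xlim=c(-2.2,2.2),ylim=c(-1.2,1.2),xlab="phi1",ylab="phi2")
polygon(c(-2,2,0),c(-1,-1,1),border="blue")   # the stationarity triangle
points(phi1,phi2,pch=19,col="red")            # the pair used above lies inside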

Just for a change, we can also look at a first-order moving average process, https://latex.codecogs.com/gif.latex?X_t=\varepsilon_t+\theta_1%20\varepsilon_{t-1}, where https://latex.codecogs.com/gif.latex?\theta_1 is a real-valued parameter.

theta1=  .8
X=rep(NA,n)
X[1]=0
for(t in 2:n){
X[t]=bruit[t]+theta1*bruit[t-1]}

or a second-order moving average,

theta1= -.6
theta2=  .5
X=rep(NA,n)
X[1:2]=0
for(t in 3:n){
X[t]=bruit[t]+theta1*bruit[t-1]+
theta2*bruit[t-2]}

 

Inference and autoregressive processes

Consider a (stationary) autoregressive process, say of order 2,

https://latex.codecogs.com/gif.latex?Z_t=\phi_1%20Z_{t-1}+\phi_2%20Z_{t-2}+\varepsilon_t

for some white noise https://latex.codecogs.com/gif.latex?(\varepsilon_t) with variance https://latex.codecogs.com/gif.latex?\sigma^2. Here is a code to generate such a process,

> phi1=.5
> phi2=-.4
> sigma=1.5
> set.seed(1)
> n=240
> WN=rnorm(n,sd=sigma)
> Z=rep(NA,n)
> Z[1:2]=rnorm(2,0,1)
> for(t in 3:n){Z[t]=phi1*Z[t-1]+phi2*Z[t-2]+WN[t]}

Here, we have to estimate two sets of parameters: the autoregressive coefficients https://latex.codecogs.com/gif.latex?\phi_1 and https://latex.codecogs.com/gif.latex?\phi_2, and the variance https://latex.codecogs.com/gif.latex?\sigma^2 of the innovation process https://latex.codecogs.com/gif.latex?(\varepsilon_t). There are (at least) three techniques to estimate those parameters.

  • using least squares regression

A natural idea is to see here a regression model, regressing the series on its two lagged values, i.e.

https://latex.codecogs.com/gif.latex?Z_t=\phi_1%20Z_{t-1}+\phi_2%20Z_{t-2}+\varepsilon_t,%20\quad%20t=3,\cdots,n

Here we can run (conditional) ordinary least squares estimation,

> base=data.frame(Y=Z[3:n],X1=Z[2:(n-1)],X2=Z[1:(n-2)])
> regression=lm(Y~0+X1+X2,data=base)
> summary(regression)

Call:
lm(formula = Y ~ 0 + X1 + X2, data = base)

Residuals:
Min      1Q  Median      3Q     Max
-4.3491 -0.8890 -0.0762  0.9601  3.6105

Coefficients:
Estimate Std. Error t value Pr(>|t|)
X1  0.45107    0.05924   7.615 6.34e-13 ***
X2 -0.41454    0.05924  -6.998 2.67e-11 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.449 on 236 degrees of freedom
Multiple R-squared: 0.2561,	Adjusted R-squared: 0.2497
F-statistic: 40.61 on 2 and 236 DF,  p-value: 6.949e-16

> regression$coefficients
X1         X2
0.4510703 -0.4145365
> summary(regression)$sigma
[1] 1.449276
  • using Yule-Walker equations

As we’ve seen in class, we can easily get the following equations for the autocovariance functions,

https://latex.codecogs.com/gif.latex?\gamma(1)=\phi_1\gamma(0)+\phi_2\gamma(1)

https://latex.codecogs.com/gif.latex?\gamma(2)=\phi_1\gamma(1)+\phi_2\gamma(0)

which form a linear system in https://latex.codecogs.com/gif.latex?(\phi_1,\phi_2). So we just have to solve a simple linear system of equations. Note that if we divide by the variance, those equations can be written in terms of the autocorrelation functions,

https://latex.codecogs.com/gif.latex?\rho(1)=\phi_1+\phi_2\rho(1)

https://latex.codecogs.com/gif.latex?\rho(2)=\phi_1\rho(1)+\phi_2

The code is the following

> rho1=cor(Z[1:(n-1)],Z[2:n])
> rho2=cor(Z[1:(n-2)],Z[3:n])
> A=matrix(c(1,rho1,rho1,1),2,2)
> b=matrix(c(rho1,rho2),2,1)
> (PHI=solve(A,b))
[,1]
[1,]  0.4517579
[2,] -0.4155920

Now, we need to extract the estimated innovation process from this set of parameters (note that it would be possible to include the variance term in the Yule-Walker equations, to get a three-dimensional linear system)

> estWN=base$Y-(PHI[1]*base$X1+PHI[2]*base$X2)
> sd(estWN)
[1] 1.445706

This estimator is probably not the best one (we can take into account that we’ve lost two degrees of freedom), but as a starting point, let us consider this one.

  • using (conditional) likelihood estimators

Finally, we can assume some distribution for the innovation process. The standard model is a Gaussian model, i.e.

https://latex.codecogs.com/gif.latex?\varepsilon_t\sim\mathcal{N}(0,\sigma^2),%20\text{%20i.i.d.}

In that case, the conditional log-likelihood (conditional since we set the first two observations here) is

https://latex.codecogs.com/gif.latex?\log\mathcal{L}(\phi_1,\phi_2,\sigma)=\sum_{t=3}^{n}\log%20f(Z_t;\phi_1%20Z_{t-1}+\phi_2%20Z_{t-2},\sigma^2)

where https://latex.codecogs.com/gif.latex?f(\cdot;\mu,\sigma^2) denotes the https://latex.codecogs.com/gif.latex?\mathcal{N}(\mu,\sigma^2) density (the function below actually returns its opposite, to be minimized),

> CondLogLik=function(A,TS){
+ phi1=A[1];  phi2=A[2]
+ sigma=A[3]	; L=0
+ for(t in 3:length(TS)){
+ L=L+dnorm(TS[t],mean=phi1*TS[t-1]+
+ phi2*TS[t-2],sd=sigma,log=TRUE)}
+ return(-L)}

Now, we can run standard optimization procedures,

> LogL=function(A) CondLogLik(A,TS=Z)
> optim(c(0,0,1),LogL)
$par
[1]  0.4509685 -0.4144938  1.4430930

$value
[1] 425.0164

$counts
function gradient
88       NA

$convergence
[1] 0

$message
NULL

Here, our three estimators are rather close. Actually, if we generate 1,000 time series (of size 240), here are the box-plots of our three estimators, for the first-order autoregressive coefficient

for the second one,

and finally for the standard deviation of the innovation process

All those estimators behave nicely, and are rather close. Note that they all might be biased, but they are consistent (see Davidson and MacKinnon for instance, in their book, for more details).
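
To give an idea of the simulation behind those box-plots, here is a minimal sketch of the Monte Carlo loop, restricted to the (conditional) least squares estimator of the first autoregressive coefficient (the other estimators can be added in the same way); it reuses the parameters defined at the beginning of this section,

> ns=1000
> PHI1=rep(NA,ns)
> for(s in 1:ns){
+ WN=rnorm(n,sd=sigma)
+ Z=rep(NA,n)
+ Z[1:2]=rnorm(2,0,1)
+ for(t in 3:n){Z[t]=phi1*Z[t-1]+phi2*Z[t-2]+WN[t]}
+ base=data.frame(Y=Z[3:n],X1=Z[2:(n-1)],X2=Z[1:(n-2)])
+ PHI1[s]=lm(Y~0+X1+X2,data=base)$coefficients[1]
+ }
> boxplot(PHI1)
> abline(h=phi1,col="red")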