That man changed the world?

You’ve already seen that picture, haven’t you? The truth is, you can hardly miss it these days… The first time I saw it, a few months ago, I did not recognize the man (probably because I was more used to visualizing him in black and white). As far as I remember, the teaser used a quote you can find everywhere in his biography, starting with a documentary: that man “changed the world“.

I mean, there are big fans of Steve Jobs around, I get it. But did he change my world? I doubt it… The thing is, I do not have an iPhone (I do not even have a cell phone, and those who have tried to reach me know that I barely answer the phone when it rings, in my office or at home), I do not have an iPad, and if I have a Mac at the office, it is only because of the alternative I was given… So if he did not change my life, whose life did he change?

Perhaps the life of those who believed in him? If we look at the value of the Apple stock, starting on October 23rd, 2001 (when the first iPod was released), and compare it with the Nasdaq, both series starting at a value of 100 in October 2001, we get

> library(tseries)
> x.nasdaq <- get.hist.quote("^IXIC")
> x.apple <- get.hist.quote("AAPL")
> # normalize both (closing price) series to 100 on October 23rd, 2001
> nasdaq.100 <- 100*x.nasdaq$Close[time(x.nasdaq$Close)>as.Date("23/10/2001",
+ "%d/%m/%Y")]/as.numeric(x.nasdaq$Close[time(x.nasdaq$Close)==as.Date(
+ "23/10/2001","%d/%m/%Y")])
> apple.100 <- 100*x.apple$Close[time(x.apple$Close)>as.Date("23/10/2001",
+ "%d/%m/%Y")]/as.numeric(x.apple$Close[time(x.apple$Close)==as.Date(
+ "23/10/2001","%d/%m/%Y")])
> plot(nasdaq.100,ylim=c(60,max(apple.100)),col="blue")
> lines(apple.100,col="red")

in blue is the Nasdaq, and in red, the (closing) price of the Apple stock,

People who bought Apple stock in October 2001 had good intuition. In ten years (perhaps a few months more), the stock was worth 40 times the price investors had paid for it. That’s not bad… so indeed, Steve Jobs probably did change the life of some (long term) investors who believed in him.
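
With the series built above, one can double-check that order of magnitude (a minimal sketch; the exact figure obviously depends on the day the data are downloaded),

> as.numeric(tail(apple.100,1))/100   # multiple of the October 2001 price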

On the disappearance of French as a language of reference (?)

While updating my Somewhere Else chronicle, I have had the feeling, for some time now, that I was citing fewer and fewer articles in French in my tweets. The more I thought about it, the more I realized that I was reading the French press (or, say, the press in French) less and less. When I do read the press in French, I have the feeling of finding, with a few days of delay (a few hours for the most important news), what can already be found on English-language websites. Far be it from me to judge anyone! We have already had a few debates about the pressure put on journalists, and I understand that translating is sometimes a simple and economically viable solution (but in that case, I would rather go and read courrierinternational.com, which at least cites its sources)! That was the feeling I had the other day while leafing through the July issue of Science et Vie Junior with the kids, with an article explaining how tall the tallest tower one could build with Lego bricks would be. But that study had been mentioned last March on bbc.co.uk (and picked up on the blog at the time). And I get that feeling very regularly…

I am not talking about scientific articles here, just studies or points of view, on various topics, sometimes in the news, often related to my interests, such as science or economics (among others). French-language sites no longer seem to be reference sites (in the sense that, quite often, they relay information gleaned elsewhere). And yet my English is rather average. But I would rather read a point of view from @noahpinion on his blog noahpinionblog.blogspot.ca, @delong on delong.typepad.com or @timharford on timharford.com (whose subtleties I often miss, not having all the keys to understand everything) than read op-eds by economists in French newspapers (I should probably put a lot of quotation marks around economists). Even when I miss some passages, for lack of sufficient English-language culture, I learn a lot more there. And I often even find it fascinating, if not funny. And the comments are often of a high level. Science and economics are, a priori, topics that interest English speakers as much as French speakers, so I should be able to learn in both languages.

Starting from that impression, the goal of this post was to look a little more closely at whom I cite in my Somewhere Else chronicles (and on my twitter account, i.e. @freakonometrics). To get a more precise idea, I used a small piece of code, put online earlier, to extract the list of urls mentioned in my tweets. If we look at the site I cite the most, over a little more than 900 tweets (the 900 most recent ones), we get

Cocorico! What if I had been wrong? Because the numbers do not lie: the site I cite the most in my tweets is www.lemonde.fr! Well, if we look more closely, it is the only French-language site with (strictly) more than 6 citations (one can find www.sauvonsluniversite.com with 6 mentions, including 2 yesterday for two brilliant articles, followed by www.sciencepresse.qc.ca and www.ledevoir.com with 4 citations each). Then come English-language sites

When we look more carefully, we see that the last count is biased, because for the Washington Post (for instance) the blogs are located at an address of the form www.washingtonpost.com/blogs/wonkblog/ (and thus appear in my count), but this is not the case for the New York Times, which hosts krugman.blogs.nytimes.com, opinionator.blogs.nytimes.com/ or economix.blogs.nytimes.com/. These three sites, for instance, were each cited 4 times over the 900 tweets analyzed. So for a more realistic count, we would need

(which is then far ahead of www.lemonde.fr). Then come

So much for the list of sites mentioned at least 5 times. If we restrict ourselves to this short list (including the sites with 4 citations), we have a total of 178 citations to English-language hosts, against 36 citations to French-language sites, i.e. a 5:1 ratio.

And if, instead of counting, we looked a little more closely, what would we see? In fact, if we look at the three most recent tweets mentioning www.lemonde.fr, the most recent one (dated August 24th) was shameless cronyism, since I had discovered, quite late, that Laurent Gobillon had been nominated for the ‘Prix 2013 du jeune économiste‘; Laurent was a good friend from my student years, and we wrote our sociology dissertation (on laughter in family relationships) together

If we look at the two previous ones (dated August 19th), we have

In other words, I was pointing to two articles available online on www.nature.com/ and mentioned in Le Monde (actually, a study was mentioned without really being cited; the first reference was not the right one, hence the two messages).

I have the feeling that this quick quantitative analysis confirms the impression I had. The French-language press is no longer a press of reference. Of course, I am only using my own frame of reference and my own interests here. And I would be curious to hear other points of view…

R, Twitter and URLs

Yesterday evening, I wanted to play with Twitter, and see which websites I was using as references in my tweets, to get a Top 4 list.
The first problem I ran into was that installing twitteR on Ubuntu is not that simple! You have to install RCurl properly… But before installing the package in R, it is necessary to run the following line in a terminal
$ sudo apt-get install libcurl4-gnutls-dev
then, launch R
$ R
and then you can run the standard
> install.packages("RCurl")
and install finally the package of interest,
> install.packages("twitteR")
Then, the second problem I had was that twitteR has been updated recently because of Twitter’s new API. Now, you should register on Twitter’s developers webpage, get an id and a password, and then use them in the following function (I changed both of them below, so if you try to run the following code, you will – probably – get an error message),
> library(twitteR)
> cred <- getTwitterOAuth("ikzCtYif9Rwoood45w","rsCCifp99kw5sJfKfOUhhwyVmPl9A")
> registerTwitterOAuth(cred)
[1] TRUE
> T <- userTimeline('freakonometrics',n=5000)
You also have to visit a webpage and enter, in the console, the PIN you get online.
To enable the connection, please direct your web browser to:
http://api.twitter.com/oauth/authorize?oauth_token=cQaDmxGe...
When complete, record the PIN given to you and provide it here:
It is a pain in the ass, trust me. Anyway, I have been able to run it, and I can now get the list of all my (recent) tweets
> T <- userTimeline('freakonometrics',n=5000)

Now, my (third) problem was to extract from my tweets the urls of the references. The second tweet of the list was

But when you look at the text, you see

> T[[2]]
[1] "freakonometrics: [textmining] \"How a Computer Program Helped Reveal J. K. 
Rowling as Author of A Cuckoos Calling\" https://t.co/wdmBGL8cmj by @garethideas"
So what I get is not the url used in my tweet, but a shortened url, from https://t.co/. Fortunately, @3wen (as always) was able to help me with the following functions,
> extraire <- function(entree,motif){
+	# return the first capture group of 'motif' found in 'entree' (NA otherwise)
+	res <- regexec(motif,entree)
+	if(length(res[[1]])==2){
+		debut <- (res[[1]])[2]
+		fin <- debut+(attr(res[[1]],"match.length"))[2]-1
+		return(substr(entree,debut,fin))
+	}else return(NA)}
> unshorten <- function(url){
+	# fetch only the headers of the shortened url, and extract the 'location' field
+	uri <- getURL(url, header=TRUE, nobody=TRUE, followlocation=FALSE, 
+       cainfo = system.file("CurlSSL", "cacert.pem", package = "RCurl"))
+	res <- try(extraire(uri,"\r\nlocation: (.*?)\r\nserver"))
+	return(res)}

Now, if we use those functions, we can get the true url,

> url <- "https://t.co/wdmBGL8cmj"
> unshorten(url)
[1] http://www.scientificamerican.com/article.cfm?id=how-a-computer-program-helped-show..
Now I can play with my list, to extract the urls, and the addresses of the websites,
> T_text <- sapply(T,function(x) x$getText())  # text of each tweet
> exturl <- function(i){
+ text_tw <- T_text[i]
+ locunshort2 <- NULL
+ # look for words starting with "http" in the tweet
+ indtext <- which(substr(unlist(strsplit(text_tw, " ")),1,4)=="http")
+ if(length(indtext)>0){
+ loc <- unlist(strsplit(text_tw, " "))[indtext]
+ locunshort <- unshorten(loc)
+ if(is.na(locunshort)==FALSE){
+ # keep only the hosting website, i.e. the third element of the split url
+ locunshort2 <- unlist(strsplit(locunshort, "/"))[3]}}
+ return(locunshort2)}
Applying this function to my list, and counting with a simple table() function, I can see that my top four reference websites (over more than 900 tweets) are the following:
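
For the record, a minimal sketch of that final counting step (assuming the exturl function and the list T defined above; the exact call is not reproduced here),

> urls <- unlist(lapply(1:length(T),exturl))  # website associated with each tweet
> tail(sort(table(urls)),4)                   # the four most cited websites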
             www.nytimes.com         www.guardian.co.uk 
                          19                         21 
      www.washingtonpost.com             www.lemonde.fr 
                          21                         22
Nice, isn’t it ?

Residuals from a logistic regression

I always claim that graphs are important in econometrics and statistics! Of course, it is usually not that simple. Let me come back to a recent experience. I got an email from Sami yesterday, sending me a graph of residuals and asking what can be done with a graph of residuals obtained from a logistic regression. To get a better understanding, let us consider the following dataset (those are simulated data, but let us assume – as in practice – that we do not know the true model; this is why I decided to embed the code in an R source file),

> source("http://freakonometrics.free.fr/probit.R")
> reg=glm(Y~X1+X2,family=binomial)

If we use R’s diagnostic plots, the first one is the scatterplot of the residuals against the predicted values (the score, actually),

> plot(reg,which=1)

which is simply

> plot(predict(reg),residuals(reg))
> abline(h=0,lty=2,col="grey")

Why do we have those two curves of points? Because we predict a probability for a variable taking values 0 or 1. If the true value is 0, then we always overestimate, and the residuals have to be negative (the blue points), and if the true value is 1, then we underestimate, and the residuals have to be positive (the red points). And of course, there is a monotone relationship… We can see more clearly what is going on when we use colors,

> plot(predict(reg),residuals(reg),col=c("blue","red")[1+Y])
> abline(h=0,lty=2,col="grey")

Points are exactly on a smooth curve, as a function of the predicted value,
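
Indeed, within each group, the residual is a deterministic, monotone function of the predicted probability. For the raw (response) residuals, for instance,

$$\hat{\varepsilon}_i=Y_i-\hat{p}_i=\begin{cases}-\hat{p}_i<0 & \text{if }Y_i=0,\\ 1-\hat{p}_i>0 & \text{if }Y_i=1,\end{cases}$$

and the deviance residuals plotted here have the same sign and are also decreasing in $\hat{p}_i$ within each group, which is why the points line up on two smooth curves.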

Now, there is not much we can say from this graph alone. If we want to understand more, we have to run a local regression, to see what is going on,

> lines(lowess(predict(reg),residuals(reg)),col="black",lwd=2)

This is exactly what we had with the first function. But with this local regression, we do not get confidence intervals. Can’t we claim, on the graph above, that the solid dark line is very close to the dotted line?

> library(splines)
> rl=lm(residuals(reg)~bs(predict(reg),8))
> #rl=loess(residuals(reg)~predict(reg))
> y=predict(rl,se=TRUE)
> segments(predict(reg),y$fit+2*y$se.fit,predict(reg),y$fit-2*y$se.fit,col="green")

Yes, we can. And even if we have a guess that something could be done, what would this graph suggest?

Actually, that graph is probably not the only way to look at the residuals. Why not plot them against the two explanatory variables? For instance, if we plot the residuals against the second one, we get

> plot(X2,residuals(reg),col=c("blue","red")[1+Y])
> lines(lowess(X2,residuals(reg)),col="black",lwd=2)
> lines(lowess(X2[Y==0],residuals(reg)[Y==0]),col="blue")
> lines(lowess(X2[Y==1],residuals(reg)[Y==1]),col="red")
> abline(h=0,lty=2,col="grey")

The graph is similar to the one we had earlier, and again, there is not much to say,

If we now look at the relationship with the first one, it starts to be more interesting,

> plot(X1,residuals(reg),col=c("blue","red")[1+Y])
> lines(lowess(X1,residuals(reg)),col="black",lwd=2)
> lines(lowess(X1[Y==0],residuals(reg)[Y==0]),col="blue")
> lines(lowess(X1[Y==1],residuals(reg)[Y==1]),col="red")
> abline(h=0,lty=2,col="grey")

since we can clearly identify a quadratic effect. This graph suggests that we should include the square of the first variable in the regression. And the effect turns out to be significant,
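
For instance, a quick sketch of a likelihood ratio test comparing the two models (reusing the objects above; reg2 is just a name introduced here) points in the same direction,

> reg2=glm(Y~X1+I(X1^2)+X2,family=binomial)
> anova(reg,reg2,test="Chisq")  # likelihood ratio test for the quadratic term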

Now, if we run a regression including this quadratic effect, what do we get?

> reg=glm(Y~X1+I(X1^2)+X2,family=binomial)
> plot(predict(reg),residuals(reg),col=c("blue","red")[1+Y])
> lines(lowess(predict(reg)[Y==0],residuals(reg)[Y==0]),col="blue")
> lines(lowess(predict(reg)[Y==1],residuals(reg)[Y==1]),col="red")
> lines(lowess(predict(reg),residuals(reg)),col="black",lwd=2)
> abline(h=0,lty=2,col="grey")

Actually, it looks like we are back where we were initially… So what is my point? My point is that

  • graphs (yes, plural) can be used to see what might go wrong, and to get more intuition about possible non-linear transformations;
  • graphs are not everything, and they will never be perfect! Here, in theory, the solid line should be a straight, horizontal line. But we also want a model that is as simple as possible. So, at some stage, we should probably give up, and rely on statistical tests and confidence intervals. Yes, an almost flat line can be interpreted as flat.

Las Vegas and financial institutions

Exactly one month ago, I entered the Bellagio casino to gamble at the roulette table. It was actually a request from my daughter’s godfather (who happens to be a probabilist). In a comment on a previous post, he suggested the following deal,

In the Bellagio you put 10$ for me on the 33 and 10$ for you as well. If 33 shows up, you bring me to a French “3 étoiles” restaurant next time you stop by in France. If 33 doesn’t show up, I bring you to MacDonald…

I have to admit that I like eating in French “3 étoiles” restaurants, so I did gamble. Well, I could not remember the terms of the agreement very well (neither the number to select, nor the amount to put on the table). So I asked my daughter which number I was supposed to pick, and she chose 22. Anyway, the number that came up was neither 33 nor 22, so we lost. Also, roulette tables at the Bellagio had a $15 minimum (from what I remember, I was supposed to play $5 or $10), and I saw tables with a $100 minimum (probably more, but I am not sure, and I could not take many pictures inside)! So I played $15 (I kept the chips as souvenirs), and I have to admit that I was excited for a few seconds. I really enjoyed that thrilling sensation! And I was playing only $15!

Later on, in the hotel room – while we were watching TV – we saw some poker games where people were putting $200,000 on the table (there was almost a million in the pot)! I tried to explain to my kids that this is one reason why there are so many signs on the walls saying that kids are not supposed to enter casinos, and so many ads about gambling being an addiction. It is not reasonable to put so much on a table! $200,000 on the table? That is probably more than everything I own!

I thought about all that yesterday, when I discovered the following table, about leverage ratios of banks, see http://fool.com/investing/…

Company            Leverage Ratio (Assets-to-Equity), 2007
Bear Stearns       34:1
Morgan Stanley     33:1
Merrill Lynch      32:1
Lehman Brothers    31:1
Goldman Sachs      26:1

(with similar values in U.K., according to http://voxeu.org/…)

What does 30 mean? It means that a company with $1 in capital holds $30 in (various) financial positions (see http://newleftreview.org/II/… for a discussion). If you think about it, with a relative decline of 3.5%, the absolute loss is larger than the capital held by the company… Now, if we forget about Lehman, and focus on me, gambling in Las Vegas, we can try to illustrate this 30:1 leverage ratio as follows. The way I see it is that, if I were a bank with $200,000 in equity (equity being, from my understanding, everything I own), I would be able to borrow 30 times this amount, and put this money on some table in Las Vegas. OK, there might be a big difference, since in Vegas, on average, I will lose money, while most models in finance claim that (on average) we should gain money (somehow, since it might depend on your reference level). And no one really owns the casino in real life. But still. A 30:1 leverage ratio means that I would be playing more than $6 million on a table in Las Vegas! How should I understand that 30:1 leverage ratio? Am I really such a small player? Are banks really such big players? Or perhaps they do not hold enough capital to play that big…
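
Just to make the arithmetic explicit, here is a small sketch with the numbers used above,

> equity <- 200000                 # (roughly) everything I could own
> leverage <- 30                   # assets-to-equity ratio
> leverage*equity                  # what a 30:1 bank could put on the table
[1] 6e+06
> .035*leverage*equity > equity    # a 3.5% relative loss exceeds the equity
[1] TRUE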

Exposure as a possible explanatory variable

In insurance pricing, the exposure is usually used as an offset variable to model claims frequency. As explained many times on this blog (e.g. here), and in my notes, if we have two identical drivers, but one with an exposure of 6 months, and the other one of one year, it is natural to assume that, on average, the second driver will have twice as many accidents. This is the motivation for using a standard (homogeneous) Poisson process to model claim frequency. One can also see a legal issue here, since, in the case of a (partial) reimbursement of a premium, it would be done prorata temporis. The risk is proportional to the exposure. Thus, if $Y_i$ denotes the number of claims of insured $i$, with characteristics $\boldsymbol{X}_{i}=(X_{i,1},\cdots,X_{i,k})$ and exposure $E_i$, with a Poisson regression, we would write

$$Y_i\sim\mathcal{P}(E_i\cdot\exp(\boldsymbol{X}_i'\boldsymbol{\beta}))$$

or equivalently

$$Y_i\sim\mathcal{P}(\exp(\log(E_i)+\boldsymbol{X}_i'\boldsymbol{\beta}))$$

From this expression, the logarithm of the exposure is an explanatory variable, but with no coefficient to estimate (the coefficient is taken to be one). What if we used the exposure as a standard explanatory variable? Would we get a unit parameter?
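
In R, the difference is simply between putting the logarithm of the exposure in an offset and letting the model estimate its coefficient; a small sketch, with hypothetical variable names N, X and E,

> glm(N~X+offset(log(E)),family=poisson)  # coefficient of log(E) forced to one
> glm(N~X+log(E),family=poisson)          # coefficient of log(E) estimated

This is what is done below, first on simulated data, then on a real portfolio.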

Of course, in the context of ratemaking, it is probably not a relevant question, since actuaries are required to predict an annual claim frequency (insurance contracts are supposed to provide one year of coverage). But it might be interesting to get a better understanding of why people might be leaving our portfolio (i.e. cancelling their insurance policy before term, or not renewing it at some point).

To be more specific and get a better understanding, consider the following model: a Poisson process for claim arrivals, and insured who are dedicated to their insurance company (they never leave). Let us generate scenarios over twenty years,

> n=983
> D1=as.Date("01/01/1993",'%d/%m/%Y')
> D2=as.Date("31/12/2013",'%d/%m/%Y')
> L=D1+0:(D2-D1)
> set.seed(1)
> arrival=sample(L,size=n,replace=TRUE)  # policy inception dates
> exposure=N=rep(NA,n)
> departure=rep(D2,n)                    # no one leaves before the end
> set.seed(2)
> for(i in 1:n){
+   expo=D2-arrival[i]
+   w=0
+   # claim dates: exponential inter-arrival times, 1000 days on average
+   while(max(w)<expo) w=c(w,max(w)+1+trunc(rexp(1,1/1000)))
+   exposure[i]=departure[i]-arrival[i]
+   N[i]=max(0,length(w)-2)}
> df=data.frame(N=N,E=exposure/365)

Here the expected time between claims is considered to be 1000 days. The (annual) intensity of the Poisson process is here

> 365/1000
[1] 0.365

so if we run a Poisson regression on the logarithm of the exposure (please feel free to add other covariates if you want; the example here is just to see what can happen when the exposure is considered as a standard covariate), we should get a parameter close to

> log(365/1000)
[1] -1.007858

Here is the regression on a constant, with the offset variable,

> reg=glm(N~1+offset(log(E)),data=df,family=poisson)
> summary(reg)

Call:
glm(formula = N ~ 1 + offset(log(E)), family = poisson, data = df)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-3.4145  -0.4673   0.2367   0.8770   3.6828  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -1.04233    0.02532  -41.17   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 1116.9  on 982  degrees of freedom
Residual deviance: 1116.9  on 982  degrees of freedom
AIC: 3282.9

Number of Fisher Scoring iterations: 5

which is consistent with what we just said. If we run the regression with the logarithm of the exposure as a possible explanatory variable, we would expect to have a coefficient close to 1. And indeed…

> reg=glm(N~log(E),data=df,family=poisson)
> summary(reg)

Call:
glm(formula = N ~ log(E), family = poisson, data = df)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-3.0810  -0.8373  -0.1493   0.5676   3.9001  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -1.03350    0.08546  -12.09   <2e-16 ***
log(E)       1.00920    0.03292   30.66   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 2553.6  on 982  degrees of freedom
Residual deviance: 1064.2  on 981  degrees of freedom
AIC: 3762.7

Number of Fisher Scoring iterations: 5

If we keep the offset, and add the variable, we can see that it becomes useless (which is, somehow, a test of a unit parameter),

> reg=glm(N~log(E)+offset(log(E)),data=df,family=poisson)
> summary(reg)

Call:
glm(formula = N ~ log(E) + offset(log(E)), family = poisson, 
    data = df)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-3.0810  -0.8373  -0.1493   0.5676   3.9001  

Coefficients:
             Estimate Std. Error z value Pr(>|z|)    
(Intercept) -1.033503   0.085460 -12.093   <2e-16 ***
log(E)       0.009201   0.032920   0.279     0.78    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 1064.3  on 982  degrees of freedom
Residual deviance: 1064.2  on 981  degrees of freedom
AIC: 3762.7

Number of Fisher Scoring iterations: 5

Here, we do have pure Poisson processes, so exposure is crucial, since the parameter of the Poisson distribution is proportional to the exposure. But we cannot learn anything else from the exposure.

Consider some real data.

> head(baseFREQ)
  nocontrat exposition zone puissance agevehicule
1        27       0.87    C         7           0
2       115       0.72    D         5           0
3       121       0.05    C         6           0
4       142       0.90    C        10          10
5       155       0.12    C         7           0
6       186       0.83    C         5           0
  ageconducteur bonus marque carburant densite region nbre
1            56    50     12         D      93     13    0
2            45    50     12         E      54     13    0
3            37    55     12         D      11     13    0
4            42    50     12         D      93     13    0
5            59    50     12         E      73     13    0
6            75    50     12         E      42     13    0

What do we get if we consider a Poisson regression on the logarithm of the exposure ?

> reg=glm(nbre~log(exposition),data=baseFREQ,family=poisson)
> summary(reg)

Call:
glm(formula = nbre ~ log(exposition), family = poisson, data = baseFREQ)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-0.3988  -0.3388  -0.2786  -0.1981  12.9036  

Coefficients:
                Estimate Std. Error z value Pr(>|z|)    
(Intercept)     -2.83045    0.02822 -100.31   <2e-16 ***
log(exposition)  0.53950    0.02905   18.57   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 12931  on 49999  degrees of freedom
Residual deviance: 12475  on 49998  degrees of freedom
AIC: 16150

Number of Fisher Scoring iterations: 6

What happens if we add the exposure on top of the offset? (let us use a nonparametric transformation, to visualize what is going on)

> library(gam)
> reg=gam(nbre~offset(log(exposition))+s(exposition),data=baseFREQ,family=poisson)
> plot(reg,se=TRUE)

There is a clear and significant effect. The longer the insured stay, the less likely they are to get a claim. Actually, it can be observed without running a regression.

> i1=which(baseFREQ$nbre>0)
> i0=which(baseFREQ$nbre==0)
> h1=hist(baseFREQ$exposition[i1],probability=TRUE)
> h0=hist(baseFREQ$exposition[i0],probability=TRUE)
> plot(h1$mids,h1$density,type='s',lwd=2,col="red")
> lines(h0$mids,h0$density,type='s',col='blue',lwd=2)

In blue, we have the density of the exposure for those who did not have claims, and in red, the density of those who did have one claim (or more)

So here, we cannot assume a unit value for the parameter. What does that mean ? Can we reproduce such a behavior ?

In order to get a better understanding, consider two possible behaviors for the insured. The first one is the following: if the company does not offer substantial discounts after several years with no claims, the insured might leave the company. For instance, if the insured has no claim during 5 years, then after those 5 years, he will leave the company (to get a better price somewhere else, say). The code will be

> for(i in 1:n){
+   expo=D2-arrival[i]
+   w=c(0,0)
+   while((max(w)<expo) & (max(diff(w))<1500)) w=c(w,max(w)+trunc(rexp(1,1/1000)))
+   if(max(diff(w))>1500) departure[i]=arrival[i]+max(w[-length(w)])+1500
+   exposure[i]=departure[i]-arrival[i]
+   N[i]=max(0,length(w)-3)}
> df=data.frame(N=N,E=exposure/365)

Here, I consider 1500 days instead of 5 years, but it is the same idea. So, what do we get here?

> reg=glm(N~log(E),data=df,family=poisson)
> summary(reg)

Call:
glm(formula = N ~ log(E), family = poisson, data = df)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-1.5684  -0.9668  -0.2321   0.4244   3.6265  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -2.50844    0.10286  -24.39   <2e-16 ***
log(E)       1.65738    0.04494   36.88   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 2567.31  on 982  degrees of freedom
Residual deviance:  885.71  on 981  degrees of freedom
AIC: 2897.9

Here, the coefficient is (significantly) larger than 1. More precisely,

> reg=glm(N~log(E)+offset(log(E)),data=df,family=poisson)
> summary(reg)

Call:
glm(formula = N ~ log(E) + offset(log(E)), family = poisson, 
    data = df)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-1.5684  -0.9668  -0.2321   0.4244   3.6265  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -2.50844    0.10286  -24.39   <2e-16 ***
log(E)       0.65738    0.04494   14.63   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 1114.24  on 982  degrees of freedom
Residual deviance:  885.71  on 981  degrees of freedom
AIC: 2897.9

There is clearly a bias here: people who stay long are more likely to have an accident, which is consistent with our story, since low-risk clients have left.

The second behavior is the following: sometimes, the insured are not satisfied with the way claims are handled, and they might leave after a claim. Consider the case where, after each claim, it is likely (e.g. with probability 50%) that the insured leaves the company. Instead of assuming that the insured did not like the claims management, one can also consider the case where the car is so damaged that he cannot drive it anymore, so that it would be useless to keep paying an insurance premium. The code here will be

> for(i in 1:n){
+   expo=D2-arrival[i]
+   w=0
+   stay=TRUE
+   while((max(w)<expo) & (stay==TRUE)) { w=c(w,max(w)+trunc(rexp(1,1/1000)))
+   stay=sample(c(TRUE,FALSE),prob=c(.5,.5),size=1)}
+   N[i]=length(w)-2
+   if(stay==FALSE) {departure[i]=arrival[i]+max(w)
+   N[i]=length(w)-1}
+   exposure[i]=departure[i]-arrival[i]}
> df=data.frame(N=N,E=exposure/365)

Here, after each claim, the insured tosses a coin to decide whether to cancel the contract or not.

> reg=glm(N~log(E),data=df,family=poisson)
> summary(reg)

Call:
glm(formula = N ~ log(E), family = poisson, data = df)

Deviance Residuals: 
     Min        1Q    Median        3Q       Max  
-2.28402  -0.47763  -0.08215   0.33819   2.37628  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept)  0.09920    0.04251   2.334   0.0196 *  
log(E)       0.30640    0.02511  12.203   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 666.92  on 982  degrees of freedom
Residual deviance: 498.29  on 981  degrees of freedom
AIC: 2666.3

This time, the parameter is (again significantly) smaller than one.

> reg=glm(N~log(E)+offset(log(E)),data=df,family=poisson)
> summary(reg)

Call:
glm(formula = N ~ log(E) + offset(log(E)), family = poisson, 
    data = df)

Deviance Residuals: 
     Min        1Q    Median        3Q       Max  
-2.28402  -0.47763  -0.08215   0.33819   2.37628  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept)  0.09920    0.04251   2.334   0.0196 *  
log(E)      -0.69360    0.02511 -27.625   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 1116.87  on 982  degrees of freedom
Residual deviance:  498.29  on 981  degrees of freedom
AIC: 2666.3

The story is now rather different, since those who stay long should not have encountered many opportunities to leave. So clearly, they did not have many claims. If someone has a long exposure, the negative sign in the output above means that he should not have many claims, on average.

As we can see, those models produce rather different outputs. Note that many more interpretations are possible, for instance depending on the way the data were extracted (a small sketch of the second scheme is given right after the list):

  • all policies observed, over those twenty years,
  • all policies in force at some specific date, until now
  • all policies in force at some specific date, until one year after
  • all policies in force now
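
For instance, a sketch of the second extraction scheme, using the arrival and departure dates simulated above (the extraction date below is purely hypothetical),

> date0 <- as.Date("01/01/2005",'%d/%m/%Y')              # hypothetical extraction date
> inforce <- which((arrival<=date0)&(departure>=date0))  # policies in force at that date
> length(inforce)

Exposure and claim counts would then have to be recomputed from that date onwards, which is precisely where the different interpretations come from.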

So far, we have been using the first method, but the other ones will yield different interpretations, e.g. because of survivor bias. But that’s another story… And one can read Boucher and Denuit (2008) to go further.