Classification with Categorical Variables (the fuzzy side)

The Gaussian and the (log) Poisson regressions share a very interesting property,

$\frac{1}{n}\sum_{i=1}^n \widehat{Y}_i=\frac{1}{n}\sum_{i=1}^n Y_i$

i.e. the average predicted value is the empirical mean of our sample.

> mean(predict(lm(dist~speed,data=cars)))
[1] 42.98
> mean(cars$dist)
[1] 42.98

One can prove that it is also the prediction for the average individual in our sample,

> predict(lm(dist~speed,data=cars),
+ newdata=data.frame(speed=mean(cars$speed)))
42.98
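
The same property holds for the (log) Poisson regression, since in both cases the score equation associated with the intercept forces the residuals to sum to zero. A quick check on simulated count data (a minimal sketch, all names and values here being artificial),

# Sketch: the same check for a log-link Poisson regression, on simulated counts
set.seed(123)
x = rnorm(1000)
y = rpois(1000, exp(.5 + .3*x))
reg = glm(y ~ x, family = poisson(link = "log"))
mean(predict(reg, type = "response"))   # equals mean(y), up to numerical error
mean(y)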

The geometric interpretation is that the regression line passes through the centroid,

> plot(cars)
> abline(lm(dist~speed,data=cars),col="red")
> abline(h=mean(cars$dist),col="blue")
> abline(v=mean(cars$speed),col="blue")
> points(mean(cars$speed),mean(cars$dist))

But the second property – that the prediction for the average individual coincides with the average prediction – no longer holds in other cases. Consider for instance a logistic regression. And to make things even more complicated, consider the case where we have only categorical explanatory variables. In that context, it is not even clear what the “average individual” should be… unless we consider some fuzzy interpretation of the regression.
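
To give a glimpse of that fuzzy interpretation (a minimal sketch, on simulated data, so the dataset and names here are purely illustrative): instead of an “average individual”, replace the dummy variables by the empirical frequencies of each category – a fuzzy membership – and plug them into the linear predictor,

# Sketch: a "fuzzy average individual" in a logistic regression with one
# categorical covariate (simulated data, illustrative only)
set.seed(1)
x = factor(sample(c("A","B","C"), 1000, replace = TRUE))
y = rbinom(1000, 1, c(.2, .5, .7)[x])
reg = glm(y ~ x, family = binomial)
w = table(x)/length(x)                 # class frequencies = fuzzy weights
eta = sum(coef(reg) * c(1, w["B"], w["C"]))
exp(eta)/(1 + exp(eta))                # prediction for the fuzzy average individual
mean(predict(reg, type = "response"))  # average prediction: not the same, in general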

Variance of the Average of a Sequence

In the case where $\{Y_1,\cdots,Y_n\}$ are i.i.d. random variables, then

$\text{Var}\left(\frac{1}{n}\sum_{t=1}^n Y_t\right)=\frac{\text{Var}(Y_t)}{n}$

Now, what if $\{Y_1,\cdots,Y_n\}$ are identically distributed, but no longer independent? What if we have an autoregressive process? Assume that

$Y_t=\phi Y_{t-1}+\varepsilon_t$

Then

$\text{Var}\left(\frac{1}{n}\sum_{t=1}^n Y_t\right) = \frac{1}{n^2}\left(\sum_{t=1}^n \text{Var}(Y_t)+\sum_{s\neq t}\text{Cov}(Y_s,Y_t) \right)$

can be written, using stationarity ($\text{Cov}(Y_s,Y_t)=\gamma(\vert t-s\vert)$) and the fact that there are $2(n-h)$ ordered pairs $(s,t)$ at lag $\vert t-s\vert=h$,

$\text{Var}\left(\frac{1}{n}\sum_{t=1}^n Y_t\right) = \frac{1}{n^2}\left(n\gamma(0) + \sum_{h=1}^{n-1} 2 (n-h) \gamma(h)\right)$

Here, we will express the variance as a function of $\gamma(0)$ and $\phi$, but it is possible to use also $\sigma^2$, since, in the context of an $AR(1)$,

$\gamma(0)=\frac{\sigma^2}{1-\phi^2}$
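
As a quick numerical sanity check of that relationship (a sketch, simulating a long $AR(1)$ trajectory with $\phi=0.6$ and $\sigma^2=1$),

# Compare the empirical variance of an AR(1) with sigma^2/(1-phi^2)
set.seed(1)
phi = .6
y = arima.sim(n = 1e6, model = list(ar = phi), sd = 1)
var(y)          # empirical gamma(0)
1/(1 - phi^2)   # theoretical value, 1.5625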

Now, since $\gamma(h)=\phi^h \gamma(0)$ we get

$\text{Var}\left(\frac{1}{n}\sum_{t=1}^n Y_t\right) = \frac{\gamma(0)}{n^2}\left(n + \sum_{h=1}^{n-1} 2 (n-h) \phi^h\right)$

which can be simplified, since

$\sum_{h=1}^{n-1} 2(n-h) \phi^h=2\phi^{n-1} \sum_{h=1}^{n-1} (n-h) \left(\frac 1\phi\right)^{n-h-1}=2\phi^{n-1} \frac {\partial}{\partial x}\left. \sum_{i=1}^{n} x^{n-i} \right\vert_{x=1/\phi}$

i.e.

$\sum_{h=1}^{n-1} 2(n-h) \phi^h=2\phi^{n-1} \frac{(n-1)\phi^{-(n+1)} - n\phi^{-n} + \phi^{-1}}{\phi^{-1}(\phi^{-1}-1)^2}=2 \frac{(n-1)\phi^{-1} - n + \phi^{n-1}}{(\phi^{-1}-1)^2}$

So, the variance of the mean can be written as

$V=\mathrm{Var}\left(\frac{1}{n}\sum_{t=1}^n Y_t\right)=\frac{\gamma(0)}{n^2}\left[n + 2 \frac{(n-1)\phi^{-1} - n + \phi^{n-1}}{(\phi^{-1}-1)^2}\right]$

Observe that if $n$ is large enough,

$V=\frac{\gamma(0)}{n^2}\left[n + 2 \frac{(n-1)\phi^{-1} - n + \phi^{n-1}}{(\phi^{-1}-1)^2}\right]\sim \frac{\gamma(0)}{n}\frac{1+\phi}{1-\phi}$

This asymptotic relationship is actually well known. A simple way to get it is the following. One can write

$V=\mathrm{Var}\left(\frac{1}{n}\sum_{t=1}^n Y_t\right)=\frac{\gamma(0)}{n}\left[ \sum_{h=-n+1}^{n-1}\left(1-\frac{\vert h\vert}{n}\right)\rho(h) \right]$

or equivalently

$V=\mathrm{Var}\left(\frac{1}{n}\sum_{t=1}^n Y_t\right)=\frac{\gamma(0)}{n}\left[ 1+2\sum_{h=1}^{n-1}\left(1-\frac{h}{n}\right)\rho(h) \right]$

But the first expression is actually more convenient for deriving an asymptotic approximation,

$V=\mathrm{Var}\left(\frac{1}{n}\sum_{t=1}^n Y_t\right)\sim\frac{\gamma(0)}{n}\left[ \sum_{h=-\infty}^{\infty}\rho(h) \right]$

In the context of an $AR(1)$ process, this can be written

$\sum_{h=-\infty}^{\infty}\rho(h) = \frac{1+\phi}{1-\phi}$
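
Indeed, since $\rho(h)=\phi^{\vert h\vert}$ for an $AR(1)$ process, this is simply a geometric series,

$\sum_{h=-\infty}^{\infty}\rho(h)=1+2\sum_{h=1}^{\infty}\phi^{h}=1+\frac{2\phi}{1-\phi}=\frac{1+\phi}{1-\phi}$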

Thus, we get the following well-known relationship

$V=\frac{\gamma(0)}{n}\cdot \frac{1+\phi}{1-\phi}$

In the case where $\{Y_1,\cdots,Y_n\}$ is an i.i.d. sequence, i.e. $\phi=0$, we get the relationship mentioned initially. And in the case of a random walk, i.e. $\phi=1$… unfortunately, we cannot use that relationship, since the process is no longer stationary. But observe that

$V=\frac{1}{n^2}\text{Var}\left(\sum_{t=1}^n\sum_{h=1}^t\varepsilon_h\right)$

i.e.

$V=\frac{1}{n^2}\text{Var}\left(n\varepsilon_1+(n-1)\varepsilon_2+\cdots+\varepsilon_n\right)$

which can be written

$V=\frac{\text{Var}(\varepsilon)}{n^2}\sum_{h=1}^n h^2=\frac{(2n+1)(n+1)}{6n}\text{Var}(\varepsilon)$
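
A quick Monte Carlo check of that random-walk formula (a sketch, with $n=50$ and $\text{Var}(\varepsilon)=1$),

# Simulate many random walks, and compare the variance of the average
# with (2n+1)(n+1)/(6n)
set.seed(1)
n = 50
m = replicate(1e4, mean(cumsum(rnorm(n))))
var(m)                    # simulated variance of the average
(2*n + 1)*(n + 1)/(6*n)   # theoretical value, 17.17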

If we compare the true value and the approximation, we get the following graph,

> V=function(phi,s2=1,n=100){
+ g0=s2/(1-phi^2)
+ if(phi<1){
+ if(phi==0){v1=g0/n}
+ if(phi>0){v1=g0/n^2*(n+2*((n-1)*
+ phi^(-1)-n+phi^(n-1))/(phi^(-1)-1)^2)}
+ v2=g0/n*(1+phi)/(1-phi)
+ }
+ if(phi==1){
+ v1=(2*n+1)*(n+1)*s2/(6*n)
+ v2=NA
+ }
+ return(c(v1,v2))}
>
> Vphi=function(phi) V(phi,1,100)
> x=seq(.01,1,by=.02)
> M=matrix(unlist(lapply(x,Vphi)),nrow=2)
> plot(x,M[1,],type="l",col="red",log="y",
+ ylab="Variance of the average (log scale)",
+ xlab="Autoregressive coefficient")
> lines(x,M[2,],col="blue")

Visualizing overdispersion (with trees)

This week, we started to discuss overdispersion when modeling claims frequency. In my previous post, I discussed computations of empirical variances with different exposures. But I used only one factor to define classes. Of course, it is possible to use many more factors, for instance using Cartesian products of factors,

> X=as.factor(paste(sinistres$carburant,sinistres$zone,
+ cut(sinistres$ageconducteur,breaks=c(17,24,40,65,101))))
> E=sinistres$exposition
> Y=sinistres$nbre
> vm=vv=ve=rep(NA,length(levels(X)))
> for(i in 1:length(levels(X))){
+ Ei=E[X==levels(X)[i]]
+ Yi=Y[X==levels(X)[i]]
+ ve[i]=sum(Ei)    # total exposure of the class
+ vm[i]=meani=weighted.mean(Yi/Ei,Ei)    # weighted mean
+ vv[i]=variancei=sum((Yi-meani*Ei)^2)/sum(Ei)    # weighted variance
+ cat("Class ",levels(X)[i],"average =",meani," variance =",variancei,"\n")
+ }
Class D A (17,24] average = 0.06274415 variance = 0.06174966
Class D A (24,40] average = 0.07271905 variance = 0.07675049
Class D A (40,65] average = 0.05432262 variance = 0.06556844
Class D A (65,101] average = 0.03026999 variance = 0.02960885
Class D B (17,24] average = 0.2383109 variance = 0.2442396
Class D B (24,40] average = 0.06662015 variance = 0.07121064
Class D B (40,65] average = 0.05551854 variance = 0.05543831
Class D B (65,101] average = 0.0556386 variance = 0.0540786
Class D C (17,24] average = 0.1524552 variance = 0.1592623
Class D C (24,40] average = 0.0795852 variance = 0.09091435
Class D C (40,65] average = 0.07554481 variance = 0.08263404
Class D C (65,101] average = 0.06936605 variance = 0.06684982
Class D D (17,24] average = 0.1584052 variance = 0.1552583
Class D D (24,40] average = 0.1079038 variance = 0.121747
Class D D (40,65] average = 0.06989518 variance = 0.07780811
Class D D (65,101] average = 0.0470501 variance = 0.04575461
Class D E (17,24] average = 0.2007164 variance = 0.2647663
Class D E (24,40] average = 0.1121569 variance = 0.1172205
Class D E (40,65] average = 0.106563 variance = 0.1068348
Class D E (65,101] average = 0.1572701 variance = 0.2126338
Class D F (17,24] average = 0.2314815 variance = 0.1616788
Class D F (24,40] average = 0.1690485 variance = 0.1443094
Class D F (40,65] average = 0.08496827 variance = 0.07914423
Class D F (65,101] average = 0.1547769 variance = 0.1442915
Class E A (17,24] average = 0.1275345 variance = 0.1171678
Class E A (24,40] average = 0.04523504 variance = 0.04741449
Class E A (40,65] average = 0.05402834 variance = 0.05427582
Class E A (65,101] average = 0.04176129 variance = 0.04539265
Class E B (17,24] average = 0.1114712 variance = 0.1059153
Class E B (24,40] average = 0.04211314 variance = 0.04068724
Class E B (40,65] average = 0.04987117 variance = 0.05096601
Class E B (65,101] average = 0.03123003 variance = 0.03041192
Class E C (17,24] average = 0.1256302 variance = 0.1310862
Class E C (24,40] average = 0.05118006 variance = 0.05122782
Class E C (40,65] average = 0.05394576 variance = 0.05594004
Class E C (65,101] average = 0.04570239 variance = 0.04422991
Class E D (17,24] average = 0.1777142 variance = 0.1917696
Class E D (24,40] average = 0.06293331 variance = 0.06738658
Class E D (40,65] average = 0.08532688 variance = 0.2378571
Class E D (65,101] average = 0.05442916 variance = 0.05724951
Class E E (17,24] average = 0.1826558 variance = 0.2085505
Class E E (24,40] average = 0.07804062 variance = 0.09637156
Class E E (40,65] average = 0.08191469 variance = 0.08791804
Class E E (65,101] average = 0.1017367 variance = 0.1141004
Class E F (17,24] average = 0 variance = 0
Class E F (24,40] average = 0.07731177 variance = 0.07415932
Class E F (40,65] average = 0.1081142 variance = 0.1074324
Class E F (65,101] average = 0.09071118 variance = 0.1170159

Again, one can plot the variance against the average,

> plot(vm,vv,cex=sqrt(ve),col="grey",pch=19,
+ xlab="Empirical average",ylab="Empirical variance")
> points(vm,vv,cex=sqrt(ve))
> abline(a=0,b=1,lty=2)

An alternative is to use a tree.
The tree can be built on another response variable (whether the insured had, or had not, a claim during the period considered), but it should be rather close to the one we would like to model (the number of claims over the period considered). Here, I used the whole database (with more than 600,000 lines),

> library(tree)
> T=tree((nombre>0)~as.factor(zone)+as.factor(puissance)+
+ as.factor(marque)+as.factor(carburant)+as.factor(region)+
+ agevehicule+ageconducteur,data=baseFREQ,
+ split = "gini",minsize =25000)

The tree is the following,

> plot(T)
> text(T)

Now, each leaf defines a class, which is supposed to be homogeneous,

> X=as.factor(T$where)
> E=sinistres$exposition
> Y=sinistres$nbre
> vm=vv=ve=rep(NA,length(levels(X)))
>   for(i in 1:length(levels(X))){
+ Ei=E[X==levels(X)[i]]
+ Yi=Y[X==levels(X)[i]]
+ ve[i]=sum(Ei)    # total exposure of the class
+ vm[i]=meani=weighted.mean(Yi/Ei,Ei)    # weighted mean
+ vv[i]=variancei=sum((Yi-meani*Ei)^2)/sum(Ei)    # weighted variance
+  cat("Class ",levels(X)[i],"average =",meani," variance =",variancei,"\n")
+  }
Class  6 average =   0.04010406  variance = 0.04424163
Class  8 average =   0.05191127  variance = 0.05948133
Class  9 average =   0.07442635  variance = 0.08694552
Class  10 average =  0.4143646   variance = 0.4494002
Class  11 average =  0.1917445   variance = 0.1744355
Class  15 average =  0.04754595  variance = 0.05389675
Class  20 average =  0.08129577  variance = 0.0906322
Class  22 average =  0.05813419  variance = 0.07089811
Class  23 average =  0.06123807  variance = 0.07010473
Class  24 average =  0.06707301  variance = 0.07270995
Class  25 average =  0.3164557   variance = 0.2026906
Class  26 average =  0.08705041  variance = 0.108456
Class  27 average =  0.06705214  variance = 0.07174673
Class  30 average =  0.05292652  variance = 0.06127301
Class  31 average =  0.07195285  variance = 0.08620593
Class  32 average =  0.08133722  variance = 0.08960552
Class  34 average =  0.1831559   variance = 0.2010849
Class  39 average =  0.06173885  variance = 0.06573939
Class  41 average =  0.07089419  variance = 0.07102932
Class  44 average =  0.09426152  variance = 0.1032255
Class  47 average =  0.03641669  variance = 0.03869702
Class  49 average =  0.0506601   variance = 0.05089276
Class  50 average =  0.06373107  variance = 0.06536792
Class  51 average =  0.06762947  variance = 0.06926191
Class  56 average =  0.06771764  variance = 0.07122379
Class  57 average =  0.04949142  variance = 0.05086885
Class  58 average =  0.2459016   variance = 0.2451116
Class  59 average =  0.05996851  variance = 0.0615773
Class  61 average =  0.07458053  variance = 0.0818608
Class  63 average =  0.06203737  variance = 0.06249892
Class  64 average =  0.07321618  variance = 0.07603106
Class  66 average =  0.07332127  variance = 0.07262425
Class  68 average =  0.07478147  variance = 0.07884597
Class  70 average =  0.06566728  variance = 0.06749411
Class  71 average =  0.09159605  variance = 0.09434413
Class  75 average =  0.03228927  variance = 0.03403198
Class  76 average =  0.04630848  variance = 0.04861813
Class  78 average =  0.05342351  variance = 0.05626653
Class  79 average =  0.05778622  variance = 0.05987139
Class  80 average =  0.0374993   variance = 0.0385351
Class  83 average =  0.06721729  variance = 0.07295168
Class  86 average =  0.09888492  variance = 0.1131409
Class  87 average =  0.1019186   variance = 0.2051122
Class  88 average =  0.05281703  variance = 0.0635244
Class  91 average =  0.08332136  variance = 0.09067632
Class  96 average =  0.07682093  variance = 0.08144446
Class  97 average =  0.0792268   variance = 0.08092019
Class  99 average =  0.1019089   variance = 0.1072126
Class  100 average = 0.1018262   variance = 0.1081117
Class  101 average = 0.1106647   variance = 0.1151819
Class  103 average = 0.08147644  variance = 0.08411685
Class  104 average = 0.06456508  variance = 0.06801061
Class  107 average = 0.1197225   variance = 0.1250056
Class  108 average = 0.0924619   variance = 0.09845582
Class  109 average = 0.1198932   variance = 0.1209162

Here, when plotting the empirical variance (per leaf) against the empirical average of claims, we get

Here, we can identify classes where some heterogeneity remains, the empirical variance clearly exceeding the empirical average.
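
A quick way to flag those classes (a sketch; the 1.2 cut-off is arbitrary, and vm, vv are the quantities computed above) is to look at the variance-to-mean ratio of each leaf,

# Flag the leaves where the empirical variance clearly exceeds the mean
ratio = vv/vm
levels(X)[which(ratio > 1.2)]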

It is “simply” the average value

For some obscure reason, simple things are usually supposed to be simple. Recently, on the internet, I saw a lot of posts on the “average time in which you hold a stock”, and two rather different values are mentioned,

• “Take any stock in the United States. The average time in which you hold a stock is – it’s gone up from 20 seconds to 22 seconds in the last year” (Michael Hudson, on http://www.telegraph.co.uk/) or “The founder of Tradebot, in Kansas City, Mo., told students in 2008 that his firm typically held stocks for 11 seconds” (on http://www.nytimes.com/), among many others
• “Based on the NYSE index data, the mean duration of holding period by US investors was around 7 years in 1940. This stayed the same for the next 35 years. The average holding period had fallen to under 2 years by the time of the 1987 crash. By the turn of the century it had fallen to below one year. It was around 7 months by 2007” (on http://topforeignstocks.com/, see also the graph below) or “Two-thirds [of the managers of more than 800 institutional funds interviewed in a study] had higher turnover than they predicted […] Even though most are judged by performance over three-year horizons, their average holding period was about 17 months, and 19% of the managers held the typical stock for one year or less” (mentioned on http://online.wsj.com/), again among many others

How come that, on the one hand, some people talk about less than 20 seconds for the “average time in which you hold a stock”, while on the other hand, others mention around a year? How can we have such a difference? We are talking about an average time here, not a rare-event probability…

To understand what might be wrong, consider the following case, with a market and two stocks: one is kept over a year (52 weeks) while the other is traded – and exchanged – every week (52 times per year). What is the “average time in which you hold a stock”? Is it

• 26.5 weeks? The holding time for the first stock is 52 weeks, while it is 1 week for the second one, i.e. 53 weeks over 2 stocks
• 1.96 weeks? Over a year, the first stock was traded once, while the second one was exchanged 52 times, i.e. 104 stock-weeks over 53 transactions (total time over the total number of transactions) – see the two-line computation below
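
In R, the two candidate answers for this toy market are simply,

# two stocks: one held 52 weeks, one exchanged every week
(52 + 1)/2            # average holding time per stock: 26.5 weeks
(52 + 52)/(1 + 52)    # total time over total transactions: 1.96 weeks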

Obviously, there is a selection bias in that study (see here for an illustration of that concept, in French). In order to get a better understanding, consider the following simple model, with a large number of simulated stocks. At each transaction, a stock can be held by three types of investors,

• with probability 70%, hold – on average – for 20 sec.
• with probability 20%, hold – on average – for 15 days
• with probability 10%, hold – on average – for 10 years

As claimed by Warren Buffett, “my favorite time frame for holding a stock is forever”, so it might not be absurd to consider investors who keep a stock for a long period of time. Assume further that the time frame for holding a stock is exponentially distributed (the rate depending on the kind of investor). Assume that those stocks are observed over a period of 20 years (which might sound reasonable). Several techniques can be used to estimate the “average time in which you hold a stock”,

• The first one is to calculate the mean, per stock, of the holding times, and to consider the average over all the stocks. Maybe it would be a good idea to exclude the last observation (since it is censored),
• The second one is to divide the (total) period of time by the (total) number of investors that hold the stock during that time frame (or number of transactions)
• A third idea might be to use the first method, but instead of removing the last observation, to use an estimator of the mean based on the Kaplan-Meier estimate (which accounts for censoring)
• A fourth idea is to look at what happened at a specific date (say after 10 years), i.e. which investor had the stock, and how long he kept it.

The code to generate that process is the following

> set.seed(1)
> ns=10000   # number of transactions simulated (value assumed, not given in the original)
> invest=sample(size=ns,c("A","B","C"),
+ prob=c(.7,.2,.1),replace=TRUE)
> lambda=(invest=="A")*20/(365*24*60*60)+
+        (invest=="B")*15/365+
+        (invest=="C")*10
> E=rexp(ns,rate=1/lambda)
> T=cumsum(E)
> T=T[T<20]
> plot(c(T,50),0:length(T),type="s",xlim=c(0,20),col="blue")

with the following trajectory for the number of investors that held that specific stock between time 0 and time 20.

Then, the different techniques are the following (each fragment below sits inside a loop over the simulated stocks, indexed by s, with the results stored in M1, N2, D2, M3 and M4),

# method 1
> E1=diff(T)    # observed holding times, excluding the censored last spell
> m1=mean(E1)
> M1[s]=m1

for the first one (means of time length, per stock),

# method 2
> if(length(T)>1){
+ n2=length(T)-1
+ d2=T[length(T)]-T[1]
+ N2[s]=n2; D2[s]=d2
+ }

for the second one (time length and number of transactions),

# method 3
> library(survival)
> T3=c(T,20)
> C3=c(rep(0,length(T)-1),1)    # the last spell is censored at time 20
> km=survfit(Surv(diff(T3), 1-C3)~1)
> m3=summary(km,rmean='individual')$table[5]
> M3[s]=m3

for the third one (based on the expected mean, estimated from the Kaplan-Meier estimate), and

# method 4
> T0=c(0,T,20)
> m4=min(T0[T0>10])-max(T0[T0<10])
> M4[s]=m4

for the fourth one (based on what happened at time 10). Using Monte Carlo simulations, we get very different quantities, that can all be interpreted as the “average time in which you hold a stock”,

> sum(D2,na.rm=TRUE)/sum(N2,na.rm=TRUE)
[1] 0.3692335
> mean(M1,na.rm=TRUE)
[1] 0.5469591
> mean(M3,na.rm=TRUE)
[1] 1.702908
> mean(M4,na.rm=TRUE)
[1] 12.40229

If we change the probabilities (and assume that high frequency investors are much more numerous than long-term ones), e.g.

> invest=sample(size=ns,c("A","B","C"),
+ prob=c(.9,.09,.01),replace=TRUE)

then the first two estimates are rather different. But not the last two,

> sum(D2,na.rm=TRUE)/sum(N2,na.rm=TRUE)
[1] 0.04072227
> mean(M1,na.rm=TRUE)
[1] 0.06393767
> mean(M3,na.rm=TRUE)
[1] 0.2504322
> mean(M4,na.rm=TRUE)
[1] 12.05508

So I have to confess that the “average time in which you hold a stock” can be almost anything from 10 seconds to 10 years: it clearly depends on the way the average is calculated. The second point is that if the proportion of high frequency trading is extremely high, it should not affect the last estimate (which is, from my point of view, the most interesting one, and which might also be improved by taking censoring into account). So I guess people should be careful when discussing such quantities… And if anyone is willing to share data on that topic, I’d be glad to look at them…

Warming in Paris: minimas versus maximas?

Recently, I received comments (here and on Twitter) about my previous graphs on the temperature in Paris. I mentioned in a comment (there) that studying extremas (and more generally quantiles, or interquantile evolution) is not the same as studying the variance. Since I am not a big fan of the variance, let us talk a little bit about extrema behaviour.

In order to study the average temperature, it is natural to look at the linear regression (assuming that the trend is linear, which I argued is a reasonable assumption in the paper), i.e. least squares regression, which gives the expected value. But if we care about extremes, or almost extremes, it is natural to look at quantile regression.

For instance, below, the green line is the least squares regression, the red one the 97.5% quantile regression, and the blue one the 2.5% quantile regression. It looks like the slope is the same, i.e. extremas are increasing as fast as the average…

tmaxparis=read.table("temperature/TG_SOUID100845.txt",
skip=20,sep=",",header=TRUE)
head(tmaxparis)
Dparis=as.Date(as.character(tmaxparis$DATE),"%Y%m%d")
Tparis=as.numeric(tmaxparis$TG)/10
Tparis[Tparis==-999.9]=NA
I=sample(1:length(Tparis),size=5000,replace=FALSE)
plot(Dparis[I],Tparis[I],col="grey")
abline(lm(Tparis~Dparis),col="green")
library(quantreg)
abline(rq(Tparis~Dparis,tau=.025),col="blue")
abline(rq(Tparis~Dparis,tau=.975),col="red")

(here I plot only a random subset of the points, to avoid a too heavy figure, since I have too many observations, but I keep all the observations in the regressions!)

Now, if we look at the slope for different quantile levels (Fig 6 in the paper, here, was on minimum daily temperature; here I look at average daily temperature), the interpretation is different,

s=0
COEF=SD=rep(NA,199)
for(i in seq(.005,.995,by=.005)){
s=s+1
REG=rq(Tparis~Dparis,tau=i)
COEF[s]=REG$coefficients[2]
SD[s]=summary(REG)$coefficients[2,2]
}

with the following graph below,

s=0
plot(seq(.005,.995,by=.005),COEF,type="l",ylim=c(0.00002,.00008))
for(i in seq(.005,.995,by=.005)){
s=s+1
segments(i,COEF[s]-2*SD[s],i,COEF[s]+2*SD[s],col="grey")
}
REG=lm(Tparis~Dparis)
COEFlm=REG$coefficients[2]
SDlm=summary(REG)$coefficients[2,2]
abline(h=COEFlm,col="red")
abline(h=COEFlm-2*SDlm,lty=2,lwd=.6,col="red")
abline(h=COEFlm+2*SDlm,lty=2,lwd=.6,col="red")

Here, for minimas (quantiles associated with low probabilities, on the left), the trend has a steeper slope than the average, so in some sense, the warming of minimas is stronger than that of the average temperature; on the other hand, for maximas (high probabilities, on the right), the slope is smaller – but still positive – so summers are getting warmer, but not as much as winters.
Note also that the story is different for minimal temperatures (studied in the paper) compared with this analysis, made here on average daily temperature (see the comments)… This is not a major breakthrough in climate research, but this is all I got…