Tag Archives: stepwise

Estimates on training vs. validation samples

Before moving to cross-validation, it was natural to say “I will burn 50% (say) of my data to train a model, and then use the remaining 50% to fit it”. For instance, we can use the training data for variable selection (e.g. with a stepwise procedure in a logistic regression), and then, once variables have been selected, fit the model on the remaining observations. A natural question is “does it really matter?”.

In order to visualize this problem, consider my (simple) dataset

MYOCARDE=read.table(
  "http://freakonometrics.free.fr/saporta.csv",
  head=TRUE,sep=";")

Let us generate 100 training samples (where we keep about 50% of the observations). On each of them, we use a stepwise procedure, and we keep the estimates of the remaining variables (and their standard errors, actually)

n=nrow(MYOCARDE)
M=matrix(NA,100,ncol(MYOCARDE))
colnames(M)=c("(Intercept)",names(MYOCARDE)[1:7])
S1=S2=M1=M2=M
for(i in 1:100){
  # random split: about 50% of the observations in the training sample
  idx = which(sample(0:1, size=n, replace=TRUE)==1)
  # stepwise selection on the training sample (logistic regression)
  reg = step(glm(PRONO=="DECES"~., family=binomial, data=MYOCARDE[idx,]))
  nm = names(reg$coefficients)
  M1[i,nm] = reg$coefficients
  S1[i,nm] = summary(reg)$coefficients[,2]
  # refit the selected model on the validation sample
  f = paste("PRONO=='DECES'~", paste(nm[-1], collapse="+"), sep="")
  reg = glm(as.formula(f), family=binomial, data=MYOCARDE[-idx,])
  M2[i,nm] = reg$coefficients
  S2[i,nm] = summary(reg)$coefficients[,2]
}

Then, for the 7 covariates (and the intercept), we can compare the value of the coefficient in the model fitted on the training sample with the value in the model fitted on the validation sample (only when the variable was kept, of course)

for(j in 1:8){
idx=which(!is.na(M1[,j]))
plot(M1[idx,j],M2[idx,j])
abline(a=0,b=1,lty=2,col="gray")
segments(M1[idx,j]-2*S1[idx,j],M2[idx,j],M1[idx,j]+2*S1[idx,j],M2[idx,j])  
segments(M1[idx,j],M2[idx,j]-2*S2[idx,j],M1[idx,j],M2[idx,j]+2*S2[idx,j])  
}

For instance, with the intercept, we have the following

where the horizontal segments are confidence intervals for the parameter in the model fitted on the training sample, and the vertical ones in the model fitted on the validation sample. The green part indicates some sort of consistency, while the red part means that the coefficient was negative with one model and positive with the other, which is odd (but in that case, observe that the coefficients are rarely significant).
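
To quantify that red part, here is a small check (an addition of mine, not in the original post) computing, for each covariate, the proportion of samples where the training and validation estimates have opposite signs:

# proportion of training/validation pairs with opposite signs
# (NA entries correspond to samples where the variable was not retained)
round(apply(M1*M2, 2, function(x) mean(x < 0, na.rm = TRUE)), 2)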

We can also visualize the joint distribution of the two estimators,

library(ks)
for(j in 1:8){
  idx = which(!is.na(M1[,j]))
  # bivariate kernel density estimate of (training, validation) estimates
  Z = cbind(M1[idx,j], M2[idx,j])
  H = Hpi(x=Z)
  fhat = kde(x=Z, H=H)
  image(fhat$eval.points[[1]],
        fhat$eval.points[[2]], fhat$estimate)
  abline(a=0, b=1, lty=2, col="gray")
  abline(v=0, lty=2)
  abline(h=0, lty=2)
}

which is, here for the intercept, almost concentrated on the diagonal,

meaning that the intercept is (more or less) the same on the two samples. We can then look at the other parameters (which is actually more interesting).

On that variable, it seems to be significant on the training dataset (which is somehow consistent with the fact that it remained in the model after the stepwise procedure) but not, or hardly, significant on the validation sample.
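
A rough way to check this (again, my own addition) is to count how often a retained variable is significant, in the sense that the estimate exceeds 1.96 standard errors in absolute value, on the training half but not on the validation one:

sig_train = abs(M1) > 1.96*S1
sig_valid = abs(M2) > 1.96*S2
# proportion of samples where the variable is significant on the training
# half but not on the validation half
round(colMeans(sig_train & !sig_valid, na.rm = TRUE), 2)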

Others are much more consistent (with some possible outliers)

On the next one, we have again significance on the training sample, but not on the validation sample,

and probably more interesting

where the two are very consistent.

Variable Selection using Cross-Validation (and Other Techniques)

A natural technique for selecting variables in the context of generalized linear models is to use a stepwise procedure. It is natural, but controversial, as discussed by Frank Harrell in a great post, clearly worth reading. Frank mentions about ten points against stepwise procedures.

  • It yields R-squared values that are badly biased to be high.
  • The F and chi-squared tests quoted next to each variable on the printout do not have the claimed distribution.
  • The method yields confidence intervals for effects and predicted values that are falsely narrow (see Altman and Andersen (1989)).
  • It yields p-values that do not have the proper meaning, and the proper correction for them is a difficult problem.
  • It gives biased regression coefficients that need shrinkage: the coefficients for remaining variables are too large (see Tibshirani (1996), and the sketch after this list).
  • It has severe problems in the presence of collinearity.
  • It is based on methods (e.g., F tests for nested models) that were intended to be used to test prespecified hypotheses.
  • Increasing the sample size does not help very much (see Derksen and Keselman (1992)).
  • It allows us to not think about the problem.
  • It uses a lot of paper.
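
To illustrate the shrinkage alternative mentioned above (Tibshirani (1996) is the lasso paper), here is a minimal sketch of a lasso-penalised logistic regression on the MYOCARDE dataset from the previous post, with the penalty chosen by cross-validation; the glmnet calls are standard, but this example is mine, not Frank's:

library(glmnet)
# design matrix (without the intercept column) and 0/1 response
X = model.matrix(PRONO ~ ., data = MYOCARDE)[, -1]
y = (MYOCARDE$PRONO == "DECES") * 1
# lasso-penalised logistic regression, penalty chosen by 10-fold cross-validation
cvfit = cv.glmnet(X, y, family = "binomial", alpha = 1)
coef(cvfit, s = "lambda.1se")   # shrunken coefficients, some exactly zero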

Continue reading Variable Selection using Cross-Validation (and Other Techniques)

Regression on variables, or on categories?

I admit it, the title sounds weird. The problem I want to address this evening is related to the use of stepwise procedures on regression models, and to the use of categorical variables (and possible misinterpretations). Consider the following dataset

> db = read.table("http://freakonometrics.free.fr/db2.txt",header=TRUE,sep=";")

First, let us change the reference level of our categorical variable (just to get an easier interpretation later on)

> db$X3=relevel(as.factor(db$X3),ref="E")

If we run a logistic regression on the three variables (two continuous, one categorical), we get

> reg=glm(Y~X1+X2+X3,family=binomial,data=db)
> summary(reg)

Call:
glm(formula = Y ~ X1 + X2 + X3, family = binomial, data = db)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-3.0758   0.1226   0.2805   0.4798   2.0345  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -5.39528    0.86649  -6.227 4.77e-10 ***
X1           0.51618    0.09163   5.633 1.77e-08 ***
X2           0.24665    0.05911   4.173 3.01e-05 ***
X3A         -0.09142    0.32970  -0.277   0.7816    
X3B         -0.10558    0.32526  -0.325   0.7455    
X3C          0.63829    0.37838   1.687   0.0916 .  
X3D         -0.02776    0.33070  -0.084   0.9331    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 806.29  on 999  degrees of freedom
Residual deviance: 582.29  on 993  degrees of freedom
AIC: 596.29

Number of Fisher Scoring iterations: 6

Now, if we use a stepwise procedure, to select variables in the model, we get

> step(reg)
Start:  AIC=596.29
Y ~ X1 + X2 + X3

       Df Deviance    AIC
- X3    4   587.81 593.81
<none>      582.29 596.29
- X2    1   600.56 612.56
- X1    1   617.25 629.25

Step:  AIC=593.81
Y ~ X1 + X2

       Df Deviance    AIC
<none>      587.81 593.81
- X2    1   606.90 610.90
- X1    1   622.44 626.44

So clearly, we should remove the categorical variable if our starting point was the regression on the three variables.

Now, what if we consider the same model, but written slightly differently, on the five categories,

> X3complete = model.matrix(~0+X3,data=db)
> db2 = data.frame(db,X3complete)
> head(db2)
  Y       X1       X2 X3 X3A X3B X3C X3D X3E
1 1 3.297569 16.25411  B   0   1   0   0   0
2 1 6.418031 18.45130  D   0   0   0   1   0
3 1 5.279068 16.61806  B   0   1   0   0   0
4 1 5.539834 19.72158  C   0   0   1   0   0
5 1 4.123464 18.38634  C   0   0   1   0   0
6 1 7.778443 19.58338  C   0   0   1   0   0

From a technical point of view, it is exactly the same as before, if we look at the regression,

> reg = glm(Y~X1+X2+X3A+X3B+X3C+X3D+X3E,family=binomial,data=db2)
> summary(reg)

Call:
glm(formula = Y ~ X1 + X2 + X3A + X3B + X3C + X3D + X3E, family = binomial, 
    data = db2)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-3.0758   0.1226   0.2805   0.4798   2.0345  

Coefficients: (1 not defined because of singularities)
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -5.39528    0.86649  -6.227 4.77e-10 ***
X1           0.51618    0.09163   5.633 1.77e-08 ***
X2           0.24665    0.05911   4.173 3.01e-05 ***
X3A         -0.09142    0.32970  -0.277   0.7816    
X3B         -0.10558    0.32526  -0.325   0.7455    
X3C          0.63829    0.37838   1.687   0.0916 .  
X3D         -0.02776    0.33070  -0.084   0.9331    
X3E               NA         NA      NA       NA    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 806.29  on 999  degrees of freedom
Residual deviance: 582.29  on 993  degrees of freedom
AIC: 596.29

Number of Fisher Scoring iterations: 6

Both regressions are equivalent. Now, what about a stepwise selection on this new model?

> step(reg)
Start:  AIC=596.29
Y ~ X1 + X2 + X3A + X3B + X3C + X3D + X3E

Step:  AIC=596.29
Y ~ X1 + X2 + X3A + X3B + X3C + X3D

       Df Deviance    AIC
- X3D   1   582.30 594.30
- X3A   1   582.37 594.37
- X3B   1   582.40 594.40
<none>      582.29 596.29
- X3C   1   585.21 597.21
- X2    1   600.56 612.56
- X1    1   617.25 629.25

Step:  AIC=594.3
Y ~ X1 + X2 + X3A + X3B + X3C

       Df Deviance    AIC
- X3A   1   582.38 592.38
- X3B   1   582.41 592.41
<none>      582.30 594.30
- X3C   1   586.30 596.30
- X2    1   600.58 610.58
- X1    1   617.27 627.27

Step:  AIC=592.38
Y ~ X1 + X2 + X3B + X3C

       Df Deviance    AIC
- X3B   1   582.44 590.44
<none>      582.38 592.38
- X3C   1   587.20 595.20
- X2    1   600.59 608.59
- X1    1   617.64 625.64

Step:  AIC=590.44
Y ~ X1 + X2 + X3C

       Df Deviance    AIC
<none>      582.44 590.44
- X3C   1   587.81 593.81
- X2    1   600.73 606.73
- X1    1   617.66 623.66

What do we get now? This time, the stepwise procedure recommends that we keep one category (namely C). So my point is simple: when running a stepwise procedure with factors, either we keep the factor as it is, or we drop it. If the design should have been changed, by pooling some categories together, and we forgot to do it, then the whole variable will be suggested for removal, because keeping four categories that mean the same thing costs too much under the Akaike criterion. And this is exactly what happens here

> library(car)
> reg = glm(formula = Y ~ X1 + X2 + X3, family = binomial, data = db)
> linearHypothesis(reg,c("X3A=X3B","X3A=X3D","X3A=0"))
Linear hypothesis test

Hypothesis:
X3A - X3B = 0
X3A - X3D = 0
X3A = 0

Model 1: restricted model
Model 2: Y ~ X1 + X2 + X3

  Res.Df Df  Chisq Pr(>Chisq)
1    996                     
2    993  3 0.1446      0.986

So here, we should pool together categories A, B, D and E (which was the reference here). As mentioned in a previous post, categories that should be pooled together must be pooled as early as possible. If not, the stepwise procedure might lead to misinterpretations.
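
To make the pooling explicit, here is a minimal sketch (mine, not from the original post), where A, B, D and E are merged into a single level before running the selection:

# pool categories A, B, D and E (the reference) into one level, keep C apart
db$X3C = as.factor(ifelse(db$X3 == "C", "C", "other"))
reg2 = glm(Y ~ X1 + X2 + X3C, family = binomial, data = db)
step(reg2)   # the pooled factor should now survive the selection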

Variable selection versus category selection

In class, we (very briefly) mentioned automatic variable selection. The simplest method is a stepwise procedure, based on a criterion such as AIC or BIC. Consider the following dataset,

>  N = base$nbre
>  E = base$exposition
>  X1 = base$carburant
>  X2 = cut(base$agevehicule,c(0,3,10,101),
+ right=FALSE)
>  X3 = cut(base$ageconducteur,c(0,22,45,101),
+ right=FALSE)
>  X4 = as.factor(base$zone)
>  X5 = as.factor(base$puissance)
>  X6 = as.factor(base$region)
>  X7 = as.factor(base$marque)
>  base1=data.frame(N,E,X1,X2,X3,X4,X5,X6,X7)

Here, a backward stepwise procedure gives

> reg1=glm(N~X1+X2+X3+X4+X5+X6+X7+offset(log(E)),
+ family="poisson",data=base1)
> step(reg1)
Start:  AIC=20492.67
N ~ X1 + X2 + X3 + X4 + X5 + X6 + X7 + offset(log(E))

Df Deviance   AIC
- X5   11    15316 20482
- X3    2    15305 20490
<none>       15304 20493
- X2    2    15314 20499
- X1    1    15319 20506
- X7   10    15343 20511
- X4    5    15398 20576
- X6   14    15569 20729

Step:  AIC=20482.35
N ~ X1 + X2 + X3 + X4 + X6 + X7 + offset(log(E))

Df Deviance   AIC
- X3    2    15317 20479
<none>       15316 20482
- X2    2    15326 20488
- X1    1    15334 20498
- X7   10    15359 20505
- X4    5    15410 20566
- X6   14    15579 20717

Step:  AIC=20479.33
N ~ X1 + X2 + X4 + X6 + X7 + offset(log(E))

Df Deviance   AIC
<none>       15317 20479
- X2    2    15327 20485
- X1    1    15334 20495
- X7   10    15360 20502
- X4    5    15410 20563
- X6   14    15620 20754

Call:  glm(formula = N ~ X1 + X2 + X4 + X6 + X7 
       + offset(log(E)),
       family = "poisson",
data = base1)

Coefficients:
(Intercept)          X1E     X2[3,10)   X2[10,101)          X4B
-1.0588454   -0.1653822    0.0266763   -0.1135451   -0.0004047
X4C          X4D          X4E          X4F          X60
0.1497622    0.3748811    0.5052894    0.4292016   -0.3590838
X61          X62          X63          X64          X65
-0.9300641   -1.0278887   -1.1818218   -1.0971797   -0.9459414
X66          X67          X68          X69         X610
-1.3690795   -1.1425678   -1.5309402   -1.3883549   -1.4603624
X611         X612         X613          X72          X73
-1.6763206   -1.3974092   -1.4864404    0.0246113    0.1144990
X74          X75          X76         X710         X711
-0.0932555    0.1635397   -0.1478095    0.2502030    0.1967970
X712         X713         X714
-0.2420215    0.2161411   -0.1963162

Degrees of Freedom: 49999 Total (i.e. Null);  49967 Residual
Null Deviance:	    15810
Residual Deviance: 15320 	AIC: 20480

In other words, we drop the third variable (age of the main driver, in arbitrary classes) and the fifth one (engine power of the vehicle), and keep all the others. But here, if a variable was not retained, it means that, globally, it did not bring much information. It would nevertheless be possible to keep partial information, by keeping only some of its categories. The idea is to expand the dataset, creating indicator variables for each category. The dataset will be much bigger, and the selection will then take much more time,

> base2=data.frame(model.matrix( ~ 0+X1+X2+X3+X4+X5+X6+X7,
+ data=base1))
> base2$E=base1$E
> base2$N=base1$N
> reg2=glm(N~.-E+offset(log(E)),family="poisson",
+ data=base2)
>  step(reg2)
Start:  AIC=20492.67
N ~ (X1D + X1E + X2.3.10. + X2.10.101. + X3.22.45. + X3.45.101.
X4B + X4C + X4D + X4E + X4F + X55 + X56 + X57 + X58 + X59 +
X510 + X511 + X512 + X513 + X514 + X515 + X60 + X61 + X62 +
X63 + X64 + X65 + X66 + X67 + X68 + X69 + X610 + X611 + X612 +
X613 + X72 + X73 + X74 + X75 + X76 + X710 + X711 + X712 +
X713 + X714 + E) - E + offset(log(E))

Step:  AIC=20492.67
N ~ X1D + X2.3.10. + X2.10.101. + X3.22.45. + X3.45.101. + X4B
X4C + X4D + X4E + X4F + X55 + X56 + X57 + X58 + X59 + X510 +
X511 + X512 + X513 + X514 + X515 + X60 + X61 + X62 + X63 +
X64 + X65 + X66 + X67 + X68 + X69 + X610 + X611 + X612 +
X613 + X72 + X73 + X74 + X75 + X76 + X710 + X711 + X712 +
X713 + X714 + offset(log(E))

Df Deviance   AIC
- X4B         1    15304 20491
- X58         1    15304 20491
- X511        1    15304 20491
- X2.3.10.    1    15304 20491
- X72         1    15304 20491
- X513        1    15304 20491
- X512        1    15304 20491
- X515        1    15304 20491
- X74         1    15305 20491
- X3.45.101.  1    15305 20491
- X714        1    15305 20491
- X55         1    15305 20492
- X3.22.45.   1    15305 20492
- X711        1    15306 20492
- X76         1    15306 20492
- X59         1    15306 20492
<none>             15304 20493
- X514        1    15306 20493
- X713        1    15306 20493
- X73         1    15307 20493
- X56         1    15307 20493
- X710        1    15307 20494
- X75         1    15308 20494
- X2.10.101.  1    15308 20495
- X57         1    15309 20495
- X4C         1    15310 20496
- X510        1    15310 20496
- X60         1    15312 20498
- X4F         1    15314 20500
- X712        1    15316 20503
- X1D         1    15319 20506
- X4D         1    15337 20524
- X61         1    15345 20532
- X65         1    15350 20536
- X62         1    15352 20538
- X64         1    15359 20545
- X4E         1    15362 20549
- X63         1    15366 20553
- X67         1    15370 20556
- X612        1    15381 20568
- X69         1    15382 20569
- X66         1    15387 20574
- X610        1    15389 20576
- X68         1    15393 20580
- X611        1    15406 20592
- X613        1    15451 20637

Step:  AIC=20490.67
N ~ X1D + X2.3.10. + X2.10.101. + X3.22.45. + X3.45.101. + X4C
X4D + X4E + X4F + X55 + X56 + X57 + X58 + X59 + X510 + X511 +
X512 + X513 + X514 + X515 + X60 + X61 + X62 + X63 + X64 +
X65 + X66 + X67 + X68 + X69 + X610 + X611 + X612 + X613 +
X72 + X73 + X74 + X75 + X76 + X710 + X711 + X712 + X713 +
X714 + offset(log(E))

and so on… and if we go directly to the end,

Step:  AIC=20469.18
N ~ X1D + X2.10.101. + X4C + X4D + X4E + X4F + X57 + X510 + X60
X61 + X62 + X63 + X64 + X65 + X66 + X67 + X68 + X69 + X610 +
X611 + X612 + X613 + X73 + X75 + X76 + X710 + X712 + X713 +
offset(log(E))

Df Deviance   AIC
<none>             15315 20469
- X76         1    15317 20470
- X713        1    15317 20470
- X73         1    15317 20470
- X57         1    15318 20470
- X75         1    15318 20471
- X710        1    15319 20471
- X510        1    15319 20471
- X4C         1    15322 20474
- X60         1    15322 20475
- X2.10.101.  1    15325 20478
- X4F         1    15325 20478
- X1D         1    15333 20485
- X712        1    15338 20490
- X61         1    15356 20508
- X4D         1    15359 20511
- X62         1    15363 20515
- X65         1    15363 20515
- X64         1    15371 20524
- X63         1    15378 20530
- X67         1    15383 20536
- X4E         1    15390 20543
- X612        1    15394 20547
- X69         1    15396 20548
- X66         1    15400 20553
- X610        1    15403 20555
- X68         1    15407 20559
- X611        1    15419 20572
- X613        1    15467 20619

Call:  glm(formula = N ~ X1D + X2.10.101. + X4C + X4D + X4E + X4F
X57 + X510 + X60 + X61 + X62 + X63 + X64 + X65 + X66 + X67 +
X68 + X69 + X610 + X611 + X612 + X613 + X73 + X75 + X76 +
X710 + X712 + X713 + offset(log(E)), family = "poisson",
data = base2)

Coefficients:
(Intercept)          X1D   X2.10.101.          X4C          X4D
-1.20880      0.16886     -0.13808      0.14888      0.37539
X4E          X4F          X57         X510          X60
0.50458      0.42768      0.08381      0.18722     -0.36509
X61          X62          X63          X64          X65
-0.93836     -1.03471     -1.18803     -1.10217     -0.95624
X66          X67          X68          X69         X610
-1.37463     -1.15391     -1.54213     -1.40188     -1.47217
X611         X612         X613          X73          X75
-1.68559     -1.40582     -1.49700      0.10874      0.15022
X76         X710         X712         X713
-0.15183      0.21948     -0.27400      0.19565

Degrees of Freedom: 49999 Total (i.e. Null);  49971 Residual
Null Deviance:	    15810
Residual Deviance: 15310 	AIC: 20470

While the third variable (age of the main driver, in arbitrary classes) disappears quite quickly, some information about the fifth one (engine power) is kept, since some of its categories seem to be informative about claims frequency. Note, however, that if we grow a tree, the third variable is still clearly significant, which can comfort us in the idea of doing variable selection at the category level.

> library(tree)
> TREE= tree(N~X1+X2+X3+X4+X5+X6+X7+offset(log(E)),split="gini",
+ mincut = 2500,data=base1)
> plot(TREE)
> text(TREE,cex=.9)

Too large datasets for regression? What about subsampling…

Recently, a classmate working in an insurance company told me he had datasets that were too large to run simple regressions (GLMs, which involve optimization routines), and that they were thinking of a reward for whoever would write the best (or at least the fastest) R code. My first idea was to use subsampling techniques: 10 regressions on 100,000 observations can take less time than one regression on 1,000,000 observations. And perhaps also provide better results…

  • Time to run a regression, as a function of the number of observations

Here, I generate a dataset where $Y_i\sim\mathcal{P}(\exp(\eta_i))$ with

$$\eta_i = 0.2\,x_{5,i} - 4\,f_{2,5}(x_{3,i}) + x_{1,i} + \mathbf{1}(x_{2,i}=A) - 2\,\mathbf{1}(x_{2,i}=B) - 5\,\mathbf{1}(x_{2,i}=C)$$

(where $f_{2,5}$ denotes the Beta(2,5) density), and we fit a Poisson regression with

$$\log\lambda_i = s(x_{1,i}) + \beta_{x_{2,i}} + \beta_3\, x_{3,i} + \beta_4\, x_{4,i} + \beta_5\, x_{5,i} + \beta_6\, x_{6,i} + \log E_i$$

where $s(\cdot)$ is a spline function (just to make it as general as possible, since in insurance ratemaking, we include continuous variates that do not influence claims frequency linearly in the score). Yes, there are also useless variables, including one which is strongly correlated with one that does have an impact in the regression. The code to generate the dataset is simply

> library(mnormt)   # for rmnorm below
> library(splines)  # for bs() in the regressions below
> n=10000
> X1=rexp(n)
> X2=sample(c("A","B","C"),size=n,replace=TRUE)
> X3=runif(n)
> Z=rmnorm(n,c(0,0),matrix(c(1,0.8,.8,1),2,2))
> X4=Z[,1]
> X5=Z[,2]
> X6=X1^2
> E=runif(n)
> lambda=.2*X5-4*dbeta(X3,2,5)+X1+
+1*(X2=="A")-2*(X2=="B")-5*(X2=="C")
> Y=rpois(n,exp(lambda))
> base=data.frame(Y,X1,X2,X3,X4,X5,X6,E)

We would like to study the time it takes to run a regression, as a function of the size (i.e. the number of lines $n$) of the dataset.

> system.time( glm(Y~bs(X1)+X2+X3+X4+
+ X5+X6+offset(log(E)),family=poisson,
+ data=base) )
utilisateur     système      écoulé
0.25        0.00        0.25

Here, the time I look at is the last one (elapsed). So far this was rather simple, but it is not the best model I can get. Let us use a stepwise (backward) variable selection,

> system.time( step(glm(Y~bs(X1)+X2+X3+
+ X4+X5+X6+offset(log(E)),family=poisson,
+ data=base)) )
Start:  AIC=2882.1
Y ~ bs(X1) + X2 + X3 + X4 + X5 + X6 + offset(log(E))
Step:  AIC=2882.1
Y ~ bs(X1) + X2 + X3 + X4 + X5 + offset(log(E))
Df Deviance    AIC
<none>        2236.0 2882.1
- X5      1   2240.1 2884.2
- X4      1   2244.1 2888.2
- X3      1   4783.2 5427.3
- X2      2   5311.4 5953.5
- bs(X1)  3   6273.7 6913.8
utilisateur     système      écoulé
1.82        0.03        1.86

Finally, based on 200 simulated datasets, we get the points in black for the first regression, and the points in red when a stepwise procedure is added.

i.e. it might look linear (proportional), but if it were linear, then on a log-log scale we should also get straight lines, with slope 1,

Actually, it looks like a convex function.

The interpretation of that convexity might lead to misinterpretation. On the graph below, on the left, a dataset twice as big as the previous one (black point) takes less than twice as long to run, while, on the right, it takes more than twice as long,

Convexity can simply be interpreted as “very large datasets take a disproportionately long time, while very small ones mostly pay a fixed overhead…”. Which is a first step: it should be interesting, in some cases, to run several regressions on smaller datasets…
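
For completeness, here is a possible sketch of the timing experiment above. The grid of sample sizes is hypothetical, and I regenerate one dataset per size with the same mechanism as before (the original post used 200 simulated datasets); it uses library(mnormt) and library(splines), loaded earlier:

# regenerate a dataset of size n with the same mechanism as above
gen_base = function(n){
  X1 = rexp(n)
  X2 = sample(c("A","B","C"), size=n, replace=TRUE)
  X3 = runif(n)
  Z  = rmnorm(n, c(0,0), matrix(c(1,.8,.8,1), 2, 2))
  X4 = Z[,1]; X5 = Z[,2]; X6 = X1^2
  E  = runif(n)
  lambda = .2*X5 - 4*dbeta(X3,2,5) + X1 +
    1*(X2=="A") - 2*(X2=="B") - 5*(X2=="C")
  data.frame(Y=rpois(n, exp(lambda)), X1, X2, X3, X4, X5, X6, E)
}
sizes = c(1000, 2000, 5000, 10000, 20000)   # hypothetical grid of sizes
times = t(sapply(sizes, function(n){
  b  = gen_base(n)
  t1 = system.time(glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
       family=poisson, data=b))["elapsed"]
  t2 = system.time(step(glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
       family=poisson, data=b), trace=0))["elapsed"]
  c(glm=t1, step=t2)
}))
# a slope of 1 on this log-log scale would mean a time proportional to n
matplot(sizes, times, log="xy", pch=19, col=c("black","red"),
        xlab="n", ylab="time (seconds)")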

  • Running 100 regressions on 100 lines, or running 1 regression on 10,000 lines?

Here, we have datasets with $n$=200,000 lines. The question is how long it will take if we subdivide them into $k$ subsamples (of equal size), and run $k$ regressions.

> nk=trunc(n/k); classe=rep(1:k,each=nk); nt=nk*k
> base=data.frame(Y=Y[1:nt],X1=X1[1:nt],
+ X2=X2[1:nt],X3=X3[1:nt],X4=X4[1:nt],X5=X5[1:nt],
+ X6=X6[1:nt],E=E[1:nt],classe=classe)
> system.time( for(j in 1:k){
+  glm(Y~bs(X1)+X2+X3+X4+X5+
+ X6+offset(log(E)),family=poisson
+ ,data=base,subset=classe==j) })
utilisateur     système      écoulé
1.31        0.00        1.31
> system.time( for(j in 1:k){
+      step(glm(Y~bs(X1)+X2+X3+
+ X4+X5+X6+offset(log(E)),family=
+ poisson,data=base,subset=classe==j)) })
Start:  AIC=183.97
Y ~ bs(X1) + X2 + X3 + X4 + X5 + X6 + offset(log(E))

[…]

  Df Deviance    AIC
<none>        117.15 213.04
- X2      2   250.15 342.04
- X3      1   251.00 344.89
- X4      1   420.63 514.53
- bs(X1)  3   626.84 716.74
utilisateur     système      écoulé
11.97        0.03       12.31

On the graph below, we have the time (y-axis, here on a log scale) it took to run $k$ regressions on samples of size $n/k$, as a function of $k$ (x-axis), including the time it took to run the regression on the full dataset of size $n$, which is the concentration of dots on the left (i.e. $k$=1), both with the 6 regressors (in black) and with a stepwise procedure (in red). One has to keep in mind that I did not remove the printing option in the stepwise procedure, so it might be difficult to compare the two clouds (black vs. red). Nevertheless, we clearly see that if we run $k$ regressions on samples of size $n/k$, when $k$ is not too large, i.e. less than 10 or 15, it is not longer than the regression on $n$=200,000 lines.
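
Again, the loop over $k$ is not reproduced in the post; here is a possible sketch (the grid of values of $k$ is mine, and base is assumed to be the 200,000-line dataset built above):

# time k regressions on subsamples of size about n/k, for several values of k
ks = c(1, 2, 5, 10, 20, 50, 100)   # hypothetical grid of values of k
elapsed = sapply(ks, function(k){
  nk = trunc(nrow(base)/k); nt = nk*k
  b  = base[1:nt,]
  b$classe = rep(1:k, each=nk)
  system.time(for(j in 1:k){
    glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
        family=poisson, data=b, subset=classe==j)
  })["elapsed"]
})
plot(ks, elapsed, log="y", pch=19, xlab="k", ylab="time (seconds)")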

So here we see that running 100 regressions on 2,000 lines takes longer than running 1 regression on 200,000 lines… But maybe we are not comparing things that are actually comparable: what if it takes a bit longer, but strongly improves the quality of our estimators?

  • What about the quality of the output?

Here, we consider only one dataset, with $n$=100,000 lines (just to make it run a bit faster), and $k$=20 subsets. Recall that the generated dataset is from

$$Y_i\sim\mathcal{P}(\exp(\eta_i)),\qquad \eta_i = 0.2\,x_{5,i} - 4\,f_{2,5}(x_{3,i}) + x_{1,i} + \mathbf{1}(x_{2,i}=A) - 2\,\mathbf{1}(x_{2,i}=B) - 5\,\mathbf{1}(x_{2,i}=C)$$

and we fit

$$\log\lambda_i = s(x_{1,i}) + \beta_{x_{2,i}} + \beta_3\, x_{3,i} + \beta_4\, x_{4,i} + \beta_5\, x_{5,i} + \beta_6\, x_{6,i} + \log E_i$$

Here, we plot one estimated coefficient $\widehat{\beta}$ and a (Gaussian, 95%) confidence interval,

$$\left[\widehat{\beta} \pm 1.96\,\widehat{\mathrm{se}}\big(\widehat{\beta}\big)\right]$$

The light blue segment is the initial estimator, while the blue one is obtained from the stepwise procedure. The grey area represents the estimate on the overall sample, while the $k$ segments on the right are the $k$ estimators (each on a sample of size $n/k$).

We can see that we have much more volatility on those $k$ estimators, but their average (horizontal dotted lines) is not so bad… The true value (i.e. the one used to generate the dataset) is the dotted black horizontal line.

And if we repeat that on 1,000 simulated datasets, we obtain the following distribution for $\widehat{\beta}$ (blue line), so we have an unbiased estimator of our parameter (the vertical line being the true value), here including a stepwise procedure,

If we now add, in red, the density of the average of the $k$ subsample estimators (the previous density being the light blue line in the back), we see that taking the average of estimators computed on subsamples is not bad at all, on the contrary,

and for those who think that the stepwise procedure is a mistake, here is what we get without it,

So what we can see is that running 20 regressions can take (a little) more time than running only one regression on the whole dataset, from what we have seen earlier… but it provides better estimates. So the tradeoff is not that simple, and maybe running several regressions on huge datasets can be a proper alternative.
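
To close, here is a minimal sketch of the averaging strategy discussed above, on the $k$=20 subsamples. The post does not say which coefficient is plotted, so I use the coefficient of X5, whose value in the generating model is 0.2:

k  = 20
nk = trunc(nrow(base)/k); nt = nk*k
b  = base[1:nt,]
b$classe = rep(1:k, each=nk)
# one estimator per subsample, then their average
beta_sub = sapply(1:k, function(j)
  coef(glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
           family=poisson, data=b, subset=classe==j))["X5"])
mean(beta_sub)
# full-sample estimator, for comparison
coef(glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
         family=poisson, data=b))["X5"]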