Datasets too large for regression? What about subsampling…

Recently, a classmate working in an insurance company told me that his datasets were too large to run even simple regressions (GLMs, which involve optimization routines), and that his team was thinking of a reward for whoever writes the best (or at least the fastest) R code. My first idea was to use subsampling techniques, arguing that 10 regressions on 100,000 observations can take less time than a single regression on 1,000,000 observations, and might even provide better results…

  • Time to run a regression, as a function of the number of observations

Here, I generate a dataset as follows

$$Y_i\sim\mathcal{P}\big(\exp(\lambda_i)\big),\qquad \lambda_i = X_{1,i}+\mathbf{1}(X_{2,i}=\text{A})-2\,\mathbf{1}(X_{2,i}=\text{B})-5\,\mathbf{1}(X_{2,i}=\text{C})-4\,f_{2,5}(X_{3,i})+0.2\,X_{5,i}$$

where $f_{2,5}$ denotes the density of the Beta(2,5) distribution,

and we fit

$$Y_i\sim\mathcal{P}\big(E_i\,e^{\eta_i}\big),\qquad \eta_i=\beta_0+s(X_{1,i})+\beta_{\text{B}}\mathbf{1}(X_{2,i}=\text{B})+\beta_{\text{C}}\mathbf{1}(X_{2,i}=\text{C})+\beta_3X_{3,i}+\beta_4X_{4,i}+\beta_5X_{5,i}+\beta_6X_{6,i}$$

where $s(\cdot)$ is a spline function (just to make it as general as possible, since in insurance ratemaking we include continuous covariates that do not influence the claims frequency linearly in the score). Note that there are also useless variables, including one that is strongly correlated with a variable that does have an impact on the regression. The code to generate the dataset is simply

> library(splines) # for bs(), used in the regressions below
> library(mnormt)  # for rmnorm(), to generate correlated Gaussian variates
> n=10000
> X1=rexp(n)
> X2=sample(c("A","B","C"),size=n,replace=TRUE)
> X3=runif(n)
> Z=rmnorm(n,c(0,0),matrix(c(1,0.8,.8,1),2,2))
> X4=Z[,1]
> X5=Z[,2]
> X6=X1^2
> E=runif(n)
> lambda=.2*X5-4*dbeta(X3,2,5)+X1+
+1*(X2=="A")-2*(X2=="B")-5*(X2=="C")
> Y=rpois(n,exp(lambda))
> base=data.frame(Y,X1,X2,X3,X4,X5,X6,E)

We would like to study the time it takes to run a regression, as a function of the size (i.e. the number of lines $n$) of the dataset.

> system.time( glm(Y~bs(X1)+X2+X3+X4+
+ X5+X6+offset(log(E)),family=poisson,
+ data=base) )
   user  system elapsed 
   0.25    0.00    0.25 

Here, the time I look at is the last one, the elapsed time. So far this was rather simple, but it is not the best model I can get. Let us use a stepwise (backward) variable selection,

> system.time( step(glm(Y~bs(X1)+X2+X3+
+ X4+X5+X6+offset(log(E)),family=poisson,
+ data=base)) )
Start:  AIC=2882.1
Y ~ bs(X1) + X2 + X3 + X4 + X5 + X6 + offset(log(E))
Step:  AIC=2882.1
Y ~ bs(X1) + X2 + X3 + X4 + X5 + offset(log(E))
         Df Deviance    AIC
<none>        2236.0 2882.1
- X5      1   2240.1 2884.2
- X4      1   2244.1 2888.2
- X3      1   4783.2 5427.3
- X2      2   5311.4 5953.5
- bs(X1)  3   6273.7 6913.8
   user  system elapsed 
   1.82    0.03    1.86 

Finally, running this on 200 simulated datasets, the times for the first regression are the points in black, and those for the stepwise procedure are the points in red.
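As a rough sketch of how such a timing curve can be collected (the sample sizes, the helper generate_base() and the plotting call below are illustrative, not the exact script used for the figure), one can simply wrap the data generation and the timing in a loop:

library(splines)   # for bs()
library(mnormt)    # for rmnorm()

# hypothetical helper: generate a dataset of size n, exactly as above
generate_base = function(n){
  X1=rexp(n)
  X2=sample(c("A","B","C"),size=n,replace=TRUE)
  X3=runif(n)
  Z=rmnorm(n,c(0,0),matrix(c(1,.8,.8,1),2,2))
  X4=Z[,1]; X5=Z[,2]; X6=X1^2
  E=runif(n)
  lambda=.2*X5-4*dbeta(X3,2,5)+X1+1*(X2=="A")-2*(X2=="B")-5*(X2=="C")
  Y=rpois(n,exp(lambda))
  data.frame(Y,X1,X2,X3,X4,X5,X6,E)
}

ns = c(1e4,2e4,5e4,1e5,2e5)          # illustrative sample sizes
times = sapply(ns, function(n){
  base = generate_base(n)
  system.time( glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
                   family=poisson, data=base) )["elapsed"]
})
plot(ns, times, xlab="n", ylab="elapsed time (s)")

Repeating this a couple of hundred times, and adding a step() variant, would give point clouds similar to the ones in the figure.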

i.e. it might look linear (proportional), but if it were, then on a log-log scale we should also get straight lines, with slope 1,

Actually, it looks like a convex function.
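A quick way to check this, assuming the vectors ns and times from the sketch above, is to regress the log of the elapsed time on the log of the sample size: a slope close to 1 would mean a proportional cost, a slope above 1 a convex one.

# elasticity of computing time with respect to sample size
summary( lm(log(times) ~ log(ns)) )$coefficients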

That convexity can easily be misinterpreted: on the graph below, on the left, running the regression on a dataset twice as large as the previous one (black point) takes less than twice as long, while on the right, it takes more than twice as long,

Convexity can simply be interpreted as “too large datasets take time, and so do too small ones…”. Which is a first step: it could be interesting, in some cases, to run several regressions on smaller datasets…

  • Running 100 regressions on 100 lines, or running 1 regression on 10,000 lines?

Here, we have a dataset with $n$=200,000 lines. The question is how long it will take if we subdivide it into $k$ subsamples (of equal size), and run $k$ regressions.

> nk=trunc(n/k); classe=rep(1:k,each=nk); nt=nk*k
> base=data.frame(Y=Y[1:nt],X1=X1[1:nt],
+ X2=X2[1:nt],X3=X3[1:nt],X4=X4[1:nt],
+ X5=X5[1:nt],X6=X6[1:nt],E=E[1:nt],
+ classe=classe)
> system.time( for(j in 1:k){
+  glm(Y~bs(X1)+X2+X3+X4+X5+
+ X6+offset(log(E)),family=poisson
+ ,data=base,subset=classe==j) })
   user  system elapsed 
   1.31    0.00    1.31 
> system.time( for(j in 1:k){
+      step(glm(Y~bs(X1)+X2+X3+
+ X4+X5+X6+offset(log(E)),family=
+ poisson,data=base,subset=classe==j)) })
Start:  AIC=183.97
Y ~ bs(X1) + X2 + X3 + X4 + X5 + X6 + offset(log(E))

[…]

  Df Deviance    AIC
<none>        117.15 213.04
- X2      2   250.15 342.04
- X3      1   251.00 344.89
- X4      1   420.63 514.53
- bs(X1)  3   626.84 716.74
   user  system elapsed 
  11.97    0.03   12.31 

On the graph below, we have the time (y-axis, here on a log scale) it took to run $k$ regressions on samples of size $n/k$, as a function of $k$ (x-axis), including the time it took to run the regression on a dataset of size $n$, which is the concentration of dots on the left (i.e. $k$=1), both with the 6 regressors (in black) and with a stepwise procedure (in red). One has to keep in mind that I did not remove the printing option in the stepwise procedure, so it might be difficult to compare the two clouds (black vs. red). Nevertheless, we clearly see that if we run $k$ regressions on samples of size $n/k$, when $k$ is not too large, i.e. less than 10 or 15, it is not longer than running one regression on $n$=200,000 lines.
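A sketch of how that experiment can be set up (the values of $k$ below are illustrative, and generate_base() is the hypothetical helper from the first sketch):

n = 200000
base_full = generate_base(n)

ks = c(1,2,5,10,20,50,100)           # illustrative numbers of subsamples
times_k = sapply(ks, function(k){
  nk = trunc(n/k); classe = rep(1:k, each=nk); nt = nk*k
  base = cbind(base_full[1:nt,], classe)
  system.time(
    for(j in 1:k){
      glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
          family=poisson, data=base, subset=classe==j)
    }
  )["elapsed"]
})
plot(ks, times_k, log="y", xlab="k", ylab="elapsed time (s)")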

So here we see that running 100 regressions on 2,000 lines takes longer than running 1 regression on 200,000 lines… But maybe we are not comparing things that are actually comparable: what if it takes a bit longer, but we strongly improve the quality of our estimators?

  • What about the quality of the output?

Here, we consider a single dataset, with $n$=100,000 lines (just to make it run a bit faster), and $k$=20 subsets. Recall that the generated dataset is from

$$Y_i\sim\mathcal{P}\big(\exp(\lambda_i)\big),\qquad \lambda_i = X_{1,i}+\mathbf{1}(X_{2,i}=\text{A})-2\,\mathbf{1}(X_{2,i}=\text{B})-5\,\mathbf{1}(X_{2,i}=\text{C})-4\,f_{2,5}(X_{3,i})+0.2\,X_{5,i}$$

and we fit

$$Y_i\sim\mathcal{P}\big(E_i\,e^{\eta_i}\big),\qquad \eta_i=\beta_0+s(X_{1,i})+\beta_{\text{B}}\mathbf{1}(X_{2,i}=\text{B})+\beta_{\text{C}}\mathbf{1}(X_{2,i}=\text{C})+\beta_3X_{3,i}+\beta_4X_{4,i}+\beta_5X_{5,i}+\beta_6X_{6,i}$$

Here, we plot $\widehat{\beta}$, the estimator of one coefficient of the regression, and a confidence interval, defined as

$$\left[\widehat{\beta}\pm 1.96\,\widehat{\mathrm{se}}(\widehat{\beta})\right]$$

The light blue segment is the initial estimator, while the blue one is obtained from the stepwise procedure. The grey area represents the estimation on the overall sample, while the $k$ segments on the right are the $k$ estimators (each on a sample of size $n/k$).
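A sketch of how those segments can be computed, say for the coefficient associated with X5 (the choice of that coefficient is purely illustrative, and generate_base() is, again, the hypothetical helper from the first sketch):

n = 100000; k = 20
base = generate_base(n)
base$classe = rep(1:k, each=n/k)

# estimate and standard error of one coefficient from a fitted glm
coef_and_se = function(fit, name="X5"){
  est = summary(fit)$coefficients
  c(estimate=est[name,"Estimate"], se=est[name,"Std. Error"])
}

# full-sample estimator (the grey area: estimate +/- 1.96 standard errors)
fit_all = glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
              family=poisson, data=base)
coef_and_se(fit_all)

# the k estimators, one per subsample of size n/k
subs = sapply(1:k, function(j)
  coef_and_se( glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
                   family=poisson, data=base, subset=classe==j) ))
rowMeans(subs)   # average of the k estimates, and of their standard errors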

We can see that we have much more volatility on those $k$ estimators, but the average (horizontal dotted lines) is not so bad… The true value (i.e. the one used to generate the dataset) is the dotted black horizontal line.
And if we repeat that on 1,000 simulated datasets, we obtain the following distribution for $\widehat{\beta}$ (blue line), so we have an unbiased estimator of our parameter (the vertical line being the true value), here including a stepwise procedure,
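A small-scale sketch of that simulation (with fewer replications than the 1,000 used for the figure, and without the stepwise step, to keep it short; again, for the coefficient of X5 only, as an illustration):

nsim = 100; n = 100000; k = 20
f = Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E))
res = replicate(nsim, {
  base = generate_base(n)                 # hypothetical helper from the first sketch
  base$classe = rep(1:k, each=n/k)
  b_full = unname(coef(glm(f, family=poisson, data=base))["X5"])
  b_subs = sapply(1:k, function(j)
    coef(glm(f, family=poisson, data=base, subset=classe==j))["X5"])
  c(full=b_full, subavg=mean(b_subs))
})
plot(density(res["subavg",]), col="red", main="")   # average of the k estimators
lines(density(res["full",]), col="blue")            # estimator on the full sample
abline(v=.2, lty=2)   # true value of the X5 coefficient in the data-generating process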

But if we add the red curve, the average of the $k$ estimators on the subsamples, $\overline{\widehat{\beta}}=\frac{1}{k}\sum_{j=1}^{k}\widehat{\beta}_j$ (the previous one now being the light blue line in the back), we see that taking the average of estimators on subsamples is not bad at all, quite the contrary,

and for those who think that the stepwise procedure is a mistake, here is what we get without it,

So what we can see is that running 20 regressions can take (a little) more time than running only one on the whole dataset (from what we’ve seen earlier)… but it provides better estimates. So the tradeoff is not that simple, and maybe running several regressions on subsamples of huge datasets can be a proper alternative.