
How long could it take to run a regression

This afternoon, while I was discussing with Montserrat (aka @mguillen_estany), we were wondering how long it might take to run a regression model, and more specifically, how long it might take with a Bayesian approach. My guess was that the running time should probably be linear in n, the number of observations. But I thought it would be good to check.

Let us generate a big dataset, with one million rows,

> n=1e6
> X=runif(n)
> Y=2+5*X+rnorm(n)
> B=data.frame(X,Y)

Consider as a benchmark the standard linear regression,

> lm_freq = function(n){
+   idx = sample(1:1e6,size=n)
+   reg = lm(Y~X,data=B[idx,])
+   summary(reg)
+ }

Here the regression is run on a random subset of smaller size. We can do the same with a Bayesian approach, using Stan,

> stan_lm ="
+ data {
+ int N;
+ vector[N] x;
+ vector[N] y;
+ }
+ parameters {
+ real alpha;
+ real beta;
+ real<lower=0> tau;
+ }
+ transformed parameters {
+ real sigma;
+ sigma = 1 / sqrt(tau);
+ }
+ model{
+ y ~ normal(alpha + beta * x, sigma);
+ alpha ~ normal(0, 10);
+ beta ~ normal(0, 10);
+ tau ~ gamma(0.001, 0.001);
+ }
+ "

We then compile the model,

> library(rstan)
> system.time( 
  stanmodel <<- stan_model(model_code = stan_lm))
   user  system elapsed 
  0.043   0.000   0.043

The Bayesian regression, again fitted on a random subset, is then

> lm_bayes = function(n){
+   idx = sample(1:1e6,size=n)
+   fit = sampling(stanmodel,
+       data = list(N=n,
+                   x=X[idx],
+                   y=Y[idx]),
+       iter = 1000, warmup=200)
+   summary(fit)
+ }

We use the following package to see how long it takes

> library(microbenchmark)
> time_lm = function(n){
+  M = microbenchmark(lm_freq(n),
+      lm_bayes(n),times=50)
+  # microbenchmark randomises the evaluation order, so average by expression
+  return(tapply(M$time, M$expr, mean))
+ }

We can now compare the time it takes with ten, one hundred, one thousand, and ten thousand observations,

> vN = c(10,100,1000,10000)
> T = Vectorize(time_lm)(vN)

we can then plot it

> plot(vN,T[2,]/1e6,log="xy",col="red",type="b",
+      xlab="Number of Observations",ylab="Time")
> lines(vN,T[1,]/1e6,col="blue",type="b")

It looks like (if we forget about the very small samples) the time it takes to run a regression grows linearly with the number of observations, for both techniques (the frequentist and the Bayesian one). Note that microbenchmark reports times in nanoseconds, so dividing by 1e6 gives milliseconds on the plot.
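
As a quick sanity check, we could also regress the log of the average running time on the log of the sample size: a slope close to one would confirm the (roughly) linear scaling,

> lm(log(T[1,])~log(vN))  # frequentist timings
> lm(log(T[2,])~log(vN))  # Bayesian timings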

And actually, the same story holds for logistic regression. Consider the following dataset

> n=1e6
> X=runif(n)
> S=-3+2*X+rnorm(n)
> Y=rbinom(n,size=1,prob=exp(S)/(1+exp(S)))
> B=data.frame(X,Y)
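
Note that exp(S)/(1+exp(S)) is just the logistic cdf, so the simulation could equivalently use plogis,

> Y=rbinom(n,size=1,prob=plogis(S))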

The frequentist version of the logistic regression is

> glm_freq = function(n){
+   idx = sample(1:1e6,size=n)
+   reg = glm(Y~X,data=B[idx,],family=binomial)
+   summary(reg)
+ }

and the Bayesian one, using Stan,

> stan_glm = "
+ data {
+ int N;
+ vector[N] x;
+ int<lower=0,upper=1> y[N];
+ }
+ parameters {
+ real alpha;
+ real beta;
+ }
+ model {
+ alpha ~ normal(0, 10);
+ beta ~ normal(0, 10);
+ y ~ bernoulli_logit(alpha + beta * x);
+ }
+ "
> stanmodel = stan_model(model_code = stan_glm)
> glm_bayes = function(n){
+   idx = sample(1:1e6,size=n)
+   fit = sampling(stanmodel,
+        data = list(N=n,
+        x = X[idx],
+        y = Y[idx]),
+        iter = 1000, warmup=200)
+   summary(fit)
+ }

Again, we can see how long it takes to run those regression models

> time_glm = function(n){
+   M = microbenchmark(glm_freq(n),
+   glm_bayes(n),times=50)
+   # again, average the timings by expression
+   return(tapply(M$time, M$expr, mean))
+ }
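
We can then collect and plot the timings exactly as in the linear case, for instance

> vN = c(10,100,1000,10000)
> T = Vectorize(time_glm)(vN)
> plot(vN,T[2,]/1e6,log="xy",col="red",type="b",
+      xlab="Number of Observations",ylab="Time")
> lines(vN,T[1,]/1e6,col="blue",type="b")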


On linear models with no constant and R2

In econometrics courses, we always tell our students that "if you fit a linear model with no constant, then you might have trouble. For instance, you might get a negative R-squared". So I tried to find datasets on the internet such that, when we run a linear regression, we actually obtain a negative R-squared. I also generated, in R, hundreds of random datasets that should exhibit such a property. With no success. Perhaps, to be more specific, I should explain what might happen if we do not include a constant in a linear model. Consider the following dataset, where the points lie on a straight line, with a negative slope, far from the origin, and symmetric with respect to the first diagonal.

> x=1:3
> y=3:1
> plot(x,y)

The points are on a straight line, so it is actually possible to get a perfect linear model, but only if we include a constant in our model (see the quick check below). This is related to the fact that the correlation between our two variates is -1,

> cor(x,y)
[1] -1
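
Indeed, if we include a constant, the fit is exact (the line $y = 4 - x$ goes through the three points), and the $R^2$ equals one,

> summary(lm(y~x))$r.squared
[1] 1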

The least squares program, for the model without a constant, is here

$$\min_{b}\ \sum_{i=1}^n \left(y_i - b x_i\right)^2$$

i.e. the estimate of the slope is

$$\widehat{b} = \frac{\sum_{i=1}^n x_i y_i}{\sum_{i=1}^n x_i^2}$$

Numerically, we obtain

> sum(x*y)/sum(x^2)
[1] 0.7142857
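
which is also the estimate returned by lm for the model without a constant,

> coef(lm(y~0+x))
        x 
0.7142857 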

which is the actual slope on the illustration above. If we compute the sum of squares of errors (as a function of the slope), we have here

> ssr=function(b){sum((y-b*x)^2)}
> SSR=Vectorize(ssr)
> B=seq(-1,3,by=.1)
> plot(B,SSR(B),ylim=c(0,ssr(3)),cex=.6,type="b")
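
and the total sum of squares, the red horizontal line mentioned below, can be added to that plot with

> abline(h=sum((y-mean(y))^2),col="red")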

so the value we have computed is actually the minimum of the sum of squared errors. But note that this sum of squares always exceeds the total sum of squares (the red line on the graph above),

> b=sum(x*y)/sum(x^2)
> ssr(b)
[1] 6.857143
> sum((y-mean(y))^2)
[1] 2

Recall that the coefficient of determination, denoted $R^2$, is defined as

$$R^2 = 1 - \frac{\sum_{i=1}^n (y_i - \widehat{y}_i)^2}{\sum_{i=1}^n (y_i - \bar{y})^2}$$

i.e.

> 1-ssr(b)/sum((y-mean(y))^2)
[1] -2.428571

which is negative. It is also sometimes defined as “the square of the sample correlation coefficient between the outcomes and their predicted values“. Here it would be related to

> cor(b*x,y)
[1] -1

so we would have a unit $R^2$. So obviously, using the $R^2$ in a model without a constant would give odd results. But the weird part is that if we run that regression with R, we get

> summary(lm(y~0+x))

Call:
lm(formula = y ~ 0 + x)

Residuals:
1       2       3
2.2857  0.5714 -1.1429

Coefficients:
Estimate Std. Error t value Pr(>|t|)
x   0.7143     0.4949   1.443    0.286

Residual standard error: 1.852 on 2 degrees of freedom
Multiple R-squared: 0.5102,	Adjusted R-squared: 0.2653
F-statistic: 2.083 on 1 and 2 DF,  p-value: 0.2857

Here, the estimate is correct. But the $R^2$ we obtain tells us that the model is not that bad… So what is R actually computing? The value given by R (thanks Vincent for asking me to look carefully at the R source code) is obtained using Pythagoras's theorem to compute the total sum of squares,

> sum((b*x)^2)/(sum((b*x)^2)+sum((y-b*x)^2))
[1] 0.5102041
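
Equivalently, since the fitted values and the residuals are orthogonal, $\sum_i \widehat{y}_i^2+\sum_i\widehat{\varepsilon}_i^2=\sum_i y_i^2$, so this is nothing but the "uncentered" $R^2$, namely $1-\sum_i\widehat{\varepsilon}_i^2/\sum_i y_i^2$,

> 1-ssr(b)/sum(y^2)
[1] 0.5102041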

So be careful: the $R^2$ might look good, but be meaningless!