There is no “Too Big” Data, is there?

A few years ago, a former classmate came back to me with a simple problem. He was working for some insurance company (and still is, don’t worry, chatting with me is not yet a reason for dismissal), and his problem was that their dataset was too large to run (standard) code to fit a regression and get some predictions. My answer was to use sub-sampling techniques, and I still believe this can be a good idea (actually, I wrote a long post on that issue, entitled too large datasets for regression? What about subsampling). But I wanted to go further, since I did not discuss predictions obtained with sub-sampling techniques.

So, consider here a logistic regression, based on some covariates. We have k explanatory variables (k will be large, but not too large) and n observations (with n >> k), so we (potentially) have a big matrix product X′X, i.e. with a large n × k matrix X. Here, assume that we have a 100,000 × 100 matrix, with n = 100,000 individual observations and k = 100 possible variables (plus the intercept). Actually, in my model, only 2 variables were used in the real model. Assume further that the explanatory variables are – potentially – correlated.

n=100000          # number of observations
library(mnormt)   # for multivariate normal random vectors
k=50              # 50 U variables and 50 X variables (100 covariates in total)
r=.2              # pairwise correlation between covariates
Sig=matrix(r,k,k)
diag(Sig)=1
X=rmnorm(n,varcov=Sig)           # 50 correlated Gaussian covariates
U=pnorm(rmnorm(n,varcov=Sig))    # 50 correlated uniform covariates
p=exp(-U[,1]-X[,1]-1)/(1+exp(-U[,1]-X[,1]-1))   # true model uses only U1 and X1
Y=rbinom(n,size=1,p)             # binary response
df=data.frame(Y,U,X)
names(df)=c("Y",paste("U",1:50,sep=""),paste("X",1:50,sep=""))
reg=glm(Y~.,data=df,family="binomial")          # "true" regression, on the full dataset
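
As a quick sanity check (not in the original code, and the exact values will vary with the simulated sample), the estimated intercept and the coefficients of U1 and X1 should all be close to the true value -1, while most of the remaining 98 coefficients should be close to 0,

summary(reg)$coefficients[c("(Intercept)","U1","X1"),]   # quick check: estimates should be around -1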

In some sense, it is not too big, since we can run a regression on that dataset with a simple laptop (even if it can still be seen as a large dataset, in the sense discussed in http://businessweek.com/…). But let us consider an alternative strategy, to be able to get some predictions – or some model – in case we cannot run a regression. Two strategies will be compared:

  • generate 100 datasets with n/10 = 10,000 observations, by sub-sampling,
  • generate 100 datasets with n/100 = 1,000 observations, by sub-sampling.

On each dataset, we can now run a regression, and compare the estimates of the coefficients with the “true” regression (on the whole dataset, since here we can still run it). Then, since out of the 100 explanatory variables only 2 were actually used to generate the output, we should probably remove unnecessary variables from our model. So, some stepwise procedure was used.

L1=L2=L1s=L2s=list()   # models on n/10 and n/100 sub-samples, with and without stepwise selection
library(MASS)          # for stepAIC
ns1=n/10               # sub-samples of size 10,000
ns2=n/100              # sub-samples of size 1,000
for(s in 1:100){
i=sample(1:n,size=ns1,replace=TRUE)
reg_sub=glm(Y~.,data=df[i,],family="binomial")
L1[[s]]=reg_sub
L1s[[s]]=stepAIC(reg_sub)
i=sample(1:n,size=ns2,replace=TRUE)
reg0=glm(Y~.,data=df[i,],family="binomial")
L2[[s]]=reg0
L2s[[s]]=stepAIC(reg0)
}

For instance, if we consider the very first coefficient that should appear in the regression (let us forget about the intercept), or the second one (which was not used to generate the dataset), we get

VC=c(-1,-1,rep(0,49),-1,rep(0,49))   # true coefficients: intercept, U1,...,U50, X1,...,X50
coef=function(k){
C1=unlist(lapply(L1,function(x) coefficients(x)[k]))   # estimates on the 100 sub-samples of size n/10
C2=unlist(lapply(L2,function(x) coefficients(x)[k]))   # estimates on the 100 sub-samples of size n/100
m=summary(reg)$coefficients                            # estimates on the full dataset
u=seq(quantile(C2,.2),quantile(C2,.8),length=501)
v=dnorm(u,m[k,1],m[k,2])                               # Gaussian density of the full-sample estimator
plot(u,v,col="white",xlab="",ylab="",axes=FALSE)
axis(1)
polygon(c(u,rev(u)),c(v,rep(0,length(u))),col="grey",border=NA)
abline(v=VC[k],lty=2)                                  # true value of the coefficient
boxplot(C1,horizontal=TRUE,add=TRUE,at=max(v)/3)       # sub-samples of size n/10
boxplot(C2,horizontal=TRUE,add=TRUE,at=max(v)/3*2)     # sub-samples of size n/100
}

coef(2)   # coefficient of U1, whose true value is -1

where the density in grey is the Gaussian density of the estimator obtained from the large (and complete) dataset, and the boxplots are the estimates obtained on sub-samples (without the stepwise procedure, just to make sure that variable is kept).

For coefficients associated with variables not used to generate the dataset, we get graphs like the following.
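
(Those graphs can be obtained with the same function; since the coefficients are ordered as intercept, U1,…,U50, X1,…,X50, index 3 for instance corresponds to U2, which is not in the true model,)

coef(3)   # coefficient of U2, whose true value is 0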

So, clearly, the smaller the dataset, the larger the dispersion of the estimates. But so far, nothing new. In my previous post – too large datasets for regression? What about subsampling – my point was to discuss computational times, and a possible optimal size for the sub-datasets. Now, what about the impact of sub-sampling on predictions? Here, we fit a model on a small sample, but we can get a prediction on the whole dataset. In order to describe the goodness of fit of our regression model, let us plot ROC curves. More specifically, three kinds of lines will be plotted:

  • the ROC curve for the predicted probabilities obtained with the model fitted on the complete dataset [red]
  • the ROC curves for the predicted probabilities obtained with the models fitted on each of the 100 sub-samples [light blue]
  • the ROC curve for the predicted probabilities obtained by averaging the previous estimates [blue]
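
The ROC.curve function used below is not defined in this post (it comes from an earlier post on ROC curves). A minimal sketch, consistent with the way it is called here – a vector of scores and a vector of 0/1 outcomes, returning false and true positive rates – could be the following (the argument name threshold is mine),

ROC.curve=function(S,Y,threshold=seq(0,1,length=251)){
  # sketch of the ROC.curve function used below (the original is defined in an earlier post)
  FP=TP=rep(NA,length(threshold))
  for(j in 1:length(threshold)){
    Yhat=(S>threshold[j])*1                 # classify using threshold j
    FP[j]=sum((Yhat==1)&(Y==0))/sum(Y==0)   # false positive rate
    TP[j]=sum((Yhat==1)&(Y==1))/sum(Y==1)   # true positive rate
  }
  rbind(FP,TP)                              # row 1: x-axis, row 2: y-axis
}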

S=predict(reg,type="response")    # scores from the model fitted on the complete dataset
Y=df$Y
M.ROC=ROC.curve(S,Y)
plot(M.ROC[1,],M.ROC[2,],type="s",col="red")

Z=df$Y*0                          # will accumulate the predictions of the 100 sub-sample models
for(si in 1:100){
S=predict(L1s[[si]],type="response",newdata=df)   # prediction on the whole dataset
Z=Z+S
Y=df$Y
M.ROC=ROC.curve(S,Y)
lines(M.ROC[1,],M.ROC[2,],type="s",col="light blue")
}

S=Z/100                           # average of the 100 sub-sample predictions
Y=df$Y
M.ROC=ROC.curve(S,Y)
lines(M.ROC[1,],M.ROC[2,],type="s",col="blue",lwd=2)

If we consider sub-samples of size n/10 = 10,000, we get the following, and when we consider sub-samples of size n/100 = 1,000, without the stepwise procedure (most variables have a small coefficient, not significant) and after the stepwise procedure. Clearly – and that should not be a surprise – looking at predictions when the model was fitted on 1% of the dataset is not great (those ROC curves are substantially below the red ROC curve). But the interesting point is that averaging yields great results. In terms of ROC curves, we get essentially the same thing by

  • running one regression on our 100,000 × 100 matrix,
  • averaging the predictions obtained after running 100 regressions on 1,000 × 100 matrices.

Except that the first one might not be possible to run if the dataset were larger. And I have to admit that with the stepwise procedure, with 100 variables (of which 98 should – theoretically – be removed), it took some time! But still. I have the feeling that sub-sampling is extremely promising in the context of too-large datasets.
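
As a side note, one simple way to quantify that the blue and red ROC curves coincide is to compare areas under the curves. The auc function below is not from the original code, just a small rank-based (Mann-Whitney) sketch,

auc=function(S,Y){
  # hypothetical helper, not in the original post
  r=rank(S)
  n1=sum(Y==1); n0=sum(Y==0)
  (sum(r[Y==1])-n1*(n1+1)/2)/(n1*n0)   # Mann-Whitney statistic = area under the ROC curve
}
auc(predict(reg,type="response"),df$Y)   # full-sample model
auc(Z/100,df$Y)                          # averaged sub-sample predictions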




5 thoughts on “There is no “Too Big” Data, is there?”

  1. Hi Arthur

The pages on subsampling and ROC curves referenced in this article no longer display some of the mathematical equations, which makes it difficult to understand what’s going on.

    Greets

    JR

  2. “Micro-segment” is more a figure of speech.

    It fits the current trend, which is very clearly toward segmentation in terms of client demand (I am speaking from a consulting point of view). A consequence of the upcoming loi Hamon, no doubt.

    Beyond that, the pricing trade-off between risk pooling and segmentation is more a matter of policy than of actuarial science (see MAIF vs AGA).

  3. Hello,

    massive datasets are indeed a major issue in pricing.

    I am currently working on a dataset of 5 million rows and 80 variables after preprocessing.

    I am not really convinced by the sub-sampling methodology when the goal is precisely to identify pricing micro-segments.

    The example presented is interesting, but it relies on only 2 variables.

    R offers the bigmemory and bigglm packages to handle this kind of dataset, I believe.

    Best regards

    1. yes, there are dedicated packages in R, but I wanted to try an alternative approach, because I think there are things to explore in that direction…

      regarding the 2-variable story, my theory is that a few variables can explain a lot, and beyond that it is mostly statistical noise (2 was exaggerated, but 20 might be realistic). In other words, I have trouble seeing how robust pricing micro-segments can be… I think they are precisely something artificial (especially in insurance). Moreover, micro-segments go, in my view, against the very idea of risk pooling. We are not looking for perfect homogeneity within a group, but for a realistic homogeneity that a policyholder can believe in. No?
