
I Got The Feelin’

Last week, I went through my CD collection, trying to find records I had not listened to for a while. And I got the feeling that the music I listen to nowadays is slower than what I was listening to in my 20s. I was wondering whether that was an age issue, or simply the fact that music in the 90s was "faster" than music released in 2015. So I tried to scrape the BPM database to get a more appropriate answer to this "feeling" of mine. I extracted two pieces of information: the BPM (beats per minute) and the year (of release).

Here is a function to extract information from the website,

> library(XML)
> # extract the table of tracks for a given BPM value (VBP) and result page (P)
> extractbpm = function(VBP,P){
+ url=paste("https://www.bpmdatabase.com/music/search/?artist=&title=&bpm=",VBP,"&genre=&page=",P,sep="")
+ download.file(url,destfile = "page.html")
+ tables=readHTMLTable("page.html")
+ return(tables)}

For instance

> extractbpm(115,13)
$`track-table`
Artist Title
1 Eros Ramazzotti y Claudio Guidetti Dimelo A Mi
2 Everclear Volvo Driving Soccer Mom
3 Evils Toy Dear God
4 Expose In Walked Love
5 Fabolous ft. 2 Chainz When I Feel Like It
6 Fabolous ft. 2 Chainz When I Feel Like It
7 Fabolous ft. 2 Chainz When I Feel Like It
8 Fanny Lu Fanfarron
9 Featurecast Ain't My Style
10 Fem 2 Fem Obsession
11 Fernando Villalona Mi Delito
12 Fever Ray Triangle Walks
13 Firstlove Freaky
14 Fito Blanko Pegadito Suavecito
15 Flechazo Del Norte Mariposa Traicionera
16 Fluke Switch/Twitch
17 Flyleaf Something Better
18 FM Static The Next Big Thing
19 Fonseca Eres Mi Sueno
20 Fonseca ft. Maffio & Nayer Eres Mi Sueno
21 Francesca Battistelli Have Yourself A Merry Little Christmas
22 Frankie Ballard Young & Crazy
23 Frankie J. More Than Words
24 Frank Sinatra The Hucklebuck
25 Franz Ferdinand The Dark Of The Matinée
Mix BPM Genre Label Year
1 — 115 — Sony 2009
2 — 115 — Capitol Records 2003
3 — 115 — — —
4 — 115 — Arista Records 1994
5 Explicit 115 Urban Def Jam/Island Def Jam 2013
6 — 115 Urban Def Jam/Island Def Jam 2013
7 Radio Edit 115 Urban Def Jam/Island Def Jam 2013
8 — 115 Latin Pop Universal Latino 2011
9 Psychemagik Dub 115 — Jalapeno 2012
10 — 115 — Critique Records 1993
11 — 115 — Mt&vi Records/caminante Records 2001
12 Rex The Dog Remix 115 — Little Idiot/Mute 2012
13 — 115 — Jwp Music 2000
14 — 115 Merengue Mambo Crown Loyalty 2012
15 — 115 — Hacienda 2010
16 Album Version 115 — One Little Indian Records 2004
17 — 115 Alternative A&M/Octone 2013
18 — 115 — Tooth & Nail Records 2007
19 — 115 Merengue Mambo 10 2012
20 Urban Version 115 — 10 2012
21 — 115 — Word/Fervent/Warner Bros 2009
22 — 115 Country Warner Bros 2015
23 Mynt Rocks Radio Edit 115 — Columbia 2005
24 — 115 Jazz Columbia 1950
25 — 115 New Wave — 2004

Note one of the few old songs here, a 1950 tune by Frank Sinatra. To scrape the whole website, we use a simple loop (with the BPM going from 40 to 200). Start with

> BASE=NULL
> vbp=40
> p=1

and then, a loop based on

> while(vbp<=200){
+ F=extractbpm(vbp,p)
+ if(length(F)==1){
+ BASE=rbind(BASE,F[[1]][,c("Artist","Title","BPM","Year")])
+ p=p+1}
+ if(length(F)==0){
+ vbp=vbp+1
+ p=1}}

Then we should clean the dataset

BASE=BASE[!duplicated(BASE),]
BASE=BASE[BASE$Year!="—",]
BASE$y=as.numeric(as.character(BASE$Year))
BASE$bpm=as.numeric(as.character(BASE$BPM))
BASE=BASE[BASE$y>=1940,]

and we end up with almost 50,000 tunes.

boxplot(BASE$bpm~as.factor(BASE$y),
col="light blue")

Over the past 20 years, it looks like the speed of tunes has declined (let us forget the tunes from 2017; clearly we have a problem there…)

library(mgcv)
plot(BASE$y,BASE$bpm)
reg=gam(bpm~s(y),data=BASE)
B=data.frame(y=1950:2017)
p=predict(reg,newdata=B)
lines(B$y,p,lwd=3,col="red")

which is confirmed with a (smoothed) regression

p2=predict(reg,newdata=B,se.fit=TRUE)
plot(B$y,p2$fit,lwd=3,col="red",type="l",ylim=c(90,140))
lines(B$y,p2$fit+p2$se.fit)
lines(B$y,p2$fit-p2$se.fit)

even when incorporating the confidence band. The bumps are probably related to the smoothing parameters but, indeed, it looks like the average speed of music tunes has decreased, from 110-115 bpm in the 90s to less than 100 nowadays. Now, to be honest, I would love to have access to personal data from iTunes, Deezer or Spotify, to get a better understanding (e.g. when in the week, or in the day, do we like to listen to faster music?). But so far, I have not been able to get access to such data. Too bad…
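As a side note on those bumps: here is a minimal sketch, assuming the same BASE dataset as above, of how the basis dimension of the smoother (the k argument of s() in mgcv) changes the wiggliness of the fitted curve,

library(mgcv)
# a small basis dimension forces a smoother (less bumpy) curve,
# a larger one lets the fit follow the data more closely
reg_smooth = gam(bpm~s(y,k=5), data=BASE)
reg_wiggly = gam(bpm~s(y,k=30), data=BASE)
B = data.frame(y=1950:2017)
plot(B$y, predict(reg_wiggly, newdata=B), type="l", col="red", lwd=3)
lines(B$y, predict(reg_smooth, newdata=B), col="blue", lwd=3)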

Too large datasets for regression? What about subsampling…

Recently, a classmate working in an insurance company told me he had datasets too large to run simple regressions (GLMs, which involve optimization routines), and that they were thinking of offering a reward to whoever would write the best (or at least the fastest) R code. My first idea was to use subsampling techniques, arguing that 10 regressions on 100,000 observations can take less time than one regression on 1,000,000 observations. And perhaps also give better results…

  • Time to run a regression, as a function of the number of observations

Here, I generate a dataset as follows

$Y_i\sim\mathcal{P}(\lambda_i)$ where $\lambda_i=\exp\big(x_{1,i}+0.2\,x_{5,i}-4\,b_{2,5}(x_{3,i})+\mathbf{1}(x_{2,i}=A)-2\,\mathbf{1}(x_{2,i}=B)-5\,\mathbf{1}(x_{2,i}=C)\big)$, with $b_{2,5}$ the Beta(2,5) density,

and we fit

$Y_i\sim\mathcal{P}(\lambda_i)$ where $\log\lambda_i=\log E_i+s(x_{1,i})+\boldsymbol{\beta}^\top\boldsymbol{x}_i$ (the remaining covariates $x_{2,i},\dots,x_{6,i}$ entering linearly),

where $s(\cdot)$ is a spline function (just to make it as general as possible, since in insurance ratemaking we include continuous covariates that do not influence the claims frequency linearly in the score). Yes, there are also useless variables, including one that is strongly correlated with a variable that does have an impact in the regression. The code to generate the dataset is simply

> library(mnormt)   # for rmnorm
> library(splines)  # for bs() in the regressions below
> n=10000
> X1=rexp(n)
> X2=sample(c("A","B","C"),size=n,replace=TRUE)
> X3=runif(n)
> Z=rmnorm(n,c(0,0),matrix(c(1,0.8,.8,1),2,2))
> X4=Z[,1]
> X5=Z[,2]
> X6=X1^2
> E=runif(n)
> lambda=.2*X5-4*dbeta(X3,2,5)+X1+
+ 1*(X2=="A")-2*(X2=="B")-5*(X2=="C")
> Y=rpois(n,exp(lambda))
> base=data.frame(Y,X1,X2,X3,X4,X5,X6,E)

We would like to study the time it takes to run a regression, as a function of the size (i.e. the number of lines, n) of the dataset.

> system.time( glm(Y~bs(X1)+X2+X3+X4+
+ X5+X6+offset(log(E)),family=poisson,
+ data=base) )
   user  system elapsed
   0.25    0.00    0.25

Here, the time I look at is the last one (the elapsed time). So far this was rather simple, but it is not the best model I can get. Let us use stepwise (backward) variable selection,

> system.time( step(glm(Y~bs(X1)+X2+X3+
+ X4+X5+X6+offset(log(E)),family=poisson,
+ data=base)) )
Start:  AIC=2882.1
Y ~ bs(X1) + X2 + X3 + X4 + X5 + X6 + offset(log(E))
Step:  AIC=2882.1
Y ~ bs(X1) + X2 + X3 + X4 + X5 + offset(log(E))
Df Deviance    AIC
<none>        2236.0 2882.1
- X5      1   2240.1 2884.2
- X4      1   2244.1 2888.2
- X3      1   4783.2 5427.3
- X2      2   5311.4 5953.5
- bs(X1)  3   6273.7 6913.8
   user  system elapsed
   1.82    0.03    1.86

Finally, from the first regression we get the points in black (based on 200 simulated datasets), and with the stepwise procedure we get the points in red.
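For completeness, here is a minimal sketch of how such timings can be collected. The function simubase() is assumed to wrap the data-generating code above for an arbitrary n, and the grid of sizes and the number of replications are mine, not the ones used for the graph,

library(splines)
timing = NULL
for(n in c(1e3, 5e3, 1e4, 5e4, 1e5)){
  for(s in 1:10){
    base = simubase(n)   # hypothetical wrapper around the generation code above
    t_glm = system.time( glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
                             family=poisson, data=base) )["elapsed"]
    t_step = system.time( step(glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
                                   family=poisson, data=base)) )["elapsed"]
    timing = rbind(timing, data.frame(n=n, t_glm=t_glm, t_step=t_step))
  }
}
plot(timing$n, timing$t_glm, pch=19)               # plain regression, in black
points(timing$n, timing$t_step, pch=19, col="red") # stepwise procedure, in red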

It might look linear (proportional), but if it were linear, then on a log-log scale we should also get straight lines, with slope 1,

Actually, it looks like a convex function.
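To check that slope, a quick sketch (reusing the hypothetical timing data frame from the sketch above),

plot(log(timing$n), log(timing$t_glm), pch=19)
# a slope close to 1 would mean the running time is proportional to the sample size
lm(log(t_glm)~log(n), data=timing)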

That convexity might easily be misinterpreted. On the graph below, on the left, a regression on a dataset two times bigger than the previous one (black point) will take less than two times longer to run, while on the right, it will take more than two times longer,

Convexity can simply be interpreted as "too large datasets take time, and too small ones too…". Which is a first step: it could be interesting, in some cases, to run several regressions on smaller datasets…

  • Running 100 regressions on 100 lines, or running 1 regression on 10,000 lines?

Here, we have datasets with n=200,000 lines. The question is: how long will it take if we subdivide them into k subsamples (of equal size), and run k regressions?

> # assuming k (the number of subsamples) has already been set
> nk=trunc(n/k); classe=rep(1:k,each=nk); nt=nk*k
> base=data.frame(Y=Y[1:nt],X1=X1[1:nt],
+ X2=X2[1:nt],X3=X3[1:nt],X4=X4[1:nt],X5=X5[1:nt],
+ X6=X6[1:nt],E=E[1:nt],classe=classe)
> system.time( for(j in 1:k){
+  glm(Y~bs(X1)+X2+X3+X4+X5+
+ X6+offset(log(E)),family=poisson
+ ,data=base,subset=classe==j) })
   user  system elapsed
   1.31    0.00    1.31
> system.time( for(j in 1:k){
+      step(glm(Y~bs(X1)+X2+X3+
+ X4+X5+X6+offset(log(E)),family=
+ poisson,data=base,subset=classe==j)) })
Start:  AIC=183.97
Y ~ bs(X1) + X2 + X3 + X4 + X5 + X6 + offset(log(E))

[…]

  Df Deviance    AIC
<none>        117.15 213.04
- X2      2   250.15 342.04
- X3      1   251.00 344.89
- X4      1   420.63 514.53
- bs(X1)  3   626.84 716.74
   user  system elapsed
  11.97    0.03   12.31

On the graph below, we have the time (y-axis, here on a log scale) it took to run k regressions on samples of size n/k, as a function of k (x-axis), including the time it took to run the regression on a dataset of size n, which is the concentration of dots on the left (i.e. k=1), both with the 6 regressors (in black) and with a stepwise procedure (in red). One has to keep in mind that I did not turn off the printing in the stepwise procedure, so it might be difficult to compare the two clouds (black vs. red). Nevertheless, we clearly see that if we run k regressions on samples of size n/k, when k is not too large, i.e. less than 10 or 15, it is not longer than the regression on n=200,000 lines.
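A sketch of the kind of loop behind that graph (with n=200,000 and the dataset base regenerated accordingly; the grid of values for k is mine),

times_k = NULL
for(k in c(1,2,5,10,20,50,100)){
  nk = trunc(n/k); nt = nk*k
  b = base[1:nt,]
  b$classe = rep(1:k, each=nk)   # subsample index
  t_k = system.time( for(j in 1:k){
    glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
        family=poisson, data=b, subset=classe==j) })["elapsed"]
  times_k = rbind(times_k, data.frame(k=k, elapsed=t_k))
}
plot(times_k$k, times_k$elapsed, log="y", pch=19)  # running time (log scale) against k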

So here we see that running 100 regressions on 2,000 lines takes longer than running 1 regression on 200,000 lines… But maybe we are not comparing things that are actually comparable: what if it takes a bit longer, but we strongly improve the quality of our estimators?

  • What about the quality of the output?

Here, we consider only one dataset, with n=100,000 lines (just to make it run a bit faster), and k=20 subsets. Recall that the generated dataset is from

$Y_i\sim\mathcal{P}(\lambda_i)$ where $\lambda_i=\exp\big(x_{1,i}+0.2\,x_{5,i}-4\,b_{2,5}(x_{3,i})+\mathbf{1}(x_{2,i}=A)-2\,\mathbf{1}(x_{2,i}=B)-5\,\mathbf{1}(x_{2,i}=C)\big)$, with $b_{2,5}$ the Beta(2,5) density,

and we fit

$Y_i\sim\mathcal{P}(\lambda_i)$ where $\log\lambda_i=\log E_i+s(x_{1,i})+\boldsymbol{\beta}^\top\boldsymbol{x}_i$ (the remaining covariates $x_{2,i},\dots,x_{6,i}$ entering linearly).

Here, we plot the estimator $\widehat{\beta}$ of one of the coefficients, along with a confidence interval, defined as

$\big[\,\widehat{\beta}\pm 1.96\,\widehat{\mathrm{se}}(\widehat{\beta})\,\big]$

The light blue segment is the initial estimator, while the blue one is obtained from the stepwise procedure. The grey area represents the estimation on the overall sample, while the k segments on the right are the k estimators (each on a sample of size n/k).
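A minimal sketch of how those segments can be obtained, here for the coefficient of X5 (the post does not say which coefficient is plotted, so that choice is mine), assuming base and classe are built as above with n=100,000 and k=20,

# estimate and 95% confidence interval on the overall sample
reg_all = glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)), family=poisson, data=base)
est_all = summary(reg_all)$coefficients["X5", 1:2]
ci_all  = est_all[1] + c(-1.96, 1.96)*est_all[2]
# estimates and confidence intervals on the k subsamples
EST = matrix(NA, k, 2)
for(j in 1:k){
  reg_j = glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
              family=poisson, data=base, subset=classe==j)
  EST[j,] = summary(reg_j)$coefficients["X5", 1:2]
}
CI = cbind(EST[,1]-1.96*EST[,2], EST[,1]+1.96*EST[,2])
mean(EST[,1])   # average of the k estimators (the horizontal dotted line)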

We can see that there is much more volatility in those k estimators, but their average (horizontal dotted lines) is not so bad… The true value (i.e. the one used to generate the dataset) is the dotted black horizontal line.
And if we repeat that on 1,000 simulated datasets, we obtain the following distribution for $\widehat{\beta}$ (blue line), so we have an unbiased estimator of our parameter (the vertical line being the true value), here including a stepwise procedure,

But if we add the red curve, the average of the k estimators $\frac{1}{k}\sum_{j=1}^{k}\widehat{\beta}_j$ (the previous one now being the light blue line in the back), we see that taking the average of the estimators on the subsamples is not bad at all, on the contrary,
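And a sketch of the simulation behind those density curves (again with the hypothetical simubase() wrapper and the coefficient of X5; the stepwise step is omitted here to keep it short),

res = NULL
for(s in 1:1000){
  base = simubase(1e5)                  # a new dataset with 100,000 lines
  classe = rep(1:20, each=5000)         # 20 subsamples of 5,000 lines
  b_full = coef(glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
                    family=poisson, data=base))["X5"]
  b_sub = sapply(1:20, function(j)
    coef(glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
             family=poisson, data=base, subset=classe==j))["X5"])
  res = rbind(res, data.frame(full=b_full, avg=mean(b_sub)))
}
plot(density(res$avg), col="red")          # average of the 20 sub-estimators
lines(density(res$full), col="lightblue")  # estimator on the full sample
abline(v=0.2, lty=2)                       # true value of the X5 coefficient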

and for those who think that the stepwise procedure is a mistake, here is what we get without it,

So what we can see is that running 20 regressions can take (a little) more time (from what we saw earlier) than running only one on the whole dataset… but it provides better estimates. So the tradeoff is not that simple, and maybe running several regressions on subsamples of huge datasets can be a proper alternative.