
Wasting Time (and Givin’ Up)

There was an interesting post, published a few days ago, entitled This Blog is a Waste of My Time. The funny thing is that I had exactly the same experience at the same time. Since 2013 had just ended, I wanted to update my resume. And I observed that I had zero publications in the past two years. Zero. Nada. Nothing published in 2012 and nothing published in 2013. Of course, it is mainly a timing issue, since several papers are still in the loop, and I might end up with a few papers published in 2014 (at least one was accepted the first week of 2014). But still… When I decided to turn off my laptop yesterday evening at 2 a.m. (this morning, actually), I started wondering whether blogging wasn’t also a waste of time. Or whether it was something else.

  • My Research is a Waste of My Time (and not only mine)

This will sound like a cliché, but academics do waste a lot of time when doing (or pretending to do) research,

  1. wasting time applying for grants: do I have to be more specific here? By wasting time, I mean working for (almost) a month to fill in forms, and then getting a “your proposal is extremely interesting, you got positive feedback from the reviewers, unfortunately, there’s no funding from the government…“. We have all had this experience. We’re wasting our time here… and the reviewers’ time, too.
  2. wasting time in committees: as mentioned above, I have to spend time in research committees, reading grant applications, but also in faculty committees, discussing office allocation for instance. We have more postdocs, visitors and interns than available seats… and for some reason, I am on the bargaining committee, trying to argue with my colleague that my postdoc staying for 6 weeks should be ahead of his visitor staying for 2 weeks on the priority list. I do have to do this, but you have to admit that, somehow, we waste the time of four tenured professors (plus me, I am not tenured) on some bullshit here… I am also on the PhD committee, where we receive applications. In December, we spent a lot of time (many hours, discussing and arguing) on the application of one potentially interesting candidate, and we are not even sure that, if we agree to enroll him in the PhD program, the candidate will join. If he does not come, that will have been a waste of time.
  3. wasting time in the review process: it looks like I spend more time reading and writing reports on others’ papers than writing my own! OK, that might be an actual quote I got from one of my referees on a recent paper… I keep saying that I should start saying “no, I am too busy” when an editor asks me for a review. But I also keep saying that a lot of bullshit manages to get published. So I cannot stand aside and wait. I mean, I could: I’m French and we’re usually good at this kind of thing. But I’d rather be involved in the process, advise the editor if the paper is not worth it, and help to improve the paper if it might be interesting. But again, I spend a lot of time in this process. I know what others are doing, but I keep delaying my own work. You cannot find my name if you look for articles published in 2012 or 2013, but I am somewhere, as one of those anonymous referees thanked at the end of the article (who sometimes spent more time on the paper than the PhD supervisor who barely knows what the paper is about, but still has his – or her – name on it). There was an interesting post by Rob Hyndman entitled How to get your paper rejected quickly a few weeks ago. I still don’t know if I agree with everything, but I agree that “reviewers spend a great deal of time providing comments, and it is disrespectful to ignore them” (I would say “might spend“, but I do not want to argue on that point today). A lot of time is wasted in the publication process.
  4. wasting time trying to get data: before Julie started her internship in September, I tried to get datasets to work on demographic problems. I started discussing (and filling in) forms to get French datasets, and managed to get a smaller one in Québec. The agenda was simple: we work on the small dataset, write the code, and then, once we get the big dataset, we simply use the code that we tested on the small one. After 6 months, I still wonder whether my form has been accepted, and whether, someday, I will be able to get access to this dataset. I know that the dataset exists. I mean, I know that two datasets exist, and I just asked for a merge… but it looks like there might be ethical considerations, so it takes time.

I do waste a lot of time in the process of doing research, and I am not even mentioning procrastination here. Actually, I believe that procrastination is extremely important, and is not a waste of time… But I will get back to that point, someday, in another post.

  • My Teaching Related Duties are a Waste of My Time

I will not claim that teaching is a waste of time. I am still extremely pretentious, and I believe that by the end of my courses, my students could actually learn something… But the problem is more with the associated duties. One might think of writing the exams (and sketches of solutions) or grading (since I do not have T.A.s to help). It takes time. A lot of time, actually. But I won’t consider it wasted. Two short stories to explain what I mean (both occurred in the Autumn session):

  1. in September, I gave a course with 4 scheduled tests. A few hours before the first one, I got an email from a student, asking me to reschedule it because he could not be there, and to postpone his exam by a few days. I said no, essentially because we had signed an agreement on the first day of class, and the student already knew, by that time, that he would not be able to be there for the exam. And he never told me before. I decided to stand on this principle. The thing is, the student invoked religious matters, and I understood that things would soon start to stink. But I had principles. I got some moral support from my colleagues, and from my Dean, but everyone was telling me that I was in charge of this battle (“we support you, but you’re on your own“) since we have our academic independence. I did ask for legal backup from the Professors’ Union (three times) and got no feedback. Then, I heard that a letter had been sent to the rector by a lawyer, and within 10 minutes, I gave the student everything he had asked for. If he had asked me to hold the test on a Sunday, I would have said yes… Just because lawyers’ basic rule is to waste others’ time, or money. So I gave up. I did not want to waste my time on that battle, on my own. The funny (?) side of this story is that so did the student: I agreed to postpone the test to the end of the session, he came to the second test (but never showed up in my class) and got a little bit more than 30%. I did not get further news from him, and he did not take the other tests. But I did waste quite some time, and got some bad nights and insomnia, too.
  2. in December, I was grading some homework I had given to my students (practical, on databases) and I saw on two forums that a pair of students was asking for help. Actually, it was not help but “could you please do this for me?” They did mention the number of their database (each group had a different database, and the person who posted the question on those forums was located in Montréal). This was fraud, or at least attempted fraud. So I gave them 0% – on that specific homework. The students confessed that they did ask for help on the forums (but never asked me anything)… and I gave up. I mean, I decided to grade their work, and I filled in a fraud report, sent to the faculty, so that it would be someone else’s problem. I did not want to spend time arguing that those students clearly should not pass the course (one of them got only 20% on the final written exam – the other one 36%): if they wanted to learn something, taking the course again in the Winter session was clearly an opportunity… But they didn’t get it, and I gave up.

I clearly waste time on a lot of things. But when I look back at the past four or five years, I might feel ashamed not to have more prestigious (somehow) publications, or lecture notes without typos everywhere, but at least the blog is something I am still proud of, sort of. And when I end up working, tired, around 2 a.m., I have the feeling that something is wrong, and that a lot of time has been wasted. And I have to confess that I think I should give up on something… But I don’t think it will be on my blogging activity.

Too large datasets for regression? What about subsampling…

Recently, a classmate working in an insurance company told me he had datasets that were too large to run simple regressions on (GLMs, which involve an optimization routine), and that they were thinking of a reward for the one who would write the best (or at least the fastest) R code. My first idea was to use subsampling techniques, arguing that 10 regressions on 100,000 observations can take less time than one regression on 1,000,000 observations. And perhaps also provide better results…

  • Time to run a regression, as a function of the number of observations

Here, I generate a dataset as follows

\(Y_i\sim\mathcal{P}(\exp(\lambda_i))\), with \(\lambda_i = X_{1,i}+0.2\,X_{5,i}-4\,f_{2,5}(X_{3,i})+\mathbf{1}(X_{2,i}=A)-2\cdot\mathbf{1}(X_{2,i}=B)-5\cdot\mathbf{1}(X_{2,i}=C)\), where \(f_{2,5}\) is the density of the Beta(2,5) distribution,

and we fit

\(Y_i\sim\mathcal{P}(E_i\,\exp(\eta_i))\), with \(\eta_i = s(X_{1,i})+\beta_{X_{2,i}}+\beta_3 X_{3,i}+\beta_4 X_{4,i}+\beta_5 X_{5,i}+\beta_6 X_{6,i}\)

where \(s(\cdot)\) is a spline function (just to make it as general as possible, since in insurance ratemaking, we include continuous covariates that do not influence the claims frequency linearly in the score). Yes, there are also useless variables, including one which is strongly correlated with a variable that does have an impact in the regression. The code to generate the dataset is simply

> library(mnormt)    # for rmnorm, used below
> library(splines)   # for bs(), used in the regressions
> n=10000
> X1=rexp(n)
> X2=sample(c("A","B","C"),size=n,replace=TRUE)
> X3=runif(n)
> Z=rmnorm(n,c(0,0),matrix(c(1,0.8,.8,1),2,2))
> X4=Z[,1]
> X5=Z[,2]
> X6=X1^2
> E=runif(n)
> lambda=.2*X5-4*dbeta(X3,2,5)+X1+
+1*(X2=="A")-2*(X2=="B")-5*(X2=="C")
> Y=rpois(n,exp(lambda))
> base=data.frame(Y,X1,X2,X3,X4,X5,X6,E)

We would like to study the time it takes to run a regression, as a function of the size (i.e. the number of lines n) of the dataset.

> system.time( glm(Y~bs(X1)+X2+X3+X4+
+ X5+X6+offset(log(E)),family=poisson,
+ data=base) )
   user  system elapsed
   0.25    0.00    0.25

Here, the time I look at is the last one (the elapsed time). So far the model was rather simple, but it is not the best one I can get. Let us use a stepwise (backward) variable selection,

> system.time( step(glm(Y~bs(X1)+X2+X3+
+ X4+X5+X6+offset(log(E)),family=poisson,
+ data=base)) )
Start:  AIC=2882.1
Y ~ bs(X1) + X2 + X3 + X4 + X5 + X6 + offset(log(E))
Step:  AIC=2882.1
Y ~ bs(X1) + X2 + X3 + X4 + X5 + offset(log(E))
Df Deviance    AIC
<none>        2236.0 2882.1
- X5      1   2240.1 2884.2
- X4      1   2244.1 2888.2
- X3      1   4783.2 5427.3
- X2      2   5311.4 5953.5
- bs(X1)  3   6273.7 6913.8
   user  system elapsed
   1.82    0.03    1.86

Finally, from the first regression, we get the points in black (based on 200 simulated datasets), and from the stepwise procedure, the points in red.

i.e. it might look linear (proportional), but if it were linear, then on a log-log scale, we should also get straight lines, with slope 1,

Actually, it looks like a convex function.

The interpretation of that convexity might lead to misinterpretation. On the graph below, on the left, a dataset twice as big as the previous one (black point) will take less than twice as long to run, while on the right, it will take more than twice as long,

Convexity can simply be interpreted as “too large datasets take time, and too small ones do too…”. Which is a first step: it might be interesting, in some cases, to run several regressions on smaller datasets…
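
For completeness, here is a minimal sketch of how the timings above could be collected (this is not the original code: generate_base() is a hypothetical wrapper around the data-generating code shown earlier, and the grid of sample sizes is arbitrary),

> # sketch only: elapsed time of one Poisson GLM, for various sample sizes
> library(splines)
> time_glm=function(n){
+   base=generate_base(n)   # hypothetical wrapper around the simulation code above
+   system.time(glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
+     family=poisson,data=base))[3]   # [3] = elapsed time, in seconds
+ }
> sizes=round(10^seq(3,5.3,length=200))   # arbitrary grid of sample sizes
> times=sapply(sizes,time_glm)
> plot(sizes,times,log="xy",pch=19,xlab="n",ylab="elapsed time (s)")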

  • Running 100 regressions on 100 lines, or running 1 regression on 10,000 lines?

Here, we have datasets with n=200,000 lines (generated exactly as above). The question is: how long will it take if we subdivide them into k subsamples (of equal size), and run k regressions?

> k=10   # number of subsamples (illustrative value; k varies in the experiment below)
> nk=trunc(n/k); classe=rep(1:k,each=nk); nt=nk*k
> base=data.frame(Y=Y[1:nt],X1=X1[1:nt],
+ X2=X2[1:nt],X3=X3[1:nt],X4=X4[1:nt],X5=X5[1:nt],
+ X6=X6[1:nt],E=E[1:nt],classe)
> system.time( for(j in 1:k){
+  glm(Y~bs(X1)+X2+X3+X4+X5+
+ X6+offset(log(E)),family=poisson
+ ,data=base,subset=classe==j) })
   user  system elapsed
   1.31    0.00    1.31
> system.time( for(j in 1:k){
+      step(glm(Y~bs(X1)+X2+X3+
+ X4+X5+X6+offset(log(E)),family=
+ poisson,data=base,subset=classe==j)) })
Start:  AIC=183.97
Y ~ bs(X1) + X2 + X3 + X4 + X5 + X6 + offset(log(E))

[…]

  Df Deviance    AIC
<none>        117.15 213.04
- X2      2   250.15 342.04
- X3      1   251.00 344.89
- X4      1   420.63 514.53
- bs(X1)  3   626.84 716.74
   user  system elapsed
  11.97    0.03   12.31

On the graph below, we have the time (y-axis, here on a log scale) it took to run k regressions on samples of size n/k, as a function of k (x-axis), including the time it took to run the regression on a dataset of size n, which is the concentration of dots on the left (i.e. k=1), both for the regression on the 6 regressors (in black) and with a stepwise procedure (in red). One has to keep in mind that I did not remove the printing option in the stepwise procedure, so it might be difficult to compare the two clouds (black vs. red). Nevertheless, we clearly see that if we run k regressions on samples of size n/k, when k is not too large, i.e. less than 10 or 15, it is not longer than the single regression on n=200,000 lines.

So here we see that running 100 regressions on 2,000 lines takes longer than running 1 regression on 200,000 lines… But maybe we are not comparing things that are actually comparable: what if it takes a bit longer, but we strongly improve the quality of our estimators?
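
As a rough sketch of how the figure above could be produced (assuming base already contains the 200,000 simulated lines, with the same column names as before; the grid of values of k is arbitrary, and this is not the original code),

> # sketch only: total elapsed time of the k subsample regressions, for several k
> n=200000
> ks=c(1,2,5,10,20,50,100)
> total=sapply(ks,function(k){
+   nk=trunc(n/k); nt=nk*k; classe=rep(1:k,each=nk)
+   system.time(for(j in 1:k){
+     glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
+       family=poisson,data=base[1:nt,][classe==j,])})[3]
+ })
> plot(ks,total,log="y",pch=19,xlab="k",ylab="total elapsed time (s)")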

  • What about the quality of the output?

Here, we consider only one dataset, with n=100,000 lines (just to make it run a bit faster), and k=20 subsets. Recall that the generated dataset is from

\(Y_i\sim\mathcal{P}(\exp(\lambda_i))\), with \(\lambda_i = X_{1,i}+0.2\,X_{5,i}-4\,f_{2,5}(X_{3,i})+\mathbf{1}(X_{2,i}=A)-2\cdot\mathbf{1}(X_{2,i}=B)-5\cdot\mathbf{1}(X_{2,i}=C)\),

and we fit

\(Y_i\sim\mathcal{P}(E_i\,\exp(\eta_i))\), with \(\eta_i = s(X_{1,i})+\beta_{X_{2,i}}+\beta_3 X_{3,i}+\beta_4 X_{4,i}+\beta_5 X_{5,i}+\beta_6 X_{6,i}\)

Here, we plot \(\widehat{\beta}\), the estimator of one of the coefficients, and a confidence interval, defined as

\(\left[\widehat{\beta}-1.96\,\widehat{\mathrm{se}}(\widehat{\beta})\,;\,\widehat{\beta}+1.96\,\widehat{\mathrm{se}}(\widehat{\beta})\right]\)

The light blue segment is the initial estimator, while the blue one is obtained from the stepwise procedure. The grey area represents the estimation on the overall sample, while the k segments on the right are the k estimators (each on samples of size n/k).
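
Those segments could be obtained with something like the following sketch (not the original code: it assumes base and classe were rebuilt as above with n=100,000 and k=20, and it uses the coefficient of X4 purely as an illustration, with the usual Gaussian interval at plus or minus 1.96 standard errors),

> # sketch only: estimate and 95% confidence interval of one coefficient,
> # on the overall sample and on each of the k subsamples
> estim=function(d){
+   reg=glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),family=poisson,data=d)
+   c(coef(reg)["X4"],summary(reg)$coefficients["X4","Std. Error"])
+ }
> overall=estim(base)
> subs=sapply(1:k,function(j) estim(base[classe==j,]))
> overall[1]+c(-1.96,1.96)*overall[2]                      # the grey area
> rbind(subs[1,]-1.96*subs[2,],subs[1,]+1.96*subs[2,])     # the k segments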

We can see that there is much more volatility in those k estimators, but their average (horizontal dotted lines) is not so bad… The true value (i.e. the one used to generate the dataset) is the dotted black horizontal line.
And if we repeat that on 1,000 simulated datasets, we obtain the following distribution for \(\widehat{\beta}\) (blue line), so we have an unbiased estimator of our parameter (the vertical line being the true value), here including a stepwise procedure,

But if we add, in red, the curve of the average of the k subsample estimators, \(\frac{1}{k}\sum_{j=1}^{k}\widehat{\beta}_j\) (the previous curve now being the light blue line in the back), we see that taking the average of estimators computed on subsamples is not bad at all, on the contrary,

and for those who think that the stepwise procedure is a mistake, here is what we get without it,

So what we can see is that running 20 regressions can take (a little) more time (from what we’ve seen earlier) than running only one on the whole dataset… but it provides better estimates. So the tradeoff is not that simple, and maybe running several regressions on subsamples of huge datasets can be a proper alternative.
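
To make the comparison concrete, here is a hedged sketch of that Monte-Carlo experiment (again, generate_base() is a hypothetical wrapper around the data-generating code, and the coefficient of X4 is used only as an illustration),

> # sketch only: distribution of the full-sample estimator vs. the average
> # of the k subsample estimators, over 1,000 simulated datasets
> one_coef=function(d) coef(glm(Y~bs(X1)+X2+X3+X4+X5+X6+offset(log(E)),
+   family=poisson,data=d))["X4"]
> one_simulation=function(n=100000,k=20){
+   base=generate_base(n)             # hypothetical wrapper around the simulation code
+   classe=rep(1:k,each=n/k)
+   c(full=unname(one_coef(base)),
+     subsample=mean(sapply(1:k,function(j) one_coef(base[classe==j,]))))
+ }
> res=replicate(1000,one_simulation())
> plot(density(res["subsample",]),col="red",main="",xlab="estimate")   # average of subsample estimators
> lines(density(res["full",]),col="blue")                              # full-sample estimator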

Sales forecasting

A few hours of lectures at ESC Rennes on the topic of Sales Forecasting. The slides are online, with part 1 and part 2 (I also put the dataset on highway traffic online). Among the technical constraints, Excel had to be used (exclusively). So here are links to macro1 and macro2, which are add-ins that make it possible to do a bit of time series analysis. A small example is also available…
