Tuesday, at the end of my 5-hour crash course on machine learning for actuaries, Pierre asked me an interesting question about the computational time of the different techniques. I had presented the philosophy of various algorithms, but I forgot to mention computational time. So I wanted to try several classification algorithms on the dataset used to illustrate the techniques,
> rm(list=ls())
> myocarde=read.table(
+ "http://freakonometrics.free.fr/myocarde.csv",
+ header=TRUE,sep=";")
> levels(myocarde$PRONO)=c("Death","Survival")
But the dataset is rather small, with 71 observations and 7 explanatory variables. So I decided to replicate the observations, and to add some simulated covariates,
> idx=rep(1:nrow(myocarde),each=100) # replicate each observation 100 times
> TPS=matrix(NA,30,10) # matrix to store computation times
> myocarde_large=myocarde[idx,]
> k=23 # number of additional covariates
> M=data.frame(matrix(rnorm(k*
+ nrow(myocarde_large)),nrow(myocarde_large),k))
> names(M)=paste("X",1:k,sep="")
> myocarde_large=cbind(myocarde_large,M)
> dim(myocarde_large)
[1] 7100 31
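(The 31 columns are the 7 original explanatory variables, the PRONO response, and the k=23 simulated covariates; the 7,100 rows are the 71 observations, each replicated 100 times.)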
> object.size(myocarde_large)
2049.064 kbytes
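As a side note, object.size prints bytes by default; kilobyte figures like the one above can be obtained through the units argument of its print method, presumably via something like
> print(object.size(myocarde_large),units="Kb")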
The dataset is not big… but at least it does not take 0.0001 seconds to run a regression on it. Actually, running a logistic regression takes about 0.1 second,
> system.time(fit<-glm(PRONO~.,
+ data=myocarde_large, family="binomial"))
   user  system elapsed
  0.114   0.016   0.134
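A single system.time call can be noisy, so for more stable timings one could average over several runs, for instance with the microbenchmark package (a quick sketch, not part of the original timings):
> library(microbenchmark)
> microbenchmark(glm(PRONO~.,
+ data=myocarde_large,family="binomial"),
+ times=10)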
> object.size(fit)
9,313.600 kbytes
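A fitted glm object stores much more than the estimated coefficients: among other things, the model frame, the qr decomposition, the residuals and the fitted values. To see which components weigh the most, one could run something like this quick sketch (output not shown),
> sort(sapply(fit,object.size),decreasing=TRUE)[1:5]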
Still, I was surprised that the regression object was over 9 MB, more than four times the size of the dataset. What happens with a dataset 100 times larger?
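The construction of myocarde_large_2 is not shown here; presumably it follows the same replication trick as above, with each observation repeated 10,000 times. A hypothetical sketch (the names idx2 and M2 are mine):
> idx2=rep(1:nrow(myocarde),each=10000)
> myocarde_large_2=myocarde[idx2,]
> M2=data.frame(matrix(rnorm(k*
+ nrow(myocarde_large_2)),nrow(myocarde_large_2),k))
> names(M2)=paste("X",1:k,sep="")
> myocarde_large_2=cbind(myocarde_large_2,M2)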
> dim(myocarde_large_2)
[1] 710000     31
On this dataset, the same regression takes about 20 seconds,
> system.time(fit<-glm(PRONO~.,
+ data=myocarde_large_2, family="binomial"))
   user  system elapsed
 16.394   2.576  19.819
> object.size(fit)
90,925.600 kbytes
and the fitted object is ‘only’ about ten times bigger, even though the dataset is one hundred times larger.
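The TPS matrix initialised earlier is presumably meant to store such elapsed times, one row per technique and one column per replication. A hypothetical sketch of how it could be filled, here with logistic regression and a classification tree from the rpart package:
> library(rpart)
> for(i in 1:10){
+ TPS[1,i]=system.time(glm(PRONO~.,
+ data=myocarde_large,family="binomial"))["elapsed"]
+ TPS[2,i]=system.time(rpart(PRONO~.,
+ data=myocarde_large))["elapsed"]
+ }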