This afternoon, in the data science training session, we saw that it is possible to use the AIC criterion for model selection.
> library(splines)
> AIC(glm(dist ~ speed, data=train_cars, family=poisson(link="log")))
[1] 438.6314
> AIC(glm(dist ~ speed, data=train_cars, family=poisson(link="identity")))
[1] 436.3997
> AIC(glm(dist ~ bs(speed), data=train_cars, family=poisson(link="log")))
[1] 425.6434
> AIC(glm(dist ~ bs(speed), data=train_cars, family=poisson(link="identity")))
[1] 428.7195
And I’ve been asked why we don’t fit a model on a training sample, and then use a validation sample to compare the predictive properties of those models, penalizing for model complexity. But it turns out that it is difficult to compute the AIC of those models on a different dataset. I mean, it is possible to write down the likelihood by hand (since we have a Poisson model), but I wanted code that would work for any model, any distribution…
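To make the question concrete, here is a minimal sketch of the manual computation in the Poisson case: fit on the training sample, compute the Poisson log-likelihood of the validation sample under the trained model, and penalize by the number of parameters. The split into train_cars and valid_cars below is purely illustrative (the actual train_cars used above may have been built differently).

> # hypothetical split of the cars dataset into training / validation halves
> set.seed(1)
> idx <- sample(1:nrow(cars), nrow(cars)/2)
> train_cars <- cars[idx, ]
> valid_cars <- cars[-idx, ]
> # fit on the training sample
> mod1 <- glm(dist ~ speed, data=train_cars, family=poisson(link="log"))
> # Poisson log-likelihood of the validation sample under the trained model
> lambda <- predict(mod1, newdata=valid_cars, type="response")
> logL <- sum(dpois(valid_cars$dist, lambda=lambda, log=TRUE))
> # AIC-type criterion computed on the validation sample
> -2*logL + 2*length(coef(mod1))

This works here because the Poisson density is easy to write down, but it has to be rewritten for every family and link, which is exactly the problem.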
Fortunately, Heather Turner suggested a very clever idea, using her gnm package:
@freakonometrics @pihive you could use gnm with `data = dat2, constrain = "*", constrainTo = coef(mod1)` then use AIC on result
— Heather Turner (@HeathrTurnr) 29 July 2015
And actually, it works well.
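As a sketch of what that suggestion looks like in practice (reusing the hypothetical valid_cars validation sample from above): the model is refitted on the validation data with all coefficients constrained to the values estimated on the training sample, and AIC() is then called on the resulting object.

> library(gnm)
> # model fitted on the training sample
> mod1 <- glm(dist ~ speed, data=train_cars, family=poisson(link="log"))
> # refit on the validation sample, with every coefficient fixed at its trained value
> mod2 <- gnm(dist ~ speed, data=valid_cars, family=poisson(link="log"),
+             constrain="*", constrainTo=coef(mod1))
> # AIC of the trained model, evaluated on the validation data
> AIC(mod2)

Since gnm handles any glm-type family and link, the same two lines can be reused to compare all the models above on the validation sample.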