What happens if we forget a trivial assumption?

Last week, @dmonniaux published an interesting post entitled l’erreur n’a rien d’original (“there is nothing original about the error”) on his blog. He was asking the following question: let https://latex.codecogs.com/gif.latex?a, https://latex.codecogs.com/gif.latex?b and https://latex.codecogs.com/gif.latex?c denote three real-valued coefficients; under which assumption on those three coefficients does https://latex.codecogs.com/gif.latex?ax^2+bx+c have a real-valued root?

Everyone answered https://latex.codecogs.com/gif.latex?b^2-4ac\geq%200, but no one mentioned that we first need a proper quadratic equation. For instance, if both https://latex.codecogs.com/gif.latex?a and https://latex.codecogs.com/gif.latex?b are null (and https://latex.codecogs.com/gif.latex?c is not), there are no roots.
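As a quick illustration, here is a small R function (a sketch, not from the original post) that returns the real roots while handling the degenerate cases:

> real_roots=function(a,b,c){
+ if(a==0 && b==0) return(numeric(0)) # not a proper equation (no root when c!=0)
+ if(a==0) return(-c/b)               # degenerate case: a linear equation
+ delta=b^2-4*a*c                     # the discriminant
+ if(delta<0) return(numeric(0))      # complex roots only
+ (-b+c(-1,1)*sqrt(delta))/(2*a)      # the two real roots
+ }
> real_roots(1,3,2)
[1] -2 -1
> real_roots(0,0,1)
numeric(0)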

It reminds me of all my time series courses, where I define https://latex.codecogs.com/gif.latex?ARMA(p,q) processes, i.e.

https://latex.codecogs.com/gif.latex?\Phi(L)%20X_t=\Theta(L)\varepsilon_t

To have a proper https://latex.codecogs.com/gif.latex?ARMA(p,q) process, https://latex.codecogs.com/gif.latex?\Phi(\cdot) has to be a polynomial of order https://latex.codecogs.com/gif.latex?p, and https://latex.codecogs.com/gif.latex?\Theta(\cdot) has to be a polynomial of order https://latex.codecogs.com/gif.latex?q. But that is not enough! The roots of https://latex.codecogs.com/gif.latex?\Phi(\cdot) and https://latex.codecogs.com/gif.latex?\Theta(\cdot) have to be distinct! If they have a root in common, then we are not dealing with an https://latex.codecogs.com/gif.latex?ARMA(p,q) process.
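The assumption is easy to check numerically with R's polyroot() (a sketch, with coefficients chosen here so that the two polynomials do share a root):

> Phi=c(1,-1.2,.35)  # Phi(L)=1-1.2L+.35L^2=(1-.5L)(1-.7L), coefficients in increasing order
> Theta=c(1,-.5)     # Theta(L)=1-.5L
> polyroot(Phi)      # roots 2 and 1/.7 (about 1.43)
> polyroot(Theta)    # root 2, shared with Phi: not a proper ARMA(2,1)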

It sounds like something trivial but, most of the time, everyone forgets about it. Just like the assumption that https://latex.codecogs.com/gif.latex?a and https://latex.codecogs.com/gif.latex?b should not both be null in @dmonniaux’s problem.

And most of the time, those theoretical problems are extremely important in practice! I mean, assume that you have an https://latex.codecogs.com/gif.latex?AR(1) time series,

https://latex.codecogs.com/gif.latex?(1-\phi%20L)X_t=\varepsilon_t

but you don’t know it is an https://latex.codecogs.com/gif.latex?AR(1), and you fit an https://latex.codecogs.com/gif.latex?ARMA(2,1),

https://latex.codecogs.com/gif.latex?(1-\phi%20L)(1-\theta%20L)X_t=(1-\theta%20L)\varepsilon_t

Most of the time, we do not look at the roots of the polynomials; we just look at their coefficients,

https://latex.codecogs.com/gif.latex?(1-[\phi%20+\theta]%20L+\theta\phi%20L^2)X_t=(1-\theta%20L)\varepsilon_t
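To see why https://latex.codecogs.com/gif.latex?\theta cannot be identified, note that the autocorrelations of the process do not depend on it: the ARMA(2,1) with the common root has exactly the same autocorrelation function as the underlying AR(1). A quick check in R (with hypothetical values phi=0.7 and theta=0.5):

> phi=.7; theta=.5
> ARMAacf(ar=c(phi+theta,-phi*theta),ma=-theta,lag.max=3) # ARMA(2,1) with the common root
> ARMAacf(ar=phi,lag.max=3)                               # AR(1): the same values, phi^(0:3)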

The statistical interpretation is that the model is misspecified, and we have a non-identifiable parameter here. Is our inference procedure clever enough to understand that https://latex.codecogs.com/gif.latex?\theta should be null? What kind of coefficients https://latex.codecogs.com/gif.latex?\phi_1 and https://latex.codecogs.com/gif.latex?\phi_2 do we get? Is the first one close to https://latex.codecogs.com/gif.latex?\phi and the second one close to https://latex.codecogs.com/gif.latex?0? Because that is the true model, somehow…

Let us run some Monte Carlo simulations to get some hints.

> ns=1000
> fit2=matrix(NA,ns,3)
> for(s in 1:ns){
+ X=arima.sim(n=240,list(ar=0.7),sd=1)                    # simulate an AR(1) with phi=0.7
+ fit=try(arima(X,order=c(2,0,1))$coef[1:3],silent=TRUE)  # fit an ARMA(2,1)
+ if(!inherits(fit, "try-error")) fit2[s,]=fit            # keep (phi1,phi2,theta) if it converged
+ }
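As a quick sanity check (not in the original post), the proportion of simulations where the fit failed can be computed first,

> mean(is.na(fit2[,1]))  # proportion of failed fits (depends on the run)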

If we just focus on the estimations that ran without error, we get

> library(ks)
> H=diag(c(.01,.01))            # bandwidth matrix for the bivariate kernel estimator
> U=as.data.frame(fit2[,1:2])   # keep only the AR coefficients (phi1,phi2)
> U=U[!is.na(U[,1]),]           # drop the runs that failed
> fat=kde(U,H,xmin=c(-2.05,-1.05),xmax=c(2.05,1.05))
> z=fat$estimate
> library(RColorBrewer)
> reds=colorRampPalette(brewer.pal(9,"Reds"))(100)
> image(seq(-2.05,2.05,length=151),
+ seq(-1.05,1.05,length=151),
+ z,col=reds)
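The annotation code is not shown in the post; a possible sketch for the black dot and the triangle (the vertices (-2,-1), (2,-1) and (0,1) delimit the stationarity region of the AR(2) part):

> polygon(c(-2,2,0),c(-1,-1,1))  # stationarity triangle in the (phi1,phi2) plane
> points(.7,0,pch=19)            # where we expect to be: phi1=0.7, phi2=0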

The black dot is where we expect to be: https://latex.codecogs.com/gif.latex?\phi_1 close to https://latex.codecogs.com/gif.latex?\phi and https://latex.codecogs.com/gif.latex?\phi_2 close to https://latex.codecogs.com/gif.latex?0 (the stationarity triangle for https://latex.codecogs.com/gif.latex?ARMA(2,1) time series was added to the graph). But the numerical output is far away from what we were expecting.

So yes, the theoretical assumption of distinct roots is very important, even if everyone forgets about it! From a numerical point of view, we can get almost anything if we forget that trivial assumption! Actually, I still wonder what kind of “anything” we get… When we look at the distribution of https://latex.codecogs.com/gif.latex?\theta, it is clearly not “uniform”,

> hist(fit2[,3],col="light blue",probability=TRUE)

And actually, there is a priori no reason to have https://latex.codecogs.com/gif.latex?\theta\in[-1,1]. But that’s what we observe here,

> range(fit2[!is.na(fit2[,3]),3])
[1] -1  1
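A quick way to quantify the pile-up at the boundary (the value will vary from run to run),

> mean(abs(fit2[,3])>.99,na.rm=TRUE)  # proportion of estimates of theta close to -1 or 1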
