In the exercises for the MAT8181 course, there are two exercises (16 and 17) about prediction and extrapolation based on MA(1) and AR(1) time series. But before discussing those exercises (I had some requests for hints), I wanted to recall the definition of the linear prediction,

$$ {}_T\hat{X}_{T+h} = EL[X_{T+h} \mid \mathcal{X}_T], $$

where $\mathcal{X}_T$ denotes the past observations $\{X_T, X_{T-1}, \dots\}$. As discussed previously on this blog, we consider here the projection not onto $\sigma(X_T, X_{T-1}, \dots)$ (this would be the conditional expectation) but onto the linear subspace spanned by $\{1, X_T, X_{T-1}, \dots\}$.
The goal of Exercise 2 was to establish an important result, in the context of Gaussian random vectors: if $(Y, \boldsymbol{X})$ is a (multivariate) Gaussian vector,

$$ \begin{pmatrix} Y \\ \boldsymbol{X} \end{pmatrix} \sim \mathcal{N}\left( \begin{pmatrix} \mu_Y \\ \boldsymbol{\mu}_X \end{pmatrix}, \begin{pmatrix} \sigma_Y^2 & \boldsymbol{\Sigma}_{YX} \\ \boldsymbol{\Sigma}_{XY} & \boldsymbol{\Sigma}_{XX} \end{pmatrix} \right), $$

then

$$ E(Y \mid \boldsymbol{X}) = \alpha + \boldsymbol{\beta}^\top \boldsymbol{X}, $$

where $\boldsymbol{\beta}$ is the vector

$$ \boldsymbol{\beta} = [\text{Var}(\boldsymbol{X})]^{-1} \text{Cov}(\boldsymbol{X}, Y) = \boldsymbol{\Sigma}_{XX}^{-1} \boldsymbol{\Sigma}_{XY} $$

(and $\alpha = \mu_Y - \boldsymbol{\beta}^\top \boldsymbol{\mu}_X$): in the Gaussian case, the conditional expectation coincides with the linear projection.
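Just to illustrate that projection formula (this is not part of the exercise), here is a minimal numerical sketch in Python, where the covariance matrix, the seed and the sample size are arbitrary choices of mine: the vector $[\text{Var}(\boldsymbol{X})]^{-1}\text{Cov}(\boldsymbol{X},Y)$ should match the least-squares coefficients obtained on simulated (centered) Gaussian data.

```python
import numpy as np

rng = np.random.default_rng(1)

# An arbitrary (positive definite) covariance matrix for (Y, X1, X2)
Sigma = np.array([[1.0, 0.6, 0.4],
                  [0.6, 1.0, 0.5],
                  [0.4, 0.5, 1.0]])
Z = rng.multivariate_normal(np.zeros(3), Sigma, size=100_000)
Y, X = Z[:, 0], Z[:, 1:]

# beta = [Var(X)]^{-1} Cov(X, Y), from the true covariance matrix...
beta_theory = np.linalg.solve(Sigma[1:, 1:], Sigma[1:, 0])

# ...and the least-squares coefficients, from the simulated sample
beta_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)

print(beta_theory)  # [0.53333333, 0.13333333]
print(beta_ols)     # close to the theoretical values
```

(No intercept is needed here, since everything is centered.)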
Keep those results in mind, and let us look at Exercise 17, for instance. Here, $(X_t)$ is an AR(1) process, with innovation $(\varepsilon_t)$,

$$ X_t = \phi X_{t-1} + \varepsilon_t, $$

where $(\varepsilon_t)$ is a white noise with variance $\sigma^2$, and $\vert\phi\vert < 1$ (so that the process is stationary).
One observation (say $X_t$) is missing. We have here 3 questions:

- what is the best linear prediction of $X_t$ given $X_{t-1}$ and $X_{t-2}$,
- what is the best linear prediction of $X_t$ given $X_{t+1}$ and $X_{t+2}$,
- what is the best linear prediction of $X_t$ given $X_{t-1}$ and $X_{t+1}$?
Case 1. Here, we have to compute

$$ EL(X_t \mid X_{t-1}, X_{t-2}) = \beta_1 X_{t-1} + \beta_2 X_{t-2}. $$

Since we have an AR(1) process, $\gamma(1) = \phi\,\gamma(0)$ and $\gamma(2) = \phi^2\gamma(0)$. Thus, from the relationship above,

$$ \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix} = \begin{pmatrix} \gamma(0) & \gamma(1) \\ \gamma(1) & \gamma(0) \end{pmatrix}^{-1} \begin{pmatrix} \gamma(1) \\ \gamma(2) \end{pmatrix}, $$

which can be written

$$ \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix} = \frac{1}{\gamma(0)(1-\phi^2)} \begin{pmatrix} 1 & -\phi \\ -\phi & 1 \end{pmatrix} \begin{pmatrix} \phi\,\gamma(0) \\ \phi^2\gamma(0) \end{pmatrix} = \begin{pmatrix} \phi \\ 0 \end{pmatrix}, $$

i.e. $EL(X_t \mid X_{t-1}, X_{t-2}) = \phi X_{t-1}$. Which makes sense, actually: the AR(1) process is Markovian of order one, so

$$ E(X_t \mid X_{t-1}, X_{t-2}, \dots) = E(X_t \mid X_{t-1}), $$

and we have seen in class that, for an AR(1) process,

$$ E(X_t \mid X_{t-1}) = \phi X_{t-1}. $$

So far, so good.
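If you want to see this on simulated data, here is a quick sketch (again in Python, with $\phi = 0.7$ and a unit-variance Gaussian noise, both arbitrary choices): regressing $X_t$ on the two previous values, the coefficient on $X_{t-2}$ should essentially vanish.

```python
import numpy as np

rng = np.random.default_rng(123)
phi, n = 0.7, 200_000

# Simulate an AR(1): X_t = phi * X_{t-1} + eps_t
eps = rng.normal(size=n)
X = np.zeros(n)
for t in range(1, n):
    X[t] = phi * X[t - 1] + eps[t]

# Regress X_t on (X_{t-1}, X_{t-2})
Y = X[2:]
Z = np.column_stack([X[1:-1], X[:-2]])
beta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
print(beta)  # close to [0.7, 0.0]
```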
Case 2. Here, we have to compute

$$ EL(X_t \mid X_{t+1}, X_{t+2}) = \beta_1 X_{t+1} + \beta_2 X_{t+2}. $$

Since we have an AR(1) process, $\text{Cov}(X_t, X_{t+1}) = \gamma(1) = \phi\,\gamma(0)$ and $\text{Cov}(X_t, X_{t+2}) = \gamma(2) = \phi^2\gamma(0)$: the autocovariance function is symmetric, so the computation is exactly the same as in Case 1. Thus, from the relationship above,

$$ \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix} = \begin{pmatrix} \gamma(0) & \gamma(1) \\ \gamma(1) & \gamma(0) \end{pmatrix}^{-1} \begin{pmatrix} \gamma(1) \\ \gamma(2) \end{pmatrix} = \begin{pmatrix} \phi \\ 0 \end{pmatrix}, $$

i.e. $EL(X_t \mid X_{t+1}, X_{t+2}) = \phi X_{t+1}$.
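The same simulation check works backward in time (same arbitrary settings as above): regressing $X_t$ on the two following values, we should again find coefficients close to $(\phi, 0)$.

```python
import numpy as np

rng = np.random.default_rng(123)
phi, n = 0.7, 200_000
eps = rng.normal(size=n)
X = np.zeros(n)
for t in range(1, n):
    X[t] = phi * X[t - 1] + eps[t]

# Regress X_t on the two *future* values (X_{t+1}, X_{t+2})
Y = X[:-2]
Z = np.column_stack([X[1:-1], X[2:]])
beta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
print(beta)  # close to [0.7, 0.0]
```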
Case 3. Finally, we have to compute

$$ EL(X_t \mid X_{t-1}, X_{t+1}) = \beta_1 X_{t-1} + \beta_2 X_{t+1}. $$

One more time, since we have an AR(1) process, $\text{Cov}(X_t, X_{t-1}) = \text{Cov}(X_t, X_{t+1}) = \gamma(1) = \phi\,\gamma(0)$, and $\text{Cov}(X_{t-1}, X_{t+1}) = \gamma(2) = \phi^2\gamma(0)$. So here, the relationship above becomes

$$ \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix} = \begin{pmatrix} \gamma(0) & \gamma(2) \\ \gamma(2) & \gamma(0) \end{pmatrix}^{-1} \begin{pmatrix} \gamma(1) \\ \gamma(1) \end{pmatrix}. $$

Here, we can write

$$ \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix} = \frac{1}{\gamma(0)(1-\phi^4)} \begin{pmatrix} 1 & -\phi^2 \\ -\phi^2 & 1 \end{pmatrix} \begin{pmatrix} \phi\,\gamma(0) \\ \phi\,\gamma(0) \end{pmatrix} = \frac{\phi(1-\phi^2)}{1-\phi^4} \begin{pmatrix} 1 \\ 1 \end{pmatrix}, $$

i.e.

$$ \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix} = \frac{\phi}{1+\phi^2} \begin{pmatrix} 1 \\ 1 \end{pmatrix}. $$
So finally, what we got here is

$$ EL(X_t \mid X_{t-1}, X_{t-2}) = \phi X_{t-1}, \qquad EL(X_t \mid X_{t+1}, X_{t+2}) = \phi X_{t+1}, $$

and

$$ EL(X_t \mid X_{t-1}, X_{t+1}) = \frac{\phi}{1+\phi^2} \left( X_{t-1} + X_{t+1} \right). $$
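And indeed, regressing $X_t$ on its two neighbours (still the same arbitrary simulation settings), both coefficients should be close to $\phi/(1+\phi^2)$:

```python
import numpy as np

rng = np.random.default_rng(123)
phi, n = 0.7, 200_000
eps = rng.normal(size=n)
X = np.zeros(n)
for t in range(1, n):
    X[t] = phi * X[t - 1] + eps[t]

# Regress X_t on (X_{t-1}, X_{t+1})
Y = X[1:-1]
Z = np.column_stack([X[:-2], X[2:]])
beta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
print(beta, phi / (1 + phi**2))  # both coefficients close to 0.4698...
```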
The mean squared errors for each of those estimates are obtained by computing

$$ E\big[ (X_t - \hat{X}_t)^2 \big]. $$
I guess I should probably stop here… that's already a rather detailed hint, actually.