In the first part (here), we introduced Bellman’s idea of backward induction. But what if we now consider an infinite time horizon? Actually, the mathematics becomes even simpler… and we will be able to use a fixed point theorem to derive solutions.
- The mathematical framework
Here, consider the following value function
$$v(x_0)=\sup_{(x_t)\in\Pi(x_0)}\left\{\sum_{t=0}^{+\infty}\beta^t u(x_t,x_{t+1})\right\}$$
(call $\mathcal{P}(x_0)$ the underlying optimization problem, for some discount factor $\beta\in(0,1)$) and define the set of admissible sequences starting from $x_0$ as
$$\Pi(x_0)=\left\{(x_t)_{t\geq 0}\,;\,x_{t+1}\in\Gamma(x_t)\text{ for all }t\geq 0\right\},$$
where $\Gamma$ is the correspondence describing the feasible transitions.
A sequence $(x_t)_{t\geq 0}$ is said to be an admissible solution for starting point $x$
if $(x_t)_{t\geq 0}\in\Pi(x)$ and
$$\sum_{t=0}^{+\infty}\beta^t u(x_t,x_{t+1})=v(x).$$
If we reformulate the dynamic programming idea, we obtain that if $(x_t^\star)_{t\geq 0}$ is a solution to problem $\mathcal{P}(x_0)$, then for all $t$, the shifted sequence $(x_t^\star,x_{t+1}^\star,\dots)$ is a solution to problem $\mathcal{P}(x_t^\star)$. It follows that the function $v$ is a solution of Bellman’s equation,
$$v(x)=\sup_{y\in\Gamma(x)}\bigl\{u(x,y)+\beta v(y)\bigr\}.$$
Note that this can be seen as some fixed point result, since
$$v(x)=\sup_{y\in\Gamma(x)}\bigl\{u(x,y)+\beta v(y)\bigr\}=T(v)(x)\text{ for all }x,$$
i.e.
$$v=T(v),\qquad\text{where}\qquad T(w)(x)=\sup_{y\in\Gamma(x)}\bigl\{u(x,y)+\beta w(y)\bigr\}.$$
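To justify the fixed point argument mentioned in the introduction, here is a short sketch (assuming that $u$ is bounded, an assumption not made explicit above): for any two bounded functions $w_1$ and $w_2$,
$$\|T(w_1)-T(w_2)\|_{\infty}\leq\beta\,\|w_1-w_2\|_{\infty},$$
so $T$ is a $\beta$-contraction for the sup norm. Banach’s fixed point theorem then gives existence and uniqueness of the (bounded) solution $v$ of Bellman’s equation, and $T^n(w_0)\to v$ for any starting function $w_0$, which is exactly the iteration used in the last section below.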
So far, it shouldn’t be too hard…
- Frank Ramsey’s model (discrete version)
In 1928, Frank Ramsey wanted to understand savings from a dynamic perspective (how much of its income should a nation save?). Consider the following infinite horizon problem, where some (social) planner wants to maximize
$$\sum_{t=0}^{+\infty}\beta^t u(c_t)$$
subject to the constraints $c_t+k_{t+1}\leq f(k_t)$, $c_t,k_t\geq 0$, and $k_0$ given.
Before looking at the dynamic programming answer, we might start with standard Lagrangian optimization techniques.
Assuming concavity of the utility function and of the production function, we should look only for interior solutions. Define the Lagrangian as
$$\mathcal{L}=\sum_{t=0}^{+\infty}\beta^t u(c_t)+\sum_{t=0}^{+\infty}\lambda_t\bigl[f(k_t)-c_t-k_{t+1}\bigr].$$
Thus, the first order conditions are then given by
$$\frac{\partial\mathcal{L}}{\partial c_t}=0\;:\;\beta^t u'(c_t)=\lambda_t$$
and
$$\frac{\partial\mathcal{L}}{\partial k_{t+1}}=0\;:\;\lambda_t=\lambda_{t+1}\,f'(k_{t+1}).$$
Assume further some terminal condition, e.g.
$$\lim_{t\to+\infty}\lambda_t k_{t+1}=0$$
(also called transversality condition). If we combine those two conditions, and assume that the first constraint is binding (i.e. $c_t+k_{t+1}=f(k_t)$), we obtain the so-called Euler equation,
$$u'(c_t)=\beta\,u'(c_{t+1})\,f'(k_{t+1}).$$
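To spell out the combination of the two first order conditions: substituting $\lambda_t=\beta^t u'(c_t)$ and $\lambda_{t+1}=\beta^{t+1}u'(c_{t+1})$ into $\lambda_t=\lambda_{t+1}f'(k_{t+1})$ gives
$$\beta^t u'(c_t)=\beta^{t+1}u'(c_{t+1})\,f'(k_{t+1}),$$
and dividing both sides by $\beta^t$ yields the Euler equation above.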
It is also possible to use Bellman’s equation: given the dynamic of the capital, $k_{t+1}=f(k_t)-c_t$, we can write
$$v(k_t)=\max_{0\leq c_t\leq f(k_t)}\bigl\{u(c_t)+\beta v(f(k_t)-c_t)\bigr\}.$$
The first order condition states that
$$u'(c_t)=\beta v'(k_{t+1}),\qquad\text{where }k_{t+1}=f(k_t)-c_t.$$
But since $v$ is unknown, so is its derivative. However, from the envelope theorem, we obtain something like
$$v'(k_t)=u'(c_t)\,f'(k_t),$$
where
$$c_t=c^\star(k_t)$$
denotes the optimal consumption, as a function of the capital $k_t$.
We can then write
$$u'(c_t)=\beta v'(k_{t+1}),$$
i.e. (using the envelope relation at time $t+1$)
$$v'(k_{t+1})=u'(c_{t+1})\,f'(k_{t+1}),$$
and finally
$$u'(c_t)=\beta\,u'(c_{t+1})\,f'(k_{t+1}),$$
which is Euler’s equation.
- A specified model, with calculations
As in the previous post (here), consider a log utility function and a power production function, $u(c)=\log(c)$ and $f(k)=k^{\alpha}$, with $\alpha\in(0,1)$. The dynamic is then
$$k_{t+1}=k_t^{\alpha}-c_t.$$
Note that the fixed points (the steady state values of capital and consumption) are here
$$k^\star=(\alpha\beta)^{\frac{1}{1-\alpha}}$$
and
$$c^\star=(k^\star)^{\alpha}-k^\star.$$
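As a quick check of that steady state (a sketch, with the log utility and power production functions above): on a constant path $c_t=c^\star$ and $k_t=k^\star$, the Euler equation $u'(c_t)=\beta u'(c_{t+1})f'(k_{t+1})$ reduces to $1=\alpha\beta (k^\star)^{\alpha-1}$, hence $k^\star=(\alpha\beta)^{\frac{1}{1-\alpha}}$, while the (binding) resource constraint gives $c^\star=(k^\star)^{\alpha}-k^\star$. For instance, with the purely illustrative values $\alpha=0.3$ and $\beta=0.95$, $k^\star\approx 0.166$ and $c^\star\approx 0.418$.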
Recall that the value function is defined as
$$v(k_0)=\max\left\{\sum_{t=0}^{+\infty}\beta^t\log(c_t)\right\}$$
(the maximum being taken over admissible consumption paths). A natural idea to derive the value function is to iterate, i.e.
$$v_{n+1}(k)=\max_{0\leq c\leq k^{\alpha}}\bigl\{\log(c)+\beta v_n(k^{\alpha}-c)\bigr\},$$
starting with a simple function, e.g. the null function, at step 0. At step $n=1$,
$$v_1(k)=\max_{0\leq c\leq k^{\alpha}}\bigl\{\log(c)+\beta\cdot 0\bigr\}=\max_{0\leq c\leq k^{\alpha}}\log(c),$$
so the maximum is reached at $c=k^{\alpha}$, and thus
$$v_1(k)=\alpha\log(k).$$
At step $n=2$,
$$v_2(k)=\max_{0\leq c\leq k^{\alpha}}\bigl\{\log(c)+\beta v_1(k^{\alpha}-c)\bigr\},$$
i.e.
$$v_2(k)=\max_{0\leq c\leq k^{\alpha}}\bigl\{\log(c)+\alpha\beta\log(k^{\alpha}-c)\bigr\}.$$
The first order condition is then
$$\frac{1}{c}=\frac{\alpha\beta}{k^{\alpha}-c},$$
and thus, we obtain
$$c=\frac{k^{\alpha}}{1+\alpha\beta},$$
which can be plugged into the previous expression, i.e.
$$v_2(k)=\alpha(1+\alpha\beta)\log(k)+\text{constant}.$$
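To see where that constant comes from, plug $c=k^{\alpha}/(1+\alpha\beta)$ back into the objective (so that $k^{\alpha}-c=\alpha\beta k^{\alpha}/(1+\alpha\beta)$):
$$v_2(k)=\log\frac{k^{\alpha}}{1+\alpha\beta}+\alpha\beta\log\frac{\alpha\beta\,k^{\alpha}}{1+\alpha\beta}=\alpha(1+\alpha\beta)\log(k)-(1+\alpha\beta)\log(1+\alpha\beta)+\alpha\beta\log(\alpha\beta),$$
where the last two terms do not depend on $k$.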
At step 3, we start from that new expression, derive the first order condition, and we get
$$c=\frac{k^{\alpha}}{1+\alpha\beta+(\alpha\beta)^2}$$
and
$$v_3(k)=\alpha\bigl(1+\alpha\beta+(\alpha\beta)^2\bigr)\log(k)+\text{constant},$$
and so on…
And finally, we can prove that
$$v_n(k)=\alpha\bigl(1+\alpha\beta+\cdots+(\alpha\beta)^{n-1}\bigr)\log(k)+\text{constant},$$
i.e. the coefficient of $\log(k)$ converges to $\dfrac{\alpha}{1-\alpha\beta}$. Assuming that the value function is of the form
$$v(k)=a\log(k)+b,$$
we can actually prove that
$$v(k)=\frac{\alpha}{1-\alpha\beta}\log(k)+b$$
(and $b$ has a form that can be made explicit).
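To complement the derivation above, here is a minimal numerical sketch of that value function iteration in Python (the parameter values $\alpha=0.3$ and $\beta=0.95$, the grid and the number of iterations are purely illustrative choices): it iterates the Bellman operator on a grid of capital levels and compares the limit with the closed form $v(k)=\frac{\alpha}{1-\alpha\beta}\log(k)+b$, where identifying the constants in Bellman’s equation gives $b=\frac{1}{1-\beta}\left[\log(1-\alpha\beta)+\frac{\alpha\beta}{1-\alpha\beta}\log(\alpha\beta)\right]$.

```python
# Value function iteration for the specified model: u(c) = log(c), f(k) = k^alpha.
# Illustrative parameters and grid (not taken from the post).
import numpy as np

alpha, beta = 0.3, 0.95
k_grid = np.linspace(0.05, 0.5, 400)      # capital levels (covers the steady state ~0.166)

v = np.zeros_like(k_grid)                 # step 0: the null function
for _ in range(300):
    # consumption implied by each pair (k today, k' tomorrow): c = k^alpha - k'
    c = k_grid[:, None] ** alpha - k_grid[None, :]
    feasible = c > 0
    # v_{n+1}(k) = max over k' of { log(k^alpha - k') + beta * v_n(k') }
    values = np.where(feasible, np.log(np.where(feasible, c, 1.0)) + beta * v[None, :], -np.inf)
    v = values.max(axis=1)

# closed-form limit: v(k) = a log(k) + b
a = alpha / (1 - alpha * beta)
b = (np.log(1 - alpha * beta) + a * beta * np.log(alpha * beta)) / (1 - beta)
print(np.max(np.abs(v - (a * np.log(k_grid) + b))))   # small (up to grid discretization error)
```

The optimal policy associated with that limit is $c^\star(k)=(1-\alpha\beta)k^{\alpha}$, i.e. a constant saving rate $\alpha\beta$ out of output, which is the closed-form answer to Ramsey’s question in this specified model.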