Optimal control, part 1

Since I recently had a request (from Benoît) about optimal control, I will start here a series of posts on that topic. But first, let us start with a simple problem, with discrete time, no randomness, and a finite horizon. This might be too simple a framework to model complex problems, but it should be enough to derive heuristic intuitions (I will skip the mathematical technicalities here; they can be found in very good books… references will come soon).

  • An introduction to backward induction

Before starting seriously, let us consider the following example: we want to reach the red city on the right from the red city on the left, as fast as possible. There are some roads, and the number on each road is the number of hours it takes. Let us prove that the optimal way is the red one.

The first idea could be to compute all possible trajectories, but with a large number of roads, the number of possible ways quickly becomes extremely large. An alternative is to look backwards (like any student facing a question whose answer is given: start from the end, and try to find a possible way to reach it).


Numbers in green are the number of hours still needed once we have reached that point. Let us move one more step backward, and consider the orange points.

In the middle, we have to choose either to go up (it will then take 9 more hours) or to go down (and then 14 hours are necessary). Thus, the optimal strategy, once we have reached that point, is to take the 9-hour road. This is the idea proposed by Bellman. Let us go backward again, to the purple cities. Again, we have to choose the shorter way.

From the top city, the fastest road will take 13 hours, and from the bottom one, 16 hours. Thus, since it takes 10 hours to reach the top city, that road (10 + 13 = 23 hours in total) is necessarily faster than going through the one below (8 + 16 = 24 hours, even though the nearest city is only 8 hours away).
And this is it. We now have an intuitive idea of what should be done to find optimal strategies.
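To make this backward pass concrete, here is a minimal sketch in Python. The road network below is made up for the illustration (it is not the network from the figure): nodes are cities, each edge carries a number of hours, and we compute, for every city, the time still needed to reach the destination by sweeping backwards from the end.

```python
# Backward induction on a small directed road network (made-up travel times).
# value[city] = minimal number of hours still needed to reach the destination.

roads = {                      # hypothetical network, hours on each road
    "start": {"A": 2, "B": 3},
    "A": {"C": 4, "D": 1},
    "B": {"D": 5},
    "C": {"end": 6},
    "D": {"C": 2, "end": 7},
    "end": {},
}

def backward_induction(roads, destination="end"):
    value, best = {destination: 0}, {}
    remaining = set(roads) - {destination}
    # process a city once all its successors are known (reverse topological order)
    while remaining:
        for city in list(remaining):
            if all(nxt in value for nxt in roads[city]):
                nxt, v = min(((n, h + value[n]) for n, h in roads[city].items()),
                             key=lambda p: p[1])
                value[city], best[city] = v, nxt
                remaining.remove(city)
    return value, best

value, best = backward_induction(roads)
print(value["start"])   # minimal total travel time from the start
print(best)             # optimal next city, from each city
```

Once the green numbers (the `value` dictionary) are known, the optimal route is simply read forward, by following `best` from the starting city.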

  • The optimization problem, in discrete time with a finite horizon

Let us consider the following optimization problem: we want to find the optimal strategy $\underline{x}^{\star}=(x_0^{\star},x_1^{\star},\dots,x_T^{\star})$ that maximizes the following function

$$J(\underline{x})=\sum_{t=0}^{T-1}u_t(x_t,x_{t+1})$$

with simple constraints, such as $x_t\in\mathbb{X}$ (a state space) and $x_{t+1}\in\Gamma_t(x_t)$ (i.e. some dynamic constraints). Assume further that the starting point is given, i.e. $x_0$ is known. In economic applications, note that frequently $u_t(x_t,x_{t+1})=\beta^t\,u(x_t,x_{t+1})$ for some discount factor $\beta\in(0,1)$, i.e. we consider a discounted version of the value.

  • The idea of dynamic programming

The intuitive idea of dynamic programming is that if the optimal path from A to C goes through B, then the part of that path from B to C is itself optimal (from B to C). Thus, it is natural to consider backward induction techniques.
Define then, for $t=0,1,\dots,T-1$, the tail criterion
$$J_t(\underline{x})=\sum_{s=t}^{T-1}u_s(x_s,x_{s+1}),$$

and, for each possible state $x$, the problem

$$(\mathcal{P}_t(x))\qquad\max\big\{J_t(\underline{x})\,:\,x_t=x,\ x_{s+1}\in\Gamma_s(x_s)\ \text{for }s=t,\dots,T-1\big\}$$

with value

$$V_t(x)=\max\big\{J_t(\underline{x})\,:\,x_t=x,\ x_{s+1}\in\Gamma_s(x_s)\ \text{for }s=t,\dots,T-1\big\}.$$
Then Bellman's principle can be used to link those problems: if $\underline{x}^{\star}=(x_0^{\star},x_1^{\star},\dots,x_T^{\star})$ is a solution of the problem $(\mathcal{P}_0(x_0))$, then, for all $t=0,1,\dots,T-1$, $(x_t^{\star},\dots,x_T^{\star})$ is a solution of problem $(\mathcal{P}_t(x_t^{\star}))$.
Note that, so far, we have assumed that such an optimal sequence exists. Thus, we get that, for all $x$,

$$V_{T-1}(x)=\max\big\{u_{T-1}(x,y)\,:\,y\in\Gamma_{T-1}(x)\big\}$$

and more generally,

$$V_{t}(x)=\max\big\{u_{t}(x,y)+V_{t+1}(y)\,:\,y\in\Gamma_{t}(x)\big\},\qquad t=0,1,\dots,T-2.$$

Hence, from a practical point of view, we solve those equations using a backward approach, i.e. first,

$$V_{T-1}(x)=\max\big\{u_{T-1}(x,y)\,:\,y\in\Gamma_{T-1}(x)\big\}$$

and then

$$V_{T-2}(x)=\max\big\{u_{T-2}(x,y)+V_{T-1}(y)\,:\,y\in\Gamma_{T-2}(x)\big\}$$

and so on

$$V_{T-3}(x)=\max\big\{u_{T-3}(x,y)+V_{T-2}(y)\,:\,y\in\Gamma_{T-3}(x)\big\}$$

… etc. It can be proved that the sequence $(x_0^{\star},x_1^{\star},\dots,x_T^{\star})$ is a solution of $(\mathcal{P}_0(x_0))$ if and only if, for all $t=0,1,\dots,T-1$, $(x_t^{\star},\dots,x_T^{\star})$
is a solution of

$$(\mathcal{P}_t(x_t^{\star}))\qquad\max\big\{J_t(\underline{x})\,:\,x_t=x_t^{\star},\ x_{s+1}\in\Gamma_s(x_s)\ \text{for }s=t,\dots,T-1\big\}.$$

So far, it does not look so difficult….
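Indeed, the backward recursion translates almost literally into code. Below is a minimal sketch, in Python, for a problem with a finite state space written exactly in the form above; the names `reward` and `feasible` are placeholders standing for $u_t(x_t,x_{t+1})$ and $\Gamma_t(x_t)$, and the toy problem at the end is only there to show the calling convention.

```python
# Generic backward induction for a finite-horizon, finite-state problem:
#   V_T(x) = 0, and V_t(x) = max over y in Gamma_t(x) of { u_t(x, y) + V_{t+1}(y) }.
# reward(t, x, y) plays the role of u_t(x, y), feasible(t, x) the role of Gamma_t(x).

def solve(states, T, reward, feasible):
    V = {T: {x: 0.0 for x in states}}          # terminal value: nothing left to collect
    policy = {}
    for t in range(T - 1, -1, -1):             # backward sweep: t = T-1, ..., 0
        V[t], policy[t] = {}, {}
        for x in states:
            best_y = max(feasible(t, x), key=lambda y: reward(t, x, y) + V[t + 1][y])
            V[t][x] = reward(t, x, best_y) + V[t + 1][best_y]
            policy[t][x] = best_y
    return V, policy

# Toy usage: states 0..5, reward for moving from x to y is -(y - x)**2,
# and any state is reachable from any state at every date.
states = range(6)
V, policy = solve(states, T=3,
                  reward=lambda t, x, y: -(y - x) ** 2,
                  feasible=lambda t, x: states)
print(V[0][2], policy[0][2])   # value and optimal next state when starting from x = 2
```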

  • A simple example

Let $c_t$ denote the consumption at period $t$, and assume consumption yields utility

$$u(c_t)=\ln(c_t)$$

as long as the consumer lives. Assume the consumer is impatient, or has a stronger preference for the present, so that he discounts future utility by a factor $\beta\in(0,1)$. Let $k_t$ be the capital he holds at time $t$. Assume that his initial capital is a given amount $k_0$, and suppose that this period's capital and consumption determine next period's capital as

$$k_{t+1}=A\,k_t^{\alpha}-c_t$$

where $A$ is a positive constant and $0<\alpha<1$. Assume further that capital cannot be negative. Then the consumer's problem is simply

$$\max_{c_0,\dots,c_T}\ \sum_{t=0}^{T}\beta^{t}\ln(c_t)$$

given $k_0>0$. Bellman's equation is then

$$V_t(k)=\max_{0\le c\le Ak^{\alpha}}\big\{\ln(c)+\beta\,V_{t+1}\big(Ak^{\alpha}-c\big)\big\}$$

which leads us to a simpler problem than the initial one, since only two variables are involved here, $c$ and $k$. And to solve that problem, we use backward induction techniques.
Since $V_T$ is known (in the last period, the consumer simply consumes everything, so $c_T^{\star}=Ak_T^{\alpha}$ and $V_T(k)=\ln(Ak^{\alpha})$), we can easily derive $V_{T-1}$, and so on until $V_0$. More precisely, given $V_{t+1}$, we can get $c_t^{\star}$, which is the maximum of the function

$$c\ \mapsto\ \ln(c)+\beta\,V_{t+1}\big(Ak^{\alpha}-c\big)$$

with $0\le c\le Ak^{\alpha}$. One can see that the following function is a possible solution

$$V_t(k)=a_t+b_t\ln(k)$$

where each $a_t$ and $b_t$ is a constant; the recursion gives $b_T=\alpha$ and $b_t=\alpha(1+\beta\,b_{t+1})$. Further, the optimal amount to consume at time $t$ is

$$c_t^{\star}=\frac{Ak_t^{\alpha}}{1+\beta\,b_{t+1}}\qquad\text{for }t<T,\qquad\text{and }c_T^{\star}=Ak_T^{\alpha},$$

i.e., if we write those expressions explicitly,

$$c_T^{\star}=Ak_T^{\alpha}$$
$$c_{T-1}^{\star}=\frac{Ak_{T-1}^{\alpha}}{1+\alpha\beta}$$
$$c_{T-2}^{\star}=\frac{Ak_{T-2}^{\alpha}}{1+\alpha\beta+(\alpha\beta)^{2}}$$

…etc.
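Since everything above boils down to a recursion on the constants $b_t$, it is easy to check numerically. Here is a minimal sketch in Python, with arbitrary values of $A$, $\alpha$, $\beta$, $T$ and $k_0$ chosen only for the illustration: it runs the backward recursion $b_T=\alpha$, $b_t=\alpha(1+\beta b_{t+1})$, simulates the optimal path forward, and compares one consumption level with the closed-form expression above.

```python
import numpy as np

# Hypothetical parameter values, for illustration only.
A, alpha, beta, T, k0 = 1.0, 0.3, 0.95, 10, 1.0

# Backward recursion on the coefficients of V_t(k) = a_t + b_t * ln(k):
# b_T = alpha, and b_t = alpha * (1 + beta * b_{t+1}).
b = np.zeros(T + 1)
b[T] = alpha
for t in range(T - 1, -1, -1):
    b[t] = alpha * (1 + beta * b[t + 1])

# Forward simulation of the optimal path: c_t = A k_t^alpha / (1 + beta * b_{t+1}),
# with c_T = A k_T^alpha in the last period, and k_{t+1} = A k_t^alpha - c_t.
k, c = np.zeros(T + 1), np.zeros(T + 1)
k[0] = k0
for t in range(T + 1):
    y = A * k[t] ** alpha
    c[t] = y if t == T else y / (1 + beta * b[t + 1])
    if t < T:
        k[t + 1] = y - c[t]

# Check against the closed form: c_{T-j} = A k^alpha / (1 + ab + ... + (ab)^j).
j = 2
closed_form = A * k[T - j] ** alpha / sum((alpha * beta) ** i for i in range(j + 1))
print(c[T - j], closed_form)   # the two numbers should coincide
```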

  • A reformulation of the optimisation problem

Another way of expressing the optimization problem is the following: $x_t$ was the variable of interest, $c_t$ was the control variable, and $x_{t+1}=h(x_t,c_t)$ for some transition function $h$. Thus, the program

$$\max_{c_0,\dots,c_T}\ \sum_{t=0}^{T}\beta^{t}\,u(c_t)\qquad\text{subject to }x_{t+1}=h(x_t,c_t),\ x_0\text{ given,}$$

can be expressed as

$$V_t(x)=\max_{c}\big\{u(c)+\beta\,V_{t+1}\big(h(x,c)\big)\big\},\qquad\text{with }V_{T+1}\equiv 0.$$
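Written this way, the recursion can also be solved numerically when no closed form is available, by discretizing the state and maximizing over a grid of controls. The sketch below is only an illustration of that idea: the grids, the parameter values, and the choices $u(c)=\ln(c)$, $h(x,c)=Ax^{\alpha}-c$ (borrowed from the example above) are all placeholders.

```python
import numpy as np

# Backward recursion V_t(x) = max_c { u(c) + beta * V_{t+1}(h(x, c)) } on a grid.
A, alpha, beta, T = 1.0, 0.3, 0.95, 10
grid = np.linspace(0.05, 1.5, 200)               # discretized state space

def u(c): return np.log(c)                       # placeholder utility
def h(x, c): return A * x ** alpha - c           # placeholder transition

V = np.zeros((T + 2, grid.size))                 # V[T+1] = 0: nothing after the horizon
for t in range(T, -1, -1):                       # backward sweep: t = T, ..., 0
    for i, x in enumerate(grid):
        c = np.linspace(1e-6, A * x ** alpha - 1e-6, 200)   # feasible consumptions
        nxt = np.interp(h(x, c), grid, V[t + 1])   # V_{t+1} at the next state
        V[t, i] = np.max(u(c) + beta * nxt)        # (off-grid states are clamped)

print(V[0, 100])   # approximate value V_0(x) at x = grid[100]
```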

Several extensions can be considered:

  • consider an infinite sum
$$\sum_{t=0}^{\infty}\beta^{t}\,u(c_t)$$
  • consider a random component
$$\mathbb{E}\left[\sum_{t=0}^{T}\beta^{t}\,u(c_t)\right]$$
  • consider a continuous version
$$\int_{0}^{T}u(c_t)\,\mathrm{d}t$$

But those items will be for posts that I still have to write down….

