Ten years ago, I gave a short course in Louvain-la-Neuve on “an introduction to multivariate and dynamic risk measures”. I wrote some notes that are still online, but I never found time to do anything with them…
Graduate Crash Course on Risk Measures
Tomorrow morning, I will give a crash course on risk measures at Louvain-la-Neuve, in Belgium. This is a crash course for PhD students (and researchers), with a long introduction on the univariate static framework (and some mathematical tools that will be interesting later on, such as the Fenchel transform and, more generally, convexity, as well as some results on optimal transport). I will also mention what was obtained in decision theory, inspired by Itzhak Gilboa’s Theory of Decision under Uncertainty. Then I will mention extensions to multivariate risk measures, based on Marc Henry and Alfred Galichon’s work. Finally, I will conclude by introducing the difficulty of deriving dynamic risk measures.
The slides are based on a document I am still working on. Unfortunately, the deeper I go into explaining the roots of the axioms, or the assumptions, the more papers I discover (which I then need to read, and understand). So I guess I will need some time to finalize my survey. Note that I decided to skip details on technical issues when working on $L^\infty$, and on the weak topology on the dual of $L^\infty$. I will try to add additional references in the notes, but I wanted the slides to be as simple as possible. I also want to add more connections with statistical results, such as the Neyman-Pearson lemma, for instance (as mentioned in a paper by Alexander Schied). All my apologies for the typos, too.
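To give an idea of how those tools show up, recall the standard definitions (with generic notation, not necessarily the one used in the slides): the Fenchel-Legendre transform of a function $f$ is
$$f^\star(x^\star)=\sup_{x}\bigl\{\langle x,x^\star\rangle-f(x)\bigr\},$$
and, for a convex risk measure $\rho$ on $L^\infty$, Fenchel duality yields (under suitable regularity) the robust representation
$$\rho(X)=\sup_{Q}\bigl\{\mathbb{E}_Q[-X]-\alpha(Q)\bigr\},\qquad\text{where}\qquad\alpha(Q)=\sup_{X}\bigl\{\mathbb{E}_Q[-X]-\rho(X)\bigr\}$$
is the associated penalty function, the supremum being taken over (suitably regular) probability measures $Q$.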
Optimal control, part 2
In the first part (here), we introduced Bellman’s idea of backward induction. But what if we now consider an infinite time horizon? Actually, the maths will be even simpler… and we will be able to use fixed point theorems to derive solutions.
- The mathematical framework
Here, consider the following value function
$$v(x)=\sup_{\{x_t\}_{t\geq0}}\left\{\sum_{t=0}^{\infty}\beta^t u(x_t,x_{t+1})\right\}$$
and define the associated problem
$$\mathcal{P}(x):\qquad\sup_{\{x_t\}_{t\geq0}}\left\{\sum_{t=0}^{\infty}\beta^t u(x_t,x_{t+1})\right\}\quad\text{subject to }x_0=x\text{ and }x_{t+1}\in\Gamma(x_t)\text{ for all }t,$$
where $\beta\in(0,1)$ is a discount factor and $\Gamma$ describes the feasible transitions. A sequence $\{x^\star_t\}_{t\geq0}$ is said to be an admissible solution for starting point $x$
if $x^\star_0=x$ and $x^\star_{t+1}\in\Gamma(x^\star_t)$ for all $t\geq0$. If we reformulate the dynamic programming idea, we obtain that if $\{x^\star_t\}_{t\geq0}$ is a solution to problem $\mathcal{P}(x_0)$, then for all $T\geq0$, the sequence $\{x^\star_{T+t}\}_{t\geq0}$ is a solution to problem $\mathcal{P}(x^\star_T)$. It follows that the function $v$ is a solution of Bellman’s equation
$$v(x)=\sup_{y\in\Gamma(x)}\bigl\{u(x,y)+\beta\,v(y)\bigr\}$$
Note that it can be seen as some fixed point result, since
$$v(x)=\sup_{y\in\Gamma(x)}\bigl\{u(x,y)+\beta\,v(y)\bigr\}=(T(v))(x)$$
i.e.
$$v=T(v)$$
where $T$ is the operator defined by $(T(f))(x)=\sup_{y\in\Gamma(x)}\{u(x,y)+\beta f(y)\}$.
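To see why this fixed point formulation is useful, here is the standard contraction argument (sketched under the simplifying assumption that $u$ is bounded). For bounded functions $f$ and $g$,
$$\|T(f)-T(g)\|_\infty\leq\beta\,\|f-g\|_\infty,$$
so, since $\beta\in(0,1)$, $T$ is a contraction on the set of bounded functions equipped with the sup norm; by Banach’s fixed point theorem it has a unique fixed point $v$, and for any starting function $v_0$,
$$\|T^n(v_0)-v\|_\infty\leq\beta^n\,\|v_0-v\|_\infty\longrightarrow0,$$
which is precisely the iterative strategy used in the last section below.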
So far, it shouldn’t be so hard….
- Frank Ramsey’s model (discrete version)
In 1928, Frank Ramsey wanted to understand the optimal amount of savings in a dynamic perspective (how much of its income a nation should save). Consider the following infinite horizon problem, where some planner wants to maximize
$$\sum_{t=0}^{\infty}\beta^t u(c_t)$$
subject to constraints $c_t+k_{t+1}\leq f(k_t)$, and $c_t,k_t\geq0$ (with $k_0$ given).
Before looking at dynamic programming answers, we might start with standard Lagrangian optimization techniques.
Assuming concavity of the utility function and of the production function, we can look only for interior solutions. Define the Lagrangian as
$$\mathcal{L}=\sum_{t=0}^{\infty}\beta^t u(c_t)+\sum_{t=0}^{\infty}\mu_t\bigl[f(k_t)-c_t-k_{t+1}\bigr]$$
The first order conditions are then given by
$$\beta^t u'(c_t)=\mu_t$$
and
$$\mu_t=\mu_{t+1}\,f'(k_{t+1})$$
Assume further some terminal condition, e.g.
$$\lim_{t\rightarrow\infty}\mu_t\,k_{t+1}=0$$
(also called the transversality condition). If we combine those two conditions (so that $\beta^t u'(c_t)=\mu_t=\mu_{t+1}f'(k_{t+1})=\beta^{t+1}u'(c_{t+1})f'(k_{t+1})$), and assume that the first constraint is saturated, we obtain the so-called Euler equation,
$$u'(c_t)=\beta\,u'(c_{t+1})\,f'(k_{t+1})$$
It is also possible to use Bellman’s equation: given the dynamic of the capital, $k_{t+1}=f(k_t)-c_t$, Bellman’s equation writes
$$v(k_t)=\max_{0\leq k_{t+1}\leq f(k_t)}\bigl\{u\bigl(f(k_t)-k_{t+1}\bigr)+\beta\,v(k_{t+1})\bigr\}$$
The first order condition states
$$u'\bigl(f(k_t)-k_{t+1}\bigr)=\beta\,v'(k_{t+1})$$
But since $v$ is unknown, so is its derivative. However, from the envelope theorem, we obtain something like
$$v'(k_t)=u'(c_t)\,f'(k_t)$$
where
$$c_t=f(k_t)-k_{t+1}$$
We can then write
$$v'(k_{t+1})=u'(c_{t+1})\,f'(k_{t+1})$$
i.e.
$$\beta\,v'(k_{t+1})=\beta\,u'(c_{t+1})\,f'(k_{t+1})$$
and finally
$$u'(c_t)=\beta\,u'(c_{t+1})\,f'(k_{t+1})$$
which is Euler’s equation.
- A specified model, with calculations
As in the previous post (here), consider a log utility function, $u(c)=\log(c)$, and a power production function, $f(k)=k^\alpha$, with $\alpha\in(0,1)$. The dynamic is then
$$k_{t+1}=f(k_t)-c_t=k_t^\alpha-c_t$$
Note that fixed points (steady states) are here obtained from the Euler equation: since $u'(c)=1/c$ and $f'(k)=\alpha k^{\alpha-1}$, a steady state satisfies $\alpha\beta (k^\star)^{\alpha-1}=1$, i.e.
$$k^\star=(\alpha\beta)^{\frac{1}{1-\alpha}}$$
and
$$c^\star=(k^\star)^\alpha-k^\star=(\alpha\beta)^{\frac{\alpha}{1-\alpha}}-(\alpha\beta)^{\frac{1}{1-\alpha}}$$
Recall that the value function is defined as
$$v(k_0)=\max_{\{c_t\}}\left\{\sum_{t=0}^{\infty}\beta^t\log(c_t)\right\}\quad\text{subject to }k_{t+1}=k_t^\alpha-c_t$$
A natural idea to derive the value function can be to iterate, i.e.
$$v_{n+1}(k)=\max_{0\leq k'\leq k^\alpha}\bigl\{\log(k^\alpha-k')+\beta\,v_n(k')\bigr\}$$
starting with a simple function, e.g. the null function, at step 0. At step n=1,
$$v_1(k)=\max_{0\leq k'\leq k^\alpha}\bigl\{\log(k^\alpha-k')\bigr\}$$
The maximum is reached at $k'=0$, thus
$$v_1(k)=\alpha\log k$$
At step n=2,
$$v_2(k)=\max_{0\leq k'\leq k^\alpha}\bigl\{\log(k^\alpha-k')+\beta\,v_1(k')\bigr\}$$
i.e.
$$v_2(k)=\max_{0\leq k'\leq k^\alpha}\bigl\{\log(k^\alpha-k')+\alpha\beta\log k'\bigr\}$$
The first order condition is then
$$\frac{1}{k^\alpha-k'}=\frac{\alpha\beta}{k'}$$
and thus, we obtain
$$k'=\frac{\alpha\beta}{1+\alpha\beta}\,k^\alpha$$
that can be plugged in the previous equation, i.e.
$$v_2(k)=\alpha(1+\alpha\beta)\log k+\text{constant}$$
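To double check the algebra of that step, the first order condition at step n=2 can also be solved symbolically, for instance with sympy (a small sketch, with arbitrary symbol names):

```python
import sympy as sp

# capital today (k), capital tomorrow (kp), and the two parameters
k, kp, alpha, beta = sp.symbols("k kp alpha beta", positive=True)

# objective at step n=2: log(k^alpha - k') + beta * v_1(k'), with v_1(k') = alpha*log(k')
objective = sp.log(k**alpha - kp) + beta * alpha * sp.log(kp)

# first order condition in k', and its solution
kp_star = sp.solve(sp.Eq(sp.diff(objective, kp), 0), kp)[0]
print(kp_star)                            # alpha*beta*k**alpha/(alpha*beta + 1)

# coefficient of log(k) in v_2: since v_2(k) = C*log(k) + constant, C = k * v_2'(k)
v2 = objective.subs(kp, kp_star)
print(sp.simplify(k * sp.diff(v2, k)))    # alpha*(1 + alpha*beta)
```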
At step 3, we start from that new expression, derive the first order condition, and we get
$$k'=\frac{\alpha\beta(1+\alpha\beta)}{1+\alpha\beta(1+\alpha\beta)}\,k^\alpha$$
and
$$v_3(k)=\alpha\bigl(1+\alpha\beta+(\alpha\beta)^2\bigr)\log k+\text{constant}$$
and so on…
And finally, we can prove that
$$v_n(k)=\alpha\bigl(1+\alpha\beta+\cdots+(\alpha\beta)^{n-1}\bigr)\log k+\text{constant}$$
i.e. $v_n(k)\rightarrow v(k)$ as $n\rightarrow\infty$. Assuming that
$$v(k)=A+\frac{\alpha}{1-\alpha\beta}\log k,$$
we can actually prove that the optimal policy is
$$k_{t+1}=\alpha\beta\,k_t^\alpha$$
(and $A$ has a form that can be made explicit).
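To see this convergence numerically, here is a small value function iteration on a grid (a rough sketch, with arbitrary parameter values), compared with the closed form $v(k)=A+\frac{\alpha}{1-\alpha\beta}\log k$ and the optimal policy $k_{t+1}=\alpha\beta\,k_t^\alpha$:

```python
import numpy as np

alpha, beta = 0.3, 0.95                 # arbitrary parameter values, for illustration
grid = np.linspace(0.05, 1.0, 500)      # grid for the capital stock k

def bellman(v):
    """One application of the Bellman operator T on the grid.
    Returns the updated value function and the greedy policy k'."""
    c = grid[:, None] ** alpha - grid[None, :]          # consumption for every pair (k, k')
    payoff = np.where(c > 0,
                      np.log(np.maximum(c, 1e-12)) + beta * v[None, :],
                      -np.inf)                           # infeasible choices are ruled out
    return payoff.max(axis=1), grid[payoff.argmax(axis=1)]

v = np.zeros_like(grid)                 # step 0: the null function
for _ in range(2000):                   # iterate v_{n+1} = T(v_n) until numerical convergence
    v_new, policy = bellman(v)
    gap = np.max(np.abs(v_new - v))
    v = v_new
    if gap < 1e-9:
        break

# closed-form benchmark: v(k) = A + B*log(k), with B = alpha/(1 - alpha*beta)
B = alpha / (1 - alpha * beta)
A = (np.log(1 - alpha * beta) + alpha * beta / (1 - alpha * beta) * np.log(alpha * beta)) / (1 - beta)
print("max error on v :", np.max(np.abs(v - (A + B * np.log(grid)))))
print("max error on k':", np.max(np.abs(policy - alpha * beta * grid ** alpha)))
```

Both errors are of the order of the grid discretisation, and shrink as the grid gets finer.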