Tag Archives: control

Modeling Dynamic Incentives: Application to Basketball

I will give a talk on “Modeling Dynamic Incentives: Application to Basketball” at the GERAD (Groupe d’études et de recherche en analyse des décisions) on June 6th. This is joint work with Nathalie Colombier and Romuald Elie.

An important aspect of the strategy of most organizations is the provision of incentives to employees to meet the organization’s objectives. Typically this implies tying pay to performance (see Prendergast, 1999). In order to reward employees for their effort, firms spend considerable resources on performance evaluations. In many cases, evaluation consists in comparing actual performance to a pre-defined individual target. Another frequently used format is relative performance evaluation. Relative performance evaluation may motivate employees to work harder, but it may also be demoralizing and create an excessively competitive workplace, which may hinder overall performance; see Lazear (1989). Determining the overall impact of relative performance evaluation is therefore crucial for companies. Economic research on relative performance evaluation has mainly focused on the comparison of final performances between competitors, as in tournament theory, and on quantitative and subjective performance ratings (Lazear and Gibbs, 2009). In contrast, what happens during a competition, and the impact of feedback frequency on effort, have so far received little attention.

Following Berger and Pope (2011), we use a basketball application to get a better understanding of the role of feedback information. Sports datasets allow us to observe the score and team behavior continuously (during a game, but also during the season), which can be used as a proxy for effort. Berger and Pope (2011) asked “can losing lead to winning?”, looking at the impact of the halftime score difference on the winning probability in NCAA (college) and NBA (pro) games. More precisely, they studied, with a logit model, whether a team losing at halftime is more likely to win than expected. They find that, usually, the larger the halftime lead, the more likely a team is to win. But around a score difference of 0 they observe a discontinuity: losing by a small margin (e.g. being down by 1 point) can increase effort and lead to winning the game. In this paper we try to answer the question “when does losing lead to winning?”.
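To give an idea of the type of regression involved, here is a minimal sketch in Python, on simulated games (since the NCAA/NBA play-by-play data is not reproduced here): a logit model for the probability of winning as a function of the halftime score difference, with a dummy variable for being behind at halftime, whose coefficient captures the discontinuity around 0.

```python
import numpy as np
import statsmodels.api as sm

# A sketch of a Berger & Pope style logit, on simulated data (NOT their
# NCAA/NBA dataset): P(win) as a function of the halftime score difference,
# plus a dummy for trailing at halftime that measures the discontinuity at 0.
rng = np.random.default_rng(1)
n = 5000
diff = rng.integers(-15, 16, size=n).astype(float)   # halftime score difference
behind = (diff < 0).astype(float)                    # trailing at halftime
p = 1 / (1 + np.exp(-(0.2 * diff + 0.5 * behind)))   # simulated 'losing boost'
win = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([diff, behind]))
fit = sm.Logit(win, X).fit(disp=0)
print(fit.params)   # a positive coefficient on 'behind' is the discontinuity
```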

Optimal control, part 2

In the first part (here), we introduced Bellman’s idea of backward induction. But what if we now consider an infinite time horizon? Actually, the maths will be even simpler… and we will be able to use a fixed point theorem to derive solutions.

  • The mathematical framework

Here, consider the following value function,

$$v(x)=\sup\left\{\sum_{t=0}^{+\infty}\beta^t F(x_t,x_{t+1})\right\}$$

and define, for any starting point $x$, the problem

$$\mathcal{P}(x):\qquad \max_{(x_t)_{t\geq 0}}\ \sum_{t=0}^{+\infty}\beta^t F(x_t,x_{t+1})$$

A sequence $(x_t)_{t\geq 0}$ is said to be an admissible solution for starting point $x$,

$$(x_t)_{t\geq 0}\in\mathcal{A}(x)$$

if $x_0=x$ and $x_{t+1}\in\Gamma(x_t)$ for all $t\geq 0$. If we reformulate the dynamic programming idea, we obtain that if $(x^\star_t)_{t\geq 0}$ is a solution to problem $\mathcal{P}(x_0)$, then for all $k\geq 1$, the sequence $(x^\star_t)_{t\geq k}$ is a solution to problem $\mathcal{P}(x^\star_k)$. It follows that the function $v$ is a solution of Bellman’s equation

$$v(x)=\sup_{y\in\Gamma(x)}\left\{F(x,y)+\beta v(y)\right\}$$

Note that this can be seen as a fixed point result, $v=T(v)$, where

$$T(w)(x)=\sup_{y\in\Gamma(x)}\left\{F(x,y)+\beta w(y)\right\}$$

is a contraction (with factor $\beta$) for the sup norm.
So far, it shouldn’t be so hard….
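Numerically, this fixed point can be reached by simply iterating the operator $T$. Below is a minimal sketch in Python; the reward $F$ and the correspondence $\Gamma$ on the grid are illustrative choices of mine, not taken from the post.

```python
import numpy as np

# Iterate the Bellman operator T(w)(x) = sup_y { F(x,y) + beta * w(y) } on a
# grid until it reaches its fixed point. F and Gamma are illustrative choices,
# only there to make the sketch runnable.
beta = 0.9
x = np.linspace(0.1, 2.0, 200)                                # state grid
F = np.log(np.maximum(x[:, None] - x[None, :] + 1.0, 1e-12))  # F(x, y)
feasible = x[None, :] <= x[:, None] + 0.5                     # y in Gamma(x)

v = np.zeros(len(x))                       # any bounded starting function works
for _ in range(1000):
    v_new = np.where(feasible, F + beta * v[None, :], -np.inf).max(axis=1)
    if np.max(np.abs(v_new - v)) < 1e-10:  # contraction: errors shrink by beta
        break
    v = v_new
```

Since $T$ is a $\beta$-contraction, the iterates converge to the same limit whatever the starting function, and that limit is the value function $v$.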

  • Frank Ramsey’s model (discrete version)

In 1928, Frank Ramsey wanted to understand the amount of savings in a dynamic perspective (how much of its income should a nation save?). Consider the following infinite horizon problem, where a planner wants to maximize

$$\sum_{t=0}^{+\infty}\beta^t u(c_t)$$

subject to the constraints $c_t+k_{t+1}\leq f(k_t)$, $c_t\geq 0$, $k_t\geq 0$, and $k_0$ given.
Before looking at dynamic programming answers, we might start with standard Lagrangian optimization techniques.
Assuming concavity of the utility and production functions, we should look only for interior solutions. Define the Lagrangian as

$$\mathcal{L}=\sum_{t=0}^{+\infty}\beta^t u(c_t)+\sum_{t=0}^{+\infty}\lambda_t\left[f(k_t)-c_t-k_{t+1}\right]$$
Thus, the first order conditions are given by

$$\frac{\partial\mathcal{L}}{\partial c_t}=\beta^t u'(c_t)-\lambda_t=0$$

and

$$\frac{\partial\mathcal{L}}{\partial k_{t+1}}=-\lambda_t+\lambda_{t+1}f'(k_{t+1})=0$$
Assume further some terminal condition, e.g.

$$\lim_{t\rightarrow+\infty}\lambda_t k_{t+1}=0$$

(also called a transversality condition). If we combine the two first order conditions (substituting $\lambda_t=\beta^t u'(c_t)$ into the second one), and assume that the first constraint is saturated, i.e. $c_t+k_{t+1}=f(k_t)$, we obtain the so-called Euler equation,

$$u'(c_t)=\beta\,u'(c_{t+1})\,f'(k_{t+1})$$
It is also possible to use Bellman’s equation: given the dynamic of the capital, $k_{t+1}=f(k_t)-c_t$, the value function satisfies

$$v(k)=\max_{c}\left\{u(c)+\beta v\big(f(k)-c\big)\right\}$$
The first order condition states

$$u'(c)=\beta v'\big(f(k)-c\big)$$
But since $v$ is unknown, so is its derivative. However, from the envelope theorem, we obtain

$$v'(k)=\beta v'\big(f(k)-c\big)f'(k)$$

which, using the first order condition, becomes

$$v'(k)=u'(c)\,f'(k)$$
We can then write

$$u'(c_t)=\beta v'(k_{t+1})$$

and

$$v'(k_{t+1})=u'(c_{t+1})\,f'(k_{t+1})$$

and finally

$$u'(c_t)=\beta\,u'(c_{t+1})\,f'(k_{t+1})$$

which is Euler’s equation.
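As a quick symbolic check of this equation, here is a sketch in the log/power specification used in the next section, with the candidate policy $c=(1-\alpha\beta)k^\alpha$ (the policy that the value iteration below eventually produces): both sides of the Euler equation coincide along the generated path.

```python
import sympy as sp

# Check Euler's equation u'(c_t) = beta * u'(c_{t+1}) * f'(k_{t+1}) in the
# log/power case u(c) = ln(c), f(k) = k**alpha, along the path generated by
# the candidate policy c = (1 - alpha*beta) * k**alpha.
alpha, beta, k = sp.symbols("alpha beta k", positive=True)
c = (1 - alpha * beta) * k ** alpha                 # consumption today
k1 = k ** alpha - c                                 # capital tomorrow
c1 = (1 - alpha * beta) * k1 ** alpha               # consumption tomorrow
lhs = 1 / c                                         # u'(c_t)
rhs = beta * (1 / c1) * alpha * k1 ** (alpha - 1)   # beta * u'(c_{t+1}) * f'(k_{t+1})
print(sp.simplify(lhs - rhs))                       # prints 0
```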

  • A specified model, with calculations

As in the previous post (here), consider a log utility function and a power production function, $u(c)=\ln c$ and $f(k)=k^\alpha$. The dynamic is then

$$k_{t+1}=k_t^\alpha-c_t$$

with $k_0$ given.

Note that the fixed points (steady states) are here obtained from the Euler equation, $\alpha\beta (k^\star)^{\alpha-1}=1$, i.e.

$$k^\star=(\alpha\beta)^{\frac{1}{1-\alpha}}\qquad\text{and}\qquad c^\star=(k^\star)^\alpha-k^\star$$
Recall that the value function is defined as

$$v(k_0)=\max\left\{\sum_{t=0}^{+\infty}\beta^t\ln c_t\right\}$$

A natural idea to derive the value function can be to iterate, i.e.

$$v_{n+1}(k)=\max_{c}\left\{\ln c+\beta v_n\big(k^\alpha-c\big)\right\}$$

starting with a simple function, e.g. the null function, at step 0. At step $n=1$,

$$v_1(k)=\max_{0\leq c\leq k^\alpha}\left\{\ln c\right\}=\alpha\ln k$$

(everything is consumed, $c=k^\alpha$).
At step $n=2$,

$$v_2(k)=\max_{c}\left\{\ln c+\alpha\beta\ln\big(k^\alpha-c\big)\right\}$$

The first order condition is then

$$\frac{1}{c}=\frac{\alpha\beta}{k^\alpha-c}$$

and thus, we obtain

$$c=\frac{k^\alpha}{1+\alpha\beta}$$

that can be plugged into the previous equation, i.e.

$$v_2(k)=\alpha(1+\alpha\beta)\ln k+A_2$$

for some constant $A_2$.
At step 3, we start from that new expression, derive the first order condition, and we get

$$c=\frac{k^\alpha}{1+\alpha\beta+(\alpha\beta)^2}$$

and

$$v_3(k)=\alpha\left(1+\alpha\beta+(\alpha\beta)^2\right)\ln k+A_3$$
and so on…
And finally, we can prove that

$$v_n(k)=\alpha\,\frac{1-(\alpha\beta)^n}{1-\alpha\beta}\,\ln k+A_n$$

i.e. $v_n\rightarrow v$ as $n\rightarrow\infty$. Assuming that

$$\alpha\beta<1$$

we can actually prove that

$$v(k)=\frac{\alpha}{1-\alpha\beta}\ln k+A$$

(and the constant $A$ has a form that can be made explicit).
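The iteration above is easy to reproduce numerically. Here is a minimal sketch (the values of $\alpha$ and $\beta$ are arbitrary choices of mine): value iteration on a capital grid, compared with the closed form $v(k)=\frac{\alpha}{1-\alpha\beta}\ln k+A$, where one can check that $A=\frac{1}{1-\beta}\left[\ln(1-\alpha\beta)+\frac{\alpha\beta}{1-\alpha\beta}\ln(\alpha\beta)\right]$, the corresponding optimal policy being $c=(1-\alpha\beta)k^\alpha$.

```python
import numpy as np

# Value iteration for the log/power Ramsey model: iterate
# v_{n+1}(k) = max_c { ln(c) + beta * v_n(k**alpha - c) } on a grid,
# parameterizing the choice by next-period capital k' = k**alpha - c.
alpha, beta = 0.3, 0.95                       # arbitrary values
k = np.linspace(0.05, 1.5, 400)               # capital grid
c = k[:, None] ** alpha - k[None, :]          # consumption for each (k, k') pair
reward = np.where(c > 0, np.log(np.maximum(c, 1e-300)), -np.inf)

v = np.zeros_like(k)                          # step 0: the null function
for _ in range(2000):
    v_new = (reward + beta * v[None, :]).max(axis=1)
    if np.max(np.abs(v_new - v)) < 1e-10:
        break
    v = v_new

B = alpha / (1 - alpha * beta)                # slope of the closed form
A = (np.log(1 - alpha * beta)
     + alpha * beta / (1 - alpha * beta) * np.log(alpha * beta)) / (1 - beta)
print(np.max(np.abs(v - (B * np.log(k) + A))))  # small, up to the grid error

k_star = (alpha * beta) ** (1 / (1 - alpha))  # steady state of the optimal dynamic
assert abs(k_star ** alpha - (1 - alpha * beta) * k_star ** alpha - k_star) < 1e-12
```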

Optimal control, part 1

Since I recently had a request (from Benoît) about optimal control, I will start here a series of posts on that topic. But first, let us start with a simple problem, with discrete time, no randomness, and a finite horizon. This might be too simple a framework to model complex problems, but it should be interesting to derive heuristic intuitions (I will skip the mathematical details here; they can be found in very good books… references will come soon).

  • An introduction to backward induction

Before starting seriously, let us consider the following example: we want to reach the red city on the right from the red city on the left, as fast as possible. There are some roads, and the numbers give the number of hours each road takes. Let us prove that the optimal way is the red one,

The first idea can be to calculate all possible trajectories, but with a large number of roads, the number of possible ways can soon become extremely large. An alternative can be to look backwards (like any student facing a question where the answer is given: start from the end, and try to find a possible way to reach it).

The numbers in green are the number of hours still needed once we’ve reached that point. Let us move one more step backward, and consider the orange points,

In the middle, we have to choose either to go up (it will still take 9 hours) or to go down (and then 14 hours are necessary). Thus, the optimal strategy, once we’ve reached that point, is to take the 9 hour road. This is the idea proposed by Bellman. Let us go backward again, to the purple cities. Again, we have to choose the shorter way,

From the top city, the fastest road will take 13 hours, and from the bottom one, it will take 16 hours. Thus, since it takes 10 hours to reach the top city (23 hours in total), this road is necessarily faster than taking the one below (8 hours to reach the nearest city, hence 24 hours in total).
And this is it. We now have an intuitive idea of what should be done to find optimal strategies.
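Since the figures are not reproduced here, a small sketch may help. The network below is a made-up graph whose key numbers match the text (10 and 8 hours to reach the two purple cities, from which 13 and 16 hours remain): starting from the destination, we compute, city by city, the number of hours still needed.

```python
# Backward induction on a toy road network (a made-up graph, not the one in
# the original figures). hours[a][b] is the travel time of the road from a to b.
hours = {
    "start": {"A": 10, "B": 8},   # A is the 'top' purple city, B the 'bottom' one
    "A": {"C": 5, "D": 7},
    "B": {"D": 10, "E": 6},
    "C": {"end": 8},
    "D": {"end": 6},
    "E": {"end": 10},
}
order = ["end", "C", "D", "E", "A", "B", "start"]   # every road goes 'forward'
remaining, best = {"end": 0}, {}
for city in order[1:]:
    # Bellman: remaining(city) = min over roads of (road time + remaining(next city))
    best[city] = min(hours[city], key=lambda n: hours[city][n] + remaining[n])
    remaining[city] = hours[city][best[city]] + remaining[best[city]]
print(remaining["start"], best["start"])   # 23 hours in total, going through the top
```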

  • The optimization problem, discrete with finite horizon

Let us consider the following optimization problem: we want to find the optimal strategy $(u_0,u_1,\ldots,u_T)$ that maximizes the following function

$$\sum_{t=0}^{T}g(t,x_t,u_t)$$

with simple constraints, such as $x_t\in\mathcal{X}$ (a state space) and $x_{t+1}=f(t,x_t,u_t)$ (i.e. some dynamic constraints). Assume further that the starting point is given, i.e. $x_0=\underline{x}$. In economic applications, note that frequently $g(t,x,u)=\beta^t g(x,u)$, i.e. we consider a discounted version of the value.

  • The idea of dynamic programming

The intuitive idea of dynamic programming is that if the optimal path from A to C goes through B, then its portion from B to C must itself be optimal. Thus, it will be natural to consider backward induction techniques.
Thus, define, for $t\in\{0,1,\ldots,T\}$ and any state $x$, the problem $\mathcal{P}_t(x)$ of maximizing

$$\sum_{s=t}^{T}g(s,x_s,u_s)$$

over strategies $(u_t,\ldots,u_T)$, with $x_t=x$ and $x_{s+1}=f(s,x_s,u_s)$, and let $V_t(x)$ denote its value.
Then Bellman’s principle can be used to link those problems: if $(u^\star_t)_{t\geq 0}$ is a solution of problem $\mathcal{P}_0(\underline{x})$ then, for all $t\in\{0,1,\ldots,T\}$, $(u^\star_s)_{s\geq t}$ is a solution of problem $\mathcal{P}_t(x^\star_t)$.
Note that, so far, we assume that such an optimal sequence does exist. Thus, we get that for all $x$

$$V_t(x)=\max_{u}\left\{g(t,x,u)+V_{t+1}\big(f(t,x,u)\big)\right\}$$

and more generally, for any $s$ between $t$ and $T$,

$$V_t(x)=\max_{(u_t,\ldots,u_{s-1})}\left\{\sum_{r=t}^{s-1}g(r,x_r,u_r)+V_s(x_s)\right\}$$
Hence, from a practical point of view, we solve those equations using a backward approach, i.e. first

$$V_T(x)=\max_{u}\ g(T,x,u)$$

and then

$$V_{T-1}(x)=\max_{u}\left\{g(T-1,x,u)+V_T\big(f(T-1,x,u)\big)\right\}$$

and so on

$$V_{T-2}(x)=\max_{u}\left\{g(T-2,x,u)+V_{T-1}\big(f(T-2,x,u)\big)\right\}$$
… etc. It can be proved that the sequence $(u^\star_t)_{t\geq 0}$ is a solution of $\mathcal{P}_0(\underline{x})$ if and only if, for all $t$, $(u^\star_s)_{s\geq t}$ is a solution of problem

$$\mathcal{P}_t(x^\star_t)$$
So far, it does not look so difficult….
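This backward recursion is straightforward to implement. Here is a minimal generic sketch (the grids, the reward $g$ and the dynamic $f$ are illustrative choices of mine; the dynamic is mapped back onto the state grid by nearest neighbor):

```python
import numpy as np

# Generic finite horizon backward recursion: V_T first, then V_{T-1}, etc.,
# on discretized state and control spaces. g and f are illustrative.
T = 10
X = np.linspace(0.0, 1.0, 101)           # state grid
U = np.linspace(0.0, 1.0, 51)            # control grid
g = lambda t, x, u: 0.95 ** t * np.sqrt(np.maximum(x * u, 0.0))
f = lambda t, x, u: np.clip(x * (1.0 - u) + 0.05, 0.0, 1.0)

V = np.zeros((T + 2, len(X)))            # convention: V_{T+1} = 0
policy = np.zeros((T + 1, len(X)), dtype=int)
x, u = X[:, None], U[None, :]
for t in range(T, -1, -1):               # backward: t = T, T-1, ..., 0
    nxt = np.abs(f(t, x, u)[:, :, None] - X[None, None, :]).argmin(axis=2)
    Q = g(t, x, u) + V[t + 1][nxt]       # g(t,x,u) + V_{t+1}(f(t,x,u))
    policy[t] = Q.argmax(axis=1)         # optimal control (index) at (t, x)
    V[t] = Q.max(axis=1)
```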

  • A simple example

Let $c_t$ denote the consumption at period $t$, and assume consumption yields utility

$$u(c_t)=\ln c_t$$

as long as the consumer lives. Assume the consumer is impatient, or has a stronger preference for the present, so that he discounts future utility by a factor $\beta\in(0,1)$. Let $k_t$ be the capital he owns at time $t$. Assume that his initial capital is a given amount $k_0$, and suppose that this period’s capital and consumption determine next period’s capital as

$$k_{t+1}=\rho\,(k_t-c_t)$$

where $\rho$ is a positive constant and $c_t\leq k_t$. Assume further that capital cannot be negative. Then the consumer’s problem is simply

$$\max_{(c_0,\ldots,c_T)}\ \sum_{t=0}^{T}\beta^t\ln c_t$$
given $k_0$. Bellman’s equation is then

$$V_t(k)=\max_{0\leq c\leq k}\left\{\ln c+\beta V_{t+1}\big(\rho(k-c)\big)\right\}$$

which leads us to a simpler problem than the initial one, since only two variables are involved here, $k_t$ and $c_t$. And to solve that problem, we use backward induction techniques.
Since $V_T$ is known (at the final date, it is optimal to consume everything, so $V_T(k)=\ln k$), we can easily derive $V_{T-1}$, and so on until $V_0$. More precisely, given $V_{t+1}$, we can get $V_t(k)$, which is the maximum of the function

$$c\mapsto \ln c+\beta V_{t+1}\big(\rho(k-c)\big)$$
with $0\leq c\leq k$. One can see that the following function is a possible solution,

$$V_t(k)=a_t+b_t\ln k$$

where each $a_t$ and $b_t$ is a constant. Further, the optimal amount to consume at time $t$ is

$$c^\star_t=\frac{k_t}{1+\beta b_{t+1}}$$

i.e., if we make those expressions explicit,

$$c^\star_t=\frac{1-\beta}{1-\beta^{T-t+1}}\,k_t$$

(since $b_t=1+\beta+\cdots+\beta^{T-t}$).
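A quick numerical sanity check (a sketch, with arbitrary values of $\beta$, $\rho$, $T$ and $k_0$): compute the coefficients $b_t$ backwards, and compare the resulting consumption path with the closed form above.

```python
import numpy as np

# Backward induction for V_t(k) = a_t + b_t * ln(k): only b_t matters for the
# policy, with b_T = 1 and b_t = 1 + beta * b_{t+1}. Parameters are arbitrary.
beta, rho, T, k0 = 0.9, 1.05, 20, 100.0
b = np.ones(T + 1)
for t in range(T - 1, -1, -1):
    b[t] = 1 + beta * b[t + 1]

k = k0
for t in range(T + 1):
    c = k / b[t]                              # optimal consumption: k_t / b_t
    closed_form = (1 - beta) / (1 - beta ** (T - t + 1)) * k
    assert abs(c - closed_form) < 1e-9        # matches the explicit formula
    k = rho * (k - c)                         # next period's capital
```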



  • A reformulation of the optimisation problem

Another way of expressing the optimization problem is the following: $x_t$ was the variable of interest, $c_t$ was the control variable, and $x_{t+1}=\rho(x_t-c_t)$, i.e. $c_t=x_t-x_{t+1}/\rho$. Thus, the program

$$\max_{(c_0,\ldots,c_T)}\ \sum_{t=0}^{T}\beta^t\ln c_t$$

can be expressed as

$$\max_{(x_1,\ldots,x_{T+1})}\ \sum_{t=0}^{T}\beta^t\ln\left(x_t-\frac{x_{t+1}}{\rho}\right)$$
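This reformulation can be checked numerically. Below is a sketch (arbitrary parameters again, and with $x_{T+1}=0$ imposed, since it is clearly optimal to leave no capital at the end): maximizing directly over the path $(x_1,\ldots,x_T)$ with scipy recovers the backward induction solution.

```python
import numpy as np
from scipy.optimize import minimize

# Maximize sum_t beta^t ln(x_t - x_{t+1}/rho) directly over the path
# (x_1, ..., x_T), with x_0 given and x_{T+1} = 0. Parameters are arbitrary.
beta, rho, T, x0 = 0.9, 1.05, 10, 100.0

def objective(path):
    x = np.concatenate(([x0], path, [0.0]))
    c = x[:-1] - x[1:] / rho                  # implied consumption
    if np.any(c <= 0) or np.any(path < 0):
        return np.inf                         # infeasible path
    return -np.sum(beta ** np.arange(T + 1) * np.log(c))

start = x0 * 0.5 ** np.arange(1, T + 1)       # a feasible, decreasing path
res = minimize(objective, start, method="Nelder-Mead",
               options={"maxiter": 100000, "maxfev": 100000,
                        "xatol": 1e-10, "fatol": 1e-12})

x = np.concatenate(([x0], res.x, [0.0]))
c = x[:-1] - x[1:] / rho
t = np.arange(T + 1)
closed_form = (1 - beta) / (1 - beta ** (T - t + 1)) * x[:-1]
print(np.max(np.abs(c - closed_form)))        # close to 0
```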


Several extensions can be considered,

  • consider an infinite sum
  • consider a random component
  • consider a continuous version

But those items will be for posts that I still have to write down….