This afternoon, after a short visit to ETH Zürich yesterday, I will be at the Département de sciences actuarielles of the Université de Lausanne, talking about using optimal transport to mitigate unfair predictions and quantify counterfactual fairness. Slides are now online.
On my way to Toronto
Tomorrow, I will be on my way to Toronto (by train, as always). On Monday, I will give a seminar at the University of Toronto. The long title is Using optimal transport to mitigate unfair predictions and quantify counterfactual fairness (slides are available).
Many industries rely heavily on predictions of risk based on the characteristics of potential customers. Although the use of such models is common, researchers have long pointed out that these practices perpetuate discrimination based on sensitive features such as gender or race. Given that such discrimination can often be attributed to historical biases in the data, eliminating it, or at least mitigating it, is desirable. With the shift from traditional models to machine-learning-based predictions, calls for greater mitigation have grown anew, as simply excluding sensitive variables from the pricing process can be shown to be ineffective. In the first part of this seminar, we propose to mitigate possible discrimination (related to so-called "group fairness", i.e. discrepancies in score distributions across groups) through the use of Wasserstein barycenters instead of simple scaling. To demonstrate the effects and effectiveness of the approach, we apply it to real data and discuss its implications. This part is based on recent work with François Hu and Philipp Ratz (2310.20508, 2309.06627, 2306.12912 and 2306.10155). In the second part, we will focus on another aspect of discrimination, usually called "counterfactual fairness", where the goal is to quantify a potential discrimination "had that person not been Black" or "had that person not been a woman". The standard approach, called "ceteris paribus" (everything else remains unchanged), is not sufficient to take indirect discrimination into account, and we therefore consider a "mutatis mutandis" approach based on optimal transport. With multiple features, optimal transport becomes more challenging, and we suggest a sequential approach based on probabilistic graphical models. This part is based on recent work with Agathe Fernandes Machado and Ewen Gallic (2408.03425 and 2501.15549).
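To give a very rough idea of the group-fairness part, here is a minimal R sketch (a toy illustration of my own, not the implementation from the papers cited above) of the univariate Wasserstein-barycenter repair: each score is mapped to the weighted average of the two group quantile functions, evaluated at its within-group rank.

# toy sketch of a univariate Wasserstein-barycenter "repair" of scores
# (illustration only, not the implementation from the papers above)
set.seed(1)
n <- 1000
A <- rbinom(n, 1, .5)                                     # sensitive attribute
score <- ifelse(A == 1, rbeta(n, 2, 4), rbeta(n, 4, 2))   # group-dependent scores
p1 <- mean(A == 1); p0 <- 1 - p1
# within-group rank of each observation
u <- ifelse(A == 1, ecdf(score[A == 1])(score), ecdf(score[A == 0])(score))
# barycenter quantile function = weighted average of the group quantile functions
Qbar <- function(u) p0 * quantile(score[A == 0], u) + p1 * quantile(score[A == 1], u)
fair <- Qbar(u)
# the two group distributions of the repaired score are now (almost) identical
ks.test(fair[A == 0], fair[A == 1])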
Optimal Transport on Categorical Data for Counterfactuals using Compositional Data and Dirichlet Transport
Our recent paper, Optimal Transport on Categorical Data for Counterfactuals using Compositional Data and Dirichlet Transport, written with Agathe and Ewen, is now online.
Recently, optimal transport-based approaches have gained attention for deriving counterfactuals, e.g., to quantify algorithmic discrimination. However, in the general multivariate setting, these methods are often opaque and difficult to interpret. To address this, alternative methodologies have been proposed, using causal graphs combined with iterative quantile regressions (Plečko and Meinshausen (2020)) or sequential transport (Fernandes Machado et al. (2025)) to examine fairness at the individual level, often referred to as "counterfactual fairness." Despite these advancements, transporting categorical variables remains a significant challenge in practical applications with real datasets.
In this paper, we propose a novel approach to address this issue. Our method involves (1) converting categorical variables into compositional data and (2) transporting these compositions within the probabilistic simplex of \mathbb{R}^d. We demonstrate the applicability and effectiveness of this approach through an illustration on real-world data, and discuss limitations.
See https://github.com/fer-agathe/transport-simplex for the code.
Talk at StatQAM on Counterfactuals and Optimal Transport
Next Thursday, I will present our recent work with Emmanuel Flachaire and Ewen Gallic at the StatQAM seminar, Optimal Transport for Counterfactual Estimation: A Method for Causal Inference.
Many problems ask a question that can be formulated as a causal question: "what would have happened if…?" For example, "would the person have had surgery if he or she had been Black?" To address this kind of question, calculating an average treatment effect (ATE) is often uninformative, because one would like to know how much impact a variable (such as skin color) has on a specific individual, characterized by certain covariates. Trying to calculate a conditional ATE (CATE) seems more appropriate. In causal inference, the propensity score approach assumes that the treatment is influenced by x, a collection of covariates. Here, we take the dual view: doing an intervention, or changing the treatment (even just hypothetically, in a thought experiment, for example by asking what would have happened if a person had been Black), can have an impact on the values of x. We will see that optimal transport allows us to change certain characteristics that are influenced by the variable whose effect we are trying to quantify. We propose a mutatis mutandis version of the CATE: in dimension one, the CATE is computed relative to a probability level, associated with the rank of x (a single covariate) in the control population, by looking for the equivalent quantile in the test population. In higher dimension, one has to go through (multivariate) transport, and an application is proposed on the impact of some variables on the probability of having an unnatural birth (whether the mother smokes, or whether the mother is Black).
Slides are now online.
Optimal Transport for Counterfactual Estimation: A Method for Causal Inference
With Emmanuel Flachaire and Ewen Gallic, we recently uploaded a paper entitled Optimal Transport for Counterfactual Estimation: A Method for Causal Inference on arXiv.
Many problems ask a question that can be formulated as a causal question: "what would have happened if…?" For example, "would the person have had surgery if he or she had been Black?" To address this kind of question, calculating an average treatment effect (ATE) is often uninformative, because one would like to know how much impact a variable (such as skin color) has on a specific individual, characterized by certain covariates. Trying to calculate a conditional ATE (CATE) seems more appropriate. In causal inference, the propensity score approach assumes that the treatment is influenced by x, a collection of covariates. Here, we take the dual view: doing an intervention, or changing the treatment (even just hypothetically, in a thought experiment, for example by asking what would have happened if a person had been Black), can have an impact on the values of x. We will see that optimal transport allows us to change certain characteristics that are influenced by the variable whose effect we are trying to quantify. We propose a mutatis mutandis version of the CATE: in dimension one, the CATE is computed relative to a probability level, associated with the rank of x (a single covariate) in the control population, by looking for the equivalent quantile in the test population. In higher dimension, one has to go through (multivariate) transport, and an application is proposed on the impact of some variables on the probability of having an unnatural birth (whether the mother smokes, or whether the mother is Black).
Slides from a talk given last week are online.
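In dimension one, the quantile-matching step described in the abstract is straightforward to code. Here is a minimal sketch, on simulated data (not the application from the paper): the counterfactual of x is the quantile of the treated population at the probability level of x in the control population.

# minimal sketch of the one-dimensional "mutatis mutandis" counterfactual:
# map x in the control group to the same probability level in the treated group
set.seed(123)
x_control <- rnorm(1000, mean = 0, sd = 1)     # covariate among controls
x_treated <- rnorm(1000, mean = 1, sd = 1.5)   # covariate among treated
counterfactual <- function(x){
  u <- ecdf(x_control)(x)      # probability level in the control population
  quantile(x_treated, u)       # matching quantile in the treated population
}
counterfactual(0)   # roughly 1, the treated median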
On consequences of Goodhart’s law
This post was initially written in French, in the winter of 2021.
As Marilyn Strathern put it, Goodhart's law says that "when a measure becomes a target, it ceases to be a good measure." There are many economic applications, but this law also helps to understand the dangers of algorithmic decision making, and to explain the difficulty of using the data available since the beginning of the COVID-19 pandemic (caused by the SARS-CoV-2 virus).
Optimal Portfolios #2
Matching, Optimal Transport and Statistical Tests
To explain the "optimal transport" problem, we usually start with Gaspard Monge's "Mémoire sur la théorie des déblais et des remblais", which considers the problem of transporting a given distribution of matter (a pile of sand, for instance) onto another one (an excavation, for instance). The problem is usually stated in terms of distributions, and we seek the "optimal" transport from one distribution to the other. That formulation, in the context of distributions, is due to Leonid Kantorovich, in the 1940s, e.g. from the distribution on the left to the distribution on the right.
Consider now the context of finite sets of points. We want to transport mass from points \{A_1,\cdots,A_4\} to points \{B_1,\cdots,B_4\}. This is a complicated combinatorial problem. With 4 points, there are only 24 possible assignments to consider, but the number exceeds 20 billion with 15 points (on each side). For instance, the following one is usually seen as inefficient,
while the following is usually seen as much better
Of course, it depends on the cost of the transport, which depends on the distance between the origin and the destination. That cost is usually either linear or quadratic.
There are many applications of optimal transport in economics; see e.g. Alfred Galichon's book Optimal Transport Methods in Economics. And there are also applications in statistics, which is what I saw while discussing with Pierre when I was in Boston, in June. For instance, suppose we want to test whether two samples were drawn from the same distribution,
set.seed(13)
npoints <- 25
mu1 <- c(1, 1)
mu2 <- c(0, 2)
Sigma1 <- diag(1, 2, 2)
Sigma2 <- diag(1, 2, 2)
Sigma2[2, 1] <- Sigma2[1, 2] <- -0.5
Sigma1 <- 0.4 * Sigma1
Sigma2 <- 0.4 * Sigma2
library(mnormt)
X1 <- rmnorm(npoints, mean = mu1, Sigma1)   # first sample
X2 <- rmnorm(npoints, mean = mu2, Sigma2)   # second sample
plot(X1[, 1], X1[, 2], col = "blue")
points(X2[, 1], X2[, 2], col = "red")
Here we used a parametric model to generate our samples (as always), so we might think of a parametric test (testing whether the mean and variance parameters of the two distributions are equal), or we might prefer a nonparametric test. The idea Pierre mentioned was based on optimal transport. Consider some quadratic ground cost,
library(transport)
library(winference)
ground_p <- 2
p <- 1
w1 <- rep(1/npoints, npoints)   # uniform weights on each sample
w2 <- rep(1/npoints, npoints)
C <- cost_matrix_Lp(t(X1), t(X2), ground_p)   # pairwise cost matrix
a <- transport(w1, w2, costm = C^p, method = "shortsimplex")
then it is possible to match points in the two samples
nonzero <- which(a$mass != 0)
from_indices <- a$from[nonzero]
to_indices <- a$to[nonzero]
for (i in seq_along(from_indices)) {
  segments(X1[from_indices[i], 1], X1[from_indices[i], 2],
           X2[to_indices[i], 1], X2[to_indices[i], 2])
}
Here we can observe two things. First, the total cost can be seen as rather large,
cost <- function(a, X1, X2){
  nonzero <- which(a$mass != 0)
  naa <- a[nonzero, ]
  d <- function(i) (X1[naa$from[i], 1] - X2[naa$to[i], 1])^2 +
    (X1[naa$from[i], 2] - X2[naa$to[i], 2])^2
  sum(Vectorize(d)(1:nrow(naa)))
}
cost(a, X1, X2)
[1] 9.372472
and second, the direction of the transport is always (more or less) the same,
angle <- function(a, X1, X2){
  nonzero <- which(a$mass != 0)
  naa <- a[nonzero, ]
  d <- function(i) (X1[naa$from[i], 2] - X2[naa$to[i], 2]) /
    (X1[naa$from[i], 1] - X2[naa$to[i], 1])
  atan(Vectorize(d)(1:nrow(naa)))
}
mean(angle(a, X1, X2))
[1] -0.3266797
library(plotrix)
ag <- (angle(a, X1, X2)/pi) * 180
ag[ag < 0] <- ag[ag < 0] + 360
dag <- hist(ag, breaks = seq(0, 361, by = 1) - .5)
polar.plot(dag$counts, seq(0, 360, by = 1), main = "Test Polar Plot", lwd = 3, line.col = 4)
(actually, the plot above was obtained by generating a thousand samples of size 25)
In order to have a decent test, we need to see what happens under the null hypothesis, i.e. when both samples are drawn from the same distribution. Here is the optimal matching in that case, and here is the distribution of the total cost, when drawing a thousand pairs of samples,
VC <- rep(NA, 1000)
VA <- rep(NA, 1000 * npoints)
for(s in 1:1000){
  X1a <- rmnorm(npoints, mean = mu1, Sigma1)
  X1b <- rmnorm(npoints, mean = mu1, Sigma1)   # same distribution, under the null
  C <- cost_matrix_Lp(t(X1a), t(X1b), ground_p)
  ab <- transport(w1, w2, costm = C^p, method = "shortsimplex")
  VC[s] <- cost(ab, X1a, X1b)
  VA[s * npoints - (0:(npoints - 1))] <- angle(ab, X1a, X1b)
}
plot(density(VC))
So our cost of 9 obtained initially was not that high. Observe also that, when drawing from the same distribution, there is no longer any pattern in the directions of the optimal transport,
ag <- (VA/pi) * 180
ag[ag < 0] <- ag[ag < 0] + 360
dag <- hist(ag, breaks = seq(0, 361, by = 1) - .5)
polar.plot(dag$counts, seq(0, 360, by = 1), main = "Test Polar Plot", lwd = 3, line.col = 4)
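To turn this into an actual test, we can simply compare the cost obtained on the original pair of samples with this simulated null distribution; a minimal sketch:

# empirical p-value: proportion of null-simulated costs at least as large
# as the cost observed on the original two samples
mean(VC >= cost(a, X1, X2))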
Nice, isn't it? I guess I will spend some time next year working on those transport algorithms, since we have great R packages, and hundreds of applications in economics…
Why hit your first serve so hard?
In tennis, it is not uncommon to see players serve very hard on their first ball, and then ease off on the second. Is it to rest a bit after the first effort, or is there a real logic behind it?
Let us model all this (following a model presented by David Gale in 1971). A player has a set of possible serves, indexed by s \in [0,1]. For his first serve, he picks some s_1. The first difficulty is for the serve to be in (to clear the net, to keep things simple), and we denote by p(s) the probability that serve s clears the net. If the serve is in, the server has some probability of winning the point, denoted q(s). Although I do not know much about tennis, it is not absurd to assume some form of trade-off:
- a powerful serve has little chance of going in (p(s) small), but if it clears the net, the server has the advantage, and the probability of winning the point is high (q(s) large);
- a weak serve has a high chance of going in (p(s) large), but the ball is then easy for the opponent, and the probability of winning the point is low (q(s) small).
For a given serve s, the probability of winning the point (on that first serve) is then f(s)=p(s)q(s). Suppose the set of serves is "ordered", in the sense that s \mapsto p(s) can be assumed strictly increasing (serves go from powerful to weak as s increases). Given the initial remark, it is not absurd to assume that f takes values in [0,1], is increasing for s \le s^\star, and decreasing for s \ge s^\star.
The player seeks to maximize his probability of winning the point. But in tennis, if the first serve is out, the server is allowed a second chance. So the server actually chooses a pair (s_1,s_2), corresponding to his first and second serves, respectively. The probability of winning the point, for a given strategy (s_1,s_2), is then
P(s_1,s_2)=p(s_1)q(s_1)+[1-p(s_1)]\,p(s_2)q(s_2).
Note that the optimal strategy for the second ball is independent of what was done on the first. Indeed, if we maximize over s_2, for a given s_1,
\max_{s_2}\ [1-p(s_1)]\,p(s_2)q(s_2),
the first order condition is
[1-p(s_1)]\left[p'(s_2)q(s_2)+p(s_2)q'(s_2)\right]=0,
i.e.
f'(s_2)=0.
In other words, the second serve should be the one that maximizes the probability of winning the point, i.e. s_2=s^\star.
But if we know that it is optimal to play s_2=s^\star, then, for the first serve, the player seeks to solve
\max_{s_1}\left\{f(s_1)+[1-p(s_1)]\,f(s^\star)\right\},
whose first order condition is
f'(s_1)-p'(s_1)f(s^\star)=0,
i.e.
f'(s_1)=p'(s_1)f(s^\star).
The point s_1 satisfying this condition can be visualized graphically: f'(s_1) is the slope of the tangent to the red curve (the function f), and we want that tangent to have slope p'(s_1)f(s^\star), which is the slope of the blue curve s \mapsto p(s)f(s^\star) (as noted by David Gale). Since that slope is positive (p is increasing, and f(s^\star)>0), s_1 must lie on the increasing part of f, i.e. s_1 < s^\star: the first serve is on the "powerful" side of the optimal second serve.
In other words, it is optimal to serve harder the first time than the second… So, in the end, everything can be proved with a nice little model…
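The argument is easy to check numerically. Here is a small sketch, with arbitrary (made-up) functions p(s)=s and q(s)=1-s, so that serves are ordered from powerful (s close to 0) to soft (s close to 1):

p <- function(s) s            # probability the serve is in (increasing in s)
q <- function(s) 1 - s        # probability of winning the point if the serve is in
f <- function(s) p(s) * q(s)  # probability of winning the point with serve s
# optimal second serve: maximize f
s2 <- optimize(f, c(0, 1), maximum = TRUE)$maximum    # = 1/2
# optimal first serve: maximize f(s1) + (1 - p(s1)) f(s2)
P1 <- function(s1) f(s1) + (1 - p(s1)) * f(s2)
s1 <- optimize(P1, c(0, 1), maximum = TRUE)$maximum   # = 3/8
c(s1, s2)   # s1 < s2: the first serve is indeed the more powerful one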
What is the optimal strategy to marry the best one?
Valentine's Day is a nice opportunity to post on hot and sexy topics… Well, it is also an important day that I should not miss, probably as much as Saint Patrick's Day, or my wife's birthday. And as I mentioned last week (here), it is difficult to find the distribution of the age at marriage online… So maybe we can build a small model, to understand when girls decide to get married… Consider a young girl who knows that she will not meet thousands of men willing to marry her (one can also consider the opposite point of view, with a young man who can find only a few girls willing to marry him; the problem can be assumed to be symmetric, especially if I do not want to get feminist leagues on my back).
Assume that n men would agree to marry her. Among those n men, our girl wants to marry the "best" one (assume that men can be ranked objectively). Of course, she cannot meet the "best" guy immediately, so men are met randomly, and after each "interview", either she rejects him (forever: we assume she cannot go back and admit she made a mistake), or she agrees to marry him. An important assumption is that rejected men cannot be recalled.
From a mathematical point of view, we need to find the optimal stopping time. Here, the problem is slightly different from that one (the optimal time to get a bonus) or this one (the optimal time to sit in a bar and have a beer). Here, we do not give "grades" to guys: the only thing observed is their relative ranks. Our girl cannot know whether she is meeting the best of all men (out of n), but she knows whether this one is better than all the ones she has already met. More precisely, at time t, she knows the relative rank of the t-th man (compared with the first t-1), not his absolute rank. We also assume that n is known.
The optimal strategy has the following form: she automatically rejects the first m men (some kind of calibration period), and then, from time m+1 on, she marries the first one who is better than all the ones she has already met.
So assume that our girl has already met m guys, and decided to reject all of them; now she starts looking seriously. For an arbitrary cut-off m, the probability P(m) that she ends up marrying the best of the n candidates is
P(m)=\sum_{j=m+1}^{n}\mathbb{P}(\text{the best candidate shows up at time }j\text{ and is selected})=\sum_{j=m+1}^{n}\frac{1}{n}\cdot\frac{m}{j-1}=\frac{m}{n}\sum_{j=m+1}^{n}\frac{1}{j-1}.
The 1/n term is there because there is only one "best" guy, and it is the probability that he shows up at time j; the m/(j-1) term is the probability that the best of the first j-1 candidates shows up during the calibration period, so that nobody is selected before time j.
Thus, writing x=m/n and approximating the sum by an integral, we can write
P(m)=\frac{m}{n}\sum_{j=m+1}^{n}\frac{1}{j-1}\approx\frac{m}{n}\log\frac{n}{m},
i.e.
P(m)\approx -x\log x.
Since -x\log x is maximal when x=1/e, the optimal cut-off is m\approx n/e, which is the optimal time to stop rejecting (or here, to start seeking seriously), i.e. about 36.8% of the candidates. Hence, the best strategy is to reject automatically the first m=n/e≈37% of the candidates, and then to select the first one (if possible) who is better than all previous candidates (note that 1/e is also the maximum value of P: the strategy succeeds with probability about 37%).
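The 1/e rule can be checked against the exact formula above; a quick sketch, with n = 100:

# exact probability of selecting the best candidate with cut-off m, for n = 100
n <- 100
P <- function(m) (m/n) * sum(1/(m:(n-1)))
p <- sapply(1:(n-1), P)
which.max(p)   # 37, i.e. a cut-off close to n/e
max(p)         # about 0.371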
Consider the following Monte Carlo procedure: assume that she rejects, automatically, the first m candidates (we consider a loop over all possible values of m), and then gets married with the first one who is better than the best one she met during the calibration period,
n = 100
ns = 1000000
MOY1 = MOY2 = rep(NA, n)
for(m in 2:(n-1)){
  WHICH = rep(NA, ns); MARIAGE = rep(0, ns)
  for(s in 1:ns){
    Z = sample(1:n, size = n, replace = FALSE)
    mx = max(Z[1:m])
    STOP = FALSE
    for(k in (m+1):n){
      if((Z[k] > mx) & (STOP == FALSE)){
        WHICH[s] = k
        STOP = TRUE
        MARIAGE[s] = 1
      }
    }
  }
  HIS = WHICH[is.na(WHICH) == FALSE]
  TH = table(HIS)
  MOY1[m] = mean(HIS)
  MOY2[m] = mean(HIS) * mean(MARIAGE)
  THH = rep(NA, 100)
  THH[as.numeric(names(TH))] = as.numeric(TH)/ns
}
If we run it over all possible values of m, we get the following. The "distribution" (in green) can be seen as the probability of marrying the k-th guy, given that the first m were rejected. It does not sum to one, since there is a non-null probability of marrying no one. Actually, the probability of getting married is the following:
The more she waits, the smaller the probability of getting married. But on the other hand, the more she waits, the "better" the husband… On the graph below is plotted the rank of the guy she marries, if she gets married (it was actually the vertical solid red line in the animation).
So there is a trade-off. If not getting married gives a satisfaction of 0 (lower than finally marrying anyone), and if marrying the guy of rank k gives her a satisfaction increasing with k, we obtain the curve above (it was the vertical dotted red line in the animation). So it looks like it is optimal to test the first 35-38% of the men, and then to marry the best one she finds (if he is better than the best one she met during the "testing" procedure). So our previous analysis looks correct…
Now, to go further, I have to admit that this model is known in the academic literature as the secretary problem. In 1989, Thomas Ferguson wrote a nice paper in Statistical Science entitled Who solved the secretary problem? (here). Anthony Mucci also published an article on possible extensions in the Annals of Probability, in 1973 (here), and so did Thomas Lorenzen (there), in 1981. This problem is definitely an interesting one!
When should I optimally shoot at my son? (part 2)
Following my previous post of yesterday (online here), assume now that I do not know whether my son came while I had my back turned at time t, shot, and missed me… Then the payoff function is the one proposed by Vincent: our water guns fill up over time, so that a shot fired at time t hits its target with probability t, and if I plan to shoot at time x while my son plans to shoot at time y, my expected payoff is
K(x,y)=\begin{cases}x-y+xy & \text{if } x<y,\\ 0 & \text{if } x=y,\\ x-y-xy & \text{if } x>y.\end{cases}
(If I shoot first, I win with probability x, and with probability 1-x I miss, and my son then shoots me at time y with probability y.) In that particular case, the payoff I can guarantee by shooting at time x becomes
\min_y K(x,y)=\min\{2x-1,\,-x^2\},
i.e. the worst cases are my son waiting until time 1 (payoff 2x-1), or shooting just before me (payoff -x^2). If we draw those two functions on [0,1], the optimal value of x is the solution of
x^2+2x-1=0,
i.e. x=\sqrt{2}-1 (we focus only on solutions in [0,1]). Because here the game is symmetric, my son should also shoot at time \sqrt{2}-1. The payoff each of us can then guarantee is 2\sqrt{2}-3<0. Since we consider here a symmetric zero-sum game, whose value should be 0, this cannot be a solution of the game. So the game does not have a pure strategy solution.
Assume now that I have a mixed strategy, i.e. a distribution over the time to shoot. My strategy has distribution F, with density f (we assume here that the density exists, or we seek only solutions that are differentiable). Assume further that there exists a>0 such that the support of my optimal shooting time (the time to shoot is now a random variable X) is (a,1] (or [a,1], since we assume that F is differentiable). There is a discussion at the end of Vincent's post where he needs that assumption. Actually, I think we can make it right now, since we can rationally assume that I will not shoot at time 0 (nor in a neighborhood of 0, since there I would have almost no chance of hitting my son).
The expected payoff, if my son shoots at time y \in (a,1], is
\mathbb{E}[K(X,y)]=\int_a^y (x-y+xy)f(x)\,dx+\int_y^1 (x-y-xy)f(x)\,dx.
Since the zero-sum game is symmetric, again, the expected payoff should be zero. It comes that, necessarily,
\int_a^y (x-y+xy)f(x)\,dx+\int_y^1 (x-y-xy)f(x)\,dx=0
for all y in the support. Hence, if we differentiate (with respect to y), we have
2y^2 f(y)+\int_a^y (x-1)f(x)\,dx-\int_y^1 (1+x)f(x)\,dx=0,
and if we differentiate one more time, it comes
2y^2 f'(y)+6y f(y)=0,
i.e. a general solution should be of the form f(y)=Cy^{-3}.
Here, we have the same solution as the one considered in Vincent's blog. His solution is obtained as follows (with slightly different expressions): writing my expected payoff conditional on y in terms of the distribution F (using a simple integration by parts), and then requiring that payoff to be constant in y, so that I am indifferent to my son's strategy, leads to the same differential equation, and thus to the same family of densities.
Consider solutions of the form f(y)=Cy^{-\beta}. Then
2y^2f'(y)+6yf(y)=(6-2\beta)\,Cy^{1-\beta},
which vanishes either if C=0 (and then f is identically zero, which is impossible for a density), or if \beta=3. This means that f is proportional to y^{-3}.
If we substitute f(y)=Cy^{-3} in the equation we had initially, it comes that
\mathbb{E}[K(X,y)]=C\left[\left(\frac{1}{a}-3\right)+y\left(\frac{3}{2}+\frac{1}{a}-\frac{1}{2a^2}\right)\right],
which vanishes for all y if and only if a=1/3, while the normalization \int_{1/3}^1 f(y)\,dy=1 gives C=1/4 (but we don't really care about that normalizing constant).
It means that I should not start shooting before 1/3 of the tank is filled. Actually, it makes sense, since against this strategy my expected payoff is
\mathbb{E}[K(X,y)]=\frac{1-3y}{2}>0
if y<1/3 (my son loses, on average, by shooting that early), while \mathbb{E}[K(X,y)]=0 if y\ge 1/3.
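We can check numerically that this mixed strategy makes the opponent indifferent, sampling from f by inverting the cdf F(t)=(9-t^{-2})/8 on [1/3,1]; a small simulation sketch:

# expected payoff against an opponent shooting at time y, when my shooting
# time X has density f(t) = 1/(4 t^3) on [1/3, 1]
payoff <- function(y, ns = 1e6){
  u <- runif(ns)
  x <- 1/sqrt(9 - 8*u)   # X drawn from f, by inverting its cdf
  mean(ifelse(x < y, x - y + x*y, x - y - x*y))
}
payoff(.2)   # about (1 - 3*.2)/2 = 0.2 > 0
payoff(.5)   # about 0
payoff(.9)   # about 0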
A drunkard's tale, and optimal stopping
Following popular demand, I will resume my drunkard's stories… or rather, revisit some classics of finance, by showing that they are simply problems that lovers of strong drinks keep facing (which might suggest recruiting in bars rather than at the doors of the grandes écoles…).
So, before starting his random walk down the rue de la soif (here, which, translated into financial terms, corresponds to barrier-option problems), then having trouble with his keys (there), then with the police (here), our hero (for we can now call him a hero, after four posts devoted to him) had to choose his bar… The problem is far from simple. There are 20 bars in the street (say n, to make things a bit more formal). He arrives from Place Sainte-Anne, and he wants to pick the cheapest bar. The trouble is that he is not allowed to turn back1, and he does not know the prices charged in the various bars. His prior is that the price of a pint is between 3 and 6 euros, uniformly distributed, and that prices are independent from one bar to the next. For the financially minded, he holds an option (to order a beer), which he can exercise whenever he wants: an American option, of sorts. Suppose he has reached bar k. He can either pay X_k (which is assumed to be random, uniformly distributed, and independent of the other bars), or hope to pay less further down the street.
Let u_k denote the value of this option; then
u_k=\mathbb{E}[\min\{X_k,u_{k+1}\}],
i.e.
u_k=\int_a^b \min\{x,u_{k+1}\}\,dF(x),
where F denotes the distribution of the price of a pint (here, the uniform distribution on [a,b]=[3,6]), with a terminal condition of the form
u_n=\mathbb{E}[X_n]=\frac{a+b}{2},
because he is thirsty, and will not leave the street without having had a drink!
Classically, this program can be solved by backward induction, from the distribution of the X_k's. Since, for a uniform price on [a,b],
\mathbb{E}[\min\{X,u\}]=\int_a^u \frac{x}{b-a}\,dx+u\cdot\frac{b-u}{b-a}=\frac{u^2-a^2}{2(b-a)}+\frac{u(b-u)}{b-a},
we simply get
u_k=\frac{1}{b-a}\left[u_{k+1}(b-u_{k+1})+\frac{u_{k+1}^2-a^2}{2}\right].
I leave it to the braver readers to simplify the computations. The "exercise frontier" is then obtained by recursion. Numerically, the code is simply
n = 20
u = rep(NA, n)
b = 6; a = 3
u[n] = (b + a)/2
for(k in (n-1):1){
  u[k] = 1/(b-a) * (u[k+1]*(b - u[k+1]) + (u[k+1]^2 - a^2)/2)
}
As soon as the price falls below the frontier, he sits down at the bar. Note that the further down the street he goes, the less demanding he becomes: at the very beginning, he will not sit down unless a pint costs less than about 3.30 euros… but the further he walks, the higher the threshold. The computation in integral form is simply
u = rep(NA, n)
b = 6; a = 3
u[n] = (b + a)/2
for(k in (n-1):1){
  g = function(x){ pmin(x, rep(u[k+1], length(x)))/(b-a) }
  u[k] = integrate(g, lower = a, upper = b)$value
}
I had already discussed this problem in a previous post, on American options, but we can now go a bit further… what happens if we assume that prices are discrete (for instance, in steps of 50 cents, or of 1 euro)? The advantage of these numerical methods is that assumptions can be relaxed very easily; for instance, we would have
h = 2
K = (b - a)*h + 1
PRIX = seq(a, b, by = 1/h)
u2 = rep(NA, n)
b = 6; a = 3
u2[n] = (b + a)/2
for(k in (n-1):1){
  g = function(x){ pmin(x, rep(u2[k+1], length(x))) }
  u2[k] = sum(g(PRIX) * 1/K)
}
for thresholds in steps of 1 euro, i.e. h = 1 (the only possible prices being 3, 4, 5 or 6 euros), or the following frontier if prices move in steps of 50 cents, i.e. h = 2 as in the code above. Given the discretization, note that the true exercise frontier then becomes a step function.
In short, as always, drunkards' problems coincide with problems of optimal exercise of American options, a classic in mathematical finance…
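To check the frontier, here is a small simulation sketch (reusing n, a, b and u computed above): the drunkard sits down at the first bar whose price is below the continuation value, and the average price he pays should match u[1].

# simulate the stopping rule: sit at bar k if its price is below u[k+1]
ns <- 1e5
paid <- rep(NA, ns)
for(s in 1:ns){
  prix <- runif(n, min = a, max = b)
  k <- which(prix[-n] < u[-1])[1]    # first bar below the frontier, if any
  paid[s] <- ifelse(is.na(k), prix[n], prix[k])
}
mean(paid)   # should be close to u[1]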
1 To make this story credible: at each bar he reaches, he asks the price of a pint. If he finds it too expensive, he exclaims "way too expensive here!" and leaves. If not, he orders and sits down. This exclamation makes it unlikely, in his eyes, to come back and sit down at the bar after all….
Optimal control, part 2
In the first part (here), we introduced Bellman's idea of backward induction. But what if we now consider an infinite time horizon? Actually, the maths will be even simpler… and we will be able to use fixed point theorems to derive solutions.
- The mathematical framework
Here, consider the following value function,
v(x)=\sup\left\{\sum_{t=0}^{\infty}\beta^t u(x_t,x_{t+1})\right\},
where the supremum is taken over admissible sequences, and define the constraint correspondence \Gamma, so that the dynamic constraint is x_{t+1}\in\Gamma(x_t). A sequence (x_0,x_1,x_2,\dots) is said to be admissible for starting point x if
x_0=x\quad\text{and}\quad x_{t+1}\in\Gamma(x_t)\text{ for all }t\ge 0.
If we reformulate the dynamic programming idea, we obtain that if (x_0^\star,x_1^\star,x_2^\star,\dots) is a solution to the problem starting from x_0, then, for all t, the shifted sequence (x_t^\star,x_{t+1}^\star,\dots) is a solution to the problem starting from x_t^\star. It comes that the function v is a solution of Bellman's equation,
v(x)=\sup_{y\in\Gamma(x)}\left\{u(x,y)+\beta v(y)\right\}.
Note that this can be seen as a fixed point result, since v=T(v), where T is the operator
T(w)(x)=\sup_{y\in\Gamma(x)}\left\{u(x,y)+\beta w(y)\right\},
i.e. v is a fixed point of T.
So far, it shouldn’t be so hard….
- Frank Ramsey’s model (discrete version)
In 1928, Frank Ramsey wanted to understand savings in a dynamic perspective (how much of its income should a nation save?). Consider the following infinite horizon problem, where some planner wants to maximize
\sum_{t=0}^{\infty}\beta^t u(c_t)
subject to the constraints c_t+k_{t+1}\le f(k_t), c_t,k_t\ge 0, and k_0 given.
Before looking at dynamic programming answers, we might start with standard Lagrangian optimization techniques. Assuming concavity of the utility function and of the production function, we should look only for interior solutions. Define the Lagrangian as
\mathcal{L}=\sum_{t=0}^{\infty}\beta^t u(c_t)-\sum_{t=0}^{\infty}\lambda_t\left[c_t+k_{t+1}-f(k_t)\right].
The first order conditions are then given by
\beta^t u'(c_t)=\lambda_t
(with respect to c_t), and
\lambda_t=\lambda_{t+1}f'(k_{t+1})
(with respect to k_{t+1}). Assume further some terminal condition, e.g.
\lim_{t\to\infty}\lambda_t k_{t+1}=0
(also called a transversality condition). If we combine those two conditions, and assume that the first constraint is saturated, we obtain the so-called Euler equation,
u'(c_t)=\beta\,u'(c_{t+1})\,f'(k_{t+1}).
It is also possible to use Bellman's equation: given the dynamics of the capital, k_{t+1}=f(k_t)-c_t,
v(k)=\max_{0\le c\le f(k)}\left\{u(c)+\beta v(f(k)-c)\right\}.
The first order condition states
u'(c)=\beta v'(f(k)-c).
But since v is unknown, so is its derivative. From the envelope theorem, however, we obtain something like
v'(k)=\beta v'(f(k)-c^\star)\,f'(k),
where c^\star denotes the optimal consumption, so that
v'(k)=u'(c^\star)\,f'(k).
We can then write, along the optimal path,
u'(c_t)=\beta v'(k_{t+1}),
i.e.
u'(c_t)=\beta\,u'(c_{t+1})\,f'(k_{t+1}),
which is, again, Euler's equation.
- A specified model, with calculations
As in the previous post (here), consider a log utility function, u(c)=\log c, and a power production function, f(k)=k^{\alpha}, with \beta\in(0,1). The dynamics are then
k_{t+1}=f(k_t)-c_t=k_t^{\alpha}-c_t.
Note that the fixed points of the dynamics without consumption, k\mapsto k^{\alpha}, are here 0 and 1, while under the optimal policy derived below, k_{t+1}=\alpha\beta k_t^{\alpha}, the fixed points are 0 and (\alpha\beta)^{1/(1-\alpha)}.
Recall that the value function is defined as
v(k)=\max\left\{\sum_{t=0}^{\infty}\beta^t\log c_t\right\}.
A natural idea to derive the value function is to iterate,
v_{n+1}(k)=\max_{0\le c\le k^{\alpha}}\left\{\log c+\beta v_n(k^{\alpha}-c)\right\},
starting with a simple function, e.g. the null function, at step 0. At step n=1,
v_1(k)=\max_{0\le c\le k^{\alpha}}\log c,
thus everything is consumed, c=k^{\alpha}, and v_1(k)=\alpha\log k. At step n=2,
v_2(k)=\max_{c}\left\{\log c+\alpha\beta\log(k^{\alpha}-c)\right\}.
The first order condition is then
\frac{1}{c}=\frac{\alpha\beta}{k^{\alpha}-c},
and thus, we obtain
c=\frac{k^{\alpha}}{1+\alpha\beta},
that can be plugged into the previous equation, i.e.
v_2(k)=\alpha(1+\alpha\beta)\log k+\text{constant}.
At step 3, we start from that new expression, derive the first order condition, and we get
c=\frac{k^{\alpha}}{1+\alpha\beta+(\alpha\beta)^2},
and
v_3(k)=\alpha\left(1+\alpha\beta+(\alpha\beta)^2\right)\log k+\text{constant},
and so on…
And finally, we can prove that the limit is
v(k)=\frac{\alpha}{1-\alpha\beta}\log k+A,
so that the optimal consumption is c^\star(k)=(1-\alpha\beta)k^{\alpha}, and the optimal capital path satisfies k_{t+1}=\alpha\beta k_t^{\alpha}. Actually, we can prove that v_n\to v, and the constant A has a form that can be made explicit,
A=\frac{1}{1-\beta}\left[\log(1-\alpha\beta)+\frac{\alpha\beta}{1-\alpha\beta}\log(\alpha\beta)\right].
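The fixed-point iteration can also be performed numerically, on a grid; a small sketch, with (arbitrary) α = 0.3 and β = 0.95, compared with the closed-form solution above:

# numerical value function iteration for u(c) = log(c), f(k) = k^alpha
alpha <- .3; beta <- .95
k <- seq(.05, .5, length = 200)     # grid for capital
v <- rep(0, length(k))              # start from the null function
for(it in 1:500){
  v <- sapply(k, function(ki){
    c <- ki^alpha - k               # consumption implied by each next-period capital
    max(log(c[c > 0]) + beta * v[c > 0])
  })
}
# closed form: v(k) = A + alpha/(1 - alpha*beta) log k
B <- alpha/(1 - alpha*beta)
A <- (log(1 - alpha*beta) + B*beta*log(alpha*beta))/(1 - beta)
max(abs(v - (A + B*log(k))))        # small, up to grid-approximation error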
Optimal control, part 1
Since I recently had a request (from Benoît) about optimal control, I will start here a series of posts on that topic. But first, let us start with a simple problem: discrete time, no randomness, and a finite horizon. This might be too simple a framework to model complex problems, but it should be enough to derive heuristic intuitions (I will skip the mathematical subtleties here; they can be found in very good books… references will come soon).
- An introduction to backward induction

Before starting seriously, let us consider the following example: we want to reach the red city on the right from the red city on the left, as fast as possible. There are roads, and the numbers give the number of hours each one takes. Let us prove that the optimal way is the red one.
A first idea would be to enumerate all possible trajectories, but with a large number of roads, the number of possible paths quickly becomes extremely large. An alternative is to look backwards (like any student facing a question where the answer is given: start from the end, and try to find a possible way to reach it).
The numbers in green are the number of hours still needed once we have reached that point. Let us move one more step backward, and consider the orange points. In the middle, we have to choose either to go up (9 more hours) or down (14 more hours). Thus, the optimal strategy, once we have reached that point, is to take the 9-hour road. This is the idea proposed by Bellman. Let us go backward again, to the purple cities. Again, we have to choose the shorter way: from the top one, the fastest route takes 13 hours, and from the bottom one, 16 hours. Thus, since it takes 10 hours to reach the top city, that road is necessarily faster than the one below (since it takes 8 hours to reach the nearest city, and 8 + 16 > 10 + 13).
And this is it. We now have an intuitive idea of how to find optimal strategies.
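The backward computation is easy to automate. Here is a toy sketch, on a small made-up network whose travel times loosely match the figures above:

# backward induction on a toy road network: nodes 1..6, node 6 is the destination,
# hours[i, j] = driving time from i to j (NA if there is no road)
hours <- matrix(NA, 6, 6)
hours[1, 2] <- 10; hours[1, 3] <- 8
hours[2, 4] <- 9;  hours[2, 5] <- 14
hours[3, 4] <- 12; hours[3, 5] <- 10
hours[4, 6] <- 4;  hours[5, 6] <- 6
V <- rep(Inf, 6); V[6] <- 0       # time still needed from each node
for(i in 5:1){                    # nodes are ordered so that roads go forward
  V[i] <- min(hours[i, ] + V, na.rm = TRUE)
}
V[1]   # fastest total time from the starting city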
- The optimization problem, discrete with finite horizon
Let us consider the following optimization problem: we want to find the strategy (x_0,x_1,\dots,x_T) that maximizes
\sum_{t=0}^{T}u(t,x_t),
with simple constraints, such as x_t\in\mathcal{X} (a state space) and x_{t+1}\in\Gamma(x_t) (i.e. some dynamic constraints). Assume further that the starting point is given, i.e. x_0=x. In economic applications, we frequently have u(t,x)=\beta^t u(x), i.e. we consider a discounted version of the value.
- The idea of dynamic programming
The intuitive idea of dynamic programming is that if the optimal path from A to C goes through B, then that path must be optimal from B to C. Thus, it is natural to consider backward induction techniques.
Thus, define the family of tail problems
V_t(x)=\max\left\{\sum_{s=t}^{T}u(s,x_s)\ :\ x_s\in\mathcal{X},\ x_{s+1}\in\Gamma(x_s),\ x_t=x\right\},\qquad t=0,1,\dots,T.
Then Bellman's principle can be used to link those problems: if (x_0^\star,x_1^\star,\dots,x_T^\star) is a solution of the problem starting at x_0, then, for all t, (x_t^\star,x_{t+1}^\star,\dots,x_T^\star) is a solution of the tail problem starting at x_t^\star.
Note that, so far, we assume that such an optimal sequence does exist. Thus, we get that for all x,
V_T(x)=u(T,x),
and more generally,
V_t(x)=\max_{y\in\Gamma(x)}\left\{u(t,x)+V_{t+1}(y)\right\}.
Hence, from a practical point of view, we solve those equations using a backward approach. I.e. first,
V_T(x)=u(T,x),
then
V_{T-1}(x)=\max_{y\in\Gamma(x)}\left\{u(T-1,x)+V_T(y)\right\},
and so on,
V_t(x)=\max_{y\in\Gamma(x)}\left\{u(t,x)+V_{t+1}(y)\right\},
… etc. It can be proved that the sequence (x_0^\star,\dots,x_T^\star) is a solution of the initial problem if and only if, for all t, x_{t+1}^\star attains the maximum in the Bellman equation defining V_t(x_t^\star).
So far, it does not look so difficult….
- A simple example
Let c_t denote consumption at period t, and assume consumption yields utility
u(c_t)=\log c_t
as long as the consumer lives (the log specification is the one we will use again in part 2). Assume the consumer is impatient, or has a stronger preference for the present, so that he discounts future utility by a factor \beta\in(0,1). Let k_t be the capital he owns at time t. Assume that his initial capital is a given amount k_0, and suppose that this period's capital and consumption determine next period's capital as
k_{t+1}=Ak_t^{\alpha}-c_t,
where A is a positive constant and 0<\alpha<1. Assume further that capital cannot be negative. Then the consumer's problem is simply
\max\left\{\sum_{t=0}^{T}\beta^t\log c_t\right\},
given k_0. Bellman's equation is then
V_t(k)=\max_{0\le c\le Ak^{\alpha}}\left\{\log c+\beta V_{t+1}(Ak^{\alpha}-c)\right\},
which leads us to a simpler problem than the initial one, since only two variables are involved here, c and k. And to solve that problem, we use backward induction techniques.
Since V_T is known (at the last date, everything is consumed, so that V_T(k)=\log(Ak^{\alpha})), we can easily derive V_{T-1}, and so on until V_0. More precisely, given V_{t+1}, V_t(k) is the maximum of the function
c\mapsto \log c+\beta V_{t+1}(Ak^{\alpha}-c),
with 0\le c\le Ak^{\alpha}. One can see that the following function is a possible solution,
V_t(k)=A_t+B_t\log k,
where each A_t and B_t is a constant. Further, the optimal amount to consume at time t is
c_t^\star=\frac{Ak_t^{\alpha}}{1+\alpha\beta+\cdots+(\alpha\beta)^{T-t}},
i.e., if we make those expressions explicit,
c_T^\star=Ak_T^{\alpha},\qquad c_{T-1}^\star=\frac{Ak_{T-1}^{\alpha}}{1+\alpha\beta},\qquad c_{T-2}^\star=\frac{Ak_{T-2}^{\alpha}}{1+\alpha\beta+(\alpha\beta)^2},
…etc.
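A short sketch of that backward induction in R, under the log-utility specification above: compute the optimal consumption rule at each date and simulate the capital path (the numerical values of α, β, A, k_0 and the horizon are arbitrary):

# backward induction coefficients for u(c) = log(c), k' = A k^alpha - c
alpha <- .3; beta <- .95; A <- 1; H <- 10   # horizon: dates t = 0, ..., H
S <- cumsum((alpha*beta)^(0:H))             # S[j] = 1 + ab + ... + (ab)^(j-1)
k <- rep(NA, H + 1); c_opt <- rep(NA, H + 1)
k[1] <- .2                                  # initial capital
for(t in 1:(H + 1)){
  c_opt[t] <- A * k[t]^alpha / S[H - t + 2] # optimal consumption at date t-1
  if(t <= H) k[t + 1] <- A * k[t]^alpha - c_opt[t]
}
c_opt[H + 1] == A * k[H + 1]^alpha          # TRUE: everything is consumed at the end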
- A reformulation of the optimization problem
Another way of expressing the optimization problem is the following: x_t was the variable of interest, c_t was the control variable, and the dynamics were given by some x_{t+1}=g(t,x_t,c_t). Thus, the program
\max\left\{\sum_{t=0}^{T}u(t,x_t)\right\}
can be expressed as
\max\left\{\sum_{t=0}^{T}u(t,x_t,c_t)\right\}\quad\text{subject to}\quad x_{t+1}=g(t,x_t,c_t).
Several extensions can be considered:
- consider an infinite sum,
\max\left\{\sum_{t=0}^{\infty}\beta^t u(x_t,c_t)\right\};
- consider a random component,
\max\left\{\mathbb{E}\left[\sum_{t=0}^{\infty}\beta^t u(x_t,c_t)\right]\right\};
- consider a continuous version,
\max\left\{\int_0^T u(x_t,c_t)\,dt\right\}.
But those items will be for posts that I still have to write down….
When should you buy your plane ticket?
When organizing the JEEA (here), the main source of uncertainty was not the number of participants (which kept increasing until we decided to close registration; the forecasts made a month ago, when we contacted the caterers for the lunch trays, turned out to be very accurate), but the reimbursement of transportation costs1.
To give the speakers full flexibility, I let them buy their own tickets to come to Rennes, with no constraint on the mode of transportation. For the train, prices are fairly stable, but I was quite worried for those coming by plane: what if they bought too early, or too late, and the price turned out to be very high? In short, as every time I have to buy a plane ticket, I ask myself the same question: "is there an optimal date to buy a plane ticket online?"
- A small study of 6,000 (domestic) American flights
Benny Mantin carried out such a study, looking, over the 3 months preceding departure (90 days, to be precise), at a thousand flights (in the origin-destination sense), each flight being considered at 6 different dates (6 consecutive Wednesdays).
For instance, for an ABQ-BOS flight (Albuquerque-Boston), we have the following six price paths,
(a prediction by local regression is shown in red). If we take the initial price as a reference (base 100), we observe the following evolutions over the first 200 flights,
Seen from afar, there seems to be a real pattern, with an optimum 30 days before departure. If we change the bandwidth of the local regression, we obtain fairly similar results,
and with a much finer smoothing,
- So what is the optimal date?
We can also look at the distribution of the optimal purchase date (based on the lowess regression). The optimal date seems to be one month before departure,
With a much finer smoothing bandwidth, we observe the following distribution for the date at which the minimum is reached,
If we look at when the minimum is reached on the raw series (or rather, at the first time the minimum is reached, since the price is sometimes constant over two consecutive days), instead of the smoothed data, we obtain the following distribution,
Doing the same with the last time the minimum is reached, we get
These two graphs are much harder to read, as the raw data are much more erratic than the local regression predictions.
- What about price volatility?
Fine, it is all well and good to know that it is optimal to wait until one month before departure to get a lower price, on average. But that is also the period when prices are most volatile…. For instance, for the first flight we studied, if we normalize the price to base 100 (as in the previous section), the volatility looks as follows, increasing as the departure date approaches.
For someone who is not looking for the best price on average, but rather wants to avoid paying too much, we can consider, for example, a quantile regression. The red curve here is the 90% quantile of the plane ticket price.
As before, we can look for the optimal date so that the price, as a quantile rather than as an average, is as low as possible,
Note that the quantile regression was fitted in polynomial form (here, a polynomial of degree 5). If we use a polynomial of lower degree (4, for instance), the distribution of the optimum on the smoothed curve changes a little, but not that much after all….
…which would confirm the robustness of our conclusions.
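For the record, the quantile regression used here can be reproduced with the quantreg package; a minimal sketch, on simulated prices (the original dataset is not public):

# 90% quantile of the ticket price as a polynomial function of the number of
# days before departure -- simulated data, for illustration only
library(quantreg)
set.seed(1)
days <- rep(90:1, 10)     # 10 simulated booking series, 90 days each
price <- 100 + (30 - days)^2/30 + rnorm(length(days), sd = 10*exp(-days/90))
fit <- rq(price ~ poly(days, 5), tau = .9)   # degree-5 polynomial, 90% quantile
plot(days, price, col = "grey", xlab = "days before departure", ylab = "price")
nd <- data.frame(days = 90:1)
lines(nd$days, predict(fit, newdata = nd), col = "red", lwd = 2)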
- Is it worth ordering tickets from the office?
One last point before closing this post. The following graphs look at the day-to-day variations of the plane ticket price (the log-returns). Again, consider the first flight, ABQ-BOS, with the blue areas corresponding to weekends,
The red curve is a local regression, to visualize a possible trend, with a narrow bandwidth in order to take the weekly cycles into account. What is interesting is that the largest variations take place outside weekends, with, in particular, price drops at the beginning of the week, and price increases just before weekends.
To better visualize these hundreds of curves, brutally superimposed, we can superimpose the smoothed trends instead,
The impression of drops on Mondays and rises on Fridays is confirmed by this graph. We can also simply look at the average log-return as a function of the day of the week,
We can also study the raw volatility, on each day,
Prices are relatively quiet on weekends, and generally they tend to drop on Mondays…. The moral: better wait until you are back at the office to order your ticket. Well, of course, you also need to live in the United States….
1 One choice we made was to keep the conference free, providing lunch for all participants. In exchange, the conference had obtained funding from the AXA large risks chair, and from various local institutions (the city of Rennes, Rennes Métropole, the departmental and regional councils). But we did not want to ask too much from CREM (the lab of the economics department). In short, we had a budget limit, and with a hundred or so participants, we really did not know how much slack we had….