Tag Archives: optimal

Talk at StatQAM on Counterfactuals and Optimal Transport

Next Thursday, I will present at the StatQAM seminar our recent work, with Emmanuel Flachaire and Ewen Gallic, on Optimal Transport for Counterfactual Estimation: A Method for Causal Inference

Many problems ask a question that can be formulated as a causal question: “what would have happened if…?” For example, “would the person have had surgery if he or she had been Black?” To address this kind of question, calculating an average treatment effect (ATE) is often uninformative, because one would like to know how much impact a variable (such as skin color) has on a specific individual, characterized by certain covariates. Trying to calculate a conditional ATE (CATE) seems more appropriate. In causal inference, the propensity score approach assumes that the treatment is influenced by x, a collection of covariates. Here, we take the dual view: doing an intervention, or changing the treatment (even just hypothetically, in a thought experiment, for example by asking what would have happened if a person had been Black), can have an impact on the values of x. We will see here that optimal transport allows us to change certain characteristics that are influenced by the variable whose effect we are trying to quantify. We propose here a mutatis mutandis version of the CATE, which in dimension one simply amounts to computing the CATE relative to a probability level, associated with the quantile level of x (a single covariate) in the control population, and looking for the equivalent quantile in the test population. In higher dimension, it is necessary to go through transport, and an application is proposed on the impact of some variables on the probability of having an unnatural birth (the fact that the mother smokes, or that the mother is Black).

Slides are now online.

Optimal Transport for Counterfactual Estimation: A Method for Causal Inference

With Emmanuel Flachaire and Ewen Gallic, we recently uploaded on arXiv a paper entitled Optimal Transport for Counterfactual Estimation: A Method for Causal Inference.

Many problems ask a question that can be formulated as a causal question: “what would have happened if…?” For example, “would the person have had surgery if he or she had been Black?” To address this kind of question, calculating an average treatment effect (ATE) is often uninformative, because one would like to know how much impact a variable (such as skin color) has on a specific individual, characterized by certain covariates. Trying to calculate a conditional ATE (CATE) seems more appropriate. In causal inference, the propensity score approach assumes that the treatment is influenced by x, a collection of covariates. Here, we take the dual view: doing an intervention, or changing the treatment (even just hypothetically, in a thought experiment, for example by asking what would have happened if a person had been Black), can have an impact on the values of x. We will see here that optimal transport allows us to change certain characteristics that are influenced by the variable whose effect we are trying to quantify. We propose here a mutatis mutandis version of the CATE, which in dimension one simply amounts to computing the CATE relative to a probability level, associated with the quantile level of x (a single covariate) in the control population, and looking for the equivalent quantile in the test population. In higher dimension, it is necessary to go through transport, and an application is proposed on the impact of some variables on the probability of having an unnatural birth (the fact that the mother smokes, or that the mother is Black).
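In dimension one, this quantile-to-quantile idea can be sketched in a few lines of R (a purely illustrative toy example, not the code of the paper): for a control individual with covariate value x, take the empirical probability level of x among the controls, and map it to the corresponding quantile of the covariate among the treated.

set.seed(1)
# Toy data (invented for the illustration): one covariate in each group
x_control <- rnorm(1000, mean = 165, sd = 6)
x_treated <- rnorm(1000, mean = 172, sd = 7)
x <- 160                                  # covariate of one control individual
u <- ecdf(x_control)(x)                   # its probability level among the controls
x_star <- quantile(x_treated, probs = u)  # "mutatis mutandis" counterfactual value
x_star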

Slides from a talk given last week are online.

On consequences of Goodhart’s law

This post was initially written in French, in the winter of 2021.

As Marilyn Strathern stated, Goodhart’s Law says that “when a measure becomes a goal, it ceases to be a good measure.” There are many economic applications, but this law also helps to understand the dangers of algorithmic decisions, or to explain the difficulty of using the data available since the beginning of the SARS-CoV-2 COVID-19 pandemic.

Continue reading On consequences of Goodhart’s law

Matching, Optimal Transport and Statistical Tests

To explain the “optimal transport” problem, we usually start with Gaspard Monge’s “Mémoire sur la théorie des déblais et des remblais“, which considers the problem of transporting a given distribution of matter (a pile of sand for instance) into another one (an excavation for instance). This problem is usually formulated using distributions, and we seek the “optimal” transport from one distribution to the other one. This formulation, in the context of distributions, is due to Leonid Kantorovich, in the 1940s, e.g. from the distribution on the left to the distribution on the right.

Consider now the context of finite sets of points. We want to transport mass from points \{A_1,\cdots,A_4\} to points \{B_1,\cdots,B_4\}. It is a complicated combinatorial problem. With 4 points, there are only 24 possible assignments to consider, but there are more than 20 billion with 15 points (on each side). For instance, the following one is usually seen as inefficient

while the following is usually seen as much better

Of course, it depends on the cost of the transport, which depends on the distance between the origin and the destination. That cost is usually either linear or quadratic.
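For such a small instance, the optimal assignment can actually be found by brute force in R, enumerating the 24 permutations (a purely illustrative sketch on simulated points; for 15 points this enumeration is already hopeless, which is why linear programming formulations are used instead):

library(combinat)                  # provides permn()
set.seed(1)
A <- matrix(runif(8), ncol = 2)    # 4 origin points
B <- matrix(runif(8), ncol = 2)    # 4 destination points
cost_of <- function(p) sum((A - B[p,])^2)   # quadratic cost of assignment p
perms <- permn(4)                  # the 4! = 24 possible assignments
best <- perms[[which.min(sapply(perms, cost_of))]]
best                               # A_i is sent to B_best[i]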

There are many applications of optimal transport in economics, see e.g. Alfred Galichon’s book Optimal Transport Methods in Economics. And there are also applications in statistics, which is what I discussed with Pierre while I was in Boston, in June. For instance, if we want to test whether two samples were drawn from the same distribution,

set.seed(13)
npoints <- 25
mu1 <- c(1,1)
mu2 <- c(0,2)
Sigma1 <- diag(1, 2, 2)
Sigma2 <- diag(1, 2, 2)
Sigma2[2,1] <- Sigma2[1,2] <- -0.5
Sigma1 <- 0.4 * Sigma1
Sigma2 <- 0.4 * Sigma2
library(mnormt)
X1 <- rmnorm(npoints, mean = mu1, Sigma1)   # first sample
X2 <- rmnorm(npoints, mean = mu2, Sigma2)   # second sample
plot(X1[,1], X1[,2], col = "blue")
points(X2[,1], X2[,2], col = "red")

Here we use a parametric model to generate our samples (as always), and we might think of a parametric test (testing whether the mean and variance parameters of the two distributions are equal),

or we might prefer a nonparametric test. The idea Pierre mentioned was based on optimal transport. Consider some quadratic loss

library(transport)
library(winference)   # provides cost_matrix_Lp()
ground_p <- 2
p <- 1
w1 <- rep(1/npoints, npoints)   # uniform weights on the first sample
w2 <- rep(1/npoints, npoints)   # uniform weights on the second sample
C <- cost_matrix_Lp(t(X1), t(X2), ground_p)
a <- transport(w1, w2, costm = C^p, method = "shortsimplex")

then it is possible to match points in the two samples

nonzero <- which(a$mass != 0)
from_indices <- a$from[nonzero]
to_indices <- a$to[nonzero]
# draw a segment between each pair of matched points
for (i in seq_along(from_indices)){
segments(X1[from_indices[i],1], X1[from_indices[i],2], X2[to_indices[i],1], X2[to_indices[i],2])
}

Here we can observe two things. The total cost can be seen as rather large

> cost=function(a,X1,X2){
nonzero <- which(a$mass != 0)
naa=a[nonzero,]
d=function(i) (X1[naa$from[i],1]-X2[naa$to[i],1])^2+(X1[naa$from[i],2]-X2[naa$to[i],2])^2
sum(Vectorize(d)(1:nrow(naa)))
}
> cost(a,X1,X2)
[1] 9.372472

and the angle of the transport direction is always (more or less) the same

> angle=function(a,X1,X2){
nonzero <- which(a$mass != 0)
naa=a[nonzero,]
d=function(i) (X1[naa$from[i],2]-X2[naa$to[i],2])/(X1[naa$from[i],1]-X2[naa$to[i],1])
atan(Vectorize(d)(1:npoints))
}
> mean(angle(a,X1,X2))
[1] -0.3266797

> library(plotrix)
> ag=(angle(a,X1,X2)/pi)*180
> ag[ag<0]=ag[ag<0]+360
> dag=hist(ag,breaks=seq(0,361,by=1)-.5)
> polar.plot(dag$counts,seq(0,360,by=1),main="Test Polar Plot",lwd=3,line.col=4)

(actually, the following plot has been obtained by generating a thousand samples of size 25)

In order to have a decent test, we need to see what happens under the null assumption (when drawing both samples from the same distribution), see below.

Here is the optimal matching

Here is the distribution of the total cost, when drawing a thousand samples,

VC=rep(NA,1000)
VA=rep(NA,1000*npoints)
for(s in 1:1000){
# under the null hypothesis, both samples are drawn from the same distribution
X1a <- rmnorm(npoints, mean = mu1, Sigma1)
X1b <- rmnorm(npoints, mean = mu1, Sigma1)
ground_p <- 2
p <- 1
w1 <- rep(1/npoints, npoints)
w2 <- rep(1/npoints, npoints)
C <- cost_matrix_Lp(t(X1a), t(X1b), ground_p)
ab <- transport(w1, w2, costm = C^p, method = "shortsimplex")
VC[s]=cost(ab,X1a,X1b)
VA[s*npoints-(0:(npoints-1))]=angle(ab,X1a,X1b)
}
plot(density(VC))
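From that simulated null distribution, an empirical p-value for the cost observed on the original samples can be computed directly (a small addition to the code above, not in the original analysis):

obs <- cost(a, X1, X2)   # observed total cost on the two original samples, about 9.37
mean(VC >= obs)          # empirical p-value under the null of equal distributions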

So our cost of 9 obtained initially was not that high. Observe also that, when drawing from the same distribution, there is no longer any pattern in the optimal transport

ag=(VA/pi)*180
ag[ag<0]=ag[ag<0]+360
dag=hist(ag,breaks=seq(0,361,by=1)-.5)
polar.plot(dag$counts,seq(0,360,by=1),main="Test Polar Plot",lwd=3,line.col=4)

 

Nice, isn’t it? I guess I will spend some time next year working on those transport algorithms, since we have great R packages, and hundreds of applications in economics…

Why hit the ball so hard on the serve?

In tennis, it is not uncommon to see players serve very hard on their first ball, and then ease off on the second. Is it to rest a little after the first effort, or is there a real logic behind it?

Let us model all this (following a model presented by David Gale in 1971). A player has a set http://freakonometrics.blog.free.fr/public/perso3/tennisserv01.gif of possible serves. For his first serve, he chooses the serve http://freakonometrics.blog.free.fr/public/perso3/tennisserv02.gif. The first difficulty is for the serve to be valid (or, to keep things simple, to clear the net), and we denote by http://freakonometrics.blog.free.fr/public/perso3/tennisserv03.gif the probability that such a serve clears the net. If the serve is in, he has a certain probability of winning the point, which we will denote http://freakonometrics.blog.free.fr/public/perso3/tennisserv05.gif. Even without knowing much about tennis, it is not absurd to assume that there is a form of trade-off,

  • a powerful serve has little chance of going in (http://freakonometrics.blog.free.fr/public/perso3/tennisserv06.gif small), but if it clears the net the server will have the advantage, and the probability of winning the point will be high (http://freakonometrics.blog.free.fr/public/perso3/tennisserv05.gif large)
  • a weak serve has a good chance of going in (http://freakonometrics.blog.free.fr/public/perso3/tennisserv06.gif large), but the ball will then be easy for the opponent, and the probability of winning the point will be low (http://freakonometrics.blog.free.fr/public/perso3/tennisserv05.gif small)

For a given serve http://freakonometrics.blog.free.fr/public/perso3/tennisserv07.gif, the probability of winning the point (on this first serve) is then http://freakonometrics.blog.free.fr/public/perso3/tennisserv08.gif.

Suppose that http://freakonometrics.blog.free.fr/public/perso3/tennisserv01.gif is “ordered” in the sense that http://freakonometrics.blog.free.fr/public/perso3/tennisserv06.gif can be assumed strictly increasing.
Given the remark made initially, it is not absurd to assume that the function

http://freakonometrics.blog.free.fr/public/perso3/tennisserv09.gif

takes values in http://freakonometrics.blog.free.fr/public/perso3/tennisserv10.gif, increasing for http://freakonometrics.blog.free.fr/public/perso3/tennisserv.gif, then decreasing for http://freakonometrics.blog.free.fr/public/perso3/tennisserv12.gif.

The player seeks to maximize his probability of winning the point.

But in tennis, if the first serve does not go in, the server is entitled to a second chance. The server therefore actually chooses a pair http://freakonometrics.blog.free.fr/public/perso3/tennisserv15.gif corresponding to his first and second serves, respectively. The probability of winning the point, for a given strategy http://freakonometrics.blog.free.fr/public/perso3/tennisserv16.gif, is then

http://freakonometrics.blog.free.fr/public/perso3/tennisserv17.gif

Note that the optimal strategy for the second ball is independent of what was done on the first one. Indeed, if we try to solve, for a given http://freakonometrics.blog.free.fr/public/perso3/tennisserv18.gif,

http://freakonometrics.blog.free.fr/public/perso3/tennisserv20.gif

the first order condition is

http://freakonometrics.blog.free.fr/public/perso3/tennisser21.gif

i.e.

http://freakonometrics.blog.free.fr/public/perso3/tennisserv22.gif

In other words, the second serve must be the one that maximizes his probability of winning the point, i.e. http://freakonometrics.blog.free.fr/public/perso3/tennisserv23.gif.
But if we know that it is optimal to play http://freakonometrics.blog.free.fr/public/perso3/tennisserv23.gif, then, for the first serve, the player seeks to solve

http://freakonometrics.blog.free.fr/public/perso3/tennissserv24.gif

whose first order condition is

http://freakonometrics.blog.free.fr/public/perso3/tennisserv25.gif

i.e.

http://freakonometrics.blog.free.fr/public/perso3/tennisserv26.gif

The point s_1 satisfying this condition can be visualized in the figure below, since http://freakonometrics.blog.free.fr/public/perso3/tennisserv27.gif can be interpreted as http://freakonometrics.blog.free.fr/public/perso3/tennisserv28.gif, i.e. the tangent of the red function, and we want this tangent to have slope http://freakonometrics.blog.free.fr/public/perso3/tennisserv29.gif, which is the slope of the blue curve (as David Gale noted),

In other words, it is optimal to serve harder on the first serve than on the second… Which shows that, in the end, everything can be proved with a nice model…
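To make this concrete, here is a minimal numerical sketch in R, under purely illustrative assumptions (neither function comes from the original model): the probability that a serve of power s goes in is p(s) = 1 - s, and the probability of winning the point once it is in is q(s) = s.

p <- function(s) 1 - s   # probability the serve clears the net (illustrative assumption)
q <- function(s) s       # probability of winning the point if it is in (illustrative assumption)
# second serve: maximize p(s)q(s) alone
s2 <- optimize(function(s) p(s)*q(s), c(0,1), maximum = TRUE)$maximum
V2 <- p(s2)*q(s2)
# first serve: maximize p(s1)q(s1) + (1-p(s1)) * p(s2)q(s2)
s1 <- optimize(function(s) p(s)*q(s) + (1-p(s))*V2, c(0,1), maximum = TRUE)$maximum
c(first = s1, second = s2)   # roughly 0.625 and 0.5: the first serve is indeed harder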

What is the optimal strategy to marry the best one ?

Valentine’s day is a nice opportunity to post on hot and sexy topics… Well, it’s also an important day that I should not miss, probably as much as Saint Patrick’s Day or my wife’s birthday. And as I mentioned last week (here), it is difficult to find the distribution of the age at marriage on the internet… So maybe we can build a small model, to understand when girls decide to get married… Consider a young girl who knows that she will not meet thousands of men willing to marry her (actually, one can take the opposite point of view, with a young man who can find only http://freakonometrics.hypotheses.org/files/2015/12/mariage01.png girls willing to marry him; the problem can be assumed symmetric, especially if I do not want to get feminist leagues on my back).

Assume that http://freakonometrics.hypotheses.org/files/2015/12/mariage01.png men agree to marry her. Of course, among those http://freakonometrics.hypotheses.org/files/2015/12/mariage01.png men, our girl wants to marry the “best” one (assume that men can be ranked objectively). Of course, she cannot meet the “best” guy immediately, so men are met randomly, and after each “interview“, either she rejects him (forever, we assume she cannot go back and admit she made a mistake), or she agrees to marry him. An important assumption is that rejected men cannot be recalled.

From a mathematical point of view, we need to find the optimal stopping time. Here, the problem is slightly different compared with that one (with the optimal time to get a bonus) or this one (with the optimal time to sit in a bar and have a beer). Here, we do not give “grades” to the guys. The only thing that is observed is their relative ranks. Our girl cannot know whether she is meeting the best of all men (out of http://freakonometrics.hypotheses.org/files/2015/12/mariage01.png), but she knows whether this one is better than the ones she has already met. From a mathematical point of view, at time http://freakonometrics.hypotheses.org/files/2015/12/mariage02.png, she knows the relative rank of http://freakonometrics.hypotheses.org/files/2015/12/mariage02.png (compared with the first http://freakonometrics.hypotheses.org/files/2015/12/mariage04.png), not his absolute rank. We also assume that http://freakonometrics.hypotheses.org/files/2015/12/mariage01.png is known.

The optimal strategy is that she has to automatically reject the first http://freakonometrics.hypotheses.org/files/2015/12/mariage04.png (some kind of calibration period), and then, starting at time http://freakonometrics.hypotheses.org/files/2015/12/mariage02.png, she will marry the best among the ones she has already met.
So assume that our girl has already met http://freakonometrics.hypotheses.org/files/2015/12/mariage04.png guys, and decided to reject all of them. Now she is trying to see whether http://freakonometrics.hypotheses.org/files/2015/12/mariage02.png can be the optimal time to stop, and to start looking seriously… For an arbitrary cut-off http://freakonometrics.hypotheses.org/files/2015/12/mariage02.png, the probability that the best applicant will show up at some time http://freakonometrics.hypotheses.org/files/2015/12/wedd01.gif is http://freakonometrics.hypotheses.org/files/2015/12/wedding01.gif

http://freakonometrics.hypotheses.org/files/2015/12/wedding02.gif

i.e.

http://freakonometrics.hypotheses.org/files/2015/12/wdeeing03.gif

The http://freakonometrics.hypotheses.org/files/2015/12/wedd02.gif term is there because there is only one “best” guy, and http://freakonometrics.hypotheses.org/files/2015/12/wedd03.gif is the probability that he shows up at time http://freakonometrics.hypotheses.org/files/2015/12/wedd01.gif (this can be visualized below)

Thus, we can write

http://freakonometrics.hypotheses.org/files/2015/12/wedding04.gif

i.e.

http://freakonometrics.hypotheses.org/files/2015/12/wedding05.gif

Thus, the optimum of http://freakonometrics.hypotheses.org/files/2015/12/mariage18.png is obtained when http://freakonometrics.hypotheses.org/files/2015/12/mariage19.png, which gives the optimal time to stop (or here, to start seeking seriously), i.e. 36.7%.

Hence, the best strategy is to automatically reject the first http://freakonometrics.hypotheses.org/files/2015/12/mariage20.png=37% of the candidates (which is where the function above reaches its maximum), and then to select the first one (if possible) who is better than all previous candidates.
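As a quick check of that threshold, the standard formula for the probability of selecting the overall best candidate, when the first m are automatically rejected out of n, is P(m) = (m/n) × (1/m + 1/(m+1) + … + 1/(n-1)); with n = 100 (as in the simulation below),

n <- 100
P <- function(m) (m/n)*sum(1/(m:(n-1)))   # probability of picking the best, rejecting the first m
p <- sapply(1:(n-1), P)
which.max(p)   # around 37, i.e. reject roughly the first 37% of the candidates
max(p)         # close to 1/e, about 0.37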

Consider the following Monte Carlo procedure: assume that she rejects – automatically – the first http://freakonometrics.hypotheses.org/files/2015/12/mariage02.png (we consider a loop over all possible values of http://freakonometrics.hypotheses.org/files/2015/12/mariage02.png) and then marries the first one who is better than all those met during the calibration period (equivalently, better than everyone met so far),

n=100
ns=1000000
MOY1=MOY2=rep(NA,n)
for(m in 2:(n-1)){
  WHICH=rep(NA,ns); MARIAGE=rep(0,ns)
  for(s in 1:ns){
    Z=sample(1:n,size=n,replace=FALSE)   # random arrival order of the n candidates
    mx=max(Z[1:m])                       # best candidate seen during the calibration period
    STOP=FALSE
    for(k in (m+1):n){
      if((Z[k]>mx)&(STOP==FALSE)){       # first candidate better than all previous ones
        WHICH[s]=k
        STOP=TRUE
        MARIAGE[s]=1
      }
    }
  }
  HIS=WHICH[!is.na(WHICH)]
  TH=table(HIS)
  MOY1[m]=mean(HIS)
  MOY2[m]=mean(HIS)*mean(MARIAGE)
  THH=rep(NA,n)
  THH[as.numeric(names(TH))]=as.numeric(TH)/ns
}

If we run it over all possible http://freakonometrics.hypotheses.org/files/2015/12/mariage02.png we get

http://freakonometrics.hypotheses.org/files/2015/12/mariage-anim.gif

The “distribution” (in green) can be seen as the probability of marrying the guy of level http://freakonometrics.hypotheses.org/files/2015/12/mariage06.png, given that the first http://freakonometrics.hypotheses.org/files/2015/12/mariage02.png were rejected. The sum is not one since there is a nonzero probability of marrying no one. Actually, the probability of getting married is the following

The more she waits, the smaller the probability of getting married. But on the other hand, the more she waits, the “better” the husband… The graph below shows the rank of the guy she marries, if she gets married (it was actually the vertical solid red line in the animation)

So there is a trade-off. If not getting married gives a satisfaction of 0 (lower than finally marrying anyone), and if marrying the guy with rank http://freakonometrics.hypotheses.org/files/2015/12/mariage06.png gives her satisfaction http://freakonometrics.hypotheses.org/files/2015/12/mariage06.png, we have

(it was the vertical dotted red line in the animation). So it looks like it is optimal to test the first 35–38% of the men, and then to marry the best one she finds (if he is better than the best one she met during the “testing” procedure). So our previous analysis looks correct…

Now to go further, I have to admit that this model is known in the academic literature as the secretary problem. In 1989, Thomas Ferguson wrote a nice paper in Statistical Science entitled Who Solved the Secretary Problem? (here). Anthony Mucci also published an article in the Annals of Probability on possible extensions, in 1973 (here), as did Thomas Lorenzen (there) in 1981. This problem is definitely an interesting one!

When should I optimally shoot at my son? (part 2)

Following yesterday’s post, online here, assume now that I do not know whether my son already shot at me (when I had my back turned) and missed… Then the payoff function is the one proposed by Vincent, i.e.

https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel100.png

In that particular case,

https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel20.png

becomes

https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel101.png

i.e.

https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel102.png

If we plot those functions on [0,1], the optimal value is the solution of https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel103.png

i.e. https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel104.png (we focus only on solutions in [0,1]). Because here the game is symmetric, my son should also shoot at time https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel105.png
Thus, the payoff is then

 https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel106.png

Since we consider here a zero-sum game, this cannot be a solution of the game. So the game does not have a pure strategy solution.

Assume now that I have a mixed strategy, i.e. a distribution over the time to shoot. My strategy has distribution https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel110.png, with density https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel111.png (we assume here that the density exists, or we only look for solutions that are differentiable). Assume further that there exists https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel112.png>0 such that the support of my optimal shooting time (the time to shoot is now a random variable) is (https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel112.png,1] (or [https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel112.png,1] since we assume that https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel110.png is differentiable). There is a discussion at the end of Vincent’s post where he needs that assumption. Actually, I think we can make it right away, since we can rationally assume that I will not shoot at time 0 (or even in a neighborhood of 0, since I have zero chance of hitting my son).
The expected payoff function, assuming that my son shoots at time y, is

https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel1113.png

Since the zero-sum game is symmetric, again, the expected payoff should be zero. It follows that, necessarily,

https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel120.png

if https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel121.png. Hence, if we differentiate (with respect to y), we have

https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel122.png

and if we differentiate once more, it follows that

https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel123.png

i.e. a general solution should be of the form https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel130.png.
Here, we have the same solution as the one considered on Vincent’s blog. His solution is obtained as follows (with slightly different expressions): conditional on https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel03.png, my expected payoff is

https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel140.png

i.e.

https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel131.png

With a simple integration by parts,

https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel132.png

where https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel134.png, i.e.

https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel133.png

Thus,

https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel135.png

So, if we want to be indifferent to https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel03.png‘s strategy, https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel141.png, where

https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel142.png

with https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel143.png,

https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel144.png

Consider solutions https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel150.png; then https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel151.png=1, i.e. either https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel152.png=1 and then https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel153.png is constant, or https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel152.png=-1. This means that https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel154.png is proportional to https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel155.png.
If we substitute this into the equation we had initially, it follows that

https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel160.png

i.e.

https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel161.png

If we consider https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel03.png=a and https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel03.png=1, it follows that https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel112.png=1/3 while https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel162.png=1/4 (but we do not really care about that normalizing constant).
It means that we should not start shooting before 1/3 of the tank is filled. Actually, it makes sense, since

https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel163.png

if https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel03.png<1/3 (while https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel164.png=0 if  https://perso.univ-rennes1.fr/arthur.charpentier/latex/duel03.png>1/3).
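As a numerical sanity check, here is a small sketch under what I believe is the standard silent-duel payoff with accuracy functions equal to t (an assumption of mine, not necessarily Vincent’s exact payoff): with the density f(t) = 1/(4t^3) on [1/3, 1], the expected payoff is zero whatever the opponent’s shooting time y in [1/3, 1].

# assumed payoff: if I shoot at x and my son at y,
#   x < y : I hit with probability x, otherwise he then hits with probability y  ->  K = x - (1-x)*y
#   x > y : he shoots first                                                      ->  K = -y + (1-y)*x
f <- function(x) 1/(4*x^3)   # candidate mixed strategy on [1/3, 1]
expected_payoff <- function(y){
  integrate(function(x) (x - (1-x)*y)*f(x), 1/3, y)$value +
  integrate(function(x) (-y + (1-y)*x)*f(x), y, 1)$value
}
sapply(c(0.4, 0.6, 0.8, 0.99), expected_payoff)   # all close to 0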

A drunkard’s tale, and optimal stopping

Following popular demand, I will resume my drunkard’s discussions… or rather revisit some classics of finance, explaining that they are simply problems that lovers of strong drink have to face (whether that means one should recruit in bars rather than at the doors of the grandes écoles is another matter…).
In short, before starting his random walk down the rue de la soif (here, corresponding to barrier option problems translated into financial terms), then having trouble with his keys (), then with the police (here), our hero (for we may now call him a hero, after 4 posts devoted to him) had to choose his bar… The problem is far from simple. There are 20 bars in the street (say https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-am-50.png to be a bit more formal). He arrives from place Sainte-Anne and wishes to pick the cheapest bar. The trouble is that he is not allowed to turn back1 and he does not know the prices charged in the various bars. He starts with a prior that the price of a pint lies between 3 and 6 euros, that the price is uniformly distributed between those two values, and that prices are independent from one bar to the next. For the financially minded, he holds an option (to order a beer), and can exercise it whenever he wants. An American option, in a way. Suppose he has reached bar https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-am-01.png. He can either pay https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-am-02.png (assumed random, uniformly distributed and independent of the other bars), or hope to pay less further down the street.
Let https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-am-03.png denote the value of this option; then

https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-am-04.png

i.e.

https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-am-05.png

that is

https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-am-06.png

where https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-am-07.png denotes the distribution of the price of a beer (here a uniform distribution), with a terminal condition of the form

https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-am-09.png

because he is thirsty, and will not leave the street without having had a drink!
Classically, this program can be solved by backward induction, starting from the distribution of https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-am-10.png. Set https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-am-12.png. Then

https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-am-13.png

and

https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-am-14.png

that is, simply

https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-am-16.png

and finally

https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-am-18.png

I leave it to the braver readers to simplify the calculations. The “exercise boundary” is then obtained by recursion. Numerically, the code is simply

> n=20
> u=rep(NA,n)
> b=6;a=3
> u[n]=(b+a)/2
> for(k in (n-1):1){
+ u[k]=1/(b-a)*(u[k+1]*(b-u[k+1])+(u[k+1]^2-a^2)/2)
+ }

As soon as the barrier is reached, he sits down at the bar. Note that the further he walks down the street, the less demanding he becomes: at the very beginning, he will not sit down unless the pint costs less than 3.30 euros… but the further he goes, the higher his acceptance threshold rises. The computation written in integral form gives here

> u=rep(NA,n)
> b=6;a=3
> u[n]=(b+a)/2
> for(k in (n-1):1){
+ g=function(x){pmin(x,rep(u[k+1],length(x)))/(b-a)}
+ u[k]=integrate(g,lower=a,upper=b)$value
+ }
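As a quick sanity check (not in the original post), one can simulate the walk down the street and verify that the rule “sit down at bar k as soon as the price is below u[k+1]” yields an average price paid close to u[1] (this reuses n, a, b and the vector u computed above):

set.seed(1)
ns <- 100000
paid <- rep(NA, ns)
for(s in 1:ns){
prices <- runif(n, min = a, max = b)          # prices of the n bars along the street
k <- 1
while(k < n && prices[k] > u[k+1]) k <- k+1   # keep walking while the price exceeds the threshold
paid[s] <- prices[k]                          # forced to accept at the last bar if need be
}
mean(paid)   # should be close to u[1]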

I had already touched on this problem in a previous post on American options, but we can now go a little further… what happens if we assume that prices are discrete (for instance in steps of 50 cents, or of 1 euro)? The advantage of these numerical methods is that assumptions can be relaxed very easily; for instance here we would have

> h=2
> K=(b-a)*h+1
> PRIX=seq(a,b,by=1/h)
> u2=rep(NA,n)
> b=6;a=3
> u2[n]=(b+a)/2
> for(k in (n-1):1){
+ g=function(x){pmin(x,rep(u2[k+1],length(x)))}
+ u2[k]=sum(g(PRIX)*1/K)}

for prices in steps of 1 euro (the only possible prices being 3, 4, 5 or 6 euros, i.e. h=1 in the code above).

Or the following boundary, if prices vary in steps of 50 cents.

Given the discretization, note that the true boundary then becomes

In short, as always, drunkards’ problems coincide with problems of optimal exercise of American options, a classic problem in mathematical finance…
1 to make this story credible: at each bar he reaches, he asks the price of a pint. If he finds it too expensive, he exclaims “it’s way too expensive here!” and leaves. Otherwise he orders and sits down. That exclamation makes it unlikely – in his eyes – that he would ever come back and sit down at that bar…

Optimal control, part 2

In the first part (here), we introduced Bellman’s idea of backward induction. But what if we now consider an infinite time horizon? Actually, the maths will be even simpler… and we will be able to use fixed point theorems to derive solutions.

  • The mathematical framework

Here, consider the following value function

https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-contr-01.png

and define

https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-contr-02.png

A sequence https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-contr-04.png is said to be an admissible solution for starting point x,

https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-contr-05.png

if https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-contr-06.png and https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-contr-07.png. If we reformulate the dynamic programming idea, we obtain that if https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-contr-04.png is a solution to problem https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-contr-08.png, then for all https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-contr-09.png, the sequence https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-contr-10.png is a solution to problem https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-contr-11.png. It follows that the function v is a solution of Bellman’s equation

https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-contr-12.png

Note that this can be seen as a fixed point result, since

https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-contr-13.png

i.e.

https://perso.univ-rennes1.fr/arthur.charpentier/latex/opt-contr-14.png

So far, it shouldn’t be so hard….

  • Frank Ramsey’s model (discrete version)

In 1928, Frank Ramsey wanted to understand savings from a dynamic perspective (how much of its income should a nation save?). Consider the following infinite horizon problem, where a planner wants to maximize

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-01.png

subject to constraints https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-02.png https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-03.png and https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-04.png.
Before looking at dynamic programming answers, we might start with standard Lagrangian optimization techniques.
Assuming concavity of the utility function and of the production function, we should look only for interior solutions. Define the Lagrangian as

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-05.png

Thus, the first order conditions are then given by

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-06.png

and

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-088.png

Assume further some terminal condition, e.g.

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-08.png

(also called transversality condition). If we combine those two conditions, and assume that the first constraint is saturated, we obtain the so-called Euler equation,

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-09.png

It is also possible to use Bellman’s equation: given the dynamics of the capital,

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-10.png
https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-11.png

The first order condition states

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-12.png

But since v is unknown, so is its derivative. However, from the envelope theorem, we obtain something like

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-13.png

where

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-14.png

We can then write

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-15.png

i.e.

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-16.png

and finally

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-17.png

which is Euler’s equation.

  • A specified model, with calculations

As in the previous post (here), consider a log utility function and a power production function, https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-18.png and https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-19.png. The dynamics are then

 https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-20.pngand https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-21.png

Note that fixed points are here

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-22.png

and

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-23.png

Recall that the value function is defined as

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-24.png

A natural idea to derive the value function can be to iterate, i.e.

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-25.png

starting with a simple function, e.g. the null function, at step 0. At step n=1,

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-26.png

thus

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-27.png

At step n=2,

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-28.png

i.e.

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-29.png

The first order condition is then

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-30.png

and thus, we obtain

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-31.png

that can be plugged in the previous equation, i.e.

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-32.png

At step 3, we start from that new expression, derive the first order condition, and we get

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-33.png

and

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-34.png

and so on…
And finally, we can prove that

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-35.png

i.e. https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-36.png. Assuming that

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-37.png

actually, we can prove that

https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-38.png

(and https://perso.univ-rennes1.fr/arthur.charpentier/latex/co-opt-39.png has a form that can be written explicitly).
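To visualize this convergence, here is a minimal value function iteration sketch in R, under textbook assumptions of my own choosing (log utility u(c) = log c, dynamics k_{t+1} = k_t^alpha - c_t with alpha = 0.3, discount factor beta = 0.95; these numbers are not from the post): the iterates converge to a function of the form A + B log k, with B = alpha/(1 - alpha*beta).

alpha <- 0.3; beta <- 0.95
kgrid <- seq(0.05, 0.5, length.out = 200)   # grid for the capital stock
v <- rep(0, length(kgrid))                  # step 0: the null function
for(iter in 1:300){
  v_new <- sapply(kgrid, function(k){
    cons <- k^alpha - kgrid                 # consumption implied by each choice of next capital
    val <- rep(-Inf, length(cons))
    val[cons > 0] <- log(cons[cons > 0]) + beta*v[cons > 0]
    max(val)                                # Bellman operator applied to v
  })
  if(max(abs(v_new - v)) < 1e-8) break
  v <- v_new
}
B <- alpha/(1 - alpha*beta)                 # slope of the known closed form A + B log(k)
coef(lm(v ~ log(kgrid)))                    # the fitted slope should be close to B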

Optimal control, part 1

Since I recently had a request (from Benoît) about optimal control, I will start here a series of posts on that topic. But first, let us start with a simple problem, with discrete time, no randomness, and a finite horizon. This might be too simple a framework to model complex problems, but it should be enough to derive heuristic intuitions (I will skip the mathematical issues here, which can be found in very good books… references will come soon).

  • An introduction to backward induction

Before starting seriously, let us consider the following example: we want to reach the red city on the right from the red city on the left, as fast as possible. There are several roads, and each number is the number of hours that road takes. Let us prove that the optimal way is the red one,

The first idea could be to enumerate all possible trajectories, but with a large number of roads, the number of possible paths quickly becomes extremely large. An alternative is to look backwards (like any student facing a question where the answer is given: start from the end, and try to find a possible way to reach it).


Numbers in green are the number of hours still needed once we have reached that point. Let us move one more step backward, and consider the orange points,

In the middle, we have to choose either to go up (it will still take 9 hours) or to go down (and then 14 hours are necessary). Thus, the optimal strategy, once we have reached that point, is to take the 9-hour road. This is the idea proposed by Bellman. Let us go backward again, to the purple cities. Again, we have to choose the shorter way,

From the top, the fastest road will take 13 hours, and from the bottom, it will take 16 hours. Thus, even though it takes 10 hours to reach the top city (against 8 hours to reach the nearest city below), going through the top is necessarily faster.
And this is it. We now have an intuitive idea of what should be done to find optimal strategies.
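The same backward induction can be written in a few lines of R on a small hypothetical network (the graph and travel times below are made up for the illustration, they are not the ones of the figure): each node stores the number of hours still needed to reach the destination, and we work from right to left.

# hypothetical road network: every road goes from a lower-numbered node to a higher-numbered one
edges <- data.frame(from  = c(1, 1, 2, 2, 3, 3, 4, 5),
                    to    = c(2, 3, 4, 5, 4, 5, 6, 6),
                    hours = c(10, 8, 5, 9, 7, 6, 13, 16))
n <- 6
remaining <- rep(Inf, n)
remaining[n] <- 0                      # destination: 0 hours left
for(i in (n-1):1){                     # backward induction over the nodes
  out <- edges[edges$from == i, ]
  remaining[i] <- min(out$hours + remaining[out$to])
}
remaining[1]                           # fastest total travel time from the starting city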

  • The optimization problem, discrete with finite horizon

Let us consider the following optimization problem: we want to find the optimal strategy https://freakonometrics.hypotheses.org/files/2022/04/control-01.png that maximizes the following function

https://freakonometrics.hypotheses.org/files/2022/04/control-02.png

with simple constraints, such as https://freakonometrics.hypotheses.org/files/2022/04/control-03.png (a state space) and https://freakonometrics.hypotheses.org/files/2022/04/control-04.png (i.e. some dynamic constraints). Assume further that the starting point is given, i.e.
https://freakonometrics.hypotheses.org/files/2022/04/control-05.png. In economic applications, note that frequently https://freakonometrics.hypotheses.org/files/2022/04/control-06.png, i.e. we consider a discounted version of the value.

  • The idea of dynamic programming

The intuitive idea of dynamic programming is that if the optimal path from A to C goes through B, then the portion of the path from B to C must itself be optimal. It is therefore natural to consider backward induction techniques.
Thus, define
https://freakonometrics.hypotheses.org/files/2022/04/control-07.png
https://freakonometrics.hypotheses.org/files/2022/04/control-08.png
https://freakonometrics.hypotheses.org/files/2022/04/control-09.png

https://freakonometrics.hypotheses.org/files/2022/04/control-10.png
https://freakonometrics.hypotheses.org/files/2022/04/control-11.png
Then Bellman’s principle can be used to link those problems: if https://freakonometrics.hypotheses.org/files/2022/04/control-12.png is a solution of the problem https://freakonometrics.hypotheses.org/files/2022/04/control-13.png then, for all
https://freakonometrics.hypotheses.org/files/2022/04/control-14.png, https://freakonometrics.hypotheses.org/files/2022/04/control-15.png is a solution of problem https://freakonometrics.hypotheses.org/files/2022/04/control-16.png.
Note that, so far, we assume that such an optimal sequence does exist. Thus, we get that for all x

https://freakonometrics.hypotheses.org/files/2022/04/control-17.png

and more generally,

https://freakonometrics.hypotheses.org/files/2022/04/control-18.png

Hence, from a practical point of view, we solve those equations using a backward approach. I.e., first,

https://freakonometrics.hypotheses.org/files/2022/04/control-20.png

and then

https://freakonometrics.hypotheses.org/files/2022/04/control-21.png

and so on

https://freakonometrics.hypotheses.org/files/2022/04/control-22.png

… etc. It can be proved that the sequence https://freakonometrics.hypotheses.org/files/2022/04/control-23.png is a solution of https://freakonometrics.hypotheses.org/files/2022/04/control-24.png if and only if for all https://freakonometrics.hypotheses.org/files/2022/04/control-25.png, https://freakonometrics.hypotheses.org/files/2022/04/control-15.png
is a solution of

https://freakonometrics.hypotheses.org/files/2022/04/control-27.png

So far, it does not look so difficult….

  • A simple example

Let https://freakonometrics.hypotheses.org/files/2022/04/control-30.png denote the consumption at period t, and assume consumption yields utility

https://freakonometrics.hypotheses.org/files/2022/04/control-31.png

as long as the consumer lives. Assume the consumer is impatient, or has a stronger preference for the present, so that he discounts future utility by a factor https://freakonometrics.hypotheses.org/files/2022/04/control-32.png. Let https://freakonometrics.hypotheses.org/files/2022/04/control-33.png be the capital he holds at time t. Assume that his initial capital is a given amount https://freakonometrics.hypotheses.org/files/2022/04/control-34.png, and suppose that this period’s capital and consumption determine next period’s capital as

https://freakonometrics.hypotheses.org/files/2022/04/control-35.png

where https://freakonometrics.hypotheses.org/files/2022/04/control-36.png is a positive constant and https://freakonometrics.hypotheses.org/files/2022/04/control-37.png. Assume further that capital cannot be negative. Then the consumer’s problem is simply

https://freakonometrics.hypotheses.org/files/2022/04/control-38.png

given https://freakonometrics.hypotheses.org/files/2022/04/control-39.png. Bellman’s equation is then

https://freakonometrics.hypotheses.org/files/2022/04/control-40B.png

which leads us to a simpler problem than the initial one, since only two variables are involved here, https://freakonometrics.hypotheses.org/files/2022/04/control-41.png and https://freakonometrics.hypotheses.org/files/2022/04/control-42.png. And to solve that problem, we use backward induction techniques.
Since https://freakonometrics.hypotheses.org/files/2022/04/control-44.png is known, we can easily derive https://freakonometrics.hypotheses.org/files/2022/04/control-45.png, and so on until https://freakonometrics.hypotheses.org/files/2022/04/control-46.png. More precisely, given https://freakonometrics.hypotheses.org/files/2022/04/control-47.png, we can get https://freakonometrics.hypotheses.org/files/2022/04/control-49.png, which is the maximum of the function

https://freakonometrics.hypotheses.org/files/2022/04/control-50.png

with https://freakonometrics.hypotheses.org/files/2022/04/control-51.png. One can see that the following function is a possible solution

https://freakonometrics.hypotheses.org/files/2022/04/control-52.png

where each https://freakonometrics.hypotheses.org/files/2022/04/control-53.png is a constant. Further, the optimal amount to consume at time https://freakonometrics.hypotheses.org/files/2022/04/control-54.png is

https://freakonometrics.hypotheses.org/files/2022/04/control-55.png

i.e., making those expressions explicit,

https://freakonometrics.hypotheses.org/files/2022/04/control-60.png
https://freakonometrics.hypotheses.org/files/2022/04/control-61.png
https://freakonometrics.hypotheses.org/files/2022/04/control-62.png

…etc.
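To see the backward induction at work numerically, here is a short sketch with functional forms and numbers of my own choosing (log utility, dynamics k_{t+1} = A k_t^alpha - c_t, horizon Tmax = 10; the post’s exact expressions are in the images above):

A <- 1; alpha <- 0.3; beta <- 0.95; Tmax <- 10
kgrid <- seq(0.05, 1, length.out = 200)
V <- matrix(0, nrow = Tmax+1, ncol = length(kgrid))     # V[Tmax+1,] = 0: nothing after the last period
policy <- matrix(NA, nrow = Tmax, ncol = length(kgrid)) # optimal next-period capital
for(t in Tmax:1){                                       # backward in time
  for(j in seq_along(kgrid)){
    cons <- A*kgrid[j]^alpha - kgrid                    # consumption implied by each choice of k'
    val <- rep(-Inf, length(cons))
    val[cons > 0] <- log(cons[cons > 0]) + beta*V[t+1, cons > 0]
    V[t, j] <- max(val)
    policy[t, j] <- kgrid[which.max(val)]
  }
}
V[1, ]   # value of the problem as a function of the initial capital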

  • A reformulation of the optimisation problem

Another way of expressing the optimization problem is the following: https://freakonometrics.hypotheses.org/files/2022/04/control-70.png was the variable of interest, https://perso.univ-rennes1.fr/arthur.charpentier/latex/control-71.png was the control variable, and https://freakonometrics.hypotheses.org/files/2022/04/control-72.png. Thus, the program

https://freakonometrics.hypotheses.org/files/2022/04/control-73b.png

can be expressed as

https://freakonometrics.hypotheses.org/files/2022/04/control-74.png

Several extensions can be considered,

  • consider an infinite sum
https://freakonometrics.hypotheses.org/files/2022/04/control-80.png
  • consider a random component
https://freakonometrics.hypotheses.org/files/2022/04/control-81.png
  • consider a continuous version
https://freakonometrics.hypotheses.org/files/2022/04/control-82.png

But those topics will be the subject of posts that I still have to write…

When should you buy your plane ticket?

While organizing the JEEA (here), the main uncertainty was not the number of participants (which kept increasing until we decided to close registration; in the end, the forecasts made a month ago, when we contacted the caterers for the lunch trays, turned out to be very accurate), but the reimbursement of travel expenses1.

To give the speakers full flexibility, I had let them buy their own tickets to come to Rennes, with no constraint on the mode of transport. For the train, prices are fairly stable, but I was quite worried that the people coming by plane might have bought too early, or too late, and that the price would end up very high. In short, as every time I have to buy a plane ticket, I ask myself the same question: “is there an optimal date to buy your plane ticket online?”

  • A small study of 6,000 (domestic) American flights

Benny Mantin carried out a similar study, looking, over the 3 months preceding departure (90 days to be precise), at a thousand flights (in the origin–destination sense), each flight being considered at 6 different dates (6 consecutive Wednesdays).
For instance, for an ABQ-BOS flight (Albuquerque–Boston), we have the following six price paths,

(a local regression fit is shown in red, as a prediction). If we take the initial price as the reference (base 100), we observe the following evolutions over the first 200 flights,

Seen from afar, there seems to be a real trend, with an optimum around 30 days before departure. If we change the bandwidth of the local regression, we get fairly similar results,

and with a much finer smoothing,

  • So what is the optimal date?

We can also look at the distribution of the optimal purchase date (based on the lowess fit). Note that the optimal date seems to be 1 month before departure,

With a much finer smoothing window, we observe the following distribution for the date at which the minimum is reached,

If we look at when the minimum is reached on the raw data (or rather at the first time the minimum is reached, since the price is sometimes constant over two consecutive days), and not on the smoothed data, we get the following distribution,

If we do the same with the last time the minimum is reached, we get

These two graphs are much harder to read, since the raw data are much more erratic than the local regression fit.

  • What about price volatility?

Well, it is all very nice to see that it is optimal, on average, to wait until one month before departure to get a lower price. But that is also the period when prices are most volatile… For instance, on the first flight we studied, if we normalize the price to base 100 (as in the previous section), we obtain a volatility of the following shape, which increases as the departure date approaches.

For someone who is not looking for the best price on average, but who wants to avoid paying a price that is too high, we can consider for instance a quantile regression. The red curve here shows the 90% quantile of the airfare.

As before, we can look for the optimal date at which the price – as a quantile, not as an average price – is as low as possible,

Note that the quantile regression was fitted in a polynomial form (here a polynomial of degree 5). If we use a lower-degree polynomial (for instance 4), the distribution of the optimum – on the smoothed curve – changes a little, but in the end not that much…

Which would confirm the robustness of our conclusions…
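For readers who want to reproduce this kind of fit, here is a minimal sketch with the quantreg package on simulated fares (the data and the shape of the price curve are invented for the illustration; the original fares are not available here):

library(quantreg)
set.seed(42)
days  <- rep(90:1, 20)                                                   # days remaining before departure
price <- 100 + 0.3*(30 - days)^2/30 + rexp(length(days), rate = 1/20)    # simulated fares
fit <- rq(price ~ poly(days, 5), tau = 0.9)                              # 90% quantile, degree-5 polynomial
q90 <- predict(fit, newdata = data.frame(days = 90:1))
(90:1)[which.min(q90)]   # date at which the 90% quantile of the fare is lowest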

  • Is it worth ordering your tickets from the office?

One last point before closing this post. In the following graphs, we look at the day-to-day variations of the airfare (the log-returns). We again focus on the first flight, ABQ-BOS, with the blue areas corresponding to weekends,

The red curve is a local regression, used to visualize a possible trend, with a narrow neighborhood so as to capture weekly cycles. What is interesting is that the largest variations occur outside weekends, with in particular price drops at the beginning of the week, and price increases just before the weekend,

To better visualize these hundreds of crudely overlaid curves, we can instead overlay the smoothed trends,

This impression of drops on Mondays and rises on Fridays is confirmed by this graph. We can also simply look at the average log-return as a function of the day of the week,

We can also look at the raw volatility, on each day of the week,

Note that prices are relatively quiet over the weekend, and that they generally tend to fall on Mondays… the moral being that it is better to wait until you are back at the office to order your ticket… Well, of course, you have to live in the United States…
1 one deliberate choice was to make the conference free, providing lunch for all participants. In return, the conference had obtained funding from the AXA large risks chair, and from various local bodies (the city of Rennes, Rennes Métropole, the departmental and regional councils). But we did not want to lean too much on the CREM (the economics faculty’s research lab). In short, we had a budget limit, and with about a hundred participants, we really did not know how much leeway we had…