Tomorrow, I will attend the 2024 Optimization Days, in Montréal. I will present some work we did last fall with Philipp Ratz and Suzie Grondin, on (algorithmic) collusion in games, "Market Pricing with Reinforcement Learning" (the paper will be available soon).
Several recent articles have attempted to gain a better understanding of algorithmic collusion (Calvano et al. (2020), Klein (2021), Banchio & Mantegazza (2022), Rocher et al. (2023)). For example, in Calvano et al. (2020), a simulation study showed that, in a simplified market environment, basic Q-learning agents can learn to collude tacitly, in order to charge higher prices and increase their combined profit. Inspired by the iterated prisoner's dilemma, we derive reinforcement learning algorithms to investigate and discuss several recent results and their robustness, and we explain how reinforcement learning differs from simpler strategies and which conditions lead to unfavorable outcomes from a consumer perspective. In particular, we first describe the reinforcement learning problem in a more general manner and investigate the influence of the hyper-parameters. We then consider two situations separately. The first, similar in spirit to Rocher et al. (2023), assumes that the market is in equilibrium and that a general agent tries to exploit the pricing strategy of an incumbent agent. The second, more general, approach consists of an agent continuously updating its own policy.
The starting point was Calvano et al. (2020).
For classical games, the mathematical framework is the following:
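To fix notations (this is just the standard textbook definition, recalled here for completeness), a finite two-player game in normal form is a triple

G=\big(\{1,2\},\,(A_1,A_2),\,(u_1,u_2)\big),\qquad u_i:A_1\times A_2\to\mathbb{R},

where each player i simultaneously chooses an action a_i\in A_i and receives the payoff u_i(a_1,a_2).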
For example, consider the prisoner's dilemma:
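With the classical ordering of the payoffs, T>R>P>S, the bimatrix of the game is

\begin{array}{c|cc} & \text{cooperate} & \text{defect}\\ \hline \text{cooperate} & (R,R) & (S,T)\\ \text{defect} & (T,S) & (P,P) \end{array}

for instance with the usual illustrative values (T,R,P,S)=(5,3,1,0): defecting is a dominant strategy, so (P,P) is the unique Nash equilibrium, even though both players would prefer (R,R).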
Then, consider repeated games, and possible collusion.
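In the infinitely repeated game, with a common discount factor \gamma\in(0,1), player i maximizes the discounted sum

V_i=\sum_{t=0}^{\infty}\gamma^t\,u_i\big(a_1^{(t)},a_2^{(t)}\big),

and cooperation can then be sustained, e.g. by grim-trigger strategies, as soon as R/(1-\gamma)\geq T+\gamma P/(1-\gamma), that is \gamma\geq(T-R)/(T-P) (i.e. \gamma\geq 1/2 with the values above). This is the classical way "collusive" outcomes are rationalized without any explicit agreement.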
The next step is to include randomness, with (dynamic) stochastic games.
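Formally, a (two-player) stochastic game adds a state variable: it is a tuple

\big(\mathcal{S},\,(A_1,A_2),\,P,\,(r_1,r_2),\,\gamma\big),

where P(s'\mid s,a_1,a_2) gives the transition probabilities between states and r_i(s,a_1,a_2) the per-period rewards; the repeated game is just the special case with a single state.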
The standard Bellman equations then characterize optimal play:
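for a single decision maker (the strategy of the opponent being fixed, and folded into the transition kernel), the optimal state-action value function satisfies

Q^{\star}(s,a)=r(s,a)+\gamma\sum_{s'}P(s'\mid s,a)\,\max_{a'}Q^{\star}(s',a'),

with V^{\star}(s)=\max_a Q^{\star}(s,a).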
(I quickly go through the different concepts.) Finally, we can move from there to reinforcement learning, and Q-learning.
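When the transition kernel P is unknown, Q-learning simply replaces the expectation by samples: after observing (s_t,a_t,r_t,s_{t+1}), the update is

Q_{t+1}(s_t,a_t)=(1-\alpha)\,Q_t(s_t,a_t)+\alpha\Big(r_t+\gamma\max_{a'}Q_t(s_{t+1},a')\Big),

all other entries of the matrix remaining unchanged.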
The idea is then to play (or to interact) repeatedly, in order to learn that Q matrix.
The different parameters have the following interpretations.
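In the standard formulation, \alpha\in(0,1] is the learning rate (the weight given to new observations, relative to the current estimate), \gamma\in[0,1) is the discount factor (the weight given to future rewards), and \epsilon is the probability of playing an action at random rather than the greedy one, \arg\max_a Q(s,a).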
Then, we will play a little bit with the framework introduced for the prisoner's dilemma, for instance to understand the importance of \beta, used in the \epsilon-greedy approach, with \epsilon_t=\exp(-\beta t).
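Here is a minimal sketch of what such an experiment can look like (a toy version with hypothetical parameter values, not the actual code of the paper): two tabular Q-learning agents play the iterated prisoner's dilemma, the state being the previous joint action, and exploration decays as \epsilon_t=\exp(-\beta t).

```python
import numpy as np

# Toy sketch (hypothetical parameters, not the paper's code): two Q-learning
# agents play the iterated prisoner's dilemma, with epsilon_t = exp(-beta * t).

# Payoffs with (T, R, P, S) = (5, 3, 1, 0); action 0 = cooperate, 1 = defect
PAYOFF = np.array([[(3, 3), (0, 5)],
                   [(5, 0), (1, 1)]])

def run(beta, alpha=0.1, gamma=0.95, n_steps=200_000, seed=0):
    rng = np.random.default_rng(seed)
    # One Q matrix per player: 4 states (CC, CD, DC, DD) x 2 actions
    Q = [np.zeros((4, 2)), np.zeros((4, 2))]
    state, coop, window = 0, 0, 10_000
    for t in range(n_steps):
        eps = np.exp(-beta * t)                      # decaying exploration
        acts = [rng.integers(2) if rng.random() < eps
                else int(np.argmax(Q[i][state])) for i in range(2)]
        a1, a2 = acts
        r1, r2 = PAYOFF[a1, a2]
        nxt = 2 * a1 + a2                            # next state = joint action
        for i, (a, r) in enumerate([(a1, r1), (a2, r2)]):
            target = r + gamma * Q[i][nxt].max()     # sampled Bellman target
            Q[i][state, a] += alpha * (target - Q[i][state, a])
        state = nxt
        if t >= n_steps - window:                    # cooperation at the end
            coop += int(a1 == 0) + int(a2 == 0)
    return coop / (2 * window)

# Slower decay (smaller beta) keeps exploration alive much longer
for beta in (1e-3, 1e-4, 1e-5):
    print(f"beta={beta:g}, final cooperation rate={run(beta):.2f}")
```

The smaller \beta is, the longer the agents keep sampling both actions, which is precisely why this hyper-parameter matters so much for the limiting behavior.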
That is our first approach to the concept of collusion: agents do not need to "cooperate" explicitly in order to collude.
Then, we will use the experiment of Calvano et al. (2020) to move on to more complex discussions…