This paper designs a sequential repeated game of a micro-founded society with three types of agents: individuals, insurers, and a government. In an approach still nascent in the economics literature, we use Reinforcement Learning (RL), closely related to multi-armed bandit problems, to learn the welfare impact, per $1 spent, of a set of proposed policy interventions. The paper rigorously discusses the desirability of the proposed interventions by comparing them against each other on a case-by-case basis. It thereby provides a framework for algorithmic policy evaluation using calibrated theoretical models, which can assist in feasibility studies.
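As an illustration of the bandit flavour of this evaluation problem, the sketch below runs an epsilon-greedy bandit over a handful of candidate interventions, treating noisy welfare-per-dollar outcomes as arm rewards. The intervention names, reward distributions, and hyperparameters are purely hypothetical placeholders, not the paper's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical interventions and their (unknown) mean welfare gain per $1 spent;
# in the paper these outcomes would come from the calibrated micro-founded model.
true_welfare = {"subsidy": 0.8, "mandate": 0.5, "transfer": 1.1}
arms = list(true_welfare)

counts = {a: 0 for a in arms}        # times each intervention was evaluated
estimates = {a: 0.0 for a in arms}   # running mean welfare-per-dollar estimate
epsilon = 0.1                        # exploration rate

for t in range(5000):
    if rng.random() < epsilon:                  # explore a random intervention
        arm = rng.choice(arms)
    else:                                       # exploit the current best estimate
        arm = max(arms, key=estimates.get)
    reward = rng.normal(true_welfare[arm], 0.3) # noisy simulated welfare outcome
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean

print({a: round(v, 3) for a, v in estimates.items()})
```

After enough rounds the estimates concentrate around each intervention's true welfare per dollar, and the greedy choice settles on the most effective one.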
The classic approach to reinforcement learning is limited in that it only predicts the expected return. The specialized literature has long tried to remedy this problem by studying risk-sensitive models, but the distributional approach did not emerge until 2017. Since the seminal article by Bellemare, Dabney, and Munos (2017) and the state-of-the-art performance of the C51 algorithm on the Atari 2600 suite of benchmark tasks (Bellemare, Naddaf, et al. 2013), research has focused on understanding the behaviour of distributional algorithms. In this paper we place Bellemare's original results on distributional dynamic programming side by side with the classic results.
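To fix ideas, the following is a minimal NumPy sketch of the categorical projection step at the heart of C51: the distributional Bellman target r + γz is computed atom by atom and its probability mass is redistributed onto the fixed support. The function name, array shapes, and loop-based implementation are illustrative assumptions, not the paper's code.

```python
import numpy as np

def project_categorical(probs, rewards, dones, gamma, support):
    """Project the distributional Bellman target r + gamma * z onto a fixed
    categorical support, in the spirit of C51 (Bellemare, Dabney, Munos 2017).

    probs:   (batch, n_atoms) next-state return distributions
    rewards: (batch,) immediate rewards
    dones:   (batch,) episode-termination flags (1.0 if terminal)
    support: (n_atoms,) fixed atom locations z_1 < ... < z_N
    """
    v_min, v_max = support[0], support[-1]
    n_atoms = len(support)
    delta_z = (v_max - v_min) / (n_atoms - 1)

    # Shifted and shrunk atoms, clipped to the support range.
    tz = np.clip(rewards[:, None] + gamma * (1.0 - dones[:, None]) * support,
                 v_min, v_max)
    b = (tz - v_min) / delta_z                       # fractional atom index
    lower = np.floor(b).astype(int)
    upper = np.ceil(b).astype(int)

    target = np.zeros_like(probs)
    for i in range(probs.shape[0]):
        for j in range(n_atoms):
            # Split each atom's mass between its two neighbouring atoms.
            if lower[i, j] == upper[i, j]:
                target[i, lower[i, j]] += probs[i, j]
            else:
                target[i, lower[i, j]] += probs[i, j] * (upper[i, j] - b[i, j])
                target[i, upper[i, j]] += probs[i, j] * (b[i, j] - lower[i, j])
    return target

support = np.linspace(-10.0, 10.0, 51)          # C51's 51 atoms
probs = np.full((2, 51), 1.0 / 51)              # uniform next-state distributions
target = project_categorical(probs, np.array([1.0, 0.0]),
                             np.array([0.0, 1.0]), 0.99, support)
assert np.allclose(target.sum(axis=1), 1.0)     # projected mass still sums to one
```

The projection is what makes the distributional Bellman operator closed on the class of categorical distributions, which is the property Bellemare's dynamic-programming analysis exploits.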
Reinforcement learning algorithms describe how an agent can learn an optimal action policy in a sequential decision process, through repeated experience. In a given environment, the agent's policy yields running and terminal rewards. As in online learning, the agent learns sequentially. As in multi-armed bandit problems, when the agent picks an action it cannot infer ex post the rewards that other action choices would have induced. In reinforcement learning, moreover, actions have consequences: they influence not only rewards, but also future states of the world. The goal of reinforcement learning is to find an optimal policy, a mapping from states of the world to actions that maximizes cumulative reward over the long run. Exploring may be sub-optimal over a short horizon but can lead to optimal long-term rewards. Many optimal control problems, popular in economics for more than forty years, can be expressed in the reinforcement learning framework, and recent advances in computational science, driven in particular by deep learning algorithms, can be used by economists to solve complex behavioral problems. In this article, we propose a state-of-the-art survey of reinforcement learning techniques and present applications in economics, game theory, operations research, and finance.
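To make these ingredients concrete, here is a minimal tabular Q-learning sketch on a toy chain environment: the agent's actions move it through states, an epsilon-greedy rule trades off exploration against exploitation, and value estimates are bootstrapped from successor states. The environment, reward structure, and hyperparameters are illustrative assumptions, not drawn from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy chain MDP: states 0..4, actions 0 (left) / 1 (right); reaching state 4
# ends the episode with reward 1. The setup is purely illustrative.
n_states, n_actions = 5, 2
gamma, alpha, epsilon = 0.95, 0.1, 0.1
Q = np.zeros((n_states, n_actions))

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # Explore with probability epsilon, and break exact ties randomly
        # so the untrained agent does not get stuck at the left boundary.
        if rng.random() < epsilon or Q[s, 0] == Q[s, 1]:
            a = int(rng.integers(n_actions))
        else:
            a = int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: bootstrap from the best action in the next state.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(np.round(Q, 2))  # learned action values; right-moves come to dominate
```

The example exhibits the two features emphasized above: the reward for a choice the agent did not make is never observed, and each action changes the state from which all future rewards are earned.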