Mon, Jan 23, 2017 at noon:
H. Luke Shaefer
Fonteneau, Raphaël, Susan A. Murphy, Louis Wehenkel, and Damien Ernst. 2011. "Towards Min Max Generalization in Reinforcement Learning." In Agents and Artificial Intelligence: International Conference, ICAART 2010, Valencia, Spain, January 2010, Revised Selected Papers, edited by J. Filipe, A. Fred, and B. Sharp, 61-77. Communications in Computer and Information Science (CCIS), vol. 129. Berlin: Springer.
In this paper, we introduce a min max approach for addressing the generalization problem in reinforcement learning. The min max approach works by determining a sequence of actions that maximizes the worst return that could possibly be obtained under any dynamics and reward function compatible with the sample of trajectories and with some prior knowledge about the environment. We consider the particular case of deterministic Lipschitz continuous environments over continuous state spaces, finite action spaces, and a finite optimization horizon. We discuss why computing an exact solution of the min max problem is nontrivial, even after reformulating it so as to avoid searching in function spaces. To address this difficulty, we propose to replace, inside the min max problem, the search for the worst environment given a sequence of actions with an expression that lower-bounds the worst return obtainable for that sequence of actions. The tightness of this lower bound depends on the sample sparsity. From there, we propose a polynomial-time algorithm that returns a sequence of actions maximizing this lower bound. We give a condition on the sample sparsity ensuring that, for a given initial state, the proposed algorithm produces an optimal open-loop sequence of actions. Our experiments show that this algorithm can lead to more cautious policies than algorithms combining dynamic programming with function approximators.
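For a sense of how the lower-bound maximization can be carried out in polynomial time, here is a minimal Python sketch. It assumes the stage-wise form of the Lipschitz bound the abstract alludes to: a sampled reward minus a penalty proportional to the distance between consecutive samples, weighted by a Lipschitz constant of the remaining-horizon value function. The function name cautious_action_sequence, the transition-tuple layout (x, u, r, y), and the constants Lf and Lr are illustrative reconstructions, not the authors' reference implementation.

import numpy as np

def cautious_action_sequence(transitions, x0, T, Lf, Lr):
    """Choose an open-loop action sequence by maximizing a Lipschitz
    lower bound on the worst return compatible with the samples.

    transitions: list of one-step samples (x, u, r, y); x, y are arrays
    x0:          initial state (array)
    T:           finite optimization horizon
    Lf, Lr:      assumed Lipschitz constants of the dynamics and reward
    """
    n = len(transitions)

    # L_Q[k] = Lr * (1 + Lf + ... + Lf**(k-1)): Lipschitz constant of
    # the k-step value function, used to penalize jumps between samples.
    L_Q = [Lr * sum(Lf ** i for i in range(k)) for k in range(T + 1)]

    # score[t, l]: best bound over stages 0..t when sample l is used at
    # stage t; back[t, l]: the maximizing predecessor sample.
    score = np.full((T, n), -np.inf)
    back = np.zeros((T, n), dtype=int)

    for l, (x, u, r, y) in enumerate(transitions):
        score[0, l] = r - L_Q[T] * np.linalg.norm(x - x0)

    for t in range(1, T):
        for l, (x, u, r, y) in enumerate(transitions):
            # Penalize the distance between this sample's start state
            # and each candidate predecessor's end state.
            cand = [score[t - 1, k]
                    - L_Q[T - t] * np.linalg.norm(x - transitions[k][3])
                    for k in range(n)]
            back[t, l] = int(np.argmax(cand))
            score[t, l] = r + cand[back[t, l]]

    # Backtrack: the actions of the maximizing sample sequence form the
    # cautious open-loop policy; the top score is the guaranteed bound.
    l = int(np.argmax(score[T - 1]))
    bound = float(score[T - 1, l])
    seq = []
    for t in range(T - 1, -1, -1):
        seq.append(transitions[l][1])
        l = int(back[t, l])
    return list(reversed(seq)), bound

Because the bound couples only consecutive transitions, maximizing it is a longest-path problem over a T-by-n trellis, solvable in O(T n^2) time by the Viterbi-style recursion above; this decomposition is what makes a polynomial-complexity algorithm possible. Consistent with the abstract, the returned bound, and hence the cautiousness of the policy, tightens as the sample sparsity decreases.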