
Towards min max generalization in reinforcement learning

Publication Abstract

Fonteneau, Raphaël, Susan A. Murphy, Louis Wehenkel, and Damien Ernst. 2011. "Towards min max generalization in reinforcement learning." In Agents and Artificial Intelligence: International Conference, ICAART 2010, Valencia, Spain, January 2010, Revised Selected Papers (Communications in Computer and Information Science, vol. 129), edited by J. Filipe, A. Fred, and B. Sharp, 61-77.

In this paper, we introduce a min max approach for addressing the generalization problem in reinforcement learning. The min max approach determines a sequence of actions that maximizes the worst return that could possibly be obtained under any dynamics and reward function compatible with the sample of trajectories and some prior knowledge of the environment. We consider the particular case of deterministic, Lipschitz-continuous environments over continuous state spaces, with finite action spaces and a finite optimization horizon. We show that computing an exact solution of the min max problem is non-trivial, even after reformulating it to avoid a search in function spaces. To address this, we propose to replace, inside the min max problem, the search for the worst environment given a sequence of actions with an expression that lower-bounds the worst return obtainable for that sequence. The tightness of this lower bound depends on the sample sparsity. From there, we propose a polynomial-time algorithm that returns a sequence of actions maximizing this lower bound. We give a condition on the sample sparsity ensuring that, for a given initial state, the proposed algorithm produces an optimal open-loop sequence of actions. Our experiments show that this algorithm can lead to more cautious policies than algorithms that combine dynamic programming with function approximators.
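As a rough illustration of the lower-bound idea described in the abstract, the sketch below chains sampled one-step transitions that match a candidate action sequence and penalizes each sampled reward by a Lipschitz-weighted gap between the chained states, then picks the action sequence whose bound is largest. All data, constants, and helper names here (transitions, l_q, lower_bound, cautious_sequence) are hypothetical, and brute-force enumeration stands in for the paper's polynomial-time algorithm; this is a toy sketch of the bound's structure, not the authors' implementation.

import itertools

import numpy as np

# Toy stand-ins for the sample of one-step transitions
# (state, action, reward, next_state); all values are hypothetical.
transitions = [
    (np.array([0.0]), 0, 1.0, np.array([0.5])),
    (np.array([0.5]), 1, 0.5, np.array([1.0])),
    (np.array([1.0]), 0, 2.0, np.array([0.2])),
    (np.array([0.2]), 1, 1.5, np.array([0.9])),
]
ACTIONS = (0, 1)      # finite action space
L_F, L_R = 1.0, 1.0   # assumed Lipschitz constants of dynamics and reward
T = 2                 # finite optimization horizon


def l_q(n):
    # Lipschitz constant of an n-step return: L_R * (1 + L_F + ... + L_F**(n-1)).
    return L_R * sum(L_F ** k for k in range(n))


def lower_bound(x0, action_seq):
    """Lower-bound the worst-case return of an open-loop action sequence by
    chaining sampled transitions with matching actions and penalizing each
    sampled reward by the Lipschitz-weighted gap to the previous state."""
    per_step = [[tr for tr in transitions if tr[1] == a] for a in action_seq]
    best = -np.inf
    for chain in itertools.product(*per_step):   # one sample per time step
        bound, prev_state = 0.0, x0
        for t, (x_s, _, r, y_s) in enumerate(chain):
            gap = np.linalg.norm(prev_state - x_s)
            bound += r - l_q(T - t) * gap        # penalized reward at step t
            prev_state = y_s
        best = max(best, bound)                  # keep the tightest chain
    return best


def cautious_sequence(x0):
    # Maximize the lower bound over all |A|**T open-loop sequences
    # (brute force here; the paper proposes a polynomial-time algorithm).
    return max(itertools.product(ACTIONS, repeat=T),
               key=lambda seq: lower_bound(x0, seq))


print(cautious_sequence(np.array([0.0])))

Because every candidate environment must stay Lipschitz-compatible with the sampled transitions, maximizing this bound favors action sequences that stay close to observed data, which is the cautious behavior the abstract contrasts with function-approximation-based dynamic programming.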
