Model-Free Monte Carlo-like Policy Evaluation

Publication Abstract

Fonteneau, R., S.A. Murphy, L. Wehenkel, and D. Ernst. 2010. "Model-Free Monte Carlo-like Policy Evaluation." In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2010), JMLR Workshop and Conference Proceedings, Volume 9.

We propose an algorithm for estimating the finite-horizon expected return of a closed-loop control policy from an a priori given (off-policy) sample of one-step transitions. It averages cumulated rewards along a set of "broken trajectories" made of one-step transitions selected from the sample on the basis of the control policy. Under some Lipschitz continuity assumptions on the system dynamics, reward function, and control policy, we provide bounds on the bias and variance of the estimator that depend only on the Lipschitz constants, on the number of broken trajectories used in the estimator, and on the sparsity of the sample of one-step transitions.
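
The estimator itself is not spelled out on this page, but the abstract pins down its structure: build a set of "broken trajectories" by chaining one-step transitions drawn from the sample, each time taking the transition whose state-action pair is closest to the pair the evaluated policy would visit, then average the cumulated rewards. The following Python sketch follows that reading; the function name mfmc_estimate, the distance function, and the toy 1D system are illustrative assumptions, not artifacts of the paper.

    import numpy as np

    def mfmc_estimate(transitions, policy, x0, horizon, n_trajectories, distance):
        # transitions   : list of one-step transitions (x, u, r, y)
        # policy        : closed-loop policy under evaluation, (t, x) -> u
        # x0            : initial state
        # horizon       : finite horizon T
        # n_trajectories: number of broken trajectories to average
        # distance      : metric on state-action pairs, ((x, u), (x', u')) -> float
        assert n_trajectories * horizon <= len(transitions), "sample too small"
        available = set(range(len(transitions)))  # each transition used at most once
        returns = []
        for _ in range(n_trajectories):
            x, total = x0, 0.0
            for t in range(horizon):
                u = policy(t, x)
                # greedily pick the remaining sampled transition whose
                # (state, action) pair is nearest to the pair (x, u)
                l = min(available,
                        key=lambda i: distance((x, u),
                                               (transitions[i][0], transitions[i][1])))
                available.discard(l)
                _, _, r, y = transitions[l]
                total += r   # accumulate the reward of the selected transition
                x = y        # "break" the trajectory: jump to the stored successor
            returns.append(total)
        # the estimate is the average return over the broken trajectories
        return float(np.mean(returns))

    # Toy usage (hypothetical system): 1D linear dynamics, quadratic cost
    rng = np.random.default_rng(0)
    batch = [(x, u, -x**2, 0.9 * x + 0.1 * u)
             for x, u in rng.uniform(-1.0, 1.0, size=(500, 2))]
    d = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
    print(mfmc_estimate(batch, policy=lambda t, x: -x, x0=0.5,
                        horizon=10, n_trajectories=5, distance=d))

Selecting transitions without replacement keeps the broken trajectories from collapsing onto the same sample points, which is consistent with the abstract's claim that the variance bound depends on the number of broken trajectories used.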
