Fonteneau, R., S.A. Murphy, L. Wehenkel, and D. Ernst. 2010. "Model-Free Monte Carlo-like Policy Evaluation." In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2010), Vol. 9. San Francisco: Morgan Kaufmann Publishers.
We propose an algorithm for estimating the finite-horizon expected return of a closed-loop control policy from an a priori given (off-policy) sample of one-step transitions. The estimator averages cumulated rewards along a set of "broken trajectories," each made of one-step transitions selected from the sample on the basis of the control policy. Under Lipschitz continuity assumptions on the system dynamics, the reward function, and the control policy, we provide bounds on the bias and variance of the estimator that depend only on the Lipschitz constants, on the number of broken trajectories used in the estimator, and on the sparsity of the sample of one-step transitions.
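A minimal Python sketch may make the "broken trajectory" construction concrete. Everything here is an illustrative assumption rather than the paper's code: the function name, the default state-action distance, and the toy example at the bottom. The sketch stitches each rebuilt trajectory together by repeatedly picking the not-yet-used sample transition whose (state, action) pair lies closest to the pair the policy would generate, then averages the cumulated rewards over the rebuilt trajectories.

```python
import numpy as np

def mfmc_estimate(transitions, policy, x0, horizon, n_trajectories, dist=None):
    """Monte Carlo-like policy evaluation from an off-policy sample (sketch).

    transitions    : list of (x, u, r, y) one-step tuples (arrays x, u, y; float r)
    policy         : deterministic policy, callable (t, x) -> action u
    x0             : initial state
    horizon        : number of steps per rebuilt trajectory
    n_trajectories : number of broken trajectories to rebuild and average
    Requires len(transitions) >= horizon * n_trajectories, since transitions
    are consumed without replacement across all rebuilt trajectories.
    """
    if dist is None:
        # illustrative additive metric on (state, action) pairs; the paper's
        # bias/variance bounds presume a metric tied to the Lipschitz constants
        dist = lambda x, u, xl, ul: np.linalg.norm(x - xl) + np.linalg.norm(u - ul)

    used, returns = set(), []
    for _ in range(n_trajectories):
        x, ret = np.asarray(x0, dtype=float), 0.0
        for t in range(horizon):
            u = policy(t, x)
            # nearest not-yet-used sample transition to the on-policy pair (x, u)
            best = min((l for l in range(len(transitions)) if l not in used),
                       key=lambda l: dist(x, u, transitions[l][0], transitions[l][1]))
            used.add(best)
            _, _, r, y = transitions[best]
            ret += r                        # undiscounted finite-horizon return
            x = np.asarray(y, dtype=float)  # "jump" to the stored next state
        returns.append(ret)
    return float(np.mean(returns))

if __name__ == "__main__":
    # toy usage: 1-D states and actions, random transitions, a linear test policy
    rng = np.random.default_rng(0)
    sample = [(rng.normal(size=1), rng.normal(size=1), rng.normal(), rng.normal(size=1))
              for _ in range(1000)]
    print(mfmc_estimate(sample, lambda t, x: -0.5 * x, x0=np.zeros(1),
                        horizon=5, n_trajectories=10))
```

Drawing transitions without replacement is a design choice in this sketch so that no one-step transition contributes to more than one broken trajectory, in keeping with the abstract's dependence of the bounds on the number of broken trajectories and the sparsity of the sample.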