Counterfactual Off-Policy Evaluation with Gumbel-Max Structural Causal Models

Abstract

We introduce an off-policy evaluation procedure for highlighting episodes where applying a reinforcement-learned (RL) policy is likely to have produced a substantially different outcome than the observed policy did. In particular, we introduce a class of structural causal models (SCMs) for generating counterfactual trajectories in finite partially observable Markov Decision Processes (POMDPs). We see this as a useful procedure for off-policy "debugging" in high-risk settings (e.g., healthcare): by decomposing the expected difference in reward between the RL policy and the observed policy into contributions from specific episodes, we can identify the episodes where the counterfactual difference in reward is most dramatic. This in turn can be used to facilitate review of specific episodes by domain experts. We demonstrate the utility of this procedure with a synthetic environment of sepsis management.
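The mechanism behind these counterfactual trajectories is the Gumbel-Max SCM: each categorical transition s' ~ p(· | s, a) is modelled as s' = argmax_i (log p_i + g_i) with i.i.d. Gumbel(0, 1) noise g_i, so asking "what if a different action had been taken?" amounts to inferring the posterior over the Gumbel noise given the observed transition and replaying that same noise under the alternative action's transition distribution (the paper shows this SCM satisfies a counterfactual-stability property that makes such queries well behaved). The sketch below illustrates a single counterfactual draw in Python using standard top-down (truncated-Gumbel) posterior sampling; the function name and toy distributions are illustrative assumptions, not the paper's released code.

```python
import numpy as np

def counterfactual_category(p_logits, q_logits, observed, rng):
    """One counterfactual draw from a Gumbel-Max SCM.

    Factual mechanism:  x = argmax_i (p_logits[i] + u_i),  u_i ~ Gumbel(0, 1).
    Given the observation x == observed, sample the noise u from its posterior
    (top-down: the max Gumbel first, then truncated Gumbels for the rest),
    then replay the same noise under the interventional logits q_logits.
    """
    p_logits = np.asarray(p_logits, dtype=float)
    q_logits = np.asarray(q_logits, dtype=float)
    k = len(p_logits)

    # max_i Gumbel(p_logits[i]) is itself Gumbel(logsumexp(p_logits)).
    g_max = rng.gumbel() + np.logaddexp.reduce(p_logits)

    shifted = np.empty(k)
    shifted[observed] = g_max
    for i in range(k):
        if i == observed:
            continue
        # Gumbel(p_logits[i]) truncated to lie below the observed maximum.
        g = rng.gumbel() + p_logits[i]
        shifted[i] = -np.logaddexp(-g, -g_max)

    u = shifted - p_logits               # posterior Gumbel noise
    return int(np.argmax(q_logits + u))  # counterfactual outcome

rng = np.random.default_rng(0)
p = np.log([0.6, 0.3, 0.1])  # transition probs under the observed action
q = np.log([0.1, 0.3, 0.6])  # transition probs under the RL policy's action
# We observed next state 0 under p; resample it counterfactually under q.
draws = [counterfactual_category(p, q, observed=0, rng=rng) for _ in range(10_000)]
print(np.bincount(draws, minlength=3) / len(draws))
```

By construction, replaying the posterior noise under the factual logits reproduces the observed outcome, so the counterfactual distribution differs from the plain interventional distribution q exactly because it conditions on what was actually observed; applying this step to every transition in an episode yields a full counterfactual trajectory of the kind used for the per-episode reward decomposition above.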

Publication
Proceedings of the Thirty-Sixth International Conference on Machine Learning (ICML 2019)
Michael Oberst
PhD Student

Michael’s research interests include developing learning algorithms for dealing with non-stationarity / dataset shift in predictive modelling, as well as robust learning of treatment policies from observational data.

David Sontag
Associate Professor of EECS

My research focuses on advancing machine learning and artificial intelligence, and using these to transform health care.
