Trajectory Inspection: A Method for Iterative Clinician-Driven Design of Reinforcement Learning Studies

Abstract

Reinforcement learning (RL) has the potential to significantly improve clinical decision making. However, treatment policies learned via RL from observational data are sensitive to subtle choices in study design. We highlight a simple approach, trajectory inspection, to bring clinicians into an iterative design process for model-based RL studies. We identify where the model recommends unexpectedly aggressive treatments or expects surprisingly positive outcomes from its recommendations. Then, we examine clinical trajectories simulated with the learned model and policy alongside the actual hospital course. Applying this approach to recent work on RL for sepsis management, we uncover a model bias towards discharge, a preference for high vasopressor doses that may be linked to small sample sizes, and clinically implausible expectations of discharge without weaning off vasopressors. We hope that iterations of detecting and addressing the issues unearthed by our method will result in RL policies that inspire more confidence in deployment.
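The inspection loop described above can be sketched in code. This is a minimal illustrative sketch, not the authors' implementation: the state/action encoding, thresholds, and the toy transition model are all hypothetical stand-ins. It shows the two flagging criteria from the abstract (unexpectedly aggressive recommendations, surprisingly positive expected outcomes) and the rollout of simulated trajectories that a clinician would review side-by-side with the actual hospital course.

```python
import numpy as np

def flag_states(V, pi_rl, pi_clin, value_thresh=1.0, dose_gap=2):
    """Flag states for clinician inspection: either the learned model
    expects a surprisingly high value, or the RL policy recommends a
    treatment far more aggressive (e.g., a higher vasopressor-dose bin)
    than clinicians typically choose. Thresholds are hypothetical."""
    flags = []
    for s in range(len(V)):
        if V[s] > value_thresh or pi_rl[s] - pi_clin[s] >= dose_gap:
            flags.append(s)
    return flags

def rollout(model, policy, s0, horizon=10):
    """Simulate a trajectory under the learned model and policy,
    to be compared against the patient's actual hospital course."""
    traj = [s0]
    s = s0
    for _ in range(horizon):
        a = policy[s]
        s = model(s, a)
        traj.append(s)
    return traj

# Toy example: 5 discrete states, actions indexed by dose aggressiveness.
V = np.array([0.0, 2.0, 0.0, 0.5, -1.0])        # learned state values
pi_rl = np.array([3, 0, 0, 1, 0])               # RL-recommended actions
pi_clin = np.array([0, 0, 0, 1, 0])             # typical clinician actions
toy_model = lambda s, a: (s + a) % 5            # hypothetical dynamics

flagged = flag_states(V, pi_rl, pi_clin)
trajectories = {s: rollout(toy_model, pi_rl, s) for s in flagged}
```

In practice, the simulated trajectories for each flagged state would be rendered alongside the real patient record, letting clinicians judge whether the model's expectations (e.g., discharge without weaning off vasopressors) are plausible.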

Publication
AMIA 2021 Virtual Informatics Summit
Christina X Ji
PhD Student

Christina is interested in characterizing variation in treatment policies, examining the theoretical assumptions behind off-policy evaluation of reinforcement learning for healthcare, and developing algorithms for disease progression modeling.

Michael Oberst
PhD Student

Michael’s research interests include developing learning algorithms for dealing with non-stationarity / dataset shift in predictive modelling, as well as robust learning of treatment policies from observational data.

Sanjat Kanjilal
Clinical Fellow

Lecturer, Harvard Pilgrim Health Care Institute

David Sontag
Associate Professor of EECS

David's research focuses on advancing machine learning and artificial intelligence, and using these to transform health care.
