Research

Imagine a future where every individual has an AI assistant built into their health record: explaining test results and treatment plans, suggesting next steps, surfacing possible medical errors before they propagate, and coordinating care and communication between the patient and their care team.

Realizing this vision requires fundamental advances in machine learning, causal inference, and human-AI interaction. Longitudinal healthcare data is complex, noisy, and prone to distribution shift over time, making it difficult to develop safe and effective predictive models. Moreover, prediction is often not enough: causal questions matter too. For instance, understanding how treatments will affect outcomes can help patients and providers make more informed decisions. Finally, because AI assistants affect healthcare decisions through human-AI interaction, designing that interaction into the development and deployment of these systems is critical.

We strive to address these challenges through the main areas of research laid out below.


Machine learning on clinical time-series

Our lab develops algorithms that use data from electronic medical records to make better clinical predictions in areas like antibiotic resistance, cancer, heart failure, lupus, and other chronic illnesses. A key methodological challenge we study is how to predict from multivariate time series with complex long-range dependencies and substantial missingness, on which existing learning algorithms tend to perform poorly. In addition, we work on fairness and interpretability to ensure that clinical predictions are accurate, useful, and equitable.
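As a concrete illustration of the missingness challenge, one widely used recipe (popularized by GRU-D-style models) augments each series with a binary observation mask and time-since-last-measurement features before handing it to a predictor. The NumPy sketch below shows that featurization; the function name and representation are our own for illustration, and this is a standard baseline rather than a method from our papers.

```python
import numpy as np

def featurize_with_missingness(x, times):
    """Augment an irregularly observed multivariate series with
    missingness features.

    x     : (T, D) array with np.nan where a variable was not measured
    times : (T,) array of observation timestamps
    Returns forward-filled values, a binary observation mask, and the
    time elapsed since each variable was last observed.
    """
    T, D = x.shape
    mask = ~np.isnan(x)                    # True where a value was measured
    filled = np.zeros_like(x)              # never-observed variables stay 0
    delta = np.zeros_like(x)
    last_val = np.zeros(D)
    last_time = np.full(D, times[0])
    for t in range(T):
        observed = mask[t]
        last_val[observed] = x[t, observed]
        last_time[observed] = times[t]
        filled[t] = last_val               # last observation carried forward
        delta[t] = times[t] - last_time    # staleness of each variable
    return filled, mask.astype(float), delta
```

A downstream model can then learn, for instance, to trust a vital sign measured days ago less than one measured minutes ago.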

Natural language processing

Many variables useful for clinical research (e.g., disease state, interventions) are trapped in free-text clinical notes. Structuring such variables for downstream use typically involves a tedious process in which domain experts manually search through long clinical timelines. Natural language processing systems present an opportunity to automate this workflow, and our group has developed methods to improve automated extraction, e.g., via unsupervised learning and hybrid human-AI systems. We are also exploring new paradigms for electronic health records themselves, so that cleaner data is captured going forward.
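To make the extraction task concrete, the toy sketch below structures a single variable (smoking status) from note text using hand-written patterns. It is a deliberately simple baseline of the kind learned extractors aim to beat; the patterns, labels, and function name are purely illustrative.

```python
import re

# Regex baseline for structuring smoking status from free-text notes.
SMOKING_PATTERNS = [
    (re.compile(r"\b(denies|never)\s+(smoking|tobacco)", re.I), "never"),
    (re.compile(r"\b(former|quit|ex-)\s*smok", re.I), "former"),
    (re.compile(r"\b(current(ly)?\s+smok|\d+\s*pack[- ]year)", re.I), "current"),
]

def extract_smoking_status(note: str) -> str:
    for pattern, label in SMOKING_PATTERNS:
        if pattern.search(note):
            return label
    return "unknown"

print(extract_smoking_status("Pt denies tobacco use."))                  # never
print(extract_smoking_status("Patient is a former smoker, quit 2019."))  # former
```

Rules like these are brittle (negation, abbreviations, templated text), which is precisely why learned and hybrid human-AI approaches help.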

Probabilistic inference, graphical models, and latent variables

Probabilistic reasoning is a critical component of clinical machine learning. To reason about complex relationships between diseases and diagnostics, or to jointly infer the meaning of multiple terms in a clinical note, we need models that describe how observations affect our beliefs, and efficient inference procedures for updating those beliefs given observed data. Learning these models is challenging because labeled data is scarce and some variables are never observed. Our work on probabilistic inference therefore falls into two broad areas: efficient inference algorithms, and unsupervised approaches to discovering hidden variables and graphical structure.
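At its core, the belief update is Bayes' rule. The toy computation below, with made-up numbers, shows the kind of update involved: a positive test for a rare disease moves the posterior far less than the test's sensitivity might suggest.

```python
# Posterior over a disease given one positive test (illustrative numbers).
prior = 0.01          # P(disease)
sensitivity = 0.95    # P(test positive | disease)
false_pos = 0.05      # P(test positive | no disease)

evidence = sensitivity * prior + false_pos * (1 - prior)  # P(test positive)
posterior = sensitivity * prior / evidence                # P(disease | test+)
print(f"P(disease | test positive) = {posterior:.3f}")    # ~0.161
```

The hard part in practice is performing this update over thousands of interrelated variables at once, which is where the models and algorithms below come in.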

Efficient inference:
Unfortunately, performing inference (whether for parameter estimation at training time or for answering queries at test time) is often computationally hard, so we focus on approximate inference. Our research in this area builds new approximate inference algorithms and seeks to understand existing approximations theoretically. We work with both directed and undirected models, ranging from deep generative models, where we showed how to combine hidden Markov models with deep neural networks, to Markov random fields, where we developed a set of tools for understanding when and why existing approximation algorithms for MAP inference perform well.
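For intuition, the sketch below implements exact filtering in a small discrete hidden Markov model via the textbook forward recursion; in a deep generative variant, the per-step log-likelihoods would come from a neural network. Once the state space or model structure grows, this exact computation becomes intractable, which is what motivates approximate inference. The code is the standard algorithm, not one of our approximations.

```python
import numpy as np

def hmm_log_marginal(log_pi, log_A, log_lik):
    """Forward algorithm for a discrete HMM, in log space for stability.

    log_pi  : (K,)   log initial state probabilities
    log_A   : (K, K) log transitions, log P(z_t = j | z_{t-1} = i)
    log_lik : (T, K) per-step log-likelihoods log p(x_t | z_t = k)
    Returns log p(x_1, ..., x_T).
    """
    alpha = log_pi + log_lik[0]
    for t in range(1, len(log_lik)):
        # logsumexp over the previous state
        alpha = log_lik[t] + np.logaddexp.reduce(alpha[:, None] + log_A, axis=0)
    return np.logaddexp.reduce(alpha)
```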

Unsupervised learning & discovery of latent structure:
Can we automatically discover directed relationships between diseases and symptoms? Can we automatically detect new disease phenotypes? How can we best leverage deep generative models to learn complex latent representations of clinical data? Our work in this area focuses on developing efficient algorithms for learning latent variables and network structure from data.
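As a minimal illustration of latent-structure discovery, the sketch below fits a Gaussian mixture to synthetic patient feature vectors and reads the cluster assignments off as candidate phenotypes. Real phenotyping pipelines, and the deep generative models we study, are far richer; all numbers here are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two synthetic "phenotypes" in a 2-D feature space.
patients = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2)),
    rng.normal(loc=[3.0, 3.0], scale=0.5, size=(100, 2)),
])

gmm = GaussianMixture(n_components=2, random_state=0).fit(patients)
phenotype = gmm.predict(patients)   # latent assignment per patient
print(np.bincount(phenotype))       # roughly [100, 100]
```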

Causal inference and prediction

Causal inference & policy learning: Many practical questions in healthcare are causal: which treatments will work best, and for which patients? To this end, our lab has developed novel causal inference methods that work well with high-dimensional data and modern machine learning techniques (e.g., neural networks). In addition, we have developed methods for better “debugging” of causal analyses, including helping domain experts assess whether causal inference is feasible and whether techniques like reinforcement learning (RL) are working as intended.
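To ground the effect-estimation question, the sketch below implements a T-learner, one standard meta-learner for conditional average treatment effects: fit separate outcome models on treated and control patients, then difference their predictions. It assumes no unobserved confounding, uses off-the-shelf scikit-learn regressors, and stands in for, rather than reproduces, the estimators in our papers.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def t_learner_cate(X, treatment, y):
    """Conditional average treatment effect via a T-learner.

    X         : (N, D) covariates
    treatment : (N,)   binary treatment indicator
    y         : (N,)   observed outcomes
    """
    treated = GradientBoostingRegressor().fit(X[treatment == 1], y[treatment == 1])
    control = GradientBoostingRegressor().fit(X[treatment == 0], y[treatment == 0])
    # Estimated effect of treating each patient, given their covariates.
    return treated.predict(X) - control.predict(X)
```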

Causal inference & robust prediction: Fundamentally, causal inference is about making predictions under a new distribution (e.g., one in which all patients receive the same treatment). As a result, tools and ideas from causal inference can be readily adapted to the problem of making predictive models more robust to distribution shift.
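One simple baseline in this space, a standard covariate-shift correction rather than a method specific to our lab, estimates the density ratio between target and source populations with a domain classifier and uses it to reweight the training loss:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def covariate_shift_weights(X_source, X_target):
    """Importance weights p_target(x) / p_source(x) for training data.

    Fit a classifier to distinguish source from target samples; its odds,
    corrected for pool sizes, estimate the density ratio.
    """
    X = np.vstack([X_source, X_target])
    domain = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
    clf = LogisticRegression().fit(X, domain)
    p_target = clf.predict_proba(X_source)[:, 1]
    odds = p_target / (1 - p_target)
    return odds * (len(X_source) / len(X_target))  # correct for pool imbalance
```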

Human-AI interaction

The algorithms we deploy are often used in conjunction with clinical decision makers: they provide a second opinion to the clinician, display relevant information, and can sometimes step in to make decisions when resources are limited. It is therefore critical that we integrate the human into our design and deployment strategies. We have developed predictors that understand when they should defer to the clinician and when they can safely predict on their own. We have also developed strategies for onboarding human users, teaching them when to trust our AI algorithms and when not to. As systems begin to include decision support, we have started to quantify the effect of cognitive shortcuts on clinical populations and to consider how that should influence system design.
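The simplest version of deferral is a confidence threshold, as in the toy sketch below; the learned deferral policies we study generalize this by also modeling the human expert's accuracy on each case, so that the system defers exactly when the clinician is likely to do better.

```python
import numpy as np

def predict_or_defer(probs, threshold=0.8):
    """Confidence-thresholded deferral (toy illustration only).

    probs : (N, C) predicted class probabilities
    Returns a class index per case, or -1 meaning "defer to the clinician".
    """
    confidence = probs.max(axis=1)
    decision = probs.argmax(axis=1)
    decision[confidence < threshold] = -1
    return decision
```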