"Machine learning"

Graph cuts always find a global optimum for Potts models (with a catch)

We prove that the alpha-expansion algorithm for MAP inference always returns a globally optimal assignment for Markov Random Fields with Potts pairwise potentials, with a catch: the returned assignment is only guaranteed to be optimal in a small …
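Several of the entries below concern MAP inference in Potts models. For context, here is a minimal sketch (all names are illustrative, not from the paper) of the Potts energy that algorithms like alpha-expansion and the LP relaxation aim to minimize: unary costs per node plus a penalty for each edge whose endpoints disagree.

```python
# Illustrative sketch: the Potts-model MRF energy,
# E(x) = sum_i unary[i][x_i] + sum_{(i,j) in edges} lam[(i,j)] * [x_i != x_j].

def potts_energy(labels, unary, edges, lam):
    """labels: list mapping node -> label; unary[i][l]: cost of label l at node i;
    edges: list of (i, j) pairs; lam[(i, j)]: penalty when i and j disagree."""
    node_cost = sum(unary[i][labels[i]] for i in range(len(labels)))
    pair_cost = sum(lam[(i, j)] for (i, j) in edges if labels[i] != labels[j])
    return node_cost + pair_cost

# Tiny 3-node chain with 2 labels:
unary = [[0.0, 1.0], [1.0, 0.0], [0.0, 1.0]]
edges = [(0, 1), (1, 2)]
lam = {(0, 1): 0.5, (1, 2): 0.5}
print(potts_energy([0, 0, 0], unary, edges, lam))  # 1.0: one unary miss, no disagreements
```

MAP inference seeks the labeling minimizing this energy; alpha-expansion approaches it through a sequence of binary min-cut subproblems.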

Beyond perturbation stability: LP recovery guarantees for MAP inference on noisy stable instances

Several works have shown that perturbation stable instances of the MAP inference problem in Potts models can be solved exactly using a natural linear programming (LP) relaxation. However, most of these works give few (or no) guarantees for the LP …

Deep Contextual Clinical Prediction with Reverse Distillation

Healthcare providers are increasingly using machine learning to predict patient outcomes to make meaningful interventions. However, despite innovations in this area, deep learning models often struggle to match the performance of shallow linear models in …

Block Stability for MAP Inference

To understand the empirical success of approximate MAP inference, recent work (Lang et al., 2018) has shown that some popular approximation algorithms perform very well when the input instance is stable. The simplest stability condition assumes that …

Improving documentation of presenting problems in the emergency department using a domain-specific ontology and machine learning-driven user interfaces

Objectives: To determine the effect of a domain-specific ontology and machine learning-driven user interfaces on the efficiency and quality of documentation of presenting problems (chief complaints) in the emergency department (ED). Methods: As part …

Evaluating Reinforcement Learning Algorithms in Observational Health Settings

Much attention has been devoted recently to the development of machine learning algorithms with the goal of improving treatment policies in healthcare. Reinforcement learning (RL) is a sub-field within machine learning that is concerned with learning …

Learning Topic Models - Provably and Efficiently

Learning Weighted Representations for Generalization Across Designs

Predictive models that generalize well under distributional shift are often desirable and sometimes crucial to building robust and reliable machine learning applications. We focus on distributional shift that arises in causal inference from …

Max-margin learning with the Bayes Factor

We propose a new way to answer probabilistic queries that span multiple datapoints. We formalize reasoning about the similarity of different datapoints as the evaluation of the Bayes Factor within a hierarchical deep generative model that enforces a …

Optimality of Approximate Inference Algorithms on Stable Instances

Approximate algorithms for structured prediction problems -- such as LP relaxations and the popular alpha-expansion algorithm (Boykov et al. 2001) -- typically far exceed their theoretical performance guarantees on real-world instances. These …