Optimality of Approximate Inference Algorithms on Stable Instances

Abstract

Approximate algorithms for structured prediction problems – such as LP relaxations and the popular alpha-expansion algorithm (Boykov et al. 2001) – typically far exceed their theoretical performance guarantees on real-world instances, often finding solutions that are very close to optimal. The goal of this paper is to partially explain this strong performance of alpha-expansion and an LP relaxation algorithm on MAP inference in ferromagnetic Potts models (FPMs). Our main results give stability conditions under which these two algorithms provably recover the optimal MAP solution. These theoretical guarantees complement numerous empirical observations of good performance.
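To make the objective concrete: MAP inference in a ferromagnetic Potts model means finding the labeling that minimizes an energy with arbitrary per-node costs and nonnegative penalties on edges whose endpoints disagree. The sketch below is a minimal illustration of that objective only, not of the paper's algorithms or stability conditions; the function names and the toy instance are ours, and the brute-force solver simply stands in for alpha-expansion or the LP relaxation on an instance small enough to enumerate.

```python
import itertools
import numpy as np

def potts_energy(labels, unary, edges, weights):
    """Energy of a labeling under a ferromagnetic Potts model:

        E(x) = sum_i unary[i, x_i] + sum_{(i,j)} w_ij * 1[x_i != x_j],

    with every w_ij >= 0 (the ferromagnetic condition: disagreeing
    neighbors are penalized, never rewarded)."""
    e = sum(unary[i, labels[i]] for i in range(len(labels)))
    e += sum(w for (i, j), w in zip(edges, weights) if labels[i] != labels[j])
    return e

def brute_force_map(unary, edges, weights):
    """Exact MAP labeling by enumerating all k^n assignments; only
    feasible for tiny instances, used here just to make concrete the
    optimum that approximate algorithms aim to recover."""
    n, k = unary.shape
    best = min(itertools.product(range(k), repeat=n),
               key=lambda x: potts_energy(x, unary, edges, weights))
    return list(best)

# Toy instance: 4 nodes on a path, 3 labels (all numbers illustrative).
unary = np.array([[0.0, 1.0, 2.0],
                  [1.5, 0.2, 1.0],
                  [1.0, 0.3, 1.2],
                  [2.0, 1.0, 0.1]])
edges = [(0, 1), (1, 2), (2, 3)]
weights = [0.5, 2.0, 0.5]  # nonnegative => ferromagnetic

x_map = brute_force_map(unary, edges, weights)
print("MAP labeling:", x_map,
      "energy:", potts_energy(x_map, unary, edges, weights))
```

The paper's stability conditions, roughly, concern instances whose optimal labeling is robust to perturbations of the edge weights; on such instances the approximate algorithms provably return this exact minimizer rather than merely a low-energy labeling.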

Publication
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics (AISTATS)
Hunter Lang
PhD Student

Hunter’s research focuses on understanding and improving the performance of machine learning algorithms in the wild, with particular applications in MAP inference for graphical models, stochastic optimization, and weak supervision.

David Sontag
Professor of EECS

David's research focuses on advancing machine learning and artificial intelligence, and using these to transform health care.
