The MAP inference problem in discrete graphical models has found widespread applications in machine learning and statistical physics over the past few decades. However, for many useful model classes, this combinatorial optimization problem is NP-hard. Approximation algorithms, which typically come with theoretical worst-case guarantees on their approximation ratios, are therefore commonplace. On real-world data, however, these algorithms far outperform their worst-case guarantees, often returning solutions that are extremely close to optimal. This thesis asks, and partially answers, the question: “What structure is present in real-world data that makes MAP inference easy?” We propose stability conditions under which popular approximation algorithms provably perform well, and we evaluate these conditions on real-world instances.