Who Should Predict? Exact Algorithms For Learning to Defer to Humans

Abstract

Algorithmic predictors should be able to defer their predictions to a human decision maker to ensure accuracy. In this work, we jointly train a classifier with a rejector, which decides on each data point whether the classifier or the human should predict. We show that prior approaches can fail to find a human-AI system with low misclassification error even when there exists a linear classifier and rejector pair with zero error (the realizable setting). We prove that obtaining a linear pair with low error is NP-hard even when the problem is realizable. To complement this negative result, we give a mixed-integer linear programming (MILP) formulation that optimally solves the problem in the linear setting. However, the MILP only scales to moderately sized problems. We therefore provide a novel surrogate loss function that is realizable-consistent and performs well empirically. We evaluate our approaches on a comprehensive set of datasets and compare against a wide range of baselines.
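To make the setup concrete, below is a minimal sketch of the classifier-rejector prediction rule the abstract describes, assuming a linear parameterization; the names `human_ai_predict`, `w_clf`, and `w_rej` are illustrative, not the paper's implementation. On each data point, the rejector decides whether the classifier's prediction or the human's is used.

```python
import numpy as np

def human_ai_predict(x, w_clf, w_rej, human_pred):
    """Deferral rule: the rejector decides, per example, who predicts.

    x          : (dim,) feature vector
    w_clf      : (num_classes, dim) linear classifier weights (assumed shape)
    w_rej      : (dim,) linear rejector weights (assumed shape)
    human_pred : the human decision maker's prediction for x
    """
    if np.dot(w_rej, x) > 0:          # rejector fires: defer to the human
        return human_pred
    return int(np.argmax(w_clf @ x))  # otherwise the classifier predicts
```

The following is a hedged sketch of one way a joint classifier-rejector fit could be encoded as a MILP with big-M constraints, written with PuLP. It illustrates the general idea only and is not the paper's exact formulation; the toy data, variable names, and constants are all assumptions.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum
import numpy as np

# Toy binary-classification data (illustrative only): features X,
# labels y in {-1, +1}, and the human's predictions m.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = np.where(X[:, 0] > 0, 1, -1)
m = np.where(rng.random(20) < 0.8, y, -y)    # human correct ~80% of the time
h_err = (m != y).astype(float)               # human's per-example error

n, dim = X.shape
M, eps = 100.0, 1e-3                         # big-M constant and margin

prob = LpProblem("linear_defer_milp", LpMinimize)
w_c = [LpVariable(f"wc{j}", -10, 10) for j in range(dim)]   # classifier weights
w_r = [LpVariable(f"wr{j}", -10, 10) for j in range(dim)]   # rejector weights
d = [LpVariable(f"d{i}", cat="Binary") for i in range(n)]   # defer to human?
c = [LpVariable(f"c{i}", cat="Binary") for i in range(n)]   # classifier correct?
z = [LpVariable(f"z{i}", lowBound=0) for i in range(n)]     # counted clf error

for i in range(n):
    s_r = lpSum(float(X[i, j]) * w_r[j] for j in range(dim))
    s_c = lpSum(float(y[i] * X[i, j]) * w_c[j] for j in range(dim))
    prob += s_r >= eps - M * (1 - d[i])      # d[i] = 1 => rejector score > 0
    prob += s_r <= -eps + M * d[i]           # d[i] = 0 => rejector score < 0
    prob += s_c >= eps - M * (1 - c[i])      # c[i] = 1 => classifier correct
    prob += z[i] >= 1 - d[i] - c[i]          # kept and wrong => one error

# System error: classifier errors on kept points + human errors on deferrals.
prob += lpSum(z[i] + float(h_err[i]) * d[i] for i in range(n))
prob.solve()
```

The binary defer/correct indicators couple the two linear predictors, which is what makes the joint search combinatorial, consistent with the NP-hardness result above; an exact solver can still handle moderately sized instances, matching the abstract's scaling caveat.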

Publication
Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS)
Hussein Mozannar
PhD Student

Hussein’s interests focus on human-centric aspects of machine learning, namely how to integrate expert decision makers into machine learning pipelines while ensuring fairness and an understanding of long-term consequences.

Hunter Lang
PhD Student

Hunter’s research focuses on understanding and improving the performance of machine learning algorithms in the wild, with particular applications in MAP inference for graphical models, stochastic optimization, and weak supervision.

David Sontag
Professor of EECS

David's research focuses on advancing machine learning and artificial intelligence, and on using these to transform health care.