We describe a simple approach for combining an unbiased and a (possibly) biased estimator, and demonstrate its robustness to bias: estimate the error and cross-correlation of each estimator, and use these estimates to construct a weighted combination that minimizes mean-squared error (MSE). Theoretically, we show that for any amount of (unknown) bias, the MSE of the resulting estimator is bounded by a small multiple of the MSE of the unbiased estimator. In simulation, we show that when the bias is sufficiently small, this estimator still yields notable improvements in MSE, and that as the bias grows without bound, its MSE approaches that of the unbiased estimator. This approach applies to a range of problems in causal inference in which combinations of unbiased and biased estimators arise. When small-scale experimental data are available, estimates of causal effects are unbiased under minimal assumptions but may have high variance. Other data sources, such as observational data, may provide additional information about the causal effect but potentially introduce bias; estimators incorporating these data can be arbitrarily biased when the required assumptions are violated. As a result, naive combinations of estimators can perform arbitrarily poorly. We show how to apply the proposed approach in these settings, and benchmark its performance in simulation against recent proposals for combining observational and experimental estimators, demonstrating that it improves over the experimental estimator across a wider range of biases than alternative approaches.
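The weighting step described above can be sketched in code. This is a minimal illustration, not the paper's implementation: the function name and the plug-in inputs (estimated MSEs and an estimated cross-term between the two estimators' errors) are hypothetical, and in practice these quantities would themselves be estimated from data (e.g., via the bootstrap).

```python
def combine_estimates(est_u, est_b, mse_u, mse_b, cross):
    """MSE-minimizing convex combination of an unbiased estimate (est_u)
    and a possibly biased estimate (est_b).

    mse_u, mse_b : estimated mean-squared errors of the two estimators
    cross        : estimated expected product of the two estimators' errors
    Returns the combined estimate and the weight placed on est_u.
    """
    denom = mse_u + mse_b - 2.0 * cross
    if denom <= 0.0:
        # Degenerate case: fall back to the unbiased estimate.
        return est_u, 1.0
    # Minimizing w^2*mse_u + (1-w)^2*mse_b + 2*w*(1-w)*cross over w
    # gives the closed-form weight below.
    w = (mse_b - cross) / denom
    w = min(max(w, 0.0), 1.0)  # clip to [0, 1] for robustness
    return w * est_u + (1.0 - w) * est_b, w
```

Note the behavior the abstract describes: as the biased estimator's estimated MSE grows without bound, the weight on the unbiased estimator approaches 1, so the combination reverts to the unbiased estimate.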