An increasing number of decisions are being guided by machine learning algorithms. In most cases, an individual's historical behavior is used as input to an estimator that determines future decisions. But when an estimator is used to allocate resources, it may cease to be a good estimator: individuals may strategically alter their behavior to achieve a desired outcome. This paper develops a new class of estimators that are stable under manipulation, even when the decision rule is fully transparent. We explicitly model the costs of manipulating different behaviors, and identify decision rules that are stable in equilibrium. Through a large field experiment in Kenya, we show that decision rules estimated with our strategy-robust method outperform those based on standard supervised learning approaches.
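The core intuition can be sketched in a toy model. In the hypothetical setup below (not the paper's actual estimator), a transparent linear score is published, agents shift each feature at a quadratic cost, and a naive rule fit on honest data degrades in equilibrium, while a rule that leans only on the costly-to-fake feature degrades far less. The names `best_response`, `cost`, and the specific functional forms are illustrative assumptions.

```python
import numpy as np

def best_response(x, beta, cost):
    """Agent's optimal manipulation under a published linear score beta @ x'
    with per-feature quadratic cost cost_j * (x'_j - x_j)^2 / 2:
    each feature shifts by beta_j / cost_j."""
    return x + beta / cost

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 2))            # honest (pre-manipulation) behavior
true_beta = np.array([1.0, 1.0])
y = X @ true_beta + rng.normal(scale=0.1, size=n)

# Naive rule: standard least squares fit on honest data.
beta_naive, *_ = np.linalg.lstsq(X, y, rcond=None)

# Feature 0 is cheap to fake; feature 1 is expensive.
cost = np.array([0.5, 50.0])

# Once the naive rule is transparent, everyone best-responds,
# and its equilibrium prediction error balloons.
X_manip = best_response(X, beta_naive, cost)
mse_naive = np.mean((y - X_manip @ beta_naive) ** 2)

# A crude "strategy-aware" alternative: weight only the costly feature,
# which agents cannot profitably distort much.
beta_robust = np.array([0.0, 1.0])
X_manip_r = best_response(X, beta_robust, cost)
mse_robust = np.mean((y - X_manip_r @ beta_robust) ** 2)
```

In this toy equilibrium `mse_robust` comes out well below `mse_naive`, because the cheap feature invites large distortions under the naive rule; the paper's estimators trade off predictive fit and manipulation incentives jointly rather than by this ad hoc zeroing-out.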