Random forests (Breiman 2001) are a powerful and ubiquitous predictive method, increasingly regarded as an adaptive approach to non-parametric statistical estimation. A key limitation of Breiman's forests, however, is that they cannot take advantage of smoothness properties of the underlying signal. This leads to unstable estimates and noticeably suboptimal prediction in the presence of strong linear effects. Local linear regression can excel in this setting, but traditionally employs non-adaptive kernels and is subject to an acute curse of dimensionality. Drawing on the strengths of these techniques, we introduce locally linear forests, which use random forests to motivate a data-adaptive kernel that can then be plugged into a locally weighted regression. In our experiments, we find that locally linear forests improve upon traditional regression forests in the presence of strong, smooth effects; we also show how assuming smoothness of the underlying signal yields powerful asymptotic convergence results. Finally, we highlight our procedure's usefulness in the burgeoning field of heterogeneous causal inference, where it presents a compelling new theoretical and practical development.
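To make the core idea concrete, the following is a minimal Python sketch, not the paper's reference implementation. It approximates the forest-induced kernel by leaf co-membership frequencies in a standard scikit-learn random forest, then plugs those weights into a weighted ridge regression centered at the target point; the weighting scheme, ridge penalty, and all parameter values here are illustrative assumptions.

```python
# Illustrative sketch of a locally-linear-forest-style prediction.
# The forest weights alpha_i(x0) below are a simple leaf co-membership
# approximation to the forest kernel; this is NOT the paper's exact
# construction (the authors' method is implemented in the grf package).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def forest_weights(forest, X_train, x0):
    """Per-tree leaf co-membership with x0, normalized by leaf size,
    averaged over trees. Yields a data-adaptive kernel summing to 1."""
    leaves_train = forest.apply(X_train)            # (n, n_trees) leaf ids
    leaves_x0 = forest.apply(x0.reshape(1, -1))     # (1, n_trees)
    same_leaf = leaves_train == leaves_x0           # (n, n_trees) bool
    per_tree = same_leaf / np.maximum(same_leaf.sum(axis=0), 1)
    return per_tree.mean(axis=1)                    # alpha_i(x0)

def local_linear_forest_predict(forest, X_train, y_train, x0, ridge=1e-3):
    """Weighted ridge regression of y on (X - x0); the fitted
    intercept is the prediction mu(x0)."""
    alpha = forest_weights(forest, X_train, x0)
    Xc = X_train - x0                               # center at target point
    A = np.hstack([np.ones((len(Xc), 1)), Xc])      # intercept + slope terms
    W = np.diag(alpha)
    penalty = ridge * np.eye(A.shape[1])
    penalty[0, 0] = 0.0                             # leave intercept unpenalized
    coef = np.linalg.solve(A.T @ W @ A + penalty, A.T @ W @ y_train)
    return coef[0]

# Toy data with a strong linear effect, the regime the abstract highlights.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 5))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=500)
forest = RandomForestRegressor(n_estimators=100, min_samples_leaf=10,
                               random_state=0).fit(X, y)
x0 = np.zeros(5)
pred = local_linear_forest_predict(forest, X, y, x0)
```

The local regression lets the slope terms absorb the linear signal inside the forest's neighborhood, which is precisely what a plain regression forest (a locally constant fit under the same weights) cannot do.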