In the past two decades, there has been considerable interest in so-called doubly-robust (DR) estimators in causal inference. To construct such estimators, estimation of two nuisance parameters – e.g., the outcome regression and propensity score – is generally required as an intermediate step. DR estimators derive their name from the fact that they are consistent if either of these two nuisance parameters is consistently estimated. In this talk, we will discuss the recent development of DR estimators that not only enjoy doubly-robust consistency but also allow the construction of confidence intervals and tests that remain valid even when one of the nuisance parameters is inconsistently estimated. This innovation is particularly important when flexible estimation strategies (e.g., machine learning) are used, since valid robust inference can then be especially difficult to achieve. We will also discuss a general strategy for constructing such estimators in a variety of settings. These new techniques provide an additional tool to support investigators in their efforts to derive robust scientific conclusions.
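To make the doubly-robust property concrete, here is a minimal simulation sketch of the AIPW (augmented inverse-probability-weighted) estimator of the treated-mean E[Y(1)], the canonical DR estimator built from an outcome regression and a propensity score. The data-generating model, sample size, and the deliberately misspecified nuisance models below are illustrative assumptions, not taken from the talk.

```python
import numpy as np

# Illustrative simulation (all parameter choices are assumptions).
rng = np.random.default_rng(0)
n = 200_000

X = rng.normal(size=n)                 # confounder
pi = 1.0 / (1.0 + np.exp(-X))          # true propensity score P(A=1 | X)
A = rng.binomial(1, pi)                # treatment indicator
Y = 2.0 + 3.0 * X + 1.0 * A + rng.normal(size=n)
# Truth: E[Y(1)] = 2 + 3*E[X] + 1 = 3.0

# Deliberately misspecified outcome regression: ignores the confounder X,
# so its naive plug-in average is biased by confounding.
mu1_bad = np.full(n, Y[A == 1].mean())
# Correct outcome regression: the true conditional mean under A = 1.
mu1_good = 2.0 + 3.0 * X + 1.0
# Correct vs. deliberately wrong propensity models.
pi_good = pi
pi_bad = np.full(n, 0.5)

def aipw(mu1, pi_hat):
    # Outcome-regression prediction, augmented with an
    # inverse-probability-weighted residual correction term.
    return np.mean(mu1 + A * (Y - mu1) / pi_hat)

plugin = mu1_bad.mean()              # biased: no correction at all
est_pi_ok = aipw(mu1_bad, pi_good)   # near 3.0: propensity model correct
est_mu_ok = aipw(mu1_good, pi_bad)   # near 3.0: outcome model correct
```

The AIPW estimate recovers the truth when either nuisance model is correct, while the plug-in estimate based on the misspecified outcome regression alone stays biased; this is the doubly-robust consistency the abstract refers to. The inferential guarantees discussed in the talk go further, keeping confidence intervals valid even when one nuisance estimator is inconsistent.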