While understanding and trusting models and their results is a hallmark of good (data) science, model interpretability is also a legal mandate in the regulated verticals of many major industries. Moreover, scientists, physicians, researchers, analysts, and people in general need to understand and trust the models and modeling results that affect their work and their lives. Today many organizations and individuals are embracing machine learning, but what happens when people must explain these impactful, complex technologies to one another, or when these technologies inevitably make mistakes? This paper analyzes several debugging and explanatory approaches that go beyond the error measures and assessment plots typically used to interpret machine learning models. These approaches, individual conditional expectation (ICE) plots, local interpretable model-agnostic explanations (LIME), leave-one-covariate-out (LOCO), surrogate models, and Shapley explanations, vary in scope (i.e., global vs. local), in the exactness of the explanations they generate, and in their suitable application domains. Along with descriptions of the techniques, findings regarding explanation trustworthiness and practical guidance for usage are presented.
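To make the flavor of these techniques concrete, the simplest of them, an ICE curve, can be sketched as follows: for one observation, sweep a single feature over a grid while holding all other features fixed, and record the model's prediction at each grid point. The model, synthetic dataset, and feature choice below are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal ICE-curve sketch on an assumed synthetic regression problem.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

def ice_curve(model, row, feature, grid):
    """Predictions for one row as `feature` sweeps over `grid`,
    holding every other feature at its observed value."""
    rows = np.tile(row, (len(grid), 1))  # copy the row once per grid point
    rows[:, feature] = grid              # overwrite only the swept feature
    return model.predict(rows)

grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 20)
curve = ice_curve(model, X[0], feature=0, grid=grid)
# `curve` holds one prediction per grid point; plotting such curves for
# many rows reveals how the model's response varies across individuals.
```

Averaging many ICE curves over rows recovers the familiar partial dependence plot, which is why ICE is often described as its local, per-observation refinement.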