Keywords: p-value, effect size, hypothesis testing, false positive rate
P-values are ubiquitous in the medicine and social science literature, yet they are often poorly understood. Through my experience as a research statistician in medicine, I have encountered two extreme and opposing approaches to p-values. Some researchers and audiences rely solely on p-values when interpreting results; others, perhaps aware of the limitations of p-values, favor effect sizes instead, even though the two are distinct concepts. To address these partial approaches, this presentation has three aims. First, I define what a p-value is and why it carries such statistical importance. I underscore that a p-value is a conditional probability and explain what it truly means when we say “the probability of the observed or more extreme data, assuming the null hypothesis is true.” Second, I describe what an effect size is, why it matters, and how it differs from a p-value. I recommend, when possible, presenting p-values and effect sizes in tandem, as they serve unique but complementary purposes. Lastly, I highlight the importance of recognizing the high false positive rates that arise in multiple hypothesis testing. With a thorough understanding of what a p-value is and of the common misinterpretations that accompany it, we will be able to evaluate and appreciate p-values properly in context.
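The multiple-testing point can be illustrated with a small simulation (a minimal sketch, not part of the presentation itself): if we run 20 independent tests at α = 0.05 and every null hypothesis is true, the chance of at least one false positive is 1 − (1 − 0.05)^20 ≈ 0.64. The z-test helper below is a simplified stand-in for a proper t-test, using only the Python standard library.

```python
import random
import math

random.seed(0)

def two_sample_p(n=30):
    """Draw two samples from the SAME normal distribution (so the null
    hypothesis is true) and return an approximate two-sided p-value
    from a z-test on the difference in means. A rough sketch; a real
    analysis would use a t-test."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_a, mean_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)
    se = math.sqrt(var_a / n + var_b / n)
    z = (mean_a - mean_b) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Run 20 independent tests in which the null hypothesis is true every time.
pvals = [two_sample_p() for _ in range(20)]
false_positives = sum(p < 0.05 for p in pvals)

# Theoretical family-wise error rate: the probability of at least one
# "significant" result by chance alone across the 20 tests.
fwer = 1 - (1 - 0.05) ** 20
print(f"false positives among 20 null tests: {false_positives}")
print(f"chance of at least one false positive: {fwer:.2f}")
```

Each individual test controls its own false positive rate at 5%, but the family of 20 tests does not, which is why corrections such as Bonferroni or false discovery rate procedures are needed when many hypotheses are tested at once.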