Abstract:
|
While the use of real-world/observational/Big Data for comparative effectiveness analyses has grown in recent years, causal inference from such studies typically relies on the critical but unprovable assumption of "no unmeasured confounders." However, quantitative assessments of the potential impact of unmeasured confounding remain rare in the literature. Over the past decades, multiple approaches for sensitivity analysis of unmeasured confounding have emerged, with the choice of method and its performance depending on how much information is available about the unmeasured confounders. However, the many options these methods provide, combined with the variety of research scenarios, make it challenging to identify the optimal course of action. Thus, we would like to initiate a discussion regarding: 1) current practices for evaluating the impact of unmeasured confounding; 2) new approaches that can produce an adjusted effect estimate with reduced bias by using information on unmeasured confounders obtained externally to the study; 3) a practical guidance flowchart that researchers can apply in their own real-world projects.
|