Abstract:
|
Modern online services generate a wealth of user impression logs. We often think of such logs in the context of randomized controlled trials (RCTs) or A/B testing. While RCTs are the gold standard for causal inference, we are increasingly presented with natural experiments realized by users of the service without the explicit intervention of the company itself. These natural experiments can provide us with important insights into user behavior, guide us in the creation or modification of features, or estimate the incremental effect of a treatment not explicitly imposed on the users. Natural experiments are often saddled with selection bias, and unraveling that bias is a key step in estimating incremental effects on treated users. I will provide an overview of methods used at Microsoft to pick apart the selection bias in our data and demonstrate ways in which these methods allow us to draw conclusions from natural experiments and/or observational studies. I will explore causal impact estimation in new product launches, survey response bias, and other real-world scenarios, and highlight some ways in which bias can easily lead to erroneous conclusions about user behavior.
|
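The abstract does not name a specific estimator, but a common correction for the self-selection it describes is inverse-propensity weighting (IPW). The sketch below is a hypothetical illustration on simulated logs where a user covariate drives both self-selection into "treatment" and the outcome, so the naive difference in means overstates the true effect; reweighting by the (here, known-by-construction) propensity recovers it.

```python
# Hypothetical sketch: inverse-propensity weighting (IPW) to correct
# selection bias in a simulated natural experiment. Not the speaker's
# specific method; one standard estimator for this setting.
import random

random.seed(0)

# Simulate impression logs: covariate x drives both self-selection into
# treatment and the outcome (classic selection bias). True effect = 2.0.
n = 50_000
data = []
for _ in range(n):
    x = random.random()                        # user covariate (e.g. engagement)
    treated = random.random() < 0.2 + 0.6 * x  # heavier users self-select in
    y = 1.0 * x + (2.0 if treated else 0.0) + random.gauss(0.0, 1.0)
    data.append((x, treated, y))

# Naive difference in means is biased: x differs across the two groups.
t_mean = sum(y for _, t, y in data if t) / sum(1 for _, t, _ in data if t)
c_mean = sum(y for _, t, y in data if not t) / sum(1 for _, t, _ in data if not t)
naive = t_mean - c_mean

def propensity(x):
    # True selection probability by construction; in practice this
    # would be estimated, e.g. with logistic regression on covariates.
    return 0.2 + 0.6 * x

# IPW: weight each user by the inverse probability of the arm they chose,
# then compare weighted (Hajek) means of the two arms.
num_t = sum(y / propensity(x) for x, t, y in data if t)
den_t = sum(1.0 / propensity(x) for x, t, _ in data if t)
num_c = sum(y / (1.0 - propensity(x)) for x, t, y in data if not t)
den_c = sum(1.0 / (1.0 - propensity(x)) for x, t, _ in data if not t)
ipw = num_t / den_t - num_c / den_c

print(f"naive estimate: {naive:.2f}")  # inflated above the true effect of 2.0
print(f"IPW estimate:   {ipw:.2f}")    # close to 2.0
```

With these simulation parameters the naive estimate runs roughly 0.2 above the true effect, because treated users have systematically higher x; the reweighted estimate removes that gap. Real observational work would also require checking overlap (no propensities near 0 or 1) before trusting the weights.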
ASA Meetings Department
Copyright © American Statistical Association.