Practical Computing Demonstrations
Saturday, February 25
PCD1 Power and Sample Size Analysis Using Stata
Sat, Feb 25, 2:00 PM - 4:00 PM
City Terrace 6
Instructor(s): Chuck Huber, StataCorp
Power and sample size analysis is a fundamental step in planning any research project. This demonstration will show how to use Stata's power command to calculate power, sample size, and minimum detectable effect size. We will show how to create customized tables and graphs for many study designs with both continuous and categorical outcomes. We will also demonstrate how to add your own methods to the power command and how to calculate power for multilevel/longitudinal studies using simulation.
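To give a rough sense of what such a calculation involves, the normal-approximation formula behind a two-sample comparison of means fits in a few lines. This is a plain-Python sketch for illustration, not Stata code; Stata's power twomeans uses an exact t-based refinement, so its answer can differ by a participant or so.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Sample size per group to detect a standardized effect (Cohen's d),
    using the normal approximation for a two-sided two-sample test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = NormalDist().inv_cdf(power)           # quantile for target power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

print(n_per_group(0.5))  # medium effect -> 63 per group (normal approximation)
```

The simulation approach mentioned for multilevel/longitudinal designs follows the same logic: generate many datasets under the assumed model and report the fraction of analyses that reject the null.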

PCD2 Xymp: A Web Application Supporting Best Practices in Bioassay
Sat, Feb 25, 2:00 PM - 4:00 PM
City Terrace 8
Instructor(s): David Lansky, Precision Bioassay, Inc.
The software system consists of three components, each on a different virtual server: a web application (written in PHP), a database, and a collection of R programs, packages, and reports (Sweave and knitr). The system helps users perform randomized instances of routine bioassays, runs mixed-model analyses (using linear or non-linear models), and produces reports (including summaries). The statistical portion works well with designs from simple to complex (CRD to strip-unit). The system includes many features that address regulatory requirements (users with different levels of authorization, automatic tracking and reporting of re-analyses of data, etc.). It is designed to be easy for routine use in the lab while providing a rich collection of modern statistical capabilities, and to facilitate good collaboration between bioassay scientists and statisticians. Each assay has a protocol, each analysis has a protocol, and the protocols capture all the statistical details; lab users select protocols by name, and statisticians build the protocols.
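The non-linear models mentioned above typically center on the four-parameter logistic (4PL) dose-response curve, where a test sample's relative potency shifts the reference curve along the log-dose axis. A minimal sketch of that relationship (parameter values invented for illustration, not taken from Xymp):

```python
def four_pl(dose, lower, upper, ec50, hill):
    """Response at `dose` under a four-parameter logistic model:
    lower/upper asymptotes, half-maximal dose ec50, slope hill."""
    return lower + (upper - lower) / (1.0 + (ec50 / dose) ** hill)

# A test sample with relative potency rho behaves like the reference
# assayed at dose rho * d: the same curve, with EC50 divided by rho.
rho = 2.0
ref = dict(lower=0.1, upper=1.9, ec50=10.0, hill=1.2)
test = dict(ref, ec50=ref["ec50"] / rho)
```

In a full analysis the four parameters (and rho) are estimated from the randomized assay data, with assay-to-assay variation handled by the mixed model.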

PCD3 Marketing Mix Modeling and Optimization Using Bayesian Networks and BayesiaLab
Sat, Feb 25, 2:00 PM - 4:00 PM
City Terrace 12
Instructor(s): Stefan Conrady, Bayesia USA
“Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” Over the last century, various versions of this quote have been attributed to John Wanamaker, Henry Ford, and Henry Procter, among others. Yet, 100 years after these marketing pioneers, in this day and age of big data and advanced analytics, the quote still rings true among marketing executives. The ideal composition of advertising and marketing efforts remains the industry's Holy Grail. The current practice remains “more art than science.” The lack of a well-established marketing mix methodology has little to do with the domain itself. Rather, it reflects the fact that marketing is yet another domain that typically has to rely on non-experimental data for decision support.

The single most important thing we need to recognize about marketing mix modeling is that it is a causal question. This means we are not looking for a prediction of an outcome variable based on observed marketing variables. Rather, we are looking to manipulate marketing variables to optimize an outcome variable. Thus, we are performing an intervention, which requires causal inference. This leads us to the Holy Grail of statistics: causal inference from observational data.

In this workshop, we introduce the basic concepts of graphical models and how they can help us perform causal identification, e.g. using causal assumptions and the well-known Adjustment Criterion. While this is straightforward in theory, the complexity of the marketing domain prevents the practical application of this criterion. Thus, we introduce a new criterion (Shpitser and VanderWeele, 2011) that reduces the number of assumptions that we require for confounder selection and causal identification.
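To make the distinction between conditioning and intervening concrete, here is a toy back-door adjustment. All counts are invented for illustration, and this is the textbook adjustment formula itself, not BayesiaLab's machinery: a confounding segment Z makes the naive conditional lift of ad exposure X on purchase Y look much larger than the causal lift.

```python
# counts[(z, x)] = (people, buyers); Z = customer segment (confounder),
# X = ad exposure, Y = purchase. Numbers are invented for illustration.
counts = {
    (1, 1): (300, 240), (1, 0): (100, 70),   # high-intent segment
    (0, 1): (100, 30),  (0, 0): (500, 100),  # low-intent segment
}
total = sum(n for n, _ in counts.values())

def p_y_given_do_x(x):
    """Adjustment formula: sum over z of P(z) * P(Y=1 | X=x, Z=z)."""
    est = 0.0
    for z in (0, 1):
        p_z = sum(counts[(z, xx)][0] for xx in (0, 1)) / total
        n, buys = counts[(z, x)]
        est += p_z * buys / n
    return est

def p_y_given_x(x):
    """Naive conditional P(Y=1 | X=x), ignoring the confounder."""
    n = sum(counts[(z, x)][0] for z in (0, 1))
    buys = sum(counts[(z, x)][1] for z in (0, 1))
    return buys / n

naive_lift = p_y_given_x(1) - p_y_given_x(0)          # ~0.39, exaggerated
causal_lift = p_y_given_do_x(1) - p_y_given_do_x(0)   # ~0.10, adjusted
```

Here high-intent customers both see more ads and buy more anyway, so conditioning alone credits the ads with the segment's baseline behavior; adjusting over Z removes that bias.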

Implementation with BayesiaLab

With the confounders identified, we can now build a high-dimensional statistical model that represents the joint probability distribution of all marketing variables. We do that using the machine-learning algorithms of the BayesiaLab software platform. We obtain a Bayesian network that represents a multitude of relationships between all marketing variables and the outcome variable. Using BayesiaLab’s visualization functions, we can compare the machine-learned graph to our understanding of the domain. Furthermore, we can examine the (mostly nonlinear) response curves of the outcome variable as a function of the marketing variables. Most importantly, we use BayesiaLab to perform Likelihood Matching on all confounders to establish the causal response of the outcome variable.

With all causal response curves computed, we introduce cost functions for the marketing variables via BayesiaLab’s Function Node. On that basis, we proceed to BayesiaLab’s Target Optimization function, which, by means of a genetic algorithm, searches for an optimal combination of all marketing variables, while being subject to constraints of individual variables and an overall marketing budget constraint. The optimization report shows feasible solutions along with the degree of achievement.
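A much-simplified stand-in for this kind of budget-constrained search is greedy marginal allocation, which is optimal when each channel's response curve is concave (diminishing returns). The saturating curves and parameters below are invented for illustration; BayesiaLab's actual genetic-algorithm search handles richer constraints and arbitrary machine-learned response shapes.

```python
import math

# (max incremental sales, saturation rate) per channel -- invented values.
channels = {
    "search": (120.0, 0.04),
    "social": (80.0, 0.07),
    "tv":     (200.0, 0.015),
}

def response(name, spend):
    """Concave saturating response: cap * (1 - exp(-rate * spend))."""
    cap, rate = channels[name]
    return cap * (1.0 - math.exp(-rate * spend))

def allocate(budget, step=1.0):
    """Spend `step` at a time on the channel with the best marginal gain."""
    spend = {name: 0.0 for name in channels}
    for _ in range(int(budget / step)):
        best = max(channels,
                   key=lambda n: response(n, spend[n] + step) - response(n, spend[n]))
        spend[best] += step
    return spend

plan = allocate(100.0)
total_sales = sum(response(n, s) for n, s in plan.items())
```

Because marginal returns diminish, the greedy plan splits the budget across channels and beats pouring the entire budget into any single one.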

PCD4 Dig Deeper and Uncover the Unexpected with JMP 13 and JMP Pro 13
Sat, Feb 25, 2:00 PM - 4:00 PM
City Terrace 10
Instructor(s): Mia Stephens, SAS Institute, Inc.; Scott Lee Wise, SAS Institute, Inc.
This session will cover how we can meet the challenge of exploring, modeling, and experimenting on complex data analytic needs by:
• Increasing the ease and efficiency of preparing and accessing data
• Handling and exploring all types of data, including text
• Providing next-generation analytical tools in quality, DOE, and reliability
• Unleashing advanced analytics in predictive modeling
• Improving the ways to share and report analytics and graphs
We will feature new, ground-breaking methodology in relevant demos to maximize participant learning.