Online Program

Practical Computing Expos
Saturday, February 23
PCE1 JMP Pro for the Practicing Statistician Sat, Feb 23, 1:45 PM - 3:30 PM
Napoleon A1&2
Instructor(s): Sam Gardner, JMP
Come see the depth and breadth of the data visualization and statistical analysis tools available in JMP Pro. JMP Pro's data exploration tools let you perform visual exploratory data analysis, leading you to important insights that guide the analysis process. JMP Pro also offers advanced predictive analytics methods such as recursive partitioning, neural networks, PLS, gradient boosting, and random forests. These advanced predictive modeling tools can be applied broadly in areas such as scientific discovery, engineering, marketing, and financial risk.

You can also find more information about JMP Pro at our website.
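Recursive partitioning, the first method in that list, grows a tree by repeatedly splitting the data at the predictor threshold that most reduces error. A toy single-split sketch of that core step (an illustration only, not JMP Pro's implementation):

```python
def best_split(x, y):
    """One step of recursive partitioning: pick the threshold on a
    single predictor that minimizes squared error after the split."""
    best_sse, best_t = float("inf"), None
    for t in sorted(set(x))[:-1]:  # candidate thresholds between points
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        mean_l, mean_r = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((v - mean_l) ** 2 for v in left)
               + sum((v - mean_r) ** 2 for v in right))
        if sse < best_sse:
            best_sse, best_t = sse, t
    return best_t
```

A full tree applies this step recursively to each resulting subset; ensemble methods such as gradient boosting and random forests combine many such trees.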

 
PCE2 Mixed Model Power Analysis by Example: Using Free Web-Based Power Software Sat, Feb 23, 1:45 PM - 3:30 PM
Maurepas
Instructor(s): Dr. Deborah H. Glueck, Associate Professor; Dr. Aarti Munjal, Postdoctoral Fellow
GLIMMPSE is an open-source tool for calculating power and sample size for tests of means in the general linear mixed model and the general linear multivariate model. Mixed models have become the standard approach for handling correlated observations and accommodating missing data. The statistical methods used in GLIMMPSE are based on a transformation process that reduces the mixed model and hypothesis to an equivalent general linear multivariate model and hypothesis. Applying power techniques to the equivalent multivariate model yields exact power, or an accurate approximation, for the original mixed model. The transformation method applies to designs with repeated measures, clustering, or a combination of the two.

GLIMMPSE is able to calculate power and sample size for tests of means in longitudinal and multilevel designs, whether cast as a mixed or multivariate model. We demonstrate the use of GLIMMPSE for two designs: 1) a longitudinal study of a sensory focus intervention on memories of dental pain, and 2) a multilevel and longitudinal trial of a home-based program designed to reduce alcohol use among urban adolescents. A wizard-style graphical user interface guides users through describing the study design, selecting the hypothesis tests, and choosing a covariance pattern. Many complicated covariance structures can be constructed by layering simpler patterns. For each example, we provide a step-by-step tutorial illustrating how to use the GLIMMPSE software and how to interpret the results.
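GLIMMPSE computes these powers analytically, but the underlying principle of the multivariate-model approach — the test statistic follows a central F distribution under the null and a noncentral F under the alternative — can be illustrated with a small Monte Carlo sketch (hypothetical helper names; this is not GLIMMPSE code):

```python
import math
import random

def simulate_f(df1, df2, noncentrality, rng):
    """One draw of a (noncentral) F statistic, built as a ratio of
    scaled chi-square draws; noncentrality enters as a mean shift."""
    shift = math.sqrt(noncentrality)
    num = sum(rng.gauss(shift if i == 0 else 0.0, 1.0) ** 2
              for i in range(df1)) / df1
    den = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df2)) / df2
    return num / den

def mc_power(df1, df2, noncentrality, alpha=0.05, n_sim=10000, seed=42):
    """Monte Carlo power of an F test: estimate the critical value from
    the null distribution, then the exceedance rate under the alternative."""
    rng = random.Random(seed)
    null = sorted(simulate_f(df1, df2, 0.0, rng) for _ in range(n_sim))
    crit = null[int((1 - alpha) * n_sim)]
    alt = (simulate_f(df1, df2, noncentrality, rng) for _ in range(n_sim))
    return sum(f_stat > crit for f_stat in alt) / n_sim
```

With zero noncentrality the estimated power recovers the type I error rate alpha; as the noncentrality parameter (which grows with effect size and sample size) increases, power approaches one.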

 
PCE3 Taming Big Data: Predictive Analytics Powered by SAP’s HANA Sat, Feb 23, 1:45 PM - 3:30 PM
Napoleon D1
Instructor(s): Todd Borchert, Senior Director, Cognilytics
Big Data problems continue to emerge in nearly every domain where statisticians work, including health care, financial services, and retail, to name a few. The computing approaches of the past often do not scale to Big Data. Technology now enables analytics practitioners to ask questions they didn't dare ask before.

Learn how to take advantage of in-memory technology to reduce the time and cost of performing predictive analysis against vast data volumes. By harnessing the power of SAP HANA combined with SAP Predictive Analysis, you can design predictive models and visualize, discover, and share insights in real time.

We will demonstrate the power of SAP Predictive Analysis running on HANA with a customer segmentation example. Customer segmentation is critically important across many industries, where one needs to identify groups of similar customers for product recommendation, customer service, or risk management. Running an automated segmentation or clustering algorithm is a computationally demanding task that can often take hours. We will show how segmentation of a portfolio of 10.5 million loans can be performed with a K-Means algorithm in a couple of minutes, rather than the hours or days required on traditional disk-based predictive systems.
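The workload described above is standard Lloyd's K-Means clustering; a minimal NumPy sketch of the algorithm (with a deterministic farthest-first seeding for reproducibility — an illustration of the technique, not SAP's implementation):

```python
import numpy as np

def kmeans(X, k, n_iter=100):
    """Lloyd's K-Means with deterministic farthest-first seeding."""
    # Seeding: start at X[0], then repeatedly add the point farthest
    # from the current set of centers, so the seeds are spread out.
    centers = [X[0]]
    for _ in range(1, k):
        d = ((X[:, None, :] - np.array(centers)) ** 2).sum(-1).min(1)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(n_iter):
        # Assignment step: label each point with its nearest center.
        labels = ((X[:, None, :] - centers) ** 2).sum(-1).argmin(1)
        # Update step: move each center to the mean of its points.
        new_centers = np.array([X[labels == j].mean(axis=0)
                                for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```

Each iteration touches every observation, which is why clustering millions of loans is dominated by data movement; an in-memory column store like HANA removes the disk round trips from that loop.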

For more information see our website.