Thursday, February 18
SC3 Propensity Score Methods: Practical Aspects and Software Implementation
Thu, Feb 18, 8:00 AM - 12:00 PM
Opal
Instructor(s): Adin-Cristian Andrei, Feinberg School of Medicine

Observational studies are becoming increasingly common and complex in a variety of industries, including biomedical, health services, pharmaceutical, insurance, and online advertising. To adequately estimate causal effect sizes, proper control of known potential confounders is critical. Having gained enormous popularity in recent years, propensity score methods are powerful and elegant tools for estimating causal effects. Without assuming prior knowledge of propensity score methods, this short course will use simulated and real data examples to introduce and illustrate important propensity score techniques such as weighting, matching, and subclassification.

Relevant R and SAS software packages for implementing the data analyses will be discussed in detail. Specific topics include guidelines for constructing a propensity score model, creating matched pairs for binary group comparisons, assessing baseline covariate balance after matching, and using inverse propensity score weighting techniques. Illustrative examples will accompany each topic, and recent relevant developments and their implementation will be briefly reviewed.
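For a flavor of the techniques covered, here is a minimal R sketch of propensity score matching and inverse propensity score weighting. The course description does not name specific packages, so the "MatchIt" package, the simulated data, and all variable names below are illustrative assumptions, not course materials.

# Minimal sketch: propensity score matching and IPW in R.
# The 'MatchIt' package and all variable names are illustrative assumptions.
library(MatchIt)
set.seed(1)
n <- 500
d <- data.frame(age = rnorm(n, 60, 10), female = rbinom(n, 1, 0.5))
d$treat <- rbinom(n, 1, plogis(-3 + 0.05 * d$age))       # treatment confounded by age
d$y <- 1 + 0.5 * d$treat + 0.03 * d$age + rnorm(n)       # outcome; true effect is 0.5

# 1) Nearest-neighbor matching on the estimated propensity score
m <- matchit(treat ~ age + female, data = d, method = "nearest")
summary(m)                            # assess covariate balance after matching
matched <- match.data(m)
with(matched, t.test(y ~ treat))      # binary group comparison in the matched sample

# 2) Inverse propensity score weighting
ps <- glm(treat ~ age + female, data = d, family = binomial)$fitted.values
w  <- ifelse(d$treat == 1, 1 / ps, 1 / (1 - ps))
coef(lm(y ~ treat, data = d, weights = w))["treat"]      # IPW estimate of the effect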

Outline & Objectives

Outline:
- Observational Studies: definition, examples, causal effects, confounding control.

- Propensity Scores: definition, properties, modeling techniques.

- Propensity Score Approaches in Observational Studies: weighting, matching, sub-classification; graphical methods to assess covariate balance after matching; R and SAS software implementation of these techniques.

- Guidelines on how to best describe the methodology utilized and the results obtained when presenting to a non-technical audience.

- Brief review of the most recent methods developments and discussion of their potential for immediate use in practice.

Objectives and Scope:
The first objective is to provide an overview of some of the most commonly used propensity score-based methods in observational studies, while focusing on the practical aspects. The second objective is to present the practical implementation of these methods. The third objective is to discuss the advantages and disadvantages associated with these methods.

About the Instructor

Dr. Andrei received his Ph.D. in Biostatistics from the University of Michigan in 2005. He is currently a Research Associate Professor in the Feinberg School of Medicine at Northwestern University, where he enjoys successful collaborations in cardiac surgery and cardiovascular outcomes research, of which observational studies are a major component. Dr. Andrei has developed expertise in propensity score techniques, as reflected by national conference presentations and his participation as primary biostatistician in a series of publications that utilize propensity score methods. He has developed practice-inspired and practice-oriented statistical methods in survival analysis, recurrent events, group sequential monitoring, hierarchical modeling, and predictive modeling.

In the last 10 years, Dr. Andrei has collaborated with medical researchers in fields such as pulmonary/critical care, organ transplantation, nursing, prostate and breast cancer, anesthesiology and thoracic surgery.

Currently, he serves as Statistical Co-Editor of the Journal of the American College of Surgeons and Deputy Statistical Editor of the Journal of Thoracic and Cardiovascular Surgery.

Relevance to Conference Goals

Upon attending this course, participants will become more familiar with propensity score-based methods for estimating causal effects in observational studies. Implementation in R and SAS software will be discussed in great detail, permitting participants to integrate these newly acquired statistical techniques into their professional activities and projects. Learning how to produce simple yet powerful graphics to assess the adequacy of the propensity score model, check covariate balance, and display the results will benefit every participant. By leveraging this enhanced set of skills, individuals across industries will be well positioned to become more effective communicators in their interactions with customers and clients. Continued professional development is key to career growth and can enhance the overall analytical capabilities of participants' organizations and institutions.

 
SC4 Introduction to Adaptive Designs
Thu, Feb 18, 8:00 AM - 12:00 PM
Crystal II
Instructor(s): Aaron Heuser, IMPAQ International; Minh Huynh, IMPAQ International; Chunxiao Zhou, IMPAQ International

Unlike conventional studies or clinical trials, in which the design is pre-specified, adaptive designs are a class of study designs with adaptive or flexible characteristics. Characteristics that can vary include the sample size, the dose or intervention, and the number of study arms. This flexibility is guided by examining the accumulated data at an interim point in the study and initiating changes in the design that may make the study more efficient, more likely to demonstrate an effect of the intervention (if one exists), or more informative (e.g., by providing broader dose-response information). This course will introduce adaptive designs, explore their strengths and weaknesses, illustrate the techniques with examples, and give attendees hands-on experience with a toolkit designed to get started with adaptive designs.
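To make the idea of an interim-look adaptation concrete, here is a minimal R simulation of a two-stage, two-arm design with a single interim look for early efficacy stopping. The approximate O'Brien-Fleming-type boundaries and all design parameters are illustrative assumptions, not course materials.

# Minimal sketch: a two-stage adaptive design with one interim look.
# Boundaries are approximate O'Brien-Fleming values for two equally spaced
# looks at overall two-sided alpha = 0.05; all parameters are assumptions.
set.seed(1)
n_per_stage <- 50                       # per arm, per stage
z_interim <- 2.797                      # interim efficacy boundary
z_final <- 1.977                        # final boundary

one_trial <- function(effect = 0.5) {
  trt <- rnorm(n_per_stage, mean = effect); ctl <- rnorm(n_per_stage)
  z1 <- (mean(trt) - mean(ctl)) / sqrt(2 / n_per_stage)   # interim z statistic
  if (abs(z1) >= z_interim) return(c(stopped_early = 1, rejected = 1))
  trt <- c(trt, rnorm(n_per_stage, mean = effect)); ctl <- c(ctl, rnorm(n_per_stage))
  z2 <- (mean(trt) - mean(ctl)) / sqrt(2 / (2 * n_per_stage))  # cumulative z
  c(stopped_early = 0, rejected = as.numeric(abs(z2) >= z_final))
}
rowMeans(replicate(5000, one_trial()))  # early-stopping rate and overall power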

Outline & Objectives

A. Course Outline
1) Fundamental Principles of Adaptive Designs
2) Three Examples of Adaptive Designs
3) Adaptive Design versus Conventional Designs
a. Advantages of adaptive designs
b. Disadvantages and risks associated with adaptive designs
4) Recent Advances in Adaptive Designs
5) The Basic Adaptive Design Toolkit: What you need to get started
a. Essential design elements for your proposal or protocol designs
b. Software implementation
6) Hands-on Practice

B. Course Objectives
1) Gain familiarity with the basic principles of adaptive designs
2) See examples of adaptive designs in practice
3) Understand their strengths and weaknesses, and when to use and when not to use adaptive designs
4) See the latest applications of adaptive designs
5) Obtain a starter toolkit for applying adaptive designs in your work

About the Instructor

Dr. Minh Huynh is a Principal Research Associate and Managing Director at IMPAQ International, LLC, and a former Institute Staff Scientist at the NIH Clinical Research Center. Dr. Aaron Heuser is a Senior Research Associate at IMPAQ International, LLC, and a former Mathematical Statistician at the NIH Clinical Research Center in Bethesda, Maryland.

Relevance to Conference Goals

This course is closely related to Theme 2, Data Modeling and Analysis. The course will provide attendees with theoretical and practical knowledge and techniques related to adaptive designs, through the application of state-of-the-art methods and findings. Course presentations will cover all aspects of adaptive designs and present information relevant to a broad range of applied statisticians, regardless of industry or field of expertise.

 
SC5 The Coward's Guide to Conflict
Thu, Feb 18, 8:00 AM - 12:00 PM
Topaz
Instructor(s): Colleen Mangeot, Cincinnati Children's Hospital Medical Center

This workshop is based on "The Coward's Guide to Conflict: Empowering Solutions for Those Who Would Rather Run Than Fight" by Tim Ursiny. Ever cringed at the thought of having a tough conversation? Do you avoid conflict, knowing deep down you need to do something? This workshop will give you the tools needed to address conflict more confidently and courageously.

Outline & Objectives

This interactive workshop will cover:
1. The Coward in Us All – examine your own reasons for avoiding conflict and the top 10 reasons people avoid conflict
2. Motivate Yourself to Address Conflict – examine the pain and pleasure of conflict
3. Common Causes of Conflict – Proactively avoid conflict
4. Techniques for Handling Conflict

You will complete interactive exercises and learn step-by-step approaches that will help you work through conflict more easily, gracefully, and courageously.

About the Instructor

Colleen Mangeot's diverse career includes 10 years in the actuarial field, 10 years in coaching and leadership development, and 6 years in biostatistics.
Highlights of her 7-year coaching business include: successfully working with clients to increase efficiency and sales by 30% or more; graduating from the world's largest coach training organization, Coach U; attaining Professional Coach Certification from the International Coach Federation in 2003; writing a monthly column for the Dayton Business Journal; national speaking, with over 200 hours of paid speaking engagements; and contract work with the Anthony Robbins Companies.
She received her MS in Statistics from Miami University, received the NSA National Research Council Fellowship at NIOSH, and worked in statistical quality improvement at the VA. Now, in addition to working in the Biostatistical Consulting Unit at Cincinnati Children’s Hospital Medical Center, she is also an internal coach working with executives to further their careers.
She was a panelist for the invited session at JSM 2013, Secrets to Effective Communication for Statistical Consultants, and presented two sessions at last year's Conference on Statistical Practice.

Relevance to Conference Goals

This session will develop important communication skills for career advancement and for leadership and management effectiveness. We all must face conflict from time to time, and the most successful people transform conflict into positive results. The approaches and steps taught in this session apply equally well in academia, government, and industry; the presenter has personally used them in her career in government and industry.

 
SC6 Bootstrap Methods and Permutation Tests
Thu, Feb 18, 1:30 PM - 5:30 PM
Opal
Instructor(s): Tim Hesterberg, Google Inc.

We begin with a graphical approach to bootstrapping and permutation testing, illuminating basic statistical concepts of standard errors, confidence intervals, p-values, and significance tests.

We consider a variety of statistics (mean, trimmed mean, regression, etc.) and a number of sampling situations (one-sample, two-sample, stratified, finite-population), stressing the common techniques that apply in these situations. We'll look at applications from a variety of fields, including telecommunications, finance, and biopharm.

These methods make confidence intervals and hypothesis tests possible when formulas are not available. They let us do better statistics, for example by using robust methods such as a median or trimmed mean instead of a mean. They can help clients understand statistical variability. And some of the methods are more accurate than standard methods.
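As a small taste of the mechanics, here is a base-R sketch of a bootstrap standard error and percentile interval for a trimmed mean, together with a two-sample permutation test; the simulated data and the number of resamples are illustrative assumptions, not course materials.

# Minimal sketch: bootstrap and permutation test in base R (data simulated).
set.seed(1)
x <- rexp(30)                      # skewed one-sample data
B <- 10000

# Bootstrap the 25% trimmed mean: standard error and percentile interval
boot_stat <- replicate(B, mean(sample(x, replace = TRUE), trim = 0.25))
sd(boot_stat)                               # bootstrap standard error
quantile(boot_stat, c(0.025, 0.975))        # simple percentile interval

# Two-sample permutation test for a difference in means
y <- rexp(30, rate = 0.8)
observed <- mean(y) - mean(x)
pooled <- c(x, y)
perm_diffs <- replicate(B, {
  idx <- sample(length(pooled), length(y))  # random relabeling of the groups
  mean(pooled[idx]) - mean(pooled[-idx])
})
mean(abs(perm_diffs) >= abs(observed))      # two-sided permutation p-value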

Outline & Objectives

Introduction to Bootstrapping
General procedure
Why does bootstrapping work?
Sampling distribution and bootstrap distribution

Bootstrap Distributions and Standard Errors
Distribution of the sample mean
Bootstrap distributions of other statistics
Simple confidence intervals
Two-sample applications

How Accurate Is a Bootstrap Distribution?

Bootstrap Confidence Intervals
Bootstrap percentiles as a check for standard intervals
More accurate bootstrap confidence intervals

Significance Testing Using Permutation Tests
Two-sample applications
Other settings

Wider variety of statistics
Variety of applications
Examples where things go wrong, and what to look for

Wider variety of sampling methods
Stratified sampling, hierarchical sampling
Finite population
Regression
Time series

Participants will learn how to use resampling methods:
* to compute standard errors,
* to check the accuracy of the usual Gaussian-based methods,
* to compute both quick and more accurate confidence intervals,
* for a variety of statistics and
* for a variety of sampling methods, and
* to perform significance tests in some settings.

About the Instructor

Dr. Tim Hesterberg is a Senior Statistician at Google. He previously worked at Insightful (S-PLUS), Franklin & Marshall College, and Pacific Gas & Electric Co. He received his Ph.D. in Statistics from Stanford University, under Brad Efron.

Hesterberg is the author of the "Resample" package for R and primary author of the "S+Resample" package for bootstrapping, permutation tests, the jackknife, and other resampling procedures. He is co-author of Chihara and Hesterberg, "Mathematical Statistics with Resampling and R" (2011), lead author of "Bootstrap Methods and Permutation Tests" (W. H. Freeman, 2010, ISBN 0-7167-5726-5), and author of technical articles on resampling. See http://www.timhesterberg.net/bootstrap.

Hesterberg is on the executive boards of the National Institute of Statistical Sciences and the Interface Foundation of North America (Interface between Computing Science and Statistics).

He teaches kids to make water bottle rockets, leads groups of high school students to set up computer labs abroad, and actively fights climate chaos.

Relevance to Conference Goals

Resampling methods are important in statistical practice but have been omitted from, or poorly covered in, many old-style statistics courses. These methods are an important part of the toolbox of any practicing statistician.

When using these methods, it is important to have some understanding of the ideas behind them and of when they should or should not be used.

They are not a panacea. People tend to think of bootstrapping for small samples, when they don't trust the central limit theorem. However, the common combination of the nonparametric bootstrap and percentile intervals is actually less accurate than t procedures in small samples. We discuss why, remedies, and better procedures that are only slightly more complicated.

These tools also show how poor common rules of thumb are -- in particular, n >= 30 is woefully inadequate for judging whether t procedures should be OK.
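The claim about percentile intervals can be checked directly. Below is a small base-R simulation, assuming skewed (exponential) data with n = 15, that compares the empirical coverage of the classical t interval and the bootstrap percentile interval; the specific settings are illustrative assumptions.

# Minimal simulation (assumptions: exponential data, n = 15): coverage of
# the classical t interval versus the bootstrap percentile interval.
set.seed(1)
n <- 15; reps <- 2000; B <- 1000; true_mean <- 1
cover <- c(t = 0, percentile = 0)
for (r in seq_len(reps)) {
  x <- rexp(n)                                   # skewed population, true mean 1
  t_ci <- t.test(x)$conf.int                     # classical t interval
  boot_means <- replicate(B, mean(sample(x, n, replace = TRUE)))
  p_ci <- quantile(boot_means, c(0.025, 0.975))  # percentile interval
  cover["t"] <- cover["t"] + (t_ci[1] <= true_mean && true_mean <= t_ci[2])
  cover["percentile"] <- cover["percentile"] +
    (p_ci[1] <= true_mean && true_mean <= p_ci[2])
}
cover / reps   # both typically fall short of nominal 95% for skewed, small-n data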

 
SC7 Applied Meta-Analysis Using R
Thu, Feb 18, 1:30 PM - 5:30 PM
Crystal II
Instructor(s): Din Chen, University of North Carolina at Chapel Hill

In the Big Data era, it has become the norm for data addressing a similar scientific question to come from diverse sources and studies. The art and science of synthesizing information from diverse sources to draw a more effective inference is generally referred to as meta-analysis. In recent years, meta-analysis has played an increasingly important role in statistical applications, and it has led to numerous scientific discoveries. This course provides an overview of meta-analysis based on the instructor's book, "Applied Meta-Analysis Using R" (2013). The tutorial offers an up-to-date, thorough presentation of meta-analysis models, with detailed step-by-step illustrations and implementations in R. The examples are compiled from real applications in the public literature, and the analyses are illustrated step by step using the most appropriate R packages and functions. Attendees will be able to follow the logic and R implementation to analyze their own research data.
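For a flavor of the workflow, here is a minimal R sketch of a fixed- and random-effects meta-analysis of binary data with the "meta" package named in the outline; the toy event counts are an illustrative assumption, not course materials.

# Minimal sketch: fixed- and random-effects meta-analysis of binary data
# with the 'meta' package (toy event counts; an illustrative assumption).
library(meta)
d <- data.frame(
  study   = paste("Study", 1:5),
  event.e = c(12, 8, 20, 15, 5),  n.e = c(100, 80, 150, 120, 60),
  event.c = c(20, 14, 30, 24, 9), n.c = c(100, 80, 150, 120, 60)
)
m <- metabin(event.e, n.e, event.c, n.c, data = d, studlab = study, sm = "OR")
summary(m)   # pooled odds ratio under fixed- and random-effects models
forest(m)    # forest plot of study-level and pooled estimates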

Outline & Objectives

Session 1:
• Introduction to R
• Overview of meta-analysis, covering both fixed-effects and random-effects models. Real datasets in public health are introduced, along with two commonly used R packages, "meta" and "rmeta"
Session 2:
• Meta-analysis models for binary data, such as the risk ratio, risk difference, and odds ratio
• Methods to quantify heterogeneity and test its significance among studies in a meta-analysis, followed by an introduction to meta-regression with the R package "metafor" (see the sketch after this outline).
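A companion sketch for the Session 2 topics, illustrating heterogeneity assessment and meta-regression with "metafor"; the toy data and the publication-year moderator are illustrative assumptions.

# Minimal sketch: heterogeneity and meta-regression with 'metafor'
# (toy data; the 'year' moderator is an illustrative assumption).
library(metafor)
d <- data.frame(
  ai = c(12, 8, 20, 15, 5),  n1i = c(100, 80, 150, 120, 60),
  ci = c(20, 14, 30, 24, 9), n2i = c(100, 80, 150, 120, 60),
  year = c(2001, 2004, 2008, 2011, 2014)
)
dat <- escalc(measure = "OR", ai = ai, n1i = n1i, ci = ci, n2i = n2i, data = d)
res <- rma(yi, vi, data = dat)          # random-effects model; reports Q and I^2
res
rma(yi, vi, mods = ~ year, data = dat)  # meta-regression on publication year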

About the Instructor

Dr. Din Chen is a professor at the University of North Carolina at Chapel Hill. He was previously the Karl E. Peace endowed eminent scholar chair in biostatistics at Georgia Southern University. Professor Chen is also a senior statistics consultant for biopharmaceutical companies and government agencies, with extensive expertise in clinical trials and bioinformatics. He has more than 100 refereed professional publications and has co-authored six books on clinical trial methodology and public health applications.
Professor Chen was honored with the "Award of Recognition" in 2014 by the Deming Conference Committee for four highly successful advanced biostatistics tutorials at four successive Deming conferences, on four different books that he has written or co-edited. In 2013, he was invited to give a short course at the twentieth Annual Biopharmaceutical Applied Statistics Symposium (BASS XX, 2013) for his contributions to meta-analysis and received a "Plaque of Honor" for his short course.
The tutorial is based on his recent book "Applied Meta-Analysis Using R," co-authored with Professor Karl E. Peace and published by Chapman and Hall/CRC in 2013.

Relevance to Conference Goals

1. To present up-to-date methodological developments in meta-analysis and to guide participants through meta-analysis models

2. To give an overview of R implementations for meta-analysis models with R packages and R programming

3. To emphasize the applied aspects of meta-analysis, using real public health data compiled from systematic reviews, to help applied statisticians solve real-life problems in research and consulting.

 
SC8 Modern Statistical Process Control Charts and Their Use as a Tool for Analyzing Big Data
Thu, Feb 18, 1:30 PM - 5:30 PM
Topaz
Instructor(s): Peihua Qiu, University of Florida

Big Data often take the form of data streams, with observations of certain processes collected sequentially over time. One common reason to collect and analyze Big Data is to monitor the longitudinal performance or status of the related processes. To this end, statistical process control (SPC) charts can be a useful tool, although conventional SPC charts need to be properly modified in some cases. This short course discusses traditional SPC charts, including the Shewhart, CUSUM, and EWMA charts, as well as recent control charts based on change-point detection and fundamental multivariate SPC charts under the normality assumption. It also introduces novel univariate and multivariate control charts for cases in which the normality assumption is invalid, and discusses control charts for profile monitoring. Examples will show how conventional control charts or their modifications can monitor different types of processes with Big Data. Among many potential applications, dynamic disease screening and profile/image monitoring will be discussed in detail. All computations in the examples are carried out in R.
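For a flavor of the traditional charts listed above, here is a minimal R sketch of Shewhart, CUSUM, and EWMA charts. The course does not name a package, so the "qcc" package and the simulated data are illustrative assumptions, not course materials.

# Minimal sketch: traditional control charts in R with the 'qcc' package
# (package choice and simulated data are illustrative assumptions).
library(qcc)
set.seed(1)
phase1 <- matrix(rnorm(100, mean = 10, sd = 1), ncol = 5)   # 20 in-control samples of size 5
phase2 <- matrix(rnorm(50, mean = 10.5, sd = 1), ncol = 5)  # 10 samples after a small mean shift
qcc(phase1, type = "xbar", newdata = phase2)   # Shewhart X-bar chart
cusum(phase1, newdata = phase2)                # CUSUM chart; sensitive to small sustained shifts
ewma(phase1, newdata = phase2)                 # EWMA chart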

Outline & Objectives

Course outline:

This short course discusses both traditional and more recent statistical process control (SPC) charts and how to use them for analyzing big data. More specifically, the course topics include (i) traditional SPC charts, including the Shewhart, CUSUM and EWMA charts, (ii) some recent control charts based on change-point detection, (iii) fundamental multivariate SPC charts under the normality assumption, (iv) novel univariate and multivariate control charts for cases when the normality assumption is invalid, (v) control charts for profile monitoring, and (vi) hands-on examples to use conventional control charts or their modifications for monitoring different types of processes with big data.


Course Objectives:

After taking this short course, participants should be able to apply some basic and more advanced statistical process control (SPC) charts to applications with big data streams. More specifically, participants will learn (i) traditional SPC charts, (ii) some recent control charts in the literature, and (iii) how to use SPC charts to handle applications with big data streams.

About the Instructor

The instructor, Professor Peihua Qiu, is the current editor of Technometrics, the flagship journal in industrial statistics, co-sponsored by the ASA and the American Society for Quality. He has worked on various statistical process control (SPC) problems since 1998 and has made substantial contributions in several SPC areas, including nonparametric SPC, SPC by change-point detection, and profile monitoring. His recent book, Introduction to Statistical Process Control (Qiu, 2014, Chapman & Hall), gives a systematic description of both traditional and newer SPC methods. Professor Qiu is an elected fellow of the ASA, IMS, and ISI. After obtaining his Ph.D. in statistics from the University of Wisconsin-Madison, he helped create the Biostatistics Center at the Ohio State University during 1996-1998. He then worked as an assistant (1998-2002), associate (2002-2007), and full professor (2007-2013) in the School of Statistics at the University of Minnesota, where he taught 13 different courses on various topics. He moved to the University of Florida as the founding chair of the Department of Biostatistics in 2013. Throughout his career, Professor Qiu has been continually involved in statistical consulting and collaborative research.

Relevance to Conference Goals

This proposed short course fits well with the following two conference themes: 1) Big Data Prediction and Analytics, and 2) Data Modeling and Analysis.

Statistical process control (SPC) charts are commonly used in industry, especially in manufacturing. However, most SPC charts used in practice are several decades old. For instance, the most commonly used chart is the Shewhart chart, proposed by Walter Shewhart in 1931. One main purpose of this course is to let participants know that 1) many traditional SPC charts can give misleading results if their assumptions are not carefully checked, and 2) some newer SPC charts have been developed in the literature and often provide more reliable results. In many applications, Big Data take the form of data streams, with observations of certain processes collected sequentially over time. Another main purpose of the course is to show that SPC charts provide an effective tool for handling such applications. By taking this course, participants will learn statistical techniques that they can apply in their daily jobs, and they will be better able to communicate with their clients and customers.