Add-Ons

JSM sessions that require ticket purchase have limited availability and are therefore subject to sell-out or cancellation. Below are the functions that still have availability. Although this list is updated in real time, please bear in mind that tickets are sold online around the clock; if you plan to purchase a function ticket onsite and see the function on this list before you travel to JSM, we cannot guarantee it will still be available for purchase when you arrive. To find out how many tickets remain for a particular function, please contact the ASA at (703) 684-1221.

Available Add-Ons


Professional Development

CE_01C (TWO-DAY COURSE) Introduction to Bayesian Methods, Computation, and Modeling
INSTRUCTOR(S): Joseph Ibrahim

This is an introductory course in Bayesian modeling and computational methods. We will examine the fundamentals of the Bayesian paradigm, including Bayes' theorem, deriving posterior distributions, point estimation, interval estimation, hypothesis testing, and model selection. We will discuss Bayesian methods for linear models, generalized linear models, models for longitudinal data, and survival models. Bayesian computational methods, including Gibbs sampling and Metropolis sampling, will also be discussed. Various case studies and datasets will be discussed in detail using statistical packages such as SAS, WinBUGS, and R. On the second day of the course, more advanced topics will be covered, including hierarchical modeling, missing data, variable selection, prior elicitation, and Bayesian methods for clinical trial design.
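
The course lists R among its software; as a minimal sketch of the Metropolis sampling it covers (our illustration, not course material), the following R code samples the posterior of a binomial proportion under a Beta(1, 1) prior and checks the answer against the closed-form conjugate posterior:

    # Random-walk Metropolis for a binomial proportion p, Beta(1, 1) prior.
    set.seed(1)
    y <- 12; n <- 40                  # toy data: successes and trials
    log_post <- function(p) dbeta(p, 1, 1, log = TRUE) +
                            dbinom(y, n, p, log = TRUE)
    draws <- numeric(5000); p <- 0.5
    for (i in seq_along(draws)) {
      prop <- p + rnorm(1, sd = 0.1)  # random-walk proposal
      if (prop > 0 && prop < 1 &&
          log(runif(1)) < log_post(prop) - log_post(p)) p <- prop
      draws[i] <- p
    }
    mean(draws)                       # should be close to the exact
    (1 + y) / (2 + n)                 # Beta(1 + y, 1 + n - y) posterior mean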


CE_02C (HALF-DAY COURSE) Best Practices in Data Visualization: Present Your Data Clearly, Accurately, and Attractively
INSTRUCTOR(S): Teresa Larsen

Do you get questions about your data after you think you have presented them clearly? Do people's eyes glaze over when you show them your results? In this course, learn how to apply simple guidelines to display your statistical results clearly, accurately, and attractively. As with verbal communication, there is a grammar and vocabulary of visual communication that will help you get your point across clearly, accurately, and even beautifully. This course is designed for anyone who wants to communicate data clearly in visual form. No prerequisites are required.
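
The course itself is software-agnostic; purely as an illustration of a "grammar" of visual communication, here is a minimal R/ggplot2 sketch (our example, not course material) that labels axes with units, states the message in the title, and strips decoration:

    library(ggplot2)
    ggplot(mtcars, aes(x = wt, y = mpg)) +
      geom_point() +
      labs(x = "Weight (1,000 lb)", y = "Fuel economy (mpg)",
           title = "Heavier cars use more fuel") +  # lead with the message
      theme_minimal()                               # drop chart junk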


CE_31T Joint Modeling of Longitudinal and Survival-Time Data in Stata
INSTRUCTOR(S): Yulia Marchenko

Joint modeling of longitudinal and survival-time data has been gaining attention in recent years. Many studies collect both longitudinal and survival-time data. Longitudinal, panel, or repeated-measures data record outcomes measured repeatedly at different time points. Survival-time or event-history data record times to an event of interest, such as death or onset of a disease. The longitudinal and survival-time outcomes are often related and should therefore be analyzed jointly. Three types of joint analysis may be considered: 1) evaluation of the effects of time-dependent covariates on the survival time; 2) adjustment for informative dropout in the analysis of longitudinal data; and 3) joint assessment of the effects of baseline covariates on the two types of outcomes. This workshop will provide a brief introduction to the methodology and demonstrate how to perform these three types of joint analysis in Stata. No prior knowledge of Stata is required, but familiarity with survival and longitudinal analysis will prove useful.
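
The workshop itself uses Stata; for readers who want a feel for the model beforehand, an analogous joint model can be sketched in R with the JM package (our example, not workshop material), linking a linear mixed model for the longitudinal outcome to a Cox model for the survival outcome:

    library(JM)  # also loads nlme and survival; the aids data ship with JM
    # Longitudinal submodel: CD4 trajectories measured repeatedly per patient.
    lmeFit <- lme(CD4 ~ obstime, random = ~ obstime | patient, data = aids)
    # Survival submodel: time to death (x = TRUE keeps the design matrix).
    coxFit <- coxph(Surv(Time, death) ~ drug, data = aids.id, x = TRUE)
    # The joint model ties the two outcomes together.
    jointFit <- jointModel(lmeFit, coxFit, timeVar = "obstime")
    summary(jointFit)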


CE_32T Advanced ODS Graphics Examples in SAS
INSTRUCTOR(S): Warren Kuhfeld

You can use SG annotation, modify templates, and change dynamic variables to customize graphs in SAS. Standard graph customization methods include template modification (which most people use to modify graphs that analytical procedures produce) and SG annotation (which most people use to modify graphs that procedures such as PROC SGPLOT produce). However, you can also use SG annotation to modify graphs that analytical procedures produce. You begin by using an analytical procedure, ODS Graphics, and the ODS OUTPUT statement to capture the data that go into the graph. You use the ODS document to capture the values of dynamic variables, which control many of the details of how the graph is created. You can modify the values of the dynamic variables, and you can modify graph and style templates. Then you can use PROC SGRENDER along with the ODS output data set, the captured or modified dynamic variables, the modified templates, and SG annotation to create highly customized graphs. This tutorial is based on the free web book at http://support.sas.com/documentation/prod-p/grstat/9.4/en/PDF/odsadvg.pdf. Prior experience with ODS Graphics is assumed.
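
The workflow above is specific to SAS; as a loose analogue of the same idea in R (our sketch, not the tutorial's material), one can capture the data behind a procedure's default graph and then re-render a customized version:

    library(survival); library(ggplot2)
    fit <- survfit(Surv(time, status) ~ sex, data = lung)
    # Capture the data that go into the default plot(fit) graph ...
    d <- data.frame(time = fit$time, surv = fit$surv,
                    group = rep(names(fit$strata), fit$strata))
    # ... then render a customized version of the graph ourselves.
    ggplot(d, aes(time, surv, color = group)) +
      geom_step() + labs(y = "Survival probability")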


CE_33T Introduction to Data Mining with CART Classification and Regression Trees
INSTRUCTOR(S): Kaitlin Onthank and Mikhail Golovnya

This tutorial is intended for the applied statistician who wants to understand and apply CART classification and regression tree methodology. Concepts will be illustrated using real-world, step-by-step examples. The course begins with an intuitive introduction to tree-structured analysis: what it is, why it works, why it is nonparametric and model-free, and its advantages in handling all types of data, including missing values and categorical predictors. Working through examples, we will review how to read the CART tree output and set up basic analyses. This session includes performance evaluation of CART trees and covers ways to search for improved results. Once a basic working knowledge of CART has been mastered, the tutorial will focus on critical details for advanced CART applications, including choice of splitting criteria, choosing the best split, using prior probabilities to shape results, refining results with differential misclassification costs, the meaning of cross validation, tree growing, and tree pruning. The course concludes with a discussion of the comparative performance of CART versus other computer-intensive methods such as neural networks and statistician-generated parametric models. Attendees receive six months of access to fully functional versions of the SPM Salford Predictive Modeler software suite.
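
For a taste of the growing and pruning cycle described above, here is a minimal sketch using R's rpart implementation of CART-style trees (our illustration; the course itself uses the SPM software):

    library(rpart)
    # Grow a deliberately large classification tree with 10-fold
    # cross validation, then prune back to the CV-optimal subtree.
    fit <- rpart(Species ~ ., data = iris, method = "class",
                 control = rpart.control(cp = 0.001, xval = 10))
    printcp(fit)   # cross-validated error for each candidate subtree
    best <- fit$cptable[which.min(fit$cptable[, "xerror"]), "CP"]
    pruned <- prune(fit, cp = best)   # pruning guards against overfitting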


CE_34T Bayesian Analysis Using Stata
INSTRUCTOR(S): Yulia Marchenko

This workshop covers the use of Stata to perform Bayesian analysis. Bayesian analysis is a statistical paradigm that answers research questions about unknown parameters using probability statements. For example, what is the probability that people in a particular state vote Republican or Democrat? What is the probability that a person accused of a crime is guilty? What is the probability that the odds ratio is between 0.3 and 0.5? And many more. Such probabilistic statements are natural to Bayesian analysis because of the underlying assumption that all parameters are random quantities. In Bayesian analysis, a parameter is summarized by an entire distribution of values instead of one fixed value as in classical frequentist analysis. Estimating this distribution, a posterior distribution of a parameter of interest, is at the heart of Bayesian analysis. This workshop will demonstrate the use of Bayesian analysis in various applications and introduce Stata’s suite of commands for conducting Bayesian analysis. No prior knowledge of Stata is required, but basic familiarity with Bayesian analysis will prove useful.
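
The workshop uses Stata's Bayesian commands; purely to illustrate the kind of probability statement quoted above, this R sketch (our construction) computes P(0.3 < odds ratio < 0.5) from posterior draws for two binomial arms with Beta(1, 1) priors and made-up counts:

    set.seed(1)
    # Toy data: 15/100 events in the treated arm, 35/100 in the control arm.
    p1 <- rbeta(1e5, 1 + 15, 1 + 85)   # posterior draws, treated
    p0 <- rbeta(1e5, 1 + 35, 1 + 65)   # posterior draws, control
    or <- (p1 / (1 - p1)) / (p0 / (1 - p0))
    mean(or > 0.3 & or < 0.5)          # P(0.3 < OR < 0.5 | data)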


CE_35T Small Area Estimation Using SAS Software
INSTRUCTOR(S): Pushpal Mukhopadhyay

Small area estimation techniques are used in survey sampling when sample sizes are too small to provide adequate precision for subpopulation estimates. These methods use implicit or explicit statistical models and are widely used in government agencies to plan action and make policy decisions. These model-based techniques require you to go beyond the design-based methods available in the SAS/STAT survey procedures; you need to use mixed effects modeling procedures to fit small area models. This workshop provides an overview of small area models and introduces SAS strategies for fitting them. You will learn the two types of basic small area models, unit-level and area-level, which are illustrated with practical examples. Then, you will learn how to use the GLIMMIX and IML procedures to fit these models and obtain small area predictions and their mean squared errors, with appropriate adjustments for the number of areas. Fully Bayesian approaches are also commonly employed for small area estimation, and you will learn how to use the MCMC procedure to fit hierarchical Bayes models.
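
The workshop itself works in SAS; the basic unit-level (nested-error) model can be sketched analogously in R with lme4 on simulated data (our illustration, not workshop material):

    library(lme4)
    set.seed(1)
    # Simulate unit-level data: 20 small areas, 10 sampled units each.
    d <- data.frame(area = factor(rep(1:20, each = 10)), x = rnorm(200))
    d$y <- 2 + 0.5 * d$x + rep(rnorm(20, sd = 0.7), each = 10) + rnorm(200)
    # Nested-error model: common slope plus random area intercepts.
    fit <- lmer(y ~ x + (1 | area), data = d)
    fixef(fit)          # common regression coefficients
    ranef(fit)$area     # estimated area effects, shrunken toward zero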


CE_36T Applied Data Mining Analysis: A Step-by-Step Introduction Using Real-World Data Sets
INSTRUCTOR(S): Kaitlin Onthank and Mikhail Golovnya

How would you like to use data mining in addition to classical statistical modeling? In this presentation designed for statisticians, we will show how you can quickly and easily create data mining models, demonstrating with step-by-step instructions and real-world examples drawn from online advertising and the financial services industries. By the end of this workshop, our goal is for you to be able to build data mining models on your own data sets. Data mining is a powerful extension to classical statistical analysis. Unlike classical techniques, data mining readily finds patterns in data, nonlinear relationships, key predictors, and variable interactions that are difficult, if not impossible, to detect using standard approaches. This tutorial follows a step-by-step approach to introduce advanced automation technology, including CART, MARS, TreeNet Gradient Boosting, Random Forests, and the latest multi-tree boosting and bagging methodologies by the original creators of CART (Breiman, Friedman, Olshen, and Stone). All attendees will receive six months of access to fully functional versions of the SPM Salford Predictive Modeler software suite.
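
As one concrete example of how quickly such a model can be built, here is a minimal random forest fit in R (our sketch; the course itself uses the SPM suite):

    library(randomForest)
    set.seed(1)
    rf <- randomForest(Species ~ ., data = iris, ntree = 500)
    rf               # confusion matrix and out-of-bag error, for free
    importance(rf)   # which predictors drive the model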


CE_37T Design Multi-Arm, Multi-Stage Trials with Treatment Selection and Sample Size Re-Estimation in East
INSTRUCTOR(S): Cyrus Mehta and Lingyun Liu

There has been increasing interest in designing multi-arm, multi-stage trials with treatment selection and sample size re-estimation at interim analysis. Cytel has developed software to facilitate design, simulation, and monitoring of such trials in a streamlined manner. There are two main streams of statistical approaches that ensure strong control of the type I error. One is the group sequential approach formulated in several publications, including Magirr et al. (2012) and Gao et al. (2014). The other is based on the closed testing principle, combining the p-values from different stages as proposed by Posch et al. (2005). We'll review the theory behind these two approaches and demonstrate how to design such trials in East through case studies.
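
For a flavor of the second approach, the inverse-normal combination of independent stage-wise p-values is a standard device in this literature; the R sketch below is ours, not East code, and uses equal prespecified weights:

    # Combine two independent stage-wise one-sided p-values.
    combine_pvals <- function(p1, p2, w1 = sqrt(0.5), w2 = sqrt(0.5)) {
      z <- (w1 * qnorm(1 - p1) + w2 * qnorm(1 - p2)) / sqrt(w1^2 + w2^2)
      pnorm(z, lower.tail = FALSE)   # combined one-sided p-value
    }
    combine_pvals(0.04, 0.03)  # remains valid under prespecified weights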


CE_38T Weighted GEE Analysis Using SAS/STAT Software
INSTRUCTOR(S): Michael Lamm

The generalized estimating equation (GEE) approach is commonly used for the analysis of longitudinal data. The standard GEE approach is based on complete cases and can lead to biased results when observations are missing due to dropouts or skipped visits. When your data include dropouts, weighted GEE methods can be used to eliminate this bias. This workshop introduces the GEE procedure (new in the SAS/STAT 13.2 release), which supports both the standard and weighted GEE methods for analyzing longitudinal data. You will learn about the different mechanisms used to describe why a response is missing and how the missing data mechanism affects inference under the standard and weighted GEE approaches. The workshop illustrates the use of PROC GEE with examples and compares the methods available in PROC GENMOD and PROC GEE. A basic familiarity with generalized linear models is assumed.
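
The workshop is SAS-based; the same pair of analyses can be sketched in R with geepack (our analogue, using geepack's bundled dietox data and placeholder weights of 1 where a fitted dropout model would supply inverse probabilities):

    library(geepack)
    data(dietox)   # longitudinal weight measurements on pigs
    # Standard GEE with an exchangeable working correlation.
    fit <- geeglm(Weight ~ Time, id = Pig, data = dietox,
                  family = gaussian, corstr = "exchangeable")
    # Weighted GEE: supply inverse-probability-of-observation weights
    # (placeholders here) and use an independence working correlation.
    fit_w <- geeglm(Weight ~ Time, id = Pig, data = dietox,
                    weights = rep(1, nrow(dietox)),
                    family = gaussian, corstr = "independence")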


CE_39T Evolution of Classification: From Logistic Regression and Decision Trees to Bagging/Boosting and Netlift Modeling
INSTRUCTOR(S): Mikhail Golovnya and Dan Steinberg

Not so long ago, modelers would use traditional classification, data mining, and decision tree techniques to identify a target population. We have come a long way in recent years: by incorporating modern approaches, including boosting, bagging, and netlift, the field has taken a giant leap. We will discuss recent improvements to conventional decision tree and logistic regression technology via two case studies: one in direct marketing and the other in biomedical data analysis. Within the context of real-world examples, we will illustrate the evolution of classification by comparing and contrasting regularized logistic regression, CART, random forests, TreeNet stochastic gradient boosting, and RuleLearner. All attendees will receive six months of access to fully functional versions of the SPM Salford Predictive Modeler software suite.
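
As a small illustration of the boosting side of this evolution, here is a stochastic gradient boosting fit in R's gbm package (our sketch; the course itself uses TreeNet in the SPM suite):

    library(gbm)
    library(rpart)   # for the kyphosis example data
    set.seed(1)
    d <- transform(kyphosis, present = as.integer(Kyphosis == "present"))
    fit <- gbm(present ~ Age + Number + Start, data = d,
               distribution = "bernoulli", n.trees = 1000,
               interaction.depth = 2, shrinkage = 0.01, cv.folds = 5)
    best <- gbm.perf(fit, method = "cv")   # CV-chosen number of trees
    summary(fit, n.trees = best)           # relative variable influence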


CE_40T Software for Designing Dual-Agent Phase 1 Trials
INSTRUCTOR(S): Charles Liu and Hrishikesh Kulkarni

East ESCALATE is a widely used tool for simulating and analyzing Phase 1 dose escalation trials. Single-agent designs in this module include the 3+3, the modified Toxicity Probability Interval (mTPI) method, the Continual Reassessment Method (CRM), and the Bayesian Logistic Regression Model (BLRM). Two new methods recently introduced in the software are for dual-agent designs: (1) an extension of the BLRM and (2) a product of independent beta probabilities escalation (PIPE) design. We'll begin by reviewing the underlying methodology of these new designs and then present case studies using the software to inform the best choice of design parameters.
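
East's implementations are proprietary; to give a flavor of the single-agent CRM mentioned above, here is a toy grid-posterior update in R (our sketch, not East code, with made-up skeleton, prior, and data):

    # One-parameter power-model CRM: P(toxicity at dose i) = skel_i ^ exp(a).
    skel   <- c(0.05, 0.10, 0.20, 0.33, 0.45)  # prior DLT guesses per dose
    target <- 0.25
    a_grid <- seq(-3, 3, by = 0.01)
    prior  <- dnorm(a_grid, 0, sqrt(1.34))     # a common CRM prior on a
    dose   <- c(1, 1, 2, 2, 3)                 # doses given so far (toy)
    tox    <- c(0, 0, 0, 1, 0)                 # 0/1 toxicity outcomes (toy)
    lik <- sapply(a_grid, function(a) {
      p <- skel[dose]^exp(a)
      prod(p^tox * (1 - p)^(1 - tox))
    })
    post  <- prior * lik / sum(prior * lik)    # grid posterior for a
    p_hat <- sapply(skel, function(s) sum(s^exp(a_grid) * post))
    which.min(abs(p_hat - target))             # recommended next dose level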


CE_41T Current Methods in Survival Analysis Using SAS/STAT Software
INSTRUCTOR(S): Changbin Guo

Survival analysis provides insight into time-to-event data. Traditional techniques such as the Kaplan-Meier estimator, the log-rank test, and Cox regression mostly focus on right-censored data and are widely used in practice. By comparison, specialized methods for dealing with interval censoring and competing risks are less well known, and practicing statisticians are often not aware that software is available for these methods. After reviewing basic concepts of survival analysis, this tutorial introduces two new procedures in SAS/STAT for the analysis of interval-censored data: PROC ICLIFETEST and PROC ICPHREG. The tutorial demonstrates how to use these procedures for estimation and comparison of survival functions and for proportional hazards regression. The tutorial then turns to the analysis of competing risks data and explains how to use the LIFETEST procedure to conduct nonparametric survival analysis and the PHREG procedure to investigate the relationship of covariates to cause-specific failures. A basic understanding of applied statistics is assumed.
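
The tutorial itself uses SAS; as one small analogue in R (our sketch, with toy intervals), the survival package's survreg fits a parametric model to interval-censored responses, where an NA right endpoint marks right-censoring:

    library(survival)
    # Event time known only to lie in (left, right]; NA means right-censored.
    d <- data.frame(left  = c(1, 2, 4, 6, 3),
                    right = c(3, 5, 7, NA, 4))
    fit <- survreg(Surv(left, right, type = "interval2") ~ 1,
                   data = d, dist = "weibull")
    summary(fit)   # Weibull fit to the interval-censored data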


CE_42T Improve Your Regression with Modern Regression Analysis Techniques: Linear, Logistic, Nonlinear, Regularized, GPS, LARS, LASSO, Elastic Net, MARS, TreeNet Gradient Boosting, Random Forests
INSTRUCTOR(S): Mikhail Golovnya and Dan Steinberg

Linear regression plays a big part in the everyday life of a data analyst, but the results aren’t always satisfactory. What if you could drastically improve prediction accuracy in your regression with a new model that handles missing values, interactions, and nonlinearities in your data? Instead of proceeding with a mediocre analysis, join us for this presentation, which will show you how modern regression analysis techniques can take your regression model to the next level and expertly handle your modeling woes. Using real-world data sets, we will demonstrate advances in nonlinear, regularized linear, and logistic regression. This workshop will introduce the main concepts behind Leo Breiman’s Random Forests and Jerome Friedman’s GPS (Generalized Path Seeker), MARS (Multivariate Adaptive Regression Splines), and Gradient Boosting. With these state-of-the-art techniques, you’ll boost model performance without stumbling over confusing coefficients or problematic p-values! All attendees will receive six months access to fully functional versions of the SPM Salford Predictive Modeler software suite.
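
As one concrete example of the regularized side, here is an elastic net fit with R's glmnet (our sketch; the course itself demonstrates GPS within the SPM suite):

    library(glmnet)
    set.seed(1)
    x <- model.matrix(mpg ~ . - 1, data = mtcars)  # predictor matrix
    y <- mtcars$mpg
    # alpha = 1 is the LASSO, alpha = 0 is ridge; 0.5 blends the two.
    cvfit <- cv.glmnet(x, y, alpha = 0.5)
    coef(cvfit, s = "lambda.min")   # coefficients at the CV-chosen penalty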


CE_44P (SPANS TWO DAYS) Preparing Statisticians for Leadership: How to See the Big Picture and Have More Influence
INSTRUCTOR(S): Bonnie LaFleur and Jim Hess

Part I: Saturday, July 30
1:00 p.m. - 6:30 p.m.

Part II: Sunday, July 31
8:00 a.m. - 12:00 p.m.

What is leadership? Much has been written and discussed within the statistics profession in the last few years on the topic and its importance in advancing our profession. This course provides an understanding of leadership and how statisticians can improve and demonstrate leadership to affect their organizations. It features leaders from all sectors of statistics speaking about their personal journeys and provides guidance on personal leadership development with a focus on the larger organizational/business view and influence. Course participants work with their colleagues to discuss and resolve leadership situations that statisticians face. Participants will come away with a plan for developing their own leadership and connect with a network of statisticians who can help them move forward on their leadership journey.



Monday Roundtables and Speaker Luncheons

ML19 Incorporating Visual Literacy Standards in an Introductory Statistics Course
SPONSOR: Section on Statistical Education

SPEAKER(S): Jill Young

Twenty-first-century learners use the Internet to find answers to questions. Popular approaches include search engines (e.g., Google) and applications (e.g., Wikipedia), and older approaches to solving mathematical and statistical problems have been replaced by sophisticated applications (e.g., Excel). Anyone with Internet access can use YouTube to gain knowledge about nearly any subject. In this context, how might we, as imparters of knowledge, engage students in learning statistics? This roundtable discussion will revolve around a project conducted in an introductory college course on business statistics. Students used statistics to analyze e-voting data and learned how to represent their analysis visually. They were introduced to infographic software and visual literacy competencies and, working in small groups, used the software to develop visual analyses. The instructor and library coordinator established a rubric as a framework for the students' visual representations. Students developed and demonstrated knowledge in all seven skill areas defined in the Visual Literacy Competency Standards for Higher Education.



Tuesday Roundtables and Speaker Luncheons

TL24 Calling All Statisticians to Consider Becoming Hospital Board Directors: An Impactful Way to Transform Health Care
SPONSOR: Health Policy Statistics Section

SPEAKER(S): Madhu Mazumdar, Icahn School of Medicine at Mount Sinai

All hospitals are governed by a board that is legally accountable for their clinical quality performance. Published literature reveals common board practices that show significant association with better performance (e.g., having a board committee on quality, spending at least a quarter of board meeting time on quality issues, and reviewing quality performance using dashboards or scorecards). Although the statistical methodologies underlying dashboards are simple for statisticians, typical boards of directors (BODs), whose members are chosen for expertise in nonquantitative fields such as social work, community organization, the arts, and philanthropy, are ill-prepared to take a critical look at these reports and the underlying data. Statisticians, on the other hand, are uniquely suited for this work but are rarely found in this role. In this roundtable, I will discuss this opportunity and call on statisticians to apply for BOD positions. I'll outline the process of acquiring a BOD position and illustrate the value of this role in developing leadership skills and affecting health care transformation by sharing my experience of working with BOD members at the Mount Sinai Health System.



Wednesday Roundtables and Speaker Luncheons

WL15 Building a Successful Private Practice from the Ground Up
SPONSOR: Section on Statistical Consulting

SPEAKER(S): Kimberly Love, K. R. Love Quantitative Consulting and Collaboration

When one has the technical background and skill to analyze and interpret data statistically, it can be appealing to work with clients through a private statistical consulting practice. Private practice can indeed be rewarding in multiple ways, personally and professionally. To do this successfully, however, other knowledge and skills become important. In this roundtable, we will discuss the following elements of starting one’s own private practice: legal aspects of starting a company, creating a web and social media presence, determining services and service fees, developing and retaining a client base, managing finances and budget, access to resources (software, statistical training, etc.), creating a network of trusted collaborators, and maintaining a life/work balance. Anyone who is interested in starting a private practice and would like to learn more should attend this roundtable, as should those who already maintain a private practice and are interested in sharing their approaches to the issues outlined above. The facilitator will also share recent experiences from her own private practice.

