All Times ET
Friday, February 19
Fri, Feb 19
9:00 AM - 11:00 AM
Virtual
PCD1 - Dashboards: Conveying Your Modeling Outcomes to Enhance Audience Engagement
Practical Computing Demo
Instructor(s): Clair Alston-Knox, Predictive Analytics Group; Theo Gazos, Predictive Analytics Group
Modern technology has led to massive increases in the information (data) available to businesses and government agencies. Along with this increase, management, employees, researchers, and the general public need the salient information presented in a form that lets them see the message of any underlying analysis or summary quickly and clearly.
Dashboards, available through web browsers or mobile devices, have emerged as an effective medium for conveying information via appropriate snapshots and trends, and they can be tailored for different audiences.
In this tutorial, we will use several case studies to give participants a basis for thinking about how to construct effective dashboards for their own audiences, with advice on the types of graphs and summaries that can be quickly understood, the level of detail different users may require, layout for web versus mobile, and the use of group and global filtering. Automation for periodic updating will be handled through an intuitive GUI pipeline, illustrating how easy it is to update dashboards (and reports) that previously required manual, often time-consuming work.
Outline & Objectives
This tutorial is aimed at data scientists and statisticians who need to convey information to audiences beyond technical reports and scientific papers. Along with techniques for making dashboards interesting and attractive, we will introduce pipelines that automate updating the dashboard as new data become available and guard against disruption from unexpected employee attrition and staff changes. No prior experience with dashboards is required.
The tutorial will be case-study based, and several dashboards will be constructed for different purposes. For example, a management dashboard will be built for both webpage and mobile. We will use visualisations such as geo-charts and other standard charts, and filters will then be applied to give users easy access to the information they require.
Dashboards will be constructed with basic summary statistics and extended using more sophisticated models, conveying predictions and trends for policy decisions, planning, and general interest. In addition, we will use other techniques, such as statistical process control, to produce real-time monitoring dashboards that may be beneficial in industry and healthcare services.
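The session itself is built around AutoStat's point-and-click interface, so no code is required. Purely to illustrate the statistical process control idea mentioned above, a minimal individuals control chart on simulated data might be sketched in base R roughly as follows (the data, limits, and plotting choices are placeholders, not session material).

## Illustrative sketch only -- not part of the session's AutoStat workflow.
## An "individuals" control chart: centre line and 3-sigma limits estimated
## from the average moving range, with out-of-control points flagged.
set.seed(42)
x <- c(rnorm(40, mean = 10, sd = 1), rnorm(5, mean = 14, sd = 1))  # simulated metric with a late shift

centre <- mean(x)
mr     <- abs(diff(x))              # moving ranges of consecutive points
sigma  <- mean(mr) / 1.128          # d2 constant for subgroups of size 2
ucl    <- centre + 3 * sigma
lcl    <- centre - 3 * sigma
flag   <- which(x > ucl | x < lcl)  # points outside the control limits

plot(x, type = "b", pch = 19, xlab = "Observation", ylab = "Value",
     ylim = range(c(x, ucl, lcl)), main = "Individuals control chart")
abline(h = c(centre, ucl, lcl), lty = c(1, 2, 2))
points(flag, x[flag], col = "red", pch = 19)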
About the Instructor
Dr Theo Gazos is the Managing Director of Predictive Analytics Group. Theo has over 25 years of experience building economic and econometric models that isolate and quantify the impact of changing market dynamics (domestic and international), competition effects, and government policy on private- and government-sector organisations. Theo is passionate about bringing the power of statistics and machine learning to all levels within organisations, and has used his years of experience to develop an interface and user flow within AutoStat® that makes this objective achievable.
Dr Clair Alston-Knox is a Senior Statistician with Predictive Analytics Group (Melbourne, Australia). She has been a research and academic statistician since 1992, holding a number of biometric and statistical consulting positions in government and universities. She joined Predictive Analytics and the AutoStat Institute in 2018 because her teaching, consulting, advising, and ethics-committee roles were frequently frustrated by researchers who fully understood the objectives and benefits of statistical or machine learning approaches but did not have the resources to learn the platforms needed for next-level analysis.
Relevance to Conference Goals
Dashboards are a powerful tool for helping statisticians and data scientists communicate and collaborate with colleagues and clients. Constructing a useful dashboard requires skill as an applied statistician, and collaboration with clients tends to arise naturally in this setting. Clients and colleagues engage with the messages displayed in the dashboard, and feedback comes easily. This feedback loop helps develop skills in data storytelling, as the statistician learns how lay people interpret visual displays, and it can grow into lasting collaborative partnerships built on a better understanding of how the various members of a team contribute to the outcome. Dashboards can also benefit the organisation by conveying clear messages to employees in different areas of the operation and letting them see company snapshots and trends in a clear format. This shared understanding allows employees to contribute to dialogue and planning from a solid grasp of the organisation's current position, making the statistical contribution very active.
Fri, Feb 19
9:00 AM - 11:00 AM
Virtual
PCD2 - JMP Statistical Discovery Software from SAS
Practical Computing Demo
Instructor(s): Ruth M Hummel, SAS Institute / JMP Division; Kevin Potcner, SAS Institute / JMP Division
Imagine this: You are a statistical consultant and your new client brings their collected data to you, explains their goals, and asks, “How do I analyze this?” You examine the spreadsheet of data and ask a few questions, only to realize that, although they collected a lot of data, each of the treatments was only applied to one large block, and there isn’t any replication to test the treatment effect! The whole experiment needs to be redone, and the resources that went into this first attempt were wasted.
This is why thoughtful design of the experiment is so critical – it can save you much time, money, and tears!
In this session we will briefly discuss WHY designed experiments are so important, and then we will cover HOW, in JMP, to quickly and easily design your experiment and generate a data table with appropriate order randomizations and with prepopulated model scripts to make analysis a one-click process once you’ve collected the data. We will also cover a few types of common designs, how to branch out into custom designs, and how to compare possible designs to pick the best one for your goals.
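JMP's DOE platform does all of this through dialogs rather than code. Purely as a language-neutral illustration of what a randomized, replicated run order looks like, here is a minimal base R sketch of a 2 x 3 factorial with three replicates (the factor names and replicate count are invented for the example).

## Illustrative sketch only -- the session generates designs in JMP, not in code.
design <- expand.grid(
  Temperature = c("Low", "High"),
  Catalyst    = c("A", "B", "C")
)
design <- design[rep(seq_len(nrow(design)), times = 3), ]  # three replicates of each treatment
design$RunOrder <- sample(nrow(design))                    # randomize the order in which runs are performed
design <- design[order(design$RunOrder), ]
head(design)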
Outline & Objectives
Objectives:
Attendees should learn more about:
• How easy designing an experiment can be, even for complex scenarios.
• Common types of experimental designs and how to create them, and the flexibility of a custom optimized design and how to create one.
• How to compare and judge candidate designs, and how to explore power and sample size calculations (see the sketch after this list).
• How to match the analysis to the experimental design.
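The session explores power and sample size through JMP dialogs; as a quick reminder of the inputs involved, base R's power.t.test shows the same trade-off among effect size, variability, significance level, sample size, and power (the numbers below are placeholders).

## Placeholder numbers, for illustration only.
power.t.test(delta = 1.5, sd = 2, sig.level = 0.05, power = 0.90)  # solve for n per group
power.t.test(n = 12, delta = 1.5, sd = 2, sig.level = 0.05)        # solve for power at n = 12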
Outline:
Introduction to Design Of Experiments
• What is DOE?
• Conducting Ad Hoc and One-Factor-at-a-Time (OFAT) Experiments
• Why Use DOE?
• Types of Experimental Designs
Factorial Experiments
• Designing Factorial Experiments
• Analyzing Full Factorial
Custom Designs
• Options for Factors
• Quick Overview of Optimality
Case Study
• Defining the Problem and the Objectives
• Identifying the Responses
• Identifying the Factors and Factor Levels
• Identifying Restrictions and Constraints
• Preparing to Conduct the Experiment
• Analysis
About the Instructor
Kevin Potcner is an Academic Ambassador with JMP (a division of SAS), working with professors and researchers who use JMP. Kevin also teaches predictive analytics and data mining in the MBA program at the University of San Francisco and serves on the Data Science Advisory Board at California State University, Fullerton. He has an MS in Statistics from the University of Florida.
Ruth Hummel is also an Academic Ambassador with JMP, supporting the technical needs of professors and instructors who use JMP for teaching and research. Ruth is an author of Business Statistics and Analytics in Practice, 9th edition, and has been teaching and consulting about statistics and analytics for over a decade, at the University of Florida, at the US Environmental Protection Agency, and now at SAS/JMP. She has a PhD in Statistics from The Pennsylvania State University.
Relevance to Conference Goals
This session directly addresses “Theme 2: Study Design and Data Management” and “Theme 3: Implementation and Analysis” by covering the topics of how to design a study, how to compare potential study designs, how to investigate power and sample size concerns, and how to match the analysis of the data to the original experimental design.
Fri, Feb 19
9:00 AM - 11:00 AM
Virtual
PCD3 - Causal Inference Using Stata: Estimating Treatment Effects with Observational Data
Practical Computing Demo
Instructor(s): Chuck Huber, StataCorp LLC
Observational data often come with challenges that the data analyst needs to address. Treatment status or the exposure of interest may not be assigned randomly. Data are sometimes missing not at random (MNAR), which can lead to sample-selection bias. And statistical models for these data often need to account for unobserved confounding.
Join Chuck Huber, Director of Statistical Outreach, as he shows you how you can use standard maximum-likelihood estimation to fit extended regression models (ERMs) that deal with all of these common issues. He will work examples that demonstrate how to account for these observational data problems when they arise individually and when they occur simultaneously.
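The demonstration itself uses Stata's teffects and ERM commands. As a rough, language-agnostic sketch of one of the ideas on the outline below, inverse-probability weighting, the following R snippet estimates an average treatment effect on simulated data with a single observed confounder (everything here is invented for illustration and is not the session's code).

## Illustrative sketch only -- not the Stata commands used in the session.
set.seed(1)
n  <- 2000
x  <- rnorm(n)                                  # observed confounder
tr <- rbinom(n, 1, plogis(0.8 * x))             # treatment assignment depends on x (not random)
y  <- 1 + 2 * tr + 1.5 * x + rnorm(n)           # outcome; true average treatment effect is 2

ps <- fitted(glm(tr ~ x, family = binomial))    # estimated propensity scores
w  <- ifelse(tr == 1, 1 / ps, 1 / (1 - ps))     # inverse-probability weights

ate <- weighted.mean(y[tr == 1], w[tr == 1]) -
       weighted.mean(y[tr == 0], w[tr == 0])
ate                                             # should be close to 2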
Outline & Objectives
1. Overview of the potential-outcomes framework for causal inference
o Stable unit treatment value assumption
o Potential-outcome means
o Average treatment effects
o Average treatment effects on the treated
2. Estimating treatment effects
o Using the regress and margins commands
o Using the teffects commands
- Regression adjustment
- Inverse probability weighting
- Propensity score matching
- Covariate distance matching
3. Estimating treatment effects with complications
o Estimating treatment effects while accounting for unobserved confounders
o Estimating treatment effects with sample selection (data missing not at random)
About the Instructor
Joerg Luedicke is a Senior Social Scientist and Statistician at StataCorp LLC. Joerg was the lead developer of Stata's latest discrete choice model commands and helped develop Stata's teffects suite of commands. Prior to joining StataCorp, Joerg earned a PhD in Sociology from Bielefeld University, Germany.
Relevance to Conference Goals
This practical computing demonstration is primarily relevant to the "Implementation and Analysis" theme of the conference. We will present state-of-the-art statistical methods for estimating treatment effects with observational data. It is also relevant to the "Study Design and Data Management" theme because researchers need a good understanding of causal inference methods and treatment-effect estimation at the design stage of a study. Because the demonstration walks through a number of hands-on examples, including discussion of how to interpret and communicate the results, it also touches on the "Effective Communication" theme of the conference.
Fri, Feb 19
9:00 AM - 11:00 AM
Virtual
PCD4 - WesDaX®: An Online Analysis and Reporting Platform
Practical Computing Demo
Instructor(s): Tom Krenzke, Westat; Naomi Yount, Westat.
WesDaX® is an online analysis and reporting platform created by Westat (www.wesdax.com) that runs in any standard web browser and requires no coding or prior experience with analysis software. WesDaX supplements project reporting for research projects and allows staff, clients, collaborators, and stakeholders to run analyses on microdata. The demonstration will begin with some background on WesDaX: what it can do and what makes it unique. A tour through a public data suite will follow, demonstrating analyses of American Community Survey and Behavioral Risk Factor Surveillance System data. The main objective of the presentation is to raise awareness of the tool, which can benefit the audience and touches on several conference themes, such as educating others about data from surveys, evidence-guided statistical practices, and reproducible evidence. WesDaX analysis results are powered by WesVar (the analytic engine that computes the estimates) and are generated appropriately from complex sample data, with statistical testing. There is an option for advanced confidentiality protection, which guards against table-differencing attacks.
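WesDaX and WesVar are proprietary and require no code. Purely to illustrate the kind of design-based estimation they perform, a comparable calculation using R's survey package (an assumption made for this sketch, not part of the demonstration) might look like this on toy data.

## Illustrative sketch only -- WesDaX/WesVar users never see or write code like this.
library(survey)

dat <- data.frame(                     # toy complex-sample data
  stratum = rep(1:2, each = 100),
  weight  = rep(c(50, 80), each = 100),
  smoker  = rbinom(200, 1, 0.2)
)

des <- svydesign(ids = ~1, strata = ~stratum, weights = ~weight, data = dat)
svymean(~smoker, des)     # design-based point estimate and standard error
svyciprop(~smoker, des)   # confidence interval for the proportion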
Outline & Objectives
The objective of the course is to empower users to take an in-depth first look at data from sample surveys while generating statistics that account for the complex survey design in variance estimation and statistical testing. An outline of the course is as follows:
1. Introduction to WesDaX®
a. Video
b. Key features
c. Architecture – WesDaX interface, WesVar analytic engine
2. Statistical methods
a. Point estimation
b. Variance estimation
c. Statistical testing
d. Disclosure avoidance
3. Landing page
a. White paper
b. Guide
c. Users
d. Public use suite
e. Demo – BRFSS
4. Exercises
5. Summary
a. Educating others about data from surveys
b. Evidence-guided statistical practices
c. Reproducible evidence
d. How to get started
About the Instructor
Tom Krenzke is a Vice President and Associate Director in Westat’s Statistics and Evaluation Sciences Unit and has about 30 years of experience in survey sampling and estimation techniques. Mr. Krenzke adds new statistical capabilities by developing software for statistical disclosure control, nonresponse bias analysis, area sampling, and imputation. Mr. Krenzke is a Fellow of the American Statistical Association (ASA) and leads Westat’s Steering Committee on WesDaX (Westat’s real-time online table generator).
Naomi Yount, Ph.D., is an industrial/organizational psychologist and Westat Senior Study Director with more than 15 years of experience in organizational research. She has expertise in a variety of research methodologies, from qualitative interviewing to quantitative analyses of survey and other organizational data. At Westat, Dr. Yount conducts analyses such as psychometric analyses for new or revised surveys and key driver analyses predicting outcomes such as turnover or employee engagement.
Relevance to Conference Goals
WesDaX incorporates best practices in generating tabular statistics and conducting statistical tests from complex survey data without any programming. As part of a data management toolkit, WesDaX provides an efficient way to disseminate aggregated data to the public. The tool can help statisticians and project managers use data to communicate with and aid customers and organizations, and to have a positive impact on their own organizations. Furthermore, the course will demonstrate benefits related to data preparation for tabulations and will provide illustrative data analysis examples covering a variety of data types from varied applied settings, supporting evidence-guided statistical practice.
Fri, Feb 19
9:00 AM - 11:00 AM
Virtual
T1 - Regression-Style Modeling with Variable Selection and Reduction
Tutorial
Instructor(s): Clay Barker, SAS / JMP; Ruth M Hummel, SAS Institute / JMP Division
Variable Selection is a crucial step in the model building process, whether we are building a predictive model or trying to understand the results of a designed experiment. Generalized Regression modeling provides a single framework for doing interactive variable selection and fitting generalized linear models. This workshop will start with a brief overview of the generalized linear model for modeling responses that are not necessarily normally distributed. We will also introduce variable selection techniques, including stepwise methods like Forward Selection and penalized regression methods like the Lasso. We close the workshop with examples featuring both observational and experimental data and a variety of response types.
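The workshop performs variable selection interactively in JMP's Generalized Regression platform. As a hedged sketch of the lasso idea for a nonnormal response, a rough R analogue using the glmnet package (an assumption for this illustration, not part of the workshop materials) might look like this.

## Illustrative sketch only -- the workshop uses JMP, not glmnet.
library(glmnet)
set.seed(7)
n <- 200; p <- 20
x <- matrix(rnorm(n * p), n, p)
y <- rpois(n, exp(0.5 + x[, 1] - 0.7 * x[, 2]))           # nonnormal (Poisson) response

cvfit <- cv.glmnet(x, y, family = "poisson", alpha = 1)   # lasso with cross-validated penalty
coef(cvfit, s = "lambda.min")                             # coefficients of the selected model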
Outline & Objectives
Outline
1. Brief Overview of Generalized Linear Models
2. Intro to Stepwise Variable Selection Methods
3. Intro to Penalized Regression Methods
4. Examples with nonnormal distributions, censoring, multicollinearity, etc.
(a) Performance objectives
By attending this presentation, participants will improve their knowledge of generalized linear models and variable selection techniques. They will also feel comfortable using these methods in software.
(b) Content and instructional methods
The presentation will alternate between the use of slides and software demonstrations. Handouts given to attendees will cover both.
About the Instructor
Dr. Clay Barker is a Senior Research Statistician Developer with JMP (a division of SAS), working on a variety of statistical platforms in JMP, including Generalized Regression, Fit Curve, and Clustering. He earned his doctorate in statistics from North Carolina State University. He holds several patents, including one for his work on implementing new visualizations for interactive model building in generalized regression.
Dr. Ruth Hummel is an Academic Ambassador with JMP (a division of SAS), supporting the technical needs of professors and instructors who use JMP for teaching and research. Dr. Hummel is a coauthor of Business Statistics and Analytics in Practice, 9th edition (2018), and has been teaching and consulting about statistics and analytics for over a decade, at the University of Florida, at the US Environmental Protection Agency, and now at SAS/JMP. She has a PhD in Statistics from The Pennsylvania State University.
Relevance to Conference Goals
Career Development - Building regression models is a crucial part of data analysis. Sharpening these skills in modern software can be helpful for statisticians in every stage of their career. Performing variable selection in an interactive environment makes it quick and easy to communicate results and assess tradeoffs of different models.
Implementation and Analysis - We will be discussing applications related to:
• Modeling
• Inferential and hypothesis testing
• Predictive analytics
• New packages or procedures
• Analytics, big data, and unstructured data analytic methods
• Machine learning
• Implementing reproducible methods
• Evidence-guided statistical practice
Fri, Feb 19
9:00 AM - 11:00 AM
Virtual
T2 - Bayesian Analytics in Practice
Tutorial
Instructor(s): Sujit Kumar Ghosh, North Carolina State University; Amy Shi, SAS Institute, Inc.
The Bayesian paradigm provides a natural and practical way to build analytical models by expressing complicated models as a sequence of simple conditional models, making it useful for data structures from simple to complex. This tutorial will begin with a brief introduction to Bayesian hierarchical models and then expand to more realistic and complex models that have recently emerged in the machine learning literature. All of these models will be illustrated through practical applications and worked-out examples without getting into the theoretical underpinnings. Participants with a basic knowledge of probability theory and the statistical inferential framework will find the tutorial useful for expanding their standard toolkit with advanced Bayesian analytical methods. The concepts and methods discussed are demonstrated using software (R and SAS) developed by the presenters, but they are applicable to any modern Bayesian software package.
Outline & Objectives
Part I - Introduction to Bayesian Hierarchical Models
1. Basic components of Priors, Likelihood and Posterior;
2. Predictive Distributions;
3. Computational Methods using Monte Carlo Simulations
Part II - Primer on R and SAS
1. JAGS through R
2. SAS through PROC MCMC and PROC BGLIMM
Part III – Hierarchical Models in Practice
1. Linear and generalized linear models;
2. Multi-level models;
3. Penalized regression models with missing data
This tutorial aims to familiarize attendees with the essential concepts and computational methods of Bayesian analytics in areas where hierarchical modeling is conducted. They will learn how to deal with practical issues that arise in Bayesian analysis, especially in multilevel modeling. Another major goal is to help attendees become comfortable using software to conduct Bayesian inference with machine learning models.
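As a minimal sketch of the "JAGS through R" workflow named in Part II, assuming the rjags package, a simple normal model with vague priors could be fit roughly as follows; the tutorial's own models and examples will differ.

## Minimal sketch, assuming rjags; not the tutorial's code.
library(rjags)

model_string <- "
model {
  for (i in 1:N) {
    y[i] ~ dnorm(mu, tau)      # likelihood
  }
  mu  ~ dnorm(0, 0.001)        # vague prior on the mean
  tau ~ dgamma(0.01, 0.01)     # vague prior on the precision
}"

y  <- rnorm(50, mean = 3, sd = 1.5)                       # toy data
jm <- jags.model(textConnection(model_string),
                 data = list(y = y, N = length(y)), n.chains = 3)
update(jm, 1000)                                          # burn-in
post <- coda.samples(jm, variable.names = c("mu", "tau"), n.iter = 5000)
summary(post)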
About the Instructor
Professor Sujit Kumar Ghosh is a Full Professor in the Department of Statistics at North Carolina State University (NCSU). He has over 25 years of experience in conducting, applying, evaluating, and documenting statistical analyses of biomedical and environmental data. Prof. Ghosh is actively involved in teaching, supervising, and mentoring graduate students at the doctoral and master's levels. He has supervised over 35 doctoral students and recently published a popular book, "Bayesian Statistical Methods", co-authored with Brian Reich, which is used as a textbook at several universities. He is an elected Fellow of the ASA and has also served as Deputy Director of SAMSI (NC).
Amy Shi is a senior research statistician developer in the Advanced Statistical Methods Department at SAS Institute Inc. Her main responsibility is developing and enhancing the Bayesian capabilities of SAS software, with a focus on generalized linear mixed models, discrete choice models, and multilevel hierarchical settings. She is the developer of the BGLIMM procedure. She has a PhD in biostatistics from the University of North Carolina at Chapel Hill.
Relevance to Conference Goals
Bayesian methods are becoming ever more popular in many applied fields. We have designed the tutorial from a practical point of view, covering a wide range of commonly encountered hierarchical analytical models applied to several different data structures encountered in practice (simple rectangular data structure to complex data structure with missing and/or censored observations). The course emphasizes the practical aspect of Bayesian computational methods using some of industry standard software. The how-to part of the course is presented using a variety of worked-out examples from different applied settings, with code explained in detail.
Fri, Feb 19
9:00 AM - 11:00 AM
Virtual
T3 - Tidyverse Tools in R for Data Science and Statistical Inference
Tutorial
Instructor(s): Chester Ismay, DataRobot; Jessica Minnier, Oregon Health & Science University
Many statisticians use R to clean, manage, and analyze data. Recently, a philosophy promoting tidy data (uniform in shape, one observation per row, one variable per column) and tidy code (readable, consistent across tasks) has risen in popularity. The “tidyverse” is a term for a collection of R packages that embrace this philosophy and are designed to work together to improve readability of code and reproducibility of workflows. The tidyverse is especially accessible to beginners as it allows students to dive into writing tidy code and quickly perform data wrangling, data reshaping, and data visualization tasks. It is also useful in professional data science and statistics as it provides a cohesive architecture for analyzing tidy data. This workshop will introduce tidyverse core and community packages for data processing and visualization (e.g. dplyr, janitor, ggplot2) and showcase new packages (moderndive, infer) designed to extend the tidyverse to a common framework for statistical inference. Using practical examples with large and complex R datasets (e.g. gapminder), we’ll show how to use tidyverse tools to create readable and accessible code for novices and experts alike.
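As a small taste of the workflow described above, using packages named in the abstract (dplyr, ggplot2) and the gapminder data, a tidy summarize-and-plot pipeline might look roughly like this (the particular summary is invented for illustration).

## Illustrative sketch of a tidy pipeline; not the workshop's exact materials.
library(dplyr)
library(ggplot2)
library(gapminder)

gapminder %>%
  filter(year == 2007) %>%
  group_by(continent) %>%
  summarize(mean_life_exp = mean(lifeExp), .groups = "drop") %>%
  ggplot(aes(x = reorder(continent, mean_life_exp), y = mean_life_exp)) +
  geom_col() +
  labs(x = "Continent", y = "Mean life expectancy, 2007")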
Outline & Objectives
Participants will learn how to tackle the data analysis workflow from start (wrangling, tidying) to finish (visualization, analysis, inference) using tidyverse tools in R. Participants will be able to incorporate any of these tools into their own coding practices as each component can stand alone and also work seamlessly together as a whole. Topics include:
Data Wrangling using the dplyr package
Data Visualization using the ggplot2 package
Data Tidying using the tidyr and janitor packages
Resampling using the moderndive package with dplyr
Statistical inference using the infer package
Intended level: R novices to intermediate R users. Advanced R users may also be interested in the simplified, simulation-based approach to statistical inference.
Prerequisites: Some experience with R is helpful. This workshop is interactive, so participants are encouraged to bring a laptop with the latest versions of R and RStudio installed. This workshop will only use free and open-source software.
About the Instructor
Chester leads in-person workshops on data science, machine learning, and data engineering as a Data Science Evangelist for DataRobot University at DataRobot. He obtained a PhD in Statistics from Arizona State University and has taught courses and led workshops in mathematics, computer science, statistics, data science, and sociology. He is a co-author of the fivethirtyeight, infer, and moderndive R packages and the author of the thesisdown R package. He is also a co-author of Statistical Inference via Data Science: A ModernDive into R and the Tidyverse, an open-source textbook for introductory statistics and data science using R.
Jessica Minnier is an Associate Professor of Biostatistics at OHSU who collaborates with researchers and clinicians in the medical and healthcare fields. She uses R for data cleaning, analysis, and visualization. She has experience teaching statistical methods and programming to graduate students in statistics, public health, biology and medicine, as well as to her statistician peers. She has helped develop R and statistics educational materials, including an interactive R Bootcamp, various recorded R workshops, and interactive R Shiny tutorials.
Relevance to Conference Goals
The tidyverse is designed to provide code that is easily readable, which greatly improves collaboration and communication. These methods can be implemented across many disciplines and can improve programming confidence for beginners. In addition, at the heart of many applied statistician roles is performing hypothesis tests and confidence intervals. The presented moderndive and infer packages extend the tidyverse philosophy to also allow statistical inference to follow a friendly, explicit syntax. This workshop will teach participants a new way to think about these inferential concepts using R packages designed to make their analyses simpler, more reproducible, and more easily modifiable for use in multiple projects. The concepts of bootstrapping and permutation tests, as shown computationally in the infer and moderndive packages, allow for professional development and growth in the field of statistical inference. Connections can also be made to traditional tests like the t-test and ANOVA through use of the infer package and other tidyverse packages. Participants can extend this approach to their own analyses and showcase it in their organization and in their own work.
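As a hedged illustration of the permutation-test workflow just described, the infer pipeline for a difference in mean life expectancy between two continents (a comparison invented for this sketch, not necessarily one of the workshop's examples) might be written roughly as follows.

## Illustrative sketch of the infer workflow on gapminder data.
library(dplyr)
library(infer)
library(gapminder)

two_groups <- gapminder %>%
  filter(year == 2007, continent %in% c("Europe", "Africa")) %>%
  droplevels()

obs_diff <- two_groups %>%
  specify(lifeExp ~ continent) %>%
  calculate(stat = "diff in means", order = c("Europe", "Africa"))

null_dist <- two_groups %>%
  specify(lifeExp ~ continent) %>%
  hypothesize(null = "independence") %>%
  generate(reps = 1000, type = "permute") %>%
  calculate(stat = "diff in means", order = c("Europe", "Africa"))

get_p_value(null_dist, obs_stat = obs_diff, direction = "two-sided")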
Fri, Feb 19
9:00 AM - 11:00 AM
Virtual
T4 - Introduction to BlueSky Statistics
Tutorial
Instructor(s): Robert Anthony Muenchen, University of Tennessee
BlueSky Statistics is an open-source graphical user interface to the R language. It uses menus and dialog boxes to manage data, create plots, and analyze data. By default, the R code that it writes is hidden from the user, but can also be displayed and modified before execution.
For non-programmers, BlueSky helps get the work done. For students of R, it helps them learn R code. For advanced users, it offers a way to convert code into dialogs that allow more effective interaction between programmers and non-programmers within an organization.
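BlueSky writes and (by default) hides its own R code behind each dialog. The snippet below is only a guess at the flavor of plain R involved in two of the tasks outlined below, factor-level management with forcats and a faceted histogram with ggplot2; it is not BlueSky's actual generated output.

## Illustrative sketch only -- not code generated by BlueSky Statistics.
library(forcats)
library(ggplot2)

data(mtcars)
mtcars$cyl_f <- fct_recode(factor(mtcars$cyl),          # factor-level management
                           "Four" = "4", "Six" = "6", "Eight" = "8")

ggplot(mtcars, aes(x = mpg)) +
  geom_histogram(bins = 10) +
  facet_wrap(~cyl_f)                                    # Grammar-of-Graphics faceting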
Outline & Objectives
1) Importing Data from Excel, SAS, SPSS, Stata, Web Survey tools, etc.
2) Managing Data
a) Computing new variables
b) Conditional transformations
c) Factor level management (uses forcats package)
d) Missing value imputation (uses simputation package)
e) Ranking
f) Recoding
g) Reshaping data (uses tidyr package)
h) Subsetting data
i) Splitting – test/train & group by processing
3) Graphics (uses ggplot2 package)
a) Grammar of Graphics concepts (faceting, etc)
b) Bar charts
c) Density
d) Frequency
e) Histograms
f) Line charts
g) Q-Q plots
h) Scatterplots
4) Basic Analysis
a) Summaries & frequencies
b) Crosstabulation
c) Group comparisons (parametric & non-parametric)
d) Tables – easily generate complex tables of statistics and statistical tests (uses the arsenal package)
5) Model Building
a) Linear regression
b) Logistic regression
c) ANOVA
6) Model Tuning
a) Overview of machine learning methods available
b) Overview of cross-validation methods available
7) Model Statistics – scoring, ROC curves, various metrics & diagnostics
8) Setting options – APA journal style, significant digits, etc.
9) Reproducibility & R Markdown
10) Syntax editor
About the Instructor
Robert A. Muenchen is the author of R for SAS and SPSS Users, and co-author of R for Stata Users and An Introduction to Biomedical Data Science. He is also the creator of r4stats.com, a popular web site devoted to analyzing trends in data science software, reviewing such software, and helping people learn the R language.
Bob is an ASA Accredited Professional Statistician™ who focuses on helping organizations migrate from SAS, SPSS, and Stata to the R Language. He has taught workshops on data science topics for more than 500 organizations and has presented workshops in partnership with the American Statistical Association, RStudio, DataCamp.com, and Revolution Analytics. Bob has written or co-authored over 70 articles published in scientific journals and conference proceedings and has provided guidance on more than 1,000 graduate theses and dissertations at the University of Tennessee.
Relevance to Conference Goals
Many organizations have statisticians and data scientists who are expert programmers. They also have staff members who need to analyze data, but who lack the time or inclination to become good programmers. Software like BlueSky Statistics can help these two types of analysts work together more effectively by using the same R software for their work. Programmers can extend BlueSky’s capabilities with menus and dialogs that control their R code. People wishing to learn R can do so by studying the code that BlueSky writes for them.
Fri, Feb 19
11:00 AM - 1:00 PM
Virtual
CS13 - Facing Organizational and Ethical Considerations Resulting from COVID-19
Concurrent Session
Chair(s): Yimei Li, University of Pennsylvania
Fri, Feb 19
11:00 AM - 12:30 PM
Virtual
CS14 - Leveraging More Information
Concurrent Session
Chair(s): Jessica Thomson, USDA
Fri, Feb 19
11:00 AM - 12:30 PM
Virtual
CS15 - Bayesian Applications
Concurrent Session
Chair(s): Clark Kogan, Washington State University
Fri, Feb 19
11:00 AM - 12:30 PM
Virtual
CS16 - Building Communication Skills at All Levels
Concurrent Session
Chair(s): Coleman Reed Harris, Vanderbilt University
Fri, Feb 19
12:30 PM - 1:30 PM
Virtual
PS3 - ePoster Session 3
Poster Session
Fri, Feb 19
1:30 PM - 3:00 PM
Virtual
CS17 - Complex Data and Designs
Concurrent Session
Chair(s): Eric B. Stephens, Nashville General Hospital
Fri, Feb 19
1:30 PM - 3:00 PM
Virtual
CS18 - Controlling Text and Texting Controls
Concurrent Session
Chair(s): Mary J Kwasny, Northwestern University
Fri, Feb 19
1:30 PM - 3:00 PM
Virtual
CS19 - Modeling Topics
Concurrent Session
Chair(s): Jennifer H Van Mullekom, Virginia Tech
Fri, Feb 19
1:30 PM - 3:00 PM
Virtual
CS20 - Engaging in Difficult Conversations with Nonstatisticians (Panel)
Concurrent Session
Chair(s): Thomas G Stewart, Vanderbilt University School of Medicine
Fri, Feb 19
3:00 PM - 4:30 PM
Virtual
GS2 - Closing Session
General Session
Chair(s): David J. Corliss, Peace-Work
The Closing Session is an opportunity for you to interact with the CSP Steering Committee in an open discussion about how the conference went and how it could be improved in future years. CSP Steering Committee vice chair David Corliss will lead a panel of committee members as they summarize their conference experience. The audience will then be invited to ask questions and provide feedback. The committee highly values suggestions for improvement gathered during this time. The best student poster award will also be presented during the Closing Session, and each attendee will have a chance to win a door prize.