This session will bring together FDA and industry clinicians and statisticians who have worked on the Getting the Questions Right (GTQR) series, which applies ICH E9(R1) to the design of clinical trials with clinically meaningful endpoints acceptable to regulatory authorities. Most organizations rely on their statisticians to use the “best approaches”. A randomized, double-blind, placebo-controlled study is accepted as the gold standard. Randomization, however, does not protect against bias due to events that occur after randomization, e.g., death, discontinuation of treatment, treatment switching, taking rescue medication, or missing data for various reasons. At present, these post-randomization events are dealt with implicitly, based on choices made about the data collection and statistical analysis.
Analysis specifications related to key endpoints that were not pre-specified in the protocol are viewed with suspicion by regulators. In order to have greater transparency and clarity from regulators, the draft ICH E9(R1) addendum, “Estimands and Sensitivity Analysis in Clinical Trials”, to the guideline on Statistical Principles for Clinical Trials was issued in June 2017.
An estimand includes (a) the patient population targeted by the scientific question; (b) the variable (or endpoint), to be obtained for each patient, that is required to address the scientific question; (c) the specification of how intercurrent events are accounted for to reflect the scientific question of interest; and (d) the population-level summary (e.g., means) for the variable, which provides, as required, a basis for comparison between treatment conditions.
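As a hedged illustration of item (d), not taken from the addendum itself, a population-level summary for a continuous endpoint under a hypothetical strategy for a single intercurrent event might be formalized as a difference in means,
\[
\theta \;=\; E\!\left[\,Y^{h}(1)\,\right] \;-\; E\!\left[\,Y^{h}(0)\,\right],
\]
where \(Y^{h}(z)\) denotes the endpoint that would be observed under treatment \(z \in \{0,1\}\) in the hypothetical scenario in which the intercurrent event (e.g., use of rescue medication) does not occur, and the expectation is taken over the targeted patient population.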
The GTQR series, conducted as six webinars in 2018, brought together multiple stakeholders to advance the understanding and implementation of estimands. This session will share key learnings that attendees can apply to improve the design and analysis of clinical trials.
In its recent statement on the new strategic framework for real-world evidence (https://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm627760.htm), FDA highlighted the Sentinel system as a good example of using RWE for regulatory decision making. In this session, we will examine the structure, technology, and statistical methodologies that have enabled Sentinel's success. We will also examine safety evaluation as a continuum from RCTs to RWE and discuss how the FDA RWE framework can be implemented in holistic safety evaluation. Speakers will include representatives from the US FDA and members and advisors of the ASA BIOP Section safety working group.
Safety data from various sources present many challenges with regard to curation, analysis, interpretation, and reporting. Safety outcomes have high variability in measurement and are multidimensional and interrelated in nature. Traditionally, safety data in drug research have come primarily from clinical trials using structured data sources. Newer sources of safety data within and outside clinical trials, including spontaneous reporting systems, real-world data, and social media sources, have heightened the need to identify new approaches to analyze and present these data in an insightful way. Additionally, the availability of high-speed computing and the influx of digital sensors in the premarketing and postmarketing arenas have led to further sources of safety data and readily available tools and statistical techniques. Visual analytics blends data visualization with statistical and data mining techniques to create visualization modalities that help researchers make sense of safety data, with emphasis on how computation and visualization complement each other for effective analysis. This session will discuss some of the independent and collaborative efforts and the open source and online tools that can be leveraged for visual analytics of safety data from clinical trials, spontaneous reporting systems, social media sources, and electronic medical and healthcare records.
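As a small, non-authoritative illustration of the kind of display this session concerns, the sketch below draws a simple adverse-event dot plot of risk differences with approximate confidence intervals; the counts and adverse-event terms are synthetic placeholders, not from any real trial or tool mentioned above.

```python
# A minimal sketch of a safety "dot plot": AE risk differences with Wald CIs.
# All data below are synthetic placeholders for illustration only.
import numpy as np
import matplotlib.pyplot as plt

aes = ["Nausea", "Headache", "Rash", "Dizziness"]       # hypothetical AE terms
x_trt, n_trt = np.array([30, 22, 12, 8]), 200            # events / subjects, treatment
x_ctl, n_ctl = np.array([18, 20, 5, 9]), 200             # events / subjects, control

p_t, p_c = x_trt / n_trt, x_ctl / n_ctl
rd = p_t - p_c                                            # risk difference per AE
se = np.sqrt(p_t * (1 - p_t) / n_trt + p_c * (1 - p_c) / n_ctl)
lo, hi = rd - 1.96 * se, rd + 1.96 * se

y = np.arange(len(aes))
plt.errorbar(rd, y, xerr=[rd - lo, hi - rd], fmt="o", capsize=3)
plt.axvline(0, linestyle="--")
plt.yticks(y, aes)
plt.xlabel("Risk difference (treatment - control)")
plt.title("Adverse-event risk differences with 95% CIs (synthetic data)")
plt.tight_layout()
plt.show()
```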
Traditional survival analysis, featuring a time-to-event endpoint analyzed with the log-rank test, has been the gold standard in the evaluation of long-term safety or clinical outcome endpoints. However, it has been recognized that this convention has a few drawbacks. First, when multiple events are of interest, time to the first event is usually used so that the traditional survival analysis can be retained with little to no modification. All events are then treated as equally important, and the relationships among the events are not taken into account in the analysis. This may mask the actual effect or risk by ignoring the frequency and number of a subject's later events. Moreover, events are in fact viewed with different importance by both patients and clinical practitioners; different weights should be assigned to the events of interest and incorporated into the analysis, while all events of interest should be considered. In addition, the log-rank test, as a non-parametric method, may not be the most powerful tool for comparisons between treatment groups. Other methodologies, in which censoring can contribute to the information used in the tests, have been investigated for use in time-to-event evaluations. This session will take another look at the objective of these long-term safety or clinical outcome trials and propose alternative endpoints and associated methodologies that use the data collected in a clinical trial more efficiently. Case examples will be presented for each proposal, and advantages, disadvantages, past experiences, and future guidance will be illustrated by both regulatory and industry presenters.
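To make the idea of prioritizing events concrete, the toy sketch below computes a simplified win ratio by pairwise comparison on a hierarchy (survival time first, then number of hospitalizations). It is not one of the session's proposed methods; it uses synthetic, fully observed data and ignores censoring, which any real analysis would need to handle.

```python
# Simplified win-ratio sketch: compare every treated/control pair on a
# prioritized hierarchy (longer survival wins; if tied, fewer hospitalizations wins).
# Synthetic, fully observed data; censoring is ignored for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100
# (survival time in whole months, number of hospitalizations) per subject
trt = list(zip(np.round(rng.exponential(30, n)), rng.poisson(1.0, n)))
ctl = list(zip(np.round(rng.exponential(24, n)), rng.poisson(1.5, n)))

wins = losses = 0
for t_surv, t_hosp in trt:
    for c_surv, c_hosp in ctl:
        if t_surv != c_surv:                 # first priority: longer survival wins
            wins += t_surv > c_surv
            losses += t_surv < c_surv
        elif t_hosp != c_hosp:               # tie on survival: fewer hospitalizations wins
            wins += t_hosp < c_hosp
            losses += t_hosp > c_hosp
        # pairs tied on both levels count toward neither total

win_ratio = wins / losses
print(f"wins={wins}, losses={losses}, win ratio={win_ratio:.2f}")
```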
Although randomized controlled trials represent the gold standard for the approval of a medical product, they have logistical and analytical limitations, in addition to ethical concerns (for example, in rare diseases), and may not reflect the actual use of medical products in a real-world setting. Given that clinical trials have experienced increased costs, timelines, and design complexity, modernization of the product development process is essential. Exploring and understanding the roles of historical controls and real-world data is one such area. Historical control data have long been used in product development; however, temporal effects and the reliance on publications to extract historical controls have made their use controversial. Meanwhile, since the 21st Century Cures Act was signed into law, FDA has been launching initiatives to fulfill the legislation. One provision (Section 3023) requires FDA to evaluate the potential use of real-world evidence to help support the approval of a new indication for a previously approved product and to satisfy post-approval study requirements. FDA regulates therapeutic treatments including biologics, devices, and drugs. Because of their different modalities, the considerations for clinical data collection differ across drugs, biologics, and medical devices. For example, drugs have been approved based on data from registry-like case series and on registry data used as external controls, while registry data have been used to update the long-term efficacy of vaccine products. Recognizing that work is ongoing, it is valuable to understand FDA's experiences and approaches in the use of historical controls and real-world data in regulatory decision making and medical product approval. In this session, we invite the following speakers from FDA Centers (CBER, CDER, and CDRH) to present their current experiences and approaches:
• Elizabeth Teeple, PhD, Center for Biologics Evaluation and Research (CBER)
• Pallavi Mishra-Kalyani, PhD, Center for Drug Evaluation and Research (CDER)
• Yun-Ling Xu, PhD, Center for Devices and Radiological Health (CDRH)
Although biomedical technology has advanced, chronic diseases such as Alzheimer’s disease, autoimmune diseases, and diabetes remain difficult to treat. While researchers continue to search for effective treatments for these diseases, the long duration and slow progression of chronic diseases raise many challenges for the design and analysis of chronic disease clinical trials. For example, chronic disease trials require a long treatment period, leading to a high percentage of missing data and concerns about the validity of statistical analysis models; on the other hand, if the efficacy of a drug for a chronic disease is studied only in relatively short-term clinical trials, could we, and how would we, extend the short-term findings to the long term? What are clinically meaningful efficacy endpoints, and what are appropriate statistical methods to analyze them?
In this session, we will discuss the statistical issues in chronic disease clinical trials. Our first speaker is Professor Michael Donohue from the University of Southern California. He is also a member of the American Statistical Association Alzheimer’s Disease Scientific Working Group. He will discuss estimands and estimation methods for Alzheimer’s disease clinical trials. Our second speaker is Dr. Sue-Jane Wang from CDER/FDA. She will talk about statistical approaches to biomarker endpoints in the context of chronic disease clinical trials with examples. Dr. James Hung from CDER/FDA will be the discussant for this session.
The discovery and development of medical products generate large amounts of data, from genomic and patient-level data to large health care networks. How to better extract knowledge from data and exploit that knowledge is a question equally challenging for industry, academia, and regulatory agencies. The rapid development of sophisticated data visualization and analytical techniques provides powerful tools for data scientists and statisticians to examine data in ways beyond traditional approaches. With advances in computing and software, multiple tools and languages are available for data exploration and visualization, including SAS JMP and R Shiny. These tools make it possible to explore large amounts of data, create meaningful and dynamic visuals, identify trends and gain deep insights, and make inferences and predictions with modern computing techniques. In this session, we will bring together three experts to elaborate on new tools for tackling modern-day data analysis problems in the biopharmaceutical and medical device fields. The speakers will share their visions on new advances and demonstrate how these tools are integrated into the operation of drug development and are making a great impact on the way we see and understand data. Douglas Robinson from Novartis will discuss how to use dynamic displays to maximize the value of data; Zhiheng Xu from FDA will discuss data mining and visualization in the regulation of medical devices; and Zachary Skrivanek from Eli Lilly will share recent work from Lilly’s Advanced Analytics Hub.
Pediatric drug development faces substantial clinical, technical, logistical, and ethical challenges. Since the establishment of BPCA (Best Pharmaceuticals for Children Act) and PREA (Pediatric Research Equity Act), both the pharmaceutical industry and FDA have put considerable effort into improving drug development for pediatric patients. Because pediatric drug development still largely lags behind, FDA and European regulators are adopting a more proactive approach and requiring sponsors to discuss pediatric requirements earlier in the drug development process. Advanced planning and collaboration among regulatory, formulation, toxicity, PK/PD, statistical, and clinical teams are critical for the success of pediatric drug development. Many key questions define the strategic plan, such as the timing of pediatric enrollment, dose formulation and dose-regimen finding, and how to move from older children to younger children. Addressing these key questions requires rigorous planning and innovative thinking in all aspects of clinical development. Statisticians play a critical role in the entire process, often driving decisions on study design, dose selection, extrapolation plans, and sample size justification. To improve these processes and save time and effort, it is important for statisticians and clinicians in both regulatory agencies and industry to come together to exchange knowledge and share experiences. In this session, speakers from FDA and the pharmaceutical industry will share their insights on how to address some of the key questions for the strategic planning and execution of pediatric clinical trials.
The draft ICH E9(R1) addendum on “Estimands and Sensitivity Analysis in Clinical Trials” describes a structured framework that includes the specification of an estimand (i.e., the treatment effect to be estimated) in the presence of intercurrent events, a main method of estimation (estimator), and sensitivity estimators to explore the robustness of inferences from the main estimator to deviations from its underlying assumptions. This session will explore various advances in statistical methodology motivated by the estimand framework and the five potential strategies for accounting for intercurrent events (treatment policy, composite, hypothetical, principal stratification, and while on treatment). These include methods for estimating the treatment difference for subjects in the trial who can adhere to one or both treatments, based on the potential outcomes framework, as well as an alternative framework based on true outcomes. In addition, since the addendum places the missing data problem within the broader context of intercurrent events, sensitivity analyses should also be considered in this broader context, rather than merely as sensitivity to missing data assumptions. This session will also share methods for addressing sensitivity to the assumptions made within each of the five general strategies for accounting for intercurrent events.
As immunotherapy becomes a key therapeutic pillar in oncology, it raises unique issues worth special consideration in clinical trial design, such as biomarkers, combination strategies, and endpoints. For example: 1) Biomarkers and combination strategies: how should confirmatory trials be designed in a manner consistent with modern tumor biology for monotherapy and combinations? 2) Endpoints: how should trials be designed to account for the special characteristics of pseudo-progression and delayed clinical effect? And how should trials be designed when there may not be a good early endpoint to predict clinical benefit, e.g., in maintenance trials?
In this session, we will specifically discuss recent innovations in confirmatory clinical trial designs to increase operational efficiency and optimize patient outcomes in the context of development of cancer immunotherapies.
Classic phase II trials are designed to evaluate a single treatment in patients with a particular cancer type, and Simon's two-stage design has been a popular choice. However, the emphasis of oncology drug development has shifted toward the use of tumor genomics to guide molecularly targeted drugs, producing a new class of trials in which a therapy is tested simultaneously across subgroups of different tumor types (or baskets) in patients who share the same mutation. In this design, we need to answer “Does the drug work in any subgroup?” and “Does the response differ between subgroups?” The drug may show similar activity across all subgroups, or it may be active in some subgroups and show no activity in others. An advantage of basket designs is the ability to share, or borrow, information across the subgroups. Statistical models for phase II basket trials need to consider the heterogeneity between subgroups, the sample size, and control of Type I and II errors. One design is an aggregation design [Cunanan et al., 2017]: in Stage 1, the response is determined for each subgroup and the subgroups are assessed for homogeneity; in Stage 2, either accrual continues to the aggregate group with a single test for efficacy, or accrual continues to selected baskets with a separate efficacy test for each basket. Bayesian models exploit the similarity and borrow information between baskets (subgroups) [Simon et al.; Neuenschwander et al.]. One model, EXNEX (exchangeability, EX, and non-exchangeability, NEX), allows borrowing between subgroups while avoiding overly optimistic borrowing for subgroups with extreme outcomes. In this session, we will review these methods, address the necessary sample size and control of Type I and II errors, and discuss how these methods are consistent with the FDA guidance for master protocols [September 2018].
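For context on the classic single-arm benchmark mentioned above, the sketch below computes the operating characteristics (type I error, power, and probability of early termination) of a Simon two-stage design directly from binomial probabilities. The design parameters shown are illustrative choices for this sketch, not a recommended design.

```python
# Operating characteristics of a Simon two-stage design.
# Stage 1: enroll n1; stop for futility if responses <= r1.
# Stage 2: enroll n2 more; declare promising if total responses > r.
from scipy.stats import binom

def simon_oc(p, r1, n1, r, n2):
    """P(declare promising) and P(early termination) at true response rate p."""
    pet = binom.cdf(r1, n1, p)                       # early-termination probability
    reject = sum(
        binom.pmf(x1, n1, p) * (1 - binom.cdf(r - x1, n2, p))
        for x1 in range(r1 + 1, n1 + 1)
    )
    return reject, pet

# Illustrative design: r1/n1 = 3/13, r/n = 12/43 (hypothetical numbers)
p0, p1 = 0.20, 0.40                                  # null and alternative response rates
alpha, pet0 = simon_oc(p0, r1=3, n1=13, r=12, n2=30)
power, _ = simon_oc(p1, r1=3, n1=13, r=12, n2=30)
print(f"type I error = {alpha:.3f}, power = {power:.3f}, PET(p0) = {pet0:.3f}")
```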
This session proposal is submitted on behalf of the ASA BIOP RWE Scientific Working Group.
Data from “real world” clinical practice and medical product utilization – outside of clinical trials – are regarded as an increasingly pragmatic source of evidence generation that holds high potential to increase efficiency and improve clinical development and life cycle management of medical products.
Robust RWE will not only leverage increasing amounts of real-world data but also weave together different data sources, such as clinical trial data, registries, and electronic health records, to bridge the gap between efficacy and effectiveness. However, many challenges remain. We will propose a general framework for the use of RWE whose key elements include, but are not limited to:
• defining the scientific question in a regulatory-specific context,
• identifying potential real-world data (RWD) sources,
• evaluating RWD quality and standards in a fit-for-purpose manner,
• determining the clinical study design according to the available RWD,
• specifying appropriate analytic approaches with consideration of advanced analytics, and
• analyzing and interpreting the study outcomes while being aware of the study design’s strengths and limitations.
The regulatory, scientific, and ethical issues in using RWE have yet to be fully understood and addressed. These issues present ample interesting research topics for biostatisticians, such as how to ascertain outcome measures, how to mitigate biases and potential confounding in both study design and analysis, how to understand and define appropriate estimands, and how to obtain a clear understanding of how the different data sources and their quality affect study design, analysis, and interpretation.
In this session, invited speakers will discuss and provide insight on the above challenges and opportunities. Potential speakers and discussants include experts from regulatory agencies, academia, and industry.
FDA CDRH, under its digital health program, defines artificial intelligence as a device or product that can imitate intelligent behavior or mimic human learning and reasoning. Artificial intelligence techniques, such as machine learning, neural networks, and deep learning, have been used in medical device areas such as medical imaging and biomarker development. Artificial intelligence serves as a critical component in many innovative medical devices, yet it remains opaque in some respects. To gain a better understanding of this emerging technology and its benefits, challenges, and risks to medical decision making, this session intends to share thoughts and generate discussion among professionals and stakeholders from academia, industry, and FDA.
Therapeutic protein products can elicit drug-specific immune responses in people receiving the protein therapeutic. Because anti-drug antibodies (ADA) may negatively affect the safety and efficacy of protein therapeutics, detection and management of ADAs are essential components of drug development. A key component of ADA assay development is determination of the cut point, by which samples are deemed either positive or negative. Although in 2019 FDA published draft guidance on the development and validation of ADA assays, in which a statistical procedure for cut point analysis is described, procedures for determining cut point values are far from settled and remain the subject of vigorous debate. In this session, three ADA assay experts, representing industry and regulatory perspectives, will together provide fresh insight on the latest advances in ADA assay cut point determination.
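As a rough sketch of one commonly described screening cut point approach (a bound on log-transformed drug-naive signals targeting roughly a 5% false-positive rate), the code below is illustrative only; it does not reproduce the specific procedure in the guidance, and real cut point analyses also address outlier removal, assay run effects, and distributional checks.

```python
# Minimal sketch of a screening cut point for an ADA assay.
# Assumes log-transformed signal values from drug-naive samples;
# targets ~5% false positives via a parametric or nonparametric bound.
import numpy as np

rng = np.random.default_rng(1)
log_signal = rng.normal(loc=0.0, scale=0.15, size=60)   # synthetic drug-naive data

# Parametric: mean + 1.645 * SD (upper 95th percentile under normality)
param_cp = log_signal.mean() + 1.645 * log_signal.std(ddof=1)

# Nonparametric: empirical 95th percentile
nonparam_cp = np.percentile(log_signal, 95)

print(f"parametric cut point (log scale): {param_cp:.3f}")
print(f"nonparametric cut point (log scale): {nonparam_cp:.3f}")
print(f"back-transformed parametric cut point: {np.exp(param_cp):.3f}")
```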
Utilizing observational studies to generate real-world evidence and support regulatory decision-making is of increasing interest. Due to their non-randomized nature, the absence of confounding cannot be assumed in observational studies when the association between a given exposure and a given outcome is investigated. Failure to account for confounding in the study design and statistical analysis can lead to biased results and ultimately to incorrect inference. Meanwhile, observational studies and statistical models rely on assumptions, which range from how a variable is defined or summarized to how a statistical model is chosen and parameterized. The validity of all inferences from any analysis depends on the extent to which these assumptions are met. In this session, speakers will discuss methods to assess the effect of confounders and minimize the resulting biases on study results. Moreover, sensitivity analyses that alter underlying assumptions to evaluate the robustness of results will also be discussed.
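As a hedged sketch of one standard way to adjust for measured confounding (not necessarily the methods the speakers will present), the code below estimates propensity scores by logistic regression and compares an unadjusted with an inverse-probability-weighted treatment effect on simulated data; the variable names and data-generating model are invented for illustration.

```python
# Inverse probability of treatment weighting (IPTW) sketch on simulated data.
# A single measured confounder drives both treatment assignment and outcome.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
x = rng.normal(size=n)                                   # confounder
p_treat = 1 / (1 + np.exp(-(0.8 * x)))                   # confounded assignment
z = rng.binomial(1, p_treat)                             # exposure
y = 1.0 * z + 2.0 * x + rng.normal(size=n)               # true treatment effect = 1.0

# Unadjusted difference in means is biased by the confounder
unadjusted = y[z == 1].mean() - y[z == 0].mean()

# Propensity scores and IPTW estimate
ps = LogisticRegression().fit(x.reshape(-1, 1), z).predict_proba(x.reshape(-1, 1))[:, 1]
w = np.where(z == 1, 1 / ps, 1 / (1 - ps))
iptw = (np.average(y[z == 1], weights=w[z == 1])
        - np.average(y[z == 0], weights=w[z == 0]))

print(f"unadjusted difference: {unadjusted:.2f}")
print(f"IPTW-adjusted difference: {iptw:.2f}  (true effect = 1.0)")
```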
Structured benefit-risk assessment in human drug evaluation has been the subject of many conference sessions, publications, and books. Quantitative assessment of benefit versus risk to public health is a key component of sponsor NDA/BLA submissions and of FDA’s regulatory review. Given that benefit-risk is of key importance to stakeholders, this session highlights examples of how benefit-risk planning and analysis affect sponsor and regulatory decisions. Such decisions include whether to develop a medicine, to continue development as the medication profile emerges, to approve a medicine, and/or to issue a warning or modify labeling. How has benefit-risk assessment affected decision-making within sponsor companies? How has it affected the approach to making decisions in CDER? What are best practices in benefit-risk planning, analysis, and communication to ensure effective and transparent decision-making? This session will demonstrate tangible ways that benefit-risk evaluation has contributed, and can effectively contribute in the future, to medicines development and regulatory decision-making. Presentations will refer to case studies and review industry and regulatory perspectives. The session will conclude with a discussion of key challenges and best practices for benefit-risk approaches in decision-making. Presentations:
1. Advancing benefit-risk assessment for human drug review – Sara Eggers, Office of Strategic Programs, CDER, FDA
2. Applications of Benefit-Risk Assessment and Patient Preferences to Inform Decision Making – Drug Development, FDA Advisory Committee, and Regulatory Submission Examples – Eva Katz and Rachael DiSantostefano, Department of Epidemiology, Janssen R&D, LLC
3. Best practices for benefit-risk evaluation to ensure effective and transparent decision-making – Gregory Levin, Office of Biostatistics, CDER, FDA
Historical data come from previous trials in a similar setting and can provide information relevant to the research questions of the current trial. Historical data borrowing uses this historical information in both the design and the analysis of a new clinical trial. In the design stage, it can reduce the number of patients and hence reduce costs and timelines; in the analysis stage, it can improve the precision of the estimates. Historical data borrowing can therefore increase statistical power for hypothesis testing or reduce the type I error rate, provided the historical information is sufficiently similar to the data from the current trial. On the other hand, heterogeneity often exists among the historical trials and between the current trial and the historical trials, which limits the use of historical data in new clinical trials.
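To make the borrowing idea concrete, the sketch below applies a simple power prior for a binomial endpoint: the historical likelihood is raised to a discounting power a0 in [0, 1], which with a Beta prior yields a conjugate Beta posterior. The numbers are invented for illustration, the discount is fixed rather than estimated, and this is only one of several borrowing approaches the presenters may discuss.

```python
# Power prior sketch for a binomial endpoint with a conjugate Beta prior.
# Posterior: Beta(a + a0*x0 + x, b + a0*(n0 - x0) + (n - x)),
# where (x0, n0) are historical data, (x, n) current data, a0 the discount.
from scipy.stats import beta

a, b = 1.0, 1.0            # vague Beta(1,1) initial prior
x0, n0 = 35, 100           # hypothetical historical responses / subjects
x, n = 18, 50              # hypothetical current-trial responses / subjects

for a0 in (0.0, 0.5, 1.0):                 # no, partial, and full borrowing
    post = beta(a + a0 * x0 + x, b + a0 * (n0 - x0) + (n - x))
    lo, hi = post.ppf([0.025, 0.975])
    print(f"a0={a0:.1f}: posterior mean={post.mean():.3f}, 95% CrI=({lo:.3f}, {hi:.3f})")
```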
During this session, presenters from industry, academia, and a regulatory agency will discuss their recent research on the theory and application of historical data borrowing in clinical trials. Dr. Ivan Chan, Vice President of Statistics at AbbVie, will present novel Bayesian approaches for historical data borrowing with continuous endpoints and their applications. Professor Matthew A. Psioda from UNC will present the use of patient-level historical data to make historical data borrowing more efficient. Dr. David Ohlssen, Group Head of Statistical Methodology at Novartis, will present a summary of state-of-the-art statistical methodologies for historical data borrowing in clinical trial design and analysis. A speaker from FDA will present regulatory thinking and statistical principles for historical data borrowing in clinical trial design and analysis. This novel and important information will significantly benefit audiences in their research and practice on historical data borrowing in clinical trial design and analysis.
Pain, especially chronic pain, affects millions of humans and animals. Jensen (2016) reports that 43% of Americans suffer from some type of chronic pain, accounting for up to $635 billion annually in medical costs and lost productivity. Reid et al. (2013) assert that animals suffer more from pain than do humans because they cannot understand why pain occurs or anticipate pain relief. Our ability to assess pain in a valid and reliable way is essential to meet the growing demand to treat and manage pain more effectively in humans and animals. Dr. Jean Recta (FDA) will examine pain assessment methods in animals relative to advances in pain assessment in humans. She will also discuss strengths and challenges in the use of animal pain assessment tools from a regulatory statistics perspective. Dr. Dottie Brown (Elanco Animal Health) will present on minimizing observation bias and placebo effects in chronic pain studies. Observation bias is of particular concern with subjective outcomes such as owner or veterinarian assessments of pain. Placebo effects may arise from regression to the mean: many diseases, particularly chronic ones like osteoarthritis, have waxing and waning signs, and owners are more likely to seek enrollment in a trial when those signs are at a peak. Over time, even without intervention, these animals will cycle back to their average level of symptom burden or disability, so animals in control groups may show improvement. Dr. Ciprian Crainiceanu (Johns Hopkins University) will discuss wearable computing for characterizing activity when pain occurs in the free-living environment. He will demonstrate the use of novel pattern recognition methods to extract the relevant movement signals, functional data analysis of these signals to quantify participant-specific differences between activity during periods when pain is reported or absent, and population-level methods for characterizing daily patterns of activity as a function of pain level.
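The regression-to-the-mean mechanism described above can be illustrated with a small simulation: if animals are enrolled only when a fluctuating symptom score is at a peak, their follow-up scores improve on average even with no intervention. The simulation parameters below are arbitrary and purely illustrative.

```python
# Regression-to-the-mean sketch: enrolling subjects at a symptom peak makes
# untreated follow-up scores look improved on average. Parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
true_burden = rng.normal(50, 8, n)                   # each animal's long-run average score
score_at_screen = true_burden + rng.normal(0, 6, n)  # waxing/waning fluctuation at screening
score_at_followup = true_burden + rng.normal(0, 6, n)

enrolled = score_at_screen > 60                       # owners seek trials when signs peak
change = score_at_followup[enrolled] - score_at_screen[enrolled]

print(f"enrolled animals: {enrolled.sum()}")
print(f"mean change with no treatment: {change.mean():.1f} points (negative = improvement)")
```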
Drug development is becoming increasingly costly due to high attrition rates. To reduce cost and improve the success rate, evidence-based decision making is critical for clinical development. Key decisions in early drug development include, but are not limited to: 1) Do we see enough evidence to confirm the drug's proof of mechanism? 2) Do current efficacy and safety data support further development? 3) Can we modify an ongoing trial (stop, accelerate, or enroll more patients) based on accumulating data? 4) Do we have the optimal dose or dose regimen for future studies? Traditional decision making rarely depends on formal quantification and relies mostly on an ambiguous process involving subjective expert opinion. It is therefore critical to develop a quantitative, evidence-based decision making framework that can provide a full picture of benefit and risk. Making good decisions involves three important factors. First, efficient and flexible trial designs allow collection of appropriate data for early adaptations. Second, appropriate fit-for-purpose metrics are needed to allow robust decision making. Third, appropriate statistical methods are critical for valid statistical inference, which is essential for robust decision making. Numerous novel designs and statistical approaches for objective decision making have been developed in the literature, involving both frequentist and Bayesian approaches. Bayesian approaches have gained significant attention recently due to their inherent flexibility and intuitive interpretation; however, real-life implementation of such designs and statistical analyses is still rare. In this session, renowned speakers from FDA and industry will share their experience in developing novel designs and statistical approaches for robust decision-making in early clinical development. The practical utility of recent approaches, including Bayesian methods, will be discussed, and real-world examples will be provided to illustrate Go/No-Go decisions.
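As a minimal sketch of one Bayesian decision metric of the kind the session may discuss, the code below computes the posterior probability that a treatment effect exceeds a target value under a normal-normal conjugate model and applies illustrative Go/No-Go thresholds; all numbers and cutoffs are hypothetical, not criteria endorsed by any speaker.

```python
# Bayesian Go/No-Go sketch: posterior probability that the treatment effect
# exceeds a target value (TV) under a normal-normal conjugate model.
# All numbers (prior, data, thresholds) are hypothetical.
import numpy as np
from scipy.stats import norm

prior_mean, prior_sd = 0.0, 10.0        # weakly informative prior on the effect
obs_effect, obs_se = 3.2, 1.5           # observed effect estimate and standard error
tv = 2.0                                # target value defining a meaningful effect

# Conjugate normal update
post_prec = 1 / prior_sd**2 + 1 / obs_se**2
post_sd = np.sqrt(1 / post_prec)
post_mean = (prior_mean / prior_sd**2 + obs_effect / obs_se**2) / post_prec

p_exceed_tv = 1 - norm.cdf(tv, loc=post_mean, scale=post_sd)
decision = "Go" if p_exceed_tv >= 0.80 else ("No-Go" if p_exceed_tv <= 0.20 else "Consider")
print(f"P(effect > TV | data) = {p_exceed_tv:.2f} -> {decision}")
```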
Natural history control groups have been used in orphan product clinical development programs for treatment effect assessment for decades. The use of such groups in common-disease development programs, however, is very limited, if present at all. With the advancement of medicine, more and more medical products are on the market and available to patients. This inevitably creates tough competition for patient enrollment in clinical trials and difficulties in other operational considerations, and it may ultimately delay approvals of new molecular entities that could be more effective than existing medical products. Recently, interest in making greater use of historical or external data has grown considerably. It is well recognized that historical and external data can potentially improve the efficiency of a clinical trial. Besides using subject-level data from historical data reservoirs in an attempt to mimic a randomized parallel-group trial, researchers have realized there is much more potential in historical data utilization. For example, through meta-analysis of historical data, one can choose more sensitive endpoints and primary analysis models. Moreover, instead of using subject-level data, a Bayesian framework can be introduced to leverage historical data at the population level. This session will showcase a few real case examples that use historical data beyond choosing a matching historical control. New methodologies and trial designs will also be discussed with case examples and/or simulations. Both industry and regulatory experiences will be shared, followed by lessons learned and future guidance.
The transition to precision medicine approaches in cancer has sparked a much-needed shift in the design and implementation of clinical trials. For example, umbrella trials are frequently used to evaluate the efficacy of targeted treatments in groups of patients with the same cancer type, while basket trials evaluate the effectiveness of a drug based on its underlying mode of action rather than strictly on the specific form of cancer it was intended to treat. Recently, a promising new approach in precision medicine is to develop therapy for patients with a specific molecular characteristic, agnostic to cancer site. Rather than requiring separate development programs for each disease site, development is based on biomarkers irrespective of organ site or histology. For example, the Food and Drug Administration (FDA) approved pembrolizumab, a programmed death 1 (PD-1) inhibitor, for the treatment of adult and pediatric patients with unresectable or metastatic, microsatellite-instability–high (MSI-H) or mismatch-repair–deficient (dMMR) solid tumors, regardless of tumor site or histology. There are many statistical issues for this new precision medicine approach from both the device and therapy perspectives, yet these issues are less well known to the community. In this session, experts from FDA, the device and pharmaceutical industries, and academia will discuss the statistical issues and methods in clinical trial design and data analysis for this emerging precision medicine approach.
According to a recent review article by Tang et al. [Nature Reviews Drug Discovery 17, 783-784 (2018)], by the end of October 2018 the number of immuno-oncology agents under development had increased to as many as 3,394, the majority of which are in the preclinical or phase 1 stage. Despite such a concentration of resources in this explosively developing field, the statistical methodology for early oncology clinical development has not been fully optimized to embrace the special needs and challenges of immuno-oncology. Wage et al. summarized the main challenges of early-phase study design for immunotherapies from a statistical perspective [Journal for ImmunoTherapy of Cancer 6, 81 (2018)]; late-onset toxicities, drug combinations, novel clinical endpoints, and the design of expansion cohorts were identified as the primary issues for phase 1 development, among others. In August 2018, a draft guidance released by the FDA, “Expansion Cohorts: Use in First-In-Human Clinical Trials to Expedite Development of Oncology Drugs and Biologics”, emphasized challenges including the collection and reporting of new safety information, confirmation of the recommended phase 2 dose, and evaluation of preliminary anti-tumor activity. In this session, we will invite speakers from academia, a regulatory agency, and industry with extensive experience in early immuno-oncology development to share their insights and innovations on several key issues. Opinion leaders will join the panel discussion to summarize the challenges and brainstorm ideas to overcome them. Not only could these conversations be of practical value for statisticians working on early-stage immuno-oncology who want to learn about state-of-the-art approaches, but they could also inspire thinking differently and innovating when new challenges emerge.
Bayesian approaches are of increasing interest to clinical researchers because they provide an analytical pathway to effectively utilize available information and enhance efficiency or precision in clinical trials.
Bayesian designs have been widely used in early-phase clinical trials, and they are also attractive for late-phase efficacy trials, especially in rare disease areas and pediatric trials where incidence is low. Development can be expedited if historical data from previous trials or other populations can be leveraged to augment the efficacy evidence and reduce the sample size of the current trial. However, several challenges in Bayesian designs need to be carefully considered during the design stage, such as choosing proper prior distributions, handling heterogeneity, and developing computationally feasible approaches.
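One way to address the prior-choice and heterogeneity challenges noted above is a robust mixture prior: an informative component derived from historical data mixed with a vague component, so that borrowing is discounted automatically when the current data conflict with the historical data. The sketch below does this for a binomial endpoint with conjugate Beta components; the numbers and mixture weight are illustrative and do not correspond to any specific method the speakers will present.

```python
# Robust mixture prior sketch for a binomial endpoint.
# Prior: w * Beta(a1, b1) (informative, e.g., from historical data) + (1 - w) * Beta(1, 1).
# Posterior is again a Beta mixture; the weights update via the marginal likelihoods.
from math import comb, exp
from scipy.special import betaln

def mixture_posterior(x, n, components):
    """components: list of (weight, a, b). Returns updated (weight, a, b) list."""
    marg = [w * comb(n, x) * exp(betaln(a + x, b + n - x) - betaln(a, b))
            for w, a, b in components]
    total = sum(marg)
    return [(m / total, a + x, b + n - x) for m, (_, a, b) in zip(marg, components)]

prior = [(0.8, 14.0, 26.0),   # informative component (hypothetical historical ~35% rate)
         (0.2, 1.0, 1.0)]     # vague component for robustness

for x, n in [(9, 25), (18, 25)]:          # consistent vs. conflicting current data
    post = mixture_posterior(x, n, prior)
    mean = sum(w * a / (a + b) for w, a, b in post)
    weights = ", ".join(f"{w:.2f}" for w, _, _ in post)
    print(f"x/n = {x}/{n}: posterior weights = [{weights}], posterior mean = {mean:.3f}")
```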
In this session, innovative methods and practical considerations for Bayesian designs in clinical trials will be discussed, covering both methodologies and applications. Speakers from the pharmaceutical industry who are highly engaged in this area will share their experiences and research on Bayesian designs and analyses in clinical trials. The speakers will present Bayesian designs for pediatric clinical trials, applications of Bayesian methods in benefit-risk assessment, and Bayesian methods for early development clinical trials. Discussion of these methods, including regulatory perspectives, will be provided.
List of invited speakers:
• Amarjot Kaur, PhD, Merck & Co., “Bayesian framework for pediatric drug development”
• Margaret Gamalo, Eli Lilly & Co., “Applications of Bayesian Methods in Indirect Comparisons of Benefit Risk”
• Mani Lakshminarayanan, Complete HEOR Solutions (CHEORS), “Use of Bayesian Methods in Shaping New Early Development Clinical Trials Paradigm”
Session discussant: Frank Harrell, Vanderbilt University
Getting the questions right is a critical part of any scientific pursuit. ICH E9(R1) lays out a good conceptual framework for endpoints using the estimand concept. In clinical development, the concept is important not only for clinical trial design but also for safety and benefit-risk evaluation at the compound level. This session will examine estimands in safety and benefit-risk assessment. Potential topics include, but are not limited to:
• “Challenges of Safety and Dual Benefit-Risk Estimands”: Safety endpoints are multifaceted, with frequency, severity, timing, duration, and reversibility. They are most often underpowered, unexpected, and impossible to impute, and safety estimands are impacted by intercurrent events differently than efficacy estimands. The pairing of safety and efficacy estimands to conduct a benefit-risk assessment is even more challenging. In this talk, the challenges of safety estimands will be discussed first, followed by dual estimands for benefit-risk assessment, with examples.
• “Safety Estimands: A Regulatory Perspective”: One of the objectives of FDA CDER’s Office of Biostatistics Safety and Benefit-Risk working group has been to develop best practices for the Office’s evaluation of safety and benefit-risk, including the choice of safety estimands and appropriate methods to evaluate them. This talk will describe the working group’s current thinking regarding safety estimands, and will use an example indication to illustrate the thought process on different considerations when defining and evaluating safety estimands.
• “Creating a Benefit-Risk Estimand from My Drug Program’s Efficacy and Safety Estimands”: This talk will use a real example indication, beginning with a benefit-risk value tree, to discuss how a benefit-risk estimand may be created from the efficacy and safety estimands. The structured thinking of value trees and estimands will demonstrate how benefit-risk assessment may be considered in the planning phase of a clinical program, based on the scientific questions that require elucidation in this example clinical program.
Panel discussants will be invited to comment on this hot topic. We hope to trigger more discussion and deeper thinking on estimands for safety and benefit-risk assessment.