Chair:  Yeh-Fong Chen (FDA)
Vice Chair: Hong Tian (BeiGene)
Instructor: Joseph Ibrahim, PhD (UNC)

Course Description:

This full-day short course is designed to give biostatisticians and data scientists a comprehensive overview of the use of Bayesian methods for clinical trial design, as well as training on how these methods can be implemented using standard software. Specifically, applications will be demonstrated using R, SAS, or both.

The first part of the course gives a broad overview of Bayesian sample size determination with a focus on phase II/III trials. Attention is paid to four concepts that govern sample size determination: (1) the sampling prior, which reflects knowledge about the parameter(s) in the data model; (2) the fitting prior, used to analyze the data once collected; (3) the criterion used as the basis of sample size determination; and (4) the strategy for monitoring, if the trial includes one or more interim analyses. For (3), a comprehensive review of Bayesian criteria for sample size determination will be given, covering topics such as Bayesian type I error rate control, Bayesian power, the average coverage criterion, the average length criterion, and the worst outcome criterion. For (4), multiple strategies will be discussed for monitoring accumulating data, including the predictive probability of success and sequential methods. Applications will be based on actual phase II/III trials, using software implementations written as SAS macros or in R.
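The interplay of sampling and fitting priors can be made concrete with a short simulation. The sketch below is in Python rather than the R/SAS used in the course, and all numbers are hypothetical: the true response rate of a single-arm binary-endpoint trial is drawn from a sampling prior, the data are analyzed under a vague Beta(1, 1) fitting prior, and "success" means the posterior probability that the rate exceeds 0.2 clears 0.975.

```python
import numpy as np

rng = np.random.default_rng(7)

def post_prob_gt(theta0, y, n, a=1.0, b=1.0, draws=4000):
    """Monte Carlo estimate of posterior Pr(theta > theta0) under a
    Beta(a, b) fitting prior, given y responders out of n patients."""
    return float(np.mean(rng.beta(a + y, b + n - y, draws) > theta0))

def prob_success(n, sampling_prior, theta0=0.2, cutoff=0.975, sims=1000):
    """Pr(trial declares success) when the true response rate is drawn
    from `sampling_prior` (a function returning one rate per call)."""
    hits = 0
    for _ in range(sims):
        theta = sampling_prior()          # draw the "truth"
        y = rng.binomial(n, theta)        # simulate the trial data
        if post_prob_gt(theta0, y, n) > cutoff:
            hits += 1
    return hits / sims

# Bayesian power: enthusiastic sampling prior centered near 0.75
power = prob_success(40, lambda: rng.beta(30, 10))
# Bayesian type I error: point-mass sampling prior at the null rate
type1 = prob_success(40, lambda: 0.2)
print(f"Bayesian power ~ {power:.2f}, Bayesian type I error ~ {type1:.2f}")
```

Sample size determination then amounts to increasing n until the power criterion is met while the Bayesian type I error rate remains controlled.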

The second part of the course will focus broadly on advanced Bayesian trial designs. The designs considered fall into two broad categories: (1) designs that borrow information through an informative prior specified a priori, and (2) designs that borrow information across subgroups within a single trial. Example designs of type (1) include trials where the goal is to show that a next-generation medical device (e.g., a coronary stent) is non-inferior or superior to a previous generation of the same device, and designs that extrapolate information on treatment efficacy from adult to pediatric disease settings. In both cases there are often multiple historical datasets that could inform the prior, and attention will be given to techniques that accommodate multiple information sources and to situations where the historical dataset(s) consist of controls only. Example designs of type (2) include basket trials, where the goal is to make inferences regarding treatment activity for different tumor types; here, information borrowing is designed to improve estimates of the tumor-type-specific response probabilities. Emphasis in this part of the course will be on understanding the core ideas of these advanced Bayesian approaches and on how to evaluate the performance of a design (i.e., its operating characteristics of interest) using simulation studies.
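As a concrete illustration of design type (1), the sketch below (Python, with hypothetical counts; the course itself uses R/SAS) shows the conjugate power prior for a binomial response rate: the historical likelihood is raised to a discounting power a0 in [0, 1], so with a Beta initial prior the posterior stays Beta and a0 * n0 acts as the effective number of borrowed historical patients.

```python
def power_prior_posterior(y, n, y0, n0, a0, a=1.0, b=1.0):
    """Posterior Beta(alpha, beta) parameters for a binomial response
    rate when a historical dataset (y0 responders out of n0) is
    discounted by a power-prior parameter a0 in [0, 1].

    a0 = 0 ignores the history entirely; a0 = 1 pools it fully. With a
    conjugate Beta(a, b) initial prior, the discounted historical
    likelihood simply adds a0 * y0 and a0 * (n0 - y0) pseudo-counts."""
    alpha = a + a0 * y0 + y
    beta_ = b + a0 * (n0 - y0) + (n - y)
    return alpha, beta_

# Hypothetical numbers: 30/100 historical responders, 12/50 current
for a0 in (0.0, 0.5, 1.0):
    al, be = power_prior_posterior(12, 50, 30, 100, a0)
    print(f"a0={a0:.1f}: posterior mean {al / (al + be):.3f}, "
          f"effective prior n = {a0 * 100:.0f}")
```

Because the current response rate (0.24) sits below the historical rate (0.30), increasing a0 pulls the posterior mean upward toward the history, which is exactly the behavior a design team must weigh when fixing or adaptively choosing the discount.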


Outline:

Section 1: Bayesian Sample Size Determination (SSD) for Phase II/III Trials

  • Priors used for SSD: sampling priors and fitting priors
  • Bayesian criteria for sample size determination
    • Bayesian power and type I error rates
    • Average coverage criterion
    • Average length criterion
    • Worst outcome criterion
  • Monitoring strategies
    • Monitoring trials using predictive probability of success
    • Efficacy and futility monitoring using skeptical, enthusiastic, and clinical priors
  • Applications and software demonstrations
    • Single arm phase IIA trial design with binary endpoints
    • Phase II trial design with adaptive randomization, and efficacy and futility monitoring
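The predictive-probability monitoring strategy in the outline above can be sketched in a few lines (Python here, with hypothetical numbers; the course demonstrations use R/SAS): at an interim look, average over the posterior of the response rate and over imputed future data to estimate the chance that the completed trial would declare success.

```python
import numpy as np

rng = np.random.default_rng(11)

def pred_prob_success(y, m, n, theta0=0.2, cutoff=0.95,
                      a=1.0, b=1.0, draws=4000, inner=500):
    """Predictive probability of success (PPoS) after observing y
    responders in the first m of n planned patients.

    For each draw from the interim Beta posterior, impute the remaining
    n - m outcomes, then check whether the final-analysis posterior
    Pr(theta > theta0) would exceed `cutoff`."""
    theta = rng.beta(a + y, b + m - y, draws)      # interim posterior
    y_final = y + rng.binomial(n - m, theta)       # imputed final counts
    post = rng.beta((a + y_final)[:, None],
                    (b + n - y_final)[:, None], size=(draws, inner))
    final_success = (post > theta0).mean(axis=1) > cutoff
    return float(final_success.mean())

# Promising interim (15/20) vs. a weak one (2/20) in a 40-patient trial
ppos_hi = pred_prob_success(15, 20, 40)
ppos_lo = pred_prob_success(2, 20, 40)
print(f"PPoS: promising interim {ppos_hi:.2f}, weak interim {ppos_lo:.2f}")
```

A trial can then stop early for futility when PPoS falls below a small threshold, or for efficacy when it is very high; the thresholds are tuned by simulation to control the design's operating characteristics.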

Section 2: Advanced Bayesian Designs Using Information Borrowing

  • Prior distributions based on historical data
    • Power priors and normalized power priors
    • Commensurate priors
    • Robust mixture priors
    • Priors with criterion-based discounting
  • Considerations for constructing priors with a single versus multiple historical datasets
  • Considerations for borrowing information on treatment effects versus on controls only
  • Applications
    • Phase III medical device trial design with focus on primary and secondary endpoints
    • Pediatric trial design with extrapolation from adult data
    • Oncology basket trial design using hierarchical models and Bayesian model averaging
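The model-averaging idea behind basket designs can be illustrated with a deliberately minimal two-model version (Python, hypothetical counts; the presenters' CRAN package "bmabasket" implements a richer multi-partition version in R): average a fully pooled model and a fully independent model, weighting by their marginal likelihoods, so baskets with similar response rates borrow strongly while discordant baskets do not.

```python
from math import lgamma, exp

def log_beta(a, b):
    """Log of the Beta function via log-gamma (numerically stable)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def bma_basket(y, n, a=1.0, b=1.0):
    """Two-model Bayesian model averaging for a basket trial.

    Model P pools all baskets (one shared response rate); model I treats
    them independently, each with a Beta(a, b) prior. The binomial
    coefficients cancel between models, so the marginal likelihoods
    reduce to ratios of Beta functions. Returns the posterior
    probability of pooling and the BMA estimate for each basket."""
    Y, N = sum(y), sum(n)
    logm_pool = log_beta(a + Y, b + N - Y) - log_beta(a, b)
    logm_ind = sum(log_beta(a + yi, b + ni - yi) - log_beta(a, b)
                   for yi, ni in zip(y, n))
    w = 1.0 / (1.0 + exp(logm_ind - logm_pool))   # Pr(pooled | data)
    pooled_mean = (a + Y) / (a + b + N)
    est = [w * pooled_mean + (1 - w) * (a + yi) / (a + b + ni)
           for yi, ni in zip(y, n)]
    return w, est

# Hypothetical 3-basket trials: concordant vs. discordant response counts
w_hom, est_hom = bma_basket([6, 5, 7], [20, 20, 20])
w_het, est_het = bma_basket([1, 15, 6], [20, 20, 20])
```

With concordant counts the pooled model dominates and estimates shrink toward the common rate; with discordant counts the independent model dominates and each basket's estimate stays close to its own data, which is the adaptive-borrowing behavior basket designs are built to exploit.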

In short, this course will provide a comprehensive overview of the use of Bayesian methods for clinical trial design, including instruction on core cross-cutting concepts as well as example applications consistent with how Bayesian methods are currently being proposed and implemented for complex innovative trials.

Intended Audience: The intended audience for this course includes biostatisticians and data scientists practicing in clinical trials who hold at least a master's-level degree in biostatistics or a related field.

Learning Objectives: The primary learning objectives for this course are to (1) provide practitioners with a sound understanding of core, cross-cutting concepts for Bayesian clinical trial design and sample size determination, (2) help practitioners understand the benefits and challenges of applying Bayesian methods in advanced trial designs using realistic case studies, and (3) teach practitioners about software tools (SAS and R) that can be used to implement and evaluate Bayesian designs in practice. With a sound understanding of these core concepts, applied practitioners will be better equipped to discuss the appropriate use of these important methods with internal and external colleagues.

Relevance to Conference Goals: Interest in the use of Bayesian methods for clinical trial design and analysis has greatly increased since passage of the 21st Century Cures Act, and has been further spurred by the FDA’s Complex Innovative Trial Design (CID) Pilot Program. The need for expansive training on the use of Bayesian methods for clinical trial design has never been greater than it is today. Indeed, FDA guidance explicitly identifies the potential value of Bayesian methods as a pathway to more innovative trials. For example, the FDA guidance “Interacting with the FDA on Complex Innovative Trial Designs for Drugs and Biological Products” states that “Some examples of trial designs that might be considered novel or CID … including those that formally borrow external or historical information or borrow control arm data from previous studies to expand upon concurrent controls …” and that “Bayesian inference may be appropriate in settings where it is advantageous to systematically combine multiple sources of evidence, such as extrapolation of adult data to pediatric populations, or to borrow control data from Phase 2 trials to augment a Phase 3 trial.” The same guidance further states that “Bayesian approaches may be well-suited for some CIDs intended to provide substantial evidence of effectiveness because they can provide flexibility in the design and analysis of a trial, particularly when complex adaptations and predictive models are used.” Our proposal not only aligns with the study design theme of CSP 2022, but also focuses on the innovative approaches to trial design that are needed now and in the future.
Moreover, general training on Bayesian methods for trial design is much less widely available than training on classical frequentist approaches, so this course addresses a very important need where demand currently outpaces supply by a notable margin.

Software Packages: All examples in this short course will make use of SAS Software and/or R. For SAS implementations, tools will be provided with the course materials. For R, we will make use of custom R code (openly available), R packages on CRAN that were authored or co-authored by the presenters (e.g., “bmabasket”), and other Bayesian trial design R software provided with the course materials.


Instructor:

Joseph G. Ibrahim, Ph.D.
Alumni Distinguished Professor of Biostatistics
Director of the Biostatistics Core at UNC Lineberger Comprehensive Cancer Center

University of North Carolina at Chapel Hill

Dr. Joseph G. Ibrahim, Alumni Distinguished Professor of Biostatistics at the University of North Carolina at Chapel Hill, is principal investigator of two National Institutes of Health (NIH) grants for developing statistical methodology related to cancer, imaging, and genomics research.

Dr. Ibrahim is the Director of the Biostatistics Core at UNC Lineberger Comprehensive Cancer Center. He is the biostatistical core leader of a Specialized Program of Research Excellence in breast cancer from NIH. Dr. Ibrahim’s areas of research focus are Bayesian inference, missing data problems, cancer, and genomics.

He received his Ph.D. in statistics from the University of Minnesota in 1988. With over 30 years of experience working in cancer clinical trials, Dr. Ibrahim directs the UNC Laboratory for Innovative Clinical Trials (LICT). He is also the Director of Graduate Studies in UNC’s Department of Biostatistics, as well as the Program Director of the cancer genomics training grant in the department. Dr. Ibrahim has published over 350 research papers, most in top statistical journals. He has published graduate-level books on Bayesian survival analysis and Bayesian computation. He teaches courses in Bayesian Statistics, Advanced Statistical Inference, Theory and Applications of Linear and Generalized Linear Models, and Statistical Analysis with Missing Data.

Dr. Ibrahim is a Fellow of the American Statistical Association (ASA), the Institute of Mathematical Statistics (IMS), the International Society of Bayesian Analysis (ISBA), the Royal Statistical Society (RSS), and the International Statistical Institute (ISI). He has given many full-day and two-day short courses at ENAR, JSM, WNAR, and pharmaceutical companies on topics including introductory Bayesian methods, missing data, joint models for longitudinal and survival data, Bayesian clinical trial design, longitudinal data, Bayesian survival analysis, and cure rate models.