
S4C – Expedite Dose Optimization Using Backfill and Randomization

Chair: Ying Yuan, PhD (MD Anderson Cancer Center)
Abstract: As highlighted in the FDA’s guidance on dose optimization, a key consideration is collecting data across multiple dose levels to enable meaningful comparisons for dose selection. This often requires larger sample sizes and longer trial durations. Backfilling and randomization represent two critical strategies to address these challenges. Backfilling allows more patients to be treated during dose escalation without prolonging the trial, while randomization provides less biased comparisons across doses.

This session invites experts from industry and academia to discuss recent advances in statistical designs and methodologies for efficient dose optimization using backfilling and randomization.

Speaker: Ying Yuan, PhD (MD Anderson Cancer Center)
Title: Integrating Backfill and Randomization for Efficient Dose Optimization
Abstract:
Backfilling during dose escalation has been increasingly used in practice as a highly flexible approach to collect additional data across multiple dose levels without prolonging the overall trial duration. Because backfilling allows more patients to be treated at higher doses at a faster pace—and often in settings with substantial safety uncertainty in first-in-human trials—it is critically important to ensure patient safety and benefit. In this talk, I will discuss key strategies and considerations for backfilling, including when to open and close backfill cohorts and how to incorporate potentially higher toxicity observed during backfill into the dose-escalation process. I will also discuss approaches for integrating backfill patients into randomization to maximize data use and reduce the overall sample size required for dose optimization.
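As a concrete illustration of the kind of open/close logic discussed above, the following minimal Python sketch checks whether a dose is eligible for backfill. The eligibility criteria (the dose has been cleared by the escalation cohort, has shown at least one response, and its pooled toxicity rate stays below a cutoff) and the cutoff value are illustrative assumptions, not the specific rules presented in this talk.

from dataclasses import dataclass

@dataclass
class DoseData:
    n_treated: int   # patients treated at this dose (escalation plus backfill)
    n_dlt: int       # dose-limiting toxicities observed at this dose
    n_resp: int      # responses observed at this dose
    cleared: bool    # dose has been cleared by the dose-escalation cohort

def backfill_open(dose: DoseData, tox_cutoff: float = 0.30) -> bool:
    """Return True if the dose may enroll backfill patients (illustrative rule)."""
    if not dose.cleared or dose.n_resp == 0:
        return False                            # only backfill cleared doses with some activity
    observed_tox = dose.n_dlt / dose.n_treated if dose.n_treated else 0.0
    return observed_tox <= tox_cutoff           # close backfill if pooled toxicity looks high

# Example: a cleared dose with 1/6 DLTs and 2 responses remains open for backfill.
print(backfill_open(DoseData(n_treated=6, n_dlt=1, n_resp=2, cleared=True)))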

Speaker: Kentaro Takeda, PhD (Astellas)
Title: BF-BOIN-ET: A Backfill Bayesian Optimal Interval Design Using Efficacy and Toxicity Outcomes for Dose Optimization
Abstract: The primary purpose of a dose-finding trial for novel anticancer agents is to identify an optimal dose (OD), defined as a tolerable dose with adequate efficacy when the dose-toxicity and dose-efficacy relationships are uncertain. The FDA's Project Optimus is reforming the dose-optimization paradigm and recommends that dose-finding trials compare multiple doses to generate additional data at promising dose levels. Backfilling is particularly helpful when the efficacy of a drug does not always increase with dose: assigning additional patients to lower, already-explored doses while the trial continues to evaluate higher doses yields more information at those doses. This paper proposes a Bayesian optimal interval design using efficacy and toxicity outcomes that allows patients to be backfilled at lower doses during a dose-finding trial while prioritizing the dose-escalation cohort that explores higher doses. A simulation study shows that the proposed design, BF-BOIN-ET, offers advantages over competing designs in the percentage of correct OD selection, the required sample size, and the trial duration across various realistic settings.
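For context, decision rules in BOIN-family designs compare the observed toxicity rate at the current dose with pre-calculated interval boundaries; BOIN-ET and BF-BOIN-ET add efficacy-based rules and backfill logic on top of this. The sketch below computes the standard BOIN toxicity boundaries and the resulting escalate/stay/de-escalate decision, using the common convention phi1 = 0.6*phi and phi2 = 1.4*phi; it illustrates the interval idea only and is not an implementation of the proposed BF-BOIN-ET design.

import math

def boin_boundaries(phi, phi1=None, phi2=None):
    """Escalation and de-escalation boundaries for a target DLT rate phi."""
    phi1 = 0.6 * phi if phi1 is None else phi1   # highest subtherapeutic toxicity rate
    phi2 = 1.4 * phi if phi2 is None else phi2   # lowest overly toxic rate
    lam_e = math.log((1 - phi1) / (1 - phi)) / math.log(phi * (1 - phi1) / (phi1 * (1 - phi)))
    lam_d = math.log((1 - phi) / (1 - phi2)) / math.log(phi2 * (1 - phi) / (phi * (1 - phi2)))
    return lam_e, lam_d

def toxicity_decision(n_dlt, n_treated, phi=0.30):
    """Interval-based decision using toxicity only (no efficacy rule)."""
    lam_e, lam_d = boin_boundaries(phi)
    p_hat = n_dlt / n_treated
    if p_hat <= lam_e:
        return "escalate"
    if p_hat >= lam_d:
        return "de-escalate"
    return "stay"

# For phi = 0.30 the boundaries are roughly 0.236 and 0.358,
# so observing 1 DLT in 6 patients (0.167) supports escalation.
print(boin_boundaries(0.30))
print(toxicity_decision(1, 6))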

Speaker: Yong Zang, PhD (Indiana University)
Title: IPOD: An Optimal Design Integrating Proof-Of-Concept with Dose Optimization
Abstract: The emergence of targeted agents and immunotherapies has challenged the traditional oncology drug development process, creating an urgent need for new design strategies that address both dose optimization and treatment-effect evaluation. We introduce the IPOD design, a seamless early-phase oncology trial design that integrates dose optimization with proof-of-concept evaluation. The IPOD design is statistically rigorous, controlling both the familywise error rate (FWER) and power while minimizing the sample size. It is also highly flexible, enabling clinicians to evaluate multiple characteristics of a treatment and select the optimal dose based on the totality of evidence, all while preserving the FWER and power guarantees. A key strength of the IPOD design is its operational simplicity: all sample sizes and decision boundaries can be pre-tabulated, allowing implementation through straightforward comparisons between the observed number of responses and prespecified thresholds.
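To illustrate the kind of pre-tabulated response-count threshold mentioned above, the following Python snippet finds the smallest number of responses that keeps the false-positive rate below a nominal level under an assumed null response rate for a single dose. This is a generic exact-binomial sketch, not the IPOD calibration itself, which determines thresholds jointly across doses to control the FWER.

from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(k, n + 1))

def response_threshold(n, p0, alpha):
    """Smallest number of responses r with P(X >= r | p0) <= alpha."""
    for r in range(n + 1):
        if binom_tail(r, n, p0) <= alpha:
            return r
    return n + 1  # no achievable threshold at this sample size

# Example (assumed values): with n = 20 patients, a null response rate of 0.20,
# and alpha = 0.05, declaring "go" only when the observed responses reach the
# tabulated threshold keeps the false-positive rate below 5%.
print(response_threshold(20, 0.20, 0.05))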