
S4A

Information Borrowing in Oncology Trials

Chair: Freda Cooner (Eli Lilly)
Vice Chair: Xiang Li (JnJ)

Speaker: Cassie Dong (Takeda)
Title: Statistical modeling for information borrowing from external cohorts in oncology trials for rare populations with time-to-event endpoints
Abstract: Introduction: For oncology studies in orphan conditions (diseases affecting fewer than 1 in 10,000 persons), it is challenging to conduct a randomized study with a fully powered time-to-event endpoint. Studies of such rare diseases therefore often end with a positive trend that does not reach statistical significance because of small sample sizes. Borrowing information from external data, such as historical studies or chart reviews, can provide additional clinical evidence to use alongside a concurrent trial, improving the precision of the efficacy point estimates and yielding narrower confidence intervals. Such hybrid approaches can ultimately speed patient access to effective treatments in areas of high unmet medical need.

Methodology: In this work, we provide both frequentist and Bayesian frameworks for time-to-event endpoints that incorporate an external cohort when efficacy data are available for both the control and treatment arms, with small sample sizes in both the internal and external data. The frequentist approach is based on propensity-score-weighted regression, using separate models to calculate the weights for the two external arms. The two Bayesian models differ in how borrowing is controlled. In the first approach, the amount of borrowing for the two treatment arms is parametrized through the variance of the prior for the hybrid control/treatment arms. In the second, a mixture prior controls the proportion of information drawn from the concurrent versus the external cohort. A Gibbs sampler is implemented, and credible intervals are used to compare the methods.
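As a rough illustration of the second Bayesian approach, the sketch below applies a two-component mixture prior to a normal approximation of the log hazard ratio and computes the posterior on a grid. All numbers (the prior weight `w`, the component means and scales, the concurrent-trial estimate) are hypothetical, and the grid posterior stands in for the Gibbs sampler described in the abstract.

```python
import numpy as np
from scipy import stats

theta = np.linspace(-2.0, 2.0, 2001)   # grid over the log hazard ratio
dtheta = theta[1] - theta[0]

w = 0.5  # hypothetical prior weight on the external (historical) component

# Mixture prior: informative external component + vague "no borrowing" component
prior = (w * stats.norm.pdf(theta, loc=-0.4, scale=0.2)
         + (1 - w) * stats.norm.pdf(theta, loc=0.0, scale=2.0))

# Normal approximation to the concurrent-trial likelihood
# (hypothetical estimated log-HR of -0.25 with standard error 0.35)
likelihood = stats.norm.pdf(-0.25, loc=theta, scale=0.35)

posterior = prior * likelihood
posterior /= posterior.sum() * dtheta   # normalize on the grid

post_mean = (theta * posterior).sum() * dtheta
print(f"posterior mean log-HR: {post_mean:.3f}")
```

With a discrepant concurrent estimate, the vague component dominates the posterior and borrowing is effectively reduced, which is the behavior the mixture prior is meant to deliver.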

To overcome the challenge of a small number of events in the external cohorts, we propose a smoothed-bootstrap-based tipping-point analysis and a simulation framework for event projection to guide sample size determination and sensitivity analyses. We illustrate the advantages and disadvantages of both frameworks through an extensive simulation study and sensitivity analyses of the prior specification. Practical considerations for regulatory submissions will also be shared.
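To make the smoothed-bootstrap idea concrete, here is a minimal sketch with hypothetical external event times and a hypothetical bandwidth: each replicate resamples the observed times and perturbs them with Gaussian kernel noise on the log scale, avoiding the plain bootstrap's reuse of the same handful of values. This illustrates only the resampling step, not the speakers' full tipping-point procedure.

```python
import numpy as np

rng = np.random.default_rng(7)
ext_event_times = np.array([3.1, 5.4, 7.8, 9.0, 12.5, 14.2])  # months, hypothetical

def smoothed_bootstrap_median(times, n_boot=2000, bandwidth=0.15):
    """Resample on the log scale and add kernel noise (keeps times positive)."""
    log_t = np.log(times)
    draws = rng.choice(log_t, size=(n_boot, times.size), replace=True)
    draws += rng.normal(0.0, bandwidth, size=draws.shape)   # the smoothing step
    return np.median(np.exp(draws), axis=1)

boot_medians = smoothed_bootstrap_median(ext_event_times)
lo, hi = np.percentile(boot_medians, [2.5, 97.5])
print(f"smoothed-bootstrap 95% interval for the median event time: ({lo:.1f}, {hi:.1f})")
```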


Speaker: Haitao Chu (Pfizer)
Title: Non-concurrent controls in platform trials: can we borrow their concurrent observation data?
Abstract: Adaptive platform trials (APTs) offer an innovative approach to studying multiple therapeutic interventions more efficiently through flexible features such as adding and dropping interventions as evidence emerges, creating a seamless process that avoids disrupting enrollment. The benefits and practical challenges of implementing APTs have been widely discussed in the literature; however, less consideration has been given to how to use non-concurrent control (NCC) data (i.e., data generated by patients recruited to the control arm before a new treatment is added) when the outcome of interest is a time-to-event endpoint. Including the NCC data can increase the power of the trial. However, because of the ever-present change in the standard of care over time, population drift, and other calendar-time biases, completely borrowing the NCC survival data may bias the estimation.

In this talk, we propose an alternative approach that borrows the concurrently observed portion of the NCC data via left truncation, using a simple decision-making flowchart, which can reduce the bias due to the change in the standard of care under certain assumptions. The restricted mean survival time (RMST), estimated by the Kaplan-Meier method, is then used to compare the treatment arm with the pooled control group. We present two simulation studies to illustrate the performance of the decision-making flowchart under different scenarios, and we encourage researchers and drug developers to apply and validate this simple approach in practice.
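For readers who want to see the mechanics, the sketch below fits a left-truncated Kaplan-Meier curve and computes the RMST with the lifelines package. The simulated data, the 30% NCC fraction, the entry times, and the 12-month horizon are all hypothetical, and the flowchart logic that decides whether to borrow is not shown.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.utils import restricted_mean_survival_time

rng = np.random.default_rng(1)

# Hypothetical pooled control data: follow-up time, event flag, and delayed
# entry time (0 for concurrent controls; > 0 for the borrowed NCC observation,
# which enters the risk set only from the calendar time the new arm opened).
durations = rng.exponential(12.0, size=100)
events = rng.random(100) < 0.7
entry = np.where(rng.random(100) < 0.3, rng.uniform(0.0, 4.0, 100), 0.0)
entry = np.minimum(entry, durations * 0.9)   # entry must precede exit

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events, entry=entry, label="pooled control")

# RMST up to a clinically chosen horizon (here 12 months, hypothetical)
rmst = restricted_mean_survival_time(kmf, t=12.0)
print(f"control RMST(12): {rmst:.2f}")
```

The `entry` argument is how lifelines expresses left truncation; the same fit on the treatment arm yields the RMST difference used for the comparison.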


Speaker: Mohamad Hasan (Janssen)
Title: Dynamic Regularized Bayes Borrowing Leveraging Efficiency of Estimation
Abstract: Clinical trials often encounter enrollment challenges. Borrowing historical data into the current trial may alleviate these challenges by reducing the number of participants and shortening trial duration while increasing statistical efficiency. However, naive borrowing can produce biased estimates and inflate the false positive rate (FPR). We propose a machine-learning-based dynamic borrowing approach, Regularized Bayes (RB), that borrows efficiently by minimizing mean squared error (MSE) to optimize the balance between bias and uncertainty. A regularization term links the current study with the historical data and dynamically calibrates the degree of borrowing. The final RB estimate is anticipated to improve the MSE relative to an independent analysis (IND) of the current study data (i.e., no borrowing). Simulation results showed that RB discounts historical information more adaptively as between-trial heterogeneity increases. In some situations, RB achieved up to 55% lower MSE relative to no borrowing and improved the true positive rate (TPR) over the meta-analytic prior by 15% to 30%, while providing protection against trial heterogeneity similar to the robust meta-analytic and commensurate priors.
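An illustrative plug-in version of the MSE-balancing idea appears below; it is not the speaker's Regularized Bayes implementation. The current-trial estimate is shrunk toward the historical one by the weight that minimizes an estimated MSE, so a larger current/historical discrepancy automatically means less borrowing. All numbers are hypothetical.

```python
import numpy as np

def borrowing_weight(est_c, se_c, est_h, se_h):
    """Weight w in [0, 1] on the historical estimate minimizing a plug-in MSE."""
    d2 = (est_h - est_c) ** 2   # estimated squared drift (the bias term)
    w = np.linspace(0.0, 1.0, 1001)
    mse = (1 - w) ** 2 * se_c**2 + w**2 * se_h**2 + w**2 * d2
    return w[np.argmin(mse)]

# Similar estimates -> substantial borrowing
print(borrowing_weight(est_c=0.30, se_c=0.15, est_h=0.28, se_h=0.08))
# Heterogeneous trials -> the weight shrinks toward no borrowing
print(borrowing_weight(est_c=0.30, se_c=0.15, est_h=0.80, se_h=0.08))
```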


Speaker: Robert Beckman (Georgetown University)
Title: Optimizing Proof of Concept Across Oncology Portfolios Subject to Budgetary Limitations
Abstract: This talk focuses on optimizing proof of concept across a portfolio of drugs and potential indications, rather than for a single drug or study. It provides a quantitative analysis of a portfolio management practice commonly performed qualitatively by diversified pharmaceutical companies.

Previous work has shown that individual randomized "proof-of-concept" (PoC) studies may be designed to maximize cost-effectiveness subject to an overall PoC budget constraint. Maximizing cost-effectiveness has also been considered for arrays of simultaneously executed PoC studies under a budget constraint. Defining Type III error as the opportunity cost of not performing a PoC study, we found that the optimal PoC study size is smaller than the traditional size when the number of equally valuable PoC studies exceeds the budget limit.

Here we evaluate the common pharmaceutical practice of awarding PoC study funds in two stages to allocate limited resources across a portfolio. Stage 1, the first wave of PoC studies, screens drugs to identify those permitted additional PoC studies in Stage 2. To receive Stage 2 funding, a drug must obtain at least one PoC success in Stage 1. This is a form of information borrowing between studies.
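A minimal Monte Carlo sketch of this two-stage gating rule is given below; all parameters (portfolio size, indications per drug, probability of success, Stage 1 allocation) are hypothetical, and the talk's full analysis also models time-adjusted cost, the benefit-cost ratio, and Type III error.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(n_drugs=4, n_indications=10, p_success=0.10,
             stage1_per_drug=3, n_sims=10_000):
    advanced, studies = 0, 0
    for _ in range(n_sims):
        for _ in range(n_drugs):
            # Stage 1: a drug needs >= 1 PoC success to earn Stage 2 funding
            s1 = rng.random(stage1_per_drug) < p_success
            studies += stage1_per_drug
            if s1.any():
                advanced += 1
                studies += n_indications - stage1_per_drug  # Stage 2 studies
    return advanced / (n_sims * n_drugs), studies / n_sims

frac_advanced, mean_studies = simulate()
print(f"drugs earning Stage 2: {frac_advanced:.2%}; "
      f"mean PoC studies per portfolio: {mean_studies:.1f}")
```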

We investigate whether and when this strategy improves efficiency sufficiently to offset the slower development it entails. We quantify the time-adjusted benefit, cost, and benefit-cost ratio, as well as the Type III error, as functions of the number of Stage 1 PoC studies and various baseline variables.

Relative to a single-stage PoC strategy, significant time-adjusted cost-effectiveness gains are seen when at least one of the drugs has a low probability of success (10%), and especially when there is either a large number of indications allowed per drug (10) or a large portfolio of drugs (4). In these cases, the recommended number of Stage 1 PoC studies ranges from 2 to 4, tracking approximately with an inflection point in the Type III error minimization curve. Small or relatively homogeneous portfolios do not benefit from this approach.