Chair: Rakhi Kilaru, MS, MBA (PPD clinical research business of Thermo Fisher Scientific)
Co-Chair: Yeh-Fong Chen, PhD (FDA)
Title: Covariate Adjustment in Clinical Trials
Abstract: Covariate adjustment allows prespecified prognostic factors, that is, factors that predict the likely outcome of a disease or ailment, to be incorporated into the statistical analyses, and can result in narrower confidence intervals and greater statistical power to detect treatment effects. Several considerations apply in covariate adjustment, such as the therapeutic area and indication, how many covariates to choose, how they should be modeled, how to handle missing data for baseline covariates, and special cases that include leveraging machine learning tools in covariate adjustment. Recent literature has proposed the use of a “super-covariate”, a patient-specific prediction of the control-group outcome, as a further covariate. Other considerations include the use of historical patient data to learn to construct an approximation to the optimal covariate that maximizes power in a future study, thereby leveraging rapidly improving machine learning technologies and increasingly vast quantities of individual participant data to improve clinical trials. This session assembles presentations that cover many of these considerations.
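As a schematic illustration of the session theme (not code from any talk), the following simulated sketch shows why adjusting for a prognostic baseline covariate narrows the standard error of the treatment-effect estimate. All data, effect sizes, and the simple ANCOVA model are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                       # prognostic baseline covariate
t = rng.integers(0, 2, size=n)               # randomized treatment assignment
y = 1.0 * t + 2.0 * x + rng.normal(size=n)   # outcome; true treatment effect = 1

def ols_se(X, y):
    """Return OLS coefficient estimates and their standard errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta, np.sqrt(np.diag(cov))

ones = np.ones(n)
# Unadjusted analysis: y ~ 1 + t
b_u, se_u = ols_se(np.column_stack([ones, t]), y)
# Covariate-adjusted analysis: y ~ 1 + t + x
b_a, se_a = ols_se(np.column_stack([ones, t, x]), y)

print(f"unadjusted effect {b_u[1]:.2f} (SE {se_u[1]:.3f})")
print(f"adjusted effect   {b_a[1]:.2f} (SE {se_a[1]:.3f})")  # noticeably smaller SE
```

Because the covariate explains much of the outcome variance, the adjusted analysis yields a smaller standard error, hence narrower confidence intervals and greater power, exactly the motivation stated above.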
Speaker: Gary G. Koch, PhD (Department of Biostatistics, University of North Carolina at Chapel Hill)
Title: A Review for Randomization-based Covariate Adjustment with Recent Extensions
Abstract: For randomized clinical trials with at least moderate sample size, adjusting comparisons between treatments for baseline covariables can be helpful for two reasons: enhancement of power, and removal of the influence of baseline imbalances in the covariables. Adjustment for baseline covariables can proceed either through generalized linear (or semi-parametric) models or through a randomization-based extension of Mantel-Haenszel methods. The former has the limitation of assumptions that may be debatable or unrealistic, although it can have the advantage of fully describing the relationship of an endpoint to both treatments and covariables in a general population. The latter has the advantage of no external assumptions (beyond its intrinsic assumptions of valid randomization and valid data), although it only enables inference for the comparison between treatments in the randomized population. The randomization-based method is invoked by constraining the differences between treatments in the covariable means to 0 within a multivariate vector that additionally includes the unadjusted treatment effect sizes for the endpoints under assessment. Such randomization-based analysis of covariance (RBANCOVA) is applicable to differences between means for continuous measurements (or their ranks), differences between proportions, log hazard ratios for time-to-event data, log incidence density ratios for counted event data, and rank measures of association for ordinal data. Extensions to account for stratification factors in the randomization are also available. Several examples illustrating RBANCOVA and its model-based counterparts are discussed.
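A schematic numpy sketch of the construction described above, assuming the standard form in which constraining the covariable mean differences to 0 yields the adjusted estimate d_y - V_yx V_xx^(-1) d_x; the simulated data and single covariable are illustrative, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
n1 = n0 = 200
# Simulated arm data: column 0 is the endpoint, column 1 a baseline covariable
x1 = rng.normal(size=n1)
x0 = rng.normal(size=n0)
trt = np.column_stack([0.5 + 1.5 * x1 + rng.normal(size=n1), x1])
ctl = np.column_stack([0.0 + 1.5 * x0 + rng.normal(size=n0), x0])

# Unadjusted difference vector: (endpoint difference, covariable difference)
d = trt.mean(axis=0) - ctl.mean(axis=0)
# Covariance of the difference vector from within-arm sample covariances
V = np.cov(trt, rowvar=False) / n1 + np.cov(ctl, rowvar=False) / n0

# Constraining the covariable difference to 0 gives the adjusted estimate
# d_y - V_yx V_xx^{-1} d_x and its correspondingly reduced variance
d_adj = d[0] - V[0, 1:] @ np.linalg.solve(V[1:, 1:], d[1:])
var_adj = V[0, 0] - V[0, 1:] @ np.linalg.solve(V[1:, 1:], V[1:, 0])

print(f"unadjusted: {d[0]:.3f} (var {V[0, 0]:.4f})")
print(f"adjusted:   {d_adj:.3f} (var {var_adj:.4f})")
```

The adjusted variance is never larger than the unadjusted one, and the reduction grows with the strength of the covariable-endpoint association.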
Speaker: Bernard Vrijens, PhD (Aardex Group)
Title: Maximizing Treatment Impact: No Covariate Outweighs the Effect of Drug Adherence
Abstract: In clinical research, understanding the factors that shape treatment outcomes is critical. Yet, no covariate has a greater impact than non-adherence to prescribed medication. Using case studies and data, we will explore how inconsistent drug exposure from patient non-adherence can complicate dose selection and distort trial results. Additionally, we will discuss strategies to better manage adherence in clinical trials.
Speaker: Klaus Kähler Holst, PhD (Novo Nordisk)
Speaker: Christian Bressen Pipper, PhD (Novo Nordisk)
Title: A Retake on the Analysis of Scores Truncated by Terminal Events
Abstract: Many large-scale RCTs record scores of disease progression up to a terminal event, such as death, that signifies irreversible progression. Both the scores and the timing of the event are crucial components when assessing treatment efficacy. In reality it is impossible to disentangle score progression from an event such as death, since most scores are not meaningful beyond death. This, however, is typically what is done in the analysis of score progression, where, upon imagining a scenario in which deaths can be prevented, score values are predicted beyond death. Not only is this scenario far from reality, but the assumptions on which the predictions are based are also very hard to justify.
In this work we propose to assess treatment interventions simultaneously on scores and the terminal event. Our proposal is founded on a natural data-generating mechanism without making assumptions about scores beyond the terminal event. We use modern semi-parametric statistical methods to provide robust and efficient inference for the risk of the terminal event and the expected score progression conditional on being without the terminal event at a pre-specified landmark time. Specifically, we derive semiparametric efficient one-step estimators based on the efficient influence functions when including baseline covariates. To calculate the estimators in practice, we plug in predictions from prespecified working regression models. We then derive simultaneous large-sample properties based on the efficient influence functions and further accommodate the fact that the working regression models may be misspecified. As a final step, we use the derived simultaneous behavior of our estimators to construct a powerful closed testing procedure that allows simultaneous assessment of the treatment effect on both the risk of the terminal event and score progression. A simulation study mimicking a large-scale outcome trial demonstrates the magnitude of the efficiency and power gains.
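To make the closed testing step concrete, here is a minimal sketch for two hypotheses, one on the terminal-event risk and one on score progression. The intersection hypothesis is tested with a simple Bonferroni rule purely for illustration; the talk instead derives a more powerful intersection test from the joint large-sample distribution of the estimators.

```python
def closed_test_two(p1, p2, alpha=0.05):
    """Closed testing for H1 (terminal-event risk) and H2 (score progression).

    An elementary hypothesis is rejected only if it and the intersection
    H1∩H2 are both rejected at level alpha, which controls the familywise
    error rate. The intersection is tested here by Bonferroni (an
    illustrative stand-in for the talk's joint-distribution-based test).
    """
    reject_intersection = min(p1, p2) <= alpha / 2
    return {
        "H1": reject_intersection and p1 <= alpha,
        "H2": reject_intersection and p2 <= alpha,
    }

print(closed_test_two(0.01, 0.04))  # intersection rejected: both H1 and H2 rejected
print(closed_test_two(0.04, 0.20))  # intersection not rejected: neither rejected
```

Note how H1 with p = 0.04 is rejected in the first call but not the second: the closure principle makes each elementary decision depend on the intersection test.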
Speaker: Yi Huang, PhD (Department of Mathematics and Statistics, University of Maryland, Baltimore County)
Title: One-Class Support Vector Machines Integrated Bayesian Approaches for External Control Borrowing in Clinical Trials
Abstract: The integration of real-world evidence into regulatory approval processes for treatments has expanded globally, with an increasing emphasis on synthetic control methods. These methods enhance the assessment of treatment efficacy and safety in clinical trials, particularly randomized trials with limited concurrent controls. Currently, no synthetic control method has proven superior at optimizing resource use with multiple, heterogeneous external datasets that mirror real-world conditions. Common approaches include propensity score (PS) methods for adjusting covariate balance, Bayesian approaches for controlling information borrowing based on outcomes, and various two-stage methods. We introduce a novel machine learning application: one-class support vector machines (OCSVM) to exclude external data with non-matching covariates, offering a potential alternative to PS methods. A comprehensive simulation study demonstrates that OCSVM integrated with Bayesian methods generally surpasses the performance of traditional PS methods integrated with Bayesian approaches, although OCSVM may encounter challenges in certain scenarios.
This is joint work with my student Ji Li and my colleague Dr. Lilly Yue and her research group.
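The screening idea can be sketched with scikit-learn's OneClassSVM: fit on the trial's baseline covariates, then keep only external controls the model classifies as compatible. This is an illustrative sketch on simulated data, not the authors' implementation; the kernel and nu settings are arbitrary choices.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)
# Baseline covariates of the concurrent (randomized) trial population
trial_X = rng.normal(loc=0.0, scale=1.0, size=(300, 3))
# External controls: half similar to trial patients, half shifted away
ext_similar = rng.normal(loc=0.0, scale=1.0, size=(100, 3))
ext_shifted = rng.normal(loc=4.0, scale=1.0, size=(100, 3))
external_X = np.vstack([ext_similar, ext_shifted])

# Fit a one-class SVM on trial covariates; nu bounds the fraction of
# trial patients treated as outliers (0.05 is an illustrative choice)
ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(trial_X)

# predict() returns +1 for covariates compatible with the trial
# population and -1 for external patients to exclude from borrowing
keep = ocsvm.predict(external_X) == 1
print(f"retained {keep.sum()} of {len(external_X)} external controls")
```

In a full analysis, only the retained external controls would then feed into the downstream Bayesian borrowing step described in the abstract.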