Centralized Statistical Monitoring Tools in Trial Conduct
Chair: Rakhi Kilaru (PPD)
Vice Chair: Colleen Russell (Parexel)
Speaker: Tim Rolfe (GSK)
Title: Harnessing the power of centralized statistical monitoring in trial conduct
Abstract: The Centralized Statistical Monitoring and Quality Tolerance Limits (CSM/QTL) special interest group is assembling collaborative approaches to centralized monitoring that adapt to study design and data sources, are resilient to environmental disruptions, and use a combination of statistical monitoring techniques and visualizations to drive an efficient monitoring strategy, resulting in tangible benefits to quality and resources. Environmental disruptions such as the COVID-19 pandemic and the Ukraine crisis caused many challenges for clinical trials. On-site source data verification (SDV) in multicenter clinical trials became difficult due to travel bans and social distancing, resulting in a fundamental shift from the traditional on-site monitoring paradigm to one more inclusive of CSM.
Commonly used on-site monitoring techniques are not optimal for detecting the issues with the greatest potential to jeopardize the validity of study results: data fabrication, tampering, non-random data distributions, and scientific incompatibility between key measures of interest. Quality tolerance limits (ICH E6(R2)) are used to proactively control systematic risks to factors critical to quality. QTLs combined with statistical monitoring techniques reduce spending on inefficient on-site monitoring practices, freeing resources to increase sample sizes or conduct more trials.
Complementary to Quality by Design (QbD) principles, this talk will provide a framework for risk assessment and for identifying the data and information deemed critical for quality tolerance limits and centralized statistical monitoring. Examples of studies using CSM and QTLs will be shared, with recommendations on how best to harness the power of statistical monitoring tools in post-pandemic trial conduct to better manage risks and achieve targeted actions.
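To make the QTL mechanism above concrete, here is a minimal sketch of how a trial-level metric might be checked against a quality tolerance limit with an earlier secondary (warning) threshold. The metric name, thresholds, and message strings are hypothetical illustrations, not part of any speaker's actual system.

```python
from dataclasses import dataclass

@dataclass
class QTL:
    """Quality tolerance limit for one trial-level metric.
    All names and thresholds here are hypothetical examples."""
    metric: str
    limit: float            # QTL: crossing it signals a systematic risk
    secondary_limit: float  # earlier warning threshold below the QTL

    def evaluate(self, observed_rate: float) -> str:
        # Compare the observed trial-level rate against both thresholds
        if observed_rate >= self.limit:
            return "QTL excursion: investigate root cause and document"
        if observed_rate >= self.secondary_limit:
            return "Approaching QTL: increase monitoring"
        return "Within tolerance"

# Hypothetical usage: flag a premature-discontinuation rate
qtl = QTL("premature discontinuation rate", limit=0.20, secondary_limit=0.15)
status = qtl.evaluate(0.17)  # falls between the warning and the QTL
```

The two-tier design mirrors the ICH E6(R2) idea of acting on trends before a systematic issue materializes, rather than only after a limit is breached.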
Speaker: Jianchang Lin (Takeda)
Title: Machine Learning Enabled Monitoring of Clinical Trials in Real Time Via Probabilistic Programming
Abstract: Monitoring of clinical trials by sponsors is a critical quality control measure to ensure the scientific integrity of trials and the safety of subjects. With the increasing complexity of data collection (greater volume, variety, and velocity) and the use of contract research organizations (CROs) and vendors, sponsor oversight of trial site performance and clinical trial data has become challenging, time-consuming, and extremely expensive. Across clinical development phases (excluding estimated site overhead costs and sponsors' costs to monitor the study), trial site monitoring is among the top three cost drivers of clinical trial expenditures (9–14% of total cost).
In this presentation, we will introduce the machine learning-based SMRT platform, developed through the recent MIT-Takeda AI program collaboration, which can help enhance operational efficiency in clinical trial oversight and monitoring. Specifically, SMRT achieves these results via probabilistic programming, an emerging AI paradigm that offers an alternative scaling route and can be more data-efficient, compute-efficient, and robust than deep learning. SMRT automatically learns structured, multivariate, generative models for clinical trial data by inferring and updating the source code of probabilistic programs, and it detects anomalies by calculating conditional probabilities of new data in real time. Through advanced predictive analytics and automation, the platform helps improve patient safety and site performance by detecting potential issues.
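The core idea of scoring new records under a learned generative model can be sketched very simply. The toy class below fits an independent Gaussian per variable and flags records whose log-probability falls below a threshold; this is only an illustration of the principle, as the actual SMRT platform infers far richer probabilistic-program models rather than fixed Gaussians.

```python
import numpy as np

class SimpleGenerativeMonitor:
    """Toy stand-in for generative-model anomaly detection:
    fit a model to historical trial data, then score incoming
    records by log-probability; low-probability records are flagged.
    (Illustration only; SMRT learns probabilistic programs, not
    independent Gaussians.)"""

    def fit(self, X):
        # Estimate a Gaussian per variable from historical data
        self.mu = X.mean(axis=0)
        self.sigma = X.std(axis=0) + 1e-9  # guard against zero variance
        return self

    def log_prob(self, x):
        # Log-density of one record under the independent-Gaussian model
        z = (x - self.mu) / self.sigma
        return float(np.sum(-0.5 * z**2 - np.log(self.sigma * np.sqrt(2 * np.pi))))

    def is_anomaly(self, x, threshold):
        # Flag records whose likelihood is implausibly low
        return self.log_prob(x) < threshold
```

In a real-time setting, the same scoring step would run as each new record arrives, so a site submitting implausible data can be surfaced immediately rather than at the next scheduled review.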
Speaker: Xiaofeng (Tina) Wang (FDA)
Title: FDA Experiences with a Centralized Statistical Monitoring Tool
Abstract: FDA has issued two guidances, “Guidance for Industry: Oversight of Clinical Investigations — A Risk-Based Approach to Monitoring” and “A Risk-Based Approach to Monitoring of Clinical Investigations: Questions and Answers (Draft Guidance for Industry)”. These guidances assist sponsors of clinical investigations in developing risk-based monitoring strategies and plans for investigational studies of medical products, including human drug and biological products, medical devices, and combinations thereof. We describe our experience with a centralized statistical monitoring platform used as part of a Cooperative Research and Development Agreement (CRADA) between CluePoints and FDA. The centralized statistical monitoring approach employed in the CRADA is based on a large number of statistical tests performed on all submitted subject-level data in order to identify sites that differ from the others. An overall data inconsistency score is calculated from a high-dimensional p-value matrix to assess how inconsistent one site's data are with the data from all sites. Sites are ranked by the data inconsistency score (-log(p), where p is an aggregated p-value). Results from a deidentified application are provided to demonstrate typical data anomaly findings from the Statistical Monitoring Applied to Research Trials (SMART) analysis. Sensitivity analyses are performed after excluding laboratory and questionnaire data. Graphics from deidentified subject-level trial data illustrate abnormal data patterns. The analyses are performed separately by center, country/region, and patient. Key Risk Indicator (KRI) analysis is conducted for the selected endpoint. Possible causes of data anomalies are discussed.
This data-driven approach can be effective and efficient in selecting sites that exhibit data anomalies, and it provides insights to statistical reviewers for conducting sensitivity analyses, subgroup analyses, and site-by-treatment effect explorations. However, challenges remain, including messy data, lack of conformance to data standards, and disruptions from the COVID-19 pandemic.
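The "many tests, then an aggregated -log(p) score per site" idea described in the abstract can be sketched with standard tests. The example below compares each site against all other sites on each variable and combines the per-variable p-values with Fisher's method; the choice of Mann-Whitney tests and Fisher aggregation is an assumption for illustration, not the SMART engine's proprietary test battery or aggregation.

```python
import numpy as np
from scipy import stats

def inconsistency_scores(data, site_ids):
    """Toy site-level inconsistency score: for each site, test each
    variable against the pooled data of all other sites, aggregate
    the p-values, and report -log(p) so larger scores flag more
    atypical sites. (Illustrative sketch; the SMART analysis uses a
    much richer battery of tests and its own aggregation.)"""
    scores = {}
    for s in np.unique(site_ids):
        mask = site_ids == s
        pvals = []
        for j in range(data.shape[1]):
            # Two-sample test: site s vs. all other sites, per variable
            p = stats.mannwhitneyu(data[mask, j], data[~mask, j]).pvalue
            pvals.append(p)
        # Fisher's method aggregates the per-variable p-values
        _, p_agg = stats.combine_pvalues(pvals, method="fisher")
        scores[s] = -np.log(max(p_agg, 1e-300))  # guard against underflow
    return scores
```

Ranking sites by this score surfaces the most atypical sites first, which is the monitoring workflow the abstract describes: reviewers then drill into the flagged sites with graphics and sensitivity analyses rather than visiting every site.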
Discussant: Paul Schuette (FDA)