Chairs:
Zoe Hua, PhD (Servier Bio-Innovations)
Xiaojiang Zhan, PhD (Servier Bio-Innovations)
Abstract: Advances in machine learning and artificial intelligence, together with the increasing availability of real-world data, are transforming how clinical trials are designed and treatment strategies are optimized. This parallel session brings together experts from academia and industry to discuss cutting-edge methodologies that address key challenges in modern drug development and clinical decision support.
Dr. Yiyun Tang (Pfizer) will present novel reinforcement learning approaches for modeling time-dependent treatment effects, with applications to optimizing individualized treatment regimes. Dr. Yukang Jiang (University of North Carolina at Chapel Hill) will highlight AI-driven strategies for leveraging real-world data to enhance clinical decision support and evidence generation. Dr. Zhe Qu (Servier Bio-Innovations) will propose a non-crossing quantile regression method based on a deep learning framework for improving time-to-event prediction and discuss its practical use in the context of other quantile-based and non-quantile-based regression methods. Finally, Dr. Zhaohua Lu (Daiichi Sankyo) will demonstrate a novel workflow for developing clinical trial documents with agentic AI assistance.
Together, these talks showcase the practical application of innovative machine learning and AI methods in addressing emerging challenges in clinical research, bridging trial design, real-world evidence, and treatment optimization.
Speaker: Zhe Qu, PhD (Servier Bio-Innovations)
Title: Non-Crossing Quantile Regression for Time-to-Event Analysis: A Deep Learning Framework with Theoretical Guarantee
Abstract: Deep learning (DL) has garnered increasing attention in time-to-event prediction due to its ability to model complex nonlinear relationships while offering greater flexibility than traditional methods. In this work, we propose a non-crossing quantile regression framework that estimates multiple quantiles of event time simultaneously from right-censored survival data while ensuring valid quantile ordering. Unlike existing approaches that rely on multilayer perceptrons (MLPs), we leverage Kolmogorov-Arnold Networks (KAN) for efficient function approximation and Transformers for capturing intricate feature dependencies through self-attention. To provide theoretical insights, we establish upper bounds on the prediction error of our quantile estimators. We evaluate our framework on both simulated and real-world datasets, benchmarking its performance against existing quantile-based and non-quantile-based methods.
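To make the non-crossing constraint concrete, here is a minimal PyTorch sketch (an illustration under simplifying assumptions, not the KAN/Transformer framework described above): quantiles are parameterized as a base level plus cumulative non-negative increments, so the predicted quantiles cannot cross, and a pinball loss is averaged over quantile levels; the censoring adjustments a survival setting requires are omitted.

```python
# Minimal sketch: non-crossing quantile outputs via cumulative softplus gaps.
# Illustration only -- a plain MLP stands in for the KAN/Transformer components.
import torch
import torch.nn as nn

class NonCrossingQuantileHead(nn.Module):
    """Predicts K quantiles as a base level plus cumulative non-negative gaps,
    which guarantees q_1 <= q_2 <= ... <= q_K for every input."""
    def __init__(self, in_dim, n_quantiles, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.base = nn.Linear(hidden, 1)                # lowest quantile
        self.gaps = nn.Linear(hidden, n_quantiles - 1)  # gaps between adjacent quantiles

    def forward(self, x):
        h = self.body(x)
        base = self.base(h)
        gaps = nn.functional.softplus(self.gaps(h))     # force gaps >= 0
        return torch.cat([base, base + torch.cumsum(gaps, dim=-1)], dim=-1)

def pinball_loss(pred, target, taus):
    """Check (pinball) loss averaged over quantile levels; censoring weights omitted."""
    diff = target.unsqueeze(-1) - pred                  # shape (batch, K)
    return torch.mean(torch.maximum(taus * diff, (taus - 1.0) * diff))
```

Here `taus` would be a tensor of the target quantile levels, e.g. `torch.tensor([0.25, 0.5, 0.75])`, and `target` the observed event time on whatever scale the model is trained.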
Speaker: Yiyun Tang, PhD (Pfizer)
Title: Reinforcement Learning and Time-Dependent Treatment Effects for Optimizing Treatment Regimens
Abstract: Reinforcement learning (RL) encompasses methods designed to learn optimal policies for sequential decision-making problems that maximize long-term outcomes. Off-Policy Evaluation (OPE), originating from RL and causal inference, enables estimation of policy performance using existing data without conducting new experiments. While OPE has been widely applied in high-dimensional, offline decision-making in computer science, its potential in clinical research is significant. Many clinical trials inherently involve sequential decisions such as managing time-varying treatment effects, covariates, patient heterogeneity, and dynamic regimens (e.g. dose modifications).
A key advantage of applying OPE in drug development is the ability to estimate outcomes for alternative dosing or treatment strategies without initiating additional trials. This approach allows reconstruction of expected outcomes under alternative policies and informs future trial design.
In three simulated case studies across diverse clinical settings and objectives, we applied and compared multiple OPE methodologies, including the direct method, importance sampling, and doubly robust estimation. These case studies explored:
- Comparative effectiveness of alternative dosing regimens
- Evaluation of adverse event management strategies
- Optimization of biomarker- or response-adaptive treatment strategies (e.g., determining optimal thresholds)
By leveraging existing data and advanced AI tools, OPE provides a framework to estimate and explore outcomes of novel/alternative treatment regimens efficiently. This methodology offers the potential to address a wide range of clinical research questions with reasonable sample sizes, accelerating evidence generation and optimizing treatment strategies.
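As a concrete illustration of the simplest of these estimators, the sketch below implements trajectory-wise importance sampling OPE in plain Python/NumPy; it is a toy under assumed inputs (policies exposed as probability functions, fully observed trajectories), not the case-study implementations referenced above.

```python
# Toy sketch of trajectory-wise importance sampling for off-policy evaluation.
import numpy as np

def importance_sampling_ope(trajectories, target_policy, behavior_policy, gamma=1.0):
    """Estimate the value of target_policy from data collected under behavior_policy.

    trajectories: list of trajectories, each a list of (state, action, reward) tuples.
    target_policy / behavior_policy: assumed callables returning the probability of
    taking `action` in `state` under the respective policy.
    """
    values = []
    for traj in trajectories:
        weight, ret, discount = 1.0, 0.0, 1.0
        for state, action, reward in traj:
            # Reweight by how much more (or less) likely the candidate policy
            # is to take the observed action than the data-generating policy.
            weight *= target_policy(state, action) / behavior_policy(state, action)
            ret += discount * reward
            discount *= gamma
        values.append(weight * ret)
    return float(np.mean(values))
```

Direct-method and doubly robust estimators extend this idea by adding an outcome (Q-function) model, which reduces the variance that pure importance weighting suffers on long trajectories.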
Speaker: Zhaohua Lu, PhD (Daiichi Sankyo, Inc.)
Title: Agentic AI Assistance for Developing Clinical Trial Documents
Abstract: Developing statistical analysis plans (SAPs) is often time-consuming, especially for statisticians who are new to the pharmaceutical industry. As standard practice, a complete SAP must be finalized before first-subject-in; delays at this stage can impede timely integration of statistical strategy into study design and execution. Despite the availability of a standard SAP template, inconsistencies in drafting and interpretation remain common, potentially affecting document quality and creating misalignment with programming teams. As pressures grow to accelerate development while controlling cost and timelines, a smarter, faster, and more consistent approach is essential.
We present an agentic AI system that uses retrieval-augmented generation (RAG) and large language model (LLM) tools to automatically generate study-specific SAPs. To mitigate LLM hallucinations and enhance consistency, we evaluated multiple optimization strategies within the RAG architecture to deliver stable and reliable outputs. Our approach applies semantic search, vector embeddings, and domain-tuned retrieval to extract protocol-specific information, which is then integrated into a harmonized SAP template using the LLM and supporting tools. AI agents equipped with these capabilities further improve the assistant’s flexibility, autonomy, and decision-making intelligence. The resulting system provides substantial gains in efficiency, cost savings, and compliance—while preserving scientific rigor through human-in-the-loop interactive review.
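For concreteness, the sketch below shows the shape of the retrieval-and-prompting step in a RAG pipeline of this kind; the `embed()` stub, the prompt wording, and the function names are hypothetical placeholders for illustration, not the system's actual models, templates, or tools.

```python
# Hypothetical sketch of RAG-style retrieval for drafting an SAP section.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy stand-in for a sentence-embedding model: returns a pseudo-random
    unit vector per text. A real system would call a domain-tuned encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

def retrieve(query: str, protocol_chunks: list[str], k: int = 3) -> list[str]:
    """Return the k protocol chunks most similar to the query (cosine similarity)."""
    q = embed(query)
    scores = [float(q @ embed(chunk)) for chunk in protocol_chunks]
    top = np.argsort(scores)[::-1][:k]
    return [protocol_chunks[i] for i in top]

def draft_sap_section(query: str, protocol_chunks: list[str], template_section: str) -> str:
    """Assemble an LLM prompt that grounds generation in retrieved protocol text."""
    context = "\n\n".join(retrieve(query, protocol_chunks))
    return ("Using only the protocol excerpts below, fill in this SAP template section.\n\n"
            f"Template:\n{template_section}\n\nProtocol excerpts:\n{context}")
```

In a system like the one described, this retrieval step would sit behind AI agents that decide which sections to draft and which sources to query, with outputs passed to a human reviewer.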
Speaker: Yukang Jiang, PhD (University of North Carolina at Chapel Hill)
Title: Real-World Evidence with Real-World Data: AI Strategies for Clinical Decision Support
Abstract: This presentation provides an overview of artificial intelligence (AI) strategies for transforming real-world data (RWD) into reliable real-world evidence (RWE) that supports clinical decision-making and trial design. Building on recent advances in multimodal modeling and foundation model development, we summarize how electronic health records (EHR), biobank, and clinical trial data can be harmonized to enable scalable and interpretable AI applications. A focus will be placed on frameworks that integrate structured clinical data, imaging biomarkers, and genetic information to identify disease risk patterns and directional multimorbidity networks, uncovering temporal and causal relationships among co-occurring diseases.
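To illustrate what a directional multimorbidity network can look like in code (a toy sketch with assumed inputs, not the frameworks summarized above), the snippet below builds a directed graph whose edges point from earlier to later first diagnoses across patient timelines.

```python
# Toy sketch: directed multimorbidity graph from per-patient diagnosis timelines.
from collections import Counter
import networkx as nx

def build_multimorbidity_network(patient_timelines, min_patients=2):
    """patient_timelines: one list per patient of (diagnosis_date, disease_code) pairs,
    with dates comparable by sorting (e.g., datetime.date or ISO strings).
    Adds a directed edge A -> B when A's first diagnosis precedes B's in at least
    `min_patients` patients; the edge weight is that patient count."""
    pair_counts = Counter()
    for timeline in patient_timelines:
        first_seen = {}
        for date, code in sorted(timeline):
            first_seen.setdefault(code, date)   # keep the earliest date per disease
        ordered = sorted(first_seen, key=first_seen.get)
        # Count each ordered disease pair (earlier -> later) once per patient.
        for i, a in enumerate(ordered):
            for b in ordered[i + 1:]:
                pair_counts[(a, b)] += 1
    graph = nx.DiGraph()
    for (a, b), n in pair_counts.items():
        if n >= min_patients:
            graph.add_edge(a, b, weight=n)
    return graph
```

Real analyses would of course add statistical adjustment (e.g., for age, follow-up time, and confounding) before interpreting such edges temporally or causally.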