S4B – Conformal Inference for Uncertainty-Aware Machine Learning in Biomedical Research

Chairs: 
Ke Zhu, PhD (North Carolina State University)
Shu Yang, PhD (North Carolina State University)

Abstract: This session highlights advances in conformal inference, a model-free framework that provides reliable and interpretable uncertainty measures for machine learning. The speakers will introduce methods for censored and truncated survival data, semiparametric transformation models that achieve stronger conditional accuracy, fairness-aligned prediction regions for multivariate mixed outcomes, and a possibility-theoretic view that offers a clearer understanding of conformal prediction. Together, these talks demonstrate how uncertainty-aware approaches can improve trust and decision-making in high-stakes settings such as biomedical research.
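
As background for the talks below, here is a minimal sketch of the basic split conformal recipe (absolute-residual scores, marginal coverage) that each speaker extends in a different direction. It uses synthetic data and scikit-learn; none of the specifics are taken from the talks themselves.

```python
# Minimal split conformal prediction sketch (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = X[:, 0] + 0.5 * rng.normal(size=2000)

# Split into a proper training set and a calibration set.
X_tr, y_tr = X[:1000], y[:1000]
X_cal, y_cal = X[1000:], y[1000:]

model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

# Nonconformity scores on the calibration set: absolute residuals.
scores = np.abs(y_cal - model.predict(X_cal))

# Conformal quantile at miscoverage level alpha = 0.1.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Distribution-free interval for a new point: covers the true outcome
# with probability >= 1 - alpha, marginally over the predictors.
x_new = rng.normal(size=(1, 5))
pred = model.predict(x_new)[0]
print((pred - q, pred + q))
```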

Speaker: Jing Qin, PhD (National Institute of Allergy and Infectious Diseases, National Institute of Health)
Title: Extending Conformal Survival Prediction with Censored Data
Abstract: Censoring creates major challenges for constructing reliable predictive intervals in survival analysis. I begin by reviewing the conformal inference framework of Qin, Piao, Ning, and Shen (Biometrics, 2025), which uses a resampling-based calibration strategy to obtain distribution-free predictive intervals under right censoring. I then present an extension of their approach that improves stability, broadens applicability, and better accommodates complex censoring structures encountered in practice. I will also discuss how the method can be adapted to handle left-truncated and right-censored survival data. Simulations and data examples illustrate the robustness and practical utility of the extended framework.
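
For intuition about why censoring is hard here, consider a deliberately naive baseline (not the resampling calibration of Qin et al.): a one-sided split conformal lower predictive bound computed on the observed follow-up time min(T, C). Because the observed time never exceeds the true survival time, a bound valid for it remains valid for T, but it can be very conservative under heavy censoring. Synthetic data; scikit-learn assumed.

```python
# Naive conservative lower predictive bound (LPB) under right censoring.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 3))
T = np.exp(X[:, 0] + rng.normal(scale=0.5, size=n))   # true survival times
C = rng.exponential(scale=5.0, size=n)                # censoring times
Y = np.minimum(T, C)                                  # observed follow-up

X_tr, Y_tr = X[:1000], Y[:1000]
X_cal, Y_cal = X[1000:], Y[1000:]

model = GradientBoostingRegressor(random_state=0).fit(X_tr, Y_tr)

# One-sided scores: how far the observed time falls below the prediction.
scores = model.predict(X_cal) - Y_cal
alpha = 0.1
m = len(scores)
q = np.quantile(scores, np.ceil((m + 1) * (1 - alpha)) / m, method="higher")

# Since T >= Y always, P(T >= lpb) >= P(Y >= lpb) >= 1 - alpha, marginally.
x_new = rng.normal(size=(1, 3))
lpb = model.predict(x_new)[0] - q
print(max(lpb, 0.0))
```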

Speaker: Peter Hoff, PhD (Duke University)
Title: Conditionally-Valid Prediction Via Monotonic Transformation Models
Abstract: While conformal prediction procedures provide prediction sets that are guaranteed to maintain a target coverage rate marginally over values of the predictor variables, the conditional coverage of such procedures may be quite poor. As an alternative, we develop a semiparametric prediction procedure that provides good conditional coverage over a variety of data distributions. Our procedure is based on Bayesian inference using a rank likelihood, which permits automatic accommodation of data types that are discrete, ordinal, continuous, or some mix of these. Predictive inference using this approach is available via a very simple Gibbs sampling algorithm.
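
The gap between marginal and conditional coverage that motivates this talk is easy to see in simulation. The sketch below (not the rank-likelihood procedure itself) applies a fixed-width split conformal interval to heteroscedastic data: marginal coverage hits the target while conditional coverage drifts with x. Synthetic data, numpy only.

```python
# Marginal vs. conditional coverage of a fixed-width conformal interval.
import numpy as np

rng = np.random.default_rng(2)
n = 4000
x = rng.uniform(-2, 2, size=n)
y = x + (0.2 + np.abs(x)) * rng.normal(size=n)  # noise grows with |x|

x_cal, y_cal = x[2000:], y[2000:]

# "Oracle" mean model f(x) = x, so only the residual spread matters.
scores = np.abs(y_cal - x_cal)
alpha = 0.1
m = len(scores)
q = np.quantile(scores, np.ceil((m + 1) * (1 - alpha)) / m, method="higher")

# Fresh test data: check coverage of [x - q, x + q] overall and by region.
x_te = rng.uniform(-2, 2, size=20000)
y_te = x_te + (0.2 + np.abs(x_te)) * rng.normal(size=20000)
covered = np.abs(y_te - x_te) <= q
print("marginal coverage:", covered.mean())                       # ~0.90
print("coverage, |x| < 0.5:", covered[np.abs(x_te) < 0.5].mean()) # over-covers
print("coverage, |x| > 1.5:", covered[np.abs(x_te) > 1.5].mean()) # under-covers
```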

Speaker: Larry Han, PhD (Northeastern University)
Title: FACTOR: Fairness-Aligned Conformal Transport for Multivariate Mixed Outcomes
Abstract: In high-stakes domains, decisions often hinge on jointly predicting multiple, correlated outcomes of mixed type (continuous, ordinal, categorical). Existing multivariate conformal methods impose restrictive geometric assumptions, perform poorly with mixed outcomes, or lack subgroup-conditional guarantees, leading to inflated prediction regions and uneven coverage. We propose FACTOR (Fairness-Aligned Conformal Transport for Optimal Regions), a framework for constructing compact and equitable prediction regions. FACTOR learns an optimal-transport map in a latent space via normalizing flows with input-convex neural networks, providing a principled multivariate ranking without shape constraints. To enforce fairness, we synchronize latent-space ranks across subgroups, yielding distribution-free marginal coverage and a finite-sample bound on subgroup calibration error. A sliding-window cutoff procedure then minimizes prediction region volume while preserving validity. Empirically, on synthetic and six real-world benchmarks, FACTOR consistently achieves target coverage with reduced region volume and subgroup disparities (measured by KS distance) relative to state-of-the-art baselines at competitive runtime. The method also produces interpretable visualizations and conditional summaries, making FACTOR a practical tool for uncertainty quantification in multivariate, mixed-outcome settings.
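
As a reference point for the geometric restrictions the abstract mentions, the sketch below implements a standard shape-constrained baseline rather than FACTOR itself: multivariate split conformal with a Mahalanobis nonconformity score, which forces every prediction region to be the same ellipsoid around the point prediction. Synthetic data, numpy only.

```python
# Elliptical multivariate conformal baseline via Mahalanobis scores.
import numpy as np

rng = np.random.default_rng(3)
n = 3000
X = rng.normal(size=(n, 1))
B = np.array([[1.0, -0.5]])
Y = X @ B + rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=n)

X_tr, Y_tr = X[:1500], Y[:1500]
X_cal, Y_cal = X[1500:], Y[1500:]

# Linear point predictor fit by least squares.
coef, *_ = np.linalg.lstsq(X_tr, Y_tr, rcond=None)
resid_tr = Y_tr - X_tr @ coef
Sigma_inv = np.linalg.inv(np.cov(resid_tr, rowvar=False))

# Mahalanobis scores on the calibration residuals.
resid_cal = Y_cal - X_cal @ coef
scores = np.einsum("ij,jk,ik->i", resid_cal, Sigma_inv, resid_cal)

alpha = 0.1
m = len(scores)
q = np.quantile(scores, np.ceil((m + 1) * (1 - alpha)) / m, method="higher")

# The region at a new x is the fixed-shape ellipsoid
# {y : (y - x @ coef)' Sigma_inv (y - x @ coef) <= q}, >= 90% marginal coverage.
def in_region(x_new, y_new):
    r = y_new - x_new @ coef
    return float(r @ Sigma_inv @ r) <= q
```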

Speaker: Ryan Martin, PhD (North Carolina State University)
Title: Conformal Prediction Through a Possibility-Theoretic Lens: Easier Interpretation, More Functionality, and Automatic Extensions
Abstract: Conformal prediction (and frequentist methodology more generally) is often criticized as ad hoc and difficult to interpret compared to Bayesian counterparts, which rest on the more familiar probabilistic reasoning and calculus. But it turns out that conformal prediction is a special case of what’s called a “possibilistic inferential model (IM),” so it is both natural and beneficial to interpret it through a possibility-theoretic lens. In this talk, I’ll introduce possibility theory, possibilistic IMs, and their basic interpretation, then show how conformal prediction drops out as a special case. This connection not only aids interpretation but also gives conformal prediction more functionality: full-blown, provably reliable, Bayesian-like uncertainty quantification that facilitates formal decision-making. Finally, the derivation of conformal prediction from the IM perspective makes extensions to non-standard cases virtually automatic; I’ll illustrate this in the now well-known setting where the exchangeability assumption is violated by covariate shift.
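
The possibilistic reading has a concrete computational face: the conformal p-value at each candidate outcome y defines a plausibility contour, and the usual conformal set is its alpha-cut. Below is a minimal split-conformal sketch of this view (absolute-residual scores, synthetic data; not code from the talk).

```python
# Conformal prediction as a plausibility contour and its alpha-cut.
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(-2, 2, size=1000)
y = x + 0.3 * rng.normal(size=1000)
x_cal, y_cal = x[500:], y[500:]
cal_scores = np.abs(y_cal - x_cal)  # oracle mean model f(x) = x

def plausibility(x_new, y_candidate):
    """Conformal plausibility (p-value) of candidate value y at x_new."""
    s = abs(y_candidate - x_new)
    n = len(cal_scores)
    return (1 + np.sum(cal_scores >= s)) / (n + 1)

# The level-alpha conformal set is {y : plausibility(x_new, y) > alpha};
# scanning a grid of candidates recovers the usual interval.
alpha = 0.1
x_new = 0.5
grid = np.linspace(-2, 4, 601)
mask = np.array([plausibility(x_new, g) > alpha for g in grid])
pset = grid[mask]
print(pset.min(), pset.max())  # endpoints of the 90% conformal interval
```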