Recent advancements in uncertainty quantification are reshaping how machine learning models assess and communicate their confidence in predictions, particularly in safety-critical applications. Researchers are increasingly focusing on enhancing the calibration of model outputs, with new methods leveraging concepts from statistical mechanics and quantum mechanics to improve the reliability of uncertainty estimates. For instance, approaches that utilize pre-softmax logits and energy-based metrics are demonstrating improved adaptability in conformal prediction frameworks, while complex-valued representations are showing promise in better aligning model confidence with human perception. Additionally, Bayesian techniques are being scaled to large models through innovative routing strategies, enabling effective uncertainty quantification without significant computational overhead. These developments not only enhance the interpretability of model predictions but also address practical challenges in domains like healthcare and autonomous systems, where understanding uncertainty is crucial for decision-making and risk management. The field is moving toward more robust, efficient, and theoretically grounded methods that promise to improve the deployment of machine learning systems in real-world scenarios.
Uncertainty quantification is essential for deploying machine learning models in high-stakes domains such as scientific discovery and healthcare. Conformal Prediction (CP) provides finite-sample cover...
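The finite-sample coverage guarantee that makes CP attractive can be illustrated with a standard split-conformal construction for regression. This is a generic sketch, not the truncated paper's specific method: nonconformity scores are absolute residuals on a held-out calibration set, and the quantile is corrected for finite samples.

```python
import numpy as np

def split_conformal_interval(cal_preds, cal_labels, test_preds, alpha=0.1):
    """Split conformal prediction for regression.

    Absolute residuals on a held-out calibration set serve as
    nonconformity scores; the resulting intervals have marginal
    coverage >= 1 - alpha, distribution-free and in finite samples.
    """
    scores = np.abs(cal_labels - cal_preds)  # nonconformity scores
    n = len(scores)
    # Finite-sample-corrected quantile level: ceil((n+1)(1-alpha)) / n.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(scores, q_level, method="higher")
    return test_preds - q_hat, test_preds + q_hat

# Toy usage: a "model" that predicts y = x, with noisy calibration labels.
rng = np.random.default_rng(0)
x_cal = rng.uniform(0, 1, 500)
y_cal = x_cal + rng.normal(0, 0.1, 500)
lo, hi = split_conformal_interval(x_cal, y_cal, np.array([0.5]), alpha=0.1)
# The interval is centered on the point prediction with width 2 * q_hat.
```

The width of the interval (twice the calibrated quantile of the residuals) is the efficiency criterion that later entries in this digest discuss.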
Accurate uncertainty quantification is crucial for making reliable decisions in various supervised learning scenarios, particularly when dealing with complex, multimodal data such as images and text. ...
Modern deep neural networks achieve high predictive accuracy but remain poorly calibrated: their confidence scores do not reliably reflect the true probability of correctness. We propose a quantum-ins...
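The miscalibration this abstract describes is typically measured with Expected Calibration Error (ECE), which compares a model's confidence to its empirical accuracy within confidence bins. A minimal sketch of that standard metric (not the paper's quantum-inspired method, whose details are truncated here):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected Calibration Error: a frequency-weighted average of the
    gap between mean confidence and empirical accuracy per confidence
    bin. A well-calibrated model has ECE near zero."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.sum() / n * gap
    return ece

# A perfectly calibrated toy case: confidence 0.8, empirical accuracy 4/5.
conf = np.array([0.8, 0.8, 0.8, 0.8, 0.8])
corr = np.array([1.0, 1.0, 1.0, 1.0, 0.0])
ece = expected_calibration_error(conf, corr)  # ~0: confidence matches accuracy
```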
Uncertainty quantification has emerged as an effective approach to closed-book hallucination detection for LLMs, but existing methods are largely designed for short-form outputs and do not generalize ...
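A common short-form baseline in this line of work scores uncertainty by the entropy of the distribution over answers sampled from the model: high entropy suggests the model is guessing. The sketch below uses exact-match clustering of answers purely for illustration; published methods (and the long-form extensions this abstract alludes to) use semantic clustering instead.

```python
from collections import Counter
import math

def answer_entropy(samples):
    """Heuristic hallucination signal: entropy of the empirical
    distribution over distinct sampled answers. Exact string match
    stands in for semantic equivalence, which real methods estimate
    with an entailment or clustering model."""
    counts = Counter(samples)
    n = len(samples)
    return -sum(c / n * math.log(c / n) for c in counts.values())

confident = answer_entropy(["Paris"] * 5)                       # 0: all agree
uncertain = answer_entropy(["Paris", "Lyon", "Nice", "Rome", "Oslo"])  # ln(5)
```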
The merit of Conformal Prediction (CP), as a distribution-free framework for uncertainty quantification, depends on generating prediction sets that are efficient, reflected in small average set sizes,...
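For classification, the efficiency criterion named here (small average set size at a fixed coverage level) is easy to make concrete. A threshold-style conformal sketch, using 1 minus the true-class softmax probability as the nonconformity score; this is the textbook construction, not the truncated paper's proposal:

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Threshold conformal prediction sets for classification.
    Score = 1 - softmax probability of the true class on calibration
    data. Smaller average set size at the same coverage level means
    a more efficient (more informative) predictor."""
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    n = len(scores)
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(scores, q_level, method="higher")
    sets = test_probs >= 1.0 - q_hat      # boolean label-membership matrix
    return sets, sets.sum(axis=1).mean()  # sets and average set size

# Toy usage: a confident, well-calibrated 3-class model.
cal_probs = np.tile([0.9, 0.05, 0.05], (200, 1))
cal_labels = np.zeros(200, dtype=int)       # class 0 is always correct
test_probs = np.array([[0.05, 0.9, 0.05]])  # confident test prediction
sets, avg_size = conformal_sets(cal_probs, cal_labels, test_probs)
```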
Foundation models are increasingly being deployed in contexts where understanding the uncertainty of their outputs is critical to ensuring responsible deployment. While Bayesian methods offer a princi...
We present a comprehensive ablation of nine finite-sample bound families for selective prediction with risk control, combining concentration inequalities (Hoeffding, Empirical Bernstein, Clopper-Pears...
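One of the bound families named here, Hoeffding's inequality, suffices to sketch how selective prediction with risk control works: accept only predictions above a confidence threshold, and certify the accepted subset's risk with an upper confidence bound. For simplicity this sketch scans thresholds without a multiplicity correction; a rigorous procedure would use a Bonferroni or fixed-sequence adjustment, and the other bound families in the abstract would simply replace the UCB formula.

```python
import numpy as np

def hoeffding_risk_ucb(losses, delta=0.05):
    """One-sided Hoeffding upper confidence bound on expected loss.
    For n i.i.d. losses in [0, 1], the true risk exceeds
    mean + sqrt(ln(1/delta) / (2n)) with probability at most delta."""
    n = len(losses)
    return losses.mean() + np.sqrt(np.log(1.0 / delta) / (2 * n))

def select_threshold(confidences, losses, target_risk, delta=0.05):
    """Return the smallest confidence threshold whose accepted subset
    has certified (UCB) risk <= target_risk, or None if none exists.
    NOTE: scanning thresholds like this ignores multiple testing."""
    for tau in np.sort(np.unique(confidences)):
        kept = confidences >= tau
        if kept.any() and hoeffding_risk_ucb(losses[kept], delta) <= target_risk:
            return tau
    return None

# Toy usage: losses vanish once confidence exceeds 0.5.
conf = np.linspace(0, 1, 1000)
losses = (conf < 0.5).astype(float)
tau = select_threshold(conf, losses, target_risk=0.1)
```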
Neural networks are a commonly used approach to replace physical models with computationally cheap surrogates. Parametric uncertainty quantification can be included in training, assuming that an accur...
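The surrogate-with-uncertainty idea can be sketched with an ensemble trained on bootstrap resamples, where the spread across members gives a heuristic epistemic-uncertainty estimate. Polynomial regressors stand in for the neural-network surrogates the abstract discusses, purely to keep the example self-contained; the ensembling logic is identical.

```python
import numpy as np

def bootstrap_ensemble_surrogate(x, y, n_members=20, degree=3, seed=0):
    """Cheap surrogate with uncertainty: fit an ensemble of polynomial
    regressors on bootstrap resamples of the training data. The
    member-to-member spread is a heuristic epistemic-uncertainty
    estimate (polynomials stand in for neural surrogates here)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, n, n)  # bootstrap resample with replacement
        members.append(np.polynomial.Polynomial.fit(x[idx], y[idx], degree))
    return members

def predict_with_uncertainty(members, x_new):
    """Predictive mean and member-spread (std) at new inputs."""
    preds = np.stack([m(x_new) for m in members])
    return preds.mean(axis=0), preds.std(axis=0)

# Surrogate for a toy "physical model" f(x) = sin(x), observed with noise.
rng = np.random.default_rng(1)
x = rng.uniform(0, np.pi, 200)
y = np.sin(x) + rng.normal(0, 0.05, 200)
members = bootstrap_ensemble_surrogate(x, y)
mean, std = predict_with_uncertainty(members, np.array([np.pi / 2]))
```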
Uncertainty quantification is central to safe and efficient deployments of deep learning models, yet many computationally practical methods lack rigorous theoretical motivation. Random network...