Post-hoc calibration adjusts a trained machine learning model's confidence scores so they match the empirical probability of being correct, for example so that predictions made with 80% confidence are right about 80% of the time. Because it rescales scores without changing which class the model predicts, it reduces miscalibration without affecting the model's accuracy.

Calibrated confidence makes models more trustworthy in safety-critical applications, such as assistive technologies, where a system may be designed to act only when its confidence exceeds a threshold. If the reported confidence is reliable, that threshold has a meaningful probabilistic interpretation.
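One widely used post-hoc calibration method is temperature scaling: the model's logits are divided by a single scalar T fit on held-out data, which reshapes confidences while leaving the predicted class unchanged. The sketch below is a minimal illustration using NumPy and SciPy on synthetic logits; the data, function names, and optimizer bounds are illustrative assumptions, not part of any specific library's API.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; subtracting the row max is for
    # numerical stability and does not change the result.
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(logits, labels):
    """Find the temperature T that minimizes negative log-likelihood
    on held-out data. Dividing logits by T leaves the argmax (and
    therefore accuracy) unchanged; only the confidences move."""
    def nll(T):
        probs = softmax(logits, T)
        return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()
    result = minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded")
    return result.x

# Synthetic, deliberately overconfident logits for illustration only.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=500)
logits = rng.normal(size=(500, 3)) * 5.0
logits[np.arange(500), labels] += 2.0  # weak signal, inflated scale

T = fit_temperature(logits, labels)
calibrated = softmax(logits, T)
```

Because T is a single scalar, the fit is cheap and hard to overfit; a T greater than 1 softens overconfident predictions, while a T less than 1 sharpens underconfident ones.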
Also known as: confidence calibration, probability calibration, reliability calibration