COMPAS, an acronym for Correctional Offender Management Profiling for Alternative Sanctions, refers to a proprietary risk-assessment algorithm and, more commonly in AI research, the associated dataset used for assessing recidivism risk in criminal justice. Developed by Northpointe (now Equivant), the algorithm predicts the likelihood that a defendant will re-offend. In the context of AI fairness, the COMPAS dataset, compiled by ProPublica from Broward County, Florida records for its 2016 "Machine Bias" investigation, has become a critical benchmark. It contains demographic information, criminal history, COMPAS scores, and observed recidivism outcomes for defendants, enabling researchers to analyze and identify attribute-specific biases in predictive models. Its significance stems from its role in highlighting and quantifying algorithmic bias in high-stakes decision-making, which has prompted extensive research into fair AI systems. Researchers and ML engineers in areas such as algorithmic fairness, explainable AI, and responsible AI frequently use the dataset to validate new fairness assessment tools and develop bias mitigation strategies.
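The kind of attribute-specific bias analysis described above can be sketched in a few lines. The snippet below is a minimal, hypothetical example (the records and field names are illustrative, not the released dataset's actual schema): it computes the false positive rate per demographic group, i.e. how often defendants who did not re-offend were nonetheless flagged as high risk, the disparity at the center of the COMPAS fairness debate.

```python
from collections import defaultdict

# Hypothetical COMPAS-style records: (group, high_risk_flag, reoffended).
# Values are illustrative; the real dataset has thousands of rows and
# richer features (age, priors, charge degree, ...).
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 0, 0), ("B", 0, 1),
]

def false_positive_rates(rows):
    """Per-group FPR: fraction flagged high risk among non-reoffenders."""
    flagged = defaultdict(int)    # high-risk flags among actual negatives
    negatives = defaultdict(int)  # actual negatives (no re-offense) per group
    for group, high_risk, reoffended in rows:
        if reoffended == 0:
            negatives[group] += 1
            flagged[group] += high_risk
    return {g: flagged[g] / negatives[g] for g in negatives}

print(false_positive_rates(records))
```

A large gap between the groups' rates signals the disparate-impact pattern this dataset is used to study; fairness toolkits generalize the same group-wise comparison to many metrics at once.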
COMPAS refers to a dataset derived from a risk assessment tool used in the criminal justice system, which has become a crucial benchmark for studying algorithmic bias. Researchers use it to test new methods for detecting and addressing unfairness in AI models, especially in critical decision-making contexts.