Online Fairness Auditing through Iterative Refinement
[PDF] [Publication Link] [Video]
Abstract: A sizable proportion of deployed machine learning models make their decisions in a black-box manner. Such decision procedures are susceptible to intrinsic biases, prompting calls for accountability in deployed decision systems. In this work, we investigate mechanisms that help audit claimed mathematical guarantees about the fairness of such systems. We construct AVOIR, a system that reduces the number of observations required for the runtime monitoring of probabilistic assertions over fairness metrics specified on decision functions associated with black-box AI models. AVOIR provides an adaptive process that automates the inference of probabilistic guarantees associated with estimating a wide range of fairness metrics. In addition, AVOIR enables the exploration of fairness violations aligned with governance and regulatory requirements. We conduct case studies with fairness metrics on three different datasets and demonstrate how AVOIR can help detect and localize fairness violations and ameliorate issues arising from faulty fairness metric design.
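
To make the idea of "runtime monitoring of probabilistic assertions over fairness metrics" concrete, here is a minimal sketch of one such monitor. It is not the AVOIR implementation: it checks a single demographic-parity assertion using simple Hoeffding bounds, whereas AVOIR automates guarantee inference across many metrics and adaptively reduces the observations needed. The class and method names (`DemographicParityMonitor`, `observe`, `check`) and the parameters `epsilon` and `delta` are illustrative choices, not from the paper.

```python
import math
from collections import defaultdict

class DemographicParityMonitor:
    """Illustrative runtime monitor (not AVOIR itself).

    Streams (group, decision) pairs from a black-box model and checks
    the probabilistic assertion
        |P(decision=1 | group=a) - P(decision=1 | group=b)| <= epsilon
    certified at confidence 1 - delta via Hoeffding bounds.
    """

    def __init__(self, epsilon=0.1, delta=0.05):
        self.epsilon = epsilon              # tolerated disparity in positive rates
        self.delta = delta                  # allowed failure probability per bound
        self.counts = defaultdict(int)      # observations seen per group
        self.positives = defaultdict(int)   # positive decisions seen per group

    def observe(self, group, decision):
        """Record one decision made by the monitored model."""
        self.counts[group] += 1
        self.positives[group] += int(decision)

    def _half_width(self, n):
        # Hoeffding bound: the empirical rate is within +/- this of the
        # true rate with probability at least 1 - delta.
        return math.sqrt(math.log(2 / self.delta) / (2 * n))

    def check(self, group_a, group_b):
        """Return 'violation', 'satisfied', or 'undecided' (need more data)."""
        na, nb = self.counts[group_a], self.counts[group_b]
        if na == 0 or nb == 0:
            return "undecided"
        rate_a = self.positives[group_a] / na
        rate_b = self.positives[group_b] / nb
        gap = abs(rate_a - rate_b)
        slack = self._half_width(na) + self._half_width(nb)
        if gap - slack > self.epsilon:
            return "violation"   # gap exceeds epsilon even at the lower bound
        if gap + slack <= self.epsilon:
            return "satisfied"   # gap within epsilon even at the upper bound
        return "undecided"       # interval straddles epsilon; keep observing
```

A caller would feed the monitor decisions as they arrive, e.g. `monitor.observe("A", 1)` for each logged prediction, and poll `monitor.check("A", "B")` until it resolves. The three-way answer captures the core trade-off the abstract describes: a naive monitor stays "undecided" until loose bounds tighten, and reducing how many observations that takes is precisely what AVOIR's adaptive guarantee inference targets.
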