AI systems trained on historical data inherit and can amplify societal biases. Biased hiring, lending, insurance, and criminal-justice AI systems have caused documented harm and drawn enforcement responses, including class-action lawsuits and regulatory fines.
Bias detection evaluates model outputs across protected characteristics (race, gender, age, disability) using statistical fairness metrics such as demographic parity, equalized odds, and the disparate impact ratio. Mitigation can intervene at three stages: pre-processing techniques address bias in the training data, in-processing constraints modify the model's training objective, and post-processing calibration adjusts outputs for fairness. Ongoing monitoring detects fairness drift as data distributions evolve, and intersectional analysis evaluates bias across combinations of characteristics rather than individual dimensions alone.
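As a concrete illustration, here is a minimal sketch of computing these metrics with Fairlearn; the labels, predictions, and `sex` feature are fabricated for the example:

```python
# pip install fairlearn
import pandas as pd
from fairlearn.metrics import (
    MetricFrame,
    demographic_parity_difference,
    demographic_parity_ratio,   # selection-rate ratio, i.e. the disparate impact ratio
    equalized_odds_difference,
    selection_rate,
)

# Toy labels, predictions, and one protected characteristic (illustrative only).
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 0, 0]
sex    = ["F", "F", "F", "F", "F", "M", "M", "M", "M", "M"]

# Demographic parity: gap in selection rates between groups (0 = parity).
print(demographic_parity_difference(y_true, y_pred, sensitive_features=sex))

# Disparate impact ratio: min selection rate / max selection rate
# (the "80% rule" flags values below 0.8).
print(demographic_parity_ratio(y_true, y_pred, sensitive_features=sex))

# Equalized odds: worst-case gap in TPR/FPR between groups (0 = parity).
print(equalized_odds_difference(y_true, y_pred, sensitive_features=sex))

# Per-group breakdown of selection rates.
mf = MetricFrame(metrics=selection_rate, y_true=y_true, y_pred=y_pred,
                 sensitive_features=pd.Series(sex, name="sex"))
print(mf.by_group)
```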
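For the in-processing stage, one possible sketch uses Fairlearn's reductions API, which retrains a base learner under a demographic-parity constraint; the features, labels, and binary protected attribute below are synthetic, and this is one approach among several:

```python
# pip install fairlearn scikit-learn
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

rng = np.random.default_rng(0)

# Synthetic features, labels, and a binary protected attribute (illustrative only).
X = rng.normal(size=(200, 4))
sensitive = rng.integers(0, 2, size=200)  # e.g. 0 = group A, 1 = group B
y = (X[:, 0] + 0.5 * sensitive + rng.normal(scale=0.5, size=200) > 0).astype(int)

# In-processing: the reduction reweights and refits the base learner so the
# resulting predictor satisfies the fairness constraint, rather than
# altering the data beforehand or the outputs afterward.
mitigator = ExponentiatedGradient(
    estimator=LogisticRegression(),
    constraints=DemographicParity(),
)
mitigator.fit(X, y, sensitive_features=sensitive)
y_pred = mitigator.predict(X)
```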
Tooling includes bias-testing libraries (Fairlearn, AIF360), fairness-metric calculators, intersectional-analysis frameworks, and bias-monitoring dashboards.
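For the intersectional case, Fairlearn's MetricFrame accepts multiple sensitive features at once and reports each metric for every combination of groups; the `race` and `sex` columns below are fabricated for illustration:

```python
# pip install fairlearn
import pandas as pd
from fairlearn.metrics import MetricFrame, selection_rate

# Fabricated predictions and two protected characteristics (illustrative only).
sensitive = pd.DataFrame({
    "race": ["A", "A", "B", "B", "A", "B", "A", "B"],
    "sex":  ["F", "M", "F", "M", "F", "M", "M", "F"],
})
y_true = [1, 0, 1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]

# A multi-column DataFrame groups results by every race x sex combination,
# not just by each attribute in isolation.
mf = MetricFrame(metrics=selection_rate, y_true=y_true, y_pred=y_pred,
                 sensitive_features=sensitive)
print(mf.by_group)      # selection rate per (race, sex) cell
print(mf.difference())  # largest gap across intersectional groups
```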
No downstream dependencies yet.