Episode Details
RIGGED MATH! How "objective" algorithms inherit human hate, fail the "COMPAS" test & break the law of fairness
Description
This episode of pplpod traces the study of Fairness in Machine Learning from schoolhouse tallies to the high-stakes world of Algorithmic Bias, charting the architecture of Group Fairness and the evolution of Individual Fairness through the mechanics of COMPAS and ProPublica's 2016 investigation. We begin by stripping away the "objective math" facade to reveal a landscape where 1960s-era civil rights debates have been resurrected inside black-box software that decides who gets a mortgage, a job, or a prison sentence. This deep dive focuses on the "Proxy Variable" problem, deconstructing how scrubbing race from a data set fails when a five-digit zip code acts as a digital mirror for historical housing segregation.
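The proxy-variable failure described above can be sketched in a few lines. Everything below is invented for illustration: a toy population in which zip code encodes historical segregation, and a "race-blind" approval rule that never sees race yet still splits along racial lines.

```python
# Hedged sketch of "fairness through unawareness" failing via a proxy variable.
# The population, zip codes, and approval rule are synthetic assumptions.

# Each record is (zip_code, group, repaid_loan). Historical segregation means
# zip "10001" is mostly group A and zip "20002" is mostly group B.
population = [
    ("10001", "A", 1), ("10001", "A", 1), ("10001", "A", 0), ("10001", "B", 1),
    ("20002", "B", 0), ("20002", "B", 0), ("20002", "B", 1), ("20002", "A", 0),
]

def repayment_rate(zip_code):
    """Historical repayment rate for a zip code (the proxy signal)."""
    rows = [r for r in population if r[0] == zip_code]
    return sum(r[2] for r in rows) / len(rows)

def race_blind_approve(zip_code):
    """Approval rule that never looks at group membership, only zip code."""
    return repayment_rate(zip_code) >= 0.5

def approval_rate(group):
    """Approval rate the 'blind' rule actually produces for each group."""
    rows = [r for r in population if r[1] == group]
    return sum(race_blind_approve(r[0]) for r in rows) / len(rows)

print(approval_rate("A"), approval_rate("B"))  # → 0.75 0.25
```

Even with race deleted from the inputs, the zip code reconstructs it: group A is approved three times as often as group B.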
We examine the structural "Mathematical Paradox," analyzing why it is impossible to satisfy independence, separation, and sufficiency simultaneously whenever the groups' base rates differ. The narrative explores the "Arrogance of the Predictor," deconstructing the 2019 Apple Card controversy, in which married couples with merged assets received wildly different credit limits traced back to gender-skewed training data. Our investigation moves into "Counterfactual Fairness," a framework that builds on Cynthia Dwork's 2012 fairness-through-awareness breakthrough and asks machines to simulate alternate realities to audit their own discriminatory nodes. We reveal the technical mastery of "Adversarial Debiasing," where a predictor network and an adversary network are pitted against each other to scrub bias from the model's internal weights. The episode deconstructs "Automation Bias," revealing a tragic irony: human operators often selectively override the AI precisely when its fair recommendation contradicts their pre-existing prejudices. Ultimately, the legacy of the $2-per-hour data workers in Kenya proves that the machine is not an omniscient oracle but a parrot repeating a broken world. Join us as we look into the "causal models" of our investigation in the Canvas to find the true architecture of equity.
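The impossibility claim is easy to verify numerically. In the sketch below the base rates and error rates are assumed values: separation (equal TPR and FPR across groups) holds by construction, yet sufficiency (equal positive predictive value) and independence (equal positive rate) both break.

```python
# Hedged numeric sketch of the fairness impossibility result.
# Assumption: two groups share the same classifier error rates (separation
# holds), but have different base rates of the true outcome.

def group_stats(base_rate, tpr, fpr):
    # P(predicted positive): the independence metric.
    positive_rate = tpr * base_rate + fpr * (1 - base_rate)
    # P(truly positive | predicted positive): the sufficiency metric (PPV).
    ppv = tpr * base_rate / positive_rate
    return positive_rate, ppv

# Identical TPR/FPR for both groups => separation is satisfied by construction.
rate_a, ppv_a = group_stats(base_rate=0.6, tpr=0.8, fpr=0.1)
rate_b, ppv_b = group_stats(base_rate=0.2, tpr=0.8, fpr=0.1)

print(round(rate_a, 3), round(ppv_a, 3))  # → 0.52 0.923
print(round(rate_b, 3), round(ppv_b, 3))  # → 0.24 0.667
```

With base rates of 60% and 20%, the same error profile yields positive rates of 52% versus 24% and PPVs of roughly 92% versus 67%: satisfying one criterion forces the others to fail.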
Key Topics Covered:
- The ProPublica Fallout: Analyzing the 2016 report on the COMPAS algorithm and the clash between mathematical accuracy and disproportionate racial harm.
- The Impossibility Theorem: Exploring why satisfying equal outcomes (Independence) and equal error rates (Separation) simultaneously is mathematically impossible whenever groups have different base rates.
- Proxy Variables and Blindness: Deconstructing the failure of "Fairness through Unawareness" and how AI deduces sensitive traits through non-sensitive attributes like zip codes.
- Adversarial Competition: A look at the "hide and seek" engineering strategy where two neural networks are pitted against each other to mathematically scrub discrimination from the model's learned representations.
- Counterfactual Auditing: Analyzing the "alternate reality" methodology that tests if changing a single demographic node would flip a model's final decision.
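A minimal version of the counterfactual audit in the last bullet might look like the sketch below. The scoring rule, threshold, and feature names are invented for illustration; a full causal audit would also propagate the flipped attribute through its downstream variables rather than changing it in isolation.

```python
# Hedged sketch of a naive counterfactual audit: flip only the sensitive
# attribute, hold everything else fixed, and check whether the decision flips.

def loan_model(applicant):
    """Deliberately biased toy model: it directly penalizes group 'B',
    which is exactly the discriminatory node the audit should catch."""
    score = applicant["income"] / 1000
    if applicant["group"] == "B":
        score -= 20
    return score >= 50

def counterfactual_flip_test(applicant):
    """Return True if changing only the group label flips the decision."""
    flipped = dict(applicant, group="A" if applicant["group"] == "B" else "B")
    return loan_model(applicant) != loan_model(flipped)

applicant = {"income": 60_000, "group": "B"}
print(counterfactual_flip_test(applicant))  # → True: unfair to this individual
```

An applicant far above or below the threshold gets the same answer in both "realities," so the audit flags only individuals whose outcome actually hinges on the demographic node.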
Source credit: Research for this episode included Wikipedia articles accessed 4/7/2026. Wikipedia text is licensed under CC BY-SA 4.0; content here is summarized/adapted in original wording for commentary and educational use.