March 21, 2026

Algorithms Mirror Society's Deepest Biases

In 2019, a widely used algorithm designed to predict which patients needed extra medical care was quietly making a devastating error. The system was applied to roughly 200 million people a year, helping hospitals decide who should receive additional resources. But researchers at UC Berkeley discovered something alarming: at any given risk score, Black patients were considerably sicker than the white patients assigned the same level of care. The problem wasn't a coding error or malicious intent. The algorithm used healthcare spending as a proxy for health needs, and because Black patients have historically received less healthcare spending due to systemic barriers, the system learned to assume they needed less help, even when they were objectively sicker.
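The mechanism is easy to reproduce in miniature. The sketch below uses invented numbers (the two groups, the access gap, and the score band are all assumptions, not the study's data) to show how a spending proxy penalizes a group that receives less care for the same level of illness:

```python
import random

random.seed(42)

# Toy illustration of label-choice bias: the model scores *spending*, but
# group B receives less care at the same illness level, mimicking the
# access barriers described above. All numbers are synthetic.
ACCESS = {"A": 1.0, "B": 0.6}   # assumed spending per unit of illness

patients = [(random.uniform(1, 10), random.choice("AB")) for _ in range(20000)]

# The "risk score" is expected spending, the proxy the real system used.
def risk_score(illness, group):
    return illness * ACCESS[group]

# Among patients handed the *same* risk score, how sick is each group?
band = [(ill, g) for ill, g in patients if 4.5 <= risk_score(ill, g) <= 5.5]
for grp in "AB":
    ills = [ill for ill, g in band if g == grp]
    print(f"group {grp}: mean illness at score ~5 = {sum(ills)/len(ills):.2f}")
```

At the same score, group B's true illness is systematically higher, which is exactly the pattern the Berkeley researchers observed.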

This wasn't an isolated glitch. It was a pattern that has emerged across virtually every domain where artificial intelligence touches human lives.

The Seven Sources of Algorithmic Prejudice

Bias enters machine learning systems through at least seven distinct pathways, according to research by Suresh and Guttag. Some emerge from technical processes—how data is collected, measured, or aggregated. But the most insidious form, historical bias, doesn't originate in the machine learning process at all. It comes from the world itself.

Even perfect data collection, flawless measurement, and impeccable technical execution cannot prevent historical bias. When you train an algorithm on data that reflects centuries of discrimination, the system doesn't see injustice—it sees patterns to replicate. The algorithm that learned to associate Black skin with higher recidivism risk wasn't malfunctioning. It was functioning exactly as designed, learning from a criminal justice system that has disproportionately surveilled, arrested, and convicted Black Americans for generations.

This creates a perverse situation: the better an algorithm becomes at identifying patterns in historical data, the more effectively it can perpetuate historical wrongs.

When Risk Assessment Becomes Risk Creation

ProPublica's 2016 investigation of COMPAS, a risk assessment tool used across the criminal justice system, revealed how this plays out in practice. Black defendants were 77% more likely to be rated as high risk for future violent crime compared to white defendants. The false positive rates told an even starker story: 45% of Black defendants were incorrectly classified as high-risk, nearly double the 23% rate for white defendants.
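The false positive rate disparity ProPublica measured is a simple per-group computation. A minimal sketch of the audit shape, with invented records (this is not the COMPAS data):

```python
# A false positive here means a defendant labeled high risk who did not
# in fact reoffend. Records are invented for illustration.
records = [
    # (group, predicted_high_risk, reoffended)
    ("black", True,  False), ("black", True,  True),
    ("black", True,  False), ("black", False, False),
    ("white", False, False), ("white", True,  True),
    ("white", False, False), ("white", False, False),
]

def false_positive_rate(rows):
    negatives = [r for r in rows if not r[2]]    # did not reoffend
    flagged = [r for r in negatives if r[1]]     # but labeled high risk
    return len(flagged) / len(negatives)

for grp in ("black", "white"):
    rate = false_positive_rate([r for r in records if r[0] == grp])
    print(f"{grp}: FPR = {rate:.2f}")
```

Note that overall accuracy can look acceptable while the false positive burden falls almost entirely on one group, which is why per-group auditing matters.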

The algorithm wasn't directly considering race—that would be illegal. Instead, it relied on proxy variables that function as indirect measures of race while maintaining plausible deniability. Factors like zip code, employment history, and social network characteristics all correlate with race in a society structured by residential segregation and employment discrimination. The algorithm treats these correlations as objective risk indicators, stripping away the sociopolitical context that created them in the first place.

The scale matters here. A prejudiced judge might make dozens of biased decisions in a career. A biased algorithm can make thousands in minutes, systematically and without rest. Data scientist Cathy O'Neil calls these "Weapons of Math Destruction"—models that are opaque, generate harmful outcomes that victims cannot dispute, and consistently penalize the poor while benefiting the well-off.

The Illusion of Medical Objectivity

Healthcare presents itself as objective, driven by biological facts rather than social prejudices. Yet algorithmic bias has found fertile ground even here.

Consider skin cancer detection. A 2025 study in the journal Dermis reviewed 21 global datasets containing over 100,000 images used to train AI diagnostic systems. The researchers found exactly 11 images that explicitly represented brown or black skin tones, and none from individuals of African or South Asian backgrounds. The resulting AI models showed significant accuracy drops on darker skin and were more likely to misclassify cancerous lesions as benign in darker-skinned patients. For a disease with a 99% five-year survival rate when detected early, a delayed diagnosis can be a death sentence.

The bias extends beyond diagnosis into treatment. A June 2025 study from Cedars-Sinai tested leading large language models including Claude, ChatGPT, and Gemini by presenting identical psychiatric cases with only the patient's race changed. When the patient was described as African American, the models generated measurably less effective treatment recommendations. The technology marketed as a tool to expand access to quality healthcare was instead replicating the documented disparities in how Black patients receive psychiatric care.

The Resume That Changes Based on Gender

Employment algorithms promised to remove human bias from hiring by focusing purely on qualifications. Amazon discovered the flaw in this logic when it had to scrap its CV-scanning tool after finding it had learned gender bias from historical hiring data. In a tech industry where women have been systematically underrepresented, the algorithm learned that being male was itself a qualification.

The bias has evolved as the technology has advanced. An October 2025 Stanford study published in Nature found that modern large language models consistently portrayed female candidates as younger and less experienced than male counterparts with identical qualifications. The systems gave higher scores to older men than to older women, even when their resumes were word-for-word identical except for gendered pronouns.
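This kind of controlled comparison is straightforward to automate. A minimal sketch of a counterfactual audit (the swap table and resume text are illustrative; a real audit would also swap names and titles, and would plug in the scoring model under test):

```python
import re

# Counterfactual audit sketch: build a female-coded twin of a male-coded
# resume so the two texts differ only in gendered words, then score both
# with the model under test (the scoring call itself is omitted here).
SWAP = {"he": "she", "him": "her", "his": "her", "himself": "herself"}

def swap_gender(text):
    def repl(match):
        word = match.group(0)
        out = SWAP[word.lower()]
        return out.capitalize() if word[0].isupper() else out
    pattern = r"\b(" + "|".join(SWAP) + r")\b"
    return re.sub(pattern, repl, text, flags=re.IGNORECASE)

resume = "He has 12 years of experience. His team shipped three products."
print(swap_gender(resume))
# If score(resume) differs from score(swap_gender(resume)), the model is
# reacting to gender, not qualifications.
```

Because the two inputs are identical except for gendered words, any score gap between them is attributable to gender alone, which is the logic behind the Stanford study's design.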

Derek Mobley's lawsuit against Workday illustrates the real-world consequences. As a Black job seeker over 40 with a disability, Mobley alleged that Workday's AI screening system discriminated against him on multiple grounds. What makes the case significant is that on May 16, 2025, Judge Rita Lin denied Workday's motion to dismiss and granted preliminary collective-action certification for the age discrimination claim, a sign that courts are beginning to treat algorithmic discrimination as a systemic problem rather than an individual grievance.

Why Diversity Isn't Just About Fairness

The 2014 U.S. Equal Employment Opportunity Commission report found that the high-tech sector employed a larger share of white Americans and a smaller share of African Americans compared to overall private industry. This matters not just for employment equity, but because it shapes what biases get built into the systems these workers create.

When the people designing AI systems come from homogeneous backgrounds, they bring homogeneous assumptions about what counts as normal, professional, or trustworthy. An August 2025 test of AI tools found that systems rated braids and natural Black hairstyles with lower "intelligence" and "professionalism" scores—biases rarely seen with white women's hairstyles. These weren't explicit rules programmed into the system. They were patterns the AI absorbed from training data that reflected existing prejudices about professional appearance.

The lack of diversity creates a feedback loop. Biased algorithms make discriminatory hiring decisions, which reduces diversity in tech, which leads to more biased algorithms being built by homogeneous teams that don't recognize the problems.

The Transparency That Doesn't Exist

Deep learning neural networks are often called "black box" models—their workings remain opaque even to their creators. You can see what goes in and what comes out, but the complex layers of mathematical transformations in between resist human interpretation. This opacity makes discrimination difficult to detect without deliberate testing.

The Dutch childcare benefits scandal illustrates the danger. An algorithmic system wrongly accused tens of thousands of families of benefit fraud, treating dual nationality and low income as proxies for deceit. The algorithm treated correlation as causation, and because its decision-making process was opaque, the injustice persisted for years before being exposed.

Regulations are beginning to emerge. The EU AI Act and new U.S. state laws that took effect January 1, 2026, introduce requirements for impact assessments and transparency. But enforcement remains largely theoretical, and the technical challenge of explaining how a deep learning model reaches its conclusions hasn't been solved.

Redesigning the Future We're Teaching Machines

The problem with algorithmic bias isn't primarily technical—it's political. We can improve data collection, audit for disparate impacts, and require transparency. But as long as we train systems on data from an unequal society, they will learn to perpetuate that inequality unless we actively intervene.

Some interventions are straightforward: ensure training datasets include diverse representation, test systems for disparate impacts across demographic groups, and include diverse teams in system design. The National Institute of Standards and Technology's 2019 report on facial recognition systems found they were least reliable when identifying people of color, largely because standard datasets primarily represented white males. The solution isn't better algorithms—it's better data and deliberate correction for historical imbalances.
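Disparate-impact testing, in particular, can be made routine. One widely used screen is the "four-fifths rule" from U.S. employment guidelines: a group's selection rate should be at least 80% of the best-off group's rate. A minimal sketch, with invented selection data:

```python
# Four-fifths rule check: flag any group whose selection rate falls below
# 80% of the highest group's rate. The data below is invented to show the
# shape of the check.
def four_fifths_violations(outcomes, threshold=0.8):
    """outcomes: dict mapping group -> list of booleans (selected or not)."""
    rates = {g: sum(v) / len(v) for g, v in outcomes.items()}
    top = max(rates.values())
    return {g: round(r / top, 2) for g, r in rates.items() if r / top < threshold}

outcomes = {
    "group_a": [True] * 60 + [False] * 40,   # 60% selected
    "group_b": [True] * 35 + [False] * 65,   # 35% selected
}
print(four_fifths_violations(outcomes))
```

A check like this is cheap to run on every model release; the hard part, as the rest of this section argues, is deciding what to do when it fails.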

But other interventions require us to make uncomfortable choices. If an algorithm trained on historical hiring data learns that men make better engineers, do we override that pattern even though it accurately reflects past hiring? If a medical AI learns that doctors historically spend less time with Black patients, do we correct for that bias even though it's technically "accurate" to the training data?

These questions force us to acknowledge that algorithms aren't neutral arbiters revealing objective truths. They're tools that reflect the values and priorities we encode into them—whether we do so consciously or not. The choice isn't between biased humans and objective machines. It's between biased humans and machines that can replicate human biases at unprecedented scale, or machines that we deliberately design to counteract those biases.

The algorithms are already making decisions about who gets hired, who receives medical care, who goes to prison, and who gets a loan. The question is whether we'll continue letting them learn discrimination from our past, or whether we'll teach them a different future.
