In 2020, when Meta launched Instagram Reels to compete with TikTok, the company's own internal research flagged something disturbing: the new feature had "significantly higher prevalence of bullying and harassment, hate speech, and violence or incitement" than the rest of Instagram. The company launched it anyway. By 2024, when Meta wanted to grow Reels further, it assigned 700 staff to the expansion while denying safety teams' requests for just 2 child-protection specialists and 10 election-integrity staff.
This wasn't an oversight. According to more than a dozen whistleblowers from Meta and TikTok who spoke to the BBC in March 2026, both companies deliberately allowed more harmful content to circulate after internal research showed that outrage fueled engagement. Senior management at Meta told engineers to permit more "borderline" content, including misogyny, conspiracy theories, and racist posts, to compete with TikTok, explicitly citing "the stock price is down" as justification.
The Machinery of Anger
The term "borderline content" has become industry jargon for posts that are harmful but legal. These aren't death threats or explicit calls to violence. They're the sexualized images, the racist dog whistles, the conspiracy theories that make your blood pressure rise just enough to keep scrolling.
Engagement-based algorithms reward this content because anger is measurably more engaging than agreement. A study published in Science in November 2025 provided the first causal evidence of this effect. Researchers manipulated what 1,256 US participants saw on their feeds, reducing exposure to content expressing "antidemocratic attitudes and partisan animosity" for some users while increasing it for others. The results were stark: reducing toxic content warmed feelings toward the opposing political party by 2.11 degrees on a 100-point scale. Increasing it cooled feelings by 2.48 degrees.
The most telling finding? Seventy-four percent of participants reported noticing no impact on their experience. The algorithms were reshaping their political attitudes below the threshold of conscious awareness.
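The mechanics of such an intervention are simple to sketch. Below is a minimal, hypothetical Python illustration of the general approach: score each post with a classifier for the flagged attributes, then shift its ranking score down (or up) before sorting the feed. Everything here, including the Post fields, the rerank function, and the penalty weight, is an assumption for illustration, not the researchers' actual pipeline.

```python
# Hypothetical sketch of a feed-reranking intervention. A classifier
# (assumed, not the study's) scores each post for the flagged traits;
# the feed is re-sorted after penalizing or boosting those scores.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    base_score: float       # the platform's original ranking score
    animosity_score: float  # assumed classifier output in [0, 1]

def rerank(feed: list[Post], direction: str = "reduce") -> list[Post]:
    """Re-order a feed to reduce or increase exposure to flagged posts."""
    sign = -1.0 if direction == "reduce" else 1.0
    # Shift each post's score by a penalty or bonus proportional to how
    # strongly the classifier flagged it; 2.0 is an arbitrary weight.
    return sorted(feed,
                  key=lambda p: p.base_score + sign * 2.0 * p.animosity_score,
                  reverse=True)

feed = [
    Post("local news recap", 0.8, 0.05),
    Post("the other party is destroying the country", 0.9, 0.92),
]
for post in rerank(feed, "reduce"):
    print(post.text)
```

Note that in this sketch the intervention only reorders: nothing is deleted or injected, which may help explain why a change like this can reshape attitudes without most participants noticing anything.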
When Engineers Lose Control
Ruofan Ding worked as a machine-learning engineer at TikTok from 2020 to 2024. His description of the platform's algorithm is chilling: a "black box" where engineers have "no control of the deep-learning algorithm in itself." TikTok updated its algorithm almost weekly to gain market share, and each iteration seemed to surface more borderline content.
This isn't a bug. It's an emergent property of systems optimized for a single metric: engagement. Meta's own internal documents acknowledged that the company's algorithm offered content creators a "path that maximizes profits at the expense of their audience's wellbeing." The same documents warned that the "current set of financial incentives our algorithms create does not appear to be aligned with our mission."
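What that misalignment looks like in code is almost banal. The sketch below is a hypothetical caricature, not any platform's actual model: a single learned engagement score decides the ranking, and because emotional arousal (including anger) predicts engagement, rage bait wins without anyone writing a rule that says "promote outrage." All field names and weights are invented.

```python
# Minimal sketch of single-metric engagement ranking. If predicted
# engagement is the only objective, and outrage reliably predicts
# engagement, borderline content rises to the top on its own.
# All names and weights below are hypothetical.

def predicted_engagement(post: dict) -> float:
    # Stand-in for a learned model: in practice a deep network trained
    # on clicks, watch time, and comments. Here, a toy linear proxy in
    # which emotional arousal carries heavy weight.
    return (0.5 * post["predicted_watch_time"]
            + 0.3 * post["predicted_comments"]
            + 0.9 * post["predicted_arousal"])  # anger scores high here

def rank_feed(posts: list[dict]) -> list[dict]:
    # The whole objective: sort by one number. Nothing penalizes harm.
    return sorted(posts, key=predicted_engagement, reverse=True)

posts = [
    {"id": "calm_explainer", "predicted_watch_time": 0.7,
     "predicted_comments": 0.2, "predicted_arousal": 0.1},
    {"id": "borderline_rage_bait", "predicted_watch_time": 0.6,
     "predicted_comments": 0.8, "predicted_arousal": 0.9},
]
print([p["id"] for p in rank_feed(posts)])  # rage bait ranks first
```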
The platforms know. They've always known.
The Real-World Toll
Calum was 14 when the algorithm found him. By 19, he described himself as having been "radicalised by algorithm," fed a steady diet of content that made him "very kind of angry" and led him toward racist and misogynistic views. UK counter-terror police have observed a "normalisation" of antisemitic, racist, and far-right posts in recent months, with people becoming "more desensitised to real-world violence."
The pattern shows up in the data. After the October 7 attacks, antisemitic content reached 36–38% of comments on the YouTube channels of major UK news outlets. Following a hate crime in Washington in May 2025, it averaged 43% and hit 66% on some outlets. TikTok whistleblower "Nick" reported that cases involving "terrorism, sexual violence, physical violence, abuse, trafficking" appear to be increasing, yet the platform rated trivial cases (a politician being compared to a chicken) as higher priority than teenagers' reports of cyberbullying and sexualized images.
Why Quick Fixes Fail
Researchers at the University of Amsterdam tried to engineer their way out of the problem. They created an algorithm-free social media environment populated by AI bots, then tested interventions: hiding follower counts, boosting diverse viewpoints, chronological feeds. None of these solutions fixed the problem. Some made it worse.
The study revealed something counterintuitive: even without algorithmic amplification, the bots still formed echo chambers and polarized. The most extreme voices became "elites" by posting the most outrageous content, which attracted the most attention and followers. The dysfunction isn't just in the code—it's in the fundamental dynamics of attention economies.
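You can reproduce the flavor of that result with a toy simulation. The sketch below is a crude numerical stand-in for the study's AI agents, built on one stated assumption: the chance that a post grabs a reader's attention scales with its author's extremity. Even with a purely chronological feed and no ranking algorithm at all, the most-followed "elites" end up being the most extreme agents.

```python
# Toy agent-based sketch of the dynamic the Amsterdam study reports.
# This is a numerical caricature, not the study's actual agents.

import random

random.seed(0)
N_AGENTS, N_ROUNDS = 100, 200
# Each agent has a fixed "extremity" trait; nobody is ranked or boosted.
extremity = [random.random() for _ in range(N_AGENTS)]
followers = [0] * N_AGENTS

for _ in range(N_ROUNDS):
    # Chronological "feed": every agent posts once, in arbitrary order.
    posts = list(range(N_AGENTS))
    random.shuffle(posts)
    for reader in range(N_AGENTS):
        # Each reader skims a few recent posts and follows the author of
        # whichever one grabbed them most. Assumption: attention-grabbing
        # power scales with the author's extremity.
        sample = random.sample(posts, 5)
        grabbed = max(sample, key=lambda a: extremity[a] * random.random())
        followers[grabbed] += 1

top = sorted(range(N_AGENTS), key=lambda a: followers[a], reverse=True)[:5]
print("extremity of the 5 most-followed agents:",
      [round(extremity[a], 2) for a in top])
# Typical output: values near 1.0, i.e. the emergent elites are the extremes.
```

Interventions like hiding follower counts or reshuffling the feed don't touch that underlying attention dynamic, which is consistent with why they failed in the study.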
User-side solutions fare no better. Teenagers told the BBC that the controls for indicating they don't want to see problematic content "are not working": they still encounter violent and hateful content. X's feature showing the geographic location of accounts addresses foreign interference but ignores that domestic dynamics overwhelmingly drive the volume and intensity of polarizing content.
The Incentive Problem No One Wants to Solve
When the BBC investigation broke in March 2026, Meta responded that "any suggestion that we deliberately amplify harmful content for financial gain is wrong." TikTok called the whistleblower claims "fabricated." But Matt Motyl, a senior Meta researcher, had already shared high-level research documents with the BBC showing "all sorts of harms to users on these platforms."
The platforms can deny intent while the machine does exactly what it was designed to do: maximize engagement. The algorithm doesn't care about truth or nuance or the health of democracy. It cares about whether you watch the next video, read the next post, feel strongly enough to comment.
We've built systems that treat human outrage as a renewable resource to be harvested. The platforms know the extraction is causing damage. They've measured it, documented it, and decided the stock price matters more. Until the incentive structure changes—through regulation, competition, or a fundamental rethinking of how we fund digital public spaces—the algorithms will keep serving us rage, and we'll keep consuming it, barely aware that our minds are being reshaped one angry click at a time.