April 20, 2026

YouTube Algorithm Manufactured Flat Earth Movement

In 2018, a handful of flat-Earth believers gathered in Birmingham, England, for their first international conference. Attendance: 200 people. By 2019, that number had tripled. YouTube's recommendation algorithm, it turned out, had been serving flat-Earth videos to millions of users who'd watched unrelated conspiracy content. What started as a fringe theory confined to internet message boards had suddenly acquired the veneer of a movement, complete with merchandise, conventions, and genuine believers convinced they'd discovered suppressed truth.

The flat-Earth resurgence illustrates a troubling pattern: social media algorithms don't just connect like-minded people—they manufacture the appearance of consensus around ideas that barely exist offline.

The Engagement Trap

Social media platforms optimize for one metric above all others: keeping you scrolling. Every algorithm tweak, every recommendation, every curated feed serves this singular goal. The content that keeps users engaged longest gets amplified. The content that makes them click away gets buried.
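The ranking logic described above can be sketched as a toy scoring loop. The feature names, weights, and example posts below are illustrative assumptions, not any real platform's system; the point is only that when the objective is time-on-platform, provocative content wins on the metrics regardless of merit.

```python
# Toy engagement-based ranker: posts that historically kept users on the
# platform longer score higher and surface first. All feature names and
# weights are illustrative assumptions, not any platform's actual values.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    watch_time_s: float     # average seconds users spent on this post
    reactions: int          # likes, shares, angry replies -- all count equally
    click_away_rate: float  # fraction of viewers who left the app afterwards

def engagement_score(p: Post) -> float:
    # The objective is time-on-platform, not accuracy or merit.
    return 0.6 * p.watch_time_s + 0.4 * p.reactions - 50.0 * p.click_away_rate

feed = [
    Post("Calm explainer on geodesy", watch_time_s=40, reactions=12, click_away_rate=0.30),
    Post("Flat-Earther DESTROYS physicist", watch_time_s=95, reactions=240, click_away_rate=0.05),
]
ranked = sorted(feed, key=engagement_score, reverse=True)
print([p.title for p in ranked])  # the provocative post ranks first
```

Nothing in the score function asks whether a post is true; the calm explainer loses simply because fewer people linger on it or react to it.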

This creates a systematic bias toward what researcher William Brady at Northwestern University calls "PRIME" information: content that is prestigious, in-group, moral, and emotional. Humans evolved to pay special attention to such information because it helped our ancestors navigate social hierarchies and cooperate in groups. Algorithms exploit this evolutionary quirk at scale.

A 2026 study by the Behavioural Insights Team and Bondata examined over 1,700 political posts across TikTok, Instagram, and X in Finland, France, and Romania. The finding: 67% of political content users encountered was opinion-based, entertainment-focused, and unverifiable. These weren't news articles or policy analyses—they were hot takes designed to provoke reactions.

The same study revealed that right-wing content dominated feeds at 58%, compared to just 26% left-wing and 16% centrist content. Even more telling: when researchers created avatars that exclusively engaged with left-wing political content, right-wing perspectives still dominated their feeds. Platform amplification was overriding explicit user preferences.

The Paradox of Moderation

The standard narrative holds that algorithms radicalize users by trapping them in "filter bubbles" of increasingly extreme content. A 2024 University of Pennsylvania study suggests reality is more complicated.

Researchers analyzed 87,988 real YouTube user histories and found something surprising: users who followed YouTube's recommendations actually consumed less partisan content than those who made their own viewing choices. After about 30 videos, the algorithm began steering users toward more moderate content, not more extreme.

Lead researcher Homa Hosseinmardi cautioned against oversimplifying: "Users have significant agency over their actions and may have viewed the same content, or worse, even without any recommendations."

This creates a paradox. Algorithms don't necessarily push users toward extremism through recommendations. Instead, they amplify whatever generates engagement—which often means giving a megaphone to minority viewpoints that provoke strong reactions. A flat-Earther arguing with mainstream science generates more engagement than someone calmly explaining basic geography. The algorithm sees the engagement metrics, not the merit of ideas.

When Fringe Becomes Familiar

Young Europeans aged 18-24 now spend between 5.3 and 6.2 hours daily on social media, depending on country. One in four spends at least eight hours per day scrolling. For this demographic, 42% cite social media as their primary news source.

This creates a distortion of perceived reality. When algorithms selectively amplify extreme political views, users begin to believe their political opponents are more radical than they actually are. A handful of activists posting dozens of times daily can create the impression of a mass movement. Repetition breeds familiarity, and familiarity breeds acceptance.

The content being amplified ranges from standard political tribalism to something darker. The BIT study documented AI-generated videos of gorillas telling misogynistic and xenophobic jokes, alongside memes expressing support for Nazi ideology. Islamic State operatives use X and Telegram to foster belonging among potential recruits. Al-Qaeda posts speeches on YouTube with encrypted links to training materials.

Half of young respondents in the BIT survey reported feeling disappointment, fear, anger, or sadness when encountering political discussions on social media. Yet they kept scrolling—exactly as the platforms intended.

The Illusion of Scale

Algorithms don't just amplify minority viewpoints—they disguise how small those minorities actually are. A coordinated group of 100 dedicated accounts can generate enough engagement to dominate a platform's trending topics. Hashtags serve as amplification engines: algorithms recognize them as signals of relevance and push that content to users searching for or following related topics.
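The arithmetic behind that claim can be made concrete with a small back-of-envelope simulation. The account counts and posting rates below are illustrative assumptions, not measurements from any study, but they show how a tiny, dedicated group can out-post a vastly larger casual majority on a single hashtag.

```python
# Toy share-of-voice model: a coordinated minority versus a casual majority
# posting on the same hashtag. All numbers are illustrative assumptions.
coordinated_accounts = 100
coordinated_posts_per_day = 50       # dedicated accounts posting constantly

casual_accounts = 100_000
casual_posts_per_day = 0.02          # ordinary users rarely post on the topic

coordinated_volume = coordinated_accounts * coordinated_posts_per_day  # 5,000
casual_volume = casual_accounts * casual_posts_per_day                 # 2,000

share = coordinated_volume / (coordinated_volume + casual_volume)
user_share = coordinated_accounts / (coordinated_accounts + casual_accounts)
print(f"{coordinated_accounts} accounts ({user_share:.2%} of users) "
      f"produce {share:.0%} of the hashtag's volume")
```

A trending algorithm that counts posts, not posters, sees only the volume; under these assumptions, a tenth of a percent of users generates over two-thirds of the conversation.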

This creates what researchers call "functional misalignment." Human social learning evolved to support cooperation—we pay attention to what others in our community think because shared beliefs helped groups survive. Algorithms hijack this instinct, presenting fringe views as if they represent widespread consensus.

The result: people spreading misinformation leverage moral and emotional content to trigger sharing, and algorithms amplify that content based on engagement metrics alone. The flat-Earthers didn't convince millions that the planet is flat, but they did convince those millions that flat-Earth belief is common enough to take seriously.

Beyond Content Moderation

YouTube's machine-learning systems reduced flagged extremist videos by 30% in 2023. Instagram redirects searches for extremist content toward tolerance-promoting material. Germany's Network Enforcement Act forces platforms to remove illegal hate speech within tight deadlines.

These interventions treat symptoms while the underlying disease persists. Algorithms still optimize for engagement. Minority viewpoints that provoke strong reactions still get amplified. The architecture of infinite scrolling, likes, and social rewards still exploits psychological vulnerabilities.

Sitra, the Finnish innovation fund, has proposed raising minimum age limits for full-feature social media access and enforcing them effectively. But the problem isn't that teenagers use social media—it's that social media is designed to amplify whatever keeps any user, regardless of age, engaged longest.

Until platforms face real incentives to optimize for something other than engagement time, algorithms will continue transforming fringe beliefs into apparent movements. The flat-Earthers are just the most obvious example. For every visible minority viewpoint that gets amplified into seeming mainstream acceptance, dozens more are following the same algorithmic path from obscurity to ubiquity.

The question isn't whether algorithms amplify minority viewpoints. They demonstrably do. The question is whether we'll redesign these systems before they convince us that every fringe belief represents half the population.
