Facebook Algorithm Fueled Myanmar Genocide

In 2017, Facebook became the primary source of news for millions of people in Myanmar—a country where internet access had only recently become widespread. Within months, the platform transformed into a weapon. False stories about the Rohingya Muslim minority spread like wildfire: claims of violent plots, fabricated attacks, dehumanizing imagery. When UN investigators documented the genocide that followed, they reached a clear conclusion: Facebook had been "instrumental in the radicalization of local populations." Over 700,000 Rohingya fled their homes. Thousands were killed. And the algorithm had helped fuel it all.

The Engagement Trap

Social media algorithms don't care about truth. They care about keeping you on the platform. Every like, comment, share, and second spent scrolling feeds data into systems designed to show you more of what keeps you engaged. The problem isn't that these algorithms are malicious—it's that engagement and accuracy don't correlate. Often, they're inversely related.
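
To make that incentive concrete, here is a minimal, purely illustrative sketch of such a ranker in Python. The post fields, weights, and scores are invented for the example, not taken from any real platform; the point is that accuracy is available to the system but never enters the ranking.

```python
# Toy illustration of an engagement-optimized feed ranker. The field names
# and weights below are invented for this sketch; the point is simply that
# nothing in the scoring function rewards accuracy.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_likes: float
    predicted_comments: float
    predicted_reshares: float
    predicted_dwell_seconds: float
    fact_checked_accuracy: float  # available to the platform, never used below

def engagement_score(post: Post) -> float:
    # Comments and reshares weighted heavily: they keep content circulating.
    return (1.0 * post.predicted_likes
            + 3.0 * post.predicted_comments
            + 5.0 * post.predicted_reshares
            + 0.1 * post.predicted_dwell_seconds)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Accuracy is never consulted; engagement alone determines the ordering.
    return sorted(posts, key=engagement_score, reverse=True)

feed = [
    Post("Careful, accurate report", 40, 5, 2, 30, fact_checked_accuracy=0.95),
    Post("Outrageous false claim", 90, 60, 45, 80, fact_checked_accuracy=0.05),
]
for post in rank_feed(feed):
    print(f"{engagement_score(post):7.1f}  {post.text}")
```

In this toy feed, the false but provocative post outranks the accurate one by a wide margin, because every signal the ranker sees measures attention rather than truth.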

Outrage performs well. So does content that confirms what you already believe. When Facebook reconfigured its algorithms in 2018 to boost "meaningful social interactions" in response to declining user engagement, the company's own internal documents later revealed the consequences: "Misinformation, toxicity, and violent content are inordinately prevalent among reshares." The more sensational the claim, the more it spread.

This creates a particular danger for conspiracy theories. A single like allows platforms to infer your psychological traits and personality profile. From there, AI-powered systems can tailor content to your specific vulnerabilities, even inferring your mood to optimize when and how to deliver the next piece of content. Tech companies then enable advertisers—and anyone else willing to pay—to send microtargeted messages based on these profiles.

The Spiral: How Belief Becomes Identity

Conspiracy theories don't just spread on social media. They escalate through a predictable four-stage process that transforms casual interest into unshakeable conviction.

It starts with identity confirmation. Users actively seek out content that validates their existing worldview, consulting various sources that all tell the same story. This feels like research, like due diligence. Users imagine themselves as "real life investigators."

Stage two is identity affirmation. This is where things get creative. During Pizzagate—the false 2016 conspiracy claiming Democrats ran a child sex-trafficking ring from a Washington D.C. pizzeria—believers took real photos from Clinton Foundation work in Haiti, invented connections to sex trafficking, and posted their "findings" to Reddit and 4chan. They weren't just consuming conspiracy theories; they were producing evidence for them.

The third stage, identity protection, turns defensive. Believers actively work to discredit contradictory evidence, flooding comment sections with antagonistic posts. The conspiracy theory is no longer just an idea they're exploring—it's become part of who they are. Challenging it means challenging their identity.

Finally, identity enactment. Believers seek mainstream approval and recruit others. Sometimes this leads to violence. The Pizzagate believer who drove to that D.C. pizzeria with an assault rifle. The January 6, 2021 Capitol attack fueled by false claims of a stolen election. The shared identity forged online becomes a call to action offline.

Social media doesn't just enable these stages—it accelerates them. Continuous access to confirming information, combined with a community of fellow believers, creates a closed loop. When confronted with contradictory evidence, groups don't abandon their beliefs. They deepen their commitment.

Why Some Communities Are More Vulnerable

The communities most susceptible to conspiracy theory amplification aren't random. They're often marginalized populations whose daily lives are shaped by social exclusion. When you feel left behind by institutions, ignored by mainstream media, or dismissed by those in power, alternative explanations become appealing. Conspiracy theories offer both an enemy to blame and a community of people who understand.

This explains why fact-checking and debunking strategies consistently fail—and sometimes backfire. Telling someone their deeply held belief is false doesn't just challenge an idea. It attacks their identity and dismisses the community they've found. It reinforces the very exclusion that made them vulnerable in the first place.

YouTube's recommendation algorithm, for instance, demonstrably leads users toward more extremist content, particularly users with right-leaning views. TikTok's algorithm can be completely retrained after a user views just 20 videos about election fraud, after which the platform floods that user with election disinformation, QAnon theories, and far-right extremism. When TikTok banned QAnon hashtags in July 2020, users simply created alternatives: #Pizzagte drew 1.6 million views, #Quanon got 540,000. The platforms play whack-a-mole while the algorithms keep serving the content under new labels.
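
That retraining dynamic can be pictured as a simple feedback loop. The sketch below is purely illustrative: the topics, update rule, and learning rate are invented, but it shows how a short run of views on a single topic can quickly come to dominate what gets recommended next.

```python
# Toy feedback-loop sketch: a recommender updates a user's interest profile
# from watch history, then samples the next recommendations from that
# profile. Topics, update rule, and learning rate are invented for
# illustration; this is not a description of any real platform's system.

import random

TOPICS = ["cooking", "sports", "election_fraud_claims"]

def update_profile(profile: dict, watched_topic: str, lr: float = 0.3) -> dict:
    # Shift probability mass toward whatever the user just watched.
    updated = {topic: (1 - lr) * weight for topic, weight in profile.items()}
    updated[watched_topic] += lr
    return updated

def recommend(profile: dict, k: int = 5) -> list:
    # Sample the next recommendations in proportion to the current profile.
    topics, weights = zip(*profile.items())
    return random.choices(topics, weights=weights, k=k)

random.seed(0)
profile = {topic: 1 / len(TOPICS) for topic in TOPICS}  # start with no preference
for _ in range(20):                                     # 20 videos on one topic
    profile = update_profile(profile, "election_fraud_claims")

print({topic: round(weight, 3) for topic, weight in profile.items()})
print(recommend(profile))
```

After 20 simulated views the invented profile assigns almost all of its weight to the single topic, so nearly every subsequent recommendation reinforces it, which is the closed loop the stages above describe.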

Meta's Choice

In January 2025, Mark Zuckerberg announced sweeping changes to Meta's content policies, lifting prohibitions on harassment and denigration of racialized minorities. The company also significantly rolled back automated content moderation, framing the changes as a commitment to free expression.

A former Meta employee saw something else: "I really think this is a precursor for genocide. We've seen it happen. Real people's lives are actually going to be endangered."

That assessment isn't hyperbole. We've already seen it happen on Meta's platforms. After the Myanmar genocide, Amnesty International's investigation found that Meta had "substantially contributed" to atrocities against the Rohingya. The Rohingya communities later requested that Meta fund a $1 million education project in refugee camps—representing 0.0007% of Meta's $134 billion in 2023 revenue. Meta rejected the request.

The company knows its algorithms prioritize and amplify harmful content because that content maximizes engagement and profit. It has the research, the internal documents, the historical evidence. The 2025 policy changes weren't made despite this knowledge. They were made with full awareness of the consequences.

Beyond Content Moderation

If debunking doesn't work and moderation is inadequate, what does? The most promising approaches focus on prevention rather than reaction. Media literacy education that teaches people to assess the credibility of sources before belief takes root. Critical thinking skills that help users recognize manipulation techniques. Community programs that address the underlying social exclusion making people vulnerable in the first place.

None of these solutions are as simple as tweaking an algorithm or adding more fact-checkers. They require investment in education, community building, and addressing systemic inequalities—work that falls outside the business model of engagement-driven platforms.

The algorithms will keep optimizing for time spent and content shared. The question is whether we'll build countermeasures that work at the same scale, or whether we'll keep watching marginalized communities get swept into spirals of belief that sometimes end in violence. Myanmar showed us what happens when we wait too long to answer.
