
How ‘algospeak’ reshapes online discourse amid censorship fears


From "unalived" to "seggs," users are rewriting their vocabulary to evade what they believe are algorithmic bans, despite denials from platforms like Meta, TikTok, and YouTube that such word blacklists exist. The phenomenon, dubbed algospeak, reflects a broader crisis of trust in how content is moderated, with creators and activists alike adopting coded language to discuss everything from political scandals to protests, even when evidence of suppression is scarce.

The illusion of banned words

Tech giants insist they don't maintain lists of forbidden terms. "YouTube does not have a list of banned or restricted words," a company spokesperson told the BBC, emphasizing that context drives moderation decisions. Meta and TikTok echoed this stance, framing their policies as neutral tools to balance safety and free expression. Yet users remain skeptical, pointing to a history of opaque interventions, from TikTok's leaked 2019 directives to suppress "ugly" or LGBTQ+ creators to Meta's admitted errors in restricting Palestinian content after October 7, 2023.

Experts argue the ambiguity fuels self-censorship. "People come up with folk theories in the context of that opacity," says Sarah T. Roberts, a UCLA professor studying digital labor. Without clear boundaries, users err on the side of caution, warping language to avoid perceived penalties, even if the algorithms aren't targeting specific words.

Creators navigate the 'black box'

Alex Pearlman, a comedian and political commentator with millions of followers, describes algorithmic censorship as an ever-present obstacle. He avoids mentioning "YouTube" on TikTok, fearing it triggers suppression, and resorted to euphemisms like "Island Man" for Jeffrey Epstein after multiple videos were removed without explanation. "You're left trying to discern what the black box is telling you," Pearlman says. His experience mirrors a broader trend: creators adopt algospeak not because it's proven effective, but because the alternative, risking invisibility, feels worse.

"It's an instrumentalisation of rules that don't make sense for regular people."

Sarah T. Roberts, UCLA

'Music festivals' and the algorithmic imaginary

In August 2025, users flooded platforms with posts about a fictitious Los Angeles "music festival," a stand-in for ICE raid protests. The ruse went viral, ironically because of its coded nature, not in spite of it. Linguist Adam Aleksic calls this the "algorithmic imaginary": users alter behavior based on beliefs about censorship, which then shape the algorithm's responses. "People hypercorrect because they aren't sure what will be censored," Aleksic explains. The result is a feedback loop in which algospeak becomes a self-fulfilling prophecy.

Profit over politics

Roberts contends the core issue isn't ideology but revenue. Platforms prioritize ad-friendly environments, suppressing content that might alienate advertisers or invite regulatory scrutiny. While companies claim moderation aligns with user safety, Roberts notes, "If and when they must deviate, they do." The 2023-2025 period saw Meta openly deprioritize political content, only reversing course after Donald Trump's second inauguration, a shift that underscores how policies follow business needs, not public interest.

Ariana Jasmine Afshar, a left-wing activist with a large following, embodies the paradox. She's used algospeak to dodge censorship yet acknowledges its absurdity: "None of us know what works." Her success, including direct outreach from Instagram offering "strategies to do better," undercuts the narrative of systemic suppression. Still, she insists, "They really confuse me."

The cost of opacity

The consequences extend beyond silly euphemisms. When platforms act as de facto arbiters of public discourse, their opacity risks erasing entire topics from view. Pearlman observes that after his Epstein videos were removed, fewer big-name creators tackled the subject. "A large part of the audience won't know who you're talking about" when using coded terms, he warns, a trade-off between visibility and clarity.

Roberts poses a stark question: "Is the best way to express civic dissatisfaction to spiral inside platforms profiting from that frustration?" The rise of algospeak, she suggests, signals a need to rethink digital public squares before coded language becomes the only language left.
