Unalive, Le$bean, and More: It’s Not Newspeak, It’s Algospeak


Danika Ellis

Associate Editor

Danika spends most of her time talking about queer women's books at the Lesbrary. Blog: The Lesbrary. Twitter: @DanikaEllis

People on all sides of the political spectrum have been calling 1984 prophetic since almost the moment it was published. It’s been interpreted to fit any agenda, and any demonstration of government (or even corporate) power will inevitably be called Orwellian by someone. But one aspect of the novel that’s become relevant in the age of TikTok is Newspeak.

Newspeak is the language of Oceania, where 1984 takes place, and it was created by the Party to control not only how people communicate but what thoughts they're even capable of having. The idea echoes the Sapir–Whorf hypothesis, which argues that the way a language is structured influences how its speakers think. For instance, English tends to frame events agentively, assigning outcomes to a specific actor — "The dog broke the vase" — while some other languages more readily use constructions like "The vase broke itself," even for accidents.

Newspeak’s aim was to simplify and streamline language both to keep the population ignorant and docile and to guide citizens towards approved ways of thinking. One of the markers of Orwell’s invented language was simplified prefixes and suffixes: doubleplusgood or ungood.


If you've ever scrolled through TikTok, you've probably noticed its own language forming. You might have seen captions with letters replaced by symbols, or emojis standing in for words (instead of "white people," it's "⚪ people"). A few different forces are converging to create this dialect, which can be seen to a lesser extent on other social media platforms.

One factor is that TikTok and most other social media platforms rely on moderation by algorithm, at least for the first pass. The easiest way to do this is with a list of banned words: words associated with threats, sexual content, or controversial topics. When creators realize their videos using these words are suppressed, they get around them with creative substitutions: "seggs" for sex, "le$bean" for lesbian. Taylor Lorenz at The Washington Post calls this "algospeak": language developed in response to an algorithm.
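To see why these substitutions work, consider a minimal sketch of first-pass keyword moderation. The banned list and the matching logic here are illustrative assumptions, not any platform's actual implementation — real systems are far more complex — but the sketch shows how exact-word filters are trivially evaded:

```python
import re

# Hypothetical banned list for illustration only.
BANNED = {"kill", "dead", "sex", "lesbian"}

def is_flagged(caption: str) -> bool:
    """Flag a caption if any whole word matches the banned list."""
    words = re.findall(r"[a-z]+", caption.lower())
    return any(word in BANNED for word in words)

print(is_flagged("coming out as a lesbian"))  # the plain word is caught
print(is_flagged("coming out as le$bean"))    # the algospeak spelling slips through
print(is_flagged("the boss unalived me"))     # "unalive" evades the filter for "kill"
```

Because the filter only knows the exact strings on its list, any substitution — a symbol, a respelling, a coinage like "unalive" — passes untouched, which is precisely the gap algospeak exploits.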

The other factor in the emergence of algospeak is that the algorithm on any social media platform, but perhaps especially TikTok, is mysterious. You might upload a video one day and get millions of views, then hardly any on the next one. Because you can be shadowbanned — have your content hidden from other users without being notified — it's almost impossible to tell whether your video is being suppressed because of what you talked about or whether it just wasn't popular.

This is a perfect breeding ground for paranoia. Creators can't know for certain which words or topics the algorithm is suppressing, and the answer is always changing. It feels safer to self-censor, even though those substitutions could also end up censored — or could be completely unnecessary, targeting words that were never on any list at all.


The most striking similarity between Newspeak and algospeak lies in one of algospeak's most popular substitutions: "unalive" for "dead" or "kill." Gamers livestreaming will talk about "unaliving" monsters. Teens looking for help with suicidal ideation might mention wanting to "unalive themselves."

Of course, Newspeak and algospeak differ greatly: while Newspeak is a top-down attempt to control language and thought, algospeak is a bottom-up attempt to resist censorship and discuss topics that have been declared off-limits. That resistance cuts both ways, though: anti-vaccine and pro-eating-disorder groups have also used code words to hide what they're actually discussing.

Moderation is a necessity of social media, and it's also a monstrous challenge: it requires filtering an endless stream of audio, video, and text. The stakes are high, because failure can mean providing a platform for terrorist groups or enabling large-scale harassment.

Trying to solve this through algorithmic filtering makes sense, but it's a blunt instrument for a nuanced and thorny problem. Applied this crudely, it hurts marginalized groups the most: discussions of queerness, sexual health, racism, and more are suppressed and may be kept from the very people who need them.


When Orwell invented Newspeak, he wasn't anticipating Twitter or TikTok or the massive communication revolution the internet would bring. But the idea behind Newspeak is still relevant now: the words we use matter. They help us clarify and structure our thoughts. They guide the conversations happening online and off. Already, social media and its moderation strategies are changing how we think and speak. Now that we know, we need to start talking about the effects of this kind of language policing and what we want our linguistic future to look like — after all, we still have the words to do so.