Pro-Palestinian social media users turn to algospeak to avoid suppression

When Rathbone deBuys, 37, posts TikTok videos critiquing Israel in its conflict with Hamas, he turns to common strategies to keep his posts from being detected and deleted by the social media giant.

In the subtitles of his videos, he uses terrier and violin emojis instead of the words “terrorist” and “violence.” At the bottom of his videos — which have collectively received millions of likes — he adds a disclaimer, saying the post is for “educational” purposes only. Changing the captions may make it less likely the video will be flagged as violating TikTok’s rules against hateful rhetoric or violent content, deBuys said.

“A lot of people are tuned into the conflict and want to hear what people have to say,” said deBuys, a Louisiana-based musician who posts similar videos on Instagram. “But at the same time, there were instances of censorship.”

Since the bloody conflict between Israel and Hamas escalated into war this month, Palestinian-focused creators have increasingly been using “algospeak” — a collection of phrases, special spellings and code words — to prevent their posts from being removed or suppressed by social media companies. Some users are bleeping or adding sounds to disguise their voice-overs, while others are shifting the spellings of common English and Arabic words like “Palestine,” “genocide” and “Hamas” to evade detection. Many popular creators are instructing Palestinian users to adopt similar tactics and to keep track of how the content tech companies take down or suppress.

Palestinian-focused creators say there’s an urgent need to share a perspective on the war that differs from the mainstream media — and that algospeak is a necessary tactic to ensure their message lands.

Their rhetoric has revived years-long scrutiny over how tech companies like Meta, YouTube and TikTok police their platforms during moments of heightened violence between Israelis and Palestinians. Civil society groups have long criticized Meta for squashing the freedom of expression of Palestinian users by removing Arabic content more heavily than Hebrew posts. Activists have charged that tech companies have also not invested in systems to protect Palestinian users from hateful rhetoric and violent threats.

“This has been an ongoing problem for years,” said Jillian York, director for international freedom of expression at the Electronic Frontier Foundation. “They apply unequal standards to different parts of the world [and] they don’t always have local expertise” or language experience, particularly in the Global South.

YouTube, TikTok and Meta’s Facebook and Instagram have all deemed Hamas an extremist organization, which prohibits the group from having a presence on their platforms. While Meta and YouTube users can call for peace or comment on the issues facing Palestinians, they can’t express support for Hamas. Meta also said it changed the default settings in the region, limiting who can comment on new public Facebook posts to friends or established followers in an attempt to clamp down on unwanted content. TikTok has said it added more moderators who speak Arabic and Hebrew to review posts about the war. YouTube has said it’s taking down hate speech targeting Jewish and Palestinian communities while connecting users with reliable news sources.

Earlier this week, throngs of Palestinian supporters complained that Meta was suppressing their content commenting on or documenting the violence. Popular influencers reported that counts of views and likes on their videos on Instagram and Facebook had sharply declined. Some users complained their posts were removed or hidden or their accounts were restricted for violating the companies’ content rules. Still others said their ability to broadcast live video had been restricted, and that Palestinian creators’ Live videos had become harder to find.

Meta said in a blog post this week that the company had fixed bugs that prevented some users’ posts, ephemeral videos known as Stories and short-form videos known as Reels from showing up properly. The company also said that for a “short time” a different bug prevented people from going Live. Meta said the glitches “affected accounts equally around the globe — not only people trying to post about what’s happening in Israel and Gaza — and it had nothing to do with the subject matter of the content.”

But many Palestinian-focused social media users are skeptical of Meta’s explanation after, they say, the company similarly suppressed their views during a two-week war between Israel and Hamas in 2021. During the conflict, Israeli police stormed the al-Aqsa Mosque, a sacred Muslim site in Jerusalem, prompting Hamas to fire rockets into Israel. Israel then retaliated with a bombing campaign that left more than 200 Palestinians dead. As users flooded Meta’s social networks with firsthand accounts of the battle, Instagram began restricting content containing the hashtag #AlAqsa. Meta initially blamed the issue on an automated software deployment error.

An outside audit commissioned by Meta on the recommendation of its independent Oversight Board found that the #AlAqsa hashtag was mistakenly added to a list of terms associated with terrorism by a third-party contractor that does content moderation for the company. The report noted this was likely because Meta’s systems, which use artificial intelligence to monitor hate speech and other forms of problematic content, rely on lists of terms associated with foreign terrorist organizations. As a result, a person posting in Arabic is more likely to have their content flagged as potentially being associated with a terrorist group.

But not all Palestinian-focused social media users buy Meta’s explanation. Ameer Al-Khatahtbeh, a New Jersey resident who runs the news-oriented Instagram account Muslim, said his posts have seen declining engagement and views.

“Palestinians … experienced this suppression back in 2021,” he said. “We are seeing the same exact thing … happening right now.”

Many Palestinian-focused influencers are encouraging their followers to document any problematic content enforcement actions from the tech companies. Nadim Nashif, the director of digital rights advocacy group 7amleh-The Arab Center for the Advancement of Social Media, said his group has referred hundreds of reports to social media platforms about disinformation related to the conflict, hate speech and users who say their accounts have been unfairly silenced.

Social media users are also encouraging each other to adopt unproven strategies to trick the algorithm. In some cases, users may begin their post with “I stand with Israel” only to start talking about their support for Palestinians. Others are finding creative ways to spell critical words about the conflict in both Arabic and English.

“We started removing dots” on posts in Arabic, said one Egyptian social media user who is sympathetic to the Palestinian cause and spoke on the condition of anonymity to avoid retaliation. “We mix English letters [with] the Arabic letters.”

When Instagram user Womena promoted an interview Wednesday with the journalist Mariam Barghouti, who critiqued the way international news outlets covered the Israel-Hamas war, they used the shorthands “P*les+in1ans” and “t*rr0rist+s” in place of “Palestinians” and “terrorists.”

But such tricks don’t always work. Just a few days ago, deBuys said TikTok removed the sound from a satirical video he posted in which he impersonated members of the Israel Defense Forces carrying out orders to attack Gaza. After the video racked up thousands of views, TikTok removed the sound, saying it violated the company’s community guidelines, he said. After The Washington Post sent questions to TikTok about the video, the sound was restored.

The “media can gloss over the fact that the Israel Defense Forces are massacring the Gazan civilian population,” said deBuys, whose accounts have received other violations in the past from TikTok and Instagram. “But to make a video about that that’s a satirical sketch about what is happening is taboo on TikTok.”

Taylor Lorenz and Will Oremus contributed to this report.