Instagram users are accusing the social network of purposefully censoring posts in support of Palestine – underscoring longstanding concerns about unfair moderation as war rages in Gaza.
Hena Mustafa, an Instagram user with 866 followers based in New York City, said that since she began posting about developments in Palestine as Israel mounted its siege in the past week, her Stories – photos and videos that disappear after 24 hours – have been receiving “significantly less views”. Friends and followers have messaged Mustafa to tell her that her posts are no longer appearing at the top of their Instagram feeds, her name has become unsearchable on the social network, and they are unable to interact with her posts.
Hundreds of others have shared similar experiences, said Nadim Nashif, founder and director of social media watchdog group 7amleh, the Arab Center for Social Media Advancement, which has been tracking the issue. 7amleh and others suspect the platform is shadow banning – quietly demoting content related to the conflict in its algorithm.
“Unfortunately, shadow banning is just one of the many ways in which we have seen Palestinian content silenced and censored over the last week,” he said. “This has been a trend of Meta in times of crisis, and we saw a significant spike of Palestinians and allies reporting limited reach and errors with content they posted about the ongoing crisis in Palestine.”
Meta said in a statement that “it is never our intention to suppress a particular community or point of view”, but that due to “higher volumes of content being reported” surrounding the ongoing conflict, “content that doesn’t violate our policies may be removed in error”. The company additionally attributed some issues to glitches in its algorithmic moderation system that reduced the reach of posts “equally around the globe” – regardless of subject matter.
However, Nashif noted that Meta made similar excuses in May 2021, during a separate series of escalations in Palestine during which Facebook and Instagram users posting about Palestine reported a similar reduction in the reach of their posts. Those incidents prompted a letter signed by more than 200 Meta employees demanding the company address such shortcomings. A subsequent independent analysis commissioned by Meta found that the social networks had violated Palestinian human rights by censoring content related to Israel’s attacks on Gaza.
As conflict reignites in Palestine, prominent users have again reported various forms of uneven enforcement or censorship – including the Pulitzer prize-winning New York Times reporter Azmat Khan, who said her 7,000-follower Instagram account “was shadow-banned” after posting a Story about the war in Gaza on Saturday. Journalist Ahmed Shihab-Eldin said on Tuesday that his Instagram account – where he had more than 100,000 followers and was posting frequently about Palestine – was permanently banned with little explanation.
“This is a problematic and unaccepted trend of Meta stifling Palestinian voices in times of crisis,” Nashif said.
In addition to allegations of shadow banning and outright censorship, Meta was under fire this week after users documented a glitch that translated “Palestinian” followed by the Arabic phrase “Praise be to Allah” to “Palestinian terrorists” in multiple profiles. A former Facebook employee with access to discussions among current Meta employees said the issue “really pushed a lot of people over the edge” – internally and externally.
“You cannot keep blaming it on glitches when it’s spreading misinformation and dehumanizing Palestinians by feeding into the narrative that all Palestinians are terrorists,” said the former employee, who spoke anonymously for fear of retaliation. “It’s very overwhelming for a lot of the employees of the company.”
Concerns about censorship and shadow banning are perennial on social media, but the stakes are much higher during times of war – making the real-world implications of such opaque company policies more dire, said Nora Benavidez, senior counsel at media watchdog group Free Press.
“People are seeing numbers of viewers far lower than is typical, and they are left to ask – is this because people are not finding the content interesting or because of the underlying decisions that the platforms are making?” she said. “Those are questions we never get answers to in any context, because these companies are not incentivized by any external regulation to be accountable.”
In light of these concerns, users are experimenting with ways to manipulate the algorithm. Mustafa, for example, said she splices selfies between political commentary on her Stories and swaps letters – using “@” instead of “a” and “3” instead of “e” – when spelling out words like “Israel” (Isr@3l) and “Palestine” (P@l3stin3).
As people frantically search for credible information online and misinformation proliferates on X, the heightened environment is only being worsened by allegations of shadow banning, said Benavidez.
“When it feels like platforms are limiting certain viewpoints, it fans the flames of division and tension because people on all sides of the issue are worried their content is being targeted,” she said. “This kind of worry and paranoia played out across communities helps to create environments that are electric and combustible.”