Meta is failing to catch memes and innuendo promoting Holocaust denial, oversight panel concludes


(JTA) – An Instagram post using a SpongeBob SquarePants meme to promote Holocaust denial managed to evade Meta’s system for removing such content, raising questions about the company’s ability to combat certain indirect forms of hate speech, an independent oversight panel concluded in a decision published Tuesday.

The finding came in a review of Meta’s handling of a post featuring a meme of Squidward, a character from the cartoon series SpongeBob SquarePants, titled “Fun Facts about the Holocaust.” A speech bubble next to the character contained lies and distortions about the Holocaust, including false claims that 6 million Jews could not have been murdered and that the chimneys of the crematoria at Auschwitz were built only after World War II.

The post withstood six user complaints, which generated four automated reviews and two human assessments, and stayed up from September 2020 until last year, when Meta’s Oversight Board announced it would examine the case and the company subsequently acknowledged that the post violated its policy against hate speech. It even survived two complaints filed after Meta expanded its hate speech policy in October 2020 to explicitly bar Holocaust denial.

As part of its review in the SpongeBob case, the Oversight Board commissioned a team of researchers to search for Holocaust denial on Meta’s platforms. Examples were not hard to find, including posts using the same Squidward meme to promote other antisemitic narratives. The researchers found that users try to evade Meta’s detection and removal system by, for example, replacing vowels with symbols, or by using cartoons and memes to deny the history of the Holocaust implicitly, without ever stating outright that it didn’t happen.

Content moderation on social media is a notoriously difficult task. Some platforms, such as X, formerly known as Twitter, have taken a more hands-off approach, preferring to reduce oversight rather than err and risk stifling legitimate speech, an approach that has resulted in the proliferation of antisemitism and extremism on the platform.

Meta has gone in the direction of increased moderation, even agreeing to empower the Oversight Board to make binding decisions in disputes over violations of the platform’s content policies. Meta’s vigilance has led in some cases to the accidental removal of content intended to criticize hate speech or educate the public about the Holocaust. The Oversight Board has repeatedly urged Meta to refine its algorithms and processes to reduce the chance of mistakes and improve transparency when it comes to enforcing the ban on Holocaust denial.

Meta set up its oversight panel amid mounting controversy over how the company handles content moderation. The move opened Meta to new scrutiny but did not stop the criticism. When, for example, the panel announced the SpongeBob case and requested comment from the public, it received a flurry of critical responses from Jewish groups and others.

“Holocaust denial and distortion on Meta platforms, on any forum, online or offline, is unequivocally hate speech,” the Anti-Defamation League wrote in its comment. “The Oversight Board must direct Meta to act accordingly by quickly removing violating posts like the one at issue in this case. Waiting for appeals to rise to the Oversight Board is unacceptable.”

Meanwhile, another commenter named the ADL as a negative influence on Meta’s content moderation practices, referring to the group as a “non-representative, Democratic-Leftist organization.”

“Meta has lost all integrity in this area given Metas serious level of hyper-partisan Orwellian-level censorship,” wrote the commenter, who identified themselves as Brett Prince.

In the SpongeBob case, the oversight panel made two new sets of recommendations for Meta. It said that assessing how well Meta is enforcing its ban on Holocaust denial is difficult because human moderators do not record the specific reason they removed a piece of content, a practice the panel urged Meta to change.

The panel also learned that as of May 2023, Meta was still triaging reviews under an automation policy enacted after the outbreak of the COVID-19 pandemic, under which users’ appeals of content decisions were automatically rejected unless deemed “high-risk.” Keeping that policy in place for so long was inappropriate, the panel said.

The spread of hate and misinformation on social media has become an acute problem in the aftermath of Hamas’s Oct. 7 attack on Israel, as platforms have been flooded with depictions of graphic violence and war-related propaganda. In response, the Oversight Board adopted an expedited process for reviewing disputed cases.
