(JTA) — In October, one day after Facebook announced that it would ban Holocaust denial, Izabella Tabarovsky received an unexpected message from the platform.
A 2019 post of hers promoting an article she had written on Holocaust remembrance was being removed for violating Facebook’s “Community Standards on hate speech.” No further information was provided, and Tabarovsky doesn’t recall being given a way to appeal the decision.
She reached out to a Facebook spokesperson she found on Twitter but got no response.
Facebook’s decision to ban Holocaust denial came only after scholars, activists and celebrities had pilloried the platform for allowing hate speech. But Tabarovsky is no Holocaust denier. She’s a Jewish journalist who writes about Soviet Jewry, including the Holocaust in Soviet territories.
The article in question was called “Most Jews Weren’t Murdered In Death Camps. It’s Time To Talk About The Other Holocaust.” It was about how efforts at Holocaust remembrance don’t focus enough on the millions of Jews who were killed outside the concentration camps, such as Tabarovsky’s own relatives, who were murdered at Babyn Yar.
It’s possible the headline tripped up an algorithm meant to detect Holocaust denial, which then blocked Tabarovsky’s post. She doesn’t know, as she never heard from Facebook.
“This message popped up, and obviously the first reaction is, what did I say that was hateful?” Tabarovsky told the Jewish Telegraphic Agency. “We’ve seen so much antisemitic speech. They can’t battle it, they can’t take it down, and yet they remove Holocaust education posts from 2019. It’s truly incredible.”
Tabarovsky is on the long list of social media users whose anti-hate posts have mistakenly fallen victim to algorithms meant to remove hate speech. Companies such as Facebook, Twitter and TikTok say they have stepped up their fight against abusive posts and disinformation. But the artificial intelligence that drives those systems, designed to root out racism and calls for genocide, can instead ensnare the efforts to combat them.
Organizations that focus on Holocaust education say the problem is especially acute for them because it comes at a time when large percentages of young people are ignorant of basic facts about the Holocaust and are spending more time online than ever.
Michelle Stein, the U.S. Holocaust Memorial Museum’s chief communications officer, told JTA that the museum’s Facebook ads have often been rejected outright — frequently enough “that it’s a real problem for us.”
“Far too often our educational content is literally hitting a brick wall,” she said. “It is not OK that an ad that features a historical image of children from the 1930s wearing the yellow star is rejected, especially at a time when we need to educate the public on what that yellow badge represented during the Holocaust.”
The yellow star post is just one example of an ad that was blocked, Stein said. The Nazis forced Jews, millions of whom were later murdered, to affix the stars to their clothing. More recently the yellow star has been appropriated by protesters of everything from vaccines to Brexit, which may have made Facebook especially sensitive to images of the star. The Holocaust museum’s ad aimed to respond to incidents like those by educating people about what the star actually signified.
There have been other instances of Holocaust education being blocked as well. In March, Facebook deactivated the account of the Norwegian Center for Holocaust and Minority Studies for five days, as well as the accounts of 12 of its employees. When the accounts were restored, a local Facebook spokesperson told a Norwegian publication, “I cannot say whether this is a technical error or a human error.”
In 2018, the Anne Frank Center for Mutual Respect, a Holocaust education organization in New York, had a post removed from Facebook that included a photo of emaciated Jewish children. Redfish, an outlet affiliated with the Russian state, said it had three Holocaust remembrance posts taken off Facebook this year, including one with a famous picture of Elie Wiesel and others in a concentration camp barracks.
Holocaust educators are not the only ones to protest the way social media algorithms regulate purportedly hateful content. Anti-racist activists have complained of their Facebook posts being treated like hate speech, prompting the platform to change its algorithm. Jewish creators on TikTok say they’ve been banned after posting unobjectionable Jewish content. During the recent conflict in Israel and Gaza, both pro-Israel and pro-Palestinian activists said their posts were hidden or taken off Instagram and elsewhere.
Facebook (which owns Instagram) and TikTok both told JTA that users whose posts have been taken down can appeal the decision. Twitter did not respond to questions sent via email.
But Stein said the reasons ads are blocked are opaque, and the appeals process can sometimes take days. By the time the ads are approved, she said, the teaching moment they were meant to address has often passed. The museum has reached out to Facebook to address the issue, to no avail.
“It’s unclear to us what part of the post is the problem, so we’re forced to guess. But far more importantly, it stops us from getting that message out timely,” she said. “Social media’s great potential is not education anchored in a classroom, it’s educational moments anchored in what’s happening in the environment, so when you have to stop, that’s a true loss.”
A Facebook spokesperson told JTA that it uses “a combination of human and automated review” to detect hate speech, and that people will “usually” review the automated decisions. Facebook defines Holocaust denial to include posts that dispute “the fact that it happened, the number of victims, the methods, and the intentionality of it.”
“We do not rely exclusively on specific words or language to distinguish between Holocaust denial and educational content,” the spokesperson told JTA. “We also have escalation teams that can spend more time with content and get additional context in order for us to make a more informed decision.”
TikTok likewise told JTA that human moderators review content flagged by its artificial intelligence system, and that it trains its moderators to distinguish between hate speech and what it defines as “counterspeech.” Neither Facebook nor TikTok provided further detail on when and how posts move from AI to human moderators, or how those moderators are trained.
“We don’t know when they’re using automated tools, who is deciding what antisemitism is, who is deciding what anti-Black racism is,” said Daniel Kelley, associate director of the Anti-Defamation League’s Center for Technology and Society.
The ADL was one of the organizers of a high-profile ad boycott of Facebook last year to protest what it said were lax hate speech policies. Later in the year, Facebook announced it would ban Holocaust denial and crack down on other forms of hate.
“Are those trained data sets based on the experience of the people from the impacted communities?” Kelley asked. “Does that inform how the automated systems are being created?”
Both Facebook and TikTok said they were committed to keeping antisemitism off their platforms, and TikTok said it works with the ADL as well as the World Jewish Congress to shape its moderation of antisemitic hate speech. The WJC also works with Facebook.
“It is much harder to deal with stuff like tone or context, and that’s where the AI learning is critical, and that’s the space for learning, but it’s never going to be perfect,” said Yfat Barak-Cheney, the WJC’s director of international affairs. “Issues like nudity, where it’s easy for machines to detect it — then like 98 or 99% of it is removed automatically, before it reaches the platform. Issues like hate speech, where things like tone and content have a bigger role, then machines are not able to remove as much of it.”
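None of the platforms publish the details of these pipelines, but the kind of confidence-threshold routing Barak-Cheney describes can be sketched in a few lines of Python. The categories, thresholds and names below are hypothetical illustrations of that general approach, not any platform’s actual system.

# Illustrative sketch only: a hypothetical confidence-threshold router of the
# kind Barak-Cheney describes. All categories, thresholds and names here are
# invented for illustration and do not reflect any platform's real pipeline.
from dataclasses import dataclass

@dataclass
class ModerationResult:
    action: str       # "auto_remove", "human_review" or "allow"
    category: str
    score: float      # classifier confidence that the post violates policy

# Hypothetical thresholds: "easy" categories such as nudity can be removed
# automatically at high confidence, while context-heavy categories such as
# hate speech are routed to human reviewers across a much wider score band.
AUTO_REMOVE_THRESHOLD = {"nudity": 0.98, "hate_speech": 0.999}
HUMAN_REVIEW_THRESHOLD = {"nudity": 0.80, "hate_speech": 0.50}

def route(category: str, score: float) -> ModerationResult:
    if score >= AUTO_REMOVE_THRESHOLD[category]:
        action = "auto_remove"
    elif score >= HUMAN_REVIEW_THRESHOLD[category]:
        action = "human_review"   # escalation team sees context before deciding
    else:
        action = "allow"
    return ModerationResult(action, category, score)

if __name__ == "__main__":
    # A Holocaust-education post misread as probable hate speech should land
    # in the human-review queue rather than be removed automatically.
    print(route("hate_speech", 0.97))   # -> human_review
    print(route("nudity", 0.99))        # -> auto_remove

In a scheme like this, the gap Barak-Cheney points to lies in where the thresholds sit and how much context the human reviewers are given once a post is escalated.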
Barak-Cheney said her organization is hesitant to press platforms on overreach in moderating topics like Holocaust denial because it considers it more important that Facebook and other sites take a strong stance against hate speech. Before the WJC embarks on its annual Holocaust remembrance campaign on social media, called #WeRemember, it sends posts to the platforms for pre-approval to ensure that they aren’t blocked when they go up.
“There’s improvements to make, but for us to push to say, ‘Hey, you should allow more content’ is going to be contrary to us asking them to make sure there’s no violating content that remains and is harmful,” she said.
Pawel Sawicki, the spokesperson for the Auschwitz-Birkenau State Museum, said that if educational posts are being banned, it’s at least a signal that platforms are taking the issue seriously. Sawicki said the museum hasn’t had its posts blocked, and that he’s still worried about the potential for Holocaust denial to spread on social media, despite the platforms’ policies.
“It shows some process of removing speech is going on in social media if such content disappears,” he said. “Things are changing, and we hope that it is a real change to their approach to hate speech more universally.”
Tabarovsky also supports social media companies taking robust action against Holocaust denial and hate speech. But she would have liked to understand why her post was blocked and, ideally, to find a way to avoid having her posts removed. Last week, after JTA inquired about the post and more than six months after it had been removed, Facebook restored it to the platform.
“It’s just crazy when you’re dealing with a robot that can’t tell the difference between Holocaust denial and Holocaust education,” Tabarovsky said. “How did we get to this point as humanity where we’ve outsourced such important decisions to robots? It’s just nuts.”