Holocaust denial on Instagram
The Meta Oversight Board has ordered one offensive post removed, yet millions more remain.
In September 2020, an Instagram user posted an antisemitic meme that denied the Holocaust. The meme used the character of Squidward from the TV show SpongeBob SquarePants to relate “Fun Facts About The Holocaust,” which contained several false and distorted claims that I will not repeat here. The user had approximately 10,000 followers and used hashtags to help spread the message. The user added that they were sharing “real history.”
Several other users reported the image to Meta as hate speech shortly after it was posted. (Meta, formerly Facebook, is the parent company of Instagram.) Within a month, the post had been flagged to Meta’s attention four times. The company’s internal moderation system left the post up each time. Three of those decisions were made via automation, i.e., no humans reviewed the content. (Increasingly, content moderation on social media is done via artificial intelligence.) One decision was made by a person who worked at Meta. That person chose to let the post remain, determining that it did not violate any policies.
Aside from being morally reprehensible, the decision was also inaccurate. At the time of the post, Meta had a policy that expressly prohibited “mocking the concept, events or victims of hate crimes” on its platforms. The Holocaust was without question a hate crime, and Meta was well within its rights to remove content on Instagram that mocked or diminished its cruelty. It chose not to.
We know these details because this week the Meta Oversight Board—an independent body created between 2018 and 2020 to assist Meta with content regulation decisions—issued a ruling ordering that the Squidward Holocaust denial meme be removed, along with a detailed report on how the post was allowed to remain on the site for three-and-a-half years. The report offers a window into the largely opaque process of how social media sites do or do not make decisions about the content we see in our news feeds each day. It also serves as a stark reminder that as we mark International Holocaust Remembrance Day, observed each year on January 27, Holocaust denial continues to spread rapidly on the Web in alarming ways. One post has been removed, but millions more remain… and evidence suggests the problem is growing worse.
The Meta Oversight Board (OSB) is well-known within tech circles, but casual Facebook and Instagram users may not realize it exists.
By now the public is aware that “fake news,” fake history, and disinformation are widespread on social media. While the phenomenon long predated the 2016 U.S. presidential election, that event and the subsequent revelations about how deep the proverbial rabbit hole went proved, in hindsight, to be a turning point in how platforms responded to the issue. In prior years, platforms largely disregarded it. In 2017 and 2018, in response to public pressure from activists, journalists, and elected officials, social media companies were compelled to take some action.
Facebook / Meta responded with a proposal to create an independent body that would make binding decisions about content on its platforms. The idea was first floated in fall 2018; Facebook drafted a charter in 2019 and began a series of consultations with experts to identify board members. In 2020, the Board members were appointed—outside experts with knowledge of law and civil rights, none of whom work at Meta—and they began ruling on cases in 2021. Legally, the OSB is a separate company registered in Delaware. Facebook seeded it with $130 million through an irrevocable trust to fund its operations.
On the one hand, Meta deserves credit for taking such a step. It is the only platform to have made such an investment in an independent body of experts that continually evaluates content on its platforms and makes binding decisions about it. There is no Twitter / X oversight board, nor are there permanent boards for TikTok or YouTube, where disinformation and hate speech run rampant. On the other hand, while $130 million is a lot of money for nearly everyone in the world, it is very little for a company that earns approximately $30 billion in revenue each quarter; the amount seeded to the OSB represents roughly 0.11% of Meta’s annual revenue. There is a convincing argument to be made that Meta would need to invest far more resources to make a large-scale impact on the problem, and that those investments would need to be made internally, in how Meta does business, which was one of the OSB’s key findings. Outsourcing a relatively small sum to a third party could be interpreted as an exercise in public relations intended to deflect scrutiny from larger, structural issues.
As part of its deliberations, the OSB solicits comments and advice from external stakeholders. That is how I became involved. In 2022, the OSB took up a case of war propaganda related to the Russian invasion of Ukraine. The post in question revolved around atrocities committed by Soviet troops during WWII. Since it sat at the intersection of history and tech, other members of our History Communication Institute and I issued a public response to the OSB suggesting that the content should be removed (you can read our opinion here). When the Holocaust denial case came before the Board, the OSB reached out to me personally for advice and asked if I would attend private meetings in Washington to discuss it, which I did.
Since those conversations were off-the-record, I won’t reveal their contents. I can, however, sketch the broader outlines of the issue and what the Board uncovered:
1. At the time this post was created, Meta did not explicitly ban Holocaust denial. As mentioned, in September 2020 Meta did have a policy against mocking victims of hate crimes. But it was not until October 2020 that Meta codified a ban on Holocaust denial; in fact, two years earlier Mark Zuckerberg had stated publicly that he believed Holocaust denial should not be banned because he felt most deniers were not being deliberately hateful. This reflected a profound ignorance of the issue on Zuckerberg’s part; most Holocaust denial is, in fact, purposefully antisemitic, propagated by hate groups with the intention of breeding animosity towards Jewish minorities. Meta reversed course once advocacy groups pointed this out, while also noting the alarming rise of antisemitism on Meta platforms, particularly during the Covid-19 pandemic, when antisemites blamed the virus on a Jewish conspiracy, recycling familiar tropes from prior centuries and updating them for a modern-day context. Holocaust denial or minimization has been a favored tactic of antisemites since at least the 1970s: a centerpiece of antisemitic conspiracy theories and a mainstay of white supremacists and extremist groups such as the Nation of Islam or the so-called “Institute for Historical Review,” a white supremacist front group. Holocaust denial has been illegal in Belgium since 1995 and in the Czech Republic since 2000, and, since 2008, it has been punishable across the EU. The United States has no such laws.
2. Once Meta did enact a ban on Holocaust denial, the post was still not removed. In part this was due to a reliance on automation (more on that in the next bullet point). But it is worth noting that a second human reviewer examined the post in 2023 and still did nothing, claiming that since it was posted prior to the new policy, it should remain. This was incorrect; Meta’s policies are retroactive. Old posts that violate new policies can be removed once discovered. Why the human reviewer did not know that, or chose to override that, remains unclear.
3. Meta used the COVID-19 pandemic as a reason not to review content. Content on Meta platforms such as Facebook and Instagram is evaluated by a mix of machines and people. During the course of its investigation, the OSB learned that in March 2020, Meta sent home many of its human content reviewers due to the COVID-19 pandemic. In their place, Meta instituted an automated content review system that closed out large numbers of complaints based on a set of criteria that Meta has not revealed. That automated system was still in place in 2023—long after COVID-19 was downgraded from a public health emergency and employees had safely returned to offices. (Meta, in fact, mandated that employees return to the office last year.) It suggests, as mentioned earlier, that instead of investing significant resources in careful and rigorous content moderation by human beings—including detailed categorization and analysis of antisemitic content—Meta has been relying on A.I. systems that automatically close out requests based on unknown criteria, reducing costs and outsourcing complex or controversial issues to a third party.
The OSB investigation, along with supporting evidence provided by outside experts, also illuminated how widespread Holocaust denial is—not just on Facebook and Instagram, but social media more broadly. (TikTok is especially egregious; more on that in a subsequent newsletter). Some of the statistics are eye-opening:
- Half of American youth have reported seeing Holocaust denial on social media; [1]
- 11 percent of young Americans believe that Jews caused the Holocaust; [2]
- At least 36 groups on Facebook, with over 360,000 followers in total, were specifically dedicated to Holocaust denial or reproduced Holocaust denial content; [3]
- Facebook’s algorithm recommended similar Holocaust denial content to users who followed public pages containing it; [4]
- In one data set presented to Meta, out of 134 posts denying or distorting the Holocaust, the company removed only 20%; [5]
- One grotesque Facebook video of Holocaust denial garnered more than 200,000 views; [6]
- 23% of young Americans believe the Holocaust did not happen, was a myth, or was greatly exaggerated, or are unsure; [7]
- In one survey, 48% of Americans aged 18-39 could not name a single one of the more than 40,000 ghettos and concentration camps established during WWII. [8]
The OSB also found that Holocaust denial-related content is easier to find and gets more interaction on Instagram than on Facebook. This may be surprising to some readers, but not to those who have read my book History, Disrupted, particularly my chapter on Instagram and the visual past. (Shameless plug, I know.)
Instagram has been a fiercely contested battleground for what I call e-history ever since its inception. In part this is because of its demographics: Instagram gained prominence by targeting adolescents and young adults, impressionable audiences whom extremists can flood with conspiracy theories and disinformation. Moreover, few professional historians operate on Instagram to counter historical disinformation; historians are largely on Twitter / X, with some now on BlueSky or Substack. As a visual medium, Instagram also privileges meme culture, which hate groups and conspiracy theorists can easily exploit to evade content moderation and make harmful messages appear fun, playful, and non-threatening, e.g., using cartoon characters such as Squidward to capture young viewers’ attention. Because it privileges the visual, Instagram is also conducive to evocative, visually arresting imagery, such as photographs from World War II. Finally, Instagram is highly addictive, making it easy to fall down rabbit holes: once a user interacts with conspiracy theories, the algorithm will recommend more of them.
So, where does this leave us as we mark International Holocaust Remembrance Day in 2024?
On the positive side, the OSB decision sets a precedent for removing future offensive content and shines a light on where Meta’s commitments to eliminating hate speech are falling short. On the other hand, millions of hateful posts still exist on social media. Instagram users post roughly 65,000 images every minute, or more than 93 million each day. How much of that content is Holocaust denial? How much is other offensive content that minimizes or denies atrocities against persecuted groups? How much of it is disinformation, conspiracy, propaganda, or extremist content intended to stoke hatred and incite violence? There remains no definitive way to answer those questions, nor a truly effective way to moderate such a massive amount of content in real time without a drastic reconfiguration of how the platforms operate.
Content moderation alone cannot be the solution, however. Even in European countries where Holocaust denial is outlawed, it persists online and off. In the United States, some academics and civil society organizations that condemn Holocaust denial still argue against laws that would prohibit it, on the grounds that such laws would set the stage for broader restrictions on speech protected by the First Amendment. As a result, fewer than 80 years after the liberation of the concentration camps—with people still alive today who survived those camps—millions of people around the world believe the conspiracy theories. We will never regulate our way out of that conundrum. We must continue to invest in education, commemoration, and the perpetuation of the actual histories, legacies, and effects of the worst atrocity ever committed by one group of human beings against another.
Finally, there is an argument to be made that the Holocaust education that worked in the 20th century no longer works well in the 21st. We likely need fresh approaches, new tactics, and revised curricula better suited to the technological and political realities we now operate in. Tech companies should be in consultation with historians; museums need to be in consultation with the private sector; government needs to be in consultation with civil society. Those interactions are happening; I participate in some and hear about others. That is why we created the History Communication Institute: to build bridges among diverse groups, create collaborations, and develop new ways of communicating history’s lessons in an era when doing so ethically and effectively has become increasingly complicated. We need to keep pace with younger generations and how they use technology, and we need to be strategic in how we impart the realities of the past to impressionable minds who have grown up distrustful of journalistic, academic, and government sources.
This year’s International Day of Remembrance is a reminder not solely of the horrors of the Holocaust, but also of our mandate to refresh how we tell the story. One of the benefits of the OSB process is that it provides us insights into what still needs to be done and what stakeholders need to be at the table. We have to use that knowledge to formulate new directions and approaches. Never again is now—and the time to act is now, too.
Have a good week,
-JS
P.S. - I learned through my work with the OSB that there are currently no public historians on the Oversight Board. This seems like an oversight (pun intended); there should be historical expertise on the OSB to help render decisions on these cases. If you agree, you can nominate one for the Board by filling out this form.
Notes
[1] Response of the American Jewish Committee Jacob Blaustein Institute to the Meta Oversight Board on case 2023-22-IG-UA, available on the OSB website here.
[2] Ibid.
[3] Ibid.
[4] Ibid.
[5] Response of CyberWell, Ltd. to the Meta Oversight Board on case 2023-22-IG-UA, available on the OSB website here.
[6] Ibid.
[7] Response of The Conference on Jewish Material Claims Against Germany to the Meta Oversight Board on case 2023-22-IG-UA, available on the OSB website here.
[8] Ibid.