
Fazal Rahman lives in Pakistan and, up until recently, had never heard of the Holocaust. When asked by the BBC about the event, he confessed he did not know what the term meant. Yet he was familiar with images of it… A.I.-generated images, that is.
Rahman is part of a network of Pakistani content creators who use artificial intelligence applications to generate fictitious images of historical events, including the Holocaust. They principally post on Facebook, where their pages have amassed hundreds of thousands of followers. For producing content that generates a high number of interactions—such as A.I.-generated Holocaust imagery—they can earn $1,000 per month as part of Facebook’s content monetization program. The more clicks, views and shares their content gets, the more they get paid—particularly if their content is consumed by higher-income audiences in the U.S., U.K., and Europe. For Rahman, it has become his sole source of income.
How would someone such as Rahman, who’d never heard of the Holocaust, get the idea to post A.I.-generated Holocaust imagery? By asking ChatGPT. According to the BBC investigation, creators inside these content groups use A.I. chatbots to determine which historical events make for high-performing social media content. The Holocaust was one of the answers.
Since 2022, there has been growing discussion about how artificial intelligence applications such as large language models, image generators and chatbots will affect “history”: both the professional discipline of history and public understandings of the past. In 2025, the contours of that influence are now visible. A.I. applications are being used around the world in dozens of history-related contexts. A shortlist includes:
An exhibition at the White House about America’s “Founding Fathers” and “Founding Mothers,” curated by PragerU, which includes historical portraits that morph into A.I.-generated video clips, activated by scanning a QR code;
Game designers, such as in the Netherlands, incorporating A.I.-generated historical imagery into their games, trying to, in their words, “leverage the potential of AI to bring history to life”;
Historians and researchers in multiple countries using tools such as Google’s NotebookLM to summarize and synthesize scholarly literature, as well as to take notes and arrange book chapters;
Reviewers for peer-reviewed journals, in some instances, using ChatGPT or other large language models to author their assessments of scholarly articles;
Holocaust museums—and other sites of conscience—using hologram technology coupled with A.I. to simulate conversations with deceased witnesses and survivors of past events;
Conservators and historic preservationists in Italy, Greece, India and China using A.I. to reconstruct destroyed archaeological sites or damaged cultural artifacts such as paintings, sculptures and mosaics;
Hitler’s speeches and writings translated by A.I., remixed, set to background music and propagated on social media;
The current presidential administration in the U.S. using A.I. to generate a list of history books with keywords in their titles perceived as offensive, then removing those books from library shelves at the U.S. Naval Academy;
University presses licensing their books to train large language models;
Museums and universities establishing policies and procedures for using A.I. in the workplace;
Approximately one-third of professors across disciplines, including history, describing themselves as frequent users of generative A.I. tools, according to a recent New York Times article, with uses that include developing lesson plans, making lecture slides, and designing custom chatbots that answer student questions;
Students in classrooms worldwide using ChatGPT and other large language models to author homework assignments, historical essays and, in at least one case, a PhD dissertation.
(As evidence of ChatGPT’s mass adoption among students, usage data shows that activity peaks in May around final exams, declines over summer break, then spikes again when the school year starts.)
The proliferation of uses of A.I. in historical or history-adjacent spaces prompted the New York Times to publish an article in June, titled “A.I. Is Poised to Rewrite History. Literally.” Presumably at least partly in response, the American Historical Association (AHA) released its Guiding Principles for Artificial Intelligence in History Education shortly after. Though only two months old, the AHA document, while laudable, already feels like an artifact, staking a claim to land that has already shifted beneath it. The document asserts several times that A.I. cannot replace history teachers or professors. The wider world seems to disagree. Over the summer, Microsoft released its list of the 40 occupations with the highest “AI applicability score.” Historians were number two, behind only interpreters and translators, with a “coverage” score of 0.91 out of 1. According to the second-largest corporation in the world (by market capitalization), most of what historians do can be replicated or outsourced to machines, either now or in the future.
History.AI is here, then—and with it, the likely alteration of the history profession as it has been practiced for decades. What it becomes is still to be determined. While some observers and traditionalists had held out hope that A.I. might be a passing fad, akin to the laser-disc or the unicycle, that appears unlikely. Much as A.I. applications and advancements will alter medicine, law, marketing and science, so, too, will they alter “history” in all its manifestations.
So far, no one has quite been able to articulate what that altered future looks like. The punditry around History.AI has largely mirrored conversations from prior decades, when new platforms and technologies emerged. In the 2000s, crowd-sourced historical knowledge on Wikipedia was perceived as a threat to expert-centric models of historical knowledge. The debate was whether Wikipedia would ever be accurate enough to be considered reliable history, or whether it could tackle the harder intellectual questions that historians believed only they could address. Today, Wikipedia is the fourth-most-visited website in the world; its content informs everything from YouTube videos and Amazon Alexa to journalism and legal research.
With the advent of social media and “e-history” in the 2010s, historians again criticized the diluted and inaccurate historical memes that circulated online, arguing that the viral past would never be a substitute for accurate and rigorous historical knowledge. Today, YouTube, Reddit, Instagram, Facebook and X are all among the 10 most-visited websites in the world, with some history-related accounts (as documented in my book) boasting millions of followers/subscribers.
While Wikipedia and social media have captured worldwide attention, professional history has clung to a rapidly diminishing airspace. As has been well documented in this newsletter and many other places, history departments in North America and Europe are being shuttered; retiring history professors are not being replaced; fewer history degrees are being awarded; history museums are closing, some due to lack of visitors and donor support, others due to political pressures; funding for historical research is disappearing; and full-time job openings for professional historians in universities, museums or historical societies are few and far between, often with paltry salaries. Anecdotally, I know many historians and history-degree-holders working in jobs, or pursuing jobs, outside of the profession. It is all well and good for advocacy organizations in Washington, D.C., to plant a flag and say they are not moving. The problem is that everyone else has.
For a period, it seemed that the so-called “hallucinations” and inaccuracies produced by generative A.I. might be the kryptonite that would protect expert-centric disciplines such as history. Surely the need for accuracy would always ensure the need for professional historians. Alas, the tech companies were playing chess while scholars were playing checkers. The more that everyone uses A.I. technologies, the better they become; we are all training the machines in real time, every day. Now armed with access to scholarly databases and thousands of university press books, A.I. applications are far more accurate and less “hallucinative” than they were even one year ago, and they are rapidly improving. In the case of historic preservation and the reconstruction of bygone relics, A.I. is likely to become more accurate than a human ever could be.
In prior decades, there were arguments, too, that the proliferation of these technologies might help professional history, and, by extension, public understandings of history. If only professional historians could leverage Wikipedia and social media to elevate their own voices, the argument went, the technologies would boost the relevancy of history and historians in people’s everyday lives, and the web would be overflowing with accurate historical analysis. Some public intellectuals and journalists continue to hold out hope that A.I. will do something similar, that its transformative power will “bring history to life.” These scenarios failed to account for the realities of how people behave within incentive structures shaped by modern technologies. If the ecosystem rewards rapid-fire posting on social media with financial, reputational and political gains regardless of accuracy, that is what people will do. Other forms of cultural production won’t disappear overnight; they’ll simply become novelty items that people admire but don’t feel are necessary in order to succeed in their everyday lives.
That is precisely what has happened to history, even as we are surrounded by billions of pieces of e-history online every day, some created by humans, most generated or circulated by A.I. and algorithms. Despite more than 20,000 history institutions in the U.S. alone, most Americans struggle to name a single historian, identify a historian they know or have met, or even recite basic historical facts. (According to one survey, 48% of Americans could not name a single concentration camp or ghetto from World War II.) While historians continue to publish books, Americans continue to read fewer and fewer of them. Amid a deluge of information, misinformation and disinformation, as well as unrelenting demands on people’s time, energy and attention simply to make ends meet, rigorously researched information about the past is among the first items to be jettisoned.
Where is all this headed? Futurology is a tricky business, akin to batting in professional baseball; being right 30 percent of the time makes you eligible for a lucrative payday and consideration for the Hall of Fame.
The evolution of A.I. and the decline of “History” have not happened in a vacuum. Their stories are intertwined with broader political and geopolitical agendas. In the U.S., the race to lead the world in artificial intelligence has become a national imperative too consequential to finish runner-up. Across multiple administrations and Congresses, under both political parties, that race has resulted in a massive infusion of funding into S.T.E.M. (science, technology, engineering and math) at the expense of the humanities, as well as a deregulatory environment that has balked at imposing any serious restraints on the technologies, lest China, Russia, Saudi Arabia or India surpass us. It is a powerful political statement to be winning the global race for the future. It is less impactful to say we are winning the global race for the past.
At the college and university levels, the promise of high-paying S.T.E.M. careers that would justify the high cost of tuition lured students (and parents) out of the liberal arts buildings and into the science and business halls. A.I. has upended that story, too: coders and engineers now face bleak job prospects, but the students have not returned. With no history students, administrators can justify fewer history professors and no history departments. Why pay salaries plus benefits for professors to teach to empty classrooms? Students can learn the history they need for free from a variety of high-quality YouTube videos or podcasts. And while history museums and historic sites still retain high trust among American audiences, maintaining such sites is expensive (people, facilities, maintenance, programming, climate-controlled storage, etc.). Donors are giving less, some have passed away, and others are scared to take a stand on matters of history for fear that elected officials will come after them and their bank accounts.
So, while A.I. applications such as NotebookLM and artifact-reconstruction software will help the remaining employed professional historians write their syllabi, refine their classroom slides or do museum work, A.I. is unlikely to improve the state of professional history in North America or Europe… or beyond. At least as currently constructed, A.I. will not help students do better research or become better writers; will not improve the quality or accuracy of e-history in the public sphere or on social media; will not boost funding to history institutions that sorely need it; and will likely aid propagandists and repressive forces as they continue to pressure independent thought and scholarship on difficult topics. But A.I. will continue to be used, because the discourse around A.I. taps into one of humanity’s biggest fears: the fear of being left behind. If everyone else is using it, surely I need to as well. (Anecdotally, I hear this from students all the time. If all their classmates are using LLMs, how can they not?) The more the tools are used, the better the applications become, and the more legacy professions fade into obscurity.
What A.I. really offers is proving too alluring to pass up, and, indeed, this has been the genius of how A.I. has been marketed to the public. A.I. tools offer the promise of being ahead of the curve: saving time, earning money, processing more information at greater scale, parsing data to reach a plausible explanation, and getting to an answer faster, even if that answer is not the best one or even a correct one. All technologies have politics, and A.I.’s politics are engineered for speed, automation, profit and predictability. The best history, on the other hand, emerges from a deliberate and tortuous human exploration of what is often unpredictable and unexpected, usually without a clear profit motive. That experience, itself, could soon become a thing of the past.
What will replace it? For inspiration, I asked Google Gemini. It produced a lengthy A.I.-generated report claiming that “instead of facing obsolescence, the profession of history is in the midst of a profound and necessary evolution.” It cited 24 websites, more than half of which were university webpages marketing history classes to their non-existent students, plus three additional citations from Wikipedia. It was eerily similar to the A.I.-generated Holocaust images produced in Pakistan: it had the illusion of reality while being completely divorced from it.
A metaphor for our current times, perhaps.
Have a good week,
-JS
This post focused on the challenges. A subsequent post will focus on the solutions. There are some! Hope is not lost!