This week our History Communication Institute convened a roundtable discussion about ChatGPT, open A.I. tools, and their potential effects on history. Listen to the podcast above or watch the event on YouTube.
I’ve written about artificial intelligence before in this post from April 2022:
And in this chapter of my book, which I titled “History.AI”:
I’ll have more to say about artificial intelligence in the months ahead.
In this event, we convened speakers from the U.S., Canada and Europe to provide insights into what these A.I. tools do well and where they fall short.
Specifically, ChatGPT can:
Take a complicated block of text and make it simple and easy to understand;
Help overcome writer’s block by using a prompt to create a first draft that can be refined and edited;
Speed up the writing of repetitive texts that use standardized, boilerplate language, such as press releases, form letters, syllabi or social media posts;
Help people with dyslexia, for whom writing can sometimes be difficult and painstaking.
But the widespread availability and use of ChatGPT raise numerous ethical, pedagogical and epistemological questions, including:
Can GPT distinguish between authoritative sources and non-authoritative sources? ChatGPT continually delivers wrong answers and false information, and seems unable to discern one source from another. GPT is a large language model (LLM): it was trained on trillions of words of text from across the Web—including Wikipedia and Reddit—and learned which words tend to appear in which orders. It doesn’t think critically and it doesn’t know anything about the world. It certainly doesn’t know anything about history. If you ask it historical questions, particularly about local or niche subjects, you’ll receive a litany of fictitious answers and be told about people who never existed. This leads to question number two, namely:
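The “words that tend to appear in specific orders” idea can be sketched with a toy bigram model — a deliberately simplified stand-in, not how GPT actually works (real LLMs use neural networks trained on vastly more data), and the tiny corpus here is invented for illustration. Notice that the output is grammatical-sounding but carries no knowledge or judgment, which is exactly the problem the question above raises:

```python
from collections import defaultdict, Counter

# Invented toy corpus; a real model trains on trillions of words.
corpus = (
    "the railroad carried coal . the railroad carried passengers . "
    "the train carried coal ."
).split()

# Count, for each word, how often each possible next word follows it.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def generate(start, length=6):
    """Emit `length` more words by repeatedly picking the most likely next word."""
    out = [start]
    for _ in range(length):
        options = follows[out[-1]]
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))
# Fluent-looking word order, zero understanding of railroads or history.
```

The model only knows which word most often follows which; it has no concept of truth, sources, or authority — it will happily emit a plausible sentence whether or not the facts behind it exist.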
What do people think that they can do with history when they use ChatGPT? This question was posed by author and historian Steve Minniear during our event. Historians may instantly recognize that GPT is terrible at answering historical questions, but do non-historians know that? Will non-historians recognize that the answers to their questions are wrong, false, fictitious or nonsensical? Will they care? A real-world example was provided by Johanna Porr Yaun, a state historian in Orange County, N.Y., who used ChatGPT to generate a short paragraph about the O&W Railroad for a newsletter and fundraising appeal. ChatGPT generated a text claiming the O&W was the first railroad to use electric locomotives—a sentence that sounds plausible but is categorically false. As a professional historian, Johanna caught the error before it was printed and distributed. But what if she were a student? Or an advertising firm? Or an elected official? If A.I. tools can quickly generate answers that seem plausible, and advance a particular agenda, what’s to prevent those answers from being accepted and circulated widely? This leads to a third question:
If students use ChatGPT to generate essay responses and exam answers, is that cheating? Plagiarism? A learning opportunity? Something else entirely? At heart, historians are educators, and many educators—in high school and in college—are now debating difficult questions about how to respond to open A.I. tools. For example, Dr. Jimena Perry teaches in a small history department at Iona University where the professors know the students quite well. She mentioned during our event that her students, who feared writing and were not strong writers, have suddenly been turning in beautiful essays. Jimena also said that the students don’t actually know what they’re turning in; they hand in well-written essays but don’t understand what’s on the page and don’t grasp the concepts. What should professors do? Grade harder? Ban GPT? Change the assignments? Reconfigure the learning outcomes entirely? Kim Fortney, Deputy Director of National History Day, raised a similar question. If students use ChatGPT to compete in National History Day, is that permissible? Ethical? Are there detection tools that can tell when text has been generated by A.I.? (Spoiler alert: there are, but according to Nathan Lachenmyer they are quite terrible. They work less than 25 percent of the time and generate false positives more than 10 percent of the time.)
These questions just scratch the surface of the much deeper existential issues behind these technologies, questions the History Communication Institute will be grappling with in the months and years ahead.
It’s conceivable that a sizable portion of future history writing, journalistic writing, science writing, marketing copy, advertising, social media, even government and think tank reports, will be produced by machines, drawing on the petabytes of data already on the Web. As these technologies improve, the speed and quantity of machine-generated texts have the capacity to overwhelm what any human can do. A.I. could produce thousands of press releases in the time it takes a PR professional to write a few dozen. A.I. could generate thousands of news articles in the time it takes journalists to write a handful. A.I. already generates some of the financial and weather-related news stories that appear across the Web. Will these texts be useful? Credible? High quality? High fidelity? Will any of that matter?
The futures of history, journalism, science, writers, researchers—even newsletter writers (!)—face many existential questions in this reality. It’s part of the reason why I wrote History, Disrupted and founded the History Communication Institute. The technologies we use profoundly reorganize and reorder our world. History, the humanities, education, the arts, journalism and our very concepts of knowledge and creativity—and how those are measured, evaluated, disseminated and compensated—feel like they are undergoing seismic disruptions. To be sure, this has been a process decades in the making, foreshadowed by many previous innovations. Now, it is confronting us head-on.
My contention has been that we need historians, ethicists, philosophers, journalists and concerned citizens actively involved in these conversations around A.I. with technologists, engineers, developers, investors, entrepreneurs and lawmakers. These technological advancements may enrich the pocketbooks of the select few who develop and market them, but will they deliver lasting benefits to society? As Nathan Lachenmyer asked during our event, what are the true social costs of these new tools? These questions do not currently have answers, nor are any meaningful regulations in place to ensure A.I. is utilized responsibly and ethically, however those terms are defined. Open A.I. tools may solve some existing problems, but they also may create newer, far worse problems.
If these questions and concerns resonate with you, I encourage you to get involved with the History Communication Institute (HCI). You can sign up for our email list; apply to join our Slack community; or make a donation to support our work.
Finally, if you have thoughts about ChatGPT, or have experimented with it to positive or negative effect, please leave a comment below. We want to hear about your experiences.
Enjoy the conversation—available as a podcast above or as a video below.
Have a good week.
This is a reader-supported publication. To support it, become a free or paid subscriber.