What the humanities can offer A.I.
A humanities perspective could bring much-needed insights to policymakers
“Scarcely a new invention comes along that someone does not proclaim it to be the salvation of a free society.”
Those words were written in 1980 by the scholar Langdon Winner in his now-famous article “Do Artifacts Have Politics?” Winner listed an array of technologies that had been marketed as guarantors of freedom and democracy: the radio, the television, nuclear power, even phosphate fertilizers. The internet and social media have since been added to the list, each promising to “democratize” society upon its release.
One could argue it’s been a deviation from the norm, then, that the latest technology, artificial intelligence, has been proclaimed not as the salvation of society but as its doom. A.I. has been described by pundits and experts as “a threat to humanity” and a technology that “could destroy the world.” In this very newsletter, I’ve warned about the dangers of A.I., including dictatorships powered by A.I. surveillance, massive job displacement leading to social unrest, and the increased concentration of wealth in the hands of the few who own A.I. companies.
The response from sincere and well-intentioned people in the responsible-tech world, and from policymakers in the U.S. Congress, the Canadian Parliament, and the European Parliament, has been to call for regulation. The premise has been that, ultimately, responsibility for the tech rests with us, not with the tech itself. With the proper guardrails, A.I. can stimulate innovation while also ensuring security and accountability, to paraphrase U.S. Senator Chuck Schumer’s remarks this week.
It’s important to note, though, that Winner debunked this argument more than 40 years ago. Such a conclusion, he wrote, offers comfort to social scientists because it validates what they already want to believe. But certain technologies are “political phenomena in their own right.” Some are, from their very inception, compatible only with certain types of political relationships. A.I. is likely one of them.
In other words: the use of A.I. technologies inevitably dictates how we organize our lives. The more we use them, the less flexibility we have to arrange our lives differently, and the more those arrangements give power to some and take it from others. The more A.I. encroaches into every aspect of our lives, the more entrenched its power dynamics become.
That is why, this week, my History Communication Institute (HCI) released a statement arguing that historians and humanities scholars must be part of the debates over A.I. being held in Washington, Ottawa, Brussels, and beyond. Debates over how technology shapes humanity and society have a very long history, and policymakers who are unaware of that history risk making choices that will accelerate the very harms they fear.
For a technology that pundits believe threatens the very existence of humanity, the presence of humanities scholars in debates about A.I. feels imperative. Technologists themselves also need a well-rounded history and humanities education in order to understand the potential effects of the products they make. Given how few S.T.E.M. majors take history and humanities classes, strengthening the humanities feels like an urgent policy issue.
Winner’s article, of course, is not the sole voice on technology’s effects on humanity. His is one of thousands of relevant pieces of scholarship that could offer much-needed insights, a body of work that stretches back millennia, to Plato’s writings on technê.
While a handful of policymakers are aware of such works, I suspect most don’t have the time to read reams of humanities literature while debating A.I. amid other societal challenges. Luckily, they don’t have to: there are historians and humanities scholars ready to assist, if only we were invited.
To that end, the political journalist Michael Jones, who featured our A.I. statement in his reporting this week, shared it with the Biden Administration. They sent back a rather uninspired reply (in my humble opinion). Instead of embracing the opportunity to invite humanities scholars and historians into the conversation, they suggested that a few off-the-record meetings with representatives from “ed tech” and “ed policy” served as an ample substitute. If people inside the White House cannot distinguish between “ed tech” and the humanities, that, in itself, evinces how desperately our insights and scholarship are needed.
More optimistically, Senator Schumer announced this week that he will convene at least nine panels to ask the “hardest questions” facing regulators of A.I. The hope is that Sen. Schumer (who once lived in the same apartment building as my father) will welcome a diversity of intellectuals and disciplines into the dialogue, including humanities scholars willing to question the very assumptions on which much of the policy debate has been built.
From my own reading of Winner and others, and from writing my book History, Disrupted, which includes a chapter on artificial intelligence called “History.AI,” my current thinking is that the acceptance of A.I., and the big data that powers it, into our lives over the past decade and a half has already created outcomes that are now nearly impossible to undo. The purpose of regulation at this juncture, then, should be to counterbalance the effects of these technologies that are already in motion.
In other words, it’s not about boundaries; it’s about counterweights.
For a concrete example: as market forces have inexorably pushed us toward privileging science, technology, engineering, and math (S.T.E.M.), governments must be the counterweight that invests in the humanities and social sciences. Yet the exact opposite has been happening. Governments have increased S.T.E.M. funding while cutting humanities funding at both the federal and local levels. Policymakers have done so largely in the name of national competitiveness and to bolster industry, which laments a lack of “talent” even as it shifts the goalposts on the skills that talent needs to gain and retain employment.
If I were in dialogue with Sen. Schumer, I would suggest that part of “regulating” the effects of A.I. on society must be ensuring that history and humanities education does not disappear as A.I. advances. Indeed, I made that argument to the European External Action Service and the European Parliament last month when they graciously invited me to visit them in Brussels, invitations for which I am extremely grateful and which have opened lines of productive conversation. Exchanges between policymakers and historians are possible if both parties are willing to invest the time and effort to make them happen.
The humanities can offer policymakers numerous ideas and insights—far more than can be included in this single newsletter article. If you agree, I encourage you to:
Read our statement on the need for historians and humanities scholars in the A.I. conversations;
Contact your Senator or Representative and urge them to invite us into the dialogue (readers outside the U.S.: contact your Parliamentarians).
As I said in March when I spoke at the Consulate General of Canada in New York, “All tech should involve the humanities,” a line that drew applause and agreement from the audience.
Now is the time to make good on that promise.
Have a good week,
-JS