Author’s note: This is the first of several pieces on artificial intelligence that I plan to write over the coming months. I have a chapter in my book on how this relates to history, called “History.AI.” Read that, then read this.
History Club members know that I recently spoke about my book at the annual SxSW conference in Austin, Texas. I was there to talk about the past and the present, and did so in panel conversations sponsored by Rolling Stone / SHE Media and Smithsonian’s Made By Us. When I was not speaking, though, I made a concerted effort to attend panels focused on the future, particularly artificial intelligence and machine learning. I expected to have concerns; I left with many.
A few disclaimers. First, I am not an expert in AI or ML technology; I know enough to hold an intelligent conversation, but I still have much to learn. Second, AI and ML are billion-dollar industries spanning government, academia and tech. While there are many overlaps, they can’t all be painted with the same broad brush. Third, current research may or may not emerge as viable products or applications. What we talk about in 2022 may never be implemented in 2023 and beyond.
More concerning than the products and technologies, though, are the philosophies that underpin their development—and how a few select individuals have enormous power to shape society for decades with limited oversight or regulation. In this first article on AI, let me introduce you to a few of those individuals and report to you what they said at SxSW.
The first panel I attended was titled “A Connected World: The Future of AI Tech.” It featured:
Erwin Gianchandani of the National Science Foundation;
Yolanda Gil, the Senior Director for Major Strategic AI and Data Science Initiatives at the University of Southern California;
Dario Gil, a Senior Vice President at IBM and a member of the National Science Board; and,
Sethuraman Panchanathan, the director of the National Science Foundation.
Unless you’re in the industry, you’ve likely never heard of these individuals. They are not often written about by mainstream media outlets. American readers, in particular, will be forgiven if they did not know that Dr. Panchanathan was nominated by President Trump in 2019 to lead the National Science Foundation and confirmed by the Senate in 2020. While the press was preoccupied with whether Trump had a relationship with an adult film actress, the appointment of the person who would head the $8.3* billion agency that will shape how we live our lives for the next 30 years went largely overlooked.
It turned out that this SxSW panel was a prelude to a bigger announcement from Panchanathan and Gianchandani: NSF is reorganizing, creating a new directorate, headed by Gianchandani, that will bring new technologies to market faster, thereby ensuring “national competitiveness.” In other words, the new directorate’s purpose is to help America catch up in the AI competition with China by pumping billions of dollars into artificial intelligence and machine learning, leveraging private corporations and researchers at top universities. Indeed, a preview of this government/industry/academia collaboration was on display in this panel, where a professor, a business executive and two government officials spoke in lockstep about the marvels of future AI applications.
Dr. Panchanathan homed in on one particular talking point repeatedly: “Human performance augmentation,” to use his words. “Everyone is disabled,” he claimed, offering the example that “we are all 180-degrees blind; you can’t see behind you.” To remedy this apparent deficit in our human condition, everyone will one day be augmented by technology. Assistance, augmentation and even fusion will take on many different meanings in many different contexts, according to Panchanathan. AI may initially help the visually impaired with sensory information that helps them navigate a crowded room, but those developments will eventually be passed on to everyone. “In 50 years, we will all be augmented and fused individuals, onboarded and inboarded,” he promised. “Millions of devices will surround us and human performance will be very exciting.”
This did not sound very exciting to me. In fact, it sounded quite concerning. Being fused and inboarded with millions of devices in the name of “augmenting” my performance did not sound like a future I had signed up for. (I think I perform pretty well as I am!) It was even more alarming coming from the director of a government agency with billions of dollars at his disposal to make it happen, and a mandate from the highest levels of government to accelerate it in the name of “national competitiveness.”

Perhaps most concerning is how these decisions are being made largely beyond the oversight of taxpayers, journalists and Congress. The House Science Committee, which has jurisdiction over NSF, has not held a hearing on AI in more than two years. (See my previous newsletter on why Congress is ill-prepared to regulate Big Tech.) As far as I can tell, no journalists are assigned to cover the NSF on a regular basis (if someone knows this to be inaccurate, please correct me!), and as science journalist Charles Petit has written, many science news stories are fashioned out of “the press releases that tumble out of government-funded labs or universities.” Science reporters repurpose government or academic information as news rather than critically investigating it. Nor are our elected officials speaking on the campaign trail about the multimillion-dollar partnerships being entered into with private corporations. (NSF recently launched a $100 million partnership with Intel meant to wrest dominance of the semiconductor industry away from Asia.) Instead, politicians divert our attention to gas prices and critical race theory, subjects for another newsletter. In our democracy, where our tax dollars fund huge investments that will dictate our future, consequential decisions are being made in a starkly undemocratic fashion.
The next panel I attended was called “Quantum Computing for Design and Social Good.” It again featured an executive from IBM, Russell Huffman, this time in conversation with three professors from The New School. (Notice the pattern of industry coordinating with academia, not to mention IBM using SxSW as a platform for marketing and public relations.) The conversation focused on hyping quantum computing and its ability to “solve problems that classical computers cannot solve today.” Like the previous panel, this one rested on the premise that improvements in tech automatically lead to improvements in the human condition. That assumption ignores the past 15-20 years, during which rapid improvements in technology improved the human condition for some while making it decidedly worse for others. It also ignores the possibility that improvements in technology could create new problems, a question that many in tech repeatedly want to sidestep. Tech is always billed as a solution to existing problems; it is never conceived as a potential cause of new, far worse ones. It’s as if techno-optimists have no regard for history, which was actually admitted during the panel. “The leading use cases are financial tech, cyber security, computer recognition and voice recognition,” one panelist said. “The Liberal Arts are an afterthought.” This was one of the conclusions I reached in my book, which is worth a read if you haven’t picked it up yet.
A third panel I attended was an interview with Rana el Kaliouby, an Egyptian-American entrepreneur who co-founded Affectiva, a company that specializes in facial recognition and so-called “emotional AI.” Affectiva has used videos, cameras, laptops, smartphones and other monitoring devices to collect tens of millions of human facial expressions. According to Kaliouby, the company now has more than 12 million face videos, which translate into 6 billion facial frames of data. By feeding that data into machine learning applications, companies can potentially do any number of things. Universities can use AI to monitor students’ facial expressions and determine whether a student is bored or paying attention. Police officers can use AI to determine whether a driver is drunk, or to gauge the emotional state of a passenger in a car. Netflix can use facial AI to measure the reactions of viewers and determine whether a show is lagging and needs to be spiced up, or where attention may be dipping. Emotional AI can analyze your facial features to determine whether or not you’re depressed (!!). Kaliouby admitted that “regulations are not in place to do this responsibly” and that “ethical lines” need to be drawn. But she did not elaborate on how those ethics and regulations would be put into place, or by whom. In the meantime, private companies are forging ahead to exploit these data sets. Kaliouby recently sold Affectiva to Smart Eye, a Swedish eye-tracking conglomerate that operates around the world, including in China, where a massive country-wide surveillance system is in place. The price tag on the acquisition? $73.5 million.
The last panel I went to was meant to be an escape from these dystopian futures. Titled “Artificial Life as Art: Reimagining AI,” it was a conversation between Sadiya Akasha and Nathan Lachenmyer of Sitara Systems, a Brooklyn-based technology and design firm. Akasha and Lachenmyer invited us to imagine a future where machines and humans co-exist, akin to how we co-exist with squirrels and birds: machines will do what they do well; humans will do what we do well. It does not have to be a future of either servitude or displacement. To illustrate this, they worked with artist Anicka Yi to install intelligent floating machines inside the Tate Modern. The machines hovered above people in the gallery, existing alongside visitors while operating independently. Some people bonded with the machines, claiming they had personalities beyond what was programmed. Others barely noticed them. The experiment was intended to demonstrate that technology does not have to be apocalyptic; it can also be symbiotic and beautiful.
While I appreciated the sentiment, I could not square it with what I had heard in the previous panels. It actually served as a chilling metaphor: some people stare with wonder at technology, while others simply walk by without asking critical or skeptical questions. Indeed, that was a theme I found over and over at SxSW. People in positions of wealth and power—government officials, corporate executives, academics with tenure—sit on conference stages and promise us that the technological advancements that enrich their pocketbooks—and which are paid for by our tax dollars, tuition dollars and consumer spending—are surely going to benefit us all. It’s as if they are completely unaware of the histories of technology gone awry, falling into the wrong hands, and being used to harm, exploit and undermine society, democracy and human rights, a pattern that has been starkly evident in, and is arguably the defining characteristic of, the 21st century. It’s as if they’ve completely ignored history, and we all know what happens when you do that (hint: you are doomed to repeat it).
It seems to me, then, that we need more historians, ethicists, philosophers, journalists and concerned citizens actively involved in these conversations around AI. This was a conclusion I came to when writing my book, and I will have more to say about it, and about AI, in future newsletters. In the meantime, I encourage you to take a moment and email members of the House Science Committee, asking them to hold a public hearing on how government, academia and private industry can develop and regulate AI ethically. In the U.S., a significant portion of these technologies is supported by federal funding, a.k.a. our tax dollars. We have a right to ask hard questions about them. Isn’t that how democracy should work, after all?
Have a good week.
*Correction: the original version of this article stated the NSF budget at $2.3 billion. It is actually $8.3 billion, as cited in this US Patent and Trademark Office notice.
History, Disrupted in the media
History, Disrupted continues to receive positive media attention:
I was on NPR again this week, appearing on the show Midday hosted by Tom Hall. We had a wonderful conversation about my book and how its themes manifest themselves in the media coverage of Ukraine. Listen to it here »
I was interviewed by Moises Naim, the former editor-in-chief of Foreign Policy magazine, for his television show Efecto Naim, which broadcasts in 23 countries to over 10 million viewers. The program aired last Sunday and is now available online. Watch it on YouTube »
Episode two of the podcast mini-series “Reframing History” is now available wherever you get your podcasts. This week’s episode specifically cites History, Disrupted as it examines the case of the misunderstood historical method. Listen to it here »
Haven’t gotten your copy of History, Disrupted yet? My website lists a few bookstores that are carrying it. You can also order from your local bookstore; just give them the ISBN 978-3030851163.
Already have your copy? Please leave a positive review on Amazon. It helps surface the book to more readers.
History Club meets Thursdays at 10 pm ET on Clubhouse. Want to participate? Download the app and join the club.
Want to support History Club? Your support allows me to publish posts like this.