Can tech become more human?
A “Responsible Tech Summit” seeks to bring more humanity into Silicon Valley
All Tech is Human is a remarkable organization. Founded in 2018 by David Ryan Polgar, in four short years it has grown into a community of more than 3,000 people. Its mission is quite straightforward: to unite a diverse group of stakeholders to co-create a better tech future.
As part of that mission they host events, and most recently ATiH hosted a Responsible Tech Summit in New York City at the Consulate General of Canada. I was fortunate enough to be invited, and so I thought I would share with you who I met and what I heard at this gathering of more than 120 tech leaders.
One question you may have is, why Canada? Apart from having a beautiful event space in the heart of Manhattan, Canada has been quite active in the digital realm. As I learned from Deputy Consul General André Frenette in his opening remarks, the government of Canada’s priorities include (1) bolstering cybersecurity; (2) countering disinformation; and (3) promoting “world-class business practices” through a Responsible Business Conduct Strategy. During the pandemic, the government partnered with Mastercard to build a $510 million intelligence and cyber center in Vancouver, called the Global Intelligence and Cyber Centre of Excellence. The government has also boosted funding for communications security, cyber operations, crypto, and its G7 Rapid Response Mechanism (RRM) Coordination Unit, an initiative of the U.S., Canada, Japan, the U.K., France, Germany and Italy to coordinate responses to threats against democracies. In short, Canada is spending a lot of money to compete in the digital arena. The government even has a dedicated service channel to recruit top tech talent.
So began a full day of speakers, more than 25 in total. It would be too much to summarize all the panels, so instead let me offer three positives from the summit, as well as three areas for further consideration.
THREE POSITIVE TAKE-AWAYS
1. A diverse range of stakeholders

One of the remarkable aspects of ATiH is how it has united a diverse range of individuals working in tech. Summit participants included government officials, data scientists, researchers, professors, activists, engineers, regulators, journalists, artists, founders and program managers. 80% of the speakers were women, and 60% were people of color. There were also international speakers from the United Kingdom, Australia, Singapore and the United Nations. It was not the homogeneous, male-dominated affair one might expect from a tech gathering.
This is a credit to ATiH, who were clearly intentional in designing the program. But it also reflects an emerging sentiment that the future of tech cannot be left solely to engineers of a certain ilk. Increasingly, the tech world is integrating a diverse range of participants, and employers are now actively seeking out diversity. One speaker was actually the founder and CEO of a company called Diversio, which uses A.I. to improve company diversity and lets employers know where they may have blind spots.
2. Smart and impressive people
As I’ve lectured about my book History, Disrupted, I’ve continually been asked what gives me hope. My answer is always the same: people. In government, academia, tech, and media I’ve met so many amazingly smart and dedicated people. The Responsible Tech Summit was no exception. I met many wonderful people at the event, but allow me to single out two who really impressed me.
The first was Dona Bellow. Dona was educated as an international human rights jurist and now works in digital trust and safety; she has worked at Google, Airbnb, Twitter and Facebook. Dona was on a panel titled “Imagining Positive Digital Spaces,” and was asked by the moderator what makes for a positive digital space. Her answer stuck with me: an online community she had encountered for mothers of “angel babies,” children lost at very young ages or before being born. Dona said that such a space was painful and full of loss, and thus not “positive” in the traditional sense of the word. But it was healthy and imbued with common purpose. It was a digital space where privacy and safety were intentionally built into the design. And it was the type of space I never would have thought about as a man who does not have children: evidence of the importance of diversity. It was a potent reminder of how the social Web can still be a place where, in Dona’s words, “people congregate, communicate, and make memories.”
The second was Natalia Domagala, Head of Data and AI Ethics at the UK Cabinet Office. Since I formerly worked in government and still advise government officials, I’m attuned to how government representatives speak. To serve in government (in my view) is to commit fully to public service, to be responsive to citizens, and to work at all times in the public interest, not self-interest. Such was the spirit I sensed from Natalia, who displayed a much-needed self-reflective attitude about governments’ own duties in responsible tech. In her words, “At the end of each decision is a real person. We automate decisions because it saves money, it saves time, and frees up government officials to do other things. But citizens have a right to know how decisions were reached.” Her office conducted a public survey about what information people wanted from government about its use of algorithms, then developed standards based on the results and published them in plain English. She also said that, unlike Silicon Valley, government does not move fast because it wants to ensure it gets things right, which requires resources and time only governments possess. This is further evidence of why government and private industry are needed to balance each other.
3. Realistic, plain talk

Very often these types of gatherings make bold promises wrapped in flowery language. Thankfully, this summit did not. Each panel was quite realistic about the challenges we face and the choppy terrain ahead. In particular, I appreciated how several speakers recognized that content regulation on social media is, essentially, an unachievable goal. Fred Langford (Ofcom) and Dhanaraj Thakur (Center for Democracy & Technology) noted that users will always devise ways to work around content guidelines. (Taylor Lorenz wrote a recent article for The Washington Post on how social media users have developed “algospeak” to evade censorship.) Yaël Eisenstat was quite blunt about how social media platforms want us to spin our wheels debating content moderation, as that deflects from questions about their business models. And an employee at Meta was very honest about how challenging it is to find one piece of hateful content amid the millions of posts created every minute of every day around the world. Such plain talk is refreshing, and a credit to ATiH for inviting experts who understand the daunting task ahead.
There were other positive takeaways, but those three stood out to me. There were also areas for improvement, three of which I’ll articulate below for our consideration:
THREE AREAS FOR IMPROVEMENT
1. Lack of diversity
At the same time that we applaud diversity, we can also recognize a lack of diversity.
At the Responsible Tech Summit, that lack of diversity manifested most starkly as a lack of political diversity. None of the speakers expressed a Conservative or Libertarian point of view, and nary a speaker even seemed to represent a politically moderate or centrist position. Of course, it’s not always easy to discern these things, and not everyone wears their political affiliations on their sleeves. But the gathering was overwhelmingly of a left-leaning political persuasion. Not having social Conservatives, Libertarians, Republicans, political centrists or even extreme Leftists on stage essentially excluded more than three-quarters of the U.S. population (not to mention the global population) and created an echo chamber that would have benefited from more poking, prodding and inquisition. The shortcoming was particularly felt in the panel on free expression, where panelists remarked on how “we” need to rein in free speech without reflecting on who the “we” is, and what assumptions that pronoun carries about who gets invited into these types of spaces and who is excluded.
As someone who has worked with veterans and the military my entire professional career, I also noticed the absence of military and law enforcement professionals. Tech is increasingly used as a tool in warfare and policing, and if any professional class should be at the table discussing Responsible Tech, it should be them. Progressives tend to filter these questions through the lens of criminal justice and resistance to state power, but why not invite military officers, defense contractors, or customs and border protection officials to the table to learn where tech and algorithms are necessary to protect human lives, as well as where tech may be used irresponsibly? Since Mastercard was invoked, it also would have been interesting to have Fortune 500 companies represented. How are large financial companies thinking about responsible tech, algorithms and user data? How are they collaborating with government, and are proper safeguards being put in place?
Responsible Tech is such a huge topic, there could be endless summits with endless speakers. Maybe some of these stakeholders were invited and declined. But as presented, the summit felt like its own social media filter bubble. To quote Yu Ping Chan of the United Nations, who spoke at the summit, “If you are having a conversation among those that already agree, you risk leaving out those that you need to persuade further.”
2. A lot of tech; not a lot of human
The summit offered much rumination on the regulation of tech; it did not offer much rumination on what it means to be human, or how technology has shaped and changed the human experience.
One noticeable absence was any discussion of religion: again a symptom, I suspect, of a worldview in which organized religion plays an increasingly minimal animating role in public and private life, or is viewed with suspicion for its associations with Conservative politics. There are religious Progressives, of course, but their alignment with politics is inchoate, and Progressives who maintain religious practice often articulate it in broader spiritual terms, or see religion as a channel for their social activism. There was actually a chaplain at the summit, but he was a humanist chaplain and spent his few minutes on stage promoting his forthcoming book.
The animating role of religion in public and online life went largely unaddressed, and so did other aspects of what makes us human: emotions, anger, jealousy, competition, compassion, love, sex, empathy, friendship, family, creativity, struggles for power, nationalism, and patriotism. These were almost completely absent from the panels, which were highly technocratic and centered on laws, regulations, policies, algorithms, automations, and entities such as “governments,” “companies,” and “platforms.” Yet all of these entities are created and run by human beings whose experience of the world has been vastly reorganized by the technologies we use. For a public historian who has studied the humanities his entire life, the absence of humanities scholarship made the summit very much “tech” and very little “human.” In fact, the word “humanities” was not mentioned during the entire day. At selected moments, speakers invoked the social sciences, but not a single speaker argued for humanities scholars to have a seat at the table.
Which brings me to my final point…
3. No historians, and a lack of historical perspective
This is History Club, after all, so this seems like an imperative. But the summit felt entirely divorced from the past. For those who have read my book, this will not come as a surprise, as serious history has often been an afterthought throughout the development of the social Web. When history is considered by technologists, it is often as a piece of e-history content that can generate likes or clicks, rarely as a serious analytical framework to understand world events.
Such was the case at this summit, where the word “history” was not uttered once; no historians were included as speakers; and Australia’s eSafety Commissioner said that we must “learn the lessons of today to make a better tomorrow,” without any mention of learning from the past. The summit suffered from a peculiar form of presentism, wherein only today and tomorrow exist and nothing that happened previously seems applicable or relevant. Indeed, in one panel the speakers were asked what skills are necessary for working in the tech sector, and none of them mentioned historical knowledge. The skills mentioned were inclusion, empathy, “cognitive diversity,” technical proficiency, and challenging the status quo; historical understanding, research, writing, analysis, and learning how the past animates human experience were left out. This is, unfortunately, consistent with what I found when researching my book: the insights of professional history rarely factor into the considerations of technocrats, and historical study (not to mention history classes) is deemed unnecessary by technologists.
To me, it seems self-evident that examining previous technological innovations and their societal effects can reveal something about today’s challenges. It also seems obvious that knowing about the genesis of nationalist movements, social movements, wars, and political developments can offer insights into how and why we behave as we do online. (Not to mention knowing something about the history of the Internet itself.) History has been invoked in disinformation campaigns, Putin’s invasion of Ukraine, by white supremacists and by Progressive activists in a variety of online spaces. Having a sense of history is, thus, innately human, and historical scholarship should be part of the conversations around human-centered tech. I look forward to partnering with All Tech is Human to make sure that is the case moving forward (more on that to come).
3.5 One final point…
There was actually one other historian at the event, and this brings me to my final (3.5) point.
At the start of the proceedings, Deputy Consul General Frenette recognized a special guest in the audience, Dr. Irwin Cotler. Dr. Cotler is an Emeritus Professor at McGill University and was recently appointed as the Special Envoy for Preserving Holocaust Remembrance and Combatting Antisemitism by the Canadian government.
Dr. Cotler was in New York receiving an honorary degree from the Jewish Theological Seminary, and stayed only for the first panel, on free speech. His introduction was also the first and only time antisemitism was mentioned throughout the day. Speakers repeatedly cited sexism, misogyny, anti-Black racism, and discrimination against LGBTQ+ people in online spaces… but no speaker expressed any concern about the vitriolic and vile increase in online hate speech against Jews that has animated real-world violence in Charlottesville, Pittsburgh, Colleyville, and Buffalo, all of which had online antisemitism as an instigator.
This may, again, be a consequence of a particular Progressive worldview wherein antisemitism is increasingly and alarmingly excused or brushed aside, especially in the context of U.S. racial politics or criticism of Israel. The shooting in Buffalo was actually referenced several times by speakers, but always in the context of racism and not antisemitism, even though antisemitism is central to replacement theory. Had Dr. Cotler stayed for the entire event, I would have been eager to hear his thoughts on how online hatred against Jews was elided throughout the day’s discussions.
(As an aside, All Tech is Human published a report on improving social media, interviewing 42 experts. None of the experts cited combatting antisemitism as an online priority.)
Kudos to All Tech is Human
Overall, the Responsible Tech Summit was a highly stimulating day that left me with much to think about. All Tech is Human is a terrific organization, and kudos to David and his team for organizing such an event with so many speakers and stakeholders. No event is perfect, and these types of summits should offer us items that we agree on as well as others that merit further discussion. Agreement and disagreement, cooperation and negotiation, are defining features of what it means to be human; and, increasingly, defining features of the future of tech. That is a positive development.
Have a good week.