“The parallel [to nuclear weapons] is striking. Can you imagine the creators of nuclear weapons deciding they wanted to open source what they had done?”
On Tuesday, March 28th, the Harvard Program on Science, Technology, and Society welcomed former Mayor of New York City Bill de Blasio for a discussion titled “AI for Cities or Cities for AI: Who Should Decide?”, exploring the rising dangers of artificial intelligence development. Warning listeners of society’s lack of agency in AI development, de Blasio pointed to the complexity of AI as a topic, to more immediate issues dominating political discourse, and to a general “feeling of inevitability” around AI.
As new technologies are created and improved, AI companies and the federal government are failing to keep regulation apace, and citizens are being excluded from public input. De Blasio referenced the war in Ukraine as an example of an issue that will always be prioritized over AI, despite AI’s imminent dangers.
He also explained that decisions regarding AI safety and regulation prioritize economic efficiency over the jobs that efficiency eliminates. “Along come these, I think well-meaning technocrats, and they present an idea that inherently is meant to put people out of work. And it was not a part of the discussion.” Although, de Blasio argued, AI can (and does) contribute to increased efficiency, it does so at the expense of putting millions of people out of work. “It is interesting when cost can be seen in terms of currency but not in terms of humanity,” he added.
To illustrate his point about the exclusion of public discourse and input, de Blasio told the audience, “If you feel that you have been consulted by your government [regarding AI regulation], raise your hands.” Not a single audience member did. De Blasio made a point of differentiating between the federal and state governments, saying that, as mayor of New York, even he was never consulted regarding open-sourcing, safety, or regulation.
De Blasio then moved on from the loss of jobs to discuss more imminent dangers. The dangers of artificial intelligence extend far beyond asking ChatGPT for help on an essay or problem set. He called the lack of awareness and regulation around AI a “lack of critical thinking” and a cause of “blind faith.”
De Blasio went on to summarize Eliezer Yudkowsky’s paper “AGI Ruin: A List of Lethalities,” which describes the dangers of AI, touching on everything from deepfakes to criminal activity. He stressed that AI, in his mind as dangerous as nuclear weapons, is being open-sourced and released freely to the public. “If you were working on something and you feared it might kill us all, one would argue you might stop. Or change your approach. But I certainly don’t think that you would take what you found and open source it,” de Blasio said.
De Blasio dedicated the second half of his talk to how the Harvard community can influence AI regulation. According to de Blasio, Harvard students and faculty have a greater impact on public issues than the average American citizen, and he used this as a call to action.
So, what can we do? More than anything, de Blasio emphasized a push for a “democratic methodology” for addressing AI. That is, the best thing we can do as Harvard students is to use our indubitably powerful voices to press our governments on the importance of democratic processes for AI development.
De Blasio told the audience that we can do more than offer “passive acceptance.” At the end of his talk, he asked us once again to raise our hands if we promised to become involved in the movement to democratize AI legislation. Everyone in the crowd raised their hands. If the perils of AI are truly as terrifying as de Blasio made them out to be, it would be in our best interest to, at the very least, become educated about what AI can do and the dangers it can pose to our safety.
Abril Rodriguez Diaz ’26 (abrilrodriguezdiaz@college.harvard.edu) is now scared to jokingly type insults to ChatGPT.