What Harvard Law professor Cass Sunstein, one of America’s foremost legal scholars, had to say when he spoke to a Queen’s Law audience about AI and the right to freedom of speech may surprise you.
Questions about the emerging relationship between artificial intelligence (AI) and freedom of speech are top of mind nowadays for a wide swath of society – and in particular for lawyers, judges, and legal scholars. And so that relationship was a timely topic for the second annual Marcus-Matalon Lecture on U.S. Law.
On January 8, Professor Cass Sunstein from Harvard Law School explored the question of whether AI has the right to freedom of speech in U.S. law (and in Canada, where citizens enjoy similar constitutional protections). Sunstein, a leading American legal scholar, served in President Barack Obama’s administration as Administrator of the White House Office of Information and Regulatory Affairs, as a member of the Review Group on Intelligence and Communications Technologies, and as a member of the Pentagon’s Defense Innovation Board.
He began his insight-filled one-hour talk to his Queen’s Law audience with tongue firmly in cheek, assuring his online-only listeners that he is, in fact, “a human being who’s speaking and not an AI construct.”
On a more serious note, he observed that he regards it as a positive development that, in the legal community at least, the discussion of AI “has been met with receptivity rather than despair.”
Sunstein proceeded to frame a convincing argument for his contention that utterances generated by AI do not enjoy the protections afforded by the First Amendment of the U.S. Constitution. Regardless of whether its source is a person or AI, Sunstein said, we should “scream it from the rafters” that “unprotected speech is . . . unprotected speech, and that self-evident, but sometimes elusive proposition should dispose of a wide range of actual and imaginable questions.”
However, he said that while in his view that is how things currently stand, AI technology and its best-known workaday embodiment, ChatGPT, are evolving at warp speed, and the law may have to follow suit. That won’t be easy: any discussion of the future of AI will be both complex and nuanced.
ChatGPT – the GPT acronym short for “generative pre-trained transformer” – is an AI-driven natural language processing tool that allows people to have human-like conversations and other interactions with “chatbots.” The language model can answer questions and assist with tasks such as composing emails, essays, and code, and even conducting online research. Not surprisingly, the legal profession has taken note and is approaching the use of AI guardedly. (Here in Canada, the Federal Court recently posted a statement on its website setting out strict guidelines for the use of AI.)
Crafting laws and other restrictions governing the use of AI promises to be fraught, for as Sunstein cautioned, “Even if AI . . . lacks First Amendment rights, restrictions on the speech of AI might violate the rights of human beings.”
As an example of the sort of perils to which he was alluding, Sunstein pointed to recent events in China. In April 2023, the bureaucratic agency that oversees AI use in that country announced draft regulations to govern the uses of generative AI. Among the restrictions was one that would forbid use of the technology to criticize the leaders of China’s Communist Party. While Sunstein said that “nothing of this kind seems imaginable” in the U.S., Canada, or Europe, he pointed out that such strictures spotlight the kind of freedom-of-speech issues that can – and may – arise here.
He noted that a long line of Supreme Court of the United States case law distinguishes between speech restrictions that are “content-based” and those that are “viewpoint-based.” The pivotal question to be answered in each situation is: are we restricting all speech on a particular topic, without regard to the specific position being stated? Or are we restricting just one side or opinion on that topic? The former is a content-based restriction that remains viewpoint-neutral, while the latter is both content- and viewpoint-based.
Going forward, Sunstein said, it remains to be seen how AI will evolve and how the courts and the legal profession will adapt to questions about the technology and freedom of speech. About all that seems certain is that this evolution will put pressure on the conclusion that AI has no right to freedom of speech. “We might be able to imagine a new kind of AI . . . that might put pressure on this conclusion, but we’re not there yet,” said Sunstein.
The Marcus-Matalon Lecture on U.S. Law was launched last year thanks to the generosity of Stephen Marcus, Law’77, of the Washington, DC-based Marcus Firm PLLC, and his wife Renee Matalon, a Harvard Law’81 graduate who serves as a mediator with the District of Columbia Superior Court's Multi-Door Dispute Resolution Program. The five-year lecture series the couple have funded aims to provide insight into important U.S. legal and constitutional issues that are of interest to Canadian law students and to members of the legal profession and that may have implications for the Canadian legal system.
Listen to the audio recording of Professor Cass Sunstein’s 2024 Marcus-Matalon Lecture on U.S. Law on QLAW Pod.
By Ken Cuthbertson, Law’83