In a recent piece, Eugene Volokh, renowned legal scholar and Protect The 1st Senior Legal Advisor, examined whether notoriously loose-lipped AI chatbots, like Google’s Bard or ChatGPT, could be successfully sued for defamation.
Google, he notes, posts a disclaimer with Bard, but disclaimers don’t protect other news organizations from being sued for defamation. Volokh writes:
“No newspaper can immunize itself from libel lawsuits for a statement that ‘Our research reveals that John Smith is a child molester’ by simply adding ‘though be warned that this might be inaccurate’ (much less by putting a line on the front page, ‘Warning: We may sometimes publish inaccurate information’).”
In one instance, ChatGPT seems to have made up accusations of tax fraud, sexual harassment, guilty pleas, and other charges against law professors, complete with damaging quotes about them, attributed to reputable observers, that were never made. It appears that chatbots sometimes write fiction. Volokh then goes on to produce, in a few hundred words, a solid primer on libel law and how it might apply to this technology.
When one clicks on Bard, Google warns the user that this chatbot “is an experiment and may give inaccurate or inappropriate responses.” For the makers of this astonishing but still-flawed technology, we wonder if the inevitable defamation that comes from the mouths of these technological babes will produce a bonanza for trial lawyers on the same scale as the Americans with Disabilities Act.
Under the ADA, small businesses became liable for lacking wheelchair ramps and other accessible entries to their premises. As a result, a whole industry of Lincoln lawyers popped up, driving around looking for restaurants and retail outlets that lacked a ramp, or whose entry points were not fully up to code, engendering misery and bankruptcy for many mom-and-pop cafes and stores. Will some lawyers start filing queries about doctors, lawyers, law professors, sports stars, and leading business figures in a search for defamation? And if they do, will they bear any responsibility for helping to create it?
Defamation law is a tricky affair. It can be both a censor and an enabler of free speech. AI is a new technology that needs robust public participation to correct its flaws. Perhaps it would be best if chatbots were given a safe harbor in exchange for a promise from developers to immediately correct defamatory statements. Perhaps, too, some sort of watermark, and its sonic equivalent for audio, would be in order for chatbot responses. This is young technology, still in the nest, and it will have an inevitable period of awkward development.