The Danish philosopher Søren Kierkegaard wrote about a fire that broke out backstage in a theater: “The clown came out to warn the public; they thought it was a joke and applauded. He repeated it; the acclaim was even greater. I think that's just how the world will come to an end: to general applause from wits who believe it's a joke.” In our time, deepfake audio calls prompt people to wire their life savings to thieves, change their vote, or pay off sextortionists. One of the worst aspects of AI deepfake technology is that it can put actual authorities in the position of the frantic clown.

Denmark has had enough. The Danish culture minister, Jakob Engel-Schmidt, said: “Human beings can be run through the digital copy machine and be misused for all sorts of purposes and I’m not willing to accept that.” Danish legislators are now supporting a measure to grant every citizen a right to control uses of their image, likeness, and voice, similar to “right of publicity” laws in many U.S. states that give Americans property rights over commercial uses of their identities. Under a proposal expected to pass Parliament soon, Danes will gain sweeping legal control over any digital recreation. This matters for Americans, because European law often sets standards for the global internet that shape the policies of U.S. tech companies.

This Danish proposal, at first glance, might seem like overdue privacy armor against criminals, stalkers, propagandists, and hostile intelligence services. If Denmark passes this “right to your likeness,” as it appears poised to do, Danes will be able to demand takedowns and seek compensation. Platforms could face penalties for failing to comply.

But there’s a catch – a threat to free speech if Europeans and Americans are not careful in how such laws are drafted and enforced. The Danish legislation does include carve-outs for “satire” and “parody,” meant to preserve comedy, creative expression, and political commentary. That is a good step.
But these categories don’t explicitly protect other forms of speech. Such laws could easily be used to punish fair uses of AI, from commentary and criticism to historical fiction, docudramas, and much more. If the parameters of an anti-deepfake law are too narrow, risk-averse platforms and creators will pull back. Algorithms will over-filter, even with exemptions. Studios and satirists will second-guess viral impressions, political cartoons, and docudramas depicting real people. Defamation law already chills speech. A sweeping likeness-ownership regime could freeze it solid.

When this issue came up in the U.S. Congress last year, the Motion Picture Association and civil liberties groups met with Members of Congress to craft a balanced approach. This approach, one with growing bipartisan support, would protect people from outrageous AI abuses – such as having one’s image and voice used for false endorsements, fraud, or revenge porn – while fully protecting a wide range of AI uses in creative commentary, art, journalism, documentary work, and political speech.

No less important, Americans are learning that the best anti-AI filters are the ones we install in our brains. Facebook is a great instructor, exposing us to one ridiculous scenario after another. Users are learning to ignore home security footage of rabbits gleefully jumping on backyard trampolines, or wolves and their cat friends ringing doorbells. As we get deeper into this age, we’re learning to relax our fingers and not share the ridiculous, the impossible, and the unlikely. AI challenges our sense of reality. But it is also strengthening our patience and skepticism.