Deepfake Dilemma: What Are Lawmakers Doing About AI Rip-offs?
Over the past year, artificial intelligence has become rapidly more impressive, and for many, rapidly more concerning. For some time now, AI has been able to replicate people's personalities and voices. As 2023 wrapped up, lawmakers at the state and federal levels were grappling with how to address potential abuses of the technology. Below, we'll look at a few examples to get a sense of the legal landscape around deepfakes.
It's Not Just Pop Stars Anymore
For much of the year, the technology targeted celebrities, who at least had the leverage to fight back in court. Musicians like Drake and the Weeknd, comedians like Sarah Silverman, and authors like Michael Chabon have all taken generative AI companies to court over the use of their copyrighted material, the generation of imitations of their art, or both.
But it's not just big-name entertainers who are being duplicated by deepfakes. More recently, psychologists Martin Seligman and Esther Perel had AI versions of themselves created without their consent. These AI replicas were built using publicly available materials like books and podcasts. The creators of AI Seligman and Perel claim their intentions were good, aiming to help people struggling with mental health. The human psychologists aren't entirely unsympathetic to the mission, either; Seligman sees his AI counterpart as a way to extend his legacy and make his work accessible to more people.
Ethical and Legal Concerns
But even if the creators of the imitation shrinks have good intentions, the fact remains that the original humans were neither asked about nor even told of the use of their likenesses to create the psychbots. This raises questions about copyright, privacy, and the potential misuse of personal information, such as for scams or spreading misinformation. Deepfakes have even been introduced in courtrooms as false evidence.
Matters become even more complicated when the creators and their subjects are not based in the same country. For example, the creators of the AI version of Seligman, an American, were based in Beijing and Wuhan, China. The international reach of AI-generated likenesses further complicates data privacy risks, especially in China, where an authoritarian government has access to vast amounts of private information.
State Legislators Spring to Action
As the year drew to a close, it became clear that AI-generated digital replicas bring a wave of ethical and legal complexities that need to be addressed. So what have U.S. lawmakers done about it?
Individual states have already started passing their own laws against unauthorized fakes. For example, a couple of months ago, New York Governor Kathy Hochul signed into law a bill making it illegal in the state to distribute deepfakes without consent. Offenders can be sued by their victims and could face up to a year in jail along with a $1,000 fine. The bill's sponsor, state Senator Michelle Hinchey, emphasized the need for updated laws to combat digital violation, especially as deepfake technology has been used predominantly against women. The legislation aims to close gaps in addressing the non-consensual sharing of fake intimate images online, complementing earlier laws against revenge porn. The push for the bill came after reports showed a significant increase in non-consensual deepfake videos. California is currently cracking down with its own laws on unauthorized AI-generated images.
The Feds Follow Suit
And what does the federal government have to say about AI imitations? "NO FAKES." Literally. This past October, a group of senators (Chris Coons, Marsha Blackburn, Amy Klobuchar, and Thom Tillis) introduced the Nurture Originals, Foster Art, and Keep Entertainment Safe Act (Congress likes forced acronyms as much as the people behind the chatbots). Still in the early stages of the legislative process, the bipartisan bill aims to protect individuals' voices and visual likenesses from unauthorized deepfakes and other synthetic media.
The bill's proponents aim to make it illegal to create or use a deepfake of someone's voice or likeness without their consent, with limited exceptions (such as for parody, satire, and artistic expression). Makers of AI-generated content would have to obtain a license from the person being replicated, allowing that person to profit from the use of their own likeness. The bill seeks to cover a wide range of uses, including deepfakes in advertising, entertainment, journalism, and even political campaigns.
If passed, the NO FAKES Act would apply to both living and deceased individuals. The right to one's image or voice would extend beyond death, lasting for 70 years after the individual's passing. The law would also provide legal remedies for violations of its provisions, including the right to sue anyone who uses one's likeness without permission. Those whose rights are violated would be able to recover damages as well as obtain injunctive relief to stop the further spread of the deepfake.
Proponents of the NO FAKES Act argue that it is necessary to protect people's privacy and prevent the spread of misinformation. They also highlight the potential for deepfakes to be used for malicious purposes, such as defamation or fraud. However, some critics argue that the law could stifle creativity and free speech. They also worry that it could be difficult to enforce, given the rapid advancements in deepfake technology.
Although the future of the NO FAKES Act is uncertain, it represents a significant step in the debate over how to regulate deepfakes and other synthetic media. Its passage could have a major impact on the future of entertainment, journalism, and even politics. Keep in mind that the examples above cover only legislative action in the United States, while the misuse of AI-generated likenesses is a problem that crosses borders.
Related Resources:
- Celebrated Writers File Copyright Lawsuit Against AI (FindLaw's Law and Daily Life)
- Legal (and Moral) Issues in AI-generated content (FindLaw's Law and Daily Life)
- Generative AI: Biggest Threat to the Music Industry Since Napster? (FindLaw's Practice of Law)