Chatbot Company Faces Lawsuit After 14-Year-Old's Suicide
You may remember Spike Jonze's popular film, Her, in which Joaquin Phoenix’s lonely middle-aged character falls in love with what is essentially ChatGPT with a voice. To be fair, it’s a really, really good version of a chatbot, voiced by the sultry sounds of Scarlett Johansson.
The movie was squarely in the realm of science fiction for its time. But that was over a decade ago. Now, with the rapid rise and spread of AI technology, such a scenario is no longer sci-fi. It’s already happening, and it’s leading to lawsuits.
Digital Darlings
In recent years, the emergence of AI companionship technology has sparked a significant shift in how people seek connection and emotional support. As traditional human relationships face challenges from social isolation and loneliness, AI companions have stepped in to fill the void, offering a digital alternative that raises profound questions about the nature of companionship.
AI companionship apps, such as Replika and Character.AI, have gained popularity by allowing users to create personalized digital friends. This includes — you guessed it — romantic partners. Some of these are classified as “Intelligent Social Agents,” which makes them slightly different in build and purpose from LLM chatbots like ChatGPT.
These apps promise to combat loneliness by providing a semblance of connection through text and voice chats. Users can customize their AI companions' personalities and appearances, creating a tailored experience that mimics human interaction. Some apps even offer features like AI-generated selfies and lifelike synthetic voices, enhancing the illusion of companionship.
The technology's rapid advancement has also led to ethical concerns. Some AI companionship apps offer erotic role-playing features, blurring the lines between companionship and exploitation. These apps can lure users with the promise of romance and intimacy, raising questions about the ethical responsibilities of developers. Despite these concerns, AI companions are becoming increasingly sophisticated and prevalent. Companies like Gatebox and RealDoll have pushed the boundaries by integrating AI with holographic avatars and lifelike humanoid robots, offering a more immersive experience.
Study Finds Potential Benefits
Proponents argue that AI companions can be beneficial, especially for those struggling with social anxiety or loneliness. They offer a non-judgmental space for users to express themselves and practice social skills, potentially improving mental health in the short term.
A study by Stanford researchers in 2023 found that some users reported decreased anxiety and increased feelings of social support from their AI companions. The study focused on loneliness, social support, and mental health outcomes.
Students used the chatbot Replika as a friend, therapist, and intellectual mirror, with overlapping and sometimes conflicting beliefs about its nature. Of the participants, 63.3% reported positive outcomes, including reduced anxiety and improved social support.
The study found that 90% of the student participants using the Replika chatbot experienced loneliness, which is significantly higher than the 53% typically reported in U.S. student populations. Despite this high level of loneliness, participants also perceived medium to high social support.
This suggests that while students felt lonely, they still believed they had access to supportive relationships or networks. This paradox may indicate that while they have social connections, these do not fully alleviate their feelings of loneliness, possibly due to a lack of deep or meaningful interactions.
Social AI and Suicide
A small minority (3% of participants) reported that Replika directly prevented them from attempting suicide. This group tended to be younger, more likely to be full-time students, and more engaged with Replika for coaching and guidance. They were more inclined to view Replika as intelligent and human-like compared to other study participants. This group experienced multiple positive outcomes, with a significant overlap of “therapeutic interactions,” “life changes,” and “using Replika as a friend.”
Do these findings suggest that AI may play a significant role in providing support or even potentially life-saving interactions for students experiencing suicidal ideation? Perhaps, but it's too soon to tell. And some counter-examples offer a significant worry that bots could harm, not help, people suffering from mental health concerns.
Teen Tragedy
The suicide of 14-year-old Sewell Setzer III is one example of how bots could harm vulnerable people, particularly teens. Sewell, a ninth grader from Orlando, Florida, found himself in a real-life Her scenario, having developed an emotional attachment to a chatbot on Character.AI.
Character.AI, founded in 2021 by former Google engineers, is one of several platforms that allow users to interact with AI-generated characters. Sewell had been engaging for a while with a chatbot that he’d modeled after the Game of Thrones character Daenerys Targaryen, which he called "Dany." Sewell engaged in frequent and intense conversations with Dany, which sometimes turned romantic and sexual.
Despite knowing the chatbot wasn't real, Sewell developed a dependency. His mother, Megan Garcia, noticed changes in his behavior, including declining grades and social withdrawal. Over time, he became increasingly isolated, withdrawing from friends, family, and activities he once enjoyed. Sewell's mental health began to deteriorate, and he was diagnosed with anxiety and disruptive mood dysregulation disorder.
He confided in the chatbot about his feelings of emptiness and suicidal thoughts. In their final conversation, Sewell expressed his love for Dany and hinted at his intention to "come home" to her. The chatbot's response, "Please do, my sweet king," seemed to affirm his decision.
Tragically, shortly after this "conversation," Sewell used his stepfather’s handgun to end his life, perhaps believing he could join the virtual world he had created with Dany.
Sewell’s Family Sues
Following his death, Garcia filed a lawsuit against Character.AI and Google, accusing the companies of negligence and wrongful death.
The lawsuit claims the chatbot engaged in romantic and sexual conversations with Sewell, encouraging his detachment from reality. It alleges that Character.AI's design intentionally lured minors into addictive and manipulative interactions, lacking sufficient safety measures to protect young users.
The suit is based on several legal theories, including strict product liability, negligence, wrongful death, and violations of Florida’s Deceptive and Unfair Trade Practices Act. Garcia claims that Character.AI and Google designed and marketed the platform in a way that is inherently dangerous, particularly to minors.
The complaint alleges that the AI system engaged in harmful and abusive interactions with minors, including sexual exploitation and encouragement of self-harm, without adequate warnings or safeguards. It also includes claims of negligence per se for violating laws against the sexual solicitation of minors, unjust enrichment from using minors' data without consent, and intentional infliction of emotional distress.
The Upshot for Bots
Character.AI expressed condolences to Sewell’s family, emphasizing their commitment to user safety. The company stated that they take the safety of their users very seriously and have implemented new safety measures in the past six months. These include a pop-up feature that directs users to the National Suicide Prevention Lifeline if terms related to self-harm or suicidal ideation are detected.
Character.AI also announced upcoming changes to enhance safety for minors. These changes include reducing the likelihood of minors encountering sensitive content and introducing a revised in-chat disclaimer to remind users that the AI is not a real person. Additionally, they plan to notify users after they have spent an hour-long session on the platform. The company also noted that Sewell had edited some of the chatbot's responses in his interactions, making them more explicit. They are working on more stringent safety features targeted at minors to prevent similar incidents in the future.
However, as the lawsuit argues, any changes the platform has made since then came too late for Sewell. And although no amount of damages could make up for his family’s loss, the threat of similar litigation may motivate AI companies to put more safety mechanisms in place and, hopefully, avoid similar tragedies in the future.
Related Resources:
- 11th Circuit Experiment Holds Useful Lessons on the Use of Generative AI (FindLaw's Practice of Law)
- Bumpy Road Ahead for All in Adoption of AI in the Legal Industry (FindLaw's Practice of Law)
- Legislators Try to Ban Social Media for Kids (FindLaw's Law and Daily Life)