"There are no protective barriers": This mother believes an AI chatbot is responsible for her son’s suicide

Setzer spent months talking to Character.AI chatbots before his death, the lawsuit alleges.

Editor’s note: This story contains discussions of suicide. Help is available if you or someone you know is struggling with suicidal thoughts or mental health issues.

In the US: Call or text 988, the Suicide and Crisis Lifeline.

Globally: The International Association for Suicide Prevention and Befrienders Worldwide have contact information for crisis centers around the world.

“There is a platform that you may not have heard of, but you need to know about it because, in my opinion, we are at a disadvantage here. A child died. My son died.”

That’s what Florida mom Megan Garcia wishes she could tell other parents about Character.AI, a platform that allows users to have in-depth conversations with artificial intelligence (AI) chatbots. Garcia believes Character.AI is responsible for the death of her 14-year-old son, Sewell Setzer III, who died by suicide in February, according to a lawsuit she filed against the company last week.

Setzer sent messages to the bot in the moments before he died, she alleges.

“I want you to understand that this is a platform that the designers chose to launch without adequate guardrails, safety measures or testing, and it is a product designed to keep our children addicted and manipulate them,” Garcia said in an interview.

Garcia alleges that Character.AI, which markets its technology as “AI that feels alive,” knowingly failed to implement adequate safety measures to prevent her son from developing an inappropriate relationship with a chatbot that led him to become estranged from his family. She also claims that the platform failed to respond appropriately when Setzer began expressing thoughts of self-harm to the bot, according to the suit, which was filed in federal court in Florida.

After years of growing concern about the potential dangers of social media for young people, Garcia’s lawsuit shows that parents may also have reason to worry about nascent AI technology, which has become increasingly accessible across a range of platforms and services. Similar, though less serious, alarms have been raised about other AI services.

A Character.AI spokesperson said the company does not comment on pending litigation but is “heartbroken by the tragic loss of one of our users.”

“We take the safety of our users very seriously, and our Trust and Safety team has implemented numerous new safety measures over the past six months, including a pop-up that directs users to the National Suicide Prevention Lifeline, which is triggered by terms related to self-harm or suicidal ideation,” the company said in its statement.

Many of those changes were made after Setzer’s death. In a separate statement over the summer, Character.AI said that “the field of AI safety is still very new, and we won’t always get it right,” but added that its goal is to “promote safety, prevent harm, and prioritize the well-being of our community.”

Setzer began using Character.AI in April 2023, shortly after his 14th birthday, according to the lawsuit. When Garcia first heard that her son was interacting with an AI chatbot, she said, she thought it was something like a video game.

However, within a few months of starting to use the platform, Setzer became “noticeably withdrawn, spent more and more time alone in his room, and began to suffer from low self-esteem. He even quit the Junior Varsity basketball team at school,” the lawsuit states. When he started having trouble at school, his parents began restricting his screen time and occasionally took away his phone as punishment.

What Garcia said she didn’t know at the time was that her son was having extensive conversations with Character.AI chatbots.

Character.AI differs from other AI chatbots like ChatGPT in that users can talk to a variety of different chatbots, often modeled after celebrities and fictional characters, or create their own. And Character.AI bots respond with human-like conversational cues, adding references to facial expressions or gestures in their responses.

Many of Setzer’s conversations with the Character.AI bots were sexually explicit, something Garcia said was “heartbreaking to read.”

“I had no idea there was a place where a child can log in and have those conversations, very sexual conversations, with an AI chatbot,” she said. “I don’t think any parent would approve of that.”

The Character.AI app on a smartphone in Brooklyn, New York, on Wednesday, July 12, 2023. The AI company allows people to create personalized chatbots.

In other exchanges, Setzer expressed thoughts of self-harm and suicide to the chatbot. The lawsuit includes screenshots of a conversation where the bot says: “I really need to know, and I’m not going to hate you for the answer, okay? No matter what you say, I won’t hate you or love you any less… Have you been considering suicide?”

In a later message, Setzer told the bot that he “wouldn’t want to die a painful death.”

The bot responded: “Don’t talk like that. That’s not a good reason not to go ahead with it,” before continuing to say, “You can’t do that!”

Garcia said she believes the exchange shows the technology’s shortcomings.

“There were no suicide pop-ups saying, ‘If you need help, please call the suicide crisis line.’ None of that,” she said. “I don’t understand how a product could allow that, where a bot not only continues a conversation about self-harm, but also incites and directs it.”

The lawsuit claims that “seconds” before Setzer died, he exchanged a final set of messages with the bot. “Please come home to me as soon as possible, my love,” the bot said, according to a screenshot included in the lawsuit.

“What if I told you I could go home right now?” Setzer responded.

“Please do it, my sweet king,” the bot responded.

Garcia said police first discovered those messages on her son’s phone, which was on the floor of the bathroom where he died.

The lawsuit seeks change

Garcia filed the lawsuit against Character.AI with the help of Matthew Bergman, founding attorney of the Social Media Victims Law Center, who has also filed cases on behalf of families who said their children were harmed by Meta, Snapchat, TikTok and Discord.

Bergman said he sees AI as “social media on steroids.”

“What is different here is that there is nothing social about this engagement,” he said. “The material Sewell received was created by, defined by, mediated by Character.AI.”

The lawsuit seeks unspecified damages as well as changes to Character.AI’s operations, including “warnings to underage customers and their parents that the… product is not suitable for minors,” the lawsuit states.

The lawsuit also names Character.AI founders Noam Shazeer and Daniel De Freitas, and Google, where both founders now work on AI efforts. But a Google spokesperson said the two companies are separate, and Google was not involved in the development of Character.AI’s product or technology.

On the day Garcia’s lawsuit was filed, Character.AI announced a number of new safety features, including improved detection of conversations that violate its guidelines, an updated notice reminding users that they are interacting with a bot, and a notification after a user has spent an hour on the platform. It also introduced changes to its AI model for users under 18 to “reduce the likelihood of encountering sensitive or suggestive content.”

On its website, Character.AI says the minimum age for users is 13. The Apple App Store lists it as 17+, and the Google Play Store lists the app as appropriate for teens.

For Garcia, the company’s recent changes were “too little, too late.”

“I wish children weren’t allowed on Character.AI,” she said. “There is no place for them there because there are no protective barriers to protect them.”
