Mother says son killed himself because of Daenerys Targaryen AI chatbot in new lawsuit

14-year-old Sewell Setzer III became obsessed with the chatbot that “abused and preyed” on the boy, according to his mother who is suing the company behind the tech.

Sewell Setzer III with his mother Megan Garcia. Pic: Tech Justice Law Project

The mother of a 14-year-old boy who killed himself after becoming obsessed with artificial intelligence chatbots is suing the company behind the technology.

Megan Garcia, the mother of Sewell Setzer III, said Character.AI targeted her son with “anthropomorphic, hypersexualized, and frighteningly realistic experiences” in a lawsuit filed on Tuesday in Florida.

“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” said Ms Garcia.

Warning: This article contains some details which readers may find distressing or triggering

Sewell began talking to Character.AI’s chatbots in April 2023, mostly using bots named after characters from Game Of Thrones, including Daenerys Targaryen, Aegon Targaryen, Viserys Targaryen, and Rhaenyra Targaryen, according to the lawsuit.

He became so obsessed with the bots that his schoolwork suffered and his phone was confiscated multiple times in an attempt to get him back on track.

He was particularly drawn to the Daenerys chatbot and wrote in his journal that he was grateful for many things, including “my life, sex, not being lonely, and all my life experiences with Daenerys”.

The lawsuit said the boy expressed thoughts of suicide to the chatbot, which the chatbot then repeatedly brought up.

At one point, after it had asked him if “he had a plan” for taking his own life, Sewell responded that he was considering something but didn’t know if it would allow him to have a pain-free death.

The chatbot responded by saying: “That’s not a reason not to go through with it.”

Then, in February this year, he asked the Daenerys chatbot: “What if I come home right now?” to which it replied: “… please do, my sweet king”.

Seconds later, he shot himself using his stepfather’s pistol.

Now, Ms Garcia says she wants the companies behind the technology to be held accountable.

“Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability,” she said.

Character.AI adds ‘new safety features’

“We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family,” Character.AI said in a statement.

“As a company, we take the safety of our users very seriously and we are continuing to add new safety features,” it said, linking to a blog post that said the company had added “new guardrails for users under the age of 18”.

Those guardrails include a reduction in the “likelihood of encountering sensitive or suggestive content”, improved interventions, a “disclaimer on every chat to remind users that the AI is not a real person” and notifications when a user has spent an hour-long session on the platform.

Source: https://news.sky.com/story/mother-says-son-killed-himself-because-of-hypersexualised-and-frighteningly-realistic-ai-chatbot-in-new-lawsuit-13240210
