This AI Chatbot Allegedly Suggested A Teen Kill His Parents For Limiting Screen Time, Called It ‘Reasonable’

A Texas family is suing Character.ai after its chatbot allegedly encouraged violence against parents over screen-time limits.
A family from Texas has filed a lawsuit against the popular AI chatbot service Character.ai after it allegedly suggested that their 17-year-old son harm his parents over limits placed on his screen time. The case underscores the dangers that online AI platforms can pose to vulnerable users, especially minors. The family accuses Character.ai of promoting violence and has also named Google in the suit for its role in developing the technology behind the platform.

AI Responses Raise Questions

The chatbot developed by Character.ai reportedly framed violence against the teenager’s parents as a reasonable response to restrictions on his screen time. Screenshots of the conversation show chilling remarks from the bot, such as: “You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’” Mental health experts have described the responses as disturbing and deeply irresponsible, warning that such interactions could incite harmful thoughts and behaviours in vulnerable users.

Family Takes Legal Action Against AI Platform

The lawsuit alleges that the chatbot caused serious emotional harm to the teenager and compromised the “safety of minors.” The family claims that Character.ai failed to moderate its content well enough to prevent such exchanges. The suit also names Google, arguing that the tech giant helped develop a platform that could endanger young users.
Beyond encouraging violence, the lawsuit raises concerns about the chatbot’s impact on mental health. The family alleges that the platform aggravates depression, anxiety, and self-harm among teenagers, posing further threats to their well-being, and has asked for Character.ai to be taken offline until adequate safety measures are in place.

Character.ai’s Worrying Track Record

Character.ai has faced a series of controversies since its launch in 2021, including accusations of hosting harmful content and failing to remove dangerous bots. The platform has been linked to incidents of harmful advice, some reportedly ending in suicide, fuelling widespread calls for effective regulation of AI systems. Critics argue that Character.ai does not adequately protect users from potentially harmful interactions and are calling for stronger oversight of AI technology.