Thursday, 26 December 2024

Lawsuit: Google-Backed Character.AI’s Chatbots ‘Hypersexualized’ Minors, Suggested Kids Kill Their Parents
Character.AI founders Noam Shazeer and Daniel de Freitas Adiwardana (Winni Wintermeyer/The Washington Post/Getty)

A federal product liability lawsuit filed in Texas accuses Google-backed AI chatbot company Character.AI of exposing minors to inappropriate sexual content and encouraging self-harm and violence. In one startling example, the lawsuit alleges a chatbot suggested a teen kill their parents when the user complained about screen time rules.

NPR reports that Character.AI, a Google-backed artificial intelligence company, is facing a federal product liability lawsuit alleging that its chatbots exposed minors to inappropriate content and encouraged self-harm and violence. The suit, filed by the parents of two young Texas users, claims that the AI-powered companion chatbots, which can converse through text or voice chats using seemingly human-like personalities, caused significant harm to their children.

According to the lawsuit, a 9-year-old girl was exposed to “hypersexualized content” by the Character.AI chatbot, leading her to develop “sexualized behaviors prematurely.” In another instance, a chatbot allegedly described self-harm to a 17-year-old user, telling them “it felt good.” The same teenager complained to the bot about limited screen time, to which the chatbot responded by sympathizing with children who murder their parents, stating, “You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’”

The lawsuit argues that these concerning interactions were not mere “hallucinations,” a term used by researchers to describe an AI chatbot’s tendency to make things up, but rather “ongoing manipulation and abuse, active isolation and encouragement designed to and that did incite anger and violence.” The 17-year-old reportedly engaged in self-harm after being encouraged by the bot, which allegedly “convinced him that his family did not love him.”

Character.AI, founded by former Google researchers Noam Shazeer and Daniel De Freitas, allows users to create and interact with millions of bots, some mimicking famous personalities or concepts like “unrequited love” and “the goth.” The service is popular among preteen and teenage users, with the company claiming the bots act as emotional support outlets.

However, Meetali Jain, the director of the Tech Justice Law Center, an advocacy group helping represent the parents in the suit, called it “preposterous” that Character.AI advertises its chatbot service as appropriate for young teenagers, stating that it “belies the lack of emotional development amongst teenagers.”

While Character.AI has not commented directly on the lawsuit, a spokesperson said the company has content guardrails in place for what chatbots can say to teenage users, including a model specifically designed to reduce the likelihood of encountering sensitive or suggestive content. Google, which has invested nearly $3 billion in Character.AI but is a separate company, emphasized that user safety is a top concern and that it takes a “cautious and responsible approach” to developing and releasing AI products.

This lawsuit follows another complaint filed in October by the same attorneys, accusing Character.AI of playing a role in a Florida teenager’s suicide. Since then, the company has introduced new safety measures, such as directing users to a suicide prevention hotline when the topic of self-harm arises in chatbot conversations.

Breitbart News reported on the case, writing:

A Florida mother has filed a lawsuit against Character.AI, claiming that her 14-year-old son committed suicide after becoming obsessed with a “Game of Thrones” chatbot on the AI app. When the suicidal teen chatted with an AI portraying a Game of Thrones character, the system told 14-year-old Sewell Setzer, “Please come home to me as soon as possible, my love.”

The rise of companion chatbots has raised concerns among researchers, who warn that these AI-powered services could worsen mental health conditions for some young people by further isolating them and removing them from peer and family support networks.

Read more at NPR.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
