California lawmakers tackle potential dangers of AI chatbots after parents raise safety concerns
When her 14-year-old son took his own life after interacting with artificial intelligence chatbots, Megan Garcia turned her grief into action.
Last year, the Florida mom sued Character.AI, a platform where people can create and interact with digital characters that mimic real and fictional people.
Garcia alleged in a federal lawsuit that the platform’s chatbots harmed the mental health of her son Sewell Setzer III and the Menlo Park, Calif., company failed to notify her or offer help when he expressed suicidal thoughts to these virtual characters.
Now Garcia is backing state legislation that aims to safeguard young people from “companion” chatbots she says “are designed to engage vulnerable users in inappropriate romantic and sexual conversations” and “encourage self-harm.”
“Over time, we will need a comprehensive regulatory framework to address all the harms, but right now, I am grateful that California is at the forefront of laying this ground,” Garcia said at a news conference on Tuesday ahead of a hearing in Sacramento to review the bill.
Suicide prevention and crisis counseling resources
If you or someone you know is struggling with suicidal thoughts, seek help from a professional and call 988, the United States’ first nationwide three-digit mental health crisis hotline, which connects callers with trained mental health counselors. Text “HOME” to 741741 in the U.S. and Canada to reach the Crisis Text Line.
As companies move fast to advance chatbots, parents, lawmakers and child advocacy groups are worried there are not enough safeguards in place to protect young people from technology’s potential dangers.
To address the problem, state lawmakers introduced a bill that would require operators of companion chatbot platforms to remind users at least every three hours that the virtual characters aren’t human. Platforms would also need to take other steps such as implementing a protocol for addressing suicidal ideation, suicide or self-harm expressed by users. That includes showing users suicide prevention resources.
Under Senate Bill 243, operators of these platforms would also be required to report the number of times a companion chatbot raised suicidal ideation or actions with a user, among other requirements.
The legislation, which cleared the Senate Judiciary Committee, is just one way state lawmakers are trying to tackle potential risks posed by artificial intelligence as chatbots surge in popularity among young people. More than 20 million people use Character.AI every month and users have created millions of chatbots.
Lawmakers say the bill could become a national model for AI protections and some of the bill’s supporters include children’s advocacy group Common Sense Media and the American Academy of Pediatrics, California.
“Technological innovation is crucial, but our children cannot be used as guinea pigs to test the safety of the products. The stakes are high,” said Sen. Steve Padilla (D-Chula Vista), one of the lawmakers who introduced the bill, at the event attended by Garcia.
But tech industry and business groups including TechNet and the California Chamber of Commerce oppose the legislation, telling lawmakers that it would impose “unnecessary and burdensome requirements on general purpose AI models.” The Electronic Frontier Foundation, a nonprofit digital rights group based in San Francisco, says the legislation raises 1st Amendment issues.
“The government likely has a compelling interest in preventing suicide. But this regulation is not narrowly tailored or precise,” EFF wrote to lawmakers.
Character.AI has also raised 1st Amendment concerns about Garcia’s lawsuit. Its attorneys asked a federal court in January to dismiss the case, stating that a finding in the parents’ favor would violate users’ constitutional right to free speech.
Chelsea Harrison, a spokeswoman for Character.AI, said in an email the company takes user safety seriously and its goal is to provide “a space that is engaging and safe.”
“We are always working toward achieving that balance, as are many companies using AI across the industry. We welcome working with regulators and lawmakers as they begin to consider legislation for this emerging space,” she said in a statement.
She cited new safety features, including a tool that allows parents to see how much time their teens are spending on the platform. The company also cited its efforts to moderate potentially harmful content and direct certain users to the National Suicide and Crisis Lifeline.
Social media companies including Snap and Facebook’s parent company Meta have also released AI chatbots within their apps to compete with OpenAI’s ChatGPT, which people use to generate text and images. While some users have used ChatGPT to get advice or complete work, some have also turned to these chatbots to play the role of a virtual boyfriend or friend.
Lawmakers are also grappling with how to define “companion chatbot.” Certain apps such as Replika and Kindroid market their services as AI companions or digital friends. The bill doesn’t apply to chatbots designed for customer service.
Padilla said during the press conference that the legislation focuses on product design that is “inherently dangerous” and is meant to protect minors.