
Mom who sued Character.AI over son’s suicide says the platform’s new teen policy comes ‘too late’

In a step toward making its platform safer for teenage users, Character.AI announced this week that it will ban users under 18 from chatting with its artificial intelligence-powered characters.

For Megan Garcia, the Florida mother who sued the company last year over the suicide of her 14-year-old son, Sewell Setzer, the move comes “about three years too late.”

“Sewell’s gone; I can’t get him back,” she said in an interview Thursday following Character.AI’s announcement. “It’s unfair that I have to live the rest of my life without my sweet, sweet son. I think he was collateral damage.”

Founded in 2021, the California-based chatbot startup offers what it describes as “personalized AI.” It provides a selection of premade or user-created AI characters to interact with, each with a distinct personality. Users can also customize their own chatbots.

Garcia’s lawsuit was the first of five filed by families who have sued Character.AI over harm they allege their children suffered. Her case is one of two accusing the company of being liable for a child’s suicide, and all five families have accused its chatbots of engaging in sexually abusive interactions with their children.

In its previous response to Garcia’s lawsuit, Character.AI argued that the speech its chatbots produced is protected by the First Amendment, but a federal judge this year rejected the argument that AI chatbots have free speech rights.

Character.AI has also continued to emphasize its investment in trust and safety resources. Over the past year, it wrote in a blog post Wednesday, it has implemented “the first Parental Insights tool on the AI market, technical protections, filtered Characters, time spent notifications, and more — all designed to let teens be creative with AI in safe ways.”

The company’s bar on minors, which will take effect by Nov. 25, is the biggest measure it has taken to date.

Still, Garcia expressed mixed emotions in response to the news, saying she feels the changes came at the expense of families whose kids count themselves as users.

“I don’t think that they made these changes just because they’re good corporate citizens,” she said. “If they were, they would not have released chatbots to children in the first place, when they first went live with this product.”

Other tech companies, including Meta and OpenAI, have also rolled out more guardrails in recent years as AI developers face intensified scrutiny over chatbots’ ability to mimic human connection. As people increasingly turn to such bots for emotional support and life advice, recent incidents have spotlighted their potential to manipulate vulnerable people by facilitating a false sense of closeness or care.

Many parents and online safety advocates think more can be done. Last month, Garcia and others urged Congress to push for more safeguards around AI chatbots, claiming tech companies designed their products to “hook” children.

On Wednesday, the consumer advocacy organization Public Citizen issued a similar call to action on X, writing that “Congress MUST ban Big Tech from making these AI bots available to kids.”

Garcia said she is waiting to see proof that Character.AI will be able to accurately verify users’ ages. She also wants the company to be more transparent about what it is doing with the data it has collected from minors on the platform.

Character.AI’s privacy policy states that the company may use user data to train its AI models, provide tailored advertising and recruit new users. The company does not sell voice or text data from any of its users, a spokesperson told NBC News.

Also in its announcement Wednesday, the company said it is introducing an in-house age assurance model for use alongside third-party tools, including the online identity verification software Persona.

“If we have any doubts about whether a user is 18+ based on those tools, they’ll go through full age verification via Persona if they want to use the adult experience,” the spokesperson wrote in an email. “Persona is highly regarded in the age assurance industry and companies including LinkedIn, OpenAI, Block, and Etsy use it.”

Matt Bergman, a lawyer and founder of the Social Media Victims Law Center, said he and Garcia are “encouraged” by Character.AI’s move to ban minors from chatting with its bots.

“This never would have happened if Megan had not come forward and taken this brave step and other parents that have followed,” said Bergman, who represents multiple families who have accused Character.AI of enabling harm to their children.

“The devil is in the details, but this does appear to be a step in the right direction, and we would urge other AI companies to follow Character.AI’s example, albeit they were late to the game,” Bergman said. “But at least now they seem much more serious than they were.”

Garcia’s lawsuit, filed last October in U.S. District Court in Orlando, has now reached the discovery phase. She said that there is still “a long road ahead” but that she is prepared to continue fighting in hope that other AI companies will follow suit in implementing more safety measures for children.

“I’m just one mother in Florida who’s up against tech giants. It’s like a David and Goliath situation,” Garcia said. “But I’m not afraid. I think that the love I have for Sewell and me wanting to hold them accountable is what gives me a little bit of bravery in this situation.”

If you or someone you know is in crisis, call or text 988 to reach the Suicide and Crisis Lifeline or chat live at 988lifeline.org. You can also visit SpeakingOfSuicide.com/resources for additional support.
