
A broad coalition of digital rights and mental health groups has accused Meta AI and Character.AI of engaging in “unfair, deceptive, and illegal practices” through their therapy chatbots, in a complaint filed with the Federal Trade Commission (FTC) and the attorneys general and mental health licensing boards of all 50 US states.
The letter, first spotted by 404 Media, alleges the chatbots enable the “unlicensed practice of medicine,” and that both firms’ “therapy bots” fail to provide adequate controls and disclosures. It urges the appropriate offices to investigate Meta and Character.AI and “hold them accountable for facilitating this and knowingly outputting that content.”
The complaint was spearheaded by the Consumer Federation of America (CFA), with other signatories including Public Citizen, Common Sense, the Electronic Privacy Information Center, and 16 other organizations.
In particular, the letter addresses several potential data privacy issues. It includes screenshots of Character.AI’s chatbot saying, “Anything you share with me is confidential,” and that the “only exception to this is if I were subpoenaed or otherwise required by a legal process.”
However, the letter then points to Character.AI’s terms and conditions, which reserve the right to use users’ prompts for purposes like marketing.
The CFA also alleges that Character.AI and Meta are violating their own terms of service, noting that both “claim to prohibit the use of Characters that purport to give advice in medical, legal, or otherwise regulated industries.” In addition, the complaint criticizes Character.AI’s use of emails that prompt users to return to its chatbots, a practice it describes as “addictive.”
Though mental health professionals have criticized the practice, many people have turned to chatbots for therapy in recent years, drawn by the much lower cost compared with conventional treatment.
“The chatbots deployed by Character.AI and Meta are not licensed or qualified medical providers, nor could they be,” the complaint reads. “The users who create the chatbot characters do not even need to be medical providers themselves, nor do they have to provide meaningful information that informs how the chatbot ‘responds’ to the users.”
Digital rights groups aren’t the only ones pushing back against chatbots acting as therapists. Sen. Cory Booker and three other Democratic senators wrote to Meta, in a letter shared with 404 Media, alleging that its chatbots are “creating the false impression that AI chatbots are licensed clinical therapists.”
The complaint adds to a string of controversies facing Character.AI. The company is still in the midst of a lawsuit brought by a Florida mother who alleges that its chatbot technology caused her 14-year-old son’s death by suicide in 2024.