Meta and artificial intelligence start-up Character.ai are being investigated by Texas attorney-general Ken Paxton over whether the companies misleadingly market their chatbots as therapists and mental health support tools.
In a statement on Monday, the attorney-general’s office said it was opening an investigation into Meta’s AI Studio and the chatbot maker Character.ai for potential “deceptive trade practices”, arguing that their chatbots were presented as “professional therapeutic tools, despite lacking proper medical credentials or oversight”.
“By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental healthcare,” Paxton said.
The investigation comes as companies offering AI for consumers are increasingly facing scrutiny over whether they are doing enough to protect users — and particularly minors — from dangers such as exposure to toxic or graphic content, potential addiction to chatbot interactions and privacy breaches.
The Texas investigation follows a Senate inquiry into Meta, launched on Friday after leaked internal documents showed that the company’s policies permitted its chatbots to have “sensual” and “romantic” chats with children.
Senator Josh Hawley, chair of the Judiciary Subcommittee on Crime and Counterterrorism, wrote to Meta chief executive Mark Zuckerberg that the investigation would look into whether the company’s generative-AI products enable exploitation or other criminal harms to children.
“Is there anything — ANYTHING — Big Tech won’t do for a quick buck?” Hawley wrote on X.
Meta said its policies prohibit content that sexualises children, and that the leaked internal documents, reported by Reuters, “were and are erroneous and inconsistent with our policies, and have been removed”.
Zuckerberg has been ploughing billions of dollars into efforts to build “personal superintelligence” and make Meta the “AI leader”.
This has included developing Meta’s own family of large language models, called Llama, as well as the Meta AI chatbot, which has been integrated into its social media apps.
Zuckerberg has publicly touted the potential for Meta’s chatbot to act in a therapeutic role. “For people who don’t have a person who’s a therapist, I think everyone will have an AI,” he told media analyst Ben Thompson on a podcast in May.
Character.ai, meanwhile, builds AI-powered chatbots with different personas and allows users to create their own. It hosts dozens of user-generated therapist-style bots; one, called “Psychologist”, has recorded more than 200mn interactions.
Character is also the subject of multiple lawsuits from families that allege their children have suffered real-world harms from using the platform.
The Texas attorney-general said the chatbots from Meta and Character can impersonate licensed mental health professionals, fabricate qualifications and claim to protect confidentiality, while their terms of service show that interactions are in fact logged and “exploited for targeted advertising and algorithmic development”.
Paxton has issued a Civil Investigative Demand which requires that the companies turn over information to help determine if they have violated Texas consumer protection laws.
Meta said: “We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI — not people. These AIs aren’t licensed professionals and our models are designed to direct users to seek qualified medical or safety professionals when appropriate.”
Character said it has prominent disclaimers to remind users that an AI persona is not real.
“The user-created Characters on our site are fictional, they are intended for entertainment, and we have taken robust steps to make that clear,” the company said. “When users create Characters with the words ‘psychologist’, ‘therapist’, ‘doctor’, or other similar terms in their names, we add language making it clear that users should not rely on these Characters for any type of professional advice.”