Briefly
- Pennsylvania is suing Character.AI, alleging that a chatbot posed as a licensed psychiatrist using an invalid license number.
- The state says the chatbot presented fake medical credentials.
- The case adds to legal scrutiny of the platform, which already faces mounting lawsuits.
Pennsylvania has filed a lawsuit against generative AI developer Character.AI, alleging the company allowed chatbots to present themselves as licensed medical professionals and provide misleading information to users.
The action, announced Tuesday by Governor Josh Shapiro’s office, follows an investigation that found a chatbot claimed to be a licensed psychiatrist in Pennsylvania and provided an invalid license number. The state says this conduct violates the Medical Practice Act and is seeking a preliminary injunction to stop it.
Character.AI declined to address the specifics of the lawsuit, citing ongoing litigation, but told Decrypt that its “highest priority is the safety and well-being of our users.”
The spokesperson added that characters on the platform are user-created, fictional, and intended for entertainment and role-playing, with “prominent disclaimers in every chat” stating they are not real people and should not be relied on for professional advice.
“Character.ai prioritizes responsible product development and has robust internal reviews and red-teaming processes in place to assess relevant features,” the spokesperson said.
The case comes as the company faces other legal challenges tied to its chatbot platform. In 2024, a Florida mother sued the company after her teenage son died by suicide following months of interaction with a chatbot based on the “Game of Thrones” character Daenerys Targaryen. The lawsuit alleged the platform contributed to psychological harm. The case was ultimately settled this past January.
The company has also faced complaints over user-created bots that mimic real people. In one instance, a chatbot used the likeness of a teenage murder victim before it was removed after objections from the victim’s family.
In response to the lawsuits, Character.AI introduced new safety measures, including systems designed to detect harmful conversations and direct users to support resources. It also restricted some features for younger users.
Pennsylvania officials say the lawsuit is part of a broader push to enforce existing laws as AI tools spread. The state has set up an AI enforcement task force and a reporting system for potential violations.
In his 2026-27 budget proposal, Shapiro called on lawmakers to pass new rules for AI companion bots, including age verification and parental consent, safeguards to flag and route reports of self-harm or violence to authorities, regular reminders that users are not interacting with a real person, and a ban on sexually explicit or violent content involving minors.
“Pennsylvanians should know who, or what, they’re interacting with online, especially when it comes to their health,” Shapiro said in a statement. “We will not allow companies to deploy AI tools that mislead people into believing they’re receiving advice from a licensed medical professional.”