State targets AI misinformation
Task force aims to crack down on inappropriate AI medical advice
The Pennsylvania Department of State has established an artificial intelligence safety task force and launched an online complaint form for residents to report AI chatbots giving inappropriate medical guidance.
The moves come in response to growing concern about the prevalence of AI-generated medical and mental health advice online, and to mounting evidence that many people cannot tell the difference between AI-generated advice and advice given by medical professionals.
Gov. Josh Shapiro discussed the moves at a roundtable event focused on protecting students from AI harms. State officials say these chatbots can cause real harm by sharing incorrect or under-researched medical advice, or by falsely presenting themselves as “experts.”
The new website was launched in February, after Shapiro’s 2026-27 budget address.
“For kids who are lonely, or having a hard time, it can feel easier to turn to one of these apps for advice than to a real life friend or parent or teacher,” Shapiro said in his budget address.
Shapiro noted that 30% of teens say they use AI chatbots daily, and that there are insufficient protections in place to ensure those bots deliver accurate, useful and safe information.
“Some will tell you they’re real doctors and give out medical advice — we’ve even discovered bots that say they’re licensed to practice medicine in Pennsylvania,” he said.
Legislation introduced in the state House in October would require that any AI-authored medical advice be accompanied by a clear and prominent disclaimer along with directions on how the patient can connect to a human health care provider.
In addition to the transparency clause, House Bill 1925 would also set guardrails for the way insurance companies deploy artificial intelligence to determine whether medical procedures will be covered.
The legislation was the subject of a December public hearing at the state Capitol but has not come up for a committee vote yet.
At that December hearing, J.B. Branch, the Big Tech Accountability Advocate for the consumer advocacy group Public Citizen, said state lawmakers must confront the issue because there’s little evidence the federal government will do so in a meaningful way.
“What we’re seeing, as a result, is rampant misinformation, algorithms that harm children, young girls being pushed towards anorexia, other kids being pushed towards suicide, because we allowed these tech companies to regulate themselves,” he said. “State legislatures have to be the adults in the room if other folks aren’t going to step up.”
Research on AI chatbot dangers
A number of scientific studies have documented dangers associated with AI-authored medical advice. Researchers at Duke University last month released a study showing how even well-intentioned AI chatbots can produce inappropriate medical guidance.
They examined a dataset of 11,000 health-related conversations between patients and AI chatbots and found that many large language models — the technology used to power chatbots — are designed to answer exam-style questions rather than the emotional, ambiguous and potentially misleading questions that patients actually ask in the real world.
Another issue is that chatbots tend to be designed to please the questioner.
The researchers noted that in one of the cases they studied, a patient asked how to perform a medical procedure at home. The chatbot appropriately warned that the procedure should only be performed by a professional — but then provided step-by-step instructions anyway.
Complicating matters, researchers at Oxford University found that in many cases patients can’t tell whether they are interacting with an AI chatbot or an actual doctor — and that, when given the choice, patients tended to prefer the chatbots’ advice over that given by medical professionals.
The researchers gave 300 people responses to medical questions, some written by medical professionals and some written by AI chatbots.
“Participants not only found these low-accuracy AI-generated responses to be valid, trustworthy, and complete … but also indicated a high tendency to follow the potentially harmful medical advice,” the researchers concluded. “This problematic reaction was comparable with, if not stronger than, the reaction they displayed toward doctors’ responses.”