
General LLMs such as ChatGPT or Gemini are not designed or approved as therapeutic applications, but can be quickly personalized and respond in a human-like manner through simple prompts or specific settings. This form of interaction can have a negative impact on young people and people with mental health issues. It is now known that users can form strong emotional bonds with these systems. Nevertheless, AI characters are largely unregulated in the EU and the USA. Unlike clinical or therapeutic chatbots, which are explicitly developed, tested and approved for medical purposes.
“AI characters currently fall through the gaps in existing safety regulations,” explains Mindy Nunez Duffourc, Assistant Professor of Private Law at Maastricht University and co-author of the first publication. “They are often not classified as products and therefore evade safety testing. And even where they are newly regulated as products, there is still a lack of clear standards and effective supervision.”
Background: Digital exchange, real responsibility
In recent months, there have been international reports of cases in which young people have suffered mental health crises following intensive exchanges with AI chatbots. The researchers see an urgent need for action: systems that imitate human behavior must meet clearly defined safety requirements and operate within a reliable legal framework. However, AI characters are currently entering the market without first undergoing regulatory review.
In their second publication in npj Digital Medicine, “If a therapy bot walks like a duck and talks like a duck then it is a medically regulated duck”, the authors draw attention to the growing number of chatbots that give therapy-like advice or even imitate licensed medical professionals – without any approval. They argue that LLMs with such functions should be classified as medical devices, with clear safety standards, transparent system behavior and continuous monitoring.
“AI characters are already part of many people’s everyday lives. These chatbots often give the impression of providing medical or therapeutic advice. We need to ensure that AI-based software is safe. It should support and help, not harm. This requires clear technical, legal and ethical rules,” says Stephen Gilbert, Professor of Medical Device Regulatory Science at the EKFZ for Digital Health at TU Dresden.
Proposed solution: A “guardian angel AI” that pays attention
The research team emphasizes that the transparency requirement of the European AI Act – i.e. the obligation to disclose that communication with an AI is taking place – is not sufficient to protect vulnerable groups. The team calls for mandatory safety and monitoring standards, supplemented by voluntary guidelines that help developers to make their systems safe.
As a concrete measure, the authors suggest equipping future AI applications with a chat storage function and linking them to a “Guardian Angel AI” or “Good Samaritan AI” – an independent, supportive AI instance that monitors the course of the conversation and intervenes if necessary. Such an additional system could recognize early warning signals, alert users to offers of help or warn of risky conversation patterns.
Recommendations for the safe use of AI
In addition to such protective mechanisms, the researchers recommend robust age verification, age-appropriate safety measures and mandatory risk assessments before market entry. They emphasize that LLMs should clearly communicate that they are not approved medical devices in the field of mental health. Chatbots must not act as therapists and should be limited to general, non-medical information. They should also recognize when professional help is needed and refer users to appropriate support services. Simple, freely accessible tests could help to continuously check the safety of chatbots.
“As doctors, we know how strongly human language influences experience and mental health,” says Falk Gerrik Verhees, a psychiatrist at the Carl Gustav Carus University Hospital in Dresden. “AI characters use the same language to simulate trust and closeness – so regulation is essential. We need to ensure that these technologies are safe and protect the mental well-being of users rather than endangering it,” he adds.
“The guardrails we have presented are crucial to ensure that AI applications are actually used safely and in the best interests of people,” says Max Ostermann, researcher in Prof. Gilbert’s Medical Device Regulatory Science team and author of the publication. Gilbert and first author of the publication in npj Digital Medicine.
Note
If you yourself or someone close to you is in crisis, you can find help day and night from the TelefonSeelsorge at 116 123 and online at www.telefonseelsorge.de.
Publications
Mindy Nunez Duffourc. Falk Gerrik Verhees, Stephen Gilbert: AI characters are dangerous without legal guardrails; Nature Human Behavior, 2025.
doi: 10.1038/s41562-025-02375-3. URL: https://www.nature.com/articles/s41562-025-02375-3
Max Ostermann, Oscar Freyer, F. Gerrik Verhees, Jakob Nikolas Kather, Stephen Gilbert: If a therapy bot walks like a duck and talks like a duck then it is a medically regulated duck; npj Digital Medicine, 2025.
doi: 10.1038/s41746-025-02175-z. URL: https://www.nature.com/articles/s41746-025-02175-z
Else Kröner Fresenius Center (EKFZ) for Digital Health
The EKFZ for Digital Health at TU Dresden and the University Hospital Carl Gustav Carus Dresden was founded in September 2019. It is funded by the Else Kröner-Fresenius Foundation with a total of 40 million euros for a period of ten years. The center focuses its research activities on innovative, medical and digital technologies at the direct interface to patients. The aim is to fully exploit the potential of digitalization in medicine in order to sustainably improve healthcare, medical research and clinical practice.
– – – – – –
Further links
👉 https://tu-dresden.de
👉 https://digitalhealth.tu-dresden.de/
Photo: pixabay