eClinicalMedicine: The risks of AI-generated health advice

27 March, 2026

Extracts below from an editorial in eClinicalMedicine, a Lancet journal. Full text here: https://www.thelancet.com/journals/eclinm/article/PIIS2589-5370(26)00098-2/fulltext?dgcid=raven_jbs_etoc_email

CITATION: Editorial: The risks of AI-generated health advice

eClinicalMedicine, Volume 93, March 2026. Open access.

As artificial intelligence (AI) becomes more embedded in daily life, growing numbers of people are turning to generative AI chatbots for health advice. OpenAI, the developer of the popular platform ChatGPT, reports that, worldwide, over 230 million people use the tool for health and wellness advice each week. In the context of overstretched health systems, online living, and convenience culture, generative AI can appear to be an accessible alternative to professional care. However, mounting evidence shows that these tools often provide misleading or dangerous information, underscoring the need for research, regulation, and public guidance...

Although AI models can perform well on certain medical benchmarks, such as achieving passing scores on standardised medical examinations, their performance in clinical decision support is currently poor...

In January, OpenAI launched ChatGPT Health, which integrates medical records and wellness-app data to provide personalised guidance. Although framed as a support tool rather than a substitute for clinical care, the tool is designed to guide “how urgently to encourage follow-ups with a clinician” and how to “prioritise safety in moments that matter”. However, an independent safety evaluation found that ChatGPT Health under-triaged 52% of emergency medical scenarios presented. Cases involving asthma exacerbation, diabetic ketoacidosis, and respiratory failure were often misclassified as suitable for delayed evaluation, and suicide-related prompts triggered inconsistent crisis-intervention responses...

With the rapidly expanding capabilities of AI, chatbots and other AI tools have the potential to support patients and assist established health systems. However, the current risks of widely accessible, unregulated, and unsafe AI technology are profound. Safeguarding people will require coordinated global action to ensure that these technologies support, rather than undermine, safe, equitable, and trustworthy health care. While we await the technological development and regulation that will result in safer AI tools, public health messaging (for example, publicity campaigns and the implementation of disclaimers) is urgently needed to raise awareness of dangers and guide safe use...

HIFA profile: Neil Pakenham-Walsh is coordinator of HIFA (Healthcare Information For All), a global health community that brings all stakeholders together around the shared goal of universal access to reliable healthcare information. HIFA has 20,000 members in 180 countries, interacting in four languages and representing all parts of the global evidence ecosystem. HIFA is administered by Global Healthcare Information Network, a UK-based nonprofit in official relations with the World Health Organization. Email: neil@hifa.org

Author: Neil Pakenham-Walsh