‘Deepfake Doctors: How AI Spreads Medical Disinformation’ (3)

5 October, 2025

Further to my message just now, I note the title of the article says 'How AI spreads medical disinformation'. This implies that deepfake technology is the agent of disinformation when it is in fact the tool used by a human agent. A more accurate (but less catchy) title would be 'How AI can be used to spread medical disinformation'.

This raises questions about the more common form of AI: chatbots such as ChatGPT.

To what extent do current chatbots generate medical misinformation, and what are the consequences?

Is it likely, or even conceivable, that chatbots could be created that espouse and generate medical disinformation and conspiracy theories? How many people would believe and 'listen' to such chatbots in preference to more 'mainstream' chatbots such as ChatGPT?

Best wishes, Neil

HIFA profile: Neil Pakenham-Walsh is coordinator of HIFA (Healthcare Information For All), a global health community that brings all stakeholders together around the shared goal of universal access to reliable healthcare information. HIFA has 20,000 members in 180 countries, interacting in four languages and representing all parts of the global evidence ecosystem. HIFA is administered by Global Healthcare Information Network, a UK-based nonprofit in official relations with the World Health Organization. Email: neil@hifa.org