Hi Neil,
In response to your question, “Is it likely, or even conceivable, that chatbots could be created that espouse and generate medical disinformation and conspiracy theories?”: such a chatbot already exists in Grok, a product of Elon Musk’s companies. It bears his particular stamp of misinformation and disinformation, as described here: https://www.pbs.org/newshour/amp/politics/why-does-the-ai-powered-chatbo...
Examples of medically related misinformation or disinformation:
“When someone asked Grok what would be altered in its next version, the chatbot replied that xAI would likely "aim to reduce content perceived as overly progressive, like heavy emphasis on social justice topics, to align with a focus on 'truth' as Elon sees it." Later that day, Musk asked X users to post "things that are politically incorrect, but nonetheless factually true" that would be used to train the chatbot.
The requested replies included numerous false statements: second hand smoke exposure isn't real (it is), former first lady Michelle Obama is a man (she isn't), and COVID-19 vaccines caused millions of unexplained deaths (they didn't).”
The explanation:
“Experts told PolitiFact that Grok's training — including how the model is told to respond — and the material it aggregates likely played a role in its spew of hate speech.
"All models are 'aligned' to some set of ideals or preferences," said Jeremy Blackburn, a computing professor at Binghamton University. These types of chatbots are reflective of their creators, he said.
Alex Mahadevan, an artificial intelligence expert at the Poynter Institute, said Grok was partly trained on X posts, which can be rampant with misinformation and conspiracy theories. (Poynter owns PolitiFact.)”
Unfortunately, misinformation and disinformation from chatbots are already among us, a product of their creators and their source material.
Margaret
Margaret Winker, MD
eLearning Program Director
Trustee
World Association of Medical Editors
***
wame.org
WAME eLearning Program
@WAME_editors
www.facebook.com/WAMEmembers
HIFA profile: Margaret Winker is Secretary and Past President of the World Association of Medical Editors in the U.S. Professional interests: WAME is a global association of editors of peer-reviewed medical journals who seek to foster cooperation and communication among editors, improve editorial standards, promote professionalism in medical editing through education, self-criticism, and self-regulation, and encourage research on the principles and practice of medical editing. margaretwinker AT gmail.com