Having just submitted my last message, in which I asked ChatGPT to comment on using chatbots in health, I came across Julie Reza's similar experiment. What is significant about the two is that in both cases ChatGPT tried to accommodate the questioner.
Julie's prompt was "How does ChatGPT avoid misinformation, especially in the field of health?", while mine was "Give me a list of papers in which chatbots make health errors."
So Julie's question was on the whole positive, and the chatbot returned a positive answer, while mine was negative, and the chatbot returned a largely negative answer. This may be a symptom of a basic problem with chatbot information, or "botfo": it keeps trying to please you.
Chris Zielinski
Centre for Global Health, University of Winchester, UK and
President, World Association of Medical Editors (WAME)
Blogs: http://ziggytheblue.wordpress.com and http://ziggytheblue.tumblr.com
Publications: http://www.researchgate.net and https://winchester.academia.edu/ChrisZielinski/