The news: GenAI models can easily be influenced to perpetuate false health information when they’re fed made-up medical terms, per a new Mount Sinai study published in Nature last week.
Digging into the details: Researchers gave six popular chatbots, including ChatGPT and DeepSeek-R1, 300 fictional medical scenarios that mirrored clinical notes. The chatbots returned inaccurate information between 50% and 82% of the time.
Researchers tested the chatbots in two ways.
Zooming out: While GenAI chatbots are prone to perpetuating medical misinformation, they’re also providing fewer disclaimers on health information, per a study cited in MIT Technology Review.