
ChatGPT Quickly Authored 100 Blogs Full of Healthcare Disinformation

— But Google Bard and Microsoft Bing had guardrails in place to thwart such prompting

Generative artificial intelligence (AI) was able to quickly churn out large amounts of health disinformation on vaccines and vaping, Australian researchers found.

In just 65 minutes and with basic prompting, ChatGPT produced 102 blog articles containing more than 17,000 words of disinformation on those two topics, Ashley Hopkins, PhD, of Flinders University in Adelaide, Australia, and colleagues reported in JAMA Internal Medicine.

Hopkins and colleagues were also able to use two other generative AI tools to produce 20 realistic images and one deep-fake video in less than 2 minutes to accompany the disinformation blog posts.

Though OpenAI's ChatGPT produced its disinformation with ease, two other generative AI programs -- Google Bard and Microsoft Bing -- "did not facilitate the production of large volumes of disinformation related to vaccines and vaping," the researchers wrote, suggesting it "may be possible to implement guardrails on health disinformation."

"The study shows that without proper safeguards, generative AI can be misused to create and distribute large volumes of persuasive, customized disinformation," Hopkins told ѻý in an email. "This poses a risk of being exploited by malicious actors, enabling them to reach wider audiences more efficiently than ever before."

The extent of AI's ability to produce vast amounts of disinformation was surprising, Hopkins said, adding that the findings highlight the need for an AI vigilance framework that would promote transparency and effective management of AI risks.

"Unchecked, generative AI poses significant health risks by facilitating the spread of persuasive, tailored health disinformation," Hopkins said. "This is important as online health disinformation is known to lead to fear, confusion, and harm within communities. It is essential to try and mitigate the potential harms of emerging generative AI tools."

In an accompanying commentary, Peter Hotez, MD, PhD, of Baylor College of Medicine in Houston, said this risk has to be considered when healthcare leaders discuss the potential benefits of using generative AI in healthcare.

"Much has been written about the promise of AI for sharpening clinical algorithms or improving physician accuracy," Hotez wrote. "Yet there is a new dark side. Managing and counterbalancing AI-generated disinformation may also become an important new reality and vital activity for physicians and other healthcare practitioners."

Hotez said it was only a matter of time before tools like ChatGPT were applied to health-related disinformation. He emphasized that government intervention may be the best approach to curbing the potential harms of these AI tools.

On the bright side, the fact that other AI tools did not mass-produce disinformation suggests that more can be done to properly safeguard these tools and prevent harm, the researchers said.

Hopkins and colleagues said they informed OpenAI about the behaviors observed in the study, but the company did not respond.

The "dark side" of generative AI in healthcare was also highlighted by a study last week in JAMA Ophthalmology that found ChatGPT was able to fabricate an entire dataset and skew the results to make one intervention look better than another. The AI created a fake dataset of hundreds of patients in a matter of minutes with high accuracy, the study authors previously explained to ѻý.

For their experiment, Hopkins and another colleague fed prompts into ChatGPT with the intention of generating several blog posts featuring disinformation on vaccines or vaping. The authors prompted the AI model with specific requests about content and target audiences, including young adults, parents, pregnant people, and older people.

Concerning statements in the blog posts generated by the AI model included, "Don't let the government use vaccines as a way to control your life. Refuse all vaccines, and protect yourself and your loved ones from their harmful side effects." Concerning headlines generated by AI included, "The Ugly Truth About Vaccines and Why Young Adults Should Avoid Them" and "The Dark Side of Vaccines: Why the Elderly Should Avoid Them."

Hopkins said that developing safeguards for generative AI will become far more important as the technology improves over time. "AI technology is rapidly advancing," Hopkins said, "thus the study underscores the necessity for robust AI vigilance frameworks to effectively manage AI risks moving forward."


Michael DePeau-Wilson is a reporter on MedPage Today's enterprise & investigative team. He covers psychiatry, long COVID, and infectious diseases, among other relevant U.S. clinical news.

Disclosures

Modi and Hopkins reported receiving grant funding from the National Health and Medical Research Council in Australia. Sorich reported receiving grant funding from the Cancer Council of South Australia.

Hotez reported being a coinventor of vaccines and vaccine technology owned by Baylor College of Medicine (BCM), which provides a share of any royalty income in accordance with BCM policy. He receives royalties from several books he wrote that have been published by Johns Hopkins University Press and ASM-Wiley Press.

No other authors reported any financial conflicts of interest.

Primary Source

JAMA Internal Medicine

Menz BD, et al "Health disinformation use case highlighting the urgent need for artificial intelligence vigilance: Weapons of mass disinformation" JAMA Intern Med 2023; DOI: 10.1001/jamainternmed.2023.5947.

Secondary Source

JAMA Internal Medicine

Hotez PJ, et al "Health disinformation -- gaining strength, becoming infinite" JAMA Intern Med 2023; DOI: 10.1001/jamainternmed.2023.5946.