
California Nurses Rally Against AI Tools in Healthcare

— They demand hospitals pause before rushing implementation of "untested" technologies

Union nurses rallied against the use of artificial intelligence (AI) tools they called "untested" and "unregulated" during a protest outside the Kaiser Permanente San Francisco Medical Center on Monday.

Despite reports that health systems are investing millions of dollars in AI technologies, Michelle Gutierrez-Vo, RN, BSN, a charge nurse at Kaiser Permanente Fremont Medical Center and president of the California Nurses Association (CNA), said that neither the hospitals nor the tech industry has proven that these tools improve the quality of patient care.

In December 2023, Kaiser Permanente announced it had awarded grants to five healthcare organizations for projects to deploy AI and machine learning algorithms "to enhance diagnostic decision-making in healthcare."

"Just like there's oversight for medications ... when it comes to being used for patients, this is just as dangerous," Gutierrez-Vo said of AI tools used in health systems, including those that help staff hospital units, determine resource needs, and filter messages to clinicians.

The rush by hospitals to launch these "untested and unregulated AI technologies" is what brought so many nurses out to protest, she noted. A CNA spokesperson reported that more than 200 nurses attended the rally.

"We want [hospitals] to take a huge pause and reflect and be accountable to make sure they test it, they validate it, before using it on our patients," Gutierrez-Vo said.

She added that she hopes regulators and the public will join with the CNA in demanding that developers and employers prove these new systems are safe, effective, and equitable.

Photo credit: California Nurses Association

In fact, lawmakers and regulators have begun to focus on the development and implementation of AI in healthcare, and research has shown that AI can be an effective clinical support tool in certain situations.

Experts have cautioned, however, that generative AI tools still require human oversight due to the risks of using the technology in healthcare settings, such as its tendency to introduce biased or incorrect information into clinical decision-making.

Douglas B. Johnson, MD, MSCI, a medical AI researcher from Vanderbilt University Medical Center in Nashville, Tennessee, told MedPage Today that the standard approach to implementing this technology has been to test it with a small team within an institution first.

"You can certainly implement it too soon or implement it too broadly," he said. "There are many ways that it could potentially go wrong."

AI technology offers several intriguing benefits, such as decreasing burnout, he added, but implementation should balance those benefits against patient safety and secure buy-in from nurses, physicians, and administrators.

"It's very important to get everyone in the room because all stakeholders bring a potentially valuable perspective, especially if they're the ones who are actually implementing it," Johnson said. "It's also helpful for people who are going to use the tool to be educated on it thoroughly as well."

Gutierrez-Vo said one of the CNA's biggest concerns is systems within electronic health records that "ration care by under-predicting how sick a patient might become."

Kaiser Permanente, like many health systems, uses software from Epic, a popular electronic health record vendor, including a patient acuity system that assigns each patient a number reflecting how ill they are. But that number doesn't factor in a patient's changing mental status or language barriers, or account for a patient whose mobility may have declined, Gutierrez-Vo explained.
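
To illustrate the gap Gutierrez-Vo describes, here is a minimal, hypothetical sketch of an acuity score computed from structured chart fields alone. The field names, weights, and thresholds are invented for illustration; they are not Epic's actual model.

```python
# Hypothetical sketch of a structured-data acuity score. All fields and
# cutoffs are invented for illustration; this is not Epic's actual model.
from dataclasses import dataclass

@dataclass
class PatientRecord:
    # Structured fields a scoring model can "see" in the chart.
    heart_rate: int   # beats per minute
    resp_rate: int    # breaths per minute
    spo2: float       # oxygen saturation, 0-100
    on_iv_meds: bool
    # Notably absent: changing mental status, language barriers, and
    # declining mobility -- the bedside observations Gutierrez-Vo says
    # only a trained nurse can capture.

def acuity_score(p: PatientRecord) -> int:
    """Assign a 1-5 acuity number from structured vitals alone."""
    score = 1
    if p.heart_rate > 110 or p.resp_rate > 24:
        score += 1
    if p.spo2 < 92:
        score += 2
    if p.on_iv_meds:
        score += 1
    return min(score, 5)

# A patient whose vitals look stable scores low even if a nurse has noticed
# new confusion or a fall risk -- the model simply has no input for it.
stable_but_confused = PatientRecord(heart_rate=84, resp_rate=16,
                                    spo2=97.0, on_iv_meds=False)
print(acuity_score(stable_but_confused))  # -> 1
```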

"There's a lot of different nuances in the human condition that only eyes and ears, and a trained, specialized nurse can tell ... how much of their time was needed in order to make sure this patient was safe, and how much more is going to be needed for the next shift," she said.

In 2019, Kaiser Permanente Northern California launched the Desktop Medicine Program, which, according to a published study, uses natural language processing algorithms to tag messages with category labels and route them to the appropriate respondents. The system was found to have funneled 31.9% of over 4.7 million patient messages to a "regional team" of medical assistants, tele-service representatives, pharmacists, and other physicians who "resolved" those messages before they reached individual physician inboxes.
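
As a rough illustration of how tag-and-route triage works, here is a minimal sketch that assumes a keyword classifier as a stand-in for the natural language processing model. The category labels, keywords, and destination names are hypothetical, not the Desktop Medicine Program's actual configuration.

```python
# Hypothetical tag-and-route triage. The keyword matching stands in for an
# NLP classifier; categories and team names are invented for illustration.

def tag_message(text: str) -> str:
    """Attach a category label to a patient message."""
    lowered = text.lower()
    if "refill" in lowered or "prescription" in lowered:
        return "medication_request"
    if "appointment" in lowered or "schedule" in lowered:
        return "scheduling"
    return "clinical_question"

# Categories the regional team resolves before a message ever reaches an
# individual physician's inbox.
REGIONAL_TEAM_CATEGORIES = {"medication_request", "scheduling"}

def route(text: str) -> str:
    """Send a tagged message to the regional team or the physician inbox."""
    if tag_message(text) in REGIONAL_TEAM_CATEGORIES:
        return "regional_team"  # medical assistants, tele-service reps, pharmacists
    return "physician_inbox"

print(route("Can you reschedule my appointment?"))    # -> regional_team
print(route("I've had chest pain since yesterday."))  # -> physician_inbox
```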

But Gutierrez-Vo said that she finds the messaging system problematic.

If a patient who recently had a heart attack messages his physician requesting a refill of a nitroglycerin prescription, that message should be flagged as urgent. Under the new system, it will be categorized as a medication request, deemed "non-urgent," and directed to a pharmacist, despite signaling a "life-or-death situation," she explained.
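
Applying the same hypothetical keyword-based tagging to her scenario illustrates the failure mode: the category label captures the request type but not the clinical context that makes it urgent.

```python
# Same invented keyword logic as the sketch above, applied to the scenario
# Gutierrez-Vo describes. Labels and queue names are hypothetical.
msg = ("I was discharged after my heart attack last week and need a "
       "refill of my nitroglycerin prescription.")

# "refill" triggers the medication_request label; the cardiac history in the
# same sentence never influences the routing decision.
category = "medication_request" if "refill" in msg.lower() else "clinical_question"
destination = "pharmacist_queue" if category == "medication_request" else "physician_inbox"
print(f"{category} -> {destination}")  # medication_request -> pharmacist_queue
```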

While nursing unions have negotiated "enforceable language" in their contracts requiring that they be notified of new technologies and modifications, employers aren't always complying with those contracts, Gutierrez-Vo said. If nurses call for a "hard stop" because, for example, the staffing that results from these technologies appears inappropriate, management is accountable for making those changes immediately.

Kaiser Permanente did not immediately respond to a request for comment.


Michael DePeau-Wilson is a reporter on MedPage Today's enterprise & investigative team. He covers psychiatry, long COVID, and infectious diseases, among other relevant U.S. clinical news.


Shannon Firth has been reporting on health policy as MedPage Today's Washington correspondent since 2014. She is also a member of the site's Enterprise & Investigative Reporting team.