The introduction of artificial intelligence (AI) into screening mammography for breast cancer significantly improved screening performance while reducing radiologists' workload, a large retrospective cohort study showed.
A comparison of screening performance before and after the introduction of AI showed that recall and false-positive rates decreased significantly (P<0.001). Cancer detection rate, positive predictive value (PPV), and detection of small cancers increased (P=0.01, P<0.001, and P=0.02, respectively), whereas the proportion of detected cancers that were invasive decreased (P=0.04).
Using AI to eliminate the need for dual-radiologist review of negative scans helped decrease workload by 33.5%, Andreas Lauritzen, PhD, of the University of Copenhagen, and co-authors reported in Radiology.
"All screening performance indicators improved except for node-negative rate [which did not differ], while the reading workload decreased," the authors wrote in summary. "When the AI examination score threshold for defining likely normal screenings was increased from 5 to 7 [on a scale of 1-10], the recall rate, false-positive rate, and rate of node-negative cancers all decreased significantly," they added.
"In other words, with a higher threshold, fewer women were recalled, and recalls more frequently resulted in a breast cancer diagnosis."
The findings are important because other recently reported studies of AI in mammography have come from Sweden. The Danish study provides some reassurance that the results hold up in other countries, said Laura Heacock, MD, of NYU Langone Health in New York City.
How AI influences screening mammography in the U.S. remains to be seen.
"In Europe, all mammograms are read by two radiologists," she told ѻý via email. "If they disagree, they then have a consensus meeting where they make a joint decision. This is different from the United States where one radiologist reads the mammogram."
"Using AI as a second reader decreased mammography reading workload by 33.5%, which allowed those radiologists to do other needed radiology tasks instead (procedures, look at other studies waiting for a read)," Heacock continued. "And switching one of the two readers to AI increased cancer detection rates and reduced false positives, so it improved patient care. The extra cancer by AI tended to be smaller, which meant patients are more likely to have good treatment outcomes. A limitation is that they found slightly more pre-invasive cancers."
The Danish results are similar to or slightly better than those of the previously published Swedish trials, she added. Follow-up data from the Swedish trials are not yet available, but the studies showed that use of AI as a second reader did not have a negative effect on patient care.
AI systems similar to the one in the Danish study have already been installed in more than 50 sites in five states and the District of Columbia, said Wei Yang, MD, of the University of Texas MD Anderson Cancer Center in Houston.
"The combination of increased breast cancer detection rate, lower false-positive rate, fewer callbacks, and reduced radiologist workload is compelling," said Yang. "How this data will play out in the United States where screening mammography is performed annually without 'double reads' versus every 2-3 years with double reading in Europe will be interesting."
Lauritzen and colleagues reported findings from an analysis of mammographic data for women who underwent biennial breast cancer screening in Denmark during 2020-2022. The study population comprised 60,751 women who were screened prior to implementation of the AI system and 58,246 afterward.
Historically, all mammograms had been read by two radiologists. After implementation of the AI system in November 2021, initial reading was performed by one radiologist and the AI system.
Mammograms considered "likely normal" by a radiologist and AI were accepted as normal. Higher-risk mammograms were evaluated by two radiologists with AI support. The definition of "likely normal" initially was a score ≤5, which was increased to ≤7 in May 2022.
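The threshold logic described above lends itself to a simple sketch. The following Python snippet is an illustration only, not the study's software: the class, function, and field names are hypothetical, and the exact handling of disagreements (for example, when the single reader recalls a "likely normal" exam) is assumed rather than taken from the article.

```python
from dataclasses import dataclass

LIKELY_NORMAL_THRESHOLD = 7  # score <=5 at launch in November 2021, raised to <=7 in May 2022


@dataclass
class Screening:
    ai_score: int                       # AI examination score on a 1-10 scale
    reader1_recall: bool                # first radiologist's recall decision
    reader2_recall: bool | None = None  # second radiologist, used only for higher-risk exams


def route_screening(exam: Screening, threshold: int = LIKELY_NORMAL_THRESHOLD) -> str:
    """Route a screening exam under an AI-supported protocol (hypothetical logic)."""
    if exam.ai_score <= threshold:
        # "Likely normal": a single radiologist reads with the AI result available;
        # if the radiologist also considers it normal, it is accepted as normal.
        return "recall" if exam.reader1_recall else "accepted as normal"
    # Higher risk: double reading by two radiologists with AI decision support.
    return "recall" if (exam.reader1_recall or exam.reader2_recall) else "accepted as normal"


# Examples: a low-scoring exam the single reader also calls normal, and a high-scoring exam.
print(route_screening(Screening(ai_score=3, reader1_recall=False)))                        # accepted as normal
print(route_screening(Screening(ai_score=9, reader1_recall=False, reader2_recall=True)))   # recall
```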
Investigators evaluated parameters consistent with Danish and European guidelines. Comparing outcomes before and after implementation, use of AI yielded:
- Lower recall rate: 3.09% vs 2.46%
- Lower false-positive rate: 2.39% vs 1.63%
- Increased PPV: 22.6% vs 33.6%
- Increased cancer detection rate: 0.70% vs 0.82%
- Detection of more small tumors (≤1 cm): 36.6% vs 44.9%
- Lower proportion of invasive cancers among detected cancers: 84.9% vs 79.6%
The incidence of node-negative cancers did not change significantly (76.7% vs 77.98%). The AI-supported protocol eliminated 38,977 of 116,492 expected radiologist reads, a 33.5% reduction in reading workload.
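As a quick check of the 33.5% workload figure from the reported read counts (numbers taken from the article; this is only a back-of-the-envelope calculation):

```python
# Reads avoided out of the reads that would have been required under full double reading.
reads_saved = 38_977
total_reads = 116_492

print(f"Workload reduction: {reads_saved / total_reads:.1%}")  # 33.5%
```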
The study adds to the Danish team's previous simulation study of the AI system, but "is of greater interest to radiologists because it reflects the true impact of AI in a real-life clinical practice," wrote Amie Lee, MD, of the University of California San Francisco, and Sarah Friedewald, MD, of Northwestern University in Chicago, in an accompanying editorial.
"This is a welcome addition to the limited available studies evaluating prospective implementation of AI support tools in actual clinical practice," Lee and Friedewald noted.
The results are consistent with those of two recent studies from Sweden, they added. One study showed that double reading by radiologists with AI support was superior to double reading alone. The other showed that reading by one radiologist and an AI system was superior to reading by two radiologists.
Lauritzen and co-authors acknowledged several limitations to their research. The median screening interval was longer after implementation of AI because of the COVID-19 pandemic, a shortage of radiologists, and longer waiting times, which probably contributed to the higher cancer detection rate. The study also could not assess the individual effects of AI screening stratification and AI-assisted decision support, and radiologists who performed single readings were not blinded to the fact that the AI system had categorized these screenings as likely normal.
The study also did not provide data on radiologists' degree of agreement with the AI system's markings, they added, and follow-up after implementation was too short to assess interval cancers.
Disclosures
This study was supported in part by Eurostars and by the Pioneer Centre for Artificial Intelligence.
Lauritzen reported support from the Capital Region of Denmark. Co-authors reported multiple relationships with industry.
Heacock and Yang reported no relevant financial disclosures.
Lee disclosed a patent/royalty/intellectual property interest. Friedewald disclosed a relationship with Hologic.
Primary Source
Radiology
Lauritzen AD, et al "Early indicators of the impact of using AI in mammography screening for breast cancer" Radiology 2024; DOI: 10.1148/radiol.232479.
Secondary Source
Radiology
Lee AY, Friedewald SM "Clinical implementation of AI in screening mammography: the essential role of prospective evaluation" Radiology 2024; DOI: 10.1148/radiol.241124.