By John R. Fischer, Senior Reporter | May 02, 2023
Experienced radiologists are more likely to make inaccurate diagnoses when AI programs label mammograms with the wrong BI-RADS classification.
Researchers in Germany and the Netherlands found that even experienced radiologists were more likely to make an inaccurate diagnosis when a purported AI system suggested an incorrect BI-RADS category, raising concerns about automation bias.
While several studies have shown that introducing AI improves radiologist performance in the mammography workflow, this was the first to evaluate how incorrect suggestions from AI-based systems influence radiologists' accuracy in reading mammograms.
In their prospective study, the researchers had 27 radiologists read 50 mammograms in two sets: one of 10 correctly interpreted scans, and another of 40 images, 12 of which were assigned an incorrect BI-RADS category by the purported AI algorithm.
The researchers were not surprised to find that accuracy among inexperienced radiologists fell from 80% when the AI suggested the correct BI-RADS category to less than 20% when it suggested the wrong one. They were surprised, however, to see accuracy among radiologists with more than 15 years of experience drop from 82% to 45.5%.
"As healthcare becomes increasingly personalized and complex, there is a possibility that radiologists could become overly reliant on AI for making diagnoses. This reliance may increase the risk of AI bias affecting their performance," lead author Dr. Thomas Dratsch, of the Institute of Diagnostic and Interventional Radiology at University Hospital Cologne, told HCB News.
Because AI is built with human input and is often trained on data from specific, limited patient populations, experts say it may incorporate biases that lead to inaccuracies. In fact, in a recent study, 60% of Americans said they would not feel comfortable with AI diagnosing diseases or recommending treatments, citing these concerns as well as unfamiliarity with the technology and how it works.
The researchers say potential safeguards, such as showing users the confidence level of the decision support system or teaching them the reasoning behind its decisions, may help reduce automation bias by making radiologists feel more accountable for their own decisions.
Additionally, they plan to study tools such as eye-tracking technology, which Dratsch says can help reveal how radiologists using AI integrate its output into their decision-making processes.
"By comparing these eye movements to those of radiologists not using AI, we can better understand the potential pitfalls of automation bias and develop effective methods for presenting AI output that encourages critical engagement while minimizing the risks associated with bias," he said.
The authors are affiliated with the University of Cologne, University Clinic Schleswig-Holstein, University Medical Centre of the Johannes Gutenberg-University Mainz, and University Clinic Würzburg in Germany; and Vrije Universiteit Amsterdam in The Netherlands.
The findings were published in Radiology.
The authors did not respond to requests for comment.