By John R. Fischer, Senior Reporter | December 23, 2021
AI models for diagnosing and improving medical imaging workflow are susceptible to manipulation by cyberattacks
While an essential tool for speeding up and improving the accuracy of diagnoses, AI models are susceptible to adversarial cyberattacks. Such attacks may range from tiny manipulations to more sophisticated versions that target sensitive content, such as cancerous regions in an image, and result in inaccurate diagnoses that, in turn, lead to poor decision-making by radiologists.
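For a concrete sense of what a "tiny manipulation" looks like in code, here is a minimal sketch of the well-known fast gradient sign method (FGSM) in PyTorch. This is a generic illustration of the attack class, not the method used in the Pitt study; the model, labels, and epsilon value are all assumed.

```python
import torch

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Fast gradient sign method: nudge each pixel by +/- epsilon
    in the direction that increases the classifier's loss.
    A perturbation this small is typically invisible to a human
    reader but can flip the model's prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step every pixel along the sign of the loss gradient
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```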
These vulnerabilities are the topic of a new study by researchers at the University of Pittsburgh, who simulated an attack that falsified mammogram images and fooled both the AI models and the human breast imaging experts who assessed them.
Using mammogram images, the team trained a deep learning algorithm to distinguish cancerous from benign cases with more than 80% accuracy, and then developed a generative adversarial network (GAN) to insert cancerous regions into negative images or remove them from positive ones.
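The article does not reproduce the team's code, but the general shape of a GAN-based tampering attack is standard: a generator edits the image, a discriminator keeps the edit looking realistic, and the frozen diagnostic classifier supplies the "wrong diagnosis" signal. The PyTorch sketch below is an assumed, simplified version of that objective; every name and loss weight is illustrative.

```python
import torch
import torch.nn.functional as F

def gan_attack_step(generator, discriminator, diagnostic_model,
                    image, target_label, g_opt):
    """One generator update for a GAN-style image-tampering attack.
    The generator edits the mammogram, the discriminator keeps the
    edit realistic, and the frozen diagnostic classifier is pushed
    toward the attacker's chosen (wrong) label."""
    fake = generator(image)

    # Realism: the edited image should score as "real" to the discriminator
    d_out = discriminator(fake)
    realism_loss = F.binary_cross_entropy_with_logits(
        d_out, torch.ones_like(d_out))

    # Attack: the frozen classifier should output the target label
    # (e.g. "benign" for an image that actually shows cancer)
    attack_loss = F.cross_entropy(diagnostic_model(fake), target_label)

    # Fidelity: keep the rest of the image close to the original
    fidelity_loss = F.l1_loss(fake, image)

    loss = realism_loss + attack_loss + 10.0 * fidelity_loss
    g_opt.zero_grad()
    loss.backward()
    g_opt.step()
    return fake.detach()
```

The 10.0 weight on the fidelity term is an arbitrary choice here; it trades off how visible the edit is against how reliably it flips the diagnosis.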
The GAN made 44 positive images look negative, and the model classified 42 of them as negative. Of 319 negative images made to look positive, the model classified 209 as positive. In all, the model was fooled by 69.1% of the fake images. Five human radiologists who reviewed the images achieved accuracies ranging from 29% to 71%, depending on the individual.
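As a sanity check, the 69.1% figure follows directly from pooling the two counts above:

```python
# Positive-to-negative fakes: 42 of 44 fooled the model
# Negative-to-positive fakes: 209 of 319 fooled the model
fooled = 42 + 209          # 251 misclassified fakes
total = 44 + 319           # 363 fake images in all
print(f"{fooled / total:.1%}")  # -> 69.1%
```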
“What we want to show with this study is that this type of attack is possible, and it could lead AI models to make the wrong diagnosis — which is a big patient safety issue. By understanding how AI models behave under adversarial attacks in medical contexts, we can start thinking about ways to make these models safer and more robust,” said senior author Shandong Wu, associate professor of radiology, biomedical informatics and bioengineering at Pitt, in a statement.
Wu says that cybersecurity education is more critical than ever in today’s medical environment, as AI is increasingly used in medical imaging and other healthcare fields, as well as to protect patient data and block malware.
Another well-known type of attack targets the internet of things (IoT), with ransomware being the most common. In the past 18 months, 82% of healthcare providers experienced some form of IoT cyberattack, and 34% were hit by ransomware, according to a paper by data security firm Medigate and cloud-based protection provider CrowdStrike.
Ransomware attacks are becoming increasingly common, and of the organizations that experienced one, 33% paid the ransom, but only 69% of those reported that paying led to the full restoration of their data, according to the study. The findings show that healthcare delivery organizations need to shore up their security infrastructures with more basic defenses, weigh cyber-insurance coverage, and contend with the fact that there is still no standard outlining attack-restoration costs.
As for his own study, Wu says the researchers’ next step will be to find ways to make AI models more robust against adversarial attacks. “We hope that this research gets people thinking about medical AI model safety and what we can do to defend against potential attacks, ensuring AI systems function safely to improve patient care.”
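One standard hardening technique for this setting is adversarial training: attacked images are mixed back into the training set so the model learns to classify them correctly. The sketch below, which reuses the fgsm_perturb helper from earlier, is a generic illustration of that recipe, not the Pitt researchers' method.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, images, labels, optimizer, epsilon=0.01):
    """One training step on a 50/50 mix of clean and FGSM-perturbed
    images, a common recipe for hardening a classifier.
    Assumes the fgsm_perturb helper sketched earlier is in scope."""
    adv_images = fgsm_perturb(model, images, labels, epsilon)

    optimizer.zero_grad()  # clear gradients left over from the attack pass
    loss = (F.cross_entropy(model(images), labels)
            + F.cross_entropy(model(adv_images), labels)) / 2
    loss.backward()
    optimizer.step()
    return loss.item()
```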
The findings were published in Nature Communications.