Six keys to safely bringing AI to biomeds and the HTM department

By John R. Fischer, Senior Reporter | April 26, 2023

But providers and other healthcare stakeholders can use defensive AI to identify and respond to attacks. This includes infrastructural defenses, such as tagging each image with a signature when it is created; algorithmic defenses that protect AI systems, healthcare systems, or both, including their applications, information, and networks; advanced malware protection technologies; biomedical security toolkits; and user behavioral analysis tools to detect anomalous activity on AI systems.
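The idea of tagging each image with a signature at creation so that later tampering can be detected could be sketched as follows. This is a minimal, hypothetical illustration using an HMAC over the raw image bytes; the key name and provisioning scheme are assumptions, not part of any specific vendor's implementation.

```python
import hashlib
import hmac

# Assumption for illustration: each imaging device holds a provisioned
# secret key used to sign images it produces.
SECRET_KEY = b"device-provisioned-secret"

def sign_image(image_bytes: bytes) -> str:
    """Compute an HMAC-SHA256 signature over the raw image bytes."""
    return hmac.new(SECRET_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, signature: str) -> bool:
    """Return True only if the image still matches its original signature."""
    expected = sign_image(image_bytes)
    return hmac.compare_digest(expected, signature)

image = b"\x89PNG...raw scan data..."
tag = sign_image(image)                     # attached at creation time
assert verify_image(image, tag)             # untouched image verifies
assert not verify_image(image + b"x", tag)  # any modification is detected
```

In a real deployment the key would live in a hardware security module or per-device key store rather than in code, but the verification step is the same: any pixel-level manipulation of the image invalidates the signature.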

These innovations are more critical than ever, with the increase of interconnected devices and accelerated adoption of remote technologies, such as telehealth and telemedicine, partially fueled by the COVID-19 pandemic. According to Richard Staynings, chief security strategist at IoT cybersecurity and intelligence company Cylera and adjunct professor of cybersecurity and health informatics at the University of Denver, digital innovations have unfortunately outpaced security.

“Attacks against healthcare have gone up by 600% since the beginning of COVID. This is a massive, massive increase,” he said.

Regulating and standardizing use
Standards, regulations, and guidance on the safe and effective use of AI are still not where they should be, but they are starting to pick up the pace: inefficient regulatory barriers have been eliminated over the last decade; legislation like the Medical Device User Fee Amendments is in effect and expected to give agencies such as the FDA more resources for regulating device use; and more white papers and draft and final guidance documents on AI-related topics are being published.

Nevertheless, healthcare professionals still need to craft and publish more regulations to address specific concerns, including guidance on change control in AI and machine learning; post-market surveillance activities; and the ethical use of AI to protect individual rights and privacy and to prevent discrimination and bias.

Standards, which can be formed on the basis of white papers and partnerships between different committees, should focus on good machine learning practices; AI in operations; and AI at the point of care, three topics that are gaining considerable interest in the healthcare sector. They should also address vulnerabilities and risks, including data management, bias, overtrust, adaptive systems, and data storage, security, and privacy.

Training clinicians and staff
More training sessions, peer-reviewed articles, and updates are needed to make clinicians and HTM professionals more comfortable with, and capable of, using AI solutions safely and effectively. These should include lessons on how the technologies are developed, the risks they pose, and how they benefit patients and practices.
