Clinicians hesitant to use AI, even in cases where algorithms are validated and reliable
By John R. Fischer, Senior Reporter | June 14, 2021
Even when an AI-generated treatment plan has been validated, radiation oncologists are hesitant to deploy it in a clinical setting compared to one devised by a human physician
Even when radiation treatments devised by machine learning algorithms are deemed more effective than those created by humans, clinicians are hesitant to implement them in clinical practice.
At least, that’s what researchers at Princess Margaret Cancer Centre in Toronto found when they compared the two types of treatment and then applied them in clinical settings for the care of prostate cancer patients.
“From our study, one key result that will change how we develop protocols in the future is the fact physicians selected machine learning treatments less often in the prospective evaluation setting, when patient care was at stake, compared with evaluation of treatments in the retrospective setting,” Dr. Leigh Conroy, medical physicist at UHN’s Princess Margaret Cancer Centre, told HCB News.
When comparing the two, physicians found 89% of ML-generated treatments clinically acceptable and selected them over human-generated treatments 72% of the time. In addition, the ML-based planning process was about 60% faster than the conventional human process, reducing the overall time from 118 hours to 47. This translated to cost savings and improved quality of care.
When the plans were compared retrospectively, for a group of patients who had already undergone radiotherapy, radiation oncologists chose the ML-generated treatment over the human one 83% of the time. But when asked in a clinical capacity which should be used for patients who had yet to undergo treatment, they were less likely to recommend the ML-generated plan, with the rate dropping to 61%.
The researchers chalk this up to fears of deploying inadequately validated AI systems: any ML-generated treatment judged superior and preferable to its human counterpart would actually be used to treat patients in the pre-treatment group, raising the stakes of the decision.
“Through the process we gained insight into the requirement for multiple levels of validation and feedback informed by the experts at each validation stage to ensure quality, efficacy, and applicability of machine learning in clinical deployments,” said Dr. Tom Purdie, medical physicist at Princess Margaret Cancer Centre.
These insights helped the team develop a framework spanning machine learning technical development through clinical evaluation in a prospective setting. The framework helped build physicians’ trust by giving them feedback on how confident the machine learning system was that its plan was suitable for each patient. The team also showed that potential biases affecting clinical judgment, along with inter- and intra-physician variability in decision making, are important considerations when validating machine learning technologies.
“We showed that the system can provide evidence for the decisions made by machine learning, overcoming the challenge of treating machine learning as a ‘black box,’” said Dr. Chris McIntosh, scientist at UHN’s Peter Munk Cardiac Centre and the Techna Institute, and chair of medical imaging and AI at the joint department of medical imaging.
A specialized radiation therapist individually created each human-generated treatment in accordance with standard protocol. Each ML treatment was generated by a computer algorithm trained on a high-quality, peer-reviewed database of radiation therapy plans from 99 patients previously treated for prostate cancer at Princess Margaret. Integrating the ML treatments and making staff comfortable with the process took more than two years.
The team will next compare both types of treatment for lung and breast cancer, with the goal of reducing cardiotoxicity, a possible side effect of treatment.
The findings were published in Nature Medicine.