AI tool deciphers unstructured data for monitoring tumor changes
By John R. Fischer, Senior Reporter | July 30, 2019
Artificial Intelligence
A new AI tool can help predict patient outcomes by analyzing unstructured EHR data
A new AI tool may help radiologists monitor tumor progression by analyzing unstructured data in radiology reports.
Researchers at Dana-Farber Cancer Institute have developed a system that can extract clinical information from unstructured radiology reports for lung cancer patients faster than human reviewers, and with comparable accuracy.
"Without models like ours, understanding the key second part of the equation — patient outcomes — requires researchers to manually read through hundreds or thousands of records, which is so time-consuming and expensive that it has been a critical barrier to precision medicine efforts in the past," corresponding author Kenneth Kehl, a medical oncologist and faculty member of the population sciences department at Dana-Farber, told HCB News. "In that sense, systems like ours could ultimately improve patient outcomes by accelerating cancer research."
Though electronic health records are designed to collect large amounts of patient data, data on outcomes, such as whether a cancer grows or shrinks in response to treatment, is recorded only as free text in medical records, where it cannot easily be used in computational analyses of treatment effectiveness.
To see whether AI could extract high-value cancer outcomes from this information, the researchers trained a computational “deep learning” model on manual assessments they made of 14,000 imaging reports for 1,112 patients, noting whether cancer was present, whether it was worsening or improving, and whether it had spread to specific body sites. The manual reviews followed the “PRISSMM” framework, a phenomic data standard developed at Dana-Farber that structures unstructured EHR text pertaining to patient pathology, radiology/imaging, signs/symptoms, molecular markers, and medical oncologists' assessments for analysis.
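The article does not describe the model's architecture, so as a loose illustration of the underlying idea (learning outcome labels from report text), here is a minimal bag-of-words classifier in plain Python. All report snippets, labels, and names here are invented for the example; the actual Dana-Farber system is a deep learning model trained on thousands of real reports, not this simple logistic regression.

```python
import math
import re

# Toy "radiology report" snippets with labels: 1 = worsening/progression,
# 0 = improving/response. Purely illustrative, not from any real dataset.
REPORTS = [
    ("interval increase in size of the right upper lobe mass", 1),
    ("new metastatic lesions in the liver and adrenal gland", 1),
    ("enlarging mediastinal lymphadenopathy compared to prior", 1),
    ("disease progression with new pleural effusion", 1),
    ("interval decrease in size of the primary lung mass", 0),
    ("no evidence of new metastatic disease", 0),
    ("partial response with shrinking nodules", 0),
    ("stable to improved appearance of the known lesions", 0),
]

def tokens(text):
    return re.findall(r"[a-z]+", text.lower())

# Vocabulary built from the training snippets.
vocab = sorted({t for text, _ in REPORTS for t in tokens(text)})
index = {w: i for i, w in enumerate(vocab)}

def featurize(text):
    """Bag-of-words count vector over the training vocabulary."""
    v = [0.0] * len(vocab)
    for t in tokens(text):
        if t in index:
            v[index[t]] += 1.0
    return v

def train(data, epochs=200, lr=0.5):
    """Logistic regression fit with plain gradient descent on log-loss."""
    w, b = [0.0] * len(vocab), 0.0
    for _ in range(epochs):
        for text, y in data:
            x = featurize(text)
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of log-loss with respect to z
            b -= lr * g
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w, b

def predict(w, b, text):
    """Return 1 (progression) or 0 (response) for a new snippet."""
    x = featurize(text)
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

w, b = train(REPORTS)
print(predict(w, b, "interval increase in size of the mass"))  # progression
print(predict(w, b, "decrease in size of the lung mass"))      # response
```

The real system learns far richer features than word counts, but the workflow is the same shape: human-labeled reports in, a model that maps new report text to outcome labels out.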
Once trained to recognize these findings in the text, the model replicated the manual assessments of outcomes such as disease-free survival, progression-free survival, and time to improvement or response. The system was then used to annotate 15,000 additional reports for 1,294 patients whose records had not been manually assessed, and its survival predictions for these patients were as accurate as those for the manually reviewed cohort.
Time-wise, the system annotated all of the nearly 30,000 reports in about 10 minutes, whereas the human reviewers annotated about three patients per hour. At that rate, a single curator would need six months to annotate all the reports.