by John R. Fischer, Staff Reporter | October 23, 2019
An algorithm under development has proven its worth by beating out two of four expert radiologists in identifying tiny brain hemorrhages in head CT scans.
The results hold promise for faster and more efficient treatment of patients who suffer traumatic brain injuries, strokes and aneurysms, according to researchers at UC San Francisco and UC Berkeley. The AI technology could sift through thousands of images daily, flagging significant abnormalities so radiologists can examine them sooner and more closely.
“The providers who could benefit from this algorithm include those in radiology for faster interpretation with fewer misses, as well as neurosurgery, neurology and emergency medicine for faster initial interpretation and demarcation of abnormalities directly on images,” Dr. Esther Yuh, associate professor of radiology at UCSF and co-corresponding author of the study, told HCB News. "Many patients are also highly interested in seeing and understanding their own images to better understand their condition.”
The number of images in each brain scan can be so large that radiologists sometimes rely on mice with frictionless wheels to scroll through large 3D stacks of images in movie format, searching for tiny abnormalities that indicate life-threatening emergencies. Some spots may be on the order of 100 pixels in a 3D stack containing over a million pixels, so even expert radiologists can miss them, potentially with grave consequences.
The algorithm identified cases of hemorrhage in one second, tracing the detailed outlines of the abnormalities it found to show their location within the brain's three-dimensional structure. Among its findings were small abnormalities missed by experts, which the algorithm classified by subtype. It was also able to determine whether an entire exam, consisting of a 3D stack of approximately 30 images, was normal.
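The article does not describe how per-slice findings are combined into a whole-exam call, but a common approach is simple max-aggregation: if any slice looks abnormal, the exam is flagged. The sketch below assumes that rule; the function name, threshold, and inputs are all hypothetical, not the researchers' actual method.

```python
def exam_is_abnormal(slice_probs, threshold=0.5):
    """Hypothetical exam-level rule: flag a whole ~30-slice exam as
    abnormal if any single slice's predicted hemorrhage probability
    exceeds a threshold (max-pooling aggregation, assumed here)."""
    return max(slice_probs) > threshold

# A clean exam vs. one with a single suspicious slice
print(exam_is_abnormal([0.02] * 30))            # False
print(exam_is_abnormal([0.02] * 29 + [0.91]))   # True
```

A max rule like this is deliberately sensitive: one confident slice-level detection is enough to route the exam to a radiologist, which matches the triage use case described above.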
The powerhouse behind the algorithm is a fully convolutional neural network (FCN) trained on 4,396 CT exams. Training was especially extensive: each small abnormality was manually delineated at the pixel level, and a number of steps were taken to prevent the model from misinterpreting random variations, or "noise," as meaningful. In addition, researchers fed the network only a "patch" of an image at a time, providing context from the slices that directly preceded and followed it in the stack. This allowed the algorithm to be extremely accurate and to learn from the relevant information in the data without "overfitting," that is, drawing conclusions from insignificant variations in the data. They called the model PatchFCN.
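The patch-plus-context input described above can be sketched as a preprocessing step: each slice is cut into square patches, with the slices directly before and after attached as extra channels. This is a minimal illustration, assuming a patch size of 64 pixels and one neighboring slice of context on each side; the function and its parameters are illustrative, not taken from the published PatchFCN code.

```python
import numpy as np

def make_patches(stack, patch=64):
    """Cut each slice of a 3D CT stack into square patches, attaching
    the slices directly above and below as extra channels (a sketch of
    the patch-plus-context input described for PatchFCN)."""
    depth, h, w = stack.shape
    samples = []
    for z in range(depth):
        # At the top and bottom of the volume, repeat the edge slice
        lo, hi = max(z - 1, 0), min(z + 1, depth - 1)
        ctx = np.stack([stack[lo], stack[z], stack[hi]])  # (3, h, w)
        for y in range(0, h - patch + 1, patch):
            for x in range(0, w - patch + 1, patch):
                samples.append(ctx[:, y:y + patch, x:x + patch])
    return np.array(samples)

# A toy 30-slice, 128x128 "exam", matching the ~30-image stacks above
stack = np.random.rand(30, 128, 128)
patches = make_patches(stack)
print(patches.shape)  # (120, 3, 64, 64): 30 slices x 4 patches each
```

Feeding the network small patches with local slice context, rather than whole exams, is one way to limit how much incidental variation the model can memorize, consistent with the anti-overfitting measures the researchers describe.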