AI power is real but muzzled by lack of data sharing and standardization
AI adds a new dimension to brain and abdominal imaging in a number of clinical scenarios, but lack of data sharing as well as reproducibility and standardization issues must be addressed, top European radiologists explained during the ESR AI Premium event.
Detecting the invisible and the visible
In the brain, AI can help to detect things that are invisible to the human eye. “When we do non-contrast CT of the brain, there are very subtle changes in the density that we may not detect with the eye. These small changes are invisible to the naked eye, but software can help to visualize and identify them,” Paul Parizel, full professor at the University of Western Australia in Perth, said.
Detecting the penumbra and collateral circulation in patients with acute stroke has long been challenging. Until recently, this could only be done with angiography, by looking for late refilling of vessels. CT perfusion combined with CT angiography and the right software can now generate maps that delineate the penumbra.
Medical imaging AI can do more than just reveal things that escape the human eye. It can also help to detect things that are already visible, but hard to pick up. “Identifying abnormalities in some images might be challenging. We are looking at multiple data sets. We can miss things when dealing with large amounts of data,” Parizel said.
For example, on a non-contrast CT, some findings stand out at a glance, but small fractures of the cervical spine are trickier to spot and can be missed.
AI can also improve the characterization of lesions on imaging. Various tools can assist both novice and experienced radiologists in segmenting abnormalities and mass lesions using 3D reconstruction.
Quantitative imaging has been used for some time to segment CSF, grey matter, white matter, and whole-brain tissue. Quantification has also proved very useful in patients with multiple sclerosis (MS), Parizel explained.
“In these patients, we can identify the regions and measure the volumes of white and grey matter, feed that into the computer, and come out with a quantified data set, from which we can segment out white matter lesions and do 3D reconstruction to get the volumes of these lesions,” he said.
Quantification makes it possible to follow the evolution of the brain under therapy and to produce a fully documented MRI report, in which physicians can compare current scans with previous ones and track white matter lesion volumes.
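The volumetric follow-up described here comes down to counting segmented voxels and scaling by voxel size. A minimal sketch in Python with NumPy, assuming a binary lesion mask has already been produced by the segmentation software (the masks and spacing below are hypothetical illustrations, not clinical data):

```python
import numpy as np

def lesion_volume_ml(mask: np.ndarray, voxel_size_mm: tuple) -> float:
    """Volume of a binary lesion mask in millilitres.

    mask          -- 3D boolean array, True inside the segmented lesion
    voxel_size_mm -- (dx, dy, dz) voxel spacing in millimetres
    """
    voxel_volume_mm3 = float(np.prod(voxel_size_mm))
    return mask.sum() * voxel_volume_mm3 / 1000.0  # 1 ml = 1000 mm^3

# Hypothetical baseline and follow-up masks for the same patient.
baseline = np.zeros((10, 10, 10), dtype=bool)
baseline[2:5, 2:5, 2:5] = True           # 27 lesion voxels
followup = np.zeros((10, 10, 10), dtype=bool)
followup[2:6, 2:5, 2:5] = True           # 36 lesion voxels

spacing = (1.0, 1.0, 1.0)                # assume 1 mm isotropic voxels
change_ml = lesion_volume_ml(followup, spacing) - lesion_volume_ml(baseline, spacing)
```

Comparing `change_ml` across serial scans is what allows lesion load to be tracked under therapy, as Parizel describes.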
“Once you have a technique to segment out lesions, you can use that to guide teaching. We can teach software to identify different types of hemorrhages and to automatically segment different types of traumatic lesions, measure their volumes, and so on,” he added.
Measuring the midline shift in a quantitative manner would improve prediction by boosting reproducibility. “If you ask five radiologists, you will get five different opinions. It could be that you made a small error. In following a patient over time, reproducibility is more important than the truth,” Parizel insisted.
The contribution of AI to workflow integration could also be a game changer by helping radiologists to prioritize tasks.
“With AI we are adding a new dimension to our work. AI will help make radiology proactive,” he concluded.
Future advances need more standardization
AI also allows for a deep analysis of patients’ internal organs without the need for surgery. “Abdominal imaging is actually one of the medical specialties in which AI has enabled more scientific advances in recent years,” said Daniele Regge, associate professor at the University of Torino School of Medicine in Italy.
Regge has been involved in the development of imaging analysis software in the field of oncology and, in particular, Computer Aided Diagnosis (CAD). He is now also working on deep learning (DL) systems, especially using Convolutional Neural Networks (CNN), to collect more information from medical images.
“CNNs are particularly useful for studying kidney volume in patients with polycystic kidney disease and for measuring volume evolution and growth over time. This information can help to decide when patients should undergo surgery,” he said.
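Once a CNN has delivered serial kidney volumes, the clinically relevant quantity is the growth rate between time points. A minimal sketch of an annualized growth calculation, assuming the volumes themselves come from upstream segmentation (the figures below are hypothetical):

```python
def annual_growth_pct(vol_t0_ml: float, vol_t1_ml: float, years: float) -> float:
    """Annualized percentage growth between two volume measurements,
    assuming compound (exponential) growth between the time points."""
    return ((vol_t1_ml / vol_t0_ml) ** (1.0 / years) - 1.0) * 100.0

# Hypothetical total kidney volume: 1000 ml growing to 1210 ml over 2 years.
growth = annual_growth_pct(1000.0, 1210.0, 2.0)
```

A sustained growth rate above a chosen threshold is the kind of signal that could inform the surgical timing decision Regge mentions.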
The next step, Regge predicts, is lesion detection, both for diagnosis and tumor staging and for assessing tumor burden longitudinally, for instance in treatment response evaluation or biopsy guidance in the prostate.
Still, a lot of research is needed before radiologists can fully rely on CNNs and AI, mostly because of the lack of uniformity in the formats in which results are obtained and processed. “This hampers the study of different pathologies, even for a single organ, as well as data sharing between medical centers that use different software in their equipment,” he said.
More standardization and clinical validation are also required, and protocols must be put in place to that end, while reducing errors in sampling procedures.
Companies are going to great lengths to develop fast and powerful algorithms that are able to detect, in a matter of seconds, the slightest variation of an organ’s volume over time, or an incipient disease. But they are working on their own software without unifying protocols or procedures, and as long as they do not share this information with the medical community, evaluating software reliability will remain tricky, Regge explained.