I Can See Clearly Now: AI Smartens Up OCT Medical Imaging
Medicine is increasingly adopting optical coherence tomography (OCT) to help with life-saving procedures that insert stents to keep open the coronary arteries through which blood supplies our hearts with oxygen. Using intravascular OCT tools that illuminate tissue with laser light and collect the reflections to build 3D images, surgeons can look beneath tissue surfaces to guide their decisions.
"Decisions are taken on the timescale of minutes," comments Gijs van Soest from Erasmus University Medical Center in Rotterdam, the Netherlands. In this context, an OCT acquisition typically produces a few hundred frames that show the geometries of atherosclerotic lesions impeding blood flow in an artery. "You basically look at how big the free area is for the blood to flow through," van Soest explains. "But if you want to detect more pathological features, you have to have a human operator scrolling through all those hundreds of frames. It's just too labor intensive." The process could take half an hour, far too long during a surgical procedure, he explains.
And even when these large data sets are used for diagnosis outside of surgical settings, manual processing is often split amongst multiple hospitals, which can introduce errors. Van Soest's Erasmus MC colleague Shengnan Liu adds that artificial intelligence (AI) techniques could help speed up the process and improve reproducibility.
Yet this is just one of many instances where pairing AI and OCT shows great promise. The most common medical OCT application is ophthalmology, for which Somayyeh Soltanian-Zadeh from Duke University is developing AI techniques.
Soltanian-Zadeh explains that the most relevant AI form is usually a convolutional neural network (CNN). CNNs are stacks of convolutional layers that process a signal, she says. The convolution process applies a filter, such as a mask in the shape that one wishes to identify, to the signal.
"Usually the filter is much smaller than your signal and you have to shift it across your signal," Soltanian-Zadeh says. An example is finding the character ‘a' in a body of text. Convolution would move a filter shaped like the character ‘a' around over the text, counting each time the filter and the text matched. Rather than being provided with filters, a neural network learns to create a set of filters to solve a task.
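Soltanian-Zadeh's character-matching analogy can be sketched in a few lines of Python. The toy function below (an illustration, not code from any of the groups mentioned) slides a small pattern across a string and counts the positions where they match, the same shift-and-compare operation a convolutional filter performs on an image:

```python
def match_count(signal, pattern):
    """Slide `pattern` across `signal` and count exact matches.

    This mirrors convolution with a shape-matched filter: the filter
    is much smaller than the signal and is shifted across it one
    position at a time, responding wherever the shapes line up.
    """
    n, m = len(signal), len(pattern)
    return sum(signal[i:i + m] == pattern for i in range(n - m + 1))

text = "a cat sat on a mat"
print(match_count(text, "a"))   # every place the 'a'-shaped filter fires
print(match_count(text, "at"))
```

A CNN differs in one key way, as the article notes: rather than being handed filters like these, it learns its own set of filters from the training data.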
Building on such techniques, deep learning can supersede traditional machine learning and is relatively fast once trained, Soltanian-Zadeh adds. "In different fields, they are getting the best results compared to any other method," she says. "We are able to achieve human-level counting of ganglion cells from Adaptive Optics OCT (AO-OCT)."
Weak supervision's strength
Soltanian-Zadeh and her colleagues in Sina Farsiu's Duke team exploit volumetric data produced by AO-OCT. In February, in session four of Ophthalmic Technologies XXX, as part of SPIE Photonics West, she presented her results on using a CNN with a weak supervision approach. An expert ophthalmologist manually localizes each cell in a volume, providing labels to train the network. The Duke team's goal is then to apply the CNN to tasks other than cell localization, which is where the weak supervision comes in. "Weak supervision is about leveraging granular input from human experts to solve a more complicated task," Soltanian-Zadeh explains. "This approach is useful because it bypasses the more time-consuming and extensive expert labelling required."
Soltanian-Zadeh uses this deep learning approach for automatic processing of AO-OCT data. "Our research focuses on the accurate quantification of ganglion cells of different shapes and sizes in the retina," she says. Adaptive optics achieves higher-resolution OCT by correcting for distortions caused by light travelling through the tissue. Applying their deep learning approach to data acquired this way, the Duke team has been able not only to quantify cell density, but also to locate each cell by "segmenting" which pixels belong to each individual cell.
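Separating a mask of "cell" pixels into individual cells is classically done with connected-component labelling. The generic sketch below illustrates the idea (it is not the Duke team's code, whose networks learn the segmentation rather than applying a hand-written rule): each contiguous group of foreground pixels gets its own integer label, yielding both a cell count and a per-cell pixel assignment.

```python
def label_components(mask):
    """Label 4-connected components in a binary mask.

    Returns (labels, count): `labels` assigns each foreground pixel an
    integer id (0 = background), so each cell's pixels can be recovered
    and the cells counted.
    """
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not labels[r][c]:
                count += 1
                stack = [(r, c)]  # flood-fill one component
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y][x] and not labels[y][x]:
                        labels[y][x] = count
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, count

# Toy segmentation mask: three separate "cells".
mask = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [1, 0, 0, 1]]
labels, n_cells = label_components(mask)
```

Here `n_cells` is 3, and `labels` records which pixels belong to which cell, the per-cell output the article describes.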
The Duke team is using this approach in collaboration with two other institutions that use AO-OCT imaging systems, specifically the US Food and Drug Administration (FDA) and Indiana University. "We are providing them with our code and they are using them for clinical and long-term studies," Soltanian-Zadeh explains. "From what I have seen, we are the first to use weak supervision for this specific type of data," she says.
The Duke University team can identify and ‘segment' individual ganglion cells in adaptive optics (AO)-OCT volumes using AI: a human retina (left), with ganglion cells identified by AI (right). Different colors denote different cells. Credit: Somayyeh Soltanian-Zadeh/Duke University
Yali Jia, from Oregon Health and Science University, notes that one area where OCT has been used extensively in ophthalmology is in diagnosing retinal diseases. "However, OCT quantitative biomarkers cannot be easily accepted due to the limitation of segmentation on low quality scans," she says. "Deep learning is the perfect tool to improve the robustness of segmentation of OCT biomarkers. On the other hand, deep learning has shown the promise of classifying retinal diseases using fundus photography." As OCT reveals more pathological information than other imaging methods, Jia believes AI-aided disease classification using OCT isn't far off.
Jia's team is also presenting its work using such approaches to diagnose "wet" and other neovascular forms of age-related macular degeneration (AMD) at Ophthalmic Technologies XXX, in session two. AMD is the leading cause of severe, irreversible vision loss in people over 50 years old in Western societies. Wet AMD is characterized by choroidal neovascularization (CNV), the growth of new blood vessels in the eye's choroid layer. Identifying CNV is therefore essential for accurately diagnosing wet AMD, Jia explains. Her team uses CNNs for automated diagnosis and quantification of CNV in projection-resolved optical coherence tomographic angiography (PR-OCTA).
Compared to conventional OCTA, PR-OCTA produces fewer artefacts and can visualize CNV both in cross-sectional and forward-facing views at the outer retinal slab. "However, CNV identification and segmentation remains difficult even with PR-OCTA due to the presence of residual artefacts," Jia notes. "Conventional image processing cannot classify CNV and non-CNV scans, because they always segment CNV, regardless of whether it actually exists in an input scan. They can be easily fooled by background noise. Deep learning will intelligently recognize the target we are looking for."
Commercializing AI in AMD
Jia's team therefore developed a fully automated CNV diagnosis and segmentation algorithm using CNNs. The algorithm incorporates two CNNs, Jia explains: one for CNV membrane identification and segmentation, the other for pixel-wise vessel segmentation. To train the CNNs, the team exploited a clinical data set including scans both with and without CNV, as well as scans of eyes with different pathologies, excluding none for poor image quality. In testing, the algorithm diagnosed all CNV cases and correctly identified 95% of non-CNV controls as CNV-free.
"By enabling fully automated categorization and segmentation, the proposed algorithm should offer benefits for CNV diagnosis, visualization and monitoring," Jia says. She emphasizes that the work is still only early-stage, and that OCT angiography itself is still a young technique. Nevertheless, Jia believes that "this approach will be explored by other groups and be commercialized very soon. I am closely working with OCT instrument makers and always share the technologies with them," she says. "Most of them have been successfully commercialized."
Yali Jia's team is developing a fully automated choroidal neovascularization (CNV) diagnosis and segmentation algorithm for age-related macular degeneration, using convolutional neural networks to analyse OCT images. Image credit: Oregon Health and Science University.
OCT imaging requires high levels of user expertise to obtain information relevant to surgeons or neurologists, says Gereon Hüttmann from the University of Lübeck, Germany. This is a barrier to exploiting OCT's full potential. Neural networks help here because "you can basically implement the knowledge and experience of an expert without having to employ one," adds Hüttmann. Yet researchers face challenges in delivering this, in particular producing the high-quality labelled data needed to teach neural networks what to look for. "Of course, you have to train all the networks with data which were annotated or interpreted already by an expert," Hüttmann says.
To do this, Hüttmann has teamed up with AI experts at the University of Lübeck's Institute of Medical Informatics (IMI). The IMI is using a three-dimensional fully convolutional network adapted from U-Net, an approach developed by Olaf Ronneberger at the University of Freiburg, Germany. "We put reconstructed OCT volumes into U-Net," explains the IMI's Timo Kepp. "We trained it to segment pigment epithelial detachment structures and retinal thickness, both biomarkers for disease progression in AMD. After this we performed a shape refinement with an auto-encoder to correct for artefacts introduced by segmentation errors or motion. This auto-encoder learns the shape of the retina and can then compensate for errors." The deep learning approach appeared in a poster at Optical Coherence Tomography and Coherence Domain Optical Methods in Biomedicine XXIV at SPIE Photonics West, while Hüttmann's broader OCT work featured in several talks at that conference.
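The shape of the Lübeck pipeline, a raw segmentation followed by a refinement stage that suppresses isolated errors, can be mimicked in a toy sketch. The refinement below is a simple 3x3 majority vote, a hand-crafted stand-in for the learned auto-encoder (which needs training data and far more code); only the segment-then-refine structure carries over.

```python
def majority_smooth(mask):
    """Clean a binary segmentation mask: each pixel takes the majority
    value of its 3x3 neighbourhood. A toy stand-in for a refinement
    stage that corrects isolated segmentation errors.
    """
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            votes = [mask[rr][cc]
                     for rr in range(max(0, r - 1), min(h, r + 2))
                     for cc in range(max(0, c - 1), min(w, c + 2))]
            out[r][c] = 1 if sum(votes) * 2 > len(votes) else 0
    return out

# A raw "prediction" with one isolated false positive at (0, 3).
raw = [[1, 1, 0, 1],
       [1, 1, 0, 0],
       [1, 1, 0, 0]]
clean = majority_smooth(raw)  # the stray pixel is voted away
```

The learned version has the same interface, mask in and cleaned mask out, but the auto-encoder has learned what a plausible retinal shape looks like, rather than applying a fixed local rule.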
The Lübeck researchers are exploiting deep learning to develop an OCT imager for home use that is smaller and more affordable than standard imagers. Patients can take an OCT scan themselves, which can then inform their AMD treatment, Hüttmann explains. These devices can be used every day, but generate large amounts of three-dimensional data that must be analyzed to track disease progression, a task well suited to AI. The Lübeck team is also expanding beyond 3D segmentation into 4D analysis, tracking changes over time.
A spin-out company named Visotec GmbH is commercializing the technology. Hüttmann and colleagues are also looking at a cloud-computing approach, so that people can upload the images from anywhere to be analyzed. Their research is "a way away from application," Hüttmann admits. "But if we show we can generate clinically relevant data and that deep learning can do its job, then the product isn't far away from implementation."
Deep learning has helped to develop an OCT imager for use at home that is smaller and more affordable than standard imagers. Credit: Gereon Hüttmann/University of Lübeck
U-Net biomedical image segmentation also features in the Erasmus MC team's intravascular OCT system. "We start with an annotated data set, which has labels applied to the image," says van Soest. "And we train the network on those images to automatically identify those labels." "We use a relatively small sampling of small-size images, which simplifies the training process, and attempt to utilize these efficiently by applying a transformation, generating images which can be used to enlarge the data set," adds Liu. "We have attained quite good results using only 200 images from 20 patients."
They presented this work in the first session at Diagnostic and Therapeutic Applications of Light in Cardiology 2020 at SPIE Photonics West in February.
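The augmentation step Liu describes, enlarging a small annotated set by transforming existing images, can be illustrated with a minimal sketch. The flips and 90-degree rotations below are common generic choices; the article does not say which transformations the Erasmus MC team actually applies.

```python
def hflip(img):
    """Mirror an image (a list of rows) left-to-right."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate an image 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*img)][::-1]

def augment(img):
    """Generate transformed copies of one labelled image.

    Applying the same transform to the image and its label mask keeps
    the annotation valid, so a small annotated set can be enlarged
    several-fold without new expert labelling.
    """
    out = []
    current = img
    for _ in range(4):               # the four 90-degree rotations...
        out.append(current)
        out.append(hflip(current))   # ...plus their mirror images
        current = rot90(current)
    return out

img = [[1, 2], [3, 4]]
print(len(augment(img)))  # 8 transformed copies per original image
```

Run on all 200 original images, this particular set of transforms would yield 1,600 training samples from the same annotation effort.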
The Erasmus MC researchers have not yet tested the neural network "in a real-time practical setting," van Soest adds, so clinical validation is important. "The next step is to really perform clinical studies where we relate the analysis of this software package to real-world outcomes, or we can retrospectively test this in patient data that we already have on file."
Machine learning technologies can facilitate image processing to distinguish healthy and unhealthy vascular tissue, says van Soest. AI recognizes healthy tissue that is consistent in appearance, he explains, and marks anything else as unhealthy. Their approach then analyses those areas using optical attenuation measurements. Unstable atherosclerotic plaques that could give rise to potentially fatal disease outcomes have higher degrees of optical attenuation than stable plaques or healthy tissue, Liu says.
The approach cannot yet fully diagnose OCT images of atherosclerotic plaques, Liu adds, because they are very heterogeneous. "We identify whether something is healthy or diseased, and that's something that the network does relatively well," she says. "And then the diseased parts, we analyze by other methods."
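The attenuation analysis rests on the roughly exponential decay of OCT signal with depth, I(z) = I0 * exp(-mu * z), where the coefficient mu is higher in unstable plaque than in stable plaque or healthy tissue. The sketch below estimates mu from a depth profile by a log-linear least-squares fit; the decay constants and decision threshold are invented for illustration, not values from the Erasmus MC work.

```python
import math

def attenuation_coefficient(depths_mm, intensities):
    """Estimate the optical attenuation coefficient (per mm) from an
    OCT depth profile, assuming single-exponential decay
    I(z) = I0 * exp(-mu * z), via least squares on log(I)."""
    n = len(depths_mm)
    ys = [math.log(i) for i in intensities]
    xbar = sum(depths_mm) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(depths_mm, ys))
             / sum((x - xbar) ** 2 for x in depths_mm))
    return -slope  # decay rate mu

# Simulated depth profiles with hypothetical coefficients (arbitrary units).
depths = [0.1 * k for k in range(1, 20)]
stable = [math.exp(-2.0 * z) for z in depths]    # low attenuation
unstable = [math.exp(-6.0 * z) for z in depths]  # high attenuation

MU_THRESHOLD = 4.0  # hypothetical decision boundary, per mm
for name, profile in [("stable", stable), ("unstable", unstable)]:
    mu = attenuation_coefficient(depths, profile)
    verdict = "unstable plaque" if mu > MU_THRESHOLD else "stable/healthy"
    print(f"{name}: mu = {mu:.1f}/mm -> {verdict}")
```

Real intravascular data is noisier than this clean simulation, which is one reason the Erasmus MC pipeline first isolates diseased regions with the neural network before applying attenuation measurements to them.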
‘Not a fan of AI'
The Erasmus MC researchers see a trend in AI for OCT image analysis towards toolboxes for different tasks "that require less of a computer science background," Liu says. She adds that the amount of labelled data available to train neural networks is growing, and that standards are getting more attention. Yet Jia warns against relying on such techniques too much. "I am not a real fan of AI," she says. "AI is not always powerful, and we may need to know our study goals before we try to use it. I just use it when I know it would be really helpful. I have accumulated many OCT angiography data, and also have been exploring this for quite a few years using non-AI tools. With extensive knowledge and background, we can think about how to let AI assist our work. Otherwise, I think it's very risky to apply AI everywhere."
Hüttmann, by contrast, stresses that OCT imagers are still very expensive, and the images complicated to interpret. "A lot of fields need high-performance, low cost OCT and we're seeing this coming in now," he says. "Cheaper approaches such as full-field OCT and integrated chip OCT are being developed. Deep learning has the potential to facilitate interpretation of OCT images, removing the need for the images to be sent to a specialist. These two factors combined with higher resolutions will open a lot of applications for OCT. At the moment OCT is like a smartphone without apps — despite the underlying power of the technology, without a user interface it is useless."
Soltanian-Zadeh similarly makes a comparison to smart phones. "In my experience deep learning/AI in medical imaging fields often lags behind machine learning in the broader community, which incorporates images taken from cell phones and cameras," she says. "I think the medical imaging field is catching up with the new trends that are being developed in machine learning."
Andy Extance is a freelance science journalist based in the UK. A version of this article was originally published in the 2020 SPIE Photonics West Show Daily.