Proceedings Volume 11317

Medical Imaging 2020: Biomedical Applications in Molecular, Structural, and Functional Imaging

Andrzej Krol, Barjor S. Gimi

Volume Details

Date Published: 20 March 2020
Contents: 13 Sessions, 84 Papers, 47 Presentations
Conference: SPIE Medical Imaging 2020
Volume Number: 11317

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 11317
  • Neurological Imaging I
  • Neurological Imaging II
  • Vascular and Pulmonary Imaging
  • Innovations in Image Processing I
  • Innovations in Image Processing II
  • Deep Convolutional Neural Networks in Molecular, Structural, and Functional Imaging I
  • Deep Convolutional Neural Networks in Molecular, Structural, and Functional Imaging II
  • Novel Imaging Methods
  • Ocular and Optical Imaging
  • Bone and Skeletal Imaging, Segmentation, Registration, and Decision-making
  • Cardiac Imaging and Nanoparticle Imaging
  • Poster Session
Front Matter: Volume 11317
Front Matter: Volume 11317
This PDF file contains the front matter associated with SPIE Proceedings Volume 11317, including the Title Page, Copyright information, Table of Contents, Author and Conference Committee lists.
Neurological Imaging I
Graph embedding using Infomax for ASD classification and brain functional difference detection
Significant progress has been made using fMRI to characterize the brain changes that occur in ASD, a complex neuro-developmental disorder. However, due to the high dimensionality and low signal-to-noise ratio of fMRI, embedding informative and robust brain regional fMRI representations for both graph-level classification and region-level functional difference detection tasks between ASD and healthy control (HC) groups is difficult. Here, we model the whole-brain fMRI as a graph, which preserves geometrical and temporal information, and use a Graph Neural Network (GNN) to learn from the graph-structured fMRI data. We investigate the potential of including a mutual information (MI) loss (Infomax), an unsupervised term that encourages large MI between each nodal representation and its corresponding graph-level summarized representation, to learn a better graph embedding. Specifically, this work developed a pipeline including a GNN encoder, a classifier, and a discriminator, which forces the encoded nodal representations to both benefit classification and reveal the common nodal patterns in a graph. We simultaneously optimize the graph-level classification loss and Infomax. We demonstrated that Infomax graph embedding, acting as a regularization term, improves classification performance. Furthermore, we found separable nodal representations of the ASD and HC groups in the prefrontal cortex, cingulate cortex, visual regions, and other social, emotional, and execution-related brain regions. In contrast with a GNN trained with classification loss only, the proposed pipeline can facilitate training more robust ASD classification models. Moreover, the separable nodal representations can detect the functional differences between the two groups and contribute to revealing new ASD biomarkers.
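The Infomax term described above can be sketched numerically: a bilinear discriminator scores each nodal embedding against the mean-pooled graph summary, and the loss rewards high scores for true (node, summary) pairs and low scores for corrupted ones. This is an illustrative NumPy sketch, not the authors' implementation; the embeddings, the identity discriminator matrix `W`, and the corruption scheme are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def infomax_loss(H, H_corrupt, W):
    """Infomax-style objective: a bilinear discriminator scores each nodal
    embedding against the graph-level summary (mean-pooled nodes); true
    (node, summary) pairs should score high, corrupted pairs low."""
    s = H.mean(axis=0)                       # graph-level summary vector
    pos = sigmoid(H @ W @ s)                 # discriminator scores, true pairs
    neg = sigmoid(H_corrupt @ W @ s)         # scores for corrupted pairs
    eps = 1e-12
    return -(np.log(pos + eps).mean() + np.log(1.0 - neg + eps).mean())

d = 8
H = rng.normal(size=(20, d)) + 1.0           # hypothetical nodal embeddings
H_corrupt = rng.normal(size=(20, d)) - 1.0   # embeddings of a "corrupted" graph
W = np.eye(d)                                # hypothetical discriminator weights
loss = infomax_loss(H, H_corrupt, W)
```

In the full pipeline this term would be summed with the graph-level classification loss and minimized jointly.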
Classification of attention-deficit/hyperactivity disorder from resting-state functional MRI with mutual connectivity analysis
Previous studies have shown that functional brain connectivity in Attention-Deficit/Hyperactivity Disorder (ADHD) shows signs of atypical or delayed development. Here, we investigate the use of a nonlinear brain connectivity estimator, namely Mutual Connectivity Analysis with Local Models (MCA-LM), which estimates the nonlinear interdependence of time-series pairs in terms of local cross-predictability. As a reference method, we compare MCA-LM performance with cross-correlation, which has been widely used in the functional MRI (fMRI) literature. Pairwise measures like MCA-LM and cross-correlation provide a high-dimensional representation of brain connectivity profiles and are used as features for disease identification from fMRI data. Therefore, a feature selection step using Kendall’s tau rank correlation coefficient is implemented for dimensionality reduction. Finally, a Support Vector Machine (SVM) is used for classifying between subjects with ADHD and healthy controls in a Multi-Voxel Pattern Analysis (MVPA) approach on a subset of 176 subjects from the ADHD-200 data repository. Using 100 different training/test separations and evaluating a wide range of numbers of selected features, we obtain a mean area under the receiver operating characteristic curve (AUC) range of [0.65, 0.70] and a mean accuracy range of [0.60, 0.67] for MCA-LM, outperforming cross-correlation, which yields a mean AUC range of [0.60, 0.64] and a mean accuracy range of [0.57, 0.59]. Our results suggest that MCA-LM, as a nonlinear measure, is better suited to extracting relevant information from fMRI time-series data than the current clinical standard of cross-correlation, and may thus provide valuable contributions to the development of novel imaging biomarkers for ADHD.
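The Kendall's-tau feature-screening step can be illustrated in a few lines. This is a generic sketch, not the paper's code: the brute-force tau-a implementation, the toy labels, and the number of kept features are assumptions for illustration.

```python
import numpy as np

def kendall_tau(x, y):
    """Kendall's tau-a rank correlation: (concordant - discordant) pairs
    over all pairs, by brute force (fine for screening at modest n)."""
    n = len(x)
    num = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            num += np.sign(x[i] - x[j]) * np.sign(y[i] - y[j])
    return num / (n * (n - 1) / 2.0)

def select_features(X, y, k):
    """Rank features by |tau| against the class label and keep the top k."""
    taus = np.array([abs(kendall_tau(X[:, j], y)) for j in range(X.shape[1])])
    return np.argsort(taus)[::-1][:k]

rng = np.random.default_rng(1)
n = 40
y = np.repeat([0, 1], n // 2)        # toy patient/control labels
X = rng.normal(size=(n, 5))          # toy connectivity features
X[:, 2] += 2.0 * y                   # feature 2 carries the group signal
top = select_features(X, y, k=2)
```

The selected columns would then feed the SVM in the MVPA pipeline.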
Minimizing cotton retention in neurosurgical procedures: which imaging modality can help?
Raphael Bechtold, Niki Tselepidakis, Benjamin Garlow, et al.
Cotton balls are used in neurosurgical procedures to assist with hemostasis and improve vision within the operative field. Although the surgeon can reshape pieces of cotton for multiple intraoperative uses, this customizability and scale also place them at perpetual risk of being lost, as blood-soaked cotton balls are visually similar to raw brain tissue. Retained surgical cotton can induce potentially life-threatening immunologic responses, impair postoperative imaging, lead to a textiloma or misdiagnosis, and/or require reoperation. This study investigated three imaging modalities (optical, acoustic, and radiographic) to find the most effective method of identifying foreign bodies during neurosurgery. First, we examined the use of dyes to increase contrast between cotton and the surrounding parenchyma (optical approach). Second, we explored the ability to distinguish surgical cotton on or below the tissue surface from brain parenchyma using ultrasound imaging (acoustic approach). Lastly, we analyzed the ability of radiography to differentiate between brain parenchyma and cotton. Our preliminary testing demonstrated that dark-colored cotton is significantly more identifiable than white cotton at the surface level. Additional testing revealed that cotton has noticeably different acoustic characteristics (e.g., speed of sound, absorption) from neural tissue, allowing for enhanced contrast in applied ultrasound imaging. Radiography, however, did not provide sufficient contrast, warranting further investigation. These solutions have the potential to significantly reduce the possibility of intraoperative cotton retention both on and below the surface of the brain, while still providing surgeons with traditional cotton material properties and without affecting the surgical workflow.
Inferring brain functional hubs by eigencentrality mapping of phase fMRI connectivity
Zikuan Chen, Qiao Shi, Ebenezer Daniel, et al.
Considering brain functional connectivity (FC) as a graph network, we can identify the brain functional hub nodes that have the densest and heaviest connections in the network. For a real-valued FC matrix (unsigned connections in the value range [0, 1]), we can identify the hub nodes by a new method of eigencentrality mapping, which accounts not only for a node's connections to other nodes but also for those nodes' centrality values, through the eigendecomposition of the FC matrix. In addition, there are two kinds of fMRI data, magnitude and phase, that can be used for brain FC and hub analysis. Although both magnitude and phase fMRI data are generated from the same magnetic source through different transformations, they differ in signal measurements, consequently leading to different inferences. We herein report on brain functional hub analysis by constructing the FC matrix from phase fMRI data and identifying the hub nodes by eigencentrality mapping. In our experiment, we collected a cohort of 160 complex-valued fMRI datasets (consisting of magnitude and phase in pairs) and performed independent component analysis (ICA), FC matrix calculation, and FC matrix eigendecomposition, thereby obtaining the node eigencentrality values from the largest eigenvalue-associated eigenvector. Our results showed that phase fMRI data analysis could determine the resting-state brain functional hubs primarily in the central subcortex and the posterior brain regions (parieto-occipital lobes and cerebellum), which differed from the magnitude-inferred hubs in the superior brain regions (frontal and parietal lobes).
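Eigencentrality mapping as described reduces to finding the leading eigenvector of the nonnegative FC matrix, e.g. by power iteration. A minimal NumPy sketch (the toy FC matrix is hypothetical):

```python
import numpy as np

def eigencentrality(fc, n_iter=200, tol=1e-10):
    """Eigenvector centrality of a nonnegative FC matrix via power iteration.

    Each node's centrality is proportional to the sum of its connection
    weights times the centralities of its neighbours, i.e. the leading
    eigenvector of the FC matrix (nonnegative by Perron-Frobenius for a
    nonnegative matrix)."""
    n = fc.shape[0]
    v = np.ones(n) / np.sqrt(n)
    for _ in range(n_iter):
        w = fc @ v
        norm = np.linalg.norm(w)
        if norm == 0:
            return v
        w /= norm
        if np.linalg.norm(w - v) < tol:
            return w
        v = w
    return v

# Toy FC matrix: node 0 is densely and heavily connected -> the hub.
fc = np.array([[1.0, 0.9, 0.8, 0.7],
               [0.9, 1.0, 0.2, 0.1],
               [0.8, 0.2, 1.0, 0.1],
               [0.7, 0.1, 0.1, 1.0]])
cent = eigencentrality(fc)
hub = int(np.argmax(cent))
```

The node with the largest entry in the leading eigenvector is declared a hub.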
Reduced sine hyperbolic polynomial model for brain neuro-developmental analysis
Peyman H. Kassani, Vince D. Calhoun, Yu-Ping Wang
Several studies on brain development have only considered functional connectivity (FC) of different brain regions. In the following study, we propose to add effective connectivity (EC) through Granger causality (GC) for the task of studying brain maturation. We do this for two different groups of subjects, i.e., children and young adults. We aim to show that the inclusion of causal interaction may further discriminate brain connections between the two age groups. We extract EC features by a new kernel-based GC (KGC) method based on a reduced sine hyperbolic polynomial (RSP) neural network, which helps to learn the nonlinearity of complex brain networks. Our new EC-based features outperformed FC-based features when evaluated on the Philadelphia Neurodevelopmental Cohort (PNC) study, with better separation between the two age groups. We also showed that the fusion of the two sets of features (FC + EC) improved brain age prediction accuracy by more than 4%.
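The paper's KGC method is kernel-based, but the underlying Granger-causality idea can be illustrated with its plain linear form: x Granger-causes y if adding the past of x reduces the error of predicting y from its own past. A NumPy sketch under that simplification (lag-1, toy signals; not the authors' RSP/kernel formulation):

```python
import numpy as np

def gc_index(x, y, lag=1):
    """Linear Granger causality index x -> y: log ratio of residual
    variances when predicting y from its own past alone versus from
    its own past plus the past of x (larger = x helps predict y)."""
    ypast = y[lag - 1:-1]
    xpast = x[lag - 1:-1]
    ynow = y[lag:]
    A1 = np.column_stack([np.ones_like(ypast), ypast])          # y-past only
    A2 = np.column_stack([np.ones_like(ypast), ypast, xpast])   # plus x-past
    r1 = ynow - A1 @ np.linalg.lstsq(A1, ynow, rcond=None)[0]
    r2 = ynow - A2 @ np.linalg.lstsq(A2, ynow, rcond=None)[0]
    return np.log(r1.var() / r2.var())

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):          # y is driven by the past of x
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()
g_xy = gc_index(x, y)          # should be large
g_yx = gc_index(y, x)          # should be near zero
```

The resulting asymmetric GC values form the EC features that complement symmetric FC.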
Neurological Imaging II
Identification of dementia subtypes based on a diffusion MRI multi-model approach
Rajikha Raja, Arvind Caprihan, Gary A. Rosenberg, et al.
Dementia refers to symptoms associated with cognitive decline, which is widespread in the aging population. Among the various subtypes of dementia, Alzheimer’s disease (AD) and vascular cognitive impairment (VCI) are the two most prevalent. The main aim of this study is to identify biomarkers that could accurately distinguish between the two dementia subtypes, AD and VCI, in order to aid the physician in planning disease-specific treatments. Diffusion-weighted MRI (DW-MRI) studies have been widely reported in neuroimaging research as providing efficient biomarkers for identifying the pathologies associated with dementia. Generally, these studies utilize the metrics estimated from a single DW-MRI model. For the first time, we attempted to use diffusion-derived metrics from more than a single model through a fusion technique. In this study, the metrics from two well-known DW-MRI models, diffusion tensor imaging (DTI) and diffusion kurtosis imaging (DKI), are fused using a multiset canonical correlation analysis combined with joint independent component analysis (mCCA+jICA) fusion framework to investigate the potential differences between the AD and VCI groups. The participants include 35 healthy controls, 24 AD subjects, and 23 VCI subjects. DW-MRI data were acquired with a maximum b-value of at least 2000 s/mm², which is suitable for DKI fitting. DTI-derived metrics such as mean diffusivity (MD), fractional anisotropy (FA), axial diffusivity (AxD), and radial diffusivity (RD), and DKI metrics such as mean kurtosis (MK), axial kurtosis (AK), and radial kurtosis (RK), are the diffusion features fused to obtain 8 independent components for each feature along with the corresponding mixing coefficients. The performance of the proposed multi-model fusion framework is evaluated by comparing the group-level testing carried out on features from the individual diffusion models with the fused features from the proposed method.
Results showed that the fusion methodology outperformed the conventional unimodal approach in terms of distinguishing between subject groups. Diffusion features from individual models successfully distinguished between HC and disease groups (HC vs. AD and HC vs. VCI) with a minimum p-value of 0.00123 but failed to differentiate AD and VCI. On the other hand, the group differences between the mixing coefficients obtained from fusion showed differences between all pairs of subject groups (HC vs. AD, HC vs. VCI, and AD vs. VCI). The minimum p-value obtained between AD and VCI was 0.000897. The independent spatial component corresponding to the mixing coefficient with the minimum p-value was overlaid on the MNI white matter (WM) tract atlas to identify the prominent WM tracts showing a significant difference between AD and VCI. The WM tracts thus identified were the superior longitudinal fasciculus, anterior thalamic radiation, optic radiation, cingulum, and arcuate fasciculus. ROC analysis showed an increased area under the curve for the fused features (average AUC = 0.913) compared to that of the unimodal features (average AUC = 0.77), which demonstrates the increased sensitivity of the proposed method.
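The AUC figures quoted above can be computed without explicitly sweeping an ROC threshold, via the Mann-Whitney statistic. A generic sketch (the toy score lists are hypothetical, not study data):

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a randomly chosen positive scores higher than a
    randomly chosen negative (ties count one half)."""
    s_pos = np.asarray(scores_pos, dtype=float)[:, None]
    s_neg = np.asarray(scores_neg, dtype=float)[None, :]
    wins = (s_pos > s_neg).sum() + 0.5 * (s_pos == s_neg).sum()
    return wins / (s_pos.size * s_neg.size)

# Hypothetical per-subject discriminant scores (e.g. mixing coefficients).
ad = [0.9, 0.8, 0.75, 0.6]
vci = [0.5, 0.55, 0.3, 0.65]
a = auc(ad, vci)
```

The same routine applied to fused versus unimodal features would reproduce the reported AUC comparison.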
A graph deep learning model for the classification of groups with different IQ using resting state fMRI
Functional connectivity (FC) analysis, which measures the connection between different brain regions, has been widely used to study brain function and development. However, FC-based analysis breaks the local structure in MRI images, resulting in a challenge for applying advanced deep learning models, e.g., convolutional neural networks (CNNs). To fit the data in a non-Euclidean domain, the graph convolutional neural network (GCN) was proposed, which can work on graphs rather than raw images, making it a suitable model for brain FC study. The small sample size is another challenge. Compared with natural images, medical images are usually limited in sample size. Moreover, labeling medical images requires laborious annotation and is time-consuming. These limitations result in low accuracy and overfitting when training a conventional deep learning model on medical images. To address this problem, we employed a semi-supervised GCN with a Laplacian regularization term. By exploiting the between-sample information, a semi-supervised GCN can achieve better performance on data with limited sample size. We applied the semi-supervised GCN model to a brain imaging cohort to classify groups with different Wide Range Achievement Test (WRAT) scores. Experimental results showed that the semi-supervised GCN can improve classification accuracy, demonstrating its superior power on small datasets.
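The Laplacian regularization term mentioned above penalizes label scores that vary across strongly connected samples, which is how unlabeled samples constrain the classifier. A minimal sketch of the penalty tr(FᵀLF) with L = D − A (the toy sample-similarity graph is hypothetical):

```python
import numpy as np

def laplacian_reg(A, F):
    """Graph Laplacian smoothness penalty tr(F^T L F) with L = D - A:
    equals 0.5 * sum_ij A_ij ||f_i - f_j||^2, so it is small when
    connected samples receive similar label scores."""
    L = np.diag(A.sum(axis=1)) - A
    return float(np.trace(F.T @ L @ F))

# Toy sample-similarity graph: two pairs of mutually similar subjects.
A = np.array([[0., 1., 0., 0.],
              [1., 0., 0., 0.],
              [0., 0., 0., 1.],
              [0., 0., 1., 0.]])
F_smooth = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])  # agrees with graph
F_rough  = np.array([[1., 0.], [0., 1.], [1., 0.], [0., 1.]])  # fights the graph
```

In training, this penalty would be added (with a weight) to the supervised cross-entropy on the labeled subset.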
Intraoperative ultrasound to monitor spinal cord blood flow after spinal cord injury
Amir Manbachi, Sandeep Kambhampati, Ana Ainechi, et al.
Spinal cord injury (SCI) affects approximately 2.5 million people worldwide. The primary phase of SCI is initiated by mechanical trauma to the spinal cord, while the secondary phase involves the ensuing tissue swelling and ischemia that worsen tissue damage and functional outcome. Optimizing blood flow to the spinal cord after SCI can mitigate injury progression and improve outcome. Accurate, sensitive, real-time monitoring is critical to assessing the spinal cord perfusion status and optimizing management, particularly in those with injuries severe enough to require surgery. However, the complex anatomy of the spinal cord vasculature and surrounding structures presents significant challenges to such a monitoring strategy. In this study, Doppler ultrasound was hypothesized to be a potential solution for detecting and monitoring spinal cord tissue perfusion in SCI patients who required spinal decompression and/or stabilization surgeries. This approach could provide real-time visual blood flow information and pulsatility of the spinal cord as biomarkers of tissue perfusion. Importantly, Doppler ultrasound could be readily integrated into the surgical workflow, because the spinal cord was exposed during surgery, thereby allowing easy access for Doppler deployment while keeping the dura intact. Doppler ultrasound successfully measured blood flow in single and bifurcated microfluidic channels at physiologically relevant flow rates and dimensions in both in-vitro and in-vivo porcine SCI models. Furthermore, perfusion was quantified from the obtained images. Our results provide a promising and viable solution to intraoperatively assess and monitor blood flow at the SCI site to optimize tissue perfusion and improve functional recovery in SCI patients.
Deep learning of volumetric 3D CNN for fMRI in Alzheimer’s disease classification
Functional magnetic resonance imaging has the potential to provide insight into early detectors or biomarkers for various neurological disorders. With the advent of recent developments in deep learning, it may be possible to extract detailed information from neuroimaging data that is difficult to acquire using traditional techniques. Here we propose one such deep learning approach that makes use of a 3D Convolutional Neural Network to predict the onset of Alzheimer’s disease, even in a single subject, based on resting-state fMRI data. This approach extracts both spatial and temporal features from the 4D volume and eliminates the traditional complicated steps of feature extraction. In our experiments, a relatively simple deep learning architecture yields high performance in Alzheimer’s disease classification. This illustrates the possibility of using volumetric feature extractors and classifiers as a tool to obtain biomarkers for neurological disorders, and marks another step towards the clinical use of fMRI.
How to meet the 10 ps Coincidence Timing Resolution PET challenge
Eric S. Harmon, Michael O. Thompson, C. Ross Schmidtlein, et al.
A new challenge for time-of-flight (TOF) Positron Emission Tomography (PET) is achieving 10 ps Coincidence Timing Resolution (CTR). Such a short CTR would enable a 20-fold higher TOF-related effective sensitivity gain (TOF gain) and direct reconstruction in PET imaging. Ultrashort CTR greatly benefits brain PET imaging because, owing to the relatively small size of the human head, the TOF gain only begins to be significant for CTR < 150 ps. The Brain PET (BET) consortium evaluated the potential for achieving 10 ps CTR using an updated Monte Carlo modeling program (MCPET3). This new version includes the ability to set a constant refractive index at each scintillator segment face to model the effects of optical index-coupling glues. In addition, the new version provides a simple method for evaluating the effect of Cherenkov photons on the CTR. The latest modeling results are compared to recent world-record experimental CTR with good agreement and only a few adjustable parameters. The results indicate that 50 ps CTR is likely to be attained in the near future, but achieving 10 ps CTR will require a number of substantial improvements in PET detector block technology. Based on our simulations, we estimate that in order to achieve 10 ps CTR, a 20-fold increase in scintillator intensity (photons/ps) is required, along with additional improvements in single-photon timing resolution.
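As a rough orientation for the numbers above: TOF effective sensitivity scales approximately as 1/CTR, and the TOF localization length along a line of response is c·CTR/2. The sketch below uses only these first-order approximations; the 20-fold figure comes out if 10 ps is compared against a ~200 ps reference CTR, which is an assumption here, and exact gains depend on object size and reconstruction.

```python
def tof_gain_ratio(ctr_new_ps: float, ctr_ref_ps: float) -> float:
    """Relative TOF sensitivity gain, assuming gain ~ 1/CTR."""
    return ctr_ref_ps / ctr_new_ps

def tof_localization_mm(ctr_ps: float) -> float:
    """TOF position uncertainty dx = c * CTR / 2 along the line of response."""
    c_mm_per_ps = 0.29979  # speed of light in mm/ps
    return c_mm_per_ps * ctr_ps / 2.0

gain = tof_gain_ratio(10.0, 200.0)   # 20-fold vs. an assumed 200 ps reference
dx = tof_localization_mm(10.0)       # ~1.5 mm localization at 10 ps CTR
```

A ~1.5 mm localization length is what makes near-direct reconstruction conceivable at 10 ps.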
Vascular and Pulmonary Imaging
Co-registration of infarct core location with angiography during mechanical thrombectomy procedures for treatment progression monitoring
Purpose: The standard workflow for patients affected by acute ischemic stroke (AIS) due to large vessel occlusions (LVO) includes diagnosis using CT perfusion (CTP) followed by image-guided mechanical thrombectomy (MT). The CTP is used to establish the ischemic tissue location and size; however, during the MT procedure this information is not currently available in the angiographic suites. We developed a method to co-register the infarct location with the angiograms in order to monitor the reperfusion of the tissue affected by the ischemia. Materials and Methods: We used the complex tortuosity of the cerebral vasculature as its own fiducial system to co-register the CTP data with the angiographic sequences acquired during MT. A carotid segment was used to create a complete set of projections for different 3D transformations, which included rotation and translation. The 3D transformation which minimized the difference between the projected data and the angiogram section containing the carotid was selected and applied to co-register the infarct core from CTP onto the angiographic sequences. Angiographic parametric imaging was performed, and average mean transit time (MTT), time to peak (TTP), cerebral blood volume (CBV), and cerebral blood flow (CBF) measurements were monitored in the infarct core pre- and post-thrombectomy. We then tested co-registration accuracy using phantoms as ground truth. Results: Using the proposed method, we were able to monitor flow changes in the infarct core during MT. Changes in parameters between 50% and 100% were observed following the MT. Co-registration performance analysis yielded an average Dice coefficient of 0.69 for the entire image and an average Dice coefficient of 0.82 for the Circle of Willis. Conclusions: CTP data can be used to improve guidance during MT for patients with AIS using infarct core co-registration with angiographic sequences.
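The angiographic parametric imaging quantities mentioned (TTP, MTT) are derived from per-pixel contrast time-density curves. A simplified NumPy sketch on a synthetic gamma-variate-like curve (real parametric imaging also estimates CBV/CBF and may involve deconvolution; the moment-based MTT here is a simplification):

```python
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal integration."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def tdc_params(t, c):
    """Time-to-peak (time of the curve maximum) and a moment-based mean
    transit time (first moment of the curve over its area)."""
    ttp = float(t[np.argmax(c)])
    mtt = trapezoid(t * c, t) / trapezoid(c, t)
    return ttp, mtt

t = np.linspace(0.0, 10.0, 201)       # seconds
c = t ** 2 * np.exp(-t)               # synthetic time-density curve, peak at t = 2
ttp, mtt = tdc_params(t, c)
```

Comparing such per-region parameters pre- and post-thrombectomy is what the monitoring step above amounts to.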
Exploration of pathology-specific flow patterns utilizing high speed angiography at 1000 fps
While angiography may be considered the gold standard for evaluating diseases of the human vasculature, vascular flow details are unavailable due to the low temporal resolution of flat panel detectors (FPDs), which operate at a maximum of 15-30 fps. Higher frame rates are necessary to extract any meaningful flow detail, which may act as additional information that can be used to characterize flow-dependent disease states. These higher rates have become available with recent advances in photon-counting detector (PCD) technology. The XCounter Actaeon was used to perform high frame rate imaging at 1000 fps. The Actaeon also provides superior spatial resolution due to its 100 μm pixel size and electronic charge-sharing correction, making it a good candidate for small-ROI imaging. With this detector, “High Speed Angiography” (HSA) was performed on a variety of 3D-printed patient-specific vasculature and interventional devices, using a simulated flow loop and iodinated contrast media. The images, which illustrate pathology-dependent flow detail, were recorded in a sequence of 1 ms frames. In addition, the energy discrimination capabilities of the Actaeon were used such that, with a lower energy threshold, instrumentation noise was virtually negligible. The per-frame noise quality and overall patient dose were acceptable as compared to standard angiography dose rates using FPDs. The previously unseen flow detail may give new insight into the diagnosis, progression, and treatment of neurovascular and cardiovascular pathologies.
A preliminary study of visualizing texture components of stage IA lung adenocarcinoma in three-dimensional thoracic CT images with structure-texture image decomposition
Y. Kawata, N. Niki, M. Kusumoto, et al.
Lung adenocarcinomas are the most prevalent subtype of non-small cell lung cancers and are the most common true-positive finding in a lung cancer screening population. The ability to preoperatively identify patients with a high rate of relapse is crucial to guide treatment decisions and to develop risk-adapted treatment strategies. Considerable research effort has gone into enabling the stratification of adenocarcinoma aggressiveness based on preoperative CT image analyses for optimal therapeutic management, to maximize patient survival and preserve lung function. It is currently a major focus to quantitatively evaluate adenocarcinoma aggressiveness according to computer-extracted imaging features (radiomics) in three-dimensional (3D) thoracic CT images. Texture features are known to measure tumor heterogeneity and have been identified as features with a potential correlation to outcomes in lung cancer. Nevertheless, the spatial configuration of texture caused by tumor heterogeneity remains elusive. In this study, we present a visualization method to reveal the spatial configuration of the texture of pulmonary nodules in 3D thoracic CT images through structure-texture image decomposition. Applying the method to an example of early-stage lung adenocarcinomas graded with texture features based on popular algorithms such as the gray-level co-occurrence matrix (GLCM), we show that the preliminary results reveal the presence of intensity structure caused by tumor heterogeneity.
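The GLCM texture features referenced above are built from a co-occurrence table of gray-level pairs at a fixed pixel offset. A small NumPy sketch of the matrix and its contrast feature (toy patches; real radiomics pipelines use many offsets, gray levels, and features):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy),
    symmetrised and normalised to a joint probability table."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    g = g + g.T                          # count both pair orderings
    return g / g.sum()

def glcm_contrast(p):
    """GLCM contrast feature: sum of p(i, j) * (i - j)^2."""
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2).sum())

flat = np.zeros((8, 8), dtype=int)            # homogeneous patch: zero contrast
noisy = np.indices((8, 8)).sum(axis=0) % 2    # checkerboard: maximal contrast
```

High contrast (and related GLCM statistics) is the kind of heterogeneity signal whose spatial configuration the decomposition method aims to visualize.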
3D microstructure analysis of human pulmonary emphysema using synchrotron radiation CT
Kurumi Saito, Shota Fuketa, Ryouhei Shimatani, et al.
Progressive lung disease is a serious, life-threatening illness. Quantitative three-dimensional (3D) analysis of the microstructure of the human lung provides detailed information for understanding the onset and progression of lung disease. We previously reported how to analyze the normal structures of the pulmonary lobule; this study aims to elucidate the 3D microstructure of emphysema. Two emphysema specimens were examined using synchrotron radiation micro-CT (SRμCT), and the 3D microstructures of the bronchi and vasculature were compared between emphysematous and normal lungs. Eleven lung specimens were inflated and fixed using the Heitzman fixation method. The measurement experiments were performed at 20 or 25 keV X-ray energy on the SPring-8 beamline BL20B2. The reconstructed 3D image is about 13,370 x 13,370 x 4,900 voxels with an isotropic voxel size of 3 μm. Vasculature and bronchi were extracted using a hierarchical algorithm with curvature analysis, and the capillary beds and alveoli clusters of normal and emphysematous lungs were analyzed. In the emphysema cases, large alveoli clusters and thinner capillary beds were observed.
Innovations in Image Processing I
Quantitative assessment of weight-bearing fracture biomechanics using extremity cone-beam CT
Purpose: We investigate an application of multisource extremity Cone-Beam CT (CBCT) with the capability of weight-bearing tomographic imaging to obtain quantitative measurements of load-induced deformation of metal internal fixation hardware (e.g., a tibial plate). Such measurements are desirable to improve the detection of delayed fusion or non-union of fractures, potentially facilitating earlier return to weight-bearing activities. Methods: To measure the deformation, we perform a deformable 3D-2D registration of a prior model of the implant to its CBCT projections under load-bearing. This Known-Component Registration (KC-Reg) framework avoids potential errors that emerge when the deformation is estimated directly from 3D reconstructions with metal artifacts. The 3D-2D registration involves a free-form deformable (FFD) point cloud model of the implant and a 3D cubic B-spline representation of the deformation. Gradient correlation is used as the optimization metric for the registration. The proposed approach was tested in experimental studies on the extremity CBCT system. A custom jig was designed to apply controlled axial loads to a fracture model, emulating weight-bearing imaging scenarios. Performance evaluation involved a Sawbones tibia phantom with an ~4 mm fracture gap. The model was fixed with a locking plate and imaged under five loading conditions. To investigate performance in the presence of confounding background gradients, additional experiments were performed with a pre-deformed femoral plate placed in a water bath with Ca bone mineral density inserts. Errors were measured using eight reference BBs for the tibial plate, and surface point distances for the femoral plate, where a prior model of the deformed implant was available for comparison.
Results: Both in the loaded tibial plate case and for the femoral plate with confounding background gradients, the proposed KC-Reg framework estimated implant deformations with errors of <0.2 mm for the majority of the investigated deformation magnitudes (error range 0.14 - 0.25 mm). The accuracy was comparable between 3D-2D registrations performed from 12 x-ray views and registrations obtained from as few as 3 views. This was likely enabled by the unique three-source x-ray unit on the extremity CBCT scanner, which implements two off-central-plane focal spots that provided oblique views of the field-of-view to aid implant pose estimation. Conclusion: Accurate measurements of fracture hardware deformations under physiological weight-bearing are feasible using an extremity CBCT scanner and FFD 3D-2D registration. The resulting deformed implant models can be incorporated into tomographic reconstructions to reduce metal artifacts and improve quantification of the mineral content of fracture callus in CBCT volumes.
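The gradient correlation metric used to drive the 3D-2D registration can be sketched as the mean normalized cross-correlation of image gradients, which emphasizes edges such as implant silhouettes over raw intensities. An illustrative NumPy version (the random test image stands in for a projected implant model):

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation of two same-shaped arrays."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def gradient_correlation(fixed, moving):
    """Gradient correlation similarity: mean NCC of the two image
    gradient components, so matching edges dominate the score."""
    g0_f, g1_f = np.gradient(fixed)
    g0_m, g1_m = np.gradient(moving)
    return 0.5 * (ncc(g0_f, g0_m) + ncc(g1_f, g1_m))

rng = np.random.default_rng(3)
drr = rng.normal(size=(32, 32))      # stand-in for a projected implant model
same = gradient_correlation(drr, drr)
shifted = gradient_correlation(drr, np.roll(drr, 5, axis=1))
```

In KC-Reg this score would be maximized over the implant's pose and B-spline deformation parameters.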
Segmentation of 4D images via space-time neural networks
Changjian Sun, Jayaram K. Udupa, Yubing Tong, et al.
Medical imaging techniques currently produce 4D images that portray the dynamic behaviors and phenomena associated with internal structures. The segmentation of 4D images poses challenges different from those arising in segmenting 3D static images, due to the different patterns of variation of object shape and appearance in the space and time dimensions. In this paper, different network models are designed to learn the pattern of slice-to-slice change in the space and time dimensions independently. The two models then allow a gamut of strategies for actually segmenting the 4D image, such as following only the space or only the time dimension, or following first the space dimension for one time instance and then all time instances, or vice versa. This paper investigates these strategies in the context of the obstructive sleep apnea (OSA) application and presents a unified deep learning framework to segment 4D images. Because of the sparse tubular nature of the upper airway and the surrounding low-contrast structures, the inadequate contrast resolution obtainable in magnetic resonance (MR) images leaves many challenges for effective segmentation of the dynamic airway in 4D MR images. Given that these upper airway structures are sparse, a Dice coefficient (DC) of ~0.88 for their segmentation based on our preferred strategy is comparable to a DC of ~0.95 for large non-sparse objects like the liver and lungs, constituting excellent accuracy.
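The Dice coefficient (DC) quoted above is 2|A∩B|/(|A|+|B|) for binary masks; the toy example below shows why thin, sparse structures are punished harder than large blobs: a one-voxel shift of a two-voxel-thick "airway" already halves the score.

```python
import numpy as np

def dice(a, b):
    """Dice coefficient 2|A n B| / (|A| + |B|) for binary masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

gt = np.zeros((10, 10), dtype=bool)
gt[4:6, :] = True                    # thin airway-like structure (2 voxels thick)
pred = np.roll(gt, 1, axis=0)        # one-voxel misalignment of the prediction
d = dice(gt, pred)
```

The same one-voxel shift on a large solid object would barely move the score, which is why ~0.88 on a sparse airway is a strong result.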
Localization and segmentation of optimal slices for chest fat quantification in CT via deep learning
Accurate measurement of subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) in the thorax is important for understanding the impact of body composition on various clinical disorders. The aim of this paper is to explore a practical system for the automatic localization of the axial slices through the thorax at the T7 and T8 vertebral levels in computed tomography (CT), and the automatic segmentation of VAT in the T7 slice and SAT in the T8 slice via deep learning (DL). The methodology consists of two models: a localization model based on AlexNet and a segmentation model based on UNet. For the first, two slices (T7 and T8) at the middle of the seventh and eighth thoracic vertebrae, respectively, are automatically detected from the full or partial body scan of each patient. For the second, all the CT images and the associated adipose tissue ground-truth segmentations are used for training, and the T7 and T8 slices are tested by the two-label UNet. Datasets from four universities (Penn, Duke, Columbia, and Iowa) are used for training and validation of the models. In the experiments, relevant statistical parameters including Mean Distance (MD), Standard Deviation (SD), True Positive Rate (TPR), and True Negative Rate (TNR) indicate that the proposed algorithm has high reliability and may be useful for fully automated body composition analysis with high accuracy.
Abdominal muscle segmentation from CT using a convolutional neural network
CT is widely used for diagnosis and treatment of a variety of diseases, including characterization of muscle loss. In many cases, changes in muscle mass, particularly abdominal muscle, indicate how well a patient is responding to treatment. Therefore, physicians use CT to monitor changes in muscle mass throughout the patient's course of treatment. In order to measure the muscle, radiologists must segment and review each CT slice manually, which is a time-consuming task. In this work, we present a fully convolutional neural network (CNN) for the segmentation of abdominal muscle on CT. We achieved a mean Dice similarity coefficient of 0.92, a mean precision of 0.93, and a mean recall of 0.91 in an independent test set. The CNN-based method provides an automatic tool for the segmentation of abdominal muscle. As a result, obtaining information about changes in abdominal muscle with the CNN takes a fraction of the time associated with manual segmentation and can thus provide a useful tool in clinical applications.
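The precision and recall reported above are the usual voxel-wise ratios over true/false positives and negatives. A small sketch of how such scores are typically computed from flattened binary masks (the example arrays are illustrative):

```python
import numpy as np

def precision_recall(pred, truth):
    """Voxel-wise precision and recall for a binary segmentation."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    tp = np.logical_and(pred, truth).sum()   # predicted muscle, truly muscle
    fp = np.logical_and(pred, ~truth).sum()  # predicted muscle, actually background
    fn = np.logical_and(~pred, truth).sum()  # missed muscle voxels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

pred = np.array([1, 1, 1, 0, 0])
truth = np.array([1, 1, 0, 1, 0])
print(precision_recall(pred, truth))  # (0.6666..., 0.6666...)
```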
A prospective randomized clinical trial for measuring radiology study reporting time on Artificial Intelligence-based detection of intracranial hemorrhage in emergent care head CT
Axel Wismüller, Larry Stockmaster
The quantitative evaluation of Artificial Intelligence (AI) systems in a clinical context is a challenging endeavor, where the development and implementation of meaningful performance metrics is still in its infancy. Here, we propose a scientific concept, Artificial Intelligence Prospective Randomized Observer Blinding Evaluation (AI-PROBE), for quantitative clinical performance evaluation of radiology AI systems within prospective randomized clinical trials. Our evaluation workflow encompasses a study design and a corresponding radiology Information Technology (IT) infrastructure that randomly blinds radiologists with regard to the presence of positive reads provided by AI-based image analysis systems. To demonstrate the applicability of our AI-evaluation framework, we present a first prospective randomized clinical trial investigating the effect of automatic identification of Intra-Cranial Hemorrhage (ICH) in emergent care head CT scans on radiology study Turn-Around Time (TAT) in a clinical environment. Here, we acquired 620 consecutive non-contrast head CT scans from CT scanners used for inpatient and emergency room patients at a large academic hospital over a period of 14 consecutive days. Immediately following image acquisition, scans were automatically analyzed for the presence of ICH using commercially available software (Aidoc, Tel Aviv, Israel). Cases identified as positive for ICH by AI (ICH-AI+) were automatically flagged in the radiologists' reading worklists, where flagging was randomly switched off with a probability of 50%. Study TAT was measured automatically as the time difference between study completion and first clinically communicated study reporting, with time stamps for these events automatically retrieved from various radiology IT systems.
TATs for flagged cases (73 ± 143 min) were significantly lower than TATs for non-flagged cases (132 ± 193 min) (p<0.05, one-sided t-test); 105 of the 122 ICH-AI+ cases were true positive reads. Overall sensitivity, specificity, and accuracy across all analyzed cases were 95.0%, 96.7%, and 96.4%, respectively. We conclude that automatic identification of ICH reduces study TAT in emergent care head CT settings, which carries the potential to improve clinical management of ICH by accelerating clinically indicated therapeutic interventions. In a broader context, our results suggest that our AI-PROBE framework can contribute to a systematic quantitative evaluation of AI systems in a clinical workflow environment with regard to clinically meaningful performance measures, such as TAT or diagnostic accuracy metrics.
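The sensitivity, specificity, and accuracy figures above follow directly from confusion-matrix counts. A minimal sketch of those definitions (the counts below are illustrative, not the study's actual case breakdown):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                   # true-positive rate
    specificity = tn / (tn + fp)                   # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)     # overall agreement
    return sensitivity, specificity, accuracy

# hypothetical counts chosen only to illustrate the arithmetic
print(diagnostic_metrics(tp=95, fp=5, tn=145, fn=5))  # (0.95, 0.9666..., 0.96)
```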
Innovations in Image Processing II
Automated computer-based enumeration of acellular capillaries for assessment of diabetic retinopathy
Diabetic retinopathy (DR) is the most common complication of diabetes; if untreated, DR can lead to vision loss. The treatment options for DR are limited and the development of newer therapies is of considerable interest. Drug screening for retinopathy treatment is undertaken using animal models in which the quantification of acellular capillaries (capillaries without any cells) is used as a marker to assess the severity of retinopathy and the treatment response. The traditional approach to quantitating acellular capillaries is manual counting. The purpose of this investigation was to develop an automated technique for the quantitation of acellular capillaries using computer-based image processing algorithms. We developed a custom procedure in Python using the medial axis transform (MAT) and a connected-component algorithm. The program was tested on the retinas of wild-type and diabetic mice and the results were compared to single-blind manual counts by two independent investigators. The program successfully identified and enumerated acellular capillaries, and the automated counts were comparable to traditional manual counting. In conclusion, we developed an automated computer-based program that can be effectively used for future pharmacological development of treatments for DR. This algorithm will enhance consistency in retinopathy assessment and reduce analysis time, thus contributing substantially towards the development of future pharmacological agents for the treatment of DR.
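The enumeration step of such a pipeline amounts to counting connected components in a binarized image. A self-contained sketch of that step (4-connectivity, pure Python; the toy mask stands in for a thresholded retinal image, and this is not the authors' actual implementation):

```python
from collections import deque

def count_components(mask):
    """Count 4-connected components of 1s in a 2D binary grid."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1                     # new component found
                q = deque([(r, c)])
                seen[r][c] = True
                while q:                       # flood-fill its pixels
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return count

mask = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [1, 0, 0, 1]]
print(count_components(mask))  # 3
```

In practice a library routine such as `scipy.ndimage.label` would do the same labeling in one call.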
Deep semantic segmentation of Diabetic Retinopathy lesions: what metrics really tell us
Segmentation of lesions in eye fundus images (EFI) is a difficult problem due to their small sizes, varying morphologies, inter-lesion similarities, and lack of contrast. Today, deep learning segmentation architectures are state-of-the-art in most segmentation tasks, but metrics need to be interpreted adequately to avoid wrong conclusions; e.g., we show that a 90% global accuracy of the Fully Convolutional Network (FCN) does not mean it segments lesions well. In this work we test and compare deep segmentation networks applied to finding lesions in eye fundus images, focusing on how metrics should really be interpreted to avoid mistakes, and why. In light of this analysis, we conclude by discussing further challenges that lie ahead.
Sparse-view CT perfusion with filtered back projection image reconstruction
CT perfusion (CTP) efficiently provides valuable hemodynamic information for triage of acute ischemic stroke patients at the expense of additional radiation dose from consecutive CT acquisitions. Low-dose CTP is therefore highly desirable but is often attempted by iterative or deep learning reconstructions that are computationally intensive. We aimed to demonstrate that acquiring fewer x-ray projections in a CTP scan while reconstructing with filtered back projection (FBP) can reduce radiation dose without impacting clinical utility. Six CTP studies were selected from the PRove-IT clinical database. For each axial source CTP slice, a 984-view sinogram was synthesized using a Radon transform and uniformly under-sampled to 492, 328, 246, and 164 views. An FBP was applied to each sparse-view sinogram to reconstruct source images that were used to generate perfusion maps using a delay-insensitive deconvolution algorithm. The resulting Tmax and cerebral blood flow perfusion maps were evaluated for their ability to identify penumbra and ischemic core volumes using the Pearson correlation (R) and Bland-Altman analysis. In addition, sparse-view perfusion maps were assessed for fidelity to original full-view maps using structural similarity, peak signal-to-noise ratio, and normalized root mean squared error. Ischemic penumbra and infarct core volumes were accurately estimated by all sparse-view configurations (R>0.95, p<0.001; mean difference <3 ml) and overall perfusion map fidelity was well-maintained down to 328 views. Our preliminary analysis reveals that radiation dose can potentially be reduced by a factor of 6, with further validation that the errors in ischemic volume measurement do not impact clinical decision-making.
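The uniform angular under-sampling described above is simple array slicing: keeping every k-th projection view of the sinogram. A sketch of the view-count reduction (sinogram shape is illustrative; in practice a routine such as scikit-image's `iradon` would then reconstruct each sparse sinogram):

```python
import numpy as np

full_views, detectors = 984, 512            # illustrative sinogram dimensions
sinogram = np.zeros((full_views, detectors))  # rows = projection angles

# uniform angular under-sampling by factors 2, 3, 4, and 6
for factor in (2, 3, 4, 6):
    sparse = sinogram[::factor]             # keep every `factor`-th view
    print(sparse.shape[0])                  # 492, 328, 246, 164
```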
Segmentation of epicardial adipose tissue in cardiac MRI using deep learning
Epicardial adipose tissue (EAT) is the layer of fat that accumulates around the myocardium of the heart and is a contributing factor to cardiovascular disease. Identification and quantification of this fat depot is important in ongoing studies of intervention. We have manually traced the EAT in 20 cardiac MRI scans, but this process is tedious and time-consuming. The goal of this project was to develop a segmentation algorithm that would shorten the time it takes to quantify the EAT. The validation data consisted of pre-intervention and post-intervention MRI scans from 12 volunteer female subjects (4 subjects did not have post-intervention scans). The EAT, myocardium, and ventricles were manually traced in each slice of each scan. For the automated algorithm, preprocessing consisted of transforming the image data to the polar domain using the centroid of the traced inner EAT contour. In the polar image, each radial angle contained an inner-contour point and an outer-contour point, identifying the thickness of the fat at that radial location in that slice. These two locations on each single-angle view served as the input for the neural network along with the angle, the slice location, and the point in the cardiac cycle (end-diastole or end-systole). Two neural networks were trained, one for the inner edge of EAT and a second for the outer edge of EAT. The networks returned the location of the contours at each radial angle and this was compared with the traced solutions. The mean Dice similarity coefficient for the automatically identified EAT vs. the manually traced EAT was 0.56 ± 0.12. The current algorithm produces promising results that warrant further investigation and development.
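The polar-domain preprocessing amounts to resampling the image onto an (angle, radius) grid about a chosen centroid. A minimal nearest-neighbour sketch of that transform (names, grid sizes, and the toy image are illustrative, not the authors' code):

```python
import numpy as np

def to_polar(image, center, n_angles=360, n_radii=None):
    """Nearest-neighbour resampling of a 2D image onto an (angle, radius) grid."""
    cy, cx = center
    if n_radii is None:
        n_radii = int(min(image.shape) // 2)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    rr = np.arange(n_radii)[None, :]          # radii along columns
    aa = angles[:, None]                      # angles along rows
    ys = np.clip(np.round(cy + rr * np.sin(aa)).astype(int), 0, image.shape[0] - 1)
    xs = np.clip(np.round(cx + rr * np.cos(aa)).astype(int), 0, image.shape[1] - 1)
    return image[ys, xs]                      # shape (n_angles, n_radii)

img = np.zeros((64, 64))
img[32, 32] = 7.0                             # bright pixel at the chosen centroid
polar = to_polar(img, center=(32, 32))
print(polar.shape)                            # (360, 32)
print(polar[:, 0].max())                      # 7.0 -- radius 0 maps to the centre
```

In the resulting image, each row is one radial ray, so contour points become per-row locations, matching the per-angle network inputs described above.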
IRIS: interactive real-time feedback image segmentation with deep learning
Volumetric examinations of the aorta are nowadays of crucial importance for the management of critical pathologies such as aortic dissection, aortic aneurysm, and other pathologies that affect the morphology of the artery. These examinations usually begin with the acquisition of a Computed Tomography Angiography (CTA) scan from the patient, which is later postprocessed to reconstruct the 3D geometry of the aorta. The first postprocessing step is referred to as segmentation. Different algorithms have been suggested for the segmentation of the aorta, including interactive methods as well as fully automatic methods. Interactive methods need to be fine-tuned on each single CTA scan, which lengthens the process, whereas fully automatic methods require a large amount of labeled training data. In this work, we introduce a hybrid approach by combining a deep learning method with a consolidated interaction technique. In particular, we trained a 2D and a 3D U-Net on a limited number of patches extracted from 25 labeled CTA scans. Afterwards, we use an interactive approach, which consists of defining a region of interest (ROI) by simply placing a seed point. This seed point is then used as the center of a 2D or 3D patch to be fed to the 2D or 3D U-Net, respectively. Due to the low content variation of these patches, this method allows the ROIs to be segmented correctly without parameter tuning for each dataset and with a smaller training dataset, while requiring the same minimal interaction as state-of-the-art interactive methods. Later on, the newly segmented CTA scans can be further used to train a convolutional network for a fully automatic approach.
Quantification of flow through intracranial arteriovenous malformations using Angiographic Parametric Imaging (API)
Kyle A. Williams, Mohammed Mahdi Shiraz Bhurwani, Kenneth V. Snyder, et al.
Purpose: Intracranial arteriovenous malformations (AVMs) are severe neurovascular diseases in which the arterial branches of an area of the brain communicate directly with the venous circulation through a network of dilated vasculature (nidus), which significantly increases the risk of hemorrhage. Treatment plans typically incorporate direct embolization with liquid materials delivered via micro-catheters under fluoroscopy. Currently, the progression and success of this procedure are qualitatively evaluated using digitally subtracted angiographic (DSA) sequences. This study sought to validate the use of Angiographic Parametric Imaging (API) for quantitative analysis of the hemodynamic changes caused by embolization treatment using imaging biomarkers. Materials and Methods: 36 patients with AVMs were selected randomly from a list of patients with known symptoms at presentation. For each, at least one set of frontal and lateral angiograms was analyzed using API. Parametric maps were calculated for five imaging biomarkers: time to peak (TTP), mean transit time (MTT), time to arrival (TTA), peak height (PH), and area under the curve (AUC). Regions of interest (ROIs) were selected over the feeding arteries, AVM nidus, and draining veins. Average ROI parameters were calculated and changes in flow due to embolization were quantified through a percent change analysis. Results were verified using correlation coefficients across AVM vasculature at multiple sites following normalization. Results: Frontal-to-lateral correlation coefficients: TTP, 0.54±0.07; MTT, 0.24±0.09; TTA, 0.60±0.06; PH, 0.33±0.08; AUC, 0.22±0.09. Nidus-to-principal-draining-vein (PDV) correlation coefficients: TTP, 0.75±0.03; MTT, 0.64±0.04; TTA, 0.80±0.02; PH, 0.32±0.06; AUC, 0.68±0.04. PH and AUC values were affected by DSA inversion.
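The API biomarkers above are all derived from a per-pixel (or per-ROI) time-density curve. A minimal sketch of common textbook definitions (the MTT here is a simple first-moment estimate, not the study's computation, and the synthetic curve is purely illustrative):

```python
import numpy as np

def api_biomarkers(tdc, dt=1.0, arrival_frac=0.1):
    """Simple parametric-imaging biomarkers from a time-density curve."""
    tdc = np.asarray(tdc, dtype=float)
    t = np.arange(len(tdc)) * dt
    ph = tdc.max()                                   # peak height
    ttp = t[np.argmax(tdc)]                          # time to peak
    tta = t[np.argmax(tdc >= arrival_frac * ph)]     # first frame above threshold
    auc = float(np.sum((tdc[1:] + tdc[:-1]) * np.diff(t)) / 2.0)  # trapezoid rule
    mtt = float(np.sum(t * tdc) / np.sum(tdc))       # first-moment transit time
    return {"PH": ph, "TTP": ttp, "TTA": tta, "AUC": auc, "MTT": mtt}

# toy gamma-variate-like curve standing in for a contrast bolus
t = np.arange(20.0)
tdc = t * np.exp(-t / 3.0)          # rises, peaks at t = 3, then washes out
print(api_biomarkers(tdc))
```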
Conclusions: The study concludes that the API software is reliable in determining the flow parameters throughout the AVM, provided that the selected ROI is consistent between frontal and lateral scans and DSA selection is optimal.
Deep Convolutional Neural Networks in Molecular, Structural, and Functional Imaging I
Supervised machine learning for region assignment of zebrafish brain nuclei based on computational assessment of cell neighborhoods
Samarth Gupta, Yuan Xue, Yifu Ding, et al.
Histological studies provide cellular insights into tissue architecture and have been central to phenotyping and biological discovery. Synchrotron X-ray micro-tomography of tissue, or "X-ray histotomography", yields three-dimensional reconstruction of fixed and stained specimens without sectioning. These reconstructions permit the computational creation of histology-like sections in any user-defined plane and slice thickness. Furthermore, they provide an exciting new basis for volumetric, computational histological phenotyping at cellular resolution. In this paper, we demonstrate the computational characterization of the zebrafish central nervous system imaged by synchrotron X-ray micro-CT through the classification of small cellular neighborhood volumes centered at each detected nucleus in a 3D tomographic reconstruction. First, we propose a deep learning-based nucleus detector to detect nuclear centroids. We then develop, train, and test a Convolutional Neural Network architecture for automatic classification of brain nuclei using five different neighborhood sizes, corresponding to 8, 12, 16, 20, and 24 isotropic voxels, respectively. We show that even with small cell neighborhoods, our proposed model is able to characterize brain nuclei into the major tissue regions with a Jaccard score of 74.29% and an F1 score of 85.34%. Using our detector and classifier, we obtained very good results for fully segmenting major zebrafish brain regions in the 3D scan through patch-wise labeling of cell neighborhoods.
Deep learning based high-resolution reconstruction of trabecular bone microstructures from low-resolution CT scans using GAN-CIRCLE
Indranil Guha, Syed Ahmed Nadeem, Chenyu You, et al.
Osteoporosis is a common age-related disease characterized by reduced bone density and increased fracture-risk. Microstructural quality of trabecular bone (Tb), commonly found at axial skeletal sites and at the end of long bones, is an important determinant of bone-strength and fracture-risk. High-resolution emerging CT scanners enable in vivo measurement of Tb microstructures at peripheral sites. However, resolution-dependence of microstructural measures and wide resolution-discrepancies among various CT scanners together with rapid upgrades in technology warrant data harmonization in CT-based cross-sectional and longitudinal bone studies. This paper presents a deep learning-based method for high-resolution reconstruction of Tb microstructures from low-resolution CT scans using GAN-CIRCLE. A network was developed and evaluated using post-registered ankle CT scans of nineteen volunteers on both low- and high-resolution CT scanners. 9,000 matching pairs of low- and high-resolution patches of size 64×64 were randomly harvested from ten volunteers for training and validation. Another 5,000 matching pairs of patches from nine other volunteers were used for evaluation. Quantitative comparison shows that predicted high-resolution scans have significantly improved structural similarity index (p < 0.01) with true high-resolution scans as compared to the same metric for low-resolution data. Different Tb microstructural measures such as thickness, spacing, and network area density are also computed from low- and predicted high-resolution images, and compared with the values derived from true high-resolution scans. Thickness and network area measures from predicted images showed higher agreement with true high-resolution CT (CCC = [0.95, 0.91]) derived values than the same measures from low-resolution images (CCC = [0.72, 0.88]).
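The CCC values quoted above are concordance correlation coefficients (Lin's CCC), which penalize both poor correlation and systematic bias between paired measurements. A minimal NumPy sketch (the paired arrays are illustrative stand-ins for microstructural measures):

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population (1/N) variances
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

# toy paired measures standing in for true vs. predicted Tb thickness
truth = np.array([0.10, 0.20, 0.30, 0.40])
pred = np.array([0.12, 0.19, 0.31, 0.42])
print(round(concordance_ccc(truth, truth), 3))  # 1.0 -- perfect agreement
print(concordance_ccc(truth, pred) > 0.9)       # True -- high agreement
```

Unlike Pearson's R, CCC drops below 1 whenever one set is shifted or scaled relative to the other, which is why it is preferred for agreement studies.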
Deep learning based multi-organ segmentation and metastases segmentation in whole mouse body and the cryo-imaging cancer imaging and therapy analysis platform (CITAP)
We are creating a cancer imaging and therapy analysis platform (CITAP), featuring image analysis/visualization software and multi-spectral cryo-imaging to support innovations in preclinical cancer research. Cryo-imaging repeatedly sections and tiles microscope images of the tissue block face, providing color anatomy and molecular fluorescence 3D microscopic imaging over vast volumes as large as a whole mouse, with single-metastatic-cell sensitivity. We utilized DenseVNet from NiftyNet for multi-organ segmentation on color anatomy images to further analyze major organs. The proposed algorithm was trained/validated/tested on 70/5/4 color anatomy volumes with manually labeled lung, liver, and spleen. The mean Dice similarity coefficients for lung, liver, and spleen in the test set were 0.89±0.01, 0.92±0.01, and 0.83±0.04. We deem a Dice coefficient of >0.9 good for analyzing the distribution of metastases. To segment GFP-labeled breast cancer metastases in high-resolution green fluorescence images, large and small candidates were segmented using marker-based watershed and multi-scale Laplacian-of-Gaussian filtering followed by Otsu segmentation, respectively. A bounding box around each candidate was classified with a 3D convolutional neural network (CNN). In one test mouse with 226 metastases, CNN-based classification and random forest with hand-crafted features achieved sensitivity/specificity of 0.95/0.89 and 0.92/0.82, respectively. DenseVNet-based organ segmentation allows automatic quantification of GFP-labeled metastases in each organ of interest. In the test mouse with 226 metastases, 78 (1 with size >2mm, 21 with size 0.5mm-2mm, and 56 with size <0.5mm) and 24 (1 with size >2mm, 11 with size 0.5-2mm, and 12 with size <0.5mm) were found in the lung and liver, respectively.
ASNet: An adaptive scale network for skin lesion segmentation in dermoscopy images
Dermoscopy is a non-invasive dermatological imaging technique widely used in the dermatology clinic. In order to screen and detect melanoma automatically, skin lesion segmentation in dermoscopy images is of great significance. In this paper, we propose an adaptive scale network (ASNet) for skin lesion segmentation in dermoscopy images. A ResNet34 with pretrained weights is applied as the encoder to extract more representative features. A novel adaptive scale module is designed and inserted at the top of the encoder path to dynamically fuse multi-scale information, which can self-learn based on a spatial attention mechanism. Our proposed method is 5-fold cross-validated on a public dataset from the Lesion Boundary Segmentation challenge in ISIC-2018, which includes 2594 images of different types of skin lesion at different resolutions. The Jaccard coefficient, Dice coefficient, and Accuracy are 82.15±0.328%, 88.88±0.390%, and 96.00±0.228%, respectively. Experimental results show the effectiveness of the proposed ASNet.
A causal brain network estimation method leveraging Bayesian analysis and the PC algorithm
Gemeng Zhang, Aiying Zhang, Vince D. Calhoun, et al.
Estimating causal brain networks from fMRI data is important in understanding functional human brain connectivity, and current causality estimation methods face various challenges such as high dimensionality and expensive computation. The joint estimation of causal networks between groups shows promising potential to investigate group-related brain connectivity variations. In this paper, we propose a joint causal brain network estimation method that adds a prior to the popular PC algorithm (named after Peter Spirtes and Clark Glymour). The prior is obtained through a fast joint Bayesian analysis (FIBA) and acts as a screening step, significantly reducing the computational burden of the PC algorithm. Moreover, the FIBA also enables us to efficiently address the high dimensionality problem of fMRI data. The experimental results from both simulation data sets and real fMRI data demonstrate the accuracy and efficiency of the proposed method. The specific brain connections identified in schizophrenia patients extend previous research and shed light on other studies of mental disorders.
Deep Convolutional Neural Networks in Molecular, Structural, and Functional Imaging II
Efficacy of radiomics and genomics in predicting TP53 mutations in diffuse lower grade glioma
An updated classification of diffuse lower-grade gliomas (LGG) is established in the 2016 World Health Organization Classification of Tumors of the Central Nervous System based on their molecular mutations, such as TP53 mutation. This study investigates machine learning methods for TP53 mutation status prediction and classification using radiomics and genomics features, respectively. Radiomics features represent patients' age and imaging features extracted from conventional MRI. Genomics features are represented by patients' gene expression from RNA sequencing. This study uses a total of 105 LGG patients, divided into a training set (80 patients) and a testing set (25 patients). Three TP53 mutation prediction models are constructed based on the source of the training features: a TP53-radiomics model, a TP53-genomics model, and a TP53-radiogenomics model. Radiomics feature selection is performed using a recursive feature selection method. For genomics data, the EdgeR method is utilized to select the genes differentially expressed between the mutated TP53 and non-mutated TP53 cases in the training set. The training classification model is constructed using Random Forest and cross-validated using repeated 10-fold cross validation. Finally, the predictive performance of the three models is assessed using the testing set. The three models, TP53-Radiomics, TP53-RadioGenomics, and TP53-Genomics, achieve predictive accuracies of 0.84±0.04, 0.92±0.04, and 0.89±0.07, respectively. These results show the promise of non-invasive MRI radiomics features, and of fusing radiomics with genomics features, for prediction of TP53 mutation status.
Classification of skin-cancer lesions based on fluorescence lifetime imaging
Priyanka Vasanthakumari, Renan A. Romano, Ramon G. T. Rosa, et al.
Every year more than 5.4 million new cases of skin cancer are reported in the US. Melanoma is the most lethal type with only a 5% occurrence rate, but accounts for over 75% of all skin cancer deaths. Non-melanoma skin cancer, especially basal cell carcinoma (BCC), is the most commonly occurring and often curable type that affects more than 3 million people and causes about 2000 deaths in the US annually. The current diagnosis involves visual inspection, followed by biopsy of the lesions. The major drawbacks of this practice include difficulty in border detection, causing incomplete treatment, and the inability to distinguish between clinically similar lesions. Melanoma is often mistaken for the benign lesion pigmented seborrheic keratosis (pSK), making it extremely important to differentiate benign and malignant lesions. In this work, a novel feature extraction algorithm based on phasors was applied to Fluorescence Lifetime Imaging (FLIM) images of the skin to reliably distinguish between benign and malignant lesions. This approach, unlike the standard FLIM data processing method that requires time-deconvolution of the instrument response from the measured time-resolved fluorescence signal, is computationally much simpler and provides a unique set of features for classification. Subsequently, FLIM-derived features were selected using a double-step cross-validation approach that assesses the reliability and the performance of the resultant trained classifier. Promising FLIM-based classification performance was attained for distinguishing benign from malignant pigmented (sensitivity: ~80%, specificity: 79%, overall accuracy: ~79%) and non-pigmented (sensitivity: ~88%, specificity: 83%, overall accuracy: ~87%) lesions.
A hypergraph learning method for brain functional connectivity network construction from fMRI data
Recently, functional magnetic resonance imaging (fMRI)-derived brain functional connectivity networks (FCNs) have provided insights into explaining individual variation in cognitive and behavioral traits. In these studies, how to accurately construct FCNs is always important and challenging. In this paper, we propose a hypergraph learning based method, which constructs a hypergraph similarity matrix to represent the FCN with hyperedges being generated by sparse regression and their weights being learned by hypergraph learning. The proposed method is capable of better capturing the relations among multiple brain regions than the traditional graph based methods and the existing unweighted hypergraph based method. We then validate the effectiveness of our proposed method on the Philadelphia Neurodevelopmental Cohort data for classifying subjects’ learning ability levels, and discover potential imaging biomarkers which may account for a proportion of the variance in learning ability.
MRI-based radiomics of sarcomas in the preclinical arm of a co-clinical trial
Radiomics provide an exciting approach to developing imaging biomarkers in the context of precision medicine. We focus on the preclinical arm of a co-clinical trial investigating the synergy of immunotherapy combined with radiation therapy (RT) and surgical resection using a genetically engineered mouse model of sarcoma. Our protocol involves the acquisition of MRI data with T1, T2, and T1 with contrast agent. There are two MRI time points, i.e., one day before RT (20Gy) and one week later. After the second MRI acquisition the primary tumor is surgically removed, and the mice are followed for up to 6 months to investigate local recurrence or distant metastases. The tumor images are segmented using deep learning. We performed radiomics for the tumor, the peritumoral rim, and the combined tumor and peritumoral rim. Our first radiomics analysis focused on determining the features most indicative of the effects of RT. Our second analysis aimed to answer whether radiomics features could predict primary tumor recurrence within this population. Top features were selected for training classifiers based on neural networks and support vector machines. Gray-level radiomic features show that tumors often acquire a more heterogeneous texture and that tumor volume increases one week post-RT. The results also suggest that radiomics features serve to indicate the likelihood of primary tumor recurrence, with the best predictive power in the combined tumor and peritumoral area in pre-RT data (AUC: 0.83). In conclusion, we have created a radiomics pipeline to serve the current preclinical arm of the co-clinical trial.
A machine learning approach for abdominal aortic aneurysm severity assessment using geometric, biomechanical, and patient-specific historical clinical features
Recent studies monitoring severity of abdominal aortic aneurysm (AAA) suggested that reliance on only the maximum transverse diameter (Dmax) may be insufficient to predict AAA rupture risk. Moreover, geometric indices, biomechanical parameters, material properties, and patient-specific historical data affect AAA morphology, indicating the need for an integrative approach that incorporates all factors for more accurate estimation of AAA severity. We implemented a machine learning algorithm using 45 features extracted from 66 patients. The model was generated using the J48 decision tree algorithm with the aim of maximizing model accuracy. Three different feature sets were used to assess the prediction rate: i) Dmax as a single-feature set, ii) the set of all features, and iii) a feature set selected via the BestFirst feature selection algorithm. Our results indicate that BestFirst feature selection yielded the highest prediction accuracy. These results indicate that a combination of several specific parameters that comprehensively capture AAA behavior may enable a suitable assessment of AAA severity, suggesting the potential benefit of machine learning for this application.
Novel Imaging Methods
Initial investigations of x-ray particle imaging velocimetry (X-PIV) in 3D printed phantoms using 1000 fps High-Speed Angiography (HSA)
Details of blood flow patterns and rates can provide useful information to physicians when deciding whether to treat diseased vessels and in assessing the effectiveness of treatments. These blood flow details are difficult to see using 'real-time' imaging techniques at 30 fps. 1000 fps High-Speed Angiography (HSA) provides the temporal resolution needed to record details of flow within patient vasculature. The Actaeon detector from XCounter is capable of x-ray imaging at 1000 fps, providing sufficient temporal and spatial resolution (100 μm pixel pitch) for the quantification of flow details. A new method for experimentally obtaining flow details in patient-specific geometries uses microspheres to track vascular flow, similar to methods used in optical laser-based particle image velocimetry (PIV). The microspheres are prepared by soaking them in iodinated contrast medium to provide radio-opacity and are then injected into 3D-printed, patient-specific vascular phantoms during x-ray exposure. Images were acquired at 1000 fps for 2.4 seconds using the Actaeon's High-Sensitivity mode. Changes in particle positions were tracked through consecutive frames and the position data were used to calculate velocities. The velocities were then mapped to the initial positions and binned to reduce apparent variation in individual particle paths. This method provides quantitative data regarding the flow details within a vessel and also qualitative information regarding flow at different points in time during the acquisition. These methods could enable new measurements of flow properties in patient-specific vasculature.
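Once particle positions have been tracked across frames, velocities follow from frame-to-frame displacements scaled by the pixel pitch and frame rate. A minimal sketch under the stated acquisition parameters (the track array and helper name are illustrative, not the authors' code):

```python
import numpy as np

FPS = 1000.0      # frames per second of the HSA acquisition
PIXEL_MM = 0.1    # 100 um pixel pitch, expressed in mm

def frame_velocities(tracks):
    """Per-frame particle speeds (mm/s) from (n_frames, n_particles, 2) pixel positions."""
    disp = np.diff(tracks, axis=0)           # pixel displacement between frames
    speed_px = np.linalg.norm(disp, axis=2)  # pixels per frame
    return speed_px * PIXEL_MM * FPS         # convert to mm/s

# one particle moving 2 pixels per frame along x
tracks = np.array([[[0.0, 0.0]],
                   [[2.0, 0.0]],
                   [[4.0, 0.0]]])
print(frame_velocities(tracks))  # [[200.], [200.]] mm/s
```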
Micro-CT imaging technique to characterize diffusion of small-molecules
Tina Khazaee, Chris J. D. Norley, Hristo N. Nikolov, et al.
Optimization and characterization of small-molecule diffusion are important in the development of drug-delivery systems. For example, the delivery of local antibiotics is an important component of therapy for orthopedic device-related infections (ODRI). However, despite its wide use, the exact elution mechanism is not yet fully understood. In this study, we developed a quantitative, non-destructive micro-CT technique to characterize 2D diffusion of small molecules in a tissue-equivalent phantom. Our objective is to use a radio-opaque molecule (Iohexol; molecular weight (MW) 821 Da) as a surrogate for small-molecule antibiotics (e.g., Vancomycin; MW 1449 Da) to characterize diffusion from a finite, cylindrical-core carrier into an agar tissue-equivalent sink. A single-phase diffusion experiment was performed to validate our micro-CT imaging method. The two-part phantom consisted of an inner, cylindrical agar core loaded with Iohexol as a drug surrogate, directly communicating with an outer annulus of pure agar. The estimate of the single-phase diffusion coefficient for agar was derived from analysis of the 2D radial diffusion distance. We then applied the validated method to evaluate diffusion in two phases, using calcium-sulphate matrices loaded with Iohexol eluting into an agar tissue-equivalent sink. Image acquisition was performed at regular intervals up to 25 days. The cumulative release amount was used to calculate diffusion coefficients in the two-phase phantoms. The Iohexol diffusion coefficient was 2.6 × 10⁻¹⁰ m² s⁻¹ through agar, 0.46 × 10⁻¹⁰ m² s⁻¹ through Stimulan, and 0.85 × 10⁻¹⁰ m² s⁻¹ through Plaster of Paris. This approach could be used to validate drug delivery in the development of new carrier structures and materials.
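For context, the single-phase analysis step can be sketched with the 2D Einstein relation ⟨r²⟩ = 4Dt, fitting a diffusion coefficient to radial diffusion distances observed over time. This is a minimal illustration with synthetic data, not the authors' code; the function name and sampling schedule are hypothetical.

```python
# Sketch: estimate a 2D diffusion coefficient D from radial diffusion
# distances r measured at successive time points, assuming <r^2> = 4*D*t.

def estimate_D(times_s, r_m):
    """Least-squares slope of r^2 vs t through the origin; D = slope / 4."""
    num = sum(t * r * r for t, r in zip(times_s, r_m))
    den = sum(t * t for t in times_s)
    return (num / den) / 4.0

# Synthetic data generated with the agar value reported above,
# D = 2.6e-10 m^2/s, sampled once per day for 5 days:
D_true = 2.6e-10
times = [86400.0 * d for d in range(1, 6)]          # seconds
radii = [(4.0 * D_true * t) ** 0.5 for t in times]  # ideal r(t)
D_est = estimate_D(times, radii)
```

With noiseless synthetic data the fit recovers D exactly; real image-derived distances would scatter around the ideal curve.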
Quantifying the effects of stenosis on blood flow using 1000 fps High-Speed Angiography (HSA)
Vascular pathologies such as stenosis cause changes in blood flow patterns and rates in patient vasculature. These changes are difficult to observe using conventional x-ray imaging techniques that typically operate at no more than 30 fps. However, 1000 fps High-Speed Angiography (HSA) provides sufficient temporal resolution to observe flow patterns in patient vasculature. The Actaeon detector from XCounter provides 1000 fps HSA with high spatial resolution (100 μm pixel pitch), enabling observation of detailed blood flow in 3D-printed phantoms. To observe the effects of stenosis on flow, three carotid artery phantoms with different degrees of stenosis (0%, 33%, and 66%) in the internal carotid were printed, each beginning 7.6 cm proximal to the bifurcation of the common carotid and extending 6.9 cm into the internal and external carotids. Iodine contrast was injected into the phantom proximal to the bifurcation and images were recorded at 1000 fps. Time-density curves were generated for ROIs distal and proximal to the location of the stenosis for both branches of the carotid artery. The curves for the control phantom (no stenosis) showed little difference between the branches of the carotid, while the stenotic phantoms showed an increased velocity of contrast flow and a lower maximum signal intensity for the stenosed internal carotid. 1000 fps HSA could be used during treatment of vascular pathologies to observe changes in flow patterns following an endovascular treatment.
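A time-density curve of the kind described above is simply the mean ROI intensity plotted per frame. A minimal sketch, with tiny synthetic "frames" standing in for angiographic images and a hypothetical function name:

```python
# Sketch: build a time-density curve (TDC) by averaging pixel values in a
# rectangular ROI for every frame of a high-speed sequence.

def time_density_curve(frames, roi):
    """roi = (row0, row1, col0, col1), half-open; one mean value per frame."""
    r0, r1, c0, c1 = roi
    curve = []
    for frame in frames:
        pixels = [frame[r][c] for r in range(r0, r1) for c in range(c0, c1)]
        curve.append(sum(pixels) / len(pixels))
    return curve

# Two illustrative 3x3 frames; the ROI covers the top-left 2x2 block:
frames = [[[1, 1, 0], [1, 1, 0], [0, 0, 0]],
          [[3, 3, 0], [3, 3, 0], [0, 0, 0]]]
tdc = time_density_curve(frames, (0, 2, 0, 2))
```

Comparing such curves for ROIs proximal and distal to a stenosis gives the arrival-time and peak-intensity differences the abstract reports.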
Co-registration of pre- and post-stent intravascular OCT images for validation of finite element model simulation of stent expansion
Intravascular optical coherence tomography (IVOCT) provides high-resolution images of coronary calcifications and detailed measurements of acute stent deployment following stent implantation. Since pre- and post-stent IVOCT image “pull-back” acquisitions start from different locations, registration of corresponding pullbacks is needed for assessing treatment outcomes. In particular, we are interested in assessing finite element model (FEM) prediction of lumen gain following stenting, which requires registration. We used deep learning to segment calcifications in corresponding pre- and post-stent IVOCT pullbacks. We created 1D representations of calcium thickness as a function of the angle of the helical IVOCT scans. Registration of the two scans was done by maximizing the cross correlation of these two 1D representations. Registration was accurate, as determined by visual comparison of 2D image frames. We used our pre-stent calcification segmentations to create a lesion-specific FEM, which took into account balloon size, balloon pressure, and stent measurements. We then compared simulated lumen gain from the FEM analysis to actual stent deployment results. Actual lumen gain across ~200 registered pre- and post-stent images was 1.52 ± 0.51, while the FEM prediction was 1.43 ± 0.41. Comparison between actual and FEM results showed no significant difference (p < 0.001), suggesting accurate FEM prediction. Registered image data showed good visual agreement regarding lumen gain and stent strut malapposition. Hence, we have developed a platform for evaluation of FEM prediction of lumen gain. This platform can be used to guide development of FEM prediction software, which could ultimately help physicians with stent treatment planning of calcified lesions.
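The registration step described above can be sketched as a 1D cross-correlation search: slide one calcium-thickness profile against the other and keep the shift with the highest correlation. The profiles and function below are illustrative, not the authors' implementation.

```python
# Sketch: align two 1D calcium-thickness profiles (thickness vs position
# along the pullback) by maximizing their circular cross correlation.

def best_shift(a, b):
    """Shift s (in samples) by which b lags a, found by maximizing the
    circular cross correlation sum(a[i] * b[(i + s) % n])."""
    n = len(a)
    def xcorr(s):
        return sum(a[i] * b[(i + s) % n] for i in range(n))
    return max(range(n), key=xcorr)

# Illustrative profiles: the same lesion appears 2 samples later in the
# post-stent pullback.
pre  = [0, 0, 1, 3, 1, 0, 0, 0]
post = [0, 0, 0, 0, 1, 3, 1, 0]
shift = best_shift(pre, post)
```

Once the lag is known, frame indices in the two pullbacks can be offset by it to pair corresponding cross-sections.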
Image processing using Convolutional Neural Network (CNN) for Region of Interest (ROI) fluoroscopy
Patient x-ray dose during fluoroscopically-guided neuro interventions can be reduced by using a differential region-of-interest (ROI) x-ray attenuator. The dose in the periphery outside the ROI treatment area is reduced while maintaining regular dose within the ROI. In this work we present a convolutional neural network (CNN) to aid in restoring image quality in the dose-reduced regions. A 0.7 mm Cu attenuator with a 10 mm circular hole in the middle was used to reduce entrance dose in the periphery. A 29-layer deep CNN was developed to derive the ROI attenuator mask image from the dose-reduced images. To train the CNN, simulated ROI dose-reduced images of various backgrounds, such as anthropomorphic head and chest phantoms, were generated using acquired mask images of the ROI attenuator at different positions and radiological magnifications. The image quality in the dose-reduced region was restored by first dividing the dose-reduced image by the CNN-derived mask and then reducing noise in the periphery using a combination of Gaussian and recursive temporal filtering. A total dose-area-product reduction of 70% per frame was achieved. After image processing using the CNN-derived image mask of the ROI attenuator, the SNR in the dose-reduced periphery regions was improved by a factor of 3. The CNN is capable of deriving the mask without any prior knowledge of the ROI attenuator position or radiological magnification. Using the CNN-generated mask, the image quality in the dose-reduced images was restored with minimal or no boundary artifacts.
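The two restoration steps can be sketched on scalar pixel values: divide the dose-reduced value by the mask transmission to equalize brightness, then apply a first-order recursive temporal filter O_t = αI_t + (1−α)O_{t−1} to suppress the remaining noise. The transmission value, α, and pixel values below are illustrative assumptions, not the paper's parameters.

```python
# Sketch of ROI-fluoroscopy restoration on one periphery pixel over time:
# brightness equalization by mask division, then recursive temporal filtering.

def equalize(pixel, mask_transmission):
    # mask_transmission < 1 in the attenuated periphery, 1.0 inside the ROI
    return pixel / mask_transmission

def temporal_filter(samples, alpha=0.25):
    """First-order recursive filter: O_t = alpha*I_t + (1 - alpha)*O_{t-1}."""
    out = [samples[0]]
    for x in samples[1:]:
        out.append(alpha * x + (1.0 - alpha) * out[-1])
    return out

# A periphery pixel attenuated to 30% transmission, across 4 frames:
raw = [30.0, 36.0, 27.0, 33.0]
restored = [equalize(p, 0.3) for p in raw]  # back to the ~100 level
smoothed = temporal_filter(restored)        # quantum noise damped over time
```

A smaller α gives stronger noise suppression at the cost of more temporal lag, which is why it is combined with spatial (Gaussian) filtering in the paper's pipeline.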
Ocular and Optical Imaging
Spatially aware deep learning improves identification of retinal pigment epithelial cells with heterogeneous fluorescence levels visualized using adaptive optics
Defects in retinal pigment epithelial (RPE) cells, which nourish retinal neurosensory photoreceptor cells, contribute to many blinding diseases. Recently, the combination of adaptive optics (AO) imaging with indocyanine green (ICG) has enabled the visualization of RPE cells directly in patients’ eyes, which makes it possible to monitor cellular status in real time. However, RPE cells visualized using AO-ICG have ambiguous boundaries and minimal intracellular contrast, making it difficult for computer algorithms to identify cells solely based on image appearance information. Here, we demonstrate the importance of providing spatial information to deep learning networks. We used a training dataset containing 1,633 AO images and a separate dataset containing 250 images for validation. Whereas the original LinkNet was unable to reliably identify low-contrast RPE cells, we found that our proposed spatially-aware LinkNet, which has direct access to additional spatial information about the hexagonal arrangement of RPE cells (auxiliary spatial constraints), achieved better results. The overall precision, recall, and F1 score from the spatially aware deep learning method were 92.1±4.3%, 88.2±5.7%, and 90.0±3.8% (mean±SD) respectively, which was significantly better than the original LinkNet with 92.0±4.6%, 57.9±13.3%, and 70.0±10.6% (p<0.05). These experimental results demonstrate that the auxiliary spatial constraints are the key factor for improving RPE identification accuracy. Explicit incorporation of spatial constraints into existing deep learning networks may be useful for handling images with known spatial constraints and low image intensity information at cell boundaries.
Mask-guided diffusion optical tomography image reconstruction for breast imaging
Yejin Kim, Sungho Yun, Seungryong Cho
Digital breast tomosynthesis (DBT) can provide a high-resolution structural image, whereas diffuse optical tomography (DOT) can provide functional information. We therefore propose scanning the same object with both DBT and DOT and overlaying the two images, so that the high-resolution structural information and the functional information can be viewed at the same time. Because the two systems are designed with similar scanning geometries, the DBT image can be used as a mask guide for the DOT reconstruction mesh. The scattering-dominant interaction near the air-tissue boundary, which causes severe boundary artifacts and degrades overall image quality, can be mitigated using the DBT-mask meshes. Even when the DBT and DOT scans were not acquired simultaneously, so that marginal anatomical deformation existed between the two images, the DBT mask guide was highly effective in removing the boundary artifact from the DOT image. For calibration of the raw data, a homogeneous phantom with known optical properties was used to reduce the source power variance and the detector sensitivity variance. The raw data were sorted according to signal amplitude to exclude air-contaminated measurements. With the calibrated and sorted raw data, the bulk absorption and scattering coefficients were estimated. The NIRFAST software package was used as the core iterative reconstruction engine. For clinical validation, 27 patients in total were scanned with both DBT and DOT. The sensitivity and specificity were 0.92 and 0.64, respectively, with the DBT images and 0.61 and 0.85 with the DOT images, whereas with the overlaid DBT and DOT images they were 1.00 and 0.93, respectively.
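The reported sensitivity and specificity follow the usual confusion-matrix definitions; a minimal sketch with illustrative counts (not the study's data):

```python
# Sketch: sensitivity and specificity from confusion-matrix counts.
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).

def sens_spec(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for one reading condition:
sens, spec = sens_spec(tp=23, fn=2, tn=9, fp=5)
```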
Rapid and accurate identification of newly emerging infectious pathogens using bio-optical sensor in clinical specimens (Conference Presentation)
Here, we report a bio-optical sensor based isothermal, label-free DNA and RNA detection assay for rapid and accurate identification of newly emerging infectious pathogens in clinical specimens. This bio-optical sensor achieves rapid (<20 min), label-free detection in real time with a low limit of detection (10-100 copies/reaction). We validated it with various samples infected with Coxiella burnetii, Orientia tsutsugamushi, severe fever with thrombocytopenia syndrome virus (SFTS virus), and Middle East respiratory syndrome coronavirus (MERS-CoV); all results showed high sensitivity and specificity compared with conventional PCR methods. This bio-optical sensor is highly sensitive and specific and has proven applicable to a wide variety of newly emerging infectious diseases. Moreover, the ability of the assay to identify multiple pathogens in clinical specimens is useful for overcoming the limitations of conventional approaches.
In vivo cancer detection in animal model using hyperspectral image classification with wavelet feature extraction
Ling Ma, Martin Halicek, Baowei Fei
Hyperspectral imaging (HSI) is a promising optical imaging technique for cancer detection. However, quantitative methods need to be developed in order to utilize the rich spectral information and subtle spectral variation in such images. In this study, we explore the feasibility of using wavelet-based features from in vivo hyperspectral images for head and neck cancer detection. Hyperspectral reflectance data were collected from 12 mice bearing head and neck cancer. The concatenation of 5-level wavelet decomposition outputs of the hyperspectral images was used as the feature for tumor discrimination. A support vector machine (SVM) was used as the classifier. Seven types of mother wavelets were tested to select the one with the best performance. Classifications with raw reflectance spectra, the 1-level wavelet decomposition output, and the 2-level wavelet decomposition output, as well as the proposed feature, were carried out for comparison. Our results show that the proposed wavelet-based feature yields better classification accuracy, and that using different types and orders of mother wavelets achieves different classification results. The wavelet-based classification method provides a new approach for HSI detection of head and neck cancer in an animal model.
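The feature-construction step above can be sketched with a simple multi-level Haar decomposition (the study compared seven mother wavelets; Haar is used here only for brevity), concatenating the detail coefficients of each level with the final approximation. The spectrum and function names are illustrative.

```python
# Sketch: multi-level Haar wavelet features for one reflectance spectrum.

def haar_step(x):
    """One DWT level: (approximation, detail) via pairwise means/differences."""
    approx = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return approx, detail

def wavelet_features(spectrum, levels=3):
    """Concatenate each level's detail coefficients plus the final approximation."""
    feats, a = [], list(spectrum)
    for _ in range(levels):
        a, d = haar_step(a)
        feats.extend(d)
    feats.extend(a)
    return feats

# A toy 8-band "spectrum"; the feature length equals the spectrum length:
spectrum = [4.0, 2.0, 6.0, 8.0, 1.0, 3.0, 5.0, 7.0]
f = wavelet_features(spectrum, levels=3)
```

In the study, such per-pixel feature vectors (from 5 levels of a chosen mother wavelet) feed the SVM classifier.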
High-resolution x-ray luminescence computed tomography
High-resolution imaging modalities play a critical role in advancing the biomedical sciences. Recently, x-ray luminescence computed tomography (XLCT) imaging was introduced as a hybrid molecular imaging modality that combines the high spatial resolution of x-ray imaging with the molecular sensitivity of optical imaging. Narrow-x-ray-beam-based XLCT imaging has been demonstrated to achieve high spatial resolution, even at depth, with great molecular sensitivity. Using a focused x-ray beam as the excitation source, orders-of-magnitude increases in sensitivity have been verified compared with previous methods using a collimated x-ray beam. In this work, we demonstrate the high-spatial-resolution capabilities of our focused-x-ray-beam-based XLCT imaging system by scanning two sets of targets, differing in target size, embedded inside two tissue-mimicking cylindrical phantoms. Gd2O2S:Eu3+ targets of 200 μm and 150 μm diameter were created and embedded with edge-to-edge distances equal to their respective diameters. We scanned and reconstructed a single transverse section and successfully demonstrated that a focused x-ray beam with an average dual-cone size of 125 μm could separate the targets in both phantoms with good shape and location accuracy. We have also improved the current XLCT imaging system to make it feasible for fast three-dimensional XLCT scanning.
Bone and Skeletal Imaging, Segmentation, Registration, and Decision-making
Application of a novel ultra-high resolution multi-detector CT in quantitative imaging of trabecular microstructure
G. Shi, S. Subramanian, Q. Cao, et al.
Purpose: To evaluate the performance of a novel ultra-high resolution multi-detector CT scanner (Canon Aquilion Precision UHR CT), capable of visualizing ~150 μm details, in quantitative assessment of bone microarchitecture. Compared to conventional CT, the spatial resolution of UHR CT begins to approach the size of the trabeculae. This might enable measurements of microstructural correlates of osteoporosis, osteoarthritis, and other bone diseases. Methods: The UHR CT system features a 160-row x-ray detector with 250x250 μm pixels (measured at isocenter) and a custom-designed x-ray source with a 0.4x0.5 mm focal spot. Visualization of high-contrast details down to ~150 μm has been achieved on this device, which is now commercially available for clinical use. To evaluate the performance of UHR CT in quantification of bone microstructure, we imaged a variety of human bone samples (including ulna, hamate, radius, and vertebrae) embedded in a ~16 cm diameter plastic cylinder and in an anthropomorphic thorax phantom (QRM-Thorax, QRM GmbH). Helical UHR CT acquisitions (120 kVp tube voltage) were acquired at scan exposures ranging from 375 mAs down to 5 mAs. For comparison, the samples were also imaged using a Normal Resolution (NR) mode available on the scanner, involving 500 μm slice thickness, an exposure of 50 mAs, and a focal spot of 0.6x1.3 mm. We obtained micro-CT (μCT) of the bone samples at ~28 μm voxel size as a gold-standard reference. Geometric measurements of bone microstructure were performed in 17 regions of interest (ROIs) distributed throughout the bones of the phantoms; image registration was used to place the ROIs at corresponding locations in the UHR CT and NR CT. Trabecular thickness (Tb.Th), trabecular spacing (Tb.Sp), and bone volume fraction (BV/TV) were obtained. The UHR and NR imaging protocols were compared in terms of correlations to μCT and error of the trabecular measurements. The effect of dose on trabecular morphometry was also studied for UHR CT.
Furthermore, we evaluated the sensitivity of texture features of trabecular bone (recently proposed as an alternative to geometric indices of microstructure) to the imaging protocol. Image texture evaluation was performed using ~150 regions of interest (ROIs) across all bone samples. Three-dimensional Gray Level Co-occurrence Matrix (GLCM) and Gray Level Run Length Matrix (GLRM) features were extracted for each ROI. We analyzed the correlation and concordance correlation coefficient (CCC) of the mean ROI values of texture features obtained using the UHR and NR modes. Results: UHR CT reconstructions of bone samples clearly demonstrated improved visualization of the trabeculae compared to NR CT. UHR CT achieved substantially better correlations for all three metrics of bone microstructure, in particular for BV/TV (correlation coefficient of 0.91 for UHR CT compared to 0.84 for NR CT) and Tb.Sp (correlation of 0.74 for UHR CT and 0.047 for NR CT). The error obtained with UHR CT was generally smaller than that of NR CT. For Tb.Sp, the mean deviation from μCT (averaged across all bone samples) was only ~0.07 for UHR CT, compared to 0.25 for NR CT. Analysis of the reproducibility of texture features of trabecular bone between UHR CT and NR CT revealed fair correlations (<0.7) for the majority of GLCM features, but relatively poor CCC (e.g., 0.02 for Energy and 0.04 for Entropy). The magnitude of texture metrics is particularly affected by the enhanced spatial resolution of UHR CT. Conclusion: The recently introduced UHR CT achieves improved correlation and reduced error in measurements of trabecular bone microstructure compared to conventional-resolution CT. Future development of diagnostic strategies based on textural biomarkers derived from UHR CT will need to account for the potential sensitivity of texture features to image resolution.
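Of the three microstructure metrics compared above, bone volume fraction (BV/TV) is the simplest to state: the ratio of segmented bone voxels to total voxels in an ROI. A minimal sketch on a tiny illustrative binary ROI:

```python
# Sketch: bone volume fraction (BV/TV) from a binary voxel ROI (bone = 1).

def bv_tv(roi):
    """roi: nested lists of 0/1 voxels, shape planes x rows x cols."""
    flat = [v for plane in roi for row in plane for v in row]
    return sum(flat) / len(flat)

# A 2x2x2 toy ROI with 4 of 8 voxels segmented as bone:
roi = [[[1, 0], [0, 1]],
       [[1, 1], [0, 0]]]
frac = bv_tv(roi)
```

Tb.Th and Tb.Sp are more involved (local thickness of the bone and marrow phases, respectively), but all three are computed from the same segmented ROI.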
CT-based characterization of transverse and longitudinal trabeculae and its applications
Osteoporosis is a common age-related disease characterized by reduced bone mineral density (BMD), micro-structural deterioration, and enhanced fracture-risk. Although BMD is clinically used to define osteoporosis, there is compelling evidence that bone micro-structural properties are strong determinants of bone strength and fracture-risk. Reliable measures of effective trabecular bone (Tb) micro-structural features are of paramount clinical significance. Tb consists of transverse and longitudinal micro-structures, and there is a hypothesis that transverse trabeculae improve bone strength by arresting buckling of longitudinal trabeculae. In this paper, we present a new method, based on emerging clinical CT, for characterizing transverse and longitudinal trabeculae, validate the method, and examine its application in human studies. Specifically, we examine repeat-scan CT reproducibility, and evaluate the relationships of these measures with gender and body size using human CT data from the Iowa Bone Development Study (IBDS) (n = 99; 49 female). Based on a cadaveric ankle study (n = 12), both transverse and longitudinal Tb measures were found to be reproducible (ICC > 0.94). It was observed in the IBDS human data that males have significantly higher trabecular bone measures than females for both inner (p < 0.05) and outer (p < 0.01) regions of interest (ROIs). For weight, Spearman correlations ranged from 0.43 to 0.48 for inner-ROI measures and from 0.50 to 0.52 for outer-ROI measures in females, versus 0.30-0.34 and 0.23-0.25 in males. Correlation with height was lower (0.36-0.39) but still mostly significant for females. No association of trabecular measures with height was found for males.
Graph Laplacian learning based Fourier Transform for brain network analysis with resting state fMRI
In recent decades, graph signal processing techniques have demonstrated their effectiveness in tackling neuroimaging problems. However, most of these tools rely on predefined graphs to conduct spectral analysis, which cannot always be satisfied due to the complexity of brain structure. We therefore propose a data-driven signal processing framework (namely, a graph Laplacian learning based Fourier transform) that can effectively estimate the graph structure from the data and then conduct the Fourier transform to analyze its spectral properties. We validate the proposed method on a large real dataset, and the experimental results demonstrate its superiority over traditional methods.
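Once a Laplacian is available (the framework above learns it from the fMRI data), the graph Fourier transform of a signal x is Uᵀx, where U collects the Laplacian's eigenvectors. A minimal sketch on a fixed 3-node graph standing in for a learned brain graph:

```python
# Sketch: graph Fourier transform (GFT) given a (here fixed) graph Laplacian.
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)   # triangle-graph adjacency
L = np.diag(A.sum(axis=1)) - A           # combinatorial Laplacian D - A
eigvals, U = np.linalg.eigh(L)           # eigenvalues in ascending order

x = np.ones(3)                           # a constant signal across nodes
xhat = U.T @ x                           # graph Fourier coefficients

# A constant signal is a pure "DC" component: all of its energy lies in
# the zero-eigenvalue (lowest graph frequency) coefficient.
```

In the proposed framework, the spectral coefficients of fMRI signals on the learned graph play the role that frequency components play in classical Fourier analysis.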
Evaluation of intensity-based deformable registration of multi-parametric MRI for radiomics analysis of the prostate
Stephanie Alley, Andrey Fedorov, Cynthia Menard, et al.
Multi-parametric magnetic resonance imaging (mpMRI) has been shown to be a powerful tool in the detection, treatment, and prognosis of prostate cancer. The pre-processing of MR images prior to any type of quantitative analysis is crucial to the quality, reproducibility, and reliability of the results. Co-registration of mpMRI data, in particular, plays an important role in combining the imaging data from each MR modality (i.e., T2, diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) imaging) for use in machine learning tools and quantitative analyses. The aim of intensity-based deformable registration is to determine a transformation that relates one image to another. The ideal transformation is both biologically and mathematically sound (a smooth and regular transformation) and provides the highest degree of alignment achievable with the least amount of inference or loss of information. Despite the necessity for accurate co-registration of prostate mpMRI, there is no standardized optimal approach. Furthermore, once a method has been chosen, an accurate assessment of registration quality can be difficult to establish using many of the standard techniques. These methods are typically only surrogate measures of accuracy that may produce misleading results. Thus, the determination of a strategy for consistent alignment of prostate imaging data, as well as a more strategic choice of registration quality evaluation, could prove particularly useful in the emerging context of radiomics analysis for lesion characterization. We aim to accomplish this by investigating the quality of state-of-the-art co-registration obtainable with commonly used open-source toolkits.
Automatic measurement of extra-axial CSF from infant MRI data
The quantification of cerebrospinal fluid (CSF) in the human brain has been shown to play an important role in early postnatal brain development. Extra-axial CSF (EA-CSF), the CSF in the subarachnoid space, is a promising marker for the early detection of children at risk for neurodevelopmental disorders such as Autism Spectrum Disorder (ASD). Yet, non-ventricular CSF quantification, in particular extra-axial CSF quantification, is not supported in the major neuroimaging software solutions, such as FreeSurfer. Most current structural image analysis packages mask out the extra-axial CSF space in one of the first pre-processing steps. A quantitative protocol was previously developed by our group to objectively measure total EA-CSF volume using a pipeline workflow implemented as a series of Python scripts. While this solution worked for our specific lab, a graphical-user-interface-based tool is necessary to facilitate the computation of extra-axial CSF volume across a wide array of neuroimaging studies and research labs. This paper presents the development of a novel open-source, cross-platform, user-friendly software tool, called Auto-EACSF, for the automatic computation of this extra-axial CSF volume. Auto-EACSF allows neuroimaging labs to quantify extra-axial CSF in their neuroimaging studies in order to investigate its role in normal and atypical brain development.
Cardiac Imaging and Nanoparticle Imaging
Study of the effect of boundary conditions on fractional flow reserve using patient specific coronary phantoms
Kelsey N. Sommer, Lauren M. Shepard, Vijay Iyer, et al.
Purpose: 3D printing has become an accepted radiological tool that allows accurate physical rendering of organs for diagnosis and treatment planning. Use of 3D-printed phantoms to replicate blood flow conditions has been reported; however, comprehensive studies comparing in-patient and in-vitro measurements are scarce. We propose to study whether 3D-printed patient-specific coronary benchtop models can be used to study how variations in outflow boundary conditions influence benchtop fractional flow reserve (FFR) measurements and how these compare with a research CT-FFR software. Materials and Methods: Fifty-two CT-derived patient-specific 3D-printed coronary phantoms were used for comprehensive flow experiments, and a benchtop FFR (B-FFR) was evaluated along the diseased arteries. A programmable cardiac pulsatile pump provided six coronary outflow rates equally distributed between normal and hyperemic blood flow conditions (250-500 mL/min). B-FFR results were compared to catheterization laboratory invasive FFR (I-FFR) measurements and a CT-FFR research software. The effect of coronary outflow changes was compared with the catheterization laboratory diagnosis using receiver operating characteristic (ROC) curves and the area under the ROC curve (AUROC). Results: The highest AUROC was for B-FFR-500, 0.82 (95% CI: 0.65-0.92), and gradually decreased as the flow rate decreased, to 0.79 for B-FFR-250 (95% CI: 0.70-0.87). The CT-FFR AUROC was 0.80 (95% CI: 0.69-0.86). Conclusion: 3D-printed patient-specific coronary phantoms and controlled flow experiments demonstrated significant agreement between hyperemic simulated flow (B-FFR-500) and I-FFR. We also observed non-negligible variations of the B-FFR for small changes in coronary outflow rate, implying that slight changes in outflow conditions may result in a change of diagnosis, especially in the 0.75-0.85 FFR range.
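For context, FFR itself, whether invasive, benchtop, or CT-derived, is the ratio of mean pressure distal to the lesion to mean proximal (aortic) pressure under hyperemia, with values below roughly 0.80 commonly read as hemodynamically significant. A minimal sketch with illustrative pressures:

```python
# Sketch: fractional flow reserve as a distal-to-proximal pressure ratio.

def ffr(p_distal_mmHg, p_aortic_mmHg):
    """FFR = mean distal pressure / mean aortic pressure (under hyperemia)."""
    return p_distal_mmHg / p_aortic_mmHg

def significant(ffr_value, cutoff=0.80):
    """Commonly used clinical threshold; the 0.75-0.85 range is the gray zone."""
    return ffr_value < cutoff

# Illustrative hyperemic pressures across a stenosis:
value = ffr(72.0, 96.0)   # 0.75 -> hemodynamically significant
```

The benchtop B-FFR in this study is obtained from the same ratio, using pressures measured in the printed phantom at controlled pump outflow rates.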
Quantitative estimation of flow rate to fill the intravascular volume (FRIV) for CT myocardial perfusion imaging
Hao Wu, Brendan L. Eck, Jacob Levi, et al.
Model-based analysis of CT myocardial perfusion imaging (CT-MPI) constrains the form of the impulse response according to physiologic assumptions and includes parameters, such as flow, that can be physiologically interpreted. However, if too many parameters exist in the model, it can lead to unreliable parameter estimates. A peculiarity of perfusion models is that flow does not explicitly depend on the time delay of bolus arrival, yet a stenosis creates a measurable time delay along an affected vessel. To address this, we propose a new metric, the flow rate to fill the intravascular volume (FRIV), defined as the intravascular blood volume divided by the sum of the time delay and the intravascular transit time. This index is affected both by the appearance time and by the reduced amount of contrast agent flowing in the affected vessel tree. We evaluate FRIV for a model from the literature, the adiabatic approximation of tissue homogeneity (AATH), and compare it to myocardial blood flow (MBF). For evaluation, we use a physiologic simulator, a digital CT-MPI phantom at different x-ray dose levels, and in vivo CT-MPI data from a porcine model with and without partial occlusion of the LAD coronary artery with known pressure-wire fractional flow reserve. FRIV shows much better precision than MBF. For example, at a simulated MBF of 100 mL/min/100 g and nominal dose (200 mAs) in the digital simulator, MBF and FRIV give coefficients of variation (CV) of 0.33 and 0.09, respectively, using the AATH model. At 50% of nominal dose (100 mAs), the results are 0.45 and 0.16, respectively. In a porcine model of coronary artery stenosis, FRIV shows higher CNR than MBF and properly detects ischemia.
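The proposed index is defined explicitly above, so it can be sketched directly: FRIV is the intravascular blood volume divided by the sum of the bolus time delay and the intravascular transit time. The units and numbers below are illustrative, not measured values.

```python
# Sketch of the FRIV index as defined above:
# FRIV = intravascular blood volume / (time delay + intravascular transit time)

def friv(blood_volume_ml_per_100g, time_delay_s, transit_time_s):
    return blood_volume_ml_per_100g / (time_delay_s + transit_time_s)

# A stenosis lengthens the bolus time delay, lowering FRIV even when the
# intravascular volume is unchanged (illustrative values):
healthy  = friv(10.0, time_delay_s=1.0, transit_time_s=4.0)
stenosed = friv(10.0, time_delay_s=4.0, transit_time_s=6.0)
```

This is the mechanism by which FRIV captures delay information that the flow parameter of a perfusion model does not explicitly see.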
An implementation of data assimilation techniques for transmural visualization of action potential propagation in cardiac tissue
Christopher Beam, Cristian Linte, Niels F. Otani
A number of models have been put forward which describe the motion and propagation of action potentials within cardiac muscle tissue. The information produced by these models can be unverifiable, as no techniques currently exist to accurately measure voltage within the walls of the cardiac tissue, especially in an in vivo environment. In most situations it is much simpler to measure the contractile motion of the cardiac muscle, which is one of the results of the propagation of these action potentials. Prior work has suggested that one can solve an inverse problem to derive the action potentials present in the cardiac tissue from measurements of the displacement caused by the contractile motion; nevertheless, the solutions to this inverse problem degrade quickly in the face of error in the measurements of these displacements. In our work, we show that one potential solution for reducing the effects of these errors is through the implementation of the Unscented Kalman Filter. This technique allows us to assimilate our error-prone measurements with knowledge of an electrophysiological model to improve our estimates and help refine our solutions to the inverse problem. Using this process, we are able to solve the one dimensional problem in a way that reduces the error present in our estimates significantly, which, in turn, allows us to more accurately resolve the electrical behavior in our system.
Fully automated segmentation of the right ventricle in patients with repaired Tetralogy of Fallot using U-Net
Christopher T. Tran, Martin Halicek, James D. Dormer, et al.
Cardiac magnetic resonance imaging (CMR) is considered the gold-standard imaging modality for volumetric analysis of the right ventricle (RV), an especially important practice in evaluation of heart structure and function in patients with repaired Tetralogy of Fallot (rTOF). In clinical practice, however, this requires time-consuming manual delineation of the RV endocardium in multiple 2-dimensional (2D) slices at multiple phases of the cardiac cycle. In this work, we employed a U-Net based 2D convolutional neural network (CNN) classifier for fully automatic segmentation of the RV blood pool. Our dataset comprised 5,729 short-axis cine CMR slices taken from 100 individuals with rTOF. Training of our CNN model was performed on images from 50 individuals while validation was performed on images from 10 individuals. Segmentation results were evaluated by Dice Similarity Coefficient (DSC) and Hausdorff Distance (HD). Use of the CNN model on our testing group of 40 individuals yielded a median DSC of 90% and a median 95th percentile HD of 5.1 mm, demonstrating good performance in these metrics when compared to literature results. Our preliminary results suggest that our method can be effective in automating RV segmentation.
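The two evaluation metrics used above can be sketched on small binary masks: the Dice similarity coefficient, and, for brevity, the exact (rather than 95th-percentile) Hausdorff distance between foreground point sets, in pixel units. The masks are illustrative.

```python
# Sketch: Dice similarity coefficient and Hausdorff distance between two
# segmentations represented as sets of (row, col) foreground pixels.

def dice(a, b):
    """DSC = 2|A intersect B| / (|A| + |B|)."""
    return 2.0 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Max over both directions of the farthest nearest-neighbor distance."""
    def directed(p, q):
        return max(min(((px - qx) ** 2 + (py - qy) ** 2) ** 0.5
                       for qx, qy in q) for px, py in p)
    return max(directed(a, b), directed(b, a))

auto   = {(0, 0), (0, 1), (1, 0), (1, 1)}   # hypothetical CNN output
manual = {(0, 0), (0, 1), (1, 0), (2, 0)}   # hypothetical expert contour
d  = dice(auto, manual)
hd = hausdorff(auto, manual)
```

The 95th-percentile HD reported in the paper replaces the outer `max` over distances with their 95th percentile, making the metric robust to a few outlier boundary pixels.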
A spectral CT study on iodine augmentation of radiation therapy and its potential for combination with chemotherapy
High-Z based nanoparticles (NP) are emerging as promising agents for both cancer radiotherapy (RT) and CT imaging. NPs can be delivered to tumors via the enhanced permeability and retention (EPR) effect, and they preferentially accumulate in the tumor's perivascular region. Both gold and iodine NPs produce low-energy, short-range photoelectrons during RT, boosting radiation dose. Using spectral CT imaging, we sought to investigate (1) whether iodine nanoparticle augmentation of RT increases vascular permeability in solid tumors, and (2) whether iodine-RT induced changes in tumor vascular permeability improve delivery of nanoparticle-based chemotherapeutics. In vivo studies were performed in a carcinogen-induced and genetically engineered primary mouse model of soft tissue sarcoma. Tumor-bearing mice in the test group were intravenously injected with liposomal iodine (Lip-I) (1.32 g I/kg) on day 0. On day 1, both test (with Lip-I) and control (without Lip-I) mice received RT (single dose, 10 Gy). One day post-RT (day 2), all mice were intravenously injected with liposomal gadolinium (Lip-Gd) (0.32 g Gd/kg), a surrogate of a nanoparticle chemotherapeutic agent. Three days later (day 5), mice were imaged on our hybrid spectral micro-CT system. A dual-source pre-clinical CT prototype that combines a photon counting detector (PCD) and an energy integrating detector (EID) in a single hybrid system served as our imaging device. The results demonstrate that Lip-I augmented RT, resulting in increased tumor vascular permeability compared to control mice treated with RT alone. Consequently, Lip-I + RT treated mice demonstrated a 4-fold higher intra-tumoral accumulation of Lip-Gd compared to mice treated with RT alone. In conclusion, our work suggests that Lip-I augments RT-induced effects on tumor vasculature, resulting in increased vascular permeability and higher intratumoral deposition of chemotherapeutic nanoparticles.
Time course study of a gold nanoparticle contrast agent
Increasing interest in in vivo preclinical imaging has made micro-computed tomography (micro-CT) a powerful research tool for studying tissue morphology and monitoring disease status in rodents. Although micro-CT images have high contrast for bone, the contrast between various soft tissues is typically low. To overcome this inherent issue, attenuating exogenous contrast agents are used, which allow contrast enhancement in vascular and abdominal organs. The aim of this study is to determine the time course of computed tomography (CT) contrast enhancement of a gold nanoparticle blood-pool contrast agent. Six healthy female C57BL/6 mice were anesthetized and imaged after receiving an injected dose of MVivo™ gold nanoparticle blood-pool contrast agent. Micro-CT scans were acquired at 0, 0.25, 0.5, 0.75, 1, 2, 4, 8, 24, 48 and 72 hours after injection. The mean CT number was determined in a region of interest in 7 organs. No contrast enhancement was observed in the bladder, kidneys or muscle. However, contrast enhancement was clearly high in both the right ventricle and the vena cava, even at a low dose of contrast agent. A single injection of the blood-pool contrast agent can thus be used for investigations of cardiac or vascular disease. An optimal cardiac imaging window during the first hour after injection was determined. Furthermore, the long-lasting contrast enhancement can be very useful for protocols that require long acquisition times.
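The time-course measurement described above reduces, per scan, to averaging CT numbers inside an organ region of interest. A minimal numpy sketch, with a simulated volume and mask standing in for the real micro-CT data and organ contours (all values are illustrative):

```python
import numpy as np

def roi_mean_hu(volume_hu, roi_mask):
    """Mean CT number (HU) inside a binary region-of-interest mask."""
    return float(volume_hu[roi_mask].mean())

# Simulated soft-tissue volume with blood-pool enhancement inside the ROI.
rng = np.random.default_rng(0)
volume = rng.normal(40.0, 5.0, size=(32, 32, 32))
mask = np.zeros(volume.shape, dtype=bool)
mask[10:20, 10:20, 10:20] = True
volume[mask] += 300.0

timepoints_h = [0, 0.25, 0.5, 0.75, 1, 2, 4, 8, 24, 48, 72]  # scan schedule (h)
print(round(roi_mean_hu(volume, mask)))  # ~340 HU in the enhanced ROI
```

Repeating this per organ and per time point yields the enhancement curves from which the cardiac imaging window is read off.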
Poster Session
Comparison of different vessel segmentation methods for automated microaneurysm detection in retinal images using convolutional neural networks
Meysam Tavakoli, Mahdieh Nazar
Image processing techniques provide important assistance to physicians and relieve their workload in different tasks. In particular, identifying objects of interest such as lesions and anatomical structures in an image is a challenging and iterative process that computerized approaches can perform successfully. Microaneurysm (MA) detection is a crucial step in retinal image analysis algorithms, supporting the tracking of progression and, ultimately, the identification of diabetic retinopathy (DR) in retinal images. The objective of this study is to apply three retinal vessel segmentation methods, the Laplacian-of-Gaussian (LoG), the Canny edge detector, and the matched filter, and to compare the resulting MA detection using a combination of unsupervised and supervised learning, both in normal images and in the presence of DR. The steps of the algorithm are as follows: 1) preprocessing and enhancement; 2) vessel segmentation and masking; 3) MA detection and localization using a combination of a matching-based approach and convolutional neural networks. To evaluate the accuracy of our proposed method, we compared its output with ground truth collected by ophthalmologists. Using LoG vessel segmentation, our algorithm achieved a sensitivity of more than 85% in MA detection for 100 color images in a local retinal database and 40 images of a public dataset (DRIVE). With Canny vessel segmentation, the automated algorithm achieved a sensitivity of more than 80% across all 140 images of the two databases. Lastly, using the matched filter, the algorithm achieved a sensitivity of more than 87% in MA detection on both the local and DRIVE datasets.
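Of the three vessel segmentation methods compared, the Laplacian-of-Gaussian is the simplest to sketch. The snippet below is an illustrative LoG ridge-enhancement step on a toy image; the sigma and threshold are assumptions, not the paper's values:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_vessel_map(image, sigma=2.0, thresh=0.02):
    """Binary map of bright ridge-like (vessel-like) structures via a LoG filter."""
    response = -gaussian_laplace(image.astype(float), sigma=sigma)
    return response > thresh

# Toy image: a bright horizontal "vessel" on a dark background.
img = np.zeros((64, 64))
img[30:33, :] = 1.0
mask = log_vessel_map(img)
print(mask[31].all(), mask[5].any())  # vessel row detected, background row clean
```

In the pipeline above, the resulting vessel mask is used to exclude vasculature before the MA candidates are located.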
CrohnIPI: An endoscopic image database for the evaluation of automatic Crohn's disease lesions recognition algorithms
Wireless capsule endoscopy (WCE) allows medical doctors to examine the interior of the small intestine with a noninvasive procedure. This methodology is particularly important for Crohn’s disease (CD), where an early diagnosis improves treatment outcomes. Counting and identifying CD lesions in WCE videos is a time-consuming process for medical experts. In the era of deep learning, many automatic WCE lesion classifiers, requiring annotated data, have been developed. However, benchmarking classifiers is difficult due to the lack of standard evaluation data: most detection algorithms are evaluated on private datasets or on unspecified subsets of public databases. To help the development and comparison of automatic CD lesion classifiers, we release CrohnIPI, a dataset of 3498 images independently reviewed by several experts. It contains 60.55% non-pathological images and 38.85% pathological images with 7 different types of CD lesions; some of the images are multi-labeled. The dataset is balanced between pathological and non-pathological images and split into training and testing subsets. This database will be progressively enriched over the next few years, with the aim of making automatic detection algorithms converge to the most accurate system possible and of consolidating their evaluation.
Attenuation correction without structural images for PET imaging
Yang Lei, Tonghe Wang, Xue Dong, et al.
Deriving accurate attenuation correction factors for whole-body positron emission tomography (PET) images is challenging due to issues such as truncation, inter-scan motion, and erroneous transformation of structural voxel intensities to PET μ-map values. In this work, we propose a deep-learning-based attenuation correction (DL-AC) method to derive the nonlinear mapping between attenuation-corrected PET (AC PET) and non-attenuation-corrected PET (NAC PET) images for whole-body PET imaging. A 3D cycle-consistent generative adversarial network (CycleGAN) framework was employed to synthesize AC PET from NAC PET. The method learns a transformation that minimizes the difference between DL-AC PET, generated from NAC PET, and AC PET images. It also learns an inverse transformation such that the cycle NAC PET image generated from the DL-AC PET is close to the real NAC PET image. Both transformation networks are implemented as residual networks, and their outputs are judged by a fully convolutional network. A retrospective study was performed on 23 sets of whole-body PET/CT with leave-one-out cross validation. The proposed DL-AC method obtained a whole-body mean error of -0.01% ± 2.91% and a normalized mean squared error of 1.21% ± 1.73%. In summary, we propose a deep-learning-based approach to perform whole-body PET attenuation correction from NAC PET that demonstrates excellent quantification accuracy and reliability.
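The two training objectives described, matching the generated AC PET to the real one and keeping the cycle back to NAC consistent, can be summarized in a toy numpy sketch; the linear "generators" and the weight `lam` stand in for the residual networks and for a loss weighting the abstract does not state:

```python
import numpy as np

def l1(a, b):
    return float(np.abs(a - b).mean())

def training_loss(nac, ac, g_nac2ac, g_ac2nac, lam=10.0):
    """Forward mapping loss plus cycle-consistency loss, CycleGAN-style."""
    fake_ac = g_nac2ac(nac)        # DL-AC PET synthesized from NAC PET
    cycle_nac = g_ac2nac(fake_ac)  # mapped back to the NAC domain
    return l1(fake_ac, ac) + lam * l1(cycle_nac, nac)

# Toy 1D "images" and linear generators that happen to be exact inverses.
nac = np.linspace(0.0, 1.0, 100)
ac = 2.0 * nac
loss = training_loss(nac, ac, g_nac2ac=lambda x: 2.0 * x, g_ac2nac=lambda x: 0.5 * x)
print(loss)  # → 0.0 for perfectly inverse, perfectly matching generators
```

The adversarial term judged by the fully convolutional discriminator is omitted here for brevity.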
Multi-organ segmentation in pelvic CT images with CT-based synthetic MRI
Yang Lei, Xue Dong, Sibo Tian, et al.
We propose a hybrid deep learning-based method, which includes a cycle-consistent generative adversarial network (CycleGAN) and a deep attention fully convolutional network implemented as a U-Net (DAUnet), to perform volumetric multi-organ segmentation for pelvic computed tomography (CT). The proposed method first utilizes CycleGAN to generate synthetic MRI (sMRI), providing superior soft-tissue contrast. The sMRI is then fed into the DAUnet to obtain volumetric segmentations of the bladder, prostate, and rectum simultaneously via a multi-channel output. The deep attention strategy was introduced to retrieve the features most relevant to identifying organ boundaries, and deep supervision was incorporated into the DAUnet to enhance the features’ discriminative ability. Segmented contours for a patient were obtained by feeding the CT image into the trained CycleGAN to generate sMRI, which was then fed to the trained DAUnet to generate the organ contours. A retrospective study was performed with datasets from 45 patients with prostate cancer. The Dice similarity coefficient and mean surface distance for the bladder, prostate, and rectum contours were 0.94, 0.47 mm; 0.86, 0.78 mm; and 0.89, 0.85 mm, respectively. The proposed network provides accurate and consistent prostate, bladder, and rectum segmentation without the need for additional MRIs. With further evaluation and clinical implementation, this method has the potential to facilitate routine prostate-cancer radiotherapy treatment planning.
Fully automated segmentation of the temporal bone from micro-CT using deep learning
The temporal bone is a part of the lateral skull surface that contains the organs responsible for hearing and balance. Mastering surgery of the temporal bone is challenging because of its complex and microscopic three-dimensional (3D) anatomy. Virtual reality (VR) surgical simulators have proven effective for surgical training. In this paper, a fully automated method is proposed for segmenting multiple temporal-bone structures from micro computed tomography (micro-CT) images for a realistic virtual environment. The automated segmentation pipeline is based on a three-dimensional fully convolutional neural network. The proposed balanced subsampling strategy creates balanced learning among the labels of the multiple anatomical structures and reduces class imbalance. The accuracy and speed of the proposed algorithm outperform current manual and semi-automated segmentation techniques: the average Dice similarity score across all temporal-bone structures was 88%. The algorithm was also validated on low-resolution CTs scanned at other centers with scanner parameters different from those used to create the algorithm. The presented fully automated segmentation algorithm creates 3D models of multiple structures of temporal-bone anatomy from micro-CT images with sufficient accuracy to be used in VR surgical training simulators.
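The Dice similarity score reported here is 2|A∩B| / (|A| + |B|) per structure label. A small sketch on a toy label map (the structure and its size are invented for illustration):

```python
import numpy as np

def dice(pred, truth, label):
    """2|A∩B| / (|A|+|B|) for one structure label."""
    p, t = (pred == label), (truth == label)
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0

truth = np.zeros((10, 10), dtype=int)
truth[2:6, 2:6] = 1            # an invented structure mask
pred = np.zeros_like(truth)
pred[3:6, 2:6] = 1             # slightly under-segmented prediction
print(round(dice(pred, truth, 1), 3))  # → 0.857
```

Averaging this score over all structure labels gives the kind of summary figure quoted above.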
A new interactive visual-aided decision-making supporting tool to predict severity of acute ischemic stroke
Gopichandh Danala, Sai Kiran Reddy Maryada, Morteza Heidari, et al.
The advent of advanced imaging technology and better neuro-interventional equipment has resulted in timely diagnosis and effective treatment for acute ischemic stroke (AIS) due to large vessel occlusion (LVO). However, an objective clinicoradiologic correlate to identify appropriate candidates and their respective clinical outcomes is largely unknown. The purpose of this study is to develop and test a new interactive decision-making support tool to predict the severity of AIS prior to thrombectomy using a CT perfusion imaging protocol. CT image data of 30 AIS patients with LVO, assessed radiologically for their eligibility to undergo mechanical thrombectomy, were retrospectively collected and analyzed in this study. First, a computer-aided scheme automatically categorizes images into multiple sequences and indexes each slice to a specified brain location. Next, consecutive mapping is used for accurate segmentation of the brain region from the skull. The brain is then split into left and right hemispheres, followed by detection of blood in each hemisphere. Additionally, visual tools including segmentation, blood correction, sequence selection, and an index analyzer are implemented for deeper analysis. Last, blood volume in each hemisphere is compared over the sequences to observe the wash-in and wash-out rates of blood flow and assess the extent of damaged and “at-risk” brain tissue. By integrating the computer-aided scheme into a graphical user interface, the study builds a unique image feature analysis and visualization tool to observe and quantify delayed or reduced blood flow (brain “at risk” of developing AIS) in the corresponding hemisphere, which has the potential to help radiologists quickly visualize and more accurately assess the extent of AIS.
Controllability of structural brain networks in dementia
The dynamics of large-scale neural circuits is known to play an important role in both aberrant and normal cognitive functioning. Describing these phenomena is extremely important for understanding aging processes and neurodegenerative disease evolution. Modern systems and control theory offers a wealth of methods and concepts that can be applied to gain insight into the dynamic processes governing disease evolution at the patient level, evaluate treatment response, and reveal central mechanisms in a network that drive alterations in these diseases. Past research has shown that two types of controllability - modal and average controllability - are key components in the mechanistic explanation of how the brain operates in different cognitive states. Average controllability describes the role of a brain network's node in driving the system to many easily reachable states. Modal controllability, on the other hand, is employed to identify the states that are difficult to reach. The first controllability type favors highly connected areas of the brain, while the latter favors weakly connected areas. Certain areas of the brain, or nodes in the connectivity graph (structural or functional), can act as drivers and move the system (brain) into specific states of action. To determine these areas, we apply the novel concept of exact controllability and determine the minimum set and the location of driver nodes for dementia networks. Our results on structural brain networks in dementia suggest that this technique can accurately describe the different node roles in controlling trajectories of brain networks, and show the transition of some driver nodes and the conservation of others in the course of this disease.
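For a discrete linear model x(t+1) = A x(t) + B u(t) on a stabilized structural network A, average controllability of node i is commonly quantified as the trace of the controllability Gramian with B = e_i. A numpy sketch under that standard formulation; the stabilization rule, horizon, and toy network are illustrative choices, not taken from the paper:

```python
import numpy as np

def average_controllability(A, node, horizon=50):
    """Trace of the Gramian sum_k (A^k e_i)(A^k e_i)^T for a single-node input."""
    A = A / (1.0 + np.abs(np.linalg.eigvals(A)).max())  # scale to spectral radius < 1
    e = np.zeros(A.shape[0])
    e[node] = 1.0
    x, trace = e.copy(), 0.0
    for _ in range(horizon):
        trace += float(x @ x)   # ||A^k e_i||^2, the k-th Gramian trace term
        x = A @ x
    return trace

# Toy structural network: node 0 is a hub, node 3 is weakly connected.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
print(average_controllability(A, 0) > average_controllability(A, 3))
```

The hub scores higher, consistent with the statement that average controllability favors highly connected areas.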
3D distributed deep learning framework for prediction of human intelligence from brain MRI
Seungwook Han, Yan Zhang, Yihui Ren, et al.
Intelligence is a complex, multi-dimensional concept that engages multiple brain circuits. Understanding its underpinnings in the human brain requires not only accurate feature extraction from often noisy non-invasive brain imaging data (e.g., MRI), but also rigorous modeling of the complex relationships among distributed brain systems. In this work, we implement a highly scalable end-to-end computational learning framework, a 3D deep convolutional neural network (CNN), to predict fluid intelligence scores directly from 3D brain MRI without any theory- or rule-based feature engineering. We address the challenge of processing large data (44 GB of MRI) by using distributed deep learning techniques. The dataset originates from the Adolescent Brain Cognitive Development (ABCD) study, with 5832 subjects in the training set, 1251 in the validation set, and 1250 in the test set. The single-task ResNet50-3D model achieved mean squared errors of 0.73637 and 0.74535 on the validation and test sets, respectively. The multi-task ResNet50-3D model achieved mean squared errors of 0.74418 and 0.75626 on the validation and test sets, respectively. These results demonstrate not only that predicting fluid intelligence scores directly from structural and diffusion brain MRI is feasible, but also that this scalable computational learning framework could be further developed for data-driven human neurocognitive research.
Optimization of DSA image data input to a machine learning aneurysm identifier
Convolutional neural networks (CNNs) can automate the quantitative assessment of intracranial aneurysms (IAs); however, as “black-box” techniques they do not allow users to understand which image features are most important, and thus how to improve network predictions. Class-activation maps (CAMs) visualize which image regions trigger a trained CNN, lending insight into how the network makes a decision. This work investigated the use of CAMs to identify differences in network activation inside IAs for an image segmentation task, with the goal of optimizing the pre-processing framework. Three hundred and fifty angiographic images of pre- and post-coiled IAs were retrospectively collected. The IAs were manually contoured, the angiographic sequences were flattened along the time axis using different techniques (average, median, standard deviation, or minimum value), and the flattened sequences and masks were fed to a CNN tasked with IA segmentation. CAMs were output to visualize the most salient aneurysmal features. Network activation was higher in the IA peripheral region than in the IA middle region, indicating that the IA periphery is more predictive for segmentation. Flattening angiographic sequences using the average value along the time axis led to the most accurate IA segmentations, with an average Dice coefficient of 0.765 and an average Jaccard index of 0.624 over the test cohort. This work indicates that CAMs can aid in understanding a CNN’s segmentation decisions. Fine-tuning and automation of algorithms, along with preprocessing of network-input images based on these results, may improve IA predictive models.
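The four time-axis flattening strategies compared (average, median, standard deviation, minimum) each collapse a (time, height, width) angiographic run to a single 2D image; a minimal numpy sketch with simulated data:

```python
import numpy as np

def flatten(sequence, how="average"):
    """Collapse a (time, H, W) angiographic sequence along the time axis."""
    ops = {"average": np.mean, "median": np.median,
           "std": np.std, "minimum": np.min}
    return ops[how](sequence, axis=0)

rng = np.random.default_rng(1)
seq = rng.normal(100.0, 5.0, size=(20, 8, 8))  # stand-in DSA run
for how in ("average", "median", "std", "minimum"):
    assert flatten(seq, how).shape == (8, 8)
print(flatten(seq, "average").shape)  # → (8, 8)
```

Each flattened image, together with the manual mask, would then be one training sample for the segmentation CNN.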
Brain tumor segmentation using 3D mask R-CNN for dynamic susceptibility contrast enhanced perfusion imaging
Jiwoong Jeong, Yang Lei, Hui-Kuo Shu, et al.
The detection and segmentation of neoplasms are an important part of radiotherapy treatment planning, monitoring disease progression, and predicting patient outcome. In the brain, functional magnetic resonance imaging (MRI) such as dynamic susceptibility contrast enhanced (DSC) or T1-weighted dynamic contrast enhanced (DCE) perfusion MRI is an important tool for diagnosis. However, the manual contouring of these neoplasms is tedious, expensive, time-consuming, and subject to inter-observer variability. In this work, we propose a 3D Mask R-CNN method to automatically detect and segment high- and low-grade brain tumors in DSC MRI perfusion images. Twenty-two high- and low-grade patients with 50-70 perfusion time-point volumes each were used in this study. Experimental results show that our proposed method achieved an overall Dice similarity, precision, recall, and center-of-mass distance of 89%±0.03%, 90%±0.02%, 87%±0.04%, and 1.27±0.67, respectively.
Connectivity analysis of anterior nuclei of the thalamus in diffusion-MRI using constrained spherical deconvolution (CSD), multi-shell multi-tissue CSD and probabilistic DTI for ANT-DBS in epilepsy
Ruhunur Özdemir, Kai Lehtimäki M.D., Jukka Peltola M.D., et al.
The anterior nuclei of the thalamus (ANT) are a promising target for deep brain stimulation (DBS) surgery to control and reduce epileptic seizures. There are several theories on the structural connectivity of the ANT, but clear evidence is still missing. Clinical studies show that each subdivision of the ANT presents different patterns of connectivity with the hippocampus, mammillary bodies, and neocortex. Diffusion MRI is a well-known technique that non-invasively investigates the microstructural organization and orientation of biological tissues in vivo. Diffusion tensor imaging (DTI) is a widely accepted model for examining the human brain, although it does not accurately reveal the fiber orientations of complex structures due to the presence of crossing fibers. Constrained spherical deconvolution (CSD) can be used to reveal these fiber orientations, overcoming the limitations of DTI. Recent studies show that the b-value and gradient directions also play a significant role in extracting fiber orientations in such complex structures. These methods enable more accurate tractography and investigation of structural connectivity. In this paper, we demonstrate an approach for the connectivity analysis of the ANT by defining different ROIs in the Papez circuit.
Integration of multi-task fMRI for cognitive study by structure-enforced collaborative regression
Yuntong Bai, Vince D. Calhoun, Yu-Ping Wang
Task-based fMRI has been widely studied to investigate individual behavioral and cognitive traits. Integrating multi-paradigm fMRI has proven powerful for analyzing brain development, and a variety of multi-view learning methods have been developed for this purpose. Among them, collaborative regression (CoRe) combines linear regression with canonical correlation analysis (CCA) to jointly analyze multiple views of data. CoRe links multi-paradigm imaging data with phenotypical information while also enforcing their agreement across multiple views. However, CoRe overlooks group structures among regions of interest (ROIs) in the brain. To address this, we propose a novel model, structure-enforced collaborative regression (SCoRe), that takes advantage of the group structure within each view of fMRI. The model is obtained by imposing a sparse group LASSO penalty on the regression term. Our model was validated on the Philadelphia Neurodevelopmental Cohort dataset by combining multi-task fMRI data to study individuals' cognitive skills. Specifically, we adopted Wide Range Assessment Test 4 (WRAT4) scores to divide 338 participants into two groups (limited and proficient cognitive skill) and applied SCoRe to identify significant brain regions that can separate them. Through data resampling and significance analysis, we identified 17 brain regions from two fMRI paradigms as biomarkers associated with an individual's academic ability; 5 of these ROIs are shared by both paradigms. The study may also help in understanding the mechanisms underlying brain development.
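The sparse group LASSO penalty imposed on the regression term combines a group-wise L2 norm over ROIs with an element-wise L1 norm. A numpy sketch of the penalty alone; the group sizes, sqrt-size weights, and lambda values are illustrative assumptions:

```python
import numpy as np

def sparse_group_lasso(w, groups, lam1=1.0, lam2=0.1):
    """lam1 * sum_g sqrt(|g|) * ||w_g||_2  +  lam2 * ||w||_1."""
    group_term = sum(np.sqrt(len(g)) * np.linalg.norm(w[g]) for g in groups)
    return lam1 * group_term + lam2 * np.abs(w).sum()

w = np.array([0.0, 0.0, 0.0, 1.0, -2.0, 0.5])   # first "ROI" entirely inactive
groups = [np.arange(0, 3), np.arange(3, 6)]     # two toy ROIs of 3 voxels each
print(round(sparse_group_lasso(w, groups), 4))
```

The group-L2 term zeroes out whole ROIs at once while the L1 term keeps the surviving groups sparse, which is how SCoRe-style models select regions rather than isolated voxels.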
MRI-aided attenuation correction for PET imaging with deep learning
Yang Lei, Xue Dong, Tonghe Wang, et al.
We propose to integrate multi-modality images and a self-attention strategy into cycle-consistent adversarial networks (CycleGAN) to predict the attenuation-corrected (AC) positron emission tomography (PET) image from non-AC (NAC) PET and MRI. During the training stage, deep features are extracted in a 3D patch fashion from NAC PET and MRI images, and the most informative features are automatically highlighted by the self-attention strategy. The deep features are then mapped to the AC PET image by the 3D CycleGAN. During the correction stage, paired patches are extracted from a newly arrived patient's NAC PET and MRI images and fed into the trained networks to obtain the AC PET image. The proposed algorithm was evaluated using 18 patients' datasets, with six-fold cross-validation used to test the performance of the proposed method. The AC PET images generated with the proposed method show great resemblance to the reference AC PET images, and profile comparisons also indicate excellent matching between the two. The proposed method obtains mean errors ranging from -1.61% to 3.67% for all contoured volumes of interest, and the whole-brain mean error is less than 0.10%. These experimental studies demonstrate the clinical feasibility and accuracy of our proposed method.
Multiparametric MRI-guided high-dose-rate prostate brachytherapy with focal dose boost to dominant intraprostatic lesions
Multiparametric MRI (mp-MRI) can reliably identify dominant intra-prostatic lesions (DILs) in prostate cancer. Dose escalation to DILs using high-dose-rate (HDR) brachytherapy may improve tumor control probability. In this study, we retrospectively investigated a total of 17 patients treated with HDR prostate brachytherapy, each of whom had mp-MRI and CT images acquired pre-treatment. Twenty-one DILs were contoured based on mp-MRI and propagated to the CT images after registration using a newly developed deformable image registration method. A boost plan was created for each patient and optimized on the original needle pattern. In addition, separate plans were generated using a virtually implanted needle around the DIL to simulate mp-MRI-guided needle placement. Both plans were optimized to maximize DIL V150 coverage while meeting OAR sparing constraints. DIL V150, prostate coverage, and OAR sparing were compared with the original plan results. Overall, the optimized boost plans significantly escalated dose to the DILs while meeting OAR constraints. The addition of an mp-MRI-guided virtual needle facilitated increased coverage of DIL volumes, achieving a V150 ≥ 90% in 85% of DILs, compared with 57% for boost plans without an additional needle. These results strongly indicate that the proposed mp-MRI-guided DIL boost in HDR brachytherapy is feasible without violating OAR constraints. This retrospective study suggests that using mp-MRI-defined DILs to optimize needle placement, through deformable MRI-ultrasound registration in the operating room, may represent a strategy to personalize treatment delivery and improve tumor control.
Quantification of axillary lymphadenopathy from CT images of filovirus infections in non-human primates: sensitivity and evaluation of radiomics-based methods
Marcelo A. Castro, Jeffrey Solomon, Ji Hyun Lee, et al.
Estimation of lymph node size and location from computed tomography images is relevant for many clinical applications. However, no previous study has used an intra- and inter-subject, quantitative, repeated-measures design to assess axillary lymphadenopathy. During the course of filovirus infection, a marked increase in axillary lymph node volume occurs along with edema. Computed tomographic images from eight nonhuman primates exposed intramuscularly (triceps brachii) to either Ebola or Marburg virus were analyzed using radiomics features. Normal values of attenuation in the axillae and surrounding muscles were compared across several baseline acquisitions. While intra-subject variability remained constrained, inter-subject variability was large enough to encourage the use of subject-specific feature values. First- and second-order radiomics features, including those from the grey-level co-occurrence matrix and grey-level size zone matrix, were investigated. Changes in axillary space volume, mean attenuation, and attenuation distribution during filovirus infection, measured bilaterally (ipsilateral and contralateral to the exposure site), indicated that ipsilateral axillae were affected to a greater degree than contralateral axillae when compared to baseline. Use of subject-specific averaged baselines is necessary to establish normal variation and to determine whether post-exposure measurements differ significantly from baselines. A model-based classification, a Gaussian mixture model, can be used to estimate changes in the fractional volume of different tissues (fat, lymph nodes, other tissues within the axillae) from attenuation histograms. The radiomics features investigated were consistent with the other descriptors. This method has the potential to serve as a biomarker for understanding filovirus diseases and for monitoring and evaluating therapeutic options.
Evaluation of R2* as a noninvasive endogenous imaging biomarker for Ebola virus liver disease progression in nonhuman primates
The main goal of this work is to evaluate R2* as an imaging biomarker of Ebola virus disease progression in the liver. Ebola virus (EBOV) disease targets the liver among other organs, resulting in hepatocellular necrosis and degeneration, hemorrhage, and edema. In the liver, EBOV destroys cells required to produce coagulation proteins and other important components of plasma, and damages blood vessels. Impairment of vascular integrity leads to disseminated intravascular coagulation and multiorgan failure, including the lungs, kidneys, and liver. Noninvasive endogenous imaging biomarkers (e.g., R2* relaxivity from MRI) are attractive targets to monitor changes in paramagnetic substances that occur with hemorrhage and liver dysfunction during EBOV infection. R2* maps exhibit decreased relaxivity in edematous tissue due to its higher T2 relaxation time compared with non-edematous tissue. However, during later phases of infection, increased vascular congestion, hemorrhage, or thrombi may result in increased R2* because of local field inhomogeneities caused by paramagnetic molecules such as deoxyhemoglobin. In this study, R2* relaxivity was followed in rhesus monkeys at baseline and after exposure to a low-lethality variant of EBOV through a prolonged disease course. Increases in R2* relaxivity measured after the acute phase of EBOV infection reached a peak about 3 weeks after exposure and then slowly returned to normal. After the acute phase, the R2* curve roughly followed later changes in liver function tests. The lower variability of R2* in paravertebral muscles, hematocrit, and oxygen saturation suggests that the R2* changes may be liver-specific.
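Since R2* = 1/T2*, it is typically estimated from a multi-echo gradient-echo acquisition by fitting the mono-exponential decay S(TE) = S0 exp(-R2* TE). A log-linear sketch on noiseless simulated data; the echo times and R2* value are illustrative, not from this study:

```python
import numpy as np

def fit_r2star(te_s, signal):
    """Log-linear fit of S(TE) = S0 * exp(-R2* * TE); returns R2* in 1/s."""
    slope, _ = np.polyfit(te_s, np.log(signal), 1)
    return -slope

te = np.array([0.002, 0.005, 0.010, 0.020, 0.030])  # echo times in seconds
r2s_true = 60.0                                      # illustrative value, 1/s
sig = 1000.0 * np.exp(-r2s_true * te)
print(round(fit_r2star(te, sig), 1))  # → 60.0
```

On real data the fit is performed voxel-wise, and the resulting R2* map is what is tracked over the disease course.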
Intensity non-uniformity correction in MR imaging using deep learning
Xianjin Dai, Yang Lei, Yingzi Liu, et al.
Correcting or reducing the effects of voxel intensity non-uniformity (INU) within a given tissue type is a crucial issue for quantitative MR image analysis in daily clinical practice. Although it has no severe impact on visual diagnosis, the INU artifact can highly degrade the performance of automatic quantitative analyses such as feature extraction and radiomics. In this study, we present a deep learning-based approach for MR image INU correction. Specifically, a cycle generative adversarial network (GAN) was trained and tested using a cohort of 25 abdominal patients with T1-weighted MR images exhibiting INU. The results show that our cycle GAN-based method achieves higher accuracy than the most commonly used algorithm, N4ITK, and greatly speeds up the correction without any unintuitive parameter-tuning process.
Synthetic CT-aided MRI-CT image registration for head and neck radiotherapy
Yabo Fu, Yang Lei, Jun Zhou, et al.
In this study, we propose a synthetic CT (sCT) aided MRI-CT deformable image registration for head and neck radiotherapy. An image synthesis network, the cycle-consistent generative adversarial network (CycleGAN), was first trained using 25 pre-aligned CT-MRI image pairs. Given MR head and neck images, the trained CycleGAN then predicts sCT images, which are used as the MRI's surrogate in MRI-CT registration. The Demons registration algorithm was used to perform the sCT-CT registration on 5 separate datasets. For comparison, the original MRI and CT images were registered using mutual information as the similarity metric. Our results showed that the target registration errors after registration were on average 1.31 mm and 1.02 mm for MRI-CT and sCT-CT registration, respectively. The mean normalized cross correlation between the sCT and CT after registration was 0.97, indicating that the proposed method is a viable way to perform MRI-CT image registration for head and neck patients.
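The normalized cross correlation used above as an alignment score can be sketched as a zero-mean, unit-norm dot product; the "registered" image pair below is simulated:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross correlation between two images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(2)
ct = rng.normal(size=(16, 16))
sct = 0.9 * ct + 0.1 * rng.normal(size=(16, 16))  # well-aligned surrogate image
print(round(ncc(ct, ct), 3), ncc(ct, sct) > 0.9)
```

Values near 1 indicate close structural agreement, which is why the reported 0.97 supports the sCT-CT registration quality.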
Improved accuracy and precision of calcium mass score using deconvolution and partial volume correction
There are multiple quantitative methods for assessing coronary calcifications with CT, including the Agatston score, mass score, and volume score. Several studies have shown the mass score, in mg of calcium, to be the most reproducible. Since we are interested in tracking changes in individual calcifications over time as a new biomarker of vascular disease, we analyzed ways to further improve reproducibility. The conventional way to calculate calcium mass score is to sum all voxels above 130 HU and convert to mass score using a calibration constant; this does not account for blurring in the CT system. To improve coronary calcification measurements, we used Richardson-Lucy deconvolution with a measured 2D and 3D impulse response (Philips IQon) and/or partial volume correction processing. At 120 kVp, we imaged a phantom with calcium inserts and calcified cadaver hearts at three rotational orientations at high (0.4883-mm, 0.67-mm-thick) and normal clinical (0.4883-mm, 2.5-mm-thick) resolution. For each calcification in the clinical-resolution images, the average percentage differences from the actual values of the QRM phantom calcification inserts were 0.07, 0.13, and 0.22 for partial volume correction, 2D deconvolution, and 3D deconvolution processing, respectively, an improvement over -0.41 for the non-processed clinical evaluation. Similarly, in the cadaver hearts, the average percentage differences relative to the high-resolution results were -0.21, 0.29, and 0.09, respectively, an improvement over -0.51 for the clinical evaluation. In all cases, precision across rotation angles improved: the coefficients of variation were 0.09, 0.10, and 0.11, respectively, in the same processing order, a slight improvement over 0.13 for the clinical evaluation.
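The conventional mass-score computation described above can be sketched directly: sum the CT numbers of voxels at or above the 130 HU threshold, then scale by voxel volume and a scanner calibration constant. All numeric values below are illustrative, not the paper's:

```python
import numpy as np

def mass_score_mg(volume_hu, voxel_vol_mm3, calib, thresh=130.0):
    """Sum HU of voxels >= thresh, scaled by voxel volume and calibration."""
    return float(calib * voxel_vol_mm3 * volume_hu[volume_hu >= thresh].sum())

vol = np.full((4, 4, 4), 50.0)   # soft-tissue background
vol[1:3, 1:3, 1:3] = 400.0       # 8 "calcified" voxels
score = mass_score_mg(vol, voxel_vol_mm3=0.25, calib=0.0008)
print(score)  # 0.0008 * 0.25 * (8 * 400)
```

Deconvolution and partial volume correction would be applied to `vol` before this thresholded sum, which is precisely where the accuracy gains reported above come from.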
Automatic quantification of hip osteoarthritis from low-quality x-ray images
Diagnosis of hip osteoarthritis is conventionally made through a manual measurement of the joint distance between the femoral head and the acetabular cup, a difficult and often error-prone process. Recently, Chen et al.1 proposed a fully automated technique based on landmark displacement estimation from multiple image patches that is able to accurately segment bone structures around the pelvis. This technique was shown to be comparable to or better than state-of-the-art random-forest-based methods. In this paper, we report on the implementation and evaluation of this method on low-resolution datasets typically available in parts of the developing world where high-resolution X-ray imaging technology is unavailable. We employed a dataset of hip joint images collected at a local clinic and provided to us in JPEG format at 1/3 the resolution of typical DICOM X-ray images. In addition, we employed the Dice similarity coefficient, the average Euclidean distance between corresponding landmarks, and the Hausdorff distance to better evaluate the method relative to diagnosis of hip osteoarthritis. Our results show that the proposed method is robust with JPEG images at 1/3 the resolution of DICOM data. Additional preliminary results quantify the accuracy of the approach as a function of decreasing resolution. We believe these results have important implications for application in clinical settings where modern X-ray equipment is not available.
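The three evaluation metrics named above are standard; a compact sketch (assuming binary masks and landmark coordinate arrays as NumPy inputs, not the authors' implementation) might look like:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity between two boolean segmentation masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def mean_landmark_distance(p, q):
    """Average Euclidean distance between corresponding landmarks, shape (N, 2)."""
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float), axis=1).mean())

def hausdorff_distance(p, q):
    """Symmetric Hausdorff distance between point sets of shape (N, 2) and (M, 2)."""
    d = np.linalg.norm(np.asarray(p, float)[:, None, :] - np.asarray(q, float)[None, :, :], axis=2)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```

The Dice coefficient rewards overlap, the mean landmark distance measures average localization error, and the Hausdorff distance captures the worst-case boundary disagreement, which is why the three together give a fuller picture than any one alone.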
Towards assessment of histopathology correlation with multiple imaging modalities: A pilot study using a visible mouse
Histopathology is the accepted gold standard for identifying cancerous tissues. Validation of in vivo imaging signals with precisely correlated histopathology can potentially improve the delineation of tumors in medical images for focal therapy planning, guidance, and assessment. Registration of histopathology with other imaging modalities is challenging due to soft tissue deformations that occur between imaging and histological processing of tissue. In this paper, a framework for precise registration of medical images and pathology using white-light images (photographs) is presented. A euthanized normal mouse was imaged using four imaging modalities: CBCT, PET-CT, MRI, and micro CT. The mouse was then fixed in an embedding medium, optimal cutting temperature (OCT) compound, with co-registration markers and sliced at 50 μm intervals in a cryostat microtome. The device automatically photographed each slice with a mounted camera and reconstructed the 3D white-light image of the mouse by co-registering consecutive slices. The white-light image was registered to the four imaging modalities based on the external contours of the mouse. Six organs (brain, liver, stomach, pancreas, kidneys, and bladder) were contoured on the MR image, while the skeletal structure and lungs were segmented on the CBCT image. The contours of these structures were propagated to the additional imaging modalities based on the registrations to the white-light image and were analyzed qualitatively by developing an anatomical atlas of the normal mouse defined using three imaging modalities. This work will serve as the foundation to include histopathology through the transfer of the imaged slice onto tape for histological processing.
Non-invasive prediction of lymph node risk in oral cavity cancer patients using a combination of supervised and unsupervised machine learning algorithms
A. Traverso, A. Hosni-Abdalaty, M. Hasan, et al.
In oral cavity (OC) squamous cell cancer, the incidence of occult nodal metastases varies from 20% to 50% depending on tumor size and thickness. Besides clinical and histopathological factors, image-derived biomarkers may help estimate the probability of lymph node (LN) metastasis using a non-invasive approach to further stratify patients' need for neck dissection. We investigated the role of MR-based radiomics in predicting positive lymph nodes in OC patients prior to surgery. We also investigated different supervised and unsupervised dimensionality reduction techniques, as well as different classifiers. Results showed that the combination of radiomics and clinical factors outperforms radiomics and clinical predictors alone. Overall, a combination of supervised and unsupervised machine learning algorithms appears better suited to achieving strong performance in radiomics studies.
Repeatability, reproducibility, and accuracy of a novel imaging technique for measurement of ocular axial length
The rise in short-sightedness (myopia) is a growing global health concern; myopia affected an estimated 22.9% of the world’s population in the year 2000.1, 2 We are developing a new method to determine axial length measurements of the eye, targeting the paediatric population. The purpose of this study is to assess the repeatability, reproducibility, and accuracy of our proposed imaging technique for axial length measurements in ex vivo animal eyes and human model eyes. Our non-contact optical system utilizes an 808 nm continuous wave laser. For the animal study, our system employed varying laser powers <0.6 mW, obtaining images of ex vivo porcine eyes. Comparative measurements were taken with MRI of the same samples under similar conditions. Results indicated that our system was both repeatable and reproducible. The trend for accuracy was not evident in animal eyes, but this was due to the variable ex vivo sample quality. For the model eye study, and to mimic clinical conditions, the laser intensity was reduced via optical components, resulting in a safe exposure for the human eye as outlined by IEC 60825-1:2014: a beam power of <0.4 mW and an exposure time of less than 2 seconds. Data is captured with an infrared camera. Our device will offer a unique and low-cost solution for early detection of myopia in children, enabling timely intervention and ultimately helping to control the looming myopia pandemic.
Noise reduction in the inverse solution for one-dimensional cardiac active stress reconstruction
Vignesh Venkataramani, Minyao Li, Cristian A. Linte, et al.
Visualizing action potentials within cardiac tissue enables the identification of abnormal action potential wave propagation patterns in both clinical and cardiac research settings. Otani et al. have been investigating the possibility of using 4-D (three spatial dimensions and one time dimension) mechanical deformation data, obtained from either MRI or ultrasound images, to reverse-calculate these action potential patterns. However, the inverse system is extremely sensitive to noise: a small perturbation in the signal can lead to a substantial perturbation in the solution if the perturbation has a high-frequency component. Here, we explore three noise reduction methods in an attempt to reduce the effect of noise in the input data and to regularize the calculated solution.
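The abstract does not name the three noise reduction methods. As one generic illustration of regularizing an ill-posed inverse problem of this kind, assuming the forward model can be linearized as Ax = b, a Tikhonov-damped least-squares solve looks like:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Minimize ||Ax - b||^2 + lam^2 * ||x||^2.
    lam > 0 suppresses the amplification of high-frequency noise
    that plagues the unregularized inverse, at the cost of some bias."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + (lam ** 2) * np.eye(n), A.T @ b)
```

With lam = 0 this reduces to ordinary least squares; increasing lam trades fidelity to the noisy data for smoothness of the recovered solution, which is the essential compromise in any regularized inverse calculation.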
Micro-computed tomography imaging of a rodent model of Chronic Obstructive Pulmonary Disease (COPD)
Chronic obstructive pulmonary disease is projected to become the 3rd leading cause of death worldwide by 2030. Currently, 200 million people worldwide have been diagnosed with COPD, and many more are living with undiagnosed disease. COPD has no cure and no drugs that lead to improvements in long-term survival. Drug discovery is challenging due to a poor understanding of COPD pathogenesis. To study COPD, rodent models have been developed, with daily exposures to tobacco cigarette smoke over a 6-month period inducing symptoms. Measurements are typically done on histological slides, assessing airway wall thickening and markers of emphysema. These post-mortem techniques are unable to assess how the disease is progressing or how these observed structural changes impact lung function. To identify changes in lung structure and function in a smoking exposure model, we used respiratory-gated micro-computed tomography (micro-CT) and image-based measurements of lung structure and function. Micro-CT imaging was performed in anesthetized, free-breathing mice at baseline. The mice were then subjected to 6-months of exposure to tobacco cigarettes or ambient air, and rescanned. We also performed post-mortem lung compliance tests on 3-month smoke-exposed and age-matched control mice. Significant differences between smoke-exposed and control mice were observed for lung volume and functional residual capacity, which correlate well with the results of the lung compliance testing. In vivo respiratory-gated micro-CT in free-breathing animals is sensitive to changes in lung structure and function resulting from exposure to tobacco cigarette smoke, and is an effective tool to monitor the development of COPD in rodent models.
Infarct region segmentation in rat brain T2 MR images after stroke based on fully convolutional networks
Stroke remains one of the most life-threatening diseases around the world. Rodent stroke models have been widely adopted in experimental ischemia studies for decades. Magnetic resonance imaging (MRI) has been shown effective in revealing the stroke region and associated tissues in many animal studies. Extraction of the infarct regions in rat brain MR images after stroke is crucial for further investigation such as neuro damage analysis and behavior examination. This paper aims to develop a computer-aided infarct segmentation algorithm based on a fully convolutional network (FCN) for rat stroke model analyses in MR images. In our approach, the entire procedure is divided into two major phases: skull stripping and infarct segmentation. The purpose of the skull stripping process is to provide a clean brain region, from which the infarct segmentation is executed. The same FCN is applied to both phases but with different training images and segmentation purposes. Our FCN model consists of 33 convolutional layers, 5 maximum pooling layers, and 5 upsampling layers. Residual connections are introduced into the FCN architecture for updating the weights, and the batch normalization strategy is exploited to reduce the gradient vanishing problem. To evaluate the proposed FCN framework, 35 subjects of T2-weighted MR images of the rat brain acquired from National Taiwan University, Taipei, Taiwan were utilized. Preliminary experimental results indicated that our method produced high segmentation accuracy regarding skull stripping (Dice = 98.12) and infarct segmentation (Dice = 80.47) across a number of rat brain MR image volumes.
Left ventricular myocardium segmentation in coronary computed tomography angiography using 3D deep attention u-net
Cardiovascular diseases (CVD) are the leading cause of disability and death worldwide. Many parameters based on the left ventricular myocardium (LVM), including left ventricular mass, left ventricular volume, and the ejection fraction (EF), are widely used for disease diagnosis and prognosis prediction. To investigate the relationship between parameters derived from the LVM and various heart diseases, it is crucial to segment the LVM in a fast and reproducible way. However, different diseases can affect the structure of the LVM, which increases the complexity of the already time-consuming manual segmentation work. In this work, we propose to use a 3D deep attention U-Net method to automatically segment the LVM contour in cardiac CT images. We used 50 patients’ cardiac CT images to test the proposed method. The Dice similarity coefficient (DSC), sensitivity, specificity, and mean surface distance (MSD) were 87% ± 5%, 87% ± 4%, 92% ± 3%, and 0.68 ± 0.15 mm, respectively, demonstrating the detection and segmentation accuracy of the proposed method.
Automatic detection and counting of retina cell nuclei using deep learning
The ability to automatically detect, classify, and grade retinal cells and other biological objects, and to calculate their size and number, is critically important in eye diseases such as age-related macular degeneration (AMD). In this paper, we developed an automated tool based on deep learning and the Mask R-CNN model to analyze large datasets of transmission electron microscopy (TEM) images and quantify retinal cells with high speed and precision. We considered three categories for outer nuclear layer (ONL) cells: live, intermediate, and pyknotic. We trained the model using a dataset of 24 samples and then optimized the hyper-parameters using another set of 6 samples. Applied to the test datasets, our method proved highly accurate for automatically detecting, categorizing, and counting cell nuclei in the ONL of the retina. Performance of our model was evaluated using standard metrics: mean average precision (mAP) for detection, and precision, recall, F1-score, and accuracy for categorization and counting.
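The categorization and counting metrics listed above are standard and follow directly from confusion counts; a sketch (not the authors' evaluation code) of computing them from true-positive, false-positive, and false-negative tallies:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1-score from true-positive,
    false-positive, and false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For instance, a model that finds 8 of 10 true nuclei while raising 2 false alarms scores 0.8 on all three metrics, since precision and recall happen to coincide.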