Proceedings Volume 9035

Medical Imaging 2014: Computer-Aided Diagnosis


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 5 May 2014
Contents: 17 Sessions, 126 Papers, 0 Presentations
Conference: SPIE Medical Imaging 2014
Volume Number: 9035

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 9035
  • Head, Neck, and Novel Methods
  • Prostate and Colon I
  • Vessels, Heart, and Eye I
  • Lung, Chest, and Abdomen I
  • Vessels, Heart, and Eye II
  • Breast I
  • Prostate and Colon II
  • Musculoskeletal and Miscellaneous
  • Breast II
  • Lung, Chest, and Abdomen II
  • Poster Session: Breast
  • Poster Session: Head, Neck, and Novel
  • Poster Session: Lung, Chest, and Abdomen
  • Poster Session: Prostate and Colon
  • Poster Session: Vessels, Heart, and Eye
  • Poster Session: Musculoskeletal and Miscellaneous
Front Matter: Volume 9035
Front Matter: Volume 9035
This PDF file contains the front matter associated with SPIE Proceedings Volume 9035, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Head, Neck, and Novel Methods
Early detection of Alzheimer's disease using histograms in a dissimilarity-based classification framework
Anne Luchtenberg, Rita Simões, Anne-Marie van Cappellen van Walsum, et al.
Classification methods have been proposed to detect early-stage Alzheimer's disease using Magnetic Resonance images. In particular, dissimilarity-based classification has been applied using a deformation-based distance measure. However, such an approach is not only computationally expensive but also considers only large-scale alterations in the brain. In this work, we propose the use of image histogram distance measures, determined both globally and locally, to detect very mild to mild Alzheimer's disease. Using an ensemble of local patches over the entire brain, we obtain an accuracy of 84% (sensitivity 80% and specificity 88%).
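The abstract does not spell out the implementation; a minimal sketch of dissimilarity-based classification with local histogram distances might look like the following (the chi-square distance, k-NN classifier, patch sizes, and synthetic data are all assumptions, not the authors' pipeline).
```python
# Minimal sketch: represent each patch by its distances to prototype patches,
# then classify in that dissimilarity space (synthetic stand-in data).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def patch_histogram(patch, bins=32, value_range=(0.0, 1.0)):
    """Normalized intensity histogram of one image patch."""
    hist, _ = np.histogram(patch, bins=bins, range=value_range)
    return hist / max(hist.sum(), 1)

def chi_square(h1, h2, eps=1e-8):
    """Chi-square distance between two histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def dissimilarity_features(patches, prototypes):
    """Represent each patch by its distances to a set of prototype patches."""
    return np.array([[chi_square(patch_histogram(p), patch_histogram(q))
                      for q in prototypes] for p in patches])

rng = np.random.default_rng(0)
patches = rng.random((40, 16, 16))          # 40 subjects, one local patch each
patches[20:] = 0.5 + 0.5 * patches[20:]     # class 1 slightly brighter
labels = np.array([0] * 20 + [1] * 20)
prototypes = patches[::5]                   # a few reference patches

X = dissimilarity_features(patches, prototypes)
clf = KNeighborsClassifier(n_neighbors=3).fit(X[::2], labels[::2])
print("held-out accuracy:", clf.score(X[1::2], labels[1::2]))
```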
Multi-fractal texture features for brain tumor and edema segmentation
In this work, we propose a fully automatic brain tumor and edema segmentation technique in brain magnetic resonance (MR) images. Different brain tissues are characterized using novel texture features such as piece-wise triangular prism surface area (PTPSA), multi-fractional Brownian motion (mBm) and Gabor-like textons, along with regular intensity and intensity difference features. A classical Random Forest (RF) classifier is used to formulate the segmentation task as classification of these features in multi-modal MRIs. The segmentation performance is compared with other state-of-the-art works using a publicly available dataset known as Brain Tumor Segmentation (BRATS) 2012 [1]. Quantitative evaluation is done using the online evaluation tool from the Kitware/MIDAS website [2]. The results show that our segmentation performance is more consistent and, on average, outperforms other state-of-the-art works in both training and challenge cases of the BRATS competition.
Ischemic stroke lesion segmentation in multi-spectral MR images with support vector machine classifiers
Oskar Maier, Matthias Wilms, Janina von der Gablentz, et al.
Automatic segmentation of ischemic stroke lesions in magnetic resonance (MR) images is important in clinical practice and for neuroscientific trials. The key problem is to detect largely inhomogeneous regions of varying sizes, shapes and locations. We present a stroke lesion segmentation method based on local features extracted from multi-spectral MR data that are selected to model a human observer’s discrimination criteria. A support vector machine classifier is trained on expert-segmented examples and then used to classify formerly unseen images. Leave-one-out cross validation on eight datasets with lesions of varying appearances is performed, showing our method to compare favourably with other published approaches in terms of accuracy and robustness. Furthermore, we compare a number of feature selectors and closely examine each feature’s and MR sequence’s contribution.
A ROC-based feature selection method for computer-aided detection and diagnosis
Songyuan Wang, Guopeng Zhang, Qimei Liao, et al.
Image-based computer-aided detection and diagnosis (CAD) has been a very active research topic aiming to assist physicians in detecting lesions and distinguishing benign from malignant ones. However, the datasets fed into a classifier usually suffer from a small number of samples, as well as significantly fewer samples in one class (with disease) than the other, resulting in suboptimal classifier performance. Identifying the most characterizing features of the observed data for lesion detection is critical to improving the sensitivity and minimizing the false positives of a CAD system. In this study, we propose a novel feature selection method, mR-FAST, that combines the minimal-redundancy-maximal-relevance (mRMR) framework with a selection metric, FAST (feature assessment by sliding thresholds), based on the area under a ROC curve (AUC) generated on optimal simple linear discriminants. With three feature datasets extracted from CAD systems for colon polyps and bladder cancer, we show that the space of candidate features selected by mR-FAST is more characterizing for lesion detection, with higher AUC, enabling a compact subset of superior features to be found at low cost.
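A rough sketch of ROC-based feature scoring in the spirit of mR-FAST is shown below: each feature is ranked by the AUC of a one-dimensional discriminant and then selected greedily with a correlation (redundancy) penalty. The penalty weight, agreement of this greedy rule with the authors' exact criterion, and the synthetic data are assumptions.
```python
# Sketch: AUC-based relevance per feature plus a redundancy penalty (mRMR flavor).
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_per_feature(X, y):
    # A single feature is itself a simple linear discriminant; flip if AUC < 0.5.
    aucs = np.array([roc_auc_score(y, X[:, j]) for j in range(X.shape[1])])
    return np.maximum(aucs, 1.0 - aucs)

def select_features(X, y, n_select=5, redundancy_weight=0.5):
    relevance = auc_per_feature(X, y)
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        scores = []
        for j in range(X.shape[1]):
            if j in selected:
                scores.append(-np.inf)
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            scores.append(relevance[j] - redundancy_weight * redundancy)
        selected.append(int(np.argmax(scores)))
    return selected

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)
X = rng.normal(size=(200, 20))
X[:, 0] += 1.5 * y                  # make two features informative
X[:, 1] += 1.0 * y
print("selected feature indices:", select_features(X, y))
```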
Change detection of medical images using dictionary learning techniques and PCA
Varvara Nika, Paul Babyn, Hongmei Zhu
Automatic change detection methods for identifying the changes of serial MR images taken at different times are of great interest to radiologists. The majority of existing change detection methods in medical imaging, and those of brain images in particular, include many preprocessing steps and rely mostly on statistical analysis of MRI scans. Although most methods utilize registration software, tissue classification remains a difficult and overwhelming task. Recently, dictionary learning techniques have been used in many areas of image processing, such as image surveillance, face recognition, remote sensing, and medical imaging. In this paper we present the Eigen-Block Change Detection algorithm (EigenBlockCD). It performs local registration and identifies the changes between consecutive MR images of the brain. Blocks of pixels from the baseline scan are used to train local dictionaries that are then used to detect changes in the follow-up scan. We use PCA to reduce the dimensionality of the local dictionaries and the redundancy of the data. Choosing the appropriate distance measure significantly affects the performance of our algorithm. We examine the differences between the L1 and L2 norms as two possible similarity measures in the EigenBlockCD. We show the advantages of the L2 norm over the L1 norm theoretically and numerically. We also demonstrate the performance of the EigenBlockCD algorithm for detecting changes in MR images and compare our results with those provided in the recent literature. Experimental results with both simulated and real MRI scans show that the EigenBlockCD outperforms the previous methods. It detects clinical changes while ignoring the changes due to the patient's position and other acquisition artifacts.
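The following is a rough sketch of PCA-based block change detection in the spirit of EigenBlockCD (not the authors' algorithm): blocks from the baseline image train a local PCA basis, follow-up blocks are projected onto it, and a large L2 reconstruction residual flags change. Block size, component count, and data are assumptions.
```python
# Sketch: PCA "dictionary" from baseline blocks, L2 residual on follow-up blocks.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_extraction.image import extract_patches_2d

def block_residuals(baseline, follow_up, block=8, n_components=10):
    train = extract_patches_2d(baseline, (block, block), max_patches=500,
                               random_state=0).reshape(-1, block * block)
    pca = PCA(n_components=n_components).fit(train)
    test = extract_patches_2d(follow_up, (block, block)).reshape(-1, block * block)
    recon = pca.inverse_transform(pca.transform(test))
    return np.linalg.norm(test - recon, axis=1)      # L2 residual per block

rng = np.random.default_rng(2)
baseline = rng.random((64, 64))
follow_up = baseline.copy()
follow_up[20:28, 20:28] += 0.8                       # simulated new lesion
res = block_residuals(baseline, follow_up)
print("max residual (changed blocks should dominate):", res.max())
```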
Segmentation and automated measurement of chronic wound images: probability map approach
Mohammad Faizal Ahmad Fauzi, Ibrahim Khansa, Karen Catignani, et al.
An estimated 6.5 million patients in the United States are affected by chronic wounds, with more than 25 billion US dollars and countless hours spent annually on all aspects of chronic wound care. There is a need to develop software tools to analyze wound images that characterize wound tissue composition, measure wound size, and monitor changes over time. This process, when done manually, is time-consuming and subject to intra- and inter-reader variability. In this paper, we propose a method that can characterize chronic wounds containing granulation, slough and eschar tissues. First, we generate a Red-Yellow-Black-White (RYKW) probability map, which then guides the region growing segmentation process. The red, yellow and black probability maps are designed to handle the granulation, slough and eschar tissues, respectively, found in wound tissues, while the white probability map is designed to detect the white label card for measurement calibration purposes. The innovative aspects of this work include: 1) definition of a wound-characteristics-specific probability map for segmentation, 2) computationally efficient region growing on a 4D map, and 3) auto-calibration of measurements with the content of the image. The method was applied to 30 wound images provided by the Ohio State University Wexner Medical Center, with the ground truth independently generated by the consensus of two clinicians. While the inter-reader agreement between the readers is 85.5%, the computer achieves an accuracy of 80%.
Prostate and Colon I
Reference-tissue correction of T2-weighted signal intensity for prostate cancer detection
Yahui Peng, Yulei Jiang, Aytekin Oto
The purpose of this study was to investigate whether correction of T2-weighted MR image signal intensity (SI) with respect to reference tissue improves its effectiveness for classification of regions of interest (ROIs) as prostate cancer (PCa) or normal prostatic tissue. Two image datasets collected retrospectively were used in this study: 71 cases acquired with GE scanners (dataset A), and 59 cases acquired with Philips scanners (dataset B). Through a consensus histology-MR correlation review, 175 PCa and 108 normal-tissue ROIs were identified and drawn manually. Reference-tissue ROIs were selected in each case from the levator ani muscle, urinary bladder, and pubic bone. T2-weighted image SI was corrected as the ratio of the average T2-weighted image SI within an ROI to that of a reference-tissue ROI. The area under the receiver operating characteristic curve (AUC) was used to evaluate the effectiveness of T2-weighted image SIs for differentiation of PCa from normal-tissue ROIs. AUC (± standard error) for uncorrected T2-weighted image SIs was 0.78±0.04 (dataset A) and 0.65±0.05 (dataset B). AUC for corrected T2-weighted image SIs with respect to muscle, bladder, and bone reference was 0.77±0.04 (p=1.0), 0.77±0.04 (p=1.0), and 0.75±0.04 (p=0.8), respectively, for dataset A; and 0.81±0.04 (p=0.002), 0.78±0.04 (p<0.001), and 0.79±0.04 (p<0.001), respectively, for dataset B. Correction in reference to the levator ani muscle yielded the most consistent results between GE and Philips images. Correction of T2-weighted image SI in reference to three types of extra-prostatic tissue can improve its effectiveness for differentiation of PCa from normal-tissue ROIs, and correction in reference to the levator ani muscle produces consistent T2-weighted image SIs between GE and Philips MR images.
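The correction itself is a simple per-case ratio; a minimal sketch with synthetic data (the simulated scanner gains and SI values are assumptions, only the ratio-then-AUC pipeline mirrors the description above) is:
```python
# Sketch: divide each ROI's mean SI by the mean SI of a reference-tissue ROI
# from the same case, then evaluate with AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

def corrected_si(roi_means, reference_means):
    """Ratio of ROI mean SI to reference-tissue mean SI, case by case."""
    return np.asarray(roi_means) / np.asarray(reference_means)

rng = np.random.default_rng(3)
n = 100
labels = rng.integers(0, 2, n)                   # 1 = cancer ROI, 0 = normal
scanner_gain = rng.uniform(0.5, 2.0, n)          # simulated inter-scanner scaling
roi_si = scanner_gain * (400 - 120 * labels + rng.normal(0, 40, n))
muscle_si = scanner_gain * (300 + rng.normal(0, 20, n))

print("AUC, uncorrected SI:", roc_auc_score(labels, -roi_si))
print("AUC, muscle-corrected SI:",
      roc_auc_score(labels, -corrected_si(roi_si, muscle_si)))
```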
Computer extracted texture features on T2w MRI to predict biochemical recurrence following radiation therapy for prostate cancer
Shoshana B. Ginsburg, Mirabela Rusu, John Kurhanewicz, et al.
In this study we explore the ability of a novel machine learning approach, in conjunction with computer-extracted features describing prostate cancer morphology on pre-treatment MRI, to predict whether a patient will develop biochemical recurrence within ten years of radiation therapy. Biochemical recurrence, which is characterized by a rise in serum prostate-specific antigen (PSA) of at least 2 ng/mL above the nadir PSA, is associated with increased risk of metastasis and prostate cancer-related mortality. Currently, risk of biochemical recurrence is predicted by the Kattan nomogram, which incorporates several clinical factors to predict the probability of recurrence-free survival following radiation therapy (but has limited prediction accuracy). Semantic attributes on T2w MRI, such as the presence of extracapsular extension and seminal vesicle invasion and surrogate measurements of tumor size, have also been shown to be predictive of biochemical recurrence risk. While the correlations between biochemical recurrence and factors like tumor stage, Gleason grade, and extracapsular spread are well-documented, it is less clear how to predict biochemical recurrence in the absence of extracapsular spread and for small tumors fully contained in the capsule. Computer-extracted texture features, which quantitatively describe tumor micro-architecture and morphology on MRI, have been shown to provide clues about a tumor's aggressiveness. However, while computer-extracted features have been employed for predicting cancer presence and grade, they have not been evaluated in the context of predicting risk of biochemical recurrence. This work seeks to evaluate the role of computer-extracted texture features in predicting risk of biochemical recurrence on a cohort of sixteen patients who underwent pre-treatment 1.5 Tesla (T) T2w MRI. We extract a combination of first-order statistical, gradient, co-occurrence, and Gabor wavelet features from T2w MRI. To identify which of these T2w MRI texture features are potential independent prognostic markers of PSA failure, we implement a partial least squares (PLS) method to embed the data in a low-dimensional space and then use the variable importance in projections (VIP) method to quantify the contributions of individual features to classification on the PLS embedding. In spite of the poor resolution of the 1.5 T MRI data, we are able to identify three Gabor wavelet features that, in conjunction with a logistic regression classifier, yield an area under the receiver operating characteristic curve of 0.83 for predicting the probability of biochemical recurrence following radiation therapy. In comparison to both the Kattan nomogram and semantic MRI attributes, the ability of these three computer-extracted features to predict biochemical recurrence risk is demonstrated.
Multi-atlas propagation via a manifold graph on a database of both labeled and unlabeled images
Qinquan Gao, Tong Tong, Daniel Rueckert, et al.
We present a framework for multi-atlas based segmentation in situations where we have a small number of segmented atlas images, but a large database of unlabeled images is also available. The novelty lies in the application of graph-based registration on a manifold to the problem of multi-atlas registration. The approach is to place all the images in a learned manifold space and construct a graph connecting near neighbors. Atlases are selected for any new image to be segmented based on the shortest path length along the manifold graph. A multi-scale non-rigid registration takes place via each of the nodes on the graph. The expectation is that by registering via similar images, the likelihood of misregistrations is reduced. Having registered multiple atlases via the graph, patch-based voxel weighted voting takes place to provide the final segmentation. We apply this approach to a set of T2 MRI images of the prostate, which is a notoriously difficult segmentation task. On a set of 25 atlas images and 85 images overall, we see that registration via the manifold graph improves the Dice coefficient from 0.82±0.05 to 0.86±0.03 and the average symmetrical boundary distance from 2.89±0.62 mm to 2.47±0.51 mm. This is a modest but potentially useful improvement in a difficult set of images. It is expected that our approach will provide similar improvement to any multi-atlas segmentation task where a large number of unsegmented images are available.
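A minimal sketch of the atlas-selection step might look like the following: images are embedded in a low-dimensional space, a k-nearest-neighbour graph is built, and the atlases with the shortest graph path to the target image are chosen. The embedding (plain PCA on intensities here) and the value of k are assumptions; the registration and label-fusion stages are not shown.
```python
# Sketch: select atlases by shortest path length on a neighbourhood graph.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import dijkstra

def select_atlases(images, atlas_idx, target_idx, k=5, n_select=3, dim=5):
    X = images.reshape(len(images), -1)
    emb = PCA(n_components=dim).fit_transform(X)          # learned "manifold" space
    graph = kneighbors_graph(emb, n_neighbors=k, mode="distance")
    dist = dijkstra(graph, directed=False, indices=target_idx)
    order = sorted(atlas_idx, key=lambda i: dist[i])      # shortest path length first
    return order[:n_select]

rng = np.random.default_rng(4)
images = rng.random((30, 32, 32))          # placeholder image database
atlas_idx = list(range(8))                 # first 8 images are labeled atlases
print("atlases selected for image 25:", select_atlases(images, atlas_idx, 25))
```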
Automated polyp measurement based on colon structure decomposition for CT colonography
Huafeng Wang, Lihong C. Li, Hao Han, et al.
Accurate assessment of colorectal polyp size is of great significance for early diagnosis and management of colorectal cancers. Due to the complexity of colon structure, polyps with diverse geometric characteristics grow from different landform surfaces. In this paper, we present a new colon decomposition approach for polyp measurement. We first apply an efficient maximum a posteriori expectation-maximization (MAP-EM) partial volume segmentation algorithm to achieve an effective electronic cleansing of the colon. The global colon structure is then decomposed into different kinds of morphological shapes, e.g. haustral folds or haustral wall. Meanwhile, the polyp location is identified by an automatic computer-aided detection algorithm. By integrating the colon structure decomposition with the computer-aided detection system, a patch volume of colon polyps is extracted. Thus, polyp size assessment can be achieved by finding abnormal protrusions on a relatively uniform morphological surface from the decomposed colon landform. We evaluated our method on physical phantom and clinical datasets. Experimental results demonstrate the feasibility of our method in consistently quantifying polyp volume and, therefore, facilitating polyp characterization for clinical management.
An improved high order texture features extraction method with application to pathological diagnosis of colon lesions for CT colonography
Differentiation of colon lesions according to underlying pathology, e.g., neoplastic and non-neoplastic, is of fundamental importance for patient management. Image intensity based textural features have been recognized as a useful biomarker for the differentiation task. In this paper, we introduce high order texture features, beyond the intensity, such as gradient and curvature, for that task. Based on the Haralick texture analysis method, we introduce a virtual pathological method to explore the utility of texture features from high order differentiations, i.e., gradient and curvature, of the image intensity distribution. The texture features were validated on a database consisting of 148 colon lesions, of which 35 are non-neoplastic, using the random forest classifier and the area under the receiver operating characteristic curve (AUC) as the figure of merit. The results show that after applying the high order features, the AUC improved from 0.8069 to 0.8544 in differentiating non-neoplastic lesions from neoplastic ones, e.g., hyperplastic polyps from tubular adenomas, tubulovillous adenomas and adenocarcinomas. The experimental results demonstrate that texture features from the higher order images can significantly improve the classification accuracy in pathological differentiation of colorectal lesions. The gain in differentiation capability should increase the potential of computed tomography (CT) colonography for colorectal cancer screening by not only detecting polyps but also classifying them for optimal polyp management and the best outcome in personalized medicine.
Vessels, Heart, and Eye I
An image-based software tool for screening retinal fundus images using vascular morphology and network transport analysis
Richard D. Clark, Daniel J. Dickrell, David L. Meadows
As the number of digital retinal fundus images taken each year grows at an increasing rate, there exists a similarly increasing need for automatic eye disease detection through image-based analysis. A new method has been developed for classifying standard color fundus photographs into healthy and diseased categories. This classification was based on the calculated network fluid conductance, a function of the geometry and connectivity of the vascular segments. To evaluate the network resistance, the retinal vasculature was first manually separated from the background to ensure an accurate representation of the geometry and connectivity. The arterial and venous networks were then semi-automatically separated into two binary images. The connectivity of the arterial network was then determined through a series of morphological image operations. The network comprised segments of vasculature and points of bifurcation, with each segment having characteristic geometric and fluid properties. Based on the connectivity and fluid resistance of each vascular segment, an arterial network flow conductance was calculated, which describes the ease with which blood can pass through a vascular system. In this work, 27 eyes (13 healthy and 14 diabetic) from patients roughly 65 years in age were evaluated using this methodology. Healthy arterial networks exhibited an average fluid conductance of 419 ± 89 μm3/mPa-s while the average network fluid conductance of the diabetic set was 165 ± 87 μm3/mPa-s (p < 0.001). The results of this new image-based software demonstrate an ability to automatically, quantitatively and efficiently screen diseased eyes from color fundus imagery.
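One plausible way to compute such a network conductance, sketched below under stated assumptions, is to assign each segment a Poiseuille conductance from its radius and length, assemble the node conductance (Laplacian) matrix, and solve for the effective conductance between an inlet and an outlet. The toy segment geometry and blood viscosity value are assumptions, not the authors' data.
```python
# Sketch: effective network conductance from per-segment Poiseuille conductances.
import numpy as np

def segment_conductance(radius_um, length_um, viscosity_mpa_s=3.5):
    """Poiseuille conductance g = pi r^4 / (8 mu L), in um^3 / (mPa*s)."""
    return np.pi * radius_um ** 4 / (8.0 * viscosity_mpa_s * length_um)

def network_conductance(n_nodes, segments, inlet, outlet):
    L = np.zeros((n_nodes, n_nodes))
    for i, j, r, length in segments:
        g = segment_conductance(r, length)
        L[i, i] += g; L[j, j] += g
        L[i, j] -= g; L[j, i] -= g
    # Ground the outlet, inject unit flow at the inlet, solve for node pressures.
    keep = [k for k in range(n_nodes) if k != outlet]
    rhs = np.zeros(n_nodes); rhs[inlet] = 1.0
    p = np.zeros(n_nodes)
    p[keep] = np.linalg.solve(L[np.ix_(keep, keep)], rhs[keep])
    return 1.0 / p[inlet]                 # conductance = flow / pressure drop

# Toy network: inlet 0 feeds node 1, which splits into two branches meeting at node 2.
segments = [(0, 1, 50.0, 1000.0), (1, 2, 35.0, 800.0), (1, 2, 30.0, 900.0)]
print("network conductance:", network_conductance(3, segments, inlet=0, outlet=2))
```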
An automatic machine learning system for coronary calcium scoring in clinical non-contrast enhanced, ECG-triggered cardiac CT
Jelmer M. Wolterink, Tim Leiner, Richard A. P. Takx, et al.
Presence of coronary artery calcium (CAC) is a strong and independent predictor of cardiovascular events. We present a system using a forest of extremely randomized trees to automatically identify and quantify CAC in routinely acquired cardiac non-contrast enhanced CT. Candidate lesions the system could not label with high certainty were automatically identified and presented to an expert who could relabel them to achieve high scoring accuracy with minimal effort. The study included 200 consecutive non-contrast enhanced ECG-triggered cardiac CTs (120 kV, 55 mAs, 3 mm section thickness). Expert CAC annotations made as part of the clinical routine served as the reference standard. CAC candidates were extracted by thresholding (130 HU) and 3-D connected component analysis. They were described by shape, intensity and spatial features calculated using multi-atlas segmentation of coronary artery centerlines from ten CTA scans. CAC was identified using a randomized decision tree ensemble classifier in a ten-fold stratified cross-validation experiment and quantified in Agatston and volume scores for each patient. After classification, candidates with posterior probability indicating uncertain labeling were selected for further assessment by an expert. Images with metal implants were excluded. In the remaining 164 images, Spearman's ρ between automatic and reference scores was 0.94 for both Agatston and volume scores. On average 1.8 candidate lesions per scan were subsequently presented to an expert. After correction, Spearman's ρ was 0.98.  We have described a system for automatic CAC scoring in cardiac CT images which is able to effectively select difficult examinations for further refinement by an expert.
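A minimal sketch of the candidate-extraction and classification stages is given below: voxels above 130 HU are grouped by 3-D connected-component analysis, a few per-lesion features are computed, and an extremely-randomized-trees classifier labels the candidates. The feature set and synthetic volume are placeholders, not the authors' descriptors.
```python
# Sketch: threshold at 130 HU, 3-D connected components, extremely randomized trees.
import numpy as np
from scipy import ndimage
from sklearn.ensemble import ExtraTreesClassifier

def extract_candidates(volume_hu, threshold=130):
    mask = volume_hu >= threshold
    labels, n = ndimage.label(mask)                       # 3-D connected components
    feats = []
    for lesion in range(1, n + 1):
        voxels = volume_hu[labels == lesion]
        zyx = np.argwhere(labels == lesion).mean(axis=0)  # centroid (crude spatial feature)
        feats.append([voxels.size, voxels.max(), voxels.mean(), *zyx])
    return np.array(feats)

rng = np.random.default_rng(5)
volume = rng.normal(0, 20, (40, 64, 64))                  # placeholder CT noise
volume[20:23, 30:33, 30:33] = 400                         # synthetic coronary calcification
volume[5:7, 10:12, 10:12] = 300                           # synthetic non-coronary candidate
X = extract_candidates(volume)
y = (X[:, 1] > 350).astype(int)                           # placeholder labels for the demo
clf = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X, y)
print("candidates:", len(X), "| posterior of first candidate:", clf.predict_proba(X[:1]))
```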
Automated coronary artery calcification detection on low-dose chest CT images
Yiting Xie, Matthew D. Cham, Claudia Henschke, et al.
Coronary artery calcification (CAC) measurement from low-dose CT images can be used to assess the risk of coronary artery disease. A fully automatic algorithm to detect and measure CAC from low-dose non-contrast, non-ECG-gated chest CT scans is presented. Based on the automatically detected CAC, the Agatston score (AS), mass score and volume score were computed. These were compared with scores obtained manually from standard-dose ECG-gated scans and low-dose un-gated scans of the same patient. The automatic algorithm segments the heart region based on other pre-segmented organs to provide a coronary region mask. The mitral valve and aortic valve calcification is identified and excluded. All remaining voxels greater than 180 HU within the mask region are considered as CAC candidates. The heart segmentation algorithm was evaluated on 400 non-contrast cases with both low-dose and regular dose CT scans. By visual inspection, 371 (92.8%) of the segmentations were acceptable. The automated CAC detection algorithm was evaluated on 41 low-dose non-contrast CT scans. Manual markings were performed on both low-dose and standard-dose scans for these cases. Using linear regression, the correlation of the automatic AS with the standard-dose manual scores was 0.86; with the low-dose manual scores the correlation was 0.91. Standard risk categories were also computed. The automated method's risk category agreed with manual markings of gated scans for 24 cases while 15 cases were 1 category off. For low-dose scans, the automatic method agreed with 33 cases while 7 cases were 1 category off.
Supervised pixel classification for segmenting geographic atrophy in fundus autofluorescence images
Zhihong Hu, Gerard G. Medioni, Matthias Hernandez, et al.
Age-related macular degeneration (AMD) is the leading cause of blindness in people over the age of 65. Geographic atrophy (GA) is a manifestation of the advanced or late stage of AMD, which may result in severe vision loss and blindness. Techniques to rapidly and precisely detect and quantify GA lesions would appear to be of important value in advancing the understanding of the pathogenesis of GA and the management of GA progression. The purpose of this study is to develop an automated supervised pixel classification approach for segmenting GA, including uni-focal and multi-focal patches, in fundus autofluorescence (FAF) images. The image features include region wise intensity (mean and variance) measures, gray level co-occurrence matrix measures (angular second moment, entropy, and inverse difference moment), and Gaussian filter banks. A k-nearest-neighbor (k-NN) pixel classifier is applied to obtain a GA probability map, representing the likelihood that the image pixel belongs to GA. A voting binary iterative hole filling filter is then applied to fill in the small holes. Sixteen randomly chosen FAF images were obtained from sixteen subjects with GA. The algorithm-defined GA regions are compared with manual delineation performed by certified graders. Two-fold cross-validation is applied for the evaluation of the classification performance. The mean Dice similarity coefficients (DSC) between the algorithm- and manually-defined GA regions are 0.84 ± 0.06 for one test and 0.83 ± 0.07 for the other test and the area correlations between them are 0.99 (p < 0.05) and 0.94 (p < 0.05) respectively.
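The sketch below shows the general shape of such a supervised pixel classifier: per-pixel features (local mean/variance and Gaussian filter-bank responses) feed a k-NN classifier that produces a lesion probability map. The GLCM features mentioned above are omitted for brevity, and the image, scales, and neighbourhood sizes are assumptions.
```python
# Sketch: per-pixel features + k-NN -> GA probability map (synthetic image).
import numpy as np
from scipy import ndimage
from sklearn.neighbors import KNeighborsClassifier

def pixel_features(image, scales=(1, 2, 4)):
    feats = [ndimage.uniform_filter(image, 5),                     # local mean
             ndimage.uniform_filter(image ** 2, 5)
             - ndimage.uniform_filter(image, 5) ** 2]              # local variance
    feats += [ndimage.gaussian_filter(image, s) for s in scales]   # Gaussian filter bank
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

rng = np.random.default_rng(6)
image = rng.random((64, 64))
image[20:40, 20:40] *= 0.4                   # darker patch standing in for GA
truth = np.zeros((64, 64), dtype=int)
truth[20:40, 20:40] = 1

X, y = pixel_features(image), truth.ravel()
clf = KNeighborsClassifier(n_neighbors=15).fit(X[::7], y[::7])   # subsampled training pixels
prob_map = clf.predict_proba(X)[:, 1].reshape(image.shape)       # GA probability map
print("mean GA probability inside / outside lesion:",
      prob_map[truth == 1].mean(), prob_map[truth == 0].mean())
```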
Lung, Chest, and Abdomen I
Detection, modeling and matching of pleural thickenings from CT data towards an early diagnosis of malignant pleural mesothelioma
Kraisorn Chaisaowong, Thomas Kraus
Pleural thickenings can be caused by asbestos exposure and may evolve into malignant pleural mesothelioma. While early diagnosis plays the key role in early treatment, and therefore helps to reduce morbidity, the growth rate of a pleural thickening can in turn be essential evidence for an early diagnosis of pleural mesothelioma. The detection of pleural thickenings is today done by visual inspection of CT data, which is time-consuming and subject to the physician's judgment. Computer-assisted diagnosis systems to automatically assess pleural mesothelioma have been reported worldwide. In this paper, an image analysis pipeline to automatically detect pleural thickenings and measure their volume is described. We first delineate automatically the pleural contour in the CT images. An adaptive surface-based smoothing technique is then applied to the pleural contours to identify all potential thickenings. A subsequent tissue-specific, topology-oriented detection step based on a probabilistic Hounsfield Unit model of pleural plaques then identifies the genuine pleural thickenings among them. The assessment of the detected pleural thickenings is based on the volumetry of the 3D model, created by a mesh construction algorithm followed by Laplace-Beltrami eigenfunction expansion surface smoothing. Finally, the spatiotemporal matching of pleural thickenings from consecutive CT data is carried out based on semi-automatic lung registration, towards the assessment of their growth rates. With these methods, a new computer-assisted diagnosis system is presented in order to assure a precise and reproducible assessment of pleural thickenings towards the diagnosis of pleural mesothelioma in its early stage.
Automatic localization of IASLC-defined mediastinal lymph node stations on CT images using fuzzy models
Monica M. S. Matsumoto, Niha G. Beig, Jayaram K. Udupa, et al.
Lung cancer is associated with the highest cancer mortality rates among men and women in the United States. The accurate and precise identification of the lymph node stations on computed tomography (CT) images is important for staging disease and potentially for prognosticating outcome in patients with lung cancer, as well as for pretreatment planning and response assessment purposes. To facilitate a standard means of referring to lymph nodes, the International Association for the Study of Lung Cancer (IASLC) has recently proposed a definition of the different lymph node stations and zones in the thorax. However, nodal station identification is typically performed manually by visual assessment in clinical radiology. This approach leaves room for error due to the subjective and potentially ambiguous nature of visual interpretation, and is labor intensive. We present a method of automatically recognizing the mediastinal IASLC-defined lymph node stations by modifying a hierarchical fuzzy modeling approach previously developed for body-wide automatic anatomy recognition (AAR) in medical imagery. Our AAR-lymph node (AAR-LN) system follows the AAR methodology and consists of two steps. In the first step, the various lymph node stations are manually delineated on a set of CT images following the IASLC definitions. These delineations are then used to build a fuzzy hierarchical model of the nodal stations which are considered as 3D objects. In the second step, the stations are automatically located on any given CT image of the thorax by using the hierarchical fuzzy model and object recognition algorithms. Based on 23 data sets used for model building, 22 independent data sets for testing, and 10 lymph node stations, a mean localization accuracy of within 1-6 voxels has been achieved by the AAR-LN system.
Computer-aided detection of malpositioned endotracheal tubes in portable chest radiographs
Zhimin Huo, Hongda Mao, Jane Zhang, et al.
Portable chest radiographic images play a critical role in examining and monitoring the condition and progress of critically ill patients in intensive care units (ICUs). For example, portable chest images are acquired to ensure that tubes inserted into the patients are properly positioned for effective treatment. In this paper, we present a system that automatically detects the position of an endotracheal tube (ETT), which is inserted into the trachea to assist patients who have difficulty breathing. The computer detection includes the detection of the lung field, spine line, and aortic arch. These detections lead to the identification of regions of interest (ROIs) used for the subsequent detection of the ETT and carina. The detection of the ETT and carina is performed within the ROIs. Our ETT and carina detection methods were trained and tested on a large number of images. The locations of the ETT and carina were confirmed by an experienced radiologist for the purpose of performance evaluation. Our ETT detection achieved an average sensitivity of 85% at less than 0.1 false-positive detections per image. The carina detection approach correctly identified the carina location within a 10 mm distance from the true location for 81% of the 217 test images. We expect our system will assist ICU clinicians in detecting and repositioning malpositioned ETTs more effectively and efficiently.
Artificial neural networks for automatic modelling of the pectus excavatum corrective prosthesis
Pedro L. Rodrigues, António H.J. Moreira, Nuno F. Rodrigues, et al.
Pectus excavatum is the most common deformity of the thorax, and its pre-operative diagnosis usually involves Computed Tomography (CT) examination. Aiming at eliminating the high radiation exposure of CT, this work presents a new methodology for the replacement of CT by a laser scanner (radiation-free) in the treatment of pectus excavatum using personally modeled prostheses. The complete elimination of CT requires the determination of the ribs' external outline, at the point of maximum sternum depression for prosthesis placement, based on chest wall skin surface information acquired by a laser scanner. The developed solution resorts to artificial neural networks trained with data vectors from 165 patients. Scaled Conjugate Gradient, Levenberg-Marquardt, Resilient Backpropagation and One Step Secant gradient learning algorithms were used. The training procedure was performed using the soft tissue thicknesses, determined using image processing techniques that automatically segment the skin and rib cage. The developed solution was then used to determine the rib outlines in laser scanner data from 20 patients. Tests revealed that rib positions can be estimated with an average error of about 6.82±5.7 mm for the left and right sides of the patient. Such an error range is well below that of current manual prosthesis modeling (11.7±4.01 mm), even without CT imaging, indicating a considerable step forward towards CT replacement by a 3D scanner for prosthesis personalization.
Mediastinal lymph node detection on thoracic CT scans using spatial prior from multi-atlas label fusion
Jiamin Liu, Jocelyn Zhao, Joanne Hoffman, et al.
Lymph nodes play an important role in clinical practice but their detection is challenging due to low contrast with surrounding structures and variable size and shape. We propose a fully automatic method for mediastinal lymph node detection on thoracic CT scans. First, the lungs are automatically segmented to locate the mediastinum region. Shape features by Hessian analysis, local scale, and circular transformation are computed at each voxel. A spatial prior distribution is determined based on the identification of multiple anatomical structures (esophagus, aortic arch, heart, etc.) by using multi-atlas label fusion. Shape features and the spatial prior are then integrated for lymph node detection. The detected candidates are segmented by curve evolution. Characteristic features are calculated on the segmented lymph nodes and a support vector machine is utilized for classification and false positive reduction. We applied our method to 20 patients with 62 enlarged mediastinal lymph nodes. The system achieved a significant improvement, with 80% sensitivity at 8 false positives per patient with the spatial prior, compared to 45% sensitivity at 8 false positives per patient without it.
Estimation of cartilaginous region in noncontrast CT of the chest
Qian Zhao, Nabile Safdar, Glenna Yu, et al.
Pectus excavatum is a posterior depression of the sternum and adjacent costal cartilages and is the most common congenital deformity of the anterior chest wall. Its surgical repair can be performed via minimally invasive procedures that involve sternum and cartilage relocation and benefit from adequate surgical planning. In this study, we propose a method to estimate the cartilage regions in thoracic CT scans, which is the first step toward statistical modeling of the osseous and cartilaginous structures of the rib cage. The ribs and sternum are first segmented using interactive region growing, and the vertebral column is removed with morphological operations. The entire chest wall is also segmented to estimate the skin surface. After the segmentation, surface meshes are generated from the volumetric data and the skeleton of the ribs is extracted using a surface contraction method. Then the cartilage surface is approximated by contracting the skin surface toward the osseous structure. The rib skeleton is projected onto the cartilage surface and the cartilages are estimated using cubic interpolation, given the joints with the sternum. The final cartilage regions are formed by the cartilage surface inside the convex hull of the estimated cartilages. The method was validated with the CT scans of two pectus excavatum patients and three healthy subjects. The average distance between the estimated cartilage surface and the ground truth is 2.89 mm. The promising results indicate the effectiveness of cartilage surface estimation using the skin surface.
Vessels, Heart, and Eye II
A joint estimation detection of Glaucoma progression in 3D spectral domain optical coherence tomography optic nerve head images
Akram Belghith, Christopher Bowd, Robert N. Weinreb, et al.
Glaucoma is an ocular disease characterized by distinctive changes in the optic nerve head (ONH) and visual field. Glaucoma can strike without symptoms and causes blindness if it remains untreated. Therefore, early disease detection is important so that treatment can be initiated and blindness prevented. In this context, important advances in technology for non-invasive imaging of the eye have been made, providing quantitative tools to measure structural changes in ONH topography, an essential element for glaucoma detection and monitoring. 3D spectral domain optical coherence tomography (SD-OCT), an optical imaging technique, has been commonly used to discriminate glaucomatous from healthy subjects. In this paper, we present a new framework for detection of glaucoma progression using 3D SD-OCT images. In contrast to previous works that use the retinal nerve fiber layer (RNFL) thickness measurement provided by commercially available spectral-domain optical coherence tomographs, we consider the whole 3D volume for change detection. To integrate a priori knowledge, and in particular the spatial voxel dependency, into the change detection map, we propose the use of a Markov Random Field to handle such dependency. To accommodate the presence of false positive detections, the estimated change detection map is then used to classify a 3D SD-OCT image into the "non-progressing" and "progressing" glaucoma classes, based on a fuzzy logic classifier. We compared the diagnostic performance of the proposed framework to existing methods of progression detection.
Automated aortic calcification detection in low-dose chest CT images
Yiting Xie, Yu Maw Htwe, Jennifer Padgett, et al.
The extent of aortic calcification has been shown to be a risk indicator for vascular events including cardiac events. We have developed a fully automated computer algorithm to segment and measure aortic calcification in low-dose non-contrast, non-ECG-gated chest CT scans. The algorithm first segments the aorta using a pre-computed Anatomy Label Map (ALM). Then, based on the segmented aorta, aortic calcification is detected and measured in terms of the Agatston score, mass score, and volume score. The automated scores are compared with reference scores obtained from manual markings. For aorta segmentation, the aorta is modeled as a series of discrete overlapping cylinders and the aortic centerline is determined using a cylinder-tracking algorithm. Then the aortic surface location is detected using the centerline and a triangular mesh model. The segmented aorta is used as a mask for the detection of aortic calcification. For calcification detection, the image is first filtered, then an elevated threshold of 160 Hounsfield units (HU) is used within the aorta mask region to reduce the effect of noise in low-dose scans, and finally non-aortic calcification voxels (bony structures, calcification in other organs) are eliminated. The remaining candidates are considered true aortic calcification. The computer algorithm was evaluated on 45 low-dose non-contrast CT scans. Using linear regression, the automated Agatston score is 98.42% correlated with the reference Agatston score. The automated mass and volume scores are 98.46% and 98.28% correlated with the reference mass and volume scores, respectively.
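The scoring step follows the standard calcium-scoring definitions; a minimal sketch is given below. The slice-wise lesion handling and 130 HU density weights follow the usual Agatston convention, the elevated 160 HU mask threshold mirrors the abstract, and the voxel spacing and mass calibration factor are assumed placeholders.
```python
# Sketch: Agatston, volume and mass scores from a CT volume and a calcification mask.
import numpy as np
from scipy import ndimage

def density_weight(peak_hu):
    return 1 if peak_hu < 200 else 2 if peak_hu < 300 else 3 if peak_hu < 400 else 4

def calcium_scores(volume_hu, mask, voxel_mm=(3.0, 0.6, 0.6), calib=0.001):
    dz, dy, dx = voxel_mm
    pixel_area, voxel_vol = dy * dx, dz * dy * dx
    agatston = 0.0
    for z in range(volume_hu.shape[0]):                 # lesions are scored slice by slice
        labels, n = ndimage.label(mask[z])
        for lesion in range(1, n + 1):
            sel = labels == lesion
            agatston += sel.sum() * pixel_area * density_weight(volume_hu[z][sel].max())
    volume_score = mask.sum() * voxel_vol
    mass_score = volume_score * volume_hu[mask].mean() * calib   # calib is a placeholder
    return agatston, volume_score, mass_score

rng = np.random.default_rng(7)
vol = rng.normal(40, 20, (20, 64, 64))
vol[8:10, 30:34, 30:34] = 350                           # synthetic aortic plaque
mask = vol >= 160                                       # elevated low-dose threshold
print("Agatston, volume, mass:", calcium_scores(vol, mask))
```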
Segmentation and separation of venous vasculatures in liver CT images
Lei Wang, Christian Hansen, Stephan Zidowitz, et al.
Computer-aided analysis of venous vasculatures, including hepatic veins and portal veins, is important in liver surgery planning. The analysis normally consists of two important pre-processing tasks: segmenting both vasculatures and separating them from each other by assigning different labels. During the acquisition of multi-phase CT images, both of the venous vessels are enhanced by injected contrast agent and acquired either in a common phase or in two individual phases. The enhanced signals established by the contrast agent are often not stably acquired due to non-optimal acquisition time. Inadequate contrast and the presence of large lesions in oncological patients make the segmentation task quite challenging. To overcome these difficulties, we propose a framework with minimal user interaction to analyze venous vasculatures in multi-phase CT images. First, the vasculatures are automatically segmented using an efficient multi-scale Hessian-based vesselness filter. The initially segmented vessel trees are then converted to a graph representation, on which a series of graph filters are applied in post-processing steps to rule out irrelevant structures. Eventually, we develop a semi-automatic workflow to refine the segmentation in the areas of the inferior vena cava and the entrance of the portal vein, and to simultaneously separate hepatic veins from portal veins. Segmentation quality was evaluated with intensive tests encompassing 60 CT images from both healthy liver donors and oncological patients. To quantitatively measure the similarities between segmented and reference vessel trees, we propose three additional metrics: skeleton distance, branch coverage, and boundary surface distance, which are dedicated to quantifying the misalignment induced by both branching patterns and radii of two vessel trees.
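For readers unfamiliar with multi-scale Hessian vesselness filtering, the sketch below gives a Frangi-style implementation: Gaussian second derivatives provide the Hessian at each voxel, its eigenvalues are combined into a tubeness response, and the maximum over scales is kept. The parameters (alpha, beta, c, scales) are generic defaults and the volume is synthetic; this is not the authors' specific filter.
```python
# Sketch: Frangi-style multi-scale Hessian vesselness for bright tubes on dark background.
import numpy as np
from scipy import ndimage

def hessian_eigenvalues(vol, sigma):
    orders = {(0, 0): (2, 0, 0), (1, 1): (0, 2, 0), (2, 2): (0, 0, 2),
              (0, 1): (1, 1, 0), (0, 2): (1, 0, 1), (1, 2): (0, 1, 1)}
    d = {k: sigma ** 2 * ndimage.gaussian_filter(vol, sigma, order=o)   # scale-normalized
         for k, o in orders.items()}
    H = np.zeros(vol.shape + (3, 3))
    for (i, j), img in d.items():
        H[..., i, j] = img
        H[..., j, i] = img
    eig = np.linalg.eigvalsh(H)
    idx = np.argsort(np.abs(eig), axis=-1)               # sort by |eigenvalue|
    return np.take_along_axis(eig, idx, axis=-1)

def vesselness(vol, sigmas=(1.0, 2.0, 3.0), alpha=0.5, beta=0.5, c=50.0):
    best = np.zeros(vol.shape)
    for s in sigmas:
        l1, l2, l3 = np.moveaxis(hessian_eigenvalues(vol, s), -1, 0)
        ra = np.abs(l2) / (np.abs(l3) + 1e-10)            # plate vs. line
        rb = np.abs(l1) / (np.sqrt(np.abs(l2 * l3)) + 1e-10)  # blob vs. line
        s2 = np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)         # second-order structure
        v = ((1 - np.exp(-ra ** 2 / (2 * alpha ** 2)))
             * np.exp(-rb ** 2 / (2 * beta ** 2))
             * (1 - np.exp(-s2 ** 2 / (2 * c ** 2))))
        v[(l2 > 0) | (l3 > 0)] = 0                        # keep bright tubes only
        best = np.maximum(best, v)
    return best

rng = np.random.default_rng(8)
vol = rng.normal(0, 5, (32, 32, 32))
vol[:, 15:17, 15:17] += 120                               # synthetic bright vessel along z
resp = vesselness(vol)
print("vesselness on / off the vessel:", resp[:, 16, 16].mean(), resp[:, 5, 5].mean())
```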
Computerized luminal analysis for detection of non-calcified plaques in coronary CT angiography
Non-calcified plaque (NCP) detection in coronary CT angiography (cCTA) is challenging due to the low CT number of NCP, the large number of coronary arteries and multiple phase CT acquisition. We are developing computer-vision methods for automated detection of NCPs in cCTA. A data set of 62 cCTA scans with 87 NCPs was collected retrospectively from patient files. Multiscale coronary vessel enhancement and rolling balloon tracking were first applied to each cCTA volume to extract the coronary artery trees. Each extracted vessel was reformatted to a straightened volume composed of cCTA slices perpendicular to the vessel centerline. A topological soft-gradient (TSG) detection method was developed to prescreen for both positive and negative remodeling candidates by analyzing the 2D topological features of the radial gradient field surface along the vessel wall. A quantitative luminal analysis was newly designed for feature extraction and false positive (FP) reduction. We extracted 9 geometric features and 6 gray-level features, to quantify the differences between NCPs and FPs. The gray-level features included 4 features to measure local statistical characteristics and 2 asymmetry features to measure the asymmetric spatial location of gray-level density along the vessel centerline. The geometric features included a radius differential feature and 8 features extracted from two transformed volumes: the volumetric shape indexing and the gradient direction mapping volumes. With a machine learning algorithm and feature selection method, useful features were selected and combined into an NCP likelihood measure to differentiate TPs from FPs. With the NCP likelihood measure as a decision variable in the receiver operating characteristic (ROC) analysis, the area under the curve achieved a value of 0.85±0.01, indicating that the luminal analysis is effective in reducing FPs for NCP detection.
Automated discovery of structural features of the optic nerve head on the basis of image and genetic data
Mark Christopher, Li Tang, John H. Fingert, et al.
Evaluation of optic nerve head (ONH) structure is a commonly used clinical technique for both diagnosis and monitoring of glaucoma. Glaucoma is associated with characteristic changes in the structure of the ONH. We present a method for computationally identifying ONH structural features using both imaging and genetic data from a large cohort of participants at risk for primary open angle glaucoma (POAG). Using 1054 participants from the Ocular Hypertension Treatment Study, ONH structure was measured by application of a stereo correspondence algorithm to stereo fundus images. In addition, the genotypes of several known POAG genetic risk factors were considered for each participant. ONH structural features were discovered using both a principal component analysis approach to identify the major modes of variance within structural measurements and a linear discriminant analysis approach to capture the relationship between genetic risk factors and ONH structure. The identified ONH structural features were evaluated based on the strength of their associations with genotype and development of POAG by the end of the OHTS study. ONH structural features with strong associations with genotype were identified for each of the genetic loci considered. Several identified ONH structural features were significantly associated (p < 0.05) with the development of POAG after Bonferroni correction. Further, incorporation of genetic risk status was found to substantially increase performance of early POAG prediction. These results suggest incorporating both imaging and genetic data into ONH structural modeling significantly improves the ability to explain POAG-related changes to ONH structure.
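The two feature-discovery routes described above can be sketched as follows: PCA finds the major modes of variance in the ONH structural measurements, while LDA finds the direction that best separates carriers of a genetic risk allele from non-carriers. The measurement vectors, genotypes, and component counts here are synthetic assumptions.
```python
# Sketch: variance-driven (PCA) and genotype-driven (LDA) ONH structural features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(9)
n_subjects, n_measurements = 300, 50
structure = rng.normal(size=(n_subjects, n_measurements))    # ONH structural measurements
risk_allele = rng.integers(0, 2, n_subjects)                 # genotype at one risk locus
structure[:, :5] += 0.8 * risk_allele[:, None]               # genotype-linked structural mode

pca_features = PCA(n_components=5).fit_transform(structure)  # major modes of variance
lda = LinearDiscriminantAnalysis(n_components=1).fit(structure, risk_allele)
genetic_feature = lda.transform(structure)                   # genotype-associated feature

print("PCA feature matrix:", pca_features.shape,
      "| LDA feature vs. genotype correlation:",
      np.corrcoef(genetic_feature.ravel(), risk_allele)[0, 1].round(2))
```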
Breast I
Using undiagnosed data to enhance computerized breast cancer analysis with a three stage data labeling method
A novel three-stage Semi-Supervised Learning (SSL) approach is proposed for improving the performance of computerized breast cancer analysis with undiagnosed data. These three stages include: (1) instance selection, which is rarely used in SSL or computerized cancer analysis systems, (2) feature selection, and (3) a newly designed 'Divide Co-training' data labeling method. 379 suspicious early breast cancer area samples from 121 mammograms were used in our research. Our proposed 'Divide Co-training' method generates two classifiers by splitting the original diagnosed dataset (labeled data), and labels the undiagnosed data (unlabeled data) when the two classifiers reach an agreement. The highest AUC (Area Under Curve, also called Az value) using labeled data only was 0.832, and it increased to 0.889 when undiagnosed data were included. The results indicate that the instance selection module can eliminate atypical or noisy data and enhance the subsequent semi-supervised data labeling performance. Based on analyzing different data sizes, it can be observed that the AUC and accuracy increase with the amount of either diagnosed or undiagnosed data, and reach the best improvement (ΔAUC = 0.078, Δaccuracy = 7.6%) with 40 labeled samples and 300 unlabeled samples.
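A minimal sketch of a 'Divide Co-training'-style labeling step is shown below: the labeled set is split into two halves, two classifiers are trained, and unlabeled samples are added to the training pool only when both classifiers agree. The classifier choice and agreement rule are assumptions, not the authors' exact design.
```python
# Sketch: label unlabeled samples only where two independently trained classifiers agree.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def divide_co_training(X_lab, y_lab, X_unlab):
    half = len(X_lab) // 2
    clf_a = LogisticRegression(max_iter=1000).fit(X_lab[:half], y_lab[:half])
    clf_b = SVC().fit(X_lab[half:], y_lab[half:])
    pred_a, pred_b = clf_a.predict(X_unlab), clf_b.predict(X_unlab)
    agree = pred_a == pred_b                        # label only on agreement
    X_new = np.vstack([X_lab, X_unlab[agree]])
    y_new = np.concatenate([y_lab, pred_a[agree]])
    return X_new, y_new, agree.mean()

rng = np.random.default_rng(10)
y_lab = rng.integers(0, 2, 40)
X_lab = rng.normal(size=(40, 6)) + y_lab[:, None]
X_unlab = rng.normal(size=(300, 6)) + rng.integers(0, 2, 300)[:, None]
X_aug, y_aug, frac = divide_co_training(X_lab, y_lab, X_unlab)
print("augmented training size:", len(X_aug), "| agreement rate:", round(frac, 2))
```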
Fully automated segmentation of whole breast in MR images by use of dynamic programming
Luan Jiang, Yanyun Lian, Yajia Gu, et al.
Breast segmentation is an important and challenging task for computerized analysis of background parenchymal enhancement (BPE) in dynamic contrast enhanced magnetic resonance images (DCE-MRI). The purpose of this study is to develop and evaluate a fully automated technique for accurate segmentation of the whole breast in three-dimensional (3-D) DCE-MRI. The whole breast segmentation consists of two steps, i.e., the delineation of the chest wall and the breast skin line. A sectional dynamic programming method was first designed in each 2-D slice to trace the upper and/or lower boundaries of the chest wall. The statistical distribution of gray levels of the breast skin line was employed as a weighting factor to enhance the skin line, and dynamic programming was then applied to delineate the breast skin line slice-by-slice within the automatically extracted volume of interest (VOI). Our method also took advantage of the continuity of the chest wall and skin line across adjacent slices. Finally, the segmented breast skin line and the detected chest wall were connected to create the whole breast segmentation. The preliminary results on 70 cases show that the proposed method can obtain accurate segmentation of the whole breast based on subjective observation. With the manually delineated region of 16 breasts in 8 cases, our method achieved a Dice overlap measure of 92.1% ± 1.9% (mean ± SD) and volume agreement of 91.6% ± 4.7% for whole breast segmentation. It took approximately 4 minutes and 2.5 minutes for our method to segment the breast in an MR scan of 160 slices and 108 slices, respectively.
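The sketch below illustrates dynamic-programming boundary tracing of the kind described above: for each image column one boundary row is chosen so that the summed edge cost is minimal under a smoothness constraint on row jumps between neighbouring columns. The cost function (negative vertical gradient magnitude), the jump limit, and the synthetic image are assumptions.
```python
# Sketch: trace one boundary row per column by dynamic programming.
import numpy as np

def trace_boundary(cost, max_jump=2):
    rows, cols = cost.shape
    acc = np.full((rows, cols), np.inf)
    back = np.zeros((rows, cols), dtype=int)
    acc[:, 0] = cost[:, 0]
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = np.argmin(acc[lo:hi, c - 1]) + lo       # best smooth predecessor
            acc[r, c] = cost[r, c] + acc[prev, c - 1]
            back[r, c] = prev
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(cols - 1, 0, -1):
        path.append(back[path[-1], c])
    return path[::-1]                                      # boundary row per column

rng = np.random.default_rng(11)
img = rng.random((60, 80))
img[30:, :] += 1.0                                         # bright region below a boundary
cost = -np.abs(np.diff(img, axis=0, prepend=img[:1]))      # strong edges = low cost
boundary = trace_boundary(cost)
print("traced boundary rows (should hover near 30):", boundary[:10])
```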
Sparse representation of multi parametric DCE-MRI features using K-SVD for classifying gene expression based breast cancer recurrence risk
We evaluate the prognostic value of sparse representation-based features by applying the K-SVD algorithm on multi-parametric kinetic, textural, and morphologic features in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). K-SVD is an iterative dimensionality reduction method that optimally reduces the initial feature space by updating the dictionary columns jointly with the sparse representation coefficients. Therefore, by using K-SVD, we not only provide a sparse representation of the features and condense the information into a few coefficients, but we also reduce the dimensionality. The extracted K-SVD features are evaluated by a machine learning algorithm including a logistic regression classifier for the task of classifying high versus low breast cancer recurrence risk as determined by a validated gene expression assay. The features are evaluated using ROC curve analysis and leave-one-out cross validation for different sparse representation and dimensionality reduction numbers. Optimal sparse representation is obtained when the number of dictionary elements is 4 (K=4) and the maximum number of non-zero coefficients is 2 (L=2). We compare K-SVD with ANOVA-based feature selection for the same prognostic features. The ROC results show that the AUC of the K-SVD based (K=4, L=2), the ANOVA based, and the original features (i.e., no dimensionality reduction) are 0.78, 0.71, and 0.68, respectively. From the results, it can be inferred that by using sparse representation of the originally extracted multi-parametric, high-dimensional data, we can condense the information into a few coefficients with the highest predictive value. In addition, the dimensionality reduction introduced by K-SVD can prevent models from over-fitting.
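A compact K-SVD sketch is given below: it alternates sparse coding (orthogonal matching pursuit) with atom-by-atom dictionary updates via SVD of the residual restricted to the signals that use the atom. The dimensions mirror the K=4, L=2 values reported above, but the data are synthetic placeholders and the implementation is a generic K-SVD, not the authors' code.
```python
# Sketch: generic K-SVD with OMP sparse coding and SVD-based atom updates.
import numpy as np
from sklearn.linear_model import orthogonal_mp

def ksvd(Y, n_atoms=4, n_nonzero=2, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    D = rng.normal(size=(Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        X = np.atleast_2d(orthogonal_mp(D, Y, n_nonzero_coefs=n_nonzero))  # sparse codes
        for k in range(n_atoms):
            users = np.nonzero(X[k])[0]                   # signals that use atom k
            if users.size == 0:
                continue
            E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]                             # updated atom (unit norm)
            X[k, users] = s[0] * Vt[0]                    # updated coefficients
    return D, X

rng = np.random.default_rng(12)
Y = rng.normal(size=(30, 100))            # 30-dimensional features, 100 lesions
D, X = ksvd(Y)
print("dictionary:", D.shape, "| mean nonzeros per signal:",
      (np.abs(X) > 1e-8).sum(0).mean())
```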
Digital breast tomosynthesis: effects of projection-view distribution on computer-aided detection of microcalcification clusters
We investigated the effect of projection view (PV) distribution on the detectability of microcalcification clusters (MCs) in digital breast tomosynthesis (DBT) by a computer-aided detection (CAD) system. With IRB approval, DBT scans of breasts with biopsy-proven MCs were acquired with a 60° tomographic angle, 21 PVs, and 3° increment (full set). The DBT volume was reconstructed using the simultaneous algebraic reconstruction technique (SART) with multiscale bilateral filtering (MSBF) regularization. Three subsets simulating acquisition with 11, 9 and 11 PVs at tomographic angles and angular increments of (30°, 3°), (24°, 3°) and (60°, 6°), respectively, were also reconstructed with MSBF-regularized SART at several iterations. The subsets therefore had about half the dose of the full set. An enhancement-modulated multiscale calcification response volume was derived, and prescreening of individual microcalcification candidates was performed in this volume. Iterative thresholding in combination with region growing identified the potential microcalcification candidates. The prescreening sensitivity was analyzed using the mean and standard deviation of the signal-to-noise ratio (SNR) of the microcalcification candidates and a rank-sensitivity plot. MC candidates were detected by dynamic clustering using SNR and distance criteria. No additional FP reduction steps were performed, to avoid the variability due to parameter tuning for a small data set. The performance of MC detection was compared at this stage. For the three subsets, view-based FROC analysis showed that the FP rates at 90% sensitivity were 6.2, 11.8 and 9.0 per volume, respectively, compared with 3.9 for the full set. The (30°, 3°) subset performed better than the other two subsets.
Prostate and Colon II
Reducing false positives of small bowel segmentation on CT scans by localizing colon regions
Weidong Zhang, Jiamin Liu, Jianhua Yao, et al.
Automated small bowel segmentation is essential for computer-aided diagnosis (CAD) of small bowel pathology, such as tumor detection and pre-operative planning. We previously proposed a method to segment the small bowel using the mesenteric vasculature as a roadmap. The method performed well on small bowel segmentation but produced many false positives, most of which were located on the colon. To improve the accuracy of small bowel segmentation, we propose a semi-automated method with minimal interaction to distinguish the colon from the small bowel. The method utilizes anatomic knowledge about the mesenteric vasculature and a statistical method of colon detection. First, anatomic labeling of the mesenteric arteries is used to identify the arteries supplying the colon. Second, a statistical detector is created by combining two colon probability maps. One probability map describes the colon location and is generated from colon centerlines derived from CT colonography (CTC) data. The other probability map describes 3D colon texture using Haralick features and support vector machine (SVM) classifiers. The two probability maps are combined to localize colon regions, i.e., voxels having high probabilities on both maps are labeled as colon. Third, colon regions identified by anatomical labeling and the statistical detector are removed from the original results of small bowel segmentation. The method was evaluated on 11 abdominal CT scans of patients suspected of having carcinoid tumors. The reference standard consisted of manually-labeled small bowel segmentation. The method reduced the voxel-based false positive rate of small bowel segmentation from 19.7%±3.9% to 5.9%±2.3%, with a two-tailed P-value < 0.0001.
An adaptive approach to centerline extraction for CT colonography using MAP-EM segmentation and distance field
In this paper, we present an adaptive approach for fully automatic centerline extraction and small intestine removal based on partial volume (PV) image segmentation and distance field modeling. The computed tomographic colonography (CTC) volume image is first segmented for the colon wall mucosa layer, which represents the PV effect around the colon wall. Then centerline extraction is performed in the presence of colon collapse and small intestine touching by the use of a distance field within the segmented PV mucosa layer, where centerline breaks due to collapse are recovered and centerline branches due to small intestine touching are removed. Experimental results from 24 patient CTC scans with small intestine touching showed 100% removal of the touching segments, while the well-known isolated component method succeeded in only 16 of the 24. Our voxel-by-voxel marking strategy in the automated procedure preserves the topology and validity of the colon structure. The marked inner and outer boundaries on the cleansed colon are very close to those labeled by the experts. Experimental results demonstrate the robustness and efficiency of the presented adaptive approach for CTC utility.
Improved parameter extraction and classification for dynamic contrast enhanced MRI of prostate
Nandinee Fariah Haq, Piotr Kozlowski, Edward C. Jones, et al.
Magnetic resonance imaging (MRI), particularly dynamic contrast enhanced (DCE) imaging, has shown great potential in prostate cancer diagnosis and prognosis. The time course of the DCE images provides measures of the contrast agent uptake kinetics. Also, using pharmacokinetic modelling, one can extract parameters from the DCE-MR images that characterize tumor vascularization and can be used to detect cancer. A requirement for calculating the pharmacokinetic DCE parameters is estimating the Arterial Input Function (AIF). One needs an accurate segmentation of the cross section of the external femoral artery to obtain the AIF. In this work we report a semi-automatic method, using the circular Hough transform, for segmentation of the cross section of the femoral artery in the sequence of DCE images. We also report a machine-learning framework to combine pharmacokinetic parameters with the model-free contrast agent uptake kinetic parameters extracted from the DCE time course into a nine-dimensional feature vector. This combination of features is used with random forest and support vector machine classification for cancer detection. The MR data are obtained from patients prior to radical prostatectomy. After the surgery, whole-mount histopathology analysis is performed and registered to the DCE-MR images as the diagnostic reference. We show that the use of a combination of pharmacokinetic parameters and the model-free empirical parameters extracted from the DCE time course results in improved cancer detection compared to the use of each group of features separately. We also validate the proposed method for calculation of the AIF by comparison with the manual method.
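The circular Hough transform at the core of the artery-localization step can be sketched as follows: edge points vote for circle centres at a fixed radius, and the accumulator peak gives the centre. The fixed-radius simplification, the gradient-based edge detector, and the synthetic image are assumptions.
```python
# Sketch: circular Hough voting to locate a bright disc (artery cross-section).
import numpy as np

def hough_circle_center(image, radius, edge_quantile=0.98):
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    edges = np.argwhere(mag > np.quantile(mag, edge_quantile))
    acc = np.zeros_like(image, dtype=float)
    thetas = np.linspace(0, 2 * np.pi, 90, endpoint=False)
    for y, x in edges:                                    # each edge point votes for centres
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < acc.shape[0]) & (cx >= 0) & (cx < acc.shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1.0)
    return np.unravel_index(np.argmax(acc), acc.shape)

yy, xx = np.mgrid[0:100, 0:100]
artery = ((yy - 60) ** 2 + (xx - 40) ** 2 <= 10 ** 2).astype(float)   # bright disc
image = artery + 0.05 * np.random.default_rng(13).random((100, 100))
print("estimated centre (true is (60, 40)):", hough_circle_center(image, radius=10))
```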
Distinguishing prostate cancer from benign confounders via a cascaded classifier on multi-parametric MRI
G.J.S. Litjens, R. Elliott, N. Shih, et al.
Learning how to separate benign confounders from prostate cancer is important because the imaging characteristics of these confounders are poorly understood. Furthermore, the typical representations of the MRI parameters might not be enough to allow discrimination. The diagnostic uncertainty this causes leads to a lower diagnostic accuracy. In this paper a new cascaded classifier is introduced to separate prostate cancer and benign confounders on MRI, in conjunction with specific computer-extracted features to distinguish each of the benign classes (benign prostatic hyperplasia (BPH), inflammation, atrophy or prostatic intra-epithelial neoplasia (PIN)). In this study we tried to (1) calculate different mathematical representations of the MRI parameters which more clearly express subtle differences between classes, (2) learn which of the MRI image features allow us to distinguish specific benign confounders from prostate cancer, and (3) find the combination of computer-extracted MRI features that best discriminates cancer from the confounding classes using a cascaded classifier. One of the most important requirements for identifying MRI signatures for adenocarcinoma, BPH, atrophy, inflammation, and PIN is accurate mapping of the location and spatial extent of the confounder and cancer categories from ex vivo histopathology to MRI. Towards this end we employed an annotated prostatectomy data set of 31 patients, all of whom underwent a multi-parametric 3 Tesla MRI prior to radical prostatectomy. The prostatectomy slides were carefully co-registered to the corresponding MRI slices using an elastic registration technique. We extracted texture features from the T2-weighted imaging, pharmacokinetic features from the dynamic contrast enhanced imaging and diffusion features from the diffusion-weighted imaging for each of the confounder classes and prostate cancer. These features were selected because they form the mainstay of clinical diagnosis. Relevant features for each of the classes were selected using maximum relevance minimum redundancy feature selection, allowing us to perform classifier-independent feature selection. The selected features were then incorporated in a cascading classifier, which can focus on easier sub-tasks at each stage, leaving the more difficult classification tasks for later stages. Results show that distinct features are relevant for each of the benign classes; for example, the fraction of extra-vascular, extra-cellular space in a voxel is a clear discriminator for inflammation. Furthermore, the cascaded classifier outperforms both multi-class and one-shot classifiers in overall accuracy for discriminating confounders from cancer: 0.76 versus 0.71 and 0.62.
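A cascaded classifier of the kind described can be organised as a sequence of one-vs-rest stages, each peeling off one benign class with its own feature subset. The toy implementation below is only a structural sketch: the stage order, feature indices and choice of random forests are assumptions, not the configuration used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class CascadedConfounderClassifier:
    """Each stage tries to identify one benign class with its own feature
    subset; samples not claimed by any stage are labelled cancer (-1)."""

    def __init__(self, stages):
        self.stages = stages          # list of (class_label, feature_index_list)
        self.models = []

    def fit(self, X, y):
        remaining = np.ones(len(y), dtype=bool)
        for label, feats in self.stages:
            clf = RandomForestClassifier(n_estimators=100, random_state=0)
            clf.fit(X[remaining][:, feats], (y[remaining] == label).astype(int))
            self.models.append((label, feats, clf))
            remaining &= (y != label)          # later stages never see this class
        return self

    def predict(self, X):
        pred = np.full(len(X), -1)             # -1 = "cancer" (survived all stages)
        undecided = np.ones(len(X), dtype=bool)
        for label, feats, clf in self.models:
            claimed = clf.predict(X[:, feats]).astype(bool) & undecided
            pred[claimed] = label
            undecided &= ~claimed
        return pred

# Synthetic 4-class example: labels 0-2 stand in for benign confounders, -1 for cancer.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = rng.choice([-1, 0, 1, 2], size=200)
model = CascadedConfounderClassifier(stages=[(0, [0, 1, 2]), (1, [3, 4, 5]), (2, [6, 7, 8])])
print((model.fit(X, y).predict(X) == y).mean())
```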
A prostate MRI atlas of biochemical failures following cancer treatment
Mirabela Rusu, John Kurhanewicz, Ashutosh Tewari, et al.
Radical prostatectomy (RP) and radiation therapy (RT) are the most common treatment options for prostate cancer (PCa). Despite advancements in radiation delivery and surgical procedures, RP and RT can result in failure rates as high as 40% and >25%, respectively. Treatment failure is characterized by biochemical recurrence (BcR), which is defined in terms of prostate-specific antigen (PSA) concentration and its variation following treatment. PSA is expected to decrease following treatment, so its presence in even small concentrations (e.g., 0.2 ng/ml for surgery or 2 ng/ml over the nadir PSA for radiation therapy) is indicative of treatment failure. Early identification of treatment failure may enable the use of more aggressive or neo-adjuvant therapies. Moreover, predicting failure prior to treatment may spare the patient from a procedure that is unlikely to be successful. Our goal is to identify differences on pre-treatment MRI between patients who have BcR and those who remain disease-free at 5 years post-treatment. Specifically, we focus on (1) identifying statistically significant differences in MRI intensities, (2) establishing morphological differences in prostatic anatomic structures, and (3) comparing these differences with the natural variability of prostatic structures. In order to attain these objectives, we use an anatomically constrained registration framework to construct BcR and non-BcR statistical atlases based on the pre-treatment magnetic resonance images (MRI) of the prostate. The patients included in the atlas underwent either RP or RT and were followed up for at least 5 years. The BcR atlas was constructed from a combined population of 12 pre-RT 1.5 Tesla (T) MRI and 33 pre-RP 3T MRI from patients with BcR within 5 years of treatment. Similarly, the non-BcR atlas was built from a combined cohort of 20 pre-RT 1.5T MRI and 41 pre-RP 3T MRI from patients who remained disease-free 5 years post-treatment. We chose the atlas framework as it enables the mapping of MR images from all subjects into the same canonical space, while constructing both an imaging and a morphological statistical atlas. Such co-registration allowed us to perform voxel-by-voxel comparisons of MRI intensities and capsule and central gland morphology to identify statistically significant differences between the BcR and non-BcR patient populations. To assess whether the morphological differences are valid, we performed an additional experiment in which we constructed sub-population atlases by randomly sampling RT patients to build the BcR and non-BcR atlases. Following these experiments we observed that: (1) statistically significant MRI intensity differences exist between BcR and non-BcR patients, specifically on the border of the central gland; (2) statistically significant morphological differences are visible in the prostate and central gland, specifically in the proximity of the apex; and (3) the differences between the BcR and non-BcR cohorts in terms of shape appeared to be consistent across these sub-population atlases as observed in our RT atlases.
Musculoskeletal and Miscellaneous
Dynamic automated synovial imaging (DASI) for differential diagnosis of rheumatoid arthritis
E. Grisan, B. Raffeiner, A. Coran, et al.
Inflammatory rheumatic diseases are leading causes of disability and constitute a frequent medical disorder, leading to inability to work, high comorbidity and increased mortality. The gold standard for diagnosing and differentiating arthritis is based on patient conditions and radiographic findings, such as joint erosions or decalcification. However, early signs of arthritis are joint effusion, hypervascularization and synovial hypertrophy. In particular, vascularization has been shown to correlate with arthritis' destructive behavior, more than clinical assessment does. Contrast Enhanced Ultrasound (CEUS) examination of the small joints is emerging as a sensitive tool for assessing vascularization and disease activity. The evaluation of the perfusion pattern relies on subjective semi-quantitative scales that are able to capture the macroscopic degree of vascularization but are unable to detect the subtler differences in perfusion kinetics parameters that might lead to a deeper understanding of disease progression and better management of patients. We show that after a kinetic analysis of contrast agent appearance, providing the quantitative features characterizing the perfusion pattern of the joint, it is possible to accurately discriminate rheumatoid arthritis (RA) from psoriatic arthritis (PsA) by building a random forest classifier on the computed features. We compare its accuracy with the assessment performed by an expert radiologist blinded to the diagnosis.
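Once per-joint kinetic features have been extracted from the CEUS time-intensity curves, the classification step reduces to fitting a random forest. The snippet below is a minimal sketch on synthetic data; the feature names in the comments and the cross-validation setup are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical per-joint kinetic features from a CEUS time-intensity curve fit,
# e.g. peak enhancement, time-to-peak, wash-in slope, area under the curve.
rng = np.random.default_rng(42)
X = rng.normal(size=(60, 4))            # 60 joints x 4 kinetic parameters (synthetic)
y = rng.integers(0, 2, size=60)         # 0 = RA, 1 = PsA (synthetic labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```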
Automated identification of spinal cord and vertebras on sagittal MRI
Chuan Zhou, Heang-Ping Chan, Qian Dong, et al.
We are developing an automated method for the identification of the spinal cord and the vertebras on spinal MR images, which is an essential step for computerized analysis of bone marrow diseases. The spinal cord segment was first enhanced by a newly developed hierarchical multiscale tubular (HMT) filter that utilizes the complementary hyper- and hypo-intensities in the T1-weighted (T1W) and STIR MRI sequences. An Expectation-Maximization (EM) analysis method was then applied to the enhanced tubular structures to extract candidates of the spinal cord. The spinal cord was finally identified by a maximum-likelihood registration method based on analysis of the features extracted from the candidate objects in the two MRI sequences. Using the identified spinal cord as a reference, the vertebras were localized based on the intervertebral disc locations extracted by another HMT filter applied to the T1W images. In this study, 5 and 30 MRI scans from 35 patients diagnosed with multiple myeloma were collected retrospectively with IRB approval as the training and test sets, respectively. The vertebras manually outlined by a radiologist were used as the reference standard. A total of 422 vertebras were marked in the 30 test cases. For the 30 test cases, 100% (30/30) of the spinal cords were correctly segmented, with 4 false positives (FPs) mistakenly identified on the back muscles in 4 scans. A sensitivity of 95.0% (401/422) was achieved for the identification of vertebras, and 5 FPs were marked in 4 scans, for an average FP rate of 0.17 FPs/scan.
Adaptive geodesic transform for segmentation of vertebrae on CT images
Bilwaj Gaonkar, Liao Shu, Gerardo Hermosillo, et al.
Vertebral segmentation is a critical first step in any quantitative evaluation of vertebral pathology using CT images. This is especially challenging because bone marrow tissue has the same intensity profile as the muscle surrounding the bone. Thus, simple methods such as thresholding or adaptive k-means fail to accurately segment vertebrae. While several other algorithms such as level sets may be used for segmentation, any algorithm that is clinically deployable has to work in under a few seconds. To address these dual challenges, we present here a new algorithm based on the geodesic distance transform that is capable of segmenting the spinal vertebrae in under one second. To achieve this we extend the theory of geodesic distance transforms proposed in [1] to incorporate high-level anatomical knowledge through adaptive weighting of image gradients. Such knowledge may be provided by the user directly or may be automatically generated by another algorithm. We incorporate information 'learnt' using a previously published machine learning algorithm [2] to segment the L1 to L5 vertebrae. While we present a particular application here, the adaptive geodesic transform is a generic concept which can be applied to segmentation of other organs as well.
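A geodesic distance transform with adaptively weighted image gradients can be sketched with scikit-image's minimum-cost-path machinery. The cost formula, the alpha parameter and the optional prior weight map below are illustrative assumptions rather than the authors' formulation.

```python
import numpy as np
from skimage.graph import MCP_Geometric
from skimage.filters import sobel

def adaptive_geodesic_distance(image, seeds, alpha=5.0, weight_map=None):
    """Geodesic distance from seed pixels where the local cost is driven by
    the image gradient, optionally re-weighted by a prior map (e.g. the
    output of a vertebra detector)."""
    grad = sobel(image.astype(float))
    cost = np.exp(alpha * grad / (grad.max() + 1e-8))   # cheap to cross flat marrow,
    if weight_map is not None:                          # expensive to cross strong edges
        cost *= weight_map                              # anatomical knowledge enters here
    mcp = MCP_Geometric(cost)
    distances, _ = mcp.find_costs(seeds)
    return distances

# Toy usage: a bright square on a dark background, seed at its centre.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
d = adaptive_geodesic_distance(img, seeds=[(32, 32)])
print(d[32, 40], d[5, 5])   # inside the square is much "closer" than outside
```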
2D segmentation of intervertebral discs and its degree of degeneration from T2-weighted magnetic resonance images
Isaac Castro-Mateos, José Maria Pozo, Aron Lazary, et al.
Low back pain (LBP) is a disorder suffered by a large population around the world. A key factor causing this illness is Intervertebral Disc (IVD) degeneration, whose early diagnosis could help in preventing this widespread condition. Clinicians base their diagnosis on visual inspection of 2D slices of Magnetic Resonance (MR) images, which is subject to large interobserver variability. In this work, an automatic classification method is presented, which provides the Pfirrmann degree of degeneration from a mid-sagittal MR slice. The proposed method utilizes Active Contour Models, with a new geometrical energy, to achieve an initial segmentation, which is further improved using fuzzy C-means. Then, IVDs are classified according to their degree of degeneration. This classification is attained by employing Adaboost on five specific features: the mean and the variance of the probability map of the nucleus, each computed using two different approaches, and the eccentricity of the ellipse fitted to the contour of the IVD. The classification method was evaluated using a cohort of 150 intervertebral discs assessed by three experts, resulting in a mean specificity (93%) and sensitivity (83%) similar to those provided by each expert with respect to the majority vote. The segmentation accuracy was evaluated using the Dice Similarity Index (DSI) and the Root Mean Square Error (RMSE) of the point-to-contour distance. The mean DSI ± 2 standard deviations was 91.7% ± 5.6%, the mean RMSE was 0.82 mm, and the 95th percentile was 1.36 mm. These results were found to be accurate when compared to the state of the art.
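The final grading step, AdaBoost applied to a handful of per-disc descriptors, can be sketched as follows with scikit-learn; the synthetic features and labels merely stand in for the five descriptors and Pfirrmann grades named above.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for the five per-disc descriptors: nucleus probability-map
# mean and variance (two approaches each) plus the eccentricity of the fitted ellipse.
rng = np.random.default_rng(7)
X = rng.normal(size=(150, 5))                  # 150 discs x 5 features
y = rng.integers(1, 6, size=150)               # Pfirrmann grades I-V (synthetic)

clf = AdaBoostClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```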
Prediction of treatment response and metastatic disease in soft tissue sarcoma
Hamidreza Farhidzadeh, Mu Zhou, Dmitry B. Goldgof, et al.
Soft tissue sarcomas (STS) are a heterogeneous group of malignant tumors comprising more than 50 histologic subtypes. Based on spatial variations of the tumor, predictions of the development of necrosis in response to therapy, as well as of eventual progression to metastatic disease, are made. Optimization of treatment, as well as management of therapy-related side effects, may be improved using progression information earlier in the course of therapy. Multimodality pre- and post-gadolinium enhanced magnetic resonance images (MRI) were taken before and after treatment for 30 patients. Regional variations in the tumor bed were measured quantitatively. The voxel values from the tumor region were used as features and a fuzzy clustering algorithm was used to segment the tumor into three spatial regions. The regions were given labels of high, intermediate and low based on the average signal intensity of pixels from the post-contrast T1 modality. These spatially distinct regions were viewed as essential meta-features to predict the response of the tumor to therapy based on necrosis (dead tissue in the tumor bed) and metastatic disease (spread of the tumor to sites other than the primary). The best feature was the difference in the number of pixels in the highest intensity regions of tumors before and after treatment. This enabled prediction of patients with metastatic disease and lack of positive treatment response (i.e., less necrosis). The best accuracy, 73.33%, was achieved by a Support Vector Machine in a leave-one-out cross validation on 30 cases predicting necrosis < 90% post treatment and metastasis.
Automatic detection and segmentation of liver metastatic lesions on serial CT examinations
Avi Ben Cohen, Idit Diamant, Eyal Klang, et al.
In this paper we present a fully automated method for detection and segmentation of liver metastases on serial CT examinations (portal phase) given a 2D baseline segmentation mask. Our database contains 27 CT scans, baselines and follow-ups, of 12 patients and includes 22 test cases. Our method is based on the information given in the baseline CT scan which contains the lesion's segmentation mask marked manually by a radiologist. We use the 2D baseline segmentation mask to identify the lesion location in the follow-up CT scan using non-rigid image registration. The baseline CT scan is also used to locate regions of tissues surrounding the lesion and to map them onto the follow-up CT scan, in order to reduce the search area on the follow-up CT scan. Adaptive region-growing and mean-shift segmentation are used to obtain the final lesion segmentation. The segmentation results are compared to those obtained by a human radiologist. Compared to the reference standard our method made a correct RECIST 1.1 assessment for 21 out of 22 test cases. The average Dice index was 0.83 ± 0.07, average Hausdorff distance was 7.85± 4.84 mm, average sensitivity was 0.87 ± 0.11 and positive predictive value was 0.81 ± 0.10. The segmentation performance and the RECIST assessment results look promising. We are pursuing the methodology further with expansion to 3D segmentation while increasing the dataset we are collecting from the CT abdomen unit at Sheba medical center.
Breast II
Identification of corresponding lesions in multiple mammographic views using star-shaped iso-contours
Rafael Wiemker, Dominik Kutra, Harald Heese, et al.
It is common practice to assess lesions in two different mammographic views of each breast: medio-lateral oblique (MLO) and cranio-caudal (CC). We investigate methods that aim at automatic identification, in the other view of the same breast, of a lesion which was indicated by the user in one view. Automated matching of user-indicated lesions has slightly different objectives than lesion segmentation or matching for improved computer-aided detection, leading to different algorithmic choices. A novel computationally efficient algorithm is presented which is based on detection of star-shaped iso-contours with high sphericity and local consistency. The lesion likelihood is derived from a purely geometry-based figure of merit and thus is invariant against monotonic intensity transformations (e.g., non-linear LUTs). Validation was carried out by means of FROC curves on a public database consisting entirely of digital mammograms with expert-delineated match pairs, showing superior performance as compared to gradient-based minimum cost path algorithms, with computation times faster by an order of magnitude and the potential of being fully parallelizable for GPU implementations.
Boosting classification performance in computer aided diagnosis of breast masses in raw full-field digital mammography using processed and screen film images
Thijs Kooi, Nico Karssemeijer
The introduction of Full-Field Digital Mammography (FFDM) in breast screening has brought with it several advantages in terms of processing facilities and image quality, and Computer Aided Detection (CAD) systems that make use of this modality are now emerging. A major drawback, however, is that FFDM data is still relatively scarce and therefore CAD systems' performance is inhibited by a lack of training examples. In this paper, we explore the incorporation of the more ubiquitous Screen Film Mammograms (SFM) and of FFDM processed by the manufacturer in training a system for the detection of tumour masses. We compute a small set of additional quantitative features in the raw data that make explicit use of the log-linearity of the energy imparted on the detector in raw FFDM. We explore four different fusion methods: a weighted average, a majority vote, a convex combination of classifier outputs based on the training error, and an additional classifier that combines the output of the three individual label estimates. Results are evaluated based on the Partial Area Under the Curve (PAUC) around a clinically relevant operating point. All fusion methods perform significantly better than any of the individual classifiers, but we find no significant difference between the fusion techniques.
Lesion classification using clinical and visual data fusion by multiple kernel learning
Pavel Kisilev, Sharbell Hashoul, Eugene Walach, et al.
To overcome operator dependency and to increase diagnosis accuracy in breast ultrasound (US), a lot of effort has been devoted to developing computer-aided diagnosis (CAD) systems for breast cancer detection and classification. Unfortunately, the efficacy of such CAD systems is limited since they rely on correct automatic lesion detection and localization, and on the robustness of features computed based on the detected areas. In this paper we propose a new approach to boost the performance of a machine-learning based CAD system by combining visual and clinical data from patient files. We compute a set of visual features from breast ultrasound images, and construct the textual descriptor of patients by extracting relevant keywords from patients' clinical data files. We then use the Multiple Kernel Learning (MKL) framework to train an SVM-based classifier to discriminate between benign and malignant cases. We investigate different types of data fusion methods, namely early, late, and intermediate (MKL-based) fusion. Our database consists of 408 patient cases, each containing US images, a textual description of complaints and symptoms filled in by physicians, and confirmed diagnoses. We show experimentally that the proposed MKL-based approach is superior to other classification methods. Even though the clinical data is very sparse and noisy, its MKL-based fusion with visual features yields a significant improvement in classification accuracy, as compared to the classifier based on image features only.
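In its simplest form, MKL-style intermediate fusion combines a kernel built on visual features with a kernel built on textual features before training a single SVM. The sketch below fixes the mixing weight by hand rather than learning it, which is a simplification of true MKL; the kernel choices and data shapes are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

def combined_kernel(Kv, Kt, beta):
    """Convex combination of a visual kernel and a textual kernel.
    True MKL learns beta jointly with the SVM; here it is fixed."""
    return beta * Kv + (1.0 - beta) * Kt

# Synthetic feature blocks (real ones would be US image features and
# bag-of-words vectors extracted from the clinical text).
rng = np.random.default_rng(1)
X_visual = rng.normal(size=(100, 30))
X_text = rng.random(size=(100, 200))
y = rng.integers(0, 2, size=100)

K = combined_kernel(rbf_kernel(X_visual), linear_kernel(X_text), beta=0.6)
clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))   # training accuracy on the toy data
```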
Breast density and parenchymal texture measures as potential risk factors for estrogen-receptor positive breast cancer
Brad M. Keller, Jinbo Chen, Emily F. Conant, et al.
Accurate assessment of a woman's risk to develop specific subtypes of breast cancer is critical for appropriate utilization of chemopreventative measures, such as tamoxifen in preventing estrogen-receptor positive breast cancer. In this context, we investigate quantitative measures of breast density and parenchymal texture, measures of glandular tissue content and tissue structure, as risk factors for estrogen-receptor positive (ER+) breast cancer. Mediolateral oblique (MLO) view digital mammograms of the contralateral breast from 106 women with unilateral invasive breast cancer were retrospectively analyzed. Breast density and parenchymal texture were analyzed via fully-automated software. Logistic regression with feature selection was performed to predict ER+ versus ER- cancer status. A combined model considering all extracted imaging measures was compared to baseline models consisting of density-alone and texture-alone features. The area under the curve (AUC) of the receiver operating characteristic (ROC) and DeLong's test were used to compare the models' discriminatory capacity for receptor status. The density-alone model had a discriminatory capacity of 0.62 AUC (p=0.05). The texture-alone model had a higher discriminatory capacity of 0.70 AUC (p=0.001), which was not significantly different from the density-alone model (p=0.37). In contrast, the combined density-texture logistic regression model had a discriminatory capacity of 0.82 AUC (p<0.001), which was statistically significantly higher than both the density-alone (p<0.001) and texture-alone regression models (p=0.04). The combination of breast density and texture measures may have the potential to identify women specifically at risk for estrogen-receptor positive breast cancer and could be useful in triaging women into appropriate risk-reduction strategies.
Ultrasound breast lesion segmentation using adaptive parameters
Baek Hwan Cho, Yeong Kyeong Seong, Junghoe Kim, et al.
In computer aided diagnosis for ultrasound images, breast lesion segmentation is an important but intractable procedure. Although active contour models with a level set energy function have been proposed for breast ultrasound lesion segmentation, those models usually select and fix the weight values for each component of the level set energy function empirically. The fixed weights might affect the segmentation performance since the characteristics and patterns of tissue and tumor differ between patients. Besides, there is observer variability in probe handling and ultrasound machine gain settings. Hence, we propose an active contour model with adaptive parameters for breast ultrasound lesion segmentation to overcome the variability of tissue and tumor patterns between patients. The main idea is to estimate the optimal parameter set automatically for different input images. We used regression models based on 27 numerical features from the input image and an initial seed box. Our method showed better segmentation performance than the original model with fixed parameters. In addition, it could facilitate higher classification performance with the segmentation results. In conclusion, the proposed active contour segmentation model with adaptive parameters has the potential to deal with various different patterns of tissue and tumor effectively.
Lung, Chest, and Abdomen II
Comparison of CLASS and ITK-SNAP in segmentation of urinary bladder in CT urography
We are developing a computerized method for bladder segmentation in CT urography (CTU) for computer-aided diagnosis of bladder cancer. We have developed a Conjoint Level set Analysis and Segmentation System (CLASS) consisting of four stages: preprocessing and initial segmentation, 3D and 2D level set segmentation, and post-processing. In cases where the bladder contains regions filled with intravenous (IV) contrast and regions without contrast, CLASS segments the noncontrast (NC) region and the contrast-filled (C) region separately and conjoins the contours. In this study, we compared the performance of CLASS to ITK-SNAP 2.4, which is a publicly available software application for segmentation of structures in 3D medical images. ITK-SNAP performs segmentation by using the edge-based level set on preprocessed images. The level sets were initialized by manually placing a sphere at the boundary between the C and NC parts for the bladders with both C and NC regions, and in the middle of the bladders that had only a C or NC region. Level set parameters and the number of iterations were chosen after experimentation with bladder cases. Segmentation performances were compared using 30 randomly selected bladders. 3D hand-segmented contours were obtained as the reference standard, and computerized segmentation accuracy was evaluated in terms of the average volume intersection %, average % volume error, average absolute % volume error, average minimum distance, and average Jaccard index. For CLASS, the values for these performance metrics were 79.0±8.2%, 16.1±16.3%, 19.9±11.1%, 3.5±1.3 mm, and 75.7±8.4%, respectively. For ITK-SNAP, the corresponding values were 78.8±8.2%, 8.3±33.1%, 24.2±23.7%, 5.2±2.6 mm, and 71.0±15.4%, respectively. CLASS on average performed better and exhibited less variation than ITK-SNAP for bladder segmentation.
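The overlap-based evaluation metrics reported above (volume intersection %, signed and absolute volume error, Jaccard index) are straightforward to compute from two binary masks, as in the sketch below; the surface-distance metric is omitted because it additionally requires boundary extraction.

```python
import numpy as np

def segmentation_metrics(auto_mask, ref_mask, voxel_volume=1.0):
    """Volume-overlap metrics for a computerized segmentation versus a
    hand-segmented reference, both given as boolean 3D masks."""
    inter = np.logical_and(auto_mask, ref_mask).sum()
    union = np.logical_or(auto_mask, ref_mask).sum()
    v_auto = auto_mask.sum() * voxel_volume
    v_ref = ref_mask.sum() * voxel_volume
    return {
        "volume_intersection_pct": 100.0 * inter / ref_mask.sum(),
        "volume_error_pct": 100.0 * (v_ref - v_auto) / v_ref,        # signed
        "abs_volume_error_pct": 100.0 * abs(v_ref - v_auto) / v_ref,
        "jaccard_pct": 100.0 * inter / union,
    }

# Toy example with two overlapping spheres standing in for bladder contours.
zz, yy, xx = np.ogrid[:64, :64, :64]
ref = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2
auto = (zz - 34) ** 2 + (yy - 32) ** 2 + (xx - 30) ** 2 < 19 ** 2
print(segmentation_metrics(auto, ref))
```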
Abdominal lymphadenopathy detection using random forest
Kevin M. Cherry, Shijun Wang, Evrim B. Turkbey, et al.
We propose a new method for detecting abdominal lymphadenopathy by utilizing a random forest statistical classifier to create voxel-level lymph node predictions, i.e. initial detection of enlarged lymph nodes. The framework permits the combination of multiple statistical lymph node descriptors and appropriate feature selection in order to improve lesion detection beyond traditional enhancement filters. We show that Hessian blobness measurements alone are inadequate for detecting lymph nodes in the abdominal cavity. Of the features tested here, intensity proved to be the most important predictor for lymph node classification. For initial detection, candidate lesions were extracted from the 3D prediction map generated by random forest. Statistical features describing intensity distribution, shape, and texture were calculated from each enlarged lymph node candidate. In the last step, a support vector machine (SVM) was trained and tested based on the calculated features from candidates and labels determined by two experienced radiologists. The computer-aided detection (CAD) system was tested on a dataset containing 30 patients with 119 enlarged lymph nodes. Our method achieved an AUC of 0.762±0.022 and a sensitivity of 79.8% with 15 false positives suggesting it can aid radiologists in finding enlarged lymph nodes.
A new classifier fusion method based on confusion matrix and classification confidence for recognizing common CT imaging signs of lung diseases
Ling Ma, Xiabi Liu, Li Song, et al.
Common CT Imaging Signs of Lung Diseases (CISL) are defined as the imaging signs that frequently appear in lung CT images from patients and play important roles in the diagnosis of lung diseases. This paper proposes a new method of multiple classifier fusion to recognize the CISLs, which is based on the confusion matrices of the classifiers and the classification confidence values output by the classifiers. The confusion matrix reflects the historical reliability of a classifier's decision-making, while the difference between the classification confidence values for competing classes reflects the current reliability of its decision-making. The two factors are merged to obtain the weights of the classifiers' classification confidence values for the input pattern. The classifiers are then fused in a weighted-sum form. In our experiments on CISL recognition, we combine three types of classifiers: Max-Min posterior Pseudo-probabilities (MMP), the Support Vector Machine (SVM) and Bagging. Our method performed better than not only each of the three single classifiers but also AdaBoost with SVM-based weak learners. This shows that the proposed method is effective and promising.
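One plausible reading of the fusion rule is sketched below: each classifier's vote is weighted by the product of a historical reliability term taken from its confusion matrix and a current reliability term given by its confidence margin, and the weighted confidences are summed. The exact weighting formula in the paper may differ.

```python
import numpy as np

def fuse_classifiers(confidences, confusion_matrices):
    """Weighted-sum fusion in the spirit described above.

    confidences        : list of (n_classes,) confidence vectors, one per classifier,
                         for the current input pattern.
    confusion_matrices : list of (n_classes, n_classes) matrices estimated on a
                         validation set (rows = true class, cols = predicted class).
    """
    n_classes = len(confidences[0])
    fused = np.zeros(n_classes)
    for conf, cm in zip(confidences, confusion_matrices):
        pred = int(np.argmax(conf))
        precision = cm[pred, pred] / max(cm[:, pred].sum(), 1)   # historical reliability
        top2 = np.sort(conf)[-2:]
        margin = top2[1] - top2[0]                               # current reliability
        fused += precision * margin * np.asarray(conf)
    return int(np.argmax(fused)), fused

# Toy fusion of three classifiers on a 4-class problem.
conf_a = [0.6, 0.2, 0.1, 0.1]
conf_b = [0.3, 0.4, 0.2, 0.1]
conf_c = [0.5, 0.1, 0.3, 0.1]
cm = np.eye(4) * 20 + 2          # a reasonably reliable classifier's confusion matrix
print(fuse_classifiers([conf_a, conf_b, conf_c], [cm, cm, cm]))
```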
Automated detection and quantification of micronodules in thoracic CT scans to identify subjects at risk for silicosis
C. Jacobs, S.H.T. T. Opdam, E. M. van Rikxoort, et al.
Silica dust-exposed individuals are at high risk of developing silicosis, a fatal and incurable lung disease. The presence of disseminated micronodules on thoracic CT is the radiological hallmark of silicosis but locating micronodules, to identify subjects at risk, is tedious for human observers. We present a computer-aided detection scheme to automatically find micronodules and quantify micronodule load. The system used lung segmentation, template matching, and a supervised classification scheme. The system achieved a promising sensitivity of 84% at an average of 8.4 false positive marks per scan. In an independent data set of 54 CT scans in which we defined four risk categories, the CAD system automatically classified 83% of subjects correctly, and obtained a weighted kappa of 0.76.
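The template-matching stage of such a pipeline can be sketched with normalized cross-correlation restricted to the lung mask, with correlation peaks becoming nodule candidates for the supervised classifier. The Gaussian-blob template, threshold and peak spacing below are illustrative assumptions.

```python
import numpy as np
from skimage.feature import match_template, peak_local_max

def micronodule_candidates(ct_volume, lung_mask, template, threshold=0.5):
    """Normalized cross-correlation with a small blob template, restricted to
    the lung mask; peaks above a correlation threshold become candidates for
    the subsequent supervised classification stage."""
    ncc = match_template(ct_volume, template, pad_input=True)
    ncc[~lung_mask] = -1.0                       # ignore responses outside the lungs
    return peak_local_max(ncc, threshold_abs=threshold, min_distance=3)

# Tiny synthetic example: one Gaussian blob inside a "lung".
zz, yy, xx = np.ogrid[-3:4, -3:4, -3:4]
blob = np.exp(-(zz**2 + yy**2 + xx**2) / 4.0)    # Gaussian blob as nodule model
vol = np.zeros((40, 40, 40), dtype=float)
vol[17:24, 17:24, 17:24] = blob                  # one synthetic micronodule
mask = np.ones_like(vol, dtype=bool)
print(micronodule_candidates(vol, mask, blob, threshold=0.3))
```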
Multiple-instance learning for computer-aided detection of tuberculosis
J. Melendez, C. I. Sánchez, R. H. H. M. Philipsen, et al.
Detection of tuberculosis (TB) on chest radiographs (CXRs) is a hard problem. Therefore, to help radiologists or even take their place when they are not available, computer-aided detection (CAD) systems are being developed. In order to reach a performance comparable to that of human experts, the pattern recognition algorithms of these systems are typically trained on large CXR databases that have been manually annotated to indicate the abnormal lung regions. However, manually outlining those regions constitutes a time-consuming process that, moreover, is prone to inconsistencies and errors introduced by interobserver variability and the absence of an external reference standard. In this paper, we investigate an alternative pattern classification method, namely multiple-instance learning (MIL), that does not require such detailed information for a CAD system to be trained. We have applied this alternative approach to a CAD system aimed at detecting textural lesions associated with TB. Only the case (or image) condition (normal or abnormal) was provided in the training stage. We compared the resulting performance with those achieved by several variations of a conventional system trained with detailed annotations. A database of 917 CXRs was constructed for experimentation. It was divided into two roughly equal parts that were used as training and test sets. The area under the receiver operating characteristic curve was utilized as a performance measure. Our experiments show that, by applying the investigated MIL approach, results comparable to those of the aforementioned conventional systems are obtained in most cases, without requiring condition information at the lesion level.
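A very simple MIL baseline, shown below, propagates each image-level label to all patches of that image and scores a radiograph by its most suspicious patch; this is only one of many MIL strategies and is not necessarily the formulation used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_simple_mil(bags, bag_labels):
    """Naive MIL baseline: propagate each bag's (image-level) label to all of
    its instances (texture patches) and train an ordinary classifier."""
    X = np.vstack(bags)
    y = np.concatenate([[lab] * len(b) for b, lab in zip(bags, bag_labels)])
    return RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def score_bag(clf, bag):
    """A CXR is called abnormal if its most suspicious patch is suspicious."""
    return clf.predict_proba(bag)[:, 1].max()

# Toy data: 6 bags of 20 instances with 10 texture features each.
rng = np.random.default_rng(3)
labels = [0, 0, 0, 1, 1, 1]
bags = [rng.normal(loc=lab, size=(20, 10)) for lab in labels]
clf = train_simple_mil(bags, labels)
print([round(score_bag(clf, b), 2) for b in bags])
```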
Seamless insertion of real pulmonary nodules in chest CT exams
Aria Pezeshk, Berkman Sahiner, Rongping Zeng, et al.
The availability of large medical image datasets is critical in many applications such as training and testing of computer aided diagnosis (CAD) systems, evaluation of segmentation algorithms, and conducting perceptual studies. However, collection of large repositories of clinical images is hindered by the high cost and difficulties associated with both the accumulation of data and establishment of the ground truth. To address this problem, we are developing an image blending tool that allows users to modify or supplement existing datasets by seamlessly inserting a real lesion extracted from a source image into a different location on a target image. In this study we focus on the application of this tool to pulmonary nodules in chest CT exams. We minimize the impact of user skill on the perceived quality of the blended image by limiting user involvement to two simple steps: the user first draws a casual boundary around the nodule of interest in the source, and then selects the center of desired insertion area in the target. We demonstrate examples of the performance of the proposed system on samples taken from the Lung Image Database Consortium (LIDC) dataset, and compare the noise power spectrum (NPS) of blended nodules versus that of native nodules in simulated phantoms.
Poster Session: Breast
Classification based micro-calcification detection using discriminative restricted Boltzmann machine in digitized mammograms
SeungYeon Shin, Soochan Lee, Il Dong Yun
We present a new method for automatic detection of micro-calcifications using the Discriminative Restricted Boltzmann Machine (DRBM). The DRBM is used to automatically learn the specific features which distinguish micro-calcifications from normal tissue, as well as their morphological variations. Within the DRBM, low-level image structures that are specific features of micro-calcifications are automatically captured without the feature selection based on expert knowledge or time-consuming hand-tuning that previous methods required. Experimental evaluation conducted on a set of 33 mammograms yielded an area under the Receiver Operating Characteristic (ROC) curve of 0.8294, which shows the effectiveness of the proposed method.
Automatic ultrasound image enhancement for 2D semi-automatic breast-lesion segmentation
Kongkuo Lu, Christopher S. Hall
Breast cancer is the fastest-growing cancer, accounting for 29% of new cases in 2012, and the second leading cause of cancer death among women in the United States and worldwide. Ultrasound (US) has been used as an indispensable tool for breast cancer detection/diagnosis and treatment. In computer-aided assistance, lesion segmentation is a preliminary but vital step, but the task is quite challenging in US images due to imaging artifacts that complicate detection and measurement of the suspect lesions. The lesions usually present with poor boundary features and vary significantly in size, shape, and intensity distribution between cases. Automatic methods are highly application-dependent, while manual tracing methods are extremely time-consuming and have a great deal of intra- and inter-observer variability. Semi-automatic approaches are designed to balance the advantages and drawbacks of the automatic and manual methods. However, considerable user interaction might be necessary to ensure reasonable segmentation for a wide range of lesions. This work proposes an automatic enhancement approach to improve the boundary-searching ability of the live wire method, to reduce the necessary user interaction while keeping the segmentation performance. Based on the results of segmentation of 50 2D breast lesions in US images, less user interaction is required to achieve the desired accuracy, i.e., < 80%, when auto-enhancement is applied for live-wire segmentation.
A content based framework for mass retrieval in mammograms
In recent years, there has been phenomenal growth in the volume of digital mammograms produced in hospitals and medical centers. Thus, there is a need to create efficient access methods or retrieval tools to search, browse and retrieve images from large repositories to aid diagnosis and research. This paper presents a Content Based Medical Image Retrieval (CBMIR) system for mass retrieval in mammograms using a two-stage framework. Also, for mass segmentation, a semi-automatic method based on a Seed Region Growing approach is proposed. Shape features are extracted at the first stage to find lesions of similar shape, and the second stage further refines the results by finding similar pathology-bearing lesions using texture features. The various shape features used in this study are Compactness, Convexity, Spicularity, Radial Distance (RD) based features, Zernike Moments (ZM) and Fourier Descriptors (FD). The texture of mass lesions is characterized by Gray Level Co-occurrence Matrix (GLCM) features, Gray Level Run Length Matrix (GLRLM) features and Fourier Power Spectrum (FPS) features. In this paper, feature selection is done by the Correlation based Feature Selection (CFS) technique to select the best subset of shape and texture features, as the high dimensionality of the feature vector may limit computational efficiency. This study used the IRMA version of the DDSM LJPEG data to evaluate the retrieval performance of the various shape and texture features. From the experimental results, it has been found that the proposed CBMIR system using merely the compactness or the shape features selected by CFS provided better distinction among the four categories of mass shape (Round, Oval, Lobulated and Irregular) at the first stage, and FPS based texture features provided better distinction between pathologies (Benign and Malignant) at the second stage.
Development of a computer tool to detect and classify nodules in ultrasound breast images
Due to the high incidence rate of breast cancer in women, many procedures have been developed to assist diagnosis and early detection. Currently, ultrasonography has proved to be a useful tool in distinguishing benign and malignant masses. In this context, computer-aided diagnosis schemes have provided the specialist with a more accurate and reliable second opinion, minimizing visual subjectivity between observers. Thus, we propose the application of an automatic detection method based on the active contour technique in order to precisely delineate the contour of the lesion and provide a better understanding of its morphology. For this, a total of 144 phantom images were segmented and submitted to morphological opening and closing operations to smooth the edges. Morphological features were then extracted and selected to serve as input parameters for a Multilayer Perceptron neural classifier, which obtained 95.34% correct classification of the data and an Az of 0.96.
A new mass classification system derived from multiple features and a trained MLP model
A high false-positive recall rate is an important clinical issue that reduces the efficacy of screening mammography. Aiming to help improve the accuracy of classification between benign and malignant breast masses, and thereby reduce false-positive recalls, we developed and tested a new computer-aided diagnosis (CAD) scheme for mass classification using a database of 600 verified mass regions. The mass regions were segmented from regions of interest (ROIs) with a fixed size of 512×512 pixels. The mass regions were first segmented by an automated scheme, with manual corrections to the mass boundary performed if there was a noticeable segmentation error. We randomly divided the 600 ROIs into 400 ROIs (200 malignant and 200 benign) for training and 200 ROIs (100 malignant and 100 benign) for testing. We computed and analyzed 124 shape, texture, contrast, and spiculation based features in this study. Combined with 27 previously computed regional and shape based features for each of the ROIs in our database, these formed an initial image feature pool. From this pool of 151 features, we extracted 13 features by applying the Sequential Forward Floating Selection algorithm on the ROIs in the training dataset. We then trained a multilayer perceptron model using these 13 features, and applied the trained model to the ROIs in the testing dataset. Receiver operating characteristic (ROC) analysis was used to evaluate classification accuracy. The area under the ROC curve was 0.8814±0.025 for the testing dataset. The results show a higher CAD mass classification performance, which needs to be validated further in a more comprehensive study.
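The feature-selection-plus-MLP pipeline can be approximated with scikit-learn as sketched below. Note that scikit-learn ships plain sequential forward selection rather than the floating (SFFS) variant, and the surrogate selector, network size and synthetic data are assumptions.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the 151-feature pool (400 training / 200 test ROIs).
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(400, 151)), rng.integers(0, 2, size=400)
X_test, y_test = rng.normal(size=(200, 151)), rng.integers(0, 2, size=200)

# Plain forward selection (no floating/SFFS variant in scikit-learn); a fast
# logistic surrogate drives the wrapper search instead of the MLP itself.
selector = SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                     n_features_to_select=13,
                                     direction="forward", cv=3)
model = make_pipeline(StandardScaler(), selector,
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                                    random_state=0))
model.fit(X_train, y_train)
print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```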
Automated breast tissue density assessment using high order regional texture descriptors in mammography
Yan Nei Law, Monica Keiko Lieng, Jingmei Li, et al.
Breast cancer is the most common cancer and second leading cause of cancer death among women in the US. The relative survival rate is lower among women with a more advanced stage at diagnosis. Early detection through screening is vital. Mammography is the most widely used and only proven screening method for reliably and effectively detecting abnormal breast tissues. In particular, mammographic density is one of the strongest breast cancer risk factors, after age and gender, and can be used to assess the future risk of disease before individuals become symptomatic. A reliable method for automatic density assessment would be beneficial and could assist radiologists in the evaluation of mammograms. To address this problem, we propose a density classification method which uses statistical features from different parts of the breast. Our method is composed of three parts: breast region identification, feature extraction and building ensemble classifiers for density assessment. It explores the potential of the features extracted from second and higher order statistical information for mammographic density classification. We further investigate the registration of bilateral pairs and time-series of mammograms. The experimental results on 322 mammograms demonstrate that (1) a classifier using features from dense regions has higher discriminative power than a classifier using only features from the whole breast region; (2) these high-order features can be effectively combined to boost the classification accuracy; (3) a classifier using these statistical features from dense regions achieves 75% accuracy, which is a significant improvement from 70% accuracy obtained by the existing approaches.
Improving breast mass detection using histogram of oriented gradients
Victor Pomponiu, Harishwaran Hariharan, Bin Zheng, et al.
In this paper we present a simple technique that can be employed to filter the output of computerized mass detection schemes. The sensitivity of computer-aided detection (CAD) systems is high; their specificity, however, is not, owing to high false-positive (FP) detection rates. Our approach is based on the Histogram of Oriented Gradients (HOG) descriptor for separating mass and normal tissue regions. After the descriptors are computed, Support Vector Machines (SVM) are applied to classify the identified masses. The devised technique was tested on 1881 regions of interest (ROIs) acquired using a previously proposed CAD system. Extensive simulations are conducted to illustrate the capacity of the HOG descriptor to improve the performance of mass detection systems.
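The HOG-then-SVM filtering step can be sketched with scikit-image and scikit-learn as below; the HOG cell/block sizes, the RBF kernel and the synthetic ROIs are illustrative assumptions.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def hog_descriptor(roi):
    """HOG descriptor of a candidate ROI (grey-level patch)."""
    return hog(roi, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), block_norm="L2-Hys")

# Synthetic ROIs standing in for CAD-detected candidates (1 = mass, 0 = normal tissue).
rng = np.random.default_rng(5)
rois = rng.random((40, 128, 128))
labels = rng.integers(0, 2, size=40)
X = np.array([hog_descriptor(r) for r in rois])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```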
Detection of the nipple in automated 3D breast ultrasound using coronal slab-average-projection and cumulative probability map
Hannah Kim, Helen Hong
We propose an automatic method for nipple detection in 3D automated breast ultrasound (3D ABUS) images using coronal slab-average-projection and a cumulative probability map. First, to identify coronal images that show a clear distinction between the nipple-areola region and the skin, the skewness of each coronal image is measured and the negatively skewed images are selected. A coronal slab-average-projection image is then reformatted from the selected images. Second, to localize the nipple-areola region, an elliptical ROI covering it is detected using the Hough ellipse transform in the coronal slab-average-projection image. Finally, to separate the nipple from the areola region, 3D Otsu thresholding is applied to the elliptical ROI, and a cumulative probability map within the elliptical ROI is generated by assigning high probability to low-intensity regions. Falsely detected small components are eliminated using morphological opening, and the center point of the detected nipple region is calculated. Experimental results show that our method achieves a 94.4% nipple detection rate.
Bilateral image subtraction features for multivariate automated classification of breast cancer risk
Early tumor detection is key to reducing breast cancer deaths, and screening mammography is the most widely available method for early detection. However, mammogram interpretation is performed by human radiologists, whose radiological skills, experience and workload make interpretation inconsistent. In an attempt to make mammographic interpretation more consistent, computer-aided diagnosis (CADx) systems have been introduced. This paper presents a CADx system aimed at automatically triaging normal mammograms from suspicious mammograms. The CADx system co-registers the left and right breast images, then extracts image features from the co-registered bilateral mammographic sets. Finally, an optimal logistic multivariate model is generated by means of an evolutionary search engine. In this study, 440 subjects from the DDSM public data sets were used: 44 normal mammograms, 201 malignant mass mammograms, and 195 mammograms with malignant calcifications. The results showed a cross-validation accuracy of 0.88 and an area under the receiver operating characteristic curve (AUC) of 0.89 for calcifications vs. normal mammograms. The optimal mass vs. normal mammograms model obtained an accuracy of 0.85 and an AUC of 0.88.
Roles of biologic breast tissue composition and quantitative image analysis of mammographic images in breast tumor characterization
Purpose. Investigate whether knowledge of the biologic image composition of mammographic lesions provides image-based biomarkers above and beyond those obtainable from quantitative image analysis (QIA) of X-ray mammography. Methods. The dataset consisted of 45 in vivo breast lesions imaged with the novel 3-component breast (3CB) imaging technique based on dual-energy mammography (15 malignant, 30 benign diagnoses). The 3CB composition measures of water, lipid, and protein thicknesses were assessed and mathematical descriptors, ‘3CB features’, were obtained for the lesions and their periphery. The raw low-energy mammographic images were analyzed with an established in-house QIA method obtaining ‘QIA features’ describing morphology and texture. We investigated the correlation within the ‘3CB features’, within the ‘QIA features’, and between the two. In addition, the merit of individual features in the distinction between malignant and benign lesions was assessed. Results. Whereas many descriptors within the ‘3CB features’ and ‘QIA features’ were, often by design, highly correlated, the correlation between descriptors of the two feature groups was much weaker (maximum absolute correlation coefficient 0.58, p<0.001), indicating that 3CB and QIA-based biomarkers provided potentially complementary information. Single descriptors from 3CB and QIA appeared equally well-suited for the distinction between malignant and benign lesions, with a maximum area under the ROC curve of 0.71 for a protein feature (3CB) and 0.71 for a texture feature (QIA). Conclusions. In this pilot study analyzing the new 3CB imaging modality, knowledge of breast tissue composition appeared additive in combination with existing mammographic QIA methods for the distinction between benign and malignant lesions.
New method for predicting estrogen receptor status utilizing breast MRI texture kinetic analysis
Baishali Chaudhury, Lawrence O. Hall, Dmitry B. Goldgof, et al.
Magnetic Resonance Imaging (MRI) of breast cancer typically shows that tumors are heterogeneous with spatial variations in blood flow and cell density. Here, we examine the potential link between clinical tumor imaging and the underlying evolutionary dynamics behind heterogeneity in the cellular expression of estrogen receptors (ER) in breast cancer. We assume, in an evolutionary environment, that ER expression will only occur in the presence of significant concentrations of estrogen, which is delivered via the blood stream. Thus, we hypothesize, the expression of ER in breast cancer cells will correlate with blood flow on gadolinium enhanced breast MRI. To test this hypothesis, we performed quantitative analysis of blood flow on dynamic contrast enhanced MRI (DCE-MRI) and correlated it with the ER status of the tumor. Here we present our analytic methods, which utilize a novel algorithm to analyze 20 volumetric DCE-MRI breast cancer tumors. The algorithm generates post initial enhancement (PIE) maps from DCE-MRI and then performs texture features extraction from the PIE map, feature selection, and finally classification of tumors into ER positive and ER negative status. The combined gray level co-occurrence matrices, gray level run length matrices and local binary pattern histogram features allow quantification of breast tumor heterogeneity. The algorithm predicted ER expression with an accuracy of 85% using a Naive Bayes classifier in leave-one-out cross-validation. Hence, we conclude that our data supports the hypothesis that imaging characteristics can, through application of evolutionary principles, provide insights into the cellular and molecular properties of cancer cells.
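The texture-feature extraction and leave-one-out classification steps can be sketched as below using grey-level co-occurrence features and a Naive Bayes classifier; the quantisation level, the particular GLCM properties and the synthetic PIE maps are assumptions (the paper additionally uses run-length and local binary pattern features).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import LeaveOneOut, cross_val_score

def glcm_features(pie_map, levels=32):
    """A few co-occurrence descriptors of a quantised PIE-map slice."""
    q = (np.clip(pie_map, 0, 1) * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "homogeneity", "energy", "correlation")])

# Synthetic PIE maps for 20 tumours with binary ER status.
rng = np.random.default_rng(11)
maps = rng.random((20, 64, 64))
er_status = rng.integers(0, 2, size=20)
X = np.array([glcm_features(m) for m in maps])

acc = cross_val_score(GaussianNB(), X, er_status, cv=LeaveOneOut()).mean()
print("leave-one-out accuracy:", acc)
```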
Exploring perceptually similar cases with multi-dimensional scaling
Juan Wang, Yongyi Yang, Miles N. Wernick, et al.
Retrieving a set of known lesions similar to the one being evaluated might be of value for assisting radiologists to distinguish between benign and malignant clustered microcalcifications (MCs) in mammograms. In this work, we investigate how perceptually similar cases with clustered MCs may relate to one another in terms of their underlying characteristics (from disease condition to image features). We first conduct an observer study to collect similarity scores from a group of readers (five radiologists and five non-radiologists) on a set of 2,000 image pairs, which were selected from 222 cases based on their images features. We then explore the potential relationship among the different cases as revealed by their similarity ratings. We apply the multi-dimensional scaling (MDS) technique to embed all the cases in a 2-D plot, in which perceptually similar cases are placed in close vicinity of one another based on their level of similarity. Our results show that cases having different characteristics in their clustered MCs are accordingly placed in different regions in the plot. Moreover, cases of same pathology tend to be clustered together locally, and neighboring cases (which are more similar) tend to be also similar in their clustered MCs (e.g., cluster size and shape). These results indicate that subjective similarity ratings from the readers are well correlated with the image features of the underlying MCs of the cases, and that perceptually similar cases could be of diagnostic value for discriminating between malignant and benign cases.
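Embedding the cases from a matrix of averaged reader similarity scores is a direct application of metric MDS on the corresponding dissimilarities, as sketched below; the conversion 1 - similarity and the synthetic matrix are illustrative assumptions.

```python
import numpy as np
from sklearn.manifold import MDS

# Synthetic reader-similarity matrix for 10 cases (1 = identical, 0 = dissimilar).
rng = np.random.default_rng(2)
S = rng.random((10, 10))
S = (S + S.T) / 2.0
np.fill_diagonal(S, 1.0)

# MDS operates on dissimilarities, so convert the averaged similarity scores first.
D = 1.0 - S
embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(D)
print(embedding.shape)     # (10, 2): one point per case for the 2-D plot
```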
Application of computer-extracted breast tissue texture features in predicting false-positive recalls from screening mammography
Shonket Ray, Jae Y. Choi, Brad M. Keller, et al.
Mammographic texture features have been shown to have value in breast cancer risk assessment. Previous models have also been developed that use computer-extracted mammographic features of breast tissue complexity to predict the risk of false-positive (FP) recall from breast cancer screening with digital mammography. This work details a novel locally adaptive parenchymal texture analysis algorithm that identifies and extracts mammographic features of local parenchymal tissue complexity potentially relevant for false-positive biopsy prediction. This algorithm has two important aspects: (1) the adaptive nature of automatically determining an optimal number of regions of interest (ROIs) in the image, and each ROI's corresponding size, based on the parenchymal tissue distribution over the whole breast region and (2) characterizing both the local and global mammographic appearances of the parenchymal tissue, which could provide more discriminative information for FP biopsy risk prediction. Preliminary results show that this locally adaptive texture analysis algorithm, in conjunction with logistic regression, can predict the likelihood of false-positive biopsy with an ROC performance of AUC=0.92 (p<0.001) with a 95% confidence interval of [0.77, 0.94]. Significant texture feature predictors (p<0.05) included contrast, sum variance and difference average. Sensitivity for false-positives was 51% at the 100% cancer detection operating point. Although preliminary, clinical implications of using prediction models incorporating these texture features may include the future development of better tools and guidelines regarding personalized breast cancer screening recommendations. Further studies are warranted to prospectively validate our findings in larger screening populations and evaluate their clinical utility.
Chest-wall segmentation in automated 3D breast ultrasound images using thoracic volume classification
Tao Tan, Jan van Zelst, Wei Zhang, et al.
Computer-aided detection (CAD) systems are expected to improve the effectiveness and efficiency of radiologists in reading automated 3D breast ultrasound (ABUS) images. One challenging task in developing CAD is to reduce the large number of false positives. Many false positives originate from acoustic shadowing caused by ribs. Therefore, determining the location of the chest wall in ABUS is necessary in CAD systems to remove these false positives. Additionally, it can be used as an anatomical landmark for inter- and intra-modal image registration. In this work, we extended our previously developed chest-wall segmentation method, which fits a cylinder to automatically detected rib-surface points, by minimizing a cost function that incorporates a region-cost term computed from a thoracic volume classifier to improve segmentation accuracy. We examined the performance on a dataset of 52 images on which our previously developed method fails. Using the region-based cost, the average mean distance of the annotated points to the segmented chest wall decreased from 7.57±2.76 mm to 6.22±2.86 mm.
Classification of breast lesions presenting as mass and non-mass lesions
Cristina Gallego-Ortiz, Anne L. Martel
We aim to develop a CAD system for robust and reliable differential diagnosis of breast lesions, in particular non-mass lesions. A necessary prerequisite for the development of a successful CAD system is the selection of the best subset of lesion descriptors. An important methodological concern, however, is whether the selected features are influenced by the model employed rather than by the underlying characteristic distribution of descriptors for positive and negative cases. Another interesting question is how a particular classifier exploits the relationships between descriptors to increase the accuracy of the classification. In this work we set out to: (1) characterize kinetic, morphological and textural features among mass and non-mass lesions; (2) examine feature spaces and compare the selection of subsets of features based on the similarity of feature importance across feature rankings; (3) compare the performance of two classifiers, namely binary Support Vector Machines (SVM) and Random Forest (RF), for the task of differentiating between positive and negative cases when using binary classification for mass and non-mass lesions separately or when employing multi-class classification. The breast MRI dataset consists of 243 lesions (173 mass and 70 non-mass). Results show that RF variable importance used with RF binary classification optimized for mass and non-mass lesions separately offers the best classification accuracy.
Poster Session: Head, Neck, and Novel
Computer-based assessment of left ventricular regional ejection fraction in patients after myocardial infarction
S.-K. Teo, Y. Su, R.S. Tan, et al.
After myocardial infarction (MI), the left ventricle (LV) undergoes progressive remodeling which adversely affects heart function and may lead to the development of heart failure. There is an escalating need to accurately depict the LV remodeling process for disease surveillance and monitoring of therapeutic efficacy. The current practice of using ejection fraction to quantitate LV function is less than ideal as it obscures regional variation and anomaly. Therefore, we sought to (i) develop a quantitative method to assess LV regional ejection fraction (REF) using a 16-segment method, and (ii) evaluate the effectiveness of REF in discriminating 10 patients 1-3 months after MI from 9 normal controls (sex- and age-matched) based on cardiac magnetic resonance (CMR) imaging. Late gadolinium enhancement (LGE) CMR scans were also acquired for the MI patients to assess scar extent. We observed that the REF at the basal, mid-cavity and apical regions for the patient group is significantly lower compared to the control group (P < 0.001 using a two-tailed Student's t-test). In addition, we correlated the patient REF over these regions with the corresponding LGE score in terms of 4 categories: High LGE, Low LGE, Border and Remote. We observed that the median REF decreases with increasing severity of infarction. The results suggest that REF could potentially be used as a discriminator for MI and employed to measure myocardium homogeneity with respect to the degree of infarction. Computation per data sample took approximately 25 seconds, which demonstrates its clinical potential as a real-time cardiac assessment tool.
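Assuming the standard ejection-fraction definition applied per segment, REF for the 16-segment model reduces to the fractional volume change of each segmental blood pool between end-diastole and end-systole, as in the sketch below; the segmental volumes shown are purely illustrative.

```python
import numpy as np

def regional_ejection_fraction(edv_segments, esv_segments):
    """REF per 16-segment region: fractional volume change between
    end-diastole (EDV) and end-systole (ESV) for each segment.
    Inputs are length-16 arrays of segmental blood-pool volumes."""
    edv = np.asarray(edv_segments, dtype=float)
    esv = np.asarray(esv_segments, dtype=float)
    return (edv - esv) / edv

# Toy example: uniform segmental EDV, reduced emptying in two "infarcted" segments.
edv = np.full(16, 8.0)                 # ml per segment (illustrative)
esv = np.full(16, 4.0)
esv[[6, 7]] = 7.0                      # poor emptying in two segments
print(np.round(regional_ejection_fraction(edv, esv), 2))
```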
Toward early diagnosis of arteriosclerotic diseases: collaborative detection of carotid artery calcifications by computer and dentists on dental panoramic radiographs
Chisako Muramatsu, Ryo Takahashi, Takeshi Hara, et al.
Several studies have reported the presence of carotid artery calcifications (CACs) on dental panoramic radiographs (DPRs) as a possible sign of arteriosclerotic diseases. However, CACs are not easily visible at the common window level for dental examinations, and dentists, in general, are not looking for CACs. Computerized detection of CACs may help dentists refer patients at risk of arteriosclerotic diseases for a detailed examination at a medical clinic. A downside of our previous method was a relatively large number of false positives (FPs). In this study, we attempted to reduce FPs by including an additional feature and selecting effective features for the classifier. One hundred DPRs, including 34 cases with calcifications, were used. Initial candidates were detected by thresholding the output of a top-hat operation. For each candidate, 10 features and a new feature characterizing the relative position of a CAC with reference to the lower mandible edge were determined. After rule-based FP reduction, candidates were classified into CACs and FPs by a support vector machine. Based on leave-one-out cross-validation, the average number of FPs was 3.1 per image at 90.4% sensitivity using the seven selected features. Compared to our previous method, the number of FPs was reduced by 38% at the same sensitivity level. The proposed method has the potential to identify patients at risk of arteriosclerosis early through general dental examinations.
Automatic classification of schizophrenia using resting-state functional language network via an adaptive learning algorithm
A reliable and precise classification of schizophrenia is significant for its diagnosis and treatment. Functional magnetic resonance imaging (fMRI) is a novel tool increasingly used in schizophrenia research. Recent advances in statistical learning theory have led to pattern classification algorithms being applied to assess the diagnostic value of functional brain networks discovered from resting-state fMRI data. The aim of this study was to propose an adaptive learning algorithm to distinguish schizophrenia patients from normal controls using the resting-state functional language network. Furthermore, the classification of schizophrenia was here regarded as a sample selection problem, in which a sparse subset of samples is chosen from the labeled training set. Using these selected samples, which we call informative vectors, a classifier for the clinical diagnosis of schizophrenia was established. We experimentally demonstrated that the proposed algorithm incorporating the resting-state functional language network achieved 83.6% leave-one-out accuracy on resting-state fMRI data of 27 schizophrenia patients and 28 normal controls. In comparison with K-Nearest-Neighbor (KNN), Support Vector Machine (SVM) and l1-norm classifiers, our method yielded better classification performance. Moreover, our results suggest that dysfunction of the resting-state functional language network plays an important role in the clinical diagnosis of schizophrenia.
Accurate discrimination of Alzheimer's disease from other dementia and/or normal subjects using SPECT specific volume analysis
Hitoshi Iyatomi, Jun Hashimoto, Fumuhito Yoshii, et al.
Discrimination between Alzheimer's disease and other dementias is clinically significant; however, it is often difficult. In this study, we developed classification models among Alzheimer's disease (AD), other dementia (OD) and/or normal subjects (NC) using patient factors and indices obtained by brain perfusion SPECT. SPECT is commonly used to assess cerebral blood flow (CBF) and allows evaluation of the severity of hypoperfusion through statistical parametric mapping (SPM). We investigated a total of 150 cases (50 cases each for AD, OD, and NC) from Tokai University Hospital, Japan. In each case, we obtained a total of 127 candidate parameters from: (A) 2 patient factors (age and sex), (B) 12 CBF parameters, and 113 SPM parameters including (C) 3 from specific volume analysis (SVA) and (D) 110 from voxel-based stereotactic extraction estimation (vbSEE). We built linear classifiers with statistical stepwise feature selection and evaluated the performance with a leave-one-out cross-validation strategy. Our classifiers achieved very high classification performance with a reasonable number of selected parameters. In the most clinically significant discrimination, namely that of AD from OD, our classifier achieved both a sensitivity (SE) and a specificity (SP) of 96%. Similarly, our classifiers achieved an SE of 90% and an SP of 98% in discriminating AD from NC, as well as an SE of 88% and an SP of 86% in discriminating AD from OD and NC cases. Introducing SPM indices such as SVA and vbSEE improved classification performance by around 7-15%. We confirmed that these SPM factors are quite important for diagnosing Alzheimer's disease.
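A pipeline of this general form, a linear classifier wrapped around stepwise feature selection and scored by leave-one-out cross-validation, can be approximated with off-the-shelf components. The sketch below uses forward sequential selection and logistic regression as stand-ins (the study's exact stepwise criterion is not specified here), on a small synthetic dataset rather than the 127 SPECT/patient parameters.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Small synthetic stand-in for the candidate parameter set (scaled down to keep
# the nested leave-one-out / forward-selection loop fast).
X, y = make_classification(n_samples=40, n_features=20, n_informative=5,
                           random_state=0)

clf = make_pipeline(
    StandardScaler(),
    SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                              n_features_to_select=5, direction="forward", cv=5),
    LogisticRegression(max_iter=1000),
)

# Leave-one-out estimate of classification accuracy with selection inside the loop.
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"LOO accuracy: {acc:.2f}")
```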
Automatic pathology classification using a single feature machine learning - support vector machines
Fernando Yepes-Calderon, Fabian Pedregosa, Bertrand Thirion, et al.
Magnetic Resonance Imaging (MRI) has been gaining popularity in the clinic in recent years as a safe in-vivo imaging technique. As a result, large troves of data are being gathered and stored daily that may be used as clinical training sets in hospitals. While numerous machine learning (ML) algorithms have been implemented for Alzheimer's disease classification, their outputs are usually difficult to interpret in the clinical setting. Here, we propose a simple method of rapid diagnostic classification for the clinic using Support Vector Machines (SVM) [1] and easy-to-obtain geometrical measurements that, together with a cortical and sub-cortical brain parcellation, create a robust framework capable of automatic diagnosis with high accuracy. On a large imaging dataset consisting of over 800 subjects taken from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, classification-success indexes of up to 99.2% are reached with a single measurement.
MRI signal and texture features for the prediction of MCI to Alzheimer's disease progression
An early diagnosis of Alzheimer's disease (AD) confers many benefits. Several biomarkers from different information modalities have been proposed for the prediction of MCI-to-AD progression, and features extracted from MRI have played an important role. However, studies have focused almost exclusively on the morphological characteristics of the images. This study aims to determine whether features relating to the signal and texture of the image could add predictive power. Baseline clinical, biological and PET information, and MP-RAGE images for 62 subjects from the Alzheimer's Disease Neuroimaging Initiative were used in this study. Images were divided into 83 regions and 50 features were extracted from each of these. A multimodal database was constructed, and a feature selection algorithm was used to obtain an accurate and small logistic regression model, which achieved a cross-validation accuracy of 0.96. This model included six features, five of them obtained from the MP-RAGE image and one obtained from genotyping. A risk analysis divided the subjects into low-risk and high-risk groups according to a prognostic index, showing that the two groups are statistically different (p-value of 2.04e-11). The results demonstrate that MRI features related to both signal and texture add predictive power for MCI-to-AD progression, and support the idea that multimodal biomarkers outperform single-modality biomarkers.
Volume curtaining: a focus+context effect for multimodal volume visualization
Adam J. Fairfield, Jonathan Plasencia, Yun Jang, et al.
In surgical preparation, physicians will often utilize multimodal imaging scans to capture complementary information to improve diagnosis and to drive patient-specific treatment. These imaging scans may consist of data from magnetic resonance imaging (MR), computed tomography (CT), or other various sources. The challenge in using these different modalities is that the physician must mentally map the two modalities together during the diagnosis and planning phase. Furthermore, the different imaging modalities will be generated at various resolutions as well as slightly different orientations due to patient placement during scans. In this work, we present an interactive system for multimodal data fusion, analysis and visualization. Developed with partners from neurological clinics, this work discusses initial system requirements and physician feedback at the various stages of component development. Finally, we present a novel focus+context technique for the interactive exploration of coregistered multi-modal data.
Multilevel image recognition using discriminative patches and kernel covariance descriptor
Le Lu, Jianhua Yao, Evrim Turkbey, et al.
Computer-aided diagnosis of medical images has emerged as an important tool to objectively improve the performance, accuracy and consistency of clinical workflow. Computerizing the medical image diagnostic recognition problem involves three fundamental questions: where to look (i.e., where is the region of interest within the whole image/volume), how to describe/encode image features, and which similarity metrics to use for classification or matching. In this paper, we present the motivation, implementation and performance evaluation of task-driven iterative, discriminative image patch mining; a covariance-matrix-based descriptor built from intensity, gradient and spatial layout; and a log-Euclidean distance kernel for support vector machines, to address these three aspects respectively. To cope with the often visually ambiguous image patterns for the region of interest in medical diagnosis, discovery of multilabel selective discriminative patches is desired. The covariance of several image statistics summarizes their second-order interactions within an image patch and has proved to be an effective image descriptor, with low dimensionality compared with joint statistics and fast computation regardless of the patch size. We extensively evaluate two extended Gaussian kernels using the affine-invariant Riemannian metric or the log-Euclidean metric with support vector machines (SVM) on two medical image classification problems: degenerative disc disease (DDD) detection on cortical shell unwrapped CT maps and colitis detection on CT key images. The proposed approach is validated with promising quantitative results on these challenging tasks. Our experimental findings and discussion also unveil some interesting insights on the covariance feature composition with or without spatial layout for classification and retrieval, and on different kernel constructions for SVM. This will also shed some light on future work using covariance features and kernel classification for medical image analysis.
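The covariance descriptor and log-Euclidean kernel can be sketched compactly: each patch is summarized by the covariance matrix of per-pixel features (here intensity, gradient magnitudes and pixel coordinates as a stand-in for the spatial layout), and the kernel between two patches is a Gaussian in the log-Euclidean distance ||log(C1) - log(C2)||_F. The code below is a minimal, hedged illustration on synthetic patches, not the authors' implementation; the feature set, bandwidth and data are assumptions.

```python
import numpy as np
from scipy.linalg import logm
from sklearn.svm import SVC

def cov_descriptor(patch):
    """Covariance of per-pixel features (x, y, intensity, |dI/dx|, |dI/dy|)."""
    gy, gx = np.gradient(patch.astype(float))
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    feats = np.stack([xs, ys, patch, np.abs(gx), np.abs(gy)], -1).reshape(-1, 5)
    return np.cov(feats, rowvar=False) + 1e-6 * np.eye(5)     # regularize

def log_euclidean_kernel(covs_a, covs_b, sigma=1.0):
    """Gaussian kernel in the log-Euclidean distance between SPD matrices."""
    logs_a = np.array([logm(c).real.ravel() for c in covs_a])
    logs_b = np.array([logm(c).real.ravel() for c in covs_b])
    d2 = ((logs_a[:, None] - logs_b[None]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# Toy patches: smooth vs. noisy texture standing in for two classes.
rng = np.random.default_rng(0)
smooth = [np.outer(np.sin(np.linspace(0, 3, 32)), np.cos(np.linspace(0, 3, 32)))
          + 0.05 * rng.normal(size=(32, 32)) for _ in range(20)]
noisy = [rng.normal(size=(32, 32)) for _ in range(20)]
covs = [cov_descriptor(p) for p in smooth + noisy]
y = np.array([0] * 20 + [1] * 20)

K = log_euclidean_kernel(covs, covs)
clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```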
Surgical retained foreign object (RFO) prevention by computer aided detection (CAD)
Theodore C. Marentis, Lubomir Hadjiiyski, Amrita R. Chaudhury, et al.
Surgical Retained Foreign Objects (RFOs) cause significant morbidity and mortality. They are associated with $1.5 billion annually in preventable medical costs. The detection accuracy of radiographs for RFOs is a mediocre 59%. We address the RFO problem with two complementary technologies: a three dimensional (3D) Gossypiboma Micro Tag (μTa) that improves the visibility of RFOs on radiographs, and a Computer Aided Detection (CAD) system that detects the μTag. The 3D geometry of the μTag produces a similar 2D depiction on radiographs regardless of its orientation in the human body and ensures accurate detection by a radiologist and the CAD. We create a database of cadaveric radiographs with the μTag and other common man-made objects positioned randomly. We develop the CAD modules that include preprocessing, μTag enhancement, labeling, segmentation, feature analysis, classification and detection. The CAD can operate in a high specificity mode for the surgeon to allow for seamless workflow integration and function as a first reader. The CAD can also operate in a high sensitivity mode for the radiologist to ensure accurate detection. On a data set of 346 cadaveric radiographs, the CAD system performed at a high specificity (85.5% sensitivity, 0.02 FPs/image) for the OR and a high sensitivity (96% sensitivity, 0.73 FPs/image) for the radiologists.
Quantitative characterization of brain β-amyloid using a joint PiB/FDG PET image histogram
Jon J. Camp, Dennis P. Hanson, David R. Holmes III, et al.
A complex analysis performed by spatial registration of PiB and MRI patient images in order to localize the PiB signal to specific cortical brain regions has been proven effective in identifying imaging characteristics associated with underlying Alzheimer’s Disease (AD) and Lewy Body Disease (LBD) pathology. This paper presents an original method of image analysis and stratification of amyloid-related brain disease based on the global spatial correlation of PiB PET images with 18F-FDG PET images (without MR images) to categorize the PiB signal arising from the cortex. Rigid registration of PiB and 18F-FDG images is relatively straightforward, and in registration the 18F-FDG signal serves to identify the cortical region in which the PiB signal is relevant. Cortical grey matter demonstrates the highest levels of amyloid accumulation and therefore the greatest PiB signal related to amyloid pathology. The highest intensity voxels in the 18F-FDG image are attributed to the cortical grey matter. The correlation of the highest intensity PiB voxels with the highest 18F-FDG values indicates the presence of β-amyloid protein in the cortex in disease states, while correlation of the highest intensity PiB voxels with mid-range 18F-FDG values indicates only nonspecific binding in the white matter.
Texture descriptors to distinguish radiation necrosis from recurrent brain tumors on multi-parametric MRI
Pallavi Tiwari, Prateek Prasanna, Lisa Rogers, et al.
Differentiating radiation necrosis (a radiation-induced treatment effect) from recurrent brain tumors (rBT) is currently one of the most clinically challenging problems in the care and management of brain tumor (BT) patients. Both radiation necrosis (RN) and rBT exhibit similar morphological appearance on standard MRI, making non-invasive diagnosis extremely challenging for clinicians, with surgical intervention being the only course for obtaining definitive "ground truth". Recent studies have reported that the underlying biological pathways defining RN and rBT are fundamentally different. This strongly suggests that there might be phenotypic differences, and hence cues on multi-parametric MRI, that can distinguish between the two pathologies. One challenge is that these differences, if they exist, might be too subtle for the human observer to distinguish. In this work, we explore the utility of computer-extracted texture descriptors on multi-parametric MRI (MP-MRI) to provide alternate representations of MRI that may be capable of accentuating subtle micro-architectural differences between RN and rBT for primary and metastatic (MET) BT patients. We further explore the utility of texture descriptors in identifying the MRI protocol (from amongst T1-w, T2-w and FLAIR) that best distinguishes RN and rBT across two independent cohorts of primary and MET patients. A set of 119 texture descriptors (co-occurrence matrix homogeneity, neighboring gray-level dependence matrix, multi-scale Gaussian derivatives, Laws features, and histogram of gradient orientations (HoG)) was extracted for each MRI protocol to model different macro- and micro-scale morphologic changes within the treated lesion area. Principal component analysis based variable importance projection (PCA-VIP), a feature selection method previously developed in our group, was employed to identify the importance of every texture descriptor in distinguishing RN and rBT on MP-MRI. PCA-VIP employs regression analysis to provide an importance score to each feature based on its ability to distinguish the two classes (RN/rBT). The top-performing features identified via PCA-VIP were employed within a random-forest classifier to differentiate RN from rBT across two cohorts of 20 primary and 22 MET patients. Our results revealed that (a) HoG features at different orientations were the most important image features for both cohorts, suggesting inherent orientation differences between RN and rBT, (b) inverse difference moment (capturing local intensity homogeneity) and Laws features (capturing local edges and gradients) were identified as important for both cohorts, and (c) Gd-C T1-w MRI was identified, across the two cohorts, as the best MRI protocol for distinguishing RN from rBT.
Poster Session: Lung, Chest, and Abdomen
Efficient 3D texture feature extraction from CT images for computer-aided diagnosis of pulmonary nodules
Texture features from chest CT images for malignancy assessment of pulmonary nodules have become an important and efficient factor in computer-aided diagnosis (CADx). In this paper, we focus on extracting as few efficient texture features as needed, which can be combined with other classical features (e.g. size, shape, growth rate, etc.) to assist lung nodule diagnosis. Based on a typical texture feature calculation algorithm, namely Haralick features derived from gray-tone spatial-dependence matrices, we calculated two-dimensional (2D) and three-dimensional (3D) Haralick features from the CT images of 905 nodules. All of the CT images were downloaded from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), the largest public chest database. The 3D Haralick feature model with thirteen directions contains more information on the relationships between neighboring voxels across slices than the 2D features computed from only four directions. After comparing the efficiency of 2D and 3D Haralick features for nodule diagnosis, a principal component analysis (PCA) algorithm was used to extract as few efficient texture features as needed. To achieve an objective assessment of the texture features, a support vector machine classifier was trained and tested one hundred times, and the statistical results of the classification experiments were described by an average receiver operating characteristic (ROC) curve. The mean area under the ROC curves in our experiments (0.8776) shows that the two extracted 3D Haralick projected features have the potential to assist the classification of benign and malignant nodules.
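Haralick features are derived from gray-level co-occurrence matrices: four offsets in 2D and thirteen in 3D. A hedged sketch of the 2D part using scikit-image (the 3D extension simply adds more offsets), followed by PCA and an SVM as in the pipeline above; the patch data, quantization level and property subset are assumptions made for illustration only.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def haralick_2d(img, levels=32):
    """Contrast/homogeneity/energy/correlation over the four 2D directions."""
    q = (img * (levels - 1)).astype(np.uint8)            # quantize to `levels` bins
    glcm = graycomatrix(q, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

# Toy nodule patches: smooth blobs vs. speckled blobs as stand-ins for two classes.
rng = np.random.default_rng(1)
smooth = [np.clip(rng.normal(0.5, 0.05, (32, 32)), 0, 1) for _ in range(30)]
speckled = [np.clip(rng.normal(0.5, 0.25, (32, 32)), 0, 1) for _ in range(30)]
X = np.array([haralick_2d(p) for p in smooth + speckled])
y = np.array([0] * 30 + [1] * 30)

clf = make_pipeline(StandardScaler(), PCA(n_components=2), SVC())
print("training accuracy:", clf.fit(X, y).score(X, y))
```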
A novel computer-aided detection system for pulmonary nodule identification in CT images
Hao Han, Lihong Li, Huafeng Wang, et al.
Computer-aided detection (CADe) of pulmonary nodules from computed tomography (CT) scans is critical for assisting radiologists in identifying lung lesions at an early stage. In this paper, we propose a novel approach for CADe of lung nodules using a two-stage vector quantization (VQ) scheme. The first-stage VQ aims to extract the lungs from the chest volume, while the second-stage VQ is designed to extract initial nodule candidates (INCs) within the lung volume. Rule-based expert filtering is then employed to prune obvious FPs from the INCs, and the commonly used support vector machine (SVM) classifier is adopted to further reduce the FPs. The proposed system was validated on 100 CT scans randomly selected from the 262 scans that have at least one juxta-pleural nodule annotation in the publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) database. The two-stage VQ missed only 2 of the 207 nodules at agreement level 1, and INC detection for each scan took about 30 seconds on average. Expert filtering reduced FPs by a factor of more than 18 while maintaining a sensitivity of 93.24%. As it is trivial to distinguish INCs attached to the pleural wall from those that are not, we investigated the feasibility of training separate SVM classifiers to further reduce FPs from these two kinds of INCs. Experimental results indicated that SVM classification over the entire set of INCs was preferable, with the optimal operating point of our CADe system achieving a sensitivity of 89.4% at a specificity of 86.8%.
Comparison of biophysical factors influencing on emphysema quantification with low-dose CT
Emphysema index (EI) measurements in MDCT are known to be influenced by various biophysical factors such as total lung volume and body size. We investigated the association of four biophysical factors with the emphysema index in low-dose MDCT. In particular, we attempted to identify a potentially stronger biophysical factor than total lung volume. A total of 400 low-dose MDCT volumes acquired at 120 kVp, 40 mAs, 1 mm thickness, and B30f reconstruction kernel were used. The lungs, airways, and pulmonary vessels were automatically segmented, and two emphysema indices, the relative area below -950 HU (RA950) and the 15th percentile (Perc15), were extracted from the segmented lungs. The biophysical factors, namely total lung volume (TLV), mode of lung attenuation (ModLA), effective body diameter (EBD), and water-equivalent body diameter (WBD), were estimated from the segmented lung and body area. The association of the biophysical factors with the emphysema indices was evaluated by correlation coefficients. The mean emphysema indices were 8.3±5.5 (%) for RA950 and -930±18 (HU) for Perc15. The estimates of the biophysical factors were 4.7±1.0 (L) for TLV, -901±21 (HU) for ModLA, 26.9±2.2 (cm) for EBD, and 25.9±2.6 (cm) for WBD. The correlation coefficients of the biophysical factors with RA950 were 0.73 for TLV, 0.94 for ModLA, 0.31 for EBD, and 0.18 for WBD; those with Perc15 were 0.74 for TLV, 0.98 for ModLA, 0.29 for EBD, and 0.15 for WBD. The results revealed that two biophysical factors, TLV and ModLA, mostly affect the emphysema indices. In particular, ModLA exhibited the strongest correlation of 0.98 with Perc15, indicating that ModLA is the most significant confounding biophysical factor in emphysema index measurement.
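The two emphysema indices have direct definitions on the HU values inside the segmented lung: RA950 is the percentage of lung voxels below -950 HU, and Perc15 is the 15th percentile of the lung attenuation histogram. A minimal illustration on synthetic HU values (the numbers below are hypothetical, chosen only to resemble the attenuation range reported above):

```python
import numpy as np

def emphysema_indices(lung_hu):
    """lung_hu: 1D array of HU values for voxels inside the segmented lung."""
    ra950 = 100.0 * np.mean(lung_hu < -950)       # relative area below -950 HU (%)
    perc15 = np.percentile(lung_hu, 15)           # 15th percentile (HU)
    return ra950, perc15

# Hypothetical lung attenuation values centered near the mode of lung attenuation.
rng = np.random.default_rng(0)
lung_hu = rng.normal(-900, 40, size=200_000)
ra950, perc15 = emphysema_indices(lung_hu)
print(f"RA950 = {ra950:.1f}%, Perc15 = {perc15:.0f} HU")
```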
Microstructure analysis of the secondary pulmonary lobules by 3D synchrotron radiation CT
Recognition of abnormalities related to the lobular anatomy has become increasingly important in the diagnosis and differential diagnosis of lung abnormalities in routine clinical CT examinations. This paper aims at a 3-D microstructural analysis of the pulmonary acinus with isotropic spatial resolution in the range of several micrometers using micro CT. Previously, we demonstrated the ability of synchrotron radiation micro CT (SRμCT) using an offset scan mode in the microstructural analysis of the whole secondary pulmonary lobule. In this paper, we present a semiautomatic method to segment the acinar and subacinar airspaces from the secondary pulmonary lobule and to track small vessels running inside alveolar walls in the human acinus imaged by SRμCT. The method begins with segmentation of tissues such as the pleural surface, interlobular septa, alveolar walls, and vessels using a threshold technique and 3-D connected component analysis. 3-D airspaces separated by these tissues are then constructed, representing the branching patterns of airways and airspaces distal to the terminal bronchiole. A graph-partitioning approach isolates acini whose stems are interactively defined as the terminal bronchioles in the secondary pulmonary lobule. Finally, we perform vessel tracking using a non-linear state space which captures both the smoothness of the trajectories and intensity coherence along vessel orientations. Results demonstrate that the proposed method can extract several acinar airspaces from the 3-D SRμCT image of the secondary pulmonary lobule and that the extracted acinar airspaces enable an accurate quantitative description of the anatomy of the human acinus for interpretation of this basic unit of pulmonary structure and function.
Wavelet based rotation invariant texture feature for lung tissue classification and retrieval
Jatindra Kumar Dash, Sudipta Mukhopadhyay, Rahul Das Gupta, et al.
This paper evaluates the performance of a recently proposed rotation-invariant texture feature extraction method for the classification and retrieval of lung tissues affected by Interstitial Lung Diseases (ILDs). The method uses the principal texture direction as the reference direction and extracts texture features using the Discrete Wavelet Transform (DWT). A private database containing high-resolution computed tomography (HRCT) images belonging to five categories of lung tissue is used for the experiment. The experimental results show that the texture appearance of lung tissue is anisotropic in nature and hence rotation-invariant features achieve better retrieval as well as classification accuracy.
Effect of image variation on computer-aided detection systems
S. P. Rabbani, P. Maduskar, R. H. H. M. Philipsen, et al.
As the application of Computer Aided Detection (CAD) systems becomes increasingly important in medical imaging due to the advantages they offer, it is essential to know their weaknesses and to find proper solutions for them. A common practical problem that affects CAD system performance is that dissimilar training and testing datasets decrease their efficiency. In this paper, image normalization is proposed: three different normalization methods are applied to chest radiographs, namely (1) simple normalization, (2) local normalization, and (3) multi-band local normalization. The performance of a supervised lung segmentation CAD system is evaluated on chest radiographs normalized with these three methods in terms of the Jaccard index. In conclusion, normalization enhances the performance of the CAD system, and among the three methods, local normalization and multi-band local normalization improve performance more significantly than simple normalization.
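Of the three schemes, local normalization is the most easily summarized: each pixel is standardized by the mean and standard deviation of its neighbourhood, which can be computed with Gaussian filtering. The sketch below is a generic illustration of this idea, not the paper's implementation; the window scales and the "stack several scales" reading of the multi-band variant are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_normalize(img, sigma=16.0, eps=1e-6):
    """Zero-mean, unit-variance normalization within a Gaussian-weighted window."""
    img = img.astype(float)
    local_mean = gaussian_filter(img, sigma)
    local_var = gaussian_filter(img ** 2, sigma) - local_mean ** 2
    return (img - local_mean) / np.sqrt(np.maximum(local_var, eps))

def multi_band_local_normalize(img, sigmas=(4.0, 16.0, 64.0)):
    """Hypothetical multi-band variant: stack local normalizations at several scales."""
    return np.stack([local_normalize(img, s) for s in sigmas], axis=-1)

chest = np.random.default_rng(0).normal(size=(256, 256))    # stand-in radiograph
print(multi_band_local_normalize(chest).shape)               # (256, 256, 3)
```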
3D mapping of airway wall thickening in asthma with MSCT: a level set approach
Catalin Fetita, Pierre-Yves Brillet, Ruth Hartley, et al.
Assessing airway wall thickness in multi-slice computed tomography (MSCT) as an image marker for airway disease phenotyping, such as asthma and COPD, is a current trend and challenge for the scientific community working in lung imaging. This paper addresses the same problem from a different point of view: considering the expected wall thickness-to-lumen-radius ratio for a normal subject as known and constant throughout the whole airway tree, the aim is to build a 3D map of airway wall regions of larger thickness and to define an overall score able to highlight a pathological status. In this respect, the local dimension (caliber) of the previously segmented airway lumen is obtained at each point by exploiting the granulometry morphological operator. A level set function is defined based on this caliber information and on the expected wall thickness ratio, which allows a good estimate of the airway wall to be obtained throughout all segmented lumen generations. Next, the vascular (or mediastinal dense tissue) contact regions are automatically detected and excluded from the analysis. For the remaining airway wall border points, the real wall thickness is estimated based on tissue density analysis in the airway radial direction; thick wall points are highlighted on a 3D representation of the airways and several quantification scores are defined. The proposed approach is fully automatic and was evaluated (proof of concept) on a patient selection coming from different databases including mild and severe asthmatics and normal cases. This preliminary evaluation confirms the discriminative power of the proposed approach regarding different phenotypes and is currently being extended to larger cohorts.
3D intrathoracic region definition and its application to PET-CT analysis
Ronnarit Cheirsilp, Rebecca Bascom, Thomas W. Allen, et al.
Recently developed integrated PET-CT scanners give co-registered multimodal data sets that offer complementary three-dimensional (3D) digital images of the chest. PET (positron emission tomography) imaging gives highly specific functional information on suspect cancer sites, while CT (X-ray computed tomography) gives associated anatomical detail. Because the 3D CT and PET scans generally span the body from the eyes to the knees, accurate definition of the intrathoracic region is vital for focusing attention on the central-chest region. In this way, diagnostically important regions of interest (ROIs), such as central-chest lymph nodes and cancer nodules, can be more efficiently isolated. We propose a method for automatic segmentation of the intrathoracic region from a given co-registered 3D PET-CT study. Using the 3D CT scan as input, the method begins by finding an initial intrathoracic region boundary for a given 2D CT section. Next, active contour analysis, driven by a cost function depending on local image gradient, gradient-direction, and contour shape features, iteratively estimates the contours spanning the intrathoracic region on neighboring 2D CT sections. This process continues until the complete region is defined. We next present an interactive system that employs the segmentation method for focused 3D PET-CT chest image analysis. A validation study over a series of PET-CT studies reveals that the segmentation method gives a Dice index accuracy of no less than 98%. In addition, further results demonstrate the utility of the method for focused 3D PET-CT chest image analysis, ROI definition, and visualization.
Lung texture classification using bag of visual words
Marina Asherov, Idit Diamant, Hayit Greenspan
Interstitial lung diseases (ILD) refer to a group of more than 150 parenchymal lung disorders. High-Resolution Computed Tomography (HRCT) is the most essential imaging modality for ILD diagnosis. Nonetheless, classification of the various lung tissue patterns caused by ILD is still regarded as a challenging task. The current study focuses on the classification of the five most common categories of ILD lung tissue in HRCT images: normal, emphysema, ground glass, fibrosis and micronodules. The objective of the research is to classify an expert-given annotated region of interest (AROI) using a bag of visual words (BoVW) framework. The images are divided into small patches and a collection of representative patches is defined as visual words. This procedure, termed dictionary construction, is performed for each individual lung texture category. The assumption is that different lung textures are represented by different visual word distributions. The classification is performed using an SVM classifier with a histogram intersection kernel. In the experiments, we use a dataset of 1018 AROIs from 95 patients with leave-one-patient-out cross validation (LOPO CV). The current classification accuracy obtained is close to 80%.
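The bag-of-visual-words pipeline clusters patches into a dictionary, represents each region by its visual-word histogram, and classifies the histograms with an SVM using a histogram intersection kernel. A hedged sketch on random textures follows; unlike the per-category dictionaries described above, it uses a single shared dictionary, and the patch size, dictionary size and data are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def extract_patches(image, size=8, step=8):
    h, w = image.shape
    return np.array([image[i:i + size, j:j + size].ravel()
                     for i in range(0, h - size + 1, step)
                     for j in range(0, w - size + 1, step)])

def bovw_histogram(image, kmeans):
    words = kmeans.predict(extract_patches(image))
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()

def intersection_kernel(A, B):
    return np.minimum(A[:, None, :], B[None, :, :]).sum(-1)

# Toy AROIs: two noise textures standing in for two lung-tissue classes.
rng = np.random.default_rng(0)
rois = [rng.normal(0, s, (64, 64)) for s in [0.2] * 25 + [1.0] * 25]
y = np.array([0] * 25 + [1] * 25)

kmeans = KMeans(n_clusters=30, n_init=10, random_state=0)
kmeans.fit(np.vstack([extract_patches(r) for r in rois[::2]]))   # dictionary
H = np.array([bovw_histogram(r, kmeans) for r in rois])

clf = SVC(kernel="precomputed").fit(intersection_kernel(H, H), y)
print("training accuracy:", clf.score(intersection_kernel(H, H), y))
```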
Automated segmentation of murine lung tumors in x-ray micro-CT images
Joshua K. Y. Swee, Clare Sheridan, Elza de Bruin, et al.
Recent years have seen micro-CT emerge as a means of providing imaging analysis in pre-clinical studies, with in-vivo micro-CT shown to be particularly applicable to the examination of murine lung tumors. Despite this, existing studies have involved substantial human intervention during the image analysis process, with the use of fully automated aids found to be almost non-existent. We present a new approach to automate the segmentation of murine lung tumors, designed specifically for in-vivo micro-CT-based pre-clinical lung cancer studies, that addresses the specific requirements of such studies as well as the limitations human-centric segmentation approaches experience when applied to such micro-CT data. Our approach consists of three distinct stages and begins by utilizing edge-enhancing and vessel-enhancing non-linear anisotropic diffusion filters to extract anatomy masks (lung/vessel structure) in a pre-processing stage. Initial candidate detection is then performed through ROI reduction utilizing the obtained masks and a two-step automated segmentation approach that aims to extract all disconnected objects within the ROI, consisting of Otsu thresholding, mathematical morphology and marker-driven watershed. False positive reduction is finally performed on the initial candidates through random-forest-driven classification using the shape, intensity, and spatial features of the candidates. We provide validation of our approach using data from an associated lung cancer study, showing favorable results both in terms of detection (sensitivity=86%, specificity=89%) and structural recovery (Dice similarity=0.88) when compared against manual specialist annotation.
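The two-step candidate segmentation inside the ROI (Otsu thresholding, mathematical morphology, marker-driven watershed) maps closely onto standard scikit-image components. The sketch below is a generic illustration of that chain on a synthetic blob image, not the authors' pipeline; the morphological element and marker spacing are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, disk
from skimage.segmentation import watershed

def candidate_segmentation(roi):
    """Otsu threshold -> morphological cleanup -> marker-driven watershed."""
    mask = binary_opening(roi > threshold_otsu(roi), disk(2))   # remove small spurs
    distance = ndi.distance_transform_edt(mask)
    coords = peak_local_max(distance, labels=mask, min_distance=10)
    peaks = np.zeros_like(mask, dtype=bool)
    peaks[tuple(coords.T)] = True
    markers, _ = ndi.label(peaks)
    return watershed(-distance, markers, mask=mask)             # one label per candidate

# Synthetic ROI with two bright blobs standing in for tumour candidates.
yy, xx = np.mgrid[0:128, 0:128]
roi = (np.exp(-((yy - 40) ** 2 + (xx - 40) ** 2) / 200.0)
       + np.exp(-((yy - 90) ** 2 + (xx - 85) ** 2) / 300.0))
roi += 0.05 * np.random.default_rng(0).normal(size=roi.shape)
labels = candidate_segmentation(roi)
print("candidates found:", labels.max())
```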
Longitudinal follow-up study of smoking-induced emphysema progression in low-dose CT screening of lung cancer
Chronic obstructive pulmonary disease is a major public health problem that is predicted to be the third leading cause of death in 2030. Although spirometry is traditionally used to quantify emphysema progression, it is difficult to detect the loss of pulmonary function caused by emphysema at an early stage and to assess susceptibility to smoking. This study presents a quantification method for smoking-induced emphysema progression based on annual changes in low attenuation volume (LAV) for each lung lobe, acquired from low-dose CT images in lung cancer screening. The method consists of three steps. First, lung lobes are segmented using interlobar fissures extracted by an enhancement filter based on four-dimensional curvature. Second, the LAV of each lung lobe is segmented. Finally, smoking-induced emphysema progression is assessed by statistical analysis of the annual changes, represented by linear regression of the LAV percentage in each lung lobe. This method was applied to 140 participants in lung cancer CT screening over six years. The results showed that the LAV progression of nonsmokers, past smokers, and current smokers differs in terms of pack-years and smoking cessation duration. This study demonstrates the method's effectiveness for diagnosis and prognosis of early emphysema in lung cancer CT screening.
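The progression measure is simply the slope of a per-lobe linear regression of LAV percentage against time. A minimal numpy illustration with hypothetical annual measurements (the values below are invented, not study data):

```python
import numpy as np

# Hypothetical LAV% of one lung lobe over six annual screening rounds.
years = np.arange(6)
lav_percent = np.array([3.1, 3.4, 3.9, 4.1, 4.6, 5.0])

slope, intercept = np.polyfit(years, lav_percent, deg=1)
print(f"estimated emphysema progression: {slope:.2f} LAV% per year")
```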
Potential usefulness of a topic model-based categorization of lung cancers as quantitative CT biomarkers for predicting the recurrence risk after curative resection
Y. Kawata, N. Niki, H. Ohmatsu, et al.
In this work, we investigate the potential usefulness of a topic model-based categorization of lung cancers as a quantitative CT biomarker for predicting the recurrence risk after curative resection. Elucidating the subcategorization of pulmonary nodule types in CT images is an important preliminary step towards developing nodule management that is specific to each patient. We categorize lung cancers by analyzing volumetric distributions of CT values within lung cancers via a topic model such as latent Dirichlet allocation. By applying our scheme to 3D CT images of non-small-cell lung cancer (maximum lesion size of 3 cm), we demonstrate the potential usefulness of the topic model-based categorization of lung cancers as a quantitative CT biomarker.
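Treating each lesion's histogram of CT values as a "document" of quantized-intensity "words" makes latent Dirichlet allocation directly applicable. The sketch below illustrates that mapping with scikit-learn on synthetic histograms; the bin count, topic count and data are assumptions, and the exact categorization rule (here: dominant topic) is only one plausible reading of the scheme described above.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Each row: voxel counts in 20 CT-value bins for one hypothetical lesion.
solid = rng.poisson(lam=np.linspace(1, 40, 20), size=(30, 20))
ground_glass = rng.poisson(lam=np.linspace(40, 1, 20), size=(30, 20))
X = np.vstack([solid, ground_glass])

lda = LatentDirichletAllocation(n_components=3, random_state=0)
theta = lda.fit_transform(X)          # per-lesion topic proportions

# The topic proportions serve as the quantitative descriptor; e.g. lesions
# can be categorized by their dominant topic.
print(np.bincount(theta.argmax(axis=1)))
```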
Computerized organ localization in abdominal CT volume with context-driven generalized Hough transform
Jing Liu, Qiang Li
Fast localization of organs is a key step in computer-aided detection of lesions and in image-guided radiation therapy. We developed a context-driven Generalized Hough Transform (GHT) for robust localization of organs of interest (OOIs) in a CT volume. Conventional GHT locates the center of an organ by looking up the center locations of pre-learned organs with "matching" edges. It often suffers from mislocalization because "similar" edges in the vicinity may attract the pre-learned organ towards the wrong place. The proposed method not only uses information from the organ's own shape but also takes advantage of nearby "similar" edge structures. First, multiple GHT co-existing look-up tables (cLUTs) were constructed from a set of training shapes of different organs. Each cLUT represented the spatial relationship between the center of the OOI and the shape of a co-existing organ. Second, the OOI center in a test image was determined using GHT with each cLUT separately. Third, the final localization of the OOI was based on a weighted combination of the centers obtained in the second stage. The training set consisted of 10 CT volumes with manually segmented OOIs including the liver, spleen and kidneys. The method was tested on a set of 25 abdominal CT scans. Context-driven GHT correctly located all OOIs in the test images and gave localization errors of 19.5±9.0, 12.8±7.3, 9.4±4.6 and 8.6±4.1 mm for the liver, spleen, left and right kidney, respectively. Conventional GHT mislocated 8 out of 100 organs, and its localization errors were 26.0±32.6, 14.1±10.6, 30.1±42.6 and 23.6±39.7 mm for the liver, spleen, left and right kidney, respectively.
Segmentation of urinary bladder in CT urography (CTU) using CLASS with enhanced contour conjoint procedure
We are developing a computerized method for bladder segmentation in CT urography (CTU) for computer-aided diagnosis of bladder cancer. A challenge for computerized bladder segmentation in CTU is that the bladder often contains regions filled with intravenous (IV) contrast and regions without contrast. Previously, we proposed a Conjoint Level set Analysis and Segmentation System (CLASS) consisting of four stages: preprocessing and initial segmentation, 3D and 2D level set segmentation, and post-processing. In cases where the bladder is partially filled with contrast, CLASS segments the non-contrast (NC) region and the contrast-filled (C) region separately and conjoins the contours with a Contour Conjoint Procedure (CCP). The CCP is not trivial: inaccuracies in the NC and C contours may cause the CCP to exclude portions of the bladder. To alleviate this problem, we implemented model-guided refinement to propagate the C contour if the level set propagation in the region stops prematurely due to substantial non-uniformity of the contrast. An enhanced CCP with regularized energies further propagates the conjoint contours to the correct bladder boundary. Segmentation performance was evaluated using 70 cases. For all cases, 3D hand-segmented contours were obtained as the reference standard, and computerized segmentation accuracy was evaluated in terms of average volume intersection %, average % volume error, and average minimum distance. With the enhanced CCP, these values were 84.4±10.6%, 8.3±16.1%, and 3.4±1.8 mm, respectively; with CLASS, they were 74.6±13.1%, 19.6±18.6%, and 4.4±2.2 mm, respectively. The enhanced CCP improved bladder segmentation significantly (p<0.001) for all three performance measures.
Level-set based free fluid segmentation with improved initialization using region growing in 3D ultrasound sonography
Dae Hoe Kim, Konstantinos N. Plataniotis, Yong Man Ro
In this study, a new free fluid segmentation method is proposed, aiming to increase segmentation accuracy for free fluids while, at the same time, decreasing processing time regardless of the accuracy of the initial seeds. In order to segment free fluid regions quickly and accurately, we propose a free fluid segmentation based on a Chan-Vese level set (CVLS) with improved initialization using minimum variance region growing (MVRG). The proposed method is devised to exploit the complementary strengths of both methods. In experiments, the effectiveness of the proposed method is demonstrated on 3D US volumes in terms of Dice's coefficient, volume difference, Hausdorff distance and processing time. Results show that the proposed method outperforms CVLS and MVRG alone in terms of processing time as well as segmentation accuracy.
Performance of an automated renal segmentation algorithm based on morphological erosion and connectivity
Benjamin Abiri, Brian Park, Hersh Chandarana, et al.
The precision, accuracy, and efficiency of a novel semi-automated segmentation technique for VIBE MRI sequences were analyzed using clinical datasets. Two observers performed whole-kidney segmentation using EdgeWave software based on constrained morphological growth, with an average inter-observer disagreement of 2.7% for whole-kidney volume, 2.1% for cortex, and 4.1% for medulla. Ground truths were prepared by constructing ROIs on individual slices, revealing errors of 2.8%, 3.1%, and 3.6%, respectively. It took approximately 7 minutes to perform one segmentation. These improvements over our existing graph-cuts segmentation technique make kidney volumetry a reality in many clinical applications.
COMPASS-based ureter segmentation in CT urography (CTU)
David Zick, Lubomir Hadjiiyski, Heang-Ping Chan, et al.
We are developing a computerized system for automated segmentation of ureters in CT urography (CTU), referred to as the COmbined Model-guided Path-finding Analysis and Segmentation System (COMPASS). Ureter segmentation is a critical component of computer-aided diagnosis of ureter cancer. A challenge for ureter segmentation is the presence of regions not well opacified with intravenous (IV) contrast. COMPASS consists of three stages: (1) adaptive thresholding and region growing, (2) path-finding and propagation, and (3) edge profile extraction and feature analysis. One hundred fourteen ureters in 74 CTU scans with IV contrast were collected from 74 patient files. On average, the ureters spanned 283 CT slices (range: 116 to 399, median: 301). More than half of the ureters contained malignant or benign lesions and some had ureter wall thickening due to malignancy. A starting point for each of the 114 ureters was selected manually to initialize the tracking by COMPASS. Path-finding and segmentation were guided by anatomical knowledge of ureters in CTU. Segmentation performance was quantitatively assessed by estimating the percentage of the length that was successfully tracked and segmented for each ureter. Of the 114 ureters, 110 (96%) were segmented completely (100%), 111 (97%) were segmented through at least 70% of their length, and 113 (99%) were segmented through at least 50%. In comparison, using our previous method, 79 (69%) ureters were segmented completely (100%), 92 (81%) were segmented through at least 70% of their length, and 98 (86%) were segmented through at least 50%. COMPASS significantly improved ureter tracking, including in regions across ureter lesions, wall thickening and narrowing of the lumen.
Ultrasound based computer-aided-diagnosis of kidneys for pediatric hydronephrosis
Juan J. Cerrolaza, Craig A. Peters, Aaron D. Martin, et al.
Ultrasound is the mainstay of imaging for pediatric hydronephrosis, though its potential as a diagnostic tool is limited by its subjective assessment and lack of correlation with renal function. Therefore, all cases showing signs of hydronephrosis undergo further invasive studies, such as diuretic renography, in order to assess actual renal function. Under the hypothesis that renal morphology is correlated with renal function, a new ultrasound-based computer-aided diagnosis (CAD) tool for pediatric hydronephrosis is presented. From 2D ultrasound, a novel set of morphological features of the renal collecting systems and the parenchyma is automatically extracted using image analysis techniques. From the original set of features, including size, geometric and curvature descriptors, a subset of ten features is selected as predictive variables by combining a feature selection technique and area-under-the-curve filtering. Using the washout half time (T1/2) as an indicator of renal obstruction, two groups are defined: cases whose T1/2 is above 30 minutes are considered severe, while the rest fall in the safety zone, where diuretic renography could be avoided. Two classification techniques are evaluated (logistic regression and support vector machines). Adjusting the probability decision thresholds to operate at the point of maximum sensitivity, i.e., preventing any severe case from being misclassified, specificities of 53% and 75% are achieved for the logistic regression and support vector machine classifiers, respectively. The proposed CAD system allows a link to be established between non-invasive, non-ionizing imaging and renal function, limiting the need for invasive and ionizing diuretic renography.
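Operating at the point of maximum sensitivity simply means lowering the decision threshold until every severe case is captured and then reading off the specificity achieved there. A minimal sketch of that threshold adjustment with a logistic-regression score is shown below; the data are synthetic and evaluated on the training set for brevity, so it only illustrates the mechanics, not the study's protocol.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in: y = 1 marks a "severe" case (T1/2 above 30 minutes).
X, y = make_classification(n_samples=120, n_features=10, weights=[0.6, 0.4],
                           random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)
scores = clf.predict_proba(X)[:, 1]

# Lower the threshold until no severe case is missed (sensitivity = 100%),
# then report the specificity at that operating point.
threshold = scores[y == 1].min()
pred = scores >= threshold
specificity = np.mean(pred[y == 0] == 0)
print(f"threshold={threshold:.3f}, specificity={specificity:.2f}")
```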
Automated abdominal lymph node segmentation based on RST analysis and SVM
Yukitaka Nimura, Yuichiro Hayashi, Takayuki Kitasaka, et al.
This paper describes a segmentation method for abdominal lymph nodes (LNs) using radial structure tensor (RST) analysis and a support vector machine. LN analysis is a crucial part of lymphadenectomy, a surgical procedure in which one or more LNs are removed in order to evaluate them for the presence of cancer. Several methods for automated LN detection and segmentation have been proposed; however, they produce many false positives (FPs). The proposed method consists of LN candidate segmentation and FP reduction. LN candidates are extracted using RST analysis at each voxel of the CT scan. RST analysis can discriminate between different local intensity structures without being influenced by surrounding structures. In the FP reduction process, we eliminate FPs using a support vector machine with shape and intensity information of the LN candidates. The experimental results reveal that the sensitivity of the proposed method was 82.0% with 21.6 FPs/case.
A universal approach for automatic organ segmentations on 3D CT images based on organ localization and 3D GrabCut
Xiangrong Zhou, Takaaki Ito, Xinxin Zhou, et al.
This paper describes a universal approach to automatic segmentation of different internal organ and tissue regions in three-dimensional (3D) computed tomography (CT) scans. The proposed approach combines object localization, a probabilistic atlas, and 3D GrabCut techniques to achieve automatic and quick segmentation. The proposed method first detects a tight 3D bounding box that contains the target organ region in the CT images and then estimates the prior probability of each pixel inside the bounding box belonging to the organ region or background based on a dynamically generated probabilistic atlas. Finally, the target organ region is separated from the background by using an improved 3D GrabCut algorithm. A machine-learning method is used to train a detector that localizes the 3D bounding box of the target organ using template matching on a selected feature space. A content-based image retrieval method is used for online generation of a patient-specific probabilistic atlas for the target organ based on a database. A 3D GrabCut algorithm is used for final organ segmentation by iteratively estimating the CT number distributions of the target organ and background using a graph-cuts algorithm. We applied this approach to localize and segment twelve major organ and tissue regions independently based on a database of 1300 torso CT scans. In our experiments, we randomly selected numerous CT scans and manually delineated nine principal types of internal organ regions for performance evaluation. Preliminary results showed the feasibility and efficiency of the proposed approach for addressing automatic organ segmentation on CT images.
Poster Session: Prostate and Colon
A novel colonic polyp volume segmentation method for computer tomographic colonography
Huafeng Wang, Lihong C. Li, Hao Han, et al.
Colorectal cancer is the third most common type of cancer. However, this disease can be prevented by detection and removal of precursor adenomatous polyps after diagnosis by experts on computed tomographic colonography (CTC). During CTC diagnosis, the radiologist looks for colon polyps and assesses not only their size but also their malignancy. It is widely accepted that segmenting polyp volumes from their complicated growing environment is of great significance for accomplishing CTC-based early diagnosis. Previously, polyp volumes have mainly been obtained from manual or semi-automatic drawing by radiologists. As a result, some deviation cannot be avoided, since the polyps are usually small (6-9 mm) and radiologists' experience and knowledge vary from one to another. In order to achieve automatic polyp segmentation by machine, we propose a new method based on a colon decomposition strategy. We evaluated our algorithm on both phantom and patient data. Experimental results demonstrate that our approach is capable of segmenting small polyps from their complicated growing background.
Progressive region-based colon extraction for computer-aided detection and quantitative imaging in cathartic and non-cathartic CT colonography
Automated colon extraction is an important first step for applications of computer-aided detection (CADe) and quantitative imaging in computed tomographic colonography (CTC). However, previously developed colon extraction algorithms have various limitations. We developed a new fully automated progressive region-based (PRB) method to extract the complete region of the colon from CTC images while minimizing the presence of extra-colonic components. The method can be used to provide the target region for CADe as well as to provide quantitative imaging information about the colon. In the method, extra-colonic components are excluded from the abdominal region by anatomy-based extraction methods; visible lumen regions of the colon and small bowel are decomposed into material-based subregions; a colon pathway is tracked from the anus to the cecum by algorithms of progressively increasing complexity using anatomy-based landmarks, segmental features, and region-based tracking; and the extracted lumen region is refined into a complete lumen by a discrete level-set algorithm. The method was tested with 15 challenging cathartic and non-cathartic fecal-tagging CTC cases. Preliminary results indicate that the PRB method can outperform our previously developed centerline-based tracking method in colon extraction.
GISentinel: a software platform for automatic ulcer detection on capsule endoscopy videos
Steven Yi, Heng Jiao, Fan Meng, et al.
In this paper, we present a novel and clinically valuable software platform for automatic ulcer detection in the gastrointestinal (GI) tract from Capsule Endoscopy (CE) videos. Typical CE videos take about 8 hours and have to be reviewed manually by physicians to detect and locate diseases such as ulcers and bleeding. The process is time consuming and, because of the long manual review, findings are easily missed. Working with our collaborators, we focused on developing a software platform called GISentinel for fully automated GI tract ulcer detection and classification. The software consists of three parts: frequency-based Log-Gabor filter extraction of regions of interest (ROI), a unique feature selection and validation method (e.g. illumination-invariant features, color-independent features, and symmetrical texture features), and cascade SVM classification for handling "ulcer vs. non-ulcer" cases. In our experiments, the software gave decent results: frame-wise, the ulcer detection rate was 69.65% (319/458); instance-wise, the ulcer detection rate was 82.35% (28/34), with a false alarm rate of 16.43% (34/207). This work is part of our innovative 2D/3D-based GI tract disease detection software platform, whose final goal is to intelligently find and classify major GI tract diseases, such as bleeding, ulcers, and polyps, from CE videos. This paper mainly describes the automatic ulcer detection module.
Poster Session: Vessels, Heart, and Eye
Retinal image quality assessment using generic features
Mahnaz Fasih, J.M. Pierre Langlois, Houssem Ben Tahar, et al.
Retinal image quality assessment is an important step in automated eye disease diagnosis. Diagnosis accuracy is highly dependent on the quality of retinal images, because poor image quality might prevent the observation of significant eye features and disease manifestations. A robust algorithm is therefore required in order to evaluate the quality of images in a large database. We developed an algorithm for retinal image quality assessment based on generic features that is independent of segmentation methods. It exploits local sharpness and texture features by applying the cumulative probability of blur detection metric and a run-length encoding algorithm, respectively. The quality features are combined to evaluate the image's suitability for diagnostic purposes. Based on the recommendations of medical experts and our experience, we compared a global and a local approach. A support vector machine with radial basis functions was used as a nonlinear classifier in order to classify images into gradable and ungradable groups. We applied our methodology to 65 images of size 2592×1944 pixels that had been graded by a medical expert. The expert evaluated 38 images as gradable and 27 as ungradable. The results indicate very good agreement between the proposed algorithm's predictions and the medical expert's judgment: the sensitivity and specificity for the local approach are 92% and 94%, respectively. The algorithm demonstrates sufficient robustness to identify relevant images for automated diagnosis.
A boosted optimal linear learner for retinal vessel segmentation
E. Poletti, E. Grisan
Ocular fundus images provide important information about retinal degeneration, which may be related to acute pathologies or to early signs of systemic diseases. An automatic and quantitative assessment of vessel morphological features, such as diameter and tortuosity, can improve clinical diagnosis and the evaluation of retinopathy. In contrast with available methods, we propose a data-driven approach in which the system learns a set of optimal discriminative convolution kernels (linear learners). The set is progressively built based on an AdaBoost sample weighting scheme, providing seamless integration between linear learner estimation and classification. In order to capture the changes in vessel appearance at different scales, the kernels are estimated on a pyramidal decomposition of the training samples. The set is employed as a rotating bank of matched filters, whose response is used by the boosted linear classifier to classify each image pixel into the two classes of interest (vessel/background). We tested the approach on fundus images available from the DRIVE dataset. We show that the segmentation performance yields an accuracy of 0.94.
Glaucoma detection based on local binary patterns in fundus photographs
Maya Alsheh Ali, Thomas Hurtut, Timothée Faucon, et al.
Glaucoma, a group of diseases that lead to optic neuropathy, is one of the most common causes of blindness worldwide. Glaucoma rarely causes symptoms until the later stages of the disease. Early detection of glaucoma is very important to prevent visual loss, since optic nerve damage cannot be reversed. For detecting glaucoma, purely data-driven techniques have advantages, especially when the disease characteristics are complex and when precise image-based measurements are difficult to obtain. In this paper, we present our preliminary study of glaucoma detection using an automatic method based on local texture features extracted from fundus photographs. It implements the completed model of Local Binary Patterns to capture representative texture features from the whole image. A local region is represented by three operators: its central pixel (LBPC) and its local differences as two complementary components, the sign (which is the classical LBP) and the magnitude (LBPM). An image texture is finally described by both the distribution of LBP and the joint distribution of LBPM and LBPC. Our images are then classified using a nearest-neighbor method with a leave-one-out validation strategy. On a sample set of 41 fundus images (13 glaucomatous, 28 non-glaucomatous), our method achieves a 95.1% success rate with a specificity of 92.3% and a sensitivity of 96.4%. This study proposes a reproducible glaucoma detection process that could be used in low-cost medical screening, thus avoiding the inter-expert variability issue.
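The sign component of the completed model is the classical LBP available in scikit-image. The sketch below shows only that basic pipeline (LBP histogram plus 1-nearest-neighbor classification with leave-one-out), leaving out the magnitude and center components for brevity; the toy textures and class sizes are assumptions for illustration, not fundus data.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import local_binary_pattern
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def lbp_histogram(img, p=8, r=1):
    """Histogram of uniform LBP codes (the 'sign' component of the completed model)."""
    codes = local_binary_pattern(img, P=p, R=r, method="uniform")
    hist, _ = np.histogram(codes, bins=np.arange(p + 3), density=True)
    return hist

# Toy stand-ins: smoother textures vs. rougher textures for the two classes.
rng = np.random.default_rng(0)
class_a = [gaussian_filter(rng.normal(size=(64, 64)), 2.0) for _ in range(13)]
class_b = [rng.normal(size=(64, 64)) for _ in range(28)]
X = np.array([lbp_histogram(i) for i in class_a + class_b])
y = np.array([1] * 13 + [0] * 28)

nn = KNeighborsClassifier(n_neighbors=1)
print("leave-one-out accuracy:", cross_val_score(nn, X, y, cv=LeaveOneOut()).mean())
```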
Automatic multiresolution age-related macular degeneration detection from fundus images
Mickaël Garnier, Thomas Hurtut, Houssem Ben Tahar, et al.
Age-related Macular Degeneration (AMD) is a leading cause of legal blindness. As the disease progresses, visual loss occurs rapidly, therefore early diagnosis is required for timely treatment. Automatic, fast and robust screening of this widespread disease should allow earlier detection. Most of the automatic diagnosis methods in the literature are based on a complex segmentation of the drusen, targeting a specific symptom of the disease. In this paper, we present a preliminary study for AMD detection from color fundus photographs using multiresolution texture analysis. We analyze the texture at several scales by using a wavelet decomposition in order to identify all the relevant texture patterns. Textural information is captured using both the sign and magnitude components of the completed model of Local Binary Patterns. An image is finally described by the textural pattern distributions of the wavelet coefficient images obtained at each level of decomposition. We use Linear Discriminant Analysis for feature dimension reduction, to avoid the curse of dimensionality, and for image classification. Experiments were conducted on a dataset containing 45 images (23 healthy and 22 diseased) of variable quality captured with different cameras. Our method achieved a recognition rate of 93.3%, with a specificity of 95.5% and a sensitivity of 91.3%. This approach shows promising results at low cost that are in agreement with medical experts' assessments, as well as robustness to both image quality and fundus camera model.
Preliminary study on differentiation between glaucomatous and non-glaucomatous eyes on stereo fundus images using cup gradient models
Chisako Muramatsu, Yuji Hatanaka, Kyoko Ishida, et al.
Glaucoma is one of the leading causes of blindness in Japan and the US. One of the indices for diagnosis of glaucoma is the cup-to-disc ratio (CDR). We have been developing a computerized method for measuring the CDR on stereo fundus photographs. Although our previous study indicated that the method may be useful, cup determination was not always successful, especially for normal eyes. In this study, we investigated a new method to quantify the likelihood of a glaucomatous disc based on similarity scores to glaucoma and non-glaucoma models. Eighty-seven images, including 40 glaucomatous eyes, were used in this study. Only one eye from each patient was used. Using a stereo fundus camera, two images were captured from different angles, and a depth image was created by finding the local corresponding points. A glaucomatous disc is characterized not only by an enlarged cup but also by a steep cup slope, whereas a non-glaucomatous cup generally has a gentle slope. Therefore, our models were constructed by averaging the depth gradient images. In order to account for disc size, the disc outline was automatically detected, and all images were registered by warping the disc outline to a circle with a predetermined diameter using thin plate splines. Similarity scores were determined by multiplying a test case with both models. At a sensitivity of 90.0%, the specificity was improved from 83.0% using the CDR to 97.9% by the model-based method. The proposed method may be useful for the differentiation of glaucomatous eyes.
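The model-matching step can be read as a correlation of a registered depth-gradient image with averaged glaucoma and non-glaucoma templates; the sketch below illustrates that idea only, with `glaucoma_gradients`, `normal_gradients` and `test_gradient` assumed to be pre-registered depth-gradient images.

```python
# Illustrative sketch of template construction and similarity scoring (assumptions only).
import numpy as np

def build_model(gradient_images):
    """Average registered depth-gradient images into a template."""
    return np.mean(np.stack(gradient_images, axis=0), axis=0)

def similarity(test_gradient, model):
    """Element-wise product summed over the disc region (a correlation-like score)."""
    return float(np.sum(test_gradient * model))

glaucoma_model = build_model(glaucoma_gradients)    # assumed pre-registered training images
normal_model = build_model(normal_gradients)

score = similarity(test_gradient, glaucoma_model) - similarity(test_gradient, normal_model)
is_glaucoma_suspect = score > 0.0                   # threshold chosen on training data in practice
```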
From medical imaging to computer simulation of fractional flow reserve in four coronary artery trees
Simone Melchionna, Stefania Fortini, Massimo Bernaschi, et al.
We present the results of a computational study of coronary trees obtained from CT acquisitions at a resolution of 0.35 mm x 0.35 mm x 0.4 mm and presenting significant stenotic plaques. We analyze the cardiovascular implications of stenotic plaques for a sizeable number of patients and show that the standard clinical criterion for surgical or percutaneous intervention, based on the Fractional Flow Reserve (FFR), is well reproduced by simulations in a range of inflow conditions that can be finely controlled. The relevance of the present study lies in the reproducibility of FFR data by simulating the coronary trees at the global level via high-performance simulation methods, together with an independent assessment based on in vitro hemodynamics. The data show that controlling the flow Reynolds number is a viable procedure to account for FFR as heart-cycle time averages at maximal hyperemia, as measured in vivo. The reproducibility of the clinical data with simulation offers a systematic approach to measuring the functional implications of stenotic plaques.
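For reference, FFR is conventionally defined as the ratio of the cycle-averaged pressure distal to the stenosis to the proximal (aortic) pressure at maximal hyperemia. A minimal sketch of that ratio, assuming simulated pressure traces `p_distal` and `p_aortic` sampled over one heart cycle, is shown below; the intervention cut-off is the commonly cited clinical value, not a number taken from this paper.

```python
# Illustrative FFR computation from simulated pressure traces (not the authors' solver).
import numpy as np

def fractional_flow_reserve(p_distal, p_aortic):
    """FFR as the ratio of cycle-averaged pressures distal and proximal to the stenosis."""
    return np.mean(p_distal) / np.mean(p_aortic)

ffr = fractional_flow_reserve(p_distal, p_aortic)   # 'p_distal', 'p_aortic' assumed 1-D arrays
needs_intervention = ffr <= 0.80                    # commonly cited clinical cut-off
```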
Learning-based automatic detection of severe coronary stenoses in CT angiographies
Imen Melki, Cyril Cardon, Nicolas Gogin, et al.
3D cardiac computed tomography angiography (CCTA) is becoming a standard routine for non-invasive heart disease diagnosis. Thanks to its high negative predictive value, CCTA is increasingly used to decide whether or not the patient should be considered for invasive angiography. However, an accurate assessment of cardiac lesions using this modality is still a time-consuming task and needs a high degree of clinical expertise. Thus, providing an automatic tool to assist clinicians during the diagnosis task is highly desirable. In this work, we propose a fully automatic approach for the accurate detection of severe cardiac stenoses. Our algorithm uses Random Forest classification to detect stenotic areas. First, the classifier is trained on 18 cardiac CT exams with a CTA reference standard. Then, the classification result is used to detect severe stenoses (with a narrowing degree higher than 50%) in a database of 30 cardiac CT exams. Features that best capture the different stenosis configurations are extracted along the vessel centerlines at different scales. To ensure accuracy despite changes in vessel direction and scale, we extract features inside cylindrical patterns with variable directions and radii. Thus, we make sure that the ROIs contain only the vessel walls. The algorithm is evaluated using the Rotterdam Coronary Artery Stenoses Detection and Quantification Evaluation Framework. The evaluation is performed using reference standard quantifications obtained from quantitative coronary angiography (QCA) and consensus reading of CTA. The obtained results show that we can reliably detect severe stenoses with a sensitivity of 64%.
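As an illustration of the classification step only (feature extraction along the centerlines is not shown), a Random Forest could be applied as sketched below; `X_train`, `y_train` and `X_test` are assumed feature matrices and labels, not the authors' data structures.

```python
# Hedged sketch of Random Forest classification of centerline samples.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)            # features from cylindrical patterns, labels from CTA reading

# Probability of "severe stenosis" (>50% narrowing) for each centerline sample;
# consecutive positive samples would then be grouped into candidate lesions.
proba = rf.predict_proba(X_test)[:, 1]
candidate_samples = np.flatnonzero(proba > 0.5)
```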
Time-resolved volumetric MRI blood flow: a Doppler ultrasound perspective
Roy van Pelt, Javier Oliván Bescós, Eike Nagel, et al.
Hemodynamic information is increasingly inspected to assess cardiovascular disease. Abnormal blood-flow patterns include high-speed jet flow and regurgitant flow. Such pathological blood-flow patterns are nowadays mostly inspected by means of color Doppler ultrasound imaging. To date, Doppler ultrasound has been the prevailing modality for blood-flow analysis, providing non-invasive and cost-effective blood-flow imaging. In recent years, magnetic resonance imaging (MRI) has increasingly been employed to measure time-resolved blood-flow data. Albeit more expensive, MRI enables volumetric velocity encoding, providing true vector-valued data with less noise. Domain experts in the fields of ultrasound and MRI have extensive experience in the interpretation of blood-flow information, although they employ different analysis techniques. We devise a visualization framework that builds on common Doppler ultrasound visualizations, exploiting the added value of MRI velocity data and aiming for synergy between the domain experts. Our framework enables experts to explore the advantages and disadvantages of the current renditions of their imaging data. Furthermore, it facilitates the transition from conventional Doppler ultrasound images to present-day high-dimensional velocity fields. To this end, we present a virtual probe that enables direct exploration of MRI-acquired blood-flow velocity data using user-friendly interactions. Based on the probe, Doppler ultrasound inspired visualizations convey both in-plane and through-plane blood-flow velocities. In a compound view, these two-dimensional visualizations are linked to state-of-the-art three-dimensional blood-flow visualizations. Additionally, we introduce a novel volume rendering of the blood-flow velocity data that emphasizes anomalous blood-flow patterns. The visualization framework was evaluated by domain experts, and we present their feedback.
Poster Session: Musculoskeletal and Miscellaneous
Description of patellar movement by 3D parameters obtained from dynamic CT acquisition
Marina de Sá Rebelo, Ramon Alfredo Moreno, Riccardo Gomes Gobbi, et al.
The patellofemoral joint is critical in the biomechanics of the knee. Patellofemoral instability is a condition that generates pain and functional impairment and often requires surgery as part of orthopedic treatment. The analysis of patellofemoral dynamics has been performed with several medical imaging modalities. The clinical parameters assessed are mainly based on 2D measurements, such as the patellar tilt angle and the lateral shift, among others. Moreover, the acquisition protocols are mostly performed with the leg held static at fixed angles. The use of a helical multi-slice CT scanner allows the capture and display of the joint's movement performed actively by the patient. However, the orthopedic applications of this scanner have not yet been standardized or become widespread. In this work we present a method to evaluate the biomechanics of the patellofemoral joint during active contraction using multi-slice CT images. This approach can greatly improve the analysis of patellar instability by displaying the physiology during muscle contraction. The movement was evaluated by computing its 3D displacements and rotations between different knee angles. The first processing step registered the images at both angles based on the femur's position. The transformation matrix of the patella between the images was then calculated, which provided the rotations and translations performed by the patella from its position in the first image to its position in the second image. Analysis of these parameters for all frames provided real 3D information about the patellar displacement.
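A minimal sketch of how 3D rotations and translations can be recovered from a patellar transformation matrix, assuming hypothetical 4x4 rigid transforms `T_femur` (femur-based registration between the two knee angles) and `T_patella_raw` (patella mapping between the same images):

```python
# Sketch of decomposing a rigid transform into rotations and translations (assumptions only).
import numpy as np
from scipy.spatial.transform import Rotation

def decompose_rigid(T):
    """Split a 4x4 homogeneous transform into Euler angles (degrees) and translation (mm)."""
    R = T[:3, :3]
    t = T[:3, 3]
    angles = Rotation.from_matrix(R).as_euler("xyz", degrees=True)
    return angles, t

# Composing with the inverse femur registration isolates patellar motion relative to the femur.
T_patella = np.linalg.inv(T_femur) @ T_patella_raw
rotations_deg, translation_mm = decompose_rigid(T_patella)
```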
Wide association study of radiological features that predict future knee OA pain: data from the OAI
In this work, a case-control study was performed using available data from participants in the OAI database. All case and control subjects initially presented no evidence of pain, no medication for pain, and no symptomatic status; case subjects developed pain at some time point during the study. Radiological information was evaluated with a quantitative and a semi-quantitative score by OAI radiologist groups. The multivariate models obtained in the bioinformatics study suggest that an association may exist between some of the early joint changes and the presence of future pain.
Vertebral degenerative disc disease severity evaluation using random forest classification
Hector E. Munoz, Jianhua Yao, Joseph E. Burns, et al.
Degenerative disc disease (DDD) develops in the spine as vertebral discs degenerate and osseous excrescences or outgrowths naturally form to restabilize unstable segments of the spine. These osseous excrescences, or osteophytes, may progress or stabilize in size as the spine reaches a new equilibrium point. We have previously created a CAD system that detects DDD. This paper presents a new system to determine the severity of DDD of individual vertebral levels. This will be useful to monitor the progress of developing DDD, as rapid growth may indicate that there is a greater stabilization problem that should be addressed. The existing DDD CAD system extracts the spine from CT images and segments the cortical shell of individual levels with a dual-surface model. The cortical shell is unwrapped, and is analyzed to detect the hyperdense regions of DDD. Three radiologists scored the severity of DDD of each disc space of 46 CT scans. Radiologists’ scores and features generated from CAD detections were used to train a random forest classifier. The classifier then assessed the severity of DDD at each vertebral disc level. The agreement between the computer severity score and the average radiologist’s score had a quadratic weighted Cohen’s kappa of 0.64.
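The reported agreement measure can be computed as sketched below, assuming hypothetical arrays `computer_scores` (one ordinal severity score per disc level) and `radiologist_scores` (scores from the three readers, one row per reader):

```python
# Sketch of the quadratic weighted Cohen's kappa between CAD and averaged radiologist scores.
import numpy as np
from sklearn.metrics import cohen_kappa_score

comp = np.asarray(computer_scores, dtype=int)
rad = np.rint(np.mean(radiologist_scores, axis=0)).astype(int)   # average of the three readers

kappa = cohen_kappa_score(comp, rad, weights="quadratic")
print(f"Quadratic weighted kappa: {kappa:.2f}")                  # the paper reports 0.64
```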
Registration and color calibration for dermoscopy images in time-course analysis
Daiji Furusho, Hitoshi Iyatomi
Since melanomas grow and metastasize rapidly, the variation in their appearance is much larger than that of nevi. If the variation of a skin tumor can be evaluated quantitatively, it is of substantial help not only for clinical diagnosis but also for the development of computer-based diagnostic systems. However, the photographic conditions of skin tumors are in most cases not uniform during follow-up. In this study, we proposed a fully automated image registration and color calibration method between dermoscopy images in time-course analysis. Our proposed algorithm aligned the time-course images with a precision of 91.6 ± 5.1% and a recall of 95.7 ± 5.9%, whereas fully manual registration with Exif data as a performance reference achieved 95.4 ± 3.2% and 92.4 ± 6.5%, respectively. Our color calibration method largely reduced the color difference ΔE between time-course images from 10.9 ± 5.6 to 3.9 ± 1.7. These results showed that the proposed method was effective in compensating for both geometrical and chronological changes between dermoscopy images in time-course analysis.
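The color-difference measure ΔE used above is typically computed in CIELAB space; a minimal sketch, assuming `patch_a` and `patch_b` are corresponding RGB patches with values in [0, 1]:

```python
# Sketch of a mean ΔE (CIE76) colour difference between two corresponding patches.
from skimage.color import rgb2lab, deltaE_cie76

lab_a = rgb2lab(patch_a)                     # convert RGB patches to CIELAB
lab_b = rgb2lab(patch_b)
mean_delta_e = deltaE_cie76(lab_a, lab_b).mean()   # average ΔE over the region
```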
Towards quantitative assessment of calciphylaxis
Calciphylaxis is a rare and devastating disease associated with high morbidity and mortality. It is characterized by systemic medial calcification of the arteries, yielding necrotic skin ulcerations. In this paper, we aim at supporting the establishment of multi-center registries for calciphylaxis, which include photographic documentation of the skin necrosis. However, photographs acquired in different centers under different conditions using different equipment and photographers cannot be compared quantitatively. For normalization, we use a simple color pad that is placed into the field of view, segmented from the image, and whose color fields are analyzed. In total, 24 colors are printed on the pad. A least-squares approach is used to determine an affine color transform. Furthermore, the pad allows scale normalization. We provide a case study for qualitative assessment. In addition, the method is evaluated quantitatively using 10 images of two sets of different captures of the same necrosis. The variability of quantitative measurements based on free-hand photography is assessed with respect to geometric and color distortions before and after our simple calibration procedure. Using automated image processing, the standard deviation of the measurements is significantly reduced. The coefficients of variation are 5-20% and 2-10% for geometry and color, respectively. Hence, quantitative assessment of calciphylaxis becomes practicable and will support a better understanding of this rare but fatal disease.
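A hedged sketch of the least-squares affine color normalization from the 24-field pad, assuming `measured` and `reference` are (24, 3) arrays of measured and reference pad colors in 8-bit RGB:

```python
# Sketch: solve for a 3-channel affine colour map from pad measurements and apply it per pixel.
import numpy as np

ones = np.ones((measured.shape[0], 1))
A = np.hstack([measured, ones])                     # (24, 4) design matrix
M, *_ = np.linalg.lstsq(A, reference, rcond=None)   # (4, 3) affine colour transform

def calibrate(image_rgb):
    """Apply the affine colour transform to every pixel of an HxWx3 8-bit image."""
    h, w, _ = image_rgb.shape
    flat = np.hstack([image_rgb.reshape(-1, 3).astype(float), np.ones((h * w, 1))])
    return np.clip(flat @ M, 0, 255).reshape(h, w, 3)
```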
Towards robust identification and tracking of nevi in sparse photographic time series
Jakob Vogel, Alexandru Duliu, Yuji Oyamada, et al.
In dermatology, photographic imagery is acquired in large volumes in order to monitor the progress of diseases, especially melanocytic skin cancers. For this purpose, overview (macro) images are taken of the region of interest and used as a reference map to re-localize highly magnified images of individual lesions. The latter are then used for diagnosis. These pictures are acquired at irregular intervals under only partially constrained circumstances, where patient positions as well as camera positions are not reliable. In the presence of a large number of nevi, correct identification of the same nevus in a series of such images is thus a time-consuming task with ample chances for error. This paper introduces a method for largely automatic and simultaneous identification of nevi in different images, thus allowing the tracking of a single nevus over time, as well as pattern evaluation. The method uses a rotation-invariant feature descriptor based on the local neighborhood of a nevus; the texture, size and shape of the nevus itself are not used, as these can change over time, especially in the case of a malignancy. We then use the Random Walks framework to compute correspondences based on probabilities derived from comparing the feature vectors. Evaluation is performed on synthetic and patient data at the university clinic.
Evaluation of a computer-aided skin cancer diagnosis system for conventional digital photography with manual segmentation
Adam Huang, Wen-Yu Chang, Cheng-Han Hsieh, et al.
We evaluate a computer-aided diagnosis (CADx) system developed for both melanocytic and non-melanocytic skin lesions using conventional digital photographs with lesion boundaries manually marked by a dermatologist. Clinical images of skin lesions taken by conventional digital cameras can capture useful information such as shape, color, and texture for diagnosing skin cancer. However, shape/border features are difficult to analyze automatically because skin surface reflections may change skin color and make segmentation a challenging task. In this study, two non-medical users manually mark the boundaries of a dataset of 769 (174 malignant, 595 benign) conventional photographs of melanocytic and non-melanocytic skin lesions. A state-of-the-art software system for segmenting color images, JSEG, is also tested on the same dataset. Their results are compared to a dermatologist's markings, which are used as the gold standard in this study. The human users' markings are relatively close to the gold standard, achieving overlapping rates of 70.4% (±15.3%, std) and 74.5% (±14.7%, std). Compared to the human users, JSEG only succeeds in segmenting 636 (82.7%) out of 769 lesions and achieves an overlapping rate of 72.4% (±20.4%) for these 636 lesions. The estimated areas under the receiver operating characteristic curve (AUC) of the CADx using the lesion boundary markings of user 1, user 2, and JSEG are 0.915, 0.940, and 0.857, respectively. Our preliminary results indicate that manual segmentation can be repeated relatively consistently compared to automatic segmentation.
Computer aided diagnosis of diabetic peripheral neuropathy
Viktor Chekh, Peter Soliz, Elizabeth McGrew, et al.
Diabetic peripheral neuropathy (DPN) refers to the nerve damage that can occur in diabetes patients. It most often affects the extremities, such as the feet, and can lead to peripheral vascular disease, deformity, infection, ulceration, and even amputation. The key to managing the diabetic foot is prevention and early detection. Unfortunately, existing diagnostic techniques are mostly based on patient sensations and exhibit significant inter- and intra-observer differences. We have developed a computer-aided diagnostic (CAD) system for diabetic peripheral neuropathy. The thermal response of the feet of diabetic patients following a cold stimulus is captured using an infrared camera. The plantar foot in the images from the thermal video is segmented and registered in order to track points or specific regions. The temperature recovery of each point on the plantar foot is extracted using our bio-thermal model and analyzed. Regions that exhibit an abnormal ability to recover are automatically identified to help physicians recognize problematic areas. The key to our CAD system is the segmentation of the infrared video. The main challenges in segmenting infrared video compared to normal digital video are that (1) as the foot warms up, it also warms up its surroundings, creating an ever-changing contrast; and (2) there may be significant motion during imaging. To overcome this, a hybrid segmentation algorithm was developed based on a number of techniques such as continuous max-flow, model-based segmentation, shape preservation, convex hull, and temperature normalization. Verification of the automatic segmentation and registration using manual segmentation and markers shows good agreement.
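The temperature-recovery analysis can be illustrated with a generic exponential recovery model (the paper uses its own bio-thermal model); `t` and `temps` are assumed time stamps and temperatures for one tracked plantar point, and the abnormality threshold is purely illustrative.

```python
# Sketch of fitting an exponential temperature-recovery curve after cold stimulus.
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, T_base, dT, tau):
    """Exponential return towards baseline temperature with time constant tau (seconds)."""
    return T_base - dT * np.exp(-t / tau)

# 't' in seconds, 'temps' in degrees Celsius for one tracked plantar point (assumed inputs).
params, _ = curve_fit(recovery, t, temps, p0=[temps[-1], temps[-1] - temps[0], 60.0])
T_base, dT, tau = params
slow_recovery = tau > 120.0    # illustrative threshold flagging an abnormal region
```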
An automatic early stage alveolar-bone-resorption evaluation method on digital dental panoramic radiographs
Periodontal disease is a common dental disease that affects many adults. The presence of alveolar bone resorption, which can be observed on dental panoramic radiographs, is one of the most important signs of the progression of periodontal disease. Automatically evaluating alveolar bone resorption is of important clinical meaning in dental radiology. The purpose of this study was to propose, for the first time, a system for automated alveolar-bone-resorption evaluation from digital dental panoramic radiographs. The proposed system enables visualization and quantitative evaluation of the degree of alveolar bone resorption surrounding the teeth. It has the following procedures: (1) pre-processing of a test image; (2) detection of tooth root apices with a Gabor filter and curve fitting for the root apex line; (3) detection of features related to the alveolar bone by using an image phase congruency map and template matching, and curve fitting for the alveolar line; (4) detection of the occlusion line with a selected Gabor filter; (5) finally, evaluation of the quantitative alveolar-bone-resorption degree in the area surrounding the teeth by simply computing the average ratio of the height of the alveolar bone to the height of the teeth. The proposed scheme was applied to 30 patient cases of digital panoramic radiographs with alveolar bone resorption of different stages. Our initial trial on these test cases indicates that the quantitative evaluation results are correlated with the alveolar-bone-resorption degree, although the performance still needs further improvement. The method therefore has potential clinical practicability.
A method for automatic liver segmentation from multi-phase contrast-enhanced CT images
Rong Yuan, Ming Luo, Shaofa Wang, et al.
Liver segmentation is a basic and indispensable function in computer-aided liver surgery systems for volume calculation, operation planning and risk evaluation. Traditional manual segmentation is very time consuming because of the complicated contours of the liver and the large number of images. To increase the efficiency of clinical work, in this paper a fully automatic method is proposed to segment the liver from multi-phase contrast-enhanced computed tomography (CT) images. Building on an advanced region-growing method, we apply various pre- and post-processing steps to obtain better segmentations from the different phases. Fifteen sets of clinical abdominal CT images from five patients were segmented by our algorithm; the results were evaluated by an experienced surgeon and found acceptable. The running time is about 30 seconds for single-phase data comprising more than 200 slices.
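A minimal 2D region-growing sketch is given below for illustration; the paper's method operates in 3D with phase-specific pre- and post-processing, and the seed point and intensity tolerance here are assumptions.

```python
# Sketch of basic intensity-based region growing from a seed point (illustrative only).
import numpy as np
from collections import deque

def region_grow(img, seed, tol=30):
    """Grow a region from 'seed' while intensities stay within 'tol' HU of the seed value."""
    mask = np.zeros(img.shape, dtype=bool)
    queue = deque([seed])
    seed_val = float(img[seed])
    while queue:
        y, x = queue.popleft()
        if mask[y, x] or abs(float(img[y, x]) - seed_val) > tol:
            continue
        mask[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):     # 4-connected neighbours
            ny, nx = y + dy, x + dx
            if 0 <= ny < img.shape[0] and 0 <= nx < img.shape[1] and not mask[ny, nx]:
                queue.append((ny, nx))
    return mask
```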
Classification of weak specular reflections in laparoscopic images
Bidisha Chakraborty, Jan Marek Marcinczak, Rolf-Rainer Grigat
Specular reflections are present in the majority of laparoscopic videos. If not considered, they will affect all further image analysis and registration algorithms. In most state-of-the-art algorithms, segmentation of specular reflections is done by intensity thresholding; however, while strong reflections are detected, weak reflections are missed. The proposed method automatically detects the contour boundaries belonging to specular reflections with an SVM classifier. The algorithm improves the detection of small, weak reflections by training on contours of specular reflections with a combination of intensity and shape descriptors. Segmentation of contours is done by intensity thresholding and morphological operations. A comparative analysis of the proposed method with existing methods is presented. The ground truth for the test images is manually labeled for evaluation. The database contains 1012 specular reflections in 184 images taken from 42 patients. This method improves the detection sensitivity by 15% for weak reflections and by 7% for all reflections compared to the best known method.
Active shape models incorporating isolated landmarks for medical image annotation
Tobias Norajitra, Hans-Peter Meinzer, Bram Stieltjes, et al.
Apart from their robustness in anatomic surface segmentation, purely surface-based 3D Active Shape Models lack the ability to automatically detect and annotate non-surface key points of interest. However, annotation of anatomic landmarks is desirable, as it yields additional anatomic and functional information. Moreover, landmark detection might help to further improve accuracy during ASM segmentation. We present an extension of surface-based 3D Active Shape Models incorporating isolated non-surface landmarks. Positions of isolated and surface landmarks are modeled jointly within a point distribution model (PDM). Isolated landmark appearance is described by a set of Haar-like features, supporting local landmark detection on the PDM estimates using a kNN classifier. Landmark detection was evaluated in a leave-one-out cross-validation on a reference dataset comprising 45 CT volumes of the human liver after shape space projection. Depending on the anatomical landmark to be detected, our experiments showed a significant improvement in detection accuracy compared to the position estimates delivered by the PDM in about one quarter to more than one half of all test cases. Our results encourage further research on the combination of shape priors and machine learning for landmark detection within the Active Shape Model framework.
Texture feature based liver lesion classification
Yeela Doron, Nitzan Mayer-Wolf, Idit Diamant, et al.
Liver lesion classification is a difficult clinical task. Computerized analysis can support the clinical workflow by enabling more objective and reproducible evaluation. In this paper, we evaluate the contribution of several types of texture features to a computer-aided diagnostic (CAD) system which automatically classifies liver lesions from CT images. Based on the assumption that liver lesions of various classes differ in their texture characteristics, a variety of texture features were examined as lesion descriptors. Although texture features are often used for this task, there is currently a lack of detailed research comparing different texture features, or their combinations, on a given dataset. In this work we investigated the performance of the Gray Level Co-occurrence Matrix (GLCM), Local Binary Patterns (LBP), Gabor features, gray level intensity values and Gabor-based LBP (GLBP), where the features are obtained from a given lesion's region of interest (ROI). For the classification module, SVM and KNN classifiers were examined. Using a single type of texture feature, the best result, 91% accuracy, was obtained with Gabor filtering and SVM classification. Combining Gabor, LBP and intensity features improved the results to a final accuracy of 97%.
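As an example of one of the compared feature families, GLCM statistics can be extracted from a lesion ROI and classified with an SVM as sketched below (not the authors' parameter choices); `rois` are assumed uint8 grayscale patches and `labels` their lesion classes (scikit-image >= 0.19 naming).

```python
# Sketch of GLCM texture features from lesion ROIs with SVM classification.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def glcm_features(roi):
    """Contrast, homogeneity, energy and correlation over four directions (roi: uint8)."""
    glcm = graycomatrix(roi, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

X = np.array([glcm_features(r) for r in rois])          # 'rois', 'labels' assumed
accuracy = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean()
```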
Automatic seed selection for segmentation of liver cirrhosis in laparoscopic sequences
Rahul Sinha, Jan Marek Marcinczak, Rolf-Rainer Grigat
For computer aided diagnosis based on laparoscopic sequences, image segmentation is one of the basic steps which define the success of all further processing. However, many image segmentation algorithms require prior knowledge which is given by interaction with the clinician. We propose an automatic seed selection algorithm for segmentation of liver cirrhosis in laparoscopic sequences which assigns each pixel a probability of being cirrhotic liver tissue or background tissue. Our approach is based on a trained classifier using SIFT and RGB features with PCA. Due to the unique illumination conditions in laparoscopic sequences of the liver, a very low dimensional feature space can be used for classification via logistic regression. The methodology is evaluated on 718 cirrhotic liver and background patches that are taken from laparoscopic sequences of 7 patients. Using a linear classifier we achieve a precision of 91% in a leave-one-patient-out cross-validation. Furthermore, we demonstrate that with logistic probability estimates, seeds with high certainty of being cirrhotic liver tissue can be obtained. For example, our precision of liver seeds increases to 98.5% if only seeds with more than 95% probability of being liver are used. Finally, these automatically selected seeds can be used as priors in Graph Cuts which is demonstrated in this paper.
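The seed-selection idea can be sketched as a low-dimensional pipeline of PCA and logistic regression with a high-probability threshold; `X_train`, `y_train` and `X_new` are assumed feature matrices built from SIFT and RGB descriptors, not the authors' data structures.

```python
# Sketch of probabilistic seed selection: PCA + logistic regression + certainty threshold.
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

model = make_pipeline(PCA(n_components=10), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)                 # 1 = cirrhotic liver, 0 = background

proba = model.predict_proba(X_new)[:, 1]    # per-pixel probability of being liver tissue
seed_mask = proba > 0.95                    # keep only high-certainty liver seeds
```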
Infective endocarditis detection through SPECT/CT images digital processing
Albino Moreno, Raquel Valdés, Luis Jiménez, et al.
Infective endocarditis (IE) is a difficult-to-diagnose pathology, since its manifestation in patients is highly variable. In this work, we propose a semiautomatic algorithm based on digital processing of SPECT images for the detection of IE, using a CT image volume as a spatial reference. The heart/lung ratio was calculated from the SPECT image information. There were no statistically significant differences between the heart/lung ratio values of a group of patients diagnosed with IE (2.62±0.47) and a control group of healthy subjects (2.84±0.68). However, it is necessary to increase the study sample of both the individuals diagnosed with IE and the control group, as well as to improve the image quality.
A preliminary study for fully automated quantification of psoriasis severity using image mapping
Kazuhiro Mukai, Hitoshi Iyatomi
Psoriasis is a common chronic skin disease that seriously detracts from patients' quality of life (QoL). Since there is no known permanent cure, keeping the disease condition under control is necessary, and therefore quantification of its severity is important. In clinical practice, the psoriasis area and severity index (PASI) is commonly used for this purpose; however, it is often subjective and troublesome. A fully automatic computer-assisted area and severity index (CASI) was proposed to provide an objective quantification of skin disease. It investigates the size and density of erythema based on digital image analysis; however, it does not account for the adverse effects caused by different geometrical conditions during clinical follow-up (i.e. variability in the direction and distance between camera and patient). In this study, we propose an image alignment method for clinical images and investigate quantification of psoriasis severity during clinical follow-up combined with the idea of CASI. The proposed method finds geometrically corresponding points on the patient's body (ROI) between images using the Scale Invariant Feature Transform (SIFT) and applies an affine transform to map pixel values from one image to the other. In this study, clinical images from 7 patients with psoriasis lesions on their trunk under clinical follow-up were used. In each series, our image alignment algorithm aligns the images to the geometry of the first image. Our proposed method aligned images appropriately on visual assessment and confirmed that psoriasis areas were properly extracted using the CASI approach. Although we cannot compare PASI and CASI directly due to their different definitions of the ROI, we confirmed a strong correlation between the two scores using our image quantification method.
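A hedged sketch of a SIFT-plus-affine alignment step of this kind, assuming OpenCV >= 4.4 and hypothetical file names for two follow-up photographs:

```python
# Sketch: SIFT matches between two follow-up images and a robust affine warp onto the first.
import cv2
import numpy as np

img_ref = cv2.imread("visit1.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
img_mov = cv2.imread("visit2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_ref, des_ref = sift.detectAndCompute(img_ref, None)
kp_mov, des_mov = sift.detectAndCompute(img_mov, None)

# Ratio-test filtering of nearest-neighbour matches (moving image queried against reference).
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = [m for m, n in matcher.knnMatch(des_mov, des_ref, k=2)
           if m.distance < 0.7 * n.distance]

src = np.float32([kp_mov[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
A, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)   # robust affine estimate

aligned = cv2.warpAffine(cv2.imread("visit2.jpg"), A,
                         (img_ref.shape[1], img_ref.shape[0]))
```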