Proceedings Volume 9785

Medical Imaging 2016: Computer-Aided Diagnosis

Purchase the printed version of this volume at proceedings.com or access the digital version at SPIE Digital Library.

Volume Details

Date Published: 1 September 2016
Contents: 20 Sessions, 137 Papers, 0 Presentations
Conference: SPIE Medical Imaging 2016
Volume Number: 9785

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 9785
  • Vessels and Heart
  • Musculoskeletal and Miscellaneous
  • Lung and Chest I
  • Breast
  • Keynote and Deep Learning I
  • Radiomics I
  • Deep Learning II
  • Lung and Chest II
  • Head and Neck
  • Radiomics II
  • Colon and Prostate
  • Abdominal
  • Posters: Breast
  • Posters: Colon and Prostate
  • Posters: Head and Neck
  • Posters: Lung and Chest
  • Posters: Musculoskeletal and Miscellaneous
  • Posters: Vessels and Heart
  • Posters: Abdominal
Front Matter: Volume 9785
Front Matter: Volume 9785
This PDF file contains the front matter associated with SPIE Proceedings Volume 9785, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
Vessels and Heart
Inner and outer coronary vessel wall segmentation from CCTA using an active contour model with machine learning-based 3D voxel context-aware image force
Udhayaraj Sivalingam, Michael Wels, Markus Rempfler, et al.
In this paper, we present a fully automated approach to coronary vessel segmentation from 3D Cardiac Computed Tomography Angiography data, which involves calcification or soft plaque delineation in addition to accurate lumen delineation. Adequately virtualizing the coronary lumen plays a crucial role for simulating blood flow by means of fluid dynamics, while identifying the outer vessel wall in the case of arteriosclerosis is a prerequisite for further plaque compartment analysis. Our method is a hybrid approach complementing Active Contour Model-based segmentation with an external image force that relies on a Random Forest Regression model generated off-line. The regression model provides a strong estimate of the distance to the true vessel surface for every surface candidate point, taking into account 3D wavelet-encoded contextual image features that are aligned with the current surface hypothesis. The associated external image force is integrated into the objective function of the active contour model, such that the overall segmentation approach benefits from the advantages of snakes and of machine learning-based regression alike. This yields an integrated approach achieving competitive results on a publicly available benchmark data collection (Rotterdam segmentation challenge).
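The interplay described above, a snake whose external force comes from a learned distance estimate, can be illustrated with a toy 2D contour. Everything here is a stand-in: the "regressor" is a hand-coded signed distance to a circle, not the paper's Random Forest on wavelet context features.

```python
import numpy as np

# Toy sketch: evolve 64 contour points toward a circle of radius 10 using a
# stand-in "regressor" that predicts the signed distance to the true surface.

def predicted_distance(points):
    # hypothetical regressor: signed distance to a circle of radius 10
    return np.linalg.norm(points, axis=1) - 10.0

def evolve(points, n_iter=200, step=0.1, alpha=0.5):
    pts = points.copy()
    for _ in range(n_iter):
        # internal force: pull each point toward its neighbours (smoothness)
        smooth = 0.5 * (np.roll(pts, 1, axis=0) + np.roll(pts, -1, axis=0)) - pts
        # external force: move along the inward normal by the predicted distance
        normals = pts / np.linalg.norm(pts, axis=1, keepdims=True)
        external = -predicted_distance(pts)[:, None] * normals
        pts += step * (alpha * smooth + external)
    return pts

theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
init = np.stack([15 * np.cos(theta), 15 * np.sin(theta)], axis=1)
radii = np.linalg.norm(evolve(init), axis=1)   # contour settles near radius 10
```

The weighting `alpha` plays the role of balancing the snake's internal smoothness energy against the learned image force in the combined objective.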
Automated identification of best-quality coronary artery segments from multiple-phase coronary CT angiography (cCTA) for vessel analysis
We are developing an automated method to identify the best-quality segment among the corresponding segments in multiple-phase cCTA. The coronary artery trees are automatically extracted from the different cCTA phases using our multi-scale vessel segmentation and tracking method. An automated registration method is then used to align the multiple-phase artery trees. The corresponding coronary artery segments are identified in the registered vessel trees and are straightened by curved planar reformation (CPR). Four features are extracted from each segment in each phase as quality indicators in the original CT volume and the straightened CPR volume. Each quality indicator is used as a voting classifier to vote on the corresponding segments. A newly designed weighted voting ensemble (WVE) classifier is finally used to determine the best-quality coronary segment. An observer preference study was conducted with three readers who visually rated the quality of the vessels on a 1-to-6 ranking scale. Six and 10 cCTA cases were used as the training and test sets in this preliminary study. For the 10 test cases, the agreement between automatically identified best-quality (AI-BQ) segments and the radiologist's top 2 rankings was 79.7%, and the agreements between AI-BQ and the other two readers were 74.8% and 83.7%, respectively. The results demonstrate that the performance of our automated method was comparable to that of experienced readers for identification of the best-quality coronary segments.
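The weighted-voting step can be sketched as follows; the indicator semantics, scores, and weights below are invented for illustration, not taken from the paper.

```python
import numpy as np

# Toy weighted voting ensemble (WVE): each quality indicator votes for the
# phase it rates highest, and votes are combined with per-indicator weights.

def weighted_vote(scores, weights):
    """scores: (n_indicators, n_phases), higher is better.
    weights: per-indicator reliability weight."""
    scores = np.asarray(scores, dtype=float)
    votes = np.zeros(scores.shape[1])
    for ind_scores, w in zip(scores, weights):
        votes[np.argmax(ind_scores)] += w   # one weighted vote per indicator
    return int(np.argmax(votes))

# four hypothetical indicators rating three candidate phases of one segment
scores = [[0.9, 0.4, 0.5],   # e.g. vessel sharpness
          [0.2, 0.8, 0.3],   # e.g. contrast-to-noise
          [0.7, 0.6, 0.5],
          [0.8, 0.3, 0.4]]
best = weighted_vote(scores, weights=[1.0, 0.5, 1.0, 1.0])   # -> phase 0
```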
3D assessment of the carotid artery vessel wall volume: an imaging biomarker for diagnosis of the atherosclerotic disease
Mariam Afshin, Tishan Maraj, Tina Binesh Marvasti, et al.
This study investigates a novel method for 3D evaluation of the carotid vessel wall using two different Magnetic Resonance (MR) sequences. The method is based on energy minimization by level-set curve boundary evolution. The level-set framework allows prior knowledge, learnt from training images, to be incorporated into the solution. The lumen is detected using a 3D TOF sequence. The lumen MRA segmentation (contours) is then transferred and registered to the corresponding images of the Magnetic Resonance Imaging plaque hemorrhage (MRIPH) sequence. A 3D registration algorithm was applied to align the sequences. The same technique used for lumen detection was then applied to extract the outer wall boundary. Our preliminary results show that the segmentations correlate well with those obtained from a 2D reference sequence (2D-T1W). The estimated Vessel Wall Volume (VWV) can be used as an imaging biomarker to help radiologists diagnose and monitor atherosclerotic disease. Furthermore, the 3D maps of Vessel Wall Thickness (VWT) and Vessel Wall Signal Intensity may be used as complementary information to monitor disease severity.
A system for automatic aorta sections measurements on chest CT
Yitzchak Pfeffer, Arnaldo Mayer, Adi Zholkover, et al.
A new method is proposed for caliber measurement of the ascending aorta (AA) and descending aorta (DA). A key component of the method is the automatic detection of the carina, an anatomical landmark around which an axial volume of interest (VOI) can be defined to observe the aortic caliber. For each slice in the VOI, a linear profile line connecting the AA with the DA is found by pattern matching on the underlying intensity profile. Next, the aortic center position is found by applying a Hough transform to the best linear segment candidate. Finally, region growing around the center provides an accurate segmentation and caliber measurement. We evaluated the algorithm on 113 sequential chest CT scans with slice thicknesses of 0.75-3.75 mm, 90 of them with injected contrast agent. The algorithm success rate was computed as the percentage of scans in which the center of the AA was found. Automated measurements of AA caliber were compared with independent measurements by two experienced chest radiologists, comparing the absolute difference between the two radiologists with the absolute difference between the algorithm and each of the radiologists. Measurement stability was assessed by computing the standard deviation (STD) of the absolute difference between the radiologists, and between the algorithm and the radiologists. Results: success rates of 93% and 74% were achieved for contrast-injected and non-contrast cases, respectively. These results indicate that the algorithm can be robust to the large variability in image quality encountered in a real-world clinical setting. The average absolute difference between the algorithm and the radiologists was 1.85 mm, lower than the average absolute difference between the radiologists, which was 2.1 mm. The STD of the absolute difference between the algorithm and the radiologists was 1.5 mm vs. 1.6 mm between the two radiologists. These results demonstrate the clinical relevance of the algorithm's measurements.
Quantitative MRI myocarditis analysis by a PCA-based object recognition algorithm
Rocco Romano, Fausto Acernese, Gerardo Giordano, et al.
Magnetic Resonance Imaging (MRI) has shown promising results in diagnosing myocarditis, which can be qualitatively observed as enhanced pixels in images of the cardiac muscle. In this paper, a quantitative MRI myocarditis analysis is proposed. The analysis consists of introducing a myocarditis index, defined as the ratio between enhanced pixels, representing inflammation, and the total pixels of the myocardial muscle. In order to recognize and quantify enhanced pixels, a PCA-based recognition algorithm is used. The algorithm, implemented in Matlab, was tested by examining a group of 12 patients referred to MRI with a presumptive clinical diagnosis of myocarditis. To assess intra- and interobserver variability, two observers blindly analyzed the data of the 12 patients by delimiting the myocardial region and selecting enhanced pixels. After 10 days the same observers repeated the analysis. The obtained myocarditis indexes were compared to an ordinal variable (values in the 1-5 range) representing the blind assessment of myocarditis severity given by two radiologists on the basis of the patients' case histories. Results show a significant correlation (P < 0.001, r = 0.96) between the myocarditis indexes and the radiologists' clinical judgments. Furthermore, good intraobserver and interobserver reproducibility was obtained.
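The index itself reduces to a ratio of pixel counts. A minimal sketch, with a plain intensity threshold standing in for the paper's PCA-based recognition of enhanced pixels:

```python
import numpy as np

# Myocarditis index: fraction of enhanced pixels within the myocardial mask.
# The threshold test below is a stand-in for the PCA-based recognition step.

def myocarditis_index(image, myocardium_mask, threshold):
    myo = image[myocardium_mask]
    return float(np.sum(myo > threshold)) / myo.size

image = np.zeros((8, 8))
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True          # 16 myocardial pixels
image[2:4, 2:6] = 200.0        # 8 of them enhanced
idx = myocarditis_index(image, mask, threshold=100.0)   # -> 0.5
```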
Musculoskeletal and Miscellaneous
Differentiation of fat, muscle, and edema in thigh MRIs using random forest classification
William Kovacs, Chia-Ying Liu, Ronald M. Summers, et al.
There are many diseases that affect the distribution of muscles, including Duchenne and facioscapulohumeral muscular dystrophy, among other myopathies. In these diseases, it is important to quantify both the muscle and fat volumes to track disease progression. There is also evidence that abnormal signal intensity on MR images, which is often an indication of edema or inflammation, can be a good predictor of muscle deterioration. We present a fully automated method that examines magnetic resonance (MR) images of the thigh and identifies fat, muscle, and edema using a random forest classifier. First, the thigh regions are automatically segmented using the T1 sequence. Then, inhomogeneity artifacts are corrected using the N3 technique. The T1 and STIR (short tau inversion recovery) images are then aligned using landmark-based registration on the bone marrow. The normalized T1 and STIR intensity values are used to train the random forest. Once trained, the random forest can accurately classify the aforementioned classes. The method was evaluated on MR images of 9 patients. The precision values are 0.91±0.06, 0.98±0.01 and 0.50±0.29 for muscle, fat, and edema, respectively. The recall values are 0.95±0.02, 0.96±0.03 and 0.43±0.09 for muscle, fat, and edema, respectively. This demonstrates the feasibility of utilizing information from multiple MR sequences for the accurate quantification of fat, muscle and edema.
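The per-pixel classification step can be sketched with scikit-learn's RandomForestClassifier on paired, normalized (T1, STIR) intensities. The intensity ranges below are invented for illustration and do not reflect real tissue values.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Sketch: classify pixels as muscle / fat / edema from two-channel
# (T1, STIR) intensity features, mimicking the paper's per-pixel forest.
rng = np.random.default_rng(0)

def sample(t1_mean, stir_mean, n):
    return np.column_stack([rng.normal(t1_mean, 0.05, n),
                            rng.normal(stir_mean, 0.05, n)])

X = np.vstack([sample(0.5, 0.3, 200),   # muscle: mid T1, low STIR
               sample(0.9, 0.2, 200),   # fat:    high T1, low STIR
               sample(0.5, 0.9, 200)])  # edema:  mid T1, high STIR
y = np.repeat([0, 1, 2], 200)           # 0=muscle, 1=fat, 2=edema

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pred = clf.predict([[0.5, 0.9]])        # mid T1, high STIR -> edema
```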
Assessing vertebral fracture risk on volumetric quantitative computed tomography by geometric characterization of trabecular bone structure
Walter A. Checefsky, Anas Z. Abidin, Mahesh B. Nagarajan, et al.
The current clinical standard for measuring Bone Mineral Density (BMD) is dual X-ray absorptiometry; however, BMD derived from volumetric quantitative computed tomography has more recently been shown to demonstrate a high association with spinal fracture susceptibility. In this study, we propose a method of fracture risk assessment using structural properties of trabecular bone in spinal vertebrae. Experimental data were acquired via axial multi-detector CT (MDCT) from 12 spinal vertebrae specimens using a whole-body 256-row CT scanner with a dedicated calibration phantom. Common image processing methods were used to annotate the trabecular compartment in the vertebral slices, creating a circular region of interest (ROI) that excluded cortical bone for each slice. The pixels inside the ROI were converted to values indicative of BMD. High-dimensional geometrical features were derived using the scaling index method (SIM) at different radii and scaling factors (SF). The mean BMD values within the ROI were then extracted and used in conjunction with a support vector machine to predict the failure load of the specimens. Prediction performance was measured using the root-mean-square error (RMSE) metric; SIM combined with mean BMD features (RMSE = 0.82 ± 0.37) outperformed MDCT-measured mean BMD alone (RMSE = 1.11 ± 0.33) (p < 10^-4). These results demonstrate that biomechanical strength prediction in vertebrae can be significantly improved through the use of SIM-derived texture features from trabecular bone.
Classification of voting patterns to improve the generalized Hough transform for epiphyses localization
Ferdinand Hahmann, Gordon Böer, Eric Gabriel, et al.
This paper presents a general framework for object localization in medical (and non-medical) images. In particular, we focus on objects of well-defined shape, like epiphyseal regions in hand radiographs, which are localized based on a voting framework using the Generalized Hough Transform (GHT). We suggest combining the GHT voting with a classifier that rates the voting characteristics of the GHT model at individual Hough cells. Specifically, a Random Forest classifier rates whether the model points voting for an object position constitute a regular shape or not, and this measure is combined with the GHT votes. With this technique, we achieve a success rate of 99.4% for localizing 12 epiphyseal regions of interest in 412 hand radiographs. The mean error is 6.6 pixels on images with a mean resolution of 1185×2006 pixels. Furthermore, we analyze the influence of the radius of the local neighborhood that is considered in analyzing the voting characteristics of a Hough cell.
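The GHT voting stage that the classifier then re-rates can be sketched minimally as follows; the template and image points are a toy square, not epiphyseal shapes.

```python
import numpy as np

# Minimal Generalized Hough Transform: edge points vote for the object's
# reference point via an R-table of offsets learned from a template shape;
# the best-filled Hough cell gives the localization.

def ght_localize(edge_points, r_table, shape):
    acc = np.zeros(shape, dtype=int)
    for (x, y) in edge_points:
        for (dx, dy) in r_table:          # each offset casts one vote
            cx, cy = x + dx, y + dy
            if 0 <= cx < shape[0] and 0 <= cy < shape[1]:
                acc[cx, cy] += 1
    return np.unravel_index(np.argmax(acc), shape)

# template: 4 points of a square around reference point (0, 0)
r_table = [(-3, 0), (3, 0), (0, -3), (0, 3)]
# the same square observed centered at (10, 10)
edges = [(7, 10), (13, 10), (10, 7), (10, 13)]
center = ght_localize(edges, r_table, shape=(20, 20))   # -> (10, 10)
```

In the paper's extension, the set of model points voting into a cell like `center` would additionally be rated by a Random Forest for shape regularity before the final decision.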
Medical sieve: a cognitive assistant for radiologists and cardiologists
T. Syeda-Mahmood, E. Walach, D. Beymer, et al.
Radiologists and cardiologists today have to view large amounts of imaging data relatively quickly leading to eye fatigue. Further, they have only limited access to clinical information relying mostly on their visual interpretation of imaging studies for their diagnostic decisions. In this paper, we present Medical Sieve, an automated cognitive assistant for radiologists and cardiologists designed to help in their clinical decision-making. The sieve is a clinical informatics system that collects clinical, textual and imaging data of patients from electronic health records systems. It then analyzes multimodal content to detect anomalies if any, and summarizes the patient record collecting all relevant information pertinent to a chief complaint. The results of anomaly detection are then fed into a reasoning engine which uses evidence from both patient-independent clinical knowledge and large-scale patient-driven similar patient statistics to arrive at potential differential diagnosis to help in clinical decision making. In compactly summarizing all relevant information to the clinician per chief complaint, the system still retains links to the raw data for detailed review providing holistic summaries of patient conditions. Results of clinical studies in the domains of cardiology and breast radiology have already shown the promise of the system in differential diagnosis and imaging studies summarization.
Acne image analysis: lesion localization and classification
Fazly Salleh Abas, Benjamin Kaffenberger, Joseph Bikowski, et al.
Acne is a common skin condition present predominantly in the adolescent population, but it may continue into adulthood. Scarring commonly occurs as a sequel to severe inflammatory acne. The presence of acne and resultant scars is more than cosmetic, with a significant potential to alter quality of life and even job prospects. The psychosocial effects of acne and scars can be disturbing and may be a risk factor for serious psychological concerns. Treatment efficacy is generally determined based on an unvalidated gestalt by the physician and patient. However, the validated assessment of acne can be challenging and time consuming. Acne can be classified into several morphologies including closed comedones (whiteheads), open comedones (blackheads), papules, pustules, cysts (nodules) and scars. For a validated assessment, the different morphologies need to be counted independently, a method that is far too time consuming considering the limited time available for a consultation. However, it is practical to record and analyze images, since dermatologists can validate the severity of acne within seconds after uploading an image. This paper covers the processes of region-of-interest determination using entropy-based filtering and thresholding, as well as acne lesion feature extraction. Feature extraction methods using discrete wavelet frames and the gray-level co-occurrence matrix are presented, and their effectiveness in separating the six major acne lesion classes is discussed. Several classifiers were used to test the extracted features. Correct classification accuracy as high as 85.5% was achieved using a binary classification tree with fourteen principal components used as descriptors. Further studies are underway to improve the algorithm's performance and validate it on a larger database.
Classification of melanoma lesions using sparse coded features and random forests
Mojdeh Rastgoo, Guillaume Lemaître, Olivier Morel, et al.
Malignant melanoma is the most dangerous type of skin cancer, yet it is also among the most treatable, provided it is diagnosed early, which remains a challenging task for clinicians and dermatologists. In this regard, CAD systems based on machine learning and image processing techniques have been developed to differentiate melanoma lesions from benign and dysplastic nevi using dermoscopic images. Generally, these frameworks are composed of sequential processes: pre-processing, segmentation, and classification. This architecture faces two main challenges: (i) each process is complex, requires the tuning of a set of parameters, and is specific to a given dataset; (ii) the performance of each process depends on the previous one, and errors accumulate throughout the framework. In this paper, we propose a framework for melanoma classification based on sparse coding which does not rely on any pre-processing or lesion segmentation. Our framework uses a Random Forests classifier and sparse representations of three features: SIFT, Hue and Opponent angle histograms, and RGB intensities. The experiments are carried out on the public PH2 dataset using 10-fold cross-validation. The results show that the sparse-coded SIFT feature achieves the highest performance, with sensitivity and specificity of 100% and 90.3% respectively, using a dictionary size of 800 atoms and a sparsity level of 2. Furthermore, the descriptor based on RGB intensities achieves similar results, with sensitivity and specificity of 100% and 71.3%, respectively, for a smaller dictionary size of 100 atoms. In conclusion, dictionary learning techniques encode strong structures of dermoscopic images and provide discriminant descriptors.
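Sparse coding at a sparsity level of 2 can be illustrated with a toy orthogonal matching pursuit. The paper's dictionaries are learned from dermoscopic features; the identity dictionary below is purely for illustration.

```python
import numpy as np

# Toy orthogonal matching pursuit: represent a descriptor with at most
# `sparsity` non-zero coefficients over the columns (atoms) of D.

def omp(x, D, sparsity):
    residual, idx = x.copy(), []
    for _ in range(sparsity):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))  # best-matching atom
        sub = D[:, idx]
        coef, *_ = np.linalg.lstsq(sub, x, rcond=None)      # refit on chosen atoms
        residual = x - sub @ coef
    code = np.zeros(D.shape[1])
    code[idx] = coef
    return code

D = np.eye(4)                      # trivial 4-atom dictionary (illustration only)
x = np.array([0.0, 3.0, 0.0, 1.0])
code = omp(x, D, sparsity=2)       # two non-zero coefficients
```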
Lung and Chest I
Localized Fisher vector representation for pathology detection in chest radiographs
Ofer Geva, Sivan Lieberman, Eli Konen, et al.
In this work, we present a novel framework for automatic detection of abnormalities in chest radiographs. The representation model is based on the Fisher Vector encoding method. In the representation process, we encode each chest radiograph using a set of extracted local descriptors. These include localized texture features that address typical local texture abnormalities, as well as spatial features. Using a Gaussian Mixture Model, a rich image descriptor is generated for each chest radiograph. An improved representation is obtained by selecting the features that correspond to the relevant region of interest for each pathology. Categorization of the X-ray images is conducted using supervised learning and an SVM classifier. The proposed system was tested on a dataset of 636 chest radiographs taken from a real clinical environment. We measured the performance in terms of the area under the receiver operating characteristic (ROC) curve (AUC). Results show an AUC value of 0.878 for abnormal mediastinum detection, and AUC values of 0.827 and 0.817 for detection of right and left lung opacities, respectively. These results improve upon the state-of-the-art as compared with two alternative representation models.
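A simplified Fisher Vector encoding, keeping only the gradient with respect to the GMM means and using hand-set GMM parameters rather than ones fitted on training descriptors, can be sketched as:

```python
import numpy as np

# Simplified Fisher Vector: encode a set of local descriptors by the
# normalized gradient of the GMM log-likelihood w.r.t. the component means.

def fisher_vector_means(desc, means, sigmas, priors):
    # posterior (soft assignment) of each descriptor to each component
    diff = desc[:, None, :] - means[None, :, :]               # (N, K, D)
    logp = -0.5 * np.sum((diff / sigmas) ** 2, axis=2) + np.log(priors)
    logp -= logp.max(axis=1, keepdims=True)
    post = np.exp(logp)
    post /= post.sum(axis=1, keepdims=True)
    # gradient w.r.t. means, normalized by N and the component priors
    g = (post[:, :, None] * diff / sigmas ** 2).sum(axis=0)   # (K, D)
    return (g / (desc.shape[0] * np.sqrt(priors)[:, None])).ravel()

means = np.array([[0.0, 0.0], [5.0, 5.0]])    # hand-set 2-component GMM
sigmas = np.ones((2, 2))
priors = np.array([0.5, 0.5])
desc = np.array([[0.2, -0.1], [4.9, 5.3], [5.1, 4.8]])
fv = fisher_vector_means(desc, means, sigmas, priors)   # length K*D = 4
```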
Intensity targeted radial structure tensor analysis and its application for automated mediastinal lymph node detection from CT volumes
Hirohisa Oda, Yukitaka Nimura, Masahiro Oda, et al.
This paper presents a new blob-like enhancement filter based on Intensity Targeted Radial Structure Tensor (ITRST) analysis to improve mediastinal lymph node detection from chest CT volumes. A blob-like structure enhancement filter based on Radial Structure Tensor (RST) analysis can be utilized for the initial detection of lymph node candidate regions. However, some lymph nodes cannot be detected because RST analysis is influenced by neighboring regions of very high or low intensity, such as contrast-enhanced blood vessels and air. To overcome this problem, we propose ITRST analysis, which integrates prior knowledge of the detection target's intensity into RST analysis. Our lymph node detection method consists of two steps. First, candidate regions are obtained by ITRST analysis. Second, false positives (FPs) are removed by a Support Vector Machine (SVM) classifier. We applied the proposed method to 47 cases. Of the 19 lymph nodes with a short axis of at least 10 mm, 100.0% were detected with 247.7 FPs/case by ITRST analysis, while only 80.0% were detected with 123.0 FPs/case by RST analysis. After FP reduction by the SVM, ITRST analysis outperformed RST analysis in lymph node detection performance.
Automatic aortic root segmentation in CTA whole-body dataset
Xinpei Gao, Pieter H. Kitslaar, Arthur J. H. A. Scholte, et al.
Trans-catheter aortic valve replacement (TAVR) is an evolving technique for patients with serious aortic stenosis. Typically, in this application a CTA dataset of the patient's arterial system from the subclavian artery to the femoral arteries is obtained to evaluate the quality of the vascular access route and to analyze the aortic root to determine if, and which, prosthesis should be used. The purpose of this study was to automatically segment the aortic root in computed tomography angiography (CTA) datasets to support TAVR procedures. The method includes 4 major steps. First, the patient's cardiac CTA image was resampled to reduce the computation time. Second, the cardiac CTA image was segmented using an atlas-based approach, in which the most similar atlas was selected from a total of 8 atlases based on its image similarity to the input CTA image. Third, the aortic root segmentation from the previous step was transferred to the patient's whole-body CTA image by affine registration, and it was refined in the fourth step using a deformable subdivision surface model fitting procedure based on image intensity. The pipeline was applied to 20 patients. The ground truth was created by an analyst who semi-automatically corrected the contours of the automatic method where necessary. The average Dice similarity index between the segmentations of the automatic method and the ground truth was found to be 0.965±0.024. In conclusion, the current results are very promising.
Effects of CT dose and nodule characteristics on lung-nodule detectability in a cohort of 90 national lung screening trial patients
Lung cancer screening CT is already performed at low dose. There are many techniques to reduce the dose even further, but it is not clear how such techniques will affect nodule detectability. In this work, we used an in-house CAD algorithm to evaluate detectability. 90 patients and their raw CT data files were drawn from the National Lung Screening Trial (NLST) database. All scans were acquired at ~2 mGy CTDIvol with fixed tube current, 1 mm slice thickness, and a B50 reconstruction kernel on a Sensation 64 scanner (Siemens Healthcare). We used the raw CT data to simulate two additional reduced-dose scans for each patient, corresponding to 1 mGy (50%) and 0.5 mGy (25%). Radiologists' findings on the NLST reader forms indicated 65 nodules in the cohort, which we subdivided based on LungRADS criteria. For larger, category 4 nodules, median sensitivities were 100% at all three dose levels, and mean sensitivity decreased with dose. For smaller nodules meeting the category 2 or 3 criteria, the dose dependence was less obvious. Overall, mean patient-level sensitivity varied from 38.5% at 100% dose to 40.4% at 50% dose, a difference of only 1.9%. However, the false-positive rate quadrupled from 1 per case at 100% dose to 4 per case at 25% dose. Dose reduction affected lung-nodule detectability differently depending on the LungRADS category, and the false-positive rate was very sensitive at sub-screening dose levels. Thus, care should be taken to adapt CAD to the very challenging noise characteristics of screening.
An automated lung nodule detection system for CT images using synthetic minority oversampling
Shrikant A. Mehre, Sudipta Mukhopadhyay, Anirvan Dutta, et al.
Pulmonary nodules are a potential manifestation of lung cancer, and their early detection can remarkably enhance the survival rate of patients. This paper presents an automated pulmonary nodule detection algorithm for lung CT images. The algorithm utilizes a two-stage approach comprising nodule candidate detection followed by reduction of false positives. The nodule candidate detection involves thresholding followed by morphological opening. The geometrical features at this stage are selected from properties of nodule size and compactness, and lead to a reduced number of false positives. An SVM classifier with a radial basis function kernel is used. The data imbalance, due to the uneven distribution of nodules and non-nodules resulting from the candidate detection stage, is addressed by oversampling the minority class using the Synthetic Minority Over-sampling Technique (SMOTE) and by increasing its misclassification penalty. Experiments were performed on 97 CT scans from the publicly available LIDC-IDRI database. Performance is evaluated in terms of sensitivity and false positives per scan (FP/scan). Results indicate noteworthy performance of the proposed approach (nodule detection sensitivity after 4-fold cross-validation is 92.91% with 3 FP/scan). Comparative analysis also reflects a comparable, and often better, performance of the proposed setup relative to some existing techniques.
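The SMOTE step can be sketched in a few lines of NumPy: each synthetic sample interpolates between a minority-class point and one of its k nearest minority neighbours. The toy feature vectors below are invented.

```python
import numpy as np

# Minimal SMOTE: synthesize minority-class (nodule) samples by linear
# interpolation between a minority point and a random one of its k nearest
# minority-class neighbours.

def smote(X_min, n_new, k=3, seed=0):
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nn = np.argsort(d)[1:k + 1]            # k nearest, excluding the point itself
        j = rng.choice(nn)
        gap = rng.random()                     # interpolation factor in [0, 1)
        out.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(out)

X_min = np.array([[1.0, 1.0], [1.2, 0.9], [0.9, 1.1], [1.1, 1.2]])
X_syn = smote(X_min, n_new=6)    # 6 synthetic minority samples
```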
Breast
Quantification of mammographic masking risk with volumetric breast density maps: how to select women for supplemental screening
Katharina Holland, Carla H. van Gils, Johanna OP Wanders, et al.
The sensitivity of mammography is low for women with dense breasts, since cancers may be masked by dense tissue. In this study, we investigated methods to identify women with density patterns associated with a high masking risk. The risk measures are derived from volumetric breast density maps. We used the last negative screening mammograms of 93 women who subsequently presented with an interval cancer (IC) and, as controls, 930 randomly selected normal screening exams from women without cancer. Volumetric breast density maps, which provide the dense tissue thickness at each location, were computed from the mammograms. These were used to compute absolute and percentage glandular tissue volume. We modeled the masking risk at each pixel location using the absolute and percentage dense tissue thickness, and we investigated the effect of taking the cancer location probability distribution (CLPD) into account. For each method, we selected the cases with the highest masking measure (by thresholding) and computed the fraction of ICs as a function of the fraction of controls selected. The latter can be interpreted as the negative supplemental screening rate (NSSR). When CLPD was incorporated, no significant differences were found between the models. In general, the methods performed better when CLPD was included. At higher NSSRs, some of the investigated masking measures performed significantly better than volumetric breast density. These measures may therefore serve as an alternative for identifying women at high risk of a masked cancer.
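The selection analysis reduces to thresholding the masking measure at a chosen fraction of controls (the NSSR) and reporting the fraction of ICs captured. The Gaussian risk scores below are synthetic stand-ins for the paper's density-map measures.

```python
import numpy as np

# Sketch: fraction of interval cancers (ICs) captured when selecting the
# top `nssr` fraction of controls by a masking-risk measure.

def ic_fraction_at_nssr(risk_ic, risk_ctrl, nssr):
    thr = np.quantile(risk_ctrl, 1.0 - nssr)   # select top `nssr` of controls
    return float(np.mean(np.asarray(risk_ic) > thr))

rng = np.random.default_rng(1)
risk_ctrl = rng.normal(0.0, 1.0, 930)          # 930 controls
risk_ic = rng.normal(1.5, 1.0, 93)             # 93 ICs, shifted toward high risk
frac = ic_fraction_at_nssr(risk_ic, risk_ctrl, nssr=0.10)
```

A better masking measure yields a larger `frac` at the same NSSR, which is how the measures are compared in the study.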
Seamless lesion insertion in digital mammography: methodology and reader study
Collection of large repositories of clinical images containing verified cancer locations is costly and time consuming due to difficulties associated with both the accumulation of data and establishment of the ground truth. This problem poses a significant challenge to the development of machine learning algorithms that require large amounts of data to properly train and avoid overfitting. In this paper we expand the methods in our previous publications by making several modifications that significantly increase the speed of our insertion algorithms, thereby allowing them to be used for inserting lesions that are much larger in size. These algorithms have been incorporated into an image composition tool that we have made publicly available. This tool allows users to modify or supplement existing datasets by seamlessly inserting a real breast mass or micro-calcification cluster extracted from a source digital mammogram into a different location on another mammogram. We demonstrate examples of the performance of this tool on clinical cases taken from the University of South Florida Digital Database for Screening Mammography (DDSM). Finally, we report the results of a reader study evaluating the realism of inserted lesions compared to clinical lesions. Analysis of the radiologist scores in the study using receiver operating characteristic (ROC) methodology indicates that inserted lesions cannot be reliably distinguished from clinical lesions.
Workflow improvements for digital breast tomosynthesis: computerized generation of enhanced synthetic images
In a typical 2D mammography workflow scenario, a computer-aided detection (CAD) algorithm is used as a second reader, producing marks for a radiologist to review. In the case of 3D digital breast tomosynthesis (DBT), the display of CAD detections at multiple reconstruction heights would lead to increased image browsing and interpretation time. We propose an alternative approach in which an algorithm automatically identifies suspicious regions of interest from 3D reconstructed DBT slices and then merges the findings with the corresponding 2D synthetic projection image, which is then reviewed. The resulting enhanced synthetic 2D image combines the benefits of a familiar 2D breast view with the superior appearance of suspicious locations from the 3D slices. Moreover, clicking on a 2D suspicious location brings up the display of the corresponding 3D region in the DBT volume, allowing navigation between 2D and 3D images. We explored the use of these enhanced synthetic images in a concurrent-read paradigm by conducting a study with 5 readers and 30 breast exams. We observed that the introduction of the enhanced synthetic view reduced the radiologists' average interpretation time by 5.4%, increased sensitivity by 6.7%, and increased specificity by 15.6%.
A fully automated system for quantification of background parenchymal enhancement in breast DCE-MRI
Mehmet Ufuk Dalmiş, Albert Gubern-Mérida, Cristina Borelli, et al.
Background parenchymal enhancement (BPE) observed in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) has been identified as an important biomarker associated with the risk of developing breast cancer. In this study, we present a fully automated framework for quantification of BPE. We initially segmented the fibroglandular tissue (FGT) of the breasts using an improved version of an existing method. Subsequently, we computed BPEabs (volume of the enhancing tissue), BPErf (BPEabs divided by FGT volume) and BPErb (BPEabs divided by breast volume), using different relative enhancement threshold values between 1% and 100%. To evaluate and compare the previous and improved FGT segmentation methods, we used 20 breast DCE-MRI scans and computed Dice similarity coefficient (DSC) values with respect to manual segmentations. For evaluation of the BPE quantification, we used a dataset of 95 breast DCE-MRI scans. Two radiologists, in individual reading sessions, visually analyzed the dataset and categorized each breast as minimal, mild, moderate or marked BPE. To measure the correlation between the automated BPE values and the radiologists' assessments, we converted these values into ordinal categories and used Spearman's rho as the measure of correlation. According to our results, the new segmentation method obtained an average DSC of 0.81±0.09, which was significantly higher (p<0.001) than that of the previous method (0.76±0.10). The highest correlation between automated BPE categories and the radiologists' assessments was obtained with the BPErf measurement (r=0.55 and r=0.49, p<0.001 for both), while the correlation between the scores given by the two radiologists was 0.82 (p<0.001). The presented framework can be used to systematically investigate the correlation between BPE and risk in large screening cohorts.
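Once an enhancement map and the two masks are available, the three BPE measures reduce to voxel counting. A minimal sketch on toy 2D masks (the paper works on 3D volumes):

```python
import numpy as np

# BPE measures: `rel_enh` is per-voxel relative enhancement (post-pre)/pre,
# `thr` is the relative-enhancement threshold; volumes are voxel counts.

def bpe_measures(rel_enh, fgt_mask, breast_mask, thr):
    enhancing = (rel_enh > thr) & fgt_mask       # enhancing FGT voxels
    bpe_abs = int(enhancing.sum())               # BPEabs: enhancing volume
    bpe_rf = bpe_abs / fgt_mask.sum()            # BPErf: relative to FGT volume
    bpe_rb = bpe_abs / breast_mask.sum()         # BPErb: relative to breast volume
    return bpe_abs, bpe_rf, bpe_rb

breast = np.ones((4, 4), dtype=bool)             # 16 breast voxels
fgt = np.zeros((4, 4), dtype=bool)
fgt[:2] = True                                   # 8 FGT voxels
rel = np.zeros((4, 4))
rel[0] = 0.5                                     # 4 enhancing FGT voxels
vals = bpe_measures(rel, fgt, breast, thr=0.1)   # -> (4, 0.5, 0.25)
```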
Parenchymal texture measures weighted by breast anatomy: preliminary optimization in a case-control study
Growing evidence suggests that quantitative descriptors of parenchymal texture patterns hold a valuable role in assessing an individual woman’s risk for breast cancer. In this work, we assess the hypothesis that breast cancer risk factors are not uniformly expressed in the breast parenchymal tissue and that, therefore, breast-anatomy-weighted parenchymal texture descriptors, in which different breast ROIs have non-uniform contributions, may enhance breast cancer risk assessment. To this end, we introduce an automated breast-anatomy-driven methodology which generates a breast atlas, which is then used to produce a weight map that reinforces the contributions of the central and upper-outer breast areas. We incorporate this methodology into our previously validated lattice-based strategy for parenchymal texture analysis. In the framework of a pilot case-control study, including digital mammograms from 424 women, our proposed breast-anatomy-weighted texture descriptors are optimized and evaluated against non-weighted texture features, using regression analysis with leave-one-out cross validation. The classification performance is assessed in terms of the area under the curve (AUC) of the receiver operating characteristic. The collective discriminatory capacity of the weighted texture features was maximized (AUC=0.87) when the central breast area was considered more important than the upper-outer area, with significant performance improvement (DeLong's test, p-value<0.05) over the non-weighted texture features (AUC=0.82). Our results suggest that breast-anatomy-driven methodologies have the potential to further upgrade the promising role of parenchymal texture analysis in breast cancer risk assessment and may serve as a reference in the design of future studies towards image-driven personalized recommendations regarding women’s cancer risk evaluation.
Automated linking of suspicious findings between automated 3D breast ultrasound volumes
Albert Gubern-Mérida, Tao Tan, Jan van Zelst, et al.
Automated breast ultrasound (ABUS) is a 3D imaging technique which is rapidly emerging as a safe and relatively inexpensive modality for screening of women with dense breasts. However, reading ABUS examinations is a very time-consuming task, since radiologists need to manually identify suspicious findings in all the different ABUS volumes available for each patient. Image analysis techniques to automatically link findings across volumes are required to speed up clinical workflow and make ABUS screening more efficient. In this study, we propose an automated system to, given the location in the ABUS volume being inspected (source), find the corresponding location in a target volume. The target volume can be a different view of the same study or the same view from a prior examination. The algorithm was evaluated using 118 linkages between suspicious abnormalities annotated in a dataset of ABUS images of 27 patients participating in a high-risk screening program. The distance between the predicted location and the center of the annotated lesion in the target volume was computed for evaluation. The mean ± stdev and median distance error achieved by the presented algorithm for linkages between volumes of the same study was 7.75±6.71 mm and 5.16 mm, respectively. The performance was 9.54±7.87 mm and 8.00 mm (mean ± stdev and median) for linkages between volumes from current and prior examinations. The proposed approach has the potential to minimize user interaction for finding correspondences among ABUS volumes.
Keynote and Deep Learning I
Deep convolutional networks for automated detection of posterior-element fractures on spine CT
Holger R. Roth, Yinong Wang, Jianhua Yao, et al.
Injuries of the spine, and its posterior elements in particular, are a common occurrence in trauma patients, with potentially devastating consequences. Computer-aided detection (CADe) could assist in the detection and classification of spine fractures. Furthermore, CADe could help assess the stability and chronicity of fractures, as well as facilitate research into optimization of treatment paradigms. In this work, we apply deep convolutional networks (ConvNets) for the automated detection of posterior-element fractures of the spine. First, the vertebral bodies of the spine with their posterior elements are segmented in spine CT using multi-atlas label fusion. Then, edge maps of the posterior elements are computed. These edge maps serve as candidate regions for predicting a set of probabilities for fractures along the image edges using ConvNets in a 2.5D fashion (three orthogonal patches in the axial, coronal and sagittal planes). We explore three different methods for training the ConvNet using 2.5D patches along the edge maps of 'positive', i.e. fractured, and 'negative', i.e. non-fractured, posterior elements. An experienced radiologist retrospectively marked the location of 55 displaced posterior-element fractures in 18 trauma patients. We randomly split the data into training and testing cases. In testing, we achieve an area under the curve of 0.857. This corresponds to 71% or 81% sensitivity at 5 or 10 false positives per patient, respectively. Analysis of our set of trauma patients demonstrates the feasibility of detecting posterior-element fractures in spine CT images using computer vision techniques such as deep convolutional networks.
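The 2.5D sampling described above (three orthogonal patches through a candidate voxel) can be sketched as follows; the patch size and the assumption that candidates lie away from the volume border are illustrative, and a real implementation would pad the volume:

```python
import numpy as np

def patches_2p5d(volume, center, size=32):
    """Extract axial, coronal and sagittal patches centered on a voxel.

    volume : 3D array indexed (z, y, x)
    center : (z, y, x) candidate location, assumed >= size/2 from each border
    size   : patch side length in voxels
    """
    z, y, x = center
    h = size // 2
    axial    = volume[z, y - h:y + h, x - h:x + h]   # in-plane slice
    coronal  = volume[z - h:z + h, y, x - h:x + h]
    sagittal = volume[z - h:z + h, y - h:y + h, x]
    # Stack as three channels, ready to feed a ConvNet.
    return np.stack([axial, coronal, sagittal])
```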
Increasing CAD system efficacy for lung texture analysis using a convolutional network
Sebastian Roberto Tarando, Catalin Fetita, Alex Faccinetto, et al.
The infiltrative lung diseases are a class of irreversible, non-neoplastic lung pathologies requiring regular follow-up with CT imaging. Quantifying the evolution of the patient status imposes the development of automated classification tools for lung texture. For the large majority of CAD systems, such classification relies on a two-dimensional analysis of axial CT images. In a previously developed CAD system, we proposed a fully-3D approach exploiting a multi-scale morphological analysis which showed good performance in detecting diseased areas, but with a major drawback consisting of sometimes overestimating the pathological areas and mixing different types of lung patterns. This paper proposes a combination of the existing CAD system with the classification outcome provided by a convolutional network, specifically tuned, in order to increase the specificity of the classification and the confidence in the diagnosis. The advantage of using a deep learning approach is a better regularization of the classification output (because of a deeper insight into a given pathological class over a large series of samples), where the previous system is over-sensitive due to the multi-scale response on patient-specific, localized patterns. In a preliminary evaluation, the combined approach was tested on a database of 10 patients with various lung pathologies, showing a sharp increase in true detections.
Radiomics I
Increasing cancer detection yield of breast MRI using a new CAD scheme of mammograms
Maxine Tan, Faranak Aghaei, Alan B. Hollingsworth, et al.
Although breast MRI is the most sensitive imaging modality for detecting early breast cancer, its cancer detection yield in breast cancer screening is quite low (< 3 to 4%, even for the small group of high-risk women) to date. The purpose of this preliminary study is to test the potential of developing and applying a new computer-aided detection (CAD) scheme for digital mammograms to identify women at high risk of harboring mammography-occult breast cancers, which can be detected by breast MRI. For this purpose, we retrospectively assembled a dataset involving 30 women who had both mammography and breast MRI screening examinations. All mammograms were interpreted as negative, while 5 cancers were detected using breast MRI. We developed a CAD scheme of mammograms, which includes a new risk model based on quantitative mammographic image feature analysis, to stratify women into two groups with high and low risk of harboring mammography-occult cancer. Among the 30 women, 9 were classified into the high risk group by the CAD scheme, which included all 5 women who had cancer detected by breast MRI. All 21 low risk women remained negative on the breast MRI examinations. The cancer detection yield of breast MRI applied to this dataset substantially increased from 16.7% (5/30) to 55.6% (5/9), while eliminating 84% (21/25) of unnecessary breast MRI screenings. The study demonstrated the potential of applying a new CAD scheme to significantly increase the cancer detection yield of breast MRI, while simultaneously reducing the number of negative MRIs in breast cancer screening.
Identification, segmentation, and characterization of microcalcifications on mammography
Karen Drukker, Serghei Malkov, Jesus Avila, et al.
The purpose was to develop a characterization method for breast lesions visible only as microcalcifications on digital mammography. The method involved 4 steps: 1) image preprocessing through morphological filtering, 2) unsupervised identification of microcalcifications in the region surrounding the radiologist-indicated location through k-means clustering, 3) segmentation of the identified microcalcifications using an active contour model, and 4) characterization by computer-extracted image-based phenotypes describing properties of individual microcalcifications, the cluster, and the surrounding parenchyma. The image-based phenotypes were investigated for their ability to distinguish – individually, i.e., without merging with other phenotypes through a classifier – between invasive breast cancers, in-situ (non-invasive) breast cancers, fibroadenomas, and other benign-type lesions. The data set contained diagnostic mammograms of 82 patients with 2 views per patient – cranio-caudal (CC) and medio-lateral (ML) views of the affected breast with a single biopsy-proven finding indicated per view – with 7 invasive cancers, 14 in-situ cancers, 13 fibroadenomas, and 48 other benign-type lesions. Analysis was performed per lesion and calculated phenotypes were averaged over views. Performance was assessed using ROC analysis with individual phenotypes as decision variables in the tasks of a) pairwise distinction amongst the 4 finding types, b) distinction between each finding type and all others, and c) distinction between cancer and non-cancer. Different phenotypes emerged as the best performers, with areas under the ROC curve ranging from 0.69 (0.05) to 0.92 (0.09) depending on the task. We obtained encouraging preliminary results beyond the classification of cancer versus non-cancer in the distinction between different types of breast lesions visible as mammographic calcifications.
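Step 2, unsupervised identification via k-means, can be illustrated with a minimal two-cluster k-means on pixel intensities (1-D and NumPy-only here; the paper's actual feature space and initialization are not specified in the abstract):

```python
import numpy as np

def kmeans_bright(intensities, iters=20):
    """Two-cluster 1-D k-means; returns a boolean mask of the brighter
    cluster (candidate microcalcification pixels)."""
    x = np.asarray(intensities, dtype=float)
    c = np.array([x.min(), x.max()])  # initialize the two centers at the extremes
    for _ in range(iters):
        # Assign each value to its nearest center, then recompute the means.
        labels = np.abs(x[:, None] - c[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = x[labels == k].mean()
    return labels == int(c.argmax())  # True for the bright cluster
```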
Predicting Ki67% expression from DCE-MR images of breast tumors using textural kinetic features in tumor habitats
Baishali Chaudhury, Mu Zhou, Hamidreza Farhidzadeh, et al.
The use of Ki67% expression, a cell proliferation marker, as a predictive and prognostic factor has been widely studied in the literature. Yet its usefulness is limited due to inconsistent cut-off scores for Ki67% expression, subjective differences in its assessment in various studies, and spatial variation in expression, which makes it difficult to reproduce as a reliable independent prognostic factor. Previous studies have shown that there are significant spatial variations in Ki67% expression, which may limit its clinical prognostic utility after core biopsy. These variations are most evident when examining the periphery of the tumor vs. the core. To date, prediction of Ki67% expression from quantitative image analysis of DCE-MRI is very limited. This work presents a novel computer aided diagnosis framework to use textural kinetics to (i) predict the ratio of periphery Ki67% expression to core Ki67% expression, and (ii) predict Ki67% expression from individual tumor habitats. The pilot cohort consists of T1-weighted fat-saturated DCE-MR images from 17 patients. Support vector regression with a radial basis function was used for predicting the Ki67% expression and ratios. The initial results show that texture features from individual tumor habitats are more predictive of the Ki67% expression ratio and spatial Ki67% expression than features from the whole tumor. The Ki67% expression ratio could be predicted with a root mean square error (RMSE) of 1.67%. Quantitative image analysis of DCE-MRI using textural kinetic habitats has the potential to be used as a non-invasive method for predicting Ki67 percentage and ratio, thus more accurately identifying high Ki67 expression for patient prognosis.
Applying quantitative adiposity feature analysis models to predict benefit of bevacizumab-based chemotherapy in ovarian cancer patients
Yunzhi Wang, Yuchen Qiu, Theresa Thai, et al.
How to rationally identify epithelial ovarian cancer (EOC) patients who will benefit from bevacizumab or other antiangiogenic therapies is a critical issue in EOC treatments. The motivation of this study is to quantitatively measure adiposity features from CT images and investigate the feasibility of predicting potential benefit of EOC patients with or without receiving bevacizumab-based chemotherapy treatment using multivariate statistical models built based on quantitative adiposity image features. A dataset involving CT images from 59 advanced EOC patients was included. Among them, 32 patients received maintenance bevacizumab after primary chemotherapy and the remaining 27 patients did not. We developed a computer-aided detection (CAD) scheme to automatically segment visceral fat areas (VFA) and subcutaneous fat areas (SFA) and then extracted 7 adiposity-related quantitative features. Three multivariate data analysis models (linear regression, logistic regression and Cox proportional hazards regression) were applied respectively to investigate the potential association between the model-generated prediction results and the patients’ progression-free survival (PFS) and overall survival (OS). The results show that using all 3 statistical models, a statistically significant association was detected between the model-generated results and both of the two clinical outcomes in the group of patients receiving maintenance bevacizumab (p<0.01), while there was no significant association for either PFS or OS in the group of patients not receiving maintenance bevacizumab. Therefore, this study demonstrated the feasibility of using statistical prediction models based on quantitative adiposity-related CT image features to generate a new clinical marker and predict the clinical outcome of EOC patients receiving maintenance bevacizumab-based chemotherapy.
Radiogenomics of glioblastoma: a pilot multi-institutional study to investigate a relationship between tumor shape features and tumor molecular subtype
Nicholas M. Czarnek, Kal Clark, Katherine B. Peters, et al.
Genomic subtype has been shown to be an important predictor of therapy response for patients with glioblastomas. Unfortunately, obtaining the genomic subtype is an expensive process that is not typically included in the standard of care. It is therefore of interest to investigate potential surrogates of molecular subtypes that use standard diagnostic data such as magnetic resonance (MR) imaging. In this study, we analyze the relationship between tumor genomic subtypes, proposed by Verhaak et al., 2010, and novel features that capture the shape of abnormalities as seen in fluid attenuated inversion recovery (FLAIR) MR images. In our study, we used data from 54 patients with glioblastomas from four institutions provided by The Cancer Genome Atlas (TCGA) and The Cancer Imaging Archive (TCIA). We explore five shape features calculated by computer algorithms implemented in our laboratory that assess shape both in individual slices and in rendered three-dimensional tumor volumes. The association between each feature and molecular subtype was assessed using area under the receiver operating characteristic curve analysis. We show that the two-dimensional measures of edge complexity are significant discriminators between mesenchymal and classical tumors. These preliminary findings show promise for an imaging-based surrogate of molecular subtype and contribute to the understanding of the relationship between tumor biology and its radiology phenotype.
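As a concrete stand-in for the edge-complexity idea (the paper's exact feature definitions are not given in the abstract), one can compare the boundary length of a 2-D tumor mask to the perimeter of an equal-area disc, which equals 1.0 for a perfect disc and grows with margin irregularity:

```python
import numpy as np

def edge_complexity(mask):
    """Boundary length of a binary mask divided by the perimeter of a
    circle of equal area. An illustrative shape feature, not the authors'."""
    mask = mask.astype(bool)
    # Count exposed pixel faces as a crude boundary-length estimate.
    perim = 0
    for axis in (0, 1):
        perim += np.abs(np.diff(mask.astype(int), axis=axis)).sum()
    # Faces lying on the array border also count as boundary.
    perim += mask[0, :].sum() + mask[-1, :].sum()
    perim += mask[:, 0].sum() + mask[:, -1].sum()
    area = mask.sum()
    equiv_perim = 2.0 * np.sqrt(np.pi * area)
    return perim / equiv_perim if area else 0.0
```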
Prognosis classification in glioblastoma multiforme using multimodal MRI derived heterogeneity textural features: impact of pre-processing choices
Taman Upadhaya, Yannick Morvan, Eric Stindel, et al.
Heterogeneity image-derived features of Glioblastoma multiforme (GBM) tumors from multimodal MRI sequences may provide higher prognostic value than standard parameters used in routine clinical practice. We previously developed a framework for automatic extraction and combination of image-derived features (also called “Radiomics”) through support vector machines (SVM) for predictive model building. The results we obtained in a cohort of 40 GBM patients suggested these features could be used to identify patients with poorer outcome. However, extraction of these features is a delicate multi-step process and their values may therefore depend on the pre-processing of images. The original workflow included skull removal, bias homogeneity correction, and multimodal tumor segmentation, followed by textural feature computation, and lastly ranking, selection and combination through an SVM-based classifier. The goal of the present work was to specifically investigate the potential benefit and respective impact of adding several MRI pre-processing steps (spatial resampling for isotropic voxels, intensity quantization and normalization) before textural feature computation, on the resulting accuracy of the classifier. Eighteen patient datasets were added for the present work (58 patients in total). A classification accuracy of 83% (sensitivity 79%, specificity 85%) was obtained using the original framework. The addition of the new pre-processing steps increased it to 93% (sensitivity 93%, specificity 93%) in identifying patients with poorer survival (below the median of 12 months). Among the three considered pre-processing steps, spatial resampling was found to have the most important impact. This shows the crucial importance of investigating appropriate image pre-processing steps for methodologies based on textural feature extraction in medical imaging.
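Intensity quantization, one of the pre-processing steps evaluated above, can be sketched as a linear rebinning of intensities inside the tumor ROI before texture-matrix computation (the bin count and min-max scheme here are illustrative, not necessarily the paper's):

```python
import numpy as np

def quantize(img, mask, n_bins=64):
    """Rebin ROI intensities to integer levels 1..n_bins.

    img    : intensity array (any shape)
    mask   : boolean ROI mask of the same shape
    n_bins : number of gray levels used for texture computation
    """
    vals = img[mask].astype(float)
    lo, hi = vals.min(), vals.max()
    q = np.zeros_like(img, dtype=int)          # 0 outside the ROI
    if hi > lo:
        scaled = (img[mask] - lo) / (hi - lo) * (n_bins - 1)
        q[mask] = np.floor(scaled).astype(int) + 1
    else:
        q[mask] = 1                            # flat ROI: single level
    return q
```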
Deep Learning II
Detection of soft tissue densities from digital breast tomosynthesis: comparison of conventional and deep learning approaches
Computer-aided detection (CAD) has been used in screening mammography for many years and is likely to be utilized for digital breast tomosynthesis (DBT). Higher detection performance is desirable as it may have an impact on radiologists' decisions and clinical outcomes. Recently, algorithms based on deep convolutional architectures have been shown to achieve state-of-the-art performance in object classification and detection. Accordingly, we trained a deep convolutional neural network directly on patches sampled from two-dimensional mammography and reconstructed DBT volumes and compared its performance to a conventional CAD algorithm that is based on computation and classification of hand-engineered features. The detection performance was evaluated on an independent test set of 344 DBT reconstructions (GE SenoClaire 3D, iterative reconstruction algorithm) containing 328 suspicious and 115 malignant soft tissue densities, including masses and architectural distortions. Detection sensitivity was measured on a region-of-interest (ROI) basis at the rate of five detection marks per volume. Moving from the conventional to the deep learning approach increased ROI sensitivity from 0.832 ± 0.040 to 0.893 ± 0.033 for suspicious ROIs, and from 0.852 ± 0.065 to 0.930 ± 0.046 for malignant ROIs. These results indicate the high utility of deep feature learning in the analysis of DBT data and the high potential of the method for broader medical image analysis tasks.
Deep-learning convolution neural network for computer-aided detection of microcalcifications in digital breast tomosynthesis
A deep learning convolution neural network (DLCNN) was designed to differentiate microcalcification candidates detected during the prescreening stage as true calcifications or false positives in a computer-aided detection (CAD) system for clustered microcalcifications. The microcalcification candidates were extracted from the planar projection image generated from the digital breast tomosynthesis volume reconstructed by a multiscale bilateral filtering regularized simultaneous algebraic reconstruction technique. For training and testing of the DLCNN, true microcalcifications were manually labeled in the data sets and false positives were obtained from the candidate objects identified by the CAD system at prescreening after exclusion of the true microcalcifications. The DLCNN architecture was selected by varying the number of filters, the filter kernel sizes and a gradient computation parameter in the convolution layers, resulting in a parameter space of 216 combinations. An exhaustive grid search was used to select an optimal architecture within the parameter space studied, guided by the area under the receiver operating characteristic curve (AUC) as a figure of merit. The effects of varying the different categories of the parameter space were analyzed. The selected DLCNN was compared with our previously designed CNN architecture on the test set. The AUCs of the CNN and DLCNN were 0.89 and 0.93, respectively. The improvement was statistically significant (p < 0.05).
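An exhaustive grid search like the one used to select the DLCNN architecture (216 combinations, AUC as figure of merit) is a few lines of standard-library Python; `score_fn` stands in for training and evaluating one candidate configuration:

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Evaluate every combination in param_grid and return the best.

    param_grid : dict mapping parameter name -> list of candidate values
    score_fn   : callable taking one configuration dict, returning a score
                 (here it would train a network and report validation AUC)
    """
    keys = sorted(param_grid)
    best_cfg, best_score = None, float("-inf")
    for values in product(*(param_grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        score = score_fn(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

With three parameters taking 6 values each, for example, the loop would cover the paper's 216 combinations.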
Computer aided lung cancer diagnosis with deep learning algorithms
Deep learning is considered a popular and powerful method in pattern recognition and classification. However, there are not many deep structured applications in the medical imaging diagnosis area, because large datasets are not always available for medical images. In this study we tested the feasibility of using deep learning algorithms for lung cancer diagnosis with cases from the Lung Image Database Consortium (LIDC) database. The nodules on each computed tomography (CT) slice were segmented according to marks provided by the radiologists. After down-sampling and rotating, we acquired 174412 samples of 52 × 52 pixels each and the corresponding truth files. Three deep learning algorithms were designed and implemented: Convolutional Neural Network (CNN), Deep Belief Networks (DBNs), and Stacked Denoising Autoencoder (SDAE). To compare the performance of the deep learning algorithms with a traditional computer-aided diagnosis (CADx) system, we designed a scheme with 28 image features and a support vector machine. The accuracies of CNN, DBNs, and SDAE are 0.7976, 0.8119, and 0.7929, respectively; the accuracy of our designed traditional CADx is 0.7940, which is slightly lower than CNN and DBNs. We also noticed that the nodules mislabeled by DBNs are on average 4% larger than those mislabeled by the traditional CADx; this might result from the down-sampling process losing some of the nodules' size information.
Visualizing and enhancing a deep learning framework using patients age and gender for chest x-ray image retrieval
Yaron Anavi, Ilya Kogan, Elad Gelbart, et al.
We explore the combination of text metadata, such as patients’ age and gender, with image-based features, for X-ray chest pathology image retrieval. We focus on a feature set extracted from a pre-trained deep convolutional network shown in earlier work to achieve state-of-the-art results. Two distance measures are explored: a descriptor-based measure, which computes the distance between image descriptors, and a classification-based measure, which compares the corresponding SVM classification probabilities. We show that retrieval results improve once the age and gender information is combined with the features extracted from the last layers of the network, with best results using the classification-based scheme. Visualization of the X-ray data is presented by embedding the high-dimensional deep learning features in a 2-D space while preserving the pairwise distances using the t-SNE algorithm. The 2-D visualization gives the unique ability to find groups of X-ray images that are similar to the query image and among themselves, a characteristic we do not see in a traditional 1-D ranking.
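The classification-based measure compares the SVM class-probability vectors of the query and each candidate image. The abstract does not state the comparison function; a plain Euclidean distance between probability vectors illustrates the idea:

```python
import numpy as np

def classification_distance(p_query, p_candidate):
    """Distance between two images in SVM class-probability space.

    p_query, p_candidate : per-pathology probability vectors (same length);
    smaller distance means the two images are classified more alike.
    """
    p_query = np.asarray(p_query, dtype=float)
    p_candidate = np.asarray(p_candidate, dtype=float)
    return float(np.linalg.norm(p_query - p_candidate))
```

Ranking the database by this distance, ascending, yields the retrieval order.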
Deep convolutional neural networks for automatic coronary calcium scoring in a screening study with low-dose chest CT
The amount of calcifications in the coronary arteries is a powerful and independent predictor of cardiovascular events and is used to identify subjects at high risk who might benefit from preventive treatment. Routine quantification of coronary calcium scores can complement screening programs using low-dose chest CT, such as lung cancer screening. We present a system for automatic coronary calcium scoring based on deep convolutional neural networks (CNNs). The system uses three independently trained CNNs to estimate a bounding box around the heart. In this region of interest, connected components above 130 HU are considered candidates for coronary artery calcifications. To separate them from other high intensity lesions, classification of all extracted voxels is performed by feeding two-dimensional 50 mm × 50 mm patches from three orthogonal planes into three concurrent CNNs. The networks consist of three convolutional layers and one fully-connected layer with 256 neurons. In the experiments, 1028 non-contrast-enhanced and non-ECG-triggered low-dose chest CT scans were used. The network was trained on 797 scans. In the remaining 231 test scans, the method detected on average 194.3 mm3 of 199.8 mm3 coronary calcifications per scan (sensitivity 97.2%) with an average false-positive volume of 10.3 mm3. Subjects were assigned to one of five standard cardiovascular risk categories based on the Agatston score. Accuracy of risk category assignment was 84.4% with a linearly weighted κ of 0.89. The proposed system can perform automatic coronary artery calcium scoring to identify subjects undergoing low-dose chest CT screening who are at risk of cardiovascular events with high accuracy.
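The final step, mapping an Agatston score to one of five risk categories, might look as follows; the cut-offs used here (0, 1–10, 11–100, 101–400, >400) are common literature values, not necessarily the ones used in the paper:

```python
def agatston_risk_category(score):
    """Assign a five-level cardiovascular risk category from an Agatston
    score, using common literature cut-offs (illustrative)."""
    bounds = [(0, "I"), (10, "II"), (100, "III"), (400, "IV")]
    for upper, category in bounds:
        if score <= upper:
            return category
    return "V"  # score above 400
```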
Comparison of bladder segmentation using deep-learning convolutional neural network with and without level sets
We are developing a CAD system for detection of bladder cancer in CTU. In this study we investigated the application of a deep-learning convolutional neural network (DL-CNN) to the segmentation of the bladder, which is a challenging problem because of the strong boundary between the non-contrast and contrast-filled regions in the bladder. We trained a DL-CNN to estimate the likelihood of a pixel being inside the bladder using neighborhood information. The segmented bladder was obtained from thresholding and hole-filling of the likelihood map. We compared the segmentation performance of the DL-CNN alone and with additional cascaded 3D and 2D level sets to refine the segmentation, using 3D hand-segmented contours as the reference standard. The segmentation accuracy was evaluated by five performance measures: average volume intersection %, average % volume error, average absolute % error, average minimum distance, and average Jaccard index, for a data set of 81 training and 92 test cases. For the training set, the DL-CNN with level sets achieved performance measures of 87.2±6.1%, 6.0±9.1%, 8.7±6.1%, 3.0±1.2 mm, and 81.9±7.6%, respectively, while the DL-CNN alone obtained values of 73.6±8.5%, 23.0±8.5%, 23.0±8.5%, 5.1±1.5 mm, and 71.5±9.2%, respectively. For the test set, the DL-CNN with level sets achieved performance measures of 81.9±12.1%, 10.2±16.2%, 14.0±13.0%, 3.6±2.0 mm, and 76.2±11.8%, respectively, while the DL-CNN alone obtained 68.7±12.0%, 27.2±13.7%, 27.4±13.6%, 5.7±2.2 mm, and 66.2±11.8%, respectively. The DL-CNN alone is effective in segmenting bladders but may not follow the details of the bladder wall. The combination of DL-CNN with level sets provides highly accurate bladder segmentation.
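Three of the five performance measures above are straightforward overlap statistics between a segmentation and the reference; a sketch assuming binary masks of equal shape:

```python
import numpy as np

def overlap_measures(seg, ref):
    """Volume intersection %, % volume error, and Jaccard index (%)
    between a segmentation mask and a reference mask."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    intersection_pct = 100.0 * inter / ref.sum()
    volume_error_pct = 100.0 * (ref.sum() - seg.sum()) / ref.sum()
    jaccard_pct = 100.0 * inter / union
    return intersection_pct, volume_error_pct, jaccard_pct
```

The remaining two measures (absolute % error, minimum distance) average the same quantities with sign removed and a surface-distance computation, respectively.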
Lung and Chest II
Pulmonary nodule detection using a cascaded SVM classifier
Martin Bergtholdt, Rafael Wiemker, Tobias Klinder
Automatic detection of lung nodules from chest CT has been researched intensively over the last decades, also resulting in several commercial products. However, solutions are adopted only slowly into daily clinical routine as many current CAD systems still potentially miss true nodules while at the same time generating too many false positives (FP). While many earlier approaches had to rely on rather few cases for development, larger databases are now becoming available and can be used for algorithmic development. In this paper, we address the problem of lung nodule detection via a cascaded SVM classifier. The idea is to sequentially perform two classification tasks in order to select, from an extremely large pool of potential candidates, the few most likely ones. As the initial pool is allowed to contain thousands of candidates, very loose criteria can be applied during this pre-selection. In this way, the chances that a true nodule is falsely rejected as a candidate are reduced significantly. The final algorithm is trained and tested on the full LIDC/IDRI database. Comparison is done against two previously published CAD systems. Overall, the algorithm achieved a sensitivity of 0.859 at 2.5 FP/volume, where the other two achieved sensitivity values of 0.321 and 0.625, respectively. On low-dose data sets, only a slight increase in the number of FP/volume was observed, while the sensitivity was not affected.
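The cascade idea, a deliberately loose pre-selection followed by a stricter final classifier, can be sketched generically; `stage1` and `stage2` stand in for the two trained SVM scoring functions, and the thresholds are illustrative:

```python
def cascaded_detect(candidates, stage1, stage2, t1=0.01, t2=0.5):
    """Two-stage cascade: stage1 applies very loose criteria so true
    nodules are almost never rejected; stage2 makes the final, stricter
    call. Both stages are assumed to return a nodule-likeness score."""
    pre_selected = [c for c in candidates if stage1(c) >= t1]  # loose filter
    return [c for c in pre_selected if stage2(c) >= t2]        # final decision
```

Because `t1` is tiny, the cost of the expensive second stage is paid only on the surviving fraction of the candidate pool.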
Spatial context learning approach to automatic segmentation of pleural effusion in chest computed tomography images
Awais Mansoor, Rafael Casas Jr., Marius G. Linguraru
Pleural effusion is an abnormal collection of fluid within the pleural cavity. Excessive accumulation of pleural fluid is an important bio-marker for various illnesses, including congestive heart failure, pneumonia, metastatic cancer, and pulmonary embolism. Quantification of pleural effusion can be indicative of the progression of disease as well as the effectiveness of any treatment being administered. Quantification, however, is challenging due to unpredictable amounts and density of fluid, the complex topology of the pleural cavity, and the similarity in texture and intensity of pleural fluid to the surrounding tissues in computed tomography (CT) scans. Herein, we present an automated method for the segmentation of pleural effusion in CT scans based on spatial context information. The method consists of two stages: first, a probabilistic pleural effusion map is created using multi-atlas segmentation. The probabilistic map assigns a priori probabilities to the presence of pleural fluid at every location in the CT scan. Second, a statistical pattern classification approach is designed to annotate pleural regions using local descriptors based on a priori probabilities, geometrical, and spatial features. Thirty-seven CT scans from a diverse patient population containing confirmed cases of minimal to severe amounts of pleural effusion were used to validate the proposed segmentation method. An average Dice coefficient of 0.82685 and a Hausdorff distance of 16.2155 mm were obtained.
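The two validation measures, Dice coefficient and (symmetric) Hausdorff distance, can be computed brute-force for small binary masks; distances here are in voxel units, and both masks are assumed non-empty:

```python
import numpy as np

def dice_and_hausdorff(seg, ref):
    """Dice coefficient and symmetric Hausdorff distance between two
    non-empty binary masks. Brute-force all-pairs distances; fine for
    small masks, too slow for full CT volumes."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    dice = 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())
    a = np.argwhere(seg).astype(float)   # foreground coordinates
    b = np.argwhere(ref).astype(float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    # Largest nearest-neighbor distance, taken in both directions.
    hausdorff = max(d.min(axis=1).max(), d.min(axis=0).max())
    return dice, hausdorff
```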
Lymph node detection in IASLC-defined zones on PET/CT images
Yihua Song, Jayaram K. Udupa, Dewey Odhner, et al.
Lymph node detection is challenging due to the low contrast between lymph nodes and surrounding soft tissues and the variation in nodal size and shape. In this paper, we propose several novel ideas which are combined into a system operating on positron emission tomography/computed tomography (PET/CT) images to detect abnormal thoracic nodes. First, our previous Automatic Anatomy Recognition (AAR) approach is modified so that lymph node zones, predominantly following International Association for the Study of Lung Cancer (IASLC) specifications, are modeled as objects arranged in a hierarchy along with key anatomic anchor objects. This fuzzy anatomy model built from diagnostic CT images is then deployed on PET/CT images for automatically recognizing the zones. A novel globular filter (g-filter) to detect blob-like objects over a specified range of sizes is designed to detect the most likely locations and sizes of diseased nodes. Abnormal nodes within each automatically localized zone are subsequently detected via combined use of different items of information at various scales: lymph node zone model poses found at recognition, indicating the geographic layout of node clusters at the global level; the g-filter response, which homes in on and carefully selects node-like globular objects at the node level; and CT and PET gray values within only the most plausible nodal regions for node presence at the voxel level. The models are built from 25 diagnostic CT scans and refined for an object hierarchy based on a separate set of 20 diagnostic CT scans. Node detection is tested on an additional set of 20 PET/CT scans. Our preliminary results indicate node detection sensitivity and specificity at around 90% and 85%, respectively.
Intrapulmonary vascular remodeling: MSCT-based evaluation in COPD and alpha-1 antitrypsin deficient subjects
Adeline Crosnier, Catalin Fetita, Gabriel Thabut, et al.
While COPD is generally known as a small airway disease, recent investigations suggest that vascular remodeling could play a key role in disease progression. This paper develops a specific investigation framework to evaluate the remodeling of the intrapulmonary vascular network and its correlation with other image or clinical parameters (emphysema score or FEV1) in patients with smoking- or genetic- (alpha-1 antitrypsin deficiency - AATD) related COPD. The developed approach evaluates the vessel caliber distribution per lung or lung region (upper, lower, 10%- and 20%-periphery) in relation to the severity of the disease and computes a remodeling marker given by the area under the caliber distribution curve for radii less than 1.6 mm, AUC16. It exploits a medial axis analysis combined with local caliber information computed in the segmented vascular network, with values normalized with respect to the lung volume (for which a robust segmentation is developed). The first results obtained on a 34-patient database (13 COPD, 13 AATD and 8 controls) showed significant vascular remodeling for COPD and AATD versus controls, with a negative correlation with the emphysema degree for COPD, but not for AATD. Significant vascular remodeling at 20% lung periphery was found for both the severe COPD and AATD patients, but not for the moderate groups. Also, the vascular remodeling in AATD correlated with neither FEV1 nor DLCO, which might suggest independent mechanisms for bronchial and vascular remodeling in the lung.
Automatic heart localization and radiographic index computation in chest x-rays
Sema Candemir, Stefan Jaeger, Wilson Lin, et al.
This study proposes a novel automated method for cardiomegaly detection in chest X-rays (CXRs). The algorithm has two main stages: i) heart and lung region localization on CXRs, and ii) radiographic index extraction from the heart and lung boundaries. We employed a lung detection algorithm and extended it to automatically compute the heart boundaries. The typical models of heart and lung regions are learned using a public CXR dataset with boundary markings. The method estimates the location of these regions in candidate ('patient') CXR images by registering the models to the patient CXR. For the radiographic index computation, we implemented the traditional and recently published indexes in the literature. The method is tested on a database with 250 abnormal and 250 normal CXRs. The radiographic indexes are combined through a classifier, and the method successfully classifies the patients with cardiomegaly with 0.77 accuracy, 0.77 sensitivity and 0.76 specificity.
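The classical radiographic index in this family is the cardiothoracic ratio (CTR): maximal transverse heart width divided by maximal internal thoracic width, with values above roughly 0.5 conventionally read as cardiomegaly. A minimal sketch on hypothetical binary boundary masks follows (the paper's full index set is broader; the masks here are toy stand-ins).

```python
import numpy as np

def max_transverse_width(mask):
    """Maximal left-to-right extent (in pixels) over all rows of a binary mask."""
    widths = [cols[-1] - cols[0] + 1
              for row in mask
              for cols in [np.flatnonzero(row)]
              if cols.size]
    return max(widths) if widths else 0

def cardiothoracic_ratio(heart_mask, lungs_mask):
    """CTR = maximal heart width / maximal thoracic (lung field) width."""
    return max_transverse_width(heart_mask) / max_transverse_width(lungs_mask)

# Hypothetical toy masks: heart 10 px wide, thorax 20 px wide -> CTR = 0.5
heart = np.zeros((5, 30), dtype=bool)
heart[2, 10:20] = True
lungs = np.zeros((5, 30), dtype=bool)
lungs[1:4, 5:25] = True
ctr = cardiothoracic_ratio(heart, lungs)
```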
Head and Neck
Comprehensive eye evaluation algorithm
C. Agurto, S. Nemeth, G. Zamora, et al.
In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated in two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.
Vessel discoloration detection in malarial retinopathy
C. Agurto, S. Nemeth, S. Barriga, et al.
Cerebral malaria (CM) is a life-threatening clinical syndrome associated with malarial infection. It affects approximately 200 million people, mostly sub-Saharan African children under five years of age. Malarial retinopathy (MR) is a condition in which lesions such as whitening and vessel discoloration that are highly specific to CM appear in the retina. Other unrelated diseases can present with symptoms similar to CM, therefore the exact nature of the clinical symptoms must be ascertained in order to avoid misdiagnosis, which can lead to inappropriate treatment and, potentially, death. In this paper we outline the first system to detect the presence of discolored vessels associated with MR as a means to improve the CM diagnosis. We modified and improved our previous vessel segmentation algorithm by incorporating the ‘a’ channel of the CIELab color space and noise reduction. We then divided the segmented vasculature into vessel segments and extracted features at the wall and in the centerline of the segment. Finally, we used a regression classifier to sort the segments into discolored and not-discolored vessel classes. By counting the abnormal vessel segments in each image, we were able to divide the analyzed images into two groups: normal and presence of vessel discoloration due to MR. We achieved an accuracy of 85% with sensitivity of 94% and specificity of 67%. In clinical practice, this algorithm would be combined with other MR retinal pathology detection algorithms. Therefore, a high specificity can be achieved. By choosing a different operating point in the ROC curve, our system achieved sensitivity of 67% with specificity of 100%.
Computer-aided detection of human cone photoreceptor inner segments using multi-scale circular voting
Jianfei Liu, Alfredo Dubra, Johnny Tam
Cone photoreceptors are highly specialized cells responsible for the origin of vision in the human eye. Their inner segments can be noninvasively visualized using adaptive optics scanning light ophthalmoscopes (AOSLOs) with nonconfocal split detection capabilities. Monitoring the number of cones can lead to more precise metrics for real-time diagnosis and assessment of disease progression. Cell identification in split detection AOSLO images is hindered by cell regions with heterogeneous intensity arising from shadowing effects and low contrast boundaries due to overlying blood vessels. Here, we present a multi-scale circular voting approach to overcome these challenges through the novel combination of: 1) iterative circular voting to identify candidate cells based on their circular structures, 2) a multi-scale strategy to identify the optimal circular voting response, and 3) clustering to improve robustness while removing false positives. We acquired images from three healthy subjects at various locations on the retina and manually labeled cell locations to create ground-truth for evaluating the detection accuracy. The images span a large range of cell densities. The overall recall, precision, and F1 score were 91±4%, 84±10%, and 87±7% (Mean±SD). Results showed that our method for the identification of cone photoreceptor inner segments performs well even with low contrast cell boundaries and vessel obscuration. These encouraging results demonstrate that the proposed approach can robustly and accurately identify cells in split detection AOSLO images.
Sweet-spot training for early esophageal cancer detection
Fons van der Sommen, Svitlana Zinger, Erik J. Schoon, et al.
Over the past decade, the imaging tools for endoscopists have improved drastically. This has enabled physicians to visually inspect the intestinal tissue for early signs of malignant lesions. Besides this, recent studies show the feasibility of supportive image analysis for endoscopists, but the analysis problem is typically approached as a segmentation task where binary ground truth is employed. In this study, we show that the detection of early cancerous tissue in the gastrointestinal tract cannot be approached as a binary segmentation problem and that it is crucial and clinically relevant to involve multiple experts for annotating early lesions. By employing the so-called sweet spot as a metric for training purposes, a much better detection performance can be achieved. Furthermore, a multi-expert-based ground truth, i.e. a golden standard, enables an improved validation of the resulting delineations. For this purpose, besides the sweet spot we also propose another novel metric, the Jaccard Golden Standard (JIGS), that can handle multiple ground-truth annotations. Our experiments involving these new metrics and based on the golden standard show that the performance of a detection algorithm for early neoplastic lesions in Barrett's esophagus can be increased significantly, demonstrating a 10 percentage point increase in the resulting F1 detection score.
A single-layer network unsupervised feature learning method for white matter hyperintensity segmentation
Koen Vijverberg, Mohsen Ghafoorian, Inge W. M. van Uden, et al.
Cerebral small vessel disease (SVD) is a disorder frequently found among elderly people and is associated with deterioration in cognitive performance, parkinsonism, and motor and mood impairments. White matter hyperintensities (WMH), as well as lacunes, microbleeds and subcortical brain atrophy, are part of the spectrum of image findings related to SVD. Accurate segmentation of WMHs is important for prognosis and diagnosis of multiple neurological disorders such as MS and SVD. Almost all of the published (semi-)automated WMH detection models employ multiple complex hand-crafted features, which require in-depth domain knowledge. In this paper we propose to apply a single-layer network unsupervised feature learning (USFL) method to avoid hand-crafted features and instead automatically learn a more efficient set of features. Experimental results show that a computer-aided detection system with a USFL system outperforms a hand-crafted approach. Moreover, since the two feature sets have complementary properties, a hybrid system that makes use of both hand-crafted and unsupervised learned features shows a significant performance boost compared to each system separately, getting close to the performance of an independent human expert.
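Single-layer unsupervised feature learning of this kind is commonly implemented Coates-style: sample random image patches, normalize them, cluster with k-means, and use a soft distance encoding against the centroids as the learned features. The sketch below follows that generic recipe; the paper's exact pipeline, patch sizes, and dictionary size are not given here, so every parameter is an illustrative assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def learn_patch_dictionary(image, patch_size=5, n_patches=400, n_features=8, seed=0):
    """Sample random patches, normalize each, and cluster with k-means;
    the centroids form the learned feature dictionary."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    rows = rng.integers(0, h - patch_size, n_patches)
    cols = rng.integers(0, w - patch_size, n_patches)
    patches = np.stack([image[r:r + patch_size, c:c + patch_size].ravel()
                        for r, c in zip(rows, cols)])
    patches -= patches.mean(axis=1, keepdims=True)          # per-patch normalization
    patches /= patches.std(axis=1, keepdims=True) + 1e-8
    km = KMeans(n_clusters=n_features, n_init=4, random_state=seed).fit(patches)
    return km.cluster_centers_

def encode_patch(patch, centroids):
    """'Triangle' activation: non-negative margin of each centroid distance
    below the mean distance (Coates-style soft assignment)."""
    d = np.linalg.norm(centroids - patch.ravel(), axis=1)
    return np.maximum(0.0, d.mean() - d)

rng = np.random.default_rng(1)
image = rng.normal(size=(64, 64))            # stand-in for a FLAIR slice
dictionary = learn_patch_dictionary(image)
features = encode_patch(image[:5, :5], dictionary)
```

The resulting per-patch feature vectors would then feed an ordinary supervised classifier, replacing hand-crafted descriptors.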
Early esophageal cancer detection using RF classifiers
Markus H. A. Janse, Fons van der Sommen, Svitlana Zinger, et al.
Esophageal cancer is one of the fastest rising forms of cancer in the Western world. Using High-Definition (HD) endoscopy, gastroenterology experts can identify esophageal cancer at an early stage. Recent research shows that early cancer can be found using a state-of-the-art computer-aided detection (CADe) system based on analyzing static HD endoscopic images. Our research aims at extending this system by applying Random Forest (RF) classification, which introduces a confidence measure for detected cancer regions. To visualize this data, we propose a novel automated annotation system, employing the unique characteristics of the previous confidence measure. This approach allows reliable modeling of multi-expert knowledge and provides essential data for real-time video processing, to enable future use of the system in a clinical setting. The performance of the CADe system is evaluated on a 39-patient dataset, containing 100 images annotated by 5 expert gastroenterologists. The proposed system reaches a precision of 75% and recall of 90%, thereby improving the state-of-the-art results by 11 and 6 percentage points, respectively.
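The confidence measure of a Random Forest is typically just the fraction of trees voting for the positive class; with scikit-learn this is exposed as `predict_proba`. A minimal sketch on synthetic stand-in features follows (the feature matrix and labels below are fabricated placeholders, not the study's endoscopic descriptors).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature matrix: rows are image regions, columns are
# texture/color descriptors; labels mark expert-annotated cancerous regions.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic stand-in labels

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Per-region probability of the cancer class = fraction of trees voting for it.
# This is the kind of score that can drive a color-coded annotation overlay.
confidence = rf.predict_proba(X)[:, 1]
flagged = confidence > 0.5
```

In a real CADe setting the continuous confidence, rather than the hard threshold, is what makes the visualization informative.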
Radiomics II
Applying a radiomics approach to predict prognosis of lung cancer patients
Nastaran Emaminejad, Shiju Yan, Yunzhi Wang, et al.
Radiomics is an emerging technology to decode tumor phenotype based on quantitative analysis of image features computed from radiographic images. In this study, we applied the radiomics concept to investigate the association among the CT image features of lung tumors, which are either quantitatively computed or subjectively rated by radiologists, and two genomic biomarkers, namely protein expression of the excision repair cross-complementing 1 (ERCC1) gene and of a regulatory subunit of ribonucleotide reductase (RRM1), in predicting disease-free survival (DFS) of lung cancer patients after surgery. An image dataset involving 94 patients was used. Among them, 20 had cancer recurrence within 3 years, while 74 patients remained DFS. After tumor segmentation, 35 image features were computed from CT images. Using the Weka data mining software package, we selected 10 non-redundant image features. Applying a SMOTE algorithm to generate synthetic data to balance case numbers in the two DFS (“yes” and “no”) groups and a leave-one-case-out training/testing method, we optimized and compared a number of machine learning classifiers using (1) quantitative image (QI) features, (2) subjectively rated (SR) features, and (3) genomic biomarkers (GB). Data analyses showed relatively low correlation among the QI, SR and GB prediction results (Pearson correlation coefficients < 0.5, including between the ERCC1 and RRM1 biomarkers). Using the area under the ROC curve as an assessment index, the QI, SR and GB based classifiers yielded AUC = 0.89±0.04, 0.73±0.06 and 0.76±0.07, respectively, which showed that all three types of features had predictive power (AUC > 0.5). Among them, the QI features yielded the highest performance.
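SMOTE balances classes by interpolating between a minority sample and one of its nearest minority-class neighbors. The study used an existing SMOTE implementation; the numpy re-implementation below is only an illustrative sketch of the idea, with all names and parameters assumed.

```python
import numpy as np

def smote_like(X_minority, n_synthetic, k=5, seed=0):
    """Each synthetic sample lies on the segment between a random minority
    sample and one of its k nearest minority-class neighbors."""
    rng = np.random.default_rng(seed)
    n = len(X_minority)
    # pairwise Euclidean distances within the minority class
    dist = np.linalg.norm(X_minority[:, None, :] - X_minority[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    k = min(k, n - 1)
    neighbors = np.argsort(dist, axis=1)[:, :k]
    out = np.empty((n_synthetic, X_minority.shape[1]))
    for s in range(n_synthetic):
        i = rng.integers(n)
        j = neighbors[i, rng.integers(k)]
        lam = rng.random()                       # random interpolation factor
        out[s] = X_minority[i] + lam * (X_minority[j] - X_minority[i])
    return out

# Toy demo: grow a 20-sample minority class ("recurrence") toward 74 ("DFS")
rng = np.random.default_rng(2)
X_min = rng.normal(size=(20, 3))
X_syn = smote_like(X_min, n_synthetic=54)
```

Because each synthetic point is a convex combination of two minority samples, oversampling never leaves the minority class's bounding region.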
Radiomics versus physician assessment for the early prediction of local cancer recurrence after stereotactic radiotherapy for lung cancer
Sarah A. Mattonen, Carol Johnson, David A. Palma, et al.
Stereotactic ablative radiotherapy (SABR) has recently become a standard treatment option for patients with early-stage lung cancer, which achieves local control rates similar to surgery. Local recurrence following SABR typically presents after one year post-treatment. However, benign radiological changes mimicking local recurrence can appear on CT imaging following SABR, complicating the assessment of response. We hypothesize that subtle changes on early post- SABR CT images are important in predicting the eventual incidence of local recurrence and would be extremely valuable to support timely salvage interventions. The objective of this study was to extract radiomic image features on post-SABR follow-up images for 45 patients (15 with local recurrence and 30 without) to aid in the early prediction of local recurrence. Three blinded thoracic radiation oncologists were also asked to score follow-up images as benign injury or local recurrence. A radiomic signature consisting of five image features demonstrated a classification error of 24%, false positive rate (FPR) of 24%, false negative rate (FNR) of 23%, and area under the receiver operating characteristic curve (AUC) of 0.85 at 2–5 months post-SABR. At the same time point, three physicians assessed the majority of images as benign injury for overall errors of 34–37%, FPRs of 0–4%, and FNRs of 100%. These results suggest that radiomics can detect early changes associated with local recurrence which are not typically considered by physicians. We aim to develop a decision support system which could potentially allow for early salvage therapy of patients with local recurrence following SABR.
Automatic staging of bladder cancer on CT urography
Correct staging of bladder cancer is crucial for the decision of neoadjuvant chemotherapy treatment and minimizing the risk of under- or over-treatment. Subjectivity and variability of clinicians in utilizing available diagnostic information may lead to inaccuracy in staging bladder cancer. An objective decision support system that merges the information in a predictive model based on statistical outcomes of previous cases and machine learning may assist clinicians in making more accurate and consistent staging assessments. In this study, we developed a preliminary method to stage bladder cancer. With IRB approval, 42 bladder cancer cases with CTU scans were collected from patient files. The cases were classified into two classes based on pathological stage T2, which is the clinical decision threshold for neoadjuvant chemotherapy treatment (i.e., administered for stage ≥T2). There were 21 cancers below stage T2 and 21 cancers at stage T2 or above. All 42 lesions were automatically segmented using our auto-initialized cascaded level sets (AI-CALS) method. Morphological features were extracted, which were then selected and merged by a linear discriminant analysis (LDA) classifier. A leave-one-case-out resampling scheme was used to train and test the classifier using the 42 lesions. The classification accuracy was quantified using the area under the ROC curve (Az). The average training Az was 0.97 and the test Az was 0.85. The classifier consistently selected the lesion volume, a gray level feature and a contrast feature. This predictive model shows promise for assisting in assessing the bladder cancer stage.
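The leave-one-case-out LDA evaluation described above can be sketched with scikit-learn in a few lines. The features and labels below are synthetic stand-ins (three fabricated morphological descriptors for 42 cases), not the study's data; only the evaluation scheme itself mirrors the abstract.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import roc_auc_score

# Hypothetical stand-in features (e.g. lesion volume plus gray-level and
# contrast descriptors); labels encode stage >= T2 (1) vs below T2 (0).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (21, 3)), rng.normal(1.5, 1.0, (21, 3))])
y = np.r_[np.zeros(21), np.ones(21)]

# Leave-one-case-out: each case is scored by a classifier trained on the rest.
scores = np.empty(len(y))
for train, test in LeaveOneOut().split(X):
    lda = LinearDiscriminantAnalysis().fit(X[train], y[train])
    scores[test] = lda.decision_function(X[test])

test_az = roc_auc_score(y, scores)   # pooled test AUC (Az)
```

Pooling the held-out decision scores before computing the ROC, as done here, is the usual way to obtain a single test Az from a leave-one-out design.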
Signal intensity analysis of ecological defined habitat in soft tissue sarcomas to predict metastasis development
Hamidreza Farhidzadeh, Baishali Chaudhury, Jacob G. Scott, et al.
Magnetic Resonance Imaging (MRI) is the standard of care in the clinic for diagnosis and follow up of Soft Tissue Sarcomas (STS) which presents an opportunity to explore the heterogeneity inherent in these rare tumors. Tumor heterogeneity is a challenging problem to quantify and has been shown to exist at many scales, from genomic to radiomic, existing both within an individual tumor, between tumors from the same primary in the same patient and across different patients. In this paper, we propose a method which focuses on spatially distinct sub-regions or habitats in the diagnostic MRI of patients with STS by using pixel signal intensity. Habitat characteristics likely represent areas of differing underlying biology within the tumor, and delineation of these differences could provide clinically relevant information to aid in selecting a therapeutic regimen (chemotherapy or radiation). To quantify tumor heterogeneity, first we assay intra-tumoral segmentations based on signal intensity and then build a spatial mapping scheme from various MRI modalities. Finally, we predict clinical outcomes, using in this paper the appearance of distant metastasis - the most clinically meaningful endpoint. After tumor segmentation into high and low signal intensities, a set of quantitative imaging features based on signal intensity is proposed to represent variation in habitat characteristics. This set of features is utilized to predict metastasis in a cohort of STS patients. We show that this framework, using only pre-therapy MRI, predicts the development of metastasis in STS patients with 72.41% accuracy, providing a starting point for a number of clinical hypotheses.
Classification of progression free survival with nasopharyngeal carcinoma tumors
Hamidreza Farhidzadeh, Joo Y. Kim, Jacob G. Scott, et al.
Nasopharyngeal carcinoma (NPC) is an abnormal growth of tissue which arises from the back of the nose. At the time of diagnosis, detection of tumor features with prognostic significance, including patient demographics, imaging characteristics and molecular characteristics, can enable the treating clinician to select a treatment that is optimized for the individual patient. At present, the analysis of tumor imaging features is limited to size criteria and macroscopic textural semantic descriptors, but computerized quantification of intratumoral heterogeneity and its temporal evolution may provide another metric for predicting prognosis. We propose medical imaging feature analysis methods and radiomics machine learning methods to predict failure of treatment. NPC tumors on contrast-enhanced T1 (T1Gd) sequences of 25 NPC patients' diagnostic magnetic resonance images (MRI) were manually contoured. Otsu segmentation was applied to segment the tumor into highly enhancing vs. weakly enhancing signal intensity subregions. Within these subregions, texture features were extracted to numerically quantify the intraregional heterogeneity. Patients were divided into two prognostic groups: a progression-free survival group (those without locoregional recurrence or distant metastases), and a disease progression group (those with locoregional recurrence or distant metastases). We used Support Vector Machines (SVM) to perform classification (prediction of prognosis). The features from the highly enhancing subregion classify prognosis with 80% predictive accuracy with AUC = 0.60, while the features captured from the weakly enhancing subregion classify prognosis with 76% accuracy with AUC = 0.76.
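Otsu segmentation, used here to split tumor voxels into weakly vs. highly enhancing subregions, selects the intensity threshold that maximizes the between-class variance of the resulting two groups. A minimal numpy version on a synthetic bimodal intensity sample (illustrative only, not the study's MRI data):

```python
import numpy as np

def otsu_threshold(values, n_bins=256):
    """Otsu's method: return the intensity threshold maximizing the
    between-class variance of the two resulting groups."""
    hist, edges = np.histogram(values, bins=n_bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)              # class-0 weight up to each bin
    m = np.cumsum(p * centers)     # cumulative mean
    mT = m[-1]                     # global mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros(n_bins)
    between[valid] = (mT * w0[valid] - m[valid])**2 / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]

# Synthetic bimodal intensities: weakly vs. highly enhancing voxel populations
rng = np.random.default_rng(3)
intensities = np.concatenate([rng.normal(10, 1, 1000),
                              rng.normal(30, 1, 1000)])
threshold = otsu_threshold(intensities)
```

Voxels above the threshold would form the highly enhancing subregion from which the texture features are then extracted.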
Colon and Prostate
Decision forests for learning prostate cancer probability maps from multiparametric MRI
Henry R. Ehrenberg, Daniel Cornfeld, Cayce B. Nawaf, et al.
Objectives: Advances in multiparametric magnetic resonance imaging (mpMRI) and ultrasound/MRI fusion imaging offer a powerful alternative to the typical undirected approach to diagnosing prostate cancer. However, these methods require the time and expertise needed to interpret mpMRI image scenes. In this paper, a machine learning framework for automatically detecting and localizing cancerous lesions within the prostate is developed and evaluated. Methods: Two studies were performed to gather MRI and pathology data. The 12 patients in the first study underwent an MRI session to obtain structural, diffusion-weighted, and dynamic contrast enhanced image volumes of the prostate, and regions suspected of being cancerous from the MRI data were manually contoured by radiologists. Whole-mount slices of the prostate were obtained for the patients in the second study, in addition to structural and diffusion-weighted MRI data, for pathology verification. A 3-D feature set for voxel-wise appearance description combining intensity data, textural operators, and zonal approximations was generated. Voxels in a test set were classified as normal or cancer using a decision forest-based model initialized using Gaussian discriminant analysis. A leave-one-patient-out cross-validation scheme was used to assess the predictions against the expert manual segmentations confirmed as cancer by biopsy. Results: We achieved an area under the average receiver operating characteristic curve of 0.923 for the first study, and visual assessment of the probability maps showed 21 out of 22 tumors were identified while a high level of specificity was maintained. In addition to evaluating the model against related approaches, the effects of the individual MRI parameter types were explored, and pathological verification using whole-mount slices from the second study was performed.
Conclusions: The results of this paper show that the combination of mpMRI and machine learning is a powerful tool for quantitatively diagnosing prostate cancer.
Fusion of multi-parametric MRI and temporal ultrasound for characterization of prostate cancer: in vivo feasibility study
Farhad Imani, Sahar Ghavidel, Purang Abolmaesumi, et al.
Recently, multi-parametric Magnetic Resonance Imaging (mp-MRI) has been used to improve the sensitivity of detecting high-risk prostate cancer (PCa). Prior to biopsy, primary and secondary cancer lesions are identified on mp-MRI. The lesions are then targeted using TRUS guidance. In this paper, for the first time, we present a fused mp-MRI-temporal-ultrasound framework for characterization of PCa, in vivo. Cancer classification results obtained using temporal ultrasound are fused with those achieved using consolidated mp-MRI maps determined by multiple observers. We verify the outcome of our study using histopathology following deformable registration of ultrasound and histology images. Fusion of temporal ultrasound and mp-MRI for characterization of the PCa results in an area under the receiver operating characteristic curve (AUC) of 0.86 for cancerous regions with Gleason scores (GSs)≥3+3, and AUC of 0.89 for those with GSs≥3+4.
An integrated classifier for computer-aided diagnosis of colorectal polyps based on random forest and location index strategies
Yifan Hu, Hao Han, Wei Zhu, et al.
Feature classification plays an important role in differentiation or computer-aided diagnosis (CADx) of suspicious lesions. As a widely used ensemble learning algorithm for classification, random forest (RF) has a distinguished performance for CADx. Our recent study has shown that the location index (LI), which is derived from the well-known kNN (k nearest neighbor) and wkNN (weighted k nearest neighbor) classifiers [1], also plays a distinguished role in classification for CADx. Therefore, in this paper, based on the property that the LI achieves very high accuracy, we design an algorithm to integrate the LI into RF to improve the AUC (area under the receiver operating characteristic curve). Experiments were performed using a database of 153 lesions (polyps), including 116 neoplastic lesions and 37 hyperplastic lesions, with comparison to the existing classifiers of RF and wkNN, respectively. A noticeable gain by the proposed integrated classifier was quantified by the AUC measure.
Deep learning for electronic cleansing in dual-energy CT colonography
Rie Tachibana, Janne J. Näppi, Toru Hironakaa, et al.
The purpose of this study was to develop a novel deep-learning-based electronic cleansing (EC) method for dual-energy CT colonography (DE-CTC). In this method, an ensemble of deep convolutional neural networks (DCNNs) is used to classify each voxel of DE-CTC image volumes into one of five multi-material (MUMA) classes: luminal air, soft tissue, tagged fecal material, a partial-volume boundary between air and tagging, or a partial-volume boundary between soft tissue and tagging. Each DCNN acts as a voxel classifier. At each voxel, a region-of-interest (ROI) centered at the voxel is extracted. After mapping the pixels of the ROI to the input layer of a DCNN, a series of convolutional and max-pooling layers is used to extract features with increasing levels of abstraction. The output layer produces the probabilities at which the input voxel belongs to each of the five MUMA classes. To develop an ensemble of DCNNs, we trained multiple DCNNs based on multi-spectral image volumes derived from the DE-CTC images, including material decomposition images and virtual monochromatic images. The outputs of these DCNNs were then combined by means of a meta-classifier for precise classification of the voxels. Finally, the electronically cleansed CTC images were generated by removing regions that were classified as other than soft tissue, followed by colon surface reconstruction. Preliminary results based on 184,320 images sampled from 30 clinical CTC cases showed a higher accuracy in labeling these classes than that of our previous machine-learning methods, indicating that deep-learning-based multi-spectral EC can accurately remove residual fecal materials from CTC images without generating major EC artifacts.
Colitis detection on abdominal CT scans by rich feature hierarchies
Jiamin Liu, Nathan Lay, Zhuoshi Wei, et al.
Colitis is inflammation of the colon due to neutropenia, inflammatory bowel disease (such as Crohn disease), infection, or immune compromise. Colitis is often associated with thickening of the colon wall: the wall of a colon afflicted with colitis is much thicker than normal. For example, the mean wall thickness in Crohn disease is 11-13 mm, compared to the wall of the normal colon, which should measure less than 3 mm. Colitis can be debilitating or life threatening, and early detection is essential to initiate proper treatment. In this work, we apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals to detect potential colitis on CT scans. Our method first generates around 3000 category-independent region proposals for each slice of the input CT scan using selective search. Then, a fixed-length feature vector is extracted from each region proposal using a CNN. Finally, each region proposal is classified and assigned a confidence score with linear SVMs. We applied the detection method to 260 images from 26 CT scans of patients with colitis for evaluation. The detection system can achieve 0.85 sensitivity at 1 false positive per image.
Abdominal
Automatic detection of ureter lesions in CT urography
Trevor Exell, Lubomir Hadjiiski, Heang-Ping Chan, et al.
We are developing a CAD system for automated detection of ureter abnormalities in multi-detector row CT urography (CTU). Our CAD system consists of two stages. The first stage automatically tracks the ureter via the previously proposed COmbined Model-guided Path-finding Analysis and Segmentation System (COMPASS). The second stage consists of lesion enhancement filtering, adaptive thresholding, edge extraction, and noise removal. With IRB approval, 36 cases were collected from patient files, including 15 cases (17 ureters with 32 lesions) for training, and 10 abnormal cases (11 ureters with 17 lesions) and 11 normal cases (22 ureters) for testing. All lesions were identified by experienced radiologists on the CTU images and COMPASS was able to track the ureters in 100% of the cases. The average lesion size was 5.1 mm (range: 2.1 mm – 21.9 mm) for the training set and 6.1 mm (range: 2.0 mm – 18.9 mm) for the test set. The average conspicuity was 4.1 (range: 2 to 5) and 3.9 (range: 1 to 5) on a scale of 1 to 5 (5 very subtle), for the training and test sets, respectively. The system achieved 90.6% sensitivity at 2.41 (41/17) FPs/ureter for the training set and 70.6% sensitivity at 2 (44/22) FPs/normal ureter for the test set. These initial results demonstrate the feasibility of the CAD system to track the ureter and detect ureter cancer of medium conspicuity and relatively small sizes.
Segmentation of hepatic artery in multi-phase liver CT using directional dilation and connectivity analysis
Segmentation of hepatic arteries in multi-phase computed tomography (CT) images is indispensable in liver surgery planning. During image acquisition, the hepatic artery is enhanced by the injection of contrast agent. The enhanced signals are often not stably acquired due to non-optimal contrast timing. Other vascular structures, such as the hepatic vein or portal vein, can be enhanced as well in the arterial phase, which can adversely affect the segmentation results. Furthermore, the arteries might suffer from partial volume effects due to their small diameter. To overcome these difficulties, we propose a framework for robust hepatic artery segmentation requiring a minimal amount of user interaction. First, an efficient multi-scale Hessian-based vesselness filter is applied on the arterial phase CT image, aiming to enhance vessel structures within a specified diameter range. Second, the vesselness response is processed using a Bayesian classifier to identify the most probable vessel structures. Considering that the vesselness filter typically does not perform ideally at vessel bifurcations or on segments corrupted by noise, two vessel-reconnection techniques are proposed. The first technique uses a directional morphological operator to dilate vessel segments along their centerline directions, attempting to fill the gap between broken vascular segments. The second technique analyzes the connectivity of vessel segments and reconnects disconnected segments and branches. Finally, a 3D vessel tree is reconstructed. The algorithm has been evaluated using 18 CT images of the liver. To quantitatively measure the similarities between segmented and reference vessel trees, the skeleton coverage and mean symmetric distance are calculated to quantify the agreement between reference and segmented vessel skeletons, resulting in an average of 0.55 ± 0.27 and 12.7 ± 7.9 mm (mean ± standard deviation), respectively.
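Multi-scale Hessian-based vesselness filters of the kind referenced above are typically of the Frangi type: eigenvalues of the scale-normalized Hessian distinguish tubular from blob-like and plate-like structures. The 2D sketch below illustrates the principle (the paper operates in 3D, and the parameter values `beta` and `c` here are illustrative assumptions, not the authors' settings).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness_2d(image, sigmas, beta=0.5, c=15.0):
    """Frangi-style vesselness for bright tubular structures in 2D,
    taking the maximum response over a set of scales."""
    image = image.astype(float)
    best = np.zeros(image.shape)
    for s in sigmas:
        # scale-normalized second derivatives (Hessian entries)
        Hxx = gaussian_filter(image, s, order=(0, 2)) * s**2
        Hyy = gaussian_filter(image, s, order=(2, 0)) * s**2
        Hxy = gaussian_filter(image, s, order=(1, 1)) * s**2
        # eigenvalues of the 2x2 Hessian, sorted so |l1| <= |l2|
        tmp = np.sqrt((Hxx - Hyy)**2 + 4.0 * Hxy**2)
        l1 = 0.5 * (Hxx + Hyy + tmp)
        l2 = 0.5 * (Hxx + Hyy - tmp)
        swap = np.abs(l1) > np.abs(l2)
        l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
        rb = np.abs(l1) / (np.abs(l2) + 1e-10)   # blobness: low along a line
        st = np.sqrt(l1**2 + l2**2)              # structure strength
        v = np.exp(-rb**2 / (2 * beta**2)) * (1.0 - np.exp(-st**2 / (2 * c**2)))
        v[l2 > 0] = 0.0                          # bright ridges require l2 < 0
        best = np.maximum(best, v)
    return best

# Toy demo: a bright synthetic vessel on a dark background
image = np.zeros((40, 40))
image[20, 5:35] = 100.0
v = vesselness_2d(image, sigmas=[1.0, 2.0])
```

Restricting the sigma range is what limits the enhancement to a specified vessel diameter range, as in the framework above.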
Semi-automatic assessment of pediatric hydronephrosis severity in 3D ultrasound
Juan J. Cerrolaza, Hansel Otero M.D., Peter Yao, et al.
Hydronephrosis is the most common abnormal finding in pediatric urology. Thanks to its non-ionizing nature, ultrasound (US) imaging is the preferred diagnostic modality for evaluation of the kidney and the urinary tract. However, due to the lack of correlation of US with renal function, further invasive and/or ionizing studies may be required (e.g., diuretic renograms). This paper presents a computer-aided diagnosis (CAD) tool for the accurate and objective assessment of pediatric hydronephrosis based on morphological analysis of the kidney in 3DUS scans. The integration of specific segmentation tools in the system allows delineation of the relevant renal structures from 3DUS scans with minimal user interaction, and the automatic computation of 90 anatomical features. Using the washout half time (T1/2) as an indicator of renal obstruction, an optimal subset of predictive features is selected to differentiate, with maximum sensitivity, the severe cases that require further attention (e.g., in the form of diuretic renograms) from the non-critical ones. The performance of this new 3DUS-based CAD system is studied for two clinically relevant T1/2 thresholds, 20 and 30 min. Using a dataset of 20 hydronephrotic cases, pilot experiments show that the system outperforms previous 2D implementations by successfully identifying all the critical cases (100% sensitivity) and detecting 100% and 67% of the non-critical ones for T1/2 thresholds of 20 and 30 min, respectively.
Machine-learning based comparison of CT-perfusion maps and dual energy CT for pancreatic tumor detection
Michael Goetz, Stephan Skornitzke, Christian Weber, et al.
Perfusion CT is well suited for the diagnosis of pancreatic tumors but tends to be associated with high radiation exposure. Dual-energy CT (DECT) might be an alternative to perfusion CT, offering correlating contrasts while being acquired at lower radiation doses. While previous studies compared the intensities of dual-energy iodine maps and CT perfusion maps, no study has assessed the combined discriminative power of all the information that can be generated from an acquisition with both functional imaging methods. We therefore propose the use of a machine learning algorithm to assess the amount of information that becomes available by combining multiple images. For this, we train a classifier on each imaging method, using a new approach that allows us to train from only small regions of interest (ROIs). This makes our study comparable to other ROI-based analyses and still allows comparing the ability of both classifiers to discriminate between healthy and tumorous tissue. We were able to train classifiers that yield Dice scores over 80% with both imaging methods. This indicates that dual-energy iodine maps might be used instead of perfusion CT for the diagnosis of pancreatic tumors, although the detection rate is lower. We also present tumor risk maps that visualize possible tumorous areas in an intuitive way and can serve as an additional information source during diagnosis.
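The Dice score used to quantify agreement between predicted and reference tumor regions has a standard definition, sketched below (generic code, not from the study):

```python
import numpy as np

def dice_score(pred, ref):
    """Dice overlap between two binary masks; 1.0 means perfect agreement."""
    pred = np.asarray(pred, bool)
    ref = np.asarray(ref, bool)
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    # Convention: two empty masks agree perfectly.
    return 2.0 * inter / denom if denom else 1.0
```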
Automatic anatomy recognition on CT images with pathology
Body-wide anatomy recognition on CT images with pathology is crucial for quantifying body-wide disease burden. This is a challenging problem, however, because different diseases produce diverse abnormalities in object shape and intensity patterns. We previously developed an automatic anatomy recognition (AAR) system [1] whose applicability was demonstrated on near-normal diagnostic CT images of 35 organs in different body regions. The aim of this paper is to investigate strategies for adapting the previous AAR system to diagnostic CT images of patients with various pathologies, as a first step toward automated body-wide disease quantification. The AAR approach consists of three main steps: model building, object recognition, and object delineation. In this paper, within the broader AAR framework, we describe a new object recognition strategy for handling abnormal images. In the model-building stage, an optimal threshold interval is learned from near-normal training images for each object. This threshold is then optimally tuned to the pathological manifestation of the object in the test image. Recognition is performed following a hierarchical representation of the objects. Experimental results for the abdominal body region, based on 50 near-normal images used for model building and 20 abnormal images used for object recognition, show that the new strategy achieves object localization accuracy within 2 voxels for the liver and spleen and within 3 voxels for the kidney.
Differentiating bladder carcinoma from bladder wall using 3D textural features: an initial study
Differentiating bladder tumors from wall tissue is of critical importance for the detection of invasion depth and for cancer staging. The textural features embedded in bladder images have demonstrated their potential for carcinoma detection and classification. The purpose of this study was to investigate the feasibility of differentiating bladder carcinoma from bladder wall using three-dimensional (3D) textural features extracted from MR bladder images. The widely used 2D Tamura features were first fully extended to 3D, and then different types of 3D textural features, including features derived from gray-level co-occurrence matrices (GLCM) and gray level-gradient co-occurrence matrices (GLGCM), as well as the 3D Tamura features, were extracted from 23 volumes of interest (VOIs) of bladder tumors and 23 VOIs of patients' bladder walls. Statistical results show that 30 of the 47 features differ significantly between cancerous and wall tissues. Using the features with significant differences between these two tissue types, classification with a support vector machine (SVM) classifier demonstrates that the combination of the three types of selected 3D features outperforms any single feature type alone. These observations demonstrate that significant textural differences exist between carcinomatous tissue and bladder wall, and that 3D textural analysis may be an effective way to stage bladder cancer noninvasively.
Posters: Breast
icon_mobile_dropdown
Reference state estimation of breast computed tomography for registration with digital mammography
Understanding the deformation of the breast is fundamental to lesion localization in multi-view and multimodality imaging. Finite element methods (FEMs) are commonly used to model the breast deformation process. In FEM, ideally a reference state of the breast with no loading conditions is available as a starting point, to which appropriate imaging-modality-based loading conditions for a specific application can then be applied. We propose an iterative method to estimate the reference-state configuration from a gravity-loaded, uncompressed breast computed tomography (BCT) volume, using the corresponding digital mammograms (DMs) as a guide. The reference-state breast model is compressed between two plates, similar to mammographic imaging, and a DM-like image is generated by forward ray-tracing. The iterative method applies pressure in the anterior-to-posterior direction of the breast and uses information from the DM geometry and measurements to converge on a reference state of the breast. The process of reference-state estimation and breast compression was studied using BCT cases spanning small to large breast sizes and the scattered, heterogeneous, and extremely dense breast-density categories. The breasts were assumed to be composed of non-linear materials based on Mooney-Rivlin models, and the effects of the material properties on the estimation process were analyzed. The Fréchet distance between the edges of the DM-like image and the DM image was used as a performance measure.
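Once the two edge curves are sampled as point sequences, the Fréchet-distance performance measure can be approximated with the standard discrete Fréchet distance, computed by dynamic programming; this is a generic implementation, not the authors' code:

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Frechet distance between two polylines (arrays of points)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    n, m = len(P), len(Q)
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # pairwise distances
    ca = np.zeros((n, m))
    ca[0, 0] = d[0, 0]
    for i in range(1, n):             # first column: forced coupling
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):             # first row: forced coupling
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           d[i, j])
    return ca[-1, -1]
```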
Improving the performance of lesion-based computer-aided detection schemes of breast masses using a case-based adaptive cueing method
Current commercial CAD schemes have high false-positive (FP) detection rates, and their positive lesion detections also correlate highly with those of radiologists. We therefore recently investigated a new approach to improve the efficacy of applying CAD to assist radiologists in reading and interpreting screening mammograms: a global-feature-based CAD approach/scheme that cues a warning on cases at high risk of being positive. In this study, we investigate the possibility of fusing global, case-based scores with local, lesion-based CAD scores using an adaptive cueing method. We hypothesize that the global features (extracted from the whole breast region) differ from, and can provide information supplementary to, the locally extracted features (computed from the segmented lesion regions only). On a large and diverse full-field digital mammography (FFDM) testing dataset of 785 cases (347 negative and 438 cancer cases with masses only), we ran our lesion-based and case-based CAD schemes "as is" on the whole dataset. To assess the supplementary information provided by the global features, we used an adaptive cueing method to adjust the original CAD-generated detection score (Sorg) of a detected suspicious mass region based on the computed case-based score (Scase) of the case associated with that region. Using the adaptive cueing method, better sensitivity was obtained at lower FP rates (≤ 1 FP per image): increases in sensitivity (on the FROC curves) of up to 6.7% and 8.2% were obtained for the ROI-based and case-based results, respectively.
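The abstract does not specify the exact adjustment rule; as a purely hypothetical illustration, a convex combination of the lesion score Sorg and the case score Scase captures the idea of nudging local detections by the global case risk:

```python
def cued_score(s_org, s_case, alpha=0.3):
    """Hypothetical fusion: shift the lesion score toward the case score.

    s_org  -- lesion-based CAD score in [0, 1]
    s_case -- global, case-based risk score in [0, 1]
    alpha  -- cueing strength (0 keeps the original score unchanged)
    """
    s = (1.0 - alpha) * s_org + alpha * s_case
    return min(max(s, 0.0), 1.0)  # clamp back into [0, 1]
```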
Quantitative breast MRI radiomics for cancer risk assessment and the monitoring of high-risk populations
Breast density is routinely assessed qualitatively in screening mammography. However, it is challenging to quantitatively determine a 3D density from a 2D image such as a mammogram. Furthermore, dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is used increasingly in the screening of high-risk populations. The purpose of our study is to segment parenchyma and quantitatively determine volumetric breast density on pre-contrast (i.e., non-contrast) axial DCE-MRI images using a semi-automated quantitative approach. In this study, we retrospectively examined 3D DCE-MRI images acquired for breast cancer screening of a high-risk population. We analyzed 66 cases with ages between 28 and 76 years (mean 48.8, standard deviation 10.8). DCE-MRIs were obtained on a Philips 3.0 T scanner. Our semi-automated DCE-MRI algorithm includes: (a) segmentation of breast tissue from non-breast tissue using fuzzy c-means clustering, (b) separation of dense and fatty tissues using Otsu's method, and (c) calculation of volumetric density as the ratio of dense voxels to total breast voxels. We examined the relationship between pre-contrast DCE-MRI density and the clinical BI-RADS density obtained from radiology reports, and found a statistically significant correlation [Spearman ρ of 0.66 (p < 0.0001)]. Our method may be useful within precision medicine for monitoring high-risk populations.
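Steps (b) and (c), Otsu thresholding of the breast voxels followed by the dense-to-total voxel ratio, can be sketched as follows (a generic implementation of Otsu's method, not the study's code):

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Otsu's method: the threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)                      # class-0 probability
    mu = np.cumsum(p * np.arange(nbins))      # class-0 cumulative mean
    mu_t = mu[-1]                             # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    k = np.nanargmax(sigma_b)                 # best split bin
    return 0.5 * (edges[k] + edges[k + 1])

def percent_density(breast_voxels):
    """Volumetric density: dense voxels / all breast voxels."""
    t = otsu_threshold(breast_voxels)
    return np.mean(breast_voxels > t)
```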
Benign-malignant mass classification in mammogram using edge weighted local texture features
Rinku Rabidas, Abhishek Midya, Anup Sadhu, et al.
This paper introduces the novel Discriminative Robust Local Binary Pattern (DRLBP) and Discriminative Robust Local Ternary Pattern (DRLTP) for the classification of mammographic masses as benign or malignant. Masses are among the most common, yet challenging, signs of breast cancer in mammography, and their diagnosis is a difficult task. Since DRLBP and DRLTP overcome the drawbacks of the Local Binary Pattern (LBP) and Local Ternary Pattern (LTP) by discriminating a bright object against a dark background and vice versa, and additionally preserve edge information along with texture information, several edge-preserving texture features are extracted from DRLBP and DRLTP in this study. Finally, a Fisher linear discriminant analysis is applied to discriminating features, selected by a stepwise logistic regression method, for the classification of benign and malignant masses. The performance of the DRLBP and DRLTP features is evaluated using ten-fold cross-validation with 58 masses from the mini-MIAS database; the best result, observed with DRLBP, is an area under the receiver operating characteristic curve of 0.982.
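DRLBP and DRLTP build on the basic LBP code; for background, a plain 8-neighbour LBP over interior pixels can be computed as below (the discriminative and robust extensions introduced in the paper are not reproduced here):

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour Local Binary Pattern codes for interior pixels."""
    img = np.asarray(img, float)
    c = img[1:-1, 1:-1]                      # center pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        # Neighbour plane shifted by (dy, dx) relative to the centers.
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code
```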
Parameter optimization of parenchymal texture analysis for prediction of false-positive recalls from screening mammography
Shonket Ray, Brad M. Keller, Jinbo Chen, et al.
This work details a methodology to obtain optimal parameter values for a locally-adaptive texture analysis algorithm that extracts mammographic texture features representative of breast parenchymal complexity for predicting false-positive (FP) recalls from breast cancer screening with digital mammography. The algorithm has two components: (1) adaptive selection of localized regions of interest (ROIs) and (2) Haralick texture feature extraction via Gray-Level Co-Occurrence Matrices (GLCM). The following parameters were systematically varied: the mammographic views used, the upper limit of the ROI window size used for adaptive ROI selection, the GLCM distance offsets, and the gray levels (binning) used for feature extraction. For each parameter set, logistic regression with stepwise feature selection was performed on a clinical screening cohort of 474 non-recalled women and 68 FP-recalled women; FP recall prediction was evaluated using the area under the curve (AUC) of the receiver operating characteristic (ROC), and associations between the extracted features and FP recall were assessed via odds ratios (OR). A default instance was set with the mediolateral oblique (MLO) view, an upper ROI size limit of 143.36 mm (2048 pixels), a GLCM distance offset range of 0.07 to 0.84 mm (1 to 12 pixels), and 16 GLCM gray levels. The highest ROC performance of AUC = 0.77 [95% confidence interval: 0.71-0.83] was obtained at three specific instances: the default instance, an upper ROI window of 17.92 mm (256 pixels), and 128 gray levels. The texture feature of sum average was chosen as a statistically significant (p<0.05) predictor and was associated with higher odds of FP recall in 12 of the 14 total instances.
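The GLCM construction for one distance offset and the sum-average feature highlighted by the analysis can be sketched as follows; the input is assumed to be already quantized to the chosen number of gray levels (generic code, not the study's implementation):

```python
import numpy as np

def glcm(img, dy, dx, levels=16):
    """Gray-level co-occurrence matrix for one (dy, dx) offset, normalized."""
    img = np.asarray(img)
    H, W = img.shape
    P = np.zeros((levels, levels), float)
    # Count co-occurrences of gray levels at the given offset.
    for y in range(max(0, -dy), min(H, H - dy)):
        for x in range(max(0, -dx), min(W, W - dx)):
            P[img[y, x], img[y + dy, x + dx]] += 1.0
    return P / P.sum()

def sum_average(P):
    """Haralick sum average: mean of i + j weighted by co-occurrence mass."""
    levels = P.shape[0]
    i, j = np.meshgrid(np.arange(levels), np.arange(levels), indexing='ij')
    return float(np.sum((i + j) * P))
```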
Automatic quantification of mammary glands on non-contrast x-ray CT by using a novel segmentation approach
Xiangrong Zhou, Takuya Kano, Yunliang Cai, et al.
This paper describes a new automatic segmentation method for quantifying the volume and density of mammary gland regions on non-contrast CT images. The proposed method uses two processing steps, (1) breast region localization and (2) breast region decomposition, to accomplish robust mammary gland segmentation on CT images. The first step detects the two minimum bounding boxes of the left and right breast regions, respectively, based on a machine-learning approach that adapts to the large variance of breast appearance across age levels. The second step divides the whole breast region on each side into mammary gland, fat tissue, and other regions by using a spectral clustering technique that focuses on the intra-region similarities of each patient and aims to overcome the image variance caused by different scan parameters. The whole approach is designed as a simple structure with a minimal number of parameters to gain superior robustness and computational efficiency in a real clinical setting. We applied this approach to a dataset of 300 CT scans, sampled in equal numbers from women aged 30 to 50 years. Compared to human annotations, the proposed approach successfully measures the volume and quantifies the distribution of the CT numbers of mammary gland regions. The experimental results demonstrate that the proposed approach achieves results consistent with manual annotations. Through our proposed framework, an efficient and effective low-cost clinical screening scheme may easily be implemented to predict breast cancer risk, especially on already-acquired scans.
Computer-aided classification of mammographic masses using the deep learning technology: a preliminary study
Yuchen Qiu, Shiju Yan, Maxine Tan, et al.
Although mammography is the only clinically accepted imaging modality for population-based breast cancer screening, its efficacy remains quite controversial. One of the major challenges is how to help radiologists more accurately classify benign and malignant lesions. The purpose of this study is to investigate a new mammographic mass classification scheme based on a deep learning method. In this study, we used an image dataset of 560 regions of interest (ROIs) extracted from digital mammograms, comprising 280 malignant and 280 benign mass ROIs. An eight-layer deep learning network was applied, which employs three pairs of convolution and max-pooling layers for automatic feature extraction and a multilayer perceptron (MLP) classifier for feature categorization. To improve the robustness of the selected features, each convolution layer is connected to a max-pooling layer. The 1st, 2nd, and 3rd convolution layers use 20, 10, and 5 feature maps, respectively. The convolution network is followed by an MLP classifier, which generates a classification score to predict the likelihood of an ROI depicting a malignant mass. Of the 560 ROIs, 420 were used as a training dataset and the remaining 140 as a validation dataset. The results show that the new deep learning based classifier yielded an area under the receiver operating characteristic curve (AUC) of 0.810±0.036. This study demonstrates the potential of a deep learning based classifier to distinguish malignant from benign breast masses without segmenting the lesions or extracting pre-defined image features.
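The AUC reported for the classification scores is equivalent to the Mann-Whitney statistic, which can be computed directly from scores and labels (generic code, not from the study):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U formulation."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    # Count positive-negative pairs ranked correctly; ties count half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```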
An initial investigation on developing a new method to predict short-term breast cancer risk based on deep learning technology
Yuchen Qiu, Yunzhi Wang, Shiju Yan, et al.
In order to establish a new personalized breast cancer screening paradigm, it is critically important to accurately predict the short-term risk of a woman having image-detectable cancer after a negative mammographic screening. In this study, we developed and tested a novel short-term risk assessment model based on a deep learning method. For the experiment, 270 “prior” negative screening cases were assembled. In the next sequential (“current”) screening mammography, 135 cases were positive and 135 remained negative. These cases were randomly divided into a training set of 200 cases and a testing set of 70 cases. A deep learning based computer-aided diagnosis (CAD) scheme was then developed for the risk assessment, consisting of two modules: an adaptive feature identification module and a risk prediction module. The adaptive feature identification module is composed of three pairs of convolution and max-pooling layers, containing 20, 10, and 5 feature maps, respectively. The risk prediction module is implemented by a multilayer perceptron (MLP) classifier, which produces a risk score to predict the likelihood of the woman developing short-term mammography-detectable cancer. The results show that the new CAD-based risk model yielded a positive predictive value of 69.2% and a negative predictive value of 74.2%, with a total prediction accuracy of 71.4%. This study demonstrates that applying this new deep learning technology has significant potential to produce a short-term risk prediction scheme with improved performance in detecting early abnormal signs in negative mammograms.
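The reported predictive values follow directly from confusion-matrix counts; the counts used in the test below are illustrative, not the study's:

```python
def predictive_values(tp, fp, tn, fn):
    """PPV, NPV, and accuracy from confusion-matrix counts."""
    ppv = tp / (tp + fp)                     # positive predictive value
    npv = tn / (tn + fn)                     # negative predictive value
    acc = (tp + tn) / (tp + fp + tn + fn)    # overall accuracy
    return ppv, npv, acc
```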
Computer-aided global breast MR image feature analysis for prediction of tumor response to chemotherapy: performance assessment
Faranak Aghaei, Maxine Tan, Alan B. Hollingsworth, et al.
Dynamic contrast-enhanced breast magnetic resonance imaging (DCE-MRI) has been used increasingly in breast cancer diagnosis and in assessment of cancer treatment efficacy. In this study, we applied a computer-aided detection (CAD) scheme to automatically segment breast regions depicted on MR images and used kinetic image features computed from the global breast MR images acquired before neoadjuvant chemotherapy to build a new quantitative model to predict the response of breast cancer patients to the chemotherapy. To assess the performance and robustness of this new prediction model, an image dataset of breast MR images acquired from 151 cancer patients before undergoing neoadjuvant chemotherapy was retrospectively assembled. Among them, 63 patients had a “complete response” (CR) to chemotherapy, in which the enhanced contrast level inside the tumor volume (pre-treatment) was reduced to the level of the normal enhanced background parenchymal tissue (post-treatment), while 88 patients had a “partial response” (PR), in which high contrast enhancement remained in the tumor regions after treatment. We analyzed the correlations among the 22 global kinetic image features and then selected a set of 4 optimal features. Applying an artificial neural network trained with the fusion of these 4 kinetic image features, the prediction model yielded an area under the ROC curve (AUC) of 0.83±0.04. This study demonstrates that, by avoiding tumor segmentation, which is often difficult and unreliable, the fusion of kinetic image features computed from global breast MR images can also generate a useful clinical marker for predicting the efficacy of chemotherapy.
First and second-order features for detection of masses in digital breast tomosynthesis
We are developing novel methods for the prescreening of mass candidates in a computer-aided detection (CAD) system for digital breast tomosynthesis (DBT). With IRB approval and written informed consent, 186 views from 94 breasts were imaged using a GE GEN2 prototype DBT system. The data set was randomly separated into training and test sets by case. Gradient field convergence features based on first-order features were used to select the initial set of mass candidates. Eigenvalues based on second-order features from the Hessian matrix were extracted at the mass candidate locations in the DBT volume. The features from the first- and second-order analyses form the feature vector that was input to a linear discriminant analysis (LDA) classifier to generate a candidate-likelihood score. The likelihood scores were ranked and the top N candidates were passed on to the subsequent detection steps. The improvement from using the combination of first- and second-order features over first-order features alone was analyzed using a rank-sensitivity plot. 3D objects were obtained with two-stage 3D clustering followed by active contour segmentation. Morphological, gradient field, and texture features were extracted, and feature selection was performed using stepwise feature selection. A combination of LDA and rule-based classifiers was used for FP reduction. The LDA classifier output a mass-likelihood score for each object, which was used as the decision variable for FROC analysis. At breast-based sensitivities of 70% and 80%, prescreening using first-order and second-order features resulted in 0.7 and 1.0 FPs/DBT, respectively.
An adaptive online learning framework for practical breast cancer diagnosis
Tianshu Chu, Jie Wang, Jiayu Chen
This paper presents an adaptive online learning (OL) framework for supporting clinical breast cancer (BC) diagnosis. Unlike traditional data mining, which trains a particular model on a fixed set of medical data, our framework offers robust OL models that can be updated adaptively according to new data sequences and newly discovered features. As a result, our framework can naturally learn to perform BC diagnosis using experts’ opinions on sequential patient cases with cumulative clinical measurements. The framework integrates both supervised learning (SL) models for BC risk assessment and reinforcement learning (RL) models for deciding which clinical measurements to take. In other words, online SL and RL interact with one another and, under a doctor’s supervision, advance the patient’s diagnosis. Furthermore, our framework can quickly update relevant model parameters based on current diagnosis information during the training process. Additionally, it can build flexible fitted models by integrating different model structures and plugging in the corresponding parameters during the prediction (or decision-making) process. Even when the feature space is extended, it can initialize the corresponding parameters and extend the existing model structure without loss of the accumulated knowledge. We evaluate the OL framework on real datasets from BCSC and WBC, and demonstrate that our SL models achieve accurate BC risk assessment from sequential data and incremental features. We also verify that the well-trained RL models provide promising measurement suggestions.
Posters: Colon and Prostate
icon_mobile_dropdown
Computer-aided detection of polyps in optical colonoscopy images
Saad Nadeem, Arie Kaufman
We present a computer-aided detection algorithm for polyps in optical colonoscopy images. Polyps are the precursors to colon cancer. In the US alone, 14 million optical colonoscopies are performed every year, mostly to screen for polyps. Optical colonoscopy has been shown to have an approximately 25% polyp miss rate due to the convoluted folds and bends present in the colon. In this work, we present an automatic algorithm to detect these polyps in optical colonoscopy images. We use a machine learning algorithm to infer a depth map for a given optical colonoscopy image and then use a detailed pre-built polyp profile to detect and delineate the boundaries of polyps in the image. We achieved a best recall of 84.0% and a best specificity of 83.4%.
Performance evaluation of multi-material electronic cleansing for ultra-low-dose dual-energy CT colonography
Rie Tachibana, Naja Kohlhase, Janne J. Näppi, et al.
Accurate electronic cleansing (EC) for CT colonography (CTC) enables visualization of the entire colonic surface without residual materials. In this study, we evaluated the accuracy of a novel multi-material electronic cleansing (MUMA-EC) scheme for non-cathartic ultra-low-dose dual-energy CTC (DE-CTC). The MUMA-EC performs a water-iodine material decomposition of the DE-CTC images and calculates virtual monochromatic images at multiple energies, after which a random forest classifier is used to label the images into regions of lumen air, soft tissue, fecal tagging, and two types of partial-volume boundaries based on image-based features. After the labeling, materials other than soft tissue are subtracted from the CTC images. For pilot evaluation, 384 volumes of interest (VOIs), representing the sources of subtraction artifacts observed in current EC schemes, were sampled from 32 ultra-low-dose DE-CTC scans. The voxels in the VOIs were labeled manually to serve as a reference standard. The metric for EC accuracy was the mean overlap ratio between the labels of the reference standard and the labels generated by the MUMA-EC, a dual-energy EC (DE-EC), and a single-energy EC (SE-EC) scheme. Statistically significant differences were observed between the performance of the MUMA/DE-EC and SE-EC methods (p<0.001). Visual assessment confirmed that the MUMA-EC generated fewer subtraction artifacts than the DE-EC and SE-EC schemes. Our MUMA-EC scheme yielded superior performance over the conventional SE-EC scheme in identifying and minimizing subtraction artifacts on non-cathartic ultra-low-dose DE-CTC images.
Detection of benign prostatic hyperplasia nodules in T2W MR images using fuzzy decision forest
Nathan Lay, Sabrina Freeman, Baris Turkbey, et al.
Prostate cancer is the second leading cause of cancer-related death in men. MRI has proven useful for detecting prostate cancer, and CAD may further improve detection. One source of false positives in prostate computer-aided diagnosis (CAD) is the presence of benign prostatic hyperplasia (BPH) nodules. These nodules have a distinct appearance with a pseudo-capsule on T2-weighted MR images but can also resemble cancerous lesions in other sequences, such as ADC or high-B-value images. Describing their appearance with hand-crafted heuristics (features) that also exclude the appearance of cancerous lesions is challenging. This work develops a method based on fuzzy decision forests to automatically learn discriminative features for BPH nodule detection in T2-weighted images, with the aim of improving prostate CAD systems.
Colonoscopic polyp detection using convolutional neural networks
Sun Young Park, Dusty Sargent
Computer-aided diagnosis (CAD) systems for medical image analysis rely on accurate and efficient feature extraction methods. Regardless of which type of classifier is used, the results will be limited if the input features are not diagnostically relevant and do not properly discriminate between the different classes of images. Thus, a large amount of research has been dedicated to creating feature sets that capture the salient features that physicians are able to observe in the images. Successful feature extraction reduces the semantic gap between the physician’s interpretation and the computer representation of images, and helps to reduce the variability in diagnosis between physicians. Due to the complexity of many medical image classification tasks, feature extraction for each problem often requires domain-specific knowledge and a carefully constructed feature set for the specific type of images being classified. In this paper, we describe a method for automatic diagnostic feature extraction from colonoscopy images that may have general applicability and requires a lower level of domain-specific knowledge. The work in this paper expands on our previous CAD algorithm for detecting polyps in colonoscopy video. In that work, we applied an eigenimage model to extract features representing polyps, normal tissue, diverticula, etc. from colonoscopy videos taken from various viewing angles and under various imaging conditions. Classification was performed using a conditional random field (CRF) model that accounted for the spatial and temporal adjacency relationships present in colonoscopy video. In this paper, we replace the eigenimage feature descriptor with features extracted from a convolutional neural network (CNN) trained to recognize the same image types in colonoscopy video. The CNN-derived features show greater invariance to viewing angles and image quality factors when compared to the eigenimage model. The CNN features are used as input to the CRF classifier as before. We report testing results for the new algorithm using both human and mouse colonoscopy data.
Normalization of T2W-MRI prostate images using Rician a priori
Guillaume Lemaître, Mojdeh Rastgoo, Joan Massich, et al.
Prostate cancer is reported to be the second most frequently diagnosed cancer in men worldwide. In practice, diagnosis can be affected by multiple factors, which reduces the chance of detecting potential lesions. In recent decades, new imaging techniques, mainly based on MRI, have been developed in conjunction with Computer-Aided Diagnosis (CAD) systems to help radiologists with such diagnoses. CAD systems are usually designed as a sequential process consisting of four stages: pre-processing, segmentation, registration, and classification. As a pre-processing step, image normalization is critical for designing a robust classifier and overcoming inter-patient intensity variations. However, little attention has been dedicated to the normalization of T2W Magnetic Resonance Imaging (MRI) prostate images. In this paper, we propose two methods to normalize T2W-MRI prostate images: (i) one based on a Rician a priori and (ii) one based on a Square-Root Slope Function (SRSF) representation, which does not make any assumption regarding the Probability Density Function (PDF) of the data. A comparison with state-of-the-art methods is also provided. The normalization of the data is assessed by comparing the alignment of the patient PDFs both qualitatively and quantitatively. In both evaluations, the normalization using the Rician a priori outperforms the other state-of-the-art methods.
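One way to realize a Rician-prior normalization, sketched here under the assumption that intensities follow scipy's Rice distribution with zero location (this is not necessarily the authors' formulation), is to fit the distribution by maximum likelihood and standardize with its fitted moments:

```python
import numpy as np
from scipy import stats

def rician_normalize(intensities):
    """Standardize intensities using the moments of a fitted Rician PDF.

    Sketch only: fits scipy's Rice distribution (location fixed at 0) by
    maximum likelihood, then standardizes with the fitted mean and std.
    """
    x = np.asarray(intensities, float)
    b, loc, scale = stats.rice.fit(x, floc=0)              # MLE fit
    mean, var = stats.rice.stats(b, loc=loc, scale=scale, moments='mv')
    return (x - mean) / np.sqrt(var)
```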
Deep transfer learning of virtual endoluminal views for the detection of polyps in CT colonography
Janne J. Näppi, Toru Hironaka, Daniele Regge, et al.
Proper training of deep convolutional neural networks (DCNNs) requires large annotated image databases that are currently not available in CT colonography (CTC). In this study, we employed a deep transfer learning (DETALE) scheme to circumvent this problem in automated polyp detection for CTC. In our method, a DCNN that had been pre-trained with millions of non-medical images was adapted to identify polyps using virtual endoluminal images of the polyp candidates prompted by a computer-aided detection (CADe) system. For evaluation, 154 CTC cases with and without fecal tagging were divided randomly into a development set and an external validation set including 107 polyps ≥6 mm in size. A CADe system was trained to detect polyp candidates using the development set, and the virtual endoluminal images of the polyp candidates were labeled manually into true-positive and several false-positive (FP) categories for transfer learning of the DCNN. Next, the trained CADe system was used to detect polyp candidates from the external validation set, and the DCNN reviewed their images to determine the final detections. The detection sensitivity of the standalone CADe system was 93% at 6.4 FPs per patient on average, whereas the DCNN reduced the number of FPs to 2.0 per patient without reducing detection sensitivity. Most of the remaining FP detections were caused by untagged stool. In fecal-tagged CTC cases, the detection sensitivity was 94% at only 0.78 FPs per patient on average. These preliminary results indicate that DETALE can yield substantial improvement in the accuracy of automated polyp detection in CTC.
Posters: Head and Neck
Detection and measurement of retinal blood vessel pulsatile motion
Di Xiao, Shaun Frost, Janardhan Vignarajan, et al.
Retinal photography is a non-invasive and well-accepted clinical tool for the diagnosis of ocular diseases. Qualitative and quantitative assessment of retinal images is crucial in clinical applications related to ocular disease. Pulsatile properties caused by the cardiac rhythm, such as spontaneous venous pulsation (SVP) and the pulsatile motion of small arterioles, can be visualized by dynamic retinal imaging techniques and are of clinical significance. In this paper, we aim at the detection and measurement of vessel pulsatile motion. We propose a novel approach for pulsatile motion measurement of retinal blood vessels that applies retinal image registration, blood vessel detection, and blood vessel motion detection and measurement to infrared retinal image sequences. The performance of the proposed methods was evaluated on 8 image sequences comprising 240 images. Preliminary results demonstrate the good performance of the method for blood vessel pulsatile motion observation and measurement.
Automatic determination of white matter hyperintensity properties in relation to the development of Alzheimer's disease
Sandra van der Velden, Christoph Moenninghoff, Isabel Wanke, et al.
Alzheimer's disease (AD) is the most common form of dementia seen in the elderly. No cure for AD exists at this moment. In the search for an effective medicine, research is directed toward predicting the conversion of mild cognitive impairment (MCI) to AD. White matter hyperintensities (WMHs) have been shown to contain information regarding the development of AD, although non-conclusive results are found in the literature. These studies often use qualitative measures to describe WMHs, which is time-consuming and prone to variability. To investigate the relation between WMHs and the development of AD, algorithms to automatically determine quantitative properties of WMHs, in terms of volume and spatial distribution, are developed and compared between normal controls and MCI subjects. MCI subjects have a significantly higher total volume of WMHs than normal controls. This difference persists when lesions are classified according to their distance to the ventricular wall. Spatial distribution is also described by defining different brain regions based on a common coordinate system. This reveals that MCI subjects have a larger WMH volume in the upper part of the brain compared to normal controls. In four subjects, the change of WMH properties over time is studied in detail. Although such a small dataset cannot be used to draw definitive conclusions, the data suggest that progression of WMHs in subjects with a low lesion load is caused by an increase in the number of lesions and by the progression of juxtacortical lesions. In subjects with a larger lesion load, progression is caused by expansion of pre-existing lesions.
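Classifying lesions by their distance to the ventricular wall can be sketched with a Euclidean distance transform; the 10 mm periventricular cut-off is illustrative, as conventions vary in the literature:

```python
import numpy as np
from scipy import ndimage

def classify_wmh_by_distance(lesion_mask, ventricle_mask,
                             spacing=(1.0, 1.0), periventricular_mm=10.0):
    """Split a binary WMH mask into periventricular vs. deep lesion voxels by
    the Euclidean distance of each voxel to the ventricular wall. The 10 mm
    cut-off is illustrative only; conventions vary in the literature."""
    # distance from every voxel to the nearest ventricle voxel
    dist = ndimage.distance_transform_edt(~ventricle_mask, sampling=spacing)
    peri = lesion_mask & (dist <= periventricular_mm)
    deep = lesion_mask & (dist > periventricular_mm)
    return peri, deep
```

The same distance map can then be binned to report lesion volume per distance band, as done in the comparison between MCI subjects and controls.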
Improvement of retinal blood vessel detection by spur removal and Gaussian matched filtering compensation
Di Xiao, Janardhan Vignarajan, Dong An, et al.
Retinal photography is a non-invasive and well-accepted clinical tool for the diagnosis of ocular diseases, and qualitative and quantitative assessment of retinal images is crucial in related clinical applications. In this paper, we propose approaches for improving the quality of blood vessel detection, building on our initial blood vessel detection methods. A blood vessel spur pruning method has been developed for removing blood vessel spurs, both on vessel medial lines and on binary vessel masks, which are caused by artifacts and side effects of Gaussian matched vessel enhancement. A Gaussian matched filtering compensation method has been developed for removing incorrect vessel branches in areas of low illumination. The proposed approaches were applied and tested on color fundus images from one publicly available database and from our diabetic retinopathy screening dataset. Preliminary results demonstrate the robustness and good performance of the proposed approaches and their potential for improving retinal blood vessel detection.
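A naive version of spur pruning on a skeletonized vessel map can be sketched as follows; it uses 4-connectivity and also erodes the true ends of long vessels, which a production implementation would protect or restore:

```python
import numpy as np

def neighbor_count(sk):
    """4-connected neighbour count for a binary skeleton array."""
    n = np.zeros_like(sk)
    n[1:, :] += sk[:-1, :]
    n[:-1, :] += sk[1:, :]
    n[:, 1:] += sk[:, :-1]
    n[:, :-1] += sk[:, 1:]
    return n

def prune_spurs(skel, max_spur_len):
    """Naive spur pruning: repeatedly delete endpoint pixels (<= 1 neighbour).
    Note this also shortens the true ends of long vessels by max_spur_len
    pixels; a production method would protect or restore those."""
    sk = skel.astype(int)
    for _ in range(max_spur_len):
        sk[(sk == 1) & (neighbor_count(sk) <= 1)] = 0
    return sk
```

Spurs shorter than `max_spur_len` disappear entirely, while the vessel trunk survives.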
Automated blood vessel extraction using local features on retinal images
Yuji Hatanaka, Kazuki Samo, Mikiya Tajima, et al.
An automated blood vessel extraction method using high-order local autocorrelation (HLAC) on retinal images is presented. Although many blood vessel extraction methods based on contrast have been proposed, a technique based on the relations between neighboring pixels has not been published. HLAC features are shift-invariant; therefore, we applied HLAC features to retinal images. However, HLAC features are weak to rotated images, so the method was improved by adding HLAC features computed on a polar-transformed image. The blood vessels were classified using an artificial neural network (ANN) with HLAC features based on 105 mask patterns as input. To improve performance, a second ANN (ANN2) was constructed using the green component of the color retinal image and four output values: those of the first ANN, a Gabor filter, a double-ring filter and a black-top-hat transformation. The retinal images used in this study were obtained from the "Digital Retinal Images for Vessel Extraction" (DRIVE) database. The ANN using HLAC features output high (white-appearing) values in the blood vessel regions and could also extract blood vessels with low contrast. The outputs were evaluated using the area under the curve (AUC) based on receiver operating characteristic (ROC) analysis. The AUC of ANN2 was 0.960. The result can be used for the quantitative analysis of blood vessels.
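As a rough illustration of shift-invariant HLAC features (not the authors' 105-pattern set, which the abstract does not list), each feature sums, over all pixel positions, the product of gray values at a fixed set of offsets:

```python
import numpy as np

def hlac_features(img, patterns):
    """Gray-scale HLAC sketch: patterns is a list of offset lists [(dy, dx), ...];
    each feature sums, over all positions, the product of pixel values at those
    offsets (periodic boundary for simplicity)."""
    feats = []
    for pat in patterns:
        prod = np.ones_like(img, dtype=float)
        for dy, dx in pat:
            prod *= np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
        feats.append(prod.sum())
    return np.array(feats)

# a few example order-0/1/2 patterns in a 3x3 neighborhood (illustrative only)
patterns = [[(0, 0)],
            [(0, 0), (0, 1)],
            [(0, 0), (1, 0)],
            [(0, 0), (0, 1), (1, 0)]]
```

Because every pattern is summed over all positions, the features are invariant to cyclic translation of the image, which is the property the abstract relies on.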
Automated detection of retinal whitening in malarial retinopathy
V. Joshi, C. Agurto , S. Barriga, et al.
Cerebral malaria (CM) is a severe neurological complication associated with malarial infection. Malaria affects approximately 200 million people worldwide and claims 600,000 lives annually, 75% of whom are African children under five years of age. Because most of these mortalities are caused by the high incidence of CM misdiagnosis, there is a need for an accurate diagnostic to confirm the presence of CM. The retinal lesions associated with malarial retinopathy (MR), such as retinal whitening, vessel discoloration, and hemorrhages, are highly specific to CM, and their detection can improve the accuracy of CM diagnosis. This paper focuses on the development of an automated method for the detection of retinal whitening, a unique sign of MR that manifests due to retinal ischemia resulting from CM. We propose to detect the whitening region in retinal color images based on multiple color and textural features. First, we preprocess the image using color and textural features of the CMYK and CIE-XYZ color spaces to minimize camera reflex. Next, we utilize color features of the HSL, CMYK, and CIE-XYZ channels, along with the structural features of difference of Gaussians. A watershed segmentation algorithm is used to assign each image region a probability of being inside the whitening, based on the extracted features. The algorithm was applied to a dataset of 54 images (40 with whitening and 14 controls), resulting in an image-based (binary) classification with an AUC of 0.80. This provides 88% sensitivity at a specificity of 65%. For a clinical application that requires a high-specificity setting, the algorithm can be tuned to a specificity of 89% at a sensitivity of 82%. This is the first published method for retinal whitening detection, and combining it with detection methods for vessel discoloration and hemorrhages can further improve the detection accuracy for malarial retinopathy.
Finding regional models of the Alzheimer disease by fusing information from neuropsychological tests and structural MR images
Initial diagnosis of Alzheimer's disease (AD) is based on the patient's clinical history and a battery of neuropsychological tests. This work presents an automatic strategy that uses structural Magnetic Resonance Imaging (MRI) to learn brain models for different stages of the disease using information from clinical assessments. A comparison of the discriminant power of the models in different anatomical areas is then made by using the brain region of the models as a reference frame for the classification problem; using the projection onto the AD model, a Receiver Operating Characteristic (ROC) curve is constructed. Validation was performed using a leave-one-out scheme with 86 subjects (20 AD and 60 NC) from the Open Access Series of Imaging Studies (OASIS) database. The region with the best classification performance was the left amygdala, where it is possible to achieve a sensitivity and specificity of 85% at the same time. The regions with the best performance, in terms of the AUC, are in strong agreement with those described as important for the diagnosis of AD in clinical practice.
Deep learning in the small sample size setting: cascaded feed forward neural networks for medical image segmentation
Bilwaj Gaonkar, David Hovda, Neil Martin, et al.
Deep learning refers to a large set of neural-network-based algorithms that have emerged as promising machine-learning tools in the general imaging and computer vision domains. Convolutional neural networks (CNNs), a specific class of deep learning algorithms, have been extremely effective in object recognition and localization in natural images. A characteristic feature of CNNs is the use of a locally connected multi-layer topology inspired by the animal visual cortex (the most powerful vision system in existence). While CNNs perform admirably in object identification and localization tasks, they typically require training on extremely large datasets. Unfortunately, in medical image analysis, large datasets are either unavailable or extremely expensive to obtain. Further, the primary tasks in medical imaging are organ identification and segmentation from 3D scans, which differ from the standard computer vision tasks of object recognition. Thus, in order to translate the advantages of deep learning to medical image analysis, there is a need to develop deep network topologies and training methodologies that are geared toward medical imaging tasks and can work in a setting where dataset sizes are relatively small. In this paper, we present a technique for stacked supervised training of deep feed-forward neural networks for segmenting organs from medical scans. Each 'neural network layer' in the stack is trained to identify a subregion of the original image that contains the organ of interest. By layering several such stacks together, a very deep neural network is constructed. Such a network can identify extremely small regions of interest in extremely large images, in spite of a lack of clear contrast in the signal or easily identifiable shape characteristics. What is even more intriguing is that the network stack achieves accurate segmentation even when it is trained on a single image with manually labelled ground truth. 
We validate this approach using a publicly available head and neck CT dataset. We also show that a deep neural network of similar depth, if trained directly using backpropagation, cannot achieve the tasks achieved using our layer-wise training paradigm.
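The cascaded narrowing of the region of interest can be sketched as follows, with a crude intensity-centroid rule standing in for each trained stage (in the paper, each stage is a trained feed-forward network, not a fixed rule):

```python
import numpy as np

def localize_stage(img, box, shrink=0.5):
    """One cascade stage: inside the current box, estimate the target centroid
    (here a simple thresholded intensity centroid stands in for a trained
    network) and return a smaller box centered on it."""
    y0, y1, x0, x1 = box
    sub = img[y0:y1, x0:x1]
    ys, xs = np.nonzero(sub >= sub.mean())
    cy, cx = ys.mean() + y0, xs.mean() + x0
    h, w = (y1 - y0) * shrink, (x1 - x0) * shrink
    return (int(max(cy - h / 2, 0)), int(min(cy + h / 2, img.shape[0])),
            int(max(cx - w / 2, 0)), int(min(cx + w / 2, img.shape[1])))

def cascade_localize(img, n_stages=3):
    """Apply the stages in sequence, halving the region of interest each time."""
    box = (0, img.shape[0], 0, img.shape[1])
    for _ in range(n_stages):
        box = localize_stage(img, box)
    return box
```

Each stage only needs to solve an easier, more local problem, which is what lets the stack be trained with very little data.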
A fully automatic framework for cell segmentation on non-confocal adaptive optics images
Jianfei Liu, Alfredo Dubra, Johnny Tam
By the time most retinal diseases are diagnosed, macroscopic irreversible cellular loss has already occurred. Earlier detection of subtle structural changes at the single-photoreceptor level is now possible using the adaptive optics scanning light ophthalmoscope (AOSLO). This work aims to develop a fully automatic segmentation framework to extract cell boundaries from non-confocal split-detection AOSLO images of the cone photoreceptor mosaic in the living human eye. Significant challenges include anisotropy, heterogeneous cell regions arising from shading effects, and low contrast between cells and background. To overcome these challenges, we propose the use of: 1) a multi-scale Hessian response to detect heterogeneous cell regions, 2) convex hulls to create boundary templates, and 3) circularly-constrained geodesic active contours to refine cell boundaries. We acquired images from three healthy subjects at eccentric retinal regions and manually contoured cells to generate ground truth for evaluating segmentation accuracy. The Dice coefficient, relative absolute area difference, and average contour distance were 82±2%, 11±6%, and 2.0±0.2 pixels (mean±SD), respectively. We find that strong shading effects from vessels are a main factor causing cell over-segmentation and false segmentation of non-cell regions. Our segmentation algorithm can automatically and accurately segment photoreceptor cells in non-confocal AOSLO images, which is the first step in longitudinal tracking of cellular changes in the individual eye over the time course of disease progression.
Automated metastatic brain lesion detection: a computer aided diagnostic and clinical research tool
Jeremy Devine, Arjun Sahgal, Irene Karam, et al.
The accurate localization of brain metastases in magnetic resonance (MR) images is crucial for patients undergoing stereotactic radiosurgery (SRS), to ensure that all neoplastic foci are targeted. Computer-automated tumor localization and analysis can improve both of these tasks by eliminating inter- and intra-observer variations during the MR image reading process. Lesion localization is accomplished using adaptive thresholding to extract enhancing objects. Each enhancing object is represented as a vector of features that includes information on object size, symmetry, position, shape, and context. These vectors are then used to train a random forest classifier. We trained and tested the image analysis pipeline on 3D axial contrast-enhanced MR images with the intention of localizing the brain metastases. In our cross-validation study, at the most effective algorithm operating point, we were able to identify 90% of the lesions at a precision rate of 60%.
Automated detection of retinal nerve fiber layer defects on fundus images: false positive reduction based on vessel likelihood
Chisako Muramatsu, Kyoko Ishida, Akira Sawada, et al.
Early detection of glaucoma is important to slow down or cease progression of the disease and to prevent total blindness. We have previously proposed an automated scheme for detection of retinal nerve fiber layer defects (NFLDs), one of the early signs of glaucoma observed on retinal fundus images. In this study, a new multi-step detection scheme was introduced to improve detection of subtle and narrow NFLDs. In addition, new features were added to distinguish between NFLDs and blood vessels, which are frequent sites of false positives (FPs). The result was evaluated on a new test dataset consisting of 261 cases, including 130 cases with NFLDs. Using the proposed method, the initial detection rate was improved from 82% to 98%. At a sensitivity of 80%, the number of FPs per image was reduced from 4.25 to 1.36. The result indicates the potential usefulness of the proposed method for early detection of glaucoma.
Phenotypic characterization of glioblastoma identified through shape descriptors
Ahmad Chaddad, Christian Desrosiers, Matthew Toews
This paper proposes quantitatively describing the shape of glioblastoma (GBM) tissue phenotypes as a set of shape features derived from segmentations, for the purposes of discriminating between GBM phenotypes and monitoring tumor progression. GBM patients were identified from the Cancer Genome Atlas, and quantitative MR imaging data were obtained from the Cancer Imaging Archive. Three GBM tissue phenotypes are considered: necrosis, active tumor and edema/invasion. Volumetric tissue segmentations are obtained from registered T1-weighted (T1-WI) post-contrast and fluid-attenuated inversion recovery (FLAIR) MRI modalities. Shape features are computed from the respective tissue phenotype segmentations, and a Kruskal-Wallis test was employed to select features capable of classification at a significance level of p < 0.05. Several classifier models are employed to distinguish phenotypes, with leave-one-out cross-validation. Eight features were found statistically significant for classifying GBM phenotypes with p < 0.05; orientation was uninformative. Quantitative evaluations show that the SVM yields the highest classification accuracy of 87.50%, sensitivity of 94.59% and specificity of 92.77%. In summary, the shape descriptors proposed in this work show high performance in predicting GBM tissue phenotypes. They are thus closely linked to morphological characteristics of GBM phenotypes and could potentially be used in a computer-assisted labeling system.
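The Kruskal-Wallis screening step can be sketched as follows; the grouping of subjects by phenotype and the feature-matrix layout are assumptions for illustration, not the paper's exact data structures:

```python
import numpy as np
from scipy.stats import kruskal

def select_shape_features(features_by_phenotype, alpha=0.05):
    """features_by_phenotype: list of (n_subjects, n_features) arrays, one per
    GBM phenotype. Returns indices of features whose Kruskal-Wallis p-value
    across phenotypes falls below alpha."""
    n_features = features_by_phenotype[0].shape[1]
    keep = []
    for j in range(n_features):
        groups = [grp[:, j] for grp in features_by_phenotype]
        _, p = kruskal(*groups)
        if p < alpha:
            keep.append(j)
    return keep
```

Only the surviving features would then be passed to the classifiers compared in the paper.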
3D texture-based classification applied on brain white matter lesions on MR images
Mariana Leite, David Gobbi, Marina Salluzi, et al.
Lesions in the brain white matter are among the most frequently observed incidental findings on MR images. This paper presents a 3D texture-based classification to distinguish normal-appearing white matter from white matter containing lesions, and compares it with the 2D approach. Texture analysis was based on 55 texture attributes extracted from the gray-level histogram, gray-level co-occurrence matrix, run-length matrix and gradient. The results show that the 3D approach achieves an accuracy rate of 99.28%, against 97.41% for the 2D approach, using a support vector machine classifier. Furthermore, the most discriminating texture attributes in both the 2D and 3D cases were obtained from the image histogram and co-occurrence matrix.
Classification of SD-OCT volumes for DME detection: an anomaly detection approach
S. Sankar, D. Sidibé, Y. Cheung, et al.
Diabetic Macular Edema (DME) is the leading cause of blindness amongst diabetic patients worldwide. It is characterized by accumulation of water molecules in the macula, leading to swelling. Early detection of the disease helps prevent further loss of vision. Naturally, automated detection of DME from Optical Coherence Tomography (OCT) volumes plays a key role. To this end, a pipeline for detecting DME in OCT volumes is proposed in this paper. The method is based on anomaly detection using a Gaussian Mixture Model (GMM). It starts with pre-processing the B-scans by resizing, flattening and filtering them, and extracting features from them. Both intensity and Local Binary Pattern (LBP) features are considered. The dimensionality of the extracted features is reduced using PCA. As the last stage, a GMM is fitted with features from normal volumes. During testing, features extracted from the test volume are evaluated against the fitted model for anomaly, and classification is made based on the number of B-scans detected as outliers. The proposed method is tested on two OCT datasets, achieving a sensitivity and a specificity of 80% and 93% on the first dataset, and 100% and 80% on the second one. Moreover, experiments show that the proposed method achieves better classification performance than other recently published works.
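A minimal sketch of the anomaly-detection stage, assuming scikit-learn's `GaussianMixture`; the component count, score threshold and outlier count are illustrative, as the paper tunes them on training data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_normal_model(normal_features, n_components=3, seed=0):
    """Fit a GMM to per-B-scan features from healthy (normal) volumes only."""
    return GaussianMixture(n_components=n_components,
                           random_state=seed).fit(normal_features)

def classify_volume(gmm, volume_features, score_thresh, max_outliers=2):
    """Flag a volume as DME if more than max_outliers B-scans fall below the
    log-likelihood threshold (both values are illustrative)."""
    scores = gmm.score_samples(volume_features)
    n_outliers = int(np.sum(scores < score_thresh))
    return n_outliers > max_outliers, n_outliers
```

In the paper the features would be PCA-reduced intensity/LBP descriptors rather than the raw vectors shown here.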
A toolbox to visually explore cerebellar shape changes in cerebellar disease and dysfunction
The cerebellum plays an important role in motor control and is also involved in cognitive processes. Cerebellar function is specialized by location, although the exact topographic functional relationship is not fully understood. The spinocerebellar ataxias are a group of neurodegenerative diseases that cause regional atrophy in the cerebellum, yielding distinct motor and cognitive problems. The ability to study region-specific atrophy patterns can provide insight into the problem of relating cerebellar function to location. In an effort to study these structural change patterns, we developed a toolbox in MATLAB to provide researchers a unique way to visually explore the correlation between cerebellar lobule shape changes and function loss, with a rich set of visualization and analysis modules. In this paper, we outline the functions and highlight the utility of the toolbox. The toolbox takes as input landmark shape representations of subjects’ cerebellar substructures. A principal component analysis is used for dimension reduction. Following this, a linear discriminant analysis and a regression analysis can be performed to find the discriminant direction associated with a specific disease type, or to generate the regression line of a specific functional measure. The characteristic structural change pattern of a disease type or of a functional score is visualized by sampling points on the discriminant or regression line. The sampled points are used to reconstruct synthetic cerebellar lobule shapes. We show a few case studies highlighting the utility of the toolbox and compare the analysis results with the literature.
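The dimension-reduction and line-sampling pipeline can be sketched as follows; this is a plain PCA/least-squares version, not the toolbox's MATLAB implementation, and the variable names are illustrative:

```python
import numpy as np

def pca_fit(shapes, n_components=2):
    """shapes: (n_subjects, n_landmark_coords) landmark representations.
    Returns the mean shape, principal components, and per-subject scores."""
    mean = shapes.mean(axis=0)
    _, _, vt = np.linalg.svd(shapes - mean, full_matrices=False)
    comps = vt[:n_components]
    scores = (shapes - mean) @ comps.T
    return mean, comps, scores

def synth_shapes_along_regression(mean, comps, scores, functional_score, ts):
    """Fit a least-squares line from a functional measure to the PCA scores,
    then reconstruct synthetic shapes at sampled positions ts along it."""
    A = np.c_[functional_score, np.ones_like(functional_score)]
    coef, *_ = np.linalg.lstsq(A, scores, rcond=None)   # (2, n_components)
    sampled = np.c_[ts, np.ones_like(ts)] @ coef        # scores on the line
    return mean + sampled @ comps
```

Sampling `ts` across the observed range of the functional measure yields the sequence of synthetic lobule shapes the toolbox visualizes.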
Using support vector machines with tract-based spatial statistics for automated classification of Tourette syndrome children
Hongwei Wen, Yue Liu, Jieqiong Wang, et al.
Tourette syndrome (TS) is a developmental neuropsychiatric disorder with the cardinal symptoms of motor and vocal tics, which emerges in early childhood and fluctuates in severity in later years. To date, the neural basis of TS is not fully understood, and TS has a long-term prognosis that is difficult to estimate accurately. Few studies have looked at the potential of using diffusion tensor imaging (DTI) in conjunction with machine learning algorithms to automate the classification of healthy children and TS children. Here we apply the Tract-Based Spatial Statistics (TBSS) method to 44 TS children and 48 age- and gender-matched healthy children in order to extract the diffusion values from each voxel in the white matter (WM) skeleton, and a feature selection algorithm (ReliefF) was used to select the most salient voxels for subsequent classification with a support vector machine (SVM). We use nested cross-validation to yield an unbiased assessment of the classification method and prevent overestimation. The peak performance of the SVM classifier (accuracy 88.04%, sensitivity 88.64%, specificity 87.50%) was achieved using the axial diffusivity (AD) metric, demonstrating the potential of a joint TBSS and SVM pipeline for fast, objective classification of healthy and TS children. These results suggest that our methods may be useful for the early identification of subjects with TS, and hold promise for predicting prognosis and treatment outcome for individuals with TS.
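Nested cross-validation of an SVM can be sketched with scikit-learn as follows; the hyperparameter grid and fold counts are illustrative, not those of the paper:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

def nested_cv_accuracy(X, y, inner_cv=3, outer_cv=5):
    """Nested cross-validation: the inner loop tunes SVM hyperparameters, the
    outer loop gives an unbiased accuracy estimate. The grid is illustrative."""
    inner = GridSearchCV(SVC(kernel="rbf"),
                         {"C": [0.1, 1.0, 10.0], "gamma": ["scale", 0.1]},
                         cv=inner_cv)
    return cross_val_score(inner, X, y, cv=outer_cv).mean()
```

Because hyperparameters are re-tuned inside every outer fold, no test fold ever influences model selection, which is what prevents the overestimation the abstract mentions.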
A diagnosis model for early Tourette syndrome children based on brain structural network characteristics
Hongwei Wen, Yue Liu, Jieqiong Wang, et al.
Tourette syndrome (TS) is a childhood-onset neurobehavioral disorder characterized by the presence of multiple motor and vocal tics. Tic generation has been linked to disturbed networks of brain areas involved in the planning, control and execution of action. The aim of our work is to select the topological characteristics of structural networks that are most efficient for estimating classification models to identify early TS children. We employed diffusion tensor imaging (DTI) and deterministic tractography to construct the structural networks of 44 TS children and 48 age- and gender-matched healthy children. We calculated four different connection matrices (fiber number, mean FA, averaged fiber length weighted, and binary matrices) and then applied graph-theoretical methods to extract the regional nodal characteristics of the structural networks. For each weighted or binary network, nodal degree, nodal efficiency and nodal betweenness were selected as features. The Support Vector Machine Recursive Feature Elimination (SVM-RFE) algorithm was used to estimate the best feature subset for classification. An accuracy of 88.26%, evaluated by nested cross-validation, was achieved by combining the best feature subsets of each network characteristic. The identified discriminative brain nodes were mostly located in the basal ganglia and frontal cortico-cortical networks implicated in TS, and were associated with tic severity. Our study holds promise for the early identification and prognosis prediction of TS children.
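Two of the nodal characteristics (degree and nodal efficiency) can be computed from a binary adjacency matrix as follows; this is a generic graph-theory sketch, not the authors' pipeline, and nodal betweenness is omitted for brevity:

```python
import numpy as np

def nodal_metrics(adj):
    """Degree and nodal efficiency for a binary undirected adjacency matrix.
    Efficiency of node i = mean over j != i of 1/d(i, j), with shortest-path
    lengths d computed by Floyd-Warshall."""
    n = adj.shape[0]
    d = np.where(adj > 0, 1.0, np.inf)
    np.fill_diagonal(d, 0.0)
    for k in range(n):                      # Floyd-Warshall relaxation
        d = np.minimum(d, d[:, k:k + 1] + d[k:k + 1, :])
    degree = adj.sum(axis=1)
    with np.errstate(divide="ignore"):
        inv = 1.0 / d                       # disconnected pairs contribute 0
    np.fill_diagonal(inv, 0.0)
    efficiency = inv.sum(axis=1) / (n - 1)
    return degree, efficiency
```

For the weighted matrices in the paper, the unit edge lengths would be replaced by inverse connection weights before the shortest-path step.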
A primitive study of voxel feature generation by multiple stacked denoising autoencoders for detecting cerebral aneurysms on MRA
Mitsutaka Nemoto, Naoto Hayashi, Shouhei Hanaoka, et al.
The purpose of this study is to evaluate the feasibility of a novel feature generation scheme, based on multiple deep neural networks (DNNs) with boosting, for computer-assisted detection (CADe). It is hard and time-consuming to optimize the hyperparameters of DNNs such as the stacked denoising autoencoder (SdA). The proposed method allows using SdA-based features without the burden of hyperparameter setting. The proposed method was evaluated in an application for detecting cerebral aneurysms on magnetic resonance angiograms (MRA). A baseline CADe process included four components: scaling, candidate area limitation, candidate detection, and candidate classification. The proposed feature generation method was applied to extract the optimal features for candidate classification, and only required setting ranges for the SdA hyperparameters. The optimal feature set was selected from a large quantity of SdA-based features produced by multiple SdAs, each of which was trained using a different hyperparameter set. The feature selection was performed through the AdaBoost ensemble learning method. Training of the baseline CADe process and the proposed feature generation was performed with 200 MRA cases, and the evaluation was performed with 100 MRA cases. The proposed method successfully provided SdA-based features given only ranges for some of the SdA hyperparameters. The CADe process using both the previous voxel features and the SdA-based features had the best performance, with an area under the ROC curve of 0.838 and an ANODE score of 0.312. The results show that the proposed method is effective in the application of detecting cerebral aneurysms on MRA.
Predicting outcomes in glioblastoma patients using computerized analysis of tumor shape: preliminary data
Maciej A. Mazurowski, Nicholas M. Czarnek, Leslie M. Collins, et al.
Glioblastoma (GBM) is the most common primary brain tumor and is characterized by very poor survival. However, while some patients survive only a few months, others might live for multiple years. Accurate prognosis of survival and stratification of patients allows for making more personalized treatment decisions and moves treatment of GBM one step closer to the paradigm of precision medicine. While some molecular biomarkers are being investigated, medical imaging remains significantly underutilized for prognostication in GBM. In this study, we investigated whether computer analysis of tumor shape can contribute toward accurate prognosis of outcomes. Specifically, we applied computer algorithms to extract 5 shape features from magnetic resonance imaging (MRI) for 22 GBM patients. Then, we determined whether each of the features can accurately distinguish between patients with good and poor outcomes. We found that one of the 5 analyzed features showed prognostic value for survival. The prognostic feature describes how well the 3D tumor shape fills its minimum bounding ellipsoid. Specifically, for low values (less than or equal to the median) the proportion of patients that survived more than a year was 27%, while for high values (greater than the median) the proportion of patients with survival of more than 1 year was 82%. The difference was statistically significant (p < 0.05) even though the number of patients analyzed in this pilot study was low. We conclude that computerized 3D analysis of tumor shape in MRI may strongly contribute to accurate prognostication and stratification of patients for therapy in GBM.
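The bounding-ellipsoid fill feature can be approximated as follows; a PCA-aligned bounding ellipsoid is used here as a cheap stand-in for the true minimum bounding ellipsoid, so the values are only indicative:

```python
import numpy as np

def bounding_ellipsoid_fill(mask):
    """Ratio of tumor volume (voxel count) to the volume of a PCA-aligned
    bounding ellipsoid. This is a cheap proxy for the minimum bounding
    ellipsoid used in the paper; a compact tumor gives a ratio near 1."""
    pts = np.argwhere(mask).astype(float)
    pts -= pts.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(pts.T))     # principal axes
    proj = pts @ vecs
    semi_axes = np.abs(proj).max(axis=0)        # extent along each axis
    ellipsoid_vol = 4.0 / 3.0 * np.pi * np.prod(semi_axes)
    return len(pts) / ellipsoid_vol
```

Irregular, finger-like tumors fill much less of their bounding ellipsoid and would yield low values of this feature.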
Glioma grading using cell nuclei morphologic features in digital pathology images
This work proposes a computationally efficient cell nuclei morphologic feature analysis technique to characterize brain gliomas in tissue slide images. Our contributions are two-fold: 1) we obtain an optimized cell nuclei segmentation method based on the pros and cons of existing techniques in the literature, and 2) we extract representative features by k-means clustering of nuclei morphologic features, including area, perimeter, eccentricity, and major axis length. This clustering-based representative feature extraction avoids shortcomings of extensive tile [1] [2] and nuclear score [3] based methods for brain glioma grading in pathology images. A multilayer perceptron (MLP) is used to classify the extracted features into two tumor types: glioblastoma multiforme (GBM) and low grade glioma (LGG). Quantitative scores such as precision, recall, and accuracy are obtained using 66 clinical patients’ images from The Cancer Genome Atlas (TCGA) [4] dataset. An average accuracy of ~94% from 10-fold cross-validation confirms the efficacy of the proposed method.
Quantitative characterization of brain β-amyloid in 718 normal subjects using a joint PiB/FDG PET image histogram
Jon J. Camp, Dennis P. Hanson, Val J. Lowe M.D., et al.
We have previously described an automated system for the co-registration of PiB and FDG PET images with structural MRI and a neurological anatomy atlas to produce region-specific quantization of cortical activity and amyloid burden. We also reported a global joint PiB/FDG histogram-based measure (FDG-Associated PiB Uptake Ratio, FAPUR) that performed as well as the regional PiB ratio in stratifying Alzheimer’s disease (AD) and Lewy Body Dementia (LBD) patients from normal subjects in an autopsy-verified cohort of 31. In this paper we examine the results of this analysis on a clinically verified cohort of 718 normal volunteers. We found that the global FDG ratio correlated negatively with age (r2 = 0.044) and the global PiB ratio correlated positively with age (r2 = 0.038). FAPUR also correlated negatively with age (r2 = 0.025), and in addition, we introduce a new metric, the Pearson’s correlation coefficient (r2) of the joint PiB/FDG histogram, which correlates positively (r2 = 0.014) with age. We then used these measurements to construct age-weighted Z-scores for all measurements made on the original autopsy cohort. We found similar stratification using Z-scores compared to raw values; however, the joint PiB/FDG r2 Z-score showed the greatest stratification ability.
Posters: Lung and Chest
A novel approach for tuberculosis screening based on deep convolutional neural networks
Tuberculosis (TB) is one of the major global health threats, especially in developing countries. Although newly diagnosed TB patients can be cured at a high rate, many patients with curable TB in developing countries die because of delayed diagnosis, caused in part by the lack of radiography equipment and radiologists. Developing a computer-aided diagnosis (CAD) system for TB screening can therefore contribute to early diagnosis of TB and thereby help prevent deaths from the disease. Currently, most CAD algorithms adopt carefully designed morphological features distinguishing different lesion types to improve screening performance. However, such engineered features cannot be guaranteed to be the best descriptors for TB screening. Deep learning has become a dominant approach in the machine learning community. Especially in computer vision, deep convolutional neural networks (CNNs) have proven very promising for various visual tasks. Since a deep CNN enables end-to-end training from feature extraction to classification, it does not require objective-specific manual feature engineering. In this work, we designed a CAD system based on a deep CNN for automatic TB screening. Trained on large-scale chest X-rays (CXRs) and exploiting transfer learning, the system achieved viable TB screening performance, with AUCs of 0.96, 0.93, and 0.88 on three real field datasets, respectively.
Ensemble lymph node detection from CT volumes combining local intensity structure analysis approach and appearance learning approach
Yoshihiko Nakamura, Yukitaka Nimura, Masahiro Oda, et al.
This paper presents an ensemble lymph node detection method combining two automated lymph node detection methods for CT volumes. Detecting enlarged abdominal lymph nodes from CT volumes is an important task for pre-operative diagnosis and planning in cancer surgery. Although several research works have been conducted toward automated abdominal lymph node detection, such methods still lack sufficient accuracy for detecting lymph nodes of 5 mm or larger. This paper proposes an ensemble lymph node detection method that integrates two different lymph node detection schemes: (1) a local intensity structure analysis approach and (2) an appearance learning approach. This ensemble approach is introduced with the aim of achieving high sensitivity and specificity. Each component detection method is independently designed to detect candidate regions of enlarged abdominal lymph nodes whose diameters are over 5 mm. We applied the proposed ensemble method to 22 cases using abdominal CT volumes. Experimental results showed that we can detect about 90.4% (47/52) of the abdominal lymph nodes with about 15.2 false positives per case for lymph nodes of 5 mm or more in diameter.
Classification of pulmonary nodules in lung CT images using shape and texture features
Ashis Kumar Dhara, Sudipta Mukhopadhyay, Anirvan Dutta, et al.
Differentiation of malignant and benign pulmonary nodules is important for the prognosis of lung cancer. In this paper, benign and malignant nodules are classified using a support vector machine. Several shape-based and texture-based features are used to represent the pulmonary nodules in the feature space. A semi-automated technique is used for nodule segmentation. Relevant features are selected for an efficient representation of nodules in the feature space. The proposed scheme and the competing technique are evaluated on a data set of 542 nodules of the Lung Image Database Consortium and Image Database Resource Initiative. Nodules with a composite malignancy rank of "1" or "2" are considered benign and those ranked "4" or "5" are considered malignant. The area under the receiver operating characteristic curve is 0.9465 for the proposed method, which outperforms the competing technique.
Differentiation of several interstitial lung disease patterns in HRCT images using support vector machine: role of databases on performance
Mandar Kale, Sudipta Mukhopadhyay, Jatindra K. Dash, et al.
Interstitial lung disease (ILD) is a complicated group of pulmonary disorders. High-resolution computed tomography (HRCT) is considered the best imaging technique for the analysis of different pulmonary disorders. HRCT findings can be categorized into several patterns, viz. consolidation, emphysema, ground glass opacity, nodular, normal, etc., based on their texture-like appearance. Clinicians often find it difficult to diagnose these patterns because of their complex nature. In such scenarios, a computer-aided diagnosis system could help clinicians identify the patterns. Several approaches have been proposed for the classification of ILD patterns, including the computation of textural features and the training/testing of classifiers such as artificial neural networks (ANN) and support vector machines (SVM). In this paper, wavelet features are calculated from two different ILD databases, the publicly available MedGIFT ILD database and a private ILD database, followed by performance evaluation of ANN and SVM classifiers in terms of average accuracy. It is found that the average classification accuracy of the SVM is greater than that of the ANN when trained and tested on the same database. The investigation was continued to test the variation in classifier accuracy when training and testing are performed on alternate databases, and when the classifiers are trained and tested on a database formed by merging samples of the same class from the two individual databases. The average classification accuracy drops when two independent databases are used for training and testing, respectively. There is a significant improvement in average accuracy when the classifiers are trained and tested on the merged database. This indicates the dependency of classification accuracy on the training data. It is observed that the SVM outperforms the ANN when the same database is used for training and testing.
Automated anatomical description of pleural thickening towards improvement of its computer-assisted diagnosis
Kraisorn Chaisaowong, Mingze Jiang, Peter Faltin, et al.
Pleural thickenings are caused by asbestos exposure and may evolve into malignant pleural mesothelioma. An early diagnosis plays a key role towards an early treatment and an increased survival rate. Today, pleural thickenings are detected by visual inspection of CT data, which is time-consuming and subject to the physician's subjective judgment. A computer-assisted diagnosis system to automatically assess pleural thickenings has been developed, which includes not only a quantitative assessment with respect to size and location, but also enhances this information with an anatomical description, i.e. lung side (left, right), part of pleura (pars costalis, mediastinalis, diaphragmatica, spinalis), as well as vertical (upper, middle, lower) and horizontal (ventral, dorsal) position. For this purpose, a 3D anatomical model of the lung surface was manually constructed as a 3D atlas. Three registration sub-steps, comprising rigid, affine, and nonrigid registration, align the input patient lung to the 3D anatomical atlas model of the lung surface. Finally, each detected pleural thickening is assigned a set of labels describing its anatomical properties. Through this added information, an enhancement to the existing computer-assisted diagnosis system is presented in order to ensure a more precise and reproducible assessment of pleural thickenings, aiming at the diagnosis of pleural mesothelioma in its early stage.
Computer aided diagnosis for severity assessment of pneumoconiosis using CT images
Every year in Japan, 240,000 participants are screened for pneumoconiosis. Radiographs are used worldwide for staging its severity. This paper presents a method for the quantitative assessment of severity in pneumoconiosis using both the size and the frequency of lung nodules detected in thin-section CT images. The method consists of three steps. First, thoracic organs (body, ribs, spine, trachea, bronchi, lungs, heart, and pulmonary blood vessels) are segmented. Second, lung nodules with a radius over 1.5 mm are detected. These steps use functions of our previously developed computer-aided detection system for chest CT images. Third, severity of pneumoconiosis is quantified using the size and frequency of the lung nodules. The method was applied to nine pneumoconiosis patients. The initial results showed that the proposed method can assess severity in pneumoconiosis quantitatively. This paper demonstrates the effectiveness of our method for the diagnosis and prognosis of pneumoconiosis in CT screening.
Lung nodule detection using 3D convolutional neural networks trained on weakly labeled data
Rushil Anirudh, Jayaraman J. Thiagarajan, Timo Bremer, et al.
Early detection of lung nodules is currently one of the most effective ways to predict and treat lung cancer. As a result, the past decade has seen a lot of focus on computer-aided diagnosis (CAD) of lung nodules, whose goal is to efficiently detect and segment lung nodules and classify them as benign or malignant. Effective detection of such nodules remains a challenge due to their arbitrariness in shape, size, and texture. In this paper, we propose to employ 3D convolutional neural networks (CNNs) to learn highly discriminative features for nodule detection in lieu of hand-engineered ones such as geometric shape or texture. While 3D CNNs are promising tools for modeling the spatio-temporal statistics of data, they are limited by their need for detailed 3D labels, which can be prohibitively expensive to obtain compared to 2D labels. Existing CAD methods rely on obtaining detailed labels for lung nodules to train models, which is also unrealistic and time-consuming. To alleviate this challenge, we propose a solution wherein the expert needs to provide only a point label, i.e., the central pixel of the nodule, and its largest expected size. We use unsupervised segmentation to grow out a 3D region, which is used to train the CNN. Using experiments on the SPIE-LUNGx dataset, we show that the network trained using these weak labels can produce reasonably low false positive rates with high sensitivity, even in the absence of accurate 3D labels.
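The weak-label step, growing a 3D region from only a point label and a largest expected size, might look like the following intensity-based region growing. This is a simplified stand-in for the paper's unsupervised segmentation; the tolerance rule and the toy volume are assumptions.

```python
import numpy as np
from collections import deque

def grow_region(volume, seed, max_radius, tol=0.2):
    """Grow a 3D region from a single point label, bounded by the expert's
    largest expected nodule size (max_radius, in voxels)."""
    seed = tuple(seed)
    ref = volume[seed]
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in [(1,0,0),(-1,0,0),(0,1,0),(0,-1,0),(0,0,1),(0,0,-1)]:
            n = (z + dz, y + dy, x + dx)
            if any(c < 0 or c >= s for c, s in zip(n, volume.shape)):
                continue
            if mask[n]:
                continue
            # Stay within the size bound and close to the seed intensity.
            if np.linalg.norm(np.subtract(n, seed)) > max_radius:
                continue
            if abs(volume[n] - ref) > tol:
                continue
            mask[n] = True
            q.append(n)
    return mask

# Toy volume: a bright 'nodule' in a dark background.
vol = np.zeros((20, 20, 20))
vol[8:12, 8:12, 8:12] = 1.0
mask = grow_region(vol, seed=(10, 10, 10), max_radius=6)
print(mask.sum())
```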
Investigating the effects of majority voting on CAD systems: a LIDC case study
Miguel Carrazza, Brendan Kennedy, Alexander Rasin, et al.
Computer-Aided Diagnosis (CAD) systems can provide a second opinion for either identifying suspicious regions on a medical image or predicting the degree of malignancy for a detected suspicious region. To develop a predictive model, CAD systems are trained on low-level image features extracted from image data and the class labels acquired through radiologists’ interpretations or a gold standard (e.g., a biopsy). While the opinion of an expert radiologist is still an estimate of the answer, the ground truth may be extremely expensive to acquire. In such cases, CAD systems are trained on input data that contains multiple expert opinions per case with the expectation that the aggregate of labels will closely approximate the ground truth. Using multiple labels to solve this problem has its own challenges because of the inherent label uncertainty introduced by the variability in the radiologists’ interpretations. Most CAD systems use majority voting (e.g., average, mode) to handle label uncertainty. This paper investigates the effects that majority voting can have on a CAD system by classifying and analyzing different semantic characteristics supplied with the Lung Image Database Consortium (LIDC) dataset. Using a decision tree based iterative predictive model, we show that majority voting with labels that exhibit certain types of skewed distribution can have a significant negative impact on the performance of a CAD system; therefore, alternative strategies for label integration are required when handling multiple interpretations.
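The label-integration strategies in question can be made concrete in a few lines. The skewed example below shows how majority voting (the mode) can completely hide a dissenting high-malignancy rating, which is the failure mode the paper analyzes; the rating vectors are invented for illustration.

```python
from statistics import mean, mode

def integrate_labels(ratings, strategy="mode"):
    """Aggregate multiple radiologist ratings into one training label."""
    return mode(ratings) if strategy == "mode" else mean(ratings)

consensus = [3, 3, 3, 3]   # all readers agree
skewed = [1, 1, 1, 5]      # one strong dissent

print(integrate_labels(consensus), integrate_labels(consensus, "mean"))
# The mode discards the dissenting '5'; the mean shifts toward it.
print(integrate_labels(skewed), integrate_labels(skewed, "mean"))
```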
Change descriptors for determining nodule malignancy in national lung screening trial CT screening images
Benjamin Geiger, Samuel Hawkins, Lawrence O. Hall, et al.
Pulmonary nodules are effectively diagnosed in CT scans, but determining their malignancy has been a challenge. The rate of change of the volume of a pulmonary nodule is known to be a prognostic factor for cancer development. In this study, we propose that other changes in imaging characteristics are similarly informative. We examined the combination of image features across multiple CT scans, taken from the National Lung Screening Trial, with individual scans of the same patient separated by approximately one year. By subtracting the values of existing features in multiple scans for the same patient, we were able to improve the ability of existing classification algorithms to determine whether a nodule will become malignant. We trained each classifier on 83 nodules determined to be malignant by biopsy and 172 nodules determined to be benign by their clinical stability through two years of no change; classifiers were tested on 77 malignant and 144 benign nodules. An accuracy of 83.71% and an AUC of 0.814 were achieved with the Random Forests classifier on a subset of features determined to be stable via test-retest reproducibility analysis and further reduced with the Correlation-based Feature Selection algorithm.
Adaptive thresholding of chest temporal subtraction images in computer-aided diagnosis of pathologic change
Radiologists frequently use chest radiographs acquired at different times to diagnose a patient by identifying regions of change. Temporal subtraction (TS) images are formed when a computer warps a radiographic image to register and then subtract one image from the other, accentuating regions of change. The purpose of this study was to create a computer-aided diagnostic (CAD) system to threshold chest TS images and identify candidate regions of pathologic change. Each thresholding technique created two different types of candidate regions: light and dark. Light regions have a high gray-level mean, while dark regions have a low gray-level mean; areas with no change appear as medium-gray pixels. Ten different thresholding techniques were examined and compared. By thresholding light and dark candidate regions separately, the number of properly thresholded regions improved: separate thresholding produced fewer overall candidate regions that included more regions of actual pathologic change than global thresholding of the image. Overall, the moment-preserving method produced the best results for light regions, while the normal distribution method produced the best results for dark regions. Separation of light and dark candidate regions by thresholding shows potential as the first step in creating a CAD system to detect pathologic change in chest TS images.
Correlation analysis between pulmonary function test parameters and CT image parameters of emphysema
Cheng-Pei Liu, Chia-Chen Li, Chong-Jen Yu, et al.
Conventionally, the diagnosis and severity classification of Chronic Obstructive Pulmonary Disease (COPD) are based on pulmonary function tests (PFTs). To reduce the need for PFTs in the diagnosis of COPD, this paper proposes a correlation model between lung CT images and the crucial index of the PFT, FEV1/FVC, a severity index of COPD that distinguishes a normal subject from a COPD patient. A new lung CT image index, the Mirage Index (MI), has been developed to describe the severity of COPD dominated by emphysema. Unlike the conventional Pixel Index (PI), which takes into account all voxels with HU values less than -950, the proposed approach models these voxels by bullae balls of different sizes and defines MI as a weighted sum of the percentages of the bullae balls of different size classes and locations in a lung. To evaluate the efficacy of the proposed model, 45 emphysema subjects of different severity were involved in this study. In comparison with the conventional index, PI, the correlation between MI and FEV1/FVC is -0.75±0.08, which substantially outperforms the correlation between PI and FEV1/FVC, i.e., -0.63±0.11. Moreover, we have shown that the emphysematous lesion areas constituted by small bullae balls are basically irrelevant to FEV1/FVC. The statistical analysis and special case study results show that MI can offer a better assessment in different analyses.
Computerized lung cancer malignancy level analysis using 3D texture features
Wenqing Sun, Xia Huang, Tzu-Liang Tseng, et al.
In the Lung Image Database Consortium (LIDC) database, nodules are classified into five different levels based on the likelihood of malignancy. In this study, we tested the possibility of using three-dimensional (3D) texture features to identify the malignancy level of each nodule. Five groups of features were implemented and tested on 172 nodules with confident malignancy levels from four radiologists. These five feature groups are: grey level co-occurrence matrix (GLCM) features, local binary pattern (LBP) features, scale-invariant feature transform (SIFT) features, steerable features, and wavelet features. Because of the high dimensionality of our proposed features, multidimensional scaling (MDS) was used for dimension reduction. RUSBoost was applied to the extracted features for classification, due to its advantages in handling imbalanced datasets. Each group of features and the final combined features were used to classify nodules highly suspicious for cancer (level 5) and moderately suspicious (level 4). The results showed that the area under the curve (AUC) and accuracy are 0.7659 and 0.8365, respectively, when using the finalized features. These features were also tested on differentiating benign and malignant cases, where the reported AUC and accuracy were 0.8901 and 0.9353, respectively.
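A hedged sketch of the dimension-reduction-plus-boosting pipeline: scikit-learn's MDS for the embedding, and a one-shot random undersampling followed by AdaBoost as a stand-in for RUSBoost (which undersamples inside every boosting round). The feature values are simulated, not LIDC data.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

# Simulated high-dimensional texture features, imbalanced 90 vs. 30.
X = np.vstack([rng.normal(0.0, 1, (90, 50)), rng.normal(0.8, 1, (30, 50))])
y = np.array([0] * 90 + [1] * 30)

# Multidimensional scaling reduces the feature dimensionality.
X_low = MDS(n_components=5, random_state=0).fit_transform(X)

# One-shot undersampling of the majority class before boosting
# (RUSBoost proper repeats this inside the boosting loop).
maj = np.flatnonzero(y == 0)
keep = np.concatenate([rng.choice(maj, 30, replace=False),
                       np.flatnonzero(y == 1)])
clf = AdaBoostClassifier(random_state=0).fit(X_low[keep], y[keep])
print(clf.score(X_low, y))
```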
A computer-aided diagnosis system to detect pathologies in temporal subtraction images of chest radiographs
Radiologists often compare sequential radiographs to identify areas of pathologic change; however, this process is prone to error, as human anatomy can obscure the regions of change, causing the radiologists to overlook pathology. Temporal subtraction (TS) images can provide enhanced visualization of regions of change in sequential radiographs and allow radiologists to better detect areas of change in radiographs. Not all areas of change shown in TS images, however, are actual pathology. The purpose of this study was to create a computer-aided diagnostic (CAD) system that identifies which regions of change are caused by pathology and which are caused by misregistration of the radiographs used to create the TS image. The dataset used in this study contained 120 images with 74 pathologic regions on 54 images outlined by an experienced radiologist. High and low (“light” and “dark”) gray-level candidate regions were extracted from the images using gray-level thresholding. Then, sampling techniques were used to address the class imbalance problem between “true” and “false” candidate regions. Next, the datasets of light candidate regions, dark candidate regions, and the combined set of light and dark candidate regions were used as training and testing data for classifiers by using five-fold cross validation. Of the classifiers tested (support vector machines, discriminant analyses, logistic regression, and k-nearest neighbors), the support vector machine on the combined candidates using synthetic minority oversampling technique (SMOTE) performed best with an area under the receiver operating characteristic curve value of 0.85, a sensitivity of 85%, and a specificity of 84%.
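SMOTE's core idea, synthesizing minority samples by interpolating between minority-class neighbors, can be sketched without the imbalanced-learn package. The class counts and features below are toy values, and a real pipeline would oversample only inside each training fold rather than before cross-validation as done here for brevity.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def smote(X_min, n_new, k=5):
    """Minimal SMOTE: synthesize minority samples by interpolating between
    a minority point and one of its k nearest minority neighbors."""
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nn = np.argsort(d)[1:k + 1]          # skip the point itself
        j = rng.choice(nn)
        out.append(X_min[i] + rng.random() * (X_min[j] - X_min[i]))
    return np.array(out)

# Imbalanced toy candidate-region features: 100 'false', 15 'true'.
X = np.vstack([rng.normal(0.0, 1, (100, 6)), rng.normal(1.5, 1, (15, 6))])
y = np.array([0] * 100 + [1] * 15)

X_syn = smote(X[y == 1], n_new=85)
X_bal = np.vstack([X, X_syn])
y_bal = np.concatenate([y, np.ones(85, dtype=int)])

scores = cross_val_score(SVC(), X_bal, y_bal, cv=5)
print(scores.mean().round(3))
```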
Posters: Musculoskeletal and Miscellaneous
Computer-aided diagnosis for osteoporosis using chest 3D CT images
K. Yoneda, M. Matsuhiro, H. Suzuki, et al.
About 13 million people in Japan suffer from osteoporosis, making it one of the problems of an aging society. To prevent osteoporosis, early detection and treatment are necessary. Multi-slice CT technology has been improving three-dimensional (3-D) image analysis with higher body-axis resolution and shorter scan times. The 3-D image analysis using multi-slice CT images of the thoracic vertebrae can be used to support the diagnosis of osteoporosis and, at the same time, for lung cancer diagnosis, which may lead to early detection. We developed an automatic extraction and partitioning algorithm for the spinal column that analyzes the vertebral body structure, and an analysis algorithm for the vertebral body using shape analysis and bone density measurement for the diagnosis of osteoporosis. The osteoporosis diagnosis support system obtained a high extraction rate of the thoracic vertebrae in both normal-dose and low-dose scans.
Segmentation of knee MRI using structure enhanced local phase filtering
The segmentation of bone surfaces from magnetic resonance imaging (MRI) data has applications in the quantitative measurement of knee osteoarthritis, surgery planning for patient-specific total knee arthroplasty, and the subsequent fabrication of artificial implants. However, due to the problems associated with MRI imaging such as low contrast between bone and surrounding tissues, noise, bias fields, and the partial volume effect, segmentation of bone surfaces continues to be a challenging operation. In this paper, a new framework is presented for the enhancement of knee MRI scans prior to segmentation in order to obtain high-contrast bone images. During the first stage, a new contrast-enhanced relative total variation (RTV) regularization method is used in order to remove textural noise from the bone structures and the surrounding soft-tissue interface. This salient bone edge information is further enhanced using a sparse gradient counting method based on L0 gradient minimization, which globally controls how many non-zero gradients result in order to approximate prominent bone structures in a structure-sparsity-managed manner. The last stage of the framework involves the incorporation of local phase bone boundary information in order to provide an intensity-invariant enhancement of contrast between the bone and surrounding soft tissue. The enhanced images are segmented using a fast random walker algorithm. Validation against expert segmentation was performed on 10 clinical knee MRI images, and achieved a mean Dice similarity coefficient (DSC) of 0.975.
Automated morphological analysis of bone marrow cells in microscopic images for diagnosis of leukemia: nucleus-plasma separation and cell classification using a hierarchical tree model of hematopoiesis
Sebastian Krappe, Thomas Wittenberg, Torsten Haferlach, et al.
The morphological differentiation of bone marrow is fundamental for the diagnosis of leukemia. Currently, the counting and classification of the different types of bone marrow cells is done manually using bright field microscopy. This is a time-consuming, subjective, tedious and error-prone process. Furthermore, repeated examinations of a slide may yield intra- and inter-observer variances. For that reason, a computer-assisted diagnosis system for bone marrow differentiation is pursued. In this work we focus (a) on a new method for the separation of nucleus and plasma parts and (b) on a knowledge-based hierarchical tree classifier for the differentiation of bone marrow cells into 16 different classes. Classification trees are easily interpretable and understandable and provide a classification together with an explanation. Using classification trees, expert knowledge (i.e. knowledge about similar classes and cell lines in the tree model of hematopoiesis) is integrated in the structure of the tree. The proposed segmentation method is evaluated with more than 10,000 manually segmented cells. For the evaluation of the proposed hierarchical classifier, more than 140,000 automatically segmented bone marrow cells are used. Future automated solutions for the morphological analysis of bone marrow smears could potentially apply such an approach for the pre-classification of bone marrow cells, thereby shortening the examination time.
A B-spline image registration based CAD scheme to evaluate drug treatment response of ovarian cancer patients
Maxine Tan, Zheng Li, Kathleen Moore, et al.
Ovarian cancer is the second most common cancer amongst gynecologic malignancies, and has the highest death rate. Since the majority of ovarian cancer patients (>75%) are diagnosed in the advanced stage with tumor metastasis, chemotherapy is often required after surgery to remove the primary ovarian tumors. In order to quickly assess patient response to the chemotherapy in clinical trials, two sets of CT examinations are taken pre- and post-therapy (e.g., after 6 weeks). Treatment efficacy is then evaluated based on the Response Evaluation Criteria in Solid Tumors (RECIST) guideline, whereby tumor size is measured by the longest diameter on one CT image slice and only a subset of selected tumors is tracked. However, this criterion cannot fully represent the volumetric changes of the tumors and might miss potentially problematic unmarked tumors. Thus, we developed a new CAD approach to measure and analyze volumetric tumor growth/shrinkage using a cubic B-spline deformable image registration method. In this initial study, on 14 sets of pre- and post-treatment CT scans, we registered the two consecutive scans using cubic B-spline registration in a multiresolution (coarse-to-fine) framework. We used the Mattes mutual information metric as the similarity criterion and the L-BFGS-B optimizer. The results show that our method can quantify volumetric changes in the tumors more accurately than RECIST, and also detect (highlight) potentially problematic regions that were not originally targeted by radiologists. Despite the encouraging results of this preliminary study, further validation of the scheme's performance is required using large and diverse datasets.
Phantom-less bone mineral density (BMD) measurement using dual energy computed tomography-based 3-material decomposition
Philipp Hofmann, Martin Sedlmair, Bernhard Krauss, et al.
Osteoporosis is a degenerative bone disease usually diagnosed at the manifestation of fragility fractures, which severely endanger the health of especially the elderly. To ensure timely therapeutic countermeasures, non-invasive and widely applicable diagnostic methods are required. Currently the primary quantifiable indicator for bone stability, bone mineral density (BMD), is obtained either by DEXA (dual-energy X-ray absorptiometry) or qCT (quantitative CT). Both have respective advantages and disadvantages, with DEXA being considered the gold standard. For the timely diagnosis of osteoporosis, another CT-based method is presented. A dual energy CT reconstruction workflow is being developed to evaluate BMD from lumbar spine (L1-L4) DE-CT images. The workflow is ROI-based and automated for practical use. A dual energy 3-material decomposition algorithm is used to differentiate bone from soft tissue and fat attenuation. The algorithm uses material attenuation coefficients at different beam energy levels. The bone fraction of the three different tissues is used to calculate the amount of hydroxylapatite in the trabecular bone of the corpus vertebrae inside a predefined ROI. Calibrations have been performed to obtain volumetric bone mineral density (vBMD) without having to add a calibration phantom or to use special scan protocols or hardware. Accuracy and precision are dependent on image noise and comparable to qCT images. Clinical indications are in accordance with the DEXA gold standard. The decomposition-based workflow shows bone degradation effects normally not visible on standard CT images, which would induce errors in normal qCT results.
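Conceptually, the 3-material decomposition solves, per voxel, a small linear system: two attenuation measurements (low and high kV) plus the constraint that the bone, soft-tissue, and fat fractions sum to one. A sketch with placeholder attenuation coefficients; the real workflow uses calibrated values.

```python
import numpy as np

# Illustrative linear attenuation coefficients (1/cm) at low / high tube
# energy for bone (hydroxyapatite), soft tissue, and fat. These are
# placeholders, not calibrated constants.
mu = np.array([[1.20, 0.22, 0.19],   # low-kV row:  bone, soft tissue, fat
               [0.60, 0.20, 0.18]])  # high-kV row: bone, soft tissue, fat

def decompose(mu_low, mu_high):
    """Solve for the three material fractions from the two dual-energy
    measurements plus the volume-conservation constraint (sum = 1)."""
    A = np.vstack([mu, np.ones(3)])
    b = np.array([mu_low, mu_high, 1.0])
    return np.linalg.solve(A, b)

# A voxel that is 30% bone, 60% soft tissue, 10% fat:
f_true = np.array([0.3, 0.6, 0.1])
meas = mu @ f_true
f = decompose(*meas)
print(np.round(f, 3))
```

The recovered bone fraction would then be converted to hydroxylapatite mass per volume (vBMD) via the calibration the abstract describes.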
Reliable measurement of 3D foot bone angles based on the frame-of-reference derived from a sole of the foot
Taeho Kim, Dong Yeon Lee, Jinah Park
Clinical management of foot pathology requires accurate and robust measurement of the anatomical angles. In order to measure a 3D angle, recent approaches have adopted a landmark-based local coordinate system to establish the bone angles used in orthopedics. These measurement methods mainly assess the relative angle between bones using a representative axis derived from the morphological features of the bone, and therefore the results can be affected by bone deformities. In this study, we propose a method of deriving a global frame-of-reference to acquire a consistent direction of the foot by extracting the undersurface of the foot from the CT image data. The two lowest positions of the foot skin are identified from the surface to define the base plane, and the direction from the hallux to the fourth toe is defined together to construct the global coordinate system. We performed the experiment on 10 volumes of foot CT images of healthy subjects to verify that the proposed method provides reliable measurements. We measured 3D angles for talus-calcaneus and talus-navicular using the facing articular surfaces of paired bones. The angle was reported in 3 projection angles based on both coordinate systems, defined by the proposed global frame-of-reference and by the CT image planes (sagittal, frontal, and transverse). The results show that the angle quantified using the proposed method had a considerably reduced standard deviation (SD) compared with the angle using the conventional projection planes, and it was also comparable with the measured angles obtained from the local coordinate systems of the bones. Since our method is independent of any individual local shape of a bone, unlike the measurement method using the local coordinate system, it is suitable for inter-subject comparison studies.
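Constructing the global frame-of-reference amounts to Gram-Schmidt orthonormalization of the two anatomical directions. A sketch with made-up landmark coordinates; the exact axis conventions here are an assumption, not the paper's definition.

```python
import numpy as np

def foot_frame(p_low1, p_low2, hallux, fourth_toe):
    """Build an orthonormal global frame from the two lowest sole points
    and the hallux-to-fourth-toe direction (Gram-Schmidt sketch)."""
    u = p_low2 - p_low1              # direction lying in the base plane
    v = fourth_toe - hallux          # toe direction
    e1 = u / np.linalg.norm(u)
    v = v - (v @ e1) * e1            # orthogonalize against e1
    e2 = v / np.linalg.norm(v)
    e3 = np.cross(e1, e2)            # base-plane normal
    return np.column_stack([e1, e2, e3])

# Hypothetical landmark positions (mm) from a segmented foot surface.
R = foot_frame(np.array([0.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0]),
               np.array([9.0, 12.0, 3.0]), np.array([3.0, 11.0, 3.0]))
# Bone angles are then reported as projections onto the three frame planes.
print(np.round(R.T @ R, 6))
```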
Segmentation and determination of joint space width in foot radiographs
O. Schenk, D. M. de Muinck Keizer, H. J. Bernelot Moens M.D., et al.
Joint damage in rheumatoid arthritis is frequently assessed using radiographs of hands and feet. Evaluation includes measurements of the joint space width (JSW) and detection of erosions. Current visual scoring methods are time-consuming and subject to inter- and intra-observer variability. Automated measurement methods avoid these limitations and have been fairly successful in hand radiographs. This contribution aims at foot radiographs. Starting from an earlier proposed automated segmentation method, we have developed a novel model-based image analysis algorithm for JSW measurements. This method uses active appearance and active shape models to identify individual bones. The model compiles ten submodels, each representing a specific bone of the foot (metatarsals 1-5, proximal phalanges 1-5). We performed segmentation experiments using 24 foot radiographs, randomly selected from a large database from the rheumatology department of a local hospital: 10 for training and 14 for testing. Segmentation was considered successful if the joint locations were correctly determined; it was successful in only 14% of cases. To improve results, a step-by-step analysis will be performed. We performed JSW measurements on 14 randomly selected radiographs. JSW was successfully measured in 75% of cases; the mean and standard deviation were 2.30±0.36 mm. This is a first step towards automated determination of progression of RA and therapy response in feet using radiographs.
Multi-atlas segmentation of the cartilage in knee MR images with sequential volume- and bone-mask-based registrations
Han Sang Lee, Hyeun A. Kim, Hyeonjin Kim, et al.
In spite of its clinical importance in the diagnosis of osteoarthritis, segmentation of cartilage in knee MRI remains a challenging task due to its shape variability and low contrast with surrounding soft tissues and synovial fluid. In this paper, we propose a multi-atlas segmentation of cartilage in knee MRI with sequential atlas registrations and locally-weighted voting (LWV). First, bone is segmented by sequential volume- and object-based registrations and LWV. Second, to overcome the shape variability of cartilage, cartilage is segmented by bone-mask-based registration and LWV. In experiments, the proposed method improved the bone segmentation by reducing misclassified bone regions, and enhanced the cartilage segmentation by preventing cartilage leakage into surrounding regions of similar intensity, with the help of the sequential registrations and LWV.
A new paradigm of oral cancer detection using digital infrared thermal imaging
M. Chakraborty, S. Mukhopadhyay, A. Dasgupta, et al.
Histopathology is considered the gold standard for oral cancer detection. But a major fraction of the patient population is incapable of accessing such healthcare facilities due to poverty. Moreover, such analysis may report false negatives when the test tissue is not collected from the exact cancerous location. The proposed work introduces a pioneering computer-aided paradigm of a fast, non-invasive and non-ionizing modality for oral cancer detection using Digital Infrared Thermal Imaging (DITI). Due to aberrant metabolic activities in carcinogenic facial regions, heat signatures of patients differ from those of normal subjects. The proposed work utilizes asymmetry of the temperature distribution of facial regions as the principal cue for cancer detection. Three views of a subject, viz. front, left and right, are acquired using a long-wave infrared (7.5–13 μm) camera for analysing the distribution of temperature. We study asymmetry of facial temperature distribution between: a) left and right profile faces and b) left and right halves of the frontal face. Comparison of temperature distributions suggests that patients manifest greater asymmetry compared to normal subjects. For classification, we initially use k-means and fuzzy k-means for unsupervised clustering, followed by cluster class prototype assignment based on majority voting. Average classification accuracies of 91.5% and 92.8% are achieved by the k-means and fuzzy k-means frameworks for the frontal face. The corresponding metrics for the profile faces are 93.4% and 95%. Combining features of frontal and profile faces, average accuracies increase to 96.2% and 97.6% respectively for the k-means and fuzzy k-means frameworks.
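The unsupervised clustering stage can be illustrated with a minimal one-dimensional k-means over per-subject asymmetry scores. This is a sketch under assumed scalar features; the paper's actual feature vectors, initialisation, and the fuzzy variant are not reproduced.

```python
def kmeans_1d(scores, iters=20):
    """Split scalar asymmetry scores into two clusters (low/high) with
    plain k-means; returns a 0/1 cluster label per score."""
    lo, hi = min(scores), max(scores)  # initialise centres at the extremes
    for _ in range(iters):
        groups = ([], [])
        for s in scores:
            # True (index 1) when the score is closer to the high centre
            groups[abs(s - hi) < abs(s - lo)].append(s)
        new_lo = sum(groups[0]) / len(groups[0]) if groups[0] else lo
        new_hi = sum(groups[1]) / len(groups[1]) if groups[1] else hi
        if (new_lo, new_hi) == (lo, hi):  # converged
            break
        lo, hi = new_lo, new_hi
    return [int(abs(s - hi) < abs(s - lo)) for s in scores]
```

Each cluster is then assigned a "patient" or "normal" prototype by majority voting over its members, as the abstract describes.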
Image segmentation evaluation for very-large datasets
With the advent of modern machine learning methods and fully automated image analysis there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. Current approaches of visual inspection and manual markings do not scale well to big data. We present a new approach that depends on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes is achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for 6 different segmentation algorithms designed to fully automatically facilitate the measurement of a number of very important quantitative image biomarkers. The results indicate that we could achieve 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.
Automated recognition of the iliac muscle and modeling of muscle fiber direction in torso CT images
N. Kamiya, X. Zhou, K. Azuma, et al.
The iliac muscle is an important skeletal muscle related to ambulatory function. The muscles related to ambulatory function are the psoas major and iliac muscles, collectively defined as the iliopsoas muscle. We have proposed an automated recognition method for the iliac muscle. Muscle fibers of the iliac muscle have a characteristic running pattern. Therefore, we used 20 cases from a training database to model the running pattern of the muscle fibers of the iliac muscle. In the recognition process, the position of the iliac muscle was estimated by applying the muscle fiber model. By generating an approximation mask from the muscle fiber model, a candidate region of the iliac muscle was obtained. Finally, the muscle region was identified by using gray values and boundary information. The experiments were performed using the 20 cases without abnormalities in the skeletal muscle used for modeling. Recognition in five cases achieved an average concordance rate of 76.9%. In the visual evaluation, overextraction of other organs was not observed in 85% of the cases. Therefore, the proposed method is considered effective for recognition of the initial region of the iliac muscle. In the future, we will integrate a recognition method for the psoas major muscle in developing an analytical technique for the iliopsoas area. Furthermore, development of a sophisticated muscle function analysis method is necessary.
Multispectral imaging burn wound tissue classification system: a comparison of test accuracies between several common machine learning algorithms
John J. Squiers, Weizhi Li, Darlene R. King, et al.
The clinical judgment of expert burn surgeons is currently the standard on which diagnostic and therapeutic decision-making regarding burn injuries is based. Multispectral imaging (MSI) has the potential to increase the accuracy of burn depth assessment and the intraoperative identification of viable wound bed during surgical debridement of burn injuries. A highly accurate classification model must be developed using machine-learning techniques in order to translate MSI data into clinically-relevant information. An animal burn model was developed to build an MSI training database and to study the burn tissue classification ability of several models trained via common machine-learning algorithms. The algorithms tested, from least to most complex, were: K-nearest neighbors (KNN), decision tree (DT), linear discriminant analysis (LDA), weighted linear discriminant analysis (W-LDA), quadratic discriminant analysis (QDA), ensemble linear discriminant analysis (EN-LDA), ensemble K-nearest neighbors (EN-KNN), and ensemble decision tree (EN-DT). After the ground-truth database of six tissue types (healthy skin, wound bed, blood, hyperemia, partial injury, full injury) was generated by histopathological analysis, we used 10-fold cross validation to compare the algorithms’ performances based on their accuracies in classifying data against the ground truth, and each algorithm was tested 100 times. The mean test accuracies of the algorithms were KNN 68.3%, DT 61.5%, LDA 70.5%, W-LDA 68.1%, QDA 68.9%, EN-LDA 56.8%, EN-KNN 49.7%, and EN-DT 36.5%. LDA had the highest test accuracy, reflecting the bias-variance tradeoff over the range of complexities inherent to the algorithms tested. Several algorithms were able to match the current standard in burn tissue classification, the clinical judgment of expert burn surgeons. These results will guide further development of an MSI burn tissue classification system.
Given that there are few surgeons and facilities specializing in burn care, this technology may improve the standard of burn care for patients without access to specialized facilities.
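The 10-fold cross-validation protocol used to compare the eight classifiers can be sketched as follows. This is a generic sketch: the `train` callback stands in for fitting any of the listed models, and the contiguous, unshuffled fold split is an assumption.

```python
def k_fold_indices(n, k=10):
    """Split sample indices 0..n-1 into k nearly equal contiguous folds."""
    base, extra = divmod(n, k)
    folds, start = [], 0
    for i in range(k):
        size = base + (1 if i < extra else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_val_accuracy(samples, labels, train, k=10):
    """Mean held-out accuracy over k folds; `train` fits on the training
    portion and returns a classifier function sample -> label."""
    folds = k_fold_indices(len(samples), k)
    accs = []
    for fold in folds:
        held_out = set(fold)
        train_idx = [i for i in range(len(samples)) if i not in held_out]
        model = train([samples[i] for i in train_idx],
                      [labels[i] for i in train_idx])
        hits = sum(model(samples[i]) == labels[i] for i in fold)
        accs.append(hits / len(fold))
    return sum(accs) / k
```

Repeating this procedure 100 times with reshuffled data, as the abstract describes, gives the reported mean test accuracies.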
Automated torso organ segmentation from 3D CT images using conditional random field
Yukitaka Nimura, Yuichiro Hayashi, Takayuki Kitasaka, et al.
This paper presents a segmentation method for torso organs using a conditional random field (CRF) on medical images. Many methods have been proposed to enable automated extraction of organ regions from volumetric medical images; however, they require adjustment of empirical parameters to obtain precise organ regions. In this paper, we propose an organ segmentation method using structured output learning based on a probabilistic graphical model. The proposed method utilizes a CRF on a three-dimensional grid as the probabilistic graphical model, with binary features that represent the relationship between voxel intensities and organ labels. We optimize the weight parameters of the CRF using a stochastic gradient descent algorithm and estimate organ labels for a given image by maximum a posteriori (MAP) estimation. The experimental results revealed that the proposed method can extract organ regions automatically using structured output learning. The error of organ label estimation was 6.6%. The Dice coefficients of the right lung, left lung, heart, liver, spleen, right kidney, and left kidney were 0.94, 0.92, 0.65, 0.67, 0.36, 0.38, and 0.37, respectively.
Interactive computer-assisted approach for evaluation of ultrastructural cilia abnormalities
Christoph Palm, Heiko Siegmund, Matthias Semmelmann, et al.
Introduction – Diagnosis of abnormal cilia function is based on ultrastructural analysis of axoneme defects, especially the features of the inner and outer dynein arms, which are the motors of ciliary motility. Sub-optimal biopsy material, methodical, and intrinsic electron microscopy factors pose difficulty in the evaluation of ciliary defects. We present a computer-assisted approach based on state-of-the-art image analysis and object recognition methods, yielding a time-saving and efficient diagnosis of cilia dysfunction. Method – The presented approach is based on a pipeline of basic image processing methods like smoothing, thresholding and ellipse fitting. However, the integration of application-specific knowledge results in robust segmentations even in cases of image artifacts. The method is built hierarchically, starting with the detection of cilia within the image, followed by the detection of nine doublets within each analyzable cilium, and ending with the detection of the dynein arms of each doublet. The process is concluded by a rough classification of the dynein arms as the basis for a computer-assisted diagnosis. Additionally, the interaction possibilities are designed in such a way that the results remain reproducible given the completion report. Results – A qualitative evaluation showed reasonable detection results for cilia, doublets and dynein arms. However, since a ground truth is missing, the variation of the computer-assisted diagnosis should be within the subjective bias of human diagnosticians. The results of a first quantitative evaluation with five human experts and six images with 12 analyzable cilia showed that, with default parameterization, 91.6% of the cilia and 98% of the doublets were found. The computer-assisted approach correctly rated 66% of the inner and outer dynein arms on which all human experts agreed. However, the quality of the dynein arm classification in particular may be improved in future work.
Improving vertebra segmentation through joint vertebra-rib atlases
Yinong Wang, Jianhua Yao, Holger R. Roth, et al.
Accurate spine segmentation allows for improved identification and quantitative characterization of abnormalities of the vertebrae, such as vertebral fractures. However, in existing automated vertebra segmentation methods on computed tomography (CT) images, leakage into nearby bones such as ribs occurs due to the close proximity of these visibly intense structures in a 3D CT volume. To reduce this error, we propose the use of joint vertebra-rib atlases to improve the segmentation of vertebrae via multi-atlas joint label fusion. Segmentation was performed and evaluated on CTs containing 106 thoracic and lumbar vertebrae from 10 pathological and traumatic spine patients on an individual-vertebra basis. Vertebra-only atlases produced errors where the segmentation leaked into the ribs. The use of joint vertebra-rib atlases produced a statistically significant increase in the Dice coefficient from 92.5 ± 3.1% to 93.8 ± 2.1% for the left and right transverse processes and a decrease in the mean and max surface distances from 0.75 ± 0.60 mm and 8.63 ± 4.44 mm to 0.30 ± 0.27 mm and 3.65 ± 2.87 mm, respectively.
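The Dice coefficient reported above can be computed from two binary masks as follows; this is a minimal sketch in which masks are represented as sets of voxel coordinates.

```python
def dice_coefficient(mask_a, mask_b):
    """Dice overlap 2|A∩B| / (|A| + |B|) between two binary masks
    given as sets of voxel coordinates; 1.0 means perfect agreement."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # two empty masks agree trivially
    return 2.0 * len(a & b) / (len(a) + len(b))
```

The mean and max surface distances, by contrast, are computed between the boundary voxels of the two masks rather than their full volumes.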
A machine learning approach for classification of anatomical coverage in CT
Automatic classification of anatomical coverage of medical images is critical for big data mining and as a pre-processing step to automatically trigger specific computer aided diagnosis systems. The traditional way to identify scans through DICOM headers has various limitations due to manual entry of series descriptions and non-standardized naming conventions. In this study, we present a machine learning approach where multiple binary classifiers were used to classify different anatomical coverages of CT scans. A one-vs-rest strategy was applied. For a given training set, a template scan was selected from the positive samples and all other scans were registered to it. Each registered scan was then evenly split into k × k × k non-overlapping blocks and for each block the mean intensity was computed. This resulted in a 1 × k³ feature vector for each scan. The feature vectors were then used to train an SVM-based classifier. In this feasibility study, four classifiers were built to identify anatomic coverages of brain, chest, abdomen-pelvis, and chest-abdomen-pelvis CT scans. Each classifier was trained and tested using a set of 300 scans from different subjects, composed of 150 positive samples and 150 negative samples. Area under the ROC curve (AUC) of the testing set was measured to evaluate the performance in a two-fold cross validation setting. Our results showed good classification performance with an average AUC of 0.96.
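The k × k × k block-mean feature extraction can be sketched as follows. This is illustrative Python: the volume is a nested `[z][y][x]` list and, for simplicity, its dimensions are assumed divisible by k.

```python
def block_mean_features(volume, k):
    """Split a registered 3D volume into k*k*k non-overlapping blocks
    and return the 1 x k^3 vector of per-block mean intensities."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    bz, by, bx = nz // k, ny // k, nx // k  # block extents per axis
    feats = []
    for iz in range(k):
        for iy in range(k):
            for ix in range(k):
                total, count = 0.0, 0
                for z in range(iz * bz, (iz + 1) * bz):
                    for y in range(iy * by, (iy + 1) * by):
                        for x in range(ix * bx, (ix + 1) * bx):
                            total += volume[z][y][x]
                            count += 1
                feats.append(total / count)
    return feats
```

The resulting fixed-length vectors are what feed the one-vs-rest SVM classifiers.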
Computerized scheme for vertebra detection in CT scout image
Wei Guo, Qiang Chen, Hanxun Zhou, et al.
Our purpose is to develop a vertebra detection scheme for automated scan planning, which would assist radiological technologists in their routine work for the imaging of vertebrae. Because vertebrae appear in various orientations, and Haar-like features only represent the subject along the vertical, horizontal, or diagonal directions, we rotated the CT scout image seven times to make the vertebrae roughly horizontal in at least one of the rotated images. We then employed the AdaBoost learning algorithm to construct a strong classifier for vertebra detection using Haar-like features, and merged overlapping detection results according to the number of times they were detected. Finally, most of the false positives were removed by use of the contextual relationships between them. The detection scheme was evaluated on a database of 76 CT scout images. Our detection scheme reported 1.65 false positives per image at a sensitivity of 94.3% for initial detection of vertebral candidates, and the performance was improved to 0.95 false positives per image at a sensitivity of 98.6% after the false positive reduction steps. The proposed scheme achieved a high performance for the detection of vertebrae with different orientations.
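Haar-like features of the kind used here are typically evaluated in constant time via an integral image; the sketch below shows a two-rectangle (left-minus-right) feature. This is a generic illustration, not the paper's exact feature set.

```python
def integral_image(img):
    """Summed-area table: ii[y][x] holds the sum of img strictly
    above and to the left of pixel (x, y)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w x h rectangle with top-left corner (x, y): four lookups."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_vertical_edge(ii, x, y, w, h):
    """Two-rectangle Haar-like feature: left half minus right half,
    responding to vertical intensity edges."""
    return rect_sum(ii, x, y, w // 2, h) - rect_sum(ii, x + w // 2, y, w // 2, h)
```

Because such features are axis-aligned, rotating the scout image, as the authors do, is what lets tilted vertebrae be captured by at least one pass.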
Posters: Vessels and Heart
Learning evaluation of ultrasound image segmentation using combined measures
Objective evaluation of medical image segmentation is one of the important steps in proving its validity and clinical applicability. Although many studies present segmentation methods for medical images, few study methods for evaluating their results. This paper presents a learning-based evaluation method with combined measures, designed to be as close as possible to clinicians’ judgment. This evaluation method is more quantitative and precise for clinical diagnosis. In our experiment, the data set includes 120 segmentation results each for the lumen-intima boundary (LIB) and media-adventitia boundary (MAB) of carotid ultrasound images. Fifteen measures from the goodness method and the discrepancy method are first used to evaluate the segmentation results separately. The experimental results showed that, compared with the discrepancy method, the accuracy of the goodness measures alone is poor. By combining the measures of the two methods, the average accuracy and the area under the receiver operating characteristic (ROC) curve of the two segmentation groups exceed 93% and 0.9, respectively. The results of the MAB are better than those of the LIB, which proves that this novel method can effectively evaluate segmentation results. Moreover, it lays the foundation for a non-supervised segmentation evaluation system.
Atorvastatin effect evaluation based on feature combination of three-dimension ultrasound images
In the past decades, stroke has become a common worldwide cause of death and disability. It is well known that ischemic stroke is mainly caused by carotid atherosclerosis. As an inexpensive, convenient and fast means of detection, ultrasound technology is applied widely in the prevention and treatment of carotid atherosclerosis. Recently, many studies have focused on how to quantitatively evaluate the local arterial effects of drug treatment for carotid diseases. An evaluation method based on feature combination is therefore proposed to detect potential changes in the carotid arteries after atorvastatin treatment. A support vector machine (SVM) and a 10-fold cross-validation protocol were utilized on a database of 5533 carotid ultrasound images from 38 patients (17 in the atorvastatin group and 21 in the placebo group) at baseline and after 3 months of treatment. With combinatorial optimization over many features (including morphological and texture features), the evaluation results of single features and different combined features were compared. The experimental results showed that the performance of single features is poor, while the best feature combination has good recognition ability, with accuracy 92.81%, sensitivity 80.95%, specificity 95.52%, positive predictive value 80.47%, negative predictive value 95.65%, Matthews correlation coefficient 76.27%, and Youden’s index 76.48%. The area under the receiver operating characteristic (ROC) curve (AUC) reached 0.9663, better than the 0.9423 obtained when using all features. Thus, it is shown that this novel method can reliably and accurately evaluate the effect of atorvastatin treatment.
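The AUC figures reported above can be computed without explicitly tracing the ROC curve, via its rank interpretation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (ties count as half). A minimal sketch:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve as the fraction of (positive, negative)
    score pairs that are ranked correctly; labels are 0/1."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to chance-level ranking, 1.0 to perfect separation of the two groups.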
Fully automated segmentation of left ventricle using dual dynamic programming in cardiac cine MR images
Luan Jiang, Shan Ling, Qiang Li
Cardiovascular diseases are becoming a leading cause of death all over the world. The cardiac function could be evaluated by global and regional parameters of left ventricle (LV) of the heart. The purpose of this study is to develop and evaluate a fully automated scheme for segmentation of LV in short axis cardiac cine MR images. Our fully automated method consists of three major steps, i.e., LV localization, LV segmentation at end-diastolic phase, and LV segmentation propagation to the other phases. First, the maximum intensity projection image along the time phases of the midventricular slice, located at the center of the image, was calculated to locate the region of interest of LV. Based on the mean intensity of the roughly segmented blood pool in the midventricular slice at each phase, end-diastolic (ED) and end-systolic (ES) phases were determined. Second, the endocardial and epicardial boundaries of LV of each slice at ED phase were synchronously delineated by use of a dual dynamic programming technique. The external costs of the endocardial and epicardial boundaries were defined with the gradient values obtained from the original and enhanced images, respectively. Finally, with the advantages of the continuity of the boundaries of LV across adjacent phases, we propagated the LV segmentation from the ED phase to the other phases by use of dual dynamic programming technique. The preliminary results on 9 clinical cardiac cine MR cases show that the proposed method can obtain accurate segmentation of LV based on subjective evaluation.
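The core of a dynamic-programming boundary search can be sketched as a minimum-cost path through a cost image, moving at most one row per column. This single-boundary sketch does not reproduce the paper's dual formulation, which delineates the endocardial and epicardial boundaries simultaneously.

```python
def min_cost_boundary(cost):
    """Return, per column, the row index of the minimum-cost
    left-to-right path through `cost` (a 2D list of external costs),
    with vertical steps limited to +/-1 row between columns."""
    rows, cols = len(cost), len(cost[0])
    acc = [row[:] for row in cost]              # accumulated path cost
    back = [[0] * cols for _ in range(rows)]    # backpointers
    for c in range(1, cols):
        for r in range(rows):
            prev = min(range(max(0, r - 1), min(rows, r + 2)),
                       key=lambda p: acc[p][c - 1])
            acc[r][c] = cost[r][c] + acc[prev][c - 1]
            back[r][c] = prev
    r = min(range(rows), key=lambda p: acc[p][cols - 1])
    path = [r]
    for c in range(cols - 1, 0, -1):            # trace the optimal path back
        r = back[r][c]
        path.append(r)
    return path[::-1]
```

In boundary delineation the cost image is typically the negated gradient magnitude in a polar unwrapping of the region of interest, so the cheapest path follows the strongest edge.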
Computerized flow and vessel wall analyses of coronary arteries for detection of non-calcified plaques in coronary CT angiography
The buildup of non-calcified plaques (NCPs) that are vulnerable to rupture in coronary arteries is a risk factor for myocardial infarction. We are developing a computer-aided detection (CADe) system to assist radiologists in detecting NCPs in coronary CT angiography (cCTA). A major challenge of NCP detection is the large number of false positives (FPs) caused by the small size of the coronary arteries, image noise and artifacts. In this study, our purpose is to design new image features to reduce FPs. A data set of 98 cCTA scans was retrospectively collected from patient files. We first used vessel wall analysis, in which topological features were extracted from the vessel wall and fused with a support vector machine, to identify the NCP candidates from the segmented coronary tree. Computerized flow dynamic (CFD) features that characterize the change in blood flow due to the presence of plaques and a vascular cross-sectional (VCS) feature that quantifies the presence of a low-attenuation region at the vessel wall were designed for FP reduction. Using a leave-one-out resampling method, a support vector machine classifier was trained to merge the features into an NCP likelihood score using the vessel wall features alone or in combination with the new CFD and VCS features. The performance of the new features in classification of true NCPs and FPs was evaluated by the area under the receiver operating characteristic (ROC) curve (AUC). Without the new CFD and VCS features, the test AUC was 0.84±0.01. The AUC was improved to 0.88±0.01 with the addition of the new features. The improvement was statistically significant (p < 0.001). The study indicated that the new flow dynamic and vascular cross-sectional features were useful for differentiation of NCPs from FPs in cCTA.
Posters: Abdominal
Maximal area and conformal welding heuristics for optimal slice selection in splenic volume estimation
Ievgeniia Gutenko, Hao Peng, Xianfeng Gu, et al.
Accurate estimation of splenic volume is crucial for the determination of disease progression and response to treatment for diseases that result in enlargement of the spleen. However, there is no consensus with respect to the use of single or multiple one-dimensional, or volumetric measurement. Existing methods for human reviewers focus on measurement of cross diameters on a representative axial slice and craniocaudal length of the organ. We propose two heuristics for the selection of the optimal axial plane for splenic volume estimation: the maximal area axial measurement heuristic and the novel conformal welding shape-based heuristic. We evaluate these heuristics on time-variant data derived from both healthy and sick subjects and contrast them to established heuristics. Under certain conditions our heuristics are superior to standard practice volumetric estimation methods. We conclude by providing guidance on selecting the optimal heuristic for splenic volume estimation.
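The maximal-area heuristic can be sketched in a few lines: scan the binary spleen mask and pick the axial slice with the largest cross-sectional area. This is an illustrative sketch only; the conformal-welding heuristic is considerably more involved and is not reproduced here.

```python
def maximal_area_slice(mask):
    """Index of the axial slice with the largest segmented area;
    `mask` is a binary volume as a nested [z][y][x] list of 0/1."""
    areas = [sum(sum(row) for row in axial_slice) for axial_slice in mask]
    return max(range(len(areas)), key=areas.__getitem__)
```

The cross diameters measured on the selected slice, together with the craniocaudal length, are then combined into the volume estimate.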
Computer-aided detection of bladder mass within non-contrast-enhanced region of CT Urography (CTU)
We are developing a computer-aided detection system for bladder cancer in CT urography (CTU). We have previously developed methods for detection of bladder masses within the contrast-enhanced region of the bladder. In this study, we investigated methods for detection of bladder masses within the non-contrast-enhanced region. The bladder was first segmented using a newly developed deep-learning convolutional neural network in combination with level sets. The non-contrast-enhanced region was separated from the contrast-enhanced region with a maximum-intensity-projection-based method. The non-contrast region was smoothed and a gray level threshold was employed to segment the bladder wall and potential masses. The bladder wall was transformed into a straightened thickness profile, which was analyzed to identify lesion candidates as a prescreening step. The lesion candidates were segmented using our auto-initialized cascaded level set (AI-CALS) segmentation method, and 27 morphological features were extracted for each candidate. Stepwise feature selection with simplex optimization and leave-one-case-out resampling were used for training and validation of a false positive (FP) classifier. In each leave-one-case-out cycle, features were selected from the training cases and a linear discriminant analysis (LDA) classifier was designed to merge the selected features into a single score for classification of the left-out test case. A data set of 33 cases with 42 biopsy-proven lesions in the non-contrast-enhanced region was collected. During prescreening, the system obtained 83.3% sensitivity at an average of 2.4 FPs/case. After feature extraction and FP reduction by LDA, the system achieved 81.0% sensitivity at 2.0 FPs/case, and 73.8% sensitivity at 1.5 FPs/case.