- Front Matter: Volume 7963
- Keynote and Bone CAD
- Breast Imaging I
- Lung Nodules
- Vascular and Cardiac
- CBIR
- Liver and Prostate
- Breast Imaging II
- Novel Applications and Retina
- Machine Learning
- Colon and Other Gastrointestinal CAD
- Breast Imaging III
- Lung Imaging
- Posters: Breast
- Posters: Cardiovascular
- Posters: CBIR
- Posters: Gastrointestinal and Abdominal
- Posters: Head and Neck
- Posters: Lung
- Posters: Microscopy
- Posters: Musculoskeletal
- Posters: Oncology
- Posters: Retina
- Posters: Other
Front Matter: Volume 7963
This PDF file contains the front matter associated with SPIE Proceedings Volume 7963, including the Title Page, Copyright information, Table of Contents, Introduction, and the Conference Committee listing.
Keynote and Bone CAD
Automatic lumbar vertebra segmentation from clinical CT for wedge compression fracture diagnosis
Lumbar vertebral fractures vary greatly in types and causes and usually result from severe trauma or pathological
conditions such as osteoporosis. Lumbar wedge compression fractures are amongst the most common ones where
the vertebra is severely compressed forming a wedge shape and causing pain and pressure on the nerve roots
and the spine. Since vertebral segmentation is the first step in any automated diagnosis task, we present a fully
automated method for robustly localizing and segmenting the vertebrae for preparation of vertebral fracture
diagnosis. Our segmentation method consists of five main steps towards the CAD (computer-aided diagnosis)
system: 1) localization of the intervertebral discs; 2) localization of the vertebral skeleton; 3) segmentation
of the individual vertebrae; 4) detection of the vertebral center line; and 5) detection of the major vertebral
boundary points. Our segmentation results are promising, with an average error of 1.5 mm (modified Hausdorff
distance metric) on 50 clinical CT cases, i.e., a total of 250 lumbar vertebrae. We also present promising
preliminary results for automatic wedge compression fracture diagnosis on 15 cases, 7 of which have one or more
vertebral compression fracture, and obtain an accuracy of 97.33%.
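The 1.5 mm figure above is a modified Hausdorff distance between segmented and ground-truth boundary points. As a reference for that metric, a minimal NumPy sketch (hypothetical function names, not the authors' code) might look like:

```python
import numpy as np

def directed_avg_distance(A, B):
    """Mean, over points in A, of the Euclidean distance to the nearest point in B."""
    diff = A[:, None, :] - B[None, :, :]          # pairwise differences, (|A|, |B|, dim)
    dists = np.sqrt((diff ** 2).sum(axis=2))      # pairwise distances, (|A|, |B|)
    return dists.min(axis=1).mean()

def modified_hausdorff(A, B):
    """Modified Hausdorff distance (Dubuisson and Jain, 1994):
    the larger of the two directed average nearest-point distances."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    return max(directed_avg_distance(A, B), directed_avg_distance(B, A))
```

In this setting, A would be the boundary points of a segmented vertebra and B the manually annotated ones.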
Lumbar spinal stenosis CAD from clinical MRM and MRI based on inter- and intra-context features with a two-level classifier
An imaging test plays an important role in the diagnosis of lumbar abnormalities, since it allows examination of the internal
structure of soft tissues and bony elements without the need for surgery and its associated recovery time. For the past
decade, among the various imaging modalities, magnetic resonance imaging (MRI) has taken a significant part in the clinical
evaluation of the lumbar spine. This is mainly due to technological advancements that have led to improvements in the
spatial resolution, contrast resolution, and multi-planar capabilities of imaging devices. In addition, the noninvasive nature of MRI
makes it easy to diagnose many common causes of low back pain such as disc herniation, spinal stenosis, and degenerative
disc diseases. In this paper, we propose a method to diagnose lumbar spinal stenosis (LSS), a narrowing of the spinal canal,
from magnetic resonance myelography (MRM) images. Our method segments the thecal sac in the preprocessing stage,
generates features based on inter- and intra-context information, and diagnoses lumbar spinal stenosis. Experiments with
55 subjects show that our method achieves 91.3% diagnostic accuracy. In the future, we plan to test our method on more
subjects.
Breast Imaging I
Spectral embedding based active contour (SEAC): application to breast lesion segmentation on DCE-MRI
Spectral embedding (SE), a graph-based manifold learning method, has previously been shown to be useful in
high-dimensional data classification. In this work, we present a novel SE-based active contour (SEAC) segmentation
scheme and demonstrate its application to lesion segmentation on breast dynamic contrast-enhanced magnetic
resonance imaging (DCE-MRI). We employ SE on DCE-MRI on a per-voxel basis to embed the
high-dimensional time-series intensity vector into a reduced-dimensional space, where the reduced embedding
space is characterized by the principal eigenvectors. The orthogonal eigenvector-based data representation allows
for computation of strong tensor gradients in the spectrally embedded space and also yields improved region
statistics that serve as optimal stopping criteria for SEAC. We demonstrate both analytically and empirically
that the tensor gradients in the spectrally embedded space are stronger than the corresponding gradients in the
original grayscale intensity space. On a total of 50 breast DCE-MRI studies, SEAC yielded a mean absolute
difference (MAD) of 3.2±2.1 pixels and mean Dice similarity coefficient (DSC) of 0.74±0.13 compared to manual
ground truth segmentation. An active contour in conjunction with fuzzy c-means (FCM+AC), a commonly used
segmentation method for breast DCE-MRI, produced a corresponding MAD of 7.2±7.4 pixels and mean DSC
of 0.58±0.32. In conjunction with a set of 6 quantitative morphological features automatically extracted from
the SEAC derived lesion boundary, a support vector machine (SVM) classifier yielded an area under the curve
(AUC) of 0.73, for discriminating between 10 benign and 30 malignant lesions; the corresponding SVM classifier
with the FCM+AC derived morphological features yielded an AUC of 0.65.
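The Dice similarity coefficient used above measures volume overlap against the manual ground truth. A minimal sketch of that measure (illustrative only):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A n B| / (|A| + |B|), from 0 (disjoint) to 1 (identical)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return float(2.0 * np.logical_and(a, b).sum() / denom)
```

Here one mask would be the SEAC lesion segmentation and the other the manual delineation.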
Estimating corresponding locations in ipsilateral breast tomosynthesis views
To improve cancer detection in mammography, breast exams usually consist of two views per breast. To combine
information from both views, radiologists and multiview computer-aided detection (CAD) systems need to match
corresponding regions in the two views. In digital breast tomosynthesis (DBT), finding corresponding regions
in ipsilateral volumes may be a difficult and time-consuming task for radiologists, because many slices have to
be inspected individually. In this study we developed a method to quickly estimate corresponding locations in
ipsilateral tomosynthesis views by applying a mathematical transformation. First a compressed breast model is
matched to the tomosynthesis view containing a point of interest. Then we decompress, rotate and compress
again to estimate the location of the corresponding point in the ipsilateral view. In this study we use a simple
elastically deformable sphere model to obtain an analytical solution for the transformation in a given DBT
case. The model is matched to the volume by using automatic segmentation of the pectoral muscle, breast
tissue and nipple. For validation we annotated 181 landmarks in both views and applied our method to each
location. Results show a median 3D distance between the actual location and estimated location of 1.5 cm; a
good starting point for a feature based local search method to link lesions for a multiview CAD system. Half of
the estimated locations were at most 1 slice away from the actual location, making our method useful as a tool
in mammographic workstations to interactively find corresponding locations in ipsilateral tomosynthesis views.
Automatic breast density segmentation based on pixel classification
Mammographic breast density has been found to be a strong risk factor for breast cancer. In most studies it is
assessed with a user assisted threshold method, which is time consuming and subjective. In this study we develop
a breast density segmentation method that is fully automatic. The method is based on pixel classification in
which different approaches known in the literature for segmenting breast density are integrated and extended. In addition,
the method incorporates the knowledge of a trained observer by using segmentations obtained with the user assisted
threshold method as training data. The method is trained and tested using 1300 digitised film mammographic
images acquired with a variety of systems. Results show a high correspondence between the automated method
and the user assisted threshold method. Spearman's rank correlation coefficient between our method and the
user assisted method was R = 0.914 for percent density, which is substantially higher than the best correlation
found in the literature (R = 0.70). The AUC obtained when discriminating between fatty and dense pixels was 0.985.
A combination of segmentation strategies outperformed the application of a single segmentation technique. The
method was shown to be robust for differences in mammography systems, image acquisition techniques and
image quality.
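The R = 0.914 figure above is Spearman's rank correlation, i.e. the Pearson correlation of the ranked measurements. A self-contained sketch with ties given average ranks (hypothetical helper names, not the study's code):

```python
import numpy as np

def rankdata_avg(x):
    """1-based ranks, with tied values given their average rank."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(x, kind="stable")
    sorted_x = x[order]
    ranks = np.empty(len(x))
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and sorted_x[j + 1] == sorted_x[i]:
            j += 1                             # extend the run of tied values
        ranks[order[i:j + 1]] = (i + j) / 2.0 + 1.0
        i = j + 1
    return ranks

def spearman_r(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = rankdata_avg(x), rankdata_avg(y)
    return float(np.corrcoef(rx, ry)[0, 1])
```

Any monotone relation between the two density measures yields |R| = 1, which is why it suits comparing two scoring methods.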
Detection of architectural distortion in prior mammograms using measures of angular distribution
We present methods for the detection of architectural distortion in mammograms of interval-cancer cases taken
prior to the diagnosis of breast cancer using measures of angular distribution derived from Gabor filter responses
in magnitude and angle, coherence, orientation strength, and the angular spread of power in the Fourier spectrum.
A total of 4224 regions of interest (ROIs) were automatically obtained using Gabor filters and phase portrait
analysis from 106 prior mammograms of 56 interval-cancer cases with 301 ROIs related to architectural distortion,
and from 52 mammograms of 13 normal cases. Images of coherence and orientation strength were derived from
the Gabor responses in magnitude and orientation. Each ROI was represented by the entropy of the angular
histogram composed with the Gabor magnitude response, angle, coherence, and orientation strength; the entropy
of the angular spread of power in the Fourier spectrum was also computed. Using stepwise logistic regression
for feature selection and the leave-one-image-out method in feature selection and pattern classification, an area
under the receiver operating characteristic curve of 0.76 was obtained with an artificial neural network based on
radial basis functions. Analysis of the free-response receiver operating characteristics indicated 82% sensitivity
at 7.2 false positives per image.
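Each ROI above is represented by entropies of angular histograms. A toy sketch of such an entropy feature, folding orientations into [0°, 180°) since orientation is undirected (hypothetical names, illustrative bin count):

```python
import math

def angular_entropy(angles_deg, weights=None, n_bins=36):
    """Shannon entropy (bits) of a weighted angular histogram.
    A flat orientation distribution gives high entropy; a single
    dominant orientation gives entropy near zero."""
    hist = [0.0] * n_bins
    if weights is None:
        weights = [1.0] * len(angles_deg)
    for a, w in zip(angles_deg, weights):
        hist[int((a % 180.0) / 180.0 * n_bins) % n_bins] += w
    total = sum(hist)
    return -sum((p / total) * math.log2(p / total) for p in hist if p > 0)
```

In the paper's setting, the weights would come from the Gabor magnitude, coherence, or orientation-strength maps.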
Fully automated segmentation of the pectoralis muscle boundary in breast MR images
Dynamic Contrast Enhanced MRI (DCE-MRI) of the breast is emerging as a novel tool for early tumor detection
and diagnosis. The segmentation of the structures in breast DCE-MR images, such as the nipple, the breast-air
boundary and the pectoralis muscle, serves as a fundamental step for further computer assisted diagnosis (CAD)
applications, e.g., breast density analysis. Moreover, previous clinical studies show that the distance between
the posterior breast lesions and the pectoralis muscle can be used to assess the extent of the disease. To enable
automatic quantification of the distance from a breast tumor to the pectoralis muscle, a precise delineation of
the pectoralis muscle boundary is required. We present a fully automatic segmentation method based on the
second derivative information represented by the Hessian matrix. The voxels proximal to the pectoralis muscle
boundary exhibit roughly the same eigenvalue patterns as a sheet-like object in 3D, which can be enhanced
and segmented by a Hessian-based sheetness filter. A vector-based connected component filter is then utilized
such that only the pectoralis muscle is preserved by extracting the largest connected component. The proposed
method was evaluated quantitatively on a test data set of 30 breast MR images by measuring the
average distances between the segmented boundary and the annotated surfaces in two ground truth sets;
the mean distance was 1.434 mm with a standard deviation of 0.4661 mm, which
shows great potential for integration of the approach into the clinical routine.
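The sheet-like eigenvalue pattern such a filter exploits can be illustrated with a toy score computed from the Hessian's eigenvalues. This is an illustrative simplification with hypothetical names and parameters, not the paper's sheetness filter:

```python
import numpy as np

def sheetness_score(hessian, alpha=0.5, c=10.0):
    """Toy sheetness measure from the eigenvalues of a 3x3 Hessian: high
    when one eigenvalue dominates the other two in magnitude, the
    second-derivative signature of a sheet-like structure."""
    lam = sorted(np.linalg.eigvalsh(np.asarray(hessian, float)), key=abs)
    a1, a2, a3 = (abs(v) for v in lam)          # a1 <= a2 <= a3 by magnitude
    if a3 == 0.0:
        return 0.0                               # flat region: no structure
    r_sheet = a2 / a3                            # near 0 for sheets, near 1 for tubes/blobs
    strength = np.sqrt(a1**2 + a2**2 + a3**2)    # overall second-derivative magnitude
    return float(np.exp(-r_sheet**2 / (2 * alpha**2)) *
                 (1 - np.exp(-strength**2 / (2 * c**2))))
```

A sheet-like Hessian (one large, two near-zero eigenvalues) scores near 1, while a blob-like one (three comparable eigenvalues) is suppressed.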
Multi-view information fusion for automatic BI-RADS description of mammographic masses
Most CBIR-based CAD systems (content-based image retrieval systems for computer-aided diagnosis) identify
lesions that are potentially relevant. These systems base their analysis on a single independent view. This
article presents a CBIR framework which automatically describes mammographic masses with the BI-RADS
lexicon, fusing information from the two mammographic views. After an expert selects a region of interest
(RoI) in each of the two views, a CBIR strategy searches for similar masses in the database by automatically computing
the Mahalanobis distance between shape and texture feature vectors of the mammograms. The strategy was
assessed on a set of 400 cases, for which the suggested descriptions were compared with the ground truth provided
by the database. Two information fusion strategies were evaluated, yielding a retrieval precision rate of 89.6%
in the best scheme. Likewise, the best performance obtained for shape, margin and pathology description, using
a ROC methodology, was reported as AUC = 0.86, AUC = 0.72 and AUC = 0.85, respectively.
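Retrieval here ranks database masses by Mahalanobis distance between feature vectors. A minimal sketch, assuming an invertible covariance matrix estimated from the database features (hypothetical names):

```python
import numpy as np

def mahalanobis_distance(x, y, cov):
    """Mahalanobis distance between feature vectors x and y, given a
    feature covariance matrix (assumed invertible). Features with high
    variance, or correlated features, are down-weighted accordingly."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return float(np.sqrt(d @ np.linalg.inv(np.asarray(cov, float)) @ d))
```

Ranking all database masses by this distance to the query's shape/texture vector yields the similar-case list the description is fused from.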
Lung Nodules
Automatic segmentation and identification of solitary pulmonary nodules on follow-up CT scans based on local intensity structure analysis and non-rigid image registration
This paper presents a novel method that can automatically segment solitary pulmonary nodules (SPNs) and match
the segmented SPNs on follow-up thoracic CT scans. Clinically, a physician needs to find
SPNs on chest CT and observe their progress over time in order to diagnose whether they are benign or malignant, or
to observe the effect of chemotherapy on malignant ones using follow-up data. However, the enormous number
of CT images places a large burden on the physician. To lighten this burden, we developed a method
for the automatic segmentation and assisted observation of SPNs in follow-up CT scans. The SPNs in an input 3D
thoracic CT scan are segmented based on local intensity structure analysis and information about the pulmonary
blood vessels. To compensate for lung deformation, we co-register follow-up CT scans using affine and
non-rigid registration. Finally, the matches of detected nodules are found from registered CT scans based on a
similarity measurement calculation. We applied these methods to three patients including 14 thoracic CT scans.
Our segmentation method detected 96.7% of the SPNs across all images, and the nodule matching method
found 83.3% of the correspondences among segmented SPNs. The results also show that our matching method is robust to
SPN growth, including integration/separation and appearance/disappearance. These results confirm that our method
is feasible for segmenting and identifying SPNs on follow-up CT scans.
Improved computerized detection of lung nodules in chest radiographs by means of 'virtual dual-energy' radiography
Major challenges in current computer-aided detection (CADe) of nodules in chest radiographs (CXRs) are to detect
nodules that overlap with ribs and to reduce the frequent false positives (FPs) caused by ribs. Our purpose was to
develop a CADe scheme with improved sensitivity and specificity by use of "virtual dual-energy" (VDE) CXRs where
ribs are suppressed with a massive-training artificial neural network (MTANN). To reduce rib-induced FPs and detect
nodules overlapping with ribs, we incorporated VDE technology in our CADe scheme. VDE technology suppressed ribs
in CXR while maintaining soft-tissue opacity by use of an MTANN that had been trained with real DE imaging. Our
scheme detected nodule candidates on VDE images by use of a morphologic filtering technique. Sixty-four morphologic
and gray-level-based features were extracted from each candidate from both original and VDE CXRs. A nonlinear
support vector classifier was employed for classification of the nodule candidates. A publicly available database
containing 126 nodules in 126 CXRs was used for testing of our CADe scheme. Twenty-nine percent (36/126) of the
nodules were rated "extremely subtle" or "very subtle" by a radiologist. With the original scheme, a sensitivity of 76.2%
(96/126) with 5.0 (630/126) FPs per image was achieved. By use of VDE images, more nodules overlapping with ribs
were detected and the sensitivity was improved substantially to 84.1% (106/126) at the same FP rate in a leave-one-out
cross-validation test, whereas the literature shows that other CADe schemes achieved sensitivities of 66.0% and 72.0% at
the same FP rate.
Evaluation of 1D, 2D and 3D nodule size estimation by radiologists for spherical and non-spherical nodules through CT thoracic phantom imaging
The purpose of this work was to estimate bias in measuring the size of spherical and non-spherical lesions by
radiologists using three sizing techniques under a variety of simulated lesion and reconstruction slice thickness
conditions. We designed a reader study in which six radiologists estimated the size of 10 synthetic nodules of various
sizes, shapes and densities embedded within a realistic anthropomorphic thorax phantom from CT scan data. In this
manuscript we report preliminary results for the first four readers (Readers 1-4). Two repeat CT scans of the phantom
containing each nodule were acquired using a Philips 16-slice scanner at 0.8 and 5 mm slice thicknesses. The readers
measured the sizes of all nodules for each of the 40 resulting scans (10 nodules × 2 slice thicknesses × 2 repeat scans)
using three sizing techniques (1D longest in-slice dimension; 2D area from longest in-slice dimension and corresponding
longest perpendicular dimension; 3D semi-automated volume) in each of 2 reading sessions. The normalized size was
estimated for each sizing method and an inter-comparison of bias among methods was performed. The overall relative
biases (standard deviation) of the 1D, 2D and 3D methods for the four readers subset (Readers 1-4) were -13.4 (20.3),
-15.3 (28.4) and 4.8 (21.2) percentage points, respectively. The relative bias of the 3D volume sizing method was
statistically lower than that of either the 1D or 2D method (p<0.001 for 1D vs. 3D and 2D vs. 3D).
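The bias figures above are relative biases in percentage points, summarized by their mean and standard deviation across readings. A minimal sketch of that summary (hypothetical names; standard library only):

```python
import statistics

def relative_bias_pct(measured, true_size):
    """Relative bias of one size estimate, in percentage points."""
    return (measured - true_size) / true_size * 100.0

def bias_summary(measurements, true_size):
    """Mean and sample standard deviation of the relative bias
    across repeated reads of the same nodule."""
    biases = [relative_bias_pct(m, true_size) for m in measurements]
    return statistics.mean(biases), statistics.stdev(biases)
```

In the study, such summaries would be pooled over nodules, readers, scans, and slice thicknesses per sizing method.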
Automatic lung nodule detection in thick slice CT: a comparative study of different gating schemes in CAD
Common chest CT clinical workflows for detecting lung nodules use a large slice thickness protocol (typically 5 mm).
However, most existing CAD studies are performed on thin-slice data (0.3-2 mm) available on state-of-the-art scanners.
A major challenge for the widespread clinical use of lung CAD is the concurrent availability of both thick and thin
resolutions for use by the radiologist and CAD, respectively. Having both slice-thickness reconstructions is not always
possible, depending on the availability of scanner technologies, the acquisition parameters chosen at a remote site, and
transmission and archiving constraints that may make the transmission and storage of large data sets impracticable. However, applying current
thin-slice CAD algorithms to thick-slice cases outside their designed acquisition parameters may result in degraded
sensitivity and a high false-positive rate, making them clinically unacceptable. Therefore, a CAD system that can handle
thicker-slice acquisitions is desirable to address these situations.
In this paper, we propose a CAD system that works directly on thick-slice scans. We first describe a multi-stage
classifier-based CAD system for detecting lung nodules in such data. Furthermore, we propose different gating schemes
adapted for thick-slice scans, based on: 1) wall-attached versus non-wall-attached nodules; and 2) central versus
non-central regions. These gating schemes can be used independently or in combination. Finally, we present
prototype results showing a significant improvement in CAD sensitivity at a much better false-positive rate on thick-slice
CT images.
Temporal subtraction of 'virtual dual-energy' chest radiographs for improved conspicuity of growing cancers and other pathologic changes
A temporal-subtraction (TS) technique provides enhanced visualization of tumor growth and subtle pathologic changes
between previous and current chest radiographs (CXRs) from the same patient. Our purpose was to develop a new TS
technique incorporating "virtual dual-energy" technology to improve its enhancement quality. Our TS technique
consisted of ribcage edge detection, rigid body transformation based on a global alignment criterion, image warping
under the maximum cross-correlation criterion, and subtraction between the registered previous and current images. A
major problem with TS was the obscuring of abnormalities by rib artifacts due to misregistration. To reduce the rib artifacts,
we developed a massive-training artificial neural network (MTANN) for separation of ribs from soft tissue. The
MTANN was trained with input CXRs and the corresponding "teaching" soft-tissue CXRs obtained with real dual-energy
radiography. Once trained, the MTANNs did not require a dual-energy system and provided "soft-tissue" images.
Our database consisted of 100 sequential pairs of CXR studies from 53 patients. To assess the registration accuracy and
clinical utility, a chest radiologist subjectively rated the original TS and rib-suppressed TS images on a 5-point scale. By
use of "virtual dual-energy" technology, rib artifacts in the TS images were reduced substantially. The registration
accuracy and clinical utility ratings for rib-suppressed TS images (3.7; 3.9) were significantly better than those for
original TS images (3.5; 3.6) (P<0.01; P<0.02, respectively). Our "virtual dual-energy" TS CXRs can provide improved
enhancement quality of TS images for the assessment of pathologic change.
Vascular and Cardiac
Segmentation of the lumen and media-adventitia boundaries of the common carotid artery from 3D ultrasound images
Three-dimensional ultrasound (3D US) vessel wall volume (VWV) measurements provide high measurement sensitivity
and reproducibility for the monitoring and assessment of carotid atherosclerosis. In this paper, we describe a semiautomated
approach based on the level set method to delineate the media-adventitia and lumen boundaries of the
common carotid artery from 3D US images to support the computation of VWV. Due to the presence of plaque and US
image artifacts, the carotid arteries are challenging to segment using image information alone. Our segmentation
framework combines several image cues with domain knowledge and limited user interaction. Our method was
evaluated with respect to manually outlined boundaries on 430 2D US images extracted from 3D US images of 30
patients who have carotid stenosis of 60% or more. The VWV given by our method differed from that given by manual
segmentation by 6.7% ± 5.0%. For the media-adventitia and lumen segmentations, respectively, our method yielded
Dice coefficients of 95.2% ± 1.6%, 94.3% ± 2.6%, mean absolute distances of 0.3 ± 0.1 mm, 0.2 ± 0.1 mm, maximum
absolute distances of 0.8 ± 0.4 mm, 0.6 ± 0.3 mm, and volume differences of 4.2% ± 3.1%, 3.4% ± 2.6%. The
realization of a semi-automated segmentation method will accelerate the translation of 3D carotid US to clinical care for
the rapid, non-invasive, and economical monitoring of atherosclerotic disease progression and regression during therapy.
Feature extraction and wall motion classification of 2D stress echocardiography with support vector machines
Kiryl Chykeyuk,
David A. Clifton,
J. Alison Noble
Stress echocardiography is a common clinical procedure for diagnosing heart disease. Clinically, diagnosis of the heart
wall motion depends mostly on visual assessment, which is highly subjective and operator-dependent. The introduction of
automated methods for heart function assessment has the potential to minimise the variance in operator assessment.
Automated wall motion analysis consists of two main steps: (i) segmentation of heart wall borders, and (ii) classification
of heart function as either "normal" or "abnormal" based on the segmentation. This paper considers automated
classification of rest and stress echocardiography. Most previous approaches to the classification of heart function have
considered rest or stress data separately, and have used only features extracted from the two main frames
(corresponding to the end-of-diastole and end-of-systole). One previous attempt [1] has been made to combine
information from rest and stress sequences utilising a Hidden Markov Model (HMM), which has proven to be the best
performing approach to date. Here, we propose a novel alternative feature selection approach using combined
information from rest and stress sequences for motion classification of stress echocardiography, utilising a Support
Vector Machines (SVM) classifier. We describe how the proposed SVM-based method overcomes difficulties that occur
with HMM classification. Overall accuracy with the new method for global wall motion classification using datasets
from 173 patients is 92.47%, and the accuracy of local wall motion classification is 87.20%, showing that the proposed
method outperforms the current state-of-the-art HMM-based approach (for which global and local classification accuracy
is 82.15% and 78.33%, respectively).
Automated method for the identification and analysis of vascular tree structures in retinal vessel network
Structural analysis of the retinal vessel network has so far served in the diagnosis of retinopathies and systemic
diseases. Retinopathies are known to affect the morphologic properties of retinal vessels such as course,
shape, caliber, and tortuosity. Whether the arteries and the veins respond to these changes together or
independently has always been a topic of discussion. However, diseases such as diabetic retinopathy and retinopathy
of prematurity have been diagnosed through morphologic changes specific either to arteries or to veins. Thus
a method for separating the retinal vessel trees imaged in a two-dimensional color fundus image may
assist in artery-vein classification and in the quantitative assessment of morphologic changes particular to arteries or
veins. We propose a method based on mathematical morphology and graph search to identify and label the
retinal vessel trees, which provides a structural mapping of vessel network in terms of each individual primary
vessel, its branches, and the spatial positions of branching and cross-over points. The method was evaluated on a
dataset of 15 fundus images, resulting in 92.87% of vessel pixels being correctly assigned when compared
with the manual labeling of separated vessel trees. Accordingly, the structural mapping method performs well
and we are currently investigating its potential in evaluating the characteristic properties specific to arteries or
veins.
Robust and fast abdominal aortic aneurysm centerline detection for rupture risk prediction
This work describes a robust and fast semi-automatic approach for Abdominal Aortic Aneurysm (AAA) centerline
detection. AAA is a vascular disease marked by progressive enlargement of the abdominal aorta, which can lead to
rupture if left untreated, an event that is the 13th leading cause of death in the U.S. The lumen centerline can
be used to provide the initial starting points for thrombus segmentation. Unlike other methods, which are mostly
based on region growing and suffer from leakage and a heavy computational burden, we propose a novel
method based on online classification. An online version of the AdaBoost classifier based on steerable features is applied
to AAA MRI data sets with a rectangular box enclosing the lumen in the first slice. The classifier is updated during the
tracking process by using the testing result of the previous image as the new training data. Unlike traditional offline
versions, the online classifier can adjust its parameters automatically when a leakage occurs. With the help of integral
images for the computation of Haar-like features, the method can achieve nearly real-time processing (about 2 seconds
per image on a standard workstation). Ten ruptured and ten unruptured AAA data sets were processed and the tortuosity
of the 20 centerlines was calculated. The correlation coefficient of the tortuosity was calculated to illustrate the
significance of the prediction with the proposed method. The mean relative accuracy is 95.68% with a standard deviation
of 0.89% when compared to a manual segmentation procedure. The correlation coefficient is 0.394.
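The near-real-time speed above rests on integral images, which reduce each Haar-like rectangle sum to four array lookups. A minimal sketch of that mechanism (hypothetical names, not the authors' code):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a leading zero row/column, so that
    ii[r, c] equals the sum of img[:r, :c]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) from the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def haar_two_rect(ii, r0, c0, h, w):
    """A simple two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    return (box_sum(ii, r0, c0, r0 + h, c0 + half)
            - box_sum(ii, r0, c0 + half, r0 + h, c0 + w))
```

Because every feature evaluation costs a constant number of lookups, a boosted classifier can probe thousands of such features per window.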
Machine learning-based automatic detection of pulmonary trunk
Pulmonary embolism (PE) is a common cardiovascular emergency, with about 600,000 cases occurring annually and causing
approximately 200,000 deaths in the US. CT pulmonary angiography (CTPA) has become the reference standard for PE
diagnosis, but the interpretation of these large image datasets is made complex and time consuming by the intricate
branching structure of the pulmonary vessels, a myriad of artifacts that may obscure or mimic PEs, and a suboptimal
contrast bolus and inhomogeneities in the pulmonary arterial blood pool. To meet this challenge, several approaches for
computer aided diagnosis of PE in CTPA have been proposed. However, none of these approaches is capable of
detecting central PEs, distinguishing the pulmonary artery from the vein to effectively remove any false positives from
the veins, and dynamically adapting to the suboptimal contrast conditions associated with CTPA scans. Overcoming these
shortcomings requires highly efficient and accurate identification of the pulmonary trunk. For this purpose, in
this paper, we present a machine-learning-based approach for automatically detecting the pulmonary trunk. Our idea is to
train a cascaded AdaBoost classifier with a large number of Haar features extracted from CTPA image samples, so that
the pulmonary trunk can be automatically identified by sequentially scanning the CTPA images and classifying each
encountered sub-image with the trained classifier. Our approach outperforms an existing anatomy-based approach,
requiring no explicit representation of anatomical knowledge and achieving nearly 100% accuracy when tested on a large
number of cases.
Computerized detection of pulmonary embolism in computed tomographic pulmonary angiography (CTPA): improvement of vessel segmentation
Vessel segmentation is a fundamental step in an automated pulmonary embolism (PE) detection system. The purpose of
this study is to improve the segmentation scheme for pulmonary vessels affected by PE and other lung diseases. We have
developed a multiscale hierarchical vessel enhancement and segmentation (MHES) method for pulmonary vessel tree
extraction based on the analysis of eigenvalues of Hessian matrices. However, it is difficult to segment the pulmonary
vessels accurately when the vessel is occluded by PEs and/or surrounded by lymphoid tissues or lung diseases. In this
study, we developed a method that combines MHES with level set refinement (MHES-LSR) to improve vessel
segmentation accuracy. The level set was designed to propagate the initial object contours to the regions with relatively
high gray-level, high gradient, and high compactness as measured by the smoothness of the curvature along vessel
boundaries. Two and eight CTPA scans were randomly selected as training and test data sets, respectively. Forty
volumes of interest (VOIs) containing "representative" vessels were manually segmented by a radiologist experienced in
CTPA interpretation and used as the reference standard. The results show that, for the 32 test VOIs, the average percentage
volume error relative to the reference standard was improved from 31.7±10.9% using the MHES method to 7.7±4.7%
using the MHES-LSR method. The correlation between the computer-segmented vessel volume and the reference
standard was improved from 0.954 to 0.986. The accuracy of vessel segmentation was improved significantly (p<0.05).
The MHES-LSR method may have the potential to improve PE detection.
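The Hessian-eigenvalue analysis underlying the MHES step can be sketched as follows. This is a minimal 2D Frangi-style illustration, not the authors' implementation; the parameters (beta, c, the scale set) are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def hessian_eigenvalues_2d(image, sigma):
    """Eigenvalues of the scale-normalized Gaussian Hessian at every pixel."""
    image = np.asarray(image, dtype=float)
    Hxx = sigma ** 2 * ndimage.gaussian_filter(image, sigma, order=(0, 2))
    Hyy = sigma ** 2 * ndimage.gaussian_filter(image, sigma, order=(2, 0))
    Hxy = sigma ** 2 * ndimage.gaussian_filter(image, sigma, order=(1, 1))
    # Closed-form eigenvalues of the symmetric 2x2 Hessian.
    half_trace = (Hxx + Hyy) / 2.0
    disc = np.sqrt(((Hxx - Hyy) / 2.0) ** 2 + Hxy ** 2)
    l1, l2 = half_trace + disc, half_trace - disc
    swap = np.abs(l1) > np.abs(l2)          # sort so |l1| <= |l2|
    l1[swap], l2[swap] = l2[swap], l1[swap]
    return l1, l2

def vesselness_2d(image, sigmas=(1.0, 2.0, 3.0), beta=0.5, c=0.1):
    """Frangi-style vesselness for bright tubular structures, max over scales."""
    best = np.zeros(np.shape(image), dtype=float)
    for s in sigmas:
        l1, l2 = hessian_eigenvalues_2d(image, s)
        rb2 = (l1 / (np.abs(l2) + 1e-12)) ** 2   # line-vs-blob ratio, squared
        s2 = l1 ** 2 + l2 ** 2                   # second-order "structureness"
        v = np.exp(-rb2 / (2 * beta ** 2)) * (1 - np.exp(-s2 / (2 * c ** 2)))
        v[l2 > 0] = 0.0                          # bright ridges require l2 < 0
        best = np.maximum(best, v)
    return best
```

A subsequent level-set refinement, as in MHES-LSR, would then propagate contours initialized on the thresholded vesselness map.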
CBIR
System for pathology categorization and retrieval in chest radiographs
Uri Avni, Hayit Greenspan, Eli Konen, et al.
In this paper we present an overview of a system we have been developing for the past several years for efficient
image categorization and retrieval in large radiograph archives. The methodology is based on local patch representation
of the image content, using a bag of visual words approach and similarity-based categorization with a
kernel based SVM classifier. We show an application to pathology-level categorization of chest x-ray data, the
most popular examination in radiology. Our study deals with pathology detection and identification of individual
pathologies including right and left pleural effusion, enlarged heart and cases of enlarged mediastinum. The input
from a radiologist provided a global label for the entire image (healthy/pathology), and the categorization was
conducted on the entire image, with no need for segmentation algorithms or any geometrical rules. An automatic
diagnostic-level categorization, even on such an elementary level as healthy vs pathological, provides a useful
tool for radiologists on this popular and important examination. This is a first step towards similarity-based
categorization, which has major clinical implications for computer-assisted diagnostics.
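The bag-of-visual-words representation described above can be sketched in numpy: extract local patches, build a vocabulary by k-means clustering, and describe each image as a normalized word histogram. The patch size, vocabulary size, and k-means details below are illustrative assumptions, and the paper's kernel SVM classifier is not reproduced here.

```python
import numpy as np

def extract_patches(img, size=8, stride=8):
    """Flattened, mean-subtracted local patches on a regular grid."""
    ps = []
    for y in range(0, img.shape[0] - size + 1, stride):
        for x in range(0, img.shape[1] - size + 1, stride):
            p = img[y:y + size, x:x + size].ravel().astype(float)
            p -= p.mean()                       # illumination-invariant patch
            ps.append(p)
    return np.array(ps)

def kmeans(data, k, iters=20, seed=0):
    """Plain Lloyd's k-means; builds the visual-word vocabulary."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        d = ((data[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = data[labels == j].mean(0)
    return centers

def bovw_histogram(img, centers, size=8, stride=8):
    """Assign each patch to its nearest visual word; return word frequencies."""
    patches = extract_patches(img, size, stride)
    d = ((patches[:, None, :] - centers[None]) ** 2).sum(-1)
    h = np.bincount(d.argmin(1), minlength=len(centers)).astype(float)
    return h / h.sum()
```

The resulting histograms would then be fed to a kernel SVM for the healthy-vs-pathology categorization.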
Search for the best matching ultrasound frame based on spatial and temporal saliencies
Shaolei Feng, Xiaoyan Xiang, S. Kevin Zhou, et al.
In this paper we present a generic system for fast and accurate retrieval of the best matching frame from Ultrasound
video clips given a reference Ultrasound image. It is challenging to build a generic system to handle various lesion types
without any prior information of the anatomic structures of the Ultrasound data. We propose to solve the problem based
on both spatial and temporal saliency maps calculated from the Ultrasound images, which implicitly analyze the
semantics of images and emphasize the anatomic regions of interest. The spatial saliency map describes the importance
of the pixels of the reference image while the temporal saliency map further distinguishes the subtle changes of the
anatomic structure in a video. A hierarchical comparison scheme based on a novel similarity measure is employed to
locate the most similar frames quickly and precisely. Our system ensures robustness, accuracy, and efficiency.
Experiments show that it achieves more accurate results at high speed.
Bone age assessment by content-based image retrieval and case-based reasoning
Skeletal maturity is assessed visually by comparing hand radiographs to a standardized reference image atlas. Most
common are the methods by Greulich & Pyle and Tanner & Whitehouse. For computer-aided diagnosis (CAD), local
image regions of interest (ROI) such as the epiphysis or the carpal areas are extracted and evaluated. Heuristic
approaches trying to automatically extract, measure and classify bones and distances between bones suffer from the high
variability of biological material and the differences in bone development resulting from age, gender and ethnic origin.
Content-based image retrieval (CBIR) provides a robust solution without delineating and measuring bones. In this work,
epiphyseal ROIs (eROIs) of a hand radiograph are compared to previous cases with known age, mimicking a human
observer. Leave-one-out experiments are conducted on 1,102 left hand radiographs and 15,428 metacarpal and
phalangeal eROIs from the publicly available USC hand atlas. The similarity of the eROIs is assessed by a combination
of cross-correlation, image distortion model, and Tamura texture features, yielding a mean error rate of 0.97 years and a
variance of below 0.63 years. Furthermore, we introduce a publicly available online-demonstration system, where
queries on the USC dataset as well as on uploaded radiographs are performed for instant CAD. In the future, we plan to
evaluate physicians with CBIR-CAD against physicians without CBIR-CAD, rather than physicians vs. CBIR-CAD.
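The cross-correlation component of the eROI similarity can be illustrated with a normalized cross-correlation of two equally sized regions; this sketch assumes flattened grayscale ROIs and omits the image distortion model and Tamura texture features that the combined measure also uses.

```python
import numpy as np

def ncc(roi_a, roi_b):
    """Normalized cross-correlation of two equally sized grayscale ROIs.
    Invariant to affine intensity changes; ranges from -1 to 1."""
    a = np.asarray(roi_a, dtype=float).ravel()
    b = np.asarray(roi_b, dtype=float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

In a CBIR setting, the query eROI would be scored against every stored eROI with known bone age and the ages of the best matches combined, in the spirit of case-based reasoning.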
Integrating user profile in medical CBIR systems to answer perceptual similarity queries
Techniques for Content-Based Image Retrieval (CBIR) have been intensively explored due to the increase in the
amount of captured images and the need for their fast retrieval. The medical field is a specific example that
generates a large flow of information, especially digital images employed for diagnosis. One issue that still
remains unsolved is how to achieve perceptual similarity; that is, for an effective retrieval,
one must characterize and quantify the similarity as perceived by the specialist in the field. Therefore,
the present paper fills this gap by creating a consistent support to perform similarity queries
over medical images, maintaining the semantics of a given query desired by the user. CBIR systems relying on
relevance feedback techniques usually request the users to label relevant images. In this paper, we present a
simple but highly effective strategy to survey user profiles, taking advantage of such labeling to implicitly gather
the user perceptual similarity. The user profiles maintain the settings desired by each user, allowing the
similarity assessment to be tuned, which encompasses dynamically changing the distance function employed through an
interactive process. Experiments using computed tomography lung images show that the proposed approach is
effective in capturing the users' perception.
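One simple way to turn relevance feedback into a per-user profile, in the spirit described above, is to re-weight features so that those which are consistent across the images a user marked relevant count more in the distance function. The inverse-standard-deviation rule below is a classic illustrative choice, not the paper's exact method.

```python
import numpy as np

def update_profile_weights(relevant_feats, eps=1e-6):
    """Re-weight features from relevance feedback: features that vary little
    across the images the user labeled relevant receive higher weight."""
    std = np.asarray(relevant_feats, dtype=float).std(axis=0)
    w = 1.0 / (std + eps)
    return w / w.sum()

def weighted_distance(query, candidate, w):
    """User-profile-weighted Euclidean distance between feature vectors."""
    q = np.asarray(query, dtype=float)
    x = np.asarray(candidate, dtype=float)
    return float(np.sqrt(np.sum(w * (q - x) ** 2)))
```

Re-running the query with the updated weights after each feedback round yields the interactive tuning of the similarity assessment described in the abstract.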
Liver and Prostate
A method for mass candidate detection and an application to liver lesion detection
Maria Jimena Costa, Alexey Tsymbal, William Nguatem, et al.
Detection and segmentation of abnormal masses within organs in Computed Tomography (CT) images of
patients is of practical importance in computer-aided diagnosis (CAD), treatment planning, and analysis of
normal as well as pathological regions. For intervention planning e.g. in radiotherapy the detection of abnormal
masses is essential for patient diagnosis, personalized treatment choice and follow-up. The unpredictable nature
of disease often makes the detection of the presence, appearance, shape, size and number of abnormal masses
a challenging task, which is particularly tedious when performed by hand. Moreover, in cases in which the
imaging protocol specifies the administration of a contrast agent, the contrast agent phases at which the patient
images are acquired have a dramatic influence on the shape and appearance of the diseased masses. In this
paper we propose a method to automatically detect candidate lesions (CLs) in 3D CT scans of the liver. We
introduce a novel multilevel candidate generation method that proves clearly advantageous in a comparative
study with a state of the art approach. A learning-based selection module and a candidate fusion module are
then introduced to reduce both redundancy and the false positive rate. The proposed workflow is applied to
the detection of both hyperdense and hypodense hepatic lesions in all contrast agent phases, with resulting
sensitivities of 89.7% and 92% and positive predictive values of 82.6% and 87.6% respectively.
Computer-aided detection of hepatocellular carcinoma in multiphase contrast-enhanced hepatic CT: a preliminary study
Malignant liver tumors such as hepatocellular carcinoma (HCC) account for 1.25 million deaths each year worldwide.
Early detection of HCC is sometimes difficult on CT images because the attenuation of HCC is often similar to that of
normal liver parenchyma. Our purpose was to develop computer-aided detection (CADe) of HCC using both arterial
phase (AP) and portal-venous phase (PVP) of contrast-enhanced CT images. Our scheme consisted of liver
segmentation, tumor candidate detection, feature extraction and selection, and classification of the candidates as HCC or
non-lesions. We used a 3D geodesic-active-contour model coupled with a level-set algorithm to segment the liver. Both
hyper- and hypo-dense tumors were enhanced by a sigmoid filter. A gradient-magnitude filter followed by a watershed
algorithm was applied to the tumor-enhanced images for segmenting closed-contour regions as HCC candidates.
Seventy-five morphologic and texture features were extracted from the segmented candidate regions in both AP and
PVP images. To select most discriminant features for classification, we developed a sequential forward floating feature
selection method directly coupled with a support vector machine (SVM) classifier. The initial CADe before the
classification achieved a 100% (23/23) sensitivity with 33.7 (775/23) false positives (FPs) per patient. The SVM with
four selected features removed 96.5% (748/775) of the FPs without any removal of the HCCs in a leave-one-lesion-out
cross-validation test; thus, a 100% sensitivity with 1.2 FPs per patient was achieved, whereas CADe using AP alone
produced 6.4 (147/23) FPs per patient at the same sensitivity level.
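The tumor-enhancement and edge steps can be sketched as a sigmoid intensity mapping followed by a Gaussian gradient-magnitude filter; the center/width parameters below are illustrative assumptions, and the watershed and SVM stages are omitted.

```python
import numpy as np
from scipy import ndimage

def sigmoid_enhance(img, center, width):
    """Sigmoid mapping that boosts contrast around `center`, enhancing
    lesions whose attenuation differs only slightly from liver parenchyma."""
    return 1.0 / (1.0 + np.exp(-(np.asarray(img, dtype=float) - center) / width))

def edge_map(enhanced, sigma=1.0):
    """Gradient magnitude of the enhanced image; this would be the input
    to the watershed stage that segments closed-contour HCC candidates."""
    return ndimage.gaussian_gradient_magnitude(enhanced, sigma)
```

Choosing `center` near the parenchyma attenuation makes both hyper- and hypo-dense tumors stand out on the mapped image, which is the role the sigmoid filter plays in the pipeline above.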
Automatic computer aided detection of abnormalities in multi-parametric prostate MRI
Development of CAD systems for detection of prostate cancer has been a recent topic of research. A multi-stage
computer aided detection scheme is proposed to help reduce perception and oversight errors in multi-parametric
prostate cancer screening MRI. In addition, important features for development of computer aided detection
systems for prostate cancer screening MRI are identified. A fast, robust prostate segmentation routine is used
to segment the prostate, based on coupled appearance and anatomy models. Subsequently a voxel classification
is performed using a support vector machine to compute an abnormality likelihood map of the prostate. This
classification step is based on quantitative voxel features like the apparent diffusion coefficient (ADC) and
pharmacokinetic parameters. Local maxima in the likelihood map are found using a local maxima detector, after
which regions around the local maxima are segmented. Region features are computed to represent statistical
properties of the voxel features within the regions. Region classification is performed using these features, which
results in a likelihood of abnormality per region. Performance was validated using a 188 patient dataset in
a leave-one-patient-out manner. Ground truth was annotated by two expert radiologists. The results were
evaluated using FROC analysis. The FROC curves show that inclusion of ADC and pharmacokinetic parameter
features increases the performance of an automatic detection system. In addition it shows the potential of such
an automated system in aiding radiologists diagnosing prostate MRI, obtaining sensitivities of 74.7% and
83.4% at 7 and 9 false positives per patient, respectively.
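The local-maxima step on the abnormality likelihood map can be sketched with a maximum filter; the neighborhood size and likelihood threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def local_maxima(likelihood, size=3, threshold=0.5):
    """Peaks of an abnormality-likelihood map: voxels that equal the local
    maximum of their neighborhood and exceed a likelihood threshold.
    Returns an array of peak coordinates, one row per peak."""
    mx = ndimage.maximum_filter(likelihood, size=size, mode="nearest")
    peaks = (likelihood == mx) & (likelihood > threshold)
    return np.argwhere(peaks)
```

Each detected peak would then seed the region segmentation from which the region-level features are computed.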
Enhanced multi-protocol analysis via intelligent supervised embedding (EMPrAvISE): detecting prostate cancer on multi-parametric MRI
Currently, there is significant interest in developing methods for quantitative integration of multi-parametric
(structural, functional) imaging data with the objective of building automated meta-classifiers to improve disease
detection, diagnosis, and prognosis. Such techniques are required to address the differences in dimensionalities
and scales of individual protocols, while deriving an integrated multi-parametric data representation which best
captures all disease-pertinent information available. In this paper, we present a scheme called Enhanced Multi-Protocol Analysis via Intelligent Supervised Embedding (EMPrAvISE); a powerful, generalizable framework
applicable to a variety of domains for multi-parametric data representation and fusion. Our scheme utilizes an
ensemble of embeddings (via dimensionality reduction, DR); thereby exploiting the variance amongst multiple
uncorrelated embeddings in a manner similar to ensemble classifier schemes (e.g. Bagging, Boosting). We apply
this framework to the problem of prostate cancer (CaP) detection on twelve 3 Tesla pre-operative in vivo multi-parametric
(T2-weighted, Dynamic Contrast Enhanced, and Diffusion-weighted) magnetic resonance imaging
(MRI) studies, in turn comprising a total of 39 2D planar MR images. We first align the different imaging protocols
via automated image registration, followed by quantification of image attributes from individual protocols.
Multiple embeddings are generated from the resultant high-dimensional feature space which are then combined
intelligently to yield a single stable solution. Our scheme is employed in conjunction with graph embedding (for
DR) and probabilistic boosting trees (PBTs) to detect CaP on multi-parametric MRI. Finally, a probabilistic
pairwise Markov Random Field algorithm is used to apply spatial constraints to the result of the PBT classifier,
yielding a per-voxel classification of CaP presence. Per-voxel evaluation of detection results against ground
truth for CaP extent on MRI (obtained by spatially registering pre-operative MRI with available whole-mount
histological specimens) reveals that EMPrAvISE yields a statistically significant improvement (AUC=0.77) over
classifiers constructed from individual protocols (AUC=0.62, 0.62, 0.65, for T2w, DCE, DWI respectively) as
well as one trained using multi-parametric feature concatenation (AUC=0.67).
Empirical evaluation of bias field correction algorithms for computer-aided detection of prostate cancer on T2w MRI
In magnetic resonance imaging (MRI), intensity inhomogeneity refers to an acquisition artifact which introduces
a non-linear variation in the signal intensities within the image. Intensity inhomogeneity is known to significantly
affect computerized analysis of MRI data (such as automated segmentation or classification procedures), hence
requiring the application of bias field correction (BFC) algorithms to account for this artifact. Quantitative
evaluation of BFC schemes is typically performed using generalized intensity-based measures (percent coefficient
of variation, %CV) or information-theoretic measures (entropy). While some investigators have previously
empirically compared BFC schemes in the context of different domains (using changes in %CV and entropy
to quantify improvements), no consensus has emerged as to the best BFC scheme for any given application.
The motivation for this work is that the choice of a BFC scheme for a given application should be dictated by
application-specific measures rather than ad hoc measures such as entropy and %CV. In this paper, we have
attempted to address the problem of determining an optimal BFC algorithm in the context of a computer-aided
diagnosis (CAD) scheme for prostate cancer (CaP) detection from T2-weighted (T2w) MRI. One goal of this work
is to identify a BFC algorithm that will maximize the CaP classification accuracy (measured in terms of the area
under the ROC curve or AUC). A secondary aim of our work is to determine whether measures such as %CV and
entropy are correlated with a classifier-based objective measure (AUC). Determining the presence or absence of
these correlations is important to understand whether domain independent BFC performance measures such as
%CV and entropy should be used to identify the optimal BFC scheme for any given application. In order to answer
these questions, we quantitatively compared 3 different popular BFC algorithms on a cohort of 10 clinical 3 Tesla
prostate T2w MRI datasets (comprising 39 2D MRI slices): N3, PABIC, and the method of Cohen et al. The results
of BFC via each of the algorithms were evaluated in terms of %CV, entropy, as well as classifier AUC for CaP
detection from T2w MRI. The CaP classifier was trained and evaluated on a per-pixel basis using annotations
of CaP obtained via registration of T2w MRI and ex vivo whole-mount histology sections. Our results revealed
that different BFC schemes resulted in a maximization of different performance measures, that is, the BFC
scheme identified by minimization of %CV and entropy was not the one that maximized AUC as well. Moreover,
existing BFC evaluation measures (%CV, entropy) did not correlate with AUC (application-based evaluation),
but did correlate with each other, suggesting that domain-specific performance measures should be considered
in making a decision regarding choice of appropriate BFC scheme. Our results also revealed that N3 provided
the best correction of bias field artifacts in prostate MRI data, when the goal was to identify prostate cancer.
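The two generalized evaluation measures discussed above, percent coefficient of variation and intensity entropy, can be computed directly; the histogram bin count is an illustrative choice.

```python
import numpy as np

def percent_cv(intensities):
    """Percent coefficient of variation: 100 * std / mean.
    Lower values suggest a more homogeneous corrected image."""
    x = np.asarray(intensities, dtype=float)
    return 100.0 * x.std() / x.mean()

def intensity_entropy(intensities, bins=64):
    """Shannon entropy (bits) of the intensity histogram.
    Bias field correction is expected to reduce this value."""
    hist, _ = np.histogram(intensities, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

The paper's point is precisely that minimizing these domain-independent measures need not maximize the application-specific measure (classifier AUC).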
Automated determination of arterial input function for DCE-MRI of the prostate
Prostate cancer is one of the commonest cancers in the world. Dynamic contrast enhanced MRI (DCE-MRI) provides
an opportunity for non-invasive diagnosis, staging, and treatment monitoring. Quantitative analysis of DCE-MRI relies on
determination of an accurate arterial input function (AIF). Although several methods for automated AIF detection have
been proposed in literature, none are optimized for use in prostate DCE-MRI, which is particularly challenging due to
large spatial signal inhomogeneity. In this paper, we propose a fully automated method for determining the AIF from
prostate DCE-MRI. Our method is based on modeling pixel uptake curves as gamma variate functions (GVF). First, we
analytically compute bounds on GVF parameters for more robust fitting. Next, we approximate a GVF for each pixel
based on local time domain information, and eliminate the pixels with false estimated AIFs using the deduced upper and
lower bounds. This makes the algorithm robust to signal inhomogeneity. After that, according to spatial information such
as similarity and distance between pixels, we formulate the global AIF selection as an energy minimization problem and
solve it using a message passing algorithm to further rule out the weak pixels and optimize the detected AIF. Our method
is fully automated without training or a priori setting of parameters. Experimental results on clinical data have shown that
our method obtained promising detection accuracy (all detected pixels inside major arteries), and a very good match with
expert traced manual AIF.
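The gamma variate model of a pixel's uptake curve can be sketched as follows; the parameter bounds and initial guess below are illustrative assumptions, and scipy's bound-constrained `curve_fit` stands in for the authors' fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, A, t0, alpha, beta):
    """C(t) = A * (t - t0)^alpha * exp(-(t - t0) / beta); zero before arrival t0."""
    dt = np.clip(t - t0, 0.0, None)
    return A * dt ** alpha * np.exp(-dt / beta)

def fit_uptake(t, c):
    """Bound-constrained GVF fit of one pixel's uptake curve.
    Bounds on (A, t0, alpha, beta) are illustrative; pixels whose fitted
    parameters escape such bounds would be rejected as false AIF candidates."""
    lower = (0.0, 0.0, 0.5, 0.1)
    upper = (np.inf, float(t.max()), 5.0, 10.0)
    p0 = (float(c.max()), float(t[np.argmax(c)]) / 2.0, 2.0, 1.5)
    popt, _ = curve_fit(gamma_variate, t, c, p0=p0, bounds=(lower, upper))
    return popt
```

Pixels whose fitted curves violate the deduced parameter bounds are discarded, which is what makes the AIF selection robust to signal inhomogeneity.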
Breast Imaging II
Classification of breast lesions in automated 3D breast ultrasound
In this paper we investigated classification of malignant and benign lesions in automated 3D breast ultrasound (ABUS).
As a new imaging modality, ABUS overcomes the drawbacks of 2D hand-held ultrasound (US) such as its operator dependence
and limited capability in visualizing the breast in 3D. The classification method we present includes a 3D lesion
segmentation stage based on dynamic programming, which effectively deals with limited visibility of lesion boundaries
due to shadowing and speckle. A novel aspect of ABUS imaging, in which the breast is compressed by means of a dedicated
membrane, is the presence of spiculation in coronal planes perpendicular to the transducer. Spiculation patterns, or
architectural distortion, are characteristic for malignant lesions. Therefore, we compute a spiculation measure in coronal
planes and combine this with more traditional US features related to lesion shape, margin, posterior acoustic behavior,
and echo pattern. However, in our work the latter features are defined in 3D. Classification experiments were performed
with a dataset of 40 lesions including 20 cancers. Linear discriminant analysis (LDA) was used in combination with
leave-one-patient-out and feature selection in each training cycle. We found that spiculation and margin contrast were the most
discriminative features and that these features were most often chosen during feature selection. An Az value of 0.86 was
obtained by merging all features, while an Az value of 0.91 was obtained by feature selection.
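The Az values reported here are areas under the ROC curve; Az can be computed directly from classifier scores via the Mann-Whitney statistic, as in this generic sketch.

```python
import numpy as np

def az_value(scores, labels):
    """Area under the ROC curve (Az) from scores and binary labels,
    via the Mann-Whitney U statistic (ties count one half)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)
```

An Az of 0.5 corresponds to chance-level discrimination of malignant from benign lesions, and 1.0 to perfect separation.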
Exploring deep parametric embeddings for breast CADx
Computer-aided diagnosis (CADx) involves training supervised classifiers using labeled ("truth-known") data.
Often, training data consists of high-dimensional feature vectors extracted from medical images. Unfortunately, very
large data sets may be required to train robust classifiers for high-dimensional inputs. To mitigate the risk of classifier
over-fitting, CADx schemes may employ feature selection or dimension reduction (DR), for example, principal
component analysis (PCA). Recently, a number of novel "structure-preserving" DR methods have been proposed [1].
Such methods are attractive for use in CADx schemes for two main reasons: first, they provide visualization of
high-dimensional data structure; second, since DR can be unsupervised or semi-supervised, unlabeled ("truth-unknown")
data may be incorporated [2]. However, the practical application of state-of-the-art DR techniques, such as t-SNE [3], to
breast CADx was inhibited by the inability to retain a parametric embedding function capable of mapping new input
data to the reduced representation. Deep (more than one hidden layer) neural networks can be used to learn such
parametric DR embeddings. We explored the feasibility of such methods for use in CADx by conducting a variety of
experiments using simulated feature data, including models based on breast CADx features. Specifically, we
investigated the unsupervised parametric t-SNE [4] (pt-SNE), the supervised deep t-distributed MCML [5] (dt-MCML), and
hybrid semi-supervised modifications combining the two.
The impact of motion correction on lesion characterization in DCE breast MR images
In the context of dynamic contrast enhanced breast MR imaging we analyzed the effect of motion compensating
registration on the characterization of lesions. Two registration techniques were applied: 1) rigid registration
and 2) elastic registration based on the Navier-Lamé equation. Interpreting voxels that exhibit a decline in
image intensity after contrast injection (compared to the non-contrasted native image) as motion outliers, it
can be shown that the rate of motion outliers can be largely reduced by both rigid and elastic registration.
The performance of lesion features, including maximal signal enhancement ratio and variance of the signal
enhancement ratio, was measured by area under the ROC curve as well as Cohen's κ and showed significant
improvement for elastic registration, whereas features derived from rigidly registered images did not in general
exhibit a significant improvement over the level of unregistered data.
Incorporating domain knowledge for tubule detection in breast histopathology using O'Callaghan neighborhoods
An important criterion for identifying complicated objects with multiple attributes is the use of domain knowledge
which reflects the precise spatial linking of the constituent attributes. Hence, simply detecting the presence of
the low-level attributes that constitute the object, even in cases where these attributes might be detected in
spatial proximity to each other is usually not a robust strategy. The O'Callaghan neighborhood is an ideal
vehicle for characterizing objects comprised of multiple attributes spatially connected to each other in a precise
fashion because it allows for modeling and imposing spatial distance and directional constraints on the object
attributes. In this work we apply the O'Callaghan neighborhood to the problem of tubule identification on
hematoxylin and eosin (H & E) stained breast cancer (BCa) histopathology, where a tubule is characterized by
a central lumen surrounded by cytoplasm and a ring of nuclei around the cytoplasm. The detection of tubules
is important because tubular density is an important predictor in cancer grade determination. In the context of
ER+ BCa, grade has been shown to be strongly linked to disease aggressiveness and patient outcome. The more
standard pattern recognition approaches to detection of complex objects typically involve training classifiers for
low-level attributes individually. For tubule detection, the spatial proximity of lumen, cytoplasm, and nuclei
might suggest the presence of a tubule. However such an approach could also suffer from false positive errors
due to the presence of fat, stroma, and other lumen-like areas that could be mistaken for tubules. In this work,
tubules are identified by imposing, via O'Callaghan neighborhoods, spatial and distance constraints between each
lumen and its surrounding ring of nuclei. Cancer nuclei in each image are found via a color deconvolution
scheme, which isolates the hematoxylin stain, thereby enabling automated detection of individual cell nuclei. The
potential lumen areas are segmented using a Hierarchical Normalized Cut (HNCut) initialized Color Gradient
based Active Contour model (CGAC). The HNCut algorithm detects lumen-like areas within the image via pixel
clustering across multiple image resolutions. The pixel clustering provides initial contours for the CGAC. From
the initial contours, the CGAC evolves to segment the boundaries of the potential lumen areas. A set of 22
graph-based image features characterizing the spatial linking between the tubular attributes is extracted from
the O'Callaghan neighborhood in order to distinguish true from false lumens. Evaluation on 1226 potential
lumen areas from 14 patient studies produces an area under the receiver operating characteristic curve (AUC)
of 0.91 along with the ability to classify true lumen with 86% accuracy. In comparison to manual grading of
tubular density over 105 images, our method is able to distinguish histopathology images with low and high
tubular density at 89% accuracy (AUC = 0.94).
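The color deconvolution step that isolates the hematoxylin stain can be sketched with the standard Ruifrok-Johnston stain matrix; the abstract does not give the exact stain vectors used, so the values below are the common reference ones, an assumption of this sketch.

```python
import numpy as np

# Reference H&E stain vectors in optical-density space (Ruifrok & Johnston);
# rows: hematoxylin, eosin, residual. These are common defaults, not
# necessarily the values used by the authors.
STAINS = np.array([[0.650, 0.704, 0.286],
                   [0.072, 0.990, 0.105],
                   [0.268, 0.570, 0.776]])

def color_deconvolution(rgb):
    """Unmix an RGB H&E image (values in 0..255) into per-stain density maps.
    Channel 0 of the result is the hematoxylin density used for nuclei detection."""
    M = STAINS / np.linalg.norm(STAINS, axis=1, keepdims=True)
    od = -np.log((np.asarray(rgb, dtype=float) + 1.0) / 256.0)  # optical density
    conc = od.reshape(-1, 3) @ np.linalg.inv(M)  # OD = conc @ M  =>  conc = OD @ M^-1
    return conc.reshape(np.asarray(rgb).shape)
```

Thresholding the hematoxylin channel then yields candidate nuclei, which feed the O'Callaghan-neighborhood constraints around each candidate lumen.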
Computer-aided detection of breast masses in digital breast tomosynthesis (DBT): improvement of false positive reduction by optimization of object segmentation
DBT is a promising new imaging modality that may improve the sensitivity and specificity for breast cancer detection.
However, DBT can provide only quasi-3D information, with limited resolution along the depth (Z) direction, because
tomosynthesis has only limited angular information for reconstruction. The purpose of this study is to develop a mass
segmentation method for a computer-aided detection system in DBT. A data set of 50 two-view DBTs was collected
with a GE prototype system. We reconstructed the DBTs using a simultaneous algebraic reconstruction technique
(SART). Mass candidates including true and false masses were identified by 3D gradient field analysis. Two-stage 3D
clustering followed by active contour segmentation was applied to a volume of interest (VOI) at each candidate location.
We compared a fixed-Z approach, in which the Z dimension of the VOI was pre-determined, to an adaptive-Z approach,
in which Z was determined by the object diameter (D) on the X-Y plane obtained from the first-stage clustering. We
studied the effect of Z ranging from D to D+8 slices, centered at the central slice, in the second stage. Features were
extracted on the individual slices of the segmented 3D object and averaged over all slices for both approaches. Linear
discriminant analysis with stepwise feature selection was trained with a leave-one-case-out method to differentiate true
from false masses in each feature space. With proper optimization of the adaptive-Z approach, the classification
accuracy was significantly improved (p<0.0001) in comparison with the fixed-Z approach. The improved differentiation
of true from false masses will be useful for false positive reduction in an automated mass detection system.
Novel Applications and Retina
Analysis of adipose tissue distribution using whole-body magnetic resonance imaging
Obesity is an increasing problem in the Western world and triggers diseases like cancer, type 2 diabetes, and
cardiovascular diseases. In recent years, magnetic resonance imaging (MRI) has become a clinically viable method
to measure the amount and distribution of adipose tissue (AT) in the body. However, analysis of MRI images
by manual segmentation is a tedious and time-consuming process. In this paper, we propose a semi-automatic
method to quantify the amount of different AT types from whole-body MRI data with less user interaction.
Initially, body fat is extracted by automatic thresholding. A statistical shape model of the abdomen is then
used to differentiate between subcutaneous and visceral AT. Finally, fat in the bone marrow is removed using
morphological operators. The proposed method was evaluated on 15 whole-body MRI images using manual
segmentation as ground truth for adipose tissue. The resulting overlap for total AT was 93.7% ± 5.5% with a
volumetric difference of 7.3% ± 6.4%. Furthermore, we tested the robustness of the segmentation results with regard
to the initial, interactively defined position of the shape model. In conclusion, the developed method proved
suitable for the analysis of AT distribution from whole-body MRI data. For large studies, a fully automatic
version of the segmentation procedure is expected in the near future.
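The initial automatic thresholding of body fat can be illustrated with Otsu's method, a common choice for this step (the abstract does not name the exact algorithm, so this is an assumption of the sketch).

```python
import numpy as np

def otsu_threshold(values, bins=128):
    """Otsu's automatic threshold: pick the histogram cut that maximizes
    the between-class variance of the two resulting intensity classes."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)              # class-0 probability up to each cut
    m = np.cumsum(p * centers)     # class-0 cumulative mean (unnormalized)
    mt = m[-1]                     # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_b = (mt * w0 - m) ** 2 / (w0 * (1.0 - w0))
    var_b[~np.isfinite(var_b)] = 0.0
    return centers[np.argmax(var_b)]
```

Voxels above the threshold would be labeled as adipose tissue, before the shape model separates subcutaneous from visceral fat.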
Computer-aided abdominal lymph node detection using contrast-enhanced CT images
Many malignant processes cause abdominal lymphadenopathy, and computed tomography (CT) has become the primary
modality for its detection. A lymph node is considered enlarged (swollen) if it is more than 1 centimeter in diameter.
Which lymph nodes are swollen depends on the type of disease and the body parts involved. Identifying their locations is
very important to determine the possible cause. In the current clinical workflow, the detection and diagnosis of enlarged
lymph nodes is usually performed manually by examining all slices of CT images, which can be error-prone and time
consuming. A 3D blob enhancement filter is a common approach to computer-aided node detection. We propose an improved
blob detection method for automatic lymph node detection in contrast-enhanced abdominal CT images. First, the spine was
automatically extracted to indicate the abdominal region. Since lymph nodes are usually adjacent to blood vessels, abdominal
blood vessels were then segmented as a reference to set the search region for lymph nodes. Next, lymph node candidates
were generated by object-scale Hessian analysis. Finally the detected candidates were segmented and some prior
anatomical knowledge was utilized for false positive reduction. We applied our method to 9 patients with 11 enlarged
lymph nodes and compared the results with the performance of the original multi-scale Hessian analysis. The
sensitivities were 91% and 82% for our method and multi-scale Hessian analysis, respectively. The false positive rates
per patient were 17 and 28 for our method and multi-scale Hessian analysis, respectively. Our results indicated that
computer-aided lymph node detection with this blob detector may yield a high sensitivity and a relatively low FP rate in
abdominal CT.
Linked statistical shape models for multi-modal segmentation of the prostate on MRI-CT for radiotherapy planning
We present a novel framework for building a linked statistical shape model (LSSM), a statistical shape model
(SSM) that links the shape variation of a structure of interest (SOI) across multiple imaging modalities. This
framework is particularly relevant in scenarios where accurate delineations of a SOI's boundary on one of the
modalities may not be readily available, or difficult to obtain, for training a SSM. We apply the LSSM in the
context of multi-modal prostate segmentation for radiotherapy planning, where we segment the prostate on MRI
and CT simultaneously. Prostate capsule segmentation is a critical step in prostate radiotherapy planning, where
dose plans have to be formulated on CT. Since accurate delineations of the prostate boundary are very difficult
to obtain on CT, pre-treatment MRI is now beginning to be acquired at several medical centers. Delineation of
the prostate on MRI is acknowledged as being significantly simpler to do compared to CT. Hence, our framework
incorporates multi-modal registration of MRI and CT to map 2D boundary delineations of prostate (obtained
from an expert radiation oncologist) on MR training images onto corresponding CT images. The delineations
of the prostate capsule on MRI and CT allow for 3D reconstruction of the prostate shape, which facilitates the
building of the LSSM. We acquired 7 MRI-CT patient studies and used the leave-one-out strategy to train and
evaluate our LSSM (fLSSM), built using expert ground truth delineations on MRI and MRI-CT fusion derived
capsule delineations on CT. A unique attribute of our fLSSM is that it does not require expert delineations of
the capsule on CT. In order to perform prostate MRI segmentation using the fLSSM, we employed a region-based approach where we deformed the evolving prostate boundary to optimize a mutual-information-based cost criterion, which took into account region-based intensity statistics of the image being segmented. The
final prostate segmentation was then transferred onto the CT image using the LSSM. We compared our fLSSM
against another LSSM (xLSSM), where, unlike the fLSSM, expert delineations of the capsule on both MRI and
CT were employed in the model building; xLSSM representing the idealized LSSM. We also compared our fLSSM
against an exclusive CT-based SSM (ctSSM), built from expert delineations of capsule on CT only. Due to the
intensity-driven nature of the segmentation algorithm, the ctSSM was not able to segment the prostate. On MRI,
the xLSSM and fLSSM yielded almost identical results. On CT, our results suggest that the fLSSM, while
not dependent on highly accurate delineations of the capsule on CT, yields comparable results to an idealized
LSSM scheme (xLSSM). Hence, the fLSSM provides an accurate alternative to SSMs that require careful SOI
delineations that may be difficult or laborious to obtain, while providing concurrent segmentations of the capsule
on multiple modalities.
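As background, the statistical shape models that an LSSM links are typically point-distribution models built by PCA over aligned landmark shapes. A minimal sketch under that assumption, with toy data standing in for training shapes (this is generic SSM machinery, not the authors' implementation):

```python
import numpy as np

def build_ssm(shapes):
    """Build a point-distribution SSM from pre-aligned landmark shapes.

    shapes: (n_shapes, n_landmarks * dim) array; each row is one
    training shape flattened as [x1, y1, x2, y2, ...].
    Returns the mean shape, principal modes, and per-mode variances.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # PCA via SVD of the centered data matrix
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    variances = s ** 2 / (len(shapes) - 1)
    return mean, vt, variances

def synthesize(mean, modes, b):
    """Generate a new shape from mode coefficients b."""
    return mean + b @ modes[:len(b)]

# toy example: 5 training "shapes", each with 4 2-D landmarks
rng = np.random.default_rng(0)
shapes = rng.normal(size=(5, 8))
mean, modes, var = build_ssm(shapes)
new_shape = synthesize(mean, modes, np.array([0.5, -0.2]))
```

An LSSM would concatenate the corresponding MRI and CT landmark vectors per patient before this PCA step, so that one set of coefficients drives both shapes.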
Sampling-based ensemble segmentation against inter-operator variability
Show abstract
Inconsistency and a lack of reproducibility are commonly associated with semi-automated segmentation methods. In this
study, we developed an ensemble approach to improve reproducibility and applied it to glioblastoma multiforme (GBM)
brain tumor segmentation on T1-weighted contrast-enhanced MR volumes. The proposed approach combines sampling-based simulations and ensemble segmentation into a single framework; it generates a set of segmentations by perturbing
user initialization and user-specified internal parameters, then fuses the set of segmentations into a single consensus
result. Three combination algorithms were applied: majority voting, averaging and expectation-maximization (EM). The
reproducibility of the proposed framework was evaluated by a controlled experiment on 16 tumor cases from a multicenter
drug trial. The ensemble framework had significantly better reproducibility than the individual base Otsu
thresholding method (p<.001).
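Of the three combination algorithms, majority voting and averaging are straightforward to express per voxel. A minimal numpy sketch (the EM-based fusion is omitted, and the toy 1-D "segmentations" below are illustrative):

```python
import numpy as np

def majority_vote(segmentations):
    """Fuse binary segmentations by per-voxel majority voting.

    segmentations: (n_runs, ...) boolean array, one perturbed
    segmentation per run.
    """
    votes = np.sum(segmentations, axis=0)
    return votes * 2 > len(segmentations)   # strict majority

def average_fusion(segmentations, threshold=0.5):
    """Fuse by averaging per-voxel labels and thresholding."""
    return segmentations.mean(axis=0) >= threshold

# three perturbed runs of a toy 1-D "segmentation"
runs = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 1],
                 [1, 1, 1, 0]], dtype=bool)
consensus = majority_vote(runs)
```

In the perturbation framework, each row would come from one run with jittered initialization or internal parameters, and the consensus mask is the reported result.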
Toward comprehensive detection of sight threatening retinal disease using a multiscale AM-FM methodology
Show abstract
In the United States and most of the western world, the leading causes of vision impairment and blindness are age-related
macular degeneration (AMD), diabetic retinopathy (DR), and glaucoma. In the last decade, research in automatic
detection of retinal lesions associated with eye diseases has produced several automatic systems for detection and
screening of AMD, DR, and glaucoma. However, advanced, sight-threatening stages of DR and AMD can present with
lesions not commonly addressed by current approaches to automatic screening. In this paper we present an automatic eye
screening system based on multiscale Amplitude Modulation-Frequency Modulation (AM-FM) decompositions that
addresses not only the early stages, but also advanced stages of retinal and optic nerve disease. Ten different experiments
were performed in which abnormal features such as neovascularization, drusen, exudates, pigmentation abnormalities,
geographic atrophy (GA), and glaucoma were classified. The algorithm achieved detection accuracies ranging from 0.77 to 0.98 in area under the ROC curve for a set of 810 images. At a fixed specificity of 0.60, the sensitivity of the
algorithm to the detection of abnormal features ranged between 0.88 and 1.00. Our system demonstrates that, given an
appropriate training set, it is possible to use a unique algorithm to detect a broad range of eye diseases.
Fast localization of optic disc and fovea in retinal images for eye disease screening
Show abstract
Optic disc (OD) and fovea locations are two important anatomical landmarks in automated analysis of retinal disease in
color fundus photographs. This paper presents a new, fast, fully automatic optic disc and fovea localization algorithm
developed for diabetic retinopathy (DR) screening. The optic disc localization methodology comprises two steps. First, the OD location is identified using template matching and a directional matched filter. To reduce false positives due
to bright areas of pathology, we exploit vessel characteristics inside the optic disc. The location of the fovea is estimated
as the point of lowest matched filter response within a search area determined by the optic disc location. Second, optic
disc segmentation is performed. Based on the detected optic disc location, a fast hybrid level-set algorithm which
combines the region information and edge gradient to drive the curve evolution is used to segment the optic disc
boundary. Extensive evaluation was performed on 1200 images (Messidor) composed of 540 images of healthy retinas,
431 images with DR but no risk of macular edema (ME), and 229 images with DR and risk of ME. The OD location
methodology obtained 98.3% success rate, while fovea location achieved 95% success rate. The average mean absolute
distance (MAD) between the OD segmentation algorithm and the "gold standard" is 10.5% of the estimated OD radius.
Qualitatively, 97% of the images achieved Excellent to Fair performance for OD segmentation. The segmentation
algorithm performs well even on blurred images.
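The template-matching step of OD localization can be sketched with brute-force normalized cross-correlation. This is illustrative only (real systems use FFT-based matching, and the paper additionally exploits a directional matched filter and vessel cues):

```python
import numpy as np

def ncc_map(image, template):
    """Normalized cross-correlation of a template over an image."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    out = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * tn
            out[r, c] = (p * t).sum() / denom if denom > 0 else 0.0
    return out

# embed a bright-disc-like template in a synthetic image at (5, 11)
image = np.zeros((20, 20))
template = np.zeros((4, 4))
template[1:3, 1:3] = 1.0
image[5:9, 11:15] = template
m = ncc_map(image, template)
row, col = np.unravel_index(np.argmax(m), m.shape)
```

The correlation peak recovers the embedded location; in practice pathology-induced bright areas produce competing peaks, which is why vessel characteristics inside the candidate disc are checked.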
Machine Learning
Texture feature selection with relevance learning to classify interstitial lung disease patterns
Show abstract
The Generalized Matrix Learning Vector Quantization (GMLVQ) is used to estimate the relevance of texture
features in their ability to classify interstitial lung disease patterns in high-resolution computed tomography
(HRCT) images. After a stochastic gradient descent, the GMLVQ algorithm provides a discriminative distance
measure of relevance factors, which can account for pairwise correlations between different texture features and
their importance for the classification of healthy and diseased patterns. Texture features were extracted from
gray-level co-occurrence matrices (GLCMs), and were ranked and selected according to their relevance obtained
by GMLVQ and, for comparison, according to a mutual information (MI) criterion. A k-nearest-neighbor (kNN) classifier and a Support Vector Machine with a radial basis function kernel (SVMrbf) were optimized in a 10-fold cross-validation for different texture feature sets. In our experiment with real-world data, the feature sets selected by
the GMLVQ approach had a significantly better classification performance compared with feature sets selected
by a MI ranking.
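For reference, GLCM texture features of the kind ranked by GMLVQ can be computed as follows. This is a generic sketch with a single one-pixel offset and a toy patch, not the authors' exact feature set:

```python
import numpy as np

def glcm(image, levels, offset=(0, 1)):
    """Symmetric, normalized gray-level co-occurrence matrix."""
    dr, dc = offset
    m = np.zeros((levels, levels))
    rows, cols = image.shape
    for r in range(max(0, -dr), rows - max(0, dr)):
        for c in range(max(0, -dc), cols - max(0, dc)):
            m[image[r, c], image[r + dr, c + dc]] += 1
    m += m.T                      # make symmetric
    return m / m.sum()            # normalize to joint probabilities

def glcm_features(p):
    """A few classic Haralick-style statistics of a GLCM."""
    i, j = np.indices(p.shape)
    return {
        "contrast": np.sum(p * (i - j) ** 2),
        "energy": np.sum(p ** 2),
        "homogeneity": np.sum(p / (1.0 + np.abs(i - j))),
    }

patch = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 2, 2, 2],
                  [2, 2, 3, 3]])
feats = glcm_features(glcm(patch, levels=4))
```

A full feature vector would repeat this over several offsets and directions; GMLVQ then learns a relevance matrix over those entries.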
A robust independent component analysis (ICA) model for functional magnetic resonance imaging (fMRI) data
Show abstract
The coupling of carefully designed experiments with proper analysis of functional magnetic resonance imaging (fMRI)
data provides us with a powerful as well as noninvasive tool to help us understand cognitive processes associated with
specific brain regions and hence could be used to detect abnormalities induced by a diseased state. The hypothesis-driven General Linear Model (GLM) and the data-driven Independent Component Analysis (ICA) model are the two most commonly used models for fMRI data analysis. A hybrid ICA-GLM model combines the two to take advantage of the strengths of both and achieve more accurate mapping of the stimulus-induced activated brain
regions. We propose a modified hybrid ICA-GLM model with probabilistic ICA that includes a noise model. In this
modified hybrid model, a probabilistic principle component analysis (PPCA)-based component number estimation is
used in the ICA stage to extract the intrinsic number of original time courses. In addition, frequency matching is
introduced into the time course selection stage, along with temporal correlation, F-test based model fitting estimation,
and time course combination, to produce a more accurate design matrix for GLM. A standard fMRI dataset is used to
compare the results of applying GLM and the proposed hybrid ICA-GLM in generating activation maps.
Manifold learning for dimensionality reduction and clustering of skin spectroscopy data
Show abstract
Diagnosis of benign and malignant skin lesions currently relies mostly on visual assessment and frequent biopsies
performed by dermatologists. As the timely and correct diagnosis of these skin lesions is one of the most important
factors in the therapeutic outcome, leveraging new technologies to assist the dermatologist seems natural. Optical
spectroscopy is a technology that is being established to aid skin lesion diagnosis, as the multi-spectral nature of this
imaging method allows detection of physiological changes such as those associated with increased vasculature, cellular
structure, oxygen consumption or edema in tumors. However, spectroscopy data is typically very high dimensional (on
the order of thousands), which causes difficulties in visualization and classification. In this work we apply different
manifold learning techniques to reduce the dimensions of the input data and get clustering results. Spectroscopic data of
48 patients with suspicious and actually malignant lesions was analyzed using ISOMAP, Laplacian Eigenmaps and
Diffusion Maps with varying parameters and compared to results using PCA. Using optimal parameters, both ISOMAP
and Laplacian Eigenmaps could cluster the data into suspicious and malignant groups with 96% accuracy relative to the diagnosis of the treating physicians.
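The PCA baseline against which the manifold methods are compared projects each spectrum onto its top principal components. A minimal SVD-based sketch, with synthetic data standing in for the roughly thousand-dimensional spectra (the manifold methods themselves additionally require neighborhood graphs and are not shown):

```python
import numpy as np

def pca_reduce(X, n_components=2):
    """Project high-dimensional spectra onto top principal components."""
    Xc = X - X.mean(axis=0)
    # SVD-based PCA avoids forming the large covariance matrix
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T

# toy "spectra": 48 samples with 1000 spectral channels
rng = np.random.default_rng(1)
spectra = rng.normal(size=(48, 1000))
embedding = pca_reduce(spectra, n_components=2)
```

The resulting 2-D embedding is what gets inspected for clustering into suspicious versus malignant groups.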
A cost constrained boosting algorithm for fast lesion detection and segmentation
Show abstract
Machine learning techniques like pointwise classification are widely used for object detection and segmentation.
However, for large search spaces like CT images, this approach becomes computationally very demanding. Designing
strong yet compact classifiers is thus of great importance for systems intended for clinical use, where time is a limiting factor and runtime weighs heavily in the decision about a system's application. In this paper we propose a novel technique for reducing the computational complexity of voxel
classification systems based on the well-known AdaBoost algorithm in general and Probabilistic Boosting Trees
in particular. We describe a means of incorporating a measure of hypothesis complexity into the optimization
process, resulting in classifiers with lower evaluation cost. More specifically, in our approach the hypothesis
generation that is performed during the AdaBoost training is no longer based only on the error of a hypothesis
but also on its complexity. This leads to a reduced overall classifier complexity and thus shorter evaluation
times. The validity of the approach is shown in an experimental evaluation. In a cross validation experiment,
a system for automatic segmentation of liver tumors in CT images, that is based on the Probabilistic Boosting
Tree, was trained with and without the proposed extension. In this preliminary study, the evaluation cost for
classifying previously unseen samples could be reduced by 83% using the methods described here without losing
classification accuracy.
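The key idea, selecting weak hypotheses by weighted error plus an evaluation-cost term rather than error alone, can be sketched with decision stumps over features of unequal cost. The additive penalty form and the weight `lam` are illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np

def best_stump(X, y, w, costs, lam=0.1):
    """Pick the decision stump minimizing weighted error plus a
    cost penalty; `lam` trades accuracy against evaluation cost."""
    best = None
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for sign in (1, -1):
                pred = np.where(X[:, f] >= thr, sign, -sign)
                err = w[pred != y].sum()
                score = err + lam * costs[f]
                if best is None or score < best[0]:
                    best = (score, f, thr, sign, err)
    return best

X = np.array([[0.1, 5.0], [0.4, 4.0], [0.6, 1.0], [0.9, 0.5]])
y = np.array([-1, -1, 1, 1])
w = np.full(4, 0.25)            # uniform boosting weights
costs = np.array([1.0, 10.0])   # feature 1 is expensive to compute
score, feat, thr, sign, err = best_stump(X, y, w, costs)
```

Both features separate this toy data perfectly, but the cost term steers selection to the cheap feature; run inside an AdaBoost loop, this yields classifiers with lower evaluation cost, which is the effect the paper reports.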
Colon and Other Gastrointestinal CAD
Probabilistic method for context-sensitive detection of polyps in CT colonography
Show abstract
Radiologists can outperform computer-aided detection (CAD) systems for CT colonography, because they consider
not only local characteristics but also the context of findings. In particular, isolated findings are considered
as more suspicious than clustered ones. We developed a computational method to model this problem-solving
technique for reducing false-positive (FP) CAD detections in CT colonography. Lesion likelihood was estimated
from shape and texture features of each candidate detection by use of a Bayesian neural network. Context features
were calculated to characterize the distribution of candidate detections in a local neighborhood. A belief
network was applied to detect isolated candidates at a higher sensitivity than clustered ones. The detection performances
of the context-sensitive CAD and a conventional CAD were compared by use of leave-one-patient-out
evaluation on 73 patients. The conventional CAD detected 82% of the lesions 6-9 mm in size with a median of 6 false positives per CT scan, whereas the context-sensitive CAD detected the lesions at a median of 4 false positives with a significant improvement in overall detection performance. For lesions ≥10 mm in size, the detection sensitivity
was 98% with a median of 7 false positives per patient, but the improvement in detection performance was not
significant.
Detection of longitudinal ulcer using roughness value for computer aided diagnosis of Crohn's disease
Show abstract
The purpose of this paper is to present a new method to detect ulcers, one of the symptoms of Crohn's disease, from CT images. Crohn's disease is an inflammatory disease of the digestive tract that commonly affects the small intestine.
An optical or a capsule endoscope is used for small intestine examinations. However, these endoscopes cannot pass through intestinal stenosis parts in some cases. A CT image based diagnosis allows a physician to observe the whole intestine even if intestinal stenosis exists. However, because of the complicated shape of the small and large intestines, understanding the shapes of the intestines and the lesion positions is difficult in CT image based diagnosis. A computer-aided diagnosis system for Crohn's disease with automated lesion detection is required for efficient diagnosis. We propose an automated method to detect ulcers from CT images. Longitudinal ulcers roughen the surface of the small and large intestinal wall; this rough surface consists of a combination of convex and concave parts on the intestinal wall.
We detect convex and concave parts on the intestinal wall with blob and inverse-blob structure enhancement filters. Many convex and concave parts concentrate on the roughened regions. We introduce a roughness value to differentiate convex and concave parts concentrated on the roughened regions from the others on the intestinal wall. The roughness value effectively reduces false positives in ulcer detection. Experimental results showed that the proposed method can detect convex and concave parts on the ulcers.
3D supine and prone colon registration for computed tomographic colonography scans based on graph matching
Show abstract
In this paper, we propose a new registration method for supine and prone computed tomographic colonography scans
based on graph matching. We first formulated 3D colon registration as a graph matching problem and utilized a graph
matching algorithm based on mean field theory. During the iterative optimization process, one-to-one matching
constraints were added to the system step by step. Prominent matching pairs found in previous iterations were used to
guide subsequent mean field calculations. The advantage of the proposed method is that it does not require a colon
centerline for registration. We tested the algorithm on a CTC dataset of 19 patients with 19 polyps. The average
registration error of the proposed method was 4.0 cm (std. 2.1 cm). The 95% confidence interval was [3.0 cm, 5.0 cm].
There was no significant difference between the proposed method and our previous method based on the normalized
distance along the colon centerline (p=0.1).
Computer-aided teniae coli detection using height maps from computed tomographic colonography images
Show abstract
Computed tomographic colonography (CTC) is a minimally invasive technique for colonic polyps and cancer screening.
Teniae coli are three bands of longitudinal smooth muscle on the colon surface. They are parallel, equally distributed on
the colon wall, and form a triple helix structure from the appendix to the sigmoid colon. Because of their characteristics,
teniae coli are important, anatomically meaningful landmarks on the human colon. This paper proposes a novel method for
teniae coli detection on CT colonography. We first unfold the three-dimensional (3D) colon using a reversible projection
technique and compute the two-dimensional (2D) height map of the unfolded colon. The height map records the
elevation of the colon surface relative to the unfolding plane, where haustral folds correspond to high-elevation points and teniae to low-elevation points. The teniae coli are detected on the height map and then projected back to the 3D colon.
Since teniae are located where the haustral folds meet, we break down the problem by first detecting haustral folds. We
apply 2D Gabor filter banks to extract fold features. The maximum response of the filter banks is then selected as the
feature image. The fold centers are then identified based on piecewise thresholding on the feature image. Connecting the
fold centers yields a path of the folds. Teniae coli are finally extracted as lines running between the fold paths.
Experiments were carried out on 7 cases. The proposed method yielded a promising result with an average normalized
RMSE of 5.66% and standard deviation of 4.79% of the circumference of the colon.
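The fold-feature step, taking the per-pixel maximum response over an oriented Gabor filter bank, might look like the following in two dimensions. Kernel parameters and the toy ridge image are illustrative, not the paper's settings:

```python
import numpy as np

def gabor_kernel(theta, sigma=2.0, lam=4.0, size=9):
    """Real 2-D Gabor kernel at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) \
        * np.cos(2 * np.pi * xr / lam)
    return g - g.mean()          # zero-mean so flat regions respond ~0

def max_gabor_response(image, n_orient=8):
    """Per-pixel maximum over a bank of oriented Gabor filters, as used
    to highlight ridge-like structures such as haustral folds."""
    responses = []
    for k in range(n_orient):
        kern = gabor_kernel(np.pi * k / n_orient)
        pad = kern.shape[0] // 2
        padded = np.pad(image, pad)
        out = np.zeros_like(image, dtype=float)
        for r in range(image.shape[0]):
            for c in range(image.shape[1]):
                out[r, c] = np.sum(
                    padded[r:r + 2 * pad + 1, c:c + 2 * pad + 1] * kern)
        responses.append(out)
    return np.max(responses, axis=0)

# vertical ridge: the strongest response should sit on the ridge
img = np.zeros((16, 16))
img[:, 8] = 1.0
feat = max_gabor_response(img)
```

Thresholding such a feature image and connecting the resulting fold centers is what yields the fold paths between which the teniae are traced.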
Temporal volume flow: an approach to tracking failure recovery
Show abstract
The simultaneous use of pre-segmented CT colonoscopy images and optical colonoscopy images during routine
endoscopic procedures provides useful clinical information to the gastroenterologist. Blurry images in the video
stream can cause the tracking system to fail during the procedure, due to the endoscope touching the colon wall
or a polyp. The ability to recover from such failures is necessary to continually track images, and goes towards
building a robust tracking system. Identifying similar images before and after the blurry sequence is central to
this task.
In this work, we propose a Temporal Volume Flow (TVF) based approach to search for a similar image
pair before and after blurry sequences in the optical colonoscopy video. TVF employs nonlinear intensity and
gradient constancy models, as well as a discontinuity-preserving smoothness constraint to formulate an energy
function; minimizing this function between two temporal volumes before and after the blurry sequence results
in an estimate of TVF. A voting approach is then used to determine an image pair with the maximum number
of point correspondences. The region flow algorithm [10] is applied to the selected image pair to determine camera
motion parameters.
We applied our algorithm to three optical colonoscopy sequences. The first sequence had 235 images in
the ascending colon, and 12 blurry images. The image pair selected by TVF decreases the rotation error of
the tracking results using the region flow algorithm. Similar results were observed in the second patient in the
descending colon, containing 535 images and 24 blurry images. The third sequence contained 580 images in the
descending colon and 172 blurry images. The region flow method failed in this case due to improper image pair
selection; using TVF to determine the image pair allowed the system to successfully recover from the blurry
sequence.
On-the-fly detection of images with gastritis aspects in magnetically guided capsule endoscopy
P. W. Mewes,
D. Neumann,
A. Lj. Juloski,
et al.
Show abstract
Capsule Endoscopy (CE) was introduced in 2000 and has since become an established diagnostic procedure for
the small bowel, colon and esophagus. For the CE examination the patient swallows the capsule, which then
travels through the gastrointestinal tract under the influence of the peristaltic movements. CE is not indicated
for stomach examination, as the capsule movements cannot be controlled from the outside and the entire surface of the stomach cannot be reliably covered. Magnetically-guided capsule endoscopy (MGCE) was introduced in
2010. For the MGCE procedure the stomach is filled with water and the capsule is navigated from the outside
using an external magnetic field. During the examination the operator can control the motion of the capsule
in order to obtain a sufficient number of stomach-surface images with diagnostic value. The quality of the
examination depends on the skill of the operator and his ability to detect aspects of interest in real time. We
present a novel computer-assisted diagnostic-procedure (CADP) algorithm for indicating gastritis pathologies in
the stomach during the examination. Our algorithm is based on pre-processing methods and feature vectors that
are suitably chosen for the challenges of the MGCE imaging (suspended particles, bubbles, lighting). An image
is classified using an AdaBoost-trained classifier. For the classifier training, a number of possible features were
investigated. Statistical evaluation was conducted to identify relevant features with discriminative potential.
The proposed algorithm was tested on 12 video sequences stemming from 6 volunteers. A mean detection rate
of 91.17% was achieved during leave-one-out cross-validation.
Breast Imaging III
Multiscale quantification of tissue spiculation and distortion for detection of architectural distortion and spiculated mass in mammography
Zhiqiang Lao,
Xin Zheng
Show abstract
This paper proposes a multiscale method to quantify tissue spiculation and distortion in mammography CAD systems, aiming to improve sensitivity in detecting architectural distortion and spiculated masses. This approach addresses
the difficulty of predetermining the neighborhood size for feature extraction in characterizing lesions demonstrating
spiculated mass/architectural distortion that may appear in different sizes. The quantification is based on the recognition
of tissue spiculation and distortion pattern using multiscale first-order phase portrait model in texture orientation field
generated by Gabor filter bank. A feature map is generated based on the multiscale quantification for each mammogram
and two features are then extracted from the feature map. These two features will be combined with other mass features
to provide enhanced discriminative ability in detecting lesions demonstrating spiculated mass and architectural distortion.
The efficiency and efficacy of the proposed method are demonstrated with results obtained by applying the method to
over 500 cancer cases and over 1000 normal cases.
Computer aided detection of breast masses in mammography using support vector machine classification
Show abstract
The reduction of false positive marks in breast mass CAD is an active area of research. Typically, the problem
can be approached either by developing more discriminative features or by employing different classifier designs.
Usually one intends to find an optimal combination of classifier configuration and a small number of features to
ensure high classification performance and a robust model with good generalization capabilities.
In this paper, we investigate the potential benefit of relying on a support vector machine (SVM) classifier
for the detection of masses. The evaluation is based on a 10-fold cross validation over a large database of screen
film mammograms (10397 images). The purpose of this study is twofold: first, we assess the SVM performance
compared to neural networks (NNet), k-nearest neighbor classification (k-NN) and linear discriminant analysis
(LDA). Second, we study the classifiers' performances when using a set of 30 and a set of 73 region-based
features. The CAD performance is quantified by the mean sensitivity over the range of 0.05 to 1 false positives per exam on the free-response receiver operating characteristic (FROC) curve.
The best mean exam sensitivities found were 0.545, 0.636, 0.648, 0.675 for LDA, k-NN, NNet and SVM.
K-NN and NNet proved to be stable against variation of the feature sets. Conversely, LDA and SVM exhibited
an increase in performance when adding more features. It is concluded that with an SVM a more pronounced
reduction of false positives is possible, given that a large number of cases and features are available.
Computerized prediction of breast cancer risk: comparison between the global and local bilateral mammographic tissue asymmetry
Show abstract
We have developed and preliminarily tested a new breast cancer risk prediction model based on computerized
bilateral mammographic tissue asymmetry. In this study, we investigated and compared the performance difference of
our risk prediction model when the bilateral mammographic tissue asymmetry features were extracted in two different ways, namely from (1) the entire breast area and (2) mirror-matched local strips between the left and right breasts. A
testing dataset including bilateral craniocaudal (CC) view images of 100 negative and 100 positive cases for developing
breast abnormalities or cancer was selected from a large and diverse full-field digital mammography (FFDM) image
database. To detect bilateral mammographic tissue asymmetry, a set of 20 initial "global" features was extracted from
the entire breast areas of two bilateral mammograms in CC view and their differences were computed. Meanwhile, a
pool of 16 local histogram-based statistical features was computed from eight mirror-matched strips between the left and
right breast. Using a genetic algorithm (GA) to select optimal features, two artificial neural networks (ANN) were built
to predict the risk of a test case developing cancer. Using the leave-one-case-out training and testing method, the two GA-optimized ANNs yielded areas under the receiver operating characteristic (ROC) curve of 0.754±0.024 (using feature
differences extracted from the entire breast area) and 0.726±0.026 (using the feature differences extracted from 8 pairs of
local strips), respectively. The risk prediction model using either ANN is able to detect 58.3% (35/60) of cancer cases 6
to 18 months earlier at 80% specificity level. This study compared two methods to compute bilateral mammographic
tissue asymmetry and demonstrated that bilateral mammographic tissue asymmetry was a useful breast cancer risk
indicator with high discriminatory power.
A comparison study of textural features between FFDM and film mammogram images
Show abstract
In this work, we conducted an imaging study to make a direct, quantitative comparison of image features measured by
film and full-field digital mammography (FFDM). We acquired images of cadaver breast specimens containing
simulated microcalcifications using both a GE digital mammography system and a screen-film system. To quantify the
image features, we calculated and compared a set of 12 texture features derived from spatial gray-level dependence
matrices. Our results demonstrate a high degree of agreement between film and FFDM, with the correlation
coefficient of the feature vector (formed by the 12 textural features) being 0.9569 between the two; in addition, a paired
sign test reveals no significant difference between film and FFDM features. These results indicate that textural features
may be interchangeable between film and FFDM for CAD algorithms.
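The two comparisons reported, a correlation coefficient between the feature vectors and a paired sign test, can be reproduced on toy data. The values below are synthetic stand-ins, not the study's measurements:

```python
import math
import numpy as np

def sign_test_p(a, b):
    """Two-sided paired sign test: under H0 the sign of each paired
    difference is a fair coin flip; ties are discarded."""
    diffs = np.asarray(a) - np.asarray(b)
    diffs = diffs[diffs != 0]
    n, k = len(diffs), int(np.sum(diffs > 0))
    tail = min(k, n - k)
    # two-sided binomial probability of a result at least this extreme
    p = sum(math.comb(n, i) for i in range(tail + 1)) / 2 ** n \
        + sum(math.comb(n, i) for i in range(n - tail, n + 1)) / 2 ** n
    return min(1.0, p)

# hypothetical paired feature values measured on film and on FFDM
film = np.array([1.2, 0.8, 3.1, 2.2, 0.9, 1.1, 2.8, 1.7, 0.5, 2.0, 1.4, 0.7])
ffdm = np.array([1.3, 0.7, 3.0, 2.3, 1.0, 1.0, 2.9, 1.8, 0.4, 2.1, 1.5, 0.8])
r = np.corrcoef(film, ffdm)[0, 1]
p = sign_test_p(film, ffdm)
```

A high correlation together with a non-significant sign-test p-value is the pattern the study interprets as feature interchangeability.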
Mammographic parenchymal texture as an imaging marker of hormonal activity: a comparative study between pre- and post-menopausal women
Show abstract
Mammographic parenchymal texture patterns have been shown to be related to breast cancer risk. Yet, little is known
about the biological basis underlying this association. Here, we investigate the potential of mammographic parenchymal
texture patterns as an inherent phenotypic imaging marker of endogenous hormonal exposure of the breast tissue.
Digital mammographic (DM) images in the cranio-caudal (CC) view of the unaffected breast from 138 women
diagnosed with unilateral breast cancer were retrospectively analyzed. Menopause status was used as a surrogate marker
of endogenous hormonal activity. Retroareolar 2.5 cm² ROIs were segmented from the post-processed DM images using
an automated algorithm. Parenchymal texture features of skewness, coarseness, contrast, energy, homogeneity, grey-level
spatial correlation, and fractal dimension were computed. Receiver operating characteristic (ROC) curve analysis
was performed to evaluate feature classification performance in distinguishing between 72 pre- and 66 post-menopausal
women. Logistic regression was performed to assess the independent effect of each texture feature in predicting
menopause status. ROC analysis showed that texture features have an inherent capacity to distinguish between pre- and
post-menopausal statuses (AUC>0.5, p<0.05). Logistic regression including all texture features yielded an ROC curve
with an AUC of 0.76. Addition of age at menarche, ethnicity, contraception use and hormonal replacement therapy
(HRT) use led to a modest model improvement (AUC=0.78), while texture features maintained a significant contribution
(p<0.05). The observed differences in parenchymal texture features between pre- and post- menopausal women suggest
that mammographic texture can potentially serve as a surrogate imaging marker of endogenous hormonal activity.
Lung Imaging
Classification of pulmonary emphysema from chest CT scans using integral geometry descriptors
Show abstract
To gain insight into the underlying pathways of emphysema and monitor the effect of treatment, methods
to quantify and phenotype the different types of emphysema from chest CT scans are of crucial importance.
Current standard measures rely on density thresholds for individual voxels, an approach that is influenced by inspiration level and does not take the spatial relationship between voxels into account. Measures based on texture analysis
do take the interrelation between voxels into account and therefore might be useful for distinguishing different
types of emphysema. In this study, we propose to use Minkowski functionals combined with rotation invariant
Gaussian features to distinguish between healthy and emphysematous tissue and classify three different types of
emphysema. Minkowski functionals characterize binary images in terms of geometry and topology. In 3D, four
Minkowski functionals are defined. By varying the threshold and the neighborhood size around a voxel, a set of
Minkowski functionals can be defined for each voxel. Ten chest CT scans with 1810 annotated regions were used
to train the method. A set of 108 features was calculated for each training sample from which 10 features were
selected to be most informative. A linear discriminant classifier was trained to classify each voxel in the lungs
into a subtype of emphysema or normal lung. The method was applied to an independent test set of 30 chest
CT scans with varying amounts and types of emphysema with 4347 annotated regions of interest. The method
is shown to perform well, with an overall accuracy of 95%.
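In two dimensions, two of the analogous Minkowski functionals (area and boundary length) can be computed directly and swept over thresholds, mirroring the per-voxel construction described above. This is a simplified sketch; the study works in 3D with four functionals and varying neighborhood sizes:

```python
import numpy as np

def minkowski_2d(binary):
    """Area and boundary length of a 2-D binary image: the 2-D
    analogues of two of the four 3-D Minkowski functionals."""
    b = binary.astype(bool)
    area = int(b.sum())
    # boundary length: count 0/1 transitions between 4-neighbors,
    # plus object pixels lying on the image border
    horiz = np.sum(b[:, 1:] != b[:, :-1])
    vert = np.sum(b[1:, :] != b[:-1, :])
    border = b[0, :].sum() + b[-1, :].sum() + b[:, 0].sum() + b[:, -1].sum()
    return area, int(horiz + vert + border)

def functionals_over_thresholds(patch, thresholds):
    """Threshold sweep: one functional vector per threshold, the
    construction used to build per-voxel feature sets."""
    return [minkowski_2d(patch >= t) for t in thresholds]

patch = np.array([[0, 0, 0, 0],
                  [0, 5, 7, 0],
                  [0, 6, 8, 0],
                  [0, 0, 0, 0]])
feats = functionals_over_thresholds(patch, thresholds=[1, 6, 9])
```

Stacking such vectors over thresholds (and neighborhood sizes) gives the per-voxel geometric/topological descriptors that feed the classifier.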
Lung partitioning for x-ray CAD applications
Show abstract
Partitioning the inside region of the lung into homogeneous regions is a crucial step in any computer-aided diagnosis (CAD) application based on chest X-ray. The ribs, air pockets and clavicle occupy much of the space inside the lung as seen in the chest X-ray PA image, and segmenting the ribs and clavicle to partition the lung into homogeneous regions allows a CAD application to better classify abnormalities. In this paper we present two separate algorithms to segment
ribs and the clavicle bone in a completely automated way. The posterior ribs are segmented based on Phase congruency
features, and the clavicle is segmented using mean curvature features followed by a Radon transform. Both algorithms
work on the premise that each of these anatomical structures, as it presents inside the left and right lung, is
confined to a specific orientation range. The search space for both algorithms is limited to the
region inside the lung, which is obtained by an automated lung segmentation algorithm previously developed in
our group. Both algorithms were tested on 100 images of normal subjects and of patients affected with pneumoconiosis.
Estimating local scaling properties for the classification of interstitial lung disease patterns
Show abstract
Local scaling properties of texture regions were compared in their ability to classify morphological patterns
known as 'honeycombing' that are considered indicative for the presence of fibrotic interstitial lung diseases
in high-resolution computed tomography (HRCT) images. For 14 patients with known occurrence of honeycombing,
a stack of 70 axial, lung-kernel-reconstructed images was acquired from HRCT chest exams. 241
regions of interest of both healthy and pathological (89) lung tissue were identified by an experienced radiologist.
Texture features were extracted using six properties calculated from gray-level co-occurrence matrices (GLCM),
Minkowski Dimensions (MDs), and the estimation of local scaling properties with Scaling Index Method (SIM).
A k-nearest-neighbor (k-NN) classifier and a Multilayer Radial Basis Functions Network (RBFN) were optimized
in a 10-fold cross-validation for each texture vector, and the classification accuracy was calculated on independent
test sets as a quantitative measure of automated tissue characterization. A Wilcoxon signed-rank test was used
to compare two accuracy distributions including the Bonferroni correction. The best classification results were
obtained by the set of SIM features, which performed significantly better than all the standard GLCM and
MD features (p < 0.005) for both classifiers with the highest accuracy (94.1%, 93.7%; for the k-NN and RBFN
classifier, respectively). The best standard texture features were the GLCM features 'homogeneity' (91.8%,
87.2%) and 'absolute value' (90.2%, 88.5%). The results indicate that advanced texture features using local
scaling properties can provide superior classification performance in computer-assisted diagnosis of interstitial
lung diseases when compared to standard texture analysis methods.
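The GLCM features used as the baseline above are standard; a minimal NumPy sketch of a normalized symmetric co-occurrence matrix and the 'homogeneity' feature follows. Function names and the single-offset restriction are illustrative simplifications, not the authors' code:

```python
import numpy as np

def glcm(img, levels, dy=0, dx=1):
    """Normalized symmetric gray-level co-occurrence matrix for one
    non-negative pixel offset (dy, dx)."""
    img = np.asarray(img)
    a = img[:img.shape[0] - dy, :img.shape[1] - dx]  # reference pixels
    b = img[dy:, dx:]                                # offset neighbors
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)          # count co-occurrences
    m = m + m.T                                      # make symmetric
    return m / m.sum()

def glcm_homogeneity(P):
    """Homogeneity (inverse difference) feature of a normalized GLCM."""
    i, j = np.indices(P.shape)
    return float((P / (1.0 + np.abs(i - j))).sum())
```

In practice such features are computed over several offsets and directions and fed, as in the abstract, to a k-NN or RBF-network classifier.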
High-throughput morphometric analysis of pulmonary airways in MSCT via a mixed 3D/2D approach
Show abstract
Asthma and COPD are complex airway diseases with an increased incidence estimated for the next decade.
Today, the mechanisms and relationships between airway structure/physiology and the clinical phenotype and
genotype are not completely understood. We thus lack the tools to predict disease progression or therapeutic
responses. One of the main causes is our limited ability to assess the complexity of airway diseases in large
populations of patients with appropriate controls. Multi-slice computed tomography (MSCT) imaging opened
the way to the non-invasive assessment of airway physiology and structure, but the use of such technology in
large cohorts requires a high degree of automation of the measurements. This paper develops an investigation
framework and the associated image quantification tools for high-throughput analysis of airways in MSCT. A
mixed approach is proposed, combining 3D and cross-section measurements of the airway tree where the user-interaction
is limited to the choice of the desired analysis patterns. Such an approach relies on the fully automated
segmentation of the 3D airway tree, caliber estimation and visualization based on morphologic granulometry,
central axis computation and tree segment selection, cross-section morphometry of airway lumen and wall, and
bronchus longitudinal shape analysis for stenosis/bronchiectasis detection and measure validation. The developed
methodology has been successfully applied to a cohort of 96 patients from a multi-center clinical study of asthma
control in moderate and persistent asthma.
Interactive lung lobe segmentation and correction in tomographic images
Show abstract
Lobe-based quantification of tomographic images is of increasing interest for diagnosing and monitoring lung
pathology. With modern tomography scanners providing data sets with hundreds of slices, manual segmentation
is time-consuming and not feasible in the clinical routine. Especially for patients with severe lung pathology that
are of particular clinical importance, automatic segmentation approaches frequently generate partially inaccurate
or even completely unacceptable results. In this work we present a modality-independent, semi-automated
method that can be used both for generic correction of any existing lung lobe segmentation and for segmentation
from scratch. Intuitive slice-based drawing of fissure parts is used to introduce user knowledge. Internally, the
current fissure is represented as sampling points in 3D space that are interpolated to a fissure surface. Using
morphological processing, a 3D impact region is computed for each user-drawn 2D curve. Based on the curve
and impact region, the updated lobar boundary surface is immediately computed after each interaction step to
provide instant user feedback. The method was evaluated on 25 normal-dose CT scans with a reference standard
provided by a human observer. When segmenting from scratch, the average distance to the reference standard
was 1.6mm using an average of five interactions and 50 seconds of interaction time per case. When correcting
inadequate automatic segmentations, the initial error was reduced from 13.9 to 1.9mm with comparable efforts.
The evaluation shows that both correction of a given segmentation and segmentation from scratch can be
successfully performed with little interaction in a short amount of time.
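The method above interpolates scattered fissure sampling points into a boundary surface; the abstract does not specify the interpolant, so as an assumption this sketch fits a low-order least-squares polynomial surface z = f(x, y) to the points, which is one simple way to obtain a smooth lobar boundary from sparse user-drawn curves:

```python
import numpy as np

def fit_fissure_surface(pts, degree=2):
    """Least-squares polynomial surface z = f(x, y) through scattered
    3-D sampling points (N x 3 array); returns a callable z(x, y).
    Illustrative stand-in for the paper's unspecified interpolation."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])  # design matrix
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return lambda xq, yq: sum(c * xq**i * yq**j
                              for c, (i, j) in zip(coef, terms))
```

After each interaction step, refitting only inside the 3D impact region of the newly drawn curve would keep the update fast enough for instant feedback.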
Enhancing image classification models with multi-modal biomarkers
Show abstract
Currently, most computer-aided diagnosis (CAD) systems rely on image analysis and statistical models to diagnose,
quantify, and monitor the progression of a particular disease. In general, CAD systems have proven to be effective at
providing quantitative measurements and assisting physicians during the decision-making process. As the need for
more flexible and effective CADs continues to grow, questions about how to enhance their accuracy have surged.
In this paper, we show how statistical image models can be augmented with multi-modal physiological values to create
more robust, stable, and accurate CAD systems. In particular, this paper demonstrates how highly correlated blood and
EKG features can be treated as biomarkers and used to enhance image classification models designed to automatically
score subjects with pulmonary fibrosis. In our results, a 3-5% improvement was observed when comparing the
accuracy of CADs that use multi-modal biomarkers with those that use only image features. Our results show that lab
values such as Erythrocyte Sedimentation Rate and Fibrinogen, as well as EKG measurements such as QRS and I:40,
are statistically significant and can provide valuable insights about the severity of the pulmonary fibrosis disease.
Posters: Breast
Lesion classification on breast MRI through topological characterization of morphology over time
Show abstract
Morphological characterization of lesions on dynamic breast MRI exams through texture analysis has typically involved
the computation of gray-level co-occurrence matrices (GLCM), which serve as the basis for second order statistical
texture features. This study aims to characterize lesion morphology through the underlying topology and geometry with
Minkowski Functionals (MF) and investigate the impact of using such texture features extracted dynamically over a time
series in classifying benign and malignant lesions. 60 lesions (28 malignant & 32 benign) were identified and annotated
by experienced radiologists on 54 breast MRI exams of female patients where histopathological reports were available
prior to this investigation. 13 GLCM-derived texture features and 3 MF features were then extracted from lesion ROIs
on all five post-contrast images. These texture features were combined into high dimensional texture feature vectors and
used in a lesion classification task. A fuzzy k-nearest neighbor classifier was optimized using random sub-sampling
cross-validation for each texture feature and the classification performance was calculated on an independent test set
using the area under the ROC curve (AUC); AUC distributions of different features were compared using a Mann-Whitney
U-test. The MF feature 'Area' exhibited significant improvements in classification performance (p<0.05)
when compared to all GLCM-derived features, while the MF feature 'Perimeter' significantly outperformed 12 out of 13
GLCM features (p<0.05) in the lesion classification task. These results show that dynamically tracking
morphological characteristics with topological texture features can contribute to better lesion
classification.
False-positive reduction using RANSAC in mammography microcalcification detection
Show abstract
This paper proposes a method for false-positive reduction in mammography computer aided detection (CAD) systems by
detecting a linear structure (LS) in individual microcalcification (MCC) cluster candidates, which primarily involves
three steps. First, it applies a modified RANSAC algorithm to a region of interest (ROI) that encloses an MCC cluster
candidate to find LS. Second, a peak-to-peak ratio of two orthogonal integral-curves (named the RANSAC feature) is
computed based on the results from the first step. Last, the computed RANSAC feature is, together with other MCC
cancer features, used in a neural network for MCC classification, results of which are compared with the classification
without the RANSAC feature. One thousand (1000) cases were used to train the classifiers and 671 cases were used for
testing. The comparison shows a significant improvement in terms of the reduction of linear-structure-associated
false-positive readings (up to about 40% FP reduction).
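The core RANSAC step, fitting a line to a point cloud by repeated minimal sampling and consensus counting, can be sketched as follows. This is the generic textbook algorithm, not the paper's modified variant, and the names and defaults are illustrative:

```python
import numpy as np

def ransac_line(pts, n_iter=500, tol=1.0, seed=0):
    """RANSAC line fit in 2-D: draw 2 random points, form the line
    through them, count inliers within `tol`, keep the best consensus.
    Returns ((unit_normal, offset), inlier_count) for ax + by = c."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(pts, float)
    best_model, best_inliers = None, -1
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        d = pts[j] - pts[i]
        norm = np.hypot(d[0], d[1])
        if norm == 0.0:
            continue
        n = np.array([-d[1], d[0]]) / norm    # unit normal to the line
        dist = np.abs((pts - pts[i]) @ n)     # point-to-line distances
        n_in = int((dist < tol).sum())
        if n_in > best_inliers:
            best_model, best_inliers = (n, float(n @ pts[i])), n_in
    return best_model, best_inliers
```

In the paper's setting, the candidate points would be the microcalcification centers inside an ROI, and a large consensus set signals a linear (likely vascular) structure rather than a true cluster.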
Automatic identification of pectoralis muscle on digital cranio-caudal-view mammograms
Show abstract
To improve efficiency and reduce human error in the computerized calculation of volumetric breast density, we have
developed an automatic identification process which suppresses the projected region of the pectoralis muscle on digital
CC-view mammograms. The pixels of the pectoralis muscle represent dense tissue that is not related to risk and
will cause an error in the estimated breast density if counted as fibroglandular tissue. The pectoralis muscle on the CC-view
is not always visible and has variable shape and location. Our algorithm robustly detects the existence of the pectoralis in
the image and segments it as a semi-elliptical region that closely matches manually segmented images. We present a
pipeline where adaptive thresholding and distance transforms have been used in the initial pectoralis region identification
process; statistical region growing is applied to explore the region within the identified location aimed at refining the
boundary; and a 2D shape descriptor is developed for the target validation: the segmented region is identified as the
pectoralis muscle if it has a semi-elliptical contour. After the pectoralis muscle is identified, a 1D-FFT filtering is used
for boundary smoothing. Quantitative evaluation was performed by comparing manual segmentation by a trained
operator, and analysis using the algorithm in a set of 174 randomly selected digital mammograms. Use of the algorithm
is shown to improve accuracy in the automatic determination of the volumetric ratio of breast composition by removal of
the pectoralis muscle from both the numerator and denominator. As well, it greatly improves the efficiency and
throughput in large scale volumetric mammographic density studies where previously interaction with an operator was
required to obtain that level of accuracy.
Comparison of breast percent density estimation from raw versus processed digital mammograms
Diane Li,
Sara Gavenonis,
Emily Conant,
et al.
Show abstract
We compared breast percent density (PD%) measures obtained from raw and post-processed digital mammographic
(DM) images. Bilateral raw and post-processed medio-lateral oblique (MLO) images from 81 screening studies were
retrospectively analyzed. Image acquisition was performed with a GE Healthcare DS full-field DM system. Image post-processing
was performed using the PremiumView™ algorithm (GE Healthcare). Area-based breast PD% was estimated
by a radiologist using a semi-automated image thresholding technique (Cumulus, Univ. Toronto). Comparison of breast
PD% between raw and post-processed DM images was performed using the Pearson correlation (r), linear regression,
and Student's t-test. Intra-reader variability was assessed with a repeat read on the same data-set. Our results show that
breast PD% measurements from raw and post-processed DM images have a high correlation (r=0.98, R2=0.95, p<0.001).
Paired t-test comparison of breast PD% between the raw and the post-processed images showed a statistically significant
difference equal to 1.2% (p = 0.006). Our results suggest that the relatively small magnitude of the absolute difference in
PD% between raw and post-processed DM images is unlikely to be clinically significant in breast cancer risk
stratification. Therefore, it may be feasible to use post-processed DM images for breast PD% estimation in clinical
settings. Since most breast imaging clinics routinely use and store only the post-processed DM images, breast PD%
estimation from post-processed data may accelerate the integration of breast density in breast cancer risk assessment
models used in clinical practice.
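The two statistics reported above, the Pearson correlation between raw and processed PD% and the paired t statistic on their differences, can be sketched directly in NumPy (generic formulas, not the authors' analysis scripts):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def paired_t(x, y):
    """Paired t statistic for the mean of the differences x - y."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return float(d.mean() / (d.std(ddof=1) / np.sqrt(len(d))))
```

With n = 81 pairs, the t statistic would be compared against a t distribution with 80 degrees of freedom to obtain the reported p-value.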
Automatic lesion detection and segmentation algorithm on 2D breast ultrasound images
Show abstract
Although X-ray mammography (MG) is the dominant imaging modality, ultrasonography (US), with recent advances in
technology, has proven very useful in the evaluation of breast abnormalities. Unlike MG, however, US requires the
radiologist to review many images for a proper diagnosis. This paper proposes an automatic algorithm for detecting and
segmenting lesions on 2D breast ultrasound images to assist the radiologist. The detection part is based on the Hough
transform with a downsampling process, which is very efficient for sharpening the smooth lesion boundary and for
reducing noise. For the segmentation part, a radial-dependent contrast adjustment (RDCA) method is newly proposed.
RDCA is introduced to overcome the limitation of a Gaussian constraint function: it decreases contrast around the center
of the lesion but increases contrast in proportion to the distance from the center. As a result, the segmentation
algorithm is robust to various lesion shapes. The proposed algorithms may help to detect lesions and to delineate lesion
boundaries efficiently.
Multi-probe-based resonance-frequency electrical impedance spectroscopy for detection of suspicious breast lesions: improving performance using partial ROC optimization
Show abstract
We have developed a multi-probe resonance-frequency electrical impedance spectroscope (REIS) system to detect breast
abnormalities. Based on assessing asymmetry in REIS signals acquired between left and right breasts, we developed
several machine learning classifiers to classify younger women (i.e., under 50YO) into two groups of having high and
low risk for developing breast cancer. In this study, we investigated a new method to optimize performance based on the
area under a selected partial receiver operating characteristic (ROC) curve when optimizing an artificial neural network
(ANN), and tested whether it could improve classification performance. From an ongoing prospective study, we selected
a dataset of 174 cases for whom we have both REIS signals and diagnostic status verification. The dataset includes 66
"positive" cases recommended for biopsy due to detection of highly suspicious breast lesions and 108 "negative" cases
determined by imaging based examinations. A set of REIS-based feature differences, extracted from the two breasts
using a mirror-matched approach, was computed and constituted an initial feature pool. Using a leave-one-case-out
cross-validation method, we applied a genetic algorithm (GA) to train the ANN with an optimal subset of features. Two
optimization criteria were separately used in GA optimization, namely the area under the entire ROC curve (AUC) and
the partial area under the ROC curve, up to a predetermined threshold (i.e., 90% specificity). The results showed that
although the ANN optimized using the entire AUC yielded higher overall performance (AUC = 0.83 versus 0.76), the
ANN optimized using the partial ROC area criterion achieved substantially higher operational performance (i.e.,
increasing the sensitivity from 28% to 48% at 95% specificity and/or from 48% to 58% at 90% specificity).
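The partial-area criterion, the area under the ROC curve restricted to false-positive rates up to a cutoff (e.g. 0.1 for 90% specificity), can be sketched with a generic trapezoidal implementation. This is an illustrative stand-in for the GA fitness function, not the authors' code:

```python
import numpy as np

def partial_auc(labels, scores, fpr_max=0.10):
    """Trapezoidal area under the ROC curve over FPR in [0, fpr_max],
    i.e. over specificities of at least 1 - fpr_max."""
    order = np.argsort(-np.asarray(scores, float))
    y = np.asarray(labels)[order]                 # labels sorted by score
    tpr = np.concatenate([[0.0], np.cumsum(y) / y.sum()])
    fpr = np.concatenate([[0.0], np.cumsum(1 - y) / (len(y) - y.sum())])
    tpr_at = np.interp(fpr_max, fpr, tpr)         # TPR at the FPR cutoff
    keep = fpr <= fpr_max
    f = np.concatenate([fpr[keep], [fpr_max]])
    t = np.concatenate([tpr[keep], [tpr_at]])
    return float(np.sum((f[1:] - f[:-1]) * (t[1:] + t[:-1]) / 2.0))
```

Using this quantity instead of the full AUC as the optimization target rewards classifiers specifically in the high-specificity operating range, which is the effect reported in the abstract.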
Study of adaptability of breast density analysis system developed for screen film mammograms (SFMs) to full-field digital mammograms (FFDMs): robustness of parenchymal texture analysis
Show abstract
Mammography is in the transition to full-field digital mammograms (FFDM). It is important to evaluate the
adaptability of image analysis methods and computer-aided diagnosis (CAD) systems developed with screen-film
mammograms (SFM) to FFDMs. In addition, prior SFMs are more readily available for development of new
techniques that involve long-term follow up such as breast cancer risk prediction. We have previously developed a
texture-feature-based method for mammographic parenchymal pattern (MPP) analysis on SFMs. The MPP measure
was found to be more predictive of breast cancer risk than percent dense area on mammograms. In this study, we
investigated the correlation of computerized texture features extracted from matched pairs of SFM and FFDM obtained
from the same patient using the same algorithms without retraining for MPP analysis. The computerized texture features
from the two modalities demonstrated strong correlation, indicating that the MPP analysis system that we developed
with SFMs for breast cancer risk prediction can be readily adapted to FFDMs with at most minor retraining.
Computer aided breast density evaluation in cone beam breast CT
Show abstract
Cone Beam Breast CT is a three-dimensional breast imaging modality with high contrast resolution and no tissue overlap.
With these advantages, it is possible to measure volumetric breast density accurately and quantitatively with CBBCT 3D
images. Three major breast components need to be segmented: skin, fat and glandular tissue. In this research, a modified
morphological processing is applied to the CBBCT images to detect and remove the skin of the breast. After the skin is
removed, a 2-step fuzzy clustering scheme is applied to the CBBCT image volume to adaptively cluster the image voxels
into fat and glandular tissue areas based on the intensity of each voxel. Finally, the CBBCT breast volume images are
divided into three categories: skin, fat and glands. Clinical data is used and the quantitative CBBCT breast density
evaluation results are compared with the mammogram-based BIRADS breast density categories.
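The 2-step fuzzy clustering scheme is not fully specified in the abstract; as an assumption, a standard fuzzy c-means update on voxel intensities (shown here in 1-D for brevity) is the usual building block for separating fat from glandular tissue:

```python
import numpy as np

def fuzzy_cmeans_1d(x, c=2, m=2.0, n_iter=50):
    """Fuzzy c-means on scalar intensities: alternately update the
    membership matrix u and centroids v to minimize
    sum_ij u_ij^m * (x_i - v_j)^2.  Returns (centroids, memberships)."""
    x = np.asarray(x, float).ravel()
    v = np.linspace(x.min(), x.max(), c)          # spread initial centroids
    for _ in range(n_iter):
        d = np.abs(x[:, None] - v[None, :]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))               # inverse-distance weights
        u /= u.sum(axis=1, keepdims=True)         # memberships sum to 1
        v = (u ** m).T @ x / (u ** m).sum(axis=0)  # weighted centroid update
    return v, u
```

In the CBBCT pipeline, each voxel inside the skin-stripped breast would then be assigned to the fat or glandular class by its larger membership, and the volumetric density follows from the class volumes.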
Minimal elastographic modeling of breast cancer for model based tumor detection in a digital image elasto tomography (DIET) system
Thomas F. Lotz,
Natalie Muller,
Christopher E. Hann,
et al.
Show abstract
Digital Image Elasto Tomography (DIET) is a non-invasive breast cancer screening technology that images the surface
motion of a breast under harmonic mechanical actuation. A new approach capturing the dynamics and characteristics of
tumor behavior is presented. A simple mechanical model of the breast is used to identify a transfer function relating the
input harmonic actuation to the output surface displacements using imaging data of a silicone phantom. Areas of higher
stiffness cause significant changes of damping and resonant frequencies as seen in the resulting Bode plots. A case study
on a healthy and tumor silicone breast phantom shows the potential for this model-based method to clearly distinguish
cancerous and healthy tissue as well as correctly predicting the tumor position.
A prototype of mammography CADx scheme integrated to imaging quality evaluation techniques
Show abstract
Since all women over the age of 40 are recommended to undergo mammographic exams every two years, the
demand on radiologists to evaluate mammographic images in short periods of time has increased considerably. As a tool
to improve quality and accelerate analysis, CADe/CADx (computer-aided detection/diagnosis) schemes have been
investigated, but very few complete schemes have been developed, and most are restricted to detection rather than
diagnosis. The existing ones are usually tied to specific mammographic equipment (usually DR), which makes
them very expensive. This paper therefore describes a prototype of a complete mammography CADx scheme, developed by our
research group, integrated with an imaging quality evaluation process. The basic structure consists of pre-processing
modules based on image acquisition and digitization procedures (FFDM, CR, or film + scanner), a segmentation tool to
detect clustered microcalcifications and suspect masses, and a classification scheme that evaluates both the presence of
microcalcification clusters and possibly malignant masses based on their contour. The aim is to provide not only enough
information on the detected structures but also a pre-report with a BI-RADS classification. At this time the
system still lacks an interface integrating all the modules. Despite this, it is functional as a prototype for clinical
practice testing, with results comparable to others reported in the literature.
Classification of mammographic masses: use and influence of a bilateral-filter-based flat-texture approach
Show abstract
Computer-assisted diagnosis (CADx) for the interactive characterization of mammographic masses as benign or malignant has a high potential to help radiologists during the critical process of diagnostic decision making. By default, the characterization of mammographic masses is performed by extracting features from a region of interest (ROI) depicting the mass. To investigate the influence of a so-called bilateral-filter-based 'flat texture' (FT) preprocessing step on the classification performance, textural as well as frequency-based features are calculated in the ROI, in the core of the mass, and in the mass margin for preprocessed and unprocessed images. Furthermore, the influence of the parameterization of the bilateral filter on the classification performance is investigated. Additionally, as a reference, median and Gaussian filters have been used to compute the FT image, and the resulting classification performances of the feature extractors are compared to those obtained with the bilateral filters. Classification is done using a k-NN classifier. The classification performance was evaluated using the area Az under the receiver operating characteristic (ROC) curve. A publicly available mammography database was used as the reference image data set. The results show that the proposed FT preprocessing step has a positive influence on the texture-based feature extractors, while most of the frequency-based feature extractors perform better on the unprocessed images. For some of the features the original Az could be improved by up to 10%. The comparison of the bilateral filter approach with the median and Gaussian filter approaches showed the superiority of the bilateral filter.
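For reference, a brute-force bilateral filter, the building block behind the FT image, averages each pixel's neighborhood weighted by both spatial closeness and intensity similarity, so smoothing stops at strong edges. This is a generic sketch with illustrative parameter names, not the paper's implementation:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=1.5, sigma_r=25.0):
    """Brute-force bilateral filter: spatial Gaussian * range Gaussian
    weights, so flat regions are smoothed while edges are preserved."""
    img = np.asarray(img, float)
    out = np.zeros_like(img)
    pad = np.pad(img, radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_s = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))   # spatial kernel
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            w_r = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            w = w_s * w_r
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```

The "flat texture" image would then be obtained from the difference between the original and the filtered image, isolating fine texture from the smooth background.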
Comparison of two-class and three-class Bayesian artificial neural networks in estimation of observations drawn from simulated bivariate normal distributions
Show abstract
The development and application of multi-class BANN classifiers in computer-aided diagnosis methods motivated this
study in which we compared estimates produced by two-class and three-class BANN classifiers to true observations
drawn from simulated distributions. Observations were drawn from three Gaussian bivariate distributions with distinct
means and variances to generate G1, G2, and G3 simulated datasets. A two-class BANN was trained on each training
dataset for a total of ten different trained BANNs. The same testing dataset was run on each trained BANN. The average
and standard deviation of the resulting ten sets of BANN outputs were then calculated. This process was repeated with
three-class BANNs. Different sample numbers and values of a priori probabilities were investigated. The relationship
between the average BANN output and the true distribution was measured using Pearson and Spearman coefficients,
R-squared, and mean square error for two-class and three-class BANNs. There was a significantly high correlation between
the average BANN output and true distribution for two-class and three-class BANNs; however, subtle non-linearities and
spread were found in comparing the true and estimated distributions. The standard deviations of two-class and three-class
BANNs were comparable, demonstrating that three-class BANNs can perform as reliably as two-class BANN
classifiers in estimating true distributions and that the observed non-linearities and spread were not simply due to
statistical uncertainty but were valid characteristics of the BANN classifiers. In summary, three-class BANN decision
variables were similar in performance to those of two-class BANNs in estimating true observations drawn from
simulated bivariate normal distributions.
Posters: Cardiovascular
A preparatory study to choose similarity metrics for left-ventricle segmentations comparison
Show abstract
In medical image processing and analysis it is often required to perform segmentation for quantitative measures
of extent, volume and shape.
The validation of new segmentation methods and tools usually implies comparing their various outputs among
themselves (or with a ground truth), using similarity metrics. Several such metrics are proposed in the literature,
but it is important to select those that are relevant for a particular task, as opposed to using all metrics,
thereby avoiding additional computational cost and redundancy.
A methodology is proposed which enables the assessment of how different similarity and discrepancy metrics
behave for a particular comparison and the selection of those which provide relevant data.
Robust detection of bifurcations for vessel tree tracking
Show abstract
Vessel tree tracking is an important and challenging task for many medical applications. This paper presents
a novel bifurcation detection algorithm for Bayesian tracking of vessel trees. Based on a cylindrical model, we
introduce a bifurcation metric that yields minimal values at potential branching points. This approach avoids
searching for bifurcations in every iteration of the tracking process (as proposed by prior works) and is therefore
computationally more efficient. We use the same geometric model for the bifurcation metric as for the tracking;
no specific bifurcation model is needed. In a preliminary evaluation of our method on 8 CTA datasets of coronary
arteries, all side branches and 95.8% of the main branches were detected correctly.
Optical coherence tomography layer thickness characterization of a mock artery during angioplasty balloon deployment
Show abstract
Optical coherence tomography (OCT) is used to study the deformation of a mock artery in an angioplasty simulation
setup. An OCT probe integrated in a balloon catheter provides intraluminal real-time images during balloon
inflation. Swept-source OCT is used for imaging. A 4 mm semi-compliant polyurethane balloon is used for
experiments. The balloon is inflated inside a custom-built multi-layer artery phantom. The phantom has three layers
mimicking the artery layers, namely the intima, media, and adventitia. Semi-automatic segmentation of phantom layers is
performed to provide a detailed assessment of the phantom deformation at various inflation pressures.
Characterization of luminal diameter and thickness of different layers of the mock artery is provided for various
inflation pressures.
Plaque characterization in ex vivo MRI evaluated by dense 3D correspondence with histology
Show abstract
Automatic quantification of carotid artery plaque composition is important in the development of methods that
distinguish vulnerable from stable plaques. MRI has been shown to be capable of imaging different components noninvasively.
We present a new plaque classification method which uses 3D registration of histology data with ex vivo
MRI data, using non-rigid registration, both for training and evaluation. This is more objective than previously presented
methods, as it eliminates selection bias that is introduced when 2D MRI slices are manually matched to histological
slices before evaluation.
Histological slices of human atherosclerotic plaques were manually segmented into necrotic core, fibrous tissue and
calcification. Classification of these three components was evaluated voxelwise. As features, the intensity, gradient
magnitude and Laplacian in four MRI sequences after different degrees of Gaussian smoothing, and the distances to the
lumen and the outer vessel wall, were used. Performance of linear and quadratic discriminant classifiers for different
combinations of features was evaluated. Best accuracy (72.5 ± 7.7%) was reached with the linear classifier when all
features were used. Although this was only a minor improvement to the accuracy of a classifier that only included the
intensities and distance features (71.6 ± 7.9%), the difference was statistically significant (paired t-test, p<0.05). Good
sensitivity and specificity for calcification was reached (83% and 95% respectively), however, differentiation between
fibrous (sensitivity 85%, specificity 60%) and necrotic tissue (sensitivity 49%, specificity 89%) was more difficult.
Estimation of myocardial volume at risk from CT angiography
Show abstract
The determination of myocardial volume at risk distal to coronary stenosis provides important information for prognosis
and treatment of coronary artery disease. In this paper, we present a novel computational framework for estimating the
myocardial volume at risk in computed tomography angiography (CTA) imagery. Initially, epicardial and endocardial
surfaces, and coronary arteries are extracted using an active contour method. Then, the extracted coronary arteries are
projected onto the epicardial surface, and each point on this surface is associated with its closest coronary artery using
the geodesic distance measurement. The likely myocardial region at risk on the epicardial surface caused by a stenosis is
approximated by the region in which all its inner points are associated with the sub-branches distal to the stenosis on the
coronary artery tree. Finally, the likely myocardial volume at risk is approximated by the volume in between the region
at risk on the epicardial surface and its projection on the endocardial surface, which is expected to yield computational
savings over risk volume estimation using the entire image volume. Furthermore, we expect increased accuracy since, as
compared to prior work using the Euclidean distance, we employ the geodesic distance in this work. The experimental
results demonstrate the effectiveness of the proposed approach on pig heart CTA datasets.
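Associating each epicardial surface point with its closest coronary branch by geodesic distance can be sketched as a multi-source Dijkstra search over the mesh edge graph. This is a generic graph formulation, assuming the surface is given as vertices and weighted edges, with the sources standing in for projected coronary-branch points; it is not the authors' implementation:

```python
import heapq

def nearest_source_geodesics(n_vertices, edges, sources):
    """Multi-source Dijkstra on an undirected weighted graph: returns,
    for every vertex, the shortest (geodesic) distance to the closest
    source and that source's id."""
    adj = [[] for _ in range(n_vertices)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    dist = [float("inf")] * n_vertices
    label = [None] * n_vertices
    heap = []
    for s in sources:
        dist[s], label[s] = 0.0, s
        heapq.heappush(heap, (0.0, s, s))
    while heap:
        d, u, src = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale queue entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v], label[v] = d + w, src
                heapq.heappush(heap, (d + w, v, src))
    return dist, label
```

Grouping surface points by their nearest-source label then delineates the region at risk associated with the branches distal to a stenosis, as described in the abstract.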
Developments of thrombosis detection algorithm using the contrast enhanced CT images
Show abstract
In the diagnosis of thrombosis with no specific clinical symptoms, diagnostic imaging plays a major role. In particular,
contrast-enhanced CT is a minimally invasive diagnostic tool in which a thrombus in the pulmonary artery can be detected
as a low-density region without contrast enhancement. Moreover, because it can also depict density changes in the lung
field and the decline of lung vessel shadows, it is indispensable for the diagnosis of thrombosis. For image-based
diagnostic support, it is necessary to separate the pulmonary arteries and veins related to the thrombosis and to analyze
the lung vessels quantitatively. Techniques that detect thrombosis by locating the thrombus itself have been proposed so
far. This study instead aims to detect thrombosis by focusing on the dilation of the main pulmonary artery. The
effectiveness of the method is shown by measuring the pulmonary trunk diameter on the pulmonary artery extracted from
contrast-enhanced CT with a semi-automated method, and comparing it with normal cases.
Posters: CBIR
Liver tumor detection and classification using content-based image retrieval
Show abstract
Computer aided liver tumor detection and diagnosis can assist radiologists to interpret abnormal features in liver CT
scans. In this paper, a general framework is proposed to automatically detect liver focal mass lesions, conduct
differential diagnosis of liver focal mass lesions based on multiphase CT scans, and provide visually similar case
samples for comparisons. The proposed method first detects liver abnormalities by eliminating the normal tissue/organ
from the liver region, and in the second step it ranks these abnormalities with respect to spherical symmetry,
compactness and size using a tumoroid measure to facilitate fast location of liver focal mass lesions. To differentiate
liver focal mass lesions, content-based image retrieval technique is used to query a CT model database with known
diagnosis. Multiple-phase encoded texture features are proposed to represent the focal mass lesions. A hypercube
indexing structure based method is adopted as the retrieval strategy and the similarity score is calculated to rank the
retrieval results. Good performance was obtained on eight clinical CT scans. With the proposed method, clinicians
are expected to improve the accuracy of differential diagnosis.
3D lung image retrieval using localized features
Show abstract
The interpretation of high-resolution computed tomography (HRCT) images of the chest showing disorders of the
lung tissue associated with interstitial lung diseases (ILDs) is time-consuming and requires experience. Whereas
automatic detection and quantification of lung tissue patterns showed promising results in several studies, its
aid to clinicians is limited to the challenge of image interpretation, leaving the radiologists with the problem
of the final histological diagnosis. Complementary to lung tissue categorization, providing visually similar cases
using content-based image retrieval (CBIR) is in line with the clinical workflow of radiologists.
In a preliminary study, a Euclidean distance based on volume percentages of five lung tissue types was used
as inter-case distance for CBIR. The latter showed the feasibility of retrieving similar histological diagnoses
of ILD based on visual content, although no localization information was used for CBIR. It was therefore not
possible to retrieve and show similar images with pathology appearing at a particular lung position. In this work,
a 3D localization system based on lung anatomy is used to localize the low-level features used for CBIR. Compared
to our previous study, the introduction of localization features improves early precision for some histological
diagnoses, especially when the region in which the lung tissue disorders appear is important.
Similarity evaluation between query and retrieved masses using a content-based image retrieval (CBIR) CADx system for characterization of breast masses on ultrasound images: an observer study
Show abstract
The purpose of this study is to evaluate the similarity between the query and retrieved masses by a Content-Based
Image Retrieval (CBIR) computer-aided diagnosis (CADx) system for characterization of breast masses on ultrasound
(US) images based on radiologists' visual similarity assessment. We are developing a CADx system to assist radiologists
in characterizing masses on US images. The CADx system retrieves masses that are similar to a query mass from a
reference library based on automatically extracted image features. An observer study was performed to compare the
retrieval performance of four similarity measures: Euclidean distance (ED), Cosine (Cos), Linear Discriminant Analysis
(LDA), and Bayesian Neural Network (BNN). For ED and Cos, a k-nearest neighbor (k-NN) algorithm was used for
retrieval. For LDA and BNN, the features of a query mass were combined first into a malignancy score and then masses
with similar scores were retrieved. For a query mass, three most similar masses were retrieved with each method and
were presented to the radiologists in random order. Three MQSA radiologists rated the similarity between the query
mass and the computer-retrieved masses using a nine-point similarity scale (1=very dissimilar, 9=very similar). The
average similarity ratings of all radiologists for LDA, BNN, Cos, and ED were 4.71, 4.95, 5.18, and 5.32, respectively.
The ED measure retrieved masses of significantly higher similarity (p<0.008) than LDA and BNN. Although the BNN measure
had the best classification performance (Az: 0.90±0.03) in the CBIR scheme, ED exhibited higher image retrieval
performance than the other measures based on the radiologists' assessment.
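The retrieval step described above, finding the k reference masses closest to a query feature vector under Euclidean distance or cosine similarity, can be sketched as a plain k-NN search. The feature vectors and library entries below are toy values, not data from the study.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query, library, k=3, measure="ED"):
    """Return the k reference masses most similar to the query."""
    if measure == "ED":  # smaller distance = more similar
        scored = sorted(library, key=lambda m: euclidean(query, m["features"]))
    else:  # cosine: larger similarity = more similar
        scored = sorted(library, key=lambda m: -cosine_sim(query, m["features"]))
    return scored[:k]

library = [
    {"id": 1, "features": [0.2, 0.9]},
    {"id": 2, "features": [0.8, 0.1]},
    {"id": 3, "features": [0.25, 0.85]},
    {"id": 4, "features": [0.5, 0.5]},
]
top = retrieve([0.22, 0.88], library, k=3)
```

The LDA and BNN measures in the study differ in that they first collapse the feature vector into a scalar malignancy score and retrieve masses with similar scores, which explains why they can classify well yet retrieve less visually similar masses.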
Automatic colonic polyp shape determination using content-based image retrieval
Show abstract
Polyp shape (sessile or pedunculated) may have important clinical implications. However, the traditional way of
determining polyp shape is both invasive and subjective. We present a less invasive, automated method to
predict the shape of colonic polyps on computed tomographic colonography (CTC) using the content-based image
retrieval (CBIR) approach. We classify polyps as either sessile (SS) or pedunculated (PS) in shape. The CBIR uses
numerical feature vectors generated from our CTC computer aided detection (CTC-CAD) system to describe the
polyps. These features relate to physical and visual characteristics of the polyp. Feature selection was done using a
support vector machine classifier on a training set of polyp shapes. The system is evaluated using an independent
test set. Using receiver operating characteristic (ROC) analysis, we showed that our system is accurate as a polyp shape
classifier: the area under the ROC curve was 0.86 (95% confidence interval [0.77, 0.93]).
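The reported area under the ROC curve can be understood through the equivalence of the AUC and the Mann-Whitney U statistic: it is the probability that a randomly chosen pedunculated polyp receives a higher classifier score than a randomly chosen sessile one. A minimal sketch with invented scores:

```python
def roc_auc(pos_scores, neg_scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    positive case outscores a negative one, counting ties as 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Invented classifier scores: higher means "pedunculated".
auc = roc_auc([0.9, 0.8, 0.6], [0.7, 0.4, 0.3])
```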
BI-RADS guided mammographic mass retrieval
Show abstract
In this study, a mammographic mass retrieval platform was established using content-based image retrieval
method to extract and to model the semantic content of mammographic masses. Specifically, the shape and
margin of a mass was classified into different categories, which were sorted by radiologist experts according
to BI-RADS descriptors. Mass lesions were analyzed by the likelihoods of each category with defined
features including third order moments, curvature scale space descriptors, compactness, solidity, and
eccentricity, etc. To evaluate the performance of the retrieval system, a retrieved image was considered
relevant if it belongs to the same class (benign or malignant) as the query image. A total of 476
biopsy-proven mass cases (219 malignant and 257 benign) were used for 10 random test/train partitions. For
each test query mass, 5 most similar masses were retrieved from the image library. The performance of the
retrieval system was evaluated by ROC analysis of the malignancy rating of the query masses in the test set
relative to the biopsy truth. Through 10 random test/train partitions, we found that the averaged area under
the ROC curve (Az) was 0.80±0.06. With another independent dataset containing 415 cases (244 malignant
and 171 benign) as a test set, the ROC analysis indicated the performance of the retrieval system had an Az of
0.75±0.03.
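Two of the shape descriptors named above, compactness and eccentricity, have simple closed forms: compactness is 4πA/P² (1 for a perfect circle, smaller for irregular margins) and eccentricity follows from the eigenvalues of the 2x2 covariance matrix of the mass pixel coordinates. A sketch on toy shapes; the study's exact feature definitions may differ in detail.

```python
import math

def compactness(area, perimeter):
    """4*pi*A / P**2: equals 1.0 for a circle, smaller for irregular margins."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def eccentricity(points):
    """Eccentricity of the best-fit ellipse, from the eigenvalues of the
    2x2 covariance matrix of pixel coordinates (0 = circle, ~1 = elongated)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Eigenvalues of [[sxx, sxy], [sxy, syy]] via trace/determinant
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    lam1, lam2 = tr / 2.0 + disc, tr / 2.0 - disc  # lam1 >= lam2
    return math.sqrt(1.0 - lam2 / lam1) if lam1 > 0 else 0.0

# A circle of radius r has compactness exactly 1.
r = 5.0
circle_score = compactness(math.pi * r * r, 2.0 * math.pi * r)
# Collinear pixels are maximally elongated.
line_ecc = eccentricity([(0, 0), (1, 0), (2, 0), (3, 0)])
```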
A context-aware approach to content-based image retrieval of lung nodules
Show abstract
We are investigating various techniques to improve the quality of Content-Based Image Retrieval (CBIR) for
computed tomography (CT) scans of lung nodules. Previous works have used linear regression models and
artificial neural networks (ANNs) to predict the similarity between two nodules. This paper expands upon that
work by incorporating contextual information around lung nodules to determine whether the existing ANN model
will produce a better correlation between content-based similarity and semantic, human-perceived similarity.
Posters: Gastrointestinal and Abdominal
Computer-aided detection of small bowel strictures in CT enterography
Show abstract
The workflow of CT enterography in an emergency setting could be improved significantly by computer-aided detection
(CAD) of small bowel strictures to enable even non-expert radiologists to detect sites of obstruction rapidly. We
developed a CAD scheme to detect strictures automatically from abdominal multi-detector CT enterography image data
by use of multi-scale template matching and a blob detector method. A pilot study was performed on 15 patients with 22
surgically confirmed strictures to study the effect of the CAD scheme on observer performance. The 77% sensitivity of
an inexperienced radiologist assisted by CAD was comparable with the 81% sensitivity of an unaided expert radiologist
(p=0.07). The use of CAD reduced the reading time to identify strictures significantly (p<0.0001). Most of the false-positive
CAD detections were caused by collapsed bowel loops, approximated bowel wall, muscles, or vessels, and they
were easy to dismiss. The results indicate that CAD could provide radiologists with a rapid and accurate interpretation of
strictures to improve workflow in an emergency setting.
Detection of metastatic liver tumor in multi-phase CT images by using a spherical gray-level differentiation searching filter
Show abstract
To detect metastatic liver tumors on CT scans, two liver edge maps on unenhanced and portal venous phase images
are first extracted and registered using the phase-only correlation (POC) method, by which rotation and shift parameters
are detected on two log-polar transformed power spectrum images. Then the liver gray map is obtained on non-contrast
phase images by calculating the gray value within the region of edge map. The initial tumors are derived from the
subtraction of edge and gray maps as well as referring to the score from the spherical gray-level differentiation searching
(SGDS) filter. Finally, false positives (FPs) are eliminated using shape and texture features. Twelve normal cases and 25
cases with 44 metastatic liver tumors were used to test the performance of our algorithm; 86.7% of true positives were
successfully extracted by our CAD system with 2.5 FPs per case. The results demonstrate that POC is a robust method for
liver registration and that our proposed SGDS filter is effective for detecting spherical tumors on CT images. We expect
our CAD system to be useful for the quantitative assessment of metastatic liver tumors in clinical practice.
Automatic colonic lesion detection and tracking in endoscopic videos
Show abstract
The biology of colorectal cancer offers an opportunity for both early detection and prevention. Compared with other
imaging modalities, optical colonoscopy is the procedure of choice for simultaneous detection and removal of colonic
polyps. Computer assisted screening makes it possible to assist physicians and potentially improve the accuracy of the
diagnostic decision during the exam. This paper presents an unsupervised method to detect and track colonic lesions in
endoscopic videos. The aim of the lesion screening and tracking is to facilitate detection of polyps and abnormal mucosa
in real time as the physician is performing the procedure. For colonic lesion detection, conventional marker-controlled
watershed segmentation is used to segment the colonic lesions, followed by an adaptive ellipse-fitting
strategy to further validate the shape. For colonic lesion tracking, a mean-shift tracker with background modeling is used
to track the target region from the detection phase. The approach has been tested on colonoscopy videos acquired during
regular colonoscopic procedures and demonstrated promising results.
Active contours for localizing polyps in colonoscopic NBI image data
Show abstract
Colon cancer is the third most common type of cancer in the United States of America. Every year about 140,000
people are newly diagnosed with colon cancer. Early detection is crucial for a successful therapy. The standard
screening procedure is called colonoscopy. Using this endoscopic examination physicians can find colon polyps
and remove them if necessary. Adenomatous colon polyps are deemed a preliminary stage of colon cancer. The
removal of a polyp, though, can lead to complications like severe bleedings or colon perforation. Thus, only
polyps diagnosed as adenomatous should be removed. To decide whether a polyp is adenomatous the polyp's
surface structure including vascular patterns has to be inspected. Narrow-Band imaging (NBI) is a new tool to
improve visibility of vascular patterns of the polyps. The first step for an automatic polyp classification system
is the localization of the polyp. We investigate active contours for the localization of colon polyps in NBI image
data. The shape of polyps, though roughly approximated by an elliptic form, is highly variable. Active contours
offer the flexibility to adapt to polyp variation well. To avoid clustering of contour polygon points we propose
the application of active rays. The quality of the results was evaluated based on manually segmented polyps as
ground truth data. The results were compared to a template matching approach and to the Generalized Hough
Transform. Active contours are superior to the Hough transform and perform equally well as the template
matching approach.
Automatic teniae coli detection for computed tomography colonography
Show abstract
The human colon has a complex structure: it turns, twists, and even moves when the position of the patient changes.
Awareness of its locations and orientations is very important for improving the experience of virtual navigation, the
registration of supine/prone images, and polyp matching. Teniae coli (TCs) are three longitudinal muscles along the
human colon. They are parts of the colon wall, and they have the potential to serve as reliable landmarks to provide the
above mentioned awareness. Morphologically, TCs are three smooth narrow bands, approximately perpendicular to the
haustral folds, and extending between the fold pairs in a parallel manner. Such characteristics make the TCs detectable
if the folds have been extracted already. In this study, based on the previous work of the segmentation of haustral folds,
we introduce a new method of automatically detecting the three TCs. Experiments are conducted on real patient
studies to demonstrate the feasibility of the method, and a solid evaluation is performed on a flattened two-dimensional
(2D) colon representation.
Quantitative CT imaging for adipose tissue analysis in mouse model of obesity
Show abstract
In obese humans CT imaging is a validated method for follow up studies of adipose tissue distribution and quantification
of visceral and subcutaneous fat. Equivalent methods in murine models of obesity are still lacking. Current small animal
micro-CT involves long-term X-ray exposure precluding longitudinal studies. We have overcome this limitation by using
a human medical CT which allows very fast 3D imaging (2 sec) and minimal radiation exposure. This work presents
novel methods fitted to in vivo investigations of mouse models of obesity, allowing (i) automated detection of adipose
tissue in abdominal regions of interest and (ii) quantification of visceral and subcutaneous fat.
For each mouse, 1000 slices (100 μm thickness, 160 μm resolution) were acquired in 2 sec using a Toshiba medical CT
(135 kV, 400mAs). A Gaussian mixture model of the Hounsfield curve of 2D slices was computed with the Expectation
Maximization algorithm. Identification of each Gaussian part allowed the automatic classification of adipose tissue
voxels. The abdominal region of interest (umbilical) was automatically detected as the slice showing the highest ratio of
the Gaussian proportion between adipose and lean tissues. Segmentation of visceral and subcutaneous fat compartments
was achieved with 2.5D level set methods.
Our results show that the application of human clinical CT to mice is a promising approach for the study of obesity,
allowing valuable comparison between species using the same imaging materials and software analysis.
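The tissue-classification step, fitting a Gaussian mixture to the Hounsfield histogram with Expectation Maximization and labeling adipose voxels by the dominant component, can be sketched with a small hand-rolled 1-D EM. The HU values below are synthetic, and the two-component model is a simplification of whatever mixture the paper actually fits.

```python
import math

def gauss(v, mu, sd):
    return math.exp(-0.5 * ((v - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def em_gmm_1d(x, n_iter=100):
    """Fit a two-component 1-D Gaussian mixture by EM. Component 0 is
    initialized at the minimum (adipose, ~-100 HU), component 1 at the
    maximum (lean tissue)."""
    mu = [min(x), max(x)]
    spread = (max(x) - min(x)) / 4.0 or 1.0
    sd = [spread, spread]
    w = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibility of each component for each value
        r = []
        for v in x:
            p = [w[k] * gauss(v, mu[k], sd[k]) for k in (0, 1)]
            s = p[0] + p[1]
            r.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, and standard deviations
        for k in (0, 1):
            nk = sum(ri[k] for ri in r)
            w[k] = nk / len(x)
            mu[k] = sum(ri[k] * v for ri, v in zip(r, x)) / nk
            var = sum(ri[k] * (v - mu[k]) ** 2 for ri, v in zip(r, x)) / nk
            sd[k] = max(math.sqrt(var), 1.0)  # floor to avoid collapse
    return w, mu, sd

def is_adipose(v, w, mu, sd):
    """Label a voxel adipose if the adipose component dominates."""
    return w[0] * gauss(v, mu[0], sd[0]) > w[1] * gauss(v, mu[1], sd[1])

# Synthetic HU values: adipose near -100 HU, lean tissue near +40 HU.
hu = [-105.0, -98.0, -102.0, -95.0, -110.0, 35.0, 42.0, 38.0, 45.0, 40.0]
w, mu, sd = em_gmm_1d(hu)
```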
Colonoscopy video quality assessment using hidden Markov random fields
Show abstract
With colonoscopy becoming a common procedure for individuals aged 50 or more who are at risk of developing
colorectal cancer (CRC), colon video data is being accumulated at an ever increasing rate. However, the clinically
valuable information contained in these videos is not being maximally exploited to improve patient care and accelerate
the development of new screening methods. One of the well-known difficulties in colonoscopy video analysis is the
abundance of frames with no diagnostic information. Approximately 40-50% of the frames in a colonoscopy video
are contaminated by noise, acquisition errors, glare, blur, and uneven illumination. Therefore, filtering out low quality
frames containing no diagnostic information can significantly improve the efficiency of colonoscopy video analysis. To
address this challenge, we present a quality assessment algorithm to detect and remove low quality, uninformative
frames. The goal of our algorithm is to discard low quality frames while retaining all diagnostically relevant information.
Our algorithm is based on a hidden Markov model (HMM) in combination with two measures of data quality to filter out
uninformative frames. Furthermore, we present a two-level framework based on an embedded hidden Markov model
(EHMM) to incorporate the proposed quality assessment algorithm into a complete, automated diagnostic image analysis
system for colonoscopy video.
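The core filtering idea, combining per-frame quality measures with an HMM so that isolated noisy measurements do not flip the informative/uninformative decision, can be sketched with a two-state forward (filtering) pass. The emission and transition probabilities below are illustrative values, not those of the paper, and the observations are reduced to a binary quality symbol.

```python
def forward_filter(obs, p_stay=0.8):
    """Forward (filtering) pass of a two-state HMM over per-frame
    quality observations. State 0 = informative, 1 = uninformative;
    p_stay models the tendency of quality to persist across frames.
    Returns P(uninformative | observations so far) for each frame."""
    # Emission model P(symbol | state); illustrative values only.
    emit = {0: {"sharp": 0.9, "blurry": 0.1},
            1: {"sharp": 0.2, "blurry": 0.8}}
    trans = [[p_stay, 1 - p_stay], [1 - p_stay, p_stay]]
    alpha = [0.5, 0.5]  # uniform prior over the two states
    out = []
    for o in obs:
        a = [(alpha[0] * trans[0][s] + alpha[1] * trans[1][s]) * emit[s][o]
             for s in (0, 1)]
        z = a[0] + a[1]
        alpha = [a[0] / z, a[1] / z]  # normalize to avoid underflow
        out.append(alpha[1])
    return out

frames = ["sharp", "sharp", "blurry", "blurry", "blurry", "sharp"]
p_uninformative = forward_filter(frames)
kept = [i for i, p in enumerate(p_uninformative) if p < 0.5]
```

Note how the temporal smoothing keeps the final sharp frame despite the preceding blurry run: the posterior drops below 0.5 as soon as a sharp observation arrives.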
Development of automated quantification of visceral and subcutaneous adipose tissue volumes from abdominal CT scans
Show abstract
This contribution describes a novel algorithm for the automated quantification of visceral and subcutaneous
adipose tissue volumes from abdominal CT scans of patients referred for colorectal resection. Visceral and
subcutaneous adipose tissue volumes can accurately be measured with errors of 1.2 and 0.5%, respectively. Also
the reproducibility of CT measurements is good; a disadvantage is the amount of radiation. In this study, the
diagnostic CT scans in the work-up of (colorectal) cancer were used, implying no extra radiation. For
the purpose of segmentation alone, a low-dose protocol can be applied. Obesity is a well-known risk factor
for complications in and after surgery. Body Mass Index (BMI) is a widely accepted indicator of obesity, but
it is not specific for risk assessment of colorectal surgery. We report on an automated method to quantify
visceral and subcutaneous adipose tissue volumes as a basic step in a clinical research project concerning preoperative
risk assessment. The outcomes are to be correlated with the surgery results. The hypothesis is that
the balance between visceral and subcutaneous adipose tissue, together with the presence of calcifications in the
major blood vessels, is a predictive indicator for post-operative complications such as anastomotic leak. We
start with four different computer-simulated humanoid abdominal volumes with tissue values in the appropriate
Hounsfield range at different dose levels. With satisfactory numerical results for this test, we have applied the
algorithm to over 100 patient scans and have compared the results with manual segmentations by an expert for a
smaller pilot group. The results are within a 5% difference. Compared to other studies reported in the literature,
reliable values are obtained for visceral and subcutaneous adipose tissue areas.
Posters: Head and Neck
Automated contralateral subtraction of dental panoramic radiographs for detecting abnormalities in paranasal sinus
Show abstract
Inflammation in the paranasal sinus is often observed in seasonal allergic rhinitis or with colds, but is also an
indication for odontogenic tumors, carcinoma of the maxillary sinus or a maxillary cyst. The detection of those findings
in dental panoramic radiographs is not difficult for radiologists, but general dentists may miss the findings since they
focus on treatments of teeth. The purpose of this work is to develop a contralateral subtraction method for detecting the
odontogenic sinusitis region on dental panoramic radiographs. We developed a contralateral subtraction technique for the
paranasal sinus region, consisting of 1) image filtering with smoothing and Sobel operators for noise reduction and edge
extraction, 2) registration of the mirrored image using mutual information, and 3) a display method for the
subtracted pixel data. We employed 56 cases (24 normal and 32 abnormal). The abnormal regions and the normal cases
were verified by a board-certified radiologist using CT scans. Observer studies with and without subtraction images were
performed for 9 readers. The true-positive rate at a 50% confidence level was improved in 7 out of 9 readers, but the
difference in the area under the curve (AUC) was not statistically significant for any radiologist. In conclusion, the
contralateral subtraction images of dental panoramic radiographs may improve the detection rate of abnormal regions in
paranasal sinus.
Preoperative volume determination for pituitary adenoma
Show abstract
The most common sellar lesion is the pituitary adenoma, and sellar tumors account for approximately 10-15% of all
intracranial neoplasms. Manual slice-by-slice segmentation takes quite some time, which can be reduced by using
appropriate algorithms. In this contribution, we present a segmentation method for pituitary adenoma. The method is
based on an algorithm that we have recently applied to segmenting glioblastoma multiforme. A modification of this
scheme is used for adenoma segmentation, which is much harder to perform due to the lack of contrast-enhanced
boundaries. In our
experimental evaluation, neurosurgeons performed manual slice-by-slice segmentation of ten magnetic resonance
imaging (MRI) cases. The segmentations were compared to the segmentation results of the proposed method using the
Dice Similarity Coefficient (DSC). The average DSC for all datasets was 75.92%±7.24%. A manual segmentation took
about four minutes and our algorithm required about one second.
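The Dice Similarity Coefficient used in the evaluation is defined as 2|A∩B| / (|A| + |B|) for two segmentations A and B. A minimal sketch on toy binary masks represented as coordinate sets:

```python
def dice(mask_a, mask_b):
    """Dice Similarity Coefficient: 2|A∩B| / (|A| + |B|) for binary
    masks represented as sets of voxel coordinates."""
    inter = len(mask_a & mask_b)
    return 2.0 * inter / (len(mask_a) + len(mask_b))

# Manual mask: a 10x10 square; automatic mask: the same square shifted
# one voxel to the right, so 90 of 100 voxels overlap.
manual = {(x, y) for x in range(10) for y in range(10)}
auto = {(x, y) for x in range(1, 11) for y in range(10)}
dsc = dice(manual, auto)
```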
Prediction of brain tumor progression using multiple histogram matched MRI scans
Show abstract
In a recent study [1], we investigated the feasibility of predicting brain tumor progression based on multiple MRI series
and we tested our methods on seven patients' MRI images scanned at three consecutive visits A, B and C. Experimental
results showed that it is feasible to predict tumor progression from visit A to visit C using a model trained by the
information from visit A to visit B. However, the trained model failed when we tried to predict tumor progression from
visit B to visit C, though it is clinically more important. A closer look at the MRI scans revealed that histograms of
MRI series such as T1, T2, and FLAIR taken at different times have slight shifts or different shapes. This is because those
MRI scans are qualitative rather than quantitative, so scans taken at different times or by different scanners might
have slightly different scales or different homogeneities in the scanning region. In this paper, we propose a
method to overcome this difficulty. The overall goal of this study is to assess brain tumor progression by exploring seven
patients' complete MRI records scanned during their visits in the past two years. There are ten MRI series in each visit,
including FLAIR, T1-weighted, post-contrast T1-weighted, T2-weighted, and five DTI-derived MRI volumes: ADC, FA, and the
maximum, minimum, and middle eigenvalues. After registering all series to the corresponding DTI scan at the first visit, we
applied a histogram matching algorithm to non-DTI MRI scans to match their histograms to those of the corresponding
MRI scans at the first visit. DTI derived series are quantitative and do not require the histogram matching procedure. A
machine learning algorithm was then trained using the data containing information from visit A to visit B, and the
trained model was used to predict tumor progression from visit B to visit C. An average of 72% pixel-wise accuracy was
achieved for tumor progression prediction from visit B to visit C.
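The histogram matching step, mapping each scan's intensities onto the histogram of the corresponding first-visit scan, can be sketched as piecewise-constant quantile mapping: each intensity is replaced by the reference value at the same rank. This is a simplified stand-in for the paper's algorithm; a toy follow-up scan shifted by +50 intensity units illustrates the idea.

```python
import bisect

def histogram_match(values, reference):
    """Piecewise-constant quantile mapping: each intensity is replaced
    by the reference value at the same rank/quantile, so the matched
    histogram follows the reference scan's histogram."""
    src = sorted(values)
    ref = sorted(reference)
    out = []
    for v in values:
        rank = bisect.bisect_left(src, v)            # rank of v in the source
        idx = min(rank * len(ref) // len(src), len(ref) - 1)
        out.append(ref[idx])
    return out

# A follow-up scan whose intensities are shifted +50 relative to baseline:
baseline = [100, 110, 120, 130, 140]
followup = [150, 160, 170, 180, 190]
matched = histogram_match(followup, baseline)
```

After matching, the shifted follow-up intensities land exactly on the baseline histogram, which is why the DTI-derived (already quantitative) series do not need this step.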
Computer-aided tracking of MS lesions
Show abstract
Multiple Sclerosis (MS) lesions are known to change over time. The location, size and shape characteristics of lesions
are often used to diagnose and to track disease progression. We have improved our lesion-browsing tool, which allows
users to automatically locate successive significant lesions in an MRI stack. In addition, an automatic alignment feature
was implemented to facilitate comparisons across stacks. A lesion stack is formed that can be browsed independently or
in tandem with the image windows. Lesions of interest can then be measured, rendered and rotated. Multiple windows
allow the viewer to compare the size and shape of lesions from the MRI images of the same patient taken at different
time intervals.
Posters: Lung
Computer-aided assessment of pulmonary disease in novel swine-origin H1N1 influenza on CT
Show abstract
The 2009 pandemic is a global outbreak of novel H1N1 influenza. Radiologic images can be used to assess
the presence and severity of pulmonary infection. We develop a computer-aided assessment system to
analyze the CT images from Swine-Origin Influenza A virus (S-OIV) novel H1N1 cases. The technique is
based on the analysis of lung texture patterns and classification using a support vector machine (SVM).
Pixel-wise tissue classification is computed from the SVM value. The method was validated on four H1N1
cases and ten normal cases. We demonstrated that the technique can detect regions of pulmonary
abnormality in novel H1N1 patients and differentiate these regions from visually normal lung (area under
the ROC curve is 0.993). This technique can also be applied to differentiate regions infected by different
pulmonary diseases.
Lung ventilation analysis using deformable registration in Xe-enhanced CT images
Show abstract
To analyze lung regional ventilation using two-phase Xe-enhanced CT with wash-in and wash-out periods, we propose
an accurate and fast deformable registration and ventilation imaging method. To restrict the registration to the lung parenchyma,
the left and right lungs are segmented. To correct position difference and local deformation of the lungs, affine and
demon-based deformable registrations are performed. The lungs of the wash-out image are globally aligned to the wash-in
image by narrow-band distance-propagation-based affine registration and nonlinearly deformed by a demon algorithm
using a combined gradient force and active cells. To assess lung ventilation, a color-coded ventilation pattern map is
generated by deformable registration and histogram analysis of xenon attenuation. Experimental results show that our
accurate and fast deformable registration corrects not only positional difference but also local deformation. Our
ventilation imaging helps the analysis of lung regional ventilation.
Lung tumours segmentation on CT using sparse field active model
Show abstract
Three-dimensional (3D) manual segmentation of lung tumours is observer-dependent and time-consuming, which are
major limitations for use in clinical trials. In this paper we present a semi-automated 3D segmentation method, which is
more time-efficient and less operator dependent than manual segmentation. We developed a semi-automated algorithm
to segment lung tumours on chest computed tomography (CT) images using shape constrained multi-thresholding
(SCMT) and sparse field active model (SFAM) techniques. For each 2D slice of CT tumour image, an initial contour
was generated using SCMT. This initial contour was then deformed using SFAM. Seven energies were utilized in the
SFAM technique to control the deformation, namely: global region, local region, curvature, edge information,
smoothness, anchor, and partial volume. The proposed algorithm was tested with 70 CT tumour slices (19 well-defined
tumours (WD) located centrally in the lung parenchyma without significant vasculature and 51 vascularized or
juxtapleural tumours (VJ)). Our results showed that the initial contour generated by the SCMT technique was sufficient
to segment the well-defined (WD) tumours without any deformation. However, the deformation using SFAM was
required to segment vascularized or juxtapleural (VJ) tumours. The results of the proposed segmentation algorithm were
evaluated by comparing them to manual segmentation using the Dice coefficient (DC). The average DC was 96.3±1.1%
and 95.2±1.6% for WD and VJ tumour images respectively. The average DC obtained for the entire data set was
95.5±1.6%, which shows that the proposed algorithm can accurately segment lung tumours and can be utilized for
monitoring tumour response to treatment.
Automated segmentation of pulmonary nodule depicted on CT images
Show abstract
In this study, an efficient computational geometry approach is introduced to segment pulmonary nodules. The
basic idea is to estimate the three-dimensional surface of a nodule in question by analyzing the shape characteristics of
its surrounding tissues in geometric space. Given a seed point or a specific location where a suspicious nodule may be,
three steps are involved in this approach. First, a sub-volume centered at this seed point is extracted and the contained
anatomy structures are modeled in the form of a triangle mesh surface. Second, a "visibility" test combined with a shape
classification algorithm based on principal curvature analysis removes surfaces determined not to belong to nodule
boundaries by specific rules. This step results in a partial surface of a nodule boundary. Third, an interpolation /
extrapolation based shape reconstruction procedure is used to estimate a complete nodule surface by representing the
partial surface as an implicit function. The preliminary experiments on 158 annotated CT examinations demonstrated
that this scheme could achieve a reasonable performance in nodule segmentation.
A study on quantifying COPD severity by combining pulmonary function tests and CT image analysis
Show abstract
This paper describes a novel method that can evaluate chronic obstructive pulmonary disease (COPD) severity
by combining measurements of pulmonary function tests and measurements obtained from CT image analysis.
There is no cure for COPD. However, with regular medical care and consistent patient compliance with treatments
and lifestyle changes, the symptoms of COPD can be minimized and progression of the disease can be slowed.
Therefore, many diagnosis methods based on CT image analysis have been proposed for quantifying COPD.
Most diagnostic methods for COPD extract lesions as low-attenuation areas (LAA) by thresholding and
evaluate COPD severity by calculating the percentage of LAA in the lung (LAA%). However, COPD is usually the result
of a combination of two conditions, emphysema and chronic obstructive bronchitis, so previous methods based on
LAA% alone do not work well. The proposed method utilizes both the measurements of pulmonary function tests
and the results of chest CT image analysis to evaluate COPD severity. In this paper, we utilize multi-class
AdaBoost to combine both sources of information and classify the COPD severity into five stages automatically.
The experimental results revealed that the accuracy rate of the
proposed method was 88.9% (resubstitution scheme) and 64.4% (leave-one-out scheme).
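As a rough illustration of the classification stage, a multi-class AdaBoost can be fit on combined PFT and CT features with scikit-learn. This is a sketch only: the feature set, labels, and hyperparameters below are invented, not those of the paper.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
# Hypothetical per-patient feature vectors, e.g. [FEV1 %pred, FEV1/FVC, LAA%]
X = rng.random((100, 3))
# Hypothetical severity labels: five stages, 0..4
y = rng.integers(0, 5, 100)

# Multi-class AdaBoost over decision stumps (sklearn's default base learner);
# the SAMME extension handles the five-class problem internally
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)
pred = clf.predict(X)
```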
Classification algorithm of lung lobe for lung disease cases based on multislice CT images
Show abstract
With the development of multi-slice CT technology, an accurate 3D image of the lung field can be obtained in a
short time. To support this, many image processing methods need to be developed. In the clinical setting for diagnosis of
lung cancer, it is important to study and analyse lung structure, and classification of the lung lobes provides useful
information for lung cancer analysis. In this report, we describe an algorithm that classifies the lungs into lung lobes for
lung disease cases from multi-slice CT images. The classification is carried out efficiently using information on the lung
blood vessels, bronchi, and interlobar fissures. Applying the classification algorithm to multi-slice CT images of 20
normal cases and 5 lung disease cases, we demonstrate its usefulness.
Segmentation of lung fields using Chan-Vese active contour model in chest radiographs
Show abstract
A CAD tool for chest radiographs consists of several procedures and the very first step is segmentation of lung fields. We
develop a novel methodology for segmentation of lung fields in chest radiographs that can satisfy the following two
requirements. First, we aim to develop a segmentation method that does not need a training stage with manual estimation
of anatomical features in a large training dataset of images. Secondly, for the ease of implementation, it is desirable to
apply a well established model that is widely used for various image-partitioning practices. The Chan-Vese active
contour model, which is based on Mumford-Shah functional in the level set framework, is applied for segmentation of
lung fields. With the use of this model, segmentation of lung fields can be carried out without detailed prior knowledge
on the radiographic anatomy of the chest, yet in some chest radiographs, the trachea regions are unfavorably segmented
out in addition to the lung field contours. To eliminate artifacts from the trachea, we locate the upper end of the trachea,
find a vertical center line of the trachea and delineate it, and then brighten the trachea region to make it less distinctive.
The segmentation process is finalized by subsequent morphological operations. We randomly select 30 images from the
Japanese Society of Radiological Technology image database to test the proposed methodology and the results are shown.
We hope our segmentation technique can help promote the development of CAD tools, especially for emerging chest
radiographic imaging techniques such as dual-energy radiography and chest tomosynthesis.
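The core of the Chan-Vese model is an alternation between region-mean updates and pixel reassignment. The following is a minimal sketch of that piecewise-constant iteration, omitting the curvature/length regularization of the full level-set formulation; the toy image is invented for illustration.

```python
import numpy as np

def chan_vese_binary(image, n_iter=50):
    """Minimal two-phase Chan-Vese sketch: alternately update the region
    means (c1, c2) and reassign each pixel to the region whose mean it is
    closer to. The length/curvature term of the full model is omitted."""
    img = image.astype(float)
    mask = img > img.mean()                 # crude initial partition
    for _ in range(n_iter):
        c1 = img[mask].mean() if mask.any() else 0.0      # inside mean
        c2 = img[~mask].mean() if (~mask).any() else 0.0  # outside mean
        new_mask = (img - c1) ** 2 < (img - c2) ** 2
        if np.array_equal(new_mask, mask):
            break                           # converged
        mask = new_mask
    return mask

# Toy "radiograph": a dark lung-like blob on a brighter background
img = np.full((64, 64), 200.0)
img[16:48, 16:48] = 50.0
seg = chan_vese_binary(img)
```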
Catheter detection and classification on chest radiographs: an automated prototype computer-aided detection (CAD) system for radiologists
Show abstract
Chest radiographs are the quickest and safest method to check the placement of man-made medical devices such as
catheters, stents, and pacemakers, of which catheters are the most commonly used. The two catheters most often used,
especially in the ICU, are the endotracheal (ET) tube, used to maintain the patient's airway, and the nasogastric (NG)
tube, used for feeding and administering drugs. Tertiary ICUs typically generate over 250 chest radiographs per day to
confirm tube placement. Incorrect tube placements can cause serious complications and can even be fatal. The
task of identifying these tubes on chest radiographs is difficult for radiologists and ICU personnel given the high volume
of cases. This motivates the need for an automatic detection system to aid radiologists in processing these critical cases
in a timely fashion while maintaining patient safety. To date, there has been very little research in this area. This paper
develops a new fully automatic prototype computer-aided detection (CAD) system for detection and classification of
catheters on chest radiographs using a combination of template matching, morphological processing and region growing.
The preliminary evaluation was carried out on 25 cases. The prototype CAD system was able to detect ET and NG tubes
with sensitivities of 73.7% and 76.5% respectively and with specificities of 91.3% and 84.0% respectively. The results
from the prototype system show that it is feasible to automatically detect both catheters on chest radiographs, with the
potential to significantly speed the delivery of imaging services while maintaining high accuracy.
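The template-matching component of such a CAD pipeline can be sketched with plain normalized cross-correlation. This is illustrative only; the actual system's templates, morphological processing, and region growing are not reproduced, and the toy "tube" image is invented.

```python
import numpy as np

def ncc(image, template):
    """Normalized cross-correlation of a template over an image
    (valid positions only); returns a score map in [-1, 1]."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    h, w = image.shape
    out = np.zeros((h - th + 1, w - tw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * tnorm
            out[r, c] = (p * t).sum() / denom if denom > 0 else 0.0
    return out

# Toy example: a bright vertical "tube" segment in a dark image
img = np.zeros((32, 32))
img[4:28, 15] = 1.0
tmpl = np.zeros((8, 5))
tmpl[:, 2] = 1.0                            # template with a centered line
scores = ncc(img, tmpl)
i, j = np.unravel_index(scores.argmax(), scores.shape)
```

The best match column `j` plus the template's line offset recovers the tube's image column.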
Automatic detection of lung vessel bifurcation in thoracic CT images
Show abstract
Computer-aided diagnosis (CAD) systems for detection of lung nodules have been an active topic of research for the last
few years. It is desirable that a CAD system generate very few false positives (FPs) while maintaining high
sensitivity. This work aims to reduce the number of false positives occurring at vessel bifurcation points. FPs occur quite
frequently at vessel branching points because of their shape: the intrinsic geometry of intersecting tubular vessel
structures, combined with partial volume effects and the appearance of soft-tissue attenuation surrounded by
parenchyma, can make them appear locally spherical.
We propose a model-based technique for detection of vessel branching points using skeletonization, followed by branch-point
analysis. First we perform vessel structure enhancement using a multi-scale Hessian filter to accurately segment
tubular structures of various sizes followed by thresholding to get binary vessel structure segmentation [6]. A modified
Reebgraph [7] is applied next to extract the critical points of structure and these are joined by a nearest neighbor criterion
to obtain complete skeletal model of vessel structure. Finally, the skeletal model is traversed to identify branch points,
and extract metrics including individual branch length, number of branches and angle between various branches. Results
on 80 sub-volumes, consisting of 60 actual vessel branchings and 20 solitary solid nodules, show that the algorithm
correctly identified vessel branching points in 57 sub-volumes (95% sensitivity) and misclassified 2 nodules as vessel
branches. Thus, this technique has potential for the explicit identification of vessel branching points in general vessel analysis, and could be useful for false positive reduction in a lung CAD system.
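Once a skeletal model is available, a simple proxy for branch-point detection is to count skeleton neighbors at each skeleton pixel. The 2D sketch below illustrates the idea; the paper works on 3D skeletons extracted via a modified Reeb graph, which this does not reproduce.

```python
import numpy as np
from scipy.ndimage import convolve

def branch_points(skeleton):
    """Flag skeleton pixels with 3 or more skeleton neighbors in the
    8-neighborhood: a simple proxy for bifurcation points."""
    skel = skeleton.astype(np.uint8)
    kernel = np.ones((3, 3))
    kernel[1, 1] = 0                        # exclude the center pixel
    neighbors = convolve(skel, kernel, mode='constant')
    return (skel == 1) & (neighbors >= 3)

# Toy Y-shaped skeleton: a trunk splitting into two diagonal branches
sk = np.zeros((9, 9), dtype=bool)
sk[0:5, 4] = True                           # trunk
for k in range(1, 4):
    sk[4 + k, 4 - k] = True                 # left branch
    sk[4 + k, 4 + k] = True                 # right branch
bp = branch_points(sk)
```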
Hybrid CAD scheme for lung nodule detection in PET/CT images
Show abstract
Lung cancer is the leading cause of cancer death among males worldwide. PET/CT is useful for the detection of
early lung cancer since it is an imaging technique that provides both functional and anatomical information. However,
radiologists have to examine a large number of images, so reducing the radiologist's load is strongly desired. In this
study, a hybrid CAD scheme is proposed to detect lung nodules in PET/CT images. The proposed method detects lung
nodules from both the CT and PET images. For detection in CT images, solitary nodules are detected using a
cylindrical filter that we developed. PET images are binarized based on the standardized uptake value (SUV) to detect
high-uptake regions. FP reduction is performed using seven characteristic features and a support vector machine.
Finally, the candidate regions are obtained by integrating these results. In the experiment, we evaluated the proposed
method on 50 cases of PET/CT images obtained in a cancer-screening program. We evaluated the true-positive fraction
(TPF) and the number of false positives per case (FPs/case). As a result, the TPFs for CT and PET were 0.67
and 0.38, respectively. By integrating both results, the TPF was improved to 0.80. These results indicate
that our method may be useful for lung cancer detection using PET/CT images.
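The SUV-based binarization of the PET channel can be sketched as follows. The body-weight SUV formula is standard, but the 2.5 cut-off and all variable names here are assumptions for illustration, not values from the paper.

```python
import numpy as np

def suv_map(activity_bq_ml, weight_kg, injected_dose_bq):
    """Body-weight SUV: tissue activity concentration normalized by the
    injected dose per unit body weight (dose assumed decay-corrected
    to scan time)."""
    return activity_bq_ml * (weight_kg * 1000.0) / injected_dose_bq

def high_uptake_mask(activity_bq_ml, weight_kg, injected_dose_bq,
                     suv_thresh=2.5):
    """Binarize a PET volume at a fixed SUV threshold (2.5 is a commonly
    used, but not universal, cut-off for suspicious uptake)."""
    return suv_map(activity_bq_ml, weight_kg, injected_dose_bq) >= suv_thresh

# Toy example: 70 kg patient, 370 MBq injected dose
act = np.array([1000.0, 5000.0, 20000.0])   # voxel activities in Bq/mL
mask = high_uptake_mask(act, 70.0, 370e6)
```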
Classification of texture patterns in CT lung imaging
Show abstract
Since several lung diseases can be potentially diagnosed based on the patterns of lung tissue observed in medical images,
automated texture classification can be useful in assisting the diagnosis. In this paper, we propose a methodology for
discriminating between various types of normal and diseased lung tissue in computed tomography (CT) images that
utilizes Vector Quantization (VQ), an image compression technique, to extract discriminative texture features. Rather
than focusing on images of the entire lung, we direct our attention to the extraction of local descriptors from individual
regions of interest (ROIs) as determined by domain experts. After determining the ROIs, we generate "locally optimal"
codebooks representing texture features of each region using the Generalized Lloyd Algorithm. We then utilize the
codeword usage frequency of each codebook as a discriminative feature vector for the region it represents. We compare
k-nearest neighbor, support vector machine and neural network classification approaches using the normalized
histogram intersection as a similarity measure. The classification accuracy reached up to 98% for certain experimental
settings, indicating that our approach may potentially assist clinicians in the interpretation of lung images and facilitate
the investigation of relationships among structure, texture and function or pathology related to several lung diseases.
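A sketch of the codebook/histogram pipeline described above: k-means (i.e., a Lloyd-style quantizer) builds the codebook, codeword-usage frequencies form the descriptor, and normalized histogram intersection serves as the similarity. The patch size and codebook size below are arbitrary, not the paper's settings.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

def codeword_histogram(patches, codebook):
    """Assign each patch vector to its nearest codeword and return the
    normalized codeword-usage histogram (the texture descriptor)."""
    codes, _ = vq(patches, codebook)
    hist = np.bincount(codes, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity of two normalized histograms: sum of bin-wise minima."""
    return float(np.minimum(h1, h2).sum())

rng = np.random.default_rng(1)
patches = rng.random((200, 25))     # hypothetical 5x5 ROI patches, flattened
codebook, _ = kmeans2(patches, 8, minit='++', seed=1)  # Lloyd-style codebook
h = codeword_histogram(patches, codebook)
```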
Toward the detection of abnormal chest radiographs the way radiologists do it
Show abstract
Computer Aided Detection (CADe) and Computer Aided Diagnosis (CADx) are relatively recent areas of research that
attempt to employ feature extraction, pattern recognition, and machine learning algorithms to aid radiologists in
detecting and diagnosing abnormalities in medical images. These computational methods are based on the
assumption that there are distinct classes of abnormalities, and that each class has some distinguishing features that set it
apart from other classes. However, abnormalities in chest radiographs tend to be very heterogeneous. The literature
suggests that thoracic (chest) radiologists develop their ability to detect abnormalities by developing a sense of what is
normal, so that anything that is abnormal attracts their attention. This paper discusses an approach to CADe that is based
on a technique called anomaly detection (which aims to detect outliers in data sets) for the purpose of detecting atypical
regions in chest radiographs. However, in order to apply anomaly detection to chest radiographs, it is necessary to
develop a basis for extracting features from corresponding anatomical locations in different chest radiographs. This
paper proposes a method for doing this, and describes how it can be used to support CADe.
Automated detection of pulmonary nodules in CT: false positive reduction by combining multiple classifiers
Show abstract
The purpose of this study was to investigate the usefulness of various classifier combination methods for improving the
performance of a CAD system for pulmonary nodule detection in CT. We employed CT cases in the publicly available
lung image database consortium (LIDC) dataset, which included 85 CT cases with 110 nodules. We first used six
individual classifiers for nodule detection in CT, including linear discriminant analysis (LDA), quadratic discriminant
analysis (QDA), artificial neural network (ANN), and three types of support vector machines (SVM). Five
information-fusion methods were then employed to combine the classifiers' outputs for improving detection performance.
The five combination methods included two supervised (likelihood ratio method and neural network) and three
unsupervised ones (the mean, the product, and the majority vote of the output scores from the six individual classifiers).
Leave-one-case-out was employed to train and test individual classifiers and supervised combination methods. At a
sensitivity of 80%,
the numbers of false positives per case for the six individual classifiers were 6.1 for LDA, 19.9 for QDA, 8.6 for ANN,
23.7 for SVM-dot, 17.0 for SVM-poly, and 23.35 for SVM-ANOVA; the numbers of false positives per case for the five
combination methods were 3.4 for the majority-vote rule, 6.2 for the mean, 5.7 for the product, 9.7 for the neural
network, and 28.1 for the likelihood ratio method. The majority-vote rule achieved higher performance levels than other
combination methods. It also achieved higher performance than the best individual classifier, which is not the case for
other combination methods.
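The majority-vote rule itself is straightforward to sketch; the decision matrix below is invented for illustration.

```python
import numpy as np

def majority_vote(outputs):
    """outputs: (n_classifiers, n_candidates) array of 0/1 decisions.
    A nodule candidate is accepted when a strict majority of the
    individual classifiers accept it."""
    outputs = np.asarray(outputs)
    return outputs.sum(axis=0) > outputs.shape[0] / 2

# Six hypothetical classifier decisions for four nodule candidates
decisions = np.array([
    [1, 0, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
])
kept = majority_vote(decisions)
```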
Scan-rescan reproducibility of CT densitometric measures of emphysema
Show abstract
This study investigated the reproducibility of HRCT densitometric measures of emphysema in patients
scanned twice one week apart. 24 emphysema patients from a multicenter study were scanned at full
inspiration (TLC) and expiration (RV), then again a week later for four scans total. Scans for each patient
used the same scanner and protocol, except for tube current in three patients. Lung segmentation with gross
airway removal was performed on the scans. Volume, weight, mean lung density (MLD), relative area
under -950HU (RA-950), and 15th percentile (PD-15) were calculated for TLC, and volume and an air-trapping
mask (RA-air) between -950 and -850HU for RV. For each measure, absolute differences were
computed for each scan pair, and linear regression was performed against volume difference in a subgroup
with volume difference <500mL. Two TLC scan pairs were excluded due to segmentation failure. The
mean lung volumes were 5802 +/- 1420mL for TLC, 3878 +/- 1077mL for RV. The mean absolute
differences were 169mL for TLC volume, 316mL for RV volume, 14.5g for weight, 5.0HU for MLD,
0.66p.p. for RA-950, 2.4HU for PD-15, and 3.1p.p. for RA-air. The <500mL subgroup had 20 scan pairs
for TLC and RV. The R2 values were 0.8 for weight, 0.60 for MLD, 0.29 for RA-950, 0.31 for PD-15, and
0.64 for RA-air. Our results indicate that considerable variability exists in densitometric measures over one
week that cannot be attributed to breathhold or physiology. This has implications for clinical trials relying
on these measures to assess emphysema treatment efficacy.
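The densitometric measures named above are simple statistics over the HU values of the segmented lung voxels; a sketch follows. The toy HU values are invented, and the RA-air band is taken as strictly between -950 and -850 HU (the abstract does not specify inclusivity).

```python
import numpy as np

def densitometry(lung_hu):
    """Standard CT densitometric emphysema indices from the HU values
    of segmented lung voxels."""
    lung_hu = np.asarray(lung_hu, dtype=float)
    mld = lung_hu.mean()                         # mean lung density (HU)
    ra950 = 100.0 * (lung_hu < -950).mean()      # % voxels below -950 HU
    pd15 = np.percentile(lung_hu, 15)            # 15th percentile density
    # air-trapping fraction: voxels between -950 and -850 HU (for RV scans)
    ra_air = 100.0 * ((lung_hu > -950) & (lung_hu < -850)).mean()
    return mld, ra950, pd15, ra_air

hu = np.array([-980.0, -960.0, -940.0, -900.0, -870.0, -820.0, -700.0, -500.0])
mld, ra950, pd15, ra_air = densitometry(hu)
```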
Improving the channeler ant model for lung CT analysis
Show abstract
The Channeler Ant Model (CAM) is an algorithm based on virtual ant colonies, conceived for the segmentation
of complex structures with different shapes and intensity in a 3D environment. It exploits the natural capabilities
of virtual ant colonies to modify the environment and communicate with each other by pheromone deposition.
When applied to lung CT, the CAM can be turned into a Computer-Aided Detection (CAD) method for the
identification of pulmonary nodules, supporting radiologists in the identification of early-stage pathological
objects. The CAM has been validated on the segmentation of 3D artificial objects and has already been
successfully applied to lung nodule detection in computed tomography images within the ANODE09
challenge. The model improvements for the segmentation of nodules attached to the pleura and to the vessel
tree are discussed, as well as a method to enhance the detection of low-intensity nodules. The results on five
datasets annotated with different criteria show that the analytical modules (i.e. up to the filtering stage) provide
a sensitivity in the 80 - 90% range with a number of FP/scan of the order of 20. The classification module,
although not yet optimised, keeps the sensitivity in the 70 - 85% range at about 10 FP/scan, in spite of the
fact that the annotation criteria for the training and the validation samples are different.
Posters: Microscopy
Comparative performance analysis of stained histopathology specimens using RGB and multispectral imaging
Show abstract
A performance study was conducted to compare classification accuracy using both multispectral imaging
(MSI) and standard bright-field imaging (RGB) to characterize breast tissue microarrays. The study was
primarily focused on investigating the classification power of texton features for differentiating cancerous
breast TMA discs from normal. The feature extraction algorithm includes two main processes: texton library
training and histogram construction. First, two texton libraries were built for multispectral cubes and RGB
images respectively, which comprised the training process. Second, texton histograms from each
multispectral cube and RGB image were used as testing sets. Finally, within each spectral band, exhaustive
feature selection was used to search for the combination of features that yielded the best classification
accuracy, using the pathologic result as the gold standard. A support vector machine was applied as the classifier
using leave-one-out cross-validation. The spectra carrying the greatest discriminatory power were
automatically chosen and a majority vote was used to make the final classification. The study included 122
breast TMA discs that showed poor classification power based on simple visualization of RGB images. Use
of multispectral cubes showed improved sensitivity and specificity compared to the RGB images (85%
sensitivity & 85% specificity for MSI vs. 75% & 65% for RGB). This study demonstrates that use of texton
features derived from MSI datasets achieve better classification accuracy than those derived from RGB
datasets. This study further shows that MSI provided statistically significant improvements in automated
analysis of single-stained bright-field images. Future work will examine MSI performance in assessing multi-stained
specimens.
Automatic location of microscopic focal planes for computerized stereology
Daniel T. Elozory,
Om Pavithra Bonam,
Kurt Kramer,
et al.
Show abstract
When applying design-based stereology to biological tissue, there are two primary applications for an auto-focusing
function in the software of a computerized stereology system. The system must first locate the in-focus optical planes at
the upper and lower surfaces of stained tissue sections, thus identifying the top and bottom as well as the thickness of the
tissue. Second, the system must find the start and end along the Z-axis of stained objects within a Z-stack of images
through tissue sections. In contrast to traditional autofocus algorithms that seek a global maximum or peak on the focus
curve, the goal of this study was to find the two "knees" of the focus curve that represent the "just out-of-focus" focal
planes. The upper surface of the tissue section is defined as the image just before focus is detected moving down the Z-stack.
Continuing down, the lower surface is defined as the first image of the last set of adjacent images where focus is
no longer detected.
The performance of seven focus algorithms in locating the top and bottom focal planes of tissue sections was analyzed
by comparing each algorithm on 34 Z-stacks including a total of 828 images. The Thresholded Absolute Gradient
algorithm outperformed all others, correctly identifying the top or bottom focal plane within an average of 1 μm on the
training data as well as the test data.
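A sketch of a thresholded absolute-gradient focus measure and the surface-finding logic on a focus curve. The thresholds and the toy curve are invented; this is not the Stereologer implementation.

```python
import numpy as np

def thresholded_abs_gradient(image, thresh=10.0):
    """Focus measure: sum of absolute horizontal intensity differences
    that exceed a threshold; larger values mean a sharper image."""
    diff = np.abs(np.diff(image.astype(float), axis=1))
    return diff[diff > thresh].sum()

def surface_indices(focus_values, focus_thresh):
    """Moving down a Z-stack of per-image focus scores, report the first
    and last in-focus planes; the 'knees' just outside them approximate
    the top and bottom tissue surfaces."""
    in_focus = [i for i, f in enumerate(focus_values) if f > focus_thresh]
    return (in_focus[0], in_focus[-1]) if in_focus else (None, None)

# Toy focus curve over an 11-image Z-stack (values are invented)
curve = [1, 2, 3, 20, 60, 90, 85, 55, 18, 3, 1]
top, bottom = surface_indices(curve, focus_thresh=10)
```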
Distribution fitting-based pixel labeling for histology image segmentation
Show abstract
This paper presents a new pixel labeling algorithm for complex histology image segmentation. For each image pixel, a
Gaussian mixture model is applied to estimate its neighborhood intensity distributions. With this local distribution
fitting, a set of pixels having a full set of source classes (e.g. nuclei, stroma, connective tissue, and background) in their
neighborhoods are identified as the seeds for pixel labeling. A seed pixel is labeled by measuring its intensity distance to
each of its neighborhood distributions, and the one with the shortest distance is selected to label the seed. For non-seed
pixels, we propose two different labeling schemes: global voting and local clustering. In global voting each seed
classifies a non-seed pixel into one of the seed's local distributions, i.e., it casts one vote; the final label for the non-seed
pixel is the class which gets the most votes, across all the seeds. In local clustering, each non-seed pixel is labeled by one
of its own neighborhood distributions. Because the local distributions in a non-seed pixel neighborhood do not
necessarily correspond to distinct source classes (i.e., two or more local distributions may be produced by the same
source class), we first identify the "true" source class of each local distribution by using the source classes of the seed
pixels and a minimum distance criterion to determine the closest source class. The pixel can then be labeled as belonging
to this class. With both labeling schemes, experiments on a set of uterine cervix histology images show encouraging
performance of our algorithm when compared with traditional multithresholding and K-means clustering, as well as
state-of-the-art mean shift clustering, multiphase active contours, and Markov random field-based algorithms.
Image-based histologic grade estimation using stochastic geometry analysis
Sokol Petushi,
Jasper Zhang,
Aladin Milutinovic,
et al.
Show abstract
Background: Low reproducibility of histologic grading of breast carcinoma due to its subjectivity has traditionally
diminished the prognostic value of histologic breast cancer grading. The objective of this study is to
assess the effectiveness and reproducibility of grading breast carcinomas with automated computer-based image
processing that utilizes stochastic geometry shape analysis. Methods: We used histology images stained with
Hematoxylin & Eosin (H&E) from invasive mammary carcinoma, no special type cases as a source domain and
study environment. We developed a customized hybrid semi-automated segmentation algorithm to cluster the
raw image data and reduce the image domain complexity to a binary representation with the foreground representing
regions of high density of malignant cells. A second algorithm was developed to apply stochastic geometry
and texture analysis measurements to the segmented images and to produce shape distributions, transforming
the original color images into a histogram representation that captures their distinguishing properties between
various histological grades. Results: Computational results were compared against known histological grades
assigned by the pathologist. The Earth Mover's Distance (EMD) similarity metric and the K-Nearest Neighbors
(KNN) classification algorithm provided correlations between the high-dimensional set of shape distributions and
a priori known histological grades. Conclusion: Computational pattern analysis of histology shows promise as
an effective software tool in breast cancer histological grading.
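The EMD-plus-KNN step can be sketched with SciPy's 1-D Wasserstein distance, treating each shape distribution as a sample set. The grades and data below are synthetic; the paper's actual shape distributions are high-dimensional.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def knn_emd_classify(query, train_dists, train_grades, k=3):
    """Label a shape distribution with the majority grade among its k
    nearest training distributions under the Earth Mover's distance
    (SciPy's 1-D Wasserstein distance)."""
    d = [wasserstein_distance(query, t) for t in train_dists]
    nearest = np.argsort(d)[:k]
    grades = [train_grades[i] for i in nearest]
    return max(set(grades), key=grades.count)

rng = np.random.default_rng(2)
# Synthetic training distributions for two histological grades
train = [rng.normal(0.0, 1.0, 50) for _ in range(5)] + \
        [rng.normal(5.0, 1.0, 50) for _ in range(5)]
grades = [1] * 5 + [3] * 5
pred = knn_emd_classify(rng.normal(0.2, 1.0, 50), train, grades)
```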
Glandular object based tumor morphometry in H&E biopsy samples for prostate cancer prognosis
Show abstract
Morphological and architectural characteristics of primary prostate tissue compartments, such as epithelial nuclei (EN)
and cytoplasm, provide critical information for cancer diagnosis, prognosis and therapeutic response prediction. The
subjective and variable Gleason grade assessed by expert pathologists in Hematoxylin and Eosin (H&E) stained
specimens has been the standard for prostate cancer diagnosis and prognosis. We propose a novel morphometric,
glandular object-oriented image analysis approach for the robust quantification of H&E prostate biopsy images.
We demonstrate the utility of features extracted through the proposed method in predicting disease progression post
treatment in a multi-institution cohort of 1027 patients. The biopsy based features were univariately predictive for
clinical response post therapy; with concordance indexes (CI) ≤ 0.4 or ≥ 0.6. In multivariate analysis, a glandular object
feature quantifying tumor epithelial cells not directly associated with an intact tumor gland was selected in a model
incorporating preoperative clinical data, protein biomarker and morphological imaging features. The model achieved a
CI of 0.73 in validation, which was significantly higher than a CI of 0.69 for the standard multivariate model based
solely on clinical features currently used in clinical practice.
This work presents one of the first demonstrations of glandular object based morphological features in the H&E stained
biopsy specimen to predict disease progression post primary treatment. Additionally, it is the largest scale study of the
efficacy and robustness of the proposed features in prostate cancer prognosis.
Toward automated quantification of biological microstructures using unbiased stereology
Om Pavithra Bonam,
Daniel Elozory,
Kurt Kramer,
et al.
Show abstract
Quantitative analysis of biological microstructures using unbiased stereology plays a large and growing role in
bioscience research. Our aim is to add a fully automatic, high-throughput mode to a commercially available,
computerized stereology device (Stereologer). The current method for estimation of first- and second-order parameters of
biological microstructures requires a trained user to manually select objects of interest (cells, fibers, etc.) while stepping
through the depth of a stained tissue section in fixed intervals. The proposed approach uses a combination of color and
gray-level processing. Color processing is used to identify the objects of interest, by training on the images to obtain the
threshold range for objects of interest. In gray-level processing, a region-growing approach was used to assign a unique
identity to the objects of interest and enumerate them. This automatic approach achieved an overall object detection rate
of 93.27%. Thus, these results support the view that automatic color and gray-level processing, combined with unbiased
sampling and assumption- and model-free geometric probes, can provide accurate and efficient quantification of
biological objects.
A distributed architecture for a loosely coupled virtual microscopy system
Show abstract
Virtual microscopy systems are typically implemented following standard client-server architectures, under which
the server must store a huge quantity of data. The server must attend requests from many clients for several
Regions of Interest (RoIs) at any desired level of magnification and quality. The limited communication bandwidth,
the I/O image data accesses, the decompression processing, and specific raw image data operations such as clipping
or zooming to a desired magnification are all highly time-consuming. Together, these may result in poor navigation
experiences, with annoying effects produced by the delayed response times. This
article presents a virtual microscope system with a distributed storage system and parallel processing. The
system attends each request in parallel, using a clustered java virtual machine and a distributed filesystem.
Images are stored in JPEG2000 which allows natural parallelization by splitting the image data into a set of
small codeblocks that contain independent information of an image patch, namely, a particular magnification,
a specific image location and a pre-established quality level. The compressed J2K file is replicated within the
Distributed Filesystem, providing fault tolerance and fast access. A requested RoI is split into stripes which
are independently decoded from the distributed filesystem, using an index file that makes it easy to locate the
particular node containing the required set of codeblocks. When compared with a non-parallelized version of
the virtual microscope software, user experience is improved by speeding up RoI displaying in about 60 % using
two computers.
Counting of RBCs and WBCs in noisy normal blood smear microscopic images
Show abstract
This work focuses on the segmentation and counting of peripheral blood smear particles, which plays a vital role in
medical diagnosis. Our approach benefits from several powerful processing techniques. Firstly, the method used for
denoising a blood smear image is based on the Bivariate wavelet. Secondly, image edge preservation uses the Kuwahara
filter. Thirdly, a new binarization technique is introduced by merging the Otsu and Niblack methods. We have also
proposed an efficient step-by-step procedure to determine solid binary objects by merging modified binary, edged
images and modified Chan-Vese active contours. The separation of White Blood Cells (WBCs) from Red Blood Cells
(RBCs) into two sub-images based on the RBC (blood's dominant particle) size estimation is a critical step. Using
Granulometry, we get an approximation of the RBC size. The proposed separation algorithm is an iterative mechanism
which is based on morphological theory, saturation amount and RBC size. A primary aim of this work is to introduce an
accurate mechanism for counting blood smear particles. This is accomplished by using the Immersion Watershed
algorithm which counts red and white blood cells separately. To evaluate the capability of the proposed framework,
experiments were conducted on normal blood smear images. This framework was compared to other published
approaches and found to have lower complexity and better performance in its constituent steps; hence, it has a better
overall performance.
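The watershed-based counting step can be sketched with a distance-transform watershed using scikit-image. The marker-selection heuristic and toy blobs below are assumptions for illustration, not the paper's Immersion Watershed implementation.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def count_cells(binary, core_frac=0.7):
    """Split touching cells with a distance-transform watershed and count
    the resulting regions. Markers are the 'cores' of the distance map:
    pixels deeper than core_frac of its maximum (a simple heuristic)."""
    dist = ndimage.distance_transform_edt(binary)
    markers, _ = ndimage.label(dist > core_frac * dist.max())
    labels = watershed(-dist, markers, mask=binary)
    return int(labels.max())

# Two overlapping disks standing in for a pair of touching RBCs
yy, xx = np.mgrid[0:40, 0:60]
blob = ((yy - 20) ** 2 + (xx - 20) ** 2 < 100) | \
       ((yy - 20) ** 2 + (xx - 36) ** 2 < 100)
n = count_cells(blob)
```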
Posters: Musculoskeletal
Direct visualization of regions with lowered bone mineral density in dual-energy CT images of vertebrae
Show abstract
Dual-energy CT allows for a better material differentiation than conventional CT. For the purpose of osteoporosis
diagnosis, a detection of regions with lowered bone mineral density (BMD) is of high clinical interest. Based on
an existing biophysical model of the trabecular bone in vertebrae a new method for directly highlighting those
low density regions in the image data has been developed. For this, we combine image data acquired at 80 kV
and 140 kV with information about the BMD range in different vertebrae and derive a method for computing a
color enhanced image which clearly indicates low density regions. An evaluation of our method which compares
it with a quantitative method for BMD assessment shows a very good correspondence between both methods.
The strength of our method lies in its simplicity and speed.
Automated localization of vertebra landmarks in MRI images
Show abstract
The identification of key landmark points in an MR spine image is an important step for tasks such as
vertebra counting. In this paper, we propose a template matching based approach for automatic detection of two key
landmark points, namely the second cervical vertebra (C2) and the sacrum from sagittal MR images. The approach
is comprised of an approximate localization of vertebral column followed by matching with appropriate templates in
order to detect/localize the landmarks. A straightforward extension of the work described here is an automated
classification of spine section(s). It also serves as a useful building block for further automatic processing, such as
extraction of regions of interest for subsequent image processing, and aids in the counting of vertebrae.
Segmentation of knee injury swelling on infrared images
John Puentes,
Hélène Langet,
Christophe Herry,
et al.
Show abstract
Interpretation of medical infrared images is complex due to thermal noise, absence of texture, and small temperature
differences in pathological zones. Acute inflammatory response is a characteristic symptom of some knee injuries like
anterior cruciate ligament sprains, muscle or tendon strains, and meniscus tears. Whereas artificial coloring of the
original grey-level images may allow the extent of inflammation in the area to be assessed visually, automated
segmentation of these regions remains a challenging problem. This paper presents a hybrid segmentation algorithm to evaluate the extent of
inflammation after knee injury, in terms of temperature variations and surface shape. It is based on the intersection of
rapid color segmentation and homogeneous region segmentation, to which a Laplacian of a Gaussian filter is applied.
While rapid color segmentation enables to properly detect the observed core of swollen area, homogeneous region
segmentation identifies possible inflammation zones, combining homogeneous grey level and hue area segmentation.
The hybrid segmentation algorithm compares the potential inflammation regions partially detected by each method to
identify overlapping areas. Noise filtering and edge segmentation are then applied to common zones in order to segment
the swelling surfaces of the injury. Experimental results on images of a patient with anterior cruciate ligament sprain
show the improved performance of the hybrid algorithm with respect to its separated components. The main contribution
of this work is a meaningful automatic segmentation of abnormal skin temperature variations on infrared thermography
images of knee injury swelling.
Posters: Oncology
Computer-aided tumor detection stemmed from the fuzzification of the Dempster-Shafer theory
In decision making processes where we have to deal with epistemic uncertainties, the Dempster-Shafer theory (DST) of
evidence and fuzzy logic have gained prominence as the methods of choice over traditional probabilistic methods. The
DST is unfortunately known to give wrong results in situations of high conflict. While some methods have been
proposed in the literature for improving the DST, such as the weighted DST which assumes that we have some
information about the relative reliabilities of the classifiers, we opted to incorporate fuzzy concepts in the DST
framework. This work was motivated by the desire to improve detection performance of a Computer-Aided Detection
(CAD) system under development for the detection of tumors in Positron Emission Tomography (PET) images by fusing
the outputs of multiple classifiers such as the SVM and LDA classifiers. A first implementation based on a simple binary
fusion scheme gave a result of 69% true detections with an average of 2.5 false positive detections per 3D image (FPI).
These results prompted the use of the DST which resulted in 92% detection sensitivity and 25 FPI. As a way of further
reducing the false detections, we chose to tackle the limitations inherent to the DST by principally applying fuzzy
techniques in defining the hypotheses and experimenting with new combination rules. The best result of this modified
DST approach has been a 92% true tumor detection with 12 FPI; indicating a reduction by a factor of 2 of the false
detections while maintaining high sensitivity.
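The fusion step described above rests on Dempster's rule of combination. As a minimal sketch (the two-hypothesis frame and the mass values are hypothetical, not taken from the paper), combining the outputs of two classifiers over the frame {tumor, normal} looks like:

```python
def dempster_combine(m1, m2):
    """Combine two basic mass assignments (dicts: frozenset -> mass)
    using Dempster's rule, renormalizing away the conflicting mass."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    k = 1.0 / (1.0 - conflict)  # normalization factor
    return {h: k * m for h, m in combined.items()}

# Hypothetical mass assignments from two classifiers (e.g. SVM and LDA)
T, N = frozenset({"tumor"}), frozenset({"normal"})
U = T | N  # ignorance: mass placed on the whole frame of discernment
m_svm = {T: 0.6, N: 0.1, U: 0.3}
m_lda = {T: 0.5, N: 0.2, U: 0.3}
fused = dempster_combine(m_svm, m_lda)
```

In the high-conflict situations the abstract mentions, `conflict` approaches 1 and the normalization amplifies whatever little agreeing mass remains; this is exactly the pathology that the fuzzified hypotheses and alternative combination rules aim to mitigate.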
Analysis of transient thermal images to distinguish melanoma from dysplastic nevi
We have recently developed a dynamic infrared (IR) imaging system that provides accurate measurements of
transient thermal response of the skin surface for characterizing lesions. Our hypothesis was that malignant
pigmented lesions with increased proliferative potential generate quantifiable amounts of heat and possess an
ability to reheat more quickly than the surrounding normal skin, thereby creating a marker of melanoma lesions
vs. non-proliferative nevi. In our previous studies, we demonstrated that the visualization and measurement
of the transient thermal response of the skin to a cooling excitation can aid the identification of skin lesions
of different origin. This capability of distinguishing benign from malignant pigmented lesions is expected to
improve the specificity and sensitivity for melanoma as well as other skin cancers, while decreasing the number
of unnecessary biopsies. In this work, in order to quantify the transient thermal response with high accuracy,
we present a processing framework on multimodal images, which includes a feature point (landmark) detection
module, an IR image registration module that uses the resulting landmarks to correct involuntary body/limb
motion, and an interactive white-light image segmentation module to delineate the contours of the lesions. The
proposed method is tested in a pilot patient study in which all the patients possess a pigmented lesion with a
clinical indication for biopsy. After scanning, biopsying, and grading the lesions for malignant potential, we
observe that the results of our approach match well with the biopsy results.
BCC skin cancer diagnosis based on texture analysis techniques
In this paper, we present a texture analysis based method for diagnosing the Basal Cell Carcinoma (BCC) skin cancer
using optical images taken from the suspicious skin regions. We first extracted the Run Length Matrix and Haralick
texture features from the images and used a feature selection algorithm to identify the most effective feature set for the
diagnosis. We then utilized a Multi-Layer Perceptron (MLP) classifier to classify the images to BCC or normal cases.
Experiments showed that detecting BCC cancer based on optical images is feasible. The best sensitivity and specificity
we achieved on our data set were 94% and 95%, respectively.
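Run-length and Haralick features are both derived from second-order image statistics. A minimal sketch of one such feature, Haralick contrast computed from a gray-level co-occurrence matrix (the tiny two-level test image is illustrative only, not from the paper):

```python
def glcm(img, dx, dy, levels):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy),
    normalized so the entries sum to 1."""
    h, w = len(img), len(img[0])
    p = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < w and 0 <= y2 < h:
                p[img[y][x]][img[y2][x2]] += 1
                total += 1
    return [[v / total for v in row] for row in p]

def haralick_contrast(p):
    """Haralick contrast feature: sum over (i - j)^2 * p(i, j)."""
    levels = len(p)
    return sum((i - j) ** 2 * p[i][j]
               for i in range(levels) for j in range(levels))

# Toy 2-level image: vertical stripes give maximal horizontal contrast
p = glcm([[0, 1], [0, 1]], dx=1, dy=0, levels=2)
contrast = haralick_contrast(p)
```

In practice one such matrix is built per offset and several Haralick statistics (contrast, correlation, energy, homogeneity) are pooled into the feature vector fed to the selection step.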
Computer-aided diagnosis for prostate cancer detection in the peripheral zone via multisequence MRI
We propose a computer-aided diagnosis (CADi) scheme for determining a likelihood measure of prostate
cancer presence in the peripheral zone (PZ) based on multisequence magnetic resonance imaging, including T2-weighted
(T2w), diffusion-weighted (DWI) and dynamic contrast-enhanced (DCE) MRI at 1.5 Tesla (T). Based on a feature set
derived from the gray level images, including first order statistics, Haralick's features, gradient features, semi-quantitative
and quantitative (pharmacokinetic modeling) dynamic parameters, we trained and compared four kinds of
classifiers: Support Vector Machine (SVM), Linear Discriminant Analysis (LDA), k-Nearest Neighbours (KNN) and
Naïve Bayes (NB). The aim is twofold: to identify the most relevant features and to build an efficient classifier
using those features. The database consists of 23 radical prostatectomy patients. Using histologic
sections as the gold standard, both cancers and non-malignant tissues (suspicious and clearly benign) were annotated in
consensus on all MR images by two radiologists, a histopathologist and a researcher. Diagnostic performance was
evaluated using ROC curve analysis. From the outputs of all evaluated feature selection methods,
we identified a restricted set of about 20 highly informative features. Quantitative evaluation of the diagnostic
performance yielded a maximal Area Under the ROC Curve (AUC) of 0.89. Moreover, the optimal CADi scheme
outperformed, in terms of specificity, our human experts in differentiating malignant from suspicious tissues, thus
demonstrating its potential for assisting cancer identification in the PZ.
A CAD system based on multi-parametric analysis for cancer prostate detection on DCE-MRI
Simone Mazzetti,
Massimo De Luca,
Christian Bracco,
et al.
Computer-aided diagnosis (CAD) systems using dynamic contrast enhanced magnetic resonance imaging (DCE-MRI)
data may be developed to help localize prostate cancer and guide biopsy, avoiding random sampling of the whole gland.
The purpose of this study is to present a DCE-MRI CAD system, which calculates the likelihood of malignancy in a
given area of the prostate by combining model-based and model-free parameters. The dataset includes 10 patients with
prostate cancer, with a total of 13 foci of adenocarcinoma. The post-processing is based on the following steps: testing of
registration quality, noise filtering, and extraction of the features needed by the CAD. Parameters with the best
performance in discriminating between normal and cancer regions are selected by computing the area under the ROC
curve, and by evaluating the correlation between pairs of features. A 6-dimensional parameters vector is generated for
each pixel and fed into a Bayesian classifier, in which the output is the probability of malignancy. The classification
performance is estimated using the leave-one-out method. The resulting area under the ROC curve is 0.899
(95% CI: 0.893-0.905); sensitivity and specificity are 82.4% and 82.1%, respectively, at the best cut-off point (0.352).
Preliminary results show that the system is accurate in detecting areas of the gland that are involved by tumor. Further
studies will be necessary to confirm these promising preliminary results.
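The area under the ROC curve reported above can be computed directly from per-pixel classifier outputs as the Mann-Whitney statistic, i.e. the probability that a randomly chosen cancer pixel scores higher than a randomly chosen normal one. A small sketch with hypothetical malignancy probabilities:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a random positive outscores a random negative."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count half
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical per-pixel malignancy probabilities from a classifier
cancer = [0.9, 0.8, 0.75, 0.4]
normal = [0.3, 0.35, 0.6, 0.2]
area = auc(cancer, normal)
```

Sensitivity and specificity at a cut-off such as the 0.352 mentioned above are then just the fractions of each list falling on the correct side of that threshold.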
Posters: Retina
Modeling photo-bleaching kinetics to map local variations in rod rhodopsin density
Localized rod photoreceptor and rhodopsin losses have been observed in post mortem histology both in normal
aging and in age-related maculopathy. We propose to noninvasively map local rod rhodopsin density through
analysis of the brightening of the underlying lipofuscin autofluorescence (LAF) in confocal scanning laser ophthalmoscopy
(cSLO) imaging sequences starting in the dark adapted eye. The detected LAF increases as rhodopsin is
bleached (time constant ≈ 25sec) by the average retinal irradiance of the cSLO 488nm laser beam. We fit parameters
of analytical expressions for the kinetics of rhodopsin bleaching that Lamb validated using electroretinogram
recordings in humans. By performing localized (≈ 100 μm) kinetic analysis, we create high-resolution maps of the
rhodopsin density. This new noninvasive imaging and analysis approach appears well-suited for measuring localized
changes in the rod photoreceptors and correlating them at high spatial resolution with localized pathological
changes of the retinal pigment epithelium (RPE) seen in steady-state LAF images.
Identifying glaucoma with multi-fractal features from optical coherence tomography (OCT)
We propose a novel technique that exploits multi-fractal features for classifying glaucoma from ocular normal patients
using retinal nerve fiber layer (RNFL) thickness measurement data. We apply a box-counting (BC) method, which
utilizes pseudo 2D images from 1D RNFL data, and a multi-fractional Brownian motion (mBm) method, which
incorporates both fractal and wavelet analyses, to analyze optical coherence tomography (OCT) data from 136 study
participants (63 with glaucoma and 73 ocular normal patients). For statistical performance comparison, we compute the
sensitivity, specificity and area under the receiver operating characteristic curve (AUROC). The AUROCs for identifying glaucoma from
ocular normal patients were 0.81 (BC), 0.87 (mBm), and 0.89 (BC+mBm), respectively.
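The BC method named above estimates a fractal dimension by counting occupied boxes at successively finer scales and fitting the log-log slope. A minimal sketch for a binary pseudo-2D image (power-of-two side length and at least one foreground pixel assumed, for simplicity):

```python
import math

def box_counting_dimension(img):
    """Estimate the box-counting (fractal) dimension of a square binary
    image (list of lists of 0/1, power-of-two side length, at least one
    foreground pixel) from the log-log slope of box counts."""
    size = len(img)
    sizes, counts = [], []
    s = size
    while s >= 1:
        count = 0
        for i in range(0, size, s):
            for j in range(0, size, s):
                # a box is occupied if any pixel inside it is foreground
                if any(img[y][x] for y in range(i, i + s)
                       for x in range(j, j + s)):
                    count += 1
        sizes.append(s)
        counts.append(count)
        s //= 2
    # least-squares slope of log(count) against log(1 / box_size)
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(c) for c in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

A completely filled patch recovers dimension 2 and an isolated point gives 0; RNFL-derived pseudo-images fall in between, and that slope serves as the classification feature.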
Feature-based glaucomatous progression prediction using scanning laser polarimetry (SLP) data
In this work, we investigate the effectiveness of a novel fractal feature-based technique in predicting glaucomatous
progression using the retinal nerve fiber layer (RNFL) thickness measurement data. The technique is used to analyze
GDx variable corneal compensator (GDx-VCC) scanning laser polarimeter (SLP) data from one eye of 96 study
participants (14 progressors, 45 non-progressors, and 37 ocular normal patients). The novel feature is obtained by using
a 2D box-counting (BC) method, which utilizes pseudo 2D images from 1D temporal, superior, nasal, inferior, temporal
(TSNIT) RNFL data. For statistical performance evaluation and comparison, we compute sensitivity, specificity and area
under the receiver operating characteristic curve (AUROC) for fractal analysis (FA) and other existing feature-based techniques such as
fast Fourier analysis (FFA) and wavelet-Fourier analysis (WFA). The AUROCs indicating discrimination between
progressors and non-progressors using the classifiers with the selected FA, WFA, and FFA features are 0.82, 0.78 and
0.82 respectively for 6 months prior to progression and 0.63, 0.69 and 0.73 respectively 18 months prior to progression.
We then use the same classifiers to compute specificity in ocular normal patients. The corresponding specificities for
ocular normal patients are 0.86, 0.76 and 0.86 for FFA, WFA and FA methods, respectively.
Automatic arteriovenous crossing phenomenon detection on retinal fundus images
Arteriolosclerosis is one cause of acquired blindness. Retinal fundus image examination is useful for early detection of
arteriolosclerosis. To diagnose the presence of arteriolosclerosis, physicians look for silver-wire arteries,
copper-wire arteries and the arteriovenous crossing phenomenon on retinal fundus images. The focus of this study was to
develop an automated method for detecting the arteriovenous crossing phenomenon on retinal images. Blood
vessel regions were detected using a double ring filter, and the crossing sections of artery and vein were detected
using a ring filter. The center of the ring was an interest point, and that point was determined to be a crossing section when
there were more than four blood vessel segments on the ring. The two blood vessels passing through the ring were then
classified into artery and vein using the pixel values in the red and blue component images. Finally, the V2-to-V1 ratio was
measured for recognition of abnormalities, where V1 is the venous diameter far from the crossing section and V2 is the
venous diameter near the crossing section. A crossing section with a V2-to-V1 ratio over 0.8 was
experimentally determined to be an abnormality. Twenty-four images, including 27 abnormalities and 54 normal crossing
sections, were used for a preliminary evaluation of the proposed method. The method detected 73% of
crossing sections with 2.8 mis-detected sections per image, and 59% of abnormalities were detected by
measurement of the V2-to-V1 ratio with 1.7 mis-detected sections per image.
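The abnormality criterion described above reduces to a simple ratio test; a one-function sketch (the diameter values are illustrative, not measurements from the paper):

```python
def crossing_abnormal(v1, v2, threshold=0.8):
    """Flag an arteriovenous crossing as abnormal when the ratio of the
    venous diameter near the crossing (v2) to the diameter far from it
    (v1) exceeds the experimentally chosen threshold of 0.8."""
    if v1 <= 0:
        raise ValueError("v1 must be a positive diameter")
    return (v2 / v1) > threshold

# Illustrative diameters in arbitrary units
near_abnormal = crossing_abnormal(1.00, 0.85)  # ratio 0.85 -> abnormal
clearly_normal = crossing_abnormal(1.00, 0.60)  # ratio 0.60 -> normal
```

The detection quality therefore hinges on how reliably the two venous diameters can be measured around the crossing found by the ring filter.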
Dual angle scan protocol with Doppler optical coherence tomography for retinal blood flow measurement
To improve the scan quality of Doppler Optical coherence tomography for blood flow measurement, we investigate how
to improve the Doppler signal for all vessels around the optic disc. The Doppler signal depends on the Doppler angle, which
is defined as the angle between the OCT beam and the direction normal to the vessel. In this examination, we test the effect of different
OCT beam directions on the Doppler angles of all veins. We also test maximizing the Doppler angle by combining scans with
different OCT beam directions. Three criteria were used to evaluate the overall quality: the average Doppler angle, the
percentage of vessels with a Doppler angle larger than the optimal value, and the percentage of vessels with a coefficient of
variation of the Doppler angle less than the optimal value. The results showed that the best protocol is to maximize the
Doppler angle by combining one scan with the OCT beam through the supranasal portion of the pupil and another scan with the OCT beam
through the infranasal portion of the pupil.
Posters: Other
Continuous measurements of mandibular cortical width on dental panoramic radiographs for computer-aided diagnosis of osteoporosis
M. S. Kavitha,
Akira Asano,
Akira Taguchi
The aim of this study is to develop a computer-aided osteoporosis diagnosis system that automatically determines the
inferior cortical width of the mandible continuously on dental panoramic radiographs to realize statistically more robust
measurements than the conventional one-point measurements. The cortical width was continuously measured on dental
panoramic radiographs by enhancing the original image, determining cortical boundaries, and finally evaluating the
distance between boundaries continuously throughout the region of interest. The diagnostic performance of the
average width calculated from the continuous measurement was evaluated against BMD at the lumbar spine and femoral neck
in 100 postmenopausal women with no history of osteoporosis, of whom 50 were assigned to the development of the tool
and 50 to its validation. In the development subjects, sensitivity and
specificity were 90.0% and 75.0% in women with low spinal BMD and 81.8%
and 69.2% in those with low femoral BMD, respectively. The corresponding values in the validation subjects were
93.3% and 82.9% at the lumbar spine and 92.3% and 75.7% at the femoral neck, respectively. We also assessed the
classification of women with osteoporosis using a support
vector machine employing the average and variance of the continuous measurements, which gave excellent discrimination
ability: sensitivity and specificity of 90.9% and 83.8%, respectively, with lumbar spine BMD, and 90.0% and 69.1%,
respectively, with femoral neck BMD. Performance comparison and the simplicity of this method indicate that our computer-aided
system is readily applicable to clinical practice.
Automated classification and visualization of healthy and pathological dental tissues based on near-infrared hyper-spectral imaging
Despite major improvements in dental healthcare and technology, dental caries remains one of the most prevalent
chronic diseases of modern society. The initial stages of dental caries are characterized by demineralization of enamel
crystals, commonly known as white spots which are difficult to diagnose. If detected early enough, such
demineralization can be arrested and reversed by non-surgical means through well-established dental treatments (fluoride
therapy, anti-bacterial therapy, low intensity laser irradiation). Near-infrared (NIR) hyper-spectral imaging is a new
promising technique for early detection of demineralization based on distinct spectral features of healthy and
pathological dental tissues. In this study, we apply NIR hyper-spectral imaging to classify and visualize healthy and
pathological dental tissues including enamel, dentin, calculus, dentin caries, enamel caries and demineralized areas. For
this purpose, a standardized teeth database was constructed consisting of 12 extracted human teeth with different degrees
of natural dental lesions imaged by NIR hyper-spectral system, X-ray and digital color camera. The color and X-ray
images of teeth were presented to a clinical expert for localization and classification of the dental tissues, thereby
obtaining the gold standard. Principal component analysis was used for multivariate local modeling of healthy and
pathological dental tissues. Finally, the dental tissues were classified by employing multiple discriminant analysis. High
agreement was observed between the resulting classification and the gold standard with the classification sensitivity and
specificity exceeding 85% and 97%, respectively. This study demonstrates that NIR hyper-spectral imaging has
considerable diagnostic potential for imaging hard dental tissues.
A two-view ultrasound CAD system for spina bifida detection using Zernike features
In this work, we address a very specific CAD (Computer Aided Detection/Diagnosis) problem and try to detect one of
the relatively common birth defects - spina bifida, in the prenatal period. To do this, fetal ultrasound images are used as
the input imaging modality, currently the most convenient option. Our approach is to decide using two particular types of
views of the fetal neural tube. Transcerebellar head (i.e. brain) and transverse (axial) spine images are processed to
extract features which are then used to classify healthy (normal), suspicious (probably defective) and non-decidable
cases. Decisions raised by the two independent classifiers may be treated individually, or, if desired and data for both
modalities are available, the decisions can be combined to make matters more secure. Even more security can be
attained by using more than two modalities and basing the final decision on all those potential classifiers.
Our current system relies on feature extraction from images for particular patients. The first step is image
preprocessing and segmentation to discard useless image pixels and represent the input in a more compact domain,
which is hopefully more representative for good classification performance. Next, a particular type of feature extraction,
which uses Zernike moments computed on either B/W or gray-scale image segments, is performed. The aim here is to
obtain values for indicative markers that signal the presence of spina bifida. Markers differ depending on the image
modality being used. Either shape or texture information captured by the moments may provide useful features. Finally,
an SVM is used to train classifiers to be used as decision makers. Our experimental results show that a promising CAD
system can be realized for this specific purpose. On the other hand, the performance of such a system highly
depends on the quality of image preprocessing, segmentation, feature extraction and the comprehensiveness of the image data.
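Zernike moments, the feature type named above, project an image onto an orthogonal basis over the unit disk. A minimal sketch (the discretization and normalization are one common choice, not necessarily the authors'):

```python
import math

def zernike_radial(n, m, rho):
    """Radial polynomial R_n^m(rho) of the Zernike basis; zero unless
    n - |m| is even and non-negative."""
    m = abs(m)
    if n < m or (n - m) % 2:
        return 0.0
    total = 0.0
    for k in range((n - m) // 2 + 1):
        coeff = ((-1) ** k * math.factorial(n - k)
                 / (math.factorial(k)
                    * math.factorial((n + m) // 2 - k)
                    * math.factorial((n - m) // 2 - k)))
        total += coeff * rho ** (n - 2 * k)
    return total

def zernike_moment(img, n, m):
    """Zernike moment A_nm of a square grayscale image mapped onto the
    unit disk (pixels outside the disk are ignored)."""
    size = len(img)
    acc = 0j
    for y in range(size):
        for x in range(size):
            # map pixel centers into [-1, 1] x [-1, 1]
            xc = (2 * x + 1) / size - 1
            yc = (2 * y + 1) / size - 1
            rho = math.hypot(xc, yc)
            if rho > 1.0:
                continue
            theta = math.atan2(yc, xc)
            # conjugate basis function V_nm* = R_n^m(rho) e^{-i m theta}
            basis = (zernike_radial(n, m, rho)
                     * complex(math.cos(m * theta), -math.sin(m * theta)))
            acc += img[y][x] * basis
    return (n + 1) / math.pi * acc * (2.0 / size) ** 2
```

The magnitudes |A_nm| are invariant to rotation, which is what makes them attractive markers regardless of fetal orientation in the scan plane.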
Automatic measurement of early gestational sac diameters from one scan session
Gestational sac (GS) diameters are commonly measured by routine ultrasound in early pregnancy. However, manually
searching for the standardized plane of GS (SPGS) and measuring the diameters are time-consuming. In this paper, we
develop a three-stage automatic solution for this procedure. In order to precisely and efficiently locate the position of the GS
in each frame, a coarse-to-fine GS detection scheme based on the AdaBoost algorithm is explored. Then, an efficient method
based on local context information is introduced to reduce the false positives (FP) generated by the above detection
process. Finally, a database (DB) guided spectral segmentation is proposed to separate the GS region from the background
for subsequent diameter measurement. Experiments carried out on 31 videos show that with the proposed methods,
only one SPGS search error occurred, and the average measurement error is 0.059 for the length diameters and
0.083 for the depth diameters.