- Front Matter: Volume 7260
- Cardiac and Skeleton
- Lung Imaging I
- CAD Methodology
- Breast Imaging
- Mammogram Analysis
- Lung Imaging II
- Brain and Microscopy
- Colon and Breast
- Novel Applications
- Retina and ROC Challenge
- Lung Nodules and ANODE Challenge
- Poster Session: Brain
- Poster Session: Breast Imaging
- Poster Session: Cardiovascular
- Poster Session: Colon
- Poster Session: Lung
- Poster Session: Microscopy
- Poster Session: Prostate
- Poster Session: Retina
- Poster Session: Skeleton
- Poster Session: Other
Front Matter: Volume 7260
This PDF file contains the front matter associated with SPIE
Proceedings Volume 7260, including the Title Page, Copyright
information, Table of Contents, and the Conference Committee listing.
Cardiac and Skeleton
Multi-scale feature extraction for learning-based classification of coronary artery stenosis
Assessment of computed tomography coronary angiograms for diagnostic purposes is a mostly manual, time-consuming
task demanding a high degree of clinical experience. In order to support diagnosis, a method for
reliable automatic detection of stenotic lesions in computed tomography angiograms is presented. Lesions
are detected by boosting-based classification: a strong classifier is trained with the AdaBoost
algorithm on annotated data. The resulting strong classification function is then used to
detect different types of coronary lesions in previously unseen data. As pattern recognition algorithms require
a description of the objects to be classified, a novel approach for feature extraction in computed tomography
angiograms is introduced. By generation of cylinder segments that approximate the vessel shape at multiple
scales, feature values can be extracted that adequately describe the properties of stenotic lesions. As a result of
the multi-scale approach, the algorithm is capable of dealing with the variability of stenotic lesion configuration.
Evaluation of the algorithm was performed on a large database containing unseen segmented centerlines from
cardiac computed tomography images. Results showed that the method was able to detect stenotic cardiovascular
diseases with high sensitivity and specificity. Moreover, lesion-based evaluation revealed that the majority of
stenoses can be reliably identified in terms of position, type, and extent.
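The training-and-detection pipeline described in this abstract can be sketched with off-the-shelf boosting on synthetic data; the feature vectors below merely stand in for the multi-scale cylinder descriptors, and scikit-learn's AdaBoostClassifier replaces the authors' own implementation:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for multi-scale cylinder features of vessel
# segments: lesion segments (label 1) drawn from a shifted distribution.
X_train = np.vstack([rng.normal(0.0, 1.0, (200, 4)),
                     rng.normal(1.5, 1.0, (200, 4))])
y_train = np.array([0] * 200 + [1] * 200)

# Strong classifier trained with AdaBoost (the default weak learners
# are depth-1 decision trees, i.e. stumps).
clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# Apply the resulting strong classification function to unseen segments.
X_test = np.vstack([rng.normal(0.0, 1.0, (50, 4)),
                    rng.normal(1.5, 1.0, (50, 4))])
y_test = np.array([0] * 50 + [1] * 50)
accuracy = clf.score(X_test, y_test)
```

With well-separated synthetic classes the boosted stump ensemble reaches high test accuracy; on real cylinder features the separation is of course far less clean.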
Automated coronary CT angiography plaque-lumen segmentation
We are investigating the feasibility of a computer-aided detection (CAD) system to assist radiologists in diagnosing
coronary artery disease in ECG gated cardiac multi-detector CT scans having calcified plaque. Coronary artery stenosis
analysis is challenging if calcified plaque or the iodinated blood pool hides viable lumen. The research described herein
provides an improved presentation to the radiologist by removing obscuring calcified plaque and blood pool. The
algorithm derives a Gaussian estimate of the point spread function (PSF) of the scanner responsible for plaque blooming
by fitting measured CTA image profiles. An initial estimate of the extent of calcified plaque is obtained from the image
evidence using a simple threshold. The Gaussian PSF estimate is then convolved with the initial plaque estimate to
obtain an estimate of the extent of the blooming artifact and this plaque blooming image is subtracted from the CT image
to obtain an image largely free of obscuring plaque. In a separate step, the obscuring blood pool is suppressed using
morphological operations and adaptive region growing. After processing by our algorithm, we are able to project the
segmented plaque-free lumen to form synthetic angiograms free from obstruction. We can also analyze the coronary
arteries with vessel tracking and centerline extraction to produce cross sectional images for measuring lumen stenosis.
As an additional aid to radiologists, we also produce plots of calcified plaque and lumen cross-sectional area along
selected blood vessels. The method was validated using digital phantoms and actual patient data, including in one case, a
validation against the results of a catheter angiogram.
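A minimal 1D sketch of the blooming-removal step described above, assuming a known Gaussian PSF width and a synthetic lumen/plaque profile (the paper fits the PSF from measured CTA image profiles; all HU values and the 600 HU threshold here are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic 1D CT profile: a contrast-filled lumen (~300 HU) next to a
# calcified plaque (1000 HU) whose blooming spreads into the lumen.
sigma = 2.0                                    # assumed Gaussian PSF width
lumen = np.full(100, 300.0)
plaque_true = np.zeros(100)
plaque_true[45:50] = 1000.0
image = lumen + gaussian_filter(plaque_true, sigma)   # plaque + blooming

# Initial plaque extent from a simple threshold on the image evidence.
plaque_mask = image > 600.0

# Convolve the plaque estimate with the Gaussian PSF to model blooming,
# then subtract the blooming image from the CT image.
plaque_estimate = np.where(plaque_mask, image - 300.0, 0.0)
blooming = gaussian_filter(plaque_estimate, sigma)
corrected = image - blooming
```

Far from the plaque the corrected profile is untouched, while next to the plaque the blooming contribution is removed, leaving an image largely free of obscuring plaque.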
Structural scene analysis and content-based image retrieval applied to bone age assessment
Radiological bone age assessment is based on global or local image regions of interest (ROI), such as epiphyseal regions
or the area of carpal bones. Usually, these regions are compared to a standardized reference and a score determining the
skeletal maturity is calculated. For computer-assisted diagnosis, automatic ROI extraction is done so far by heuristic
approaches. In this work, we apply a high-level approach of scene analysis for knowledge-based ROI segmentation.
Based on a set of 100 reference images from the IRMA database, a so-called structural prototype (SP) is trained. In this
graph-based structure, the 14 phalanges and 5 metacarpal bones are represented by nodes, with associated location,
shape, as well as texture parameters modeled by Gaussians. Accordingly, the Gaussians describing the relative
positions, relative orientation, and other relative parameters between two nodes are associated with the edges. Thereafter,
segmentation of a hand radiograph is done in several steps: (i) a multi-scale region merging scheme is applied to extract
visually prominent regions; (ii) a graph/sub-graph matching to the SP robustly identifies a subset of the 19 bones; (iii)
the SP is registered to the current image for complete scene-reconstruction (iv) the epiphyseal regions are extracted from
the reconstructed scene. The evaluation is based on 137 images of Caucasian males from the USC hand atlas. Overall,
an error rate of 32% is achieved; for the 6 middle distal and medial/distal epiphyses, 23% of all extractions need
adjustment. On average, 9.58 of the 14 epiphyseal regions were extracted successfully per image. This is promising for
further use in content-based image retrieval (CBIR) and CBIR-based automatic bone age assessment.
3D structural measurements of the proximal femur from 2D DXA images using a statistical atlas
A method to obtain 3D structural measurements of the proximal femur from 2D DXA images and a statistical atlas is
presented. A statistical atlas of a proximal femur was created consisting of both 3D shape and volumetric density
information and then deformably registered to 2D fan-beam DXA images. After the registration process, a series of 3D
structural measurements were taken on QCT-estimates generated by transforming the registered statistical atlas into a
voxel volume. These measurements were compared to the equivalent measurements taken on the actual QCT (ground
truth) associated with the DXA images for each of 20 human cadaveric femora. The methodology and results are
presented to address the potential clinical feasibility of obtaining 3D structural measurements from limited angle DXA
scans and a statistical atlas of the proximal femur in-vivo.
Quantitative local topological texture properties obtained from radiographs of the proximal femur in patients with pertrochanteric and transcervical hip fractures
The incidence of osteoporosis and associated fractures becomes an increasingly relevant issue for the public health
institutions of industrialized nations. Fractures of the hip represent the worst complication of osteoporosis with a
significantly elevated rate of mortality. Prediction of fracture risk is a major focus of osteoporosis research and, over
the years, has been approached from different angles.
Two distinct subtypes of hip fracture, transcervical and pertrochanteric, can be distinguished on the
basis of the anatomical location of the injury. While the epidemiology of hip fractures has been well described,
typically little or no distinction is made between the subtypes. The objective of this study was to determine whether
local topological texture properties based on the Minkowski Functionals (MF) obtained from standard radiographs of
the proximal femur in patients with hip fracture can be used to differentiate between the two types of fracture pattern.
The texture features were extracted from standardized regions of interest (femoral head, neck, and pertrochanteric
region) in clinical radiographs of the hip obtained from 90 post-menopausal women (69.8 ± 7.9 yrs). 30 of the women
had sustained pertrochanteric fractures, 30 had transcervical hip fractures and 30 were age-matched controls.
We determined an optimized topological parameter MF2Dloc using an integrative filtering procedure based on a
sliding-window algorithm. The statistical relationship between the fracture type (pertrochanteric/transcervical) and the
value of MF2Dloc was assessed by receiver-operator characteristic (ROC) analysis.
Depending on the anatomical location of the region of interest for texture analysis, correct classification of transcervical
and pertrochanteric fractures ranged from AUC = 0.79 to 0.98.
In conclusion, quantitative texture properties of trabecular bone extracted from radiographs of the hip can be used to
identify patients with hip fracture and to distinguish between pertrochanteric and transcervical fracture types. The
degree of correct classification varies with choice of anatomical site for texture analysis. The results of our study may
help to understand the mechanism of the two types of hip fracture.
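The ROC analysis used in this study reduces to a rank statistic; a minimal sketch on synthetic texture scores (the Mann-Whitney form of the AUC, not the authors' code, and the score distributions below are invented):

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney statistic: the probability that a
    positive case scores higher than a negative one (ties count 1/2)."""
    sp = np.asarray(scores_pos, dtype=float)[:, None]
    sn = np.asarray(scores_neg, dtype=float)[None, :]
    wins = (sp > sn).sum() + 0.5 * (sp == sn).sum()
    return float(wins / (sp.size * sn.size))

rng = np.random.default_rng(7)
# Hypothetical MF2Dloc-like texture scores for 30 fractures of each
# type, with a clear (synthetic) separation between the two groups.
pertrochanteric = rng.normal(1.5, 1.0, 30)
transcervical = rng.normal(0.0, 1.0, 30)
a = auc(pertrochanteric, transcervical)
```

The pairwise-comparison form avoids constructing the ROC curve explicitly and gives exactly the trapezoidal area under it.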
Lung Imaging I
An adaptive knowledge-driven medical image search engine for interactive diffuse parenchymal lung disease quantification
Characterization and quantification of the severity of diffuse parenchymal lung diseases (DPLDs) using Computed
Tomography (CT) is an important issue in clinical research. Recently, several classification-based computer-aided
diagnosis (CAD) systems [1-3] for DPLD have been proposed. For some of those systems, a degradation of performance
[2] was reported on unseen data because of considerable inter-patient variances of parenchymal tissue patterns.
We believe that a CAD system of real clinical value should be robust to inter-patient variances and be able to classify
unseen cases online more effectively. In this work, we have developed a novel adaptive knowledge-driven CT image
search engine that combines offline learning aspects of classification-based CAD systems with online learning aspects of
content-based image retrieval (CBIR) systems. Our system can seamlessly and adaptively fuse offline accumulated
knowledge with online feedback, leading to improved online performance in detecting DPLD in terms of both accuracy
and speed. Our contribution lies in: (1) newly developed 3D texture-based and morphology-based features; (2) a
multi-class offline feature selection method; and, (3) a novel image search engine framework for detecting DPLD. Very
promising results have been obtained on a small test set.
Quantitative assessment of emphysema from whole lung CT scans: comparison with visual grading
Emphysema is a disease of the lungs that destroys the alveolar air sacs and induces long-term respiratory dysfunction.
CT scans allow for imaging of the anatomical basis of emphysema and for visual assessment by radiologists of the extent
present in the lungs. Several measures have been introduced for the quantification of the extent of disease directly from
CT data in order to add to the qualitative assessments made by radiologists. In this paper we compare emphysema index,
mean lung density, histogram percentiles, and the fractal dimension against visual grade to evaluate how well
radiologist visual scoring of emphysema from low-dose CT scans can be predicted from quantitative scores, and to determine
which measures can be useful as surrogates for visual assessment. All measures were computed over nine divisions of
the lung field (whole lung, individual lungs, and upper/middle/lower thirds of each lung) for each of 148 low-dose,
whole lung scans. In addition, a visual grade of each section was also given by an expert radiologist. One-way ANOVA
and multinomial logistic regression were used to determine the ability of the measures to predict visual grade from
quantitative score. We found that all measures were able to distinguish between normal and severe grades (p<0.01), and
between mild/moderate and all other grades (p<0.05). However, no measure was able to distinguish between mild and
moderate cases. Approximately 65% prediction accuracy was achieved when using quantitative score to predict visual
grade, rising to 73% when mild and moderate cases were considered as a single class.
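As a sketch of the quantitative measures compared above: the emphysema index is commonly computed as the fraction of lung voxels below a density cutoff. The -950 HU threshold below is one common choice, not necessarily the one used in this study, and the HU samples are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic lung-field HU values: mostly normal parenchyma around -850 HU
# with a pocket of emphysematous destruction near -1000 HU.
normal = rng.normal(-850, 30, size=9000)
emphysema = rng.normal(-990, 15, size=1000)
lung_hu = np.concatenate([normal, emphysema])

threshold = -950.0  # a commonly used cutoff; the exact value varies by protocol

# Emphysema index: fraction of lung voxels below the density threshold.
emphysema_index = float(np.mean(lung_hu < threshold))

# Mean lung density and a histogram percentile, two of the other measures.
mean_lung_density = float(lung_hu.mean())
perc15 = float(np.percentile(lung_hu, 15))
```

In a real pipeline the same statistics would be evaluated separately for each of the nine lung-field divisions.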
A template-based approach for the analysis of lung nodules in a volumetric CT phantom study
Volumetric CT has the potential to improve the quantitative analysis of lung nodule size
change compared to currently used one-dimensional measurement practices. Towards that
goal, we have been conducting studies using an anthropomorphic phantom to quantify
sources of volume measurement error. One source of error is the measurement technique or
software tool used to estimate lesion volume. In this manuscript, we present a template-based
approach which utilizes the properties of the acquisition and reconstruction system to
quantify nodule volume. This approach may reduce the error associated with the volume
estimation technique, thereby improving our ability to estimate the error directly associated
with CT parameters and nodule characteristics. Our estimation approach consists of: (a) the
simulation of the object-to-image transformation of a helical CT system, (b) the creation of a
bank of simulated 3D nodule templates of varying sizes, and (c) the 3D matching of synthetic
nodules (attached to lung vasculature and scanned with a 16-slice MDCT system) to the bank of simulated templates to estimate nodule volume. Results based on 10 repeat
scans for different protocols and a root mean square error (RMSE) similarity metric showed a
relative bias of 88%, 14%, and 4% for the measurement of 5 mm, 8 mm, and 10 mm low-density
nodules (-630 HU), compared to -3%, -6%, and 8% for nodules of +100 HU density.
However, the relative bias for the small, low-density nodules (5 mm, -630 HU) was
significantly reduced to 7% when a penalized RMSE metric was used to enforce a symmetry
constraint that reduced the impact of attached vessels. The results are promising for the use
of this measurement approach as a low-bias estimator of nodule volume which will allow the
systematic quantification and ranking of measurement error in volumetric CT analysis of lung
nodules.
Bronchial segment matching in low-dose lung CT scan pairs
Documenting any change in airway dimensions over time may be relevant for monitoring the progression of
pulmonary diseases. In order to correctly measure the change in segmental dimensions of airways, it is necessary
to locate the identical airway segments across two scans. In this paper, we present an automated method to match
individual bronchial segments from a pair of low-dose CT scans. Our method uses the intensity information in
addition to the graph structure as evidence for matching the individual segments. A 3D image correlation matching
technique is employed to match the regions of interest around the branch points in the two scans and thereby locate
the matching bronchial segments. The matching process was designed to address the differences in airway tree
structures from two scans due to the variation in tree segmentations. The algorithm was evaluated using 114
pairs of low-dose CT scans (120 kV, 40 mAs). The total number of segments matched was 3591, of which 99.7%
were correctly matched. When the matching was limited to the bronchial segments of the fourth generation or
less, the algorithm correctly identified all of 1553 matched segments.
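The correlation matching of ROIs around branch points can be illustrated with normalized cross-correlation on synthetic volumes; this is a sketch of the matching criterion, not the authors' implementation, and the ROI size and noise level are arbitrary:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized ROIs."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)

rng = np.random.default_rng(2)

# Hypothetical 9x9x9 intensity ROIs around branch points in two scans.
roi_scan1 = rng.normal(size=(9, 9, 9))
same_branch = roi_scan1 + rng.normal(scale=0.3, size=(9, 9, 9))  # same point
other_branch = rng.normal(size=(9, 9, 9))                        # unrelated

# The candidate branch point with the highest correlation is the match.
scores = {"same": ncc(roi_scan1, same_branch),
          "other": ncc(roi_scan1, other_branch)}
best = max(scores, key=scores.get)
```

In the actual method this intensity evidence is combined with the airway-tree graph structure to resolve ambiguous matches.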
Evaluation of computerized detection of pulmonary embolism in independent data sets of computed tomographic pulmonary angiographic (CTPA) scans
Computed tomographic pulmonary angiography (CTPA) has been reported to be an effective means for clinical
diagnosis of pulmonary embolism (PE). We are developing a computer-aided diagnosis (CAD) system for assisting
radiologists in detection of pulmonary embolism in CTPA images. The pulmonary vessel tree is extracted based on the
analysis of eigenvalues of Hessian matrices at multiple scales followed by 3D hierarchical EM segmentation. A multi-prescreening
method is designed to identify suspicious PEs along the extracted vessels. A linear discriminant analysis
(LDA) classifier with feature selection is then used to reduce false positives (FPs). Two data sets of 59 and 69 CTPA PE
cases were randomly selected from patient files at the University of Michigan (UM) and the PIOPED II study,
respectively, and used as independent training and test sets. The PEs that were identified by three experienced thoracic
radiologists were used as the gold standard. The detection performance of the CAD system was assessed by free
response receiver operating characteristic analysis. The results indicated that our PE detection system can achieve a
sensitivity of 80% at 18.9 FPs/case on the PIOPED cases when the LDA classifier was trained with the UM cases. The
test sensitivity on the UM cases was 80% at 22.6 FPs/case when the LDA classifier was trained with the PIOPED
cases.
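The Hessian-eigenvalue analysis underlying the vessel-tree extraction can be illustrated in 2D (the system works on 3D data at multiple scales; the image, scale, and vessel geometry below are invented). For a bright tubular structure, the eigenvalue across the vessel is strongly negative while the one along it is near zero:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic 2D image containing one bright horizontal "vessel".
img = np.zeros((32, 32))
img[15:18, :] = 1.0
smoothed = gaussian_filter(img, sigma=2.0)   # Gaussian smoothing at one scale

# Hessian entries via repeated finite-difference gradients.
dyy = np.gradient(np.gradient(smoothed, axis=0), axis=0)
dxx = np.gradient(np.gradient(smoothed, axis=1), axis=1)
dxy = np.gradient(np.gradient(smoothed, axis=0), axis=1)

y, x = 16, 16                                # a point on the vessel centreline
H = np.array([[dxx[y, x], dxy[y, x]],
              [dxy[y, x], dyy[y, x]]])
lam = np.sort(np.linalg.eigvalsh(H))         # ascending: lam[0] <= lam[1]
```

Vesselness filters turn this eigenvalue pattern (one large negative, the rest near zero) into a per-voxel tubular-structure score, evaluated over a range of scales.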
Bag-of-features approach for improvement of lung tissue classification in diffuse lung disease
Many automated techniques have been proposed to classify diffuse lung disease patterns. Most of the techniques utilize
texture analysis approaches with second- and higher-order statistics, and show successful classification results among
various lung tissue patterns. However, the approaches do not work well for patterns with an inhomogeneous texture
distribution within a region of interest (ROI), such as reticular and honeycombing patterns, because the statistics can
only capture features averaged over the ROI. In this work, we have introduced the bag-of-features approach to overcome
this difficulty. In the approach, texture images are represented as histograms or distributions of a few basic primitives,
which are obtained by clustering local image features. The intensity descriptor and the Scale-Invariant Feature
Transform (SIFT) descriptor are utilized to extract the local features, which have significant discriminatory power
due to their specificity to a particular image class. In contrast, the drawback of the local features is a lack of invariance
under translation and rotation. We improved the invariance by sampling many local regions so that the distribution of the
local features is unchanged. We evaluated the performance of our system in the classification task with 5 image classes
(ground glass, reticular, honeycombing, emphysema, and normal) using 1109 ROIs from 211 patients. Our system
achieved high classification accuracy of 92.8%, which is superior to that of the conventional system with the gray level
co-occurrence matrix (GLCM) feature especially for inhomogeneous texture patterns.
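The bag-of-features representation described above (cluster local descriptors into a small vocabulary of primitives, then histogram them) can be sketched as follows; random vectors stand in for the intensity/SIFT descriptors, and scikit-learn's k-means builds the codebook:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Hypothetical local descriptors (standing in for the intensity and
# SIFT descriptors) sampled from small regions of the training ROIs.
train_descriptors = rng.normal(size=(500, 8))

# Visual vocabulary: the "basic primitives" are k-means cluster centres.
k = 16
codebook = KMeans(n_clusters=k, n_init=10, random_state=0).fit(train_descriptors)

def bag_of_features(descriptors):
    """Represent an ROI as a normalized histogram over the k primitives."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

# An unseen ROI becomes a k-bin signature that a classifier can compare.
roi_descriptors = rng.normal(size=(60, 8))
signature = bag_of_features(roi_descriptors)
```

Because the signature is a distribution over primitives rather than a single averaged statistic, it can still separate textures that are inhomogeneous within the ROI.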
Automated extraction of pleural effusion in three-dimensional thoracic CT images
It is important for the diagnosis of pulmonary diseases to quantitatively measure the volume of accumulating pleural
effusion in three-dimensional thoracic CT images. However, correct automated extraction of pleural effusion is
difficult. A conventional extraction algorithm using a gray-level-based threshold cannot separate pleural effusion from
the thoracic wall or mediastinum, because the density of pleural effusion in CT images is similar to that of the
thoracic wall and mediastinum. We have therefore developed an automated extraction method for pleural effusion that
segments the lung area together with the effusion. Our method used a template of a normal lung for the segmentation of
lungs with pleural effusions. The registration process consisted of two steps. The first step was a global matching
between normal and abnormal lungs based on organs such as bronchi, bones (ribs, sternum, and vertebrae), and the upper
surfaces of the livers, which were extracted using a region-growing algorithm. The second step was a local matching
between the normal and abnormal lungs, which were deformed by the parameters obtained from the global matching.
Finally, we segmented a lung with pleural effusion using the template deformed by the two sets of parameters
obtained from the global and local matching. We compared our method with a conventional extraction method
using a gray-level-based threshold and with two published methods. The extraction rates of pleural effusion obtained
with our method were much higher than those obtained with the other methods. The proposed method is promising for
the diagnosis of pulmonary diseases by providing quantitative volumes of accumulating pleural effusion.
CAD Methodology
An analysis of two ground truth estimation methods
An estimate of the so-called Ground Truth (GT), i.e. the actual lesion region, can minimize readers' subjectivity
if multiple readers' markings are combined. Two methods perform this estimation by considering the
spatial location of voxels: Thresholded Probability-Map (TPM) and Simultaneous Truth and Performance Level
Estimation (STAPLE). An analysis of these two methods has already been performed. The purpose of this study,
however, is gaining a new insight into the method outcomes by comparing the estimated regions. A subset of the
publicly available Lung Image Database Consortium archive was used, selecting pulmonary nodules documented
by all four radiologists. The TPM estimator was computed by assigning to each voxel a value equal to the fraction
of readers that included that voxel in their markings and then applying a threshold of 0.5. Our STAPLE
implementation is loosely based on a version from ITK, to which we added the graph cut post-processing. The
pair-wise similarities between the estimated ground truths were analyzed by computing the respective Jaccard
coefficients. Then, the sign test of the differences between the volumes of TPM and STAPLE was performed.
A total of 35 nodules documented on 26 scans by all four radiologists were available. The spatial agreement
had a one-sided 90% Confidence Interval of [0.92, 1.00]. The sign test of the differences had a p-value less than
0.001. We found that (a) the differences in their volume estimates are statistically significant, (b) the spatial
disagreement between the two estimators is almost completely due to the exclusion of voxels marked by exactly
two readers, and (c) STAPLE tends to give more weight, in its GT estimate, to readers marking broader regions.
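A minimal sketch of the TPM estimator and the Jaccard comparison, on synthetic 2D reader markings (the study used 3D voxel markings and a STAPLE implementation with graph-cut post-processing, neither reproduced here):

```python
import numpy as np

# Synthetic binary markings of one nodule by four readers (2D for brevity).
readers = np.zeros((4, 12, 12), dtype=bool)
readers[0, 3:9, 3:9] = True    # reader 1
readers[1, 3:9, 3:10] = True   # reader 2: one column broader
readers[2, 2:9, 3:10] = True   # reader 3: broader still
readers[3, 3:9, 4:9] = True    # reader 4: slightly smaller

# TPM estimator: per-voxel fraction of readers marking the voxel,
# thresholded at 0.5 (so voxels marked by exactly two of four readers,
# where p = 0.5, are excluded).
p_map = readers.mean(axis=0)
tpm = p_map > 0.5

def jaccard(a, b):
    """Jaccard coefficient |A intersect B| / |A union B| of two regions."""
    return float(np.logical_and(a, b).sum() / np.logical_or(a, b).sum())

agreement = jaccard(tpm, readers[0])
```

The Jaccard coefficient computed pairwise between estimated ground truths is exactly the spatial-agreement measure analyzed in the study.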
A comparative study of database reduction methods for case-based computer-aided detection systems: preliminary results
In case-based computer-aided decision systems (CB-CAD) a query case is compared to known examples
stored in the system's case base (also called a reference library). These systems offer competitive
classification performance and are easy to expand. However, they also require efficient management of
the case base. As CB-CAD systems are becoming more popular, the problem of case base optimization
has recently attracted interest among CAD researchers. In this paper we present preliminary results of
a study comparing several case-base reduction techniques. We implemented six techniques previously
proposed in the machine learning literature and applied them to the classification problem of distinguishing
masses and normal tissue in mammographic regions of interest. The results show that the random
mutation hill climbing technique offers a drastic reduction of the number of case base examples while
providing a significant improvement in classification performance. Random selection allowed reduction
of the case base to 30% of its original size without a notable decrease in performance. The remaining techniques (i.e.,
condensed nearest neighbor, reduced nearest neighbor, edited nearest neighbor, and All k-NN) resulted
in moderate reduction (to 50-70% of the original size) at the cost of decrease in CB-CAD performance.
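Random-selection reduction of a case base can be sketched as follows, with a simple k-NN vote standing in for the CB-CAD classifier (synthetic features, not the authors' system or data):

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic case base: feature vectors for normal-tissue (0) and mass (1) ROIs.
X = np.vstack([rng.normal(0.0, 1.0, (200, 5)),
               rng.normal(2.0, 1.0, (200, 5))])
y = np.array([0] * 200 + [1] * 200)

def knn_predict(query, cases, labels, k=5):
    """Case-based decision: majority vote of the k nearest stored examples."""
    d = np.linalg.norm(cases - query, axis=1)
    nearest = labels[np.argsort(d)[:k]]
    return int(nearest.sum() > k / 2)

# Random-selection reduction: keep 30% of the case base.
keep = rng.choice(len(X), size=int(0.3 * len(X)), replace=False)
X_red, y_red = X[keep], y[keep]

query = np.full(5, 2.0)                 # an unseen "mass-like" query
full_vote = knn_predict(query, X, y)
reduced_vote = knn_predict(query, X_red, y_red)
```

With clearly separated classes the reduced case base gives the same decision as the full one, which is the behavior the random-selection result in the study reflects; techniques like random mutation hill climbing instead search for the subset that maximizes classification performance.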
The exploration machine: a novel method for analyzing high-dimensional data in computer-aided diagnosis
Purpose: To develop, test, and evaluate a novel unsupervised machine learning method for computer-aided diagnosis and
analysis of multidimensional data, such as biomedical imaging data. Methods: We introduce the Exploration Machine
(XOM) as a method for computing low-dimensional representations of high-dimensional observations. XOM systematically
inverts functional and structural components of topology-preserving mappings. By this trick, it can contribute to
both structure-preserving visualization and data clustering. We applied XOM to the analysis of whole-genome microarray
imaging data, comprising 2467 79-dimensional gene expression profiles of Saccharomyces cerevisiae, and to model-free
analysis of functional brain MRI data by unsupervised clustering. For both applications, we performed quantitative comparisons
to results obtained by established algorithms. Results: Genome data: Absolute (relative) Sammon error values
were 5.91·10⁵ (1.00) for XOM, 6.50·10⁵ (1.10) for Sammon's mapping, 6.56·10⁵ (1.11) for PCA, and 7.24·10⁵ (1.22) for
Self-Organizing Map (SOM). Computation times were 72, 216, 2, and 881 seconds for XOM, Sammon, PCA, and SOM,
respectively. - Functional MRI data: Areas under ROC curves for detection of task-related brain activation were 0.984 ±
0.03 for XOM, 0.983 ± 0.02 for Minimal-Free-Energy VQ, and 0.979 ± 0.02 for SOM. Conclusion: For both multidimensional
imaging applications, i.e. gene expression visualization and functional MRI clustering, XOM yields competitive
results when compared to established algorithms. Its surprising versatility to simultaneously contribute to dimensionality
reduction and data clustering qualifies XOM to serve as a useful novel method for the analysis of multidimensional data,
such as biomedical image data in computer-aided diagnosis.
Agreement of CAD features with expert observer ratings for characterization of pulmonary nodules in CT using the LIDC-IDRI database
We have analyzed 3000 nodule delineations and malignancy ratings of pulmonary nodules made by expert observers in the IDRI CT lung image database. The agreement between nodule volume from automatic segmentation and expert delineations is almost as high as the inter-observer agreement. For the experts' malignancy rating, the inter-observer agreement is quite modest. Linear and support vector regression models have been tested to emulate the expert
malignancy rating from a small number of automatically computed numerical image features. Machine-observer and inter-observer agreement have been evaluated using linear correlation and the weighted kappa coefficient. The results suggest that the numerically computed malignancy, if considered as an additional observer, cannot be distinguished from the expert ratings.
Scale focusing of statistical shape models for breast region segmentation and pectoral muscle suppression
Automated image analysis is used for risk assessment and computer-aided diagnosis of breast cancer. A prerequisite for this automation is an efficient and robust segmentation of the region of interest (ROI). Extraction of the breast without the pectoral muscle, the ROI in our study, from mammograms is a challenging task due to the tapering nature of the breast at the skin-air interface and the overlap between the high-density regions and the pectoral muscle in the medio-lateral oblique (MLO) views. To segment the breast skin-air interface, Otsu's multilevel-threshold-based algorithm constrained with a shape prior is used. Starting at a coarse scale, the pectoral muscle is detected by fitting a novel adaptive statistical shape model at the top left corner of the MLO views, and its position is localized by scale focusing. A novel energy term is proposed to fit the shape model. The proposed algorithm is applied to a set of 50 mammograms as a post hoc analysis of an earlier trial. The results are evaluated by comparison with manual annotations. For pectoral muscle detection the evaluation metrics are: (a) mean Hausdorff distance (MHD): 6.7 pixels; (b) area overlap: 87%; (c) false positive rate (FP): 0.7%; (d) false negative rate (FN): 12.1%; and (e) mean relative breast area error (MRBAE): 101.19 ± 5.26%.
Breast Imaging
Using three-class BANN classifier in the automated analysis of breast cancer lesions in DCE-MRI
The purpose of this study is to investigate three-class Bayesian artificial neural networks (BANN) in dynamic contrast-enhanced
MRI (DCE-MRI) CAD in distinguishing different types of breast lesions including ductal carcinoma in situ
(DCIS), invasive ductal carcinoma (IDC), and benign. The database contains 72 DCIS lesions, 124 IDC lesions, and 131
benign breast lesions (no cysts). Breast MR images were obtained with a clinical DCE-MRI scanning protocol. In 3D,
we automatically segmented each lesion and calculated its characteristic kinetic curve using the fuzzy c-means method.
Morphological and kinetic features were automatically extracted, and stepwise linear discriminant analysis was utilized
for feature selection in four subcategories: DCIS vs. IDC, DCIS vs. benign, IDC vs. benign, and malignant (DCIS +
IDC) vs. benign. Classification was automatically performed with the selected features for each subcategory using
round-robin-by-lesion two-class BANN and three-class BANN. The performances of the classifiers were assessed with
two-class ROC analysis. We failed to show any statistically significant differences between the two-class BANN and
the three-class BANN for any of the four classification tasks, suggesting that the three-class BANN performs similarly to the
two-class BANN. A three-class BANN is expected to be more desirable in the clinical arena for both diagnosis and
patient management.
Assessment of texture analysis on DCE-MRI data for the differentiation of breast tumor lesions
Breast cancer diagnosis based on magnetic resonance images (breast MRI) is increasingly being accepted as an
additional diagnostic tool to mammography and ultrasound, with distinct clinical indications.1 Its capability
to detect and differentiate lesion types with high sensitivity and specificity is countered by the fact that visual
human assessment of breast MRI requires long experience. Moreover, the lack of evaluation standards causes
diagnostic results to vary even among experts. The most important MR acquisition technique is dynamic contrast
enhanced (DCE) MR imaging since different lesion types accumulate contrast material (CM) differently. The
wash-in and wash-out characteristics as well as the morphologic characteristics recorded and assessed from MR
images therefore allow benign lesions to be differentiated from malignant ones. In this work, we propose to calculate
second-order statistical features (Haralick textures) for given lesions based on subtraction and 4D images and
on parameter maps. The lesions are classified with a linear classification scheme into probably malignant or
probably benign. The method and model were developed on 104 histologically graded lesions (69 malignant and
35 benign). The area under the ROC curve obtained is 0.91 and is already comparable to the performance of a
trained radiologist.
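A minimal sketch of second-order (Haralick) texture features from a gray-level co-occurrence matrix, computed here with a hand-rolled GLCM on synthetic 8-level patches (the study computes such features on subtraction/4D images and parameter maps; the patches and offset below are invented):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    a = img[: img.shape[0] - dy, : img.shape[1] - dx].ravel()
    b = img[dy:, dx:].ravel()
    m = np.zeros((levels, levels))
    np.add.at(m, (a, b), 1)        # count co-occurring gray-level pairs
    return m / m.sum()

def contrast(p):
    """Haralick contrast: sum over (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

def energy(p):
    """Haralick energy (angular second moment): sum over p(i, j)^2."""
    return float((p ** 2).sum())

rng = np.random.default_rng(4)
# Two hypothetical lesion patches quantized to 8 gray levels:
# a homogeneous one and a heterogeneous (speckled) one.
homogeneous = np.full((16, 16), 4, dtype=int)
heterogeneous = rng.integers(0, 8, size=(16, 16))

features_hom = (contrast(glcm(homogeneous)), energy(glcm(homogeneous)))
features_het = (contrast(glcm(heterogeneous)), energy(glcm(heterogeneous)))
```

The homogeneous patch yields zero contrast and maximal energy, while the speckled patch does the opposite; feature vectors of this kind feed the linear classification scheme.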
A computer-aided diagnosis system for prediction of the probability of malignancy of breast masses on ultrasound images
A computer-aided diagnosis (CADx) system with the ability to predict the probability of malignancy (PM) of a mass can
potentially assist radiologists in making correct diagnostic decisions. In this study, we designed a CADx system using
logistic regression (LR) as the feature classifier which could estimate the PM of a mass. Our data set included 488
ultrasound (US) images from 250 biopsy-proven breast masses (100 malignant and 150 benign). The data set was
divided into two subsets T1 and T2. Two experienced radiologists, R1 and R2, independently provided Breast Imaging
Reporting and Data System (BI-RADS) assessments and PM ratings for data subsets T2 and T1, respectively. An LR
classifier was designed to estimate the PM of a mass using two-fold cross validation, in which the data subsets T1 and
T2 served once as the training and once as the test set. To evaluate the performance of the system, we compared the PM
estimated by the CADx system with radiologists' PM ratings (12-point scale) and BI-RADS assessments (6-point scale).
The correlation coefficients between the PM ratings estimated by the radiologists and by the CADx system were 0.71
and 0.72 for data subsets T1 and T2, respectively. For the BI-RADS assessments provided by the radiologists and
estimated by the CADx system, the correlation coefficients were 0.60 and 0.67 for data subsets T1 and T2, respectively.
Our results indicate that the CADx system may be able to provide not only a malignancy score, but also a more
quantitative estimate for the PM of a breast mass.
Noise model for microcalcification detection in reconstructed tomosynthesis slices
Show abstract
For the detection of microcalcifications, accurate noise estimation has been shown to be an important step. In
tomosynthesis, noise models have been proposed for projection data. However, it is expected that manufacturers
of tomosynthesis systems will not store the raw projection images, but only the reconstructed volumes. We
therefore investigated whether and how signal-dependent image noise can be modelled in the reconstructed volumes.
For this research we used a dataset of 41 tomosynthesis volumes, of which 12 volumes contained a total of 20
microcalcification clusters. All volumes were acquired with a prototype of Sectra's photon-counting tomosynthesis
system. Preliminary results show that image noise is signal dependent in a reconstructed volume, and that a
model of this noise can be estimated from a volume at hand. Evaluation of the noise model was performed by
using a basic microcalcification cluster detection algorithm that classifies voxels by using a threshold on a local
contrast filter. Image noise was normalized by dividing local contrast in a voxel by the standard deviation of
the estimated image noise in that voxel. FROC analysis shows that performance increases strongly when we use
our model to correct for signal-dependent image noise in reconstructed volumes.
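The normalization step described above (dividing local contrast in a voxel by the estimated noise standard deviation in that voxel) might be sketched as follows; the filter sizes and the linear signal-dependent noise model are assumptions, not the paper's fitted model:

```python
import numpy as np
from scipy import ndimage

def normalized_contrast(volume, bg_size=9, fg_size=3, noise_model=None):
    """Local contrast divided by an estimated, signal-dependent noise std.

    noise_model maps local mean intensity -> noise std; the linear default
    used here is purely illustrative.
    """
    if noise_model is None:
        noise_model = lambda mu: 0.5 + 0.01 * mu  # assumed, not from the paper
    fg = ndimage.uniform_filter(volume, size=fg_size)   # small-scale mean
    bg = ndimage.uniform_filter(volume, size=bg_size)   # local background
    contrast = fg - bg
    return contrast / noise_model(bg)

vol = np.random.default_rng(1).normal(100.0, 5.0, size=(8, 64, 64))
z = normalized_contrast(vol)
candidates = z > 4.0  # threshold on noise-normalized contrast
print(candidates.sum())
```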
Computerized breast parenchymal analysis on DCE-MRI
Show abstract
Breast density has been shown to be associated with the risk of developing breast cancer, and MRI has been
recommended for screening high-risk women; however, it is still unknown how breast parenchymal
enhancement on DCE-MRI is associated with breast density and breast cancer risk. Ninety-two DCE-MRI
exams of asymptomatic women with normal MR findings were included in this study. The 3D breast volume
was automatically segmented using a volume-growing based algorithm. The extracted breast volume was
classified into fibroglandular and fatty regions based on the discriminant analysis method. The parenchymal
kinetic curves within the breast fibroglandular region were extracted and categorized by use of fuzzy c-means
clustering, and various parenchymal kinetic characteristics were extracted from the most enhancing voxels.
Correlation analysis between the computer-extracted percent dense measures and radiologist-noted BIRADS
density ratings yielded a correlation coefficient of 0.76 (p<0.0001). From the kinetic analyses, 70% (64/92) of the
most-enhancing curves showed a persistent curve type and reached peak parenchymal intensity at the last
post-contrast time point, with 89% (82/92) of the most-enhancing curves reaching peak intensity at either the 4th or 5th
post-contrast time point. Women with dense breasts (BIRADS 3 and 4) were found to have more
parenchymal enhancement at their peak time point (Ep), with an average Ep of 116.5%, while women
with fatty breasts (BIRADS 1 and 2) demonstrated an average Ep of 62.0%. In conclusion, breast
parenchymal enhancement may be associated with breast density and may be potentially useful as an additional
characteristic for assessing breast cancer risk.
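A minimal fuzzy c-means implementation, applied to synthetic kinetic curves, illustrates the clustering step; the curve shapes, cluster count, and fuzziness exponent are assumptions:

```python
import numpy as np

def fuzzy_cmeans(X, c=3, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: X is (n_samples, n_features); returns cluster
    centers and the fuzzy membership matrix U of shape (n_samples, c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))       # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Hypothetical parenchymal kinetic curves: intensity at 6 time points each,
# three groups with different enhancement slopes
t = np.linspace(0, 1, 6)
rng = np.random.default_rng(2)
groups = [np.outer(np.full(30, s), t) + 0.05 * rng.random((30, 6))
          for s in (0.3, 1.0, 2.0)]
curves = np.vstack(groups)                   # 90 synthetic curves
centers, U = fuzzy_cmeans(curves, c=3)
labels = U.argmax(axis=1)
print(centers.shape, labels[:5])
```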
Breast cancer classification with mammography and DCE-MRI
Show abstract
Since different imaging modalities provide complementary information
regarding the same lesion, combining information from different modalities
may increase diagnostic accuracy. In this study, we investigated the
use of computerized features of lesions imaged via both full-field
digital mammography (FFDM) and dynamic contrast-enhanced magnetic
resonance imaging (DCE-MRI) in the classification of breast lesions.
Using a manually identified lesion location, i.e. a seed point on
FFDM images or a ROI on DCE-MRI images, the computer automatically
segmented mass lesions and extracted a set of features for each lesion.
Linear stepwise feature selection was first performed on each single
modality, yielding one feature subset per modality. Then, these
selected features served as the input to another feature selection
procedure when extracting useful information from both modalities.
The selected features were merged by linear discriminant analysis
(LDA) into a discriminant score. Receiver operating characteristic
(ROC) analysis was used to evaluate the performance of the selected
feature subset in the task of distinguishing between malignant and
benign lesions. From a FFDM database with 321 lesions (167 malignant
and 154 benign), and a DCE-MRI database including 181 lesions
(97 malignant and 84 benign), we constructed a multi-modality
dataset with 51 lesions (29 malignant and 22 benign). With
leave-one-out-by-lesion evaluation on the multi-modality dataset,
the mammography-only features yielded an area under the ROC curve
(AUC) of 0.62 ± 0.08 and the DCE-MRI-only features yielded an AUC
of 0.94±0.03. The combination of these two modalities, which
included a spiculation feature from mammography and a kinetic feature
from DCE-MRI, yielded an AUC of 0.94. The improvement of
combining multi-modality information was statistically significant
as compared to the use of mammography only (p=0.0001). However,
we failed to show a statistically significant improvement over
DCE-MRI alone with the limited multi-modality dataset (p=0.22).
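Merging one feature per modality into an LDA discriminant score with leave-one-out evaluation might look like the following sketch; the synthetic features stand in for the spiculation and kinetic features, and the numbers are not the paper's data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Hypothetical per-lesion features: one spiculation feature (mammography)
# and one kinetic feature (DCE-MRI); labels 1 = malignant, 0 = benign.
rng = np.random.default_rng(3)
n = 51
y = (np.arange(n) < 29).astype(int)          # 29 malignant, 22 benign
X = np.column_stack([rng.normal(y * 0.8, 1.0), rng.normal(y * 1.5, 1.0)])

# Merge the two features into one discriminant score, leave-one-out-by-lesion
scores = cross_val_predict(LinearDiscriminantAnalysis(), X, y,
                           cv=LeaveOneOut(), method="decision_function")
print(round(roc_auc_score(y, scores), 3))
```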
Mammogram Analysis
Detection of convergence areas in digital breast tomosynthesis using a contrario modeling
Show abstract
We propose a new method to detect architectural distortions and spiculated masses in digital breast tomosynthesis volumes. To achieve this goal, an a contrario approach is used. In this approach, an event, corresponding to a minimal number of structures converging toward the same location, is defined such that its expectation of occurrence within a random image is very low. Occurrences of this event in real images are then detected and considered as possible lesion locations. During the last step, the number of false positives is reduced through classification using attributes computed on histograms of structure orientations.
The approach was tested using the leave-one-out method on a database composed of 38 breasts (10 containing a lesion and 28 containing no lesion). A sensitivity of 0.8 at 1.68 false positives/breast was achieved.
The use of contextual information for computer aided detection of masses in mammograms
Show abstract
In breast cancer screening, radiologists not only look at local properties of suspicious regions in the mammogram
but also take into account more general contextual information. In this study we investigated the use of similar
information for computer aided detection of malignant masses. We developed a new set of features that combine
information from the candidate mass region and the whole image or mammogram. The developed context
features were constructed to give information about suspiciousness of a region relative to other areas in the
mammogram, the location in the image, the location in relation to dense tissue and the overall amount of dense
tissue in the mammogram. We used a step-wise floating feature selection algorithm to select subsets from the
set of available features. Feature selection was performed twice, once using the complete feature set (37
context and 40 local features) and once using only the local features. We found that 30-60% of the features in the
subsets selected from the complete feature set were context features. At most one local feature present in the subset
containing context features was absent from the subset without context features. We validated the performance
of the selected subsets on a separate data set using cross validation and bootstrapping. For each subset size we
compared the performance obtained using the features selected from the complete feature set to the performance
obtained using the features selected from the local feature set. We found that subsets containing context features
performed significantly better than feature sets containing no context features.
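A simplified sketch of step-wise (sequential floating forward) feature selection, with a cross-validated score and a conditional backward step; the classifier, fold count, and synthetic data are assumptions, and a step cap guarantees termination:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def sffs(X, y, k, model=None, max_steps=50):
    """Sequential floating forward selection (simplified sketch).

    Greedily adds the best feature, then conditionally removes an earlier
    feature if its exclusion improves the cross-validated score.  May return
    fewer than k features if the step budget is exhausted.
    """
    model = model or LinearDiscriminantAnalysis()
    score = lambda idx: cross_val_score(model, X[:, idx], y, cv=3).mean()
    selected, steps = [], 0
    while len(selected) < k and steps < max_steps:
        steps += 1
        rest = [j for j in range(X.shape[1]) if j not in selected]
        best = max(rest, key=lambda j: score(selected + [j]))
        selected.append(best)
        # floating (backward) step: never removes the feature just added
        for j in selected[:-1]:
            if len(selected) > 2 and \
               score([s for s in selected if s != j]) > score(selected):
                selected.remove(j)
                break
    return selected

rng = np.random.default_rng(4)
y = rng.integers(0, 2, 60)
X = rng.normal(size=(60, 8))
X[:, 0] += y; X[:, 3] += 2 * y     # two informative features (synthetic)
sel = sffs(X, y, k=3)
print(sel)
```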
Presentation of similar images for diagnosis of breast masses on mammograms: analysis of the effect on residents
Show abstract
We have been developing a computerized scheme for selecting visually similar images that would be useful to
radiologists in the diagnosis of masses on mammograms. Based on the results of the observer performance study, the
presentation of similar images was useful, especially for less experienced observers. The test cases included 50 benign
and 50 malignant masses. Ten observers, including five breast radiologists and five residents, were asked to provide the
confidence level of the lesions being malignant before and after the presentation of similar images. By use of multi-reader,
multi-case receiver operating characteristic analysis, the average areas under the curves for the five residents
were 0.880 and 0.896 without and with similar images, respectively (p=0.040). There were four malignant cases in
which the initial ratings were relatively low, but the similar images alerted the residents to increase their confidence
levels of malignancy close to those of the breast radiologists. The presentation of similar images may cause some
observers to falsely increase their suspicion for some benign cases; however, if similar images can alert radiologists to
recognize the signs of malignancy and also help them correctly decrease their suspicion for some benign cases, they
can be useful in diagnosis on mammograms.
Comparison of computerized mass detection in digital breast tomosynthesis (DBT) mammograms and conventional mammograms
Show abstract
We are developing a CAD system for mass detection on digital breast tomosynthesis (DBT) mammograms. In this study,
we compared the detection accuracy on DBT and conventional screen-film mammograms (SFMs). DBT mammograms
were acquired with a GE prototype system at the University of Michigan. 47 cases containing the CC- and MLO-view
DBT mammograms of the breast with a biopsy-proven mass and the corresponding two-view SFMs of the same breast
were collected. Subjective judgment showed that the masses were much more conspicuous on DBT slices than on
SFMs. The CAD system for DBT includes two parallel processes: one performs mass detection in the reconstructed
DBT volume, and the other in the projection view (PV) images. The mass likelihood scores estimated for each mass
candidate in the two processes are merged to differentiate masses from false positives (FPs). For detection on SFMs, we
previously developed a dual system approach by fusing two single CAD systems optimized for detection of average and
subtle masses, respectively. A trained neural network is used to merge the mass likelihood scores of the two single
systems to reduce FPs. At the case-based sensitivities of 80% and 85%, mass detection in the DBT volume resulted in
an average of 0.72 and 1.06 FPs/view, and detection in the SFMs yielded 0.94 and 1.67 FPs/view, respectively. The
difference fell short of statistical significance (p=0.07) by JAFROC analysis. Study is underway to collect a larger data
set and to further improve the DBT CAD system.
Using volumetric density estimation in computer aided mass detection in mammography
Show abstract
With the introduction of Full Field Digital Mammography (FFDM), accurate automatic volumetric breast density
(VBD) estimation has become possible. As VBD enables the design of features that incorporate 3D properties,
these methods offer opportunities for computer-aided detection schemes. In this study we use VBD to develop
features that represent how well a segmented region resembles the projection of a spherical object. The idea
behind this is that, due to compression of the breast, glandular tissue is likely to be compressed to a disc-like
shape, whereas cancerous tissue, being more difficult to compress, will retain its uncompressed shape. For each
pixel in a segmented region we calculate the predicted dense tissue thickness assuming that the lesion has a
spherical shape. The predicted thickness is then compared to the observed thickness by calculating the slope of
a linear function relating the two. In addition we calculate the variance of the error of the fit. To evaluate the
contribution of the developed VBD features to our CAD system we use an FFDM dataset consisting of 266 cases,
of which 103 were biopsy proven malignant masses and 163 normals. It was found that compared to the false
positives, a large fraction of the true positives has a slope close to 1.0 indicating that the true positives fit the
modeled spheres best. When the VBD based features were added to our CAD system, aimed at the detection
and classification of malignant masses, a small but significant increase in performance was achieved.
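The slope-and-variance feature described above might be sketched as follows, assuming a 2D dense-tissue thickness map and a known candidate center and radius (both hypothetical):

```python
import numpy as np

def sphere_feature(observed_thickness, center, radius):
    """Slope and residual variance of a linear fit between the thickness
    profile predicted by a spherical lesion model and the observed
    dense-tissue thickness map (2D, same grid). Illustrative only."""
    yy, xx = np.indices(observed_thickness.shape)
    r2 = (yy - center[0]) ** 2 + (xx - center[1]) ** 2
    predicted = 2.0 * np.sqrt(np.clip(radius**2 - r2, 0, None))  # chord length
    mask = predicted > 0
    slope, intercept = np.polyfit(predicted[mask], observed_thickness[mask], 1)
    residuals = observed_thickness[mask] - (slope * predicted[mask] + intercept)
    return slope, residuals.var()

# Synthetic region whose thickness really is a projected sphere -> slope ~ 1
yy, xx = np.indices((40, 40))
truth = 2.0 * np.sqrt(np.clip(15**2 - ((yy - 20)**2 + (xx - 20)**2), 0, None))
slope, var = sphere_feature(truth + 0.1, center=(20, 20), radius=15)
print(round(slope, 2))
```

A slope near 1.0, as in this synthetic case, is what the abstract reports for true positives.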
Lung Imaging II
Quantitative analysis of the central-chest lymph nodes based on 3D MDCT image data
Show abstract
Lung cancer is the leading cause of cancer death in the United States. In lung-cancer staging, central-chest
lymph nodes and associated nodal stations, as observed in three-dimensional (3D) multidetector CT (MDCT)
scans, play a vital role. However, little work has been done in relation to lymph nodes, based on MDCT data,
due to the complicated phenomena that give rise to them. Using our custom computer-based system for 3D
MDCT-based pulmonary lymph-node analysis, we conduct a detailed study of lymph nodes as depicted in 3D
MDCT scans. In this work, the Mountain lymph-node stations are automatically defined by the system. These
defined stations, in conjunction with our system's image processing and visualization tools, facilitate lymph-node
detection, classification, and segmentation. An expert pulmonologist, chest radiologist, and trained technician
verified the accuracy of the automatically defined stations and indicated observable lymph nodes. Next, using
semi-automatic tools in our system, we defined all indicated nodes. Finally, we performed a global quantitative
analysis of the characteristics of the observed nodes and stations. This study drew upon a database of 32 human
MDCT chest scans. 320 Mountain-based stations (10 per scan) and 852 pulmonary lymph nodes were defined
overall from this database. Based on the numerical results, over 90% of the automatically defined stations were
deemed accurate. This paper also presents a detailed summary of central-chest lymph-node characteristics for the
first time.
Automatic mediastinal lymph node detection in chest CT
Show abstract
Computed tomography (CT) of the chest is a very common staging investigation for the assessment of mediastinal, hilar, and intrapulmonary lymph nodes in the context of lung cancer. In the current clinical workflow, the detection and assessment of lymph nodes is usually performed manually, which can be error-prone and timeconsuming. We therefore propose a method for the automatic detection of mediastinal, hilar, and intrapulmonary lymph node candidates in contrast-enhanced chest CT. Based on the segmentation of important mediastinal anatomy (bronchial tree, aortic arch) and making use of anatomical knowledge, we utilize Hessian eigenvalues to detect lymph node candidates. As lymph nodes can be characterized as blob-like structures of varying size and shape within a specific intensity interval, we can utilize these characteristics to reduce the number of false positive candidates significantly. We applied our method to 5 cases suspected to have lung cancer. The processing time of our algorithm did not exceed 6 minutes, and we achieved an average sensitivity of 82.1% and an average precision of 13.3%.
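Hessian-eigenvalue blob detection within an intensity interval, as described above, might be sketched like this; the smoothing scale, intensity window, and blob criterion are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def blob_candidates(volume, sigma=2.0, hu_range=(0.0, 100.0)):
    """Blob-like candidate mask from Hessian eigenvalues (sketch).

    Bright blob-like structures are assumed to have three negative
    eigenvalues of the Gaussian-smoothed Hessian, within a given intensity
    interval; the thresholds here are illustrative, not from the paper.
    """
    sm = ndimage.gaussian_filter(volume.astype(float), sigma)
    grads = np.gradient(sm)
    H = np.empty(volume.shape + (3, 3))
    for i in range(3):                       # Hessian via repeated gradients
        gi = np.gradient(grads[i])
        for j in range(3):
            H[..., i, j] = gi[j]
    eig = np.linalg.eigvalsh(H)              # ascending eigenvalues per voxel
    blob = np.all(eig < 0, axis=-1)          # bright blob: all eigenvalues < 0
    in_range = (volume >= hu_range[0]) & (volume <= hu_range[1])
    return blob & in_range

vol = np.zeros((24, 24, 24))
vol[12, 12, 12] = 1000.0                     # point source -> smoothed blob
vol = ndimage.gaussian_filter(vol, 2.0)
mask = blob_candidates(vol * 60 / vol.max(), hu_range=(10.0, 100.0))
print(mask.sum() > 0)
```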
Follow-up segmentation of lung tumors in PET and CT data
Show abstract
Early response assessment of cancer therapy is a crucial component towards a more effective and patient individualized
cancer therapy. Integrated PET/CT systems provide the opportunity to combine morphologic with
functional information. We have developed algorithms which allow the user to track both tumor volume and
standardized uptake value (SUV) measurements during the therapy from series of CT and PET images, respectively.
To prepare for tumor volume estimation we have developed a new technique for a fast, flexible, and
intuitive 3D definition of meshes. This initial surface is then automatically adapted by means of a model-based
segmentation algorithm and propagated to each follow-up scan. If necessary, manual corrections can be added by
the user. To determine SUV measurements a prioritized region growing algorithm is employed. For an improved
workflow all algorithms are embedded in a PET/CT therapy monitoring software suite giving the clinician a
unified and immediate access to all data sets. Whenever the user clicks on a tumor in a base-line scan, the
courses of segmented tumor volumes and SUV measurements are automatically identified and displayed to the
user as a graph plot. According to each course, the therapy progress can be classified as complete or partial
response or as progressive or stable disease. We have tested our methods with series of PET/CT data from 9
lung cancer patients acquired at Princess Margaret Hospital in Toronto. Each patient underwent three PET/CT
scans during radiation therapy. Our results indicate that a combination of mean metabolic activity in the
tumor with the PET-based tumor volume can lead to earlier response detection than purely volume-based
(CT diameter) or purely functional (e.g., SUVmax or SUVmean) response measures. The new software
appears suitable for easy, fast, and reproducible quantification in routine monitoring of tumor therapy.
The effect of CT technical factors on quantification of lung fissure integrity
Show abstract
A new emphysema treatment uses endobronchial valves to perform lobar volume reduction. The degree of
fissure completeness may predict treatment efficacy. This study investigated the behavior of a semiautomated
algorithm for quantifying lung fissure integrity in CT with respect to reconstruction kernel and
dose. Raw CT data was obtained for six asymptomatic patients from a high-risk population for lung cancer.
The patients were scanned on either a Siemens Sensation 16 or 64, using a low-dose protocol of 120 kVp,
25 mAs. Images were reconstructed using kernels ranging from smooth to sharp (B10f, B30f, B50f, B70f).
Research software was used to simulate an even lower-dose acquisition of 15 mAs, and images were
generated at the same kernels resulting in 8 series per patient. The left major fissure was manually
contoured axially at regular intervals, yielding 37 contours across all patients. These contours were read
into an image analysis and pattern classification system which computed a Fissure Integrity Score (FIS) for
each kernel and dose. FIS values were analyzed using a mixed-effects model with kernel and dose as fixed
effects and patient as a random effect, to test for differences due to kernel and dose. Analysis revealed no
difference in FIS between the smooth kernels (B10f, B30f) or between the sharp kernels (B50f, B70f), but
there was a significant difference between the sharp and smooth groups (p = 0.020). There was no
significant difference in FIS between the two low-dose reconstructions (p = 0.882). Using a cutoff of 90%,
the number of incomplete fissures increased from 5 to 10 when the imaging protocol changed from B50f to
B30f. Reconstruction kernel has a significant effect on quantification of fissure integrity in CT. This has
potential implications when selecting patients for endobronchial valve therapy.
Comparison of image features calculated in different dimensions for computer-aided diagnosis of lung nodules
Show abstract
Features calculated from different dimensions of images capture quantitative information of the lung nodules through
one or multiple image slices. Previously published computer-aided diagnosis (CADx) systems have used either two-dimensional
(2D) or three-dimensional (3D) features, though there has been little systematic analysis of the relevance of
the different dimensions and of the impact of combining different dimensions. The aim of this study is to determine the
importance of combining features calculated in different dimensions. We have performed CADx experiments on 125
pulmonary nodules imaged using multi-detector row CT (MDCT). The CADx system computed 192 2D, 2.5D, and 3D
image features of the lesions. Leave-one-out experiments were performed using five different combinations of features
from different dimensions: 2D, 3D, 2.5D, 2D+3D, and 2D+3D+2.5D. The experiments were performed ten times for
each group. Accuracy, sensitivity and specificity were used to evaluate the performance. Wilcoxon signed-rank tests
were applied to compare the classification results from these five different combinations of features. Our results showed
that 3D image features generate the best result compared with other combinations of features. This suggests one
approach to potentially reducing the dimensionality of the CADx data space and the computational complexity of the
system while maintaining diagnostic accuracy.
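Comparing matched per-run results with a Wilcoxon signed-rank test, as in the evaluation above, can be sketched as follows; the accuracy values are invented for illustration:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-run accuracies (10 repeats) for two feature groups,
# e.g. 3D-only vs. 2D+3D+2.5D; numbers are illustrative, not the paper's.
acc_3d  = np.array([0.82, 0.84, 0.81, 0.83, 0.85, 0.82, 0.84, 0.83, 0.82, 0.84])
acc_all = np.array([0.80, 0.82, 0.80, 0.81, 0.83, 0.81, 0.82, 0.81, 0.80, 0.82])

# Paired, non-parametric test on the matched runs
stat, p = wilcoxon(acc_3d, acc_all)
print(round(p, 4))
```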
Adaptive contrast-based computer aided detection for pulmonary embolism
Show abstract
This work involves the computer-aided diagnosis (CAD) of pulmonary embolism (PE) in contrast-enhanced computed
tomography pulmonary angiography (CTPA). Contrast plays an important role in analyzing and identifying PE in CTPA.
At times the contrast mixing in blood may be insufficient due to several factors such as scanning speed, body weight, and
injection duration. This results in a suboptimal study (mixing artifact) due to non-homogeneous enhancement of blood
opacity. Most current CAD systems are not optimized to detect PE in suboptimal studies. To this end, we propose new
techniques for CAD to work robustly in both optimal and suboptimal situations.
First, the contrast level at the pulmonary trunk is automatically detected using a landmark detection tool. This
information is then used to dynamically configure the candidate generation (CG) and classification stages of the
algorithm. In CG, a fast method based on tobogganing is proposed which also detects wall-adhering emboli. In addition,
our proposed method correctly encapsulates potential PE candidates that enable accurate feature calculation over the
entire PE candidate. Finally a classifier gating scheme has been designed that automatically switches the appropriate
classifier for suboptimal and optimal studies.
The system performance has been validated on 86 real-world cases collected from different clinical sites. Results
show around a 5% improvement in the detection of segmental PE and a 6% improvement in lobar and subsegmental PE,
with a 40% decrease in the average false-positive rate compared to a similar system without contrast detection.
Brain and Microscopy
Automatic identification of intracranial hemorrhage in non-contrast CT with large slice thickness for trauma cases
Show abstract
In this paper we propose a technique for automatic detection of intracranial hemorrhage (ICH) and acute
intracranial hemorrhage (AIH) in brain Computed Tomography (CT) for trauma cases where no contrast can be
applied and the CT has large slice thickness. ICH and AIH comprise bleeding internal (intra-axial) or external
(extra-axial) to the brain substance. Large bleeds, as in the intra-axial region, are easy to diagnose, whereas diagnosis can
be challenging when a small bleed occurs in the extra-axial region, particularly in the absence of contrast. Bleed regions
need to be distinguished from bleed-look-alike brain regions, such as an abnormally bright falx and fresh flowing
blood. We propose an algorithm for detection of brain bleeds in various anatomical locations. A preprocessing
step is performed to segment intracranial contents and enhance regions of interest (ROIs). A number of
bleed and bleed-look-alike candidates are identified from a set of 11 available cases. For each candidate, texture-based
features are extracted from a non-separable quincunx wavelet transform along with some other descriptive
features. The candidates are randomly divided into a training and test set consisting of both bleed and bleed-look-
alike. A supervised classifier is designed based on the training sample features. A performance accuracy of
96% is attained for the independent test candidates.
Optimal feature selection for automated classification of FDG-PET in patients with suspected dementia
Show abstract
FDG-PET is increasingly used for the evaluation of dementia patients, as major neurodegenerative disorders, such as
Alzheimer's disease (AD), Lewy body dementia (LBD), and Frontotemporal dementia (FTD), have been shown to
induce specific patterns of regional hypo-metabolism. However, the interpretation of FDG-PET images of patients with
suspected dementia is not straightforward, since patients are imaged at different stages of progression of
neurodegenerative disease, and the indications of reduced metabolism due to neurodegenerative disease appear slowly
over time. Furthermore, different diseases can cause rather similar patterns of hypo-metabolism. Therefore, classification
of FDG-PET images of patients with suspected dementia may lead to misdiagnosis. This work aims to find an optimal
subset of features for automated classification, in order to improve classification accuracy of FDG-PET images in
patients with suspected dementia. A novel feature selection method is proposed, and performance is compared to
existing methods. The proposed approach adopts a combination of balanced class distributions and feature selection
methods. This is demonstrated to provide high classification accuracy for classification of FDG-PET brain images of
normal controls and dementia patients, comparable with alternative approaches, and provides a compact
set of selected features.
Quantitative evaluation of Alzheimer's disease
Show abstract
We propose a single quantitative metric, the disease evaluation factor (DEF), and assess its efficiency at
estimating disease burden in normal control subjects (CTRL) and probable Alzheimer's disease (AD) patients. The study group
consisted of 75 patients with a diagnosis of probable AD and 75 age-matched normal CTRL without neurological or
neuropsychological deficit. We calculated a reference eigenspace of MRI appearance from reference data, in which our CTRL
and probable AD subjects were projected. We then calculated the multi-dimensional hyperplane separating the CTRL and
probable AD groups. The DEF was estimated via a multidimensional weighted distance of eigencoordinates for a given subject
and the CTRL group mean, along salient principal components forming the separating hyperplane. We used quantile plots,
Kolmogorov-Smirnov and χ2 tests to compare the DEF values and test that their distribution was normal. We used a linear
discriminant test to separate CTRL from probable AD based on the DEF, and reached an accuracy of 87%. A quantitative
biomarker in AD would act as an important surrogate marker of disease status and progression.
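The DEF computation (eigenspace projection, then a weighted distance from the CTRL mean along salient components) might be sketched as follows; PCA stands in for the reference eigenspace, the weighting scheme is an assumption, and the data are synthetic:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical appearance vectors: rows = subjects, cols = image features.
rng = np.random.default_rng(5)
ctrl = rng.normal(0.0, 1.0, (75, 40))
ad   = rng.normal(0.6, 1.0, (75, 40))
X = np.vstack([ctrl, ad]); y = np.r_[np.zeros(75), np.ones(75)]

# Project into a reference eigenspace (here: PCA on the pooled data)
pca = PCA(n_components=10).fit(X)
E = pca.transform(X)

# Weighted distance from the CTRL mean along the discriminant direction
lda = LinearDiscriminantAnalysis().fit(E, y)
w = np.abs(lda.coef_[0]) / np.abs(lda.coef_[0]).sum()   # salience weights
mu_ctrl = E[y == 0].mean(axis=0)
def_score = np.sqrt(((E - mu_ctrl) ** 2 * w).sum(axis=1))
print(def_score[y == 1].mean() > def_score[y == 0].mean())
```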
Influence of nuclei segmentation on breast cancer malignancy classification
Show abstract
Breast cancer is one of the deadliest cancers affecting middle-aged women. Accurate diagnosis and prognosis
are crucial to reducing the high death rate. Nowadays there are numerous diagnostic tools for breast cancer
diagnosis. In this paper we discuss the role of nuclear segmentation in fine needle aspiration biopsy (FNA) slides
and its influence on malignancy classification. Classification of malignancy plays a very important role in
the diagnostic process for breast cancer. Of all cancer diagnostic tools, FNA slides provide the most valuable
information about the cancer malignancy grade, which helps in choosing an appropriate treatment. This process
involves assessing numerous nuclear features, and therefore precise segmentation of nuclei is very important.
In this work we compare three powerful segmentation approaches and test their impact on the classification of
breast cancer malignancy. The studied approaches involve level set segmentation, fuzzy c-means segmentation
and textural segmentation based on co-occurrence matrix.
Segmented nuclei were used to extract nuclear features for malignancy classification. For classification purposes
four different classifiers were trained and tested with previously extracted features. The compared classifiers
are Multilayer Perceptron (MLP), Self-Organizing Maps (SOM), Principal Component-based Neural Network
(PCA) and Support Vector Machines (SVM). The presented results show that level set segmentation yields the
best results of the three compared approaches and leads to good feature extraction, with the lowest average
error rate of 6.51% across the four classifiers. The single best performance was recorded for the multilayer perceptron,
with an error rate of 3.07%, using fuzzy c-means segmentation.
Robust tumor morphometry in multispectral fluorescence microscopy
Show abstract
Morphological and architectural characteristics of primary tissue compartments, such as epithelial nuclei (EN) and
cytoplasm, provide important cues for cancer diagnosis, prognosis, and therapeutic response prediction. We propose two
feature sets for the robust quantification of these characteristics in multiplex immunofluorescence (IF) microscopy
images of prostate biopsy specimens. To enable feature extraction, EN and cytoplasm regions were first segmented from
the IF images. Then, feature sets consisting of the characteristics of the minimum spanning tree (MST) connecting the
EN and the fractal dimension (FD) of gland boundaries were obtained from the segmented compartments. We
demonstrated the utility of the proposed features in prostate cancer recurrence prediction on a multi-institution cohort of
1027 patients. Univariate analysis revealed that both FD and one of the MST features were highly effective for
predicting cancer recurrence (p ≤ 0.0001). In multivariate analysis, an MST feature was selected for a model
incorporating clinical and image features. The model achieved a concordance index (CI) of 0.73 on the validation set,
which was significantly higher than the CI of 0.69 for the standard multivariate model based solely on clinical features
currently used in clinical practice (p < 0.0001). The contributions of this work are twofold. First, it is the first
demonstration of the utility of the proposed features in morphometric analysis of IF images. Second, this is the largest
scale study of the efficacy and robustness of the proposed features in prostate cancer prognosis.
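An MST feature of the kind described above (statistics of the tree connecting nuclei centroids) can be sketched with SciPy; the centroids and the chosen edge statistics are illustrative:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_edge_stats(points):
    """Characteristics of the minimum spanning tree connecting nuclei
    centroids: mean and std of edge lengths (an illustrative feature pair)."""
    D = squareform(pdist(points))                  # pairwise distances
    T = minimum_spanning_tree(D).toarray()
    edges = T[T > 0]                               # the n-1 MST edge lengths
    return edges.mean(), edges.std()

# Hypothetical epithelial-nuclei centroids in a 2D image (pixel coordinates)
pts = np.random.default_rng(6).uniform(0, 100, size=(30, 2))
mean_len, std_len = mst_edge_stats(pts)
print(round(mean_len, 1))
```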
Localization of tissues in high-resolution digital anatomic pathology images
Show abstract
High resolution digital pathology images have a wide range of variability in color, shape, size, number, appearance,
location, and texture. The segmentation problem is challenging in this environment. We introduce a hybrid method that combines parametric machine learning with heuristic methods for feature extraction as well as pre- and post-processing steps for localizing diverse tissues in slide images. The method uses features such
as color, intensity, texture, and spatial distribution. We use principal component analysis for feature reduction and train a two-layer back-propagation neural network (with one hidden layer). We perform image labeling at the pixel level and achieve higher than 96% automatic localization accuracy on 294 test images.
Colon and Breast
Information-theoretic CAD system in mammography: improved mass detection by incorporating a Gaussian saliency map
Show abstract
We present the continuing development of an information-theoretic (IT) CADe system for location-specific
interrogation of screening mammograms to detect masses. IT-CADe relies on a knowledge library of mammographic
cases with known ground truth and an evidence-based approach to make a decision regarding a query case. If the query
is more similar to abnormal cases stored in the library, then the query is deemed also abnormal. Case similarity is
measured using mutual information (MI). MI takes into account only the probabilities of the underlying image pixels but
not their relative significance in the image. To address this limitation, we investigated a novel modification of the MI
similarity measure by incorporating the saliency of image pixels. Specifically, a Gaussian saliency map was applied
where central image pixels were given a higher weight and pixels' importance degraded progressively towards the image
periphery. This map makes intuitive sense. If a mass is suspected at a particular location, then image pixels
surrounding this location should be given higher importance in the MI calculation than pixels further away from this
specific location. The new MI measure was tested with a leave-one-out scheme on a database of 1,820 mammographic
regions (901 with masses and 919 normal). Further validation was performed on additional datasets of mammographic
regions deemed as suspicious by a computer algorithm and by expert mammographers. Incorporation of the Gaussian
saliency map resulted in consistent and often significant improvement of IT-CADe performance across all but one
datasets.
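The saliency-weighted MI described above can be sketched as a weighted joint histogram: each pixel pair contributes its Gaussian saliency weight rather than a unit count. The bin count and the sigma parameterisation below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def gaussian_saliency(shape, sigma_frac=0.5):
    """Weight map peaking at the image centre (an assumed parameterisation)."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    s2 = (sigma_frac * min(h, w)) ** 2
    return np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * s2))

def weighted_mutual_info(a, b, weights, bins=16):
    """Mutual information where each pixel pair contributes its saliency weight."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(),
                                 bins=bins, weights=weights.ravel())
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())
```

A query compared against itself should yield a higher weighted MI than against an unrelated image, mirroring the library-lookup decision rule.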
Relational representation for improved decisions with an information-theoretic CADe system: initial experience
Show abstract
Our previously presented information-theoretic computer-aided detection (IT-CADe) system for distinguishing
masses and normal parenchyma in mammograms is an example of a case-based system.
IT-CAD makes decisions by evaluating the query's average similarity with known mass and normal
examples stored in the system's case base. Pairwise case similarity is measured in terms of their normalized
mutual information. The purpose of this study was to evaluate whether incorporating a new
machine learning concept of relational representation to IT-CAD is a more effective strategy than the
decision algorithm that is currently in place. A trainable relational representation classifier builds a
decision rule using the relational representation of cases. Instead of describing a case by a vector of
intrinsic features, the case is described by its NMI-based similarity to a set of known examples. For this
study, we first applied a random mutation hill climbing algorithm to select a concise set of knowledge
cases and then we applied a support vector machine to derive a decision rule using the relational representation
of cases. We performed the study with a database of 600 mammographic regions of interest
(300 with masses and 300 with normal parenchyma). Our experiments indicate that incorporating the
concept of relational representation with a trainable classifier to IT-CAD provides an improvement
in performance as compared with the original decision rule. Therefore, relational representation is a
promising strategy for IT-CADe.
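The relational representation described above replaces a case's intrinsic feature vector with its similarities to a set of known examples. A minimal sketch follows; the Gaussian-kernel similarity used here is a hypothetical stand-in for the paper's normalized mutual information, and the array shapes are illustrative.

```python
import numpy as np

def relational_features(cases, knowledge, similarity):
    """Describe each case by its similarity to every knowledge case.

    cases:      (n, d) array of query cases
    knowledge:  (m, d) array of stored examples
    similarity: callable(x, y) -> scalar (the paper uses NMI; any pairwise
                similarity fits the same scheme)
    Returns an (n, m) relational feature matrix for a downstream classifier
    such as an SVM.
    """
    return np.array([[similarity(c, k) for k in knowledge] for c in cases])

# Hypothetical stand-in similarity (a Gaussian kernel, NOT the paper's NMI):
def rbf_sim(x, y, gamma=0.5):
    return float(np.exp(-gamma * np.sum((x - y) ** 2)))
```

A case identical to a knowledge example gets similarity 1.0 to it, so the relational vector directly encodes which stored examples the query resembles.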
Physical priors in virtual colonoscopy
Show abstract
Electronic colon cleansing (ECC) aims to remove the contrast agent from the CT abdominal images so that a virtual
model of the colon can be constructed. Virtual colonoscopy requires either liquid or solid preparation of the colon before
CT imaging. This paper has two parts to address ECC in both preparation methods. In the first part, meniscus removal in
the liquid preparation is studied. The meniscus is the curve seen at the top of a liquid in response to its container. Left on
the colon wall, the meniscus can decrease the sensitivity and specificity of virtual colonoscopy. We state the differential
equation that governs the profile of the meniscus and propose an algorithm for calculating the boundary of the contrast
agent. We compute the surface tension of the liquid-colon wall contact using in-vivo CT data. Our results show that the
surface tension can be estimated with an acceptable degree of uncertainty. Such an estimate, along with the meniscus
profile differential equation will be used as an a priori knowledge to aid meniscus segmentation. In the second part, we
study ECC in solid preparation of colon. Since the colon is pressurized with air before acquisition of the CT images, a
prior on the shape of the colon wall can be obtained. We present such prior and investigate it using patient data. We
show that the shape prior holds in certain parts of the colon and propose a method that uses this prior to ease pseudo-enhancement correction.
A CAD utilizing 3D massive-training ANNs for detection of flat lesions in CT colonography: preliminary results
Show abstract
Our purpose was to develop a computer-aided diagnostic (CAD) scheme for detection of flat lesions (also known as
superficial elevated or depressed lesions) in CT colonography (CTC), which utilized 3D massive-training artificial
neural networks (MTANNs) for false-positive (FP) reduction. Our CAD scheme consisted of colon segmentation, polyp
candidate detection, linear discriminant analysis, and MTANNs. To detect flat lesions, we developed a precise shape
analysis in the polyp detection step to accommodate the analysis to include a flat shape. With our MTANN CAD
scheme, 68% (19/28) of flat lesions, including six lesions "missed" by radiologists in a multicenter clinical trial, were
detected correctly, with 10 (249/25) FPs per patient.
High-performance computer aided detection system for polyp detection in CT colonography with fluid and fecal tagging
Show abstract
CT colonography (CTC) is a feasible and minimally invasive method for the detection of colorectal polyps and cancer
screening. Computer-aided detection (CAD) of polyps has improved consistency and sensitivity of virtual colonoscopy
interpretation and reduced interpretation burden. A CAD system typically consists of four stages: (1) image preprocessing
including colon segmentation; (2) initial detection generation; (3) feature selection; and (4) detection
classification. In our experience, three existing problems limit the performance of our current CAD system. First, high-density orally administered contrast agents in fecal-tagging CTC have scatter effects on neighboring tissues. The
scattering manifests itself as an artificial elevation in the observed CT attenuation values of the neighboring tissues. This
pseudo-enhancement phenomenon presents a problem for the application of computer-aided polyp detection, especially
when polyps are submerged in the contrast agents. Second, general kernel approach for surface curvature computation in
the second stage of our CAD system could yield erroneous results for thin structures such as small (6-9 mm) polyps and
for touching structures such as polyps that lie on haustral folds. Those erroneous curvatures will reduce the sensitivity of
polyp detection. The third problem is that more than 150 features are selected from each polyp candidate in the third
stage of our CAD system. These high dimensional features make it difficult to learn a good decision boundary for
detection classification and reduce the accuracy of predictions. Therefore, an improved CAD system for polyp detection
in CTC data is proposed by introducing three new techniques. First, a scale-based scatter correction algorithm is applied
to reduce pseudo-enhancement effects in the image pre-processing stage. Second, a cubic spline interpolation method is
utilized to accurately estimate curvatures for initial detection generation. Third, a new dimensionality reduction
classifier, diffusion map and local linear embedding (DMLLE), is developed for classification and false positives (FP)
reduction. Performance of the improved CAD system is evaluated and compared with our existing CAD system (without
applying those techniques) using CT scans of 1186 patients. These scans are divided into a training set and a test set. The
sensitivity of the improved CAD system increased 18% on training data at a rate of 5 FPs per patient and 15% on test
data at a rate of 5 FPs per patient. Our results indicated that the improved CAD system achieved significantly better
performance on medium-sized colonic adenomas with higher sensitivity and lower FP rate in CTC.
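The cubic spline interpolation step for curvature estimation can be sketched on a 1D surface profile; this is a simplification of the CAD system's surface curvature computation, and the circle-profile example is illustrative only.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def spline_curvature(x, y, x_eval):
    """Curvature of a surface profile estimated with a cubic spline.

    kappa = |y''| / (1 + y'^2)^(3/2), with derivatives taken from the
    fitted spline and evaluated at x_eval.
    """
    spline = CubicSpline(x, y)
    d1, d2 = spline(x_eval, 1), spline(x_eval, 2)
    return np.abs(d2) / (1.0 + d1 ** 2) ** 1.5
```

On samples of a circle of radius 2, the recovered curvature at the apex is close to the true value 1/R = 0.5, which is the property that makes spline-based estimates more reliable than kernel differencing on thin structures.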
Novel Applications
Robust algorithms for anatomic plane primitive detection in MR
Show abstract
One of the primary challenges in medical image data analysis is the ability to handle abnormal, irregular, and/or
partial cases. In this paper, we present two different robust algorithms towards the goal of automatic planar
primitive detection in 3D volumes. The overall algorithm is a bottom-up approach starting with the anatomic
point primitives (or landmarks) detection. The robustness in computing the planar primitives is built in through
both a novel consensus-based voting approach, and a random sampling-based weighted least squares regression
method. Both these approaches remove inconsistent landmarks and outliers detected in the landmark detection
step. Unlike earlier approaches focused towards a particular plane, the presented approach is generic and can be
easily adapted to computing more complex primitives such as ROIs or surfaces. To demonstrate the robustness
and accuracy of our approach, we present extensive results for automatic plane detection (Mid-Sagittal and
Optical Triangle planes) in brain MR images. In comparison to ground truth, our approach has marginal errors
on about 90 patients. The algorithm also performs well under adverse conditions of arbitrary rotation and
cropping of the 3D volume. In order to exhibit generalization of the approach, we also present preliminary results
on intervertebrae-plane detection for 3D spine MR application.
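The random sampling-based robust plane fit described above can be sketched in RANSAC style: repeatedly fit a plane to three sampled landmarks, keep the largest consensus set, then refine by least squares. The iteration count and inlier tolerance below are illustrative assumptions, and the paper's consensus voting and weighted regression differ in detail.

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=1.0, rng=None):
    """Fit a plane to 3D landmarks while rejecting outliers (RANSAC-style sketch).

    Returns (unit normal n, offset d) with n . p = d for inliers, refined by
    least squares on the best consensus set.
    """
    rng = rng or np.random.default_rng(0)
    pts = np.asarray(points, float)
    best_inliers = None
    for _ in range(n_iter):
        p = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        if np.linalg.norm(n) < 1e-9:
            continue  # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        dist = np.abs(pts @ n - p[0] @ n)
        inliers = pts[dist < tol]
        if best_inliers is None or len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Least-squares refinement: normal = smallest singular vector of centred inliers
    c = best_inliers.mean(axis=0)
    n = np.linalg.svd(best_inliers - c)[2][-1]
    return n, float(n @ c)
```

Because inconsistent landmarks never dominate the consensus set, the recovered plane stays accurate even when a fraction of the detected landmarks are outliers.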
Toward knowledge-enhanced viewing using encyclopedias and model-based segmentation
Show abstract
To make accurate decisions based on imaging data, radiologists must associate the viewed imaging data with the corresponding anatomical structures. Furthermore, given a disease hypothesis, the possible image findings that would verify the hypothesis must be considered, along with where and how they are expressed in the viewed images. If rare anatomical variants, rare pathologies, unfamiliar protocols, or ambiguous findings are present, external knowledge sources such as medical
encyclopedias are consulted. These sources are accessed using keywords, typically describing anatomical structures, image findings, or pathologies.
In this paper we present our vision of how a patient's imaging data can be automatically enhanced with anatomical knowledge as well as knowledge about image findings. On one hand, we propose the automatic annotation of the images with labels from a standard anatomical ontology. These labels are used as keywords for a medical encyclopedia such as STATdx to access anatomical descriptions, information about pathologies and image findings. On the other hand we
envision encyclopedias to contain links to region- and finding-specific image processing algorithms. Then a finding is
evaluated on an image by applying the respective algorithm in the associated anatomical region.
Towards realization of our vision, we present our method and results of automatic annotation of anatomical structures in 3D MRI brain images. Thereby we develop a complex surface mesh model incorporating major structures of the brain and a model-based segmentation method. We demonstrate the validity by analyzing the results of several training and segmentation experiments with clinical data focusing particularly on the visual pathway.
A computer-aided differential diagnosis between UIP and NSIP using automated assessment of the extent and distribution of regional disease patterns at HRCT: comparison with the radiologist's decision
Show abstract
To evaluate the accuracy of computer aided differential diagnosis (CADD) between usual interstitial pneumonia (UIP)
and nonspecific interstitial pneumonia (NSIP) at HRCT in comparison with that of a radiologist's decision.
A computerized classification for six local disease patterns (normal, NL; ground-glass opacity, GGO; reticular opacity,
RO; honeycombing, HC; emphysema, EM; and consolidation, CON) using texture/shape analyses and a SVM classifier
at HRCT was used for pixel-by-pixel labeling on the whole lung area. The mode filter was applied on the results to
reduce noise. Area fraction (AF) of each pattern, directional probability density function (dPDF: mean, SD,
skewness of the pdf in 3 directions: superior-inferior, anterior-posterior, central-peripheral), regional cluster distribution
pattern (RCDP: number, mean, SD of clusters, mean, SD of centroid of clusters) were automatically evaluated. Spatially
normalized left and right lungs were evaluated separately. Disease division index (DDI) on every combination of AFs
and asymmetric index (AI) between left and right lung ((left-right)/left) were also evaluated. To assess the accuracy of
the system, fifty-four HRCT data sets in patients with pathologically diagnosed UIP (n=26) and NSIP (n=28) were used.
For the classification procedure, a CADD-SVM classifier with internal parameter optimization and sequential forward
floating feature selection (SFFS) were employed. The accuracy was assessed by 5-fold cross-validation with 20
repetitions. For comparison, two thoracic radiologists reviewed the whole HRCT images without clinical
information and diagnosed each case as either UIP or NSIP.
The accuracies of radiologists' decision were 0.75 and 0.87, respectively. The accuracies of the CADD system using the
features of AF, dPDF, AI of dPDF, RCDP, AI of RCDP, and DDI were 0.70, 0.79, 0.77, 0.80, 0.78, and 0.81, respectively. The
accuracy of optimized CADD using all features after SFFS was 0.91.
We developed the CADD system to differentiate between UIP and NSIP using automated assessment of the extent and
distribution of regional disease patterns at HRCT.
Automatic classification of retinal vessels into arteries and veins
Show abstract
Separating the retinal vascular tree into arteries and veins is important for quantifying vessel changes that
preferentially affect either the veins or the arteries. For example the ratio of arterial to venous diameter, the
retinal a/v ratio, is well established to be predictive of stroke and other cardiovascular events in adults, as well
as the staging of retinopathy of prematurity in premature infants. This work presents a supervised, automatic
method that can determine whether a vessel is an artery or a vein based on intensity and derivative information.
After thinning of the vessel segmentation, vessel crossing and bifurcation points are removed leaving a set of
vessel segments containing centerline pixels. A set of features is extracted from each centerline pixel, and using
these features each pixel is assigned a soft label indicating the likelihood that it is part of a vein. As all centerline pixels in
a connected segment should be of the same type, we average the soft labels and assign this average label to each
centerline pixel in the segment. We train and test the algorithm using the data (40 color fundus photographs)
from the DRIVE database1 with an enhanced reference standard. In the enhanced reference standard a fellowship
trained retinal specialist (MDA) labeled all vessels for which it was possible to visually determine whether it was
a vein or an artery. After applying the proposed method to the 20 images of the DRIVE test set we obtained
an area under the receiver operator characteristic (ROC) curve of 0.88 for correctly assigning centerline pixels
to either the vein or artery classes.
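The per-segment averaging step described above can be sketched directly; the data layout (parallel lists of probabilities and segment ids) is an illustrative assumption.

```python
def label_segments(pixel_probs, segment_ids):
    """Average per-pixel soft vein labels within each vessel segment.

    pixel_probs: iterable of vein likelihoods, one per centerline pixel
    segment_ids: iterable giving the segment each pixel belongs to
    Returns {segment_id: averaged label}; the average is then assigned back
    to every pixel of the segment so a whole connected segment is classed
    as artery or vein together.
    """
    sums, counts = {}, {}
    for p, s in zip(pixel_probs, segment_ids):
        sums[s] = sums.get(s, 0.0) + p
        counts[s] = counts.get(s, 0) + 1
    return {s: sums[s] / counts[s] for s in sums}
```

Averaging suppresses isolated misclassified pixels: a segment with soft labels 0.9 and 0.7 is uniformly labeled 0.8 (vein-like) rather than leaving pixel-level disagreement.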
Developing assessment system for wireless capsule endoscopy videos based on event detection
Show abstract
Advances in wireless technology and miniature cameras have enabled Wireless Capsule Endoscopy (WCE), which
allows a physician to examine a patient's digestive system without performing a surgical procedure. Although WCE
is a technical breakthrough that allows physicians to visualize the entire small bowel noninvasively, viewing a video
takes 1 - 2 hours. This is very time consuming for the gastroenterologist: not only does it limit the wide application
of this technology, it also incurs a considerable cost. Therefore, it is important to automate the process so that
clinicians can focus only on events of interest. As an extension
from our previous work that characterizes the motility of digestive tract in WCE videos, we propose a new assessment
system for energy based events detection (EG-EBD) to classify the events in WCE videos. For the system, we first
extract general features of a WCE video that can characterize the intestinal contractions in digestive organs. Then, the
event boundaries are identified by using the High Frequency Content (HFC) function. The segments are then classified into WCE
events by special features. In this system, we focus on entering the duodenum, entering the cecum, and active bleeding. This
assessment system can be easily extended to discover more WCE events, such as detailed organ segmentation and more
diseases, by using new special features. In addition, the system provides a score for every WCE image for each event.
Using the event scores, the system helps a specialist to speed up the diagnosis process.
Hands-free interactive image segmentation using eyegaze
Show abstract
This paper explores a novel approach to interactive user-guided image segmentation, using eyegaze information
as an input. The method includes three steps: 1) eyegaze tracking for providing user input, such as setting
object and background seed pixel selection; 2) an optimization method for image labeling that is constrained
or affected by user input; and 3) linking the two previous steps via a graphical user interface for displaying the
images and other controls to the user and for providing real-time visual feedback of eyegaze and seed locations,
thus enabling the interactive segmentation procedure. We developed a new graphical user interface supported
by an eyegaze tracking monitor to capture the user's eyegaze movement and fixations (as opposed to traditional
mouse moving and clicking). The user simply looks at different parts of the screen to select which image to
segment, to perform foreground and background seed placement and to set optional segmentation parameters.
There is an eyegaze-controlled "zoom" feature for difficult images containing objects with narrow parts, holes
or weak boundaries. The image is then segmented using the random walker image segmentation method. We
performed a pilot study with 7 subjects who segmented synthetic, natural and real medical images. Our results
show that getting used to the new interface takes only about 5 minutes. Compared with traditional mouse-based
control, the new eyegaze approach provided an 18.6% speed improvement for more than 90% of images with high
object-background contrast. However, for low contrast and more difficult images it took longer to place seeds
using the eyegaze-based "zoom" to relax the required eyegaze accuracy of seed placement.
Retina and ROC Challenge
Active learning approach for detection of hard exudates, cotton wool spots, and drusen in retinal images
Show abstract
Computer-aided Diagnosis (CAD) systems for the automatic identification of abnormalities in retinal images are
gaining importance in diabetic retinopathy screening programs. A huge amount of retinal images are collected
during these programs and they provide a starting point for the design of machine learning algorithms. However,
manual annotations of retinal images are scarce and expensive to obtain. This paper proposes a dynamic CAD
system based on active learning for the automatic identification of hard exudates, cotton wool spots and drusen
in retinal images. An uncertainty sampling method is applied to select samples that need to be labeled by an
expert from an unlabeled set of 4000 retinal images. It reduces the number of training samples needed to obtain
an optimum accuracy by dynamically selecting the most informative samples. Results show that the proposed
method increases the classification accuracy compared to alternative techniques, achieving an area under the
ROC curve of 0.87, 0.82 and 0.78 for the detection of hard exudates, cotton wool spots and drusen, respectively.
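The uncertainty sampling step described above can be sketched as follows: among the unlabeled pool, the samples whose predicted probability lies closest to 0.5 are the most informative and are sent to the expert. The margin criterion below is one common formulation, assumed rather than taken from the paper.

```python
import numpy as np

def uncertainty_sample(probs, k):
    """Pick the k most informative unlabeled samples for expert annotation.

    probs: (n,) predicted probabilities of the positive class for the
           unlabeled pool. Predictions closest to 0.5 are the most
           uncertain, so those samples are queried first.
    Returns their indices, most uncertain first.
    """
    margin = np.abs(np.asarray(probs) - 0.5)
    return np.argsort(margin)[:k]
```

Iterating this selection, labeling, and retraining loop is what lets the CAD system reach its target accuracy with far fewer than 4000 annotated images.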
Automated detection of kinks from blood vessels for optic cup segmentation in retinal images
Show abstract
The accurate localization of the optic cup in retinal images is important to assess the cup to disc ratio (CDR) for
glaucoma screening and management. Glaucoma is physiologically assessed by the increased excavation of the optic cup
within the optic nerve head, also known as the optic disc. The CDR is thus an important indicator of risk and severity of
glaucoma. In this paper, we propose a method of determining the cup boundary using non-stereographic retinal images
by the automatic detection of a morphological feature within the optic disc known as kinks. Kinks are defined as the
bendings of small vessels as they traverse from the disc to the cup, providing physiological validation for the cup
boundary. To detect kinks, localized patches are first generated from a preliminary cup boundary obtained via level set.
Features obtained using edge detection and the wavelet transform are combined using a statistical rule to identify
likely vessel edges. The kinks are then obtained automatically by analyzing the detected vessel edges for angular
changes, and these kinks are subsequently used to obtain the cup boundary. A set of retinal images from the Singapore
Eye Research Institute was obtained to assess the performance of the method, with each image being clinically graded
for the CDR. From experiments, when kinks were used, the error on the CDR was reduced to less than 0.1 CDR units
relative to the clinical CDR, which is within the intra-observer variability of 0.2 CDR units.
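The angular-change analysis used to detect kinks can be sketched on an ordered vessel-edge polyline: a point is flagged when the turning angle between the incoming and outgoing directions exceeds a threshold. The 45-degree threshold is an illustrative assumption, not the paper's value.

```python
import numpy as np

def find_kinks(polyline, angle_thresh_deg=45.0):
    """Indices of sharp directional changes along a vessel-edge polyline.

    polyline: (n, 2) ordered points along a detected vessel edge.
    A point is flagged when the angle between the segment entering it and
    the segment leaving it exceeds angle_thresh_deg.
    """
    pts = np.asarray(polyline, float)
    v = np.diff(pts, axis=0)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    cos_a = np.clip((v[:-1] * v[1:]).sum(axis=1), -1.0, 1.0)
    angles = np.degrees(np.arccos(cos_a))
    return [i + 1 for i, a in enumerate(angles) if a > angle_thresh_deg]
```

For an L-shaped polyline the single corner point is returned, which is the kind of vessel bend the method uses as physiological evidence for the cup boundary.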
Hierarchical detection of red lesions in retinal images by multiscale correlation filtering
Show abstract
This paper presents an approach to the computer aided diagnosis (CAD) of diabetic retinopathy (DR) -- a common and
severe complication of long-term diabetes which damages the retina and causes blindness. Since red lesions are regarded
as the first signs of DR, there has been extensive research on effective detection and localization of these abnormalities
in retinal images. In contrast to existing algorithms, a new approach based on Multiscale Correlation Filtering (MSCF)
and dynamic thresholding is developed. This consists of two levels, Red Lesion Candidate Detection (coarse level) and
True Red Lesion Detection (fine level). The approach was evaluated using data from Retinopathy On-line Challenge
(ROC) competition website and we conclude our method to be effective and efficient.
Mixture model-based clustering and logistic regression for automatic detection of microaneurysms in retinal images
Show abstract
Diabetic Retinopathy is one of the leading causes of blindness and vision defects in developed countries. An early
detection and diagnosis is crucial to avoid visual complication. Microaneurysms are the first ocular signs of the presence
of this ocular disease. Their detection is of paramount importance for the development of a computer-aided diagnosis
technique which permits a prompt diagnosis of the disease. However, the detection of microaneurysms in retinal images
is a difficult task due to the wide variability that these images usually present in screening programs. We propose a
statistical approach based on mixture model-based clustering and logistic regression which is robust to the changes in the
appearance of retinal fundus images. The method is evaluated on the public database proposed by the Retinal Online
Challenge in order to obtain an objective performance measure and to allow a comparative study with other proposed
algorithms.
Automated microaneurysm detection method based on double ring filter in retinal fundus images
Show abstract
The presence of microaneurysms in the eye is one of the early signs of diabetic retinopathy, which is one of the leading
causes of vision loss. We have been investigating a computerized method for the detection of microaneurysms on retinal
fundus images, which were obtained from the Retinopathy Online Challenge (ROC) database. The ROC provides 50
training cases, in which "gold standard" locations of microaneurysms are provided, and 50 test cases without the gold
standard locations. In this study, the computerized scheme was developed by using the training cases. Although the
results for the test cases are also included, this paper mainly discusses the results for the training cases because the "gold
standard" for the test cases is not known. After image preprocessing, candidate regions for microaneurysms were
detected using a double-ring filter. Any potential false positives located in the regions corresponding to blood vessels
were removed by automatic extraction of blood vessels from the images. Twelve image features were determined, and
the candidate lesions were classified into microaneurysms or false positives using the rule-based method and an artificial
neural network. The true positive fraction of the proposed method was 0.45 at 27 false positives per image. Forty-two
percent of microaneurysms in the 50 training cases were considered invisible by the consensus of two co-investigators.
When the method was evaluated for visible microaneurysms, the sensitivity for detecting microaneurysms was 65% at
27 false positives per image. Our computerized detection scheme could be improved for helping ophthalmologists in the
early diagnosis of diabetic retinopathy.
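A simplified single-pixel version of the double-ring filter can be sketched by comparing mean intensity in an inner disk against the surrounding ring; dark blobs such as microaneurysm candidates then give a large positive response. The radii below are illustrative, not the paper's values.

```python
import numpy as np

def double_ring_response(img, cy, cx, r_in=3, r_out=6):
    """Double-ring filter response at one pixel (a simplified sketch).

    Compares the mean intensity inside an inner disk with the mean in the
    surrounding ring; a dark spot against a brighter background yields a
    large positive value, flagging a microaneurysm candidate.
    """
    h, w = img.shape
    y, x = np.ogrid[0:h, 0:w]
    r2 = (y - cy) ** 2 + (x - cx) ** 2
    inner = img[r2 <= r_in ** 2]
    ring = img[(r2 > r_in ** 2) & (r2 <= r_out ** 2)]
    return float(ring.mean() - inner.mean())
```

Evaluating the response at every pixel and thresholding it produces the candidate regions that the rule-based and neural-network stages then classify.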
Lung Nodules and ANODE Challenge
Growth-pattern classification of pulmonary nodules based on variation of CT number histogram and its potential usefulness in nodule differentiation
Show abstract
In recent years, high-resolution CT has been developed, and CAD systems have become indispensable for pulmonary cancer screening.
In research and development of computer-aided differential diagnosis, there is now widespread interest in the use of
nodule doubling time for measuring the volumetric changes of pulmonary nodules. The evolution pattern of each nodule
might depend on the CT number distribution pattern inside the nodule, such as pure GGO, mixed GGO, or solid nodules.
This paper presents a computerized approach to measure CT number variation inside a pulmonary nodule. The
approach consists of five steps: (1) nodule segmentation, (2) computation of the CT number histogram, (3) nodule
categorization (α, β, γ, ε) based on the CT number histogram, (4) computation of doubling time based on the CT number
histogram and growth-pattern classification into six categories (decrease, gradual decrease, no
change, slow increase, gradual increase, and increase), and (5) classification between benign and malignant cases. Using
our dataset of follow-up scans for whom the final diagnosis was known (62 benign and 42 malignant cases), we
evaluated growth-pattern of nodules and designed the classification strategy between benign and malignant cases. In
order to compare the performance between the proposed features and volumetric doubling time, the classification result
was analyzed by an area under the receiver operating characteristic curve. The preliminary experimental result
demonstrated that our approach has a highly potential usefulness to assess the nodule evolution using 3-D thoracic CT
images.
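The volumetric doubling time referred to above follows from exponential growth between two follow-up measurements; the formula is standard, though the paper's histogram-based variant operates per CT-number class.

```python
import math

def doubling_time(v1, v2, delta_days):
    """Volumetric doubling time from two follow-up volume measurements.

    Assumes exponential growth: DT = dt * ln(2) / ln(V2 / V1).
    Negative values indicate a shrinking nodule.
    """
    return delta_days * math.log(2) / math.log(v2 / v1)
```

A nodule that doubles its volume over a 90-day follow-up interval has a doubling time of 90 days; quadrupling over the same interval halves it to 45 days.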
Improved precision of repeat image change measurement of pulmonary nodules using moment-based z-compensation on a zero-change dataset
Show abstract
CT scanners often have higher in-plane resolution than axial resolution. As a result, measurements in the axial direction are less reliable than measurements in-plane, and this should be considered when performing nodule growth measurements. We propose a method to measure nodule growth rates by a moment-based algorithm using the central second order moments for the in-plane directions. The interscan repeatability of the new method was compared to a volumetric measurement method on a database of 22 nodules with multiple scans taken in the same session. The interscan variability was defined as the 95% confidence interval of the relative volume change. For the entire database of nodules, the interscan variability of the volumetric growth method was (-52.1%, 30.1%); the moment-based method improved the variability to (-34.2%, 23.3%). For the 11 nodules with scans of the same slice thickness between scans, the variability of the volumetric growth method was (-24.0%, 30.1%), compared to (-12.4%, 12.7%) for the moment-based method. The 11 nodules with scans of different slice thickness had a variability for the volumetric method of (-68.4%, 30.2%) and for the moment-based method, (-46.5%, 24.4%). The moment-based method showed improvement in interscan variability for all cases. This study shows promising preliminary results of improved repeatability of the new moment-based method over a volumetric method and suggests that measurements on scans of the same slice thickness are more repeatable than on scans of different slice thickness. The 11 nodules with the same slice thickness are publicly available.
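A generic moment-based size estimate can be sketched from the second-order central moments of a binary nodule mask: for a uniform ellipsoid the covariance eigenvalues equal (semi-axis)^2 / 5, so an equivalent-ellipsoid volume follows directly. This is an illustrative sketch, not the paper's exact z-compensation algorithm.

```python
import numpy as np

def moment_volume(mask):
    """Nodule volume from second-order central moments of a 3D binary mask.

    For a uniform ellipsoid the covariance eigenvalues are (semi-axis^2)/5,
    so semi-axes are sqrt(5 * lambda) and V = 4/3 * pi * a * b * c.
    """
    coords = np.argwhere(mask)
    cov = np.cov(coords.T)
    semi_axes = np.sqrt(5.0 * np.linalg.eigvalsh(cov))
    return 4.0 / 3.0 * np.pi * np.prod(semi_axes)
```

Because the moments integrate over the whole mask, the estimate degrades more gracefully with coarse axial sampling than direct voxel counting, which is the motivation for a moment-based growth measure.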
A multiscale Laplacian of Gaussian filtering approach to automated pulmonary nodule detection from whole-lung low-dose CT scans
Show abstract
The primary stage of a pulmonary nodule detection system is typically a candidate generator that efficiently
provides the centroid location and size estimate of candidate nodules. A scale-normalized Laplacian of Gaussian
(LOG) filtering method presented in this paper has been found to provide high sensitivity along with precise
locality and size estimation. This approach involves a computationally efficient algorithm that is designed to
identify all solid nodules in a whole lung anisotropic CT scan.
This nodule candidate generator has been evaluated in conjunction with a set of discriminative features that
target both isolated and attached nodules. The entire detection system was evaluated with respect to a size-enriched
dataset of 656 whole-lung low-dose CT scans containing 459 solid nodules with diameter greater than 4
mm. Using a soft-margin SVM classifier and setting a false-positive rate of 10 per scan, we obtained a sensitivity
of 97% for isolated, 93% for attached, and 89% for both nodule types combined. Furthermore, the LOG filter
was shown to have good agreement with the radiologist ground truth for size estimation.
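The scale-normalized LoG response described above can be sketched with scipy: multiplying the Laplacian-of-Gaussian response by sigma^2 makes responses comparable across scales, so the sigma maximising the response at a blob estimates its size. The sigma range below is illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def best_blob_scale(img, sigmas):
    """Scale-normalized LoG filtering (sketch of a candidate generator).

    The sigma maximising |sigma^2 * LoG| at a blob estimates the blob
    size. Returns (best_sigma, response map at that sigma); candidate
    centroids are local maxima of the response map.
    """
    best, best_resp = None, None
    for s in sigmas:
        resp = (s ** 2) * np.abs(gaussian_laplace(img, s))
        if best is None or resp.max() > best_resp.max():
            best, best_resp = s, resp
    return best, best_resp
```

For a Gaussian blob of scale sigma_b, the normalized response peaks at sigma = sigma_b, which is what gives the filter its precise locality and size estimation.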
A voxel-based neural approach (VBNA) to identify lung nodules in the ANODE09 study
Show abstract
The computer-aided detection (CAD) system we applied to the ANODE09 dataset is devoted to identifying pulmonary
nodules in low-dose, thin-slice computed tomography (CT) images: we developed two different
systems for internal (CADI) and juxtapleural nodules (CADJP) in the framework of the Italian MAGIC-5 collaboration.
The basic modules of the CADI subsystem are a 3D dot-enhancement algorithm for nodule candidate
identification and an original approach, which we refer to as the Voxel-Based Neural Approach (VBNA), to reduce the
amount of false-positive findings based on a neural classifier working at the voxel level. To detect juxtapleural
nodules we developed the CADJP subsystem based on a procedure enhancing regions where many pleura surface
normals intersect, followed by a VBNA classification. We present both the FROC curves we obtained on the 5
annotated ANODE09 example dataset, and on all the ANODE09 50 test cases.
Automated lung nodule detection and segmentation
Show abstract
A computer-aided detection (CAD) system for lung nodules in CT scans was developed. For nodule detection, two different methods were applied, and only voxels detected by both methods are marked as true positives. The first method uses a multi-threshold algorithm that detects connected regions within the lung whose intensity lies between specified threshold values. The second is a multi-scale detection method that searches the data for points located in spherical objects: the image data were smoothed with a 3D Gaussian filter, and the Hessian matrix with its eigenvalues and eigenvectors was computed for all voxels detected by the first algorithm. By analyzing the eigenvalues, points that lie within a spherical structure can be located. For segmentation of the detected nodules, an active contour model was used: a two-dimensional active contour with four energy terms describing the form and position of the contour in the image data. In addition, a balloon energy was used to let the active contour grow outward from a single point. The result of the detection part is used as input for the segmentation part. To test the detection algorithms, we used 19 CT volume data sets from low-dose CT studies. Our CAD system detected 58% of the nodules with a false-positive rate of 1.38. Additionally, we took part in the ANODE09 study, whose results will be presented at the SPIE meeting in 2009.
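The Hessian-eigenvalue test for spherical structures can be sketched as follows: for a bright blob, all three eigenvalues of the Hessian of the smoothed image are negative at the blob center. A minimal illustration with a finite-difference Hessian (not the paper's code; parameters are our choices):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigenvalues(volume, sigma=1.0):
    """Per-voxel eigenvalues of the Hessian of a Gaussian-smoothed 3D
    volume (illustrative sketch). For a bright spherical structure all
    three eigenvalues are negative and of similar magnitude."""
    sm = gaussian_filter(np.asarray(volume, float), sigma)
    grads = np.gradient(sm)
    H = np.empty(sm.shape + (3, 3))
    for i, gi in enumerate(grads):
        second = np.gradient(gi)  # second derivatives of axis i
        for j in range(3):
            H[..., i, j] = second[j]
    # Symmetric per-voxel 3x3 matrices -> eigenvalues in ascending order
    return np.linalg.eigvalsh(H)
```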
The Lung TIME: annotated lung nodule dataset and nodule detection framework
Show abstract
The Lung Test Images from Motol Environment (Lung TIME) is a new publicly available dataset of thoracic CT scans with manually annotated pulmonary nodules. It is larger than other publicly available datasets. Pulmonary nodules are lesions in the lungs, which may indicate lung cancer. Their early detection significantly improves
survival rate of patients. Automatic nodule detecting systems using CT scans are being developed to reduce physicians' load and to improve detection quality. Besides presenting our own nodule detection system, in this article, we mainly address the problem of testing and comparison of automatic nodule detection methods. Our
publicly available dataset of 157 CT scans with 394 annotated nodules contains almost every nodule type (pleura-attached, vessel-attached, solitary, regular, irregular) from 2 to 10 mm in diameter, except ground-glass opacities (GGO). Annotation was done consensually by two experienced radiologists. The data are in DICOM format,
annotations are provided in XML format compatible with the Lung Imaging Database Consortium (LIDC). Our computer-aided diagnosis (CAD) system is based on mathematical morphology and filtration with a subsequent classification step, using an Asymmetric AdaBoost classifier. The system was tested using the TIME, LIDC and ANODE09 databases. The performance was evaluated by cross-validation for the Lung TIME and LIDC, and using the supplied evaluation procedure for ANODE09. The sensitivity at the chosen working point was 94.27% with 7.57 false positives/slice for the TIME and LIDC datasets combined, 94.03% with 5.46 FPs/slice for the Lung TIME, 89.62% with 12.03 FPs/slice for LIDC, and 78.68% with 4.61 FPs/slice on ANODE09.
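FROC curves such as those used in ANODE09-style evaluations plot sensitivity against false positives per scan (or per slice) as the detector's score threshold is lowered. A minimal sketch of this bookkeeping (assuming each true nodule is hit by at most one detection; all names are ours):

```python
def froc_points(detections, n_scans, n_nodules):
    """detections: list of (score, is_true_positive) pooled over all scans.
    Returns (fps_per_scan, sensitivity) pairs as the score threshold is
    lowered -- the operating points of an FROC curve (illustrative sketch)."""
    dets = sorted(detections, key=lambda d: -d[0])  # highest score first
    tps = fps = 0
    curve = []
    for score, is_tp in dets:
        if is_tp:
            tps += 1
        else:
            fps += 1
        curve.append((fps / n_scans, tps / n_nodules))
    return curve
```

The "sensitivity at the chosen working point" is then the sensitivity of the curve point whose false-positive rate matches the chosen budget.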
Poster Session: Brain
Interactive segmentation in multi-modal brain imagery using a Bayesian transductive learning approach
Show abstract
Labeled training data in the medical domain is rare and expensive to obtain. The lack of labeled multimodal medical
image data is a major obstacle for devising learning-based interactive segmentation tools. Transductive learning (TL) or
semi-supervised learning (SSL) offers a workaround by leveraging unlabeled and labeled data to infer labels for the test
set given a small portion of label information. In this paper we propose a novel algorithm for interactive segmentation
using transductive learning and inference in conditional mixture naïve Bayes models (T-CMNB) with spatial
regularization constraints. T-CMNB is an extension of the transductive naïve Bayes algorithm [1, 20]. The multimodal
Gaussian mixture assumption on the class-conditional likelihood and spatial regularization constraints allow us to
explain more complex distributions required for spatial classification in multimodal imagery. To simplify the estimation
we reduce the parameter space by assuming naïve conditional independence between the feature space and the class
label. The naïve conditional independence assumption allows efficient inference of marginal and conditional
distributions for large scale learning and inference [19]. We evaluate the proposed algorithm on multimodal MRI brain
imagery using ROC statistics and provide preliminary results. The algorithm shows promising segmentation
performance with a sensitivity and specificity of 90.37% and 99.74% respectively and compares competitively to
alternative interactive segmentation schemes.
Objective tumour heterogeneity determination in gliomas
Show abstract
Diffusion weighted imaging (DWI) derived apparent diffusion coefficient (ADC) values are known to correlate
inversely to tumour cellularity in brain tumours. The average ADC value increases after successful chemotherapy, radiotherapy or a combination of both, and can therefore be used as a surrogate marker for treatment response.
Moreover, high and low malignant areas can be distinguished. The main purpose of our project was to develop
a software platform that enables the automated delineation and ADC quantification of different tumour sections
in a fast, objective, user independent manner. Moreover, the software platform allows for an analysis of the
probability density of the ADC in high and low malignant areas in ROIs drawn on conventional imaging to
create a ground truth. We tested an Expectation Maximization algorithm with a Gaussian mixture model to objectively determine tumour heterogeneity in gliomas, since the ADC values in the different areas follow approximately Gaussian distributions. Furthermore, the algorithm was initialized with seed points in the areas of the gross tumour volume, and the data indicated that an automatic initialization should be possible. Thus automated clustering of high and low
malignant areas and subsequent ADC determination within these areas is possible yielding reproducible ADC
measurements within heterogeneous gliomas.
Histogram-based classification with Gaussian mixture modeling for GBM tumor treatment response using ADC map
Show abstract
This study applied a Gaussian Mixture Model (GMM) to apparent diffusion coefficient (ADC) histograms to evaluate
glioblastoma multiforme (GBM) tumor treatment response using diffusion weighted (DW) MR images. ADC mapping,
calculated from DW images, has been shown to reveal changes in the tumor's microenvironment preceding
morphologic tumor changes. In this study, we investigated the effectiveness of features that represent changes from
pre- and post-treatment tumor ADC histograms to detect treatment response. The main contribution of this work is to
model the ADC histogram as the composition of two components, fitted by a GMM with the expectation maximization (EM)
algorithm. For both pre- and post-treatment scans taken 5-7 weeks apart, we obtained the tumor ADC histogram,
calculated the two-component features, as well as the other standard histogram-based features, and applied supervised
learning for classification. We evaluated our approach with data from 85 patients with GBM under chemotherapy, in
which 33 responded and 52 did not respond based on tumor size reduction. We compared AdaBoost and random
forests classification algorithms, using ten-fold cross validation, resulting in a best accuracy of 69.41%.
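The two-component decomposition of the ADC values can be sketched with a small EM loop for a 1-D Gaussian mixture (an illustrative re-implementation, not the study's code; the initialization and iteration count are our choices):

```python
import numpy as np

def fit_two_gaussians(x, iters=200):
    """EM for a two-component 1-D Gaussian mixture (illustrative sketch).
    Returns (weights, means, stds) of the two components."""
    x = np.asarray(x, float)
    # Crude initialization: quartiles split the two modes apart.
    m = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    s = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: per-sample responsibilities of each component
        pdf = (w / (s * np.sqrt(2 * np.pi))
               * np.exp(-0.5 * ((x[:, None] - m) / s) ** 2))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, standard deviations
        nk = r.sum(axis=0)
        w = nk / len(x)
        m = (r * x[:, None]).sum(axis=0) / nk
        s = np.sqrt((r * (x[:, None] - m) ** 2).sum(axis=0) / nk)
    return w, m, s
```

The fitted component means, spreads, and mixing weights can then serve as histogram features for the pre- vs post-treatment comparison.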
Segmentation propagation for the automated quantification of ventricle volume from serial MRI
Show abstract
Accurate ventricle volume estimates could potentially improve the understanding and diagnosis of
communicating hydrocephalus. Postoperative communicating hydrocephalus has been recognized in
patients with brain tumors where the changes in ventricle volume can be difficult to identify, particularly
over short time intervals. Because of the complex alterations of brain morphology in these patients, the
segmentation of brain ventricles is challenging. Our method evaluates ventricle size from serial brain MRI
examinations; we (i) combined serial images to increase SNR, (ii) automatically segmented this image to
generate a ventricle template using fast marching methods and geodesic active contours, and (iii)
propagated the segmentation using deformable registration of the original MRI datasets. By applying this
deformation to the ventricle template, serial volume estimates were obtained in a robust manner from
routine clinical images (0.93 overlap) and their variation analyzed.
Efficacy of texture, shape, and intensity features for robust posterior-fossa tumor segmentation in MRI
Show abstract
Our previous works suggest that fractal-based texture features are very useful for detection, segmentation and
classification of posterior-fossa (PF) pediatric brain tumor in multimodality MRI. In this work, we investigate and
compare efficacy of our texture features such as fractal and multifractional Brownian motion (mBm), and intensity
along with another useful level-set-based shape feature for PF tumor segmentation. We study feature selection and ranking using Kullback-Leibler Divergence (KLD) and subsequent tumor segmentation, all in an integrated
Expectation Maximization (EM) framework. We study the efficacy of all four features in both multimodality as well
as disparate MRI modalities such as T1, T2 and FLAIR. Both KLD feature plots and information theoretic entropy
measure suggest that mBm feature offers the maximum separation between tumor and non-tumor tissues in T1 and
FLAIR MRI modalities. The same metrics show that intensity feature offers the maximum separation between tumor
and non-tumor tissue in T2 MRI modality. The efficacies of these features are further validated in segmenting PF
tumor using both single modality and multimodality MRI for six pediatric patients with over 520 real MR images.
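If each feature's class-conditional distributions are approximated as Gaussians, the KLD used for feature ranking has a closed form. A sketch of such a ranking (a simplification of the paper's KLD analysis; the Gaussian approximation and the symmetrization are our assumptions):

```python
import numpy as np

def gaussian_kld(mu_p, sd_p, mu_q, sd_q):
    """Closed-form KL(p||q) between two univariate Gaussians."""
    return (np.log(sd_q / sd_p)
            + (sd_p ** 2 + (mu_p - mu_q) ** 2) / (2 * sd_q ** 2) - 0.5)

def rank_features(tumor, non_tumor):
    """tumor, non_tumor: (n_samples, n_features) arrays. Returns feature
    indices sorted by symmetrized KLD, largest separation first (sketch)."""
    scores = []
    for j in range(tumor.shape[1]):
        mp, sp = tumor[:, j].mean(), tumor[:, j].std()
        mq, sq = non_tumor[:, j].mean(), non_tumor[:, j].std()
        scores.append(gaussian_kld(mp, sp, mq, sq)
                      + gaussian_kld(mq, sq, mp, sp))
    return np.argsort(scores)[::-1]
```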
Poster Session: Breast Imaging
Interactive breast cancer segmentation based on relevance feedback: from user-centered design to evaluation
Show abstract
Computer systems play an important role in the medical imaging industry, since radiologists depend on them for visualization, interpretation, communication and archiving. In particular, computer-aided diagnosis (CAD) systems help in lesion
detection tasks. This paper presents the design and the development of an interactive segmentation tool for breast cancer
screening and diagnosis. The tool conception is based upon a user-centered approach in order to ensure that the
application is of real benefit to radiologists. The analysis of user expectations, workflow and decision-making practices
give rise to the need for an interactive reporting system based on the BIRADS, that would not only include the numerical
features extracted from the segmentation of the findings in a structured manner, but also support human relevance
feedback as well. This way, the numerical results from segmentation can be either validated by end-users or enhanced
thanks to domain experts' subjective interpretation. Such a domain-expert-centered system requires the segmentation to be sufficiently accurate and locally adapted, and the features to be carefully selected in order to best suit the user's knowledge and to be of use in enhancing segmentation. Improving segmentation accuracy with relevance feedback and
providing radiologists with a user-friendly interface to support image analysis are the contributions of this work. The
preliminary result is first the tool conception, and second the improvement of the segmentation precision.
Breast tissue classification in digital breast tomosynthesis images using texture features: a feasibility study
Show abstract
Mammographic breast density is a known breast cancer risk factor. Studies have shown the potential to automate breast
density estimation by using computerized texture-based segmentation of the dense tissue in mammograms. Digital
breast tomosynthesis (DBT) is a tomographic x-ray breast imaging modality that could allow volumetric breast density
estimation. We evaluated the feasibility of distinguishing between dense and fatty breast regions in DBT using
computer-extracted texture features. Our long-term hypothesis is that DBT texture analysis can be used to develop 3D
dense tissue segmentation algorithms for estimating volumetric breast density. DBT images from 40 women were
analyzed. The dense tissue area was delineated within each central source projection (CSP) image using a thresholding
technique (Cumulus, Univ. Toronto). Two (2.5 cm)² ROIs were manually selected: one within the dense tissue region and another within the fatty region. Corresponding (2.5 cm)³ ROIs were placed within the reconstructed DBT images.
Texture features, previously used for mammographic dense tissue segmentation, were computed. Receiver operating
characteristic (ROC) curve analysis was performed to evaluate feature classification performance. Different texture
features appeared to perform best in the 3D reconstructed DBT compared to the 2D CSP images. Fractal dimension was
superior in DBT (AUC=0.90), while contrast was best in CSP images (AUC=0.92). We attribute these differences to the
effects of tissue superimposition in CSP and the volumetric visualization of the breast tissue in DBT. Our results
suggest that novel approaches, different than those conventionally used in projection mammography, need to be
investigated in order to develop DBT dense tissue segmentation algorithms for estimating volumetric breast density.
Incorporating a segmentation routine for mammographic masses into a knowledge-based CADx approach
Show abstract
Computer-aided diagnosis (CADx) systems have the potential to support the radiologist in the complex task of discriminating benign and malignant types of breast lesions based on their appearance in mammograms. Previously, we proposed a knowledge-based CADx approach for mammographic mass lesions using case-based reasoning. The inputs of the system's reasoning process are features that are automatically extracted from regions of interest (ROIs) depicting mammographic masses. However, despite the fact that the shape of a mass as well as the characteristics of its boundary are highly discriminative attributes for its diagnosis, we had not included shape and boundary features based on an explicit segmentation of the mass from the background tissue in the previously proposed CADx approach. Hence, we present a novel method for the segmentation of mammographic masses in this work and describe how we have integrated this segmentation module into our existing CADx system. The approach is based on the observation that the optical density of a mass is usually high near its core and decreases towards its boundary. Because of tissue superposition and the broad variety of appearances of masses, their automatic segmentation is a difficult task. Thus, it is not surprising that even after many years of research on mass segmentation, no fully automatic approach that robustly solves the problem seems to exist. For this reason, we have included optional interactive modules in the proposed segmentation approach that allow fast and easy corrective interference by the radiologist in the segmentation process.
A fully automatic lesion detection method for DCE-MRI fat-suppressed breast images
Show abstract
Dynamic Contrast Enhanced MRI (DCE-MRI) today has a well-established role complementary to routine imaging techniques for breast cancer diagnosis such as mammography. Despite its undoubted clinical advantages, DCE-MRI data analysis is time-consuming, and Computer Aided Diagnosis (CAD) systems are required to help radiologists. Segmentation is one of the key steps of every CAD image-processing pipeline, but most available techniques require human interaction.
We here present the preliminary results of a fully automatic lesion detection method, capable of dealing with fat-suppression image acquisition sequences, which represent a challenge for image processing algorithms due to the low SNR. The method is based on four fundamental steps: registration to correct for motion artifacts; anatomical segmentation to discard anatomical structures located outside clinically interesting lesions; lesion detection to select enhanced areas; and false-positive reduction based on morphological and kinetic criteria. The testing set was composed of 13 cases and included 27 lesions (10 benign and 17 malignant) with diameter > 5 mm. The system achieves a per-lesion sensitivity of 93%, while yielding an acceptable number of false positives (26 on average). The results of our segmentation algorithm were verified by visual inspection, and qualitative comparison with a manual segmentation yielded encouraging results.
Three dimensional breast masses autodetection in cone beam breast CT
Show abstract
Cone Beam Breast CT (CBBCT) acquires 3D breast images without breast compression. More detailed and accurate information about breast lesions is revealed in CBBCT images. In our research, based on the observation that tumor masses are more concentrated than the surrounding tissues, we designed a weighted average filter and a three-dimensional Iris filter to operate on the three-dimensional images. The basic process is: after weighted average filtering and Iris filtering, thresholding is applied to extract suspicious regions. Next, after morphological processing, suspicious regions are sorted based on their average Iris filter responses and the top 10 candidates are selected as detection results.
The detection results are marked out and provided to radiologists as CAD system output. In our experiment, our method
detects 12 mass locations out of 14 pathology-proven malignant clinical cases.
Computer-aided detection of HER2 amplification status using FISH images: a preliminary study
Show abstract
The amplification status of human epidermal growth factor receptors 2 (HER2) genes is strongly associated
with clinical outcome in patients with breast cancer. The American Society of Clinical Oncology Tumor Marker
Guidelines Panel has recommended routine testing of HER2 status on all newly diagnosed metastatic breast cancers
since 2001. Although fluorescent in situ hybridization (FISH) technology provides superior accuracy as compared with
other approaches, current manual FISH analysis methods are somewhat subjective, tedious, and may introduce inter-reader variability. The goal of this preliminary study is to develop and test a computer-aided detection (CAD) scheme to
assess HER2 status using FISH images. Forty FISH images were selected for this study from our genetic laboratory. The
CAD scheme first applies an adaptive, iterative threshold method followed by a labeling algorithm to segment cells of
possible interest. A set of classification rules is then used to identify analyzable interphase cells and discard non-analyzable cells due to cell overlapping and/or other image staining debris (or artifacts). The scheme then maps the
detected analyzable cells onto two other gray scale images corresponding to the red and green color of the original image
followed by application of a raster scan and labeling algorithms to separately detect the HER-2/neu ("red") and CEP17
("green") FISH signals. A simple distance based criterion is applied to detect and merge split FISH signals within each
cell. The CAD scheme computes the ratio between independent "red" and "green" FISH signals of all analyzable cells
identified on an image. If the ratio is ≥ 2.0, the FISH image is assumed to have been acquired from a HER2+ case;
otherwise, the FISH image is assumed to have been acquired from a HER2- case. When we applied the CAD scheme to
the testing dataset, the average computed HER2 amplification ratios were 1.06±0.25 and 2.53±0.81 for HER2- and
HER2+ samples, respectively. The results show that the CAD scheme has the ability to automatically detect HER2 status
using FISH images. The success of CAD-guided FISH image analysis could result in a more objective, consistent, and
efficient approach in determining HER2 status of breast cancers.
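The ratio test at the heart of the scheme, counting "red" HER-2/neu and "green" CEP17 signals and calling HER2+ when the ratio is ≥ 2.0, can be sketched with simple thresholding and connected-component labeling (the paper's per-cell analysis and split-signal merging are omitted; names and threshold are ours):

```python
import numpy as np
from scipy.ndimage import label

def her2_ratio(red, green, threshold):
    """Ratio of 'red' (HER-2/neu) to 'green' (CEP17) FISH signal counts,
    each obtained by thresholding + connected-component labeling
    (simplified sketch of the counting step)."""
    _, n_red = label(red > threshold)
    _, n_green = label(green > threshold)
    return n_red / max(n_green, 1)

def her2_status(ratio):
    # The decision rule from the abstract: ratio >= 2.0 -> HER2+
    return "HER2+" if ratio >= 2.0 else "HER2-"
```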
Temporal variations in apparent breast lesion volume in dynamic contrast enhanced breast MR imaging
Show abstract
Dynamic Contrast Enhanced Breast MR Imaging (DCE BMRI) has emerged as a modality for breast cancer diagnosis. In
this modality a temporal sequence of volume images of the breasts is acquired, where a contrast agent is injected after
acquisition of the first 3D image. Since the introduction of the modality, research has been directed at the development
of computer-aided support for the diagnostic workup. This includes automatic segmentation of mass-like lesions, lesion
characterization, and lesion classification. Robustness, user-independence, and reproducibility of the results of
computerized methods are essential for such methods to be acceptable for clinical application.
A previously proposed and evaluated computerized lesion segmentation method has been further analyzed in this study.
The segmentation method uses as input a subtraction image (post-contrast - pre-contrast) and a user defined region of
interest (ROI). Previous evaluation studies investigated the robustness of the segmentation against variations in the user
selected ROI. Robustness of the method against variations in the image data itself has so far not been investigated. To fill
this gap is the purpose of this study.
In this study, the segmentation algorithm was applied to a series of subtraction images built from the pre-contrast volume
and all available post-contrast image volumes, successively. This provides a set of typically 4-5 delineations per lesion,
each based on a different phase of the dynamic sequence.
Analysis of the apparent lesion volumes derived from these delineations and comparison to manual delineations showed
that computerized segmentation is more robust and reproducible than manual segmentation, even if computer
segmentations are computed on subtraction images derived from different dynamic phases of the DCE MRI study, while
all manual segmentations of a lesion are derived from one and the same dynamic phase of the study.
Furthermore, it could be shown that the rate of apparent change of lesion volume over the course of a DCE MRI study is
significantly dependent on the lesion type (benign vs. malignant).
Principal component analysis, classifier complexity, and robustness of sonographic breast lesion classification
Show abstract
We investigated three classifiers for the task of distinguishing between benign and malignant breast lesions.
Classification performance was measured in terms of area under the ROC curve (AUC value). We compared linear
discriminant analysis (LDA), quadratic discriminant analysis (QDA) and a Bayesian neural net (BNN) with 5 hidden
units. For each lesion, 46 image features were extracted and principal component analysis (PCA) of these features was
used as classifier input. For each classifier, the optimal number of principal components was determined by performing
PCA within each step of a leave-one-case-out protocol for the training dataset (1125 lesions, 14% cancer prevalence)
and determining which number of components maximized the AUC value. Subsequently, each classifier was trained on
the training dataset and applied 'cold turkey' to an independent test set from a different population (341 lesions, 30%
cancer prevalence). The optimal number of principal components for LDA was 24, accounting for 97% of the variance
in the image features. For QDA and BNN, these numbers were 5 (70%) and 15 (93%), respectively. The LDA, QDA and
BNN obtained AUC values of 0.88, 0.85, and 0.91, respectively, in the leave-one-case-out analysis. In the independent
test - with AUCs of 0.88, 0.76, and 0.82 - only LDA achieved performance identical to that for the training set (lower bound of the 95% non-inferiority interval -0.0067), while the others performed significantly worse (p-values << 0.05). While
the more complex BNN classifier outperformed the others in leave-one-case-out of a large dataset, LDA was the robust
best-performer in an independent test.
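The PCA-then-classify pipeline can be sketched in a few lines: project onto the leading principal components, then train a two-class Fisher LDA on the reduced features. An illustration with our own toy data handling, not the study's implementation:

```python
import numpy as np

def pca_fit(X, n_components):
    """PCA via SVD of the centered data; returns (mean, components)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components]

def lda_fit(X, y):
    """Two-class Fisher LDA direction from a pooled covariance (sketch).
    Project with score = X @ w; class 1 gets the higher scores."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    S = np.cov(X[y == 0], rowvar=False) + np.cov(X[y == 1], rowvar=False)
    return np.linalg.solve(S, m1 - m0)
```

The number of components kept (24 for LDA, 5 for QDA, 15 for BNN in the abstract) would be chosen by maximizing AUC inside the leave-one-case-out loop, as the authors describe.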
A quantitative analysis of breast densities using cone beam CT images
Show abstract
Dense duct patterns are formed by desmoplastic reactions, as are most breast carcinomas. Hence, it has been suggested that the denser a breast is, the higher the likelihood of developing breast cancer. Consequently, breast density has been one of the suggested parameters for estimating the risk of developing breast cancer. Currently, the main technique to evaluate breast
densities is through mammograms. However, mammograms have the disadvantage of displaying overlapping structures
within the breast. Although there are efficient techniques to obtain breast densities from mammograms, mammography
can only provide a rough estimate because of the overlapping breast tissue. In this study, cone beam CT images were
utilized to evaluate the breast density of sixteen breast images. First, a breast phantom with known volumes representing
fatty, glandular and calcified tissues was designed to calibrate the system. Since cone beam CT provides 3D-isotropic
resolution images throughout the field of view, the issue of overlapping structures disappears, allowing greater accuracy
in evaluating the volumes of each different part of the phantom. Then, using cone beam CT breast images, the breast
density of eight patients was evaluated using a semi-automatic segmentation algorithm that differentiates between fatty,
glandular and calcified tissues. The results demonstrated that cone beam CT images provide a tool to evaluate the density of the whole breast more accurately. The results also demonstrated that using this semi-automatic segmentation algorithm improves the efficiency of classifying the breast into the four density categories recommended by the American College of Radiology.
Multi-modality computer-aided diagnosis system for axillary lymph node (ALN) staging: segmentation of ALN on ultrasound images
Show abstract
Our goal was to develop and evaluate a reliable segmentation method to delineate axillary lymph node (ALN) from
surrounding tissues on US images as the first step of building a multi-modality CADx system for staging ALN.
Ultrasound images of 24 ALNs from 18 breast cancer patients were used. An elliptical model algorithm was used to fit ALN boundaries using the following steps: reduce image noise, extract image edges using the Canny edge detector, select edge pixels and fit an ellipse by minimizing the quadratic error, and find the best-fitting ellipse based on RANSAC. The segmentation was qualitatively evaluated by 3 expert readers using 4 aspects: orientation of the long axis (OLA): within ±45 degrees, or off by more than ±45 degrees; overlap (OV): the fitted ellipse completely included the ALN, partially included the ALN, or missed the ALN; size (SZ): too small, good within a 20% error margin, or too large; and aspect ratio (AR): correct or wrong. Ninety-six percent of ALNs were correctly evaluated by all readers in terms of OLA and AR, 90.2% in terms of OV, and 86.11% in terms of SZ. Readers agreed that the segmentation was correct in all aspects in 70% of the cases. Due to the small sample size and the small variation among readers, we lack the statistical power to show a difference in accuracy among them.
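The ellipse-fitting step can be sketched as a least-squares conic fit to the selected edge pixels; the paper additionally uses RANSAC to choose the best-fitting ellipse among candidate subsets, which this sketch omits:

```python
import numpy as np

def fit_ellipse_center(xs, ys):
    """Least-squares conic fit a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
    through edge points; returns the fitted ellipse center (sketch)."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    D = np.column_stack([xs ** 2, xs * ys, ys ** 2, xs, ys, np.ones_like(xs)])
    # Null-space solution: the right singular vector with smallest
    # singular value minimizes ||D v|| subject to ||v|| = 1.
    _, _, Vt = np.linalg.svd(D)
    a, b, c, d, e, f = Vt[-1]
    # The center solves the gradient-zero condition of the conic.
    cx, cy = np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])
    return cx, cy
```

A RANSAC wrapper would repeatedly fit on random edge-pixel subsets and keep the conic with the most inliers.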
Fractal dimension and lacunarity analysis of mammographic patterns in assessing breast cancer risk related to HRT treated population: a longitudinal and cross-sectional study
Show abstract
Structural texture measures are used to address the aspect of breast cancer risk assessment in screening
mammograms. The current study investigates whether texture properties characterized by local Fractal Dimension (FD) and Lacunarity contribute to assessing breast cancer risk. FD represents the complexity, while Lacunarity characterizes the gappiness, of a fractal. Our cross-sectional case-control study includes
mammograms of 50 patients diagnosed with breast cancer in the subsequent 2-4 years and 50 matched
controls. The longitudinal double blind placebo controlled HRT study includes 39 placebo and 36 HRT
treated volunteers followed for two years. ROIs of the same dimension (250×150 pixels) were created behind the
nipple region on these radiographs. Box counting method was used to calculate the fractal dimension (FD)
and the Lacunarity. Paired t-test and Pearson correlation coefficient were calculated. It was found that there
were no differences between the cancer and control groups for FD (P=0.8) and Lacunarity (P=0.8) in the cross-sectional study, whereas the earlier published heterogeneity examination of radiographs (BC-HER) breast cancer risk score separated the groups (p=0.002). In the longitudinal study, FD decreased significantly
(P<0.05) in the HRT treated population while Lacunarity remained insignificant (P=0.2). FD is negatively
correlated to Lacunarity (-0.74, P<0.001), BIRADS (-0.34, P<0.001) and Percentage Density (-0.41,
P<0.001). FD is invariant to the mammographic texture change from control to cancer population but
marginally varying in HRT treated population. This study yields no evidence that lacunarity or FD are
suitable surrogate markers of mammographic heterogeneity as they neither pick up breast cancer risk, nor
show good sensitivity to HRT.
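The box-counting FD and gliding-box Lacunarity measures can be sketched as follows (our simplified variant with non-overlapping boxes; the ROI handling of the paper is omitted):

```python
import numpy as np

def box_count_fd(mask, sizes=(2, 4, 8, 16)):
    """Box-counting fractal dimension of a binary image: slope of
    log(box count) vs log(1/box size) (illustrative sketch)."""
    counts = []
    for s in sizes:
        h = mask.shape[0] // s * s
        w = mask.shape[1] // s * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

def lacunarity(img, s):
    """Lacunarity at box size s: second moment over squared first moment
    of the box-mass distribution (non-overlapping boxes; sketch)."""
    h = img.shape[0] // s * s
    w = img.shape[1] // s * s
    masses = img[:h, :w].reshape(h // s, s, w // s, s).sum(axis=(1, 3)).ravel()
    return masses.var() / masses.mean() ** 2 + 1.0
```

A completely filled region has FD ≈ 2 and Lacunarity 1; gappier textures push FD down and Lacunarity up, which is the inverse correlation the study reports.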
Preprocessing for improving CAD scheme performance for microcalcification detection based on mammography imaging quality parameters
Show abstract
Database characteristics can significantly affect the performance of a mammography CAD scheme. Hence, direct performance comparison among different CAD schemes is problematic, since a single scheme can produce different results depending on the set of chosen cases. Images in a database should follow a set of quality criteria covering everything from the imaging process up to the digital file. CAD schemes cannot be developed without a database to test their efficacy, but each database's particular characteristics may influence the processing scheme's performance. A possible solution is to use information on the imaging equipment characteristics. This work describes a preprocessing step intended to compensate for image degradation during the acquisition process, ensuring better uniformity of image quality. Thus, poor-quality images would be restored, providing some independence of the CAD scheme from the image source and allowing it to reach the best possible performance. Tests performed with sets of mammography images showed a 14% increase in sensitivity for microcalcification detection. Although this result was accompanied by a small increase in the false-positive rate, simple changes in technique parameters can provide the same improvement with a reduction in false-positive detections.
Comparison of breast parenchymal pattern on prior mammograms of breast cancer patients and normal subjects
Show abstract
We are investigating the feasibility of predicting the risk of developing breast cancer in future years by analysis of breast
parenchymal patterns on mammograms. A data set of CC-view mammograms from prior exams of 96 cancer patients
and 491 normal subjects was collected from patient files. The prior mammograms were obtained at least one
year before diagnosis for cancer patients, and normal subjects had at least two years of cancer-free
follow-up. The percent dense
area was estimated by automated gray-level histogram analysis. Texture features were extracted from a region of
interest in the retroareolar area. A feature space was constructed by using the percent dense area and texture features in
combination with patient age. A linear discriminant analysis (LDA) classifier with stepwise feature selection was
trained to evaluate whether the breast parenchyma of future cancer patients can be distinguished from that of normal
subjects in the selected feature space. The areas under receiver operating characteristic curves (Az) were 0.90±0.02,
0.86±0.02, and 0.69±0.02 for the classification of future cancer patients and normal subjects by using the breast that
would develop cancer (M-vs-N), the contralateral breast (CoM-vs-N) and the patient age only (C-vs-N-with-Age),
respectively. The difference in the Az values between the M-vs-N and CoM-vs-N approaches did not achieve statistical
significance (p=0.11) by using ROC analysis. The performances of M-vs-N and CoM-vs-N were significantly better
than that of C-vs-N-with-Age (p<0.05). Our preliminary result indicates that computerized mammographic
parenchymal analysis might be useful for predicting the elevated risk of developing breast cancer in future years.
Poster Session: Cardiovascular
Myocardial wall thickening from gated magnetic resonance images using Laplace's equation
Show abstract
The aim of our work is to present a robust 3D automated method for measuring regional myocardial thickening using cardiac magnetic resonance imaging (MRI) based on Laplace's equation. Multiple slices of the myocardium in short-axis orientation at end-diastolic and end-systolic phases were considered for this analysis. Automatically assigned 3D epicardial and endocardial boundaries were fitted
to short-axis and long-axis slices corrected for breath-hold-related misregistration, and final boundaries were edited by a cardiologist if required. Myocardial thickness was quantified at the two cardiac phases by computing the distances between the myocardial boundaries over the entire volume using Laplace's equation. The distance between the surfaces was found by computing normalized gradients that form a
vector field. The vector fields represent tangent vectors along field lines connecting both boundaries. 3D thickening measurements were transformed into polar map representation and 17-segment model
(American Heart Association) regional thickening values were derived. The thickening results were then compared with standard 17-segment 6-point visual scoring of wall motion/wall thickening (0=normal;
5=greatest abnormality) performed by a consensus of two experienced imaging cardiologists. Preliminary results on eight subjects indicated a strong negative correlation (r=-0.8, p<0.0001) between the average thickening obtained using Laplace and the summed segmental visual scores. Additionally, quantitative
ejection fraction measurements also correlated well with average thickening scores (r=0.72, p<0.0001). For
segmental analysis, we obtained an overall correlation of -0.55 (p<0.0001) with higher agreement along the
mid and apical regions (r=-0.6). In conclusion, solving Laplace's equation in 3D can be used to quantify myocardial thickening.
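The Laplace-based distance computation above can be sketched as follows. A toy 2D annulus stands in for the myocardial wall, with the endocardial boundary fixed at 0 and the epicardial boundary at 1; the grid size and iteration count are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def solve_laplace(u, wall, n_iter=2000):
    """Jacobi relaxation of Laplace's equation: only voxels inside the
    wall are updated, so the boundary values (0 on the endocardium,
    1 on the epicardium) act as Dirichlet conditions."""
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[wall] = avg[wall]
    return u

# Toy annulus: inner disk (r < 5) = endocardium, ring 5 <= r < 10 = wall,
# outside = epicardium.
n = 41
yy, xx = np.mgrid[0:n, 0:n]
r = np.hypot(yy - n // 2, xx - n // 2)
wall = (r >= 5) & (r < 10)
u = np.where(r >= 10, 1.0, 0.0)   # boundary conditions
u = solve_laplace(u, wall)

# Normalized gradient of the potential: tangent vectors of the field
# lines connecting the two boundaries; integrating along these field
# lines gives the wall thickness.
gy, gx = np.gradient(u)
norm = np.hypot(gy, gx) + 1e-12
gy, gx = gy / norm, gx / norm
```

The potential increases monotonically from the inner to the outer boundary, so field lines never cross and each endocardial point is paired with a unique epicardial point.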
Performance evaluation of an automatic segmentation method of cerebral arteries in MRA images by use of a large image database
Show abstract
The detection of cerebrovascular diseases such as unruptured aneurysm, stenosis, and occlusion is a major application of magnetic resonance angiography (MRA). However, their accurate detection is often difficult for radiologists. Therefore, several computer-aided diagnosis (CAD) schemes have been developed in order to assist radiologists with image interpretation. The purpose of this study was to develop a computerized method for segmenting cerebral arteries, which is an essential component of CAD schemes. For the segmentation of vessel regions, we first used a gray level transformation to calibrate voxel values. To adjust for variations in the positioning of patients, registration was subsequently employed to maximize the overlapping of the vessel regions in the target image and reference image. The vessel regions were then segmented from the background using gray-level thresholding and region growing techniques. Finally, rule-based schemes with features such as size, shape, and anatomical location were employed to distinguish between vessel regions and false positives. Our method was applied to 854 clinical cases obtained from two different hospitals. The segmentation of cerebral arteries was rated acceptable in 97.1% (829/854) of the MRA studies. Therefore, our computerized method would be useful in CAD schemes for the detection of cerebrovascular diseases in MRA images.
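The thresholding-plus-region-growing step can be illustrated with a minimal 2D sketch. The seed, threshold, and synthetic image below are assumptions for illustration; the paper's method operates on calibrated 3D MRA volumes:

```python
from collections import deque
import numpy as np

def region_grow(img, seed, threshold):
    """Grow a connected region from `seed`, adding 4-connected
    neighbours whose intensity exceeds `threshold`."""
    grown = np.zeros(img.shape, dtype=bool)
    if img[seed] <= threshold:
        return grown
    grown[seed] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and not grown[ny, nx] and img[ny, nx] > threshold):
                grown[ny, nx] = True
                q.append((ny, nx))
    return grown

# Synthetic slice: a bright 3x3 "vessel" patch on a dark background.
img = np.zeros((10, 10))
img[4:7, 4:7] = 100.0
mask = region_grow(img, seed=(5, 5), threshold=50.0)
```

Growing from a seed, rather than thresholding the whole volume, keeps the segmentation connected to the known vessel and excludes bright but disconnected structures.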
Poster Session: Colon
Characteristics of false positive findings in CT colonography CAD: a comparison of two fecal tagging regimens
Show abstract
The successful application of Computer Aided Detection schemes to CT Colonography depends not only on their
performances in terms of sensitivity and specificity, but also on the interaction with the radiologist, and thus ultimately
on factors such as the nature of CAD prompts and the reading paradigm. Fecal tagging is emerging as a widely accepted
technique for patient preparation, and patient-friendlier schemes are being proposed in an effort to increase compliance
to screening programs; the interaction between CAD and FT regimens should likewise be taken into account. In this
scenario, an analysis of the characteristics of CAD prompts is of paramount importance in order to guide further
research, both from clinical and technical viewpoints. The CAD scheme analyzed in this paper is essentially composed
of five steps: electronic cleansing, colon surface extraction, polyp candidate segmentation, pre-filtering of residual
tagged stool and classification of the generated candidates in true polyps vs. false alarms. False positives were divided
into six categories: untagged and tagged solid stool, haustral folds, extra-colonic candidates, ileocecal valve and
cleansing artifacts. A full cathartic preparation was compared with a semi-cathartic regimen with same-day fecal tagging,
which is characterized by higher patient acceptance but also higher inhomogeneity. The distribution of false positives at
segmentation reflects the quality of preparation, as more inhomogeneous tagging results in a higher number of untagged
solid stool and cleansing artifacts.
Haustral fold detection method for CT colonography based on difference filter along colon centerline
Show abstract
This paper presents a haustral fold detection method for 3D abdominal CT images. CT colonography (CTC),
or virtual colonoscopy, is a new diagnostic method for examining the inside of the colon. A CTC system can
visualize the interior of the colon from any viewpoint and viewing direction based on the CT images of a patient.
CT images in both the supine and prone positions are used for colon diagnosis to improve the sensitivity of
lesion detection. Registration of the supine and prone positions of a patient is needed to improve the efficiency of
diagnosis using CT images in the two positions. The positions of haustral folds can serve as landmarks to establish
correspondence between the two positions, so we present a haustral fold detection method for registration of
the supine and prone positions. Haustral folds protrude almost perpendicularly to the centerline of the colon.
We designed a new difference filter of CT values that can detect haustral folds. The filter computes differences
of CT values along the colon centerline and outputs high values at haustral folds. False-positive
elimination is performed using two feature values: the output value of the difference filter and the
volume of the connected component. In experiments using 12 cases of CT images, we confirmed that
the proposed method can detect haustral folds satisfactorily. In an evaluation using haustral folds ⪆ 3 mm in
height and thickness, the sensitivity of our method was 90.8% with 6.1 FPs/case.
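One plausible realization of such a difference filter, sampling CT values along the centerline and taking a central difference; the step size and synthetic intensity profile are illustrative assumptions:

```python
import numpy as np

def centerline_difference(vals, step=1):
    """Central difference of CT values sampled along the colon
    centerline: a fold crossing the path produces an abrupt intensity
    change and hence a high filter response."""
    vals = np.asarray(vals, dtype=float)
    out = np.zeros_like(vals)
    out[step:-step] = np.abs(vals[2 * step:] - vals[:-2 * step]) / 2.0
    return out

# Synthetic profile: luminal air (about -1000 HU) followed by the soft
# tissue of a fold (about 0 HU).
profile = np.array([-1000.0] * 5 + [0.0] * 5)
response = centerline_difference(profile)
```

The response peaks at the air/tissue transition, which is where a fold crosses the sampled path; thresholding it and measuring connected-component volume gives the two features used for false-positive elimination.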
Reduction of false positives by machine learning for computer-aided detection of colonic polyps
Show abstract
With the development of computer-aided detection of polyps (CADpolyp), various features have been extracted to detect
the initial polyp candidates (IPCs). In this paper, three approaches were utilized to reduce the number of false positives
(FPs): multiple linear regression (MLR) and two modified machine learning methods, i.e., a neural network (NN) and a
support vector machine (SVM), based on their own characteristics and specific learning purposes. Compared to MLR,
the two modified machine learning methods are much more sophisticated and well-adapted to the data provided. To
achieve the optimal sensitivity and specificity, raw features were pre-processed by principal component analysis
(PCA) in the hope of removing the second-order statistical correlation prior to any learning actions. The gain by the use
of PCA was evidenced by the collected 26 patient studies, which included 32 colonic polyps confirmed by both optical
colonoscopy (OC) and virtual colonoscopy (VC). The learning and testing results showed that the two modified
machine-learning methods can reduce the number of FPs by 48.9% (or 7.2 FPs per patient) and 45.3% (or 7.7 FPs per
patient), respectively, at 100% detection sensitivity in comparison with the traditional MLR method. Generally, more
features than necessary are stacked as input vectors to machine learning algorithms; dimensionality
reduction toward a more compact feature combination, i.e., how to determine the retained dimensionality via the PCA linear
transform, was considered and discussed in this paper. In addition, we proposed a new PCA-scaled data pre-processing
method to help reduce the FPs significantly. Finally, fROC (free-response receiver operating characteristic) curves
corresponding to three FP-reduction approaches were acquired, and comparative analysis was conducted.
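The PCA pre-processing step, removing second-order correlation from the raw feature vectors before learning, can be sketched as follows. Whitening to unit variance is one common variant and an assumption here, as is the toy data:

```python
import numpy as np

def pca_whiten(X, n_components):
    """Project centred features onto the top principal components and
    rescale each score to unit variance, so the output features carry
    no second-order (linear) correlation."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:n_components].T                 # principal scores
    Z /= S[:n_components] / np.sqrt(len(X) - 1)  # unit variance per axis
    return Z

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 3))  # correlated features
Z = pca_whiten(X, n_components=2)
```

Dropping the trailing components is the dimensionality-reduction step the abstract discusses; how many to retain is typically chosen from the fraction of variance the leading singular values explain.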
Adaptive remapping procedure for electronic cleansing of fecal tagging CT colonography images
Show abstract
Fecal tagging preparations are attracting notable interest as a way to increase patients' compliance to virtual colonoscopy.
Patient-friendly preparations, however, often result in less homogeneous tagging. Electronic cleansing algorithms should
be capable of dealing with such preparations and yield good quality 2D and 3D images; moreover, successful electronic
cleansing lays the basis for the application of Computer Aided Detection schemes. In this work, we present a cleansing
algorithm based on an adaptive remapping procedure, which is based on a model of how partial volume affects both the
air-tissue and the soft-tissue interfaces. Partial volume at the stool-soft tissue interface is characterized in terms of the
local characteristics of tagged regions, in order to account for variations in tagging intensity throughout the colon. The
two models are then combined in order to obtain a remapping equation relating the observed intensity to that of the
cleansed colon. The electronic cleansed datasets were then processed by a CAD scheme composed of three main steps:
colon surface extraction, polyp candidate segmentation through curvature-based features, and linear classifier-based
discrimination between true polyps and false alarms. Results obtained were compared with a previous version of the
cleansing algorithm, in which a simpler remapping procedure was used. Performances are increased both in terms of the
visual quality of the 2D cleansed images and 3D rendered volumes, and of CAD performances on a same-day FT virtual
colonoscopy dataset.
A comparison of blood vessel features and local binary patterns for colorectal polyp classification
Show abstract
Colorectal cancer is the third leading cause of cancer deaths in the United States of America for both women
and men. With early detection, the five-year survival rate can be as high as 90%. Polyps can be grouped
into three different classes: hyperplastic, adenomatous, and carcinomatous polyps. Hyperplastic polyps are
benign and are not likely to develop into cancer. Adenomas, on the other hand, are known to grow into cancer
(adenoma-carcinoma sequence). Carcinomas are fully developed cancers and can be easily distinguished from
adenomas and hyperplastic polyps. A recent narrow band imaging (NBI) study by Tischendorf et al. has shown
that hyperplastic polyps and adenomas can be discriminated by their blood vessel structure. We designed a
computer-aided system for the differentiation between hyperplastic and adenomatous polyps. Our development
aim is to provide the medical practitioner with an additional objective interpretation of the available image data
as well as a confidence measure for the classification. We propose classification features calculated on the basis
of the extracted blood vessel structure. We use the combined length of the detected blood vessels, the average
perimeter of the vessels and their average gray level value. We achieve a successful classification rate of more than
90% on 102 polyps from our polyp database. The classification results based on these features are compared to
the results of Local Binary Patterns (LBP). The results indicate that the implemented features are superior to
LBP.
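The LBP baseline compared against can be sketched as the basic 8-neighbour operator; this is the standard formulation, and the constant test patch is illustrative (the abstract does not specify the exact LBP variant used):

```python
import numpy as np

def lbp8(img):
    """Basic 8-neighbour local binary pattern: each pixel's neighbours
    are thresholded at the centre value and packed into an 8-bit code;
    the texture descriptor is then the histogram of these codes."""
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (nb >= centre).astype(np.uint8) << bit
    return codes

flat = lbp8(np.ones((4, 4)))  # constant patch: every neighbour >= centre
```

Because each neighbour is compared only against its own centre pixel, the codes are invariant to monotonic gray-level shifts, which is what makes LBP a natural texture baseline for endoscopic imagery.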
Combining heterogeneous features for colonic polyp detection in CTC based on semi-definite programming
Show abstract
Colon cancer is the second leading cause of cancer-related deaths in the United States. Computed tomographic colonography (CTC) combined with a computer aided detection system provides a feasible combination for improving colonic polyps detection and increasing the use of CTC for colon cancer screening. To distinguish true polyps from false positives, various features extracted from polyp candidates have been proposed. Most of these features try to capture the shape information of polyp candidates or neighborhood knowledge about the surrounding structures (fold, colon wall, etc.). In this paper, we propose a new set of shape descriptors for polyp candidates based on statistical curvature information. These features, called histogram of curvature features, are rotation, translation and scale invariant and can be treated as complementing our existing feature set. Then in order to make full use of the traditional features (defined as group A) and the new features (group B) which are highly heterogeneous, we employed a multiple kernel learning method based on semi-definite programming to identify an optimized classification kernel based on the combined set of features. We did leave-one-patient-out test on a CTC dataset which contained scans from 50 patients (with 90 6-9mm polyp detections). Experimental results show that a support vector machine (SVM) based on the combined feature set
and the semi-definite optimization kernel achieved higher FROC performance compared to SVMs using the two groups of features separately. At a false positive per patient rate of 7, the sensitivity on 6-9mm polyps using the combined features improved from 0.78 (Group A) and 0.73 (Group B) to 0.82 (p<=0.01).
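The abstract does not give the exact curvature statistic pooled into the histogram; one common choice with the stated invariances is the Koenderink shape index, so the sketch below is an assumption under that choice:

```python
import numpy as np

def shape_index_histogram(k1, k2, bins=8):
    """Histogram of the shape index SI = (2/pi)*arctan((k1+k2)/(k1-k2))
    over the vertices of a polyp candidate's surface.  SI depends only
    on the ratio of the principal curvatures k1 >= k2, so the descriptor
    is invariant to rotation, translation, and scale; normalizing the
    histogram makes it independent of candidate size."""
    k1, k2 = np.asarray(k1, float), np.asarray(k2, float)
    si = (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)
    hist, _ = np.histogram(si, bins=bins, range=(-1.0, 1.0))
    return hist / max(hist.sum(), 1)

# Cap of a sphere: k1 == k2 > 0 everywhere, so SI == 1 (the last bin).
h = shape_index_histogram(np.ones(50), np.ones(50))
```

A polyp-like cap concentrates mass near SI = 1 while a fold (ridge-like) surface concentrates near SI = 0.5, which is why such histograms complement the existing feature set.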
Classification of colon polyps in NBI endoscopy using vascularization features
Show abstract
The evolution of colon cancer starts with colon polyps. There are two different types of colon polyps, namely
hyperplasias and adenomas. Hyperplasias are benign polyps which are known not to evolve into cancer and,
therefore, do not need to be removed. By contrast, adenomas have a strong tendency to become malignant.
Therefore, they have to be removed immediately via polypectomy. For this reason, a method to reliably
differentiate adenomas from hyperplasias during a preventive medical endoscopy of the colon (colonoscopy) is highly
desirable. A recent study has shown that it is possible to distinguish both types of polyps visually by means
of their vascularization. Adenomas exhibit a large amount of blood vessel capillaries on their surface whereas
hyperplasias show only few of them. In this paper, we show the feasibility of computer-based classification of
colon polyps using vascularization features. The proposed classification algorithm consists of several steps: For
the critical part of vessel segmentation, we implemented and compared two segmentation algorithms. After a
skeletonization of the detected blood vessel candidates, we used the results as seed points for the Fast Marching
algorithm which is used to segment the whole vessel lumen. Subsequently, features are computed from this
segmentation which are then used to classify the polyps. In leave-one-out tests on our polyp database (56
polyps), we achieve a correct classification rate of approximately 90%.
Computer-aided detection of initial polyp candidates with level set-based adaptive convolution
Show abstract
In order to eliminate or weaken the interference between different topological structures on the colon wall, adaptive and
normalized convolution methods were used to compute the first and second order spatial derivatives of computed
tomographic colonography images, which is the beginning of various geometric analyses. However, the performance of
such methods greatly depends on the single-layer representation of the colon wall, which is called the starting layer (SL)
in the following text. In this paper, we introduce a level set-based adaptive convolution (LSAC) method to compute the
spatial derivatives, in which the level set method is employed to determine a more reasonable SL. The LSAC was
applied to a computer-aided detection (CAD) scheme to detect the initial polyp candidates, and experiments showed that
it benefits the CAD scheme in both the detection sensitivity and specificity as compared to our previous work.
Two methods of Haustral fold detection from computed tomographic virtual colonoscopy images
Show abstract
Virtual colonoscopy (VC) has gained popularity as a new colon diagnostic method over the last decade. VC is a new,
less invasive alternative to the commonly practiced optical colonoscopy for screening of colorectal polyps and cancer,
the second major cause of cancer-related deaths in industrialized nations. Haustral (colonic) folds serve as important landmarks
for virtual endoscopic navigation in the existing computer-aided-diagnosis (CAD) system. In this paper, we propose and
compare two different methods of haustral fold detection from volumetric computed tomographic virtual colonoscopy
images. The colon lumen is segmented from the input using modified region growing and fuzzy connectedness. The first
method for fold detection uses a level set that evolves on a mesh representation of the colon surface. The colon surface is
obtained from the segmented colon lumen using the Marching Cubes algorithm. The second method for fold detection,
based on a combination of heat diffusion and fuzzy c-means algorithm, is employed on the segmented colon volume.
Folds obtained on the colon volume using this method are then transferred to the corresponding colon surface. After
experimentation with different datasets, results are found to be promising. The results also demonstrate that the first
method has a tendency of slight under-segmentation while the second method tends to slightly over-segment the folds.
Poster Session: Lung
Acoustical markers for CAD-detected pulmonary nodules in chest CT: A way to avoid suggestion and distraction of radiologist's attention?
Show abstract
Purpose: To compare the influence of visual and acoustical CAD markers on radiologists' performance with regard
to suggestive and distractive effects.
Materials and methods: Ten radiologists analyzed 150 pictures of chest CT slices. Every picture contained a visual
CAD marker. 100 pictures showed one nodule: the CAD marker marked the nodule in 50 cases and a false-positive
finding (f.p.) in the other 50. The remaining 50 pictures showed no nodule but an f.p. marker. After 3 years, the
same images were presented to thirteen radiologists with only a sound as the CAD marker; 55 of the 150 images were
marked, 30 true positive and 25 f.p. Sensitivity and f.p. rate were calculated for both marker types, and the
significance of differences between sensitivities and f.p. rates was assessed by multivariate analysis of variance
(MANOVA).
Results: Without CAD, mean sensitivity and f.p. rate were 57.7% and .13. With a correctly placed optical or
acoustical marker, sensitivity increased to 75.6% and 63.1%, respectively. For an incorrectly placed marker, the
mean f.p. rate increased to .31 and .24, respectively. MANOVA showed that the marker's correctness highly
significantly influenced sensitivity (p<.001) and f.n. (p=.005). The type of marker showed no significant influence
on sensitivity (p=.26) or f.n. (p=.23), but did on f.p. (p<.001).
New work to be presented: Acoustical markers are a new means of increasing radiologists' awareness of the presence
of pulmonary nodules in CT scans, with a much weaker suggestive effect than optical markers.
Conclusion: We found an unexpectedly low distraction effect for misplaced CAD markers. A suggestive effect was
notable, especially for optical markers; however, acoustical markers yielded a smaller increase in sensitivity.
Identification of asymmetric pulmonary nodule growth using a moment-based algorithm
Show abstract
The growth rate of pulmonary nodules has been shown to be an indicator of malignancy, and previous work on pulmonary nodule characterization has suggested that the asymmetry of a nodule's shape may be correlated with malignancy. We have also observed that measurements in the axial direction on CT scans are less repeatable than measurements in-plane and this should be considered when making lesion size-change measurements. To address this, we present a method to measure the asymmetry of a pulmonary nodule's growth by the use of second-order central moments that are insensitive to z-direction variation. The difference in the moment ratios on each scan is used as a measure of the asymmetry of growth. To establish what level of difference is significant, the 95% confidence interval of the differences was determined on a zero-change dataset of 22 solid pulmonary nodules with repeat scans in the same session. This method was applied to a set of 47 solid, stable pulmonary nodules and a set of 49 solid, malignant nodules. The confidence interval established from the zero-change dataset was (-0.45, 0.38); nodules with differences outside this confidence interval are considered to have asymmetric growth. Of the 47 stable nodules, 12.8% (6/47) were found to have asymmetric growth compared to 24.5% (12/49) of malignant nodules. These preliminary results suggest that nodules with asymmetric growth can be identified.
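The z-insensitive moment measure can be sketched as follows: second-order central moments are computed from the in-plane voxel coordinates only, and the ratio of the principal moments is what gets compared between the two scans. The synthetic masks are illustrative assumptions:

```python
import numpy as np

def inplane_moment_ratio(mask):
    """Ratio of the principal second-order central moments of a 3D
    nodule mask, using only the in-plane (x, y) voxel coordinates so
    the measure is insensitive to z-direction (slice) variability."""
    _, y, x = np.nonzero(mask)
    cov = np.cov(np.vstack([y, x]))
    evals = np.sort(np.linalg.eigvalsh(cov))
    return evals[1] / evals[0]

sym = np.zeros((5, 12, 12), dtype=bool)
sym[:, 1:10, 1:10] = True    # in-plane symmetric nodule: ratio ~ 1
elong = np.zeros((5, 12, 12), dtype=bool)
elong[:, 4:7, 1:10] = True   # in-plane elongated nodule: ratio >> 1

# The growth-asymmetry measure is the change of this ratio between two
# scans; a change outside the zero-change confidence interval is
# flagged as asymmetric growth.
```

Because the z coordinate is discarded before the covariance is formed, slice-positioning differences between scans cannot inflate the ratio change.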
Evaluation of scoring accuracy for airway wall thickness
Show abstract
Bronchial wall thickening is commonly observed in airway diseases. One method often used to quantitatively evaluate
wall thickening in CT images is to estimate the ratio of the bronchial wall to the accompanying artery, or BWA ratio,
and then assign a severity score based on the ratio. Assessment by visual inspection is unfortunately limited to airways
perpendicular or parallel to the scanning plane. With high-resolution images from multi-detector CT scanners, it
becomes possible to assess airways in any orientation. We selected CT scans from 20 patients with mild to severe
COPD. A computer system automatically segmented each bronchial tree and measured the bronchial wall thicknesses.
Next, neighboring arteries were detected and measured to determine BWA ratios. A score characterizing the extent and
severity of wall thickening within each lobe was computed according to recommendations by Sheehan et al [1]. Two
experienced radiologists independently scored wall thickening using visual assessment. Spearman's rank correlation
showed a non-significant negative correlation (r=-0.1) between the computer and the reader average (p=0.4), while the
correlation between readers was significant at r=0.65 (p=0.001). We subsequently identified 24 lobes with high
discrepancies between visual and automated scoring. The readers re-examined those lobes and measured wall thickness
using electronic calipers on perpendicular cross sections, rather than visual assessment. Using this more objective
standard of wall thickness, the reader estimates of wall thickening increased to reach a significant positive correlation
with automated scoring of r=0.65 (p=0.001). These results indicate that subjectivity is an important problem with visual
evaluation, and that visual inspection may frequently underestimate disease extent and severity. Given that a manual
evaluation of all airways is infeasible in routine clinical practice, we argue that automated methods should be developed
and utilized.
Characterizing pulmonary nodule shape using a boundary-region approach
Show abstract
Using computer-calculated features to characterize the shape of suspicious lesions aims to assist the diagnosis of
pulmonary nodules; moreover, these computerized features have to be in agreement with radiologists' ratings
measuring their human perception of the nodules' shape. In the Lung Image Database Consortium (LIDC), there
exists strong disagreement among the radiologists on the ratings of the shape diagnostic characteristics as well as on
their drawn outlines of the extent of the nodules. Since shape is often considered a property of the object boundary
and the manual boundaries are not consistent among radiologists, new methods are necessary to, first, define region-based
boundaries that use radiologists' outlines as guides and, second, adapt computer-based shape measurements
to use regions rather than the traditional nodule segmentation outlines. This paper introduces a method for defining a
boundary region of interest by combining radiologist-drawn outlines (the pixel-set difference between the union and
intersection of all radiologist-drawn outlines for a specific nodule), then adapts a radial gradient indexing method for
use within image regions, and lastly predicts several composite ratings of sets of radiologists for shape-based
characteristics: spiculation, lobulation, and sphericity. The prediction of the majority (mode) rating significantly
outperforms earlier work on predicting the ratings of individual radiologists. The prediction of spiculation improves
to 53% from 41%, lobulation increases to 44% from 38%, and sphericity improves to 58% from 43%. A binary
version of the rating has high accuracy but poor Kappa agreement for all three shape characteristics.
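The boundary region of interest described above, the pixel-set difference between the union and the intersection of all radiologist-drawn outlines, is straightforward to compute from the filled outline masks (the toy reader masks below are illustrative):

```python
import numpy as np

def boundary_region(masks):
    """Pixel-set difference between the union and the intersection of a
    stack of per-reader binary masks: the pixels some but not all
    readers included, i.e. the region of boundary disagreement."""
    masks = np.asarray(masks, dtype=bool)
    return np.any(masks, axis=0) & ~np.all(masks, axis=0)

m1 = np.zeros((10, 10), dtype=bool); m1[2:6, 2:6] = True  # reader 1
m2 = np.zeros((10, 10), dtype=bool); m2[4:8, 4:8] = True  # reader 2
region = boundary_region([m1, m2])
```

Shape measures such as the radial gradient index are then evaluated only inside this region, rather than on any single reader's outline.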
Dissimilarity representations in lung parenchyma classification
Show abstract
A good problem representation is important for a pattern recognition system to be successful. The traditional approach to statistical pattern recognition is feature representation. More specifically, objects are represented by a number of features in a feature vector space, and classifiers are built in this representation. This is also the general trend in lung parenchyma classification in computed tomography (CT) images, where the features often are measures on feature histograms. Instead, we propose to build normal density based classifiers in dissimilarity representations for lung parenchyma classification. This allows for the classifiers to work on dissimilarities between objects, which might be a more natural way of representing lung parenchyma. In this context, dissimilarity is defined between CT regions of interest (ROI)s. ROIs are represented by their CT attenuation histogram and ROI dissimilarity is defined as a histogram dissimilarity measure between the attenuation histograms. In this setting, the full histograms are utilized according to the chosen histogram dissimilarity measure.
We apply this idea to classification of different emphysema patterns as well as normal, healthy tissue.
Two dissimilarity representation approaches as well as different histogram dissimilarity measures are considered. The approaches are evaluated on a set of 168 CT ROIs using normal density based classifiers, all showing good performance. Compared to using histogram dissimilarity directly as the distance in a k nearest neighbor classifier, which achieves a classification accuracy of 92.9%, the best dissimilarity representation based classifier is significantly better, with a classification accuracy of 97.0% (p = 0.046).
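The k-NN baseline on histogram dissimilarities can be sketched as follows. The L1 distance is just one of several plausible histogram dissimilarity measures (the paper compares multiple), and the toy attenuation histograms are illustrative:

```python
import numpy as np

def l1_dissimilarity(h1, h2):
    """L1 distance between two normalized attenuation histograms --
    one example of a histogram dissimilarity measure."""
    h1 = np.asarray(h1, float); h2 = np.asarray(h2, float)
    return float(np.abs(h1 / h1.sum() - h2 / h2.sum()).sum())

def knn_predict(train_hists, labels, query, k=1):
    """Classify an ROI by majority vote of its k nearest training ROIs
    under the histogram dissimilarity."""
    d = [l1_dissimilarity(query, h) for h in train_hists]
    votes = [labels[i] for i in np.argsort(d)[:k]]
    return max(set(votes), key=votes.count)

train = [[8, 1, 1], [7, 2, 1], [1, 1, 8], [1, 2, 7]]
labels = ["normal", "normal", "emphysema", "emphysema"]
pred = knn_predict(train, labels, query=[9, 1, 0], k=3)
```

The dissimilarity-representation classifiers go one step further: each ROI becomes a vector of its dissimilarities to a set of prototypes, and ordinary density-based classifiers are trained in that vector space.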
Computer-assisted detection (CAD) methodology for early detection of response to pharmaceutical therapy in tuberculosis patients
Show abstract
The chest x-ray radiological features of tuberculosis patients are well documented, and the radiological
features that change in response to successful pharmaceutical therapy can be followed with longitudinal
studies over time. The patients can also be classified as either responsive or resistant to pharmaceutical
therapy based on clinical improvement. We have retrospectively collected time series chest x-ray images of
200 patients diagnosed with tuberculosis receiving the standard pharmaceutical treatment. Computer
algorithms can be created to utilize image texture features to assess the temporal changes in the chest x-rays
of the tuberculosis patients. This methodology provides a framework for a computer-assisted detection
(CAD) system that may provide physicians with the ability to detect poor treatment response earlier in
pharmaceutical therapy. Early detection allows physicians to respond with more timely treatment alternatives
and improved outcomes. Such a system has the potential to increase treatment efficacy for millions of
patients each year.
Detection and classification of interstitial lung diseases and emphysema using a joint morphological-fuzzy approach
Show abstract
Multi-detector computed tomography (MDCT) has high accuracy and specificity on volumetrically capturing serial
images of the lung. It increases the capability of computerized classification for lung tissue in medical research. This paper
proposes a three-dimensional (3D) automated approach based on mathematical morphology and fuzzy logic for
quantifying and classifying interstitial lung diseases (ILDs) and emphysema. The proposed methodology is composed of
several stages: (1) an image multi-resolution decomposition scheme based on a 3D morphological filter is used to detect
and analyze the different density patterns of the lung texture. Then, (2) for each pattern in the multi-resolution
decomposition, six features are computed, for which fuzzy membership functions define a probability of association with
a pathology class. Finally, (3) for each pathology class, the probabilities are combined according to the weight assigned
to each membership function and two threshold values are used to decide the final class of the pattern. The proposed
approach was tested on 10 MDCT cases and the classification accuracy was: emphysema: 95%, fibrosis/honeycombing:
84% and ground glass: 97%.
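Steps (2) and (3) above — per-feature fuzzy memberships combined by weights into a class probability — can be sketched as follows. The triangular membership functions, feature values, and weights are illustrative assumptions (the paper uses six features per pattern):

```python
import numpy as np

def tri(a, b, c):
    """Triangular fuzzy membership function: rises from a, peaks at b,
    falls back to zero at c."""
    def f(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return f

def class_probability(features, memberships, weights):
    """Combine per-feature membership values into one class probability
    by a normalized weighted sum (step 3 of the pipeline); two
    thresholds on this value then decide the final class."""
    m = np.array([mf(v) for mf, v in zip(memberships, features)])
    w = np.asarray(weights, float)
    return float((w * m).sum() / w.sum())

# Two hypothetical features for an "emphysema" class: a density score
# (HU) and a normalized texture score.
memberships = [tri(-1000.0, -950.0, -900.0), tri(0.0, 0.5, 1.0)]
p = class_probability([-950.0, 0.5], memberships, weights=[2.0, 1.0])
```

The weighted sum keeps the probability in [0, 1] whenever the memberships are, so the two decision thresholds can be set on a fixed scale across pathology classes.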
Emphysema quantification from CT scans using novel application of diaphragm curvature estimation: comparison with standard quantification methods and pulmonary function data
Show abstract
Emphysema is a disease of the lungs that destroys the alveolar air sacs and induces long-term respiratory dysfunction.
CT scans allow for the imaging of the anatomical basis of emphysema and quantification of the underlying disease state.
Several measures have been introduced for the quantification of emphysema directly from CT data; most, however, are
based on the analysis of density information provided by the CT scans, which vary by scanner and can be hard to
standardize across sites and time. Given that one of the anatomical variations associated with the progression of
emphysema is the flatting of the diaphragm due to the loss of elasticity in the lung parenchyma, curvature analysis of the
diaphragm would provide information about emphysema from CT. Therefore, we propose a new, non-density based
measure of the curvature of the diaphragm that would allow for further quantification methods in a robust manner. To
evaluate the new method, 24 whole-lung scans were analyzed using the ratios of the lung height and diaphragm width to
diaphragm height as curvature estimates as well as using the emphysema index as comparison. Pearson correlation
coefficients showed a strong trend of several of the proposed diaphragm curvature measures to have higher correlations,
of up to r=0.57, with DLCO% and VA than did the emphysema index. Furthermore, we found emphysema index to have
only a 0.27 correlation to the proposed measures, indicating that the proposed measures evaluate different aspects of the
disease.
A hybrid lung and vessel segmentation algorithm for computer aided detection of pulmonary embolism
Show abstract
Advances in multi-detector technology have made CT pulmonary angiography (CTPA) a popular radiological tool for
pulmonary emboli (PE) detection. CTPA provides rich detail of lung anatomy and is a useful diagnostic aid in highlighting
even very small PE. However, analyzing hundreds of slices is laborious and time-consuming for the practicing radiologist,
and may also lead to misdiagnosis due to the presence of various PE look-alikes.
Computer-aided diagnosis (CAD) can be a potential second reader in providing key diagnostic information. Since
PE occurs only in vessel arteries, it is important to mark this region of interest (ROI) during CAD preprocessing. In
this paper, we present a new lung and vessel segmentation algorithm for extracting contrast-enhanced vessel ROI in
CTPA. Existing approaches to segmentation either provide only the larger lung area without highlighting the vessels or
are computationally prohibitive.
In this paper, we propose a hybrid lung and vessel segmentation which uses an initial lung ROI and determines the
vessels through a series of refinement steps. We first identify a coarse vessel ROI by finding the "holes" from the lung
ROI. We then use the initial ROI as seed-points for a region-growing process while carefully excluding regions which are
not relevant. The vessel segmentation mask covers 99% of the 259 PE from a real-world set of 107 CTPA. Further, our
algorithm increases the net sensitivity of a prototype CAD system by 5-9% across all PE categories in the training and
validation data sets. The average run-time of algorithm was only 100 seconds on a standard workstation.
Algorithm for lung cancer detection based on PET/CT images
Show abstract
The five-year survival rate of lung cancer is low, at about twenty-five percent; it is an intractable cancer in which three
out of four patients die within five years. Early detection and treatment of lung cancer are therefore important.
Recently, it has become possible to obtain CT and PET images at the same time because PET/CT devices have been
developed. PET/CT enables highly accurate cancer diagnosis because it combines quantitative shape information
from the CT image with the FDG distribution from the PET image. However, neither benign-malignant classification nor staging
of lung cancer using PET/CT images has yet been sufficiently established. In this study, we detect lung
nodules based on internal organs extracted from the CT image, and we also develop an algorithm which classifies
benign-malignant and metastatic or non-metastatic lung cancer using lung structure and the FDG distribution (one and two hours after
administering FDG). We apply the algorithm to 59 PET/CT images (malignant 43 cases [Ad:31, Sq:9, sm:3], benign 16
cases) and show the effectiveness of this algorithm.
Volume change determination of metastatic lung tumors in CT images using 3D template matching
Show abstract
The ability of a clinician to properly detect changes in the size of lung nodules over time is a vital element to both the
diagnosis of malignant growths and the monitoring of the response of cancerous lesions to therapy. We have developed
a novel metastasis sizing algorithm based on 3-D template matching with spherical tumor appearance models that were
created to match the expected geometry of the tumors of interest while accounting for potential spatial offsets of nodules
in the slice thickness direction. The spherical template that best-fits the overall volume of each lung metastasis was
determined through the optimization of the 3-D normalized cross-correlation coefficients (NCCC) calculated between
the templates and the nodules. A total of 17 different lung metastases were extracted manually from real patient CT
datasets and reconstructed in 3-D using spherical harmonics equations to generate simulated nodules for testing our
algorithm. Each metastasis 3-D shape was then subjected to 10%, 25%, 50%, 75% and 90% scaling of its volume to allow for 5 possible volume change combinations relative to the original size per each reconstructed nodule and inserted back into CT datasets with appropriate blurring and noise addition. When plotted against the true volume change, the nodule volume changes calculated by our algorithm for these 85 data points exhibited a high degree of accuracy (slope = 0.9817, R2 = 0.9957). Our results demonstrate that the 3-D template matching method can be an effective, fast, and accurate tool for automated sizing of metastatic tumors.
Knowledge based optimum feature selection for lung nodule diagnosis on thin section thoracic CT
Show abstract
An approach for optimum selection of lung nodule image characteristics in the feature domain is presented. This was
applied to the classification module in the CAD system with data extracted from 42 ROIs of the 38 cases with
an effective diameter of 3 to 8.5mm. 11 fundamental features were computed on the basis of dimensionality and image
characteristics. The relation between the represented features of the 4 radiologists and the computed features was
mapped using non-parametric correlation coefficients, multiple regression analysis and principal component analysis
(PCA). Malignant and benign nodules were classified based on an artificial neural network (ANN) to confirm the
hypothesis from the mapping analysis. From the computed features and the radiologist's annotations, correlation
coefficients between 0.2693 and 0.5178 were obtained. A combination of analyses namely regression, PCA, correlation
and ANN were used to select optimum features. This resulted in F-test values of 0.821 and 0.643 for malignant and
benign nodules respectively. The study of the relationship between the features and the weightage towards each of the
representative classes resulted in optimum feature input for a CAD system. A composite analysis derived from
correlation, PCA, multiple regression and the classification algorithm, collectively termed the knowledge base, was
used to arrive at an "optimum" set of lung nodule features.
Classification of patterns for diffuse lung diseases in thoracic CT images by AdaBoost algorithm
Show abstract
CT images are considered effective for differential diagnosis of diffuse lung diseases. However, the
diagnosis of diffuse lung diseases is a difficult problem for radiologists, because these diseases show a variety of
patterns on CT images. So, our purpose is to construct a computer-aided diagnosis (CAD) system for
classification of patterns for diffuse lung diseases in thoracic CT images, which gives both quantitative and
objective information as a second opinion, to decrease the burdens of radiologists. In this article, we propose a
CAD system based on the conventional pattern recognition framework, which consists of two sub-systems;
one is the feature extraction part and the other is the classification part. In the feature extraction part, we adopted a
Gabor filter, which can extract patterns such as local edges and segments from input textures, as the feature
extractor for CT images. In the recognition part, we used a boosting method. Boosting is a voting
method that combines several classifiers to improve decision precision. We applied the AdaBoost algorithm as the boosting
method. We first evaluated each component classifier and confirmed that, individually, they did not perform
well enough to classify patterns of diffuse lung diseases. Next, we evaluated the performance of the
boosting method. As a result, our system improved the classification rate of patterns of
diffuse lung diseases.
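The boosting-by-voting idea can be sketched as a minimal AdaBoost over decision stumps. The stump learner and round count are illustrative choices, not the authors' component classifiers (which operate on Gabor filter responses).

```python
import numpy as np

def train_adaboost(X, y, n_rounds=10):
    """Minimal AdaBoost with decision stumps; labels y must be in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)          # sample weights, re-focused each round
    stumps = []
    for _ in range(n_rounds):
        best = None
        # Exhaustively search feature, threshold, and polarity for the stump
        # with lowest weighted error on the current weights.
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] >= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = max(err, 1e-12)        # avoid log(0) on perfect stumps
        alpha = 0.5 * np.log((1 - err) / err)
        pred = sign * np.where(X[:, j] >= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)   # up-weight misclassified samples
        w /= w.sum()
        stumps.append((alpha, j, thr, sign))
    return stumps

def predict(stumps, X):
    """Weighted vote of all stumps, thresholded at zero."""
    score = sum(a * s * np.where(X[:, j] >= t, 1, -1) for a, j, t, s in stumps)
    return np.where(score >= 0, 1, -1)
```

Each weak classifier votes with weight alpha; the strong classifier is the sign of the weighted vote, which is the decision-precision improvement the abstract describes.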
A novel scheme for detection of diffuse lung disease in MDCT by use of statistical texture features
Show abstract
The successful development of high performance computer-aided-diagnostic systems has potential to assist radiologists
in the detection and diagnosis of diffuse lung disease. We developed in this study an automated scheme for the detection
of diffuse lung disease on multi-detector computed tomography (MDCT). Our database consisted of 68 CT scans, which
included 31 normal and 37 abnormal cases with three kinds of abnormal patterns, i.e., ground glass opacity, reticular,
and honeycombing. Two radiologists first selected the CT scans with abnormal patterns based on clinical reports. The
areas that included specific abnormal patterns in the selected CT images were then delineated as reference standards by
an expert chest radiologist. To detect abnormal cases with diffuse lung disease, the lungs were first segmented from the
background in each slice by use of a texture analysis technique, and then divided into contiguous volumes of interest
(VOIs) with a 64×64×64 matrix size. For each VOI, we calculated many statistical texture features, including the mean
and standard deviation of CT values, features determined from the run length matrix, and features from the co-occurrence
matrix. A quadratic classifier was employed for distinguishing between normal and abnormal VOIs by use of
a leave-one-case-out validation scheme. A rule-based criterion was employed to further determine whether a case was
normal or abnormal. For the detection of abnormal VOIs, our CAD system achieved a sensitivity of 86% and a
specificity of 90%. For the detection of abnormal cases, it achieved a sensitivity of 89% and a specificity of 90%. This
preliminary study indicates that our CAD system would be useful for the detection of diffuse lung disease.
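One of the texture feature families mentioned, the co-occurrence matrix, can be sketched as follows (shown in 2-D for brevity; the study computes features over 64×64×64 VOIs). The single offset and the two derived features are illustrative choices.

```python
import numpy as np

def glcm(img, levels, offset=(0, 1)):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    dr, dc = offset
    h, w = img.shape
    m = np.zeros((levels, levels))
    for r in range(h):
        for c in range(w):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < h and 0 <= c2 < w:
                m[img[r, c], img[r2, c2]] += 1
    return m / m.sum()

def glcm_features(p):
    """Two classical Haralick-style features from the normalized matrix."""
    i, j = np.indices(p.shape)
    return {"energy": (p ** 2).sum(), "contrast": (p * (i - j) ** 2).sum()}
```

A uniform region yields maximal energy and zero contrast, while a checkerboard yields high contrast, which is the kind of separation a quadratic classifier can exploit.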
Improvement of computational efficiency using a cascade classification scheme for the classification of diffuse infiltrative lung disease on HRCT
Show abstract
In this paper, a cascade classification scheme was proposed to improve computational efficiency in lung parenchyma
quantification in HRCT images. The proposed cascade classification scheme comprises four steps: cost-based class-specific
feature selection, class-specific classifier training, classifier ordering, and cascade feature extraction and classification. In the
first step, feature sets were determined by sequential forward floating selection (SFFS) using performance improvement
to extraction cost ratio criterion. Then classifiers were trained to separate each specific class from all other classes. Using
the accuracies of those classifiers, the order of classification was determined, from the highest accuracy to the lowest.
To quantify new images, feature extraction and classification were sequentially repeated. The impact of using the
proposed cascade classification scheme is evaluated in terms of computational cost and classification accuracy. For
automated classification, a support vector machine (SVM) was implemented. To assess the performance and cross-validate
the system, ten-fold cross-validation was used. In the experimental results, the computational cost was reduced
by 46% and the overall accuracy was 92.04%, which is not significantly different from that of the conventional
method. This work shows that, in our classification problem, using the proposed cascade classification scheme can
reduce the computational cost in the feature extraction while maintaining the classification accuracy.
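The ordering-and-early-exit idea behind the cascade can be sketched as follows; the class labels, accuracies, and density-threshold predicates are purely hypothetical stand-ins for the trained one-vs-rest SVMs.

```python
def cascade_classify(sample, classifiers):
    """Run one-vs-rest classifiers in decreasing order of training accuracy,
    extracting each classifier's features only if it is actually reached.
    classifiers: list of (label, accuracy, feature_fn, predict_fn)."""
    ordered = sorted(classifiers, key=lambda c: c[1], reverse=True)
    for label, _acc, feature_fn, predict_fn in ordered:
        if predict_fn(feature_fn(sample)):   # one-vs-rest decision
            return label                      # early exit saves feature cost
    return "unclassified"
```

Because most patterns are caught by the first (most accurate) classifiers, the features needed by later classifiers are never extracted for them, which is where the reported 46% cost reduction comes from.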
Automated detection of presence of mucus foci in airway diseases: preliminary results
Show abstract
Chronic Obstructive Pulmonary Disease (COPD) is often characterized by partial or complete obstruction of airflow in
the lungs. This can be due to airway wall thickening and retained secretions, resulting in foci of mucoid impactions.
Although radiologists have proposed scoring systems to assess extent and severity of airway diseases from CT images,
these scores are seldom used clinically due to impracticality. The high level of subjectivity from visual inspection and
the sheer number of airways in the lungs mean that automation is critical in order to realize accurate scoring. In this
work we assess the feasibility of including an automated mucus detection method in a clinical scoring system. Twenty
high-resolution datasets of patients with mild to severe bronchiectasis were randomly selected, and used to test the
ability of the computer to detect the presence or absence of mucus in each lobe (100 lobes in all). Two experienced
radiologists independently scored the presence or absence of mucus in each lobe based on the visual assessment method
recommended by Sheehan et al [1]. These results were compared with an automated method developed for mucus plug
detection [2]. Results showed agreement between the two readers on 44% of the lobes for presence of mucus, 39% of
lobes for absence of mucus, and discordant opinions on 17 lobes. For 61 lobes where 1 or both readers detected mucus,
the computer sensitivity was 75.4%, the specificity was 69.2%, and the positive predictive value (PPV) was 79.3%. Six
computer false positives were a-posteriori reviewed by the experts and reassessed as true positives, yielding results of
77.6% sensitivity, 81.8% for specificity, and 89.6% PPV.
Poster Session: Microscopy
Toward translational incremental similarity-based reasoning in breast cancer grading
Show abstract
One of the fundamental issues in bridging the gap between the proliferation of Content-Based Image Retrieval (CBIR)
systems in the scientific literature and the deficiency of their usage in the medical community is the characteristic
of CBIR of accessing information by images and/or text only. Yet, the way physicians reason about patients leads
intuitively to a case representation. Hence, a proper solution to overcome this gap is to consider a CBIR approach
inspired by Case-Based Reasoning (CBR), which naturally introduces medical knowledge structured by cases.
Moreover, in a CBR system, the knowledge is incrementally added and learned. The purpose of this study is to initiate a
translational solution from CBIR algorithms to clinical practice, using a CBIR/CBR hybrid approach. Therefore, we
advance the idea of a translational incremental similarity-based reasoning (TISBR), using combined CBIR and CBR
characteristics: incremental learning of medical knowledge, medical case-based structure of the knowledge (CBR),
image usage to retrieve similar cases (CBIR), similarity concept (central for both paradigms). For this purpose, three
major axes are explored: the indexing, the cases retrieval and the search refinement, applied to Breast Cancer Grading
(BCG), a powerful breast cancer prognosis exam. The effectiveness of this strategy is currently evaluated over cases
provided by the Pathology Department of Singapore National University Hospital, for the indexing. With its current
accuracy, TISBR launches interesting perspectives for complex reasoning in future medical research, opening the way to
a better knowledge traceability and a better acceptance rate of computer-aided diagnosis assistance among practitioners.
Segmentation based microscope autofocusing for blood smears
Show abstract
Focusing is a critical step in microscope observation of slides. Autoscanning microscopes have to perform
the autofocus function accurately to record high-quality images that may later be analyzed using sophisticated
algorithms. Video-based autofocus has become a viable option due to the availability of high computing power
and cameras that provide high-resolution images. These methods use a focus function that attains a peak value
when the in-focus image is encountered. In this paper a novel focus function based on the shape
of the objects being observed is proposed. A segmentation based approach to autofocus blood smears where the
primary objects being observed, the red blood cells (RBC), are circular is presented. The scheme first segments
the RBCs and then determines the valid RBCs by using an area criterion. The average form factor and eccentricity
values of the valid RBCs in a given frame are computed. A plot of these parameters vs. the frame number will
result in a peak and trough respectively for the in focus image. Results presented for various data sets show that
form factor is a suitable autofocus function.
Segmentation of histological structures for fractal analysis
Show abstract
Pathologists examine histology sections to make diagnostic and prognostic assessments regarding cancer based on
deviations in cellular and/or glandular structures. However, these assessments are subjective and exhibit some degree of
observer variability. Recent studies have shown that fractal dimension (a quantitative measure of structural complexity)
has proven useful for characterizing structural deviations and exhibits great potential for automated cancer diagnosis and
prognosis. Computing fractal dimension relies on accurate image segmentation to capture the architectural complexity
of the histology specimen. For this purpose, previous studies have used techniques such as intensity histogram analysis
and edge detection algorithms. However, care must be taken when segmenting pathologically relevant structures since
improper edge detection can result in an inaccurate estimation of fractal dimension. In this study, we established a
reliable method for segmenting edges from grayscale images. We used a Koch snowflake, an object of known fractal
dimension, to investigate the accuracy of various edge detection algorithms and selected the most appropriate algorithm
to extract the outline structures. Next, we created validation objects ranging in fractal dimension from 1.3 to 1.9
imitating the size, structural complexity, and spatial pixel intensity distribution of stained histology section images. We
applied increasing intensity thresholds to the validation objects to extract the outline structures and observe the effects on
the corresponding segmentation and fractal dimension. The intensity threshold yielding the maximum fractal dimension
provided the most accurate fractal dimension and segmentation, indicating that this quantitative method could be used in
an automated classification system for histology specimens.
A boosted distance metric: application to content based image retrieval and classification of digitized histopathology
Show abstract
Distance metrics are often used as a way to compare the similarity of two objects, each represented by a set of
features in high-dimensional space. The Euclidean metric is a popular distance metric, employed for a variety of
applications. Non-Euclidean distance metrics have also been proposed, and the choice of distance metric for any
specific application or domain is a non-trivial task. Furthermore, most distance metrics treat each dimension or
object feature as having the same relative importance in determining object similarity. In many applications,
such as in Content-Based Image Retrieval (CBIR), where images are quantified and then compared according
to their image content, it may be beneficial to utilize a similarity metric where features are weighted according
to their ability to distinguish between object classes. In the CBIR paradigm, every image is represented as
a vector of quantitative feature values derived from the image content, and a similarity measure is applied
to determine which of the database images is most similar to the query. In this work, we present a boosted
distance metric (BDM), where individual features are weighted according to their discriminatory power, and
compare the performance of this metric to 9 other traditional distance metrics in a CBIR system for digital
histopathology. We apply our system to three different breast tissue histology cohorts - (1) 54 breast histology
studies corresponding to benign and cancerous images, (2) 36 breast cancer studies corresponding to low and
high Bloom-Richardson (BR) grades, and (3) 41 breast cancer studies with high and low levels of lymphocytic
infiltration. Over all 3 data cohorts, the BDM performs better compared to 9 traditional metrics, with a greater
area under the precision-recall curve. In addition, we performed SVM classification using the BDM along with
the traditional metrics, and found that the boosted metric achieves a higher classification accuracy (over 96%)
in distinguishing between the tissue classes in each of 3 data cohorts considered. The 10 different similarity
metrics were also used to generate similarity matrices between all samples in each of the 3 cohorts. For each
cohort, each of the 10 similarity matrices were subjected to normalized cuts, resulting in a reduced dimensional
representation of the data samples. The BDM resulted in the best discrimination between tissue classes in the
reduced embedding space.
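A feature-weighted distance of this kind can be sketched as follows. The abstract does not specify how the boosting-derived weights are computed, so a per-feature Fisher score is used here as a stand-in for each feature's discriminatory power.

```python
import numpy as np

def fisher_weights(X, y):
    """Per-feature discriminability: between-class over within-class variance,
    normalized to sum to 1 (a stand-in for boosting-derived weights)."""
    w = []
    for j in range(X.shape[1]):
        a, b = X[y == 0, j], X[y == 1, j]
        within = a.var() + b.var()
        w.append((a.mean() - b.mean()) ** 2 / within if within > 0 else 0.0)
    w = np.array(w)
    return w / w.sum()

def weighted_distance(u, v, w):
    """Euclidean distance with per-dimension weights: non-discriminative
    features contribute little to the similarity judgment."""
    return np.sqrt((w * (u - v) ** 2).sum())
```

With weights learned on the training cohort, two images differing only in noisy, non-discriminative features end up close together, which is what drives the improved precision-recall behavior.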
Finding regions of interest in pathological images: an attentional model approach
Show abstract
This paper introduces an automated method for finding diagnostic regions-of-interest (RoIs) in histopathological
images. This method is based on the cognitive process of visual selective attention that arises during a
pathologist's image examination. Specifically, it emulates the first examination phase, which consists in a coarse
search for tissue structures at a "low zoom" to separate the image into relevant regions [1]. The pathologist's
cognitive performance depends on inherent image visual cues (bottom-up information) and on acquired clinical
medicine knowledge (top-down mechanisms). Our model of the pathologist's visual attention integrates these two
components. The selected bottom-up information includes local low-level features such as intensity, color, orientation
and texture information. Top-down information is related to the anatomical and pathological structures
known by the expert. A coarse approximation to these structures is achieved by an oversegmentation algorithm,
inspired by psychological grouping theories. The algorithm parameters are learned from an expert pathologist's
segmentation. Top-down and bottom-up integration is achieved by calculating a unique index for each of the
low level characteristics inside the region. Relevancy is estimated as a simple average of these indexes. Finally,
a binary decision rule defines whether or not a region is interesting. The method was evaluated on a set of 49
images using a perceptually-weighted evaluation criterion, finding a quality gain of 3 dB when compared to a
classical bottom-up model of attention.
Poster Session: Prostate
Automatic diagnosis for prostate cancer using run-length matrix method
Show abstract
Prostate cancer is the most common type of cancer and the second leading cause of cancer death among men in the US [1].
Quantitative assessment of prostate histology provides potential automatic classification of prostate lesions and
prediction of response to therapy. Traditionally, prostate cancer diagnosis is made by the analysis of prostate-specific
antigen (PSA) levels and histopathological images of biopsy samples under microscopes. In this application, we utilize a
texture analysis method based on the run-length matrix for identifying tissue abnormalities in prostate histology. A tissue
sample was collected from a radical prostatectomy, H&E fixed, and assessed by a pathologist as normal tissue or
prostatic carcinoma (PCa). The sample was then digitized at 50× magnification. We divided the digitized
image into sub-regions of 20 × 20 pixels and classified each sub-region as normal or PCa by a texture analysis method.
In the texture analysis, we computed texture features for each of the sub-regions based on the Gray-Level Run-Length
Matrix (GL-RLM). Those features include LGRE, HGRE and RPC from the run-length matrix, and the mean and standard
deviation of the pixel intensity. We utilized a feature selection algorithm to select a set of effective features and used a
multi-layer perceptron (MLP) classifier to distinguish normal from PCa. In total, the whole histological image was
divided into 42 PCa and 6280 normal regions. Three-fold cross validation results show that the proposed method
achieves an average classification accuracy of 89.5% with a sensitivity and specificity of 90.48% and 89.49%,
respectively.
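The GL-RLM features named above can be sketched as follows (horizontal runs only; gray levels are re-indexed from 1 so the low-gray emphasis is finite). This is an illustrative reading of the standard definitions, not the authors' code.

```python
import numpy as np

def run_length_matrix(img, levels):
    """Horizontal gray-level run-length matrix: rlm[g, l-1] counts runs of
    gray level g with run length l."""
    h, w = img.shape
    rlm = np.zeros((levels, w))
    for row in img:
        g, length = row[0], 1
        for v in row[1:]:
            if v == g:
                length += 1
            else:
                rlm[g, length - 1] += 1
                g, length = v, 1
        rlm[g, length - 1] += 1          # close the final run of the row
    return rlm

def rlm_features(rlm, n_pixels):
    g = np.arange(1, rlm.shape[0] + 1)[:, None]   # gray levels from 1
    n_runs = rlm.sum()
    return {
        "LGRE": (rlm / g ** 2).sum() / n_runs,    # low gray-level run emphasis
        "HGRE": (rlm * g ** 2).sum() / n_runs,    # high gray-level run emphasis
        "RPC": n_runs / n_pixels,                 # run percentage
    }
```

Homogeneous tissue produces few long runs (low RPC), while finely textured carcinoma produces many short runs (high RPC), which is what makes these features informative for the MLP classifier.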
Integrating structural and functional imaging for computer assisted detection of prostate cancer on multi-protocol in vivo 3 Tesla MRI
Show abstract
Screening and detection of prostate cancer (CaP) currently lacks an image-based protocol which is reflected in the
high false negative rates currently associated with blinded sextant biopsies. Multi-protocol magnetic resonance
imaging (MRI) offers high resolution functional and structural data about internal body structures (such as
the prostate). In this paper we present a novel comprehensive computer-aided scheme for CaP detection from
high resolution in vivo multi-protocol MRI by integrating functional and structural information obtained via
dynamic-contrast enhanced (DCE) and T2-weighted (T2-w) MRI, respectively. Our scheme is fully-automated
and comprises (a) prostate segmentation, (b) multimodal image registration, and (c) data representation and
multi-classifier modules for information fusion. Following prostate boundary segmentation via an improved active
shape model, the DCE/T2-w protocols and the T2-w/ex vivo histological prostatectomy specimens are brought
into alignment via a deformable, multi-attribute registration scheme. T2-w/histology alignment allows for the
mapping of true CaP extent onto the in vivo MRI, which is used for training and evaluation of a multi-protocol
MRI CaP classifier. The meta-classifier used is a random forest constructed by bagging multiple decision tree
classifiers, each trained individually on T2-w structural, textural and DCE functional attributes. 3-fold classifier
cross validation was performed using a set of 18 images derived from 6 patient datasets on a per-pixel basis. Our
results show that CaP detection obtained from the integration of T2-w structural textural data and
DCE functional data (area under the ROC curve of 0.815) significantly outperforms detection based on either
of the individual modalities (0.704 for T2-w and 0.682 for DCE). It was also found that a meta-classifier trained
directly on integrated T2-w and DCE data (data-level integration) significantly outperformed a decision-level
meta-classifier, constructed by combining the classifier outputs from the individual T2-w and DCE channels.
Automated detection of prostate cancer using wavelet transform features of ultrasound RF time series
Show abstract
The aim of this research was to investigate the performance of wavelet transform based features of ultrasound
radiofrequency (RF) time series for automated detection of prostate cancer tumors in transrectal ultrasound images.
Sequential frames of RF echo signals from 35 extracted prostate specimens were recorded in parallel planes, while the
ultrasound probe and the tissue were fixed in position in each imaging plane. The sequence of RF echo signal samples
corresponding to a particular spot in tissue imaging plane constitutes one RF time series. Each region of interest (ROI) of
ultrasound image was represented by three groups of features of its time series, namely, wavelet, spectral and fractal
features.
Wavelet transform approximation and detail sequences of each ROI were averaged and used as wavelet features. The
average value of the normalized spectrum in four quarters of the frequency range along with the intercept and slope of a
regression line fitted to the values of the spectrum versus normalized frequency plot formed six spectral features. The fractal
dimension (FD) of each RF time series was computed based on Higuchi's approach. A support vector machine
(SVM) classifier was used to classify the ROIs.
The results indicate that combining wavelet coefficient based features with previously proposed spectral and fractal
features of RF time series data increases the area under the ROC curve from 93.1% to 95.0%.
Furthermore, the accuracy, sensitivity, and specificity increase to 91.7%, 86.6%, and 94.7%, respectively, from the
85.7%, 85.2%, and 86.1% obtained using only spectral and fractal features.
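Higuchi's fractal dimension of a 1-D series, as used for the FD features here, can be sketched as follows; `kmax` is an illustrative parameter choice.

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D series (e.g. one RF time series):
    slope of log curve length L(k) versus log(1/k) over scales k."""
    x = np.asarray(x, float)
    N = len(x)
    logk, logL = [], []
    for k in range(1, kmax + 1):
        Lk = []
        for m in range(k):                       # k down-sampled sub-series
            idx = np.arange(m, N, k)
            if len(idx) < 2:
                continue
            diff = np.abs(np.diff(x[idx])).sum() # curve length of sub-series
            norm = (N - 1) / ((len(idx) - 1) * k)
            Lk.append(diff * norm / k)
        logk.append(np.log(1.0 / k))
        logL.append(np.log(np.mean(Lk)))
    slope, _ = np.polyfit(logk, logL, 1)         # FD is the fitted slope
    return slope
```

A smooth, regular series yields an FD near 1, while an irregular, noise-like series approaches 2; the FD of each ROI's time series then feeds the SVM alongside the spectral and wavelet features.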
Poster Session: Retina
ARGALI: an automatic cup-to-disc ratio measurement system for glaucoma detection and AnaLysIs framework
Show abstract
Glaucoma is an irreversible ocular disease leading to permanent blindness. However, early detection can be effective in
slowing or halting the progression of the disease. Physiologically, glaucoma progression is quantified by increased
excavation of the optic cup. This progression can be quantified in retinal fundus images via the optic cup to disc ratio
(CDR), since with increasing glaucomatous neuropathy the relative size of the optic cup to the optic disc increases. The
ARGALI framework consists of various segmentation approaches employing level sets, color intensity thresholds and
ellipse fitting for the extraction of the optic cup and disc from retinal images as preliminary steps. Following this,
different combinations of the obtained results are then utilized to calculate the corresponding CDR values. The
individual results are subsequently fused using a neural network. The neural network is trained
with a set of 100 retinal images. For testing, a separate set of 40 images is then used to compare the obtained CDR against a
clinically graded CDR, and it is shown that the neural network-based result performs better than the individual
components, with 96% of the results within intra-observer variability. The results indicate good promise for the further
development of ARGALI as a tool for the early detection of glaucoma.
Determination of cup-to-disc ratio of optical nerve head for diagnosis of glaucoma on stereo retinal fundus image pairs
Show abstract
A large cup-to-disc (C/D) ratio, which is the ratio of the diameter of the depression (cup) to that of the optical nerve head
(ONH, disc), can be one of the important signs for diagnosis of glaucoma. Eighty eyes, including 25 eyes with the signs
of glaucoma, were imaged by a stereo retinal fundus camera. An ophthalmologist provided the outlines of cup and disc
on a regular monitor and on the stereo display. The depth image of the ONH was created by determining the
corresponding pixels in a pair of images based on the correlation coefficient in localized regions. The areas of the disc
and cup were determined by use of the red component in one of the color images and by use of the depth image,
respectively. The C/D ratio was determined based on the largest vertical lengths in the cup and disc areas, which was
then compared with that by the ophthalmologist. The disc areas determined by the computerized method agreed
relatively well with those determined by the ophthalmologist, whereas the agreement for the cup areas was somewhat
lower. When C/D ratios were employed for distinction between the glaucomatous and non-glaucomatous eyes, the area
under the receiver operating characteristic curve (AUC) was 0.83. The computerized analysis of ONH can be useful for
diagnosis of glaucoma.
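The final C/D computation from the largest vertical lengths in the cup and disc areas can be sketched as follows, assuming binary cup and disc segmentation masks are already available; using the tallest per-column pixel count as the vertical length is a simplifying assumption for roughly convex regions.

```python
import numpy as np

def vertical_extent(mask):
    """Largest vertical length of the region: the tallest per-column pixel
    count (a reasonable proxy for roughly convex cup/disc shapes)."""
    return mask.sum(axis=0).max()

def cup_to_disc_ratio(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio from binary segmentation masks."""
    return vertical_extent(cup_mask) / vertical_extent(disc_mask)
```

A ratio well above typical normal values would then flag the eye as possibly glaucomatous, as in the ROC analysis reported above.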
Poster Session: Skeleton
Automated quantification of lumbar vertebral kinematics from dynamic fluoroscopic sequences
Show abstract
We hypothesize that the vertebra-to-vertebra patterns of spinal flexion and extension motion of persons with lower back pain will differ from those of persons who are pain-free. Thus, it is our goal to measure the motion of individual lumbar vertebrae noninvasively from dynamic fluoroscopic sequences. Two-dimensional normalized mutual information-based image registration was used to track frame-to-frame motion. Software was developed that required the operator to
identify each vertebra on the first frame of the sequence using a four-point "caliper" placed at the posterior and anterior
edges of the inferior and superior end plates of the target vertebrae. The program then resolved the individual motions of
each vertebra independently throughout the entire sequence. To validate the technique, 6 cadaveric lumbar spine specimens were potted in polymethylmethacrylate and instrumented with optoelectric sensors. The specimens were then placed in a custom dynamic spine simulator and moved through flexion-extension cycles while kinematic data and fluoroscopic sequences were simultaneously acquired. We found strong correlation between the absolute flexion-extension
range of motion of each vertebra as recorded by the optoelectric system and as determined from the fluoroscopic sequence via registration. We conclude that this method is a viable way of noninvasively assessing two-dimensional vertebral motion.
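The frame-to-frame tracking above is driven by normalized mutual information (NMI). A minimal sketch of one common NMI definition, (H(A) + H(B)) / H(A, B) computed from a joint intensity histogram (the bin count and test images are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI(a, b) = (H(a) + H(b)) / H(a, b) from a joint intensity
    histogram; it peaks when the two images are aligned."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)   # marginal of a
    py = pxy.sum(axis=0)   # marginal of b
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
# An image compared with itself is maximally informative: NMI == 2.
print(normalized_mutual_information(img, img))
```

A registration loop would shift one vertebra's patch over candidate frame-to-frame displacements and keep the displacement maximizing this score.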
Quantitative measurement and analysis for detection and treatment planning of developmental dysplasia of the hip
Show abstract
Developmental dysplasia of the hip is a congenital hip joint malformation in which the proximal femur and acetabulum
are subluxatable, dislocatable, or dislocated. Conventionally, physicians have made diagnoses and planned treatments based
only on findings from two-dimensional (2D) images, manually calculating clinical parameters. However, the anatomical
complexity of the disease and the limitations of current standard procedures make accurate diagnosis quite difficult. In
this study, we developed a system that provides quantitative measurement of 3D clinical indexes based on computed
tomography (CT) images. To extract bone structure from surrounding tissues more accurately, the system first
segments the bone using a knowledge-based fuzzy clustering method, which is formulated by modifying the objective
function of the standard fuzzy c-means algorithm with an additive adaptation penalty. The second part of the system
automatically calculates the clinical indexes, which are extended from 2D to 3D for an accurate description of the spatial
relationship between the femurs and acetabulum. To evaluate system performance, an experimental study of 22
patients with unilaterally or bilaterally affected hips was performed. The results for the 3D acetabular index (AI) automatically
provided by the system were validated by comparison with 2D results measured by surgeons manually. The correlation
between the two results was found to be 0.622 (p<0.01).
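The paper's segmentation modifies the fuzzy c-means objective with an additive adaptation penalty; that penalty is not reproduced here, but the unmodified baseline it builds on can be sketched as:

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=50, seed=0):
    """Standard fuzzy c-means on 1-D intensities x (baseline only;
    the paper adds a knowledge-based adaptation penalty)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                      # memberships sum to 1 per sample
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)   # fuzzily weighted means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1))           # closeness -> membership
        u /= u.sum(axis=0)
    return centers, u

# Two well-separated intensity clusters around 0 and 10.
x = np.concatenate([np.zeros(50), np.full(50, 10.0)])
centers, u = fuzzy_c_means(x)
print(np.sort(centers))  # approximately [0, 10]
```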
Automated fetal spine detection in ultrasound images
Show abstract
In this paper, a novel method is proposed for the automatic detection of the fetal spine in ultrasound images, along
with its orientation. This problem presents a variety of challenges, including robustness to speckle noise, variations in the visible
shape of the spine due to the orientation of the ultrasound probe with respect to the fetus, and the lack of a proper edge
enclosing the entire spine on account of its composition of distinct vertebrae. The proposed method improves
robustness and accuracy by making use of two independent techniques to estimate the spine, and then detects the exact
location using a cross-correlation approach. Experimental results show that the proposed method is promising for fetal
spine detection.
Context sensitive labeling of spinal structure in MR images
Show abstract
We present a new method for automatic detection of the lumbar vertebrae and disk structure from MR images.
In clinical settings, radiologists utilize several images of the lumbar structure for diagnosis of lumbar disorders.
These images are co-registered by technicians and represent orthogonal features of the lumbar region. We
combine information from T1W sagittal, T2W sagittal and T2W axial MR images to automatically label disks
and vertebral columns. The method couples geometric and tissue property information available from the three
types of images with image analysis approaches to achieve 98.8% accuracy for the disk labeling task on a test
set of 67 images containing 335 disks.
Assessment of scoliosis by direct measurement of the curvature of the spine
Show abstract
We present two novel metrics for assessing scoliosis, in which the geometric centers of all the affected vertebrae in an
antero-posterior (A-P) radiographic image are used. This is in contradistinction to the existing methods of using selected
vertebrae, and determining either their endplates or the intersections of their diagonals, to define a scoliotic angle. Our
first metric delivers a scoliotic angle, comparable to the Cobb and Ferguson angles. It measures the sum of the angles
between the centers of the affected vertebrae, and avoids the need for an observer to decide on the extent of component
curvatures. Our second metric calculates the normalized root-mean-square curvature of the smoothest path comprising
piece-wise polynomial splines fitted to the geometric centers of the vertebrae. The smoothest path is useful in modeling
the spinal curvature. Our metrics were compared to existing methods using radiographs from a group of twenty subjects
with spinal curvatures of varying severity. Their values were strongly correlated with those of the scoliotic angles (r =
0.850 - 0.886), indicating that they are valid surrogates for measuring the severity of scoliosis. Our direct use of
positional data removes the vagaries of determining variably shaped endplates, and circumvents the significant inter- and
intra-observer errors of the Cobb and Ferguson methods. Although we applied our metrics to two-dimensional (2-D)
data in this paper, they are equally applicable to three-dimensional (3-D) data. We anticipate that they will prove to
be the basis for a reliable 3-D measurement and classification system.
Poster Session: Other
Automated segmentation of urinary bladder and detection of bladder lesions in multi-detector row CT urography
Show abstract
We are developing a CAD system for automated bladder segmentation and detection of bladder lesions on MDCT
urography, which potentially can assist radiologists in detecting bladder cancer. In the first stage of our CAD system,
given a starting point, the bladder is segmented based on 3D region growing and active contours. In the second stage,
lesion candidates are detected using histogram and shape analysis to separate the abnormality from the background,
which is the bladder partially filled with contrast material. In this pilot study, a limited data set of 15 patients with 29
biopsy-proven lesions (26 malignant, 3 benign) was used. The average size for the 26 malignant lesions was 10 mm
(range: 4.2 mm - 30.5 mm) with conspicuity in the range of 2 to 5 on a 5-point scale (5=very subtle). The average size
for the 3 benign lesions was 14 mm (range: 3.5 mm - 25 mm) with conspicuity in the range of 2 to 3. Our segmentation
program successfully segmented both the contrast and non-contrast part of the bladder in 87% (13/15) of the patients.
The contrast-filled bladder region was successfully segmented for all 15 patients. Our system detected 83% (24/29) of
the lesions with 1.4 (21/15) false positives per patient. 85% (22/26) of the bladder cancers were detected. The main
cause for missed lesions was that they were in the non-contrast bladder region, which was not included in the detection
stage in this pilot study. The results demonstrate the feasibility of developing a CAD system for automated segmentation
of the bladder and detection of bladder malignancies.
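The first CAD stage combines 3D region growing with active contours. The region-growing part alone might look like the following sketch (the intensity window, seed, and toy volume are illustrative; the active-contour refinement is omitted):

```python
import numpy as np
from collections import deque

def region_grow_3d(vol, seed, lo, hi):
    """Grow a region from `seed` over 6-connected voxels whose
    intensity lies in [lo, hi] (breadth-first flood fill)."""
    mask = np.zeros(vol.shape, dtype=bool)
    if not (lo <= vol[seed] <= hi):
        return mask
    q = deque([seed])
    mask[seed] = True
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < vol.shape[i] for i in range(3)) \
               and not mask[n] and lo <= vol[n] <= hi:
                mask[n] = True
                q.append(n)
    return mask

# Toy volume: a bright 3x3x3 "bladder" inside a dark background.
vol = np.zeros((8, 8, 8))
vol[2:5, 2:5, 2:5] = 100.0
mask = region_grow_3d(vol, (3, 3, 3), 50.0, 150.0)
print(mask.sum())  # -> 27
```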
Narrow-band imaging for the computer assisted diagnosis in patients with Barrett's esophagus
Show abstract
Cancer of the esophagus has the worst prognosis of all known cancers in Germany. The early detection of suspicious changes in the esophagus allows therapies that can prevent the cancer. Barrett's esophagus is a premalignant change of the esophagus that is a strong indicator for cancer. Therefore there is great interest in detecting Barrett's esophagus as early as possible. The standard examination is done with a videoscope, with which the physician checks the esophagus for suspicious regions. Once a suspicious region is found, the physician takes a biopsy of that region to obtain a histological result. Besides the traditional white-light illumination there is a new technology: so-called Narrow-Band Imaging (NBI). This technology uses a narrower band of the visible spectrum to illuminate the scene captured by the videoscope. Medical studies indicate that the use of NBI instead of white light can increase a physician's rate of correct diagnoses. In the future, Computer-Assisted Diagnosis (CAD), which is well known in the area of mammography, might be used to support the physician in the diagnosis of different lesions in the esophagus. A knowledge-based system which uses a database is a possible solution for this task. For our work we collected NBI images containing 326 Regions of Interest (ROI) of three typical classes: epithelium, cardia mucosa, and Barrett's esophagus. We then used standard texture-analysis features, such as those proposed by Haralick, Chen, Gabor, and Unser, to extract features from every ROI. Classification performance was evaluated with a classifier using leave-one-out sampling. The best result achieved was an accuracy of 92% over all classes and an accuracy of 76% for Barrett's esophagus. These results show that the NBI technology can provide good diagnostic support when used in a CAD system.
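Haralick-style texture features are derived from a gray-level co-occurrence matrix (GLCM). A minimal sketch of three such features (the quantization level and pixel offset are assumptions; the paper uses the full Haralick, Chen, Gabor, and Unser feature sets):

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Contrast, homogeneity, and energy from a gray-level
    co-occurrence matrix of a [0,1)-valued image."""
    q = np.minimum((img * levels).astype(int), levels - 1)  # quantize
    h, w = q.shape
    src = q[:h - dy, :w - dx]       # pixel
    dst = q[dy:, dx:]               # its (dy, dx)-offset neighbor
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (src.ravel(), dst.ravel()), 1)
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()
    homogeneity = (p / (1 + np.abs(i - j))).sum()
    energy = (p ** 2).sum()
    return np.array([contrast, homogeneity, energy])

flat = np.zeros((16, 16))
noisy = np.random.default_rng(1).random((16, 16))
print(glcm_features(flat))   # contrast 0, homogeneity 1, energy 1
print(glcm_features(noisy))  # contrast > 0 for a textured patch
```

Such feature vectors, one per ROI, would then feed the leave-one-out classification described above.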
Automatic patient-adaptive bleeding detection in a capsule endoscopy
Show abstract
We present a method for patient-adaptive detection of bleeding regions in Capsule Endoscopy (CE) images. The CE
system has 320x320 resolution and transmits 3 images per second to the receiver over a period of around 10 hours. We have developed
a technique to detect bleeding automatically utilizing a color spectrum transformation (CST) method. However,
because of irregular conditions such as organ differences, patient differences, and illumination conditions, detection
performance is not uniform. To solve this problem, the detection method in this paper includes a parameter compensation
step which compensates for irregular image conditions using a color balance index (CBI). We investigated color balance
across 2 million sequential images. Based on this pre-experimental result, we defined ΔCBI to represent the deviation of
an image's color balance from the standard small-bowel color balance. The ΔCBI feature value is extracted from each
image and used in the CST method as a parameter compensation constant. After candidate pixels are detected using the CST
method, they are labeled and examined for bleeding characteristics. We tested our method on 4,800 images from 12
patient data sets (9 abnormal, 3 normal). Our experimental results show that the proposed method achieves a sensitivity of
94.87% and a specificity of 96.12% with the patient-adaptive method, compared with 80.87% and 74.25% without it.
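The CST and CBI formulations are specific to the paper and not spelled out in the abstract. The following is only a hypothetical illustration of the underlying idea, per-image compensated color thresholding for red-dominant pixels; the function, threshold, and balance factor are all assumptions:

```python
import numpy as np

def bleeding_candidates(rgb, red_thresh=2.0):
    """Hypothetical stand-in for a CST-style step: flag pixels whose
    red channel dominates green and blue, with the threshold scaled
    by the frame's own red dominance (loosely playing the CBI role)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Per-image compensation factor from the frame's global color balance.
    balance = r.mean() / (g.mean() + b.mean() + 1e-6)
    return r > red_thresh * balance * (g + b + 1e-6)

# Toy frame: mostly gray, with one strongly red 4x4 patch.
frame = np.full((32, 32, 3), 0.4)
frame[10:14, 10:14] = [0.9, 0.1, 0.1]
mask = bleeding_candidates(frame)
print(mask.sum())  # -> 16
```

In the paper's pipeline the flagged pixels would then be labeled and examined for bleeding characteristics before a final decision.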
An approach to automatic detection of body parts and their size estimation from computed tomography image
Show abstract
Computer-aided diagnosis (CAD) systems usually require information about regions of interest in images, like:
lungs (for nodule detection), colon (for identifying polyps), etc. It is often computationally intensive to
process large data sets such as CT volumes to find these areas of interest. A fast and accurate recognition of the different
regions of interest in the human body from images is therefore necessary. In this paper we propose a fast and
efficient algorithm that can detect the organs of interest in a CT volume and estimate their sizes. Instead of
analyzing the whole 3D volume, which is computationally expensive, a binary search technique is adapted to
search over a few slices. The slices selected in the search process are segmented and different regions are labeled.
The decision as to whether a slice contains lung, colon, or both is based on the count of lung/colon pixels
in the slice. Once detection is done, we look for the start and end slices of the body part to obtain an estimate
of its size. The algorithm involves certain search decisions based on predefined threshold values which
are empirically chosen from a training data set. The effectiveness of our technique is confirmed by applying
it to an independent test data set, on which a detection accuracy of 100% is obtained. This algorithm can
be integrated into a CAD system to launch the appropriate application, or used in presets for visualization
and other post-processing such as image registration.
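Assuming, as the abstract implies, that a body part occupies a contiguous run of slices, the binary search for its start and end slices might be sketched as follows (the per-slice predicate stands in for the segment-and-count-pixels test, and all numbers are toy values):

```python
def first_true(pred, lo, hi):
    """Smallest i in [lo, hi] with pred(i) True, assuming pred is
    monotone (False...False, True...True) and pred(hi) is True."""
    while lo < hi:
        mid = (lo + hi) // 2
        if pred(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

def organ_extent(is_organ, inside, n_slices):
    """Given one slice index `inside` known to contain the organ,
    binary-search the first and last organ slices."""
    start = first_true(is_organ, 0, inside)
    # Last organ slice = first i >= inside whose next slice is not organ.
    end = first_true(
        lambda i: True if i + 1 >= n_slices else not is_organ(i + 1),
        inside, n_slices - 1)
    return start, end

# Toy scan of 100 slices where the organ spans slices 30..60.
print(organ_extent(lambda i: 30 <= i <= 60, 45, 100))  # -> (30, 60)
```

Only O(log n) slices are ever segmented, which is the source of the speedup over exhaustive slice-by-slice analysis.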
TESD: a novel ground truth estimation method
Show abstract
Knowledge of the exact shape of a lesion, or ground truth (GT), is necessary for algorithm validation, measurement
metric analysis, and accurate size estimation. When multiple readers each document a
lesion in a form that can ultimately be described as an occupancy region, the unknown GT is estimated by aptly
merging those occupancy regions into a single outcome. Several methods are already available but, even when
they consider the spatial location of pixels, e.g. thresholded probability map (TPM) or STAPLE, pixels are assumed
spatially independent (although STAPLE proposes a hidden-Markov-random-field fix). In this paper we
propose Truth Estimate from Self Distances (TESD): a new voting scheme, over all the voxels inside and outside
the occupancy region, that takes into account three key characteristics: (a) critical shape conformations, such as
holes or spikes, that are defined by the reciprocally surrounding pixels; (b) marking co-locations, meaning the
closeness without intersection of one reader's marking to other readers'; and (c) the three-dimensionality of
lesions as imaged by CT scanners. In TESD each voxel is labeled into one of four categories according to its signed
distance transform and then the labeled images are combined with a center of gravity method to provide the GT
estimation. This theoretical approach was validated on a subset of the publicly available Lung Image Database
Consortium archive, where a total of 35 nodules documented on 26 scans by all four radiologists were available.
The results obtained are reasonable estimates, with GT obtained close to TPM and STAPLE; at the same time
this method is not limited to the intersections of readers' marked regions.
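TESD's four-category labeling and center-of-gravity combination are not fully specified in the abstract; a simplified stand-in that fuses readers by averaging signed distance maps conveys the flavor of using self distances rather than independent per-pixel votes:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt as edt

def signed_distance(mask):
    """Positive inside the region, negative outside (voxel units)."""
    return edt(mask) - edt(~mask)

def fuse_readers(masks):
    """Fuse reader markings by averaging their signed distance maps
    and taking the zero super-level set -- a simplified stand-in for
    TESD's four-category labeling and center-of-gravity combination."""
    avg = np.mean([signed_distance(m) for m in masks], axis=0)
    return avg > 0

# Two readers who disagree slightly on a square lesion.
r1 = np.zeros((12, 12), dtype=bool); r1[3:9, 3:9] = True
r2 = np.zeros((12, 12), dtype=bool); r2[4:10, 4:10] = True
gt = fuse_readers([r1, r2])
```

Unlike a simple intersection, the fused estimate can extend into near-misses between readers' markings, because closeness without overlap still contributes positive distance mass.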
Incremental classification learning for anomaly detection in medical images
Show abstract
Computer-aided diagnosis usually screens thousands of instances to find only a few positive cases that indicate
probable presence of disease. The amount of patient data increases consistently over time. In the diagnosis of new
instances, disagreement occurs between a CAD system and physicians, which suggests inaccurate classifiers.
Intuitively, misclassified instances and the previously acquired data should be used to retrain the classifier.
This, however, is very time consuming and, in some cases where the dataset is too large, becomes infeasible. In
addition, among the patient data only a small percentage shows positive signs, a situation known as the imbalanced
data problem. We present an incremental Support Vector Machine (SVM) as a solution for the class imbalance problem
in the classification of anomalies in medical images. The support vectors provide a concise representation of the
distribution of the training data. Here we use bootstrapping to identify potential candidate support vectors
for future iterations. Experiments were conducted using images from endoscopy videos, and the sensitivity
and specificity were close to that of SVM trained using all samples available at a given incremental step with
significantly improved efficiency in training the classifier.
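The core idea, retraining on the previous model's support vectors plus the new batch rather than on all data seen so far, can be sketched with scikit-learn (the synthetic data and linear kernel are assumptions, and the paper's bootstrapping of candidate support vectors is omitted):

```python
import numpy as np
from sklearn.svm import SVC

def incremental_svm_step(clf, X_old, y_old, X_new, y_new):
    """One incremental update: retrain on the previous model's support
    vectors plus the new batch, instead of on all accumulated data."""
    sv = clf.support_                     # indices of support vectors
    X = np.vstack([X_old[sv], X_new])
    y = np.concatenate([y_old[sv], y_new])
    return SVC(kernel="linear").fit(X, y), X, y

rng = np.random.default_rng(0)
# Initial batch: two well-separated 2-D classes.
X0 = np.vstack([rng.normal(-2, 1, (40, 2)), rng.normal(2, 1, (40, 2))])
y0 = np.array([0] * 40 + [1] * 40)
clf = SVC(kernel="linear").fit(X0, y0)
# New batch arrives; update using only the retained support vectors.
X1 = np.vstack([rng.normal(-2, 1, (10, 2)), rng.normal(2, 1, (10, 2))])
y1 = np.array([0] * 10 + [1] * 10)
clf2, X, y = incremental_svm_step(clf, X0, y0, X1, y1)
print(clf2.score(X0, y0), len(X))
```

The retained set is far smaller than the accumulated data, which is where the training-efficiency gain reported above comes from.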
Robust anatomy detection from CT topograms
Show abstract
We present an automatic method to quickly and accurately detect multiple anatomical regions of interest (ROIs) in CT
topogram images. Our method first detects a redundant and potentially erroneous set of local features. Their spatial
configurations are captured by a set of local voting functions. Unlike existing methods, which try to
"hit" the correct or best constellation of local features, we take the opposite approach: we peel away bad
features until a safe (i.e., conservatively small) number of features remains. The approach is deterministic in nature and guarantees
success even for extremely noisy cases. Its advantages are its robustness and computational efficiency.
Our method also addresses the potential scenario in which outliers (i.e., false landmark detections) form plausible
configurations. As long as such outliers are a minority, the method can successfully remove them. The final ROI
of the anatomy is computed from a best subset of the remaining local features. Experimental validation was carried out
for multiple organs detection from a large collection of CT topogram images. Fast and highly robust performance was
observed. In the testing data sets, the detection rate varies from 98.2% to 100% and the false detection
rate from 0.0% to 0.5% across different ROIs. The method is fast and accurate enough to be seamlessly integrated into a
real-time work flow on the CT machine to improve efficiency, consistency, and repeatability.
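The peel-away strategy can be illustrated with a toy consistency score; the paper's actual local voting functions are more elaborate, and the distance-to-centroid score below is only an assumed stand-in:

```python
import numpy as np

def peel_outliers(points, keep=4):
    """Peel-away sketch: instead of searching for the best consistent
    subset, repeatedly discard the point that agrees least with the
    others (largest distance to the centroid of the remaining points)
    until a conservatively small number remain."""
    pts = list(map(tuple, points))
    while len(pts) > keep:
        arr = np.array(pts, dtype=float)
        scores = []
        for i in range(len(pts)):
            others = np.delete(arr, i, axis=0)
            # Disagreement score: distance to the centroid of the rest.
            scores.append(np.linalg.norm(arr[i] - others.mean(axis=0)))
        pts.pop(int(np.argmax(scores)))   # peel away the worst feature
    return pts

# Four consistent landmark candidates plus two spurious detections.
inliers = [(0, 0), (1, 0), (0, 1), (1, 1)]
outliers = [(10, 10), (12, 9)]
print(sorted(peel_outliers(inliers + outliers, keep=4)))
# -> [(0, 0), (0, 1), (1, 0), (1, 1)]
```

Because each step removes exactly one feature by a deterministic rule, the procedure has no random restarts and terminates in a fixed number of iterations, matching the robustness and efficiency claims above.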