- Front Matter: Volume 8315
- Keynote and Digital Pathology
- Breast
- Oncology
- Abdomen
- Vascular
- Lung
- Colon
- Musculoskeletal
- Digital Pathology I
- Digital Pathology II
- Novel Applications
- Cardiac and Neuro
- Poster Session: Abdomen
- Poster Session: Bone
- Poster Session: Breast
- Poster Session: Cardiovascular
- Poster Session: Dental
- Poster Session: Eye
- Poster Session: Lung
- Poster Session: Microscopy and Histopathology
- Poster Session: Neuro
Front Matter: Volume 8315
This PDF file contains the front matter associated with SPIE Proceedings Volume 8315, including the Title Page, Copyright information, Table of Contents, Introduction, and the Conference Committee listing.
Keynote and Digital Pathology
Automated detection of cells from immunohistochemically-stained tissues: application to Ki-67 nuclei staining
Hatice Cinar Akakin,
Hui Kong,
Camille Elkins,
et al.
An automated cell nuclei detection algorithm is described for the quantification of immunohistochemically-stained tissues. Detection and segmentation of positively stained cells, and their separation from the background and negatively stained cells, are crucial for fast, accurate, consistent, and objective analysis of pathology images. One of the
major challenges is the identification, hence accurate counting of individual cells, when these cells form clusters. To
identify individual cell nuclei within clusters, we propose a new cell nuclei detection method based on the well-known
watershed segmentation, which can lead to under- or over-segmentation for this problem. Our algorithm handles over-segmentation by combining the H-minima transformed watershed algorithm with a novel region merging technique. To handle the under-segmentation problem, we develop a Laplacian-of-Gaussian (LoG) filtering based blob detection
algorithm, which estimates the range of the scales from the image adaptively. An SVM classifier was trained in order to
separate non-touching single cells and touching cell clusters with five features representing connected region properties
such as eccentricity, area, perimeter, convex area and perimeter-to-area ratio. Classified touching cell clusters are
segmented with the H-minima based watershed algorithm. The resulting over-segmented regions are improved with the
merging algorithm. The remaining under-segmented cell clusters are convolved with LoG filters to detect the cells within
them. Cell-by-cell nucleus detection performance is evaluated by comparing computer detections with cell locations
manually marked by eight pathology residents. The sensitivity is 89% when the cells are marked as positive by at least one resident, and it increases to 99% when the evaluated cells are marked by all eight residents. In comparison, the
average reader sensitivity varies between 70% ± 18% and 95% ± 11%.
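The scale-adaptive LoG blob detection mentioned above can be illustrated with a minimal 1D sketch. This example is not from the paper; the kernel normalization and the candidate scale set are hypothetical, chosen only to show how the scale of strongest response tracks blob size:

```python
import math

def log_kernel(sigma):
    """1D Laplacian-of-Gaussian samples, normalized (proportional to
    sigma^2 times the Gaussian second derivative) so that responses are
    comparable across scales."""
    radius = int(3 * sigma) + 1
    ker = []
    for x in range(-radius, radius + 1):
        g = math.exp(-x * x / (2.0 * sigma * sigma))
        ker.append((x * x / (sigma * sigma) - 1.0) * g / sigma)
    return ker

def log_response(signal, sigma):
    """Zero-padded convolution, negated so bright blobs give positive peaks."""
    ker = log_kernel(sigma)
    r = len(ker) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(ker):
            j = i + k - r
            if 0 <= j < len(signal):
                acc += w * signal[j]
        out.append(-acc)
    return out

# a bright "nucleus" of half-width 4, centred at index 20
signal = [1.0 if 16 <= i <= 24 else 0.0 for i in range(40)]
# adaptive scale estimation: keep the scale with the strongest centre response
best_sigma = max((1.0, 2.0, 3.0, 4.0, 6.0),
                 key=lambda s: log_response(signal, s)[20])  # -> 4.0
```

In the paper's 2D setting the same idea applies per pixel, with the scale range estimated adaptively from the image rather than fixed as here.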
Automated detection of diagnostically relevant regions in H&E stained digital pathology slides
We present a computationally efficient method for analyzing H&E stained digital pathology slides with the objective of
discriminating diagnostically relevant vs. irrelevant regions. Such technology is useful for several applications: (1) It can
speed up computer aided diagnosis (CAD) for histopathology based cancer detection and grading by an order of magnitude
through a triage-like preprocessing and pruning. (2) It can improve the response time for an interactive digital pathology
workstation (which typically handles multi-gigabyte digital pathology slides), e.g., by controlling adaptive
compression or prioritization algorithms. (3) It can support the detection and grading workflow for expert pathologists in a
semi-automated diagnosis, thereby increasing throughput and accuracy. At the core of the presented method is the statistical
characterization of tissue components that are indicative for the pathologist's decision about malignancy vs. benignity,
such as, nuclei, tubules, cytoplasm, etc. In order to allow for effective yet computationally efficient processing, we propose
visual descriptors that capture the distribution of color intensities observed for nuclei and cytoplasm. Discrimination
between statistics of relevant vs. irrelevant regions is learned from annotated data, and inference is performed via linear
classification. We validate the proposed method both qualitatively and quantitatively. Experiments show a cross validation
error rate of 1.4%. We further show that the proposed method can prune ≈90% of the area of pathological slides while
maintaining 100% of all relevant information, which allows for a speedup of a factor of 10 for CAD systems.
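The descriptor-plus-linear-classification pipeline described above can be sketched in a few lines. This is an illustrative toy, not the authors' implementation; the intensity histogram, the weights, and the tiles are all hypothetical:

```python
def intensity_histogram(tile, n_bins=8):
    """Normalized histogram of pixel intensities in [0, 1): the visual descriptor."""
    hist = [0.0] * n_bins
    for v in tile:
        hist[min(int(v * n_bins), n_bins - 1)] += 1.0
    total = sum(hist)
    return [h / total for h in hist]

def linear_score(descriptor, weights, bias=0.0):
    """Linear classification: a positive score marks the region as relevant."""
    return sum(w * d for w, d in zip(weights, descriptor)) + bias

# toy example: a dark, nuclei-rich tile vs. a bright, mostly-empty tile
nuclei_tile = [0.1, 0.15, 0.2, 0.1, 0.8, 0.12]
background_tile = [0.9, 0.95, 0.85, 0.9, 0.92, 0.88]
# hypothetical learned weights favouring dark (low-intensity) bins
weights = [1.0, 1.0, 0.2, 0.0, -0.2, -0.5, -1.0, -1.0]
relevant = [t for t in (nuclei_tile, background_tile)
            if linear_score(intensity_histogram(t), weights) > 0]
```

In the paper the weights are learned from annotated data; here they are hand-set only to make the pruning behaviour visible.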
Breast
Detection of breast cancer in automated 3D breast ultrasound
Automated 3D breast ultrasound (ABUS) is a novel imaging modality, in which motorized scans of the breasts are made
with a wide transducer through a membrane under modest compression. The technology has gained high interest and
may become widely used in screening of dense breasts, where the sensitivity of mammography is poor. ABUS has a high
sensitivity for detecting solid breast lesions. However, reading ABUS images is time consuming, and subtle abnormalities
may be missed. Therefore, we are developing a computer aided detection (CAD) system to help reduce reading
time and errors. In the multi-stage system we propose, segmentations of the breast and nipple are performed, providing
landmarks for the detection algorithm. Subsequently, voxel features characterizing coronal spiculation patterns, blobness,
contrast, and locations with respect to landmarks are extracted. Using an ensemble of classifiers, a likelihood
map indicating potential malignancies is computed. Local maxima in the likelihood map are detected and form a set of candidate lesions in each view. These candidates are further processed in a second
detection stage, which includes region segmentation, feature extraction and a final classification. Region segmentation
is performed using a 3D spiral-scanning dynamic programming method. Region features include descriptors of shape,
acoustic behavior and texture. Performance was determined using a 78-patient dataset with 93 images, including 50
malignant lesions. We used 10-fold cross-validation. Using FROC analysis we found that the system obtains a lesion
sensitivity of 60% and 70% at 2 and 4 false positives per image respectively.
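Local maxima detection in a likelihood map, the step that produces the candidate set above, can be sketched as follows. This is an illustrative version with a hypothetical threshold, not the authors' code:

```python
def local_maxima(likelihood, threshold=0.5):
    """Return (row, col) of pixels that exceed `threshold` and strictly
    exceed all of their 8 neighbours."""
    rows, cols = len(likelihood), len(likelihood[0])
    peaks = []
    for r in range(rows):
        for c in range(cols):
            v = likelihood[r][c]
            if v < threshold:
                continue
            neighbours = [likelihood[rr][cc]
                          for rr in range(max(r - 1, 0), min(r + 2, rows))
                          for cc in range(max(c - 1, 0), min(c + 2, cols))
                          if (rr, cc) != (r, c)]
            if all(v > n for n in neighbours):
                peaks.append((r, c))
    return peaks

likelihood_map = [
    [0.1, 0.2, 0.1, 0.0],
    [0.2, 0.9, 0.2, 0.1],
    [0.1, 0.2, 0.1, 0.7],
    [0.0, 0.1, 0.2, 0.3],
]
candidates = local_maxima(likelihood_map)  # -> [(1, 1), (2, 3)]
```

Each candidate would then be passed to the second detection stage (region segmentation, feature extraction, final classification).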
Breast image feature learning with adaptive deconvolutional networks
Feature extraction is a critical component of medical image analysis. Many computer-aided diagnosis approaches employ hand-designed, heuristic lesion features. An alternative approach is to learn features
directly from images. In this preliminary study, we explored the use of Adaptive Deconvolutional Networks (ADN) for
learning high-level features in diagnostic breast mass lesion images with potential application to computer-aided
diagnosis (CADx) and content-based image retrieval (CBIR). ADNs (Zeiler et al., 2011) are recently proposed unsupervised, generative hierarchical models that decompose images via convolutional sparse coding and max pooling.
We trained the ADNs to learn multiple layers of representation for two breast image data sets on two different modalities
(739 full field digital mammography (FFDM) and 2393 ultrasound images). Feature map calculations were accelerated
by use of GPUs. Following Zeiler et al., we applied the Spatial Pyramid Matching (SPM) kernel (Lazebnik et al., 2006)
on the inferred feature maps and combined this with a linear support vector machine (SVM) classifier for the task of
binary classification between cancer and non-cancer breast mass lesions. Non-linear, local structure preserving
dimension reduction, Elastic Embedding (Carreira-Perpiñán, 2010), was then used to visualize the SPM kernel output in
2D and qualitatively inspect image relationships learned. Performance was found to be competitive with current CADx
schemes that use human-designed features, e.g., achieving a 0.632+ bootstrap AUC (by case) of 0.83 [0.78, 0.89] for an
ultrasound image set (1125 cases).
Fully automated chest wall line segmentation in breast MRI by using context information
Breast MRI has emerged as an effective modality for the clinical management of breast cancer. Evidence suggests that
computer-aided applications can further improve the diagnostic accuracy of breast MRI. A critical and challenging first
step for automated breast MRI analysis is to separate the breast as an organ from the chest wall. Manual segmentation or user-assisted interactive tools are inefficient, tedious, and error-prone, making them impractical for processing
large amounts of data from clinical trials. To address this challenge, we developed a fully automated and robust
computerized segmentation method that intensively utilizes context information of breast MR imaging and the breast
tissue's morphological characteristics to accurately delineate the breast and chest wall boundary. A critical component is
the joint application of anisotropic diffusion and bilateral image filtering to enhance the edge that corresponds to the
chest wall line (CWL) and to reduce the effect of adjacent non-CWL tissues. A CWL voting algorithm is proposed based
on CWL candidates yielded from multiple sequential MRI slices, in which a CWL representative is generated and used
through a dynamic time warping (DTW) algorithm to filter out inferior candidates, leaving the optimal one. Our method
is validated by a representative dataset of 20 3D unilateral breast MRI scans that span the full range of the American
College of Radiology (ACR) Breast Imaging Reporting and Data System (BI-RADS) fibroglandular density
categorization. A promising performance (average overlay percentage of 89.33%) is observed when the automated
segmentation is compared to manually segmented ground truth obtained by an experienced breast imaging radiologist.
The automated method runs time-efficiently at ~3 minutes for each breast MR image set (28 slices).
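The dynamic time warping (DTW) comparison used to filter out inferior CWL candidates against the representative can be sketched with the classic DTW recurrence. The sequences below are hypothetical profiles, not data from the paper:

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1D sequences."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

representative = [0.0, 1.0, 2.0, 1.0, 0.0]
good_candidate = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0]   # same shape, slightly stretched
bad_candidate = [3.0, 3.0, 3.0, 3.0, 3.0]         # a non-CWL edge
keep = dtw_distance(representative, good_candidate) < dtw_distance(representative, bad_candidate)
```

Because DTW warps the time axis, a candidate with the right shape but a slight shift still matches the representative closely, while dissimilar edges are rejected.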
Improving CAD performance by fusion of the bilateral mammographic tissue asymmetry information
Bilateral mammographic tissue density asymmetry could be an important factor in assessing risk of developing
breast cancer and improving the detection of the suspicious lesions. This study aims to assess whether fusion of the
bilateral mammographic density asymmetrical information into a computer-aided detection (CAD) scheme could
improve CAD performance in detecting mass-like breast cancers. A testing dataset involving 1352 full-field digital
mammograms (FFDM) acquired from 338 cases was used. In this dataset, half (169) of the cases are positive, containing
malignant masses and half are negative. Two computerized schemes were first independently applied to process FFDM
images of each case. The first single-image based CAD scheme detected suspicious mass regions on each image. The
second scheme detected and computed the bilateral mammographic tissue density asymmetry for each case. A fusion
method was then applied to combine the output scores of the two schemes. The CAD performance levels using the
original CAD-generated detection scores and the new fusion scores were evaluated and compared using a free-response
receiver operating characteristic (FROC) type data analysis method. By fusion with the bilateral mammographic density
asymmetrical scores, the case-based CAD sensitivity was increased from 79.2% to 84.6% at a false-positive rate of 0.3
per image. CAD also cued more "difficult" masses with lower CAD-generated detection scores while discarding some "easy" cases. The study indicated that fusing the scores generated by a single-image based CAD scheme with the computed bilateral mammographic density asymmetry scores increased mass detection sensitivity, in particular for more subtle masses.
Interactive content-based image retrieval (CBIR) computer-aided diagnosis (CADx) system for ultrasound breast masses using relevance feedback
We designed a Content-Based Image Retrieval (CBIR) Computer-Aided Diagnosis (CADx) system to assist radiologists
in characterizing masses on ultrasound images. The CADx system retrieves masses that are similar to a query mass from
a reference library based on computer-extracted features that describe texture, width-to-height ratio, and posterior
shadowing of a mass. Retrieval is performed with k nearest neighbor (k-NN) method using Euclidean distance similarity
measure and Rocchio relevance feedback algorithm (RRF). In this study, we evaluated the similarity between the query
and the retrieved masses with relevance feedback using our interactive CBIR CADx system. The similarity assessment
and feedback were provided by experienced radiologists' visual judgment. For training the RRF parameters, similarities
of 1891 image pairs obtained from 62 masses were rated by 3 MQSA radiologists using a 9-point scale (9=most similar).
A leave-one-out method was used in training. For each query mass, the 5 most similar masses were retrieved from the
reference library using radiologists' similarity ratings, which were then used by RRF to retrieve another 5 masses for the
same query. The best RRF parameters were chosen based on three simulated observer experiments, each of which used
one of the radiologists' ratings for retrieval and relevance feedback. For testing, 100 independent query masses on 100
images and 121 reference masses on 230 images were collected. Three radiologists rated the similarity between the
query and the computer-retrieved masses. Average similarity ratings without and with RRF were 5.39 and 5.64 on the
training set and 5.78 and 6.02 on the test set, respectively. The average Az values without and with RRF were 0.86±0.03
and 0.87±0.03 on the training set and 0.91±0.03 and 0.90±0.03 on the test set, respectively. This study demonstrated
that RRF improved the similarity of the retrieved masses.
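The Rocchio relevance feedback (RRF) update at the core of the retrieval loop has a standard closed form, sketched here with hypothetical feature vectors and the textbook default weights (the paper's trained parameters differ):

```python
def rocchio_update(query, relevant, non_relevant, alpha=1.0, beta=0.75, gamma=0.25):
    """Rocchio relevance feedback: move the query feature vector toward the
    centroid of relevant examples and away from non-relevant ones."""
    def centroid(vectors):
        if not vectors:
            return [0.0] * len(query)
        return [sum(v[i] for v in vectors) / len(vectors) for i in range(len(query))]
    rel_c, non_c = centroid(relevant), centroid(non_relevant)
    return [alpha * q + beta * r - gamma * n for q, r, n in zip(query, rel_c, non_c)]

query = [1.0, 0.0]
relevant = [[2.0, 0.0], [4.0, 0.0]]     # masses rated similar by the reader
non_relevant = [[0.0, 4.0]]
new_query = rocchio_update(query, relevant, non_relevant)  # -> [3.25, -1.0]
```

The updated query is then re-submitted to the k-NN search with the same Euclidean distance measure, which is how the second batch of retrieved masses is produced.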
A content-based retrieval of mammographic masses using the curvelet descriptor
Fabian Narváez,
Gloria Díaz,
Francisco Gómez,
et al.
Computer-aided diagnosis (CAD) that uses content-based image retrieval (CBIR) strategies has become an important research area. This paper presents a retrieval strategy that automatically recovers mammographic masses from a virtual repository of mammograms. Unlike other approaches, we do not attempt to segment
masses but instead we characterize the regions previously selected by an expert. These regions are firstly
curvelet transformed and further characterized by approximating the marginal curvelet subband distribution
with a generalized gaussian density (GGD). The content based retrieval strategy searches similar regions
in a database using the Kullback-Leibler divergence as the similarity measure between distributions. The
effectiveness of the proposed descriptor was assessed by comparing the automatically assigned label with a
ground truth available in the DDSM database. A total of 380 masses with different shapes, sizes, and margins
were used for evaluation, resulting in a mean average precision rate of 89.3% and recall rate of 75.2% for the
retrieval task.
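The Kullback-Leibler divergence between two generalized Gaussian densities has a closed form (Do and Vetterli's result for GGDs), which is what makes this similarity measure cheap to evaluate over a whole repository. A sketch with hypothetical subband parameters:

```python
import math

def ggd_kl(alpha1, beta1, alpha2, beta2):
    """Closed-form KL divergence between two zero-mean generalized Gaussian
    densities with scale parameters alpha_i and shape parameters beta_i."""
    g = math.gamma
    return (math.log((beta1 * alpha2 * g(1.0 / beta2)) /
                     (beta2 * alpha1 * g(1.0 / beta1)))
            + (alpha1 / alpha2) ** beta2 * g((beta2 + 1.0) / beta1) / g(1.0 / beta1)
            - 1.0 / beta1)

# identical subband distributions -> zero divergence; a broader one -> positive
same = ggd_kl(1.0, 1.5, 1.0, 1.5)   # ~0.0
diff = ggd_kl(1.0, 1.5, 2.0, 1.5)   # > 0
```

In the retrieval setting, each curvelet subband contributes one such term, and regions are ranked by the summed divergence to the query region's fitted GGDs.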
Oncology
Automatic detection of axillary lymphadenopathy on CT scans of untreated chronic lymphocytic leukemia patients
Patients with chronic lymphocytic leukemia (CLL) have an increased frequency of axillary lymphadenopathy. Pretreatment
CT scans can be used to upstage patients at the time of presentation and post-treatment CT scans can reduce
the number of complete responses. In the current clinical workflow, the detection and diagnosis of lymph nodes is
usually performed manually by examining all slices of CT images, which can be time consuming and highly dependent
on the observer's experience. A system for automatic lymph node detection and measurement is desired. We propose a
computer-aided detection (CAD) system for axillary lymph nodes on CT scans in CLL patients. The lung is first automatically segmented, and the patient's body in the lung region is extracted to set the search region for lymph nodes.
Multi-scale Hessian based blob detection is then applied to detect potential lymph nodes within the search region. Next,
the detected potential candidates are segmented by a fast level set method. Finally, features are calculated from the
segmented candidates and support vector machine (SVM) classification is utilized for false positive reduction. Two
blobness features, Frangi's and Li's, are tested and their free-response receiver operating characteristic (FROC) curves
are generated to assess system performance. We applied our detection system to 12 patients with 168 axillary lymph
nodes measuring greater than 10 mm. All lymph nodes are manually labeled as ground truth. The system achieved
sensitivities of 81% and 85% at 2 false positives per patient for Frangi's and Li's blobness, respectively.
Image-based computer-aided prognosis of lung cancer: predicting patient recurrent-free survival via a variational Bayesian mixture modeling framework for cluster analysis of CT histograms
In this paper, we present a computer-aided prognosis (CAP) scheme that utilizes quantitatively derived image
information to predict patient recurrent-free survival for lung cancers. Our scheme involves analyzing CT histograms to
evaluate the volumetric distribution of CT values within pulmonary nodules. A variational Bayesian mixture modeling
framework translates the image-derived features into an image-based risk score for predicting the patient recurrence-free
survival. Using our dataset of 454 patients with NSCLC, we demonstrate the potential usefulness of the CAP scheme
which can provide a quantitative risk score that is strongly correlated with prognostic factors.
A minimally interactive method to segment enlarged lymph nodes in 3D thoracic CT images using a rotatable spiral-scanning technique
Lei Wang,
Jan Hendrik Moltz,
Lars Bornemann,
et al.
Precise size measurement of enlarged lymph nodes is a significant indicator for diagnosing malignancy, follow-up
and therapy monitoring of cancer diseases. The presence of diverse sizes and shapes, inhomogeneous enhancement
and the adjacency to neighboring structures with similar intensities, make the segmentation task challenging.
We present a semi-automatic approach requiring minimal user interaction to quickly and robustly segment enlarged lymph nodes. First, a stroke approximating the largest diameter of a specific lymph node is drawn
manually, from which a volume of interest (VOI) is determined. Second, based on a statistical analysis of the intensities in the dilated stroke area, a region growing procedure is utilized within the VOI to create an initial
segmentation of the target lymph node. Third, a rotatable spiral-scanning technique is proposed to resample
the 3D boundary surface of the lymph node to a 2D boundary contour in a transformed polar image. The
boundary contour is found by seeking the optimal path in the 2D polar image with a dynamic programming algorithm
and eventually transformed back to 3D. Ultimately, the boundary surface of the lymph node is determined using
an interpolation scheme followed by post-processing steps. To test the robustness and efficiency of our method,
a quantitative evaluation was conducted with a dataset of 315 lymph nodes acquired from 79 patients with
lymphoma and melanoma. Compared to the reference segmentations, an average Dice coefficient of 0.88 with
a standard deviation of 0.08, and an average absolute surface distance of 0.54mm with a standard deviation of
0.48mm, were achieved.
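The optimal-path search in the transformed polar image is a standard dynamic programming problem: find the cheapest left-to-right path with bounded row steps. A minimal sketch on a hypothetical cost image, not the authors' implementation:

```python
def optimal_boundary(cost):
    """Minimum-cost left-to-right path through a 2D polar cost image,
    moving at most one row per column (dynamic programming)."""
    rows, cols = len(cost), len(cost[0])
    acc = [[cost[r][0]] + [0.0] * (cols - 1) for r in range(rows)]
    back = [[0] * cols for _ in range(rows)]
    for c in range(1, cols):
        for r in range(rows):
            prev = [(acc[pr][c - 1], pr) for pr in (r - 1, r, r + 1) if 0 <= pr < rows]
            best, back[r][c] = min(prev)
            acc[r][c] = cost[r][c] + best
    # backtrack from the cheapest end row
    r = min(range(rows), key=lambda rr: acc[rr][cols - 1])
    path = [r]
    for c in range(cols - 1, 0, -1):
        r = back[r][c]
        path.append(r)
    return path[::-1]

# low cost (strong edge evidence) marks the boundary to recover
cost_image = [
    [9, 9, 9, 9],
    [1, 1, 9, 9],
    [9, 9, 1, 1],
    [9, 9, 9, 9],
]
boundary = optimal_boundary(cost_image)  # -> [1, 1, 2, 2]
```

In the paper, the columns correspond to angular samples of the spiral scan and the recovered row profile is transformed back to a 3D boundary surface.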
Multi-level feature extraction for skin lesion segmentation in dermoscopic images
This paper presents a novel approach in computer aided skin lesion segmentation of dermoscopic images. We
apply spatial and color features in order to model the lesion growth pattern. The decomposition is done by
repeatedly clustering pixels into dark and light sub-clusters. A novel tree structure based representation of the
lesion growth pattern is constructed by matching every pixel sub-cluster with a node in the tree structure. This
model provides a powerful framework to extract features and to train models for lesion segmentation. The model
employed allows features to be extracted at multiple layers of the tree structure, enabling a more descriptive
feature set. Additionally, there is no need for preprocessing such as color calibration or artifact disocclusion.
Preliminary features (mean over RGB color channels) are extracted for every pixel over four layers of the growth
pattern model and are used in association with radial distance as a spatial feature to segment the lesion. The
resulting per pixel feature vectors of length 13 are used in a supervised learning model for estimating parameters
and segmenting the lesion. A dataset containing 116 challenging images from dermoscopic atlases is used to
validate the method via a 10-fold cross validation procedure. Results of segmentation are compared with six
other skin lesion segmentation methods. Our method outperforms five other methods and performs competitively with another. We achieve a per-pixel sensitivity/specificity of 0.890 and 0.901, respectively.
Automated segmentation of tumors on bone scans using anatomy-specific thresholding
Quantification of overall tumor area on bone scans may be a potential biomarker for treatment response assessment
and has, to date, not been investigated. Segmentation of bone metastases on bone scans is a fundamental
step for this response marker. In this paper, we propose a fully automated computerized method for the segmentation
of bone metastases on bone scans, taking into account characteristics of different anatomic regions. A scan
is first segmented into anatomic regions via an atlas-based segmentation procedure, which involves non-rigidly
registering a labeled atlas scan to the patient scan. Next, an intensity normalization method is applied to account
for varying radiotracer dosing levels and scan timing. Lastly, lesions are segmented via anatomic region-specific intensity thresholding. Thresholds are chosen by receiver operating characteristic (ROC) curve analysis
against manual contouring by board certified nuclear medicine physicians. A leave-one-out cross validation of
our method on a set of 39 bone scans with metastases marked by 2 board-certified nuclear medicine physicians
yielded a median sensitivity of 95.5%, and specificity of 93.9%. Our method was compared with a global intensity
thresholding method. The results show a comparable sensitivity and significantly improved overall specificity,
with a p-value of 0.0069.
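Choosing a per-region threshold from ROC analysis can be sketched with the Youden index, one common criterion; the criterion actually used in the paper is not specified here, and the values below are hypothetical normalized uptakes for a single anatomic region:

```python
def best_threshold(lesion_vals, background_vals):
    """Pick the intensity threshold maximizing sensitivity + specificity - 1
    (the Youden index) over all observed values."""
    candidates = sorted(set(lesion_vals) | set(background_vals))
    def youden(t):
        sens = sum(v >= t for v in lesion_vals) / len(lesion_vals)
        spec = sum(v < t for v in background_vals) / len(background_vals)
        return sens + spec - 1.0
    return max(candidates, key=youden)

lesion = [0.8, 0.9, 0.7, 0.85]
background = [0.2, 0.3, 0.4, 0.6]
best_t = best_threshold(lesion, background)   # -> 0.7
```

Repeating this per anatomic region is what lets the method adapt to the different normal-uptake statistics of, e.g., spine versus pelvis.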
Abdomen
Automated computer-aided detection of prostate cancer in MR images: from a whole-organ to a zone-based approach
MRI has shown to have great potential in prostate cancer localization and grading, but interpreting those
exams requires expertise that is not widely available. Therefore, CAD applications are being developed to aid
radiologists in detecting prostate cancer. Existing CAD applications focus on the prostate as a whole. However,
in clinical practice transition zone cancer and peripheral zone cancer are considered to have different appearances.
In this paper we present zone-specific CAD, in addition to an atlas based segmentation technique which includes
zonal segmentation. Our CAD system consists of a detection and a classification stage. Prior to the detection
stage, the prostate is segmented into two zones. After segmentation, features are extracted. Subsequently, a
likelihood map is generated on which local maxima detection is performed. For each local maximum a region
is segmented. In the classification stage additional shape features are calculated, after which the regions are
classified. Validation was performed on 288 data sets with MR-guided biopsy results as ground truth. Free-response Receiver Operating Characteristic (FROC) analysis was used for statistical evaluation. The difference
between whole-prostate and zone-specific CAD was assessed using the difference between the FROCs. Our results
show that evaluating the two zones separately results in an increase in performance compared to whole-prostate
CAD. The FROC curves at 0.1, 1, and 3 false positives have sensitivities of 0.0, 0.55, and 0.72 for whole-prostate CAD and 0.08, 0.57, and 0.80 for zone-specific CAD. The FROC curve of the zone-specific CAD also showed significantly
better performance overall (p < 0.05).
Maximal partial AUC feature selection in computer-aided detection of hepatocellular carcinoma in contrast-enhanced hepatic CT
A major challenge in the current computer-aided detection (CADe) of hepatocellular carcinomas (HCCs) in contrast-enhanced hepatic CT is to reduce the number of false-positive (FP) detections while maintaining a high sensitivity level.
In this paper, we propose a feature selection method based on a sequential forward floating selection procedure coupled
with a linear discriminant analysis classifier to improve the classification performance in computerized detection of
HCCs in contrast-enhanced hepatic CT. The proposed method selected the most relevant features that would maximize
the partial area under the receiver-operating-characteristic (ROC) curve (partial AUC) value, which would essentially
lead to the maximum classification performance in the computer-aided detection scheme in a clinical setting. The partial
AUC value is defined as the normalized AUC value in the high sensitivity region of the ROC curve, which is of clinical
importance. In order to test the performance of the proposed method, we compared it against the popular stepwise
feature selection method based on Wilks' lambda and a recently developed maximal AUC feature selection for an HCC
database (23 HCCs and 1279 non-HCCs). We extracted 88 morphologic, gray-level-based, and texture features from the
segmented lesion candidate regions in the hepatic CT images. The proposed method selected 9 features and achieved
100% sensitivity at 5.5 FPs per patient. Experiments showed a significant improvement in the performance of the
classifier with the proposed feature selection method over that with the popular stepwise feature selection based on
Wilks' lambda (17.3 FPs per patient) and the maximal AUC feature selection (10.0 FPs per patient) in terms of AUC
values and FP rates.
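The partial AUC criterion, the normalized area of the ROC curve restricted to the high-sensitivity region, can be sketched as follows. This is one straightforward reading of that definition, with hypothetical scores and cut-off, not the authors' exact formulation:

```python
def roc_points(pos, neg):
    """ROC operating points swept over all observed score thresholds."""
    pts = [(0.0, 0.0)]
    for t in sorted(set(pos) | set(neg), reverse=True):
        tpr = sum(p >= t for p in pos) / len(pos)
        fpr = sum(n >= t for n in neg) / len(neg)
        pts.append((fpr, tpr))
    pts.append((1.0, 1.0))
    return pts

def partial_auc(pos, neg, min_sens=0.9):
    """Trapezoidal area of the ROC curve above sensitivity `min_sens`,
    normalized by (1 - min_sens) so a perfect classifier scores 1.0."""
    pts = roc_points(pos, neg)
    area = 0.0
    for (f0, t0), (f1, t1) in zip(pts, pts[1:]):
        h0 = max(t0 - min_sens, 0.0)
        h1 = max(t1 - min_sens, 0.0)
        area += (f1 - f0) * (h0 + h1) / 2.0
    return area / (1.0 - min_sens)

perfect = partial_auc([0.9, 0.8], [0.2, 0.1])   # perfectly separated scores
weaker = partial_auc([0.9, 0.3], [0.8, 0.1])    # overlapping scores
```

Inside the sequential forward floating selection loop, a candidate feature subset would be scored by this quantity and kept only if it raises it.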
Automatic fetal weight estimation using 3D ultrasonography
Shaolei Feng,
Kevin S. Zhou,
Wesley Lee
This paper proposes a novel and fast approach for automatic estimation of the fetal weights from 3D ultrasound
data. Conventional manual approaches are time-consuming and inconsistent across sonographers because of the difficulty of tracing limb boundaries in complicated ultrasound limb volumes. It takes up to 10 minutes to manually trace the surface borders of a 20 cm long limb. Using our automatic approach, the time is
significantly reduced to 2.1 seconds for measuring the weights based on the entire limb. Experiments with the
automatic approach also show comparable standard deviation and limits of agreement to the manual approaches.
Segmentation of urinary bladder in CT Urography (CTU) using CLASS
We are developing a computerized system for bladder segmentation on CTU, as a critical component for
computer-aided diagnosis of bladder cancer. A challenge for bladder segmentation is the presence of regions without contrast (NC) and regions filled with IV contrast (C). We are developing a Conjoint Level set Analysis and Segmentation
System (CLASS) specifically for this application. CLASS performs a series of image processing tasks: preprocessing,
initial segmentation, and 3D and 2D level set segmentation and post-processing, designed according to the
characteristics of the bladder in CTU. The NC and the C regions of the bladder were segmented separately in CLASS.
The final contour is obtained in the post-processing stage by the union of the NC and C contours. Seventy bladders (31
containing lesions, 24 containing wall thickening, and 15 normal) were segmented. The performance of CLASS was
assessed by rating the quality of the contours on a 5-point scale (1= "very poor", 3= "fair", 5 = "excellent"). For the 53
partially contrast-filled bladders, the average quality ratings for the 53 NC and 53 C regions were 4.0±0.7 and 4.0±1.0,
respectively. 46 NC and 41 C regions were given quality ratings of 4 or above. Only 2 NC and 5 C regions had ratings
under 3. The average quality ratings for the remaining 12 completely non-contrast (NC) and 5 completely contrast-filled (C) bladder contours were 3.3±1.0 and 3.4±0.5, respectively. After combining the NC and C contours for each of the 70
bladders, 46 had quality ratings of 4 or above. Only 4 had ratings under 3. The average quality rating was 3.8±0.7. The
results demonstrate the potential of CLASS for automated segmentation of the bladder.
Vascular
Automatic detection of coronary stent struts in intravascular OCT imaging
Kai Pin Tung,
Wen Zhe Shi,
Luis Pizarro,
et al.
Optical coherence tomography (OCT) is a light-based, high resolution imaging technique to guide stent deployment
procedure for stenosis. OCT can accurately differentiate the most superficial layers of the vessel wall as
well as stent struts and the vascular tissue surrounding them. In this paper, we automatically detect the struts
of coronary stents present in OCT sequences. We propose a novel method to detect the strut shadow zone and
accurately segment and reconstruct the strut in 3D. The estimation of the position of the strut shadow zone
is the key requirement that enables the strut segmentation. After identification of the shadow zone, we use a probability map to estimate stent strut positions. This method can be applied to cross-sectional OCT images
to detect the struts. Validation is performed using simulated data as well as four in-vivo OCT sequences, and
the accuracy of strut detection is over 90%. The comparison against manual expert segmentation demonstrates
that the proposed strut identification is robust and accurate.
A robust automated method to detect stent struts in 3D intravascular optical coherence tomographic image sequences
Intravascular optical coherence tomography (IVOCT) provides very high resolution cross-sectional image sequences of
vessels. It has been rapidly adopted for stent implantation and its follow-up evaluation. Given the large number of stent
struts in a single image sequence, only automated detection methods are feasible. In this paper, we present an automated
stent strut detection technique which requires neither lumen nor vessel wall segmentation. To detect strut-pixel
candidates, both global intensity histograms and local intensity profiles of the raw polar images are used. Gaussian
smoothing is applied, followed by Prewitt compass filters, to detect the trailing shadow of each strut. The
shadow edge positions assist the strut-pixel candidates clustering. In the end, a 3D guide wire filter is applied to remove
the guide wire from the detection results. For validation, two experts marked 6738 struts in 1021 frames in 10 IVOCT
image sequences from a one-year follow up study. The struts were labeled as malapposed, apposed or covered together
with the image quality (high, medium, low). The inter-observer agreement was 96%. The algorithm was validated for
different combinations of strut status and image quality. Compared to the manual results, 93% of the struts were
correctly detected by the new method. For each combination, the lowest accuracy was 88%, which demonstrates robustness across different situations. The presented method can detect struts automatically regardless of the strut status or the
image quality, which can be used for quantitative measurement, 3D reconstruction and visualization of the implanted
stents.
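The shadow-detection step above (Gaussian smoothing followed by oriented Prewitt-style filtering of the polar image) can be sketched as follows; the kernel and its orientation are a generic choice, not necessarily the paper's "specified compass filters":

```python
import numpy as np
from scipy.ndimage import correlate, gaussian_filter

def shadow_edge_responses(polar_img, sigma=1.0):
    # Smooth the polar image, then correlate with a Prewitt-style kernel
    # oriented across the columns: `falling` peaks where intensity drops
    # into a dark strut shadow, `rising` where it recovers behind it.
    sm = gaussian_filter(np.asarray(polar_img, dtype=float), sigma)
    kernel = np.array([[1.0, 0.0, -1.0]] * 3)  # bright-to-dark transition
    falling = correlate(sm, kernel)
    rising = correlate(sm, -kernel)
    return falling, rising
```

Candidate strut pixels would then be grouped under the columns where these responses peak.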
Estimation of prenatal aorta intima-media thickness in ultrasound examination
Show abstract
Prenatal events such as intrauterine growth restriction have been shown to be associated with an increased thickness of
abdominal aorta in the fetus. Therefore the measurement of abdominal aortic intima-media thickness (aIMT) has been
recently considered a sensitive marker of atherosclerosis risk. To date, measurements of aortic diameter and aIMT have been performed manually on fetal ultrasound (US) images and are thus susceptible to intra- and inter-operator variability. This work
introduces an automatic algorithm that identifies abdominal aorta and estimates its diameter and aIMT from videos
recorded during routine third trimester ultrasonographic fetal biometry.
Firstly, in each frame, the algorithm locates and segments the region corresponding to aorta by means of an active
contour driven by two different external forces: a static vector field convolution force and a dynamic pressure force.
Then, in each frame, the mean diameter of the vessel is computed to reconstruct the cardiac cycle: we expect the diameter to follow a sinusoidal trend according to the heart rate. From the obtained sinusoid, we identify the frames
corresponding to the end diastole and to the end systole. Finally, in these frames we assess the aIMT. According to its
definition, we consider as aIMT the distance between the leading edge of the blood-intima interface, and the leading
edge of the media-adventitia interface on the far wall of the vessel. The correlation between end-diastole and end-systole
aIMT automatic and manual measures is 0.90 and 0.84 respectively.
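The cardiac-cycle reconstruction described above can be illustrated with a least-squares sinusoid fit to the per-frame mean diameters; the heart-rate search band and frequency grid below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def locate_cardiac_phases(diameters, fps, hr_range=(1.5, 3.5), n_freq=200):
    # Least-squares fit of d(t) = a + b*sin(wt) + c*cos(wt) over a grid of
    # candidate heart-rate frequencies (hr_range, in Hz, is an assumed
    # band); the best-fitting sinusoid gives the frames of end diastole
    # (largest diameter) and end systole (smallest diameter).
    d = np.asarray(diameters, dtype=float)
    t = np.arange(len(d)) / fps
    best_err, best_fit = np.inf, None
    for f in np.linspace(hr_range[0], hr_range[1], n_freq):
        w = 2.0 * np.pi * f
        A = np.column_stack([np.ones_like(t), np.sin(w * t), np.cos(w * t)])
        coef = np.linalg.lstsq(A, d, rcond=None)[0]
        fit = A @ coef
        err = float(np.sum((d - fit) ** 2))
        if err < best_err:
            best_err, best_fit = err, fit
    return int(np.argmax(best_fit)), int(np.argmin(best_fit))
```

The aIMT would then be measured only in the two returned frames.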
Pulmonary vessel segmentation utilizing curved planar reformation and optimal path finding (CROP) in computed tomographic pulmonary angiography (CTPA) for CAD applications
Show abstract
Vessel segmentation is a fundamental step in an automated pulmonary embolism (PE) detection system. The purpose of
this study is to improve the segmentation scheme for pulmonary vessels affected by PE and other lung diseases. We have
developed a multiscale hierarchical vessel enhancement and segmentation (MHES) method for pulmonary vessel tree
extraction based on the analysis of eigenvalues of Hessian matrices. However, it is difficult to segment the pulmonary
vessels accurately under suboptimal conditions, such as vessels occluded by PEs, surrounded by lymphoid tissues or
lung diseases, and crossing with other vessels. In this study, we developed a new vessel refinement method utilizing
curved planar reformation (CPR) technique combined with an optimal path finding method (MHES-CROP). The MHES-segmented vessels, straightened in the CPR volume, were refined using adaptive gray-level thresholding, where the local
threshold was obtained from least-square estimation of a spline curve fitted to the gray levels of the vessel along the
straightened volume. An optimal path finding method based on Dijkstra's algorithm was finally used to trace the correct
path for the vessel of interest. Two and eight CTPA scans were randomly selected as training and test data sets,
respectively. Forty volumes of interest (VOIs) containing "representative" vessels were manually segmented by a
radiologist experienced in CTPA interpretation and used as reference standard. The results show that, for the 32 test
VOIs, the average percentage volume error relative to the reference standard was improved from 32.9±10.2% using the
MHES method to 9.9±7.9% using the MHES-CROP method. The accuracy of vessel segmentation was improved
significantly (p<0.05). The intraclass correlation coefficient (ICC) of the segmented vessel volume between the
automated segmentation and the reference standard was improved from 0.919 to 0.988. Quantitative comparison of the
MHES method and the MHES-CROP method with the reference standard was also evaluated by the Bland-Altman plot.
This preliminary study indicates that the MHES-CROP method has the potential to improve PE detection.
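The optimal-path step can be sketched with a textbook Dijkstra search over a 2D cost map standing in for the straightened CPR volume; the cost definition and 4-connectivity are illustrative choices, not the paper's exact formulation:

```python
import heapq
import numpy as np

def dijkstra_path(cost, start, goal):
    # Minimal-cost 4-connected path through a 2D cost map, e.g. an inverted
    # vessel-likelihood map along the straightened volume.
    h, w = cost.shape
    dist = {start: cost[start]}
    prev = {}
    pq = [(cost[start], start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, np.inf):
            continue  # stale heap entry
        r, c = u
        for v in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= v[0] < h and 0 <= v[1] < w:
                nd = d + cost[v]
                if nd < dist.get(v, np.inf):
                    dist[v] = nd
                    prev[v] = u
                    heapq.heappush(pq, (nd, v))
    # walk the predecessor chain back from the goal
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
```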
Three-dimensional semi-automated segmentation of carotid atherosclerosis from three-dimensional ultrasound images
Show abstract
Three-dimensional ultrasound (3DUS) provides non-invasive and precise measurements of carotid atherosclerosis that
directly reflect arterial wall abnormalities that are thought to be related to stroke risk. Here we describe a three-dimensional segmentation method based on the sparse field level set method to automate the segmentation of the media-adventitia (MAB) and lumen-intima (LIB) boundaries of the common carotid artery from 3DUS images. To initiate the
process, an expert chooses four anchor points on each boundary on a subset of transverse slices that are orthogonal to the
axis of the artery. An initial surface is generated using the anchor points as initial guess for the segmentation. The MAB
is segmented first using five energies: length minimization energy, local region-based energy, edge-based energy,
anchor point-based energy, and local smoothness energy. Five energies are also used for the LIB segmentation: length
minimization energy, local region-based energy, global region-based energy, anchor point-based energy, and boundary
separation-based energy. The algorithm was evaluated with respect to manual segmentations on a slice-by-slice basis
using 15 3DUS images. To generate the results in this paper, an inter-slice distance of 2 mm was used for the initialization. For
the MAB and LIB segmentations, our method yielded Dice coefficients of more than 92% and sub-millimeter values for
mean and maximum absolute distance errors. Our method also yielded a vessel wall volume error of 7.1% ± 3.4%. The
realization of a semi-automated algorithm will aid in the translation of 3DUS measurements to clinical research for the
rapid, non-invasive, and economical monitoring of atherosclerotic disease.
Lung
Automatic classification of pulmonary function in COPD patients using trachea analysis in chest CT scans
Show abstract
Chronic Obstructive Pulmonary Disease (COPD) is a chronic lung disease that is characterized by airflow
limitation. COPD is clinically diagnosed and monitored using pulmonary function testing (PFT), which
measures global inspiration and expiration capabilities of patients and is time-consuming and labor-intensive.
It is becoming standard practice to obtain paired inspiration-expiration CT scans of COPD patients. Predicting
the PFT results from the CT scans would alleviate the need for PFT testing. It is hypothesized that
the change of the trachea during breathing might be an indicator of tracheomalacia in COPD patients and
correlate with COPD severity. In this paper, we propose to automatically measure morphological changes in
the trachea from paired inspiration and expiration CT scans and investigate the influence on COPD GOLD
stage classification. The trachea is automatically segmented and the trachea shape is encoded using the
lengths of rays cast from the center of gravity of the trachea. These features are used in a classifier, combined
with emphysema scoring, to attempt to classify subjects into their COPD stage. A database of 187
subjects, well distributed over the COPD GOLD stages 0 through 4 was used for this study. The data was
randomly divided into training and test set. Using the training scans, a nearest mean classifier was trained
to classify the subjects into their correct GOLD stage using either emphysema score, tracheal shape features,
or a combination. Combining the proposed trachea shape features with the emphysema score improved the GOLD-stage classification performance by 11%, to 51%. In addition, an accuracy of 80% was achieved in
distinguishing healthy subjects from COPD patients.
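The nearest mean classifier used above is simple enough to sketch directly; the feature extraction and emphysema scoring are omitted, so this shows only the final classification step under generic assumptions:

```python
import numpy as np

class NearestMeanClassifier:
    # Each class is represented by its training-feature centroid; a test
    # vector is assigned the label of the closest centroid. Feature scaling
    # and the actual trachea-shape features are not reproduced here.
    def fit(self, X, y):
        self.labels_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.labels_])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.means_[None, :, :], axis=2)
        return self.labels_[np.argmin(d, axis=1)]
```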
Towards exaggerated emphysema stereotypes
Show abstract
Classification is widely used in the context of medical image analysis. In order to illustrate the mechanism of a classifier, we introduce the notion of an exaggerated image stereotype based on training data and a trained
classifier. The stereotype of some image class of interest should emphasize/exaggerate the characteristic patterns
in an image class and visualize the information the employed classifier relies on. This is useful for gaining insight
into the classification and serves for comparison with the biological models of disease.
In this work, we build exaggerated image stereotypes by optimizing an objective function which consists of a
discriminative term based on the classification accuracy, and a generative term based on the class distributions.
A gradient descent method based on iterated conditional modes (ICM) is employed for optimization. We use
this idea with Fisher's linear discriminant rule and assume a multivariate normal distribution for samples within
a class. The proposed framework is applied to computed tomography (CT) images of lung tissue with emphysema.
The synthesized stereotypes illustrate the exaggerated patterns of lung tissue with emphysema, which is
underpinned by three different quantitative evaluation methods.
An improved automatic computer aided tube detection and labeling system on chest radiographs
Show abstract
Tubes such as the endotracheal (ET) tube, used to maintain the patient's airway, and the nasogastric (NG) tube, used to feed the patient and drain the contents of the stomach, are very commonly used in Intensive Care Units (ICUs). The placement of these tubes is critical for their proper functioning, and improper tube placement can even be fatal. Bedside chest radiographs are considered the quickest and safest method to check the placement of these tubes. Tertiary ICUs typically generate
over 250 chest radiographs per day to confirm tube placement. This paper develops a new fully automatic prototype
computer-aided detection (CAD) system for tube detection on bedside chest radiographs. The core of the CAD system is a randomized algorithm that selects tubes based on their average repeatability from seed points. The CAD algorithm
is designed as a 5 stage process: Preprocessing (removing borders, histogram equalization, anisotropic filtering),
Anatomy Segmentation (to identify neck, esophagus, and abdomen ROIs), Seed Generation, Region Growing and Tube
Selection. The preliminary evaluation was carried out on 64 cases. The prototype CAD system was able to detect ET
tubes with a True Positive Rate of 0.93 at a False Positive Rate of 0.02/image, and NG tubes with a True Positive Rate of 0.84 at a False Positive Rate of 0.02/image. The results from the prototype system show that it is feasible to
automatically detect both tubes on chest radiographs, with the potential to significantly speed the delivery of imaging
services while maintaining high accuracy.
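The Region Growing stage can be illustrated with a basic 4-connected flood fill from a seed point; the intensity-tolerance criterion is an illustrative stand-in for the paper's repeatability-based selection:

```python
from collections import deque

import numpy as np

def region_grow(img, seed, tol):
    # Breadth-first 4-connected growth from `seed`, accepting pixels whose
    # intensity stays within `tol` of the seed intensity (an assumed
    # homogeneity criterion, not the authors' exact rule).
    h, w = img.shape
    ref = img[seed]
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(img[nr, nc] - ref) <= tol):
                mask[nr, nc] = True
                q.append((nr, nc))
    return mask
```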
Detecting airway remodeling in COPD and emphysema using low-dose CT imaging
Show abstract
In this study, we quantitatively characterize lung airway remodeling caused by smoking-related emphysema and Chronic
Obstructive Pulmonary Disease (COPD), in low-dose CT scans. To that end, we established three groups of individuals:
subjects with COPD (n=35), subjects with emphysema (n=38) and healthy smokers (n=28). All individuals underwent a
low-dose CT scan, and the images were analyzed as described next. First, the lung airways were segmented using a fast marching method and labeled according to their generation. Along each airway segment, cross-section images were resampled orthogonal to the airway axis. Next, 128 rays were cast from the center of the airway lumen in each cross-section slice. Finally, we used an integral-based method to measure lumen radius, wall thickness, mean wall percentage
and mean peak wall attenuation on every cast ray. Our analysis shows that both the mean global wall thickness and the
lumen radius of the airways of both COPD and emphysema groups were significantly different from those of the healthy
group. In addition, the wall thickness change starts at the 3rd airway generation in the COPD patients compared with
emphysema patients, who display the first significant changes starting in the 2nd generation. In conclusion, it is shown
that airway remodeling happens in individuals suffering from either COPD or emphysema, with some local difference
between both groups, and that we are able to detect and accurately quantify this process using images of low-dose CT
scans.
Computerized scheme for lung nodule detection in multi-projection chest radiography
Show abstract
Our purposes are to develop a conventional computer-aided diagnostic (CAD) scheme and a new fusion CAD
scheme for the detection of lung nodules in multi-projection chest radiography, and to verify that information fused
from the multi-projection chest radiography can greatly improve the performance of the conventional CAD scheme.
The conventional CAD scheme processed each of the three projection images of a subject independently, and
discarded the correlation information between the three images. The fusion CAD scheme registered all candidates
detected by the conventional CAD scheme in the three images of a subject, and integrated the correlation
information between the registered candidates to remove false positives. The CAD schemes were trained and
evaluated on a database with 97 subjects. At the sensitivities of 70%, 65% and 60%, the conventional CAD scheme
reported 20.4, 13.6 and 8.8 false positives per image, respectively, whereas the fusion CAD scheme reported 4.5, 2.8
and 1.2 false positives per image, respectively. The fusion of correlation information can markedly improve the
performance of CAD scheme for lung nodule detection.
Automated scoring of regional lung perfusion in children from contrast enhanced 3D MRI
Show abstract
MRI perfusion images give information about regional lung function and can be used to detect pulmonary
pathologies in cystic fibrosis (CF) children. However, manual assessment of the percentage of pathologic tissue in
defined lung subvolumes suffers from large inter- and intra-observer variation, making it difficult to determine disease
progression consistently. We present an automated method to calculate a regional score for this purpose. First,
lungs are located based on thresholding and morphological operations. Second, statistical shape models of left
and right children's lungs are initialized at the determined locations and used to precisely segment morphological
images. Segmentation results are transferred to perfusion maps and employed as masks to calculate perfusion
statistics. An automated threshold to identify pathologic tissue is calculated and used to compute accurate regional scores. We evaluated the method on 10 MRI images and achieved an average surface distance of less
than 1.5 mm compared to manual reference segmentations. Pathologic tissue was detected correctly in 9 cases.
The approach seems suitable for detecting early signs of CF and monitoring response to therapy.
Colon
Computer-aided detection of polyps in CT colonography by means of AdaBoost
Show abstract
Computer-aided detection (CADe) has been investigated for assisting radiologists in detecting polyps in CT
colonography (CTC). One of the major challenges in current CADe of polyps in CTC is to improve the specificity
without sacrificing the sensitivity. We have developed several CADe schemes based on a massive-training framework
with different nonlinear regression models such as neural network regression, support vector regression, and Gaussian
process regression. Individual CADe schemes based on different nonlinear regression models, however, achieved
comparable results. In this paper, we propose to use the AdaBoost algorithm to combine different regression models in
CADe schemes for improving the specificity without sacrificing the sensitivity. To test the performance of the proposed
approach, we compared it with individual regression models in the distinction between polyps and various types of false
positives (FPs). Our CTC database consisted of 246 CTC datasets obtained from 123 patients in the supine and prone
positions. The testing set contained 93 patients including 19 polyps in seven patients and 86 negative patients with 474
FPs produced by an original CADe scheme. The AdaBoost algorithm combining multiple massive-training regression
models achieved a performance higher than that of each individual regression model, yielding a 94.7% (18/19) by-polyp sensitivity at an FP rate of 2.0 (188/93) per patient in a leave-one-lesion-out cross-validation test.
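The idea of boosting several regression models can be sketched with a discrete AdaBoost over threshold stumps built on the per-model candidate scores; this is a schematic re-implementation, not the authors' code:

```python
import numpy as np

def boost_scores(scores, y, rounds=5):
    # Discrete AdaBoost: `scores` has one column per regression model and
    # one row per candidate; labels y are in {-1, +1}. Each round picks the
    # threshold stump (column, threshold, polarity) with the lowest
    # weighted error, then re-weights the training candidates.
    n, m = scores.shape
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        best = None
        for j in range(m):
            for thr in np.unique(scores[:, j]):
                for sign in (1.0, -1.0):
                    pred = np.where(sign * (scores[:, j] - thr) >= 0, 1.0, -1.0)
                    err = float(w[pred != y].sum())
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign, pred)
        err, j, thr, sign, pred = best
        err = max(err, 1e-12)  # guard against a perfect stump
        alpha = 0.5 * np.log((1.0 - err) / err)
        w = w * np.exp(-alpha * y * pred)
        w = w / w.sum()
        ensemble.append((alpha, j, thr, sign))

    def predict(S):
        agg = np.zeros(len(S))
        for alpha, j, thr, sign in ensemble:
            agg += alpha * np.where(sign * (S[:, j] - thr) >= 0, 1.0, -1.0)
        return np.where(agg >= 0, 1.0, -1.0)

    return predict
```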
Automated classification of colon polyps in endoscopic image data
Show abstract
Colon cancer is the third most commonly diagnosed type of cancer in the US. In recent years, however, early
diagnosis and treatment have caused a significant rise in the five year survival rate. Preventive screening is
often performed by colonoscopy (endoscopic inspection of the colon mucosa). Narrow Band Imaging (NBI) is
a novel diagnostic approach highlighting blood vessel structures on polyps which are an indicator for future
cancer risk.
In this paper, we review our inter- and intra-observer independent system for the automated classification of polyps into hyperplasias and adenomas based on vessel structures, with the aim of further improving the classification performance. To surpass previous performance limitations, we derive a novel vessel segmentation approach,
extract 22 features to describe complex vessel topologies, and apply three feature selection strategies.
Tests are conducted on 286 NBI images with diagnostically important and challenging polyps (10 mm
or smaller) taken from our representative polyp database. Evaluations are based on ground truth data
determined by histopathological analysis. Feature selection by Simulated Annealing yields the best result
with a prediction accuracy of 96.2% (sensitivity: 97.6%, specificity: 94.2%) using eight features.
Future development aims at implementing a demonstrator platform to begin clinical trials at University
Hospital Aachen.
Automatic colonic fold segmentation for computed tomography colonography
Show abstract
The human colon has a complex structure, largely because of the haustral folds: thin, flat protrusions inherently attached to the colon wall. These structures may complicate the shape analysis for computer-aided detection of colonic polyps (CADpolyp); however, they can serve as a solid reference during image
interpretation in computed tomographic colonography (CTC). Therefore, in this study, based on a clear model of the
haustral fold boundaries, we employ level set method to automatically segment the fold surfaces. We believe the
segmented folds have the potential to significantly benefit various post-procedures in CTC, e.g., supine-prone
registration, synchronized image interpretation, automatic polyp matching, CADpolyp, teniae coli extraction, etc. For
the first time, with assistance from physician experts, we established the ground truth of haustral fold boundaries for 15 real patient datasets from two medical centers, based on which we evaluated our algorithm. The results demonstrated that
about 92.7% of the folds are successfully detected. Furthermore, we explored the segmented area ratio (SAR), i.e., the
ratio between the areas of the intersection and the union of the expert-drawn and the automatically-segmented folds, to
measure the accuracy of the segmentation algorithm. The averaged result of SAR=86.2% shows a good match between
the ground truth and our segmentation results.
Automated detection of colorectal lesions with dual-energy CT colonography
Show abstract
Conventional single-energy computed tomography colonography (CTC) tends to miss polyps 6 - 9 mm in size and
flat lesions. Dual-energy CTC (DE-CTC) provides more complete information about the chemical composition
of tissue than does conventional CTC. We developed an automated computer-aided detection (CAD) scheme
for detecting colorectal lesions in which dual-energy features were used to identify different bowel materials and
their partial-volume artifacts. Based on these features, the dual-energy CAD (DE-CAD) scheme extracted the
region of colon by use of a lumen-tracking method, detected lesions by use of volumetric shape features, and
reduced false positives by use of a statistical classifier. For validation, 20 patients were prepared for DE-CTC by
use of reduced bowel cleansing and orally administered fecal tagging with iodine and/or barium. The DE-CTC
was performed in dual positions by use of a dual-energy CT scanner (SOMATOM Definition, Siemens) at 140
kVp and 80 kVp energy levels. The lesions identified by subsequent same-day colonoscopy were correlated with
the DE-CTC data. The detection accuracies of the DE-CAD and conventional CAD schemes were compared by
use of leave-one-patient-out evaluation and a bootstrap analysis. There were 25 colonoscopy-confirmed lesions:
22 were 6 - 9 mm and 3 were flat lesions ≥10 mm in size. The DE-CAD scheme detected the large flat lesions
and 95% of the 6 - 9 mm lesions with 9.9 false positives per patient. The improvement in detection accuracy by
the DE-CAD was statistically significant.
Computer-aided marginal artery detection on computed tomographic colonography
Show abstract
Computed tomographic colonography (CTC) is a minimally invasive technique for colonic polyps and cancer
screening. The marginal artery of the colon, also known as the marginal artery of Drummond, is the blood
vessel that connects the inferior mesenteric artery with the superior mesenteric artery. The marginal artery
runs parallel to the colon for its entire length, providing the blood supply to the colon. Detecting the marginal
artery may benefit computer-aided detection (CAD) of colonic polyp. It can be used to identify teniae coli
based on their anatomic spatial relationship. It can also serve as an alternative marker for colon localization,
in case of colon collapse and inability to directly compute the endoluminal centerline. This paper proposes an
automatic method for marginal artery detection on CTC. To the best of our knowledge, this is the first work
presented for this purpose. Our method includes two stages. The first stage extracts the blood vessels in the
abdominal region. The eigenvalues of the Hessian matrix are used to detect line-like structures in the images. The second stage reduces the false positives from the first stage. We used two different masks to exclude the false
positive vessel regions. One is a dilated colon mask which is obtained by colon segmentation. The other is an
eroded visceral fat mask which is obtained by fat segmentation in the abdominal region. We tested our
method on a CTC dataset with 6 cases. Using ratio-of-overlap with manual labeling of the marginal artery as
the standard-of-reference, our method yielded true positive, false positive and false negative fractions of 89%,
33%, 11%, respectively.
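The first-stage line-structure detection via Hessian eigenvalues can be sketched in 2D; the response definition below is a generic Frangi-style choice rather than the paper's exact filter:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tubular_response(img, sigma=2.0):
    # Second derivatives of the Gaussian-smoothed image give the Hessian.
    # For a bright line, one eigenvalue is ~0 (along the vessel) and the
    # other is strongly negative (across it), so -l2 is a line-likeness.
    Hxx = gaussian_filter(img, sigma, order=(0, 2))
    Hyy = gaussian_filter(img, sigma, order=(2, 0))
    Hxy = gaussian_filter(img, sigma, order=(1, 1))
    half_tr = 0.5 * (Hxx + Hyy)
    disc = np.sqrt(0.25 * (Hxx - Hyy) ** 2 + Hxy ** 2)
    l2 = half_tr - disc  # smaller eigenvalue of the 2x2 Hessian
    return np.where(l2 < 0.0, -l2, 0.0)
```

In 3D the same construction uses the three eigenvalues of the 3x3 Hessian.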
Musculoskeletal
Automatic measurement of vertebral body deformations in CT images based on a 3D parametric model
Show abstract
Accurate and objective evaluation of vertebral body deformations represents an important part of the clinical diagnostics
and therapy of pathological conditions affecting the spine. Although modern clinical practice is oriented towards three-dimensional (3D) imaging techniques, the established methods for the evaluation of vertebral body deformations are
based on measurements in two-dimensional (2D) X-ray images. In this paper, we propose a method for automatic
measurement of vertebral body deformations in computed tomography (CT) images that is based on efficient modeling
of the vertebral body shape with a 3D parametric model. By fitting the 3D model to the vertebral body in the image,
quantitative description of normal and pathological vertebral bodies is obtained from the values of the 25 parameters of the
model. The evaluation of vertebral body deformations is based on the distance of the observed vertebral body from the
distribution of the parameter values of normal vertebral bodies in the parametric space. The distribution is obtained from
80 normal vertebral bodies in the training data set and verified with eight normal vertebral bodies in the control data set.
The statistically meaningful distance of eight pathological vertebral bodies in the study data set from the distribution of
normal vertebral bodies in the parametric space shows that the parameters can be used to successfully model vertebral
body deformations in 3D. The proposed method may therefore be used to assess vertebral body deformations in 3D or
provide clinically meaningful observations that are not available when using 2D methods that are established in clinical
practice.
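The distance-from-normal-distribution idea can be illustrated with a Mahalanobis distance in the 25-dimensional parameter space; the paper's exact distance measure may differ from this choice:

```python
import numpy as np

def deformation_score(params, normal_params):
    # Distance of one vertebral body's model parameters from the training
    # distribution of normal vertebral bodies, as a Mahalanobis distance:
    # sqrt((p - mu)^T C^{-1} (p - mu)). pinv guards against a singular
    # covariance when training samples are few.
    mu = normal_params.mean(axis=0)
    cov = np.cov(normal_params, rowvar=False)
    inv = np.linalg.pinv(cov)
    d = params - mu
    return float(np.sqrt(d @ inv @ d))
```

A large score flags the vertebral body as deformed relative to the normal training set.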
Pixel level image fusion for medical imaging: an energy minimizing approach
Show abstract
In an attempt to improve the visualisation techniques for diagnosis and treatment of musculoskeletal injuries,
we present a novel image fusion method for the pixel-wise fusion of CT and MR images. We focus on the spine and its related diseases, including osteophyte growth, degenerative disc disease and spinal stenosis. This will benefit the 50-75% of people who suffer from back pain, which accounts for 1.8% of all hospital stays in the United States.1 Pre-registered CT and MR image pairs were used. Rigid registration was performed
based on soft tissue correspondence. A pixel-wise image fusion algorithm has been designed to combine CT
and MR images into a single image. This is accomplished by minimizing an energy functional using a Graph
Cut approach. The functional is formulated to balance the similarity between the resultant image and the CT
image as well as between the resultant image and the MR image. Furthermore the variational smoothness of
the resultant image is considered in the energy functional (to enforce natural transitions between pixels). The
results have been validated based on the amount of significant detail preserved in the final fused image. Based
on bone cortex and disc / spinal cord areas, 95% of the relevant MR detail and 85% of the relevant CT detail
was preserved. This work has the potential to aid in patient diagnosis, surgery planning and execution along
with post operative follow up.
Detection of sclerotic bone metastases in the spine using watershed algorithm and graph cut
Show abstract
The early detection of bone metastases is important for determining the prognosis and treatment
of a patient. We developed a CAD system which detects sclerotic bone metastases in the spine on
CT images. After the spine is segmented from the image, a watershed algorithm detects lesion
candidates. The over-segmentation problem of the watershed algorithm is addressed by the novel
incorporation of a graph-cuts-driven merger. Thirty quantitative features for each detection are
computed to train a support vector machine (SVM) classifier. The classifier was trained on 12
clinical cases and tested on 10 independent clinical cases. Ground truth lesions were manually
segmented by an expert. Prior to classification, the system detected 87% (72/83) of the manually segmented lesions with volume greater than 300 mm³. On the independent test set, the sensitivity
was 71.2% (95% confidence interval (63.1%, 77.3%)) with 8.8 false positives per case.
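The watershed-based candidate detection can be sketched with SciPy's marker-based `watershed_ift`; the graph-cuts merging and SVM stages are omitted, and the marker rule below (bright cores as seeds) is an illustrative assumption:

```python
import numpy as np
from scipy import ndimage as ndi

def watershed_candidates(intensity, seed_thresh):
    # Voxels above seed_thresh become labeled markers, everything at zero
    # intensity becomes the background marker, and watershed_ift grows the
    # markers over the inverted intensity landscape. A schematic stand-in
    # for the paper's watershed candidate-detection stage.
    img = np.asarray(intensity, dtype=np.uint8)
    markers, n = ndi.label(img >= seed_thresh)
    markers = markers.astype(np.int16)
    markers[img == 0] = -1  # background marker
    labels = ndi.watershed_ift(255 - img, markers)
    return labels, n
```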
Multi-stage osteolytic spinal bone lesion detection from CT data with internal sensitivity control
M. Wels,
B. M. Kelm,
A. Tsymbal,
et al.
Show abstract
Spinal bone lesion detection is a challenging and important task in cancer diagnosis and treatment monitoring.
In this paper we present a method for fully-automatic osteolytic spinal bone lesion detection from 3D CT data.
It is a multi-stage approach that successively applies multiple discriminative models, i.e., multiple random forests, for lesion candidate detection and rejection to an input volume. For each detection stage, an internal control
mechanism ensures maintaining sensitivity on unseen true positive lesion candidates during training. This way
a pre-defined target sensitivity score of the overall system can be taken into account at the time of model
generation. For a lesion not only the center is detected but also, during post-processing, its spatial extension
along the three spatial axes defined by the surrounding vertebral body's local coordinate system. Our method
achieves a cross-validated sensitivity score of 75% and a mean false positive rate of 3.0 per volume on a data
collection consisting of 34 patients with 105 osteolytic spinal bone lesions. The median sensitivity score is 86%
at 2.0 false positives per volume.
Scoliosis curve type classification using kernel machine from 3D trunk image
Show abstract
Adolescent idiopathic scoliosis (AIS) is a deformity of the spine manifested by asymmetry and deformities of the external
surface of the trunk. Classification of scoliosis deformities according to curve type is used to plan management of scoliosis
patients. Currently, scoliosis curve type is determined based on X-ray examination. However, cumulative exposure to X-ray radiation significantly increases the risk of certain cancers. In this paper, we propose a robust system that can classify the scoliosis curve type from a non-invasive acquisition of the 3D trunk surface of the patients. The 3D image of the trunk is divided into patches, and local geometric descriptors characterizing the surface of the back are computed from each patch to form the features. We reduce the dimensionality using Principal Component Analysis, and
53 components were retained. In this work, a multi-class classifier is built with a least-squares support vector machine (LS-SVM), which is a kernel classifier. For this study, a new kernel was designed in order to achieve a more robust classifier than with polynomial and Gaussian kernels. The proposed system was validated using data from 103 patients with
different scoliosis curve types diagnosed and classified by an orthopedic surgeon from the X-ray images. The average rate
of successful classification was 93.3% with a better rate of prediction for the major thoracic and lumbar/thoracolumbar
types.
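The PCA dimensionality-reduction step (the paper retains 53 components) can be sketched with a plain SVD; `k` is left as a free parameter here, and the patch descriptors themselves are not reproduced:

```python
import numpy as np

def pca_reduce(X, k):
    # Center the descriptor matrix and project it onto the top-k principal
    # axes; the rows of Vt from the SVD of the centered data are the
    # principal component directions.
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```

The reduced vectors would then be fed to the kernel classifier.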
Digital Pathology I
Automated malignancy detection in breast histopathological images
Show abstract
Detection of malignancy from histopathological images of breast cancer is a labor-intensive and error-prone
process. To streamline it, we present an efficient computer-aided diagnostic system that can differentiate between cancerous and non-cancerous H&E (hematoxylin & eosin) biopsy samples. Our system uses novel textural, topological and morphometric features that take advantage of the distinctive patterns of the cell nuclei in breast cancer histopathological images. We use a Support Vector Machine classifier on these features to diagnose
malignancy. In conjunction with the maximum relevance - minimum redundancy feature selection technique, we
obtain high sensitivity and specificity. We have also investigated the effect of image compression on classification
performance.
Digital Pathology II
Follicular lymphoma grading using cell-graphs and multi-scale feature analysis
Show abstract
We present a method for the computer-aided histopathological grading of follicular lymphoma (FL) images based
on a multi-scale feature analysis. We analyze FL images using cell-graphs to characterize the structural organization
of the cells in tissues. Cell-graphs represent histopathological images with undirected and unweighted graphs
wherein the cytological components constitute the graph nodes and the approximate adjacencies of the components
are represented with edges. Using the features extracted from nuclei- and cytoplasm-based cell-graphs, a
classifier defines the grading of the follicular lymphoma images. The performance of this system is comparable
to that of our recently developed system that characterizes higher-level semantic description of tissues using
model-based intermediate representation (MBIR) and color-textural analysis. When tested with three different
classifiers, the combination of cell-graph based features with the MBIR and color-textural features followed by
a multi-scale feature selection is shown to achieve considerably higher classification accuracies than any set of
these feature sets can achieve separately.
Nucleus fingerprinting for the unique identification of Feulgen-stained nuclei
DNA Image Cytometry is a method for non-invasive cancer diagnosis which measures the DNA content of
Feulgen-stained nuclei. DNA content is measured with a microscope system, equipped with a digital camera,
used as a densitometer: the DNA content is estimated from the absorption of light passing through the nuclei.
However, a DNA Image Cytometry measurement is only valid if each nucleus is measured only once.
To assist the user in preventing multiple measurements of the same nucleus, we have developed a unique
digital identifier for the characterization of Feulgen-stained nuclei, the so-called Nucleus Fingerprint. Only nuclei
with a new fingerprint can be added to the measurement. This fingerprint is based on basic nucleus features,
the contour of the nucleus and the spatial relationship to nuclei in the vicinity. Based on this characterization,
a classifier for testing two nuclei for identity is presented.
In a pairwise comparison of ≈40000 pairs of mutually different nuclei, 99.5% were classified as different. In
another 450 tests, the fingerprints of the same nucleus recorded a second time were in all cases judged identical.
We therefore conclude that our Nucleus Fingerprint approach robustly prevents the repeated measurement of
nuclei in DNA Image Cytometry.
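The fingerprint idea (basic nucleus features plus contour and neighborhood geometry, compared by an identity test) can be illustrated with a toy version; the feature set and the tolerance-based identity test below are our own illustrative choices, not the paper's trained classifier:

```python
import numpy as np

def fingerprint(contour, neighbor_centroids):
    """Toy nucleus fingerprint: shoelace area, perimeter, and sorted
    distances to the three nearest neighboring nuclei."""
    x, y = contour[:, 0], contour[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
    per = np.sum(np.linalg.norm(np.diff(contour, axis=0, append=contour[:1]), axis=1))
    c = contour.mean(0)
    nd = np.sort(np.linalg.norm(neighbor_centroids - c, axis=1))[:3]
    return np.concatenate([[area, per], nd])

def same_nucleus(fp1, fp2, tol=0.05):
    # relative-difference test standing in for the paper's identity classifier
    return bool(np.all(np.abs(fp1 - fp2) <= tol * np.maximum(np.abs(fp1), np.abs(fp2)) + 1e-9))
```

Because neighbor distances are part of the vector, two nuclei with identical shape but different surroundings get different fingerprints.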
Novel Applications
Computer aided periapical lesion diagnosis using quantized texture analysis
Periapical lesion is a common disease in oral health. While many studies have been devoted to image-based
diagnosis of periapical lesions, these studies usually require clinicians to perform the task. In this paper we
investigate automatic solutions to periapical lesion classification using quantized texture analysis.
Specifically, we adapt the bag-of-visual-words model for periapical root image representation, which
captures the texture information by collecting local patch statistics. Then we investigate several similarity
measure approaches with the K-nearest neighbor (KNN) classifier for the diagnosis task. To evaluate these
classifiers we collected a digitized oral X-ray image dataset from 21 patients, resulting in 139 root
images in total. The extensive experimental results demonstrate that the KNN classifier based on the
bag-of-words model can achieve very promising performance for periapical lesion classification.
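The bag-of-visual-words plus KNN pipeline can be sketched in a few functions: cluster patch descriptors into a codebook, represent each image as a word histogram, then classify by nearest-neighbor similarity. This is a minimal generic sketch (histogram intersection is one of several similarity measures one might compare), not the paper's implementation:

```python
import numpy as np

def build_codebook(patches, k, iters=20):
    """Visual-word codebook via plain Lloyd k-means (farthest-point init)."""
    centers = [patches[0].astype(float)]
    for _ in range(k - 1):
        # next seed: the patch farthest from all current centers
        d = np.min([((patches - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(patches[d.argmax()].astype(float))
    centers = np.array(centers)
    for _ in range(iters):
        d = ((patches[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = patches[labels == c].mean(0)
    return centers

def bow_histogram(patches, centers):
    """Normalized histogram of visual-word assignments for one image."""
    d = ((patches[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    hist = np.bincount(d.argmin(1), minlength=len(centers)).astype(float)
    return hist / hist.sum()

def knn_predict(query_hist, train_hists, train_labels, k=3):
    """KNN vote using histogram-intersection similarity."""
    sims = np.minimum(train_hists, query_hist).sum(1)
    nearest = np.argsort(-sims)[:k]
    return int(np.bincount(train_labels[nearest]).argmax())
```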
Automated quantification of adipose and skeletal muscle tissue in whole-body MRI data for epidemiological studies
Diana Wald,
Birgit Teucher,
Julien Dinkel,
et al.
The ratio between the amount of adipose and skeletal muscle tissue is an important determinant of metabolic
health. Recent developments in MRI technology allow whole body scans to be performed for accurate assessment
of body composition. In the present study, a total of 194 participants underwent a 2-point Dixon MRI sequence
of the whole body. A fully automated image segmentation method quantifies the amount of adipose and skeletal
muscle tissue by applying standard image processing techniques including thresholding, region growing and
morphological operators. The adipose tissue is further divided into subcutaneous and visceral adipose tissue by
using statistical shape models. All images were visually inspected. The quantitative analysis was performed
on 44 whole-body MRI datasets using manual segmentations as ground truth. We achieved relative volume
differences of 3.3% and 6.3% between the manual and automated segmentations of subcutaneous and visceral
adipose tissue, respectively. The validation of the skeletal muscle tissue segmentation resulted in a relative volume
difference of 7.8 ± 4.2% and a volumetric overlap error of 6.4 ± 2.3%. To our knowledge, we are the first to present
a fully automated method that quantifies adipose and skeletal muscle tissue in whole-body MRI data. Due to
the fully automated approach, results are deterministic and free of user bias. Hence, the software can be used in
large epidemiological studies for assessing body fat distribution and the ratio of adipose to skeletal muscle tissue
in relation to metabolic disease risk.
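The thresholding-plus-morphology stage described above can be illustrated with a small NumPy sketch: an intensity window selects candidate tissue, and a morphological opening removes isolated noise pixels. This is a generic stand-in for the study's pipeline; the window values are arbitrary:

```python
import numpy as np

def _erode(mask):
    # 3x3 binary erosion via shifted slices of a zero-padded mask
    p = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask)
    h, w = mask.shape
    for di in (0, 1, 2):
        for dj in (0, 1, 2):
            out &= p[di:di + h, dj:dj + w]
    return out

def _dilate(mask):
    # 3x3 binary dilation, the dual operation
    p = np.pad(mask, 1, constant_values=False)
    out = np.zeros_like(mask)
    h, w = mask.shape
    for di in (0, 1, 2):
        for dj in (0, 1, 2):
            out |= p[di:di + h, dj:dj + w]
    return out

def segment_tissue(image, t_lo, t_hi):
    """Intensity window followed by a morphological opening (erosion then
    dilation) that removes isolated noise pixels."""
    mask = (image >= t_lo) & (image <= t_hi)
    return _dilate(_erode(mask))
```

Opening deletes any component thinner than the 3x3 structuring element while restoring the boundary of larger regions.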
Semantic and topological classification of images in magnetically guided capsule endoscopy
P. W. Mewes,
P. Rennert,
A. Lj. Juloski,
et al.
Magnetically-guided capsule endoscopy (MGCE) is a nascent technology with the goal of allowing the steering of a capsule endoscope inside a water-filled stomach through an external magnetic field. We developed a classification cascade for MGCE images which groups images into semantic and topological categories. Results can be used in a post-procedure review or as a starting point for algorithms classifying pathologies. The first, semantic, classification step discards over-/under-exposed images as well as images with a large amount of debris. The second, topological, classification step groups images with respect to their position in the upper gastrointestinal tract (mouth, esophagus, stomach, duodenum). In the third stage, two parallel classification steps distinguish topologically different regions inside the stomach (cardia, fundus, pylorus, antrum, peristaltic view). For image classification, global image features and local texture features were applied and their performance was evaluated. We show that the third classification step can be improved by a bubble and debris segmentation, because this limits feature extraction to discriminative areas only. We also investigated the impact of segmenting intestinal folds on the identification of different semantic camera positions. The results of classification with a support vector machine show the significance of color histogram features for the classification of corrupted images (97%). Features extracted from intestinal fold segmentation lead only to a minor improvement (3%) in discriminating different camera positions.
Fast vessel segmentation in retinal images using multi-scale enhancement and second-order local entropy
Retinal vasculature is one of the most important anatomical structures in digital retinal photographs. Accurate
segmentation of retinal blood vessels is an essential task in automated analysis of retinopathy. This paper presents a new
and effective vessel segmentation algorithm that features computational simplicity and fast implementation. This method
uses morphological pre-processing to decrease the disturbance of bright structures and lesions before vessel extraction.
Next, a vessel probability map is generated by computing the eigenvalues of the second derivatives of Gaussian filtered
image at multiple scales. Then, the second order local entropy thresholding is applied to segment the vessel map. Lastly,
a rule-based decision step, which measures the geometric shape difference between vessels and lesions is applied to
reduce false positives. The algorithm is evaluated on the low-resolution DRIVE and STARE databases and the publicly
available high-resolution image database from Friedrich-Alexander University Erlangen-Nuremberg (Germany). The
proposed method achieved performance comparable to state-of-the-art unsupervised vessel segmentation methods,
at a competitively faster speed, on the DRIVE and STARE databases. For the high-resolution fundus image database, the
proposed algorithm outperforms an existing approach both on performance and speed. The efficiency and robustness
make the blood vessel segmentation method described here suitable for broad application in automated analysis of
retinal images.
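The core of the vessel probability map (eigenvalues of the second derivatives of a Gaussian-filtered image at multiple scales) can be sketched as follows. This is a simplified, NumPy-only stand-in, assuming dark vessels on a brighter background; the exact response function and scales in the paper may differ:

```python
import numpy as np

def gaussian_smooth(img, sigma):
    # separable Gaussian convolution (kept NumPy-only for this sketch)
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    g /= g.sum()
    out = np.apply_along_axis(np.convolve, 0, img, g, mode='same')
    return np.apply_along_axis(np.convolve, 1, out, g, mode='same')

def vessel_map(img, sigmas=(1.0, 2.0, 3.0)):
    """Multi-scale tube-likeness from Hessian eigenvalues: a pixel on a dark,
    line-like structure has one large positive eigenvalue; the maximum
    scale-normalized response over scales is kept."""
    best = np.zeros(img.shape, float)
    for s in sigmas:
        sm = gaussian_smooth(img, s)
        gy, gx = np.gradient(sm)
        gyy, _ = np.gradient(gy)
        gxy, gxx = np.gradient(gx)
        # closed-form eigenvalues of the 2x2 Hessian [[gxx, gxy], [gxy, gyy]]
        half_trace = (gxx + gyy) / 2.0
        root = np.sqrt(((gxx - gyy) / 2.0) ** 2 + gxy ** 2)
        lam_big = np.where(np.abs(half_trace + root) >= np.abs(half_trace - root),
                           half_trace + root, half_trace - root)
        # dark vessel -> positive curvature across it; normalize by sigma^2
        best = np.maximum(best, np.clip(lam_big, 0.0, None) * s ** 2)
    return best
```

An entropy-based or fixed threshold on this map would then yield the binary vessel mask.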
Automated artery-venous classification of retinal blood vessels based on structural mapping method
Retinal blood vessels show morphologic modifications in response to various retinopathies. The specific
responses exhibited by arteries and veins may provide more precise diagnostic information; e.g., diabetic retinopathy
may be detected more accurately from venous dilatation than from average vessel dilatation. In order to
analyze vessel-type-specific morphologic modifications, the classification of a vessel network into arteries
and veins is required. We previously described a method for identification and separation of retinal vessel trees,
i.e., structural mapping. Here, we propose artery-venous classification based on structural mapping and
identification of color properties characteristic of each vessel type. The mean and standard deviation of each of the green
channel intensity and hue channel intensity are analyzed in a region of interest around each centerline pixel of a
vessel. Using the vector of color properties extracted from each centerline pixel, the pixel is classified into one
of two clusters (artery and vein) obtained by fuzzy C-means clustering. According to the proportion of clustered
centerline pixels in a particular vessel, and utilizing the artery-venous crossing property of retinal vessels, each
vessel is assigned a label of artery or vein. The classification results are compared with the manually
annotated ground truth (gold standard). We applied the proposed method to a dataset of 15 retinal color fundus
images resulting in an accuracy of 88.28% correctly classified vessel pixels. The automated classification results
match well with the gold standard suggesting its potential in artery-venous classification and the respective
morphology analysis.
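The fuzzy C-means step that partitions centerline-pixel color vectors into two clusters has a compact standard form; a minimal sketch (generic FCM, not the paper's configuration):

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy C-means: returns (cluster centers, membership matrix U).

    m is the fuzzifier; U[i, j] is the degree to which sample i belongs to
    cluster j, with rows summing to 1.
    """
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        # centers are membership-weighted means
        centers = (W.T @ X) / W.sum(0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # standard FCM membership update from inverse distances
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(1, keepdims=True)
    return centers, U
```

Each vessel can then be labeled artery or vein by the majority (highest-membership) cluster of its centerline pixels.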
Cardiac and Neuro
Automatic classification of scar tissue in late gadolinium enhancement cardiac MRI for the assessment of left-atrial wall injury after radiofrequency ablation
Daniel Perry,
Alan Morris,
Nathan Burgon,
et al.
Radiofrequency ablation is a promising procedure for treating atrial fibrillation (AF) that relies on accurate
lesion delivery in the left atrial (LA) wall for success. Late Gadolinium Enhancement MRI (LGE MRI) at
three months post-ablation has proven effective for noninvasive assessment of the location and extent of scar
formation, which are important factors for predicting patient outcome and planning of redo ablation procedures.
We have developed an algorithm for automatic classification in LGE MRI of scar tissue in the LA wall and
have evaluated accuracy and consistency compared to manual scar classifications by expert observers. Our
approach clusters voxels based on normalized intensity and was chosen through a systematic comparison of the
performance of multivariate clustering on many combinations of image texture. Algorithm performance was
determined by overlap with ground truth, using multiple overlap measures, and the accuracy of the estimation of
the total amount of scar in the LA. Ground truth was determined using the STAPLE algorithm, which produces
a probabilistic estimate of the true scar classification from multiple expert manual segmentations. Evaluation of
the ground truth data set was based on both inter- and intra-observer agreement, with variation among expert
classifiers indicating the difficulty of scar classification for a given dataset. Our proposed automatic scar
classification algorithm performs well for both scar localization and estimation of scar volume: for ground truth
datasets considered easy, variability from the ground truth was low; for those considered difficult, variability
from ground truth was on par with the variability across experts.
Automatic computation of 2D cardiac measurements from B-mode echocardiography
We propose a robust and fully automatic algorithm which computes the 2D echocardiography measurements
recommended by the American Society of Echocardiography. The algorithm employs knowledge-based imaging
technologies which can learn expert knowledge from training images and expert annotations. Based on
the models constructed in the learning stage, the algorithm searches for the initial locations of the landmark points
for the measurements by utilizing the heart structure of the left ventricle, including the mitral valve and aortic valve.
It employs a pseudo-anatomic M-mode image, generated by accumulating the line images in the 2D parasternal
long-axis view over time, to refine the measurement landmark points. Experimental results with a large volume of data
show that the algorithm runs fast and its robustness is comparable to that of an expert.
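The pseudo-anatomic M-mode construction (sampling the same scan line from every frame and stacking the samples over time) is simple to sketch; the helper below is illustrative, with hypothetical frame and line inputs:

```python
import numpy as np

def pseudo_mmode(frames, line_coords):
    """Build a (time, depth) M-mode image by sampling the same line of
    pixels from each 2D frame and stacking the samples over time."""
    rows = [r for r, _ in line_coords]
    cols = [c for _, c in line_coords]
    return np.stack([np.asarray(f)[rows, cols] for f in frames], axis=0)
```

Landmark motion then appears as a trajectory in the resulting (time, depth) image, which is easier to track than in the raw 2D sequence.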
Coronary artery remodeling in non-contrast CT images
A significant cause of coronary artery disease is coronary atherosclerosis, which leads to stenosis of the coronary arteries.
It has been shown in recent studies, using intravascular ultrasound and contrast-enhanced CT, that early atherosclerosis
causes positive coronary artery remodeling, defined as increases in the cross-sectional area. It is hypothesized that
detection of artery remodeling using non-contrast CT can be an important factor in sub-clinical assessment of cardiac
risk for asymptomatic subjects. However, measuring remodeling in coronary arteries in non-contrast CT images is a
challenging task because coronary arteries are small and the intensity of coronary arteries is similar to that of
surrounding tissues. Automatic segmentation algorithms that have been successful in segmenting coronary arteries in
contrast-enhanced images do not perform well on non-contrast images. To overcome these difficulties, we developed an interactive application
to enable effective measurement of coronary artery remodeling in non-contrast CT images. This application is an
extension to the 3D Slicer image analysis platform. It allows users to visualize and trace the centerline of arteries in cross
sectional views. The artery centerlines are displayed in a three dimensional view overlaid on the original image volume
and color-coded according to the artery labels. Using this 3D artery model, the user can sample the cross-sectional area
of the arteries at selected points for remodeling assessment. Initial validation has demonstrated the effectiveness of this
method. A pilot study also showed a positive correlation of large coronary artery remodeling with the highest lifetime
risks. Further evaluation is underway using a larger study size and more measurement points.
Cluster-based differential features to improve detection accuracy of focal cortical dysplasia
Chin-Ann Yang,
Mostafa Kaveh,
Bradley Erickson
In this paper, a computer aided diagnosis (CAD) system for automatic detection of focal cortical dysplasia
(FCD) on T1-weighted MRI is proposed. We introduce a new set of differential cluster-wise features comparing
local differences of the candidate lesional area with its surroundings and other GM/WM boundaries. The local
differences are measured in a distributional sense using χ2 distances. Finally, a Support Vector Machine (SVM)
classifier is used to classify the clusters. Experimental results show an 88% lesion detection rate with only
1.67 false positive clusters per subject. Also, the results show that using additional differential features clearly
outperforms the result using only absolute features.
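The χ² distance used to compare local feature distributions has a standard closed form; a minimal version:

```python
import numpy as np

def chi2_distance(p, q, eps=1e-10):
    """Chi-squared distance between two histograms (the distributional
    comparison behind the cluster-wise differential features).

    eps guards against division by zero in empty bins.
    """
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))
```

The measure is symmetric and zero only for identical histograms, which makes it a convenient input feature for the downstream SVM.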
Template-based tractography for clinical neonatal diffusion imaging data
In imaging studies of neonates, particularly in the clinical setting, diffusion tensor imaging-based tractography is
typically unreliable due to the use of fast acquisition protocols that yield low resolution and signal-to-noise ratio
(SNR). These image acquisition protocols are implemented with the aim of reducing displacement artifacts that
may be produced by the movement of the neonate's head during the scanning session. In addition, axons are not
yet fully myelinated in these subjects. As a result, the water molecules' movements are not as constrained as in
older brains, making it even more difficult to define structure by means of diffusion profiles. Here, we introduce
a post-processing method that overcomes some of the difficulties described above, allowing the determination of
reliable tracts in newborns. We test our method using neonatal data and in particular, we successfully extract
some of the limbic, association and commissural fibers, all of which are typically difficult to obtain by direct
tractography. The method is further validated through visual inspection by expert pediatric neuroradiologists.
Detection of cerebral aneurysms in MRA, CTA and 3D-RA data sets
Clemens M. Hentschke,
Oliver Beuing,
Rosa Nickl,
et al.
We propose a system to automatically detect cerebral aneurysms in 3D X-ray rotational angiography images
(3D-RA), magnetic resonance angiography images (MRA) and computed tomography angiography images
(CTA). After image normalization, initial candidates are found by applying a blob-enhancing filter on the data
sets. Clusters are computed by a modified k-means algorithm. A post-processing step reduces the false positive
(FP) rate on the basis of computed features. This is implemented as a rule-based system that is adapted according
to the modality. In MRA, clusters are excluded that are not neighbored to a vessel. As a final step, FP are
further reduced by applying a threshold classification on a feature. Our method was tested on 93 angiographic
data sets containing aneurysm and non-aneurysm cases. We achieved 95% sensitivity with an average rate of
2.6 FP per data set (FP/DS) for 3D-RA, 89% sensitivity at 6.6 FP/DS for MRA, and 95% sensitivity
at 37.6 FP/DS for CTA. We showed that our post-processing approach eliminates FP in MRA
with only a slight decrease of sensitivity. In contrast to other approaches, our algorithm does not require a vessel
segmentation and does not require training of distributional properties.
Poster Session: Abdomen
Gleason grading of prostate histology utilizing manifold regularization via statistical shape model of manifolds
Gleason patterns of prostate cancer histopathology, characterized primarily by morphological and architectural
attributes of histological structures (glands and nuclei), have been found to be highly correlated with disease
aggressiveness and patient outcome. Gleason patterns 4 and 5 are highly correlated with more aggressive disease
and poorer patient outcome, while Gleason patterns 1-3 tend to reflect more favorable patient outcome. Because
Gleason grading is done manually by a pathologist visually examining glass (or digital) slides, subtle morphologic
and architectural differences of histological attributes may result in grading errors and hence cause high
inter-observer variability. Recently some researchers have proposed computerized decision support systems to
automatically grade Gleason patterns by using features pertaining to nuclear architecture, gland morphology, as
well as tissue texture. Automated characterization of gland morphology has been shown to distinguish between
intermediate Gleason patterns 3 and 4 with high accuracy. Manifold learning (ML) schemes attempt to generate
a low dimensional manifold representation of a higher dimensional feature space while simultaneously preserving
nonlinear relationships between object instances. Classification can then be performed in the low dimensional
space with high accuracy. However ML is sensitive to the samples contained in the dataset; changes in the
dataset may alter the manifold structure. In this paper we present a manifold regularization technique to constrain
the low dimensional manifold to a specific range of possible manifold shapes, the range being determined
via a statistical shape model of manifolds (SSMM). In this work we demonstrate applications of the SSMM in (1)
identifying samples on the manifold which contain noise, defined as those samples which deviate from the SSMM,
and (2) accurate out-of-sample extrapolation (OSE) of newly acquired samples onto a manifold constrained by
the SSMM. We demonstrate these applications of the SSMM in the context of distinguishing between Gleason
patterns 3 and 4 using glandular morphologic features in a prostate histopathology dataset of 58 patient studies.
Identifying and eliminating noisy samples from the manifold via the SSMM results in a statistically significant
improvement in classification accuracy (CA), 93.0 ± 1.0% with removal of noisy samples compared to a CA of
90.9 ± 1.1% without removal of samples. The use of the SSMM for OSE of new independent test instances also
shows statistically significant improvement in CA, 87.1±0.8% with the SSMM compared to 85.6±0.1% without
the SSMM. Similar improvements were observed for the synthetic Swiss Roll and Helix datasets.
Incorporating the whole-mount prostate histology reconstruction program Histostitcher into the extensible imaging platform (XIP) framework
There is a need for identifying quantitative imaging (e.g. MRI) signatures for prostate cancer (CaP), so that
computer-aided diagnostic methods can be trained to detect disease extent in vivo. Determining CaP extent
on in vivo MRI is difficult to do; however, with the availability of ex vivo surgical whole mount histological
sections (WMHS) for CaP patients undergoing radical prostatectomy, co-registration methods can be applied to
align and map disease extent onto pre-operative MR imaging from the post-operative histology. Yet obtaining
digitized images of WMHS for co-registration with the pre-operative MRI is cumbersome since (a) most digital
slide scanners are unable to accommodate the entire section, and (b) significant technical expertise is required
for whole mount slide preparation. Consequently, most centers opt to construct quartered sections of each
histology slice. Prior to co-registration with MRI, however, these quartered sections need to be digitally stitched
together to reconstitute a digital, pseudo WMHS. Histostitcher© is an interactive software program that uses
semi-automatic registration tools to digitally stitch quartered sections into pseudo WMHS. Histostitcher© was
originally developed using the GUI tools provided by the Matlab programming interface, but the clinical use was
limited due to the inefficiency of the interface. The limitations of the Matlab based GUI include (a) an inability to
edit the fiducials, (b) the rendering being extremely slow, and (c) lack of interactive and rapid visualization tools.
In this work, Histostitcher© has been integrated into the eXtensible Imaging Platform (XIP™) framework, a set
of libraries containing functionalities for analyzing and visualizing medical image data. XIP™ lends the stitching
tool much greater flexibility and functionality by (a) allowing interactive and seamless navigation through the
full-resolution histology images, and (b) making it easy to add, edit, or remove fiducials and annotations in order
to register the quadrants and map the disease extent. In this work, we showcase examples of digital stitching of
quartered histological sections into pseudo-WMHS using Histostitcher© via the new XIP™ interface. This tool
will be particularly useful in clinical trials and large cohort studies where a quick, interactive way of digitally
reconstructing pseudo-WMHS is required.
An integrated electronic colon cleansing for CT colonoscopy via MAP-EM segmentation and scale-based scatter correction
Orally administered tagging agents are usually used in CT colonography (CTC) to differentiate residual bowel content
from native colonic structure. However, the high-density contrast agents tend to introduce the scatter effect on
neighboring soft tissues and elevate their observed CT attenuation values toward that of the tagged materials (TMs),
which may result in an excessive electronic colon cleansing (ECC) where pseudo-enhanced soft tissues are incorrectly
identified as TMs. To address this issue, we integrated a scale-based scatter correction as a preprocessing procedure into
our previous ECC pipeline based on the maximum a posteriori expectation-maximization (MAP-EM) partial volume
segmentation. The newly proposed ECC scheme takes into account both scatter effect and partial volume effect that
commonly appear in CTC images. We evaluated the new method with 10 patient CTC studies and found improved
performance. Our results suggest that the proposed strategy is effective with potentially significant benefits for both
clinical CTC examinations and automatic computer-aided detection (CAD) of colon polyps.
Automated incision line determination for virtual unfolded view generation of the stomach from 3D abdominal CT images
In this paper, we propose an automated incision line determination method for virtual unfolded view generation
of the stomach from 3D abdominal CT images. Previous virtual unfolding methods for the stomach
required many manual operations, such as determination of the incision line, which heavily burdens the operator.
In general, an incision line along the greater curvature of the stomach is used for making pathological
specimens. In our method, an incision line is automatically determined by projecting a centerline of the
stomach onto the gastric surface from a projection line. The projection line is determined using the positions
of the cardia and the pylorus, which can be easily specified by two mouse clicks. Our method proceeds
as follows. We extract the stomach region using thresholding and labeling processes. We
apply a thinning process to the stomach region, and then extract the longest line from the thinning
result. We obtain a centerline of the stomach region by smoothing the longest line with a
Bezier curve. The incision line is calculated by projecting the centerline onto the gastric surface from the
projection line. We applied the proposed method to 19 cases of CT images and automatically determined
incision lines. Experimental results showed that our method was able to determine incision lines along the greater
curvature for most of the 19 cases.
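The Bezier smoothing of the thinned centerline can be sketched via the Bernstein basis; the sketch below treats the extracted line points directly as control points, which is a simplification of however the authors fit the curve:

```python
import numpy as np
from math import comb

def bezier_curve(control_points, n=101):
    """Evaluate a Bezier curve (Bernstein polynomial form) at n parameter
    values; the curve interpolates the first and last control points."""
    P = np.asarray(control_points, float)
    k = len(P) - 1
    t = np.linspace(0.0, 1.0, n)[:, None]
    # sum of Bernstein basis polynomials weighted by the control points
    return sum(comb(k, i) * t ** i * (1 - t) ** (k - i) * P[i] for i in range(k + 1))
```

The resulting dense, smooth polyline can then be projected onto the gastric surface to define the incision line.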
A phantom design for validating colonoscopy tracking
Phantom experiments are useful and frequently used in validating algorithms or techniques in applications where
it is difficult or impossible to generate accurate ground-truth. In this work we present a phantom design and
experiments to validate our colonoscopy tracking algorithms, which serve to keep virtual colonoscopy and
optical colonoscopy images aligned (in location and orientation). We describe the construction of two phantoms,
capable of respectively moving along a straight and a curved path. The phantoms are motorized so as to be
able to move at a near-constant speed. Experiments were performed at three speeds (10, 15 and 20 mm/sec) to
simulate motion velocities during colonoscopy procedures. The average velocity error was within 3 mm/sec for
both the straight and curved phantoms. Displacement error was within 7 mm over a total distance of 288 mm in the
straight phantom, and less than 7 mm over 287 mm in the curved phantom. Multiple trials of
each experiment were performed (and their errors averaged) to ensure repeatability.
Automatic segmentation of lesions for the computer-assisted detection in fluorescence urology
Bladder cancer is one of the most common cancers in the western world. The diagnosis in Germany
is based on the visual inspection of the bladder. This inspection performed with a cystoscope is a
challenging task as some kinds of abnormal tissues do not differ much in their appearance from their
surrounding healthy tissue. Fluorescence Cystoscopy has the potential to increase the detection rate.
A liquid marker, introduced into the bladder in advance of the inspection, is concentrated in areas with
high metabolism; these areas therefore appear to glow brightly. Unfortunately, the fluorescence image
contains, besides the glow of the suspicious lesions, no further visual information, such as the appearance
of the blood vessels. A visual judgment of the lesion, as well as a precise treatment,
has to be done under white light illumination. Thereby, the spatial information of the lesion provided
by the fluorescence image has to be guessed by the clinical expert. This leads to a time-consuming
procedure due to many switches between the modalities and increases the risk of mistreatment. We
introduce an automatic approach which, once an image has been classified as a fluorescence image,
detects and segments any suspicious lesion in it. The contour of the detected lesion is transferred to the
corresponding white light image, providing the clinical expert with the spatial information of the lesion.
The advantage of this approach is that the clinical expert gets the spatial and the visual information
of the lesion together in one image. This can save time and decrease the risk of an incomplete removal
of a malignant lesion.
Size-adaptive hepatocellular carcinoma detection from 3D CT images based on the level set method
Shuntaro Yui,
Junichi Miyakoshi,
Kazuki Matsuzaki,
et al.
Automatic detection of hepatocellular carcinoma (HCC) from 3D CT images effectively reduces interpretation workload.
Several detection methods have been proposed. However, adapting detection methods to a wide range of tumor
sizes, especially to small nodules, remains a tough problem, since it is difficult to distinguish tumors from other
structures, including noise. Although the level set method (LS) is a powerful tool for detecting objects with arbitrary
topology, it is still poor at detecting small nodules due to low contrast. To detect small nodules, early-phase images are
useful, since the low contrast of the late phase causes some small nodules to be missed. Nevertheless, conventional
methods using early-phase images face two problems: one is failure to extract small nodules due to low contrast even in
early-phase images, and the other is false-positive (FP) detection of vessels adjacent to tumors. In this paper, a new
robust detection method adapted to a wide range of tumor sizes is proposed that uses only early-phase images.
To overcome these two problems, our method consists of two techniques. One is regularizing surface evolution used in
LS by applying a new HCC filter that can enhance both small nodules and large tumors. The other is regularizing the
surface evolution by applying a Hessian-matrix-based filter that can enhance the vessel structures. Experimental results
showed that the proposed method improves sensitivity by over 15% and decreases FP by over 20%, demonstrating that
the proposed method is useful for detecting HCC accurately.
Medical image retrieval based on texture and shape feature co-occurrence
With the rapid development and wide application of medical imaging technology, explosive volumes of medical
image data are produced every day all over the world. As such, it becomes increasingly challenging to manage
and utilize such data effectively and efficiently. In particular, content-based medical image retrieval has been
intensively researched in the past decade or so.
In this work, we propose a novel approach to content-based medical image retrieval utilizing the co-occurrence
of both the texture and the shape features in contrast to most previous algorithms that use purely the texture
or the shape feature. Specifically, we propose a novel form of representation for the co-occurrence of the texture
and the shape features in an image, i.e., the gray level and edge direction co-occurrence matrix (GLEDCOM).
Based on GLEDCOM, we define eleven features forming a feature vector that is used to measure the similarity
between images. As a result, it consistently yields outstanding performance on both images rich in texture (e.g.,
image of brain) and images with dominant smooth regions and sharp edges (e.g., image of bladder).
As demonstrated by experiments, the mean retrieval precision of the GLEDCOM algorithm exceeds that of a set of representative algorithms, including those based on the gray level co-occurrence matrix (GLCM), Hu's seven moment invariants (HSMI), the uniformity estimation method (UEM) and the modified Zernike moments (MZM), by 10%-20%.
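The abstract does not spell out the GLEDCOM construction; a minimal sketch of one plausible reading, in which each pixel's quantized gray level is paired with its quantized edge direction, is below (the function name and binning are hypothetical):

```python
import numpy as np

def gledcom(img, n_gray=8, n_dir=8):
    """Sketch of a gray level / edge direction co-occurrence matrix:
    joint histogram of quantized gray level and gradient direction."""
    img = img.astype(float)
    # quantize gray levels into n_gray bins
    g = np.clip((img - img.min()) / (np.ptp(img) + 1e-12) * n_gray,
                0, n_gray - 1).astype(int)
    # central-difference gradients -> edge direction folded into [0, pi)
    gy, gx = np.gradient(img)
    theta = np.mod(np.arctan2(gy, gx), np.pi)
    d = np.clip(theta / np.pi * n_dir, 0, n_dir - 1).astype(int)
    M = np.zeros((n_gray, n_dir))
    np.add.at(M, (g.ravel(), d.ravel()), 1)
    return M / M.sum()  # normalized joint distribution

rng = np.random.default_rng(0)
M = gledcom(rng.random((32, 32)))
```

Scalar statistics (such as the eleven features the paper derives) would then be computed from M, analogously to Haralick features on a GLCM.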
Local jet features and statistical models in a hybrid Bayesian framework for prostate estimation in CBCT images
Show abstract
The challenge in prostate cancer radiotherapy is to deliver the planned dose to the prostate, sparing as much
as possible the neighboring organs, namely bladder and rectum. If a lower amount of dose, compared to the
prescription, is delivered to the prostate, the risk of failure may increase. Likewise, if higher doses are delivered
to the neighboring organs, undesirable side effects may occur. Accurate localization of prostate and organs at
risk is therefore a bottleneck in radiotherapy. In recent Image Guided Radiotherapy (IGRT) procedures, an
intra-operative Cone Beam CT (CBCT) is used at each session to align the prostate to the planned CT and to
maximize the correct dose delivery. Tracking the prostate in these images may allow not only to achieve this goal
but also to accurately measure the cumulative dose as the sessions proceed. This work introduces a new method that automatically locates the prostate in CBCT images. The whole method lies in a Bayesian formulation where a multiscale image representation, the local jets, is used as a likelihood function, and the prior knowledge is learned from multiple examples of expert manual delineations. Compared with manual ground truth segmentations, the results showed a Jaccard similarity index of 0.84 and an accuracy of 98% in a set of four studies of four patients.
Computer vision approach to detect colonic polyps in computed tomographic colonography
Show abstract
In this paper, we present evaluation results for a novel colonic polyp classification method for use as part of a computed
tomographic colonography (CTC) computer-aided detection (CAD) algorithm. Inspired by the interpretative
methodology of radiologists using 3D fly-through mode in CTC reading, we have developed an algorithm which utilizes
sequences of images (referred to here as videos) for classification of CAD marks. First, we generated an initial list of
polyp candidates using an existing CAD system. For each of these candidates, we created a video composed of a series
of intraluminal, volume-rendered images focusing on the candidate from multiple viewpoints. These videos illustrated
the shape of the polyp candidate and gathered contextual information of diagnostic importance. We calculated the
histogram of oriented gradients (HOG) feature on each frame of the video and utilized a support vector machine for
classification. We tested our method by analyzing a CTC data set of 50 patients from three medical centers. Our
proposed video analysis method for polyp classification showed significantly better performance than an approach using
only the 2D CT slice data. The areas under the ROC curve for these methods were 0.88 (95% CI: [0.84, 0.91]) and 0.80
(95% CI: [0.75, 0.84]) respectively (p=0.0005).
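The per-frame HOG plus SVM pipeline described above can be caricatured as follows; for brevity this reduces HOG to a single gradient-weighted orientation histogram per frame, and the frames and labels are synthetic placeholders, not the paper's CTC data:

```python
import numpy as np
from sklearn.svm import SVC

def frame_hog(frame, n_bins=9):
    """Very reduced HOG stand-in: one magnitude-weighted orientation
    histogram per frame (the paper uses full HOG descriptors)."""
    gy, gx = np.gradient(frame.astype(float))
    mag = np.hypot(gx, gy)
    theta = np.mod(np.arctan2(gy, gx), np.pi)
    hist, _ = np.histogram(theta, bins=n_bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-12)

def video_descriptor(frames):
    # concatenate per-frame histograms into one candidate descriptor
    return np.concatenate([frame_hog(f) for f in frames])

rng = np.random.default_rng(1)
X = np.stack([video_descriptor(rng.random((4, 16, 16))) for _ in range(20)])
y = np.arange(20) % 2  # hypothetical polyp / non-polyp labels
clf = SVC().fit(X, y)
pred = clf.predict(X)
```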
Computer-aided mesenteric small vessel segmentation on high-resolution 3D contrast-enhanced CT angiography scans
Show abstract
Segmentation of the mesenteric vasculature has important applications for evaluation of the small bowel. In particular, it
may be useful for small bowel path reconstruction and precise localization of small bowel tumors such as carcinoid.
Segmentation of the mesenteric vasculature is very challenging, even for manual labeling, because of the low contrast
and tortuosity of the small blood vessels. Many vessel segmentation methods have been proposed. However, most of
them are designed for segmenting large vessels. We propose a semi-automated method to extract the mesenteric
vasculature on contrast-enhanced abdominal CT scans. First, the internal abdominal region of the body is automatically
identified. Second, the major vascular branches are segmented using a multi-linear vessel tracing method. Third, small
mesenteric vessels are segmented using multi-view multi-scale vesselness enhancement filters. The method is insensitive
to image contrast, variations of vessel shape and small occlusions due to overlapping. The method could automatically
detect mesenteric vessels with diameters as small as 1 mm. Compared with the standard-of-reference manually labeled
by an expert radiologist, the segmentation accuracy (recall rate) for the whole mesenteric vasculature was 82.3% with a
3.6% false positive rate.
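The Hessian-matrix-based vesselness idea behind such filters can be illustrated in 2D with a single-scale Frangi-style measure; the paper's filters are multi-view, multi-scale and 3D, and the constants beta and c below are arbitrary illustrative choices:

```python
import numpy as np

def vesselness2d(img, beta=0.5, c=0.5):
    """Minimal 2D Frangi-style vesselness from Hessian eigenvalues."""
    gy, gx = np.gradient(img.astype(float))
    hyy, hyx = np.gradient(gy)
    hxy, hxx = np.gradient(gx)
    # eigenvalues of [[hxx, hxy], [hxy, hyy]], ordered so |l1| <= |l2|
    tr, det = hxx + hyy, hxx * hyy - hxy * hyx
    disc = np.sqrt(np.maximum(tr ** 2 / 4 - det, 0))
    l1, l2 = tr / 2 - disc, tr / 2 + disc
    swap = np.abs(l1) > np.abs(l2)
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
    rb = np.abs(l1) / (np.abs(l2) + 1e-12)   # blob-vs-line ratio
    s = np.hypot(l1, l2)                     # second-order structure
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s ** 2 / (2 * c ** 2)))
    return np.where(l2 < 0, v, 0.0)          # bright tubes have l2 < 0

# bright horizontal line on a dark background responds strongly
img = np.zeros((21, 21))
img[10, :] = 1.0
v = vesselness2d(img)
```

Multi-scale behavior is obtained by smoothing the image at several scales before computing the Hessian and taking the maximum response.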
Automated measurement of anterior and posterior acetabular sector angles
Show abstract
In this paper, we propose a segmentation algorithm by which anatomical landmarks on the pelvis are extracted from
computed tomography (CT) images. The landmarks are used to automatically define the anterior (AASA) and posterior
acetabular sector angles (PASA) describing the degree of hip misalignment. The center of each femoral head is obtained
by searching for the point at which most intensity gradient vectors defined at edge points intersect. The radius of each
femoral head is computed by finding the sphere, positioned at the center of the femoral head, for which the normalized
sum of gradient vector magnitudes on the sphere surface is maximal. The anterior and posterior corners of each
acetabulum are searched for on a curve representing the acetabulum and defined by dynamic programming. The femoral
head centers and anterior and posterior corners are used to calculate the AASA and PASA. The algorithm was applied to
CT images of 120 normal subjects and the results were compared to ground truth values obtained by manual
segmentation. The mean absolute difference (± standard deviation) between the obtained and ground truth values was
1.3 ± 0.3 mm for the femoral head centers and 2.1 ± 1.3 degrees for the acetabular angles.
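The "point where most gradient vectors intersect" criterion for the femoral head center can be posed as a least-squares problem over rays; the following sketch uses synthetic sphere data, and all names and values are illustrative:

```python
import numpy as np

def gradient_intersection_center(points, dirs):
    """Least-squares point closest to a set of rays (edge point p_i with
    unit gradient direction d_i): solve sum_i (I - d_i d_i^T) x = sum_i
    (I - d_i d_i^T) p_i."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, dirs):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# synthetic femoral head: sphere surface points with radial gradients
rng = np.random.default_rng(2)
c_true = np.array([10.0, -4.0, 3.0])
d = rng.normal(size=(200, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
pts = c_true + 25.0 * d                  # radius 25, gradients along d
c_est = gradient_intersection_center(pts, d)
```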
Poster Session: Bone
MRI based knee cartilage assessment
Show abstract
Osteoarthritis is one of the leading causes of pain and disability worldwide and a major health problem in
developed countries due to the gradually aging population. Though the symptoms are easily recognized and
described by a patient, it is difficult to assess the level of damage or loss of articular cartilage quantitatively. We
present a novel method for fully automated knee cartilage thickness measurement and subsequent assessment
of the knee joint. First, point correspondence between the pre-segmented training bone models is obtained with the use of Shape Context based non-rigid surface registration. Then, a single Active Shape Model (ASM) is used to segment both the femur and the tibia. The surfaces obtained are processed to extract the Bone-Cartilage
Interface (BCI) points, where the proper segmentation of cartilage begins. For this purpose, the cartilage ASM
is trained with cartilage edge positions expressed in 1D coordinates at the normals in the BCI points. The
whole cartilage model is then constructed from the segmentations obtained in the previous step. An absolute
thickness of the segmented cartilage is measured and compared to the mean of all training datasets, giving as a
result the relative thickness value. The resulting cartilage structure is visualized and related to the segmented
bone. In this way the condition of the cartilage is assessed over the surface. The quality of the bone and cartilage segmentation is validated, yielding Dice coefficients of 0.92 and 0.86 for the femur and tibia bones and 0.45 and 0.34 for the respective cartilages. The clinical diagnostic relevance of the obtained thickness mapping is being evaluated retrospectively. Before it can be validated prospectively for prediction of clinical outcome, the methods require improvements in accuracy and robustness.
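The Dice coefficient used for this validation is computed from two binary masks as 2|A∩B|/(|A|+|B|); a minimal worked example:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

a = np.zeros((10, 10), bool); a[2:8, 2:8] = True   # 36 pixels
b = np.zeros((10, 10), bool); b[4:8, 2:8] = True   # 24 pixels, inside a
# overlap = 24, so Dice = 2*24 / (36 + 24) = 0.8
```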
Predicting the biomechanical strength of proximal femur specimens with bone mineral density features and support vector regression
Show abstract
To improve the clinical assessment of osteoporotic hip fracture risk, recent computer-aided diagnosis systems
explore new approaches to estimate the local trabecular bone quality beyond bone density alone to predict femoral
bone strength. In this context, statistical bone mineral density (BMD) features extracted from multi-detector
computed tomography (MDCT) images of proximal femur specimens and different function approximation methods were compared in their ability to predict the biomechanical strength. MDCT scans were acquired in
146 proximal femur specimens harvested from human cadavers. The femurs' failure load (FL) was determined
through biomechanical testing. An automated volume of interest (VOI)-fitting algorithm was used to define a
consistent volume in the femoral head of each specimen. In these VOIs, the trabecular bone was represented
by statistical moments of the BMD distribution and by pairwise spatial occurrence of BMD values using the
gray-level co-occurrence (GLCM) approach. A linear multi-regression analysis (MultiReg) and a support vector
regression algorithm with a linear kernel (SVRlin) were used to predict the FL from the image feature sets.
The prediction performance was measured by the root mean square error (RMSE) for each image feature on
independent test sets; in addition the coefficient of determination R2 was calculated. The best prediction
result was obtained with a GLCM feature set using SVRlin, which had the lowest prediction error (RMSE = 1.040±0.143, R2 = 0.544), significantly lower than that of the standard approach using BMD.mean and MultiReg (RMSE = 1.093±0.133, R2 = 0.490, p<0.0001). The combined sets including BMD.mean and GLCM
features had a similar or slightly lower performance than using only GLCM features. The results indicate that the
performance of high-dimensional BMD features extracted from MDCT images in predicting the biomechanical
strength of proximal femur specimens can be significantly improved by using support vector regression.
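The regression step (image features to failure load via a linear-kernel SVR) can be sketched with scikit-learn; the feature vectors and failure loads below are synthetic stand-ins, since the MDCT data are not available here:

```python
import numpy as np
from sklearn.svm import SVR

# synthetic stand-ins: 146 specimens, 10 hypothetical GLCM features,
# failure load (FL) following an assumed noisy linear relation
rng = np.random.default_rng(3)
X = rng.normal(size=(146, 10))
w = rng.normal(size=10)
FL = X @ w + 0.1 * rng.normal(size=146)

train, test = slice(0, 100), slice(100, 146)
svr = SVR(kernel="linear", C=10.0).fit(X[train], FL[train])
pred = svr.predict(X[test])
rmse = float(np.sqrt(np.mean((pred - FL[test]) ** 2)))  # test-set RMSE
```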
Quantitative vertebral compression fracture evaluation using a height compass
Show abstract
Vertebral compression fractures can be caused by even minor trauma in patients with
pathological conditions such as osteoporosis, varying greatly in vertebral body location and
compression geometry. The location and morphology of the compression injury can guide
decision making for treatment modality (vertebroplasty versus surgical fixation), and can be
important for pre-surgical planning. We propose a height compass to evaluate the axial plane
spatial distribution of compression injury (anterior, posterior, lateral, and central), and distinguish
it from physiologic height variations of normal vertebrae. The method includes four steps: spine
segmentation and partition, endplate detection, height compass computation and compression
fracture evaluation. A height compass is computed for each vertebra, where the vertebral body is
partitioned in the axial plane into 17 cells arranged about concentric rings. In the compass structure, a crown-like geometry is produced by three concentric rings, which are divided into 8 equal-length arcs by rays at 8 common central angles. The radius of each ring increases multiplicatively, with the resultant structure of a central node and two concentric surrounding bands of cells, each divided into octants. The height value for each octant is calculated and plotted against the corresponding octants in neighboring vertebrae. The height compass gives an intuitive display of the height distribution and can be used to easily identify fractured regions.
Our technique was evaluated on 8 thoraco-abdominal CT scans of patients with reported
compression fractures and showed statistically significant differences in height value at the sites
of the fractures.
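The 17-cell partition (a central node plus two rings of 8 octants each) can be realized with a small cell-assignment function; the inner radius and multiplicative growth factor below are illustrative, not the paper's values:

```python
import numpy as np

def compass_cell(x, y, r0=5.0, growth=2.0):
    """Assign an axial-plane point to one of 17 height-compass cells:
    cell 0 is the central node (radius < r0); rings 1 and 2 (radii
    growing multiplicatively by `growth`) are each split into 8 octants."""
    r = np.hypot(x, y)
    if r < r0:
        return 0
    ring = 1 if r < r0 * growth else 2
    octant = int(np.mod(np.arctan2(y, x), 2 * np.pi) // (np.pi / 4))  # 0..7
    return 1 + (ring - 1) * 8 + octant

# a grid over the vertebral cross-section touches all 17 cells
cells = {compass_cell(x, y)
         for x in np.linspace(-12, 12, 25)
         for y in np.linspace(-12, 12, 25)}
```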
Poster Session: Breast
A novel local learning based approach with application to breast cancer diagnosis
Show abstract
In this paper, we introduce a new local learning based approach and apply it for the well-studied problem of breast
cancer diagnosis using BIRADS-based mammographic features. To learn from our clinical dataset the latent relationship
between these features and the breast biopsy result, our method first dynamically partitions the whole sample population
into multiple sub-population groups through stochastically searching the sample population clustering space. Each
encountered clustering scheme in our online searching process is then used to create a certain sample population partition
plan. For every resultant sub-population group identified according to a partition plan, our method then trains a dedicated
local learner to capture the underlying data relationship. In our study, we adopt the linear logistic regression model as our
local learning method's base learner. Such a choice is made both due to the well-understood linear nature of the problem,
which is compellingly revealed by a rich body of prior studies, and the computational efficiency of linear logistic
regression--the latter feature allows our local learning method to more effectively perform its search in the sample
population clustering space. Using a database of 850 biopsy-proven cases, we compared the performance of our method
with a large collection of publicly available state-of-the-art machine learning methods and successfully demonstrated its
performance advantage with statistical significance.
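One partition plan of the kind the method searches over can be sketched as clustering the population and fitting a dedicated logistic-regression base learner per sub-population; the paper searches many such plans stochastically, and the features and biopsy labels below are synthetic stand-ins:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# synthetic sample population: three sub-populations in feature space,
# with a hypothetical label rule independent of the cluster axes
rng = np.random.default_rng(4)
centers = np.array([[0, 0, 0, 0, 0], [6, 0, 0, 0, 0], [0, 6, 0, 0, 0]], float)
X = np.vstack([c + rng.normal(size=(100, 5)) for c in centers])
y = (X[:, 2] + X[:, 3] > 0).astype(int)

# one partition plan: cluster, then one local learner per sub-population
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
learners = {c: LogisticRegression().fit(X[km.labels_ == c], y[km.labels_ == c])
            for c in range(3)}

def predict(x):
    c = int(km.predict(x.reshape(1, -1))[0])  # route query to local learner
    return int(learners[c].predict(x.reshape(1, -1))[0])

acc = float(np.mean([predict(x) == t for x, t in zip(X, y)]))
```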
Mammographic enhancement with combining local statistical measures and sliding band filter for improved mass segmentation in mammograms
Show abstract
In this study, a novel mammogram enhancement solution is proposed, aiming to improve the quality of subsequent
mass segmentation in mammograms. It is widely accepted that masses are usually hyper-dense or of uniform density with respect to their background. Their core parts are also likely to have high intensity values, with intensity tending to decrease as the distance from the core increases. Based on these observations, we develop a new and effective mammogram enhancement method combining local statistical measurements and Sliding Band Filtering (SBF). This combination improves the contrast both of bright, smooth regions (which represent potential mass regions) and of regions whose surrounding gradients converge toward the centers of regions of interest. In this
study, 89 mammograms were collected from the public MIAS database (DB) to demonstrate the effectiveness of the proposed enhancement solution in terms of improving mass segmentation. As the segmentation method, the widely used contour-based approach was employed. The contour-based method in conjunction with the proposed
enhancement solution achieved overall detection accuracy of 92.4% with a total of 85 correct cases. On the other hand,
without using our enhancement solution, overall detection accuracy of the contour-based method was only 78.3%. In
addition, experimental results demonstrated the feasibility of our enhancement solution for the purpose of improving
detection accuracy on mammograms containing dense parenchymal patterns.
Perceptual mass segmentation using eye-tracking and seed-growing
Show abstract
In the paper, we propose a novel scheme for breast mass segmentation in mammography, which is based on visual
perception and consists of two steps. First, radiologists' eye-gaze data are recorded by an eye-tracker during reading and then clustered with the density-based spatial clustering of applications with noise (DBSCAN) algorithm to obtain seeds locating the radiologists' regions of interest (ROIs). The seed-based region growing (SBRG) algorithm is then applied to extract ROIs containing suspicious lesions. Second, to obtain a fine lesion contour as the final result, the ROIs are segmented with a multi-scale mass segmentation approach using active contour models. Applying the proposed method to mammograms from both the DDSM and the Zhejiang Cancer Hospital yields an average overlap rate of 0.5915 and an average misclassification rate of 0.6342. The novelty of the proposed approach is the introduction of visual perception into breast mass segmentation, which makes the segmentation results meet radiologists' subjective demands.
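The seed-finding step can be sketched as clustering raw gaze fixations with DBSCAN and taking each cluster's centroid as a region-growing seed; the gaze samples are synthetic here, and the eps/min_samples values are illustrative, not the paper's settings:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(5)
gaze = np.vstack([
    rng.normal([120, 200], 5, size=(40, 2)),   # fixations near one ROI
    rng.normal([400, 310], 5, size=(40, 2)),   # fixations near another ROI
    rng.uniform(0, 512, size=(10, 2)),         # stray samples -> noise
])
labels = DBSCAN(eps=15, min_samples=10).fit_predict(gaze)
# one centroid seed per dense fixation cluster; -1 marks DBSCAN noise
seeds = [gaze[labels == c].mean(axis=0) for c in set(labels) if c != -1]
```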
Detection of architectural distortion in prior mammograms using statistical measures of orientation of texture
Show abstract
We present a method using statistical measures of the orientation of texture to characterize and detect architectural
distortion in prior mammograms of interval-cancer cases. Based on the orientation field, obtained by
the application of a bank of Gabor filters to mammographic images, two types of co-occurrence matrices were
derived to estimate the joint occurrence of the angles of oriented structures. For each of the matrices, Haralick's
14 texture features were computed. From a total of 106 prior mammograms of 56 interval-cancer cases and
52 mammograms of 13 normal cases, 4,224 regions of interest (ROIs) were automatically obtained by applying
Gabor filters and phase portrait analysis. For each ROI, statistical features were computed using the angle
co-occurrence matrices. The performance of the features in the detection of architectural distortion was analyzed
and compared with that of Haralick's features computed using the gray-level co-occurrence matrices of
the ROIs. Using logistic regression for feature selection, an artificial neural network for classification, and the
leave-one-image-out approach for cross-validation, the best result achieved was 0.77 in terms of the area under
the receiver operating characteristic (ROC) curve. Analysis of the free-response ROC curve yielded a sensitivity
of 80% at 5.4 false positives per image.
A CAD system based on complex networks theory to characterize mass in mammograms
Show abstract
This paper presents a Computer-Aided Diagnosis (CAD) system for mammograms, which is based on complex
networks to shape boundary characterization of mass in mammograms, suggesting a "second opinion" to the
health specialist. A region of interest (the mass) is automatically segmented using an improved algorithm based
on EM/MPM and the shape is modeled into a scale-free complex network. Topological measurements of the
resulting network are used to compose the shape descriptors. The experiments comparing the complex network
approach with other traditional descriptors, in detecting breast cancer in mammograms, show that the proposed
approach achieves the best accuracy. Hence, the results indicate that complex networks are well-suited to characterizing mammograms.
Multi-instance learning for mass retrieval in digitized mammograms
Show abstract
Breast cancer is one of the most common malignant tumors in women. In a mammogram retrieval system, the query mass is ambiguous and difficult to describe because the lesion and the normal tissue within it are physically adjacent. If the query mass can be processed as an image bag, this ambiguity can be tackled by multi-instance learning (MIL) techniques. In this paper, we present a preliminary study of MIL for mass retrieval in digitized mammograms, and
proposed three image bag generators named J-Bag, A-Bag and K-Bag, respectively. Diverse Density (DD), EM-DD and
BP-MIP were applied as MIL algorithms for mass retrieval. Experimental study was carried out on DDSM database and
another database in which images were collected from the Zhejiang Cancer Hospital in China. Preliminary experiments
showed that the MIL techniques can be applied to the problem of mass retrieval in digitized mammograms and that the proposed bag generators A-Bag and K-Bag achieve better results than the existing bag generator SBN.
Local binary patterns for stromal area removal in histology images
Show abstract
Nuclei counting in epithelial cells is an indicator of the tumor proliferation rate, which is useful for grading tumors and selecting an appropriate treatment schedule for the patient. However, due to the high inter- and intra-observer variability in nuclei counting, pathologists seek a deterministic proliferation rate estimate. Histology tissue contains epithelial and stromal cells, but nuclei counting is clinically restricted to epithelial cells because stromal cells remain genetically normal and do not become cancerous themselves. Counting nuclei within the stromal tissue is one of the major causes of non-deterministic proliferation rate estimates. Digitally removing stromal tissue will eliminate a major cause of pathologist counting variability and bring the clinical pathologist a major step closer to a deterministic proliferation rate estimate. To that end, we propose a
computer aided diagnosis (CAD) system for eliminating stromal cells from digital histology images
based on the local binary patterns, entropy measurement, and statistical analysis. We validate our
CAD system on a set of fifty Ki-67-stained histology images. Ki-67-stained histology images are
among the clinically approved methods for proliferation rate estimation. To test our CAD system,
we show that the expert pathologist's manual proliferation rate estimate does not change before and after stromal removal; that is, stromal removal does not affect the expert's clinical decision. Meanwhile, eliminating the stromal area greatly reduces the false-positive nuclei that are a major source of confusion for less experienced pathologists and hence of non-determinism in the proliferation rate estimate. Our experiments show no statistically significant difference (paired Student's t-test, p = 0.74) in manual nuclei counts before and after automated stromal removal, confirming that the expert pathologist's clinical decision is not affected by our CAD system. At the same time, its use substantially reduces inter- and intra-observer variability in proliferation rate estimation, especially for less-experienced pathologists.
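The LBP and entropy measurements at the heart of such a system can be sketched with a basic 3x3 local binary pattern and the entropy of its histogram; the thresholding and statistical-analysis steps of the actual CAD system are omitted:

```python
import numpy as np

def lbp8(img):
    """Basic 3x3 local binary pattern: each of the 8 neighbors contributes
    one bit (1 if neighbor >= center), giving a code in 0..255."""
    p = np.pad(img.astype(float), 1, mode="edge")
    c = p[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = p[1 + dy:p.shape[0] - 1 + dy, 1 + dx:p.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

def lbp_entropy(img):
    # entropy of the LBP histogram: a texture statistic of the kind that
    # could be thresholded to separate stromal from epithelial areas
    h = np.bincount(lbp8(img).ravel(), minlength=256).astype(float)
    pmf = h / h.sum()
    pmf = pmf[pmf > 0]
    return float(-(pmf * np.log2(pmf)).sum())

rng = np.random.default_rng(6)
flat = np.full((32, 32), 0.5)    # uniform region -> a single LBP code
noisy = rng.random((32, 32))     # textured region -> many LBP codes
```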
Predicting axillary lymph node metastasis from kinetic statistics of DCE-MRI breast images
Show abstract
The presence of axillary lymph node metastases is the most important prognostic factor in breast cancer and can
influence the selection of adjuvant therapy, both chemotherapy and radiotherapy. In this work we present a set
of kinetic statistics derived from DCE-MRI for predicting axillary node status. Breast DCE-MRI images from
69 women with known nodal status were analyzed retrospectively under HIPAA and IRB approval. Axillary
lymph nodes were positive in 12 patients while 57 patients had no axillary lymph node involvement. Kinetic
curves for each pixel were computed and a pixel-wise map of time-to-peak (TTP) was obtained. Pixels were first
partitioned according to the similarity of their kinetic behavior, based on TTP values. For every kinetic curve,
the following pixel-wise features were computed: peak enhancement (PE), wash-in-slope (WIS), wash-out-slope
(WOS). Partition-wise statistics for every feature map were calculated, resulting in a total of 21 kinetic statistic
features. ANOVA analysis was done to select features that differ significantly between node positive and node
negative women. Using the computed kinetic statistic features, an SVM classifier was trained with leave-one-out cross-validation, achieving an area under the ROC curve of 0.77 and outperforming the conventional kinetic measures of maximum peak enhancement (MPE) and signal enhancement ratio (SER) (AUCs of 0.61 and 0.57, respectively).
These findings suggest that our DCE-MRI kinetic statistic features can be used to improve the prediction of
axillary node status in breast cancer patients. Such features could ultimately be used as imaging biomarkers to
guide personalized treatment choices for women diagnosed with breast cancer.
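The per-pixel kinetic measures named in the abstract can be computed directly from an enhancement curve; the sampling times and curve values below are illustrative:

```python
import numpy as np

def kinetic_features(curve, t):
    """Peak enhancement (PE), time to peak (TTP), wash-in slope (WIS)
    and wash-out slope (WOS) from an enhancement curve sampled at t."""
    i = int(np.argmax(curve))
    pe = float(curve[i])
    ttp = float(t[i])
    wis = (curve[i] - curve[0]) / (t[i] - t[0]) if i > 0 else 0.0
    wos = (curve[-1] - curve[i]) / (t[-1] - t[i]) if i < len(curve) - 1 else 0.0
    return pe, ttp, wis, wos

t = np.array([0.0, 60.0, 120.0, 180.0, 240.0])   # seconds post-contrast
curve = np.array([0.0, 0.8, 1.2, 1.0, 0.9])      # relative enhancement
pe, ttp, wis, wos = kinetic_features(curve, t)
```

Partition-wise statistics of these maps (e.g. mean PE within each TTP partition) then give the 21 kinetic statistic features described above.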
A multi-scale approach to mass segmentation using graph cuts
Show abstract
This paper presents a novel scheme for mass segmentation in digitized mammograms, which is based on Graph Cuts
algorithm and multi-scale analysis. The multi-scale method can segment mammographic images with a stepwise process
from global to local segmentation by iterating Graph Cuts. To improve the segmentation efficiency and robustness, the
watershed transform is used for pre-segmentation of the image to produce a region adjacency graph for the following
optimization steps. In addition, this paper proposes a strategy of increasing the smoothness energy term step by step in the Markov Random Field (MRF) image segmentation module, which effectively improves the efficiency of mass segmentation. The new segmentation strategy improves segmentation performance and is less influenced by image noise. The experimental results demonstrate that the proposed method achieves a better performance in
accuracy and robustness than conventional ones.
Computer-aided diagnostics of screening mammography using content-based image retrieval
Show abstract
Breast cancer is one of the main causes of death among women in occidental countries. In the last years, screening
mammography has been established worldwide for early detection of breast cancer, and computer-aided diagnostics
(CAD) is being developed to assist physicians reading mammograms. A promising method for CAD is content-based
image retrieval (CBIR). Recently, we have developed a classification scheme of suspicious tissue pattern based on the
support vector machine (SVM). In this paper, we continue moving towards automatic CAD of screening mammography.
The experiments are based on a total of 10,509 radiographs collected from different sources. Of these, 3,375 images carry one chain-code annotation of a cancerous region and 430 radiographs carry more than one. In different experiments, this data is divided into 12 and 20 classes, distinguishing between four categories of
tissue density, three categories of pathology and in the 20 class problem two categories of different types of lesions.
Balancing the number of images in each class yields 233 and 45 images remaining in each of the 12 and 20 classes,
respectively. Using a two-dimensional principal component analysis, features are extracted from small patches of 128 x
128 pixels and classified by means of a SVM. Overall, the accuracy of the raw classification was 61.6 % and 52.1 % for
the 12 and the 20 class problem, respectively. The confusion matrices are assessed for detailed analysis. Furthermore, an
implementation of a SVM-based CBIR system for CADx in screening mammography is presented. In conclusion, with a
smarter patch extraction, the CBIR approach might reach precision rates that are helpful for the physicians. This,
however, needs more comprehensive evaluation on clinical data.
A similarity study between the query mass and retrieved masses using decision tree content-based image retrieval (DTCBIR) CADx system for characterization of ultrasound breast mass images
Show abstract
We are developing a Decision Tree Content-Based Image Retrieval (DTCBIR) CADx scheme to assist
radiologists in characterization of breast masses on ultrasound (US) images. Three DTCBIR configurations, including
decision tree with boosting (DTb), decision tree with full leaf features (DTL), and decision tree with selected leaf
features (DTLs) were compared. For DTb, the features of a query mass were combined first into a merged feature score
and then masses with similar scores were retrieved. For DTL and DTLs, similar masses were retrieved based on the
Euclidean distance between the feature vector of the query and those of the selected references. For each DTCBIR
configuration, we investigated the use of the full feature set and the subset of features selected by the stepwise linear
discriminant analysis (LDA) and simplex optimization method, resulting in six retrieval methods. Among the six
methods, we selected five, DTb-lda, DTL-lda, DTb-full, DTL-full and DTLs-full, for the observer study. For a query
mass, three most similar masses were retrieved with each method and were presented to the radiologists in random order.
Three MQSA radiologists rated the similarity between the query mass and the computer-retrieved masses using a nine-point similarity scale (1=very dissimilar, 9=very similar). For DTb-lda, DTL-lda, DTb-full, DTL-full and DTLs-full, the
average Az values were 0.90±0.03, 0.85±0.04, 0.87±0.04, 0.79±0.05 and 0.71±0.06, respectively, and the average
similarity ratings were 5.00, 5.41, 4.96, 5.33 and 5.13, respectively. Although the DTb measures had the best
classification performance among the DTCBIRs studied, and DTLs had the worst performance, DTLs-full obtained
higher similarity ratings than the DTb measures.
Automatic tumor detection in the constrained region for ultrasound breast CAD
Yeong Kyeong Seong,
Moon Ho Park,
Eun Young Ko,
et al.
Show abstract
In this paper we propose a new method to segment a breast ultrasound image into several regions. The tumor search region is constrained to glandular tissue, since tumors usually occur in the glandular tissue of the breast anatomy. We extract texture features for each point and classify the points into several layers using a random forest classifier. Classified points are merged into larger regions, and small regions are removed by postprocessing. The accuracy of glandular tissue detection was about 90%. We then applied a conventional tumor detection method within the segmented glandular tissue. After several tests we found that tumor detection accuracy improved by 14% and that detection time was also reduced. With this method, we achieve improvements in both tumor detection accuracy and processing time.
Automating proliferation rate estimation from Ki-67 histology images
Show abstract
Breast cancer is the second leading cause of cancer death in women and the most frequently diagnosed female cancer in the US. Proliferation rate estimation (PRE) is one of the prognostic indicators that guide treatment protocols, and it is clinically performed from
Ki-67 histopathology images. Automating PRE substantially increases the efficiency of the pathologists. Moreover,
presenting a deterministic and reproducible proliferation rate value is crucial to reduce inter-observer variability. To that
end, we propose a fully automated CAD system for PRE from the Ki-67 histopathology images. This CAD system is
based on a model of three steps: image pre-processing, image clustering, and nuclei segmentation and counting that are
finally followed by PRE. The first step is based on customized color modification and color-space transformation. Then,
image pixels are clustered by K-Means depending on the features extracted from the images derived from the first step.
Finally, nuclei are segmented and counted using global thresholding, mathematical morphology and connected
component analysis. Our experimental results on fifty Ki-67-stained histopathology images show significant agreement between our CAD's automated PRE and the gold standard, where the latter is an average of two observers' estimates. The paired t-test between the automated and manual estimates shows p = 0.86, 0.45, and 0.8 for the brown nuclei count, blue nuclei count, and proliferation rate, respectively. Thus, our proposed CAD system is as reliable as the pathologist in estimating the proliferation rate, and, unlike the pathologist's, its estimate is reproducible.
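The final segmentation-and-counting step can be sketched with connected-component analysis; the binary masks below are toy stand-ins for the K-Means-clustered Ki-67 channels, and the size threshold is an illustrative substitute for the morphology step:

```python
import numpy as np
from scipy import ndimage

def count_nuclei(mask, min_size=5):
    """Count connected components in a binary nucleus mask, discarding
    specks smaller than min_size pixels."""
    lab, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, lab, index=np.arange(1, n + 1))
    return int((sizes >= min_size).sum())

# toy masks standing in for Ki-67-positive (brown) and negative (blue) nuclei
brown = np.zeros((40, 40), bool)
brown[5:10, 5:10] = True
brown[20:24, 30:34] = True               # 2 positive nuclei
blue = np.zeros((40, 40), bool)
blue[30:36, 5:11] = True
blue[2:6, 30:34] = True
blue[15:19, 15:19] = True                # 3 negative nuclei

n_pos, n_neg = count_nuclei(brown), count_nuclei(blue)
proliferation_rate = n_pos / (n_pos + n_neg)   # 2 / 5 = 0.4
```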
Multiresolution Local Binary Pattern texture analysis for false positive reduction in computerized detection of breast masses on mammograms
Show abstract
We investigated the feasibility of using multiresolution Local Binary Pattern (LBP) texture analysis to reduce false-positive (FP) detections in a computerized mass detection framework. A novel approach for extracting LBP features is devised to differentiate masses from normal breast tissue on mammograms. In particular, to characterize the LBP texture patterns of the boundaries of masses, as well as to preserve their spatial structure pattern, two individual LBP texture patterns are extracted from the core region and the ribbon region of each ROI, respectively. These two texture patterns are combined to produce the so-called multiresolution LBP feature of a given ROI. The proposed LBP texture analysis of the information in the mass core
region and its margin has clearly proven to be significant and is not sensitive to the precise location of the
boundaries of masses. In this study, 89 mammograms were collected from the public MAIS database (DB). To
perform a more realistic assessment of FP reduction process, the LBP texture analysis was applied directly to a total
of 1,693 regions of interest (ROIs) automatically segmented by computer algorithm. Support Vector Machine
(SVM) was applied for the classification of mass ROIs from ROIs containing normal tissue. Receiver Operating
Characteristic (ROC) analysis was conducted to evaluate the classification accuracy and its improvement using
multiresolution LBP features. With multiresolution LBP features, the classifier achieved an average area under the
ROC curve, Az, of 0.956 during testing. In addition, the proposed LBP features outperform other state-of-the-art features designed for false-positive reduction.
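For readers unfamiliar with LBP, a basic single-scale 3x3 LBP histogram can be sketched as below; the paper's core/ribbon split and multiresolution combination are not reproduced here, and the random test image is purely illustrative.

```python
import numpy as np

def lbp_histogram(img):
    """Basic 3x3 Local Binary Pattern: threshold the 8 neighbours of each
    pixel against the centre to form an 8-bit code, then return the
    normalized 256-bin histogram of codes as a texture feature."""
    h, w = img.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h - 2, w - 2), dtype=np.int32)
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(np.int32) << bit
    hist = np.bincount(codes.ravel(), minlength=256)
    return hist / hist.sum()

rng = np.random.default_rng(0)
img = rng.random((32, 32))        # stand-in for an ROI
feat = lbp_histogram(img)
print(feat.shape)  # (256,)
```

In the paper's scheme, two such histograms (core and ribbon) would be concatenated into one feature vector before SVM classification.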
Evaluation of stopping criteria for level set segmentation of breast masses in contrast-enhanced dedicated breast CT
Show abstract
Dedicated breast CT (bCT) is an emerging technology that produces 3D images of the breast, thus allowing radiologists to detect and evaluate breast lesions in 3D. However, assessing potential cancers in the bCT volume can prove time consuming and difficult. Thus, we are developing automated 3D lesion segmentation methods to aid in the interpretation of bCT images. Building on previous studies using a 3D radial-gradient index (RGI) method [1], we are investigating whether active contour segmentation can be applied in 3D to capture additional details of the lesion margin.
Our data set includes 40 contrast-enhanced bCT scans. Based on a radiologist-marked lesion center for each mass, an initial RGI contour is obtained that serves as the input to an active contour segmentation method. In this study, active contour level set segmentation, an iterative segmentation technique, is extended to 3D. Three stopping criteria are compared, based on 1) the change of volume (ΔV/V), 2) the mean value of the increased volume at each iteration (dμ/dt), and 3) the changing rate of intensity inside and outside the lesion (Δvw).
Lesion segmentation was evaluated by determining the overlap ratio between computer-determined segmentations and manually-drawn lesion outlines. For a given lesion, the overlap ratio was averaged across coronal, sagittal, and axial planes. The average overlap ratios for the three stopping criteria were found to be 0.66 (ΔV/V), 0.68 (dμ/dt), and 0.69 (Δvw).
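The first stopping criterion (ΔV/V) can be illustrated with a small sketch: iterate until the relative volume change between level-set iterations falls below a tolerance. The tolerance value and volume sequence below are hypothetical, not taken from the study.

```python
def volume_change_stop(volumes, tol=0.01):
    """Return the iteration index at which the relative change in segmented
    volume, |V_i - V_{i-1}| / V_{i-1}, first drops below tol; if it never
    does, return the last iteration."""
    for i in range(1, len(volumes)):
        prev, cur = volumes[i - 1], volumes[i]
        if prev > 0 and abs(cur - prev) / prev < tol:
            return i
    return len(volumes) - 1

# toy sequence of segmented-voxel counts over level-set iterations
vols = [100, 150, 180, 195, 196]
print(volume_change_stop(vols))  # 4
```

The other two criteria (dμ/dt and Δvw) would replace the volume sequence with the mean intensity of newly added voxels and with inside/outside intensity statistics, respectively.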
Computer-aided detection of microcalcifications in digital breast tomosynthesis (DBT): a multichannel signal detection approach on projection views
Show abstract
DBT is one of the promising imaging modalities that may improve the sensitivity and specificity for breast
cancer detection. We are developing a computer-aided detection (CADe) system for clustered microcalcifications (MC)
in DBT. A data set of two-view DBTs from 42 breasts was collected with a GE prototype system. We investigated a 2D
approach to MC detection using projection view (PV) images rather than reconstructed 3D DBT volume. Our 2D
approach consisted of two major stages: 1) detecting individual MC candidates on each PV, and 2) correlating the MC
candidates from the different PVs and detecting clusters in the breast volume. With the MC candidates detected by
prescreening on PVs, a trained multi-channel (MCH) filter bank was used to extract signal response from each MC
candidate. A ray-tracing process was performed to fuse the MCH responses and localize the MC candidates in 3D using
the geometrical information of the DBT system. Potential MC clusters were then identified by dynamic clustering of the
MCs in 3D. A two-fold cross-validation method was used to train and test the CADe system. The detection
performance of clustered MCs was assessed by free receiver operating characteristic (FROC) analysis. It was found that
the CADe system achieved a case-based sensitivity of 90% at an average false positive rate of 2.1 clusters per DBT
volume. Our study demonstrated that the CADe system using 2D MCH filter bank is promising for detection of
clustered MCs in DBT.
Analysis of breast CT lesions using computer-aided diagnosis: an application of neural networks on extracted morphologic and texture features
Shonket Ray,
Nicolas D. Prionas,
Karen K. Lindfors,
et al.
Show abstract
Dedicated cone-beam breast CT (bCT) scanners have been developed as a potential alternative imaging modality to
conventional X-ray mammography in breast cancer diagnosis. As with other modalities, quantitative imaging (QI)
analysis can potentially be utilized as a tool to extract useful numeric information concerning diagnosed lesions from
high quality 3D tomographic data sets. In this work, preliminary QI analysis was done by designing and implementing a
computer-aided diagnosis (CADx) system consisting of image preprocessing, object(s) of interest (i.e. masses,
microcalcifications) segmentation, structural analysis of the segmented object(s), and finally classification into benign or
malignant disease. Image sets were acquired from bCT patient scans with diagnosed lesions. Iterative watershed
segmentation (IWS), a hybridization of the watershed method using observer-set markers and a gradient vector flow
(GVF) approach, was used as the lesion segmentation method in 3D. Eight morphologic parameters and six texture
features based on gray level co-occurrence matrix (GLCM) calculations were obtained per segmented lesion and
combined into multi-dimensional feature input data vectors. Artificial neural network (ANN) classifiers were used by
performing cross validation and network parameter optimization to maximize area under the curve (AUC) values of the
resulting receiver-operating characteristic (ROC) curves. Within these ANNs, biopsy-proven diagnoses of malignant and
benign lesions were recorded as target data while the feature vectors were saved as raw input data. With the image data
separated into post-contrast (n = 55) and pre-contrast (n = 39) sets, maximum AUCs of 0.70 ± 0.02 and 0.80 ± 0.02, respectively, were achieved after ANN application.
Poster Session: Cardiovascular
A robust and accurate approach to automatic blood vessel detection and segmentation from angiography x-ray images using multistage random forests
Show abstract
In this paper we propose a novel approach based on multi-stage random forests to address problems faced by traditional vessel segmentation algorithms caused by image artifacts such as stitches and organ shadows. Our approach consists of collecting a very large training set of positive and negative examples of valid seed points. The method makes use of a 14x14 window around a putative seed point. For this window, three types of feature vectors are computed: vesselness, eigenvalue, and a novel effective margin feature. A random forest (RF) is trained for each of the feature vectors. At run time, the three RFs are applied in succession to the putative seed points generated by a naive vessel detection algorithm based on vesselness. Our approach prunes this set of putative seed points to correctly identify true seed points, thereby avoiding false positives. We demonstrate the effectiveness of our algorithm on a large dataset of angiography images.
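The multi-stage idea above can be sketched as a cascade: a candidate survives only if every stage accepts it. The toy predicates below are hypothetical stand-ins for the three trained random forests, and the candidate tuples are illustrative.

```python
def cascade(classifiers, candidates):
    """Apply a sequence of classifiers in succession: each stage filters
    the candidate seed points that survived the previous stages."""
    kept = list(candidates)
    for clf in classifiers:
        kept = [c for c in kept if clf(c)]
    return kept

# toy stages: each is a predicate over a (vesselness, eigen-ratio) pair,
# standing in for the vesselness / eigenvalue / margin-feature forests
stages = [lambda c: c[0] > 0.5, lambda c: c[1] < 2.0]
cands = [(0.9, 1.2), (0.3, 0.8), (0.8, 3.1)]
print(cascade(stages, cands))  # [(0.9, 1.2)]
```

The cascade ordering lets cheap stages reject most false seed points before later stages run.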
Automated detection of contractile abnormalities from stress-rest motion changes
Shahryar Karimi-Ashtiani,
Reza Arsanjani,
Mathews Fish,
et al.
Show abstract
Changes in myocardial function signatures such as wall motion and thickening are typically computed separately from
myocardial perfusion SPECT (MPS) stress and rest studies to assess for stress-induced function abnormalities. The
standard approach may suffer from the variability in contour placements and image orientation when subtle changes
between stress and rest scans in motion and thickening are being evaluated. We have developed a new measure of
regional change of function signature (motion and thickening) computed directly from registered stress and rest gated
MPS data. In our novel approach, endocardial surfaces at the end-diastolic and end-systolic frames for stress and rest
studies were registered by matching ventricular surfaces. Furthermore, we propose a new global registration method
based on finding the optimal rotation for myocardial best ellipsoid fit to minimize the indexing disparities between two
surfaces between stress and rest studies. Myocardial stress-rest function changes were computed and normal limits of
change were determined as the mean and standard deviation of the training set for each polar sample. Normal limits were
utilized to quantify the stress-rest function change for each polar map sample and the accumulated quantified function
signature values were used for abnormality assessments in territorial regions. To evaluate the effectiveness of our novel
method, we examined the agreements of our results against visual scores for motion change on vessel territorial regions
obtained by human experts on a test group of 623 cases, and were able to show that our detection method has improved sensitivity on a per-vessel-territory basis compared to that obtained by human experts utilizing gated MPS
data.
Segmentation of the common carotid artery with active shape models from 3D ultrasound images
Show abstract
Carotid atherosclerosis is a major cause of stroke, a leading cause of death and disability. In this paper, we
develop and evaluate a new segmentation method for outlining both lumen and adventitia (inner and outer walls)
of the common carotid artery (CCA) from three-dimensional ultrasound (3D US) images for carotid atherosclerosis diagnosis and evaluation. The data set consists of sixty-eight 3D US volumes (17 patients × 2 arteries × 2 time points) acquired from the left and right carotid arteries of seventeen patients (eight treated with 80 mg atorvastatin and nine with placebo), who had carotid stenosis of 60% or more, at baseline and after three months of treatment. We investigate the
use of Active Shape Models (ASMs) to segment CCA inner and outer walls after statin therapy. The proposed
method was evaluated with respect to expert manually outlined boundaries as a surrogate for ground truth. For
the lumen and adventitia segmentations, respectively, the algorithm yielded Dice Similarity Coefficient (DSC) of
93.6%± 2.6%, 91.8%± 3.5%, mean absolute distances (MAD) of 0.28± 0.17mm and 0.34 ± 0.19mm, maximum
absolute distances (MAXD) of 0.87 ± 0.37 mm and 0.74 ± 0.49 mm. The proposed algorithm took 4.4 ± 0.6 min to segment a single 3D US image, compared to 11.7 ± 1.2 min for manual segmentation. Therefore, the method could facilitate the translation of carotid 3D US to clinical care for fast, safe, and economical monitoring of atherosclerotic disease progression and regression during therapy.
A fully automated multi-modal computer aided diagnosis approach to coronary calcium scoring of MSCT images
Show abstract
Inter- and intra- observer variability is a problem often faced when an expert or observer is tasked with assessing
the severity of a disease. This issue is keenly felt in coronary calcium scoring of patients suffering from atherosclerosis
where in clinical practice, the observer must identify firstly the presence, followed by the location of candidate calcified
plaques found within the coronary arteries that may prevent oxygenated blood flow to the heart muscle. However, it
can be difficult for a human observer to differentiate calcified plaques that are located in the coronary arteries from
those found in surrounding anatomy such as the mitral valve or pericardium.
In addition to the benefits to scoring accuracy, fast, low-dose multi-slice CT imaging can acquire the entire heart within a single breath hold, exposing the patient to a lower radiation dose; for a progressive disease such as atherosclerosis, where multiple scans may be required, this is beneficial to the patient's health.
Presented here is a fully automated method for calcium scoring using both the traditional Agatston method, as
well as the volume scoring method. Elimination of the unwanted regions of the cardiac image slices such as lungs,
ribs, and vertebrae is carried out using adaptive heart isolation. Such regions cannot contain calcified plaques but
can be of similar intensity, and their removal aids detection. Removal of both the ascending and descending aortas, as they contain clinically insignificant plaques, is necessary before the final calcium scores are calculated and compared against ground truth scores averaged over three expert observers. The results presented here are intended to show the feasibility of, and need for, an automated scoring method to reduce the subjectivity and reproducibility error inherent in manual clinical calcium scoring.
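For reference, the traditional Agatston score mentioned above sums, over detected lesions, the lesion area times a weight set by the lesion's peak attenuation in Hounsfield units. The sketch below uses the standard weight bands; the pixel area and lesion values are hypothetical.

```python
def agatston(lesions, pixel_area_mm2=0.5):
    """Agatston score sketch: each calcified lesion, given as
    (pixel_count, peak_HU), contributes area (mm^2) times a density
    weight of 1-4 chosen from its peak attenuation."""
    def weight(max_hu):
        if max_hu >= 400:
            return 4
        if max_hu >= 300:
            return 3
        if max_hu >= 200:
            return 2
        if max_hu >= 130:
            return 1
        return 0  # below the calcification threshold

    return sum(n_pix * pixel_area_mm2 * weight(hu) for n_pix, hu in lesions)

# two hypothetical plaques: (pixel count, peak HU)
print(agatston([(20, 250), (10, 450)]))  # 20*0.5*2 + 10*0.5*4 = 40.0
```

Volume scoring, the second method in the paper, would instead sum voxel volumes above the calcification threshold without the density weighting.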
Post-procedural evaluation of catheter contact force characteristics
Show abstract
Minimally invasive catheter ablation of electric foci, performed in electrophysiology labs, is an attractive treatment
option for atrial fibrillation (AF) - in particular if drug therapy is no longer effective or tolerated. There
are different strategies to eliminate the electric foci inducing the arrhythmia. Independent of the particular
strategy, it is essential to place transmural lesions. The impact of catheter contact force on the generated lesion
quality has been investigated recently, and first results are promising. There are different approaches to measure
catheter-tissue contact. Besides traditional haptic feedback, there are new technologies either relying on catheter
tip-to-tissue contact force or on local impedance measurements at the tip of the catheter.
In this paper, we present a novel tool for post-procedural ablation point evaluation and visualization of contact
force characteristics. Our method is based on localizing ablation points set during AF ablation procedures. The
3-D point positions are stored together with lesion specific catheter contact force (CF) values recorded during
the ablation. The force records are mapped to the spatial 3-D positions, where the energy has been applied.
The tracked positions of the ablation points can be further used to generate a 3-D mesh model of the left atrium
(LA). Since our approach facilitates visualization of different force characteristics for post-procedural evaluation
and verification, it has the potential to improve outcome by highlighting areas where lesion quality may be less
than desired.
Poster Session: Dental
A new screening pathway for identifying asymptomatic patients using dental panoramic radiographs
Show abstract
Identifying asymptomatic patients is a challenging task and the essential first step in diagnosis. Findings on dental
panoramic radiographs include not only dental conditions but also radiographic signs that are suggestive of possible
systemic diseases such as osteoporosis, arteriosclerosis, and maxillary sinusitis. Detection of such signs on panoramic
radiographs has a potential to provide supplemental benefits for patients. However, it is not easy for general dental
practitioners to pay careful attention to such signs. We addressed the development of a computer-aided detection (CAD)
system that detects radiographic signs of pathology on panoramic images, and the design of the framework of a new screening pathway based on cooperation between dentists and our CAD system. The performance evaluation of our CAD system
showed the sensitivity and specificity in the identification of osteoporotic patients were 92.6 % and 100 %, respectively,
and those for maxillary sinus abnormality were 89.6 % and 73.6 %, respectively. The detection rate of carotid artery calcifications, which suggest the need for further medical evaluation, was approximately 93.6 % with 4.4 false positives per
image. To validate the utility of the new screening pathway, preliminary clinical trials by using our CAD system were
conducted. To date, 223 panoramic images were processed and 4 asymptomatic patients with suspected osteoporosis, 7
asymptomatic patients with suspected calcifications, and 40 asymptomatic patients with suspected maxillary sinusitis
were detected in our initial trial. It was suggested that our new screening pathway could be useful to identify
asymptomatic patients with systemic diseases.
Automated scheme for measuring mandibular cortical thickness on dental panoramic radiographs for osteoporosis screening
Show abstract
Findings of dental panoramic radiographs (DPRs) have shown that the mandibular cortical thickness (MCT) was
significantly correlated with osteoporosis. Identifying asymptomatic patients with osteoporosis through dental
examinations may bring a supplemental benefit for the patients. However, most of the DPRs are used for only diagnosing
dental conditions by dentists in their routine clinical work. The aim of this study was to develop a computer-aided diagnosis scheme that automatically measures MCT to assist dentists in screening for osteoporosis. First, the inferior
border of mandibular bone was detected by use of an active contour method. Second, the locations of mental foramina
were estimated on the basis of the inferior border of mandibular bone. Finally, MCT was measured on the basis of the
grayscale profile analysis. One hundred DPRs were used to evaluate our proposed scheme. Experimental results showed
that the sensitivity and specificity for identifying osteoporotic patients were 92.6 % and 100 %, respectively. We
conducted multiclinic trials, in which 223 cases have been obtained and processed in about a month. Our scheme
succeeded in detecting all cases of suspected osteoporosis. Therefore, our scheme may have the potential to identify
osteoporotic patients at an early stage.
Automatic detection of apical roots in oral radiographs
Show abstract
The apical root regions play an important role in analysis and diagnosis of many oral diseases. Automatic
detection of such regions is consequently the first step toward computer-aided diagnosis of these diseases.
In this paper we propose an automatic method for periapical root region detection using state-of-the-art machine learning approaches. Specifically, we have adapted the AdaBoost classifier for apical root
detection. One challenge in the task is the lack of training cases especially for diseased ones. To handle this
problem, we boost the training set by including more root regions that are close to the annotated ones and
decompose the original images to randomly generate negative samples. Based on these training samples,
the AdaBoost algorithm, in combination with Haar wavelets, is utilized to train an apical root
detector. The learned detector usually generates a large number of true and false positives. In order to
reduce the number of false positives, a confidence score for each candidate detection result is calculated for
further purification. We first merge tightly overlapping candidate regions, and then we use the confidence scores from the AdaBoost detector to eliminate the false
positives. The proposed method is evaluated on a dataset containing 39 annotated digitized oral X-Ray
images from 21 patients. The experimental results show that our approach can achieve promising detection
accuracy.
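The merging of tightly overlapping candidate regions can be sketched as a greedy, score-ordered overlap suppression. This is a stand-in for the authors' exact merging rule; the boxes, scores, and overlap threshold below are illustrative.

```python
def merge_detections(boxes, iou_thresh=0.5):
    """Greedy merge of overlapping detections. Boxes are
    (x1, y1, x2, y2, score); keep the highest-scoring box of each
    overlapping group, suppressing boxes that overlap a kept one."""
    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])

    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = area(a) + area(b) - inter
        return inter / union if union else 0.0

    kept = []
    for box in sorted(boxes, key=lambda b: -b[4]):  # best score first
        if all(iou(box, k) < iou_thresh for k in kept):
            kept.append(box)
    return kept

dets = [(0, 0, 10, 10, 0.9), (1, 1, 11, 11, 0.8), (50, 50, 60, 60, 0.7)]
print(len(merge_detections(dets)))  # 2
```

The remaining detections would then be filtered by the detector's confidence scores, as the abstract describes.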
Improved classification and visualization of healthy and pathological hard dental tissues by modeling specular reflections in NIR hyperspectral images
Show abstract
Despite major improvements in dental healthcare and technology, dental caries remains one of the most prevalent
chronic diseases of modern society. The initial stages of dental caries are characterized by demineralization of enamel
crystals, commonly known as white spots, which are difficult to diagnose. Near-infrared (NIR) hyperspectral imaging is
a new promising technique for early detection of demineralization which can classify healthy and pathological dental
tissues. However, due to non-ideal illumination of the tooth surface the hyperspectral images can exhibit specular
reflections, in particular around the edges and the ridges of the teeth. These reflections significantly affect the
performance of automated classification and visualization methods. A cross-polarized imaging setup can effectively remove the specular reflections; however, due to its complexity and other imaging setup limitations, it is not always feasible. In this paper, we propose an alternative approach based on modeling the specular reflections of hard dental
tissues, which significantly improves the classification accuracy in the presence of specular reflections. The method was
evaluated on five extracted human teeth with corresponding gold standard for 6 different healthy and pathological hard
dental tissues including enamel, dentin, calculus, dentin caries, enamel caries and demineralized regions. Principal
component analysis (PCA) was used for multivariate local modeling of healthy and pathological dental tissues. The
classification was performed by employing multiple discriminant analysis. Based on the obtained results we believe the
proposed method can be considered as an effective alternative to the complex cross polarized imaging setups.
Poster Session: Eye
Retinal image enhancement and registration for the evaluation of longitudinal changes
Show abstract
Retinal imaging is a long-accepted clinical diagnostic method for ocular diseases. Of late, automated assessment of retinal images has proven to be a useful adjunct in clinical decision support systems. In this paper, we propose a retinal image registration method, which combines retinal image enhancement and non-rigid image registration, for longitudinal retinal image alignment. Illumination correction and gray-value matching are then applied for longitudinal image comparison and subtraction. The solution can enhance the assessment of longitudinal changes in
retinal images and image subtraction in a clinical application system. The performance of the proposed solution has been
tested on longitudinal retinal images. Preliminary results have demonstrated the accuracy and robustness of the solutions
and their potential application in a clinical environment.
Poster Session: Lung
Automatic seed point identification and main artery segmentation for pulmonary vascular tree segmentation and tracking in computed tomographic pulmonary angiography (CTPA)
Show abstract
We are developing a computer-aided detection (CAD) system to assist radiologists in pulmonary embolism (PE)
detection in computed tomographic pulmonary angiography (CTPA). Automatic segmentation and tracking of
pulmonary vessels is a fundamental step to define the search space for PE detection. For automated tracking of
pulmonary arteries, it is important to accurately identify the seed points to track the left and right pulmonary vessel
trees. In this study, we developed an automatic seed point identification and pulmonary main artery (PMA)
segmentation method. The seed point was derived from the bifurcation region where the pulmonary trunk artery
splits into the left and right. A 3D recursive optimal path finding method (RPF) was developed to find the paths from
the bifurcation point to the end of the left and right PMAs. The PMAs were finally extracted along the PMA paths
using morphological operation.
Two and 18 CTPA cases were used for training and testing, respectively. A set of points in the central luminal space of the PMA were manually marked as the "reference standard" by two experienced chest radiologists using a computer interface. A total of 3870 points were marked in the test set. A voxel located on the computer-identified paths of
the PMA was counted as a true PMA voxel when its distance to the closest reference standard point is within a
threshold. Our results show that 95.6% (17681/18502) and 88.8% (16439/18502) of computer identified PMA path
points were within a distance of 10 mm and 8 mm to the closest reference point, respectively, and 100% (18/18) of
the seed points were detected in the bifurcation region. 2.7% (104/3870) of the reference standard points were not
contained in the computer segmented vessels and counted as false negative points.
Active relearning for robust supervised classification of pulmonary emphysema
Show abstract
Radiologists are adept at recognizing the appearance of lung parenchymal abnormalities in CT scans. However,
the inconsistent differential diagnosis, due to subjective aggregation, mandates supervised classification. Towards
optimizing Emphysema classification, we introduce a physician-in-the-loop feedback approach in order to minimize
uncertainty in the selected training samples. Using multi-view inductive learning with the training samples,
an ensemble of Support Vector Machine (SVM) models, each based on a specific pair-wise dissimilarity metric,
was constructed in less than six seconds. In the active relearning phase, the ensemble-expert label conflicts were
resolved by an expert. This just-in-time feedback with unoptimized SVMs yielded a 15% increase in classification accuracy and a 25% reduction in the number of support vectors. The generality of relearning was assessed in the optimized parameter space of six different classifiers across seven dissimilarity metrics, where the average accuracy improvement increased to 21%. The co-operative feedback method proposed here could enhance both diagnostic and
staging throughput efficiency in chest radiology practice.
Comparison of analysis methods for airway quantification
Show abstract
Diseased airways have been known for several years as a possible contributing factor to airflow limitation in Chronic
Obstructive Pulmonary Diseases (COPD). Quantification of disease severity through the evaluation of airway
dimensions - wall thickness and lumen diameter - has gained increased attention, thanks to the availability of multi-slice
computed tomography (CT). Novel approaches have focused on automated methods of measurement as a faster and
more objective means than the visual assessment routinely employed in the clinic. Since the Full-Width Half-Maximum
(FWHM) method of airway measurement was introduced two decades ago [1], several new techniques for quantifying
airways have been detailed in the literature, but no approach has truly become a standard for such analysis. Our own
research group has presented two alternative approaches for determining airway dimensions, one involving a minimum
path and the other active contours [2, 3]. With an increasing number of techniques dedicated to the same goal, we
decided to take a step back and analyze the differences of these methods. We consequently put to the test our two
methods of analysis and the FWHM approach. We first measured a set of 5 airways from a phantom of known
dimensions. Then we compared measurements from the three methods to those of two independent readers, performed
on 35 airways in 5 patients. We elaborate on the differences between the approaches and draw conclusions as to which could be considered the best.
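The FWHM principle referenced above measures a structure's width as the span over which a 1-D intensity profile stays at or above half its peak. A toy sketch, in sample units and with a hypothetical wall profile:

```python
def fwhm(profile):
    """Full-Width Half-Maximum of a 1-D intensity profile, in samples:
    the span between the first and last samples at or above half the
    profile's peak value."""
    peak = max(profile)
    half = peak / 2.0
    above = [i for i, v in enumerate(profile) if v >= half]
    return above[-1] - above[0] + 1

# toy airway-wall profile: background, wall peak, background
profile = [0, 1, 4, 9, 10, 9, 4, 1, 0]
print(fwhm(profile))  # 3
```

In practice the profile would be sampled along a ray crossing the airway wall, and the sample spacing would convert the result to millimetres.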
Changes of nodule detection after radiologists read bone opacity suppressed chest radiography
Show abstract
A bone opacity suppression technique using a shape-index processing approach has been developed for frontal chest radiography. The image function preserves original lung image textures while equalizing the image contrast of the lungs as part of post-processing. To determine the benefit of this computerized processing, and in particular to investigate the effect of bone opacity removal, we conducted a reader study in which radiologists read the standard chest radiograph alone (unaided) followed by the bone opacity suppressed image (aided). Posteroanterior (PA) standard chest radiographs of 368 subjects (122 had confirmed lung cancer) were used for this study. Fifteen board-certified radiologists participated in the reader study. Each radiologist interpreted the standard image and then the bone suppressed image. Each reader recorded the location of the most suspicious nodule, if any, their level of suspicion, and their recommendation for clinical action. Detailed analyses were performed to evaluate the observers' performance by tabulating changes of nodule detection: false negative turned to true positive (FN->TP), true positive turned to false negative (TP->FN), false positive turned to true negative (FP->TN), and true negative turned to false positive (TN->FP). Our results indicated rates of change of 12.35% for FN->TP, 1.37% for TP->FN, 1.14% for FP->TN, and 4.82% for TN->FP. We also found that 81.85% of the FN->TP events occurred at nodules significantly covered by the rib (50% or more of the area overlapped with bone opacity). Two major situations caused TP->FN events: (1) other nodule-like areas were also enhanced, and (2) non-solid nodules were well preserved but appeared less suspicious after the contrast equalization.
Automatic segmentation of ground-glass opacity nodule on chest CT images by histogram modeling and local contrast
Show abstract
We propose an automatic segmentation method for Ground Glass Opacity (GGO) nodules on chest CT images based on histogram modeling and local contrast. First, an optimal volume circumscribing a nodule is calculated from a user click inside the GGO nodule. To remove noise while preserving the nodule boundary, anisotropic diffusion filtering is applied to the optimal volume. Second, to decide an appropriate threshold value for the GGO nodule, histogram modeling is performed by
Gaussian Mixture Modeling (GMM) with three components such as lung parenchyma, nodule, and chest wall or vessels.
Third, the attached chest wall and vessels are separated from the GGO nodules by maximum curvature points linking and
morphological erosion with an adaptive circular mask. Fourth, the initial boundary of the GGO nodule is refined using local contrast information. Experimental results show that attached neighboring structures are well separated from GGO nodules while missed GGO regions are recovered. The proposed segmentation method can be used to measure the growth rate of a nodule and the proportion of the solid portion inside the nodule.
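The histogram-modeling step can be illustrated with a tiny 1-D EM fit of a three-component Gaussian mixture to synthetic intensities; the normalized intensity range, initialization, and data below are illustrative, not the authors' GMM.

```python
import numpy as np

def gmm_1d(x, k=3, iters=50):
    """Tiny EM fit of a k-component 1-D Gaussian mixture to samples x."""
    # initialize means spread across the data via quantiles
    mu = np.quantile(x, [(2 * j + 1) / (2 * k) for j in range(k)])
    var = np.full(k, 0.01)
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        p = (w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
             / np.sqrt(2 * np.pi * var))
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        n = r.sum(axis=0)
        w, mu = n / len(x), (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-6
    return mu, var, w

# synthetic normalized intensities: parenchyma, nodule, wall/vessel clusters
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.2, 0.05, 300),
                    rng.normal(0.5, 0.05, 300),
                    rng.normal(0.8, 0.05, 300)])
mu, _, _ = gmm_1d(x)
print(np.sort(mu))  # three means near the cluster centres
```

A threshold separating the nodule component from its neighbours could then be read off where adjacent fitted Gaussians intersect.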
Improving performance of computer-aided detection of pulmonary embolisms by incorporating a new pulmonary vascular-tree segmentation algorithm
Show abstract
We developed a new pulmonary vascular tree segmentation/extraction algorithm. The purpose of this study was to
assess whether adding this new algorithm to our previously developed computer-aided detection (CAD) scheme of
pulmonary embolism (PE) could improve the CAD performance (in particular reducing false positive detection rates). A
dataset containing 12 CT examinations with 384 verified pulmonary embolism regions associated with 24 three-dimensional (3-D) PE lesions was selected for this study. Our new CAD scheme includes the following image processing
and feature classification steps. (1) A 3-D based region growing process followed by a rolling-ball algorithm was
utilized to segment lung areas. (2) The complete pulmonary vascular trees were extracted by combining two approaches
of using an intensity-based region growing to extract the larger vessels and a vessel enhancement filtering to extract the
smaller vessel structures. (3) A toboggan algorithm was implemented to identify suspicious PE candidates in segmented
lung or vessel area. (4) A three layer artificial neural network (ANN) with the topology 27-10-1 was developed to reduce
false positive detections. (5) A k-nearest neighbor (KNN) classifier optimized by a genetic algorithm was used to
compute detection scores for the PE candidates. (6) A grouping scoring method was designed to detect the final PE
lesions in three dimensions. The study showed that integrating the pulmonary vascular tree extraction algorithm into the
CAD scheme reduced false positive rates by 16.2%. For case-based 3-D PE lesion detection, the integrated
CAD scheme achieved 62.5% detection sensitivity with 17.1 false-positive lesions per examination.
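Step (5) above scores PE candidates with a k-nearest-neighbor classifier. As an illustrative sketch (the feature vectors, k, and distance metric here are hypothetical stand-ins, not the paper's GA-optimized configuration), a KNN detection score can be computed as the fraction of true-PE samples among a candidate's k closest training samples:

```python
import math

def knn_score(candidate, training, k=5):
    """Detection score for a PE candidate: fraction of true-PE samples
    among its k nearest neighbors in feature space (illustrative)."""
    dists = sorted(
        (math.dist(candidate, feats), label) for feats, label in training
    )
    nearest = dists[:k]
    return sum(label for _, label in nearest) / len(nearest)
```

A score near 1 indicates that the candidate's neighborhood in feature space is dominated by verified PE regions.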
Pulmonary nodule detection in PET/CT images: improved approach using combined nodule detection and hybrid FP reduction
In this study, an automated scheme for detecting pulmonary nodules in PET/CT images has
been proposed using combined detection and hybrid false-positive (FP) reduction techniques.
The initial nodule candidates were detected separately from CT and PET images. FPs were
then eliminated in the initial candidates by using support vector machine with characteristic
values obtained from CT and PET images. In the experiment, we evaluated the proposed method
using 105 cases of PET/CT images obtained in a cancer-screening program. We
evaluated the true-positive fraction (TPF) and FPs per case. The TPFs of the CT and PET
detections were 0.76 and 0.44, respectively. By integrating both results, the TPF
reached 0.82 with 5.14 FPs/case. These results indicate that our method may be of practical
use for the detection of pulmonary nodules using PET/CT images.
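The integration of CT and PET candidate lists can be sketched as a proximity-based union, with the true-positive fraction computed against reference lesion locations. This is a simplified illustration; the merging tolerance and centroid representation are assumptions, not the paper's actual matching rule:

```python
import math

def merge_detections(ct_hits, pet_hits, tol=5.0):
    """Union of CT and PET candidate centroids (mm); PET candidates
    closer than `tol` to an existing candidate are treated as the
    same finding (the tolerance is an illustrative assumption)."""
    merged = list(ct_hits)
    for p in pet_hits:
        if all(math.dist(p, c) > tol for c in merged):
            merged.append(p)
    return merged

def true_positive_fraction(truths, detections, tol=5.0):
    """Fraction of reference lesions hit by at least one detection."""
    hit = sum(
        any(math.dist(t, d) <= tol for d in detections) for t in truths
    )
    return hit / len(truths)
```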
Investigating the dose dependence of median pixel value in CT lung images of patients undergoing stereotactic body radiation therapy
Brianna Knoll,
Alexandra Cunliffe,
Hania Al-Hallaq,
et al.
We investigated the relationship between the local dose delivered and median pixel-value change following
radiation therapy (RT) by comparing anatomically matched regions of interest (ROIs) in pre- and post-RT computed
tomography (CT) scans. Six patients' clinical pre-treatment baseline CT scans, follow-up CT scans, treatment planning
CT scans, and dose maps were collected. The lungs were extracted using an automated segmentation algorithm, and
demons deformable registration was used to register each patient's follow-up scan and treatment planning scan to their
baseline scan. Median pixel values were calculated in anatomically matched ROIs in the baseline and deformed followup
CT scans, and mean dose delivered to the same ROIs was determined from the deformed dose map. Pearson's
correlation coefficients, rank correlation coefficients, and linear modeling were utilized to quantify the relationship
between median pixel-value change and mean dose delivered. Pearson's correlation coefficients for the six patients
ranged from -0.13 to 0.67. Rank correlation coefficients ranged from -0.12 to 0.80. Linear regression analysis on the six
patients' combined data yielded a slope of 2.62 (p < 0.001) and R-squared value of 0.24. General positive trends were
observed between radiation dose and median pixel value change, but no two patients had the same relationship between
these variables, indicating it may not be possible to generalize patients' reactions to varying dose levels of radiation.
Thus, an individualized method for evaluating normal lung tissue damage based on changes in each patient's CT scan
following radiation treatment may be required to assess radiation-induced lung damage.
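The reported analysis (Pearson correlation and linear regression of median pixel-value change on mean dose) can be reproduced in miniature with the closed-form formulas. A minimal sketch on toy dose / HU-change pairs, not the patients' data:

```python
def pearson_r(xs, ys):
    """Pearson correlation between dose and median pixel-value change."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def ols_slope(xs, ys):
    """Slope of the least-squares regression line of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    return cov / vx
```

With ROI-level dose values as `xs` and median pixel-value changes as `ys`, these two quantities correspond to the r and slope statistics quoted in the abstract.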
Effect of denoising on supervised lung parenchymal clusters
Denoising is a critical preconditioning step for quantitative analysis of medical images. Despite promises of more
consistent diagnosis, denoising techniques are seldom explored in clinical settings. While this may be attributed
to the esoteric nature of the parameter-sensitive algorithms, the lack of quantitative measures of their efficacy in
enhancing clinical decision making is a primary cause of physician apathy. This paper addresses this issue
by exploring the effect of denoising on the integrity of supervised lung parenchymal clusters. Multiple volumes
of interest (VOIs) were selected across multiple high-resolution CT scans to represent samples of different
patterns (normal, emphysema, ground glass, honeycombing, and reticular). The VOIs were labeled through
consensus of four radiologists. The original datasets were filtered by multiple denoising techniques (median
filtering, anisotropic diffusion, bilateral filtering, and non-local means) and the corresponding filtered VOIs were
extracted. A plurality of cluster indices based on multiple histogram-based pairwise similarity measures was used
to assess the quality of the supervised clusters in the original and filtered spaces. The resultant rank orders were
analyzed using the Borda criterion to find the denoising/similarity-measure combination with the best cluster
quality. Our exhaustive analysis reveals that (a) for a number of similarity measures, cluster quality is inferior
in the filtered space; and (b) for measures that benefit from denoising, simple median filtering outperforms
non-local means and bilateral filtering. Our study suggests the need to judiciously choose, if required, a denoising
technique that does not deteriorate the integrity of supervised clusters.
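The Borda rank-aggregation step can be illustrated directly: each cluster-quality index contributes a rank order over the denoising techniques, and positions are converted to points that are summed across indices. A minimal sketch with hypothetical rankings:

```python
def borda_aggregate(rankings):
    """Combine rank orders from several cluster-quality indices.

    rankings: list of rank orders, each an ordered list of method
    names (best first).  Returns methods sorted by total Borda score.
    """
    scores = {}
    for order in rankings:
        n = len(order)
        for pos, method in enumerate(order):
            # Best position gets n points, next gets n-1, and so on.
            scores[method] = scores.get(method, 0) + (n - pos)
    return sorted(scores, key=scores.get, reverse=True)
```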
A hybrid preprocessing method using geometry-based diffusion and selective enhancement filtering for pulmonary nodule detection
Computer-aided diagnosis (CAD) systems have been developed to assist radiologists in the early
detection and analysis of lung nodules. For pulmonary nodule detection, image preprocessing is
required to remove the anatomical structures of the lung parenchyma and to enhance the visibility of
pulmonary nodules. In this paper a hybrid preprocessing technique using geometry-based diffusion
and selective enhancement filtering is proposed. This technique provides a unified preprocessing
framework for solid as well as ground-glass opacity (GGO) nodules. Geometry-based
diffusion is applied to smooth the images while preserving boundaries. In order to improve
the sensitivity of pulmonary nodule detection, a selective enhancement filter is used to highlight blob-like
structures. However, the selective enhancement filter sometimes enhances structures other than
nodules, such as blood vessels and airways, resulting in a large number of false positives. In the first step,
geometry-based diffusion (GBD) is applied to reduce false positives, and in the second step, selective
enhancement filtering is used to further reduce false negatives. Geometry-based diffusion and
selective enhancement filtering have each been used separately as preprocessing steps, but their combined
effect had not been investigated earlier. This hybrid preprocessing approach is suitable for accurate calculation
of voxel based features. The proposed method has been validated on one public database
named Lung Image Database Consortium (LIDC) containing 50 nodules (30 solid and 20 GGO
nodule) from 30 subjects and one private database containing 40 nodules (25 solid and 15 GGO
nodule) from 30 subjects.
Idiopathic interstitial pneumonias and emphysema: detection and classification using a texture-discriminative approach
Our study aims at developing a computer-aided diagnosis (CAD) system for fully automatic detection and classification
of pathological lung parenchyma patterns in idiopathic interstitial pneumonias (IIP) and emphysema using multi-detector
computed tomography (MDCT). The proposed CAD system is based on three-dimensional (3-D) mathematical
morphology, texture and fuzzy logic analysis, and can be divided into four stages: (1) a multi-resolution decomposition
scheme based on a 3-D morphological filter was exploited to discriminate the lung region patterns at different analysis
scales. (2) An additional spatial lung partitioning based on the lung tissue texture was introduced to reinforce the spatial
separation between patterns extracted at the same resolution level in the decomposition pyramid. Then, (3) a hierarchic
tree structure was exploited to describe the relationship between patterns at different resolution levels, and for each
pattern, six fuzzy membership functions were established for assigning a probability of association with a normal tissue
or a pathological target. Finally, (4) a decision step exploiting the fuzzy-logic assignments selects the target class of each
lung pattern among the following categories: normal (N), emphysema (EM), fibrosis/honeycombing (FHC), and ground
glass (GDG). According to a preliminary evaluation on an extended database, the proposed method can overcome the
drawbacks of a previously developed approach and achieve higher sensitivity and specificity.
Automating the expert consensus paradigm for robust lung tissue classification
Clinicians confirm the efficacy of dynamic multidisciplinary interactions in diagnosing lung disease/wellness from
CT scans. However, routine clinical practice cannot readily accommodate such interactions. Current schemes for
automating lung tissue classification are based on a single elusive disease differentiating metric; this undermines
their reliability in routine diagnosis. We propose a computational workflow that uses a collection (#: 15) of
probability density functions (pdf)-based similarity metrics to automatically cluster pattern-specific (#patterns:
5) volumes of interest (#VOI: 976) extracted from the lung CT scans of 14 patients. The resultant clusters are
refined for intra-partition compactness and subsequently aggregated into a super cluster using a cluster ensemble
technique. The super clusters were validated against the consensus agreement of four clinical experts. The
aggregations correlated strongly with expert consensus. By effectively mimicking the expertise of physicians, the
proposed workflow could make automation of lung tissue classification a clinical reality.
Automatic segmentation of tumor-laden lung volumes from the LIDC database
The segmentation of the lung parenchyma is often a critical pre-processing step prior to application of computer-aided
detection of lung nodules. Segmentation of the lung volume can dramatically decrease computation time and reduce the
number of false positive detections by excluding from consideration extra-pulmonary tissue. However, while many
algorithms are capable of adequately segmenting the healthy lung, none have been demonstrated to work reliably well on
tumor-laden lungs. Of particular challenge is to preserve tumorous masses attached to the chest wall, mediastinum or
major vessels. In this role, lung volume segmentation comprises an important computational step that can adversely
affect the performance of the overall CAD algorithm. An automated lung volume segmentation algorithm has been
developed with the goals to maximally exclude extra-pulmonary tissue while retaining all true nodules. The algorithm
comprises a series of tasks including intensity thresholding, 2-D and 3-D morphological operations, 2-D and 3-D flood-filling,
and snake-based clipping of nodules attached to the chest wall. It features the ability to (1) exclude trachea and
bowels, (2) snip large attached nodules using snakes, (3) snip small attached nodules using dilation, (4) preserve large
masses fully internal to lung volume, (5) account for basal aspects of the lung where in a 2-D slice the lower sections
appear to be disconnected from main lung, and (6) achieve separation of the right and left hemi-lungs. The algorithm
was developed and trained on the first 100 datasets of the LIDC image database.
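The first two tasks of such a pipeline, intensity thresholding followed by retention of connected low-attenuation regions, can be sketched on a toy 2-D slice. The threshold value and 4-connectivity are illustrative assumptions; the actual algorithm adds morphology, 3-D flood-filling, and snake-based clipping:

```python
from collections import deque

def largest_air_region(image, threshold=-400):
    """Binary lung-mask sketch: threshold HU values, then keep the
    largest connected low-intensity component via BFS flood fill."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    best = set()
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or image[r][c] >= threshold:
                continue
            comp, queue = set(), deque([(r, c)])
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                comp.add((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx]
                            and image[ny][nx] < threshold):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            if len(comp) > len(best):
                best = comp
    return best
```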
Unsupervised segmentation of lungs from chest radiographs
This paper describes our preliminary investigations for deriving and characterizing coarse-level textural regions present
in the lung field on chest radiographs using unsupervised grow-cut (UGC), a cellular automaton based unsupervised
segmentation technique. The segmentation has been performed on a publicly available data set of chest radiographs. The
algorithm is useful for this application because it automatically converges to a natural segmentation of the image from
random seed points using low-level image features such as pixel intensity values and texture features.
Our goal is to develop a portable screening system for early detection of lung diseases for use in remote areas in
developing countries. This involves developing automated algorithms for screening x-rays as normal/abnormal with a
high degree of sensitivity, and identifying lung disease patterns on chest x-rays. Automatically deriving and
quantitatively characterizing abnormal regions present in the lung field is the first step toward this goal. Therefore,
region-based features such as geometrical and pixel-value measurements were derived from the segmented lung fields. In
the future, feature selection and classification will be performed to identify pathological conditions such as pulmonary
tuberculosis on chest radiographs. Shape-based features will also be incorporated to account for occlusions of the lung
field and by other anatomical structures such as the heart and diaphragm.
Computer aided diagnosis for osteoporosis based on vertebral column structure analysis
Osteoporosis affects about 11 million people in Japan and has become a significant public-health problem.
Preventing its consequences requires early detection and treatment. Multi-slice CT
technology has been improving, offering three-dimensional (3D) image analysis, higher body-axis resolution, and shorter scan
time. 3D image analysis using multi-slice CT images of thoracic vertebra can be used for supporting diagnosis of
osteoporosis. Simultaneously, this analysis can be used for lung cancer diagnosis which may lead to early detection. We
develop an automatic extraction and partitioning algorithm for the spinal column by analyzing vertebral body structure, and an
analysis algorithm for the vertebral body using shape analysis and bone density measurement for the diagnosis of
osteoporosis. An effective result was obtained even for a case with a complicated vertebral body fracture that was
handled insufficiently by the conventional method.
An application to pulmonary emphysema classification based on model of texton learning by sparse representation
We aim at using a new texton based texture classification method in the classification of pulmonary emphysema in
computed tomography (CT) images of the lungs. Different from conventional computer-aided diagnosis (CAD)
pulmonary emphysema classification methods, in this paper, firstly, the dictionary of texton is learned via applying
sparse representation (SR) to image patches in the training dataset. Then the SR coefficients of the test images over the
dictionary are used to construct the histograms for texture presentations. Finally, classification is performed by using a
nearest neighbor classifier with a histogram dissimilarity measure as distance. The proposed approach is tested on 3840
annotated regions of interest consisting of normal tissue and mild, moderate and severe pulmonary emphysema of three
subtypes. The proposed system achieves an accuracy of about 88%, higher than that of the state-of-the-art method
based on rotation-invariant local binary pattern histograms and of the texture classification method based on texton
learning by k-means, which is among the best-performing approaches in the literature.
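The final classification step, a nearest-neighbor classifier over texton histograms with a histogram dissimilarity measure, can be sketched as follows. The chi-square dissimilarity used here is one common choice for histogram comparison; the abstract does not name the paper's exact measure:

```python
def chi_square(h1, h2):
    """Chi-square dissimilarity between two normalized histograms."""
    return sum(
        (a - b) ** 2 / (a + b) for a, b in zip(h1, h2) if a + b > 0
    )

def classify(hist, references):
    """1-NN: label of the reference histogram with the smallest
    dissimilarity.  references: list of (histogram, label) pairs."""
    return min(references, key=lambda ref: chi_square(hist, ref[0]))[1]
```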
Robust pulmonary lobe segmentation against incomplete fissures
Because lung lobes are important anatomical landmarks of the human lung, accurate lobe segmentation may be useful for
characterizing specific lung diseases (e.g., inflammatory, granulomatous, and neoplastic diseases). A number of
investigations have shown that pulmonary fissures are often incompletely depicted in images, making the computerized
identification of individual lobes a challenging task. Our purpose is to develop a fully automated algorithm for accurate identification of
individual lobes regardless of the integrity of pulmonary fissures. The underlying idea of the developed lobe
segmentation scheme is to use piecewise planes to approximate the detected fissures. After a rotation and a global
smoothing, a number of small planes were fitted using local fissures points. The local surfaces are finally combined for
lobe segmentation using a quadratic B-spline weighting strategy to assure that the segmentation is smooth. The
performance of the developed scheme was assessed by comparing with a manually created reference standard on a
dataset of 30 lung CT examinations. These examinations covered a number of lung diseases and were selected from a
large chronic obstructive pulmonary disease (COPD) dataset. The results indicate that our scheme of lobe segmentation
is efficient and accurate against incomplete fissures.
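The core fitting step, approximating local fissure points with small planes, reduces to least-squares plane fitting. A minimal sketch of fitting z = ax + by + c to 3-D points (the rotation, global smoothing, and quadratic B-spline blending of the full scheme are omitted):

```python
def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through 3-D fissure points,
    solved via the 3x3 normal equations (Gauss-Jordan, partial pivoting)."""
    m = [[0.0] * 4 for _ in range(3)]      # augmented [A^T A | A^T z]
    for x, y, z in points:
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                m[i][j] += row[i] * row[j]
            m[i][3] += row[i] * z
    for i in range(3):
        # Pivot on the largest remaining entry in column i.
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(3):
            if r != i:
                f = m[r][i] / m[i][i]
                for j in range(i, 4):
                    m[r][j] -= f * m[i][j]
    return [m[i][3] / m[i][i] for i in range(3)]   # [a, b, c]
```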
An intelligent pre-processing framework for standardizing medical images for CAD and other post-processing applications
There is an increasing need to provide end-users with seamless and secure access to healthcare information acquired
from a diverse range of sources. This might include local and remote hospital sites equipped with different vendors and
practicing varied acquisition protocols and also heterogeneous external sources such as the Internet cloud. In such
scenarios, image post-processing tools such as CAD (computer-aided diagnosis) which were hitherto developed using a
smaller set of images may not always work optimally on newer set of images having entirely different characteristics.
In this paper, we propose a framework that assesses the quality of a given input image and automatically applies an
appropriate pre-processing method in such a manner that the image characteristics are normalized regardless of its
source. We focus mainly on medical images, and the objective of the said preprocessing method is to standardize the
performance of various image processing and workflow applications like CAD to perform in a consistent manner. First,
our system consists of an assessment step wherein an image is evaluated based on criteria such as noise, image
sharpness, etc. Depending on the measured characteristic, we then apply an appropriate normalization technique thus
giving way to our overall pre-processing framework. A systematic evaluation of the proposed scheme was carried out on a
large set of CT images acquired from various vendors, including images reconstructed with next-generation iterative
methods. Results demonstrate that the images are normalized and thus suitable for an existing LungCAD prototype.
Learning lung nodule similarity using a genetic algorithm
The effectiveness and efficiency of content-based image retrieval (CBIR) can be improved by determining an
optimal combination of image features to use in determining similarity between images. This combination of
features can be optimized using a genetic algorithm (GA). Although several studies have used genetic algorithms
to refine image features and similarity measures in CBIR, the present study is the first to apply these techniques
to medical image retrieval. By implementing a GA to test different combinations of image features for pulmonary
nodules in CT scans, the set of image features was reduced to 29 features from a total of 63 extracted features.
The performance of the CBIR system was assessed by calculating the average precision across all query nodules.
The precision values obtained using the GA-reduced set of features were significantly higher than those found
using all 63 image features. Using radiologist-annotated malignancy ratings as ground truth resulted in an
average precision of 85.95% after 3 images retrieved per query nodule when using the feature set identified
by the GA. Using computer-predicted malignancy ratings as ground truth resulted in an average precision of
86.91% after 3 images retrieved. The results suggest that in the absence of radiologist semantic ratings, using
computer-predicted malignancy as ground truth is a valid substitute given the closeness of the two precision
values.
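The feature-selection mechanism can be illustrated with a toy genetic algorithm over binary feature masks. Everything here, the fitness function, population size, and operators, is a hypothetical stand-in for the paper's retrieval-precision-driven GA:

```python
import random

def ga_select(n_features, fitness, generations=30, pop_size=12, seed=0):
    """Toy GA over binary feature masks: tournament selection,
    one-point crossover, single bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1 = max(rng.sample(pop, 2), key=fitness)   # tournament of 2
            p2 = max(rng.sample(pop, 2), key=fitness)
            cut = rng.randrange(1, n_features)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child[rng.randrange(n_features)] ^= 1       # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Hypothetical fitness: agreement with a "useful" feature mask minus a
# small size penalty, standing in for average retrieval precision.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def toy_fitness(mask):
    return sum(m == t for m, t in zip(mask, TARGET)) - 0.1 * sum(mask)
```

In the study, the fitness would instead be the average precision of the CBIR system under the candidate feature subset.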
Automatic segmentation of solitary pulmonary nodules based on local intensity structure analysis and 3D neighborhood features in 3D chest CT images
This paper presents a solitary pulmonary nodule (SPN) segmentation method based on local intensity structure analysis and neighborhood feature analysis in chest CT images. Automated segmentation of SPNs is desirable for a chest computer-aided detection/diagnosis (CAD) system, since an SPN may indicate an early stage of lung cancer. Due to the similar intensities of SPNs and other chest structures such as blood vessels, many false positives (FPs) are generated by nodule detection methods. To reduce such FPs, we introduce two features that analyze the relation between each segmented nodule candidate and its neighborhood region. The proposed method utilizes a blob-like structure enhancement (BSE) filter based on Hessian analysis to augment blob-like structures as initial nodule candidates. A fine segmentation is then performed to obtain a more accurate region for each nodule candidate. FP reduction is mainly addressed by investigating two neighborhood features, based on the volume ratio and the eigenvectors of the Hessian, that are calculated from the neighborhood region of each nodule candidate. We evaluated the proposed method using 40 chest CT images: 20 standard-dose CT images randomly chosen from a local database and 20 low-dose CT images randomly chosen from a public database, LIDC. The experimental results revealed that the average TP rate of the proposed method was 93.6% with 12.3 FPs/case.
Self-adaptive asymmetric on-line boosting for detecting anatomical structures
In this paper, we propose a self-adaptive, asymmetric on-line boosting (SAAOB) method for detecting anatomical structures
in CT pulmonary angiography (CTPA). SAAOB is novel in that it exploits a new asymmetric loss criterion with
self-adaptability according to the ratio of exposed positive and negative samples, and in that it has an advanced rule to
update a sample's importance weight that takes into account both the classification result and the sample's label. Our method
is evaluated by detecting three distinct thoracic structures, the carina, the pulmonary trunk and the aortic arch, in both
balanced and imbalanced conditions.
A novel semi-transductive learning framework for efficient atypicality detection in chest radiographs
Inductive learning refers to machine learning algorithms that learn a model from a set of training data instances. Any test
instance is then classified by comparing it to the learned model. When the set of training instances lend themselves well
to modeling, the use of a model substantially reduces the computation cost of classification. However, some training data
sets are complex, and do not lend themselves well to modeling. Transductive learning refers to machine learning
algorithms that classify test instances by comparing them to all of the training instances, without creating an explicit
model. This can produce better classification performance, but at a much higher computational cost.
Medical images vary greatly across human populations, constituting a data set that does not lend itself well to modeling.
Our previous work showed that the wide variations seen across training sets of "normal" chest radiographs make it
difficult to successfully classify test radiographs with an inductive (modeling) approach, and that a transductive approach
leads to much better performance in detecting atypical regions. The problem with the transductive approach is its high
computational cost.
This paper develops and demonstrates a novel semi-transductive framework that can address the unique challenges of
atypicality detection in chest radiographs. The proposed framework combines the superior performance of transductive
methods with the reduced computational cost of inductive methods. Our results show that the proposed semi-transductive
approach provides both effective and efficient detection of atypical regions within a set of chest radiographs
previously labeled by Mayo Clinic expert thoracic radiologists.
Lung lobe segmentation based on statistical atlas and graph cuts
This paper presents a novel method that extracts lung lobes by utilizing a probability atlas and multilabel graph cuts. Information about pulmonary structures plays a very important role in deciding the treatment strategy and in surgical planning. The human lungs are divided into five anatomical regions, the lung lobes. Precise segmentation and recognition of lung lobes are indispensable tasks in computer-aided diagnosis and computer-aided surgery systems. Many methods for lung lobe segmentation have been proposed; however, they target only normal cases and therefore cannot extract the lung lobes in abnormal cases, such as COPD cases. To extract lung lobes in abnormal cases, this paper proposes a lung lobe segmentation method based on a probability atlas of lobe location and multilabel graph cuts. The process consists of three components: normalization based on the patient's physique, probability atlas generation, and segmentation based on graph cuts. We applied this method to six cases of chest CT images, including COPD cases. The Jaccard index was 79.1%.
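The reported Jaccard index of 79.1% measures the overlap between the segmented lobes and a reference labeling. For completeness, a sketch of the index on voxel sets:

```python
def jaccard(a, b):
    """Jaccard index |A intersect B| / |A union B| of two voxel sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)
```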
Poster Session: Microscopy and Histopathology
Nuclear cytoplasmic cell evaluation from 3D optical CT microscope images
The nuclear cytoplasmic ratio (nc-ratio) is one of the measurements made by cytologists in evaluating the state of a
single cell and is defined to be the ratio of the size of the nucleus to the size of the cytoplasm. This ratio is often realized
in practice by measurements on a single 2D image of a cell image acquired from a conventional microscope, and is
determined by the area of the nucleus measured in the 2D image divided by the area of the cytoplasm seen to be outside
of the nuclear region. It may also be defined as the ratio of the volume of the nucleus to volume of the cytoplasm, but
this is not directly observable in single images from conventional 2-dimensional microscopy.
We conducted a study to evaluate the variation of the 2D nc-ratio estimation due to the asymmetric architecture of cells
and to compare the 2D estimates with the more precise volumetric nc-ratio estimation from 3D cell images. The
measurements were made on 232 3D images of five different cell types. The results indicate that the cell orientation may
cause a large amount of variation in the nc-ratio estimation and that nc-ratios computed directly from 3D images, which
are independent of cell orientation, may offer a much more precise and useful measurement.
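The two nc-ratio definitions compared in the study can be written down directly: an area-based ratio from a single 2-D slice and a volume-based ratio from the 3-D image. A sketch assuming nucleus and whole-cell pixel/voxel counts are already available from segmentation:

```python
def nc_ratio_2d(nucleus_area, cell_area):
    """2-D nc-ratio: nuclear area over cytoplasmic area in one slice.
    Depends on cell orientation relative to the imaging plane."""
    return nucleus_area / (cell_area - nucleus_area)

def nc_ratio_3d(nucleus_volume, cell_volume):
    """3-D nc-ratio: nuclear volume over cytoplasmic volume, which is
    independent of the cell's orientation."""
    return nucleus_volume / (cell_volume - nucleus_volume)
```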
Detection of immunocytological markers in photomicroscopic images
David Friedrich,
Joschka zur Jacobsmühlen,
Till Braunschweig,
et al.
Early detection of cervical cancer can be achieved through visual analysis of cell anomalies. The established
PAP smear achieves a sensitivity of 50-90%; most false-negative results are caused by mistakes in the preparation
of the specimen or by reader variability in the subjective visual investigation. Since cervical cancer is caused by
human papillomavirus (HPV), the detection of HPV-infected cells opens new perspectives for screening of precancerous
abnormalities. Immunocytochemical preparation marks HPV-positive cells in brush smears of the
cervix with high sensitivity and specificity.
The goal of this work is the automated detection of all marker-positive cells in microscopic images of a
sample slide stained with an immunocytochemical marker. A color separation technique is used to estimate the
concentrations of the immunocytochemical marker stain as well as of the counterstain used to color the nuclei.
Segmentation methods based on Otsu's threshold selection method and Mean Shift are adapted to the task of
segmenting marker-positive cells and their nuclei.
The best detection performance for single marker-positive cells was achieved with the adapted thresholding
method, with a sensitivity of 95.9%. The contours differed by a modified Hausdorff distance (MHD) of 2.8 μm.
Nuclei of single marker-positive cells were detected with a sensitivity of 95.9% and an MHD of 1.02 μm.
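The adapted threshold-selection step builds on Otsu's method, which picks the grey level that maximizes the between-class variance of the image histogram. A plain (unadapted) sketch:

```python
def otsu_threshold(hist):
    """Otsu's method on a grey-level histogram: return the level t
    maximizing the between-class variance; pixels <= t form the
    darker class."""
    total = sum(hist)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(len(hist) - 1):
        w0 += hist[t]                      # weight of the dark class
        sum0 += t * hist[t]
        w1 = total - w0                    # weight of the bright class
        if w0 == 0 or w1 == 0:
            continue
        m0 = sum0 / w0                     # class means
        m1 = (total_sum - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2     # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```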
Automated detection of tuberculosis on sputum smeared slides using stepwise classification
Ajay Divekar,
Corina Pangilinan,
Gerrit Coetzee,
et al.
Routine visual slide screening for identification of tuberculosis (TB) bacilli in stained sputum slides under a
microscope is a tedious, labor-intensive task and can miss up to 50% of TB. Based on the Shannon cofactor
expansion of Boolean functions for classification, a stepwise classification (SWC) algorithm is
developed to remove different types of false positives, one type at a time, and to increase the detection of TB
bacilli at different concentrations. Both bacilli and non-bacilli objects are first analyzed and classified into
several different categories, including scanty positive, high-concentration positive, and several non-bacilli
categories: small bright objects, beaded objects, dim elongated objects, etc. The morphological and contrast features
are extracted based on a priori clinical knowledge. The SWC is composed of several individual classifiers.
The classifier that increases bacilli counts utilizes an adaptive algorithm based on a microbiologist's
statistical heuristic decision process. The classifier that reduces false positives is derived by
minimizing a binary decision tree that classifies different types of true and false positives based on feature
vectors. Finally, the detection algorithm was tested on 102 independent confirmed negative and 74 positive
cases. A multi-class task analysis shows high accordance rates for negative, scanty, and high-concentration
cases of 88.24%, 56.00%, and 97.96%, respectively. A binary-class task analysis using a receiver operating
characteristics method with the area under the curve (Az) is also utilized to analyze the performance of this
detection algorithm, showing the superior detection performance on the high-concentration cases (Az=0.913)
and cases mixed with high-concentration and scanty cases (Az=0.878).
Computerized image analysis of cell-cell interactions in human renal tissue by using multi-channel immunofluorescent confocal microscopy
Analysis of interactions between B and T cells in tubulointerstitial inflammation is important for understanding human
lupus nephritis. We developed a computer technique to perform this analysis, and compared it with manual analysis.
Multi-channel immunofluorescent-microscopy images were acquired from 207 regions of interest in 40 renal tissue
sections of 19 patients diagnosed with lupus nephritis. Fresh-frozen renal tissue sections were stained with combinations
of immunofluorescent antibodies to membrane proteins and counter-stained with a cell nuclear marker. Manual
delineation of the antibodies was considered as the reference standard. We first segmented cell nuclei and cell
membrane markers, and then determined corresponding cell types based on the distances between cell nuclei and
specific cell-membrane marker combinations. Subsequently, the distribution of the shortest distance from T cell nuclei
to B cell nuclei was obtained and used as a surrogate indicator of cell-cell interactions. The computer and manual
analyses results were concordant. The average absolute difference was 1.1±1.2% between the computer and manual
analysis results in the number of cell-cell distances of 3 μm or less as a percentage of the total number of cell-cell
distances. Our computerized analysis of cell-cell distances could be used as a surrogate for quantifying cell-cell
interactions as either an automated and quantitative analysis or for independent confirmation of manual analysis.
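The surrogate interaction measure, the distribution of shortest distances from T-cell nuclei to B-cell nuclei and the fraction at or below 3 μm, can be sketched directly on nucleus centroids (centroid coordinates in microns are an assumed input):

```python
import math

def shortest_distances(t_nuclei, b_nuclei):
    """For each T-cell nucleus, distance to the nearest B-cell nucleus."""
    return [min(math.dist(t, b) for b in b_nuclei) for t in t_nuclei]

def close_fraction(t_nuclei, b_nuclei, cutoff=3.0):
    """Fraction of T cells within `cutoff` microns of a B cell, used
    as a surrogate indicator of cell-cell interaction."""
    d = shortest_distances(t_nuclei, b_nuclei)
    return sum(x <= cutoff for x in d) / len(d)
```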
Poster Session: Neuro
Navigation-supported diagnosis of the substantia nigra by matching midbrain sonography and MRI
Zein Salah,
David Weise,
Bernhard Preim,
et al.
Show abstract
Transcranial sonography (TCS) is a well-established neuroimaging technique that allows for visualizing several
brainstem structures, including the substantia nigra, and aids the diagnosis and differential diagnosis of
various movement disorders, especially Parkinsonian syndromes. However, the surrounding brainstem anatomy can
hardly be recognized due to the limited image quality of B-scans. In this paper, a visualization system for the
diagnosis of the substantia nigra is presented, which utilizes neuronavigated TCS to reconstruct tomographical
slices from registered MRI datasets and visualizes them simultaneously with the corresponding TCS planes in real time.
To generate MRI tomographical slices, the tracking data of the calibrated ultrasound probe are passed to
an optimized slicing algorithm, which computes cross sections at arbitrary positions and orientations from the
registered MRI dataset. The extracted MRI cross sections are finally fused with the region of interest from the
ultrasound image. The system allows for the computation and visualization of slices at a near real-time rate.
Initial tests of the system show added value over pure sonographic imaging. The system also allows for
reconstructing volumetric (3D) ultrasound data of the region of interest, and thus contributes to enhancing the
diagnostic yield of midbrain sonography.
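The slicing step, computing a cross section of the registered MRI volume at an arbitrary tracked position and orientation, can be sketched with trilinear resampling via SciPy. This is a generic oblique-reformatting sketch, not the authors' optimized algorithm; the voxel-space parameterization and names are assumptions:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extract_slice(volume, origin, u_dir, v_dir, shape, spacing=1.0):
    """Resample an oblique plane from a 3D volume by trilinear interpolation.
    origin: plane origin in voxel coordinates (z, y, x)
    u_dir, v_dir: unit vectors spanning the plane
    shape: (rows, cols) of the output slice"""
    rows, cols = shape
    u = np.asarray(u_dir, dtype=float) * spacing
    v = np.asarray(v_dir, dtype=float) * spacing
    r, c = np.mgrid[0:rows, 0:cols]
    pts = (np.asarray(origin, dtype=float)[:, None, None]
           + u[:, None, None] * r + v[:, None, None] * c)  # (3, rows, cols)
    return map_coordinates(volume, pts, order=1)  # order=1: trilinear

# toy volume whose voxel value equals its z index; an axial plane at z=2
# should therefore come back filled with 2.0
vol = np.arange(5, dtype=float)[:, None, None] * np.ones((5, 5, 5))
sl = extract_slice(vol, origin=(2, 0, 0), u_dir=(0, 1, 0),
                   v_dir=(0, 0, 1), shape=(5, 5))
print(np.allclose(sl, 2.0))  # True
```

In the navigated setup, `origin`, `u_dir`, and `v_dir` would be derived from the tracked, calibrated ultrasound probe pose after registration to the MRI dataset.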
Quantification of the cerebrospinal fluid from a new whole body MRI sequence
Show abstract
Our work aims to develop a biomechanical model of hydrocephalus, intended both for clinical research and to assist the neurosurgeon in diagnostic decisions. Recently, we defined a new MR imaging sequence based on SPACE (Sampling Perfection with Application-optimized Contrast using different flip-angle Evolution). In these images the cerebrospinal fluid (CSF) appears as a homogeneous hyperintense signal, which makes them suitable for segmentation and for volume assessment of the CSF. In this paper we present a fully automatic 3D segmentation of such SPACE MRI sequences. We chose a topological approach, considering that the CSF can be modeled as a simply connected object (i.e., a filled sphere). First, an initial object, which must be strictly included in the CSF and homotopic to a filled sphere, is determined using a moment-preserving thresholding. Then a priority function based on a Euclidean distance map is computed to control the thickening process that adds "simple points" to the initial thresholded object. A point is called simple if its addition or removal changes the topology of neither the object nor the background. The method is validated by measuring the fluid volume of brain phantoms and by comparing our volume assessments on clinical data with those derived from a segmentation controlled by expert physicians. We then show that pathological cases can be distinguished from healthy adults by a linear discriminant analysis on the volumes of the ventricular and intracranial subarachnoid spaces.
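The priority-driven thickening at the heart of this method can be illustrated in 2D. The sketch below grows a seed inside an allowed region, visiting candidates in decreasing order of a Euclidean distance map; the simple-point (topology) test of the full method is deliberately omitted, so this is a simplification, not the paper's algorithm:

```python
import heapq
import numpy as np
from scipy.ndimage import distance_transform_edt

def priority_thickening(seed, allowed):
    """Grow `seed` inside `allowed`, visiting candidate pixels in decreasing
    order of a Euclidean distance-map priority (deepest pixels first).
    Simplification: the simple-point test of the full method is omitted,
    so growth here is not guaranteed to preserve topology."""
    prio = distance_transform_edt(allowed)  # distance to background of `allowed`
    obj = seed.copy()
    heap = []

    def push_neighbours(y, x):
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < obj.shape[0] and 0 <= nx < obj.shape[1]
                    and allowed[ny, nx] and not obj[ny, nx]):
                heapq.heappush(heap, (-prio[ny, nx], ny, nx))

    for y, x in zip(*np.nonzero(seed)):
        push_neighbours(y, x)
    while heap:
        _, y, x = heapq.heappop(heap)
        if obj[y, x]:
            continue
        # the full method would add (y, x) only if it is a simple point
        obj[y, x] = True
        push_neighbours(y, x)
    return obj

# toy example: a single seed pixel grows to fill a 5x5 allowed square
seed = np.zeros((7, 7), dtype=bool); seed[3, 3] = True
allowed = np.zeros((7, 7), dtype=bool); allowed[1:6, 1:6] = True
grown = priority_thickening(seed, allowed)
print(int(grown.sum()))  # 25
```

In the paper's setting, `seed` would be the moment-preserving threshold result and `allowed` the CSF hypersignal region, with the simple-point test keeping the growing object homotopic to a filled sphere.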
A new approach to measuring tortuosity
Amanda Wert,
Sherry E. Scott
Show abstract
The detection and measurement of the tortuosity (i.e., the bending and winding) of vessels has been shown to be
potentially useful in assessing cancer progression and treatment response. Although several metrics for
tortuosity are in use, no single measure captures all types of tortuosity.
This report presents a new multiscale technique for measuring vessel tortuosity. The approach is based on a method
called the ergodicity defect, which gives a scale-dependent measure of deviation from ergodicity. Ergodicity is a
concept that captures the manner in which trajectories or signals sample a space; thus, ergodicity and vessel tortuosity
both involve the notion of how a signal samples space. Here we begin to explore this connection.
We first apply the ergodicity defect tortuosity measure to both 2D and 3D synthetic data in order to demonstrate the
response of the method to three types of tortuosity observed in clinical patterns. We then implement the technique on
segmented vessels extracted from brain tumor MRA images. Results indicate that the method can be effectively used to
detect and measure several types of vessel tortuosity.
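The ergodicity defect itself is beyond a short sketch, but one of the existing measures the abstract alludes to, the classical distance metric, is easy to state: total arc length of the centreline divided by the chord between its endpoints. A minimal baseline sketch, for illustration only:

```python
import numpy as np

def distance_metric_tortuosity(points):
    """Classical distance-metric tortuosity: arc length of the vessel
    centreline divided by the chord between its endpoints.
    A straight vessel scores 1.0; bending raises the score."""
    p = np.asarray(points, dtype=float)
    arc = np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1))
    chord = np.linalg.norm(p[-1] - p[0])
    return float(arc / chord)

straight = [(0, 0), (1, 0), (2, 0)]
bent = [(0, 0), (1, 1), (2, 0)]
print(distance_metric_tortuosity(straight))  # 1.0
print(distance_metric_tortuosity(bent))      # ~1.414
```

Measures of this form are single-scale and insensitive to where along the vessel the bending occurs, which is exactly the kind of limitation a multiscale measure aims to address.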
Multiclass feature selection for improved pediatric brain tumor segmentation
Show abstract
In our previous work, we showed that fractal-based texture features are effective in the detection, segmentation
and classification of posterior-fossa (PF) pediatric brain tumors in multimodality MRI. We exploited an
information-theoretic approach, the Kullback-Leibler Divergence (KLD), for selecting and ranking
different texture features. We further combined this feature selection technique with a
segmentation method, Expectation Maximization (EM), to segment tumor (T) and non-tumor
(NT) tissues. In this work, we extend the two-class KLD technique to the multiclass case to effectively select
the best features for brain tumor (T), cyst (C) and non-tumor (NT) tissues. We further assess segmentation
robustness for each tissue type by computing Bayes' posterior probabilities and the corresponding number of
pixels in each tissue segment in patient MR images. We evaluate the improved tumor segmentation
robustness using different similarity metrics for 5 patients in T1, T2 and FLAIR modalities.
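The paper's exact multiclass extension of KLD is not reproduced here; one common construction, given purely as an assumption, scores each feature by the symmetric KLD between its class-conditional histograms, summed over all class pairs:

```python
import numpy as np

def kld(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two histograms."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def rank_features(class_hists):
    """class_hists maps a class label (e.g. 'T', 'C', 'NT') to an array of
    shape (n_features, n_bins).  Each feature is scored by the symmetric
    KLD between its class-conditional histograms, summed over class pairs;
    a higher score means the feature separates the classes better."""
    classes = list(class_hists)
    n_feat = class_hists[classes[0]].shape[0]
    scores = np.zeros(n_feat)
    for i, a in enumerate(classes):
        for b in classes[i + 1:]:
            for f in range(n_feat):
                scores[f] += (kld(class_hists[a][f], class_hists[b][f])
                              + kld(class_hists[b][f], class_hists[a][f]))
    return np.argsort(scores)[::-1]  # indices of best features first

# toy example: feature 0 separates the classes, feature 1 does not
hists = {
    'T':  np.array([[1.0, 0.0], [1.0, 1.0]]),
    'C':  np.array([[1.0, 0.0], [1.0, 1.0]]),
    'NT': np.array([[0.0, 1.0], [1.0, 1.0]]),
}
order = rank_features(hists)
print(order[0])  # 0
```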
Automatic histogram-based segmentation of white matter hyperintensities using 3D FLAIR images
Show abstract
White matter hyperintensities are known to play a role in the cognitive decline experienced by patients suffering
from neurological diseases. Therefore, accurately detecting and monitoring these lesions is of importance. Automatic
methods for segmenting white matter lesions typically use multimodal MRI data. Furthermore, many
methods use a training set to perform a classification task or to determine necessary parameters. In this work,
we describe and evaluate an unsupervised segmentation method that is based solely on the histogram of FLAIR
images. It approximates the histogram by a mixture of three Gaussians in order to find an appropriate threshold
for white matter hyperintensities. We use a context-sensitive Expectation-Maximization method to determine
the Gaussian mixture parameters. The segmentation is subsequently corrected for false positives using the knowledge
of the location of typical FLAIR artifacts. A preliminary validation against ground truth in 6 patients
yielded a Similarity Index of 0.73 ± 0.10, indicating that the method is comparable to others in the literature
that require multimodal MRI and/or a preliminary training step.
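A plain (not context-sensitive) EM fit of a three-Gaussian mixture to 1D intensities can be sketched as follows; the threshold rule at the end (mean plus two standard deviations of the brightest component) is an assumption for illustration, not necessarily the paper's rule:

```python
import numpy as np

def fit_gmm_1d(x, k=3, n_iter=100):
    """Plain EM for a 1-D Gaussian mixture (a minimal stand-in for the
    context-sensitive EM used in the paper)."""
    x = np.asarray(x, dtype=float)
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))  # spread initial means
    var = np.full(k, x.var() / k)
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: per-sample responsibilities, shape (n, k)
        d2 = (x[:, None] - mu[None, :]) ** 2
        r = w * np.exp(-0.5 * d2 / var) / np.sqrt(2 * np.pi * var)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / n
    return w, mu, np.sqrt(var)

# synthetic "histogram" data drawn from three well-separated modes
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(m, 0.3, 500) for m in (1.0, 5.0, 9.0)])
w, mu, sigma = fit_gmm_1d(x)
print(np.sort(mu))  # approximately [1, 5, 9]

# assumed lesion-threshold rule: mean + 2*sigma of the brightest component
i = int(np.argmax(mu))
threshold = mu[i] + 2.0 * sigma[i]
```

In the paper's pipeline the mixture would be fitted to the FLAIR intensity histogram, and the resulting threshold followed by the artifact-based false-positive correction described above.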