- Front Matter: Volume 7623
- Atlas-based Methods
- Registration I
- Model-based Segmentation
- Image Enhancement
- Keynote and Vascular Image Analysis
- Segmentation I
- 2D Image Segmentation
- Shape
- Registration II
- Diffusion Tensor Image Analysis
- Segmentation II
- Posters: Atlases
- Posters: Classification
- Posters: Diffusion Tensor Image Analysis
- Posters: Motion Analysis
- Posters: Registration
- Posters: Image Restoration and Enhancement
- Posters: Segmentation
- Posters: Shape
- Posters: Texture
Front Matter: Volume 7623
Front Matter: Volume 7623
This PDF file contains the front matter associated with SPIE Proceedings volume 7623, including the Title Page, Copyright information, Table of Contents, Introduction, and the Conference Committee listing.
Atlas-based Methods
Modeling and segmentation of intra-cochlear anatomy in conventional CT
Cochlear implant surgery is a procedure performed to treat profound hearing loss. Since the cochlea is not visible in
surgery, the physician uses anatomical landmarks to estimate the pose of the cochlea. Research has indicated that
implanting the electrode in a particular cavity of the cochlea, the scala tympani, results in better hearing restoration. The
success of the scala tympani implantation is largely dependent on the point of entry and angle of electrode insertion.
Errors can occur due to the imprecise nature of landmark-based, manual navigation as well as inter-patient variations
between scala tympani and the anatomical landmarks. In this work, we use point distribution models of the intra-cochlear
anatomy to study the inter-patient variations between the cochlea and the typical anatomic landmarks, and we implement
an active shape model technique to automatically localize intra-cochlear anatomy in conventional CT images, where
intra-cochlear structures are not visible. This fully automatic segmentation could aid the surgeon in choosing the point of
entry and angle of approach to maximize the likelihood of scala tympani insertion, resulting in more substantial hearing
restoration.
A structural-functional MRI-based disease atlas: application to computer-aided-diagnosis of prostate cancer
Different imaging modalities or protocols of a single patient may convey different types of information regarding
a disease for the same anatomical organ/tissue. On the other hand, multi-modal/multi-protocol medical images
from several different patients can also provide spatial statistics of the disease occurrence, which in turn can
greatly aid disease diagnosis, improved and more accurate biopsy, and targeted treatment. It is therefore important
to not only integrate medical images from multiple patients into a common coordinate frame (in the form
of a population-based atlas), but also find the correlation between these multi-modal/multi-protocol data features
and the disease spatial distribution in order to identify different quantitative structural and functional disease
signatures. Most previous work on construction of anatomical atlases has focused on deriving a population-based
atlas for the purpose of deriving the spatial statistics. Moreover, these models are typically derived from normal
or healthy subjects, either explicitly or implicitly, where it is assumed that the inter-patient pathological variation
is not large. These methods are not suitable for constructing a disease atlas, where significant differences between
patients on account of disease related variations can be expected. In this paper, we present a novel framework
for the construction of a multi-parametric MRI-based data-driven disease atlas consisting of multi-modal and
multi-protocol data from across multiple patient studies. Our disease atlas contains 3 Tesla structural (T2) and
functional (dynamic contrast enhanced (DCE)) prostate in vivo MRI with corresponding whole mount histology
specimens obtained via radical prostatectomy. Our atlas construction framework comprises 3 distinct modules:
(a) determination of disease spatial extent on the multi-protocol MR imagery for each patient, (b) construction
of a multi-protocol MR imaging spatial atlas which captures the geographical proclivity of the disease, and (c)
feature extraction and the construction of the data-driven multi-protocol MRI based prostate cancer atlas. The
marriage of data-driven and spatial atlases could serve as a useful tool for clinicians to identify structural and
functional imaging disease signatures so as to make better, more informed diagnoses. Each spatial location in this
atlas can be associated with a high dimensional multi-attribute quantitative feature vector. Additionally, since
the feature vectors are extracted from across multiple patient studies, each spatial location in the data-driven
atlas can be characterized by a feature distribution (in turn characterized by a mean and standard deviation).
Preliminary investigation in quantitatively correlating the disease signatures from across the spatial and data
driven atlases suggests that our quantitative atlas framework could emerge as a powerful tool for discovering
prostate cancer imaging signatures.
Manifold parametrization of the left ventricle for a statistical modelling of its complete anatomy
Distortion of Left Ventricle (LV) external anatomy is related to some dysfunctions, such as hypertrophy. The
architecture of myocardial fibers determines LV electromechanical activation patterns as well as mechanics. Thus,
their joint modelling would allow the design of specific interventions (such as pacemaker implantation and LV
remodelling) and therapies (such as resynchronization).
On one hand, accurate modelling of external anatomy requires either a dense sampling or a continuous infinite
dimensional approach, which requires non-Euclidean statistics. On the other hand, computation of fiber models
requires statistics on Riemannian spaces. Most approaches compute separate statistical models for external
anatomy and fibers architecture.
In this work we propose a general mathematical framework based on differential geometry concepts for
computing a statistical model including, both, external and fiber anatomy. Our framework provides a continuous
approach to external anatomy supporting standard statistics. We also provide a straightforward formula for the
computation of the Riemannian fiber statistics. We have applied our methodology to the computation of a complete
anatomical atlas of canine hearts from diffusion tensor studies. The orientation of fibers over the average external
geometry agrees with the segmental description of orientations reported in the literature.
Fully automatic cardiac segmentation from 3D CTA data: a multi-atlas based approach
Computed tomography angiography (CTA), a non-invasive imaging technique, is becoming increasingly popular for cardiac
examination, mainly due to its superior spatial resolution compared to MRI. This imaging modality is currently widely
used for the diagnosis of coronary artery disease (CAD) but it is not commonly used for the diagnosis of ventricular and
atrial function. In this paper, we present a fully automatic method for segmenting the whole heart (i.e. the outer surface of
the myocardium) and cardiac chambers from CTA datasets. Cardiac chamber segmentation is particularly valuable for the
extraction of ventricular and atrial functional information, such as stroke volume and ejection fraction. With our approach,
we aim to improve the diagnosis of CAD by providing functional information extracted from the same CTA data, thus not
requiring additional scanning. In addition, the whole heart segmentation method we propose can be used for visualization
of the coronary arteries and for obtaining a region of interest for subsequent segmentation of the coronaries, ventricles and
atria. Our approach is based on multi-atlas segmentation, and performed within a non-rigid registration framework. A
leave-one-out quantitative validation was carried out on 8 images. The method showed a high accuracy, which is reflected
in both a mean segmentation error of 1.05±1.30 mm and an average Dice coefficient of 0.93. The robustness of the method
is demonstrated by successfully applying the method to 243 additional datasets, without any significant failure.
Model guided diffeomorphic demons for atlas based segmentation
Using an atlas, an image can be segmented by mapping its coordinate space to that of the atlas in an anatomically correct
way. To find the correct mapping between the two coordinate spaces, diffeomorphic demons
registration, for example, can be applied. The demons algorithm is a popular choice for deformable image registration and offers the
possibility to perform computationally efficient non-rigid (diffeomorphic) registration. However, this registration method
is prone to image artifacts and image noise. Therefore it has been the main objective of the presented work to combine
the efficiency of diffeomorphic demons and the stability of statistical models. In the presented approach a statistical
deformation model that describes "anatomically correct" displacement vector fields for a specific registration problem is
used to guide the demons registration algorithm. By projecting the current displacement vector field, which is calculated
during any iteration of the registration process, into the model space a regularized version of the vector field can be
computed. Using this regularized vector field for the update of the deformation field in the subsequent iteration of the
registration process the demons registration algorithm can be guided by the deformation model. The proposed method
was evaluated on 21 CT datasets of the right hip. Measuring the average and maximum segmentation error over all 21
datasets and all 120 test configurations, the newly proposed algorithm was shown to reduce the segmentation
error by up to 13% compared to the conventional diffeomorphic demons algorithm.
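The model-guidance step described above, projecting the current displacement field into the subspace of a statistical deformation model, can be sketched as follows. This is a minimal illustration with a hypothetical PCA basis (function names and the toy model are assumptions, not the authors' implementation):

```python
import numpy as np

def regularize_displacement(v, mean, components):
    """Project a flattened displacement vector field onto the subspace
    spanned by a statistical deformation model (PCA mean + modes).

    v          : (d,) current displacement field, flattened
    mean       : (d,) model mean displacement field
    components : (k, d) orthonormal principal modes of variation
    """
    coeffs = components @ (v - mean)      # coordinates in model space
    return mean + components.T @ coeffs   # regularized field

# toy example: a 6-dim "field" and a single-mode model
rng = np.random.default_rng(0)
mode = np.ones(6) / np.sqrt(6.0)                  # unit-norm mode
mean = np.zeros(6)
v = 2.0 * mode + rng.normal(scale=0.1, size=6)    # noisy field near the model
v_reg = regularize_displacement(v, mean, mode[None, :])
```

The regularized field is the orthogonal projection of the current field onto the model space, so noise components outside the model's modes of variation are discarded before the next demons update.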
Registration I
A statistical similarity measure for non-rigid multi-modal image registration
Jiangli Shi,
Yunmei Chen,
Murali Rao,
et al.
We present a novel variational framework for deformable multi-modal image registration. Our approach is
based on Renyi's statistical dependence measure of two random variables with the use of reproducing kernel
Hilbert spaces associated with Gaussian kernels to simplify the computation. Popular optimization
algorithms based on maximizing mutual information are complex and sensitive to the quantization
of the intensities, because they require estimating a continuous joint probability density function (pdf).
The proposed model does not rely on a joint pdf; instead it operates directly on observed independent samples. Experimental
results are provided to show the effectiveness of the model.
Image processing and registration in a point set representation
An image, being a continuous function, is commonly represented discretely as a set of sample values, namely
the intensities, associated with a spatial grid. All subsequent operations are then carried
out in that representation. We denote this representation as the discrete function representation (DFR). In this paper
we provide another discrete representation for images using the point sets, called the point set representation
(PSR). Essentially, the image is normalized to have the unit integral and is treated as a probability
density function of some random variable. The PSR is then formed by drawing samples of this random
variable. In contrast with the DFR, here the image is purely represented as points and no values are associated.
Besides being an equivalent discrete representation for images, we show that certain image operations
benefit from this representation in numerical stability, performance, or both. An example is given for
Perona-Malik type diffusion, where the PSR does not suffer from numerical instability.
Furthermore, PSR naturally bridges the fields of image registration with the point set registration. This
helps handle some otherwise difficult problems in image registration such as partial image registration, with much faster convergence speed.
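The PSR construction described in the abstract (normalize the image to unit integral, treat it as a pdf, then draw point samples) might look as follows for a 2D gray-scale image; the function name and sampling scheme are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def image_to_point_set(img, n_samples, seed=None):
    """Draw a point-set representation (PSR) of a gray-scale image:
    normalize the image to unit integral, treat it as the pdf of a 2D
    random variable, and sample pixel coordinates from that pdf.
    No intensity values are attached to the resulting points."""
    rng = np.random.default_rng(seed)
    p = np.asarray(img, dtype=float).ravel()
    p = p / p.sum()                                 # unit integral
    flat_idx = rng.choice(p.size, size=n_samples, p=p)
    rows, cols = np.unravel_index(flat_idx, img.shape)
    return np.stack([rows, cols], axis=1)           # (n_samples, 2) points

img = np.zeros((8, 8))
img[2:4, 5:7] = 1.0          # all image mass in a small bright patch
pts = image_to_point_set(img, 500, seed=0)
```

Bright regions contribute proportionally more points, so the point density encodes the intensities that the DFR stores explicitly.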
Tissue volume and vesselness measure preserving nonrigid registration of lung CT images
In registration-based analyses of lung biomechanics and function, high quality registrations are essential to obtain
meaningful results. Various criteria have been suggested to find the correspondence mappings between two lung
images acquired at different levels of inflation. In this paper, we describe a new metric, the sum of squared
vesselness measure difference (SSVMD), that utilizes the rich information of blood vessel locations and matches
similar vesselness patterns in two images. Preserving both the lung tissue volume and the vesselness measure,
a registration algorithm is developed to minimize the sum of squared tissue volume difference (SSTVD) and
SSVMD together. We compare the registration accuracy using SSTVD + SSVMD with that using SSTVD
alone by registering lung CT images of three normal human subjects. After adding the new SSVMD metric, the
improvement of registration accuracy is observed by landmark error and fissure positioning error analyses. The
average values of landmark error and fissure positioning error are reduced by about 30% and 25%, respectively.
The mean landmark error is on the order of 1 mm. Statistical testing of landmark errors shows that there
is a statistically significant difference between two methods with p values < 0.05 in all three subjects. Visual
inspection shows there are obvious accuracy improvements in the lung regions near the thoracic cage after adding
SSVMD.
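As a rough sketch, the combined objective is simply a sum of the two squared-difference terms over the overlap region (the alpha weighting shown here is an assumption; the paper's exact formulation may differ):

```python
import numpy as np

def combined_cost(tv_fixed, tv_warped, ves_fixed, ves_warped, alpha=1.0):
    """Sum of squared tissue volume differences (SSTVD) plus an
    alpha-weighted sum of squared vesselness measure differences (SSVMD),
    evaluated between a fixed image and a warped moving image."""
    sstvd = np.sum((tv_fixed - tv_warped) ** 2)
    ssvmd = np.sum((ves_fixed - ves_warped) ** 2)
    return sstvd + alpha * ssvmd

tv_f = np.array([0.2, 0.3, 0.5]); tv_w = np.array([0.2, 0.4, 0.5])
vs_f = np.array([0.9, 0.1, 0.0]); vs_w = np.array([0.7, 0.1, 0.0])
cost = combined_cost(tv_f, tv_w, vs_f, vs_w, alpha=1.0)
```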
Multimodal registration of MR images with a novel least-squares distance measure
In this work we evaluate a novel method for multi-modal image registration of MR images. The key feature of our approach is a new distance measure that allows for comparing modalities that are related by an arbitrary gray-value mapping. The novel measure is formulated as a least-squares problem: minimizing the sum of squared
differences of two images with respect to a varying gray-value mapping of one of the images. It turns out that the novel measure can be computed explicitly and allows for a very simple and efficient implementation. We compare our new approach to rigid registration with cross-correlation, mutual information, and normalized gradient fields as distance measures.
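One plausible reading of such a measure (minimizing SSD over an arbitrary gray-value mapping of one image) admits a closed form: for each discrete gray value of the second image, the optimal mapping outputs the mean of the first image over the pixels carrying that value. A hedged sketch under that assumption, not a reconstruction of the authors' exact measure:

```python
import numpy as np

def ls_distance(a, b):
    """Minimal SSD between image a and g(b), minimized over all
    gray-value mappings g: for each discrete gray value of b, the best
    g maps it to the mean of a over pixels carrying that value, so the
    measure reduces to the within-level variance of a."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b).ravel()
    dist = 0.0
    for level in np.unique(b):
        vals = a[b == level]
        dist += np.sum((vals - vals.mean()) ** 2)  # residual after optimal g
    return dist

# same structure, different gray values: distance is zero
a = np.array([[10.0, 10.0], [20.0, 20.0]])
b = np.array([[3, 3], [7, 7]])
```

Under this reading the measure is invariant to any relabeling of the second image's intensities, which is exactly what multi-modal registration requires.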
Extending the quadratic taxonomy of regularizers for nonparametric registration
Quadratic regularizers are used in nonparametric registration to ensure that the registration problem is well
posed and to yield solutions that exhibit certain types of smoothness. Examples of popular quadratic regularizers
include the diffusion, elastic, fluid, and curvature regularizers [1]. Two important features of these regularizers
are whether they account for coupling of the spatial components of the deformation (elastic/fluid do;
diffusion/curvature do not) and whether they are robust to initial affine misregistrations (curvature is; diffusion/
elastic/fluid are not). In this article, we show how to extend this list of quadratic regularizers to include
a second-order regularizer that exhibits the best of both features: it accounts for coupling of the spatial components
of the deformation and contains affine transformations in its kernel. We then show how this extended
taxonomy of quadratic regularizers is related to other families of regularizers, including Cachier and Ayache's
differential quadratic forms [2] and Arigovindan's family of rotationally invariant regularizers [3, 4]. Next, we describe
two computationally efficient paradigms for performing nonparametric registration with the proposed regularizer,
based on Fourier methods [5] and on successive Gaussian convolution [6, 7]. Finally, we illustrate the performance of
the quadratic regularizers on the task of registering serial 3-D CT exams of patients with lung nodules.
Coupling tumor growth with brain deformation: a constrained parametric non-rigid registration problem
Andreas Mang,
Stefan Becker,
Alina Toma,
et al.
A novel approach for coupling brain tumor mass effect with a continuous model of cancer progression is proposed.
The purpose of the present work is to devise an efficient approximate model for the mechanical interaction of
the tumor with its surroundings in order to aid registration of brain tumor images with statistical atlases as well
as the generation of atlases of brain tumor disease.
To model tumor progression a deterministic reaction-diffusion formalism, which describes the spatio-temporal
dynamics of a coarse-grained population density of cancerous cells, is discretized on a regular grid. Tensor
information obtained from a probabilistic atlas is used to model the anisotropy of the diffusion of malignant cells
within white matter. To account for the expansive nature of the tumor a parametric deformation model is linked
to the computed net cell density of cancerous cells. To this end, we formulate a constrained optimization problem
using an inhomogeneous regularization that in turn allows for approximating physical properties of brain tissue.
The described coupling model can in general be applied to estimate mass effect of non-convex, diffusive as well
as multifocal tumors so that no simplification of the growth model has to be stipulated.
The present work has to be considered as a proof-of-concept. Visual assessment of the computed results
demonstrates the potential of the described method. We conclude that the analogy to the problem formulation in
image registration potentially allows for a sensible integration of the described approach into a unified framework
of image registration and tumor modeling.
Model-based Segmentation
Correspondence free 3D statistical shape model fitting to sparse x-ray projections
In this paper we address the problem of 3D shape reconstruction from sparse X-ray projections. We present a correspondence
free method to fit a statistical shape model to two X-ray projections, and illustrate its performance in 3D shape
reconstruction of the femur. The method alternates between 2D segmentation and 3D shape reconstruction, where 2D
segmentation is guided by dynamic programming along the model projection on the X-ray plane. 3D reconstruction is
based on the iterative minimization of the 3D distance between a set of support points and the back-projected silhouette
with respect to the pose and model parameters. We show robustness of the reconstruction on simulated X-ray projection data of the femur, varying the field of view; and in a pilot study on cadaveric femora.
4D reconstruction of cardiac gated SPECT images using a content-adaptive deformable mesh model
In this work, we present a four-dimensional reconstruction technique for cardiac gated SPECT images using a content-adaptive deformable mesh model. Cardiac gated SPECT images are affected by a high level of noise.
Noise reduction methods usually do not account for cardiac motion and therefore introduce motion blur, an artifact
that can decrease diagnostic accuracy. Additionally, image reconstruction methods typically rely on uniform
sampling and Cartesian gridding for image representation. The proposed method utilizes a mesh representation
of the images in order to utilize the benefits of content-adaptive nonuniform sampling. The mesh model allows
for accurate representation of important regions while significantly compressing the data. The content-adaptive
deformable mesh model is generated by combining nodes generated on the full torso using pre-reconstructed emission
and attenuation images with nodes accurately sampled on the left ventricle. Ventricular nodes are further
displaced according to cardiac motion using our previously introduced motion estimation technique. The resulting
mesh structure is then used to perform iterative image reconstruction using a mesh-based maximum-likelihood
expectation-maximization algorithm. Finally, motion-compensated post-reconstruction temporal filtering is applied
in the mesh domain using the deformable mesh model. Reconstructed images as well as quantitative
evaluation show that the proposed method offers improved image quality while reducing the data size.
3D shape reconstruction of bone from two x-ray images using 2D/3D non-rigid registration based on moving least-squares deformation
T. Cresson,
D. Branchaud,
R. Chav,
et al.
Biplanar radiography technologies are regarded as promising systems for 3D-reconstruction
applications in medical diagnosis. This paper proposes a non-rigid registration method to estimate a 3D personalized
shape of bone models from two planar x-ray images, using an as-rigid-as-possible deformation approach
based on a moving least-squares optimization method. Building on interactive deformation methods, the proposed
technique lets a user readily and simply refine a 3D reconstruction, which is an important
step in clinical applications. Experimental evaluations of six anatomical femur specimens demonstrate
good performance of the proposed approach in terms of accuracy and robustness when compared to CT scans.
Abdominal arteries recognition in x-ray using a structural model
Olivier Nempont,
Raoul Florent
The automatic recognition of vascular trees is a challenging task, required for roadmapping or advanced visualization.
For instance, during an endovascular aneurysm repair (EVAR), the recognition of abdominal arteries in
angiograms can be used to select the appropriate stent graft. This choice is based on a reduced set of arteries
(aorta, renal arteries, iliac arteries) whose relative positions are quite stable.
We propose in this article a recognition process based on a structural model. The centerlines of the target
vessels are represented by a set of control points whose relative positions are constrained. To find their position in
an angiogram, we enhance the target vessels and extract a set of possible positions for each control point. Then,
a constraint propagation algorithm based on the model prunes those sets of candidates, removing inconsistent
ones. We present preliminary results on 5 cases, illustrating the potential of this approach and especially its
ability to handle the high variability of the target vessels.
Robust extraction of the aorta and pulmonary artery from 3D MDCT image data
Accurate definition of the aorta and pulmonary artery from three-dimensional (3D) multi-detector CT (MDCT)
images is important for pulmonary applications. This work presents robust methods for defining the aorta and
pulmonary artery in the central chest. The methods work on both contrast enhanced and no-contrast 3D MDCT
image data. The automatic methods use a common approach employing model fitting and selection and adaptive
refinement. In the occasional event that more precise vascular extraction is desired or the method fails, we
also have an alternate semi-automatic fail-safe method. The semi-automatic method extracts the vasculature
by extending the medial axes in a user-guided direction. A ground-truth study over a series of 40 human 3D
MDCT images demonstrates the efficacy, accuracy, robustness, and efficiency of the methods.
Image Enhancement
Combining short-axis and long-axis cardiac MR images by applying a super-resolution reconstruction algorithm
In cardiac MR images the slice thickness is normally greater than the pixel size within the slices. In general,
better segmentation and analysis results can be expected for isotropic high-resolution (HR) data sets. If two
orthogonal data sets, e.g. short-axis (SA) and long-axis (LA) volumes, are combined, an increase in resolution
can be obtained.
In this work we employ a super-resolution reconstruction (SRR) algorithm for computing high-resolution data
sets from two orthogonal SA and LA volumes. In contrast to a simple averaging of both data in the overlapping
region, we apply a maximum a posteriori approach. There, an observation model is employed for estimating an
HR image that best reproduces the two low-resolution input data sets.
For testing the SRR approach, we use clinical MRI data with an in-plane resolution of 1.5 mm×1.5 mm and
a slice thickness of 8 mm. We show that the results obtained with our approach are superior to currently used
averaging techniques. Due to the fact that the heart deforms over the cardiac cycle, we investigate further, how
the replacement of a rigid registration by a deformable registration as preprocessing step improves the quality
of the final HR image data. We conclude that image quality is dramatically enhanced by applying an SRR
technique, especially for cardiac MR images, where the resolution in the slice-selection direction is about five times
lower than within the slices.
Synthesizing MR contrast and resolution through a patch matching technique
Tissue contrast and resolution of magnetic resonance neuroimaging data have strong impacts on the utility of the
data in clinical and neuroscience tasks such as registration and segmentation. Lengthy acquisition times typically
prevent routine acquisition of multiple MR tissue contrast images at high resolution, and the opportunity for
detailed analysis using these data would seem to be irrevocably lost. This paper describes an example based
approach using patch matching from a multiple resolution multiple contrast atlas in order to change an image's
resolution as well as its MR tissue contrast from one pulse-sequence to that of another. The use of this approach
to generate different tissue contrasts (T2/PD/FLAIR) from a single T1-weighted image is demonstrated on both
phantom and real images.
A variational approach for the correction of field-inhomogeneities in EPI sequences
A wide range of medical applications in clinic and research exploit images acquired by fast magnetic resonance
imaging (MRI) sequences such as echo-planar imaging (EPI), e.g. functional MRI (fMRI) and diffusion tensor
MRI (DT-MRI). Since the underlying assumption of homogeneous static fields fails to hold in practical applications,
images acquired by those sequences suffer from distortions in both geometry and intensity. In the present
paper we propose a new variational image registration approach to correct those EPI distortions. To this end
we acquire two reference EPI images without diffusion sensitizing and with inverted phase encoding gradients in
order to calculate a rectified image. The idea is to apply a specialized registration scheme which compensates
for the characteristic direction-dependent image distortions. In addition, the proposed scheme automatically
corrects for intensity distortions. This is done by invoking a problem-dependent distance measure incorporated
into a variational setting. We adjust not only the image volumes but also the phase encoding direction after
correcting for the patient's head movements between the acquisitions. Finally, we present first successful results of
the new algorithm for the registration of DT-MRI datasets.
Denoising arterial spin labeling MRI using tissue partial volume
Jan Petr,
Jean-Christophe Ferre,
Jean-Yves Gauvrit,
et al.
Arterial spin labeling (ASL) is a noninvasive MRI method that uses magnetically labeled blood to measure cerebral perfusion.
Spatial resolution of ASL is relatively low and as a consequence perfusion from different tissue types is mixed in each pixel.
The average ratio of gray matter (GM) to white matter (WM) blood flow is 3.2 to 1. Disregarding the partial volume effects (PVE) can thus cause
serious errors of perfusion quantification. PVE also complicates spatial filtering of ASL images
as apart from noise there is a spatial signal variation due to tissue partial volume.
Recently, an algorithm for correcting PVE has been published by Asllani et al. It represents the measured magnetization as a sum of different
tissue magnetizations weighted by their fractional volume in a pixel.
With the knowledge of the partial volume obtained from a high-resolution MRI image, it is possible to separate the individual tissue contributions by linear regression on a neighborhood
of each pixel.
We propose an extension of this algorithm by minimizing the
total-variation of the tissue specific magnetization. This makes the algorithm more flexible to local changes in perfusion. We show that this
method can be used to denoise ASL images without mixing the WM and GM signal.
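The Asllani-style regression described above can be sketched as a per-pixel linear least-squares fit over a neighborhood: the measured magnetization is modeled as the tissue partial volumes times unknown tissue magnetizations. This is a simplified two-tissue sketch without the total-variation extension; neighborhood size and tissue values are illustrative:

```python
import numpy as np

def pve_correct(measured, pv_gm, pv_wm, radius=2):
    """Separate GM and WM magnetization by linear regression over a
    (2*radius+1)^2 neighborhood of each pixel: the measured ASL signal
    is modeled as pv_gm * m_gm + pv_wm * m_wm and solved by least squares."""
    H, W = measured.shape
    m_gm = np.zeros((H, W))
    m_wm = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            win = (slice(max(i - radius, 0), i + radius + 1),
                   slice(max(j - radius, 0), j + radius + 1))
            A = np.stack([pv_gm[win].ravel(), pv_wm[win].ravel()], axis=1)
            y = measured[win].ravel()
            x, *_ = np.linalg.lstsq(A, y, rcond=None)
            m_gm[i, j], m_wm[i, j] = x
    return m_gm, m_wm

# noiseless synthetic check: GM magnetization 3.2, WM magnetization 1.0
rng = np.random.default_rng(1)
pv_gm = rng.uniform(0.0, 1.0, (6, 6))
pv_wm = 1.0 - pv_gm
measured = 3.2 * pv_gm + 1.0 * pv_wm
m_gm, m_wm = pve_correct(measured, pv_gm, pv_wm)
```

The regression is well posed only where the partial-volume maps vary within the neighborhood; the total-variation extension in the paper addresses the resulting loss of spatial detail.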
An adaptive nonlocal means scheme for medical image denoising
Medical images often consist of low-contrast objects corrupted by random noise arising in the image acquisition
process. Thus, image denoising is one of the fundamental tasks required by medical imaging analysis. In this work, we
investigate an adaptive denoising scheme based on the nonlocal (NL)-means algorithm for medical imaging applications.
In contrast with the traditional NL-means algorithm, the proposed adaptive NL-means (ANL-means) denoising scheme
has three unique features. First, it employs the singular value decomposition (SVD) method and the K-means clustering
(K-means) technique for robust classification of blocks in noisy images. Second, the local window is adaptively adjusted
to match the local property of a block. Finally, a rotated block matching algorithm is adopted for better similarity
matching. Experimental results from both additive white Gaussian noise (AWGN) and Rician noise are given to
demonstrate the superior performance of the proposed ANL denoising technique over various image denoising
benchmarks in terms of both PSNR and perceptual quality.
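For context, the classical NL-means weighting that the adaptive scheme builds on can be sketched as follows. This is the non-adaptive baseline with fixed windows, not the proposed ANL-means; names and parameters are illustrative:

```python
import numpy as np

def nlm_weight(patch_i, patch_j, h):
    """Classical NL-means similarity weight between two equally sized
    patches: w = exp(-||P_i - P_j||^2 / h^2), with h controlling decay."""
    d2 = np.sum((patch_i - patch_j) ** 2)
    return np.exp(-d2 / h ** 2)

def nlm_pixel(patches, ref_idx, h):
    """Denoised value for the pixel at the center of patches[ref_idx]:
    a similarity-weighted average over the centers of all candidate patches."""
    ref = patches[ref_idx]
    w = np.array([nlm_weight(ref, p, h) for p in patches])
    c = patches.shape[1] // 2
    centers = patches[:, c, c]
    return np.sum(w * centers) / np.sum(w)

# two identical dark patches and one dissimilar bright patch
flat = np.full((3, 3), 5.0)
bright = np.full((3, 3), 50.0)
patches = np.stack([flat, flat, bright])
val = nlm_pixel(patches, 0, h=10.0)
```

Dissimilar patches receive near-zero weight, so averaging happens only across structurally similar regions; the ANL-means contributions (SVD/K-means block classification, adaptive windows, rotated matching) refine how candidate patches are selected and compared.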
Noise filtering in thin-slice 4D cerebral CT perfusion scans
Patients suffering from cerebral ischemia or subarachnoid hemorrhage, undergo a 4D (3D+time) CT Perfusion
(CTP) scan to assess the cerebral perfusion and a CT Angiography (CTA) scan to assess the vasculature. The
aim of our research is to extract the vascular information from the CTP scan. This requires thin-slice CTP
scans that suffer from a substantial amount of noise. Therefore noise reduction is an important prerequisite
for further analysis. So far, the few noise filtering methods for 4D datasets proposed in the literature treat
the temporal dimension as a fourth dimension similar to the three spatial dimensions, mixing temporal and spatial
intensity information. We propose a bilateral noise reduction method based on time-intensity profile similarity
(TIPS), which reduces noise while preserving temporal intensity information. TIPS was compared to 4D bilateral
filtering on 10 patient CTP scans and, even though TIPS bilateral filtering is much faster, it results in better
vessel visibility and higher image quality ranking (observer study) than 4D bilateral filtering.
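The TIPS idea can be sketched for one spatial dimension plus time: a bilateral filter whose range term measures the distance between whole time-intensity profiles rather than single intensities. Parameter names and values below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def tips_filter(data, sigma_s=1.0, sigma_t=10.0, radius=2):
    """Bilateral filtering of a (T, N) time x space dataset where the
    range term compares entire time-intensity profiles (TIPS): a pixel
    is averaged only with neighbors whose temporal dynamics are similar,
    preserving the enhancement curves needed for perfusion analysis."""
    T, N = data.shape
    out = np.zeros_like(data, dtype=float)
    for n in range(N):
        num = np.zeros(T)
        den = 0.0
        for m in range(max(n - radius, 0), min(n + radius + 1, N)):
            ssd = np.mean((data[:, n] - data[:, m]) ** 2)   # profile distance
            w = (np.exp(-(n - m) ** 2 / (2.0 * sigma_s ** 2))
                 * np.exp(-ssd / (2.0 * sigma_t ** 2)))
            num += w * data[:, m]
            den += w
        out[:, n] = num / den
    return out

# a constant dataset passes through unchanged
data = np.full((5, 8), 7.0)
smoothed = tips_filter(data)
```

Because similarity is computed once per profile pair rather than per time point, this weighting is also cheaper than a full 4D bilateral filter, consistent with the speed advantage reported above.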
Keynote and Vascular Image Analysis
Affinity-based constraint optimization for nearly-automatic vessel segmentation
We present an affinity-based optimization method for nearly-automatic vessel segmentation in CTA scans.
The desired segmentation is modeled as a function that minimizes a quadratic affinity-based functional. The
functional incorporates intensity and geometrical vessel shape information and a smoothing constraint. Given a
few user-defined seeds, the minimum of the functional is obtained by solving a single set of linear equations. The
binary segmentation is then obtained by applying a user-selected threshold. The advantages of our method are
that it requires fewer initialization seeds, is robust, and yields better results than existing graph-based interactive
segmentation methods. Experimental results on 20 vessel segments including the carotid arteries bifurcation and
noisy parts of the carotid yield a mean symmetric surface error of 0.54 mm (std = 0.28).
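The "single linear solve" at the heart of such quadratic affinity functionals can be illustrated with a random-walker-style sketch on a 1D image graph. The paper's functional also carries geometrical vessel-shape and smoothing terms, which are omitted here; `beta` and the chain-graph layout are assumptions for illustration only:

```python
import numpy as np

def seeded_affinity_segmentation(img, fg_seeds, bg_seeds, beta=10.0):
    """Minimize x^T L x subject to x=1 at foreground and x=0 at background seeds.

    L is the Laplacian of a chain graph with intensity-based affinities;
    the minimizer over the unseeded nodes comes from one linear solve,
    and thresholding at 0.5 gives the binary segmentation (sketch).
    """
    n = len(img)
    W = np.zeros((n, n))
    for i in range(n - 1):
        W[i, i + 1] = W[i + 1, i] = np.exp(-beta * (img[i] - img[i + 1]) ** 2)
    L = np.diag(W.sum(axis=1)) - W
    seeded = np.array(sorted(set(fg_seeds) | set(bg_seeds)))
    free = np.array([i for i in range(n) if i not in set(seeded.tolist())])
    x = np.zeros(n)
    x[list(fg_seeds)] = 1.0
    # Dirichlet problem: L_ff x_f = -L_fs x_s
    x[free] = np.linalg.solve(L[np.ix_(free, free)],
                              -L[np.ix_(free, seeded)] @ x[seeded])
    return (x > 0.5).astype(int)
```

A user-selected threshold other than 0.5 would simply shift where the soft solution is binarized.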
A new 3D tubular intensity model for quantification of thin vessels in 3D tomographic images
Show abstract
We introduce a new 3D curved tubular intensity model in conjunction with a model fitting scheme for accurate
segmentation and quantification of thin vessels in 3D tomographic images. The curved tubular model is formulated
based on principles of the image formation process, and we have derived an analytic solution for the model
function. In contrast to previous straight models, the new model makes it possible to accurately represent curved tubular
structures, to directly estimate the local curvature by model fitting, and to more accurately estimate the
shape and other parameters of tubular structures. We have successfully applied our approach to 3D synthetic
images as well as 3D MRA and 3D CTA human vascular images, achieving more accurate segmentation results than with a straight model.
Segmentation I
Artifact aware tracking of left ventricular contours in 3D ultrasound
Show abstract
The analysis of echocardiograms, whether visual or automated, is often hampered by ultrasound artifacts which
obscure the moving myocardial wall. In this study, a probabilistic framework for tracking the endocardial surface
in 3D ultrasound images is proposed, which distinguishes between visible and artifact-obscured myocardium.
Motion estimation of visible myocardium relies more on a local, data-driven tracker, whereas tracking of
obscured myocardium is assisted by a global, statistical model of cardiac motion. To make this distinction, the
expectation-maximization algorithm is applied in a stationary and dynamic frame-of-reference. Evaluation on
35 three-dimensional echocardiographic sequences shows that this artifact-aware tracker gives better results than
when no distinction is made. In conclusion, the proposed tracker is able to reduce the influence of artifacts,
potentially improving quantitative analysis of clinical quality echocardiograms.
Classification in medical images using adaptive metric k-NN
Show abstract
The performance of the k-nearest neighbors (k-NN) classifier is highly dependent on the distance metric
used to identify the k nearest neighbors of the query points. The standard Euclidean distance is commonly used
in practice. This paper investigates the performance of k-NN classifier with respect to different adaptive metrics
in the context of medical imaging. We propose using adaptive metrics such that the structure of the data is
better described, introducing unsupervised learning knowledge into k-NN.
Four different metrics are investigated: a theoretical metric based on the assumption that
images are drawn from the Brownian Image Model (BIM), a normalized metric based on the variance of the data, an
empirical metric based on the empirical covariance matrix of the unlabeled data, and an optimized metric
obtained by minimizing the classification error. Performing Principal Component Analysis (PCA) on the spectral
structure of the empirical covariance additionally yields subspace metrics.
The metrics are evaluated on two data sets: lateral X-rays of the lumbar aortic/spine region, where we use
k-NN for performing abdominal aorta calcification detection; and mammograms, where we use k-NN for breast
cancer risk assessment. The results show that appropriate choice of metric can improve classification.
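A minimal sketch of one of the variants above, the empirical metric, replaces the Euclidean distance in k-NN with a Mahalanobis distance whose matrix is the inverse covariance of the unlabeled data; the regularization constant is an assumption, not from the paper:

```python
import numpy as np

def adaptive_knn_predict(X_train, y_train, X_test, X_unlabeled, k=3):
    """k-NN with an empirical Mahalanobis metric estimated from unlabeled data."""
    cov = np.cov(X_unlabeled, rowvar=False)
    M = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))  # regularized inverse covariance
    preds = []
    for x in X_test:
        d = X_train - x
        dist = np.einsum('ij,jk,ik->i', d, M, d)          # squared Mahalanobis distances
        nn = np.argsort(dist)[:k]
        vals, counts = np.unique(np.asarray(y_train)[nn], return_counts=True)
        preds.append(vals[np.argmax(counts)])             # majority vote
    return np.array(preds)
```

Directions in which the unlabeled data vary strongly are down-weighted, so the neighborhood shape adapts to the data's spread.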
Partial volume correction for volume estimation of liver metastases and lymph nodes in CT scans using spatial subdivision
Show abstract
In oncological therapy monitoring, the estimation of tumor growth from consecutive CT scans is an important
aspect in deciding whether the given treatment is adequate for the patient. This can be done by measuring and
comparing the volume of a lesion in the scans based on a segmentation. However, simply counting the voxels
within the segmentation mask can lead to significant differences in the volume, if the lesion has been segmented
slightly differently by various readers or in different scans, due to the limited spatial resolution of CT and due
to partial volume effects.
We present a novel algorithm for measuring the volume of liver metastases and lymph nodes which considers
partial volume effects at the surface of a lesion. Our algorithm is based on a spatial subdivision of the segmentation.
We have evaluated the algorithm on a phantom and a multi-reader study. Our evaluations have shown
that our algorithm allows determining the volume more accurately even for larger slice thicknesses. Moreover,
it reduces inter-observer variability of volume measurements significantly. The calculation of the volume takes 2
seconds for 50³ voxels on a single 2.66 GHz Intel Core2 CPU.
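The core partial-volume idea, counting boundary voxels fractionally instead of as whole voxels, can be sketched as follows. This sketch uses a simple intensity-fraction model at the lesion surface rather than the paper's spatial-subdivision algorithm, and all names are illustrative:

```python
import numpy as np
from scipy.ndimage import binary_erosion

def pv_corrected_volume(image, mask, voxel_volume, mean_lesion, mean_bg):
    """Partial-volume-corrected volume estimate (sketch).

    Interior voxels count fully; voxels on the mask boundary contribute
    the fraction (I - mean_bg) / (mean_lesion - mean_bg), clipped to [0, 1].
    """
    mask = mask.astype(bool)
    interior = binary_erosion(mask)
    boundary = mask & ~interior
    frac = np.clip((image - mean_bg) / (mean_lesion - mean_bg), 0.0, 1.0)
    return voxel_volume * (interior.sum() + frac[boundary].sum())
```

Because boundary voxels contribute continuously, slightly different segmentation masks of the same lesion yield closer volume estimates than plain voxel counting.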
Microaneurysms detection with the radon cliff operator in retinal fundus images
Show abstract
Diabetic Retinopathy (DR) is one of the leading causes of blindness in the industrialized world. Early detection is the
key in providing effective treatment. However, the current number of trained eye care specialists is inadequate to screen
the increasing number of diabetic patients. In recent years, automated and semi-automated systems to detect DR with
color fundus images have been developed with encouraging, but not fully satisfactory results. In this study we present the
initial results of a new technique for the detection and localization of microaneurysms, an early sign of DR. The algorithm
is based on three steps: candidate selection, the actual microaneurysm detection, and a final probability evaluation. We
introduce the new Radon Cliff operator which is our main contribution to the field. Making use of the Radon transform, the
operator is able to detect single noisy Gaussian-like circular structures regardless of their size or strength. The advantages
over existing microaneurysm detectors are manifold: the size of the lesions can be unknown, it automatically distinguishes
lesions from the vasculature, and it provides a fair approach to microaneurysm localization even without post-processing
the candidates with machine learning techniques, facilitating the training phase. The algorithm is evaluated on a publicly
available dataset from the Retinopathy Online Challenge.
Liver segmentation from registered multiphase CT data sets with EM clustering and GVF level set
Show abstract
In this study, clinically produced multiphase CT volumetric data sets (pre-contrast, arterial and venous enhanced phase)
are drawn upon to transcend the intrinsic limitations of single phase data sets for the robust and accurate segmentation of
the liver in typically challenging cases. As an initial step, all other phase volumes are registered to either the arterial or
venous phase volume by a symmetric nonlinear registration method using mutual information as the similarity metric. Once
registered, the multiphase CT volumes are pre-filtered to prepare for subsequent steps. Under the assumption that the
intensity vectors of different organs follow a Gaussian mixture model (GMM), expectation maximization (EM) is then
used to classify the multiphase voxels into different clusters. The clusters for liver parenchyma, vessels and tumors are
combined to provide the initial liver mask, which is used to generate the initial zero level set. Conversely, the voxels
classified as non-liver will guide the speed image of the level sets in order to reduce leakage. Geodesic active contour
level set using the gradient vector flow (GVF) derived from one of the enhanced phase volumes is then performed to
further evolve the liver segmentation mask. Using EM clusters as the reference, the resulting liver mask is finally
morphologically post-processed to add missing clusters and reduce leakage. The proposed method has been tested on the
clinical data sets of ten patients with relatively complex and/or extensive liver cancer or metastases. A Dice
similarity index of 95.8% when compared to expert manual segmentation demonstrates the high performance and robustness of
our proposed method, even for challenging cancer data sets, and confirms the potential of a more thorough
computational exploitation of currently available clinical data sets.
Electric field theory based approach to search-direction line definition in image segmentation: application to optimal femur-tibia cartilage segmentation in knee-joint 3-D MR
Show abstract
A novel method is presented for definition of search lines in a variety of surface segmentation approaches. The
method is inspired by properties of electric field direction lines and is applicable to general-purpose n-D shape-based
image segmentation tasks. Its utility is demonstrated in graph construction and optimal segmentation of
multiple mutually interacting objects. The properties of the electric field-based graph construction guarantee
that inter-object graph connecting lines are non-intersecting and inherently cover the entire object-interaction
space. When applied to inter-object cross-surface mapping, our approach generates one-to-one and all-to-all
vertex correspondence pairs between the regions of mutual interaction. We demonstrate the benefits of the electric
field approach in several examples ranging from relatively simple single-surface segmentation to complex multi-object
multi-surface segmentation of femur-tibia cartilage. The performance of our approach is demonstrated in
60 MR images from the Osteoarthritis Initiative (OAI), in which our approach achieved a very good performance
as judged by surface positioning errors (average of 0.29 and 0.59 mm for signed and unsigned cartilage positioning
errors, respectively).
2D Image Segmentation
WCE video segmentation using textons
Show abstract
Wireless Capsule Endoscopy (WCE) integrates wireless transmission with image and video technology. It has
been used to examine the small intestine non-invasively. Medical specialists look for significant events in the
WCE video by direct visual inspection, manually labelling clinically relevant frames in tiring sessions of up to
one hour; this limits the usage of WCE. The ability to automatically discriminate digestive organs such as the
esophagus, stomach, small intestine, and colon would therefore be of great advantage. In this paper we propose to use textons for the
automatic discrimination of abrupt changes within a video. In particular, we consider, as features, for each
frame hue, saturation, value, high-frequency energy content and the responses to a bank of Gabor filters. The
experiments have been conducted on ten video segments extracted from WCE videos, in which the significant
events have been previously labelled by experts. Results have shown that the proposed method may eliminate up
to 70% of the frames from further investigation. The doctors' direct analysis may hence be concentrated
only on eventful frames. A graphical tool showing sudden changes in the texton frequencies for each frame is also proposed as a visual aid to find clinically relevant segments of the video.
A weighted mean shift, normalized cuts initialized color gradient based geodesic active contour model: applications to histopathology image segmentation
Show abstract
While geodesic active contours (GAC) have become very popular tools for image segmentation, they are sensitive
to model initialization. In order to get an accurate segmentation, the model typically needs to be initialized
very close to the true object boundary. Apart from accuracy, automated initialization of the objects of interest
is an important pre-requisite to being able to run the active contour model on very large images (such as those
found in digitized histopathology). A second limitation of the GAC model is that the edge detector function is based
on gray scale gradients, color images typically being converted to gray scale prior to computing the gradient.
For color images, however, the gray scale gradient results in broken edges and weak boundaries, since the other
channels are not exploited for the gradient determination. In this paper we present a new geodesic active contour
model that is driven by an accurate and rapid object initialization scheme: weighted mean shift normalized cuts
(WNCut). WNCut draws its strength from the integration of two powerful segmentation strategies, mean shift
clustering and normalized cuts. WNCut involves first defining a color swatch (typically a few pixels) from the
object of interest. A multi-scale mean shift coupled normalized cuts algorithm then rapidly yields an initial
accurate detection of all objects in the scene corresponding to the colors in the swatch. This detection result
provides the initial boundary for GAC model. The edge-detector function of the GAC model employs a local
structure tensor based color gradient, obtained by calculating the local min/max variations contributed from each
color channel (e.g. R,G,B or H,S,V). Our color gradient based edge-detector function results in more prominent
boundaries compared to classical gray scale gradient based function. We evaluate segmentation results of our
new WNCut initialized color gradient based GAC (WNCut-CGAC) model against a popular region-based model
(Chan & Vese) on a total of 60 digitized histopathology images. The WNCut-CGAC
model yielded an average overlap, sensitivity, specificity, and positive predictive value of 73%, 83%, 97%, and 84%,
compared to corresponding values of 64%, 75%, 95%, and 72% for the Chan & Vese model. The rapid and
accurate object initialization scheme (WNCut) and the color gradient make the WNCut-CGAC scheme an ideal
segmentation tool for very large color imagery.
Retinal atlas statistics from color fundus images
Show abstract
An atlas provides a reference anatomic structure and an associated coordinate system. An atlas may be used in a variety
of applications, including segmentation and registration, and can be used to characterize anatomy across a population. We
present a method for generating an atlas of the human retina from 500 color fundus image pairs. We register
each image pair to obtain a larger anatomic field of view. Key retinal anatomic features are selected as atlas
landmarks: disk center, fovea, and main vessel arch. An atlas coordinate system is defined based on the statistics of the
landmarks. Images from the population are warped into the atlas space to produce a statistical retinal atlas which can be
used for automatic diagnosis, concise indexing, semantic blending, etc.
Automatic landmark detection and scan range delimitation for topogram images using hierarchical network
Wei Zhang,
Frederic Mantlic,
Shaohua Kevin Zhou
Show abstract
The topogram is a 2D projection image of the human body formed using a Computed Tomography (CT) scanner.
It can be used to delimit the desired scan range for a subsequent precise 3D CT scan. In this paper, we present a
robust and efficient system for automatically determining scan ranges and their associated anatomical landmarks
in topogram images. The system can handle cases in which only about 50% of the desired regions are visible.
The robustness of our system can be attributed to three key ingredients: 1. The detection is based on a
hierarchical network; 2. Network optimization is based on sequentially optimizing a set of subnetworks; 3.
The detection probability is further refined based on the detection context. Extensive experiments (including external testing) on over 1000 topogram images show that our approach works robustly and efficiently even on very challenging data.
Graph-based pigment network detection in skin images
Show abstract
Detecting the pigment network is a crucial step in melanoma diagnosis. In this paper, we present a novel graph-based
pigment network detection method that can find and visualize round structures belonging to the pigment
network. After finding sharp changes of the luminance image by an edge detection function, the resulting binary
image is converted to a graph, and then all cyclic sub-graphs are detected. These cycles represent meshes that
belong to the pigment network. Then, we create a new graph of the cyclic structures based on their distance.
According to the density ratio of the new graph of the pigment network, the image is classified as "Absent" or
"Present". Being Present means that a pigment network is detected in the skin lesion. Using this approach, we
achieved an accuracy of 92.6% on five hundred unseen images.
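The cycle-counting step can be sketched with a pixel graph and the cyclomatic number E - V + C, which counts independent cycles (the network meshes). The 4-connectivity, density definition, and threshold below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def pigment_network_present(edge_img, density_thresh=0.05):
    """Classify a binary edge image as "Present"/"Absent" from cyclic-structure density."""
    ys, xs = np.nonzero(edge_img)
    nodes = set(zip(ys.tolist(), xs.tolist()))
    parent = {n: n for n in nodes}            # union-find for connected components

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    edges = 0
    for (y, x) in nodes:
        for nb in ((y + 1, x), (y, x + 1)):   # count each undirected edge once
            if nb in nodes:
                edges += 1
                ra, rb = find((y, x)), find(nb)
                if ra != rb:
                    parent[ra] = rb
    comps = len({find(n) for n in nodes})
    n_cycles = edges - len(nodes) + comps     # cyclomatic number
    density = n_cycles / max(1, edge_img.size)
    return "Present" if density >= density_thresh else "Absent"
```

A closed mesh contributes exactly one independent cycle, so denser networks raise the density score.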
Cervigram image segmentation based on reconstructive sparse representations
Show abstract
We propose an approach based on reconstructive sparse representations to segment tissues in optical images of
the uterine cervix. Because of large variations in image appearance caused by the changing of the illumination
and specular reflection, the color and texture features in optical images often overlap with each other and are not
linearly separable. By leveraging sparse representations, the data can be transformed to higher dimensions with
sparse constraints and become more separable. The K-SVD algorithm is employed to find sparse representations
and corresponding dictionaries. The data can be reconstructed from its sparse representations and positive
and/or negative dictionaries. Classification can be achieved by comparing the reconstructive errors. In
the experiments we applied our method to automatically segment the biomarker AcetoWhite (AW) regions in
an archive of 60,000 images of the uterine cervix. Compared with other general methods, our approach showed
lower space and time complexity and higher sensitivity.
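The classify-by-reconstruction-error rule can be sketched in a few lines. Plain least squares stands in for sparse coding here for brevity; the paper learns the positive/negative dictionaries with K-SVD and codes with sparsity constraints:

```python
import numpy as np

def classify_by_reconstruction(x, D_pos, D_neg):
    """Assign x to the class whose dictionary reconstructs it with smaller error."""
    def residual(D):
        coef, *_ = np.linalg.lstsq(D, x, rcond=None)  # stand-in for sparse coding
        return np.linalg.norm(x - D @ coef)
    return 1 if residual(D_pos) < residual(D_neg) else 0
```

A feature vector close to the span of the positive dictionary leaves a small residual there and a large one under the negative dictionary, and vice versa.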
Shape
Learning discriminative distance functions for valve retrieval and improved decision support in valvular heart disease
Show abstract
Disorders of the heart valves constitute a considerable health problem and often require surgical intervention.
Recently, various approaches have been published seeking to overcome the shortcomings of current clinical practice,
which still relies on manually performed measurements for performance assessment. Clinical decisions are still based on
generic information from clinical guidelines and publications and on the personal experience of clinicians. We present a
framework for retrieval and decision support using learning-based discriminative distance functions and visualization
of patient similarity with relative neighborhood graphs, based on shape and derived features. We considered
two learning-based techniques, namely learning from equivalence constraints and the intrinsic Random Forest
distance. The generic approach enables learning arbitrary user-defined concepts of similarity depending on
the application. This is demonstrated with the proposed applications, including automated diagnosis and interventional
suitability classification, where classification rates of up to 88.9% and 85.9% could be observed on a
set of valve models from 288 and 102 patients respectively.
Shape based MRI prostate image segmentation using local information driven directional distance Bayesian method
Show abstract
In this paper, we present a shape based segmentation methodology for magnetic resonance prostate images.
We first propose a new way to represent shapes via the hyperbolic tangent of the signed distance function.
This effectively corrects the drawbacks of the signed distance function and yields very reasonable results for
the shape registration and learning. Secondly, under a Bayesian statistical framework, instead of computing
the posterior using a uniform prior, a directional distance map is introduced in order to incorporate a
priori knowledge of the image content as well as the estimated center of the target object. Essentially, the image
is modeled as a Finsler manifold and the metric is computed out of the directional derivative of the image.
Then the directional distance map is computed to suppress the posterior remote from the object center.
Thirdly, in the posterior image, a localized region-based cost functional is designed to drive the shape-based
segmentation. Such a cost functional utilizes local regional information and is robust to both image
noise and remote/irrelevant disturbances. With these three major components, the entire shape-based segmentation procedure is provided as a complete open-source pipeline and is applied to magnetic resonance imaging (MRI) prostate data.
3D shape from silhouette points in registered 2D images using conjugate gradient method
Show abstract
We describe a simple and robust algorithm for estimating 3D shape given a number of silhouette points obtained
from two or more viewpoints and a parametric model of the shape. Our algorithm minimizes (in the least
squares sense) the distances from the lines obtained by unprojecting the silhouette points to 3D to their closest
silhouette points on the 3D shape. The solution is found using an iterative approach. In each iteration, we
locally approximate the least squares problem with a degree-4 polynomial function. The approximate problem
is solved using a nonlinear conjugate gradient solver that takes advantage of its structure to perform exact and
global line searches. We tested our algorithm by applying it to reconstruct patient-specific femur shapes from
simulated biplanar X-ray images.
A single scan skeletonization algorithm: application to medical imaging of trabecular bone
Show abstract
Shape description is an important step in image analysis. The skeleton is used as a simple, compact representation of a
shape. A skeleton represents the line centered in the shape and must be homotopic and one point wide. Current skeletonization
algorithms compute the skeleton over several image scans, using either thinning algorithms or distance transforms.
The principle of thinning is to delete points iteratively while preserving the topology of the shape. On the other hand,
the maxima of the local distance transform identify the skeleton and provide an equivalent way to calculate the medial axis.
However, with this method, the skeleton obtained is disconnected, so all the points of the medial
axis must be connected to produce the skeleton. In this study we introduce a translated distance transform and adapt an existing distance-driven
homotopic algorithm to perform skeletonization with a single scan and thus allow the processing of unbounded images.
This method is applied, in our study, to micro-scanner images of trabecular bone. We wish to characterize the bone micro-architecture
in order to quantify bone integrity.
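The distance-transform route to the medial axis can be sketched with SciPy. As noted above, the raw maxima set is generally disconnected; connecting it homotopically, in a single scan, is the paper's contribution and is not shown here:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, maximum_filter

def distance_medial_axis(shape_mask):
    """Medial-axis candidates as local maxima of the distance transform (sketch)."""
    shape_mask = shape_mask.astype(bool)
    dist = distance_transform_edt(shape_mask)          # distance to background
    # keep foreground pixels that are 3x3-neighborhood maxima of the distance map
    return (dist == maximum_filter(dist, size=3)) & shape_mask
```

For an elongated shape, the surviving pixels lie along the centerline, where the distance to the background is locally largest.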
Coupled level set segmentation using a point-based statistical shape model relying on correspondence probabilities
Show abstract
In this article, we propose a unified statistical framework for image segmentation with shape prior information.
The approach combines an explicitly parameterized point-based probabilistic statistical shape model (SSM)
with a segmentation contour which is implicitly represented by the zero level set of a higher dimensional surface.
These two aspects are unified in a Maximum a Posteriori (MAP) estimation where the level set is evolved to
converge towards the boundary of the organ to be segmented based on the image information while taking into
account the prior given by the SSM information. The optimization of the energy functional obtained by the MAP
formulation leads to an alternate update of the level set and an update of the fitting of the SSM. We then adapt
the probabilistic SSM for multi-shape modeling and extend the approach to multiple-structure segmentation by
introducing a level set function for each structure. During segmentation, the evolution of the different level set
functions is coupled by the multi-shape SSM. First experimental evaluations indicate that our method is well
suited for the segmentation of topologically complex, non-spherical, and multiple-structure shapes. We demonstrate
the effectiveness of the method by experiments on kidney segmentation as well as on hip joint segmentation in
CT images.
Registration II
Validation of a nonrigid registration framework that accommodates tissue resection
Show abstract
We present a 3D extension and validation of an intra-operative registration framework that accommodates
tissue resection. The framework is based on the bijective Demons method, but instead of regularizing with
the traditional Gaussian smoother, we apply an anisotropic diffusion filter with the resection modeled as a
diffusion sink. The diffusion sink prevents unwanted Demons forces originating in the resected area from
diffusing into the surrounding area. Another attractive property of the diffusion sink is the resulting continuous
deformation field across the diffusion sink boundary, which allows us to move the boundary of the diffusion
sink without changing values in the deformation field. The area of resection is estimated by a level-set method
evolving in the space of image intensity disagreements in the intra-operative image domain. A by-product of using
the bijective Demons method is that we can also provide an accurate estimate of the resected tissue in the preoperative
image space. Validation of the proposed method was performed on a set of 25 synthetic images. Our experiments show a significant improvement in accommodating resection using the proposed method compared to two other Demons based methods.
A modified ICP algorithm for normal-guided surface registration
Show abstract
The iterative closest point (ICP) algorithm is probably the most popular algorithm for fine registration of
surfaces. Among its key properties are a simple minimization scheme, proofs of convergence, and the
ease with which it can be modified and improved in many ways (e.g. use of fuzzy point correspondences, incorporation of a
priori knowledge, extensions to non-linear deformations, speed-up strategies, etc.) while keeping the desirable
properties of the original method. However, most ICP-like registration methods suffer from the fact that they
only consider the distance between the surfaces to register in the criterion to minimize, and thus are highly
dependent on how the surfaces are aligned in the first place. This explains why these methods are likely to be
trapped in local minima and to lead to erroneous solutions. A solution to partly alleviate this problem would
consist in adding higher-order information in the criterion to minimize (e.g. normals, curvatures, etc.), but
previous works along these research tracks have led to computationally intractable minimization schemes. In
this paper, we propose a new way to include the point unit normals in addition to the point coordinates to
derive an ICP-like scheme for non-linear registration of surfaces, and we show how to keep the properties of the
original ICP algorithm. Our algorithm rests on a simple formula showing how the unit normal changes when
a surface undergoes a small deformation. The use of this formula in an ICP-like algorithm is made possible by
adequate implementation choices, most notably the use of a local, differentiable, parametrization of the surfaces
and a locally affine deformation model using this local parametrization. Then we experimentally show the strong
added value of using the unit normals in a series of controlled experiments.
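One standard way normals enter an ICP-like criterion is the linearized point-to-plane error; the 2D sketch below solves one such alignment step and is only an illustration of the idea, not the paper's locally affine, surface-parametrized model:

```python
import numpy as np

def point_to_plane_step_2d(src, dst, dst_normals):
    """One linearized point-to-plane alignment step in 2D.

    Solves min over (theta, tx, ty) of sum_i (n_i . (R(theta) p_i + t - q_i))^2
    after linearizing the rotation R about theta = 0.
    """
    A, b = [], []
    for p, q, n in zip(src, dst, dst_normals):
        # derivative of R(theta) p at theta = 0 is (-p_y, p_x)
        A.append([n[0] * (-p[1]) + n[1] * p[0], n[0], n[1]])
        b.append(n @ (q - p))
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol  # (theta, tx, ty)
```

Because residuals are measured along the destination normals, sliding tangentially along the surface is not penalized, which typically widens the basin of convergence compared with point-to-point distances.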
Structural template formation with discovery of subclasses
Show abstract
A major focus of computational anatomy is to extract the most relevant information to identify and characterize
anatomical variability within a group of subjects as well as between different groups. The construction of atlases
is central to this effort. An atlas is a deterministic or probabilistic model with intensity variance, structural,
functional or biochemical information over a population. To date most algorithms to construct atlases have
been based on a single subject assuming that the population is best described by a single atlas. However, we
believe that in a population with a wide range of subjects multiple atlases may be more representative since
they reveal the anatomical differences and similarities within the group. In this work, we propose to use the
K-means clustering algorithm to partition a set of images into several subclasses, based on a joint distance which
is composed of a distance quantifying the deformation between images and a dissimilarity measured from the registration residual. During clustering, the spatial transformations are averaged rather than images to form cluster centers, to ensure a crisp reference. At the end of this algorithm, the updated centers of the k clusters
are our atlases. We demonstrate this algorithm on a subset of a publicly available database with whole brain
volumes of subjects aged 18-96 years. The atlases constructed by this method capture the significant structural
differences across the group.
Improved robust point matching with label consistency
Show abstract
Robust point matching (RPM) jointly estimates correspondences and non-rigid warps between unstructured
point-clouds. RPM does not, however, utilize information of the topological structure or group memberships of
the data it is matching. In numerous medical imaging applications, each extracted point can be assigned group
membership attributes or labels based on segmentation, partitioning, or clustering operations. For example,
points on the cortical surface of the brain can be grouped according to the four lobes. Estimated warps should
enforce the topological structure of such point-sets, e.g. points belonging to the temporal lobe in the two
point-sets should be mapped onto each other.
We extend the RPM objective function to incorporate group membership labels by including a Label Entropy
(LE) term. LE discourages mappings that transform points within a single group in one point-set onto points
from multiple distinct groups in the other point-set. The resulting Labeled Point Matching (LPM) algorithm
requires a very simple modification to the standard RPM update rules.
We demonstrate the performance of LPM on coronary trees extracted from cardiac CT images. We partitioned
the point sets into coronary sections without a priori anatomical context, yielding potentially disparate labelings
(e.g. [1,2,3] → [a,b,c,d]). LPM simultaneously estimated label correspondences, point correspondences, and a
non-linear warp. Non-matching branches were treated wholly through the standard RPM outlier process akin to
non-matching points. Results show LPM produces warps that are more physically meaningful than RPM alone.
In particular, LPM mitigates unrealistic branch crossings and results in more robust non-rigid warp estimates.
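The intent of the Label Entropy term can be sketched directly: for each source group, the probability mass its points send to each destination group forms a distribution whose entropy is low for coherent mappings and high for scattered ones. This is a sketch of the term's intent, not the paper's exact formulation:

```python
import numpy as np

def label_entropy(match_probs, labels_src, labels_dst):
    """Entropy penalty on soft correspondences between labeled point groups."""
    labels_src, labels_dst = np.asarray(labels_src), np.asarray(labels_dst)
    total = 0.0
    for g in np.unique(labels_src):
        # mass sent from group g to each destination group
        mass = np.array([match_probs[np.ix_(labels_src == g, labels_dst == h)].sum()
                         for h in np.unique(labels_dst)])
        p = mass / mass.sum()
        p = p[p > 0]
        total += -(p * np.log(p)).sum()  # entropy of the group's destination distribution
    return total
```

Adding such a term to the RPM objective discourages warps that split one anatomical group across several others.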
A new combined surface and volume registration
Natasha Lepore,
Anand A. Joshi,
Richard M. Leahy,
et al.
Show abstract
3D registration of brain MRI data is vital for many medical imaging applications. However, purely intensity-based
approaches for inter-subject matching of brain structure are generally inaccurate in cortical regions, due
to the highly complex network of sulci and gyri, which vary widely across subjects. Here we combine a surface-based
cortical registration with a 3D fluid one for the first time, enabling precise matching of cortical folds
while allowing large deformations in the enclosed brain volume that are guaranteed to be diffeomorphic. This greatly
improves the matching of anatomy in cortical areas. The cortices are segmented and registered with the software
Freesurfer. The deformation field is initially extended to the full 3D brain volume using a 3D harmonic mapping
that preserves the matching between cortical surfaces. Finally, these deformation fields are used to initialize a 3D
Riemannian fluid registration algorithm, that improves the alignment of subcortical brain regions. We validate
this method on an MRI dataset from 92 healthy adult twins. Results are compared to those based on volumetric
registration without surface constraints; the resulting mean templates resolve consistent anatomical features
both subcortically and at the cortex, suggesting that the approach is well-suited for cross-subject integration of
functional and anatomic data.
Diffusion Tensor Image Analysis
Fast Hamilton-Jacobi equation solver and neural fiber bundle extraction
Show abstract
The Hamilton-Jacobi equation (HJE) appears widely in applied mathematics, physics, and optimal control
theory. While its analytical solution is rarely available, the numerical solver is indispensable. In this
work, firstly we propose a novel numerical method, based on the fast sweeping scheme, for the static HJE.
Compared with the original fast sweeping method, our algorithm speeds up the solution by up to a factor of
eight in 3D. The efficiency comes from incorporating ideas from fast marching into fast sweeping. Essentially,
the sweeping origin is selected so that the sweeping direction is more consistent with the information flow
direction, and regions where the two directions oppose each other are avoided. Moreover, the successive
over-relaxation nonlinear iterative method is used for faster convergence. Secondly, we provide a complete pipeline
for brain tractography, in which the proposed solver is the key component for finding the optimal fiber tracts.
The pipeline also contains components ranging from orientation distribution function estimation and multiple-fiber
extraction to the final fiber bundle volumetric segmentation, completing the process from DW-MRI image to segmented fiber bundles. The pipeline is integrated into the publicly available software 3D Slicer. The new solver has been tested and compared with the original scheme on various types of HJEs, and the tractography
pipeline was tested and performed consistently on all 12 brain DW-MRI images.
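A minimal 2D fast-sweeping sketch for the eikonal special case |∇T|·speed = 1 may help make the solver concrete (illustrative only; the paper's contribution, origin selection and SOR acceleration, is not reproduced here, and the seed point is fixed at a corner by assumption):

```python
import numpy as np

def fast_sweep_eikonal(speed, h=1.0, n_sweeps=4):
    """Solve |grad T| * speed = 1 on a 2D grid with Godunov upwinding.
    Minimal fast-sweeping sketch; the seed (T = 0) is the top-left corner."""
    ny, nx = speed.shape
    T = np.full((ny, nx), np.inf)
    T[0, 0] = 0.0  # seed point
    # the four alternating sweep orderings of fast sweeping
    orders = [(range(ny), range(nx)),
              (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)),
              (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for _ in range(n_sweeps):
        for ys, xs in orders:
            for i in ys:
                for j in xs:
                    if i == 0 and j == 0:
                        continue
                    a = min(T[i - 1, j] if i > 0 else np.inf,
                            T[i + 1, j] if i < ny - 1 else np.inf)
                    b = min(T[i, j - 1] if j > 0 else np.inf,
                            T[i, j + 1] if j < nx - 1 else np.inf)
                    f = h / speed[i, j]
                    if abs(a - b) >= f:           # one-sided update
                        t = min(a, b) + f
                    else:                          # two-sided quadratic update
                        t = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
                    T[i, j] = min(T[i, j], t)
    return T
```

With uniform speed the result approximates the distance from the seed; repeated sweeps in the four orderings propagate information along every characteristic direction.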
Directional assessment of fiber integrity in Q-ball imaging
Show abstract
Q-ball imaging (QBI) is a diffusion imaging technique that provides unique microstructural information and outperforms diffusion tensor imaging (DTI) in areas of fiber crossings. The generalized fractional anisotropy (GFA) is a widely accepted quantitative measure of fiber integrity for QBI. In this paper we demonstrate a major drawback of the GFA occurring in crossing-fiber regions: fiber integrity is heavily underestimated there. We present a new, directional anisotropy measure and compare it to the GFA using custom-designed fiber phantoms and Monte Carlo simulations. Furthermore, we present in-vivo data for the measure. At the expense of reduced CNR, the new measure allows a better quantification of fiber bundles in crossing regions and potentially yields additional information regarding non-dominant tracts within a voxel.
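For reference, the GFA discussed above is Tuch's standard definition, computed from n samples Ψ(u_i) of the orientation distribution function on the sphere:

```latex
\mathrm{GFA} \;=\; \frac{\operatorname{std}(\Psi)}{\operatorname{rms}(\Psi)}
\;=\; \sqrt{\frac{n\sum_{i=1}^{n}\bigl(\Psi(\mathbf{u}_i)-\langle\Psi\rangle\bigr)^{2}}
{(n-1)\sum_{i=1}^{n}\Psi(\mathbf{u}_i)^{2}}}
```

Because a crossing-fiber ODF is closer to uniform than a single-fiber ODF, its standard deviation shrinks, which is why GFA underestimates integrity in crossing regions.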
Resolution of crossing fibers with constrained compressed sensing using traditional diffusion tensor MRI
Show abstract
Diffusion tensor imaging (DTI) is widely used to characterize tissue micro-architecture and brain connectivity. Yet DTI
suffers serious limitations in regions of crossing fibers because traditional tensor techniques cannot represent multiple,
independent intra-voxel orientations. Compressed sensing has been proposed to resolve crossing fibers using a tensor
mixture model (e.g., Crossing Fiber Angular Resolution of Intra-voxel structure, CFARI). Although similar in spirit to
deconvolution approaches, CFARI uses sparsity to stabilize estimation with limited data rather than spatial consistency
or limited model order. Here, we extend the CFARI approach to resolve crossing fibers through a strictly positive,
parsimonious mixture model. Together with an optimized preconditioned conjugate gradient solver, estimation error and
computational burden are greatly reduced over the initial presentation. Reliable estimates of intra-voxel orientations are
demonstrated in simulation and in vivo using data representative of typical, low b-value (30 directions, 700 s/mm²)
clinical DTI protocols. These sequences are achievable in 5 minutes at 3 T, and whole-brain CFARI analysis is
tractable for routine analysis. With these improvements, CFARI provides a robust framework for identifying intra-voxel
structure with conventional DTI protocols and shows great promise in helping to resolve the crossing fiber problem in
current clinical imaging studies.
Reconstruction of a geometrically correct diffusion tensor image of a moving human fetal brain
Show abstract
Recent studies reported the development of methods for rigid registration of 2D fetal brain imaging data to
correct for unconstrained fetal and maternal motion, and allow the formation of a true 3D image of conventional
fetal brain anatomy from conventional MRI. Diffusion tensor imaging provides additional valuable insight into
the developing brain anatomy; however, the correction of motion artifacts in clinical fetal diffusion imaging is
still a challenging problem. This is due to the difficulty of matching low signal-to-noise-ratio
diffusion-weighted EPI slice data to recover between-slice motion, compounded by the presence of possible
geometric distortions in the EPI data. In addition, the problem of estimating a diffusion model (such as a
tensor) on a regular grid that takes into account the inconsistent spatial and orientation sampling of the diffusion
measurements needs to be solved in a robust way. Previous methods have used slice to volume registration within
the diffusion dataset. In this work, we describe an alternative approach that makes use of an alignment of diffusion
weighted EPI slices to a conventional structural MRI scan which provides a geometrically correct reference image.
After spatial realignment of each diffusion slice, a tensor field representing the diffusion profile is estimated by
weighted least-squares fitting. By qualitative and quantitative evaluation of the results, we confirm that the
proposed algorithm successfully corrects the motion and reconstructs the diffusion tensor field.
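The weighted least-squares tensor fit mentioned above can be sketched in a few lines (a minimal illustration assuming a single b-value and the common choice of squared-signal weights; not the authors' implementation, which must also handle irregular spatial and angular sampling):

```python
import numpy as np

def fit_tensor_wls(signals, s0, bvecs, bval):
    """Weighted least-squares fit of a diffusion tensor.
    signals: (n,) DW signals; s0: b=0 signal; bvecs: (n,3) unit gradients."""
    g = np.asarray(bvecs, float)
    # design matrix for the 6 unique tensor elements Dxx,Dyy,Dzz,Dxy,Dxz,Dyz
    B = bval * np.column_stack([g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
                                2 * g[:, 0] * g[:, 1],
                                2 * g[:, 0] * g[:, 2],
                                2 * g[:, 1] * g[:, 2]])
    y = -np.log(np.asarray(signals, float) / s0)  # linearized Stejskal-Tanner
    W = np.diag(np.asarray(signals, float) ** 2)  # signal-dependent weights
    d = np.linalg.solve(B.T @ W @ B, B.T @ W @ y)
    return np.array([[d[0], d[3], d[4]],
                     [d[3], d[1], d[5]],
                     [d[4], d[5], d[2]]])
```

Given noiseless signals from an isotropic tensor, the fit recovers the tensor exactly; with noise, the squared-signal weights down-weight the least reliable (most attenuated) measurements.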
Discriminant analysis of resting-state functional connectivity patterns on the Grassmann manifold
Show abstract
The functional networks, extracted from fMRI images using independent component analysis, have been demonstrated
to be informative for distinguishing brain states of cognitive functions and neurological diseases. In this paper, we propose a
novel algorithm for discriminant analysis of functional networks encoded by spatial independent components. The
functional networks of each individual are used as bases for a linear subspace, referred to as a functional connectivity
pattern, which facilitates a comprehensive characterization of temporal signals of fMRI data. The functional connectivity
patterns of different individuals are analyzed on the Grassmann manifold by adopting a principal angle based subspace
distance. In conjunction with a support vector machine classifier, a forward component selection technique is proposed
to select independent components for constructing the most discriminative functional connectivity pattern. The
discriminant analysis method has been applied to an fMRI based schizophrenia study with 31 schizophrenia patients and
31 healthy individuals. The experimental results demonstrate that the proposed method not only achieves a promising
classification performance for distinguishing schizophrenia patients from healthy controls, but also identifies
discriminative functional networks that are informative for schizophrenia diagnosis.
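The principal-angle subspace distance underlying the analysis can be sketched as follows (a minimal illustration; the paper's exact Grassmann metric and the forward component selection are not reproduced):

```python
import numpy as np

def grassmann_distance(A, B):
    """Principal-angle-based distance between the subspaces spanned by the
    columns of A and B (e.g. spatial independent components per subject)."""
    Qa, _ = np.linalg.qr(A)                      # orthonormal basis for A
    Qb, _ = np.linalg.qr(B)                      # orthonormal basis for B
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    theta = np.arccos(np.clip(s, -1.0, 1.0))     # principal angles
    return np.linalg.norm(theta)                 # geodesic-style distance
```

Identical subspaces give distance zero, while mutually orthogonal subspaces maximize every principal angle at π/2; a classifier can then operate on these pairwise distances.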
Segmentation II
Automatic bone segmentation and alignment from MR knee images
Show abstract
Automatic image analysis of magnetic resonance (MR) images of the knee is simplified by bringing the knee
into a reference position. While the knee is typically put into a reference position during image acquisition, this
alignment will generally not be perfect. To correct for imperfections, we propose a two-step process of bone
segmentation followed by elastic tissue deformation.
The approach makes use of a fully-automatic segmentation of femur and tibia from T1 and T2* images.
The segmentation algorithm is based on a continuous convex optimization problem, incorporating regional and
shape information. The regional terms are included from a probabilistic viewpoint, which readily allows the
inclusion of shape information. Segmentation of the outer boundary of the cortical bone is encouraged by adding
simple appearance-based information to the optimization problem. The resulting segmentation without the shape alignment step is globally optimal.
Standard registration is problematic for knee alignment due to the distinct physical properties of the tissues constituting the knee (bone, muscle, etc.). We therefore develop an alternative alignment approach based on a simple elastic deformation model combined with strict enforcement of similarity transforms for femur and tibia
based on the obtained segmentations.
Subvoxel segmentation and representation of brain cortex using fuzzy clustering and gradient vector diffusion
Ming-Ching Chang,
Xiaodong Tao
Show abstract
Segmentation and representation of human brain cortex from Magnetic Resonance (MR) images is an important
step for visualization and analysis in many neuroimaging applications. In this paper, we propose an automatic
and fast algorithm to segment the brain cortex and to represent it as a geometric surface on which analysis
can be carried out. The algorithm works on T1 weighted MR brain images with extracranial tissue removed.
A fuzzy clustering algorithm with a parametric bias field model is applied to assign membership values of
gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) to each voxel. The cortical boundaries,
namely the WM-GM and GM-CSF boundary surfaces, are extracted as iso-surfaces of functions derived from
these membership functions. The central surface (CS), which traces the peak values (or ridges) of the GM
membership function, is then extracted using gradient vector diffusion. Our main contribution is to provide a
generic, accurate, fast, yet fully-automatic approach to (i) produce a soft segmentation of the MR brain image
with intensity field correction, (ii) extract both the boundary and the center of the cortex in a surface form,
where the topology and geometry can be explicitly examined, and (iii) use the extracted surfaces to model the
curved, folded cortical volume, which allows an intuitive measurement of thickness. As a demonstration,
we compute cortical thickness from the surfaces and compare the results with what has been reported in the
literature. The entire process from raw MR image to cortical surface reconstruction takes on average between five to ten minutes.
An expectation-maximization approach to joint curve evolution for medical image segmentation
Show abstract
This paper proposes a new Expectation-Maximization curve evolution algorithm for medical image segmentation.
Traditional level set algorithms perform poorly when image information is incomplete, missing or some objects are
corrupted. In such cases, statistical model-based segmentation methods are widely used since they allow object shape
variations subject to shape prior constraints to overcome the incomplete or noisy information. Although such
methods are robust in dealing with noisy and low-contrast images, the shape parameters are difficult to estimate
through the Maximum A Posteriori (MAP) framework using incomplete image features. In this paper, we present a statistical
shape-based joint curve evolution algorithm for image segmentation based on the assumption that using hidden features
of the image as missing data can simplify the estimation problem and help improve the matching performance. In our
method, these hidden features are designed to be the local voxel labeling data, determined from the intensity
distribution of the image and prior anatomical knowledge. Using an Expectation-Maximization formulation, both the
hidden features and the object shapes can be extracted. In addition, this EM-based algorithm is applied to the joint
parametric and non-parametric shape model for more accurate segmentation. Comparative results on segmenting putamen
and caudate shapes in MR brain images confirm both robustness and accuracy of the proposed curve evolution algorithm.
Simultaneous truth and performance level estimation with incomplete, over-complete, and ancillary data
Show abstract
Image labeling and parcellation are critical tasks for the assessment of volumetric and morphometric features in medical
imaging data. The process of image labeling is inherently error prone, as images are corrupted by noise and artifact. Even
expert interpretations are subject to the subjectivity and precision of the individual raters. Hence, all labels must be
considered imperfect, with some degree of inherent variability. One may seek multiple independent assessments to both
reduce this variability as well as quantify the degree of uncertainty. Existing techniques exploit maximum a posteriori
statistics to combine data from multiple raters. A current limitation with these approaches is that they require each rater
to generate a complete dataset, which is often impossible given both human foibles and the typical turnover rate of raters
in a research or clinical environment. Herein, we propose a robust set of extensions that allow for missing data, account
for repeated label sets, and utilize training/catch trial data. With these extensions, numerous raters can label small,
overlapping portions of a large dataset, and rater heterogeneity can be robustly controlled while simultaneously
estimating a single, reliable label set and characterizing uncertainty. The proposed approach enables parallel processing
of labeling tasks and reduces the otherwise detrimental impact of rater unavailability.
Fast globally optimal single surface segmentation using regional properties
Show abstract
Efficient segmentation of globally optimal surfaces in volumetric images is a central problem in many medical image
analysis applications. Intra-class variance has been successfully utilized, for instance, in the Chan-Vese model especially
for images without prominent edges. In this paper, we study the optimization problem of detecting a region (volume)
bounded by a smooth terrain-like surface, whose intra-class variance is minimized. A novel polynomial time algorithm is
developed. Our algorithm is based on the shape probing technique in computational geometry and computes a sequence
of O(n) maximum flows in the derived graphs, where n is the size of the input image. Our further investigation shows
that those O(n) graphs form a monotone parametric flow network, which enables solving the optimal region detection
problem with the complexity of computing a single maximum flow. The method has been validated on computer-synthetic
volumetric images. Its applicability to clinical data sets was demonstrated on 20 3-D airway wall CT images from 6
subjects. The achieved results were highly accurate. The mean unsigned surface positioning error of the outer walls of the
tubes is 0.258 ± 0.297 mm, given a voxel size of 0.39 × 0.39 × 0.6 mm³.
Lung fissure detection in CT images using global minimal paths
Show abstract
Pulmonary fissures separate the human lungs into five distinct regions called lobes. Detection of the fissures is essential
for localization of the lobar distribution of lung diseases, surgical planning, and follow-up. Treatment planning
also requires calculation of the lobe volume, and this volume estimation mandates accurate segmentation of the
fissures. The presence of other structures (such as vessels) near the fissure, along with its high variability in
position and shape, makes lobe segmentation a challenging task. Incomplete or false fissures and the occurrence
of disease add further complications to fissure detection. In this paper, we propose a semi-automated
fissure segmentation algorithm using a minimal path approach on CT images. An energy function is defined such
that the path integral over the fissure is the global minimum. Based on a few user-defined points on a single slice
of the CT image, the proposed algorithm minimizes a 2D energy function on the sagittal slice computed using (a)
intensity, (b) distance to the vasculature, (c) curvature in 2D, and (d) continuity in 3D. The fissure is the
minimum-energy path between a representative point on the fissure and the nearest lung boundary point in this energy domain.
The algorithm has been tested on 10 CT volume datasets acquired from GE scanners at multiple clinical sites.
The datasets span through different pathological conditions and varying imaging artifacts.
Posters: Atlases
Segmentation of lymph node regions in head-and-neck CT images using a combination of registration and active shape model
Show abstract
Segmenting the lymph node regions in head and neck CT images has been a challenging topic in the area of medical image
segmentation. The method proposed herein implements an atlas-based technique constrained by an active shape model
(ASM) to segment the level II, III and IV lymph nodes as one structure. A leave-one-out evaluation study performed on 15
data sets shows that the results obtained with this technique are better than those obtained with a pure atlas-based
segmentation method, in particular in regions of poor contrast.
A groupwise mutual information metric for cost efficient selection of a suitable reference in cardiac computational atlas construction
Show abstract
Computational atlases based on nonrigid registration have found much use in the medical imaging community.
To avoid bias toward any single element of the training set, there are two main approaches: using a (random) subject
as an initial reference and subsequently removing the bias, or a true groupwise registration with a constraint
of zero average transformation for direct computation of the atlas. The major drawbacks are the possible selection
of an outlier in the first case, and initialization with an invalid instance in the second. In both cases there is great
potential for affecting registration performance, and producing a final average image in which the structure of
interest deviates from the central anatomy of the population under study.
We propose an inexpensive means of reference selection based on a groupwise correspondence measure, which
avoids the selection of an outlier and is independent of the atlas construction approach that follows. Thus,
it improves tractability of reference selection and robustness of automated atlas construction. We illustrate the
method using a set of 20 cardiac multislice computed tomography volumes.
Effect of inter-subject variation on the accuracy of atlas-based segmentation applied to human brain structures
Show abstract
Large variations occur in brain anatomical structures in human populations, presenting a critical challenge to the brain
mapping process. This study investigates the major impact of these variations on the performance of atlas-based
segmentation. It is based on two publicly available datasets, from each of which 17 T1-weighted brain atlases were
extracted. Each subject was registered to every other subject using Morphons, a non-rigid registration algorithm. The
automatic segmentations, obtained by warping the segmentation of each template, were compared with the expert
segmentations using the Dice index, and the differences were statistically analyzed using Bonferroni multiple comparisons at
significance level 0.05. The results showed that an optimum atlas for accurate segmentation of all structures cannot be
found, and that the group of preferred templates, defined as being significantly superior to at least two other templates
regarding the segmentation accuracy, varies significantly from structure to structure. Moreover, compared to other
templates, a template giving the best accuracy in segmentation of some structures can provide highly inferior
segmentation accuracy for other structures. It is concluded that there is no template optimum for automatic segmentation
of all anatomical structures in the brain because of high inter-subject variation. Using a single fixed template for brain
segmentation does not lead to good overall segmentation accuracy. This demonstrates the need for multi-atlas
solutions in the context of atlas-based segmentation of the human brain.
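The Dice index used for these comparisons is simple to state (a minimal sketch over binary label masks):

```python
import numpy as np

def dice_index(seg_a, seg_b):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|).
    1.0 means perfect agreement, 0.0 means no overlap."""
    a = np.asarray(seg_a, bool)
    b = np.asarray(seg_b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

Because the denominator is the sum of both region sizes, Dice penalizes both over- and under-segmentation symmetrically, which is why it is a common accuracy summary for atlas-based segmentation.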
An analysis of methods for the selection of atlases for use in medical image segmentation
Show abstract
The use of atlases has been shown to be a robust method for segmentation of medical images. In this paper we explore
different methods of selection of atlases for the segmentation of the quadriceps muscles in magnetic resonance (MR)
images, although the results are pertinent for a wide range of applications. The experiments were performed using 103
images from the Osteoarthritis Initiative (OAI). The images were randomly split into a training set consisting of 50
images and a testing set of 53 images. Three different atlas selection methods were systematically compared. First, a set
of readers was assigned the task of selecting atlases from a training population of images, which were selected to be
representative subgroups of the total population. Second, the same readers were instructed to select atlases from a subset
of the training data which was stratified based on population modes. Finally, every image in the training set was
employed as an atlas, with no input from the readers, and the atlas which had the best initial registration, judged by an
appropriate registration metric, was used in the final segmentation procedure. The segmentation results were quantified
using the Zijdenbos similarity index (ZSI). The results show that over all readers the agreement of the segmentation
algorithm decreased from 0.76 to 0.74 when using population modes to assist in atlas selection. The use of every image
in the training set as an atlas outperformed both manual atlas selection methods, achieving a ZSI of 0.82.
Combining morphometric evidence from multiple registration methods using Dempster-Shafer theory
Show abstract
In tensor-based morphometry (TBM), group-wise differences in brain structure are measured using high
degree-of-freedom registration and some form of statistical test. However, it is known that TBM results are sensitive to
both the registration method and the statistical test used. Given the lack of an objective model of group variation,
it is difficult to determine a best registration method for TBM. The use of statistical tests is also problematic given
the corrections required for multiple testing and the notorious difficulty of selecting and interpreting significance values.
This paper presents an approach to address both of these issues by combining multiple registration methods using
Dempster-Shafer Evidence theory to produce belief maps of categorical changes between groups. This approach
is applied to the comparison of brain morphometry in aging, a typical application of TBM, using the determinant
of the Jacobian as a measure of volume change. We show that the Dempster-Shafer combination produces a
unique and easy-to-interpret belief map of regional changes between and within groups without the complications
associated with hypothesis testing.
Posters: Classification
T1- and T2-weighted spatially constrained fuzzy c-means clustering for brain MRI segmentation
Show abstract
The segmentation of brain tissue in magnetic resonance imaging (MRI) plays an important role in clinical analysis
and is useful for many applications, including the study of brain diseases, surgical planning, and computer-assisted
diagnosis. In general, accurate tissue segmentation is a difficult task, not only because of the complicated
structure of the brain and the anatomical variability between subjects, but also because of the presence of noise
and low tissue contrasts in the MRI images, especially in neonatal brain images.
Fuzzy clustering techniques have been widely used in automated image segmentation. However, since the
standard fuzzy c-means (FCM) clustering algorithm does not consider any spatial information, it is highly sensitive
to noise. In this paper, we present an extension of the FCM algorithm to overcome this drawback, by
combining information from both T1-weighted (T1-w) and T2-weighted (T2-w) MRI scans and by incorporating
spatial information. This new spatially constrained FCM (SCFCM) clustering algorithm preserves the homogeneity
of the regions better than existing FCM techniques, which often have difficulties when tissues have
overlapping intensity profiles.
The performance of the proposed algorithm is tested on simulated and real adult MR brain images with
different noise levels, as well as on neonatal MR brain images with a gestational age of 39 weeks. Experimental
quantitative and qualitative segmentation results show that the proposed method is effective and more robust
to noise than other FCM-based methods. SCFCM also appears to be a very promising tool for complex and noisy
image segmentation of the neonatal brain.
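A toy 1-D version of spatially constrained fuzzy clustering may clarify the idea (illustrative only: neighbor-averaging of memberships stands in for the paper's SCFCM penalty, the bias-field model is omitted, and only a single channel is used rather than combined T1-w/T2-w data):

```python
import numpy as np

def fcm_1d(x, k=3, m=2.0, n_iter=30, spatial_w=0.3):
    """Toy spatially constrained fuzzy c-means on a 1-D intensity signal.
    Each iteration blends memberships with their neighborhood average
    (weight spatial_w), which suppresses isolated noisy assignments."""
    x = np.asarray(x, float)
    centers = np.linspace(x.min(), x.max(), k)
    for _ in range(n_iter):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))               # standard FCM memberships
        u /= u.sum(axis=1, keepdims=True)
        # spatial constraint: mix in each voxel's neighbor-average membership
        pad = np.pad(u, ((1, 1), (0, 0)), mode='edge')
        u = (1 - spatial_w) * u + spatial_w * (pad[:-2] + pad[2:]) / 2.0
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return u, np.sort(centers)
```

On a piecewise-constant signal the cluster centers converge to the tissue intensities while the spatial term keeps memberships smooth within each region.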
3D tensor-based blind multispectral image decomposition for tumor demarcation
Show abstract
Blind decomposition of a multi-spectral fluorescent image for tumor demarcation is formulated by exploiting the
tensorial structure of the image. The first contribution of the paper is the identification of the matrix of spectral
responses and the 3D tensor of spatial distributions of the materials present in the image from Tucker3 or
PARAFAC models of the 3D image tensor. The second contribution is the clustering-based estimation of the
number of materials present in the image, as well as the matrix of their spectral profiles. The 3D tensor of the
spatial distributions of the materials is recovered through 3-mode multiplication of the multi-spectral image
tensor and the inverse of the matrix of spectral profiles. The tensor representation of the multi-spectral image
preserves its local spatial structure, which is lost to the vectorization process when matrix factorization-based
decomposition methods (such as non-negative matrix factorization and independent component analysis) are
used. The superior performance of tensor-based image decomposition over matrix factorization-based
decompositions is demonstrated on an experimental red-green-blue (RGB) image with known ground truth, as
well as on RGB fluorescent images of a skin tumor (basal cell carcinoma).
Knowledge-based quantification of pericardial fat in non-contrast CT data
Show abstract
Recent studies show that pericardial fat is associated with vascular calcification and cardiovascular risk. The
fat is imaged with Computed Tomography (CT) as part of coronary calcium scoring but it is not included in
routine clinical analysis due to the lack of automatic tools for fat quantification. Previous attempts to create
such an automated tool have the limitations of either assuming a preset threshold or a Gaussian distribution
for fat. In order to overcome these limitations, we present a novel approach using a classification-based method
to discriminate fat from other tissues. The classifier is constructed from three binary SVM classifiers trained
separately for multiple tissues (fat, muscle/blood and calcium), and a specific code is assigned to each tissue
type based on the number of classifiers. The decisions of these binary classifiers are combined and compared
with previously determined codes using a minimum Hamming decoding distance to identify fat. We also present
an improved method for detection of a compact region-of-interest around the heart to reduce the number of
false positives due to neighboring organs. The proposed method, UH-PFAT, attained a maximum overlap of 87%,
and an average overlap of 76% with expert annotations when tested on unseen data from 36 subjects. Our
method can be improved by identifying additional discriminative features for fat and muscle/blood separation,
or by using more advanced classification approaches such as cascaded classifiers to reduce the number of false
detections.
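The minimum-Hamming-distance decoding step can be sketched as follows (the code-word assignment below is hypothetical, not the paper's; it only illustrates combining binary classifier outputs in an error-correcting-output-codes style):

```python
import numpy as np

# Hypothetical tissue code words, one bit per binary classifier.
CODES = {"fat": (1, 0, 0), "muscle_blood": (0, 1, 0), "calcium": (0, 0, 1)}

def decode_tissue(binary_outputs):
    """Combine binary classifier decisions by choosing the tissue whose
    code word has minimum Hamming distance to the observed output vector."""
    out = np.asarray(binary_outputs)
    return min(CODES, key=lambda t: int(np.sum(out != np.asarray(CODES[t]))))
```

Even if one of the binary classifiers flips a bit, the nearest code word can still identify the correct tissue, which is the appeal of decoding over hard voting.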
Automated detection of grayscale bar and distance scale in ultrasound images
Show abstract
Computer-assisted diagnosis algorithms are evaluated by testing them against wide-ranging sets of images arising
from real clinical conditions. Detection of the distance scale and the reference grayscale present in most ultrasound
images can be used to automate the calibration of physical per-pixel distances and grayscale normalization
over heterogeneously acquired ultrasound datasets. This work presents novel methods for automated detection
of (i) the distance scale and the spacing between its gradations, and (ii) the reference grayscale. The distance scale
was detected by searching for regular peaks in the 1-D autocorrelation of image pixel columns. The grayscale
bar was detected by searching for contiguous sets of columns with long sequences of monotonically changing
intensity. In tests on over 1000 images the distance scale detection rate was 94.8% and the correct gradation
spacing was determined 91.2% of the time. The reference grayscale detection rate was 100%. A confidence
measure was also introduced to characterize the certainty of the distance scale detection. An optimal confidence
threshold for flagging low-confidence results that minimizes human intervention without risk of incorrect results
remaining unflagged was established through ROC curve analysis.
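The autocorrelation-based spacing estimate can be illustrated on a toy 1-D profile (a sketch; the paper searches the autocorrelation of image pixel columns and adds a confidence measure, which is omitted here):

```python
import numpy as np

def gradation_spacing(column, min_lag=2):
    """Estimate the spacing of regularly repeating tick marks in a 1-D
    intensity profile via the lag of the strongest autocorrelation peak."""
    x = np.asarray(column, float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode='full')[x.size - 1:]   # lags 0 .. n-1
    ac = ac / ac[0]                                     # normalize by lag 0
    # strongest peak at a positive lag corresponds to the tick spacing
    return int(np.argmax(ac[min_lag:x.size // 2]) + min_lag)
```

For a profile with a mark every 10 pixels, the autocorrelation peaks at lag 10 (and its multiples), so the smallest dominant lag recovers the gradation spacing.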
CT slice localization via instance-based regression
Show abstract
Automatically determining the relative position of a single CT slice within a full body scan provides several useful
functionalities. For example, it is possible to validate DICOM meta-data information. Furthermore, knowing
the relative position in a scan allows the efficient retrieval of similar slices from the same body region in other
volume scans. Finally, the relative position is often important information for a non-expert user having only
access to a single CT slice of a scan. In this paper, we determine the relative position of single CT slices via
instance-based regression without using any meta-data. Each slice of a volume set is represented by several
types of feature information computed from a sequence of image conversions and edge detection routines
on rectangular subregions of the slices. Our new method is independent of the settings of the CT scanner
and provides an average localization error of less than 4.5 cm using leave-one-out validation on a dataset of 34
annotated volume scans. Thus, we demonstrate that instance-based regression is a suitable tool for mapping
single slices to a standardized coordinate system and that our algorithm is competitive to other volume-based
approaches with respect to runtime and prediction quality, even though only a fraction of the input information
is required in comparison to other approaches.
A robust model order estimation and segmentation technique for classification of biopsies in breast cancer
Show abstract
The difficult problem of identifying dominant structures in unknown data sets has been elegantly addressed
recently by a non-parametric information theoretic approach, the "Jump" method. The method employs an
appropriate but fixed power transformation on the distortion-rate, D(R), curve estimated by the popular K-means
algorithm. Although this approach yields good results asymptotically for higher dimensional spaces, in many
practical cases involving lower dimensional spaces, a transformation function with a fixed power may not find the
correct model order. The work presented here develops an objective function to derive a more suitable
transformation function that minimizes classification error in low dimensional data sets. In addition, a number of
carefully chosen K-means seeding methods based upon proper heuristic choices have been used to enhance the
detection sensitivity and to allow a more accurate estimation. The proposed method has been evaluated for a large
variety of datasets and compared with the original Jump method and other well-known order estimation methods
such as Minimum Description Length (MDL), Akaike Information Criteria (AIC), and Consistent Akaike
Information Criteria (CAIC), demonstrating superior overall performance. Comparative results for the Wisconsin
Diagnostic Breast Cancer Dataset have been included. This modified information theoretic approach to model order
estimation is expected to improve and validate diagnostic classification and detection of pre-cancerous lesions.
Other applications, such as finding a plausible number of segments in image segmentation scenarios, are also possible.
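The fixed-power distortion transformation at the core of the Jump method can be sketched as follows; the function name, the K-means details, and the default exponent of d/2 are our illustrative choices, not the authors' implementation:

```python
import numpy as np

def jump_method(X, k_max, power=None, n_init=5, seed=0):
    """Estimate the number of clusters via the distortion 'jump'.

    X: (n, d) data. power: transformation exponent; defaults to d/2,
    the fixed choice that the abstract argues can fail in low dimensions."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    if power is None:
        power = d / 2.0
    distortions = []
    for k in range(1, k_max + 1):
        best = np.inf
        for _ in range(n_init):  # several seedings to stabilise K-means
            centers = X[rng.choice(n, size=k, replace=False)]
            for _ in range(50):  # Lloyd iterations
                labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
                new = np.array([X[labels == j].mean(0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
                if np.allclose(new, centers):
                    break
                centers = new
            dist = ((X - centers[labels]) ** 2).sum() / (n * d)
            best = min(best, dist)
        distortions.append(best)
    transformed = np.asarray(distortions) ** (-power)
    jumps = np.diff(np.concatenate([[0.0], transformed]))  # the 'jump' at each k
    return int(np.argmax(jumps)) + 1
```

Replacing the fixed `power` with an exponent tuned to minimize classification error roughly corresponds to the modification the abstract proposes.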
Classification of cognitive states using functional MRI data
Ye Yang,
Ranadip Pal,
Michael O'Boyle
Show abstract
A fundamental goal of the analysis of fMRI data is to locate areas of brain activation that can differentiate various cognitive
tasks. Traditionally, researchers have approached fMRI analysis through characterizing the relationship between cognitive
variables and individual brain voxels. In recent years, multivariate approaches (which analyze more than one voxel at a time) to
fMRI data analysis have gained importance. However, in the majority of multivariate approaches, the voxels used for classification
are selected based on prior biological knowledge or on the discriminating power of individual voxels. We used a sequential
floating forward search (SFFS) feature selection approach for selecting the voxels and applied it to distinguish the cognitive
states of whether a subject is doing a reasoning or a counting task. We obtained superior classifier performance by using
the sequential approach as compared to selecting the features with best individual classifier performance. We analyzed the
problem of over-fitting in this extremely high dimensional feature space with limited training samples. For estimating the
accuracy of the classifier, we employed various estimation methods and discussed their importance in this small sample
scenario. We also modified the feature selection algorithm by adding spatial information to incorporate the biological constraint that spatially nearby voxels tend to represent similar information.
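The SFFS procedure referred to above can be sketched as follows; the criterion function here is a placeholder for, e.g., cross-validated classifier accuracy on the selected voxels, and all names are our own:

```python
def sffs(n_features, score, k_target):
    """Sequential Floating Forward Search (sketch).

    score(subset) returns a criterion value (higher is better), e.g.
    cross-validated classifier accuracy on the selected voxels."""
    selected = []
    while len(selected) < k_target:
        # forward step: add the single best remaining feature
        remaining = [f for f in range(n_features) if f not in selected]
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        # floating step: drop features while removal improves the criterion
        improved = True
        while improved and len(selected) > 2:
            improved = False
            current = score(selected)
            for f in list(selected):
                trial = [g for g in selected if g != f]
                if score(trial) > current:
                    selected = trial
                    improved = True
                    break
    return selected
```

The floating (backward) step is what distinguishes SFFS from plain greedy forward selection of individually best features.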
Surface smoothness: cartilage biomarkers for knee OA beyond the radiologist
Show abstract
Fully automatic imaging biomarkers may allow quantification of patho-physiological processes that a radiologist would
not be able to assess reliably. This can introduce new insight but is problematic to validate due to the lack of meaningful
ground-truth expert measurements. Rather than quantification accuracy, such novel markers must therefore be validated
against clinically meaningful end-goals such as the ability to allow correct diagnosis. We present a method for automatic
cartilage surface smoothness quantification in the knee joint. The quantification is based on a curvature flow method
used on tibial and femoral cartilage compartments resulting from an automatic segmentation scheme. These smoothness
estimates are validated for their ability to diagnose osteoarthritis and compared to smoothness estimates based on manual
expert segmentations and to conventional cartilage volume quantification. We demonstrate that the fully automatic
markers eliminate the time required for radiologist annotations, and in addition provide a diagnostic marker superior to
the evaluated semi-manual markers.
Posters: Diffusion Tensor Image Analysis
Changes of MR and DTI appearance in early human brain development
Show abstract
Understanding myelination in early brain development is of clinical importance, as many neurological disorders
have their origin in early cerebral organization and maturation. The goal of this work is to study a large neonate
database acquired with standard MR imagery to illuminate effects of early development in MRI.
90 neonates were selected from a study of healthy brain development. Subjects were imaged via MRI postnatally.
MR acquisition included high-resolution structural and diffusion tensor images. Unbiased atlases for
structural and DTI data were generated and co-registered into a single coordinate frame for voxel-wise comparison
of MR and DTI appearance across time. All original datasets were mapped into this frame and structural
image data was additionally intensity normalized. In addition, myelinated white matter probabilistic segmentations
from our neonate tissue segmentation were mapped into the same space to study how our segmentation
results were affected by the changing intensity characteristics in early development.
Linear regression maps and p-value maps were computed and visualized. The resulting visualization of voxel-wise
corresponding maps of all MR and DTI properties captures early development information in MR imagery.
Surprisingly, we encountered regions of seemingly decreased myelinated WM probability over time even though
we expected a confident increase for all of the brain. The intensity changes in the MR images in those regions help
explain this counterintuitive result. The regression results indicate that this is an effect of intensity changes caused
not solely by myelination but also, likely, by brain dehydration processes in early postnatal development.
Evaluation of DTI property maps as basis of DTI atlas building
Show abstract
Compared to region-of-interest-based DTI analysis, voxel-based analysis gives a higher degree of localization and avoids
manual delineation, with its attendant intra- and inter-rater variability. One of the major challenges in
voxel-wise DTI analysis is to get high quality voxel-level correspondence. For that purpose, current DTI analysis tools
are building on nonlinear registration algorithms that deform individual datasets into a template image that is either
precomputed or computed as part of the analysis. A variety of matching criteria and deformation schemes have been
proposed, but often comparative evaluation is missing. In our opinion, the use of consistent and unbiased measures to
evaluate current DTI procedures is of great importance and our work presents two possible measures. Specifically, we
propose the evaluation criteria generalization and specificity, originally introduced by the shape modeling community, to
evaluate and compare different DTI nonlinear warping results. These measures are indirect in nature and take a
population-wide view. Both measures incorporate information about the variability of the registration results in the template
space via a voxel-wise PCA model. Thus far, we have used these measures to evaluate our own DTI analysis procedure
employing fluid-based registration on scalar DTI maps. Generalization and specificity from tensor images in the
template space were computed for 8 scalar property maps. We found that for our procedure an intensity-normalized FA
feature outperformed the other scalar measurements. Also, using the tensor images rather than the FA maps as a
comparison frame seemed to produce more robust results.
Assessing fiber tracking accuracy via diffusion tensor software models
Show abstract
In the last few years, clinicians have started using fiber tracking algorithms for pre- and intraoperative
neurosurgical planning. In the absence of a ground truth, it is often difficult to assess the validity and
precision of these algorithms. To this end, we develop a realistic DTI software model in which multiple
fiber bundles and their geometrical configuration may be specified, including scenarios in which
fiber bundles cross or kiss, which are common bottlenecks of fiber tracking algorithms. Partial
voluming, that is the contributions of multiple tissues to a voxel, is taken into account. The model
gives us the possibility to compute the diffusion-weighted signal attenuation given certain tissue and
scanner parameters. On the tissue side we can model the diffusion coefficients, the principal diffusion
direction and the width of the fiber bundles. On the scanner side, we can model the diffusion time, the
strength and direction of the applied diffusion gradient and the width of the diffusion pulse. We also
include the possibility to add noise and various artifacts such as aliasing and N/2 ghosting to the model.
Having generated the model of a fiber bundle, we determine the distance between the tracked fibers and
the original model, thus being able to make assertions on the accuracy of the employed fiber tracking
algorithm. Moreover, we can use this information to give an indication about an appropriate width of a
safety margin around the tracked fiber bundle.
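The diffusion-weighted attenuation for given tissue and scanner parameters can be computed with the standard Stejskal-Tanner relation; this minimal sketch assumes a single-tensor voxel and omits the partial-voluming, noise, and artifact terms the full model includes:

```python
import numpy as np

GAMMA = 2.675e8  # proton gyromagnetic ratio (rad s^-1 T^-1)

def dwi_attenuation(D, g, G, delta, Delta, s0=1.0):
    """Stejskal-Tanner signal for a single diffusion tensor.

    D: 3x3 diffusion tensor (m^2/s); g: gradient direction;
    G: gradient strength (T/m); delta: pulse width (s); Delta: diffusion time (s)."""
    g = np.asarray(g, float)
    g = g / np.linalg.norm(g)
    b = (GAMMA * G * delta) ** 2 * (Delta - delta / 3.0)  # b-value (s/m^2)
    return s0 * np.exp(-b * g @ D @ g)
```

Stronger gradients or longer diffusion times increase the b-value and hence the attenuation, and an anisotropic tensor attenuates most along its principal diffusion direction.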
Automatic clustering of white matter fibers based on symbolic sequence analysis
Show abstract
Fiber clustering is a very important step towards tract-based, quantitative analysis of white matter via diffusion tensor
imaging (DTI). This work proposes a new computational framework for white matter fiber clustering based on symbolic
sequence analysis method. We first perform brain tissue segmentation on the DTI image using a multi-channel fusion
method and parcellate the whole brain into anatomically labeled regions via a hybrid volumetric and surface warping
algorithm. Then, we perform standard fiber tractography on the DTI image and encode each tracked fiber by a sequence
of labeled brain regions. Afterwards, the similarity between any pair of anatomically encoded fibers is defined as the
similarity of the symbolic sequences, a well-studied problem in the bioinformatics domain, e.g. in gene
and protein sequence comparison. Finally, the normalized graph cut algorithm is applied to cluster the fibers
into bundles based on the above defined similarities between any pair of fibers. Our experiments show promising results
of the proposed fiber clustering framework.
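One simple, hypothetical stand-in for the sequence-similarity step (the abstract does not specify which bioinformatics measure is used) is a normalized longest common subsequence over the region-label sequences:

```python
def lcs_similarity(a, b):
    """Similarity of two region-label sequences via normalized longest
    common subsequence (dynamic programming)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            # extend the common subsequence on a match, else carry the best so far
            dp[i + 1][j + 1] = dp[i][j] + 1 if a[i] == b[j] else max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n] / max(m, n)
```

The resulting pairwise similarities would populate the affinity matrix handed to the normalized graph cut step.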
Improving RESTORE for robust diffusion tensor estimation: a simulation study
Show abstract
Diffusion tensor magnetic resonance imaging (DT-MRI) is increasingly used in clinical research and
applications for its ability to depict white matter tracts and for its sensitivity to microstructural and
architectural features of brain tissue. However, artifacts are common in clinical DT-MRI acquisitions.
Signal perturbations produced by such artifacts can be severe and neglecting to account for their
contribution can result in erroneous diffusion tensor values. The Robust Estimation of Tensors by Outlier
Rejection (RESTORE) has been demonstrated to be an effective method for improving tensor estimation
on a voxel-by-voxel basis in the presence of artifactual data points in diffusion weighted images. Despite
the very good performance of the RESTORE algorithm, there are some limitations and opportunities for
improvement. Instabilities in tensor estimation using RESTORE have been observed in clinical human
brain data. These instabilities can arise from intrinsic high-frequency spin-inflow effects in non-DWIs
or from excluding too many data points from the fitting. This paper proposes several practical constraints
to the original RESTORE method. Results from Monte Carlo simulation indicate that the improved RESTORE method reduces the instabilities in tensor estimation observed from the original RESTORE method.
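The outlier-rejection idea behind RESTORE, with a floor on the number of retained points as one possible practical constraint of the kind the abstract argues for, can be sketched as a generic iteratively refitted least squares; this is our illustration, not the published algorithm:

```python
import numpy as np

def robust_fit(A, y, sigma, min_keep=None, max_iter=10):
    """Outlier-rejecting least squares in the spirit of RESTORE (sketch).

    A: design matrix (e.g. b-matrix rows), y: log-signals, sigma: expected
    noise level. min_keep guards against discarding too many points."""
    n = len(y)
    if min_keep is None:
        min_keep = A.shape[1] + 1  # never drop below an over-determined fit
    keep = np.ones(n, bool)
    x = None
    for _ in range(max_iter):
        x, *_ = np.linalg.lstsq(A[keep], y[keep], rcond=None)
        resid = np.abs(y - A @ x)
        new_keep = resid < 3.0 * sigma  # flag gross outliers
        if new_keep.sum() < min_keep:   # refuse to exclude too many points
            break
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return x, keep
```

In the real method the model is the diffusion tensor signal equation rather than a line, but the exclude-and-refit loop is the same shape.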
White matter degeneration in schizophrenia: a comparative diffusion tensor analysis
Show abstract
Schizophrenia is a serious and disabling mental disorder. Diffusion tensor imaging (DTI) studies performed on
schizophrenia have demonstrated white matter degeneration either due to loss of myelination or deterioration of fiber
tracts although the areas where the changes occur are variable across studies. Most of the population based studies
analyze the changes in schizophrenia using scalar indices computed from the diffusion tensor such as fractional
anisotropy (FA) and relative anisotropy (RA). The scalar measures may not capture the complete information from the
diffusion tensor. In this paper we have applied the RADTI method on a group of 9 controls and 9 patients with
schizophrenia. The RADTI method converts the tensors to log-Euclidean space where a linear regression model is
applied and hypothesis testing is performed between the control and patient groups. Results show that there is a
significant difference in the anisotropy between patients and controls especially in the parts of forceps minor, superior
corona radiata, anterior limb of the internal capsule and genu of the corpus callosum. To assess whether the tensor
analysis better captures the changes in anisotropy, we compared the results with voxel-wise FA analysis as well as
voxel-wise geodesic anisotropy (GA) analysis.
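The log-Euclidean mapping that makes linear regression and hypothesis testing on tensors well defined can be sketched via eigendecomposition (function names are ours):

```python
import numpy as np

def tensor_log(T):
    """Matrix logarithm of an SPD diffusion tensor via eigendecomposition."""
    w, V = np.linalg.eigh(T)
    return V @ np.diag(np.log(w)) @ V.T

def tensor_exp(L):
    """Matrix exponential, mapping log-space results back to SPD tensors."""
    w, V = np.linalg.eigh(L)
    return V @ np.diag(np.exp(w)) @ V.T

def log_euclidean_mean(tensors):
    """Log-Euclidean mean: ordinary averaging in log space."""
    return tensor_exp(np.mean([tensor_log(T) for T in tensors], axis=0))
```

Linear models fitted to `tensor_log` images stay within the SPD manifold once mapped back with `tensor_exp`, which is the property the RADTI regression relies on.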
Qualitative and quantitative analysis of probabilistic and deterministic fiber tracking
Show abstract
Fiber tracking (FT) and quantification algorithms are approximations of reality due to limited spatial resolution,
model assumptions, user-defined parameter settings, and physical imaging artifacts resulting from diffusion
sequences. Until now, correctness, plausibility, and reliability of both FT and quantification techniques have
mainly been verified using histologic knowledge and software or hardware phantoms. Probabilistic FT approaches
aim at visualizing the uncertainty present in the data by incorporating models of the acquisition process and
noise. The uncertainty is assessed by tracking many possible paths originating from a single seed point, thereby
taking the tensor uncertainty into account. Based on the tracked paths, maps of connectivity probabilities can be
produced, which may be used to delineate risk structures for presurgical planning. In this paper, we explore the
advantages and disadvantages of probabilistic approaches compared to deterministic algorithms and give both
qualitative and quantitative comparisons based on clinical data. We focus on two important clinical applications,
namely, on the reconstruction of fiber bundles within the proximity of tumors and on the quantitative analysis
of diffusion parameters along fiber bundles. Our results show that probabilistic FT is better suited to
reconstruction at the borders of anatomical structures and is significantly more sensitive than the
deterministic approach for quantification purposes. Furthermore, we demonstrate that an alternative tracking
approach, called variational noise tracking, is qualitatively comparable with a standard probabilistic method,
but is computationally less expensive, thus, enhancing its appeal for clinical applications.
Posters: Motion Analysis
3D motion analysis of keratin filaments in living cells
Gerlind Herberich,
Reinhard Windoffer,
Rudolf Leube,
et al.
Show abstract
We present a novel and efficient approach for 3D motion estimation of keratin intermediate filaments in vitro.
Keratin filaments are elastic cables forming a complex scaffolding within epithelial cells. To understand the
mechanisms of filament formation and network organisation under physiological and pathological conditions,
quantitative measurements of dynamic network alterations are essential. Therefore we acquired time-lapse series
of 3D images using a confocal laser scanning microscope. Based on these image series, we show that a dense vector
field can be computed such that the displacements from one frame to the next can be determined. Our method
is based on a two-step registration process: First, a rigid pre-registration is applied in order to compensate for
possible global cell movement. This step enables the subsequent nonrigid registration to capture only the sought
local deformations of the filaments. As the transformation model of the deformable registration algorithm is
based on Free Form Deformations, it is well suited for modeling filament network dynamics. The optimization
is performed using efficient linear programming techniques such that the huge amount of image data of a time
series can be efficiently processed. The evaluation of our results illustrates the potential of our approach.
3D motion tracking of the heart using Harmonic Phase (HARP) isosurfaces
Show abstract
Tags are non-invasive features induced in the heart muscle that enable the tracking of heart motion. Each tag line, in fact,
corresponds to a 3D tag surface that deforms with the heart muscle during the cardiac cycle. Tracking of tag surfaces
deformation is useful for the analysis of left ventricular motion. Cardiac material markers (Kerwin et al, MIA, 1997) can
be obtained from the intersections of orthogonal surfaces which can be reconstructed from short- and long-axis tagged
images. The proposed method uses Harmonic Phase (HARP) method for tracking tag lines corresponding to a specific
harmonic phase value and then the reconstruction of grid tag surfaces is achieved by a Delaunay triangulation-based
interpolation for sparse tag points. Having three different tag orientations from short- and long-axis images, the proposed
method showed the deformation of 3D tag surfaces during the cardiac cycle. Previous work on tag surface reconstruction
was restricted to the "dark" tag lines; however, the use of HARP as proposed enables the reconstruction of isosurfaces
based on their harmonic phase values. The use of HARP also provides a fast and accurate way to identify and
track tag lines, and hence to generate the surfaces.
Development of a particle filter framework for respiratory motion correction in nuclear medicine imaging
Show abstract
This research aims to develop a methodological framework based on a data driven approach known as particle filters,
often found in computer vision methods, to correct the effect of respiratory motion on Nuclear Medicine imaging data.
Particle filters are a popular class of numerical methods for solving optimal estimation problems, and we wish to use
their flexibility to build an adaptive framework. In this work we use the particle filter for estimating the deformation of
the internal organs of the human torso, represented by X, over a discrete time index k. The particle filter approximates
the distribution of the deformation of internal organs by generating many propositions, called particles. The posterior
estimate is inferred from an observation Zk of the external torso surface. We demonstrate two preliminary approaches in
tracking organ deformation. In the first approach, Xk represents a small set of organ surface points. In the second
approach, Xk represents a set of affine organ registration parameters relative to a reference time index r. Both approaches are
contrasted with a comparable technique using a direct mapping to infer Xk from the observation Zk. Simulations of both
approaches using the XCAT phantom suggest that the particle filter-based approaches perform better on average.
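A single predict-weight-resample step of a bootstrap particle filter for tracking the deformation state Xk from the surface observation Zk might look like this; the Gaussian likelihood width and the process-noise model are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def particle_filter_step(particles, weights, observe, z, process_std, rng):
    """One predict-weight-resample step of a bootstrap particle filter.

    particles: deformation-state hypotheses Xk (incoming weights are assumed
    uniform after the previous resampling); observe(x) predicts the external
    surface measurement for state x; z is the measured surface Zk."""
    n = len(particles)
    # predict: diffuse hypotheses with process noise
    particles = particles + rng.normal(0.0, process_std, particles.shape)
    # weight: Gaussian likelihood of the surface observation under each hypothesis
    err = np.array([np.sum((observe(p) - z) ** 2) for p in particles])
    weights = np.exp(-0.5 * err / 0.1 ** 2)
    weights /= weights.sum()
    # multinomial resampling keeps the particle set focused on likely states
    idx = rng.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)
```

Iterating this step drives the posterior estimate (e.g. the particle mean) toward states whose predicted surface matches the observation.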
A comparison of tracking methods for swimming C. elegans
Christophe Restif,
Dimitris Metaxas
Show abstract
Tracking the swimming motion of C. elegans worms is of high interest for a variety of research projects on behavior
in biology, from aging to mating studies. We compare six different tracking methods, derived from two types
of image preprocessing, namely local and global thresholding methods, and from three types of segmentation
methods: low-level vision, and articulated models of either constant or varying width. All these methods have
been successfully used in recent related works, with some modifications to adapt them to swimming motions.
We show a quantitative comparison of these methods using computer-vision measures. To discuss their relative
strengths and weaknesses, we consider three scenarios of behavior studies, depending on the constraints of a
C. elegans project, and give suggestions as to which methods are more adapted to each case, and how to further
improve them.
An observation model for motion correction in nuclear medicine
Show abstract
This paper describes a method of using a tracking system to track the upper part of the anterior surface during
scanning for developing patient-specific models of respiration. In the experimental analysis, the natural variation
in the anterior surface during breathing will be modeled to reveal the dominant pattern in the breathing cycle.
The main target is to produce a patient-specific set of parameters that describes the configuration of the anterior
surface for all respiration phases. These data will then be linked to internal organ motion to identify the effect
of each subject's morphology on motion, using a particle filter to account for previously unseen patterns of motion. In
this initial study, a set of volunteers were imaged using the Codamotion infrared marker-based system. In the
marker-based system, the temporal variation of the respiratory motion was studied. For the 12-volunteer cohort,
the mean displacement of the thorax surface (TS) region was 10.7±5.6 mm and that of the abdomen surface (AS)
region 16.0±9.5 mm. Finally, PCA was shown to capture the redundancy in the dataset, with the first principal
component (PC) accounting for more than 96% of the overall variance in both the AS and TS datasets. A fit
to the dominant modes of variation using a simple piecewise sinusoid suggested a maximum error of about
1.1 mm across the complete cohort dataset.
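The PCA variance decomposition used above can be sketched directly from the SVD of the centered marker-displacement matrix:

```python
import numpy as np

def pca_explained_variance(X):
    """Fraction of variance captured by each principal component.

    X: (n_samples, n_markers) surface-marker displacements over time."""
    Xc = X - X.mean(axis=0)                      # center each marker channel
    _, s, _ = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2                                 # singular values -> variances
    return var / var.sum()
```

A first entry above 0.96 corresponds to the abstract's finding that one mode dominates the breathing cycle.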
Image-based motion estimation for cardiac CT via image registration
Show abstract
Images reconstructed from tomographic projection data are subject to motion artifacts from organs that move during the
duration of the scan. The effect can be reduced by taking the motion into account in the reconstruction algorithm if an
estimate of the deformation exists. This paper presents the estimation of the three-dimensional cardiac motion by
registering reconstructed images from cardiac quiet phases as a first step towards motion-compensated cardiac image
reconstruction. The non-rigid deformations of the heart are parametrized on a coarse grid over the image volume and are
interpolated with cubic B-splines. The optimization problem of finding the B-spline coefficients that best describe the
observed deformations is ill-posed due to the large number of parameters and the resulting motion vector field is
sensitive to the choice of initial parameters. Particularly challenging is the task to capture the twisting motion of the
heart. The motion vector field from a dynamic computer phantom of the human heart is used to initialize the
transformation parameters for the optimization process with realistic starting values. The results are evaluated by
comparing the registered images and the obtained motion vector field to the case when the registration is performed
without using prior knowledge about the expected cardiac motion. We find that the registered images are similar for both
approaches, but the motion vector field obtained from motion estimation initialized with the phantom describes the
cardiac contraction and twisting motion more accurately.
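Cubic B-spline interpolation of coarse-grid deformation parameters, shown here in 1D for brevity, works as follows (a sketch under our own conventions, not the authors' implementation):

```python
import numpy as np

def bspline_basis(u):
    """The four uniform cubic B-spline blending functions at local u in [0,1)."""
    return np.array([(1 - u) ** 3,
                     3 * u ** 3 - 6 * u ** 2 + 4,
                     -3 * u ** 3 + 3 * u ** 2 + 3 * u + 1,
                     u ** 3]) / 6.0

def ffd_1d(coeffs, x, spacing):
    """Displacement at position x from coarse-grid B-spline coefficients.

    Assumes x lies in the grid interior so that the four neighbouring
    coefficients coeffs[i-1..i+2] exist."""
    i = int(np.floor(x / spacing))
    u = x / spacing - i
    B = bspline_basis(u)
    return sum(B[k] * coeffs[i + k - 1] for k in range(4))
```

Because the four blending functions sum to one, a constant coefficient grid yields a constant displacement, and each coefficient influences only a local 4-cell neighbourhood, which keeps the optimization sparse.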
Third brain ventricle deformation analysis using fractional differentiation and evolution strategy in brain cine-MRI
Show abstract
In this paper, we present an original method to evaluate deformations of the third cerebral ventricle in brain
cine-MR imaging. First, a segmentation process based on a fractional differentiation method is applied directly to a 2D+t
dataset to detect the contours of the region of interest (i.e. the lamina terminalis). Then, the successive segmented contours
are matched using a procedure of global alignment, followed by a morphing process, based on the Covariance Matrix
Adaptation Evolution Strategy (CMAES). Finally, local measurements of deformations are derived from the previously
determined matched contours. The validation step is realized by comparing our results with measurements made
on the same patients by an expert.
Endoscopic egomotion computation
Show abstract
Computer assistance in Minimally Invasive Surgery is a very active field of research. Many systems designed for Computer Assisted Surgery require information about the instruments' positions and orientations. Our main focus lies on tracking a laparoscopic ultrasound probe to generate 3D ultrasound volumes. State-of-the-art tracking methods such as optical or electromagnetic tracking systems measure pose with respect to a fixed extra-body coordinate system. This causes inaccuracies in the reconstructed ultrasound volume in the case of patient motion, e.g. due to respiration. We propose attaching an endoscopic camera to the ultrasound probe and calculating the camera motion from the video sequence with respect to the organ surface. We adapt algorithms developed for solving the relative pose problem to recreate the camera path during the ultrasound sweep over the organ. With this image-based motion estimation, camera motion can only be determined up to an unknown scale factor, known as the depth-speed ambiguity. We show how this problem can be overcome in the given scenario by exploiting the fact that the distance from the camera to the organ surface is fixed and known. Preprocessing steps are applied to compensate for deficiencies in endoscopic image quality.
Posters: Registration
Diffeomorphic demons using normalized mutual information, evaluation on multimodal brain MR images
Show abstract
The demons algorithm is a fast non-parametric non-rigid registration method. In recent years great efforts
have been made to improve the approach; the state-of-the-art version yields symmetric, inverse-consistent,
large-deformation diffeomorphisms. However, only limited work has explored inter-modal similarity metrics, with
no practical evaluation on multi-modality data. We present a diffeomorphic demons implementation using the
analytical gradient of Normalised Mutual Information (NMI) in a conjugate gradient optimiser. We report the
first qualitative and quantitative assessment of the demons for inter-modal registration. Experiments to spatially
normalise real MR images, and to recover simulated deformation fields, demonstrate (i) similar accuracy from
NMI-demons and classical demons when the latter may be used, and (ii) similar accuracy for NMI-demons on
T1w-T1w and T1w-T2w registration, demonstrating its potential in multi-modal scenarios.
Evaluation of five non-rigid image registration algorithms using the NIREP framework
Show abstract
Evaluating non-rigid image registration algorithm performance is a difficult problem since there is rarely a "gold
standard" (i.e., known) correspondence between two images. This paper reports the analysis and comparison
of five non-rigid image registration algorithms using the Non-Rigid Image Registration Evaluation Project
(NIREP) (www.nirep.org) framework. The NIREP framework evaluates registration performance using centralized
databases of well-characterized images and standard evaluation statistics (methods) which are implemented
in a software package. The performance of five non-rigid registration algorithms (Affine, AIR, Demons, SLE and
SICLE) was evaluated using 22 images from two NIREP neuroanatomical evaluation databases. Six evaluation
statistics (relative overlap, intensity variance, normalized ROI overlap, alignment of calcarine sulci, inverse consistency
error and transitivity error) were used to evaluate and compare image registration performance. The
results indicate that the Demons registration algorithm produced the best registration results with respect to the
relative overlap statistic but produced nearly the worst registration results with respect to the inverse consistency
statistic. The fact that one registration algorithm produced the best result for one criterion and nearly the worst
for another illustrates the need to use multiple evaluation statistics to fully assess performance.
Reliable fusion of knee bone laser scans to establish ground truth for cartilage thickness measurement
Show abstract
We are interested in establishing ground truth data for validating morphology measurements of human knee
cartilage from MR imaging. One promising approach is to compare the high-accuracy 3D laser scans of dissected
cadaver knees before and after the dissolution of their cartilage. This requires an accurate and reliable method
to fuse the individual laser scans from multiple views of the cadaver knees. Unfortunately, existing methods
using the Iterative Closest Point (ICP) algorithm from off-the-shelf packages often yield unreliable fusion results.
We identify two major sources of variation: (i) the noise in the depth measurements of the laser scans is significant,
and (ii) the use of point-to-point correspondence in ICP is not suitable due to sampling variation in the
laser scans. We resolve the first problem by performing adaptive Gaussian smoothing on each individual laser
scan prior to fusion. For the second problem, we construct a surface mesh from the point cloud of each scan
and adopt a point-to-mesh ICP scheme for pairwise alignment. The complete surface mesh is constructed by
fusing all the scans in the order maximizing mutual overlaps. In experiments on 6 repeated scanning trials of a
cadaver knee, our approach reduced the alignment error of point-to-point ICP by 30% and reduced coefficient of
variation (CV) of cartilage thickness measurements from 5% down to 1.4%, significantly improving the method's
repeatability.
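A single ICP iteration in the point-to-point form the authors found unreliable, using the closed-form Kabsch alignment, can be sketched as follows (the point-to-mesh variant replaces the nearest-neighbour step with closest points on the mesh surface):

```python
import numpy as np

def icp_iteration(src, dst):
    """One ICP step: nearest-neighbour matching followed by the optimal
    rigid (Kabsch) alignment of src onto its matched dst points."""
    # nearest-neighbour correspondences (brute force for clarity)
    d2 = ((src[:, None] - dst[None]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # closed-form rigid transform via SVD of the cross-covariance
    cs, cm = src.mean(0), matched.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (matched - cm))
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    t = cm - R @ cs
    return src @ R.T + t
```

Iterating this step converges to a local optimum; the fusion-order heuristic in the abstract (aligning scans in order of maximum mutual overlap) is what keeps that local optimum close to the correct pose.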
Multicontrast MRI registration of carotid arteries in atherosclerotic and normal subjects
Luca Biasiolli,
J. Alison Noble,
Matthew D. Robson
Show abstract
Clinical studies on atherosclerosis agree that multi-contrast MRI is the most promising technique for in-vivo
characterization of carotid plaques. Multi-contrast image registration is essential for this application, because it corrects
misalignments caused by patient motion during MRI acquisition. To date, it has not been determined which automatic
method provides the best registration accuracy in carotid MRI. This study tries to answer this question by presenting an
iterative coarse-to-fine algorithm that co-registers multi-contrast images of carotid arteries using three similarity metrics:
Correlation Ratio (CR), Mutual Information (MI) and Gradient MI (GMI). The registration algorithm is first applied on
the entire images and then only on the Region of Interest (ROI) of the carotid arteries using sub-pixel accuracy. The ROI
is defined by an automatic carotid detection algorithm, which was tested on a group of 20 patients with different types of
atherosclerotic plaques (sensitivity 91% and specificity 88%). Automatic registration was compared with image
alignment obtained by manual operators (clinically qualified vascular specialists). Registration accuracies were measured
using a novel MRI validation procedure, in which the gold standard is represented by in-plane rigid transformations
applied by the MRI system to mimic neck movements. Overall, automatic methods (GMI = 181 ± 104 μm) produced
lower registration errors than manual operators (365 ± 102 μm). GMI performed slightly better than CR and MI,
suggesting that anatomical information improves registration accuracy in the carotid ROI.
Cylindrical affine transformation model for image registration
Show abstract
This paper describes the development of a cylindrical affine transformation model for image registration. The
usefulness of the model for initial alignment was demonstrated for the application of registering prone and
supine 3D MR images of the breast. Final registration results visually improved when using the cylindrical affine
transformation model instead of none or a Cartesian affine transformation model before non-rigid registration.
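A minimal sketch of an affine map applied in cylindrical coordinates follows; this is our illustration (translation terms and the paper's exact parameterization are not given in the abstract):

```python
import numpy as np

def cylindrical_affine(points, a, axis_origin=(0.0, 0.0)):
    """Apply an affine matrix in cylindrical coordinates (r, theta, z).

    points: (n, 3) Cartesian coordinates; a: 3x3 matrix acting on the
    (r, theta, z) representation about an axis through axis_origin."""
    x, y, z = points.T
    r = np.hypot(x - axis_origin[0], y - axis_origin[1])
    theta = np.arctan2(y - axis_origin[1], x - axis_origin[0])
    # transform in cylindrical coordinates, then map back to Cartesian
    cyl = np.column_stack([r, theta, z]) @ np.asarray(a).T
    r2, t2, z2 = cyl.T
    return np.column_stack([axis_origin[0] + r2 * np.cos(t2),
                            axis_origin[1] + r2 * np.sin(t2), z2])
```

A diagonal scaling of the radial coordinate, for example, models the breast expanding or flattening about its axis, which a Cartesian affine map cannot express with so few parameters.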
An improved 3D shape context registration method for non-rigid surface registration
Show abstract
3D shape context is a method to define matching points between similar shapes as a pre-processing step to non-rigid
registration. The main limitation of the approach is point mismatching, which includes long geodesic distance mismatch
and neighbors crossing mismatch. In this paper, we propose a topological structure verification method to correct the
long geodesic distance mismatch and a correspondence field smoothing method to correct the neighbors crossing
mismatch. A robust 3D shape context model is proposed and further combined with thin-plate spline model for non-rigid
surface registration. The method was tested on phantoms and rat hind limb skeletons from micro CT images. The results
from experiments on mouse hind limb skeletons indicate that the approach is robust.
Optical flow based deformable volume registration using a novel second-order regularization prior
Show abstract
Nonlinear image registration is an initial step for a large number of medical image analysis applications. Optical
flow based intensity registration is often used for dealing with intra-modality applications involving motion
differences. In this work we present an energy functional which uses a novel, second-order regularization prior
of the displacement field. Compared to other methods our scheme is robust to non-Gaussian noise and does not
penalize locally affine deformation fields in homogeneous areas. We propose an efficient and stable numerical
scheme to find the minimizer of the presented energy. We implemented our algorithm using modern consumer
graphics processing units and thereby increased the execution performance dramatically. We further show
experimental evaluations on clinical CT thorax data sets at different breathing states and on dynamic 4D CT
cardiac data sets.
Automatic estimation of registration parameters: image similarity and regularization
T. R. Langerak,
U. A. van der Heide,
A. N. T. J. Kotte,
et al.
Show abstract
Image registration is a procedure to spatially align two images that is often used in, for example, computer-aided
diagnosis or segmentation applications. To maximize the flexibility of image registration methods, they depend on many
registration parameters that must be fine-tuned for each specific application. Ideally, these parameters would be tuned
for each individual registration, but doing so manually is too time-consuming, so we would like to do it
automatically. This paper proposes a methodology to
estimate one of the most important parameters in a registration procedure, the regularization setting, on the basis of the
image similarity. We test our method on a set of images of prostate cancer patients and show that using the proposed
methodology, we can improve the result of image registration when compared to using an average-best parameter.
LCC demons with divergence term for liver MRI motion correction
Show abstract
Contrast-enhanced liver MR image sequences acquired at multiple times before and after contrast administration
have been shown to be critically important for the diagnosis and monitoring of liver tumors and may be used
for the quantification of liver inflammation and fibrosis. However, over multiple acquisitions, the liver moves
and deforms due to patient and respiratory motion. In order to analyze contrast agent uptake one first needs
to correct for liver motion. In this paper we present a method for the motion correction of dynamic contrast-enhanced
liver MR images. For this purpose we use a modified version of the Local Correlation Coefficient
(LCC) Demons non-rigid registration method. Since the liver is nearly incompressible its displacement field has
small divergence. For this reason we add a divergence term to the energy that is minimized in the LCC Demons
method. We applied the method to four sequences of contrast-enhanced liver MR images. Each sequence had a
pre-contrast scan and seven post-contrast scans. For each post-contrast scan we corrected for the liver motion
relative to the pre-contrast scan. Quantitative evaluation showed that the proposed method improved the liver
alignment relative to the non-corrected and translation-corrected scans and visual inspection showed no visible
misalignment of the motion corrected contrast-enhanced scans and pre-contrast scan.
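The incompressibility argument above can be made concrete: the divergence of the displacement field is computed with finite differences and its square is added to the registration energy. A minimal sketch of such a penalty (unit isotropic voxel spacing is assumed, and the weighting against the LCC term is omitted):

```python
import numpy as np

def divergence_penalty(u):
    """Sum of squared divergence of a 3D displacement field.

    u: array of shape (3, X, Y, Z), where u[i] holds the i-th component
    of the displacement at every voxel. Central differences via
    np.gradient; a near-zero value indicates a nearly incompressible
    (volume-preserving) deformation.
    """
    div = sum(np.gradient(u[i], axis=i) for i in range(3))
    return float(np.sum(div ** 2))
```

A rotation-like field has zero divergence and is not penalized, while a uniform expansion, which changes volume, is.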
Towards analysis of growth trajectory through multimodal longitudinal MR imaging
Neda Sadeghi,
Marcel Prastawa,
John H. Gilmore,
et al.
Show abstract
The human brain undergoes significant changes in the first few years after birth, but knowledge about this
critical period of development is quite limited. Previous neuroimaging studies have been mostly focused on
morphometric measures such as volume and shape, although tissue property measures related to the degree of
myelination and axon density could also add valuable information to our understanding of brain maturation.
Our goal is to complement brain growth analysis via morphometry with the study of longitudinal tissue property
changes as reflected in patterns observed in multi-modal structural MRI and DTI. Our preliminary study includes
eight healthy pediatric subjects with repeated scans at the age of two weeks, one year, and two years with T1,
T2, PD, and DT MRI. Analysis is driven by the registration of multiple modalities and time points within and
between subjects into a common coordinate frame, followed by image intensity normalization. Quantitative
tractography with diffusion and structural image parameters serves for multi-variate tissue analysis. Different
patterns of rapid changes were observed in the corpus callosum and the posterior and anterior internal capsule,
structures known for distinctly different myelination growth. There are significant differences in central versus
peripheral white matter. We demonstrate that the combined longitudinal analysis of structural and diffusion
MRI proves superior to individual modalities and might provide a better understanding of the trajectory of early neurodevelopment.
A fast rigid-registration method of inferior limb x-ray image and 3D CT images for TKA surgery
Show abstract
In this paper, we propose a fast rigid-registration method of inferior limb X-ray films (two-dimensional Computed
Radiography (CR) images) and three-dimensional Computed Tomography (CT) images for Total Knee Arthroplasty
(TKA) surgery planning. The position of each bone, such as the femur and tibia (shin bone), differs slightly between the
X-ray film and the 3D CT images, and care must be taken in how the two are used together, since the X-ray film is
captured in the standing position while the 3D CT is captured in the decubitus (face-up) position. Conventional
registration mainly uses a cross-correlation function between the two images together with optimization
techniques, which takes enormous calculation time and is difficult to use in interactive operations. In order to solve these
problems, we calculate the center line (bone axis) of femur and tibia (shin bone) automatically, and we use them as initial
positions for the registration. We evaluate our registration method using three patients' image data, and we compare
the proposed method with a conventional registration that uses the down-hill simplex algorithm. The down-hill simplex
method is an optimization algorithm that requires only function evaluations and does not need the calculation of
derivatives. Our registration method outperforms the down-hill simplex method in computation time and convergence
stability. We have developed an implant simulation system on a personal computer to support the
surgeon in preoperative planning of TKA. Our registration method is implemented in the simulation system, and the user
can manipulate 2D/3D translucent templates of implant components on the X-ray film and 3D CT images.
Detection of stable mammographic features under compression using simulated mammograms
Show abstract
This paper discusses stable features under simulated mammographic compression, which will become candidate
landmarks for a temporal mammographic feature-based registration algorithm. Using simulated mammograms,
we explore the extraction of features based on standard intensity projection images and local phase projection images.
One approach to establishing corresponding features is by template matching using a similarity measure. Simulated
mammographic projections from deformed MR volumes are employed because the mean projected 3D displacements can
be computed, making validation of the technique possible. Tracking is done by template matching using normalized
cross correlation as the similarity measure. The performance of standard projection images and local phase projection
images is compared. The preliminary results reveal that although the majority of the points within the breast are difficult
to track, a small number may be successfully tracked, which is indicative of their stability and thus their suitability as
candidate landmarks. Whilst matching using the standard projection images achieves an overall error of 14.46mm, this
error increases to 22.7mm when computing local phase of the projection images. These results suggest that using local
phase alone does not improve template matching. For the identification of stable landmarks for feature-based
mammogram registration, we conclude that intensity based template matching using normalized correlation is a feasible
approach for identifying stable features.
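The tracking step described above, template matching with normalized cross correlation, can be sketched as follows (an exhaustive 2D search for illustration; the paper's template size and search window are not specified here):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def track_template(template, image):
    """Exhaustively locate `template` in `image` by maximizing NCC."""
    th, tw = template.shape
    best, best_pos = -2.0, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            s = ncc(template, image[i:i + th, j:j + tw])
            if s > best:
                best, best_pos = s, (i, j)
    return best_pos, best
```

A score close to 1 at the best position indicates a confidently tracked feature; points that track poorly would be rejected as landmark candidates.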
Improving fluid registration through white matter segmentation in a twin study design
Show abstract
Robust and automatic non-rigid registration depends on many parameters that have not yet been systematically
explored. Here we determined how tissue classification influences non-linear fluid registration of brain MRI. Twin
data is ideal for studying this question, as volumetric correlations between corresponding brain regions that are
under genetic control should be higher in monozygotic twins (MZ) who share 100% of their genes when compared
to dizygotic twins (DZ) who share half their genes on average. When these substructure volumes are quantified
using tensor-based morphometry, improved registration can be defined based on which method gives higher MZ
twin correlations when compared to DZs, as registration errors tend to deplete these correlations. In a study of 92
subjects, higher effect sizes were found in cumulative distribution functions derived from statistical maps when
performing tissue classification before fluid registration, versus fluidly registering the raw images. This gives
empirical evidence in favor of pre-segmenting images for tensor-based morphometry.
Direction-dependent regularization for improved estimation of liver and lung motion in 4D image data
Show abstract
The estimation of respiratory motion is a fundamental requisite for many applications in the field of 4D medical
imaging, for example for radiotherapy of thoracic and abdominal tumors. It is usually done using non-linear
registration of time frames of the sequence without further modelling of physiological motion properties. In this
context, the accurate calculation of liver and lung motion is especially challenging because these organs slip
along the surrounding tissue (i.e. the rib cage) during the respiratory cycle, which leads to discontinuities in the
motion field. Without incorporating this specific physiological characteristic, common smoothing mechanisms
cause an incorrect estimation along the object borders.
In this paper, we present an extended diffusion-based model for incorporating physiological knowledge in image
registration. By decoupling normal- and tangential-directed smoothing, we are able to estimate slipping motion
at the organ borders while preventing gaps and ensuring smooth motion fields inside.
We evaluate our model for the estimation of lung and liver motion on the basis of publicly accessible 4D CT
and 4D MRI data. The results show a considerable increase of registration accuracy with respect to the target
registration error and a more plausible motion estimation.
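The decoupling of normal- and tangential-directed smoothing rests on splitting each displacement vector at the organ boundary into its two components. A minimal sketch (unit surface normals are assumed to be given, e.g. from a pre-segmented organ mask):

```python
import numpy as np

def split_normal_tangential(u, n):
    """Decompose displacement vectors into normal and tangential parts.

    u: (N, 3) displacement vectors at boundary points.
    n: (N, 3) unit surface normals at the same points.
    Returns (u_normal, u_tangential) with u = u_normal + u_tangential;
    only the normal part would be smoothed across the boundary, letting
    the tangential (slipping) part remain discontinuous.
    """
    coeff = np.sum(u * n, axis=1, keepdims=True)  # projection onto normal
    u_n = coeff * n
    return u_n, u - u_n
```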
An intensity-based approach to x-ray mammography: MRI registration
Show abstract
This paper presents a novel approach to X-ray mammography - MRI registration. The proposed method uses
an intensity-based technique and an affine transformation matrix to approximate the 3D deformation of the
breast resulting from the compression applied during mammogram acquisition. The registration is driven by a
similarity measure that is calculated at each iteration of the algorithm between the target X-ray mammogram and
a simulated X-ray image, created from the MR volume. Although the similarity measure is calculated in 2D, we
compute a 3D transformation that is updated at each iteration. We have performed two types of experiments.
In the first set, we used simulated X-ray target data, for which the ground truth deformation of the volume
was known and thus the results could be validated. For this case, we examined the performance of 4 different
similarity measures and we show that Normalized Cross Correlation and Gradient Difference perform best. The
calculated mean reprojection error was 4mm for both similarity measures, for an initial misregistration of 14mm.
In the second set of experiments, we present the initial results of registering real X-ray mammograms with MR
volumes. The results indicate that the breast boundaries were registered well and the volume was deformed in
3D in a similar way to the deformation of the breast during X-ray mammogram acquisition. The experiments
were carried out on five patients.
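Of the similarity measures compared, Gradient Difference can be sketched as below (following the general form popularized by Penney et al.; taking the scale constants as the gradient variances of the target image is an assumption, not necessarily the paper's exact choice):

```python
import numpy as np

def gradient_difference(a, b):
    """Gradient-difference similarity between two 2D images; higher is better.

    Compares vertical and horizontal intensity gradients of `a` (e.g. the
    target X-ray) and `b` (e.g. a simulated X-ray from the MR volume).
    The scale constants are set to the gradient variances of `a`, so `a`
    should not be a constant image.
    """
    dav, dah = np.gradient(a.astype(float))
    dbv, dbh = np.gradient(b.astype(float))
    sv = float(dav.var()) or 1.0
    sh = float(dah.var()) or 1.0
    return float(np.sum(sv / (sv + (dav - dbv) ** 2))
                 + np.sum(sh / (sh + (dah - dbh) ** 2)))
```

Identical images give the maximum score (two per pixel), and the measure degrades gracefully as gradients diverge.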
3D ultrasound volume stitching using phase symmetry and Harris corner detection for orthopaedic applications
Show abstract
Stitching of volumes obtained from three dimensional (3D) ultrasound (US) scanners improves visualization of anatomy
in many clinical applications. Fast but accurate volume registration remains the key challenge in this area. We propose a
volume stitching method based on efficient registration of 3D US volumes obtained from a tracked US probe. Since the
volumes, after adjusting for probe motion, are coarsely registered, we obtain salient correspondence points in the central
slices of these volumes. This is done by first removing artifacts in the US slices using intensity invariant local phase
image processing and then applying the Harris Corner detection algorithm. Fast sub-volume registration on a small
neighborhood around the points then gives fast, accurate 3D registration parameters. The method has been tested on 3D
US scans of phantom and real human radius and pelvis bones and a phantom human fetus. The method has also been
compared to volumetric registration, as well as feature based registration using 3D-SIFT. Quantitative results show
average post-registration error of 0.33mm which is comparable to volumetric registration accuracy (0.31mm) and much
better than 3D-SIFT based registration which failed to register the volumes. The proposed method was also much faster
than volumetric registration (~4.5 seconds versus 83 seconds).
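The Harris corner detection used to pick salient correspondence points can be sketched on a 2D slice as follows (a 3x3 box window stands in for the usual Gaussian weighting to keep the sketch dependency-free):

```python
import numpy as np

def _box_smooth(a):
    """3x3 box filter (a stand-in for the usual Gaussian window)."""
    p = np.pad(a, 1, mode='edge')
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def harris_response(img, k=0.04):
    """Harris corner response of a 2D image (e.g. a central US slice).

    Positive peaks mark corners, negative values mark edges, and flat
    regions score near zero.
    """
    iy, ix = np.gradient(img.astype(float))
    ixx = _box_smooth(ix * ix)
    iyy = _box_smooth(iy * iy)
    ixy = _box_smooth(ix * iy)
    det = ixx * iyy - ixy ** 2   # determinant of the structure tensor
    tr = ixx + iyy               # trace of the structure tensor
    return det - k * tr ** 2
```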
Multi-modality fiducial marker for validation of registration of medical images with histology
Show abstract
A multi-modality fiducial marker is presented in this work, which can be used for validating the correlation of histology
images with medical images. This marker can also be used for landmark-based image registration. Seven different
fiducial markers including a catheter, spaghetti, black spaghetti, cuttlefish ink, and liquid iron are implanted in a mouse
specimen and then investigated based on visibility, localization, size, and stability. The black spaghetti and the mixture
of cuttlefish ink and flour are shown to be the most suitable markers. Based on the size of the markers, black spaghetti is
more suitable for big specimens and the mixture of the cuttlefish ink, flour, and water injected in a catheter is more
suitable for small specimens such as mouse tumours. These markers are visible on medical images and also detectable on
histology and optical images of the tissue blocks. The main component in these agents which enhances the contrast is
iron.
Fast correspondences search in anatomical trees
Show abstract
Registration of multiple medical images commonly comprises the steps feature extraction, correspondences search and transformation computation. In this paper, we present a new method for a fast and pose independent search of correspondences using as features anatomical trees such as the bronchial system in the lungs or the vessel system in the liver. Our approach scores the similarities between the trees' nodes (bifurcations) taking into account both, topological properties extracted from their graph representations and anatomical properties extracted from the trees themselves. The node assignment maximizes the global similarity (sum of the scores of each pair of assigned nodes), assuring that the matches are distributed throughout the trees. Furthermore, the proposed method is able to deal with distortions in the data, such as noise, motion, artifacts, and problems associated with the extraction method, such as missing or false branches. According to an evaluation on swine lung data sets, the method requires less than one second on average to compute the matching and yields a high rate of correct matches compared to state of the art work.
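The node assignment that maximizes the sum of pairwise scores is an instance of the linear assignment problem. Given a precomputed similarity matrix (the topological and anatomical scoring itself is not reproduced here), it can be sketched as:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_tree_nodes(score):
    """Assign bifurcation nodes of two trees by maximizing total similarity.

    score: (n, m) matrix; score[i, j] is the similarity between node i of
    the first tree and node j of the second, e.g. combining topological
    and anatomical features. Returns the matched index pairs of the
    globally optimal one-to-one assignment.
    """
    rows, cols = linear_sum_assignment(score, maximize=True)
    return list(zip(rows, cols))
```

A globally optimal assignment avoids the pitfalls of greedy matching, where an early high-scoring match can force poor matches later in the tree.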
Evaluation of an efficient GPU implementation of digitally reconstructed radiographs in 3D/2D image registration
Show abstract
Intensity-based three-dimensional to two-dimensional (3D/2D) X-ray image registration algorithms usually require
generating digitally reconstructed radiographs (DRRs) in every iteration during their optimization phase.
Thus a large part of the computation time of such registration algorithms is spent in computing these DRRs. In
a 3D-to-multiple-2D image registration framework where a sequence of DRRs is calculated, not only the computation
but also the memory cost is high. We present an efficient DRR generation method to reduce both costs on
a graphics processing unit (GPU) implementation. The method relies on integrating a precomputation stage
and a narrow-band region-of-interest calculation into the DRR generation. We have demonstrated its benefits on
a previously proposed non-rigid 4D-to-multiple-2D image registration framework to estimate cerebral aneurysm
wall motion. The two tested algorithms initially required several hours of highly intensive computation involving the
generation of a large number of DRRs in every iteration. In this paper, results on datasets of digital
and physical pulsating cerebral aneurysm phantoms showed a speedup factor of around 50x in the generation of
DRRs. In further image registration based wall motion estimation experiments using our implementation, we
could obtain estimation results through the whole cardiac cycle within 5 minutes without degrading the overall performance.
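The narrow-band region-of-interest idea can be illustrated with a toy parallel-beam DRR that only integrates rays hitting a precomputed detector mask (the paper's GPU ray casting and perspective geometry are not reproduced):

```python
import numpy as np

def drr_with_roi(volume, roi_mask):
    """Parallel-beam DRR restricted to a narrow-band region of interest.

    volume: (X, Y, Z) attenuation values; rays are cast along the X axis.
    roi_mask: (Y, Z) boolean mask of detector pixels worth computing,
    e.g. a narrow band around the projected aneurysm. Pixels outside the
    mask are skipped entirely, which is where the speedup comes from.
    """
    drr = np.zeros(volume.shape[1:], dtype=float)
    ys, zs = np.nonzero(roi_mask)
    drr[ys, zs] = volume[:, ys, zs].sum(axis=0)  # line integral per ray
    return drr
```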
Markov random field optimization for intensity-based 2D-3D registration
Show abstract
We propose a Markov Random Field (MRF) formulation for the intensity-based N-view 2D-3D registration problem. The
transformation aligning the 3D volume to the 2D views is estimated by iterative updates obtained by discrete optimization
of the proposed MRF model. We employ a pairwise MRF model with a fully connected graph in which the nodes represent
the parameter updates and the edges encode the image similarity costs resulting from variations of the values of adjacent
nodes. A label space refinement strategy is employed to achieve sub-millimeter accuracy. The evaluation on real and
synthetic data and the comparison to a state-of-the-art method demonstrate the potential of our approach.
Image similarity metrics in image registration
A. Melbourne,
G. Ridgway,
D. J. Hawkes
Show abstract
Measures of image similarity that inspect the intensity probability distribution of the images have proved extremely
popular in image registration applications. The joint entropy of the intensity distributions and the
marginal entropies of the individual images are combined to produce properties such as resistance to loss of
information in one image and invariance to changes in image overlap during registration. However, information-theoretic
cost functions are largely used empirically. This work attempts to describe image similarity measures
within a formal mathematical metric framework. Redefining mutual information as a metric is shown to lead
naturally to the standardised variant, normalised mutual information.
Registration-based interpolation applied to cardiac MRI
Show abstract
Various approaches have been proposed for segmentation of cardiac MRI. An accurate segmentation of the
myocardium and ventricles is essential to determine parameters of interest for the function of the heart, such as
the ejection fraction. One problem with MRI is the poor resolution in one dimension.
A 3D registration algorithm will typically use a trilinear interpolation of intensities to determine the intensity
of a deformed template image. Due to the poor resolution across slices, such linear approximation is highly
inaccurate since the assumption of smooth underlying intensities is violated. Registration-based interpolation
is based on 2D registrations between adjacent slices and is independent of segmentations. Hence, rather than
assuming smoothness in intensity, the assumption is that the anatomy is consistent across slices. The basis for
the proposed approach is the set of 2D registrations between each pair of slices, both ways. The intensity of a
new slice is then weighted by (i) the deformation functions and (ii) the intensities in the warped images. Unlike
the approach by Penney et al. 2004, this approach takes into account deformation both ways, which gives more
robustness where correspondence between slices is poor.
We demonstrate the approach on a toy example and on a set of cardiac CINE MRI. Qualitative inspection reveals
that the proposed approach provides a more convincing transition between slices than images obtained by linear
interpolation. A quantitative validation reveals significantly lower reconstruction errors than both linear and
registration-based interpolation based on one-way registrations.
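Given the two-way 2D registrations, the interpolation of an intermediate slice can be sketched as below (displacement fields are assumed precomputed; the paper's exact weighting may differ from this symmetric blend):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def interpolate_slice(s0, s1, d01, d10, t):
    """Registration-based interpolation of a slice at fraction t in (0, 1).

    s0, s1: adjacent 2D slices. d01: displacement field of shape
    (2, H, W) mapping s0 towards s1; d10 the reverse. Each slice is
    warped partway along its field and the two warps are blended with
    weights (1 - t) and t, so both registration directions contribute.
    """
    h, w = s0.shape
    grid = np.mgrid[0:h, 0:w].astype(float)
    w0 = map_coordinates(s0, grid + t * d01, order=1, mode='nearest')
    w1 = map_coordinates(s1, grid + (1 - t) * d10, order=1, mode='nearest')
    return (1 - t) * w0 + t * w1
```

With zero displacement fields the scheme reduces exactly to linear interpolation, so any improvement over linear interpolation comes from the anatomy-following deformations.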
Automated algorithm for atlas-based segmentation of the heart and pericardium from non-contrast CT
Show abstract
Automated segmentation of the 3D heart region from non-contrast CT is a pre-requisite for automated quantification of
coronary calcium and pericardial fat. We aimed to develop and validate an automated, efficient atlas-based algorithm for
segmentation of the heart and pericardium from non-contrast CT.
A co-registered non-contrast CT atlas is first created from multiple manually segmented non-contrast CT data.
Non-contrast CT data included in the atlas are co-registered to each other using iterative affine registration, followed by a
deformable transformation using the iterative demons algorithm; the final transformation is also applied to the segmented
masks. New CT datasets are segmented by first co-registering to an atlas image, and by voxel classification using a
weighted decision function applied to all co-registered/pre-segmented atlas images. This automated segmentation
method was applied to 12 CT datasets, with a co-registered atlas created from 8 datasets. Algorithm performance was
compared to expert manual quantification.
Cardiac region volumes quantified by the algorithm (609.0 ± 39.8 cc) and the expert (624.4 ± 38.4 cc) were not
significantly different (p=0.1, mean percent difference 3.8 ± 3.0%) and showed excellent correlation (r=0.98, p<0.0001).
The algorithm achieved a mean voxel overlap of 0.89 (range 0.86-0.91). The total time was <45 sec on a standard
Windows computer (100 iterations). Fast, robust, automated atlas-based segmentation of the heart and pericardium from
non-contrast CT is feasible.
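The voxel classification over the co-registered, pre-segmented atlas images can be sketched as weighted voting (the paper's exact decision function and choice of weights are not specified here):

```python
import numpy as np

def weighted_label_fusion(masks, weights, threshold=0.5):
    """Fuse co-registered binary atlas masks by weighted voting.

    masks: (K, ...) binary arrays, one per atlas, already registered to
    the target image. weights: K per-atlas weights, e.g. derived from
    post-registration similarity with the target. A voxel is labeled
    foreground when its normalized weighted vote exceeds `threshold`.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    vote = np.tensordot(w, np.asarray(masks, dtype=float), axes=1)
    return vote > threshold
```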
Mosaicing of microscope images in the presence of large areas with insufficient information content
Show abstract
In virtual microscopy, multiple overlapping fields of view are acquired from a large slide using a motorized
microscope stage that moves and focuses the slide automatically. A virtual slide is reconstructed by combining
digitally saved fields of view into an image mosaic. A seamless reconstruction requires the correction of unknown
positioning errors of the stage. This is usually done by automatically estimating alignment parameters of the
tiles in the image mosaic. But finding accurate alignment parameters can be inhibited by the presence of
tiles that lack information content in the areas of overlap. In this work we propose a new mosaicing method
that assesses the information content of each overlap and performs pairwise registrations of adjacent tiles only if
the content of their overlap is deemed sufficient for successful registration. For global positioning of tiles a
stitching path is found by tracing such content-rich overlaps. We tested the proposed algorithm on bright field
and fluorescence microscope images and compared the results with those of an existing algorithm based on
simultaneous estimation of global alignment parameters. It is shown that the new algorithm improves perceived
image quality at boundaries between tiles. Our method is also computationally efficient since it performs no
more than one pairwise registration per tile on average.
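The check on the information content of an overlap can be sketched with a simple histogram-entropy score (the entropy threshold is an assumed tuning parameter, not the paper's criterion):

```python
import numpy as np

def has_sufficient_content(overlap, min_entropy=3.0, bins=64):
    """Decide whether an overlap region carries enough texture to register.

    Uses the Shannon entropy (in bits) of the intensity histogram as a
    simple information-content score: a constant or near-empty overlap
    scores near zero, a textured one scores high.
    """
    hist, _ = np.histogram(overlap, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum()) >= min_entropy
```

Pairwise registration would then only be attempted for tile pairs whose overlap passes this test, and the stitching path traced through the passing overlaps.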
Volume-constrained image registration for pre- and post-operative CT liver data
Show abstract
The resection of a tumor is one of the most common tasks in liver surgery. Here, it is of particular importance to
resect the tumor together with a safety margin while preserving as much healthy liver tissue as possible. To this end, a
preoperative CT scan is taken in order to come up with a sound resection strategy.
It is the purpose of this paper to compare the preoperative planning with the actual resection result. Since the
pre- and postoperative data are not directly comparable, a meaningful registration is required. In the
literature one may find a rigid and a landmark-based approach for this task. Whereas the rigid registration does
not compensate for nonlinear deformation, the landmark approach may lead to unwanted overregistration.
Here we propose a fully automatic nonlinear registration with volume constraints which seems to overcome both
aforementioned problems and does lead to satisfactory results in our test cases.
Medical image registration using the modified conditional entropy measure combining the spatial and intensity information
Show abstract
We propose an image registration technique using spatial and intensity information. The registration is conducted by the
use of a measure based on the entropy of conditional probabilities. To achieve the registration, we first define a modified
conditional entropy (MCE) computed from the joint histograms for the area intensities of two given images. In order to
combine the spatial information into a traditional registration measure, we use the gradient vector flow field. The
MCE is then computed from the gradient vector flow intensity (GVFI), which combines the gradient information with the
intensity values of the original images. To evaluate the performance of the proposed registration method, we conduct various
experiments with our method as well as an existing method based on the mutual information (MI) criterion. We evaluate the
precision of MI- and MCE-based measurements by comparing registrations obtained from MR images and
transformed CT images. The experimental results show that the proposed method is more accurate.
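The classical quantity underlying the measure, the entropy of conditional probabilities estimated from a joint histogram, can be sketched as follows (the modification and the GVFI input of the paper are not reproduced):

```python
import numpy as np

def conditional_entropy(a, b, bins=32):
    """H(A|B) = H(A, B) - H(B), estimated from a joint histogram.

    Low values indicate that image A is well predicted by image B,
    which is the behaviour a registration would drive towards.
    """
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    pb = pxy.sum(axis=0)  # marginal distribution of B

    def h(p):
        p = p[p > 0]
        return float(-(p * np.log(p)).sum())

    return h(pxy.ravel()) - h(pb)
```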
Posters: Image Restoration and Enhancement
Improving arterial spin labeling data by temporal filtering
Jan Petr,
Jean-Christophe Ferre,
Jean-Yves Gauvrit,
et al.
Show abstract
Arterial spin labeling (ASL) is an MRI method for imaging brain perfusion by magnetically
labeling blood in brain-feeding arteries. The perfusion is obtained from the
difference between images with and without prior labeling.
Image noise is one of the main problems of ASL as the difference is around
0.5-2% of the image magnitude. Usually, 20-40 pairs of images need to be
acquired and averaged to reach a satisfactory quality.
The images are acquired shortly after the labeling to allow the labeled blood to reach the
imaged slice. A sequence of images with multiple delays is more suitable for quantification
of the cerebral blood flow as it gives more information about the blood arrival and relaxation.
Although the quantification methods are sensitive to noise, typically no filtering, or only Gaussian filtering,
is used to denoise the data in the temporal domain prior to quantification.
In this article, we propose an efficient way
to use the redundancy of information in the time sequence of each pixel
to suppress noise. For this purpose, the vectorial NL-means method is adapted to work in the temporal
domain. The proposed method is tested on simulated and real 3T MRI data. We demonstrate a clear improvement
of the image quality as well as a better performance compared to Gaussian and normal spatial NL-means
filtering.
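The idea of exploiting temporal redundancy with NL-means can be sketched on a single pixel's time series (a 1D adaptation for illustration; the vectorial variant of the paper is not reproduced, and the patch size and bandwidth h are assumed values):

```python
import numpy as np

def nlmeans_1d(signal, patch=3, h=0.1):
    """Patch-based NL-means denoising along a 1D (temporal) signal.

    Each sample is replaced by a weighted average of all samples whose
    surrounding temporal patches are similar; weights decay as
    exp(-d2 / h^2) with the mean squared patch difference d2.
    """
    signal = np.asarray(signal, dtype=float)
    n = len(signal)
    half = patch // 2
    padded = np.pad(signal, half, mode='edge')
    patches = np.array([padded[i:i + patch] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        d2 = np.mean((patches - patches[i]) ** 2, axis=1)
        w = np.exp(-d2 / (h * h))
        out[i] = np.dot(w, signal) / w.sum()
    return out
```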
Compact rotation invariant descriptor for non-local means
Show abstract
Non-local means is a recently proposed denoising technique that better preserves image structures than other
methods. However, the computational cost of non-local means is prohibitive, especially for large 3D images.
Modifications have previously been proposed to reduce the cost, but these result in image artefacts. This paper
proposes a compact rotation invariant descriptor. Testing demonstrates improved denoising performance relative
to optimized non-local means, while running an order of magnitude faster.
Novel registration-based image enhancement for x-ray fluoroscopy
Show abstract
High image noise in low-dose fluoroscopic x-ray often necessitates additional radiographic-dose exposures to patients to
include as part of the medical records. We present an image-registration-based approach for the generation of
high-quality images from a sequence of low-dose x-ray fluoroscopy exposures. Image subregions in consecutively acquired
fluoroscopy frames are registered to subregions in a pre-selected reference frame using a two-dimensional
transformation model. Frames neighboring the reference image are resampled using a smooth deformation field
generated by interpolation of the individual subregion deformations. Motion-corrected neighboring frames are then
combined with the reference frame using a weighted, frequency-specific multi-resolution combination method. Using
this method, image noise (localized standard deviation) was reduced by 38% in phantom data and by 29% in clinical
barium swallow examinations. We demonstrate an effective method for generating a simulated radiographic-dose x-ray
image from a set of consecutively acquired low-dose fluoroscopy images. The significant improvement in image quality
indicates the potential of this approach to lower average patient dose by substantially reducing the need for additional
exposures for patient records.
Application of a modified regularization procedure for estimating oxygen tension in large retinal blood vessels
Show abstract
Phosphorescence lifetime measurement based on a frequency domain approach is used to estimate oxygen tension in
large retinal blood vessels. The classical least squares (LS) estimation was initially used to determine oxygen tension
indirectly from intermediate variables. A spatial regularized least squares (RLS) method was later proposed to reduce the
high variance of oxygen tension estimated by the LS method. In this paper, we provide a solution using a modified RLS
(MRLS) approach that utilizes prior knowledge about retinal vessel oxygenation based on expected oxygen tension
values in retinal arteries and veins. The performance of the MRLS method was evaluated on simulated and experimental
data by determining the bias, variance, and mean absolute error (MAE) of oxygen tension measurements and comparing
these parameters with those derived with the use of LS and RLS methods.
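The LS-versus-RLS trade-off described above can be sketched with a Tikhonov-type regularized least-squares solver (the identity regularizer here stands in for the spatial smoothness operator of the RLS/MRLS methods):

```python
import numpy as np

def regularized_lsq(A, b, lam=1.0, L=None):
    """Solve min ||A x - b||^2 + lam ||L x||^2 via the normal equations.

    With L = I (the default) this is ridge regression; a spatial
    difference operator for L would mimic spatially regularized least
    squares. Larger lam lowers variance at the cost of added bias.
    """
    n = A.shape[1]
    if L is None:
        L = np.eye(n)
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)
```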
Posters: Segmentation
Automated extraction method for the center line of spinal canal and its application to the spinal curvature quantification in torso x-ray CT images
Show abstract
X-ray CT images have been widely used in clinical routine in recent years. Images from a modern CT scanner show the
details of various organs and tissues, which means that many organs and tissues can be interpreted simultaneously.
However, CT image interpretation requires a lot of time and effort, so support for interpreting CT images based on
image-processing techniques is expected. The interpretation of the spinal curvature is
important for clinicians because spinal curvature is associated with various spinal disorders. We propose a quantification
scheme of the spinal curvature based on the center line of spinal canal on CT images. The proposed scheme consists of
four steps: (1) Automated extraction of the skeletal region based on CT number thresholding. (2) Automated extraction
of the center line of spinal canal. (3) Generation of the median plane image of spine, which is reformatted based on the
spinal canal. (4) Quantification of the spinal curvature. The proposed scheme was applied to 10 cases and compared
with the Cobb angle that is commonly used by clinicians. We found a high correlation (95% confidence
interval for lumbar lordosis: 0.81-0.99) between values obtained by the proposed (vector) method and the Cobb angle. The
proposed method also provides reproducible results (inter- and intra-observer variability within 2°). These
experimental results suggest that the proposed method is effective for quantifying the spinal curvature
on CT images.
Closing of interrupted vascular segmentations: an automatic approach based on shortest paths and level sets
Show abstract
Exact segmentations of the cerebrovascular system are the basis for several medical applications, such as preoperative
planning, postoperative monitoring and medical research. Several automatic methods for the extraction of the
vascular system have been proposed, but these approaches suffer from several problems. One of the
major problems is interruptions in the vascular segmentation, especially in the case of small vessels represented by
low intensities. These breaks are problematic for the outcome of several applications, e.g., FEM simulations and
quantitative vessel analysis. In this paper we propose an automatic post-processing method to connect broken
vessel segmentations. The proposed approach consists of four steps. Based on an existing vessel segmentation,
the 3D skeleton is computed first and used to detect the dead ends of the segmentation. In the following step,
possible connections between these dead ends are computed using a graph-based approach operating on the vesselness
parameter image. After a consistency check is performed, the detected paths are used to obtain the final
segmentation using a level set approach. The proposed method was validated using a synthetic dataset as well as
two clinical datasets. Evaluation on two Time-of-Flight MRA datasets showed that, on average, 45 connections
between dead ends were found per dataset. A quantitative comparison with semi-automatic segmentations by
medical experts using the Dice coefficient revealed a mean improvement of 0.0229 per dataset. In summary, the
presented approach can considerably improve the accuracy of vascular segmentations needed for subsequent analysis steps.
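The path-finding step between dead ends can be illustrated with a minimal 2D sketch: a Dijkstra search over a cost image derived from vesselness (the reciprocal cost mapping, grid size, and function name below are illustrative assumptions, not the authors' implementation).

```python
import heapq
import numpy as np

def shortest_path_on_cost(cost, start, goal):
    """Dijkstra on a 4-connected grid; returns the minimal-cost path
    from start to goal as a list of (row, col) pixels."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:          # walk predecessors back to the start
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy "vesselness" image: a bright horizontal ridge with a gap in it.
vesselness = np.full((5, 7), 0.05)
vesselness[2, :] = 1.0            # the vessel ridge
vesselness[2, 3] = 0.1            # an interruption in the segmentation
cost = 1.0 / (vesselness + 1e-6)  # cheap to walk along high vesselness
path = shortest_path_on_cost(cost, (2, 0), (2, 6))
```

The recovered path stays on the ridge and bridges the gap, because crossing the low-vesselness pixel is still cheaper than any detour through background.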
Multiscale topo-morphologic opening of arteries and veins: a validation study on phantoms and CT imaging of pulmonary vessel casting of pigs
Show abstract
Distinguishing pulmonary arterial and venous (A/V) trees via in vivo imaging is a critical first step in the
quantification of vascular geometry for purposes of determining, for instance, pulmonary hypertension, detection of
pulmonary emboli, and more. A multi-scale topo-morphologic opening algorithm has recently been introduced by us
for separating A/V trees in pulmonary multiple-detector X-ray computed tomography (MDCT) images without contrast.
The method starts with two sets of seeds, one for each of the A/V trees, and combines fuzzy distance transform, fuzzy
connectivity, and morphologic reconstruction, leading to multi-scale opening of two mutually fused structures while
preserving their continuity. The method locally determines the optimum morphological scale separating the two
structures. Here, a validation study is reported examining the accuracy of the method using mathematically generated
phantoms with different levels of fuzziness, overlap, scale, resolution, noise, and geometric coupling, and MDCT
images of pulmonary vessel casting of pigs. After exsanguination of the animal, a vessel cast was generated using
a rapid-hardening methyl methacrylate compound, with additional contrast provided by 10 cc of Ethiodol on the arterial side,
and scanned in an MDCT scanner at 0.5 mm slice thickness and 0.47 mm in-plane resolution. True
segmentations of A/V trees were computed from these images by thresholding. Subsequently, the effects of the
distinguishing A/V contrast were eliminated and the resulting images were used for A/V separation by our method.
Experimental results show that 92%-98% accuracy is achieved using only one seed for each object in phantoms,
while 94.4% accuracy is achieved in MDCT cast images using ten seeds for each of the A/V trees.
Image segmentation using the Student's t-test and the divergence of direction on spherical regions
Show abstract
We have developed a new framework for analyzing images called Shells and Spheres (SaS) based on a set of spheres
with adjustable radii, with exactly one sphere centered at each image pixel. This set of spheres is considered optimized
when each sphere reaches, but does not cross, the nearest boundary of an image object. Statistical calculations at varying
scale are performed on populations of pixels within spheres, as well as populations of adjacent spheres, in order to
determine the proper radius of each sphere. In the present work, we explore the use of a classical statistical method, the
Student's t-test, within the SaS framework, to compare adjacent spherical populations of pixels. We present results from
various techniques based on this approach, including a comparison with classical gradient and variance measures at the
boundary. A number of optimization strategies are proposed and tested based on pairs of adjacent spheres whose sizes are
controlled in a methodical manner. A properly positioned sphere pair lies on opposite sides of an object boundary,
yielding a direction function from the center of each sphere to the boundary point between them. Finally, we develop a
method for extracting medial points based on the divergence of that direction function as it changes across medial ridges,
reporting not only the presence of a medial point but also the angle between the directions from that medial point to the
two respective boundary points that make it medial. Although demonstrated here only in 2D, these methods are all
inherently n-dimensional.
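The statistical comparison of adjacent sphere populations can be sketched with SciPy's two-sample t-test; the intensities, noise level, and sample sizes below are invented for illustration.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# Pixel populations from two adjacent spheres straddling a step edge
# (different means), and from two spheres inside one homogeneous region.
sphere_a = rng.normal(100.0, 5.0, size=50)   # one side of the boundary
sphere_b = rng.normal(140.0, 5.0, size=50)   # other side of the boundary
sphere_c = rng.normal(100.0, 5.0, size=50)   # same region as sphere_a

t_edge, p_edge = ttest_ind(sphere_a, sphere_b)   # tiny p: boundary detected
t_flat, p_flat = ttest_ind(sphere_a, sphere_c)   # larger p expected: no boundary
```

A sphere pair that straddles an object boundary yields a far smaller p-value than a pair inside one region, which is the signal used to stop sphere growth at the nearest boundary.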
Aorta segmentation in non-contrast cardiac CT images using an entropy-based cost function
Olga C. Avila-Montes,
Uday Kukure,
Ioannis A. Kakadiaris
Show abstract
Studies have shown that aortic calcification is associated with increased risk of cardiovascular disease. Furthermore,
aortic calcium assessment can be performed on standard cardiac calcium scoring Computed Tomography
scans, which may help to avoid additional imaging studies. In this paper, we present an entropy-based, narrow
band restricted, iterative method for segmentation of the ascending aorta in non-contrast CT images, as a step
towards aortic calcification detection and pericardial fat quantitation. First, an estimate of the aorta center and
radius is obtained by applying dynamic programming in Hough space. In the second step, these estimates serve
to reduce the aorta boundary search area to within a narrow band, and the contour is updated iteratively using
dynamic programming methods. Our algorithm is able to overcome the limitations of previous approaches in
characterizing (i) the boundary edge features and (ii) the non-circular shape at the aortic root. The results from the proposed
method compare favorably with the manually traced aorta boundaries and outperform other approaches
in terms of boundary distance and volume overlap.
A skull segmentation method for brain MR images based on multiscale bilateral filtering scheme
Show abstract
We present a novel automatic segmentation method for the skull on brain MR images for attenuation correction in
combined PET/MRI applications. Our method transforms T1-weighted MR images to the Radon domain and then
detects the features of the skull. In the Radon domain we use a bilateral filter to construct a multiscale image series. For
the repeated convolution we increase the spatial smoothing at each scale, doubling the cumulative width of the spatial
and range Gaussians at each scale. Two filters with different kernels along the vertical direction are applied across
the scales from coarse to fine levels. The result at a coarse scale gives a mask for the next finer scale and supervises
the segmentation at that scale. The method is robust to noisy MR images because of its multiscale bilateral
filtering scheme. After combining the two filtered sinograms, the reciprocal binary sinogram of the skull is obtained for
the reconstruction of the skull image. We use the filtered back projection method to reconstruct the segmented skull
image. We define six metrics to evaluate our segmentation method. The method has been tested with brain phantom data,
simulated brain data, and real MRI data. Evaluation results showed that our method is robust and accurate, which is
useful for skull segmentation and subsequently for attenuation correction in combined PET/MRI applications.
A skull stripping method using deformable surface and tissue classification
Xiaodong Tao,
Ming-Ching Chang
Show abstract
Many neuroimaging applications require an initial step of skull stripping to extract the cerebrum, cerebellum, and
brain stem. We approach this problem by combining deformable surface models and a fuzzy tissue classification
technique. Our assumption is that contrast exists between brain tissue (gray matter and white matter) and
cerebrospinal fluid, which separates the brain from the extra-cranial tissue. We first analyze the intensity of the
entire image to find an approximate centroid of the brain and initialize an ellipsoidal surface around it. We then
perform a fuzzy tissue classification with bias field correction within the surface. Tissue classification and bias
field are extrapolated to the entire image. The surface iteratively deforms under a force field computed from
the tissue classification and the surface smoothness. Because of the bias field correction and tissue classification,
the proposed algorithm depends less on particular imaging contrast and is robust to inhomogeneous intensity
often observed in magnetic resonance images. We tested the algorithm on all T1-weighted images in the OASIS
database, which includes skull stripping results obtained using the Brain Extraction Tool; the Dice scores have an average of
0.948 with a standard deviation of 0.017, indicating a high degree of agreement. The algorithm takes on average
2 minutes to run on a typical PC and produces a brain mask and membership functions for gray matter, white
matter, and cerebrospinal fluid. We also tested the algorithm on T2 images to demonstrate its generality, where
the same algorithm without parameter adjustment gives satisfactory results.
Intracranial aneurysm segmentation in 3D CT angiography: method and quantitative validation
Show abstract
Accurately quantifying aneurysm shape parameters is of clinical importance, as it is an important factor in choosing the
right treatment modality (i.e. coiling or clipping), in predicting rupture risk and operative risk and for pre-surgical
planning. The first step in aneurysm quantification is to segment it from other structures that are present in the image. As
manual segmentation is a tedious procedure and prone to inter- and intra-observer variability, there is a need for an
automated method which is accurate and reproducible. In this paper a novel semi-automated method for segmenting
aneurysms in Computed Tomography Angiography (CTA) data based on Geodesic Active Contours is presented and
quantitatively evaluated. Three different image features are used to steer the level set to the boundary of the aneurysm,
namely intensity, gradient magnitude and variance in intensity. The method requires minimal user interaction, i.e.
clicking a single seed point inside the aneurysm which is used to estimate the vessel intensity distribution and to
initialize the level set. The results show that the developed method is reproducible, and performs in the range of interobserver
variability in terms of accuracy.
Segmentation of the thalamus in multi-spectral MR images using a combination of atlas-based and gradient graph cut methods
Show abstract
Two popular segmentation methods used today are atlas based and graph cut based segmentation techniques. The atlas
based method deforms a manually segmented image onto a target image, resulting in an automatic segmentation. The
graph cut segmentation method utilizes the graph cut paradigm by treating image segmentation as a max-flow problem.
A specialized form of this algorithm was developed by Lecoeur et al. [1], called the spectral graph cut algorithm. The
goal of this paper is to combine both of these methods, creating a more stable atlas based segmentation algorithm that is
less sensitive to the initial manual segmentation. The registration algorithm is used to automate and initialize the spectral
graph cut algorithm as well as add needed spatial information, while the spectral graph cut algorithm is used to increase
the robustness of the atlas method. To calculate the sensitivity of the algorithms, the initial manual segmentation of the
atlas was both dilated and eroded by 2 mm and the segmentation results were calculated. Results show that the atlas based
segmentation segments the thalamus well with an average Dice Similarity Coefficient (DSC) of 0.87. The spectral graph
cut method shows similar results with an average DSC measure of 0.88, with no statistical difference between the two
methods. The atlas based method's DSC value, however, was reduced to 0.76 and 0.67 when dilated and eroded
respectively, while the combined method retained a DSC value of 0.81 and 0.74, with a statistical difference found
between the two methods.
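The DSC metric and the dilate/erode sensitivity probe used in this evaluation can be sketched as follows; the toy square mask stands in for a thalamus segmentation, and the mask size and structuring element are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def dice(a, b):
    """Dice Similarity Coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Toy "thalamus" mask: a filled square in a 2D slice.
truth = np.zeros((40, 40), dtype=bool)
truth[10:30, 10:30] = True

perfect = dice(truth, truth)                              # identical masks
dilated = dice(truth, binary_dilation(truth, iterations=2))
eroded = dice(truth, binary_erosion(truth, iterations=2))
```

Perturbing the initialization by dilation or erosion lowers the DSC against the original mask; measuring how far it drops for each algorithm is the sensitivity experiment described above.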
Automated lung tumor segmentation for whole body PET volume based on novel downhill region growing
Show abstract
We propose an automated lung tumor segmentation method for whole body PET images based on a novel downhill
region growing (DRG) technique, which regards homogeneous tumor hotspots as 3D monotonically decreasing
functions. The method has three major steps: thoracic slice extraction with K-means clustering of the slice features;
hotspot segmentation with DRG; and decision tree analysis based hotspot classification. To overcome the common
problem of leakage into adjacent hotspots in automated lung tumor segmentation, DRG employs the tumors' SUV
monotonicity features. DRG also uses gradient magnitude of tumors' SUV to improve tumor boundary definition. We
used 14 PET volumes from patients with primary NSCLC for validation. The thoracic region extraction step achieved
good and consistent results for all patients despite marked differences in size and shape of the lungs and the presence of
large tumors. The DRG technique was able to avoid the problem of leakage into adjacent hotspots and produced a
volumetric overlap fraction of 0.61 ± 0.13 which outperformed four other methods where the overlap fraction varied
from 0.40 ± 0.24 to 0.59 ± 0.14. Of the 18 tumors in 14 NSCLC studies, 15 lesions were classified correctly, 2 were false
negative and 15 were false positive.
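The monotonic-descent idea behind downhill region growing can be illustrated on a toy 2D "SUV" image with two adjacent hotspots; the growth rule, floor threshold, and array values below are our own illustrative assumptions, not the authors' implementation.

```python
from collections import deque
import numpy as np

def downhill_grow(img, seed, floor):
    """Grow a region from a local maximum, only ever stepping to
    neighbors whose value is <= the current pixel (monotone descent)
    and above a background floor. This treats a hotspot as a
    monotonically decreasing function of distance from its peak."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if floor < img[nr, nc] <= img[r, c]:
                    mask[nr, nc] = True
                    q.append((nr, nc))
    return mask

# Two adjacent hotspots separated by a shallow valley: growth from the
# left peak must not climb over the valley into the right hotspot.
row = np.array([0, 2, 5, 9, 5, 3, 4, 8, 4, 2, 0], dtype=float)
img = np.tile(row, (3, 1))
mask = downhill_grow(img, (1, 3), floor=1.0)
```

Because the region can only descend, growth stops at the valley between the two hotspots, which is exactly the leakage-avoidance property claimed for DRG.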
'Active contour without edges', on parametric manifolds
Show abstract
Region-based active contour models have been widely used in image segmentation on planar images. However,
while a photograph or a medical image is defined on a 2D or 3D Euclidean space, in many cases the
information is defined on curved surfaces, or more general manifolds. In this work we extend the region-based
active contour method to work on parametric manifolds. Essentially, we notice that in some
region-based active contour segmentation methods, it is only the signs of the level set function values,
rather than the values themselves, that contribute to the cost functional. Thus a binary state function
is enough to represent a two-phase segmentation. This gives an alternative view of level set based
optimization, and it is especially useful when the image domain is curved, because the signed distance function
and its derivative are relatively difficult to evaluate in a curved space. Based on this, segmentation
on the curved space proceeds by consecutively changing the binary state function to optimize the cost functional. Finally, the converged binary function gives the segmentation on the manifold. The method is stable and fast. We demonstrate applications of this method, with the cost functional defined using the Chan-Vese model, in neuroimaging, fluid mechanics and geographic fields where the information is naturally defined on curved surfaces.
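The binary-state view of the Chan-Vese data term can be sketched in the plane (the manifold machinery and the length/smoothness term are omitted; the synthetic disc image and function name are our own assumptions): each pixel's binary label is flipped to whichever region mean it is closest to, and the means are re-estimated.

```python
import numpy as np

def binary_chan_vese(img, n_iter=10):
    """Two-phase Chan-Vese data term driven by a binary state function
    instead of a signed distance function: each pixel takes the label of
    the region mean it is closest to, and the means are re-estimated.
    (The curve-length regularization term is omitted for brevity.)"""
    labels = img > img.mean()              # initial binary state
    for _ in range(n_iter):
        c1 = img[labels].mean()            # mean of phase 1
        c2 = img[~labels].mean()           # mean of phase 2
        new = (img - c1) ** 2 < (img - c2) ** 2
        if np.array_equal(new, labels):    # state unchanged: converged
            break
        labels = new
    return labels

# Toy image: bright disc (value ~1) on a dark background (~0) plus noise.
rng = np.random.default_rng(1)
yy, xx = np.mgrid[:32, :32]
disc = ((yy - 16) ** 2 + (xx - 16) ** 2) < 8 ** 2
img = disc.astype(float) + rng.normal(0.0, 0.1, (32, 32))
seg = binary_chan_vese(img)
```

No signed distance function or gradient is ever evaluated, which is the property that makes this formulation convenient on curved domains.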
Blood vessel segmentation using line-direction vector based on Hessian analysis
Show abstract
Grading of stenoses is important for deciding the treatment strategy in the diagnosis of vascular diseases such as arterial occlusive disease or thromboembolism.
It is also important to understand the vasculature in minimally invasive surgery such as laparoscopic surgery or natural orifice translumenal endoscopic surgery.
Precise segmentation and recognition of blood vessel regions are indispensable tasks in medical image processing systems.
Previous methods utilize only a "lineness" measure, which is computed by Hessian analysis.
However, the difference in intensity values between a voxel of a thin blood vessel and a voxel of the surrounding tissue is generally decreased by the partial volume effect.
Therefore, previous methods cannot extract thin blood vessel regions precisely.
This paper describes a novel blood vessel segmentation method that can extract thin blood vessels while suppressing false positives.
The proposed method utilizes not only lineness measure but also line-direction vector corresponding to the largest eigenvalue in Hessian analysis.
By introducing line-direction information, it is possible to distinguish between a blood vessel voxel and a voxel having a low lineness measure caused by noise.
In addition, we consider the scale of blood vessel.
The proposed method can reduce false positives in some line-like tissues close to blood vessel regions by utilization of iterative region growing with scale information.
The experimental results show that thin blood vessels (0.5 mm in diameter, almost the same as the voxel spacing) can be extracted accurately by the proposed method.
Brain segmentation performance using T1-weighted images versus T1 maps
Show abstract
The recent driven equilibrium single-pulse observation of T1 (DESPOT1) approach permits real-time clinical
acquisition of large-volume and high-isotropic-resolution T1 mapping of MR tissue parameters with improved
uniformity. It is assumed that the quantitative nature of T1 maps will facilitate clinical applications such as disease
diagnosis and comparison across subjects. However, there is not yet enough quantitative evidence on the
actual benefit of adopting T1 maps, especially in computer-aided medical image analysis tasks. In this study, we
compare methods with respect to image types, T1-weighted images or T1 maps, in automatic brain MRI segmentation.
Our experimental results demonstrate that, using T1 maps, different segmentation algorithms show
better agreement with each other than when using T1-weighted images. Furthermore, through
multi-dimensional-scaling projection, we are able to visualize the relative affinity among segmentation results,
which reveals that the projections of those segmentations using two different types of input images tend to form
two separate clusters. Finally, by comparing to expert-segmented references of brain sub-regions,
our results clearly indicate better agreement between the manual references and the automatic segmentations on T1
maps. In other words, our study provides evidence for the hypothesis that, compared to the conventionally
used T1-weighted images, T1 maps lead to improved reliability in automatic brain MRI segmentation tasks.
Detection of small human cerebral cortical lesions with MRI under different levels of Gaussian smoothing: applications in epilepsy
Diego Cantor-Rivera,
Maged Goubran,
Alan Kraguljac,
et al.
Show abstract
The main objective of this study was to assess the effect of smoothing filter selection in Voxel-Based Morphometry
studies on structural T1-weighted magnetic resonance images. Gaussian filters of 4 mm, 8 mm or 10 mm Full
Width at Half Maximum are commonly used, based on the assumption that the filter size should be at least
twice the voxel size to obtain robust statistical results. The hypothesis of the presented work was that the
selection of the smoothing filter influenced the detectability of small lesions in the brain. Mesial Temporal
Sclerosis associated with epilepsy was used as the case to demonstrate this effect.
Twenty T1-weighted MRIs from the BrainWeb database were selected. A small phantom lesion was placed
in the amygdala, hippocampus, or parahippocampal gyrus of ten of the images. Subsequently the images were
registered to the ICBM/MNI space. After grey matter segmentation, a T-test was carried out to compare each
image containing a phantom lesion with the rest of the images in the set. For each lesion the T-test was repeated
with different Gaussian filter sizes. Voxel-Based Morphometry detected some of the phantom lesions. Of the
three parameters considered (location, size, and intensity), location was shown to be the dominant factor in
the detection of the lesions.
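The FWHM-specified smoothing central to this study can be sketched as follows; the conversion FWHM = 2*sqrt(2*ln 2)*sigma is standard, while the grid size and impulse "lesion" are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_fwhm(img, fwhm_mm, voxel_mm=1.0):
    """Gaussian smoothing specified by FWHM (as in VBM), converted to
    the standard deviation SciPy expects: sigma = FWHM / (2*sqrt(2*ln 2))."""
    sigma = fwhm_mm / (voxel_mm * 2.0 * np.sqrt(2.0 * np.log(2.0)))
    return gaussian_filter(img, sigma)

# A small "lesion": a single bright voxel on a 1 mm isotropic grid.
img = np.zeros((31, 31, 31))
img[15, 15, 15] = 1.0
narrow = smooth_fwhm(img, fwhm_mm=4.0)
wide = smooth_fwhm(img, fwhm_mm=8.0)
```

A wider filter spreads the same lesion signal over more voxels and lowers its peak, which is why the choice of FWHM changes whether a small lesion survives the statistical threshold.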
Automated method for tracing leading and trailing processes of migrating neurons in confocal image sequences
Show abstract
Segmentation, tracking, and tracing of neurons in video imagery are important steps in many neuronal migration
studies and can be inaccurate and time-consuming when performed manually. In this paper, we present an
automated method for tracing the leading and trailing processes of migrating neurons in time-lapse image stacks
acquired with a confocal fluorescence microscope. In our approach, we first locate and track the soma of the
cell of interest by smoothing each frame and tracking the local maxima through the sequence. We then trace
the leading process in each frame by starting at the center of the soma and stepping repeatedly in the most
likely direction of the leading process. This direction is found at each step by examining second derivatives of
fluorescent intensity along curves of constant radius around the current point. Tracing terminates after a fixed
number of steps or when fluorescent intensity drops below a fixed threshold. We evolve the resulting trace to
form an improved trace that more closely follows the approximate centerline of the leading process. We apply a
similar algorithm to the trailing process of the cell by starting the trace in the opposite direction. We demonstrate
our algorithm on two time-lapse confocal video sequences of migrating cerebellar granule neurons (CGNs). We
show that the automated traces closely approximate ground truth traces to within 1 or 2 pixels on average.
Additionally, we compute line intensity profiles of fluorescence along the automated traces and quantitatively
demonstrate their similarity to manually generated profiles in terms of fluorescence peak locations.
Quantitative CT for volumetric analysis of medical images: initial results for liver tumors
Alexander S. Behnaz,
James Snider,
Eneh Chibuzor,
et al.
Show abstract
Quantitative CT for volumetric analysis of medical images is increasingly being proposed for monitoring
patient response during chemotherapy trials. An integrated MATLAB GUI has been developed for an
oncology trial at Georgetown University Hospital. This GUI allows for the calculation and visualization of
the volume of a lesion. The GUI provides an estimate of the volume of the tumor using a semi-automatic
segmentation technique. This software package features a fixed parameter adaptive filter from the ITK toolkit
and a tumor segmentation algorithm to reduce inter-user variability and to facilitate rapid volume
measurements. The system also displays a 3D rendering of the segmented tumor, allowing the end user to
have not only a quantitative measure of the tumor volume, but a qualitative view as well. As an initial
validation test, several clinical cases were hand-segmented, and then compared against the results from the
tool, showing good agreement.
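The volume estimate at the core of such a tool reduces to voxel counting scaled by voxel size; this minimal sketch (the mask shape, spacing, and function name are illustrative assumptions) shows the computation.

```python
import numpy as np

def lesion_volume_ml(mask, spacing_mm):
    """Volume of a binary segmentation: voxel count times voxel volume.
    spacing_mm is the (z, y, x) voxel spacing in millimetres."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0   # mm^3 -> mL

# Toy segmented lesion: a 10 x 10 x 10 voxel cube at 1 x 0.5 x 0.5 mm spacing.
mask = np.zeros((20, 20, 20), dtype=bool)
mask[5:15, 5:15, 5:15] = True
vol = lesion_volume_ml(mask, (1.0, 0.5, 0.5))   # 1000 voxels * 0.25 mm^3
```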
Multiobject segmentation using coupled shape space models
Show abstract
Due to the noise and artifacts often encountered in medical images, segmenting objects in them is one of the most
challenging tasks in medical image analysis. Model-based approaches like statistical shape models (SSMs) incorporate
prior knowledge that supports object detection in case of incomplete evidence from image data. In this paper, we present
a method to increase information about the object's shape in problematic image areas by incorporating mutual shape
information from other entities in the image. This is done by using a common shape space of multiple objects as an
additional restriction. Two different approaches to implementing mutual shape information are presented. Evaluation was
performed on nine cardiac images by simultaneous segmentation of the epi- and endocardium of the left heart ventricle
using the proposed methods. The results show that the segmentation quality is improved with both methods. For the
better one, the average surface distance error is approx. 40% lower.
Automatic recognition and validation of the common carotid artery wall segmentation in 100 longitudinal ultrasound images: an integrated approach using feature selection, fitting and classification
Show abstract
Most of the algorithms for the common carotid artery (CCA) segmentation require human interaction. The aim of this
study is to show a novel accurate algorithm for the computer-based automated tracing of CCA in longitudinal B-Mode
ultrasound images.
One hundred ultrasound B-Mode longitudinal images of the CCA were processed to delineate the region of interest
containing the artery. The algorithm is based on geometric feature extraction, line fitting, and classification. Output of
the algorithm is the tracings of the near and far adventitia layers. Performance of the algorithm was validated against
human tracings (ground truth) and benchmarked with a previously developed automated technique.
Ninety-eight images were correctly processed, resulting in an overall system error (with respect to ground truth) equal to
0.18 ± 0.17 mm (near adventitia) and 0.17 ± 0.24 mm (far adventitia). In far adventitia detection, our novel technique
outperformed the current standard method, which showed overall system errors equal to 0.07 ± 0.07 mm and 0.49 ± 0.27
mm for near and far adventitia, respectively. We also showed that our new technique is quite insensitive to noise and has
performance independent of the subset of images used for training the classifiers.
The superior architecture of this methodology could constitute a general basis for the development of completely automatic
CCA segmentation strategies.
Automated fat measurement and segmentation with intensity inhomogeneity correction
Show abstract
Adipose tissue (AT) content, especially visceral AT (VAT), is an important indicator for risks of many disorders,
including heart disease and diabetes. Fat measurement by traditional means is often inaccurate and cannot separate
subcutaneous and visceral fat. MRI offers a medium to obtain accurate measurements and segmentation between
subcutaneous and visceral fat. We present an approach to automatically label the voxels associated with adipose tissue
and separate them into subcutaneous and visceral compartments. Our method uses non-parametric non-uniform intensity
normalization (N3) to correct for image artifacts and inhomogeneities, fuzzy c-means to cluster AT regions and active
contour models to separate SAT and VAT. Our algorithm has four stages: body masking, preprocessing, SAT and VAT
separation, and tissue classification and quantification. The method was validated against a manual method performed by
two observers, who used thresholds and manual contours to separate SAT and VAT. We measured 25 patients, 22 of
whom were included in the final analysis; the other three had too much artifact for automated processing. For SAT
and total AT, differences between manual and automatic measurements were comparable to manual inter-observer
differences. VAT measurements showed more variance in the automated method, likely due to inaccurate contours.
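The fuzzy c-means clustering stage used to label adipose-tissue voxels can be sketched on 1D intensities (the N3 correction and active contour steps are omitted; the intensity values, fuzzifier, and function name are our own illustrative assumptions).

```python
import numpy as np

def fuzzy_cmeans_1d(x, m=2.0, n_iter=50):
    """Two-cluster fuzzy c-means on 1D intensities: alternate between
    membership updates u and center updates c (fuzzifier m)."""
    c = np.array([np.min(x), np.max(x)], dtype=float)   # spread initial centers
    for _ in range(n_iter):
        d = np.abs(x[:, None] - c[None, :]) + 1e-9      # point-center distances
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)        # memberships sum to 1
        w = u ** m
        c = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)  # weighted centers
    return np.sort(c), u

# Synthetic intensity sample: lean tissue around 80, adipose around 200.
rng = np.random.default_rng(2)
intensities = np.concatenate([rng.normal(80.0, 10.0, 300),
                              rng.normal(200.0, 10.0, 300)])
centers, u = fuzzy_cmeans_1d(intensities)
```

Unlike a hard threshold, the soft memberships `u` degrade gracefully for voxels whose intensity falls between the tissue classes, which is the motivation for using fuzzy clustering here.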
A novel fast liver segmentation method with graph cuts
Show abstract
Liver segmentation remains a difficult problem in medical image processing, especially when accuracy and speed are
both seriously considered. Graph Cuts is a powerful segmentation tool through which optimal results are obtained by
considering both region and boundary information in images. However, traditional Graph Cuts algorithms are
computationally expensive and impractical for real clinical use. Recently, the GPU (Graphics
Processing Unit) has evolved into a cheap and powerful general-purpose computing instrument, especially since
NVIDIA released its revolutionary CUDA (Compute Unified Device Architecture). In this paper, we introduce a novel
method to segment 3D liver images on the GPU, using a push-relabel style 3D Graph Cuts implementation. Some
modifications, such as 3D storage structures, are also introduced to make our implementation well suited to the GPU's
parallel computing capabilities. Experiments have been executed on human liver CT data and show that our
method obtains results in much less time than the CPU implementation.
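The graph-cut formulation itself can be sketched on a CPU with SciPy's max-flow solver (this is not the paper's CUDA push-relabel implementation; the tiny 1D "image", capacity choices, and node layout are our own illustrative assumptions). t-links encode intensity likelihoods, n-links encode the smoothness term, and the min cut yields the labeling.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_flow

# Nodes: 0 = source, 1..4 = pixels in a 1D chain, 5 = sink.
intensity = np.array([20, 30, 220, 230])
n = 6
cap = np.zeros((n, n), dtype=np.int32)
for i, v in enumerate(intensity, start=1):
    cap[0, i] = v            # source -> pixel: foreground affinity
    cap[i, 5] = 255 - v      # pixel -> sink:   background affinity
for i in range(1, 4):        # neighbour links along the chain (smoothness)
    cap[i, i + 1] = cap[i + 1, i] = 30

res = maximum_flow(csr_matrix(cap), 0, 5)

# Min-cut labels: pixels reachable from the source in the residual
# graph are foreground.
residual = cap - res.flow.toarray()
reach, stack = {0}, [0]
while stack:
    u = stack.pop()
    for v in np.nonzero(residual[u] > 0)[0]:
        if v not in reach:
            reach.add(v)
            stack.append(v)
labels = [i in reach for i in range(1, 5)]
```

The two bright pixels end up on the source (foreground) side and the two dark pixels on the sink side, with the single cut n-link paying the boundary cost.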
Thrombus segmentation by texture dynamics from microscopic image sequences
Nicolas Brieu,
Jovana Serbanovic-Canic,
Ana Cvejic,
et al.
Show abstract
The genetic factors of thrombosis are commonly explored by microscopically imaging the coagulation of blood
cells induced by injuring a vessel of mice or of zebrafish mutants. The latter species is particularly interesting
since skin transparency permits non-invasive acquisition of microscopic images of the scene with a CCD camera
and estimation of the parameters characterizing thrombus development. These parameters are currently
determined by manual outlining, which is both error prone and extremely time consuming. Even though a
technique for automatic thrombus extraction would be highly valuable for gene analysts, little work can be
found, which is mainly due to very low image contrast and spurious structures. In this work, we propose to
semi-automatically segment the thrombus over time from microscopic image sequences of wild-type zebrafish
larvae. To compensate for the lack of valuable spatial information, our main idea consists of exploiting the temporal
information by modeling the variations of the pixel intensities over successive temporal windows with a linear
Markov-based dynamic texture formalization. We then derive an image from the estimated model parameters,
which represents the probability of a pixel to belong to the thrombus. We employ this probability image to
accurately estimate the thrombus position via an active contour segmentation incorporating also prior and
spatial information of the underlying intensity images. The performance of our approach is tested on three
microscopic image sequences. We show that the thrombus is accurately tracked over time in each sequence if the
respective parameters controlling prior influence and contour stiffness are correctly chosen.
Relaxed image foresting transforms for interactive volume image segmentation
Show abstract
The Image Foresting Transform (IFT) is a framework for image partitioning, commonly used for interactive segmentation. Given an image where a subset of the image elements (seed-points) have been assigned correct segmentation labels, the IFT completes the labeling by computing minimal cost paths from all image elements to the seed-points. Each image element is then given the same label as the closest seed-point. Here, we propose the relaxed IFT (RIFT). This modified version of the IFT features an additional parameter to control the smoothness of the segmentation boundary. The RIFT yields more intuitive segmentation results in the presence of noise and
weak edges, while maintaining a low computational complexity. We show an application of the method to the refinement of manual segmentations of a thoracolumbar muscle in magnetic resonance images. The performed study shows that the refined segmentations are qualitatively similar to the manual segmentations, while intra-user variations are reduced by more than 50%.
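The seed-propagation step of the IFT can be sketched as a Dijkstra-style expansion over the image graph; the max-edge path cost and 4-connectivity below are common choices rather than necessarily this paper's, and the RIFT relaxation parameter is omitted:

```python
import heapq
import numpy as np

def ift_label(image, seeds):
    """Minimal Image Foresting Transform on a 2D image (4-connectivity).
    `seeds` maps (row, col) -> label. The path cost is the maximum absolute
    intensity difference along the path (a common IFT cost function); every
    pixel receives the label of the seed it reaches via a minimum-cost path."""
    h, w = image.shape
    cost = np.full((h, w), np.inf)
    label = np.zeros((h, w), dtype=int)
    heap = []
    for (r, c), lab in seeds.items():
        cost[r, c] = 0.0
        label[r, c] = lab
        heapq.heappush(heap, (0.0, r, c, lab))
    while heap:
        d, r, c, lab = heapq.heappop(heap)
        if d > cost[r, c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = max(d, abs(float(image[nr, nc]) - float(image[r, c])))
                if nd < cost[nr, nc]:
                    cost[nr, nc] = nd
                    label[nr, nc] = lab
                    heapq.heappush(heap, (nd, nr, nc, lab))
    return label
```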
Digital bowel cleansing for virtual colonoscopy with probability map
Show abstract
Virtual colonoscopy (VC) is a noninvasive method for colonic polyp screening, reconstructing three-dimensional
models of the colon using computerized tomography (CT). Identifying the residual fluid retained inside the colon
is a major challenge for 3D virtual colonoscopy using fecal tagging CT data. Digital bowel cleansing aims to
segment the colon lumen from a patient abdominal image acquired using an oral contrast agent for colonic material
tagging. After removing the segmented residual fluid, the clean virtual colon model can be constructed
and visualized for screening. We present a novel automatic method for digital cleansing using a probability map.
The random walker algorithm is used to generate the probability map for air (inside the colon), soft tissue, and
residual fluid, instead of segmenting the colon lumen directly. The probability map is then used to remove residual fluid
from the original CT data. The proposed method was tested using VC study data at National Cancer Institute
at NIH. The performance of our VC system for polyp detection has been improved by providing radiologists with
more detailed information about the colon wall.
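Assuming per-class probability maps are already available (e.g., from a random-walker run seeded for air, soft tissue, and tagged fluid), the cleansing step itself reduces to a per-voxel relabeling. A minimal sketch, with `AIR_HU` and the function name as our own illustrative choices:

```python
import numpy as np

AIR_HU = -1000  # CT attenuation used to "fill" removed fluid voxels (assumed)

def digital_cleanse(ct, p_air, p_tissue, p_fluid):
    """Replace voxels whose most probable class is tagged residual fluid
    with air, producing a virtually cleansed CT volume. The probability
    maps are assumed to come from a prior random-walker segmentation."""
    probs = np.stack([p_air, p_tissue, p_fluid])
    winner = probs.argmax(axis=0)   # 0 = air, 1 = tissue, 2 = fluid
    cleansed = ct.copy()
    cleansed[winner == 2] = AIR_HU
    return cleansed
```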
Optimal combination of multiple cortical surface parcellations
Show abstract
A variety of methodologies have been developed for the parcellation of the human cortical surface into sulcal or gyral
regions, owing to its importance in structural and functional mapping of the human brain. However, characterizing the
performance of surface parcellation methods and the estimation of ground truth of segmentation are still open problems.
In this paper, we present an algorithm for simultaneous truth and performance estimation of various approaches for
human cortical surface parcellation. The probabilistic true segmentation is estimated as a weighted combination of the
segmentations produced by multiple methods. Afterward, an Expectation-Maximization (EM) algorithm is used to
optimize the weighting depending on the estimated performance level of each method. Furthermore, a spatial
homogeneity constraint modeled by the Hidden Markov Random Field (HMRF) theory is incorporated to refine the
estimated true segmentation into a spatially homogeneous decision. The proposed method has been evaluated using both
synthetic and real data. The experimental results demonstrate the validity of the proposed method. Additionally, it
has been used to generate reference sulci regions to perform a comparison study of three methods for cortical surface
parcellation.
A multi-scale approach to mass segmentation using active contour models
Show abstract
As a key step preceding mass classification, mass segmentation plays an important role in computer-aided diagnosis
(CAD). In this paper, we propose a novel scheme for breast mass segmentation in mammograms based on the level
set method and multi-scale analysis. The mammogram is first decomposed by a Gaussian pyramid into a sequence of images
from fine to coarse; the C-V model is then applied at the coarse scale, and the obtained rough contour is used as the
initial contour for segmentation at the fine scale. A local active contour (LAC) model based on local image information
is utilized to refine the rough contour at the fine scale. In addition, area and gray-level features extracted
from the coarse segmentation are used to set the parameters of the LAC model automatically, improving the adaptivity of our
method. The results show that the proposed multi-scale segmentation method is more accurate and robust than conventional ones.
Statistical fusion of surface labels provided by multiple raters
Show abstract
Studies of the size and morphology of anatomical structures rely on accurate and reproducible delineation of the structures,
obtained either by human raters or automatic segmentation algorithms. Measures of reproducibility and variability are
vital aspects of such studies and are usually estimated using repeated scans or repeated delineations (in the case of human
raters). Methods exist for simultaneously estimating the true structure and rater performance parameters from multiple
segmentations and have been demonstrated on volumetric images. In this work, we extend the applicability of previous
methods to two-dimensional surfaces parameterized as triangle meshes. Label homogeneity is enforced using a Markov
random field formulated with an energy that addresses the challenges introduced by the surface parameterization. The
method was tested using both simulated raters and cortical gyral labels. Simulated raters are computed using a global
error model as well as a novel and more realistic boundary error model. We study the impact of raters and their accuracy
based on both models, and show how effectively this method estimates the true segmentation on simulated surfaces. The
Markov random field formulation was shown to effectively enforce homogeneity for raters suffering from label noise. We
demonstrated that our method provides substantial improvements in accuracy over single-atlas methods for all experimental
conditions.
Ball-scale based hierarchical multi-object recognition in 3D medical images
Show abstract
This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically
recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is
to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image
so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via
the following set of key ideas: (a) A semi-automatic way of constructing a multi-object shape model assembly.
(b) A novel strategy of encoding, via b-scale, the pose relationship between objects in the training images and
their intensity patterns captured in b-scale images. (c) A hierarchical mechanism of positioning the model, in
a one-shot way, in a given image from a knowledge of the learnt pose relationship and the b-scale image of
the given image to be segmented. The evaluation results on a set of 20 routine clinical abdominal female and
male CT data sets indicate the following: (1) Incorporating a large number of objects improves the recognition
accuracy dramatically. (2) The recognition algorithm can be thought of as a hierarchical framework in which
quick placement of the model assembly constitutes coarse recognition and delineation itself constitutes the finest
recognition. (3) Scale yields useful information about the relationship between the model assembly and any given
image, such that recognition results in a placement of the model close to the actual pose without any
elaborate searches or optimization. (4) Effective object recognition can make delineation most accurate.
Automatic segmentation of the aorta and the adjoining vessels
Show abstract
Diseases of the cardiovascular system are one of the main causes of death in the Western world. Especially the
aorta and its main descending vessels are of high importance for diagnosis and treatment.
Today, minimally invasive interventions are becoming increasingly popular due to advantages such as cost
effectiveness and minimized risk for the patient. Such interventions, which require considerable
coordination skill, can be trained on task-training systems, i.e., operation simulation units. These systems
require a data model that can be reconstructed from given patient data sets. In this paper, we present a
method that segments and classifies the aorta, carotids, and ostia (including coronary arteries) in one
run, fully automatically and highly robustly. The system tolerates changes in topology, streak artifacts in CT caused
by calcification, and inhomogeneous distribution of contrast agent. Both CT and MRI images can be processed.
The underlying algorithm is based on a combination of Vesselness Enhancement Diffusion, Region Growing, and
the Level Set Method. The system showed good results on all 15 real patient data sets, with a deviation
smaller than two voxels.
A completely automated processing pipeline for lung and lung lobe segmentation and its application to the LIDC-IDRI data base
Show abstract
Automated segmentation of lung lobes in thoracic CT images has relevance for various diagnostic purposes like
localization of tumors within the lung or quantification of emphysema. Since emphysema is a known risk factor for lung
cancer, both purposes are even related to each other. The main steps of the segmentation pipeline described in this paper
are the lung detector and the lung segmentation based on a watershed algorithm, and the lung lobe segmentation based
on mesh model adaptation. The segmentation procedure was applied to data sets from the database of the Image Database
Resource Initiative (IDRI), which currently contains over 500 thoracic CT scans with delineated lung nodule annotations.
We visually assessed the reliability of the individual segmentation steps, finding a success rate of 98% for lung detection
and 90% for lung delineation. For about 20% of the cases, we found the lobe segmentation not to be anatomically
plausible. A modeling confidence measure is introduced that gives a quantitative indication of the segmentation quality.
For a demonstration of the segmentation method we studied the correlation between emphysema score and malignancy
on a per-lobe basis.
Gyral parcellation of cortical surfaces via coupled flow field tracking
Gang Li,
Lei Guo,
Kaiming Li,
et al.
Show abstract
This paper presents a novel method for parcellation of the cortical surface of human brain into gyral based regions via
coupled flow field tracking. The proposed method consists of two major steps. First, the cortical surface is automatically
parcellated into sulcal based regions using several procedures: estimating principal curvatures and principal directions;
applying the hidden Markov random field and the Expectation-Maximization (HMRF-EM) framework for sulcal region
segmentation based on the maximum principal curvature; diffusing the maximum principal direction field in order to
propagate reliable and informative principal directions at gyral crests and sulcal bottoms to other flat cortical regions
with noisy principal directions by minimization of an energy function; tracking the flow field towards sulcal bottoms to
parcellate the cortical surfaces into sulcal basins. The sulcal parcellation provides a very good initialization for the
following steps of gyral parcellation on cortical surfaces. Second, based on the sulcal parcellation results, the cortical
surface is further parcellated into gyral based regions using the following procedures: extracting gyral crest segments;
dilating gyral crest segments; inverting the principal direction flow field and tracking the flow field towards gyral crests
in order to partition the cortical surface into a collection of gyral patches; merging gyral patches to obtain gyral parcellation of the cortical surface. The proposed algorithm pipeline is applied to nine randomly selected cortical surfaces of normal brains and promising results are obtained. The accuracy of the semi-automatic gyral parcellation is comparable to that labeled manually by experts.
Segmentation of deformable organs from medical images using particle swarm optimization and nonlinear shape priors
Show abstract
In many medical applications, the automatic segmentation of deformable organs from medical images is indispensable,
and its accuracy is of special interest. However, the automatic segmentation of these organs is a challenging task
owing to their complex shapes. Moreover, medical images usually contain noise, clutter, or occlusion, and considering
the image information alone often leads to poor segmentation. In this paper, we propose a fully automated
technique for the segmentation of deformable organs from medical images. In this technique, the segmentation is
performed by fitting a nonlinear shape model to pre-segmented images. Kernel principal component analysis
(KPCA) is utilized to capture the complex organ deformations and to construct the nonlinear shape model. The pre-segmentation
is carried out by labeling each pixel according to its high-level texture features extracted using the overcomplete
wavelet packet decomposition. Furthermore, to guarantee an accurate fit between the nonlinear model and the pre-segmented images, the particle swarm optimization (PSO) algorithm is employed to adapt the model parameters to novel images. In this paper, we demonstrate the competence of the proposed technique by applying it to liver segmentation from computed tomography (CT) scans of different patients.
Fuzzy affinity induced curve evolution
Show abstract
In this paper, we present a fuzzy affinity induced curve evolution method for image segmentation without the
need for solving PDEs, thereby making level set implementations vastly more efficient. We make use of fuzzy
affinity that has been employed in fuzzy connectedness methods as a speed function for curve evolution. The
fuzzy affinity consists of two components, namely homogeneity-based affinity and object-feature-based affinity,
which take into account both boundary gradient and object region information. Ball scale - a local morphometric
structure - has been used for image noise suppression. We use a similar strategy for curve evolution as the
method in [1], but simplify the voxel switching mechanism so that only one linked list is used to implicitly represent
the evolving curve. We have presented several studies to evaluate the performance of the method based on brain
MR and lung CT images. These studies demonstrate high accuracy and efficiency of the proposed method.
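A pairwise affinity of the two-component kind described above might look as follows; the Gaussian forms, parameter values, and min-combination are illustrative assumptions in the style of fuzzy connectedness, not the paper's exact definition:

```python
import numpy as np

def fuzzy_affinity(ix, iy, sigma_h=10.0, mean_obj=100.0, sigma_o=20.0):
    """Affinity between two adjacent voxels with intensities ix, iy.
    Combines a homogeneity component (penalizing intensity difference)
    with an object-feature component (closeness of the mean intensity to
    an expected object intensity). The Gaussian forms, the parameter
    values, and taking the minimum are illustrative choices."""
    psi_h = np.exp(-((ix - iy) ** 2) / (2.0 * sigma_h ** 2))
    psi_o = np.exp(-(((ix + iy) / 2.0 - mean_obj) ** 2) / (2.0 * sigma_o ** 2))
    return min(psi_h, psi_o)
```

The resulting affinity value can then serve directly as the speed function driving the curve evolution.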
Multi-structure segmentation of multi-modal brain images using artificial neural networks
Eun Young Kim,
Hans Johnson
Show abstract
A method for simultaneous segmentation of multiple anatomical brain structures from multi-modal MR images
has been developed. An artificial neural network (ANN) was trained from a set of feature vectors created
by a combination of high-resolution registration methods, atlas-based spatial probability distributions, and a
training set of 16 expert-traced data sets. The feature vectors were adapted to increase the performance of ANN
segmentation: 1) a modified spatial location accounting for the structural symmetry of the human brain, 2) neighbors along the
priors' descent for directional consistency, and 3) candidate vectors based on the priors for the segmentation of
multiple structures. The trained neural network was then applied to 8 data sets, and the results were compared
with expertly traced structures for validation purposes. Several reliability metrics, including relative
overlap, similarity index, and intraclass correlation of the ANN-generated segmentations against a manual trace, are
similar to or higher than those of previously developed methods. The ANN provides a level of consistency
between subjects and a time efficiency relative to human labor that allow it to be used for very large studies.
Segmentation of cervical cell images using mean-shift filtering and morphological operators
C. Bergmeir,
M. García Silvente,
J. Esquivias López-Cuervo,
et al.
Show abstract
Screening plays an important role in the fight against cervical cancer. One of the most challenging steps in
automating the screening process is the segmentation of nuclei in cervical cell images, as the difficulty
of performing this segmentation accurately varies widely among nuclei. We present an algorithm to perform
this task. After background determination in an overview image, and interactive identification of regions of
interest (ROIs) at lower magnification levels, ROIs are extracted and processed at the full magnification level
of 40x. Subsequent to initial background removal, the image regions are smoothed by mean-shift and median
filtering. Then, segmentations are generated by an adaptive threshold. The connected components in the
resulting segmentations are filtered with morphological operators by characteristics such as shape, size and
roundness. The algorithm was tested on a set of 50 images and was found to outperform other methods.
Multilevel wireless capsule endoscopy video segmentation
Show abstract
Wireless Capsule Endoscopy (WCE) is a relatively new technology (FDA approved in 2002) allowing doctors to view
most of the small intestine. WCE transmits more than 50,000 video frames per examination and the visual inspection of
the resulting video is a highly time-consuming task even for the experienced gastroenterologist. Typically, a medical
clinician spends one to two hours analyzing a WCE video. To reduce the assessment time, it is critical to develop a
technique to automatically discriminate digestive organs and shots, each of which consists of the same or similar frames. In
this paper, a multi-level WCE video segmentation methodology is presented to reduce the examination time.
A probability tracking approach to segmentation of ultrasound prostate images using weak shape priors
Show abstract
Prostate specific antigen density is an established parameter for indicating the likelihood of prostate cancer. To
this end, the size and volume of the gland have become pivotal quantities used by clinicians during the standard
cancer screening process. As an alternative to manual palpation, an increasing number of volume estimation
methods are based on the imagery data of the prostate. The necessity to process large volumes of such data
requires automatic segmentation algorithms, which can accurately and reliably identify the true prostate region.
In particular, transrectal ultrasound (TRUS) imaging has become a standard means of assessing the prostate
due to its safe nature and high benefit-to-cost ratio. Unfortunately, modern TRUS images are still plagued by
many ultrasound imaging artifacts, such as speckle noise and shadowing, which result in relatively low contrast
and reduced SNR of the acquired images. Consequently, many modern segmentation methods incorporate prior
knowledge about the prostate geometry to enhance traditional segmentation techniques. In this paper, a novel
approach to the problem of TRUS segmentation, particularly the definition of the prostate shape prior, is
presented. The proposed approach is based on the concept of distribution tracking, which provides a unified
framework for tracking both photometric and morphological features of the prostate. In particular, the tracking
of morphological features defines a novel type of "weak" shape prior. The latter acts as a regularization force
that minimally biases the segmentation procedure while rendering the final estimate stable and robust. The value of the proposed methodology is demonstrated in a series of experiments.
A new osteophyte segmentation method with applications to an anterior cruciate ligament transection rabbit femur model via micro-CT imaging
Show abstract
Osteophyte is an additional bony growth on a normal bone surface limiting or stopping motion in a deteriorating joint.
Detection and quantification of osteophytes from CT images is helpful in assessing disease status as well as treatment and
surgery planning. However, it is difficult to segment osteophytes from healthy bones using simple thresholding or
edge/texture features in CT imaging. Here, we present a new method, based on active shape model (ASM), to solve this
problem and evaluate its application to ex vivo μCT images in an ACLT rabbit femur model. The common idea behind
most ASM-based segmentation methods is to first build a parametric shape model from a training dataset and, during
application, find a shape instance from the model that optimally fits the target image. However, this poses a fundamental
difficulty for the current application, because a diseased bone shape is significantly altered at regions of osteophyte
deposition, misguiding an ASM method and eventually leading to suboptimal segmentation results. Here, we introduce a
new partial-ASM method that uses the bone shape over healthy regions and extrapolates the shape over diseased regions
following the underlying shape model. Once the healthy bone region is detected, the osteophyte is segmented by subtracting
the partial-ASM-derived shape from the overall diseased shape. Also, a new semi-automatic method is presented in this paper
for efficiently building a 3D shape model for rabbit femur. The method has been applied to μCT images of 2-, 4-, and
8-week post ACLT and sham-treated rabbit femurs and results of reproducibility and sensitivity analyses of the new
osteophyte segmentation method are presented.
Segmentation of blurry object by learning from examples
Show abstract
Objects with blurry boundaries are a very common problem across image modalities and applications in the medical field. Examples
include skin lesion segmentation, tumor delineation in mammograms, tongue tracing in MR images, etc. To address
the blurry boundary problem, region-based active contour methods have been developed, which utilize global image features
to address the problem of fuzzy edges. Image features such as texture, intensity histograms, or structure tensors have also
been studied for region-based models. On the other hand, trained domain experts have been much more effective at performing
such tasks than computer algorithms based on a set of carefully selected, sophisticated image features. In
this paper, we present a novel method that employs a learning strategy to guide an active contour algorithm in delineating
blurry objects in the imagery. Our method consists of two steps. First, using gold-standard examples, we derive statistical
descriptions of the object boundary. Second, in the segmentation process, the statistical description is reinforced to achieve
desired delineation. Experiments were conducted using both synthetic images and skin lesion images. Our synthetic
images were created with a 2D Gaussian function, which closely resembles objects with blurry boundaries. The robustness of
our method with respect to initialization was evaluated: using different initial curves, similar results were achieved consistently.
In experiments with skin lesion images, the outcome matches the contours in reference images prepared
by human experts. In summary, our experiments using both synthetic and skin lesion images demonstrated high
segmentation accuracy and robustness.
Computer-aided detection of bladder tumors based on the thickness mapping of bladder wall in MR images
Show abstract
Bladder cancer is reported to be the fifth leading cause of cancer deaths in the United States. Recent advances in medical
imaging technologies, such as magnetic resonance (MR) imaging, make virtual cystoscopy a potential alternative with
advantages as being a safe and non-invasive method for evaluation of the entire bladder and detection of abnormalities.
To help reduce the interpretation time and reading fatigue of readers or radiologists, we introduce a computer-aided
detection scheme based on the thickness mapping of the bladder wall, since a locally thickened bladder wall often appears
around tumors. In the thickness mapping method, the path used to measure the thickness can be determined without any
ambiguity by tracing the gradient direction of the potential field between the inner and outer borders of the bladder wall.
The thickness mapping of the three-dimensional inner border surface of the bladder is then flattened to a two-dimensional
(2D) gray image with a conformal mapping method. In the 2D flattened image, a blob detector is applied to
detect the abnormalities, which are the thickened bladder wall regions indicating bladder lesions. Such a scheme was
tested on two MR datasets, one from a healthy volunteer and the other from a patient with a tumor. The results are
preliminary but very promising, with 100% detection sensitivity at 7 FPs per case.
Validation and detection of vessel landmarks by using anatomical knowledge
Show abstract
The detection of anatomical landmarks is an important prerequisite to analyze medical images fully automatically.
Several machine learning approaches have been proposed to parse 3D CT datasets and to determine the
location of landmarks with associated uncertainty. However, it is a challenging task to incorporate high-level
anatomical knowledge to improve these classification results. We propose a new approach to validate candidates
for vessel bifurcation landmarks, which is also applied to systematically search for missed landmarks and to validate ambiguous
ones. A knowledge base is trained providing human-readable geometric information about the vascular system,
mainly vessel lengths, radii, and curvature information, for validation of landmarks and to guide the search
process. To analyze the bifurcation area surrounding a vessel landmark of interest, a new approach is proposed
which is based on Fast Marching and incorporates anatomical information from the knowledge base. Using the
proposed algorithms, an anatomical knowledge base has been generated based on 90 manually annotated CT
images containing different parts of the body. To evaluate the landmark validation, a set of 50 carotid datasets
was tested in combination with a state-of-the-art landmark detector, with excellent results. Besides the
carotid bifurcation, the algorithm is designed to handle a wide range of vascular landmarks, e.g., the celiac, superior
mesenteric, renal, aortic, iliac, and femoral bifurcations.
Automatic optic disc segmentation based on image brightness and contrast
Show abstract
Untreated glaucoma leads to permanent damage of the optic nerve and resultant visual field loss, which can
progress to blindness. As glaucoma often produces additional pathological cupping of the optic disc (OD), the
cup-disc ratio is one measure widely used for glaucoma diagnosis. This paper presents an OD localization
method that automatically segments the OD and so can be applied for cup-disc-ratio-based glaucoma diagnosis.
The proposed OD segmentation method is based on the observations that the OD is normally much
brighter and at the same time has smoother texture characteristics compared with other regions within retinal
images. Given a retinal image, we first capture the OD's smooth texture characteristic by a contrast image
constructed from the local maximum and minimum pixel lightness within a small neighborhood window.
The centre of the OD can then be determined according to the density of the candidate OD pixels, which are
detected as the retinal image pixels with the lowest contrast. After that, an OD region is approximately determined by
a pair of morphological operations, and the OD boundary is finally determined by an ellipse fitted to the
convex hull of the detected OD region. Experiments over 71 retinal images of different qualities show that the
OD region overlap reaches up to 90.37% between the OD boundary ellipses determined by our proposed
method and those manually plotted by an ophthalmologist.
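The contrast image built from local maximum and minimum lightness can be sketched with standard rank filters; the window size below is an illustrative assumption:

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def local_contrast(lightness, window=5):
    """Contrast image from the local maximum and minimum pixel lightness
    within a small neighborhood window. Smooth, bright regions such as the
    optic disc yield low contrast; the window size is an illustrative choice."""
    lmax = maximum_filter(lightness, size=window)
    lmin = minimum_filter(lightness, size=window)
    return lmax - lmin
```

Candidate OD pixels would then be those with the lowest values in the returned contrast image.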
Segmentation of blood clot from CT pulmonary angiographic images using a modified seeded region growing algorithm method
Show abstract
Pulmonary embolism (PE) is a medical condition defined as the obstruction of pulmonary arteries by a blood
clot, usually originating in the deep veins of the lower limbs. PE is a common but elusive illness that can cause
significant disability and death if not promptly diagnosed and effectively treated. CT Pulmonary Angiography
(CTPA) is the first line imaging study for the diagnosis of PE. While clinical prediction rules have been recently
developed to associate short-term risks and stratify patients with acute PE, there is a dearth of objective biomarkers
associated with the long-term prognosis of the disease. Clot (embolus) burden is a promising biomarker for the
prognosis and recurrence of PE and can be quantified from CTPA images. However, to our knowledge, no study
has reported a method for segmentation and measurement of clot from CTPA images. Thus, the purpose of this
study was to develop a semi-automated method for segmentation and measurement of clot from CTPA images. Our
method was based on a Modified Seeded Region Growing (MSRG) algorithm consisting of two steps: (1) the
observer identifies a clot of interest on CTPA images and places a spherical seed over the clot; and (2) a region
grows around the seed on the basis of a rolling-ball process that clusters the neighboring voxels whose CT
attenuation values are within the range of the mean ± two standard deviations of the initial seed voxels. The rolling
ball propagates iteratively until the clot is completely clustered and segmented. Our experimental results revealed
that the performance of the MSRG was superior to that of the conventional SRG for segmenting clots, as evidenced
by reduced degrees of over- or under-segmentation from adjacent anatomical structures. To assess the clinical value
of clot burden for the prognosis of PE, we are currently applying the MSRG for the segmentation and volume
measurement of clots from CTPA images that are acquired in a large cohort of patients with PE in an on-going
NIH-sponsored clinical trial.
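The core growth rule of step (2), clustering neighbors whose attenuation lies within the mean ± two standard deviations of the seed voxels, can be sketched as follows (2D with 4-connectivity for brevity; the rolling-ball constraint of the actual MSRG is omitted, and the function name is our own):

```python
from collections import deque
import numpy as np

def grow_clot(ct, seed_mask, n_std=2.0):
    """Seeded region growing sketch: starting from the seed voxels, absorb
    connected neighbors whose CT attenuation lies within
    mean +/- n_std standard deviations of the initial seed voxels."""
    seed_vals = ct[seed_mask]
    lo = seed_vals.mean() - n_std * seed_vals.std()
    hi = seed_vals.mean() + n_std * seed_vals.std()
    grown = seed_mask.copy()
    queue = deque(zip(*np.nonzero(seed_mask)))
    h, w = ct.shape
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not grown[nr, nc]
                    and lo <= ct[nr, nc] <= hi):
                grown[nr, nc] = True
                queue.append((nr, nc))
    return grown
```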
Development of an acquisition protocol and a segmentation algorithm for wounds of cutaneous Leishmaniasis in digital images
Show abstract
We developed a protocol for the acquisition of digital images and an algorithm for a color-based automatic segmentation
of cutaneous lesions of Leishmaniasis. The protocol for image acquisition provides control over the working
environment to manage brightness, lighting, and undesirable shadows on the lesion using indirect lighting. Also, this
protocol was used to accurately calculate the area of the lesion, expressed in mm², even on curved surfaces, by combining
the information from two consecutive images. Different color spaces were analyzed and compared using ROC curves in
order to determine the color layer with the highest contrast between the background and the wound. The proposed
algorithm is composed of three stages: (1) Location of the wound determined by threshold and mathematical morphology
techniques to the H layer of the HSV color space, (2) Determination of the boundaries of the wound by analyzing the
color characteristics in the YIQ space based on masks (for the wound and the background) estimated from the first stage,
and (3) Refinement of the calculations obtained on the previous stages by using the discrete dynamic contours algorithm.
The segmented regions obtained with the algorithm were compared with manual segmentations made by a medical
specialist. Broadly speaking, our results support the conclusion that color provides useful information for the segmentation
and measurement of cutaneous Leishmaniasis wounds. Results from ten images showed 99% specificity, 89% sensitivity,
and 98% accuracy.
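Stage (1) of the algorithm thresholds the H layer of the HSV space. As a minimal illustration of that idea (not the authors' implementation; the threshold bounds `lo`/`hi` are placeholders, not the paper's values), the hue channel can be computed and thresholded with numpy alone:

```python
import numpy as np

def rgb_to_hue(img):
    """Compute the H channel (range [0, 1)) of the HSV colour space
    for a float RGB image of shape (H, W, 3)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mx = img.max(axis=-1)
    mn = img.min(axis=-1)
    delta = mx - mn
    hue = np.zeros_like(mx)
    mask = delta > 0
    # Piecewise hue definition depending on which channel is maximal.
    rmax = mask & (mx == r)
    gmax = mask & (mx == g) & ~rmax
    bmax = mask & ~rmax & ~gmax
    hue[rmax] = ((g - b)[rmax] / delta[rmax]) % 6
    hue[gmax] = (b - r)[gmax] / delta[gmax] + 2
    hue[bmax] = (r - g)[bmax] / delta[bmax] + 4
    return hue / 6.0

def hue_mask(img, lo, hi):
    """Binary wound-candidate mask: pixels whose hue falls in [lo, hi]."""
    h = rgb_to_hue(img)
    return (h >= lo) & (h <= hi)
```

The mask would then be cleaned up with the morphological operations mentioned in the abstract before the YIQ-based refinement.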
Interactive segmentation method with graph cut and SVMs
Show abstract
Medical image segmentation is a prerequisite for visualization and diagnosis. State-of-the-art techniques of image
segmentation concentrate on interactive methods which are more robust than automatic techniques and more
efficient than manual delineation. In this paper, we present an interactive segmentation method for medical
images which relates to graph cut based on Support Vector Machines (SVMs). The proposed method is a
hybrid method that combines three aspects. First, the user selects seed points to paint object and background
using a "brush"; the labeled pixel/voxel data, including the intensity value and gradient of the sampled
points, are then used as the training set for the SVM training process. Second, the trained SVM model is employed
to predict, for each unlabeled pixel/voxel, the probability of belonging to each class. Third, unlike the traditional
Gaussian Mixture Model (GMM) definition of region properties in the graph cut method, the negative log-likelihood
of the probability obtained from the SVM model is used to define the t-links, and the classical
max-flow/min-cut algorithm is applied to minimize the energy function. Finally, the proposed
method is applied to 2D and 3D medical image segmentation. The experimental results demonstrate the viability
and effectiveness of the proposed method.
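The t-link definition above can be sketched independently of the SVM itself: given calibrated object probabilities from any classifier, the negative log-likelihoods become terminal capacities. A minimal numpy sketch (the terminal convention is the usual Boykov-Jolly one, assumed here rather than taken from the paper):

```python
import numpy as np

def tlink_weights(p_obj, eps=1e-10):
    """Turn per-pixel object probabilities (e.g. from an SVM with
    probability calibration) into graph-cut t-link capacities.

    Following the common Boykov-Jolly convention, the link to the object
    terminal carries the penalty for labelling the pixel background, and
    vice versa, so the min cut pays the negative log-likelihood of the
    label it finally assigns."""
    p = np.clip(np.asarray(p_obj, dtype=float), eps, 1.0 - eps)  # avoid log(0)
    w_obj = -np.log(1.0 - p)   # capacity of the t-link to the object terminal
    w_bkg = -np.log(p)         # capacity of the t-link to the background terminal
    return w_obj, w_bkg
```

These capacities, together with the usual boundary-smoothness n-links, are what the max-flow/min-cut solver minimizes.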
Segmentation of light and dark hair in dermoscopic images: a hybrid approach using a universal kernel
Show abstract
The main challenge in an automated diagnostic system for the early diagnosis of melanoma is the correct segmentation
and classification of moles, often occluded by hair in images obtained with a dermoscope. Hair occlusion causes
segmentation algorithms to fail to identify the correct nevus border, and can cause errors in estimating texture measures.
We present a new method to identify hair in dermoscopic images using a universal approach, which can segment both
dark and light hair without prior knowledge of the hair type. First, the hair is amplified using a universal matched
filtering kernel, which generates strong responses for both dark and light hair without prejudice. Then we apply local
entropy thresholding on the response to get a raw binary hair mask. This hair mask is then refined and verified by a
model checker. The model checker includes a combination of image processing (morphological thinning and label
propagation) and mathematical (Gaussian curve fitting) techniques. The result is a clean hair mask which can be used to
segment and disocclude the hair in the image, preparing it for further segmentation and analysis. Application on real
dermoscopic images yields good results for thick hair of varying colours, from light to dark. The algorithm also performs
well on skin images containing a mixture of dark and light hair, which was not possible with previous hair
segmentation algorithms.
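The paper applies local entropy thresholding to the matched-filter response. For intuition, the simpler global maximum-entropy (Kapur-style) variant can be sketched as follows; this is a simplified stand-in for the authors' local method:

```python
import numpy as np

def entropy_threshold(values, bins=256):
    """Kapur-style maximum-entropy threshold: pick the histogram split
    that maximises the sum of the entropies of the two classes."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    best_t, best_h = edges[1], -np.inf
    for t in range(1, bins):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue  # degenerate split: one class is empty
        q0, q1 = p[:t] / p0, p[t:] / p1
        h = -(q0[q0 > 0] * np.log(q0[q0 > 0])).sum() \
            - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum()
        if h > best_h:
            best_t, best_h = edges[t], h
    return best_t

# usage: raw binary hair mask from the matched-filter response
# mask = response > entropy_threshold(response)
```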
Volumetric segmentation of trabecular bone into rods and plates: a new method based on local shape classification
Show abstract
Bone microarchitecture is believed to play a key role in determining bone quality. We propose a new segmentation
method based on local shape classification, which decomposes bone samples into their basic elements
(rods and plates). For each bone voxel we calculate the inertia moments of a neighborhood obtained by local
geodesic dilation in the bone volume. The dilated volume is obtained through a homotopic dilation using the
Fast Marching algorithm. The size of the dilated volume is chosen from the local aperture diameter in order to
be scale independent. The bone cross-section is calculated using an optimized granulometry algorithm. The
segmentation has been carried out on a wide range of human trabecular bone samples with varied structure. Voxels are then
classified according to the ratio between the inertia moments of the dilated volumes.
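The rod/plate distinction via inertia moments can be illustrated on a point set: the eigenvalues of the inertia (covariance) tensor of the voxel coordinates separate elongated (rod-like) from flat (plate-like) neighborhoods. A toy sketch under that reading; the `ratio` threshold is illustrative, not the paper's:

```python
import numpy as np

def inertia_moments(coords):
    """Principal inertia moments (eigenvalues of the covariance tensor,
    in descending order) of an (N, 3) array of voxel coordinates."""
    c = coords - coords.mean(axis=0)
    cov = c.T @ c / len(c)
    return np.sort(np.linalg.eigvalsh(cov))[::-1]

def local_shape(coords, ratio=10.0):
    """Crude rod/plate label from the moment ratios: a rod has one
    dominant moment, a plate has two comparable dominant moments."""
    l1, l2, l3 = inertia_moments(coords)
    if l1 > ratio * l2:
        return "rod"
    if l2 > ratio * l3:
        return "plate"
    return "blob"
```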
Image enhancement and edge-based mass segmentation in mammograms
Show abstract
This paper presents a novel, edge-based segmentation method for identifying the mass contour (boundary) for a
suspicious mass region (Region of Interest (ROI)) in a mammogram. The method first applies a contrast stretching
function to adjust the image contrast, then uses a filtering function to reduce image noise. Next, for each pixel in a ROI,
the energy descriptor (one of the Haralick descriptors) is computed from the co-occurrence matrix of the pixel; and the
energy texture image of a ROI is obtained. From the energy texture image, the edges in the image are detected; and the
mass region is identified from the closed-path edges. Finally, the boundary of the identified mass region is used as the
contour of the segmented mass. We applied our method to ROI-marked mammogram images from the Digital Database
for Screening Mammography (DDSM). Preliminary results show that the contours detected by our method outline the
shape and boundary of a mass much more closely than the ROI markings made by radiologists.
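The energy descriptor mentioned above is the angular second moment of the grey-level co-occurrence matrix. A minimal numpy sketch for one patch (the quantization level count and pixel offset are illustrative choices):

```python
import numpy as np

def glcm_energy(patch, levels=8, offset=(0, 1)):
    """Haralick energy (angular second moment) of the grey-level
    co-occurrence matrix of an integer patch with values in [0, levels).
    `offset` = (dr, dc) with dr, dc >= 0 is the co-occurrence direction."""
    dr, dc = offset
    glcm = np.zeros((levels, levels))
    rows, cols = patch.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            glcm[patch[r, c], patch[r + dr, c + dc]] += 1
    p = glcm / glcm.sum()
    return (p ** 2).sum()   # energy: sum of squared joint probabilities
```

A uniform patch concentrates all co-occurrences in one cell (energy 1), while textured patches spread them out (energy < 1), which is what makes the descriptor useful for separating mass tissue from background.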
Posters: Shape
Database guided detection of anatomical landmark points in 3D images of the heart
Show abstract
Automated landmark detection may prove invaluable in the analysis of real-time three-dimensional (3D)
echocardiograms. By detecting 3D anatomical landmark points, the standard anatomical views can be extracted
automatically in apically acquired 3D ultrasound images of the left ventricle, for better standardization of visualization
and objective diagnosis. Furthermore, the landmarks can serve as an initialization for other analysis methods, such as
segmentation. The described algorithm applies landmark detection in perpendicular planes of the 3D dataset. The
landmark detection exploits a large database of expert annotated images, using an extensive set of Haar features for fast
classification. The detection is performed using two cascades of Adaboost classifiers in a coarse to fine scheme. The
method is evaluated by measuring the distance of detected and manually indicated landmark points in 25 patients. The
method can detect landmarks accurately in the four-chamber (apex: 7.9±7.1mm, septal mitral valve point: 5.6±2.7mm;
lateral mitral valve point: 4.0±2.6mm) and two-chamber view (apex: 7.1±6.7mm, anterior mitral valve point:
5.8±3.5mm, inferior mitral valve point: 4.5±3.1mm). The results compare well to those reported by others.
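The "extensive set of Haar features for fast classification" relies on the integral-image trick, which turns any box sum into four array lookups. A minimal 2D sketch (the specific two-rectangle feature shown is illustrative):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended, so that
    any box sum needs exactly four lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) using the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def haar_two_rect_vertical(ii, r0, c0, h, w):
    """Two-rectangle Haar feature: left half minus right half of an
    h x w window anchored at (r0, c0)."""
    half = w // 2
    return box_sum(ii, r0, c0, r0 + h, c0 + half) - \
           box_sum(ii, r0, c0 + half, r0 + h, c0 + w)
```

Because every feature evaluation is constant-time, an AdaBoost cascade can test thousands of such features per candidate landmark position at interactive speed.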
Partial volume correction using cortical surfaces
Kamille Rosenfalck Blaasvær,
Camilla Dremstrup Haubro,
Simon Fristed Eskildsen,
et al.
Show abstract
Partial volume effect (PVE) in positron emission tomography (PET) leads to inaccurate estimation of regional
metabolic activities among neighbouring tissues with different tracer concentration. This may be one of the main
limiting factors in the utilization of PET in clinical practice. Partial volume correction (PVC) methods have
been widely studied to address this issue. MRI-based PVC methods are well established.1 Their performance
depends on the quality of the co-registration of the MR and PET datasets, on the correctness of the estimated
point-spread function (PSF) of the PET scanner and, largely, on the performance of the segmentation method
that divides the brain into tissue compartments.1, 2 In the present study a method for PVC is suggested,
that utilizes cortical surfaces, to obtain detailed anatomical information. The objectives are to improve the
performance of PVC, facilitate a study of the relationship between metabolic activity in the cerebral cortex and
cortical thicknesses, and to obtain an improved visualization of PET data. After PVC, 99.7-99.8% of the true
gray matter metabolic activity was recovered when testing on simple simulated data with different PSFs, and
97.9-100% when testing on simulated brain PET data at different cortical thicknesses. When studying the
relationship between metabolic activities and anatomical structures, it was shown on simulated brain PET data
that it is important to correct for PVE in order to recover the true relationship.
Adaptive model based pulmonary artery segmentation in 3D chest CT
Show abstract
The extraction and analysis of the pulmonary artery in computed tomography (CT) of the chest can be an
important, but time-consuming step for the diagnosis and treatment of lung disease, in particular in non-contrast
data, where the pulmonary artery has low contrast and frequently merges with adjacent tissue of similar intensity.
We here present a new method for the automatic segmentation of the pulmonary artery based on an adaptive
model, Hough and Euclidean distance transforms, and spline fitting, which works equally well on non-contrast
and contrast enhanced data. An evaluation on 40 patient data sets and a comparison to manual segmentations
in terms of Jaccard index, sensitivity, specificity, and minimum mean distance shows its overall robustness.
A combined voxel and surface based method for topology correction of brain surfaces
Show abstract
Brain surfaces provide a reliable representation for cortical mapping. The construction of correct surfaces from
magnetic resonance image (MRI) segmentations is a challenging task, especially when genus-zero surfaces are
required for further processing such as parameterization, partial inflation and registration. The generation of
such surfaces has been approached either by correcting a binary image as part of the segmentation pipeline or
by modifying the mesh representing the surface. During this task, the preservation of the structure may be
compromised because of the convoluted nature of the brain and noisy/imperfect segmentations. In this paper,
we propose a combined voxel- and surface-based topology correction method which preserves the structure of
the brain while yielding genus-zero surfaces. The topology of the binary segmentation is first corrected using a
set of topology-preserving operators applied sequentially. This results in a white matter/gray matter binary set
with correct sulci delineation, homotopic to a filled sphere. Using the corrected segmentation, a marching cubes
mesh is then generated, and the tunnels and handles resulting from the meshing are finally removed with an
algorithm based on the detection of non-separating loops. The approach was validated using MR images of 20
young individuals from the OASIS database, acquired at two different time points. Reproducibility and
robustness were evaluated using global and local criteria such as surface area, curvature and point-to-point
distance. Results demonstrated the method's capability to produce genus-zero meshes while preserving
geometry, two fundamental properties for reliable and accurate cortical mapping and further clinical studies.
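Whether a closed triangle mesh is actually genus zero can be verified cheaply from the Euler characteristic V - E + F = 2 - 2g. A small sketch of that check, assuming a closed manifold triangle mesh:

```python
import numpy as np

def genus(num_vertices, faces):
    """Genus of a closed manifold triangle mesh from Euler's formula
    V - E + F = 2 - 2g. `faces` is an (F, 3) integer index array."""
    faces = np.asarray(faces)
    # Each triangle contributes three edges; count each undirected edge once.
    e = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    e = np.unique(np.sort(e, axis=1), axis=0)
    euler = num_vertices - len(e) + len(faces)
    return (2 - euler) // 2
```

A result of 0 confirms the mesh is topologically a sphere; each residual handle or tunnel raises the genus by one.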
3D bone mineral density distribution and shape reconstruction of the proximal femur from a single simulated DXA image: an in vitro study
Show abstract
Area Bone Mineral Density (aBMD) measured by Dual-energy X-ray Absorptiometry (DXA) is an established
criterion in the evaluation of hip fracture risk. The evaluation from these planar images, however, is limited
to 2D while it has been shown that proper 3D assessment of both the shape and the Bone Mineral Density
(BMD) distribution improves the fracture risk estimation. In this work we present a method to reconstruct both
the 3D bone shape and 3D BMD distribution of the proximal femur from a single DXA image. A statistical
model of shape and a separate statistical model of the BMD distribution were automatically constructed from
a set of Quantitative Computed Tomography (QCT) scans. The reconstruction method incorporates a fully
automatic intensity based 3D-2D registration process, maximizing the similarity between the DXA and a digitally
reconstructed radiograph of the combined model. For the construction of the models, an in vitro dataset of
QCT scans of 60 anatomical specimens was used. To evaluate the reconstruction accuracy, experiments were
performed on simulated DXA images from the QCT scans of 30 anatomical specimens. Comparisons between
the reconstructions and the same-subject QCT scans showed a mean shape accuracy of 1.2 mm, and a mean
density error of 81mg/cm3. The results show that this method is capable of accurately reconstructing both the
3D shape and 3D BMD distribution of the proximal femur from DXA images used in clinical routine, potentially
improving the diagnosis of osteoporosis and fracture risk assessments at a low radiation dose and low cost.
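At the core of the intensity-based 3D-2D registration is the comparison of the DXA image with a digitally reconstructed radiograph (DRR) of the combined model. A toy sketch of the two ingredients, a parallel-projection DRR and a normalised cross-correlation similarity (the paper's actual projection geometry and similarity metric may differ):

```python
import numpy as np

def drr(volume, axis=0):
    """Digitally reconstructed radiograph: line-integral approximation
    by summing attenuation values along one axis of the volume
    (parallel projection; real DXA geometry is more involved)."""
    return volume.sum(axis=axis)

def ncc(a, b):
    """Normalised cross-correlation between two images, a common
    similarity measure for driving intensity-based registration."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return (a * b).mean()
```

The registration loop would repeatedly transform the statistical model, regenerate the DRR, and adjust pose and model coefficients to maximise the similarity.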
Model-based segmentation of pathological lymph nodes in CT data
Show abstract
For the computer-aided diagnosis of tumor diseases, knowledge about the position, size and type of the lymph
nodes is needed to compute the tumor classification (TNM). For the computer-aided planning of subsequent
surgeries such as neck dissection, spatial information about the lymph nodes is also important. Thus, an
efficient and exact segmentation method for lymph nodes in CT data is necessary; pathologically altered
lymph nodes play an especially important role here.
Based on prior work, in this paper we present a noticeably enhanced model-based segmentation method for
lymph nodes in CT data, which now can be used also for enlarged and mostly well separated necrotic lymph
nodes. Furthermore, the kind of pathological variation can be determined automatically during segmentation,
which is important for the automatic TNM classification.
Our technique was tested on 21 lymph nodes from 5 CT datasets, among them several enlarged and necrotic
ones. The results lie within the range of the inter-observer variance of human experts and further improve on
the results of former work. Larger problems were noticed only for pathological lymph nodes with vague
boundaries due to infiltrated neighboring tissue.
Evaluation of manual and computerized methods for the determination of axial vertebral rotation
Show abstract
Axial vertebral rotation is among the most important parameters for the evaluation of spinal deformities, and
several manual and computerized methods have been proposed for its measurement. Routine manual measurement
of axial vertebral rotation from three-dimensional (3D) images is error-prone due to the limitations of the
observers, different properties of imaging techniques, variable characteristics of the observed anatomy, and difficulties
in image navigation and representation. Computerized methods do not suffer from these limitations and
may yield accurate results; however, they also require manual identification of multiple anatomical landmarks or
neglect the sagittal and coronal inclinations of vertebrae. The variability of manual and computerized methods
for measuring axial vertebral rotation in 3D images has not been thoroughly investigated yet. In this study we
evaluated, compared and analyzed four different manual methods and a computerized method for measuring axial
vertebral rotation. Using each method, three observers independently performed two series of measurements
on 56 normal and scoliotic vertebrae in images, acquired by computed tomography (CT) and magnetic resonance
(MR), which allowed the estimation of intra-observer, inter-observer and inter-method variability. The relatively
low intra-observer standard deviation (0.8, 0.7 and 1.3 degrees for each observer), inter-observer standard deviation
(1.3, 2.0 and 1.9 degrees for each observer pair), and inter-method standard deviation (best: 1.9 degrees)
of the computerized method indicate that it is feasible for the determination of axial vertebral rotation and may
represent an efficient alternative to manual methods in terms of repeatability, reliability and user effort.
Sparse active shape models: influence of the interpolation kernel on segmentation accuracy and speed
Show abstract
We analyze the segmentation of sparse data using the 3D variant of Active Shape Models by van Assen et al.
(SPASM). This algorithm is designed to segment volumetric data represented by multiple planes with arbitrary
orientations and with large undersampled regions. With the help of statistical shape constraints, the complicated
interpolation of the sliced data is replaced by a mesh-based interpolation. To overcome large void areas without
image information the mesh nodes are updated using a Gaussian kernel that propagates the available information
to the void areas. Our analysis shows that the accuracy is mostly constant for a wide range of kernel scales,
but the convergence speed is not. Experiments on simulated 3D echocardiography datasets indicate that an
appropriate selection of the kernel can even double the convergence speed of the algorithm. Additionally, the
optimal value for the kernel scale seems to be mainly related to the spatial frequency of the model encoding
the statistical shape priors rather than to the sparsity of the sliced data. This suggests the possibility of
precalculating the propagation coefficients, which would reduce the computational load by up to 40%, depending
on the spatial configuration of the input data.
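The Gaussian propagation of update information into void regions can be sketched directly: each mesh node receives a distance-weighted average of the updates available at nodes that intersect the image planes. The node positions, updates, and kernel scale below are illustrative, not from the paper:

```python
import numpy as np

def propagate_updates(nodes, updates, has_data, sigma):
    """Propagate update vectors from mesh nodes that intersect image
    planes to nodes in void regions, weighting each contribution by a
    Gaussian of the inter-node distance (sigma is the kernel scale
    whose choice the paper analyses)."""
    src = nodes[has_data]       # nodes with image information
    upd = updates[has_data]     # their update vectors
    out = np.zeros_like(updates)
    for i, p in enumerate(nodes):
        d2 = ((src - p) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        out[i] = (w[:, None] * upd).sum(axis=0) / w.sum()
    return out
```

A small sigma keeps data nodes close to their own updates, while void nodes always receive a smooth interpolation; the paper's finding is that accuracy is stable over a wide sigma range but convergence speed is not.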
Smart manual landmarking of organs
Show abstract
Statistical shape models play a very important role in most modern medical segmentation frameworks. In this
work we propose an extension to an existing approach for statistical shape model generation based on manual
mesh deformation. Since the manual acquisition of ground truth segmentation data is a prerequisite for shape
model creation, we developed a method that integrates a solution to the landmark correspondence problem in
this particular step. This is done by coupling a user guided mesh adaptation for ground truth segmentation with
a simultaneous real time optimization of the mesh in order to preserve point correspondences. First, a reference
model with evenly distributed points is created that is taken as the basis of manual deformation. Afterwards
the user adapts the model to the data set using a 3D Gaussian deformation of varying stiffness. The resulting
meshes can be directly used for shape model construction. Furthermore, our approach allows the creation of shape
models of arbitrary topology. We evaluate our method on CT data sets of the kidney and 4D MRI time series
images of the cardiac left ventricle. A comparison with standard ICP-based and population-based correspondence
optimization algorithms showed that the model generated by our approach yields better results in terms of both
generalization capability and specificity. The proposed method can therefore be used to considerably speed
up and ease the process of shape model generation, as well as remove potential error sources of the landmark
and correspondence optimization algorithms needed so far.
Segmentation of the endocardial wall of the left atrium using local region-based active contours and statistical shape learning
Show abstract
Atrial fibrillation, a cardiac arrhythmia characterized by unsynchronized electrical activity in the atrial chambers
of the heart, is a rapidly growing problem in modern societies. One treatment, referred to as catheter ablation,
targets specific parts of the left atrium for radio frequency ablation using an intracardiac catheter. Magnetic
resonance imaging has been used for both pre- and post-ablation assessment of the atrial wall. Magnetic
resonance imaging can aid in selecting the right candidate for the ablation procedure and assessing post-ablation
scar formations. Image processing techniques can be used for automatic segmentation of the atrial wall, which
facilitates an accurate statistical assessment of the region. As a first step towards the general solution to the
computer-assisted segmentation of the left atrial wall, in this paper we use shape learning and shape-based image
segmentation to identify the endocardial wall of the left atrium in delayed-enhancement magnetic resonance images.
Automated determination of the centers of vertebral bodies and intervertebral discs in CT and MR lumbar spine images
Show abstract
The knowledge of the location of the centers of vertebral bodies and intervertebral discs is valuable for the analysis of
the spine. Existing methods for the detection and segmentation of vertebrae in images acquired by computed tomography
(CT) and magnetic resonance (MR) imaging are usually applicable only to a specific image modality and require prior
knowledge of the location of vertebrae, usually obtained by manual identification or statistical modeling. We propose a
completely automated framework for the detection of the centers of vertebral bodies and intervertebral discs in CT and
MR images. The image intensity and gradient magnitude profiles are first extracted in each image along a previously
obtained spinal centerline and therefore contain a repeating pattern representing the vertebral bodies and intervertebral
discs. Based on the period of the repeating pattern and by using a function that approximates the shape of the vertebral
body, a model of the vertebral body is generated. The centers of vertebral bodies and intervertebral discs are detected by
measuring the similarity between the generated model and the extracted profiles. The method was evaluated on 29 CT
and 13 MR images of the lumbar spine with varying numbers of vertebrae. The overall mean distance between the obtained
and the ground truth centers was 2.8 ± 1.9 mm, and no considerable differences were detected between the results for
CT, T1-weighted MR or T2-weighted MR images, or among different vertebrae. The proposed method may therefore be
valuable for initializing the techniques for the detection and segmentation of vertebrae.
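The model-to-profile matching can be illustrated with a one-dimensional toy: a bump-shaped template of the detected period is correlated with the centerline profile, and local maxima of the score give candidate centers. The Gaussian template below is a stand-in for the paper's vertebral-body shape function:

```python
import numpy as np

def detect_centers(profile, period):
    """Correlate a bump-shaped vertebral-body template of the given
    period with the centerline intensity profile; local maxima of the
    matching score are candidate vertebral-body centers."""
    t = np.arange(period) - period // 2
    template = np.exp(-0.5 * (t / (period / 4.0)) ** 2)  # one "vertebral body"
    template -= template.mean()
    score = np.correlate(profile - profile.mean(), template, mode="same")
    # Indices where the matching score is a local maximum.
    return [i for i in range(1, len(score) - 1)
            if score[i - 1] <= score[i] > score[i + 1]]
```

Intervertebral disc centers would then fall between consecutive detected body centers, mid-way along the period.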
Construction of groupwise consistent shape parameterizations by propagation
Show abstract
Prior knowledge can greatly improve the accuracy of segmentation algorithms for 3D medical images. Statistical
shape models are a popular method for describing the shape variability of organs. One of the greatest challenges
in statistical shape modeling is to compute a representation of the training shapes as vectors of corresponding
landmarks, which is required to train the model. Many algorithms for extracting such landmark vectors work
on parameter space representations of the unnormalized training shapes. These algorithms are sensitive to
inconsistent parameterizations: If corresponding regions in the training shapes are mapped to different areas of
the parameter space, convergence time increases or the algorithms even fail to converge. In order to improve
robustness and decrease convergence time, it is crucial that the training shapes are parameterized in a consistent
manner. We present a novel algorithm for the construction of groupwise consistent parameterizations for a set
of training shapes with genus-0 topology. Our algorithm first computes an area-preserving parameterization
of a single reference shape, which is then propagated to all other shapes in the training set. As the parameter
space propagation is controlled by approximate correspondences derived from a shape alignment algorithm,
the resulting parameterizations are consistent. Additionally, the area-preservation property of the reference
parameterization is likewise propagated such that all training shapes can be reconstructed from the generated
parameterizations with a simple uniform sampling technique. Though our algorithm considers consistency as an
additional constraint, it is faster than computing parameterizations for each training shape independently from
scratch.
A statistical shape and motion model for the prediction of respiratory lung motion
Show abstract
We propose a method to compute a 4D statistical model of respiratory lung motion which consists of a 3D shape
atlas, a 4D mean motion model and a 4D motion variability model. Symmetric diffeomorphic image registration
is used to estimate subject-specific motion models, to generate an average shape and intensity atlas of the lung
as an anatomical reference frame and to establish inter-subject correspondence. The Log-Euclidean framework
allows statistics on diffeomorphic transformations to be performed via vectorial statistics on their logarithms. We
apply this framework to compute the mean motion and motion variations by performing a Principal Component
Analysis (PCA) on diffeomorphisms. Furthermore, we present methods to adapt the generated statistical 4D
motion model to a patient-specific lung geometry and the individual organ motion.
The prediction performance is evaluated with respect to motion field differences and with respect to landmark-
based target registration errors. The quantitative analysis results in a mean target registration error of 3.2 ± 1.8
mm. The results show that the new method is able to provide valuable knowledge in many fields of application.
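The "vectorial statistics on logarithms" amounts to ordinary PCA on flattened log-domain motion fields. A minimal sketch under that reading (the field shapes and data are synthetic):

```python
import numpy as np

def motion_pca(fields):
    """PCA on a set of log-domain motion fields, one per subject.
    `fields` has shape (n_subjects, ...); returns the mean field, the
    principal modes (same spatial shape as a field), and the variance
    explained by each mode."""
    X = fields.reshape(fields.shape[0], -1)
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD yields the principal modes without forming the huge covariance.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = S ** 2 / (fields.shape[0] - 1)
    return (mean.reshape(fields.shape[1:]),
            Vt.reshape((-1,) + fields.shape[1:]),
            var)
```

New subject-specific motion is then predicted by exponentiating the mean field plus a weighted sum of the leading modes, adapted to the patient's lung geometry.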
Posters: Texture
Assessing texture measures with respect to their sensitivity to scale-dependent higher order correlations in medical images using surrogates
Show abstract
The quantitative characterization of images of tissue samples visualized by, e.g., CT or MR is of great interest
in many fields of medical image analysis. A proper quantification of the information content in such images can be
realized by calculating well-suited texture measures, which are able to capture the main characteristics of the image
structures under study. Using test images showing the complex trabecular structure of the inner bone of a healthy and
an osteoporotic patient, we propose and apply a novel statistical framework with which one can systematically assess the
sensitivity of texture measures to scale-dependent higher order correlations (HOCs). To this end, so-called surrogate
images are generated, in which the linear properties are exactly preserved, while parts of the higher order correlations (if
present) are wiped out in a scale dependent manner. This is achieved by dedicated Fourier phase shuffling techniques.
We compare three commonly used classes of texture measures, namely spherical Mexican hat wavelets (SMHW),
Minkowski functionals (MF) and scaling indices (SIM). While the SMHW were sensitive to HOCs on small scales
(significance S = 19-23), the MF and SIM could detect the HOCs very well on the larger scales (S = 39 (MF) and S = 29
(SIM)). Thus the three classes of texture measures are complementary with respect to their ability to detect
scale-dependent HOCs. The MF and SIM are, however, slightly preferable, because they are more sensitive to HOCs on
the length scales that the important structural elements, i.e. the trabeculae, are considered to have.
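The surrogate construction (amplitude spectrum preserved exactly, phases randomised) can be sketched as follows. Borrowing the phases of a white-noise field keeps the required Hermitian symmetry, so the surrogate stays real-valued; the paper's scale-dependent *partial* shuffling is not reproduced here:

```python
import numpy as np

def phase_surrogate(img, rng=None):
    """Surrogate image: keep the Fourier amplitudes (all linear
    properties) of `img` and randomise the phases, wiping out any
    higher order correlations."""
    rng = np.random.default_rng() if rng is None else rng
    F = np.fft.fft2(img)
    # Phases of the FFT of a real white-noise field are automatically
    # Hermitian-symmetric, so the inverse transform is real-valued and
    # the amplitude spectrum is preserved exactly.
    phi = np.angle(np.fft.fft2(rng.normal(size=img.shape)))
    return np.fft.ifft2(np.abs(F) * np.exp(1j * phi)).real
```

Any texture measure whose value differs significantly between the original and an ensemble of such surrogates is, by construction, sensitive to higher order correlations.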