Proceedings Volume 8669

Medical Imaging 2013: Image Processing


View the digital version of this volume at SPIE Digital Library.

Volume Details

Date Published: 11 April 2013
Contents: 25 Sessions, 150 Papers, 0 Presentations
Conference: SPIE Medical Imaging 2013
Volume Number: 8669

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 8669
  • Segmentation
  • DTI/Functional
  • Shape Appearance
  • Temporal and Motional Analysis
  • OCT and Ultrasound
  • Lung
  • Registration
  • Segmentation and Localization
  • Keynote and 2D-3D Registration
  • Statistics of Images and Structures
  • Label Fusion
  • Poster Session: Atlases
  • Poster Session: Blood Vessels
  • Poster Session: Classification
  • Poster Session: Compressive Sensing
  • Poster Session: Diffusion Tensor Imaging
  • Poster Session: Optical Coherence Tomography
  • Poster Session: Image Enhancement
  • Poster Session: Label Fusion
  • Poster Session: Motion
  • Poster Session: Registration
  • Poster Session: Segmentation
  • Poster Session: Shape
  • Poster Session: Ultrasound
Front Matter: Volume 8669
Front Matter: Volume 8669
This PDF file contains the front matter associated with SPIE Proceedings Volume 8669, including the Title Page, Copyright Information, Table of Contents, and the Conference Committee listing.
Segmentation
Efficient convex optimization-based curvature dependent contour evolution approach for medical image segmentation
Eranga Ukwatta, Jing Yuan, Wu Qiu, et al.
Markov random field (MRF) based approaches are extensively used in image segmentation; they typically produce segmentation results with a boundary of minimal length/surface that tends to follow image edges, yet they suffer from boundary shrinkage or bias in the absence of reliable image edge information to drive the segmentation. In this paper, we propose a novel curvature re-weighted boundary smoothing term and introduce a new convex optimization-based contour/surface evolution method for medical image segmentation. The proposed curvature-based term generates the optimal solution with low curvatures and helps to avoid boundary shrinkage and bias. This is particularly useful for segmenting medical images, in which noise and poor image quality are widespread and the shapes of anatomical objects are often smooth and even convex. Moreover, a new convex optimization-based contour evolution method is applied to propagate the initial contour to the object of interest efficiently and robustly. Distinct from traditional methods for contour evolution, the proposed algorithm provides a fully time-implicit contour evolution scheme, which allows a large evolution step-size to significantly speed up convergence. It also propagates the contour to its globally optimal position during each discrete time-frame, which improves the algorithmic robustness to noise and poor initialization. The fast continuous max-flow-based algorithm for contour evolution is implemented on a commercially available graphics processing unit (GPU) to achieve high computational performance. Experimental results for both synthetic and 2D/3D medical images showed that the proposed approach generated segmentation results efficiently and increased the accuracy and robustness of segmentation by avoiding segmentation shrinkage and bias.
An automated algorithm for cell-level FISH dot counting
Yousef Al-Kofahi, Dirk Padfield, Antti Seppo
Fluorescence in situ hybridization (FISH) dot counting is the process of enumerating chromosomal abnormalities in interphase cell nuclei. This process is widely used in many areas of biomedical research and diagnosis. We present a generic and fully automatic algorithm for cell-level counting of FISH dots in 2-D fluorescent images. Our proposed algorithm starts by segmenting cell nuclei in DAPI stained images using a 2-D wavelet based segmentation algorithm. Nuclei segmentation is followed by FISH dot detection and counting, which consists of three main steps. First, image pre-processing where median and top-hat filters are used to clean image noise, subtract background and enhance the contrast of the FISH dots. Second, FISH dot detection using a multi-level h-minima transform approach that accounts for the varying image contrast. Third, FISH dot counting where clustered FISH dots are separated using a local maxima detection-based method followed by FISH dot size filtering based on constraints to account for large connected components of tightly-clustered dots. To quantitatively assess the performance of our proposed FISH dot counting algorithm, automatic counting results were compared to manual counts of 880 cells selected from 19 invasive ductal breast carcinoma samples exhibiting varying degrees of Human Epidermal Growth Factor Receptor 2 (HER2) expression. Cell-level dot counting accuracy was assessed using two metrics: cell classification agreement and dot-counting match. Our automatic results gave an overall cell-by-cell classification agreement of 88% and an overall accuracy of 81%.
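For illustration, a minimal sketch of the detection stage along these lines, using SciPy and scikit-image; the multi-level h-minima transform and the size-constraint handling of tightly clustered dots are reduced here to a single h-maxima pass, and all parameter values are illustrative rather than taken from the paper.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.morphology import disk, white_tophat, h_maxima
from skimage.measure import label

def count_fish_dots(fish_channel, nucleus_mask, h=0.1, tophat_radius=5):
    """Rough cell-level FISH dot count inside one segmented nucleus.

    fish_channel : 2-D float image of the FISH channel, scaled to [0, 1]
    nucleus_mask : 2-D boolean nucleus mask (e.g. from DAPI segmentation)
    h            : height parameter of the h-maxima transform (illustrative)
    """
    # Pre-processing: median filter against impulse noise, top-hat to
    # subtract background and enhance small bright dots.
    denoised = median_filter(fish_channel, size=3)
    enhanced = white_tophat(denoised, disk(tophat_radius))

    # Dot detection: h-maxima keeps local maxima whose relative height
    # exceeds h, adapting to varying dot contrast within the nucleus.
    maxima = h_maxima(enhanced, h).astype(bool) & nucleus_mask

    # Dot counting: each connected maximum is counted as one dot.
    return int(label(maxima).max())
```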
Automatic cell segmentation in fluorescence images of confluent cell monolayers using multi-object geometric deformable model
Zhen Yang, John A. Bogovic, Aaron Carass, et al.
With the rapid development of microscopy for cell imaging, there is a strong and growing demand for image analysis software to quantitatively study cell morphology. Automatic cell segmentation is an important step in image analysis. Despite substantial progress, there is still a need to improve the accuracy, efficiency, and adaptability to different cell morphologies. In this paper, we propose a fully automatic method for segmenting cells in fluorescence images of confluent cell monolayers. This method addresses several challenges through a combination of ideas. 1) It realizes a fully automatic segmentation process by first detecting the cell nuclei as initial seeds and then using a multi-object geometric deformable model (MGDM) for final segmentation. 2) To deal with different defects in the fluorescence images, the cell junctions are enhanced by applying an order-statistic filter and a principal-curvature-based image operator. 3) The final segmentation using MGDM promotes robust and accurate segmentation results, and guarantees no overlaps and gaps between neighboring cells. The automatic segmentation results are compared with manually delineated cells, and the average Dice coefficient over all distinguishable cells is 0.88.
Robust local appearance features for MRI brain structure segmentation across scanning protocols
Segmentation of brain structures in magnetic resonance images is an important task in neuro image analysis. Several papers on this topic have shown the benefit of supervised classification based on local appearance features, often combined with atlas-based approaches. These methods require a representative annotated training set and therefore often do not perform well if the target image is acquired on a different scanner or with a different acquisition protocol than the training images. Assuming that the appearance of the brain is determined by the underlying brain tissue distribution and that brain tissue classification can be performed robustly for images obtained with different protocols, we propose to derive appearance features from brain-tissue density maps instead of directly from the MR images. We evaluated this approach on hippocampus segmentation in two sets of images acquired with substantially different imaging protocols and on different scanners. While a combination of conventional appearance features trained on data from a different scanner with multi-atlas segmentation performed poorly with an average Dice overlap of 0.698, the local appearance model based on the new acquisition-independent features significantly improved (0.783) over atlas-based segmentation alone (0.728).
Region-based graph cut using hierarchical structure with application to ground-glass opacity pulmonary nodules segmentation
Chi-Hsuan Tsou, Kuo-Lung Lor, Yeun-Chung Chang, et al.
Image segmentation for the demarcation of pulmonary nodules in CT images is intrinsically an arduous task. The difficulty can be summarized in two aspects. First, lung tumors vary in physical density within pulmonary regions and can appear as mixtures of GGO and solid components, so processing of lung CT images generally faces a tissue-inhomogeneity problem. The second factor that complicates nodule demarcation is that most nodules have irregular shapes and are directly connected to other structures sharing a similar density profile. In this paper, an image segmentation framework is proposed that unifies the techniques of statistical region merging and conditional random fields (CRF) with graph cut optimization to address the difficult problem of GGO nodule quantification in CT images. Different from traditional segmentation methods that use pixel-based approaches such as region growing and morphological constraints, we employ a hierarchical segmentation tree to alleviate the effect of inhomogeneous attenuation. In addition to building perceptually prominent regions, we perform inference in a CRF model over the restricted pool of segmented regions to detect and localize individual object instances in CT images. The proposed algorithm is evaluated with four sets of manual delineations on 77 lung CT images. Given the efficiency and accuracy of the proposed pulmonary nodule segmentation method, a computer-aided system for related clinical applications is feasible.
DTI/Functional
Fiber feature map based landmark initialization for highly deformable DTI registration
Aditya Gupta, Matthew Toews, Ravikiran Janardhana, et al.
This paper presents a novel pipeline for the registration of diffusion tensor images (DTI) with large pathological variations to normal controls, based on the use of a novel feature map derived from white matter (WM) fiber tracts. The research presented aims toward atlas-based DTI analysis of subjects with considerable brain pathologies such as tumors or hydrocephalus. In this paper, we propose a novel feature map that is robust against variations in WM fiber tract integrity and use these feature maps to determine a landmark correspondence using a 3D point correspondence algorithm. This correspondence drives a deformation field computed using Gaussian radial basis functions (RBF). This field is employed as an initialization to a standard deformable registration method such as demons. We present preliminary results on the registration of a normal control dataset to a dataset with abnormally enlarged lateral ventricles affected by fatal demyelinating Krabbe disease. The results are analyzed based on a regional tensor matching criterion and a visual assessment of overlap of major WM fiber tracts. While further evaluation and improvements are necessary, the results presented in this paper highlight the potential of our method in handling registration of subjects with severe WM pathology.
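As a rough illustration of the landmark-driven initialization step, the sketch below interpolates a dense displacement field from point correspondences with Gaussian RBFs; the kernel width and the ridge term are illustrative choices, not values from the paper.

```python
import numpy as np

def gaussian_rbf_displacement(src_pts, dst_pts, query_pts, sigma=20.0):
    """Interpolate a displacement field from landmark correspondences.

    src_pts, dst_pts : (N, 3) corresponding landmark coordinates
    query_pts        : (M, 3) voxel coordinates where the field is evaluated
    sigma            : width of the Gaussian kernel (illustrative value)
    """
    def kernel(a, b):
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    # Solve K w = (dst - src) for the RBF weights (one column per axis);
    # a small ridge term keeps the system well conditioned.
    K = kernel(src_pts, src_pts) + 1e-6 * np.eye(len(src_pts))
    weights = np.linalg.solve(K, dst_pts - src_pts)

    # Evaluate the interpolated displacement at the query points.
    return kernel(query_pts, src_pts) @ weights
```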
Morphological changes in the corpus callosum: a study using joint Riemannian feature spaces
Shape, scale, orientation and position, the physical features associated with white matter DTI tracts, can, either individually or in combination, be used to define feature spaces. Recent work by Mani et al. [1] describes a Riemannian framework in which these joint feature spaces are considered. In this paper, we use the tools and metrics defined within this mathematical framework to study morphological changes due to disease progression. We look at sections of the anterior corpus callosum, which describes a deep arc along the mid-sagittal plane, and show how multiple sclerosis and normal control populations have different joint shape-orientation signatures.
Parcellation of the thalamus using diffusion tensor images and a multi-object geometric deformable model
Chuyang Ye, John A. Bogovic, Sarah H. Ying, et al.
The thalamus is a sub-cortical gray matter structure that relays signals between the cerebral cortex and midbrain. It can be parcellated into the thalamic nuclei, which project to different cortical regions. The ability to automatically parcellate the thalamic nuclei could lead to enhanced diagnosis or prognosis in patients with certain brain diseases. Previous works have used diffusion tensor images (DTI) to parcellate the thalamus, using either tensor similarity or cortical connectivity as information driving the parcellation. In this paper, we propose a method that uses the diffusion tensors in a different way than previous works to guide a multiple-object geometric deformable model (MGDM) for parcellation. The primary eigenvector (PEV) is used to indicate the homogeneity of fiber orientations. To remove the ambiguity due to the fact that the PEV is an orientation, we map the PEV into a 5D space known as the Knutsson space. An edge map is then generated from the 5D vector to show divisions between regions of aligned PEVs. The generalized gradient vector flow (GGVF) calculated from the edge map drives the evolution of the boundary of each nucleus. Region-based, balloon, and curvature forces are also employed to refine the boundaries. Experiments have been carried out on five real subjects. Quantitative measures show that the automated parcellation agrees with the manual delineation of an expert under a published protocol.
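For reference, one common formulation of the Knutsson mapping sends a unit vector n = (x, y, z) to (x² − y², 2xy, 2xz, 2yz, (2z² − x² − y²)/√3); because every component is quadratic in n, antipodal orientations map to the same 5D point. A small NumPy sketch (scaling conventions differ between references):

```python
import numpy as np

def knutsson_map(pev):
    """Map a unit principal-eigenvector field of shape (..., 3) into 5-D Knutsson space.

    Antipodal vectors n and -n map to the same 5-D point, removing the
    sign ambiguity of the orientation. (One common formulation; scaling
    conventions differ between references.)
    """
    x, y, z = pev[..., 0], pev[..., 1], pev[..., 2]
    return np.stack([x**2 - y**2,
                     2.0 * x * y,
                     2.0 * x * z,
                     2.0 * y * z,
                     (2.0 * z**2 - x**2 - y**2) / np.sqrt(3.0)], axis=-1)
```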
Effects of DTI spatial normalization on white matter tract reconstructions
Nagesh Adluru, Hui Zhang, Do P. M. Tromp, et al.
Major white matter (WM) pathways in the brain can be reconstructed in vivo using tractography on diffusion tensor imaging (DTI) data. Performing tractography on the native DTI data is often considered to produce more faithful results than performing it on spatially normalized DTI obtained using highly non-linear transformations. However, tractography on normalized DTI is playing an increasingly important role in population analyses of the WM. In particular, the emerging tract-specific analyses (TSA) can benefit from tractography on normalized DTI for statistical parametric mapping in specific WM pathways. It is well known that the preservation of tensor orientations at the individual voxel level is enforced in tensor-based registrations. However, small reorientation errors at the individual voxel level can accumulate and could potentially affect the tractography results adversely. To our knowledge, there has been no study investigating the effects of normalization on the consistency of tractography, which demands non-local preservation of tensor orientations that is not explicitly enforced in typical DTI spatial normalization routines. This study aims to evaluate and compare tract reconstructions obtained from normalized DTI against those obtained from native DTI. Although tractography results have been used to measure and influence the quality of spatial normalization, the presented study addresses a distinct question: whether non-linear spatial normalization preserves even long-range anatomical connections obtained using tractography for accurate reconstruction of pathways. Our results demonstrate that spatial normalization of DTI data does preserve tract reconstructions of major WM pathways and does not alter the variance (individual differences) of their macro- and microstructural properties. This suggests one can efficiently extract quantitative and shape properties from tractography in the normalized DTI for performing population statistics on major WM pathways.
Susceptibility artefact correction by combining B0 field maps and non-rigid registration using graph cuts
Pankaj Daga, Marc Modat, Gavin Winston, et al.
We present a novel method for the correction of geometric distortions arising from susceptibility artefacts in echo-planar MRI images that combines fieldmap-based and image-registration-based correction techniques in a unified framework. The geometric distortions arising from these artefacts lead to inaccurate alignment of images from different MRI techniques and hinder their joint analysis. A novel phase unwrapping algorithm is presented that can efficiently compute the B0 field inhomogeneity map as well as the confidence associated with the estimated solution. This information is used to adaptively drive a subsequent image registration step to further refine the results in low-confidence areas. The effectiveness of the proposed unified algorithm in correcting geometric distortions due to susceptibility artefacts is demonstrated on interventional MRI EPI images.
Functional brain atlas construction for brain network analysis
Hongming Li, Yong Fan
Brain network analysis is a promising tool in studies of the human brain’s functional organization and neuropsychiatric disorders. For neuroimaging-based brain network analysis, network nodes are typically defined as distinct grey-matter regions delineated by anatomical brain atlases, random parcellation of the brain space, or image voxels, resulting in brain network nodes with different spatial scales. As the precise functional organization of the brain remains unclear, it is challenging to determine a proper spatial scale in practice. Brain network nodes defined anatomically or randomly do not necessarily possess the desired properties of ideal brain network nodes, i.e., functional homogeneity within each individual node, functional distinctiveness across different nodes, and functional consistency of the same node across different subjects. To obtain a definition of brain network nodes with the desired properties, a brain parcellation method based on functional information is proposed to achieve a parcellation that is consistent across subjects and in close agreement with the functional organization of the brain. In particular, spatially contiguous voxel-wise functional information from the fMRI data is recursively aggregated, according to inter-voxel/region functional affinity, from the voxel level to coarser scales, resulting in a brain parcellation with a multi-level hierarchy. A trade-off between functional homogeneity and distinctiveness is determined by identifying the hierarchy level with network measures that are highly consistent across subjects. The proposed method has been validated on resting-state fMRI datasets for functional network analysis, and the results demonstrate that brain networks constructed with 200~500 nodes achieve the highest inter-subject consistency.
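A spatially constrained agglomerative clustering of voxel time series gives a rough stand-in for the bottom-up, contiguity-preserving aggregation described above; the sketch below uses scikit-learn's Ward clustering with a grid connectivity graph and a fixed number of parcels, rather than the paper's multi-level hierarchy and consistency-based level selection.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.image import grid_to_graph

def parcellate_fmri(bold, mask, n_parcels=200):
    """Spatially contiguous functional parcellation (rough stand-in).

    bold : 4-D array (x, y, z, time) of fMRI data
    mask : 3-D boolean brain mask
    """
    # Connectivity graph restricted to voxels inside the brain mask keeps
    # every parcel spatially contiguous.
    connectivity = grid_to_graph(*mask.shape, mask=mask)

    # Cluster voxel time series; Ward linkage merges the most functionally
    # homogeneous neighbours first, mimicking bottom-up aggregation.
    features = bold[mask]                      # (n_voxels, n_timepoints)
    labels = AgglomerativeClustering(n_clusters=n_parcels,
                                     connectivity=connectivity,
                                     linkage="ward").fit_predict(features)

    parcels = np.zeros(mask.shape, dtype=int)
    parcels[mask] = labels + 1                 # 0 stays background
    return parcels
```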
Shape Appearance
Multi-object statistical analysis of late adolescent depression
Mahdi Ramezani, Abtin Rasoulian, Purang Abolmaesumi, et al.
Shape deformations and volumetric changes in the hippocampus and amygdala have previously been noted in Major Depressive Disorder (MDD). Unfortunately, these analyses are limited because relative shape and pose (rigid+scale transformation) information of multiple objects in the brain is generally disregarded. We hypothesize that this information might complement studies of limbic structural deformation in MDD. We focus on changes in temporal (e.g., superior, middle and inferior temporal gyrus) and limbic (e.g., hippocampus and amygdala) lobes. Here, we use a multi-object statistical pose and shape model to analyze imaging data from young people with and without a depressive disorder. Nineteen individuals with a depressive disorder (mean age: 17.85) and twenty-six healthy controls (age: 18) were enrolled in the study. A segmented atlas in MNI space was used to segment the hippocampus, amygdala, parahippocampal gyri, putamen, and the superior, inferior and middle temporal gyri in both hemispheres of the brain. Points on the surface of each structure were extracted and warped to each subject's structural MRI. These corresponding surface points were used within the analysis to extract the pose and shape features. Pose and shape differences were detected between the two groups, such that the second principal mode of pose variation (p = 0.022) and the first principal mode of shape variation (p = 0.049) differed significantly between the two groups.
Statistical shape representation with landmark clustering by solving the assignment problem
Bulat Ibragimov, Boštjan Likar, Franjo Pernuš, et al.
Statistical shape modeling is considered a backbone of image analysis, since shapes capture distinguishable geometrical properties of depicted objects and spatial relationships among the objects. In the field of medical image analysis, a shape allows segmentation and registration of complex and/or poorly visible structures, where geometrical information may have a more crucial role than pure intensity information. In this paper, we present a novel statistical shape model based on landmark positions and spatial relationships among landmarks. A given training set of images is first annotated by a set of landmarks, which represents the shape of the object of interest. In contrast to active shape (ASM) and appearance models (AAM), where a shape is a single object characterized by a system of eigenvectors, we describe a shape as a combination of distances and angles between landmarks. Finding a suitable combination of distances and angles is achieved by optimizing the representativeness of the model (i.e. the distances and angles must describe the shape and its plasticity) and the complexity of the model (i.e. the number of distances and angles must be acceptable for practical applications). To generate a model that satisfies these conditions, the landmarks are first separated into clusters, which are then optimally connected. The optimal connections between clusters are generated by solving the assignment problem. The obtained model, combined with a game-theoretic framework, was applied to segment lung fields from chest radiographs. Using such a simplified model results, on average, in a 0.05 mm deterioration of segmentation performance in terms of the symmetric mean boundary distance, and in a 3.3-fold acceleration of computation time.
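The "optimally connected" step maps naturally onto the classical assignment problem. A minimal sketch using SciPy's Hungarian-algorithm solver, with plain Euclidean distance as an illustrative cost (the paper optimizes a representativeness/complexity trade-off instead):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def connect_clusters(cluster_a, cluster_b):
    """Optimally pair landmarks of two clusters (assignment problem).

    cluster_a, cluster_b : (n, 2) and (m, 2) landmark coordinates.
    The cost here is plain Euclidean distance, used only for illustration.
    """
    cost = cdist(cluster_a, cluster_b)            # (n, m) cost matrix
    rows, cols = linear_sum_assignment(cost)      # Hungarian algorithm
    return list(zip(rows, cols)), cost[rows, cols].sum()
```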
Quantitative vertebral morphometry in 3D
Darko Štern, Vesna Njagulj, Boštjan Likar, et al.
Identification of vertebral deformations in two dimensions (2D) is a challenging task due to the projective nature of radiographic images and natural anatomical variability of vertebrae. By generating detailed three-dimensional (3D) anatomical images, computed tomography (CT) enables accurate measurement of vertebral deformations. We present a novel approach to quantitative vertebral morphometry (QVM) based on parametric modeling of the vertebral body shape in 3D. A detailed 3D representation of the vertebral body shape is obtained by automatically aligning a parametric 3D model to vertebral bodies in CT images. The parameters of the 3D model describe clinically meaningful morphometric vertebral body features, and QVM in 3D is performed by comparing the parameters to their statistical values. By applying statistical classification analysis, thresholds and parameters that best discriminate between normal and fractured vertebral bodies are determined. The proposed QVM in 3D was applied to 454 normal and 228 fractured vertebral bodies, yielding a classification sensitivity of 92.5% at 7.5% specificity, with corresponding accuracy of 92.5% and precision of 86.1%. The 3D shape parameters that provided the best separation between normal and fractured vertebral bodies were the vertebral body height, and the inclination and concavity of both vertebral endplates. The described QVM in 3D is able to efficiently discriminate between normal and fractured vertebral bodies, and to identify morphological cases (wedge, (bi)concavity, crush) and grades (1, 2, 3) of vertebral body deformations. It may therefore be valuable for diagnosing and predicting vertebral fractures in patients who are at risk of osteoporosis.
Combining active appearance and deformable superquadric models for LV segmentation in cardiac MRI
Sharath Gopal, Yuka Otaki, Reza Arsanjani, et al.
In this work we automatically segment the left ventricle (LV) in cardiac MR images in the end-diastole (ED) and end-systole (ES) phases using a novel approach that combines statistical and deterministic deformable models. A 3D Active Appearance Model (AAM) is used to segment the ED phase. The AAM texture model is trained on radial samples from gradient magnitude images to make the fitting process faster and more discriminative. A trained ED-to-ES shape correspondence model is used to map a given ED shape to an ES shape. Once the AAM model converges to a shape in ED, the correspondence model is used to get an approximate ES shape. We segment the LV in the ES phase by first fitting a deformable superquadric to the AAM-converged shape (in ED) using data range forces and then tracking the LV using image and data range forces (for the ES shape obtained from the correspondence model). We test our approach by performing leave-one-out training on a 35-patient dataset. The data comprise 19 normal patients and 16 patients with heart abnormalities (cardiomyopathy and myocardial infarction). This composition makes it a challenging data collection with significant shape variation. The performance of our method is evaluated by measuring the mismatch between automatically segmented and expert-delineated contours using the Mean Perpendicular Distance (MPD) and Dice metrics. The average MPD is 2.6 mm for ED and 3.7 mm for ES (error mostly towards the apex and base). The average Dice is 0.9 for ED and 0.8 for ES. These results show good potential for clinical use.
Parsing radiographs by integrating landmark set detection and multi-object active appearance models
Albert Montillo, Qi Song, Xiaoming Liu, et al.
This work addresses the challenging problem of parsing 2D radiographs into salient anatomical regions such as the left and right lungs and the heart. We propose the integration of an automatic detection of a constellation of landmarks via rejection cascade classifiers and a learned geometric constellation subset detector model with a multi-object active appearance model (MO-AAM) initialized by the detected landmark constellation subset. Our main contribution is twofold. First, we propose a recovery method for false positive and false negative landmarks which makes it possible to handle extreme ranges of anatomical and pathological variability. Specifically we (1) recover false negative (missing) landmarks through the consensus of inferences from subsets of the detected landmarks, and (2) choose one from multiple false positives for the same landmark by learning Gaussian distributions for the relative location of each landmark. Second, we train a MO-AAM using the true landmarks for the detectors and, at test time, initialize the model using the detected landmarks. Our model fitting allows simultaneous localization of multiple regions by encoding the shape and appearance information of multiple objects in a single model. The integration of the landmark detection method and the MO-AAM reduces the mean distance error of the detected landmarks from 20.0 mm to 12.6 mm. We assess our method using a database of scout CT scans from 80 subjects with widely varying pathology.
Temporal and Motional Analysis
Multiple sclerosis lesions evolution in patients with clinically isolated syndrome
A. Crimi, O. Commowick, J. C. Ferre, et al.
Multiple sclerosis (MS) is a disease with heterogeneous evolution among patients. Classifications have been proposed according to either the clinical course or the immunopathological profile. Epidemiological data and imaging show that MS is a two-phase neurodegenerative inflammatory disease. At the early stage it is dominated by focal inflammation of the white matter (WM), and at a later stage it is dominated by diffuse lesions of the grey matter and spinal cord. A Clinically Isolated Syndrome (CIS) is a first neurological episode caused by inflammation/demyelination in the central nervous system which may lead to MS. Few studies have so far addressed this initial stage. Better understanding of the disease at its onset will lead to better discovery of pathogenic mechanisms, allowing suitable therapies at an early stage. We propose a new data processing framework able to provide an early characterization of CIS patients according to lesion patterns, and more specifically according to the nature of the inflammatory patterns of these lesions. The method is based on a two-layer classification. Initially, the spatio-temporal lesion patterns are classified using a tensor-like representation. The discovered lesion patterns are then used to identify groups of patients and their correlation with the 15-month follow-up total lesion load (TLL), which is so far the only image-based measure that can potentially predict future evolution of the pathology. We expect that the proposed framework can infer new prospective measures from the earliest imaging signs of MS, since it can provide a classification of different types of lesions across patients.
Landmark detection and coupled patch registration for cardiac motion tracking
Haiyan Wang, Wenzhe Shi, Xiahai Zhuang, et al.
Increasing attention has been focused on the estimation of the deformation of the endocardium to aid the diagnosis of cardiac malfunction. Landmark tracking can provide sparse, anatomically relevant constraints to help establish correspondences between images being tracked or registered. However, landmarks on the endocardium are often characterized by ambiguous appearance in cardiac MR images, which makes the extraction and tracking of these landmarks problematic. In this paper we propose an automatic framework to select and track a sparse set of distinctive landmarks in the presence of relatively large deformations in order to capture the endocardial motion in cardiac MR sequences. To achieve this, a sparse set of landmarks is identified using an entropy-based approach. In particular, we use singular value decomposition (SVD) to reduce the search space and localize the landmarks with relatively large deformation across the cardiac cycle. The tracking of the sparse set of landmarks is performed simultaneously by optimizing a two-stage Markov Random Field (MRF) model. The tracking result is further used to initialize registration-based dense motion tracking. We have applied this framework to extract a set of landmarks at the endocardial border of the left ventricle in MR image sequences from 51 subjects. Although the left ventricle undergoes a number of different deformations, we show how the radial and longitudinal motion and the twisting of the endocardial surface can be captured by the proposed approach. Our experiments demonstrate that motion tracking using sparse landmarks can outperform conventional motion tracking by a substantial amount, with improvements in tracking accuracy of 20.8% and 19.4%, respectively.
Voxel-wise displacement as independent features in classification of multiple sclerosis
Min Chen, Aaron Carass, Daniel S. Reich, et al.
We present a method that utilizes registration displacement fields to perform accurate classification of magnetic resonance images (MRI) of the brain acquired from healthy individuals and patients diagnosed with multiple sclerosis (MS). Contrary to standard approaches, each voxel in the displacement field is treated as an independent feature that is classified individually. Results show that when used with a simple linear discriminant and majority voting, the approach is superior to using the displacement field with a single classifier, even when compared against more sophisticated classification methods such as adaptive boosting, random forests, and support vector machines. Leave-one-out cross-validation was used to evaluate this method for classifying images by disease, MS subtype (Acc: 77%-88%), and age (Acc: 96%-100%).
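A minimal sketch of the "each voxel is an independent feature" idea with scikit-learn: one linear discriminant per voxel plus a majority vote over all voxels. It assumes a single scalar feature per voxel (e.g. displacement magnitude); the feature definition and any preprocessing are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train_voxelwise_lda(displacements, labels):
    """Train one linear discriminant per voxel of the displacement field.

    displacements : (n_subjects, n_voxels) scalar feature per voxel
    labels        : (n_subjects,) class labels (e.g. healthy vs. MS)
    """
    return [LinearDiscriminantAnalysis().fit(displacements[:, [v]], labels)
            for v in range(displacements.shape[1])]

def predict_majority(models, displacement):
    """Classify one subject by majority vote over the per-voxel classifiers."""
    votes = np.array([m.predict(displacement[[v]].reshape(1, -1))[0]
                      for v, m in enumerate(models)])
    values, counts = np.unique(votes, return_counts=True)
    return values[np.argmax(counts)]
```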
pCT derived arterial input function for improved pharmacokinetic analysis of longitudinal dceMRI for colorectal cancer
Monica Enescu, Manav Bhushan, Esme J. Hill, et al.
Dynamic contrast-enhanced MRI is a dynamic imaging technique that is now widely used for cancer imaging. Changes in tumour microvasculature are typically quantified by pharmacokinetic modelling of the contrast uptake curves. Reliable pharmacokinetic parameter estimation depends on the measurement of the arterial input function, which can be obtained from arterial blood sampling, or extracted from the image data directly. However, arterial blood sampling poses additional risks to the patient, and extracting the input function from MR intensities is not reliable. In this work, we propose to compute a perfusion CT based arterial input function, which is then employed for dynamic contrast enhanced MRI pharmacokinetic parameter estimation. Here, parameter estimation is performed simultaneously with intra-sequence motion correction by using nonlinear image registration. Ktrans maps obtained with this approach were compared with those obtained using a population averaged arterial input function, i.e. Orton. The dataset comprised 5 rectal cancer patients, who had been imaged with both perfusion CT and dynamic contrast enhanced MRI, before and after the administration of a radiosensitising drug. Ktrans distributions pre and post therapy were computed using both the perfusion CT and the Orton arterial input function. Perfusion CT derived arterial input functions can be used for pharmacokinetic modelling of dynamic contrast enhanced MRI data, when perfusion CT images of the same patients are available. Compared to the Orton model, perfusion CT functions have the potential to give a more accurate separation between responders and non-responders.
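Pharmacokinetic analysis of this kind is commonly based on the standard Tofts model, Ct(t) = Ktrans ∫ Cp(u) exp(-kep (t-u)) du, with the measured (here pCT-derived) AIF as Cp. A hedged sketch of fitting Ktrans and kep to a single tissue curve, using a simple rectangular-rule convolution and illustrative starting values; the joint motion correction performed in the paper is not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def tofts_model(t, ktrans, kep, aif):
    """Standard Tofts model: Ct(t) = Ktrans * int_0^t Cp(u) * exp(-kep*(t-u)) du.

    t   : (n,) uniformly sampled time points
    aif : (n,) arterial input function Cp sampled at the same time points
    """
    dt = float(t[1] - t[0])
    conc = np.zeros(len(t))
    for i in range(len(t)):
        kernel = np.exp(-kep * (t[i] - t[: i + 1]))
        conc[i] = ktrans * np.sum(aif[: i + 1] * kernel) * dt   # rectangular rule
    return conc

def fit_ktrans(t, tissue_curve, aif):
    """Least-squares fit of (Ktrans, kep) for one voxel or region of interest."""
    popt, _ = curve_fit(lambda tt, kt, kp: tofts_model(tt, kt, kp, aif),
                        t, tissue_curve, p0=[0.1, 0.5], bounds=(0.0, np.inf))
    return popt  # (ktrans, kep)
```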
Registration of multiple temporally related point sets using a novel variant of the coherent point drift algorithm: application to coronary tree matching
We present a novel algorithm for the registration of multiple temporally related point sets. Although our algorithm is derived in a general setting, our primary motivating application is coronary tree matching in multi-phase cardiac spiral CT. Our algorithm builds upon the fast, outlier-resistant Coherent Point Drift (CPD) algorithm, but incorporates temporal consistency constraints between the point sets, resulting in spatiotemporally smooth displacement fields. We preserve the speed and robustness of the CPD algorithm by using the technique of separable surrogates within an EM (Expectation-Maximization) optimization framework, while still minimizing a global registration cost function employing both spatial and temporal regularization. We demonstrate the superiority of our novel temporally consistent group-wise CPD algorithm over a straightforward pair-wise approach employing the original CPD algorithm, using coronary trees derived from both simulated and real cardiac CT data. In all tested configurations and datasets, our method yields a lower average error between tree landmarks than the pairwise method. In the worst case the difference is only a few micrometres, while in the best case our method halves the error of the pairwise method. The improvement is especially important for datasets with numerous outliers. With a fixed set of parameters tuned automatically, our algorithm yields better results than the original CPD algorithm, demonstrating its capacity to register an unknown dataset without a priori information.
Contextual filtering in curvelet domain for fluoroscopic sequences
Carole Amiot, Jérémie Pescatore, Jocelyn Chanussot, et al.
X-ray exposure during image-guided interventions can be substantial for the patient as well as for the medical staff, so dose reduction is a major concern. Nevertheless, decreasing the dose per image significantly degrades image quality, increasing noise and reducing contrast. Hence, we propose a new and efficient method to reduce the noise in low-dose fluoroscopic sequences. Many methods have been proposed in this domain, implementing either multi-scale approaches based on wavelets and their derivatives or filters in the spatial domain. Our work is based on a spatio-temporal denoising filter using the curvelet transform. Indeed, this sparse transform efficiently represents smooth images with edges and can be applied to fluoroscopic images in order to achieve robust denoising performance. We therefore propose to combine a temporal recursive filter with a spatial curvelet filter. Our work focuses on the use of the statistical dependencies between curvelet coefficients in order to optimize the threshold function. Determining the correlation among coefficients allows us to detect which coefficients represent the relevant signal. Thus, our method reduces or even removes curvelet-like artefacts. The performance and robustness of the proposed method are assessed on both synthetic and real low-dose sequences (i.e., 20 nGy/frame).
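The temporal part of such a filter can be realized as a first-order recursive (exponentially weighted) average applied frame by frame before the spatial transform-domain step; the curvelet transform itself requires a dedicated toolbox and is not reproduced here. A minimal sketch, with an illustrative blending weight:

```python
import numpy as np

def temporal_recursive_filter(frames, alpha=0.3):
    """First-order recursive temporal filter for a fluoroscopic sequence.

    frames : iterable of 2-D arrays (the low-dose sequence)
    alpha  : blending weight of the current frame (illustrative value);
             smaller alpha means stronger temporal averaging but more lag.
    """
    filtered = []
    state = None
    for frame in frames:
        state = frame.astype(float) if state is None \
            else alpha * frame + (1.0 - alpha) * state
        filtered.append(state.copy())
    return filtered
```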
OCT and Ultrasound
Multimodal segmentation of optic disc and cup from stereo fundus and SD-OCT images
Glaucoma is one of the major causes of blindness worldwide. One important structural parameter for the diagnosis and management of glaucoma is the cup-to-disc ratio (CDR), which tends to become larger as glaucoma progresses. While approaches exist for segmenting the optic disc and cup within fundus photographs, and more recently, within spectral-domain optical coherence tomography (SD-OCT) volumes, no approaches have been reported for the simultaneous segmentation of these structures within both modalities combined. In this work, a multimodal pixel-classification approach for the segmentation of the optic disc and cup within fundus photographs and SD-OCT volumes is presented. In particular, after segmentation of other important structures (such as the retinal layers and retinal blood vessels) and fundus-to-SD-OCT image registration, features are extracted from both modalities and a k-nearest-neighbor classification approach is used to classify each pixel as cup, rim, or background. The approach is evaluated on 70 multimodal image pairs from 35 subjects in a leave-10%-out fashion (by subject). A significant improvement in classification accuracy is obtained using the multimodal approach over that obtained from the corresponding unimodal approach (97.8% versus 95.2%; p < 0.05; paired t-test).
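A minimal sketch of the pixel-classification step with scikit-learn's k-NN classifier, plus a simple area-based cup/disc summary; the feature construction, the value of k, and the class encoding are illustrative assumptions, and the clinical CDR is usually a diameter rather than an area ratio.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def train_cup_rim_classifier(features, labels, k=15):
    """k-NN pixel classifier for cup / rim / background.

    features : (n_pixels, n_features) multimodal features (fundus + SD-OCT)
    labels   : (n_pixels,) with values {0: background, 1: rim, 2: cup}
    k        : number of neighbours (illustrative)
    """
    return KNeighborsClassifier(n_neighbors=k).fit(features, labels)

def cup_to_disc_area_ratio(pred_labels):
    """Area-based cup/disc ratio from a labelled pixel map (1 = rim, 2 = cup).

    Note: this area ratio is only a rough summary of the classification
    output, not the clinical (diameter-based) CDR.
    """
    cup = np.count_nonzero(pred_labels == 2)
    disc = cup + np.count_nonzero(pred_labels == 1)
    return cup / disc if disc else float("nan")
```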
Ultrasound image segmentation using feature asymmetry and shape guided live wire
Thomas M. Rackham, Sylvia Rueda, Caroline L. Knight, et al.
Ultrasound (US) is a versatile, low cost, real-time, widely available imaging modality. Manual segmentation for volumetric US measurements can be difficult and very time consuming, requiring slice-by-slice segmentations. However, automatic segmentation of ultrasound images can prove challenging due to the presence of speckle, attenuation, missing boundaries, signal dropouts, and artefacts. Semi-automatic segmentation techniques can improve the speed and accuracy of such measurements, taking advantage of clinical expertise while allowing user interaction. This paper presents a novel solution for interactive image segmentation on B-mode ultrasound images. The proposed method builds on the Live Wire framework and introduces two new sets of Live Wire costs, namely a Feature Asymmetry (FA) cost to localise edges and a weak shape constraint cost to aid the selection of appropriate boundaries in the presence of missing information or artefacts. The resulting semi-automatic segmentation method follows edges based on structural relevance rather than intensity gradients, adapting the method to ultrasound images, where the object boundaries are normally fuzzy. The new method is applied in the context of fetal arm adipose tissue quantification, the adipose tissue being an indicator of the fetal nutritional state. A quantitative and qualitative evaluation is performed with respect to related segmentation techniques. The method was tested on 48 manually segmented ultrasound images of the fetal arm across gestation, showing similar accuracy to the intensity-based Live Wire approach but superior repeatability while requiring significantly less time and user interaction.
Automatic segmentation of right ventricle on ultrasound images using sparse matrix transform and level set
Xulei Qin, Zhibin Cong, Luma V. Halig, et al.
An automatic framework is proposed to segment the right ventricle in ultrasound images. This method can automatically segment both epicardial and endocardial boundaries from a continuous echocardiography series by combining a sparse matrix transform (SMT), a training model, and a localized region-based level set. First, the sparse matrix transform extracts main motion regions of the myocardium as eigenimages by analyzing the statistical information of these images. Second, a training model of the right ventricle is registered to the extracted eigenimages in order to automatically detect the main location of the right ventricle and the corresponding transform relationship between the training model and the SMT-extracted results in the series. Third, the training model is then adjusted as an adapted initialization for the segmentation of each image in the series. Finally, based on the adapted initializations, a localized region-based level set algorithm is applied to segment both epicardial and endocardial boundaries of the right ventricle from the whole series. Experimental results from real subject data validated the performance of the proposed framework in segmenting the right ventricle from echocardiography. The mean Dice scores for the epicardial and endocardial boundaries are 89.1% ± 2.3% and 83.6% ± 7.3%, respectively. The automatic segmentation method based on the sparse matrix transform and level set can provide a useful tool for quantitative cardiac imaging.
Segmentation of retinal OCT images using a random forest classifier
Andrew Lang, Aaron Carass, Elias Sotirchos, et al.
Optical coherence tomography (OCT) has become one of the most common tools for diagnosis of retinal abnormalities. Both retinal morphology and layer thickness can provide important information to aid in the differential diagnosis of these abnormalities. Automatic segmentation methods are essential to providing these thickness measurements since the manual delineation of each layer is cumbersome given the sheer amount of data within each OCT scan. In this work, we propose a new method for retinal layer segmentation using a random forest classifier. A total of seven features are extracted from the OCT data and used to simultaneously classify nine layer boundaries. Taking advantage of the probabilistic nature of random forests, probability maps for each boundary are extracted and used to help refine the classification. We are able to accurately segment eight retinal layers with an average Dice coefficient of 0.79 ± 0.13 and a mean absolute error of 1.21 ± 1.45 pixels for the layer boundaries.
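A minimal sketch of the boundary classification step with scikit-learn's random forest, returning the per-boundary probability maps that can then be refined into final layer surfaces; the feature extraction and the class encoding are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_boundary_classifier(features, boundary_labels, n_trees=50):
    """Random forest over per-pixel OCT features.

    features        : (n_pixels, n_features) feature vectors
    boundary_labels : (n_pixels,) in {0, ..., 9}; 0 = no boundary,
                      1..9 = one of the nine layer boundaries
    """
    return RandomForestClassifier(n_estimators=n_trees).fit(features, boundary_labels)

def boundary_probability_maps(model, features, image_shape):
    """Per-class probability maps used to refine the final segmentation."""
    proba = model.predict_proba(features)          # (n_pixels, n_classes)
    return proba.T.reshape(-1, *image_shape)       # one map per class
```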
Classification of atorvastatin effect based on shape and texture features in ultrasound images
Xin Yang, Rui Wang, Liu Li, et al.
Carotid atherosclerosis is the major cause of ischemic stroke, a leading cause of mortality and disability. Many research studies have investigated how to quantitatively evaluate the local arterial effects of potential carotid disease treatments. In this paper, the effect of atorvastatin on atherosclerotic plaques is evaluated by classification based on various shape and texture features extracted from ultrasound images. First, atherosclerotic lesions were manually delineated in the ultrasound images by an expert physician. Then 26 shape and 85 texture characteristics, together with the percent change in vessel wall volume (VWV), were computed from the lesions. From these, the most effective features and the VWV percent change were selected by the physician for evaluating the drug treatment effect, keeping the method both practical and accurate. Finally, a support vector machine (SVM) classifier was utilized to classify atherosclerotic plaques between the atorvastatin and placebo groups. A leave-one-case-out protocol was applied to a database of 768 carotid ultrasound images of 12 patients (5 subjects in the placebo group and 7 subjects in the atorvastatin group) for evaluation. The classification results showed an overall accuracy of 91.67%, sensitivity of 95.56%, specificity of 86.16%, positive predictive value of 90.72%, negative predictive value of 93.20%, Matthews correlation coefficient of 82.81%, and Youden's index of 81.72%; the receiver operating characteristic (ROC) curve also performed well. The experimental results further demonstrate that classification using the combined features has higher accuracy than using shape/texture features or the VWV percent change alone. The proposed method can be used to evaluate statin effects in treated patients and could be further developed into a tool to facilitate the diagnosis of atherosclerosis.
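A minimal sketch of the leave-one-case-out SVM evaluation with scikit-learn, grouping all images of a patient so that they are held out together; the kernel, scaling, and feature layout are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def leave_one_case_out_accuracy(features, labels, patient_ids):
    """Leave-one-case-out SVM evaluation.

    features    : (n_images, n_features) shape/texture features (+ VWV change)
    labels      : (n_images,) 0 = placebo, 1 = atorvastatin
    patient_ids : (n_images,) case identifier, so all images of one patient
                  are held out together
    """
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    correct = 0
    for train, test in LeaveOneGroupOut().split(features, labels, groups=patient_ids):
        clf.fit(features[train], labels[train])
        correct += np.sum(clf.predict(features[test]) == labels[test])
    return correct / len(labels)
```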
Lung
Real time motion analysis in 4D medical imaging using conditional density propagation
Johannes Lotz, Bernd Fischer, Janine Olesch, et al.
Motion, like tumor movement due to respiration, constitutes a major problem in radiotherapy and/or diagnostics. A common idea to compensate for the motion in 4D imaging, is to invoke a registration strategy, which aligns the images over time. This approach is especially challenging if real time processing of the data and robustness with respect to noise and acquisition errors is required. To this end, we present a novel method which is based only on selected image features and uses a probabilistic approach to compute the wanted transformations of the 3D images. Moreover, we restrict the search space to rotation, translation and scaling. In an initial phase, landmarks in the first image of the series have to be identified, which are in the course of the scheme automatically transferred to the next image. To find the associated transformation parameters, a probabilistic approach, based on factored sampling, is invoked. We start from a state set containing a fixed number of different candidate parameters whose probabilities are approximated based on the image information at the landmark positions. Subsequent time frames are analyzed by factored sampling from this state set and by superimposing a stochastic diffusion term on the parameters. The algorithm is successfully applied to clinical 4D CT data. Landmarks have been placed manually to mark the tumor or a similar structure in the initial image whose position is then tracked over time. We achieve a processing rate of up to 12 image volumes per second. The accuracy of the tracking after five time steps is measured based on expert placed landmarks. We achieve a mean landmark error of less than 2 mm in each dimension in a region with radius of 25 mm around the target structure.
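One step of a factored-sampling (CONDENSATION-style) tracker of this kind can be sketched as resampling the candidate transform parameters by their weights, adding a stochastic diffusion term, and re-weighting with an image likelihood evaluated at the landmark positions; the parameterization and diffusion scale below are illustrative.

```python
import numpy as np

def track_step(particles, weights, likelihood, diffusion_sigma=1.0, rng=None):
    """One factored-sampling update of the transformation parameters.

    particles       : (n, d) candidate parameters (e.g. translation, rotation, scale)
    weights         : (n,) probabilities of the candidates (sum to 1)
    likelihood      : callable mapping an (n, d) parameter set to (n,)
                      image-based likelihoods at the landmark positions
    diffusion_sigma : std. dev. of the stochastic diffusion (illustrative)
    """
    rng = rng or np.random.default_rng()

    # Resample proportionally to the current weights (factored sampling) ...
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    # ... superimpose a stochastic diffusion term on the parameters ...
    proposed = particles[idx] + rng.normal(0.0, diffusion_sigma, particles.shape)
    # ... and re-weight with the image likelihood of the new frame.
    new_weights = likelihood(proposed)
    new_weights = new_weights / new_weights.sum()

    estimate = np.average(proposed, axis=0, weights=new_weights)
    return proposed, new_weights, estimate
```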
Population based modeling of respiratory lung motion and prediction from partial information
Dirk Boye, Golnoosh Samei, Johannes Schmidt, et al.
Treatment of tumor sites affected by respiratory motion requires knowledge of the position and the shape of the tumor and the surrounding organs during breathing. As not all structures of interest can be observed in real-time, their position needs to be predicted from partial information (so-called surrogates) like motion of diaphragm, internal markers or patients surface. Here, we present an approach to model respiratory lung motion and predict the position and shape of the lungs from surrogates. 4D-MRI lung data of 10 healthy subjects was acquired and used to create a model based on Principal Component Analysis (PCA). The mean RMS motion ranged from 1.88 mm to 9.66 mm. Prediction was done using a Bayesian approach and an average RMSE of 1.44 mm was achieved.
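A simple stand-in for the prediction step: build a PCA motion model from the training fields and estimate the component scores from the surrogate coordinates by least squares (the paper uses a Bayesian formulation; this sketch ignores prior and noise terms).

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_motion_model(motion_fields, n_components=5):
    """PCA motion model from training motion fields.

    motion_fields : (n_samples, n_points * 3) flattened displacement fields
    """
    return PCA(n_components=n_components).fit(motion_fields)

def predict_from_surrogate(pca, surrogate_values, surrogate_idx):
    """Least-squares estimate of the full field from a partial observation.

    surrogate_values : observed displacements of the surrogate points
    surrogate_idx    : column indices of those points in the flattened field
    """
    # Restrict the PCA basis to the observed (surrogate) coordinates and
    # solve for the component scores that best explain the observation.
    basis = pca.components_[:, surrogate_idx]            # (k, n_observed)
    rhs = surrogate_values - pca.mean_[surrogate_idx]
    scores, *_ = np.linalg.lstsq(basis.T, rhs, rcond=None)

    # Reconstruct the full motion field from the estimated scores.
    return pca.mean_ + scores @ pca.components_
```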
A derivative of stick filter for pulmonary fissure detection in CT images
Changyan Xiao, Marius Staring, Juan Wang, et al.
Pulmonary fissures are important landmarks for automated recognition of lung anatomy and need to be detected as a pre-processing step. We propose a derivative of stick (DoS) filter for pulmonary fissure detection in thoracic CT scans, exploiting their thin curvilinear shape across multiple transverse planes. Based on a stick decomposition of a local rectangular neighborhood, a nonlinear derivative operator perpendicular to each stick is defined. Combined with the standard deviation of the intensity along the stick, the composed likelihood function gives a strong response to fissure-like bright lines and tends to suppress undesired structures including large vessels, step edges and blobs. Applying the 2D filter sequentially to the sagittal, coronal and axial slices, an approximate 3D co-planar constraint is implicitly exerted through the cascaded pipeline, which helps to further eliminate non-fissure tissues. To generate a clear fissure segmentation, we adopt a connected-component-based post-processing scheme, combined with a branch-point finding algorithm to disconnect residual adjacent clutter from the fissures. The performance of our filter has been verified in experiments with a 23-patient dataset that includes pathologies of varying extent. The DoS filter compared favorably with prior algorithms.
Globally optimal lung tumor co-segmentation of 4D CT and PET images
Junjie Bai, Qi Song, Sudershan K. Bhatia, et al.
Four-dimensional CT provides valuable motion information about the patient throughout the respiratory phases. PET, on the other hand, provides functional information about the tumor, which differentiates tumor from normal tissue effectively. However, manually contouring structures of interest on 4D CT is prohibitively tedious due to the large amount of data. In this paper, we propose an automatic method to segment lung tumors simultaneously in all phases of a 4D CT scan and in the PET scan. The problem is modeled as an optimization problem based on Markov Random Fields (MRF), which involves region and boundary terms and a regularization term between the PET and CT scans. The problem is solved optimally by computing a single max flow in a properly constructed graph. As far as the authors know, this is the first work to simultaneously segment tumor in 4D CT while incorporating PET information. Experiments on 3 lung cancer patients were conducted. The average Dice coefficient is improved from 0.680 to 0.791 compared to segmenting the tumor volume in 4D CT phase by phase without incorporating PET information. The proposed method is efficient in terms of running time, since it only requires computing a max flow, for which efficient algorithms exist. The memory consumption scales linearly with the number of 4D CT phases, which enables our method to handle multiple 4D CT phases with reasonable memory consumption.
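A 2-D toy version of the graph-cut step using the PyMaxflow package, with a PET-weighted unary term and a constant smoothness term; it segments a single slice rather than the paper's joint 4D CT + PET graph, and all weights are illustrative.

```python
import numpy as np
import maxflow  # PyMaxflow

def segment_with_graphcut(ct_slice, pet_slice, pet_weight=2.0, smoothness=1.0):
    """Binary tumour/background MRF segmentation of one slice via max-flow.

    ct_slice, pet_slice : 2-D arrays normalized to [0, 1]
    The unary (region) term mixes CT and PET intensities; the pairwise
    (boundary) term is a uniform smoothness penalty. Values are illustrative.
    """
    g = maxflow.Graph[float]()
    node_ids = g.add_grid_nodes(ct_slice.shape)

    # Pairwise (boundary) term: 4-connected grid with constant capacity.
    g.add_grid_edges(node_ids, smoothness)

    # Unary (region) term: high PET uptake pulls a voxel towards "tumour".
    tumour_cost = 1.0 - (ct_slice + pet_weight * pet_slice) / (1.0 + pet_weight)
    background_cost = 1.0 - tumour_cost
    g.add_grid_tedges(node_ids, tumour_cost, background_cost)

    g.maxflow()
    # Boolean mask of the nodes assigned to the sink terminal.
    return g.get_grid_segments(node_ids)
```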
Pulmonary lobe segmentation using the thin plate spline (TPS) with the help of the fissure localization areas
Benjamin Odry, Pauline Steininger, Li Zhang, et al.
Lung lobe segmentation is clinically important for disease classification, treatment and follow-up of pulmonary diseases. Diseases such as tuberculosis and silicosis typically present in specific lobes, i.e. almost exclusively the upper ones. However, the fissures separating the lobes are often difficult to detect because of their variable shape, appearance and low contrast in computed tomography images. In addition, a substantial fraction of patients have missing or incomplete fissures. To solve this problem, several methods have been employed to interpolate incomplete or missed fissures. For example, Pu et al. used implicit surface fitting with different radial basis functions; Ukil et al. applied fast marching methods; and Ross et al. used an interactive thin plate spline (TPS) interpolation where the user selects the points used to compute the fissure interpolation via TPS. In our study, the results of an automated fissure detection method based on a plate filter, as well as points derived from vessels, were fed into a robust TPS interpolation that ultimately defined the lobes. To improve the selection of detected points, we statistically determined the areas where fissures are localized from 19 datasets. These areas were also used to constrain the TPS fitting so that it reflected the expected shape and orientation of the fissures, hence improving accuracy. Regions where the detection step provided a low response were replaced by points derived from a distance-to-vessels map. The error, defined as the mean Euclidean distance between ground truth points and the TPS-fitted fissures, was computed for each dataset to validate our results. Ground truth points were defined for both exact fissure locations and approximate fissure locations (when the fissures were not clearly visible). The mean error was 5.64 ± 4.83 mm for the exact ground truth points, and 10.01 ± 8.23 mm for the approximate ground truth points.
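The TPS surface fit itself can be sketched with SciPy's RBF interpolator (kernel 'thin_plate_spline', SciPy ≥ 1.7), treating the fissure as a height field z = f(x, y) over detected and vessel-derived points; the smoothing value is illustrative and the statistical localization constraint is not reproduced.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def fit_fissure_surface(points, grid_xy, smoothing=1.0):
    """Thin-plate-spline fit of a fissure surface z = f(x, y).

    points    : (n, 3) detected fissure points (filter responses plus
                vessel-derived points in low-response regions)
    grid_xy   : (m, 2) (x, y) locations where the surface is evaluated
    smoothing : > 0 relaxes exact interpolation for robustness to outliers
    """
    tps = RBFInterpolator(points[:, :2], points[:, 2],
                          kernel="thin_plate_spline", smoothing=smoothing)
    return tps(grid_xy)   # interpolated fissure height at each grid point
```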
Highly accurate fast lung CT registration
Jan Rühaak, Stefan Heldmann, Till Kipshagen, et al.
Lung registration in thoracic CT scans has received much attention in the medical imaging community. Possible applications range from follow-up analysis, motion correction for radiation therapy, monitoring of air flow and pulmonary function to lung elasticity analysis. In a clinical environment, runtime is always a critical issue, ruling out quite a few excellent registration approaches. In this paper, a highly efficient variational lung registration method based on minimizing the normalized gradient fields distance measure with curvature regularization is presented. The method ensures diffeomorphic deformations by an additional volume regularization. Supplemental user knowledge, like a segmentation of the lungs, may be incorporated as well. The accuracy of our method was evaluated on 40 test cases from clinical routine. In the EMPIRE10 lung registration challenge, our scheme ranks third, with respect to various validation criteria, out of 28 algorithms with an average landmark distance of 0.72 mm. The average runtime is about 1:50 min on a standard PC, making it by far the fastest approach of the top-ranking algorithms. Additionally, the ten publicly available DIR-Lab inhale-exhale scan pairs were registered to subvoxel accuracy at computation times of only 20 seconds. Our method thus combines very attractive runtimes with state-of-the-art accuracy in a unique way.
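For reference, the normalized gradient fields distance and the curvature regularizer used in this class of methods are commonly written as follows, where ε controls how strongly weak (noise-level) gradients are suppressed and u is the displacement field:

```latex
% Normalized gradient field of an image I (edge parameter eps > 0):
\[
  n_{\varepsilon}(I,x) \;=\; \frac{\nabla I(x)}{\sqrt{\|\nabla I(x)\|^{2} + \varepsilon^{2}}}
\]
% NGF distance between reference R and template T, and curvature regularizer of u:
\[
  \mathcal{D}^{\mathrm{NGF}}(R,T) \;=\; \int_{\Omega} 1 - \bigl\langle n_{\varepsilon}(R,x),\, n_{\varepsilon}(T,x) \bigr\rangle^{2}\, dx,
  \qquad
  \mathcal{S}^{\mathrm{curv}}(u) \;=\; \tfrac{1}{2} \int_{\Omega} \sum_{i=1}^{3} \bigl(\Delta u_{i}(x)\bigr)^{2}\, dx .
\]
```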
Registration
Assessing accuracy of non-linear registration in 4D image data using automatically detected landmark correspondences
René Werner, Christine Duscha, Alexander Schmidt-Richberg, et al.
4D imaging is becoming increasingly important in clinical practice. Its use in diagnostics and therapy planning usually requires the application of non-linear registration techniques. The reliability of information derived from the computed transformations depends directly on the registration accuracy. Ideally, this accuracy should be evaluated on a patient- and data-specific level, which requires appropriate evaluation criteria and procedures. A standard approach for evaluating non-linear registration accuracy is to compute a landmark- or point-based registration error by means of manually detected landmark correspondences in the images to register, with the landmarks being anatomically characteristic points. Manual detection of such points is, however, time-consuming and error-prone. In this contribution, different operators for automatic landmark detection and a block matching strategy for landmark propagation in 4D image sequences (here: 4D lung CT, 4D liver MRI) are proposed and evaluated. It turns out that the so-called Förstner-Rohr operators perform best for detection of anatomically characteristic points and that the proposed propagation strategy ensures a robust transfer of these landmarks between the images. The automatically detected landmark correspondences are then used to evaluate the accuracy of different registration approaches (in total 48 variants) applied for registering 4D lung CT data. The resulting registration error values are compared to errors obtained by manually detected landmark pairs. It is shown that derived statements concerning differences in accuracy of the registration approaches are identical for both the manually and the automatically detected landmark sets.
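The block-matching propagation step can be sketched as a normalized cross-correlation search around each landmark; the block and search-window sizes below are illustrative, and the sketch assumes the landmark and its search window lie inside the image.

```python
import numpy as np

def propagate_landmark(fixed, moving, landmark, block=7, search=10):
    """Propagate one landmark by block matching with normalized cross-correlation.

    fixed, moving : 2-D images (e.g. one slice of two 4D-CT phases)
    landmark      : (row, col) position in the fixed image
    block, search : half-sizes of the matching block and search window
    """
    r, c = landmark
    ref = fixed[r - block:r + block + 1, c - block:c + block + 1]
    ref = (ref - ref.mean()) / (ref.std() + 1e-8)

    best, best_pos = -np.inf, landmark
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            cand = moving[rr - block:rr + block + 1, cc - block:cc + block + 1]
            cand = (cand - cand.mean()) / (cand.std() + 1e-8)
            ncc = np.mean(ref * cand)      # normalized cross-correlation
            if ncc > best:
                best, best_pos = ncc, (rr, cc)
    return best_pos
```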
Deformable image registration by multi-objective optimization using a dual-dynamic transformation model to account for large anatomical differences
Some of the hardest problems in deformable image registration are problems where large anatomical differences occur between image acquisitions (e.g. large deformations due to images acquired in prone and supine positions and (dis)appearing structures between image acquisitions due to surgery). In this work we developed and studied, within a previously introduced multi-objective optimization framework, a dual-dynamic transformation model to be able to tackle such hard problems. This model consists of two non-fixed grids: one for the source image and one for the target image. By not requiring a fixed, i.e. pre-determined, association of the grid with the source image, we can accommodate both large deformations and (dis)appearing structures. To find the transformation that aligns the source with the target image, we used an advanced, powerful model-based evolutionary algorithm that exploits features of a problem’s structure in a principled manner via probabilistic modeling. The actual transformation is given by the association of coordinates with each point in the two grids. Linear interpolation inside a simplex was used to extend the correspondence (i.e. transformation) as found for the grid to the rest of the volume. As a proof of concept we performed tests on both artificial and real data with disappearing structures. Furthermore, the case of prone-supine image registration for 2D axial slices of breast MRI scans was evaluated. Results demonstrate the strong potential of the proposed approach to account for large deformations and (dis)appearing structures in deformable image registration.
Multimodal rigid-body registration of 3D brain images using bilateral symmetry
Sylvain Prima, Olivier Commowick
In this paper we show how to use the approximate bilateral symmetry of the brain with respect to its interhemispheric fissure for intra-subject (rigid-body) mono- and multimodal 3D image registration. We propose to define and compute an approximate symmetry plane in the two images to register and to use these two planes as constraints in the registration problem. This 6-parameter problem is thus turned into three successive 3-parameter problems. Our hope is that the lower dimension of the parameter space makes these three subproblems easier and faster to solve than the initial one. We implement two algorithms to solve these three subproblems in the exact same way, within a common intensity-based framework using mutual information as the similarity measure. We compare this symmetry-based strategy with the standard approach (i.e. direct estimation of a 6-parameter rigid-body transformation), also implemented within the same framework, using synthetic and real datasets. We show our symmetry-based method to achieve subvoxel accuracy with better robustness and larger capture range than the standard approach, while being slightly less accurate and slower. Our method also succeeds in registering clinical MR and PET images with a much better accuracy than the standard approach. Finally, we propose a third strategy to decrease the run time of the symmetry-based approach and we give some ideas, to be tested in future works, on how to improve its accuracy.
CT colonography: inverse-consistent symmetric registration of prone and supine inner colon surfaces
CT colonography interpretation is difficult and time-consuming because fecal residue or fluid can mimic or obscure polyps, leading to diagnostic errors. To compensate for this, it is normal practice to obtain CT data with the patient in prone and supine positions. Repositioning redistributes fecal residue and colonic gas; fecal residue tends to move, while fixed mural pathology does not. The cornerstone of competent interpretation is the matching of corresponding endoluminal locations between prone and supine acquisitions. Robust and accurate automated registration between acquisitions should lead to faster and more accurate detection of colorectal cancer and polyps. Any directional bias when registering the colonic surfaces could lead to incorrect anatomical correspondence, resulting in reader error. We aim to reduce directional bias and so increase robustness by adapting a cylindrical registration algorithm to penalize inverse-consistency error, using a symmetric optimization. Using 17 validation cases, the mean inverse-consistency error was reduced significantly by 86%, from 3.3 mm to 0.45 mm. Furthermore, we show improved alignment of the prone and supine colonic surfaces, evidenced by a reduction in the mean-of-squared-differences by 43% overall. Mean registration error, measured at a sparse set of manually selected reference points, remained at the same level as the non-symmetric method (no significant differences). Our results suggest that the inverse-consistent symmetric algorithm performs more robustly than a non-symmetric implementation of B-spline registration.
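As an illustration of the inverse-consistency error that the symmetric optimization penalizes, the sketch below computes the mean inverse-consistency error for a pair of forward/backward displacement fields on a regular 2D grid. This is a generic sketch under the stated shape conventions, not the cylindrical colon-surface parameterization used in the paper.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def mean_inverse_consistency_error(disp_fwd, disp_bwd, spacing=1.0):
    """Mean inverse-consistency error of a forward/backward displacement pair.

    disp_fwd, disp_bwd: arrays of shape (2, H, W) with displacements (in
    voxels) along axes 0 and 1. spacing converts the result to millimetres."""
    h, w = disp_fwd.shape[1:]
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")

    # Forward-mapped positions y = x + u_fwd(x).
    yf = ys + disp_fwd[0]
    xf = xs + disp_fwd[1]

    # Backward displacement sampled (linearly) at the forward-mapped positions.
    ub0 = map_coordinates(disp_bwd[0], [yf, xf], order=1, mode="nearest")
    ub1 = map_coordinates(disp_bwd[1], [yf, xf], order=1, mode="nearest")

    # Composition error: (x + u_fwd(x) + u_bwd(x + u_fwd(x))) - x.
    err = np.sqrt((disp_fwd[0] + ub0) ** 2 + (disp_fwd[1] + ub1) ** 2)
    return spacing * err.mean()

if __name__ == "__main__":
    # A perfectly inverse-consistent constant translation gives zero error.
    fwd = np.zeros((2, 32, 32)); fwd[0] += 1.5
    bwd = -fwd
    print(mean_inverse_consistency_error(fwd, bwd, spacing=0.7))  # 0.0
```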
Statistical 3D prostate imaging atlas construction via anatomically constrained registration
Mirabela Rusu, B. Nicolas Bloch, Carl C. Jaffe, et al.
Statistical imaging atlases allow for integration of information from multiple patient studies collected across different image scales and modalities, such as multi-parametric (MP) MRI and histology, providing population statistics regarding a specific pathology within a single canonical representation. Such atlases are particularly valuable in the identification and validation of meaningful imaging signatures for disease characterization in vivo within a population. Despite the high incidence of prostate cancer, an imaging atlas focused on different anatomic structures of the prostate, i.e. an anatomic atlas, has yet to be constructed. In this work we introduce a novel framework for MRI atlas construction that uses an iterative, anatomically constrained registration (AnCoR) scheme to enable the proper alignment of the prostate (Pr) and central gland (CG) boundaries. Our current implementation uses endorectal, 1.5T or 3T, T2-weighted MRI from 51 patients with biopsy-confirmed cancer; however, the prostate atlas is seamlessly extensible to include additional MRI parameters. In our cohort, radical prostatectomy is performed following MP-MR image acquisition; thus ground truth annotations for prostate cancer are available from the histological specimens. Once mapped onto MP-MRI through elastic registration of histological slices to corresponding T2-w MRI slices, the annotations are utilized by the AnCoR framework to characterize the 3D statistical distribution of cancer per anatomic structure. Such distributions are useful for guiding biopsies toward regions of higher cancer likelihood and understanding imaging profiles for disease extent in vivo. We evaluate our approach via the Dice similarity coefficient (DSC) for different anatomic structures (delineated by expert radiologists): Pr, CG and peripheral zone (PZ). The AnCoR-based atlas had a CG DSC of 90.36% and a Pr DSC of 89.37%. Moreover, we evaluated the deviation of anatomic landmarks, the urethra and verumontanum, and found deviations of 3.64 mm and 4.31 mm, respectively. Alternative strategies that use only the T2-w MRI or the prostate surface to drive the registration were implemented as comparative approaches. The AnCoR framework outperformed the alternative strategies by providing the lowest landmark deviations.
Mouse lung volume reconstruction from efficient groupwise registration of individual histological slices with natural gradient
Haibo Wang, Mirabela Rusu, Thea Golden, et al.
Mouse lung models facilitate the study of the pathogenesis of various pulmonary diseases such as infections and inflammatory diseases. The co-registration of ex vivo histological data and pre-excised magnetic resonance imaging (MRI) in preclinical mouse models would allow for determination and validation of imaging signatures for different pathobiologies within the lung. While slice-based co-registration could be used, this approach (a) assumes that slice correspondences between the two different modalities exist, and (b) finding slice correspondences often requires the intervention of an expert and is time-consuming. A more practical approach is to first reconstruct the 3D histological volume from individual slices, then perform 3D registration with the MR volume. Before the histological reconstruction, image registration is required to compensate for geometric differences between slices. Pairwise algorithms work by registering pairs of successive slices. However, even if successive slices are registered reasonably well, the propagation of registration errors over slices can yield a distorted volumetric reconstruction significantly different in shape from the shape of the true specimen. Groupwise registration can reduce the error propagation by considering more than two successive images during the registration, but existing algorithms are computationally expensive. In this paper, we present an efficient groupwise registration approach, which yields consistent volumetric reconstruction and yet runs as fast as pairwise registration. The improvements are based on 1) the natural gradient, which speeds up the transform warping procedure, and 2) efficient optimization of the cost function of our groupwise registration. The strength of the natural gradient technique is that it could help mitigate the impact of the uncertainties of the gradient direction across multiple template slices. Experiments on two mouse lung datasets show that, compared to pairwise registration, our groupwise approach converges faster and yields a globally more consistent reconstruction.
Surrogate-based diffeomorphic motion estimation for radiation therapy: comparison of multivariate regression approaches
Matthias Wilms, René Werner, Jan Ehrhardt, et al.
Respiratory motion is a major source of error in radiation treatment of thoracic and abdominal tumors. State-of-the-art motion-adaptive radiation therapy techniques are usually guided by external breathing signals acting as surrogates for the internal motion of organs and tumors. Assuming a relationship between the surrogate measurements and the internal motion patterns, which are usually described by non-linear transformations, correspondence models can be defined and used for surrogate-based motion estimation. In this contribution, a diffeomorphic motion estimation framework based on standard multivariate linear regression is extended by subspace-based approaches like principal component analysis, partial least squares, and canonical correlation analysis. These methods aim at exploiting the hidden structure of the training data to improve the use of the information provided by high-dimensional surrogate and internal motion representations. A quantitative evaluation carried out on 4D CT data sets of 10 lung tumor patients shows that subspace-based approaches are able to significantly improve the mean estimation accuracy when compared to standard multivariate linear regression.
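For readers unfamiliar with surrogate-based correspondence models, the sketch below fits the baseline multivariate linear regression model that the paper extends with subspace methods: surrogate measurements are mapped to a vectorized internal motion representation by least squares. All array shapes and names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_correspondence_model(S, M):
    """S: (n_samples, n_surrogate) surrogate measurements (e.g. breathing
    signal amplitudes); M: (n_samples, n_motion) vectorized internal motion
    representations. Returns weights W and intercept b of M ~ S @ W + b."""
    S1 = np.hstack([S, np.ones((S.shape[0], 1))])   # append a bias column
    coef, *_ = np.linalg.lstsq(S1, M, rcond=None)   # multivariate least squares
    return coef[:-1], coef[-1]

def estimate_motion(W, b, s_new):
    """Predict the internal motion representation for new surrogate values."""
    return s_new @ W + b

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    S = rng.standard_normal((20, 3))                # 20 breathing phases, 3 surrogates
    true_W = rng.standard_normal((3, 50))
    M = S @ true_W + 0.01 * rng.standard_normal((20, 50))
    W, b = fit_correspondence_model(S, M)
    print(np.allclose(W, true_W, atol=0.1))         # True for this low-noise example
```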
Segmentation and Localization
Probabilistic model-based detection and localization of calibration phantoms in CT Images
Mingna Zheng, J. Jeffrey Carr, Yaorong Ge
As medical imaging moves from qualitative assessment to quantitative analysis based on biomarkers, calibration of imaging data becomes critically important. In computed tomography (CT), image values are scaled to Hounsfield units (HU); despite this, the measurements can vary due to differences in machine-level calibration and patient size. One way to ensure proper calibration at the image level is to include a phantom containing objects of known HU values in each scan so that each image can be calibrated individually by the values measured from the phantom regions within the image. This introduces a need to extract phantom measurements from each image. Given a reasonable starting point, chosen manually or heuristically, this is a straightforward segmentation problem because the phantom regions are well defined and the values are relatively uniform. However, the problem becomes challenging if the requirement is a fully automated method that is robust across variations of phantoms and that can exclude images without phantoms. In this paper, we describe a probabilistic model-based approach to tackling this problem. We use the constellation model framework first proposed by Burl et al. to represent a phantom as composed of a number of parts and determine the existence and localization of the phantom in a probabilistic sense based on the detection of candidate parts. This model-based approach allows us to formally describe variations in phantom design and handle missing parts caused by phantom regions similar to the background. Initial results on 100 CT studies from a longitudinal cardiovascular study are encouraging.
Coarse-to-fine localization of anatomical landmarks in CT images based on multi-scale local appearance and rotation-invariant spatial landmark distribution model
Mitsutaka Nemoto, Yoshitaka Masutani, Shouhei Hanaoka, et al.
The detection of anatomical landmarks (LMs) often plays a key role in medical image analysis. In our previous study, we reported an automatic LM detection method for CT images. Despite its high detection sensitivity, the distance errors of the detection results for some LMs were relatively large as they sometimes exceeded 10 mm. Naturally, it is desirable to minimize LM detection error, especially when the LM detection results are used in image analysis tasks such as image segmentation. In this study, we introduce a novel method of coarse-to-fine localization to increase accuracy, which refines the LM positions detected by our previous method. The proposed LM localization is performed by both multiscale local image pattern recognition and likelihood estimation from prior knowledge of the spatial distribution of multiple LMs. Classifier ensembles for recognizing local image patterns are trained by the cost-sensitive MadaBoost. The cost of each sample is altered depending on its distance from the ground truth LM position. The spatial LM distribution likelihood, calculated from a statistical model of inter-landmark distances between all LM pairs, is also used in the localization. The evaluation experiment was performed with 15 LMs in 39 CT images. The average distance error of the pre-detected LM position was improved by 2.05 mm by the proposed localization method. The proposed method was shown to be effective for reducing LM detection error.
Automated anatomical labeling of the cerebral arteries using belief propagation
Murat Bilgel, Snehashis Roy, Aaron Carass, et al.
Labeling of cerebral vasculature is important for characterization of anatomical variation, quantification of brain morphology with respect to specific vessels, and inter-subject comparisons of vessel properties and abnormalities. We propose an automated method to label the anterior portion of cerebral arteries using a statistical inference method on the Bayesian network representation of the vessel tree. Our approach combines the likelihoods obtained from a random forest classifier trained using vessel centerline features with a belief propagation method integrating the connection probabilities of the cerebral artery network. We evaluate our method on 30 subjects using a leave-one-out validation, and show that it achieves an average correct vessel labeling rate of over 92%.
A pattern recognition framework for vessel segmentation in 4D CT of the brain
J. J. Mordang, M. T. H. Oei, R. van den Boom, et al.
In this study, a pattern recognition-based framework is presented to automatically segment the complete cerebral vasculature from 4D Computed Tomography (CT) patient data. Ten consecutive patients who were admitted to our hospital on suspicion of ischemic stroke were included in this study. A background mask and bone mask were calculated based on intensity thresholding and morphological operations, and the following six image features were proposed: 1) a subtraction image consisting of timing-invariant CTA and non-contrast CT, 2) the area under the curve of a gamma variate function fitted to the tissue curves, 3-5) three optimized parameter values of this gamma variate function, and 6) a vessel likeliness function. After masking bone and background, these features were used to train a linear discriminant voxel classifier (LDC) on regions of interest (ROIs), which were annotated in soft tissue (white matter and gray matter) and vessels by an expert observer. The LDC was trained in a leave-one-out manner in which the tissue ROIs of 9 patients were used for training and the tissue ROIs of the remaining patient were used for testing the classifier. To evaluate the framework, the accuracy for each training cycle was calculated by dividing the number of true positives and true negatives by the total number of true positives, true negatives, false positives, and false negatives. The resulting averaged accuracy was 0.985 ± 0.014, with a range of 0.957 to 0.999.
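As an illustration of the gamma variate features listed above (items 2-5), the following SciPy sketch fits a gamma variate function to a synthetic time-concentration curve and computes its area under the curve; the parameterization, sampling, noise level and initial guess are assumptions for illustration, not the authors' settings.

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, k, t0, alpha, beta):
    """Classic gamma variate model often used for tissue concentration curves."""
    dt = np.clip(t - t0, 0.0, None)
    return k * dt**alpha * np.exp(-dt / beta)

# Fit one synthetic time-concentration curve and compute its AUC; the AUC and
# the fitted parameters could then serve as per-voxel features.
t = np.linspace(0.0, 40.0, 41)                       # seconds
rng = np.random.default_rng(0)
tcc = gamma_variate(t, 8.0, 5.0, 2.0, 3.0) + 0.3 * rng.standard_normal(t.size)

p0 = [5.0, 4.0, 1.5, 2.5]                            # rough initial guess
params, _ = curve_fit(gamma_variate, t, tcc, p0=p0)
auc = np.trapz(gamma_variate(t, *params), t)         # area under the fitted curve
print(np.round(params, 2), round(float(auc), 2))
```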
Hepatic vein segmentation using wavefront propagation and multiscale vessel enhancement
Klaus Drechsler, Cristina Oyarzun Laura, Stefan Wesarg
Modern volumetric imaging techniques such as CT or MRI aid in the understanding of a patient's anatomy and pathologies. Depending on the medical use case, various anatomical structures are of interest. Blood vessels play an important role in several applications, e.g. surgical planning. Manual delineation of blood vessels in volumetric images is error-prone and time-consuming. Automated vessel segmentation is a challenging problem due to acquisition-dependent problems such as noise, contrast, spatial resolution, and artifacts. In this paper, a vessel segmentation method is presented that combines a wavefront propagation technique with Hessian-based vessel enhancement. The latter has proven its usefulness as a preprocessing step to detect tubular structures before the actual segmentation is carried out. The former allows for an ordered growing process, which enables topological analysis. The contributions of this work are as follows: (1) a new vessel enhancement filter for tubular structures based on the Laplacian, (2) a wavefront propagation technique that prevents leaks by imposing a threshold on the maximum number of voxels that the propagating front may contain, and (3) a volumetric hole filling method to fill holes, bays, and tunnels which arise at locations where the tubular structure assumption is violated. The proposed method reduces approximately 50% of the necessary eigenvalue calculations for vessel enhancement and prevents leaks starting at small spots, which usually occur using standard region growing. Qualitative and quantitative evaluation based on several metrics (statistical measures, Dice coefficient, and symmetric average surface distance) is presented.
Keynote and 2D-3D Registration
A flexible toolkit for rapid GPU-based generation of DRRs for 2D-3D registration
Grant Marchelli, David Haynor, William Ledoux, et al.
This paper presents initial performance results for a software toolkit that implements GPU-based parallel computation of digitally reconstructed radiographs (DRRs) from volumetric imaging data for 2D-3D registration. The computational parallelism is achieved using NVIDIA’s CUDA implementation of general purpose computing on the graphics processing unit. The sample volumetric imaging data shown here is from CT imaging of a cadaveric foot, but the toolkit can be applied equally well to other volumetric imaging data. An efficient implementation requires launching hundreds of simultaneous, independent computational threads and fast thread access to the global memory where they need to read and write data. We have implemented fast DRR generation by launching a computational thread for each pixel in the image, and achieve efficient memory access by using 3D texture memory to store the volumetric data and constant memory to store global information such as intensifier coordinates. The Thrust software library was used to store individual bone DRRs, which enables efficient memory transfer and use of built-in device operators during image compositing and similarity quantification. By storing individual DRRs, the toolkit can support independent kinematics for up to 32 segmented objects. We show that the algorithm scales with the number of processors and compare timings for three commercially available GPUs. Here we present our initial fast DRR computations to demonstrate that the toolkit can produce useful results for a full 160 × 339 × 439 stack of floating point density data on a high resolution 1152 × 896 pixel screen in 1.3 ms and on a 512 × 512 pixel screen in less than 0.6 ms.
Breast compression simulation using ICP-based B-spline deformation for correspondence analysis in mammography and MRI datasets
Julia Krüger, Jan Ehrhardt, Arpad Bischof, et al.
Mammography is the most commonly used imaging modality in breast cancer screening and diagnosis. The analysis of 2D mammographic images can be difficult due to the projective nature of the imaging technique and poor contrast between tumorous and healthy fibro-glandular tissue. Contrast-enhanced magnetic resonance imaging (MRI) can overcome these disadvantages by providing a 3D dataset of the breast. The detection of corresponding image structures is challenging due to large breast deformations during the image acquisition. We present a method for analyzing 2D/3D intra-individual correspondences between mammography and MRI datasets. To this end, an ICP-based B-spline registration is used to approximate the breast deformation differences. The resulting deformed MR image is projected onto the 2D plane to enable a comparison with the 2D mammogram. A first evaluation based on six mammograms revealed an average accuracy of 4.87 mm. In contrast to previous FEM-based approaches, we propose a fast and easy-to-implement 3D/3D registration for simulating the mammographic breast compression.
Semi-automatic registration of 3D orthodontics models from photographs
In orthodontics, a common practice used to diagnose and plan the treatment is the dental cast. After digitization by a CT scan or a laser scanner, the obtained 3D surface models can feed orthodontic numerical tools for computer-aided diagnosis and treatment planning. One of the critical pre-processing steps is the 3D registration of dental arches to obtain the occlusion of these numerical models. For this task, we propose a vision-based method to automatically compute the registration from photos of the patient's mouth. From a set of matched singular points between two photos and the dental 3D models, the rigid transformation that brings the mandible into contact with the maxilla can be computed by minimizing the reprojection errors. In a previous study, we established the feasibility of this visual registration approach with a manual selection of singular points. This paper addresses the issue of automatic point detection. Based on a priori knowledge, histogram thresholding and edge detection are used to extract specific points in 2D images. Concurrently, curvature information is used to detect corresponding 3D points. To improve the quality of the final registration, we also introduce a combined optimization of the projection matrix with the 2D/3D point positions. These new developments are evaluated on real data by considering the reprojection errors and the deviation angles after registration with respect to the manual reference occlusion established by a specialist.
Statistics of Images and Structures
Bias correction of maximum likelihood estimation in quantitative MRI
D. H. J. Poot, G. Kotek, W. J. Niessen, et al.
For quantitative MRI techniques, such as T1, T2 mapping and Diffusion Tensor Imaging (DTI), a model has to be fit to several MR images that are acquired with suitably chosen different acquisition settings. The most efficient estimator to retrieve the parameters is the Maximum Likelihood (ML) estimator. However, the standard ML estimator is biased for finite sample sizes. In this paper we derive a bias correction formula for magnitude MR images. This correction is applied in two different simulation experiments, a T2 mapping experiment and a DTI experiment. We show that the correction formula successfully removes the bias. As the correction is performed as post-processing, it is possible to retrospectively correct the results of previous quantitative experiments. With this procedure more accurate quantitative values can be obtained from quantitative MR acquisitions.
Near-lossless compression of computed tomography images using predictive coding with distortion optimization
Andreas Weinlich, Peter Amon, Andreas Hutter, et al.
This paper presents a method for iterative minimization of combined residual and prediction error for near-lossless compression of medical computed tomography acquisitions using pixel-wise least-squares prediction. While most other lossy state-of-the-art image compression systems like JPEG 2000 make use of transform-based coding, in lossless coding higher compression ratios can be achieved with plain predictive algorithms like JPEG-LS because of their non-linear data adaptive energy reduction. Yet, applying these algorithms in lossy coding, simple quantization usually leads to error propagation and therefore serious quality loss or rate increase, as prediction accuracy of a pixel value and thus data rate depends on the previously reconstructed image region. The proposed minimization approach modifies the original image to be coded in a way such that the edge-directed prediction method from literature may achieve better predictions while introducing only a minimum amount of distortion. Compared to transform-based coding methods, the distortion introduced by the proposed scheme mostly consists in noise reduction instead of blurring or the introduction of artificial structures. The method also prevents error propagation due to the consideration of all pixel dependencies of the prediction. It is shown that, combined with a context-adaptive arithmetic coder, in high-fidelity coding (i. e., PSNR higher than 55 dB) the proposed method can achieve higher compression ratios than the transform-based approaches JPEG 2000, H.264/AVC, and HEVC intra coding.
Tumor segmentation in brain MRI by sparse optimization
Shandong Wu, David J. Rippe, Nicholas G. Avgeropoulos
In this work we propose a novel method for brain tumor segmentation in MRI by adapting sparse optimization techniques. The core of the method lies in the subspace decomposition of the tissue feature space constituted by the brain MR images. Tumor-bearing MRI slices can be viewed as a corrupted observation, which can therefore be decomposed into two components: the low-rank normal brain tissue structures and the sparse corruption/error that is due to the developed tumor. By performing the rank decomposition, the corruption/error can be identified, giving rise to an initial segmentation of the tumor. Our method requires no model learning. Experiments are performed on a data set of 12 subjects, and the segmentation agreement is 0.86 in terms of the Dice similarity coefficient in comparison with the manual segmentation performed by a radiologist with 15 years of experience. The proposed method represents an efficient approach to brain tumor segmentation that may potentially be incorporated into automated or semi-automatic segmentation systems in the clinical workflow.
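The low-rank plus sparse decomposition described above corresponds to the classical robust PCA / principal component pursuit problem; the sketch below solves it with the inexact augmented Lagrange multiplier method on a synthetic matrix. It is a generic sketch of the decomposition step only; building the observation matrix from brain MRI and turning the sparse component into a tumor mask are not reproduced.

```python
import numpy as np

def rpca_ialm(D, lam=None, tol=1e-7, max_iter=500):
    """Low-rank + sparse decomposition D = L + S by principal component
    pursuit, solved with inexact augmented Lagrange multipliers."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))

    norm_two = np.linalg.norm(D, 2)
    Y = D / max(norm_two, np.abs(D).max() / lam)   # dual variable initialization
    mu, rho = 1.25 / norm_two, 1.5
    d_norm = np.linalg.norm(D, "fro")

    S = np.zeros_like(D)
    for _ in range(max_iter):
        # Singular value thresholding -> low-rank update.
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0)) @ Vt
        # Soft thresholding -> sparse update.
        R = D - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)
        # Dual ascent and penalty update.
        Z = D - L - S
        Y = Y + mu * Z
        mu = min(mu * rho, 1e7)
        if np.linalg.norm(Z, "fro") / d_norm < tol:
            break
    return L, S

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    L0 = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 40))  # rank-5 "normal tissue"
    S0 = np.zeros((60, 40))
    idx = rng.random((60, 40)) < 0.05
    S0[idx] = 10 * rng.standard_normal(idx.sum())                     # sparse corruption
    L, S = rpca_ialm(L0 + S0)
    print(np.linalg.norm(L - L0) / np.linalg.norm(L0))                # small relative error
```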
Three-dimensional synthetic blood vessel generation using stochastic L-systems
Segmentation of blood vessels from magnetic resonance angiography (MRA) or computed tomography angiography (CTA) images is a complex process that usually requires substantial computational resources. Also, most vascular segmentation and detection algorithms do not work properly due to the wide architectural variability of the blood vessels. Thus, the construction of convincing synthetic vascular trees makes it possible to validate new segmentation methodologies. In this work, an extension to the traditional Lindenmayer system (L-system) that generates synthetic 3D blood vessels by adding stochastic rules and parameters to the grammar is proposed. Towards this aim, we implement a parser and a generator of L-systems whose grammars simulate natural features of real vessels such as the bifurcation angle, average length and diameter, as well as vascular anomalies, such as aneurysms and stenoses. The resulting expressions are then used to create synthetic vessel images that mimic MRA and CTA images. In addition, this methodology allows for vessel growth to be limited by arbitrary 3D surfaces, and the vessel intensity profile can be tailored to match real angiographic intensities.
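To make the stochastic L-system idea concrete, the toy sketch below expands an axiom with probabilistic rewriting rules. The grammar shown is purely illustrative; the paper's grammar additionally attaches geometric parameters (bifurcation angle, average length, diameter) and anomaly rules for aneurysms and stenoses, which are not reproduced here.

```python
import random

# Toy stochastic L-system: each symbol may rewrite to one of several
# successors chosen with the given probabilities.
RULES = {
    # symbol: [(probability, successor), ...]
    "V": [(0.6, "V[+V]V"),   # bifurcate
          (0.3, "VV"),       # elongate
          (0.1, "V")],       # no growth this step
}

def expand(axiom, rules, iterations, seed=0):
    random.seed(seed)
    s = axiom
    for _ in range(iterations):
        out = []
        for ch in s:
            options = rules.get(ch)
            if options is None:
                out.append(ch)             # constants ('[', ']', '+') copied as-is
                continue
            r, acc = random.random(), 0.0
            for p, successor in options:
                acc += p
                if r <= acc:
                    out.append(successor)
                    break
            else:
                out.append(options[-1][1])  # numerical safety fallback
        s = "".join(out)
    return s

if __name__ == "__main__":
    print(expand("V", RULES, 4))            # a randomly branched vessel string
```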
Longitudinal intensity normalization of magnetic resonance images using patches
This paper presents a patch-based method to normalize temporal intensities from longitudinal brain magnetic resonance (MR) images. Longitudinal intensity normalization is relevant for subsequent processing, such as segmentation, so that rates of change of tissue volumes, cortical thickness, or shapes of brain structures become stable and smooth over time. Instead of using intensities at each voxel, we use patches as image features, as a patch encodes neighborhood information of the center voxel. Once all the time-points of a longitudinal dataset are registered, the longitudinal intensity change at each patch is assumed to follow an auto-regressive (AR(1)) process. An estimate of the normalized intensities of a patch at every time-point is generated from a hidden Markov model, where the hidden states are the unobserved normalized patches and the outputs are the observed patches. A validation study on a phantom dataset shows good segmentation overlap with the truth, and an experiment with real data shows more stable rates of change for tissue volumes with the temporal normalization than without.
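As a small illustration of the AR(1) assumption, the sketch below estimates the autoregression coefficient of one patch's longitudinal intensity trajectory by least squares; the paper embeds this model in a hidden Markov model over the unobserved normalized patches, which is not reproduced here, and the simulated numbers are arbitrary.

```python
import numpy as np

def fit_ar1(x):
    """Least-squares estimate of the AR(1) model x_t = c + phi * x_{t-1} + e_t
    for one longitudinal intensity trajectory (e.g. the mean intensity of a
    patch over registered time-points). Returns (phi, c)."""
    x = np.asarray(x, dtype=float)
    A = np.column_stack([x[:-1], np.ones(len(x) - 1)])
    (phi, c), *_ = np.linalg.lstsq(A, x[1:], rcond=None)
    return phi, c

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = [20.0]                                     # simulate a drifting trajectory
    for _ in range(60):
        x.append(5.0 + 0.95 * x[-1] + rng.normal(scale=0.3))
    print(fit_ar1(x))                              # estimated (phi, c), near (0.95, 5.0)
```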
Label Fusion
Automatic neonatal brain tissue segmentation with MRI
Vedran Srhoj-Egekher, Manon J. N. L. Benders, Max A. Viergever, et al.
Volumetric measurements of neonatal brain tissue classes have been suggested as an indicator of long-term neurodevelopmental performance. To obtain these measurements, accurate brain tissue segmentation is needed. We propose a novel method for automatic segmentation of cortical grey matter (CoGM), unmyelinated white matter (UWM), myelinated white matter (MWM), basal ganglia and thalami, brainstem, cerebellum, ventricles, and cerebrospinal fluid in the extracerebral space (CSF) in MRI scans of the brain in preterm infants. For this project, seven preterm-born infants, scanned at term-equivalent age, were used. Axial T1- and T2-weighted scans were acquired with a 3T MRI scanner. The automatic segmentation was performed in three subsequent stages in which each tissue was labeled. First, a multi-atlas-based segmentation (MAS) was employed to obtain localized, subject-specific spatially varying priors for each tissue. Next, based on these priors, two-class classification with a k-nearest neighbor (kNN) classifier was performed to obtain the segmentation of each tissue type separately. Last, to refine the final result and to achieve accurate segmentation along the tissue boundaries, a multiclass naive Bayes classifier was employed. The results were evaluated against the manually set reference standard and quantified in terms of Dice coefficient (DC) and modified Hausdorff distance (MHD), defined as the 95th percentile of the Hausdorff distance. On average, the method achieved the following DCs: 0.87 for CoGM, 0.91 for UWM, 0.60 for MWM, 0.93 for basal ganglia and thalami, 0.87 for brainstem, 0.94 for cerebellum, 0.86 for ventricles, 0.82 for CSF. The obtained average MHDs were 0.48 mm, 0.44 mm, 3.09 mm, 0.39 mm, 0.62 mm, 0.35 mm, 1.75 mm, 1.13 mm, for each tissue, respectively. The proposed method achieved high segmentation accuracy for all tissues except MWM, and it provides a tool for quantification of brain tissue volumes in axial MRI scans of preterm-born infants.
Robust non-local multi-atlas segmentation of the optic nerve
Andrew J. Asman, Michael P. DeLisi, Louise A. Mawn, et al.
Labeling or segmentation of structures of interest on medical images plays an essential role in both clinical and scientific understanding of the biological etiology, progression, and recurrence of pathological disorders. Here, we focus on the optic nerve, a structure that plays a critical role in many devastating pathological conditions – including glaucoma, ischemic neuropathy, optic neuritis and multiple sclerosis. Ideally, existing fully automated procedures would result in accurate and robust segmentation of the optic nerve anatomy. However, current segmentation procedures often require manual intervention due to anatomical and imaging variability. Herein, we propose a framework for robust and fully-automated segmentation of the optic nerve anatomy. First, we provide a robust registration procedure that results in consistent registrations, despite highly varying data in terms of voxel resolution and image field-of-view. Additionally, we demonstrate the efficacy of a recently proposed non-local label fusion algorithm that accounts for small-scale errors in registration correspondence. On a dataset consisting of 31 highly varying computed tomography (CT) images of the human brain, we demonstrate that the proposed framework consistently results in accurate segmentations. In particular, we show (1) that the proposed registration procedure results in robust registrations of the optic nerve anatomy, and (2) that the non-local statistical fusion algorithm significantly outperforms several of the state-of-the-art label fusion algorithms.
Improving whole-brain segmentations through incorporating regional image intensity statistics
Christian Ledig, Rolf A. Heckemann, Alexander Hammers, et al.
Multi-atlas segmentation methods are among the most accurate approaches for the automatic labeling of magnetic resonance (MR) brain images. The individual segmentations obtained through multi-atlas propagation can be combined using an unweighted or locally weighted fusion strategy. Label overlaps can be further improved by refining the label sets based on the image intensities using the Expectation-Maximisation (EM) algorithm. A drawback of these approaches is that they do not consider knowledge about the statistical intensity characteristics of a certain anatomical structure, especially its intensity variance. In this work we employ learned characteristics of the intensity distribution in various brain regions to improve on multi-atlas segmentations. Based on the intensity profile within labels in a training set, we estimate a normalized variance error for each structure. The boundaries of a segmented region are then adjusted until its intensity characteristics are corrected for this variance error observed in the training sample. Specifically, we start with a high-probability “core” segmentation of a structure, and maximise the similarity with the expected intensity variance by enlarging it. We applied the method to 35 datasets of the OASIS database for which manual segmentations into 138 regions are available. We assess the resulting segmentations by comparison with this gold-standard, using overlap metrics. Intensity-based statistical correction improved similarity indices (SI) compared with EM-refined multi-atlas propagation from 75.6% to 76.2% on average. We apply our novel correction approach to segmentations obtained through either a locally weighted fusion strategy or an EM-based method and show significantly increased similarity indices.
Patch-based label fusion using local confidence-measures and weak segmentations
André Mastmeyer, Dirk Fortmeier, Ehsan Maghsoudi, et al.
A system for the fully automatic segmentation of the liver and spleen is presented. In a multi-atlas-based segmentation framework, several existing expert segmentations are deformed in parallel, following image intensity-based registrations to the unseen patient. A new locally adaptive label fusion method is presented as the core of this paper. In a patch comparison approach, the transformed segmentations are compared to a weak segmentation of the target organ in the unseen patient. The weak segmentation roughly estimates the hidden truth. Traditional fusion approaches rely on the deformed expert segmentations only. The result of patch comparison is a confidence weight for a neighboring voxel-label in the atlas label images to contribute to the voxel under study. Fusion is finally carried out in a weighted averaging scheme. The new contribution is the incorporation of locally determined confidence features of the unseen patient into the fusion process. For a small experimental set-up consisting of 12 patients, the proposed method performs favorably compared to standard classifier label fusion methods. In leave-one-out experiments, we obtain a mean Dice ratio of 0.92 for the liver and 0.82 for the spleen.
Poster Session: Atlases
Combined pixel classification and atlas-based segmentation of the ventricular system in brain CT Images
Pieter C. Vos, Ivana Išgum, J. Matthijs Biesbroek, et al.
Accurate segmentation of the brain ventricular system in Computed Tomography (CT) images is useful in neurodiagnosis, providing quantitative measures on changes in ventricular size due to stroke. Manual segmentation, however, is a time-consuming, tedious task and is prone to large inter-observer variability. This study presents an automatic ventricular system segmentation method by combining the results of supervised pixel classification based on intensities with spatial information obtained from a multi-atlas-based segmentation method. The method is applied to follow-up brain CT images which were collected from a cohort of 20 patients with proven ischemic stroke. The automatic segmentation performance was evaluated in a leave-one-out strategy by comparing with manual segmentations. The results show that combining information obtained from pixel classification and multi-atlas-based segmentation significantly outperforms each method independently, with a mean Dice coefficient of 0.81 ± 0.07.
Constructing a 4D murine cardiac micro-CT atlas for automated segmentation and phenotyping applications
D. Clark, A. Badea, G. A. Johnson, et al.
A number of investigators have demonstrated the potential of preclinical micro-CT in characterizing cardiovascular disease in mouse models. One major hurdle to advancing this approach is the extensive user interaction required to derive quantitative metrics from these 4D image arrays (space + time). In this work, we present: (1) a method for constructing an average anatomic cardiac atlas of the mouse based on 4D micro-CT images, (2) a fully automated approach for segmenting newly acquired cardiac data sets using the atlas, and (3) a quantitative characterization of atlas-based segmentation accuracy and consistency. Employing the deformable registration toolkit, ANTs, the construction of minimal deformation fields, and a novel adaptation of joint bilateral filtration, our atlas construction scheme was used to integrate six C57BL/6 cardiac micro-CT data sets, reducing the noise standard deviation from ~70 HU in the individual data sets to ~21 HU in the atlas data set. Using the segmentation tools in Atropos and our atlas-based segmentation, we were able to propagate manual labels to five C57BL/6 data sets not used in atlas construction. Average Dice coefficients and volume accuracies (respectively) over phases 1 (ventricular diastole), 3, and 5 (ventricular systole) of these five data sets were as follows: left ventricle, 0.96, 0.96; right ventricle, 0.89, 0.92; left atrium, 0.88, 0.89; right atrium, 0.86, 0.92; myocardium, 0.90, 0.94. Once the atlas was constructed and segmented, execution of the proposed automated segmentation scheme took ~6.5 hours per data set, versus more than 50 hours required for a manual segmentation.
Build 4-dimensional myocardial model for dynamic CT images
Yixun Liu, Songtao Liu, Albert C. Lardo, et al.
4D (3D + time) model is valuable in comprehensive assessment of cardiac functions. Usually, the generation of the 4D myocardial models involves myocardium segmentation, mesh generation and non-rigid registration (to build mesh node correspondence). In this paper, we present a method to simultaneously perform the above tasks. This method begins from a triangular surface model of the myocardium at the first phase of a cardiac cycle. Then, the myocardial surface is simulated as a linear elastic membrane, and evolves toward the next phase governed by an energy function while maintaining the mesh quality. Our preliminary experiments performed on dynamic CT images of the dog demonstrated the effectiveness of this method on both segmentation and mesh generation. The minimum average surface distance between the segmentation results of the proposed method and the ground truth can reach 0.72 ± 0.55 mm, and the mesh quality measured by the aspect ratio of the triangle was less than 11.57 ± 1.18.
Poster Session: Blood Vessels
A new morphological tool to extract blood vessels in cross sectional MRI
Cédric Blanchard, Tadeusz Sliwa, Alain Lalande, et al.
In this paper, we propose a new mathematical morphology operator called the Aurora transform. This is a geodesic reconstruction that only spreads in radial orientations from a center. Thanks to this method, star domains such as blood vessels in cross-sectional planes are extracted even when these regions are inhomogeneous or parts of their edges are poorly defined. This method has been successfully applied to extract the edges of the aortic root, the ascending aorta and the descending aorta in cross-sectional cine-MRI. It has then been compared with active contour approaches.
Automatic vessel extraction of lower extremity CT angiography using multi-segmented volume and regional vessel tracking
Min Jin Lee, Helen Hong, Jin Wook Chung
Computed tomography angiography (CTA) is currently considered a potential noninvasive alternative to conventional digital subtraction angiography (DSA) for the evaluation of lower extremity arteries. For the diagnosis of peripheral arterial occlusive disease, lower extremity vessels in CTA images are extracted in advance. We propose an automatic vessel extraction method using multi-segmented volume and regional vessel tracking in lower extremity CT angiography. To account for the anatomical characteristics of each lower extremity vessel structure, the whole volume is automatically divided into five segments (foot, tibia, knee, femur, and pelvis) along the z-axis of the lower extremities. The vessels and bones are extracted by three-dimensional region growing with multi-seeding and iterative multiple threshold estimation. Finally, to restore the eroded vessels near bones and the cavernous vessels in the pelvis and tibia, regional vessel tracking considering density, size and direction is performed. Experimental results show that our method provides accurate results in occluded and stenosed vessels without loss of soft tissue and calcification. For visual scoring, two radiologists compared paired images obtained from the proposed method and conventional angiography.
Automatic detection of retinal vascular bifurcations and crossovers based on isotropy and anisotropy
Guodong Li, Dehui Xiang, Fei Yang, et al.
The analysis of retinal blood vessels is very important in the early detection of diseases such as hypertension, diabetes, arteriosclerosis, cardiovascular disease, and stroke. Bifurcations and crossovers are important feature points that play key roles in the analysis of the retinal vessel tree. These feature points have also been demonstrated to be important in many visual tasks such as image registration, mosaicing, and segmentation. In this paper, a new method is proposed to detect vascular bifurcations and crossovers in fundus images. A Gaussian filter is applied to the blue channel of the original color retinal images to suppress the central reflex and reduce the number of candidate points. The eigenvalues and eigenvectors of the Hessian matrix are then obtained at multiple scales to provide structural and directional information. By computing the anisotropy and isotropy of neighboring image segments for each pixel in a retinal image, we define a multi-scale vessel filter which combines the responses of tubular structures and the responses of bifurcations and crossovers. Finally, the proposed method has been tested on the publicly available STARE and DRIVE databases. The experimental results show that bifurcations, crossovers and tubular structures can be detected simultaneously. The proposed method also performs well in detecting bifurcations and crossovers located in thin or low-contrast vessels.
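As a rough illustration of using Hessian eigenvalues to separate tube-like (anisotropic) from bifurcation-like (more isotropic) responses, the following 2D SciPy sketch computes scale-normalized Hessian eigenvalues over several scales and forms two toy responses. It is a generic sketch of the idea only, not the filter defined in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigenvalues_2d(img, sigma):
    """Eigenvalues of the scale-normalized Hessian of a 2D image at scale
    sigma, sorted so that |l1| <= |l2| at every pixel."""
    img = img.astype(float)
    Hxx = gaussian_filter(img, sigma, order=(0, 2)) * sigma**2
    Hyy = gaussian_filter(img, sigma, order=(2, 0)) * sigma**2
    Hxy = gaussian_filter(img, sigma, order=(1, 1)) * sigma**2
    # Closed-form eigenvalues of the symmetric 2x2 Hessian.
    tmp = np.sqrt((Hxx - Hyy) ** 2 + 4 * Hxy**2)
    l1, l2 = (Hxx + Hyy - tmp) / 2, (Hxx + Hyy + tmp) / 2
    swap = np.abs(l1) > np.abs(l2)
    return np.where(swap, l2, l1), np.where(swap, l1, l2)

def tubular_and_blob_responses(img, sigmas=(1, 2, 3), eps=1e-10):
    """Toy multi-scale responses: strong anisotropy (|l1| << |l2|) suggests a
    tube-like vessel segment, near-isotropy with large magnitude suggests a
    bifurcation/crossover candidate."""
    tube, blob = np.zeros(img.shape), np.zeros(img.shape)
    for s in sigmas:
        l1, l2 = hessian_eigenvalues_2d(img, s)
        mag = np.abs(l2)
        ratio = np.abs(l1) / (np.abs(l2) + eps)
        tube = np.maximum(tube, (1 - ratio) * mag)   # elongated structures
        blob = np.maximum(blob, ratio * mag)         # isotropic structures
    return tube, blob

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t, b = tubular_and_blob_responses(rng.random((128, 128)))
    print(t.shape, b.shape)
```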
A hardware implementation of a levelset algorithm for carotid lumen segmentation in CTA
André van der Avoird, Ning Lin, Bram van Ginneken, et al.
This work presents a novel hardware implementation of a levelset algorithm for carotid lumen segmentation in computed tomography. We propose to use a field programmable gate array (FPGA) to iteratively solve the underlying finite difference scheme. An FPGA processor can be programmed to have a dedicated hardware architecture, including a specific data path and processor core design with different types of parallelization, fully tailored and optimized toward its application. The method has been applied to ten carotid bifurcations of six stroke patients and the results have been compared to the results obtained from the same method implemented in C++. Visual inspections revealed similar segmentation results. The average computation time in software was 1663 ± 86 seconds, while the computation time on the FPGA processor was 28 seconds, yielding approximately a 60-fold speed-up which, to our knowledge, has not been matched before for this class of algorithms.
Automated artery and vein detection in 4D-CT data with an unsupervised classification algorithm of the time intensity curves
H. O. A. Laue, M. T . H. Oei, L. Chen, et al.
In this work, a fully automated detection method for the arterial input function (AIF) and venous output function (VOF) in 4D computed tomography (4D-CT) data is presented, based on unsupervised classification of time intensity curves (TICs). Bone and air voxels are first masked out using thresholding of the baseline measurement. The TICs for each remaining voxel are converted to time-concentration curves (TCCs) by subtracting the baseline value from the TIC. Then, an unsupervised K-means classifier is applied to each TCC with an area under the curve (AUC) larger than 95% of the maximum AUC of all TCCs. The result is three clusters: two yield average TCCs for artery and vein voxels in the brain, while the third generally represents a vessel outside the brain. The algorithm was applied to 4D-CT data of five patients who were scanned on suspicion of ischemic stroke. For all five patients, the algorithm yields reasonable classification of arteries and veins as well as reasonable and reproducible AIFs and VOFs. To our knowledge, this is the first application of an unsupervised classification method to automatically identify arteries and veins in 4D-CT data. Preliminary results show the feasibility of using K-means clustering for the purpose of artery-vein detection in 4D-CT patient data.
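The sketch below illustrates the overall pipeline on a small synthetic 4D array: baseline subtraction, AUC-based preselection of time-concentration curves, and K-means clustering into three classes. Masking of bone and air and the assignment of clusters to artery/vein (and hence AIF/VOF extraction) are not reproduced; all names, shapes, and the synthetic bolus curves are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_tccs(ct4d, baseline_idx=0, n_clusters=3):
    """Cluster time-concentration curves (TCCs) of a 4D-CT volume with K-means.

    ct4d: array of shape (T, Z, Y, X). Returns the average curve per cluster;
    time-to-peak ordering could then be used to decide which cluster is
    arterial and which is venous."""
    T = ct4d.shape[0]
    # Baseline subtraction turns each voxel's TIC into a TCC.
    tcc = ct4d.reshape(T, -1).T - ct4d[baseline_idx].ravel()[:, None]   # (N, T)
    auc = np.trapz(tcc, axis=1)
    keep = auc > 0.95 * auc.max()                                       # high-AUC curves only
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(tcc[keep])
    return [tcc[keep][labels == k].mean(axis=0) for k in range(n_clusters)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(20, dtype=float)
    vol = 0.01 * rng.standard_normal((20, 2, 16, 16))                   # background noise
    vol[:, 0] += 1.2 * np.exp(-0.5 * ((t - 8) / 2.5) ** 2)[:, None, None]   # early bolus
    vol[:, 1] += 1.0 * np.exp(-0.5 * ((t - 12) / 3.0) ** 2)[:, None, None]  # later bolus
    curves = cluster_tccs(vol)
    print(len(curves), curves[0].shape)
```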
3D multiscale vessel enhancement based centerline extraction of blood vessels
Rahul Prasanna Kumar, Fritz Albregtsen, Martin Reimers, et al.
Extraction of blood vessel structure is important for improving planning, navigation and tracking in several interventional procedures. Centerline based registration methods have proven to be fast for clinical applications and an effective way of registering multi-modal images. Here, we present a novel blood vessel centerline extraction method in 3D. Our method consists of two parts, namely Multiscale Vessel Enhancement Filtering (MVEF) and Centerline Extraction using Vessel Direction (CEVD). Our proposed MVEF has an improved noise reduction and better Gaussian profile at the vessel cross-sections compared to conventional MVEF. The CEVD is our novel method for tracing the peaks of the Gaussian profile of the local MVEF at the vessel cross-sections. The peak of the Gaussian profile provides the center position of the blood vessels. The novelty of this method is in effectively finding only the connected centerlines of the blood vessels of interest. The proposed method was evaluated using both synthetic and medical images. On comparing with Frangi's vesselness filtering combined with thinning, our method is shown to be approximately 5 times faster. The results also show that our method is customized to detect only the desired blood vessels, thereby eliminating the detection of unwanted vessel-like structures. The centerline accuracy was evaluated by comparing with ground truth data created by finding Hough circle centers at each cross-section of the vessel structure. The modified symmetric Hausdorff distance between our result and the ground truth was approximately 1 pixel for both synthetic and medical images.
Poster Session: Classification
A method for automated anatomical labeling of abdominal veins extracted from 3D CT images
In abdominal surgery, understanding blood vessel structure is important because abdominal blood vessels exhibit large individual differences among patients. Computer support is therefore needed to help surgeons understand these blood vessel structures. This paper presents a method of automated anatomical labeling of abdominal veins. A thinning process is applied to the abdominal vein regions extracted from a CT volume. The result of the process is expressed as a tree structure. Since portal veins have a characteristic shape and position in the portal system, we applied rule-based anatomical labeling to them. The names of other veins are assigned by classifiers trained by a machine learning technique, where several likelihood functions are constructed for each vessel name. Their weighted sum is used as the likelihood of the vessel name. The branches in the tree structure are labeled by searching for the branch whose likelihood for an anatomical name is maximal and assigning that name to the branch. In an experiment using 50 cases of abdominal CT volumes, the recall rate, the precision rate, and the F-measure were 87.5%, 93.1%, and 90.2%, respectively.
Graph-based bifurcation detection in phase-contrast MR images
Yoo-Jin Jeong, Sebastian Ley, Michael Delles, et al.
In dealing with cardiovascular diseases, velocity-encoded magnetic resonance imaging (PC-MRI) is a well-known technique for acquiring non-invasive measurements of blood flow. However, the application of conventional vessel segmentation methods to PC-MR images often leads to problems due to the reduced quality of the morphology image. We previously proposed a robust centerline extraction method for PC-MR images to overcome those problems. The method yielded satisfying results for the centerline extraction of large vessels but did not consider vessel branches. Therefore, in this paper we present an approach for the detection of bifurcations in PC-MR images. The developed algorithm requires two inputs: the previously computed centerline points of the main vessel and a minimal user input. For each point on the centerline, it determines whether a bifurcation exists in the cross-sectional plane at that position. This is accomplished by an A* path-finding algorithm, which computes the path costs from a potential bifurcation point to its corresponding center point. The path costs are determined by the combination of different features derived from the morphology and flow information. By comparing all cost sums, bifurcations can be detected due to their low values. The algorithm, evaluated on 7 volunteer and 12 patient PC-MRI datasets, yielded satisfying results.
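As an illustration of the path-cost computation, the following sketch runs a standard A* search on a small 2D cost grid with a Manhattan-distance heuristic (admissible when every step cost is at least 1). The actual cost terms in the paper combine morphology and flow features and are not reproduced here; the grid values are arbitrary.

```python
import heapq

def astar_cost(cost, start, goal):
    """A* on a 2D cost grid (4-connected). Returns the minimal cumulative cost
    from start to goal, or None if the goal is unreachable."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # admissible if costs >= 1
    rows, cols = len(cost), len(cost[0])
    best = {start: 0.0}
    heap = [(h(start), 0.0, start)]
    while heap:
        _, g, node = heapq.heappop(heap)
        if node == goal:
            return g
        if g > best.get(node, float("inf")):
            continue                                          # stale heap entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                ng = g + cost[nr][nc]                         # pay the entered cell's cost
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

if __name__ == "__main__":
    grid = [[1, 1, 1, 1],
            [9, 9, 1, 9],
            [1, 1, 1, 1],
            [1, 9, 9, 1]]
    print(astar_cost(grid, (0, 0), (3, 3)))                   # cheapest cumulative cost: 6.0
```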
Optimal filter approach for the detection of vessel bifurcations in color fundus images
Bifurcations of retinal vessels in fundus images are important structures clinically and their detection is also an important component in image processing algorithms such as registration, segmentation and change detection. In this paper, we develop a method for direct bifurcation detection based on the optimal filter framework. This approach first generates a set of filters to represent all cases of bifurcations, and then uses them to generate a feature space for a classifier to distinguish bifurcations and non-bifurcations. This approach is different from previous methods as it uses a minimal number of assumptions, essentially only requiring training images and expert annotations of bifurcations. The method is trained on 60 fundus images and tested on 20 fundus images, resulting in an AUC of 0.883, which compares well to a human expert.
Data-specific feature point descriptor matching using dictionary learning and graphical models
Ricardo Guerrero, Daniel Rueckert
The identification of anatomical landmarks in medical images is an important task in registration and morphometry. The manual identification and labeling of these landmarks is very time consuming and prone to observer errors, especially when large datasets must be analyzed. Matching landmarks in a pair of images is a challenging task. Although off-the-shelf feature point descriptors are powerful at describing points in an image, they are generic by nature, as they have been usually developed for applications in a computer vision setting where there is little prior knowledge about the images. Leveraging on recent developments in the machine learning community, this paper aims to build feature point descriptors that are dataset-specific. The proposed approach describes landmarks as feature descriptors based on a sparse coding reconstruction of a patch surrounding the landmark (or any point of interest), using a dataset-specific learned dictionary. Since strong spatial constraints typically exist in medical images, we also combine spatial information of surrounding point descriptors into a graphical model that is built online. We show accurate results in matching one-to-one anatomical landmarks in brain MR images.
Automated temperature calculation method for DWI-thermometry: volunteer study
Koji Sakai, Kei Yamada, Naozo Sugimoto
Diffusion-weighted imaging (DWI) has already been incorporated as a regular sequence for patients. If DWI could indicate brain temperature without a complicated procedure, such information may greatly contribute to initial diagnosis. The temperature (T, in degrees Celsius) was calculated from the diffusion coefficient (D) using the following equation: T = 2256.74 / ln(4.39221/D) - 273.15. The cerebrospinal fluid region for automated temperature computation was segmented by 2-dimensional region growing. No significant differences were seen between temperatures computed with the proposed method and those obtained with manual segmentation. The proposed method of fully automated deep brain temperature computation from DWI may prove feasible for application in MRI consoles.
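Since the abstract states the conversion formula explicitly, it can be wrapped in a one-line helper, as sketched below. The unit convention for D (mm^2/s) in the example is an assumption inferred from the plausibility of the resulting temperature, not stated in the abstract.

```python
import math

def dwi_temperature(D):
    """Brain temperature (degrees Celsius) from the CSF diffusion coefficient D,
    using the relation quoted in the abstract:
    T = 2256.74 / ln(4.39221 / D) - 273.15."""
    return 2256.74 / math.log(4.39221 / D) - 273.15

if __name__ == "__main__":
    # Assuming D is given in mm^2/s, a typical CSF value of about 3.0e-3 mm^2/s
    # yields a physiologically plausible deep-brain temperature.
    print(round(dwi_temperature(3.0e-3), 1))   # approximately 36.5
```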
Colour and multispectral imaging for wound healing evaluation in the context of a comparative preclinical study
Dorra Nouri, Yves Lucas, Sylvie Treuillet, et al.
Accurate wound assessment is a critical task for patient care and health cost reduction in hospitals, and even more so in the context of preclinical laboratory studies. This task, usually left entirely to nurses, still relies on manual and tedious practices. Wound shape is measured with rulers, tracing paper or, more rarely, with alginate castings and serum injection. The proportion of wound tissues is also estimated by a qualitative visual assessment based on the red-yellow-black code. Following our previous work on complete 3D wound assessment using a simple free-handed digital camera, we explore here the adaptation of this tool to wounds artificially created for experimental purposes. We find that tissue uniformity and flatness allow a simplified approach but require multispectral imaging for enhanced wound delineation. We demonstrate that, in this context, a simple active contour method can successfully replace more complex tools such as SVM supervised classification, as no training step is required and a single shot is enough to deal with perspective projection errors. Moreover, using the full spectral response of the tissue rather than only RGB components provides higher discrimination for separating healed epithelial tissue from granulation tissue. This research work is part of a comparative preclinical study on healing wounds. It aims to compare the efficiency of specific medical honeys with classical pharmaceuticals for wound care. Results revealed that medical honey competes with more expensive pharmaceuticals.
Wound image analysis system for diabetics
Lei Wang, Peder C. Pedersen, Diane Strong, et al.
Diabetic foot ulcers represent a significant health issue, and daily wound care is necessary for wound healing to occur. The goal of this research is to create a smart phone based wound image analysis system for people with diabetes to track the healing process of chronic ulcers and wounds. This system has been implemented on an Android smart phone in collaboration with a PC (or embedded PC). The wound image is captured by the smart phone camera and transmitted to the PC via Wi-Fi for image processing. The PC converts the JPEG image to bitmap format, then performs boundary segmentation of the wound in the image. The segmentation uses a particular implementation of the level set algorithm, the distance regularized level set evolution (DRLSE) method, which eliminates the need for re-initialization of the level set function. Next, wound healing is assessed by color segmentation within the wound boundary, applying the K-means color clustering algorithm based on the red-yellow-black (RYB) evaluation model. Finally, the results are re-formatted to JPEG, transmitted back to the smart phone and displayed. To accelerate the wound image segmentation, we have implemented the DRLSE method on a GPU and CPU cooperative hardware platform in data-parallel mode, which greatly improves the computational efficiency. Processing wound images acquired from UMASS Medical Center has demonstrated that the system provides accurate wound area determination and color segmentation. For wound images of size around 640 x 480 with complicated wound boundaries, the analysis took at most 3 s, which is 5 times faster than the same algorithm running on the CPU alone.
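A minimal sketch of the color-clustering step is given below, with hypothetical RYB reference colors and a wound mask assumed to come from the boundary segmentation (illustrative, not the authors' code):

```python
# K-means clustering of wound pixels into three clusters, interpreted against
# the red-yellow-black (RYB) model. The reference colors are assumptions.
import numpy as np
from sklearn.cluster import KMeans

def ryb_tissue_fractions(image_rgb, wound_mask):
    """Cluster pixels inside the wound boundary and label each cluster by
    its nearest RYB reference color; return the fraction of each tissue."""
    pixels = image_rgb[wound_mask].reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)

    ryb_refs = {"red": (200, 40, 40), "yellow": (200, 180, 60), "black": (40, 30, 30)}
    fractions = {}
    for k, center in enumerate(km.cluster_centers_):
        label = min(ryb_refs, key=lambda n: np.linalg.norm(center - ryb_refs[n]))
        fractions[label] = fractions.get(label, 0.0) + float(np.mean(km.labels_ == k))
    return fractions

# Usage: fractions = ryb_tissue_fractions(img, mask)  # mask from the segmentation
```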
Clustering of lung adenocarcinomas classes using automated texture analysis on CT images
Antonio Pires, Henry Rusinek, James Suh, et al.
Purpose: To assess whether automated texture analysis of CT images enables discrimination among pathologic classes of lung adenocarcinomas, and thus serves as an in vivo biomarker of lung cancer prognosis. Materials and Methods: Chest CTs of 30 nodules in 30 patients with resected adenocarcinomas were evaluated by a pulmonary pathologist who classified each resected cancer according to the International Association for the Study of Lung Cancer (IASLC) system. The categories included adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), lepidic-predominant adenocarcinoma (LPA), and other invasive adenocarcinomas (INV). 3D volumes of interest (VOIs) and 2D regions of interest (ROIs) were then constructed for each nodule. A comprehensive set of N=279 texture parameters was computed for both 3D and 2D regions. Clustering and classification of these parameters were performed with linear discriminant analysis (LDA) using features determined by optimal subsets. Results: Of the 30 adenocarcinomas, there were 13 INV, 11 LPA, 3 MIA, and 3 AIS. AIS and MIA groups were analyzed together. With all 3 classes, LDA classified 17 of 30 nodules correctly using the nearest neighbor (k=1) method. When only the two largest classes (INV and LPA) were used, 21 of 24 nodules were classified correctly. With 3 classes and 2D texture analysis, and when using only the two largest groups, LDA was able to correctly classify all nodules. Conclusion: CT texture parameters determined by optimal subsets allow for effective clustering of adenocarcinoma classes. These results suggest the potential use of automated (or computer-assisted) CT image analysis to predict the invasive pathologic character of lung nodules. Our approach overcomes the limitations of current radiologic interpretation, such as subjectivity, inter- and intra-observer variability, and the effect of reader experience.
Morphometric connectivity analysis to distinguish normal, mild cognitive impaired, and Alzheimer subjects based on brain MRI
Lene Lillemark, Lauge Sørensen, Peter Mysling, et al.
This work investigates a novel way of looking at the regions in the brain and their relationships as possible markers to classify normal control (NC), mild cognitive impaired (MCI), and Alzheimer's disease (AD) subjects. MRI scans from a subset of 101 subjects from the ADNI study at baseline were used for this study. 40 regions in the brain, including hippocampus, amygdala, thalamus, and white and gray matter, were segmented using FreeSurfer. From this data, we calculated the distance between the centers of mass of the regions, the normalized number of voxels, and the percentage volume and surface connectivity shared between the regions. These markers were used for classification using linear discriminant analysis in a leave-one-out manner. We found that the percentage of surface and volume connectivity between regions gave a significant classification between NC and AD, and borderline significant between MCI and AD, even after correction for whole brain volume at baseline. The results show that the morphometric connectivity markers include more information than whole brain volume or distance markers. This suggests that one can gain additional information by combining morphometric connectivity markers with traditional volume and shape markers.
Deformation texture-based features for classification in Alzheimer's disease
Nhat Trung Doan, Baldur van Lew, Boudewijn Lelieveldt, et al.
Neurological pathologies are often reflected in brain magnetic resonance images as abnormal global or local anatomical changes. These variations can be computed using non-rigid registration and summarized using Jacobian determinant maps of the resulting deformation field, which characterise local volume changes. We propose a new approach which exploits the information contained in Jacobian determinant maps of the whole brain for Alzheimer's disease (AD) classification by means of texture analysis. Textural features were derived from whole-brain Jacobian determinant maps based on the 3D Grey Level Co-occurrence Matrix. The large number of features obtained depicts anatomical variations at different resolutions, allowing both global and local information to be retained. Principal component analysis was applied for feature reduction such that 95% of the data variance was retained. Classification was performed using a linear support vector machine. We evaluated our approach using a bootstrapping procedure in which 92 subjects were randomly split into separate training and testing sets. For comparison purposes, we implemented two dissimilarity-based classification approaches, one based on pairwise registration and the other based on registration to a single template. Our new approach significantly outperformed the other approaches. The results of this study showed that pairwise registration did not bring added value compared to registration to a single template and that textural features were more informative than dissimilarity-based features. This study demonstrates the potential of texture analysis on whole-brain Jacobian determinant maps for diagnosis of AD subjects.
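The classification pipeline can be sketched as follows; this uses 2D co-occurrence features from scikit-image as a stand-in for the paper's 3D GLCM features, and placeholder data (illustrative only):

```python
# Texture features from Jacobian determinant maps -> PCA (95% variance) -> linear SVM.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def glcm_features(jac_map, levels=32):
    """Co-occurrence texture features from a quantized 2D Jacobian map."""
    q = np.digitize(jac_map, np.linspace(jac_map.min(), jac_map.max(), levels)) - 1
    glcm = graycomatrix(q.astype(np.uint8), distances=[1, 2],
                        angles=[0, np.pi / 2], levels=levels,
                        symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# X: one feature vector per subject, y: AD vs control labels (placeholders)
X = np.vstack([glcm_features(np.random.rand(64, 64)) for _ in range(20)])
y = np.array([0, 1] * 10)
clf = make_pipeline(StandardScaler(), PCA(n_components=0.95), SVC(kernel="linear"))
clf.fit(X, y)
```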
Poster Session: Compressive Sensing
3D spatio-temporal analysis for compressive sensing in magnetic resonance imaging of the murine cardiac cycle
Brice Hirst, Yahong Rosa Zheng, Ming Yang, et al.
This paper explores a three-dimensional compressive sensing (CS) technique for reducing measurement time in magnetic resonance imaging (MRI) of the murine (mouse) cardiac cycle. By randomly undersampling a single 2D slice of a mouse heart at regular time intervals as it expands and contracts through the stages of a heartbeat, a CS reconstruction algorithm can be made to exploit transform sparsity in time as well as space. For the purposes of measuring the left ventricular volume in the mouse heart, this 3D approach offers significant advantages over classical 2D spatial compressive sensing.
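The undersampling idea can be sketched as follows, with a simple Cartesian random mask that changes per time frame; a zero-filled reconstruction stands in for the CS solver, and the acceleration factor and data are placeholder assumptions:

```python
# Random k-t undersampling of a 2D+t stack: a different phase-encode mask at
# each time frame so that the aggregate sampling covers k-t space.
import numpy as np

def undersample_kt(frames, accel=4, seed=0):
    """frames: (nt, ny, nx) image series. Returns zero-filled recon and masks."""
    rng = np.random.default_rng(seed)
    nt, ny, nx = frames.shape
    recon = np.zeros_like(frames, dtype=complex)
    masks = np.zeros((nt, ny), dtype=bool)
    for t in range(nt):
        keep = rng.choice(ny, size=ny // accel, replace=False)
        masks[t, keep] = True
        k = np.fft.fft2(frames[t])
        k[~masks[t], :] = 0                 # drop unsampled phase-encode lines
        recon[t] = np.fft.ifft2(k)          # zero-filled baseline; a CS solver
                                            # would exploit sparsity in k-t space
    return recon, masks

# Usage: recon, masks = undersample_kt(cine_frames, accel=4)
```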
Curvelets as a sparse basis for compressed sensing magnetic resonance imaging
David S. Smith, Lori R. Arlinghaus, Thomas E. Yankeelov, et al.
We present an example of compressed sensing magnetic resonance imaging reconstruction where curvelets instead of wavelets provide a superior sparse basis when coupled to a group sparse representation of chemical exchange saturation transfer (CEST) imaging of the human breast. Taking a fully sampled CEST acquisition from a healthy volunteer, we retrospectively undersampled by a factor of four. We find that a group-sparse formulation of the reconstruction coupled with either Cohen-Daubechies-Feauveau 9/7 wavelets or curvelets provided superior results to a spatial-only regularized reconstruction. Between the group sparse reconstructions, the curvelet-regularized reconstruction outperformed the wavelet-regularized reconstruction.
Poster Session: Diffusion Tensor Imaging
Software-based diffusion MR human brain phantom for evaluating fiber-tracking algorithms
Yundi Shi, Gwendoline Roger, Clement Vachet, et al.
Fiber tracking provides insights into the brain white matter network and has become more and more popular in diffusion magnetic resonance (MR) imaging. Hardware or software phantoms provide an essential platform to investigate, validate and compare various tractography algorithms against a "gold standard". Software phantoms excel due to their flexibility in varying imaging parameters, such as tissue composition and SNR, as well as their potential to model various anatomies and pathologies. This paper describes a novel method for generating diffusion MR images with various imaging parameters from realistically appearing, individually varying brain anatomy based on predefined fiber tracts within a high-resolution human brain atlas. Specifically, joint, high-resolution DWI and structural MRI brain atlases were constructed with images acquired from 6 healthy subjects (age 22-26) for the DWI data and 56 healthy subjects (age 18-59) for the structural MRI data. Full brain fiber tracking was performed with filtered, two-tensor tractography in atlas space. A deformation field based principal component model from the structural MRI, as well as unbiased atlas building, was then employed to generate synthetic structural brain MR images that are individually varying. Atlas fiber tracts were accordingly warped into each synthetic brain anatomy. Diffusion MR images were finally computed from these warped tracts via a composite hindered and restricted model of diffusion with various imaging parameters for gradient directions, image resolution and SNR. Furthermore, an open-source program was developed to evaluate the fiber tracking results both qualitatively and quantitatively based on various similarity measures.
Connectivity-based parcellation of the postcentral gyrus using a spectral approach
Tristan Moreau, Bernard Gibaud
Subdividing the cortex into structural elements, known as parcellation, is a key step towards understanding the link between structure and function in the brain. An appealing way to parcellate the cortex, and thus to construct the human connectome, is to group structural elements of the cortex that share similar connectivity patterns: this process defines a connectivity-based parcellation. We address the problem of connectivity-based parcellation without anatomical priors, using the highly efficient normalized cut algorithm to classify, in a reproducible way, a large data set of connectivity patterns. The idea is to model the seed topology as a graph in which each node represents a seed, the edges between two nodes represent the local neighbourhood relationships of the seeds, and the edge weights represent the similarity of the connectional fingerprints of the corresponding seeds. This connectivity-based parcellation was applied both to a phantom and to the left postcentral gyrus of four different subjects. For the real data set, the structural connectivity pattern of each seed, located on the surface of the grey/white interface of the left postcentral gyrus, was reconstructed from diffusion magnetic resonance imaging data. These connectivity patterns were characterised using probabilistic tractography based on a diffusion model that can take into account up to two fibers in each voxel. Finally, the left postcentral gyrus of each subject was parcellated into twelve parcels.
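A minimal sketch of the graph construction and normalized-cut style clustering is given below, using scikit-learn's spectral clustering with placeholder fingerprints and a ring-shaped neighbourhood graph as a stand-in for the surface adjacency of seeds (not the authors' pipeline):

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import cosine_similarity

n_seeds = 300
# Connectional fingerprints of each seed (placeholder for tractography output)
fingerprints = np.random.rand(n_seeds, 500)

# Ring-shaped neighbourhood graph standing in for surface adjacency of seeds
idx = np.arange(n_seeds)
adjacency = np.zeros((n_seeds, n_seeds), dtype=bool)
adjacency[idx, (idx + 1) % n_seeds] = True
adjacency |= adjacency.T

# Edge weight = similarity of the two fingerprints, restricted to neighbours
weights = cosine_similarity(fingerprints) * adjacency
np.fill_diagonal(weights, 1.0)

labels = SpectralClustering(n_clusters=12, affinity="precomputed",
                            assign_labels="discretize",
                            random_state=0).fit_predict(weights)
```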
DTI quality control assessment via error estimation from Monte Carlo simulations
Diffusion Tensor Imaging (DTI) is currently the state of the art method for characterizing the microscopic tissue structure of white matter in normal or diseased brain in vivo. DTI is estimated from a series of Diffusion Weighted Imaging (DWI) volumes. DWIs suffer from a number of artifacts which mandate stringent Quality Control (QC) schemes to eliminate lower quality images for optimal tensor estimation. Conventionally, QC procedures exclude artifact-affected DWIs from subsequent computations leading to a cleaned, reduced set of DWIs, called DWI-QC. Often, a rejection threshold is heuristically/empirically chosen above which the entire DWI-QC data is rendered unacceptable and thus no DTI is computed. In this work, we have devised a more sophisticated, Monte-Carlo (MC) simulation based method for the assessment of resulting tensor properties. This allows for a consistent, error-based threshold definition in order to reject/accept the DWI-QC data. Specifically, we propose the estimation of two error metrics related to directional distribution bias of Fractional Anisotropy (FA) and the Principal Direction (PD). The bias is modeled from the DWI-QC gradient information and a Rician noise model incorporating the loss of signal due to the DWI exclusions. Our simulations further show that the estimated bias can be substantially different with respect to magnitude and directional distribution depending on the degree of spatial clustering of the excluded DWIs. Thus, determination of diffusion properties with minimal error requires an evenly distributed sampling of the gradient directions before and after QC.
UNC-Utah NA-MIC DTI framework: atlas based fiber tract analysis with application to a study of nicotine smoking addiction
Audrey R. Verde, Jean-Baptiste Berger, Aditya Gupta, et al.
Purpose: The UNC-Utah NA-MIC DTI framework represents a coherent, open source, atlas fiber tract based DTI analysis framework that addresses the lack of a standardized fiber tract based DTI analysis workflow in the field. Most steps utilize graphical user interfaces (GUI) to simplify interaction and provide an extensive DTI analysis framework for non-technical researchers/investigators. Data: We illustrate the use of our framework on a 54 directional DWI neuroimaging study contrasting 15 Smokers and 14 Controls. Method(s): At the heart of the framework is a set of tools anchored around the multi-purpose image analysis platform 3D-Slicer. Several workflow steps are handled via external modules called from Slicer in order to provide an integrated approach. Our workflow starts with conversion from DICOM, followed by thorough automatic and interactive quality control (QC), which is a must for a good DTI study. Our framework is centered around a DTI atlas that is either provided as a template or computed directly as an unbiased average atlas from the study data via deformable atlas building. Fiber tracts are defined via interactive tractography and clustering on that atlas. DTI fiber profiles are extracted automatically using the atlas mapping information. These tract parameter profiles are then analyzed using our statistics toolbox (FADTTS). The statistical results are then mapped back on to the fiber bundles and visualized with 3D Slicer. Results: This framework provides a coherent set of tools for DTI quality control and analysis. Conclusions: This framework will provide the field with a uniform process for DTI quality control and analysis.
Mapping longitudinal cerebral cortex development using diffusion tensor imaging
Yaping Wang, Gang Li, Mihye Ahn, et al.
Diffusion tensor imaging (DTI) can provide convenient and crucial insights into the age-related biological maturation of the human brain, including myelination, axonal density changes, fiber tract reorganization, and synaptic pruning processes. Fractional anisotropy (FA) derived from DTI has been commonly used to characterize cellular morphological changes associated with the development of the human brain, due to its sensitivity to microstructural changes. In this paper, we aim to discern the longitudinal neurodevelopmental patterns in typically maturing human brains using 200 healthy subjects from 5 to 22 years of age, based on the FA in cortical gray matter (GM). Specifically, the FA image is first aligned with the corresponding T1 image, which has been parcellated into different cortical ROIs, and then the average FA in each ROI is computed. A linear mixed model is used to analyze the FA developmental pattern in each cortical ROI. The developmental trajectory of FA in each ROI across ages is delineated, and the best-fitting models of age-related changes in FA were linear for all ROIs. FA generally increases with age from 5 to 22 years. In addition, males and females follow a similar pattern, with the FA of females being generally lower than that of males in most ROIs. This provides insight into the microstructural changes underlying longitudinal cerebral cortex development.
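A sketch of the per-ROI mixed-model analysis follows; the data are synthetic stand-ins and the exact covariate structure of the paper's model is assumed (age and sex as fixed effects, a random intercept per subject):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format table standing in for the real per-ROI FA measurements
rng = np.random.default_rng(0)
subj = np.repeat(np.arange(50), 2)                    # 50 subjects, 2 visits each
age = np.repeat(rng.uniform(5, 20, 50), 2) + np.tile([0.0, 1.5], 50)
sex = np.repeat(rng.choice(["M", "F"], 50), 2)
rows = []
for roi in ["precentral", "superiorfrontal"]:
    fa = 0.25 + 0.004 * age + rng.normal(0, 0.02, age.size)
    rows.append(pd.DataFrame({"subject_id": subj, "age": age, "sex": sex,
                              "roi": roi, "mean_fa": fa}))
df = pd.concat(rows, ignore_index=True)

slopes = {}
for roi, roi_df in df.groupby("roi"):
    # Linear mixed model: fixed effects for age and sex, random intercept per subject
    fit = smf.mixedlm("mean_fa ~ age + sex", roi_df, groups=roi_df["subject_id"]).fit()
    slopes[roi] = fit.params["age"]                   # estimated linear FA-age slope
```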
Poster Session: Optical Coherence Tomography
3D image noise reduction and contrast enhancement in optical coherence tomography
A novel noise reduction algorithm is proposed for reducing noise and enhancing contrast in 3D Optical Coherence Tomography (OCT) images. First, the OCT image is divided into two subregions based on the local noise properties: the background area, in which additive noise is dominant, and the foreground area, in which multiplicative noise is dominant. In the background, the noise is eliminated by 2D linear filtering combined with frame averaging. In the foreground, the noise is eliminated by 3D linear filtering, an extension of the 2D linear filtering. The denoised image is then reconstructed by combining the denoised background and foreground. The above procedure can be formulated as a bi-linear model which can be solved efficiently. The proposed bi-linear model can dramatically improve image quality in 3D images with heavy noise, and the corresponding 2D linear filter kernel can be run in real time. The filter kernel we use is derived from the linear noise model of the OCT system. This noise model includes both multiplicative (speckle) noise and additive (incoherent) noise, the latter of which is not considered in most existing linear speckle filters and wavelet filters. The filter kernel can also be treated as a low-pass filter and applied to frequency extraction. An image contrast enhancement method is therefore introduced in the frequency domain, based on frequency decomposition and weighted recombination. A set of experiments is carried out to verify the effectiveness and efficiency of the proposed algorithm.
Poster Session: Image Enhancement
Image denoising of low-radiation dose coronary CT angiography by an adaptive block-matching 3D algorithm
Dongwoo Kang, Piotr Slomka, Ryo Nakazato, et al.
Our aim in this study was to optimize and validate an adaptive denoising algorithm based on Block-Matching 3D for reducing image noise and improving assessment of left ventricular function from low-radiation dose coronary CTA. In this paper, we describe the denoising algorithm and its validation, using low-radiation dose coronary CTA datasets from 7 consecutive patients. We validated the algorithm using a novel method, with the myocardial mass from the low-noise cardiac phase as a reference standard, and objective measurement of image noise. After denoising, the myocardial mass was not statistically different by comparison of individual datapoints with Student's t-test (130.9±31.3 g in the low-noise 70% phase vs 142.1±48.8 g in the denoised 40% phase, p = 0.23). Image noise improved significantly between the 40% phase and the denoised 40% phase by Student's t-test, both in the blood pool (p < 0.0001) and the myocardium (p < 0.0001). In conclusion, we optimized and validated an adaptive BM3D denoising algorithm for coronary CTA. This new method reduces image noise and has the potential to improve assessment of left ventricular function from low-dose coronary CTA.
Pulse sequence based multi-acquisition MR intensity normalization
Amod Jog, Snehashis Roy, Aaron Carass, et al.
Intensity normalization is an important preprocessing step in magnetic resonance (MR) image analysis. In MR images (MRI), the observed intensities are primarily dependent on (1) intrinsic magnetic resonance properties of the tissues, such as proton density (PD) and the longitudinal and transverse relaxation times (T1 and T2, respectively), and (2) the scanner imaging parameters, such as echo time (TE), repetition time (TR), and flip angle (α). We propose a method which utilizes three co-registered images with different contrast mechanisms (PD-weighted, T2-weighted and T1-weighted) to first estimate the imaging parameters and then estimate PD, T1, and T2 values. We then normalize the subject intensities to a reference by simply applying the pulse sequence equation of the reference image to the subject tissue parameters. Previous approaches to this problem have primarily focused on matching the intensity histogram of the subject image to a reference histogram by different methods. The fundamental drawback of these methods is their failure to respect the underlying imaging physics and tissue biology. Our method is validated on phantoms, and we show improved normalization on real images of human brains.
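As an illustration of the final normalization step, a standard spin-echo signal approximation can be used to re-render the subject's estimated tissue maps with the reference scan's imaging parameters; the paper's exact pulse sequence equations (including flip angle) may differ, and the maps and TR/TE values below are placeholders:

```python
# Sketch under the spin-echo approximation S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2)
import numpy as np

def spin_echo_signal(pd_map, t1_map, t2_map, tr, te):
    """Synthesize an image from tissue parameter maps for a given TR/TE (ms)."""
    return pd_map * (1.0 - np.exp(-tr / t1_map)) * np.exp(-te / t2_map)

# Voxelwise maps estimated from the subject's PD-, T1- and T2-weighted scans
pd_map = np.random.rand(64, 64)
t1_map = 500 + 1000 * np.random.rand(64, 64)   # ms
t2_map = 50 + 100 * np.random.rand(64, 64)     # ms

# Normalize by rendering the subject with the *reference* scan's TR/TE
normalized = spin_echo_signal(pd_map, t1_map, t2_map, tr=2500.0, te=90.0)
```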
Noise reduction using nonadditive Q-Gaussian filters in magnetic resonance images
Spatial filtering is a ubiquitous image processing approach to reduce noise, and is frequently part of image processing pipelines. The most commonly used function is the Gaussian. Recently, a generalization of the Gaussian function consistent with nonadditive statistics was proposed. Although the generalized Gaussian has been used for image filtering, no study has assessed its performance on medical images. Here, we present two classes of Q-Gaussian filters as noise reduction methods. We evaluated filter performance for magnetic resonance images (MRI) of cerebral, thoracic and abdominal regions. Fractal dimension estimates from the images were paired with filter effectiveness. Results showed that Q-Gaussian filters have improved effective filtering gain when compared to classical Gaussian filtering. Furthermore, the filter gain was observed to depend on the fractal dimension. The obtained results suggest that the Q-Gaussian filters are better for noise reduction than the classical Gaussian filter when dealing with fractal MRI or fractal noise.
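A minimal sketch of a 2D q-Gaussian smoothing kernel built from the Tsallis q-exponential, which reduces to the classical Gaussian as q → 1; the parameters and data are illustrative, not the authors' exact filter classes:

```python
# q-Gaussian kernel: G_q(r) ~ e_q(-r^2 / 2 sigma^2), e_q(u) = [1 + (1-q)u]_+^(1/(1-q))
import numpy as np
from scipy.ndimage import convolve

def q_gaussian_kernel(size=7, sigma=1.5, q=1.5):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = (xx ** 2 + yy ** 2) / (2.0 * sigma ** 2)
    if np.isclose(q, 1.0):
        kernel = np.exp(-r2)                          # classical Gaussian limit
    else:
        base = np.maximum(1.0 + (1.0 - q) * (-r2), 0.0)
        kernel = base ** (1.0 / (1.0 - q))            # heavy-tailed for q > 1
    return kernel / kernel.sum()

image = np.random.rand(128, 128)                      # placeholder MR slice
denoised = convolve(image, q_gaussian_kernel(q=1.5), mode="reflect")
```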
Multiscale TV flow with applications to fast denoising and registration
Prashant Athavale, Robert Xu, Perry Radau, et al.
Medical images consist of image structures of varying scales, with different scales representing different components. For example, in cardiac images, the left ventricle, myocardium and blood pool are the large scale structures, whereas infarct and noise are represented by relatively small scale structures. Thus, extracting different scales in an image, i.e. multiscale image representation, is a valuable tool in medical image processing. There are various multiscale representation techniques based on different image decomposition algorithms and denoising methods. Gaussian blurring with varying standard deviation can be considered a multiscale representation, but it diffuses the image isotropically, thereby blurring the main edges. On the other hand, inverse scale representations based on variational formulations preserve edges, but they tend to be time consuming and thus unsuitable for real-time applications. In the present work, we propose a fast multiscale representation technique, motivated by successive decomposition of smooth parts based on total variation (TV) minimization. Thus, we smooth a given image at increasing scales, producing a multiscale TV representation. As noise is a small scale component of an image, we can effectively use the proposed method for denoising. We also prove that the denoising speed is controlled, up to the time-step, by the user, making the algorithm well-suited for real-time applications. The proposed method inherits the edge preserving property of total variation flow. Using this property, we propose a novel multiscale image registration algorithm, in which we register corresponding scales in the images, thereby registering images efficiently and accurately.
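A rough sketch of the multiscale idea, using an off-the-shelf TV denoiser in place of the authors' TV-flow scheme: successive smoothing at increasing regularization yields detail layers that sum back to the input (the scale weights are illustrative):

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def multiscale_tv(image, weights=(0.02, 0.05, 0.1, 0.2)):
    """Return detail layers (fine to coarse) plus the final smooth part."""
    layers, current = [], image.astype(float)
    for w in weights:
        smooth = denoise_tv_chambolle(current, weight=w)
        layers.append(current - smooth)      # edge-preserving detail at this scale
        current = smooth
    layers.append(current)                   # coarsest, cartoon-like component
    return layers

image = np.random.rand(128, 128)
layers = multiscale_tv(image)
assert np.allclose(sum(layers), image)       # the layers reconstruct the input
```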
Robust blind deconvolution for fluorescence microscopy using GEM algorithm
Fluorescence microscopy has become an essential tool in biomedical research because of its better signal-to-noise ratio compared to other microscopy techniques. Among the various kinds of fluorescence microscopy, wide field fluorescence microscopy (WFFM) and confocal fluorescence microscopy are the most widely used. While confocal microscopy images have higher clarity than WFFM images, confocal imaging is not well suited to live cells because of major drawbacks such as photo-bleaching and low image acquisition speed. The purpose of this paper is to obtain clearer live cell images by restoring degraded WFFM images. Many studies have pursued this goal, but most are not based on a regularized MLE (maximum likelihood estimator), which restores the image by maximizing the Poisson likelihood. The unregularized MLE method is not robust to noise because the problem is ill-posed; in practice, Gaussian as well as Poisson noise exists in WFFM images. Some approaches improve noise robustness, but these methods cannot guarantee convergence of the likelihood. We therefore propose a robust deconvolution method for WFFM using a generalized expectation maximization (GEM) algorithm that guarantees the convergence of a regularized likelihood. Moreover, we realize blind deconvolution, restoring the image and estimating the point spread function (PSF) simultaneously, whereas most other work assumes that the PSF is known in advance. We applied the proposed algorithm to fluorescent bead and cell images. Our results show that the proposed method restores images more accurately than existing methods.
Image processing of infrared thermal images for the detection of necrotizing enterocolitis
Ruqia Nur, Monique Frize
Necrotizing Enterocolitis (NEC) is a devastating intestinal disease associated with a high rate of mortality and long-term morbidity. Treatment can be successful if NEC is diagnosed early, but no reliable methods for early diagnosis exist. Infrared imaging can detect tissue inflammation and thus has the potential to be an early diagnostic tool for NEC. Infants with no clinical or radiographic signs of NEC, and a group of infants with evidence of at least Bell's Stage 2 NEC, were enrolled in our study. Infants underwent bedside infrared imaging for 60 seconds. The dataset consists of twenty normal infants and nine infants with NEC. In earlier work, the upper-to-lower (UL) region temperatures differed significantly in infants with NEC, whereas no significant difference in the UL region was found in normal infants. No significant difference was found in left-to-right (LR) region temperatures for either group. The decision tree classifier produced good results in terms of specificity, sensitivity, and standard deviation over ten trials. Results for the medians were: 91%+/-0.07%; 84%+/-18%; and for the means they were: 86%+/-0.04%; 79%+/-21% [1]. In this work, we assessed the impact of image enhancement in discriminating between infants with NEC and those without. The approaches explored were: (i) noise reduction; (ii) background removal; and (iii) contrast enhancement. Preliminary results show marked improvement in detecting infants with NEC. Future work will automate the analysis and carry out a prospective study to attempt to detect NEC at earlier stages. Other image analysis techniques will be tested to enhance the performance of our new diagnostic tool.
Sparse dictionary representation and propagation for MRI volume super-resolution
This study addresses the problem of generating a high-resolution (HR) MRI volume from a single low-resolution (LR) MRI input volume. Recent research has shown that sparse coding can be successfully applied to single-frame super-resolution of natural images, based on the good reconstruction of any local image patch from a sparse linear combination of atoms taken from an appropriate over-complete dictionary. This study adapts the basic idea of sparse code-based super-resolution (SCSR) to MRI volume data, and then improves the dictionary learning strategy of conventional SCSR to achieve a precise sparse representation of HR volume patches. In the proposed MRI super-resolution strategy, we learn only the dictionary of HR MRI volume patches with a sparse coding algorithm, and then propagate the HR dictionary to an LR dictionary by mathematical analysis in order to calculate the sparse representation (coefficients) of any LR local input volume patch. The unknown corresponding HR volume patch can then be reconstructed from the sparse coefficients of the LR volume patch and the corresponding HR dictionary. We show that the proposed SCSR strategy with dictionary propagation can recover a much clearer and more accurate HR MRI volume than conventional interpolation methods.
Poster Session: Label Fusion
iSTAPLE: improved label fusion for segmentation by combining STAPLE with image intensity
Xiaofeng Liu, Albert Montillo, Ek T. Tan, et al.
Multi-atlas based methods have become a trend for robust and automated image segmentation. In general these methods first transfer prior manual segmentations, i.e., label maps, from a set of atlases to a given target image through image registration. These multiple label maps are then fused together to produce a segmentation of the target image through a voting strategy or statistical fusion, e.g., STAPLE. STAPLE simultaneously estimates the true segmentation and the label map performance levels, but has been shown to be inaccurate for multi-atlas segmentation because it is determined entirely from the propagated label maps without considering the target image intensity. We develop a new method, called iSTAPLE, that combines target image intensity into a maximum likelihood estimate (MLE) framework similar to that of STAPLE, to take advantage of both intensity-based segmentation and statistical label fusion based on atlas consensus and performance level. The MLE framework is then solved using a modified EM algorithm to simultaneously estimate the intensity profiles of structures of interest as well as the true segmentation and atlas performance levels. Unlike other methods, iSTAPLE does not require the target image to have the same image contrast and intensity range as the atlas images, which greatly extends the use of atlases. Experiments on whole brain segmentation showed that iSTAPLE performed consistently better than STAPLE.
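For context, a compact sketch of classical binary STAPLE-style EM label fusion is given below (simplified, with a fixed voxelwise prior); iSTAPLE additionally models the target image intensity, which is omitted here:

```python
import numpy as np

def staple(label_maps, n_iter=30):
    """label_maps: (n_atlases, n_voxels) binary arrays. Returns P(true = 1)."""
    D = np.asarray(label_maps, dtype=float)
    w = D.mean(axis=0)              # voxelwise prior (kept fixed in this sketch)
    p = np.full(D.shape[0], 0.9)    # per-atlas sensitivities
    q = np.full(D.shape[0], 0.9)    # per-atlas specificities
    for _ in range(n_iter):
        # E-step: posterior that each voxel is truly foreground
        a = w * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - w) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        W = a / np.maximum(a + b, 1e-12)
        # M-step: update each atlas's performance parameters
        p = (D * W).sum(axis=1) / np.maximum(W.sum(), 1e-12)
        q = ((1 - D) * (1 - W)).sum(axis=1) / np.maximum((1 - W).sum(), 1e-12)
    return W

# Tiny demo with three noisy "raters" of a 1D segmentation
truth = (np.arange(100) > 40).astype(int)
rng = np.random.default_rng(0)
raters = [np.where(rng.random(100) < 0.9, truth, 1 - truth) for _ in range(3)]
fused = staple(np.stack(raters)) > 0.5
```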
Poster Session: Motion
Tracking multiple neurons on worm images
Toufiq Parag, Victoria Butler, Dmitri Chklovskii
We are interested in establishing the correspondence between neuron activity and body curvature during various movements of C. elegans worms. Given long sequences of images, specifically recorded so that neurons glow when active, it is necessary to track all identifiable neurons in each frame. The characteristics of the neuron data, e.g., the uninformative nature of neuron appearance and the sequential ordering of neurons, render standard single and multi-object tracking methods either ineffective or unnecessary for our task. In this paper, we propose a multi-target tracking algorithm that correctly assigns each neuron to one of several candidate locations in the next frame while preserving a shape constraint. The results demonstrate that the proposed method can robustly track more neurons than several existing methods in long image sequences.
Involuntary motion tracking for medical dynamic infrared thermography using a template-based algorithm
In medical applications, Dynamic Infrared (IR) Thermography is used to detect the temporal variation of the skin temperature. Dynamic infrared imaging first applies a thermal challenge, such as cooling, to the human skin, and then a sequence of hundreds of consecutive frames is acquired after the removal of the thermal challenge. By analyzing the temporal variation of the skin temperature over the image sequence, the thermal signature of skin abnormality can be examined. However, during the acquisition of dynamic IR imaging, involuntary movements of patients are unavoidable, and such movements undermine the accuracy of diagnosis. In this study, a template-based tracking approach is proposed to compensate for the motion artifact. An affine warping model is adopted to estimate the motion parameters of the image template, and the Lucas-Kanade algorithm is applied to search for the optimized parameters of the warping function. In addition, a weighting mask is incorporated in the computation to ensure the robustness of the algorithm. To evaluate the performance of the approach, two sets of IR image sequences of a subject's hand are analyzed: the steady-state image sequence, in which the skin temperature is in equilibrium with the environment, and the thermal recovery image sequence, which is acquired after cooling is applied to the skin for 60 seconds. By selecting the target region in the first frame as the template, satisfactory tracking results were obtained in both experimental trials, and the robustness of the approach was effectively ensured in the recovery trial.
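A sketch of template-based affine tracking by direct minimization of a weighted SSD cost; a generic optimizer stands in here for the Lucas-Kanade Gauss-Newton updates, and the template, frame and weighting mask are placeholders:

```python
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize

def warp(frame, params, shape):
    """Apply an affine warp p = (a11, a12, a21, a22, tx, ty) to the frame."""
    a11, a12, a21, a22, tx, ty = params
    matrix = np.array([[a11, a12], [a21, a22]])
    return affine_transform(frame, matrix, offset=(tx, ty), output_shape=shape,
                            order=1, mode="nearest")

def track(template, frame, weights, p0=(1, 0, 0, 1, 0, 0)):
    """Estimate warp parameters aligning the frame to the template."""
    def cost(p):
        diff = warp(frame, p, template.shape) - template
        return np.sum(weights * diff ** 2)            # weighted SSD
    return minimize(cost, np.asarray(p0, float), method="Powell").x

# Placeholder data: template = target region of the first IR frame
frame0 = np.random.rand(120, 160)
template = frame0[30:80, 40:100]
weights = np.ones_like(template)
# p_hat = track(template, next_frame, weights)        # next_frame: later IR frame
```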
Poster Session: Registration
Volume-preserving correction of non-rigid registrations for the investigation of pleural thickening growth
Pleural thickenings can be assessed using 3D CT image data. A precise registration in the thickening regions is required for a detailed investigation of volumetric thickening growth and to algorithmically combine image information from two points in time. For this purpose, a non-rigid registration utilizing B-spline based deformations is applied. This kind of deformation is computationally efficient; however, it may induce volumetric compression in the image domain. For the assessment of growth, volume preservation must be guaranteed. In this paper we suggest a new method to enforce this preservation in selected image regions during the registration process. In contrast to other volume-preserving approaches, this correction is independent of the method previously chosen to estimate the non-rigid registration. To reduce complexity in large scale cases, we additionally present a method to approximate the global correction by successively solving smaller sub-tasks. Finally, we show that both methods reduce the compression induced by the deformation and also enhance the registration quality in terms of image similarity.
A framework for automatic tuning of system parameters and its use in image registration
The performance of most segmentation and registration algorithms depends on the values of internal parameters. Most often, these are set empirically. This is a trial-and-error process in which the developer modifies the values in an attempt to improve performance. This is an implicit form of optimization. In this paper, we present a more intuitive and systematic framework for this type of problem. We then use it to estimate optimal parameter values of a common registration problem. We formulate the performance of the registration problem as a function of its internal parameters, and use optimization techniques to search for an optimal value of these parameters. Registration quality is evaluated using a set of training images in which the anatomy of interest has been segmented, by comparing the overlap between the segmentations as induced by the registration. As a large number of computationally complex registrations are performed during the optimization, a cluster of MPI-enabled computers is used collaboratively to reduce the computation time. We evaluated the proposed method using ten CT images of the liver from five patients, and evaluated three optimization algorithms. The results showed that, compared with the empirical values suggested in the published literature, our technique was able to obtain parameter values that are tuned for particular applications in a more intuitive and systematic way. In addition, the proposed framework can potentially be used to tune system parameter values appropriate for specific input types.
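The tuning loop can be sketched as follows; the registration routine, training data and parameter vector are hypothetical placeholders, and the mean Dice overlap over the training pairs serves as the objective:

```python
import numpy as np
from scipy.optimize import minimize

def dice(a, b):
    """Dice overlap between two binary segmentations."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def mean_overlap(params, training_pairs, register):
    """register(fixed, moving, params) -> warped moving segmentation (hypothetical)."""
    scores = [dice(fixed_seg, register(fixed_img, moving_img, params))
              for fixed_img, moving_img, fixed_seg in training_pairs]
    return -np.mean(scores)                 # minimize negative Dice

# Illustrative initial guess, e.g. (grid spacing, smoothing weight, iteration scale)
x0 = np.array([40.0, 0.1, 1.0])
# result = minimize(mean_overlap, x0, args=(training_pairs, register),
#                   method="Nelder-Mead")
```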
3D registration of histology and ultrasound data for validation of prostate cancer imaging
Stefan G. Schalk, Tamerlan A. Saidov, Hessel Wijkstra, et al.
Several ultrasound (US) prostate cancer localization methods are emerging, opening opportunities for targeted biopsies and focal therapy. However, before any of these methods, like elastography or contrast-enhanced US, can be introduced into clinical practice, accurate validation is required. The current gold standard for validation is histological assessment of the prostate after radical prostatectomy. Therefore, a 3D registration of histological and US data is required. This task is complicated by misalignment between histology slices and ultrasound imaging planes, pressure caused by the adopted transrectal US probe, and deformation and volume change during fixation in formalin solution. In this work, we introduce a dedicated 3D algorithm that automatically registers histology and ultrasound data. Because there is no information available between histology slices, and internal landmarks are not consistently present in US images, the registration is based on outer-contour shape only. A 3D surface model of the prostate is constructed, based on manually outlined contours in a transrectal sweep video and a longitudinal image. A similar model is constructed from the histology slices, including cancerous areas marked by a pathologist. Registration of the models is then performed in three steps: affine registration, elastic surface registration, and internal registration. In-vitro validation of the algorithm was performed by inserting rubber wires into four prostate-mimicking phantoms and applying probe pressure. The resulting registration accuracy was 1.6 mm, which is considerably smaller than the histology slicing resolution of 4 mm.
Automatic measurement of wrist synovitis from contrast-enhanced MRI: a registration-centered approach
Peter Mysling, Sune Darkner, Jon Sporring, et al.
MRI-determined measurement of synovial inflammation (synovitis) from hand MRIs has recently gained considerable popularity as a secondary marker in rheumatoid arthritis (RA) clinical trials. The currently accepted scoring systems are, however, purely semi-quantitative and rely on assessment from a trained radiologist. We propose a novel, fully automatic technique for quantitative wrist synovitis measurement from two MRIs acquired before and after contrast agent injection. The technique estimates the volume of the synovial inflammation in three steps. First, the wrist synovial membrane is segmented using multi-atlas B-spline based freeform registration. Second, positioning differences between the pre- and post-contrast acquisitions are corrected by rigid registration. Finally, wrist synovitis is quantified from the difference between the pre- and post-contrast sequences in the region of the segmented synovium. We evaluate the proposed technique on a data set of nineteen patients with acquisitions at two time points in a leave-one-patient-out fashion. Our experiments show that we are able to perform synovitis measurement with good correlation to manual semi-quantitative RAMRIS scores for both static (r=0.84) and longitudinal (r=0.87) scoring. These results compare favorably to the RAMRIS inter-observer variability.
2D registration guided models for semi-automatic MRI prostate segmentation
Ruida Cheng, Baris Turkbey, Justin Senseney, et al.
Accurate segmentation of prostate magnetic resonance images (MRI) is a challenging task due to the variable anatomical structure of the prostate. In this work, two semi-automatic techniques for segmentation of T2-weighted MRI images of the prostate are presented. Both models are based on 2D registration that changes shape to fit the prostate boundary between adjacent slices. The first model relies entirely on registration to segment the prostate. The second model applies Fuzzy C-means and morphology filters on top of the registration in order to refine the prostate boundary. Key to the success of the two models is the careful initialization of the prostate contours, which requires specifying three Volume of Interest (VOI) contours on the axial, sagittal and coronal images. A fully automatic segmentation algorithm then generates the final results from the three images. The algorithm performance is evaluated with 45 MR image datasets. VOI volume, 3D surface volume and VOI boundary masks are used to quantify the segmentation accuracy between the semi-automatic and expert manual segmentations. Both models achieve an average segmentation accuracy of 90%. The proposed registration-guided segmentation model has been generalized to segment a wide range of T2-weighted MRI prostate images.
Monoplane stereoscopic imaging method for inverse geometry x-ray fluoroscopy
Scanning Beam Digital X-ray (SBDX) is a low-dose inverse geometry fluoroscopic system for cardiac interventional procedures. The system performs x-ray tomosynthesis at multiple planes in each frame period and combines the tomosynthetic images into a projection-like composite image for fluoroscopic display. We present a novel method of stereoscopic imaging using SBDX, in which two slightly offset projection-like images are reconstructed from the same scan data by utilizing raw data from two different detector regions. To confirm the accuracy of the 3D information contained in the stereoscopic projections, a phantom of known geometry containing high contrast steel spheres was imaged, and the spheres were localized in 3D using a previously described stereoscopic localization method. After registering the localized spheres to the phantom geometry, the 3D residual RMS errors were between 0.81 and 1.93 mm, depending on the stereoscopic geometry. To demonstrate visualization capabilities, a cardiac RF ablation catheter was imaged with the tip oriented towards the detector. When viewed as a stereoscopic red/cyan anaglyph, the true orientation (towards vs. away) could be resolved, whereas the device orientation was ambiguous in conventional 2D projection images. This stereoscopic imaging method could be implemented in real time to provide live 3D visualization and device guidance for cardiovascular interventions using a single gantry and data acquired through normal, low-dose SBDX imaging.
Cortical correspondence via sulcal curve-constrained spherical registration with application to Macaque studies
Ilwoo Lyu, Sun Hyung Kim, Joon-Kyung Seong, et al.
In this work, we present a novel cortical correspondence method with application to the macaque brain. The correspondence method is based on sulcal curve constraints on a spherical deformable registration using spherical harmonics to parameterize the spherical deformation. Starting from structural MR images, we first apply existing preprocessing steps: brain tissue segmentation using the Automatic Brain Classification tool (ABC), as well as cortical surface reconstruction and spherical parametrization of the cortical surface via Constrained Laplacian-based Automated Segmentation with Proximities (CLASP). Then, initial correspondence between two cortical surfaces is automatically determined by a curve labeling method using sulcal landmarks extracted along sulcal fundic regions. Since the initial correspondence is limited to sulcal regions, we use spherical harmonics to extrapolate and regularize this correspondence to the entire cortical surface. To further improve the correspondence, we compute a spherical registration that optimizes the spherical harmonic parameterized deformation using a metric that incorporates the error over the sulcal landmarks as well as the normalized cross correlation of sulcal depth maps over the whole cortical surface. For evaluation, a normal 18-month-old macaque brain (for both left and right hemispheres) was matched to a prior macaque brain template with 9 manually labeled, major sulcal curves. The results show successful registration using the proposed registration approach. Evaluation results for optimal parameter settings are presented as well.
Novel PET/CT image fusion via Gram-Schmidt spectral sharpening
Ronald T. Kneusel, Peter N. Kneusel
PET/CT is a widely used dual-modality imaging technique that has been clearly shown to improve tumor localization and hence patient outcomes. The standard means by which PET/CT images are displayed for review is alpha-blending, which merges the two images using a variable parameter, α, to select the relative proportion of each image displayed. In this work, we present a new fusion technique based on Gram-Schmidt spectral sharpening to display the physiological information found in the PET image along with the anatomical details from the higher resolution CT image. A selected color table is applied to the PET data to create a multi-channel (multiband) RGB image. This image, up-scaled to the resolution of the CT data, along with the original PET data, which represents a lower resolution single-channel (panchromatic) image, is processed via Gram-Schmidt orthogonalization. Then, the higher resolution CT data, modified to more closely match the PET statistics, is substituted for the first vector of the new orthogonal set. Finally, an inverse Gram-Schmidt process restores the data and results in a new RGB image which is the fusion of the original PET/CT data. We show that this new image provides a clear indication of PET activity while preserving the details of the CT image. We compare these images with alpha-blended images as well as with color-based and PCA-based spectral sharpening results.
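A sketch of Gram-Schmidt pan-sharpening adapted to PET/CT is shown below; it illustrates the substitution-and-inversion idea rather than the authors' exact processing chain, and the colour table, interpolation and statistics matching are assumptions:

```python
import numpy as np
from matplotlib import cm
from scipy.ndimage import zoom

def gram_schmidt_fuse(pet, ct):
    # Colour-table RGB from the PET, upsampled to the CT grid; the PET itself
    # acts as the simulated low-resolution "panchromatic" band.
    pet_up = zoom(pet, np.array(ct.shape) / np.array(pet.shape), order=1)
    rgb = cm.hot(pet_up / pet_up.max())[..., :3]
    bands = [pet_up] + [rgb[..., i] for i in range(3)]
    flat = [b.ravel() - b.mean() for b in bands]

    # Forward Gram-Schmidt, remembering the projection coefficients
    gs, coeffs = [], []
    for x in flat:
        c = [np.dot(x, g) / np.dot(g, g) for g in gs]
        gs.append(x - sum(ci * gi for ci, gi in zip(c, gs)))
        coeffs.append(c)

    # Substitute the first component with the statistics-matched CT
    ct_flat = ct.ravel().astype(float)
    gs[0] = (ct_flat - ct_flat.mean()) * (gs[0].std() / ct_flat.std())

    # Inverse Gram-Schmidt on the RGB bands and re-add the band means
    fused = []
    for k in range(1, 4):
        x = gs[k] + sum(ci * gi for ci, gi in zip(coeffs[k], gs))
        fused.append((x + bands[k].mean()).reshape(ct.shape))
    return np.clip(np.dstack(fused), 0, 1)

# Example with synthetic 2D slices (placeholders)
pet = np.random.rand(64, 64)
ct = np.random.rand(256, 256)
fused_rgb = gram_schmidt_fuse(pet, ct)
```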
Characterisation of respiratory motion extracted from 4D MRI
Nuclear Medicine (NM) imaging is currently the most sensitive approach for functional imaging of the human body. However, in order to achieve high-resolution imaging, one of the factors degrading the detail or apparent resolution in the reconstructed image, namely respiratory motion, has to be overcome. All respiratory motion correction approaches depend on some assumption or estimate of respiratory motion. In this paper, the respiratory motion found from 4D MRI is analysed and characterised. The characteristics found are compared with previous studies and will be incorporated into the process of estimating respiratory motion.
Extracting respiratory motion from 4D MRI using organ-wise registration
Nuclear Medicine (NM) imaging serves as a powerful diagnostic tool for imaging of biochemical and physiological processes in vivo. The degradation in spatial image resolution caused by the often irregular respiratory motion must be corrected to achieve high resolution imaging. In order to perform motion correction more accurately, it is proposed that patient motion obtained from 4D MRI can be used to analyse respiratory motion. To extract motion from the dynamic MRI dataset, an organ-wise intensity-based affine registration framework is proposed and evaluated. The resulting motion obtained within selected organs is compared against an open source free-form deformation algorithm. For validation, the correlation of the results of both techniques to a previous study of motion in 20 patients is found. Organ-wise affine registration correlates very well (r ≈ 0.9) with the previous study (Segars et al., 2007), whilst free-form deformation shows little correlation (r ≈ 0.3). This increases confidence in the organ-wise affine registration framework being an effective tool to extract motion from dynamic anatomical datasets.
Evaluation of 3D-2D registration methods for registration of 3D-DSA and 2D-DSA cerebral images
Recent C-arm systems used for endovascular image-guided interventions enable the acquisition of three-dimensional (3D) and dynamic two-dimensional (2D+t) images in the same interventional suite. The 3D images are used to observe the vascular morphology, while the 2D+t images show the current state of the intervention. Spatial alignment of the 3D and 2D+t images can facilitate endovascular interventions, e.g. by displaying the intra-interventional tools and contrast-agent flow in augmented 3D+t images. To achieve this spatial alignment, several 3D-2D registration methods have been proposed that are concerned with finding the rigid-body parameters of the 3D image, while the pose of the C-arm system is usually obtained through a dedicated C-arm calibration. In practice, the calibrated C-arm pose parameters are typically valid only if the imaged object is positioned in the C-arm's isocenter. To compensate for this, the 3D-2D registration should search simultaneously for the rigid-body as well as the C-arm pose parameters. For verification, we tested three 3D-2D registration methods on real, clinical 3D and 2D+t angiographic images of twenty patients, ten of whom were imaged with attached fiducial markers to obtain a "gold standard" registration. The results indicate that searching simultaneously for the rigid-body and C-arm pose parameters, compared to searching solely for the rigid-body parameters, significantly improves the accuracy and success rate of the 3D-2D registration methods. Among the three tested methods, the intensity-based method using mutual information was the most robust, as it successfully registered all clinical datasets, and highly accurate, as the maximal fiducial registration error was less than or equal to 0.34 mm.
Super-resolution in cardiac MRI using a Bayesian approach
Nelson Velasco Toledo, Andrea Rueda, Cristina Santa Marta, et al.
Acquisition of high-quality cardiac MR images is highly limited by continuous heart motion and by apnea (breath-hold) periods. A typical acquisition results in volumes with inter-slice separations of up to 8 mm. This paper presents a super-resolution strategy that estimates a high-resolution image from a set of low-resolution image series acquired in different non-orthogonal orientations. The proposal is based on a Bayesian approach that implements a Maximum a Posteriori (MAP) estimator combined with a Wiener filter. A pre-processing stage is also included to correct or eliminate differences in image intensities and to transform the low-resolution images to a common spatial reference system. The MAP estimation includes an observation image model that represents the different contributions to the voxel intensities based on a 3D Gaussian function. A quantitative and qualitative assessment was performed using synthetic and real images, showing that the proposed approach produces a high-resolution image with significant improvements (about 3 dB in PSNR) with respect to simple trilinear interpolation. The Wiener filter shows little contribution to the final result, demonstrating that the MAP uniformity prior is able to filter out a large amount of the acquisition noise.
Stochastic image registration with user constraints
Ivan Kolesov, Jehoon Lee, Patricio Vela, et al.
Constrained registration is an active area of research and is the focus of this work. This note describes a non-rigid image registration framework for incorporating landmark constraints. Points that must remain stationary are selected, the user chooses the spatial extent of the inputs, and an automatic step computes the deformable registration, respecting the constraints. Parametrization of the deformation field is by an additive composition of a similarity transformation and a set of Gaussian radial basis functions. The bases’ centers, variances, and weights are determined with a global optimization approach that is introduced. This approach is based on the particle filter for performing constrained optimization; it explores a series of states defining a deformation field that is physically meaningful (i.e., invertible) and prevents chosen points from moving. Results on synthetic two dimensional images are presented.
A novel point-based nonrigid image registration scheme based on learning optimal landmark configurations
Tao Wan, B. Nicolas Bloch, Shabbar Danish, et al.
Image registration plays an increasingly important role in the field of medical image processing given the plurality of images often acquired from different sensors, time points, or viewpoints. Landmark-based registration schemes represent the most popular class of registration methods due to their simplicity and high accuracy. Previous studies have shown that these registration schemes are sensitive to the number and location of landmarks. Identifying important landmarks to perform an accurate registration remains a very challenging task. Current landmark selection methods, such as feature-based approaches, focus on optimization of the global transformation and may have poor performance in recovering local deformation, e.g. subtle tissue changes caused by tumor resection, making them inappropriate for registering pre- and post-surgery images, as a small cancerous region will be deformed after removing a tumor. In this work, a novel method is introduced to estimate optimal landmark configurations. An important landmark configuration, used as a training landmark set, was learned for an image pair with a known deformation. This landmark configuration can be considered as a collection of discrete points. A generic transformation matrix between a pair of training landmark sets with different deformation locations was computed via an iterative closest point (ICP) alignment technique. A new landmark configuration was determined by simply transforming the training landmarks to the current displacement location while preserving the topological structure of the configuration of landmarks. Two assumptions are made: 1) in a new pair of images the deformation is approximately the same size and has only been spatially relocated in the image, so that by a simple affine transformation one can identify the optimal configuration on this new pair of images; and 2) the deformation is of similar size and shape on the original pair of images. These are reasonable assumptions in many cases where one seeks to register tumor images at multiple time points following application of therapy and to evaluate changes in tumor size. The experiments were conducted on 286 pairs of synthetic MRI brain images. The training landmark configurations were obtained through 2000 iterations of registration in which the points with consistently best registration performance were selected. The estimated landmarks greatly improved the quality metrics compared to a uniform grid placement scheme and a speeded-up robust features (SURF) based method, as well as a generic free-form deformation (FFD) approach. The quantitative results showed that the new landmark configuration achieved 95% improvement in recovering the local deformation, compared to 89% for the uniform grid placement, 79% for the SURF-based approach, and 10% for the generic FFD approach.
Recursive Bayesian estimation of respiratory motion using a modified autoregressive transition model
Compensation for respiratory motion has been identified as a crucial factor in achieving high resolution Nuclear Medicine (NM) imaging. Many motion correction approaches have been studied and they are seen to have advantages over simpler approaches such as respiratory gating. However, all motion correction approaches rely on an assumption or estimation of respiratory motion. This paper builds upon previous work in recursive Bayesian estimation of respiratory motion assuming a stereo camera observation of the motion of the external torso surface. This paper compares the performance of a modified autoregressive transition model against the previously presented linear transition model used when estimating motion within a 4D dataset generated from the XCAT phantom.
Poster Session: Segmentation
Skeleton-based refinement of multi-material volumetric meshes
Cristina Oyarzun Laura, Pablo Bueno Plaza, Klaus Drechsler, et al.
Accurate multi-material mesh generation is necessary for many applications, e.g., image-guided surgery, in which precision is important. For this application, it is necessary to enhance conventional algorithms with physiological information that adds accuracy to the results. There are several approaches working on the generation of such meshes. However, state-of-the-art approaches show inaccuracies in areas containing thin structures, e.g., the liver vasculature. These algorithms are unable to detect the vessels where they are narrow and assign their elements to the wrong material, e.g., parenchyma. We propose to extend two state-of-the-art algorithms, namely those by Boltcheva et al. and by Pons et al., and enhance them using the skeleton of these structures to solve this problem. By analyzing the mesh generated by the aforementioned algorithms, one can find several intersections between the mesh belonging to the vessels and the skeleton, showing that some elements must be mismatched. We evaluate the proposed algorithm on 23 clinical datasets of the liver, in which we previously segmented parenchyma and vessels. For quantitative evaluation, the meshes generated with and without skeleton information are compared. The improvements are shown by means of the number of intersections, and the volume and length differences of the vasculature mesh using the different methods. The results show an improvement of 65% for the number of intersections, 4% for the volume, and 22% for the length.
Image segmentation using normalized cuts with multiple priors
We present a novel method to incorporate prior knowledge into normalized cuts. The prior is incorporated into the cost function by maximizing the similarity of the prior to one partition and its dissimilarity to the other. This simple formulation can also be extended to multiple priors to allow the modeling of shape variations. A shape model obtained by PCA on a training set can be easily integrated into the new framework. This is in contrast to other methods, which usually incorporate prior knowledge via hard constraints during optimization. The eigenvalue problem inferred by spectral relaxation is not sparse, but can still be solved efficiently. We apply this method to toy and real data and compare it with other normalized cut based segmentation algorithms and graph cuts. We demonstrate that our method gives promising results and can still produce a good segmentation even when the prior is not accurate.
Sparseness constrained nonnegative matrix factorization for unsupervised 3D segmentation of multichannel images: demonstration on multispectral magnetic resonance image of the brain
Ivica Kopriva, Ante Jukić, Xinjian Chen
A method is proposed for unsupervised 3D (volume) segmentation of registered multichannel medical images. To this end, the multichannel image is treated as a 4D tensor and represented by a multilinear mixture model, i.e., the image is modeled as a weighted linear combination of the 3D intensity distributions of the organs (tissues) present in the image. Interpretation of this model suggests that 3D segmentation of organs (tissues) can be implemented through sparseness constrained factorization of the nonnegative matrix obtained by mode-4 unfolding of the 4D image tensor. The sparseness constraint implies that only one organ (tissue) is dominantly present at each pixel or voxel. The method is preliminarily validated, in terms of Dice's coefficient, on the extraction of a brain tumor from a synthetic multispectral magnetic resonance image obtained from the TumorSim database.
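A minimal sketch of the factorization idea, assuming a simple L1 penalty as a stand-in for the paper's sparseness constraint: the multichannel volume is mode-4 unfolded into a channels-by-voxels matrix, factorized with multiplicative-update NMF, and each voxel is assigned to its dominant component. The tensor sizes, number of components, and penalty weight are illustrative.

```python
# Sketch of sparseness-encouraging NMF applied to the mode-4 unfolding of a
# multichannel 3D image (X, Y, Z, channels). The L1 penalty below is a simple
# stand-in for the paper's sparseness constraint; sizes are illustrative.
import numpy as np

def sparse_nmf(V, k, lam=0.1, n_iter=200, eps=1e-9, seed=0):
    """Multiplicative-update NMF minimizing ||V - WH||^2 + lam * sum(H)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)   # L1 term promotes sparse H
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# synthetic 4-channel volume standing in for a multispectral MR image
rng = np.random.default_rng(2)
X, Y, Z, C, k = 16, 16, 8, 4, 3
volume = rng.random((X, Y, Z, C))

# mode-4 unfolding: channels along rows, voxels along columns
V = volume.reshape(-1, C).T                  # shape (C, X*Y*Z)
W, H = sparse_nmf(V, k)

# hard 3D segmentation: each voxel is assigned its dominant component
labels = H.argmax(axis=0).reshape(X, Y, Z)
```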
Customized hybrid level sets for automatic lung segmentation in chest x-ray images
A chest x-ray screening system for pulmonary pathologies such as tuberculosis (TB) is of paramount importance due to the increasing mortality rate of patients with undiagnosed TB, especially in densely populated developing countries. As a first step toward developing such screening systems, this paper presents a novel computer vision module that automatically segments the lungs from posteroanterior digital chest x-ray images. The segmentation task is non-trivial due to poor image contrast and occlusion of the lung region by the ribs, clavicle, heart, and by non-TB abnormalities associated with pulmonary diseases. In the proposed procedure, we first compute a lung shape model by employing a level set based technique for registration up to a homography. Next, we use this computed mean lung shape to initialize the level set, based on a best-fit measure obtained in a heuristically estimated search space for the projective transform parameters. Once the level set is initialized, a suite of customized lower-level image features and higher-level shape features up to a homography evolve the level set function at a lower resolution in order to achieve a coarse segmentation of the lungs. Finally, a fine segmentation step is performed by adding additional shape variation constraints and evolving the level set at a higher resolution. We processed the standard Japanese Society of Radiological Technology (JSRT) dataset, comprised of 247 images, using this scheme. The promising results (92% accuracy) demonstrate the viability and efficacy of the proposed approach.
An automatic tumor segmentation framework of cervical cancer in T2-weighted and diffusion weighted magnetic resonance images
Yueying Kao, Wu Li, Huadan Xue, et al.
Cervical cancer is one of the most common malignant tumors and a major health threat for women. Accurate segmentation of cervical cancer is of important clinical significance for its prevention, diagnosis, and treatment. Due to the complexity of the structure of the human abdomen, images from a single imaging modality, T2-weighted MR, cannot sufficiently show the precise extent of the cervical cancer. In this paper, we present an automatic segmentation framework for cervical cancer that makes use of the information provided by both T2-weighted magnetic resonance (MR) images and diffusion weighted magnetic resonance (DW-MR) images. This framework consists of the following steps. Firstly, the DW-MR images are registered to the T2-weighted MR images using a mutual information method; then a classification operation is executed in the registered DW-MR images to localize the tumor. Secondly, the T2-weighted MR images are filtered by the P-M nonlinear anisotropic diffusion filtering technique; the bladder and rectum are then segmented and excluded, so the Region of Interest (ROI) containing the tumor is extracted. Finally, the tumor is accurately segmented by the Confederative Maximum a Posteriori (CMAP) algorithm, combining the results from the T2-weighted MR images and the DW-MR images. We tested this framework on 5 different cervical cancer patients. Comparison with results outlined manually by experienced radiologists demonstrates the effectiveness of our proposed segmentation framework.
False-positive reduction of liver tumor detection using ensemble learning method
Atsushi Miyamoto, Junichi Miyakoshi, Kazuki Matsuzaki, et al.
We propose a novel ensemble learning method that can be applied to false-positive reduction in liver tumor detection. In liver tumor detection, the training data often has issues arising from the characteristics of liver tumors, and conventional ensemble learning methods such as Bagging and AdaBoost tend to degrade sensitivity. The proposed method generates various weak classifiers based on adaptive sampling in order to enhance the ensemble effect against such issues, and can achieve accuracy satisfying the requirements of liver tumor detection. We applied the method to 48 CT images and evaluated its accuracy. The results showed that the proposed method succeeded in greatly reducing false positives (from 3.96 to 1.10 per image) while maintaining the required sensitivity.
Lobar fissure detection using line enhancing filters
Tobias Klinder, Hannes Wendland, Rafael Wiemker
Automatic segmentation of lung lobes from CT data is becoming clinically relevant as an enabler for, e.g., lobe-based quantitative analysis for diagnostics or more accurate interventional planning. The detection of fissures is thereby usually a first step in a more comprehensive segmentation framework. Although many approaches addressing fissure detection have been presented in the past, there are still several limitations. In this paper, we review one of the most prominent algorithms for fissure detection, which is based on eigenvalue analysis of the Hessian matrix, and discuss its inherent limitations. In order to overcome these shortcomings, we propose a novel line enhancing filter using multiple hypothesis testing. Due to the large search space of a potential three-dimensional surface orientation, we search for fissure line pieces in two-dimensional cut planes. For each voxel inside the lungs, we match the local two-dimensional neighborhood around the voxel with a fissure template model representing a bright line on a dark background. By testing numerous rotated versions of the template model, we are able to detect fissures of different orientations. In contrast to the eigenvalue analysis of the Hessian matrix, the local neighborhood to be considered can be effectively varied for the new filter with a limited set of parameters, thus providing more flexibility. On 20 cases from a publicly available database, an ROC curve analysis showed that the line enhancing filter results in an average area under the curve of 0.71, compared to 0.67 using the Hessian filter.
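A minimal sketch of the multiple-hypothesis template matching described above: a zero-mean, unit-norm bright-line template is rotated over a set of angles and the per-pixel maximum filter response is kept as a "fissureness" measure. The template size, rotation angles, and toy test image are illustrative assumptions.

```python
# Sketch of a line-enhancing filter by multiple-hypothesis template matching:
# a bright-line template is rotated over a set of angles and the per-pixel
# maximum matched-filter response is kept. Template size and angles are
# illustrative assumptions.
import numpy as np
from scipy.ndimage import rotate, correlate

def line_template(size=15, width=1):
    """Bright horizontal line on a dark background, zero-mean and unit-norm."""
    t = np.zeros((size, size))
    c = size // 2
    t[c - width // 2 : c + width // 2 + 1, :] = 1.0
    t -= t.mean()
    return t / np.linalg.norm(t)

def line_filter(image, angles=np.arange(0, 180, 15)):
    """Per-pixel maximum response over rotated line templates."""
    base = line_template()
    response = np.full(image.shape, -np.inf)
    for a in angles:
        templ = rotate(base, a, reshape=False, order=1)
        templ -= templ.mean()
        norm = np.linalg.norm(templ)
        if norm > 0:
            templ /= norm
        response = np.maximum(response, correlate(image, templ, mode='nearest'))
    return response

# usage on a synthetic 2D cut plane containing an oblique bright line
img = np.zeros((64, 64))
for i in range(64):
    j = int(0.5 * i) + 10
    if 0 <= j < 64:
        img[i, j] = 1.0
img += np.random.default_rng(3).normal(0, 0.1, img.shape)
fissureness = line_filter(img)
```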
Steerable wavelet transform for atlas based retinal lesion segmentation
Diabetic macular edema (DME), characterized by discrete white-yellow lipid deposits due to vascular leakage, is one of the most severe complications seen in diabetic patients and causes vision loss in affected areas. Such vascular leakage can be treated by laser surgery. Regular follow-up and laser photocoagulation can reduce the risk of blindness by 90%. In an automated retina screening system, it is thus very crucial to segment such hard exudates accurately and to register images taken over time to a reference co-ordinate system to make the necessary follow-ups more precise. We introduce a novel method based on an ethnicity-specific statistical atlas for exudate segmentation and follow-up. Ethnic background plays a significant role in the retinal pigment epithelium, the visibility of the choroidal vasculature, and the overall retinal luminance in patients and retinal images. Such a statistical atlas can thus help to provide a solution, simplify the image processing steps, and increase the detection rate. In this paper, bright lesion segmentation is investigated and experimentally verified for a gold standard built from African American fundus images. 40 automatically generated landmark points on the major vessel arches, together with the macula and optic disk centers, are used to warp the retinal images. PCA is used to obtain a mean shape of the retinal major arches (both lower and upper). The means of the co-ordinates of the macula and optic disk center are added, resulting in 42 landmark points that together provide a reference co-ordinate frame (the atlas co-ordinate frame) for the images. The retinal fundus images of an ethnic group without any artifact or lesion are warped to this reference co-ordinate frame, from which we obtain a mean image representing the statistical measure of the chromatic distribution of the pigments in the eyes of that particular ethnic group. 400 images of African American eyes have been used to build such a gold standard for this ethnic group. Any test image of a patient of that ethnic group is first warped to the reference frame, and then a distance map is computed against this mean image. Finally, post-processing schemes are applied to the distance map image to enhance the edges of the exudates. Multi-scale and multi-directional steerable filters along with the Kirsch edge detector were found to be promising. Experiments with the publicly available HEI-MED dataset showed the good performance of the proposed method. We achieved a lesion localization fraction (LLF) of 82.5% at a 35% non-lesion localization fraction (NLF) on the FROC curve.
Automated segmentation of MS lesions in brain MR images using localized trimmed-likelihood estimation
Diagnosis and prognosis of patients with multiple sclerosis (MS) rely on quantitative markers derived from the analysis of magnetic resonance (MR) images. To compute these markers, a segmentation of the lesions in the brain tissues, which are characteristic of MS, is needed. In this paper, we propose an unsupervised method for segmenting MS lesions that employs localized trimmed-likelihood estimation (TLE) to model the intensity distributions of normal appearing brain tissues (NABT). Compared to the original whole-brain TLE approach, the proposed method employs a set of three-component Gaussian mixture models for each of the spatially localized and non-overlapping subregions of the brain. The subregions were assigned by a customized balanced box decomposition that takes into account the spatial distribution and the cardinality of NABT tissues, as obtained from the initial whole-brain TLE. The proposed method was tested and compared to the original TLE approach on publicly available synthetic BrainWeb datasets. The results indicate a higher average Dice similarity coefficient, both for the segmentation of NABT and of MS lesions, when using the proposed spatially localized TLE as compared to the original whole-brain TLE, which is due to the fact that the proposed method yields a more accurate NABT model and thus detects fewer false NABT outliers.
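As a rough sketch of the trimmed-likelihood idea (not the paper's localized variant), the code below repeatedly fits a three-component Gaussian mixture to the best-explained fraction of the intensity samples and treats the trimmed, low-likelihood samples as lesion candidates. The trimming fraction, number of refit rounds, and synthetic intensities are assumptions.

```python
# Sketch of the trimmed-likelihood idea: fit a three-component Gaussian
# mixture while iteratively discarding the lowest-likelihood fraction of the
# samples, then flag those rejected samples as outlier (lesion) candidates.
import numpy as np
from sklearn.mixture import GaussianMixture

def trimmed_gmm(features, trim_frac=0.05, n_components=3, n_rounds=5, seed=0):
    """Iteratively re-fit a GMM on the (1 - trim_frac) best-explained samples."""
    keep = np.ones(len(features), dtype=bool)
    gmm = None
    for _ in range(n_rounds):
        gmm = GaussianMixture(n_components=n_components, random_state=seed)
        gmm.fit(features[keep])
        loglik = gmm.score_samples(features)     # per-sample log-likelihood
        threshold = np.quantile(loglik, trim_frac)
        keep = loglik > threshold                # trim the worst-explained samples
    return gmm, ~keep                            # outliers = trimmed samples

# usage on synthetic 1D "intensities" (three tissues + a few bright outliers)
rng = np.random.default_rng(4)
intensities = np.concatenate([
    rng.normal(30, 3, 4000),    # e.g. CSF-like
    rng.normal(70, 4, 4000),    # e.g. GM-like
    rng.normal(110, 4, 4000),   # e.g. WM-like
    rng.normal(160, 5, 100),    # lesion-like outliers
]).reshape(-1, 1)
gmm, outlier_mask = trimmed_gmm(intensities)
```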
Development of a novel constellation based landmark detection algorithm
Ali Ghayoor, Jatin G. Vaidya, Hans J. Johnson
Anatomical landmarks such as the anterior commissure (AC) and posterior commissure (PC) are commonly used by researchers for co-registration of images. In this paper, we present a novel, automated approach for landmark detection that combines morphometric constraining and statistical shape models to provide accurate estimation of landmark points. The method is made robust to large rotations in initial head orientation by extracting additional information about the eye centers using a radial Hough transform and by exploiting the centroid of head mass (CM) using a novel estimation approach. To evaluate the effectiveness of this method, the algorithm is trained on a set of 20 images with manually selected landmarks, and a test dataset is used to compare the automatically detected against the manually detected landmark locations of the AC, PC, midbrain-pons junction (MPJ), and fourth ventricle notch (VN4). The results show that the proposed method is accurate, as the average error between the automatically and manually labeled landmark points is less than 1 mm. The algorithm is also highly robust, as it ran successfully on a large dataset that included different kinds of images with various orientations, spacings, and origins.
Breast segmentation in MRI: quantitative evaluation of three methods
A precise segmentation of breast tissue is often required for computer-aided diagnosis (CAD) of breast MRI. Only a few methods have been proposed to automatically segment the breast in MRI. Their authors reported satisfactory performance, but a fair comparison has not yet been done, as all breast segmentation methods were evaluated on their own data sets with different manual annotations. Moreover, breast volume overlap measures, which were commonly used for evaluation, do not seem adequate to accurately quantify segmentation quality. Breast volume overlap measures are not sensitive to small errors, such as local misalignments, because the breast appears much larger than other structures. In this work, two atlas-based approaches and a breast segmentation method based on a Hessian sheetness filter are exhaustively evaluated and benchmarked on a data set of 52 manually annotated breast MR images. Three quantitative measures, including dense tissue error, pectoral muscle error, and pectoral surface distance, are defined to objectively reflect the practical use of breast segmentation in CAD methods. The evaluation measures provide important evidence to conclude that the three evaluated techniques perform accurate breast segmentations. More specifically, the atlas-based methods appear to be more precise, but require more computation time than the sheetness-based breast segmentation approach.
Fuzzy model based object delineation via energy minimization
We study the problem of automatic delineation of an anatomic object in an image, where the object is solely identified by its anatomic prior. We form such priors as fuzzy models to facilitate the segmentation of images acquired via different imaging modalities (such as CT, MRI, or PET), in which the recorded image properties are usually different. Our main interest is in delineating different body organs in medical images for automatic anatomy recognition (AAR). The AAR system we are developing consists of three main components: (C1) building body-wide groupwise fuzzy anatomic models; (C2) recognizing the body organs geographically and then delineating them by employing the models; (C3) generating quantitative descriptions. This paper focuses on (C2) and presents a unified approach for model-based segmentation within which several different strategies can be formulated, ranging from model-based hard/fuzzy thresholding to model-based graph cut, fuzzy connectedness, and random walker methods and algorithms. This is an important theoretical advance. The presented experiments clearly show that a fully automatic segmentation system based on the fuzzy models can indeed provide reliable segmentations. However, the presented experiments utilize only the simplest versions of the methodology presented in the theoretical part of the paper. The full experimental evaluation of the methodology is still a work in progress.
Consistent 4D brain extraction of serial brain MR images
Yaping Wang, Gang Li, Jingxin Nie, et al.
Accurate and consistent skull stripping of serial brain MR images is of great importance in longitudinal studies that aim to detect subtle brain morphological changes. To avoid the inconsistency and potential bias introduced by independently performing skull-stripping for each time-point image, we propose an effective method that is capable of skull-stripping serial brain MR images simultaneously. Specifically, all serial images of the same subject are first affine aligned in a groupwise manner to a common space to avoid any potential bias introduced by asymmetric transforms. A brain probability map, which encapsulates prior information gathered from a population of real brain MR images, is then warped to the aligned serial images to guide skull-stripping via a deformable surface method. In particular, the same initial surface meshes representing the initial brain surfaces are first placed on all aligned serial images, and then all these surface meshes are simultaneously evolved to the respective target brain boundaries, driven by an intensity-based force, a force from the probability map, and forces from spatial and temporal smoothness. Notably, imposing temporal smoothness helps achieve longitudinally consistent results. Evaluations on 20 subjects, each with 4 time points, from the ADNI database indicate that our method gives more accurate and consistent results than a 3D skull-stripping method. To better show the advantages of our 4D brain extraction method over the 3D method, we compute the Dice ratio in a ring area (±5 mm) surrounding the ground-truth brain boundary, and our 4D method achieves around 3% improvement over the 3D method. In addition, our 4D method also gives smaller mean and maximal surface-to-surface distance measurements, with reduced variances.
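A minimal sketch of the ring-restricted Dice measure mentioned above, assuming isotropic 1 mm voxels: the distance of every voxel to the ground-truth boundary is obtained from distance transforms, and the Dice overlap is computed only inside the ±5 mm band.

```python
# Sketch of the ring-restricted Dice measure: Dice overlap computed only
# inside a band of +/- 5 mm around the ground-truth boundary.
# Isotropic 1 mm voxels are assumed for simplicity.
import numpy as np
from scipy.ndimage import distance_transform_edt, binary_dilation

def ring_dice(seg, gt, ring_mm=5.0, spacing=(1.0, 1.0, 1.0)):
    """Dice overlap restricted to a band of +/- ring_mm around the gt boundary."""
    gt = gt.astype(bool)
    seg = seg.astype(bool)
    # approximate distance of every voxel to the ground-truth boundary (mm)
    dist = (distance_transform_edt(gt, sampling=spacing)
            + distance_transform_edt(~gt, sampling=spacing))
    band = dist <= ring_mm
    a, b = seg & band, gt & band
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-9)

# usage on a synthetic sphere vs. a slightly dilated version of it
zz, yy, xx = np.indices((64, 64, 64))
gt = (xx - 32) ** 2 + (yy - 32) ** 2 + (zz - 32) ** 2 <= 20 ** 2
seg = binary_dilation(gt, iterations=2)
print(ring_dice(seg, gt))
```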
Statistical representation of high-dimensional enhancement fields with application to consistent enhancement of chest x-ray images
Zhiqiang Lao, Xin Zheng, Quncai Zou
This paper proposes a statistical model of the enhancement field (SMEF) that aims to effectively capture the statistics of high-dimensional enhancement fields, which can then be leveraged to regularize the enhancement of portable chest radiograph images captured in the intensive care unit (ICU). Wavelet-packet transformation (WPT) of the enhancement fields, coupled with PCA in each wavelet band, is used to more accurately estimate a prior pdf of high-dimensional enhancement fields from a limited number of training samples. As a result, more consistent enhancement results can be obtained. In the experiments, we first demonstrate the ability of SMEF to improve the visibility of CR ICU images and then demonstrate its ability to provide a more consistent image enhancement solution, by comparing the localized image enhancement algorithm CLAHE with its SMEF-constrained version. The proposed SMEF framework can potentially incorporate various image enhancement algorithms to improve their consistency and stability.
Localizing and segmenting Crohn's disease affected regions in abdominal MRI using novel context features
Dwarikanath Mahapatra, Peter J. Schüffler, Jeroen A. W. Tielbeek, et al.
The increasing incidence of Crohn's disease (CD) in the Western world has made its accurate diagnosis an important medical challenge. The current reference standard for diagnosis, colonoscopy, is time consuming and invasive; as a result, magnetic resonance imaging (MRI) has emerged as the preferred non-invasive alternative. Current MRI approaches rely on extensive manual segmentation for accurate analysis, thus limiting their effectiveness. We propose a supervised learning method for the localization and segmentation of regions in abdominal MR images that have been affected by CD. Higher order statistics from intensity and texture are used with context information to distinguish between diseased and normal regions. Particular emphasis is laid on a novel measure to derive context information. Experiments on real patient data show that our features achieve high sensitivity and can successfully segment the pixels belonging to CD-affected regions.
Glottis segmentation using dynamic programming
Jing Chen, Bahadir K. Gunturk, Melda Kunduk
High speed videoendoscopy (HSV) is widely used for the assessment of vocal fold vibratory behavior. Due to the huge volume of HSV data, an automated and accurate segmentation of the glottal opening is needed for objective quantification and analysis of vocal fold vibratory characteristics. In this study, a simplified dynamic programming based algorithm is presented for glottis segmentation. The underlying idea is to track the glottal edge in the gradient image, where the average gradient magnitude along the edge path is assumed to be maximal. To achieve accurate segmentation results and enable further analysis, we addressed different aspects of the problem, including reflection removal, detection of the posterior and anterior commissures, and determination of the open and closed portions of the glottal area. Reflection removal, which is essential for robust segmentation, is also achieved by dynamic programming. The posterior and anterior commissures in each frame of the HSV data help pre-define the range of the glottal area that needs to be segmented and therefore decrease the segmentation cost. In addition to the proposed algorithm, three other methods (active contour, standard dynamic programming, and fixed-threshold segmentation) have been implemented. The experimental results show that the proposed algorithm is more efficient and accurate than the others.
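As a rough illustration of the dynamic-programming edge tracking, the sketch below finds a column-monotone path through a gradient-magnitude image (one column position per row, steps of at most one column between rows) that maximizes the accumulated gradient. The path structure, step constraint, and toy image are simplifying assumptions rather than the authors' formulation.

```python
# Sketch of dynamic-programming edge tracking: find the path through the
# gradient-magnitude image, one column position per row, that maximizes the
# accumulated gradient (a seam-like path).
import numpy as np

def dp_edge_path(gradient):
    """Return, for each row, the column of the maximal-gradient monotone path."""
    rows, cols = gradient.shape
    score = np.full((rows, cols), -np.inf)
    back = np.zeros((rows, cols), dtype=int)
    score[0] = gradient[0]
    for r in range(1, rows):
        for c in range(cols):
            lo, hi = max(0, c - 1), min(cols, c + 2)      # allow steps of -1, 0, +1
            prev = np.argmax(score[r - 1, lo:hi]) + lo
            back[r, c] = prev
            score[r, c] = score[r - 1, prev] + gradient[r, c]
    # backtrack from the best final column
    path = np.zeros(rows, dtype=int)
    path[-1] = int(np.argmax(score[-1]))
    for r in range(rows - 1, 0, -1):
        path[r - 1] = back[r, path[r]]
    return path

# usage: gradient magnitude of a toy frame with a bright-to-dark vertical edge
img = np.zeros((60, 80))
img[:, 40:] = 1.0
gy, gx = np.gradient(img)
grad_mag = np.hypot(gx, gy)
edge_cols = dp_edge_path(grad_mag)
```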
Effects of T2-weighted MRI based cranial volume measurements on studies of the aging brain
Phong Vuong, David Drucker, Chris Schwarz, et al.
Many brain aging studies use total intracranial volume (TIV) as a proxy measure of premorbid brain size that is unaffected by neurodegeneration. T1-weighted Magnetic Resonance Imaging (MRI) sequences are commonly used to measure TIV, but T2-weighted MRI sequences provide superior contrast between the cerebrospinal fluid (CSF) bounding the premorbid brain space and the surrounding dura mater. In this study, we compared T1-based and T2-based TIV measurements to assess the practical impact of this superior contrast on studies of brain aging. 810 Alzheimer's Disease Neuroimaging Initiative (ADNI) participants, including healthy elders and those with mild cognitive impairment (MCI) and Alzheimer's Disease (AD), received T1-weighted and T2-weighted MRI at their baseline evaluation. TIV was automatically estimated from T1-weighted images using FreeSurfer version 4.3 (T1TIV), and an automated active contour method was used to estimate TIV from T2-weighted images (T2TIV). The correlation between T1TIV and T2TIV was high (0.93), and disagreement was greater for larger heads. However, correcting a FreeSurfer-based measure of total parenchymal volume by dividing it by T2TIV led to stronger expected associations with a standardized measure of cognitive dysfunction (MMSE) in Poisson regression models among individuals with AD (z=1.73 vs. 1.09) and MCI (z=3.15 vs. 2.79) than a corresponding parenchymal volume measure divided by T1TIV. This effect was enhanced when the analysis was restricted to the cases where T1TIV and T2TIV disagreed the most. These findings suggest that T2-based TIV measurements may be higher fidelity than T1-based TIV measurements, leading to greater sensitivity for detecting biologically plausible brain-behavior associations.
Food image analysis for measuring food intake in free living conditions
Measuring the type and amount of food intake of free-living people (i.e., outside controlled clinical research centers) is an important task in nutrition research. One practical method, called the Remote Food Photography Method (RFPM),1 is to provide camera-equipped smartphones to participants, who are trained to take pictures of their foods and send these pictures to the researchers over a wireless network. These pictures can then be analyzed by trained raters to accurately estimate food intake, though the process can be labor intensive. In this paper, we describe a computer vision application that estimates food intake from the pictures captured and sent by participants. We describe the application in detail, including its segmentation, pattern classification, and volume estimation modules, and provide comprehensive experimental results to evaluate its performance.
DEeP random walks
Mandana Javanshir Moghaddam, Abouzar Eslami, Nassir Navab
In this paper, we propose a distance enforced penalized (DEeP) random walks segmentation framework to delineate coupled boundaries by modifying the classical random walks formulation. We take into account the inter-dependencies of the curves and incorporate the associated distances into the weight function of the conventional random walker. This effectively leverages the segmentation of weaker boundaries, guided by their stronger counterparts, which is the main advantage over classical random walks techniques, where the weight function depends only on intensity differences between connected pixels, resulting in unfavorable outcomes for poorly contrasted images. We first applied the developed algorithm to synthetic data and then to cardiac magnetic resonance (MR) images for the detection of myocardium borders. We obtained encouraging results and observed that the proposed algorithm prevents the epicardial border from leaking into the right ventricle or crossing back into the endocardial border, which is often observed when the conventional random walker is used. We applied our method to forty cardiac MR images and quantified the results against corresponding manually traced borders as ground truth. We found Dice coefficients of 70% ± 14% and 43% ± 14% for the DEeP random walks and the conventional approach, respectively.
Analysis of brain white matter hyperintensities using pattern recognition techniques
Mariana Bento, Letícia Rittner, Simone Appenzeller, et al.
The brain white matter is responsible for the transmission of electrical signals through the central nervous system. Lesions in the brain white matter, called white matter hyperintensities (WMH), can cause a significant functional deficit. WMH are commonly seen in normal aging, but also in a number of neurological and psychiatric disorders. We propose an automatic method for WMH analysis that distinguishes regions of interest between normal and non-normal white matter (identification task) and also distinguishes different types of lesions based on their etiology: demyelinating or ischemic (classification task). The method combines texture analysis with the use of classifiers, such as the Support Vector Machine (SVM), Nearest Neighbor (1NN), Linear Discriminant Analysis (LDA), and Optimum Path Forest (OPF). Experiments with real brain MRI data showed that the proposed method is suitable for identifying and classifying the brain lesions.
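A minimal sketch of a texture-plus-classifier pipeline in the spirit of the above, assuming much simpler patch statistics (mean, standard deviation, intensity range, gradient energy) than the paper's texture descriptors and an SVM for the identification task. The synthetic patches and feature choices are illustrative.

```python
# Sketch of a texture + classifier pipeline: simple patch statistics stand in
# for the paper's texture descriptors, and an SVM separates two region classes.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def patch_features(patch):
    gy, gx = np.gradient(patch.astype(float))
    return np.array([patch.mean(),
                     patch.std(),
                     patch.max() - patch.min(),
                     np.mean(gx ** 2 + gy ** 2)])   # gradient energy

# synthetic "normal" (smooth) and "lesion" (bright, textured) patches
rng = np.random.default_rng(5)
normal = [rng.normal(100, 2, (16, 16)) for _ in range(100)]
lesion = [rng.normal(130, 10, (16, 16)) for _ in range(100)]

X = np.array([patch_features(p) for p in normal + lesion])
y = np.array([0] * len(normal) + [1] * len(lesion))

clf = SVC(kernel='rbf', C=1.0, gamma='scale')
print(cross_val_score(clf, X, y, cv=5).mean())      # identification accuracy
```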
An information theoretic approach to automated medical image segmentation
Enrique Corona, Jason E. Hill, Brian Nutter, et al.
Automated segmentation of medical images is a challenging problem. The number of segments in a medical image may be unknown a priori, due to the presence or absence of pathological anomalies. Some unsupervised learning techniques founded on information theory concepts may provide a solid approach to solving this problem. We have developed the Improved “Jump” Method (IJM), a technique that efficiently finds a suitable number of clusters representing different tissue characteristics in a medical image. IJM works by optimizing an objective function that quantifies the quality of particular cluster configurations. Recent developments involving interesting relationships between Spectral Clustering (SC) and kernel Principal Component Analysis (kPCA) are used to extend IJM to the non-linear domain. This novel SC approach maps the data to a new space where points belonging to the same cluster are collinear if the parameters of a Radial Basis Function (RBF) kernel are adequately selected. After projecting these points onto the unit sphere, IJM measures the quality of different cluster configurations, yielding an algorithm that simultaneously selects the number of clusters and the RBF kernel parameter. Validation of this method is sought via segmentation of MR brain images in a combination of all major modalities. Such labeled MRI datasets serve as benchmarks for any segmentation algorithm. The effectiveness of the nonlinear IJM is demonstrated on the segmentation of uterine cervix color images for early identification of cervical neoplasia, as an aid to cervical cancer diagnosis. Studies of segmentation and detection of multiple sclerosis lesions are in progress.
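For context, the sketch below implements the classical "jump" criterion that IJM builds on: k-means distortions are computed for a range of cluster counts, transformed with the usual d_k^(-p/2) rule of thumb, and the k with the largest jump is selected. This is not the improved criterion or the spectral extension described in the paper; the data and k range are illustrative.

```python
# Sketch of the classical "jump" idea for selecting the number of clusters:
# compute a transformed k-means distortion for each candidate k and pick the
# k with the largest jump. Exponent follows the common d_k^(-p/2) heuristic.
import numpy as np
from sklearn.cluster import KMeans

def jump_method(X, k_max=8, seed=0):
    n, p = X.shape
    distortions = []
    for k in range(1, k_max + 1):
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
        distortions.append(km.inertia_ / (n * p))    # average per-dimension distortion
    d = np.asarray(distortions) ** (-p / 2.0)        # transformed distortion
    jumps = np.diff(np.concatenate([[0.0], d]))
    return int(np.argmax(jumps)) + 1                 # k with the largest jump

# usage on three well-separated 2D Gaussian blobs (typically selects k = 3)
rng = np.random.default_rng(6)
X = np.vstack([rng.normal(c, 0.3, (100, 2)) for c in [(0, 0), (3, 0), (0, 3)]])
print(jump_method(X))
```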
Automated segmentation of pulmonary lobes in chest CT scans using evolving surfaces
Pechin Lo, Eva M. van Rikxoort, Fereidoun Abtin, et al.
Segmentation of the pulmonary lobes from chest CT scans is a challenging problem, especially in the presence of incomplete pulmonary fissures. We present an iterative approach for the segmentation of pulmonary lobes via a surface that evolves based on a voxel-based fissure confidence function and a smoothness prior. The surface is constructed such that it separates the whole lung at all times, and is represented as a height map above a 2D reference plane. A surface evolution process is used to fit the surface to a pulmonary fissure in a scan. At each iteration, the height of all points in the map is adjusted such that the overall confidence is maximized, followed by Laplacian smoothing to enforce a smoothness prior on the surface. The proposed method was trained and tuned on 18 CT scans from a clinical trial, and tested on 41 scans of different patients with severe emphysema from another clinical trial. The average overlap ratios of the segmented upper and lower lobes of the left and right lungs are 0.96 and 0.91, respectively, with no manual editing of the major fissures. The average overlap ratio for the right middle lobe is 0.86, where manual selection of the initial lobe was needed for six cases, and seven cases were excluded because the minor fissure was almost entirely invisible in the CT scan.
A multiscale graph cut approach to bright-field multiple cell image segmentation using a Bhattacharyya measure
Soo Min Kang, Justin W. L. Wan
Automatic segmentation of bright-field cell images is important to cell biologists, but is difficult to achieve due to the complex nature of the cells in bright-field images (poor contrast, broken halo, missing boundaries). The standard segmentation techniques, such as the level set method and active contours, are not able to overcome these features of bright-field images. Consequently, poor segmentation results are produced. In this paper, we present a robust segmentation method, which combines the techniques of graph cut, multiresolution, and Bhattacharyya measure, performed in a multiscale framework, to locate multiple cells in bright-field images. The issue of low contrast in bright-field images is addressed by determining the difference in intensity profiles of the cells and the background. The resulting segmentation on the entire image frame provides global information. Then a local segmentation at different regions of interest is performed to obtain finer details of the segmentation result. We illustrate the effectiveness of the method by presenting the segmentation results of C2C12 (muscle) cells in bright-field images.
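A minimal sketch of the Bhattacharyya measure underlying such an energy: the coefficient between the normalized intensity histograms of a candidate foreground region and the background, which a Bhattacharyya-based segmentation seeks to drive toward zero (well-separated distributions). The bin count and toy low-contrast image are illustrative assumptions.

```python
# Sketch of the Bhattacharyya coefficient between the intensity distributions
# of a proposed foreground region and the background. A low value means the
# two distributions are well separated.
import numpy as np

def bhattacharyya_coefficient(fg_values, bg_values, bins=64, value_range=(0.0, 1.0)):
    p, _ = np.histogram(fg_values, bins=bins, range=value_range, density=True)
    q, _ = np.histogram(bg_values, bins=bins, range=value_range, density=True)
    p /= p.sum() + 1e-12
    q /= q.sum() + 1e-12
    return float(np.sum(np.sqrt(p * q)))      # 1 = identical, 0 = disjoint

# usage: a low-contrast bright-field-like image with a faint dark "cell"
rng = np.random.default_rng(7)
image = rng.normal(0.55, 0.05, (128, 128))
yy, xx = np.indices(image.shape)
cell_mask = (xx - 64) ** 2 + (yy - 64) ** 2 <= 20 ** 2
image[cell_mask] -= 0.05                      # subtle intensity difference

bc = bhattacharyya_coefficient(image[cell_mask], image[~cell_mask])
print(bc)
```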
Automatic segmentation of abdominal wall in ventral hernia CT: a pilot study
Zhoubing Xu, Wade M. Allen, Benjamin K. Poulose, et al.
The treatment of ventral hernias (VH) has been a challenging problem for medical care. Repair of these hernias is fraught with failure; recurrence rates ranging from 24-43% have been reported, even with the use of biocompatible mesh. Currently, computed tomography (CT) is used to guide intervention through expert, but qualitative, clinical judgments; notably, quantitative metrics based on image processing are not used. We propose that image segmentation methods that capture the three-dimensional structure of the abdominal wall and its abnormalities will provide a foundation on which to measure geometric properties of hernias and surrounding tissues and, therefore, to optimize intervention. To date, automated segmentation algorithms have not been presented to quantify the abdominal wall and potential hernias. In this pilot study with four clinically acquired CT scans of post-operative patients, we demonstrate a novel approach to geometric classification of the abdominal wall and essential abdominal features (including bony landmarks and skin surfaces). Our approach uses a hierarchical design in which the abdominal wall is isolated in the context of the skin and bony structures using level set methods. All segmentation results were quantitatively validated with surface errors based on manually labeled ground truth. The mean surface error for the outer surface of the abdominal wall was less than 2 mm. This approach establishes a baseline for characterizing the abdominal wall to improve VH care.
Graph cuts based left atrium segmentation refinement and right middle pulmonary vein extraction in C-arm CT
Dong Yang, Yefeng Zheng, Matthias John
Automatic segmentation of the left atrium (LA) with the left atrial appendage (LAA) and the pulmonary vein (PV) trunks is important for intra-operative guidance in radio-frequency catheter ablation to treat atrial fibrillation (AF). Recently, we proposed a model-based method1, 2 for LA segmentation from the C-arm CT images using marginal space learning (MSL).3 However, on some data, the mesh from the model-based segmentation cannot exactly fit the true boundary of the left atrium in the image since the method does not make full use of local voxel-wise intensity information. Furthermore, due to the large variations of the PV drainage pattern, extra right middle pulmonary veins are not included in the LA model. In this paper, a graph-based method is proposed by exploiting the graph cuts method to refine results from the model-based segmentation and extract right middle pulmonary veins. We first build regions of interest to constrain the segmentation. The region growing method is used to construct graphs within the regions of interest for the graph cuts optimization. The graph cuts optimization is then performed and newly segmented foreground voxels are assigned into different parts of the left atrium. For the extraction of right middle pulmonary veins, occasional false positive PVs are removed by examining multiple criteria. Experiments demonstrate that the proposed graph-based method is effective and efficient to improve the LA segmentation accuracy and extract right middle PVs.
Cortical thickness changes related to the processes of maturation and aging in healthy brains
Heitor H. Cunha, Antonio C. Santos, Sara Rosset, et al.
Normal aging is accompanied by global as well as regional structural changes. While these age-related changes in gray matter volume have been extensively studied, less has been done using newer morphological indexes, such as cortical thickness and surface area, and such studies usually focus on subjects older than 19. Here, we analyzed structural images of 143 healthy volunteers, ranging from 6 to 86 years of age, using FreeSurfer to support the parcellation, and propose a way to compute the regional changes in cortical thickness that occur in the human brain from childhood to old age. We separated the whole process into two stages, maturation and aging, and computed the best threshold for each region, allowing us to identify when those processes begin, their velocities, and their relation to some degenerative diseases.
A registration and atlas propagation based framework for automatic whole heart segmentation of CT volumes
Xiahai Zhuang, Jingjing Song, Songhua Zhan, et al.
Cardiac computed tomography (CT) is widely used in clinics for diagnosing heart diseases and assessing the functionality of the heart. It is therefore desirable to achieve fully automatic whole heart segmentation for clinical applications, since manual work can be labor-intensive and subject to bias. However, automating this segmentation is challenging due to the large shape variability of the heart and the poor contrast between substructures, such as those in the right ventricle and right atrium region in CT angiography images. In this work, we develop a fully automatic whole heart segmentation framework for CT volumes. This framework is based on image registration and atlas propagation techniques. We also investigate and compare the segmentation performance of single and multiple atlas propagation and segmentation strategies. In multiple atlas segmentation, a ranking-and-selection scheme is used to identify the best atlas(es) from an atlas pool for an unseen image. The segmentation methods are evaluated using fifteen clinical datasets. The results show that the proposed multiple atlas segmentation method can achieve a mean Dice score of 0.889±0.023 and a mean surface distance error of 1.17±1.39 mm for the automatic whole heart segmentation of seven substructures.
Automatic segmentation of the preterm neonatal brain with MRI using supervised classification
Sabina M. Chiţă, Manon Benders, Pim Moeskops, et al.
Cortical folding begins around 13-14 weeks gestational age, and a qualitative analysis of the cortex around this period is required to observe and better understand the emergence of the folds. A quantitative assessment of cortical folding can be based on the cortical surface area, extracted from segmentations of unmyelinated white matter (UWM), cortical grey matter (CoGM), and cerebrospinal fluid in the extracerebral space (CSF). This work presents a method for automatic segmentation of these tissue types in preterm infants. A set of T1- and T2-weighted images of ten infants scanned at 30 weeks postmenstrual age was used. The reference standard was obtained by manual expert segmentation. The method employs supervised pixel classification in three subsequent stages. The classification is performed based on a set of spatial and texture features. Segmentation results are evaluated in terms of the Dice coefficient (DC), Hausdorff distance (HD), and modified Hausdorff distance (MHD), defined as the 95th percentile of the HD. The method achieved an average DC of 0.94 for UWM, 0.73 for CoGM, and 0.86 for CSF. The average HD and MHD were 6.89 mm and 0.34 mm for UWM, 6.49 mm and 0.82 mm for CoGM, and 7.09 mm and 0.79 mm for CSF, respectively. The presented method can provide volumetric measurements of the segmented tissues and enables quantification of cortical characteristics. Therefore, the method provides a basis for evaluation of the clinical relevance of these biomarkers in the given population.
Multi-organ segmentation from 3D abdominal CT images using patient-specific weighted-probabilistic atlas
Organ segmentation of CT volumes is a basic function of computer-aided diagnosis and surgery-assistance systems. Many of these systems implement organ segmentation methods that are limited to specific organs and that are not robust in dealing with inter-subject differences in organ shape or position. In this paper, we propose an automated method for multi-organ segmentation of abdominal 3D CT volumes using a patient-specific, weighted-probabilistic atlas of organ position. This is achieved in a two-step process. First, we prepare for segmentation by dividing an atlas database into multiple clusters. This is done using pairs of training images and the corresponding manual segmentation data. In the next step, we choose the cluster whose template image is the most similar to the target image. We then weight all of the atlases in the selected cluster by calculating the similarities between the atlases and the target image, to dynamically generate a specific probabilistic atlas for each target image. We use the generated probabilistic atlas in MAP estimation to obtain a rough segmentation result and then refine it using a graph-cut method. Our method can simultaneously segment four organs: the liver, spleen, pancreas, and kidneys. Our weighting scheme greatly reduces segmentation error due to inter-subject differences. We applied our method to 100 CT volumes and showed that it could segment the liver, spleen, pancreas, and kidneys with Dice similarity coefficients of 95.2%, 89.7%, 69.6%, and 89.4%, respectively.
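A rough sketch of the weighting idea, assuming normalized cross correlation as the similarity measure and a soft-max style weighting (both are illustrative choices, not necessarily the paper's): registered atlas label maps are combined with similarity-driven weights into a target-specific probabilistic atlas, which can then be thresholded as a rough MAP-style initialization before refinement.

```python
# Sketch of building a patient-specific probabilistic atlas: registered atlas
# label maps are weighted by the image similarity between each atlas image
# and the target, then averaged. Similarity measure and weighting are assumed.
import numpy as np

def ncc(a, b):
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))

def weighted_probabilistic_atlas(target_img, atlas_imgs, atlas_labels, temperature=0.1):
    """Combine binary atlas label maps using similarity-based weights."""
    sims = np.array([ncc(target_img, img) for img in atlas_imgs])
    weights = np.exp(sims / temperature)
    weights /= weights.sum()
    prob = np.zeros_like(target_img, dtype=float)
    for w, lab in zip(weights, atlas_labels):
        prob += w * lab.astype(float)
    return prob                           # per-voxel probability of the organ

# usage with toy 3D atlases (spheres of slightly different radius)
rng = np.random.default_rng(8)
zz, yy, xx = np.indices((32, 32, 32))
def sphere(r):
    return ((xx - 16) ** 2 + (yy - 16) ** 2 + (zz - 16) ** 2 <= r ** 2)

atlas_labels = [sphere(r) for r in (8, 9, 10)]
atlas_imgs = [lab * 1.0 + rng.normal(0, 0.1, lab.shape) for lab in atlas_labels]
target_img = sphere(9) * 1.0 + rng.normal(0, 0.1, (32, 32, 32))

prob_atlas = weighted_probabilistic_atlas(target_img, atlas_imgs, atlas_labels)
rough_seg = prob_atlas > 0.5              # simple MAP-like decision before refinement
```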
Poster Session: Shape
Shape manifold regression with spherical harmonics for hippocampus shape analysis
Shape regression analysis is a powerful tool to study local shape changes as a function of an independent regressor variable. In this paper, we introduce a spherical harmonic (SPHARM) representation into surface manifold learning and shape regression. We use the root mean square distance (RMSD) to measure the deformation degree of the surface, and find that the deformation degree of the hippocampus increases with age. We also investigate the particular areas of change and discover that the hippocampus shows significant changes in the frontal area and tail area, especially in the CA1 subfield.
Computation on shape manifold for atlas generation: application to whole heart segmentation of cardiac MRI
Xiahai Zhuang, Wenzhe Shi, Haiyan Wang, et al.
In this work, we investigate the computation on a shape manifold for atlas generation and its application to atlas propagation and segmentation. We formulate the computation of the Fréchet mean via constant velocity fields and the Log-Euclidean framework for Nadaraya-Watson kernel regression modeling. In this formulation, we directly compute the Fréchet mean of shapes via fast vectorial operations on the velocity fields. By using an image similarity metric to estimate the distance between shapes in the assumed manifold, we can estimate a close shape for an unseen image using the Nadaraya-Watson kernel regression function. We applied this estimation to generate subject-specific atlases for whole heart segmentation of MRI data. The segmentation results on clinical data demonstrated improved performance compared to existing methods, thanks to the use of subject-specific atlases whose shapes are more similar to the unseen images.
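A minimal sketch of Nadaraya-Watson kernel regression on velocity fields, assuming a Gaussian kernel and toy arrays in place of registration-derived velocity fields and image-similarity distances: the estimated field for an unseen image is the kernel-weighted average of the training fields.

```python
# Sketch of Nadaraya-Watson kernel regression on velocity fields: the
# estimated field for an unseen image is the kernel-weighted average of the
# training velocity fields. Kernel, bandwidth, and data are assumptions.
import numpy as np

def nadaraya_watson_field(distances, velocity_fields, bandwidth=1.0):
    """Kernel-weighted average of velocity fields (vectorized over voxels)."""
    d = np.asarray(distances, dtype=float)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)           # Gaussian kernel weights
    w /= w.sum()
    fields = np.asarray(velocity_fields, dtype=float) # (n_atlases, X, Y, Z, 3)
    return np.tensordot(w, fields, axes=(0, 0))       # weighted mean field

# usage with toy 3D velocity fields and image-similarity-based distances
rng = np.random.default_rng(9)
fields = rng.normal(0, 1, (5, 8, 8, 8, 3))            # five training fields
distances = [0.2, 0.5, 0.9, 1.4, 2.0]                 # distances to the unseen image
estimated = nadaraya_watson_field(distances, fields, bandwidth=0.8)
print(estimated.shape)                                # (8, 8, 8, 3)
```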
Poster Session: Ultrasound
Interactive 3D segmentation method based on uncertain local region updating in hierarchical MRF graph
Sang Hyun Park, Il Dong Yun
In this paper, we present a three-dimensional interactive segmentation method. Unlike most previous interactive methods, which depend largely on user interaction, we exploit prior knowledge from training data to reduce the user effort. Based on this prior knowledge, the most distinguishable parts of an object are automatically segmented, and the labels of some uncertain parts are queried from the user. To systematically model the problem, we combine the hierarchical Markov random field (HMRF) framework with an active learning scheme. The HMRF framework, proposed for the automatic setting, simultaneously reflects the characteristics of local variations and their global smoothness, while the active learning scheme improves the efficiency of the interactive system. We incorporate the active learning strategy into the editing step of the HMRF structure in order to find and modify the uncertain parts after the automatic segmentation. Specifically, the uncertainties of local regions are first computed from the label differences between segmentation candidates. Then, the graph models of the uncertain regions are updated by the user interaction. Since the HMRF structure constrains the smoothness of local regions and the global optimality, the segmentation is updated as a whole even though only a small number of local parts are edited. The proposed method is applied to the segmentation of the femur and tibia in knee MR images for evaluation. The evaluation demonstrates that the proposed method improves the segmentation efficiency more than the graph cut based method or manual editing.
Prostate segmentation in 3D TRUS using convex optimization with shape constraint
Wu Qiu, Jing Yuan, Eranga Ukwatta, et al.
An efficient and accurate segmentation of 3D end-firing transrectal ultrasound (TRUS) images plays a central role in the planning and treatment of 3D TRUS guided prostate biopsy. In this paper, we propose a novel convex optimization based approach to delineate prostate boundaries from 3D TRUS images. The technique makes use of the approximate rotational symmetry of prostate shapes and reduces the original 3D segmentation problem to a sequence of simple 2D segmentation sub-problems by means of rotationally reslicing the 3D TRUS images. In practice, this significantly decreases the computational load, facilitates introducing learned shape information and improves segmentation efficiency and accuracy. For each 2D resliced frame, we introduce a new convex optimization based contour evolution method to locate the 2D slicewise prostate boundary subject to the additional shape constraint. The proposed contour evolution method provides a fully time implicit scheme to move the contour to its globally optimal position at each discrete time, which allows a large evolving time step-size to accelerate convergence. Moreover, the proposed algorithm is implemented on a GPU to achieve a high performance. Quantitative validations on twenty 3D TRUS patient prostate images demonstrate that the proposed approach can obtain a DSC of 93.7 ± 2.5%, a sensitivity of 91.2 ± 3.1%, a MAD of 1.37 ± 0.3 mm, and a MAXD of 3.02 ± 0.44 mm. The mean segmentation time for the dataset was 18.3 ± 2.5 s, in addition to 25 s for initialization. Our proposed method exhibits the advantages of accuracy, efficiency and robustness compared to the level set and active contour based methods.
A robust model-based approach to detect the mitral annulus in 3D ultrasound
Bastian Graser, Diana Wald, Mathias Seitel, et al.
Over 40,000 mitral reconstructions are performed each year in the United States. To ensure a successful and durable outcome of the operation, detailed quantification of the mitral annulus is helpful. However, manual measurement is time consuming and hard to perform in clinical routine. We propose a fast semi-automatic method to create a precise model of the mitral annulus from 3D ultrasound data. The basic idea is to combine image information with anatomical knowledge in the form of a standard mitral annulus model. This way, the method can adjust to the individual image data and still cope with strong artifacts and incomplete images. By comparing the resulting models to manually created ground truth data from 39 patients, we identified a mean error of 3.49 mm. This is lower than the determined standard deviation of the expert (4.13 mm) and confirms the accuracy of the proposed method. The overall time to create a mitral annulus model from 3D ultrasound image data is less than a minute. Due to its speed, accuracy and robustness, the method is suitable for clinical routine use.
Segmentation of the left heart ventricle in ultrasound images using a region based snake
Matilda Landgren, Niels Christian Overgaard, Anders Heyden
Ultrasound imaging of the heart is a non-invasive method widely used for different applications. One of them is measuring the blood volume in the left ventricle at different stages of the heart cycle. This demands a proper segmentation of the left ventricle, and a (semi-)automated method would decrease intra-observer variability as well as workload. This paper presents a semi-automated segmentation method that uses a region based snake. To avoid unwanted concavities in the segmentations due to the cardiac valve, we use two anchor points in the snake, located to the left and to the right of the cardiac valve, respectively. To enable segmentations at different stages of the heart cycle, these anchor points are tracked through the cycle. This tracking is based both on the resemblance of a region around the anchor points and on a prior model of the movement of the anchor points in the y-direction. The region based snake functional is the sum of two terms, a regularizing term and a data term. It is the data term that is region based, since it involves integration over a two-dimensional subdomain of the image plane. A segmentation of the left ventricle is obtained by minimizing the functional, which is done by continuously reshaping the contour until the optimal shape and size are obtained. The developed method shows promising results.
Automatic systole-diastole classification of mitral valve complex from RT-3D echocardiography based on multiresolution processing
Gary K. W. Tsui, Kwan-Yee K. Wong, Alex P. W. Lee
Mitral valve repair is one of the most prevalent operations for various mitral valve conditions. Echocardiography, known for its low cost, non-invasiveness, and speed, is the dominant imaging modality used for carrying out mitral valve condition analysis in both pre-operative and intra-operative examinations. In order to perform analysis on different phases of a cardiac cycle, it is necessary to first classify the echocardiographic data into volumes corresponding to the systole and diastole phases. This often requires tedious manual work. This paper presents a fully automatic method for systole-diastole classification of real-time three-dimensional transesophageal echocardiography (RT-3D-TEE) data. The proposed method first resamples the data with radial cutting planes, segments the mitral valve by thresholding, and removes noise by median filtering. Classification is then carried out based on the number of identified mitral valve regions. A multiresolution processing scheme is proposed to further improve the classification accuracy by aggregating classification results obtained at different image resolution scales. The proposed method was evaluated against the classification results produced by a cardiologist. Experimental results show that the proposed method, without the use of computationally intensive algorithms or any training database, can achieve a classification accuracy of 91.04%.
Learning based ensemble segmentation of anatomical structures in liver ultrasound image
Xuetao Feng, Xiaolu Shen, Qiang Wang, et al.
Automatic segmentation of anatomical structures is crucial for computer-aided diagnosis and image-guided online treatment. In this paper, we present a novel approach for fully automatic segmentation of all anatomical structures of a target liver organ in a coherent framework. Firstly, all regional anatomical structures such as vessels, tumors, the diaphragm and liver parenchyma are detected simultaneously using random forest classifiers. They share the same feature set and classification procedure. Secondly, an efficient region segmentation algorithm is used to obtain the precise shape of these regional structures. It is based on a level set with the proposed active set evolution and multiple-feature handling, which achieves a 10-fold speedup over existing algorithms. Thirdly, the liver boundary curve is extracted via a graph-based model. The segmentation results of the regional structures are incorporated into the graph as constraints to improve robustness and accuracy. Experiments were carried out on an ultrasound image dataset with 942 images captured, with liver motion and deformation, from a number of different views. Quantitative results demonstrate the efficiency and effectiveness of the proposed algorithm.
Gland segmentation of breast ultrasound exams
Rui Braz, J. Moutinho, Mário Freire, et al.
A novel approach for the mammary gland region segmentation of Breast Ultrasound exams is proposed. This method is important because the mammary gland is the Region of Interest for pathological diagnosis. Five different pre-processing methods that enhance the transition areas or remove the speckle of the ultrasound images were selected: Non-linear diffusion, Speckle Reducing Anisotropic Diffusion, Entropy filter, Laplacian filter and Homomorphic filter. The results of these processing methods define the features that are used as descriptors for a K-Means and SVM classifier or as weak classifiers by an Adaboost classifier. The pixel classification results in a rough tissue segmentation. A new method is proposed to interpolate the classification results into an accurate tissue separation line, using graph theory. This step overcomes the problem of the discontinuities between the different classified areas. The developed segmentation method was applied to a database with 61 images, 34 without masses and 27 with masses collected using digital support, and segmented by an experienced medical oncologist in Centro Hospitalar da Cova da Beira in Portugal. The presented results were obtained using cross-validation.
3D seam selection techniques with application to improved ultrasound mosaicing
In this work we introduce two different techniques for the global optimization of surfaces and apply them to the task of finding the optimal stitching seam between neighboring and overlapping 3D ultrasound volumes. Existing techniques for US mosaicing, based on interpolation or planar seams, introduce artifacts into the composite volume, especially when using a large number of clinical scans. Our first method models the seam as a B-spline surface and treats its calculation as a shape optimization problem. In this case, the optimal location of the surface-defining control points is a large-scale constrained optimization problem, which is solved using a cooperatively coevolving particle swarm based approach. The second method treats the seam selection as a voxel labeling problem, where each voxel in the composite volume is labeled with its respective source volume. Therefore, if we have N volumes, each voxel in the composite volume may be assigned one of the N labels. The optimal labeling, which implicitly defines a seam, minimizes the intensity and gradient differences between adjacent volumes. The formulation is optimized using graph cuts, which guarantees that a global minimum is achieved due to the submodularity of the energy function. The final composite volume is constructed voxel-wise by taking the value of the source volume designated by each voxel's label. Our application of this procedure is the construction of composite ultrasound image volumes for incorporation into an ultrasound simulator. These methods are validated on clinical US data acquired from obstetrics patients.
Semiautomatic segmentation of atherosclerotic carotid artery lumen using 3D ultrasound imaging
Md. Murad Hossain, Khalid AlMuhanna, Limin Zhao, et al.
Carotid atherosclerosis is a major cause of stroke. Imaging and monitoring plaque progression in 3D can better classify disease severity and potentially identify plaque vulnerability to rupture. In this study, we propose to validate a new semiautomatic carotid lumen segmentation algorithm based on 3D ultrasound imaging that is designed to work in the presence of poor boundary contrast and complex 3D lumen geometries. Our algorithm uses a distance regularized level set evolution with novel initialization and stopping criteria to localize the lumen-intima boundary (LIB). The external energy used in the level set method is a combination of region-based and edge-based energy. Initialization of the LIB segmentation is first done in the longitudinal slice, where the geometry of the carotid bifurcation is best visualized, and then reconstructed in the cross-sectional slice to guide the 3D initialization. Manual initialization of the contour is done only on the starting slice of the common carotid, the bifurcation, and the internal and external carotid arteries. Initialization of the other slices is done by eroding the segmentation of the previous slice. The user also initializes the boundary points for every slice. A combination of changes in the modified Hausdorff distance (MHD) between contours at successive iterations and a stopping boundary formed from the initial boundary points is used as a stopping criterion to avoid over- or under-segmentation. The proposed algorithm is evaluated against manually segmented boundaries by calculating the Dice similarity coefficient (DSC), HD, and MHD in the common carotid (C), carotid bulb (B), and internal carotid (I) regions to better characterize accuracy. Results from five subjects with <50% carotid stenosis showed good agreement with manual segmentation; between the semiautomatic algorithm and manual segmentations: DSC (C: 86.49±9.38, B: 82.21±8.49, I: 78.96±7.55), MHD (C: 3.79±1.64, B: 4.09±1.71, I: 4.12±2.01), HD (C: 8.07±2.59, B: 10.09±3.95, I: 11.28±5.06); and between observers: DSC (C: 88.31±5, B: 82.45±7.57, I: 82.03±8.83), MHD (C: 3.77±2.09, B: 4.32±1.88, I: 4.56±2.24), HD (C: 7.61±2.67, B: 10.22±4.30, I: 10.63±4.94). This method is a first step towards achieving full 3D characterization of plaque progression, and is currently being evaluated in a longitudinal study of asymptomatic carotid stenosis.