Proceedings Volume 7259

Medical Imaging 2009: Image Processing

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 28 February 2009
Contents: 20 Sessions, 176 Papers, 0 Presentations
Conference: SPIE Medical Imaging 2009
Volume Number: 7259

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 7259
  • Segmentation I
  • Statistical Models
  • Statistical Methods
  • Registration I
  • Registration II
  • Motion Analysis
  • Vascular Image Processing
  • Atlas-based Methods
  • Keynote and Diffusion Tensor Imaging
  • Registration III
  • Segmentation II
  • Posters: Classification
  • Posters: Diffusion Tensor Imaging
  • Posters: Functional Imaging
  • Posters: Filtering, Restoration, and Enhancement
  • Posters: Motion
  • Posters: Registration
  • Posters: Segmentation
  • Posters: Shape and Texture
Front Matter: Volume 7259
Front Matter: Volume 7259
This PDF file contains the front matter associated with SPIE Proceedings Volume 7259, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
Segmentation I
Hierarchical parsing and semantic navigation of full body CT data
Sascha Seifert, Adrian Barbu, S. Kevin Zhou, et al.
Whole body CT scanning is a common diagnostic technique for discovering early signs of metastasis or for differential diagnosis. Automatic parsing and segmentation of multiple organs and semantic navigation inside the body can help the clinician in efficiently obtaining an accurate diagnosis. However, dealing with the large amount of data of a full body scan is challenging, and techniques are needed for the fast detection and segmentation of organs, e.g., heart, liver, kidneys, bladder, prostate, and spleen, and body landmarks, e.g., bronchial bifurcation, coccyx tip, sternum, lung tips. Solving the problem becomes even more challenging if partial body scans are used, where not all organs are present. We propose a new approach to this problem, in which a network of 1D and 3D landmarks is trained to quickly parse the 3D CT data and estimate which organs and landmarks are present as well as their most probable locations and boundaries. Using this approach, the segmentation of seven organs and detection of 19 body landmarks can be obtained in about 20 seconds with state-of-the-art accuracy; the approach has been validated on 80 full or partial body CT scans.
Probabilistic pairwise Markov models: application to prostate cancer detection
Markov Random Fields (MRFs) provide a tractable means for incorporating contextual information into a Bayesian framework. This contextual information is modeled using multiple local conditional probability density functions (LCPDFs) which the MRF framework implicitly combines into a single joint probability density function (JPDF) that describes the entire system. However, only LCPDFs of certain functional forms are consistent, meaning they reconstitute a valid JPDF. These forms are specified by the Gibbs-Markov equivalence theorem which indicates that the JPDF, and hence the LCPDFs, should be representable as a product of potential functions (i.e. Gibbs distributions). Unfortunately, potential functions are mathematical abstractions that lack intuition; and consequently, constructing LCPDFs through their selection becomes an ad hoc procedure, usually resulting in generic and/or heuristic models. In this paper we demonstrate that under certain conditions the LCPDFs can be formulated in terms of quantities that are both meaningful and descriptive: probability distributions. Using probability distributions instead of potential functions enables us to construct consistent LCPDFs whose modeling capabilities are both more intuitive and expansive than typical MRF models. As an example, we compare the efficacy of our so-called probabilistic pairwise Markov models (PPMMs) to the prevalent Potts model by incorporating both into a novel computer aided diagnosis (CAD) system for detecting prostate cancer in whole-mount histological sections. Using the Potts model the CAD system is able to detect cancerous glands with a specificity of 0.82 and sensitivity of 0.71; its area under the receiver operator characteristic (AUC) curve is 0.83. If instead the PPMM is employed, the sensitivity (with specificity held fixed) and AUC increase to 0.77 and 0.87, respectively.
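The Potts baseline that the abstract compares against can be sketched compactly. The following is an illustrative iterated-conditional-modes (ICM) smoother with a Potts pairwise prior on a 2D label grid; it is not the paper's CAD system, and the 4-neighbourhood, the unary costs, and `beta` are assumptions chosen only for the example.

```python
import numpy as np

def icm_potts(unary, beta=1.0, n_iter=5):
    """Iterated conditional modes with a Potts pairwise prior.

    unary: (H, W, K) per-pixel negative log-likelihoods for K classes.
    beta:  Potts penalty for disagreeing with each 4-neighbour.
    Returns an (H, W) label map.
    """
    H, W, K = unary.shape
    labels = unary.argmin(axis=2)  # independent maximum-likelihood start
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                cost = unary[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        # Potts term: +beta for every label that disagrees
                        # with this neighbour's current label
                        cost += beta * (np.arange(K) != labels[ni, nj])
                labels[i, j] = cost.argmin()
    return labels
```

With a moderate `beta`, an isolated pixel whose unary term weakly favours a different class is pulled back to agree with its neighbours, which is exactly the smoothing behaviour the Potts prior contributes.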
Tissue probability map constrained CLASSIC for increased accuracy and robustness in serial image segmentation
Traditional fuzzy clustering algorithms have been successfully applied in MR image segmentation for quantitative morphological analysis. However, the clustering results might be biased due to the variability of tissue intensities and anatomical structures. For example, clustering-based algorithms tend to over-segment white matter tissues of MR brain images. To solve this problem, we introduce a tissue probability map constrained clustering algorithm and apply it to serial MR brain image segmentation for longitudinal study of human brains. The tissue probability maps consist of segmentation priors obtained from a population and reflect the probability of different tissue types. More accurate image segmentation can be achieved by using these segmentation priors in the clustering algorithm. Experimental results of both simulated longitudinal MR brain data and the Alzheimer's Disease Neuroimaging Initiative (ADNI) data using the new serial image segmentation algorithm in the framework of CLASSIC show more accurate and robust longitudinal measures.
WERITAS: weighted ensemble of regional image textures for ASM segmentation
Robert Toth, Scott Doyle, Mark Rosen, et al.
In this paper we present WERITAS, which is based in part on the traditional Active Shape Model (ASM) segmentation system. WERITAS generates multiple statistical texture features, and finds the optimal weighted average of those texture features by maximizing the correlation between the Euclidean distance to the ground truth and the Mahalanobis distance to the training data. The weighted average is used in a multi-resolution segmentation system to more accurately detect the object border. A rigorous evaluation was performed on over 200 clinical images comprising prostate and breast images from 1.5 Tesla and 3 Tesla MRI machines via 6 distinct metrics. WERITAS was tested against a traditional multi-resolution ASM in addition to an ASM system which uses a plethora of random features, to determine whether the selection of features is improving the results rather than simply the use of multiple features. The results indicate that WERITAS outperforms all other methods to a high degree of statistical significance. For 1.5T prostate MRI images, the overlap from WERITAS is 83%, the overlap from the random features is 81%, and the overlap from the traditional ASM is only 66%. In addition, using 3T prostate MRI images, the overlap from WERITAS is 77%, the overlap from the random features is 54%, and the overlap from the traditional ASM is 59%, suggesting the usefulness of WERITAS. The only metrics in which WERITAS was outperformed did not hold any degree of statistical significance. WERITAS is a robust, efficient, and accurate segmentation system with a wide range of applications.
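The Mahalanobis distance to the training data that WERITAS correlates with the ground-truth distance is a standard quantity; a minimal sketch follows. The training matrix and query vector here are placeholders, not the paper's texture features.

```python
import numpy as np

def mahalanobis(x, train):
    """Mahalanobis distance of a feature vector x to a training set.

    train: (N, d) matrix, one training feature vector per row.
    The distance accounts for the covariance of the training data,
    unlike the plain Euclidean distance.
    """
    mu = train.mean(axis=0)
    cov = np.cov(train, rowvar=False)
    diff = x - mu
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))
```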
Automatic left ventricle detection in MRI images using marginal space learning and component-based voting
Magnetic resonance imaging (MRI) is currently the gold standard for left ventricle (LV) quantification. Detection of the LV in an MRI image is a prerequisite for functional measurement. However, due to the large variations in orientation, size, shape, and image intensity of the LV, automatic detection of the LV is still a challenging problem. In this paper, we propose to use marginal space learning (MSL) to exploit the recent advances in learning discriminative classifiers. Instead of learning a monolithic classifier directly in the five dimensional object pose space (two dimensions for position, one for orientation, and two for anisotropic scaling) as full space learning (FSL) does, we train three detectors, namely, the position detector, the position-orientation detector, and the position-orientation-scale detector. Comparative experiments show that MSL significantly outperforms FSL in both speed and accuracy. Additionally, we also detect several LV landmarks, such as the LV apex and two annulus points. If we combine the detected candidates from both the whole-object detector and landmark detectors, we can further improve the system robustness. A novel voting based strategy is devised to combine the detected candidates by all detectors. Experiments show component-based voting can reduce the detection outliers.
Statistical Models
Model driven quantification of left ventricular function from sparse single-beat 3D echocardiography
Meng Ma, Marijn van Stralen, Johan H. C. Reiber, et al.
This paper presents a novel model based segmentation technique for quantification of Left Ventricular (LV) function from sparse single-beat 3D echocardiographic data acquired with a Fast Rotating Ultrasound (FRU) transducer. This transducer captures cardiac anatomy in a sparse set of radially sampled, curved cross sections within a single cardiac cycle. The method employs a 3D Active Shape Model of the Left Ventricle (LV) in combination with local appearance models as prior knowledge to steer the segmentation. A set of local appearance patches generate the model update points for fitting the model to the LV in the curved FRU cross-sections. Updates are then propagated over the dense 3D model mesh to overcome correspondence problems due to the data sparsity, whereas the 3D Active Shape Model serves to retain the plausibility of the generated shape. Leave-one-out cross validation was carried out on single-beat FRU data from 28 patients suffering from various cardiac pathologies. Error measurements against expert-annotated contours yielded an average point-to-point distance of around 3.8 ± 2.4 mm and point-to-surface distance of 2.0 ± 1.8 mm and average volume estimation error of around 9 ± 7%. Robustness tests with respect to different model initializations showed acceptable performance for initial positions within a range of 22 mm for displacement and 12° for orientation. This demonstrates that the method combines robustness with respect to initialization with an acceptable accuracy, while using sparse single-beat FRU data.
Shape-based diagnosis of the aortic valve
Razvan Ioan Ionasec, Alexey Tsymbal, Dime Vitanovski, et al.
Disorders of the aortic valve represent a common cardiovascular disease and an important public-health problem worldwide. Pathological valves are currently determined from 2D images through elaborate qualitative evaluations and complex measurements, potentially inaccurate and tedious to acquire. This paper presents a novel diagnostic method, which identifies diseased valves based on 3D geometrical models constructed from volumetric data. A parametric model, which includes relevant anatomic landmarks as well as the aortic root and leaflets, represents the morphology of the aortic valve. Recently developed robust segmentation methods are applied to estimate the patient-specific model parameters from end-diastolic cardiac CT volumes. A discriminative distance function, learned from equivalence constraints in the product space of shape coordinates, determines the corresponding pathology class based on the shape information encoded by the model. Experiments on a heterogeneous set of 63 patients affected by various diseases demonstrated the performance of our method with 94% correctly classified valves.
RABBIT: rapid alignment of brains by building intermediate templates
This paper proposes a brain image registration algorithm, called RABBIT, which achieves fast and accurate image registration by using an intermediate template generated by a statistical shape deformation model during the image registration procedure. The statistical brain shape deformation information is learned by means of principal component analysis (PCA) from a set of training brain deformations, each of them linking a selected template to an individual brain sample. Using the statistical deformation information, the template image can be registered to a new individual image by optimizing a statistical deformation model with a small number of parameters, thus generating an intermediate template very close to the individual brain image. The remaining shape difference between the intermediate template and the individual brain is then minimized by a general registration algorithm, such as HAMMER. With the help of the intermediate template, the registration between the template and individual brain images can be achieved fast and with similar registration accuracy as HAMMER. The effectiveness of the RABBIT has been evaluated by using both simulated atrophy data and real brain images. The experimental results show that RABBIT can achieve over five times speedup, compared to HAMMER, without losing any registration accuracy or statistical power in detecting brain atrophy.
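The PCA-based statistical deformation model at the heart of RABBIT can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: deformation fields are flattened to rows of a matrix, PCA learns a few modes, and an "intermediate template" deformation is synthesized from a small coefficient vector.

```python
import numpy as np

def pca_deform_model(deformations, n_modes=2):
    """Learn a low-dimensional deformation model by PCA.

    deformations: (N, d) matrix, each row a flattened training
    deformation field linking the template to one subject.
    Returns the mean deformation and the top n_modes modes.
    """
    X = np.asarray(deformations, dtype=float)
    mean = X.mean(axis=0)
    # Right singular vectors of the centred data are the PCA modes.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_modes]

def synthesize(mean, modes, coeffs):
    """Deformation generated from a few mode coefficients: the
    intermediate template is obtained by warping with this field."""
    return mean + coeffs @ modes
```

Optimizing only `coeffs` (a handful of parameters instead of a dense field) is what makes the intermediate-template step fast; the residual difference is then left to a general registration algorithm such as HAMMER.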
Bayes estimation of shape model with application to vertebrae boundaries
Alessandro Crimi, Anarta Ghosh, Jon Sporring, et al.
Estimation of the covariance matrix is a pivotal step in landmark based statistical shape analysis. For high dimensional representation of the shapes, often the number of available shape examples is far too small, so the maximum likelihood (ML) estimate of the covariance matrix is rank deficient and the eigenvectors corresponding to its small eigenvalues are unreliable. We take a Bayesian approach to the problem and show how the prior information can be used to estimate the covariance matrix from a small number of samples in a high dimensional shape space. The performance of the proposed method is evaluated in the context of reconstructions of high resolution vertebral boundary from an incomplete and lower dimensional representation. The algorithm performs better than the ML method, especially for small numbers of samples in the training set. The superiority of the proposed Bayesian approach was also observed when noisy incomplete lower dimensional representation of the vertebral boundary was used in the reconstruction algorithm. Moreover, unlike other commonly used approaches, e.g., regularization, the presented method does not depend heavily on the choice of the parameter values.
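The rank-deficiency problem described above, and the way a prior repairs it, can be illustrated with a generic shrinkage estimator. This is only a stand-in for the paper's Bayes estimator: blending the singular sample covariance with a full-rank prior restores invertibility.

```python
import numpy as np

def shrinkage_cov(X, prior, alpha=0.5):
    """Blend the sample covariance of (N, d) data X with a prior
    covariance matrix.

    When N - 1 < d the sample covariance is rank deficient; mixing in
    a full-rank prior (here with weight alpha) yields an invertible
    estimate, a simple stand-in for a Bayesian (MAP) estimate.
    """
    S = np.cov(X, rowvar=False)
    return alpha * prior + (1 - alpha) * S
```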
Automated vertebra identification in CT images
In this paper, we describe and compare methods for automatically identifying individual vertebrae in arbitrary CT images. The identification is an essential precondition for a subsequent model-based segmentation, which is used in a wide field of orthopedic, neurological, and oncological applications, e.g., spinal biopsies or the insertion of pedicle screws. Since adjacent vertebrae show similar characteristics, an automated labeling of the spine column is a very challenging task, especially if no surrounding reference structures can be taken into account. Furthermore, vertebra identification is complicated due to the fact that many images are bounded to a very limited field of view and may contain only few vertebrae. We propose and evaluate two methods for automatically labeling the spine column by evaluating similarities between given models and vertebral objects. In one method, object boundary information is taken into account by applying a Generalized Hough Transform (GHT) for each vertebral object. In the other method, appearance models containing mean gray value information are registered to each vertebral object using cross and local correlation as similarity measures for the optimization function. The GHT is advantageous in terms of computational performance but cuts back concerning the identification rate. A correct labeling of the vertebral column has been successfully performed on 93% of the test set consisting of 63 disparate input images using rigid image registration with local correlation as similarity measure.
GC-ASM: synergistic integration of active shape modeling and graph-cut methods
Xinjian Chen, Jayaram K. Udupa, Drew A. Torigian, et al.
Image segmentation methods may be classified into two categories: purely image based and model based. Each of these two classes has its own advantages and disadvantages. In this paper, we propose a novel synergistic combination of the image based graph-cut (GC) methods with the model based ASM methods to arrive at the GC-ASM method. GC-ASM effectively combines the rich statistical shape information embodied in ASM with the globally optimal delineation capability of the GC method. We propose a new GC cost function, which effectively integrates the specific image information with the ASM shape information. The ASM results are fully utilized to help GC in several ways: (1) For automatically selecting seeds to do GC segmentation, thus helping GC with object recognition; (2) For refining the parameters of the GC algorithm from the ASM result; (3) For bringing object shape information into the GC cost computation. (4) In turn, for using the cost of GC result to improve the ASM's object recognition process. The proposed methods are implemented to operate on 2D images and tested on a clinical abdominal CT data set. The results show: (1) GC-ASM becomes largely independent of initialization. (2) The number of landmarks can be reduced by a factor of 3 in GC-ASM over that in ASM. (3) The accuracy of segmentation via GC-ASM is considerably better than that of ASM.
Statistical Methods
Comparing the sensitivity of wavelets, Minkowski functionals, and scaling indices to higher order correlations in MR images of the trabecular bone using surrogates
Christoph Räth, Jan Bauer, Dirk Müller, et al.
The quantitative characterization of tissue probes as visualized by CT or MR is of great interest in many fields of medical image analysis. A proper quantification of the information content in such images can be realized by calculating well-suited texture measures, which are able to capture the main characteristics of the image structures under study. Using test images showing the complex trabecular structure of the inner bone of a healthy and of an osteoporotic patient, we propose and apply a novel statistical framework with which one can systematically assess the sensitivity of the chosen texture measures to higher order correlations (HOCs), i.e. correlations not being captured by linear methods like the power spectrum. To this end, so-called surrogate images are generated, in which the linear properties are preserved, while parts or all higher order correlations are wiped out. This is achieved by dedicated Fourier phase shuffling techniques. We compare three commonly used classes of texture measures, namely spherical Mexican hat wavelets (SMHW), Minkowski functionals (MF) and scaling indices (SIM). While the SMHW yield only very poor sensitivity to HOCs in both cases, the MF and SIM could detect the HOCs very well with significance up to S = 320σ (MF) and S = 150σ (SIM). The relative performance of the MF and SIM differed significantly for the healthy and osteoporotic bone. Thus, MF and SIM are preferable for a proper quantification of the bone structure. They depict complementary aspects of it and thus should both be used for characterising the trabecular bone.
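The surrogate construction described above, preserving the Fourier amplitudes (linear properties) while destroying the phase structure (higher order correlations), can be sketched with FFTs. This is a generic phase-randomisation surrogate, not the paper's exact shuffling scheme; taking the phases from the FFT of real white noise keeps the spectrum Hermitian, so the surrogate stays real-valued.

```python
import numpy as np

def surrogate(image, seed=None):
    """Surrogate image: same Fourier amplitudes, randomised phases.

    The phases of the FFT of real white noise are Hermitian-symmetric
    by construction, so the inverse transform is (up to round-off) a
    real image with exactly the original power spectrum.
    """
    rng = np.random.default_rng(seed)
    F = np.fft.fft2(image)
    noise = rng.standard_normal(image.shape)
    phase = np.angle(np.fft.fft2(noise))
    return np.real(np.fft.ifft2(np.abs(F) * np.exp(1j * phase)))
```

Any texture measure that scores the original image and its surrogates differently is, by construction, sensitive to information beyond the power spectrum.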
A similarity retrieval method for functional magnetic resonance imaging (fMRI) statistical maps
R. F. Tungaraza, J. Guan, S. Rolfe, et al.
We propose a method for retrieving similar fMRI statistical images given a query fMRI statistical image. Our method thresholds the voxels within those images and extracts spatially distinct regions from the voxels that remain. Each region is defined by a feature vector that contains the region centroid, the region area, the average activation value for all the voxels within that region, the variance of those activation values, the average distance of each voxel within that region to the region's centroid, and the variance of the voxel's distance to the region's centroid. The similarity between two images is obtained by the summed minimum distance of their constituent feature vectors. Results on a dataset of fMRI statistical images from experiments involving distinct cognitive tasks are shown.
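The summed minimum distance between two images' sets of region feature vectors can be sketched directly. The feature vectors here are placeholders for the centroid/area/activation descriptors listed above; summing the nearest-neighbour distances in both directions makes the dissimilarity symmetric.

```python
import numpy as np

def summed_min_distance(A, B):
    """Dissimilarity between two images described by region features.

    A: (n, d) and B: (m, d), one row per activation region. Each
    region is matched to its nearest region in the other image, and
    the minimum distances are summed in both directions.
    """
    # (n, m) matrix of pairwise Euclidean distances
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return D.min(axis=1).sum() + D.min(axis=0).sum()
```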
LSTGEE: longitudinal analysis of neuroimaging data
Longitudinal imaging studies are essential to understanding the neural development of neuropsychiatric disorders, substance use disorders, and the normal brain. Using appropriate image processing and statistical tools to analyze the imaging, behavioral, and clinical data is critical for optimally exploring and interpreting the findings from those imaging studies. However, the existing image processing and statistical methods for analyzing longitudinal imaging measures are primarily developed for cross-sectional neuroimaging studies. Simply applying these cross-sectional tools to longitudinal imaging studies will significantly decrease the statistical power of longitudinal studies in detecting subtle changes of imaging measures and the causal role of time-dependent covariates in disease process. The main objective of this paper is to develop a longitudinal statistics toolbox, called LSTGEE, for the analysis of neuroimaging data from longitudinal studies. We develop generalized estimating equations for jointly modeling imaging measures with behavioral and clinical variables from longitudinal studies. We develop a test procedure based on a score test statistic and a resampling method to test linear hypotheses of unknown parameters, such as associations between brain structure and function and covariates of interest, such as IQ, age, gene, diagnostic groups, and severity of disease. We demonstrate the application of our statistical methods to the detection of the changes of the fractional anisotropy across time in a longitudinal neonate study. Particularly, our results demonstrate that the use of longitudinal statistics can dramatically increase the statistical power in detecting the changes of neuroimaging measures. The proposed approach can be applied to longitudinal data with multiple outcomes and accommodate incomplete and unbalanced data, i.e., subjects with different numbers of measurements.
Medical x-ray image enhancement by intra-image and inter-image similarity
André Gooßen, Thomas Pralow, Rolf-Rainer Grigat
In medical X-ray examinations, images suffer considerably from severe, signal-dependent noise as a result of the effort to keep applied doses as low as possible. This noise can be seen as an additive signal that degrades image quality and might disguise valuable content. Lost information has to be restored in a post-processing step. The crucial aspect of filtering medical images is preservation of edges and texture on the one hand and removing noise on the other hand. Classical smoothing filters, such as Gaussian or box filtering, are data-independent and equally blur the image content. State-of-the-art methods currently make use of local neighborhoods or global image statistics. However, exploiting global self-similarity within an image and inter-image similarity for subsequent frames of a sequence bears an unused potential for image restoration. We introduce a non-local filter with data-dependent response that closes the gap between local filtering and stochastic methods. The filter is based on the non-local means approach proposed by Buades et al. and is similar to bilateral filtering. In order to apply this approach to medical data, we heavily reduce the computational costs incurred by the original approach. Thus it is possible to interactively enhance single frames or selected regions of interest within a sequence. The proposed filter is applicable for time-domain filtering without the need for accurate motion estimation. Hence it can be seen as a general solution for filtering 2D as well as 2D+t X-ray image data.
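The non-local means idea the filter builds on can be sketched for a single pixel. This is the textbook formulation (Buades et al.), without the paper's acceleration or the inter-frame extension; patch radius, search radius, and the smoothing parameter `h` are illustrative choices.

```python
import numpy as np

def nlmeans_pixel(img, i, j, patch=1, search=3, h=0.5):
    """Non-local means estimate for pixel (i, j).

    A weighted average over a search window, where the weight of each
    pixel depends on how similar its surrounding patch is to the patch
    around (i, j): self-similar structures reinforce each other while
    noise averages out.
    """
    H, W = img.shape
    def get_patch(r, c):
        return img[r - patch:r + patch + 1, c - patch:c + patch + 1]
    ref = get_patch(i, j)
    num = den = 0.0
    for r in range(max(patch, i - search), min(H - patch, i + search + 1)):
        for c in range(max(patch, j - search), min(W - patch, j + search + 1)):
            # patch-similarity weight (Gaussian in mean squared difference)
            w = np.exp(-((get_patch(r, c) - ref) ** 2).mean() / h ** 2)
            num += w * img[r, c]
            den += w
    return num / den
```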
A new method for thresholding and gradient optimization at different tissue interfaces using class uncertainty
The knowledge of thresholding and gradient at different tissue interfaces is of paramount interest in image segmentation and other imaging methods and applications. Most thresholding and gradient selection methods primarily focus on image histograms and therefore, fail to harness the information generated by intensity patterns in an image. We present a new thresholding and gradient optimization method which accounts for spatial arrangement of intensities forming different objects in an image. Specifically, we recognize object class uncertainty, a histogram-based feature, and formulate an energy function based on its correlation with image gradients that characterizes the objects and shapes in a given image. Finally, this energy function is used to determine optimum thresholds and gradients for various tissue interfaces. The underlying theory behind the method is that objects manifest themselves with fuzzy boundaries in an acquired image and that, in a probabilistic sense; intensities with high class uncertainty are associated with high image gradients generally indicating object/tissue interfaces. The new method simultaneously determines optimum values for both thresholds and gradient parameters at different object/tissue interfaces. The method has been applied on several 2D and 3D medical image data sets and it has successfully determined both thresholds and gradients for different tissue interfaces even when some of the thresholds are almost impossible to locate in their histograms. The accuracy and reproducibility of the method has been examined using 3D multi-row detector computed tomography images of two cadaveric ankles each scanned thrice with repositioning the specimen between two scans.
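The notion of class uncertainty used above can be illustrated with a two-class Gaussian model around a candidate threshold. This sketch is not the paper's energy formulation; the Gaussian class model and the binary-entropy uncertainty are assumptions chosen to show the key property that uncertainty peaks at the interface between intensity classes.

```python
import numpy as np

def class_uncertainty(sample, theta, x):
    """Binary class uncertainty at intensities x for a threshold theta.

    One Gaussian is fitted to the intensities on each side of the
    threshold; the entropy of the posterior class membership is high
    near the threshold (object/tissue interface) and low at the modes.
    """
    lo, hi = sample[sample < theta], sample[sample >= theta]
    def lik(v, part):
        mu, sd = part.mean(), part.std() + 1e-9
        return np.exp(-0.5 * ((v - mu) / sd) ** 2) / sd
    w = lo.size / sample.size
    p_lo, p_hi = w * lik(x, lo), (1 - w) * lik(x, hi)
    post = np.clip(p_lo / (p_lo + p_hi + 1e-300), 1e-12, 1 - 1e-12)
    # binary entropy of the posterior class probability
    return -(post * np.log2(post) + (1 - post) * np.log2(1 - post))
```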
Registration I
Overlap invariance of cumulative residual entropy measures for multimodal image alignment
Nathan D. Cahill, Julia A. Schnabel, J. Alison Noble, et al.
Cumulative residual entropy (CRE) has recently been advocated as an alternative to differential entropy for describing the complexity of an image. CRE has been used to construct an alternate form of mutual information (MI), called symmetric cumulative mutual information (SCMI) or cross-CRE (CCRE). This alternate form of MI has exhibited superior performance to traditional MI in a variety of ways. However, like traditional MI, SCMI suffers from sensitivity to the changing size of the overlap between images over the course of registration. Alternative similarity measures based on differential entropy, such as normalized mutual information (NMI), entropy correlation coefficient (ECC) and modified mutual information (M-MI), have been shown to exhibit superior performance to MI with respect to the overlap sensitivity problem. In this paper, we show how CRE can be used to compute versions of NMI, ECC, and M-MI that we call the normalized cumulative mutual information (NCMI), cumulative residual entropy correlation coefficient (CRECC), and modified symmetric cumulative mutual information (M-SCMI). We use publicly available CT, PET, and MR brain images with known ground truth transformations to evaluate the performance of these CRE-based similarity measures for rigid multimodal registration. Results show that the proposed similarity measures provide a statistically significant improvement in target registration error (TRE) over SCMI.
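The differential-entropy NMI that the paper builds a CRE analogue of has a compact histogram-based form, NMI = (H(A) + H(B)) / H(A, B). A minimal sketch (the bin count is an arbitrary choice for the example):

```python
import numpy as np

def nmi(a, b, bins=32):
    """Normalised mutual information of two images from their joint
    intensity histogram: (H(A) + H(B)) / H(A, B).

    Ranges from 1 (independent) to 2 (perfectly aligned identical
    intensity distributions).
    """
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    def H(q):
        q = q[q > 0]
        return -(q * np.log(q)).sum()
    return (H(px) + H(py)) / H(p.ravel())
```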
Improved fMRI time-series registration using joint probability density priors
Roshni Bhagalia, Jeffrey A. Fessler, Boklye Kim, et al.
Functional MRI (fMRI) time-series studies are plagued by varying degrees of subject head motion. Faithful head motion correction is essential to accurately detect brain activation using statistical analyses of these time-series. Mutual information (MI) based slice-to-volume (SV) registration is used for motion estimation when the rate of change of head position is large. SV registration accounts for head motion between slice acquisitions by estimating an independent rigid transformation for each slice in the time-series. Consequently each MI optimization uses intensity counts from a single time-series slice, making the algorithm susceptible to noise for low-complexity end-slices (i.e., slices near the top of the head scans). This work focuses on improving the accuracy of MI-based SV registration of end-slices by using joint probability density priors derived from registered high-complexity center-slices (i.e., slices near the middle of the head scans). Results show that the use of such priors can significantly improve SV registration accuracy.
Automatic detection of registration errors for quality assessment in medical image registration
A novel method for quality assessment in medical image registration is presented. It is evaluated on 24 follow-up CT scan pairs of the lung. Based on a reference standard of manually matched landmarks we established a pattern recognition approach for detection of local registration errors. To capture characteristics of these misalignments a set of intensity, entropy and deformation related features was employed. Feature selection was conducted and a kNN classifier was trained and evaluated on a subset of landmarks. Registration errors larger than 2 mm were classified with a sensitivity of 88% and specificity of 94%.
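The kNN classification step above is standard; a minimal sketch follows. The feature vectors here are placeholders for the paper's intensity, entropy, and deformation features, and `k` is an arbitrary choice.

```python
import numpy as np

def knn_classify(train_X, train_y, x, k=3):
    """Label a landmark's feature vector by majority vote among its k
    nearest training neighbours (Euclidean distance).

    train_X: (N, d) feature vectors; train_y: (N,) integer labels
    (e.g. 0 = correct alignment, 1 = registration error > 2 mm).
    """
    d = np.linalg.norm(train_X - x, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    return np.bincount(votes).argmax()
```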
A parallel-friendly normalized mutual information gradient for free-form registration
Marc Modat, Gerard R. Ridgway, Zeike A. Taylor, et al.
Non-rigid registration techniques are commonly used in medical image analysis. However, these techniques are often time-consuming. Graphics Processing Unit (GPU) execution appears to be a good way to decrease computation time significantly. However, for an efficient implementation on a GPU, an algorithm must be data-parallel. In this paper we compare the analytical calculation of the gradient of Normalised Mutual Information with an approximation better suited to parallel implementation. Both gradient approaches have been implemented using a Free-Form Deformation framework based on cubic B-Splines and including a smoothness constraint. We applied this technique to recover realistic deformation fields generated from 65 3D-T1 images. The recovered fields using both gradients and the ground truth were compared. We demonstrated that the approximated gradient performed similarly to the analytical gradient but with a greatly reduced computation time when both approaches are implemented on the CPU. The implementation of the approximated gradient on the GPU leads to a computation time of 3 to 4 minutes when registering 190 × 200 × 124 voxel images with a grid including 57 × 61 × 61 control points.
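The cubic B-spline blending at the core of such a Free-Form Deformation framework is fully determined by four basis functions of the fractional position between control points. A sketch of just that building block (the FFD machinery around it, control-point grid and so on, is omitted):

```python
import numpy as np

def bspline_basis(u):
    """The four uniform cubic B-spline basis weights at fractional
    position u in [0, 1) between control points.

    In an FFD, the displacement at a voxel is the tensor product of
    these weights applied to the 4 (per axis) surrounding control
    points; the weights always sum to 1 (partition of unity).
    """
    return np.array([
        (1 - u) ** 3 / 6.0,
        (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
        (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
        u ** 3 / 6.0,
    ])
```

The partition-of-unity and compact-support properties are what make the per-voxel gradient computation local to 4×4×4 control points, and hence amenable to the data-parallel evaluation discussed in the abstract.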
Curvelet-based sampling for accurate and efficient multimodal image registration
M. N. Safran, M. Freiman, M. Werman, et al.
We present a new non-uniform adaptive sampling method for the estimation of mutual information in multi-modal image registration. The method uses the Fast Discrete Curvelet Transform to identify regions along anatomical curves on which the mutual information is computed. Its main advantages over other non-uniform sampling schemes are that it captures the most informative regions, that it is invariant to feature shapes, orientations, and sizes, that it is efficient, and that it yields accurate results. Extensive evaluation registering 20 validated clinical brain CT images to proton density (PD), T1-, and T2-weighted MRI images from the public RIRE database shows the effectiveness of our method. Rigid registration accuracy measured at 10 clinical targets and compared to ground truth measurements yields a mean target registration error of 0.68 mm (std = 0.4 mm) for CT-PD and 0.82 mm (std = 0.43 mm) for CT-T2. This is 0.3 mm (1 mm) more accurate in the average (worst) case than five existing sampling methods. Our method has the lowest registration errors recorded to date for the registration of CT-PD and CT-T2 images in the RIRE website when compared to methods that were tested on at least three patient datasets.
Group-wise registration of large image dataset by hierarchical clustering and alignment
Group-wise registration has been proposed recently for consistent registration of all images in the same dataset. Since all images need to be registered simultaneously, with a large number of deformation parameters to be optimized, the number of images that current group-wise registration methods can handle is limited by the CPU and physical-memory capacity of a typical computer. To overcome this limitation, we present a hierarchical group-wise registration method for feasible registration of large image datasets. Our basic idea is to decompose the large-scale group-wise registration problem into a series of small-scale registration problems, each of which can be easily solved. In particular, we use a novel affinity propagation method to hierarchically cluster a group of images into a pyramid of classes. Then, images in the same class are group-wise registered to their own center image. The center images of different classes are further group-wise registered from the lower level to the upper level of the pyramid. A final atlas for the whole image dataset is thus synthesized when the registration process reaches the top of the pyramid. By applying this hierarchical image clustering and atlas synthesis strategy, we can efficiently and effectively perform group-wise registration on a large image dataset and map each image into the atlas space. More importantly, experimental results on both real and simulated data confirm that the proposed method achieves more robust and accurate registration than conventional group-wise registration algorithms.
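One way to picture the "center image" of each cluster is as the medoid under some pairwise image dissimilarity; each pyramid level then reduces every cluster to its center, and the centers are clustered again at the next level. This is a hypothetical sketch of that reduction step only (the clustering itself, affinity propagation in the paper, is omitted).

```python
def center_image(group, dissimilarity):
    # The cluster's center image chosen as the medoid: the member with
    # minimal total dissimilarity to all other members.
    return min(group, key=lambda img: sum(dissimilarity(img, other)
                                          for other in group))

def hierarchical_centers(clusters, dissimilarity):
    # One level of the pyramid: collapse each cluster to its center image;
    # the returned centers form the population to cluster at the next level.
    return [center_image(c, dissimilarity) for c in clusters]
```

Registering each image only to its nearby cluster center keeps every individual registration problem small, which is the core of the decomposition argument above.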
Registration II
icon_mobile_dropdown
Towards local estimation of emphysema progression using image registration
M. Staring, M. E. Bakker, D. P. Shamonin, et al.
Progression measurement of emphysema is required to evaluate the health condition of a patient and the effect of drugs. To locally estimate progression we use image registration, which allows for volume correction using the determinant of the Jacobian of the transformation. We introduce an adaptation of the so-called sponge model that circumvents its constant-mass assumption. Preliminary results from CT scans of a lung phantom and from CT data sets of three patients suggest that image registration may be a suitable method to locally estimate emphysema progression.
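The volume correction mentioned above relies on the determinant of the transformation's Jacobian, which measures local volume change. A minimal sketch, assuming the displacement-field gradient is already available per voxel (the transformation Jacobian is then the identity plus that gradient):

```python
def jacobian_determinant(grad):
    # grad[i][j] = d(displacement_i)/d(x_j); transformation Jacobian J = I + grad.
    J = [[grad[i][j] + (1.0 if i == j else 0.0) for j in range(3)]
         for i in range(3)]
    # 3x3 determinant by cofactor expansion along the first row.
    return (J[0][0] * (J[1][1] * J[2][2] - J[1][2] * J[2][1])
            - J[0][1] * (J[1][0] * J[2][2] - J[1][2] * J[2][0])
            + J[0][2] * (J[1][0] * J[2][1] - J[1][1] * J[2][0]))
```

A determinant above 1 indicates local expansion and below 1 local compression; this per-voxel factor is what allows density measurements to be volume-corrected before comparing scans.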
Using statistical deformation models for the registration of multimodal breast images
Christine Tanner, John H. Hipwell, David J. Hawkes
This paper describes a novel method for registering multimodal breast images. The method is based on guiding initial alignment by a 3D statistical deformation model (SDM) followed by a standard non-rigid registration method for fine alignment. The method was applied to the problem of compensating for large breast compressions, namely registering magnetic resonance (MR) images to tomosynthesis images and X-ray mammograms. The SDM was based on simulating plausible breast compressions for a population of 20 subjects via finite element models created from segmented 3D MR breast images. Leave-one-out tests on simulated data showed that using SDM guided registration rather than affine registration for the initial alignment led on average to lower mean registration errors, namely 3.2 mm versus 4.2 mm for MR to tomosynthesis images (17.1 mm initially) and 5.0 mm versus 6.2 mm for MR to X-ray mammograms (15.0 mm initially).
Nonrigid registration algorithm for longitudinal breast MR images and the preliminary analysis of breast tumor response
Although useful for the detection of breast cancers, conventional imaging methods, including mammography and ultrasonography, do not provide adequate information regarding response to therapy. Dynamic contrast enhanced MRI (DCE-MRI) has emerged as a promising technique to provide relevant information on tumor status. Consequently, accurate longitudinal registration of breast MR images is critical for the comparison of changes induced by treatment at the voxel level. In this study, a nonrigid registration algorithm is proposed to allow for longitudinal registration of breast MR images obtained throughout the course of treatment. We accomplish this by modifying the adaptive bases algorithm (ABA) through adding a tumor volume preserving constraint in the cost function. The registration results demonstrate the proposed algorithm can successfully register the longitudinal breast MR images and permit analysis of the parameter maps. We also propose a novel validation method to evaluate the proposed registration algorithm quantitatively. These validations also demonstrate that the proposed algorithm constrains tumor deformation well and performs better than the unconstrained ABA algorithm.
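A tumor-volume-preserving constraint can be sketched as a penalty on local volume change (Jacobian determinant) inside the tumor mask; the squared-log form below is one common choice and is only an assumed stand-in for the exact term added to the ABA cost function.

```python
import math

def volume_penalty(jacobian_dets):
    # Mean squared log of the Jacobian determinant over tumor voxels:
    # zero when volume is preserved everywhere, and symmetric for
    # expansion vs. compression (log 2 and log 1/2 cost the same).
    return sum(math.log(j) ** 2 for j in jacobian_dets) / len(jacobian_dets)
```

Weighting such a term into the registration cost discourages the optimizer from shrinking or growing the tumor to improve image similarity, which is exactly the treatment-response artifact the constraint is meant to prevent.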
Feature-based non-rigid volume registration of serial coronary CT angiography
Jonghye Woo, Byung-Woo Hong, Damini Dey, et al.
Coronary CT angiography (CTA) with multi-slice helical scanners is becoming an integral part of major diagnostic pathways for coronary artery disease. In addition, coronary CTA has demonstrated substantial potential in quantitative coronary plaque characterization. If serial comparisons of plaque progression or regression are to be made, accurate 3D registration of these volumes would be particularly useful. In this work, we propose a method for registering paired coronary CTA scans using feature-based non-rigid volume registration. We achieve this with a combined registration strategy, which uses global rigid registration as an initialization, followed by local non-rigid volume registration with a volume-preserving constraint. We exploit the extracted coronary trees to help localize and emphasize the region of interest, since unnecessary regions hinder the registration process and lead to incorrect results. The extracted binary masks of each coronary tree may not be the same due to initial segmentation errors, which could lead to subsequent bias in the registration process. Therefore we utilize a blur mask, generated by convolving a Gaussian function with the binary coronary tree mask, to take the neighboring vessel region into account. A volume-preserving constraint is imposed so that the total volume of the binary mask before and after co-registration remains constant. To validate the proposed method, we perform experiments with data from 3 patients with available serial CT scans (6 scans in total) and measure the distance of anatomical landmarks between the registered serial scans of the same patient.
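The blur mask described above amounts to convolving the binary coronary-tree mask with a Gaussian kernel. A 1D sketch (real masks are 3D, and the kernel size here is illustrative):

```python
import math

def gaussian_kernel(sigma, radius):
    # Normalized 1D Gaussian kernel of half-width `radius`.
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur_mask(mask, sigma=1.0, radius=2):
    # Soften a binary vessel mask so voxels near the vessel also contribute
    # to the similarity measure; borders are handled by clamping indices.
    k = gaussian_kernel(sigma, radius)
    n = len(mask)
    return [sum(k[j + radius] * mask[min(max(i + j, 0), n - 1)]
                for j in range(-radius, radius + 1)) for i in range(n)]
```

Because the kernel is normalized, the total mask "mass" is preserved for interior vessels; the hard 0/1 boundary is simply smeared over the neighboring voxels, which makes the registration tolerant of small segmentation differences between the two scans.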
A method for registration and model-based segmentation of Doppler ultrasound images
Hrvoje Kalinić, Sven Lončarić, Maja Čikeš M.D., et al.
Morphological changes of Doppler ultrasound images are an important source of information for the diagnosis of cardiovascular diseases. Quantification of these flow profiles requires segmentation of the ultrasound images. In this article, we propose a new model-based method for segmentation of (aortic outflow) velocity profiles. The method is based on a registration procedure using a geometric transformation specifically designed for matching Doppler ultrasound profiles. After manual segmentation of a model image, the model image is temporally registered to a new image using two manually defined points in time. Next, a non-rigid registration is carried out in the velocity direction. Normalized mutual information is used as the similarity measure, while optimization is performed by a genetic algorithm. The registration method was experimentally validated using an in-silico image phantom and showed an accuracy of 5.4%. The model-based segmentation was evaluated on a series of aortic outflow Doppler ultrasound images from 30 normal volunteers. Compared to manual delineation by an expert cardiologist, the method proved accurate to 6.6%. The experimental results confirm the accuracy of the approach and show that the method can be used for the segmentation of clinically obtained aortic outflow velocity profiles.
Motion Analysis
icon_mobile_dropdown
Free-breathing intra- and intersubject respiratory motion capturing, modeling, and prediction
Respiration-induced organ motion can limit the accuracy required for many clinical applications working on the thorax or upper abdomen. One approach to reduce the uncertainty of organ location caused by respiration is to use prior knowledge of breathing motion. In this work, we deal with the extraction and modeling of lung motion fields based on free-breathing 4D-CT data sets of 36 patients. Since the data was acquired for radiotherapy planning, images of the same patient were available over different weeks of treatment. Motion field extraction is performed using an iterative shape-constrained deformable model approach. From the extracted motion fields, intra- and inter-subject motion models are built and adapted in a leave-one-out test. The created models capture the motion of corresponding landmarks over the breathing cycle. Model adaptation is then performed by assuming, as an example, the diaphragm motion to be known. Although respiratory motion has a repetitive character, it is known that patients' variability in breathing pattern impedes motion estimation. Nevertheless, with the created motion models, we obtained a mean error between the phases of maximal distance of 3.4 mm for the intra-patient and 4.2 mm for the inter-patient study when assuming the diaphragm motion to be known.
Validation and comparison of a biophysical modeling approach and non-linear registration for estimation of lung motion fields in thoracic 4D CT data
Spatiotemporal image data allow analyzing respiratory dynamics and its impact on radiation therapy. A key step within this field of research is the estimation of lung motion fields. For a multitude of applications, feasible and "realistic" motion field estimates are required. Non-linear registration methods are widely applied to estimate motion fields; in that case, physiology is not taken into account. Using Finite Element Methods, we implemented a biophysical approach to model respiratory lung motion starting from the physiology of breathing. The resulting motion models are compared to motion field estimates of a non-linear non-parametric intensity-based registration approach. Additionally, we extended the registration approach to cope with discontinuities in pleura and chest wall motion, as motivated by the biophysical model. Accuracy of the different modeling approaches is evaluated using a total of 800 user-defined landmarks in 4D (= 3D + t) CT data of 10 lung tumor patients (between 70 and 90 landmarks per patient). Mean registration residuals (the difference between landmark motion as predicted by the model and as observed by an expert) are 3.2±2.0 mm (biophysical model), 3.4±2.4 mm (registration of segmented lung data), 2.1±2.3 mm (registration of CT data), and 1.6±1.3 mm (extended registration of CT data); the intraobserver variability of landmark identification is 0.9±0.8 mm, and the mean landmark motion is 6.8±5.4 mm. Thus, prediction accuracy is higher for non-linear registration of the CT data, but it is shown that explicit modeling of boundary conditions, as motivated by the physiology of breathing and the biophysical modeling approach, improves registration accuracy significantly.
Shortest path refinement for HARP motion tracking
Harmonic phase (HARP) motion analysis is widely used in the analysis of tagged magnetic resonance images of the heart. HARP motion tracking can yield gross errors, however, when there is a large amount of motion between successive time frames. Methods that use spatial continuity of motion - so-called refinement methods - have previously been reported to reduce these errors. This paper describes a new refinement method based on shortest-path computations. The method uses a graph representation of the image and seeks an optimal tracking order from a specified seed to each point in the image by solving a single-source shortest path problem. This minimizes the potential for path-dependent solutions found in other refinement methods. Experiments on cardiac motion tracking show that the proposed method tracks the whole tissue more robustly and is also computationally efficient.
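The single-source shortest path problem at the heart of this refinement is classically solved with Dijkstra's algorithm; the sketch below returns both the distances and the order in which nodes are settled, which is the "tracking order" role the paper assigns to it (the graph encoding of image points and edge weights is application-specific and omitted).

```python
import heapq

def shortest_path_order(graph, seed):
    # graph: {node: [(neighbour, edge_weight), ...]}.
    # Returns (settle order from the seed, distance map) via Dijkstra.
    dist = {seed: 0.0}
    order = []
    heap = [(0.0, seed)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; node already settled cheaper
        order.append(u)
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return order, dist
```

Processing points in this settle order guarantees each point is tracked after a cheaper-to-reach neighbor, which is what removes the dependence on an arbitrary sweep direction.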
Tracking left ventricular borders in 3D echocardiographic sequences using motion-guided optical flow
K. Y. Esther Leung, Mikhail G. Danilouchkine, Marijn van Stralen, et al.
For obtaining quantitative and objective functional parameters from three-dimensional (3D) echocardiographic sequences, automated segmentation methods may be preferable to cumbersome manual delineation of 3D borders. In this study, a novel optical-flow based tracking method is proposed for propagating 3D endocardial contours of the left ventricle throughout the cardiac cycle. To take full advantage of the time-continuous nature of cardiac motion, a statistical motion model was explicitly embedded in the optical flow solution. The cardiac motion was modeled as frame-to-frame affine transforms, which were extracted using Procrustes analysis on a set of training contours. Principal component analysis was applied to obtain a compact model of cardiac motion throughout the whole cardiac cycle. The parameters of this model were resolved in an optical flow manner, via spatial and temporal gradients in image intensity. The algorithm was tested on 36 noncontrast and 28 contrast enhanced 3D echocardiographic sequences in a leave-one-out manner. Good results were obtained using a combination of the proposed motion-guided method and a purely data-driven optical flow approach. The improvement was particularly noticeable in areas where the LV wall was obscured by image artifacts. In conclusion, the results show the applicability of the proposed method in clinical quality echocardiograms.
4D motion modeling of the coronary arteries from CT images for robotic assisted minimally invasive surgery
Dong Ping Zhang, Eddie Edwards, Lin Mei, et al.
In this paper, we present a novel approach for coronary artery motion modeling from cardiac Computed Tomography (CT) images. The aim of this work is to develop a 4D motion model of the coronaries for image guidance in robotic-assisted totally endoscopic coronary artery bypass (TECAB) surgery. To utilize the pre-operative cardiac images to guide the minimally invasive surgery, it is essential to have a 4D cardiac motion model that can be registered with the stereo endoscopic images acquired intraoperatively with the da Vinci robotic system. In this paper, we investigate the extraction of the coronary arteries and the modeling of their motion from a dynamic sequence of cardiac CT. We use a multi-scale vesselness filter to enhance vessels in the cardiac CT images. The centerlines of the arteries are extracted using a ridge traversal algorithm. With this method the coronaries can be extracted in near real-time, as only local information is used in vessel tracking. To compute the deformation of the coronaries due to cardiac motion, the motion is extracted from a dynamic sequence of cardiac CT. Each time frame in this sequence is registered to the end-diastole time frame using a non-rigid registration algorithm based on free-form deformations. Once the images have been registered, a dynamic motion model of the coronaries is obtained by applying the computed free-form deformations to the extracted coronary arteries. To validate the accuracy of the motion model, we compare the actual position of the coronaries in each time frame with the position predicted by the non-rigid registration. We expect that this motion model of the coronaries can facilitate the planning of TECAB surgery and, through registration with real-time endoscopic video images, reduce the conversion rate from TECAB to conventional procedures.
Coronary DSA: enhancing coronary tree visibility through discriminative learning and robust motion estimation
Ying Zhu, Simone Prummer, Terrence Chen, et al.
Digital subtraction angiography (DSA) is a well-known technique for improving the visibility and perceptibility of blood vessels in the human body. Coronary DSA extends conventional DSA to dynamic 2D fluoroscopic sequences of coronary arteries which are subject to respiratory and cardiac motion. Effective motion compensation is the main challenge for coronary DSA. Without a proper treatment, both breathing and heart motion can cause unpleasant artifacts in coronary subtraction images, jeopardizing the clinical value of coronary DSA. In this paper, we present an effective method to separate the dynamic layer of background structures from a fluoroscopic sequence of the heart, leaving a clean layer of moving coronary arteries. Our method combines the techniques of learning-based vessel detection and robust motion estimation to achieve reliable motion compensation for coronary sequences. Encouraging results have been achieved on clinically acquired coronary sequences, where the proposed method considerably improves the visibility and perceptibility of coronary arteries undergoing breathing and cardiac movement. Perceptibility improvement is significant especially for very thin vessels. The potential clinical benefit is expected in the context of obese patients and deep angulation, as well as in the reduction of contrast dose in normal size patients.
Vascular Image Processing
icon_mobile_dropdown
Segmentation of arteries and veins on 4D CT perfusion scans for constructing arteriograms and venograms
Adriënne Mendrik, Evert-jan Vonken M.D., Annet Waaijer M.D., et al.
3D CT Angiography (CTA) scans are currently used to assess the cerebral arteries. An additional 4D CT Perfusion (CTP) scan is often acquired to determine perfusion parameters in the cerebral parenchyma. We propose a method to extract a three dimensional volume showing either the arteries (arteriogram) or the veins (venogram) from the 4D CTP scan. This would allow cerebrovascular assessment using the CTP scan and obviate the need for acquiring an additional CTA scan. Preprocessing steps consist of registration of the time volumes of the CTP scan using rigid registration and masking out extracranial structures, bone and air. Next a 3D volume is extracted containing the vessels (vascular volume) by using the absolute area under the first derivative curve in time. To segment the arteries and veins we use the time to peak of the contrast enhancement curve combined with region growing within a rough vessel segmentation. Finally the artery/vein segmentation is used to suppress either the veins or the arteries in the vascular volume to construct the arteriogram and venogram. To evaluate the method, 11 arteriograms and venograms were visually inspected by an expert observer, with special attention to the important cerebral arteries (Circle of Willis) and veins (straight and transverse sinus). Results show that the proposed method is effective in extracting the major cerebral arteries and veins from CTP scans.
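The time-to-peak feature used above can be computed per voxel from its contrast-enhancement curve; arteries enhance earlier than veins. The classification rule shown here (assign to whichever reference peak is nearer) is a hypothetical simplification of the paper's combination of time-to-peak with region growing.

```python
def time_to_peak(curve, dt=1.0):
    # Time of maximal contrast enhancement for one voxel's time series,
    # in the same unit as the frame interval dt.
    return max(range(len(curve)), key=lambda i: curve[i]) * dt

def classify_voxel(ttp, arterial_peak, venous_peak):
    # Hypothetical rule: label a voxel by the nearer reference peak time
    # (reference peaks would come from known arterial/venous regions).
    return "artery" if abs(ttp - arterial_peak) <= abs(ttp - venous_peak) else "vein"
```

In practice such a per-voxel label is noisy, which is why the paper restricts it to a rough vessel segmentation and regularizes it with region growing.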
A novel multiscale topo-morphometric approach for separating arteries and veins via pulmonary CT imaging
Punam K. Saha, Zhiyun Gao, Sara Alford, et al.
Distinguishing arterial and venous trees in pulmonary multiple-detector X-ray computed tomography (MDCT) images (contrast-enhanced or unenhanced) is a critical first step in the quantification of vascular geometry for purposes such as determining pulmonary hypertension, using vascular dimensions as a comparator for assessing airway size, and detecting pulmonary emboli. Here, a novel method is reported for separating arteries and veins in MDCT pulmonary images. Arteries and veins are modeled as two iso-intensity objects closely entwined with each other at different locations and at various scales. The method starts with two sets of seeds -- one for arteries and another for veins. Initialized with seeds, arteries and veins grow iteratively while maintaining their spatial separation, eventually forming two disjoint objects at convergence. The method combines the fuzzy distance transform, a morphologic feature, with a topologic connectivity property to iteratively separate finer and finer details, starting at a large scale and progressing towards smaller scales. The method has been validated on mathematically generated tubular objects with different levels of fuzziness, scale and noise. Also, it has been successfully applied to clinical CT pulmonary data. The accuracy of the method has been quantitatively evaluated by comparing its results with manual outlining. For arteries, the method yielded a correctness of 81.7% at the cost of 6.7% false positives and 11.6% false negatives. Our method is very promising for automated separation of arteries and veins in MDCT pulmonary images, even when there is no intensity variation at conjoining locations.
A two-stage approach for fully automatic segmentation of venous vascular structures in liver CT images
Jens N. Kaftan, Hüseyin Tek, Til Aach
The segmentation of the hepatic vascular tree in computed tomography (CT) images is important for many applications such as surgical planning of oncological resections and living liver donations. In surgical planning, vessel segmentation is often used as a basis to support the surgeon in deciding on the location of the cut to be performed and the extent of the liver to be removed, respectively. We present a novel approach to hepatic vessel segmentation that can be divided into two stages. First, we detect and delineate the core vessel components efficiently with high specificity. Second, smaller vessel branches are segmented by a robust vessel tracking technique based on a medialness filter response, which starts from the terminal points of the previously segmented vessels. Specifically, in the first stage major vessels are segmented using the globally optimal graph-cuts algorithm in combination with foreground and background seed detection, while the computationally more demanding tracking approach needs to be applied only locally in areas of smaller vessels within the second stage. The method has been evaluated on contrast-enhanced liver CT scans from clinical routine, showing promising results. In addition to the fully-automatic instance of this method, the vessel tracking technique can also be used to easily add missing branches/sub-trees to an already existing segmentation result by adding single seed points.
Segmentation of lung vessel trees by global optimization
Pieter Bruyninckx, Dirk Loeckx, Dirk Vandermeulen, et al.
We present a novel method for lung vessel tree segmentation. The method combines image information and a high-level physiological model, stating that the vasculature is organized such that the whole organ is perfused using minimal effort. The method consists of three consecutive steps. First, a limited set of possible bifurcation locations is determined. Subsequently, individual vessel segments of varying diameters are constructed between each two bifurcation locations. This way, a graph is constructed consisting of each bifurcation location candidate as vertices and vessel segments as edges. Finally, the overall vessel tree is found by selecting the subset of these segments that perfuses the whole organ, while minimizing an energy function. This energy function contains a data term, a volume term and a bifurcation term. The data term measures how well the selected vessel segments fit to the image data, the volume term measures the total amount of blood in the vasculature, and the bifurcation term models the physiological fit of the diameters of the in- and outgoing vessels in each bifurcation. The selection of the optimal subset of vessel segments into a single vessel tree is an NP-hard combinatorial optimization problem that is solved here with an ant colony optimization approach. The bifurcation detection as well as the segmentation method have been validated on lung CT images with manually segmented arteries and veins.
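The energy function described above combines a data term, a volume term, and a bifurcation term over the selected segments. The sketch below makes that additive structure concrete; the bifurcation term is written as a deviation from Murray's law (r_p^3 = sum of r_c^3), which is a common physiological prior for bifurcations but only an assumed stand-in for the paper's exact formulation, and the weights are illustrative.

```python
import math

def bifurcation_cost(r_parent, r_children):
    # Squared deviation from Murray's law r_p^3 = sum(r_c^3):
    # zero for a physiologically "ideal" bifurcation.
    return (r_parent ** 3 - sum(r ** 3 for r in r_children)) ** 2

def tree_energy(segments, bifurcations, w_data=1.0, w_vol=0.1, w_bif=1.0):
    # segments: iterable of (data_cost, radius, length) for selected edges;
    # bifurcations: iterable of (r_parent, [r_child, ...]) at selected vertices.
    data = sum(s[0] for s in segments)
    volume = sum(math.pi * s[1] ** 2 * s[2] for s in segments)  # cylinder volume
    bif = sum(bifurcation_cost(rp, rc) for rp, rc in bifurcations)
    return w_data * data + w_vol * volume + w_bif * bif
```

Minimizing this over all subsets of candidate segments that still perfuse the organ is the NP-hard selection problem the paper hands to ant colony optimization.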
Globally optimal 3D graph search incorporating both edge and regional information: application to aortic MR image segmentation
We present a novel method for incorporating both edge and regional image information in a 3-D graph-theoretic approach for globally optimal surface segmentation. The energy functional takes the form of a ratio between the "on-surface" cost and the "in-region" cost. We thus introduce an optimal surface segmentation model allowing regional information such as volume, homogeneity and texture to be included along with boundary information such as intensity gradients. Compared to the linear combination used in standard active contour energies, this ratio-form energy is parameter-free, with no bias toward either a large or small region. Our method is the first attempt to use a ratio-form energy functional in a graph-search framework for high-dimensional image segmentation, and it delivers a globally optimal solution in polynomial time. The globally optimal surface can be achieved by solving a parametric maximum flow problem in the time complexity of computing a single maximum flow. Our new approach is applied to the segmentation of 15 3-D MR aortic images from 15 subjects. Compared to an expert-defined independent standard, the overall mean unsigned surface positioning error was 0.76 ± 0.88 voxels. Our experiments showed that the incorporation of regional information effectively alleviates the interference of adjacent objects.
Flow-based segmentation of the large thoracic arteries in tridirectional phase-contrast MRI
Michael Schmidt, Roland Unterhinninghofen, Sebastian Ley, et al.
Tridirectional Phase-Contrast (PC)-MRI sequences provide spatially and temporally resolved measurements of blood flow velocity vectors in the human body. Analyzing flow conditions based on these datasets requires prior segmentation of the vessels of interest. In view of decreased quality of morphology images in PC-MRI sequences, the flow data provides valuable information to support reliable segmentation. This work presents a semi-automatic approach for segmenting the large arteries utilizing both morphology and flow information. It consists of two parts, the extraction of a simplified vessel model based on vessel centerlines and diameters, and a following refinement of the resulting surface for each time frame. Vessel centerlines and diameters are extracted using an offset adaptive medialness function that estimates a voxel's likelihood of belonging to a vessel centerline. The resulting centerline model is manually post-processed to select the appropriate centerlines and link possible gaps. The surface described by the final centerline model is used to initialize a 3D level set segmentation of each time frame. Deformation velocities that depend on both morphology and flow information are proposed and a new approach to account for the curved shape of vessels is introduced. The described segmentation system has been successfully applied on a total of 22 datasets of the thoracic aorta and the pulmonary arteries. Resulting segmentations have been assessed by an expert radiologist and were considered to be very satisfactory.
Optimal graph search based image segmentation for objects with complex topologies
Xiaomin Liu, Danny Z. Chen, Xiaodong Wu, et al.
Segmenting objects with complicated topologies in 3D images is a challenging problem in medical image processing, especially for objects with multiple interrelated surfaces. In this paper, we extend a graph-search based technique to simultaneously identify multiple interrelated surfaces of objects with complex topologies (e.g., tree-like structures) in 3D. We first perform a pre-segmentation on the input image to obtain basic information about the objects' topologies. Based on this initial pre-segmentation, the original image is resampled along judiciously determined directions to produce a set of vectors of voxels (called voxel columns). The resampling process utilizes medial axes to ensure that voxel columns of appropriate lengths are used to capture the sought object surfaces. Then a geometric graph is constructed whose edges connect voxels in the resampled voxel columns and enforce the smoothness and separation constraints on the sought surfaces. Validation of our algorithm was performed on the segmentation of airway trees and lung vascular trees in human in-vivo CT scans. Cost functions with directional information are applied to distinguish the airway inner wall from the outer wall. We succeeded in extracting the outer airway wall and optimizing the location of the inner wall in all cases, while the vascular trees were optimized as well. Compared with the pre-segmentation results, our approach captures the wall surfaces more accurately, especially across bifurcations. The statistical evaluation on a double-wall phantom derived from in-vivo CT images yields highly accurate wall thickness measurements over the whole tree (with mean unsigned error 0.16 ± 0.16 mm).
Atlas-based Methods
icon_mobile_dropdown
Automatic segmentation of the optic nerves and chiasm in CT and MR using the atlas-navigated optimal medial axis and deformable-model algorithm
In recent years, radiation therapy has become the preferred treatment for many types of head and neck tumors. To minimize side effects, radiation beams are planned pre-operatively to avoid over-radiation of vital structures, such as the optic nerves and chiasm, which are essential to the visual process. To plan the procedure, these structures must be identified using CT/MR imagery. Currently, a radiation oncologist must manually segment the structures, which is both inefficient and ineffective. Clearly an automated approach could be beneficial to the planning process. The problem is difficult due to the shape variability and low image contrast of the structures, and several attempts at automatic localization have been reported with marginal results. In this work we present a novel method for localizing the optic nerves and chiasm in CT/MR volumes using the atlas-navigated optimal medial axis and deformable-model algorithm (NOMAD). NOMAD uses a statistical model and image registration to provide a priori local intensity and shape information to both a medial axis extraction procedure and a deformable-model, which deforms the medial axis and completes the segmentation process. This approach achieves mean dice coefficients greater than 0.8 for both the optic nerves and the chiasm when compared to manual segmentations over ten test cases. By comparing quantitative results with existing techniques it can be seen that this method produces more accurate results.
Statistical model of laminar structure for atlas-based segmentation of the fetal brain from in utero MR images
Piotr A. Habas, Kio Kim, Dharshan Chandramohan, et al.
Recent advances in MR and image analysis allow for reconstruction of high-resolution 3D images from clinical in utero scans of the human fetal brain. Automated segmentation of tissue types from MR images (MRI) is a key step in the quantitative analysis of brain development. Conventional atlas-based methods for adult brain segmentation are limited in their ability to accurately delineate complex structures of developing tissues from fetal MRI. In this paper, we formulate a novel geometric representation of the fetal brain aimed at capturing the laminar structure of developing anatomy. The proposed model uses a depth-based encoding of tissue occurrence within the fetal brain and provides an additional anatomical constraint in a form of a laminar prior that can be incorporated into conventional atlas-based EM segmentation. Validation experiments are performed using clinical in utero scans of 5 fetal subjects at gestational ages ranging from 20.5 to 22.5 weeks. Experimental results are evaluated against reference manual segmentations and quantified in terms of Dice similarity coefficient (DSC). The study demonstrates that the use of laminar depth-encoded tissue priors improves both the overall accuracy and precision of fetal brain segmentation. Particular refinement is observed in regions of the parietal and occipital lobes where the DSC index is improved from 0.81 to 0.82 for cortical grey matter, from 0.71 to 0.73 for the germinal matrix, and from 0.81 to 0.87 for white matter.
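The Dice similarity coefficient used to quantify these results is a standard overlap measure between an automatic and a reference segmentation; a minimal sketch with segmentations represented as sets of voxel indices:

```python
def dice(a, b):
    # Dice similarity coefficient between two segmentations given as
    # sets of voxel indices: 2|A ∩ B| / (|A| + |B|), in [0, 1].
    if not a and not b:
        return 1.0  # two empty segmentations agree perfectly
    return 2.0 * len(a & b) / (len(a) + len(b))
```

A DSC of 1 means perfect overlap and 0 means none, so the reported improvements (e.g. 0.81 to 0.87 for white matter) correspond to directly comparable fractions of agreeing voxels.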
System for definition of the central-chest vasculature
Accurate definition of the central-chest vasculature from three-dimensional (3D) multi-detector CT (MDCT) images is important for pulmonary applications. For instance, the aorta and pulmonary artery help in the automatic definition of the Mountain lymph-node stations for lung-cancer staging. This work presents a system for defining the major vascular structures of the central chest. The system provides automatic methods for extracting the aorta and pulmonary artery and semi-automatic methods for extracting the other major central-chest arteries and veins, such as the superior vena cava and azygos vein. Automatic aorta and pulmonary artery extraction are performed by model fitting and selection. The system also extracts certain vascular-structure information to validate its outputs. The semi-automatic method extracts vasculature by finding the medial axes between user-supplied sites of interest. Results of the system are applied to lymph-node station definition and to the guidance of bronchoscopic biopsy.
A comparison study of atlas-based image segmentation: the advantage of multi-atlas based on shape clustering
Purpose: By incorporating high-level shape priors, atlas-based segmentation has achieved tremendous success in medical image analysis. However, the effect of different kinds of atlases, e.g., an average shape model versus an example-based multi-atlas, has not been fully explored. In this study, we generate different atlases and compare their segmentation performance. Methods: We compare segmentation performance using a parametric deformable model (PDM) with four different atlases: 1) a single atlas, i.e., an average shape model (SAS); 2) an example-based multi-atlas (EMA); 3) cluster-based average shape models (CAS); and 4) cluster-based statistical shape models (average shape + principal shape variation modes) (CSS). CAS and CSS are novel atlases constructed by shape clustering. For comparison purposes, we also use the PDM without an atlas (NOA) as a benchmark method. Experiments: The experiment is carried out on liver segmentation from whole-body CT images. Atlases are constructed from 39 manually delineated liver surfaces. Eleven CT scans with ground truth are used as the testing data set, and segmentation accuracy using the different atlases is compared. Conclusion: Compared with segmentation without an atlas, all four atlas-based segmentation methods achieve better results. Multi-atlas-based segmentation performs better than single-atlas-based segmentation, and CAS exhibits superior performance to all other methods.
Automatic segmentation of brain MRIs and mapping neuroanatomy across the human lifespan
Shiva Keihaninejad, Rolf A. Heckemann, Ioannis S. Gousias, et al.
A robust model for the automatic segmentation of human brain images into anatomically defined regions across the human lifespan would be highly desirable, but such structural segmentation of brain MRI is challenging due to age-related changes. We have developed a new method based on established algorithms for the automatic segmentation of young adults' brains. We used prior information from 30 anatomical atlases, which had been manually segmented into 83 anatomical structures. Target MRIs came from 80 subjects (~12 individuals/decade) aged 20 to 90 years, with equal numbers of men and women, and data from two different scanners (1.5T, 3T), using the IXI database. Each of the adult atlases was registered to each target MR image. By using additional information from segmentation into tissue classes (GM, WM and CSF) to initialise the warping based on label-consistency similarity before feeding this into the previously established normalised mutual information non-rigid registration, the registration became robust enough to accommodate atrophy and ventricular enlargement with age. The final segmentation was obtained by combining the 30 propagated atlases using decision fusion. Kernel smoothing was used to model structural volume changes with aging. Example linear correlation coefficients with age were, for lateral ventricular volume, r_male = 0.76 and r_female = 0.58, and, for hippocampal volume, r_male = -0.6 and r_female = -0.4 (all p < 0.01).
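The decision-fusion step that combines the propagated atlases can be sketched as a per-voxel majority vote over the label volumes; vote fusion is one common form of decision fusion, and the names below are illustrative, not from the paper:

```python
import numpy as np

def decision_fusion(label_volumes):
    """Per-voxel majority vote over propagated atlas label volumes.
    label_volumes: array of shape (n_atlases, ...) holding integer labels."""
    stack = np.asarray(label_volumes)
    n_labels = int(stack.max()) + 1
    # Count votes for each label at each voxel, then take the winning label.
    votes = np.stack([(stack == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)

# Three toy 2x2 "propagated atlases" that disagree at two voxels
atlases = np.array([[[0, 1], [2, 2]],
                    [[0, 1], [2, 0]],
                    [[0, 2], [2, 0]]])
fused = decision_fusion(atlases)  # majority label at each voxel
```

In the paper's setting the same vote would run over 30 propagated 83-label volumes instead of three toy arrays.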
Keynote and Diffusion Tensor Imaging
RADTI: regression analyses of diffusion tensor images
Diffusion tensor imaging (DTI) is a powerful tool for quantitatively assessing the integrity of anatomical connectivity in white matter in clinical populations. The prevalent methods for group-level analysis of DTI are statistical analyses of invariant measures (e.g., fractional anisotropy) and principal directions across groups. The invariant measures and principal directions, however, do not capture all the information in the full diffusion tensor, which can decrease the statistical power of DTI in detecting subtle changes in white matter. It is therefore desirable to develop new statistical methods for analyzing full diffusion tensors. In this paper, we develop a toolbox, called RADTI, for analyzing full diffusion tensors as responses and establishing their association with a set of covariates. The key idea is to use the recently developed log-Euclidean metric to transform diffusion tensors from a nonlinear space into their matrix logarithms in a Euclidean space. Our regression model is semiparametric, avoiding specific parametric assumptions. We develop an estimation procedure and a test procedure based on score statistics and a resampling method to simultaneously assess the statistical significance of linear hypotheses across a large region of interest. Monte Carlo simulations are used to examine the finite-sample performance of the test procedure for controlling the family-wise error rate. We apply our methods to the detection of statistically significant diagnostic and age effects on the integrity of white matter in a diffusion tensor study of human immunodeficiency virus.
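The log-Euclidean transform at the heart of this approach maps each symmetric positive-definite tensor to its matrix logarithm, where ordinary Euclidean statistics (and hence regression) apply; a minimal sketch via eigendecomposition (function names are illustrative):

```python
import numpy as np

def tensor_log(D):
    """Matrix logarithm of a symmetric positive-definite diffusion tensor,
    via eigendecomposition: log(D) = V diag(log(lambda)) V^T."""
    w, V = np.linalg.eigh(D)
    return (V * np.log(w)) @ V.T  # scale eigenvector columns by log-eigenvalues

def tensor_exp(L):
    """Inverse map: matrix exponential back onto the tensor manifold."""
    w, V = np.linalg.eigh(L)
    return (V * np.exp(w)) @ V.T

# Toy prolate diffusion tensor (diagonal, units of mm^2/s)
D = np.diag([1.7e-3, 0.4e-3, 0.3e-3])
L = tensor_log(D)       # lives in a flat Euclidean (vector) space
D_back = tensor_exp(L)  # the round trip recovers the tensor
```

After this transform, averaging or regressing on the `L` matrices stays within the space of valid tensors once mapped back with `tensor_exp`.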
Fiber-to-bundle registration of white matter tracts
Qing Xu, Adam W. Anderson, John C. Gore, et al.
Magnetic resonance diffusion tensor imaging (DTI) is widely used to reconstruct brain white matter (WM) fiber tracts. For further characterization of the tracts, fibers with similar courses often need to be grouped into a fiber bundle that corresponds to an underlying WM anatomic structure. In addition, alignment of fibers from different studies is often desirable for bundle comparisons and group analysis. In this work, a novel registration algorithm based on fiber-to-bundle matching is proposed to address these two needs. Using an Expectation Maximization (EM) algorithm, the proposed method estimates a thin-plate-spline transformation that optimally aligns whole-brain target fiber sets with a reference bundle model. Based on the resulting transformations, fibers from different target datasets can all be warped into the reference coordinate system for comparisons and group analysis, and can be further automatically labeled according to their similarity to the reference model. The algorithm was evaluated with eight human brain DTI data volumes acquired in vivo at 3T. After registration, the warped target bundles exhibit good similarity to the reference bundles. Quantitative experiments further demonstrated that the detected target bundles agree with ground truth obtained by manual segmentation with sub-voxel accuracy.
Segmentation of DTI based on tensorial morphological gradient
This paper presents a segmentation technique for diffusion tensor imaging (DTI). The technique is based on a tensorial morphological gradient (TMG), defined as the maximum dissimilarity over the neighborhood. Once this gradient is computed, the tensorial segmentation problem becomes a scalar one, which can be solved by conventional techniques such as the watershed transform and thresholding. Similarity functions, namely the dot product, the tensorial dot product, the J-divergence, and the Frobenius norm, were compared in order to understand their differences in measuring tensor dissimilarities. The study showed that the dot product and the tensorial dot product are inappropriate for computing the TMG, while the Frobenius norm and the J-divergence are both capable of measuring tensor dissimilarities, despite the distortion of the Frobenius norm, which is not an affine-invariant measure. To validate the TMG as a solution for DTI segmentation, it was computed using distinct similarity measures and structuring elements, and the results were also compared to fractional anisotropy. Finally, synthetic and real DTI were used in the method validation. Experiments showed that the TMG enables the segmentation of DTI by the watershed transform or by a simple choice of threshold. The strength of the proposed segmentation method is its simplicity and robustness, both consequences of the TMG computation: because it transforms tensorial images into scalar ones, it enables the use not only of well-known algorithms and tools from mathematical morphology but of any other segmentation method.
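As a rough illustration of how the TMG turns a tensor image into a scalar one, the sketch below computes, at each voxel, the maximum Frobenius-norm dissimilarity to its 4-neighbors. This simplified structuring element and the center-to-neighbor comparison are assumptions for illustration; the paper evaluates several similarity measures and structuring elements:

```python
import numpy as np

def tmg(field):
    """field: (H, W, 3, 3) tensor image. Returns an (H, W) scalar gradient:
    the maximum Frobenius distance between each tensor and its 4-neighbors."""
    H, W = field.shape[:2]
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            best = 0.0
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W:
                    d = np.linalg.norm(field[i, j] - field[ni, nj])
                    best = max(best, d)
            out[i, j] = best
    return out

# Two homogeneous tensor regions: the gradient is nonzero only at the border
field = np.zeros((4, 6, 3, 3))
field[:, :3] = np.eye(3)               # left region: isotropic tensors
field[:, 3:] = np.diag([2., 1., 1.])   # right region: anisotropic tensors
g = tmg(field)  # ridge of high values along the region boundary
```

The resulting scalar image `g` is exactly the kind of input a watershed transform or a simple threshold can segment.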
Registration III
Non-stationary diffeomorphic registration: application to endo-vascular treatment monitoring
M. De Craene, O. Camara, B. Bijnens, et al.
This paper proposes a new pairwise registration algorithm called Large Diffeomorphic Free-Form Deformation (LDFFD). In the LDFFD algorithm, the diffeomorphic mapping is represented as a composition of Free-Form Deformation (FFD) transformations at each time step, with all time steps jointly optimized. Because the transformation at one time step influences all subsequent time steps, temporal consistency is naturally enforced compared to other diffeomorphic registration algorithms, without restricting the space of solutions to stationary displacement fields over all time steps. A multiresolution strategy is presented to find the optimal number of time steps and to efficiently solve the joint optimization of all transformations in the registration chain. The accuracy of the algorithm in the presence of large deformations is illustrated and compared to other standard and diffeomorphic registration algorithms. Our algorithm is applied here in the context of monitoring disease progression by image-based quantification of aneurysmal growth after endovascular treatment. In this context, aneurysm volume changes occurring after embolization are quantified by integrating the Jacobian of the diffeomorphic transformation between 3D rotational angiography images acquired at subsequent follow-ups.
Non-rigid registration of small animal skeletons from micro-CT using 3D shape context
Small-animal registration is an important step in molecular image analysis. Skeleton registration from whole-body or partial micro computed tomography (micro-CT) images is often performed to match individual rats to atlases and templates, for example to identify organs in positron emission tomography (PET). In this paper, we extend the shape context matching technique to 3D surface registration and apply it to rat hind-limb skeleton registration from CT images. Using the proposed method, after standard affine iterative closest point (ICP) registration, correspondences between the 3D points of the source and target objects were robustly found and used to deform the limb skeleton surface with a thin-plate spline (TPS). Experiments are described using phantoms and actual rat hind-limb skeletons. On animals, mean square errors were decreased by the proposed registration compared to the initial alignment. Visually, skeletons were successfully registered even in cases of very different animal poses.
Elastic registration of multiphase CT images of liver
In this work we present a novel approach for elastic image registration of multi-phase contrast-enhanced CT images of the liver. A problem in registration of multiphase CT is that the images contain similar but complementary structures: each image shows a different part of the vessel system, e.g., portal, hepatic venous, arterial, or biliary vessels. Portal, arterial, and biliary vessels run in parallel and abut on each other, forming the so-called portal triad, while hepatic veins run independently. Naive registration will tend to align complementary vessels. Our new approach is based on minimizing a cost function consisting of a distance measure and a regularizer. For the distance we use the recently proposed normalized gradient field measure, which focuses on the alignment of edges; for the regularizer we use the linear elastic potential. The key feature of our approach is an additional penalty term that uses segmentations of the different vessel systems in the images to avoid overlaps of complementary structures. We successfully demonstrate the new method on real data examples.
Registration of 3D spectral OCT volumes using 3D SIFT feature point matching
The recent introduction of next-generation spectral OCT scanners has enabled routine acquisition of high-resolution, 3D cross-sectional volumetric images of the retina. 3D OCT is used in the detection and management of serious eye diseases such as glaucoma and age-related macular degeneration. For follow-up studies, image registration is a vital tool enabling more precise, quantitative comparison of disease states. This work presents a registration method based on a recently introduced extension of the 2D Scale-Invariant Feature Transform (SIFT) framework1 to 3D.2 The SIFT feature extractor locates minima and maxima in the difference-of-Gaussian scale space to find salient feature points. It then uses histograms of the local gradient directions around each found extremum in 3D to characterize them in a 4096-element feature vector. Matching points are found by comparing the distances between feature vectors. We apply this method to the rigid registration of optic nerve head (ONH)- and macula-centered 3D OCT scans of the same patient that have only limited overlap. Three OCT data set pairs with known deformation were used for quantitative assessment of the method's robustness and accuracy under rotation and scaling. A three-dimensional registration accuracy of 2.0±3.3 voxels was observed, assessed as the average voxel distance error over N=1572 matched locations. The registration method was applied to 12 3D OCT scans (200 x 200 x 1024 voxels) of 6 normal eyes imaged in vivo to demonstrate the clinical utility and robustness of the method in a real-world environment.
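Matching points by comparing distances between feature vectors can be sketched as a nearest-neighbor search over descriptors; the ratio test used below is a standard robustness heuristic for SIFT-style matching, not necessarily the paper's exact criterion, and all names are illustrative:

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.8):
    """Match descriptors by Euclidean distance with a ratio test:
    accept a match only if the nearest neighbor is clearly closer than
    the second nearest. Returns a list of (index_a, index_b) pairs."""
    matches = []
    for ia, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        if len(order) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((ia, int(order[0])))
    return matches

# Toy data: three descriptors in desc_a are noisy copies of reference ones
rng = np.random.default_rng(0)
desc_b = rng.normal(size=(20, 64))                              # reference set
desc_a = desc_b[[3, 7, 11]] + 0.01 * rng.normal(size=(3, 64))   # noisy queries
pairs = match_features(desc_a, desc_b)
```

The matched pairs then feed a rigid transform estimate between the two scans.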
Curvature orientation histograms for detection and matching of vascular landmarks in retinal images
Keerthi Ram, Yogesh Babu, Jayanthi Sivaswamy
Registration is a primary step in tracking pathological changes in medical images. Point-based registration requires a set of distinct, identifiable, and comparable landmark points to be extracted from the images. In this work, we illustrate a method for obtaining landmarks based on changes in a topographic descriptor of a retinal image. Building on the curvature primal sketch introduced by Asada and Brady1 for describing interest points on planar curves, we extend the notion to grayscale images. We view an image as a topographic surface and propose to identify interest points on the surface using the surface curvature as a descriptor. This is illustrated by modeling retinal vessels as trenches and identifying landmarks as points where the trench behaviour changes, such as where it splits or bends sharply. Based on this model, we present a method that uses the surface curvature to characterise landmark points on retinal vessels as points of high dispersion in the curvature orientation histogram computed around them. This approach yields junction/crossover points of retinal vessels and provides a means to derive additional information about the type of junction. A scheme is developed for using such information to determine the correspondence between sets of landmarks from two images related by a rigid transformation. In this paper we present the details of the proposed approach and results of testing on images from public-domain datasets, including a comparison of landmark detection with two other methods and results of correspondence derivation. The results show the approach to be successful and fast.
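The idea of flagging junctions as points of high dispersion in an orientation histogram can be sketched as follows; the entropy-based dispersion measure here is an illustrative choice, not necessarily the paper's definition:

```python
import numpy as np

def orientation_histogram(gx, gy, n_bins=8):
    """Normalized histogram of local orientations in a patch,
    built from per-pixel x/y derivative (or curvature-direction) samples."""
    theta = np.mod(np.arctan2(gy, gx), np.pi)  # orientations folded to [0, pi)
    hist, _ = np.histogram(theta, bins=n_bins, range=(0, np.pi))
    return hist / max(hist.sum(), 1)

def dispersion(hist):
    """Normalized entropy of the histogram: ~0 for one dominant orientation
    (a straight vessel), ~1 for spread-out orientations (junction candidate)."""
    p = hist[hist > 0]
    return float(-(p * np.log(p)).sum() / np.log(len(hist)))

straight = orientation_histogram(np.ones(100), np.zeros(100))  # one direction
junction = orientation_histogram(np.random.default_rng(1).normal(size=100),
                                 np.random.default_rng(2).normal(size=100))
```

Thresholding the dispersion score at each candidate point would then separate straight vessel segments from junctions/crossovers.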
Segmentation II
User-constrained guidewire localization in fluoroscopy
In this paper we present a learning-based guidewire localization algorithm that can be constrained by user inputs. The proposed algorithm automatically localizes guidewires in fluoroscopic images. In cases where the results are not satisfactory, the user can constrain the algorithm by clicking on the guidewire segment missed by the detection algorithm; the algorithm then re-localizes the guidewire and updates the result in less than 0.3 seconds. In extreme cases, more constraints can be provided until a satisfactory result is reached. The proposed algorithm not only serves as an efficient initialization tool for guidewire tracking, but also as an efficient annotation tool, either for cardiologists to mark the guidewire or to build up a labeled database for evaluation. By improving the initialization of guidewire tracking, it also helps to improve the visibility of the guidewire during interventional procedures. Our study shows that even highly complicated guidewires can mostly be localized within 5 seconds using fewer than 6 clicks.
Hierarchical guidewire tracking in fluoroscopic sequences
Peng Wang, Ying Zhu, Wei Zhang, et al.
In this paper, we present a novel hierarchical framework for guidewire tracking in image-guided interventions. Our method can automatically and robustly track a guidewire in fluoroscopy sequences during interventional procedures. The method consists of three main components: learning-based guidewire segment detection, robust and fast rigid tracking, and non-rigid guidewire tracking, each aiming to handle guidewire motion at a specific level. The learning-based segment detection identifies small segments of a guidewire at the level of individual frames and provides unique primitive features for subsequent tracking. Based on the identified guidewire segments, the rigid tracking method robustly tracks the guidewire across successive frames, assuming that the major motion of the guidewire is rigid, caused mainly by breathing motion and table movement. Finally, a non-rigid tracking algorithm finely deforms the guidewire to provide an accurate shape. The presented guidewire tracking method has been evaluated on a test set of 47 sequences with more than 1000 frames. Quantitative evaluation demonstrates that the mean tracking error on the guidewire body is less than 2 pixels, so the presented method has great potential for applications in image-guided interventions.
A comparison of line enhancement techniques: applications to guide-wire detection and respiratory motion tracking
Vincent Bismuth, Laurence Vancamberg, Sébastien Gorges
During interventional radiology procedures, guide-wires are usually inserted into the patient's vascular tree for diagnostic or therapeutic purposes. These procedures are monitored with an X-ray interventional system providing images of the interventional devices navigating through the patient's body. The automatic detection of such tools by image processing has matured over the past years and enables applications ranging from image enhancement to multimodal image fusion. Sophisticated detection methods are emerging, which rely on a variety of device-enhancement techniques. In this article we review and classify these techniques into three families. We chose a state-of-the-art approach from each family and built a rigorous framework to compare their detection capability and computational complexity. Through simulations and the intensive use of ROC curves we demonstrate that Hessian-based methods are the most robust to strong curvature of the devices and that the family of rotated-filter techniques is the best suited for detecting low-CNR, low-curvature devices. The steerable filter approach demonstrated less interesting detection capabilities and appears to be the most expensive to compute. Finally we demonstrate the interest of automatic guide-wire detection for a clinical application: the compensation of respiratory motion in multimodal image fusion.
3D variational brain tumor segmentation on a clustered feature set
Karteek Popuri, Dana Cobzas, Martin Jagersand, et al.
Tumor segmentation from MRI data is a particularly challenging and time-consuming task. Tumors have a large diversity in shape and appearance, with intensities overlapping those of normal brain tissues; in addition, an expanding tumor can deflect and deform nearby tissue. Our work addresses these last two difficult problems. We use the available MRI modalities (T1, T1c, T2) and their texture characteristics to construct a multi-dimensional feature set, and we extract clusters that provide a compact representation of the essential information in these features. The main idea in this paper is to incorporate these clustered features into the 3D variational segmentation framework. In contrast to previous variational approaches, we propose a segmentation method that evolves the contour in a supervised fashion. The segmentation boundary is driven by the learned inside- and outside-region voxel probabilities in the cluster space. We incorporate prior knowledge about normal brain tissue appearance during the estimation of these region statistics. In particular, we use a Dirichlet prior that discourages the clusters in the ventricles from being assigned to the tumor and hence better disambiguates the tumor from brain tissue. We show the performance of our method on real MRI scans. The experimental dataset includes difficult instances: tumors that are inhomogeneous in appearance, small in size, and in close proximity to major structures in the brain. Our method shows good results on these test cases.
Simultaneous segmentation of the bone and cartilage surfaces of a knee joint in 3D
Y. Yin, X. Zhang, D. D. Anderson, et al.
We present a novel framework for the simultaneous segmentation of multiple interacting surfaces belonging to multiple mutually interacting objects. The method is a non-trivial extension of our previously reported optimal multi-surface segmentation. Considering the example application of knee-cartilage segmentation, the framework consists of the following main steps: 1) Shape model construction: building a mean shape for each bone of the joint (femur, tibia, patella) from interactively segmented volumetric datasets, and using the resulting mean-shape model to identify cartilage, non-cartilage, and transition areas on the mean-shape bone surfaces. 2) Presegmentation: employing an iterative optimal surface detection method to achieve approximate segmentation of the individual bone surfaces. 3) Cross-object surface mapping: detecting inter-bone equidistant separating sheets to help identify corresponding vertex pairs for all interacting surfaces. 4) Multi-object, multi-surface graph construction and final segmentation: constructing a single multi-bone, multi-surface graph so that two surfaces (bone and cartilage) with zero or non-zero intervening distance can be detected for each bone of the joint, according to whether cartilage is locally absent or present on the bone. To define inter-object relationships, corresponding vertex pairs identified using the separating sheets were interlinked in the graph. The graph optimization algorithm acted on the entire multi-object, multi-surface graph to yield a globally optimal solution. The segmentation framework was tested on 16 MR-DESS knee-joint datasets from the Osteoarthritis Initiative database. The average signed surface positioning error for the 6 detected surfaces ranged from 0.00 to 0.12 mm. When independently initialized, the signed reproducibility error of bone and cartilage segmentation ranged from 0.00 to 0.26 mm. The results showed that this framework provides robust, accurate, and reproducible segmentation of the knee-joint bone and cartilage surfaces of the femur, tibia, and patella. As a general segmentation tool, the developed framework can be applied to a broad range of multi-object segmentation problems.
Segmentation of 3D tubular structures based on 3D intensity models and particle filter tracking
Stefan Wörz, William J. Godinez, Karl Rohr
We introduce a new approach for tracking-based segmentation of 3D tubular structures. The approach is based on a novel combination of a 3D cylindrical intensity model and particle filter tracking. In comparison to earlier work we utilize a 3D intensity model as the measurement model of the particle filter, thus a more realistic 3D appearance model is used that directly represents the image intensities of 3D tubular structures within semiglobal regions-of-interest. We have successfully applied our approach using 3D synthetic images and real 3D MRA image data of the human pelvis.
Posters: Classification
A way toward analyzing high-content bioimage data by means of semantic annotation and visual data mining
Julia Herold, Sylvie Abouna, Luxian Zhou, et al.
In recent years, bioimaging has turned from qualitative measurements towards a high-throughput, high-content modality providing multiple variables for each biological sample analyzed. We present a system that combines machine-learning-based semantic image annotation with visual data mining to analyze such multivariate bioimage data. Machine learning is employed for automatic semantic annotation of regions of interest, and this annotation is the prerequisite for a biological object-oriented exploration of the feature space derived from the image variables. With the aid of visual data mining, the obtained data can be explored simultaneously in the image as well as in the feature domain. Especially when little is known about the underlying data, for example when exploring the effects of a drug treatment, visual data mining can greatly aid the process of data evaluation. We demonstrate how our system is used for image evaluation to obtain information relevant to diabetes studies and the screening of new anti-diabetes treatments. Cells of the islets of Langerhans and the whole pancreas in pancreas tissue samples are annotated, and object-specific molecular features are extracted from aligned multichannel fluorescence images. These are interactively evaluated for cell-type classification in order to determine cell number and mass. Only a few parameters need to be specified, which makes the system usable by non-experts and allows for high-throughput analysis.
A comparative study in ultrasound breast imaging classification
The American College of Radiology introduced a classification standard, the Breast Imaging Reporting and Data System (BI-RADS), to standardize the reporting of ultrasound findings, clarify their interpretation, and facilitate communication between clinicians. The effective use of new technologies to support healthcare initiatives is important, and current research is moving towards implementing computer tools in the diagnostic process. Initially, a detailed study was carried out to evaluate the performance of two commonly used appearance-based classification algorithms, based on principal component analysis (PCA) and two-dimensional linear discriminant analysis (2D-LDA). The study showed that these two appearance-based approaches are not capable of handling the classification of ultrasound breast image lesions. Therefore, further investigation of a popular feature-based classifier, the support vector machine (SVM), was conducted. A pre-processing step before feature-based classification is feature extraction, which involves shape, texture, and edge descriptors for the region of interest (ROI); the input dataset for SVM classification comes from fully automated ROI detection. We achieved success rates of 0.550 with PCA, 0.500 with LDA, and 0.931 with SVM. The best feature combination for SVM classification combines the shape, texture, and edge descriptors, with sensitivity 0.840 and specificity 0.968. This paper briefly reviews the background to the project and then details the ongoing research. In conclusion, we discuss the contributions, limitations, and future plans of our work.
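The reported sensitivity and specificity follow directly from the confusion matrix of a classifier's predictions; a minimal sketch, with label 1 standing for the positive (malignant) class:

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # missed positives
    tn = np.sum((y_true == 0) & (y_pred == 0))  # true negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false alarms
    return tp / (tp + fn), tn / (tn + fp)

# Toy predictions: one miss and one false alarm out of 8 cases
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
```

The same computation, applied to the SVM's lesion predictions, yields the 0.840/0.968 figures quoted above.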
Probabilistic classification of intracranial gliomas in digital microscope images based on EGFR quantity
Marcin Grzegorzek, Marianna Buckan, Sigrid Horn
A glioma is a type of cancer occurring, in the majority of cases, in the brain. The World Health Organization (WHO) assigns this tumor a grade from I to IV, with I being the least aggressive and IV the most aggressive. In glioma cells of grade IV, the epidermal growth factor receptors (EGFRs) are overexpressed. In this paper we hypothesize that this overexpression also occurs in gliomas of grades I to III. Moreover, we present a medical study aiming to determine the correlation between the WHO classification and the EGFR quantity in glioma tissue. We define five quantity classes for EGFR. First, results of immunohistochemical staining on brain glioma slices, which visualizes the EGFR quantity, are examined under an optical microscope and manually classified into these five classes. We then propose to perform this classification automatically using statistical pattern recognition techniques on digital images. For this, digital microscope images of glioma are acquired and their histograms computed. Afterwards, all five EGFR quantity classes (image classes) are statistically modeled using training samples. This allows a fully automatic classification of unknown images into one of the five classes using Maximum Likelihood (ML) estimation. Experimental results show that the automatic EGFR quantity classification achieves quite high accuracy while being much faster than manual labeling by a human expert.
Integrated feature extraction and selection for neuroimage classification
Feature extraction and selection are of great importance in neuroimage classification for identifying informative features and reducing feature dimensionality, which are generally implemented as two separate steps. This paper presents an integrated feature extraction and selection algorithm with two iterative steps: constrained subspace learning based feature extraction and support vector machine (SVM) based feature selection. The subspace learning based feature extraction focuses on the brain regions with higher possibility of being affected by the disease under study, while the possibility of brain regions being affected by disease is estimated by the SVM based feature selection, in conjunction with SVM classification. This algorithm can not only take into account the inter-correlation among different brain regions, but also overcome the limitation of traditional subspace learning based feature extraction methods. To achieve robust performance and optimal selection of parameters involved in feature extraction, selection, and classification, a bootstrapping strategy is used to generate multiple versions of training and testing sets for parameter optimization, according to the classification performance measured by the area under the ROC (receiver operating characteristic) curve. The integrated feature extraction and selection method is applied to a structural MR image based Alzheimer's disease (AD) study with 98 non-demented and 100 demented subjects. Cross-validation results indicate that the proposed algorithm can improve performance of the traditional subspace learning based classification.
Estimating the body portion of CT volumes by matching histograms of visual words
Johannes Feulner, S. Kevin Zhou, Sascha Seifert, et al.
Being able to automatically determine which portion of the human body is shown by a CT volume image offers various possibilities, such as automatic labeling of images or initializing subsequent image analysis algorithms. This paper presents a method that takes a CT volume as input and outputs the vertical body coordinates of its top and bottom slice in a normalized coordinate system whose origin and unit length are determined by anatomical landmarks. Each slice of a volume is described by a histogram of visual words: feature vectors consisting of an intensity histogram and a SURF descriptor are first computed on a regular grid and then assigned to the closest visual words to form a histogram. The vocabulary of visual words is a quantization of the feature space, obtained by offline clustering of a large number of feature vectors from prototype volumes into visual words (cluster centers) via the K-Means algorithm. For a set of prototype volumes whose body coordinates are known, the slice descriptions are computed in advance. The body coordinates of a test volume are then computed by a 1D rigid registration of the test volume with the prototype volumes in the axial direction, where the similarity of two slices is measured by comparing their histograms of visual words. Cross-validation on a dataset of 44 volumes demonstrated the robustness of the results: even for test volumes of only about 20 cm in height, the average error was 15.8 mm.
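The per-slice visual-word histogram can be sketched as follows. This is a hedged illustration, not the authors' implementation: the feature shapes, the given vocabulary, and the use of histogram intersection as the similarity measure are assumptions (the paper does not specify its comparison function).

```python
import numpy as np

def slice_histogram(features, vocabulary):
    # Assign each feature vector of a slice to its nearest visual word
    # (cluster centre) and count word occurrences, yielding a normalized
    # histogram that describes the slice.
    d = np.linalg.norm(features[:, None, :] - vocabulary[None, :, :], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

def histogram_similarity(h1, h2):
    # Histogram intersection: 1.0 for identical normalized histograms.
    return float(np.minimum(h1, h2).sum())
```

The 1D axial registration would then slide a test volume's stack of histograms along a prototype's stack and keep the offset maximizing the summed slice similarity.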
Voxel-based discriminant map classification on brain ventricles for Alzheimer's disease
One major hallmark of Alzheimer's disease (AD) is the loss of neurons in the brain. In many cases, medical experts use magnetic resonance imaging (MRI) to qualitatively assess the neuronal loss through the shrinkage or enlargement of structures of interest. The brain ventricles are a popular choice: they are easily detectable in clinical MR images due to the high contrast of the cerebrospinal fluid (CSF) with the rest of the parenchyma, and atrophy in any periventricular structure directly leads to ventricle enlargement. For quantitative analysis, volume is the common choice. However, volume is a gross measure and cannot capture the full complexity of the anatomical shape. Since most existing shape descriptors are complex and difficult to reproduce, more straightforward and robust ways to extract ventricle shape features are preferred for diagnosis. In this paper, a novel ventricle-shape-based classification method for Alzheimer's disease is proposed. A training process generates two probability maps for the two training classes: healthy controls (HC) and AD patients. Subtracting the HC probability map from the AD probability map yields a 3D ventricle discriminant map. A matching coefficient is then calculated between each subject and the discriminant map, and an adjustable cut-off point on the matching coefficients separates the two classes. Generally, the higher the cut-off point, the higher the specificity, at the price of lower sensitivity, and vice versa. Benchmarked against volume-based classification, the area under the ROC curve for the proposed method is as high as 0.86, compared with only 0.71 for the volume-based method.
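The discriminant-map pipeline can be sketched with binary ventricle masks. A minimal illustration under assumed conventions (mask averaging as the probability map, an inner product as the matching coefficient); not the paper's code.

```python
import numpy as np

def discriminant_map(ad_masks, hc_masks):
    # Voxel-wise probability maps from stacks of binary ventricle masks;
    # the AD map minus the HC map gives the discriminant map.
    return np.mean(ad_masks, axis=0) - np.mean(hc_masks, axis=0)

def matching_coefficient(mask, dmap):
    # Overlap of a subject's ventricle mask with the discriminant map.
    return float(np.sum(mask * dmap))

def classify(coeff, cutoff):
    # Raising the cut-off trades sensitivity for specificity.
    return 'AD' if coeff > cutoff else 'HC'
```

Sweeping `cutoff` over the training coefficients traces out the ROC curve reported in the abstract.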
Fast unsupervised hot-spot detection in 1H-MR spectroscopic imaging data using ICA
Markus T. Harz, Volker Diehl, Bernd Merkel, et al.
Independent Component Analysis (ICA) is a blind source separation technique that has previously been applied to various time-varying signals. It may in particular be utilized to study 1H-MR spectroscopic imaging (MRSI) data. The presented work first investigates preprocessing and parameterization for ICA on simulated data in order to assess different strategies. We then applied ICA processing to 2D/3D brain and prostate MRSI data obtained from two healthy volunteers and 17 patients. We conducted a correlation analysis of the mixing and separating matrices resulting from ICA processing with maps obtained from metabolite quantitations in order to elucidate the relationship between quantitative and ICA results. We found that the mixing matrices corresponding to the estimated independent components correlate highly with the metabolite maps in some cases and differ in others. We provide explanations and speculations for these observations and propose a scheme that exploits them for hot-spot detection. In our experience, ICA is much faster than the calculation of metabolic maps. Additionally, water and lipid contaminations are removed from the data along the way, so the user need not manually exclude spectroscopic voxels from processing or analysis. ICA results show hot spots in the data even where quantitation-based metabolic maps are difficult to assess due to noisy data or macromolecule distortions.
Posters: Diffusion Tensor Imaging
A comparative study of diffusion tensor field transformations
Diffusion imaging provides the ability to study white matter connectivity and integrity noninvasively. Diffusion weighted imaging contains orientation information that must be appropriately reoriented when applying spatial transforms to the resulting imaging data. Alexander et al. introduced two methods to resolve the reorientation problem. In the first method, the rotation matrix is computed from the transform and the tensors are reoriented accordingly. The second method, called the preservation of principal direction (PPD) method, takes the deformation and rotation components into account to estimate the rotation matrix. These methods cannot be directly used for higher-order diffusion models (e.g., Q-ball). We have introduced a novel technique called gradient rotation, in which the rotation is applied directly to the diffusion sensitizing gradients, providing a voxel-by-voxel estimate of the diffusion gradients instead of a volume-by-volume estimate. A PPD-equivalent gradient rotation can be computed using principal component analysis (PCA). Four subjects were spatially normalized to a template subject using a multistage registration sequence that includes nonlinear diffeomorphic demons registration. Comparative results of all four methods are shown. All the methods perform similarly, with PPD (original and gradient-equivalent) slightly better than rigid rotation, since it accounts for the shear and scale components. The results also demonstrate that multistage registration is a viable method for spatial normalization of diffusion models.
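For the standard second-order tensor case, the rigid-rotation reorientation that these methods build on can be sketched via the polar decomposition of the local affine transform. This is an illustrative sketch, not the authors' implementation; the function names are assumptions.

```python
import numpy as np

def finite_strain_rotation(F):
    # Extract the rigid rotation from a local affine transform F via the
    # polar decomposition F = R S; R = U V^T from the SVD of F.
    U, _, Vt = np.linalg.svd(F)
    R = U @ Vt
    if np.linalg.det(R) < 0:   # guard against an improper (reflecting) factor
        U[:, -1] *= -1
        R = U @ Vt
    return R

def reorient_tensor(D, F):
    # Rotate a 3x3 diffusion tensor with the rotation extracted from F.
    R = finite_strain_rotation(F)
    return R @ D @ R.T
```

The gradient rotation technique of the paper instead applies such a rotation to the diffusion sensitizing gradients themselves, which carries over to higher-order models.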
Real-time magnetic resonance Q-Ball imaging using Kalman filtering with Laplace-Beltrami regularization
Rachid Deriche, Jeff Calder
Diffusion MRI has become an established research tool for the investigation of tissue structure and orientation, from which a number of variations have stemmed, such as Diffusion Tensor Imaging (DTI), Diffusion Spectrum Imaging (DSI), and Q-Ball Imaging (QBI). The acquisition and analysis of such data is very challenging due to its complexity. Recently, an exciting new Kalman filtering framework has been proposed for DTI and QBI reconstructions in real time during the repetition time (TR) of the acquisition sequence. In this article, we first revisit and thoroughly analyze this approach and show that it is actually sub-optimal: due to the Laplace-Beltrami regularization term, it does not recursively minimize the intended criterion. We then propose a new approach that implements the QBI reconstruction algorithm in real time using a fast and robust Laplace-Beltrami regularization without sacrificing the optimality of the Kalman filter. We demonstrate that our method solves the correct minimization problem at each iteration and recursively provides the optimal QBI solution. We validate on real QBI data that our proposed real-time method is equivalent in terms of QBI estimation accuracy to the standard off-line processing techniques and outperforms the existing solution. This opens new and interesting opportunities for real-time feedback to clinicians during an acquisition, and also for research into optimal diffusion orientation sets, real-time fiber tracking, and connectivity mapping.
Generalized analytic expressions for the b matrix of twice-refocused spin echo pulse sequence
The diffusion weighted imaging (DWI) technique has been used to help understand human brain white matter fiber structures in vivo. Standard diffusion tensor magnetic resonance imaging (DTI) tractography, based on the second-order diffusion tensor model, has limited ability to resolve complex fiber tracts. The generalized diffusion tensor (GDT) imaging technique has been proposed to overcome these limitations of the standard second-order tensor model. Based on the GDT model, a generalized partial differential equation (PDE) governing the anisotropic diffusion process can be derived. For the purpose of solving the PDE and computing the generalized diffusion tensor, we derive a generalized analytic expression for the high-order b matrix in the case of the twice-refocused spin echo (TRSE) pulse sequence used in DWI data acquisition. The TRSE pulse sequence is considered because of its ability to null the eddy currents generated during scanning. The b matrix is computed by integrating the transverse precessing magnetization between the excitation time and the echo time (TE). In our experiments, we show computational results for the generalized b matrix based on the new analytic expression. In addition, comparisons between the generalized b matrix computed using our formula and the second-order b matrix given by the MRI machine are presented. Finally, the characteristics of the formula and the data are discussed.
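As a sanity check on b-value computation by integration, the scalar b-value of a simple pulsed-gradient spin echo (not the TRSE sequence treated in the paper) can be integrated numerically and compared against the standard Stejskal-Tanner expression b = γ²G²δ²(Δ − δ/3). A sketch under those simplifying assumptions, with rectangular gradient lobes and the sign flip of the 180° pulse folded into the effective gradient:

```python
import numpy as np

GAMMA = 2.675e8  # proton gyromagnetic ratio [rad/(s*T)]

def numeric_b_value(G, delta, Delta, dt=1e-6):
    # Effective gradient: +G on [0, delta), sign-flipped by the 180-degree
    # pulse, so -G on [Delta, Delta + delta).
    t = np.arange(0.0, Delta + delta, dt)
    g = np.where(t < delta, G, 0.0) + np.where(t >= Delta, -G, 0.0)
    q = GAMMA * np.cumsum(g) * dt        # q(t) = gamma * integral of g
    return float(np.sum(q ** 2) * dt)    # b = integral of q(t)^2 dt

def analytic_b_value(G, delta, Delta):
    # Stejskal-Tanner expression for the same sequence.
    return GAMMA ** 2 * G ** 2 * delta ** 2 * (Delta - delta / 3.0)
```

The same numeric integration generalizes to arbitrary waveforms (and, component-wise, to the full b matrix), which is how an analytic expression such as the one derived in the paper can be verified.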
Posters: Functional Imaging
An independent component analysis based tool for exploring functional connections in the brain
S. M. Rolfe, L. Finney, R. F. Tungaraza, et al.
This work introduces a MATLAB-based tool developed for investigating functional connectivity in the brain. Independent component analysis (ICA) is used as a measure of voxel similarity which allows the user to find and view statistically independent maps of correlated voxels. These maps of correlated voxel activity may indicate functionally connected regions. Specialized clustering and feature extraction techniques have been designed to find and characterize clusters of activated voxels, which allows comparison of the spatial maps of correlation across subjects. This method is also used to compare the ICA generated images to fMRI images showing statistically significant activations generated by Statistical Parametric Mapping (SPM). The capability of querying specific coordinates in the brain supports integration and comparison with other data modalities such as Cortical Stimulation Mapping and Single Unit Recordings.
Posters: Filtering, Restoration, and Enhancement
Optimized GPU framework for semi-implicit AOS scheme based speckle reducing nonlinear diffusion
Tian Cao, Bo Wang, Dong C. Liu
Ultrasound image quality is degraded by the presence of speckle, which reduces image contrast resolution and makes the detection of small features difficult. Traditional nonlinear diffusion filtering for speckle reduction with explicit schemes can achieve desirable results, but explicit schemes are only stable for very small time steps. Semi-implicit additive operator splitting (AOS) schemes for nonlinear diffusion are stable for all time step sizes and more efficient than the traditional explicit schemes. However, AOS schemes are still too slow for real-time speckle reduction in ultrasound images. Current graphics processing units (GPUs) offer an opportunity to boost the computation speed of AOS schemes through high computational power at low cost. In this paper, an optimized GPU framework for AOS schemes is presented. By using the well-established method of cyclic reduction for tridiagonal systems, we are able to implement the AOS schemes on the GPU. Experiments comparing a CPU implementation of the AOS schemes with our GPU-based framework show that our method is about 10 times faster than the CPU implementation. The presented framework addresses local coherence anisotropic diffusion, but it can be generalized to the class of nonlinear diffusion methods that can be discretized by AOS schemes.
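The paper solves the tridiagonal AOS systems with cyclic reduction on the GPU; as a CPU-side reference, one semi-implicit 1D diffusion step can be sketched with the sequential Thomas algorithm instead. Boundary handling and all names here are illustrative assumptions, not the paper's code.

```python
import numpy as np

def thomas_solve(a, b, c, d):
    # Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal.
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def aos_1d_step(u, g, tau):
    # One semi-implicit step (I - tau * A(g)) u_new = u along one axis,
    # with diffusivities g at the pixels and zero-flux boundaries.
    gl = np.r_[0.0, 0.5 * (g[1:] + g[:-1])]   # left interface diffusivity
    gr = np.r_[0.5 * (g[1:] + g[:-1]), 0.0]   # right interface diffusivity
    a = -tau * gl
    c = -tau * gr
    b = 1.0 + tau * (gl + gr)
    return thomas_solve(a, b, c, u.astype(float))
```

The semi-implicit system is unconditionally stable, which is why `tau` can be large; cyclic reduction solves the same systems but in the parallel, GPU-friendly form the paper exploits.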
Enhanced detection in CT colonography using adaptive diffusion filtering
Computer-aided detection (CAD) is a computerized procedure in medical science that supports the medical team's interpretations and decisions. CAD often uses information from a medical imaging modality such as Computed Tomography to detect suspicious lesions. Algorithms to detect these lesions are based on geometric models that describe the local structures and thus provide potential region candidates. Geometrical descriptive models are highly dependent on data quality, which may affect the false-positive rates in CAD. In this paper we propose an efficient adaptive diffusion technique that adaptively controls the diffusion flux of the local structures in the data using robust statistics. The proposed method acts isotropically in homogeneous regions and anisotropically in the vicinity of jump discontinuities. It structurally enhances the data and makes the geometrical descriptive models robust. For the iterative solver, we use an efficient gradient descent flow solver based on a PDE formulation of the problem. The whole proposed strategy, which couples the adaptive diffusion filter with gradient descent flows, has been developed and evaluated on clinical data in the application to colonic polyp detection in Computed Tomography Colonography.
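A minimal sketch of one edge-stopping diffusion step in the Perona-Malik spirit — diffusing freely in homogeneous regions and throttling the flux across discontinuities. This is not the paper's robust-statistics flux (which is not specified in the abstract); the Gaussian edge-stopping function, the periodic boundaries via `np.roll`, and all names are simplifying assumptions.

```python
import numpy as np

def adaptive_diffusion_step(img, kappa=0.1, tau=0.2):
    # One explicit diffusion step: for each of the four neighbor
    # directions, compute the difference, attenuate it by an
    # edge-stopping conductance exp(-(grad/kappa)^2), and accumulate.
    # kappa acts as the edge-contrast scale; tau is the time step
    # (tau <= 0.25 keeps the explicit scheme stable).
    flux_sum = np.zeros_like(img, dtype=float)
    for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
        grad = np.roll(img, shift, axis=axis) - img
        flux_sum += np.exp(-(grad / kappa) ** 2) * grad
    return img + tau * flux_sum
```

Iterating this step pre-smooths the CT data so that the geometric lesion descriptors see fewer noise-induced candidates.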
Device enhancement using rotational x-ray angiography
Gert A. F. Schoonenberg, Peter W. van den Houten, Raoul Florent, et al.
Implantable cardiac devices, such as stents and septal defect closure devices, are sometimes difficult to see on angiographic X-ray projection images. We present a method to enhance the visibility of these devices in rotational X-ray angiography acquisitions using automated marker detection and motion compensation. Automatic marker detection allows registration of the devices in the images of the rotational run. Motion compensation is done by warping the images to a specific reference position. Averaging several of these motion-compensated images together with the reference frame results in an enhanced image with improved visibility, due to an increase in the contrast of the device with the background structure. This allows the clinician to look at the device from multiple angles and better appreciate its 3D geometry. In particular, enhancement of rotational acquisitions, compared to standard enhanced fixed-angle acquisitions, allows the clinician to better perceive any asymmetry in the deployed device.
Parameter optimization for image denoising based on block matching and 3D collaborative filtering
Ramu Pedada, Emin Kugu, Jiang Li, et al.
Clinical MRI images are generally corrupted by random noise during acquisition, which blurs subtle structural features. Many denoising methods have been proposed to remove noise from corrupted images, but at the expense of distorted structure features. There is therefore always a compromise between removing noise and preserving structure information. For a specific denoising method, it is crucial to tune its parameters so that the best tradeoff is obtained. In this paper, we define several cost functions to assess the quality of noise removal and of structure preservation in the denoised image. The Strength Pareto Evolutionary Algorithm 2 (SPEA2) is utilized to simultaneously optimize these cost functions by modifying the parameters of the denoising method. The effectiveness of the algorithm is demonstrated by applying the proposed optimization procedure to image denoising using block matching and 3D collaborative filtering. Experimental results show that the proposed optimization algorithm can significantly improve the performance of image denoising methods in terms of both noise removal and structure preservation.
Edge preserving image smoothing using nonlinear diffusion and a semi-local edge detection technique in digital mammography images
Mark M. Roden, Lawrence W. Bassett M.D., Daniel Valentino
Digital mammography images often contain noise that is inversely related to the dose given to the patient. The signal-to-noise ratio is typically improved by increasing the dose, as noise reduction methods tend to blur clinically important edges and details, such as fibers or calcifications. A new algorithm is presented that reduces noise while preserving features relevant in digital mammography, allowing for an overall reduction of patient dose.
Contrast enhancement of subcutaneous blood vessel images by means of visible and near-infrared hyper-spectral imaging
Jaka Katrašnik, Miran Bürmen, Franjo Pernuš, et al.
Visualization of subcutaneous veins is very difficult with the naked eye, but important for the diagnosis of medical conditions and for medical procedures such as catheter insertion and blood withdrawal. Moreover, recent studies showed that images of subcutaneous veins could be used for biometric identification. The majority of methods for enhancing the contrast between the subcutaneous veins and surrounding tissue are based on simple imaging systems utilizing CMOS or CCD cameras with LED illumination, capable of acquiring images from the near-infrared spectral region, usually near 900 nm. However, such simplified imaging methods cannot exploit the full potential of the spectral information. In this paper, a new, highly versatile method for enhancing the contrast of subcutaneous veins is presented, based on a state-of-the-art high-resolution hyper-spectral imaging system covering the spectral region from 550 to 1700 nm. First, a detailed analysis of the contrast between the subcutaneous veins and the surrounding tissue as a function of wavelength was performed for several positions on the human arm, in order to extract the spectral regions with the highest contrast. The highest-contrast images were acquired at 1100 nm; however, combining the individual images from the extracted spectral regions with the proposed contrast enhancement method resulted in a single image with up to ten-fold better contrast. The proposed method has therefore proved to be a useful tool for visualization of subcutaneous veins.
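Selecting the band with the highest vein/tissue contrast can be sketched as follows, assuming given vein and tissue masks and using the Michelson contrast; the paper's actual contrast measure and band-combination scheme are not reproduced here, and all names are illustrative.

```python
import numpy as np

def best_contrast_band(cube, vein_mask, tissue_mask):
    # cube: (rows, cols, bands) hyperspectral stack. For each band, compute
    # the Michelson contrast between mean vein and mean tissue intensity,
    # then return the index of the band with the highest contrast.
    contrasts = []
    for band_idx in range(cube.shape[2]):
        band = cube[:, :, band_idx]
        v, t = band[vein_mask].mean(), band[tissue_mask].mean()
        contrasts.append(abs(v - t) / (v + t))
    return int(np.argmax(contrasts)), contrasts
```

A combination step would then fuse the top-ranked bands (e.g., by weighted averaging) rather than keeping a single band.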
An MRI-guided PET partial volume correction method
Accurate quantification of positron emission tomography (PET) is important for the diagnosis and assessment of cancer treatment. The low spatial resolution of PET imaging introduces partial volume effects that bias quantification. We propose a PET partial volume correction method that uses high-resolution anatomical information from magnetic resonance images (MRI). The corrected PET image is obtained by deconvolving the PET point spread function (PSF) while preserving edges present in both the PET and the aligned MR images. The correction is implemented in a Bayesian deconvolution framework minimized by a conjugate gradient method. The method is evaluated on simulated phantom and brain PET images. The results show that the method effectively restores 102 ± 7% of the true PET activity for structures larger than the full width at half maximum of the PSF. We also applied the method to synthesized brain PET data. The method does not require prior information about tracer activity within tissue regions. It can offer partial volume correction for various PET applications and can be particularly useful for combined PET/MRI studies.
Image quality improvement based on wavelet regularization for cone beam breast CT (CBBCT)
Dong Yang, Ruola Ning, Xiaohua Zhang, et al.
Flat-panel detector-based cone beam CT usually employs the FDK algorithm as the reconstruction method. Traditionally, the row-wise ramp filtering is regularized by noise-suppression windows, such as the Shepp-Logan or Hamming windows, before backprojection in order to obtain an acceptable (in terms of SNR) reconstructed 3-D volume. Though noise is reduced, this window-regularized linear filtering can degrade spatial resolution and thus reduce the sharpness of structure boundaries within the breast image, impeding in particular the detection of the small calcifications and very small abnormalities that may indicate early breast cancer. Furthermore, the reconstructed images are still characterized by smudges. To combat these shortcomings, a wavelet regularization method is applied to the projection data before the row-wise ramp filtering inherent in FDK.
An approach for automatic selecting of optimal data acquisition window for magnetic resonance coronary angiography
Tetsuo Sato, Tomohisa Okada, Shigehide Kuhara, et al.
The purpose of this study is to develop an automated method, requiring no user interaction, for optimally placing the ROI used to select the optimal data acquisition window in coronary MRA. One of the major problems of magnetic resonance coronary angiography (MRCA) is the effective suppression of coronary motion due to respiration and cardiac contraction. To compensate for cardiac movement, data acquisition is generally limited to the coronary artery rest period, found mainly during end-diastole and referred to as the cardiac rest period. This rest period is usually determined by the operator, which makes the choice subjective and dependent on considerable experience. For placing the region of interest, the right coronary artery is known to be an appropriate region for determining the cardiac rest period. We propose a method based on the extraction of regions of intensive change and the calculation of correlation coefficients. We tested the algorithm on two sets of clinical MR images and present the results.
Improved vessel enhancement for fully automatic coronary modeling
Vincent Auvray, Uwe Jandt, Raoul Florent, et al.
3D coronary modeling extracts the centerlines and widths of the coronary arteries from a rotational sequence of angiographies. This process relies heavily on a preliminary filtering of the 2D angiograms that enhances the vessels. We propose an improved vessel enhancement method specifically designed for this application. It keeps the advantages of Hessian-based extraction methods (speed, robustness, multiscale behavior) while bypassing their most important limitations: the blurring of bifurcations and the incomplete filling of very large vessels. The major contributions of this paper are twofold. First, the classical centered kernel used in Hessian-based methods is replaced with an elongated off-centered kernel. The new filter detects the different orientations involved at a bifurcation: it responds correctly to 'half vessels' beginning at the considered pixel (as opposed to the centered classical filter). The proposed "semi-oriented ridge" filter is also more robust to noise, and remains multi-scale and quickly computable. Second, an original bifurcation detection and enhancement method is presented, based on the following heuristic: "bifurcations have (at least) three vessels in their immediate neighborhood". More precisely, the semi-oriented ridge responses for each tested orientation θ∈]-π,π] are stored in a circular histogram. The proposed bifurcation energy is the height of the third peak in this histogram, which takes a significant value only at bifurcations. The performance of the complete framework is demonstrated both on the produced vessel maps and on the final modeling results.
Posters: Motion
Consistency of flow quantifications in tridirectional phase-contrast MRI
R. Unterhinninghofen, S. Ley, R. Dillmann
Tridirectionally encoded phase-contrast MRI is a technique to non-invasively acquire time-resolved velocity vector fields of blood flow. These may not only be used to analyze pathological flow patterns, but also to quantify flow at arbitrary positions within the acquired volume. In this paper we examine the validity of this approach by analyzing the consistency of related quantifications instead of comparing it with an external reference measurement. Datasets of the thoracic aorta were acquired from 6 pigs, 1 healthy volunteer and 3 patients with artificial aortic valves. Using in-house software an elliptical flow quantification plane was placed manually at 6 positions along the descending aorta where it was rotated to 5 different angles. For each configuration flow was computed based on the original data and data that had been corrected for phase offsets. Results reveal that quantifications are more dependent on changes in position than on changes in angle. Phase offset correction considerably reduces this dependency. Overall consistency is good with a maximum variation coefficient of 9.9% and a mean variation coefficient of 7.2%.
Detection of non-uniform multi-body motion in image time-series using saccades-enhanced phase correlation
Evgeny Gladilin, Roland Eils
Unsupervised analysis of time-series of live-cell images is one of the important tools of quantitative biology. Due to permanent cell motility and displacements of subcellular structures, microscopic images exhibit intrinsic non-uniform motion. In this article, we present a novel approach for the detection of non-uniform multi-body motion, based on combining Fourier phase correlation with iterative probing of target and background image regions, similar to the strategy known from saccadic eye movements. We derive theoretical expressions that give a plausible explanation of why this strategy is advantageous for tracking particular image patterns. Our experiments demonstrate that the proposed approach accurately detects non-uniform motion in both synthetic and live-cell images.
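The Fourier phase correlation at the core of the approach can be sketched for integer translations; subpixel refinement and the saccade-like region probing are omitted, and the shift convention below is an assumption.

```python
import numpy as np

def phase_correlation_shift(a, b):
    # Estimate the integer translation between images a and b from the
    # peak of the inverse FFT of the normalized cross-power spectrum.
    # Returns (dy, dx) such that np.roll(b, (dy, dx), axis=(0, 1))
    # realigns b with a.
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    r = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    # Map wrap-around peak positions to signed shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```

Multi-body motion produces several peaks in `r` instead of one, which is what the iterative target/background probing of the paper is designed to disentangle.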
Motion-compensated post-processing of gated cardiac SPECT images using a deformable mesh model
We present a post-reconstruction motion-compensated spatio-temporal filtering method for noise reduction in cardiac gated SPECT images. SPECT imaging suffers from low photon count due to radioactive dose limitations resulting in a high noise level in the reconstructed images. This is especially true in gated cardiac SPECT where the total number of counts is divided into a number of gates (time frames). Classical spatio-temporal filtering approaches, used in gated cardiac SPECT for noise reduction, do not accurately account for myocardium motion and brightening and therefore perform sub-optimally. The proposed post-reconstruction method consists of two steps: motion and brightening estimation and spatio-temporal motion-compensated filtering. In the first step we utilize a left ventricle model and a deformable mesh structure. The second step, which consists of motion-compensated spatio-temporal filtering, makes use of estimated myocardial motion to enable accurate smoothing. Additionally, the algorithm preserves myocardial brightening, a result of partial volume effect which is widely used as a diagnostic feature. The proposed method is evaluated quantitatively to assess noise reduction and the influence on estimated ejection fraction.
A fast and accurate method for echocardiography strain rate imaging
Vahid Tavakoli, Nima Sahba, Nima Hajebi, et al.
Recently, strain and strain rate imaging have proven superior to classical motion estimation methods for the quantitative analysis of myocardial function. In this paper, we propose a novel strain rate imaging algorithm based on a new optical flow technique that is faster and more accurate than previous correlation-based methods. The method presumes spatiotemporal constancy of the image intensity and magnitude, and makes use of spline moments in a multiresolution approach. The cardiac central point is obtained by combining the center of mass with endocardial tracking. The proposed method overcomes the intensity variations of ultrasound texture while preserving the ability to estimate motion across different motions and orientations. Evaluation on simulated, phantom (a contractile rubber balloon), and real sequences shows that this technique is more accurate and faster than previous methods.
Posters: Registration
Automatic bone registration in MR knee images for cartilage morphological analysis
Ji Hyun Yoo, Soo Kyung Kim, Helen Hong, et al.
We propose a cartilage matching technique based on the registration of the corresponding bone structures instead of using the cartilage. Our method consists of five steps. First, cartilage and corresponding bone structures are extracted by semi-automatic segmentation. Second, gross translational mismatch between corresponding bone structures is corrected by point-based rough registration. The center of inertia (COI) of each segmented bone structure is considered as the reference point. Third, the initial alignment is refined by distance-based surface registration. For fast and robust convergence of the distance measure to the optimal value, a 3D distance map is generated by the Gaussian-weighted narrow-band distance propagation. Fourth, rigid transformation of the bone surface registration is applied to the cartilage of baseline MR images. Finally, morphological differences of the corresponding cartilages are visualized by color-coded mapping and image fusion. Experimental results show that the cartilage morphological changes of baseline and follow-up MR knee images can be easily recognized by the correct registration of the corresponding bones.
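The point-based rough registration step reduces to a translation by the difference of the centers of inertia. A minimal sketch for binary masks (illustrative, not the authors' code):

```python
import numpy as np

def coi_translation(moving_mask, fixed_mask):
    # The gross translational mismatch between two segmented bone masks
    # is the difference of their centers of inertia (voxel centroids),
    # in index coordinates.
    coi_m = np.array(np.nonzero(moving_mask), dtype=float).mean(axis=1)
    coi_f = np.array(np.nonzero(fixed_mask), dtype=float).mean(axis=1)
    return coi_f - coi_m
```

This initial alignment is then refined by the distance-based surface registration using the Gaussian-weighted narrow-band distance map.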
Evaluation of moving least squares as a technique for non-rigid medical image registration
This paper evaluates the performance of two non-rigid image registration techniques. The moving least squares (MLS) technique is compared to the more common thin-plate spline (TPS) method. Both methods interpolate a set of fiducial points in registering two images. An attractive feature of the MLS method is that it seeks to minimize local scaling and shearing, producing a global transformation that is as rigid as possible. The MLS and TPS techniques are applied to two- and three-dimensional medical images. Both qualitative and quantitative comparisons are presented. The two techniques are quantitatively evaluated by computing target registration errors (TREs) at selected points of interest. Our results indicate that the MLS algorithm outperforms the TPS method, with lower TRE values and visually better registered images, suggesting that MLS may be a better candidate for registration tasks when rigid registration is insufficient but the deformation field should remain minimal.
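The TRE evaluation used for the quantitative comparison can be sketched generically; the transform interface and point layout are assumptions, not taken from the paper.

```python
import numpy as np

def target_registration_error(transform, targets, references):
    # TRE at points of interest: the distance between each transformed
    # target point and its known reference position, reported as the
    # mean over all points.
    mapped = np.array([transform(p) for p in targets])
    return float(np.mean(np.linalg.norm(mapped - references, axis=1)))
```

The same routine evaluates either method: pass the fitted MLS or TPS mapping as `transform` and compare the resulting mean TREs.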
Registration of EEG electrode positions to PET and fMRI images
Žiga Špiclin, Boštjan Likar, Franjo Pernuš
Integration and correlation of the brain's electrical (EEG) and physiological activity (PET, fMRI) is crucial for the early evaluation of patients with neurophysiological disorders, such as epilepsy. Based on the scalp-recorded EEG signals, the source image of the brain's electrical activity can be reconstructed and spatially correlated with tomographic functional images, thereby aiding the characterization and localization of epileptic foci. However, mis-localization of the electrode positions with respect to the underlying anatomy adversely affects the localization precision obtained by interpreting the source image. In this paper, a novel method for registration of EEG electrode positions to tomographic functional images of the brain is proposed. Accuracy and robustness of the registration were evaluated on three databases of real and simulated PET and real fMRI images. The registration method showed good convergence properties for both PET (>10 mm) and especially fMRI images (>30 mm). Based on Monte Carlo simulations, the obtained mean registration error of electrode positions in the tomographic functional images was in the range of 1-2 voxels. In this way, the constant bias in the reconstructed source image due to mis-registration of EEG electrode positions can be suppressed relative to the random errors induced by EEG signal noise. Ultimately, we aim to improve, or even enable, the integration and application of the many functional modalities involved in the analysis and evaluation of clinical neurophysiological disorders.
An image warping technique for rodent brain MRI-histology registration based on thin-plate splines with landmark optimization
Yutong Liu, Mariano Uberti, Huanyu Dou, et al.
Coregistration of in vivo magnetic resonance imaging (MRI) with histology provides validation of disease biomarker and pathobiology studies. Although thin-plate splines are widely used in such image registration, point landmark selection is error prone and often time-consuming. We present a technique to optimize landmark selection for thin-plate splines and demonstrate its usefulness in warping rodent brain MRI to histological sections. In this technique, contours are drawn on the corresponding MRI slices and images of histological sections. The landmarks are extracted from the contours by equal spacing and then optimized by minimizing a cost function consisting of the landmark displacement and contour curvature. The technique was validated using simulation data and brain MRI-histology coregistration in a murine model of HIV-1 encephalitis. Registration error was quantified by calculating target registration error (TRE). The TRE of approximately 8 pixels for 20-80 landmarks without optimization was stable across different landmark numbers. The optimized results were more accurate at low landmark numbers (TRE of approximately 2 pixels for 50 landmarks), while accuracy decreased at larger landmark numbers (TRE of approximately 8 pixels for 70-80 landmarks). These results demonstrate that, with optimization, registration accuracy decreases as the number of landmarks grows, and that optimized landmark selection offers more confidence in MRI-histology registration using thin-plate splines.
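The equal-spacing landmark extraction step described above can be sketched as arc-length resampling of a contour; the cost-function optimization itself is not shown, and the helper below is a hypothetical illustration:

```python
import numpy as np

def resample_contour(points, n):
    """Resample a 2D contour at n positions equally spaced in arc
    length (a sketch of the equal-spacing extraction step only)."""
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])   # cumulative arc length
    targets = np.linspace(0.0, s[-1], n)
    x = np.interp(targets, s, points[:, 0])
    y = np.interp(targets, s, points[:, 1])
    return np.stack([x, y], axis=1)

# A straight 10-unit segment resampled into 5 landmarks.
landmarks = resample_contour([[0, 0], [10, 0]], 5)
print(landmarks[:, 0])   # [ 0.   2.5  5.   7.5 10. ]
```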
Optimized graph-based mosaicking for virtual microscopy
Virtual microscopy has the potential to partially replace traditional microscopy. For virtualization, the slide is scanned once by a fully automated robotic microscope and saved digitally. Typically, such a scan results in several hundred to several thousand fields of view. Since robotic stages have positioning errors, these fields of view have to be registered locally and globally in an additional step. In this work we propose a new global mosaicking method for the creation of virtual slides, based on sub-pixel exact phase correlation for local alignment combined with Prim's minimum spanning tree algorithm for global alignment. Our algorithm allows for a robust reproduction of the original slide even in the presence of views with little to no information content. This makes it especially suitable for the mosaicking of cervical smears. These smears often exhibit large empty areas, which do not contain enough information for common stitching approaches.
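The local alignment step relies on phase correlation; a whole-pixel sketch is shown below (the paper uses a sub-pixel exact variant, and this function assumes same-sized tiles related by a circular shift):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the circular integer shift taking tile a to tile b via
    the normalized cross-power spectrum (whole-pixel version)."""
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    F /= np.maximum(np.abs(F), 1e-12)             # keep phase only
    corr = np.real(np.fft.ifft2(F))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the far half of the correlation to negative shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(1)
a = rng.random((32, 32))
b = np.roll(a, (3, -5), axis=(0, 1))              # known shift
print(phase_correlation_shift(a, b))   # (3, -5)
```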
Automated alignment of MRI brain scan by anatomic landmarks
We present a method to automate the acquisition of MR brain scans, allowing consistent alignment of diagnostic images for patient follow-up and depicting standardized anatomy for all patients. The algorithm takes as input a low-resolution acquisition that depicts the patient position within the scanner. The mid-sagittal plane dividing the brain hemispheres is automatically detected, as are bony landmarks at the front and back of the skull. The orientation and position of a subsequent diagnostic, high-resolution scan are then aligned based on these landmarks. The method was tested on 91 data sets: it was completely successful in 93.4% of cases, performed acceptably in 4.4%, and failed in 1.1%. We conclude that the method is suitable for clinical use and should prove valuable for improving the consistency of acquisitions.
COLLINARUS: collection of image-derived non-linear attributes for registration using splines
Jonathan Chappelow, B. Nicolas Bloch, Neil Rofsky M.D., et al.
We present a new method for fully automatic non-rigid registration of multimodal imagery, including structural and functional data, that utilizes multiple textural feature images to drive an automated spline-based non-linear image registration procedure. Multimodal image registration is significantly more complicated than registration of images from the same modality or protocol, on account of the difficulty of quantifying similarity between different structural and functional information, and also because of possible physical deformations resulting from the data acquisition process. The COFEMI technique for feature ensemble selection and combination has previously been demonstrated to improve rigid registration performance over intensity-based MI for images of dissimilar modalities with visible intensity artifacts. Hence, we present here the natural extension of feature ensembles for driving automated non-rigid image registration in our new technique, termed Collection of Image-derived Non-linear Attributes for Registration Using Splines (COLLINARUS). Qualitative and quantitative evaluation of the COLLINARUS scheme is performed on several sets of real multimodal prostate images and synthetic multiprotocol brain images. Multimodal (histology and MRI) prostate image registration is performed for 6 clinical data sets comprising a total of 21 groups of in vivo structural (T2-w) MRI, functional dynamic contrast enhanced (DCE) MRI, and ex vivo WMH images with cancer present. Our method determines a non-linear transformation to align WMH with the high-resolution in vivo T2-w MRI, followed by mapping of the histopathologic cancer extent onto the T2-w MRI. The cancer extent is then mapped from T2-w MRI onto DCE-MRI using the combined non-rigid and affine transformations determined by the registration.
Evaluation of prostate registration is performed by comparison with the 3 time point (3TP) representation of functional DCE data, which provides an independent estimate of cancer extent. The set of synthetic multiprotocol images, acquired from the BrainWeb Simulated Brain Database, comprises 11 pairs of T1-w and proton density (PD) MRI of the brain. Following the application of a known warping to misalign the images, non-rigid registration was then performed to recover the original, correct alignment of each image pair. Quantitative evaluation of brain registration was performed by direct comparison of (1) the recovered deformation field to the applied field and (2) the original undeformed and recovered PD MRI. For each of the data sets, COLLINARUS is compared with the MI-driven counterpart of the B-spline technique. In each of the quantitative experiments, registration accuracy was found to be significantly higher (p < 0.05) for COLLINARUS than for MI-driven B-spline registration. Over 11 slices, the mean absolute error in the deformation field recovered by COLLINARUS was found to be 0.8830 mm.
New GPU optimizations for intensity-based registration
Razik Yousfi, Guillaume Bousquet, Christophe Chefd'hotel
The task of registering 3D medical images is very computationally expensive. With CPU-based implementations of registration algorithms, it is typical to use various approximations, such as subsampling, to maintain reasonable computation times; this may, however, result in suboptimal alignments. With the constant increase in the capabilities and performance of GPUs (Graphics Processing Units), these highly vectorized processors have become a viable alternative to CPUs for image-related computation tasks. This paper describes new strategies for implementing on the GPU the computation of image similarity metrics for intensity-based registration, using in particular the latest features of NVIDIA's GeForce 8 architecture and the Cg language. Our experimental results show that the computations are many times faster. In this paper, several GPU implementations of two image similarity criteria for both intramodal and multi-modal registration are compared. In particular, we propose a new efficient and flexible solution based on the geometry shader.
Nonrigid correction of interleaving artefacts in pelvic MRI
Jason Dowling, Pierrick Bourgeat, David Raffelt, et al.
This paper presents a novel method to reduce the effects of interleaving motion artefacts in single-plane MR scanning of the pelvic region without the need for k-space information. Interleaved image (or multipacket) acquisition is frequently used to reduce cross-talk and scanning time during full pelvic MR scans. Patient motion during interleaved acquisition can result in non-linear "staircase" imaging artefacts, which are most visible on sagittal and coronal reconstructions. These artefacts can affect the segmentation of organs, registration, and visualization. A fast method has been implemented to replace artefact-affected slices in a packet with interpolated slices, based on Penney et al. (2004), whose method involves the registration of neighbouring slices to obtain correspondences, followed by linear interpolation of voxel intensities along the displacement fields. This interpolation method has been applied to correct motion-affected MRI volumes by first creating a new volume in which every axial slice from the artefact-affected packet is removed and replaced with an interpolated slice, and then, for each of these slices, using 2D non-rigid registration to register each original axial slice back to its matching interpolated slice. Results show visible improvements in artefacts, particularly in sagittal and coronal image reconstructions, and should lead to improved intensity-based non-rigid registration between MR scans (for example, for atlas-based automatic segmentation). Further validation was performed on simulated interleaving artefacts applied to an artefact-free volume. Results obtained on prostate cancer radiotherapy treatment planning contouring were inconclusive and require further investigation.
Gene to mouse atlas registration using a landmark-based nonlinear elasticity smoother
Tungyou Lin, Carole Le Guyader, Erh-Fang Lee, et al.
We propose a unified variational approach for registration of gene expression data to a neuroanatomical mouse atlas in two dimensions. The proposed energy (minimized in the unknown displacement u) is composed of three terms: a standard data fidelity term based on the L2 similarity measure, a regularizing term based on nonlinear elasticity (allowing larger smooth deformations), and a geometric penalty constraint for landmark matching. We overcome the difficulty of minimizing the nonlinear elasticity functional by introducing an auxiliary variable v that approximates ∇u, the Jacobian of the unknown displacement u. We therefore now minimize the functional with respect to the unknowns u (a vector-valued function in two dimensions) and v (a two-by-two matrix-valued function). An additional quadratic term is added to ensure good agreement between v and ∇u. In this way, the nonlinearity in the derivatives of the unknown u no longer appears in the resulting Euler-Lagrange equations, producing simpler implementations. Several satisfactory experimental results show that gene expression data are mapped to the mouse atlas with good landmark matching and smooth deformation. We also present comparisons with biharmonic regularization. An advantage of the proposed nonlinear elasticity model is that usually no numerical correction such as regridding is necessary to keep the deformation smooth, while unifying the data fidelity term, regularization term, and landmark constraints in a single minimization approach.
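In standard notation, the three-term energy with the auxiliary variable might be written as follows (symbols, weights, and the form of the elastic density W are chosen here for illustration; the paper's exact formulation may differ):

```latex
E(u,v) \;=\; \frac{1}{2}\int_\Omega \bigl(T(x+u(x)) - R(x)\bigr)^2\,dx
\;+\; \alpha \int_\Omega W(v)\,dx
\;+\; \beta \int_\Omega \lVert v - \nabla u \rVert^2\,dx
\;+\; \gamma \sum_i \lVert x_i + u(x_i) - y_i \rVert^2
```

with T the template (gene expression data), R the reference (atlas), W a nonlinear elastic stored-energy density evaluated on v ≈ ∇u, and (x_i, y_i) corresponding landmark pairs; the β-term is the quadratic agreement term that removes the nonlinearity in ∇u from the Euler-Lagrange equations.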
Mass preserving registration for lung CT
Vladlena Gorbunova, Pechin Lo, Martine Loeve, et al.
In this paper, we evaluate a novel image registration method on a set of expiratory-inspiratory pairs of computed tomography (CT) lung scans. A free-form multi-resolution image registration technique is used to match two scans of the same subject. To account for differences in lung intensities due to differences in inspiration level, we propose to adjust the intensity of lung tissue according to the local expansion or compression. An image registration method without intensity adjustment is compared to the proposed method. Both approaches are evaluated on a set of 10 pairs of expiration and inspiration CT scans of children with cystic fibrosis lung disease. The proposed method with mass-preserving adjustment results in significantly better alignment of the vessel trees. Analysis of local volume change for regions with trapped air compared to normally ventilated regions revealed larger differences between these regions in the case of mass-preserving image registration, indicating that mass-preserving registration is better at capturing localized differences in lung deformation.
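One common way to realize such a mass-preserving intensity adjustment — not necessarily the paper's exact formulation — treats HU as linear in tissue density, with air at -1000 HU, so that local volume expansion by the Jacobian factor J dilutes density by 1/J:

```python
def adjust_hu_for_expansion(hu, jacobian_det):
    """Mass-preserving HU adjustment (one common formulation, not
    necessarily the paper's exact one): treat HU as linear in tissue
    density with air at -1000 HU, so local expansion by a factor J
    dilutes density by 1/J."""
    density = (hu + 1000.0) / 1000.0     # ~0 for air, ~1 for water
    return density / jacobian_det * 1000.0 - 1000.0

# A -800 HU voxel whose neighbourhood doubles in volume (J = 2)
# drops to about -900 HU; J = 1 leaves intensities unchanged.
print(round(adjust_hu_for_expansion(-800.0, 2.0)))   # -900
```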
Bead-based mosaicing of single plane illumination microscopy images using geometric local descriptor matching
Stephan Preibisch, Stephan Saalfeld, Torsten Rohlfing, et al.
Single Plane Illumination Microscopy (SPIM) is an emerging microscopic technique that enables live imaging of large biological specimens in their entirety. By imaging the biological sample from multiple angles, SPIM has the potential to achieve isotropic resolution throughout relatively large biological specimens. For every angle, however, only a shallow section of the specimen is imaged with high resolution, whereas deeper regions appear increasingly blurred. Existing intensity-based registration techniques still struggle to robustly and accurately align images that are characterized by limited overlap and/or heavy blurring. To be able to register such images, we add sub-resolution fluorescent beads to the rigid agarose medium in which the imaged specimen is embedded. For each segmented bead, we store the relative location of its n nearest neighbors in image space as a rotation-invariant geometric local descriptor. Corresponding beads between overlapping images are identified by matching these descriptors. The bead correspondences are used to simultaneously estimate the globally optimal transformation for each individual image. The final output image is created by combining all images in an angle-independent output space, using volume injection and local content-based weighting of the contributing images. We demonstrate the performance of our approach on data acquired from living embryos of Drosophila and fixed adult C. elegans worms. Bead-based registration outperformed intensity-based registration in terms of computation speed by two orders of magnitude while producing bead registration errors below 1 μm (about 1 pixel). It therefore provides an ideal tool for processing long-term time-lapse recordings of embryonic development consisting of hundreds of time points.
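Distances between a bead and its nearest neighbours are already invariant to rotation and translation, which gives a much-simplified stand-in for the descriptors described above (the paper stores richer relative-location descriptors):

```python
import numpy as np

def local_descriptor(points, index, k=3):
    """Sorted distances from bead `index` to its k nearest neighbours:
    a simplified rotation- and translation-invariant descriptor."""
    points = np.asarray(points, dtype=float)
    d = np.linalg.norm(points - points[index], axis=1)
    d[index] = np.inf                    # exclude the bead itself
    return np.sort(d)[:k]

rng = np.random.default_rng(0)
beads = rng.random((20, 3))
theta = 0.7                              # arbitrary rotation about z
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
moved = beads @ R.T + np.array([5.0, -2.0, 1.0])   # rigid transform
print(np.allclose(local_descriptor(beads, 4), local_descriptor(moved, 4)))   # True
```

Because the descriptor depends only on inter-bead distances, it matches across views regardless of the acquisition angle, which is exactly the property needed for descriptor matching between overlapping images.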
Linear time algorithms for exact distance transform: elaboration on Maurer et al. algorithm
In 2003, Maurer et al. [7] published a paper describing an algorithm that computes the exact distance transform in linear time (with respect to image size) for rectangular binary images in k-dimensional space R^k, with distance measured with respect to the Lp metric for 1 ≤ p ≤ ∞, which includes the Euclidean distance L2. In this paper we discuss this algorithm from theoretical and practical points of view. On the practical side, we concentrate on its Euclidean distance version, discuss possible ways of implementing it as a signed distance transform, and experimentally compare the implemented algorithms. We also describe the parallelization of these algorithms and the computation time savings associated with such an implementation. The discussed implementations will be made available as part of the CAVASS software system developed and maintained in our group [5]. On the theoretical side, we prove that our version of the signed distance transform algorithm, GBDT, returns, in linear time, the exact value of the distance from the geometrically defined object boundary. We note that the precise form of the algorithm from [7] is actually not well defined for the L1 and L∞ metrics, and we point to our complete proof (not given in [7]) that all these algorithms work correctly for the Lp metric with 1 < p < ∞.
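A brute-force exact Euclidean distance transform is useful as a reference when validating fast implementations; the O(n²) sketch below computes the same result that the Maurer et al. algorithm obtains in linear time:

```python
import numpy as np

def exact_edt_bruteforce(mask):
    """Exact Euclidean distance transform of a binary mask: for every
    background voxel, the distance to the nearest object voxel.
    O(n^2) reference implementation, not the linear-time algorithm."""
    fg = np.argwhere(mask)                       # object voxel coordinates
    out = np.zeros(mask.shape, dtype=float)
    for idx in np.argwhere(~mask):               # background voxels
        out[tuple(idx)] = np.sqrt(((fg - idx) ** 2).sum(axis=1).min())
    return out

mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True                                # a single object voxel
edt = exact_edt_bruteforce(mask)
print(edt[2, 4])   # 2.0
```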
A simple penalty that encourages local invertibility and considers sliding effects for respiratory motion
Se Young Chun, Jeffrey A. Fessler, Marc L. Kessler
Nonrigid image registration is a key tool in medical imaging. Because of the high degrees of freedom of nonrigid transforms, there have been many efforts to regularize the deformation based on reasonable assumptions. In particular, motion invertibility and local tissue rigidity have been investigated as reasonable priors in image registration, and several papers have exploited each constraint separately. These constraints are reasonable in respiratory motion estimation because breathing motion is invertible and there are rigid structures such as bones. Using both constraints is attractive in respiratory motion registration, since using the invertibility prior alone usually causes bone warping in the ribs, while using a rigidity prior seems natural and straightforward. However, the "sliding effect" near the interface between the rib cage and the diaphragm makes the problem harder because the motion there is not locally invertible; in this area, the invertibility and rigidity priors exert opposing forces. Recently, we proposed a simple piecewise quadratic penalty that encourages the local invertibility of motions. In this work we relax this penalty function by using a Geman-type function that allows the deformation to be piecewise smooth instead of globally smooth. This allows the deformation to be discontinuous near the interface between the rib cage and the diaphragm. With a small sacrifice of regularity, we could achieve more realistic discontinuous motion near the diaphragm, lower data-fitting error, and less bone warping. We applied this Geman-type penalty only to the x- and y-direction partial derivatives of the z-direction deformation to address the sliding effect. 192 × 128 × 128 3D CT inhale and exhale images of a real patient were used to show the benefits of this new penalty method.
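The shape of such penalties can be sketched as follows; the variable t stands for 1 plus a partial derivative of a deformation component, and the thresholds are illustrative, not the paper's values:

```python
def quad_penalty(t, eps=0.01):
    """Piecewise quadratic penalty: zero when t >= eps (locally
    invertible along this axis), quadratic below. Here t stands for
    1 + a partial derivative of a deformation component."""
    return 0.0 if t >= eps else 0.5 * (t - eps) ** 2

def geman_penalty(t, eps=0.01, delta=1.0):
    """Geman-type relaxation of the same penalty: its value saturates
    below delta, so large violations (e.g. sliding at the rib
    cage/diaphragm interface) are tolerated rather than smoothed away."""
    q = quad_penalty(t, eps)
    return q / (1.0 + q / delta)

print(quad_penalty(1.0), geman_penalty(1.0))                 # 0.0 0.0
print(quad_penalty(-5.0) > 1.0, geman_penalty(-5.0) < 1.0)   # True True
```

The bounded influence of the Geman-type function is what permits piecewise-smooth, discontinuous motion: the quadratic penalty grows without bound for a large violation, while the relaxed penalty saturates below delta.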
Hierarchical unbiased group-wise registration for atlas construction and population comparison
A novel hierarchical unbiased group-wise registration is developed to robustly transform each individual image towards a common space for atlas-based analysis. This hierarchical group-wise registration approach consists of two main components: (1) data clustering to group similar images together, and (2) unbiased group-wise registration to generate a mean image for each cluster. The mean images generated at the lower hierarchical level are regarded as the input images for the higher level, where these input images are further clustered and then registered using the same two components. This hierarchical bottom-up clustering and within-cluster group-wise registration is repeated until a final mean image for the whole population is formed. This final mean image represents the common space to which all subjects are warped for atlas-based analysis. Each individual image at the bottom of the constructed hierarchy is transformed towards the root node by concatenating all the intermediate displacement fields. To evaluate the performance of the proposed hierarchical registration in atlas-based statistical analysis, comparisons were made with conventional group-wise registration in detecting simulated brain atrophy as well as fractional anisotropy differences between neonates and 1-year-olds. In both cases, the proposed approach demonstrated improved sensitivity (higher t-scores) compared with the conventional unbiased registration approach.
A new method for assessing PET-MRI coregistration
Christine DeLorenzo, Arno Klein, Arthur Mikhno, et al.
Positron emission tomography (PET) images are acquired for many purposes, from diagnostic assessment to aiding in the development of novel therapies. Whatever the intended use, it is often necessary to distinguish between different anatomical regions within these images. Because of this, magnetic resonance images (MRIs) are generally acquired to provide an anatomical reference. This reference will only be accurate if the PET image is properly coregistered to the MRI; yet currently, a method to evaluate PET-MRI coregistration accuracy does not exist. This problem is compounded by the fact that two visually indistinguishable coregistration results can produce estimates of ligand binding that vary significantly. Therefore, the focus of this work was to develop a method that can evaluate coregistration performance based on measured ligand binding within certain regions of the coregistered PET image. The evaluation method is based on the premise that a more accurate coregistration will result in higher ligand binding in certain anatomical regions defined by the MRI. This fully automated method was able to assess coregistration results within the variance of an expert manual rater and shows promise as a possible coregistration cost function.
Nonrigid registration framework for bronchial tree labeling using robust point matching
Arunabha Roy, Uday Patil, Bipul Das
Automated labeling of the bronchial tree is essential for localization of airway-related diseases (e.g., chronic bronchitis) and is also a useful precursor to lung-lobe labeling. We describe an automated method for registration-based labeling of a bronchial tree. The bronchial tree is segmented from a CT image using a region-growing based algorithm. The medial line of the extracted tree is then computed using a potential-field based approach. The expert-labeled target (atlas) and the source bronchial trees, in the form of extracted centerline point sets, are brought into alignment by calculating a non-rigid thin-plate spline (TPS) mapping from the source to the target. The registration takes into account global as well as local variations in anatomy between the two images through the use of separable linear and non-linear components of the transformation; as a result, it is well suited to matching structures that deviate at finer levels, namely higher-order branches. The method is validated by registering together pairs of datasets for which the ground-truth labels are known in advance: the labels are transferred after matching target to source and then compared with the true values. The method was tested on datasets each containing 18 branch centerpoints and 12 bifurcation locations (30 landmarks in total) annotated manually by a radiologist, where performance was measured as the number of landmarks with correct transfer of labels. An overall labeling accuracy of 91.5% was obtained in matching 23 pairs of datasets from different patients.
Intra-operative adaptive FEM-based registration accommodating tissue resection
Petter Risholm, Eivind L. Melvær, Knut Mørken, et al.
Intra-operative imaging during neurosurgical procedures facilitates aggressive resections and potentially an increased surgical success rate compared to the traditional approach of relying purely on pre-operative data. However, acquisition of functional images like fMRI and DTI still has to be performed pre-operatively, which necessitates registration to map them to the intra-operative image space. We present an elastic FEM-based registration algorithm which is tailored to register pre-operative to intra-operative images where a superficial tumor has been resected. To restrict matching of the cortical brain surface of the pre-operative image with the resected cavity in the intra-operative image, we define a weight function based on the "concavity" of the deformation field. These weights are applied to the load vector, which effectively restrains the unwanted image forces around the resected area from matching the brain surface in the pre-operative image with the surface of the resected cavity. Another novelty of the proposed method is an adaptive multi-level FEM grid: after convergence of the algorithm on one level, the FEM grid is subdivided to add more degrees of freedom to the deformation around areas with a bad match. We present results from applying the algorithm to both 2D synthetic and medical image data and show that the adaptivity of the grid improves both registration results and registration speed, while the inclusion of the weighting function improves the results in the presence of resected tissue.
Feature detector and descriptor for medical images
Dusty Sargent, Chao-I Chen, Chang-Ming Tsai, et al.
The ability to detect and match features across multiple views of a scene is a crucial first step in many computer vision algorithms for dynamic scene analysis. State-of-the-art methods such as SIFT and SURF perform successfully when applied to typical images taken by a digital camera or camcorder. However, these methods often fail to generate an acceptable number of features when applied to medical images, because such images usually contain large homogeneous regions with little color and intensity variation. As a result, tasks like image registration and 3D structure recovery become difficult or impossible in the medical domain. This paper presents a scale, rotation and color/illumination invariant feature detector and descriptor for medical applications. The method incorporates elements of SIFT and SURF while optimizing their performance on medical data. Based on experiments with various types of medical images, we combined, adjusted, and built on methods and parameter settings employed in both algorithms. An approximate Hessian based detector is used to locate scale invariant keypoints and a dominant orientation is assigned to each keypoint using a gradient orientation histogram, providing rotation invariance. Finally, keypoints are described with an orientation-normalized distribution of gradient responses at the assigned scale, and the feature vector is normalized for contrast invariance. Experiments show that the algorithm detects and matches far more features than SIFT and SURF on medical images, with similar error levels.
Mapping ventricular expansion and its clinical correlates in Alzheimer's disease and mild cognitive impairment using multi-atlas fluid image alignment
Yi-Yu Chou, Natasha Lepore, Christina Avedissian, et al.
We developed an automated analysis pipeline to analyze 3D changes in ventricular morphology; it provides a highly sensitive quantitative marker of Alzheimer's disease (AD) progression for MRI studies. In the ADNI image database, we created expert delineations of the ventricles, as parametric surface meshes, in 6 brain MRI scans. These 6 images and their embedded surfaces were fluidly registered to MRI scans of 80 AD patients, 80 individuals with mild cognitive impairment (MCI), and 80 healthy controls. Surface averaging within subjects greatly reduced segmentation error. Surface-based statistical maps revealed powerful correlations between surface morphology at baseline and (1) diagnosis, (2) cognitive performance (MMSE scores), (3) depression, and (4) predicted future decline, over a 1 year interval, in 3 standard clinical scores (MMSE, global and sum-of-boxes CDR). We used a false discovery rate (FDR) method based on cumulative probability plots to find that 40 subjects were sufficient to discriminate AD from normal groups; 60 and 119 subjects, respectively, were required to correlate ventricular enlargement with MMSE and clinical depression. Surface-based FDR, along with multi-atlas fluid registration to reduce segmentation error, will allow researchers to (1) estimate sample sizes with adequate power to detect group differences, and (2) compare the power of mapping methods head-to-head, optimizing cost-effectiveness for future clinical trials.
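As a generic illustration of FDR thresholding (the paper's specific method is based on cumulative probability plots and is not reproduced here), the standard Benjamini-Hochberg step-up procedure looks like this:

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: return the indices of the
    p-values declared significant at false discovery rate q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k_max = rank                 # largest rank passing the line
    return {order[r] for r in range(k_max)}

print(sorted(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.9])))   # [0, 1]
```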
Freesurfer-initialized large deformation diffeomorphic metric mapping with application to Parkinson's disease
Jingyun Chen, Samantha J. Palmer, Ali R. Khan, et al.
We apply a recently developed automated brain segmentation method, FS+LDDMM, to brain MRI scans from Parkinson's Disease (PD) subjects, and normal age-matched controls and compare the results to manual segmentation done by trained neuroscientists. The data set consisted of 14 PD subjects and 12 age-matched control subjects without neurologic disease and comparison was done on six subcortical brain structures (left and right caudate, putamen and thalamus). Comparison between automatic and manual segmentation was based on Dice Similarity Coefficient (Overlap Percentage), L1 Error, Symmetrized Hausdorff Distance and Symmetrized Mean Surface Distance. Results suggest that FS+LDDMM is well-suited for subcortical structure segmentation and further shape analysis in Parkinson's Disease. The asymmetry of the Dice Similarity Coefficient over shape change is also discussed based on the observation and measurement of FS+LDDMM segmentation results.
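Of the four comparison measures above, the Dice Similarity Coefficient is the simplest; a minimal sketch for binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient of two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

a = np.zeros((4, 4), bool); a[:2, :] = True   # 8 voxels
b = np.zeros((4, 4), bool); b[1:3, :] = True  # 8 voxels, 4 shared
print(dice(a, b))   # 0.5
```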
Improving an affine and non-linear image registration and/or segmentation task by incorporating characteristics of the displacement field
Konstantin Ens, Stefan Heldmann, Jan Modersitzki, et al.
Image registration is an important and active area of medical image processing. Given two images, the idea is to compute a reasonable displacement field which deforms one image so that it becomes similar to the other. The design of an automatic registration scheme is a tricky task, and often the computed displacement field has to be discarded when the outcome is unsatisfactory. However, any displacement field does contain useful information about the underlying images. The idea of this note is to utilize this information and to benefit even from an unsuccessful attempt in the subsequent treatment of the images. Here, we make use of typical vector-analysis operators such as the divergence and curl to identify meaningful portions of the displacement field to be used in a follow-up run. The idea is illustrated with academic as well as real-life medical examples. It is demonstrated how the novel methodology may be used to substantially improve a registration result and to solve a difficult segmentation problem.
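The divergence and curl of a sampled displacement field can be approximated with finite differences; a sketch (array layout and grid spacing are illustrative):

```python
import numpy as np

def div_curl_2d(ux, uy, spacing=1.0):
    """Divergence and scalar curl of a 2D displacement field sampled on
    a regular grid (arrays indexed [y, x]), via finite differences."""
    dux_dy, dux_dx = np.gradient(ux, spacing)
    duy_dy, duy_dx = np.gradient(uy, spacing)
    return dux_dx + duy_dy, duy_dx - dux_dy

# Pure rotation field u = (-y, x): divergence 0 and curl 2 everywhere.
y, x = np.mgrid[0:8, 0:8].astype(float)
div, curl = div_curl_2d(-y, x)
print(float(div[4, 4]), float(curl[4, 4]))   # 0.0 2.0
```

Regions of large divergence flag local expansion or contraction, and large curl flags local rotation — the kind of "meaningful portions" of the field the note proposes to reuse.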
Design of a synthetic database for the validation of non-linear registration and segmentation of magnetic resonance brain images
Konstantin Ens, Fabian Wenzel, Stewart Young, et al.
Image registration and segmentation are two important tasks in medical image analysis. However, the validation of algorithms for non-linear registration in particular often poses significant challenges: anatomical labeling based on scans for the validation of segmentation algorithms is often not available, and is tedious to obtain. One possibility to obtain suitable ground truth is to use anatomically labelled atlas images. Such atlas images are, however, generally limited to single subjects, and the displacement field of the registration between the template and an arbitrary data set is unknown. Therefore, the precise registration error cannot be determined, and approximations of a performance measure such as the consistency error must be adopted. Thus, validation requires that some form of ground truth be available. In this work, an approach to generating a synthetic ground-truth database for the validation of image registration and segmentation is proposed. Its application is illustrated using the example of the validation of a registration procedure, using 50 magnetic resonance images from different patients and two atlases. Three different non-linear image registration methods were tested to obtain a synthetic validation database consisting of 50 anatomically labelled brain scans.
Improving inter-fragmentary alignment for virtual 3D reconstruction of highly fragmented bone fractures
Beibei Zhou, Andrew Willis, Yunfeng Sui, et al.
This article describes two new algorithms that, when integrated into an existing semi-automatic virtual bone fragment reconstruction system, allow for more accurate anatomic restoration. Furthermore, they spare the user the painstaking task of positioning each fragment in 3D, which can be extremely time-consuming and difficult. The virtual interactive environment gives the user the ability to influence the reconstruction process, allowing for idiosyncratic geometric surface reconstruction scenarios. Coarse correspondences specified by the user are refined by a new alignment functional that allows geometric surface variations such as ridges and valleys to more heavily influence the final alignment solution. Integration of these algorithms into the system provides improved reconstruction accuracy, which is critical for increasing the likelihood of a satisfactory clinical outcome after injury.
Evaluation of the accuracy of deformable registration of prostate MRI for targeted prostate cancer radiotherapy
Karthik Krishnan, Rex Cheung
Endorectal MRI provides detailed images of the prostate anatomy and is useful for radiation treatment planning. The endorectal probe (which is often removed during radiotherapy) introduces a large prostate deformation, thereby posing a challenge for treatment planning. The probe-in MRI needs to be deformably registered to the planning MRI prior to radiation treatment. The goal of this paper is to evaluate a deformable registration workflow and quantify its accuracy and suitability for radiation treatment planning. We use three metrics to compare the prostate/tumor segmentations from the registered volume against the gold-standard prostate/tumor segmentations: (a) Dice similarity coefficient, (b) Hausdorff distance, and (c) mean surface distance. These metrics quantify the acceptability of the registration within the prescribed treatment margin. We evaluate and adapt existing methods, both manual and automated, to accurately track, visualize, and quantify the deformations in the prostate geometry between the endorectal MRI and the treatment planning image. An important aspect of the work described in this paper is the integration of interactive guidance into the registration process. The approach described in this paper provides users with the option of performing interactive manual alignment followed by deformable registration.
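The Dice similarity coefficient used in the abstract above is a standard overlap measure between two binary segmentation masks. A minimal sketch (the function name and the empty-mask convention are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

A value of 1 indicates perfect overlap and 0 indicates disjoint masks; values above roughly 0.7 are often read as good agreement, though the acceptable threshold is application-specific.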
Validation of nonrigid registration for multi-tracer PET-CT treatment planning in rectal cancer radiotherapy
Pieter Slagmolen, Sarah Roels, Dirk Loeckx, et al.
The goal of radiotherapy is to deliver maximal dose to the tumor and minimal dose to the surrounding tissue. This requires accurate target definition. In sites where the tumor is difficult to see on the CT images, such as for rectal cancer, PET-CT imaging can be used to better define the target. If the information from multiple PET-CT images with different tracers needs to be combined, a nonrigid registration is indispensable to compensate for rectal tissue deformations. Such registration is complicated by the presence of different volumes of bowel gas in the images to be registered. In this paper, we evaluate the performance of different nonrigid registration approaches by looking at the overlap of manually delineated rectum contours after registration. Using a B-spline transformation model, the results for two similarity measures, sum of squared differences and mutual information, either calculated over the entire image or on a region of interest, are compared. Finally, we also assess the effect of the registration direction. We show that the combination of MI with a region of interest is best able to cope with residual rectal contrast and differences in bowel filling. We also show that for optimal performance the registration direction should be chosen depending on the difference in bowel filling in the images to be registered.
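The mutual information similarity measure compared above is commonly estimated from a joint intensity histogram. The following sketch shows one standard histogram-based formulation (it is not the authors' implementation, and the bin count is an arbitrary choice):

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Estimate mutual information (in nats) between two intensity arrays
    from their joint histogram: sum p(a,b) * log(p(a,b) / (p(a) p(b)))."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal over y
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal over x
    nz = p_xy > 0                           # avoid log(0)
    return float((p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])).sum())
```

Unlike the sum of squared differences, this score rewards any consistent statistical relationship between intensities, which is why it tolerates the residual-contrast differences the abstract describes.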
A tool for registration verification based on gradient correspondence
Primoz Markelj, Franjo Pernus, Bostjan Likar
Verification of registration accuracy is paramount for assessing the validity and clinical feasibility of a registration method. When a ground-truth registration is not available, or when local misalignments need to be examined, a qualitative assessment of registration results must be performed. Registration was verified by analyzing correspondences of gradients derived from the rigidly registered CT and MR images. The strongest local CT gradients were extracted and transformed into the MR gradient image. A local gradient correspondence search in the MR image was performed using discrete systematic displacements in the direction of the strongest local CT gradients. As the measure of gradient correspondence between the CT and MR gradients, both the absolute values and the directions of the gradients were considered. The directional information was integrated by means of a weighting function, calculated as the product of the absolute values of the strongest local CT gradient and the MR gradient, weighted by the angle between these two gradients. Two correspondence visualization techniques and a gradient displacement analysis were developed to highlight the misaligned gradients and provide a qualitative assessment of local misregistration. The feasibility of the proposed approach was demonstrated on CT and MR images of the RIRE database registered using the normalized mutual information similarity measure. Global and local misregistrations were detected. Furthermore, the acquisition artifacts of non-rectified MR images could be visualized and were shown to degrade registration performance.
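The weighting function above combines gradient magnitudes with the angle between the two gradients. One plausible reading of that description (the exact angular weighting is not spelled out in the abstract, so the cosine clamp here is an assumption) is:

```python
import numpy as np

def correspondence_weight(g_ct, g_mr):
    """Weight a CT/MR gradient pair by the product of their magnitudes,
    scaled by the cosine of the angle between them (clamped at zero so
    that opposing gradients contribute nothing). Illustrative only."""
    g_ct = np.asarray(g_ct, dtype=float)
    g_mr = np.asarray(g_mr, dtype=float)
    n_ct = np.linalg.norm(g_ct)
    n_mr = np.linalg.norm(g_mr)
    if n_ct == 0.0 or n_mr == 0.0:
        return 0.0  # no gradient information at this location
    cos_angle = float(g_ct @ g_mr) / (n_ct * n_mr)
    return n_ct * n_mr * max(cos_angle, 0.0)
```

Under this formulation, strong and well-aligned gradient pairs dominate the correspondence map, while weak or misaligned pairs are suppressed.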
Worst-case analysis of target localization errors in fiducial-based rigid body registration
Reuben R. Shamir, Leo Joskowicz
Fiducial-based rigid registration is the preferred method for aligning the preoperative image with the intra-operative physical anatomy in existing image-guided surgery systems. After registration, the target locations usually cannot be measured directly, so the Target Registration Error (TRE) is often estimated with the Fiducial Registration Error (FRE) or with Fitzpatrick's TRE (FTRE) estimation formula. However, large discrepancies between the FRE and the TRE have been exemplified in hypothetical setups and have been observed in the clinic. In this paper, we formally prove that in the worst case the FRE and the TRE, and the FTRE and the TRE, are independent, regardless of the target location, the number of fiducials, and their configuration. The worst case occurs when the unknown Fiducial Localization Error (FLE) is modeled as an affine anisotropic inhomogeneous bias. Our results generalize previous examples, contribute to the mathematical understanding of TRE estimation in fiducial-based rigid-body registration, and strengthen the need for realistic and reliable FLE models and effective TRE estimation methods.
Recent improvements in tensor scale computation and its applications to medical imaging
Tensor scale (t-scale) is a local morphometric parameter describing local structure shape, orientation and scale. At any image location, t-scale is the parametric representation of the largest ellipse (an ellipsoid in 3D) centered at that location and contained in the same homogeneous region. Recently, we have improved the t-scale computation algorithm by (1) optimizing digital representations of the LoG and DoG kernels for edge detection and (2) fitting ellipses by minimizing both algebraic and geometric distance errors. Also, t-scale has been applied to computing the deformation vector field with applications to medical image registration. Currently, the method is implemented in two dimensions (2D) and the deformation vector field is directly computed from t-scale-derived normal vectors at matching locations in the two images to be registered. The method has also been used to develop a simple algorithm for computing 2D warping from one shape onto another. The normal vector yields the local structure orientation pointing to the closest edge. However, this information is less reliable along the medial axis of a shape, as it may be associated with either of the two opposite edges of the local shape. This problem is overcome using a shape-linearity measure estimating relative changes in scale along the orthogonal direction. Preliminary results demonstrate the method's potential in estimating the deformation between two images.
Posters: Segmentation
icon_mobile_dropdown
Segmentation of brain PET-CT images based on adaptive use of complementary information
Yong Xia, Lingfeng Wen, Stefan Eberl, et al.
Dual modality PET-CT imaging provides aligned anatomical (CT) and functional (PET) images in a single scanning session, which can potentially be used to improve image segmentation of PET-CT data. The ability to distinguish structures for segmentation is a function of structure and modality and varies across voxels; thus the optimal contribution of a particular modality to segmentation is spatially variant. Existing segmentation algorithms, however, seldom account for this characteristic of PET-CT data, and the results using these algorithms are not optimal. In this study, we propose a relative discrimination index (RDI) to characterize the relative abilities of PET and CT to classify each voxel into the correct structure for segmentation. The definition of the RDI is based on the information entropy of the probability distribution of the voxel's class label. If the class label derived from the CT data for a particular voxel has more certainty than that derived from the PET data, the corresponding RDI will have a higher value. We applied the RDI matrix to adaptively balance the contributions of PET and CT data to segmentation of brain PET-CT images on a voxel-by-voxel basis, with the aim of giving the modality with higher discriminatory power a larger weight. The resultant segmentation approach is distinguished from traditional approaches by its innovative and adaptive use of the dual-modality information. We compared our approach to the non-RDI version and to two commonly used PET-only segmentation algorithms on simulated and clinical data. Our results show that the RDI matrix markedly improved PET-CT image segmentation.
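The entropy-based weighting idea above can be made concrete. The abstract does not give the exact RDI formula, so the following is a hypothetical sketch in the same spirit: at each voxel the modality whose class-label distribution has lower entropy (more certainty) receives the larger weight.

```python
import numpy as np

def label_entropy(p):
    """Shannon entropy (in nats) of a class-label probability distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return float(-(p * np.log(p)).sum())

def ct_weight(p_ct, p_pet):
    """Hypothetical RDI-style weight for the CT modality at one voxel:
    the more certain (lower-entropy) modality gets the larger weight."""
    h_ct, h_pet = label_entropy(p_ct), label_entropy(p_pet)
    if h_ct + h_pet == 0.0:
        return 0.5  # both modalities fully certain: split evenly
    return h_pet / (h_ct + h_pet)
```

For example, a voxel whose CT label distribution is deterministic while its PET distribution is uniform would be classified entirely from CT under this scheme.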
Level-set segmentation of pulmonary nodules in radiographs using a CT prior
Jay S. Schildkraut, Shoupu Chen, Michael Heath, et al.
This research addresses the problem of determining the location of a pulmonary nodule in a radiograph with the aid of a pre-existing computed tomographic (CT) scan. The nodule is segmented in the radiograph using a level set segmentation method that incorporates characteristics of the nodule in a digitally reconstructed radiograph (DRR) that is calculated from the CT scan. The segmentation method includes two new level set energy terms. The contrast energy seeks to increase the contrast of the segmented region relative to its surroundings. The gradient direction convergence energy is minimized when the intensity gradient direction in the region converges to a point. The segmentation method was tested on 23 pulmonary nodules from 20 cases for which both a radiographic image and CT scan were collected. The mean nodule effective diameter is 22.5 mm. The smallest nodule has an effective diameter of 12.0 mm and the largest an effective diameter of 48.1 mm. Nodule position uncertainty was simulated by randomly offsetting the true nodule center from an aim point. The segmented region is initialized to a circle centered at the aim point with a radius that is equal to the effective radius of the nodule plus a 10.0 mm margin. When the segmented region that is produced by the proposed method is used to localize the nodule, the average reduction in nodule-position uncertainty is 46%. The relevance of this method to the detection of radiotherapy targets at the time of treatment is discussed.
A topology-oriented and tissue-specific approach to detect pleural thickenings from 3D CT data
C. Buerger, K. Chaisaowong, A. Knepper, et al.
Pleural thickenings are caused by asbestos exposure and may evolve into malignant pleural mesothelioma. The detection of pleural thickenings is today mostly done by visual inspection of CT data, which is time-consuming and subject to the physician's judgment. We propose a new detection algorithm within our computer-assisted diagnosis (CAD) system to automatically detect pleural thickenings in CT data. First, pleura contours are identified by thresholding and contour relaxation with a probabilistic model. The approach to automatically detect pleural thickenings then proceeds in two steps. In the first step, since pleural thickenings appear as fine-scale occurrences on the rather large-scale pleura contour, a surface-based smoothing algorithm is developed; pleural thickenings are initially detected as the difference between the original contours and the resulting "healthy" model of the pleura. In the second step, as pleural thickenings can expand into the surrounding thoracic tissue, a tissue-specific segmentation of the initially detected pleural thickenings is performed in order to separate them from the surrounding thoracic tissue. For this purpose, a probabilistic Hounsfield model for pleural thickenings as a mixture of Gaussian distributions has been constructed, with parameters estimated by the Expectation-Maximization (EM) algorithm. A model-fitting technique combined with a Gibbs-Markov random field (GMRF) model then allows the tissue-specific segmentation of pleural thickenings with high precision. These methods constitute a new approach for a precise and reproducible detection of pleural mesothelioma in its early stage.
Texture-learning-based system for three-dimensional segmentation of renal parenchyma in abdominal CT images
Cong-Qi Peng, Yuan-Hsiang Chang, Li-Jen Wang, et al.
Abdominal CT images are commonly used for the diagnosis of kidney diseases. With the advances of CT technology, processing of CT images has become a challenging task, mainly because of the large number of CT images being studied. This paper presents a texture-learning-based system for the three-dimensional (3D) segmentation of renal parenchyma in abdominal CT images. The system is designed to automatically delineate renal parenchyma and is based on texture-learning and region-homogeneity approaches. The first approach performs texture analysis using gray-level co-occurrence matrix (GLCM) features and an artificial neural network (ANN) to determine whether a pixel in the CT image is likely to fall within the renal parenchyma. The second approach incorporates two-dimensional (2D) region growing to segment renal parenchyma in a single CT image slice and 3D region growing to propagate the segmentation results to neighboring CT image slices. The criterion for the region growing is a test of region homogeneity defined by examining the ANN outputs. In the system evaluation, 10 abdominal CT image sets were used, and automatic segmentation results were compared with manual segmentation results using the Dice similarity coefficient. Across the 10 CT image sets, our system achieved an average Dice similarity coefficient of 0.87, which clearly shows a high correlation between the two segmentation results. Ultimately, our system could be incorporated in applications for the delineation of renal parenchyma or as a preprocessing step in a CAD system for kidney diseases.
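A gray-level co-occurrence matrix, as used for the texture features above, counts how often pairs of gray levels occur at a fixed pixel offset. A minimal sketch for one offset, plus the classic contrast feature (the quantization to a small number of levels is assumed, not taken from the paper):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for the offset (dx, dy).
    `img` must contain integer gray levels in [0, levels)."""
    img = np.asarray(img)
    m = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            yy, xx = y + dy, x + dx
            if 0 <= yy < h and 0 <= xx < w:
                m[img[y, x], img[yy, xx]] += 1
    return m / m.sum()

def glcm_contrast(p):
    """Haralick contrast feature: sum over (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())
```

In a system like the one described, several such features (contrast, energy, homogeneity, etc.) over multiple offsets would form the input vector fed to the ANN classifier.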
Automated detection and delineation of lung tumors in PET-CT volumes using a lung atlas and iterative mean-SUV threshold
Cherry Ballangan, Xiuying Wang, Stefan Eberl, et al.
Automated segmentation for the delineation of lung tumors with PET-CT is a challenging task. In PET images, primary lung tumors can have varying degrees of tracer uptake, which sometimes does not differ markedly from normal adjacent structures such as the mediastinum, heart and liver. In addition, separation of tumor from adjacent soft tissues and bone in the chest wall is problematic due to limited resolution. For CT, the tumor soft tissue density can be similar to that in the blood vessels and the chest wall; and although CT provides better boundary definition, exact tumor delineation is also difficult when the tumor density is similar to adjacent structures. We propose an innovative automated adaptive method to delineate lung tumors in PET-CT images in conjunction with a lung atlas, in which an iterative mean-SUV (Standardized Uptake Value) threshold is used to gradually define the tumor region in PET. Tumor delineation in the CT data is performed using region growing, with seeds obtained automatically from the PET tumor regions. We evaluated our approach in 13 patients with non-small cell lung cancer (NSCLC) and found it could delineate tumors of different size, shape and location, even when the NSCLC involved the chest wall.
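An iterative mean-SUV threshold can be sketched as a fixed-point iteration. The paper's exact update rule is not given in the abstract; the formulation below, where the threshold is repeatedly set to a fraction of the mean SUV of the current super-threshold region, is one plausible variant for illustration:

```python
import numpy as np

def iterative_mean_suv_threshold(suv, t0, frac=0.5, tol=1e-3, max_iter=100):
    """Refine a PET tumor threshold iteratively: at each step, set the
    threshold to `frac` times the mean SUV of the voxels currently above
    it, until the threshold converges. Illustrative, not the paper's rule."""
    t = float(t0)
    for _ in range(max_iter):
        region = suv[suv >= t]
        if region.size == 0:
            break  # threshold rose above all voxels; keep last value
        t_new = frac * float(region.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t
```

Because the region and the threshold are updated together, the final region adapts to the tumor's own uptake level rather than relying on a single fixed SUV cut-off.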
A combined watershed and level set method for segmentation of brightfield cell images
Shutong Tse, Laura Bradbury, Justin W.L. Wan, et al.
Segmentation of brightfield cell images from microscopy is challenging in several ways. The contrast between cells and the background is low. Cells are usually surrounded by a "halo", an optical artifact common in brightfield images. Also, cell divisions occur frequently, which raises the issue of topological change during segmentation. In this paper, we present a robust segmentation method based on the watershed and level set methods. Instead of heuristically locating where the initial markers for watershed should be, we apply multiphase level set marker extraction to determine regions inside a cell. In contrast with standard level set segmentation, where only one level set function is used, we apply multiple level set functions (usually 3) to capture the different intensity levels in a cell image. This is particularly important for distinguishing regions of similar but different intensity levels in low-contrast images. All the pixels obtained are used as initial markers for watershed. The region-growing process of watershed captures the rest of the cell until it hits the halo, which serves as a "wall" to stop the expansion. By using this relatively large number of points as markers together with watershed, we show that the low-contrast cell boundary can be captured correctly. Furthermore, we present a technique for watershed and level set to detect cell division automatically with no special human attention. Finally, we present segmentation results of C2C12 cells in brightfield images to illustrate the effectiveness of our method.
Pleural effusion segmentation in thin-slice CT
Rory Donohue, Andrew Shearer, John Bruzzi, et al.
A pleural effusion is excess fluid that collects in the pleural cavity, the fluid-filled space that surrounds the lungs. Surplus amounts of such fluid can impair breathing by limiting the expansion of the lungs during inhalation. Measuring the fluid volume is indicative of the effectiveness of any treatment, but accurate quantification of the effusion volume is a difficult imaging problem due to the similarity to surrounding regions, fragments of collapsed lung, and topological changes. A novel code is presented which performs conditional region growing to accurately segment the effusion shape across a dataset. We demonstrate the applicability of our technique in the segmentation of pleural effusions and pulmonary masses.
3D contour based local manual correction of tumor segmentations in CT scans
Frank Heckel, Jan Hendrik Moltz, Lars Bornemann, et al.
Segmentation is an essential task in medical image analysis. For example, measuring tumor growth in consecutive CT scans based on the volume of the tumor requires a good segmentation. Since manual segmentation takes too much time in clinical routine, automatic segmentation algorithms are typically used. However, there are always cases where an automatic segmentation fails to provide an acceptable result, for example due to low contrast, noise, or structures of the same density lying close to the lesion. These erroneous segmentation masks need to be corrected manually. We present a novel method for fast three-dimensional local manual correction of segmentation masks. The user needs to draw only one partial contour which describes the lesion's actual border. This two-dimensional interaction is then transferred into 3D using a live-wire-based extrapolation of the contour given by the user in one slice. Seed points calculated from this contour are moved to adjacent slices by a block matching algorithm. The seed points are then connected by a live-wire algorithm which ensures a segmentation that passes along the border of the lesion. After this extrapolation, a morphological postprocessing is performed to generate a coherent and smooth surface consistent with both the user-drawn contour and the initial segmentation. An evaluation on 108 lesions by six radiologists has shown that our method is both intuitive and fast: using our method the radiologists were able to correct 96.3% of the lesion segmentations rated as insufficient to acceptable ones in a median time of 44 s.
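The block matching step above finds, for each seed point, the position in the adjacent slice whose neighborhood best resembles the neighborhood around the seed. A minimal 2D stand-in using the sum-of-squared-differences criterion (the criterion and search strategy here are illustrative; the paper does not specify them):

```python
import numpy as np

def best_match(ref_block, image, center, search_radius):
    """Exhaustively search offsets within +/- search_radius around `center`
    (the top-left corner of the candidate patch) for the patch of `image`
    that minimizes the sum of squared differences to `ref_block`."""
    bh, bw = ref_block.shape
    best_offset, best_ssd = None, np.inf
    cy, cx = center
    for oy in range(-search_radius, search_radius + 1):
        for ox in range(-search_radius, search_radius + 1):
            y, x = cy + oy, cx + ox
            if y < 0 or x < 0 or y + bh > image.shape[0] or x + bw > image.shape[1]:
                continue  # candidate patch falls outside the image
            patch = image[y:y + bh, x:x + bw]
            ssd = float(((patch - ref_block) ** 2).sum())
            if ssd < best_ssd:
                best_ssd, best_offset = ssd, (oy, ox)
    return best_offset, best_ssd
```

In the 3D correction setting, `ref_block` would be a neighborhood around a seed point in one slice and `image` the adjacent slice, so the returned offset tracks how the lesion border shifts between slices.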
Cell boundary analysis using radial search for dual staining techniques
Saadia Iftikhar, Anil Anthony Bharath
In medical image analysis and segmentation, many conventional methods work very well on good-quality tissue section images, but often fail when image quality is poor. Active contours, or snakes, are widely used in medical image processing, especially for boundary detection. However, problems with initialization and poor performance on noisy images limit their efficacy. As an alternative, this research presents an efficient and robust method to segment cell nuclei and their respective boundaries in low-contrast cell images using a combination of radial search and interpolation methods. This radial search method can be used in medical image analysis and segmentation applications for images which are very noisy or whose structural regions are not very clear. The process consists of (1) extracting the locations of the cell nuclei, (2) finding the edge information of the given image, (3) applying a radial search on the edge image patch to find the radial initialization, and finally (4) using an interpolation method to find the desired boundary points, which best fit the candidate shape or cell. Results on images of the branch aorta of the rabbit suggest that the proposed radial search method correctly finds the boundaries even in very low-contrast images, which can be used for further medical image analysis.
Maximize uniformity summation heuristic (MUSH): a highly accurate simple method for intracranial delineation
Ronald Pierson, Gregory Harris, Hans J. Johnson, et al.
A common procedure performed by many groups in the analysis of neuroimaging data is separating the brain from other tissues. This procedure is utilized both by volumetric studies and by functional imaging studies. Regardless of the intent, an accurate, robust method of identifying the brain or cranial vault is imperative. While this is a common requirement, there are relatively few tools to perform this task, and most of them require a T1-weighted image and are therefore not able to accurately define a region that includes surface CSF. In this paper, we have developed a novel brain extraction technique termed Maximize Uniformity by Summation Heuristic (MUSH) optimization. The algorithm was designed for extraction of the brain and surface CSF from a multi-modal magnetic resonance (MR) imaging study. The method forms a linear combination of the multi-modal MR imaging data to make the signal intensity within the brain as uniform as possible. The resulting image is thresholded, and simple morphological operators are utilized to generate the resulting representation of the brain. The method was applied to a sample of 20 MR brain scans and compared to the results generated by 3dSkullStrip, 3dIntracranial, BET, and BET2. The average Jaccard metric for the twenty subjects was 0.66 (BET), 0.61 (BET2), 0.88 (3dIntracranial), 0.91 (3dSkullStrip), and 0.94 (MUSH).
Robust model-based centerline extraction of vessels in CTA data
Thomas Beck, Christina Biermann, Dominik Fritz, et al.
Extracting the centerline of blood vessels is a frequently used technique to assist the physician in the diagnosis of common artery diseases in CTA images. A robust and precise computation of the centerline is thus an essential prerequisite. In this paper we present a novel approach to robustly model the vessel tree and to compute its centerline. The algorithm is initialized with two clicks from the physician, which mark the start and end point of the vessel to be examined. Our approach is divided into two consecutive steps. In the first step, a section of the vessel tree is mapped to the model so that the desired centerline is entirely included. After the generation of the model, the centerline can easily be extracted in the second step. The robust and efficient extraction of the required model parameters is performed by a ray-casting approach. The proposed method determines a set of points on the vascular wall; the analysis of these points using principal component analysis provides all parameters needed for modeling the vessel. The proposed technique reduces computation time and does not require a segmentation of the vessel lumen to determine the centerline of the vessel. Furthermore, a priori knowledge of vessel structures is incorporated to improve robustness in the presence of pathological deformations.
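Principal component analysis of vessel-wall points, as described above, yields the dominant local direction of the vessel as the eigenvector of the point covariance matrix with the largest eigenvalue. A minimal sketch (illustrative of the PCA step only, not the authors' full parameter extraction):

```python
import numpy as np

def principal_axis(points):
    """Dominant direction of a 3D point cloud: the eigenvector of the
    sample covariance matrix with the largest eigenvalue."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, -1]                    # column for the largest eigenvalue
```

For an elongated cloud of wall points, this axis approximates the local vessel direction, while the two minor eigenvectors span the cross-sectional plane used to estimate the radius.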
Simultaneous 3D segmentation of three bone compartments on high resolution knee MR images from osteoarthritis initiative (OAI) using graph cuts
Hackjoon Shim, C. Kent Kwoh, Il Dong Yun, et al.
Osteoarthritis (OA) is associated with degradation of cartilage and related changes in the underlying bone. Quantitative measurement of those changes from MR images is an important biomarker to study the progression of OA and it requires a reliable segmentation of knee bone and cartilage. As the most popular method, manual segmentation of knee joint structures by boundary delineation is highly laborious and subject to user-variation. To overcome these difficulties, we have developed a semi-automated method for segmentation of knee bones, which consisted of two steps: placement of seeds and computation of segmentation. In the first step, seeds were placed by the user on a number of slices and then were propagated automatically to neighboring images. The seed placement could be performed on any of sagittal, coronal, and axial planes. The second step, computation of segmentation, was based on a graph-cuts algorithm where the optimal segmentation is the one that minimizes a cost function, which integrated the seeds specified by the user and both the regional and boundary properties of the regions to be segmented. The algorithm also allows simultaneous segmentation of three compartments of the knee bone (femur, tibia, patella). Our method was tested on the knee MR images of six subjects from the osteoarthritis initiative (OAI). The segmentation processing time (mean±SD) was (22±4)min, which is much shorter than that by the manual boundary delineation method (typically several hours). With this improved efficiency, our segmentation method will facilitate the quantitative morphologic analysis of changes in knee bones associated with osteoarthritis.
User-assisted aortic aneurysm analysis
Amandine Ouvrard, Rahul Renapuraar, Randolph M. Setser, et al.
Aortic aneurysms (AA) are the 13th leading cause of death in the US. In standard clinical practice, intervention is initiated when the maximal cross-sectional diameter reaches 5.5 cm. However, this is a 1D measure, and it has been suggested in the literature that higher-order measurements (area, volume) might be more appropriate clinically. Unfortunately, no commercially available tools exist for extracting a 3D model of the epithelial layer (versus the lumen) of the vessel. Therefore, we present work towards semi-automatically recovering the aorta from CT angiography volumes with the aim of facilitating such studies. We build our work upon a previous approach to this problem: Bodur et al. presented a variant of the isoperimetric algorithm to semi-automatically segment several individual aortic cross-sections across longitudinal studies, quantifying any growth. As a by-product of these sparse cross-sections, it is possible to form a series of rough 3D models of the aorta. In this work we focus on creating a more detailed 3D model at a single time point by automatically recovering the aorta between the sparse user-initiated segmentations. Briefly, we fit a tube model to the sparse segmentations to approximate the cross-sections at intermediate regions, refine the approximations, and apply the isoperimetric algorithm to them. From the resulting dense cross-sections we reconstruct our model. We applied our technique to 12 clinical datasets which included significant amounts of thrombus. Comparisons of the automatically recovered cross-sections with cross-sections drawn by an expert resulted in an average difference of 0.3 cm for diameter and 2 cm^2 for area.
Efficient multigrid solver for the 3D random walker algorithm
Xin Wang, Tobias Heimann, Arne Naegel, et al.
The random walker algorithm is a graph-based segmentation method that has become popular over the past few years. The basis of the algorithm is a large, sparsely occupied system of linear equations whose size corresponds to the number of voxels in the image. To solve these systems, typically comprising millions of equations, the computational performance of conventional numerical solution methods (e.g. Gauss-Seidel) is no longer satisfactory. An alternative method that has been described previously for solving 2D random walker problems is the geometric multigrid method. In this paper, we present a geometric multigrid approach for the 3D random walker problem. Our approach features an optimized calculation of the required Galerkin product and robust smoothing using the ILUβ method. To reach better convergence rates, the multigrid solver is used as a preconditioner for the conjugate gradient solver. We compared the performance of our new multigrid approach with the conjugate gradient solver on five MRI lung images with a resolution of 96 x 128 x 52 voxels. Initial results show a speed-up of up to four times, reducing the average computation time from six minutes to less than two minutes. Employing a multigrid solver for the random walker algorithm thus permits accurate interactive segmentation with fewer delays.
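Using a solver as a preconditioner for conjugate gradient, as described above, only requires a routine that approximately applies the inverse of the system matrix to a residual. The sketch below shows the standard preconditioned CG iteration; a simple diagonal (Jacobi) preconditioner stands in for the paper's multigrid cycle, to illustrate the interface rather than reproduce multigrid convergence rates:

```python
import numpy as np

def preconditioned_cg(A, b, apply_minv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for a symmetric positive-definite A.
    `apply_minv(r)` approximately applies the inverse preconditioner to r
    (in the paper this would be one geometric multigrid cycle)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_minv(r)
    p = z.copy()
    rz = float(r @ z)
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / float(p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_minv(r)
        rz_new = float(r @ z)
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Tiny demonstration system with a Jacobi (diagonal) preconditioner.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = preconditioned_cg(A, b, lambda r: r / np.diag(A))
```

Because the preconditioner only needs to be applied, never formed explicitly, swapping the lambda for a multigrid V-cycle changes nothing else in the solver.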
Automated segmentation and recognition of the bone structure in non-contrast torso CT images using implicit anatomical knowledge
X. Zhou, T. Hayashi, M. Han, et al.
X-ray CT images have been widely used in clinical diagnosis in recent years. A modern CT scanner can generate about 1000 CT slices showing the details of all the human organs within 30 seconds. However, CT image interpretation (manually viewing 500-1000 slices of CT images on a screen or on films for each patient) requires a lot of time and energy. Therefore, computer-aided diagnosis (CAD) systems that can support CT image interpretation are strongly anticipated. Automated recognition of the anatomical structures in CT images is a basic pre-processing step for such CAD systems. The bone structure is a part of the anatomical structures and is very useful as a set of landmarks for predicting the positions of other organs. However, the automated recognition of the bone structure is still a challenging issue. This research proposes an automated scheme for segmenting the bone regions and recognizing the bone structure in non-contrast torso CT images. The proposed scheme was applied to 48 torso CT cases, and a subjective evaluation of the experimental results was carried out by an anatomical expert following the anatomical definition. The experimental results showed that the bone structure in 90% of the CT cases was recognized correctly. For quantitative evaluation, automated recognition results were compared to manual inputs of the bones of the lower limb created by an anatomical expert on 10 randomly selected CT cases. The error (maximum distance in 3D) between the recognition results and the manual inputs ranged from 3 to 8 mm in different parts of the bone regions.
Curve evolution with a dual shape similarity and its application to segmentation of left ventricle
Jonghye Woo, Byung-Woo Hong, Amit Ramesh, et al.
Automated image segmentation plays a critical role in medical image analysis. Recently, level set methods have shown efficacy and efficiency in various imaging modalities. In this paper, we present a novel segmentation approach to jointly delineate the boundaries of the epi- and endocardium of the left ventricle in magnetic resonance imaging (MRI) images in a variational framework using level sets, which is in great demand as a clinical application in cardiology. One strategy to tackle segmentation under undesirable conditions, such as subtle boundaries and occlusions, is to exploit prior knowledge specific to the object to segment, in this case knowledge about heart anatomy. While most left ventricle segmentation approaches incorporate a shape prior obtained by a training process from an ensemble of examples, we exploit a novel shape constraint using implicit shape prior knowledge, which assumes shape similarity between the epi- and endocardium, allowing variation under a Gaussian distribution. Our approach does not demand a training procedure, which is usually dependent on the training examples and is laborious and time-consuming in generating the shape prior. Instead, we model a shape constraint by a statistical distance between the shapes of the epi- and endocardium employing signed distance functions. We applied this technique to cardiac MRI data with quantitative evaluations performed on 10 subjects. The experimental results show the robustness and effectiveness of our shape constraint within a Mumford-Shah segmentation model in the segmentation of the left ventricle from cardiac MRI images in comparison with manual segmentation results.
Pulmonary airways tree segmentation from CT examinations using adaptive volume of interest
Sang Cheol Park, Won Pil Kim, Bin Zheng, et al.
Airway tree segmentation is an important step in quantitatively assessing the severity of and changes in several lung diseases, such as chronic obstructive pulmonary disease (COPD), asthma, and cystic fibrosis. It can also be used to guide bronchoscopy. The purpose of this study is to develop an automated scheme for segmenting the airway tree structure depicted in chest CT examinations. After lung volume segmentation, the scheme defines the first cylinder-like volume of interest (VOI) using a series of images depicting the trachea. The scheme then iteratively defines and adds subsequent VOIs using a region growing algorithm combined with adaptively determined thresholds in order to trace possible sections of airways located inside the VOI in question. The airway tree segmentation process terminates automatically after the scheme has assessed all defined VOIs in the iteratively assembled VOI list. In this preliminary study, ten CT examinations with 1.25 mm section thickness and two different CT image reconstruction kernels ("bone" and "standard") were selected and used to test the proposed airway tree segmentation scheme. The experimental results showed that (1) the approach effectively prevented the scheme from infiltrating into the parenchyma, (2) the proposed method segmented the airway trees reasonably accurately, with a lower false-positive identification rate than other previously reported schemes based on 2-D image segmentation and data analysis, and (3) the proposed adaptive, iterative threshold selection method for the region growing step in each identified VOI enabled the scheme to segment the airway trees reliably to the 4th generation in this limited dataset, with successful segmentation up to the 5th generation in a fraction of the airway tree branches.
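The adaptive thresholding idea behind the region-growing step can be illustrated with a minimal 2-D sketch. The leakage test, the fixed threshold step, and the 4-connectivity used here are illustrative assumptions, not the authors' actual implementation:

```python
from collections import deque

def region_grow(image, seed, threshold):
    """4-connected region growing: accept pixels darker than `threshold`
    (the airway lumen is dark in CT)."""
    rows, cols = len(image), len(image[0])
    grown, frontier = set(), deque([seed])
    while frontier:
        r, c = frontier.popleft()
        if (r, c) in grown or not (0 <= r < rows and 0 <= c < cols):
            continue
        if image[r][c] >= threshold:
            continue
        grown.add((r, c))
        frontier.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return grown

def adaptive_grow(image, seed, start, step=50, max_leak_ratio=2.0):
    """Raise the threshold until the region 'leaks' into the parenchyma
    (region size jumps by more than `max_leak_ratio`), then back off one step."""
    prev = region_grow(image, seed, start)
    t = start
    while True:
        cand = region_grow(image, seed, t + step)
        if len(prev) > 0 and len(cand) > max_leak_ratio * len(prev):
            return prev, t          # leakage detected: keep the last safe threshold
        if len(cand) == len(prev):  # no further growth: converged
            return cand, t + step
        prev, t = cand, t + step
```

In practice such a rule would run per VOI, so a leak in one branch does not force a conservative threshold on the whole tree.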
Left ventricle endocardium segmentation for cardiac CT volumes using an optimal smooth surface
We recently proposed a robust heart chamber segmentation approach based on marginal space learning. In this paper, we focus on improving the LV endocardium segmentation accuracy by searching for an optimal smooth mesh that tightly encloses the whole blood pool. The refinement procedure is formulated as an optimization problem: maximizing the surface smoothness under the tightness constraint. The formulation is a convex quadratic programming problem; it therefore has a unique global optimum and can be solved efficiently. Our approach has been validated on the largest cardiac CT dataset (457 volumes from 186 patients) ever reported. Compared to our previous work, it reduces the mean point-to-mesh error from 1.13 mm to 0.84 mm (22% improvement). Additionally, the system has been extensively tested on a dataset with 2000+ volumes without any major failure.
Computer-assisted scheme for automated determination of imaging planes in cervical spinal cord MRI
This paper presents a computerized scheme to assist MRI operators in the accurate and rapid determination of sagittal sections for MRI exams of the cervical spinal cord. The algorithm of the proposed scheme consists of six steps: (1) extraction of a cervical vertebra containing the spinal cord from an axial localizer image; (2) extraction of the spinal cord in a sagittal image from the extracted vertebra; (3) selection of a series of coronal localizer images corresponding to the various involved portions of the extracted spinal cord in the sagittal image; (4) generation of a composite coronal-plane image from the obtained coronal images; (5) extraction of the spinal cord from the composite image; (6) determination of oblique sagittal sections from the detected location and gradient of the extracted spinal cord. Cervical spine images obtained from 25 healthy volunteers were used for the study. A perceptual evaluation was performed by five experienced MRI operators. Good agreement between the automated and manual determinations was achieved. By use of the proposed scheme, the average execution time was reduced from 39 seconds/case to 1 second/case. The results demonstrate that the proposed scheme can assist MRI operators in performing cervical spinal cord MRI exams accurately and rapidly.
Multi-channel MRI segmentation with graph cuts using spectral gradient and multidimensional Gaussian mixture model
Jérémy Lecoeur, Jean-Christophe Ferré, D. Louis Collins, et al.
A new segmentation framework is presented taking advantage of the multimodal image signature of the different brain tissues (healthy and/or pathological). This is achieved by merging three different modalities of gray-level MRI sequences into a single RGB-like MRI, hence creating a unique 3-dimensional signature for each tissue by utilising the complementary information of each MRI sequence. Using the scale-space spectral gradient operator, we can obtain a spatial gradient robust to intensity inhomogeneity. Even though it is based on psycho-visual color theory, it can be very efficiently applied to the RGB colored images. Moreover, it is not influenced by the channel assignment of each MRI. Its optimisation by the graph cuts paradigm provides a powerful and accurate tool to segment either healthy or pathological tissues in a short time (average time about ninety seconds for a brain-tissue classification). As it is a semi-automatic method, we ran experiments to quantify the amount of seeds needed to perform a correct segmentation (Dice similarity score above 0.85). Depending on the different sets of MRI sequences used, this amount of seeds (expressed as a percentage of the number of voxels of the ground truth) is between 6% and 16%. We tested this algorithm on BrainWeb for validation purposes (healthy tissue classification and MS lesion segmentation) and also on clinical data for tumour and MS lesion detection and tissue classification.
Employing anatomical knowledge in vertebral column labeling
The spinal column constitutes the central axis of the human torso and is often used by radiologists to reference the location of organs in the chest and abdomen. However, visually identifying and labeling vertebrae is not trivial and can be time-consuming. This paper presents an approach to automatically label vertebrae based on two pieces of anatomical knowledge: one vertebra has at most two attached ribs, and ribs are attached only to thoracic vertebrae. The spinal column is first extracted by a hybrid method using the watershed algorithm, directed acyclic graph search, and a four-part vertebra model. Then curved reformations in the sagittal and coronal directions are computed, and aggregated intensity profiles along the spinal cord are analyzed to partition the spinal column into vertebrae. After that, candidates for rib bones are detected using features such as location, orientation, shape, size, and density. A correspondence matrix is then established to match ribs and vertebrae. The last vertebra (from thoracic to lumbar) with attached ribs is identified and labeled as T12. The remaining vertebrae are labeled accordingly. The method was tested on 50 CT scans and successfully labeled 48 of them. The two failed cases were mainly due to rudimentary ribs.
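The T12-anchored labeling rule can be sketched in a few lines. The function name and the offset arithmetic are hypothetical, and the sketch deliberately ignores exactly the transitional anatomy (e.g. rudimentary ribs) that caused the two reported failures:

```python
def label_vertebrae(rib_counts):
    """rib_counts: number of ribs attached to each detected vertebra,
    ordered head to foot. The caudal-most vertebra with attached ribs is
    anchored as T12; thoracic labels run backwards from it and lumbar
    labels forwards. Simplified sketch: assumes standard anatomy."""
    anchor = max(i for i, n in enumerate(rib_counts) if n > 0)  # last ribbed vertebra
    labels = []
    for i in range(len(rib_counts)):
        offset = i - anchor
        if offset <= 0:
            labels.append(f"T{12 + offset}")   # T12 and the thoracic levels above it
        else:
            labels.append(f"L{offset}")        # lumbar levels below T12
    return labels
```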
A coupled level-set framework for bladder wall segmentation with application to MRI-based virtual cystoscopy
In this paper, we propose a coupled level-set framework for segmentation of the bladder wall using T1-weighted magnetic resonance (MR) images. The segmentation results will be used for non-invasive MR-based virtual cystoscopy (VCys). The framework uses two level-set functions to segment the inner and outer borders of the bladder wall, respectively. Based on the Chan-Vese (C-V) model, a local adaptive fitting (LAF) image energy is introduced to capture local intensity contrast. Compared with previous work, our method has the following advantages. First, unlike most other work, which segments only a single bladder boundary rather than the inner and outer borders separately, our method extracts both the inner and the outer border of the bladder wall automatically. Second, we focus on T1-weighted MR images, which decrease the image intensity of the urine and therefore minimize the partial volume effect (PVE) on the bladder wall for detection of abnormalities on the mucosa layer, in contrast to work on CT images and T2-weighted MR images, which enhance the intensity of the urine and suffer from the PVE. In addition, T1-weighted MR images provide the best tissue contrast for detection of the outer border of the bladder wall. Since MR images tend to be inhomogeneous and exhibit ghost artifacts due to motion and other causes, as compared with computed tomography (CT)-based VCys, our framework makes it easy to control the geometric properties of the level-set functions to mitigate the influence of inhomogeneity and ghosts. Finally, a variety of geometric parameters, such as the thickness of the bladder wall, can be measured easily within the level-set framework. These parameters are clinically important for VCys. The segmentation results were evaluated by experienced radiologists, whose feedback strongly supported the usefulness of this coupled level-set framework for VCys.
Segmentation of low contrast-to-noise ratio images applied to functional imaging using adaptive region growing
J. Cabello, A. Bailey, I. Kitchen, et al.
Segmentation in medical imaging plays a critical role in easing the delineation of key anatomical and functional structures across all imaging modalities. However, many segmentation approaches are optimized under the assumption of high contrast, and fail when segmenting objects with a poor contrast-to-noise ratio. The number of approaches published in the literature falls dramatically when functional imaging is the target. In this paper, a feature-extraction approach based on region growing is presented as a segmentation technique suitable for poor-quality (low contrast-to-noise ratio, CNR) images, as often found in functional images derived from autoradiography. The region growing incorporates several modifications of the typical region growing method to make the algorithm more robust and reliable. Finally, the algorithm is validated using synthetic images and biological imagery.
Novel level-set based segmentation method of the lung at HRCT images of diffuse interstitial lung disease (DILD)
In this paper, we propose an algorithm for reliable segmentation of the lung in HRCT of DILD. Our method consists of four main steps. First, the airway and colon are segmented and excluded by thresholding (-974 HU) and connected component analysis. Second, an initial lung is identified by thresholding (-474 HU). Third, shape propagation outward from the lung is performed on the initial lung; the actual lung boundaries lie inside the propagated boundaries. Finally, a subsequent shape-modeling level set evolving inward from the propagated boundary identifies the lung boundary when the curvature term is highly weighted. To assess the accuracy of the proposed algorithm, the segmentation results of 54 patients were compared with manual segmentations by an expert radiologist. The volumetric overlap error (1 minus the volumetric overlap) is less than 5%. The accurate results of our method should be useful in delineating the lung parenchyma in HRCT, which is the essential step for the automatic classification and quantification of diffuse interstitial lung disease.
Brain tissue segmentation of neonatal MR images using a longitudinal subject-specific probabilistic atlas
Brain tissue segmentation of neonate MR images is a challenging task in the study of early brain development, due to low signal contrast among brain tissues and high intensity variability, especially in white matter. Among the various brain tissue segmentation algorithms, atlas-based segmentation techniques can potentially produce reasonable segmentation results on neonatal brain images. However, their performance with a population-based atlas is still limited by the high variability of brain structures across individuals. Moreover, it may be impossible to generate a reasonable probabilistic atlas for neonates without tissue segmentation samples. To overcome these limitations, we present a neonatal brain tissue segmentation method that takes advantage of the longitudinal data available in our study to establish a subject-specific probabilistic atlas. In particular, tissue segmentation of the neonatal brain is formulated as two iterative steps of bias correction and probabilistic atlas based tissue segmentation, guided by the brain tissue segmentation derived from later-time-point images of the same subject, which serves as a subject-specific probabilistic atlas. The proposed method has been evaluated qualitatively through visual inspection and quantitatively by comparison with manual delineation results. Experimental results show that the use of a subject-specific probabilistic atlas can substantially improve tissue segmentation of neonatal brain images.
Evaluation of atlas based mouse brain segmentation
Magnetic resonance imaging for mouse phenotype studies is one of the important tools for understanding human diseases. In this paper, we present a fully automatic pipeline for morphometric mouse brain analysis. The method is based on atlas-based tissue and regional segmentation, originally developed for the human brain. To evaluate our method, we conducted a qualitative and quantitative validation study as well as a comparison of B-spline and fluid registration methods as components of the pipeline. The validation study includes visual inspection, shape and volumetric measurements, and the stability of the registration methods against various parameter settings in the processing pipeline. The results show that both fluid and B-spline registration methods work well in the murine setting, but fluid registration is more stable. Additionally, we evaluated our segmentation methods by comparing volume differences between Fmr1 FXS mice on an FVB background and the C57BL/6J mouse strain.
Decision algorithm for 3D blood vessel loop based on a route edit distance
D. Kobayashi, H. Yokota, S. Morishita, et al.
This paper reports a method to distinguish true loops from false loops in a blood vessel graph. Most previous studies represent the 3D blood vessel structure as a graph. Such graphs sometimes contain false loops, which are harmful to subsequent graph analysis. Conventional approaches simply cut every loop, but this is not appropriate for graphs that include real anatomical loops. For this reason, we attempt to distinguish true loops from false ones. Our method uses the shapes of the blood vessels inside and outside each loop to compare similar loops. The main blood vessel, which we call a route, is long, thick, and shared with other routes as little as possible. Even if a graph includes a false loop, this main route avoids the false connection and identifies the same main vessel. Our method detects such a main route at each loop branch point and stores it as the outside feature for comparison; the inside feature is measured by converting the vessels inside the loop into a single route. Loops are then compared using a route edit distance, which naturally handles route insertion, deletion, and replacement. The method was tested on cerebral blood vessel images from MRI, where it attempted to detect the arterial circle of Willis in graphs containing false loops. It did so correctly in four of five datasets.
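A route edit distance is, in spirit, an edit distance over sequences of branches. A minimal sketch using the standard Levenshtein recurrence on routes encoded as label strings (the string encoding is an assumption for illustration, not the paper's exact formulation) is:

```python
def route_edit_distance(a, b):
    """Levenshtein distance between two routes encoded as sequences of
    branch labels; insertion, deletion, and substitution each cost 1."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # delete a branch
                          d[i][j - 1] + 1,        # insert a branch
                          d[i - 1][j - 1] + cost) # replace a branch
    return d[m][n]
```

A small distance between the routes of two loop candidates would then indicate the same underlying vessel, i.e. a likely false loop.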
Automated probabilistic segmentation of tumors from CT data using spatial and intensity properties
Jung Leng Foo, Thom Lobe, Eliot Winer
This paper presents a probabilistic segmentation process built on the selection step of the simulated annealing optimization algorithm. This process allows pixels to be segmented by a probabilistic selection procedure. An automated seed and search-region selection process handles multiple image slices as the object's size, shape, and location change between subsequent slices. Apart from the first slice in the dataset, where the user manually selects the seed and search region for segmentation, the method runs automatically on all other slices. In the test cases, the automated seed selection process was efficient in finding new seed locations as the object changed size, location, and orientation in each slice of the study. Segmentation results from both algorithms showed success in segmenting the tumor in nine of the ten CT datasets with less than 17% false-positive error, and in seven test cases with less than 20% false-negative error. Statistical testing of the results showed high repeatability, with low inter- and intra-user variability. Furthermore, the method requires only two-dimensional image data at a time, allowing it to run on a regular personal computer.
Probabilistic boosting trees for automatic bone removal from CT angiography images
In CT angiography images, osseous structures occluding vessels pose difficulties for physicians during diagnosis. Simple thresholding techniques for removing bones fail due to overlapping CT values of vessels filled with contrast agent and osseous tissue, while manual delineation is slow and tedious. Thus, we propose to automatically segment bones using a trainable classifier to label image patches as bone or background. The image features provided to the classifier are based on grey value statistics and gradients. In contrast to most existing methods, osseous tissue segmentation in our algorithm works without any prior knowledge of the body region depicted in the image. This is achieved by using a probabilistic boosting tree, which is capable of automatically decomposing the input space. The whole system works by partitioning the image using a watershed transform, classifying image regions as bone or background and refining the result by means of a graph-based procedure. Additionally, an intuitive way of manually refining the segmentation result is incorporated. The system was evaluated on 15 CTA datasets acquired from various body regions, showing an average correct recognition of bone regions of 80% at a false positive rate of 0.025% of the background voxels.
Mammography mass detection: a multi-stage hybrid approach
In this paper, a combined pixel-based and region-based mass detection method is proposed. In the first step, the background and pectoral muscle are filtered from the mammography images and the image contrast is enhanced using an adaptive density-weighted approach. Then, at a coarse level, suspected regions are extracted based on mathematical morphology and adaptive thresholding methods. Finally, to reduce the false positives produced in the coarse stage, a useful feature vector based on the ranklet transform is obtained and fed into a support vector machine classifier to detect masses. The MIAS (Mammographic Image Analysis Society) and Imam Hospital databases were used to evaluate the performance of the algorithm. The sensitivity and specificity of the proposed method are 74% and 91%, respectively. The proposed algorithm shows a high degree of robustness in detecting masses of different shapes.
An automated image segmentation and classification algorithm for immunohistochemically stained tumor cell nuclei
Hangu Yeo, Vadim Sheinin, Yuri Sheinin
As medical image data sets are digitized and their number increases exponentially, there is a need for automated image processing and analysis techniques. Most medical imaging methods require human visual inspection and manual measurement, which are labor-intensive and often produce inconsistent results. In this paper, we propose an automated image segmentation and classification method that identifies tumor cell nuclei in medical images and classifies these nuclei into two categories, stained and unstained tumor cell nuclei. The proposed method segments and labels individual tumor cell nuclei, separates nuclei clusters, and produces stained and unstained tumor cell nuclei counts. The representative fields of view were chosen by a pathologist from a known diagnosis (clear cell renal cell carcinoma), and the automated results are compared with hand-counted results from a pathologist.
Reconstruction from a flexible number of projections in cone-beam computed tomography via active shape models
Peter B. Noël, Jason J. Corso, Jinhui Xu, et al.
With a steady increase in CT interventions, population dose is increasing. Thus, new approaches must be developed to reduce the dose. In this paper, we present a means for rapid identification and reconstruction of objects of interest in reconstructed data. Active shape models are first trained on sets of data obtained from similar subjects. A reconstruction is performed using a limited number of views. As each view is added, the reconstruction is evaluated using the active shape models. Once the object of interest is identified, the volume of interest alone is reconstructed, saving reconstruction time. Note that the data outside the objects of interest can be reconstructed using fewer views or lower resolution, providing context for the region-of-interest data. An additional feature of our algorithm is that a reliable segmentation of objects of interest is achieved from a limited set of projections. Evaluations were performed using simulations with Shepp-Logan phantoms and animal studies. In our evaluations, regions of interest are identified using about 33 projections on average. The overlap of the identified regions with the true regions of interest is approximately 91%. The identification of the region of interest requires about 1/5 of the time required for full reconstruction; the time for reconstruction of the region of interest itself is currently determined by the fraction of voxels in the region of interest (i.e., voxels in the region of interest / voxels in the full volume). The algorithm has several important clinical applications, e.g., rotational angiography, digital tomosynthesis mammography, and limited-view computed tomography.
Prostate contouring in MRI-guided biopsy
Siddharth Vikal, Steven Haker, Clare Tempany, et al.
With MRI possibly becoming a modality of choice for detection and staging of prostate cancer, fast and accurate outlining of the prostate is required in the volume of clinical interest. We present a semi-automatic algorithm that uses a priori knowledge of prostate shape to arrive at the final prostate contour. The contour of one slice is then used as the initial estimate in the neighboring slices. Thus we propagate the contour in 3D through steps of refinement in each slice. The algorithm makes only minimal assumptions about the prostate shape. A statistical shape model of the prostate contour in polar transform space is employed to narrow the search space. Further, shape guidance is implicitly imposed by allowing only plausible edge orientations using template matching. The algorithm does not require region homogeneity, a discriminative edge force, or any particular edge profile. Likewise, it makes no assumptions about the imaging coils and pulse sequences used, and it is robust to the patient's pose (supine, prone, etc.). The contour method was validated against expert segmentation on clinical MRI data. We recorded a mean absolute distance of 2.0 ± 0.6 mm and a Dice similarity coefficient of 0.93 ± 0.3 in the midsection. The algorithm takes about 1 second per slice.
A minimal path searching approach for active shape model (ASM)-based segmentation of the lung
We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points over a fan-shaped region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the shape smooth, a smoothness constraint is applied to the deformable model. To quantitatively assess the ASM-MP segmentation, we compared the automatic segmentation with manual segmentation for 72 digitized lung radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.
A fast quantum mechanics based contour extraction algorithm
A fast algorithm is proposed to decrease the computational cost of the contour extraction approach based on quantum mechanics, a novel method recently proposed by us and presented at the same conference in a companion paper titled "A statistical approach to contour extraction based on quantum mechanics". In our approach, contour extraction is modeled as the locus of a moving particle described by quantum mechanics, obtained as the most probable locus of the particle simulated over a large number of iterations. In quantum mechanics, the probability that a particle appears at a point is equal to the square amplitude of the wave function. Furthermore, the expression of the wave function can be derived from digital images, making the probability of the locus of a particle available. We employed the Markov Chain Monte Carlo (MCMC) method to estimate the square amplitude of the wave function. Finally, our fast quantum mechanics based contour extraction algorithm (referred to as our fast algorithm hereafter) was evaluated on a number of different images, including synthetic and medical images. It was demonstrated that our fast algorithm achieves significant improvements in accuracy and robustness compared with well-known state-of-the-art contour extraction techniques, and a dramatic reduction in time complexity compared to the statistical approach to contour extraction based on quantum mechanics.
Accurate, fast, and robust vessel contour segmentation of CTA using an adaptive self-learning edge model
Stefan Grosskopf, Christina Biermann, Kai Deng, et al.
We present an efficient algorithm for the robust segmentation of vessel contours in Computed Tomography Angiography (CTA) images. The algorithm performs its task in several steps based on a 3D Active Contour Model (ACM), with refinements on Multi-Planar Reconstructions (MPRs) using 2D ACMs. To distinguish true vessel edges from spurious ones, an adaptive self-learning edge model is applied. We present details of the algorithm together with an evaluation on n=150 CTA data sets and compare the results of the automatic segmentation with manually outlined contours, resulting in a median Dice similarity coefficient (DSC) of 92.2%. The algorithm is able to render 100 contours within 1.1 s on a Pentium® 4 CPU at 3.20 GHz with 2 GByte of RAM.
Tumor segmentation of multiecho MR T2-weighted images with morphological operators
W. Torres, M. Martín-Landrove, M. Paluszny, et al.
In the present work, an automatic brain tumor segmentation procedure based on mathematical morphology is proposed. The approach considers sequences of eight multi-echo MR T2-weighted images. The relaxation time T2 characterizes the relaxation of water protons in brain tissue: white matter, gray matter, cerebrospinal fluid (CSF), or pathological tissue. The image data is initially regularized by a log-convex filter in order to adjust its geometrical properties to those of noiseless data, which exhibits monotonically decreasing convex behavior. The regularized data is then analyzed by means of an 8-dimensional morphological eccentricity filter. In the first stage, the filter was used for spatial homogenization of the tissues in the image, replacing each pixel with the most representative pixel within its structuring element, i.e., the one with the minimum total distance to all members of the structuring element. On the filtered images, the relaxation time T2 is estimated by a least-squares regression algorithm and the histogram of T2 is determined. The T2 histogram was partitioned using the watershed morphological operator; relaxation time classes were established and used for tissue classification and segmentation of the image. The method was validated on 15 sets of MRI data with excellent results.
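The per-pixel T2 estimate from a multi-echo sequence is commonly obtained by fitting the mono-exponential decay S(TE) = S0 · exp(-TE/T2), i.e. fitting the line ln S = ln S0 - TE/T2 by least squares. A minimal sketch of such a fit (noise handling omitted; this illustrates the standard log-linear technique, not the paper's exact regression) is:

```python
import math

def estimate_t2(echo_times, signals):
    """Least-squares fit of ln S = ln S0 - TE / T2 over a multi-echo decay.
    Returns (S0, T2). Assumes positive signals and mono-exponential decay."""
    xs = list(echo_times)
    ys = [math.log(s) for s in signals]   # linearize the exponential decay
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return math.exp(intercept), -1.0 / slope  # S0 and T2 from the fitted line
```

Repeating this fit at every pixel of the (filtered) eight-echo stack yields the T2 map whose histogram is then partitioned by the watershed operator.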
Morpho-geometrical approach for 3D segmentation of pulmonary vascular tree in multi-slice CT
Catalin Fetita, Pierre-Yves Brillet, Françoise J. Prêteux
The analysis of pulmonary vessels provides better insights into lung physio-pathology and offers the basis for a functional investigation of the respiratory system. In order to be performed in clinical routine, such analysis has to be compatible with the general protocol for thorax imaging based on multi-slice CT (MSCT), which does not involve the use of contrast agent for vessel enhancement. Despite the fact that visual assessment of the pulmonary vascular tree is facilitated by the natural contrast between vessels and lung parenchyma, quantitative analysis quickly becomes tedious due to the high spatial density and subdivision complexity of these anatomical structures. In this paper, we develop an automated 3D approach for the segmentation of the pulmonary vessels in MSCT, allowing further quantification facilities for lung function. The proposed approach combines mathematical morphology and discrete geometry operators in order to reach distal small-caliber blood vessels and to preserve the border with the wall of the bronchial tree, which features identical intensity values. In this respect, the pulmonary field is first roughly segmented using thresholding, and the trachea and main bronchi are removed. The lung shape is then regularized by morphological alternate filtering, and the high opacities (vessels, bronchi, and other possible pathologic features) are selected. After attenuation of the bronchus wall for large and medium airways, the set of vessel candidates is obtained by morphological grayscale reconstruction and binarization. The residual bronchus wall components are then removed by means of geometrical shape filtering, which includes skeletonization and cylindrical shape estimation. The morphology of the reconstructed pulmonary vessels can be visually investigated with volume rendering, by associating a specific color code with the local vessel caliber.
The complement set of the vascular tree among the high-intensity structures in the lung may also be informative about pathological lung conditions (inflammation, interstitial disease, nodular patterns, ...). The results obtained on normal and pathological subjects, and in inspiration vs. expiration, are presented and discussed.
Detection of clusters of microcalcification based on associated differential and morphological filters in full mammogram
Segmenting structures of interest is one of the most important stages in the interpretation, classification, and diagnosis analyses of computer-aided diagnosis (CAD) schemes. In this work, a method to segment microcalcification clusters in mammograms is proposed, based on a differential filter associated with the classic Sobel filter in a pre-segmentation step (Step 1). This process identifies the significant pixels, i.e., those that have the same value in both images resulting from the filtering processes. In addition, two morphological operations, a classic dilation scheme and the proposed filter in multidirectional form, are applied to obtain better border definition and filled-in regions of interest. In the next step (Step 2), an image map is obtained by translating a template over almost all possible image positions, generating a vector of densities formed by counting significant pixels. This discrete function makes it possible to find maximum points, which represent possible microcalcification clusters. An algorithm to reduce areas to single points is proposed to enable counting how many possible microcalcifications lie inside these regions. Tests with two databases, composed of full mammograms and regions of interest with phantom images, are presented together with their respective performances.
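The density map of Step 2 (counting significant pixels under a translated template) can be sketched as a sliding-window sum over a binary significance map. The window size and the brute-force loop are illustrative choices, not the paper's parameters:

```python
def density_map(significant, k):
    """Slide a k x k template over a binary map of 'significant' pixels and
    count hits at each template position; local maxima of the resulting map
    flag candidate microcalcification clusters."""
    rows, cols = len(significant), len(significant[0])
    out = []
    for r in range(rows - k + 1):
        row = []
        for c in range(cols - k + 1):
            row.append(sum(significant[r + i][c + j]
                           for i in range(k) for j in range(k)))
        out.append(row)
    return out
```

A production version would use a separable or integral-image sum so each position costs O(1) rather than O(k²).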
Automatic quantification of neo-vasculature from micro-CT
Yogish Mallya, A. K. Narayanan, Lyubomir Zagorchev
Angiogenesis is the process of formation of new blood vessels as outgrowths of pre-existing ones. It occurs naturally during development and tissue repair, and abnormally in pathologic conditions such as cancer. It is associated with the proliferation of blood vessels/tubular sprouts that penetrate deep into tissues to supply nutrients and remove waste products. The process starts with the migration of endothelial cells. As the cells move towards the target area they form small tubular sprouts recruited from the parent vessel. The sprouts grow in length due to migration, proliferation, and recruitment of new endothelial cells, and the process continues until the target area becomes fully vascularized. Accurate quantification of sprout formation is very important for the evaluation of treatments for ischemia as well as of angiogenesis inhibitors, and plays a key role in the battle against cancer. This paper presents a technique for automatic quantification of newly formed blood vessels from micro-CT volumes of tumor samples. A semiautomatic technique based on interpolation of Bezier curves was used to segment the cancerous growths. Small vessels within the segmented tumors, identified by their diameter, were enhanced and quantified with a multi-scale 3-D line detection filter. The same technique can easily be extended to the quantification of tubular structures in other 3-D medical imaging modalities. Experimental results are presented and discussed.
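Line detection filters of this kind are typically built on Hessian eigenvalue analysis: a bright tubular structure gives one strongly negative eigenvalue across the tube and a near-zero one along it. The abstract does not give the filter's exact form, so the following is only a single-scale 2D sketch of the idea (a full version would smooth at several scales in 3D and take the maximum response):

```python
import numpy as np

def hessian_line_filter(img):
    """Ridge measure from Hessian eigenvalues: return -lambda_min where the
    smaller eigenvalue is negative (bright line), 0 elsewhere."""
    gy, gx = np.gradient(img)
    gyy, _ = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    tr = gxx + gyy
    det = gxx * gyy - gxy * gxy
    disc = np.sqrt(np.maximum(tr ** 2 - 4.0 * det, 0.0))
    lam = (tr - disc) / 2.0            # smaller (more negative) eigenvalue
    return np.where(lam < 0, -lam, 0.0)

# Synthetic bright vertical line with a Gaussian cross-section
y, x = np.indices((21, 21))
img = np.exp(-((x - 10) ** 2) / 8.0)
resp = hessian_line_filter(img)
```

The response peaks on the ridge (column 10 here) and vanishes on the flat background, which is the behaviour the multi-scale filter exploits to isolate small vessels.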
Automatic brain cropping enhancement using active contours initialized by a PCNN
Active contours are a popular medical image segmentation strategy. In practice, however, their accuracy depends on the initialization of the process. The PCNN (Pulse-Coupled Neural Network) algorithm, developed by Eckhorn to model the observed synchronization of neural assemblies in small mammals such as cats, allows segmentation of regions of similar intensity but lacks a convergence criterion. In this paper we report a novel PCNN-based strategy to initialize the zero level contour for automatic brain cropping of T2-weighted MRI image volumes of Long-Evans rats. Individual 2D anatomy slices of the rat brain volume were processed by means of a PCNN and a surrogate image 'signature' was constructed for each slice. Using a previously trained artificial neural network (ANN), an approximate PCNN iteration (binary mask) was selected. This mask was then used to initialize a region-based active contour model to crop the brain region. We tested this hybrid algorithm on 30 rat brain (256×256×12) volumes and compared the results against a manually cropped gold standard. The Dice and Jaccard similarity indices were used for numerical evaluation of the proposed hybrid model, yielding averages of 0.97 and 0.94, respectively.
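The Dice and Jaccard indices used for the evaluation have simple closed forms; a minimal numpy sketch (with hypothetical toy masks, not the rat-brain data) is:

```python
import numpy as np

def dice(a, b):
    """Dice similarity: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard similarity: |A∩B| / |A∪B|."""
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

# Toy masks: two 2x4 bands that overlap in one row
a = np.zeros((4, 4), int)
a[:2, :] = 1
b = np.zeros((4, 4), int)
b[1:3, :] = 1
```

For these masks the intersection is 4 pixels and each mask has 8, so Dice is 0.5 and Jaccard is 1/3; identical masks score 1.0 under both.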
Sphere extraction in MR images with application to whole-body MRI
Christian Wachinger, Simon Baumann, Jochen Zeltner, et al.
Recent technological advances in magnetic resonance imaging (MRI) lead to shorter acquisition times and consequently make it an interesting whole-body imaging modality. The acquisition time can be further reduced by acquiring images with a large field-of-view (FOV), making fewer scan stations necessary. However, images with a large FOV are disrupted by severe geometric distortion artifacts, which become more pronounced closer to the boundaries. The current trend in MRI towards shorter and wider-bore magnets also makes the images more prone to geometric distortion. In a previous work, we proposed a method to correct for those artifacts using simultaneous deformable registration. In the future, we would like to integrate prior knowledge about the distortion field into the process. For this purpose we scan a specifically designed phantom consisting of small spheres arranged in a cube. In this article, we focus on the automatic extraction of the sphere centers, which are needed for the calculation of the distortion field. The extraction is not trivial because of the significant intensity inhomogeneity within the images. We propose to use the local phase for extraction, since the phase provides structural information invariant to intensity. We use the monogenic signal to calculate the phase. Subsequently, we apply both a Hough transform and a direct maxima search to detect the centers. Moreover, we use a gradient- and variance-based approach for radius estimation. We performed the extraction on several phantom scans and obtained good results.
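Of the two detection routes described, the direct maxima search is the simpler: a candidate center is a point that exceeds a threshold and dominates its neighbourhood. A 2D numpy sketch (the actual method operates on the 3D phase image from the monogenic signal; the threshold here is illustrative) could be:

```python
import numpy as np

def local_maxima(img, threshold):
    """A pixel is a detection if it exceeds the threshold and is strictly
    greater than all 8 neighbours."""
    H, W = img.shape
    p = np.pad(img, 1, mode="constant", constant_values=-np.inf)
    neigh = np.stack([p[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if (dy, dx) != (0, 0)])
    return np.argwhere((img > neigh.max(axis=0)) & (img > threshold))

# Two blob-like "sphere centres" in a synthetic image
yy, xx = np.indices((20, 20))
img = (np.exp(-((yy - 5) ** 2 + (xx - 5) ** 2) / 4.0)
       + np.exp(-((yy - 15) ** 2 + (xx - 12) ** 2) / 4.0))
centres = local_maxima(img, 0.5)
```

Each detected coordinate would then seed the radius estimation step; the Hough-transform route instead votes for centers in an accumulator and picks its maxima.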
Ridge-branch-based blood vessel detection algorithm for multimodal retinal images
Y. Li, N. Hutchings, R. W. Knighton, et al.
Automatic detection of retinal blood vessels is important for medical diagnosis and imaging. With the development of imaging technologies, various modalities of retinal images are available. Few currently published algorithms are applicable to multimodal retinal images, and the performance of existing algorithms in the presence of pathologies leaves room for improvement. The purpose of this paper is to propose an automatic Ridge-Branch-Based (RBB) detection algorithm for blood vessel centerlines and blood vessels in multimodal retinal images (for example, color fundus photographs, fluorescein angiograms, fundus autofluorescence images, SLO fundus images and OCT fundus images). Ridges, which can be considered centerlines of vessel-like patterns, are first extracted. The method uses the connective branching information of image ridges: if ridge pixels are connected, they are more likely to belong to the same class, vessel ridge pixels or non-vessel ridge pixels. Thanks to the good discriminating ability of the designed "Segment-Based Ridge Features", the classifier and its parameters can easily be adapted to multimodal retinal images without ground-truth training. We present thorough experimental results on SLO images, a color fundus photograph database and other multimodal retinal images, as well as comparisons with other published algorithms. Results show that the RBB algorithm achieves good performance.
A multi-modality segmentation framework: application to fully automatic heart segmentation
Automatic segmentation is a prerequisite to efficiently analyze the large amount of image data produced by modern imaging modalities, e.g., computed tomography (CT), magnetic resonance (MR) and rotational X-ray volume imaging. While many segmentation approaches exist, most of them are developed for a single, specific imaging modality and a single organ. In clinical practice, however, it is becoming increasingly important to handle multiple modalities: first, due to a case-specific choice of the most suitable imaging modality (e.g. CT versus MR), and second, in order to integrate complementary data from multiple modalities. In this paper, we present a single, integrated segmentation framework which can easily be adapted to a range of imaging modalities and organs. Our algorithm is based on shape-constrained deformable models. Key elements are (1) a shape model representing the geometry and variability of the target organ of interest, (2) spatially varying boundary detection functions representing the gray value appearance of the organ boundaries for the specific imaging modality or protocol, and (3) a multi-stage segmentation approach. Focusing on fully automatic heart segmentation, we present evaluation results for CT, MR (contrast-enhanced and non-contrasted), and rotational X-ray angiography (3-D RA). We achieved a mean segmentation error of about 0.8 mm for CT and (non-contrasted) MR, 1.0 mm for contrast-enhanced MR and 1.3 mm for 3-D RA, demonstrating the success of our segmentation framework across modalities.
Automated determination of spinal centerline in CT and MR images
Darko Štern, Tomaž Vrtovec, Franjo Pernuš, et al.
The spinal curvature is one of the most important parameters for the evaluation of spinal deformities. The spinal centerline, represented by the curve that passes through the centers of the vertebral bodies in three dimensions (3D), allows valid quantitative measurements of the spinal curvature at any location along the spine. We propose a novel automated method for the determination of the spinal centerline in 3D spine images. Our method exploits the anatomical property that the vertebral body walls are cylindrically shaped, so that the lines normal to the edges of the vertebral body walls most often intersect in the middle of the vertebral bodies, i.e. at the location of the spinal centerline. These points of intersection are first obtained by a novel algorithm that performs a selective search in the directions normal to the edges of the structures, and then connected with a parametric curve that represents the spinal centerline in 3D. As the method is based on anatomical properties of the 3D spine anatomy, it is modality-independent, i.e. applicable to images obtained by computed tomography (CT) and magnetic resonance (MR). The proposed method was evaluated on six CT and four MR images (T1- and T2-weighted) of normal spines and on one scoliotic CT spine image. The qualitative and quantitative results for the normal spines show that the spinal centerline can be successfully determined in both CT and MR spine images, while the results for the scoliotic spine indicate that the method may also be used to evaluate pathological curvatures.
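The final step, connecting the intersection points with a parametric curve, can be sketched with a chord-length-parameterised polynomial fit. This is a simple stand-in for whatever curve model the authors actually use; the `degree` and `samples` parameters, and the toy points, are illustrative assumptions:

```python
import numpy as np

def fit_centerline(points, degree=3, samples=50):
    """Fit x(t), y(t), z(t) polynomials through ordered 3-D centre points,
    with t the normalised cumulative chord length along the point sequence."""
    pts = np.asarray(points, dtype=float)
    chord = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    t = np.concatenate(([0.0], np.cumsum(chord)))
    t /= t[-1]
    coeffs = [np.polyfit(t, pts[:, k], degree) for k in range(3)]
    ts = np.linspace(0.0, 1.0, samples)
    return np.stack([np.polyval(c, ts) for c in coeffs], axis=1)

# Hypothetical vertebral-body centre points along a straight toy spine
pts = [(0.0, 0.0, float(z)) for z in range(10)]
curve = fit_centerline(pts)
```

A spline would behave similarly on noisy intersection points; the chord-length parameterisation is what makes the fit independent of uneven spacing between vertebrae.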
A statistical approach to contour extraction based on quantum mechanics
Contour extraction is a key issue in many medical applications. A novel statistical approach based on quantum mechanics to extract the contour of the object of interest in medical images is proposed in this paper. Quantum statistical concepts such as quantum discontinuity and the wave function correspond, respectively, to the discrete nature and the gray-level probability of an image. Contour extraction is performed by quantum particle movement, where the particle moves toward positions with high edge probability density in the image potential field. Experimental results with medical images demonstrate that the proposed approach can extract contours with arbitrary initialization and handle topology changes, capturing both inner and outer contours from a single initialization. Keywords: statistical approach, contour extraction, path integral, quantum mechanics.
Segmentation of 2D gel electrophoresis spots using a Markov random field
We propose a statistical model-based approach for the segmentation of fragments of DNA as a first step in the automation of the primarily manual process of comparing two or more images resulting from the Restriction Landmark Genomic Scanning (RLGS) method. These 2D gel electrophoresis images are the product of the separation of DNA into fragments that appear as spots on X-ray films. The goal is to find instances where a spot appears in one image and not in another since a missing spot can be correlated with a region of DNA that has been affected by a disease such as cancer. The entire comparison process is typically done manually, which is tedious and very error prone. We pose the problem as the labeling of each image pixel as either a spot or non-spot and use a Markov Random Field (MRF) model and simulated annealing for inference. Neighboring spot labels are then connected to form spot regions. The MRF based model was tested on actual 2D gel electrophoresis images.
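The MRF labelling with simulated annealing can be illustrated on a toy Potts model. In this sketch the class means, the smoothness weight `beta`, and the geometric cooling schedule are all illustrative assumptions, not the paper's settings; each pixel is labelled spot (1) or non-spot (0):

```python
import numpy as np

def anneal_mrf(img, beta=2.0, sweeps=60, t0=1.0, seed=0):
    """Binary MRF labelling by simulated annealing with Metropolis updates.
    Per-pixel energy: (intensity - class mean)^2 plus a Potts term charging
    beta for every disagreeing 4-neighbour."""
    rng = np.random.default_rng(seed)
    mu = (0.0, 1.0)                     # assumed class means (normalised image)
    H, W = img.shape
    labels = (img > 0.5).astype(int)    # initial labelling by thresholding

    def local_energy(i, j, lab):
        e = (img[i, j] - mu[lab]) ** 2
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < H and 0 <= nj < W:
                e += beta * (lab != labels[ni, nj])
        return e

    for s in range(sweeps):
        T = t0 * 0.9 ** s               # geometric cooling schedule
        for i in range(H):
            for j in range(W):
                flip = 1 - labels[i, j]
                dE = local_energy(i, j, flip) - local_energy(i, j, labels[i, j])
                if dE < 0 or rng.random() < np.exp(-dE / T):
                    labels[i, j] = flip
    return labels

# Toy "gel image": one bright 4x4 spot on a dark, slightly noisy background
noise = np.random.default_rng(1)
img = 0.1 + 0.02 * noise.standard_normal((12, 12))
img[4:8, 4:8] += 0.8
labels = anneal_mrf(img)
```

After annealing, connected runs of spot labels would be grouped into spot regions, mirroring the final step described in the abstract.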
Automatic anatomy recognition via multi-object-oriented active shape models
The computerized assistive process of recognizing, delineating and quantifying organs and tissue regions in medical images, occurring automatically during clinical image interpretation, is called automatic anatomy recognition (AAR). This paper studies the feasibility of developing an AAR system for clinical radiology. The anatomy recognition method described here consists of three components: (a) oriented active shape modeling (OASM); (b) a multi-object generalization of OASM; (c) object recognition strategies. Components (b) and (c) are novel and depend heavily on the idea of OASM, presented previously at this conference. The delineation of an object boundary is done in OASM via a two-level dynamic programming algorithm, wherein the first level finds optimal locations for the landmarks and the second level finds optimal oriented boundary segments between successive landmarks. This algorithm is generalized to multiple objects by including a model for each object and assigning a cost structure specific to each object in the spirit of live wire. The object recognition strategy attempts to find the pose vector (consisting of translation, rotation, and a scale component) for the multi-object model that yields the smallest total boundary cost over all objects. The evaluation results on a routine clinical abdominal CT data set indicate the following: (1) high recognition accuracy can be achieved reliably by including a large number of objects spread out in the body region; (2) an overall delineation accuracy of TPVF > 97% and FPVF < 0.2% was achieved, suggesting the feasibility of AAR.
Dependent component analysis based approach to robust demarcation of skin tumors
Ivica Kopriva, Antun Peršin, Neira Puizina-Ivić, et al.
A method for robust demarcation of basal cell carcinoma (BCC) is presented, employing a novel dependent component analysis (DCA)-based approach to unsupervised segmentation of the red-green-blue (RGB) fluorescent image of the BCC. It exploits the spectral diversity between the BCC and the surrounding tissue. DCA is an extension of independent component analysis (ICA) and is necessary to account for the statistical dependence induced by spectral similarity between the BCC and surrounding tissue. Robustness to intensity fluctuation is due to the scale invariance property of DCA algorithms. Through a comparative performance analysis with state-of-the-art image segmentation methods such as active contours (level sets), K-means clustering, non-negative matrix factorization and ICA, we experimentally demonstrate the good performance of DCA-based BCC demarcation in a demanding scenario where the intensity of the fluorescent image varies by almost two orders of magnitude.
Robust segmentation using non-parametric snakes with multiple cues for applications in radiation oncology
Jayashree Kalpathy-Cramer, Umut Ozertem, William Hersh, et al.
Radiation therapy is one of the most effective cancer treatments and is used in the care of about half of all people with cancer. A critical goal in radiation therapy is to deliver optimal radiation doses to the perceived tumor while sparing the surrounding healthy tissues. Radiation oncologists often manually delineate normal and diseased structures on 3D-CT scans, a time-consuming task. We present a segmentation algorithm using non-parametric snakes and principal curves that can be used in an automatic or semi-supervised fashion. It provides fast segmentation that is robust with respect to noisy edges and does not require the user to optimize a variety of parameters, unlike many segmentation algorithms. It also allows multiple cues to be incorporated easily for the purpose of estimating the edge probability density. These cues, including texture, intensity and shape priors, can be used simultaneously to delineate tumors and normal anatomy, thereby increasing the robustness of the algorithm. The notion of principal curves is used to interpolate between data points in sparse areas. We compare the results of the non-parametric snake technique with a gold standard consisting of manually delineated structures for tumors as well as normal organs.
A machine learning approach to extract spinal column centerline from three-dimensional CT data
Caihua Wang, Yuanzhong Li, Wataru Ito, et al.
The spinal column is one of the most important anatomical structures in the human body, and its centerline, that is, the centerline of the vertebral bodies, is a very important feature used by many applications in medical image processing. In the past, some approaches have been proposed to extract the centerline of the spinal column by using edge or region information of the vertebral bodies. However, those approaches may suffer from difficulties in edge detection or region segmentation of vertebral bodies when vertebral diseases such as osteoporosis or compression fractures are present. In this paper, we propose a novel approach based on machine learning to robustly extract the centerline of the spinal column from three-dimensional CT data. Our approach first applies a machine learning algorithm, AdaBoost, to detect vertebral cord regions, which have an S-shape similar and close to the spinal column but can be detected more stably. Then a centerline of the detected vertebral cord regions is obtained by fitting a spline curve to their central points, using the associated AdaBoost scores as weights. Finally, the obtained centerline of the vertebral cord is linearly deformed and translated in the sagittal direction to fit the top and bottom boundaries of the vertebral bodies, yielding a centerline of the spinal column. Experimental results on a large CT data set show the effectiveness of our approach.
Affinity functions: recognizing essential parameters in fuzzy connectedness based image segmentation
Fuzzy connectedness (FC) constitutes an important class of image segmentation schemes. Although affinity functions represent the core aspect (main variability parameter) of FC algorithms, they have not been studied systematically in the literature. In this paper, we present a thorough study to fill this gap. Our analysis is based on the notion of equivalent affinities: if any two equivalent affinities are used in the same FC schema to produce two versions of the algorithm, then these algorithms are equivalent in the sense that they lead to identical segmentations. We give a complete characterization of affinity equivalence and show that many natural definitions of affinity functions and their parameters used in the literature are redundant in the sense that different definitions and values of such parameters lead to equivalent affinities. We also show that the two main affinity types - homogeneity based and object feature based - are equivalent, respectively, to the difference quotient of the intensity function and Rosenfeld's degree of connectivity. In addition, we demonstrate that any segmentation obtained via the relative fuzzy connectedness (RFC) algorithm can be viewed as a segmentation obtained via the absolute fuzzy connectedness (AFC) algorithm with automatic and adaptive threshold detection. We finish with an analysis of possible ways of combining different component affinities that result in non-equivalent affinities.
Image segmentation using joint spatial-intensity-shape features: application to CT lung nodule segmentation
Automatic segmentation of medical images is a challenging problem due to the complexity and variability of human anatomy, poor contrast of the object being segmented, and noise resulting from the image acquisition process. This paper presents a novel feature-guided method for the segmentation of 3D medical lesions. The proposed algorithm combines 1) a volumetric shape feature (shape index) based on high-order partial derivatives; 2) mean shift clustering in a joint spatial-intensity-shape (JSIS) feature space; and 3) a modified expectation-maximization (MEM) algorithm on the mean shift mode map to merge the neighboring regions (modes). In such a scenario, the volumetric shape feature is integrated into the process of the segmentation algorithm. The joint spatial-intensity-shape features provide rich information for the segmentation of the anatomic structures or lesions (tumors). The proposed method has been evaluated on a clinical dataset of thoracic CT scans that contains 68 nodules. A volume overlap ratio between each segmented nodule and the ground truth annotation is calculated. Using the proposed method, the mean overlap ratio over all the nodules is 0.80. On visual inspection and using a quantitative evaluation, the experimental results demonstrate the potential of the proposed method. It can properly segment a variety of nodules including juxta-vascular and juxta-pleural nodules, which are challenging for conventional methods due to the high similarity of intensities between the nodules and their adjacent tissues. This approach could also be applied to lesion segmentation in other anatomies, such as polyps in the colon.
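Mean shift clustering, the core of step 2 above, moves every feature vector to the mean of its neighbourhood until it settles on a mode of the density. A flat-kernel numpy sketch in a toy 3-D feature space (not the full joint spatial-intensity-shape space; the bandwidth and data are illustrative) is:

```python
import numpy as np

def mean_shift(X, bandwidth, n_iter=30):
    """Flat-kernel mean shift: repeatedly replace each point's estimate by
    the mean of the original points within `bandwidth` of it."""
    X = np.asarray(X, dtype=float)
    Y = X.copy()
    for _ in range(n_iter):
        for i in range(len(Y)):
            near = np.linalg.norm(X - Y[i], axis=1) < bandwidth
            Y[i] = X[near].mean(axis=0)
    return Y

# Two well-separated clusters in a toy (x, intensity, shape-index) space
X = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0], [0.0, 0.2, 0.0],
              [10.0, 10.0, 10.0], [10.2, 10.0, 10.0], [10.0, 10.2, 10.0]])
modes = mean_shift(X, bandwidth=1.0)
```

Points that converge to the same mode form one region of the mode map; the paper's modified EM step then merges neighbouring modes, which this sketch omits.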
Validation tools for image segmentation
Dirk Padfield, James Ross
A large variety of image analysis tasks require the segmentation of various regions in an image. For example, segmentation is required to generate accurate models of brain pathology that are important components of modern diagnosis and therapy. While the manual delineation of such structures gives accurate information, the automatic segmentation of regions such as the brain and tumors from such images greatly enhances the speed and repeatability of quantifying such structures. The ubiquitous need for such algorithms has led to a wide range of image segmentation algorithms with various assumptions, parameters, and robustness. The evaluation of such algorithms is an important step in determining their effectiveness. Therefore, rather than developing new segmentation algorithms, we here describe validation methods for segmentation algorithms. Using similarity metrics comparing the automatic to manual segmentations, we demonstrate methods for optimizing the parameter settings for individual cases and across a collection of datasets using the Design of Experiment framework. We then employ statistical analysis methods to compare the effectiveness of various algorithms. We investigate several region-growing algorithms from the Insight Toolkit and compare their accuracy to that of a separate statistical segmentation algorithm. The segmentation algorithms are used with their optimized parameters to automatically segment the brain and tumor regions in MRI images of 10 patients. The validation tools indicate that none of the ITK algorithms studied outperforms the statistical segmentation algorithm with statistical significance, although they perform reasonably well considering their simplicity.
Segmentation of mosaicism in cervicographic images using support vector machines
The National Library of Medicine (NLM), in collaboration with the National Cancer Institute (NCI), is creating a large digital repository of cervicographic images for the study of uterine cervix cancer prevention. One of the research goals is to automatically detect diagnostic bio-markers in these images. Reliable bio-marker segmentation in large biomedical image collections is a challenging task due to the large variation in image appearance. Methods described in this paper focus on segmenting mosaicism, which is an important vascular feature used to visually assess the degree of cervical intraepithelial neoplasia. The proposed approach uses support vector machines (SVM) trained on a ground truth dataset annotated by medical experts (which circumvents the need for vascular structure extraction). We have evaluated the performance of the proposed algorithm and experimentally demonstrated its feasibility.
Posters: Shape and Texture
Method for fast and accurate segmentation processing from prior shape: application to femoral head segmentation on x-ray images
Ramnada Chav, Thierry Cresson, Claude Kauffmann, et al.
This paper proposes a prior shape segmentation method to create a constant-width ribbon-like zone that runs along the boundary to be extracted. The image data corresponding to that zone is transformed into a rectangular image subspace where the boundary is roughly straightened. Every step of the segmentation process is then applied to that straightened subspace image where the final extracted boundary is transformed back into the original image space. This approach has the advantage of producing very efficient filtering and edge detection using conventional techniques. The final boundary is continuous even over image regions where partial information is missing. The technique was applied to the femoral head segmentation where we show that the final segmented boundary is very similar to the one obtained manually by a trained orthopedist and has low sensitivity to the initial positioning of the prior shape.
Vertebral segmentation using contourlet-based salient point matching and localized multiscale shape prior
R. Zewail, A. Elsafi, N. Durdle
Medical experts often examine hundreds of spine x-rays to determine the existence of diseases like osteoarthritis, osteoporosis, and osteophytes. Accurate vertebra segmentation plays a major role in the proper assessment of various vertebral abnormalities. Manual segmentation methods are both time-consuming and non-reproducible; hence, developing efficient computer-assisted segmentation methods has been a long-standing goal. Over the past decade, segmentation methods that utilize statistical models of shape and appearance have drawn much interest within the medical imaging community. However, despite being a promising approach, they are always faced with a number of challenges, such as poor image quality and the ability to generalize well to unseen vertebral deformities. This paper presents a novel vertebral segmentation method using contourlet-based salient point matching and a localized multi-scale shape prior. We employ a multi-scale directional analysis tool, namely contourlets, to build local appearance profiles at salient points of the vertebral body. The contourlet-based local appearance model is used to detect the vertebral bodies in the test x-ray image. A novel localized multi-scale shape prior is used to drive the segmentation process. Within a best-basis selection framework, the proposed shape prior benefits from the multi-scale nature of wavelet packets and the capability of ICA to capture hidden independent modes of variation. Experiments were conducted using a set of 100 digital x-ray images of lumbar spines. The contourlet-based appearance profiles and the localized multi-scale shape prior were constructed using a training subset of images and then matched to unseen images. Promising results were obtained compared to related work in the literature, with an average segmentation error of 1.1997 mm.
Volumetric topological analysis: a novel method for trabecular bone characterization on the continuum between plates and rods
Punam K. Saha, Yan Xu, Guoyuan Liang, et al.
Trabecular bone (TB) is a complex quasi-random network of interconnected struts and plates. TB constantly remodels to adapt dynamically to the stresses to which it is subjected (Wolff's Law). In osteoporosis, this dynamic equilibrium between bone formation and resorption is perturbed, leading to bone loss and structural deterioration, both of which increase fracture risk. Bone's mechanical competence can only be partly explained by variations in bone mineral density, which led to the notion of bone structural quality. Previously, we developed digital topological analysis (DTA), which classifies plates, rods, profiles, edges and junctions in a TB skeletal representation. Although the method has become quite popular, a major limitation is that DTA produces hard classifications only, failing to distinguish between narrow and wide plates. Here, we present a new method called volumetric topological analysis (VTA) for quantification of regional topology in complex quasi-random TB networks. At each TB voxel, the method uniquely classifies the topology on the continuum between perfect plates and rods. Therefore, the method is capable of detecting early alterations of trabeculae from plates to rods according to the known etiology of osteoporotic bone loss. Novel ideas of geodesic distance transform, geodesic scale and feature propagation are introduced and combined with DTA and fuzzy distance transform methods to form the new VTA technique. The method has been applied to MDCT and μCT images of a cadaveric distal tibia specimen and the results have been quantitatively evaluated. Specifically, the intra- and inter-modality reproducibility of the method has been examined and the results are very promising.
A comparison of local and global scale approaches in characterizing shapes
Scale is a fundamental concept in computer vision and pattern recognition, especially in the fields of shape analysis, image segmentation, and registration. It represents the level of detail of object information in scenes. Global scale methods in image processing process the scene at each of various fixed scales and combine the results, as in scale space approaches. Local scale approaches define the largest homogeneous region at each point, and treat these as fundamental units. A similar dichotomy exists for describing shapes also. To vary the level of detail depending on application, it is desirable to be able to detect dominant points on shape boundaries at different scales. In this paper, we compare global and local scale approaches to shape analysis. For global scale, the Curvature Scale Space (CSS) method is selected, which is a state of the art shape descriptor, and is used in the MPEG-7 standard. The local scale approach is based on the notion of curvature-scale (c-scale), which is a new local scale concept that brings the idea of local morphometric scale (such as ball-, tensor-, and generalized scale) developed for images to the realm of boundaries. All previous methods of extracting dominant points lack this concept of a local scale. In this paper, we present a thorough evaluation of these global and local scale methods. Our analysis indicates that locally adaptive scale has advantages over global scale in shape description, just as it has also been demonstrated in image filtering, segmentation, and registration.
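Both the global (CSS) and local (c-scale) approaches start from contour curvature. A small numpy sketch of the CSS side (smooth the closed contour coordinates with Gaussians of increasing σ, then compute signed curvature; the 3σ kernel truncation is an assumption) might be:

```python
import numpy as np

def smooth_closed(sig, sigma):
    """Circularly convolve a closed-contour coordinate signal with a
    Gaussian of width sigma (kernel truncated at 3*sigma)."""
    r = max(1, int(3 * sigma))
    k = np.exp(-np.arange(-r, r + 1) ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    ext = np.concatenate([sig[-r:], sig, sig[:r]])   # periodic extension
    return np.convolve(ext, k, mode="valid")

def curvature(x, y):
    """Signed curvature of a parametric curve (x(u), y(u))."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5

# A circle of radius 5: curvature should be ~1/5 at every point
u = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
x, y = 5.0 * np.cos(u), 5.0 * np.sin(u)
kappa = curvature(smooth_closed(x, 2.0), smooth_closed(y, 2.0))
```

In CSS, the zero-crossings of this curvature are tracked as σ grows to form the scale-space image; a local-scale method would instead assign each boundary point its own scale from the curvature behaviour in its neighbourhood.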
Automatic 3D shape severity quantification and localization for deformational plagiocephaly
Indriyati Atmosukarto, Linda G. Shapiro, Michael L. Cunningham, et al.
Recent studies have shown an increase in the occurrence of deformational plagiocephaly and brachycephaly in children. This increase has coincided with the "Back to Sleep" campaign that was introduced to reduce the risk of Sudden Infant Death Syndrome (SIDS). However, there has yet to be an objective quantification of the degree of severity for these two conditions. Most diagnoses are done on subjective factors such as patient history and physician examination. The existence of an objective quantification would help research in areas of diagnosis and intervention measures, as well as provide a tool for finding correlation between the shape severity and cognitive outcome. This paper describes a new shape severity quantification and localization method for deformational plagiocephaly and brachycephaly. Our results show that there is a positive correlation between the new shape severity measure and the scores entered by a human expert.
Texture analysis using lacunarity and average local variance
Dantha C. Manikka-Baduge, Geoff Dougherty
Texture and spatial pattern are important attributes of images and their potential as features in image classification, for example to discriminate between normal and abnormal status in medical images, has long been recognized. In order to be clinically useful, a texture metric should be robust to changes in image acquisition and digitization. We compared four multi-scale texture metrics accessible in the spatial domain (lacunarity, average local variance (ALV), and two novel variations) in terms of ease of interpretation, sensitivity and computational cost. We analyzed a variety of patterns and textures, using simple synthetic images, standard texture images, and three-dimensional point distributions. ALV is invariant to brightness, but depends on image contrast; it detects the size of a pattern element as a large peak in the plot. Lacunarity shows the periodicity within an image. Normalizing lacunarity removes its dependence on image density, but not on image brightness and contrast, so that comparisons should always be made using histogram equalized images. We extended the treatment to grayscale images directly, which is not equivalent to a weighted sum of the normalized lacunarity of the bit-plane images. Different sampling schemes were introduced and compared in terms of resolution and computational tractability. The plots can be used directly as a texture signature, and parametric features can be extracted from monotonic lacunarity plots for classification purposes.
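Gliding-box lacunarity, one of the metrics compared above, is the ratio of the second moment to the squared first moment of the box masses over all box positions. A direct (unoptimised) numpy sketch at a single box size, with toy images, is:

```python
import numpy as np

def lacunarity(img, box):
    """Gliding-box lacunarity at one box size: <M^2> / <M>^2 over the
    masses M of every box position."""
    H, W = img.shape
    masses = np.array([img[i:i + box, j:j + box].sum()
                       for i in range(H - box + 1)
                       for j in range(W - box + 1)], dtype=float)
    return float((masses ** 2).mean() / masses.mean() ** 2)

# A uniform texture has lacunarity exactly 1; a sparse one is "gappier" (> 1)
uniform = np.ones((8, 8))
sparse = np.zeros((8, 8))
sparse[0, 0] = 1.0
```

Evaluating this over a range of box sizes yields the lacunarity plot discussed in the abstract; normalization and the grayscale extension modify the mass definition but not this basic moment ratio.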
Preliminary study report: topological texture features extracted from standard radiographs of the heel bone are correlated with femoral bone mineral density
H. F. Boehm, J. Lutz, M. Koerner, et al.
With the growing number of elderly patients in industrialized nations, the incidence of geriatric, i.e. osteoporotic, fractures is steadily rising. It is of great importance to understand the characteristics of hip fractures and to provide diagnostic tests for the assessment of an individual's fracture risk that allow preventive action to be taken and therapeutic advice to be given. At present, bone mineral density (BMD) obtained from DXA (dual-energy x-ray absorptiometry) is the clinical standard of reference for the diagnosis and follow-up of osteoporosis. Since the availability of DXA - unlike that of clinical X-ray imaging - is usually restricted to specialized medical centers, it is worth trying to implement alternative methods to estimate an individual's BMD. Radiographs of the peripheral skeleton, e.g. the ankle, rank among the most frequently ordered diagnostic procedures in surgery for exclusion or confirmation of fracture. It would be highly beneficial if - as a by-product of conventional imaging - one could obtain a quantitative parameter that is closely correlated with femoral BMD in addition to the original diagnostic information, e.g. fracture status at the peripheral site. Previous studies have demonstrated a correlation between calcaneal BMD and osteoporosis. The objective of our study was to test the hypothesis that topological analysis of calcaneal bone texture depicted in a lateral x-ray projection of the ankle allows femoral BMD to be estimated. Our analysis of 34 post-menopausal patients indicates that texture properties based on gray-level topology in calcaneal x-ray films are closely correlated with BMD at the hip and may qualify as a substitute indicator of femoral fracture risk.