Proceedings Volume 8314

Medical Imaging 2012: Image Processing

View the digital version of this volume at SPIE Digital Library.

Volume Details

Date Published: 30 March 2012
Contents: 23 Sessions, 176 Papers, 0 Presentations
Conference: SPIE Medical Imaging 2012
Volume Number: 8314

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 8314
  • Segmentation I
  • Registration I
  • Keynote and Cardiac Applications
  • Diffusion Imaging
  • Shape: Applications and Methods
  • Segmentation II
  • Label Fusion
  • Brain Applications
  • Registration II
  • OCT and Ultrasound
  • Segmentation of Vessels and Tubular Structures
  • Digital Pathology I
  • Digital Pathology II
  • Posters: Registration
  • Posters: Segmentation
  • Posters: Shape
  • Posters: Image Enhancement
  • Posters: Neuro Applications
  • Posters: Compressive Sensing
  • Posters: Functional Imaging
  • Posters: Classification
  • Posters: Motion
Front Matter: Volume 8314
Front Matter: Volume 8314
This PDF file contains the front matter associated with SPIE Proceedings Volume 8314, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
Segmentation I
A patient-specific segmentation framework for longitudinal MR images of traumatic brain injury
Bo Wang, Marcel Prastawa, Andrei Irimia, et al.
Traumatic brain injury (TBI) is a major cause of death and disability worldwide. Robust, reproducible segmentations of MR images with TBI are crucial for quantitative analysis of recovery and treatment efficacy. However, this is a significant challenge due to severe anatomy changes caused by edema (swelling), bleeding, tissue deformation, skull fracture, and other effects related to head injury. In this paper, we introduce a multi-modal image segmentation framework for longitudinal TBI images. The framework is initialized through manual input of primary lesion sites at each time point, which are then refined by a joint approach composed of Bayesian segmentation and construction of a personalized atlas. The personalized atlas construction estimates the average of the posteriors of the Bayesian segmentation at each time point and warps the average back to each time point to provide the updated priors for Bayesian segmentation. The difference between our approach and segmenting longitudinal images independently is that we use the information from all time points to improve the segmentations. Given a manual initialization, our framework automatically segments healthy structures (white matter, grey matter, cerebrospinal fluid) as well as different lesions such as hemorrhagic lesions and edema. Our framework can handle different sets of modalities at each time point, which provides flexibility in analyzing clinical scans. We show results on three subjects with acute baseline scans and chronic follow-up scans. The results demonstrate that joint analysis of all time points yields improved segmentation compared to independent analysis of the two time points.
Comparison of threshold-based and watershed-based segmentation for the truncation compensation of PET/MR images
Thomas Blaffert, Steffen Renisch, Jing Tang, et al.
Recently introduced combined PET/MR scanners need to handle the specific problem that a limited MR field of view sometimes truncates arm or body contours, which prevents an accurate calculation of PET attenuation correction maps. Such maps of attenuation coefficients over body structures are required for a quantitatively correct PET image reconstruction. This paper addresses this problem by presenting a method that segments a preliminary reconstruction type of PET images, time of flight non-attenuation corrected (ToF-NAC) images, and outlining a processing pipeline that compensates the arm or body truncation with this segmentation. The impact of this truncation compensation is demonstrated together with a comparison of two segmentation methods, simple gray value threshold segmentation and a watershed algorithm on a gradient image. Our results indicate that with truncation compensation a clinically tolerable quantitative SUV error is robustly achievable.
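The abstract contrasts two segmentation strategies: a simple gray-value threshold and a watershed on a gradient image. The sketch below is a minimal, illustrative contrast of the two ideas on a synthetic 2D phantom, not the authors' ToF-NAC pipeline; all function names, seeding rules, and parameters are assumptions.

```python
import numpy as np
from skimage.filters import threshold_otsu, sobel
from skimage.segmentation import watershed

def threshold_segment(img):
    # Simple global gray-value threshold; Otsu picks the cutoff here.
    return img > threshold_otsu(img)

def watershed_segment(img):
    # Watershed on the gradient magnitude, seeded from intensity extremes.
    gradient = sobel(img)
    markers = np.zeros(img.shape, dtype=int)
    markers[img < np.percentile(img, 10)] = 1   # background seed
    markers[img > np.percentile(img, 90)] = 2   # body seed
    return watershed(gradient, markers) == 2

if __name__ == "__main__":
    # Synthetic phantom: a bright disk ("body") on a dark, noisy background.
    yy, xx = np.mgrid[:128, :128]
    body = ((yy - 64) ** 2 + (xx - 64) ** 2) < 40 ** 2
    img = body + 0.2 * np.random.default_rng(0).standard_normal((128, 128))
    print("threshold area:", threshold_segment(img).sum())
    print("watershed area:", watershed_segment(img).sum())
```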
Validation of model-based pelvis bone segmentation from MR images for PET/MR attenuation correction
S. Renisch, T. Blaffert, J. Tang, et al.
With the recent introduction of combined Magnetic Resonance Imaging (MRI) / Positron Emission Tomography (PET) systems, the generation of attenuation maps for PET based on MR images gained substantial attention. One approach for this problem is the segmentation of structures on the MR images with subsequent filling of the segments with respective attenuation values. Structures of particular interest for the segmentation are the pelvis bones, since those are among the most heavily absorbing structures for many applications, and they can serve at the same time as valuable landmarks for further structure identification. In this work the model-based segmentation of the pelvis bones on gradient-echo MR images is investigated. A processing chain for the detection and segmentation of the pelvic bones is introduced, and the results are evaluated using CT-generated "ground truth" data. The results indicate that a model-based segmentation of the pelvis bone is feasible with moderate requirements on the pre- and postprocessing steps of the segmentation.
Automatic bone segmentation in knee MR images using a coarse-to-fine strategy
Sang Hyun Park, Soochahn Lee, Il Dong Yun, et al.
Segmentation of bone and cartilage from a three dimensional knee magnetic resonance (MR) image is a crucial element in monitoring and understanding of development and progress of osteoarthritis. Until now, various segmentation methods have been proposed to separate the bone from other tissues, but it still remains a challenging problem due to the varying modalities of MR images, low contrast between bone and surrounding tissues, and shape irregularity. In this paper, we present a new fully-automatic segmentation method of bone compartments using relevant bone atlases from a training set. To find the relevant bone atlases and obtain the segmentation, a coarse-to-fine strategy is proposed. In the coarse step, the best atlas among the training set and an initial segmentation are simultaneously detected using branch and bound tree search. Since the best atlas in the coarse step is not accurately aligned, all atlases from the training set are aligned to the initial segmentation, and the best aligned atlas is selected in the middle step. Finally, in the fine step, segmentation is conducted by adaptively integrating the shape of the best aligned atlas and an appearance prior based on characteristics of local regions. In the experiments, femur and tibia bones of forty test MR images are segmented by the proposed method using sixty training MR images. Experimental results show that the performance of both the segmentation and the registration improves from the coarse to the fine step, and the proposed method achieves performance comparable with state-of-the-art methods.
Fully automated prostate segmentation in 3D MR based on normalized gradient fields cross-correlation initialization and LOGISMOS refinement
Manual delineation of the prostate is a challenging task for a clinician due to its complex and irregular shape. Furthermore, the need for precisely targeting the prostate boundary continues to grow. Planning for radiation therapy, MR-ultrasound fusion for image-guided biopsy, multi-parametric MRI tissue characterization, and context-based organ retrieval are examples where accurate prostate delineation can play a critical role in a successful patient outcome. Therefore, a robust automated full prostate segmentation system is desired. In this paper, we present an automated prostate segmentation system for 3D MR images. In this system, the prostate is segmented in two steps: the prostate displacement and size are first detected, and then the boundary is refined by a shape model. The detection approach is based on normalized gradient fields cross-correlation. This approach is fast, robust to intensity variation and provides good accuracy to initialize a prostate mean shape model. The refinement model is based on a graph-search based framework, which contains both shape and topology information during deformation. We generated the graph cost using trained classifiers and used coarse-to-fine search and region-specific classifier training. The proposed algorithm was developed using 261 training images and tested on another 290 cases. The segmentation performance using mean DSC ranging from 0.89 to 0.91 depending on the evaluation subset demonstrates state of the art performance. Running time for the system is about 20 to 40 seconds depending on image size and resolution.
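As a rough illustration of the normalized gradient fields (NGF) idea used for detection here, the sketch below computes a normalized gradient field and an NGF-based similarity between two volumes. It is a simplified assumption of how such a measure can look, not the authors' implementation; the regularization constant and array conventions are my own.

```python
import numpy as np

def normalized_gradient_field(volume, eps=1e-3):
    # Gradient per axis, then normalize each voxel's gradient vector;
    # eps suppresses noise-only gradients in flat regions.
    grads = np.gradient(volume.astype(float))
    g = np.stack(grads, axis=0)
    norm = np.sqrt((g ** 2).sum(axis=0) + eps ** 2)
    return g / norm

def ngf_similarity(fixed_ngf, moving_ngf):
    # Mean squared inner product between the two normalized fields;
    # higher values mean better edge alignment, independent of intensity scale.
    dot = (fixed_ngf * moving_ngf).sum(axis=0)
    return float((dot ** 2).mean())
```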
Registration I
Real-time 2D/3D registration for tumor motion tracking during radiotherapy
H. Furtado, C. Gendrin, C. Bloch, et al.
Organ motion during radiotherapy is one of the causes of uncertainty in dose delivery. To cope with this, the planned target volume (PTV) has to be larger than needed to guarantee full tumor irradiation. Existing methods deal with the problem by performing tumor tracking using implanted fiducial markers or magnetic sensors. In this work, we investigate the feasibility of using x-ray based real-time 2D/3D registration for non-invasive tumor motion tracking during radiotherapy. Our method uses purely intensity based techniques, thus avoiding markers or fiducials. X-rays are acquired during treatment at a rate of 5.4 Hz. We iteratively compare each x-ray with a set of digitally reconstructed radiographs (DRR) generated from the planning volume dataset, finding the optimal match between the x-ray and one of the DRRs. The DRRs are generated using a ray-casting algorithm, implemented using general purpose computation on graphics hardware (GPGPU) programming techniques using CUDA for greater performance. Validation is conducted off-line using a phantom and five clinical patient data sets. The registration is performed on a region of interest (ROI) centered around the PTV. The phantom motion is measured with an rms error of 2.1 mm and mean registration time is 220 ms. For the patient data sets, a sinusoidal movement that clearly correlates to the breathing cycle is seen. Mean registration time is always under 105 ms, which is well suited for our purposes. These results demonstrate that real-time organ motion monitoring using image based markerless registration is feasible.
Robust elastic 2D/3D geometric graph matching
Eduard Serradell, Jan Kybic, Francesc Moreno-Noguer, et al.
We present an algorithm for geometric matching of graphs embedded in 2D or 3D space. It is applicable for registering any graph-like structures appearing in biomedical images, such as blood vessels, pulmonary bronchi, nerve fibers, or dendritic arbors. Our approach does not rely on the similarity of local appearance features, so it is suitable for multimodal registration with a large difference in appearance. Unlike earlier methods, the algorithm uses edge shape, does not require an initial pose estimate, can handle partial matches, and can cope with nonlinear deformations and topological differences. The matching consists of two steps. First, we find an affine transform that roughly aligns the graphs by exploring the set of all consistent correspondences between the nodes. This can be done at an acceptably low computational expense by using parameter uncertainties for pruning, backtracking as needed. Parameter uncertainties are updated in a Kalman-like scheme with each match. In the second step we allow for a nonlinear part of the deformation, modeled as a Gaussian Process. Short sequences of edges are grouped into superedges, which are then matched between graphs. This allows for topological differences. A maximum consistent set of superedge matches is found using a dedicated branch-and-bound solver, which is over 100 times faster than a standard linear programming approach. Geometrical and topological consistency of candidate matches is determined in a fast hierarchical manner. We demonstrate the effectiveness of our technique at registering angiography and retinal fundus images, as well as neural image stacks.
Non-rigid surface proximity registration of CT images considering the influence of pleural thickenings
Peter Faltin, Kraisorn Chaisaowong, Thomas Kraus, et al.
Given two CT thorax images from the same patient taken at two different points in time, a detailed follow-up assessment of pleural thickenings and their growth requires a registration of the regarded image regions. While the spatio-temporal matching of thickenings could be achieved by a rigid registration, the direct visual comparison or the combination of thickening segmentations from different points in time requires a more precise registration. We present a new method which provides a non-rigid registration of the 3D image data in the region close to the lung surface, where pleural thickenings are located. A B-spline based approach is used to compensate the non-rigid deformations of the lungs. The control-grid for the B-splines is determined using a non-iterative method, which requires matched feature points from the registered image pair. However, current non-rigid registration methods compensate all changes of the lung surface, which in our case is explicitly undesired for changes caused by pleural thickenings. Therefore, our approach takes the thickenings into account by choosing feature points not directly located on the lung surface. The number of feature points is reduced and only strong features are kept for a 3D block matching.
Joint estimation of subject motion and tracer kinetic parameters of dynamic PET data in an EM framework
Jieqing Jiao, Cristian A. Salinas, Graham E. Searle, et al.
Dynamic Positron Emission Tomography is a powerful tool for quantitative imaging of in vivo biological processes. The long scan durations necessitate motion correction, to maintain the validity of the dynamic measurements, which can be particularly challenging due to the low signal-to-noise ratio (SNR) and spatial resolution, as well as the complex tracer behaviour in the dynamic PET data. In this paper we develop a novel automated expectation-maximisation image registration framework that incorporates temporal tracer kinetic information to correct for inter-frame subject motion during dynamic PET scans. We employ the Zubal human brain phantom to simulate dynamic PET data using SORTEO (a Monte Carlo-based simulator), in order to validate the proposed method for its ability to recover imposed rigid motion. We have conducted a range of simulations using different noise levels, and corrupted the data with a range of rigid motion artefacts. The performance of our motion correction method is compared with pairwise registration using normalised mutual information as a voxel similarity measure (an approach conventionally used to correct for dynamic PET inter-frame motion based solely on intensity information). To quantify registration accuracy, we calculate the target registration error across the images. The results show that our new dynamic image registration method based on tracer kinetics yields better realignment of the simulated datasets, halving the target registration error when compared to the conventional method at small motion levels, as well as yielding smaller residuals in translation and rotation parameters. We also show that our new method is less affected by the low signal in the first few frames, which the conventional method based on normalised mutual information fails to realign.
Nonrigid registration and classification of the kidneys in 3D dynamic contrast enhanced (DCE) MR images
Xiaofeng Yang, Pegah Ghafourian, Puneet Sharma, et al.
We have applied image analysis methods in the assessment of human kidney perfusion based on 3D dynamic contrast-enhanced (DCE) MRI data. This approach consists of 3D non-rigid image registration of the kidneys and fuzzy C-mean classification of kidney tissues. The proposed registration method reduced motion artifacts in the dynamic images and improved the analysis of kidney compartments (cortex, medulla, and cavities). The dynamic intensity curves show the successive transition of the contrast agent through kidney compartments. The proposed method for motion correction and kidney compartment classification may be used to improve the validity and usefulness of further model-based pharmacokinetic analysis of kidney function.
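A minimal fuzzy C-means sketch in the spirit of the tissue classification step described above; the input is assumed to be per-voxel intensity-time curves, and the cluster count, fuzziness exponent and other parameters are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """X: (n_samples, n_features); returns membership matrix U and centers C."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        C = (W.T @ X) / W.sum(axis=0)[:, None]            # weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - C[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))                # membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, C
```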
Super-resolution reconstruction for tongue MR images
Magnetic resonance (MR) images of the tongue have been used in both clinical medicine and scientific research to reveal tongue structure and motion. In order to see different features of the tongue and its relation to the vocal tract it is beneficial to acquire three orthogonal image stacks-e.g., axial, sagittal and coronal volumes. In order to maintain both low noise and high visual detail, each set of images is typically acquired with in-plane resolution that is much better than the through-plane resolution. As a result, any one data set, by itself, is not ideal for automatic volumetric analyses such as segmentation and registration or even for visualization when oblique slices are required. This paper presents a method of super-resolution reconstruction of the tongue that generates an isotropic image volume using the three orthogonal image stacks. The method uses preprocessing steps that include intensity matching and registration and a data combination approach carried out by Markov random field optimization. The performance of the proposed method was demonstrated on five clinical datasets, yielding superior results when compared with conventional reconstruction methods.
Keynote and Cardiac Applications
Representation of deformable motion for compression of dynamic cardiac image data
Andreas Weinlich, Peter Amon, Andreas Hutter, et al.
We present a new approach for efficient estimation and storage of tissue deformation in dynamic medical image data like 3-D+t computed tomography reconstructions of human heart acquisitions. Tissue deformation between two points in time can be described by means of a displacement vector field indicating, for each voxel of a slice, the position in the corresponding slice of the previous time point from which it has moved. Our deformation model represents the motion in a compact manner using a down-sampled potential function of the displacement vector field. This function is obtained by a Gauss-Newton minimization of the estimation error image, i.e., the difference between the current and the deformed previous slice. For lossless or lossy compression of volume slices, the potential function and the error image can afterwards be coded separately. By assuming deformations instead of translational motion, a subsequent coding algorithm using this method will achieve better compression ratios for medical volume data than with conventional block-based motion compensation known from video coding. Due to the smooth prediction without block artifacts, particularly whole-image transforms like wavelet decomposition as well as intra-slice prediction methods can benefit from this approach. We show that with the discrete cosine as well as the Karhunen-Loève transform the method can achieve a better energy compaction of the error image than block-based motion compensation while reaching approximately the same prediction error energy.
Automatic detection of cardiac cycle and measurement of the mitral annulus diameter in 4D TEE images
Bastian Graser, Maximilian Hien, Helmut Rauch, et al.
Mitral regurgitation is a wide spread problem. For successful surgical treatment quantification of the mitral annulus, especially its diameter, is essential. Time resolved 3D transesophageal echocardiography (TEE) is suitable for this task. Yet, manual measurement in four dimensions is extremely time consuming, which confirms the need for automatic quantification methods. The method we propose is capable of automatically detecting the cardiac cycle (systole or diastole) for each time step and measuring the mitral annulus diameter. This is done using total variation noise filtering, the graph cut segmentation algorithm and morphological operators. An evaluation took place using expert measurements on 4D TEE data of 13 patients. The cardiac cycle was detected correctly on 78% of all images and the mitral annulus diameter was measured with an average error of 3.08 mm. Its full automatic processing makes the method easy to use in the clinical workflow and it provides the surgeon with helpful information.
Feasibility of determining myocardial transient ischemic dilation from cardiac CT by automated stress/rest registration
Jonghye Woo, Piotr J. Slomka, Ryo Nakazato, et al.
Transient ischemic dilation (TID) of the left ventricle measured by myocardial perfusion Single Photon Emission Computed Tomography (SPECT) and defined as the ratio of stress myocardial blood volume to rest myocardial blood volume has been shown to be highly specific for detection of severe coronary artery disease. This work investigates automated quantification of TID from cardiac Computed Tomography (CT) perfusion images. To date, TID has not been computed from CT. Previous studies to compute TID have assumed accurate segmentation of the left ventricle and performed subsequent analysis of volume change mainly on static or less often on gated myocardial perfusion SPECT. This, however, may limit the accuracy of TID due to potential errors from segmentation, perfusion defects or volume measurement from both images. In this study, we propose to use registration methods to determine TID from cardiac CT scans, where the deformation field within the structure of interest is used to measure the local volume change between stress and rest. Promising results have been demonstrated with 7 datasets, showing the potential of this approach as a comparative method for measuring TID.
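To make the volume-change idea concrete, the sketch below estimates local volume change from the Jacobian determinant of a displacement field and forms the TID ratio. It is an illustrative simplification of the registration-based measurement described above; the array layout and function names are assumptions.

```python
import numpy as np

def jacobian_determinant(disp):
    """disp: displacement field of shape (3, Z, Y, X) in voxel units."""
    # grads[i][j] = d(disp_i) / d(axis_j); reorder to (Z, Y, X, 3, 3).
    grads = np.array([np.gradient(disp[i]) for i in range(3)])
    jac = grads.transpose(2, 3, 4, 0, 1) + np.eye(3)   # F = I + du/dx
    return np.linalg.det(jac)                          # >1: local expansion

def transient_ischemic_dilation(stress_volume_ml, rest_volume_ml):
    # TID as defined for perfusion imaging: stress over rest blood volume.
    return stress_volume_ml / rest_volume_ml
```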
Localised manifold learning for cardiac image analysis
Kanwal K. Bhatia, Anthony N. Price, Jo V. Hajnal, et al.
Manifold learning is increasingly being used to discover the underlying structure of medical image data. Traditional approaches operate on whole images with a single measure of similarity used to compare entire images. In this way, information on the locality of differences is lost and smaller trends may be masked by dominant global differences. In this paper, we propose the use of multiple local manifolds to analyse regions of images without any prior knowledge of which regions are important. Localised manifolds are created by partitioning images into regular subsections with a manifold constructed for each patch. We propose a framework for incorporating information from the neighbours of each patch to calculate a coherent embedding. This generates a simultaneous dimensionality reduction of all patches and results in the creation of embeddings which are spatially-varying. Additionally, a hierarchical method is presented to enable a multi-scale embedding solution. We use this to extract spatially-varying respiratory and cardiac motions from cardiac MRI. Although there is a complex interplay between these motions, we show how they can be separated on a regional basis. We demonstrate the utility of the localised joint embedding over a global embedding of whole images and over embedding individual patches independently.
Diffusion Imaging
HARDI denoising using nonlocal means on S2
Alan Kuurstra, Sudipto Dolui, Oleg Michailovich
Diffusion MRI (dMRI) is a unique imaging modality for in vivo delineation of the anatomical structure of white matter in the brain. In particular, high angular resolution diffusion imaging (HARDI) is a specific instance of dMRI which is known to excel in detection of multiple neural fibers within a single voxel. Unfortunately, the angular resolution of HARDI is known to be inversely proportional to SNR, which makes the problem of denoising HARDI data of particular practical importance. Since HARDI signals are effectively band-limited, denoising can be accomplished by means of linear filtering. However, the spatial dependency of diffusivity in brain tissue makes it impossible to find a single set of linear filter parameters which is optimal for all types of diffusion signals. Hence, adaptive filtering is required. In this paper, we propose a new type of non-local means (NLM) filtering which possesses the required adaptivity property. As opposed to similar methods in the field, however, the proposed NLM filtering is applied in the spherical domain of spatial orientations. Moreover, the filter uses an original definition of adaptive weights, which are designed to be invariant both to spatial rotations and to the particular sampling scheme in use. We provide a detailed description of the proposed filtering procedure and its efficient implementation, as well as experimental results with synthetic data. We demonstrate that our filter has substantially better adaptivity as compared to a number of alternative methods.
Measures for validation of DTI tractography
Sylvain Gouttard, Casey B. Goodlett, Marek Kubicki, et al.
The evaluation of analysis methods for diffusion tensor imaging (DTI) remains challenging due to the lack of gold standards and validation frameworks. Significant work remains in developing metrics for comparing fiber bundles generated from streamline tractography. We propose a set of volumetric and tract oriented measures for evaluating tract differences. The different methods developed for this assessment are: an overlap measurement, a point cloud distance and a quantification of the diffusion properties at similar locations between fiber bundles. The application of the measures in this paper is a comparison of atlas generated tractography to tractography generated in individual images. For the validation we used a database of 37 subject DTIs, and applied the measurements on five specific fiber bundles: uncinate, cingulum (left and right for both bundles) and genu. Each measurement is suited to a specific use: the overlap measure presents a simple and comprehensive metric but is sensitive to partial voluming and does not give consistent values depending on the bundle geometry. The point cloud distance associated with a quantile interpretation of the distribution gives a good intuition of how close and similar the bundles are. Finally, the functional difference is useful for a comparison of the diffusion properties since the comparison of scalar invariants is the focus of many DTI analyses. The comparison demonstrated reasonable similarity of results. The tract difference measures are also applicable to comparison of tractography algorithms, quality control, reproducibility studies, and other validation problems.
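The sketch below illustrates two of the listed measures in a simplified form: a volumetric Dice overlap on binary tract masks and a symmetric point-cloud distance summarized by a quantile. It is an assumption-laden illustration, not the authors' exact definitions.

```python
import numpy as np

def dice_overlap(mask_a, mask_b):
    # Volumetric overlap of two binary tract masks.
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def point_cloud_distance(points_a, points_b, quantile=0.9):
    # For each point in A, distance to its nearest neighbour in B (and vice
    # versa); report the given quantile of the pooled distances.
    d_ab = np.min(np.linalg.norm(points_a[:, None] - points_b[None], axis=2), axis=1)
    d_ba = np.min(np.linalg.norm(points_b[:, None] - points_a[None], axis=2), axis=1)
    return np.quantile(np.concatenate([d_ab, d_ba]), quantile)
```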
Towards automatic quantitative quality control for MRI
Carolyn B. Lauzon, Brian C. Caffo, Bennett A. Landman
Quality and consistency of clinical and research data collected from Magnetic Resonance Imaging (MRI) scanners may become suspect due to a wide variety of common factors including experimental changes, hardware degradation, hardware replacement, software updates, personnel changes, and observed imaging artifacts. Standard practice limits quality analysis to visual assessment by a researcher/clinician or a quantitative quality control based upon phantoms, which may not be timely, cannot account for differing experimental protocols (e.g. gradient timings and strengths), and may not be pertinent to the data or experimental question at hand. This paper presents a parallel processing pipeline developed towards experiment specific automatic quantitative quality control of MRI data using diffusion tensor imaging (DTI) as an experimental test case. The pipeline consists of automatic identification of DTI scans run on the MRI scanner, calculation of DTI contrasts from the data, implementation of modern statistical methods (wild bootstrap and SIMEX) to assess variance and bias in DTI contrasts, and quality assessment via power calculations and normative values. For this pipeline, a DTI specific power calculation analysis is developed, as well as the first incorporation of bias estimates in DTI data to improve statistical analysis.
Using radial NMR profiles to characterize pore size distributions
Rachid Deriche, John Treilhard
Extracting information about axon diameter distributions in the brain is a challenging task which provides useful information for medical purposes; for example, the ability to characterize and monitor axon diameters would be useful in diagnosing and investigating diseases like amyotrophic lateral sclerosis (ALS) [1] or autism [2]. Three families of operators are defined by Ozarslan [3], whose action upon an NMR attenuation signal extracts the moments of the pore size distribution of the ensemble under consideration; also a numerical method is proposed to continuously reconstruct a discretely sampled attenuation profile using the eigenfunctions of the simple harmonic oscillator Hamiltonian: the SHORE basis. The work presented here extends Ozarslan's method to other bases that can offer a better description of attenuation signal behaviour; in particular, we propose the use of the radial Spherical Polar Fourier (SPF) basis. Testing is performed to contrast the efficacy of the radial SPF basis and SHORE basis in practical attenuation signal reconstruction. The robustness of the method to additive noise is tested and analysed. We demonstrate that a low-order attenuation signal reconstruction outperforms a higher-order reconstruction in subsequent moment estimation under noisy conditions. We propose the simulated annealing algorithm for basis function scale parameter estimation. Finally, analytic expressions are derived and presented for the action of the operators on the radial SPF basis (obviating the need for numerical integration, thus avoiding a spectrum of possible sources of error).
Efficient global fiber tracking on multi-dimensional diffusion direction maps
Jan Klein, Benjamin Köhler, Horst K. Hahn
Global fiber tracking algorithms have recently been proposed which are able to compute results of unprecedented quality. They avoid accumulation errors by a global optimization process, at the cost of a high computation time of several hours or even days. In this paper, we introduce a novel global fiber tracking algorithm which, for the first time, globally optimizes the underlying diffusion direction map obtained from DTI or HARDI data, instead of single fiber segments. As a consequence, the number of iterations in the optimization process can be drastically reduced, by about three orders of magnitude. Furthermore, in contrast to all previous algorithms, the density of the tracked fibers can be adjusted after the optimization within a few seconds. We evaluated our method on diffusion-weighted images obtained from software phantoms, healthy volunteers, and tumor patients. We show that difficult fiber bundles, e.g., the visual pathways or tracts for different motor functions, can be determined and separated in excellent quality. Furthermore, crossing and kissing bundles are correctly resolved. On current standard hardware, a dense fiber tracking result of a whole brain can be determined in less than half an hour, which is a strong improvement compared to previous work.
Shape: Applications and Methods
Efficient searching of globally optimal and smooth multi-surfaces with shape priors
Lei Xu, Branislav Stojkovic, Hu Ding, et al.
Despite extensive studies in the past, the problem of segmenting globally optimal multiple surfaces in 3D volumetric images remains challenging in medical imaging. The problem becomes even harder in highly noisy and edge-weak images. In this paper we present a novel and highly efficient graph-theoretical iterative method based on a volumetric graph representation of the 3D image that incorporates curvature and shape prior information. Compared with the graph-based method, applying the shape prior to construct the graph on a specific preferred shape model allows easy incorporation of a wide spectrum of shape prior information. Furthermore, the key insight that computation of the objective function can be done independently in the x and y directions makes local improvement possible. Thus, instead of using a global optimization technique such as the maximum flow algorithm, the iteration based method is much faster. Additionally, the utilization of the curvature in the objective function ensures the smoothness. To the best of our knowledge, this is the first paper to combine shape-prior penalties with curvature in the objective function to ensure the smoothness of the generated surfaces while striving to achieve global optimality. To evaluate the performance of our method, we test it on a set of 14 3D OCT images. Compared to the best existing approaches, our experiments suggest that the proposed method reduces the unsigned surface positioning errors from 5.44 ± 1.07 μm to 4.52 ± 0.84 μm. Moreover, our method has a much improved running time and yields almost the same global optimality but with much better smoothness, which makes it especially suitable for segmenting highly noisy images. The proposed method is also suitable for parallel implementation on GPUs, which could potentially allow us to segment highly noisy volumetric images in real time.
A shape prior-based MRF model for 3D masseter muscle segmentation
Tahir Majeed, Ketut Fundana, Marcel Lüthi, et al.
Medical image segmentation is generally an ill-posed problem that can only be solved by incorporating prior knowledge. The ambiguities arise due to the presence of noise, weak edges, imaging artifacts, inhomogeneous interiors and adjacent anatomical structures having a similar intensity profile to the target structure. In this paper we propose a novel approach to segment the masseter muscle using graph-cut incorporating additional 3D shape priors in CT datasets, which is robust to noise, artifacts, and shape deformations. The main contribution of this paper is in translating the 3D shape knowledge into both unary and pairwise potentials of the Markov Random Field (MRF). The segmentation task is cast as a Maximum-A-Posteriori (MAP) estimation of the MRF. Graph-cut is then used to obtain the global minimum which results in the segmentation of the masseter muscle. The method is tested on 21 CT datasets of the masseter muscle, which are noisy, with almost all possessing mild to severe imaging artifacts such as the high-density artifacts caused by common dental fillings and dental implants. We show that the proposed technique produces clinically acceptable results for the challenging problem of muscle segmentation, and further provide a quantitative and qualitative comparison with other methods. We statistically show that adding the shape prior into both unary and pairwise potentials increases the robustness of the proposed method in noisy datasets.
Segmentation of parotid glands in head and neck CT images using a constrained active shape model with landmark uncertainty
Antong Chen, Jack H. Noble, Kenneth J. Niermann, et al.
Automatic segmentation of parotid glands in head and neck CT images for IMRT planning has drawn attention in recent years. Although previous approaches have achieved substantial success by reaching high overall volume-wise accuracy, suboptimal segmentations are observed on the interior boundary of the gland where the contrast is poor against the adjacent muscle groups. Herein we propose to use a constrained active shape model with landmark uncertainty to improve the segmentation in this area. Results obtained using this method are compared with results obtained using a regular active shape model through a leave-one-out experiment.
Classification of Alzheimer's disease patients with hippocampal shape wrapper-based feature selection and support vector machine
Jonathan Young, Gerard Ridgway, Kelvin Leung, et al.
It is well known that hippocampal atrophy is a marker of the onset of Alzheimer's disease (AD), and as a result hippocampal volumetry has been used in a number of studies to provide early diagnosis of AD and predict conversion of mild cognitive impairment patients to AD. However, rates of atrophy are not uniform across the hippocampus, making shape analysis a potentially more accurate biomarker. This study examines the hippocampi of 226 healthy controls, 148 AD patients and 330 MCI patients obtained from T1 weighted structural MRI images from the ADNI database. The hippocampi are anatomically segmented using the MAPS multi-atlas segmentation method, and the resulting binary images are then processed with SPHARM software to decompose their shapes as a weighted sum of spherical harmonic basis functions. The resulting parameterizations are then used as feature vectors in Support Vector Machine (SVM) classification. A wrapper based feature selection method was used as this considers the utility of features in discriminating classes in combination, fully exploiting the multivariate nature of the data and optimizing the selected set of features for the type of classifier that is used. The leave-one-out cross validated accuracy obtained on training data is 88.6% for classifying AD vs controls and 74% for classifying MCI-converters vs MCI-stable with very compact feature sets, showing that this is a highly promising method. There is currently a considerable fall in accuracy on unseen data, indicating that the feature selection is sensitive to the data used; however, feature ensemble methods may overcome this.
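As an illustration of wrapper-based feature selection with an SVM, the sketch below uses scikit-learn's SequentialFeatureSelector, one common wrapper; the paper's exact wrapper may differ. The feature matrix stands in for the SPHARM coefficients with random numbers, so all dimensions and parameters are assumptions.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 100))   # e.g. 60 subjects x 100 shape coefficients
y = rng.integers(0, 2, 60)           # e.g. AD vs control labels

svm = SVC(kernel="linear")
# Wrapper selection: features are chosen by the classifier's cross-validated score.
selector = SequentialFeatureSelector(svm, n_features_to_select=10,
                                     direction="forward", cv=5)
X_sel = selector.fit_transform(X, y)

# Leave-one-out accuracy on the selected, compact feature set.
acc = cross_val_score(svm, X_sel, y, cv=LeaveOneOut()).mean()
print(f"LOO accuracy with {X_sel.shape[1]} selected features: {acc:.3f}")
```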
Consistent estimation of shape parameters in statistical shape model by symmetric EM algorithm
Kaikai Shen, Pierrick Bourgeat, Jurgen Fripp, et al.
In order to fit an unseen surface using statistical shape model (SSM), a correspondence between the unseen surface and the model needs to be established, before the shape parameters can be estimated based on this correspondence. The correspondence and parameter estimation problem can be modeled probabilistically by a Gaussian mixture model (GMM), and solved by expectation-maximization iterative closest points (EM-ICP) algorithm. In this paper, we propose to exploit the linearity of the principal component analysis (PCA) based SSM, and estimate the parameters for the unseen shape surface under the EM-ICP framework. The symmetric data terms are devised to enforce the mutual consistency between the model reconstruction and the shape surface. The a priori shape information encoded in the SSM is also included as regularization. The estimation method is applied to the shape modeling of the hippocampus using a hippocampal SSM.
A hybrid framework of multiple active appearance models and global registration for 3D prostate segmentation in MRI
Real-time fusion of Magnetic Resonance (MR) and Trans Rectal Ultra Sound (TRUS) images aid in the localization of malignant tissues in TRUS guided prostate biopsy. Registration performed on segmented contours of the prostate reduces computational complexity and improves the multimodal registration accuracy. However, accurate and computationally efficient 3D segmentation of the prostate in MR images could be a challenging task due to inter-patient shape and intensity variability of the prostate gland. In this work, we propose to use multiple statistical shape and appearance models to segment the prostate in 2D and a global registration framework to impose shape restriction in 3D. Multiple mean parametric models of the shape and appearance corresponding to the apex, central and base regions of the prostate gland are derived from principal component analysis (PCA) of prior shape and intensity information of the prostate from the training data. The estimated parameters are then modified with the prior knowledge of the optimization space to achieve segmentation in 2D. The 2D segmented slices are then rigidly registered with the average 3D model produced by affine registration of the ground truth of the training datasets to minimize pose variations and impose 3D shape restriction. The proposed method achieves a mean Dice similarity coefficient (DSC) value of 0.88±0.11, and mean Hausdorff distance (HD) of 3.38±2.81 mm when validated with 15 prostate volumes of a public dataset in leave-one-out validation framework. The results achieved are better compared to some of the works in the literature.
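For reference, the two validation metrics quoted above (Dice similarity coefficient and Hausdorff distance) can be computed on binary masks roughly as follows; this sketch assumes isotropic voxel spacing and is illustrative rather than the authors' evaluation code.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff_distance(a, b, spacing=1.0):
    # Symmetric Hausdorff distance between the voxel coordinates of two masks.
    pa = np.argwhere(a) * spacing
    pb = np.argwhere(b) * spacing
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
```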
Segmentation II
Robust estimation of mammographic breast density: a patient-based approach
Harald S. Heese, Klaus Erhard, Andre Gooßen, et al.
Breast density has become an established risk indicator for developing breast cancer. Current clinical practice reflects this by grading mammograms patient-wise as entirely fat, scattered fibroglandular, heterogeneously dense, or extremely dense based on visual perception. Existing (semi-) automated methods work on a per-image basis and mimic clinical practice by calculating an area fraction of fibroglandular tissue (mammographic percent density). We suggest a method that follows clinical practice more strictly by segmenting the fibroglandular tissue portion directly from the joint data of all four available mammographic views (cranio-caudal and medio-lateral oblique, left and right), and by subsequently calculating a consistently patient-based mammographic percent density estimate. In particular, each mammographic view is first processed separately to determine a region of interest (ROI) for segmentation into fibroglandular and adipose tissue. ROI determination includes breast outline detection via edge-based methods, peripheral tissue suppression via geometric breast height modeling, and - for medio-lateral oblique views only - pectoral muscle outline detection based on optimizing a three-parameter analytic curve with respect to local appearance. Intensity harmonization based on separately acquired calibration data is performed with respect to compression height and tube voltage to facilitate joint segmentation of available mammographic views. A Gaussian mixture model (GMM) on the joint histogram data with a posteriori calibration guided plausibility correction is finally employed for tissue separation. The proposed method was tested on patient data from 82 subjects. Results show excellent correlation (r = 0.86) to radiologist's grading with deviations ranging between -28% (q = 0.025) and +16% (q = 0.975).
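A minimal sketch of the final tissue-separation step named above: a two-component Gaussian mixture fitted to pooled ROI intensities from the four views, with the posterior of the brighter component taken as fibroglandular. The component count, threshold and omission of the calibration-guided plausibility correction are simplifying assumptions, not the authors' procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def percent_density(roi_intensities):
    """roi_intensities: 1D array of pixel values pooled from all four views."""
    x = roi_intensities.reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
    dense_component = int(np.argmax(gmm.means_.ravel()))   # brighter = fibroglandular
    posterior = gmm.predict_proba(x)[:, dense_component]
    return 100.0 * (posterior > 0.5).mean()                # percent density estimate
```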
Image segmentation using random-walks on the histogram
Jean-Philippe Morin, Christian Desrosiers, Luc Duong
This document presents a novel method for the problem of image segmentation, based on random-walks. This method shares similarities with the Mean-shift algorithm, as it finds the modes of the intensity histogram of images. However, unlike Mean-shift, our proposed method is stochastic and also provides class membership probabilities. Also, unlike other random-walk based methods, our approach does not require any form of user interaction, and can scale to very large images. To illustrate the usefulness, efficiency and scalability of our method, we test it on the task of segmenting anatomical structures present in cardiac CT and brain MRI images.
Normalized gradient fields cross-correlation for automated detection of prostate in magnetic resonance images
Fully automated prostate segmentation helps to address several problems in prostate cancer diagnosis and treatment: it can assist in objective evaluation of multiparametric MR imagery, provides a prostate contour for MR-ultrasound (or CT) image fusion for computer-assisted image-guided biopsy or therapy planning, may facilitate reporting and enables direct prostate volume calculation. Among the challenges in automated analysis of MR images of the prostate are the variations of overall image intensities across scanners, the presence of a nonuniform multiplicative bias field within scans and differences in acquisition setup. Furthermore, images acquired with the presence of an endorectal coil suffer from localized high-intensity artifacts at the posterior part of the prostate. In this work, a three-dimensional method for fast automated prostate detection based on normalized gradient fields cross-correlation, insensitive to intensity variations and coil-induced artifacts, is presented and evaluated. The components of the method, offline template learning and the localization algorithm, are described in detail. The method was validated on a dataset of 522 T2-weighted MR images acquired at the National Cancer Institute, USA, which was split into two halves for development and testing. In addition, a second dataset of 29 MR exams from Centre d'Imagerie Médicale Tourville, France was used to test the algorithm. The 95% confidence intervals for the mean Euclidean distance between automatically and manually identified prostate centroids were 4.06 ± 0.33 mm and 3.10 ± 0.43 mm for the first and second test datasets respectively. Moreover, the algorithm provided the centroid within the true prostate volume in 100% of images from both datasets. The obtained results demonstrate the high utility of the detection method for a fully automated prostate segmentation.
Automatic lung lobe segmentation of COPD patients using iterative B-spline fitting
D. P. Shamonin, M. Staring, M. E. Bakker, et al.
We present an automatic lung lobe segmentation algorithm for COPD patients. The method enhances fissures, removes unlikely fissure candidates, after which a B-spline is fitted iteratively through the remaining candidate objects. The iterative fitting approach circumvents the need to classify each object as being part of the fissure or being noise, and allows the fissure to be detected in multiple disconnected parts. This property is beneficial for good performance in patient data, containing incomplete and disease-affected fissures. The proposed algorithm is tested on 22 COPD patients, resulting in accurate lobe-based densitometry, and a median overlap of the fissure (defined 3 voxels wide) with an expert ground truth of 0.65, 0.54 and 0.44 for the three main fissures. This compares to complete lobe overlaps of 0.99, 0.98, 0.98, 0.97 and 0.87 for the five main lobes, showing promise for lobe segmentation on data of patients with moderate to severe COPD.
Iterative approach to joint segmentation of cellular structures
Peter Ajemba, Richard Scott, Janakiramanan Ramachandran, et al.
Accurate segmentation of overlapping nuclei is essential in determining nuclei count and evaluating the sub-cellular localization of protein biomarkers in image cytometry and histology. Current cellular segmentation algorithms generally lack fast and reliable methods for disambiguating clumped nuclei. In immuno-fluorescence segmentation, solutions to challenges including nuclei misclassification, irregular boundaries, and under-segmentation require reliable separation of clumped nuclei. This paper presents a fast and accurate algorithm for joint segmentation of cellular cytoplasm and nuclei incorporating procedures for reliably separating overlapping nuclei. The algorithm utilizes a combination of ideas and is a significant improvement on state-of-the-art algorithms for this application. First, an adaptive process that includes top-hat filtering, blob detection and distance transforms estimates the inverse illumination field and corrects for intensity non-uniformity. Minimum-error-thresholding based binarization augmented by statistical stability estimation is applied prior to seed-detection constrained by a distance-map-based scale-selection to identify candidate seeds for nuclei segmentation. The nuclei clustering step also incorporates error estimation based on statistical stability. This enables the algorithm to perform localized error correction. Final steps include artifact removal and reclassification of nuclei objects near the cytoplasm boundary as epithelial or stroma. Evaluation using 48 realistic phantom images with known ground-truth shows overall segmentation accuracy exceeding 96%. It significantly outperformed two state-of-the-art algorithms in clumped nuclei separation. Tests on 926 prostate biopsy images (326 patients) show that the improved segmentation increases the predictive power of nuclei architecture features based on the minimum spanning tree algorithm. The algorithm has been deployed in a large scale pathology application.
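Several steps named in this abstract (top-hat background correction, thresholding, and distance-transform-based seeding to split clumped nuclei) have a standard scikit-image expression, sketched below as an illustrative simplification rather than the published algorithm; all parameter values are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.filters import threshold_otsu
from skimage.morphology import disk, white_tophat
from skimage.segmentation import watershed

def segment_nuclei(image, tophat_radius=15, min_seed_distance=5):
    corrected = white_tophat(image, disk(tophat_radius))    # illumination correction
    binary = corrected > threshold_otsu(corrected)          # simple binarization
    distance = ndi.distance_transform_edt(binary)
    seeds = peak_local_max(distance, min_distance=min_seed_distance, labels=binary)
    markers = np.zeros(image.shape, dtype=int)
    markers[tuple(seeds.T)] = np.arange(1, len(seeds) + 1)
    return watershed(-distance, markers, mask=binary)       # split clumped nuclei
```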
Label Fusion
Simultaneous segmentation and statistical label fusion
Labeling or segmentation of structures of interest in medical imaging plays an essential role in both clinical and scientific understanding. Two of the common techniques to obtain these labels are through either fully automated segmentation or through multi-atlas based segmentation and label fusion. Fully automated techniques often result in highly accurate segmentations but lack the robustness to be viable in many cases. On the other hand, label fusion techniques are often extremely robust, but lack the accuracy of automated algorithms for specific classes of problems. Herein, we propose to perform simultaneous automated segmentation and statistical label fusion through the reformulation of a generative model to include a linkage structure that explicitly estimates the complex global relationships between labels and intensities. These relationships are inferred from the atlas labels and intensities and applied to the target using a non-parametric approach. The novelty of this approach lies in the combination of previously exclusive techniques and attempts to combine the accuracy benefits of automated segmentation with the robustness of a multi-atlas based approach. The accuracy benefits of this simultaneous approach are assessed using a multi-label multi-atlas whole-brain segmentation experiment and the segmentation of the highly variable thyroid on computed tomography images. The results demonstrate that this technique has major benefits for certain types of problems and has the potential to provide a paradigm shift in which the lines between statistical label fusion and automated segmentation are dramatically blurred.
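As a baseline point of reference for the statistical fusion described above, plain multi-atlas majority voting can be written in a few lines; the generative linkage model in the paper is considerably richer, so the sketch below only shows the simple fusion rule such methods improve upon.

```python
import numpy as np

def majority_vote_fusion(atlas_labels):
    """atlas_labels: (n_atlases, Z, Y, X) integer label volumes, already
    registered to the target; returns the per-voxel most frequent label."""
    stacked = np.stack(atlas_labels, axis=0)
    n_labels = int(stacked.max()) + 1
    counts = np.stack([(stacked == l).sum(axis=0) for l in range(n_labels)], axis=0)
    return counts.argmax(axis=0)
```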
Manifold learning for atlas selection in multi-atlas-based segmentation of hippocampus
Alzheimer's disease (AD) severely affects the hippocampus: it loses mass and shrinks as the disease advances. Thus delineation of the hippocampus is an important task in the clinical study of AD. Because of its simplicity and good performance, multi-atlas based segmentation has become a popular approach for medical image segmentation. We propose to use manifold learning for atlas selection in the framework of multi-atlas based segmentation. The framework only benefits when selecting atlases similar to the target image. Since manifold learning assigns each image a coordinate in low-dimensional space by respecting the neighborhood relationship, it is well suited for atlas selection. The key contribution is that we use manifold learning based on a metric derived from non-rigid transformation as the resulting embedding better captures deformations or shape differences between images than similarity measures based on voxel intensity. The proposed method is evaluated in a leave-one-out experiment on a set of 110 hippocampus images; we report mean Dice score of 0.9114 (0.0227). The method was validated against a state-of-the-art method for hippocampus segmentation.
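The sketch below shows the generic shape of manifold-based atlas selection: embed all images from a pairwise distance matrix and pick the atlases nearest the target in the embedding. It uses MDS as a stand-in embedding and a placeholder distance matrix; the paper's contribution, a metric derived from non-rigid transformations rather than voxel intensities, is not reproduced here.

```python
import numpy as np
from sklearn.manifold import MDS

def select_atlases(pairwise_dist, target_index, n_select=5, n_components=2):
    """pairwise_dist: (n_images, n_images) symmetric distance matrix."""
    embedding = MDS(n_components=n_components, dissimilarity="precomputed",
                    random_state=0).fit_transform(pairwise_dist)
    # Pick the atlases closest to the target in the low-dimensional embedding.
    d = np.linalg.norm(embedding - embedding[target_index], axis=1)
    order = np.argsort(d)
    return [i for i in order if i != target_index][:n_select]
```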
Local SIMPLE multi-atlas-based segmentation applied to lung lobe detection on chest CT
M. Agarwal, E. A. Hendriks, B. C. Stoel, et al.
For multi-atlas-based segmentation approaches, a segmentation fusion scheme which considers local performance measures may be more accurate than a method which uses a global performance measure. We improve upon an existing segmentation fusion method called SIMPLE and extend it to be localized and suitable for multi-labeled segmentations. We demonstrate the algorithm performance on 23 CT scans of COPD patients using a leave-one-out experiment. Our algorithm performs significantly better (p < 0.01) than majority voting, STAPLE, and SIMPLE, with a median overlap of the fissure of 0.45, 0.48, 0.55 and 0.6 for majority voting, STAPLE, SIMPLE, and the proposed algorithm, respectively.
Generalized statistical label fusion using multiple consensus levels
Segmentation plays a critical role in exposing connections between biological structure and function. The process of label fusion collects and combines multiple observations into a single estimate. Statistically driven techniques provide mechanisms to optimally combine segmentations; yet, optimality hinges upon accurate modeling of rater behavior. Traditional approaches, e.g., Majority Vote and Simultaneous Truth and Performance Level Estimation (STAPLE), have been shown to yield excellent performance in some cases, but do not account for spatial dependences of rater performance (i.e., regional task difficulty). Recently, the COnsensus Level, Labeler Accuracy and Truth Estimation (COLLATE) label fusion technique augmented the seminal STAPLE approach to simultaneously estimate regions of relative consensus versus confusion along with rater performance. Herein, we extend the COLLATE framework to account for multiple consensus levels. Toward this end, we posit a generalized model of rater behavior of which Majority Vote, STAPLE, STAPLE Ignoring Consensus Voxels, and COLLATE are special cases. The new algorithm is evaluated with simulations and shown to yield improved performance in cases with complex region difficulties. Multi-COLLATE achieves these results by capturing different consensus levels. The potential impacts and applications of the generative model to label fusion problems are discussed.
Brain Applications
Sparse regression analysis of task-relevant information distribution in the brain
Irina Rish, Guillermo A. Cecchi, Kyle Heuton, et al.
One of the key topics in fMRI analysis is discovery of task-related brain areas. We focus on predictive accuracy as a better relevance measure than traditional univariate voxel activations that miss important multivariate voxel interactions. We use sparse regression (more specifically, the Elastic Net [1]) to learn predictive models simultaneously with selection of predictive voxel subsets, and to explore the transition from task-relevant to task-irrelevant areas. Exploring the space of sparse solutions reveals a much wider spread of task-relevant information in the brain than is typically suggested by univariate correlations. This happens for several tasks we considered, and is most noticeable in the case of complex tasks such as pain rating; however, for certain simpler tasks, a clear separation between a small subset of relevant voxels and the rest of the brain is observed even with a multivariate approach to measuring relevance.
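A minimal Elastic Net sketch in the spirit of the sparse-regression analysis described above, using scikit-learn on synthetic stand-in data; the dimensions, penalty settings and simulated signal are all assumptions rather than values from the study.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5000))          # e.g. 200 scans x 5000 voxels
w_true = np.zeros(5000)
w_true[:20] = 1.0                             # only a few voxels carry the task
y = X @ w_true + 0.5 * rng.standard_normal(200)

# The L1 part selects voxels; the L2 part stabilizes correlated voxels.
model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
selected = np.flatnonzero(model.coef_)
print(f"{selected.size} voxels selected; R^2 on training data = {model.score(X, y):.3f}")
```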
A surface based approach for cortical thickness comparison between PiB+ and PiB- healthy control subjects
Vincent Doré, Pierrick Bourgeat, Jurgen Fripp, et al.
β-amyloid has been shown to play a crucial role in Alzheimer's disease (AD). In vivo β-amyloid imaging using [11C]Pittsburgh compound B (PiB) positron emission tomography has made it possible to analyze the relationship between β-amyloid deposition and different pathological markers involved in AD. PiB allows us to stratify the population into subjects who are likely to have prodromal AD and those who are not. The comparison of cortical thickness in these different groups is important to better understand and detect the first symptoms of the disease, which may lead to earlier therapeutic care to reduce neurone loss. Several techniques have been developed to compare cortical volume and/or thickness between AD and HC groups. However, due to the noise introduced by the cortical thickness estimation and by the registration, these methods do not reveal any major difference when comparing prodromal AD groups with the healthy control group. To improve our understanding of where initial Alzheimer neurodegeneration occurs in the cortex, we have developed a surface based technique and have applied it to the discrimination between PiB-positive and PiB-negative HCs. We first identify the regions where AD patients show high cortical atrophy by using an AD/PiB- HC vertex-wise T-test. In each of these discriminating regions, comparisons between PiB+ HC, PiB- HC and AD are performed. We found some significant differences between the two HC groups in the hippocampus and in the temporal lobe for both hemispheres, and in the precuneus and occipital regions only for the left hemisphere.
Simultaneous cortical surface labeling and sulcal curve extraction
Zhen Yang, Aaron Carass, Chen Chen, et al.
Automatic labeling of the gyri and sulci on the cortical surface is important for studying cortical morphology and brain function within populations. A method to simultaneously label gyral regions and extract sulcal curves is proposed. Assuming that the gyral regions parcellate the whole cortical surface into contiguous regions with a fixed topology, the proposed method labels the subject cortical surface by deformably registering a network of curves that form the boundaries of the gyral regions to the subject cortical surface. In the registration process, the curves are encouraged to follow the fine details of the sulcal geometry and to respect the shape statistics learned from training data. Using the framework of probabilistic point set registration, the proposed algorithm finds the sulcal curve network that maximizes the posterior probability via Expectation-Maximization (EM). The automatic labeling method was evaluated on 15 cortical surfaces using a leave-one-out strategy, and quantitative error analysis was carried out on both the labeled regions and the major sulcal curves.
fMRI alignment based on local functional connectivity patterns
Di Jiang, Yuhui Du, Hewei Cheng, et al.
In functional neuroimaging studies, the inter-subject alignment of functional magnetic resonance imaging (fMRI) data is a necessary precursor to improving functional consistency across subjects. Traditional structural MRI based registration methods cannot achieve accurate inter-subject functional consistency because functional units are not necessarily consistently located relative to anatomical structures, owing to functional variability across subjects. Although the spatial smoothing commonly used in fMRI preprocessing can reduce inter-subject functional variability, it may blur the functional signals and thus lose fine-grained information. In this paper we propose a novel functional-signal-based fMRI registration method that aligns local functional connectivity patterns of different subjects to improve inter-subject functional consistency. In particular, functional connectivity is measured using Pearson correlation. For each voxel of an fMRI image, its functional connectivity to every voxel in its local spatial neighborhood, referred to as its local functional connectivity pattern, is characterized by a rotation- and shift-invariant representation. Based on this representation, the spatial registration of two fMRI images is achieved by minimizing the difference between corresponding voxels' local functional connectivity patterns using a deformable image registration model. Experimental results on simulated fMRI data demonstrate that the proposed method is more robust and reliable than existing fMRI registration methods, including maximizing functional correlations and minimizing the difference of global connectivity matrices across subjects. Experimental results on real resting-state fMRI data further demonstrate that the proposed registration method yields statistically significant improvements in functional consistency across subjects.
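The sketch below illustrates the basic ingredient of such a method: the Pearson correlations between a voxel's time series and those of its local neighborhood. The abstract does not specify the exact invariant representation; sorting the correlations, used here, is one simple rotation- and shift-invariant choice for illustration.

```python
import numpy as np

def local_fc_pattern(ts, center, radius=2):
    """Correlate a voxel's time series with all voxels in a cubic neighborhood.

    ts     : 4D array (x, y, z, time) of fMRI time series
    center : (x, y, z) voxel index, at least `radius` voxels from the border
    Returns the sorted Pearson correlations, a simple rotation/shift-invariant
    summary of the local connectivity pattern (one possible choice).
    """
    x, y, z = center
    r = radius
    block = ts[x - r:x + r + 1, y - r:y + r + 1, z - r:z + r + 1]
    neigh = block.reshape(-1, ts.shape[-1])                 # neighbors x time
    c = ts[x, y, z]
    c = (c - c.mean()) / c.std()
    neigh = (neigh - neigh.mean(1, keepdims=True)) / neigh.std(1, keepdims=True)
    corr = neigh @ c / c.size                               # Pearson correlations
    return np.sort(corr)
```

A deformable registration driven by this representation would then penalize the difference between the descriptors of spatially corresponding voxels in the two subjects.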
A comparison of distributional considerations with statistical analysis of resting state fMRI at 3T and 7T
Xue Yang, Martha J. Holmes, Allen T. Newton, et al.
Ultra-high field 7T magnetic resonance imaging (MRI) offers potentially unprecedented spatial resolution of functional activity within the human brain through increased signal- and contrast-to-noise ratios over traditional 1.5T and 3T MRI scanners. However, the effects of physiological and imaging artifacts are also greatly increased. Traditional statistical parametric mapping theories, based on distributional properties representative of data acquired at lower fields, may be inadequate for new 7T data. Herein, we investigate the model fitting residuals from two 7T protocols and one 3T protocol. We find that model residuals are substantively more non-Gaussian at 7T than at 3T. Imaging slices that passed through regions with peak inhomogeneity problems (e.g., mid-brain acquisitions for the 7T hippocampus) exhibited visually higher degrees of distortion along with spatially correlated and extreme values of kurtosis (a measure of non-Gaussianity). The impact of artifacts has previously been addressed for 3T data by estimating the covariance matrix of the regression errors. We further extend the robust estimation approach to autoregressive models and evaluate the qualitative impact of this technique relative to traditional inference. Clear differences in statistical significance are shown between inferences based on classical versus robust assumptions, which suggests that inferences based on Gaussian assumptions are subject to practical (as well as theoretical) concerns regarding their power and validity. Hence, modern statistical approaches, such as the robust autoregressive model posed herein, are appropriate and suitable for inference with ultra-high field functional MRI.
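A minimal sketch of the kind of residual diagnostic mentioned above: fit a voxel-wise least-squares GLM and measure the excess kurtosis of the residuals, which is zero for Gaussian noise and positive for the heavier-tailed residuals reported at 7T. The design matrix and noise model below are synthetic placeholders.

```python
import numpy as np
from scipy import stats

def residual_kurtosis(Y, X):
    """Fit an ordinary least-squares GLM voxel-wise and return the excess
    kurtosis of the residuals (0 for Gaussian noise).

    Y : (time, voxels) fMRI data,  X : (time, regressors) design matrix.
    """
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    return stats.kurtosis(resid, axis=0, fisher=True)

# Heavier-tailed residuals yield clearly positive kurtosis.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), np.sin(np.arange(200) / 10)])
Y = X @ rng.standard_normal((2, 50)) + rng.standard_t(df=3, size=(200, 50))
print(residual_kurtosis(Y, X).mean())   # noticeably > 0 for t-distributed noise
```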
Registration II
Nearly rigid descriptor-based matching for volume reconstruction from histological sections
A common task in the analysis of digitized histological sections is reconstructing a volumetric representation of the original specimen. Image registration algorithms are used in this task to compensate for translational, rotational, scale, shear, and local geometric differences between slices. Various systems have been developed to perform volumetric reconstruction by registering pairs of successive slices according to rigid, similarity, affine, and/or deformable transformations. To provide a coarse initial volumetric reconstruction, rigid transformations may be too constrained, as they do not allow for scale or shear; but affine transformations may be too flexible, enabling larger scale or shear factors than are physically reflected in the histological sections. One difficulty with these systems is caused by the aperture problem: even if successive slices are registered reasonably well, the composition of transformations over tens or hundreds of slices can accumulate into global twisting, scale, and shear changes, yielding a volumetric reconstruction that is significantly distorted from the shape of the true specimen. The impact of the aperture problem can be reduced by considering more than two successive images in the registration process. Systems that take this approach use global energy functions, elastic spring models, post hoc filtering/smoothing, or solutions to shortest-path problems on graphs. In this article, we propose a volume reconstruction algorithm that handles the aperture problem and yields nearly rigid transformations (i.e., affine transformations with small scale and shear factors). Our algorithm is based on robust geometric alignment of descriptive feature points (for example, using SIFT [16]) via constrained optimization. We illustrate our algorithm on the task of volumetric reconstruction from histological sections of a chicken embryo with an embedded tumor spheroid.
Nonrigid free-form registration using landmark-based statistical deformation models
Stefan Pszczolkowski, Luis Pizarro, Ricardo Guerrero, et al.
In this paper, we propose an image registration algorithm named statistically-based FFD registration (SFFD). This method is a modification of the well-known free-form deformation (FFD) approach. Our framework dramatically reduces the number of parameters to optimise and only needs a single-resolution optimisation to account for both coarse and fine local displacements, in contrast to the multi-resolution strategy employed by FFD-based registration. The proposed registration uses statistical deformation models (SDMs) as a priori knowledge to guide the alignment of a new subject to a common reference template. These SDMs account for the anatomical mean and variability across a population of subjects. We also propose that available anatomical landmark information can be encoded within the proposed SDM framework to enforce the alignment of certain anatomical structures. We present results in terms of fiducial localisation error, which illustrate the ability of the SDMs to encode landmark position information. We also show that our statistical registration algorithm can provide registration results comparable to the standard FFD-based approach at a much lower computational cost.
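A statistical deformation model of this kind is commonly built by PCA over training deformation fields; the registration then optimizes only a handful of mode coefficients instead of every control point. The sketch below shows that idea in generic form (the paper's exact parameterization and landmark term are not reproduced here).

```python
import numpy as np

class DeformationPCA:
    """Statistical deformation model: learn principal modes of variation
    from training deformation fields and express a new deformation with a
    small number of mode coefficients (a minimal sketch of the idea).
    """

    def __init__(self, training_fields, n_modes=5):
        # training_fields: (n_subjects, n_parameters) flattened FFD parameters
        self.mean = training_fields.mean(axis=0)
        centered = training_fields - self.mean
        # SVD of the centered data gives the principal deformation modes.
        _, s, vt = np.linalg.svd(centered, full_matrices=False)
        self.modes = vt[:n_modes]                       # (n_modes, n_parameters)
        self.std = s[:n_modes] / np.sqrt(len(training_fields) - 1)

    def synthesize(self, coeffs):
        """Deformation parameters for a vector of mode coefficients."""
        return self.mean + coeffs @ self.modes
```

Optimizing only the few mode coefficients, rather than the full control-point grid, is what makes a single-resolution optimisation sufficient.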
Estimation of rigid-body registration quality using registration networks
Many rigid and affine registration methods rely on optimizing an intensity-based similarity criterion between images. Once registered, however, it is difficult to assess the quality of the registration based solely on the value of the similarity measure. Past work in quantitative error analysis relies on the availability of fiducial markers. Little work has been done on developing techniques that would permit assessing the registration quality between images that do not contain fiducial markers without manual intervention. In this paper, we present an automatic technique that makes this possible. We apply our method to estimate the registration quality of 10 MR and CT pairs and 10 MR and contrast-enhanced MR pairs. We show that our technique is capable of detecting cases with registration error larger than 2° around one axis. We also show that our method is better able to identify error in MR to CT registrations than popular similarity measures. Work is under way to better determine the sensitivity of the technique.
Registration of 3D spectral OCT volumes combining ICP with a graph-based approach
The introduction of spectral Optical Coherence Tomography (OCT) scanners has enabled acquisition of high resolution, 3D cross-sectional volumetric images of the retina. 3D-OCT is used to detect and manage eye diseases such as glaucoma and age-related macular degeneration. To follow up patients over time, image registration is a vital tool enabling more precise, quantitative comparison of disease states. In this work we present a 3D registration method based on a two-step approach. In the first step we register both scans in the XY domain using an Iterative Closest Point (ICP) based algorithm. This algorithm is applied to vessel segmentations obtained from the projection image of each scan. The distance minimized in the ICP algorithm includes measurements of the vessel orientation and vessel width to allow for a more robust match. In the second step, a graph-based method is applied to find the optimal translation along the depth axis of the individual A-scans in the volume to match both scans. The cost image used to construct the graph is based on the mean squared error (MSE) between matching A-scans in both images at different translations. We have applied this method to the registration of Optic Nerve Head (ONH) centered 3D-OCT scans of the same patient. First, 10 3D-OCT scans of 5 eyes with glaucoma imaged in vivo were registered for a qualitative evaluation of the algorithm performance. Then, 17 OCT data set pairs of 17 eyes with known deformation were used for quantitative assessment of the method's robustness.
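For readers who want the flavor of the first step, here is a basic ICP loop for rigidly aligning two 2D point sets (e.g., vessel centerline points from the projection images). The paper's ICP additionally weights matches by vessel orientation and width; this sketch uses point positions only.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_rigid(src, dst, n_iter=50):
    """Align 2D points `src` to `dst` (both (N, 2) arrays) with a rigid
    transform estimated by iterative closest point."""
    R, t = np.eye(2), np.zeros(2)
    tree = cKDTree(dst)
    for _ in range(n_iter):
        moved = src @ R.T + t
        _, idx = tree.query(moved)                  # closest-point matches
        matched = dst[idx]
        mu_s, mu_d = moved.mean(0), matched.mean(0)
        H = (moved - mu_s).T @ (matched - mu_d)     # cross-covariance
        U, _, Vt = np.linalg.svd(H)                 # Kabsch solution
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:               # guard against reflection
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        R = R_step @ R                              # compose with previous step
        t = R_step @ (t - mu_s) + mu_d
    return R, t
```

The depth-axis alignment in the second step is a separate, graph-based optimization and is not shown here.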
Elastic registration based on matrix-valued spline functions and direct integration of landmarks and intensities
Stefan Wörz, Andreas Biesdorf, Karl Rohr
We introduce a new approach for spline-based elastic registration using both point landmarks and intensity information. With this approach, both types of information and a regularization based on the Navier equation are directly integrated in a single energy minimizing functional. For this functional, we have derived an analytic solution, which is based on matrix-valued non-radial basis functions. Our approach can cope with monomodal and multimodal images. For the latter case, we have integrated a computationally efficient analytic similarity measure. We have successfully applied our approach to synthetic images, phantom images, and MR images of the human brain.
Minimally deformed correspondences between surfaces for intra-operative registration
Thiago R. dos Santos, Caspar J. Goch, Alfred M. Franz, et al.
Range imaging modalities, such as time-of-flight (ToF) cameras, are becoming very popular for the acquisition of intra-operative data, which can be used to register the patient's anatomy with pre-operative data such as 3D images generated by computed tomography (CT) or magnetic resonance imaging (MRI). However, due to distortions arising from the different acquisition principles of the input surfaces, noise, and deformations that may occur in the intra-operative environment, points lying on the same anatomical locations can have different surface properties, and feature point detection, which is crucial for most surface matching algorithms, becomes unreliable. In order to overcome these issues, we present a method for automatically finding correspondences between surfaces that searches for minimally deformed configurations. For this purpose, an error metric that expresses the reliability of a correspondence set based on its spatial configuration is employed. The registration error is minimized by a combinatorial analysis through search trees. Our method was evaluated with real and simulated ToF and CT data, and was shown to be reliable for the registration of partial multi-modal surfaces with noise and distortions.
OCT and Ultrasound
Automatic detection and segmentation of renal lesions in 3D contrast-enhanced ultrasound images
Raphael Prevost, Laurent D. Cohen, Jean-Michel Correas, et al.
Contrast-enhanced ultrasound (CEUS) is a valuable imaging modality for the detection and evaluation of different kinds of lesions. Three-dimensional CEUS acquisitions allow quantitative volumetric assessment and better visualization of lesions, but automatic and robust analysis of such images is very challenging because of their poor quality. In this paper, we propose a method to automatically segment lesions such as cysts in 3D CEUS data. First we use a pre-processing step, based on the guided filtering framework, to improve the visibility of the lesions. Lesion detection is then performed through a multi-scale radial symmetry transform: we compute the likelihood of a pixel being the center of a dark rounded shape, and the local maxima of this likelihood are considered as lesion centers. Finally, we recover the whole lesion volume with multiple front propagation based on image intensity, using a fast marching method. For each lesion, the final segmentation is chosen as the one which maximizes the gradient flux through its boundary. Our method has been tested on several clinical 3D CEUS images of the kidney and provides promising results.
Lesion segmentation and bias correction in breast ultrasound B-mode images including elastography information
Gerard Pons, Joan Martí, Robert Martí, et al.
Breast ultrasound (BUS) imaging is used for the detection and diagnosis of breast lesions and has become a crucial modality, especially for providing a complementary view when other modalities (i.e., mammography) are not conclusive. However, lesion detection in ultrasound images is still a challenging problem due to the presence of artifacts such as low contrast, speckle, inhomogeneities and shadowing. In order to deal with these problems and improve diagnostic accuracy, radiologists tend to complement ultrasound imaging with elastography data. Given the prominent relevance of elastography in clinical environments, it is reasonable to assume that lesion segmentation methods could also benefit from this complementary information. This paper proposes a novel breast ultrasound lesion segmentation framework for B-mode images that includes elastography information. A distortion field is estimated to restore the ideal image while simultaneously identifying regions of similar intensity inhomogeneity using a Markov Random Field (MRF) and a maximum a posteriori (MAP) formulation. Bivariate Gaussian distributions are used to model both B-mode and elastography information. This paper compares the fused B-mode and elastography framework with B-mode or elastography alone across different cases, including illustrative cases where B-mode shows a well defined lesion and cases where elastography provides more meaningful information, showing a significant improvement when B-mode images are not conclusive, which is often the case in non-cystic lesions. Results show that combining both B-mode and elastography information in a unique framework makes the algorithm more robust and independent of image quality.
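The fusion idea can be illustrated with a much-simplified stand-in: per-pixel MAP classification from the joint (B-mode, elastography) value under bivariate Gaussian class models. The sketch below omits the MRF smoothness term and the distortion field estimation described in the abstract; class parameters are placeholders.

```python
import numpy as np
from scipy.stats import multivariate_normal

def map_lesion_labels(bmode, elasto, params, priors):
    """Per-pixel MAP classification from joint (B-mode, elastography) values.

    params : dict class -> (mean(2,), covariance(2,2)), priors : dict class -> prior.
    Returns an integer label image (classes indexed in dict insertion order).
    """
    feats = np.stack([bmode.ravel(), elasto.ravel()], axis=1)
    post = []
    for c, (mu, cov) in params.items():
        # Class posterior up to a constant: prior * bivariate Gaussian likelihood.
        post.append(priors[c] * multivariate_normal.pdf(feats, mu, cov))
    post = np.stack(post)                      # (n_classes, n_pixels)
    return post.argmax(axis=0).reshape(bmode.shape)
```

In the full framework these per-pixel posteriors would be combined with a neighborhood (MRF) prior, so labels depend on both the joint intensities and spatial context.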
Real-time segmentation in 4D ultrasound with continuous max-flow
M. Rajchl, J. Yuan, T. M. Peters
We present a novel continuous Max-Flow based method to segment the inner left ventricular wall from 3D trans-esophageal echocardiography image sequences. The method minimizes, in a numerically efficient and accurate way, an energy functional encoding two Fisher-Tippett distributions and a geometrical constraint in the form of a Euclidean distance map. After initialization the method is fully automatic and is able to perform at up to 10 Hz, making it suitable for image-guided interventions. Results are shown on 4D TEE data sets from 18 patients with pathological cardiac conditions, and the speed of the algorithm is assessed under a variety of conditions.
Incorporation of texture-based features in optimal graph-theoretic approach with application to the 3D segmentation of intraretinal surfaces in SD-OCT volumes
Bhavna J. Antony, Michael D. Abràmoff, Milan Sonka, et al.
While efficient graph-theoretic approaches exist for the optimal (with respect to a cost function) and simultaneous segmentation of multiple surfaces within volumetric medical images, the appropriate design of cost functions remains an important challenge. Previously proposed methods have used simple cost functions or optimized combinations of them, but little has been done to design cost functions using features learned from a training set in a less biased fashion. Here, we present a method to design cost functions for the simultaneous segmentation of multiple surfaces using the graph-theoretic approach. Classified texture features were used to create probability maps, which were incorporated into the graph-search approach. The efficiency of this approach was tested on 10 optic nerve head centered optical coherence tomography (OCT) volumes obtained from 10 subjects who presented with glaucoma. The mean unsigned border position error was computed with respect to the average of manual tracings from two independent observers and compared to our previously reported results. A significant improvement was noted in the overall means, which reduced from 9.25 ± 4.03 μm to 6.73 ± 2.45 μm (p < 0.01) and is comparable with the inter-observer variability of 8.85 ± 3.85 μm.
Parallel graph search: application to intraretinal layer segmentation of 3D macular OCT scans
Kyungmoo Lee, Michael D. Abràmoff, Mona K. Garvin, et al.
Image segmentation is of paramount importance for quantitative analysis of medical image data. Recently, a 3-D graph search method which can detect globally optimal interacting surfaces with respect to the cost function of volumetric images has been introduced, and its utility demonstrated in several application areas. Although the method provides excellent segmentation accuracy, its limitation is a slow processing speed when many surfaces are simultaneously segmented in large volumetric datasets. Here, we propose a novel method of parallel graph search, which overcomes the limitation and allows the quick detection of multiple surfaces. To demonstrate the obtained performance with respect to segmentation accuracy and processing speedup, the new approach was applied to retinal optical coherence tomography (OCT) image data and compared with the performance of the former non-parallel method. Our parallel graph search methods for single and double surface detection are approximately 267 and 181 times faster than the original graph search approach in 5 macular OCT volumes (200 x 5 x 1024 voxels) acquired from the right eyes of 5 normal subjects. The resulting segmentation differences were small as demonstrated by the mean unsigned differences between the non-parallel and parallel methods of 0.0 ± 0.0 voxels (0.0 ± 0.0 μm) and 0.27 ± 0.34 voxels (0.53 ± 0.66 μm) for the single- and dual-surface approaches, respectively.
Segmentation of Vessels and Tubular Structures
Automated reconstruction of neural trees using front re-initialization
Amit Mukherjee, Armen Stepanyants
This paper proposes a greedy algorithm for automated reconstruction of neural arbors from light microscopy stacks of images. The algorithm is based on the minimum cost path method. While the minimum cost path, obtained using the Fast Marching Method, results in a trace with the least cumulative cost between the start and end points, it is not sufficient for the reconstruction of neural trees. This is because sections of the minimum cost path can erroneously travel through the image background with undetectable detriment to the cumulative cost. To circumvent this problem we propose an algorithm that grows a neural tree from a specified root by iteratively re-initializing the Fast Marching fronts. The speed image used in the Fast Marching Method is generated by computing the average outward flux of the gradient vector flow field. Each iteration of the algorithm produces a candidate extension by allowing the front to travel a specified distance and then tracking from the farthest point of the front back to the tree. A robust likelihood ratio test is used to evaluate the quality of the candidate extension by comparing voxel intensities along the extension to those in the foreground and the background. Qualified extensions are appended to the current tree, the front is re-initialized, and Fast Marching is continued until the stopping criterion is met. To evaluate the performance of the algorithm we reconstructed 6 stacks of two-photon microscopy images and compared the results to ground truth reconstructions using the DIADEM metric. The average comparison score was 0.82 out of 1.0, which is on par with the performance achieved by expert manual tracers.
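The acceptance test for a candidate extension can be pictured as a log-likelihood ratio along the path; the sketch below assumes simple Gaussian foreground/background intensity models, which is one plausible instantiation rather than the paper's exact robust test.

```python
import numpy as np

def extension_log_lr(path_intensities, fg_mean, fg_std, bg_mean, bg_std):
    """Log-likelihood ratio of a candidate extension: do the voxel intensities
    along the path look more like foreground (neurite) or background?
    Gaussian intensity models are assumed here for simplicity."""
    def log_gauss(x, mu, sigma):
        return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

    x = np.asarray(path_intensities, dtype=float)
    llr = log_gauss(x, fg_mean, fg_std) - log_gauss(x, bg_mean, bg_std)
    return llr.sum()

# Accept the extension only if the summed log-likelihood ratio is positive,
# i.e., the foreground model explains the path better than the background.
bright_path = [180, 160, 175, 150]
print(extension_log_lr(bright_path, fg_mean=170, fg_std=25,
                       bg_mean=40, bg_std=20) > 0)   # True
```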
Segmentation of anatomical branching structures based on texture features and conditional random field
Tatyana Nuzhnaya, Predrag Bakic, Despina Kontos, et al.
This work is part of our ongoing study aimed at understanding the relation between the topology of anatomical branching structures and the underlying image texture. Morphological variability of the breast ductal network is associated with subsequent development of abnormalities in patients with nipple discharge, such as papilloma, breast cancer and atypia. In this work, we investigate complex dependence among ductal components to perform segmentation, the first step in analyzing the topology of ductal lobes. Our automated framework is based on incorporating a conditional random field with texture descriptors of skewness, coarseness, contrast, energy and fractal dimension. These features are selected to capture the architectural variability of the enhanced ducts by encoding spatial variations between pixel patches in galactographic images. The segmentation algorithm was applied to a dataset of 20 x-ray galactograms obtained at the Hospital of the University of Pennsylvania. We compared the performance of the proposed approach with fully and semi-automated segmentation algorithms based on neural network classification, fuzzy connectedness, vesselness filtering and graph cuts. Global consistency error and confusion matrix analysis were used as accuracy measures. For the proposed approach, the true positive rate was higher and the false negative rate was significantly lower than for the other fully automated methods. This indicates that segmentation based on a CRF incorporating texture descriptors has the potential to efficiently support the analysis of the complex topology of the ducts and aid in the development of realistic breast anatomy phantoms.
Liver vessel tree segmentation based on a hybrid graph cut / fuzzy connectedness method
In the monitoring of oncological therapy, the prediction of liver tumor growth from consecutive CT scans is an important aspect of treatment planning. Accurate segmentation of the liver vessel tree is fundamental for successful prediction of tumor growth. In this paper, we report a 3D liver vessel tree segmentation method based on a hybrid graph cut (GC) / fuzzy connectedness (FC) approach. GC is a popular image segmentation technique; however, it is not always effective when segmenting thin elongated objects due to its "shrinking bias". To overcome this problem, we propose to impose an additional connectivity prior, which comes from the FC segmentation results, so that the GC and FC methods are combined synergistically. The proposed method consists of two main steps. First, the FC method is applied to initially segment the liver vessel tree, which provides the connectivity prior for the subsequent GC method. Second, the GC method with the integrated connectivity prior is employed to refine the segmented liver vessel tree. The proposed method was tested on 10 clinical portal venous phase CT data sets. The preliminary results showed the feasibility and efficiency of the proposed method. The segmentation accuracy on this dataset, expressed as sensitivity, was 60%, 92% and 100% for vessel diameters in the ranges of 0.5 to 1, 1 to 2 and >2 mm, respectively.
Contrast independent detection of branching points in network-like structures
Boguslaw Obara, Mark Fricker, Vicente Grau
Many biomedical applications require the detection of branching structures in images. While several algorithms have been proposed for (semi-)automatic extraction of these structures, branching points usually need specific treatment. We propose a vector field-based approach to identify branching points in images. A vector field is calculated using a novel contrast-independent tensor representation based on local phase. Non-curvilinear structures, including junctions and end points, are detected using directional statistics of the principal orientation as defined by the tensor. Results on synthetic and real biomedical images show the robustness of the algorithm against changes in contrast, and its ability to detect junctions in highly complex images.
Robust RANSAC-based blood vessel segmentation
Ahmed Yureidini, Erwan Kerrien, Stéphane Cotin
Many vascular clinical applications require a vessel segmentation process that is able to extract both the centerline and the surface of the blood vessels. However, noise and topology issues (such as kissing vessels) prevent existing algorithms from easily retrieving a system as complex as the brain vasculature. We propose here a new blood vessel tracking algorithm that 1) detects the vessel centerline; 2) provides a local radius estimate; and 3) extracts a dense set of points at the blood vessel surface. This algorithm is based on RANSAC-based robust fitting of successive cylinders along the vessel. Our method was validated against the Multiple Hypothesis Tracking (MHT) algorithm on 3DRA data of the brain vasculature from 10 patients. Over 744 blood vessels of various sizes were considered for each patient. Our results demonstrated a greater ability of our algorithm to track small, tortuous and touching vessels (96% success rate), compared to MHT (65% success rate). The computed centerline precision was below 1 voxel when compared to MHT. Moreover, our results were obtained with the same set of parameters for all patients and all blood vessels, except for the seed point for each vessel, which is also necessary for MHT. The proposed algorithm is therefore able to extract the full intracranial vasculature with little user interaction.
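The robust-fitting ingredient can be illustrated on a simplified 2D analogue: RANSAC fitting of a circle (a cylinder cross-section) to noisy boundary points with outliers. The paper fits full 3D cylinders; this sketch only conveys the sample-score-refit pattern.

```python
import numpy as np

def fit_circle(pts):
    """Algebraic (Kasa) least-squares circle fit: x^2 + y^2 + D x + E y + F = 0."""
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    b = -(pts[:, 0] ** 2 + pts[:, 1] ** 2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    center = np.array([-D / 2, -E / 2])
    radius = np.sqrt(center @ center - F)
    return center, radius

def ransac_circle(pts, n_iter=200, tol=0.5, seed=0):
    """Fit a circle robustly: sample minimal subsets, count inliers, refit."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, 0
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        center, radius = fit_circle(sample)
        if not np.isfinite(radius):          # degenerate (collinear) sample
            continue
        # Inliers lie within `tol` of the fitted circle.
        inliers = np.abs(np.linalg.norm(pts - center, axis=1) - radius) < tol
        if inliers.sum() > best_inliers and inliers.sum() >= 3:
            best_inliers, best = inliers.sum(), fit_circle(pts[inliers])
    return best
```

Tracking then repeats such a robust local fit while stepping along the estimated vessel direction, which is where the centerline and radius estimates come from.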
Digital Pathology I
Robust alignment of prostate histology slices with quantified accuracy
Cecilia Hughes, Olivier Rouviere, Florence Mege Lechevallier, et al.
Prostate cancer is the most common malignancy among men, yet no current imaging technique is capable of detecting the tumours with precision. To evaluate each technique, the histology data must be precisely mapped to the imaged data. As it cannot be assumed that the histology slices are cut along the same plane in which the imaged data are acquired, the registration is a 3D problem. This requires the prior accurate alignment of the histology slices. We propose a protocol to create, in a rapid and standardised manner, internal fiducial markers in fresh prostate specimens, and an algorithm by which these markers can then be automatically detected and classified, enabling the automatic rigid alignment of each slice. The protocol and algorithm were tested on 10 prostate specimens, with 19.2 histology slices on average per specimen. On average 90.9% of the fiducial markers created were visible in the slices, of which 96.1% were automatically correctly detected and classified. The average accuracy of the alignment was 0.19 ± 0.15 mm at the fiducial markers. The algorithm took 5.46 min on average per specimen. The proposed protocol and algorithm were also tested using simulated images and a beef liver sample. The simulated images showed that the algorithm has no associated residual error and justified the choice of a rigid registration. In the beef liver images, the average accuracy of the alignment was 0.11 ± 0.09 mm at the fiducial markers and 0.63 ± 0.47 mm at a validation marker approximately 20 mm from the fiducial markers.
Digital Pathology II
Reconstruction of incomplete cell paths through a 3D-2D level set segmentation
Maia Hariri, Justin W. L. Wan
Segmentation of fluorescent cell images is a popular technique for tracking live cells. One challenge of segmenting cells from fluorescence microscopy is that cells frequently disappear from the images. When the images are stacked together to form a 3D image volume, the disappearance of the cells leads to broken cell paths. In this paper, we present a segmentation method that can reconstruct incomplete cell paths. The key idea of this model is to perform 2D segmentation in a 3D framework. The 2D segmentation captures the cells that appear in the image slices while the 3D segmentation connects the broken cell paths. The formulation is similar to the Chan-Vese level set segmentation, which detects edges by comparing the intensity value at each voxel with the mean intensity values inside and outside of the level set surface. Our model, however, performs the comparison on each 2D slice, with the means calculated from the projected 2D contour. The resulting effect is to segment the cells on each image slice. Unlike segmentation of each image frame individually, these 2D contours together form a 3D level set function. By enforcing minimal mean curvature on the level set surface, our segmentation model is able to extend the cell contours into the gaps right before (and after) a cell disappears (and reappears), eventually connecting the broken paths. We present segmentation results for C2C12 cells in fluorescent images to illustrate the effectiveness of our model qualitatively and quantitatively through different numerical examples.
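The per-slice data term can be sketched as follows: for each z-slice, compare every pixel to the inside/outside means of that slice's projected 2D contour, exactly as in Chan-Vese but computed slice-by-slice. This is only the data force; the curvature term that bridges missing slices is not shown.

```python
import numpy as np

def slicewise_chan_vese_force(volume, phi):
    """Chan-Vese-style data force for a 3D level set phi, evaluated per slice.

    volume : 3D image stack (x, y, z);  phi : 3D level set (phi < 0 = inside).
    Positive force favors including the pixel in the (slice-wise) foreground.
    """
    force = np.zeros_like(volume, dtype=float)
    for z in range(volume.shape[2]):
        img, mask = volume[:, :, z].astype(float), phi[:, :, z] < 0
        if mask.any() and (~mask).any():
            c_in, c_out = img[mask].mean(), img[~mask].mean()
        else:                                    # empty contour on this slice
            c_in = c_out = img.mean()
        # Move the contour to best separate the two per-slice means.
        force[:, :, z] = (img - c_out) ** 2 - (img - c_in) ** 2
    return force

# phi is then evolved with this force plus a mean-curvature term on the full
# 3D surface, which is what bridges slices where a cell has disappeared.
```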
Posters: Registration
On the construction of topology-preserving deformations
Dominique Apprato, Christian Gout, Carole Le Guyader
In this paper, we investigate a new method to enforce topology preservation on two/three-dimensional deformation fields for non-parametric registration problems involving large-magnitude deformations. The method is composed of two steps. The first consists in correcting the gradient vector field of the deformation at the discrete level, in order to fulfill a set of conditions ensuring topology preservation in the continuous domain after bilinear interpolation. This part, although related to prior work by Karaçali and Davatzikos (Estimating Topology Preserving and Smooth Displacement Fields, B. Karaçali and C. Davatzikos, IEEE Transactions on Medical Imaging, vol. 23(7), 2004), proposes a new approach based on interval analysis and provides, unlike their method, uniqueness of the correction parameter α at each node of the grid, which is more consistent with the continuous setting. The second step aims to reconstruct the deformation, given the full set of its discrete gradient vectors. The problem is phrased as a functional minimization problem on a convex subset K of a Hilbert space V. Existence and uniqueness of the solution are established, and the use of Lagrange multipliers allows us to obtain the variational formulation of the problem on the Hilbert space V. The discretization of the problem by the finite element method does not require numerical schemes to approximate the partial derivatives of the deformation components and leads to solving two/three uncoupled sparse linear subsystems. Experimental results in brain mapping and comparisons with existing methods demonstrate the efficiency and competitiveness of the method.
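For intuition only: topology preservation of a deformation x → x + u(x) is commonly diagnosed by checking that the Jacobian determinant of the mapping stays positive everywhere (no folding). The sketch below is a generic diagnostic, not the interval-analysis correction proposed in the paper.

```python
import numpy as np

def jacobian_determinant_2d(disp):
    """Determinant of the Jacobian of a 2D mapping x -> x + u(x).

    disp : displacement field of shape (2, H, W), (u_x, u_y) on a unit grid.
    A positive determinant everywhere indicates the discretized transformation
    preserves topology (no folding)."""
    ux_y, ux_x = np.gradient(disp[0])    # derivatives along rows (y), cols (x)
    uy_y, uy_x = np.gradient(disp[1])
    # Jacobian of the mapping is I + grad(u).
    return (1 + ux_x) * (1 + uy_y) - ux_y * uy_x

disp = np.zeros((2, 64, 64))
disp[0] += 0.2 * np.sin(np.linspace(0, np.pi, 64))   # a smooth, small deformation
print((jacobian_determinant_2d(disp) > 0).all())     # True: no folding
```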
Motion coherent image registration and demons: practical handling of deformation boundaries
Much effort has gone into the understanding of regularization in ill-posed problems in computer vision. Yuille and Grzywacz were among the first to propose use of the Gaussian Tikhonov regularizer for image registration, illustrating the ideal properties of this regularizer for preserving motion coherence. Nielsen et al. [4] later described the intricate connection between the Gaussian Tikhonov regularizer and scale space theory; this work provided the basis with which Pennec et al. [8] detailed the theoretical underpinnings of Thirion's Demons algorithm for deformable image registration. The Demons algorithm iteratively computes a force vector field to drive the deformation in the appropriate direction, and then smooths the force vector field by Gaussian convolution in order to update the deformation. The Gaussian convolution step, which can be performed in the Fourier domain or via recursive filters, explicitly incorporates motion-coherent regularization into the registration algorithm. However, these procedures do not allow for ideal treatment of the deformation at the image boundaries. Performing the convolution in the Fourier domain forces the choice of periodic boundary conditions, which do not mimic physical behavior. Recursive filters force the user to decide how to extrapolate image data, and they may yield deformations that are not endomorphic. In this article, we illustrate how to remove these limitations and define computationally efficient algorithms for motion-coherent registration under a wide variety of boundary conditions that enable endomorphic deformations and/or physically realistic behavior. The resulting algorithms enable a new degree of user control over the behavior of Demons-style motion-coherent registration algorithms.
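To make the force-then-smooth structure concrete, here is a minimal Thirion-style Demons loop in 2D. The `mode` argument of the smoothing and interpolation calls marks exactly where the boundary-condition choice discussed above enters; this is a textbook-style sketch, not the paper's proposed algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_2d(fixed, moving, n_iter=50, sigma=2.0, mode="nearest"):
    """Minimal Demons loop: intensity-difference force, Gaussian regularization.

    fixed, moving : 2D float arrays of the same shape.
    Returns the displacement field disp of shape (2, H, W) in (row, col) order.
    """
    disp = np.zeros((2,) + fixed.shape)
    gy, gx = np.gradient(fixed)                              # fixed-image gradient
    grid = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
    for _ in range(n_iter):
        warped = map_coordinates(moving, grid + disp, order=1, mode=mode)
        diff = warped - fixed
        denom = gx ** 2 + gy ** 2 + diff ** 2 + 1e-9
        # Classic Demons force: -(M - F) * grad(F) / (|grad F|^2 + (M - F)^2)
        force = np.stack([-diff * gy / denom, -diff * gx / denom])
        disp += force
        # Motion-coherent regularization: Gaussian smoothing of the field;
        # `mode` selects the boundary condition (periodic is not among them here).
        disp = np.stack([gaussian_filter(d, sigma, mode=mode) for d in disp])
    return disp
```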
Image registration method based on multiresolution for dual-energy subtraction radiography
Takahiro Kawamura, Norihiro Omae, Masahiko Yamada, et al.
In this paper we propose a novel image registration method for dual-energy subtraction radiography. Body motion due to heartbeat, breathing and patient movement between the two exposures causes misregistration artifacts in the subtracted image. One conventional method can accurately compensate for many varieties of motion, from large to small, using a multi-resolution technique. This approach, however, does not consider the motion directions of overlapping structures and thus causes misregistration around the heart, where pulmonary blood vessels overlap the heart but move in different directions. Therefore, we propose a new image registration method to solve this problem. Our method has a registration process that detects the directions of the motions for each structure size, and a merging process that integrates the deformed results for each structure size. To verify the effectiveness of the proposed method, we evaluated the image quality of each anatomical structure in 31 chest energy subtraction images. As a result, the proposed method proved more effective in reducing misregistration artifacts of the heart, pulmonary blood vessels and ribs than the conventional method. Reducing these misregistration artifacts should contribute to improving the diagnostic performance of energy subtraction images.
3D-2D registration of cerebral angiograms based on vessel directions and intensity gradients
Endovascular treatment of cerebral aneurysms and arteriovenous malformations (AVM) involves navigation of a catheter through the femoral artery and vascular system to the site of pathology. Intra-interventional navigation is done under the guidance of one or at most two two-dimensional (2D) X-ray fluoroscopic images or 2D digital subtracted angiograms (DSA). Due to the projective nature of 2D images, the interventionist needs to mentally reconstruct the position of the catheter with respect to the three-dimensional (3D) patient vasculature, which is not a trivial task. By 3D-2D registration of pre-interventional 3D images such as CTA, MRA or 3D-DSA with intra-interventional 2D images, intra-interventional tools such as catheters can be visualized on the 3D model of the patient vasculature, allowing easier and faster navigation. Such navigation may consequently reduce the total ionizing dose and the amount of delivered contrast medium. In the past, the development and evaluation of 3D-2D registration methods for endovascular treatments received considerable attention. The main drawback of these methods is that they have to be initialized rather close to the correct position, as they mostly have a rather small capture range. In this paper, a novel registration method with a larger capture range and higher success rate is proposed. The proposed method and a state-of-the-art method were tested and evaluated on synthetic and clinical 3D-2D image pairs. The results on both databases indicate that although the proposed method was slightly less accurate, it significantly outperformed the state-of-the-art 3D-2D registration method in terms of robustness, as measured by capture range and success rate.
Improving point registration in dental cephalograms by two-stage rectified point translation transform
W. K. Tam, H. J. Lee
Cephalometric analysis requires the detection of landmarks on cephalograms. Current registration techniques, such as those using the scale-invariant feature transform (SIFT), perform poorly on cephalograms. We propose an improved registration technique for detecting landmarks on cephalograms, and compare the results with landmarks identified by dental professionals. Twenty digital cephalograms were collected from a dental clinic. Twenty orthodontic landmarks were identified by dental professionals on each image; one of the images was used as the template. We automatically locate the landmarks using a two-stage approach: a global registration of interest points between the two images and a local registration of the landmarks. In the first stage, SIFT is employed to establish point-to-point matching pairs. The matched points on the input image are treated as a set of translation transforms from the original template image, and the consistency of the translations is controlled by applying a rectification factor defined in this study. In the second stage, we localize the search within the suspected regions around the landmarks derived from the translations of the first stage. Local registrations are rectified and fine-tuned until translations close to the identified landmarks are obtained. Our method detected all the landmarks with error distances less than the 2 mm standard set forth by previous researchers. By improving the consistency of the translations, the performance of registration between two images was greatly improved. This method can be used as an initial step to locate the regions around the landmarks for improving detection in future work.
Robust registration of sparsely sectioned histology to ex-vivo MRI of temporal lobe resections
Maged Goubran, Ali R. Khan, Cathie Crukley, et al.
Surgical resection of epileptic foci is a typical treatment for drug-resistant epilepsy; however, accurate preoperative localization is challenging and often requires invasive sub-dural or intra-cranial electrode placement. The presence of cellular abnormalities in the resected tissue can be used to validate the effectiveness of multispectral Magnetic Resonance Imaging (MRI) in pre-operative foci localization and surgical planning. If successful, these techniques can lead to improved surgical outcomes and less invasive procedures. Towards this goal, a novel pipeline is presented here for post-operative imaging of temporal lobe specimens involving MRI and digital histology, and methods for bringing these images into spatial correspondence are presented and evaluated. The sparsely-sectioned histology images of resected tissue represent a challenge for 3D reconstruction, which we address with a combined 3D and 2D rigid registration algorithm that alternates between slice-based and volume-based registration with the ex-vivo MRI. We also evaluate four methods for non-rigid within-plane registration using both images and fiducials, with the top-performing method resulting in a target registration error of 0.87 mm. This work allows for the spatially local comparison of histology with post-operative MRI and paves the way for eventual registration with pre-operative MRI images.
Regularity-guaranteed transformation estimation in medical image registration
In addition to seeking geometric correspondence between the inputs, a legitimate image registration algorithm should also keep the estimated transformation meaningful, or regular. In this paper, we present a mathematically sound formulation that explicitly controls the deformation to keep each grid element in a meaningful shape over the entire geometric matching procedure. The deformation regularity conditions are enforced by maintaining all moving neighbors as non-twisted grid elements. In contrast to similar works, our model differentiates and formulates the convex and concave update cases under an efficient and straightforward point-line/surface orientation framework, and uses equality constraints to guarantee grid regularity and prevent folding. Experiments on MR images are presented to show the improvements made by our model over the popular Demons and DCT-based registration algorithms.
Quantitative assessment of mis-registration issues of diffusion tensor imaging (DTI)
Yue Li, Hangyi Jiang, Susumu Mori
Image distortions caused by eddy currents and patient motion have been two major sources of mis-registration in diffusion tensor imaging (DTI). Numerous registration methods have been proposed to correct them. However, quality control of DTI remains an important issue, because we rarely report how much mis-registration existed and how well it was corrected. In this paper, we propose a method for quantitative reporting of DTI data quality. Our registration method minimizes a cost function based on mean square tensor fitting errors, using a twelve-parameter full affine transformation. From the registration result, distortion and motion parameters are estimated. Because the translation parameters reflect both eddy-current-induced image translation and patient motion, we analyze the transformation model and separate the two by removing the contributions that are linearly correlated with the diffusion gradients. We define metrics measuring the amounts of distortion, rotation and translation. We tested our method on a database of 64 subjects and computed the statistics of each metric. Finally we demonstrate, in several examples, how these statistics can be used to assess data quality quantitatively.
Using fractional gradient information in non-rigid image registration: application to breast MRI
A. Melbourne, N. Cahill, C. Tanner, et al.
This work applies fractional differentiation (differentiation to non-integer order) to the gradients determined from image intensities for enhanced image registration. The technique is used to correct known simulated deformations of volumetric breast MR data using two algorithms: direct registration of gradient magnitude images, and an extension of a previously published method that incorporates both image intensity and image gradient information to enhance registration performance. Better recovery of known deformations is seen when using non-integer order derivatives: half-derivative breast images are better registered when these methods are incorporated into a standard diffusion-based registration algorithm.
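Fractional differentiation can be computed in several ways; one common discrete form is the Grünwald-Letnikov derivative, sketched below for a 1D signal with unit spacing. This is a generic illustration of a half-derivative, not the specific implementation used in the paper.

```python
import numpy as np

def gl_fractional_diff(signal, alpha, n_terms=20):
    """Grunwald-Letnikov fractional derivative of order `alpha` (unit spacing).

    alpha = 1 recovers a backward difference; alpha = 0.5 is a half-derivative.
    Applied along a 1D signal, e.g. an image row or a gradient profile.
    """
    # Recursive GL coefficients: c_0 = 1, c_k = c_{k-1} * (k - 1 - alpha) / k
    c = np.empty(n_terms)
    c[0] = 1.0
    for k in range(1, n_terms):
        c[k] = c[k - 1] * (k - 1 - alpha) / k
    signal = np.asarray(signal, dtype=float)
    out = np.zeros_like(signal)
    for k in range(n_terms):
        out[k:] += c[k] * signal[:len(signal) - k]   # weighted backward shifts
    return out

x = np.linspace(0, 1, 100)
print(gl_fractional_diff(x, 0.5)[-1])   # half-derivative of a ramp
```

Intermediate orders interpolate between the image itself (order 0) and its gradient (order 1), which is the property exploited for registration here.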
Multi-objective optimization for deformable image registration: proof of concept
In this work we develop and study a methodology for deformable image registration that overcomes a drawback of optimization procedures in common deformable image registration approaches: the use of a single combination of different objectives. Because selecting the best combination is well-known to be non-trivial, we use a multi-objective optimization approach that computes and presents multiple outcomes (a so-called Pareto front) at once. The approach is inherently more powerful because not all Pareto-optimal outcomes are necessarily obtainable by running existing approaches multiple times, for different combinations. Furthermore, expert knowledge can be easily incorporated in making the final best-possible decision by simply looking at (a diverse selection of) the outcomes illustrating both the transformed image and the associated deformation vector field. At the basis of the optimization methodology lies an advanced, model-based evolutionary algorithm that aims to exploit features of a problem's structure in a principled manner via probabilistic modeling. Two objectives are defined: 1) maximization of intensity similarity (normalized mutual information) and 2) minimization of energy required to accomplish the transformation (a model based on Hooke's law that incorporates elasticity characteristics associated with different tissue types). A regular grid of points forms the basis of the transformation model. Interpolation extends the correspondence as found for the grid to the rest of the volume. As a proof of concept we performed tests on a 2D axial slice of a CT scan of a breast. Results indicate plausible behavior of the proposed methodology that innovatively combines intensity-based and model-based registration criteria with state-of-the-art adaptive computation techniques for multi-objective optimization in deformable image registration.
Automatic correspondence detection in mammogram and breast tomosynthesis images
Jan Ehrhardt, Julia Krüger, Arpad Bischof, et al.
Two-dimensional mammography is the major imaging modality in breast cancer detection. A disadvantage of mammography is the projective nature of this imaging technique. Tomosynthesis is an attractive modality with the potential to combine the high contrast and high resolution of digital mammography with the advantages of 3D imaging. In order to facilitate diagnostics and treatment in the current clinical work-flow, correspondences between tomosynthesis images and previous mammographic exams of the same woman have to be determined. In this paper, we propose a method to automatically detect correspondences between 2D mammograms and 3D tomosynthesis images. In general, this 2D/3D correspondence problem is ill-posed, because a point in the 2D mammogram corresponds to a line in the 3D tomosynthesis image. The goal of our method is to detect the "most probable" 3D position in the tomosynthesis image corresponding to a selected point in the 2D mammogram. We present two alternative approaches to solve this 2D/3D correspondence problem: a 2D/3D registration method, and a 2D/2D mapping between mammogram and tomosynthesis projection images followed by a back projection. The advantages and limitations of both approaches are discussed, and the performance of the methods is evaluated qualitatively and quantitatively using a software phantom and clinical breast image data. Although the proposed 2D/3D registration method can compensate for moderate breast deformations caused by different breast compressions, this approach is not suitable for clinical tomosynthesis data due to the limited resolution and blurring effects perpendicular to the direction of projection. The quantitative results show that the proposed 2D/2D mapping method is capable of automatically detecting corresponding positions in mammograms and tomosynthesis images for 61 out of 65 landmarks. The proposed method can facilitate diagnosis, visual inspection and comparison of 2D mammograms and 3D tomosynthesis images for the physician.
An image registration based ultrasound probe calibration
Xin Li, Dinesh Kumar, Saradwata Sarkar, et al.
Reconstructed 3D ultrasound of the prostate gland finds application in several medical areas such as image-guided biopsy, therapy planning and dose delivery. In our application, we use an end-fire probe rotated about its axis to acquire a sequence of rotational slices to reconstruct a 3D TRUS (transrectal ultrasound) image. The image acquisition system consists of an ultrasound transducer situated on a cradle directly attached to a rotational sensor. However, due to system tolerances, the axis of the probe does not align exactly with the designed axis of rotation, resulting in artifacts in the 3D reconstructed ultrasound volume. We present a rigid registration based automatic probe calibration approach. The method uses a sequence of phantom images, each pair acquired at an angular separation of 180 degrees, and registers corresponding image pairs to compute the deviation from the designed axis. A modified shadow removal algorithm is applied for preprocessing. An attribute vector is constructed from image intensity and a speckle-insensitive information-theoretic feature. We compare registration between the presented method and expert-corrected images in 16 prostate phantom scans. Images were acquired at multiple resolutions and different misalignment settings from two ultrasound machines. Screenshots from the 3D reconstruction are shown before and after misalignment correction. Registration parameters from automatic and manual correction were found to be in good agreement. Average absolute differences of translation and rotation between the automatic and manual methods were 0.27 mm and 0.65 degrees, respectively. The registration parameters also showed lower variability for automatic registration (pooled standard deviation σtranslation = 0.50 mm, σrotation = 0.52 degrees) compared to the manual approach (pooled standard deviation σtranslation = 0.62 mm, σrotation = 0.78 degrees).
Medical image registration using machine learning-based interest point detector
Sergey Sergeev, Yang Zhao, Marius George Linguraru, et al.
This paper presents a feature-based image registration framework that exploits a novel machine learning (ML)-based interest point detection (IPD) algorithm for feature selection and correspondence detection. We use a feed-forward neural network (NN) with back-propagation as our base ML detector. Literature on ML-based IPD is scarce and, to the best of our knowledge, no previous research has addressed a feature selection strategy for IPD with a cross-validation (CV) detectability measure. Our target application is the registration of clinical abdominal CT scans with abnormal anatomies. We evaluated the correspondence detection performance of the proposed ML-based detector against two well-known IPD algorithms: SIFT and SURF. The proposed method is capable of performing affine and rigid registrations of 2D and 3D CT images, demonstrating more than two times better accuracy in correspondence detection than SIFT and SURF. The registration accuracy has been validated manually using identified landmark points. Our experimental results show an improvement in 3D image registration quality of 18.92% compared with the affine registration method from the standard ITK registration toolkit.
Automatic control-point selection for image registration using disparity fitting
We present an algorithm for automatically selecting and matching control points for the purpose of registering images acquired using different imaging modalities. The modulus maxima of the wavelet transform were used to define a criterion for identifying control points. This criterion is capable of selecting points based on the size of features in the image. This technique can be tailored, by adjusting the scale of the filters in the modulus calculation, to the specific objects or structures known to occur in each image being registered. The control-point matching technique includes an iterative method for reducing the set of control-point pairs using the horizontal and vertical disparities between the matched pairs of points. Least-squares planes are fit to the horizontal and vertical disparity data, and control-point pairings are deleted based on their distances from those planes. The remaining points are used to recompute the planes. The process is iterated until the remaining points fall within a certain distance from the planes. Finally, a spatial transformation is performed on the template image to bring it into alignment with the reference image. The result of the control-point pair reduction is a more accurate alignment than what would have been produced using the initial control-point pairs. These techniques are applicable to medical images, but examples are given using images of paintings.
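The iterative disparity-plane pruning step described above can be sketched compactly: fit least-squares planes to the horizontal and vertical disparities of the matched control points, drop the worst-fitting pair, and repeat until all residuals fall below a threshold. The function and parameter names are illustrative.

```python
import numpy as np

def prune_by_disparity_planes(pts_ref, pts_tmpl, thresh=2.0, max_iter=20):
    """Iteratively reject control-point pairs whose disparities lie far from
    least-squares planes fitted over the reference-image coordinates.

    pts_ref, pts_tmpl : (N, 2) arrays of matched (x, y) control points.
    Returns a boolean mask of the retained pairs.
    """
    keep = np.ones(len(pts_ref), dtype=bool)
    for _ in range(max_iter):
        x, y = pts_ref[keep, 0], pts_ref[keep, 1]
        A = np.column_stack([x, y, np.ones(keep.sum())])
        disp = pts_tmpl[keep] - pts_ref[keep]            # (dx, dy) disparities
        resid = np.zeros(keep.sum())
        for d in range(2):                               # horizontal, vertical
            coef, *_ = np.linalg.lstsq(A, disp[:, d], rcond=None)
            resid = np.maximum(resid, np.abs(A @ coef - disp[:, d]))
        if (resid < thresh).all():
            break
        idx = np.flatnonzero(keep)                       # drop the worst pair
        keep[idx[resid.argmax()]] = False
    return keep
```

The surviving pairs then drive the final spatial transformation of the template image.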
Posters: Segmentation
Vessel segmentation using an iterative fast marching approach with directional prior
Wei Liao, Stefan Wörz, Karl Rohr
This paper introduces a new approach to segment vessels from medical images using the fast marching method. Our approach relies on an iterative scheme: starting from a given start point and initial direction, the optimal path within a circular region of interest (ROI) around this point is found using the fast marching method and a combination of different speed functions. Besides speed functions based on a vesselness measure and the vessel radius, we introduce a directional speed function which prefers directions close to the predicted direction. The end point of the detected path is then used as the new start point to find the optimal path within a new ROI centered around this point. This procedure is repeated until the user-specified end point is reached, or some other termination criterion is satisfied. The final result is the concatenation of the sequence of paths of the individual ROIs. Our approach has been applied to synthetic and real datasets. The experiments show that our approach is not only more efficient than a previous fast marching approach but also produces better results when dealing with shortcuts and crossings in the segmentation of long vessels.
Adaptive epithelial cytoplasm segmentation and epithelial unit separation in immunofluorescent images
Janakiramanan Ramachandran, Richard Scott, Peter Ajemba, et al.
Tissue segmentation is one of the key preliminary steps in the morphometric analysis of tissue architecture. In multi-channel immunofluorescent biomarker images, the primary segmentation steps consist of segmenting the nuclei (epithelial and stromal) and the epithelial cytoplasm from the 4',6-diamidino-2-phenylindole (DAPI) and cytokeratin 18 (CK18) biomarker images, respectively. Epithelial cytoplasm segmentation can be very challenging due to variability in cytoplasm morphology and image staining. A robust and adaptive segmentation algorithm was developed both to delineate the boundaries and to separate the thin gaps between epithelial unit structures. This paper discusses the novel methods developed for adaptive segmentation of epithelial cytoplasm and separation of epithelial units. The adaptive segmentation was performed by computing the non-epithelial background texture of every CK18 biomarker image. The epithelial unit separation was performed using two complementary techniques: a marker-based, center-initialized watershed transform and a boundary-initialized fast marching/watershed segmentation. The adaptive segmentation algorithm was tested on 926 CK18 biomarker biopsy images (326 patients) with limited background noise and 1030 prostatectomy images (374 patients) with noisy to very noisy background. The segmentation performance was measured using two kinds of metrics, namely stability and background texture metrics. The database of 1030 noisy prostatectomy images had a lower mean value (over the stability and three background texture performance metrics) than the biopsy dataset of 926 images with limited background noise. The average of all four performance metrics yielded 94.32% accuracy for prostatectomy images compared to 99.40% accuracy for biopsy images.
Nuclei extraction from histopathological images using a marked point process approach
Maria Kulikova, Antoine Veillard, Ludovic Roux, et al.
Morphology of cell nuclei is a central aspect in many histopathological studies, in particular in breast cancer grading. Therefore, the automatic detection and extraction of cell nuclei from microscopic images obtained from cancer tissue slides is one of the most important problems in digital histopathology. We propose to tackle the problem using a model based on marked point processes (MPP), a methodology for the extraction of multiple objects from images. The advantage of MPP-based models is their ability to take into account the geometry of objects and the information about their spatial distribution in the image. Previously, MPP models have been applied to the extraction of objects with simple geometrical shapes. For histological grading, a morphological criterion known as nuclear pleomorphism, corresponding to fine morphological differences between the nuclei, is assessed by pathologists. Therefore, the accurate delineation of nuclei becomes an issue of even greater importance than optimal nuclei detection. Recently, the MPP framework has been defined on the space of arbitrarily shaped objects, allowing more accurate extraction of complex-shaped objects. The nuclei often appear joint or even overlapping in histopathological images. The model still allows them to be extracted as individual joint or overlapping objects without discarding the overlapping parts and therefore without significant loss in delineation precision. We aim to compare the MPP model with two state-of-the-art methods selected from a comprehensive review of the available methods. The experiments are performed using a database of H&E stained breast cancer images covering a wide range of histological grades.
A framework of whole heart extracellular volume fraction estimation for low dose cardiac CT images
Xinjian Chen, Ronald M. Summers, Marcelo Souto Nacif, et al.
Cardiac magnetic resonance imaging (CMRI) has been well validated and allows quantification of myocardial fibrosis in comparison to the overall mass of the myocardium. Unfortunately, CMRI is relatively expensive and is contraindicated in patients with intracardiac devices. Cardiac CT (CCT) is widely available and has been validated for detection of scar and myocardial stress/rest perfusion. In this paper, we sought to evaluate the potential of low dose CCT for the measurement of the myocardial whole heart extracellular volume (ECV) fraction. A novel framework was proposed for CCT whole heart ECV estimation, which consists of three main steps. First, a shape-constrained graph cut (GC) method was proposed for myocardium and blood pool segmentation of the post-contrast image. Second, the symmetric Demons deformable registration method was applied to register the pre-contrast image to the post-contrast image. Finally, the whole heart ECV value was computed. The proposed method was tested on 7 clinical low dose CCT datasets with pre-contrast and post-contrast images. The preliminary results demonstrated the feasibility and efficiency of the proposed method.
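For the final step, a commonly used formulation of ECV from paired pre-/post-contrast attenuation values is shown below (the abstract does not spell out its exact formula, so this should be read as the standard definition rather than the authors' code):

```python
def extracellular_volume_fraction(myo_pre, myo_post, blood_pre, blood_post, hematocrit):
    """Standard CT-based ECV estimate (a common formulation; the paper's exact
    implementation may differ): ECV = (1 - Hct) * dHU_myocardium / dHU_blood."""
    d_myo = myo_post - myo_pre        # change in myocardial attenuation (HU)
    d_blood = blood_post - blood_pre  # change in blood-pool attenuation (HU)
    return (1.0 - hematocrit) * d_myo / d_blood

# Example: mean HU values measured in the registered pre-/post-contrast images
ecv = extracellular_volume_fraction(45.0, 95.0, 40.0, 190.0, hematocrit=0.42)
print(f"whole-heart ECV = {ecv:.2%}")   # about 19%
```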
Heart region segmentation from low-dose CT scans: an anatomy based approach
Anthony P. Reeves, Alberto M. Biancardi, David F. Yankelevitz, et al.
Cardiovascular disease is a leading cause of death in developed countries. The concurrent detection of heart diseases during low-dose whole-lung CT scans (LDCT), typically performed as part of a screening protocol, hinges on the accurate quantification of coronary calcification. The creation of fully automated methods is ideal, as complete manual evaluation is imprecise, operator dependent, time consuming and thus costly. The technical challenges posed by LDCT scans in this context are mainly twofold. First, there is a high level of image noise arising from the low radiation dose technique. Additionally, there is a variable amount of cardiac motion blurring due to the lack of electrocardiographic gating and the fact that heart rates differ between human subjects. As a consequence, the reliable segmentation of the heart, the first stage toward the implementation of morphologic heart abnormality detection, is also quite challenging. An automated computer method based on a sequential labeling of major organs and determination of anatomical landmarks has been evaluated on a public database of LDCT images. The novel algorithm builds from a robust segmentation of the bones and airways and embodies a stepwise refinement starting at the top of the lungs, where image noise is at its lowest and where the carina provides a good calibration landmark. The segmentation is completed at the inferior wall of the heart, where extensive image noise is accommodated. This method is based on the geometry of human anatomy and does not involve training through manual markings. Using visual inspection by an expert reader as a gold standard, the algorithm achieved successful heart and major vessel segmentation in 42 of 45 low-dose CT images. In the 3 remaining cases, the cardiac base was over-segmented due to incorrect hemidiaphragm localization.
Enhanced detection of the vertebrae in 2D CT-images
Franz Graf, Robert Greil, Hans-Peter Kriegel, et al.
In recent years, a considerable number of methods have been proposed for detecting and reconstructing the spine and the vertebrae from CT and MR scans. The results are either used for examining the vertebrae or serve as a preprocessing step for further detection and annotation tasks. In this paper, we propose a method for reliably detecting the position of the vertebrae on a single slice of a transversal body CT scan. Thus, our method is not restricted by the available portion of the 3D scan, but works even on a single 2D image. A further advantage of our method is that detection does not require adjusting parameters or direct user interaction. Technically, our method is based on an imaging pipeline comprising five steps: The input image is preprocessed. The relevant region of the image is extracted. Then, a set of candidate locations is selected based on bone density. In the next step, image features are extracted from the surroundings of the candidate locations and an instance-based learning approach is used for selecting the best candidate. Finally, a refinement step optimizes the best candidate region. Our proposed method is validated on a large, diverse data set of more than 8 000 images and significantly improves the accuracy in terms of area overlap and distance from the true position compared to the only other method proposed for this task so far.
Metastatic liver tumor detection from 3D CT images using a level set algorithm with liver-edge term
Junichi Miyakoshi, Shuntaro Yui, Kazuki Matsuzaki, et al.
We developed a metastatic liver tumor detection method using a level set algorithm with a liver-edge term. The level set algorithm is suitable for detection that requires an automated and accurate technique to reduce the time it takes to interpret the results. The conventional detection method, which is based on shape analysis using the Hessian matrix, tends to miss tumors on the edge of the liver parenchyma because such tumors have a different shape than those in the center: in the center they are blob-like and on the edge they are step-like. The proposed method, which adds a liver-edge term, improves the accuracy of detection on the edge of the liver parenchyma by recognizing step-like shapes in the intensity distribution. We applied the method to five 3-D CT images and evaluated the accuracy. Results showed that the proposed method had an average sensitivity of 92% compared to 88% for the conventional method.
Fully automatic vertebra detection in x-ray images based on multi-class SVM
Fabian Lecron, Mohammed Benjelloun, Saïd Mahmoudi
Automatically detecting vertebral bodies in X-ray images is a very complex task, especially because of the noise and low contrast inherent in that imaging modality. Therefore, the contributions in the literature mainly address only two imaging modalities: Computed Tomography (CT) and Magnetic Resonance (MR). Few works are dedicated to conventional X-ray radiography, and most of them propose semi-automatic methods. However, vertebra detection is a key step in many medical applications such as vertebra segmentation, vertebral morphometry, etc. In this work, we develop a fully automatic approach for vertebra detection, based on a learning method. The idea is to detect a vertebra by its anterior corners without human intervention. To this end, the points of interest in the radiograph are first detected by an edge polygonal approximation. Then, a SIFT descriptor is used to train an SVM model, so that each point of interest can be classified to determine whether it belongs to a vertebra or not. Our approach has been assessed on the detection of 250 cervical vertebræ on radiographs. The results show a very high precision, with a corner detection rate of 90.4% and a vertebra detection rate from 81.6% to 86.5%.
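The corner classification step (SIFT descriptors at candidate points fed to an SVM) might look roughly like the following sketch using OpenCV and scikit-learn (keypoint size, kernel, and C are assumptions; the training-data variables in the comments are hypothetical placeholders):

```python
import cv2                     # opencv-python >= 4.4 includes SIFT
import numpy as np
from sklearn.svm import SVC

def sift_descriptors(image_u8, points, patch_size=16.0):
    """SIFT descriptors computed at given (x, y) corner candidates
    of an 8-bit grayscale radiograph."""
    sift = cv2.SIFT_create()
    kps = [cv2.KeyPoint(float(x), float(y), patch_size) for x, y in points]
    _, desc = sift.compute(image_u8, kps)
    return desc

# Hypothetical training/usage (variable names are placeholders):
# X_train = np.vstack([sift_descriptors(img, pts) for img, pts in labeled_candidates])
# clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, y_train)
# y_pred = clf.predict(sift_descriptors(test_image, candidate_points))
```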
Local label learning (L3) for multi-atlas based segmentation
Yongfu Hao, Jieqiong Liu, Yunyun Duan, et al.
For subcortical structure segmentation, multi-atlas based segmentation methods have attracted great interest due to their competitive performance. Under this framework, using the deformation fields generated by registering the atlas images to the target image, labels of the atlases are first propagated to the target image space and then fused to obtain the target segmentation. Many label fusion strategies have been proposed, and most of them adopt predefined weighting models which are not necessarily optimal. In this paper, we propose a local label learning (L3) strategy to estimate the target image's labels using statistical machine learning techniques. Specifically, we use a Support Vector Machine (SVM) to learn a classifier for each target image voxel using its neighboring voxels in the atlases as a training dataset. Each training sample has dozens of image features extracted around its neighborhood, and these features are optimally combined by the SVM learning method to classify the target voxel. The key contribution of this method is the development of a locally specific classifier for each target voxel based on informative texture features. A validation experiment on 57 MR images demonstrated that our method generates hippocampus segmentations with a Dice overlap of 0.908±0.023 with manual segmentations, statistically significantly better than state-of-the-art segmentation algorithms.
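The core of the L3 idea, training a voxel-specific classifier from the propagated atlas labels in a neighborhood, can be illustrated with a minimal sketch (the feature extraction and exact SVM settings in the paper differ; the names and the linear kernel below are assumptions):

```python
import numpy as np
from sklearn.svm import SVC

def classify_voxel(target_feat, atlas_feats, atlas_labels):
    """Train a voxel-specific classifier from neighboring atlas samples and
    predict the target voxel's label (minimal sketch of the L3 idea).

    target_feat  : (F,) feature vector around the target voxel
    atlas_feats  : (N, F) features of neighboring voxels pooled over all atlases
    atlas_labels : (N,) propagated labels of those voxels (0 = background, 1 = structure)
    """
    if len(np.unique(atlas_labels)) < 2:
        return int(atlas_labels[0])          # trivial case: all neighbors agree
    clf = SVC(kernel="linear", C=1.0)
    clf.fit(atlas_feats, atlas_labels)
    return int(clf.predict(target_feat[None, :])[0])
```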
Automated anatomical labeling method for abdominal arteries extracted from 3D abdominal CT images
Masahiro Oda, Bui Huy Hoang, Takayuki Kitasaka, et al.
This paper presents an automated anatomical labeling method for abdominal arteries. In abdominal surgery, understanding the blood vessel structure related to a target organ is very important. The branching pattern of blood vessels differs among individuals, so a system is required that can assist in understanding the blood vessel structure and the anatomical names of a patient's blood vessels. Previous anatomical labeling methods for abdominal arteries deal with either the upper or the lower abdominal arteries. In this paper, we present an automated anatomical labeling method for both the upper and lower abdominal arteries extracted from CT images. We obtain a tree structure of the artery regions and calculate feature values for each branch. These feature values include the diameter, curvature, direction, and running vectors of a branch. The target arteries of this method are grouped based on branching conditions, and the following processes are applied separately to each group. We compute candidate artery names using classifiers that are trained to output artery names. A correction process of the candidate anatomical names based on majority voting is applied to determine the final names. We applied the proposed method to 23 cases of 3D abdominal CT images. Experimental results showed that the proposed method is able to assign names to the entire set of major abdominal arteries. The recall and precision rates of labeling are 79.01% and 80.41%, respectively.
Computerized analysis of pelvic incidence from 3D images
Tomaž Vrtovec, Michiel M. A. Janssen, Franjo Pernuš, et al.
The sagittal alignment of the pelvis can be evaluated by the angle of pelvic incidence (PI), which is constant for an arbitrary subject position and orientation and can be therefore compared among subjects in standing, sitting or supine position. In this study, PI was measured from three-dimensional (3D) computed tomography (CT) images of normal subjects that were acquired in supine position. A novel computerized method, based on image processing techniques, was developed to automatically determine the anatomical references required to measure PI, i.e. the centers of the femoral heads in 3D, and the center and inclination of the sacral endplate in 3D. Multiplanar image reformation was applied to obtain perfect sagittal views with all anatomical structures completely in line with the hip axis, from which PI was calculated. The resulting PI (mean±standard deviation) was equal to 46.6°±9.2° for male subjects (N = 189), 47.6°±10.7° for female subjects (N = 181), and 47.1°±10.0° for all subjects (N = 370). The obtained measurements of PI from 3D images were not biased by acquisition projection or structure orientation, because all anatomical structures were completely in line with the hip axis. The performed measurements in 3D therefore represent PI according to the actual geometrical relationships among anatomical structures of the sacrum, pelvis and hips, as observed from the perfect sagittal views.
Incorporation of physical constraints in optimal surface search for renal cortex segmentation
Xiuli Li, Xinjian Chen, Jianhua Yao, et al.
In this paper, we propose a novel approach for multiple-surface segmentation based on the incorporation of physical constraints in optimal surface searching. We apply our new approach to solve the renal cortex segmentation problem, an important but not sufficiently researched issue. In this study, in order to better handle the intensity proximity of the renal cortex and renal column, we extend the optimal surface search approach to allow for varying sampling distances and physical separation constraints, instead of the traditional fixed sampling distance and numerical separation constraints. The sampling distance of each vertex-column is computed according to the sparsity of the local triangular mesh. Then the physical constraint learned from a priori renal cortex thickness is applied to the inter-surface arcs as the separation constraint. Appropriate varying sampling distances and separation constraints were learned from 6 clinical CT images. After training, the proposed approach was tested on a test set of 10 images. The manual segmentation of the renal cortex was used as the reference standard. Quantitative analysis of the segmented renal cortex indicates that overall segmentation accuracy was increased after introducing the varying sampling distance and physical separation constraints (the average true positive volume fraction (TPVF) and false positive volume fraction (FPVF) were 83.96% and 2.80%, respectively, using varying sampling distance and physical separation constraints, compared to 74.10% and 0.18%, respectively, using fixed sampling distance and numerical separation constraints). The experimental results demonstrated the effectiveness of the proposed approach.
A parametric statistic model and fast algorithm for brain MR image segmentation and bias correction
In this paper, we propose an improved method for simultaneous estimation of the bias field and segmentation of tissues in magnetic resonance images, which extends an existing method. First, the bias field is modeled as a linear combination of a set of basis functions, and thereby parameterized by the coefficients of the basis functions. Then we model the intensity distribution in each tissue as a Gaussian distribution, and use the maximum a posteriori probability and total variation (TV) regularization to define our objective energy function. Finally, an efficient iterative algorithm based on the split Bregman method is used to minimize the energy function quickly. Comparisons with other approaches demonstrate the superior performance of this algorithm.
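A much-simplified sketch of modeling the bias field as a linear combination of basis functions is given below; it uses a plain least-squares fit of a polynomial basis and omits the Gaussian tissue model, the TV regularization, and the split Bregman solver described in the abstract (the degree and normalization are assumptions):

```python
import numpy as np

def fit_bias_field(image, tissue_mean, degree=3):
    """Fit a smooth multiplicative bias field as a linear combination of 2D
    polynomial basis functions (simplified illustration only).

    image       : 2D array of observed intensities
    tissue_mean : 2D array with the current estimate of the true tissue
                  intensity at each pixel (e.g. the mean of its tissue class)
    """
    h, w = image.shape
    y, x = np.mgrid[0:h, 0:w]
    y = y / h - 0.5
    x = x / w - 0.5
    # polynomial basis functions x^i * y^j with i + j <= degree
    basis = [(x ** i) * (y ** j)
             for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.column_stack([b.ravel() for b in basis])
    target = (image / np.maximum(tissue_mean, 1e-6)).ravel()  # image = bias * true intensity
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return (A @ coef).reshape(h, w)   # estimated bias field
```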
Live-wire-based segmentation of 3D anatomical structures for image-guided lung interventions
Computed Tomography (CT) has been widely used for assisting in lung cancer detection/diagnosis and treatment. In lung cancer diagnosis, suspect lesions or regions of interest (ROIs) are usually analyzed in screening CT scans. Then, CT-based image-guided minimally invasive procedures are performed for further diagnosis through bronchoscopic or percutaneous approaches. Thus, ROI segmentation is a preliminary but vital step for abnormality detection, procedural planning, and intra-procedural guidance. In lung cancer diagnosis, such ROIs can be tumors, lymph nodes, nodules, etc., which may vary in size, shape, and other complicating phenomena. Manual segmentation approaches are time consuming, user-biased, and cannot guarantee reproducible results. Automatic methods do not require user input, but they are usually highly application-dependent. To balance efficiency, accuracy, and robustness, considerable effort has been devoted to semi-automatic strategies, which retain full user control while minimizing human interaction. Among available semi-automatic approaches, the live-wire algorithm has been recognized as a valuable tool for segmentation of a wide range of ROIs from chest CT images. In this paper, a new 3D extension of the traditional 2D live-wire method is proposed for 3D ROI segmentation. In the experiments, the proposed approach is applied to a set of anatomical ROIs from 3D chest CT images, and the results are compared with segmentations derived from a previously evaluated live-wire-based approach.
Semi-automatic intracranial tumor segmentation and tumor tissue classification based on multiple MR protocols
A. Franz, H. Tschampa, A. Müller, et al.
Segmentation of intracranial tumors in Magnetic Resonance (MR) data sets and classification of the tumor tissue into vital, necrotic, and perifocal edematous areas is required in a variety of clinical applications. Manual delineation of the tumor tissue boundaries is a tedious and error-prone task, and reproducibility is problematic. Furthermore, tissue classification mostly requires information of several MR protocols and contrasts. Here we present a nearly automatic segmentation and classification algorithm for intracranial tumor tissue working on a combination of T1 weighted contrast enhanced (T1CE) and fluid attenuated inversion recovery (FLAIR) data sets. Both data types are included in MR intracranial tumor protocols that are used in clinical routine. The algorithm is based on a region growing technique. The main required user interaction is a mouse click to provide the starting point. The region growing thresholds are automatically adapted to the requirements of the actual data sets. If the segmentation result is not fully satisfying, the user is allowed to adapt the algorithmic parameters for final fine-tuning. We developed a user interface, where the data sets can be loaded, the segmentation can be started by a mouse click, the parameters can be amended, and the segmentation results can be saved. With this user interface, our segmentation tool can be used in the hospital on an image processing workstation or even directly on the MR scanner. This enables an extensive validation study. On the 20 clinical test cases of human intracranial tumors we investigated so far, the results were satisfying in 85% of the cases.
A multi-dimensional model for localization of highly variable objects
Heike Ruppertshofen, Thomas Bülow, Jens von Berg, et al.
In this work, we present a new type of model for object localization, which is well suited for anatomical objects exhibiting large variability in size, shape and posture, for usage in the discriminative generalized Hough transform (DGHT). The DGHT combines the generalized Hough transform (GHT) with a discriminative training approach to automatically obtain robust and efficient models. It has been shown to be a strong tool for object localization capable of handling a rather large amount of shape variability. For some tasks, however, the variability exhibited by different occurrences of the target object becomes too large to be represented by a standard DGHT model. To be able to capture such highly variable objects, several sub-models, representing the modes of variability as seen by the DGHT, are created automatically and are arranged in a higher dimensional model. The modes of variability are identified on-the-fly during training in an unsupervised manner. Following the concept of the DGHT, the sub-models are jointly trained with respect to a minimal localization error employing the discriminative training approach. The procedure is tested on a dataset of thorax radiographs with the target to localize the clavicles. Due to different arm positions, the posture and arrangement of the target and surrounding bones differs strongly, which hampers the training of a good localization model. Employing the new model approach the localization rate improves by 13% on unseen test data compared to the standard model.
Improving semi-automated segmentation by integrating learning with active sampling
Jing Huo, Kazunori Okada, Matthew Brown
Interactive segmentation algorithms such as GrowCut usually require quite a few user interactions to perform well, and have poor repeatability. In this study, we developed a novel technique to boost the performance of the interactive segmentation method GrowCut involving: 1) a novel "focused sampling" approach for supervised learning, as opposed to conventional random sampling; 2) boosting GrowCut using the machine learned results. We applied the proposed technique to the glioblastoma multiforme (GBM) brain tumor segmentation, and evaluated on a dataset of ten cases from a multiple center pharmaceutical drug trial. The results showed that the proposed system has the potential to reduce user interaction while maintaining similar segmentation accuracy.
Robust lumen segmentation of coronary arteries in 2D angiographic images
Maria Polyanskaya, Chris Schwemmer, Andre Linarth, et al.
Diagnosis and treatment of coronary diseases depend on the data acquired during angiographic investigations. To provide better assistance for angiographic procedures, a segmentation of the lumen is required. A new algorithm for vessel centerline computation and lumen segmentation in 2D projection coronary angiograms is presented. Centerlines are extracted by a graph-based optimization technique, which searches for paths with minimal costs. The search starts from a source point, which is automatically set by the proposed algorithm. A new objective function for determining the costs of the graph edges is proposed. It consists of the response of the medialness filter and is regularized by the centerline potential function. In the medialness filter, a vessel cross-section is represented by a 1D profile parameterized by center position and radius. The medialness filter at a point optimizes a gradient-based response over the profile radius. The proposed centerline potential function defines the likelihood of each point of the image being a centerline point. Both the medialness filter and the centerline potential function are multi-scale. The entire lumen segmentation is obtained from the radii extracted during the medialness response computation. Application to clinical data shows that the presented algorithm segments the coronary lumen with good accuracy and allows for subsequent assessment of the quantitative characteristics (e.g. diameter, curvature) of the vessels.
Extraction of liver volumetry based on blood vessel from the portal phase CT dataset
At the liver surgery planning stage, liver volumetry is essential for surgeons. The main problem in liver extraction is the wide variability of liver shapes and sizes. Since the hepatic blood vessel structure varies from person to person and covers the liver region, the present method uses that information to extract the liver in two stages. The first stage extracts the abdominal blood vessels in the form of hepatic and non-hepatic blood vessels. In the second stage, the extracted vessels are used to control the extraction of the liver region automatically. Contrast-enhanced CT datasets of 50 cases, acquired only at the portal phase, are used; these data include 30 abnormal livers. A reference for all cases is created through a comparison of the labeling results of two experts and correction of their inter-reader variability. Results of the proposed method agree with the reference at an average rate of 97.8%. Using the different metrics defined at the MICCAI liver segmentation workshop, the volume overlap error is 4.4%, the volume difference is 0.3%, the average symmetric distance is 0.7 mm, the root mean square symmetric distance is 0.8 mm, and the maximum distance is 15.8 mm. These results represent the average over all data and show improved accuracy compared to current liver segmentation methods. The method therefore appears promising for extracting liver volumetry across various shapes and sizes.
Segmentation of the pectoral muscle in breast MR images using structure tensor and deformable model
Myungeun Lee, Jong Hyo Kim
Recently, breast MR images have been used in a wider clinical area including diagnosis, treatment planning, and treatment response evaluation, which requires quantitative analysis and breast tissue segmentation. Although several methods have been proposed for segmenting MR images, robustly segmenting breast tissues from surrounding structures across a wide range of anatomical diversity remains challenging. Therefore, in this paper, we propose a practical and general-purpose approach for segmenting the pectoral muscle boundary based on the structure tensor and a deformable model. The segmentation workflow comprises four key steps: preprocessing, detection of the region of interest (ROI) within the breast region, segmentation of the pectoral muscle, and finally extraction and refinement of the pectoral muscle boundary. Experimental results show that the proposed method can segment the pectoral muscle robustly in diverse patient cases. In addition, the proposed method will enable quantitative analysis of various breast images.
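The structure tensor component of such a pipeline can be sketched as follows (a generic 2D structure tensor with orientation and coherence; the smoothing scale and how the tensor output feeds the deformable model are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_orientation(image, sigma=3.0):
    """Smoothed 2D structure tensor and its dominant orientation, which can be
    used to emphasize the elongated pectoral muscle boundary (illustrative)."""
    gx = sobel(image.astype(float), axis=1)
    gy = sobel(image.astype(float), axis=0)
    # tensor components averaged over a Gaussian neighborhood
    jxx = gaussian_filter(gx * gx, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    # orientation of the dominant eigenvector and a coherence measure
    orientation = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
    coherence = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2) / np.maximum(jxx + jyy, 1e-9)
    return orientation, coherence
```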
A new prostate segmentation approach using multispectral magnetic resonance imaging and a statistical pattern classifier
Bianca Maan, Ferdi van der Heijden, Jurgen J. Fütterer
Prostate segmentation is essential for calculating prostate volume, creating patient-specific prostate anatomical models, and image fusion. Automatic segmentation methods are preferable because manual segmentation is time-consuming and highly subjective. Most of the currently available segmentation methods use a priori knowledge of the prostate shape; however, there is a large variation in prostate shape between patients. Our approach uses multispectral magnetic resonance imaging (MRI) data, containing T1-, T2- and proton density (PD)-weighted images and the distance from the voxel to the centroid of the prostate, together with statistical pattern classifiers. We investigated the performance of a parametric and a non-parametric classification approach by applying a Bayesian-quadratic and a k-nearest-neighbor classifier, respectively. An annotated data set was made by manual labeling of the images, and the classifiers were trained and evaluated on it. The following results were obtained from three experiments. Firstly, using feature selection we showed that the average segmentation error rates are lowest when combining all three images and the distance feature with the k-nearest-neighbor classifier. Secondly, the confusion matrix showed that the k-nearest-neighbor classifier has the higher sensitivity. Finally, the prostate was segmented using both classifiers. The segmentation boundaries approach the prostate boundaries for most slices, although in some slices the segmentation result contained errors near the borders of the prostate. The current results show that segmenting the prostate using multispectral MRI data combined with a statistical classifier is a promising method.
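The feature construction and k-nearest-neighbor classification described here can be sketched with scikit-learn (a minimal illustration; `n_neighbors` and the variable names in the commented usage are assumptions):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def build_features(t1, t2, pd, centroid):
    """Stack per-voxel features: T1, T2, PD intensities and the distance of the
    voxel to the prostate centroid (the feature set described in the abstract)."""
    zz, yy, xx = np.indices(t1.shape)
    dist = np.sqrt((zz - centroid[0]) ** 2 +
                   (yy - centroid[1]) ** 2 +
                   (xx - centroid[2]) ** 2)
    return np.column_stack([t1.ravel(), t2.ravel(), pd.ravel(), dist.ravel()])

# Hypothetical usage: train on manually labeled voxels of one volume, then
# classify every voxel of a new multispectral volume.
# X_train, y_train = build_features(t1_a, t2_a, pd_a, c_a), labels_a.ravel()
# knn = KNeighborsClassifier(n_neighbors=7).fit(X_train, y_train)
# prostate_mask = knn.predict(build_features(t1_b, t2_b, pd_b, c_b)).reshape(t1_b.shape)
```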
Design of spectral filtering for tissue classification
Ajay Narayanan, Pratik Shah, Bipul Das
Tissue characterization from imaging studies is an integral part of clinical practice. We describe a spectral filter design for tissue separation in dual-energy CT scans obtained from a Gemstone Spectral Imaging scanner. It enables better 2D/3D visualization and tissue characterization in normal and pathological conditions. The major challenge in classifying tissues in conventional computed tomography (CT) is the x-ray attenuation proximity of multiple tissues at any given energy. The proposed method analyzes monochromatic images at different energy levels, which are derived from two scans obtained at low and high kVp through fast switching. Although materials have a distinct attenuation profile across different energies, tissue separation is not trivial because tissues are mixtures of different materials with a range of densities that varies across subjects. To address this problem, we define a spectral filtering that generates probability maps for each tissue in multi-energy space. The filter design incorporates variations in the tissue due to composition, density of individual constituents, and their mixing proportions. In addition, it also provides a framework to incorporate zero-mean Gaussian noise. We demonstrate the application of spectral filtering for bone-free vascular visualization and calcification characterization.
Robust left ventricular myocardium segmentation for multi-protocol MR
A. Groth, J. Weese, H. Lehmann
For a number of cardiac procedures, such as the treatment of ventricular tachycardia (VT), coronary artery disease (CAD) and heart failure (HF), both anatomical and vitality information about the left ventricular myocardium is required. To this end, two images for the anatomical and functional information, respectively, must be acquired and analyzed, e.g. using two different 3D MR protocols. To enable automatic analysis, a workflow has been proposed [1] which allows the vitality information extracted from the functional image data to be integrated into a patient-specific anatomical model generated from the anatomical image. However, in the proposed workflow the extraction of accurate vitality information from the functional image depends to a large extent on the accuracy of both the anatomical model and the mapping of the model to the functional image. In this paper we propose and evaluate methods for improving these two aspects. More specifically, on the one hand we aim to improve the segmentation of the often low-contrast left ventricular epicardium in the anatomical 3D MR images by introducing a patient-specific shape bias. On the other hand, we introduce a registration approach that facilitates the mapping of the anatomical model to images acquired by different protocols and modalities, such as functional 3D MR. The new methods are evaluated on clinical MR data, for which considerable improvements are achieved.
Supervised classification of brain tissues through local multi-scale texture analysis by coupling DIR and FLAIR MR sequences
Enea Poletti, Elisa Veronese, Massimiliano Calabrese, et al.
The automatic segmentation of brain tissues in magnetic resonance (MR) is usually performed on T1-weighted images, due to their high spatial resolution. The T1w sequence, however, has some major downsides when brain lesions are present: the altered appearance of diseased tissues causes errors in tissue classification. In order to overcome these drawbacks, we employed two different MR sequences: fluid attenuated inversion recovery (FLAIR) and double inversion recovery (DIR). The former highlights both gray matter (GM) and white matter (WM), the latter highlights GM alone. We propose here a supervised classification scheme that does not require any anatomical a priori information to identify the 3 classes, "GM", "WM", and "background". Features are extracted by means of a local multi-scale texture analysis, computed for each pixel of the DIR and FLAIR sequences. The 9 textures considered are average, standard deviation, kurtosis, entropy, contrast, correlation, energy, homogeneity, and skewness, evaluated on neighborhoods of 3×3, 5×5, and 7×7 pixels. Hence, the total number of features associated with a pixel is 56 (9 textures × 3 scales × 2 sequences + 2 original pixel values). The classifier employed is a Support Vector Machine with a Radial Basis Function kernel. From each of the 4 brain volumes evaluated, a DIR and a FLAIR slice have been selected and manually segmented by 2 expert neurologists, providing 1st and 2nd human reference observations which agree with an average accuracy of 99.03%. SVM performance has been assessed with a 4-fold cross-validation, yielding an average classification accuracy of 98.79%.
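A small sketch of the local multi-scale texture features is shown below; only the mean and standard deviation textures are implemented, with the remaining seven textures and the SVM training left out for brevity (the window handling and names are assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_mean_std(image, size):
    """Per-pixel mean and standard deviation over a size x size neighborhood."""
    img = image.astype(float)
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img ** 2, size)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    return mean, std

def texture_features(dir_slice, flair_slice, scales=(3, 5, 7)):
    """Build a per-pixel feature matrix from the two sequences at several
    scales (only two of the nine textures from the paper are shown)."""
    feats = [dir_slice.ravel(), flair_slice.ravel()]   # original pixel values
    for img in (dir_slice, flair_slice):
        for s in scales:
            m, sd = local_mean_std(img, s)
            feats += [m.ravel(), sd.ravel()]
    return np.column_stack(feats)

# The feature matrix can then be fed to an RBF-kernel SVM, e.g.
# sklearn.svm.SVC(kernel="rbf"), trained on the manually segmented slices.
```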
Fast automatic algorithm for bifurcation detection in vascular CTA scans
Matthias Brozio, Vladlena Gorbunova, Christian Godenschwager, et al.
Endovascular imaging aims at identifying vessels and their branches. Automatic vessel segmentation and bifurcation detection eases both clinical research and routine work. In this article a state-of-the-art bifurcation detection algorithm is developed and applied to vascular computed tomography angiography (CTA) scans to mark the common iliac artery and its branches, the internal and external iliac arteries. In contrast to other methods, our algorithm does not rely on a complete segmentation of a vessel in the 3D volume, but evaluates the cross-sections of the vessel slice by slice. Candidates for vessels are obtained by thresholding, followed by 2D connected component labeling and prefiltering by size and position. The remaining candidates are connected in a squared-distance-weighted graph. The graph is traversed with Dijkstra's algorithm to obtain candidates for the arteries. We use another set of features considering the length and shape of the paths to determine the best candidate and detect the bifurcation. The method was tested on 119 datasets acquired with different CT scanners and varying protocols. Both easy-to-evaluate datasets with high resolution and no apparent clinical diseases and difficult ones with low resolution, major calcifications, stents or poor contrast between the vessel and surrounding tissue were included. The presented results are promising: in 75.7% of the cases the bifurcation was labeled correctly, and in 82.7% the common artery and one of its branches were assigned correctly. The computation time was on average 0.49 s ± 0.28 s, close to human interaction time, which makes the algorithm applicable for time-critical applications.
Pulmonary lobe segmentation with level sets
Automatic segmentation of the separate human lung lobes is a crucial task in computer aided diagnostics and intervention planning, and required for example for determination of disease spreading or pulmonary parenchyma quantification. In this work, a novel approach for lobe segmentation based on multi-region level sets is presented. In a first step, interlobular fissures are detected using a supervised enhancement filter. The fissures are then used to compute a cost image, which is incorporated in the level set approach. By this, the segmentation is drawn to the fissures at places where structure information is present in the image. In areas with incomplete fissures (e.g. due to insufficient image quality or anatomical conditions) the smoothing term of the level sets applies and a closed continuation of the fissures is provided. The approach is tested on nine pulmonary CT scans. It is shown that incorporating the additional force term improves the segmentation significantly. On average, 83% of the left fissure is traced correctly; the right oblique and horizontal fissures are properly segmented to 76% and 48%, respectively.
Multi-level tree analysis of pulmonary artery/vein trees in non-contrast CT images
Zhiyun Gao, Randall W. Grout, Eric A. Hoffman, et al.
Diseases like pulmonary embolism and pulmonary hypertension are associated with vascular dystrophy. Identifying such pulmonary artery/vein (A/V) tree dystrophy in terms of quantitative measures via CT imaging significantly facilitates early detection of disease or a treatment monitoring process. A tree structure, consisting of nodes and connected arcs, linked to the volumetric representation allows multi-level geometric and volumetric analysis of A/V trees. Here, a new theory and method is presented to generate a multi-level A/V tree representation of volumetric data and to compute quantitative measures of A/V tree geometry and topology at various tree hierarchies. The new method is primarily designed on arc skeleton computation followed by a tree-construction-based topologic and geometric analysis of the skeleton. The method starts with a volumetric A/V representation as input and generates its topologic and multi-level volumetric tree representations along with different multi-level morphometric measures. New recursive merging and pruning algorithms are introduced to detect bad junctions and noisy branches often associated with digital geometric and topologic analysis. Also, a new notion of shortest axial path is introduced to improve the skeletal arc joining two junctions. The accuracy of the multi-level tree analysis algorithm has been evaluated using computer-generated phantoms and pulmonary CT images of a pig vessel cast phantom, while the reproducibility of the method is evaluated using multi-user A/V separation of in vivo contrast-enhanced CT images of a pig lung at different respiratory volumes.
Binary image representation by contour trees
Dogu Baran Aydogan, Jari Hyttinen
There is a growing need in medical image processing to analyze segmented objects. In this study we are interested in analyzing morphological properties of complex structures such as the trabecular bone. Although various shape description approaches have been proposed in the literature, there is no adequate method to represent the morphology of foreground object(s) with respect to the background. In this article, we propose a way of representing binary images of any dimension using graphs that emphasize the connectivity of level sets to the foreground and background. We start by calculating the Euclidean distance transform (EDT) to create a scalar field. Then the contour tree of this scalar field is calculated using a modified version of the algorithm proposed by Carr. Contour trees are mostly used to visualize high-dimensional scalar fields, as they can display the critical points, i.e., local minima, maxima and saddle points; however, their use for representing complex shapes has not been studied. We demonstrate the use of our method on artificial 2D images having different topologies as well as 3D μ-CT images of two bone biopsies. We show that the application of contour trees to complex binary data proves particularly useful when interpreting pore networks at the micro-scale. Further work to quantify foreground and background interconnectivity using certain graph-theoretical methods is still under research.
An automated multi-modal object analysis approach to coronary calcium scoring of adaptive heart isolated MSCT images
Jing Wu, Gordon Ferns, John Giles, et al.
Inter- and intra-observer variability is a problem often faced when an expert or observer is tasked with assessing the severity of a disease. This issue is keenly felt in coronary calcium scoring of patients suffering from atherosclerosis, where in clinical practice the observer must identify firstly the presence and then the location of candidate calcified plaques found within the coronary arteries that may prevent oxygenated blood flow to the heart muscle. This can be challenging for a human observer, as it is difficult to differentiate calcified plaques located in the coronary arteries from those found in surrounding anatomy such as the mitral valve or pericardium. The inclusion of false positive or exclusion of true positive calcified plaques alters the patient's calcium score incorrectly, thus leading to the possibility of incorrect treatment prescription. In addition to the benefits to scoring accuracy, fast, low-dose multi-slice CT imaging can acquire the entire heart within a single breath hold, exposing the patient to a lower radiation dose, which is beneficial for a progressive disease such as atherosclerosis where multiple scans may be required. Presented here is a fully automated method for calcium scoring using both the traditional Agatston method and the Volume scoring method. Elimination of unwanted regions of the cardiac image slices such as lungs, ribs, and vertebrae is carried out using adaptive heart isolation; such regions cannot contain calcified plaques but can be of a similar intensity, and their removal aids detection. Removal of both the ascending and descending aortas, as they contain clinically insignificant plaques, is necessary before the final calcium scores are calculated and examined against ground truth scores averaged from three expert observers. The results presented here are intended to show the requirement and feasibility for an automated scoring method that reduces the subjectivity and reproducibility error inherent in manual clinical calcium scoring.
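The Agatston component of the scoring can be sketched per slice as follows (this is the standard Agatston definition with the usual density weights; the paper's surrounding heart-isolation and aorta-removal steps are not shown, and the trailing usage comment is hypothetical):

```python
import numpy as np
from scipy import ndimage as ndi

def agatston_score(slice_hu, pixel_area_mm2, threshold=130):
    """Per-slice Agatston score: each connected lesion above the threshold
    contributes its area (mm^2) times a weight based on its peak attenuation."""
    mask = slice_hu >= threshold
    labels, n = ndi.label(mask)
    score = 0.0
    for lesion in range(1, n + 1):
        region = labels == lesion
        peak = slice_hu[region].max()
        if peak >= 400:
            weight = 4
        elif peak >= 300:
            weight = 3
        elif peak >= 200:
            weight = 2
        else:
            weight = 1            # 130-199 HU
        score += region.sum() * pixel_area_mm2 * weight
    return score

# total_score = sum(agatston_score(s, pixel_area) for s in calcium_slices)  # hypothetical loop
```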
Three dimensional multi-scale visual words for texture-based cerebellum segmentation
Segmentation of the various parts of the brain is a challenging area in medical imaging and it is a prerequisite for many image analysis tasks useful for clinical research. Advances have been made in generating brain image templates that can be registered to automatically segment regions of interest in the human brain. However, these methods may fail with some subjects if there is a significant shape distortion or difference from the proposed models. This is also the case for newborns, where the developing brain strongly differs from adult magnetic resonance imaging (MRI) templates. In this article, a texture-based cerebellum segmentation method is described. The algorithm presented does not use any prior spatial knowledge to segment the MRI images. Instead, the system learns the texture features by means of multi-scale filtering and visual-words feature aggregation. Visual words are a commonly used technique in image retrieval. Instead of using visual features directly, the features of specific regions are modeled (clustered) into groups of discriminative features. This means that the final feature space can be reduced in size and also that the visual words in local regions are really discriminative for the given data set. The system is currently trained and tested with a dataset of 18 adult brain MRIs. An extension to newborn brain images is foreseen, as this could highlight the advantages of the proposed technique. Results show that the use of texture features can be valuable for the task described and can lead to good results. The use of visual words can potentially improve the robustness of existing shape-based techniques for cases with significant shape distortion or other differences from the models. As visual-words-based techniques do not assume any prior knowledge, such techniques could be used for other types of segmentation as well, using a large variety of basic visual features.
Finding seeds for segmentation using statistical fusion
Image labeling is an essential step for quantitative analysis of medical images. Many image labeling algorithms require seed identification in order to initialize segmentation algorithms such as region growing, graph cuts, and the random walker. Seeds are usually placed manually by human raters, which makes these algorithms semi-automatic and can be prohibitive for very large datasets. In this paper an automatic algorithm for placing seeds using multi-atlas registration and statistical fusion is proposed. Atlases containing the centers of mass of a collection of neuroanatomical objects are deformably registered in a training set to determine where these centers of mass map to after the labels are transformed by registration. The biases of these transformations are determined and incorporated in a continuous form of Simultaneous Truth And Performance Level Estimation (STAPLE) fusion, thereby improving the estimates (on average) over a single-registration strategy that does not incorporate bias or fusion. We evaluate this technique using real 3D brain MR image atlases and demonstrate its efficacy in correcting the data bias and reducing the fusion error.
Watershed-based segmentation of the corpus callosum in diffusion MRI
Pedro Freitas, Leticia Rittner, Simone Appenzeller, et al.
The corpus callosum (CC) is one of the most important white matter structures of the brain, interconnecting the two cerebral hemispheres, and is related to several neurodegenerative diseases. Since segmentation is usually the first step for studies in this structure, and manual volumetric segmentation is a very time-consuming task, it is important to have a robust automatic method for CC segmentation. We propose here an approach for fully automatic 3D segmentation of the CC in the magnetic resonance diffusion tensor images. The method uses the watershed transform and is performed on the fractional anisotropy (FA) map weighted by the projection of the principal eigenvector in the left-right direction. The section of the CC in the midsagittal slice is used as seed for the volumetric segmentation. Experiments with real diffusion MRI data showed that the proposed method is able to quickly segment the CC without any user intervention, with great results when compared to manual segmentation. Since it is simple, fast and does not require parameter settings, the proposed method is well suited for clinical applications.
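A minimal sketch of the watershed on the eigenvector-weighted FA map is shown below (seed handling here is manual and simplified; the paper derives the seed automatically from the midsagittal CC section, and the variable names are assumptions):

```python
import numpy as np
from skimage.segmentation import watershed

def segment_cc(fa, e1_lr, cc_seed, background_seed):
    """Watershed segmentation of the corpus callosum on the FA map weighted by
    the left-right component of the principal eigenvector (illustrative only).

    fa              : 3D fractional anisotropy map
    e1_lr           : 3D left-right component of the principal eigenvector
    cc_seed         : index tuple of a voxel (or voxels) inside the CC
    background_seed : index tuple of a voxel (or voxels) outside the CC
    """
    weighted = fa * np.abs(e1_lr)            # CC voxels have high FA and L-R diffusion
    markers = np.zeros(fa.shape, dtype=int)
    markers[cc_seed] = 1                     # inside the CC (midsagittal section)
    markers[background_seed] = 2             # outside
    labels = watershed(-weighted, markers)   # flood from the brightest regions
    return labels == 1
```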
Computational intelligence techniques for identifying the pectoral muscle region in mammograms
H. Erin Rickard, Ruben G. Villao, Adel S. Elmaghraby
Segmentation of the pectoral muscle is an imperative task in mammographic image analysis. The pectoral edge is specifically examined by radiologists for abnormal axillary lymph nodes, serves as one of the axes in 3-dimensional reconstructions, and is one of the fundamental landmarks in mammogram registration and comparison. However, this region interferes with intensity-based image processing methods and may bias cancer detection algorithms. The purpose of this study was to develop and evaluate computational intelligence techniques for identifying the pectoral muscle region in medio-lateral oblique (MLO) view mammograms. After removal of the background region, the mammograms were segmented using a K-clustered self-organizing map (SOM). Morphological operations were then applied to obtain an initial estimate of the pectoral muscle region. Shape-based analysis determined which of the K estimates to use in the final segmentation. The algorithm has been applied to 250 MLO-view Lumisys mammograms from the Digital Database for Screening Mammography (DDSM). Upon examination, it was discovered that three of the original mammograms did not contain the pectoral muscle and one contained a clear defect. Of the 246 remaining, the pectoral muscle region was considered to be successfully identified in 95.94% of cases. The results provide a compelling argument for the effectiveness of computational intelligence techniques for identifying the pectoral muscle region in MLO-view mammograms.
GrowCut-based fast tumor segmentation for 3D magnetic resonance images
Toshihiko Yamasaki, Tsuhan Chen, Masakazu Yagi, et al.
This paper presents a very fast segmentation algorithm for 3D medical image slices based on the region-growing segmentation method called GrowCut. By combining four contributions, namely hierarchical segmentation, voxel value quantization, a skipping method, and parallelization, the computational time is drastically reduced from 507 seconds to 9.2-14.6 seconds on average for tumor segmentation of 256 x 256 x 200 MRIs.
Automatic detection of significant and subtle arterial lesions from coronary CT angiography
Dongwoo Kang, Piotr Slomka, Ryo Nakazato, et al.
Visual analysis of three-dimensional (3D) Coronary Computed Tomography Angiography (CCTA) remains challenging due to the large number of image slices and the tortuous character of the vessels. We aimed to develop an accurate, automated algorithm for the detection of significant and subtle coronary artery lesions compared to expert interpretation. Our knowledge-based automated algorithm consists of centerline extraction, which also classifies the 3 main coronary arteries and the small branches of each main coronary artery, vessel linearization, lumen segmentation with scan-specific lumen attenuation ranges, and lesion location detection. The presence and location of lesions are identified using a multi-pass algorithm which considers expected or "normal" vessel tapering and luminal stenosis derived from the segmented vessel. The expected luminal diameter is derived from the scan by automated piecewise least-squares line fitting over the proximal and mid segments (67%) of the coronary artery, taking small branch locations into account. We applied this algorithm to 21 CCTA patient datasets acquired with dual-source CT, where 7 datasets had 17 lesions with stenosis greater than or equal to 25%. The reference standard was provided by visual and quantitative identification of lesions with any ≥25% stenosis by an experienced expert reader. Our algorithm identified 16 of the 17 lesions confirmed by the expert. There were 16 additional lesions detected (average 0.13/segment); 6 of these 16 were actual lesions with <25% stenosis. On a per-segment basis, sensitivity was 94%, specificity was 86% and accuracy was 87%. Our algorithm shows promising results in the high-sensitivity detection and localization of significant and subtle CCTA arterial lesions.
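The expected-diameter estimation by piecewise least-squares line fitting over the proximal and mid vessel can be sketched as follows (the number of pieces and the stenosis formula in the trailing comment are assumptions, not the paper's exact branch-aware implementation):

```python
import numpy as np

def expected_diameters(centerline_pos_mm, diameters_mm, n_pieces=3, proximal_frac=0.67):
    """Estimate the expected ('normal') tapering lumen diameter by piecewise
    least-squares line fitting over the proximal and mid vessel segments."""
    n = int(len(diameters_mm) * proximal_frac)     # proximal and mid segments only
    pos, dia = centerline_pos_mm[:n], diameters_mm[:n]
    expected = np.empty(n)
    for piece in np.array_split(np.arange(n), n_pieces):
        slope, intercept = np.polyfit(pos[piece], dia[piece], deg=1)
        expected[piece] = slope * pos[piece] + intercept
    return expected

# Percent stenosis along the vessel relative to the expected diameter:
# stenosis = 100.0 * (expected - measured[:len(expected)]) / expected
```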
Automatic segmentation of the liver using multi-planar anatomy and deformable surface model in abdominal contrast-enhanced CT images
Yujin Jang, Helen Hong, Jin Wook Chung, et al.
We propose an effective technique for the extraction of the liver boundary based on multi-planar anatomy and a deformable surface model in abdominal contrast-enhanced CT images. Our method is composed of four main steps. First, to extract an optimal volume circumscribing the liver, the lower and side boundaries are defined by positional information of the pelvis and ribs, and an upper boundary is defined by separating the lungs and heart from the CT images. Second, to extract an initial liver volume, the optimal liver volume is smoothed by anisotropic diffusion filtering and segmented using an adaptively selected threshold value. Third, to remove neighboring organs from the initial liver volume, morphological opening and connected component labeling are applied to multiple planes. Finally, to refine the liver boundaries, a deformable surface model is applied to the posterior liver surface and the left lobe missed in the previous step. Then, a probability summation map is generated by calculating regional information of the segmented liver in the coronal plane, which is used for restoring inaccurate liver boundaries. Experimental results show that our segmentation method can accurately extract liver boundaries without leakage into neighboring organs in spite of various liver shapes and ambiguous boundaries.
A novel approach for three dimensional dendrite spine segmentation and classification
Dendritic spines are small, bulbous cellular compartments that carry synapses. Biologists have been studying the biochemical and genetic pathways by examining the morphological changes of the dendritic spines at the intracellular level. Automatic dendritic spine detection from high resolution microscopic images is an important step for such morphological studies. In this paper, a novel approach to automated dendritic spine detection is proposed based on a nonlinear degeneration model. Dendritic spines are recognized as small objects with variable shapes attached to dendritic backbones. We explore the problem of dendritic spine detection from a different angle, i.e., the nonlinear degeneration equation (NDE) is utilized to enhance the morphological differences between the dendrite and spines. Using NDE, we simulated degeneration for dendritic spine detection. Based on the morphological features, the shrinking rate on dendrite pixels is different from that on spines, so that spines can be detected and segmented after degeneration simulation. Then, to separate spines into different types, Gaussian curvatures were employed, and the biomimetic pattern recognition theory was applied for spine classification. In the experiments, we compared quantitatively the spine detection accuracy with previous methods, and the results showed the accuracy and superiority of our methods.
Segmentation algorithm of colon based on multi-slice CT colonography
Yizhong Hu, Mohammed Shabbir Ahamed, Eiji Takahashi, et al.
CT colonography is a radiology test that examines the large intestine (colon). It is used to screen for polyps or cancers of the colon. CT colonography is safe and reliable, and can be used if people are too sick to undergo other forms of colon cancer screening. In our research, we propose a method for automatic segmentation of the colon from abdominal computed tomography (CT) images. Our multistage detection method extracts the colon and splits it into different parts according to colon anatomy information. We found that among the five segmented parts of the colon, the sigmoid (20%) and rectum (50%) are more prone to polyps and masses than the other three parts. Our research focuses on examining the colon through individual diagnosis of the sigmoid and rectum. We believe this would enable rapid and easy diagnosis of the colon at an early stage, help doctors analyze the correct position of each part, and make colorectal cancer easier to detect.
Automatic segmentation and analysis of fibrin networks in 3D confocal microscopy images
Xiaomin Liu, Jian Mu, Kellie R. Machlus, et al.
Fibrin networks are a major component of blood clots that provides structural support to the formation of growing clots. Abnormal fibrin networks that are too rigid or too unstable can promote cardiovascular problems and/or bleeding. However, current biological studies of fibrin networks rarely perform quantitative analysis of their structural properties (e.g., the density of branch points) due to the massive branching structures of the networks. In this paper, we present a new approach for segmenting and analyzing fibrin networks in 3D confocal microscopy images. We first identify the target fibrin network by applying the 3D region growing method with global thresholding. We then produce a one-voxel wide centerline for each fiber segment along which the branch points and other structural information of the network can be obtained. Branch points are identified by a novel approach based on the outer medial axis. Cells within the fibrin network are segmented by a new algorithm that combines cluster detection and surface reconstruction based on the α-shape approach. Our algorithm has been evaluated on computer phantom images of fibrin networks for identifying branch points. Experiments on z-stack images of different types of fibrin networks yielded results that are consistent with biological observations.
Placental fetal stem segmentation in a sequence of histology images
Prashant Athavale, Luminita A. Vese
Recent research in perinatal pathology argues that analyzing properties of the placenta may reveal important information on how certain diseases progress. One important property is the structure of the placental fetal stems. Analysis of the fetal stems in a placenta could be useful in the study and diagnosis of some diseases like autism. To study the fetal stem structure effectively, we need to automatically and accurately track fetal stems through a sequence of digitized hematoxylin and eosin (H&E) stained histology slides. There are many problems in successfully achieving this goal, including the large size of the images, misalignment of consecutive H&E slides, unpredictable inaccuracies of manual tracing, and very complicated texture patterns of various tissue types without clear characteristics. In this paper we propose a novel algorithm to achieve automatic tracing of the fetal stem in a sequence of H&E images, based on an inaccurate manual segmentation of a fetal stem in one of the images. This algorithm combines global affine registration, local non-affine registration and a novel 'dynamic' version of the active contours model without edges. We first use global affine image registration of all the images based on displacement, scaling and rotation. This gives us the approximate location of the corresponding fetal stem in the image that needs to be traced. We then use the affine registration algorithm "locally" near this location. At this point, we use a fast non-affine registration based on an L2-similarity measure and diffusion regularization to get a better location of the fetal stem. Finally, we have to take into account inaccuracies in the initial tracing. This is achieved through a novel dynamic version of the active contours model without edges, where the coefficients of the fitting terms are computed iteratively to ensure that we obtain a unique stem in the segmentation. The segmentation thus obtained can then be used as an initial guess to obtain the segmentation in the rest of the images in the sequence. This constitutes an important step in the extraction and understanding of the fetal stem vasculature.
Fully automated 3D prostate central gland segmentation in MR images: a LOGISMOS based approach
One widely accepted classification of the prostate divides it into a central gland (CG) and a peripheral zone (PZ). In some clinical applications, separating the CG and PZ from the whole prostate is useful. For instance, in prostate cancer detection, the radiologist wants to know in which zone the cancer occurs. Another application is multiparametric MR tissue characterization. In prostate T2 MR images, due to the high intensity variation between CG and PZ, automated differentiation of CG and PZ is difficult. Previously, we developed an automated prostate boundary segmentation system, which was tested on large datasets and showed good performance. Using the results of the pre-segmented prostate boundary, in this paper we propose an automated CG segmentation algorithm based on Layered Optimal Graph Image Segmentation of Multiple Objects and Surfaces (LOGISMOS). The designed LOGISMOS model incorporates both shape and topology information during deformation. We generate the graph cost by training classifiers and use a coarse-to-fine search. The LOGISMOS framework guarantees a solution that is optimal with respect to the cost function and shape constraints. A five-fold cross-validation approach was applied to a training dataset containing 261 images to optimize the system performance and compare it with a voxel-classification-based reference approach. After the best parameter settings were found, the system was tested on a dataset containing another 261 images. The mean DSC of 0.81 for the test set indicates that our approach is promising for automated CG segmentation. Running time for the system is about 15 seconds.
A unifying graph-cut image segmentation framework: algorithms it encompasses and equivalences among them
We present a general graph-cut segmentation framework GGC, in which the delineated objects returned by the algorithms optimize the energy functions associated with the ℓp norm, 1 ≤ p ≤ ∞. Two classes of well-known algorithms belong to GGC: the standard graph cut GC (such as the min-cut/max-flow algorithm) and the relative fuzzy connectedness algorithms RFC (including iterative RFC, IRFC). The norm-based description of GGC provides a more elegant and mathematically better recognized framework for our earlier results from [18, 19]. Moreover, it allows a precise theoretical comparison of GGC-representable algorithms with the algorithms discussed in a recent paper [22] (min-cut/max-flow graph cut, random walker, shortest path/geodesic, Voronoi diagram, power watershed/shortest path forest), which optimize, via ℓp norms, the intermediate segmentation step (the labeling of scene voxels), but for which the final object need not optimize the used ℓp energy function. In fact, the comparison of the GGC-representable algorithms with those encompassed in the framework described in [22] constitutes the main contribution of this work.
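For orientation, the family of ℓp energies referred to above can be written as follows (the notation is ours, not necessarily that of the paper): for a binary labeling x of the scene voxels and edge weights w_e,

```latex
\varepsilon_p(x) =
\begin{cases}
\displaystyle\Bigl( \sum_{e=\{u,v\}} w_e^{\,p}\, \lvert x(u)-x(v) \rvert^{p} \Bigr)^{1/p}, & 1 \le p < \infty,\\[8pt]
\displaystyle\max_{e=\{u,v\}} \, w_e\, \lvert x(u)-x(v) \rvert, & p = \infty.
\end{cases}
```

In this family, the ℓ1 case is the familiar sum-of-boundary-weights cut energy minimized by min-cut/max-flow, while the ℓ∞ case is a bottleneck (maximum-edge) energy of the kind optimized by fuzzy connectedness; the precise correspondences used in the paper should be taken from the paper itself.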
Automatic 3D segmentation of the kidney in MR images using wavelet feature extraction and probability shape model
Hamed Akbari, Baowei Fei
Numerical estimation of the size of the kidney is useful in evaluating conditions of the kidney, especially, when serial MR imaging is performed to evaluate the kidney function. This paper presents a new method for automatic segmentation of the kidney in three-dimensional (3D) MR images, by extracting texture features and statistical matching of geometrical shape of the kidney. A set of Wavelet-based support vector machines (W-SVMs) is trained on the MR images. The W-SVMs capture texture priors of MRI for classification of the kidney and non-kidney tissues in different zones around the kidney boundary. In the segmentation procedure, these W-SVMs are trained to tentatively label each voxel around the kidney model as a kidney or non-kidney voxel by texture matching. A probability kidney model is created using 10 segmented MRI data. The model is initially localized based on the intensity profiles in three directions. The weight functions are defined for each labeled voxel for each Wavelet-based, intensity-based, and model-based label. Consequently, each voxel has three labels and three weights for the Wavelet feature, intensity, and probability model. Using a 3D edge detection method, the model is re-localized and the segmented kidney is modified based on a region growing method in the model region. The probability model is re-localized based on the results and this loop continues until the segmentation converges. Experimental results with mouse MRI data show the good performance of the proposed method in segmenting the kidney in MR images.
Automatic organ segmentation on torso CT images by using content-based image retrieval
Xiangrong Zhou, Atsuto Watanabe, Xinxin Zhou, et al.
This paper presents a fast and robust segmentation scheme that automatically identifies and extracts a massive-organ region on torso CT images. In contrast to the conventional algorithms that are designed empirically for segmenting a specific organ based on traditional image processing techniques, the proposed scheme uses a fully data-driven approach to accomplish a universal solution for segmenting the different massive-organ regions on CT images. Our scheme includes three processing steps: machine-learning-based organ localization, content-based image (reference) retrieval, and atlas-based organ segmentation techniques. We applied this scheme to automatic segmentations of the heart, liver, spleen, and left and right kidney regions on non-contrast CT images, which are still difficult tasks for traditional segmentation algorithms. The segmentation results of these organs are compared with ground truth that was manually identified by a medical expert. The Jaccard similarity coefficient between the ground truth and the automated segmentation result was centered on 67% for the heart, 81% for the liver, 78% for the spleen, 75% for the left kidney, and 77% for the right kidney. The usefulness of our proposed scheme was confirmed.
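For reference, the Jaccard similarity coefficient reported above can be computed from two binary masks as in the generic sketch below; this is illustration only, not the authors' evaluation code.

```python
import numpy as np

def jaccard(auto_mask, truth_mask):
    """Jaccard coefficient |A ∩ B| / |A ∪ B| between two binary segmentations."""
    a = np.asarray(auto_mask, dtype=bool)
    b = np.asarray(truth_mask, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return intersection / union if union > 0 else 1.0
```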
An improved fuzzy c-means algorithm for unbalanced sized clusters
Shuguo Gu, Jingjing Liu, Qingguo Xie, et al.
In this paper, we propose an improved fuzzy c-means (FCM) algorithm based on cluster height information to deal with the sensitivity of FCM to unbalanced cluster sizes. Cluster size sensitivity is a major drawback of FCM, which tends to balance the cluster sizes during iteration, so the center of a smaller cluster might be drawn toward the adjacent larger one, leading to poor classification. To overcome this problem, cluster height information is introduced into the distance function to adjust the conventional Euclidean distance and thus control the effect of cluster size differences on classification. Experimental results demonstrate that our algorithm can obtain good clustering results despite large size differences, whereas traditional FCM does not work well in such cases. The improved FCM has shown its potential for extracting small clusters, especially in medical image segmentation.
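A minimal sketch of standard FCM on 1D intensities is shown below, with an optional hook (`height`) at the point where a cluster-height-based correction of the Euclidean distance, as proposed above, could be inserted; the exact form of that correction is not given in the abstract, so the hook is left unspecified.

```python
import numpy as np

def fcm(x, n_clusters=2, m=2.0, n_iter=100, height=None, rng=None):
    """Standard fuzzy c-means with an optional distance-adjustment hook."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float).ravel()
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0)                                    # fuzzy memberships sum to 1 per sample
    for _ in range(n_iter):
        um = u ** m
        centers = um @ x / um.sum(axis=1)                 # membership-weighted cluster centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12 # conventional (Euclidean) distances
        if height is not None:                            # hypothetical hook for the height-adjusted distance
            d = height(d, u)
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=0)                                # membership update
    return centers, u
```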
Graph representation of hepatic vessel based on centerline extraction and junction detection
Xing Zhang, Jie Tian, Kexin Deng, et al.
In the area of computer-aided diagnosis (CAD), segmentation and analysis of the hepatic vessels is a prerequisite for hepatic disease diagnosis and surgery planning. For liver surgery planning, it is crucial to provide the surgeon with a patient-individual three-dimensional representation of the liver along with its vasculature and lesions. The representation allows an exploration of the vascular anatomy and the measurement of vessel diameters, followed by intra-patient registration, as well as the analysis of the shape and volume of vascular territories. In this paper, we present an approach for generating a hepatic vessel graph based on centerline extraction and junction detection. The proposed approach involves the following concepts and methods: 1) Flux driven automatic centerline extraction; 2) Junction detection on the centerline using hollow sphere filtering; 3) Graph representation of the hepatic vessels based on the centerlines and junctions. The approach is evaluated on contrast-enhanced liver CT datasets to demonstrate its applicability and effectiveness.
Vessel centerline extraction in phase-contrast MR images using vector flow information
Yoo-Jin Jeong, Sebastian Ley, Rüdiger Dillmann, et al.
To obtain hemodynamically relevant parameters in cardiovascular disease, velocity-encoded phase-contrast magnetic resonance imaging (PC-MRI) is used for the non-invasive measurement of blood flow in terms of 3D velocity fields. During the segmentation of the vessel lumen in those datasets, conventional segmentation methods often fail due to reduced image quality. In this paper we present a method for the centerline extraction of great vessels in PC-MR images using additional features extracted from vector flow information. The proposed algorithm can be divided into the following steps: the propagation along the vessel course by using streamlines and the largest eigenvector, the radial search for the vessel boundary, the determination of the center position in the cross-sectional plane of the vessel, and the adjustment of the propagation step size subject to the vessel curvature. This is done by using a combination of morphology and flow information: the Sobel-filtered and threshold-filtered images as morphologic features, as well as the coherence values of the flow vectors and the behaviour of the blood flow streamlines within the vessel and around its borders as flow features. The developed algorithm was evaluated on clinical PC-MRI datasets with encouraging results. The centerline points of the entire aorta as well as the corresponding border points were successfully extracted for 16 out of 17 examined datasets. For the detection of the vessel boundary, the features extracted from flow information were shown to yield more reliable results than the morphology features.
A fuzzy clustering vessel segmentation method incorporating line-direction information
Zhimin Wang, Wei Xiong, Weimin Huang, et al.
A data clustering based vessel segmentation method is proposed for automatic liver vasculature segmentation in CT images. It consists of a novel similarity measure which incorporates the spatial context, vesselness information and line-direction information in a unique way. By combining the line-direction information and spatial information into the data clustering process, the proposed method is able to take care of the fine details of the vessel tree and suppress the image noise and artifacts at the same time. The proposed algorithm has been evaluated on the real clinical contrast-enhanced CT images, and achieved excellent segmentation accuracy without any experimentally set parameters.
Posters: Shape
A framework for longitudinal data analysis via shape regression
James Fishbaugh, Stanley Durrleman, Joseph Piven, et al.
Traditional longitudinal analysis begins by extracting desired clinical measurements, such as volume or head circumference, from discrete imaging data. Typically, the continuous evolution of a scalar measurement is estimated by choosing a 1D regression model, such as kernel regression or fitting a polynomial of fixed degree. This type of analysis not only leads to separate models for each measurement, but there is no clear anatomical or biological interpretation to aid in the selection of the appropriate paradigm. In this paper, we propose a consistent framework for the analysis of longitudinal data by estimating the continuous evolution of shape over time as twice differentiable flows of deformations. In contrast to 1D regression models, one model is chosen to realistically capture the growth of anatomical structures. From the continuous evolution of shape, we can simply extract any clinical measurements of interest. We demonstrate on real anatomical surfaces that volume extracted from a continuous shape evolution is consistent with a 1D regression performed on the discrete measurements. We further show how the visualization of shape progression can aid in the search for significant measurements. Finally, we present an example on a shape complex of the brain (left hemisphere, right hemisphere, cerebellum) that demonstrates a potential clinical application for our framework.
3D reconstruction of the scapula from biplanar radiographs
P. Y. Lagacé, T. Cresson, N. Hagemeister, et al.
Access to 3D bone models is critical for applications ranging from pre-operative planning to biomechanics studies. This work presents a method for 3D reconstruction of the scapula from biplanar radiographs, which combines a parametric model approach with a Moving Least Squares (MLS) deformation technique. A parametric scapula model was created by fitting geometric primitives (with their descriptive parameters) to the CT reconstruction of a dry scapula. These geometric primitives were then used to define a set of handles which allow the user to control the as-rigid-as-possible deformation of the template model in real-time, until optimal correspondence between the actual X-ray images and the retro-projection of the deformed model is achieved. When applied to 10 dry scapulae, the presented method yielded reconstructions that were on average within 1 mm of the CT-derived model at scapula regions of interest. Morphological parameters such as the glenoid's dimensions and orientation were determined with errors of 1° and less than 1 mm, on average. This is of great interest as the current methods used in clinical practice, which are based on 2D CT, are subject to uncertainties of the order of 5° for glenoid version. The method is also of particular interest as it further reduces our dependence on CT for 3D reconstruction of bones and clinical parameter estimation.
A shape-based statistical method to retrieve 2D TRUS-MR slice correspondence for prostate biopsy
Jhimli Mitra, Abhilash Srikantha, Désiré Sidibé, et al.
This paper presents a method based on shape-context and statistical measures to match an interventional 2D Trans Rectal Ultrasound (TRUS) slice acquired during prostate biopsy to a 2D Magnetic Resonance (MR) slice of a pre-acquired prostate volume. Accurate biopsy tissue sampling requires translation of the MR slice information onto the TRUS guided biopsy slice. However, this translation or fusion requires knowledge of the spatial position of the TRUS slice, and this is only possible with the use of an electro-magnetic (EM) tracker attached to the TRUS probe. Since the use of an EM tracker is not common in clinical practice and 3D TRUS is not used during biopsy, we propose to perform an analysis based on shape and information theory to get as close as possible to the actual MR slice as validated by experts. The Bhattacharyya distance is used to find point correspondences between shape-context representations of the prostate contours. Thereafter, the Chi-square distance is used to find those MR slices in which the prostate closely matches that of the TRUS slice. Normalized Mutual Information (NMI) values of the TRUS slice with each of the axial MR slices are computed after rigid alignment, and subsequently a rule-based elimination combining the Chi-square distances and the NMI values leads to the required MR slice. We validated our method on TRUS axial slices of 15 patients, of which 11 results matched at least one expert's validation and the remaining 4 are at most one slice away from the expert validations.
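The two histogram distances mentioned above can be sketched as follows for discrete, normalized histograms (such as shape-context bins); this is generic illustration code, not the authors' implementation.

```python
import numpy as np

def bhattacharyya_distance(p, q, eps=1e-12):
    """-ln of the Bhattacharyya coefficient between two normalized histograms."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    p = p / p.sum(); q = q / q.sum()
    return -np.log(np.sum(np.sqrt(p * q)) + eps)

def chi_square_distance(p, q, eps=1e-12):
    """Chi-square histogram distance; 0 for identical histograms."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    p = p / p.sum(); q = q / q.sum()
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))
```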
Shape-constrained multi-atlas based segmentation with multichannel registration
Multi-atlas based segmentation methods have recently attracted much attention in medical image segmentation. The multi-atlas based segmentation methods typically consist of three steps: image registration, label propagation, and label fusion. Most of the recent studies are devoted to improving the label fusion step and adopt a typical image registration method for registering atlases to the target image. However, the existing registration methods may become unstable when poor image quality or high anatomical variance between registered image pairs is involved. In this paper, we propose an iterative image segmentation and registration procedure to simultaneously improve the registration and segmentation performance in the multi-atlas based segmentation framework. Particularly, a two-channel registration method is adopted, with one channel driven by appearance similarity between the atlas image and the target image and the other channel optimized by similarity between the atlas label and the segmentation of the target image. The image segmentation is performed by fusing labels of multiple atlases. The validation of our method on hippocampus segmentation of 30 subjects with MR images at both 1.5T and 3.0T field strengths has demonstrated that our method can significantly improve the segmentation performance with different fusion strategies and obtain segmentation results with Dice overlaps of 0.892±0.024 for 1.5T images and 0.902±0.022 for 3.0T images with respect to manual segmentations.
Automated detection of pain from facial expressions: a rule-based approach using AAM
Zhanli Chen, Rashid Ansari, Diana J. Wilkie
In this paper, we examine the problem of using video analysis to assess pain, an important problem especially for critically ill, non-communicative patients, and people with dementia. We propose and evaluate an automated method to detect the presence of pain manifested in patient videos using a unique and large collection of cancer patient videos captured in patient homes. The method is based on detecting pain-related facial action units defined in the Facial Action Coding System (FACS) that is widely used for objective assessment in pain analysis. In our research, a person-specific Active Appearance Model (AAM) based on Project-Out Inverse Compositional Method is trained for each patient individually for the modeling purpose. A flexible representation of the shape model is used in a rule-based method that is better suited than the more commonly used classifier-based methods for application to the cancer patient videos in which pain-related facial actions occur infrequently and more subtly. The rule-based method relies on the feature points that provide facial action cues and is extracted from the shape vertices of AAM, which have a natural correspondence to face muscular movement. In this paper, we investigate the detection of a commonly used set of pain-related action units in both the upper and lower face. Our detection results show good agreement with the results obtained by three trained FACS coders who independently reviewed and scored the action units in the cancer patient videos.
Posters: Image Enhancement
Tomographic reconstruction of Cerenkov photons in tissues through approximate message-passing
Jianghong Zhong, Jie Tian, Haixiao Liu, et al.
A solution with adjustable sparsity for the tomographic imaging of Cerenkov photons is presented in this work. The sparsity of the radionuclide distribution in tissues is an objective but unknown fact, and the inverse model of the qualitative data is an ill-posed problem. Based on optimization techniques, the uniqueness of the numerical solution to the ill-conditioned compact operator can be guaranteed by the use of sparse regularization with the approximate message-passing (AMP) method. After absorbing formulations with the AMP, we analyzed the behavior of the hard thresholding operator. Iterative numerical solutions were used to approximate the real light source by manually assuming the number of non-zero entries in the solution. This modified AMP algorithm was applied in numerical simulations and physical experiments with 2-[18F]fluoro-2-deoxy-D-glucose. Experimental results indicated that the proposed method is a low-complexity iterative thresholding algorithm for reconstructing a 3D sparse distribution from a small set of optical measurements.
Quantization of reconstruction error with an interval-based algorithm: an experimental comparison
A. Hassoun, O. Strauss
SPECT image based diagnosis generally consists of comparing the reconstructed activities within two regions of interest. Due to noise in the measured activities, this comparison is subject to instability, mainly because both the statistical nature and the level of the noise in the reconstructed activities are unknown. In this paper, we experimentally show that an interval-valued extension of the classical MLEM algorithm is efficient for estimating this noise level. The experimental setting consists of simulating the acquisition of a phantom composed of three zones having the same shape but different levels of activity. The levels are chosen to simulate usual medical image conditions. We evaluate the ability of the interval-valued reconstruction to quantify the noise level by testing whether or not it allows the association of two zones having the same activity and the differentiation between two zones having different activities. Our experiment shows that the error quantification truly reflects the difficulty in differentiating two zones having very close activity levels. Indeed, the method allows a reliable association of two zones having the same activity level, whatever the noise conditions. However, the possibility of differentiating two zones having different levels of activity depends on the signal-to-noise ratio.
Blind local noise estimation for medical images reconstructed from rapid acquisition
Xunyu Pan, Xing Zhang, Siwei Lyu
Developments in rapid acquisition techniques and reconstruction algorithms, such as sensitivity encoding (SENSE) for MR images and fan-beam filtered backprojection (fFBP) for CT images, have seen wide application in medical imaging in recent years. Nevertheless, such techniques introduce spatially varying noise levels in the reconstructed medical images that may degrade the image quality and hinder subsequent diagnostic inspection. Though this may be alleviated with multiple scanned images or the sensitivity profiles of the imaging device, these pieces of information are typically unavailable in clinical practice. In this work, we describe a novel local noise level estimation technique based on the near constancy of the kurtosis of medical images in the band-pass filtered domain. This technique can effectively estimate noise levels in the pixel domain and recover the noise map for reconstructed medical images with a nonuniform noise distribution. The advantage of this method is that it requires no prior knowledge of the imaging devices and can be applied when only one single medical image is available. We report experiments that demonstrate the effectiveness of the proposed method in estimating the local noise levels for medical images quantitatively and qualitatively, and compare its estimation performance to another recently developed blind noise estimation approach. Finally, we also evaluate the practical denoising performance of our noise estimation algorithm on medical images when it is used as a front-end to a denoiser based on principal component analysis with local pixel grouping (LPG-PCA).
Optimisation of reconstruction for the registration of CT liver perfusion sequences
B. Romain, V. Letort, O. Lucidarme, et al.
Objective. CT abdominal perfusion is frequently used to evaluate tumor evolution when patients are undergoing antiangiogenic therapy. Parameters depending on the longer-term dynamics of the diffusion of the contrast medium (e.g. permeability) could help in assessing the treatment efficacy. To this end, dynamic image sequences are obtained while patients breathe freely. Prior to any analysis, one needs to compensate for the respiratory motion. The goal of our study is to optimize the CT reconstruction parameters (reconstruction filter, thickness of image volumes) for our registration method. We also aim at proposing relevant criteria for quantifying the registration quality. Methods. Registration is computed in 4 steps: z-global rigid registration, local refinements with multiresolution block-matching, regularization and warping. Two new criteria are defined to evaluate the quality of registration: one for spatial evaluation and the other for temporal evaluation. Results. The two measures decrease after registration (58% and 10% average decrease for the best reconstruction parameters for the spatial and temporal criteria, respectively), which is consistent with visual inspection of the images. They are therefore used to determine the best combination of reconstruction parameters.
Image fusion in x-ray differential phase-contrast imaging
W. Haas, M. Polyanskaya, F. Bayer, et al.
Phase-contrast imaging is a novel modality in the field of medical X-ray imaging. The pioneering method is grating-based interferometry, which imposes no special requirements on the X-ray source or object size. Furthermore, it provides three different types of information about an investigated object simultaneously: absorption, differential phase-contrast and dark-field images. Differential phase-contrast and dark-field images represent completely new information which has not yet been investigated and studied in the context of medical imaging. In order to introduce phase-contrast imaging as a new modality into the medical environment, the resulting information about the object has to be correctly interpreted. Since the three output images reflect different properties of the same object, the main challenge is to combine and visualize these data in such a way that the information overload is diminished and the complexity of its interpretation is reduced. This paper presents an intuitive image fusion approach for operating on grating-based phase-contrast images. It combines the information of the three different images and provides a single image. The approach is implemented in a fusion framework which is aimed at supporting physicians in study and analysis. The framework provides the user with an intuitive graphical user interface for controlling the fusion process. The example given in this work shows the functionality of the proposed method and the great potential of phase-contrast imaging in medical practice.
Super-resolution reconstruction in MRI: better images faster?
Esben Plenge, Dirk H. J. Poot, Monique Bernsen, et al.
Improving the resolution in magnetic resonance imaging (MRI) is always done at the expense of either the signal-to-noise ratio (SNR) or the acquisition time. This study investigates whether so-called super-resolution reconstruction (SRR) is an advantageous alternative to direct high-resolution (HR) acquisition in terms of the SNR and acquisition time trade-offs. An experimental framework was designed to accommodate the comparison of SRR images with direct high-resolution acquisitions with respect to these trade-offs. The framework consisted, on one side, of an image acquisition scheme, based on theoretical relations between resolution, SNR, and acquisition time, and, on the other side, of a protocol for reconstructing SRR images from a varying number of acquired low-resolution (LR) images. The quantitative experiments involved a physical phantom containing structures of known dimensions. Images reconstructed by three SRR methods, one based on iterative back-projection and two on regularized least squares, were quantitatively and qualitatively compared with direct HR acquisitions. To visually validate the quantitative evaluations, qualitative experiments were performed, in which images of three different subjects (a phantom, an ex-vivo rat knee, and a post-mortem mouse) were acquired with different MRI scanners. The quantitative results indicate that for long acquisition times, when multiple acquisitions are averaged to improve SNR, SRR can achieve better resolution at better SNR than direct HR acquisitions.
An iterative hard thresholding algorithm for CS MRI
S. R. Rajani, M. Ramasubba Reddy
The recently proposed compressed sensing theory equips us with methods to recover, exactly or approximately, high-resolution images from very few encoded measurements of the scene. The traditional ill-posed problem of MRI image recovery from heavily under-sampled κ-space data can thus be solved using CS theory. Differing from the soft thresholding methods that have been used earlier in the case of CS MRI, we suggest a simple iterative hard thresholding algorithm which efficiently recovers diagnostic quality MRI images from highly incomplete κ-space measurements. The new multi-scale redundant systems, curvelets and contourlets, which have high directionality and anisotropy and are thus best suited for curved-edge representation, are used in this iterative hard thresholding framework for CS MRI reconstruction and their performance is compared. κ-space under-sampling schemes such as variable density sampling and the more conventional radial sampling are tested at the same sampling rate, and the effect of the encoding scheme on iterative hard thresholding compressed sensing reconstruction is studied.
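A minimal iterative hard thresholding (IHT) sketch for Cartesian CS MRI is given below, keeping the s largest transform coefficients at each iteration. It is not the authors' algorithm: an orthonormal wavelet (PyWavelets) stands in for the curvelet/contourlet systems used in the paper, and the sampling mask, image size and parameters are assumptions.

```python
import numpy as np
import pywt

def hard_threshold(coeff_arr, s):
    """Keep the s largest-magnitude coefficients, zero the rest."""
    flat = np.abs(coeff_arr).ravel()
    if s < flat.size:
        thresh = np.partition(flat, -s)[-s]
        coeff_arr = coeff_arr * (np.abs(coeff_arr) >= thresh)
    return coeff_arr

def iht_cs_mri(kspace, mask, s, n_iter=50, wavelet="db4", level=3):
    """kspace: masked k-space samples; mask: binary sampling pattern (same shape)."""
    x = np.zeros(mask.shape, dtype=complex)
    for _ in range(n_iter):
        # gradient step: enforce consistency with the measured k-space samples
        residual = mask * (kspace - np.fft.fft2(x, norm="ortho"))
        x = x + np.fft.ifft2(residual, norm="ortho")
        # sparsity projection: hard thresholding in the wavelet domain
        coeffs = pywt.wavedec2(np.real(x), wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        arr = hard_threshold(arr, s)
        rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), wavelet)
        x = rec[:mask.shape[0], :mask.shape[1]].astype(complex)  # crop in case of odd sizes
    return np.abs(x)
```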
Image quality improvement through fusion of hybrid bone- and soft-tissue-texture filtering for 3D cone beam CT extremity imaging system
D. Yang, R. A. Senn, N. Packard, et al.
A flat-panel, detector-based cone beam CT system can provide advantages over a fan beam CT system in terms of 3D isotropic spatial resolution. However, as a result of increased X-ray coverage along the rotation axis, there is also an increase in scatter. This can lead to a decrease in low-contrast resolution as well as the appearance of non-uniform artifacts across the reconstructed image. These effects can be minimized with the use of an anti-scatter grid; however, further software corrections are often desirable. Software scatter correction is generally achieved through the subtraction of an estimate of the scatter distribution from the corresponding original projection data in the linear space. While the non-uniform artifacts are generally reduced, a side effect of this subtractive process can be an undesirable amplification of the apparent noise, which makes the image quality, in terms of contrast-to-noise ratio (CNR), much worse than that of images produced by fan beam CT systems. In this work, a novel modified imaging chain is proposed that applies separate, non-linear noise-reduction algorithms to bone and soft tissues, to improve the CNR for soft tissue as well as to maintain a high spatial resolution for the display of bony structures.
Quality evaluation for metal influenced CT data
Bärbel Kratz, Svitlana Ens, Christian Kaethner, et al.
In Computed Tomography (CT) metal objects in the region of interest introduce data inconsistencies during acquisition. The reconstruction process results in an image with star shaped artifacts. To enhance image quality the influence of metal objects can be reduced by different metal artifact reduction (MAR) strategies. For an adequate evaluation of new MAR approaches a ground truth reference data set is needed. In technical evaluations, where phantoms are available with and without metal inserts, ground truth data can easily be acquired by a reference scan. Obviously, this is not possible for clinical data. In this work, three different evaluation methods for metal artifacts as well as comparison of MAR methods without the need of an acquired reference data set will be presented and compared. The first metric is based on image contrast; a second approach involves the filtered gradient information of the image, and the third method uses a forward projection of the reconstructed image followed by a comparison with the actually measured projection data. All evaluation techniques are performed on phantom and on clinical CT data with and without MAR and compared with reference-based evaluation methods as well as expert-based classifications.
Denoising of 4D cardiac micro-CT data using median-centric bilateral filtration
D. Clark, G. A. Johnson, C. T. Badea
Bilateral filtration has proven an effective tool for denoising CT data. The classic filter uses Gaussian domain and range weighting functions in 2D. More recently, other distributions have yielded more accurate results in specific applications, and the bilateral filtration framework has been extended to higher dimensions. In this study, brute-force optimization is employed to evaluate the use of several alternative distributions for both domain and range weighting: Andrew's Sine Wave, El Fallah Ford, Gaussian, Flat, Lorentzian, Huber's Minimax, Tukey's Bi-weight, and Cosine. Two variations on the classic bilateral filter, which use median filtration to reduce bias in range weights, are also investigated: median-centric and hybrid bilateral filtration. Using the 4D MOBY mouse phantom reconstructed with noise (stdev. ~ 65 HU), hybrid bilateral filtration, a combination of the classic and median-centric filters, with Flat domain and range weighting is shown to provide optimal denoising results (PSNRs: 31.69, classic; 31.58, median-centric; 32.25, hybrid). To validate these phantom studies, the optimal filters are also applied to in vivo, 4D cardiac micro-CT data acquired in the mouse. In a constant region of the left ventricle, hybrid bilateral filtration with Flat domain and range weighting is shown to provide optimal smoothing (stdev: original, 72.2 HU; classic, 20.3 HU; median-centric, 24.1 HU; hybrid, 15.9 HU). While the optimal results were obtained using 4D filtration, the 3D hybrid filter is ultimately recommended for denoising 4D cardiac micro-CT data, because it is more computationally tractable and less prone to artifacts (MOBY PSNR: 32.05; left ventricle stdev: 20.5 HU).
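The kernel substitution explored above can be illustrated with a small 2D bilateral filter in which the range-weighting function is swappable (Gaussian versus a "Flat" box kernel with a cutoff). This is a generic sketch: the window size and cutoffs are illustrative, and the 4D and median-centric/hybrid variants are not reproduced here.

```python
import numpy as np

def bilateral_filter(img, radius=3, sigma_d=2.0, range_weight="gaussian", sigma_r=50.0):
    """2D bilateral filter with a selectable range-weighting function."""
    img = np.asarray(img, float)
    pad = np.pad(img, radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    domain_w = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_d ** 2))   # spatial (domain) weights
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            diff = window - img[i, j]
            if range_weight == "flat":
                rw = (np.abs(diff) <= sigma_r).astype(float)       # flat (box) range kernel
            else:
                rw = np.exp(-diff ** 2 / (2 * sigma_r ** 2))       # Gaussian range kernel
            w = domain_w * rw
            out[i, j] = (w * window).sum() / w.sum()
    return out
```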
Confidence map-based super-resolution reconstruction
Wissam El Hakimi, Stefan Wesarg
Magnetic Resonance Imaging and Computed Tomography usually provide highly anisotropic image data, so that the resolution in the slice-selection direction is poorer than in the in-plane directions. An isotropic high-resolution image can be reconstructed from two orthogonal scans of the same object. While combining the different data sets, all input data are usually weighted equally, without considering the fidelity level of each piece of input information. In this paper we introduce a novel super-resolution method which considers the fidelity level of each input data set by introducing an adaptive confidence map. Experimental results on simulated and real data sets have shown the improved accuracy of the reconstructed images, whose resolution approximates the original in-plane resolution in all directions. The quality of the reconstructed high-resolution image was improved for noiseless input data sets, and even in the presence of different noise types with a low peak signal-to-noise ratio.
Enhancing super-resolution reconstructed image quality in 3D MR images using simulated annealing
Sami ur Rahman, Tsvetoslava Vateva, Stefan Wesarg
Super-resolution reconstruction (SRR) algorithms are used for getting high-resolution (HR) data from low-resolution observations. In Maximum a posteriori (MAP) based SRR the observation model is employed for estimating a HR image that best reproduces the two low-resolution input data sets. The parameters of the prior play a significant role in the MAP based SRR. This work concentrates on the investigation of the influence of one such parameter, called temperature, on the reconstructed 3D MR images. The existing approaches on SRR in 3D MR images use a constant value for this parameter. We use a cooling schedule similar to simulated annealing for computing the value of the temperature parameter at each iteration of the SRR. We have used 3D MR cardiac data sets in our experiments and have shown that the iterative computation of the temperature which resembles simulated annealing delivers better results.
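By way of illustration, a geometric cooling schedule of the kind used in simulated annealing is sketched below; the actual schedule, starting temperature and decay rate used by the authors are not specified in the abstract, so the values here are placeholders.

```python
def temperature(iteration, t_start=1.0, alpha=0.95):
    """Temperature at a given SRR iteration: T_k = T0 * alpha**k (geometric cooling)."""
    return t_start * alpha ** iteration

# Monotonically decreasing temperatures over the first 10 iterations:
schedule = [temperature(k) for k in range(10)]
```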
A novel iterative non-local means algorithm for speckle reduction
Despeckling of ultrasound images is a crucial step for facilitating subsequent image processing. The non-local means (NLM) filter has been widely applied for denoising images corrupted by Gaussian noise. However, the direct application of this filter in ultrasound images cannot provide satisfactory restoration results. To address this problem, a novel iterative adaptive non-local means (IANLM) filter is proposed to despeckle ultrasound images. In the proposed filter, the speckle noise is firstly transformed into additive Gaussian noise by square root operation. Then the decay parameter is estimated based on a selected homogeneous region. Finally, an iterative strategy combined with the local clustering method based on pixel intensities is adopted to realize effective image smoothing while preserving image edges. Comparisons of the restoration performance of IANLM filter with other state-of-the-art despeckling methods are made. The quantitative comparisons of despeckling synthetic images based on Peak signal-to-noise ratio (PSNR) show that the IANLM filter can provide the best restoration performance among all the evaluated filters. The subjective visual comparisons of the denoised synthetic and ultrasound images demonstrate that the IANLM filter outperforms other compared algorithms in that it can achieve better performance of noise reduction, artifact avoidance, edges and textures preservation and contrast enhancement.
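The variance-stabilizing idea described above can be sketched as a square-root transform (making speckle approximately additive Gaussian), non-local means denoising, and back-transformation. scikit-image's denoise_nl_means is used here as a stand-in for the iterative adaptive filter; the filter parameters are assumptions and the clustering-based iteration is not reproduced.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def despeckle_sqrt_nlm(img):
    """Square-root stabilization + NLM denoising + back-transformation."""
    img = np.asarray(img, float)
    stabilized = np.sqrt(img)                              # speckle -> approximately additive Gaussian
    sigma = estimate_sigma(stabilized)                     # rough noise level estimate
    denoised = denoise_nl_means(stabilized, h=1.15 * sigma,
                                patch_size=5, patch_distance=6)
    return denoised ** 2                                   # back to the intensity domain
```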
Additive Dirichlet models for projectional images
Simon Williams, Murk J. Bottema
An important difference between projection images such as x-rays and natural images is that the intensity at a single pixel in a projection image comprises information from all objects between the source and detector. In order to exploit this information, a Dirichlet mixture of Gaussian distributions is used to model the intensity function forming the projection image. The model requires initial seeding of the Gaussians and uses the EM (expectation-maximisation) algorithm to arrive at a final model. The resulting models are shown to be robust with respect to the number and positions of the Gaussians used to seed the algorithm. As an example, a screening mammogram is modelled as a Dirichlet sum of Gaussians, suggesting possible application to early detection of breast cancer.
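A rough analogue of the model described above treats the projection image as an (unnormalized) density over pixel coordinates and fits a mixture of 2D Gaussians with a Dirichlet prior on the weights by variational EM. scikit-learn's BayesianGaussianMixture stands in for the authors' formulation; the sampling step, component count and seeding are assumptions.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def fit_projection_mixture(image, n_components=10, n_samples=20000, random_state=0):
    """Fit spatial Gaussians to pixel positions sampled in proportion to image intensity."""
    rng = np.random.default_rng(random_state)
    img = np.clip(np.asarray(image, float), 0, None)
    ys, xs = np.indices(img.shape)
    coords = np.column_stack([xs.ravel(), ys.ravel()])
    p = img.ravel() / img.sum()                        # image as a density over pixel positions
    idx = rng.choice(coords.shape[0], size=n_samples, p=p)
    model = BayesianGaussianMixture(
        n_components=n_components,
        weight_concentration_prior_type="dirichlet_distribution",
        max_iter=500, random_state=random_state)
    model.fit(coords[idx])                             # EM-style fit of the spatial Gaussians
    return model                                       # means_, covariances_, weights_
```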
Prediction coefficient estimation in Markov random fields for iterative x-ray CT reconstruction
Bayesian estimation is a statistical approach for incorporating prior information through the choice of an a priori distribution for a random field. A priori image models in Bayesian image estimation are typically low-order Markov random fields (MRFs), effectively penalizing only differences among immediately neighboring voxels. This limits spectral description to a crude low-pass model. For applications where more flexibility in spectral response is desired, potential benefit exists in models which accord higher a priori probability to content in higher frequencies. Our research explores the potential of larger neighborhoods in MRFs to raise the number of degrees of freedom in spectral description. Similarly to classical filter design, the MRF coefficients may be chosen to yield a desired pass-band/stop-band characteristic shape in the a priori model of the images. In this paper, we present an alternative design method, where high-quality sample images are used to estimate the MRF coefficients by fitting them into the spatial correlation of the given ensemble. This method allows us to choose weights that increase the probability of occurrence of strong components at particular spatial frequencies. This allows direct adaptation of the MRFs for different tissue types based on sample images with different frequency content. In this paper, we consider particularly the preservation of detail in bone structure in X-ray CT. Our results show that MRF design can be used to obtain bone emphasis similar to that of conventional filtered back-projection (FBP) with a bone kernel.
Posters: Neuro Applications
Glial brain tumor detection by using symmetry analysis
Valentina Pedoia, Elisabetta Binaghi, Sergio Balbi, et al.
In this work a fully automatic algorithm to detect brain tumors by using symmetry analysis is proposed. In recent years a great deal of research in the field of medical imaging has focused on brain tumor segmentation. The quantitative analysis of brain tumors in MRI yields useful key indicators of disease progression. The complex problem of segmenting tumors in MRI can be successfully addressed by considering modular and multi-step approaches mimicking the human visual inspection process. Tumor detection is often an essential preliminary phase for solving the segmentation problem successfully. In visual analysis of the MRI, the first step of the expert's cognitive process is the detection of an anomaly with respect to normal tissue, whatever its nature. A healthy brain has a strong sagittal symmetry that is weakened by the presence of a tumor. The comparison between the healthy and ill hemispheres, considering that tumors are generally not symmetrically placed in both hemispheres, is used to detect the anomaly. A clustering method based on energy minimization through Graph-Cut is applied to the volume computed as the difference between the left hemisphere and the right hemisphere mirrored across the symmetry plane. Differential analysis involves losing the knowledge of which side the tumor is on; the ill hemisphere is then recognized through a histogram analysis. Many experiments were performed to assess the performance of the detection strategy on MRI volumes containing tumors of varied shapes, positions and intensity levels. The experiments showed good results also in complex situations.
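The hemispheric difference step described above can be sketched as follows: the volume is mirrored across the (approximate) mid-sagittal plane and subtracted from the original, so that asymmetric structures such as tumors stand out. Here the symmetry plane is assumed to be the central slice along the left-right axis, which is a simplification of the plane estimation used by the authors.

```python
import numpy as np

def hemispheric_difference(volume, lr_axis=0):
    """Return |volume - volume mirrored across the mid-plane of lr_axis|."""
    volume = np.asarray(volume, float)
    mirrored = np.flip(volume, axis=lr_axis)
    return np.abs(volume - mirrored)

# Voxels with large values in the difference map are candidate tumor regions,
# to be clustered (e.g. by the Graph-Cut based energy minimization mentioned above).
```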
Automatic segmentation of white matter hyperintensities robust to multicentre acquisition and pathological variability
T. Samaille, O. Colliot, R. Cuingnet, et al.
White matter hyperintensities (WMH), commonly seen on FLAIR images in elderly people, are a risk factor for dementia onset and have been associated with motor and cognitive deficits. We present here a method to fully automatically segment WMH from T1 and FLAIR images. Iterative steps of non linear diffusion followed by watershed segmentation were applied on FLAIR images until convergence. Diffusivity function and associated contrast parameter were carefully designed to adapt to WMH segmentation. It resulted in piecewise constant images with enhanced contrast between lesions and surrounding tissues. Selection of WMH areas was based on two characteristics: 1) a threshold automatically computed for intensity selection, 2) main location of areas in white matter. False positive areas were finally removed based on their proximity with cerebrospinal fluid/grey matter interface. Evaluation was performed on 67 patients: 24 with amnestic mild cognitive impairment (MCI), from five different centres, and 43 with Cerebral Autosomal Dominant Arteriopathy with Subcortical Infarcts and Leukoaraiosis (CADASIL) acquired in a single centre. Results showed excellent volume agreement with manual delineation (Pearson coefficient: r=0.97, p<0.001) and substantial spatial correspondence (Similarity Index: 72%±16%). Our method appeared robust to acquisition differences across the centres as well as to pathological variability.
Labeling of the cerebellar peduncles using a supervised Gaussian classifier with volumetric tract segmentation
Chuyang Ye, Pierre-Louis Bazin, John A. Bogovic, et al.
The cerebellar peduncles are white matter tracts that play an important role in the communication of the cerebellum with other regions of the brain. They can be grouped into three fiber bundles: the inferior cerebellar peduncle, middle cerebellar peduncle, and superior cerebellar peduncle. Their automatic segmentation on diffusion tensor images would enable a better understanding of the cerebellum and would be less time-consuming and more reproducible than manual delineation. This paper presents a method that automatically labels the three fiber bundles based on the segmentation results from the diffusion oriented tract segmentation (DOTS) algorithm, which achieves volume segmentation of white matter tracts using a Markov random field (MRF) framework. We use the DOTS labeling result as a guide to determine the classification of fibers produced by wild bootstrap probabilistic tractography. Mean distances from each fiber to each DOTS volume label are defined and then used as features that contribute to classification. A supervised Gaussian classifier is employed to label the fibers. Manually delineated cerebellar peduncles serve as training data to determine the parameters of the class probabilities for each label. Fibers are labeled as the class that has the highest posterior probability. An outlier detection step removes fiber tracts that belong to noise or that are not modeled by DOTS. Experiments show a successful classification of the cerebellar peduncles. We have also compared results between successive scans to demonstrate the reproducibility of the proposed method.
Intracranial aneurysm growth quantification in CTA
Azadeh Firouzian, Rashindra Manniesing, Coert T. Metz, et al.
Next to aneurysm size, aneurysm growth over time is an important indicator for aneurysm rupture risk. Manual assessment of aneurysm growth is a cumbersome procedure, prone to inter-observer and intra-observer variability. In clinical practice, mainly qualitative assessment and/or diameter measurement are routinely performed. In this paper a semi-automated method for quantifying aneurysm volume growth over time in CTA data is presented. The method treats a series of longitudinal images as a 4D dataset. Using a 4D groupwise non-rigid registration method, deformations with respect to the baseline scan are determined. Combined with 3D aneurysm segmentation in the baseline scan, volume change is assessed using the deformation field at the aneurysm wall. For ten patients, the results of the method are compared with reports from expert clinicians, showing that the quantitative results of the method are in line with the assessment in the radiology reports. The method is also compared to an alternative method in which the volume is segmented in each 3D scan individually, showing that the 4D groupwise registration method agrees better with manual assessment.
A field map estimation strategy without the noise-bandwidth tradeoff
We propose a joint acquisition-processing solution to the problem of field map estimation. Our acquisition method captures data at three echo times carefully chosen to yield optimal field map estimation using the corresponding algorithm. We show that, over an arbitrary spectral range of inhomogeneity values, our method is not subject to the traditional noise-bandwidth tradeoff. The resulting implications include: improved robustness, enhanced spectral estimation and eliminating the need for spatial phase unwrapping. Our simulations show factors of improvement in the quality of field map estimates, as compared to existing methods. Our phantom data confirms these impressive gains.
Fiber estimation errors incurred from truncated sampling in q-space diffusion magnetic resonance imaging
Bryce Wilkins, Namgyun Lee, Manbir Singh
The effect of truncated sphere sampling commonly used in Diffusion Spectrum Imaging to estimate single fiber orientation errors is simulated using fully-sampled (7x7x7) q-space data (343 samples), truncated sphere sampling (203 samples), and zero-padding (to 21x21x21 samples), in both noise-free and noisy (SNR = 30) scenarios. We show that the error of resolving a single fiber direction depends on the fiber orientation relative to the "corners of q-space", which are not sampled when truncated sampling schemes are used, and that corner-samples help reduce the error in estimating a single fiber orientation (mean error 3.5° versus 3.9° for the noise case and zero-padding used). As truncated sphere sampling requires 40% less acquisition time, its use could justify the small increase in error (0.4°). A simulation of the FiberCup phantom was used to assess the significance of truncated sampling on fiber orientation estimation errors over a continuous space of voxels by examining differences in resulting tractography.
Brain tissue segmentation in 4D CT using voxel classification
R. van den Boom, M. T. H. Oei, S. Lafebre, et al.
A method is proposed to segment anatomical regions of the brain from 4D computed tomography (CT) patient data. The method consists of a three-step voxel classification scheme, with each step focusing on structures that are increasingly difficult to segment. The first step classifies air and bone, the second step classifies vessels, and the third step classifies white matter, gray matter and cerebrospinal fluid. As features, the time-averaged intensity value and the temporal intensity change value were used. In each step, a k-Nearest-Neighbor classifier was used to classify the voxels. Training data was obtained by placing regions of interest in reconstructed 3D image data. The method has been applied to ten 4D CT cerebral patient datasets. A leave-one-out experiment showed consistent and accurate segmentation results.
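One classification step of the kind described above can be sketched with a k-NN classifier on the two temporal features (time-averaged intensity and temporal intensity change), trained from labeled regions of interest; the value of k and the absence of feature scaling are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def classify_voxels(ct_4d, roi_labels, k=15):
    """ct_4d: array (t, z, y, x); roi_labels: int array (z, y, x) with 0 meaning unlabeled."""
    mean_intensity = ct_4d.mean(axis=0)                                # time-averaged intensity
    temporal_change = np.abs(np.diff(ct_4d, axis=0)).mean(axis=0)      # temporal intensity change
    features = np.column_stack([mean_intensity.ravel(), temporal_change.ravel()])
    train_mask = roi_labels.ravel() > 0
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(features[train_mask], roi_labels.ravel()[train_mask])      # train on ROI-labeled voxels
    return clf.predict(features).reshape(roi_labels.shape)             # label every voxel
```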
Discriminating between brain rest and attention states using fMRI connectivity graphs and subtree SVM
Fatemeh Mokhtari, Shahab K. Bakhtiari, Gholam Ali Hossein-Zadeh, et al.
Decoding techniques have opened new windows to explore brain function and information encoding in brain activity. In the current study, we design a recursive support vector machine which is enriched by a subtree graph kernel. We apply the classifier to discriminate between an attentional cueing task and the resting state from a block design fMRI dataset. The classifier is trained using weighted fMRI graphs constructed from activated regions during the two mentioned states. The proposed method leads to a classification accuracy of 1 (100%). It is also able to elicit discriminative regions and connectivities between the two states using a backward edge elimination algorithm. This algorithm shows the importance of regions including the cerebellum, insula, left middle superior frontal gyrus, and posterior cingulate cortex, and of the connectivities between them, in enhancing the correct classification rate.
MITK global tractography
Peter F. Neher, Bram Stieltjes, Marco Reisert, et al.
Fiber tracking algorithms yield valuable information for neurosurgery as well as automated diagnostic approaches. However, they have not yet arrived in daily clinical practice. In this paper we present an open source integration of the global tractography algorithm proposed by Reisert et al. [1] into the open source Medical Imaging Interaction Toolkit (MITK) developed and maintained by the Division of Medical and Biological Informatics at the German Cancer Research Center (DKFZ). The integration of this algorithm into a standardized and open development environment like MITK improves the accessibility of tractography algorithms for the scientific community and is an important step towards bringing neuronal tractography closer to clinical application. The MITK diffusion imaging application, downloadable from www.mitk.org, combines all the steps necessary for successful tractography: preprocessing, reconstruction of the images, the actual tracking, live monitoring of intermediate results, postprocessing and visualization of the final tracking results. This paper presents typical tracking results and demonstrates the steps for pre- and post-processing of the images.
ISMI: a classification index for high angular resolution diffusion imaging
D. Röttger, D. Dudai, D. Merhof, et al.
Magnetic resonance diffusion imaging provides a unique insight into the white matter architecture of the brain in vivo. Applications include neurosurgical planning and fundamental neuroscience. Contrary to diffusion tensor imaging (DTI), high angular resolution diffusion imaging (HARDI) is able to characterize complex intra-voxel diffusion distributions and hence provides more accurate information about the true diffusion profile. Anisotropy indices aim to reduce the information of the diffusion probability function to a meaningful scalar representation that classifies the underlying diffusion and thereby the neuronal fiber configuration within a voxel. These indices can be used to answer clinical questions concerning, for example, the integrity of certain neuronal pathways. Information about the underlying fiber distribution can be beneficial in tractography approaches, which reconstruct neuronal pathways using local diffusion orientations. Therefore, an accurate classification of diffusion profiles is of great interest. However, the differentiation between multiple fiber orientations and isotropic diffusion is still a challenging task. In this work, we introduce ISMI, an index which successfully differentiates between isotropic diffusion, single-fiber and multiple-fiber populations. The classifier is based on the orientation distribution function (ODF) resulting from Q-ball imaging. We compare our results with the well-known generalized fractional anisotropy (GFA) index using a fiber phantom comprising challenging diffusion profiles such as crossing, fanning and kissing fiber configurations, as well as a human brain dataset considering the centrum semiovale. Additionally, we visualize the results directly on the fibers represented by streamtubes using a heat color map.
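For reference, the generalized fractional anisotropy (GFA) used as the baseline above is commonly defined from n ODF samples as below; the definition of ISMI itself is not given in the abstract and is therefore not reproduced here.

```latex
\mathrm{GFA} \;=\; \frac{\operatorname{std}(\psi)}{\operatorname{rms}(\psi)}
\;=\; \sqrt{\frac{n \sum_{i=1}^{n} \bigl(\psi_i - \langle\psi\rangle\bigr)^2}
                 {(n-1)\sum_{i=1}^{n} \psi_i^{\,2}}},
\qquad \langle\psi\rangle = \frac{1}{n}\sum_{i=1}^{n}\psi_i .
```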
Intrinsic functional connectivity pattern-based brain parcellation using normalized cut
Hewei Cheng, Dandan Song, Hong Wu, et al.
In imaging data based brain network analysis, a necessary precursor for constructing meaningful brain networks is to identify functionally homogeneous regions of interest (ROIs) for defining network nodes. For parcellating the brain based on resting state fMRI data, normalized cut is one widely used clustering algorithm which groups voxels according to the similarity of functional signals. Due to low signal to noise ratio (SNR) of resting state fMRI signals, spatial constraint is often applied to functional similarity measures to generate smooth parcellation. However, improper spatial constraint might alter the intrinsic functional connectivity pattern, thus yielding biased parcellation results. To achieve reliable and least biased parcellation of the brain, we propose an optimization method for the spatial constraint to functional similarity measures in normalized cut based brain parcellation. Particularly, we first identify the space of all possible spatial constraints that are able to generate smooth parcellation, then find the spatial constraint that leads to the brain parcellation least biased from the intrinsic function pattern based parcellation, measured by the minimal Ncut value calculated based on the functional similarity measure of original functional signals. The proposed method has been applied to the parcellation of medial superior frontal cortex for 20 subjects based on their resting state fMRI data. The experiment results indicate that our method can generate meaningful parcellation results, consistent with existing functional anatomy knowledge.
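For context, the normalized cut criterion minimized above, for a partition of the voxel graph G = (V, E) into clusters A and B with edge weights w(u, v) given by the (spatially constrained) functional similarity, is the standard one:

```latex
\mathrm{Ncut}(A, B) \;=\;
\frac{\mathrm{cut}(A, B)}{\mathrm{assoc}(A, V)} \;+\;
\frac{\mathrm{cut}(A, B)}{\mathrm{assoc}(B, V)},
\qquad
\mathrm{cut}(A, B) = \sum_{u \in A,\; v \in B} w(u, v), \quad
\mathrm{assoc}(A, V) = \sum_{u \in A,\; t \in V} w(u, t).
```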
Accelerated diffusion spectrum imaging via compressed sensing for the human connectome project
Namgyun Lee, Bryce Wilkins, Manbir Singh
Diffusion Spectrum Imaging (DSI) has been developed as a model-free approach to solving the so-called multiple-fibers-per-voxel problem in diffusion MRI. However, rapidly inferring the heterogeneous microstructure of an imaging voxel remains a challenge in DSI because of the extensive sampling requirements on a Cartesian grid in q-space. In this study, we propose compressed sensing based diffusion spectrum imaging (CS-DSI) to significantly reduce the number of diffusion measurements required for accurate estimation of fiber orientations. This method reconstructs each diffusion propagator of an MR data set from 100 variable-density undersampled diffusion measurements by minimizing the l1-norm of the finite differences (i.e., anisotropic total variation) of the diffusion propagator. The proposed method is validated against a ground truth from synthetic data mimicking the FiberCup phantom, demonstrating the robustness of CS-DSI in accurately estimating underlying fiber orientations from noisy diffusion data. We demonstrate the effectiveness of our CS-DSI method on a human brain dataset acquired from a clinical scanner without specialized pulse sequences. Estimated ODFs from the CS-DSI method are qualitatively compared to those from the full dataset (DSI203). Lastly, we demonstrate that streamline tractography based on our CS-DSI method has a quality comparable to conventional DSI203. This illustrates the feasibility of CS-DSI for reconstructing whole-brain white-matter fiber tractography from clinical data acquired at imaging centers, including hospitals, for human brain connectivity studies.
Mesial temporal lobe epilepsy lateralization using SPHARM-based features of hippocampus and SVM
This paper improves the lateralization (identification of the epileptogenic hippocampus) accuracy in mesial temporal lobe epilepsy (mTLE). In patients with this kind of epilepsy, usually one of the brain's hippocampi is the focus of the epileptic seizures, and resection of the seizure focus is the ultimate treatment to control or reduce the seizures. Moreover, the epileptogenic hippocampus is prone to shrinkage and deformation; therefore, shape analysis of the hippocampus is advantageous in the preoperative assessment for lateralization. The method utilized for shape analysis is spherical harmonics (SPHARM). In this method, the shape of interest is decomposed using a set of basis functions, and the obtained expansion coefficients are the features describing the shape. To perform shape comparison and analysis, some pre- and post-processing steps, such as alignment of the different subjects' hippocampi and reduction of the feature-space dimension, are required. To this end, the first-order ellipsoid is used for alignment. For dimension reduction, we propose to keep only the SPHARM coefficients with maximum conformity to the hippocampus shape. Then, using these coefficients from normal and epileptic subjects along with 3D invariants, specific lateralization indices are proposed. Consequently, the 1536 SPHARM coefficients of each subject are summarized into 3 indices, where for each index a negative (positive) value indicates that the left (right) hippocampus is deformed (diseased). Employing these indices, the best lateralization accuracies achieved with clustering and classification algorithms are 85% and 92%, respectively. This is a significant improvement compared to the conventional volumetric method.
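The decomposition step can be illustrated for the simplified case of a star-shaped surface described by a radius function r(theta, phi); the expansion coefficients then play the role of shape features. This is a hedged sketch using SciPy's spherical harmonics and a crude quadrature, not the authors' SPHARM pipeline, and the `radius_fn`, grid sizes, and `l_max` are illustrative assumptions.

```python
# Minimal sketch (assumption): spherical harmonic expansion of a star-shaped
# surface r(theta, phi) by quadrature on a latitude/longitude grid.
import numpy as np
from scipy.special import sph_harm

def spharm_coeffs(radius_fn, l_max=8, n_theta=64, n_phi=128):
    # SciPy convention: sph_harm(m, l, azimuth, polar)
    polar = np.linspace(0, np.pi, n_theta)
    azimuth = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
    P, A = np.meshgrid(polar, azimuth, indexing='ij')
    r = radius_fn(P, A)
    dA = np.sin(P) * (np.pi / n_theta) * (2 * np.pi / n_phi)   # area element
    coeffs = {}
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            Ylm = sph_harm(m, l, A, P)
            coeffs[(l, m)] = np.sum(r * np.conj(Ylm) * dA)
    return coeffs

# A slightly elongated blob: radius varies with the polar angle.
coeffs = spharm_coeffs(lambda polar, az: 1.0 + 0.2 * np.cos(polar) ** 2)
print(abs(coeffs[(0, 0)]), abs(coeffs[(2, 0)]))   # low-order terms capture the elongation
```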
Segmentation of the optic tracts using graph-based techniques
In deep brain stimulation (DBS) surgery, electrodes are implanted in specific nuclei of the brain to treat several types of movement disorders. Pre-operative knowledge of the location of the optic tracts may prove useful for pre-operative planning assistance or intra-operative target refinement. In this article we present a semi-automated method to localize the optic tracts in MR images. As opposed to previous approaches presented to identify these structures, our method is able to recover the eccentric shape of the optic tracts. The approach consists of two parts: (1) automatic model construction from manually segmented exemplars and (2) segmentation of structures in unknown images using these models. The segmentation problem is solved by finding an optimal path in a graph. The graph is designed with novel structures that permit the incorporation of prior information from the model into the optimization process and account for several weaknesses of traditional graph-based approaches. The approach achieved mean and maximum surface errors of 0.35 and 1.9 mm in a validation study on 10 images. The results from all experiments were considered acceptable.
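The paper's graph structures are novel and not reproduced here; the sketch below only illustrates the generic "optimal path in a graph" step on a simple 4-connected pixel cost image, using SciPy's Dijkstra implementation. The cost image, connectivity, and function names are illustrative assumptions.

```python
# Minimal sketch (assumption): minimum-cost path on a 4-connected cost image,
# the generic optimization step behind graph-based path segmentation.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

def min_cost_path(cost, start, end):
    """cost: 2-D array of per-pixel costs; start/end: (row, col) tuples."""
    h, w = cost.shape
    idx = lambda r, c: r * w + c
    graph = lil_matrix((h * w, h * w))
    for r in range(h):
        for c in range(w):
            for dr, dc in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w:
                    graph[idx(r, c), idx(rr, cc)] = cost[rr, cc]  # cost of entering neighbor
    _, pred = dijkstra(graph.tocsr(), indices=idx(*start), return_predecessors=True)
    path, node = [], idx(*end)
    while node != -9999:                      # -9999 marks the path's start
        path.append(divmod(node, w))
        node = pred[node]
    return path[::-1]

cost = np.ones((5, 5)); cost[1:4, 2] = 10.0   # a costly barrier to route around
print(min_cost_path(cost, (0, 0), (4, 4)))
```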
Detection of abrupt motion in DCE-MRI
Kumar Rajamani, Dattesh Shanbhag, Rakesh Mullick, et al.
Dynamic contrast enhanced MRI (DCE-MRI) is being increasingly used as a method for studying the tumor vasculature. It is also used as a biomarker to evaluate the response to anti-angiogenic therapies and the efficacy of a therapy. The uptake of contrast in the tissue is analyzed using pharmacokinetic models to understand the perfusion characteristics and cell structure, which are indicative of tumor proliferation. However, in most of these 4D acquisitions the time required for the complete scan is quite long, as sufficient time must be allowed for the passage of contrast medium from the vasculature to the tumor interstitium and subsequent extraction. Patient motion during such long scans is one of the major challenges that hamper automated and robust quantification. A system that could automatically detect whether motion has occurred during the acquisition would be extremely beneficial. Patient motion observed during such 4D acquisitions often consists of rapid shifts, probably due to involuntary actions such as coughing, sneezing, peristalsis, or a jerk due to discomfort. The detection of such abrupt motion would help to decide on a course of action for motion correction, such as eliminating the affected time frames from analysis, employing a registration algorithm, or even considering the exam unanalyzable. In this paper a new technique is proposed for effective detection of motion in 4D medical scans by determining the variation of signal characteristics from multiple regions of interest across time. This approach offers a robust, powerful, yet simple technique to detect motion.
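One simple realization of this idea, not necessarily the authors' method, flags time points at which most ROI mean signals jump by much more than their usual frame-to-frame variability. The thresholds and toy data below are illustrative assumptions.

```python
# Minimal sketch (assumption): flagging abrupt motion in a 4-D series from the
# frame-to-frame jumps of mean signal in several ROIs.
import numpy as np

def detect_abrupt_motion(roi_means, z_thresh=4.0, min_fraction=0.6):
    """roi_means: (n_rois, n_timepoints) mean signal per ROI over time."""
    diffs = np.abs(np.diff(roi_means, axis=1))
    # Robust per-ROI scale from the median absolute frame-to-frame change.
    scale = np.median(diffs, axis=1, keepdims=True) + 1e-6
    outliers = diffs / scale > z_thresh
    # A time point is flagged when most ROIs jump simultaneously.
    return np.where(outliers.mean(axis=0) >= min_fraction)[0] + 1

rng = np.random.default_rng(1)
signal = np.cumsum(rng.normal(0, 0.05, (5, 60)), axis=1) + 100
signal[:, 30:] += 8.0                      # simulated abrupt patient shift
print(detect_abrupt_motion(signal))        # -> [30]
```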
Retinal vessel width measurement at branching points using an improved electric field theory-based graph approach
Xiayu Xu, Michael D. Abràmoff, Geir Bertelsen, et al.
An accurate and fully automatic method to measure the vessel width at branching points in fundus images is presented. The method is graph-based: an electric-field-theory-based graph construction is applied specifically to deal with the complicated branching patterns. The vessel centerline image is used as the initial segmentation. The branching points are detected on the vessel centerline image using a series of detection kernels. Crossing points are distinguished from branching points and excluded in this study. The graph construction, inspired by the non-intersecting property of electric lines of force, is then applied to build the graph. Of the three branches in a branching unit, the one closest to the optic disc is automatically detected as the parent branch and the other two are regarded as the daughter branches. The location of the optic disc is automatically detected based on a machine learning technique. The method was validated on a set of 50 fundus images.
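A common way to find candidate branching points on a centerline image, sketched below as an assumption rather than the paper's specific detection kernels, is to count 8-connected centerline neighbors and keep pixels with three or more.

```python
# Minimal sketch (assumption): candidate branch points on a binary vessel
# centerline, detected by counting 8-connected skeleton neighbors.
import numpy as np
from scipy.ndimage import convolve

def branch_points(centerline):
    """centerline: 2-D boolean array of the skeletonized vessel tree."""
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    neighbor_count = convolve(centerline.astype(int), kernel, mode='constant')
    return np.argwhere(centerline & (neighbor_count >= 3))

# A tiny Y-shaped centerline: the junction pixel is reported.
img = np.zeros((7, 7), dtype=bool)
img[3, 1:4] = True               # parent branch
img[2, 4] = img[1, 5] = True     # daughter 1
img[4, 4] = img[5, 5] = True     # daughter 2
print(branch_points(img))        # -> [[3 3]]
```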
Posters: Compressive Sensing
Multi-slice and multi-frame image reconstruction by predictive compressed sensing
Jun Zhang, Jun Wang, Guangwu Xu, et al.
In this paper, we describe a prediction-based compressed sensing approach for multi-slice (same time, different locations) or multi-frame (same location, different time) CT image reconstruction. In this approach, the second slice/frame of a pair of consecutive slices/frames is reconstructed by reconstructing, with compressed sensing, the prediction error image between the first and second slice/frame. This approach exploits the inter-slice/inter-frame correlation and the higher degree of sparsity of the prediction error image to achieve more efficient image reconstruction, i.e., fewer projections for the same image quality or higher image quality for the same number of projections. The efficacy of our approach is demonstrated through simulation results.
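The rationale (the prediction error image is a sparser, and hence easier, CS target than the slice itself) can be illustrated with a toy wavelet-coefficient count; the phantom, wavelet, and threshold below are illustrative assumptions, not the paper's transform or data.

```python
# Minimal sketch (assumption): comparing the number of significant wavelet
# coefficients of a slice against those of the prediction-error image between
# two highly correlated slices.
import numpy as np
import pywt

def significant_coeffs(image, wavelet='db4', thresh=0.05):
    coeffs = pywt.wavedec2(image, wavelet, level=3)
    arr, _ = pywt.coeffs_to_array(coeffs)
    return int(np.sum(np.abs(arr) > thresh))

x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
slice1 = np.exp(-(x ** 2 + y ** 2) / 0.2)                        # smooth "anatomy"
slice2 = slice1 + 0.1 * np.exp(-((x - 0.1) ** 2 + y ** 2) / 0.02)  # small local change

print(significant_coeffs(slice2))            # many significant coefficients
print(significant_coeffs(slice2 - slice1))   # far fewer: the prediction error is sparser
```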
Compressed sensing for phase-contrast computed tomography
Thomas Gaass, Guillaume Potdevin, Martin Bech, et al.
Modern X-ray techniques have opened the possibility of reconstructing phase contrast (PC) information. This provides significantly improved soft-tissue contrast compared to conventional computed tomography (CT). While PCCT significantly improves contrast information, radiation dose continues to be an issue when the technique is translated to the clinic. A possible dose reduction can be achieved by using more efficient reconstruction algorithms. In this work, dose reduction is achieved by applying a compressed sensing (CS) reconstruction to a highly sparse set of PCCT projections. The applied reconstruction algorithm is based on a non-uniform fast Fourier transform (NUFFT), where sparse sets of projections are reconstructed with a CS algorithm employing wavelet-domain sparsity and finite-differences minimization. We evaluated this approach with both phantom and real data. Measured data from a conventional X-ray source were acquired using grating-based interferometry. The resulting reconstructions are compared visually, and quantitatively on the basis of the standard deviation within different regions of interest. The assessment of phantom and measured data demonstrated the possibility of reconstructing from drastically fewer projections than the Nyquist theorem demands. The measured standard deviations were comparable to, or even lower than, those of full-dose reconstructions. In this initial evaluation of CS-based methods in PCCT, we presented a considerable reduction of the necessary projections. Thus, radiation dose can be reduced while maintaining the superior soft-tissue contrast and image quality of PCCT. In the future, approaches such as the one presented here may enable 4D PCCT, for instance in cardiac applications.
A feasibility study for compressed sensing combined phase contrast MR angiography reconstruction
Dong-Hoon Lee, Cheol-Pyo Hong, Man-Woo Lee, et al.
Phase contrast magnetic resonance angiography (PC MRA) is a technique for simultaneous flow velocity measurement and vessel visualization. PC MRA requires long scan times because each of the flow-encoding gradients, which are composed of bipolar gradients, is needed to reconstruct the angiographic image. Moreover, image acquisition takes even longer when PC MRA is used on a low-field MRI system. In this study, we evaluated the feasibility of compressed sensing (CS) reconstruction combined with PC MRA for data acquired on a low-field MRI system. We used a non-linear reconstruction algorithm based on Bregman iteration for the CS image reconstruction and validated the usefulness of the CS-combined PC MRA reconstruction technique. The CS-reconstructed PC MRA images provide a level of image quality similar to that of the fully sampled reconstructions. Although our results used only half of the samples and did not rely on dedicated hardware or acquisition strategies that improve the temporal resolution of MR imaging, such as parallel imaging with phased-array coils or non-Cartesian trajectories, we believe that the CS-combined PC MRA technique will be helpful for increasing temporal resolution on low-field MRI systems.
Quality assessment of fast wavelet-encoded MRI utilizing compressed sensing
Fast acquisition of MR images is possible through sparse encoding in the Fourier/wavelet domain under the incoherent measurement constraint required by the theory of compressed sensing (CS) for stable reconstruction. In one such sparse encoding method, we utilize the wavelet tree structure to undersample the wavelet-encoded MRI k-spaces with tailored spatially selective RF excitation pulses. The resulting undersampled k-spaces contain many more significant coefficients than randomly undersampled k-spaces. Thus, the quality of CS reconstruction based on these undersampled k-spaces is improved, and such an encoding scheme may reduce the patient scan time for MRI and fMRI. Using a fully sampled Fourier-encoded 3-D digital brain phantom as the gold standard, a mathematical framework with full-reference and visual image quality metrics has been proposed to assess the CS reconstruction performance in wavelet-encoded MRI. The quality of MR images recovered from undersampled k-space by different reconstruction methods is computed. The undersampling rates and noise levels in k-space are considered in evaluating the robustness of CS reconstruction. The simulation results show that the performance of CS reconstruction in wavelet-encoded MRI is more accurate and stable than in Fourier-encoded MRI with the same undersampling rate and noise level, at a significantly reduced scan time.
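Typical full-reference metrics for scoring a CS reconstruction against a fully sampled gold standard include PSNR and SSIM; the sketch below uses scikit-image and random stand-in images, and is an illustration rather than the paper's evaluation framework.

```python
# Minimal sketch (assumption): full-reference image quality metrics comparing a
# reconstruction against a fully sampled gold standard.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
gold = rng.random((128, 128))                           # stand-in for the fully sampled image
recon = gold + 0.02 * rng.standard_normal(gold.shape)   # stand-in for a CS reconstruction

print(peak_signal_noise_ratio(gold, recon, data_range=1.0))
print(structural_similarity(gold, recon, data_range=1.0))
```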
Posters: Functional Imaging
Rician compressed sensing for fast and stable signal reconstruction in diffusion MRI
Sudipto Dolui, Alan Kuurstra, Oleg V. Michailovich
The advent of the theory of compressed sensing (CS) has revolutionized multiple areas of applied sciences, a particularly important instance of which is medical imaging. In particular, the theory provides a solution to the problem of long acquisition times, which is intrinsic to diffusion MRI (dMRI). As a specific instance of dMRI, this work focuses on high angular resolution diffusion imaging (HARDI), which is known to excel in delineating multiple diffusion flows through a given voxel within the brain. Specifically, to reduce the acquisition time, CS allows undersampling the HARDI data by employing fewer diffusion-encoding gradients than is required by classical sampling theory. Subsequently, the undersampled data are used to recover the original signals by means of non-linear decoding. In earlier reconstruction methods, such decoding has been carried out under a Gaussian model for the measurement noise, instead of the Rician model which is known to prevail in MRI. Accordingly, the main contribution of the present work is twofold. First, we introduce a way to substantially improve the stability of the CS-based reconstruction of HARDI signals under the assumption of Gaussian noise. Second, we extend this approach to the case of Rician noise statistics. In addition to providing formal developments of the reconstruction algorithm based on Rician statistics, we also detail a computationally efficient numerical scheme which can be used to implement the above reconstruction. Finally, the methods based on the Gaussian and the Rician noise models are compared using both simulated and in-vivo MRI data under various measurement conditions.
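The Rician measurement model referred to above arises as the magnitude of a complex signal corrupted by Gaussian noise in both channels; the short simulation below (illustrative values, not the paper's data) shows the upward bias it produces at low SNR.

```python
# Minimal sketch (assumption): simulating Rician-distributed magnitude data.
import numpy as np

def add_rician_noise(signal, sigma, rng=None):
    """signal: noise-free magnitude; sigma: std of the complex Gaussian noise."""
    if rng is None:
        rng = np.random.default_rng()
    real = signal + rng.normal(0.0, sigma, signal.shape)
    imag = rng.normal(0.0, sigma, signal.shape)
    return np.sqrt(real ** 2 + imag ** 2)

s = np.full(10000, 0.2)                       # low-SNR HARDI-like signal level
noisy = add_rician_noise(s, sigma=0.1, rng=np.random.default_rng(0))
print(noisy.mean())   # > 0.2: Rician noise biases magnitudes upward at low SNR
```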
Model-based blood flow quantification from DSA: quantitative evaluation on patient data and comparison with TCCD
I. Waechter-Stehle, A. Groth, T. Bruijns, et al.
Purpose: To support intra-interventional decisions on the diagnosis and treatment of cerebrovascular diseases, a method providing quantitative information about the blood flow in the vascular system is proposed. Method: The method combines rotational angiography to extract the 3D vessel geometry and digital subtraction angiography (DSA) to obtain the flow observations. A physical model of blood flow and contrast agent transport is used to predict the propagation of the contrast agent through the vascular system. In an iterative approach, the model parameters, including the volumetric blood flow rate, are adapted until the prediction matches the observations from the DSA. The flow estimation method was applied to patient data: for 24 patients, the volumetric blood flow rate was determined from angiographic images, and for 17 patients, the results were compared with transcranial color-coded Doppler (TCCD) measurements. Results: The agreement of the x-ray-based flow estimates with TCCD was reasonable (bias ΔM = 3%, correlation ρ = 0.76), and their reproducibility was clearly better than that of the acquired TCCD measurements. Conclusion: Overall we conclude that it is feasible to model the contrast agent transport in patients and to utilize the flow model to quantify their blood flow by angiographic means.
Identification of subject specific and functional consistent ROIs using semi-supervised learning
Yuhui Du, Hongming Li, Hong Wu, et al.
Regions of interest (ROIs) for defining the nodes of a brain network are of great importance in brain network analysis of fMRI data. The ROIs are typically identified using prior anatomical information, seed-region-based correlation analysis, clustering analysis, region growing, or ICA-based methods. In this paper, we propose a novel method to identify subject-specific and functionally consistent ROIs for brain network analysis using semi-supervised learning. Specifically, a graph-theory-based semi-supervised learning method is adopted to optimize ROIs defined using prior knowledge under a constraint of local and global functional consistency, yielding subject-specific ROIs with enhanced functional connectivity. Experiments using simulated fMRI data have demonstrated that functionally consistent ROIs can be identified effectively from data with different signal-to-noise ratios (SNRs). Experiments using resting-state fMRI data of 25 normal subjects for identifying ROIs of the default mode network have demonstrated that the proposed method is capable of identifying subject-specific ROIs with stronger functional connectivity and higher consistency across subjects than existing alternative techniques, indicating that the proposed method can better identify brain network ROIs with intrinsic functional connectivity.
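The general flavor of graph-based semi-supervised labeling can be sketched with scikit-learn's label spreading: voxels inside a prior ROI are labeled, the rest are left unlabeled, and labels propagate over a similarity graph of voxel time series. This is a hedged stand-in for the paper's method, with toy data and parameters chosen purely for illustration.

```python
# Minimal sketch (assumption): growing a functionally consistent ROI from a
# labeled prior seed via graph-based semi-supervised label spreading.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
n_vox, n_t = 200, 120
source = rng.standard_normal(n_t)
ts = 0.1 * rng.standard_normal((n_vox, n_t))
ts[:80] += source                      # first 80 voxels share the same fluctuation

labels = np.full(n_vox, -1)            # -1 marks unlabeled voxels
labels[:20] = 1                        # prior ROI seed
labels[150:170] = 0                    # background seed

model = LabelSpreading(kernel='knn', n_neighbors=10).fit(ts, labels)
print(model.transduction_[:80].mean())   # close to 1: ROI spreads over correlated voxels
```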
ADHD classification using bag of words approach on network features
Berkan Solmaz, Soumyabrata Dey, A. Ravishankar Rao, et al.
Attention deficit hyperactivity disorder (ADHD) is receiving considerable attention, mainly because it is one of the most common brain disorders among children and little is known about its cause. In this study, we propose a novel approach for the automatic classification of ADHD subjects and control subjects using functional magnetic resonance imaging (fMRI) data of resting-state brains. For this purpose, we compute the correlation between every possible voxel pair within a subject over the time frame of the experimental protocol. A network of voxels is constructed by representing a high correlation value between any two voxels as an edge. A bag-of-words (BoW) approach is used to represent each subject as a histogram of network features, such as the degree of each voxel. The classification is done using a support vector machine (SVM). We also investigate the use of raw intensity values in the time series for each voxel. Here, every subject is represented as a combined histogram of network and raw intensity features. Experimental results verified that the classification accuracy improves when the combined histogram is used. We tested our approach on a highly challenging dataset released by NITRC for the ADHD-200 competition and obtained promising results. The dataset not only has a large size but also includes subjects from different demographic and age groups. To the best of our knowledge, this is the first paper to propose a BoW approach for functional brain disorder classification, and we believe that this approach will be useful in the analysis of many brain-related conditions.
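A toy version of such a pipeline, building a degree histogram per subject from thresholded voxel correlations and feeding it to an SVM, is sketched below. The synthetic data, thresholds, and bin settings are illustrative assumptions that only mirror the described idea.

```python
# Minimal sketch (assumption): degree-histogram "bag of words" features from a
# thresholded correlation network, classified with an SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def degree_histogram(time_series, corr_thresh=0.5, n_bins=20):
    """time_series: (n_voxels, n_timepoints) for one subject."""
    corr = np.corrcoef(time_series)
    np.fill_diagonal(corr, 0.0)
    degrees = (corr > corr_thresh).sum(axis=1)          # edges per voxel
    hist, _ = np.histogram(degrees, bins=n_bins, range=(0, 50), density=True)
    return hist

rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):
    for _ in range(20):
        ts = rng.standard_normal((60, 150))
        if label == 1:                                  # synthetic "connected" group
            ts[:30] += 1.5 * rng.standard_normal((1, 150))
        X.append(degree_histogram(ts)); y.append(label)

print(cross_val_score(SVC(kernel='linear'), np.array(X), np.array(y), cv=5).mean())
```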
Measurement of glucose concentration by image processing of thin film slides
Sankaranaryanan Piramanayagam, Eli Saber, David Heavner
Measurement of glucose concentration is important for the diagnosis and treatment of diabetes mellitus and other medical conditions. This paper describes a novel image-processing-based approach for measuring glucose concentration. A fluid drop (patient sample) is placed on a thin film slide. Glucose present in the sample reacts with reagents on the slide to produce a color dye. The color intensity of the dye formed varies with the glucose concentration level. Current methods use spectrophotometry to determine the glucose level of the sample. Our proposed algorithm uses an image of the slide, captured at a specific wavelength, to automatically determine the glucose concentration. The algorithm consists of two phases: training and testing. The training datasets consist of images at different concentration levels. The dye-occupied image region is first segmented using a Hough-based technique, and an intensity-based feature is then calculated from the segmented region. Subsequently, a mathematical model that describes the relationship between the generated feature values and the given concentrations is obtained. During testing, the dye region of a test slide image is segmented, followed by feature extraction. These two initial steps are similar to those performed in training. In the final step, however, the algorithm uses the model (feature vs. concentration) obtained from training and the feature generated from the test image to predict the unknown concentration. The performance of the image-based analysis was compared with that of a standard glucose analyzer.
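The calibration step (a feature-versus-concentration model fitted on the training slides and then inverted on a test feature) could look like the following; the feature values, units, and polynomial order are made up for illustration and are not taken from the paper.

```python
# Minimal sketch (assumption): fitting a calibration model between a dye-intensity
# feature and known glucose concentrations, then predicting an unknown sample.
import numpy as np

concentrations = np.array([50, 100, 200, 300, 400])      # mg/dL training slides (hypothetical)
features = np.array([0.82, 0.65, 0.44, 0.31, 0.24])      # mean dye intensity (hypothetical)

model = np.polyfit(features, concentrations, deg=2)      # concentration as a function of feature
predict = np.poly1d(model)

test_feature = 0.5
print(predict(test_feature))    # predicted concentration for the test slide
```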
Posters: Classification
Cascaded classifier for large-scale data applied to automatic segmentation of articular cartilage
Adhish Prasoon, Christian Igel, Marco Loog, et al.
Many classification/segmentation tasks in medical imaging are particularly challenging for machine learning algorithms because of the huge amount of training data required to cover biological variability. Learning methods that scale badly with the number of training data points may not be applicable. This may exclude powerful classifiers with good generalization performance, such as standard non-linear support vector machines (SVMs). Further, many medical imaging problems have highly imbalanced class populations, because the object to be segmented has only a few pixels/voxels compared to the background. This article presents a two-stage classifier for large-scale medical imaging problems. In the first stage, a classifier that is easily trainable on large data sets is employed. The class imbalance is exploited and the classifier is adjusted to correctly detect background with a very high accuracy. Only the comparatively few data points not identified as background are passed to the second stage. Here a powerful classifier with high training time complexity can be employed for making the final decision whether a data point belongs to the object or not. We applied our method to the problem of automatically segmenting tibial articular cartilage from knee MRI scans. We show that by using a k-nearest-neighbor (kNN) classifier in the first stage we can reduce the amount of data needed for training a non-linear SVM in the second stage. The cascaded system achieves better results than the state-of-the-art method relying on a single kNN classifier.
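The cascade idea can be sketched as follows: a cheap kNN stage biased to reject only confident background, and an expensive non-linear SVM trained on the much smaller surviving set. The data, features, and thresholds below are illustrative assumptions, not the paper's cartilage features or tuning.

```python
# Minimal sketch (assumption): a two-stage kNN -> SVM cascade for imbalanced data.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_bg, n_fg = 20000, 500                           # heavily imbalanced classes
X = np.vstack([rng.normal(0, 1, (n_bg, 5)), rng.normal(1.5, 1, (n_fg, 5))])
y = np.concatenate([np.zeros(n_bg), np.ones(n_fg)])

# Stage 1: cheap kNN, thresholded so that only confident background is discarded.
knn = KNeighborsClassifier(n_neighbors=15).fit(X, y)
p_fg = knn.predict_proba(X)[:, 1]
survivors = p_fg > 0.05                           # keep anything remotely foreground-like

# Stage 2: expensive non-linear SVM on the (much smaller) surviving set.
svm = SVC(kernel='rbf', gamma='scale').fit(X[survivors], y[survivors])
print(survivors.sum(), "of", len(y), "points reach the SVM stage")
```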
Digitized tissue microarray classification using sparse reconstruction
Fuyong Xing, Baiyang Liu, Xin Qi, et al.
In this paper, we propose a novel image classification method based on sparse reconstruction errors to discriminate cancerous breast tissue microarray (TMA) discs from benign ones. Sparse representation is employed to reconstruct the samples and separate the benign and cancer discs. The method consists of several steps, including mask generation, dictionary learning, and data classification. Mask generation is performed using multi-scale texton histograms, integral histograms, and AdaBoost. Two separate cancer and benign TMA dictionaries are learned using K-SVD. Sparse coefficients are calculated using orthogonal matching pursuit (OMP), and the reconstruction error of each testing sample is recorded. Each testing image is divided into many small patches, and each patch is assigned to the category that produces the smallest reconstruction error. The final classification of each testing sample is achieved by calculating the total reconstruction errors. Using standard RGB images, and tested on a dataset with 547 images, we achieved much better results than previously reported in the literature. The binary classification accuracy, sensitivity, and specificity are 88.0%, 90.6%, and 70.5%, respectively.
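The core decision rule (assign a patch to the class whose dictionary reconstructs it with the smaller OMP error) can be sketched as below. The dictionaries here are random stand-ins rather than K-SVD-trained ones, and the dimensions and sparsity level are illustrative assumptions.

```python
# Minimal sketch (assumption): class assignment by smallest OMP reconstruction error.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def reconstruction_error(dictionary, x, n_nonzero=5):
    """dictionary: (n_features, n_atoms) with atoms as columns; x: (n_features,)."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False).fit(dictionary, x)
    return np.linalg.norm(x - dictionary @ omp.coef_)

rng = np.random.default_rng(0)
D_benign = rng.standard_normal((64, 40))
D_cancer = rng.standard_normal((64, 40))
x = D_cancer[:, :3] @ np.array([1.0, -0.5, 0.8])    # sample built from "cancer" atoms

err_b = reconstruction_error(D_benign, x)
err_c = reconstruction_error(D_cancer, x)
print("cancer" if err_c < err_b else "benign")      # -> cancer
```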
Global pattern analysis and classification of dermoscopic images using textons
Maryam Sadeghi, Tim K. Lee, David McLean, et al.
Detecting and classifying global dermoscopic patterns are crucial steps for distinguishing melanocytic lesions from non-melanocytic ones. An important stage of melanoma diagnosis uses pattern analysis methods such as the 7-point checklist and the Menzies method. In this paper, we present a novel approach to texture analysis and classification of 5 classes of global lesion patterns (reticular, globular, cobblestone, homogeneous, and parallel) in dermoscopic images. Our statistical approach models the texture by the joint probability distribution of filter responses using a comprehensive set of state-of-the-art filter banks. This distribution is represented by the frequency histogram of filter response cluster centers, called textons. We have also examined two other methods, the Joint Distribution of Intensities (JDI) and the Convolutional Restricted Boltzmann Machine (CRBM), to learn pattern-specific features to be used as textons. The classification performance is compared over the Leung and Malik (LM) filters, the Root Filter Set (RFS), the Maximum Response (MR8) filters, the Schmid and Laws filters, and our proposed filter set, as well as CRBM and JDI. We analyzed 375 images of the 5 pattern classes. Our experiments show that the joint distribution of color (JDC) in the L*a*b* color space outperforms the other color spaces, with a correct classification rate of 86.8%.
Texture analysis using Minkowski functionals
Xiaoxing Li, Paulo R. S. Mendonça, Rahul Bhotika
Minkowski functionals (MFs) are geometric measurements of 3D shapes, including volume, surface area, curvature, and the Euler number. MFs can be used as texture descriptors for medical image analysis in the segmentation of normal anatomy as well as in the detection/diagnosis of pathology. In this paper, we propose a method for fast computation of MFs based on integral images, which offers significantly improved accuracy and efficiency compared with previous work. In addition, MFs computed using our method are applied to image segmentation and pathology detection. Our experimental results clearly demonstrate the potential of MFs in such medical image analysis tasks.
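The integral-image (summed-area table) primitive behind such fast local measurements is shown below in 2D: any rectangular sum is obtained in constant time from four look-ups. This is a generic sketch, not the paper's 3D MF computation.

```python
# Minimal sketch (assumption): integral image and O(1) box sums.
import numpy as np

def integral_image(img):
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from the integral image ii."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0: total -= ii[r0 - 1, c1 - 1]
    if c0 > 0: total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
    return total

rng = np.random.default_rng(0)
img = rng.random((32, 32))
ii = integral_image(img)
print(np.isclose(box_sum(ii, 5, 7, 20, 25), img[5:20, 7:25].sum()))   # True
```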
A novel online Variance Based Instance Selection (VBIS) method for efficient atypicality detection in chest radiographs
Chest radiographs are complex, heterogeneous medical images that depict many different types of tissues and many different types of abnormalities. A radiologist develops a sense of what visual textures are typical for each anatomic region within chest radiographs by viewing a large set of "normal" radiographs over a period of years. As a result, an expert radiologist is able to readily detect atypical features. In our previous research, we modeled this type of learning by (1) collecting a large set of "normal" chest radiographs, (2) extracting local textural and contour features from anatomical regions within these radiographs, in the form of high-dimensional feature vectors, (3) using a distance-based transductive machine learning method to learn what is typical for each anatomical region, and (4) computing atypicality scores for the anatomical regions in test radiographs. That research demonstrated that the transductive One-Nearest-Neighbor (1NN) method was effective for identifying atypical regions in chest radiographs. However, the large set of training instances (and the need to compute a distance to each of these instances in a high-dimensional space) made the transductive method computationally expensive. This paper discusses a novel online Variance Based Instance Selection (VBIS) method for use with the nearest neighbor classifier, which (1) substantially reduced the computational cost of the transductive 1NN method while maintaining a high level of effectiveness in identifying regions of chest radiographs with atypical content, and (2) allowed the incremental incorporation of training data from new informative chest radiographs as they are encountered in day-to-day clinical work.
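The underlying atypicality idea (score a test feature vector by its distance to the closest "normal" training vector) is easy to illustrate, independent of the VBIS instance selection itself; the feature dimensions and synthetic data below are illustrative assumptions.

```python
# Minimal sketch (assumption): 1NN-based atypicality scores; larger distance to
# the nearest "normal" instance means a more atypical region.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
normal_train = rng.normal(0, 1, (5000, 32))      # features from "normal" radiographs
nn = NearestNeighbors(n_neighbors=1).fit(normal_train)

test = np.vstack([rng.normal(0, 1, (5, 32)),     # typical test regions
                  rng.normal(4, 1, (5, 32))])    # atypical test regions
scores, _ = nn.kneighbors(test)
print(scores.ravel().round(2))                   # the last five scores are much larger
```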
Posters: Motion
Low bandwidth eye tracker for scanning laser ophthalmoscopy
Zachary G. Harvey, Alfredo Dubra, Nathan D. Cahill, et al.
The incorporation of adaptive optics into scanning ophthalmoscopes (AOSOs) has allowed for in vivo, noninvasive imaging of the human rod and cone photoreceptor mosaics. Light safety restrictions and power limitations of the current low-coherence light sources available for imaging result in each individual raw image having a low signal-to-noise ratio (SNR). To date, the only approach used to increase the SNR has been to collect a large number of raw images (N > 50), to register them to remove the distortions due to involuntary eye motion, and then to average them. The large amplitude of involuntary eye motion with respect to the AOSO field of view (FOV) dictates that an even larger number of images needs to be collected at each retinal location to ensure adequate SNR over the feature of interest. Compensating for eye motion during image acquisition to keep the feature of interest within the FOV could reduce the number of raw frames required per retinal feature, and therefore significantly reduce the imaging time, storage requirements, post-processing times and, more importantly, the subject's exposure to light. In this paper, we present a particular implementation of an AOSO, termed the adaptive optics scanning light ophthalmoscope (AOSLO), equipped with a simple eye tracking system capable of compensating for eye drift by estimating the eye motion from the raw frames and by using a tip-tilt mirror to compensate for it in a closed loop. Multiple control strategies were evaluated to minimize the image distortion introduced by the tracker itself. Also, linear, quadratic, and Kalman filter motion prediction algorithms were implemented and tested using both simulated motion (sinusoidal motion with varying frequencies) and human subjects. The residual displacement of the retinal features was used to compare the performance of the different correction strategies and prediction methods.
Estimation of trabecular thickness in gray-scale images through granulometric analysis
Rodrigo Moreno, Magnus Borga, Örjan Smedby
This paper extends to gray-scale images the method proposed by Hildebrand and Rüegsegger for estimating the thickness of trabecular bone, which is the most widely used method in trabecular bone research; in it, the local thickness at a point is defined as the diameter of the largest inscribed ball that includes that point. The proposed extension takes advantage of the equivalence between this method and the opening function computed for the granulometry generated by the morphological opening operation with ball-shaped structuring elements of different diameters. The proposed extension (a) uses gray-scale instead of binary mathematical morphology, (b) uses all values of the pattern spectrum of the granulometry instead of only the maximum peak used for binary images, (c) corrects the bias in local thickness estimates generated by partial volume effects, and (d) uses the gray-scale values as a weighting function for the global thickness estimation. The proposed extension becomes equivalent to the original method when it is applied to binary images. A new non-flat structuring element is also proposed in order to reduce the discretization errors generated by traditional flat structuring elements. Translation invariance can be attained by up-sampling the images through interpolation by a factor of two. Results for synthetic and real images show that the quality of the measurements obtained with the original method strongly depends on the binarization process, whereas the measurements obtained with the proposed extension do not. Consequently, the proposed extension is more appropriate for images with limited resolution, where binarization is not trivial.
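A toy gray-scale granulometry in this spirit is shown below, using flat square structuring elements rather than the ball-shaped and non-flat elements of the paper; the pattern spectrum is the image mass removed at each opening size, and its peaks reflect the structure thicknesses. All sizes and the phantom are illustrative assumptions.

```python
# Minimal sketch (assumption): gray-scale granulometry via morphological openings
# of increasing size; the pattern spectrum records the mass removed per size step.
import numpy as np
from scipy import ndimage

def pattern_spectrum(image, max_size=8):
    volumes = [image.sum()]
    for size in range(1, max_size + 1):
        opened = ndimage.grey_opening(image, size=(2 * size + 1, 2 * size + 1))
        volumes.append(opened.sum())
    return -np.diff(volumes)          # mass removed at each structure size

# Two "trabeculae": a thin and a thick bright bar on a dark background.
img = np.zeros((64, 64))
img[10:13, 5:60] = 1.0                # ~3 px thick
img[40:49, 5:60] = 1.0                # ~9 px thick
print(pattern_spectrum(img).astype(int))   # peaks at the sizes removing each bar
```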
SinoCor: motion correction in SPECT
Debasis Mitra, Daniel Eiland, Mahmoud Abdallah, et al.
Motion is a serious artifact in cardiac nuclear imaging because the scanning operation takes a long time. Since reconstruction algorithms assume consistent or stationary data, the quality of the resulting image is affected by motion, sometimes significantly. Even after the adoption of the gold-standard MoCo(R) algorithm from Cedars-Sinai by most vendors, heart motion remains a significant challenge. Also, any serious study in quantitative analysis necessitates correction for motion artifacts. It is generally recognized that the human eye is a very sensitive tool for detecting motion. However, two reasons prevent such manual correction: (1) it is costly in terms of a specialist's time, and (2) no tool for manual correction is currently available. Previously, at SPIE-MIC'11, we presented a simple tool (SinoCor) that allows sinograms to be corrected manually or automatically. SinoCor performs correction of sinograms containing inter-frame patient or respiratory motion using rigid-body dynamics. The software is capable of detecting patient motion and estimating the body-motion vector using the scanning geometry parameters. SinoCor applies an appropriate geometrical correction to all frames subsequent to the frame at which the movement occurred, in either manual or automated mode. For respiratory motion, it is capable of automatically smoothing small oscillatory (frame-wise local) movements. Lower-order image moments are used to represent a frame, and the required rigid-body movement compensation is computed accordingly. Our current focus is on enhancing SinoCor with the capability to automatically detect and compensate for intra-frame motion, which causes motion blur in the affected frame. Intra-frame movements are expected in both patient and respiratory motion. For controlled studies we have also developed a motion simulator. A stable version of SinoCor is available under license from Lawrence Berkeley National Laboratory.
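The low-order-moment representation mentioned above can be illustrated with first-order moments (centroids): the centroid displacement between two frames gives a simple rigid translation estimate. This sketch is only an illustration of that idea, not SinoCor's implementation.

```python
# Minimal sketch (assumption): inter-frame shift estimation from first-order
# image moments (centroid displacement).
import numpy as np

def centroid(frame):
    rows, cols = np.indices(frame.shape)
    total = frame.sum()
    return np.array([(rows * frame).sum(), (cols * frame).sum()]) / total

def estimate_shift(frame_a, frame_b):
    return centroid(frame_b) - centroid(frame_a)

frame_a = np.zeros((64, 64)); frame_a[20:30, 20:30] = 1.0
frame_b = np.roll(frame_a, shift=(3, -2), axis=(0, 1))   # simulated patient shift
print(estimate_shift(frame_a, frame_b))                  # -> [ 3. -2.]
```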
Automatic analysis of ciliary beat frequency using optical flow
Michael Figl, Manuel Lechner, Tobias Werther, et al.
Ciliary beat frequency (CBF) can be a useful parameter for the diagnosis of several diseases, such as primary ciliary dyskinesia (PCD). CBF computation is usually done by manual evaluation of high-speed video sequences, a tedious, observer-dependent, and not very accurate procedure. We used OpenCV's pyramidal implementation of the Lucas-Kanade algorithm for optical flow computation and applied it to selected objects to follow their movements. The objects were chosen by their contrast using the Shi-Tomasi corner detector. Discrimination between background/noise and cilia by a frequency histogram allowed the CBF to be computed. Frequency analysis was done using the Fourier transform in MATLAB. The correct number of Fourier summands was found from the slope of an approximation curve. The method proved usable for distinguishing between healthy and diseased samples. However, difficulties remain in automatically identifying the cilia and in finding enough high-contrast cilia in the image. Furthermore, some of the higher-contrast cilia are lost (and sometimes re-found) by the method, and an easy way to identify the correct sub-path of a point's trajectory has yet to be found for the cases where the slope method does not work.
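A Python approximation of this pipeline is sketched below (the paper used OpenCV for the flow and MATLAB for the Fourier analysis): Shi-Tomasi corners are tracked with pyramidal Lucas-Kanade, and the dominant FFT peak of each trajectory is taken as that point's beat frequency. The function name, parameters, and use of only the vertical displacement are illustrative assumptions.

```python
# Minimal sketch (assumption): track corners with pyramidal Lucas-Kanade and
# take the dominant FFT peak of each trajectory as its beat frequency.
import numpy as np
import cv2

def beat_frequencies(frames, fps, max_points=50):
    """frames: list of 8-bit grayscale images; returns one frequency per tracked point."""
    p0 = cv2.goodFeaturesToTrack(frames[0], maxCorners=max_points,
                                 qualityLevel=0.01, minDistance=5)
    pts, prev = [p0], frames[0]
    for frame in frames[1:]:
        p1, status, err = cv2.calcOpticalFlowPyrLK(prev, frame, pts[-1], None)
        pts.append(p1)
        prev = frame
    traj = np.stack([p.reshape(-1, 2) for p in pts])     # (n_frames, n_points, 2)
    y = traj[:, :, 1] - traj[:, :, 1].mean(axis=0)       # vertical displacement per point
    spectrum = np.abs(np.fft.rfft(y, axis=0))
    freqs = np.fft.rfftfreq(len(frames), d=1.0 / fps)
    return freqs[np.argmax(spectrum[1:], axis=0) + 1]    # skip the DC bin
```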
Four-dimensional non-rigid cardiac motion estimation
Qiulin Tang, Jochen Cammin, Somesh Srivastava, et al.
Electrocardiogram-gated cardiac CT reconstruction methods have been developed to reduce motion artifacts; however, the projection data used in reconstruction are limited to those within the gating time windows, resulting in large image noise. Motion-compensated image reconstruction is capable of fully utilizing all projection data if a motion vector field is known. In this work, we propose a non-rigid, four-dimensional, image-based motion estimation method which uses a nested conjugate gradient method to minimize a cost function. The proposed method is implemented on a GPU using CUDA, and its performance was verified with patient data.