Proceedings Volume 7962

Medical Imaging 2011: Image Processing

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 7 March 2011
Contents: 20 Sessions, 174 Papers, 0 Presentations
Conference: SPIE Medical Imaging 2011
Volume Number: 7962

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 7962
  • Keynote and Segmentation I
  • Cardiac Applications
  • Skeletal and Orthopedic Applications
  • 2D Image Analysis
  • Brain Structure and DTI
  • Registration I
  • Shape Methods and Applications
  • Segmentation II
  • Registration II
  • Image Enhancement/Classification
  • Segmentation of Vascular Images
  • Posters: Registration
  • Posters: Atlases
  • Posters: Segmentation
  • Posters: Classification
  • Posters: Shape
  • Posters: Motion Analysis
  • Posters: DTI and Function
  • Posters: Enhancement
Front Matter: Volume 7962
This PDF file contains the front matter associated with SPIE Proceedings Volume 7962, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
Keynote and Segmentation I
Comparison of fuzzy connectedness and graph cut segmentation algorithms
The goal of this paper is a theoretical and experimental comparison of two popular image segmentation algorithms: fuzzy connectedness (FC) and graph cut (GC). On the theoretical side, our emphasis will be on describing a common framework in which both of these methods can be expressed. We will give a full analysis of the framework and describe precisely the place that each of the two methods occupies in it. Within the same framework, other region-based segmentation methods, such as watershed, can also be expressed. We will also discuss in detail the relationship between FC segmentations obtained via image foresting transform (IFT) algorithms and FC segmentations obtained by other standard versions of FC algorithms. We also present an experimental comparison of the performance of FC and GC algorithms. This concentrates on comparing the actual (as opposed to provable worst-case) running times of the algorithms, as well as the influence of the choice of seeds on the output.
Automated multimodality concurrent classification for segmenting vessels in 3D spectral OCT and color fundus images
Zhihong Hu, Michael D. Abràmoff, Meindert Niemeijer, et al.
Segmenting vessels in spectral-domain optical coherence tomography (SD-OCT) volumes is particularly challenging in the region near and inside the neural canal opening (NCO). Furthermore, accurately segmenting them in color fundus photographs also presents a challenge near the projected NCO. However, both modalities also provide complementary information to help indicate vessels, such as a better NCO contrast from the NCO-aimed OCT projection image and a better vessel contrast inside the NCO from fundus photographs. We thus present a novel multimodal automated classification approach for simultaneously segmenting vessels in SD-OCT volumes and fundus photographs, with a particular focus on better segmenting vessels near and inside the NCO by using a combination of their complementary features. In particular, in each SD-OCT volume, the algorithm pre-segments the NCO using a graph-theoretic approach and then applies oriented Gabor wavelets with oriented NCO-based templates to generate OCT image features. After fundus-to-OCT registration, the fundus image features are computed using Gaussian filter banks and combined with the OCT image features. A k-NN classifier is trained on 5 and tested on 10 randomly chosen independent image pairs of SD-OCT volumes and fundus images from 15 subjects with glaucoma. Using ROC analysis, we demonstrate an improvement over the two closest previous works, both performed on single-modal SD-OCT volumes, with an area under the curve (AUC) of 0.87 in the region around the NCO (0.81 for our single-modal approach and 0.72 for Niemeijer's) and 0.90 outside the NCO (0.84 for our single-modal approach and 0.81 for Niemeijer's).
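As a concrete illustration of the classification step, the sketch below trains a pixel-wise k-NN classifier on concatenated OCT and fundus features; the feature matrices, neighbourhood size and function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of multimodal pixel-wise k-NN vessel classification.
# Assumes per-pixel feature matrices (e.g. Gabor responses for OCT, Gaussian
# filter-bank responses for fundus) have already been extracted and registered.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def train_vessel_classifier(oct_features, fundus_features, labels, k=15):
    """oct_features, fundus_features: (n_pixels, n_feat) arrays from training
    image pairs; labels: (n_pixels,) binary vessel labels."""
    X = np.hstack([oct_features, fundus_features])
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(X, labels)
    return clf

def vessel_probability(clf, oct_features, fundus_features):
    """Per-pixel vessel probability from the soft k-NN vote."""
    X = np.hstack([oct_features, fundus_features])
    return clf.predict_proba(X)[:, 1]
```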
Cardiac Applications
Simultaneous detection of landmarks and key-frame in cardiac perfusion MRI using a joint spatial-temporal context model
Xiaoguang Lu, Hui Xue, Marie-Pierre Jolly, et al.
Cardiac perfusion magnetic resonance imaging (MRI) has proven clinical significance in the diagnosis of heart diseases. However, analysis of perfusion data is time-consuming; automatic detection of anatomic landmarks and key frames from perfusion MR sequences helps anchor structures and supports functional analysis of the heart, leading toward fully automated perfusion analysis. Learning-based object detection methods have demonstrated their capability to handle large variations of the object by exploring a local region, i.e., context. Conventional 2D approaches take into account spatial context only. Temporal signals in perfusion data present a strong cue for anchoring. We propose a joint context model to encode both spatial and temporal evidence. In addition, our spatial context is constructed not only based on the landmark of interest, but also on landmarks that are correlated in the neighboring anatomies. A discriminative model is learned through a probabilistic boosting tree. A marginal space learning strategy is applied to efficiently learn and search in a high-dimensional parameter space. A fully automatic system is developed to simultaneously detect anatomic landmarks and key frames in both the RV and LV from perfusion sequences. The proposed approach was evaluated on a database of 373 cardiac perfusion MRI sequences from 77 patients. Experimental results of a 4-fold cross validation show superior landmark detection accuracy of the proposed joint spatial-temporal approach over the 2D approach that is based on spatial context only. The key-frame identification results are promising.
Statistical fusion of continuous labels: identification of cardiac landmarks
Fangxu Xing, Sahar Soleimanifard, Jerry L. Prince, et al.
Image labeling is an essential task for evaluating and analyzing morphometric features in medical imaging data. Labels can be obtained by either human interaction or automated segmentation algorithms. However, both approaches to labeling suffer from inevitable error due to noise and artifact in the acquired data. The Simultaneous Truth And Performance Level Estimation (STAPLE) algorithm was developed to combine multiple rater decisions and simultaneously estimate unobserved true labels as well as each rater's level of performance (i.e., reliability). A generalization of STAPLE for the case of continuous-valued labels has also been proposed. In this paper, we first show that with the proposed Gaussian distribution assumption, this continuous STAPLE formulation yields equivalent likelihoods for the bias parameter, meaning that the bias parameter, one of the key performance indices, is actually indeterminate. We resolve this ambiguity by augmenting the STAPLE expectation maximization formulation to include a priori probabilities on the performance level parameters, which enables simultaneous, meaningful estimation of both the rater bias and variance performance measures. We evaluate and demonstrate the efficacy of this approach in simulations and also through a human rater experiment involving the identification of the intersection points of the right ventricle with the left ventricle in CINE cardiac data.
Automated planning of ablation targets in atrial fibrillation treatment
Johannes Keustermans, Stijn De Buck, Hein Heidbüchel, et al.
Catheter-based radio-frequency ablation is used as an invasive treatment of atrial fibrillation. This procedure is often guided by the use of 3D anatomical models obtained from CT, MRI or rotational angiography. During the intervention the operator accurately guides the catheter to prespecified target ablation lines. The planning stage, however, can be time-consuming and operator-dependent, which is suboptimal from both a cost and a health perspective. Therefore, we present a novel statistical model-based algorithm for locating ablation targets from 3D rotational angiography images. Based on a training data set of 20 patients, consisting of 3D rotational angiography images with 30 manually indicated ablation points, a statistical local appearance and shape model is built. The local appearance model is based on local image descriptors to capture the intensity patterns around each ablation point. The local shape model is constructed by embedding the ablation points in an undirected graph and imposing that each ablation point only interacts with its neighbors. Identifying the ablation points on a new 3D rotational angiography image is performed by proposing a set of possible candidate locations for each ablation point, thereby converting the problem into a labeling problem. The algorithm is validated using a leave-one-out approach on the training data set, by computing the distance between the ablation lines obtained by the algorithm and the manually identified ablation points. The distance error is equal to 3.8±2.9 mm. As the ablation lesion size is around 5-7 mm, automated planning of ablation targets by the presented approach is sufficiently accurate.
Groupwise registration of cardiac perfusion MRI sequences using normalized mutual information in high dimension
Sameh Hamrouni, Nicolas Rougon, Françoise Prêteux
In perfusion MRI (p-MRI) exams, short-axis (SA) image sequences are captured at multiple slice levels along the long-axis of the heart during the transit of a vascular contrast agent (Gd-DTPA) through the cardiac chambers and muscle. Compensating cardio-thoracic motions is a requirement for enabling computer-aided quantitative assessment of myocardial ischaemia from contrast-enhanced p-MRI sequences. The classical paradigm consists of registering each sequence frame on a reference image using some intensity-based matching criterion. In this paper, we introduce a novel unsupervised method for the spatio-temporal groupwise registration of cardiac p-MRI exams based on normalized mutual information (NMI) between high-dimensional feature distributions. Here, local contrast enhancement curves are used as a dense set of spatio-temporal features, and statistically matched through variational optimization to a target feature distribution derived from a registered reference template. The hard issue of probability density estimation in high-dimensional state spaces is bypassed by using consistent geometric entropy estimators, allowing NMI to be computed directly from feature samples. Specifically, a computationally efficient kth-nearest neighbor (kNN) estimation framework is retained, leading to closed-form expressions for the gradient flow of NMI over finite- and infinite-dimensional motion spaces. This approach is applied to the groupwise alignment of cardiac p-MRI exams using a free-form Deformation (FFD) model for cardio-thoracic motions. Experiments on simulated and natural datasets suggest its accuracy and robustness for registering p-MRI exams comprising more than 30 frames.
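The key numerical ingredient here, estimating entropies directly from samples of a high-dimensional feature distribution, can be illustrated with the standard Kozachenko-Leonenko k-nearest-neighbour estimator; the sketch below is a generic version of such geometric estimators, not the authors' exact formulation or its variational gradient.

```python
# k-NN (Kozachenko-Leonenko) entropy estimate and a sample-based NMI built from it.
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def knn_entropy(samples, k=5):
    """Entropy (nats) of d-dimensional feature samples, shape (N, d)."""
    samples = np.atleast_2d(np.asarray(samples, dtype=float))
    n, d = samples.shape
    dists, _ = cKDTree(samples).query(samples, k=k + 1)
    eps = dists[:, -1]                       # distance to the k-th neighbour
    log_unit_ball = (d / 2.0) * np.log(np.pi) - gammaln(d / 2.0 + 1.0)
    return digamma(n) - digamma(k) + log_unit_ball + d * np.mean(np.log(eps + 1e-12))

def knn_nmi(x, y, k=5):
    """Normalized mutual information (H(X)+H(Y))/H(X,Y) from feature samples."""
    return (knn_entropy(x, k) + knn_entropy(y, k)) / knn_entropy(np.hstack([x, y]), k)
```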
A comparison of cost functions for data-driven motion estimation in myocardial perfusion SPECT imaging
Joyeeta Mitra Mukherjee, P. H. Pretorius, K. L. Johnson, et al.
In myocardial perfusion SPECT imaging, patient motion during acquisition causes severe artifacts in about 5% of studies. Motion estimation strategies commonly used are a) data-driven, where the motion may be determined by registration and checking consistency with the SPECT acquisition data, and b) external surrogate-based, where the motion is obtained from a dedicated motion-tracking system. In this paper a data-driven strategy similar to a 2D-3D registration scheme with multiple views is investigated, using a partially reconstructed heart for the 3D model. The partially reconstructed heart has inaccuracies due to limited-angle artifacts resulting from using only a part of the SPECT projections acquired while the patient maintained the same pose. The goal of this paper is to compare the performance of different cost functions in quantifying consistency with the SPECT projection data in a registration-based scheme for motion estimation as the image quality of the 3D model degrades. Six intensity-based metrics were studied: mean-squared difference (MSD), mutual information (MI), normalized mutual information (NMI), pattern intensity (PI), normalized cross-correlation (NCC) and entropy of the difference image (EDI). Quantitative and qualitative analysis of the performance is reported using Monte-Carlo simulations of a realistic heart phantom including degradation factors such as attenuation, scatter and collimator blurring. Further, the image quality of motion-corrected images using data-driven motion estimates was compared to that obtained using the external motion-tracking system in acquisitions of anthropomorphic phantoms and patient studies in a real clinical setting. The pattern intensity and normalized mutual information cost functions were observed to have the best performance in terms of lowest average position error and stability with degradation of image quality of the partial reconstruction in simulations and anthropomorphic phantom acquisitions. In patient studies, NMI-based data-driven estimates yielded image quality comparable to that obtained using external motion tracking.
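For reference, minimal NumPy versions of three of the studied metrics (MSD, NCC and a joint-histogram NMI) are sketched below; the bin count and normalization choices are generic, not necessarily those used in the paper.

```python
import numpy as np

def msd(a, b):
    """Mean-squared difference between two intensity images/projections."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def ncc(a, b):
    """Normalized cross-correlation."""
    a = a.astype(float).ravel() - a.mean()
    b = b.astype(float).ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def nmi(a, b, bins=64):
    """Normalized mutual information (H(A)+H(B))/H(A,B) from a joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    h = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))
    return float((h(px) + h(py)) / h(pxy))
```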
Automatic evaluation of the Valsalva sinuses from cine-MRI
Cédric Blanchard, Tadeusz Sliwa, Alain Lalande, et al.
MRI appears to be particularly attractive for the study of the Sinuses of Valsalva (SV); however, there is no global consensus on which measurements of them are most suitable. In this paper, we propose a new method, based on mathematical morphology and combining a numerical geodesic reconstruction with an area estimation, to automatically evaluate the SV from cine-MRI in a cross-sectional orientation. It consists of extracting the shape of the SV, detecting relevant points (the commissures, the cusps and the centre of the SV), measuring relevant distances, and classifying the valve as bicuspid or tricuspid by a metric evaluation of the SV. Our method was tested on 23 patient examinations and the radii calculations were compared with manual measurements. The classification of the valve as tricuspid or bicuspid was correct in all cases. Moreover, there is excellent correlation and concordance between manual and automatic measurements for images at the diastolic phase (r = 0.97; y = x - 0.02; p = NS; mean of differences = -0.1 mm; standard deviation of differences = 2.3 mm) and at the systolic phase (r = 0.96; y = 0.97x + 0.80; p = NS; mean of differences = -0.1 mm; standard deviation of differences = 2.4 mm). The cross-sectional orientation of the image acquisition plane, combined with our automatic method, provides a reliable morphometric evaluation of the SV, based on the automatic location of the centre of the SV and of the commissure and cusp positions. Measurements of distances between relevant points allow a precise evaluation of the SV.
Skeletal and Orthopedic Applications
A variational approach to bone segmentation in CT images
Jeff Calder, Amir M. Tahmasebi, Abdol-Reza Mansouri
We present a variational approach for segmenting bone structures in Computed Tomography (CT) images. We introduce a novel functional on the space of image segmentations, and subsequently minimize this functional through a gradient descent partial differential equation. The functional we propose provides a measure of similarity of the intensity characteristics of the bone and tissue regions through a comparison of their cumulative distribution functions; minimizing this similarity measure therefore yields the maximal separation between the two regions. We perform the minimization of our proposed functional using level set partial differential equations; in addition to numerical stability, this yields topology independence, which is especially useful in the context of CT bone segmentation where a bone region may consist of several disjoint pieces. Finally, we present an extensive validation of our method against expert manual segmentation on CT images of the wrist, ankle, foot, and pelvis.
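To make the idea concrete, the sketch below scores how well a candidate segmentation separates bone from tissue by the mean absolute difference between the two regions' empirical intensity CDFs; this scalar is only a stand-in for the functional minimized in the paper, and the function names and grid resolution are illustrative.

```python
import numpy as np

def empirical_cdf(values, grid):
    """Empirical CDF of a set of intensities, evaluated on a fixed grid."""
    values = np.sort(np.asarray(values, dtype=float))
    return np.searchsorted(values, grid, side='right') / float(len(values))

def cdf_separation(bone_intensities, tissue_intensities, n_grid=256):
    """Mean absolute difference between the bone and tissue intensity CDFs;
    larger values indicate better-separated regions."""
    lo = min(bone_intensities.min(), tissue_intensities.min())
    hi = max(bone_intensities.max(), tissue_intensities.max())
    grid = np.linspace(lo, hi, n_grid)
    return float(np.mean(np.abs(empirical_cdf(bone_intensities, grid)
                                - empirical_cdf(tissue_intensities, grid))))
```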
Fully automatic detection of the vertebrae in 2D CT images
Franz Graf, Hans-Peter Kriegel, Matthias Schubert, et al.
Knowledge about the vertebrae is a valuable source of information for several annotation tasks. In recent years, the research community has spent considerable effort on detecting, segmenting and analyzing the vertebrae and the spine in various image modalities such as CT or MR. Most of these methods rely on prior knowledge like the location of the vertebrae or other initial information such as the manual detection of the spine. Furthermore, the majority of these methods require a complete volume scan. Since there are use cases where only a single slice is available, there is a demand for methods that can detect the vertebrae in 2D images. In this paper, we propose a fully automatic and parameterless algorithm for detecting the vertebrae in 2D CT images. Our algorithm starts by detecting candidate locations, taking the density of bone-like structures into account. Afterwards, the candidate locations are extended into candidate regions for which certain image features are extracted. The resulting feature vectors are compared to a sample set of previously annotated and processed images in order to determine the best candidate region. In a final step, the result region is readjusted until it converges to a locally optimal position. Our new method is validated on a real-world data set of more than 9,329 images from 34 patients, annotated by a clinician in order to provide a realistic ground truth.
Segmentation of vertebral bodies in CT and MR images based on 3D deterministic models
Darko Štern, Tomaž Vrtovec, Franjo Pernuš, et al.
The evaluation of vertebral deformations is of great importance in clinical diagnostics and therapy of pathological conditions affecting the spine. Although modern clinical practice is oriented towards computed tomography (CT) and magnetic resonance (MR) imaging techniques, as they can provide a detailed 3D representation of vertebrae, the established methods for the evaluation of vertebral deformations still provide only a two-dimensional (2D) geometrical description. Segmentation of vertebrae in 3D may therefore not only improve their visualization, but also provide reliable and accurate 3D measurements of vertebral deformations. In this paper we propose a method for 3D segmentation of individual vertebral bodies that can be performed in CT and MR images. Initialized with a single point inside the vertebral body, the segmentation is performed by optimizing the parameters of a 3D deterministic model of the vertebral body to achieve the best match of the model to the vertebral body in the image. The performance of the proposed method was evaluated on five CT (40 vertebrae) and five T2-weighted MR (40 vertebrae) spine images, of which five are normal and five are pathological. The results show that the proposed method can be used for 3D segmentation of vertebral bodies in CT and MR images and that the proposed model can describe a variety of vertebral body shapes. The method may therefore be used for initializing whole-vertebra segmentation or for reliably describing vertebral body deformations.
Manifold learning for automatically predicting articular cartilage morphology in the knee with data from the osteoarthritis initiative (OAI)
C. Donoghue, A. Rao, A. M. J. Bull, et al.
Osteoarthritis (OA) is a degenerative, debilitating disease with a large socio-economic impact. This study looks to manifold learning as an automatic approach to harness the plethora of data provided by the Osteoarthritis Initiative (OAI). We construct several Laplacian Eigenmap embeddings of articular cartilage appearance from MR images of the knee using multiple MR sequences. A region of interest (ROI) defined as the weight bearing medial femur is automatically located in all images through non-rigid registration. A pairwise intensity based similarity measure is computed between all images, resulting in a fully connected graph, where each vertex represents an image and the weight of edges is the similarity measure. Spectral analysis is then applied to these pairwise similarities, which acts to reduce the dimensionality non-linearly and embeds these images in a manifold representation. In the manifold space, images that are close to each other are considered to be more "similar" than those far away. In the experiment presented here we use manifold learning to automatically predict the morphological changes in the articular cartilage by using the co-ordinates of the images in the manifold as independent variables for multiple linear regression. In the study presented here five manifolds are generated from five sequences of 390 distinct knees. We find statistically significant correlations (up to R2 = 0.75), between our predictors and the results presented in the literature.
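The embedding-and-regression pipeline can be sketched as follows; the normalized-Laplacian spectral embedding and the linear fit are generic illustrations (the precomputed similarity_matrix and the morphological measurements y are assumed inputs), not the exact construction used in the study.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.linear_model import LinearRegression

def laplacian_eigenmap(similarity, n_dims=3):
    """Embed images into n_dims coordinates from a symmetric pairwise
    similarity matrix, via the normalized graph Laplacian."""
    W = np.asarray(similarity, dtype=float)
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    L_sym = np.eye(len(W)) - (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]
    vals, vecs = eigh(L_sym)                 # eigenvalues in ascending order
    return vecs[:, 1:n_dims + 1]             # skip the trivial constant mode

# Illustrative use: regress a cartilage morphology measure on the coordinates.
# coords = laplacian_eigenmap(similarity_matrix, n_dims=3)
# model = LinearRegression().fit(coords, y)
# r_squared = model.score(coords, y)
```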
Determination of vertebral pose in 3D by minimization of vertebral asymmetry
Tomaž Vrtovec, Franjo Pernuš, Boštjan Likar
The vertebral pose in three dimensions (3D) may provide valuable information for quantitative clinical measurements or aid the initialization of image analysis techniques. We propose a method for automated determination of the vertebral pose in 3D that, in an iterative registration scheme, estimates the position and rotation of the vertebral coordinate system in 3D images. By searching for the hypothetical points, which are located where the boundaries of anatomical structures would have maximal symmetrical correspondences when mirrored over the vertebral planes, the asymmetry of vertebral anatomical structures is minimized. The method was evaluated on 14 normal and 14 scoliotic vertebrae in images acquired by computed tomography (CT). For each vertebra, 1000 randomly initialized experiments were performed. The results show that the vertebral pose can be successfully determined in 3D with a mean accuracy of 0.5 mm and 0.6°, and a mean precision of 0.17 mm and 0.17°, for the 3D position and 3D rotation, respectively.
Femur specific polyaffine model to regularize the log-domain demons registration
Christof Seiler, Xavier Pennec, Lucas Ritacco, et al.
Osteoarticular allograft transplantation is a popular treatment method in wide surgical resections with large defects. For this reason hospitals are building bone data banks. Performing the optimal allograft selection from bone banks is crucial to the surgical outcome and patient recovery. However, current approaches are very time-consuming, hindering efficient selection. We present an automatic method based on registration of femur bones to overcome this limitation. We introduce a new regularization term for the log-domain demons algorithm. This term replaces the standard Gaussian smoothing with a femur-specific polyaffine model. The polyaffine femur model is constructed with two affine (femoral head and condyles) and one rigid (shaft) transformation. Our main contribution in this paper is to show that the demons algorithm can be improved in specific cases with an appropriate model. We are not trying to find the optimal polyaffine model of the femur, but the simplest model with a minimal number of parameters. There is no need to optimize over different numbers of regions, boundaries and choices of weights, since this fine tuning will be done automatically by a final demons relaxation step with Gaussian smoothing. The newly developed approach provides a clear, anatomically motivated modeling contribution through the specific three-component transformation model, and clearly shows a performance improvement (in terms of anatomically meaningful correspondences) on 146 CT images of femurs compared to a standard multiresolution demons algorithm. In addition, this simple model improves the robustness of the demons while preserving its accuracy. The ground truth consists of manual measurements performed by medical experts.
Segmentation of knee joints in x-ray images using decomposition-based sweeping and graph search
Jian Mu, Xiaomin Liu, Shuang Luan, et al.
Plain radiography (i.e., X-ray imaging) provides an effective and economical imaging modality for diagnosing knee illnesses and injuries. Automatically segmenting and analyzing knee radiographs is a challenging problem. In this paper, we present a new approach for accurately segmenting the knee joint in X-ray images. We first use the Gaussian high-pass filter to remove homogeneous regions which are unlikely to appear on bone contours. We then presegment the bones and develop a novel decomposition-based sweeping algorithm for extracting bone contour topology from the filtered skeletonized images. Our sweeping algorithm decomposes the bone structures into several relatively simple components and deals with each component separately based on its geometric characteristics using a sweeping strategy. Utilizing the presegmentation, we construct a graph to model the bone topology and apply an optimal graph search algorithm to optimize the segmentation results (with respect to our cost function defined on the bone boundaries). Our segmented results match well with the manual tracing results by radiologists. Our segmentation approach can be a valuable tool for assisting radiologists and X-ray technologists in clinical practice and training.
2D Image Analysis
Integrated segmentation of cellular structures
Peter Ajemba, Yousef Al-Kofahi, Richard Scott, et al.
Automatic segmentation of cellular structures is an essential step in image cytology and histology. Despite substantial progress, better automation and improvements in accuracy and adaptability to novel applications are needed. In applications utilizing multi-channel immuno-fluorescence images, challenges include misclassification of epithelial and stromal nuclei, irregular nuclei and cytoplasm boundaries, and over and under-segmentation of clustered nuclei. Variations in image acquisition conditions and artifacts from nuclei and cytoplasm images often confound existing algorithms in practice. In this paper, we present a robust and accurate algorithm for jointly segmenting cell nuclei and cytoplasm using a combination of ideas to reduce the aforementioned problems. First, an adaptive process that includes top-hat filtering, Eigenvalues-of-Hessian blob detection and distance transforms is used to estimate the inverse illumination field and correct for intensity non-uniformity in the nuclei channel. Next, a minimum-error-thresholding based binarization process and seed-detection combining Laplacian-of-Gaussian filtering constrained by a distance-map-based scale selection is used to identify candidate seeds for nuclei segmentation. The initial segmentation using a local maximum clustering algorithm is refined using a minimum-error-thresholding technique. Final refinements include an artifact removal process specifically targeted at lumens and other problematic structures and a systemic decision process to reclassify nuclei objects near the cytoplasm boundary as epithelial or stromal. Segmentation results were evaluated using 48 realistic phantom images with known ground-truth. The overall segmentation accuracy exceeds 94%. The algorithm was further tested on 981 images of actual prostate cancer tissue. The artifact removal process worked in 90% of cases. The algorithm has now been deployed in a high-volume histology analysis application.
Identification and classification of cells in multispectral microscopy images of lymph nodes
Xiaomin Liu, Alvernia Francesca Setiadi, Mark S. Alber, et al.
Accurate detection and classification of stained cells in microscopy images enable quantitative measurements of cell distributions and spatial structures, and are crucial for developing new analysis tools for medical studies and applications such as cancer diagnosis and treatment. In this paper, we present a learning based approach for identifying different types of cells in multi-spectral microscopy images of tumor-draining lymph nodes (TDLNs) and locating their centroid positions. With our approach, a set of features based on the eigenvalues of the Hessian matrix is constructed for each image pixel to determine whether the local shape is elliptic. The elliptic features are then used together with the intensity-based ring scores as the feature set for the supervised learning method. Using this new feature set, a random forest based classifier is trained from a set of training samples of different cell types. In order to overcome the difficulties of classifying cells with varying stain qualities, sizes, and shapes, we build a large set of prior training data from a variety of tissue sections. To deal with the issue of multiple overlapping cell nuclei in images, we propose to utilize the spikes of the outer medial axis of the cells to detect and detach the touching cells. As a result, the centroid position of each identified cell is pinpointed. The experimental data show that our proposed method achieves higher recognition rates than previous methods, reducing significantly the human interaction effort involved in previous cell classification work.
Development of a stained cell nuclei counting system
Niranjan Timilsina, Christopher Moffatt, Kazunori Okada
This paper presents a novel cell counting system which exploits the Fast Radial Symmetry Transformation (FRST) algorithm [1]. The driving force behind our system is research on neurogenesis in the intact nervous system of Manduca sexta, the tobacco hornworm, which was being studied to assess the impact of age, food and environment on neurogenesis. The varying thickness of the intact nervous system in this species often yields images with an inhomogeneous background and inconsistencies such as varying illumination, variable contrast, and irregular cell size. For automated counting, such inhomogeneity and inconsistencies must be addressed, which no existing work has done successfully. Thus, our goal is to devise a new cell counting algorithm for images with a non-uniform background. Our solution adapts FRST: a computer vision algorithm designed to detect points of interest in circular regions such as human eyes. This algorithm enhances the occurrences of the stained-cell nuclei in 2D digital images and negates the problems caused by their inhomogeneity. Besides FRST, our algorithm employs standard image processing methods, such as mathematical morphology and connected component analysis. We have evaluated the developed cell counting system on fourteen digital images of the tobacco hornworm's nervous system collected for this study, with ground-truth cell counts by biology experts. Experimental results show that our system has a minimum error of 1.41% and a mean error of 16.68%, which is at least 44% better than the algorithm without FRST.
Texture analysis of clinical radiographs using radon transform on a local scale for differentiation between post-menopausal women with and without hip fracture
Holger F. Boehm M.D., Markus Körner M.D., Bernhard Baumert M.D., et al.
Osteoporosis is a chronic condition characterized by demineralization and destruction of bone tissue. Fractures associated with the disease are becoming an increasingly relevant issue for public health institutions. Prediction of fracture risk is a major focus of research and, over the years, has been approached by various methods. Still, bone mineral density (BMD) obtained by dual-energy X-ray absorptiometry (DXA) remains the clinical gold standard for diagnosis and follow-up of osteoporosis. However, DXA is restricted to specialized diagnostic centers and there exists considerable overlap in BMD results between populations of individuals with and without fractures. Conventional X-ray imaging, which depicts trabecular bone structure in great detail, is clinically far more widely available than DXA. In this paper, we demonstrate that bone structure depicted by clinical radiographs can be analysed quantitatively by parameters obtained from the Radon transform (RT). The RT is a global analysis tool for the detection of predefined, parameterized patterns, e.g. straight lines or struts, which are suitable approximations of trabecular bone texture. The proposed algorithm differentiates between patients with and without fractures of the hip by applying various Radon-transform-based texture metrics to standard X-ray images of the proximal femur. We consider three different regions of interest in the proximal femur (femoral head, neck, and inter-trochanteric area), and conduct an analysis with respect to correct classification of the fracture status. The performance of the novel approach is compared to DXA. We draw the conclusion that the performance of the RT approach is comparable to DXA and may become a useful supplement to densitometry for the prediction of fracture risk.
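As a sketch of the Radon-domain analysis, the snippet below computes a sinogram of a trabecular-bone ROI with scikit-image and derives a few simple angle-dependent descriptors; these particular metrics are illustrative, not the exact texture measures evaluated in the paper.

```python
import numpy as np
from skimage.transform import radon

def radon_texture_features(roi, n_angles=180):
    """Simple Radon-domain texture descriptors for a 2D ROI (e.g. femoral neck)."""
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(roi.astype(float), theta=theta, circle=False)
    energy = (sinogram ** 2).sum(axis=0)      # projection energy per angle
    return {
        "anisotropy": float(energy.max() / (energy.min() + 1e-12)),
        "dominant_angle_deg": float(theta[int(np.argmax(energy))]),
        "sinogram_variance": float(sinogram.var()),
    }
```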
Detection of rheumatoid arthritis using infrared imaging
Monique Frize, Cynthia Adéa, Pierre Payeur, et al.
Rheumatoid arthritis (RA) is an inflammatory disease causing pain, swelling, stiffness, and loss of function in joints; it is difficult to diagnose in early stages. An early diagnosis and treatment can delay the onset of severe disability. Infrared (IR) imaging offers a potential approach to detect changes in the degree of inflammation. In 18 normal subjects and 13 patients diagnosed with rheumatoid arthritis (RA), thermal images were collected from joints of the hands, wrists, palms, and knees. Regions of interest (ROIs) were manually selected from all subjects and all parts imaged. For each subject, the following values were calculated from the temperature measurements: Mode/Max, Median/Max, Min/Max, Variance, Max-Min, (Mode-Mean), and Mean/Min. The data sets did not have a normal distribution; therefore, non-parametric tests (Kruskal-Wallis and rank-sum) were applied to assess whether the data from the control group and the patient group were significantly different. Results indicate that: (i) disease-related changes can be detected in thermal images of patients; (ii) the best joints to image are the metacarpophalangeal joints of the 2nd and 3rd fingers and the knees, for which the difference between the two groups was significant at the 0.05 level; (iii) the best calculations for differentiating between normal subjects and patients with RA are Mode/Max, Variance, and Max-Min. We concluded that it is possible to reliably detect RA in patients using IR imaging. Future work will include a prospective study of normal subjects and patients that will compare IR results with magnetic resonance (MR) analysis.
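The statistical comparison described above maps directly onto SciPy's non-parametric tests; the sketch below uses placeholder values for one temperature-derived feature (e.g. Max-Min per ROI) simply to show the call pattern.

```python
import numpy as np
from scipy import stats

# Placeholder feature values for controls and RA patients (illustrative only).
controls = np.array([1.2, 0.9, 1.1, 1.4, 1.0, 1.3])
patients = np.array([2.1, 1.8, 2.4, 1.9, 2.2])

h_stat, p_kruskal = stats.kruskal(controls, patients)
z_stat, p_ranksum = stats.ranksums(controls, patients)
print(f"Kruskal-Wallis p = {p_kruskal:.4f}, rank-sum p = {p_ranksum:.4f}")
# p-values below 0.05 would indicate a significant group difference for this feature.
```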
Brain Structure and DTI
Identifying intrasulcal medial surfaces for anatomically consistent reconstruction of the cerebral cortex
Sergey Osechinskiy, Frithjof Kruggel
A novel approach to identifying poorly resolved boundaries between adjacent sulcal cortical banks in MR images of the human brain is presented. The algorithm calculates an electrostatic potential field in a partial differential equation (PDE) model of an inhomogeneous dielectric layer of gray matter that surrounds conductive white matter. Correspondence trajectories and geodesic distances are computed along the streamlines of the potential field gradient using PDEs in a Eulerian framework. The skeleton of a sulcal medial boundary is identified by a simple procedure that finds irregularities/collisions in the field of correspondences. The skeleton detection procedure is robust to noise, does not produce spurious artifacts and does not require tunable parameters. Results of the algorithm are compared with a closely related technique, called Anatomically Consistent Enhancement (ACE) (Han et al. CRUISE: Cortical reconstruction using implicit surface evolution, 2004). Results demonstrate that the approach proposed here has a number of advantages over ACE and produces skeletons with a more regular structure. This algorithm was developed as a part of a more general PDE-based framework for cortical reconstruction, which integrates the potential field gradient flow and the skeleton barriers into a level set deformable model. This technique is primarily aimed at anatomically consistent and accurate reconstruction of cortical surface models in the presence of imaging noise and partial volume effects, but the identified intrasulcal medial surfaces can serve other purposes as well, e.g. as landmarks in nonrigid registration, or as sulcal ribbons that characterize the cortical folding.
Detection and mapping of delays in early cortical folding derived from in utero MRI
Piotr A. Habas, Vidya Rajagopalan, Julia A. Scott, et al.
Understanding human brain development in utero and detecting cortical abnormalities related to specific clinical conditions is an important area of research. In this paper, we describe and evaluate methodology for detection and mapping of delays in early cortical folding from population-based studies of fetal brain anatomies imaged in utero. We use a general linear modeling framework to describe spatiotemporal changes in curvature of the developing brain and explore the ability to detect and localize delays in cortical folding in the presence of uncertainty in estimation of the fetal age. We apply permutation testing to examine which regions of the brain surface provide the most statistical power to detect a given folding delay at a given developmental stage. The presented methodology is evaluated using MR scans of fetuses with normal brain development and gestational ages ranging from 20.57 to 27.86 weeks. This period is critical in early cortical folding and the formation of the primary and secondary sulci. Finally, we demonstrate a clinical application of the framework for detection and localization of folding delays in fetuses with isolated mild ventriculomegaly.
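The permutation-testing step can be illustrated at a single surface location as below; this two-sample sketch ignores the paper's GLM covariates (such as the uncertain gestational age) and is meant only to show the resampling logic.

```python
import numpy as np

def permutation_pvalue(curv_group_a, curv_group_b, n_perm=10000, seed=0):
    """Two-sided permutation p-value for a difference in mean curvature
    between two groups of subjects at one surface vertex."""
    rng = np.random.default_rng(seed)
    a = np.asarray(curv_group_a, dtype=float)
    b = np.asarray(curv_group_b, dtype=float)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        count += abs(perm[:len(a)].mean() - perm[len(a):].mean()) >= observed
    return (count + 1) / (n_perm + 1)
```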
Topologically correct cortical segmentation using Khalimsky's cubic complex framework
Manuel Jorge Cardoso, Matthew J. Clarkson, Marc Modat, et al.
Automatic segmentation of the cerebral cortex from magnetic resonance brain images is a valuable tool for neuroscience research. Due to the presence of noise, intensity non-uniformity, partial volume effects, the limited resolution of MRI and the highly convoluted shape of the cerebral cortex, segmenting the brain in a robust, accurate and topologically correct way still poses a challenge. In this paper we describe a topologically correct Expectation Maximisation based Maximum a Posteriori segmentation algorithm formulated within the Khalimsky cubic complex framework, where both the solution of the EM algorithm and the information derived from a geodesic distance function are used to locally modify the weighting of a Markov Random Field and drive the topology correction operations. Experiments performed on 20 Brainweb datasets show that the proposed method obtains a topologically correct segmentation without significant loss in accuracy when compared to two well established techniques.
A novel Riemannian metric for analyzing HARDI data
Sentibaleng Ncube, Anuj Srivastava
We propose a novel Riemannian framework for analyzing orientation distribution functions (ODFs) in HARDI data sets, for use in comparing, interpolating, averaging, and denoising ODFs. A recently used Fisher-Rao metric does not provide physically feasible solutions, and we suggest a modification that removes orientations from ODFs and treats them as separate variables. This way a comparison of any two ODFs is based on separate comparisons of their shapes and orientations. Furthermore, this provides an explicit orientation at each voxel for use in tractography. We demonstrate these ideas by computing geodesics between ODFs and Karcher means of ODFs, for both the original Fisher-Rao and the proposed framework.
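For background, the square-root representation that underlies the Fisher-Rao geometry of ODFs reduces geodesics to great circles on a unit hypersphere; the sketch below shows that standard machinery for sampled ODFs, without the orientation/shape decomposition proposed in the paper.

```python
import numpy as np

def fisher_rao_distance(odf1, odf2):
    """Geodesic distance between two sampled ODFs (non-negative, summing to 1)
    under the square-root representation."""
    q1, q2 = np.sqrt(odf1), np.sqrt(odf2)
    return float(np.arccos(np.clip(np.dot(q1, q2), -1.0, 1.0)))

def geodesic_point(odf1, odf2, t):
    """ODF at parameter t in [0, 1] along the geodesic between odf1 and odf2."""
    q1, q2 = np.sqrt(odf1), np.sqrt(odf2)
    theta = np.arccos(np.clip(np.dot(q1, q2), -1.0, 1.0))
    if theta < 1e-8:
        return np.array(odf1, dtype=float)
    q = (np.sin((1.0 - t) * theta) * q1 + np.sin(t * theta) * q2) / np.sin(theta)
    return q ** 2   # back to a valid (normalized) ODF
```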
Resolving complex fibre configurations using two-tensor random-walk stochastic algorithms
Nagulan Ratnarajah, Andy Simmons, Alan Colchester, et al.
Fibre tractography using diffusion tensor imaging allows the study of anatomical connectivity of the brain, and is an important diagnostic tool for a range of neurological diseases. Deterministic tractography algorithms assume that the fibre direction coincides with the principal eigenvector of a diffusion tensor. This is, however, not the case for regions with crossing fibres. In addition noise introduces uncertainty and makes the computation of fibre directions difficult. Stochastic tractography algorithms have been developed to overcome the uncertainties of deterministic algorithms. However, generally, both parametric and non-parametric stochastic algorithms require longer computational time and large amounts of memory. Multi-tensor fibre tracking methods can alleviate the problems when crossing fibres are encountered. In this study simple and computationally efficient random-walk algorithms are described for estimating anatomical connectivity in white matter. These algorithms are then applied to a two-tensor model to compute the probabilities of connections between regions with complex fibre configurations. We analyze the random-walk models quantitatively using simulated data and estimate the optimal parameter values of the models. The performance of the tracking algorithms is verified using a physical phantom and an in vivo dataset with a wide variety of seed points. The results confirm the effectiveness of the proposed approach, which gives comparable results to other stochastic methods. Our approach is however significantly faster and requires less memory. The results of two-tensor random-walk algorithms demonstrate that our algorithms can accurately identify fibre bundles in complex fibre regions.
Efficient, graph-based white matter connectivity from orientation distribution functions via multi-directional graph propagation
Alexis Boucharin, Ipek Oguz, Clement Vachet, et al.
The use of regional connectivity measurements derived from diffusion imaging datasets has become of considerable interest in the neuroimaging community in order to better understand cortical and subcortical white matter connectivity. Current connectivity assessment methods are based on streamline fiber tractography, usually applied in a Monte-Carlo fashion. In this work we present a novel, graph-based method that performs a fully deterministic, efficient and stable connectivity computation. The method handles crossing fibers and deals well with multiple seed regions. The computation is based on a multi-directional graph propagation method applied to sampled orientation distribution functions (ODFs), which can be computed directly from the original diffusion imaging data. We show early results of our method on synthetic and real datasets. The results illustrate the potential of our method towards subject-specific connectivity measurements that are performed in an efficient, stable and reproducible manner. Such individual connectivity measurements would be well suited for application in population studies of neuropathology, such as autism, Huntington's disease, multiple sclerosis or leukodystrophies. The proposed method is generic and could easily be applied to non-diffusion data as long as local directional data can be derived.
Registration I
Landmark-driven parameter optimization for non-linear image registration
Alexander Schmidt-Richberg, René Werner, Jan Ehrhardt, et al.
Image registration is one of the most common research areas in medical image processing. It is required, for example, for image fusion, motion estimation, patient positioning, or the generation of medical atlases. In most intensity-based registration approaches, parameters have to be determined, most commonly a parameter indicating to what extent the transformation is required to be smooth. Its optimal value depends on multiple factors like the application and the occurrence of noise in the images, and may therefore vary from case to case. Moreover, multi-scale approaches are commonly applied to registration problems and demand further adjustment of the parameters. In this paper, we present a landmark-based approach for automatic parameter optimization in non-linear intensity-based image registration. In a first step, corresponding landmarks are automatically detected in the images to be matched. The landmark-based target registration error (TRE), which is shown to be a valid metric for quantifying registration accuracy, is then used to optimize the parameter choice during the registration process. The approach is evaluated for the registration of lungs based on 22 thoracic 4D CT data sets. Experiments show that the TRE can be reduced on average by 0.07 mm using automatic parameter optimization.
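The TRE criterion itself is straightforward to compute; a minimal sketch is given below, together with a hypothetical selection loop over candidate smoothness weights (the helper names register and candidate_alphas are assumptions for illustration, not the paper's optimization scheme).

```python
import numpy as np

def target_registration_error(fixed_landmarks, warped_landmarks):
    """Mean Euclidean distance between corresponding landmark sets, (n, 3) each."""
    diff = np.asarray(fixed_landmarks, float) - np.asarray(warped_landmarks, float)
    return float(np.mean(np.linalg.norm(diff, axis=1)))

# Illustrative parameter selection: evaluate candidate smoothness weights and
# keep the one whose resulting transformation gives the lowest landmark TRE.
# best_alpha = min(
#     candidate_alphas,
#     key=lambda alpha: target_registration_error(
#         fixed_landmarks, register(moving, fixed, alpha)(moving_landmarks)))
```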
Temporal subtraction of chest radiographs compensating pose differences
Jens von Berg, Jalda Dworzak, Tobias Klinder, et al.
Temporal subtraction techniques using 2D image registration improve the detectability of interval changes in chest radiographs. Although such methods have been known for some time, they are not widely used in radiologic practice. The reason is the occurrence of strong pose differences between two acquisitions separated by a time interval of months to years. Such strong perspective differences occur in a considerable number of cases. They cannot be compensated by available image registration methods and thus mask interval changes, making them undetectable. In this paper a method is proposed to estimate the 3D pose difference by the adaptation of a 3D rib cage model to both projections. The difference between the two poses is then compensated for, thus producing a subtraction image with virtually no change in pose. The method generally assumes that no 3D image data is available for the patient. The accuracy of pose estimation is validated with chest phantom images acquired under controlled geometric conditions. A subtle interval change simulated by a piece of plastic foam attached to the phantom becomes visible in subtraction images generated with this technique even at strong angular pose differences such as an anterior-posterior inclination of 13 degrees.
An accurate 3D shape context based non-rigid registration method for mouse whole-body skeleton registration
Di Xiao, David Zahra, Pierrick Bourgeat, et al.
Small animal image registration is challenging because of the articulated joint structure of the animal and the differences in posture and position between acquisitions made without a standard scan protocol. In this paper, we address the issue of mouse whole-body skeleton registration from CT images. A novel method is developed for analyzing mouse hind-limb and fore-limb postures based on a geodesic path descriptor and for initially registering the major skeletons and fore-limb skeletons by a thin-plate spline (TPS) transform based on the obtained geodesic paths and their enhanced correspondence fields. A target landmark correction method is proposed for improving the registration accuracy of the improved 3D shape context non-rigid registration method we previously proposed. A novel non-rigid registration framework, combining the skeleton posture analysis, the geodesic-path-based initial alignment and the 3D shape context model, is proposed for mouse whole-body skeleton registration. The performance of the proposed methods and framework was tested on 12 pairs of mouse whole-body skeletons. The experimental results demonstrate the flexibility, stability and accuracy of the proposed framework for automatic mouse whole-body skeleton registration.
Iterative closest point algorithm with anisotropic weighting and its application to fine surface registration
L. Maier-Hein, T. R. dos Santos, A. M. Franz, et al.
The Iterative Closest Point (ICP) algorithm is a widely used method for geometric alignment of 3D models. Given two roughly aligned shapes represented by two point sets, the algorithm iteratively establishes point correspondences given the current alignment of the data and computes a rigid transformation accordingly. It can be shown that the method converges to at least a local minimum with respect to a mean-square distance metric. From a statistical point of view, the algorithm implicitly assumes that the points are observed with isotropic Gaussian noise. In this paper, we (1) present the first variant of the ICP that accounts for anisotropic localization uncertainty in both shapes as well as in both steps of the algorithm and (2) show how to apply the method for robust fine registration of surface meshes. According to an evaluation on medical imaging data, the proposed method is better suited for fine surface registration than the original ICP, reducing the target registration error (TRE) for a set of targets located inside or near the mesh by 80% on average.
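For orientation, a plain isotropic ICP loop (closest-point matching followed by a least-squares rigid fit) is sketched below; the anisotropic weighting that constitutes the paper's contribution is not reproduced here.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~ src @ R.T + t."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(moving, fixed, n_iter=50, tol=1e-6):
    """Classical isotropic ICP between two (n, 3) point sets."""
    tree = cKDTree(fixed)
    current = np.asarray(moving, dtype=float).copy()
    prev_err = np.inf
    for _ in range(n_iter):
        dists, idx = tree.query(current)      # closest-point correspondences
        R, t = best_rigid_transform(current, fixed[idx])
        current = current @ R.T + t
        if abs(prev_err - dists.mean()) < tol:
            break
        prev_err = dists.mean()
    return current, float(dists.mean())
```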
Incorporating hard constraints into non-rigid registration via nonlinear programming
Non-rigid image registration is a key technique in medical image analysis. In conventional non-rigid registration, the whole image is deformed in a non-rigid fashion. However, in some clinical applications, the registration process is required to maintain rigidity in some parts of the image (e.g. bones) while other parts of the image (e.g. soft tissues) can deform in a non-rigid fashion. In this paper, we employ nonlinear programming techniques to solve the registration problem efficiently while ensuring feasibility of the solution with respect to rigidity constraints. Our approach differs from others from an optimization perspective: unlike the frequently used regularization formulation that incorporates soft constraints into the energy function, we impose the local rigidity requirements as hard constraints. The constrained optimization problem is solved by nonlinear programming. The nonlinear programming formulation allows us to exploit the constraints in order to reduce the dimensionality of the optimization problem. In addition, we use a dense registration framework to control the deformation at every voxel explicitly. Therefore, unconstrained voxels are not affected by the method. Experimental results from synthetic and MR images of the knee show that our method converges to the optimal solution faster and satisfies the rigidity constraints of the transformation during the registration process. The result is a more realistic estimation of rigid and non-rigid deformations.
Shape Methods and Applications
Mapping the distance between the brain and the inner surface of the skull and their global asymmetries
Marc Fournier, Benoît Combès, Neil Roberts, et al.
The primary goal of this paper is to describe i) the pattern of pointwise distances between the human brain (pial surface) and the inner surface of the skull (endocast) and ii) the pattern of pointwise bilateral asymmetries of these two structures. We use a database of MR images to segment meshes representing the outer surface of the brain and the endocast. We propose automated computational techniques to assess the endocast-to-brain distances and endocast-and-brain asymmetries, based on a simplified yet accurate representation of the brain surface, that we call the brain hull. We compute two meshes representing the mean endocast and the mean brain hull to assess the two patterns in a population of normal controls. The results show i) a pattern of endocast-to-brain distances which are symmetrically distributed with respect to the mid-sagittal plane and ii) a pattern of global endocast and brain hull asymmetries which are consistent with the well-known Yakovlevian torque. Our study is a first step to validate the endocranial surface as a surrogate for the brain in fossil studies, where a key question is to elucidate the evolutionary origins of the brain torque. It also offers some insights into the normal configuration of the brain/skull interface, which could be useful in medical imaging studies (e.g. understanding atrophy in neurodegenerative diseases or modeling the brain shift in neurosurgery).
Mandible shape modeling using the second eigenfunction of the Laplace-Beltrami operator
Seongho Seo, Moo K. Chung, Brian J. Whyms, et al.
The second Laplace-Beltrami eigenfunction provides an intrinsic geometric way of establishing natural coordinates for elongated 3D anatomical structures obtained from medical images. The approach is used to establish the centerline of the human mandible from CT and provides automated anatomical landmarks across subjects. These landmarks are then used to quantify the growth pattern of the mandible between ages 0 and 20.
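A minimal stand-in for this construction uses the combinatorial graph Laplacian of the surface mesh: its second-smallest eigenvector (the Fiedler vector) varies monotonically along an elongated structure and can be used to parameterize a centerline. The sketch below makes that simplification explicit and does not reproduce the paper's Laplace-Beltrami discretization.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def second_eigenfunction(n_vertices, faces):
    """Fiedler vector of the combinatorial graph Laplacian of a triangle mesh.
    faces: (m, 3) integer array of vertex indices."""
    i = np.concatenate([faces[:, 0], faces[:, 1], faces[:, 2]])
    j = np.concatenate([faces[:, 1], faces[:, 2], faces[:, 0]])
    W = sp.coo_matrix((np.ones(len(i)), (i, j)), shape=(n_vertices, n_vertices))
    W = ((W + W.T) > 0).astype(float)                 # symmetric 0/1 adjacency
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W
    # two eigenpairs nearest zero; the first is the constant eigenvector
    vals, vecs = eigsh(L.tocsc(), k=2, sigma=-1e-6, which='LM')
    return vecs[:, np.argsort(vals)[1]]

# Sorting vertices by this scalar field sweeps along the elongated axis of the
# mandible, giving an intrinsic ordering from which a centerline can be built.
```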
Manifold learning for image-based breathing gating in MRI
Respiratory motion is a challenging factor for image-guided procedures in the abdominal region. Target localization, an important issue in applications like radiation therapy, becomes difficult due to this motion. Therefore, it is necessary to detect the respiratory signal to have a higher accuracy in planning and treatment. We propose a novel image-based breathing gating method to recover the breathing signal directly from the image data. For the gating we use Laplacian eigenmaps, a manifold learning technique, to determine the low-dimensional manifold embedded in the high-dimensional space. Since Laplacian eigenmaps assign each 2D MR slice a coordinate in a low-dimensional space by respecting the neighborhood relationship, they are well suited for analyzing the respiratory motion. We perform the manifold learning on MR slices acquired from a fixed location. Then, we use the resulting respiratory signal to derive a similarity criterion to be used in applications like 4D MRI reconstruction. We perform experiments on liver data using one and three dimensions as the dimension of the manifold and compare the results. The results from the first case show that using only one dimension as the dimension of the manifold is not enough to represent the complex motion of the liver caused by respiration. We successfully recover the changes due to respiratory motion by using three dimensions. The proposed method has the potential of reducing the processing time for the 4D reconstruction significantly by defining a search window for a subsequent registration approach. It is fully automatic and does not require any prior information or training data.
Active shape models unleashed
Matthias Kirschner, Stefan Wesarg
Active Shape Models (ASMs) are a popular family of segmentation algorithms which combine local appearance models for boundary detection with a statistical shape model (SSM). They are especially popular in medical imaging due to their ability to provide fast and accurate segmentation of anatomical structures even in large and noisy 3D images. A well-known limitation of ASMs is that the shape constraints are over-restrictive, because the segmentations are bounded by the Principal Component Analysis (PCA) subspace learned from the training data. To overcome this limitation, we propose a new energy minimization approach which combines an external image energy with an internal shape model energy. Our shape energy uses the Distance From Feature Space (DFFS) concept to allow deviations from the PCA subspace in a theoretically sound and computationally fast way. In contrast to previous approaches, our model does not rely on post-processing with constrained free-form deformation or additional complex local energy models. In addition to the energy minimization approach, we propose a new method for liver detection, a new method for initializing an SSM and an improved k-nearest-neighbour (kNN) classifier for boundary detection. Our ASM is evaluated with leave-one-out tests on a data set with 34 tomographic CT scans of the liver and is compared to an ASM with standard shape constraints. The quantitative results of our experiments show that we achieve higher segmentation accuracy with our energy minimization approach than with standard shape constraints.
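The DFFS idea can be stated compactly: project a candidate shape onto the retained PCA modes and measure the squared residual that falls outside the subspace, optionally alongside the Mahalanobis distance inside it. The sketch below shows just this computation; how the two terms enter the paper's shape energy is not reproduced.

```python
import numpy as np

def dffs_terms(shape_vector, mean_shape, modes, variances):
    """shape_vector, mean_shape: flattened landmark vectors (3n,);
    modes: (3n, m) orthonormal PCA modes; variances: (m,) mode variances."""
    diff = shape_vector - mean_shape
    b = modes.T @ diff                         # coordinates inside the PCA subspace
    residual = diff - modes @ b                # component orthogonal to the subspace
    dffs = float(residual @ residual)          # distance from feature space
    difs = float(np.sum(b ** 2 / variances))   # Mahalanobis distance in feature space
    return dffs, difs
```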
Automatic shape based deformable registration of multiphase contrast enhanced liver CT volumes
Marius Erdt, Georgios Sakas, Matthias Hammon, et al.
The detection and assessment of many hepatic diseases is based on examination of multiphase liver CT volumes. Since phases contain complementary information, registration enables the radiologist to fuse the needed information for diagnosis or operation planning. This work presents a novel multi-stage approach for automatic registration of the liver in contrast enhanced CT volumes. Unlike other methods, our approach is based on automatic pre-segmentation of the liver in the different phases. Using the resulting shape information the volumes are coarsely registered using a landmark-based registration. Subsequently, deformations caused by the patient's breathing are compensated by an elastic Demons algorithm with a boundary distance based speed function. This allows for a high accuracy natural deformation without having to rely on error-prone extraction and matching of the liver's internal structure in complementary phases. Furthermore, since shape information is given, surrounding structures can be omitted which significantly speeds up registration. We evaluated our method using 22 CT volumes from 11 patients. The matching quality of outer shape and internal structures was validated by radiology experts. The high quality results of our approach suggest its applicability in clinical practice.
Real-time cardiac surface tracking from sparse samples using subspace clustering and maximum-likelihood linear regressors
Cardiac minimally invasive surgeries, such as catheter-based radio-frequency ablation of atrial fibrillation, require high-precision tracking of inner cardiac surfaces in order to ascertain constant electrode-surface contact. The majority of cardiac motion tracking systems are either limited to the outer surface or track only limited slices/sectors of the inner surface in echocardiography data, which is unrealizable in minimally invasive surgery due to the varying resolution of ultrasound with depth and the speckle effect. In this paper, a system for high-accuracy real-time 3D tracking of both cardiac surfaces using sparse samples of the outer surface only is presented. The paper presents a novel approach to model cardiac inner-surface deformations as simple functions of outer-surface deformations in the spherical harmonic domain using multiple maximum-likelihood (ML) linear regressors. The tracking system uses subspace clustering to identify potential deformation spaces for outer surfaces and trains ML linear regressors using a pre-operative MRI/CT scan based training set. During tracking, sparse samples from the outer surface are used to identify the active outer-surface deformation space and reconstruct the outer surface in real time under a least squares formulation. The inner surface is reconstructed from the tracked outer surface with the trained ML linear regressors. High-precision tracking and robustness of the proposed system are demonstrated through results obtained on a real patient dataset with tracking root mean square errors ≤ (0.23 ± 0.04)mm and ≤ (0.30 ± 0.07)mm for the outer and inner surfaces, respectively.
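The regression stage described above can be pictured with the following hedged sketch, which fits a maximum-likelihood linear regressor (ordinary least squares under Gaussian noise) from outer-surface to inner-surface spherical-harmonic coefficients for one deformation cluster; the file names and the bias term are illustrative assumptions.

```python
# Hedged sketch of one ML linear regressor: map outer-surface spherical-harmonic
# coefficients to inner-surface coefficients. Training arrays are placeholders
# standing in for the pre-operative MRI/CT based training set of one cluster.
import numpy as np

outer_train = np.load("outer_sh_coeffs.npy")   # (n_samples, n_coeffs), assumed input
inner_train = np.load("inner_sh_coeffs.npy")   # (n_samples, n_coeffs), assumed input

# Under Gaussian noise, the maximum-likelihood linear regressor is the least
# squares fit (a constant column adds a bias term).
A = np.hstack([outer_train, np.ones((len(outer_train), 1))])
W, *_ = np.linalg.lstsq(A, inner_train, rcond=None)

def predict_inner(outer_coeffs):
    """Predict inner-surface coefficients from a reconstructed outer surface."""
    return np.append(outer_coeffs, 1.0) @ W
```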
Segmentation II
A novel adaptive scoring system for segmentation validation with multiple reference masks
Jan Hendrik Moltz, Jan Rühaak, Horst Karl Hahn, et al.
The development of segmentation algorithms for different anatomical structures and imaging protocols is an important task in medical image processing. The validation of these methods, however, is often treated as a subordinate task. Since manual delineations, which are widely used as a surrogate for the ground truth, exhibit an inherent uncertainty, it is preferable to use multiple reference segmentations for an objective validation. This requires a consistent framework that should fulfill three criteria: 1) it should treat all reference masks equally a priori and not demand consensus between the experts; 2) it should evaluate the algorithmic performance in relation to the inter-reference variability, i.e., be more tolerant where the experts disagree about the true segmentation; 3) it should produce results that are comparable for different test data. We show why current state-of-the-art frameworks, such as the one used at several MICCAI segmentation challenges, do not fulfill these criteria and propose a new validation methodology. A score is computed in an adaptive way for each individual segmentation problem, using a combination of volume- and surface-based comparison metrics. These are transformed into the score by relating them to the variability between the reference masks, which can be measured by comparing the masks with each other or with an estimated ground truth. We present examples from a study on liver tumor segmentation in CT scans where our score shows a more adequate assessment of the segmentation results than the MICCAI framework.
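The abstract does not give the exact transformation from metric to score, so the following sketch only illustrates the general idea of normalizing an algorithm-versus-reference error by the variability measured between the reference masks; the tolerance definition and the linear mapping are assumptions.

```python
# Illustrative sketch only: turn an error metric into an adaptive score by relating
# it to the inter-reference variability of the same metric.
import numpy as np

def adaptive_score(algo_vs_refs, ref_vs_refs):
    """algo_vs_refs: metric of the algorithm against each reference mask;
    ref_vs_refs: the same metric computed between pairs of reference masks."""
    tolerance = np.mean(ref_vs_refs) + np.std(ref_vs_refs)   # assumed tolerance definition
    error = np.mean(algo_vs_refs)
    # 100 where the algorithm is indistinguishable from the references,
    # 0 where its error reaches twice the assumed tolerance.
    return float(np.clip(100.0 * (1.0 - 0.5 * error / max(tolerance, 1e-12)), 0.0, 100.0))
```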
Automatic model-based 3D segmentation of the breast in MRI
Cristina Gallego, Anne L. Martel
A statistical shape model (SSM) is constructed and applied to automatically segment the breast in 3D MRI. We present an approach to automatically construct an SSM: first, a population of 415 semi-automatically segmented breast MRI volumes is groupwise registered to derive an average shape. Second, a surface mesh is extracted and further decimated to reduce the density of the shape representation. Third, landmarks are obtained from the averaged decimated mesh, which are non-rigidly deformed to each individual shape in the training set, using a set of pairwise deformations. Finally, the resulting landmarks are consistently obtained in all cases of the population for further statistical shape model (SSM) generation. A leave-one-out validation demonstrated that a near sub-voxel reconstruction error (2.5mm) is attainable when using a minimum of 15 modes of variation. The model is further applied to automatically segment the anatomy of the breast in 3D. We illustrate the results of our segmentation approach, in which the model is adjusted to the image boundaries using an iterative segmentation scheme.
Fully automatic segmentation of complex organ systems: example of trachea, esophagus and heart segmentation in CT images
Carsten Meyer, Jochen Peters, Jürgen Weese
Automatic segmentation is a prerequisite to efficiently analyze the large amount of image data produced by modern imaging modalities. Many algorithms exist to segment individual organs or organ systems. However, new clinical applications and the progress in imaging technology will require the segmentation of more and more complex organ systems composed of a number of substructures, e.g., the heart, the trachea, and the esophagus. The goal of this work is to demonstrate that such complex organ systems can be successfully segmented by integrating the individual organs into a general model-based segmentation framework, without tailoring the core adaptation engine to the individual organs. As an example, we address the fully automatic segmentation of the trachea (around its main bifurcation, including the proximal part of the two main bronchi) and the esophagus in addition to the heart with all chambers and attached major vessels. To this end, we integrate the trachea and the esophagus into a model-based cardiac segmentation framework. Specifically, in a first parametric adaptation step of the segmentation workflow, the trachea and the esophagus share global model transformations with adjacent heart structures. This makes it possible to obtain a robust, approximate segmentation for the trachea even if it is only partly inside the field-of-view, and for the esophagus in spite of limited contrast. The segmentation is then refined in a subsequent deformable adaptation step. We obtained a mean segmentation error of about 0.6mm for the trachea and 2.3mm for the esophagus on a database of 23 volumetric cardiovascular CT images. Furthermore, we show by quantitative evaluation that our integrated framework outperforms individual esophagus segmentation, and individual trachea segmentation if the trachea is only partly inside the field-of-view.
Automatic identification of cochlear implant electrode arrays for post-operative assessment
Jack H. Noble, Theodore A. Schuman, Charles G. Wright, et al.
Cochlear implantation is a procedure performed to treat profound hearing loss. Accurately determining the postoperative position of the implant in vivo would permit studying the correlations between implant position and hearing restoration. To solve this problem, we present an approach based on parametric Gradient Vector Flow snakes to segment the electrode array in post-operative CT. By combining this with existing methods for localizing intra-cochlear anatomy, we have developed a system that permits accurate assessment of the implant position in vivo. The system is validated using a set of seven temporal bone specimens. The algorithms were run on pre- and post-operative CTs of the specimens, and the results were compared to histological images. It was found that the position of the arrays observed in the histological images is in excellent agreement with the position of their automatically generated 3D reconstructions in the CT scans.
Prostate segmentation with local binary patterns guided active appearance models
Real-time fusion of Magnetic Resonance (MR) and Trans Rectal Ultra Sound (TRUS) images aids in the localization of malignant tissues in TRUS guided prostate biopsy. Registration performed on segmented contours of the prostate reduces computational complexity and improves the multimodal registration accuracy. However, accurate and computationally efficient segmentation of the prostate in TRUS images could be challenging in the presence of a heterogeneous intensity distribution inside the prostate gland and other imaging artifacts like speckle noise, shadow regions and low Signal to Noise Ratio (SNR). In this work, we propose to enhance the texture features of the prostate region using Local Binary Patterns (LBP) for the propagation of a shape and appearance based statistical model to segment the prostate in a multi-resolution framework. A parametric model of the propagating contour is derived from Principal Component Analysis (PCA) of the prior shape and texture information of the prostate from the training data. The estimated parameters are then modified with the prior knowledge of the optimization space to achieve an optimal segmentation. The proposed method achieves a mean Dice Similarity Coefficient (DSC) value of 0.94±0.01 and a mean segmentation time of 0.68±0.02 seconds when validated with 70 TRUS images of 7 datasets in a leave-one-patient-out validation framework. Our method performs computationally efficient and accurate prostate segmentation in the presence of intensity heterogeneities and imaging artifacts.
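As a rough illustration of the texture-enhancement step, the sketch below computes uniform Local Binary Pattern maps of a TRUS slice; the file name, radius and number of neighbours are illustrative assumptions rather than the parameters used by the authors.

```python
# Hedged sketch: LBP texture features of a TRUS slice, which an appearance model
# could use in place of raw intensities. All inputs and parameters are placeholders.
import numpy as np
from skimage import io
from skimage.feature import local_binary_pattern

trus = io.imread("trus_slice.png", as_gray=True)      # assumed input image

# Uniform LBP with 8 neighbours at radius 1; further radii could be added for a
# multi-resolution framework.
lbp = local_binary_pattern(trus, P=8, R=1, method="uniform")

# A normalized histogram of the LBP codes over a region can serve as its texture descriptor.
n_codes = int(lbp.max()) + 1
hist, _ = np.histogram(lbp, bins=n_codes, range=(0, n_codes), density=True)
```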
Registration II
Probabilistic framework for subject-specific and population-based analysis of longitudinal changes and disease progression in brain MR images
Annemie Ribbens, Jeroen Hermans, Frederik Maes, et al.
Aging and many neurological diseases cause progressive changes in brain morphology. Both subject-specific detection and measurement of these changes, as well as their population-based analysis are of great interest in many clinical studies. Generally, both problems are handled separately. However, as population-based knowledge facilitates subject-specific analysis and vice versa, we propose a unified statistical framework for subject-specific and population-based analysis of longitudinal brain MR image sequences of subjects suffering from the same neurological disease. The proposed method uses a maximum a posteriori formulation and the expectation maximization algorithm to simultaneously and iteratively segment all images in separate tissue classes, construct a global probabilistic 3D brain atlas and non-rigidly deform the atlas to each of the images to guide their segmentation. In order to enable a population-based analysis of the disease progression, an intermediate 4D probabilistic brain atlas is introduced, representing a discrete set of disease progression stages. The 4D atlas is simultaneously constructed with the 3D brain atlas by incorporating assignments of each input image (voxelwise) to a particular disease progression stage in the statistical framework. Moreover, these assignments enable both temporal and spatial subject-specific disease progression analysis. This includes detecting delayed or advanced disease progression and indicating the affected regions. The method is validated on a publicly available data set on which it shows promising results.
A novel local-phase method of automatic atlas construction in fetal ultrasound
Sana Fathima, Sylvia Rueda, Aris Papageorghiou, et al.
In recent years, fetal diagnostics have relied heavily on clinical assessment and biometric analysis of manually acquired ultrasound images. There is a profound need for automated and standardized evaluation tools to characterize fetal growth and development. This work addresses this need through the novel use of feature-based techniques to develop evaluators of fetal brain gestation. The methodology comprises an automated database-driven 2D/3D image atlas construction method, which includes several iterative processes. A unique database was designed to store fetal image data acquired as part of the Intergrowth-21st study. This database drives the proposed automated atlas construction methodology, which uses local phase information to perform affine registration with normalized mutual information as the similarity measure, followed by wavelet-based image fusion and averaging. The unique feature-based application of local phase and wavelet fusion towards creating the atlas reduces the intensity dependence and the difficulties in registering ultrasound images. The method is evaluated on fetal transthalamic head ultrasound images of 20 weeks gestation. The results show that the proposed method is more robust to intensity variations than standard intensity-based methods. Results also suggest that the feature-based approach improves the registration accuracy needed in creating a clinically valid ultrasound image atlas.
Atlas selection strategy in multi-atlas segmentation propagation with locally weighted voting using diversity-based MMR re-ranking
Kaikai Shen, Pierrick Bourgeat, Fabrice Meriaudeau, et al.
In multi-atlas based image segmentation, multiple atlases with label maps are propagated to the query image and fused into the segmentation result. The voting rule is a commonly used classifier fusion method for producing the consensus map. Local weighted voting (LWV) is another method, which combines the propagated atlases weighted by local image similarity. When LWV is used, we found that the segmentation accuracy converges more slowly than with the simple voting rule. We therefore propose to introduce diversity in addition to image similarity by using the Maximal Marginal Relevance (MMR) criterion as a more efficient way to rank and select atlases. We test the MMR re-ranking on a hippocampal atlas set of 138 normal control (NC) subjects and another set of 99 Alzheimer's disease patients provided by ADNI. The results show that MMR re-ranking performed better than similarity-based atlas selection when the same number of atlases was selected.
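The Maximal Marginal Relevance criterion used for atlas re-ranking can be sketched as below: each selection step trades similarity to the query against redundancy with atlases already chosen. The similarity inputs and the trade-off parameter lam are assumptions; the abstract does not specify them.

```python
# Hedged MMR re-ranking sketch for atlas selection.
import numpy as np

def mmr_select(sim_to_query, sim_between_atlases, n_select, lam=0.7):
    """sim_to_query: (n_atlases,) similarity of each atlas to the query image;
    sim_between_atlases: (n_atlases, n_atlases) pairwise atlas similarities."""
    selected, remaining = [], list(range(len(sim_to_query)))
    while remaining and len(selected) < n_select:
        def mmr(i):
            # Redundancy = highest similarity to any already selected atlas.
            redundancy = max(sim_between_atlases[i][j] for j in selected) if selected else 0.0
            return lam * sim_to_query[i] - (1.0 - lam) * redundancy
        best = max(remaining, key=mmr)
        selected.append(best)
        remaining.remove(best)
    return selected
```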
Multi-modal surface comparison and its application to intra-operative range data
Thiago R. dos Santos, Alexander Seitel, Thomas Kilgus, et al.
Time-of-flight (ToF) cameras are a novel, fast, and robust means for intra-operative 3D surface acquisition. They acquire surface information (range images) in real-time. In the intra-operative registration context, these surfaces must be matched to pre-operative CT or MR surfaces, using so called descriptors, which represent surface characteristics. We present a framework for local and global multi-modal comparison of surface descriptors and characterize the differences between ToF and CT data in an in vitro experiment. The framework takes into account various aspects related to the surface characteristics and does not require high resolution input data in order to establish appropriate correspondences. We show that the presentation of local and global comparison data allows for an accurate assessment of ToF-CT discrepancies. The information gained from our study may be used for developing ToF pre-processing and matching algorithms, or for improving calibration procedures for compensating systematic distance errors. The framework is available in the open-source platform Medical Imaging Interaction Toolkit (MITK).
Distance transforms in multichannel MR image registration
Min Chen, Aaron Carass, John Bogovic, et al.
Deformable registration techniques play vital roles in a variety of medical imaging tasks such as image fusion, segmentation, and post-operative surgery assessment. In recent years, mutual information has become one of the most widely used similarity metrics for medical image registration algorithms. Unfortunately, as a matching criterion, mutual information loses much of its effectiveness when there is poor statistical consistency and a lack of structure. This is especially true in areas of images where the intensity is homogeneous and information is sparse. Here we present a method designed to address this problem by integrating distance transforms of anatomical segmentations as part of a multi-channel mutual information framework within the registration algorithm. Our method was tested by registering real MR brain data and comparing the segmentation of the results against that of the target. Our analysis showed that by integrating distance transforms of the white matter segmentation into the registration, the overall segmentation of the registration result was closer to the target than when the distance transform was not used.
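A minimal sketch of the extra channel described above is given below: a distance transform of a white-matter segmentation adds spatial structure in homogeneous regions. The signed-distance construction and the input file are assumptions for illustration.

```python
# Hedged sketch: build a distance-transform channel from a binary white-matter mask
# to accompany the intensity image in a multi-channel mutual information metric.
import numpy as np
from scipy.ndimage import distance_transform_edt

wm_mask = np.load("wm_segmentation.npy").astype(bool)   # assumed binary segmentation

# One plausible construction: signed distance to the segmentation boundary
# (positive outside the white matter, negative inside).
signed_distance = distance_transform_edt(~wm_mask) - distance_transform_edt(wm_mask)

# 'signed_distance' varies smoothly even where the MR intensities are homogeneous,
# which is exactly where mutual information on intensities alone struggles.
```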
Validation of histology image registration
Rushin Shojaii, Tigran Karavardanyan, Martin Yaffe, et al.
The aim of this paper is to validate an image registration pipeline used for histology image alignment. In this work a set of histology images is registered to their corresponding optical blockface images to make a histology volume. Then multi-modality fiducial markers are used to validate the alignment of the histology images. The fiducial markers are catheters perfused with a mixture of cuttlefish ink and flour. Based on our previous investigations, this fiducial marker is visible in medical images and optical blockface images, and it can also be localized in histology images. The properties of this fiducial marker make it suitable for validation of the registration techniques used for histology image alignment. This paper reports on the accuracy of a histology image registration approach by calculating the target registration error using these fiducial markers.
Image Enhancement/Classification
Intensity inhomogeneity correction of magnetic resonance images using patches
This paper presents a patch-based non-parametric approach to the correction of intensity inhomogeneity in magnetic resonance (MR) images of the human brain. During image acquisition, the inhomogeneity present in the radio-frequency coil is usually manifested on the reconstructed MR image as a smooth shading effect. This artifact can significantly deteriorate the performance of any kind of image processing algorithm that uses intensities as a feature. Most current inhomogeneity correction techniques use explicit smoothness assumptions on the inhomogeneity field, which sometimes limit their performance if the actual inhomogeneity is not smooth, a problem that becomes prevalent at high field strengths. The proposed patch-based inhomogeneity correction method does not assume any parametric smoothness model; instead, it uses patches from an atlas of an inhomogeneity-free image to perform the correction. Preliminary results show that the proposed method is comparable to N3, a current state-of-the-art method, when the inhomogeneity is smooth, and outperforms N3 when the inhomogeneity contains non-smooth elements.
Initial evaluation of virtual un-enhanced imaging derived from fast kVp-switching dual energy contrast enhanced CT for the abdomen
M. Joshi, P. Mendonca, D. Okerlund, et al.
The feasibility and utility of creating virtual un-enhanced images from contrast enhanced data acquired using a fast switching dual energy CT acquisition are explored. Utilizing projection based material decomposition data, monochromatic images are generated and a multi-material decomposition technique is applied. Quantitative and qualitative evaluation is performed to assess the equivalence of Virtual Un-Enhanced (VUE) and True Un-enhanced (TUE) images for multiple tissue types and different organs in the abdomen. Ten patient cases were analyzed in which a TUE and a subsequent Contrast Enhanced (CE) acquisition were obtained using fast kVp-switching dual energy CT utilizing Gemstone Spectral Imaging. Quantitative measurements were made by placing multiple Regions of Interest on the different tissues and organs in both the TUE and the VUE images. The absolute Hounsfield Unit (HU) differences in the mean values between TUE and VUE were calculated, as well as the differences of the standard deviations. Qualitative analysis was done by two radiologists for overall image quality, presence of residual contrast, appearance of pathology, and appearance and contrast of normal tissues and organs in comparison to the TUE. There is a very strong correlation between the TUE and VUE images.
A neural network learned information measures for heart motion abnormality detection
M. S. Nambakhsh, Kumaradevan Punithakumar, Ismail Ben Ayed, et al.
In this study, we propose an information theoretic neural network for normal/abnormal left ventricular motion classification which significantly outperforms other recent methods in the literature. The proposed framework consists of a supervised 3-layer artificial neural network (ANN) which uses hyperbolic tangent sigmoid and linear transfer functions for the hidden and output layers, respectively. The ANN is fed by information theoretic measures of left ventricular wall motion such as Shannon's differential entropy (SDE), Rényi entropy and Fisher information, which measure global information of the subjects' distributions. Using 395×20 segmented LV cavities of short-axis magnetic resonance images (MRI) acquired from 48 subjects, the experimental results show that the proposed method outperforms Support Vector Machine (SVM) and thresholding based information theoretic classifiers. It yields a specificity of 90%, a sensitivity of 91%, and a remarkable area under the Receiver Operating Characteristic (ROC) curve (AUC) of 93.2%.
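A rough, hedged sketch of the classifier stage is given below using scikit-learn; the feature files are placeholders for the pre-computed information-theoretic measures, the hidden-layer size is an assumption, and scikit-learn fixes the output layer of a classifier, so only the tanh hidden layer mirrors the described architecture.

```python
# Hedged sketch: a small neural network with a tanh hidden layer fed by
# per-subject information-theoretic wall-motion features.
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.load("lv_motion_features.npy")   # (n_subjects, n_measures), e.g. SDE, Renyi, Fisher; assumed
y = np.load("labels.npy")               # 1 = abnormal motion, 0 = normal; assumed

clf = MLPClassifier(hidden_layer_sizes=(10,), activation="tanh", max_iter=2000)
clf.fit(X, y)
abnormality_score = clf.predict_proba(X)[:, 1]   # could feed an ROC analysis
```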
Content-based image retrieval utilizing explicit shape descriptors: applications to breast MRI and prostate histopathology
Content-based image retrieval (CBIR) systems, in the context of medical image analysis, allow a user to compare a query image to previously archived database images in terms of diagnostic and/or prognostic similarity. CBIR systems can therefore serve as a powerful computerized decision support tool for clinical diagnostics and also as a useful learning tool for medical students, residents, and fellows. An accurate CBIR system relies on two components: (1) image descriptors which are related to a previously defined notion of image similarity and (2) quantification of image descriptors in order to accurately characterize and capture the a priori defined image similarity measure. In many medical applications, the morphology of an object of interest (e.g. breast lesions on DCE-MRI or glands on prostate histopathology) may provide important diagnostic and prognostic information regarding the disease being investigated. Morphological attributes can be broadly categorized as being (a) model-based (MBD) or (b) non-model based (NMBD). Most computerized decision support tools leverage morphological descriptors (e.g. area, contour variation, and compactness) which belong to the latter category in that they do not explicitly model morphology for the object of interest. Conversely, descriptors such as Fourier descriptors (FDs) explicitly model the object of interest. In this paper, we present a CBIR system that leverages a novel set of MBD called Explicit Shape Descriptors (ESDs) which accurately describe the similarity between the morphology of objects of interest. ESDs are computed by: (a) fitting shape models to objects of interest, (b) pairwise comparison between shape models, and (c) a nonlinear dimensionality reduction scheme to extract a concise set of morphological descriptors in a reduced dimensional embedding space. We utilized our ESDs in the context of CBIR on three datasets: (1) the synthetic MPEG-7 Set B containing 1400 silhouette images, (2) DCE-MRI of 91 breast lesions, and (3) a digitized prostate histopathology dataset comprising 888 glands. For each dataset, each image was sequentially selected as a query image and the remaining images in the database were ranked according to how similar they were to the query image based on the ESDs. From this ranking, the area under the precision-recall curve (AUPRC) was calculated and averaged over all possible query images for each of the three datasets. For the MPEG-7 dataset, the bull's eye accuracy of our CBIR system is 78.65%, comparable to several state of the art shape modeling approaches. For the breast DCE-MRI dataset, ESDs outperform a set of NMBDs with an AUPRC of 0.55 ± 0.02. For the prostate histopathology dataset, ESDs and FDs perform equivalently with an AUPRC of 0.40 ± 0.01, but outperform NMBDs.
Amplitude remapping as a step towards standardizing the analysis of MR-images
M. Frommert, I. Sidorenko, J. Bauer, et al.
We investigate the utility of amplitude remapping of magnetic resonance (MR)-images for making the analysis of such images more independent of the MR-device, the selected sequence, and its parameters. To this end, we analyze the morphological structure of trabecular bones using weighted scaling indices and Minkowski functionals in the context of osteoporosis. After remapping the amplitude distribution of MR-images onto a normal distribution with zero mean and unit variance, we study how the diagnostic performance of the structure measures is affected by this remapping. The diagnostic performance of the scaling index method is stable under the remapping for both spin echo (SE) and gradient echo (GE) sequences: The area under curve (AUC) value from the ROC analysis changes only slightly from 0.76 (original image) to 0.74 (remapped image) for the SE sequence and from 0.78 to 0.77 for the GE sequence. For the Minkowski functionals, the diagnostic performance suffers significantly for the SE sequence, whereas it is much more robust for the GE sequence. Therefore, the scaling index method should be the method of choice when analyzing MR-images after amplitude remapping. We also find that in the scaling index analysis, the remapping makes the results much more consistent between the SE and the GE sequence by bringing the histograms of the scaling indices closer together. Thus, the amplitude remapping can be used as a first step to standardize the scaling index analysis between different sequences of an MRI device.
Segmentation of Vascular Images
Machine learning based vesselness measurement for coronary artery segmentation in cardiac CT volumes
Yefeng Zheng, Maciej Loziczonek, Bogdan Georgescu, et al.
Automatic coronary centerline extraction and lumen segmentation facilitate the diagnosis of coronary artery disease (CAD), which is a leading cause of death in developed countries. Various coronary centerline extraction methods have been proposed and most of them are based on shortest path computation given one or two end points on the artery. The major variation of the shortest path based approaches is in the different vesselness measurements used for the path cost. An empirically designed measurement (e.g., the widely used Hessian vesselness) is by no means optimal in the use of image context information. In this paper, a machine learning based vesselness is proposed by exploiting the rich domain specific knowledge embedded in an expert-annotated dataset. For each voxel, we extract a set of geometric and image features. The probabilistic boosting tree (PBT) is then used to train a classifier, which assigns a high score to voxels inside the artery and a low score to those outside. The detection score can be treated as a vesselness measurement in the computation of the shortest path. Since the detection score measures the probability of a voxel to be inside the vessel lumen, it can also be used for the coronary lumen segmentation. To speed up the computation, we perform classification only for voxels around the heart surface, which is achieved by automatically segmenting the whole heart from the 3D volume in a preprocessing step. An efficient voxel-wise classification strategy is used to further improve the speed. Experiments demonstrate that the proposed learning based vesselness outperforms the conventional Hessian vesselness in both speed and accuracy. On average, it only takes approximately 2.3 seconds to process a large volume with a typical size of 512x512x200 voxels.
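The following sketch illustrates the general idea of a learned vesselness converted into a path cost; it substitutes a generic gradient-boosting classifier for the probabilistic boosting tree used by the authors, and all feature and label arrays are assumed to be pre-computed for voxels near the heart surface.

```python
# Illustrative sketch only (not the paper's PBT): train a voxel classifier and use
# its probability of "inside the lumen" as a vesselness for shortest-path extraction.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

X_train = np.load("voxel_features_train.npy")   # (n_voxels, n_features), assumed
y_train = np.load("voxel_labels_train.npy")     # 1 = inside coronary lumen, 0 = outside; assumed

clf = GradientBoostingClassifier(n_estimators=200)   # stand-in for the PBT classifier
clf.fit(X_train, y_train)

vesselness = clf.predict_proba(np.load("voxel_features_test.npy"))[:, 1]

# Shortest-path cost: cheap where the learned vesselness is high.
path_cost = -np.log(np.clip(vesselness, 1e-6, 1.0))
```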
Automated vasculature extraction from placenta images
Nizar Almoussa, Brittany Dutra, Bryce Lampe, et al.
Recent research in perinatal pathology argues that analyzing properties of the placenta may reveal important information on how certain diseases progress. One important property is the structure of the placental blood vessels, which supply a fetus with all of its oxygen and nutrition. An essential step in the analysis of the vascular network pattern is the extraction of the blood vessels, which has only been done manually through a costly and time-consuming process. There is no existing method to automatically detect placental blood vessels; in addition, the large variation in the shape, color, and texture of the placenta makes it difficult to apply standard edge-detection algorithms. We describe a method to automatically detect and extract blood vessels from a given image by using image processing techniques and neural networks. We evaluate several local features for every pixel, in addition to a novel modification to an existing road detector. Pixels belonging to blood vessel regions have recognizable responses; hence, we use an artificial neural network to identify the pattern of blood vessels. A set of images where blood vessels are manually highlighted is used to train the network. We then apply the neural network to recognize blood vessels in new images. The network is effective in capturing the most prominent vascular structures of the placenta.
Level set based vessel segmentation accelerated with periodic monotonic speed function
To accelerate level-set based abdominal aorta segmentation on CTA data, we propose a periodic monotonic speed function, which allows segments of the contour to expand within one period and to shrink in the next period, i.e., coherent propagation. This strategy avoids the contour's local wiggling behavior, which often occurs during propagation when certain points move faster than their neighbors, as the curvature force will move them backwards even though the whole neighborhood will eventually move forwards. Using coherent propagation, these faster points will instead stay in their places waiting for their neighbors to catch up. A period ends when all the expanding/shrinking segments can no longer expand/shrink, which means that they have reached the border of the vessel or have been stopped by the curvature force. Coherent propagation also allows us to implement a modified narrow band level set algorithm that prevents endless computation at points that have reached the vessel border. As these points' expanding/shrinking trend changes after only a few iterations, the computation in the remaining iterations of one period can focus on the parts that are actually growing. Finally, a new convergence detection method is used to permanently stop updating the local level set function when the 0-level set is stationary in a voxel for several periods. The segmentation stops naturally when all points on the contour are stationary. In our preliminary experiments, a significant speedup (about 10 times) was achieved on 3D data with almost no loss of segmentation accuracy.
Multispectral MRI centerline tracking in carotid arteries
Hui Tang, Theo van Walsum, Robbert S. van Onkelen, et al.
We propose a minimum cost path approach to track the centerlines of the internal and external carotid arteries in multispectral MR data. User interaction is limited to the annotation of three seed points. The cost image is based on both a measure of vessel medialness and lumen intensity similarity in two MRA image sequences: Black Blood MRA and Phase Contrast MRA. After intensity inhomogeneity correction and noise reduction, the two images are aligned using affine registration. The two parameters that control the contrast of the cost image were determined in an optimization experiment on 40 training datasets. Experiments on the training datasets also showed that a cost image composed of a combination of gradient-based medialness and lumen intensity similarity increases the tracking accuracy compared to using only one of the constituents. Furthermore, centerline tracking using both MRA sequences outperformed tracking using only one of these MRA images. An independent test set of 152 images from 38 patients served to validate the technique. The centerlines of 148 images were successfully extracted using the parameters optimized on the training sets. The average mean distance to the reference standard, manually annotated centerlines, was 0.98 mm, which is comparable to the in-plane resolution. This indicates that the proposed method has a high potential to replace the manual centerline annotation.
CARES: Completely Automated Robust Edge Snapper for carotid ultrasound IMT measurement on a multi-institutional database of 300 images: a two stage system combining an intensity-based feature approach with first order absolute moments
Filippo Molinari, Rajendra Acharya, Guang Zeng, et al.
The carotid intima-media thickness (IMT) is the most widely used marker for the progression of atherosclerosis and the onset of cardiovascular diseases. Computer-aided measurements improve accuracy, but usually require user interaction. In this paper we characterized a new and completely automated technique for carotid segmentation and IMT measurement based on the merits of two previously developed techniques. We used an integrated approach of intelligent image feature extraction and line fitting for automatically locating the carotid artery in the image frame, followed by wall interface extraction based on a Gaussian edge operator. We called our system CARES. We validated CARES on a multi-institutional database of 300 carotid ultrasound images. The IMT measurement bias was 0.032 ± 0.141 mm, better than other automated techniques and comparable to that of user-driven methodologies. CARES processed 96% of the images, leading to a figure of merit of 95.7%. CARES ensured complete automation and high accuracy in IMT measurement; hence it could be a suitable clinical tool for processing large datasets in multicenter studies involving atherosclerosis.
Gradient-based 3D-2D registration of cerebral angiograms
Endovascular treatment of cerebral aneurysms and arteriovenous malformations (AVM) involves navigation of a catheter through the femoral artery and vascular system into the brain and into the aneurysm or AVM. Intra-interventional navigation utilizes digital subtraction angiography (DSA) to visualize vascular structures and X-ray fluoroscopy to localize the endovascular components. Due to the two-dimensional (2D) nature of the intra-interventional images, navigation through a complex three-dimensional (3D) structure is a demanding task. Registration of pre-interventional MRA, CTA, or 3D-DSA images and intra-interventional 2D DSA images can greatly enhance visualization and navigation. As a consequence of better navigation in 3D, the amount of required contrast medium and the absorbed dose could be significantly reduced. In the past, development and evaluation of 3D-2D registration methods received considerable attention, and several validation image databases and evaluation criteria were created and made publicly available. However, applications of 3D-2D registration methods to cerebral angiograms and their validation are rather scarce. In this paper, the 3D-2D robust gradient reconstruction-based (RGRB) registration algorithm is applied to CTA and DSA images and analyzed. For evaluation purposes, five image datasets were created, each comprising a 3D CTA and several 2D DSA-like digitally reconstructed radiographs (DRRs) generated from the CTA, together with accurate gold standard registrations. A total of 4000 registrations on these five datasets resulted in mean mTRE values between 0.07 and 0.59 mm, capture ranges between 6 and 11 mm and success rates between 61 and 88% using a failure threshold of 2 mm.
Posters: Registration
Log-Euclidean free-form deformation
Marc Modat, Gerard R. Ridgway, Pankaj Daga, et al.
The Free-Form Deformation (FFD) algorithm is a widely used method for non-rigid registration. Modifications have previously been proposed to ensure topology preservation and invertibility within this framework. However, in practice, none of these yield the inverse transformation itself, and one loses the parsimonious B-spline parametrisation. We present a novel log-Euclidean FFD approach in which a spline model of a stationary velocity field is exponentiated to yield a diffeomorphism, using an efficient scaling-and-squaring algorithm. The log-Euclidean framework allows easy computation of a consistent inverse transformation, and offers advantages in group-wise atlas building and statistical analysis. We optimise the Normalised Mutual Information plus a regularisation term based on the Jacobian determinant of the transformation, and we present a novel analytical gradient of the latter. The proposed method has been assessed against a fast FFD implementation (F3D) using simulated T1- and T2-weighted magnetic resonance brain images. The overlap measures between propagated grey matter tissue probability maps used in the simulations show similar results for both approaches; however, our new method obtains more reasonable Jacobian values, and yields inverse transformations.
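The exponentiation step mentioned above can be pictured with the 2D sketch below, which applies scaling and squaring to a dense stationary velocity field; the B-spline parametrisation, the NMI term and the Jacobian regularisation of the paper are not reproduced, and the number of squaring steps is an assumption.

```python
# Hedged 2D sketch of scaling-and-squaring: exponentiate a stationary velocity
# field into the displacement field of a diffeomorphism.
import numpy as np
from scipy.ndimage import map_coordinates

def exp_velocity_field(v, n_steps=6):
    """v: (2, H, W) stationary velocity field in voxel units (assumed dense, not B-spline)."""
    u = v / (2.0 ** n_steps)                              # scaling step
    grid = np.mgrid[0:v.shape[1], 0:v.shape[2]].astype(float)
    for _ in range(n_steps):                              # squaring: compose the field with itself
        warped = np.stack([
            map_coordinates(u[c], grid + u, order=1, mode="nearest") for c in range(2)
        ])
        u = u + warped                                    # (id+u) o (id+u) = id + u + u(id + u)
    return u                                              # displacement of exp(v)
```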
Correspondence estimation from non-rigid motion information
Jonas Wulff, Thomas Lotz, Thomas Stehle, et al.
The DIET (Digital Image Elasto Tomography) system is a novel approach to screen for breast cancer using only optical imaging information of the surface of a vibrating breast. 3D tracking of skin surface motion without the requirement of external markers is desirable. A novel approach to establish point correspondences using pure skin images is presented here. Instead of the intensity, motion is used as the primary feature, which can be extracted using optical flow algorithms. Taking sequences of multiple frames into account, this motion information alone is accurate and unambiguous enough to allow for a 3D reconstruction of the breast surface. Two approaches, direct and probabilistic, for this correspondence estimation are presented here, suitable for different levels of calibration information accuracy. Reconstructions show that the results obtained using these methods are comparable in accuracy to marker-based methods while considerably increasing resolution. The presented method has high potential in optical tissue deformation and motion sensing.
Co-registration of high resolution MRI sub-volumes in non-human primates
Jérémy Lecoeur, Feng Wang, Li Min Chen, et al.
Dynamic structural and functional remodeling of the Central Nervous System occurs throughout the lifespan of the organism, from the molecular to the systems level. MRI offers several advantages for observing this phenomenon: it is non-invasive and non-destructive, the contrast can be tuned to interrogate different tissue properties, and imaging resolution can range from cortical columns to whole brain networks in the same session. To measure these changes reliably, functional maps generated over time with high resolution fMRI need to be registered accurately. This article presents a new method for automatically registering thin cortical MR volumes that are aligned with the functional maps. These acquisitions focus on the primary somato-sensory cortex, a region in the anterior parietal part of the brain responsible for fine touch and proprioception. Currently, these slabs are acquired in approximately the same orientation from acquisition to acquisition and then registered by hand. Because they only cover a small portion of the cortex, their direct automatic registration is difficult. To address this issue, we propose a method relying on an intermediate image, acquired with a surface coil that covers a larger portion of the head, to which the slabs can be registered. Because images acquired with surface coils suffer from severe intensity attenuation artifacts, we also propose a method to register these. The results from data sets obtained with three squirrel monkeys show a registration accuracy of thirty micrometers.
Motion analysis for duplicate frame removal in wireless capsule endoscope
Hyun-Gyu Lee, Min-Kook Choi, Sang-Chul Lee
Wireless capsule endoscopy (WCE) has been intensively researched recently due to its convenience for diagnosis and its extended detection coverage of some diseases. Typically, a full recording covering the entire human digestive system requires about 8 to 12 hours for a patient carrying a capsule endoscope and a portable image receiver/recorder unit, which produces 120,000 image frames on average. In spite of the benefits of close examination, a WCE-based test poses a barrier to quick diagnosis in that a trained diagnostician must examine a huge number of images for close investigation, normally for over 2 hours. The main purpose of our work is to present a novel machine vision approach to reduce diagnosis time by automatically detecting duplicated recordings in the small intestine caused by backward camera movement, which typically contain redundant information. The developed technique could be integrated with a visualization tool that supports intelligent inspection methods, such as automatic play speed control. Our experimental results show the high accuracy of the technique, detecting 989 duplicate image frames out of 10,000, equivalent to a 9.9% data reduction, in a WCE video from a real human subject. With some selected parameters, we achieved a correct detection ratio of 92.85% and a false detection ratio of 13.57%.
Fully automated prone-supine coregistration in computed tomographic colonography
Brynmor J. Davis, James A. Norris, Jerry Y. Bieszczad, et al.
A fully automated, anatomically-based procedure is developed for the coregistration of prone and supine scans in computed tomographic colonography (CTC). Haustral folds, teniae coli and other anatomic landmarks are extracted from the segmented colonic lumen and serve as the basis for iterative optimization-based matching of the colonic surfaces. The three-dimensional coregistration is computed efficiently using a two-dimensional filet representation of the colon. The circumferential positions of longitudinal structures such as teniae coli are used to estimate a rotational prone-to-supine deformation, haustral folds give a longitudinal (stretching) deformation, while other landmarks and anatomical considerations are used to constrain the allowable deformations. The proposed method is robust to changes in the detected anatomical landmarks such as the obscuration or apparent bifurcation of teniae coli. Preliminary validation in the Walter Reed CTC data set shows excellent coregistration accuracy: 57 manually identified features (such as polyps and diverticula) are automatically coregistered with a mean three-dimensional error of 16.4 mm. In phantom studies, 210 fiducial pairs are coregistered to a mean three-dimensional error of 8.6 mm. The coregistration allows points of interest in one scan to be automatically located in the other, leading to an expected improvement in per-patient read time and a significant reduction in the cost of CTC.
Local rigid registration for multimodal texture feature extraction from medical images
Sebastian Steger
The joint extraction of texture features from medical images of different modalities requires an accurate image registration at the target structures. In many cases rigid registration of the entire images does not achieve the desired accuracy, whereas deformable registration is too complex and may result in undesired deformations. This paper presents a novel region-of-interest alignment approach based on local rigid registration, enabling image fusion for multimodal texture feature extraction. First, rigid registration of the entire images is performed to obtain an initial guess. Then small cubic regions around the target structure are clipped from all images and individually rigidly registered. The approach was applied to extract texture features in clinically acquired CT and MR images from lymph nodes in the oropharynx for an oral cancer recurrence prediction framework. Visual inspection showed that in all of the 30 cases at least a subtle misalignment was perceivable for the globally rigidly aligned images. After applying the presented approach, the alignment of the target structure improved significantly in 19 cases. In 12 cases no alignment mismatch whatsoever was perceptible, without requiring the complexity of deformable registration and without deforming the target structure. Further investigation showed that if the resolutions of the individual modalities differ significantly, partial volume effects occur, diminishing the significance of the multimodal features even for perfectly aligned images.
Registration of multi-view apical 3D echocardiography images
H. W. Mulder, M. van Stralen, H. B. van der Zwaan M.D., et al.
Real-time three-dimensional echocardiography (RT3DE) is a non-invasive method to visualize the heart. Disadvantageously, it suffers from non-uniform image quality and a limited field of view. Image quality can be improved by fusion of multiple echocardiography images. Successful registration of the images is essential for effective fusion. Therefore, this study examines the performance of different methods for intrasubject registration of multi-view apical RT3DE images. A total of 14 data sets was annotated by two observers who indicated the position of the apex and four points on the mitral valve ring. These annotations were used to evaluate registration. Multi-view end-diastolic (ED) as well as end-systolic (ES) images were rigidly registered in a multi-resolution strategy. The performance of single-frame and multi-frame registration was examined. Multi-frame registration optimizes the metric for several time frames simultaneously. Furthermore, the suitability of mutual information (MI) as similarity measure was compared to normalized cross-correlation (NCC). For initialization of the registration, a transformation that describes the probe movement was obtained by manually registering five representative data sets. It was found that multi-frame registration can improve registration results with respect to single-frame registration. Additionally, NCC outperformed MI as similarity measure. If NCC was optimized in a multi-frame registration strategy including ED and ES time frames, the performance of the automatic method was comparable to that of manual registration. In conclusion, automatic registration of RT3DE images performs as well as manual registration. As registration precedes image fusion, this method can contribute to improved quality of echocardiography images.
Robust linear registration of CT images using random regression forests
Ender Konukoglu, Antonio Criminisi, Sayan Pathak, et al.
Global linear registration is a necessary first step for many different tasks in medical image analysis. Comparing longitudinal studies [1], cross-modality fusion [2], and many other applications depend heavily on the success of the automatic registration. The robustness and efficiency of this step is crucial as it affects all subsequent operations. Most common techniques cast the linear registration problem as the minimization of a global energy function based on the image intensities. Although these algorithms have proved useful, their robustness in fully automated scenarios is still an open question. In fact, the optimization step often gets caught in local minima yielding unsatisfactory results. Recent algorithms constrain the space of registration parameters by exploiting implicit or explicit organ segmentations, thus increasing robustness [4,5]. In this work we propose a novel robust algorithm for automatic global linear image registration. Our method uses random regression forests to estimate posterior probability distributions for the locations of anatomical structures, represented as axis-aligned bounding boxes [6]. These posterior distributions are later integrated in a global linear registration algorithm. The biggest advantage of our algorithm is that it does not require pre-defined segmentations or regions. Yet it yields robust registration results. We compare the robustness of our algorithm with that of the state of the art Elastix toolbox [7]. Validation is performed via 1464 pair-wise registrations in a database of very diverse 3D CT images. We show that our method decreases the "failure" rate of the global linear registration from 12.5% (Elastix) to only 1.9%.
Ridge-based retinal image registration algorithm involving OCT fundus images
Ying Li, Giovanni Gregori, Robert W. Knighton, et al.
This paper proposes an algorithm for retinal image registration involving OCT fundus images (OFIs). The first application of the algorithm is to register OFIs with color fundus photographs; such registration between multimodal retinal images can help correlate features across imaging modalities, which is important for both clinical and research purposes. The second application is to perform the montage of several OFIs, which allows us to construct 3D OCT images over a large field of view out of separate OCT datasets. We use blood vessel ridges as registration features. A brute force search and an Iterative Closest Point (ICP) algorithm are employed for image pair registration. Global alignment to minimize the distance between matching pixel pairs is used to obtain the montage of OFIs. The quality of the OFIs is the main limiting factor for the registration algorithm. In the first experiment, the effect of manual OFI enhancement on registration was evaluated for the affine model on 11 image pairs from diseased eyes. The average root mean square error (RMSE) decreases from 58 μm to 40 μm. This indicates that the registration algorithm is robust to manual enhancement. In the second experiment, for the montage of OFIs, the algorithm was tested on 6 sets from healthy eyes and 6 sets from diseased eyes, each set having 8 partially overlapping SD-OCT images. Visual evaluation showed that the montage performance was acceptable for normal cases, but not good for abnormal cases due to low visibility of blood vessels. The average RMSE for a typical montage case from a healthy eye is 2.3 pixels (69 μm).
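To make the ICP stage concrete, the sketch below runs a basic rigid (rotation plus translation) ICP between two 2D sets of vessel-ridge points; the authors' actual pipeline uses a brute-force initial search and an affine model, so this is only a simplified illustration and the iteration count is an assumption.

```python
# Hedged sketch: rigid 2D ICP between ridge points of an OCT fundus image (moving)
# and a colour fundus photograph (fixed).
import numpy as np
from scipy.spatial import cKDTree

def icp_rigid_2d(moving, fixed, n_iter=50):
    """moving, fixed: (n, 2) arrays of ridge-point coordinates; returns aligned moving points."""
    tree = cKDTree(fixed)
    src = moving.astype(float).copy()
    for _ in range(n_iter):
        _, idx = tree.query(src)                  # closest fixed ridge point for each moving point
        tgt = fixed[idx]
        mu_s, mu_t = src.mean(0), tgt.mean(0)
        H = (src - mu_s).T @ (tgt - mu_t)         # 2x2 cross-covariance of the matched sets
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T                            # Kabsch rotation estimate
        if np.linalg.det(R) < 0:                  # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t                       # apply the rigid update
    return src
```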
A 2D to 3D ultrasound image registration algorithm for robotically assisted laparoscopic radical prostatectomy
Mehdi Esteghamatian, Stephen E. Pautler, Charles A. McKenzie, et al.
Robotically assisted laparoscopic radical prostatectomy (RARP) is an effective approach to resect the diseased organ, with stereoscopic views of the targeted tissue improving the dexterity of the surgeons. However, since the laparoscopic view acquires only the surface image of the tissue, the underlying distribution of the cancer within the organ is not observed, making it difficult to make informed decisions on surgical margins and sparing of neurovascular bundles. One option to address this problem is to exploit registration to integrate the laparoscopic view with images of pre-operatively acquired dynamic contrast enhanced (DCE) MRI that can demonstrate the regions of malignant tissue within the prostate. Such a view potentially allows the surgeon to visualize the location of the malignancy with respect to the surrounding neurovascular structures, permitting a tissue-sparing strategy to be formulated directly based on the observed tumour distribution. If the tumour is close to the capsule, it may be determined that the adjacent neurovascular bundle (NVB) needs to be sacrificed within the surgical margin to ensure that any erupted tumour was resected. On the other hand, if the cancer is sufficiently far from the capsule, one or both NVBs may be spared. However, in order to realize such image integration, the pre-operative image needs to be fused with the laparoscopic view of the prostate. During the initial stages of the operation, the prostate must be tracked in real time so that the pre-operative MR image remains aligned with the patient coordinate system. In this study, we propose and investigate a novel 2D to 3D ultrasound image registration algorithm to track the prostate motion with an accuracy of 2.68±1.31mm.
Multimodal image registration by edge attraction and regularization using a B-spline grid
Almar Klein, Dirk-Jan Kroon, Yvonne Hoogeveen, et al.
Multi-modal image registration enables images from different modalities to be analyzed in the same coordinate system. The class of B-spline-based methods that maximize the Mutual Information between images produces satisfactory results in general, but these methods are often complex and can converge slowly. The popular Demons algorithm, while being fast and easy to implement, produces unrealistic deformation fields and is sensitive to illumination differences between the two images, which makes it unsuitable for multi-modal registration in its original form. We propose a registration algorithm that combines a B-spline grid with deformations driven by image forces. The algorithm is easy to implement and is robust against large differences in appearance between the images to register. The deformation is driven by attraction forces between the edges in both images, and a B-spline grid is used to regularize the sparse deformation field. The grid is updated using an original approach by weighting the deformation forces for each pixel individually with the edge strengths. This approach makes the algorithm perform well even if not all corresponding edges are present. We report preliminary results by applying the proposed algorithm to a set of (multi-modal) test images. The results show that the proposed method performs well, but is less accurate than state of the art registration methods based on Mutual Information. In addition, the algorithm is used to register test images to manually drawn line images in order to demonstrate the algorithm's robustness.
Non-rigid registration of multiphoton microscopy images using B-splines
Kevin S. Lorenz, Paul Salama, Kenneth W. Dunn, et al.
Optical microscopy poses many challenges for digital image analysis. One particular challenge is the correction of image artifacts due to respiratory motion in specimens imaged in vivo. We describe a non-rigid registration method using B-splines to correct these motion artifacts. Current attempts at non-rigid medical image registration have typically involved only a single pair of images. In this paper, these techniques are extended to an entire series of images, possibly comprising hundreds of images. Our method involves creating a uniform grid of control points across each image in a stack. Each control point is manipulated by optimizing a cost function consisting of two parts: a term to determine image similarity, and a term to evaluate deformation grid smoothness. This process is repeated for all images in the stack. Results are evaluated using block motion estimation and other visualization techniques.
Efficient registration method of medical images using GPU
Tsuneya Kurihara, Kazuki Matsuzaki, Kumiko Seto, et al.
Registration of medical images is an important task; however, automatic image-based registration is computationally expensive. Given this task, the authors propose an efficient rigid registration method, which is based on mutual information and uses a graphics processing unit (GPU). Mutual-information-based registration methods require joint-histogram computation. Although a GPU can provide high performance computing, a joint histogram has a large number of bins, and the computation of such a histogram is not suitable for a GPU (whose shared memory is limited). Taking advantage of the fact that one image (the reference image) is not transformed during the registration process, the proposed method computes a joint histogram by computing multiple one-dimensional histograms and combining them. The method can therefore be efficiently implemented on a GPU even with limited shared memory. Experimental results for 256 × 256 × 256 image registration show that the method is about 140 times faster than a standard implementation on a CPU and 2.6 times faster than previous methods using GPUs.
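The histogram decomposition described above can be sketched on the CPU as follows: since the reference image is never transformed, its per-voxel bin assignment is fixed, and the joint histogram is just one small one-dimensional histogram of warped floating-image intensities per reference bin. The binning details are assumptions; the GPU implementation itself is not shown.

```python
# Hedged CPU sketch of the joint-histogram decomposition used for the GPU method.
import numpy as np

def joint_histogram(reference, warped_floating, n_bins=64):
    """reference, warped_floating: arrays of the same shape; returns (n_bins, n_bins) counts."""
    ref_bins = np.clip((reference / reference.max() * n_bins).astype(int), 0, n_bins - 1)
    flo_bins = np.clip((warped_floating / warped_floating.max() * n_bins).astype(int), 0, n_bins - 1)
    joint = np.zeros((n_bins, n_bins), dtype=np.int64)
    for b in range(n_bins):
        # One 1D histogram of floating-image bins over the voxels of reference bin b;
        # these per-bin histograms are independent and easy to compute in parallel.
        joint[b] = np.bincount(flo_bins[ref_bins == b], minlength=n_bins)
    return joint
```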
Evaluation of optimization methods for intensity-based 2D-3D registration in x-ray guided interventions
I. M. J. van der Bom, S. Klein, M. Staring, et al.
The advantage of 2D-3D image registration methods versus direct image-to-patient registration, is that these methods generally do not require user interaction (such as manual annotations), additional machinery or additional acquisition of 3D data. A variety of intensity-based similarity measures has been proposed and evaluated for different applications. These studies showed that the registration accuracy and capture range are influenced by the choice of similarity measure. However, the influence of the optimization method on intensity-based 2D-3D image registration has not been investigated. We have compared the registration performance of seven optimization methods in combination with three similarity measures: gradient difference, gradient correlation, and pattern intensity. Optimization methods included in this study were: regular step gradient descent, Nelder-Mead, Powell-Brent, Quasi-Newton, nonlinear conjugate gradient, simultaneous perturbation stochastic approximation, and evolution strategy. Registration experiments were performed on multiple patient data sets that were obtained during cerebral interventions. Various component combinations were evaluated on registration accuracy, capture range, and registration time. The results showed that for the same similarity measure, different registration accuracies and capture ranges were obtained when different optimization methods were used. For gradient difference, largest capture ranges were obtained with Powell-Brent and simultaneous perturbation stochastic approximation. Gradient correlation and pattern intensity had the largest capture ranges in combination with Powell-Brent, Nelder-Mead, nonlinear conjugate gradient, and Quasi-Newton. Average registration time, expressed in the number of DRRs required for convergence, was the lowest for Powell-Brent. Based on these results, we conclude that Powell-Brent is a reliable optimization method for intensity-based 2D-3D registration of x-ray images to CBCT, regardless of the similarity measure used.
Posters: Atlases
Evaluation of multi atlas-based approaches for the segmentation of the thyroid gland in IMRT head-and-neck CT images
Antong Chen, Kenneth J. Niermann, Matthew A. Deeley, et al.
Segmenting the thyroid gland in head and neck CT images for IMRT treatment planning is of great importance. In this work, we evaluate and compare multi-atlas methods to segment this structure. The methods we evaluate range from using a single average atlas representative of the population to selecting one atlas based on three similarity measures. We also compare ways to combine segmentation results obtained with several atlases, i.e., the vote rule and STAPLE, a commonly used method for combining multiple segmentations. We show that the best results are obtained when several atlases are combined. We also show that with our data sets, STAPLE does not lead to the best results.
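A minimal sketch of vote-rule fusion for binary atlas segmentations, assuming all propagated label maps are already in the target image space; STAPLE itself (an EM-based estimator) is not reproduced here.

```python
# Minimal sketch of vote-rule label fusion for binary atlas segmentations.
import numpy as np

def majority_vote(label_maps):
    stack = np.stack([m.astype(np.uint8) for m in label_maps])
    # A voxel is foreground when more than half of the atlases agree.
    return (2 * stack.sum(axis=0) > len(label_maps)).astype(np.uint8)
```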
Automatic skull-stripping of rat MRI/DTI scans and atlas building
Ipek Oguz, Joohwi Lee, Francois Budin, et al.
3D Magnetic Resonance (MR) and Diffusion Tensor Imaging (DTI) have become important noninvasive tools for the study of animal models of brain development and neuropathologies. Fully automated analysis methods adapted to rodent scale for these images will allow high-throughput studies. A fundamental first step for most quantitative analysis algorithms is skull-stripping, which refers to the segmentation of the image into two tissue categories, brain and non-brain. In this manuscript, we present a fully automatic skull-stripping algorithm in an atlas-based manner. We also demonstrate how to either modify an external atlas or to build an atlas from the population itself to present a self-contained approach. We applied our method to three datasets of rat brain scans, at different ages (PND5, PND14 and adult), different study groups (control, ethanol exposed, intrauterine cocaine exposed), as well as different image acquisition parameters. We validated our method by comparing the automated skull-strip results to manual delineations performed by our expert, which showed a discrepancy of less than a single voxel on average. We thus demonstrate that our algorithm can robustly and accurately perform the skull-stripping within one voxel of the manual delineation, and in a fraction of the time it takes a human expert.
Evaluating and improving label fusion in atlas-based segmentation using the surface distance
T. R. Langerak, U. A. van der Heide, A. N. T. J. Kotte, et al.
Atlas-based segmentation is an increasingly popular method of automatically computing a segmentation. In the past, results of atlas-based segmentation have been evaluated using a volume overlap measure such as the Dice or Jaccard coefficients. However, in the first part of this paper we argue and show that volume overlap measures are insensitive to local deviations. As a result, a segmentation that is judged to be of good quality when using such a measure may have large local deviations that may be problematic in clinical practice. In this paper, two versions of the surface distance are proposed as an alternative measure to evaluate the results of atlas-based segmentation, as they give more local information and therefore allow the detection of large local deviations. In most current atlas-based segmentation methods, the results of multiple atlases are combined into a single segmentation in a process called 'label fusion'. In a label fusion process it is important that high-quality segmentations can be distinguished from low-quality ones. In the second part of the paper we use the surface distance as a similarity measure during label fusion. We present a modified version of the previously proposed SIMPLE algorithm, which selects propagated atlas segmentations based on their similarity with a preliminary estimate of the ground truth segmentation. The SIMPLE algorithm previously used the Dice coefficient as a similarity measure, and in this paper we demonstrate that, using the spatial distance map instead, the results of atlas-based segmentation significantly improve.
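The surface-distance measure argued for above can be sketched with distance transforms; the code below assumes binary volumes on a regular grid with known voxel spacing and computes the average symmetric surface distance, one of several possible surface-distance variants.

```python
# Hedged sketch of an average symmetric surface distance for binary volumes.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface_distances(seg, ref, spacing=(1.0, 1.0, 1.0)):
    """Distances from the surface voxels of `seg` to the surface of `ref`."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    seg_surf = seg & ~binary_erosion(seg)
    ref_surf = ref & ~binary_erosion(ref)
    dt_ref = distance_transform_edt(~ref_surf, sampling=spacing)
    return dt_ref[seg_surf]

def average_symmetric_surface_distance(seg, ref, spacing=(1.0, 1.0, 1.0)):
    d1 = surface_distances(seg, ref, spacing)
    d2 = surface_distances(ref, seg, spacing)
    return (d1.sum() + d2.sum()) / (len(d1) + len(d2))
```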
Group-wise automatic mesh-based analysis of cortical thickness
Clement Vachet, Heather Cody Hazlett, Marc Niethammer, et al.
The analysis of neuroimaging data from pediatric populations presents several challenges. There are normal variations in brain shape from infancy to adulthood and normal developmental changes related to tissue maturation. Measurement of cortical thickness is one important way to analyze such developmental tissue changes. We developed a novel framework that allows group-wise automatic mesh-based analysis of cortical thickness. Our approach is divided into four main parts. First an individual pre-processing pipeline is applied on each subject to create genus-zero inflated white matter cortical surfaces with cortical thickness measurements. The second part performs an entropy-based group-wise shape correspondence on these meshes using a particle system, which establishes a trade-off between an even sampling of the cortical surfaces and the similarity of corresponding points across the population using sulcal depth information and spatial proximity. A novel automatic initial particle sampling is performed using a matched 98-lobe parcellation map prior to a particle-splitting phase. Third, corresponding re-sampled surfaces are computed with interpolated cortical thickness measurements, which are finally analyzed via a statistical vertex-wise analysis module. This framework consists of a pipeline of automated 3D Slicer compatible modules. It has been tested on a small pediatric dataset and incorporated in an open-source C++ based high-level module called GAMBIT. GAMBIT's setup allows efficient batch processing, grid computing and quality control. The current research focuses on the use of an average template for correspondence and surface re-sampling, as well as thorough validation of the framework and its application to clinical pediatric studies.
A totally deflated lung's CT image construction by means of extrapolated deformable registration
Ali Sadeghi Naini, Rajni V. Patel, Abbas Samani
A novel technique is proposed to construct a CT image of a totally deflated lung using the lung's preoperative breath-hold CT images acquired during respiration. Such a constructed CT image is very useful in tumor targeting during tumor ablative procedures such as lung brachytherapy used for lung cancer treatment. To minimize motion within the target lung, tumor ablative procedures are frequently performed while the lung is totally deflated. Deflating the lung during such procedures renders pre-operative images ineffective for tumor targeting, because those images correspond to the lung while it is partially inflated. Furthermore, the problem cannot be solved using intra-operative ultrasound (US) images alone, because the quality of lung US images degrades substantially as a result of the residual air inside the deflated lung, so US is not an effective intra-operative imaging modality by itself. One possible approach for image-guided lung brachytherapy is to register high-quality preoperative CT images of the deflated lung with their corresponding low-quality intra-operative US images. To obtain the CT images of the deflated lung, a novel image construction technique is presented. The proposed technique was implemented using two deformable registration methods: multi-resolution B-spline and multi-resolution demons. The technique was applied to ex vivo porcine lungs, where the results obtained were found to be very encouraging.
An automated pipeline for cortical surface generation and registration of the cerebral cortex
Wen Li, Luis Ibanez, Arnaud Gelas, et al.
The human cerebral cortex is one of the most complicated structures in the body. It has a highly convoluted structure with much of the cortical sheet buried in sulci. Based on cytoarchitectural and functional imaging studies, it is possible to segment the cerebral cortex into several subregions. While it is only possible to differentiate the true anatomical subregions based on cytoarchitecture, the surface morphometry aligns closely with the underlying cytoarchitecture and provides features that allow the surface of the cortex to be parcellated based on the sulcal and gyral patterns that are readily visible on MR images. We have developed a fully automated pipeline for the generation and registration of cortical surfaces in the spherical domain. The pipeline starts with the BRAINS AutoWorkup pipeline. Subsequently, topology correction and surface generation are performed to produce a genus-zero surface, which is then mapped to a sphere. Several surface features are then calculated to drive the registration between the atlas surface and other datasets. A spherical diffeomorphic demons algorithm is used to co-register an atlas surface onto a subject surface. A lobar-based atlas of the cerebral cortex was created from a manual parcellation of the cortex. The atlas surface was then co-registered to five additional subjects using the spherical diffeomorphic demons algorithm. The labels from the atlas surface were warped onto the subject surfaces and compared to the manual raters' labels. The average Dice overlap index was 0.89 across all regions.
Groupwise consistent image registration: a crucial step for the construction of a standardized near infrared hyper-spectral teeth database
Žiga Špiclin, Peter Usenik, Miran Bürmen, et al.
Construction of a standardized near infrared (NIR) hyper-spectral teeth database is a first step in the development of a reliable diagnostic tool for quantification and early detection of dental diseases. The standardized diffuse reflectance hyper-spectral database was constructed by imaging 12 extracted human teeth with natural lesions of various degrees in the spectral range from 900 to 1700 nm with spectral resolution of 10 nm. Additionally, all the teeth were imaged by X-ray and digital color camera. The color and X-ray teeth images were presented to the expert for localization and classification of the dental diseases, thereby obtaining a dental disease gold standard. Accurate transfer of the dental disease gold standard to the NIR images was achieved by image registration in a groupwise manner, taking advantage of the multichannel image information and promoting image edges as the features for the improvement of spatial correspondence detection. By the presented fully automatic multi-modal groupwise registration method, images of new teeth samples can be accurately and reliably registered and then added to the standardized NIR hyper-spectral teeth database. Adding more samples increases the biological and patho-physiological variability of the NIR hyper-spectral teeth database and can importantly contribute to the objective assessment of the sensitivity and specificity of multivariate image analysis techniques used for the detection of dental diseases. Such assessment is essential for the development and validation of reliable qualitative and especially quantitative diagnostic tools based on NIR spectroscopy.
Posters: Segmentation
Model-based segmentation of the facial nerve and chorda tympani in pediatric CT scans
Fitsum A Reda, Jack H. Noble, Alejandro Rivas, et al.
In image-guided cochlear implant surgery, an electrode array is implanted in the cochlea to treat hearing loss. Access to the cochlea is achieved by drilling from the outer skull to the cochlea through the facial recess, a region bounded by the facial nerve and the chorda tympani. To exploit existing methods for automatically computing safe drilling trajectories, the facial nerve and chorda tympani need to be segmented. The effectiveness of traditional segmentation approaches to achieve this is severely limited because the facial nerve and chorda are small structures (~1 mm and ~0.3 mm in diameter, respectively) and exhibit poor image contrast. We have recently proposed a technique to achieve this task in adult patients, which relies on statistical models of the structures. These models contain intensity and shape information along the central axes of both structures. In this work we use the same method to segment pediatric scans. We show that substantial differences exist between the anatomy of children and the anatomy of adults, which lead to poor segmentation results when an adult model is used to segment a pediatric volume. We have built a new model for pediatric cases and we have applied it to ten scans. A leave-one-out validation experiment was conducted in which manually segmented structures were compared to automatically segmented structures. The maximum segmentation error was 1 mm. This result indicates that accurate segmentation of the facial nerve and chorda in pediatric scans is achievable, thus suggesting that safe drilling trajectories can also be computed automatically.
Estimation of sufficient signal to noise ratio for texture analysis of magnetic resonance images
Sami Savio, Lara Harrison, Pertti Ryymin, et al.
In this study, we examined the effect of background noise on the texture analysis of muscle, bone marrow and fat tissues in 1.5 T magnetic resonance (MR) images using different statistical methods. Variable levels of noise were first added to 3-mm-thick T2-weighted image slices of volunteer subjects to simulate several signal-to-noise ratio (SNR) levels. For each original and simulated image, the values of 264 texture parameters were calculated using MaZda, a texture analysis toolkit. We also determined Fisher coefficients based on the texture parameter values in order to enable high discrimination between different tissues. Linear discriminant analysis (LDA) and two different nearest neighbour (NN) methods were then applied to the texture parameters with the highest Fisher coefficient values. Several training and test sets were used to approximate the variation in the classification results. All the above-mentioned methods had the same classification accuracy, which in turn depended on the image SNR. We conclude that these tissues can be detected by texture analysis methods with sufficient accuracy (90%), especially if the SNR is at least 30-40 dB, even though the separation of different muscles remains a very challenging task.
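One common formulation of the Fisher coefficient used for this kind of feature ranking is the ratio of between-class to within-class scatter of a single texture feature; the sketch below is illustrative and not necessarily the exact definition used by MaZda.

```python
# Hedged sketch of a Fisher-coefficient ranking for one texture feature
# measured on samples from several tissue classes.
import numpy as np

def fisher_coefficient(values, labels):
    classes = np.unique(labels)
    overall_mean = values.mean()
    between = np.mean([(values[labels == c].mean() - overall_mean) ** 2
                       for c in classes])      # between-class scatter
    within = np.mean([values[labels == c].var() for c in classes])  # within-class scatter
    return between / within if within > 0 else np.inf
```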
Variational level-set segmentation and tracking of left ventricle using field prior
Mariam Afshin, Ismail Ben Ayed, Ali Islam, et al.
This study investigates a novel method of tracking the Left Ventricle (LV) boundary curve in Magnetic Resonance (MR) sequences. The method focuses on energy minimization by level-set curve evolution. The level-set framework allows introducing knowledge of the field prior into the solution. The segmentation at each time point relies not only on the current image but also on the segmented image from the previous phase. The field prior is defined based on the experimental observation that the mean logarithm of intensity inside the endo- and epicardium is approximately constant during a cardiac cycle. The solution is obtained by evolving two curves following the Euler-Lagrange minimization of a functional containing a field constraint. The functional measures the consistency of the field prior over a cardiac sequence. Our preliminary results show that the obtained segmentations are very well correlated with those manually obtained by experts. Furthermore, we observed that the proposed field prior speeds up curve evolution significantly and reduces the computational load.
A novel segmentation method to identify left ventricular infarction in short-axis composite strain-encoded magnetic resonance images
Ahmad O. Algohary, Muhammad K. Metwally, Ahmed M. El-Bialy, et al.
Composite Strain Encoding (CSENC) is a new Magnetic Resonance Imaging (MRI) technique for simultaneously acquiring cardiac functional and viability images. It combines the use of the Delayed Enhancement (DE) and Strain Encoding (SENC) imaging techniques to identify infarcted (dead) tissue and to image the myocardial deformation inside the heart muscle. In this work, a new unsupervised segmentation method is proposed to identify infarcted left ventricular tissue in the images provided by CSENC MRI. The proposed method is based on the sequential application of a Bayesian classifier, Otsu's thresholding, morphological opening, radial sweep boundary tracing and the fuzzy C-means (FCM) clustering algorithm. This method is tested on images of twelve patients with and without myocardial infarction (MI) and on simulated heart images with various levels of superimposed noise. The resulting clustered images are compared with those marked up by an expert cardiologist who assisted in validating results coming from the proposed method. Infarcted myocardium is correctly identified using the proposed method with high levels of accuracy and precision.
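Two of the steps in that sequence, Otsu's thresholding with a morphological opening and a fuzzy C-means clustering of the remaining intensities, can be sketched as follows; the ordering, parameters, and helper names are illustrative simplifications, not the authors' pipeline.

```python
# Hedged, simplified sketch of Otsu thresholding + opening + fuzzy C-means
# on the intensities inside the resulting mask.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import opening, disk

def fuzzy_c_means(values, n_clusters=3, m=2.0, iters=50):
    """Minimal fuzzy C-means on a 1D array of intensities."""
    centers = np.linspace(values.min(), values.max(), n_clusters)
    for _ in range(iters):
        dist = np.abs(values[:, None] - centers[None, :]) + 1e-9
        u = 1.0 / dist ** (2.0 / (m - 1.0))       # membership weights
        u /= u.sum(axis=1, keepdims=True)
        centers = (u ** m * values[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
    return u.argmax(axis=1), centers

def segment_slice(img):
    mask = opening(img > threshold_otsu(img), disk(2))   # threshold + opening
    labels, centers = fuzzy_c_means(img[mask])
    return mask, labels, centers
```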
Automated analysis of infarct heterogeneity on delayed enhancement magnetic resonance images
YingLi Lu, Gideon A. Paul, Kim A. Connelly, et al.
In this work, we propose an automated infarct heterogeneity analysis method for cardiac delayed enhancement magnetic resonance images (DE-MRI). Advantages of this method include that it eliminates manual contouring of the left ventricle and automatically distinguishes infarct, "gray zone" (heterogeneous mixture of healthy and infarct tissue), and healthy tissue pixels despite variability in intensity and noise across images. Quantitative evaluation was performed on 12 patients. The automatically determined infarct core size and gray zone size showed high correlation with that derived from manual delineation (R2 = 0.91 for infarct core size and R2 = 0.87 for gray zone size). The automatic method shortens the evaluation to 5.6 ±2.2 s per image, compared with 3 min for the manual method. These results indicate a promising method for automatic analysis of infarct heterogeneity with DE-MRI that should be beneficial for reducing variability in quantitative analysis and improving workflow.
White matter lesion segmentation using machine learning and weakly labeled MR images
Yuchen Xie, Xiaodong Tao
We propose a fast, learning-based algorithm for segmenting white matter (WM) lesions in magnetic resonance (MR) brain images. The inputs to the algorithm are T1, T2, and FLAIR images. Unlike most previously reported learning-based algorithms, which treat an expert-labeled lesion map as ground truth in the training step, the proposed algorithm only requires the user to provide a few regions of interest (ROIs) containing lesions. An unsupervised clustering algorithm is applied to segment these ROIs into areas. Based on the assumption that lesion voxels have higher intensity on the FLAIR image, areas corresponding to lesions are identified and their probability distributions in the T1, T2, and FLAIR images are computed. The lesion segmentation in 3D is done by using the probability distributions to generate a confidence map of lesions and applying a graph-based segmentation algorithm to label lesion voxels. The initial lesion label is used to further refine the probability distribution estimation for the final lesion segmentation. The advantages of the proposed algorithm are: 1. By using the weak labels, we reduce the dependency of the segmentation performance on the expert discrimination of lesion voxels in the training samples; 2. The training can be done using labels generated by users with only general knowledge of brain anatomy and the image characteristics of WM lesions, instead of labels carefully produced by experienced radiologists; 3. The algorithm is fast enough to make interactive segmentation possible. We test the algorithm on nine ACCORD-MIND MRI datasets. Experimental results show that our algorithm agrees well with expert labels and outperforms a support vector machine-based WM lesion segmentation algorithm.
Fast 4D segmentation of large datasets using graph cuts
Herve Lombaert, Yiyong Sun, Farida Cheriet
In this paper, we propose to use 4D graph cuts for the segmentation of large spatio-temporal (4D) datasets. Indeed, as 4D datasets grow in popularity in many clinical areas, so will the demand for efficient general segmentation algorithms. The graph cuts method [1] has become a leading method for complex 2D and 3D image segmentation in many applications. Despite a few attempts [2-5] in 4D, the use of graph cuts on typical medical volumes quickly exceeds today's computer capacities. Among all existing graph-cuts-based methods [6-10], the multilevel banded graph cuts [9] is the fastest and uses the least amount of memory. Nevertheless, this method has its limitations. Memory becomes an issue when using large 4D volume sequences, and small structures become hardly recoverable when using narrow bands. We thus improve the boundary refinement efficiency by using a 4D competitive region growing. First, we construct a coarse graph at a low resolution with strong temporal links to prevent the shrink bias inherent to the graph cuts method. Second, we perform a competitive region growing using a priority queue to capture all fine details. Leaks are prevented by constraining the competitive region growing within a banded region and by adding a viscosity term. This strategy yields results comparable to the multilevel banded graph cuts but is faster and allows its application to large 4D datasets. We applied our method to both cardiac 4D MRI and 4D CT datasets with promising results.
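A priority-queue-driven competitive region growing can be sketched in 2D as below; the banded constraint, viscosity term, and 4D neighborhood of the paper are omitted, and the accumulated intensity-difference priority is an assumption.

```python
# Hedged 2D sketch of competitive region growing driven by a priority queue.
import heapq
import numpy as np

def competitive_region_growing(image, seeds):
    """`seeds` maps label -> (row, col); regions compete for unlabeled pixels."""
    labels = np.zeros(image.shape, dtype=int)
    heap = [(0.0, r, c, lab) for lab, (r, c) in seeds.items()]
    heapq.heapify(heap)
    while heap:
        prio, r, c, lab = heapq.heappop(heap)
        if labels[r, c]:
            continue                                   # already claimed by a region
        labels[r, c] = lab
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < image.shape[0] and 0 <= cc < image.shape[1] and not labels[rr, cc]:
                cost = prio + abs(float(image[rr, cc]) - float(image[r, c]))
                heapq.heappush(heap, (cost, rr, cc, lab))
    return labels
```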
Segmentation of liver and liver tumor for the Liver-Workbench
Jiayin Zhou, Feng Ding, Wei Xiong, et al.
Robust and efficient segmentation tools are important for the quantification of 3D liver and liver tumor volumes, which can greatly help clinicians in clinical decision-making and treatment planning. A two-module image analysis procedure which integrates two novel semi-automatic algorithms has been developed to segment 3D liver and liver tumors from multi-detector computed tomography (MDCT) images. The first module segments the liver volume using a flipping-free mesh deformation model. In each iteration, before mesh deformation, the algorithm detects and avoids possible flippings which would cause self-intersection of the mesh and thus undesired segmentation results. After flipping avoidance, Laplacian mesh deformation is performed with various constraints on geometry and shape smoothness. In the second module, the segmented liver volume is used as the ROI and liver tumors are segmented by using support vector machine (SVM)-based voxel classification and propagational learning. First, an SVM classifier was trained to extract the tumor region from a single 2D slice in the intermediate part of a tumor by voxel classification. Then the extracted tumor contour, after some morphological operations, was projected to its neighboring slices for automated sampling, learning and further voxel classification in the neighboring slices. This propagation procedure continued until all tumor-containing slices were processed. The performance of the whole procedure was tested using 20 MDCT data sets and the results were promising: nineteen liver volumes were successfully segmented, with a mean relative absolute volume difference (RAVD), volume overlap error (VOE) and average symmetric surface distance (ASSD) to the reference segmentation of 7.1%, 12.3% and 2.5 mm, respectively. For liver tumor segmentation, the median RAVD, VOE and ASSD were 7.3%, 18.4% and 1.7 mm, respectively.
Automatic detection, segmentation and characterization of retinal horizontal neurons in large-scale 3D confocal imagery
Mahmut Karakaya, Ryan A. Kerekes, Shaun S. Gleason, et al.
Automatic analysis of neuronal structure from wide-field-of-view 3D image stacks of retinal neurons is essential for statistically characterizing neuronal abnormalities that may be causally related to neural malfunctions or may be early indicators for a variety of neuropathies. In this paper, we study classification of neuron fields in large-scale 3D confocal image stacks, a challenging neurobiological problem because of the low spatial resolution imagery and presence of intertwined dendrites from different neurons. We present a fully automated, four-step processing approach for neuron classification with respect to the morphological structure of their dendrites. In our approach, we first localize each individual soma in the image by using morphological operators and active contours. By using each soma position as a seed point, we automatically determine an appropriate threshold to segment dendrites of each neuron. We then use skeletonization and network analysis to generate the morphological structures of segmented dendrites, and shape-based features are extracted from network representations of each neuron to characterize the neuron. Based on qualitative results and quantitative comparisons, we show that we are able to automatically compute relevant features that clearly distinguish between normal and abnormal cases for postnatal day 6 (P6) horizontal neurons.
3D segmentation of prostate ultrasound images using wavelet transform
Hamed Akbari, Xiaofeng Yang, Luma V. Halig, et al.
The current definitive diagnosis of prostate cancer is transrectal ultrasound (TRUS) guided biopsy. However, the current procedure is limited by using 2D biopsy tools to target 3D biopsy locations. This paper presents a new method for automatic segmentation of the prostate in three-dimensional transrectal ultrasound images, by extracting texture features and by statistically matching the geometrical shape of the prostate. A set of wavelet-based support vector machines (W-SVMs) are located and trained at different regions of the prostate surface. The W-SVMs capture texture priors of ultrasound images for classification of prostate and non-prostate tissues in different zones around the prostate boundary. In the segmentation procedure, these W-SVMs are trained in the three sagittal, coronal, and transverse planes. The pre-trained W-SVMs are employed to tentatively label each voxel around the surface of the model as a prostate or non-prostate voxel by texture matching. The labeled voxels in the three planes, after post-processing, are overlaid on a prostate probability model. The prostate probability model is created using 10 segmented prostate datasets. Consequently, each voxel has four labels: one from each of the sagittal, coronal, and transverse planes, and one probability label. By defining a weight function for each labeling in each region, each voxel is labeled as a prostate or non-prostate voxel. Experimental results using real patient data show the good performance of the proposed model in segmenting the prostate from ultrasound images.
Orientation estimation of anatomical structures in medical images for object recognition
Recognition of anatomical structures is an important step in model-based medical image segmentation. It provides pose estimation of objects and information about roughly "where" the objects are in the image, distinguishing them from other object-like entities. In [1], we presented a general method of model-based multi-object recognition to assist in segmentation (delineation) tasks. It exploits the pose relationship that can be encoded, via the concept of ball scale (b-scale), between the binary training objects and their associated grey images. The goal was to place the model, in a single shot, close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. Unlike position and scale parameters, we observe that orientation parameters require more attention when estimating the pose of the model, as even small differences in orientation parameters can lead to inappropriate recognition. Motivated by the non-Euclidean nature of the pose information, we propose in this paper the use of non-Euclidean metrics to estimate the orientation of anatomical structures for more accurate recognition and segmentation. We statistically analyze and evaluate the following metrics for orientation estimation: Euclidean, Log-Euclidean, Root-Euclidean, Procrustes Size-and-Shape, and mean Hermitian metrics. The results show that the mean Hermitian and Cholesky decomposition metrics provide more accurate orientation estimates than the other Euclidean and non-Euclidean metrics.
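As an example of the non-Euclidean metrics listed above, the Log-Euclidean distance between two symmetric positive-definite matrices can be written via the matrix logarithm; the eigendecomposition-based helper below is a generic sketch, and the encoding of orientation information as an SPD matrix is an assumption about how such metrics are typically applied.

```python
# Generic sketch of the Log-Euclidean distance between SPD matrices.
import numpy as np

def spd_log(S):
    """Matrix logarithm of a symmetric positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def log_euclidean_distance(S1, S2):
    """d(S1, S2) = || log(S1) - log(S2) ||_F."""
    return np.linalg.norm(spd_log(S1) - spd_log(S2), ord='fro')
```

The Root-Euclidean metric follows the same pattern with the matrix square root in place of the logarithm.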
Local morphologic scale: application to segmenting tumor infiltrating lymphocytes in ovarian cancer TMAs
In this paper we present the concept and associated methodological framework for a novel locally adaptive scale notion called local morphological scale (LMS). Broadly speaking, the LMS at every spatial location is defined as the set of spatial locations, with associated morphological descriptors, which characterize the local structure or heterogeneity for the location under consideration. More specifically, the LMS is obtained as the union of all pixels in the polygon obtained by linking the final locations of trajectories of particles emanating from the location under consideration, where the path traveled by the originating particles is a function of the local gradients and heterogeneity that they encounter along the way. As these particles proceed on their trajectory away from the location under consideration, the velocity of each particle (i.e. whether the particles stop, slow down, or simply continue around the object) is modeled using a physics-based system. At some time point the particle velocity goes to zero (potentially on account of encountering (a) repeated obstructions, (b) an insurmountable image gradient, or (c) timing out) and the particle comes to a halt. By using a Monte-Carlo sampling technique, the LMS is efficiently determined through parallelized computations. The LMS differs from previous local-scale-related formulations in that it is (a) not a locally connected set of pixels satisfying some pre-defined intensity homogeneity criterion (generalized-scale), nor is it (b) constrained by any prior shape criterion (ball-scale, tensor-scale). Shape descriptors quantifying the morphology of the particle paths are used to define a tensor LMS signature associated with every spatial image location. These features include the number of object collisions per particle, the average velocity of a particle, and the length of the individual particle paths. These features can be used in conjunction with a supervised classifier to correctly differentiate between two different object classes based on local structural properties. In this paper, we apply LMS to the specific problem of classifying regions of interest in Ovarian Cancer (OCa) histology images as either tumor or stroma. This approach is used to classify lymphocytes as either tumor infiltrating lymphocytes (TILs) or non-TILs, the presence of TILs having been identified as an important prognostic indicator for disease outcome in patients with OCa. We present preliminary results on the tumor/stroma classification of 11,000 randomly selected locations of interest, across 11 images obtained from 6 patient studies. Using a Probabilistic Boosting Tree (PBT), our supervised classifier yielded an area under the receiver operating characteristic curve (AUC) of 0.8341 ± 0.0059 over 5 runs of randomized cross validation. The average LMS computation time at every spatial location for an image patch comprising 2000 pixels with 24 particles at every location was only 18 s.
Brain tumour segmentation and tumour tissue classification based on multiple MR protocols
Astrid Franz, Stefanie Remmele, Jochen Keupp
Segmentation of brain tumours in Magnetic Resonance (MR) images and classification of the tumour tissue into vital, necrotic, and perifocal edematous areas is required in a variety of clinical applications. Manual delineation of the tumour tissue boundaries is a tedious and error-prone task, and the results are not reproducible. Furthermore, tissue classification mostly requires information of several MR protocols and contrasts. Here we present a nearly automatic segmentation and classification algorithm for brain tumour tissue working on a combination of T1 weighted contrast enhanced (T1CE) images and fluid attenuated inversion recovery (FLAIR) images. Both image types are included in MR brain tumour protocols that are used in clinical routine. The algorithm is based on a region growing technique, hence it is fast (ten seconds on a standard personal computer). The only required user interaction is a mouse click for providing the starting point. The region growing parameters are automatically adapted in the course of growing, and if a new maximum image intensity is found, the region growing is restarted. This makes the algorithm robust, i.e. independent of the given starting point in a certain capture range. Furthermore, we use a lossless coarse-to-fine approach, which, together with the automatic adaptation of the parameters, can avoid leakage of the region growing procedure. We tested our algorithm on 20 cases of human glioblastoma and meningioma. In the majority of the test cases we got satisfactory results.
Confidence-based ensemble for GBM brain tumor segmentation
It is a challenging task to automatically segment glioblastoma multiforme (GBM) brain tumors on T1w post-contrast isotropic MR images. A semi-automated system using fuzzy connectedness has recently been developed for computing the tumor volume that reduces the cost of manual annotation. In this study, we propose an ensemble method that combines multiple segmentation results into a final ensemble segmentation. The method is evaluated on a dataset of 20 cases from a multi-center pharmaceutical drug trial and compared to the fuzzy connectedness method. Three individual methods were used in the framework: fuzzy connectedness, GrowCut, and voxel classification. The combination method is a confidence map averaging (CMA) method. The CMA method shows an improved ROC curve compared to the fuzzy connectedness method (p < 0.001). The CMA ensemble result is more robust than the three individual methods.
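Confidence map averaging can be sketched as a per-voxel mean of the individual methods' confidence maps followed by a threshold; the 0.5 threshold and function name below are assumptions, not details from the paper.

```python
# Hedged sketch of confidence map averaging across several segmenters.
import numpy as np

def confidence_map_average(conf_maps, threshold=0.5):
    """Per-voxel mean of confidence maps, thresholded to a binary segmentation."""
    mean_conf = np.mean(np.stack(conf_maps), axis=0)
    return mean_conf, mean_conf >= threshold
```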
Feature-driven model-based segmentation
Arish A. Qazi, John Kim, David A. Jaffray, et al.
The accurate delineation of anatomical structures is required in many medical image analysis applications. One example is radiation therapy planning (RTP), where traditional manual delineation is tedious, labor-intensive, and can require hours of a clinician's valuable time. The majority of automated segmentation methods in RTP belong to either model-based or atlas-based approaches. One substantial limitation of model-based segmentation is that its accuracy may be restricted by uncertainties in image content, specifically when segmenting low-contrast anatomical structures, e.g. soft tissue organs in computed tomography images. In this paper, we introduce a non-parametric feature enhancement filter which replaces raw intensity image data with a high-level probabilistic map that guides the deformable model to reliably segment low-contrast regions. The method is evaluated by segmenting the submandibular and parotid glands in the head and neck region and comparing the results to manual segmentations in terms of volume overlap. Quantitative results show that we are in overall good agreement with expert segmentations, achieving volume overlap of up to 80%. Qualitatively, we demonstrate that we are able to segment low-contrast regions, which otherwise are difficult to delineate with deformable models relying on distinct object boundaries in the original image data.
Cell nuclei segmentation for histopathological image analysis
Hui Kong, Kamel Belkacem-Boussaid, Metin Gurcan
In this paper, we propose a supervised method for segmenting cell nuclei from background and extra-cellular regions in pathological images. To this end, we segment the cell regions from the other areas by classifying the image pixels into either cell or extra-cellular category. Instead of using pixel color intensities, the color-texture extracted at the local neighborhood of each pixel is utilized as the input to our classification algorithm. The color-texture at each pixel is extracted by local Fourier transform (LFT) from a new color space, the most discriminant color space (MDC). The MDC color space is optimized to be a linear combination of the original RGB color space so that the extracted LFT texture features in the MDC color space can achieve the most discrimination in terms of classification (segmentation) performance. To speed up the texture feature extraction process, we develop an efficient LFT extraction algorithm based on image shifting and image integral. For evaluation, our method is compared with the state-of-the-art segmentation algorithms (Graph-cut, Mean-shift, etc.). Empirical results show that our segmentation method achieves better performance than these popular methods.
Automatic ROI identification for fast liver tumor segmentation using graph-cuts
Klaus Drechsler, Michael Strosche, Cristina Oyarzun Laura
The key challenge in tumor segmentation is to determine the exact location and volume of each tumor. Difficulties arise because of low-intensity boundaries and varying shapes and sizes. Furthermore, tumors can be located anywhere in the liver. Interactive segmentation methods seem to be the most appropriate in terms of reliability and robustness. In this work, we use a graph-cut based method to interactively segment tumors. However, the complexity of the underlying graphs is enormous for clinical 3D datasets. We propose a method to automatically identify a region of interest using a coarse-resolution image, which is then used to construct a reduced graph for the final segmentation of the original image at full resolution. We compared our results to ground-truth segmentations done by experts. Our results suggest that the accuracy is comparable to other approaches. The average overlap was 80%, the average surface distance 0.73 mm and the average maximum surface distance 5.31 mm.
Simultaneous automatic detection of optic disc and fovea on fundus photographs
Xiayu Xu, Mona K. Garvin, Michael D. Abràmoff, et al.
We describe an automated and simultaneous localization method for the optic disc and the fovea in fundus photographs. The method is enhanced by a correction step, which allows the detection result of one structure to facilitate the detection of the other. In the first step of the method, a set of features is extracted from the color fundus image, and the relationship between the features and a distance variable is established during the training phase. For a test image, the same set of features is measured and the distance to the optic disc and the fovea can be estimated using k-nearest-neighbor classification. A probability image is generated in which every pixel is labeled with the probability of lying within the optic disc or the fovea. In the second step of the method, a second k-nearest-neighbor classification is applied to the probability image. Another set of features is extracted and trained. For a test image, detected high-likelihood regions from the first step are enhanced only if they satisfy the trained relationship. A set of 250 color fundus images from the left eye were used to train the system. Another set of 310 color fundus images were used to test the system. The correct rate for the optic disc is 93.9%. The correct rate for the fovea is 88.1%. This is a fully automatic method to detect the optic disc and fovea simultaneously with excellent performance. We are currently expanding validation to larger datasets.
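The first stage can be illustrated with a k-nearest-neighbour regressor that maps per-pixel feature vectors to the distance from a landmark (optic disc or fovea); the scikit-learn sketch below is a generic stand-in (feature extraction omitted, k chosen arbitrarily), not the authors' classifier.

```python
# Generic stand-in for the first stage: kNN regression of per-pixel distance to a landmark.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def train_distance_regressor(features, distances, k=15):
    """features: (n_pixels, n_features); distances: (n_pixels,) to the landmark."""
    return KNeighborsRegressor(n_neighbors=k).fit(features, distances)

def distance_map(model, test_features, image_shape):
    """Predict a per-pixel distance map for a test image."""
    return model.predict(test_features).reshape(image_shape)
```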
Supervised segmentation methods for the hippocampus in MR images
Marijn van Stralen, Mirjam I. Geerlings, Koen L. Vincken, et al.
This study compares three different types of fully automated supervised methods for segmentation of the hippocampus in MR images. Many such methods, trained using example data, have been presented for various medical imaging applications, but comparison of the methods is obscured because of optimization for, and evaluation on, different data. We compare three methods with different methodological bases: atlas-based segmentation (ABS), active appearance model segmentation (AAM) and k-nearest neighbor voxel classification (KNN). All three methods are trained on 100 T1-weighted images with manual segmentations of the right hippocampus, and applied to 103 different images from the same study. Straightforward implementation of each of the three methods resulted in competitive segmentations, both mutually and as compared with methods currently reported in the literature. AAM and KNN are favorable in terms of computational costs, requiring only a fraction of the time needed for ABS. The high accuracy and low computational cost make KNN the most favorable method based on this study. AAM achieves results similar to ABS in significantly less computation time. Further improvements might be achieved by fusion of the presented techniques, either methodologically or by direct fusion of the segmentation results.
Integrating an adaptive region-based appearance model with a landmark-free statistical shape model: application to prostate MRI segmentation
Robert Toth, Julie Bulman, Amish D. Patel, et al.
In this paper we present a system for segmenting medical images using statistical shape models (SSMs) which is landmark-free, fully 3D, and accurate. To overcome the limitations associated with previous 3D landmark-based SSMs, our system creates a level-set-based SSM which uses the minimum distance from each voxel in the image to the object's surface to define a shape. Subsequently, an advanced statistical appearance model (SAM) is generated to model the object of interest. This SAM is based on a series of statistical texture features calculated from each image, modeled by a Gaussian Mixture Model. In order to segment the object of interest in a new image, a Bayesian classifier is first employed to pre-classify the image voxels as belonging to the foreground object of interest or the background. The result of the Bayesian classifier is then employed for optimally fitting the SSM so that there is maximum agreement between the SAM and the SSM. The SAM is then able to adaptively learn the statistics of the textures of the foreground and background voxels in the new image. The fitting of the SSM and the adaptive updating of the SAM are repeated until convergence. We have tested our system on 36 T2-w, 3.0 Tesla, in vivo, endorectal prostate images. The results showed that our system achieves a Dice similarity coefficient of 0.84 ± 0.04, with a median Dice value of 0.86, which is comparable (and in most cases superior) to other state-of-the-art prostate segmentation systems. Further, unlike most other state-of-the-art prostate segmentation schemes, our scheme is fully automated, requiring no user intervention.
Segmenting multiple overlapping objects via a hybrid active contour model incorporating shape priors: applications to digital pathology
Active contours and active shape models (ASMs) have been widely employed in image segmentation. A major limitation of active contours, however, is their (a) inability to resolve the boundaries of intersecting objects and (b) inability to handle occlusion. Multiple overlapping objects are typically segmented out as a single object. On the other hand, ASMs are limited by point correspondence issues, since object landmarks need to be identified across multiple objects for initial object alignment. ASMs are also constrained in that they can usually only segment a single object in an image. In this paper, we present a novel synergistic boundary and region-based active contour model that incorporates shape priors in a level set formulation. We demonstrate an application of these synergistic active contour models using multiple level sets to segment nuclear and glandular structures on digitized histopathology images of breast and prostate biopsy specimens. Unlike previous related approaches, our model is able to resolve object overlap and separate occluded boundaries of multiple objects simultaneously. The energy functional of the active contour comprises three terms. The first term is the prior shape term, modeled on the object of interest, thereby constraining the deformation achievable by the active contour. The second term, a boundary-based term, detects object boundaries from image gradients. The third term drives the shape prior and the contour towards the object boundary based on region statistics. The results of qualitative and quantitative evaluation on 100 prostate and 14 breast cancer histology images for the task of detecting and segmenting nuclei, lymphocytes, and glands reveal that the model easily outperforms two state-of-the-art segmentation schemes (Geodesic Active Contour (GAC) and Rousson's shape-based model) and resolves up to 92% of overlapping/occluded lymphocytes and nuclei on prostate and breast cancer histology images.
Automatic three-dimensional rib centerline extraction from CT scans for enhanced visualization and anatomical context
Sowmya Ramakrishnan, Christopher Alvino, Leo Grady, et al.
We present a complete automatic system to extract 3D centerlines of ribs from thoracic CT scans. Our rib centerline system determines the positional information for the rib cage consisting of extracted rib centerlines, spinal canal centerline, pairing and labeling of ribs. We show an application of this output to produce an enhanced visualization of the rib cage by the method of Kiraly et al., in which the ribs are digitally unfolded along their centerlines. The centerline extraction consists of three stages: (a) pre-trace processing for rib localization, (b) rib centerline tracing, and (c) post-trace processing to merge the rib traces. Then we classify ribs from non-ribs and determine anatomical rib labeling. Our novel centerline tracing technique uses the Random Walker algorithm to segment the structural boundary of the rib in successive 2D cross sections orthogonal to the longitudinal direction of the ribs. Then the rib centerline is progressively traced along the rib using a 3D Kalman filter. The rib centerline extraction framework was evaluated on 149 CT datasets with varying slice spacing, dose, and under a variety of reconstruction kernels. The results of the evaluation are presented. The extraction takes approximately 20 seconds on a modern radiology workstation and performs robustly even in the presence of partial volume effects or rib pathologies such as bone metastases or fractures, making the system suitable for assisting clinicians in expediting routine rib reading for oncology and trauma applications.
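The progressive tracing step can be illustrated with a constant-velocity Kalman filter whose state holds the 3D centerline position and direction and whose measurements are the centers of the segmented cross-sections; the noise covariances below are placeholders, not values from the paper.

```python
# Hedged sketch of constant-velocity Kalman filtering of a rib centerline.
import numpy as np

class CenterlineKalman:
    def __init__(self, start_point, q=0.01, r=0.5):
        self.x = np.hstack([start_point, np.zeros(3)])   # state: [position, velocity]
        self.P = np.eye(6)
        self.F = np.eye(6)
        self.F[:3, 3:] = np.eye(3)                       # constant-velocity model
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])
        self.Q = q * np.eye(6)                           # process noise (placeholder)
        self.R = r * np.eye(3)                           # measurement noise (placeholder)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]                                # predicted next centre point

    def update(self, measured_center):
        y = measured_center - self.H @ self.x            # innovation from cross-section centre
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
```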
Segmentation of in vivo target prior to tracking
Norbert Masson, Philippe Zanne, Florent Nageotte, et al.
Flexible endoscopes are used in many diagnostic and interventional procedures. Physiological motions may be very difficult to handle with such a device, hindering the physician in completing the task. One way of dealing with these motions is to have the endoscope follow them on its own. To achieve this goal, one needs to motorize the flexible endoscope and to know accurately the position of the region of interest (the target) in order to control the motors. For this purpose a tracking algorithm is used, which estimates the position of the target in the images acquired by the camera of the endoscope. However, the tracking algorithm needs to be initialized correctly so as not to lose the target. The difficulty is that targets have many different characteristics and we have no prior knowledge about them. In addition, we want the algorithm to be user-friendly, particularly for the physicians, which means that no parameter has to be tuned even with completely different targets. The proposed algorithm computes a modified gradient image from first-order moments to obtain smooth edges and reduce the number of regions found during the next step. A watershed method is used to detect regions. Thanks to the previous processing of the image, most irrelevant regions are not detected. Then a merging process is applied, which results in a region corresponding to the target. From the border of this region we find a patch that is used to initialize the tracking algorithm. Experimental results are promising.
Stability-based validation of cellular segmentation algorithms
Peter Ajemba, Richard Scott, Michael Donovan, et al.
Performance assessment of segmentation algorithms compares segmentation outputs to a handful of manually obtained ground-truth images. This assumes that the ground-truth images are accurate, reliable and representative of the entire image set. In image cytometry, few ground-truth images are typically used because of the difficulty of manually segmenting images with large numbers of small objects. This violates the aforementioned assumptions. Automated methods of segmentation evaluation without ground-truth are needed. We describe a stable and reliable method for evaluating segmentation performance without ground-truth. Segmentation errors are either statistical or structural. Statistical errors reflect failure to account for random variations in pixel values, while structural errors result from inadequate image description models. As statistical errors predominate in image cytometry, our method focuses on statistical stability assessment. For any image-algorithm pair, we obtain multiple perturbed variants of the image by applying slight linear blur. We segment the image and its variants with the algorithm and determine the match between the output from the image and the output from its variants. We utilized 48 realistic phantom images with known ground-truth and four segmentation algorithms with large performance differences to assess the efficacy of the method. For each algorithm-image pair, we obtained a ground-truth match score and four different statistical validation scores. Analyses show that statistical validation and ground-truth validation scores correlate in over 96% of cases. The statistical validation approach reduces segmentation review time and effort by over 99% and enables assessment of segmentation quality long after an algorithm has been deployed.
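A minimal sketch of the stability assessment, assuming the perturbation is a slight Gaussian blur and the match between outputs is scored with the Dice coefficient (the paper's exact perturbation and match measure may differ); `segment` stands for any segmentation algorithm passed in as a callable.

```python
# Hedged sketch: segment an image and blurred variants, then score agreement.
import numpy as np
from scipy.ndimage import gaussian_filter

def dice(a, b):
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def stability_score(image, segment, sigmas=(0.3, 0.5, 0.7, 0.9)):
    baseline = segment(image)
    scores = [dice(baseline, segment(gaussian_filter(image, s))) for s in sigmas]
    return float(np.mean(scores))
```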
Neural stem cell tracking with phase contrast video microscopy
Stéphane U. Rigaud, Nicolas Loménie
Tracking and segmenting objects for video surveillance is a well-known field of research, and very efficient methods exist. Usually embedded in traffic surveillance cameras, these processes are not necessarily adapted to a biological surveillance context. In stem cell studies, a framework to monitor cell development in real time improves stem cell analysis and biological understanding. For this purpose, we propose to test the Σ-Δ motion filter, originally developed for security and surveillance cameras, in order to track neural stem cells and their evolution over time, based on phase contrast image sequences. The motion filter is based on the difference between the current frame and a reference image of the background and uses a recursive spatio-temporal morphological operator called hybrid reconstruction to compensate for the ghosts and traces that usually occur with such methods.
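A simplified per-frame Sigma-Delta background estimator (in the spirit of the filter referenced above) can be written as follows; the amplification factor and the unconditional variance update are simplifying assumptions, and the hybrid-reconstruction post-processing is not included.

```python
# Simplified Sigma-Delta background/motion estimation, applied frame by frame.
import numpy as np

class SigmaDelta:
    def __init__(self, first_frame, amplification=2):
        self.M = first_frame.astype(np.int32)    # background estimate
        self.V = np.ones_like(self.M)            # temporal activity estimate
        self.N = amplification

    def __call__(self, frame):
        frame = frame.astype(np.int32)
        self.M += np.sign(frame - self.M)        # elementary step toward the current frame
        diff = np.abs(frame - self.M)
        self.V += np.sign(self.N * diff - self.V)
        self.V = np.maximum(self.V, 1)
        return diff > self.V                     # foreground (motion) mask
```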
Boundary detection by linear programming with application to lung fields segmentation
Bulat Ibragimov, Boštjan Likar, Franjo Pernuš
Medical image segmentation is typically used to locate boundaries of anatomical structures in images acquired by different modalities. As segmentation is of utmost importance for quantitative measurements and analysis of anatomical structures, tracking anatomical changes over time, building anatomical atlases and visualization of medical images, a huge number of methods have been developed and tested on a wide range of applications in the past. Deformable or parametric shape models are a class of methods that have been widely used for segmentation. A drawback of deformable model approaches is that they require initialization near the final solution. In this paper, we present a segmentation algorithm that incorporates prior knowledge and is composed of two steps. First, reference points on the boundary of an anatomical structure are found by linear programming incorporating prior knowledge. Second, paths between reference points, representing boundary segments, are searched for by optimal control. The segmentation method has been applied to chest radiographs from the publicly available SCR database.
A liver segmentation approach in contrast-enhanced CT images with patient-specific knowledge
In this work, we propose a shape-based liver segmentation approach using patient-specific knowledge. We exploit the relation between consecutive slices in multi-slice CT images to update the shape template that is initially determined by the user. The updated shape template is then integrated with the graph cuts algorithm to segment the liver in each CT slice. The statistical parameters of the liver and non-liver tissues are initially determined according to the initial shape template and are subsequently updated from nearby slices. The proposed approach does not require any prior training and uses single-phase CT images; nevertheless, it is able to deal with complex shape and intensity variations. The approach is evaluated on 20 CT images with different kinds of liver abnormalities, tumors and cysts, and it achieves an average volumetric overlap error of 6.4% and an average symmetric surface distance (ASD) of 0.8 compared to the manual segmentation.
Building multiple weak segmentors for strong mass segmentation in mammogram
Yu Zhang, Noriko Tomuro, Jacob Furst, et al.
This paper proposes to build multiple segmentations for identifying mass contours for a suspicious mass in a mammogram. In this study, by using various parameter settings of the image enhancement functions, we perform multiple segmentations for each suspicious mass (region of interest (ROI)), and multiple mass contours are generated. Each of such segmentations is called a "weak segmentor", since there is no single image enhancement which produces the optimal segmentation for all mass images. Then for each image, we select the contour which has the highest overlapping ratio as the final segmentation (i.e., the "strong segmentor"). The results show that the overall success rate (81.22%) of the strong segmentor was higher than that of any single weak segmentor. This indicates that using multiple weak segmentors is an effective method to generate a strong mass segmentation for mammograms.
A framework for automated coronary artery tracking of low axial resolution multi slice CT images
Jing Wu, Gordon Ferns M.D., John Giles M.D., et al.
Low axial resolution data such as multi-slice CT (MSCT) used for coronary artery disease screening must balance the potential loss in image clarity, detail and partial volume effects against the benefits to the patient, such as faster acquisition time leading to lower dose exposure. In addition, tracking of the coronary arteries can aid in locating objects contained within them, thus helping to differentiate them from similar-looking, difficult-to-discern neighbouring regions. A fully automated system has been developed to segment and track the main coronary arteries and visualize the results. Automated heart isolation is carried out for each slice of an MSCT image using active contour methods. Ascending aorta and artery root segmentation is performed using a combination of active contours, morphological operators and geometric analysis of coronary anatomy to identify a starting point for vessel tracking. Artery tracking and backtracking employ analysis of vessel position combined with segmented region shape analysis to obtain artery paths. Robust, accurate threshold parameters are calculated for segmentation using Gaussian Mixture Model fitting and analysis. The low axial resolution of our MSCT data sets, in combination with poor image clarity and noise, presented the greatest challenge. Classification techniques such as shape analysis have been utilized to good effect, and our results to date have shown that such deficiencies in the data can be overcome, further promoting the positive benefits to patients.
3D segmentation of medical volume image using hybrid level set method
Myungeun Lee, Wanhyun Cho, Sunworl Kim, et al.
We present a new segmentation method for medical volume images using the level set framework. The method is based on a curve evolution model derived from a geometric variational principle and level set theory. The speed function in the level set approach is a hybrid combination of three integral measures derived from the calculus of variations: a robust alignment term, an active region term, and a smoothing term. These measures help to detect the precise location of the target object and prevent boundary leakage. The proposed method has been tested on various medical volume images containing tumor regions to evaluate its performance both visually and quantitatively. The experimental results show that our method compares favorably with traditional approaches.
Brain MRI segmentation and lesion detection using generalized Gaussian and Rician modeling
Xuqiang Wu, Stéphanie Bricq, Christophe Collet
In this paper we propose a mixed noise model to segment the brain and detect lesions. Accurate segmentation of multimodal (T1, T2 and FLAIR) brain MR images is of great interest for many brain disorders but requires efficient handling of multivariate correlated noise between the available modalities. We addressed this problem in [1] by proposing an entirely unsupervised segmentation scheme, taking into account multivariate Gaussian noise, imaging artifacts, intrinsic tissue variation and partial volume effects in a Bayesian framework. Nevertheless, tissue classification remains a challenging task, especially when lesion detection is addressed during the segmentation process [2], as we do here. To improve brain segmentation into white matter (WM), gray matter (GM) and cerebro-spinal fluid (CSF), we propose to fit a Rician (RC) density to the CSF, whereas generalized Gaussian (GG) models are used to fit the likelihood between model and data for WM and GM. We present promising results showing that, in a multimodal segmentation-detection scheme, this model fits the data better and increases the lesion detection rate. One of the main challenges is to accommodate various pdfs (Gaussian and non-Gaussian) for correlated noise between modalities; we show that lesion detection is then clearly improved, probably because non-Gaussian noise better fits the physics of MR image acquisition.
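The class-conditional densities described above can be fitted with standard tools; the sketch below fits a Rician density to surrogate CSF-like intensities and a generalized Gaussian to surrogate WM-like intensities using SciPy. The sample values and parameters are invented stand-ins, not data from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# surrogate intensity samples standing in for CSF and WM voxels
csf = stats.rice.rvs(b=1.5, scale=20.0, size=5000, random_state=rng)
wm = stats.gennorm.rvs(beta=1.2, loc=120.0, scale=10.0, size=5000, random_state=rng)

# Rician fit for the CSF-like intensities (location fixed at 0)
b_hat, loc_hat, scale_hat = stats.rice.fit(csf, floc=0)

# generalized Gaussian fit for the WM-like intensities
beta_hat, mu_hat, sigma_hat = stats.gennorm.fit(wm)

print(f"Rician: b={b_hat:.2f}, scale={scale_hat:.2f}")
print(f"Generalized Gaussian: beta={beta_hat:.2f}, mu={mu_hat:.2f}, sigma={sigma_hat:.2f}")
```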
Robust method for extracting the pulmonary vascular trees from 3D MDCT images
Segmentation of pulmonary blood vessels from three-dimensional (3D) multi-detector CT (MDCT) images is important for pulmonary applications. This work presents a method for extracting the vascular trees of the pulmonary arteries and veins, applicable to both contrast-enhanced and unenhanced 3D MDCT image data. The method finds 2D elliptical cross-sections and evaluates agreement of these cross-sections in consecutive slices to find likely cross-sections. It next employs morphological multiscale analysis to separate vessels from adjoining airway walls. The method then tracks the center of the likely cross-sections to connect them to the pulmonary vessels in the mediastinum and forms connected vascular trees spanning both lungs. A ground-truth study indicates that the method was able to detect on the order of 98% of the vessel branches having diameter ≥ 3.0 mm. The extracted vascular trees can be utilized for the guidance of safe bronchoscopic biopsy.
A computerized scheme for localization of vertebral bodies on body CT scans
Tatsuro Hayashi, Huayue Chen, Kei Miyamoto, et al.
The multidetector row computed tomography (MDCT) method has the potential to be used for quantitative analysis of osteoporosis with higher accuracy and precision than that provided by conventional two-dimensional methods. It is desirable to develop a computer-assisted scheme for analyzing vertebral geometry using body CT images. The aim of this study was to design a computerized scheme for the localization of vertebral bodies on body CT images. Our new scheme involves the following steps: (i) Re-formation of CT images on the basis of the center line of the spinal canal to visually remove the spinal curvature, (ii) use of information on the position of the ribs relative to the vertebral bodies, (iii) the construction of a simple model on the basis of the contour of the vertebral bodies on CT sections, and (iv) the localization of individual vertebral bodies by using a template matching technique. The proposed scheme was applied to 104 CT cases, and its performance was assessed using the Hausdorff distance. The average Hausdorff distance of T2-L5 was 4.3 mm when learning models with 100 samples were used. On the other hand, the average Hausdorff distance with 10 samples was 5.1 mm. The results of our assessments confirmed that the proposed scheme could provide the location of individual vertebral bodies. Therefore, the proposed scheme may be useful in designing a computer-based application that analyzes vertebral geometry on body CT images.
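Since the scheme's performance is reported as a Hausdorff distance, the short sketch below shows how such a distance between a detected and a reference vertebral-body contour can be computed with SciPy; the point sets and the offset are invented toy values.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets (N x 2 arrays, in mm)."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

# toy example: detected vs. reference vertebral-body contour points
detected = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 25.0], [0.0, 25.0]])
reference = detected + np.array([2.0, -1.5])   # hypothetical offset
print(hausdorff(detected, reference))
```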
Unsupervised segmentation of ultrasound images by fusion of spatio-frequential textural features
S. Benameur, M. Mignotte, F. Lavoie M.D.
Image segmentation plays an important role in both qualitative and quantitative analysis of medical ultrasound images. However, due to their poor resolution and strong speckle noise, segmenting objects in this imaging modality remains a challenging task that may not be handled satisfactorily by traditional image segmentation methods. To this end, this paper presents a simple, reliable, and conceptually different segmentation technique to locate and extract bone contours from ultrasound images. Instead of designing a new, elaborate (texture) segmentation model specifically adapted to ultrasound images, our technique fuses (i.e., efficiently combines) several segmentation maps associated with simpler segmentation models in order to obtain a final reliable and accurate segmentation result. More precisely, our segmentation model fuses several K-means clustering results, each one exploiting, as simple cues, a set of complementary textural features, either spatial or frequential. Eligible models include the gray-level co-occurrence matrix, the re-quantized histogram, the Gabor filter bank, and local DCT coefficients. The experiments reported in this paper demonstrate the efficiency and illustrate the potential of this segmentation approach.
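For illustration, the sketch below clusters two complementary texture cues (local mean and standard deviation as spatial features, Gabor magnitudes as frequential features) with K-means and combines the resulting label maps by a simple majority-style consensus. The consensus rule and the toy image are placeholders; the paper's actual fusion criterion is more elaborate and is not reproduced here.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gabor
from sklearn.cluster import KMeans

def kmeans_labels(features, k=2):
    """Cluster per-pixel feature vectors; features is (H, W, F)."""
    h, w, f = features.shape
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
        features.reshape(-1, f))
    return labels.reshape(h, w)

def align_to_intensity(labels, image):
    """Relabel so that cluster 1 is the brighter one (simple alignment rule)."""
    if image[labels == 1].mean() >= image[labels == 0].mean():
        return labels
    return 1 - labels

rng = np.random.default_rng(0)
image = rng.random((64, 64))          # stand-in for an ultrasound frame

# spatial cues: local mean and local standard deviation
mean = ndi.uniform_filter(image, 5)
sq_mean = ndi.uniform_filter(image ** 2, 5)
spatial = np.dstack([mean, np.sqrt(np.maximum(sq_mean - mean ** 2, 0))])

# frequential cues: Gabor magnitude at two orientations
g0 = np.hypot(*gabor(image, frequency=0.2, theta=0))
g1 = np.hypot(*gabor(image, frequency=0.2, theta=np.pi / 2))
frequential = np.dstack([g0, g1])

maps = [align_to_intensity(kmeans_labels(f), image) for f in (spatial, frequential)]
fused = (np.mean(maps, axis=0) >= 0.5).astype(np.uint8)   # majority-style consensus
```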
Nonlinear band expansion and nonnegative matrix underapproximation for unsupervised segmentation of a liver from a multi-phase CT image
A methodology is proposed for contrast-enhanced unsupervised segmentation of a liver from a two-dimensional multi-phase CT image. The multi-phase CT image is represented by a linear mixture model, in which each single-phase CT image is modeled as a linear mixture of spatial distributions of the organs present in the image. The methodology exploits concentration and spatial diversities between the organs present in the image and consists of nonlinear dimensionality expansion followed by matrix factorization that relies on sparseness between the spatial distributions of organs. Dimensionality expansion increases the concentration diversity (contrast) between organs. The methodology is demonstrated on experimental three-phase CT images of the liver from two patients.
Automatic segmentation of chromatographic images for region of interest delineation
Ana M. Mendonça, António V. Sousa, M. Clara Sá-Miranda, et al.
This paper describes a segmentation method for automating the region of interest (ROI) delineation in chromatographic images, thus allowing the definition of the image area that contains the fundamental information for further processing while excluding the frame of the chromatographic plate that does not contain relevant data for disease identification. This is the first component of a screening tool for Fabry disease, which will be based on the automatic analysis of the chromatographic patterns extracted from the image ROI. Image segmentation is performed in two phases, where each individual pixel is finally considered as frame or ROI. In the first phase, an unsupervised learning method is used for classifying image pixels into three classes: frame, ROI or unknown. In the second phase, distance features are used for deciding which class the unknown pixels belong to. The segmentation result is post-processed using a sequence of morphological operators in order to obtain the final ROI rectangular area. The proposed methodology was successfully evaluated in a dataset of 41 chromatographic images.
A nonparametric segmentation method based on structural information using level sets
Yingxuan Zhu, Samuel Cheng, Amrit Goel
Segmentation plays an important role in medical imaging; a precise segmentation can significantly improve the accuracy of object detection and localization. Level set based models are robust in image segmentation, but the parameters of the level set function are usually set empirically, which discourages their application in the medical area, because medical images vary widely and users may not be familiar with parameter setting for level set methods. In this paper, we present an automatic segmentation method based on a variational level set formulation. The method is formulated with statistical measures and solved using the Euler-Lagrange equation. The segmentation criteria of our method rely on structural similarities of the image, namely luminance, contrast, and correlation coefficients. These criteria are formulated into an energy function that maximizes the structural difference between object and background during segmentation. The energy function is solved and implemented using the variational level set method. Unlike prevalent level set methods, the segmentation parameters of our approach are decided automatically from the structural information of the image and updated during iteration, so our model is nonparametric. Moreover, our approach requires neither training nor a priori assumptions about the probability density functions used for statistical inference. Furthermore, our method is region-based without using gradients, and its parameters are updated according to image information, so it can significantly reduce the computational cost of the numerical implementation. The segmentation results show that our method adequately captures the structural differences between object and background during segmentation.
Simultaneous image segmentation and medial structure estimation: application to 2D and 3D vessel tree extraction
Sherif Makram-Ebeid, Jean Stawiaski, Guillaume Pizaine
We propose a variational approach which combines automatic segmentation and medial structure extraction in a single computationally efficient algorithm. In this paper, we apply our approach to the analysis of vessels in 2D X-ray angiography and 3D X-ray rotational angiography of the brain. Other variational methods proposed in the literature encode the medial structure of vessel trees as a skeleton with associated vessel radii. In contrast, our method provides a dense smooth level set map whose sign provides the segmentation. The ridges of this map define the skeleton of the segmented regions. The differential structure of the smooth map (in particular the Hessian) allows discrimination between tubular and other structures. In 3D, both circular and non-circular tubular cross-sections and tubular branchings can be handled conveniently. This algorithm allows accurate segmentation of complex vessel structures. It also provides key tools for extracting anatomically labeled vessel tree graphs and for dealing with challenging issues like kissing vessel discrimination and separation of entangled 3D vessel trees.
A unified framework for concurrent detection of anatomical landmarks for medical image understanding
Mitsutaka Nemoto, Yoshitaka Masutani, Shouhei Hanaoka, et al.
Anatomical landmarks are useful as primitive anatomical knowledge for medical image understanding. In this study, we construct a unified framework for automated detection of anatomical landmarks distributed throughout the human body. Our framework includes the following three elements: (1) initial candidate detection using a local appearance matching technique based on appearance models built by PCA and generative learning, (2) false positive elimination using classifier ensembles trained by MadaBoost, and (3) final landmark set determination based on a combinatorial optimization method using Gibbs sampling with a priori knowledge of inter-landmark distances. In an evaluation of our method on 50 body trunk CT data sets, the average sensitivity in detecting candidates for 165 landmarks was 0.948 ± 0.084, while 55 landmarks were detected with 100% sensitivity. Initially, the number of false positives per landmark was 462.2 ± 865.1 per case on average; this was reduced to 152.8 ± 363.9 per case by the MadaBoost classifier ensembles without eliminating any true landmarks. Finally, 89.1% of landmarks were correctly selected by the final combinatorial optimization. These results show that our framework is a promising initial step toward subsequent anatomical structure recognition.
Automatic classification for mammogram backgrounds based on bi-rads complexity definition and on a multi content analysis framework
Clinical studies for the validation of new medical imaging devices require hundreds of images. An important step in creating and tuning the study protocol is the classification of images into "difficult" and "easy" cases. This consists of classifying the images based on features such as the complexity of the background and the visibility of the disease (lesions). An automatic background classification tool for mammograms would therefore help in such clinical studies. This classification tool is based on a multi-content analysis (MCA) framework which was originally developed to recognize the content of computer screenshots. With the implementation of new texture features and a defined breast density scale, the MCA framework is able to automatically classify digital mammograms with satisfying accuracy. The BI-RADS (Breast Imaging Reporting and Data System) density scale, which standardizes mammography reporting terminology and assessment and recommendation categories, is used for grouping the mammograms. Selected features are input into a decision tree classification scheme in the MCA framework; each decision tree is a so-called "weak classifier" (any classifier with a global error rate below 50%). With the AdaBoost iteration algorithm, these "weak classifiers" are combined into a "strong classifier" (a classifier with a low global error rate) for classifying one category. The classification results for one "strong classifier" show good accuracy with high true positive rates. Across the four categories the results are: TP=90.38%, TN=67.88%, FP=32.12% and FN=9.62%.
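As a generic illustration of boosting weak classifiers into a strong one, the sketch below trains scikit-learn's AdaBoost (whose default weak learner is a decision stump) on invented feature vectors; the features, labels and reported rates are hypothetical and unrelated to the MCA framework itself.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# hypothetical texture / density features for 400 mammogram backgrounds,
# labelled 1 for "difficult" (dense) and 0 for "easy"
X = rng.normal(size=(400, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=400) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# AdaBoost combines many weak classifiers (decision stumps by default)
# into a single strong classifier
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

pred = clf.predict(X_te)
tp = np.mean(pred[y_te == 1] == 1)   # true positive rate
tn = np.mean(pred[y_te == 0] == 0)   # true negative rate
print(f"TP rate: {tp:.2%}, TN rate: {tn:.2%}")
```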
Foibles, follies, and fusion: assessment of statistical label fusion techniques for web-based collaborations using minimal training
Andrew J. Asman, Andrew G. Scoggins, Jerry L. Prince, et al.
Labeling or parcellation of structures of interest on magnetic resonance imaging (MRI) is essential in quantifying and characterizing correlation with numerous clinically relevant conditions. The use of statistical methods with automated techniques or complete data sets from several different raters has been proposed to simultaneously estimate both rater reliability and true labels. An extension to these statistically based methodologies was proposed that allows for missing labels, repeated labels and training trials. Herein, we present and demonstrate the viability of these statistically based methodologies using real-world data contributed by minimally trained human raters. The consistency of the statistical estimates, the accuracy compared to the individual observations, and the variability of both the estimates and the individual observations with respect to the number of labels are discussed. It is demonstrated that the Gaussian-based statistical approach using the previously presented extensions successfully performs label fusion in a variety of contexts using data from online (Internet-based) collaborations among minimally trained raters. This first successful demonstration of a statistically based approach using "wild-type" data opens numerous possibilities for very large scale collaborative efforts. Extension and generalization of these technologies to new application spaces will certainly present fascinating areas for continuing research.
Automatic tissue classification for high-resolution breast CT images based on bilateral filtering
Breast tissue classification can provide quantitative measurements of breast composition, density and tissue distribution for diagnosis and identification of high-risk patients. In this study, we present an automatic method to classify high-resolution dedicated breast CT images. The breast is classified into skin, fat and glandular tissue. First, we use a multiscale bilateral filter to reduce noise while preserving edges in the images. As skin and glandular tissue have similar CT values in breast CT images, we use morphologic operations to obtain the skin mask based on information about its position. Second, we apply a modified fuzzy C-means classification method twice, once for the skin and once for the fatty and glandular tissue. We compared our classification results with manual segmentation results and used Dice overlap ratios to evaluate our classification method. We also tested our method on images with added noise. The overlap ratios for glandular tissue were above 94.7% for data from five patients. The evaluation results showed that our method is robust and accurate.
Automated cell analysis tool for a genome-wide RNAi screen with support vector machine based supervised learning
Steffen Remmele, Julia Ritzerfeld, Walter Nickel, et al.
RNAi-based high-throughput microscopy screens have become an important tool in the biological sciences for deciphering the mostly unknown biological functions of human genes. However, manual analysis is impossible for such screens since the number of image data sets can often be in the hundreds of thousands. Reliable automated tools are thus required to analyse the fluorescence microscopy image data sets, which usually contain two or more reaction channels. The image analysis tool presented herein is designed to analyse an RNAi screen investigating the intracellular trafficking and targeting of acylated Src kinases. In this specific screen, a data set consists of three reaction channels and the investigated cells can appear in different phenotypes. The main issues of the image processing task are an automatic cell segmentation, which has to be robust and accurate for all phenotypes, and a subsequent phenotype classification. The cell segmentation is done in two steps: the cell nuclei are segmented first, and a classifier-enhanced region growing based on the cell nuclei is then used to segment the cells. The classification of the cells is realized by a support vector machine which has to be trained manually using supervised learning. Furthermore, the tool is brightness invariant, allowing for different staining quality, and it provides a quality control that copes with typical defects during preparation and acquisition. A first version of the tool has already been successfully applied to an RNAi screen containing three hundred thousand image data sets, and the SVM-extended version is designed for additional screens.
Automatic detection of regions of interest in mammographic images
This work is part of our ongoing study aimed at comparing the topology of anatomical branching structures with the underlying image texture. Detection of regions of interest (ROIs) in clinical breast images serves as the first step in the development of an automated system for image analysis and breast cancer diagnosis. In this paper, we have investigated machine learning approaches for the task of identifying ROIs with visible breast ductal trees in a given galactographic image. Specifically, we have developed a boosting-based framework using the AdaBoost algorithm in combination with Haar wavelet features for ROI detection. Twenty-eight clinical galactograms with expert-annotated ROIs were used for training. Positive samples were generated by resampling near the annotated ROIs, and negative samples were generated randomly by image decomposition. Each detected ROI candidate was given a confidence score. Candidate ROIs with spatial overlap were merged and their confidence scores combined. We have compared three strategies for the elimination of false positives, which differed in how they combined confidence scores: by summation, averaging, or selecting the maximum score. The strategies were compared based upon their spatial overlap with the annotated ROIs. Using 4-fold cross-validation with the annotated clinical galactographic images, the summation strategy showed the best performance, with a 75% detection rate. When combining the top two candidates, selection of the maximum score showed the best performance, with a 96% detection rate.
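The sketch below illustrates the candidate-merging step with the three score-combination strategies (sum, mean, max) on invented bounding boxes and confidence scores; the greedy IoU-based grouping rule is an assumption, as the paper does not detail how spatial overlap is decided.

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two boxes given as (x0, y0, x1, y1)."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x1 - x0) * max(0, y1 - y0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def merge_candidates(boxes, scores, combine="sum", thresh=0.3):
    """Greedily merge spatially overlapping ROI candidates and combine scores.

    combine: 'sum', 'mean' or 'max' -- the three strategies compared in the paper.
    """
    order = np.argsort(scores)[::-1]
    merged = []
    for i in order:
        for group in merged:
            if iou(boxes[i], boxes[group[0]]) > thresh:
                group.append(i)
                break
        else:
            merged.append([i])
    reducer = {"sum": np.sum, "mean": np.mean, "max": np.max}[combine]
    return [(boxes[g[0]], reducer([scores[j] for j in g])) for g in merged]

# toy candidates (boxes in pixels, arbitrary confidence scores)
boxes = [(10, 10, 60, 60), (15, 12, 65, 62), (100, 100, 150, 150)]
scores = [0.8, 0.6, 0.7]
print(merge_candidates(boxes, scores, combine="max"))
```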
Plexiform neurofibroma tissue classification
L. Weizman, L. Hoch, L. Ben Sira, et al.
Plexiform Neurofibroma (PN) is a major complication of NeuroFibromatosis-1 (NF1), a common genetic disease that involving the nervous system. PNs are peripheral nerve sheath tumors extending along the length of the nerve in various parts of the body. Treatment decision is based on tumor volume assessment using MRI, which is currently time consuming and error prone, with limited semi-automatic segmentation support. We present in this paper a new method for the segmentation and tumor mass quantification of PN from STIR MRI scans. The method starts with a user-based delineation of the tumor area in a single slice and automatically detects the PN lesions in the entire image based on the tumor connectivity. Experimental results on seven datasets yield a mean volume overlap difference of 25% as compared to manual segmentation by expert radiologist with a mean computation and interaction time of 12 minutes vs. over an hour for manual annotation. Since the user interaction in the segmentation process is minimal, our method has the potential to successfully become part of the clinical workflow.
A novel classification method based on membership function
Yaxin Peng, Chaomin Shen, Lijia Wang, et al.
We propose a method for medical image classification using membership functions. Our aim is to classify the image into several classes based on prior knowledge. For every point, we calculate its membership function, i.e., the probability that the point belongs to each class. The point is finally labeled as the class with the highest membership value. The classification is reduced to minimizing a functional whose arguments are the membership functions. Our paper contains three novelties. First, bias correction and the Rudin-Osher-Fatemi (ROF) model are applied to the input image to enhance the image quality. Second, an unconstrained functional is used: we use variable substitution to avoid the constraints that the membership functions be positive and sum to one. Third, several techniques are used to speed up the computation. Experimental results on the ventricle show the validity of this approach.
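One standard way to remove the positivity and sum-to-one constraints by variable substitution is a softmax reparameterization, sketched below with NumPy; the abstract does not state which substitution the authors use, so this is only an illustrative choice.

```python
import numpy as np

def memberships(u):
    """Map unconstrained variables u (K x H x W) to membership functions.

    The softmax substitution guarantees positivity and sum-to-one at every
    pixel, so the functional can be minimised without explicit constraints.
    (The exact substitution used by the authors is not specified; this is
    one common choice.)
    """
    e = np.exp(u - u.max(axis=0, keepdims=True))   # numerically stable
    return e / e.sum(axis=0, keepdims=True)

rng = np.random.default_rng(0)
u = rng.normal(size=(3, 4, 4))        # three classes on a 4x4 toy image
m = memberships(u)
assert np.allclose(m.sum(axis=0), 1.0) and (m > 0).all()
labels = m.argmax(axis=0)             # each pixel takes the class with highest membership
```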
Automatic 3D kidney segmentation based on shape constrained GC-OAAM
The kidney can be classified into three main tissue types: renal cortex, renal medulla and renal pelvis (or collecting system). Dysfunction of different renal tissue types may cause different kidney diseases. Therefore, accurate and efficient segmentation of the kidney into different tissue types plays a very important role in clinical research. In this paper, we propose an automatic 3D kidney segmentation method which segments the kidney into the three different tissue types: renal cortex, medulla and pelvis. The proposed method synergistically combines active appearance model (AAM), live wire (LW) and graph cut (GC) methods, GC-OAAM for short. Our method consists of two main steps. First, a pseudo-3D segmentation method is employed for kidney initialization, in which the segmentation is performed slice-by-slice via a multi-object oriented active appearance model (OAAM) method. An improved iterative model refinement algorithm is proposed for the AAM optimization, which synergistically combines the AAM and LW methods. A multi-object strategy is applied to aid object initialization, and 3D model constraints are applied to the initialization result. Second, the object shape information generated in the initialization step is integrated into the GC cost computation. A multi-label GC method is used to segment the kidney into cortex, medulla and pelvis. The proposed method was tested on 19 clinical arterial-phase CT data sets. The preliminary results showed the feasibility and efficiency of the proposed method.
A new steerable pressure force for parametric deformable models
Jun Kong, Lee Cooper, Ashish Sharma, et al.
Active contour models have been widely used in various image analysis applications. Despite their usefulness, problems such as limited capture range, poor concavity conformation, and slow convergence limit their utility. This paper presents a new pressure-like force that not only improves the contour convergence rate, but also encourages contours to conform to concave regions. Unlike the traditional pressure force, this new force does not require user input for the force direction and is steerable according to the image content. The improved convergence rate and force normalization consistency of this new force are demonstrated in comparison with the gradient vector flow force field on synthetic images. The accuracies of the two methods are compared against manual markups on a set of cardiac MRI images. Moreover, results on an MRI image smoothed at different levels demonstrate the robustness of this new force to noise.
Towards a parts-based approach to sub-cortical brain structure parsing
Digvijay Gagneja, Caiming Xiong, Jason J. Corso
The automatic localization and segmentation, or parsing, of neuroanatomical brain structures is a key step in many neuroscience tasks. However, the inherent variability in these brain structures and their appearance continues to challenge medical image processing methods. The state of the art primarily relies upon local voxel-based morphometry, Markov random field, and probabilistic atlas-based approaches, which limits the ability to explicitly capture the parts-based structure inherent in the brain. We propose a method that defines a principled parts-based representation of the sub-cortical brain structures. Our method is based on the pictorial structures model and jointly models the appearance of each part as well as the layout of the parts as a whole. Inference is cast as a maximum a posteriori problem and solved in a steepest-descent manner. Experimental results on a 28-case data set demonstrate the high accuracy of our method and substantiate our claim that there is significant promise in a parts-based approach to modeling medical imaging structures.
Region based level set segmentation of the outer wall of the carotid bifurcation in CTA
This paper presents a level set based method for segmenting the outer vessel wall and plaque components of the carotid artery in CTA. The method employs a GentleBoost classification framework that classifies pixels as calcified region or not, and inside or outside the vessel wall. The combined result of both classifications is used to construct a speed function for level set based segmentation of the outer vessel wall; the segmented lumen is used to initialize the level set. The method has been optimized on 20 datasets and evaluated on 80 datasets for which manually annotated data was available as reference. The average Dice similarity of the outer vessel wall segmentation was 92%, which compares favorably to previous methods.
Implicit medial representation for vessel segmentation
Guillaume Pizaine, Elsa Angelini, Isabelle Bloch, et al.
In the context of mathematical modeling of complex vessel tree structures with deformable models, we present a novel level set formulation to evolve both the vessel surface and its centerline. The implicit function is computed as the convolution of a geometric primitive, representing the centerline, with localized kernels of continuously-varying scales, allowing accurate estimation of the vessel width. The centerline itself is derived as the characteristic function of an underlying signed medialness function, to enforce a tubular shape for the segmented object, and evolves under shape and medialness constraints. Given a set of initial medial loci and radii, this representation first allows for simultaneous recovery of the vessel centerlines and radii, thus enabling surface reconstruction. Secondly, due to the topological adaptivity of the level set segmentation setting, it can handle tree-like structures and bifurcations without additional junction detection schemes or user input. We discuss the shape parameters involved, their tuning and their influence on the control of the segmented shapes, and we present segmentation results on synthetic images, 2D angiographies, 3D rotational angiographies and 3D CT scans.
A study on automated anatomical labeling to arteries concerning with colon from 3D abdominal CT images
Bui Huy Hoang, Masahiro Oda, Zhengang Jiang, et al.
This paper presents an automated anatomical labeling method, based on multi-class AdaBoost, for arteries extracted from contrast-enhanced 3D CT images. In abdominal surgery, understanding the vasculature related to a target organ such as the colon is very important. Therefore, the anatomical structure of blood vessels needs to be understood by computers in a system supporting abdominal surgery. Several studies have addressed automated anatomical labeling, but none has addressed the labeling of arteries related to the colon. The proposed method obtains a tree structure of arteries from the artery region and calculates feature values for each branch, namely its thickness, curvature, direction, and running vectors. Candidate arterial names are then computed by classifiers that are trained to output artery names. Finally, a global optimization process is applied to the candidate arterial names to determine the final names. The target arteries of this paper are nine lower abdominal arteries (AO, LCIA, RCIA, LEIA, REIA, SMA, IMA, LIIA, RIIA). We applied the proposed method to 14 cases of contrast-enhanced 3D abdominal CT images and evaluated the results using a leave-one-out scheme. The average precision and recall rates of the proposed method were 87.9% and 93.3%, respectively. The results of this method are applicable to the display of anatomical names in surgical simulation and computer-aided surgery.
Direction-dependent level set segmentation of cerebrovascular structures
Nils Daniel Forkert, Dennis Säring, Till Illies, et al.
Exact cerebrovascular segmentations based on high-resolution 3D anatomical datasets are required for many clinical applications. A general problem of most vessel segmentation methods is the insufficient delineation of small vessels, which are often represented by rather low intensities and high surface curvatures. This paper describes an improved direction-dependent level set approach for cerebrovascular segmentation. The proposed method utilizes the direction information of the eigenvectors computed by vesselness filters to adjust the weights of the internal energy depending on the location. The basic idea is to weight the internal energy lower where the gradient of the level set is aligned with the direction of the eigenvector extracted by the vesselness filter. A quantitative evaluation of the proposed method, based on three clinical Time-of-Flight MRA datasets with available manual segmentations and using the Tanimoto coefficient, showed a mean improvement of 0.081 over the initial segmentation, while the corresponding level set segmentation without integration of direction information did not lead to satisfying results. In summary, the proposed method enables an improved delineation of small vessels, especially of those represented by low intensities and high surface curvatures.
Completely automated multiresolution edge snapper (CAMES): a new technique for an accurate carotid ultrasound IMT measurement and its validation on a multi-institutional database
Filippo Molinari, Christos Loizou, Guang Zeng, et al.
Since 2005, our research team has been developing automated techniques for carotid artery (CA) wall segmentation and intima-media thickness (IMT) measurement. We developed a snake-based technique (which we named CULEX [1,2]), a method based on an integrated approach of feature extraction, fitting, and classification (which we named CALEX [3]), and a watershed transform based algorithm [4]. Each of the previous methods essentially consisted of two distinct stages. Stage I - automatic carotid artery detection: intelligent procedures are adopted to automatically locate the CA in the image frame. Stage II - CA wall segmentation and IMT measurement: the CA distal (or far) wall is segmented in order to trace the lumen-intima (LI) and media-adventitia (MA) boundaries. The distance between the LI and MA borders is the IMT estimate. The aim of this paper is to describe a novel and completely automated technique for carotid artery segmentation and IMT measurement based on an innovative multi-resolution approach.
Evaluation of blood vessel detection methods
R. Sadeghzadeh, M. Berks, S. M. Astley, et al.
We address the problem of evaluating the performance of algorithms for detecting curvilinear structures in medical images. As an exemplar we consider the detection of vessel trees, which contain structures of variable width and contrast. Results for the conventional approach to evaluation, in which the detector output is compared directly with a ground-truth mask, tend to be dominated by the detection of large vessels and fail to capture adequately whether or not finer, lower-contrast vessels have been detected successfully. We propose and investigate three alternative evaluation strategies. We demonstrate the use of the standard and new evaluation strategies to assess the performance of a novel method for detecting vessels in retinograms, using the publicly available DRIVE database.
Automatic segmentation and diameter measurement of coronary artery vessels
Kun Zhao, Zhenyu Tang, Josef Pauli
This work presents a hybrid method for 2D artery vessel segmentation and diameter measurement in X-ray angiograms. The proposed method is novel in that tracking-based and model-based approaches are combined. A robust and efficient tracking template, the "annular template", is devised for vessel tracking. It can readily be applied to X-ray angiograms without any preprocessing. Starting from an initial tracking point given by the user, the tracking algorithm iteratively repositions the annular template and thereby detects the vessel boundaries and possible bifurcations. With a user-selected end point, the tracking process results in a set of points that describes the contour and topology of an artery vessel segment between the initial and end points. A "boundary correction and interpolation" operation refines the extracted points, which initialize the Snakes algorithm. Boundary correction adjusts the points to ensure that they lie on the vessel segment of interest. Boundary interpolation adds more points, so that there are sufficiently many points for the Snakes algorithm to generate a smooth and accurate vessel segmentation. After the application of Snakes, the resulting points are sequentially connected to represent the vessel contour. The diameters are then measured along the extracted vessel contour. The segmentation and measurement results are compared with manually extracted and measured vessel segments. The average precision, recall and Jaccard index over 21 vessel samples are 91.5%, 92.1% and 84.9%, respectively. Compared with ground-truth diameter measurements, the average relative error is 8.2% and the average absolute error is 1.13 pixels.
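The reported precision, recall and Jaccard index can be computed from binary masks as in the short sketch below; the toy masks stand in for an extracted and a manually traced vessel segment.

```python
import numpy as np

def segmentation_scores(pred, ref):
    """Precision, recall and Jaccard index for two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.logical_and(pred, ref).sum()
    precision = tp / (pred.sum() + 1e-9)
    recall = tp / (ref.sum() + 1e-9)
    jaccard = tp / (np.logical_or(pred, ref).sum() + 1e-9)
    return precision, recall, jaccard

# toy masks standing in for an extracted and a manually traced vessel segment
ref = np.zeros((32, 32), dtype=bool); ref[10:20, 5:25] = True
pred = np.zeros_like(ref); pred[11:21, 6:26] = True
print(segmentation_scores(pred, ref))
```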
Posters: Classification
Liver fat quantification using fast kVp-switching dual energy CT
Andras Kriston, Paulo Mendonça, Alvin Silva M.D., et al.
Nonalcoholic steatohepatitis (NASH) is a liver disease that occurs in patients who lack the history of alcohol use usually associated with this type of liver damage. A major symptom of NASH is increased fat deposition in the liver. Gemstone Spectral Imaging (GSI) with fast kVp switching enables projection-based material decomposition, offering the opportunity to accurately characterize tissue types, e.g., fat and healthy liver tissue, based on their energy-sensitive material attenuation and density. We describe our pilot efforts to apply GSI to locate and quantify the amount of fat deposition in the liver. Two approaches are presented: one computes the percentage of fat from the difference in HU values at high and low energies, and the other directly computes the fat volume fraction at each voxel using multi-material decomposition. Simulation software was used to create a phantom with a set of concentric rings, each composed of fat and soft tissue in different relative amounts, with attenuation values obtained from the National Institute of Standards and Technology. Monte Carlo 80 and 140 kVp X-ray projections were acquired and CT images of the phantom were reconstructed. Results demonstrated the sensitivity of dual-energy CT to the presence of fat and its ability to distinguish fat from soft tissue. Additionally, actual patient (liver) datasets were acquired using GSI and monochromatic images at 70 and 140 keV were reconstructed. Preliminary results demonstrate a tissue sensitivity that appears sufficient to quantify fat content with the degree of accuracy that may be needed for non-invasive clinical assessment of NASH.
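The first approach, computing a fat percentage from HU values at two energies, can be illustrated as a two-material least-squares decomposition; the reference HU values below are invented placeholders, not the NIST calibration values used in the paper.

```python
import numpy as np

# hypothetical reference attenuation values (HU) for pure fat and lean liver
# tissue at two monochromatic energies (70 keV, 140 keV); real values would
# come from calibration or NIST data as in the paper.
HU_FAT = np.array([-120.0, -90.0])
HU_LIVER = np.array([60.0, 55.0])

def fat_fraction(hu_voxel):
    """Least-squares fat volume fraction for one voxel measured at two energies,
    assuming a two-material (fat / lean liver) linear mixture."""
    a = (HU_FAT - HU_LIVER).reshape(-1, 1)
    b = hu_voxel - HU_LIVER
    f, *_ = np.linalg.lstsq(a, b, rcond=None)
    return float(np.clip(f[0], 0.0, 1.0))

print(fat_fraction(np.array([15.0, 18.0])))   # about 25% fat for this toy voxel
```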
Robust biological parametric mapping: an improved technique for multimodal brain image analysis
Xue Yang, Lori Beason-Held, Susan M. Resnick, et al.
Mapping the quantitative relationship between structure and function in the human brain is an important and challenging problem. Numerous volumetric, surface, region of interest and voxelwise image processing techniques have been developed to statistically assess potential correlations between imaging and non-imaging metrics. Recently, biological parametric mapping has extended the widely popular statistical parametric approach to enable application of the general linear model to multiple image modalities (both for regressors and regressands) along with scalar valued observations. This approach offers great promise for direct, voxelwise assessment of structural and functional relationships with multiple imaging modalities. However, as presented, the biological parametric mapping approach is not robust to outliers and may lead to invalid inferences (e.g., artifactual low p-values) due to slight mis-registration or variation in anatomy between subjects. To enable widespread application of this approach, we introduce robust regression and robust inference in the neuroimaging context of application of the general linear model. Through simulation and empirical studies, we demonstrate that our robust approach reduces sensitivity to outliers without substantial degradation in power. The robust approach and associated software package provides a reliable way to quantitatively assess voxelwise correlations between structural and functional neuroimaging modalities.
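As a generic illustration of robust regression down-weighting outlying observations (here with scikit-learn's Huber estimator rather than the authors' own robust GLM machinery), consider the following sketch on invented data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, HuberRegressor

rng = np.random.default_rng(0)

# hypothetical voxelwise example: regress a functional measure on a
# structural measure plus a scalar covariate, with a few outlier subjects
n = 60
X = np.column_stack([rng.normal(size=n), rng.normal(size=n)])
y = 0.8 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(scale=0.3, size=n)
y[:5] += 8.0   # a handful of subjects corrupted by mis-registration

ols = LinearRegression().fit(X, y)
robust = HuberRegressor().fit(X, y)   # down-weights the outlying observations

print("OLS coefficients:   ", ols.coef_)
print("Robust coefficients:", robust.coef_)
```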
Automatic assessment of ultrasound image usability
Luca Valente, Gareth Funka-Lea, Jeffrey Stoll
We present a novel and efficient approach for evaluating the quality of ultrasound images. Image acquisition is sensitive to skin contact and transducer orientation and requires both time and technical skill to be done properly. Images commonly suffer degradation due to acoustic shadows and signal attenuation, which present as regions of low signal intensity masking anatomical details and making the images partly or totally unusable. As ultrasound image acquisition and analysis becomes increasingly automated, it is beneficial to also automate the estimation of image quality. Towards this end, we present an algorithm that classifies regions of an image as usable or un-usable. Example applications of this algorithm include improved compounding of free-hand 3D ultrasound volumes by eliminating unusable data and improved automatic feature detection by limiting detection to only usable areas. The algorithm operates in two steps. First, it classifies the image into bright areas, likely to have image content, and dark areas, likely to have no content. Second, it classifies the dark areas into unusable (i.e. due to shadowing and/or signal loss) and usable (i.e. anatomically accurate dark regions, such as with a blood vessel) sub-areas. The classification considers several factors, including statistical information, gradient intensity and geometric properties such as shape and relative position. Relative weighting of factors was obtained through the training of a Support Vector Machine. Classification results for both human and phantom images are presented and compared to manual classifications. This method achieves 91% sensitivity and 91% specificity for usable regions of human scans.
An image-guided tool to prevent hospital acquired infections
Melinda Nagy, László Szilágyi, Ákos Lehotsky, et al.
Hospital Acquired Infections (HAI) represent the fourth leading cause of death in the United States, and claim hundreds of thousands of lives annually in the rest of the world. This paper presents a novel low-cost mobile device, called Stery-Hand, that helps to avoid HAI by improving hand hygiene control through an objective evaluation of the quality of hand washing. The use of the system is intuitive: having performed hand washing with a soap mixed with UV reflective powder, the skin appears brighter under UV illumination on the disinfected surfaces. Washed hands are inserted into the Stery-Hand box, where a digital image is taken under UV lighting. Automated image processing algorithms are employed in three steps to evaluate the quality of hand washing. First, the contour of the hand is extracted in order to distinguish the hand from the background. Next, a semi-supervised clustering algorithm classifies the pixels of the hand into three groups, corresponding to clean, partially clean and dirty areas. The clustering algorithm is derived from the histogram-based quick fuzzy c-means approach, using a priori information extracted from reference images evaluated by experts. Finally, the identified areas are adjusted to suppress shading effects, and quantified in order to give a verdict on hand disinfection quality. The proposed methodology was validated through tests using hundreds of images recorded in our laboratory. The proposed system was found robust and accurate, producing correct estimations for over 98% of the test cases. Stery-Hand may be employed in general practice, and it may also serve educational purposes.
Posters: Shape
Propagating uncertainties in statistical model based shape prediction
Ekaterina Syrkina, Rémi Blanc, Gàbor Székely
This paper addresses the question of accuracy assessment and confidence region estimation in statistical model based shape prediction. Shape prediction consists in estimating the shape of an organ based on a partial observation, due e.g. to a limited field of view or poorly contrasted images, and generally requires a statistical model. However, such predictions can be impaired by several sources of uncertainty, in particular the presence of noise in the observation, limited correlations between the predictors and the shape to predict, as well as limitations of the statistical shape model, in particular the number of training samples. We propose a framework which takes these into account and derives confidence regions around the predicted shape. Our method relies on the construction of two separate statistical shape models, for the predictors and for the unseen parts, and exploits the correlations between them assuming a joint Gaussian distribution. Limitations of the models are taken into account by jointly optimizing the prediction and minimizing the shape reconstruction error through cross-validation. An application to the prediction of the shape of the proximal part of the human tibia given the shape of the distal femur is presented, as well as an evaluation of the reliability of the estimated confidence regions, using a database of 184 samples. Potential applications are reconstructive surgery, e.g. to assess whether an implant fits in a range of acceptable shapes, or functional neurosurgery when the target's position is not directly visible and needs to be inferred from nearby visible structures.
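Under the joint Gaussian assumption, prediction of the unseen shape coefficients, and the covariance from which confidence regions are derived, follow from standard Gaussian conditioning, sketched below on an invented toy covariance:

```python
import numpy as np

def conditional_gaussian(mu, cov, n_obs, x_obs):
    """Predict unseen variables from observed ones under a joint Gaussian.

    mu, cov describe the joint distribution of [observed; unseen] shape
    coefficients; the first n_obs entries are observed with values x_obs.
    Returns the conditional mean and covariance, from which confidence
    regions around the predicted shape can be derived.
    """
    mu_o, mu_u = mu[:n_obs], mu[n_obs:]
    S_oo = cov[:n_obs, :n_obs]
    S_uo = cov[n_obs:, :n_obs]
    S_uu = cov[n_obs:, n_obs:]
    gain = S_uo @ np.linalg.solve(S_oo, np.eye(n_obs))
    mean = mu_u + gain @ (x_obs - mu_o)
    cond_cov = S_uu - gain @ S_uo.T
    return mean, cond_cov

# toy joint model of 3 observed and 2 unseen coefficients
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
cov = A @ A.T + np.eye(5)             # a valid covariance matrix
mu = np.zeros(5)
mean, cond_cov = conditional_gaussian(mu, cov, 3, np.array([0.5, -0.2, 1.0]))
# marginal 95% confidence half-widths for each unseen coefficient
half_width = 1.96 * np.sqrt(np.diag(cond_cov))
```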
Shape model training for concurrent localization of the left and right knee
Heike Ruppertshofen, Cristian Lorenz, Sarah Schmidt, et al.
An automatic algorithm for training suitable models for the Generalized Hough Transform (GHT) is presented. The applied iterative approach learns the shape of the target object directly from training images and incorporates the variability in pose and scale of the target object exhibited in the images. To make the model more robust and representative of the target object, an individual weight is estimated for each model point using a discriminative approach. These weights are employed in the voting procedure of the GHT, increasing the impact of important points on the localization result. The proposed procedure is extended here with a new error measure and a revised point weight training to enable the generation of models representing several target objects. Common parts of the target objects thereby obtain larger weights, while the model may also contain object-specific model points, if necessary, to be representative of all targets. The method is applied here to the localization of knee joints in long-leg radiographs. A quantitative comparison of the new approach with the separate localization of the right and left knee showed improved results concerning localization precision and performance.
Whole vertebral bone segmentation method with a statistical intensity-shape model based approach
An automatic segmentation algorithm for the vertebrae in human body CT images is presented. In particular, we focus on constructing and utilizing four different combined statistical intensity-shape models for the cervical, upper thoracic, lower thoracic and lumbar vertebrae, respectively. For this purpose, two previously reported methods were combined: a deformable model-based initial segmentation method and a statistical shape-intensity model-based precise segmentation method. The former is used as a pre-processing step to detect the position and orientation of each vertebra, which determines the initial condition for the latter precise segmentation method. The precise segmentation method needs prior knowledge of both the intensities and the shapes of the objects. After PCA analysis of such shape-intensity expressions obtained from training image sets, the vertebrae were parametrically modeled as a linear combination of the principal component vectors. The segmentation of each target vertebra was performed by fitting this parametric model to the target image via maximum a posteriori estimation, combined with the geodesic active contour method. In an experiment using 10 cases, the initial segmentation was successful in 6 cases and only partially failed in 4 cases (2 in the cervical area and 2 in the lumbo-sacral area). In the precise segmentation, the mean error distances were 2.078, 1.416, 0.777 and 0.939 mm for the cervical, upper thoracic, lower thoracic and lumbar spine, respectively. In conclusion, our automatic segmentation algorithm for the vertebrae in human body CT images showed a fair performance for the cervical, thoracic and lumbar vertebrae.
Detecting hippocampal shape changes in Alzheimer's disease using statistical shape models
Kaikai Shen, Pierrick Bourgeat, Jurgen Fripp, et al.
The hippocampus is affected at an early stage in the development of Alzheimer's disease (AD). Using brain Magnetic Resonance (MR) images, we can investigate the effect of AD on the morphology of the hippocampus. Statistical shape models (SSM) are usually used to describe and model the hippocampal shape variations among the population. We use the shape variation from the SSM as features to classify AD from normal control (NC) cases. Conventional SSM uses principal component analysis (PCA) to compute the modes of variation among the population. Although these modes are representative of variations within the training data, they are not necessarily discriminative on labelled data. In this study, a Hotelling's T² test is used to qualify the landmarks to be used for PCA. The resulting variation modes are used as predictors of AD versus NC. The discriminative ability of these predictors is evaluated in terms of their classification performance using support vector machines (SVM). Using only the landmarks of the SSM that are statistically discriminative between AD and NC showed a better separation between AD and NC. These predictors also showed better correlation with cognitive scores such as the mini-mental state examination (MMSE) and the Alzheimer's disease assessment scale (ADAS).
Classification of mathematics deficiency using shape and scale analysis of 3D brain structures
Sebastian Kurtek, Eric Klassen, John C. Gore, et al.
We investigate the use of a recent technique for shape analysis of brain substructures in identifying learning disabilities in third-grade children. This Riemannian technique provides a quantification of differences in shapes of parameterized surfaces, using a distance that is invariant to rigid motions and re-parameterizations. Additionally, it provides an optimal registration across surfaces for improved matching and comparisons. We utilize an efficient gradient based method to obtain the optimal re-parameterizations of surfaces. In this study we consider 20 different substructures in the human brain and correlate the differences in their shapes with abnormalities manifested in deficiency of mathematical skills in 106 subjects. The selection of these structures is motivated in part by the past links between their shapes and cognitive skills, albeit in broader contexts. We have studied the use of both individual substructures and multiple structures jointly for disease classification. Using a leave-one-out nearest neighbor classifier, we obtained a 62.3% classification rate based on the shape of the left hippocampus. The use of multiple structures resulted in an improved classification rate of 71.4%.
A decision support scheme for vertebral geometry on body CT scans
Tatsuro Hayashi, Huayue Chen, Kei Miyamoto, et al.
To gain a better understanding of bone quality, a great deal of attention has been paid to vertebral geometry in anatomy. The aim of this study was to design a decision support scheme for vertebral geometries. The proposed scheme consists of four parts: (1) automated extraction of bone, (2) generation of a median plane image of the spine, (3) detection of vertebrae, and (4) quantification of vertebral body width, depth, cross-sectional area (CSA), and trabecular bone mineral density (BMD). The proposed scheme was applied to 10 CT cases and compared with manual tracking performed by an anatomy expert. Mean differences in width, depth, CSA, and trabecular BMD were 3.1 mm, 1.4 mm, 88.7 mm², and 7.3 mg/cm³, respectively. We found moderate or high correlations in vertebral geometry between our scheme and manual tracking (r > 0.72). However, measurements obtained using our scheme were slightly smaller than those acquired from manual tracking. Nevertheless, the outputs of the proposed scheme in most CT cases were regarded as appropriate on the basis of the subjective assessment of an anatomy expert. Therefore, if the appropriate outputs from the proposed scheme are selected in advance by an anatomy expert, the results can potentially be used for an analysis of vertebral body geometries.
A joint model for boundaries of multiple anatomical parts
Grégoire Kerr, Sebastian Kurtek, Anuj Srivastava
The joint shape analysis of multiple anatomical parts is a promising area of research with applications in medical diagnostics, growth evaluations, and disease characterizations. In this paper, we consider several features (shapes, orientations, scales, and locations) associated with anatomical parts and develop probability models that capture interactions between these features and across objects. The shape component is based on elastic shape analysis of continuous boundary curves. The proposed model is a second-order model that treats the principal coefficients in tangent spaces of the joint manifolds as multivariate normal random variables. Additionally, it models interactions across objects using area-interaction processes. Using given observations of four anatomical parts (caudate, hippocampus, putamen and thalamus) on one side of the brain, we first estimate the model parameters and then generate random samples from the fitted model using the Metropolis-Hastings algorithm. The plausibility of these random samples validates the proposed models.
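The sampling step can be illustrated with a minimal random-walk Metropolis-Hastings sampler; the target below is a stand-in bivariate Gaussian rather than the paper's fitted joint model over shapes, scales, orientations and locations.

```python
import numpy as np

def metropolis_hastings(log_density, x0, n_samples=5000, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings sampler for a target log-density.

    In the paper the target would be the fitted joint model; here it is a
    simple 2D Gaussian so the sketch stays self-contained.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples, log_p = [], log_density(x)
    for _ in range(n_samples):
        prop = x + step * rng.normal(size=x.shape)
        log_p_prop = log_density(prop)
        if np.log(rng.random()) < log_p_prop - log_p:   # accept / reject
            x, log_p = prop, log_p_prop
        samples.append(x.copy())
    return np.array(samples)

# stand-in target: correlated bivariate normal
cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
log_density = lambda x: -0.5 * x @ cov_inv @ x
samples = metropolis_hastings(log_density, x0=[0.0, 0.0])
print(samples.mean(axis=0), np.cov(samples.T))
```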
Global-to-local, shape-based, real and virtual landmarks for shape modeling by recursive boundary subdivision
Landmark based statistical object modeling techniques, such as the Active Shape Model (ASM), have proven useful in medical image analysis. Identification of the same homologous set of points in a training set of object shapes is the most crucial step in ASM, which has encountered challenges such as (C1) defining and characterizing landmarks; (C2) ensuring homology; (C3) generalizing to n > 2 dimensions; (C4) achieving practical computations. In this paper, we propose a novel global-to-local strategy that attempts to address C3 and C4 directly and works in R^n. The 2D version starts from two initial corresponding points determined in all training shapes via a method α, and proceeds by subdividing the shapes into connected boundary segments using the line determined by these points. A shape analysis method β is applied to each segment to determine a landmark on the segment. Each such point introduces further pairs of points, and the lines they define are used to subdivide the boundary segments further. This recursive boundary subdivision (RBS) process continues simultaneously on all training shapes, maintaining synchrony of the level of recursion, and thereby automatically keeping correspondence among the generated points through the correspondence of the homologous shape segments in all training shapes. The process terminates when no subdividing lines remain for which method β indicates that a point can be selected on the associated segment. Examples of α and β are presented based on (a) distance; (b) Principal Component Analysis (PCA); and (c) the novel concept of virtual landmarks.
Automatic cortical thickness analysis on rodent brain
Joohwi Lee, Cindy Ehlers, Fulton Crews, et al.
Localized differences in the cortex are among the most useful morphometric traits in human and animal brain studies. Many tools and methods have already been developed to automatically measure and analyze cortical thickness for the human brain. However, these tools cannot be directly applied to rodent brains due to the difference in scale; even adult rodent brains are 50 to 100 times smaller than human brains. This paper describes an algorithm for automatically measuring the cortical thickness of mouse and rat brains. The algorithm consists of three steps: segmentation, thickness measurement, and statistical analysis among experimental groups. The segmentation step separates the neocortex from other brain structures and thus serves as preprocessing for the thickness measurement. In the thickness measurement step, the thickness is computed by solving a Laplacian PDE and a transport equation. The Laplacian PDE first creates streamlines as an analogy of cortical columns; the transport equation computes the length of the streamlines. The result is stored as a thickness map over the neocortex surface. For the statistical analysis, it is important to sample thickness at corresponding points. This is achieved by the particle correspondence algorithm, which minimizes entropy between dynamically moving sample points called particles. Since the computational cost of the correspondence algorithm may limit the number of corresponding points, we use thin-plate spline based interpolation to increase the number of corresponding sample points. As a driving application, we measured thickness differences to assess the effects of adolescent intermittent ethanol exposure that persist into adulthood, and performed a t-test between the control and exposed rat groups. We found significantly differing regions in both hemispheres.
Statistical modeling of the arterial vascular tree
Thomas Beck, Christian Godenschwager, Miriam Bauer, et al.
Automatic examination of medical images is becoming increasingly important due to the rising amount of data. Automated methods are therefore required which combine anatomical knowledge and robust segmentation to examine the structure of interest. We propose a statistical model of the vascular tree based on vascular landmarks and unbranched vessel sections. An undirected graph provides the anatomical topology, semantics, existing landmarks and attached vessel sections. The atlas was built using semi-automatically generated geometric models of various body regions, ranging from the carotid arteries to the lower legs. The geometric models contain vessel centerlines as well as orthogonal cross-sections at equidistant intervals, with the vessel contour having the form of a polygon path. The geometric vascular model is supplemented by anatomical landmarks which are not necessarily related to the vascular system. These anatomical landmarks define point correspondences which are used for registration with a thin-plate spline interpolation. After the registration process, the models were merged to form the statistical model, which can be mapped to unseen images based on a subset of anatomical landmarks. This approach provides probability distributions for the locations of landmarks, vessel-specific geometric properties including shape, expected radii and branching points, and the vascular topology. The applications of this statistical model include model-based extraction of the vascular tree, which greatly benefits from the vessel-specific geometry description and variation ranges. Furthermore, the statistical model can serve as a basis for computer-aided diagnosis systems as an indicator of pathologically deformed vessels, and interaction with the geometric model becomes significantly more user-friendly for physicians through the use of anatomical names.
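The landmark-based registration step can be sketched with SciPy's thin-plate-spline RBF interpolator; the landmark coordinates and centerline points below are invented, and this is only a minimal stand-in for the full model-merging pipeline.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# corresponding anatomical landmarks in a subject model and in the atlas
# (hypothetical 3D coordinates in mm)
landmarks_subject = rng.uniform(0, 200, size=(12, 3))
landmarks_atlas = landmarks_subject + rng.normal(scale=5.0, size=(12, 3))

# thin-plate-spline mapping from subject space to atlas space
warp = RBFInterpolator(landmarks_subject, landmarks_atlas,
                       kernel="thin_plate_spline")

# apply the warp to a vessel centerline before merging it into the atlas
centerline = rng.uniform(0, 200, size=(50, 3))
centerline_in_atlas = warp(centerline)
```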
Posters: Motion Analysis
Motion tracking of left ventricle and coronaries in 4D CTA
Dong Ping Zhang, Xiahai Zhuang, Sebastien Ourselin, et al.
In this paper, we present a novel approach for simultaneous motion tracking of the left ventricle and coronary arteries from cardiac Computed Tomography Angiography (CTA) images. We first use the multi-scale vesselness filter proposed by Frangi et al. [1] to enhance vessels in the cardiac CTA images. The vessel centrelines are then extracted as minimal-cost paths from the enhanced images. The centrelines at end-diastole (ED) are used as prior input for the motion tracking. All other centrelines are used to evaluate the accuracy of the motion tracking. To segment the left ventricle automatically, we perform three levels of registration using a cardiac atlas obtained from MR images. The cardiac motion is derived from the CTA sequences using a non-rigid registration algorithm driven by local-phase information. The CTA image at each time frame is registered to the ED frame by maximising the proposed similarity function and following a serial registration scheme. Once the images have been aligned, a dynamic motion model of the left ventricle can be obtained by applying the computed free-form deformations to the segmented left ventricle at the ED phase. A similar propagation method also applies to the coronary arteries. To validate the accuracy of the motion model we compare the actual position of the coronaries and left ventricle in each time frame with the positions predicted by the proposed tracking method.
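The vessel-enhancement and centreline-extraction steps have standard open-source counterparts. Below is a hedged 2D sketch using scikit-image: the multi-scale Frangi vesselness filter followed by a minimal-cost path between two user-supplied points; the paper works on 3D CTA volumes and uses its own implementation, and the seed/endpoint inputs here are assumptions of the example.

```python
import numpy as np
from skimage.filters import frangi
from skimage.graph import route_through_array

def extract_centreline(slice_2d, seed, endpoint):
    """Enhance bright tubular structures and extract a centreline as the
    minimal-cost path between seed and endpoint (pixel coordinates)."""
    # Multi-scale Hessian-based vesselness (vessels are bright in CTA).
    vesselness = frangi(slice_2d, sigmas=np.arange(1, 6), black_ridges=False)

    # Low cost where vesselness is high, so the cheapest path follows the vessel.
    cost = 1.0 / (vesselness + 1e-6)
    path, _ = route_through_array(cost, start=seed, end=endpoint,
                                  fully_connected=True, geometric=True)
    return np.asarray(path)          # pixel coordinates of the centreline
```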
Three-dimensional kinematic estimation of mobile-bearing total knee arthroplasty from x-ray fluoroscopic images
Takaharu Yamazaki, Kazuma Futai, Tetsuya Tomita, et al.
To achieve 3D kinematic analysis of total knee arthroplasty (TKA), 2D/3D registration techniques, which use X-ray fluoroscopic images and a computer-aided design (CAD) model of the knee implant, have attracted attention in recent years. These techniques can provide information regarding the movement of the radiopaque femoral and tibial components but not of the radiolucent polyethylene insert, because the insert silhouette does not appear clearly on the X-ray image. Therefore, it has been difficult to obtain the 3D kinematics of the polyethylene insert, particularly a mobile-bearing insert that moves on the tibial component. This study presents a technique for 3D kinematic analysis of the mobile-bearing insert in TKA using X-ray fluoroscopy, assesses its accuracy, and finally demonstrates clinical applications. For 3D pose estimation of the mobile-bearing insert, tantalum beads and a CAD model containing these beads are utilized, and the 3D pose of the insert model is estimated using a feature-based 2D/3D registration technique. In order to validate the accuracy of the present technique, experiments including a computer simulation test were performed. The results showed that the pose estimation accuracy was sufficient for analyzing mobile-bearing TKA kinematics (RMS error of about 1.0 mm and 1.0 degree). In the clinical applications, seven patients with mobile-bearing TKA performing deep knee bending were studied and analyzed. Consequently, the present technique enables us to better understand mobile-bearing TKA kinematics, and this type of evaluation is expected to be helpful for improving implant design and optimizing TKA surgical techniques.
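The core of a feature-based 2D/3D registration of bead markers can be sketched as a small least-squares problem. The projection geometry below (point source at the origin, detector plane at a fixed focal length) and all names are simplifying assumptions of this illustration, not the authors' calibration or optimizer.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points_3d, focal_length=1000.0):
    """Simplified perspective projection for a single-plane fluoroscopy setup
    (source at the origin, detector plane at z = focal_length)."""
    return focal_length * points_3d[:, :2] / points_3d[:, 2:3]

def estimate_pose(beads_model, beads_2d, pose0):
    """Estimate a 6-DOF pose (rx, ry, rz in degrees; tx, ty, tz in mm) of the
    insert model from detected 2D bead positions by minimizing the distance
    between projected model beads and the detected beads."""
    def residuals(pose):
        rot = Rotation.from_euler("xyz", pose[:3], degrees=True)
        transformed = rot.apply(beads_model) + pose[3:]
        return (project(transformed) - beads_2d).ravel()

    result = least_squares(residuals, pose0)
    return result.x
```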
An iterative particle filter approach for respiratory motion estimation in nuclear medicine imaging
The continual improvement in spatial resolution of Nuclear Medicine (NM) scanners has made accurate compensation of patient motion increasingly important. A major source of corrupting motion in NM acquisitions is respiration. A particle filter (PF) approach has therefore been proposed as a powerful method for motion correction in NM. The probabilistic view of the system taken by the PF is advantageous because it accounts for the complexity and uncertainty of estimating respiratory motion. Previous tests using the XCAT phantom have shown the possibility of estimating unseen organ configurations using training data consisting of only a single respiratory cycle. This paper augments the application-specific adaptation methods previously implemented for better PF estimates with an iterative model update step. Results show that errors are further reduced within a small number of iterations, and such improvements will help the PF cope with more realistic and complex applications.
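For readers unfamiliar with particle filtering, the following is a textbook bootstrap particle filter for a 1-D respiratory surrogate signal. It is a generic sketch under simple Gaussian assumptions and does not include the application-specific adaptations or the iterative model update described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=500, process_std=0.5, obs_std=1.0):
    """Generic bootstrap particle filter for a 1-D respiratory signal."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for z in observations:
        # Predict: random-walk motion model with process noise.
        particles += rng.normal(0.0, process_std, n_particles)
        # Update: weight each particle by the Gaussian likelihood of the observation.
        weights = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))
        # Resample (systematic resampling keeps particle diversity).
        cum = np.cumsum(weights)
        cum[-1] = 1.0
        positions = (rng.random() + np.arange(n_particles)) / n_particles
        particles = particles[np.searchsorted(cum, positions)]
    return np.asarray(estimates)
```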
SLIMMER: SLIce MRI motion estimation and reconstruction tool for studies of fetal anatomy
Kio Kim, Piotr A. Habas, Vidya Rajagopalan, et al.
We describe a free software tool which combines a set of algorithms that provide a framework for building 3D volumetric images of regions of moving anatomy using multiple fast multi-slice MRI studies. It is specifically motivated by the clinical application of unsedated fetal brain imaging, which has emerged as an important area for image analysis. The tool reads multiple DICOM image stacks acquired in any angulation into a consistent patient coordinate frame and allows the user to select regions to be locally motion corrected. It combines algorithms for slice motion estimation, bias field inconsistency correction and 3D volume reconstruction from multiple scattered slice stacks. The tool is built onto the RView (http://rview.colin-studholme.net) medical image display software and allows the user to inspect slice stacks, and apply both stack and slice level motion estimation that incorporates temporal constraints based on slice timing and interleave information read from the DICOM data. Following motion estimation an algorithm for bias field inconsistency correction provides the user with the ability to remove artifacts arising from the motion of the local anatomy relative to the imaging coils. Full 3D visualization of the slice stacks and individual slice orientations is provided to assist in evaluating the quality of the motion correction and final image reconstruction. The tool has been evaluated on a range of clinical data acquired on GE, Siemens and Philips MRI scanners.
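The bookkeeping needed to bring arbitrarily angulated slice stacks into one patient coordinate frame can be illustrated with a short pydicom sketch. This is a minimal example using standard DICOM attributes (ImagePositionPatient, ImageOrientationPatient, SeriesInstanceUID); the slice-timing, interleave handling, motion estimation and reconstruction of the tool itself are not shown, and the directory layout is an assumption.

```python
import numpy as np
import pydicom
from pathlib import Path
from collections import defaultdict

def load_slice_stacks(dicom_dir):
    """Group DICOM slices into stacks by series and sort each stack along its
    slice normal in the patient coordinate frame."""
    stacks = defaultdict(list)
    for path in Path(dicom_dir).glob("*.dcm"):
        ds = pydicom.dcmread(str(path))
        stacks[ds.SeriesInstanceUID].append(ds)

    sorted_stacks = {}
    for uid, slices in stacks.items():
        # Slice normal from the row/column direction cosines.
        orient = np.array(slices[0].ImageOrientationPatient, dtype=float)
        normal = np.cross(orient[:3], orient[3:])
        # Sort slices by their position projected onto the normal.
        slices.sort(key=lambda s: float(
            np.dot(normal, np.array(s.ImagePositionPatient, dtype=float))))
        sorted_stacks[uid] = slices
    return sorted_stacks
```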
Development of an automated processing method to detect still timing of cardiac motion for coronary magnetic resonance angiography
Hiroya Asou, Katsuhiro Ichikawa, Naoyuki Imada, et al.
Whole-heart coronary magnetic resonance angiography (WH-MRA) is a useful noninvasive examination. Its signal acquisition is performed during a very short still period in each cardiac cycle, and therefore adequate selection of this still timing is important to obtain better image quality. However, since the currently available selection method is a manual one based on visual comparison of cine MRI images at different phases, the selected timings are often incorrect and their reproducibility is insufficient. We developed an automated selection method to detect the best still timing for WH-MRA and compared it with the conventional manual one. Cine MRI images were used for the analysis. In order to extract the fast-moving cardiac structures, the time course along the phase direction at each pixel position in the cine images was processed by high-pass filtering using the Fourier transform. After this process, the cine frames at phases with little motion became dark, and the optimal timing could be determined by thresholding. We took ten volunteers' WH-MRA with the manually and automatically selected timings, and visually assessed the image quality of each image on a 5-point scale (1=excellent, 2=very good, 3=good, 4=fair, 5=poor). The mean scores of the manual and automatic methods for the right coronary arteries (RCA), left anterior descending arteries (LAD), and left circumflex arteries (LCX) were 4.2±0.38, 4.1±0.44, 3.9±0.52 and 4.1±0.42, 4.1±0.24, 3.2±0.35, respectively. The scores were improved by our method for the RCA and LCX, and the improvement for the LCX was statistically significant (p<0.05). These results indicate that our automated method can determine the optimal cardiac phase at least as accurately as the conventional manual method.
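The described temporal high-pass idea can be sketched in a few lines of NumPy. This is a hedged illustration, not the authors' implementation: it suppresses the lowest temporal frequencies of every pixel time course over the cardiac phases and then selects the phase with the least residual (motion) energy; the cutoff is an assumed parameter.

```python
import numpy as np

def still_phase_index(cine, cutoff=3):
    """cine: array of shape (n_phases, ny, nx). Returns the index of the
    phase with the lowest high-frequency (motion) energy."""
    spectrum = np.fft.rfft(cine, axis=0)
    spectrum[:cutoff] = 0.0                       # remove slow/static components
    highpassed = np.fft.irfft(spectrum, n=cine.shape[0], axis=0)
    motion_energy = np.sum(np.abs(highpassed), axis=(1, 2))
    return int(np.argmin(motion_energy))          # stillest cardiac phase
```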
Posters: DTI and Function
Shape anisotropy: tensor distance to anisotropy measure
Fractional anisotropy, defined as the distance of a diffusion tensor from its closest isotropic tensor, has been extensively studied as a quantitative anisotropy measure for diffusion tensor magnetic resonance images (DT-MRI). It has been used to reveal the white matter profile of brain images, as a guiding feature for seeding and stopping in fiber tractography, and for the diagnosis and assessment of degenerative brain diseases. Despite its extensive use in the DT-MRI community, however, not much attention has been given to the mathematical correctness of its derivation from diffusion tensors, which relies on the Euclidean dot product in 9D space. Yet recent progress in DT-MRI has shown that the space of diffusion tensors does not form a Euclidean vector space and thus the Euclidean dot product is not appropriate for tensors. In this paper, we propose a novel and robust rotationally invariant diffusion anisotropy measure derived using the recently proposed Log-Euclidean and J-divergence tensor distance measures. An interesting finding of our work is that, given a diffusion tensor, its closest isotropic tensor differs depending on the tensor distance metric used. We demonstrate qualitatively that our new anisotropy measure reveals a superior white matter profile of DT-MR brain images and show analytically that it has a higher signal-to-noise ratio than fractional anisotropy.
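One plausible instantiation of this idea, for the Log-Euclidean case, fits in a few lines: under the Log-Euclidean metric the closest isotropic tensor has a log-eigenvalue equal to the mean log-eigenvalue, so the distance reduces to the spread of the log-eigenvalues. This sketch is illustrative and not necessarily the exact measure derived in the paper.

```python
import numpy as np

def log_euclidean_anisotropy(tensor):
    """Log-Euclidean distance from a 3x3 diffusion tensor to its closest
    isotropic tensor, i.e. the Frobenius norm of the deviation of the
    log-eigenvalues from their mean."""
    eigvals = np.linalg.eigvalsh(tensor)
    eigvals = np.clip(eigvals, 1e-12, None)     # guard against non-positive values
    log_eigs = np.log(eigvals)
    return float(np.sqrt(np.sum((log_eigs - log_eigs.mean()) ** 2)))

# Example: a prolate tensor typical of a single-fiber voxel (units of mm^2/s).
D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
print(log_euclidean_anisotropy(D))
```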
Scalable brain network construction on white matter fibers
Moo K. Chung, Nagesh Adluru, Kim M. Dalton, et al.
DTI offers a unique opportunity to characterize the structural connectivity of the human brain non-invasively by tracing white matter fiber tracts. Whole-brain tractography studies routinely generate up to half a million tracts per brain, which serve as edges in an extremely large 3D graph with up to half a million edges. Currently there is no agreed-upon method for constructing brain structural network graphs out of such a large number of white matter tracts. In this paper, we present a scalable iterative framework called the ε-neighbor method for building a network graph and apply it to testing abnormal connectivity in autism.
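As a loudly hedged sketch of how tract endpoints can be turned into a graph sequentially, the snippet below merges each endpoint with an existing node if it lies within ε of one and otherwise creates a new node, adding one edge per tract. The paper's ε-neighbor method is iterative and may differ in its details; this is only meant to convey the flavor of scalable, one-pass graph construction.

```python
import numpy as np

def build_network(tracts, eps=5.0):
    """tracts: iterable of (n_points, 3) arrays of fiber coordinates (mm).
    Returns node positions and an undirected edge set."""
    nodes, edges = [], set()
    for tract in tracts:
        endpoint_ids = []
        for endpoint in (tract[0], tract[-1]):
            if nodes:
                dists = np.linalg.norm(np.asarray(nodes) - endpoint, axis=1)
                nearest = int(np.argmin(dists))
            if nodes and dists[nearest] <= eps:
                endpoint_ids.append(nearest)       # merge with an existing node
            else:
                nodes.append(endpoint)             # create a new node
                endpoint_ids.append(len(nodes) - 1)
        edges.add(tuple(sorted(endpoint_ids)))     # one edge per tract
    return np.asarray(nodes), edges
```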
Comparison between fourth and second order DT-MR image segmentations
A second order tensor is usually used to describe the diffusion of water for each voxel within Diffusion Tensor Magnetic Resonance (DT-MR) images. However, a second order tensor approximation fails to accurately represent complex local tissue structures such as crossing fibers. Therefore, higher order tensors are used to represent more complex diffusivity profiles. In this work we examine and compare segmentations of both second order and fourth order DT-MR images using the Random Walker segmentation algorithm, with emphasis on pointing out the shortcomings of the second order tensor model in segmenting regions with complex fiber structures. We first adapt the Random Walker algorithm for segmenting diffusion tensor data by using appropriate tensor distance metrics and then demonstrate the advantages of performing segmentation on higher order DT-MR data. The proposed approach takes advantage of all the information provided by the tensors through suitable tensor distance metrics. The distance metrics used are the Log-Euclidean for the second order tensors and the normalized L2 distance for the fourth order tensors. The segmentation is carried out on a weighted graph that represents the image, where the tensors are the nodes and the edge weights are computed using the tensor distance metrics. Applying the approach to both synthetic and real DT-MRI data yields segmentations that are both robust and qualitatively accurate.
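The edge-weight construction can be illustrated directly. Below is a hedged sketch: the Log-Euclidean distance for second order tensors is standard, whereas the normalization shown for the fourth order coefficient vectors is only an assumed form and may differ from the paper's definition; the Gaussian mapping from distance to weight is the usual random-walker choice.

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_distance(t1, t2):
    """Log-Euclidean distance between two second-order (3x3 SPD) tensors."""
    return np.linalg.norm(logm(t1) - logm(t2), ord="fro")

def normalized_l2_distance(c1, c2):
    """Assumed normalized L2 distance between fourth-order tensors given as
    coefficient vectors (e.g. 15 independent coefficients)."""
    c1, c2 = np.ravel(c1), np.ravel(c2)
    return np.linalg.norm(c1 - c2) / np.sqrt(c1.size)

def edge_weight(distance, beta=10.0):
    """Map a tensor distance to a graph edge weight: similar neighbors get
    weights near 1, dissimilar neighbors weights near 0."""
    return np.exp(-beta * distance ** 2)
```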
Second order DTMR image segmentation using random walker
Image segmentation is a method of separating an image into regions of interest, such as separating an object from the background. The random walker image segmentation technique has been applied extensively to scalar images and has demonstrated robust results. In this paper we propose a novel method to apply the random walker method to segmenting non-scalar diffusion tensor magnetic resonance imaging (DT-MRI) data. Moreover, we use a non-parametric probability density model to provide estimates of the regional distributions, enabling the random walker method to successfully segment disconnected objects. Our approach utilizes all the information provided by the tensors by using suitable tensor dissimilarity metrics. The method uses hard constraints for the segmentation, provided interactively by the user, such that certain tensors are labeled as object or background. Then, a graph structure is created with the tensors representing the nodes and edge weights computed using these dissimilarity metrics. The distance metrics used are the Log-Euclidean and the J-divergence. The results of the segmentations using these two different dissimilarity metrics are compared and evaluated. Applying the approach to both synthetic and real DT-MRI data yields segmentations that are both robust and qualitatively accurate.
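Given such a weighted graph with seeded nodes, the random walker reduces to a sparse linear solve (the combinatorial Dirichlet problem). The following is a minimal generic sketch of that solve on an arbitrary weighted graph, not the authors' implementation and without the non-parametric density prior mentioned above.

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import spsolve

def random_walker_probabilities(weights, seeds):
    """weights: dict {(i, j): w_ij} of undirected edge weights (e.g. derived
    from tensor distances); seeds: dict {node: label in {0, 1}}. Returns, for
    every node, the probability of first reaching an object (label 1) seed."""
    n = 1 + max(max(i, j) for i, j in weights)
    L = lil_matrix((n, n))
    for (i, j), w in weights.items():           # graph Laplacian L = D - W
        L[i, j] -= w
        L[j, i] -= w
        L[i, i] += w
        L[j, j] += w
    L = csr_matrix(L)

    seeded = np.array(sorted(seeds))
    free = np.setdiff1d(np.arange(n), seeded)
    x_s = np.array([seeds[i] for i in seeded], dtype=float)

    # Combinatorial Dirichlet problem: L_uu x_u = -B x_s.
    L_uu = L[free][:, free]
    B = L[free][:, seeded]
    x_u = spsolve(L_uu, -(B @ x_s))

    prob = np.empty(n)
    prob[seeded] = x_s
    prob[free] = x_u
    return prob                                  # threshold at 0.5 to label nodes
```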
Effect of regularization parameter and scan time on crossing fibers with constrained compressed sensing
Fatma Elzahraa A. ElShahaby, Bennett A. Landman, Jerry L. Prince
Diffusion tensor imaging (DTI) is an MR imaging technique that uses a set of diffusion weighted measurements in order to determine the water diffusion tensor at each voxel. In DTI, a single dominant fiber orientation is calculated at each measured voxel, even if multiple populations of fibers are present within this voxel. A new approach called Crossing Fiber Angular Resolution of Intra-voxel structure (CFARI) for processing diffusion weighted magnetic resonance data has recently been introduced. Based on compressed sensing, CFARI is able to resolve intra-voxel structure from a limited number of measurements, but its performance as a function of the scan and algorithm parameters is poorly understood at present. This paper describes simulation experiments to help understand CFARI performance tradeoffs as a function of the data signal-to-noise ratio and the algorithm regularization parameter. In the compressed sensing criterion, the choice of the regularization parameter β is critical. If β is too small, then the solution is the conventional least squares solution, while if β is too large then the solution is identically zero. The correct selection of β turns out to be data dependent, which means that it is also spatially varying. In this paper, simulations using two random tensors with different diffusivities having the same fractional anisotropy but different principal eigenvalues are carried out. Results reveal that for a fixed scan time, acquisition of repeated measurements can improve CFARI performance and that a spatially variable, data adaptive regularization parameter is beneficial in stabilizing results.
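A generic compressed-sensing criterion of this type (a least-squares data term plus a β-weighted L1 penalty over a dictionary of candidate fiber responses) can be written with an off-the-shelf Lasso solver. This is only a sketch in the spirit of the described criterion, not the CFARI implementation, and the dictionary construction is left to the caller.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_fiber_fit(signal, dictionary, beta=0.05):
    """Represent a voxel's diffusion-weighted signal as a sparse, non-negative
    mixture over a dictionary of single-fiber response functions.

    signal: (n_gradients,) measurements; dictionary: (n_gradients, n_atoms)
    columns of candidate fiber-orientation responses; beta: L1 regularization
    weight (data dependent, as discussed above)."""
    model = Lasso(alpha=beta, positive=True, fit_intercept=False, max_iter=5000)
    model.fit(dictionary, signal)
    weights = model.coef_                      # sparse mixture weights
    return weights / max(weights.sum(), 1e-12)
```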
A new metric to measure shape differences in fMRI activity
Siddharth Khullar, Andrew M. Michael, Nicolle Correa, et al.
We present a novel shape metric for quantification of shape differences between the spatial components obtained from independent component analysis (ICA) of group functional magnetic resonance imaging (fMRI) data. This metric is utilized to measure the difference in shapes of the activation regions obtained from different subjects within a group (healthy controls or patients). The parameters comprising the metric are computed for each pixel on the outermost contour (edge) of an activation region for each slice. These parameters are in the form of (r, θ) pairs that may be interpreted as the length and orientation of a vector originating at the centroid of the activation region and ending at a pixel on the boundary contour. Using this information we extract three features that quantify the shape difference between the two shapes under observation. The reference and observation shapes may be selected in two ways: (a) activation maps from two different subjects or (b) the mean activation map compared against subject-wise activations, as obtained from group ICA. We present different methods to visualize the shape differences, thus providing a tool to observe the spatial differences within a group or across groups. We also address the special case where two or more activation contours are present in a single slice and present potential solutions for accounting for these regions. Our results show that this metric has utility in creating a better understanding of the variability in brain activity among different groups of subjects performing the same task.
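The (r, θ) parameterization itself is easy to reproduce. The sketch below computes it for a single-slice activation mask; the three derived features and the multi-contour handling described in the paper are not reproduced here.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def boundary_signature(mask):
    """For every pixel on the outermost contour of a binary activation region,
    return the length r and orientation theta of the vector from the region
    centroid to that pixel, sorted by orientation."""
    mask = mask.astype(bool)
    boundary = mask & ~binary_erosion(mask)          # outermost contour pixels
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                    # centroid of the region
    by, bx = np.nonzero(boundary)
    r = np.hypot(by - cy, bx - cx)
    theta = np.arctan2(by - cy, bx - cx)
    order = np.argsort(theta)
    return r[order], theta[order]
```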
Fast computation of functional networks from fMRI activity: a multi-platform comparison
A. Ravishankar Rao, Rajesh Bordawekar, Guillermo Cecchi
The recent deployment of functional networks to analyze fMRI images has been very promising. In this method, the spatio-temporal fMRI data is converted to a graph-based representation, where the nodes are voxels and edges indicate the relationship between the nodes, such as the strength of correlation or causality. Graph-theoretic measures can then be used to compare different fMRI scans. However, there is a significant computational bottleneck, as the computation of functional networks with directed links takes several hours on conventional machines with single CPUs. The study in this paper shows that a GPU can be advantageously used to accelerate the computation, so that the network computation takes only a few minutes. Though GPUs have been used for the purposes of displaying fMRI images, their use in computing functional networks is novel. We describe specific techniques, such as load balancing and the use of a large number of threads, to achieve the desired speedup. Our experience in utilizing the GPU for functional network computations should prove useful to the scientific community investigating fMRI, as GPUs are a low-cost platform for addressing this computational bottleneck.
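To make the structure of the problem concrete, here is a CPU reference for the simpler, undirected case: correlate every pair of voxel time courses and threshold to obtain an adjacency matrix. The paper's contribution concerns directed (causality-based) links and their GPU mapping, neither of which is shown in this NumPy sketch.

```python
import numpy as np

def correlation_network(timecourses, threshold=0.5):
    """timecourses: (n_voxels, n_timepoints) array. Returns a boolean
    adjacency matrix of the functional network."""
    z = timecourses - timecourses.mean(axis=1, keepdims=True)
    z /= (z.std(axis=1, keepdims=True) + 1e-12)
    corr = (z @ z.T) / timecourses.shape[1]          # Pearson correlation matrix
    adjacency = np.abs(corr) >= threshold
    np.fill_diagonal(adjacency, False)               # no self-loops
    return adjacency
```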
Posters: Enhancement
Detector defect correction of medical images on graphics processors
Richard Membarth, Frank Hannig, Jürgen Teich, et al.
The ever-increasing complexity and power dissipation of computer architectures in the last decade blazed the trail for more power-efficient parallel architectures. Hence, architectures such as field-programmable gate arrays (FPGAs) and in particular graphics cards have attracted great interest and are consequently adopted for parallel execution of many number-crunching loop programs from fields like image processing or linear algebra. However, little effort has been made to deploy applications that are memory-intensive rather than compute-bound to graphics hardware. This paper considers a memory-intensive detector defect correction pipeline for medical imaging with strict latency requirements. The image pipeline compensates for different effects caused by the detector during exposure of X-ray images and calculates parameters to control the subsequent dosage. So far, dedicated hardware setups with special processors like DSPs have been used for such critical processing. We show that this is today feasible with commodity graphics hardware. Using CUDA as the programming model, it is demonstrated that the detector defect correction pipeline consisting of more than ten algorithms is significantly accelerated and that a speedup of 20x can be achieved on NVIDIA's Quadro FX 5800 compared to our reference implementation. For deployment in a streaming application with steadily new incoming data, it is shown that the memory transfer overhead of successive images to the graphics card memory is reduced by 83% using double buffering.
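As an illustration of the kind of per-pixel, memory-bound stage such a pipeline contains, the snippet below replaces known defective detector pixels by the median of their valid neighbors. This is a generic NumPy sketch of one plausible correction stage, not the authors' GPU pipeline, and the defect map is assumed to come from a detector calibration.

```python
import numpy as np

def correct_defect_pixels(image, defect_map):
    """image: raw X-ray frame; defect_map: boolean mask of defective pixels.
    Each defect is replaced by the median of its 3x3 neighborhood, with other
    defective pixels excluded from the median."""
    work = image.astype(float)
    work[defect_map] = np.nan                         # exclude defects from the median
    padded = np.pad(work, 1, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (3, 3))
    neighborhood_median = np.nanmedian(windows, axis=(2, 3))
    corrected = image.astype(float)
    corrected[defect_map] = neighborhood_median[defect_map]
    return corrected
```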
Modeling of the rhodopsin bleaching with variational analysis of retinal images
J. Dobrosotskaya, M. Ehler, E. J. King, et al.
This paper discusses a variational method for processing scanning laser ophthalmoscope (cSLO) image sequences in the context of extracting the local rhodopsin density and modeling the bleaching kinetics. This work supports the characterization and detection of early pathological changes in clinical retinal data. Our goals include providing automated tools for tracing early pathological changes over time, in particular rhodopsin density variations and local lesion progression. Our computational approach is a variational technique that approximates measured cSLO image sets optimally within the range of the bleaching model. The characterizing parameters of the approximating curves are computed locally, and their spatial changes reflect variations in bleaching kinetics and hence changes in the local rhodopsin density. The curve fitting in the temporal direction of the image stack can also be viewed as a denoising/enhancement routine. The advantages of the temporal correction include a better fit of the image intensity function to the model and the avoidance of local averaging that would impair the spatial resolution.
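The per-pixel temporal curve fitting can be illustrated with a short sketch. The saturating-exponential form used here is an assumption of the example (the paper's variational bleaching model may be parameterized differently), and the fitting is done independently per pixel rather than variationally.

```python
import numpy as np
from scipy.optimize import curve_fit

def bleaching_curve(t, i_max, rate, offset):
    """Assumed saturating-exponential bleaching curve."""
    return i_max * (1.0 - np.exp(-rate * t)) + offset

def fit_pixel_timecourse(times, intensities):
    """Fit the curve to one pixel's cSLO intensity time course and return its
    characterizing parameters; mapping these parameters over the retina
    reflects local variations in bleaching kinetics."""
    p0 = (intensities.max() - intensities.min(), 0.1, intensities.min())
    params, _ = curve_fit(bleaching_curve, times, intensities, p0=p0, maxfev=5000)
    return params        # (i_max, rate, offset)
```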
Reconstruction of high-resolution fluorescence microscopy images based on axial tomography
Steffen Remmele, Bianca Oehm, Florian Staier, et al.
For a reliable understanding of cellular processes, high resolution 3D images of the investigated cells are necessary. Unfortunately, the ability of fluorescence microscopes to image a cell in 3D is limited, since the resolution along the optical axis is a factor of two to three worse than the transverse resolution. Standard microscopy deblurring algorithms, such as the Total Variation regularized Richardson-Lucy algorithm, are able to improve the resolution, but the problem of lower resolution along the optical axis remains. However, it is possible to overcome this problem using axial tomography, which provides tilted views of the object by rotating it under the microscope. The rotated images contain additional information about the object, and an advanced method to reconstruct a 3D image with isotropic resolution is presented here. First, bleaching has to be corrected in order to allow valid registration of translational and rotational shifts; a multi-resolution rigid registration method is used for this purpose. A single high-resolution image can then be reconstructed on the basis of all aligned images using an extended Richardson-Lucy method. In addition, a Total Variation regularization is applied in order to guarantee a stable reconstruction result. The results for both simulated and real data show a considerable improvement of the resolution in the direction of the optical axis.
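A simple way to extend Richardson-Lucy to multiple registered views is to average the multiplicative corrections contributed by each view. The sketch below shows only that core update; the TV regularization, bleaching correction and registration of the paper are omitted, and the exact combination rule used by the authors may differ.

```python
import numpy as np
from scipy.signal import fftconvolve

def multiview_richardson_lucy(views, psfs, n_iter=50):
    """views: list of aligned 3-D stacks; psfs: list of matching PSFs."""
    estimate = np.full_like(views[0], views[0].mean(), dtype=float)
    psfs_mirrored = [psf[::-1, ::-1, ::-1] for psf in psfs]
    for _ in range(n_iter):
        correction = np.zeros_like(estimate)
        for view, psf, psf_m in zip(views, psfs, psfs_mirrored):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = view / np.maximum(blurred, 1e-12)
            correction += fftconvolve(ratio, psf_m, mode="same")
        estimate *= correction / len(views)      # averaged multiplicative update
    return estimate
```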
Improved 3D wavelet-based de-noising of fMRI data
Siddharth Khullar, Andrew M. Michael, Nicolle Correa, et al.
Functional MRI (fMRI) data analysis deals with the problem of detecting very weak signals in very noisy data. Smoothing with a Gaussian kernel is often used to decrease noise at the cost of losing spatial specificity. We present a novel wavelet-based 3-D technique to remove noise in fMRI data while preserving the spatial features in the component maps obtained through group independent component analysis (ICA). Each volume is decomposed into eight volumetric sub-bands using a separable 3-D stationary wavelet transform. Each of the detail sub-bands is then treated by the main denoising module. This module facilitates computation of shrinkage factors through a hierarchical framework. It iteratively utilizes information from the sub-band at the next higher level to estimate denoised coefficients at the current level. These denoised sub-bands are then reconstructed back to the spatial domain using an inverse wavelet transform. Finally, the denoised group fMRI data is analyzed using ICA, where the data is decomposed into clusters of functionally correlated voxels (spatial maps) as indicators of task-related neural activity. The proposed method enables the preservation of the shape of the actual activation regions associated with the BOLD activity. In addition, it achieves higher specificity compared to the FWHM (full width at half maximum) Gaussian kernels conventionally used for smoothing fMRI data.
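The decomposition/thresholding/reconstruction skeleton can be shown with PyWavelets. This simplified sketch uses a single global soft threshold in place of the hierarchical, inter-level shrinkage described above, and keeps the approximation sub-band untouched.

```python
import numpy as np
import pywt

def swt3_denoise(volume, wavelet="db2", level=2, threshold=1.0):
    """3-D stationary-wavelet denoising sketch. Volume dimensions must be
    divisible by 2**level for the stationary transform."""
    coeffs = pywt.swtn(volume, wavelet, level=level)
    for level_coeffs in coeffs:
        for key, band in level_coeffs.items():
            if key != "a" * volume.ndim:              # keep the approximation band
                level_coeffs[key] = pywt.threshold(band, threshold, mode="soft")
    return pywt.iswtn(coeffs, wavelet)
```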
Phase-unwrapping of differential phase-contrast data using attenuation information
Wilhelm Haas, M. Bech, P. Bartl, et al.
Phase-contrast imaging approaches suffer from a severe problem which is known in Magnetic Resonance Imaging (MRI) and Synthetic Aperture Radar (SAR) as phase-wrapping. This work focuses on an unwrapping solution for the grating-based phase-contrast interferometer with X-rays. The interferometer delivers three types of information about the X-rayed object: the absorption, differential phase-contrast, and dark-field signals. The observed differential phase values are physically limited to the interval (-π, π]; values higher or lower than the interval borders are mapped (wrapped) back into it. In contrast to existing phase-unwrapping algorithms for MRI and SAR, the presented algorithm uses the absorption image as additional information to identify and correct phase-wrapped values. The idea of the unwrapping algorithm is based on the observation that, at locations with phase-wrapped values, the contrast in the absorption image is high and its gradient behaves similarly to the real (unwrapped) phase values. This can be expressed as a cost function which has to be minimized by an integer optimizer. Application to simulated and real datasets showed that 95.6% of the phase wraps were correctly unwrapped. Based on these results, we conclude that it is possible to use the absorption information in order to identify and correct phase-wrapped values.
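A heavily hedged 1-D illustration of the underlying idea follows; it is not the paper's integer optimizer. It assumes, purely for illustration, that the differential phase is roughly proportional to the absorption gradient, and it chooses for each pixel the integer multiple of 2π whose corrected value best matches that prediction.

```python
import numpy as np

def unwrap_with_attenuation(dpc, absorption, scale=1.0):
    """dpc: one detector row of wrapped differential phase values in (-pi, pi];
    absorption: the matching row of the absorption image; scale: assumed
    proportionality between the absorption gradient and the expected phase."""
    predicted = scale * np.gradient(absorption)       # expected differential phase
    candidates = np.array([-2.0 * np.pi, 0.0, 2.0 * np.pi])
    corrected = dpc.astype(float).copy()
    for i in range(len(dpc)):
        # Pick the wrap correction whose result is closest to the prediction.
        costs = np.abs((dpc[i] + candidates) - predicted[i])
        corrected[i] = dpc[i] + candidates[np.argmin(costs)]
    return corrected
```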
Iterative wavelet thresholding for rapid MRI reconstruction
Mohammad H. Kayvanrad, Charles A. McKenzie, Terry M. Peters
Following developments in the field of compressed sampling and sparse recovery, one can take advantage of the sparsity of an object, as additional a priori knowledge, to reconstruct it from fewer samples than are needed by traditional sampling strategies. Since most magnetic resonance (MR) images are sparse in some domain, in this work we consider the problem of MR reconstruction and how one could apply this idea to accelerate the process of MR image/map acquisition. In particular, based on the Papoulis-Gerchberg algorithm, an iterative thresholding algorithm for reconstruction of MR images from limited k-space observations is proposed. The proposed method takes advantage of the sparsity of most MR images in the wavelet domain. Initializing with a minimum-energy reconstruction, the object of interest is reconstructed by going through a sequence of thresholding and recovery iterations. Furthermore, MR studies often involve acquisition of multiple images in time that are highly correlated. This correlation can be used as additional knowledge about the object, besides sparsity, to further reduce the reconstruction time. The performance of the proposed algorithms is experimentally evaluated and compared to other state-of-the-art methods. In particular, we show that the quality of reconstruction is increased compared to total variation (TV) regularization and the conventional Papoulis-Gerchberg algorithm, both in the absence and in the presence of noise. Also, phantom experiments show good accuracy in the reconstruction of relaxation maps from a set of highly undersampled k-space observations.
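The alternation between wavelet-domain thresholding and k-space data consistency can be sketched as follows. This is a simplified, approximately-real-image illustration of a Papoulis-Gerchberg-style iteration with a fixed threshold schedule, not the proposed algorithm itself, and it does not use the temporal correlation extension mentioned above.

```python
import numpy as np
import pywt

def iterative_wavelet_reconstruction(kspace, mask, wavelet="db4",
                                     threshold=0.02, n_iter=50):
    """kspace: zero-filled k-space data; mask: boolean sampling mask."""
    image = np.fft.ifft2(kspace)                      # minimum-energy initialization
    for _ in range(n_iter):
        # Sparsity step: soft-threshold the wavelet coefficients of the estimate.
        coeffs = pywt.wavedecn(np.abs(image), wavelet, level=3)
        arr, slices = pywt.coeffs_to_array(coeffs)
        arr = pywt.threshold(arr, threshold * np.abs(arr).max(), mode="soft")
        image = pywt.waverecn(
            pywt.array_to_coeffs(arr, slices, output_format="wavedecn"), wavelet)
        image = image[:kspace.shape[0], :kspace.shape[1]]   # crop to original size
        # Data-consistency step: re-insert the acquired k-space samples.
        k = np.fft.fft2(image)
        k[mask] = kspace[mask]
        image = np.fft.ifft2(k)
    return np.abs(image)
```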
Quantitative and qualitative image quality analysis of super resolution images from a low cost scanning laser ophthalmoscope
The lurking epidemic of eye diseases caused by diabetes and aging will put more than 130 million Americans at risk of blindness by 2020. Screening has been touted as a means to prevent blindness by identifying those individuals at risk. However, the cost of most of today's commercial retinal imaging devices makes their use economically impractical for mass screening. Thus, low-cost devices are needed. With such devices, low cost often comes at the expense of image quality, with high levels of noise and distortion hindering clinical evaluation of the retina. A software-based super resolution (SR) reconstruction methodology that produces images with improved resolution and quality from multiple low resolution (LR) observations is introduced. The LR images are taken with a low-cost Scanning Laser Ophthalmoscope (SLO). The non-redundant information of these LR images is combined to produce a single image in an implementation that also removes noise and imaging distortions while preserving fine blood vessels and small lesions. The feasibility of using the resulting SR images for screening of eye diseases was tested using quantitative and qualitative assessments. Qualitatively, expert image readers evaluated their ability to detect clinically significant features on the SR images and compared their findings with those obtained from matching images of the same eyes taken with commercially available high-end cameras. Quantitatively, measures of image quality were calculated from SR images and compared to subject-matched images from a commercial fundus imager. Our results show that the SR images have sufficient quality and spatial detail for screening purposes.
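To convey how multiple LR observations can be combined, here is a basic shift-and-add sketch, not the paper's reconstruction: each LR frame is registered to the first by phase correlation, upsampled, un-shifted, and averaged. Deblurring, distortion correction and outlier handling used in practice are omitted.

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift
from skimage.registration import phase_cross_correlation
from skimage.transform import rescale

def super_resolve(lr_frames, factor=2):
    """lr_frames: list of co-located low-resolution SLO frames."""
    reference = lr_frames[0]
    accumulator = np.zeros(np.array(reference.shape) * factor)
    for frame in lr_frames:
        # Sub-pixel shift that registers this frame to the reference.
        offset, _, _ = phase_cross_correlation(reference, frame, upsample_factor=10)
        upsampled = rescale(frame, factor, order=3, anti_aliasing=False)
        accumulator += subpixel_shift(upsampled, offset * factor, order=3)
    return accumulator / len(lr_frames)
```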
A maximum likelihood estimation method for denoising magnitude MRI using restricted local neighborhood
Jeny Rajan, Marleen Verhoye, Jan Sijbers
In this paper, we propose a method to denoise magnitude Magnetic Resonance (MR) images based on maximum likelihood (ML) estimation over a restricted local neighborhood. Conventionally, methods that estimate the true, underlying signal from a local neighborhood assume this signal to be constant within that neighborhood. However, this assumption is not always valid and, as a result, the edges in the image will be blurred and fine structures will be destroyed. As a solution to this problem, we put forward the concept of using a restricted local neighborhood where the true intensity for each noisy pixel is estimated from a set of selected neighboring pixels. To this end, a reference image is created from the noisy image using a recently proposed non-local means algorithm. This reference image is used as a prior for further noise reduction. A scheme is developed to locally select an appropriate subset of pixels from which the underlying signal is estimated. Experimental results based on the Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Index Measure (SSIM) and the Bhattacharyya coefficient on synthetic and real MRI demonstrate the superior performance of the proposed method over other state-of-the-art methods.
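The ML step for Rician-distributed magnitude data can be sketched directly. The snippet below maximizes the Rician log-likelihood of the selected neighborhood values over the unknown signal amplitude, assuming a known noise level; the neighborhood selection scheme of the paper is not reproduced, and any subset of pixel values can be passed in.

```python
import numpy as np
from scipy.special import i0e
from scipy.optimize import minimize_scalar

def rician_ml_estimate(neighborhood, sigma):
    """ML estimate of the underlying signal amplitude from noisy magnitude
    values, assuming Rician noise with known standard deviation sigma."""
    m = np.asarray(neighborhood, dtype=float)

    def neg_log_likelihood(a):
        # log I0 via the exponentially scaled Bessel function for numerical stability.
        log_i0 = np.log(i0e(m * a / sigma ** 2)) + m * a / sigma ** 2
        return -np.sum(log_i0 - (m ** 2 + a ** 2) / (2.0 * sigma ** 2))

    result = minimize_scalar(neg_log_likelihood,
                             bounds=(0.0, m.max() + 5 * sigma), method="bounded")
    return result.x
```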
SinoCor: a clinical tool for sinogram-level patient motion correction in SPECT
Debasis Mitra, Daniel Eiland, Thomas Walsh, et al.
We present a simple method for correcting patient motion in SPECT. The targeted type of motion is a momentary shift in the patient's body position due to coughing, sneezing or a need to shift weight during a long scan. When detected by the radiologist, this motion sometimes causes the scan data to be discarded and the scan to be repeated, imposing extra costs and unnecessary health risks on the patients. We propose a partial solution to this problem in the form of SinoCor, a graphical-user-interface-based software tool integrated with the sinogram viewing software, which allows instant correction of the simplest types of motion. When used during the initial check of the scan data, this tool allows technologists to interactively detect instances of motion and determine the motion parameters by achieving a consistent picture of the sinogram. Two types of motion are corrected: translational motion of the patient and small-angle rotation about in-plane axes. All of the motion corrections are performed at the sinogram level, after which the images may be reconstructed using the hospital's or organization's standard reconstruction software. SinoCor is platform independent, requires no modification of the acquisition protocol or other processing software, and requires minimal personnel training. In this article we describe the principal architecture of the SinoCor software and illustrate its performance using both phantom and patient scan data.
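For the simplest case, a momentary translational shift, the sinogram-level correction amounts to translating every projection acquired after the motion event. The sketch below assumes a parallel-beam geometry, in which a transaxial patient shift (dx, dy) appears in the view at angle θ as a radial displacement dx·cos θ + dy·sin θ, while an axial shift dz moves counts between sinogram rows; all parameter names are illustrative and stand for the values determined interactively by the user.

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def correct_body_shift(sinogram, angles, motion_view, dx, dy, dz, pixel_size=1.0):
    """sinogram: (n_views, n_axial_rows, n_radial_bins); angles: view angles in
    radians; motion_view: first view acquired after the body shift; dx, dy, dz
    in mm."""
    corrected = sinogram.astype(float).copy()
    for view in range(motion_view, sinogram.shape[0]):
        radial = (dx * np.cos(angles[view]) + dy * np.sin(angles[view])) / pixel_size
        corrected[view] = subpixel_shift(corrected[view],
                                         (dz / pixel_size, radial), order=1)
    return corrected
```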
Noise-resistant adaptive scale using stabilized diffusion
Semi-locally adaptive models have appeared in the medical imaging literature in recent years. In particular, generalized scale models (or g-scale for short) have been introduced to effectively overcome the shape, size, or anisotropy constraints imposed by previous local morphometric scale models. The g-scale models have shown interesting theoretical properties and an ability to drive improved image processing, as shown in previous works. In this paper, we present a noise-resistant variant for g-scale set formation, which we refer to as stabilized scale (s-scale) because of its stabilized diffusive properties. This is a modified diffusion process wherein a well-conditioned and stable behavior in the vicinity of boundaries is defined. In addition, s-scale includes an intensity-merging dynamic similar to that found in the switching control of a nonlinear system. Essentially, we introduce into the evolution of the diffusive model a behavior state that drives neighboring voxel intensities toward larger and larger iso-intensity regions. In other words, we drive our diffusion process to a coarser and coarser piecewise-constant approximation of the original scene. This strategy reveals a well-known behavior in control theory, called sliding modes. Evaluations were conducted on a mathematical phantom and on BrainWeb, MR, and CT data sets. The s-scale has shown better performance than the original g-scale under moderate to high noise levels.
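As a point of reference for the kind of boundary-preserving diffusion being modified here, the snippet below implements classical Perona-Malik diffusion, which likewise drives an image toward a piecewise-constant approximation while suppressing flow across strong gradients. The stabilized, sliding-mode s-scale dynamics of the paper are not reproduced.

```python
import numpy as np

def anisotropic_diffusion(image, n_iter=50, kappa=15.0, dt=0.2):
    """Classical 2-D Perona-Malik diffusion with an exponential edge-stopping
    conductance."""
    u = image.astype(float).copy()
    for _ in range(n_iter):
        # Differences toward the four neighbors.
        d_n = np.roll(u, -1, axis=0) - u
        d_s = np.roll(u, 1, axis=0) - u
        d_e = np.roll(u, -1, axis=1) - u
        d_w = np.roll(u, 1, axis=1) - u
        # Conductance is small where the local gradient is large (boundaries).
        flow = sum(np.exp(-(d / kappa) ** 2) * d for d in (d_n, d_s, d_e, d_w))
        u += dt * flow
    return u
```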
Mapping spatio-temporal filtering algorithms used in fluoroscopy to single core and multicore DSP architectures
Low-dose X-ray image sequences, as obtained in fluoroscopy, exhibit high levels of noise that must be suppressed in real time while preserving diagnostic structures. Multi-step adaptive filtering approaches, often involving spatio-temporal filters, are typically used to achieve this goal. In this work, typical fluoroscopic image sequences, corrupted with Poisson noise, were processed using various filtering schemes. The noise suppression of the schemes was evaluated using objective image quality measures. Two adaptive spatio-temporal schemes, the first one using object detection and the second one using unsharp masking, were chosen as representative approaches for different fluoroscopy procedures and mapped onto Texas Instruments' (TI) high-performance digital signal processors (DSPs). The paper explains the fixed-point design of these algorithms and evaluates its impact on overall system performance. The fixed-point versions of these algorithms are mapped onto the C64x+™ core using instruction-level parallelism to effectively use its VLIW architecture. The overall data flow was carefully planned to reduce cache and data movement overhead while working with large medical data sets. Apart from mapping these algorithms onto TI's single-core DSP architecture, this work also distributes the operations to leverage multicore DSP architectures. The data arrangement and flow were optimized to minimize inter-processor messaging and data movement overhead.
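A floating-point reference of a typical motion-adaptive spatio-temporal scheme is sketched below. It is a generic illustration rather than either of the two schemes benchmarked above: new frames are blended with a running temporal average using a strong temporal weight where the scene is static and a weak one where motion is detected, with light spatial smoothing of moving regions. A DSP port of such a filter would convert it to fixed point and block-wise processing.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fluoro_filter(frames, k_static=0.9, k_moving=0.2, motion_threshold=8.0):
    """frames: iterable of fluoroscopic frames (2-D arrays) in acquisition order."""
    average = frames[0].astype(float)
    filtered = [average.copy()]
    for frame in frames[1:]:
        frame = frame.astype(float)
        motion = np.abs(frame - average) > motion_threshold
        k = np.where(motion, k_moving, k_static)     # per-pixel temporal weight
        average = k * average + (1.0 - k) * frame    # recursive temporal filter
        spatial = uniform_filter(frame, size=3)      # cheap spatial smoothing
        output = np.where(motion, spatial, average)  # spatial filter only where moving
        filtered.append(output)
    return filtered
```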