Proceedings Volume 9413

Medical Imaging 2015: Image Processing



Volume Details

Date Published: 13 May 2015
Contents: 14 Sessions, 145 Papers, 0 Presentations
Conference: SPIE Medical Imaging 2015
Volume Number: 9413

Table of Contents


  • Front Matter: Volume 9413
  • Quantitative Image Analysis
  • Keynote and Diffusion MRI Analysis
  • Image Representation and Reconstruction
  • Compressed Sensing/Sparse Methods
  • Machine Learning
  • Shape and Models
  • Computational Anatomy
  • Segmentation: Brain
  • Segmentation
  • Classification
  • Motion/Time Series
  • Registration
  • Poster Session
Front Matter: Volume 9413
Front Matter: Volume 9413
This PDF file contains the front matter associated with SPIE Proceedings Volume 9413, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
Quantitative Image Analysis
Highly accurate volumetry of the spinal cord
Florian Weiler, Marita Daams, Carsten Lukas, et al.
Quantitative analysis of the spinal cord from MR images is of significant clinical interest when studying certain neurologic diseases. Especially for multiple sclerosis, a number of studies have analyzed the relation between spinal cord atrophy and clinically monitored progression of the disease. A commonly analyzed parameter in this field is the mean cross-sectional area of the cord, which can also be expressed as the average volume per cm. In this paper, we present a novel approach for precise measurement of the volume, length, and cross-sectional area of the spinal cord from T1-weighted MR images. It is computationally fast and requires little user interaction. It is based on a semi-automated pre-segmentation of a sub-section of the spinal cord, followed by an automated Gaussian mixture-model fit for volume calculation. Additionally, the centerline of the cord is extracted, which allows for calculation of the mean cross-sectional area of the measured section. We evaluate the accuracy of our method with respect to scan/re-scan reproducibility as well as intra- and inter-rater agreement. We achieved a mean coefficient of variation (CoV) of 0.62% over repeated MR acquisitions, a mean CoV of 0.39% for intra-rater comparison, and a mean CoV of 0.28% for inter-rater comparison by five different observers. These results demonstrate high sensitivity for detecting even small changes in atrophy, as typically observed over the temporal progression of MS.
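The reproducibility figures above are coefficients of variation over repeated measurements of the same quantity. A minimal sketch of that statistic (the area values below are hypothetical, not the study's data):

```python
import numpy as np

def coefficient_of_variation(measurements):
    """CoV in percent: sample standard deviation over the mean, the
    reproducibility statistic reported in the abstract."""
    m = np.asarray(measurements, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()

# hypothetical mean cross-sectional areas (mm^2) from repeated scans
areas = [68.2, 68.6, 68.4]
cov = coefficient_of_variation(areas)  # about 0.29 percent here
```

A CoV well under 1% across repeated acquisitions, as reported, is what permits detecting the small annual atrophy changes seen in MS.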
Constructing a statistical atlas of the radii of the optic nerve and cerebrospinal fluid sheath in young healthy adults
Optic neuritis is a sudden inflammation of the optic nerve (ON), marked by pain on eye movement and visual symptoms such as decreases in visual acuity, color vision, and contrast, and visual field defects. The ON is closely linked with multiple sclerosis (MS), and patients have a 50% chance of developing MS within 15 years. Recent advances in multi-atlas segmentation methods have omitted volumetric assessment. In the past, measuring the size of the ON has been done by hand. We utilize a new method of automatically segmenting the ON to measure the radii of both the ON and the surrounding cerebrospinal fluid (CSF) sheath, and develop a normative distribution over healthy young adults. We examine this distribution for trends and find that ON and CSF sheath radii do not vary between 20-35 years of age or between sexes. We then evaluate how six patients suffering from optic neuropathy compare to this distribution of controls. We find that five of these six patients qualitatively differ from the normative distribution, which suggests this technique could be used in the future to distinguish optic neuritis patients from healthy controls.
Adaptive sampling of CT data for myocardial blood flow estimation from dose-reduced dynamic CT
Dimple Modgil, Michael D. Bindschadler, Adam M. Alessio, et al.
Quantification of myocardial blood flow (MBF) can aid in the diagnosis and treatment of coronary artery disease (CAD). However, there are no widely accepted clinical methods for estimating MBF. Dynamic CT holds the promise of providing a quick and easy method to measure MBF quantitatively, however the need for repeated scans has raised concerns about the potential for high radiation dose. In our previous work, we explored techniques to reduce the patient dose by either uniformly reducing the tube current or by uniformly reducing the number of temporal frames in the dynamic CT sequence. These dose reduction techniques result in very noisy data, which can give rise to large errors in MBF estimation. In this work, we seek to investigate whether nonuniformly varying the tube current or sampling intervals can yield more accurate MBF estimates. Specifically, we try to minimize the dose and obtain the most accurate MBF estimate through addressing the following questions: when in the time attenuation curve (TAC) should the CT data be collected and at what tube current(s). We hypothesize that increasing the sampling rate and/or tube current during the time frames when the myocardial CT number is most sensitive to the flow rate, while reducing them elsewhere, can achieve better estimation accuracy for the same dose. We perform simulations of contrast agent kinetics and CT acquisitions to evaluate the relative MBF estimation performance of several clinically viable adaptive acquisition methods. We found that adaptive temporal and tube current sequences can be performed that impart an effective dose of about 5 mSv and allow for reductions in MBF estimation RMSE on the order of 11% compared to uniform acquisition sequences with comparable or higher radiation doses.
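The idea of concentrating samples where the time attenuation curve (TAC) is most flow-sensitive can be illustrated with a toy kinetic model; the gamma-variate-style curve and the sampling times below are illustrative assumptions, not the authors' simulation:

```python
import numpy as np
from scipy.optimize import curve_fit

def toy_tac(t, flow, delay):
    """Toy time-attenuation curve whose upslope scales with flow; a
    stand-in for a contrast-kinetics model, not the authors' model."""
    s = np.clip(np.asarray(t, dtype=float) - delay, 0.0, None)
    return flow * s * np.exp(-s / 8.0)

# adaptive schedule: dense samples on the flow-sensitive upslope,
# sparse samples on the washout tail
t_adaptive = np.concatenate([np.arange(4.0, 14.0, 1.0),
                             np.arange(16.0, 40.0, 6.0)])
y = toy_tac(t_adaptive, 2.0, 5.0)   # noiseless simulated measurements
popt, _ = curve_fit(toy_tac, t_adaptive, y, p0=[1.0, 3.0])
```

With noisy data, per-sample noise levels tied to the (possibly varying) tube current would enter through the `sigma` argument of `curve_fit`, weighting the flow estimate toward the low-noise frames.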
Segmentation of vascular structures and hematopoietic cells in 3D microscopy images and quantitative analysis
Jian Mu, Lin Yang, Malgorzata M. Kamocka, et al.
In this paper, we present image processing methods for quantitative study of how the bone marrow microenvironment (characterized by vascular structure and hematopoietic cell distribution) is changed by diseases or other factors. We develop algorithms that automatically segment vascular structures and hematopoietic cells in 3-D microscopy images, perform quantitative analysis of the properties of the segmented vascular structures and cells, and examine how such properties change. In processing images, we apply local thresholding to segment vessels, and add post-processing steps to deal with imaging artifacts. We propose an improved watershed algorithm that relies on both intensity and shape information and can separate multiple overlapping cells better than common watershed methods. We then quantitatively compute various features of the vascular structures and hematopoietic cells, such as the branches and sizes of vessels and the distribution of cells. In analyzing vascular properties, we provide algorithms for pruning spurious vessel segments and branches based on vessel skeletons. Our algorithms can segment vascular structures and hematopoietic cells with good quality. We use our methods to quantitatively examine the changes in the bone marrow microenvironment caused by deletion of the Notch pathway, and our quantitative analysis reveals property changes in samples with the deleted pathway. Our tool is useful for biologists seeking to quantitatively measure changes in the bone marrow microenvironment and to develop therapeutic strategies that help it recover.
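The shape-based part of separating touching cells can be sketched with SciPy primitives: distance-transform peaks give one marker per cell, which then seed a watershed. This is a simplified stand-in; the paper's algorithm also exploits intensity information.

```python
import numpy as np
from scipy import ndimage as ndi

# toy binary mask: two overlapping "cells" (radius-15 disks)
yy, xx = np.mgrid[0:60, 0:100]
mask = ((xx - 35)**2 + (yy - 30)**2 < 15**2) | \
       ((xx - 60)**2 + (yy - 30)**2 < 15**2)

# shape information: peaks of the distance transform give one marker
# per cell, even where the mask alone shows a single blob
dist = ndi.distance_transform_edt(mask)
peaks = (dist == ndi.maximum_filter(dist, size=15)) & mask
markers, n_cells = ndi.label(peaks)

# marker-based watershed on the inverted distance map splits the pair
elev = ((dist.max() - dist) * 100).astype(np.uint16)
labels = ndi.watershed_ift(elev, markers.astype(np.int16))
cell_labels = labels * mask  # keep watershed labels inside cells only
```

The two disk centres end up in different watershed regions even though the binary mask is a single connected component.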
Fast left ventricle tracking in CMR images using localized anatomical affine optical flow
Sandro Queirós, João L. Vilaça, Pedro Morais, et al.
In daily cardiology practice, assessment of left ventricular (LV) global function using non-invasive imaging remains central for the diagnosis and follow-up of patients with cardiovascular diseases. Despite the different methodologies currently accessible for LV segmentation in cardiac magnetic resonance (CMR) images, a fast and complete LV delineation is still limitedly available for routine use.

In this study, a localized anatomically constrained affine optical flow method is proposed for fast and automatic LV tracking throughout the full cardiac cycle in short-axis CMR images. Starting from an automatically delineated LV in the end-diastolic frame, the endocardial and epicardial boundaries are propagated by estimating the motion between adjacent cardiac phases using optical flow. In order to reduce the computational burden, the motion is only estimated in an anatomical region of interest around the tracked boundaries and subsequently integrated into a local affine motion model. Such localized estimation makes it possible to capture complex motion patterns while remaining spatially consistent.

The method was validated on 45 CMR datasets taken from the 2009 MICCAI LV segmentation challenge. The proposed approach proved to be robust and efficient, with an average distance error of 2.1 mm and a correlation with reference ejection fraction of 0.98 (1.9 ± 4.5%). Moreover, it proved to be fast, taking 5 seconds to track a full 4D dataset (30 ms per image). Overall, a novel fast, robust, and accurate LV tracking methodology was proposed, enabling accurate assessment of relevant global cardiac function indices, such as volumes and ejection fraction.
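Integrating locally estimated flow vectors into an affine motion model is, at its core, a least-squares fit; a minimal sketch (not the authors' implementation, and with a hypothetical motion matrix for the demo):

```python
import numpy as np

def fit_affine_motion(points, flow):
    """Least-squares fit of a 2-D affine motion model to flow vectors
    sampled at `points` inside a region of interest:
    u(x, y) = [x, y, 1] @ P, with P a 3x2 parameter matrix."""
    design = np.hstack([points, np.ones((len(points), 1))])
    params, *_ = np.linalg.lstsq(design, flow, rcond=None)
    return params

rng = np.random.default_rng(0)
pts = rng.random((40, 2))            # sample positions in the ROI
p_true = np.array([[0.02, -0.01],
                   [0.01, 0.03],
                   [0.50, -0.20]])   # hypothetical affine motion
flow = np.hstack([pts, np.ones((40, 1))]) @ p_true
p_est = fit_affine_motion(pts, flow)
```

Fitting one such model per boundary region is what keeps the tracked contours spatially consistent while still following local motion.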
Keynote and Diffusion MRI Analysis
OMERO and Bio-Formats 5: flexible access to large bioimaging datasets at scale
Josh Moore, Melissa Linkert, Colin Blackburn, et al.
The Open Microscopy Environment (OME) has built and released, under open source licenses, Bio-Formats, a Java-based tool for converting proprietary file formats, and OMERO, an enterprise data management platform. In this report, we describe new versions of Bio-Formats and OMERO that are specifically designed to support large, multi-gigabyte or terabyte scale datasets that are routinely collected across most domains of biological and biomedical research. Bio-Formats reads image data directly from native proprietary formats, bypassing the need for conversion into a standard format. It implements the concept of a file set, a container that defines the contents of multi-dimensional data comprised of many files. OMERO uses Bio-Formats to read files natively, and provides a flexible access mechanism that supports several different storage and access strategies. These new capabilities of OMERO and Bio-Formats make them especially useful in imaging applications like digital pathology, high content screening, and light sheet microscopy that routinely create large datasets that must be managed and analyzed.
7T multi-shell hybrid diffusion imaging (HYDI) for mapping brain connectivity in mice
Madelaine Daianu, Neda Jahanshad, Julio E. Villalon-Reina, et al.
Diffusion weighted imaging (DWI) is widely used to study microstructural characteristics of the brain. High angular resolution diffusion imaging (HARDI) samples diffusivity at a large number of spherical angles to better resolve neural fibers that mix or cross. Here, we implemented a framework for advanced mathematical analysis of mouse 5-shell HARDI (b=1000, 3000, 4000, 8000, 12000 s/mm2), also known as hybrid diffusion imaging (HYDI). Using q-ball imaging (QBI) at ultra-high field strength (7 Tesla), we computed diffusion and fiber orientation distribution functions (dODF, fODF) to better detect crossing fibers. We also computed a quantitative anisotropy (QA) index and deterministic tractography from the peak orientation of the fODFs. We found that the signal-to-noise ratio (SNR) of the QA was significantly higher in single- and multi-shell reconstructed data at the lower b-values (b=1000, 3000, 4000 s/mm2) than at higher b-values (b=8000, 12000 s/mm2); the b=1000 s/mm2 shell increased the SNR of the QA in all multi-shell reconstructions, but when used alone or in reconstructions with fewer than five shells, it led to higher angular error for the major fibers compared to 5-shell HYDI. Multi-shell data reconstructed major fibers with less error than single-shell data, and was most successful at reducing the angular error when the lowest shell (b=1000 s/mm2) was excluded. Overall, high-resolution connectivity mapping with 7T HYDI offers great potential for understanding unresolved changes in mouse models of brain disease.
Measuring the lesion load of multiple sclerosis patients within the corticospinal tract
Jan Klein, Katrin Hanken, Jasna Koceva, et al.
In this paper we present a framework for reliable determination of the lesion load within the corticospinal tract (CST) of multiple sclerosis patients. The basis is a probabilistic fiber tracking approach which checks possible parameter intervals on the fly using an anatomical brain atlas. By exploiting the range of those intervals, the algorithm is able to resolve fiber crossings and to determine the CST in its entirety, although it uses a simple diffusion tensor model. Another advantage is its short running time: tracking the CST takes less than a minute. For segmenting the lesions we developed a semi-automatic approach. First, a trained classifier is applied to multimodal MRI data (T1/FLAIR), where the spectrum of lesions has been determined in advance by a clustering algorithm. This leads to an automatic detection of the lesions, which can be manually corrected afterwards using a threshold-based approach. For evaluation we scanned 46 MS patients and 16 healthy controls. Fiber tracking was performed using our novel algorithm and a standard deflection-based algorithm. Regression analysis of the old and new versions of the algorithm showed a highly significant superiority of the new algorithm with respect to disease duration. Additionally, a low correlation between the old and new approaches supports the observation that standard DTI fiber tracking is not always able to track and quantify the CST reliably.
Image Representation and Reconstruction
Joint multi-shot multi-channel image reconstruction in compressive diffusion weighted MR imaging
Hao Zhang, Yunmei Chen, Eduardo Pasiliao Jr., et al.
Single-Shot Echo-Planar Imaging (SS-EPI) is the most common method for acquiring Diffusion-Weighted Imaging (DWI) data in the clinic due to its immunity to patient motion. However, its image quality is impacted by geometric distortions and poor spatial resolution. While Multi-Shot EPI (MS-EPI) has the potential to achieve high spatial resolution, it suffers from significant motion-induced artifacts. Partially Parallel Imaging (PPI) reconstruction techniques such as Sensitivity Encoding (SENSE) have shown their ability to improve the image quality of MRI. In this paper we propose a SENSE-based model that reconstructs DW images from MS-EPI data by solving a minimization problem. When the motion is not significantly large, we assume the images reconstructed from different shots are low rank except for sparse errors, and our model is solved by an accelerated alternating direction method of multipliers (AADMM) scheme.
Multi-contrast magnetic resonance image reconstruction
Meng Liu, Yunmei Chen, Hao Zhang, et al.
In clinical exams, multi-contrast images from conventional MRI are scanned with the same field of view (FOV) for complementary diagnostic information, such as proton density- (PD-), T1- and T2-weighted images. Their shared information can be utilized for more robust and accurate image reconstruction. In this work, we propose a novel model and an efficient algorithm for joint image reconstruction and coil sensitivity estimation in multi-contrast partially parallel imaging (PPI) in MRI. Our algorithm restores the multi-contrast images by minimizing an energy function consisting of an L2-norm fidelity term to reduce reconstruction errors caused by motion, a vectorial total variation (VTV) regularization term on the underlying images to preserve common anatomical features, and a Tikhonov smoothness term for updating the sensitivity maps based on their physical properties. We present numerical results, including T1- and T2-weighted MR images recovered from partially scanned k-space data, and provide comparisons between our results and those obtained from related existing works. Our numerical results indicate that the proposed method, using vectorial TV and penalties on sensitivities, is promising for multi-contrast multi-channel MR image reconstruction.
Image-based compensation for involuntary motion in weight-bearing C-arm cone-beam CT scanning of knees
Mathias Unberath, Jang-Hwan Choi, Martin Berger, et al.

We previously introduced four fiducial marker-based strategies to compensate for involuntary knee-joint motion during weight-bearing C-arm CT scanning of the lower body. 2D methods showed significant reduction of motion-related artifacts, but 3D methods worked best.

However, previous methods led to increased examination times and patient discomfort caused by the marker attachment process. Moreover, sub-optimal marker placement may lead to decreased marker detectability and therefore unstable motion estimates. In order to reduce overall patient discomfort, we developed a new image-based 2D projection shifting method.

A C-arm cone-beam CT system was used to acquire projection images of five healthy volunteers at various flexion angles. Projection matrices for the horizontal scanning trajectory were calibrated using the Siemens standard PDS-2 phantom. The initial reconstruction was forward projected using maximum-intensity projections (MIP), yielding an estimate of a static scan. This estimate was then used to obtain the 2D projection shifts via registration.

For the scan with the most motion, the proposed method reproduced the marker-based results with a mean error of 2.90 mm +/- 1.43 mm (compared to a mean error of 4.10 mm +/- 3.03 mm in the uncorrected case). The bone contour surrounding the modeling clay layer was also improved. The proposed method is a first step towards automatic, image-based, marker-free motion compensation.
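The paper obtains 2D projection shifts by registering each projection against a MIP-based static estimate; the specific registration technique is not detailed above, but a standard way to estimate such per-projection integer shifts is phase correlation, sketched here as an assumed stand-in:

```python
import numpy as np

def phase_correlation_shift(ref, moving):
    """Estimate the integer (dy, dx) such that np.roll(moving, (dy, dx),
    axis=(0, 1)) best re-aligns `moving` with `ref` (circular model)."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(moving))
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:              # wrap to signed shifts
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

ref = np.random.default_rng(0).random((32, 32))
moving = np.roll(ref, (5, -3), axis=(0, 1))   # simulated projection shift
shift = phase_correlation_shift(ref, moving)  # -> (-5, 3)
```

Applying the estimated shift to each acquired projection before reconstruction plays the role of the marker-derived 2D corrections.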
Super-resolution for medical images corrupted by heavy noise
Dai-Viet Tran, Marie Luong, Sébastien Li-Thao-Té, et al.
Medical images often suffer from noise and low resolution, which may compromise the accuracy of diagnosis. How to improve image resolution in cases of heavy noise is still a challenging issue. This paper introduces a novel example-based Super-Resolution (SR) method for medical images corrupted by heavy Poisson noise, efficiently integrating denoising and SR in the same framework. The purpose is to estimate a high-resolution (HR) image from a single noisy low-resolution (LR) image, with the help of a given set of standard images which are used as examples to construct the database. Precisely, for each patch in the noisy LR image, the idea is to find its nearest neighbor patches in the database and use them to estimate the HR patch by computing a regression function based on the construction of a reproducing kernel Hilbert space. To obtain the corresponding set of k-nearest neighbors in the database, a coarse search using the shortest Euclidean distance is first performed, followed by a refined search using a criterion based on the distribution of Poisson noise and the Anscombe transformation. This paper also evaluates the performance of the method compared to other state-of-the-art denoising and SR methods. The obtained results demonstrate its efficiency, especially for heavy Poisson noise.
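The two-stage neighbour search can be sketched as a coarse Euclidean pass followed by re-ranking in the Anscombe domain, where Poisson noise becomes approximately Gaussian; this is a simplified stand-in for the refined criterion used in the paper:

```python
import numpy as np

def anscombe(x):
    """Variance-stabilizing Anscombe transform for Poisson data."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def coarse_knn(query, database, k):
    """Coarse pass: k nearest patches by plain Euclidean distance."""
    d = np.linalg.norm(database - query, axis=1)
    return np.argsort(d)[:k]

def refined_search(query, database, k_coarse, k):
    """Refined pass: re-rank the coarse candidates in the Anscombe
    domain, where Poisson noise is approximately Gaussian."""
    cand = coarse_knn(query, database, k_coarse)
    d = np.linalg.norm(anscombe(database[cand]) - anscombe(query), axis=1)
    return cand[np.argsort(d)[:k]]

database = np.array([[0.0, 0.0], [1.0, 1.0], [10.0, 10.0]])  # toy patches
best = refined_search(np.array([1.1, 0.9]), database, 3, 1)
```

The retrieved neighbours would then feed the kernel regression that estimates each HR patch.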
Spline-based sparse tomographic reconstruction with Besov priors
Elham Sakhaee, Alireza Entezari
Tomographic reconstruction from limited X-ray data is an ill-posed inverse problem. A common Bayesian approach is to search for the maximum a posteriori (MAP) estimate of the unknowns that integrates the prior knowledge, about the nature of biomedical images, into the reconstruction process. Recent results on the Bayesian inversion have shown the advantages of Besov priors for the convergence of the estimates as the discretization of the image is refined. We present a spline framework for sparse tomographic reconstruction that leverages higher-order basis functions for image discretization while incorporating Besov space priors to obtain the MAP estimate. Our method leverages tensor-product B-splines and box splines, as higher order basis functions for image discretization, that are shown to improve accuracy compared to the standard, first-order, pixel-basis.

Our experiments show that the synergy produced from higher order B-splines for image discretization together with the discretization-invariant Besov priors leads to significant improvements in tomographic reconstruction. The advantages of the proposed Bayesian inversion framework are examined for image reconstruction from limited number of projections in a few-view setting.
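Among the higher-order bases in question, the centred cubic B-spline is simple to state explicitly:

```python
import numpy as np

def cubic_bspline(t):
    """Centred cubic B-spline: piecewise cubic, C^2-continuous,
    supported on [-2, 2]; a classic higher-order image basis."""
    t = np.abs(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    m1 = t < 1
    m2 = (t >= 1) & (t < 2)
    out[m1] = (4.0 - 6.0 * t[m1]**2 + 3.0 * t[m1]**3) / 6.0
    out[m2] = (2.0 - t[m2])**3 / 6.0
    return out
```

Integer shifts of this function form a partition of unity, which is what lets such bases replace the first-order pixel basis in the discretization without introducing bias.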
Compressed Sensing/Sparse Methods
Rank-sparsity constrained atlas construction and phenotyping
D. P. Clark, C. T. Badea
Atlas construction is of great interest in the medical imaging community as a tool to visually and quantitatively characterize anatomic variability within a population. Because such atlases generally exhibit superior data fidelity relative to the individual data sets from which they are constructed, they have also proven invaluable in numerous informatics applications such as automated segmentation and classification, regularization of individual-specific reconstructions from undersampled data, and for characterizing physiologically relevant functional metrics. Perhaps the most valuable role of an anatomic atlas is not to define what is “normal,” but, in fact, to recognize what is “abnormal.” Here, we propose and demonstrate a novel anatomic atlas construction strategy that simultaneously recovers the average anatomy and the deviation from average in a visually meaningful way. The proposed approach treats the problem of atlas construction within the context of robust principal component analysis (RPCA) in which the redundant portion of the data (i.e. the low rank atlas) is separated from the spatially and gradient sparse portion of the data unique to each individual (i.e. the sparse variation). In this paper, we demonstrate the application of RPCA to the Shepp-Logan phantom, including several forms of variability encountered with in vivo data: population variability, class variability, contrast variability, and individual variability. We then present preliminary results produced by applying the proposed approach to in vivo, murine cardiac micro-CT data acquired in a model of right ventricle hypertrophy induced by pulmonary arteriole hypertension.
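The low-rank-plus-sparse split at the heart of the method can be sketched with the standard principal component pursuit iteration: singular value thresholding recovers the low-rank "atlas" and soft thresholding the sparse individual variation. The parameters and toy data below are generic defaults, not the paper's (which also uses gradient sparsity):

```python
import numpy as np

def soft(x, t):
    """Elementwise soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def svt(x, t):
    """Singular value thresholding: soft threshold on singular values."""
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    return (u * np.maximum(s - t, 0.0)) @ vt

def rpca(d, lam=None, mu=1.0, iters=500):
    """Principal component pursuit via ADMM: d ~ low-rank + sparse."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(d.shape))
    low = np.zeros_like(d); sparse = np.zeros_like(d); dual = np.zeros_like(d)
    for _ in range(iters):
        low = svt(d - sparse + dual / mu, 1.0 / mu)
        sparse = soft(d - low + dual / mu, lam / mu)
        dual += mu * (d - low - sparse)
    return low, sparse

rng = np.random.default_rng(1)
atlas = np.outer(rng.standard_normal(20), rng.standard_normal(20))  # rank 1
variation = np.zeros((20, 20))
variation[2, 3], variation[15, 7], variation[9, 9] = 10.0, -8.0, 6.0
data = atlas + variation
low, sparse = rpca(data)
```

In the imaging setting, each column of `data` would hold one vectorized subject, so `low` plays the role of the shared atlas and `sparse` highlights each subject's deviation from it.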
Compressed sensing MRI using higher order multi-scale FREBAS for sparsifying transform function
S. Ito, K. Ito, M. Shibuya, et al.
In recent years, compressed sensing (CS) has attracted considerable attention in areas such as rapid magnetic resonance (MR) imaging. Signal sparsity is an essential condition for compressed sensing. In this work, a multi-scale sparsifying transform based on the Fresnel transform (FREBAS) is adopted in order to improve the quality of CS images. The experimental results demonstrate that by increasing the sparsity of the image in the FREBAS transform domain, curved features in MR images can be more faithfully reconstructed than is possible using the traditional wavelet transform or curvelet transform, particularly for low sampling rates in k-space. In addition, the proposed method is robust to the choice of sampling trajectory of the NMR signal.
Intraparenchymal hemorrhage segmentation from clinical head CT of patients with traumatic brain injury
Snehashis Roy, Sean Wilkes, Ramon Diaz-Arrastia M.D., et al.
Quantification of hemorrhages in head computed tomography (CT) images from patients with traumatic brain injury (TBI) has potential applications in monitoring disease progression and better understanding of the patho-physiology of TBI. Although manual segmentations can provide accurate measures of hemorrhages, the processing time and inter-rater variability make it infeasible for large studies. In this paper, we propose a novel, fully automatic pipeline for segmenting intraparenchymal hemorrhages (IPH) from clinical head CT images. Unlike previous methods of model based segmentation or active contour techniques, we rely on relevant and matching examples from already segmented images by trained raters. The CT images are first skull-stripped. Then example patches from an "atlas" CT and its manual segmentation are used to learn a two-class sparse dictionary for hemorrhage and normal tissue. Next, for a given "subject" CT, a subject patch is modeled as a sparse convex combination of a few atlas patches from the dictionary. The same convex combination is applied to the atlas segmentation patches to generate a membership for the hemorrhages at each voxel. Hemorrhages are segmented from 25 subjects with various degrees of TBI. Results are compared with segmentations obtained from an expert rater. A median Dice coefficient of 0.85 between automated and manual segmentations is achieved. A linear fit between automated and manual volumes shows a slope of 1.0047, indicating negligible bias in volume estimation.
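The core step, modeling a subject patch as a (near-)convex combination of atlas patches and transferring the same weights to the label patches, can be sketched with non-negative least squares; the tiny patches below are illustrative, and the paper's actual solver operates on a learned sparse dictionary:

```python
import numpy as np
from scipy.optimize import nnls

def patch_membership(subject_patch, atlas_patches, atlas_labels):
    """Approximate a subject patch by a non-negative (normalized toward
    convex) combination of atlas patches, then apply the same weights
    to the corresponding manual-label patches."""
    w, _ = nnls(atlas_patches.T, subject_patch)  # columns = atlas patches
    if w.sum() > 0:
        w = w / w.sum()
    return atlas_labels.T @ w  # hemorrhage membership per voxel

# tiny illustrative 4-voxel patches: one hemorrhage, one normal tissue
atlas_patches = np.array([[1.0, 0.9, 0.8, 1.0],
                          [0.1, 0.2, 0.1, 0.0]])
atlas_labels = np.array([[1.0, 1.0, 1.0, 1.0],
                         [0.0, 0.0, 0.0, 0.0]])
membership = patch_membership(np.array([1.0, 0.9, 0.8, 1.0]),
                              atlas_patches, atlas_labels)
```

Thresholding the per-voxel memberships then yields the binary hemorrhage segmentation.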
Alternating minimization algorithm with iteratively reweighted quadratic penalties for compressive transmission tomography
Yan Kaganovsky, Soysal Degirmenci, Shaobo Han, et al.
We propose an alternating minimization (AM) algorithm for estimating attenuation functions in x-ray transmission tomography using priors that promote sparsity in the pixel/voxel differences domain. As opposed to standard maximum-a-posteriori (MAP) estimation, we use the automatic relevance determination (ARD) framework. In the ARD approach, sparsity (or compressibility) is promoted by introducing latent variables which serve as the weights of quadratic penalties, with one weight for each pixel/voxel; these weights are then automatically learned from the data. This leads to an algorithm where the quadratic penalty is reweighted in order to effectively promote sparsity. In addition to the usual object estimate, ARD also provides measures of uncertainty (posterior variances) which are used at each iteration to automatically determine the trade-off between data fidelity and the prior, thus potentially circumventing the need for any tuning parameters. We apply the convex decomposition lemma in a novel way and derive a separable surrogate function that leads to a parallel algorithm. We propose an extension of branchless distance-driven forward/back-projections which allows us to considerably speed up the computations associated with the posterior variances. We also study the acceleration of the algorithm using ordered subsets.
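The reweighting idea, one quadratic penalty weight per pixel, re-learned from the current estimate so that small coefficients are driven toward zero, can be sketched in a few lines. This is a simplified ARD-flavoured scheme, not the authors' full algorithm with posterior variances and surrogate functions:

```python
import numpy as np

def irls_sparse(A, b, n_iter=30, reg=1.0, eps=1e-8):
    """Iteratively reweighted quadratic penalties: each coefficient has
    its own weight, updated from the current estimate so that small
    coefficients receive huge penalties and collapse to zero."""
    w = np.ones(A.shape[1])
    for _ in range(n_iter):
        x = np.linalg.solve(A.T @ A + reg * np.diag(w), A.T @ b)
        w = 1.0 / (x**2 + eps)  # small entries -> large weights
    return x

b = np.array([5.0, 0.01, -4.0, 0.0])
x = irls_sparse(np.eye(4), b)  # large entries survive, tiny ones vanish
```

Each inner solve is a plain quadratic problem, which is what makes the per-iteration work parallelizable in the way the paper exploits.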
Machine Learning
Revealing latent value of clinically acquired CTs of traumatic brain injury through multi-atlas segmentation in a retrospective study of 1,003 with external cross-validation
Andrew J. Plassard, Patrick D. Kelly, Andrew J. Asman, et al.
Medical imaging plays a key role in guiding treatment of traumatic brain injury (TBI) and for diagnosing intracranial hemorrhage; most commonly rapid computed tomography (CT) imaging is performed. Outcomes for patients with TBI are variable and difficult to predict upon hospital admission. Quantitative outcome scales (e.g., the Marshall classification) have been proposed to grade TBI severity on CT, but such measures have had relatively low value in staging patients by prognosis. Herein, we examine a cohort of 1,003 subjects admitted for TBI and imaged clinically to identify potential prognostic metrics using a "big data" paradigm. For all patients, a brain scan was segmented with multi-atlas labeling, and intensity/volume/texture features were computed in a localized manner. In a 10-fold cross-validation approach, the explanatory value of the image-derived features is assessed for length of hospital stay (days), discharge disposition (five point scale from death to return home), and the Rancho Los Amigos functional outcome score (Rancho Score). Image-derived features increased the predictive R2 to 0.38 (from 0.18) for length of stay, to 0.51 (from 0.4) for discharge disposition, and to 0.31 (from 0.16) for Rancho Score (over models consisting only of non-imaging admission metrics, but including positive/negative radiological CT findings). This study demonstrates that high volume retrospective analysis of clinical imaging data can reveal imaging signatures with prognostic value. These targets are suited for follow-up validation and represent targets for future feature selection efforts. Moreover, the increase in prognostic value would improve staging for intervention assessment and provide more reliable guidance for patients.
Efficient abdominal segmentation on clinically acquired CT with SIMPLE context learning
Zhoubing Xu, Ryan P. Burke, Christopher P. Lee, et al.
Abdominal segmentation on clinically acquired computed tomography (CT) has been a challenging problem given the inter-subject variance of human abdomens and complex 3-D relationships among organs. Multi-atlas segmentation (MAS) provides a potentially robust solution by leveraging label atlases via image registration and statistical fusion. We posit that the efficiency of atlas selection requires further exploration in the context of substantial registration errors. The selective and iterative method for performance level estimation (SIMPLE) method is a MAS technique integrating atlas selection and label fusion that has proven effective for prostate radiotherapy planning. Herein, we revisit atlas selection and fusion techniques for segmenting 12 abdominal structures using clinically acquired CT. Using a re-derived SIMPLE algorithm, we show that performance on multi-organ classification can be improved by accounting for exogenous information through Bayesian priors (so called context learning). These innovations are integrated with the joint label fusion (JLF) approach to reduce the impact of correlated errors among selected atlases for each organ, and a graph cut technique is used to regularize the combined segmentation. In a study of 100 subjects, the proposed method outperformed other comparable MAS approaches, including majority vote, SIMPLE, JLF, and the Wolz locally weighted vote technique. The proposed technique provides consistent improvement over state-of-the-art approaches (median improvement of 7.0% and 16.2% in DSC over JLF and Wolz, respectively) and moves toward efficient segmentation of large-scale clinically acquired CT data for biomarker screening, surgical navigation, and data mining.
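Majority vote, the weakest of the fusion baselines compared above, makes the structure of multi-atlas label fusion concrete:

```python
import numpy as np

def majority_vote(label_maps):
    """Baseline multi-atlas fusion: per-voxel majority vote across
    registered atlas label maps."""
    stack = np.stack(label_maps).astype(int)
    n_labels = int(stack.max()) + 1
    votes = np.stack([(stack == lab).sum(axis=0) for lab in range(n_labels)])
    return votes.argmax(axis=0)

# three toy 2x2 atlas label maps warped to the subject space
maps = [np.array([[0, 1], [2, 2]]),
        np.array([[0, 1], [1, 2]]),
        np.array([[1, 1], [2, 0]])]
fused = majority_vote(maps)  # -> [[0, 1], [2, 2]]
```

SIMPLE, JLF, and the proposed context-learning variant all refine this same vote, by discarding poor atlases, weighting correlated ones down, or injecting Bayesian priors.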
Longitudinal graph-based segmentation of macular OCT using fundus alignment
Andrew Lang, Aaron Carass, Omar Al-Louzi, et al.
Segmentation of retinal layers in optical coherence tomography (OCT) has become an important diagnostic tool for a variety of ocular and neurological diseases. Currently all OCT segmentation algorithms analyze data independently, ignoring previous scans, which can lead to spurious measurements due to algorithm variability and failure to identify subtle changes in retinal layers. In this paper, we present a graph-based segmentation framework to provide consistent longitudinal segmentation results. Regularization over time is accomplished by adding weighted edges between corresponding voxels at each visit. We align the scans to a common subject space before connecting the graphs by registering the data using both the retinal vasculature and retinal thickness generated from a low resolution segmentation. This initial segmentation also allows the higher dimensional temporal problem to be solved more efficiently by reducing the graph size. Validation is performed on longitudinal data from 24 subjects, where we explore the variability between our longitudinal graph method and a cross-sectional graph approach. Our results demonstrate that the longitudinal component improves segmentation consistency, particularly in areas where the boundaries are difficult to visualize due to poor scan quality.
Machine learning for the automatic localisation of foetal body parts in cine-MRI scans
Christopher Bowles, Niamh C. Nowlan, Tayyib T. A. Hayat, et al.
Being able to automatically locate individual foetal body parts has the potential to dramatically reduce the work required to analyse time-resolved foetal Magnetic Resonance Imaging (cine-MRI) scans, for example for the automatic evaluation of foetal development. Currently, manual preprocessing of every scan is required to locate body parts before analysis can be performed, leading to a significant time overhead. With the volume of available scans set to increase as cine-MRI becomes more prevalent in clinical practice, this stage of manual preprocessing is a bottleneck, limiting the data available for further analysis. Any tool which can automate this process will therefore save many hours of research time and increase the rate of new discoveries in what is a key area in understanding early human development. Here we present a series of techniques which can be applied to foetal cine-MRI scans in order to first locate and then differentiate between individual body parts. A novel approach to maternal movement suppression and segmentation using Fourier transforms is put forward as a preprocessing step, allowing for easy extraction of short movements of individual foetal body parts via the clustering of optical flow vector fields. These body part movements are compared to a labelled database and probabilistically classified before being spatially and temporally combined to give a final estimate of the location of each body part.
MS lesion segmentation using a multi-channel patch-based approach with spatial consistency
Roey Mechrez, Jacob Goldberger, Hayit Greenspan
This paper presents an automatic method for segmentation of Multiple Sclerosis (MS) lesions in Magnetic Resonance Images (MRI) of the brain. The approach is based on similarities between multi-channel patches (T1, T2, and FLAIR). An MS lesion patch database is built using training images for which the label maps are known. For each patch in the test image, k similar patches are retrieved from the database, and the labels of these k patches are combined to produce an initial segmentation map for the test case. Finally, a novel iterative patch-based label refinement process based on the initial segmentation map is performed to ensure spatial consistency of the detected lesions. A leave-one-out evaluation is done for each test image in the MS lesion segmentation challenge of MICCAI 2008. The results are shown to compete with the state-of-the-art methods on the MICCAI 2008 challenge.
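The retrieval-and-fusion step described above can be sketched in a few lines: find the k database patches closest to the test patch and average their labels. This is a simplified sketch assuming plain Euclidean distance on flattened multi-channel patch vectors; the distance measure, database layout, and function name are illustrative, not the authors' implementation.

```python
import numpy as np

def knn_patch_label_fusion(test_patch, db_patches, db_labels, k=3):
    """Label a patch by averaging the labels of its k nearest database patches.

    db_patches: (n_patches, n_features) flattened multi-channel patches.
    db_labels:  (n_patches,) binary labels (1 = lesion).
    Returns the fraction of lesion votes among the k neighbours, i.e. a
    soft lesion probability for the test patch.
    """
    d = np.linalg.norm(db_patches - test_patch, axis=1)
    nearest = np.argsort(d)[:k]
    return db_labels[nearest].mean()

db = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
labels = np.array([0, 0, 1, 1])  # 1 = lesion patch
# Two of the three nearest neighbours are lesion patches -> score 2/3
print(knn_patch_label_fusion(np.array([0.95, 1.0]), db, labels, k=3))
```

Thresholding these per-patch scores yields the initial segmentation map, which the paper then refines iteratively for spatial consistency.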
Shape and Models
icon_mobile_dropdown
Automatic sulcal curve extraction on the human cortical surface
The recognition of sulcal regions on the cortical surface is an important task for shape analysis and landmark detection. However, it is especially challenging in the complex, rough human cortex. In this paper, we focus on the extraction of sulcal curves from the human cortical surface. Previous sulcal extraction methods are time-consuming in practice and often have difficulty delineating curves correctly along the sulcal regions in the presence of significant noise. Our pipeline is summarized in two main steps: 1) we extract candidate sulcal points spread over the sulcal regions and further reduce the number of candidate points by applying a line simplification method; 2) since the candidate points are potentially located away from the exact valley regions, we propose a novel approach to connect candidate sulcal points so as to obtain a set of complete curves (line segments). Experiments show that our method achieves high computational efficiency, improved robustness to noise, and high reliability in a test-retest setting as compared to a well-known existing method.
Adaptation of an articulated fetal skeleton model to three-dimensional fetal image data
Tobias Klinder, Hannes Wendland, Irina Wachter-Stehle, et al.
The automatic interpretation of three-dimensional fetal images poses specific challenges compared to other three-dimensional diagnostic data, especially since the orientation of the fetus in the uterus and the position of the extremities are highly variable. In this paper, we present a comprehensive articulated model of the fetal skeleton and the adaptation of the articulation for pose estimation in three-dimensional fetal images. The model is composed of rigid bodies whose articulations are represented as rigid body transformations. Given a set of target landmarks, the model constellation can be estimated by optimization of the pose parameters. Experiments are carried out on 3D fetal MRI data, yielding an average error per case of 12.03±3.36 mm between target and estimated landmark positions.
Interpretable exemplar-based shape classification using constrained sparse linear models
Gunnar A. Sigurdsson, Zhen Yang, Trac D. Tran, et al.
Many types of diseases manifest themselves as observable changes in the shape of the affected organs. Using shape classification, we can look for signs of disease and discover relationships between diseases. We formulate the problem of shape classification in a holistic framework that utilizes a lossless scalar field representation and a non-parametric classification based on sparse recovery. This framework generalizes over certain classes of unseen shapes while using the full information of the shape, bypassing feature extraction. The output of the method is the class whose combination of exemplars most closely approximates the shape, and furthermore, the algorithm returns the most similar exemplars along with their similarity to the shape, which makes the result simple to interpret. Our results show that the method offers accurate classification between three cerebellar diseases and controls in a database of cerebellar ataxia patients. For reproducible comparison, promising results are presented on publicly available 2D datasets, including the ETH-80 dataset where the method achieves 88.4% classification accuracy.
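The classification rule above, assigning a shape to the class whose exemplars best reconstruct it, can be illustrated with a dense least-squares simplification. This sketch drops the sparsity and sign constraints of the paper's model and uses made-up exemplar data; it only shows the "smallest per-class reconstruction residual" decision rule.

```python
import numpy as np

def classify_by_residual(x, exemplars_by_class):
    """Assign x to the class whose exemplars best reconstruct it.

    exemplars_by_class: {class_name: (n_exemplars, n_features) array}.
    For each class, fit x as a least-squares combination of that class's
    exemplars and keep the class with the smallest residual norm.
    """
    best, best_r = None, np.inf
    for c, E in exemplars_by_class.items():
        coef, *_ = np.linalg.lstsq(E.T, x, rcond=None)
        r = np.linalg.norm(x - E.T @ coef)
        if r < best_r:
            best, best_r = c, r
    return best

classes = {"A": np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]),
           "B": np.array([[0.0, 0.0, 1.0]])}
# x lies in the span of class A's exemplars, so its class-A residual is 0.
print(classify_by_residual(np.array([0.2, 0.8, 0.0]), classes))  # -> A
```

The interpretability claimed in the abstract comes from inspecting the fitted coefficients: the largest ones identify the most similar exemplars.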
Reference geometry-based detection of (4D-)CT motion artifacts: a feasibility study
René Werner, Tobias Gauer
Respiration-correlated computed tomography (4D or 3D+t CT) can be considered standard of care in radiation therapy treatment planning for lung and liver lesions. The decision about applying motion management devices and the estimation of patient-specific motion effects on the dose distribution rely on precise motion assessment in the planning 4D CT data, which is impeded in the case of CT motion artifacts. The development of image-based/post-processing approaches to reduce motion artifacts would benefit from precise detection and localization of the artifacts. Simple slice-by-slice comparison of intensity values and threshold-based analysis of related metrics suffer from high false-positive or false-negative rates, depending on the threshold. In this work, we propose exploiting prior knowledge about 'ideal' (i.e., artifact-free) reference geometries to stabilize metric-based artifact detection by transferring (multi-)atlas-based concepts to this specific task. Two variants are introduced and evaluated: (S1) analysis and comparison of warped atlas data obtained by repeated non-linear atlas-to-patient registration with different levels of regularization; (S2) direct analysis of vector field properties (divergence, curl magnitude) of the atlas-to-patient transformation. The feasibility of approaches (S1) and (S2) is evaluated on motion-phantom data and intra-subject experiments (four patients) as well as, adopting a multi-atlas strategy, inter-subject investigations (twelve patients). It is demonstrated that especially sorting/double-structure artifacts can be precisely detected and localized by (S1). In contrast, (S2) suffers from high false-positive rates.
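For variant (S2), the divergence of the atlas-to-patient displacement field flags regions of implausible local expansion or compression. A minimal 2-D sketch using central differences (the paper works on 3-D fields; unit voxel spacing and the function name are assumptions here):

```python
import numpy as np

def divergence_2d(vx, vy):
    """Divergence of a 2-D vector field via central differences.

    vx, vy: x- and y-components sampled on a regular grid with unit
    spacing. High-magnitude divergence in an atlas-to-patient
    transformation hints at locally implausible deformation.
    """
    dvx_dx = np.gradient(vx, axis=1)  # d(vx)/dx
    dvy_dy = np.gradient(vy, axis=0)  # d(vy)/dy
    return dvx_dx + dvy_dy

# A purely expanding field v = (x, y) has constant divergence 2.
y, x = np.mgrid[0:5, 0:5].astype(float)
div = divergence_2d(x, y)
print(np.allclose(div, 2.0))  # -> True
```

The curl magnitude mentioned in (S2) is computed analogously from the cross-derivatives of the field components.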
Hierarchical pictorial structures for simultaneously localizing multiple organs in volumetric pre-scan CT
Albert Montillo, Qi Song, Bipul Das, et al.
Parsing volumetric computed tomography (CT) into 10 or more salient organs simultaneously is a challenging task with many applications, such as personalized scan planning and dose reporting. In the clinic, pre-scan data can come in the form of very low dose volumes acquired just prior to the primary scan or from an existing primary scan. To localize organs in such diverse data, we propose a new learning-based framework that we call hierarchical pictorial structures (HPS), which builds multiple levels of models in a tree-like hierarchy that mirrors the natural decomposition of human anatomy from gross structures to finer structures. Each node of our hierarchical model learns (1) the local appearance and shape of structures, and (2) a generative global model of the probabilistic structural arrangement. Our main contribution is twofold. First, we embed the pictorial structures approach in a hierarchical framework, which reduces image interpretation time at test time and allows for the incorporation of additional geometric constraints that robustly guide model fitting in the presence of noise. Second, we guide our HPS framework with probabilistic cost maps extracted using random decision forests on volumetric 3D HOG features, which makes our model fast to train, fast to apply to novel test data, and highly invariant to shape distortion and imaging artifacts. All steps together require approximately 3 minutes to compute, and all organs are located with sufficiently high accuracy for our clinical applications, such as personalized scan planning for radiation dose reduction. We assess our method using a database of volumetric CT scans from 81 subjects with widely varying age and pathology, and with simulated ultra-low dose cadaver pre-scan data.
Skeletal shape correspondence via entropy minimization
Liyun Tu, Martin Styner, Jared Vicory, et al.
Purpose: To improve the shape statistics of medical image objects by generating correspondence of interior skeletal points. Data: Synthetic objects and real-world lateral ventricles segmented from MR images. Method(s): Each object's interior is modeled by a skeletal representation called the s-rep, which is a quadrilaterally sampled, folded 2-sided skeletal sheet with spoke vectors proceeding from the sheet to the boundary. The skeleton is divided into three parts: up-side, down-side and fold-curve. The spokes on each part are treated separately and, using spoke interpolation, are shifted along their skeletal parts in each training sample so as to tighten the probability distribution on those spokes' geometric properties while sampling the object interior regularly. As with the surface-based correspondence method of Cates et al., entropy is used to measure both the probability distribution tightness and the sampling regularity. The spokes' geometric properties are the skeletal position, spoke length, and spoke direction. The properties used to measure regularity are the volumetric subregions bounded by the spokes, and their quadrilateral sub-areas and edge lengths on the skeletal surface and on the boundary. Results: Evaluation on synthetic objects and real-world lateral ventricles demonstrated improved performance of statistics using the resulting probability distributions, as compared to methods based on boundary models. The evaluation measures used were generalization, specificity, and compactness. Conclusions: S-rep models with the proposed improved correspondence provide significantly enhanced statistics as compared to standard boundary models.
Computational Anatomy
icon_mobile_dropdown
Probabilistic atlas based labeling of the cerebral vessel tree
Martijn Van de Giessen, Jasper P. Janssen, Patrick A. Brouwer, et al.
Preoperative imaging of the cerebral vessel tree is essential for planning therapy on intracranial stenoses and aneurysms. Usually, a magnetic resonance angiography (MRA) or computed tomography angiography (CTA) is acquired, from which the cerebral vessel tree is segmented. Accurate analysis is helped by the labeling of the cerebral vessels, but labeling is non-trivial due to topological variability of the anatomy and branches missing because of acquisition issues. In recent literature, labeling the cerebral vasculature around the Circle of Willis has mainly been approached as a graph-based problem. The most successful method, however, requires the definition of all possible permutations of missing vessels, which limits its application to subsets of the tree and ignores spatial information about the vessel locations.

This research aims to perform labeling using probabilistic atlases that model spatial vessel and label likelihoods. A cerebral vessel tree is aligned to a probabilistic atlas and subsequently each vessel is labeled by computing the maximum label likelihood per segment from label-specific atlases.

The proposed method was validated on 25 segmented cerebral vessel trees. Labeling accuracies were close to 100% for large vessels, but dropped to 50-60% for small vessels that were only present in less than 50% of the set.

With this work we showed that using solely spatial information of the vessel labels, vessel segments from stable vessels (>50% presence) were reliably classified. This spatial information will form the basis for a future labeling strategy with a very loose topological model.
Simultaneous skull-stripping and lateral ventricle segmentation via fast multi-atlas likelihood fusion
Xiaoying Tang, Kwame Kutten, Can Ceritoglu, et al.
In this paper, we propose and validate a fully automated pipeline for simultaneous skull-stripping and lateral ventricle segmentation using T1-weighted images. The pipeline is built upon a segmentation algorithm entitled fast multi-atlas likelihood-fusion (MALF) which utilizes multiple T1 atlases that have been pre-segmented into six whole-brain labels – the gray matter, the white matter, the cerebrospinal fluid, the lateral ventricles, the skull, and the background of the entire image. This algorithm, MALF, was designed for estimating brain anatomical structures in the framework of coordinate changes via large diffeomorphisms. In the proposed pipeline, we use a variant of MALF to estimate those six whole-brain labels in the test T1-weighted image. The three tissue labels (gray matter, white matter, and cerebrospinal fluid) and the lateral ventricles are then grouped together to form a binary brain mask to which we apply morphological smoothing so as to create the final mask for brain extraction. For computational purposes, all input images to MALF are down-sampled by a factor of two. In addition, small deformations are used for the changes of coordinates. This substantially reduces the computational complexity, hence we use the term “fast MALF”. The skull-stripping performance is qualitatively evaluated on a total of 486 brain scans from a longitudinal study on Alzheimer dementia. Quantitative error analysis is carried out on 36 scans for evaluating the accuracy of the pipeline in segmenting the lateral ventricle. The volumes of the automated lateral ventricle segmentations, obtained from the proposed pipeline, are compared across three different clinical groups. The ventricle volumes from our pipeline are found to be sensitive to the diagnosis.
A transformation similarity constraint for groupwise nonlinear registration in longitudinal neuroimaging studies
Greg M. Fleishman, Boris A. Gutman, P. Thomas Fletcher, et al.
Patients with Alzheimer's disease and other brain disorders often show a similar spatial distribution of volume change throughout the brain over time, but this information is not yet used in registration algorithms to refine the quantification of change. Here, we develop a mathematical basis to incorporate that prior information into a longitudinal structural neuroimaging study. We modify the canonical minimization problem for non-linear registration to include a term that couples a collection of registrations together to enforce group similarity. More specifically, throughout the computation we maintain a group-level representation of the transformations and constrain updates to individual transformations to be similar to this representation. The derivations necessary to produce the Euler-Lagrange equations for the coupling term are presented and a gradient descent algorithm based on the formulation was implemented. We demonstrate using 57 longitudinal image pairs from the Alzheimer's Disease Neuroimaging Initiative (ADNI) that longitudinal registration with such a groupwise coupling prior is more robust to noise in estimating change, suggesting such change maps may have several important applications.
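The groupwise coupling term above penalizes each subject's transformation for deviating from the group-level representation. A minimal sketch of one gradient-descent step on a quadratic coupling penalty (lam/2) * sum over i of ||phi_i - mean(phi)||^2, whose gradient with respect to phi_i is lam * (phi_i - mean(phi)); the per-subject registration (data-matching) gradient of the full Euler-Lagrange equations is omitted, and the function name and array layout are assumptions.

```python
import numpy as np

def coupled_update(fields, step, lam):
    """One gradient-descent step on the groupwise coupling penalty.

    fields: (n_subjects, ...) array of transformation parameters.
    Each field is nudged toward the group mean; the coupling gradient
    for subject i is lam * (phi_i - mean(phi)).
    """
    mean = fields.mean(axis=0, keepdims=True)
    return fields - step * lam * (fields - mean)

# Three toy 1-D "transformations"; one step pulls them toward the mean (2.0).
phis = np.array([[1.0], [2.0], [3.0]])
updated = coupled_update(phis, step=0.5, lam=1.0)
print(updated.ravel())  # -> [1.5 2.  2.5]
```

Iterating such updates while also descending the individual registration energies enforces the group-similarity prior described in the abstract.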
Automatic brain extraction in fetal MRI using multi-atlas-based segmentation
Sébastien Tourbier, Patric Hagmann, Maud Cagneaux, et al.
In fetal brain MRI, most of the high-resolution reconstruction algorithms rely on brain segmentation as a preprocessing step. Manual brain segmentation is, however, highly time-consuming and therefore not a realistic solution. In this work, we assess on a large dataset the performance of Multiple Atlas Fusion (MAF) strategies to automatically address this problem. First, we show that MAF significantly increases the accuracy of brain segmentation compared with a single-atlas strategy. Second, we show that MAF compares favorably with the most recent approach (Dice above 0.90). Finally, we show that MAF could in turn provide an enhancement in reconstruction quality.
Automatic parcellation of longitudinal cortical surfaces
Manal H. Alassaf, James K. Hahn
We present a novel automatic method to parcellate the cortical surfaces of the neonatal brain longitudinal atlas at different stages of development. A labeled brain atlas of a newborn at 41 weeks gestational age (GA) is used to propagate labels of anatomical regions of interest to an unlabeled spatio-temporal atlas, which provides a dynamic model of brain development at each week between 28 and 44 GA weeks. First, labels from the cortical volume of the labeled newborn brain are propagated to an age-matched cortical surface from the spatio-temporal atlas. Then, labels are propagated across the cortical surfaces of each week of the spatio-temporal atlas by registering successive cortical surfaces using a novel approach and an energy optimization function. This procedure incorporates local and global, spatial and temporal information when assigning the labels to each surface. The result is a complete parcellation of 17 neonatal brain surfaces of the spatio-temporal atlas with similar points-per-label distributions across weeks.
Segmentation: Brain
icon_mobile_dropdown
3D MR ventricle segmentation in pre-term infants with post-hemorrhagic ventricle dilation
Wu Qiu, Jing Yuan, Jessica Kishimoto, et al.
Intraventricular hemorrhage (IVH), or bleeding within the brain, is a common condition among pre-term infants that occurs in very low birth weight preterm neonates. The prognosis is further worsened by the development of progressive ventricular dilatation, i.e., post-hemorrhagic ventricle dilation (PHVD), which occurs in 10-30% of IVH patients. In practice, predicting PHVD accurately and determining whether a specific patient with ventricular dilatation requires intervention demand the ability to measure ventricular volume accurately. While monitoring of PHVD in infants is typically done by repeated US and not MRI, once the patient has been treated, follow-up over the lifetime of the patient is done by MRI. While manual segmentation is still seen as the gold standard, it is extremely time-consuming, and therefore not feasible in a clinical context, and it also has large inter- and intra-observer variability. This paper proposes a segmentation algorithm to extract the cerebral ventricles from 3D T1-weighted MR images of pre-term infants with PHVD. The proposed segmentation algorithm makes use of a convex optimization technique combined with learned priors of image intensities and a probabilistic label map built from a multi-atlas registration scheme. Leave-one-out cross-validation using 7 PHVD patient T1-weighted MR images showed that the proposed method yielded a mean DSC of 89.7% ± 4.2%, a MAD of 2.6 ± 1.1 mm, a MAXD of 17.8 ± 6.2 mm, and a VD of 11.6% ± 5.9%, suggesting good agreement with manual segmentations.
Automatic tissue segmentation of neonate brain MR Images with subject-specific atlases
Marie Cherel, Francois Budin, Marcel Prastawa, et al.
Automatic tissue segmentation of the neonate brain using Magnetic Resonance Images (MRI) is extremely important for studying brain development and performing early diagnosis, but is challenging due to the high variability and inhomogeneity of contrast throughout the image caused by the incomplete myelination of the white matter tracts. For these reasons, current methods often fail completely or give unsatisfying results. Furthermore, most of the subcortical midbrain structures are misclassified due to a lack of contrast in these regions. We have developed a novel method that creates a probabilistic subject-specific atlas based on a population atlas currently containing a number of manually segmented cases. The generated subject-specific atlas is sharp and adapted to the subject being processed. We then segment brain tissue classes using the newly created atlas with a single-atlas expectation-maximization based method. Our proposed method leads to a much lower failure rate in our experiments, and the overall segmentation results are considerably improved compared to using a non-subject-specific, population-average atlas. Additionally, we have incorporated diffusion information obtained from Diffusion Tensor Images (DTI) to improve the detection of white matter that is not visible at this early age in structural MRI (sMRI) due to the lack of myelination. Although this necessitates the acquisition of an additional sequence, the diffusion information improves the white matter segmentation throughout the brain, especially for mid-brain structures such as the corpus callosum and the internal capsule.
Shape-based multi-region segmentation framework: application to 3D infants MRI data
Sonia Dahdouh, Isabelle Bloch
This paper presents a novel shape-guided multi-region variational region growing framework for extracting thoracic and abdominal organs simultaneously from 3D infant whole-body MRI. Due to the inherently low quality of these data, classical segmentation methods tend to fail at the multi-segmentation task. To compensate for the low resolution and the lack of contrast, and to enable the simultaneous segmentation of multiple organs, we introduce a segmentation framework on a graph of supervoxels that combines supervoxel intensity distributions weighted by gradient vector flow values and a shape prior per tissue. The intensity-based homogeneity criteria and the shape prior, encoded using Legendre moments, are added as energy terms to the functional to be optimized. The intensity-based energy is computed using both local (voxel value) and global (neighboring region mean values, adjacent voxel values, and distance to the neighboring regions) criteria. Inter-region conflict resolution is handled using a weighted Voronoi decomposition method, the weights being determined from tissue densities. The energy terms of the global energy equation are weighted using information on the growth direction and on the gradient vector flow value. This allows us to either guide the segmentation toward the image's natural edges, if doing so is consistent with the image and shape prior terms, or enforce the shape prior term otherwise. Results on 3D infant MRI data are presented and compared to a set of manual segmentations. Both visual comparison and quantitative measurements show good results.
LOGISMOS-B for primates: primate cortical surface reconstruction and thickness measurement
Ipek Oguz, Martin Styner, Mar Sanchez, et al.
Cortical thickness and surface area are important morphological measures with implications for many psychiatric and neurological conditions. Automated segmentation and reconstruction of the cortical surface from 3D MRI scans is challenging due to the variable anatomy of the cortex and its highly complex geometry. While many methods exist for this task in the context of the human brain, these methods are typically not readily applicable to the primate brain. We propose an innovative approach based on our recently proposed human cortical reconstruction algorithm, LOGISMOS-B, and the Laplace-based thickness measurement method.

Quantitative evaluation of our approach was performed based on a dataset of T1- and T2-weighted MRI scans from 12-month-old macaques where labeling by our anatomical experts was used as independent standard. In this dataset, LOGISMOS-B has an average signed surface error of 0.01 ± 0.03mm and an unsigned surface error of 0.42 ± 0.03mm over the whole brain.

Excluding the rather problematic temporal pole region further improves unsigned surface distance to 0.34 ± 0.03mm. This high level of accuracy reached by our algorithm even in this challenging developmental dataset illustrates its robustness and its potential for primate brain studies.
Robust detection of multiple sclerosis lesions from intensity-normalized multi-channel MRI
Yogesh Karpate, Olivier Commowick, Christian Barillot
Multiple sclerosis (MS) is a disease with heterogeneous evolution among patients. Quantitative analysis of longitudinal Magnetic Resonance Images (MRI) provides a spatial analysis of the brain tissues which may lead to the discovery of biomarkers of disease evolution. A better understanding of the disease will lead to better discovery of pathogenic mechanisms, allowing for patient-adapted therapeutic strategies. To characterize MS lesions, we propose a novel paradigm for detecting white matter lesions based on a statistical framework. It aims at studying the benefits of using multi-channel MRI to detect statistically significant differences between each individual MS patient and a database of control subjects. This framework consists of two components. First, intensity standardization is conducted to minimize inter-subject intensity differences arising from the variability of the acquisition process and different scanners. The intensity normalization maps are parameterized using a robust Gaussian Mixture Model (GMM) estimation that is not affected by the presence of MS lesions. Second, the multi-channel MRI of each MS patient is compared to an atlas built from the control subjects, thereby allowing us to look for differences in normal-appearing white matter and in and around the lesions of each patient. Experimental results demonstrate that our technique accurately detects significant differences in lesions, consequently improving the results of MS lesion detection.
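Once per-tissue intensity landmarks have been estimated (in the paper, via the robust GMM fit), intensity standardization amounts to mapping the subject's landmarks onto reference landmarks. A piecewise-linear sketch of that mapping step, assuming the GMM fit has already produced sorted per-tissue means; the landmark values below are made up for illustration:

```python
import numpy as np

def standardize_intensities(img, subj_means, ref_means):
    """Piecewise-linear intensity standardization.

    subj_means / ref_means: sorted landmark intensities (e.g., per-tissue
    means from a GMM fit on the subject and on the reference population).
    Each voxel intensity is remapped by linear interpolation between
    corresponding landmarks.
    """
    return np.interp(img, subj_means, ref_means)

subj = np.array([10.0, 50.0, 90.0])   # hypothetical CSF/GM/WM means (subject)
ref = np.array([20.0, 60.0, 100.0])   # hypothetical reference means
img = np.array([10.0, 30.0, 90.0])
print(standardize_intensities(img, subj, ref))  # -> [ 20.  40. 100.]
```

After standardization, voxelwise comparison against the control-subject atlas becomes meaningful across scanners.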
Evaluation of an automatic brain segmentation method developed for neonates on adult MR brain images
Pim Moeskops, Max A. Viergever, Manon J. N. L. Benders, et al.
Automatic brain tissue segmentation is of clinical relevance in images acquired at all ages. The literature presents a clear distinction between methods developed for MR images of infants, and methods developed for images of adults. The aim of this work is to evaluate a method developed for neonatal images in the segmentation of adult images. The evaluated method employs supervised voxel classification in subsequent stages, exploiting spatial and intensity information. Evaluation was performed using images available within the MRBrainS13 challenge. The obtained average Dice coefficients were 85.77% for grey matter, 88.66% for white matter, 81.08% for cerebrospinal fluid, 95.65% for cerebrum, and 96.92% for intracranial cavity, currently resulting in the best overall ranking. The possibility of applying the same method to neonatal as well as adult images can be of great value in cross-sectional studies that include a wide age range.
Segmentation
icon_mobile_dropdown
Active contour based segmentation of resected livers in CT images
Simon Oelmann, Cristina Oyarzun Laura, Klaus Drechsler, et al.
The majority of state-of-the-art segmentation algorithms give proper results in healthy organs but not in pathological ones. However, many clinical applications require an accurate segmentation of pathological organs; the determination of target boundaries for radiotherapy and liver volumetry calculations are examples. Volumetry measurements are of special interest after tumor resection for follow-up of liver regrowth. The segmentation of resected livers presents additional challenges that are not addressed by state-of-the-art algorithms. This paper presents a snakes-based algorithm specially developed for the segmentation of resected livers. The algorithm is enhanced with a novel dynamic smoothing technique that allows the active contour to propagate with different speeds depending on the intensities visible in its neighborhood. The algorithm is evaluated on 6 clinical CT images as well as 18 artificial datasets generated from additional clinical CT images.
FIST: a fast interactive segmentation technique
Dirk Padfield, Rahul Bhotika, Alexander Natanzon
Radiologists are required to read thousands of patient images every day, and any tool that can improve their workflow and help them make efficient and accurate measurements is of great value. Such an interactive tool must be intuitive to use; we have found that users are accustomed to clicking on the contour of the object to be segmented and would like the final segmentation to pass through these points. The tool must also be fast, to enable real-time interactive feedback. To meet these needs, we present a segmentation workflow that enables an intuitive method for fast interactive segmentation of 2D and 3D objects. Given simple user clicks on the contour of an object in one 2D view, the algorithm generates foreground and background seeds and computes foreground and background distributions that are used to segment the object in 2D. It then propagates the information to the two orthogonal planes in a 3D volume and segments all three 2D views. The segmentation is automatically updated as the user continues to add points around the contour, and the algorithm is re-run using the full set of points. Based on the segmented objects in these three views, the algorithm then computes a 3D segmentation of the object. This process requires only limited user interaction to segment complex shapes and significantly improves the user's workflow.
A supervoxel-based segmentation method for prostate MR images
Zhiqiang Tian, LiZhi Liu M.D., Baowei Fei
Accurate segmentation of the prostate has many applications in prostate cancer diagnosis and therapy. In this paper, we propose a supervoxel-based method for prostate segmentation. The prostate segmentation problem is cast as assigning a label to each supervoxel. An energy function with data and smoothness terms is used to model the labeling process. The data term estimates the likelihood that a supervoxel belongs to the prostate according to a shape feature, and the geometric relationship between two neighboring supervoxels is used to construct the smoothness term. A three-dimensional (3D) graph cut method is used to minimize the energy function in order to segment the prostate, and a 3D level set is then used to obtain a smooth surface from the output of the graph cut. The performance of the proposed segmentation algorithm was evaluated against manual segmentation ground truth. The experimental results on 12 prostate volumes showed that the proposed algorithm yields a mean Dice similarity coefficient of 86.9% ± 3.2%. The segmentation method can be used not only for the prostate but also for other organs.
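The data-plus-smoothness energy described above can be written down concretely. A Potts-style sketch of the labeling energy that a graph-cut solver would minimize; the cost values and variable names are illustrative, and the actual paper derives the data term from a shape feature and the smoothness term from inter-supervoxel geometry.

```python
import numpy as np

def labeling_energy(labels, data_cost, edges, lam):
    """E(L) = sum_i data_cost[i, L_i] + lam * sum_{(i,j) in edges} [L_i != L_j].

    labels:    one label per supervoxel.
    data_cost: (n_supervoxels, n_labels) unary costs.
    edges:     pairs of neighboring supervoxel indices (Potts smoothness).
    """
    e = sum(data_cost[i, l] for i, l in enumerate(labels))
    e += lam * sum(labels[i] != labels[j] for i, j in edges)
    return e

# Three supervoxels, two labels (0 = background, 1 = prostate)
costs = np.array([[0.1, 0.9],
                  [0.8, 0.2],
                  [0.4, 0.5]])
edges = [(0, 1), (1, 2)]
e = labeling_energy([0, 1, 1], costs, edges, lam=0.3)
print(round(float(e), 6))  # unary 0.8 + one cut edge * 0.3 = 1.1
```

A graph-cut solver searches over labelings to minimize exactly this kind of energy; the sketch only evaluates it for a given labeling.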
A 3D neurovascular bundles segmentation method based on MR-TRUS deformable registration
Xiaofeng Yang, Peter Rossi, Ashesh B. Jani, et al.
In this paper, we propose a 3D neurovascular bundles (NVB) segmentation method for ultrasound (US) image by integrating MR and transrectal ultrasound (TRUS) images through MR-TRUS deformable registration. First, 3D NVB was contoured by a physician in MR images, and the 3D MRdefined NVB was then transformed into US images using a MR-TRUS registration method, which models the prostate tissue as an elastic material, and jointly estimates the boundary deformation and the volumetric deformations under the elastic constraint. This technique was validated with a clinical study of 6 patients undergoing radiation therapy (RT) treatment for prostate cancer. The accuracy of our approach was assessed through the locations of landmarks, as well as previous ultrasound Doppler images of patients. MR-TRUS registration was successfully performed for all patients. The mean displacement of the landmarks between the post-registration MR and TRUS images was less than 2 mm, and the average NVB volume Dice Overlap Coefficient was over 89%. This NVB segmentation technique could be a useful tool as we try to spare the NVB in prostate RT, monitor NVB response to RT, and potentially improve post-RT potency outcomes.
Pancreas segmentation from 3D abdominal CT images using patient-specific weighted subspatial probabilistic atlases
Abdominal organ segmentations from CT volumes are now widely used in computer-aided diagnosis and surgery assistance systems. Among abdominal organs, the pancreas is especially difficult to segment because of the large individual differences in its shape and position. In this paper, we propose a new pancreas segmentation method from 3D abdominal CT volumes using patient-specific weighted-subspatial probabilistic atlases. First, we normalize the organ shapes in the training volumes and the input volume. We extract the volume of interest (VOI) of the pancreas from the training volumes and the input volume, and divide each training VOI and the input VOI into cubic regions. We use a nonrigid registration method to register these cubic regions of the training VOIs to the corresponding regions of the input VOI. Based on the registration results, we calculate similarities between each cubic region of the training VOIs and the corresponding region of the input VOI, and select the cubic regions of the training volumes having the top N similarities in each cubic region. We then subspatially construct probabilistic atlases weighted by these similarities in each cubic region. After integrating the probabilistic atlases of the cubic regions into one, we perform a rough-to-precise segmentation of the pancreas using the atlas. The results of our experiments showed that utilizing the training volumes having the top N similarities in each cubic region led to good pancreas segmentation results. The Jaccard index and the average surface distance of the result were 58.9% and 2.04 mm on average, respectively.
Classification
icon_mobile_dropdown
Random local binary pattern based label learning for multi-atlas segmentation
Multi-atlas segmentation methods have attracted increasing attention in the field of medical image segmentation. They segment the target image by combining warped atlas labels according to a label fusion strategy, usually based on the intensity information of the target and atlas images. However, it has been demonstrated that image intensity information itself is not discriminative enough for distinguishing different subcortical structures in brain magnetic resonance (MR) images. Recent advances in multi-atlas based segmentation have witnessed the success of label fusion methods built on informative image features. The key component in these methods is the image feature extraction. Conventional image feature extraction methods, such as textural feature extraction, are built on manually designed image filters, and their performance varies when applied to different segmentation problems. In this paper, we propose a random local binary pattern (RLBP) method to generate image features in a random fashion. Based on RLBP features, we use a local learning strategy to fuse labels in multi-atlas based segmentation. Our method has been validated for segmenting the hippocampus from MR images. The experimental results demonstrate that our method achieves segmentation performance competitive with state-of-the-art methods.
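As an illustration (not the authors' implementation), a local binary pattern with randomly drawn neighbor offsets can be computed as below; the function names and the uniform offset sampling are assumptions:

```python
import numpy as np

def random_offsets(n_bits, radius, rng):
    """Draw n_bits random (dy, dx) neighbor offsets within the given radius."""
    return [tuple(rng.integers(-radius, radius + 1, size=2)) for _ in range(n_bits)]

def local_binary_pattern(image, y, x, offsets):
    """Compare the center pixel against pixels at the given offsets and
    pack the comparison bits into a single integer code."""
    center = image[y, x]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if image[y + dy, x + dx] >= center:
            code |= 1 << bit
    return code
```

Fixing one random offset set per feature dimension yields a "random" LBP feature vector in the spirit of the abstract.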
Multi-output decision trees for lesion segmentation in multiple sclerosis
Amod Jog, Aaron Carass, Dzung L. Pham, et al.
Multiple Sclerosis (MS) is a disease of the central nervous system in which the protective myelin sheath of the neurons is damaged. MS leads to the formation of lesions, predominantly in the white matter of the brain and the spinal cord. The number and volume of lesions visible in magnetic resonance imaging (MRI) are important criteria for diagnosing and tracking the progression of MS. Locating and delineating lesions manually requires the tedious and expensive efforts of highly trained raters. In this paper, we propose an automated algorithm to segment lesions in MR images using multi-output decision trees. We evaluated our algorithm on the publicly available MICCAI 2008 MS Lesion Segmentation Challenge training dataset of 20 subjects, and showed improved results in comparison to state-of-the-art methods. We also evaluated our algorithm on an in-house dataset of 49 subjects, achieving a true positive rate of 0.41 and a positive predictive value of 0.36.
Trabecular bone class mapping across resolutions: translating methods from HR-pQCT to clinical CT
Alexander Valentinitsch, Lukas Fischer, Janina M. Patsch, et al.
Quantitative assessment of 3D bone microarchitecture with high-resolution peripheral quantitative computed tomography (HR-pQCT) has shown promise in fracture risk assessment and biomechanics, but is limited to the distal radius and tibia. Trabecular microarchitecture classes (TMACs), based on voxel-wise clustering of texture and structure tensor features in HR-pQCT, are extended in this paper to quantify trabecular bone classes in clinical multi-detector CT (MDCT) images. Our comparison of TMACs in 12 cadaver radii imaged with both HR-pQCT and MDCT yields a mean Dice score of up to 0.717 ± 0.40 and visually concordant bone quality maps. Further work towards clinically viable quantitative bone imaging, validated against HR-pQCT, could have a significant impact on overall bone health assessment.
Cerebral microbleed segmentation from susceptibility weighted images
Snehashis Roy, Amod Jog, Elizabeth Magrath, et al.
Cerebral microbleeds (CMBs) are a common marker of traumatic brain injury. Accurate detection and quantification of CMBs are important for better understanding the progression and prognosis of the injury. Previous microbleed detection methods have suffered from a high rate of false positives, which are time consuming to correct manually. In this paper, we propose a fully automatic, example-based method to segment CMBs from susceptibility-weighted imaging (SWI) scans, where examples from an already segmented template SWI image are used to detect CMBs in a new image. First, multiple radial symmetry transforms (RSTs) are performed on the template SWI to detect small ellipsoidal structures, which serve as potential microbleed candidates. Then 3D patches from the SWI and its RSTs are combined to form a feature vector at each voxel of the image. A random forest regression is trained using the feature vectors, where the dependent variable is the binary segmentation voxel of the template. Once the regression is learnt, it is applied to a new SWI scan, whose feature vectors contain patches from the SWI and its RSTs. Experiments on 26 subjects with mild to severe brain injury show a CMB detection sensitivity of 85.7%, specificity of 99.5%, and a false positive to true positive ratio of 1.73, which is competitive with published methods while providing a significant reduction in computation time.
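The per-voxel feature construction described above (3D patches drawn from the SWI volume and each of its radial symmetry transforms, concatenated into one vector) might look like the following sketch; the function name and patch radius are assumptions, and the random forest regression itself would come from a library:

```python
import numpy as np

def voxel_feature(volumes, z, y, x, r=1):
    """Concatenate cubic patches of radius r, centered at one voxel, taken
    from several aligned volumes (e.g. the SWI scan and its radial
    symmetry transforms) into a single feature vector."""
    patches = [v[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1].ravel()
               for v in volumes]
    return np.concatenate(patches)
```

Feature vectors built this way at every template voxel, paired with the binary template labels, would form the training set for the regression.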
Rotation invariant eigenvessels and auto-context for retinal vessel detection
Alessio Montuoro, Christian Simader, Georg Langs, et al.
Retinal vessels are one of the few anatomical landmarks that are clearly visible in various imaging modalities of the eye. As they are also relatively invariant to disease progression, retinal vessel segmentation allows cross-modal and temporal registration, enabling accurate diagnosis of various eye diseases such as diabetic retinopathy, hypertensive retinopathy or age-related macular degeneration (AMD). Due to the clinical significance of retinal vessels, many different segmentation approaches have been published in the literature. In contrast to other segmentation approaches, our method is not specifically tailored to the task of retinal vessel segmentation. Instead we utilize a more general image classification approach and show that it can achieve comparable results. In the proposed method we utilize the concepts of eigenfaces and auto-context. Eigenfaces have been described quite extensively in the literature and their performance is well known. They are, however, quite sensitive to translation and rotation. The former was addressed by computing the eigenvessels in local image windows of different scales, the latter by estimating and correcting the local orientation. Auto-context aims to incorporate automatically generated context information into the training phase of classification approaches; it has been shown to improve the performance of spinal cord segmentation and 3D brain image segmentation. The proposed method achieves an area under the receiver operating characteristic (ROC) curve of Az = 0.941 on the DRIVE data set, comparable to current state-of-the-art approaches.
Deep convolutional networks for pancreas segmentation in CT imaging
Holger R. Roth, Amal Farag, Le Lu, et al.
Automatic organ segmentation is an important prerequisite for many computer-aided diagnosis systems. The high anatomical variability of organs in the abdomen, such as the pancreas, prevents many segmentation methods from achieving accuracies as high as state-of-the-art segmentations of organs like the liver, heart or kidneys. Recently, the availability of large annotated training sets and the accessibility of affordable parallel computing resources via GPUs have made it feasible for "deep learning" methods such as convolutional networks (ConvNets) to succeed in image classification tasks. These methods have the advantage that the classification features used are trained directly from the imaging data.

We present a fully automated bottom-up method for pancreas segmentation in computed tomography (CT) images of the abdomen. The method is based on hierarchical coarse-to-fine classification of local image regions (superpixels). Superpixels are extracted from the abdominal region using Simple Linear Iterative Clustering (SLIC). An initial probability response map is generated using patch-level confidences and a two-level cascade of random forest classifiers, from which superpixel regions with probabilities larger than 0.5 are retained. These retained superpixels serve as a highly sensitive initial localization of the pancreas and its surroundings for a ConvNet that samples a bounding box around each superpixel at different scales (and with random non-rigid deformations at training time) in order to assign a more distinct probability of each superpixel region being pancreas or not.

We evaluated our method on CT images of 82 patients (60 for training, 2 for validation, and 20 for testing). Using ConvNets we achieve an average maximum Dice score of 68% ± 10% (range, 43-80%) in testing. This shows promise for accurate pancreas segmentation using a deep learning approach, and compares favorably to state-of-the-art methods.
Motion/Time Series
icon_mobile_dropdown
Robust bladder image registration by redefining data-term in total variational approach
Sharib Ali, Christian Daul, Ernest Galbrun, et al.
Cystoscopy is the standard procedure for clinical diagnosis of bladder cancer. Bladder carcinomas in situ are often multifocal and spread over large areas. In vivo localization and follow-up of these tumors and their nearby sites is necessary. However, due to the small field of view (FOV) of cystoscopic video images, urologists cannot easily interpret the scene. Bladder mosaicing using image registration facilitates this interpretation through the visualization of entire lesions with respect to anatomical landmarks. The reference white light (WL) modality is affected by strong variability in terms of texture, illumination conditions and motion blur. Moreover, in the complementary fluorescence light (FL) modality, the texture is visually different from that of the WL. Existing algorithms were developed for a particular modality and scene conditions. This paper proposes a more general on-the-fly image registration approach for dealing with these variability issues in cystoscopy. To do so, we present a novel, robust and accurate image registration scheme obtained by redefining the data term of the classical total variational (TV) approach. Quantitative results on realistic bladder phantom images are used to verify the accuracy and robustness of the proposed model. The method is also qualitatively assessed by mosaicing patient data in both the WL and FL modalities.
Joint registration of location and orientation of intravascular ultrasound pullbacks using a 3D graph based method
Ling Zhang, Andreas Wahle, Zhi Chen, et al.
A novel method for simultaneous registration of the location and orientation of baseline and follow-up intravascular ultrasound (IVUS) pullbacks is reported. The main idea is to represent the registration problem as a 3D graph optimization problem (finding a minimum-cost path) solvable by dynamic programming. Thus, global optimality of the resulting location and orientation registration is guaranteed with respect to the employed cost function and node connections. The cost function integrates information related to vessel/plaque morphology, plaque shape and plaque/perivascular image data. The node connections incorporate prior information about angular twisting between consecutively co-registered IVUS image pairs. Pilot validation of our method is currently available for four pairs of IVUS pullback sequences consisting of 323 IVUS image frames from four patients. Results showed that the average location and orientation registration errors were 0.26 mm and 5.2°, respectively. Compared with our previous results, the new method offers a significant alignment improvement (p < 0.001).
Optimal-mass-transfer-based estimation of glymphatic transport in living brain
Vadim Ratner, Liangjia Zhu, Ivan Kolesov, et al.
It was recently shown that the brain-wide cerebrospinal fluid (CSF) and interstitial fluid exchange system designated the ‘glymphatic pathway’ plays a key role in removing waste products from the brain, similarly to the lymphatic system in other body organs. It is therefore important to study the flow patterns of glymphatic transport through the live brain in order to better understand its functionality in normal and pathological states. Unlike blood, the CSF does not flow rapidly through a network of dedicated vessels, but rather through para-vascular channels and brain parenchyma on a much slower time scale, and thus conventional fMRI or other blood-flow-sensitive MRI sequences do not provide much useful information about the desired flow patterns. We have accordingly analyzed a series of MRI images, taken at different times, of the brain of a live rat that was injected with a paramagnetic tracer into the CSF via the lumbar intrathecal space of the spine. Our goal is twofold: (a) find glymphatic (tracer) flow directions in the live rodent brain; and (b) provide a model of a (healthy) brain that allows the prediction of tracer concentrations given initial conditions. We model the liquid flow through the brain by the diffusion equation. We then use the optimal mass transfer (OMT) approach to derive the glymphatic flow vector field, and estimate the diffusion tensors by analyzing the changes in the flow. Simulations show that the resulting model successfully reproduces the dominant features of the experimental data.
Keywords: inverse problem, optimal mass transport, diffusion equation, cerebrospinal fluid flow in brain, optical flow, liquid flow modeling, Monge-Kantorovich problem, diffusion tensor estimation
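The diffusion model of tracer transport can be illustrated with a single explicit finite-difference step of the isotropic diffusion equation. This toy 2D sketch with a scalar diffusivity D (the paper estimates full diffusion tensors) is an assumption for illustration only:

```python
import numpy as np

def diffuse_step(c, D, dt=0.1):
    """One explicit finite-difference step of the isotropic diffusion
    equation dc/dt = D * laplacian(c) on a 2D concentration grid,
    with zero-flux boundaries approximated by edge padding."""
    p = np.pad(c, 1, mode='edge')
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4.0 * p[1:-1, 1:-1])
    return c + dt * D * lap
```

With zero-flux boundaries this step conserves total tracer mass, which is the basic property an optimal-mass-transport analysis of the image series relies on.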
Robust temporal alignment of multimodal cardiac sequences
Andrea Perissinotto, Sandro Queirós, Pedro Morais, et al.
Given the dynamic nature of cardiac function, correct temporal alignment of pre-operative models and intra-operative images is crucial for augmented reality in cardiac image-guided interventions. As such, the current study focuses on the development of an image-based strategy for temporal alignment of multimodal cardiac imaging sequences, such as cine Magnetic Resonance Imaging (MRI) or 3D Ultrasound (US). First, we derive a robust, modality-independent signal from the image sequences by computing the normalized cross-correlation between each frame in the temporal sequence and the end-diastolic frame. This signal is a surrogate for the left-ventricle (LV) volume curve over time, whose variation indicates different temporal landmarks of the cardiac cycle. We then temporally align these surrogate signals derived from MRI and US sequences of the same patient through Dynamic Time Warping (DTW), allowing both sequences to be synchronized. The proposed framework was evaluated on 98 patients who had undergone both 3D+t MRI and US scans. The end-systolic frame could be accurately estimated as the minimum of the image-derived surrogate signal, presenting a relative error of 1.6 ± 1.9% and 4.0 ± 4.2% for the MRI and US sequences, respectively, thus supporting its association with key temporal instants of the cardiac cycle. The use of DTW reduces the desynchronization of cardiac events in the MRI and US sequences, allowing multimodal cardiac imaging sequences to be temporally aligned. Overall, a generic, fast and accurate method for temporal synchronization of MRI and US sequences of the same patient was introduced. This approach could be straightforwardly used for the correct temporal alignment of pre-operative MRI information and intra-operative US images.
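The two steps described here, an NCC-based surrogate signal against the end-diastolic frame followed by dynamic time warping, can be sketched as follows; this is an illustrative reconstruction, not the authors' code, and the function names are assumptions:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two frames (flattened)."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def surrogate_signal(frames, reference):
    """NCC of every frame against the end-diastolic reference frame."""
    return [ncc(f, reference) for f in frames]

def dtw_cost(s, t):
    """Classic O(len(s) * len(t)) dynamic-time-warping distance."""
    n, m = len(s), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(s[i - 1] - t[j - 1])
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```

Backtracking through the DTW cost matrix (omitted here) yields the frame-to-frame correspondence used to synchronize the MRI and US sequences.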
Relating speech production to tongue muscle compressions using tagged and high-resolution magnetic resonance imaging
The human tongue is composed of multiple internal muscles that work collaboratively during the production of speech. Assessment of muscle mechanics can help understand the creation of tongue motion, interpret clinical observations, and predict surgical outcomes. Although various methods have been proposed for computing the tongue’s motion, associating motion with muscle activity in an interdigitated fiber framework has not been studied. In this work, we aim to develop a method that reveals the activities of different tongue muscles in different time phases during speech. We use four-dimensional tagged magnetic resonance (MR) images and static high-resolution MR images to obtain tongue motion and muscle anatomy, respectively. We then compute strain tensors and local tissue compression along the muscle fiber directions in order to reveal their shortening pattern. This process relies on multiple image analysis methods, including super-resolution volume reconstruction from MR image slices, segmentation of internal muscles, tracking of the incompressible motion of tissue points using tagged images, propagation of muscle fiber directions over time, and calculation of strain in the line of action. We evaluated the method on a control subject and two post-glossectomy patients in a controlled speech task. The normal subject’s tongue muscle activity shows high correspondence with the production of speech at different time instants, while both patients’ muscle activities show patterns different from the control due to their resected tongues. This method shows potential for relating overall tongue motion to particular muscle activity, which may provide novel information for future clinical and scientific studies.
Registration
icon_mobile_dropdown
Automatic assessment of volume asymmetries applied to hip abductor muscles in patients with hip arthroplasty
Christian Klemt, Marc Modat, Jonas Pichat, et al.
Metal-on-metal (MoM) hip arthroplasties have been utilised over the last 15 years to restore hip function for 1.5 million patients worldwide. Although widely used, this hip arthroplasty releases metal wear debris which leads to muscle atrophy. The degree of muscle wastage differs across patients, ranging from mild to severe. The long-term outcomes for patients with MoM hip arthroplasty worsen with increasing degrees of muscle atrophy, highlighting the need to automatically segment pathological muscles. The automated segmentation of pathological soft tissues is challenging, as these lack distinct boundaries and differ morphologically across subjects. As a result, no method reported in the literature has been successfully applied to automatically segment pathological muscles. We propose the first automated framework to delineate severely atrophied muscles by applying a novel automated segmentation propagation framework to patients with MoM hip arthroplasty. The proposed algorithm was used to automatically quantify muscle wastage in these patients.
Evaluation of five image registration tools for abdominal CT: pitfalls and opportunities with soft anatomy
Christopher P. Lee, Zhoubing Xu, Ryan P. Burke, et al.
Image registration has become an essential image processing technique to compare data across time and individuals. With the successes in volumetric brain registration, general-purpose software tools are beginning to be applied to abdominal computed tomography (CT) scans. Herein, we evaluate five current tools for registering clinically acquired abdominal CT scans. Twelve abdominal organs were labeled on a set of 20 atlases to enable assessment of correspondence. The 20 atlases were pairwise registered based on intensity information alone with five registration tools (affine IRTK, FNIRT, non-rigid IRTK, NiftyReg, and ANTs). Following the brain literature, the Dice similarity coefficient (DSC), mean surface distance, and Hausdorff distance were calculated on the registered organs individually. However, interpretation was confounded by a significant proportion of outliers. Examining the retrospectively selected top 1 and top 5 atlases for each target revealed a substantive performance difference between methods. To further our understanding, we constructed majority-vote segmentations from the top 5 DSC values for each organ and target. The results illustrated a median improvement of 85% in DSC between the raw results and the majority vote. These experiments show that some images may be well registered to some targets using the available software tools, but there is significant room for improvement, revealing the need for innovation and research in the field of abdominal CT registration. If image registration is to be used for local interpretation of abdominal CT, great care must be taken to account for outliers (e.g., atlas selection in statistical fusion).
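Two of the quantities used in this evaluation, the Dice similarity coefficient and the majority-vote fusion of the top-ranked registered atlases, are simple to state in code. This is a sketch, not the authors' implementation:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary label masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def majority_vote(masks):
    """Fuse several binary segmentations: a voxel is foreground when
    more than half of the input masks mark it as foreground."""
    stack = np.asarray(masks, bool)
    return stack.sum(axis=0) > stack.shape[0] / 2.0
```

In the experiment above, the masks fused per organ and target would be the five propagated atlas labels with the highest DSC.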
Remapping of digital subtraction angiography on a standard fluoroscopy system using 2D-3D registration
Mazen G. Alhrishy, Andreas Varnavas, Alexis Guyot, et al.
Fluoroscopy-guided endovascular interventions are being performed for more and more complex cases with longer screening times. However, X-ray is much better at visualizing interventional devices and dense structures than vasculature. To visualize vasculature, angiography screening is essential but requires the use of iodinated contrast medium (ICM), which is nephrotoxic. Acute kidney injury is the main life-threatening complication of ICM. Digital subtraction angiography (DSA) is also often a major contributor to overall patient radiation dose (81% has been reported). Furthermore, a DSA image is only valid for the current interventional view, and not for the new view once the C-arm is moved. In this paper, we propose the use of 2D-3D image registration between intraoperative images and the preoperative CT volume to facilitate DSA remapping using a standard fluoroscopy system. This allows repeated ICM-free DSA and has the potential to enable a reduction in ICM usage and radiation dose. Experiments were carried out using 9 clinical datasets. In total, 41 DSA images were remapped. For each dataset, the maximum and average remapping errors were calculated and are presented. Numerical results showed an overall average error of 2.50 mm, with 7 patients scoring average errors < 3 mm and 2 patients < 6 mm.
Discontinuous nonrigid registration using extended free-form deformations
Rui Hua, Jose M. Pozo, Zeike A. Taylor, et al.
This paper presents a novel method to treat discontinuities in a 3D piece-wise non-rigid registration framework, coined EXtended Free-Form Deformation (XFFD). Discontinuities in the image, such as the sliding motion of the lungs or the cardiac boundary adjacent to the blood pool, must be handled to obtain physically plausible deformation fields for motion analysis. However, conventional free-form deformations (FFDs) impose continuity over the whole image, introducing inaccuracy near discontinuity boundaries. The proposed method incorporates enrichment functions into the FFD formalism, inspired by the linear interpolation method in the EXtended Finite Element Method (XFEM). Enrichment functions enable B-splines to handle discontinuities with minimal increase in computational complexity, while avoiding the boundary-matching problem. The method retains all properties of the FFD framework, yet seamlessly handles general discontinuities and can coexist with other proposed improvements of the FFD formalism. The proposed method showed high performance on synthetic and 3D lung CT images. The target registration error on the CT images is comparable to that of previous methods, while the method remains generic, without assuming any type of motion constraint; therefore, it does not include any penalty term. However, any such term could be included to achieve higher accuracy for specific applications.
Using image synthesis for multi-channel registration of different image modalities
Min Chen, Amod Jog, Aaron Carass, et al.
This paper presents a multi-channel approach for performing registration between magnetic resonance (MR) images with different modalities. In general, a multi-channel registration cannot be used when the moving and target images do not have analogous modalities. In this work, we address this limitation by using a random forest regression technique to synthesize the missing modalities from the available ones. This allows a single-channel registration between two different modalities to be converted into a multi-channel registration with two mono-modal channels. To validate our approach, two openly available registration algorithms and five cost functions were used to compare the label transfer accuracy of the registration with (and without) our multi-channel synthesis approach. Our results show that the proposed method produced statistically significant improvements in registration accuracy (at an α level of 0.001) for both algorithms and all cost functions when compared to a standard multi-modal registration using the same algorithms with mutual information.
Getting the most out of additional guidance information in deformable image registration by leveraging multi-objective optimization
Incorporating additional guidance information, e.g., landmark/contour correspondence, in deformable image registration is often desirable and is typically done by adding constraints or cost terms to the optimization function. Commonly, deciding between a “hard” constraint and a “soft” additional cost term, as well as the weighting of cost terms in the optimization function, is done on a trial-and-error basis. The aim of this study is to investigate the advantages of exploiting guidance information from a multi-objective optimization perspective. To this end, next to objectives related to match quality and amount of deformation, we define a third objective related to guidance information. Multi-objective optimization eliminates the need to tune a weighting of objectives in a single optimization function a priori, as well as the strict requirement of fulfilling hard guidance constraints. Instead, Pareto-efficient trade-offs between all objectives are found, effectively making the introduction of guidance information straightforward, independent of its type or scale. Further, since complete Pareto fronts also contain less interesting parts (i.e., solutions with near-zero deformation effort), we study how adaptive steering mechanisms can be incorporated to automatically focus more on solutions of interest. We performed experiments on artificial and real clinical data with large differences, including disappearing structures. Results show the substantial benefit of using additional guidance information. Moreover, compared to the 2-objective case, the additional computational cost is negligible. Finally, with the same computational budget, the adaptive steering mechanism provides superior solutions in the area of interest.
Poster Session
icon_mobile_dropdown
Evaluating intensity normalization for multispectral classification of carotid atherosclerotic plaque
Shan Gao, Ronald van’t Klooster, Diederik F. van Wijk, et al.
Intensity normalization is an important preprocessing step for automatic plaque analysis in MR images, as most segmentation algorithms require the images to have a standardized intensity range. In this study, we derived several intensity normalization approaches, with inspiration from expert manual analysis protocols, for classification of carotid vessel wall plaque from in vivo multispectral MRI. We investigated intensity normalization based on a circular region centered at the lumen (nCircle); on the sternocleidomastoid muscle (nSCM); on intensity scaling (nScaling); on manually classified fibrous tissue (nManuFibrous); and on automatically classified fibrous tissue (nAutoFibrous). The proposed normalization methods were evaluated using three metrics: (1) Dice similarity coefficient (DSC) between manual and automatic segmentations obtained by classifiers using the different normalizations; (2) correlation between the proposed normalizations and the normalization used by the expert; (3) Mahalanobis distance between pairs of components. In the classification experiments, features of the normalized image; smoothed, gradient-magnitude, and Laplacian images at multiple scales; distance to the lumen; distance to the outer wall; and wall thickness were calculated for each vessel wall (VW) pixel. A supervised pattern recognition system, based on a linear discriminant classifier, was trained using the manual segmentation result to classify each VW pixel as one of four classes (fibrous tissue, lipid, calcification, or loose matrix) according to the highest posterior probability. We evaluated our method on image data of 23 patients. Compared to the result of conventional square-region-based intensity normalization, nScaling resulted in a significant increase in DSC for lipid (p = 0.006) and nAutoFibrous in a significant increase in DSC for calcification (p = 0.004).
In conclusion, it was demonstrated that the conventional region-based normalization approach is not optimal, and that nAutoFibrous and nScaling are promising approaches that deserve further study.
Segmentation of skin strata in reflectance confocal microscopy depth stacks
Samuel C. Hames, Marco Ardigò, H. Peter Soyer, et al.
Reflectance confocal microscopy is an emerging tool for imaging human skin, but it currently requires assessment by human experts. Overcoming this requirement calls for automated tools for assessing reflectance confocal microscopy imagery.

This work presents a novel approach to this task, using a bag-of-visual-words representation to classify en-face optical sections from four distinct strata of the skin. A dictionary of representative features is learned from whitened and normalised patches using hierarchical spherical k-means. Each image is then represented by extracting a dense array of patches and encoding each with the most similar element in the dictionary. Linear discriminant analysis is used as a simple linear classifier.

The proposed framework was tested on 308 depth stacks from 54 volunteers. Parameters were tuned using 10-fold cross-validation on a training subset of the data, and final evaluation was performed on a held-out test set.

The proposed method generated physically plausible profiles of the distinct strata of human skin, and correctly classified 81.4% of sections in the test set.
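The encoding step described above, assigning each normalized patch to its most similar dictionary atom and pooling the assignments into a histogram, can be sketched as follows; this is illustrative (the hierarchical spherical k-means dictionary learning itself is omitted, and the function name is an assumption):

```python
import numpy as np

def encode_bow(patches, dictionary):
    """Bag-of-visual-words histogram: assign each (unit-normalized) patch
    to its most similar dictionary atom by cosine similarity and count
    the assignments."""
    P = np.asarray(patches, float)
    P = P / np.linalg.norm(P, axis=1, keepdims=True)
    D = np.asarray(dictionary, float)
    D = D / np.linalg.norm(D, axis=1, keepdims=True)
    assignments = (P @ D.T).argmax(axis=1)
    hist = np.bincount(assignments, minlength=len(D)).astype(float)
    return hist / hist.sum()
```

One such histogram per optical section would then be the input to the linear discriminant classifier.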
Towards high-throughput mouse embryonic phenotyping: a novel approach to classifying ventricular septal defects
Xi Liang, Zhongliu Xie, Masaru Tamura, et al.
The goal of the International Mouse Phenotyping Consortium (IMPC, www.mousephenotype.org) is to study all of the more than 23,000 genes in the mouse by knocking them out one by one for comparative analysis. Large numbers of knockout mouse lines have been raised, leading to a strong demand for high-throughput phenotyping technologies. Traditional phenotyping via time-consuming histological examination is clearly unsuitable in this scenario. Biomedical imaging technologies such as CT and MRI have therefore started being used to develop more efficient phenotyping approaches. Existing work, however, primarily rests on volumetric analysis of anatomical structures to detect anomalies, yet this type of method generally fails when features are subtle, such as ventricular septal defects (VSD) in the heart; meanwhile, phenotypic assessment normally requires expert manual labor. This study proposes, to the best of our knowledge, the first automatic VSD diagnostic system for mouse embryos. Our algorithm starts with the creation of an atlas using wild-type mouse images, followed by registration of knockouts to the atlas to perform atlas-based segmentation of the heart and then the ventricles, after which the ventricle segmentation is further refined using a region growing technique. VSD classification is completed by checking for the existence of an overlap between the left and right ventricles. Our approach has been validated on a database of 14 mouse embryo images, and achieved an overall accuracy of 90.9%, with a sensitivity of 66.7% and a specificity of 100%.
A primal dual fixed point algorithm for constrained optimization problems with applications to image reconstruction
Computed tomography (CT) image reconstruction problems can be solved by minimizing a suitable objective function. First-order methods for image reconstruction in CT have been popularized in recent years. These methods are attractive because they need only first-derivative information about the objective function and can handle non-smooth regularization functions. In this paper, we consider a constrained optimization problem that often appears in CT image reconstruction; the unconstrained case has been studied recently. We propose an efficient algorithm to solve the constrained problem. Numerical experiments on an image reconstruction benchmark show that the proposed algorithm produces reconstructed images with better signal-to-noise ratio than the original algorithm and other state-of-the-art methods.
Cerenkov luminescence tomography based on preconditioning orthogonal matching pursuit
Haixiao Liu, Zhenhua Hu, Kun Wang, et al.
Cerenkov luminescence imaging (CLI) is a novel optical imaging method and has been shown to be a potential substitute for traditional radionuclide imaging such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT). This imaging method inherits the high sensitivity of nuclear medicine and the low cost of optical molecular imaging. To obtain the depth information of the radioactive isotope, Cerenkov luminescence tomography (CLT) is established and the 3D distribution of the isotope is reconstructed. However, because of strong absorption and scattering, reconstruction of the CLT sources reduces to an ill-posed linear system that is hard to solve. In this work, the sparse nature of the light source was taken into account and a preconditioning orthogonal matching pursuit (POMP) method was established to effectively reduce the ill-posedness and obtain better reconstruction accuracy. To assess the accuracy and speed of this algorithm, a heterogeneous numerical phantom experiment and an in vivo mouse experiment were conducted. Both the simulation and the mouse experiment showed that our method provides more accurate reconstructions than the traditional Tikhonov regularization method and the ordinary orthogonal matching pursuit (OMP) method. Our reconstruction method will provide technical support for biological applications of Cerenkov luminescence.
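The preconditioning step of POMP is specific to the paper, but the underlying orthogonal matching pursuit loop is standard greedy sparse recovery. A minimal numpy sketch of ordinary OMP, with a made-up random system standing in for the light-propagation matrix:

```python
import numpy as np

def omp(A, y, sparsity):
    """Plain orthogonal matching pursuit: greedily pick the column of A most
    correlated with the residual, then re-fit all picked columns by least squares."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        # atom most correlated with the current residual
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares refit on the selected support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# toy stand-in for the tomographic system: recover a 2-sparse source exactly
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20)
x_true[3], x_true[11] = 1.5, -2.0
y = A @ x_true
x_hat = omp(A, y, sparsity=2)
```

The preconditioning in POMP would additionally transform `A` and `y` before this loop to reduce the ill-posedness of the system; that step is not shown here.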
Beam hardening correction for sparse-view CT reconstruction
Wenlei Liu, Junyan Rong, Peng Gao, et al.
Beam hardening, which is caused by the polychromatic spectrum of the X-ray beam, may produce various artifacts in the reconstructed image and degrade image quality. These artifacts are further aggravated in sparse-view reconstruction due to insufficient sampling data. Considering the advantages of total-variation (TV) minimization in CT reconstruction with sparse-view data, we propose in this paper a beam hardening correction method for sparse-view CT reconstruction based on Brabant's model. In this correction model, the attenuation coefficient of each voxel at the effective energy is modeled and estimated linearly, and the model can be applied in an iterative framework such as the simultaneous algebraic reconstruction technique (SART). By integrating the correction model into the forward projector of the algebraic reconstruction technique (ART), TV minimization can recover images when only a limited number of projections is available. The proposed method needs no prior information about the beam spectrum. Preliminary validation using Monte Carlo simulations indicates that the proposed method provides better reconstructed images from sparse-view projection data, with effective suppression of artifacts caused by beam hardening. With appropriate modeling of other degrading effects, such as photon scattering, the proposed framework may provide a new way toward low-dose CT imaging.
Heritability analysis of surface-based cortical thickness estimation on a large twin cohort
Kaikai Shen, Vincent Doré, Stephen Rose, et al.
The aim of this paper is to assess the heritability of the cerebral cortex, based on measurements of grey matter (GM) thickness derived from structural MR images (sMRI). With data acquired from a large twin cohort (328 subjects), an automated method was used to estimate cortical thickness, and the EM-ICP surface registration algorithm was used to establish the correspondence of the cortex across the population. An ACE model was then employed to compute the heritability of cortical thickness. Cortical thickness was found to be heritable in various cortical regions, especially in the frontal and parietal lobes, including the bilateral postcentral gyri, superior occipital gyri, superior parietal gyri, precuneus, the orbital part of the right frontal gyrus, right medial superior frontal gyrus, right middle occipital gyrus, right paracentral lobule, left precentral gyrus, and left dorsolateral superior frontal gyrus.
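The ACE decomposition underlying such heritability estimates can be written out explicitly; the following is the standard textbook formulation (with Falconer's estimator as the simplest special case), not the authors' exact fitting procedure:

```latex
% Phenotypic variance split into additive genetic (A), common
% environmental (C), and unique environmental (E) components:
V_P = \sigma_A^2 + \sigma_C^2 + \sigma_E^2, \qquad
h^2 = \frac{\sigma_A^2}{V_P}.

% Expected twin correlations under the ACE model:
r_{MZ} = \frac{\sigma_A^2 + \sigma_C^2}{V_P}, \qquad
r_{DZ} = \frac{\tfrac{1}{2}\sigma_A^2 + \sigma_C^2}{V_P},

% which yields Falconer's heritability estimate
\hat{h}^2 = 2\,(r_{MZ} - r_{DZ}).
```

In practice the variance components are fitted per vertex by maximum likelihood rather than by Falconer's formula, giving a heritability map over the cortical surface.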
Estimating diffusion properties in complex fiber configurations based on structure-adaptive multi-valued tensor-field filtering
Jianfei Yang, Dirk H. J. Poot, Georgius A. M. Arkesteijn, et al.
Conventionally, a single rank-2 tensor is used to assess the white matter integrity in diffusion imaging of the human brain. However, a single tensor fails to describe the diffusion in fiber crossings. Although a dual tensor model is able to do so, the low signal-to-noise ratio hampers reliable parameter estimation as the number of parameters is doubled.

We present a framework for structure-adaptive tensor-field filtering to enhance statistical analysis in complex fiber structures. In our framework, a tensor model is fitted based on an automated relevance determination method. In particular, a single-tensor model is applied to voxels in which the data appear to represent a single fiber, and a dual-tensor model to voxels appearing to contain crossing fibers. To improve the estimation of the model parameters, we propose a structure-adaptive tensor filter that is applied only to tensors belonging to the same fiber compartment.

It is demonstrated that the structure-adaptive tensor-field filter improves the continuity and regularity of the estimated tensor field. It outperforms an existing denoising approach, LMMSE, which is applied to the diffusion-weighted images. Tract-based spatial statistics analysis of fiber-specific FA maps shows that the method sustains the detection of more subtle changes in white matter tracts than the classical single-tensor-based analysis.

Thus, the filter enhances the applicability of the dual-tensor model in diffusion imaging research. Specifically, the reliable estimation of two tensor diffusion properties facilitates fiber-specific extraction of diffusion features.
Joint brain connectivity estimation from diffusion and functional MRI data
Shu-Hsien Chu, Christophe Lenglet, Keshab K. Parhi
Estimating brain wiring patterns is critical to better understand the brain organization and function. Anatomical brain connectivity models axonal pathways, while the functional brain connectivity characterizes the statistical dependencies and correlation between the activities of various brain regions. The synchronization of brain activity can be inferred through the variation of blood-oxygen-level dependent (BOLD) signal from functional MRI (fMRI) and the neural connections can be estimated using tractography from diffusion MRI (dMRI). Functional connections between brain regions are supported by anatomical connections, and the synchronization of brain activities arises through sharing of information in the form of electro-chemical signals on axon pathways. Jointly modeling fMRI and dMRI data may improve the accuracy in constructing anatomical connectivity as well as functional connectivity. Such an approach may lead to novel multimodal biomarkers potentially able to better capture functional and anatomical connectivity variations.

We present a novel brain network model which jointly models the dMRI and fMRI data to improve the anatomical connectivity estimation and extract the anatomical subnetworks associated with specific functional modes by constraining the anatomical connections as structural supports to the functional connections. The key idea is similar to a multi-commodity flow optimization problem that minimizes the cost or maximizes the efficiency for flow configuration and simultaneously fulfills the supply-demand constraint for each commodity. In the proposed network, the nodes represent the grey matter (GM) regions providing brain functionality, and the links represent white matter (WM) fiber bundles connecting those regions and delivering information. The commodities can be thought of as the information corresponding to brain activity patterns as obtained for instance by independent component analysis (ICA) of fMRI data. The concept of information flow is introduced and used to model the propagation of information between GM areas through WM fiber bundles. The link capacity, i.e., ability to transfer information, is characterized by the relative strength of fiber bundles, e.g., fiber count gathered from the tractography of dMRI data. The node information demand is considered to be proportional to the correlation between neural activity at various cortical areas involved in a particular functional mode (e.g. visual, motor, etc.). These two properties lead to the link capacity and node demand constraints in the proposed model. Moreover, the information flow of a link cannot exceed the demand from either end node. This is captured by the feasibility constraints. Two different cost functions are considered in the optimization formulation in this paper. The first cost function, the reciprocal of fiber strength represents the unit cost for information passing through the link. 
In the second cost function, a min-max approach (minimizing the maximal link load) is used to balance the usage of each link. Optimizing the first cost function selects the pathway with the strongest fiber strength for information propagation. In the second case, the optimization procedure finds all possible propagation pathways and allocates the flow proportionally to their strength. Additionally, a penalty term is incorporated into both cost functions to capture possible missing and weak anatomical connections. With this set of constraints and the proposed cost functions, solving the network optimization problem recovers missing and weak anatomical connections supported by the functional information and provides the function-associated anatomical subnetworks.

Feasibility is demonstrated using realistic diffusion and functional MRI phantom data. The proposed model recovers the largest number of true connections, with the fewest false connections, compared with the connectivity derived from a joint probabilistic model using the expectation-maximization (EM) algorithm presented in prior work. We also apply the proposed method to data provided by the Human Connectome Project (HCP).
Communication of brain network core connections altered in behavioral variant frontotemporal dementia but possibly preserved in early-onset Alzheimer's disease
Madelaine Daianu, Neda Jahanshad, Mario F. Mendez, et al.
Diffusion imaging and brain connectivity analyses can assess white matter deterioration in the brain, revealing the underlying patterns of how brain structure declines. Fiber tractography methods can infer neural pathways and connectivity patterns, yielding sensitive mathematical metrics of network integrity. Here, we analyzed 1.5-Tesla whole-brain diffusion-weighted images from 64 participants: 15 patients with behavioral variant frontotemporal dementia (bvFTD), 19 with early-onset Alzheimer's disease (EOAD), and 30 healthy elderly controls. Using whole-brain tractography, we reconstructed structural brain connectivity networks to map connections between cortical regions. We evaluated the brain's networks focusing on the most highly central and connected regions, also known as hubs, in each diagnostic group, specifically the "high-cost" structural backbone used in global and regional communication. The high-cost backbone of the brain, predicted by fiber density and minimally short pathways between brain regions, accounted for 81-92% of the overall brain communication metric in all diagnostic groups. Furthermore, we found that the set of pathways interconnecting high-cost and high-capacity regions of the brain's communication network is globally and regionally altered in bvFTD compared with healthy participants, whereas the overall organization of the high-cost and high-capacity networks was relatively preserved in EOAD participants relative to controls. Disruption of the major central hubs that transfer information between brain regions may impair neural communication and functional integrity in characteristic ways typical of each subtype of dementia.
Comparisons of topological properties in autism for the brain network construction methods
Min-Hee Lee, Dong Youn Kim, Sang Hyeon Lee, et al.
Structural brain networks can be constructed from the white matter fiber tractography of diffusion tensor imaging (DTI), and the structural characteristics of the brain can be analyzed from these networks. When brain networks are constructed by the parcellation method, their network structures change according to the parcellation scale and arbitrary thresholding. To overcome these issues, we modified the ε-neighbor construction method proposed by Chung et al. (2011). The purpose of this study was to construct brain networks for 14 control subjects and 16 subjects with autism using both the parcellation and the ε-neighbor construction methods and to compare the topological properties of the two methods. As the number of nodes increased, connectedness decreased under the parcellation method. In the ε-neighbor construction method, however, connectedness remained at a high level even as the number of nodes rose. In addition, statistical analysis for the parcellation method showed a significant difference only in path length, whereas statistical analysis for the ε-neighbor construction method showed significant differences in path length, degree, and density.
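The connectedness behavior described above can be illustrated with a generic ε-neighborhood graph (connect any two nodes closer than ε). This is only a toy sketch: it omits the fiber-endpoint merging that the actual ε-neighbor construction of Chung et al. performs, and the nodes below are random points rather than tract endpoints.

```python
import numpy as np

def epsilon_graph(points, eps):
    """Adjacency matrix connecting every pair of nodes closer than eps."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return (d < eps) & ~np.eye(len(points), dtype=bool)

def n_components(adj):
    """Count connected components by depth-first search."""
    n = len(adj)
    seen = np.zeros(n, dtype=bool)
    comps = 0
    for s in range(n):
        if not seen[s]:
            comps += 1
            stack = [s]
            seen[s] = True
            while stack:
                u = stack.pop()
                for v in np.flatnonzero(adj[u]):
                    if not seen[v]:
                        seen[v] = True
                        stack.append(v)
    return comps

# toy node set in the unit cube: enlarging eps only adds edges,
# so the number of components can never increase
rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, size=(60, 3))
```

Because a larger ε yields a superset of edges, connectedness is monotone in ε, which is one reason the ε-neighbor construction can keep the network connected as nodes are added.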
A novel method for 4D cone-beam computer-tomography reconstruction
Hao Zhang, Justin C. Park, Yunmei Chen, et al.
The image quality of four-dimensional cone-beam computed tomography (4DCBCT) is severely impaired by the highly insufficient amount of projection data available for each phase. Making good use of the limited projection data is therefore crucial. Noticing that usually only a portion of the image is affected by motion, we separate the moving part of the image (different between phases) from the static part (identical among all phases) with the help of a prior image reconstructed using all projection data. We then update the moving and static parts of the image alternately by solving minimization problems based on a global linear system (using the full projection data) and several local linear systems (using the projection data for the respective phases). In other words, we rebuild a large over-determined linear system for the static part from the original under-determined systems, and we also reduce the number of unknowns in the original system for each phase. As a result, image quality for both the static and moving parts is greatly improved, and reliable 4D CBCT images are reconstructed.
Partial volume correction for arterial spin labeling data using spatial-temporal information
Yang Liu, Baojuan Li, Xi Zhang, et al.
Arterial spin labeling (ASL) provides a noninvasive measurement of cerebral blood flow (CBF). Due to the relatively low spatial resolution, the accuracy of CBF measurement is affected by the partial volume (PV) effect. In a typical ASL sequence, multiple scans of perfusion image pairs are acquired over time to improve the signal-to-noise ratio. Several spatial PV correction methods have been proposed for the simple average of pair-difference images, but the perfusion information of gray matter and white matter present in the multiple image pairs is ignored. In this study, a statistical model of the perfusion mixtures inside each voxel of a 4D ASL sequence is first proposed. To solve the model, a simplified method is proposed in which linear regression (LR) is first used to obtain initial estimates for spatial correction, and an expectation-maximization (EM) method is then used to refine the estimates using temporal information. The combined LR and EM method (EM-LR) effectively utilizes the spatial-temporal information of ASL data for PV correction and provides a theoretical solution for estimating the perfusion mixtures. Both simulated and in vivo data were used to evaluate the performance of the proposed method, demonstrating its superiority in PV correction, edge preservation, and noise suppression.
Intensity transform and Wiener filter in measurement of blood flow in arteriography
Polyana F. Nunes, Marcelo L. N. Franco, João B. D. Filho, et al.
Arteriography makes it possible to detect anomalies in blood vessels and diseases such as stroke, stenosis, and bleeding, and it is especially useful in the diagnosis of encephalic (brain) death in comatose individuals. Encephalic death can be diagnosed only when there is complete interruption of all brain functions, and hence of the blood stream. During the examination, interference may affect the sensors, caused for example by environmental factors, poor equipment maintenance, or patient movement, all of which contribute directly to the noise in angiography images. Digital image processing techniques are therefore needed to minimize this noise and improve the pixel counts. This paper proposes using a median filter, and separately an intensity transformation based on the sigmoid function together with the Wiener filter, to obtain less noisy images. Two filtering pipelines were implemented to remove image noise: one with the median filter and the other with the Wiener filter along with the sigmoid function. Over 14 quantified tests, including 7 encephalic death cases and 7 other cases, the technique that achieved the most satisfactory quantified pixel counts, while also presenting the least noise, was the Wiener filter with the sigmoid function, used here with a 0.03 cutoff.
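A sigmoid intensity transform of the kind described fits in a few lines of numpy. The gain and center parameters below are illustrative, not the paper's values, and the Wiener filtering stage is not reproduced here:

```python
import numpy as np

def sigmoid_enhance(img, center=0.5, gain=10.0):
    """Sigmoid intensity transform: stretches contrast around `center`
    while compressing the extremes of the intensity range."""
    img = np.asarray(img, dtype=float)
    out = 1.0 / (1.0 + np.exp(-gain * (img - center)))
    # rescale so intensity 0 maps to 0 and intensity 1 maps to 1
    lo = 1.0 / (1.0 + np.exp(gain * center))
    hi = 1.0 / (1.0 + np.exp(-gain * (1.0 - center)))
    return (out - lo) / (hi - lo)
```

Applied before a Wiener filter, the transform increases the separation between vessel and background intensities, which makes the subsequent pixel quantification more robust.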
Tchebichef moments based nonlocal-means method for despeckling optical coherence tomography images
Wanying Jiang, Wenqi Xiang, Mingyue Ding, et al.
Speckle reduction in optical coherence tomography (OCT) images plays an important role in further image analysis. Although numerous despeckling methods, such as the Kuan filter, the Frost filter, wavelet-based methods, and anisotropic diffusion methods, have been proposed for despeckling OCT images, these methods generally provide insufficient speckle suppression or limited detail preservation, especially at high speckle corruption, because they make insufficient use of image information. In contrast to these denoising methods, the nonlocal means (NLM) method exploits nonlocal image self-similarities for denoising, thereby providing a new approach to speckle reduction in OCT images. However, the NLM method determines image self-similarities based on the intensities of noisy pixels, which degrades its performance in restoring OCT images.

To address this problem, the Tchebichef moments based nonlocal means (TNLM) method is proposed for speckle suppression. Distinctively, the TNLM method determines the nonlocal self-similarities of OCT images by computing the Euclidean distance between the Tchebichef moments of two image patches centered at the two pixels of interest in the prefiltered image. Due to the superior feature representation capability of Tchebichef moments, the proposed method can utilize more image structural information for the accurate computation of image self-similarities. Experiments on clinical OCT images indicate that the TNLM method outperforms numerous despeckling methods: it suppresses speckle noise more effectively while preserving image details better in terms of human vision, and it provides higher signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), equivalent number of looks (ENL), and cross correlation (XCOR).
Multi-session complex averaging for high resolution high SNR 3T MR visualization of ex vivo hippocampus and insula
Aymeric Stamm, Jolene M. Singh, Benoit Scherrer, et al.
The hippocampus and the insula are responsible for episodic memory formation and retrieval. Hence, visualization of the cytoarchitecture of these structures is of primary importance to understand the underpinnings of conscious experience. Magnetic resonance imaging (MRI) offers an opportunity to non-invasively image these crucial structures. However, current clinical MR imaging operates at the millimeter scale, while these anatomical landmarks are organized into sub-millimeter structures. For instance, the hippocampus contains several layers, including the CA3-dentate network responsible for encoding events and experiences. To investigate whether memory loss is a result of injury to or degradation of CA3/dentate, spatial resolution must reach a one-hundred-micron isotropic voxel size. Going from one-millimeter voxels to one-hundred-micron voxels results in a 1000× signal loss, making the measured signal close to, or even well below, the precision of the receiving coils. Consequently, the signal magnitude that forms the structural images will be biased and noisy, resulting in inaccurate contrast and less than optimal signal-to-noise ratio (SNR).

In this paper, we propose a strategy to perform high-spatial-resolution MR imaging of the hippocampus and insula with 3T scanners that enables accurate contrast (no systematic bias) and arbitrarily high SNR. This requires the collection of additional repeated measurements of the same image and proper averaging of the k-space data in the complex domain. This comes at the cost of additional scan time, but long single-session scan times are not practical for obvious reasons. Hence, we also develop an approach to combine k-space data from multiple sessions, which enables the total scan time to be split into arbitrarily short sessions, between which the patient is allowed to move and rest. For validation, we illustrate our multi-session complex averaging strategy by providing high-spatial-resolution 3T MR visualization of the hippocampus and insula using an ex vivo specimen, so that the number of sessions and the duration of each session are not limited by physiological motion or poor subject compliance.
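The core point, that repeated acquisitions must be averaged in the complex domain before taking the magnitude, can be demonstrated numerically. In this toy example (made-up signal level and noise, not the paper's acquisition parameters), the magnitude-first average acquires a large Rician bias while the complex-first average converges to the true signal:

```python
import numpy as np

rng = np.random.default_rng(42)
true_signal = 0.2          # weak signal, well below the noise level
sigma = 1.0
n = 20000                  # repeated acquisitions
# complex Gaussian measurement noise on each repetition
noise = rng.normal(0, sigma, n) + 1j * rng.normal(0, sigma, n)
samples = true_signal + noise

# magnitude first, then average: biased upward (Rician bias) for weak signals
mag_avg = np.abs(samples).mean()
# average in the complex domain first, then magnitude: converges to the truth
cplx_avg = np.abs(samples.mean())
```

This is why the strategy stores and combines raw complex k-space data across sessions instead of averaging the reconstructed magnitude images.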
Total variation based image deconvolution for extended depth-of-field microscopy images
F. Hausser, I. Beckers, M. Gierlak, et al.
One approach to a detailed understanding of dynamic cellular processes during drug delivery is the use of functionalized biocompatible nanoparticles and fluorescent markers. An appropriate imaging system has to detect these moving particles, as well as whole cell volumes, in real time with high lateral resolution in the range of a few 100 nm. In a previous study, extended depth-of-field microscopy (EDF microscopy) was applied to fluorescent beads and Tradescantia stamen hair cells, and the concept of real-time imaging was proven in different microscopic modes. In principle, a phase retardation system such as a programmable spatial light modulator or a static waveplate is incorporated in the light path and modulates the wavefront of the light. Hence the focal ellipsoid is smeared out and the images at first seem blurred. Image restoration by deconvolution using the known point-spread function (PSF) of the optical system is necessary to achieve sharp microscopic images over an extended depth of field. This work focuses on the investigation and optimization of deconvolution algorithms to solve this restoration problem satisfactorily. The inverse problem is challenging due to the presence of Poisson-distributed noise and Gaussian noise, and because the PSF used for deconvolution exactly fits in just one plane within the object. We use non-linear total variation based image restoration techniques, in which different types of noise can be treated properly. Various algorithms are evaluated on artificially generated 3D images as well as on fluorescence measurements of BPAE cells.
Beyond Frangi: an improved multiscale vesselness filter
Tim Jerman, Franjo Pernuš, Boštjan Likar, et al.
Vascular diseases are among the top three causes of death in the developed countries. Effective diagnosis of vascular pathologies from angiographic images is therefore very important and usually relies on segmentation and visualization of vascular structures. To enhance the vascular structures prior to their segmentation and visualization, and to suppress non-vascular structures and image noise, vessel enhancement filters are used extensively. Even though several enhancement filters are widely used, their responses are typically not uniform between vessels of different radii and, compared with the response in the central part of vessels, their response is lower at vessels' edges and bifurcations, and at vascular pathologies like aneurysms. In this paper, we propose a novel enhancement filter based on the ratio of multiscale Hessian eigenvalues, which yields a close-to-uniform response in all vascular structures and accurately enhances the border between the vascular structures and the background. The proposed filter and four state-of-the-art enhancement filters were evaluated and compared on a 3D synthetic image containing tubular structures and on a clinical dataset of 15 cerebral 3D digitally subtracted angiograms with manual expert segmentations. The evaluation was based on quantitative metrics of segmentation performance, computed as the area under the precision-recall curve, the signal-to-noise ratio of the vessel enhancement, and the response uniformity within vascular structures. The proposed filter achieved the best scores in all three metrics and thus has a high potential to further improve the performance of existing methods, or to encourage the development of more advanced methods, for segmentation and visualization of vascular structures.
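For reference, the classic single-scale Frangi-type 2D vesselness that the proposed filter improves upon can be sketched as follows. The paper's filter instead uses a modified eigenvalue-ratio response on 3D multiscale Hessians; the 2D form, finite-difference Hessian, and the `beta` and `c` parameters below are all simplifications:

```python
import numpy as np

def vesselness_2d(img, beta=0.5, c=0.1):
    """Frangi-style 2D vesselness from Hessian eigenvalues (single scale).
    Bright tubular structures give |lam1| << |lam2| with lam2 < 0."""
    gy, gx = np.gradient(img)
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    # eigenvalues of the symmetric 2x2 Hessian at every pixel
    tr = gxx + gyy
    det = gxx * gyy - gxy * gyx
    disc = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
    l1, l2 = tr / 2 - disc, tr / 2 + disc
    # order so that |lam1| <= |lam2|
    swap = np.abs(l1) > np.abs(l2)
    lam1 = np.where(swap, l2, l1)
    lam2 = np.where(swap, l1, l2)
    rb2 = (lam1 / np.where(np.abs(lam2) > 1e-12, lam2, 1e-12)) ** 2  # blobness
    s2 = lam1 ** 2 + lam2 ** 2                                       # structure
    v = np.exp(-rb2 / (2 * beta ** 2)) * (1 - np.exp(-s2 / (2 * c ** 2)))
    return np.where(lam2 < 0, v, 0.0)   # respond only to bright tubes
```

The non-uniformity the paper criticizes is visible in this form: the exponential terms attenuate the response at edges and bifurcations, where the eigenvalue ratio deviates from the ideal tubular case.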
High performance 3D adaptive filtering for DSP based portable medical imaging systems
Olivier Bockenbach, Murtaza Ali, Ian Wainwright, et al.
Portable medical imaging devices have proven valuable for emergency medical services both in the field and hospital environments and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. Despite their constraints on power, size and cost, portable imaging devices must still deliver high quality images.

3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but is computationally very demanding and hence often cannot be run with sufficient performance on a portable platform.

In recent years, advanced multicore digital signal processors (DSP) have been developed that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms on a portable platform.

In this study, the performance of a 3D adaptive filtering algorithm on a DSP is investigated. Performance is assessed by filtering a volume of 512×256×128 voxels sampled at a rate of 10 MVoxels/s with a 3D ultrasound probe. Relative performance and power are compared between a reference PC (quad-core CPU) and a TMS320C6678 DSP from Texas Instruments.
Directional denoising and line enhancement for device segmentation in real time fluoroscopic imaging
Purpose: The purpose of this work is to improve the segmentation of interventional devices (e.g. guidewires) in fluoroscopic images. This is required for real-time 3D reconstruction from two angiographic views, where noise can cause severe reconstruction artifacts and incomplete reconstruction. The proposed method reduces the noise while enhancing the thin line structures of the device in images with subtracted background.

Methods: A two-step approach is presented. The first step estimates, for each pixel and a given number of directions, a measure of the probability that the point is part of a line segment in the corresponding direction. This can be done efficiently using binary masks. In the second step, a directional filter kernel is applied to pixels that are assumed to be part of a line; a mean filter is used for all other pixels.

Results: The proposed algorithm achieved an average contrast-to-noise ratio (CNR) of 6.3, compared with 5.8 for the bilateral filter. For device segmentation using global thresholding, the number of missing or wrong pixels is reduced to 25%, compared with 40% using the bilateral approach.

Conclusion: The proposed algorithm is a simple and efficient approach that can easily be parallelized for use on modern graphics processing units. It improves device segmentation compared with other denoising methods, and therefore reduces artifacts and increases the quality of the reconstruction without notably increasing the delay in real-time applications.
Trade-off between speed and performance for colorectal endoscopic NBI image classification
Shoji Sonoyama, Toru Tamaki, Tsubasa Hirakawa, et al.
This paper investigates the trade-off between computation time and recognition rate of local descriptor-based recognition for colorectal endoscopic NBI image classification. Recent recognition methods using local descriptors have been successfully applied to medical image classification. The accuracy of these methods may depend on the quality of the vector quantization (VQ) and encoding of descriptors; however, accurate quantization takes a long time. This paper reports how a simple sampling strategy affects performance with different encoding methods. First, we extract about 7.7 million local descriptors from the training images of a dataset of 908 NBI endoscopic images. Second, we randomly choose a subset of between 7.7M and 19K descriptors for VQ. Third, we use three encoding methods (BoVW, VLAD, and Fisher vector) with different numbers of descriptors. A linear SVM is used for classification in a three-class problem. The computation time for VQ was drastically reduced, by a factor of 100, while the peak performance was retained. Performance improved by roughly 1% to 2% when more descriptors, obtained by over-sampling, were used for encoding. Performances with descriptors extracted at every pixel ("grid1") or every two pixels ("grid2") are similar, while the computation times are very different: grid2 is 5 to 30 times faster than grid1. The main finding of this work is twofold. First, recent encoding methods such as VLAD and Fisher vector are as insensitive to the quality of VQ as BoVW. Second, there is a trade-off between computation time and performance when encoding over-sampled descriptors with BoVW and Fisher vector, but not with VLAD.
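The subsampling strategy for VQ can be mimicked in a few lines: run k-means on a random subset of descriptors, then encode the full set against the resulting codebook (a plain BoVW histogram here; VLAD and Fisher vectors add residual statistics on top). Everything below is a synthetic stand-in for the real NBI descriptors:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: the vector-quantization step that dominates training time."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # move each center to the mean of its assigned descriptors
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers

def bovw_histogram(desc, centers):
    """Bag-of-visual-words encoding: normalized histogram of nearest centers."""
    d = ((desc[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    h = np.bincount(d.argmin(1), minlength=len(centers)).astype(float)
    return h / h.sum()

# synthetic "descriptors": two clusters; quantize on a 10% random subset only
rng = np.random.default_rng(3)
full = np.concatenate([rng.normal(0, 0.1, (500, 2)), rng.normal(5, 0.1, (500, 2))])
subset = full[rng.choice(len(full), 100, replace=False)]
centers = kmeans(subset, 2)
hist = bovw_histogram(full, centers)
```

When the descriptor distribution is well covered by the subset, the codebook (and hence the encoding) is essentially unchanged, which is the effect the paper measures at scale.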
Automatic localization of vertebrae based on convolutional neural networks
Wei Shen, Feng Yang, Wei Mu, et al.
Localization of the vertebrae is important in many medical applications. For example, the vertebrae can serve as landmarks in image registration, and they can provide a reference coordinate system to facilitate the localization of other organs in the chest. In this paper, we propose a new vertebra localization method using convolutional neural networks (CNNs). The main advantage of the proposed method is the removal of hand-crafted features. We construct two training sets to train two CNNs that share the same architecture: one distinguishes the vertebrae from other tissues in the chest, and the other detects the centers of the vertebrae. The architecture contains two convolutional layers, each followed by a max-pooling layer. The output feature vector from the max-pooling layer is fed into a multilayer perceptron (MLP) classifier with one hidden layer. Experiments were performed on ten chest CT images, using a leave-one-out strategy to train and test the proposed method. Quantitative comparison between the predicted centers and the ground truth shows that our convolutional neural networks can achieve promising localization accuracy without hand-crafted features.
Detection of anomaly in human retina using Laplacian Eigenmaps and vectorized matched filtering
Karamatou A. Yacoubou Djima, Lucia D. Simonelli, Denise Cunningham, et al.
We present a novel method for automated anomaly detection on autofluorescence data provided by the National Institutes of Health (NIH). This is motivated by the need for new tools to improve the capability of diagnosing macular degeneration in its early stages, tracking its progression over time, and testing the effectiveness of new treatment methods. In previous work, macular anomalies have been detected automatically through multiscale analysis procedures, such as wavelet analysis, or through dimensionality reduction algorithms followed by a classification algorithm, e.g., a Support Vector Machine. The method that we propose is a Vectorized Matched Filtering (VMF) algorithm combined with Laplacian Eigenmaps (LE), a nonlinear dimensionality reduction algorithm with locality-preserving properties. By applying LE, we are able to represent the data in the form of eigenimages, some of which accentuate the visibility of anomalies. We pick significant eigenimages and proceed with the VMF algorithm, which classifies anomalies across all of these eigenimages simultaneously. To evaluate our performance, we compare our method to two other schemes: a matched filtering algorithm based on anomaly detection on single images, and a combination of PCA and VMF. LE combined with VMF performs best, yielding a high rate of accurate anomaly detection. This shows the advantage of using a nonlinear approach to represent the data and the effectiveness of VMF, which operates on the images as a data cube rather than as individual images.
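The locality-preserving embedding at the heart of LE can be sketched in a few lines of NumPy. The random matrix below is a hypothetical stand-in for the retinal data, and the binary kNN graph is one common construction (the abstract does not specify the paper's graph weights).

```python
import numpy as np

def laplacian_eigenmaps(X, n_components=2, n_neighbors=5):
    """Minimal Laplacian Eigenmaps: build a symmetrized kNN graph with
    binary weights, form the unnormalized Laplacian L = D - W, and take
    the eigenvectors of the smallest nontrivial eigenvalues."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:n_neighbors + 1]      # skip self at index 0
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                               # symmetrize the graph
    L = np.diag(W.sum(1)) - W
    vals, vecs = np.linalg.eigh(L)                       # ascending eigenvalues
    return vecs[:, 1:n_components + 1]                   # drop the constant eigenvector

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 10))    # hypothetical stand-in for image-patch data
Y = laplacian_eigenmaps(X)
```

In the paper, each embedding coordinate evaluated over the image grid yields one "eigenimage"; the VMF step then filters across that stack as a data cube.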
Direct volume estimation without segmentation
X. Zhen, Z. Wang, A. Islam, et al.
Volume estimation plays an important role in clinical diagnosis. For example, cardiac ventricular volumes, including those of the left ventricle (LV) and right ventricle (RV), are important clinical indicators of cardiac function. Accurate and automatic estimation of the ventricular volumes is essential to the assessment of cardiac function and the diagnosis of heart diseases. Conventional methods depend on an intermediate segmentation step, performed either manually or automatically. However, manual segmentation is extremely time-consuming, subjective, and highly non-reproducible, while automatic segmentation remains challenging, computationally expensive, and unsolved for the RV. Towards accurate and efficient direct volume estimation, our group has been investigating learning-based methods that require no segmentation, leveraging state-of-the-art machine learning techniques. Our direct estimation methods remove the intermediate segmentation step and can naturally deal with various volume estimation tasks. Moreover, they are extremely flexible and can be used for volume estimation of either the joint bi-ventricles (LV and RV) or the individual LV/RV. We comparatively study the performance of direct methods on cardiac ventricular volume estimation against segmentation-based methods. Experimental results show that direct estimation methods provide more accurate estimates of cardiac ventricular volumes than segmentation-based methods. This indicates that direct estimation methods not only provide a convenient and mature clinical tool for cardiac volume estimation, but also enable the diagnosis of cardiac diseases to be conducted in a more efficient and reliable way.
Spot counting on fluorescence in situ hybridization in suspension images using Gaussian mixture model
Sijia Liu, Ruhan Sa, Orla Maguire, et al.
Cytogenetic abnormalities are important diagnostic and prognostic criteria for acute myeloid leukemia (AML). A flow cytometry-based imaging approach for FISH in suspension (FISH-IS) was established that enables the automated analysis of a several-log-magnitude higher number of cells compared to microscopy-based approaches. However, rotational positioning of cells can occur, leading to discordance between spot counts. To address counting errors caused by overlapping spots, a Gaussian Mixture Model (GMM)-based classification method is proposed in this study. The Akaike information criterion (AIC) and Bayesian information criterion (BIC) of the GMM are used as global image features of this classification method. Using a Random Forest classifier, the results show that the proposed method is able to detect closely overlapping spots that cannot be separated by existing image segmentation-based spot detection methods. The experimental results show that the proposed method yields a significant improvement in spot counting accuracy.
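The model-selection idea — using the information criteria of a Gaussian mixture to tell one spot from two overlapping spots — can be sketched with scikit-learn. The synthetic point cloud below is an invented stand-in for the pixel coordinates of a fluorescent signal; the paper feeds AIC/BIC values into a Random Forest rather than thresholding them directly.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical 2-D pixel coordinates from two closely overlapping spots.
spots = np.vstack([rng.normal([0.0, 0.0], 0.5, (150, 2)),
                   rng.normal([1.5, 0.0], 0.5, (150, 2))])

# Fit GMMs with 1..4 components; BIC (and AIC) serve as features that
# reflect how many underlying spots best explain the signal.
bics = []
for k in range(1, 5):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(spots)
    bics.append(gmm.bic(spots))
best_k = int(np.argmin(bics)) + 1  # lowest BIC = preferred spot count
```

A single-component fit is heavily penalized by the bimodal data, so the BIC curve drops sharply from one to two components even though the spots overlap.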
Automatic detection of endothelial cells in 3D angiogenic sprouts from experimental phase contrast images
MengMeng Wang, Lee-Ling Sharon Ong, Justin Dauwels, et al.
Cell migration studies in 3D environments are becoming more popular, as cell behaviors in 3D are more similar to those of cells in a living organism (in vivo). We focus on 3D angiogenic sprouting in microfluidic devices, where endothelial cells (ECs) burrow into the gel matrix and form solid lumen vessels. Phase contrast microscopy is used for long-term observation of the unlabeled ECs in the 3D microfluidic devices. Two template matching-based approaches are proposed to automatically detect the unlabeled ECs in the angiogenic sprouts from the acquired experimental phase contrast images. Cell and non-cell templates are obtained from these phase contrast images as the training data. The first approach applies Partial Least Squares Regression (PLSR) to find the discriminative features and their corresponding weights to distinguish cells from non-cells, whereas the second approach relies on Principal Component Analysis (PCA) to reduce the template feature dimension and a Support Vector Machine (SVM) to find the corresponding weights. Cells in the test images are detected in a sliding-window manner. We then validate the detection accuracy by comparing the results with the same images acquired with a confocal microscope after the cells are fixed and their nuclei stained. More accurate numerical results are obtained with approach I (PLSR) than with approach II (PCA & SVM) for cell detection. Automatic cell detection will aid the understanding of cell migration in 3D environments and, in turn, result in a better understanding of angiogenesis.
Method for accurate sizing of pulmonary vessels from 3D medical images
Detailed characterization of vascular anatomy, in particular the quantification of changes in the distribution of vessel sizes and of vascular pruning, is essential for the diagnosis and management of a variety of pulmonary vascular diseases and for the care of cancer survivors who have received radiation to the thorax. Clinical estimates of vessel radii are typically based on setting a pixel intensity threshold and counting how many “On” pixels are present across the vessel cross-section. A more objective approach introduced recently involves fitting the image with a library of spherical Gaussian filters and utilizing the size of the best matching filter as the estimate of vessel diameter. However, both of these approaches have significant accuracy limitations, including the mismatch between a Gaussian intensity distribution and that of real vessels. Here we introduce and demonstrate a novel approach for accurate vessel sizing using 3D appearance models of a tubular structure along a curvilinear trajectory in 3D space. The vessel branch trajectories are represented with cubic Hermite splines, and the tubular branch surfaces are represented as a finite element surface mesh. An iterative parameter adjustment scheme is employed to optimally match the appearance models to a patient’s chest X-ray computed tomography (CT) scan to generate estimates of branch radii and trajectories with subpixel resolution. The method is demonstrated on pulmonary vasculature in an adult human CT scan, and on 2D simulated test cases.
Optimal reinforcement of training datasets in semi-supervised landmark-based segmentation
Bulat Ibragimov, Boštjan Likar, Franjo Pernuš, et al.
During the last couple of decades, the development of computerized image segmentation has shifted from unsupervised to supervised methods, which has made segmentation results more accurate and robust. However, the main disadvantage of supervised segmentation is the need for manual image annotation, which is time-consuming and subject to human error. To reduce the need for manual annotation, we propose a novel learning approach for training dataset reinforcement in the area of landmark-based segmentation, where newly detected landmarks are optimally combined with reference landmarks from the training dataset, thereby enriching the training process. The approach is formulated as a nonlinear optimization problem whose solution is a vector of weighting factors that measures how reliable the detected landmarks are. Detected landmarks found to be more reliable are included in the training procedure with higher weighting factors, whereas those found to be less reliable are included with lower weighting factors. The approach is integrated into a landmark-based game-theoretic segmentation framework and validated on the problem of lung field segmentation from chest radiographs.
Piecewise recognition of bone skeleton profiles via an iterative Hough transform approach without re-voting
Giorgio Ricca, Mauro C. Beltrametti, Anna Maria Massone
Many bone shapes in the human skeleton are characterized by profiles that can be associated with equations of algebraic curves. By fixing the parameters in the curve equation by means of a classical pattern recognition procedure, such as the Hough transform, it is then possible to associate an equation with a specific bone profile. However, most skeletal districts are more accurately described by piecewise-defined curves. This paper utilizes an iterative approach to the Hough transform without re-voting to provide an efficient procedure for describing the profile of a bone in the human skeleton as a collection of different but continuously attached curves.
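The "without re-voting" idea — removing the points explained by each detected curve instead of recomputing all the votes — can be sketched for circles (the paper's curves are more general algebraic profiles, and its accumulator details differ). Grid resolution and tolerances below are arbitrary illustrative choices.

```python
import numpy as np

def hough_circles_no_revote(points, radius, n_curves, n_angles=64, tol=1.5):
    """Iterative Hough transform without re-voting: vote once over the
    remaining points, take the accumulator maximum as a circle centre,
    then remove that circle's supporting points rather than recasting
    their votes against the full accumulator again."""
    centers = []
    pts = points.copy()
    for _ in range(n_curves):
        acc = {}
        for x, y in pts:
            # Each edge point votes for candidate centres one radius away.
            for t in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
                key = (round(x - radius * np.cos(t)), round(y - radius * np.sin(t)))
                acc[key] = acc.get(key, 0) + 1
        cx, cy = max(acc, key=acc.get)
        centers.append((cx, cy))
        d = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
        pts = pts[np.abs(d - radius) > tol]  # points on this circle leave the vote
    return centers

# Edge points from two circles of radius 10 centred at (0, 0) and (40, 0).
t = np.linspace(0.0, 2 * np.pi, 100, endpoint=False)
pts = np.vstack([np.c_[10 * np.cos(t), 10 * np.sin(t)],
                 np.c_[40 + 10 * np.cos(t), 10 * np.sin(t)]])
found = hough_circles_no_revote(pts, radius=10, n_curves=2)
```

Because supporting points are deleted rather than re-voted, each subsequent iteration searches a strictly smaller point set, which is the efficiency the paper exploits for piecewise profiles.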
A novel Hessian based algorithm for rat kidney glomerulus detection in 3D MRI
Min Zhang, Teresa Wu, Kevin M. Bennett
The glomeruli of the kidney perform the key role of blood filtration, and the number of glomeruli in a kidney is correlated with susceptibility to chronic kidney disease and chronic cardiovascular disease. This motivates the development of new technology using magnetic resonance imaging (MRI) to measure the number of glomeruli and nephrons in vivo. However, there is currently a lack of computationally efficient techniques to perform fast, reliable, and accurate counts of glomeruli in MR images, due to issues inherent in MRI such as acquisition noise, partial volume effects (the mixture of several tissue signals in a voxel), and bias field (spatial intensity inhomogeneity). These challenges are particularly severe because the glomeruli are very small (in our case, an MRI image is ~16 million voxels and each glomerulus is 8–20 voxels in size) and the number of glomeruli is very large. To address this, we have developed an efficient Hessian-based Difference of Gaussians (HDoG) detector to identify the glomeruli in 3D rat MR images. The image is first smoothed via DoG, followed by the Hessian process to pre-segment and delineate the boundaries of the glomerulus candidates. This then provides a basis for extracting regional features used in an unsupervised clustering algorithm, which completes the segmentation by removing false identifications that occurred in the pre-segmentation. The experimental results show that the Hessian-based DoG detector has the potential to automatically detect glomeruli from MRI in 3D, enabling new measurements of renal microstructure and pathology in preclinical and clinical studies.
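A 2-D sketch of the HDoG idea — DoG smoothing followed by a Hessian-based blob test — is shown below on a synthetic image. The actual method operates on 3-D MRI volumes and adds clustering-based removal of false candidates; the scales and threshold here are invented.

```python
import numpy as np
from scipy import ndimage

def hdog_blob_candidates(img, sigma1=1.0, sigma2=2.0, thresh=0.1):
    """Hessian-based Difference-of-Gaussians detector (2-D sketch of the
    3-D HDoG idea): smooth with a DoG, then keep pixels where the Hessian
    of the response indicates a bright blob (negative-definite Hessian)."""
    dog = ndimage.gaussian_filter(img, sigma1) - ndimage.gaussian_filter(img, sigma2)
    # Second derivatives of the DoG response via repeated Sobel filtering.
    gxx = ndimage.sobel(ndimage.sobel(dog, axis=1), axis=1)
    gyy = ndimage.sobel(ndimage.sobel(dog, axis=0), axis=0)
    gxy = ndimage.sobel(ndimage.sobel(dog, axis=1), axis=0)
    det = gxx * gyy - gxy ** 2
    blob = (det > 0) & (gxx < 0) & (dog > thresh * dog.max())
    labels, n = ndimage.label(blob)
    return n, ndimage.center_of_mass(blob, labels, range(1, n + 1))

# Synthetic image with two small bright blobs standing in for glomeruli.
img = np.zeros((64, 64))
img[20, 20] = img[40, 45] = 1.0
img = ndimage.gaussian_filter(img, 2.0)
n, centers = hdog_blob_candidates(img)
```

The Hessian test (`det > 0` with negative trace direction) is what separates compact blobs from edges and ridges that a plain DoG threshold would also pass.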
Detection method of visible and invisible nipples on digital breast tomosynthesis
Seung-Hoon Chae, Ji-Wook Jeong, Sooyeul Lee, et al.
Digital breast tomosynthesis (DBT), which provides 3D breast images, can improve the detection sensitivity of breast cancer over 2D mammography, particularly in dense breasts. Nipple location information is needed to analyze DBT: it is invaluable for registration and as a reference point for classifying masses or micro-calcification clusters. Since a nipple may be either visible or invisible in a 2D mammogram or DBT volume, a nipple detection method must handle both cases. Detecting a visible nipple using its shape information is simple and highly efficient. However, it is difficult to detect an invisible nipple because it lacks a prominent shape. Anatomically, the mammary glands in the breast connect to the nipple, so the nipple location can be found by analyzing the location of the mammary glands. In this paper, we therefore propose a method to detect the nipple on a breast with either a visible or an invisible nipple, using changes of the breast area and the mammary glands, respectively. The results show that our proposed method has an average error of 2.54±1.47 mm.
Semi-automatic delineation of the spino-laminar junction curve on lateral x-ray radiographs of the cervical spine
Benjamin Narang, Michael Phillips, Karen Knapp, et al.
Assessment of the cervical spine using x-ray radiography is an important task when providing emergency room care to trauma patients suspected of a cervical spine injury. In routine clinical practice, a physician inspects the alignment of the cervical vertebrae by mentally tracing three alignment curves: along the anterior and posterior sides of the cervical vertebral bodies, and along the spinolaminar junction. In this paper, we propose an algorithm to semi-automatically delineate the spinolaminar junction curve, given a single reference point and the corners of each vertebral body. From the reference point, our method extracts a region of interest and performs template matching using normalized cross-correlation to find matching regions along the spinolaminar junction. The matched points are then fit with a third-order spline, producing an interpolating curve. Experimental results are promising, producing on average a modified Hausdorff distance of 1.8 mm, validated on a dataset of 29 patients, including patients with degenerative change, retrolisthesis, and fracture.
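The two core ingredients — normalized cross-correlation (NCC) template matching and a third-order (cubic) spline through the matched points — can be sketched as follows. The image, template location, and match points are invented for illustration; the paper searches a band along the spinolaminar junction rather than the whole image.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

rng = np.random.default_rng(0)
image = rng.normal(size=(60, 60))
template = image[28:36, 30:38].copy()   # 8x8 patch we will try to re-find

# Scan the search region and keep the best-matching location (one "match point").
best, best_pos = -2.0, None
for r in range(52):
    for c in range(52):
        s = ncc(image[r:r + 8, c:c + 8], template)
        if s > best:
            best, best_pos = s, (r, c)

# Matched points along the junction (invented (row, col) pairs) are
# interpolated with a cubic spline to produce the final curve.
match_points = np.array([[10, 6], [20, 9], [30, 11], [40, 12], [50, 11]])
spline = CubicSpline(match_points[:, 0], match_points[:, 1])
```

NCC with this normalization is bounded by 1 and reaches it only at the true template location, which is why the exhaustive scan recovers the planted patch exactly.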
Evaluating MRI based vascular wall motion as a biomarker of Fontan hemodynamic performance
Prahlad G. Menon, Haifa Hong M.D.
The Fontan procedure for single-ventricle heart disease involves the creation of pathways to divert venous blood from the superior and inferior venae cavae (SVC, IVC) directly into the pulmonary arteries (PAs), bypassing the right ventricle. For optimal surgical outcomes, venous flow energy loss in the resulting vascular construction must be minimized, and ensuring a close-to-equal flow distribution from the Fontan conduit connecting the IVC to the left and right PAs is paramount. This requires patient-specific hemodynamic evaluation using computational fluid dynamics (CFD) simulations, which are often time- and resource-intensive, limiting their applicability for real-time patient management in the clinic. In this study, we report preliminary efforts at identifying a new non-invasive, imaging-based surrogate for CFD-simulated hemodynamics. We establish correlations between computed hemodynamic criteria from CFD modeling and cumulative wall displacement characteristics of the Fontan conduit quantified from cine cardiovascular MRI segmentations over time (i.e., 20 cardiac phases gated from the start of ventricular systole) in 5 unique Fontan surgical connections. To focus our attention on diameter variations while discounting side-to-side swaying motion of the Fontan conduit, the difference between its instantaneous regional expansion and inward contraction (averaged across the conduit) was computed and analyzed. The maximum conduit-averaged expansion over the cardiac cycle correlated with the anatomy-specific diametric offset between the axes of the IVC and SVC (r²=0.13, p=0.55), a known factor correlated with Fontan energy loss and IVC-to-PA flow distribution. Investigation in a larger study cohort is needed to establish stronger statistical correlations.
Evaluation of COPD's diaphragm motion extracted from 4D-MRI
Windra Swastika, Yoshitada Masuda, Naoko Kawata, et al.
We have developed a method, called the intersection profile method, to construct 4D-MRI (3D+time) from time series of 2D-MRI. The basic idea is to find the best match between the intersection profiles of the time series of 2D-MRI in the sagittal plane (navigator slice) and the time series of 2D-MRI in the coronal plane (data slice). In this study, we use 4D-MRI to semi-automatically extract the right diaphragm motion of 16 subjects (8 healthy subjects and 8 COPD patients). The diaphragm motion is then evaluated quantitatively by calculating and normalizing the displacement for each subject. We also generate phase-length maps to view and locate paradoxical motion in the COPD patients. The quantitative results of the normalized displacement show that COPD patients tend to have smaller displacements than healthy subjects: the average normalized displacement of the 8 COPD patients is 9.4 mm, while that of the 8 healthy volunteers is 15.3 mm. The generated phase-length maps show that not all of the COPD patients have paradoxical motion; however, when paradoxical motion is present, the phase-length map is able to locate where it occurs.
Calculation of brain atrophy using computed tomography and a new atrophy measurement tool
Abdullah Bin Zahid, Artem Mikheev, Andrew Il Yang, et al.
Purpose: To determine if brain atrophy can be calculated by performing volumetric analysis on conventional computed tomography (CT) scans in spite of relatively low contrast for this modality.

Materials & Method: CTs for 73 patients from the local Veterans Affairs database were selected. Exclusion criteria: AD, NPH, tumor, and alcohol abuse. Protocol: conventional clinical acquisition (Toshiba; helical, 120 kVp, X-ray tube current 300 mA, slice thickness 3–5 mm). A locally developed automatic algorithm was used to segment the intracranial cavity (ICC) using (a) a white matter seed, (b) constrained growth limited by the inner skull layer, and (c) topological connectivity. The ICC was further segmented into CSF and brain parenchyma using a threshold of 16 HU.

Results: Age distribution: 25–95 yrs (mean 67±17.5 yrs). A significant correlation was found between age and CSF/ICC (r=0.695, p<0.01, 2-tailed). A quadratic model (y = 0.06 − 0.001·x + 2.56×10⁻⁵·x², where y = CSF/ICC and x = age) was a better fit to the data (r=0.716, p<0.01). This is in agreement with the MRI literature. For example, Smith et al. found the annual CSF/ICC increase in 58–94.5-year-old individuals to be 0.2%/year, whereas our data, restricted to the same age group, yield 0.3%/year (0.2–0.4%/year, 95% C.I.). The slightly increased atrophy among elderly VA patients is attributable to the presence of other comorbidities.
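The quadratic age model can be reproduced in outline with an ordinary least-squares polynomial fit. The synthetic (age, CSF/ICC) pairs below merely stand in for the 73 patients; the coefficients and noise level are taken from the reported model for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical (age, CSF/ICC) pairs standing in for the 73 VA patients.
age = rng.uniform(25, 95, 73)
csf_icc = 0.06 - 0.001 * age + 2.56e-5 * age ** 2 + rng.normal(0, 0.01, 73)

# Quadratic model y = c0 + c1*x + c2*x^2 fitted by least squares.
# np.polyfit returns coefficients with the highest degree first.
c2, c1, c0 = np.polyfit(age, csf_icc, 2)
```

The positive quadratic coefficient is what encodes the accelerating CSF fraction (i.e., atrophy) with age; the annual rate within any age band is the derivative c1 + 2·c2·x evaluated there.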

Conclusion: Brain atrophy can be reliably calculated using automated software and conventional CT. Compared to MRI, CT is more widely available, cheaper, and less affected by head motion due to ~100 times shorter scan time. Work is in progress to improve the precision of the measurements, possibly leading to assessment of longitudinal changes within the patient.
Automated detection of periventricular veins on 7 T brain MRI
Hugo J. Kuijf, Willem H. Bouvy, Jaco J. M. Zwanenburg, et al.
Cerebral small vessel disease is common in elderly persons and a leading cause of cognitive decline, dementia, and acute stroke. With the introduction of ultra-high field strength 7.0T MRI, it is possible to visualize small vessels in the brain. In this work, a proof-of-principle study is conducted to assess the feasibility of automatically detecting periventricular veins.

Periventricular veins are organized in a fan-pattern and drain venous blood from the brain towards the caudate vein of Schlesinger, which is situated along the lateral ventricles. Just outside this vein, a region-of-interest (ROI) through which all periventricular veins must cross is defined. Within this ROI, a combination of the vesselness filter, tubular tracking, and hysteresis thresholding is applied to locate periventricular veins.
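Of the three steps, hysteresis thresholding is the easiest to make concrete: weak responses survive only if their connected component also contains a strong response. A minimal sketch with invented thresholds and a toy response map:

```python
import numpy as np
from scipy import ndimage

def hysteresis_threshold(img, low, high):
    """Hysteresis thresholding: keep weak responses (> low) only if their
    connected component also contains a strong response (> high)."""
    weak = img > low
    labels, n = ndimage.label(weak)            # 4-connected components
    keep = np.unique(labels[img > high])       # component ids touching a strong pixel
    return np.isin(labels, keep[keep > 0])

# Toy vesselness response: one chain anchored by a strong pixel (0.9),
# plus two weak-only responses that should be discarded.
img = np.array([[0.2, 0.6, 0.9, 0.6, 0.0],
                [0.0, 0.0, 0.0, 0.0, 0.6],
                [0.6, 0.7, 0.0, 0.0, 0.0]])
mask = hysteresis_threshold(img, low=0.5, high=0.8)
```

Only the chain containing the 0.9 response survives; the isolated 0.6 and the 0.6/0.7 pair are suppressed, which is how the vein detector rejects weak noise while keeping faint continuations of true vessels.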

All detected locations were evaluated by an expert human observer. The results showed a positive predictive value of 88% and a sensitivity of 95% for detecting periventricular veins.

The proposed method shows good results in detecting periventricular veins on 7.0T MR images of the brain. Compared to previous works, which used only a 1D or 2D ROI and limited image processing, our work presents a more comprehensive definition of the ROI, advanced image processing techniques to detect periventricular veins, and a quantitative analysis of the performance. The results of this proof-of-principle study are promising and will be used to assess periventricular veins on 7.0T brain MRI.
Automated coronary artery calcium scoring from non-contrast CT using a patient-specific algorithm
Xiaowei Ding, Piotr J. Slomka, Mariana Diaz-Zamudio, et al.
Non-contrast cardiac CT is used worldwide to assess coronary artery calcium (CAC), a subclinical marker of coronary atherosclerosis. Manual quantification of regional CAC scores includes identifying candidate regions, followed by thresholding and connected component labeling. We aimed to develop and validate a fully automated algorithm for both overall and regional measurement of CAC scores from non-contrast CT, using a hybrid algorithm combining multi-atlas registration, active contours, and knowledge-based region separation. A co-registered, segmented CT atlas was created from manually segmented non-contrast CT data from 10 patients (5 men, 5 women) and stored offline. For each patient scan, the heart region, left ventricle, right ventricle, ascending aorta, and aortic root are located by multi-atlas registration followed by active contour refinement. The regional coronary artery territories (left anterior descending, left circumflex, and right coronary artery) are separated using a knowledge-based region separation algorithm. Calcifications in these territories are detected by region growing at each lesion. Global and regional Agatston scores and volume scores were calculated in 50 patients. Agatston scores and volume scores calculated by the algorithm and by the expert showed excellent correlation (Agatston score: r = 0.97, p < 0.0001; volume score: r = 0.97, p < 0.0001) with no significant differences by comparison of individual data points (Agatston score: p = 0.30; volume score: p = 0.33). The total processing time was <60 s on a standard computer. Our results show that fast, accurate, and automated quantification of CAC scores from non-contrast CT is feasible.
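Agatston scoring itself follows a standard per-lesion rule: connected lesions at or above 130 HU are weighted by their peak attenuation and multiplied by their area. A per-slice sketch with made-up pixel size and lesion values (not the authors' pipeline, which first localizes the coronary territories):

```python
import numpy as np
from scipy import ndimage

def agatston_score(slice_hu, pixel_area_mm2, min_area_mm2=1.0):
    """Per-slice Agatston score: each connected lesion >= 130 HU contributes
    (area in mm^2) x (weight from peak HU: 1 for 130-199, 2 for 200-299,
    3 for 300-399, 4 for >= 400)."""
    mask = slice_hu >= 130
    labels, n = ndimage.label(mask)
    score = 0.0
    for i in range(1, n + 1):
        lesion = labels == i
        area = lesion.sum() * pixel_area_mm2
        if area < min_area_mm2:
            continue                      # ignore sub-millimetre specks
        peak = slice_hu[lesion].max()
        w = min(int(peak // 100), 4)      # 130->1, 250->2, 310->3, >=400->4
        score += w * area
    return score

s = np.zeros((10, 10))
s[2:4, 2:4] = 250      # 4-pixel lesion, peak 250 HU -> weight 2
s[7, 7] = 500          # single pixel, area below threshold -> ignored
val = agatston_score(s, pixel_area_mm2=0.5)
```

With a 0.5 mm² pixel, the 4-pixel lesion scores 2 × 2.0 mm² = 4.0, while the isolated high-density pixel is rejected by the minimum-area rule.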
Computational analysis of PET by AIBL (CapAIBL): a cloud-based processing pipeline for the quantification of PET images
Pierrick Bourgeat, Vincent Dore, Jurgen Fripp, et al.
With the advances of PET tracers for β-amyloid (Aβ) detection in neurodegenerative diseases, automated quantification methods are desirable. For clinical use, there is a great need for a PET-only quantification method, as MR images are not always available. In this paper, we validate a previously developed PET-only quantification method against MR-based quantification using 6 tracers: 18F-Florbetaben (N=148), 18F-Florbetapir (N=171), 18F-NAV4694 (N=47), 18F-Flutemetamol (N=180), 11C-PiB (N=381), and 18F-FDG (N=34). The results show an overall mean absolute percentage error of less than 5% for each tracer. The method has been implemented as a remote service called CapAIBL (http://milxcloud.csiro.au/capaibl). PET images are uploaded to a cloud platform, where they are spatially normalised to a standard template and quantified. A report containing global as well as local quantification, along with a surface projection of the β-amyloid deposition, is automatically generated at the end of the pipeline and emailed to the user.
Image-based reconstruction of 3D myocardial infarct geometry for patient specific applications
Eranga Ukwatta, Martin Rajchl, James White, et al.
Accurate reconstruction of the three-dimensional (3D) geometry of a myocardial infarct from two-dimensional (2D) multi-slice image sequences has important applications in the clinical evaluation and treatment of patients with ischemic cardiomyopathy. However, this reconstruction is challenging because the resolution of common clinical scans used to acquire infarct structure, such as short-axis, late-gadolinium enhanced cardiac magnetic resonance (LGE-CMR) images, is low, especially in the out-of-plane direction. In this study, we propose a novel technique to reconstruct the 3D infarct geometry from low resolution clinical images. Our methodology is based on a function called logarithm of odds (LogOdds), which allows the broader class of linear combinations in the LogOdds vector space as opposed to being limited to only a convex combination in the binary label space. To assess the efficacy of the method, we used high-resolution LGE-CMR images of 36 human hearts in vivo, and 3 canine hearts ex vivo. The infarct was manually segmented in each slice of the acquired images, and the manually segmented data were downsampled to clinical resolution. The developed method was then applied to the downsampled image slices, and the resulting reconstructions were compared with the manually segmented data. Several existing reconstruction techniques were also implemented, and compared with the proposed method. The results show that the LogOdds method significantly outperforms all the other tested methods in terms of region overlap.
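The LogOdds idea — mapping labels into a space where arbitrary linear combinations are valid, blending there, and mapping back — can be sketched for two slices. Blurring the binary labels first is one simple choice of fuzzy embedding (the paper's exact LogOdds construction may differ); all sizes and parameters below are invented.

```python
import numpy as np
from scipy import ndimage

def logodds(p, eps=1e-3):
    """Map a fuzzy label map into LogOdds (logit) space."""
    p = np.clip(p, eps, 1 - eps)
    return np.log(p / (1 - p))

def interp_slices(bin_a, bin_b, alpha=0.5, sigma=1.0):
    """Interpolate between two binary infarct slices: soften each label map
    into a probability, move to LogOdds space (where linear combinations
    are well defined), blend, and map back through the sigmoid."""
    pa = ndimage.gaussian_filter(bin_a.astype(float), sigma)
    pb = ndimage.gaussian_filter(bin_b.astype(float), sigma)
    blend = (1 - alpha) * logodds(pa) + alpha * logodds(pb)
    return 1.0 / (1.0 + np.exp(-blend)) > 0.5

# Two in-plane slices whose infarct regions are offset from each other.
a = np.zeros((16, 16)); a[2:10, 2:10] = 1.0
b = np.zeros((16, 16)); b[6:14, 6:14] = 1.0
mid = interp_slices(a, b)
```

Blending directly in binary label space would only produce the intersection or union; the LogOdds blend instead yields a genuine intermediate shape, which is what makes between-slice reconstruction possible.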
Initial evaluation of a modified dual-energy window scatter correction method for CZT-based gamma cameras for breast SPECT
Steve D. Mann, Martin P. Tornai
Solid-state cadmium zinc telluride (CZT) gamma cameras for SPECT imaging offer significantly improved energy resolution compared to traditional scintillation detectors. However, the photopeak resolution is often asymmetric due to incomplete charge collection within the detector, resulting in many photopeak events being incorrectly sorted into lower energy bins (“tailing”). These misplaced events contaminate the true scatter signal, which may negatively impact scatter correction methods that rely on estimates of scatter from the spectra. Additionally, because CZT detectors are organized into arrays, each individual detector element may exhibit a different degree of tailing. Here, we present a modified dual-energy window (DEW) scatter correction method for emission detection and imaging that attempts to account for the position-dependent effects of incomplete charge collection in the CZT gamma camera of our dedicated breast SPECT-CT system. Point source measurements and geometric phantoms were used to estimate the impact of tailing on the scatter signal and to extract a better estimate of the ratio of scatter within two energy windows. To evaluate the method, cylindrical phantoms with and without a separate fillable chamber were scanned to determine the impact on quantification in hot, cold, and uniform background regions. Projections were reconstructed using OSEM, and the results for the traditional and modified scatter correction methods were compared. Results show that, while modestly reduced quantification accuracy was observed in the hot and cold regions of the multi-chamber phantoms, the modified scatter correction method yields up to 8% improved quantification accuracy with 4% less added noise than the traditional DEW method within uniform background regions.
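The underlying dual-energy window correction subtracts a scaled scatter-window image from the photopeak-window image; the modification described above amounts to making the scaling factor position-dependent (calibrated per detector element). A sketch with invented count values:

```python
import numpy as np

def dew_scatter_correct(photopeak, scatter_win, k):
    """Dual-energy-window correction: scatter inside the photopeak window is
    estimated as k times the counts in a lower 'scatter' window and
    subtracted, clamped at zero counts."""
    return np.maximum(photopeak - k * scatter_win, 0.0)

# A global scalar k is the traditional method; a per-detector-element k map
# (values here are made up) is the gist of the modified, position-dependent
# correction for nonuniform tailing across the CZT array.
photopeak = np.array([[100.0, 80.0], [60.0, 50.0]])
scatter   = np.array([[ 40.0, 30.0], [20.0, 90.0]])
k_map     = np.array([[  0.5,  0.5], [ 0.5,  0.5]])
corrected = dew_scatter_correct(photopeak, scatter, k_map)
```

Because NumPy broadcasting accepts either a scalar or an array for `k`, the same function expresses both the traditional and the element-wise variant.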
Schizophrenia patients differentiation based on MR vascular perfusion and volumetric imaging
A. B. Spanier, L. Joskowicz, S. Moshel, et al.
CANDECOMP/PARAFAC decomposition (CPD) has emerged as a framework for modeling N-way arrays (higher-order matrices). CPD is naturally well suited to the analysis of data sets comprised of observations of a function of multiple discrete indices. In this study, we evaluate the prospects of using CPD to model MRI brain properties (i.e., brain volume and gray level) for schizophrenia diagnosis. Given that 3D imaging data consist of millions of voxels per patient, diagnosing a schizophrenia patient based on per-voxel analysis constitutes a methodological challenge (e.g., the multiple comparison problem). We show that CPD can potentially be used as a dimensionality reduction method and as a discriminator between schizophrenia patients and matched controls, using the gradient of pre- and post-Gd T1-weighted MRI data, which is strongly correlated with cerebral blood perfusion. Our approach was tested on 68 MRI scans: 40 first-episode schizophrenia patients and 28 matched controls. The CPD subject scores exhibit statistically significant results (P < 0.001). In the context of diagnosing schizophrenia with MRI, the results suggest that CPD could potentially be used to discriminate between schizophrenia patients and matched controls. In addition, the CPD model suggests brain regions that might exhibit abnormalities in schizophrenia patients, for future research.
Image registration based on the structure tensor of the local phase
Zhang Li, Lucas J. van Vliet, Jaap Stoker, et al.
Image registration of medical images in the presence of large intra-image signal fluctuations is a challenging task. Our paper addresses this problem by introducing a new concept based on the structure tensor of the local phase. The local phase is calculated from the monogenic signal representation of the images, and the local phase image is hardly affected by unwanted signal fluctuations due to a space-variant background or a space-variant contrast. The boundary structure tensor combines the responses of edges and corners/junctions in one tensor, which has several advantages compared to other structure tensors. We reorient the structure tensor during the registration by means of the finite-strain technique; the structure tensor itself is only calculated once, during a preprocessing step. The results demonstrate that the proposed method effectively deals with large signal fluctuations and performs significantly better than competing techniques.
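The structure tensor itself is the Gaussian-smoothed outer product of the image gradient. The sketch below computes it on raw intensities of a toy edge image for simplicity; the paper's contribution is to build it from the local-phase image (via the monogenic signal) and to use a boundary-tensor variant, neither of which is reproduced here.

```python
import numpy as np
from scipy import ndimage

def structure_tensor(img, sigma=1.5):
    """Classic 2-D structure tensor J = G_sigma * (grad I . grad I^T),
    returned as its three distinct components (Jxx, Jxy, Jyy)."""
    gx = ndimage.sobel(img, axis=1, mode="reflect")
    gy = ndimage.sobel(img, axis=0, mode="reflect")
    Jxx = ndimage.gaussian_filter(gx * gx, sigma)
    Jxy = ndimage.gaussian_filter(gx * gy, sigma)
    Jyy = ndimage.gaussian_filter(gy * gy, sigma)
    return Jxx, Jxy, Jyy

# A vertical step edge: the tensor's energy should sit entirely in Jxx,
# i.e. the dominant gradient orientation is horizontal.
img = np.zeros((32, 32)); img[:, 16:] = 1.0
Jxx, Jxy, Jyy = structure_tensor(img)
```

The eigenvectors of the 2×2 tensor at each pixel give the local orientation used to drive (and, with the finite-strain technique, reorient) the registration.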
A liver registration method for segmented multi-phase CT images
Shuyue Shi, Rong Yuan, Zhi Sun, et al.
In order to build high-quality geometric models of the liver and its vascular system, the multi-phase CT series used in a computer-aided diagnosis and surgical planning system aimed at liver diseases have to be accurately registered. In this paper, we model the segmented liver and its vascular system as a complex shape and propose a two-step registration method. Without any tree modeling of the vessels, this method carries out a simultaneous registration of both the liver tissue and the vascular system inside it. First, a rigid alignment using the vessels as features is applied to the complex shape model, with a genetic algorithm as the optimization method. Second, we achieve the elastic shape registration by combining incremental free-form deformation (IFFD) with a modified iterative closest point (ICP) algorithm. Inspired by the concept of the demons method, we propose to calculate a fastest diffusion vector (FDV) for each control point on the IFFD lattice to replace the point correspondences needed in ICP iterations. Under the iterative framework of the modified ICP, the optimal solution of the control points' displacements at every IFFD level can be obtained efficiently. The method has been quantitatively evaluated on clinical multi-phase CT series.
Non-rigid MRI-TRUS registration in targeted prostate biopsy
Bahram Marami, Shahin Sirouspour, Suha Ghoul, et al.
A non-rigid registration method is presented for the alignment of pre-procedural magnetic resonance (MR) images with delineated suspicious regions to intra-procedural 3D transrectal ultrasound (TRUS) images in TRUS-guided prostate biopsy. In the first step, 3D MR and TRUS images are aligned rigidly using six pairs of manually identified approximate matching points on the boundary of the prostate. Then, two image volumes are non-rigidly registered using a finite element method (FEM)-based linear elastic deformation model. A vector of observation prediction errors at some points of interest within the prostate volume is computed using an intensity-based similarity metric called the modality independent neighborhood descriptor (MIND). The error vector is employed in a classical state estimation framework to estimate prostate deformation between MR and TRUS images. The points of interests are identified using speeded-up robust features (SURF) that are scale and rotation-invariant descriptors in MR images. The proposed registration method on 10 sets of prostate MR and TRUS images yielded a target registration error of 1.99±0.83 mm, and 1.97±0.87 mm in the peripheral zone (PZ) and whole gland (WG), respectively, using 68 manually-identified fiducial points. The Dice similarity coefficient (DSC) was 87.9±2.9, 82.3±4.8, 93.0±1.7, and 84.2±6.2 percent for the WG, apex, mid-gland and base of the prostate, respectively. Moreover, the mean absolute distances (MAD) between the WG surfaces in the TRUS and registered MR images was 1.6±0.3 mm. Registration results indicate effectiveness of the proposed method in improving the targeting accuracy in the TRUS-guided prostate biopsy.
Deformable registration of CT and cone-beam CT by local CBCT intensity correction
Seyoun Park, William Plishker, Raj Shekhar, et al.
In this paper, we propose a method to accurately register CT to cone-beam CT (CBCT) by iteratively correcting local CBCT intensity. CBCT is a widely used intra-operative imaging modality in image-guided radiotherapy and surgery. A short scan followed by filtered backprojection is typically used for CBCT reconstruction. While data on the mid-plane (the plane of source-detector rotation) are complete, off-mid-planes suffer varying degrees of information deficiency, and the computed reconstructions are approximate. This causes reconstruction artifacts that differ with slice location and therefore impedes accurate registration between CT and CBCT. To address this issue, we correct the CBCT intensities by matching local intensity histograms slice by slice in conjunction with intensity-based deformable registration. This correction-registration step is repeated until the resulting image converges. We tested the proposed method on eight head-and-neck cancer cases and compared its performance with state-of-the-art registration methods widely used for CT-CBCT registration: B-spline, demons, and optical flow. Normalized mutual information (NMI), normalized cross-correlation (NCC), and structural similarity (SSIM) were computed as similarity measures for the performance evaluation. Our method produced an overall NMI of 0.59, NCC of 0.96, and SSIM of 0.93, outperforming the existing methods by 3.6%, 2.4%, and 2.8% in terms of NMI, NCC, and SSIM scores, respectively. Experimental results show that our method is more consistent and robust than existing algorithms, and also computationally efficient, with faster convergence.
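Matching a slice's intensity histogram to a reference can be sketched as a quantile mapping; this is a simplified stand-in for the paper's local slice-by-slice correction (details and names are illustrative):

```python
import numpy as np

def match_histogram(slice_img, ref_img):
    """Map slice intensities so their empirical CDF matches the reference's."""
    s_vals, s_idx, s_cnt = np.unique(slice_img.ravel(),
                                     return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(ref_img.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / slice_img.size
    r_cdf = np.cumsum(r_cnt) / ref_img.size
    mapped = np.interp(s_cdf, r_cdf, r_vals)    # quantile mapping
    return mapped[s_idx].reshape(slice_img.shape)

rng = np.random.default_rng(1)
slice_img = rng.uniform(0.0, 1.0, (64, 64))     # e.g. an artifact-laden slice
ref_img = rng.normal(0.0, 1.0, (64, 64))        # e.g. the matching CT slice
out = match_histogram(slice_img, ref_img)
```

After mapping, the slice's intensity distribution approximates that of the reference, which is the precondition for a mono-modal similarity measure to work well.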
A fast alignment method for breast MRI follow-up studies using automated breast segmentation and current-prior registration
Lei Wang, Jan Strehlow, Jan Rühaak, et al.
In breast cancer screening for high-risk women, follow-up magnetic resonance images (MRI) are acquired with a time interval ranging from several months up to a few years. Prior MRI studies may provide additional clinical value when examining the current one and thus have the potential to increase the sensitivity and specificity of screening. To build a spatial correlation between suspicious findings in the current and prior studies, a reliable alignment method between follow-up studies is desirable. However, the long time interval, different scanners and imaging protocols, and varying breast compression can result in large deformations, which challenge the registration process.

In this work, we present a fast and robust spatial alignment framework, which combines automated breast segmentation and current-prior registration techniques in a multi-level fashion. First, fully automatic breast segmentation is applied to extract the breast masks that are used to obtain an initial affine transform. Then, a non-rigid registration algorithm using normalized gradient fields as similarity measure together with curvature regularization is applied. A total of 29 subjects and 58 breast MR images were collected for performance assessment. To evaluate the global registration accuracy, the volume overlap and boundary surface distance metrics are calculated, resulting in an average Dice Similarity Coefficient (DSC) of 0.96 and root mean square distance (RMSD) of 1.64 mm. In addition, to measure local registration accuracy, for each subject a radiologist annotated 10 pairs of markers in the current and prior studies representing corresponding anatomical locations. The average distance error of marker pairs dropped from 67.37 mm to 10.86 mm after applying registration.
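The normalized gradient fields (NGF) similarity used as the distance measure rewards aligned image edges independently of absolute intensities. A minimal sketch of an NGF-style distance (the edge parameter `eps` and the exact normalization are assumptions, not the authors' formulation):

```python
import numpy as np

def ngf_distance(a, b, eps=1e-2):
    """NGF-style distance: small where image gradients align (up to sign)."""
    ga = np.stack(np.gradient(a))
    gb = np.stack(np.gradient(b))
    na = ga / np.sqrt((ga ** 2).sum(axis=0) + eps ** 2)   # normalized gradients
    nb = gb / np.sqrt((gb ** 2).sum(axis=0) + eps ** 2)
    align = (na * nb).sum(axis=0)                         # cosine of edge angle
    return float((1.0 - align ** 2).mean())

# aligned edges score lower than shifted ones
x, y = np.meshgrid(np.linspace(0, 4 * np.pi, 64), np.linspace(0, 4 * np.pi, 64))
a = np.sin(x) * np.cos(y)
b = np.roll(a, 7, axis=1)
```

The `eps` parameter suppresses the contribution of gradients at the noise level, which is what makes the measure usable across differing protocols.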
Image registration using stationary velocity fields parameterized by norm-minimizing Wendland kernel
Akshay Pai, Stefan Sommer, Lauge Sørensen, et al.
Interpolating kernels are crucial to solving a stationary velocity field (SVF) based image registration problem, because velocity fields need to be computed at non-integer locations during integration. The regularity of the solution to the SVF registration problem is controlled by the regularization term. In a variational formulation, this term is traditionally expressed as a squared norm, which is a scalar inner product of the interpolating kernels parameterizing the velocity fields. The minimization of this term using the standard spline interpolation kernels (linear or cubic) is only approximate because of the lack of a compatible norm. In this paper, we propose to replace such interpolants with a norm-minimizing interpolant, the Wendland kernel, which has the same computational simplicity as B-splines. An application to the Alzheimer's Disease Neuroimaging Initiative data showed that Wendland SVF based measures separate Alzheimer's disease from normal controls better than both B-spline SVFs (p<0.05 in the amygdala) and B-spline free-form deformation (p<0.05 in the amygdala and cortical gray matter).
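The Wendland family of kernels is compactly supported and positive definite; the C2 member valid in up to three dimensions is ψ(r) = (1 − r)⁴₊(4r + 1). A small sketch of the kernel and of evaluating a kernel-parameterized velocity field at non-integer locations (the `svf_at` helper and its parameterization are hypothetical, not the paper's implementation):

```python
import numpy as np

def wendland_c2(r):
    """Wendland C2 kernel (1 - r)^4_+ (4 r + 1): compactly supported on [0, 1],
    positive definite in up to three dimensions."""
    r = np.asarray(r, dtype=float)
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def svf_at(points, centers, coeffs, sigma):
    """Velocity at arbitrary (non-integer) points of a kernel-parameterized SVF:
    v(x) = sum_k c_k * psi(|x - x_k| / sigma)."""
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
    return wendland_c2(d / sigma) @ coeffs      # (P, K) @ (K, dim)

centers = np.array([[0.0, 0.0, 0.0]])
coeffs = np.array([[1.0, 2.0, 3.0]])
v = svf_at(centers, centers, coeffs, sigma=1.0)   # psi(0) = 1, so v = coeffs
```

The compact support keeps evaluation sparse, while positive definiteness gives the compatible norm that B-splines lack.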
Tracking of deformable target in 2D ultrasound images
Lucas Royer, Maud Marchal, Anthony Le Bras, et al.
In this paper, we propose a novel approach for automatically tracking a deformable target within 2D ultrasound images. Our approach uses only dense information combined with a physically-based model and therefore has the advantage of requiring neither fiducial markers nor a priori knowledge of the anatomical environment. The physical model is a mass-spring damper system driven by different types of forces, where the external forces are obtained by maximizing an image similarity metric between a reference target and the deformed target over time. This deformation is represented by a parametric warping model whose optimal parameters are estimated from the intensity variation. This warping function is well suited to representing localized deformations in ultrasound images because it directly links the forces applied to each mass with the motion of all the pixels in its vicinity. The internal forces constrain the deformation to physically plausible motions and reduce sensitivity to speckle noise. The approach was validated on simulated and real data, for both rigid and free-form motions of soft tissues. The results are very promising, since the deformable target could be tracked with good accuracy for both types of motion. Our approach opens novel possibilities for computer-assisted interventions involving deformable organs and could be used as a new tool for interactive tracking of soft tissues in ultrasound images.
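The damped mass-spring dynamics behind the internal forces can be illustrated with a single mass anchored by one spring; this toy sketch (all constants are illustrative, not the paper's values) shows the damped convergence to the rest configuration that regularizes the tracking:

```python
import numpy as np

def simulate(x0, v0, k=4.0, c=1.5, m=1.0, rest=1.0, dt=0.01, steps=2000):
    """Semi-implicit Euler for one mass on a damped spring anchored at 0.

    F = -k (x - rest) - c v; with damping, x settles at the rest length."""
    x, v = x0, v0
    for _ in range(steps):
        f = -k * (x - rest) - c * v
        v += dt * f / m          # update velocity first (symplectic Euler)
        x += dt * v
    return x, v

x, v = simulate(x0=3.0, v0=0.0)   # displaced mass relaxes back to rest = 1.0
```

In the full tracking model, the external image-driven forces pull the masses toward the observed deformation while these internal forces damp out speckle-induced jitter.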
Accurate CT-MR image registration for deep brain stimulation: a multi-observer evaluation study
Jan Rühaak, Alexander Derksen, Stefan Heldmann, et al.
Since the first clinical interventions in the late 1980s, Deep Brain Stimulation (DBS) of the subthalamic nucleus has evolved into a very effective treatment option for patients with severe Parkinson's disease. DBS entails the implantation of an electrode that delivers high-frequency stimulation to a target area deep inside the brain. Very accurate placement of the electrode is a prerequisite for a positive therapy outcome. The assessment of the intervention result is of central importance in DBS treatment and involves the registration of pre- and postinterventional scans.

In this paper, we present an image processing pipeline for highly accurate registration of postoperative CT to preoperative MR. Our method consists of two steps: a fully automatic pre-alignment using a detection of the skull tip in the CT based on fuzzy connectedness, and an intensity-based rigid registration. The registration uses the Normalized Gradient Fields distance measure in a multilevel Gauss-Newton optimization framework and focuses on a region around the subthalamic nucleus in the MR.

The accuracy of our method was extensively evaluated on 20 DBS datasets from clinical routine and compared with manual expert registrations. For each dataset, three independent registrations were available, allowing algorithmic performance to be related to expert performance. Our method achieved an average registration error of 0.95 mm in the target region around the subthalamic nucleus, compared to an inter-observer variability of 1.12 mm. Together with the short registration time of about five seconds on average, our method forms a very attractive package that can be considered ready for clinical use.
Annotation-free probabilistic atlas learning for robust anatomy detection in CT images
Astrid Franz, Nicole Schadewaldt, Heinrich Schulz, et al.
A fully automatic method generating a whole body atlas from CT images is presented. The atlas serves as a reference space for annotations. It is based on a large collection of partially overlapping medical images and a registration scheme. The atlas itself consists of probabilistic tissue type maps and can represent anatomical variations. The registration scheme is based on an entropy-like measure of these maps and is robust with respect to field-of-view variations. In contrast to other atlas generation methods, which typically rely on a sufficiently large set of annotations on training cases, the presented method requires only the images. An iterative refinement strategy is used to automatically stitch the images to build the atlas.

Affine registration of unseen CT images to the probabilistic atlas can be used to transfer reference annotations, e.g. organ models for segmentation initialization or reference bounding boxes for field-of-view selection. The robustness and generality of the method are shown using a three-fold cross-validation of the registration on a set of 316 CT images of unknown content and large anatomical variability. As an example, 17 organs are annotated in the atlas reference space and their localization in the test images is evaluated. The method yields a recall (sensitivity), specificity, and precision of at least 96% and thus performs excellently in comparison to competing methods.
On the usefulness of gradient information in multi-objective deformable image registration using a B-spline-based dual-dynamic transformation model: comparison of three optimization algorithms
Kleopatra Pirpinia, Peter A. N. Bosman, Jan-Jakob Sonke, et al.
The use of gradient information is well known to be highly useful in single-objective optimization-based image registration methods. However, its usefulness has not yet been investigated for deformable image registration from a multi-objective optimization perspective. To this end, within a previously introduced multi-objective optimization framework, we use a smooth B-spline-based dual-dynamic transformation model that allows us to derive gradient information analytically, while still being able to account for large deformations. Within the multi-objective framework, we previously employed a powerful evolutionary algorithm (EA) that computes and advances multiple outcomes at once, resulting in a set of solutions (a so-called Pareto front) that represents efficient trade-offs between the objectives. With the addition of the B-spline-based transformation model, we studied the usefulness of gradient information in multi-objective deformable image registration using three different optimization algorithms: the (gradient-less) EA, a gradient-only algorithm, and a hybridization of the two. We evaluated the algorithms on the registration of highly deformed images: 2D MRI slices of the breast in prone and supine positions. Results demonstrate that gradient-based multi-objective optimization significantly speeds up the initial stages of optimization. However, given sufficient computational resources, better results could still be obtained with the EA. Ultimately, the hybrid EA found the best overall approximation of the optimal Pareto front, further indicating that adding gradient-based optimization to multi-objective deformable image registration can indeed be beneficial.
Piecewise nonlinear image registration using DCT basis functions
Lin Gan, Gady Agam
The deformation field in nonlinear image registration is usually described by a global model. Such models often face the problem that a locally complex deformation cannot be accurately modeled simply by increasing the degrees of freedom (DOF). In addition, highly complex models require additional regularization, which is usually ineffective when applied globally. Registering locally corresponding regions addresses this problem in a divide-and-conquer strategy. In this paper we propose a piecewise image registration approach using Discrete Cosine Transform (DCT) basis functions for a nonlinear model. The contributions of this paper are threefold. First, we develop a multi-level piecewise registration framework that extends the concept of piecewise linear registration and works with any nonlinear deformation model; this framework is then applied to nonlinear DCT registration. Second, we show how adaptive model complexity and regularization can be applied for local piece registration, thus accounting for higher variability. Third, we show how the proposed piecewise DCT can overcome the fundamental problem of inverting a large curvature matrix in global DCT when using a high number of degrees of freedom. The proposed approach can be viewed as an extension of global DCT registration in which the overall model complexity is increased while effective local regularization is achieved. Experimental evaluation compares the proposed approach to piecewise linear registration using an affine transformation model and to global nonlinear registration using a DCT model. Preliminary results show that the proposed approach achieves improved performance.
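A deformation component parameterized by DCT basis functions can be sketched in 1D as a truncated cosine expansion (coefficients and grid are illustrative; the paper's model is multi-dimensional and piecewise):

```python
import numpy as np

def dct_displacement(coeffs, n):
    """1D displacement field u(x) = sum_k c_k cos(pi k (2x + 1) / (2n)),
    i.e. a truncated DCT-II basis expansion on n grid points."""
    x = np.arange(n)
    basis = np.cos(np.pi * np.outer(np.arange(len(coeffs)), 2 * x + 1) / (2 * n))
    return coeffs @ basis

field = dct_displacement(np.array([0.0, 1.0]), 8)   # one smooth cosine mode
```

Truncating the expansion caps the model's DOF, which is exactly the knob that the piecewise scheme tunes per region instead of globally.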
Personalized x-ray reconstruction of the proximal femur via a non-rigid 2D-3D registration
Weimin Yu, Philippe Zysset, Guoyan Zheng
In this paper we present a new approach for personalized X-ray reconstruction of the proximal femur via non-rigid registration of a 3D volumetric template to 2D calibrated C-arm images. The 2D-3D registration uses a hierarchical two-stage strategy: a global scaled rigid registration stage followed by a regularized deformable B-spline registration stage. In both stages, a set of uniformly spaced control points is placed over the domain of the 3D volumetric template, and the registrations are driven by computing updated positions of these control points, which then allows the 3D volumetric template to be accurately registered to the reference space of the C-arm images. Comprehensive experiments on simulated images, on images of cadaveric femurs, and on clinical datasets were designed and conducted to evaluate the performance of the proposed approach. Quantitative and qualitative evaluation results are given, which demonstrate the efficacy of the present approach.
A fast and memory efficient stationary wavelet transform for 3D cell segmentation
Wavelet approaches have proven effective in many segmentation applications and in particular in the segmentation of cells, which are blob-like in shape. We build upon an established wavelet segmentation algorithm and demonstrate how to overcome some of its limitations based on the theoretical derivation of the compounding process of iterative convolutions. We demonstrate that the wavelet decomposition can be computed for any desired level directly without iterative decompositions that require additional computation and memory. This is especially important when dealing with large 3D volumes that consume significant amounts of memory and require intense computation. Our approach is generalized to automatically handle both 2D and 3D and also implicitly handles the anisotropic pixel size inherent in such datasets. Our results demonstrate a 28X improvement in speed and 8X improvement in memory efficiency for standard size 3D confocal image volumes without adversely affecting the accuracy.
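The key observation, that iterated à-trous smoothing can be replaced by a single convolution with a precomputed compound kernel, can be sketched in 1D with the usual B3-spline kernel (a common choice in wavelet-based cell segmentation; the authors' exact kernel is not stated here):

```python
import numpy as np

B3 = np.array([1, 4, 6, 4, 1]) / 16.0          # B3-spline smoothing kernel

def atrous_kernel(level):
    """Equivalent single kernel for `level` iterations of the a-trous scheme:
    the convolution of the base kernel upsampled by 2**j for j = 0..level-1."""
    k = np.array([1.0])
    for j in range(level):
        up = np.zeros((len(B3) - 1) * 2 ** j + 1)
        up[:: 2 ** j] = B3                      # insert 2**j - 1 zeros ("holes")
        k = np.convolve(k, up)
    return k

def smooth_direct(signal, level):
    """One convolution with the compound kernel."""
    return np.convolve(signal, atrous_kernel(level), mode="same")

def smooth_iterative(signal, level):
    """The classical scheme: one convolution per level."""
    s = signal.astype(float)
    for j in range(level):
        up = np.zeros((len(B3) - 1) * 2 ** j + 1)
        up[:: 2 ** j] = B3
        s = np.convolve(s, up, mode="same")
    return s

sig = np.random.default_rng(2).normal(size=64)
d = smooth_direct(sig, 3)
it = smooth_iterative(sig, 3)
```

Away from the boundaries the two routes agree exactly, but the direct route needs no intermediate arrays, which is the source of the memory savings on large 3D volumes.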
Automated retinal fovea type distinction in spectral-domain optical coherence tomography of retinal vein occlusion
Jing Wu, Sebastian M. Waldstein, Bianca S. Gerendas, et al.
Spectral-domain Optical Coherence Tomography (SD-OCT) is a non-invasive modality for acquiring high-resolution, three-dimensional (3D) cross-sectional volumetric images of the retina and the subretinal layers. SD-OCT also allows detailed imaging of retinal pathology, aiding clinicians in the diagnosis of sight-degrading diseases such as age-related macular degeneration (AMD), glaucoma, and retinal vein occlusion (RVO). Disease diagnosis, assessment, and treatment require a patient to undergo multiple OCT scans, possibly on multiple scanners, to accurately and precisely gauge disease activity, progression, and treatment success. However, cross-vendor imaging and patient movement may result in poor spatial correlation between scans, potentially leading to incorrect diagnosis or treatment analysis. The retinal fovea is the location of highest visual acuity and is present in all patients; it is therefore critical to vision and highly suitable as a primary landmark for cross-vendor/cross-patient registration and precise comparison of disease states. However, the fovea in diseased eyes is extremely challenging to locate due to its varying appearance and the presence of pathology that destroys the retinal layers. Categorising and detecting the fovea type is thus an important stage prior to automatically computing the fovea position.

Presented here is an automated cross-vendor method for fovea distinction in 3D SD-OCT scans of patients suffering from RVO, categorising scans into three distinct types. OCT scans are preprocessed by motion correction and noise filtering, followed by segmentation using a kernel graph-cut approach. A statistically derived mask is applied to the resulting scan, creating an ROI around the probable fovea location, from which the uppermost retinal surface is delineated. For a retina of normal appearance, minimisation to zero thickness is computed using the top two retinal surfaces. 3D local minima detection and layer thickness analysis are used to differentiate between the remaining two fovea types. Validation employs ground-truth fovea types identified by clinical experts at the Vienna Reading Center (VRC). The results presented here are intended to show the feasibility of this method for the accurate and reproducible distinction of retinal fovea types in multi-vendor 3D SD-OCT scans of patients suffering from RVO, and for use in fovea position detection systems as a landmark for intra- and cross-vendor 3D OCT registration.
Bootstrapping white matter segmentation, Eve++
Andrew Plassard, Kendra E. Hinton, Vijay Venkatraman, et al.
Multi-atlas labeling has come into widespread use for whole brain labeling on magnetic resonance imaging. Recent challenges have shown that leading techniques are near (or at) human expert reproducibility for cortical gray matter labels. However, these approaches tend to treat white matter as essentially homogeneous (as white matter exhibits isointense signal on structural MRI). The state of the art for white matter atlases is the single-subject Johns Hopkins Eve atlas. Numerous approaches have attempted to use tractography and/or orientation information to identify homologous white matter structures across subjects. Despite success with large tracts, these approaches have been plagued by difficulties with subtle differences in course, low signal-to-noise ratio, and complex structural relationships for smaller tracts. Here, we investigate the use of atlas-based labeling to propagate the Eve atlas to unlabeled datasets. We evaluate single atlas labeling and multi-atlas labeling using synthetic atlases derived from the single manually labeled atlas. On 5 representative tracts for 10 subjects, we demonstrate that (1) single atlas labeling generally provides segmentations within 2 mm mean surface distance, (2) morphologically constraining DTI labels within structural MRI white matter reduces variability, and (3) multi-atlas labeling did not improve accuracy. These efforts give a preliminary indication that single atlas labeling with correction is reasonable, but caution should be applied. To pursue multi-atlas labeling and more fully characterize overall performance, more labeled datasets would be necessary.
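The mean surface distance used to evaluate these segmentations can, for small binary masks, be computed by brute force over surface voxels; a hedged numpy sketch (real pipelines would use distance transforms; all names are illustrative):

```python
import numpy as np

def surface_voxels(mask):
    """Coordinates of mask voxels with at least one face neighbour outside."""
    pad = np.pad(mask, 1)                       # false border so edges count
    eroded = pad.copy()
    for ax in range(pad.ndim):
        eroded &= np.roll(pad, 1, axis=ax) & np.roll(pad, -1, axis=ax)
    crop = (slice(1, -1),) * mask.ndim
    return np.argwhere(pad[crop] & ~eroded[crop])

def mean_surface_distance(a, b):
    """Symmetric mean surface distance via brute-force pairwise distances
    (adequate for small masks used here as a sketch)."""
    pa, pb = surface_voxels(a), surface_voxels(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return (d.min(axis=1).mean() + d.min(axis=0).mean()) / 2.0

a = np.zeros((12, 12), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((12, 12), dtype=bool); b[3:7, 2:6] = True   # shifted by one voxel
```

For anisotropic voxels the coordinates would be scaled by the voxel spacing before taking norms.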
Bright-field cell image segmentation by principal component pursuit with an Ncut penalization
Yuehuan Chen, Justin W. L. Wan
Segmentation of cells in time-lapse bright-field microscopic images is crucial to understanding cell behaviour in oncological research. However, the complex nature of the cells makes it difficult to segment them accurately. Furthermore, poor contrast, broken cell boundaries, and the halo artifact pose additional challenges. Standard segmentation techniques such as edge-based methods, watershed, or active contours result in poor segmentation. Other existing methods for bright-field images cannot provide good results without localized segmentation steps. In this paper, we present two robust mathematical models that segment bright-field cells automatically over the entire image. These models treat cell image segmentation as a background subtraction problem, which can be formulated as a Principal Component Pursuit (PCP) problem. Our first segmentation model is formulated as PCP with nonnegativity constraints; we exploit the sparse component of the PCP solution to identify the cell pixels. However, there is no control over the quality of the sparse component, and the nonzero entries can scatter all over the image, resulting in a noisy segmentation. The second model improves on the first by combining PCP with spectral clustering. Although the two approaches are seemingly unrelated, we combine them by incorporating a normalized-cut term in the PCP as a measure of segmentation quality. The two models have been applied to a set of C2C12 cells imaged by bright-field microscopy. Experimental results demonstrate that the proposed models are effective in segmenting cells from bright-field images.
Locally adaptive MR intensity models and MRF-based segmentation of multiple sclerosis lesions
Alfiia Galimzianova, Žiga Lesjak, Boštjan Likar, et al.
Neuroimaging biomarkers are an important paraclinical tool used to characterize a number of neurological diseases; however, their extraction requires accurate and reliable segmentation of normal and pathological brain structures. For MR images of healthy brains, intensity models of normal-appearing brain tissue (NABT) in combination with Markov random field (MRF) models are known to give reliable and smooth NABT segmentation. However, the presence of pathology, MR intensity bias, and natural tissue-dependent intensity variability together make reliable estimation of the NABT intensity model from MR images difficult. In this paper, we propose a novel method for the segmentation of normal and pathological structures in brain MR images of multiple sclerosis (MS) patients that is based on a locally adaptive NABT model, a robust method for the estimation of model parameters, and an MRF-based segmentation framework. Experiments on multi-sequence brain MR images of 27 MS patients show that, compared to a whole-brain model and to the widely used Expectation-Maximization Segmentation (EMS) method, the locally adaptive NABT model increases the accuracy of MS lesion segmentation.
Improving the robustness of interventional 4D ultrasound segmentation through the use of personalized prior shape models
Daniel Barbosa, Sandro Queirós, Pedro Morais, et al.
While fluoroscopy is still the most widely used imaging modality to guide cardiac interventions, the fusion of pre-operative Magnetic Resonance Imaging (MRI) with real-time intra-operative ultrasound (US) is rapidly gaining clinical acceptance as a viable, radiation-free alternative. In order to improve the detection of the left ventricular (LV) surface in 4D ultrasound, we propose to take advantage of pre-operative MRI scans to extract a realistic geometrical model of the patient's cardiac anatomy. This can serve as prior information in the interventional setting, increasing the accuracy of the anatomy extraction step in US data. We build on a real-time 3D segmentation framework recently used to solve the LV segmentation problem in MR and US data independently, and we take advantage of this common link to introduce the prior information as a soft penalty term in the ultrasound segmentation algorithm. We tested the proposed algorithm on a clinical dataset of 38 patients undergoing both MR and US scans. The introduction of the personalized shape prior improves the accuracy and robustness of the LV segmentation, as supported by the error reduction when compared to core-lab manual segmentation of the same US sequences.
Novel multiresolution mammographic density segmentation using pseudo 3D features and adaptive cluster merging
Wenda He, Arne Juette, Erica R. E. Denton, et al.
Breast cancer is the most frequently diagnosed cancer in women. Early detection, precise identification of women at risk, and application of appropriate disease prevention measures are by far the most effective ways to overcome the disease. Successful mammographic density segmentation is a key aspect in deriving correct tissue composition, ensuring an accurate mammographic risk assessment. However, mammographic densities have not yet been fully incorporated into non-image-based risk prediction models (e.g. the Gail and the Tyrer-Cuzick models) because of unreliable segmentation consistency and accuracy. This paper presents a novel multiresolution mammographic density segmentation: a concept of stack representation is proposed, and pseudo 3D texture features are extracted by adapting techniques based on classic 2D first-order statistics. An unsupervised clustering technique is employed to achieve mammographic segmentation, with two improvements: 1) consistent segmentation through an optimal centroid initialisation step, and 2) a significantly reduced number of missegmentations through an adaptive cluster merging technique. A set of full-field digital mammograms was used in the evaluation. Visual assessment indicated substantial improvement in segmented anatomical structures and tissue-specific areas, especially in low mammographic density categories. The developed method demonstrated an ability to improve the quality of mammographic segmentation via clustering, with a 26% increase in segmented images of good quality when compared with the standard clustering approach. This in turn can be useful in early breast cancer detection, risk-stratified screening, and aiding radiologists in decision making prior to surgery and/or treatment.
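The two clustering improvements, deterministic centroid initialisation and adaptive cluster merging, can be sketched in 1D (the quantile initialisation and the merge threshold are illustrative choices, not the paper's exact scheme):

```python
import numpy as np

def kmeans_merge(x, k, merge_tol, iters=50):
    """1D k-means with deterministic quantile initialisation, then adaptive
    merging of clusters whose centroids lie closer than merge_tol."""
    cents = np.quantile(x, np.linspace(0.0, 1.0, k))    # reproducible start
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - cents[None, :]), axis=1)
        cents = np.array([x[labels == j].mean() if np.any(labels == j)
                          else cents[j] for j in range(k)])
    merged = []
    for c in np.sort(cents):                            # adaptive cluster merging
        if merged and c - merged[-1] < merge_tol:
            merged[-1] = (merged[-1] + c) / 2.0
        else:
            merged.append(c)
    return np.array(merged)

# two intensity populations; k = 4 over-segments, merging recovers 2 clusters
rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0.0, 0.1, 100), rng.normal(10.0, 0.1, 100)])
centroids = kmeans_merge(x, k=4, merge_tol=2.0)
```

Deterministic initialisation removes run-to-run variation, and merging absorbs redundant clusters that would otherwise fragment a single tissue class.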
Segmentation of organs at risk in CT volumes of head, thorax, abdomen, and pelvis
Miaofei Han, Jinfeng Ma, Yan Li, et al.
Accurate segmentation of organs at risk (OARs) is a key step in the treatment planning system (TPS) of image-guided radiation therapy. We are developing three classes of methods to segment 17 organs at risk throughout the whole body: brain, brain stem, eyes, mandible, temporomandibular joints, parotid glands, spinal cord, lungs, trachea, heart, liver, kidneys, spleen, prostate, rectum, femoral heads, and skin. The three classes of segmentation methods include (1) threshold-based methods for organs of large contrast with adjacent structures, such as lungs, trachea, and skin; (2) context-driven Generalized Hough Transform-based methods combined with a graph cut algorithm for robust localization and segmentation of the liver, kidneys, and spleen; and (3) atlas- and registration-based methods for segmentation of the heart and all organs in CT volumes of the head and pelvis. The segmentation accuracy for the seventeen organs was subjectively evaluated by two medical experts on a three-level score: 0, poor (unusable in clinical practice); 1, acceptable (minor revision needed); and 2, good (nearly no revision needed). A database was collected from Ruijin Hospital, Huashan Hospital, and Xuhui Central Hospital in Shanghai, China, including 127 head scans, 203 thoracic scans, 154 abdominal scans, and 73 pelvic scans. The percentages of “good” segmentation results were 97.6%, 92.9%, 81.1%, 87.4%, 85.0%, 78.7%, 94.1%, 91.1%, 81.3%, 86.7%, 82.5%, 86.4%, 79.9%, 72.6%, 68.5%, 93.2%, and 96.9% for brain, brain stem, eyes, mandible, temporomandibular joints, parotid glands, spinal cord, lungs, trachea, heart, liver, kidneys, spleen, prostate, rectum, femoral heads, and skin, respectively. Various organs at risk can be reliably segmented from CT scans by the three classes of segmentation methods.
Graph cut based co-segmentation of lung tumor in PET-CT images
Wei Ju, Dehui Xiang, Bin Zhang, et al.
Accurate segmentation of pulmonary tumors is important for clinicians to make an appropriate diagnosis and treatment plan. Positron Emission Tomography (PET) and Computed Tomography (CT) are two commonly used imaging technologies for image-guided radiation therapy. In this study, we present a graph-based method that integrates the two modalities to segment the tumor simultaneously on PET and CT images. The co-segmentation problem is formulated as an energy minimization problem. Two weighted sub-graphs are constructed for PET and CT, and the characteristic information of the two modalities is encoded on the edges of the graph. A context cost is enforced by adding context arcs to achieve consistent results between the two modalities. An optimal solution can be obtained by solving a maximum flow problem. The proposed segmentation method was validated on 18 sets of PET-CT images from different patients with non-small cell lung cancer (NSCLC). The quantitative results show a significant improvement with our method, with a mean DSC value of 0.82.
Segmentation of the liver from abdominal MR images: a level-set approach
Anwar Abdalbari, Xishi Huang, Jing Ren
The use of prior knowledge in the segmentation of abdominal MR images enables a more accurate and comprehensive interpretation of the organ to be segmented. Prior knowledge about an abdominal organ, such as the liver vessels, can be employed to obtain an accurate segmentation of the liver, which in turn leads to an accurate diagnosis or treatment plan. In this paper, a new method for segmenting the liver from abdominal MR images using the liver vessels as prior knowledge is proposed. The paper employs the level-set method to segment the liver from abdominal MR images. The speed image used in the level-set method is responsible for propagating and stopping region growing at boundaries. The poor contrast of MR images between the liver and the surrounding organs (e.g. stomach, kidneys, and heart) causes the segmented liver to leak into those organs, leading to inaccurate or incorrect segmentation. For that reason, a second speed image is developed as an extra term in the level set to control the front propagation at weak edges, together with the original speed image. The basic idea of the proposed approach is to use the second speed image as a boundary surface that is approximately orthogonal to the area of the leak. The aim of the new speed image is to slow down the level-set propagation and prevent leaks in regions close to the liver boundary. The new speed image is a surface created by filling holes to reconstruct the liver surface. These holes are formed by the exit and entry of the liver vessels and are considered the main cause of the segmentation leak. The proposed method shows superior results compared with other methods in the literature.
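A typical edge-stopping speed image for level-set propagation maps high gradient magnitude to low speed; a minimal sketch of such a term (the scale parameter `alpha` is an assumed illustrative value, and this is a generic formulation, not the paper's second speed image):

```python
import numpy as np

def speed_image(img, alpha=10.0):
    """Edge-stopping speed term g = 1 / (1 + (|grad I| / alpha)^2):
    near 1 in flat regions, near 0 at strong boundaries."""
    g = np.stack(np.gradient(img.astype(float)))
    mag = np.sqrt((g ** 2).sum(axis=0))
    return 1.0 / (1.0 + (mag / alpha) ** 2)

# synthetic step edge: the front would race through the flat left half
# and stall at the boundary column
img = np.zeros((20, 20))
img[:, 10:] = 100.0
s = speed_image(img, alpha=10.0)
```

Where boundaries are weak this term alone cannot stop the front, which is exactly the failure mode the paper's vessel-derived second speed image addresses.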
Semi-automatic 3D segmentation of costal cartilage in CT data from Pectus Excavatum patients
Daniel Barbosa, Sandro Queirós, Nuno Rodrigues, et al.
One of the current frontiers in the clinical management of Pectus Excavatum (PE) patients is the prediction of the surgical outcome prior to the intervention. This can be done through computerized simulation of the Nuss procedure, which requires an anatomically correct representation of the costal cartilage. To this end, we take advantage of the tubular structure of the costal cartilage to detect it through multi-scale vesselness filtering. This information is then used in an interactive 2D initialization procedure that uses anatomical maximum intensity projections of 3D vesselness feature images to efficiently initialize the 3D segmentation process. We identify the cartilage tissue centerlines in these projected 2D images using a livewire approach. We finally refine the 3D cartilage surface through region-based sparse field level-sets. We have tested the proposed algorithm on 6 non-contrast CT datasets from PE patients. Good segmentation performance was found against reference manual contouring, with an average Dice coefficient of 0.75±0.04 and an average mean surface distance of 1.69±0.30 mm. The proposed method requires roughly 1 minute for the interactive initialization step, which can contribute positively to extended use of this tool in clinical practice, since current manual delineation of the costal cartilage can take up to an hour.
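Vesselness filtering scores tubular structures from the eigenvalues of the image Hessian. A single-scale 2D Frangi-style sketch (the β and c parameters are illustrative; the paper's filter is 3D and multi-scale):

```python
import numpy as np

def vesselness2d(img, beta=0.5, c=15.0):
    """Frangi-style 2D vesselness: a bright tubular structure yields one large
    negative Hessian eigenvalue and one near zero."""
    gy, gx = np.gradient(img.astype(float))
    hyy, hyx = np.gradient(gy)
    hxy, hxx = np.gradient(gx)
    # eigenvalues of the symmetric 2x2 Hessian, ordered so |l1| <= |l2|
    tmp = np.sqrt((hxx - hyy) ** 2 + 4.0 * hxy ** 2)
    l1 = 0.5 * (hxx + hyy + tmp)
    l2 = 0.5 * (hxx + hyy - tmp)
    swap = np.abs(l1) > np.abs(l2)
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
    rb = np.abs(l1) / (np.abs(l2) + 1e-12)          # blob-vs-line ratio
    s2 = l1 ** 2 + l2 ** 2                          # overall structure strength
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1.0 - np.exp(-s2 / (2 * c ** 2)))
    return np.where(l2 < 0, v, 0.0)                 # keep bright structures only

# a bright horizontal line scores high on the line, near zero elsewhere
img = np.zeros((30, 30))
img[15, :] = 100.0
v = vesselness2d(img)
```

A multi-scale version would smooth the image at several scales before taking derivatives and keep the maximum response, matching tubes of varying thickness.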
Automatic anatomy recognition of sparse objects
Liming Zhao, Jayaram K. Udupa, Dewey Odhner, et al.
A general body-wide automatic anatomy recognition (AAR) methodology, based on hierarchical fuzzy models of multitudes of objects and not tied to any specific organ system, body region, or image modality, was proposed in our previous work. That work revealed the challenges encountered in modeling, recognizing, and delineating sparse objects throughout the body (compared to their non-sparse counterparts) when the models are based on the objects' exact geometric representations. The challenges stem mainly from the variation of sparse objects in their shape, topology, geographic layout, and relationship to other objects. This led to the idea of modeling sparse objects not from the precise geometric representations of their samples but by using a properly designed optimal super form. This paper presents the resulting improved methodology, which includes 5 steps: (a) collecting image data from a specific population group G and body region Β and delineating in these images the objects in Β to be modeled; (b) building a super form, S-form, for each object O in Β; (c) refining the S-form of O to construct an optimal (minimal) super form, S*-form, which constitutes the (fuzzy) model of O; (d) recognizing objects in Β using the S*-form; (e) defining confounding and background objects in each S*-form for each object and performing optimal delineation. Our evaluations, based on 50 3D computed tomography (CT) image sets of the thorax and four sparse objects, indicate that substantially improved performance (FPVF ~2%, FNVF ~10%, and success where the previous approach failed) can be achieved using the new approach.
Phase congruency map driven brain tumour segmentation
Computer Aided Diagnostic (CAD) systems are already of proven value in healthcare, especially for surgical planning, but much remains to be done. Gliomas are the most common brain tumours (70%) in adults, with a survival time of just 2-3 months if detected at WHO grade III or higher. Such tumours are extremely variable, necessitating multi-modal Magnetic Resonance Imaging (MRI). The use of Gadolinium-based contrast agents is only relevant at later stages of the disease, where it highlights the enhancing rim of the tumour. Currently, there is no single accepted method that can be used as a reference. There are three main challenges with such images: to decide whether a tumour is present and, if so, to localize it; to construct a mask that separates healthy and diseased tissue; and to differentiate between the tumour core and the surrounding oedema. This paper presents two contributions. First, we develop tumour seed selection based on multiscale multi-modal texture feature vectors. Second, we develop a method based on a local phase congruency feature map to drive level-set segmentation. The segmentations achieved with our method are more accurate than those of previously presented methods, particularly for challenging low-grade tumours.
Tumor segmentation on FDG-PET: usefulness of locally connected conditional random fields
Mizuho Nishio, Atsushi K. Kono, Hisanobu Koyama, et al.
This study aimed to develop software for tumor segmentation on 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET). To segment the tumor from the background, we used graph cut, whose segmentation energy is generally divided into two terms: the unary and pairwise terms. A locally connected conditional random field (LCRF) was proposed for the pairwise term. In LCRF, a three-dimensional cubic window with length L is set around each voxel, and the voxels within the window are considered in the pairwise term. To evaluate our method, 64 clinically suspected metastatic bone tumors revealed by FDG-PET were tested. To obtain the ground truth, the tumors were manually delineated by consensus of two board-certified radiologists. To compare the LCRF accuracy, other segmentation techniques were also applied: region growing based on 35%, 40%, and 45% of the tumor maximum standardized uptake value (RG35, RG40, and RG45, respectively), SLIC superpixels (SS), and region-based active contour models (AC). To validate the tumor segmentation accuracy, the Dice similarity coefficient (DSC) was calculated between the manual segmentation and the result of each technique. The DSC differences were tested using the Wilcoxon signed-rank test. The mean DSCs of LCRF at L = 3, 5, 7, and 9 were 0.784, 0.801, 0.809, and 0.812, respectively. The mean DSCs of the other techniques were RG35, 0.633; RG40, 0.675; RG45, 0.689; SS, 0.709; and AC, 0.758. The DSC differences between LCRF and the other techniques were statistically significant (p < 0.05). In conclusion, tumor segmentation was performed more reliably with LCRF than with the other techniques.
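The locally connected pairwise term can be sketched in a simplified Potts-style form: every voxel is penalized for disagreeing with each voxel inside its L×L×L window. The paper's actual potentials are not specified here, so `lcrf_pairwise_energy` (with periodic boundaries via `np.roll`, for brevity) is only an illustrative form.

```python
import numpy as np

def lcrf_pairwise_energy(labels, L=3, weight=1.0):
    """Sum of Potts-style pairwise penalties between each voxel and all
    voxels inside its L x L x L window. Periodic boundaries are assumed
    for brevity (np.roll wraps around the volume edges)."""
    r = L // 2
    offsets = [(dz, dy, dx)
               for dz in range(-r, r + 1)
               for dy in range(-r, r + 1)
               for dx in range(-r, r + 1)
               if (dz, dy, dx) != (0, 0, 0)]
    energy = 0.0
    for dz, dy, dx in offsets:
        shifted = np.roll(labels, (dz, dy, dx), axis=(0, 1, 2))
        energy += weight * np.sum(labels != shifted)
    return energy / 2.0   # each unordered pair is counted twice
```

In a real CRF the per-pair weight would typically also depend on the intensity difference between the two voxels; a constant `weight` is used here purely for illustration.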
Automated segmentation of serous pigment epithelium detachment in SD-OCT images
Pigment epithelium detachment (PED) is an important clinical manifestation of multiple chorio-retinal disease processes and can cause the loss of central vision. A 3-D method is proposed to automatically segment serous PED in SD-OCT images. The proposed method consists of five steps: first, a curvature anisotropic diffusion filter is applied to remove speckle noise. Second, the graph search method is applied for abnormal retinal layer segmentation associated with retinal pigment epithelium (RPE) deformation. During this process, Bruch's membrane, which is not visible in the SD-OCT images, is estimated with the convex hull algorithm. Third, the foreground and background seeds are automatically obtained from the retinal layer segmentation result. Fourth, the serous PED is segmented based on the graph cut method. Finally, a post-processing step is applied to remove false positive regions based on mathematical morphology. The proposed method was tested on 20 SD-OCT volumes from 20 patients diagnosed with serous PED. The average true positive volume fraction (TPVF), false positive volume fraction (FPVF), Dice similarity coefficient (DSC) and positive predictive value (PPV) are 97.19%, 0.03%, 96.34% and 95.59%, respectively. Linear regression analysis shows a strong correlation (r = 0.975) between the segmented PED volumes and the ground truth labeled by an ophthalmology expert. The proposed method can provide clinicians with accurate quantitative information, including the shape, size and position of the PED regions, which can assist diagnosis and treatment.
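The convex-hull estimate of the hidden membrane can be sketched, in 2-D and independently of the paper's implementation, as the lower hull of the segmented RPE boundary points (Andrew's monotone chain); image row coordinates may need flipping depending on the axis orientation.

```python
def cross(o, a, b):
    """Cross product of vectors o->a and o->b (positive for a left turn)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def lower_convex_hull(points):
    """Andrew's monotone chain, lower hull only, for 2-D points.
    In an OCT B-scan, the hull of the segmented RPE points can serve as
    an estimate of the (invisible) Bruch's membrane under deformation."""
    pts = sorted(set(points))
    hull = []
    for p in pts:
        # pop points that would make a non-convex (right or straight) turn
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull
```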
Multi-atlas based segmentation of multiple organs in breast MRI
Xi Liang, Suman Sedai, Hongzhi Wang, et al.
Automatic segmentation of the breast, chest wall and heart is an important pre-processing step for automatic lesion detection in breast MR and dynamic contrast-enhanced MR studies. In this paper, we present a fully automatic procedure for segmenting multiple organs in breast MRI images using multi-atlas based methods. Our method starts by reducing the image inhomogeneity using an anisotropic fusion method. We then build multiple atlases with labels for the breast, chest wall and heart. These atlases are registered to a target image to obtain warped organ labels aligned to the target image. Given the warped organ labels, segmentation is performed via label fusion. In this paper, we evaluate various label fusion methods and compare their performance in segmenting multiple anatomical structures in breast MRI.
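The simplest label-fusion strategy typically compared in such evaluations is per-voxel majority voting over the warped atlas labels; the sketch below is a generic illustration, not one of the evaluated implementations.

```python
import numpy as np

def majority_vote_fusion(warped_labels, n_classes):
    """Fuse a list of warped atlas label maps by per-voxel majority vote.
    Each map assigns an integer class label in [0, n_classes) per voxel."""
    stack = np.stack(warped_labels)              # (n_atlases, *vol_shape)
    votes = np.zeros((n_classes,) + stack.shape[1:], dtype=int)
    for c in range(n_classes):
        votes[c] = (stack == c).sum(axis=0)      # count atlases voting c
    return votes.argmax(axis=0)                  # winning class per voxel
```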
Locating seed points for automatic multi-organ segmentation using non-rigid registration and organ annotations
Ranveer R. Joyseeree, Henning Müller
Organ segmentation is helpful for decision support in diagnostic medicine. Region-growing segmentation algorithms are popular but usually require that clinicians place seed points in structures manually. A method to automatically calculate the seed points for segmenting organs in three-dimensional (3D), non-annotated Computed Tomography (CT) and Magnetic Resonance (MR) volumes from the VISCERAL dataset is presented in this paper. It precludes the need for manual placement of seeds, thereby saving time, and has the advantage of being a simple yet effective means of finding reliable seed points for segmentation. Affine registration followed by B-spline registration is used to align expert annotations of each organ of interest in order to build a probability map of its location in a chosen reference frame, and the centroid of each map is determined. The same registration framework is then used to warp the calculated centroids onto the volumes to be segmented. Existing segmentation algorithms may then be applied with the mapped centroids as seed points and the warped probability maps as an aid to the stopping criteria for segmentation. The method was tested on contrast-enhanced thorax-abdomen CT images to see whether the calculated centroids lay within the target organs, which would equate to successful segmentation if an effective segmentation algorithm were used. Promising results were obtained and are presented in this paper. The causes of the observed failures were identified, and countermeasures were proposed in order to achieve even better results in the next stage of development, which will involve a wider variety of MR and CT images.
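The centroid of an organ probability map reduces to a probability-weighted mean of voxel coordinates; a generic sketch (not the paper's code):

```python
import numpy as np

def probability_centroid(prob_map):
    """Probability-weighted centroid of an organ location map, usable as
    a seed point after warping into the target volume."""
    coords = np.indices(prob_map.shape).reshape(prob_map.ndim, -1)
    w = prob_map.ravel().astype(float)
    return (coords * w).sum(axis=1) / w.sum()    # one coordinate per axis
```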
Optimization-based interactive segmentation interface for multi-region problems
John S. H. Baxter, Martin Rajchl, Terry M. Peters, et al.
Interactive segmentation is of increasing interest in medical imaging, combining the positive aspects of manual and automated segmentation. However, general-purpose tools have been lacking when it comes to simultaneously segmenting multiple regions with a high degree of coupling. Hierarchical max-flow segmentation has taken advantage of this coupling for individual applications, but until recently these algorithms were constrained to a particular hierarchy and could not be considered general-purpose. With generalized hierarchical max-flow solvers, the hierarchy is specified at run time, allowing different hierarchies to be explored. This paper presents a novel interactive segmentation interface that uses generalized hierarchical max-flow for multi-region segmentation.
Live minimal path for interactive segmentation of medical images
Gabriel Chartrand, An Tang, Ramnada Chav, et al.
Medical image segmentation is nowadays required for medical device development and in a growing number of clinical and research applications. Since dedicated automatic segmentation methods are not always available, generic and efficient interactive tools can alleviate the burden of manual segmentation. In this paper we propose an interactive segmentation tool, based on image warping and minimal path segmentation, that is efficient for a wide variety of segmentation tasks. While the user roughly delineates the desired organ's boundary, a narrow band along the cursor's path is straightened, providing an ideal subspace for feature-aligned filtering and the minimal path algorithm. Once the segmentation is performed on the narrow band, the path is warped back onto the original image, precisely delineating the desired structure. The tool was found to have a highly intuitive dynamic behavior. It is especially robust against misleading edges and requires only coarse interaction from the user to achieve good precision. The proposed segmentation method was tested on 10 difficult liver segmentations on CT and MRI images, and the resulting 2D overlap Dice coefficient was 99% on average.
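A minimal path on a cost image is typically computed with Dijkstra's algorithm; the sketch below shows this on a 4-connected 2-D grid, without the band-straightening step that is the paper's actual contribution.

```python
import heapq

def minimal_path(cost, start, end):
    """Dijkstra shortest path on a 2-D cost grid (4-connected).
    `cost` is a list of rows; `start`/`end` are (row, col) tuples."""
    h, w = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist[(r, c)]:
            continue                              # stale heap entry
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    # walk predecessors back from the end point
    path, node = [], end
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

On a straightened narrow band, low cost would be assigned along the organ boundary so the path snaps to it; here a toy cost grid stands in for that band.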
Combined use of high-definition and volumetric optical coherence tomography for the segmentation of neural canal opening in cases of optic nerve edema
Jui-Kai Wang, Randy H. Kardon, Mona K. Garvin
In cases of optic-nerve-head edema, the presence of the swelling reduces the visibility of the underlying neural canal opening (NCO) within spectral-domain optical coherence tomography (SD-OCT) volumes. Consequently, traditional SD-OCT-based NCO segmentation methods often overestimate the size of the NCO. The visibility of the NCO can be improved using high-definition 2D raster scans, but such scans do not provide 3D contextual image information. In this work, we present a semi-automated approach for the segmentation of the NCO in cases of optic disc edema that combines image information from volumetric and high-definition raster SD-OCT image sequences. In particular, for each subject, five high-definition OCT B-scans and the OCT volume are first segmented separately, and then the five high-definition B-scans are automatically registered to the OCT volume. Next, six NCO points are placed (manually, in this work) in the central three high-definition OCT B-scans (two points per B-scan) and are automatically transferred into the OCT volume. Utilizing a combination of these mapped points and the 3D image information from the volumetric scans, a graph-based approach is used to identify the complete NCO on the OCT en-face image. The segmented NCO points using the new approach were significantly closer to expert-marked points than those of a traditional approach (root mean square differences in pixels: 5.34 vs. 21.71, p < 0.001).
Intelligent editing for post-processing of ROI segmentation
Segmentation of regions of interest (ROIs), such as suspect lesions, is a preliminary but vital step for computer-aided breast cancer diagnosis, and the task is quite challenging due to image quality and the complicated phenomena usually involved with the ROIs. On the one hand, imaging allows physicians and clinicians to extract more information; on the other hand, efficient, robust, and accurate segmentation of such anatomical lesions remains a difficult and open task for research and technical development. As a counterbalance between automatic methods, which are usually highly application dependent, and manual approaches, which are too time consuming, live wire, which provides full user control during segmentation while minimizing user interaction, is a promising option for assisting breast lesion segmentation in ultrasound (US) images. This work proposes a live-wire-based adjustment method to further extend its potential in computer-aided diagnosis (CAD) applications. It allows for local boundary adjustment of a given segmentation, based on the live-wire paradigm, and can be attached as a post-processing step to the live wire method or other segmentation approaches.
Fast and memory-efficient LOGISMOS graph search for intraretinal layer segmentation of 3D macular OCT scans
Kyungmoo Lee, Li Zhang, Michael D. Abramoff M.D., et al.
Image segmentation is important for quantitative analysis of medical image data. Recently, our research group introduced a 3-D graph search method that can simultaneously segment optimal interacting surfaces, with respect to a cost function, in volumetric images. Although it provides excellent segmentation accuracy, it is computationally demanding (in both CPU and memory) to simultaneously segment multiple surfaces from large volumetric images. Therefore, we propose a new, fast, and memory-efficient graph search method for intraretinal layer segmentation of 3-D macular optical coherence tomography (OCT) scans. The key idea is to reduce the size of the graph by combining nodes with high costs, based on a multiscale approach. The new approach requires significantly less memory and achieves significantly faster processing speeds (p < 0.01), with only small segmentation differences compared to the original graph search method. This paper discusses the sub-optimality of this approach and assesses the trade-off between decreasing processing time and increasing segmentation differences from the original method, as a function of the scale employed in the underlying graph construction.
Segmentation of bone structures in 3D CT images based on continuous max-flow optimization
In this paper, an algorithm to carry out the automatic segmentation of bone structures in 3D CT images is presented. Automatic segmentation of bone structures is of special interest to radiologists and surgeons when analyzing bone diseases or planning surgical interventions. The task is complicated, as bones usually present intensities overlapping with those of surrounding tissues. This overlap is mainly due to the composition of bones and to the presence of diseases such as osteoarthritis and osteoporosis. Moreover, segmentation of bone structures is very time-consuming due to the 3D nature of the bones. Usually, this segmentation is performed manually or with algorithms using simple techniques such as thresholding, which provide poor results. In this paper, gray-level information and 3D statistical information are combined and used as input to a continuous max-flow algorithm. Twenty CT images were tested, and different coefficients were computed to assess the performance of our implementation. Dice and sensitivity values above 0.91 and 0.97, respectively, were obtained. A comparison with level sets and thresholding techniques was carried out, and our results outperformed them in terms of accuracy.
Segmentation of branching vascular structures using adaptive subdivision surface fitting
Pieter H. Kitslaar, Ronald van’t Klooster, Marius Staring, et al.
This paper describes a novel method for segmentation and modeling of branching vessel structures in medical images using adaptive subdivision surface fitting. The method starts with a rough initial skeleton model of the vessel structure. A coarse triangular control mesh, consisting of hexagonal rings and dedicated bifurcation elements, is constructed from this skeleton. Special attention is paid to ensuring that a topologically sound control mesh is created around the bifurcation areas. Then, a smooth tubular surface is obtained from this coarse mesh using a standard subdivision scheme. This subdivision surface is iteratively fitted to the image. During the fitting, the target update locations of the subdivision surface are obtained using a scanline search along the surface normals, finding the maximum gradient magnitude of the imaging data. In addition to this surface fitting framework, we propose an adaptive mesh refinement scheme, in which the coarse control mesh topology is updated based on the current segmentation result, enabling adaptation to varying vessel lumen diameters. This enhances the robustness and flexibility of the method and reduces the amount of prior knowledge needed to create the initial skeletal model. The method was applied to publicly available CTA data from the Carotid Bifurcation Algorithm Evaluation Framework, resulting in an average Dice index of 89.2% with the ground truth. The method was also applied to the complex vascular structure of a coronary artery tree in CTA and to MRI images to show the versatility and flexibility of the proposed framework.
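The scanline search along a surface normal can be sketched as sampling the gradient-magnitude image along the normal through a mesh vertex and returning the location of the maximum; `scanline_target` and its parameters are illustrative assumptions, using nearest-neighbour lookup rather than interpolation.

```python
import numpy as np

def scanline_target(grad_mag, point, normal, half_len=5, n_samples=21):
    """Sample the gradient-magnitude image along the surface normal
    through a vertex and return the position of the maximum response,
    i.e. the target update location for the surface fitting step."""
    normal = np.asarray(normal, float)
    normal /= np.linalg.norm(normal)
    ts = np.linspace(-half_len, half_len, n_samples)
    samples = []
    for t in ts:
        p = np.asarray(point, float) + t * normal
        # nearest-neighbour lookup, clamped to the image bounds
        idx = tuple(np.clip(np.round(p).astype(int), 0,
                            np.array(grad_mag.shape) - 1))
        samples.append(grad_mag[idx])
    return np.asarray(point, float) + ts[int(np.argmax(samples))] * normal
```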
A fully automatic multi-atlas based segmentation method for prostate MR images
Zhiqiang Tian, LiZhi Liu M.D., Baowei Fei
Most multi-atlas segmentation methods focus on the registration between the full-size volumes of the data set. Although the transformations obtained from these registrations may be accurate for the global field of view of the images, they may not be accurate for the local prostate region, because different magnetic resonance (MR) images have different fields of view and may have large anatomical variability around the prostate. To overcome this limitation, we propose a two-stage prostate segmentation method based on a fully automatic multi-atlas framework, which includes a detection stage (locating the prostate) and a segmentation stage (extracting the prostate). The purpose of the first stage is to find a cuboid that contains the whole prostate with as small a volume as possible. In this paper, the cuboid enclosing the prostate is detected by registering atlas edge volumes to the target volume, with an edge detection algorithm applied to every slice in the volumes. In the second stage, the proposed method focuses on the registration in the vicinity of the prostate, which improves the accuracy of the prostate segmentation. We evaluated the proposed method on 12 patient MR volumes by performing a leave-one-out study. The Dice similarity coefficient (DSC) and Hausdorff distance (HD) are used to quantify the difference between our method and the manual ground truth. The proposed method yielded a DSC of 83.4%±4.3% and an HD of 9.3 mm±2.6 mm. The fully automated segmentation method can provide a useful tool in many prostate imaging applications.
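The Hausdorff distance between two binary masks can be computed from their voxel coordinate sets, e.g. with SciPy's directed Hausdorff routine; this is a generic implementation, not the authors' code, and it reports the distance in voxel units (multiply by the voxel spacing to obtain mm).

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(mask_a, mask_b):
    """Symmetric Hausdorff distance between two binary masks, computed
    on their voxel coordinate sets (max of the two directed distances)."""
    pts_a = np.argwhere(mask_a)
    pts_b = np.argwhere(mask_b)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])
```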
Relaxation time based classification of magnetic resonance brain images
Fabio Baselice, Giampaolo Ferraioli, Vito Pascazio
Brain tissue classification in Magnetic Resonance Imaging is useful for a wide range of applications. In this manuscript, a novel approach for joint segmentation and classification of brain tissue is presented. Starting from the relaxation time estimation, we propose a novel method for identifying the optimal decision regions. The approach exploits the statistical distribution of the involved signals in the complex domain. Compared to classical threshold-based techniques, it improves the correct classification rate. The effectiveness of the approach is evaluated on a simulated case study.
Identifying the optimal segmentors for mass classification in mammograms
Yu Zhang, Noriko Tomuro, Jacob Furst, et al.
In this paper, we present the results of our investigation on identifying the optimal segmentor(s) from an ensemble of weak segmentors used in a Computer-Aided Diagnosis (CADx) system that classifies suspicious masses in mammograms as benign or malignant. This is an extension of our previous work, where we applied various parameter settings of image enhancement techniques to each suspicious mass (region of interest (ROI)) to obtain several enhanced images, and then applied segmentation to each image to obtain several contours of a given mass. Each segmentation in this ensemble is essentially a "weak segmentor", because no single segmentation can produce the optimal result for all images. After shape features were computed from the segmented contours, the final classification model was built using logistic regression. The work in this paper focuses on identifying the optimal segmentor(s) from this ensemble of weak segmentors. For our purpose, optimal segmentors are those in the ensemble that contribute the most to the overall classification, rather than the ones that produced high-precision segmentations. To measure the segmentors' contribution, we examined the weights on the features in the derived logistic regression model and computed the average feature weight for each segmentor. The results showed that, while the segmentors with higher segmentation success rates generally had higher feature weights, some segmentors with lower segmentation rates had high classification feature weights as well.
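The per-segmentor averaging of feature weights can be sketched generically: group the logistic-regression coefficients by the segmentor that produced each feature and average their magnitudes. The function and its argument names are hypothetical, not the paper's code.

```python
import numpy as np

def mean_weight_per_segmentor(coef, segmentor_of_feature):
    """Average absolute logistic-regression coefficient of the shape
    features contributed by each segmentor in the ensemble.
    `coef[i]` is the coefficient of feature i; `segmentor_of_feature[i]`
    names the segmentor that produced feature i."""
    coef = np.abs(np.asarray(coef, float))
    seg = np.asarray(segmentor_of_feature)
    return {s: coef[seg == s].mean() for s in np.unique(seg)}
```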
Interactive image segmentation framework based on control theory
Liangjia Zhu, Ivan Kolesov, Vadim Ratner, et al.
Segmentation of anatomical structures in medical imagery is a key step in a variety of clinical applications. Designing a generic, automated method that works for various structures and imaging modalities is a daunting task. Instead of proposing a new specific segmentation algorithm, in this paper, we present a general design principle on how to integrate user interactions from the perspective of control theory. In this formulation, Lyapunov stability analysis is employed to design an interactive segmentation system. The effectiveness and robustness of the proposed method are demonstrated.
Shape index distribution based local surface complexity applied to the human cortex
Sun Hyung Kim, Vladimir Fonov, D. Louis Collins, et al.
The quantification of local surface complexity in the human cortex has been shown to be of interest in investigating population differences as well as developmental changes in neurodegenerative or neurodevelopmental diseases. We propose a novel assessment method that represents local complexity as the difference between the observed distribution of local surface topology and its best-fit basic topology model within a given local neighborhood. This distribution difference is estimated via the Earth Mover's Distance (EMD) over the histogram, within the local neighborhood, of the surface topology quantified via the Shape Index (SI) measure. The EMD scores range from low complexity (0.0), indicating a consistent local surface topology, up to high complexity (1.0), indicating a highly variable local surface topology. The basic topology models are categorized into 9 geometric models covering situations such as crowns, ridges, and fundi of cortical gyri and sulci. We apply a geodesic kernel to calculate the local SI histogram distribution within a given region. In our experiments, the local complexity results show generally higher complexity in the gyral/sulcal wall regions, lower complexity on some gyral ridges, and the lowest complexity in sulcal fundus areas. In addition, we show expected, preliminary results of increased surface complexity across most of the cortical surface within the first years of postnatal life, hypothesized to be due to changes such as the development of sulcal pits.
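The EMD between an observed SI histogram and a model histogram over the SI range [-1, 1] can be sketched with SciPy's 1-D Wasserstein distance; the bin count and range are illustrative choices, and the normalization to a [0, 1] score used in the paper is not applied here.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def histogram_emd(si_values, si_model, bins=16, rng=(-1.0, 1.0)):
    """Earth Mover's Distance between the observed shape-index histogram
    in a neighbourhood and a basic topology model's histogram, both built
    over the same SI bins."""
    h_obs, edges = np.histogram(si_values, bins=bins, range=rng)
    h_mod, _ = np.histogram(si_model, bins=bins, range=rng)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # weighted 1-D Wasserstein distance between the two bin distributions
    return wasserstein_distance(centers, centers,
                                u_weights=h_obs, v_weights=h_mod)
```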
Cochlear shape description and analyzing via medial models
Johannes Gaa, Lüder A. Kahrs, Samuel Müller, et al.
Planning and analysis of surgical interventions are often based on computer models derived from computed tomography images of the patient. In the field of cochlear implant insertion, the modeling of several structures of the inner ear is needed; one such structure is the overall helical shape of the cochlea itself. In this paper we analyze the cochlea by applying statistical shape models with a medial representation. The cochlea is considered a tubular structure. A model representing the skeleton of the training data and an atomic composition of the structure is built. We reduce the representation to a linear chain of atoms, so that a compact discrete model is possible. It is demonstrated how to place the atoms and build up their correspondence across a population of training data. The outcome of the applied representation is discussed in terms of its impact on automated segmentation algorithms, and known advantages of medial models are revisited.