Proceedings Volume 6144

Medical Imaging 2006: Image Processing

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 8 March 2006
Contents: 23 Sessions, 243 Papers, 0 Presentations
Conference: Medical Imaging 2006
Volume Number: 6144

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
Sessions
  • Segmentation I
  • Segmentation II
  • Texture
  • Segmentation III
  • Multiresolution and Wavelets
  • Registration I
  • Registration II
  • Registration III
  • Restoration and Filtering
  • Shape
  • Pattern Recognition
  • CAD I
  • CAD II
  • Registration Poster Session
  • Segmentation Poster Session
  • Shape Poster Session
  • CAD Poster Session
  • Mathematical Morphology Poster Session
  • Pattern Recognition Poster Session
  • Quality/Restoration/Deblurring Poster Session
  • Statistical Methods Poster Session
  • Texture Poster Session
  • Validation Poster Session
Segmentation I
Image segmentation using local shape and gray-level appearance models
Dieter Seghers D.D.S., Dirk Loeckx, Frederik Maes, et al.
A new generic model-based segmentation scheme is presented, which can be trained from examples, akin to the Active Shape Model (ASM) approach, in order to acquire knowledge about the shape to be segmented and about the gray-level appearance of the object in the image. In the ASM approach, the intensity and shape models are typically applied alternately during optimization: an optimal target location is first selected for each landmark separately, based on local gray-level appearance information only, and the shape model is subsequently fitted to these locations, so the ASM may be misled by wrongly selected landmark locations. Instead, the proposed approach optimizes shape and intensity characteristics simultaneously. Local gray-level appearance information at the landmark points, extracted from feature images, is used to automatically detect a number of plausible candidate locations for each landmark. The shape information is described by multiple landmark-specific statistical models that capture local dependencies between adjacent landmarks on the shape. The shape and intensity models are combined in a single cost function that is optimized non-iteratively using dynamic programming, which allows the optimal landmark positions to be found using combined shape and intensity information, without the need for initialization.
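As a rough, hypothetical illustration of the final optimization step described above, the Python sketch below picks one candidate location per landmark by dynamic programming over per-landmark intensity costs and pairwise shape costs along an open chain of landmarks; the cost arrays and their values are placeholders, not the authors' actual models.

    import numpy as np

    def best_landmark_candidates(intensity_cost, shape_cost):
        """intensity_cost[i][k]: cost of candidate k for landmark i.
        shape_cost[i][p, q]: pairwise cost of picking candidate p for landmark i
        and candidate q for landmark i + 1 (e.g. from a local statistical shape model)."""
        cum = [np.asarray(intensity_cost[0], dtype=float)]
        back = []
        for i in range(1, len(intensity_cost)):
            c = np.asarray(intensity_cost[i], dtype=float)
            total = cum[-1][:, None] + shape_cost[i - 1] + c[None, :]
            back.append(np.argmin(total, axis=0))   # best predecessor for each candidate
            cum.append(np.min(total, axis=0))
        path = [int(np.argmin(cum[-1]))]            # backtrack the optimal sequence
        for bp in reversed(back):
            path.append(int(bp[path[-1]]))
        return path[::-1]

    rng = np.random.default_rng(0)                  # toy example: 4 landmarks, 5 candidates each
    icost = [rng.random(5) for _ in range(4)]
    scost = [rng.random((5, 5)) for _ in range(3)]
    print(best_landmark_candidates(icost, scost))

For a closed contour, the same recursion would be repeated for each fixed candidate of the first landmark, keeping the cheapest closed path.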
Toward fully automatic object detection and segmentation
An automatic procedure for detecting and segmenting anatomical objects in 3-D images is necessary for achieving a high level of automation in many medical applications. Since today's segmentation techniques typically rely on user input for initialization, they do not allow for a fully automatic workflow. In this work, the generalized Hough transform is used for detecting anatomical objects with well-defined shape in 3-D medical images. This well-known technique has frequently been used for object detection in 2-D images and is known to be robust and reliable. However, its computational and memory requirements are generally huge, especially when considering 3-D images and several free transformation parameters. Our approach limits the complexity of the generalized Hough transform to a reasonable amount by (1) using object prior knowledge during the preprocessing in order to suppress unlikely regions in the image, (2) restricting the flexibility of the applied transformation to only scaling and translation, and (3) using a simple shape model which does not cover any inter-individual shape variability. Despite these limitations, the approach is demonstrated to allow for a coarse 3-D delineation of the femur, vertebra, and heart in a number of experiments. Additionally, it is shown that the quality of the object localization is in nearly all cases sufficient to initialize a successful segmentation using shape-constrained deformable models.
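The voting step of a translation-only generalized Hough transform can be sketched in a few lines of Python; scaling, the preprocessing-based suppression of unlikely regions, and the anatomical shape model are omitted here, and the binary edge masks are assumed to be given.

    import numpy as np

    def ght_translation(model_edges, image_edges):
        """model_edges, image_edges: boolean edge/surface masks of equal dimensionality."""
        model_pts = np.argwhere(model_edges)               # (M, ndim) model edge voxels
        offsets = np.round(model_pts - model_pts.mean(axis=0)).astype(int)
        acc = np.zeros(image_edges.shape, dtype=np.int32)  # Hough accumulator
        for p in np.argwhere(image_edges):                 # every image edge voxel votes
            votes = p - offsets                            # candidate reference positions
            ok = np.all((votes >= 0) & (votes < acc.shape), axis=1)
            np.add.at(acc, tuple(votes[ok].T), 1)
        return np.unravel_index(np.argmax(acc), acc.shape) # most-voted object position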
Automatic generation of dynamic 3D models for medical segmentation tasks
Models of geometry or appearance of three-dimensional objects may be used for locating and specifying object instances in 3D image data. Such models are necessary for segmentation if the object to be segmented is not separable based on image information only. They provide a-priori knowledge about the expected shape of the target structure. The success of such a segmentation task depends on the incorporated model knowledge. We present an automatic method to generate such a model for a given target structure. This knowledge is created in the form of a 3D Stable Mass-Spring Model (SMSM) and can be computed from a single sample segmentation. The model is built from different image features using a bottom-up strategy, which allows for different levels of model abstraction. We show the adequacy of the generated models in two practical medical applications: the anatomical segmentation of the left ventricle in myocardial perfusion SPECT, and the segmentation of the thyroid cartilage of the larynx in CT datasets. In both cases, the model generation was performed in a few seconds.
Oriented active shape models
Active Shape Models (ASM) are widely employed for recognizing anatomic structures and for delineating them in medical images. In this paper, we present a novel strategy called Oriented Active Shape Models (OASM) in an attempt to overcome the following three major limitations of ASM: (1) poor delineation accuracy, (2) the requirement of a large number of landmarks, and (3) sensitivity to the search range used to recognize the object boundary. OASM effectively combines the rich statistical shape information embodied in ASM with the boundary-orientedness property and the globally optimal delineation capability of the live wire methodology of boundary segmentation. These properties allow live wire to effectively separate an object boundary from other non-object boundaries with similar properties that come very close in the image domain. Our approach leads us to a 2-level dynamic programming method, wherein the first level corresponds to boundary recognition and the second level corresponds to boundary delineation. Our experiments in segmenting breast, liver, bones of the foot, and cervical vertebrae of the spine in MR and CT images indicate the following: (1) The accuracy of segmentation via OASM is considerably better than that of ASM. (2) The number of landmarks can be reduced by a factor of 3 in OASM over that in ASM. (3) OASM becomes largely independent of search range. All three benefits of OASM ensue mainly from the severe constraints brought in by the boundary-orientedness property of live wire and the globally optimal solution of dynamic programming.
Segmentation by surface-to-image registration
Zhiyong Xie, Jose Tamez-Pena, Michael Gieseg, et al.
This paper presents a new image segmentation algorithm using surface-to-image registration. The algorithm employs multi-level transformations and multi-resolution image representations to progressively register atlas surfaces (modeling anatomical structures) to subject images based on weighted external forces in which weights and forces are determined by gradients and local intensity profiles obtained from images. The algorithm is designed to prevent atlas surfaces converging to unintended strong edges or leaking out of structures of interest through weak edges where the image contrast is low. Segmentation of bone structures on MR images of rat knees analyzed in this manner performs comparably to technical experts using a semi-automatic tool.
Segmentation II
Robust local intervertebral disc alignment for spinal MRI
James G. Reisman, Jan Höppner, Szu-Hao Huang, et al.
Magnetic resonance (MR) imaging is frequently used to diagnose abnormalities in the spinal intervertebral discs. Owing to the non-isotropic resolution of typical MR spinal scans, physicians prefer to align the scanner plane with the disc in order to maximize the diagnostic value and to facilitate comparison with prior and follow-up studies. Commonly a planning scan is acquired of the whole spine, followed by a diagnostic scan aligned with selected discs of interest. Manual determination of the optimal disc plane is tedious and prone to operator variation. A fast and accurate method to automatically determine the disc alignment can decrease examination time and increase the reliability of diagnosis. We present a validation study of an automatic spine alignment system for determining the orientation of intervertebral discs in MR studies. In order to measure the effectiveness of the automatic alignment system, we compared its performance with human observers. 12 MR spinal scans of adult spines were tested. Two observers independently indicated the intervertebral plane for each disc, and then repeated the procedure on another day, in order to determine the inter- and intra-observer variability associated with manual alignment. Results were also collected for the observers utilizing the automatic spine alignment system, in order to determine the method's consistency and its accuracy with respect to human observers. We found that the results from the automatic alignment system are comparable with the alignment determined by human observers, with the computer showing greater speed and consistency.
Level set based vertebra segmentation for the evaluation of Ankylosing Spondylitis
Sovira Tan, Jianhua Yao, Michael M. Ward M.D., et al.
Ankylosing Spondylitis is a disease of the vertebra where abnormal bone structures (syndesmophytes) grow at intervertebral disk spaces. Because this growth is so slow as to be undetectable on plain radiographs taken over years, it is necessary to resort to computerized techniques to complement qualitative human judgment with precise quantitative measures on 3-D CT images. Very fine segmentation of the vertebral body is required to capture the small structures caused by the pathology. We propose a segmentation algorithm based on a cascade of three level set stages and requiring no training or prior knowledge. First, the noise inside the vertebral body that often blocks the proper evolution of level set surfaces is attenuated by a sigmoid function whose parameters are determined automatically. The 1st level set (geodesic active contour) is designed to roughly segment the interior of the vertebra despite often highly inhomogeneous and even discontinuous boundaries. The result is used as an initial contour for the 2nd level set (Laplacian level set) that closely captures the inner boundary of the cortical bone. The last level set (reversed Laplacian level set) segments the outer boundary of the cortical bone and also corrects small flaws of the previous stage. We carried out extensive tests on 30 vertebrae (5 from each of 6 patients). Two medical experts scored the results at intervertebral disk spaces focusing on end plates and syndesmophytes. Only two minor segmentation errors at vertebral end plates were reported and two syndesmophytes were considered slightly under-segmented.
Segmentation of hand radiographs using fast marching methods
Rheumatoid Arthritis is one of the most common chronic diseases. Joint space width in hand radiographs is evaluated to assess joint damage in order to monitor progression of disease and response to treatment. Manual measurement of joint space width is time-consuming and highly prone to inter- and intra-observer variation. We propose a method for automatic extraction of finger bone boundaries using fast marching methods for quantitative evaluation of joint space width. The proposed algorithm includes two stages: location of hand joints followed by extraction of bone boundaries. By setting the propagation speed of the wave front as a function of image intensity values, the fast marching algorithm extracts the skeleton of the hands, in which each branch corresponds to a finger. The finger joint locations are then determined by using the image gradients along the skeletal branches. In order to extract bone boundaries at joints, the gradient magnitudes are utilized for setting the propagation speed, and the gradient phases are used for discriminating the boundaries of adjacent bones. The bone boundaries are detected by searching for the fastest paths from one side of each joint to the other side. Finally, joint space width is computed based on the extracted upper and lower bone boundaries. The algorithm was evaluated on a test set of 8 two-hand radiographs, including images from healthy patients and from patients suffering from arthritis, gout and psoriasis. Using our method, 97% of 208 joints were accurately located and 89% of 416 bone boundaries were correctly extracted.
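A minimal sketch of the travel-time computation that such a method rests on, assuming the third-party scikit-fmm package and an intensity-dependent speed function; the parameters are illustrative, not the authors' choices, and the subsequent skeleton and boundary extraction are not shown.

    import numpy as np
    import skfmm  # third-party scikit-fmm package

    def travel_time_from_seed(image, seed, eps=1e-3):
        """image: 2D radiograph as a float array; seed: (row, col) starting pixel."""
        speed = (image - image.min()) / (np.ptp(image) + eps) + eps  # bright pixels propagate fast
        phi = np.ones_like(image, dtype=float)   # zero level set surrounds the seed pixel
        phi[seed] = -1.0
        return skfmm.travel_time(phi, speed)     # arrival-time map of the wave front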
Knowledge-based segmentation of the heart from respiratory-gated CT datasets acquired without cardiac contrast-enhancement
Joyoni Dey, Tin-Su Pan, David J. Choi M.D., et al.
Respiratory motion degrades image quality in PET and SPECT imaging. Patient-specific information on the motion of structures such as the heart, if obtained from CT slices from a dual-modality imaging system, can be employed to compensate for motion during emission reconstruction. The CT datasets may not be contrast enhanced. Since each patient may have 100-120 coronal slices covering the heart, an automated but accurate segmentation of the heart is important. We developed and implemented an algorithm to segment the heart in non-contrast CT datasets. The algorithm has two steps. In the first step we place a truncated-ellipse curve on a mid-slice of the heart, optimize its pose, and then track the contour through the other slices of the same dataset. During the second step the contour points are drawn to the local edge points by minimizing a distance measure. The segmentation algorithm was tested on 10 patients and the boundaries were determined to be accurate to within 2 mm of the visually ascertained locations of the borders of the heart. The segmentation was automatic except for the initial placement of the first truncated ellipse and for having to re-initialize the contour for 3 patients for less than 3% (1-3 slices) of the coronal slices of the heart. These end-slices constituted less than 0.3% of the heart volume.
Automatic cardiac MRI myocardium segmentation using graphcut
Gunnar Kedenburg, Chris A. Cocosco, Ullrich Köthe, et al.
Segmentation of the left myocardium in four-dimensional (space-time) cardiac MRI data sets is a prerequisite of many diagnostic tasks. We propose a fully automatic method based on global minimization of an energy functional by means of the graphcut algorithm. Starting from automatically obtained segmentations of the left and right ventricles and a cardiac region of interest, a spatial model is constructed using simple and plausible assumptions. This model is used to learn the appearance of different tissue types by non-parametric robust estimation. Our method does not require previously trained shape or appearance models. Processing takes 30-40 s on current hardware. We evaluated our method on 11 clinical cardiac MRI data sets acquired using cine balanced fast field echo. Linear regression of the automatically segmented myocardium volume against manual segmentations (performed by a radiologist) showed an RMS error of about 12 ml.
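A minimal sketch of the graph-cut core, assuming the third-party PyMaxflow package and per-voxel unary costs already derived from a learned appearance model; the spatial model and the appearance estimation themselves are not shown, and the constant smoothness weight is a placeholder.

    import numpy as np
    import maxflow  # third-party PyMaxflow package

    def binary_graphcut(cost_myo, cost_bg, smoothness=1.0):
        """cost_myo, cost_bg: per-voxel unary costs, e.g. negative log-likelihoods
        of the (hypothetical) myocardium and background appearance models."""
        g = maxflow.Graph[float]()
        nodeids = g.add_grid_nodes(cost_myo.shape)
        g.add_grid_edges(nodeids, smoothness)           # constant pairwise (Potts) term
        g.add_grid_tedges(nodeids, cost_myo, cost_bg)   # terminal (unary) capacities
        g.maxflow()                                     # global minimum of the energy
        return g.get_grid_segments(nodeids)             # boolean label map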
Anatomical-based segmentation with stenosis bridging and gap closing in atherosclerotic cardiac MSCT
In the diagnosis of coronary artery disease, 3D multi-slice computed tomography (MSCT) has recently become more and more important. In this work, an anatomical-based method for the segmentation of atherosclerotic coronary arteries in MSCT is presented. This technique is able to bridge severe stenoses, image artifacts or even full vessel occlusions. Different anatomical structures (aorta, blood-pool of the heart chambers, coronary arteries and their orifices) are detected successively to incorporate anatomical knowledge into the algorithm. The coronary arteries are segmented by a simulated wave propagation method in order to extract anatomical spatial relations from the result. In order to bridge segmentation breaks caused by stenosis or image artifacts, the spatial location, its anatomical relation and vessel curvature propagation are taken into account to span a dynamic search space for vessel bridging and gap closing. This prevents vessel misidentifications and significantly improves segmentation results. The robustness of this method is demonstrated on representative medical data sets.
Texture
Quantifying changes in the bone microarchitecture using Minkowski-functionals and scaling vectors: a comparative study
Christoph W. Raeth, Dirk Mueller, Thomas M. Link, et al.
Osteoporosis is a metabolic bone disease leading to de-mineralization and increased risk of fracture. The two major factors that determine the biomechanical competence of bone are the degree of mineralization and the micro-architectural integrity. Today, modern imaging modalities exist that make it possible to depict structural details of trabecular bone tissue. Recently, non-linear techniques in 2D and 3D based on the scaling vector method (SVM) and the Minkowski functionals (MF) have been introduced, which show excellent performance in predicting bone strength and fracture risk. However, little is known about the performance of the various parameters with respect to monitoring structural changes due to progression of osteoporosis or as a result of medical treatment. We test and compare the two methodologies using realistic two-dimensional simulations of bone structures, which model the effect of osteoblasts and osteoclasts on the local change of relative bone density. Different realizations with slightly varying control parameters are considered. Our results show that even small changes in the trabecular structures, which are induced by variation of a control parameter of the system, become discernible by applying both the MF and the locally adapted scaling vector method. The results obtained with SVM are superior to those obtained with the Minkowski functionals. An additive combination of both measures drastically increases the sensitivity to slight changes in bone structures. These findings may be especially important for monitoring the treatment of patients, where the early recognition of (drug-induced) changes in the trabecular structure is crucial.
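For 2D data the three Minkowski functionals reduce to area, boundary length and Euler characteristic; a minimal sketch using scikit-image on a thresholded ROI (euler_number as a standalone function requires a recent scikit-image version, and the threshold is an illustrative placeholder, not the authors' parameter choice):

    import numpy as np
    from skimage import measure

    def minkowski_2d(roi, threshold):
        binary = roi > threshold                              # binarize the trabecular pattern
        area = float(binary.sum())                            # M0: area (pixel count)
        boundary = measure.perimeter(binary)                  # M1: boundary length
        euler = measure.euler_number(binary, connectivity=2)  # M2: Euler characteristic
        return area, boundary, euler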
Variogram methods for texture classification of atherosclerotic plaque ultrasound images
Oliver M. Jeromin, Marios S. Pattichis, Constantinos Pattichis, et al.
Stroke is the third leading cause of death in the western world and the major cause of disability in adults. The type and stenosis of extracranial carotid artery disease is often responsible for ischemic strokes, transient ischemic attacks (TIAs) or amaurosis fugax (AF). The identification and grading of stenosis can be done using gray scale ultrasound scans. The appearance of B-scan pictures containing various granular structures makes the use of texture analysis techniques suitable for computer assisted tissue characterization purposes. The objective of this study is to investigate the usefulness of variogram analysis in the assessment of ultrasound plaque morphology. The variogram estimates the variance of random fields, from arbitrary samples in space. We explore stationary random field models based on the variogram, which can be applied in ultrasound plaque imaging leading to a Computer Aided Diagnosis (CAD) system for the early detection of symptomatic atherosclerotic plaques. Non-parametric tests on the variogram coefficients show that the coefficients coming from symptomatic versus asymptomatic plaques come from distinct distributions. Furthermore, we show significant improvement in class separation when a log point-transformation is applied to the images prior to variogram estimation. Model fitting using least squares is explored for anisotropic variograms along specific directions. Comparative classification results show that variogram coefficients can be used for the early detection of symptomatic cases, and that they exhibit the largest class distances between symptomatic and asymptomatic plaque images as compared to over 60 other texture features used in the literature.
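A minimal sketch of an isotropic empirical variogram estimator, gamma(h) = 0.5 E[(Z(x + h) - Z(x))^2], computed along the two image axes with an optional log point-transformation; the lag range and transform are illustrative rather than the study's exact settings.

    import numpy as np

    def empirical_variogram(patch, max_lag=20, log_transform=True):
        z = np.log1p(patch.astype(float)) if log_transform else patch.astype(float)
        gamma = np.zeros(max_lag)
        for h in range(1, max_lag + 1):
            d_rows = z[h:, :] - z[:-h, :]              # lag h along rows
            d_cols = z[:, h:] - z[:, :-h]              # lag h along columns
            gamma[h - 1] = 0.5 * np.mean(np.concatenate([d_rows.ravel() ** 2,
                                                         d_cols.ravel() ** 2]))
        return gamma                                   # gamma[h - 1] is the lag-h estimate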
Optimizing texture measures quantifying bone structures as well as MR-sequences at 3 Tesla: an integrative statistical approach
Christoph W. Raeth, Dirk Mueller, Ernst J. Rummeny, et al.
High-resolution MR scanners operating at a magnetic field strength of 3 Tesla are now clinically available. They offer the possibility to obtain 3D images with unprecedented spatial resolution and/or signal-to-noise ratio (SNR), allowing for an accurate visualization of the trabecular bone structure. It has been demonstrated that scaling indices are well suited to quantify these structures, especially to discriminate between plate-like and rod-like structural elements, which is crucial for the diagnosis of osteoporosis. Until now, image quality has mainly been assessed by visual impression or by measures based on the SNR. In this work we present a methodology to assess different MR sequences with respect to the texture measure that is used later in the image analysis. We acquired HR-MR sequences of a bone specimen with different spatial resolutions and signal-to-noise ratios. For these data sets we selected two volumes of interest (VOIs) of the same size, located in the trabecular bone and in the background of the image. For both VOIs the scaling indices are calculated for different scale parameters. Subsequently the 'texture contrast' between structure and background is calculated by comparing the probability distributions of the scaling indices using a quadratic distance measure. By means of the contrast the optimal set of scale parameters is determined. By comparing the contrast for the different MR sequences the best suited ones are determined. It turns out that sequences with slightly lower spatial resolution but better signal-to-noise ratio yield a better texture contrast than sequences with the best spatial resolution. The presented methodology offers the possibility to simultaneously optimize texture measures and MR sequences, which will allow for an adapted and thus optimized analysis of image structures, e.g. trabecular bone, in the HR-MR data.
Analysis of parenchymal patterns using conspicuous spatial frequency features in mammograms applied to the BI-RADS density rating scheme
Automatic classification of the density of breast parenchyma is shown using a measure that is correlated to the human observer performance, and compared against the BI-RADS density rating. Increasingly popular in the United States, the Breast Imaging Reporting and Data System (BI-RADS) is used to draw attention to the increased screening difficulty associated with greater breast density; however, the BI-RADS rating scheme is subjective and is not intended as an objective measure of breast density. So, while popular, BI-RADS does not define density classes using a standardized measure, which leads to increased variability among observers. The adaptive thresholding technique is a more quantitative approach for assessing the percentage breast density, but considerable reader interaction is required. We calculate an objective density rating that is derived using a measure of local feature salience. Previously, this measure was shown to correlate well with radiologists' localization and discrimination of true positive and true negative regions-of-interest. Using conspicuous spatial frequency features, an objective density rating is obtained and correlated with adaptive thresholding, and the subjectively ascertained BI-RADS density ratings. Using 100 cases, obtained from the University of South Florida's DDSM database, we show that an automated breast density measure can be derived that is correlated with the interactive thresholding method for continuous percentage breast density, but not with the BI-RADS density rating categories for the selected cases. Comparison between interactive thresholding and the new salience percentage density resulted in a Pearson correlation of 76.7%. Using a four-category scale equivalent to the BI-RADS density categories, a Spearman correlation coefficient of 79.8% was found.
Mammographic density measured as changes in tissue structure caused by HRT
Numerous studies have investigated the relation between mammographic density and breast cancer risk. These studies indicate that women with high breast density have a four- to six-fold risk increase. An investigation of whether or not this relation is causal is important for, e.g., hormone replacement therapy (HRT), which has been shown to actually increase the density. No gold standard for automatic assessment of mammographic density exists. Manual methods such as Wolfe patterns and BI-RADS are helpful for communication of diagnostic sensitivity, but they are both time consuming and crude. They may be sufficient in certain cases and for single measurements, but for serial, temporal analysis it is necessary to be able to detect more subtle changes and, in addition, to be more reproducible. In this work, an automated method for measuring the effect of HRT with respect to changes in biological density in the breast is presented. This is a novel measure that provides structural information orthogonal to that of intensity-based methods. Hessian eigenvalues at different scales are used as features and a clustering of these is employed to divide a mammogram into four structurally different areas. Subsequently, based on the relative size of the areas, a density score is determined. In the experiments, two sets of mammograms of 50 patients from a double-blind, placebo-controlled HRT experiment were used. The change in density for the HRT group, measured with the new method, was significantly higher (p = 0.0002) than the change in the control group.
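A hedged sketch of the feature-and-clustering idea, using scikit-image Hessian filters and scikit-learn k-means; the scales, number of clusters and the final scoring rule here are placeholders and may differ from the authors' choices.

    import numpy as np
    from skimage.feature import hessian_matrix, hessian_matrix_eigvals
    from sklearn.cluster import KMeans

    def structure_class_fractions(image, scales=(1.0, 2.0, 4.0), n_clusters=4):
        feats = []
        for s in scales:
            H = hessian_matrix(image.astype(float), sigma=s, order='rc')
            e = hessian_matrix_eigvals(H)               # two eigenvalue maps per scale
            feats.extend([e[0], e[1]])
        X = np.stack(feats, axis=-1).reshape(-1, 2 * len(scales))
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
        counts = np.bincount(labels, minlength=n_clusters)
        return counts / counts.sum()                    # relative area of each structural class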
Early detection of glaucoma using fully automated disparity analysis of optic nerve head (ONH) from stereo fundus images
Early detection of structural damage to the optic nerve head (ONH) is critical in diagnosis of glaucoma, because such glaucomatous damage precedes clinically identifiable visual loss. Early detection of glaucoma can prevent progression of the disease and consequent loss of vision. Traditional early detection techniques involve observing changes in the ONH through an ophthalmoscope. Stereo fundus photography is also routinely used to detect subtle changes in the ONH. However, clinical evaluation of stereo fundus photographs suffers from inter- and intra-subject variability. Even the Heidelberg Retina Tomograph (HRT) has not been found to be sufficiently sensitive for early detection. A semi-automated algorithm for quantitative representation of the optic disc and cup contours by computing accumulated disparities in the disc and cup regions from stereo fundus image pairs has already been developed using advanced digital image analysis methodologies. A 3-D visualization of the disc and cup is achieved assuming camera geometry. High correlation among computer-generated and manually segmented cup to disc ratios in a longitudinal study involving 159 stereo fundus image pairs has already been demonstrated. However, clinical usefulness of the proposed technique can only be tested by a fully automated algorithm. In this paper, we present a fully automated algorithm for segmentation of optic cup and disc contours from corresponding stereo disparity information. Because this technique does not involve human intervention, it eliminates subjective variability encountered in currently used clinical methods and provides ophthalmologists with a cost-effective and quantitative method for detection of ONH structural damage for early detection of glaucoma.
Segmentation III
Automatic segmentation of vessels in breast MR sequences as a false positive elimination technique for automatic lesion detection and segmentation using the shape tensor
We present a new algorithm for automatic detection of bright tubular structures and its performance for automatic segmentation of vessels in breast MR sequences. This problem is interesting because vessels are the main type of false positive structures when automatically detecting lesions as regions that enhance after injection of the contrast agent. Our algorithm is based on the eigenvalues of what we call the shape tensor. It is new in that it does not rely on image derivatives of either first order, like methods based on the eigenvalues of the mean structure tensor, or second order, like methods based on the eigenvalues of the Hessian. It is therefore more precise and less sensitive to noise than those methods. In addition, the smoothing of the output which is inherent to approaches based on the Hessian or structure tensor is avoided. The output of our filter does not present the typical over-smoothed look of the output of the two differential filters that affects both their precision and sensitivity. The scale selection problem appears also less difficult in our approach compared to the differential techniques. Our algorithm is fast, needing only a few seconds per sequence. We present results of testing our method on a large number of motion-corrected breast MR sequences. These results show that our algorithm reliably segments vessels while leaving lesions intact. We also compare our method to the differential techniques and show that it significantly out-performs them both in sensitivity and localization precision and that it is less sensitive to scale selection parameters.
A dorsolateral prefrontal cortex semi-automatic segmenter
Ramsey Al-Hakim, James Fallon, Delphine Nain, et al.
Structural, functional, and clinical studies in schizophrenia have, for several decades, consistently implicated dysfunction of the prefrontal cortex in the etiology of the disease. Functional and structural imaging studies, combined with clinical, psychometric, and genetic analyses in schizophrenia, have confirmed the key roles played by the prefrontal cortex and closely linked "prefrontal system" structures such as the striatum, amygdala, mediodorsal thalamus, substantia nigra-ventral tegmental area, and anterior cingulate cortices. The nodal structure of the prefrontal system circuit is the dorsal lateral prefrontal cortex (DLPFC), or Brodmann area 46, which also appears to be the most commonly studied and cited brain area with respect to schizophrenia [1-4]. In 1986, Weinberger et al. tied cerebral blood flow in the DLPFC to schizophrenia [1]. In 2001, Perlstein et al. demonstrated that DLPFC activation is essential for working memory tasks commonly deficient in schizophrenia [2]. More recently, groups have linked morphological changes due to gene deletion and increased DLPFC glutamate concentration to schizophrenia [3, 4]. Despite the experimental and clinical focus on the DLPFC in structural and functional imaging, the variability of the location of this area, differences in opinion on exactly what constitutes DLPFC, and inherent difficulties in segmenting this highly convoluted cortical region have contributed to a lack of widely used standards for manual or semi-automated segmentation programs. Given these implications, we developed a semi-automatic tool to segment the DLPFC from brain MRI scans in a reproducible way to conduct further morphological and statistical studies. The segmenter is based on expert neuroanatomist rules (Fallon-Kindermann rules), inspired by cytoarchitectonic data and reconstructions presented by Rajkowska and Goldman-Rakic [5]. It is semi-automated to provide essential user interactivity. We present our results and provide details on our DLPFC open-source tool.
Competitive segmentation of the hippocampus and the amygdala from MRI data: validation on young healthy controls and Alzheimer’s disease patients
Marie Chupin, Dominique Hasboun, Romain Mukuna-Bantumbakulu, et al.
The hippocampus (Hc) and the amygdala (Am) are two cerebral structures that play a central role in main cognitive processes. Their segmentation allows atrophy in specific neurological illnesses to be quantified, but is made difficult by the complexity of the structures. In this work, a new algorithm for the simultaneous segmentation of Hc and Am based on competitive homotopic region deformations is presented. The deformations are constrained by relational priors derived from anatomical knowledge, namely probabilities for each structure around automatically retrieved landmarks at the border of the objects. The approach is designed to perform well on data from diseased subjects. The segmentation is initialized by extracting a bounding box and positioning two seeds; total execution time for both sides is between 10 and 15 minutes including initialization for the two structures. We present the results of validation based on comparison with manual segmentation, using volume error, spatial overlap and border distance measures. For 8 young healthy subjects the mean volume error was 7% for Hc and 11% for Am, the overlap: 84% for Hc and 83% for Am, the maximal distance: 4.2mm for Hc and 3.1mm for Am; for 4 Alzheimer's disease patients the mean volume error was 9% for Hc and Am, the overlap: 83% for Hc and 78% for Am, the maximal distance: 6mm for Hc and 4.4mm for Am. We conclude that the performance of the proposed method compares favourably with that of other published approaches in terms of accuracy and has a short execution time.
Improved 3D live-wire method with application to 3D CT chest image analysis
The definition of regions of interest (ROIs), such as suspect cancer nodules or lymph nodes in 3D CT chest images, is often difficult because of the complexity of the phenomena that give rise to them. Manual slice tracing has been used widely for years for such problems, because it is easy to implement and guaranteed to work. But the manual method is extremely time-consuming, especially for high-resolution 3D images which may have hundreds of slices, and it is subject to operator biases. Numerous automated image-segmentation methods have been proposed, but they are generally strongly application dependent, and even the "most robust" methods have difficulty in defining complex anatomical ROIs. To address this problem, the semi-automatic interactive paradigm referred to as "live wire" segmentation has been proposed by researchers. In live-wire segmentation, the human operator interactively defines an ROI's boundary guided by an active automated method which suggests what to define. This process in general is far faster, more reproducible and more accurate than manual tracing, while, at the same time, permitting the definition of complex ROIs having ill-defined boundaries. We propose a 2D live-wire method employing an improved cost function compared with previous work. In addition, we define a new 3D live-wire formulation that enables rapid definition of 3D ROIs. The method only requires the human operator to consider a few slices in general. Experimental results indicate that the new 2D and 3D live-wire approaches are efficient, allow for high reproducibility, and are reliable for 2D and 3D object segmentation.
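The core of 2D live wire, a globally optimal minimum-cost path between two user-chosen boundary points on a gradient-derived cost image, can be sketched with scikit-image; the cost function below is a simple illustration, not the improved cost proposed in the paper.

    import numpy as np
    from skimage.filters import sobel
    from skimage.graph import route_through_array

    def live_wire_segment(image, start, end, eps=1e-6):
        grad = sobel(image.astype(float))               # edge strength
        cost = 1.0 / (grad + eps)                       # strong edges are cheap to follow
        path, total = route_through_array(cost, start, end, fully_connected=True)
        return np.array(path), total                    # optimal boundary segment between the two points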
Automatic segmentation of pulmonary nodules on CT images by use of NCI lung image database consortium
Accurate segmentation of small pulmonary nodules (SPNs) on thoracic CT images is an important technique for volumetric doubling time estimation and feature characterization for the diagnosis of SPNs. Most of the nodule segmentation algorithms that have been previously presented were designed to handle solid pulmonary nodules. However, SPNs with ground-glass opacity (GGO) also affect diagnosis. Therefore, we have developed an automated volumetric segmentation algorithm for SPNs with GGO on thoracic CT images. This paper presents our segmentation algorithm, which combines multiple fixed thresholds, a template-matching method, a distance-transformation method, and a watershed method. For quantitative evaluation of the performance of our algorithm, we used the first dataset provided by the NCI Lung Image Database Consortium (LIDC). In the evaluation, we employed the coincident rate, which was calculated from both the computer-segmented region of an SPN and the matching probability map (pmap) images provided by the LIDC. For the 23 cases, the mean total coincident rate was 0.507 +/- 0.219. From these results, we concluded that our algorithm is useful for extracting both GGO and solid SPNs over a wide range of sizes.
Automatic segmentation of pulmonary fissures in x-ray CT images using anatomic guidance
The pulmonary lobes are the five distinct anatomic divisions of the human lungs. The physical boundaries between the lobes are called the lobar fissures. Detection of lobar fissure positions in pulmonary X-ray CT images is of increasing interest for the early detection of pathologies, and also for the regional functional analysis of the lungs. We have developed a two-step automatic method for the accurate segmentation of the three pulmonary fissures. In the first step, an approximation of the actual fissure locations is made using a 3-D watershed transform on the distance map of the segmented vasculature. Information from the anatomically labeled human airway tree is used to guide the watershed segmentation. These approximate fissure boundaries are then used to define the region of interest (ROI) for a more exact 3-D graph search to locate the fissures. Within the ROI the fissures are enhanced by computing a ridgeness measure, and this is used as the cost function for the graph search. The fissures are detected as the optimal surface within the graph defined by the cost function, which is computed by transforming the problem to the problem of finding a minimum s-t cut on a derived graph. The accuracy of the lobar borders is assessed by comparing the automatic results to manually traced lobe segments. The mean distance error between manually traced and computer detected left oblique, right oblique and right horizontal fissures is 2.3 ± 0.8 mm, 2.3 ± 0.7 mm and 1.0 ± 0.1 mm, respectively.
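A hedged sketch of the first step, a marker-based watershed on the distance transform of the segmented vasculature; the lobar markers are assumed to be given (e.g. derived from the labelled airway tree), and the subsequent ridgeness-based graph search is not shown.

    import numpy as np
    from scipy.ndimage import distance_transform_edt
    from skimage.segmentation import watershed

    def approximate_lobes(vessel_mask, lobe_markers, lung_mask):
        """vessel_mask: binary vessel segmentation; lobe_markers: labelled seed image
        (assumed to come from the labelled airway tree); lung_mask: lung region."""
        dist = distance_transform_edt(~vessel_mask)     # large near fissures, where vessels are sparse
        return watershed(dist, markers=lobe_markers, mask=lung_mask)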
Multiresolution and Wavelets
A pseudo wavelet-based method for accurate tagline tracing on tagged MR images of the tongue
Xiaohui Yuan, Cengizhan Ozturk, Gloria Chi-Fishman
In this paper, we present a pseudo wavelet-based tagline detection method. The tagged MR image is transformed to the wavelet domain, and the prominent tagline coefficients are retained while others are eliminated. Significant stripes, which are mixtures of tags and line-like anatomical boundaries, are then extracted via segmentation. A refinement step follows such that broken lines or isolated points are grouped or eliminated. Without assumptions on tag models, our method extracts taglines automatically regardless of their width and spacing. In addition, being founded on multi-resolution wavelet analysis, our method reconstructs taglines precisely and is highly robust to various types of taglines.
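A simplified stand-in for the coefficient-selection step, using the third-party PyWavelets package: transform the tagged image, keep only the largest-magnitude coefficients, and reconstruct. The wavelet, decomposition level and kept fraction are illustrative, not the paper's selection rule.

    import numpy as np
    import pywt  # third-party PyWavelets package

    def keep_prominent_coefficients(image, wavelet='db4', level=3, keep_fraction=0.05):
        coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        thresh = np.quantile(np.abs(arr), 1.0 - keep_fraction)
        arr[np.abs(arr) < thresh] = 0.0                 # discard non-prominent coefficients
        kept = pywt.array_to_coeffs(arr, slices, output_format='wavedec2')
        return pywt.waverec2(kept, wavelet)             # tag-dominated reconstruction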
An improved method of wavelet image fusion for extended depth-of-field microscope imaging
Samuel Cheng, Qiang Wu, Hyohoon Choi, et al.
Digital image fusion is a useful technique that can be applied to achieve extended depth-of-field microscope imaging. The central idea is to incorporate from all input images the regions that contain most in-focus signals into a single composite image. The amount of signal content of a particular region is generally estimated by an activity measure. To the best of our knowledge, all existing approaches in the literature rely on estimates based on the strength of high frequency signal components as the activity measure. However, such a measure does not distinguish true image signals from noise. We propose to use a multiscale point-wise product as the activity measure, which does not amplify the effect of noise. The resulting scheme shows a significant improvement on imaging of cytological specimens in terms of both subjective and objective quality even in a noise-free environment. More importantly, the scheme has a significant advantage over existing methods in the presence of noise.
Three-band MRI image fusion utilizing the wavelet-based method optimized with two quantitative fusion metrics
In magnetic resonance imaging (MRI), there are three bands of images ("MRI triplet") available, which are T1-, T2- and PD-weighted images. The three images of an MRI triplet provide complementary structure information and therefore it is useful for diagnosis and subsequent analysis to combine three-band images into one. We propose an advanced discrete wavelet transform (αDWT) for three-band MRI image fusion and the αDWT algorithm is further optimized utilizing two quantitative fusion metrics - the image quality index (IQI) and ratio spatial frequency error (rSFe). In the αDWT method, principal component analysis (PCA) and morphological processing are incorporated into a regular DWT fusion algorithm. Furthermore, the αDWT has two adjustable parameters - the level of DWT decomposition (Ld) and the length of the selected wavelet (Lw) - which decisively affect the fusion result. The fused image quality can be quantitatively measured with the established metrics - IQI and rSFe. Varying the control parameters (Ld and Lw), an iterative fusion procedure can be implemented and run until an optimized fusion is achieved. We fused and analyzed several MRI triplets from the Visible Human Project® female dataset. From the quantitative and qualitative evaluations of fused images, we found that (1) the αDWTi-IQI algorithm produces a smoothed image whereas the αDWTi-rSFe algorithm yields a sharpened image, (2) fused image "T1+T2" is the most informative one in comparison with other two-in-one fusions (PD+T1 and PD+T2), and (3) for three-in-one fusions, no significant difference is observed among the three fusions of (PD+T1)+T2, (PD+T2)+T1 and (T1+T2)+PD, thus the order of fusion does not play an important role. The fused images can significantly benefit medical diagnosis and also further image processing such as multi-modality image fusion (with CT images), visualization (colorization), segmentation, classification and computer-aided diagnosis (CAD).
Registration I
Large-scale validation of non-rigid registration algorithms for atlas-based brain image segmentation
Qian Wang, Emiliano D'Agostino, Dieter Seghers, et al.
In this paper, we evaluate different non-rigid image registration methodologies in the context of atlas-based brain image segmentation. Three non-rigid voxel-based registration regularization schemes (viscous fluid, elastic and curvature-based registration) combined with the mutual information similarity measure are compared. We conduct large-scale atlas-based segmentation experiments on a set of 20 anatomically labelled MR brain images in order to find the optimal parameter settings for each scheme. The performance of the optimal registration schemes is evaluated in their capability of accurately segmenting 49 different brain sub-structures of varying size and shape.
Multi-modal inter-subject registration of mouse brain images
Xia Li, Thomas E. Yankeelov, Glenn Rosen, et al.
The importance of small animal imaging in fundamental and clinical research is growing rapidly. These studies typically involve micro PET, micro MR, and micro CT images as well as optical or fluorescence images. Histological images are also often used to complement and/or validate the in vivo data. As is the case for human studies, automatic registration of these imaging modalities is a critical component of the overall analysis process, but the small size of the animals, and thus the limited spatial resolution of the in vivo images, presents specific challenges. In this paper, we propose a series of methods and techniques that permit the inter-subject registration of micro MR and histological images. We then compare results obtained by directly registering MR volumes to each other, using a non-rigid registration algorithm we have developed at our institution, with results obtained by first registering the MR volumes to their corresponding histological volumes, which we reconstruct from 2D cross-sections, and then registering the histological volumes to each other. We show that the second approach is preferable.
Improved method for correction of systematic bias introduced by the sub-voxel image registration process in functional magnetic resonance imaging (fMRI)
During functional magnetic resonance imaging (fMRI) brain examinations, the signal extraction from a large number of images is used to evaluate changes in blood oxygenation levels by applying statistical methodology. Image registration is essential, as it provides accurate fractional positioning by interpolating between sequentially acquired fMRI images. Unfortunately, current subvoxel registration methods found in standard software may produce significant bias in the variance estimator when interpolating with fractional, spatial voxel shifts. It was found that interpolation schemes, as currently applied during the registration of functional brain images, could introduce statistical bias, but there is a possible correction scheme. This bias was shown to result from the "weighted-averaging" process employed by conventional implementations of interpolation schemes. The most severe consequence of inaccurate variance estimators is the undesirable violation of the fundamental stationarity assumption required for many statistical methods and Gaussian random field analysis. Thus, this bias violates assumptions of the general linear model (GLM) and/or t-tests commonly used in fMRI studies. Using simulated data as well as actual human data, it is demonstrated that this artifact can significantly alter the magnitude and location of the resulting activation patterns. Further, the work detailed here introduces a bias correction scheme and evaluates the improved accuracy of its sample variance calculation and its influence on fMRI results through comparison with traditionally registered fMRI data.
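The "weighted-averaging" effect described above can be reproduced in a few lines: linearly resampling i.i.d. noise at a fractional shift delta scales its variance by (1 - delta)^2 + delta^2, i.e. down to one half at a half-voxel shift. This toy demonstration is illustrative only and is not the paper's correction scheme.

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.standard_normal(1_000_000)                       # i.i.d. unit-variance "noise" voxels
    for delta in (0.0, 0.25, 0.5):
        resampled = (1.0 - delta) * x[:-1] + delta * x[1:]   # linear interpolation at shift delta
        print(delta, resampled.var(), (1 - delta) ** 2 + delta ** 2)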
Quantification of the migration and deformation of abdominal aortic aneurysm stent grafts
Julian Mattes, Iris Steingruber, Michael Netzer, et al.
The endovascular repair of an abdominal aortic aneurysm is a minimally invasive therapy that has been established during the past 15 years. A stent-graft is placed inside the aorta in order to cover the weakened regions of its wall. During a time interval of one or more years the stent-graft can migrate and deform, with the risk of occlusion of one of its limbs or rupture of the aneurysm. In this work we developed several strategies to quantify the migration and deformation in order to assess the risk associated with these movements and, especially, to characterize complications as they appear. We calculated the rigid movement of the stent-graft and the aorta relative to the spinal canal. For this purpose, firstly, we rigidly registered the spinal canals, extracted for the different points in time, in order to establish a fixed reference system. All objects were segmented first and surface points were determined before applying a rigid and non-rigid point set registration algorithm. The change in the residual error after registration of the stent-graft with an increasing number of degrees of freedom indicates the amount of change in the stent-graft's morphology. We investigated a sample of 9 cases. Two cases could be clearly distinguished by the quantified parameters: a high global migration and a strong reduction of the residual error after non-rigid registration. In both cases, serious complications were detected by clinical experts, but only on the images acquired one year later.
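A minimal sketch of the rigid building block, least-squares (Kabsch) alignment of corresponding surface points together with the residual RMS error whose change is used as an indicator above; in practice the correspondences would come from the point-set registration itself rather than being given.

    import numpy as np

    def rigid_align(P, Q):
        """Find R, t minimizing sum ||R @ P_i + t - Q_i||^2; P, Q are (N, 3) corresponding points."""
        Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
        U, _, Vt = np.linalg.svd(Pc.T @ Qc)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R = Vt.T @ D @ U.T
        t = Q.mean(axis=0) - R @ P.mean(axis=0)
        residual = np.sqrt(np.mean(np.sum((P @ R.T + t - Q) ** 2, axis=1)))  # RMS error
        return R, t, residual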
Deformable registration of abdominal CT images: tissue stiffness constraints using B-splines
Jay B. West, Calvin R. Maurer Jr., John R. Dooley, et al.
One method of modelling respiratory motion of the abdomen is to acquire CT images at different points in the respiratory cycle and develop a deformation model that gives a mapping between corresponding anatomical points in the images. In this work, we use such a method, and the target application is radiosurgery, particularly radiosurgical treatment of lesions that move during respiration, for example those in the liver, lung, or pancreas. In order to accurately calculate the treatment dose, it is necessary to have a good deformation map both globally and locally (in the vicinity of the treatment target). We use a dual-resolution method in order to allow a more accurate deformation model to be computed in the region of interest. We also introduce a tissue stiffness constraint, along with an application of matrix algebra that allows this constraint to be applied in an effective way with respect to the control point values.
Multi-modal 2D-3D non-rigid registration
M. Prümmer, J. Hornegger, M. Pfister, et al.
In this paper, we propose a multi-modal non-rigid 2D-3D registration technique. This method allows a non-rigid alignment of a patient's pre-operative computed tomography (CT) volume to a few intra-operatively acquired fluoroscopic X-ray images obtained with a C-arm system. This multi-modal approach is especially focused on the 3D alignment of high contrast reconstructed volumes with intra-interventional low contrast X-ray images in order to make use of up-to-date information for surgical guidance and other interventions. The key issue of non-rigid 2D-3D registration is how to define the distance measure between high contrast 3D data and low contrast 2D projections. In this work, we use algebraic reconstruction theory to handle this problem. We modify the Euler-Lagrange equation by introducing a new 3D force. This external force term is computed from the residual of the algebraic reconstruction procedures. In the multi-modal case we replace the residual between the digitally reconstructed radiographs (DRR) and observed X-ray images with a statistically based distance measure. We integrate the algebraic reconstruction technique into a variational registration framework, so that the 3D displacement field is driven to minimize the reconstruction distance between the volumetric data and its 2D projections using mutual information (MI). The benefits of this 2D-3D registration approach are its scalability in the number of X-ray reference images used and the proposed distance measure, which can also handle low-contrast fluoroscopic images. Experimental results are presented on both artificial phantom and 3D C-arm CT images.
Registration II
icon_mobile_dropdown
Explicit rigid and similarity image registration
Most existing image registration methods are based on the optimization of an objective function. Drawbacks of this approach are the problem of local minima and the need to initialize the transformation close to the true solution. This paper presents a method for N-dimensional rigid and similarity image registration that is not optimization-based and consequently does not suffer from local minima or require initialization. Instead of obtaining the transformation parameters implicitly through an iterative optimization process, they are obtained explicitly. The proposed method has advantages over existing explicit methods. The explicit expressions for the transformation parameters involve image integrals and no image derivatives, which makes the method robust to noise. It is shown that the method has several desirable properties, including symmetry and transitivity, and that it is invariant to the initial alignment of the images. The method has been tested on simulated and real brain 2D and 3D MR image pairs and the achieved average registration error was one voxel.
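As a hedged illustration of derivative-free, non-iterative parameter estimation in the same spirit (not the paper's actual expressions), image moments give 2D translation and rotation in closed form: centroids for the translation and principal axes of the second-order moments for the rotation.

    import numpy as np

    def moments_pose(img):
        rows, cols = np.indices(img.shape)
        m = img.sum()
        cr, cc = (rows * img).sum() / m, (cols * img).sum() / m     # intensity centroid
        mu11 = ((rows - cr) * (cols - cc) * img).sum() / m
        mu20 = ((rows - cr) ** 2 * img).sum() / m
        mu02 = ((cols - cc) ** 2 * img).sum() / m
        theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)             # principal-axis angle
        return np.array([cr, cc]), theta

    def explicit_rigid_params(fixed, moving):
        c_f, th_f = moments_pose(fixed)
        c_m, th_m = moments_pose(moving)
        return c_f - c_m, th_f - th_m     # translation and rotation (up to axis ambiguity)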
On the alignment of shapes represented by Fourier descriptors
Karl Sjöstrand, Anders Ericsson, Rasmus Larsen
The representation of shapes by Fourier descriptors is a time-honored technique that has received relatively little attention lately. Nevertheless, it has its benefits and is suitable for describing a range of medical structures in two dimensions. Delineations in medical applications often consist of continuous outlines of structures, where no information of correspondence between samples exist. In this article, we discuss a Euclidean alignment method that works directly with the functional representation of Fourier descriptors, and that is optimal in a least-squares sense. With corresponding starting points, the alignment of one shape onto another consists of a single expression. If the starting points are arbitrary, we present a simple algorithm to bring a set of shapes into correspondence. Results are given for three different data sets; 62 outlines of the corpus callosum brain structure, 61 outlines of the brain ventricles, and 50 outlines of the right lung. The results show that even though starting points, translations, rotations and scales have been randomized, the alignment succeeds in all cases. As an application of the proposed method, we show how high-quality shape models represented by common landmarks can be constructed in an automatic fashion. If the aligned Fourier descriptors are inverse transformed from the frequency domain to the spatial domain, a set of roughly aligned landmarks are obtained. The positions of these are then adjusted along the contour of the objects using the minimum description length criterion, producing ample correspondences. Results on this are also presented for all three data sets.
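With corresponding starting points, the least-squares similarity alignment has a closed form directly in the Fourier-descriptor domain; a minimal sketch is given below (the handling of arbitrary starting points and the landmark reparameterization described in the abstract are not shown).

    import numpy as np

    def align_fourier_descriptors(a_xy, b_xy):
        """a_xy, b_xy: (N, 2) closed-contour samples with corresponding starting points.
        Returns b aligned onto a by translation, rotation and scale."""
        a = a_xy[:, 0] + 1j * a_xy[:, 1]
        b = b_xy[:, 0] + 1j * b_xy[:, 1]
        A, B = np.fft.fft(a), np.fft.fft(b)
        # optimal complex factor w = scale * exp(i * angle) over all non-DC coefficients
        w = np.vdot(B[1:], A[1:]) / np.vdot(B[1:], B[1:])
        t = (A[0] - w * B[0]) / len(a)                 # translation carried by the DC term
        b_aligned = np.fft.ifft(w * B) + t             # back to the spatial domain
        return np.column_stack([b_aligned.real, b_aligned.imag]), w, t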
Mjolnir: deformable image registration using feature diffusion
Image registration is the process of aligning separate images into a common reference frame so that they can be compared visually or statistically. In order for this alignment to be accurate and correct it is important to identify the correct anatomical correspondences between different subjects. We propose a new approach for a feature-based, inter-subject deformable image registration method using a novel displacement field interpolation. Among the top deformable registration algorithms in the literature today is the work of Shen et al. called HAMMER. This is a feature-based, hierarchical registration algorithm, which introduces the novel idea of fusing feature and intensity matching. The algorithm presented in this paper is an implementation of that method, where significant improvements of some important aspects have been made. A new approach to the algorithm will be introduced as well as clarification of some key features of the work of Shen et al. which have not been elaborated in previous publications. The new algorithm, which is referred to as Mjolnir (Thor's hammer), was validated on both synthesized and real T1 weighted MR brain images. The results were compared with results generated by HAMMER and show significant improvements in accuracy with reduction in computation time.
Non-rigid brain image registration using a statistical deformation model
Jeroen Wouters, Emiliano D'Agostino, Frederik Maes, et al.
In this article, we propose a new registration method, based on a statistical analysis of deformation fields. At first, a set of MRI brain images was registered using a viscous fluid algorithm. The obtained deformation fields are then used to calculate a Principal Component Analysis (PCA) based decomposition. Since PCA models the deformations as a linear combination of statistically uncorrelated principal components, new deformations can be created by changing the coefficients in the linear combination. We then use the PCA representation of the deformation fields to non-rigidly align new sets of images. We use a gradient descent method to adjust the coefficients of the principal components, such that the resulting deformation maximizes the mutual information between the deformed image and an atlas image. The results of our method are promising. Viscous fluid registrations of new images can be recovered with an accuracy of about half a voxel. Better results can be obtained by using a more extensive database of learning images (we only used 84). Also, the optimization method used here can be improved, especially to shorten computation time.
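A minimal sketch of the statistical deformation model itself: PCA of flattened training deformation fields via SVD, and synthesis of a new deformation as the mean plus a linear combination of components. The viscous fluid registration and the MI-driven coefficient optimization are not shown, and the array layout is an assumption.

    import numpy as np

    def build_pca_model(deformations, n_components=10):
        """deformations: (n_subjects, n_voxels * 3) array of flattened displacement fields."""
        mean = deformations.mean(axis=0)
        X = deformations - mean
        U, S, Vt = np.linalg.svd(X, full_matrices=False)
        components = Vt[:n_components]                       # principal deformation modes
        stddevs = S[:n_components] / np.sqrt(len(X) - 1)     # per-mode standard deviations
        return mean, components, stddevs

    def synthesize(mean, components, coeffs):
        return mean + coeffs @ components                    # new (flattened) deformation field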
Nonrigid registration using regularization that accommodates local tissue rigidity
Dan Ruan, Jeffrey A. Fessler, Michael Roberson, et al.
Regularized nonrigid medical image registration algorithms usually estimate the deformation by minimizing a cost function, consisting of a similarity measure and a penalty term that discourages "unreasonable" deformations. Conventional regularization methods enforce homogeneous smoothness properties of the deformation field; less work has been done to incorporate tissue-type-specific elasticity information. Yet ignoring the elasticity differences between tissue types can result in non-physical results, such as bone warping. Bone structures should move rigidly (locally), unlike the more elastic deformation of soft tissues. Existing solutions for this problem either treat different regions of an image independently, which requires precise segmentation and incurs boundary issues; or use an empirical spatially varying "filter" to "correct" the deformation field, which requires the knowledge of a stiffness map and departs from the cost-function formulation. We propose a new approach to incorporate tissue rigidity information into the nonrigid registration problem, by developing a space-variant regularization function that encourages the local Jacobian of the deformation to be a nearly orthogonal matrix in rigid image regions, while allowing more elastic deformations elsewhere. For the case of X-ray CT data, we use a simple monotonic increasing function of the CT numbers (in HU) as a "rigidity index" since bones typically have the highest CT numbers. Unlike segmentation-based methods, this approach is flexible enough to account for partial volume effects. Results using a B-spline deformation parameterization illustrate that the proposed approach improves registration accuracy in inhale-exhale CT scans with minimal computational penalty.
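A hedged sketch of the two ingredients: a monotone rigidity index of the CT number and a local penalty that pushes the Jacobian of the deformation toward an orthogonal matrix where the index is high. The HU thresholds and the exact penalty form are illustrative, not the authors' values.

    import numpy as np

    def rigidity_index(hu, soft_hu=100.0, bone_hu=400.0):
        # 0 for soft tissue, ramping up to 1 for dense bone (thresholds are illustrative)
        return np.clip((hu - soft_hu) / (bone_hu - soft_hu), 0.0, 1.0)

    def local_rigidity_penalty(J, kappa):
        """J: (..., 3, 3) local Jacobians of the deformation; kappa: rigidity index per point."""
        JtJ = np.einsum('...ji,...jk->...ik', J, J)            # J^T J at every point
        dev = JtJ - np.eye(3)                                  # deviation from orthogonality
        return np.sum(kappa * np.sum(dev ** 2, axis=(-2, -1))) # rigidity-weighted Frobenius norm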
Nonrigid registration using a rigidity constraint
Nonrigid registration is a technique commonly used in the field of medical imaging. A drawback of most current nonrigid registration algorithms is that they model all tissue as being nonrigid. When a nonrigid registration is performed, the rigid objects in the image, such as bony structures or surgical instruments, may also transform nonrigidly. Other consequences are that tumour growth between follow-up images may be concealed, or that structures containing contrast material in one image and not in the other may be compressed by the registration algorithm. In this paper we propose a novel regularisation term, which is added to the cost function in order to penalise nonrigid deformations of rigid objects. This regularisation term can be used for any representation of the deformation field capable of modelling locally rigid deformations. By using a B-spline representation of the deformation field, a fast algorithm can be devised. We show on 2D synthetic data, on clinical CT slices, and on clinical DSA images, that the proposed rigidity constraint is successful, thus improving registration results.
Registration III
Reconstruction of 4D-CT data sets acquired during free breathing for the analysis of respiratory motion
Jan Ehrhardt, Rene Werner, Thorsten Frenzel, et al.
Respiratory motion is a significant source of error in radiotherapy treatment planning. 4D-CT data sets can be useful to measure the impact of organ motion caused by breathing. However, modern CT scanners can only scan a limited region of the body at once, and patients have to be scanned in segments consisting of multiple slices. For studying free-breathing motion, multislice CT scans can be collected simultaneously with digital spirometry over several breathing cycles. The 4D data set is assembled by sorting the free-breathing multislice CT scans according to the couch position and the tidal volume. However, artifacts can occur because data segments are not available at exactly the same tidal volume for all couch positions. We present an optical flow based method for the reconstruction of 4D-CT data sets from multislice CT scans collected simultaneously with digital spirometry. The optical flow between the scans is estimated by a non-linear registration method. The calculated velocity field is used to reconstruct a 4D-CT data set by interpolating data at user-defined tidal volumes. By this technique, artifacts can be reduced significantly. The reconstructed 4D-CT data sets are used for studying inner organ motion during the respiratory cycle. The procedures described were applied to reconstruct 4D-CT data sets for four tumour patients who were scanned during free breathing. The reconstructed 4D data sets were used to quantify organ displacements and to visualize the abdominothoracic organ motion.
Globally optimal model-based matching of anatomical trees
Modern MDCT and micro-CT scanners are able to produce high-resolution three-dimensional (3D) images of anatomical trees, such as the airway tree and the heart and liver vasculature. An important problem arising in many contexts is the matching of trees depicted in two different images. Three basic steps are used in order to match two trees: (1) image segmentation, to extract the raw trees from a given pair of 3D images; (2) axial-analysis, to define the underlying centerline structure of the trees; and (3) tree matching, to match the centerline structures of the trees. We focus on step (3). This task is complicated by several problems associated with current segmentation and axial-analysis methods, including missing branches, false branches, and other topological errors in the extracted trees. We propose a model-based approach in which the extracted trees are assumed to arise from an initially unknown common structure corrupted by a sequence of modelled topological deformations. We employ a novel mathematical framework to directly incorporate this model into the matching problem. Under this framework, it is possible to define the set of matches that are consistent with a given deformation model. The optimal match is the member of this set that maximizes a user-definable similarity measure. We present several such similarity measures based upon geometrical attributes (e.g., branch lengths, branching angles, and relative branchpoint locations as measured from the 3D image data). We locate the globally optimal match via an efficient dynamic programming algorithm. Our primary analytical result is a set of sufficient conditions on the user-definable similarity measure such that our dynamic programming algorithm is guaranteed to locate an optimal match. Experimental results have been generated for 3D human CT chest scans and micro-CT coronary arterial-tree images of mice. The resulting matches are in good agreement with correspondences defined by human experts.
A comparison of FFD-based nonrigid registration and AAMs applied to myocardial perfusion MRI
Hildur Ólafsdóttir, Mikkel B. Stegmann, Bjarne K. Ersbøll, et al.
Little work has been done on comparing the performance of statistical model-based approaches and nonrigid registration algorithms. This paper deals with the qualitative and quantitative comparison of active appearance models (AAMs) and a nonrigid registration algorithm based on free-form deformations (FFDs). AAMs are known to be much faster than nonrigid registration algorithms. On the other hand, nonrigid registration algorithms do not depend on a training set, as is required to build an AAM. To obtain a further comparison of the two methods, they are both applied to automatically register multi-slice myocardial perfusion images. The images were acquired by magnetic resonance imaging from infarct patients. Registration of these sequences is crucial for clinical practice, where it is currently performed manually. In the paper, the pros and cons of the two registration approaches are discussed and qualitative and quantitative comparisons are provided. The quantitative comparison is obtained by an analysis of variance of landmark errors, i.e. point-to-point and point-to-curve errors. Even though the FFD-based approach does not include a training phase, it achieved accuracy similar to that of the AAMs in terms of point-to-point errors. For the point-to-curve errors the AAMs provided higher accuracy. In both cases the AAMs gave higher precision due to the training procedure.
Cardiac motion estimation by using high-dimensional features and K-means clustering method
Tagged Magnetic Resonance Imaging (MRI) is currently the reference modality for myocardial motion and strain analysis. Mutual Information (MI) based non-rigid registration has proven to be an accurate method to retrieve cardiac motion and overcome many drawbacks present in previous approaches. In previous work, we used Wavelet-based Attribute Vectors (WAVs) instead of pixel intensity to measure similarity between frames. Since the curse of dimensionality forbids the use of histograms to estimate MI of high-dimensional features, k-Nearest Neighbors Graphs (kNNG) were applied to calculate α-MI. Results showed that cardiac motion estimation was feasible with that approach. In this paper, the K-Means clustering method is applied to compute MI from the same set of WAVs. The proposed method was applied to four tagged MRI sequences, and the resulting displacements were compared with manual measurements made by two observers. Results show that more accurate motion estimation is obtained compared with the use of pixel intensity.
Registration of 2D cardiac images to real-time 3D ultrasound volumes for 3D stress echocardiography
K. Y. Esther Leung, Marijn van Stralen, Marco M. Voormolen, et al.
Three-dimensional (3D) stress echocardiography is a novel technique for diagnosing cardiac dysfunction, by comparing wall motion of the left ventricle under different stages of stress. For quantitative comparison of this motion, it is essential to register the ultrasound data. We propose an intensity based rigid registration method to retrieve two-dimensional (2D) four-chamber (4C), two-chamber, and short-axis planes from the 3D data set acquired in the stress stage, using manually selected 2D planes in the rest stage as reference. The algorithm uses the Nelder-Mead simplex optimization to find the optimal transformation of one uniform scaling, three rotation, and three translation parameters. We compared registration using the SAD, SSD, and NCC metrics, performed on four resolution levels of a Gaussian pyramid. The registration's effectiveness was assessed by comparing the 3D positions of the registered apex and mitral valve midpoints and 4C direction with the manually selected results. The registration was tested on data from 20 patients. Best results were found using the NCC metric on data downsampled with factor two: mean registration errors were 8.1mm, 5.4mm, and 8.0° in the apex position, mitral valve position, and 4C direction respectively. The errors were close to the interobserver (7.1mm, 3.8mm, 7.4°) and intraobserver variability (5.2mm, 3.3mm, 7.0°), and better than the error before registration (9.4mm, 9.0mm, 9.9°). We demonstrated that the registration algorithm visually and quantitatively improves the alignment of rest and stress data sets, performing similar to manual alignment. This will improve automated analysis in 3D stress echocardiography.
Real-time registration by tracking for MR-guided cardiac interventions
Desmond Chung, Janakan Satkunasingham, Graham Wright, et al.
Cardiac interventional procedures such as myocardial stem cell delivery and radiofrequency ablation require a high degree of accuracy and efficiency. Real-time, 2-D MR technology is being developed to guide such procedures; the associated challenges include the relatively low resolution and image quality of real-time images. Real-time MR guidance can be enhanced by acquiring a 4-D (3-D + phase) volume prior to the procedure and aligning it to the 2-D real-time images, so that corresponding features in the prior volume can be integrated into the real-time image visualization. This technique provides spatial context with high resolution and SNR. A left ventricular (LV) myocardial wall contour tracking system was developed to maintain spatial alignment of prior volume images to real-time MR images. Over nine test image sequences, each comprising 100 frames of simulated respiratory motion, the tracker maintained alignment with a mean displacement error of 1.61mm in a region of interest around the LV, as compared to a mean displacement error of 5.2mm without tracking.
Restoration and Filtering
A homomorphic filtering framework for DT-MRI
Carlos-Alberto Castaño-Moraga, Carl-Fredrik Westin, Juan Ruiz-Alzola
In this paper we develop a new filtering framework for tensor signal processing using the theory of vector spaces. From this point of view, signals are regarded as elements of vector spaces and operators as mappings from the input space to the output space. Hence, it is possible to generalize the principle of superposition to any operator defined on the signal spaces. Systems that obey this generalization of the principle of superposition are referred to as homomorphic, and they can be decomposed into a cascade of three homomorphic subsystems: the first operates on the input signal space, the second is a linear system in the usual sense, and the third operates on the output signal space. Thus, suitable input and output subsystems can be chosen to deal with the input signals, which defines a whole family of homomorphic filters. To apply this idea to DT-MRI signals, which consist of positive semi-definite matrices, we identify the input and output signal spaces as the set of real symmetric positive semi-definite matrices. Our homomorphic filtering framework not only guarantees a positive semi-definite output tensor field whatever linear filter is used to regularize the noisy input, but also reduces the swelling effect produced by a faster regularization of diffusivities than of orientations, as demonstrated by the encouraging results that have been obtained.
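One common way to realize such a homomorphic filter on positive semi-definite tensors is to map each tensor into a vector space with the matrix logarithm, apply an ordinary linear filter there, and map back with the matrix exponential, which guarantees a symmetric positive (semi-)definite output. The Python sketch below uses Gaussian smoothing as the middle linear stage; it illustrates the general idea rather than the authors' specific choice of subsystems.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def tensor_log(field, eps=1e-8):
        """Elementwise matrix log of a (..., 3, 3) symmetric positive (semi-)definite tensor field."""
        w, v = np.linalg.eigh(field)
        w = np.log(np.maximum(w, eps))                  # clamp small eigenvalues for semi-definite inputs
        return np.einsum('...ik,...k,...jk->...ij', v, w, v)

    def tensor_exp(field):
        w, v = np.linalg.eigh(field)
        return np.einsum('...ik,...k,...jk->...ij', v, np.exp(w), v)

    def homomorphic_smooth(tensor_field, sigma=1.0):
        """tensor_field: (nx, ny, nz, 3, 3). Log -> linear (Gaussian) filter -> exp."""
        log_field = tensor_log(tensor_field)
        smoothed = np.empty_like(log_field)
        for i in range(3):
            for j in range(3):
                smoothed[..., i, j] = gaussian_filter(log_field[..., i, j], sigma)
        return tensor_exp(smoothed)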
Theoretical framework for analyzing MR imaging of dynamic objects using filters and downsamplers
Reconstruction methods for MR imaging of dynamic objects have traditionally been analyzed using the projection slice theorem. In this paper, we present a new theoretical framework for analyzing MR imaging of dynamic objects. Our framework reinterprets the object stationarity assumption in MR reconstruction techniques as a combination of filtering and downsampling operations performed on the acquired k-space data. We have analyzed our results in x-f (spatial coordinate - temporal frequency) space using a time-sequential analysis. While the projection slice theorem has only been used to analyze the Cartesian sampling pattern, the new framework can analyze any arbitrary sampling pattern with a given reconstruction algorithm. Further, the new theoretical framework can be used to analyze the effect of relaxing the object stationarity assumption on the reconstructed MR images. We have demonstrated the use of our framework by analyzing two popular image reconstruction techniques, namely view-sharing and UNFOLD. In the analysis of view-sharing, we have confirmed that interleaved and bit-reversed k-space sampling patterns provide better artifact suppression for dynamic MR imaging. We propose using a different filter to further reduce artifacts in the reconstructed images. In the case of UNFOLD, we have analyzed the effect of relaxing the object stationarity assumption and have shown that it leads to an increase in motion artifacts.
A novel strategy for segmentation of magnetic resonance (MR) images corrupted by intensity inhomogeneity artifacts
Magnetic resonance images are often corrupted by intensity inhomogeneity (i.e., bias field effects), which manifests itself as slow intensity variations over the image domain. Such shading artifacts must be corrected before performing computerized analyses such as intensity-based segmentation and quantitative analysis. In this paper, we present a novel strategy in the fuzzy c-means (FCM) framework that estimates the bias field while simultaneously segmenting the image. An additive field term that models the bias field is incorporated into the FCM objective function. We propose a new term based on the spectral parameterization (i.e., wavelet coefficients) of the bias field that serves as a regularizer to enforce the smoothness of the estimated bias field. We also introduce a second regularization term that causes the labeling of each pixel to be influenced by the pixels in its immediate neighborhood. The latter regularization term renders the algorithm less sensitive to noise. We show that the novel objective functional can be optimized efficiently using an iterative process. The efficacy of the algorithm is demonstrated on synthesized images as well as on clinical breast MR images. With the synthesized images, segmentation accuracy using standard FCM is 89.07%, while segmentation accuracy with the proposed algorithm is 99.95%.
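The alternating structure of such a bias-corrected fuzzy c-means iteration can be sketched as follows: memberships are computed from distances between bias-corrected intensities and class centroids, centroids are updated from the memberships, and the bias field is re-estimated. The wavelet-based smoothness regularizer and the neighborhood term of the paper are omitted; the crude smoothed-residual bias update below is a stand-in assumption, not the authors' formulation.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fcm_with_bias(image, n_classes=3, m=2.0, n_iter=20):
        """image: 2D array. Returns memberships u (n_classes, H, W), centroids v, bias field b."""
        y = image.astype(float)
        v = np.linspace(y.min(), y.max(), n_classes)       # initial centroids
        b = np.zeros_like(y)                               # additive bias field
        for _ in range(n_iter):
            d2 = np.stack([(y - b - vk) ** 2 for vk in v]) + 1e-12
            u = (1.0 / d2) ** (1.0 / (m - 1.0))
            u /= u.sum(axis=0, keepdims=True)              # fuzzy memberships
            um = u ** m
            v = np.array([(um[k] * (y - b)).sum() / um[k].sum() for k in range(n_classes)])
            predicted = (um * v[:, None, None]).sum(axis=0) / um.sum(axis=0)
            b = gaussian_filter(y - predicted, sigma=20.0)  # crude smooth bias estimate (stand-in)
        return u, v, b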
Apparent diffusion profile estimation from high angular resolution diffusion images
Maxime Descoteaux, Elaine Angelino, Shaun Fitzgibbons, et al.
High angular resolution diffusion imaging (HARDI) has recently been of great interest for characterizing non-Gaussian diffusion processes. In the white matter of the brain, these occur when fiber bundles cross, kiss or diverge within the same voxel. One important goal is to better describe the apparent diffusion process in these multiple-fiber regions, thus overcoming the limitations of classical diffusion tensor imaging (DTI). In this paper, we design the appropriate mathematical tools to describe noisy HARDI data. Using a meaningful modified spherical harmonics basis to capture the physical constraints of the problem, we propose a new regularization algorithm to estimate a diffusivity profile that is smoother and closer to the true noise-free diffusivities. We exploit properties of the spherical harmonics to define a smoothing term based on the Laplace-Beltrami operator for functions defined on the unit sphere. An additional contribution of the paper is the derivation of the general transformation taking the spherical harmonics coefficients to the independent elements of the high-order tensor. This allows a careful study of state-of-the-art high-order anisotropy measures computed from either spherical harmonics or tensor coefficients. We analyze their ability to characterize the underlying diffusion process. We are able to recover voxels with isotropic, single-fiber anisotropic and multiple-fiber anisotropic diffusion. We test and validate the approach on diffusion profiles from synthetic data and from a biological rat phantom.
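Such a regularized spherical-harmonic fit amounts to linear least squares with a Laplace-Beltrami penalty: with B the matrix of (modified) spherical harmonics evaluated at the gradient directions, the coefficients are c = (B^T B + lambda*L)^(-1) B^T s, where L is diagonal with entries l^2 (l+1)^2 for the order l of each basis function. The sketch below assumes B and the order vector are precomputed, and the regularization weight is a placeholder; this is not the authors' code.

    import numpy as np

    def fit_sh_regularized(signals, B, orders, lam=0.006):
        """
        signals: (n_directions,) diffusion samples on the sphere.
        B:       (n_directions, n_coeffs) spherical-harmonic design matrix (assumed precomputed).
        orders:  (n_coeffs,) harmonic order l of each column of B.
        Returns the Laplace-Beltrami regularized coefficient vector c.
        """
        L = np.diag((orders * (orders + 1.0)) ** 2)     # Laplace-Beltrami penalty on the sphere
        c = np.linalg.solve(B.T @ B + lam * L, B.T @ signals)
        return c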
Restoration of 3D medical images with total variation scheme on wavelet domains (TVW)
Arnaud Ogier, Pierre Hellier, Christian Barillot
The multiplicity of sensors used in medical imaging leads to different types of noise. Non-informative noise can damage the image interpretation process and the performance of automatic analysis. The method proposed in this paper compensates highly noisy image data for non-informative noise without sophisticated modeling of the noise statistics. This generic approach jointly uses a wavelet decomposition scheme and a non-isotropic Total Variation filtering of the transform coefficients. This framework benefits from both the hierarchical capabilities of the wavelet transform and the well-posed regularization scheme of the Total Variation. The algorithm has been tested and validated on test-bed data as well as on different clinical MR and 3D ultrasound images, demonstrating the capability of the proposed method to cope with different noise models.
A pixelwise inpainting-based refinement scheme for quantizing calcification in the lumbar aorta on 2D lateral x-ray images
In this paper we seek to improve the standard method of assessing the degree of calcification in the lumbar aorta visualized on lateral 2-D X-rays. The semiquantitative method does not take the density of calcification within the individual plaques into account and is unable to measure subtle changes in the severity of calcification over time. Both of these parameters would be desirable to assess, since they are the keys to obtaining important information on the impact of risk factors and candidate drugs aiming at the prevention of atherosclerosis. As a further step toward solving this task, we propose a pixelwise inpainting-based refinement scheme that seeks to optimize the individual plaque shape by maximizing the signal-to-noise ratio. Contrary to previous work, the algorithm developed for this study uses a sorted candidate list, which avoids possible bias introduced by the choice of starting pixel. The signal-to-noise optimization scheme will be discussed in different settings using TV as well as harmonic inpainting, and these are compared with a simple averaging process.
Shape
Sparse modeling of landmark and texture variability using the orthomax criterion
In the past decade, statistical shape modeling has been widely popularized in the medical image analysis community. Predominantly, principal component analysis (PCA) has been employed to model biological shape variability. Here, a reparameterization with orthogonal basis vectors is obtained such that the variance of the input data is maximized. This property drives models toward global shape deformations and has been highly successful in fitting shape models to new images. However, recent literature has indicated that this uncorrelated basis may be suboptimal for exploratory analyses and disease characterization. This paper explores the orthomax class of statistical methods for transforming variable loadings into a simple structure which is more easily interpreted by favoring sparsity. Further, we introduce these transformations into a particular framework traditionally based on PCA; the Active Appearance Models (AAMs). We note that the orthomax transformations are independent of domain dimensionality (2D/3D etc.) and spatial structure. Decompositions of both shape and texture models are carried out. Further, the issue of component ordering is treated by establishing a set of relevant criteria. Experimental results are given on chest radiographs, magnetic resonance images of the brain, and face images. Since pathologies are typically spatially localized, either with respect to shape or texture, we anticipate many medical applications where sparse parameterizations are preferable to the conventional global PCA approach.
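Orthomax transformations rotate the PCA loading matrix by an orthogonal matrix chosen to maximize a sparsity criterion; varimax (gamma = 1) is the most common member of the family. A compact SVD-based iteration under the usual formulation is sketched below as a generic illustration, not the implementation used in the paper.

    import numpy as np

    def orthomax(loadings, gamma=1.0, max_iter=100, tol=1e-7):
        """Rotate a (p, k) loading matrix; gamma = 1 gives varimax, gamma = 0 quartimax."""
        p, k = loadings.shape
        R = np.eye(k)
        var = 0.0
        for _ in range(max_iter):
            L = loadings @ R
            # Gradient of the orthomax criterion with respect to the rotation
            G = loadings.T @ (L ** 3 - (gamma / p) * L @ np.diag(np.sum(L ** 2, axis=0)))
            U, S, Vt = np.linalg.svd(G)
            R = U @ Vt
            new_var = S.sum()
            if new_var < var * (1 + tol):
                break
            var = new_var
        return loadings @ R, R

The rotated loadings concentrate each component on few variables, which is the sparsity property exploited above for localized shape and texture effects.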
A representation and classification scheme for tree-like structures in medical images: an application on branching pattern analysis of ductal trees in x-ray galactograms
Vasileios Megalooikonomou, Despina Kontos, Joseph Danglemaier, et al.
We propose a multi-step approach for representing and classifying tree-like structures in medical images. Examples of such tree-like structures are encountered in the bronchial system, the vessel topology and the breast ductal network. We assume that the tree-like structures are already segmented. To avoid the tree isomorphism problem we obtain the breadth-first canonical form of a tree. Our approach is based on employing tree encoding techniques, such as the depth-first string encoding and the Prüfer encoding, to obtain a symbolic representation. Thus, the problem of classifying trees is reduced to string classification where node labels are the string terms. We employ the tf-idf text mining technique to assign a weight of significance to each string term (i.e., tree node label). We perform similarity searches and k-nearest neighbor classification of the trees using the tf-idf weight vectors and the cosine similarity metric. We applied our approach to breast ductal networks manually extracted from clinical x-ray galactograms. The goal was to characterize the ductal tree-like parenchymal structures in order to distinguish among different groups of women. Our best classification accuracy reached up to 90% for certain experimental settings (k=4), outperforming on average by 10% a previous state-of-the-art method based on ramification matrices. These results illustrate the effectiveness of the proposed approach in analyzing tree-like patterns in breast images. Developing such automated tools for the analysis of tree-like structures in medical images can potentially provide insight into the relationship between the topology of branching and function or pathology.
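Once each tree is encoded as a string of node labels, the classification step reduces to standard text mining: tf-idf weighting of the label terms followed by k-nearest-neighbor classification with cosine similarity. A minimal scikit-learn sketch under that reading is given below; the encoded strings and labels are placeholders, not data from the paper.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.neighbors import KNeighborsClassifier

    # Placeholder encodings: each tree is a space-separated string of node labels.
    train_strings = ["a ab abb abc", "a ab ac acc", "a aa aab aac"]
    train_labels = [0, 1, 0]                                    # e.g. radiological finding vs. not
    test_strings = ["a ab abb abd"]

    vectorizer = TfidfVectorizer(token_pattern=r"\S+")          # tf-idf weight per node-label term
    X_train = vectorizer.fit_transform(train_strings)
    X_test = vectorizer.transform(test_strings)

    knn = KNeighborsClassifier(n_neighbors=3, metric='cosine')  # cosine distance for the weight vectors
    knn.fit(X_train, train_labels)
    print(knn.predict(X_test))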
Three-dimensional analysis of alveolar bone resorption by image processing of 3-D dental CT images
We have developed a novel system that provides total support for the assessment of alveolar bone resorption, caused by periodontitis, based on three-dimensional (3-D) dental CT images. In spite of the difficulty of perceiving the complex 3-D shape of resorption, dentists assessing resorption location and severity have been relying on two-dimensional radiography and probing, which merely provides one-dimensional information (depth) about the resorption shape. However, there has been little work on assisting assessment of the disease with 3-D image processing and visualization techniques. This work provides quantitative evaluation results and figures for our system, which measures the three-dimensional shape and spread of resorption. It has the following functions: (1) it measures the depth of resorption by virtually simulating probing in the 3-D CT images, with the advantages that the image-based measurement does not suffer obstruction by teeth on the inter-proximal sides and uses much smaller measurement intervals than the conventional examination; (2) it visualizes the disposition of the depth by movies and graphs; (3) it produces a quantitative index and an intuitive visual representation of the spread of resorption in the inter-radicular region in terms of area; and (4) it calculates the volume of resorption as another severity index in the inter-radicular region and the region outside it. Experimental results in two cases of 3-D dental CT images, and a comparison of the results with the clinical examination results and experts' measurements of the corresponding patients, confirmed that the proposed system gives satisfying results, including 0.1 to 0.6mm of resorption measurement (probing) error and fairly intuitive presentation of measurement and calculation results.
Optimal landmark distributions for statistical shape model construction
Tobias Heimann, Ivo Wolf, Hans-Peter Meinzer
Minimizing the description length (MDL) is one of the most promising methods to automatically generate 3D statistical shape models. By modifying an initial landmark distribution according to the MDL cost function, points across the different training shapes are brought into correspondence. A drawback of the current approach is that the user has no influence on the final landmark positions, which often do not represent the modeled shape adequately. We extend an existing remeshing technique to work with statistical shape models and show how the landmark distribution can be modified anytime during the model construction phase. This procedure is guided by a control map in parameter space that can be set up to produce any desired point distribution, e.g. equally spaced landmarks. To compare our remeshed models with the original approach, we generalize the established generalization and specificity measures to be independent of the underlying landmark distribution. This is accomplished by switching the internal metric from landmark distances to the Tanimoto coefficient, a volumetric overlap measure. In a concluding evaluation, we generate models for two medical datasets with and without landmark redistribution. As the outcome reveals, redistributing landmarks to an equally spaced distribution during the model construction phase improves the quality of the resulting models significantly if the shapes feature prominent bulges or other complex geometry.
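The Tanimoto coefficient used as the internal metric above is the volumetric overlap of two binary shapes, |A ∩ B| / |A ∪ B|. A one-function sketch on voxel masks:

    import numpy as np

    def tanimoto(a, b):
        """Volumetric overlap of two binary voxel masks of equal shape."""
        a = a.astype(bool)
        b = b.astype(bool)
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union if union > 0 else 1.0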
An ISO-surface folding analysis method applied to premature neonatal brain development
In this paper we describe the application of folding measures to tracking in vivo cortical brain development in premature neonatal brain anatomy. The outer gray matter and the gray-white matter interface surfaces were extracted from semi-interactively segmented high-resolution T1 MRI data. Nine curvature- and geometric descriptor-based folding measures were applied to six premature infants, aged 28-37 weeks, using a direct voxelwise iso-surface representation. We have shown that using such an approach it is feasible to extract meaningful surfaces of adequate quality from typical clinically acquired neonatal MRI data. We have shown that most of the folding measures, including a new proposed measure, are sensitive to changes in age and therefore applicable in developing a model that tracks development in premature infants. For the first time gyrification measures have been computed on the gray-white matter interface and on cases whose age is representative of a period of intense brain development.
Image-based metrology of porous tissue engineering scaffolds
Tissue engineering is an interdisciplinary effort aimed at the repair and regeneration of biological tissues through the application and control of cells, porous scaffolds and growth factors. The regeneration of specific tissues guided by tissue-analogous substrates depends on diverse scaffold architectural indices that can be derived quantitatively from microCT and microMR images of the scaffolds. However, the randomness of pore-solid distributions in conventional stochastic scaffolds presents unique computational challenges. As a result, image-based characterization of scaffolds has been predominantly qualitative. In this paper, we discuss quantitative image-based techniques that can be used to compute the metrological indices of porous tissue engineering scaffolds. While bulk averaged quantities such as porosity and surface area are derived directly from the optimal pore-solid delineations, the spatially distributed geometric indices are derived from the medial axis representations of the pore network. The computational framework proposed in this paper (to the best of our knowledge, for the first time in tissue engineering) might have profound implications toward unraveling the symbiotic structure-function relationship of porous tissue engineering scaffolds.
Pattern Recognition
Automated planning of MRI neuro scans
In clinical MRI examinations, the geometry of diagnostic scans is defined in an initial planning phase. The operator plans the scan volumes (off-centre, angulation, field-of-view) with respect to patient anatomy in 'scout' images. Often multiple plans are required within a single examination, distracting attention from the patient waiting in the scanner. A novel and robust method is described for automated planning of neurological MRI scans, capable of handling strong shape deviations from healthy anatomy. The expert knowledge required to position scan geometries is learned from previous example plans, allowing site-specific styles to be readily taken into account. The proposed method first fits an anatomical model to the scout data, and then new scan geometries are positioned with respect to extracted landmarks. The accuracy of landmark extraction was measured to be comparable to the inter-observer variability, and automated plans are shown to be highly consistent with those created by expert operators using clinical data. The results of the presented evaluation demonstrate the robustness and applicability of the proposed approach, which has the potential to significantly improve clinical workflow.
A classification framework for content-based extraction of biomedical objects from hierarchically decomposed images
Christian Thies, Marcel Schmidt Borreda, Thomas Seidl, et al.
Multiscale analysis provides a complete hierarchical partitioning of images into visually plausible regions. Each of them is formally characterized by a feature vector describing shape, texture and scale properties. Consequently, object extraction becomes a classification of the feature vectors. Classifiers are trained on relevant and irrelevant regions, labeled as object and remaining partitions, respectively. A trained classifier can then be applied to yet uncategorized partitionings to identify the classes of the corresponding regions. Such an approach enables retrieval of a-priori unknown objects within a point-and-click interface. In this work, the classification pipeline consists of a framework for data selection, feature selection, classifier training, classification of testing data, and evaluation. According to the no-free-lunch theorem of supervised learning, the appropriate classification pipeline is determined experimentally. Therefore, each of the steps is varied by state-of-the-art methods and the respective classification quality is measured. Selection of training data from the ground truth is supported by bootstrapping, variance pooling, virtual training data, and cross validation. Feature selection for dimension reduction is performed by linear discriminant analysis, principal component analysis, and greedy selection. Competing classifiers are k-nearest-neighbor, the Bayesian classifier, and the support vector machine. Quality is measured by precision and recall to reflect the retrieval task. A set of 105 hand radiographs from clinical routine serves as ground truth, where the metacarpal bones have been labeled manually. In total, 368 out of 39,017 regions are identified as relevant. Initial experiments on feature selection with the support vector machine yielded recall, precision and F-measure values of 0.58, 0.67, and 0.62, respectively.
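The pipeline described above (feature selection, classifier training, and evaluation by precision and recall) maps naturally onto a scikit-learn pipeline. The sketch below combines PCA for dimension reduction with a support vector machine and cross-validated precision/recall as one possible instantiation of the framework, not the configuration selected in the paper; the feature matrix is a random placeholder.

    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_validate

    # Placeholder feature vectors: rows = regions, columns = shape/texture/scale features.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 40))
    y = (rng.random(500) < 0.1).astype(int)          # few relevant regions, as in the ground truth

    pipeline = Pipeline([
        ('scale', StandardScaler()),
        ('reduce', PCA(n_components=10)),            # could be swapped for LDA or greedy selection
        ('clf', SVC(kernel='rbf', class_weight='balanced')),
    ])

    scores = cross_validate(pipeline, X, y, cv=5, scoring=('precision', 'recall'))
    print(scores['test_precision'].mean(), scores['test_recall'].mean())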
A pattern recognition approach to enhancing structures in 3D CT data
In medical image processing, several attempts have been made to develop filters which enhance certain structures in 3D data based on analysis of the Hessian matrix. These filters also tend to respond to other structures, e.g. most vessel enhancement filters also enhance nodule-like objects. In this paper, we use pattern recognition techniques to design more optimal filters. The essential difference with previous approaches is that we provide a system with examples of what it should enhance and suppress. These examples are used to train a classifier that determines the probability that a voxel in an unseen image belongs to the desired structures. The advantages of such an approach are excellent performance and flexibility: it can be used for any structure by providing the appropriate examples. We evaluated our approach on enhancing pulmonary fissures, which appear as plate-like structures in 3D CT chest scans. We compared our approach to the results of a recently proposed fissure enhancement filter. The results show that both methods are able to enhance the fissures, but our approach shows better performance; the areas under the ROC curves are 0.9044 and 0.7650, respectively.
Blood detection in wireless capsule endoscopy using expectation maximization clustering
Sae Hwang, JungHwan Oh, Jay Cox, et al.
Wireless Capsule Endoscopy (WCE) is a relatively new technology (FDA approved in 2002) allowing doctors to view most of the small intestine. Other endoscopies such as colonoscopy, upper gastrointestinal endoscopy, push enteroscopy, and intraoperative enteroscopy can be used to visualize the stomach, duodenum, colon, and terminal ileum, but there existed no method to view most of the small intestine without surgery. With the miniaturization of wireless and camera technologies came the ability to view the entire gastrointestinal tract with little effort. A tiny disposable video capsule is swallowed, transmitting two images per second to a small data receiver worn by the patient on a belt. During an approximately 8-hour course, over 55,000 images are recorded to the worn device and then downloaded to a computer for later examination. Typically, a medical clinician spends more than two hours to analyze a WCE video. Research has attempted to automatically find abnormal regions (especially bleeding) to reduce the time needed to analyze the videos. The manufacturers also provide a software tool to detect bleeding, called Suspected Blood Indicator (SBI), but its accuracy is not high enough to replace human examination. It was reported that the sensitivity and the specificity of SBI were about 72% and 85%, respectively. To address this problem, we propose a technique to detect bleeding regions automatically utilizing the Expectation Maximization (EM) clustering algorithm. Our experimental results indicate that the proposed bleeding detection method achieves a sensitivity of 92% and a specificity of 98%.
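Expectation Maximization clustering of pixel colors can be sketched with scikit-learn's GaussianMixture, which is fitted by EM; pixels in the cluster whose mean looks most blood-like are flagged. The color representation and the red-to-green heuristic below are illustrative assumptions, not the authors' pipeline.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def detect_bleeding_mask(rgb_frame, n_clusters=3):
        """rgb_frame: (H, W, 3) uint8 WCE frame. Returns a boolean mask of the reddest cluster."""
        h, w, _ = rgb_frame.shape
        pixels = rgb_frame.reshape(-1, 3).astype(float)
        gmm = GaussianMixture(n_components=n_clusters, covariance_type='full', random_state=0)
        labels = gmm.fit_predict(pixels)                # EM fitting + hard cluster assignment
        # Heuristic: the cluster with the highest red-to-green ratio is flagged as bleeding.
        ratios = gmm.means_[:, 0] / (gmm.means_[:, 1] + 1e-6)
        bleeding_cluster = int(np.argmax(ratios))
        return (labels == bleeding_cluster).reshape(h, w)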
Functional feature subspace mapping of fMRI data in the spectral domain
We propose a new method for the analysis of functional magnetic resonance imaging (fMRI) data which is called functional feature subspace mapping (FFSM). We focus mainly on experimental designs with periodic stimuli, which can be described by a number of Fourier coefficients in the spectral domain. The subspace is then obtained through a dimension reduction technique. Finally, the presence of activated time series is identified by a clustering method. Experiments with simulated data and real human data were conducted to demonstrate that the proposed algorithm is feasible. Although we focus on analyzing periodic fMRI data, the approach could be extended to analyze non-periodic fMRI data (event-related fMRI) by replacing the spectral analysis with a wavelet analysis.
Analysis of first-pass myocardial perfusion MRI using independent component analysis
Julien Milles, Rob J. van der Geest, Michael Jerosch-Herold, et al.
Myocardial perfusion MRI has emerged as a suitable imaging technique for the detection of ischemic regions of the heart. However, manual post-processing is labor intensive, seriously hampering its daily clinical use. We propose a novel, data driven analysis method based on Independent Component Analysis (ICA). By performing ICA on the complete perfusion sequence, physiologically meaningful feature images, representing events occurring during the perfusion sequence, can be factored out. Results obtained using our method are compared with results obtained using manual contouring by a medical expert. The estimated weight functions are correlated against the perfusion time-intensity curves from manual contours, yielding promising results.
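Performing ICA on the complete perfusion sequence can be sketched with scikit-learn's FastICA: the sequence is reshaped so that each frame is an observation over pixels, the mixing over time acts as the weight functions, and the spatial maps serve as feature images. This is a generic illustration; the authors' preprocessing and component selection are not reproduced.

    import numpy as np
    from sklearn.decomposition import FastICA

    def perfusion_ica(sequence, n_components=4):
        """
        sequence: (n_frames, H, W) first-pass perfusion series.
        Returns feature_images (n_components, H, W) and weight functions (n_frames, n_components).
        """
        n_frames, h, w = sequence.shape
        X = sequence.reshape(n_frames, -1).astype(float)              # frames as observations over pixels
        ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
        weights = ica.fit_transform(X)                                # temporal weight functions
        feature_images = ica.mixing_.T.reshape(n_components, h, w)    # spatial maps mixed by the weights
        return feature_images, weights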
Using image similarity and asymmetry to detect breast cancer
Dave Tahmoush, Hanan Samet
Radiologists can use the differences between the left and right breasts, or asymmetry, in mammograms to help detect certain malignant breast cancers. An image similarity method is introduced to make use of this knowledge base to recognize breast cancer. Image similarity is determined using a contextual and then a spatial comparison. The mammograms are filtered to find the most contextually significant points, and then the resulting point set is analyzed for spatial similarity. We develop the analysis through a combination of modeling and supervised learning of model parameters. This process correctly classifies mammograms 80% of the time and thus asymmetry is a measure that can play an important role in significantly improving computer-aided breast cancer detection systems.
CAD I
Probabilistic nodule filtering in thoracic CT scans
Automated detection of lung nodules in thoracic CT scans is an important clinical challenge. Blood vessels form a major source of false positives in automated nodule detection systems. Hence, the performance of such systems may be improved by enhancing nodules while suppressing blood vessels. Ideally, nodule enhancement filters should enhance nodules while suppressing vessels and lung tissue. A distinction between vessels and nodules is normally obtained through eigenvalue analysis of the Hessian matrix. The Hessian matrix is a second order differential quantity and so is sensitive to noise. Furthermore, by relying on principal curvatures alone, existing filters are incapable of distinguishing between nodules and vessel junctions, and are incapable of handling cases in which nodules touch vessels. In this paper we develop novel nodule enhancement filters that are capable of suppressing junctions and of handling cases in which nodules appear to touch or even overlap with vessels. The proposed filters are based on optimized probabilistic models derived from eigenvalue analysis of the gradient correlation matrix, which is a first order differential quantity, and so are less sensitive to noise than known vessel enhancement filters. The proposed filters are evaluated and compared to known techniques both qualitatively and quantitatively. The evaluation includes both synthetic and actual clinical data.
Development of a computerized scheme for detection of very subtle lung nodules located in opaque areas on chest radiographs
The detection of lung nodules located in opaque areas including the mediastinum, retrocardiac lung, and lung projected below or on the diaphragm has been very difficult, because the contrast of these nodules is usually extremely low, and sometimes radiologists may not pay attention to these locations. In this study, we have developed a new computer-aided diagnostic (CAD) scheme designed specifically for the detection of these difficult-to-detect lung nodules located in opaque areas. We used 1,000 chest images with 1,076 lung nodules, which included 73 very difficult lung nodules in these opaque areas. In this new computerized scheme, opaque areas within a chest image were segmented by use of an adaptive multi-thresholding method based on edge-gradient values, and then the gray level and contrast of the chest image were adjusted for the opaque areas. Initial candidates were identified by use of the nodule-enhanced image obtained with the average radial-gradient (ARG) filtering technique based on radial gradient values. We employed a total of 35 image features for sequential application of artificial neural networks (ANNs) in order to reduce the number of false-positive candidates. The ANNs were trained and tested by use of a k-fold cross-validation test method (k=100), in which each of 100 different combinations of training and test image data sets included 990 and 10 chest images, respectively. The overall performance determined from the results of 100 test data sets indicated that the average sensitivity in detecting lung nodules was 52.1% with 1.89 false positives per image, which was considered "acceptable", because these nodules were very subtle and difficult to detect. By combination of this advanced CAD scheme with our standard CAD scheme for lung-nodule detection, the clinical usefulness of the CAD scheme would be improved significantly.
Computerized detection of pulmonary nodules using a combination of 3D global and local shape information based on helical CT images
A novel method called local shape controlled voting has been developed for spherical object detection in 3D voxel images. By combining local shape properties with the global tracking procedure of normal overlap, the proposed method resolves the ambiguity of normal overlap between a small sphere and a possibly large cylinder: the normal overlap technique only measures the 'density' of normal overlapping, while the 3D distribution of the normal vectors is not taken into account. The proposed method was applied to computer aided detection of small pulmonary nodules based on helical CT images. Experiments showed that this method attained a better performance than the original normal overlap technique.
Automated detection of ureter abnormalities on multi-detector row CT urography
Lubomir Hadjiiski, Berkman Sahiner, Elaine M. Caoili, et al.
We are developing a CAD system for automated detection of ureter abnormalities on multi-detector row CT urography, which potentially can assist radiologists in detecting ureter cancer. In the first stage of the CAD system, given an initial starting point, the ureter is tracked based on the CT values of the contrast-filled lumen. In the second stage, lesion candidates are detected using histogram and shape analysis to separate the abnormality from the background, which is the ureter filled with contrast material. A uniformity measure is designed to detect non-uniformity of the CT values within the ureter volume. If a ureter abnormality is present, the uniformity of the CT values will be distorted, resulting in a reduced uniformity measure. The smoothness of the ureter wall is also estimated using a shape measure. A rule-based system is used to combine the two measures. In this pilot study, a limited data set of 11 patients with biopsy-proven lesions was used. Nine patients had 12 ureter cancers and 6 benign lesions, and the remaining two patients had 2 benign lesions. The average lesion size for the 12 cancers was 7.8mm (range: 2.1mm-9.5mm). The tracking program successfully tracked the ureters in 10 of the patients. Our system detected 75% (15/20) of the ureter lesions with 2.6 (28/11) false positives per patient. 83% (10/12) of the ureter cancers were detected. The preliminary results show that our detection system can track the ureter and detect ureter cancer of medium conspicuity and relatively small size.
Biplane correlation imaging for lung nodule detection: initial human subject results
In this paper, we present the performance of biplane correlation imaging (BCI) on a set of chest x-ray projections of human data. BCI significantly reduces the number of false positives (FPs) when used in conjunction with computer aided detection (CAD) by eliminating non-correlated nodule candidates. Sixty-one low-exposure posterior projections were acquired from more than 20 human subjects with small angular separations (0.32 degree) over a range of 20 degrees along the vertical axis. All patients had previously been diagnosed for the presence of lung nodules based on computed tomography (CT) examination. Images were processed in two steps. First, all images were analyzed using our CAD routine for chest radiography. This was followed by BCI processing, in which the results of CAD on each single projection were examined in terms of their geometrical correlation with those found in the other 60 projections, based on the predetermined shift of possible nodule locations in each projection. Suspect entities whose geometrical correlation coincided with the known location of the lesions were selected as nodules; otherwise they were ignored. An expert radiologist, with reference to the associated CT dataset, determined the truth regarding nodule locations and sizes, which was then used to determine whether the found nodules were true positives or false positives. The preliminary results indicated that the best performance was obtained when the angular separation of the projection pair was greater than about 6.7 degrees. Within the range of optimum angular separation, the number of FPs per image was 0-1 without impacting the number of true positives (TPs), which averaged around 92%.
Increasing sensitivity of masses cued on both views by CAD
Bin Zheng, Glen Maitz, Joseph K. Leader, et al.
Although CAD schemes can detect a high percentage of subtle cancers depicted on "prior" and false-negative cases, radiologists frequently discard most of the CAD-cued subtle masses in the clinical environment. The relatively high false-positive detection rate and the fact that a large number of subtle masses are typically cued only on one view cause radiologists to rely less on (and often ignore) CAD-cued masses. In this study, we present a multi-view based method to increase the number of actual masses that are cued by the CAD scheme on both (ipso-lateral) views while at the same time limiting the overall "case-based" false-positive detection rate. The new scheme includes a traditional single-image based CAD scheme and a multi-view processing module. An image database from 435 examinations (or a total of 1,740 images), consisting of 235 examinations each depicting a verified malignant mass and 200 negative examinations, was used in this study. The single-image based CAD scheme with a fixed operating threshold (i.e. 0.55) was applied to all images. For each CAD-identified region (i.e. with detection score ≥ 0.55), the multi-view processing module defined a matched strip on the corresponding ipso-lateral image and identified all "matched" regions located inside the strip (including those with detection score < 0.55). All matched regions were cued on both views and unmatched regions were discarded. The CAD scheme initially detected 172 true masses and 576 false-positive regions. Of the 172 masses, 90 were detected on two views (52%) and 82 were detected only on one view. Of the 576 false-positive detections, only 72 pairs (14%) were considered "matched" and 432 were not, resulting in 504 case-based ("independent") cues. The case-based sensitivity and false-positive rate of the CAD scheme were 73.2% and 1.16 per case. When only matched region pairs were cued, 160 masses (68.1%) and 308 false-positive detections (0.71 per case) were identified on two views. Reducing the original operating threshold from 0.55 to 0.49 resulted in 172 masses and 413 false-positive detections (or 0.95 per case) being cued on two views. By reducing the detection threshold only inside the matched strip regions of the ipso-lateral images, the scheme successfully cued masses on two views while maintaining detection sensitivity and reducing the "case-based" false-positive detection rate.
CAD II
Reconstruction-independent 3D CAD for mass detection in digital breast tomosynthesis using fuzzy particles
In this paper we present a novel approach for mass detection in Digital Breast Tomosynthesis (DBT) datasets. A reconstruction-independent approach, working directly on the projected views, is proposed. Wavelet filter responses on the projections are thresholded and combined to obtain candidate masses. For each candidate, we create a fuzzy contour through a multi-level thresholding process. We introduce a fuzzy set definition for the class mass contour that allows the computation of fuzzy membership values for each candidate contour. Then, an aggregation operator is presented that combines information over the complete set of projected views, resulting in 3D fuzzy particles. A final decision is made taking into account all available information. The performance of the presented algorithm was evaluated on a database of 11 one-breast-cases resulting in a sensitivity (Se) of 0.86 and a false positive rate (FPR) of 3.5 per case.
Micro-calcification detection in digital tomosynthesis mammography
Frederick W. Wheeler, A. G. Amitha Perera, Bernhard E. Claus, et al.
A novel technique for the detection and enhancement of microcalcifications in digital tomosynthesis mammography (DTM) is presented. In this method, the DTM projection images are used directly, instead of using a 3D reconstruction. Calcification residual images are computed for each of the projection images. Calcification detection is then performed over 3D space, based on the values of the calcification residual images at projection points for each 3D point under test. The quantum, electronic, and tissue noise variance at each pixel in each of the calcification residuals is incorporated into the detection algorithm. The 3D calcification detection algorithm finds a minimum variance estimate of calcification attenuation present in 3D space based on the signal and variance of the calcification residual images at the corresponding points in the projection images. The method effectively detects calcifications in 3D in a way that both ameliorates the difficulties of joint tissue/microcalcification tomosynthetic reconstruction (streak artifacts, etc.) and exploits the well understood image properties of microcalcifications as they appear in 2D mammograms. In this method, 3D reconstruction and calcification detection and enhancement are effectively combined to create a calcification detection specific reconstruction. Motivation and details of the technique and statistical results for DTM data are provided.
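A minimum-variance (inverse-variance weighted) combination of the calcification-residual values at the corresponding projection points has the familiar closed form a_hat = sum_i(s_i / var_i) / sum_i(1 / var_i). A short sketch under that standard formula, not the authors' exact estimator:

    import numpy as np

    def min_variance_estimate(residuals, variances):
        """
        residuals: calcification-residual values at the projection points of a 3D point under test.
        variances: quantum + electronic + tissue noise variance at those points.
        Returns the inverse-variance weighted estimate of calcification attenuation.
        """
        residuals = np.asarray(residuals, dtype=float)
        w = 1.0 / np.asarray(variances, dtype=float)
        return np.sum(w * residuals) / np.sum(w)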
Computer-aided detection of breast masses on mammograms: bilateral analysis for false positive reduction
In this study, our purpose was to develop a false positive (FP) reduction method for computerized mass detection systems based on the analysis of bilateral mammograms. We first detect the mass candidates on each view by utilizing our unilateral computer-aided detection (CAD) system. For each detected object, the regional registration technique is used to define a region of interest (ROI) that is "symmetrical" to the object location on the contralateral mammogram. Spatial gray level dependence matrices (SGLD) texture features and morphological features are extracted from both the ROI containing the detected object on a mammogram and its corresponding ROI on the contralateral mammogram. Bilateral features are then generated from the extracted unilateral features and a final bilateral score is formed as a new feature to differentiate symmetric from asymmetric ROIs. By incorporating the unilateral features of the mass candidates and their bilateral scores, a bilateral classifier was trained to reduce the FPs. It was found that our bilateral CAD system achieved a case-based sensitivity of 70%, 80%, and 85% at 0.52, 0.83, and 1.05 FPs/image on the test data set. In comparison to the FP rates for the unilateral CAD system of 0.67, 1.11, and 1.69, respectively, at the corresponding sensitivities, the FP rates were reduced by 22%, 25%, and 37% with the bilateral symmetry information.
Digital bowel cleansing for computer-aided detection of polyps in fecal-tagging CT colonography
Wenli Cai, Janne Näppi, Micheal E. Zalis, et al.
Digital bowel cleansing (DBC) is an emerging method for segmentation of fecal materials, which are tagged by an X-ray-opaque oral contrast agent in CT colonography (CTC) images, effectively removing them for digital cleansing of the colon. Existing DBC approaches tend to use simple thresholding-based methods for the removal of tagged fecal materials; however, because of the pseudo-enhancement of polyps caused by the surrounding tagged fecal materials, such methods tend to erroneously remove part of, or even entire, polyps submerged in these materials. In this study, we developed a novel DBC method that preserves the soft-tissue structures submerged in or partially covered by tagged fecal materials. In our approach, submerged soft-tissue structures are characterized by their local shape signatures, which are calculated based on the eigenvalues of a Hessian matrix. A structure-enhancement function is formulated for enhancing the soft-tissue structures, and the values of the function are integrated into the speed function of a level set method to delineate the submerged soft-tissue structures while removing the tagged fecal materials. In an analysis of 10 submerged polyps, our new DBC method was shown to delineate polyps better than was possible with our previously reported cleansing method based on thresholding. Application of our computer-aided detection (CAD) scheme showed that the use of the new DBC method substantially reduced the number of false-positive detections compared with those of our previous, thresholding-based method.
Quantitative analysis of two-phase 3D+time aortic MR images
Fei Zhao, Honghai Zhang, Nicholas E. Walker M.D., et al.
Automated and accurate segmentation of the aorta in 3D+time MR image data is important for early detection of connective tissue disorders leading to aortic aneurysms and dissections. A computer-aided diagnosis method is reported that allows the objective identification of subjects with connective tissue disorders from two-phase 3D+time aortic MR images. Our automated segmentation method combines level-set and optimal border detection. The resulting aortic lumen surface was registered with an aortic model followed by calculation of modal indices of aortic shape and motion. The modal indices reflect the differences of any individual aortic shape and motion from an average aortic behavior. The indices were input to a Support Vector Machine (SVM) classifier and a discrimination model was constructed. 3D+time MR image data sets acquired from 22 normal and connective tissue disorder subjects at end-diastole (R-wave peak) and at 45% of the R-R interval were used to evaluate the performance of our method. The automated 3D segmentation result produced accurate aortic surfaces covering the aorta from the left-ventricular outflow tract to the diaphragm and yielded subvoxel accuracy with signed surface positioning errors of -0.09±1.21 voxel (-0.15±2.11 mm). The computer aided diagnosis method distinguished between normal and connective tissue disorder subjects with a classification correctness of 90.1 %.
Two-view information fusion for improvement of computer-aided detection (CAD) of breast masses on mammograms
We are developing a two-view information fusion method to improve the performance of our CAD system for mass detection. Mass candidates on each mammogram were first detected with our single-view CAD system. Potential object pairs on the two-view mammograms were then identified by using the distance between the object and the nipple. Morphological features, a Hessian feature, correlation coefficients between the two paired objects and texture features were used as input to train a similarity classifier that estimated a similarity score for each pair. Finally, a linear discriminant analysis (LDA) classifier was used to fuse the score from the single-view CAD system and the similarity score. A data set of 475 patients containing 972 mammograms with 475 biopsy-proven masses was used to train and test the CAD system. All cases contained the CC view and the MLO or LM view. We randomly divided the data set into two independent sets of 243 cases and 232 cases. The training and testing were performed using the 2-fold cross validation method. The detection performance of the CAD system was assessed by free response receiver operating characteristic (FROC) analysis. The average test FROC curve was obtained by averaging the FP rates at the same sensitivity along the two corresponding test FROC curves from the 2-fold cross validation. At case-based sensitivities of 90%, 85% and 80% on the test set, the single-view CAD system achieved FP rates of 2.0, 1.5, and 1.2 FPs/image, respectively. With the two-view fusion system, the FP rates were reduced to 1.7, 1.3, and 1.0 FPs/image, respectively, at the corresponding sensitivities. The improvement was found to be statistically significant (p<0.05) by the AFROC method. Our results indicate that the two-view fusion scheme can improve the performance of mass detection on mammograms.
Comparison of breast ductal branching pattern classification using x-ray galactograms and MR autogalactograms
We have analyzed the branching patterns of the breast ductal network visible in magnetic resonance (MR) autogalactograms - images of breast ducts which appear enhanced due to the presence of proteinaceous or hemorrhagic material in the ducts. The enhanced portions of the ductal network were segmented separately in MRI slices acquired with a 3D GRASS sequence. A semi-automated region-growing algorithm was used for segmentation. The ductal network was manually constructed from the segmented portions in each slice. The branching pattern was analyzed by calculating ramification (R-) matrices, whose elements represent the probabilities of branching at various levels of a ductal tree. The R-matrix elements were used to classify the analyzed cases into those with and without radiological findings. The classification accuracy was estimated using the radiologists' reports as ground truth. An ROC analysis was performed to assess the classification accuracy. The classification of nine MR autogalactograms from eight women yielded an area under the ROC curve of A=0.73. This performance is comparable with our previous analysis of 25 2D x-ray galactograms from 15 women (A=0.88). The observed results support our hypothesis that a relationship exists between the topological properties of the breast ductal network and the underlying breast pathology.
Registration Poster Session
Validation of elastic registration algorithms based on adaptive irregular grids for medical applications
Astrid Franz, Ingwer C. Carlsen, Steffen Renisch, et al.
Elastic registration of medical images is an active field of current research. Registration algorithms have to be validated in order to show that they fulfill the requirements of a particular clinical application. Furthermore, validation strategies compare the performance of different registration algorithms and can hence judge which algorithm is best suited for a target application. In the literature, validation strategies for rigid registration algorithms have been analyzed. For a known ground truth they assess the displacement error at a few landmarks, which is not sufficient for elastic transformations described by a huge number of parameters. Hence we consider the displacement error averaged over all pixels in the whole image or in a region of interest of clinical relevance. Using artificially but realistically deformed images of the application domain, we use this quality measure to analyze an elastic registration based on transformations defined on adaptive irregular grids for the following clinical applications: Magnetic Resonance (MR) images of freely moving joints for orthopedic investigations, thoracic Computed Tomography (CT) images for the detection of pulmonary embolisms, and transmission images as used for the attenuation correction and registration of independently acquired Positron Emission Tomography (PET) and CT images. The definition of a region of interest allows the analysis of the registration accuracy to be restricted to clinically relevant image areas. The behaviour of the displacement error as a function of the number of transformation control points and their placement can be used to identify the best strategy for the initial placement of the control points.
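A minimal sketch of the quality measure described above, the displacement error averaged over all voxels or over a region of interest, assuming ground-truth and estimated displacement fields stored as NumPy arrays (the array layout is an assumption made for illustration):

```python
import numpy as np

def mean_displacement_error(u_true, u_est, roi_mask=None):
    """Average Euclidean error between a ground-truth and an estimated
    displacement field, optionally restricted to a region of interest.

    u_true, u_est : arrays of shape (..., ndim) holding per-voxel displacements
    roi_mask      : optional boolean array selecting clinically relevant voxels
    """
    err = np.linalg.norm(u_true - u_est, axis=-1)
    if roi_mask is not None:
        err = err[roi_mask]
    return float(err.mean())
```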
A strategy based on maximum spanning trees to stitch together microscope images
Assembling partial views is an attractive means to extend the field of view of microscope images. In this paper, we propose a semi-automated solution to achieve this goal. Its intended audience is the microscopist who desires to scan a large area while acquiring a series of partial views, but who does not wish to, or cannot, plan the path of the scan. In a first stage, this freedom is dealt with by interactive manipulation of the resulting partial views, or tiles. In a second stage, the position of the tiles is refined by a fully automatic pairwise registration process. The contribution of this paper is a strategy that determines which pairs of tiles to register, among all possible pairs. The central tenet of our proposed strategy is that two tiles that happen to possess a large common area will register with higher accuracy than two tiles with a smaller overlap. Our strategy is then to minimize the number of pairwise registrations while maximizing the global amount of overlap, and while ensuring that the local registration efforts are sufficient to link all tiles together to yield a global mosaic. By stating this requirement in a graph-theoretic context, we are able to derive the optimal solution thanks to Kruskal's algorithm.
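The graph-theoretic core of this strategy, keeping only the edges of a maximum spanning tree over the tile-overlap graph, can be sketched with Kruskal's algorithm run on edges sorted by decreasing overlap. The tile count and overlap values below are hypothetical placeholders.

```python
def maximum_spanning_tree(n_tiles, overlaps):
    """Kruskal's algorithm on edges sorted by descending weight.

    overlaps : dict mapping (i, j) tile-index pairs to their estimated overlap area.
    Returns the list of (i, j, weight) edges that link all tiles with maximal
    total overlap, i.e. the tile pairs actually sent to pairwise registration.
    """
    parent = list(range(n_tiles))

    def find(x):                       # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for (i, j), w in sorted(overlaps.items(), key=lambda kv: -kv[1]):
        ri, rj = find(i), find(j)
        if ri != rj:                   # keep the edge only if it links two components
            parent[ri] = rj
            tree.append((i, j, w))
    return tree

# Example: 4 tiles with hypothetical overlap areas (in pixels).
pairs = {(0, 1): 5000, (1, 2): 800, (0, 2): 3000, (2, 3): 4200, (1, 3): 100}
print(maximum_spanning_tree(4, pairs))
```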
Weighted medical image registration with automatic mask generation
Hanno Schumacher, Astrid Franz, Bernd Fischer
Registration of images is a crucial part of many medical imaging tasks. The problem is to find a transformation which aligns two given images. The resulting displacement fields may, for example, be described as a linear combination of pre-selected basis functions (parametric approach), or, as in our case, they may be computed as the solution of an associated partial differential equation (non-parametric approach). Here, the underlying functional consists of a smoothness term ensuring that the transformation is anatomically meaningful and a distance term describing the similarity between the two images. To be successful, the registration scheme has to be tuned for the problem under consideration. One way of incorporating user knowledge is the use of weighting masks in the distance measure, thereby enhancing or hiding dedicated image parts. In general, these masks are based on a given segmentation of both images. We present a method which generates a weighting mask for the second image, given the mask for the first image. The scheme is based on active contours and makes use of a gradient vector flow method. As an example application, we consider the registration of abdominal computed tomography (CT) images used for radiation therapy. The reference image is acquired well ahead of time and is used for setting up the radiation plan. The second image is taken just before the treatment and its processing is time-critical. We show that the proposed automatic mask generation scheme yields results similar to those of the approach based on a pre-segmentation of both images. Hence, for time-critical applications such as intra-surgery registration, we are able to significantly speed up the computation by avoiding a pre-segmentation of the second image.
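The role of a weighting mask in the distance term can be illustrated with a short sketch of a weighted sum-of-squared-differences measure; this is a generic example, not the particular distance measure used in the paper.

```python
import numpy as np

def weighted_ssd(reference, moving_warped, mask):
    """Weighted sum-of-squared-differences distance; per-pixel mask values in
    [0, 1] enhance or hide dedicated image parts in the similarity term."""
    diff = reference.astype(float) - moving_warped.astype(float)
    return float(np.sum(mask * diff**2))
```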
Dedicated registration for DCE MRI mammography
Steffen Renisch, Thomas Bülow, Ingwer C. Carlsen, et al.
Dynamic contrast enhanced (DCE) MRI mammography is currently receiving much interest in clinical research. It bears the potential to discriminate between benign and malignant lesions by analysis of the contrast uptake of the lesion. However, a registration of the individual images of a contrast-uptake series is crucial in order to avoid motion artefacts in the uptake curves, which could affect the diagnosis. It is on the other hand well known from the registration literature that a registration that uses a standard similarity measure (e.g. mean sum of squared differences, cross-correlation) may cause artefacts if contrast agent is taken up between the images to be registered. Thus we propose a registration on the basis of an application-specific similarity measure that explicitly uses features of the contrast uptake. We report initial results using this registration method.
A linear programming based algorithm for determining corresponding point tuples in multiple vascular images
Vikas Singh, Jinhui Xu, Kenneth R. Hoffmann, et al.
Multi-view imaging is the primary modality for high-spatial-resolution imaging of the vasculature. The 3D vascular structure can be reconstructed if the imaging geometries are determined using known corresponding point-pairs (or k-tuples) in two or more images. Because the accuracy improves with more input corresponding point-pairs, we propose a new technique to automatically determine corresponding point-pairs in multi-view (k-view) images from 2D vessel image centerlines. We first formulate the problem as a multi-partite graph-matching problem. Each 2D centerline point is a vertex; each individual graph contains all vessel points (vertices) in an image. The weight ('cost') of an edge between vertices (in different graphs) is the shortest distance between the points' respective projection lines. Using this construction, a universe of mappings (k-tuples) is created, each k-tuple having k vertices (one from each image). A k-tuple's weight is the sum of the pairwise 'costs' of its members. Ideally, a set of such mappings is desired that preserves the ordering of points along the vessel and minimizes an appropriate global cost function, such that all vertices (in all graphs) participate in at least one mapping. We formulate this problem as a special case of the well-studied Set-Cover problem with additional constraints. The equivalent linear program is then solved, and randomized-rounding techniques are used to yield a feasible set of mappings. Our algorithm is efficient and yields a theoretical quality guarantee. In simulations, the correct matching is achieved in ~98% of cases, even with high input error. In clinical data, apparently correct matching is achieved in >90% of cases. This method should provide the basis for improving the calculated 3D vasculature from multi-view data-sets.
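The general recipe of solving an LP relaxation of weighted set cover and then applying randomized rounding can be sketched as below. This is a generic illustration only: it omits the paper's ordering constraints and additional structure, and the toy instance and function names are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def lp_set_cover(costs, sets, n_elements, seed=0):
    """LP relaxation of weighted set cover followed by randomized rounding.

    costs : length-m array of tuple costs
    sets  : list of m iterables, the element indices each candidate tuple covers
    """
    rng = np.random.default_rng(seed)
    m = len(costs)
    # Coverage constraints: every element must be covered at least once.
    A = np.zeros((n_elements, m))
    for j, members in enumerate(sets):
        for e in members:
            A[e, j] = 1.0
    # Minimize total cost subject to A @ x >= 1, 0 <= x <= 1.
    res = linprog(costs, A_ub=-A, b_ub=-np.ones(n_elements),
                  bounds=[(0, 1)] * m, method="highs")
    x = res.x

    # Randomized rounding: a few independent rounds of coin flips with
    # probabilities given by the fractional LP solution.
    chosen = np.zeros(m, dtype=bool)
    for _ in range(2 * max(1, int(np.ceil(np.log(n_elements + 1))))):
        chosen |= rng.random(m) < x
        if (A[:, chosen].sum(axis=1) >= 1).all():
            break
    return np.flatnonzero(chosen)

# Tiny example: 4 vessel points covered by 3 candidate tuples.
sets = [(0, 1), (1, 2), (2, 3)]
costs = np.array([1.0, 5.0, 1.0])
print(lp_set_cover(costs, sets, n_elements=4))   # expect tuples 0 and 2
```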
Non-rigid registration for fusion of carotid vascular ultrasound and MRI volumetric datasets
R. C. Chan, S. Sokka, D. Hinton, et al.
In carotid plaque imaging, MRI provides exquisite soft-tissue characterization, but lacks the temporal resolution for tissue strain imaging that real-time 3D ultrasound (3DUS) can provide. On the other hand, real-time 3DUS currently lacks the spatial resolution of carotid MRI. Non-rigid alignment of ultrasound and MRI data is essential for integrating complementary morphology and biomechanical information for carotid vascular assessment. We assessed non-rigid registration for fusion of 3DUS and MRI carotid data based on deformable models which are warped to maximize voxel similarity. We performed validation in vitro using isolated carotid artery imaging. These samples were subjected to soft-tissue deformations during 3DUS and were imaged in a static configuration with standard MR carotid pulse sequences. Registration of the source ultrasound sequences to the target MR volume was performed, and the mean absolute distance between fiducials within the ultrasound and MR datasets was measured to determine inter-modality alignment quality. Our results indicate that registration errors on the order of 1mm are possible in vitro despite the low resolution of current-generation 3DUS transducers. Registration performance should be further improved with the use of higher-frequency 3DUS prototypes, and efforts are underway to test those probes for in vivo 3DUS carotid imaging.
Evaluation of similarity measures for 3D/2D image registration
Darko Škerl, Boštjan Likar, Franjo Pernuš
Several 3D/2D registration algorithms for image-guided therapy have been introduced in the past years. Recently, we have proposed a method which first reconstructs a 3D image from a few intraoperative 2D X-ray images and then establishes the rigid transformation between the preoperative 3D CT or MR image and the 3D reconstructed image. The similarity measure applied in this registration method should be able to cope, among others, with the low quality of the reconstructed image. Using the recently proposed similarity measure evaluation protocol, we have evaluated the behavior of five similarity measures. The measures have been evaluated with respect to: a) preoperative imaging modalities (CT and MR); b) number of 2D images used for reconstruction; and c) number of reconstruction iterations. Increasing the number of 2D projections or reconstruction iterations improves the accuracy but slightly worsens the robustness. We have shown that almost all similarity measures have better properties if the optimal parameters are chosen. The most appropriate similarity measure for this type of registration is the asymmetric multi-feature mutual information.
Deformable registration using scale space keypoints
In this paper, we describe a new methodology for keypoint-based affine and deformable medical image registration. This fast and computationally efficient method is automatic and does not rely on segmentation of images. The keypoint pixels used in this technique are extreme points in the scale space and are characterized by descriptor vectors which summarize the intensity gradient profile of the surrounding pixels. For each of the keypoints in the scene image, a corresponding keypoint is identified in the model image using a feature-space nearest-neighbor criterion. For deformable registration, B-splines are used to extrapolate a regular deformation grid for all of the pixels in the scene image based on the relative displacement vectors of the corresponding pairs. This approach results in a fast and accurate registration of brain MRI images (an average target registration error of less than 2mm was achieved). We have also studied the affine registration problem in liver ultrasound and brain MRI images and have obtained acceptable registrations using a least-squares solution for the affine parameters based on only around 30 corresponding keypoint pairs.
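The feature-space nearest-neighbor matching step can be sketched with a k-d tree over descriptor vectors. The distance-ratio test used below to discard ambiguous matches is an added assumption for illustration; it is not stated in the abstract.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_keypoints(scene_desc, model_desc, ratio=0.8):
    """Nearest-neighbour matching of keypoint descriptors with a simple
    distance-ratio test to reject ambiguous correspondences.

    scene_desc, model_desc : (n, d) arrays of descriptor vectors
    Returns an array of (scene_index, model_index) pairs.
    """
    tree = cKDTree(model_desc)
    dist, idx = tree.query(scene_desc, k=2)       # two closest model descriptors
    keep = dist[:, 0] < ratio * dist[:, 1]        # keep only unambiguous matches
    return np.column_stack([np.flatnonzero(keep), idx[keep, 0]])
```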
Automatic sub-volume registration by probabilistic random search
Jingfeng Han, Min Qiao, Joachim Hornegger, et al.
Registration of an individual's image data set to an anatomical atlas provides valuable information to the physician. In many cases, the individual image data sets are partial data, which may be mapped to one part or one organ of the entire atlas data. Most of the existing intensity-based image registration approaches are designed to align images of the entire view. When they are applied to registration with partial data, a manual pre-registration is usually required. This paper proposes a fully automatic approach to the registration of incomplete image data to an anatomical atlas. The spatial transformations can be modeled as arbitrary parametric functions. The proposed method is built upon a random search mechanism, which makes it possible to find the optimal transformation globally even when the initialization is not ideal. It works more reliably than existing methods for partial-data registration because it successfully overcomes the local optimum problem. With appropriate similarity measures, this framework is applicable to both mono-modal and multi-modal registration problems with partial data. The contribution of this work is the description of the mathematical framework of the proposed algorithm and the implementation of the related software. The medical evaluation on MRI data and the comparison with different existing registration methods show the feasibility and superiority of the proposed method.
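A global random search over transformation parameters can be sketched as below. This is a plain uniform-sampling loop for illustration only; the paper's probabilistic random search mechanism and its similarity measures are not reproduced here.

```python
import numpy as np

def random_search_register(similarity, bounds, n_iter=2000, seed=0):
    """Toy global random search over transformation parameters.

    similarity : callable mapping a parameter vector to a similarity value
                 (higher is better), e.g. mutual information of the warped pair
    bounds     : (n_params, 2) array of lower/upper parameter limits
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    best_p, best_s = None, -np.inf
    for _ in range(n_iter):
        p = rng.uniform(lo, hi)        # global, initialization-free draw
        s = similarity(p)
        if s > best_s:
            best_p, best_s = p, s
    return best_p, best_s
```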
Physics-based constraints for correction of geometric distortions in gradient echo EP images via nonrigid registration
Geometric distortion is a well-recognized problem in echo planar (EP) images. One strategy for the correction of these distortions is to register an EP image to a reference image, such as a high-resolution anatomical MR image in which geometric distortion is minimal. Non-rigid registration methods, which warp images locally, have been used for this purpose. While a physics-based distortion model for spin-echo (SE) EP images has been developed and used as a constraint in nonrigid registration algorithms, such a model for gradient-echo (GE) EP images has not been investigated. Here, we propose to use a physics-based model for GE EP images that incorporates a term that takes dephasing into consideration. To evaluate this technique, we generate a distortion-free EP image using an MR simulator we have developed. We then distort the image and modify its intensity values using a real field map and an analytical expression that includes dephasing. The geometric distortion computed from the field map is used as the ground truth to which the deformation fields obtained with our method are compared. We show that including the dephasing term improves the results.
An ITK framework for deterministic global optimization for medical image registration
Similarity metric optimization is an essential step in intensity-based rigid and nonrigid medical image registration. For clinical applications, such as image guidance of minimally invasive procedures, registration accuracy and efficiency are prime considerations. In addition, clinical utility is enhanced when registration is integrated into image analysis and visualization frameworks, such as the popular Insight Toolkit (ITK). ITK is an open source software environment increasingly used to aid the development, testing, and integration of new imaging algorithms. In this paper, we present a new ITK-based implementation of the DIRECT (Dividing Rectangles) deterministic global optimization algorithm for medical image registration. Previously, it has been shown that DIRECT improves the capture range and accuracy for rigid registration. Our ITK class also contains enhancements over the original DIRECT algorithm by improving stopping criteria, adaptively adjusting a locality parameter, and by incorporating Powell's method for local refinement. 3D-3D registration experiments with ground-truth brain volumes and clinical cardiac volumes show that combining DIRECT with Powell's method improves registration accuracy over Powell's method used alone, is less sensitive to initial misorientation errors, and, with the new stopping criteria, facilitates adequate exploration of the search space without expending expensive iterations on non-improving function evaluations. Finally, in this framework, a new parallel implementation for computing mutual information is presented, resulting in near-linear speedup with two processors.
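The paper describes an ITK (C++) implementation; purely as a conceptual illustration of the two-stage idea, global DIRECT exploration followed by Powell refinement, the following Python sketch uses SciPy's own DIRECT and Powell optimizers (available in recent SciPy releases) on a placeholder cost function standing in for a negated similarity metric.

```python
import numpy as np
from scipy.optimize import direct, minimize

def cost(params):
    # Placeholder: negated similarity between a fixed and a transformed moving image.
    return float(np.sum((params - np.array([2.0, -1.0, 0.5]))**2))

bounds = [(-10, 10), (-10, 10), (-10, 10)]   # search range per transform parameter

# Global stage: DIRECT deterministically explores the whole search space.
global_res = direct(cost, bounds, maxfun=2000)

# Local stage: Powell's method refines the DIRECT result.
local_res = minimize(cost, global_res.x, method="Powell")
print(global_res.x, local_res.x)
```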
Optical flow based interpolation of temporal image sequences
Jan Ehrhardt, Dennis Säring, Heinz Handels
Modern tomographic imaging devices enable the acquisition of temporal image sequences. In our project, we study cine MRI sequences of patients with myocardial infarction. Because the sequences are acquired with different temporal resolutions, a temporal interpolation is necessary to compare images at predefined phases of the cardiac cycle. This paper presents an interpolation method for temporal image sequences. We derive our interpolation scheme from the optical flow equation. The spatiotemporal velocity field between the images is determined using an optical flow based registration method. Here, an iterative algorithm is applied, using the spatial and temporal image derivatives and a spatiotemporal smoothing step. Afterwards, the calculated velocity field is used to generate an interpolated image at the desired time by averaging intensities between corresponding points. The behavior and capability of the algorithm are demonstrated on synthetic image examples. Furthermore, quantitative measures are calculated to compare this optical flow based interpolation method with linear interpolation and shape-based interpolation in 5 cine MRI data sets. Results indicate that the presented method outperforms both linear and shape-based interpolation significantly.
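Once a flow field between two frames is available, an intermediate frame can be synthesized by warping both frames toward the desired time point and averaging intensities between corresponding points. The sketch below assumes the flow has already been estimated and uses a simple symmetric warping convention; it is a simplified stand-in for the scheme in the paper.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def interpolate_frame(img0, img1, flow, alpha):
    """Toy temporal interpolation between two 2D frames.

    flow  : (2, H, W) displacement field taking frame-0 points to frame 1
    alpha : time position between the frames, 0 -> img0, 1 -> img1
    """
    grid = np.indices(img0.shape).astype(float)             # (2, H, W) pixel grid
    # A point x of the intermediate frame is assumed to come from
    # x - alpha*flow in frame 0 and to move on to x + (1-alpha)*flow in frame 1.
    src0 = map_coordinates(img0, grid - alpha * flow, order=1)
    src1 = map_coordinates(img1, grid + (1.0 - alpha) * flow, order=1)
    # Average intensities between corresponding points.
    return (1.0 - alpha) * src0 + alpha * src1
```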
Modeling lung motion using consistent image registration in four-dimensional computed tomography for radiation therapy
Wei Lu, Joo Hyun Song, Gary E. Christensen, et al.
Respiratory motion is a significant source of error in conformal radiation therapy for the thorax and upper abdomen. Four-dimensional computed tomography (4D CT) has been proposed to reduce the uncertainty caused by internal respiratory organ motion. A 4D CT dataset is retrospectively reconstructed at various stages of a respiratory cycle. An important tool for 4D treatment planning is deformable image registration. An inverse consistent image registration is used to model lung motion from one respiratory stage to another during a breathing cycle. This diffeomorphic registration jointly estimates the forward and reverse transformations, providing more accurate correspondence between the two images. Registration results and modeled motions in the lung are shown for three example respiratory stages. The results demonstrate that the consistent image registration satisfactorily models the large motions in the lung, providing a useful tool for 4D planning and delivery.
Enhancing skeletal features in digitally reconstructed radiographs
Dongshan Fu, Gopinath Kuduvalli
Generation of digitally reconstructed radiographs (DRR) is a critical part of 2D-3D image registration, which is utilized in patient position alignment for image-guided radiotherapy and radiosurgery. The DRRs are generated from a pre-operative CT scan and used as the references to match the X-ray images for determining the change of patient position. Skeletal structures are the primary image features used to facilitate the registration between the DRR and X-ray images. In this paper, we present a method to enhance skeletal features of spinal regions in DRRs. The attenuation coefficient at each voxel is first calculated by applying an exponential transformation to the original attenuation coefficient in the CT scan. This is a preprocessing step performed prior to DRR generation. The DRR is then generated by integrating the newly calculated attenuation coefficients along the ray that connects the X-ray source and the pixel in the DRR. Finally, the DRR is further enhanced using a weighted top-hat filter. During the entire process, because no original CT information is lost, even small skeletal features contributed by the low-intensity part of the CT data are preserved in the enhanced DRRs. Experiments on clinical data were conducted to compare the image quality of DRRs with and without enhancement. The results showed that the image contrast of skeletal features in the enhanced DRRs is significantly improved. This method has the potential to be applied for more accurate and robust 2D-3D image registration.
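The three steps, exponential transformation of the attenuation values, ray integration, and weighted top-hat filtering, can be sketched as follows. The parallel-ray projection (summing along one axis rather than casting divergent source-to-pixel rays), the parameter values, and the weighting of the top-hat term are simplifying assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import white_tophat

def enhanced_drr(ct_mu, alpha=0.02, tophat_size=15, tophat_weight=0.5):
    """Toy DRR with skeletal enhancement, loosely following the abstract.

    ct_mu : 3D array of attenuation coefficients; alpha, tophat_size and
            tophat_weight are illustrative parameters, not values from the paper.
    """
    boosted = np.exp(alpha * ct_mu) - 1.0      # exponential transform emphasizes dense (bony) voxels
    drr = boosted.sum(axis=0)                  # parallel-ray integration along one axis
    return drr + tophat_weight * white_tophat(drr, size=tophat_size)
```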
Fully automatic hybrid registration method based on point feature detection without user intervention
Bang-Bon Koo, Jong-Min Lee, June-Sic Kim, et al.
In earlier work (Kim, J.S., MBEC, 2003), we demonstrated a registration method with a non-linear transformation that uses both intensity similarity and feature similarity. Although that approach achieved a good match of the global brain shape and of the feature-defined regions, it requires user intervention to define an appropriate and sufficient number of features. Because manual delineation of regions of interest for a sufficient number of features is very time-consuming and introduces intra- and inter-rater variability, we propose a fully automatic hybrid registration based on an automatic feature-definition method. Automatic feature definition was performed on the cortical surface obtained from CLASP (Kim, J.S., NeuroImage, 2005) using a cortical surface matching algorithm (Robbins, S., MIA, 2004) and then applied to the hybrid registration. The objective of this work is to develop a fully automated hybrid registration method with improved performance compared to previous automated registration methods. In our results, the proposed scheme performed efficiently, maintaining the strengths of hybrid registration without any user intervention.
A variational approach to spatially dependent non-rigid registration
Florian Jäger, Jingfeng Han, Joachim Hornegger, et al.
In this paper we propose a new method for non-rigid registration of PET/CT datasets incorporating prior knowledge about the rigidity of regions within the PET volumes into the matching process. State-of-the-art medical image registration approaches usually assume that the whole image domain is associated with a homogeneous deformation property, so that bone structure and soft tissue have the same stiffness, for instance. This assumption, however, is invalid in the majority of cases. In many applications the deformation properties can be estimated automatically by a segmentation step beforehand. The presented non-rigid registration method integrates knowledge about the tissue directly into the deformation field computation. For this reason, no additional post-processing steps, like filtering of the deformation field, are required. To integrate the tissue constraints, the regularizer is replaced by a novel spatially dependent smoother. Depending on the location within the image, the smoother is able to explicitly adjust the rigidity. Thus, different tissue classes can be treated in the registration process. To pass the stiffness coefficients to the algorithm, an additional mask image is used. The registration results are illustrated on synthetic data first to give a good intuition about the effectiveness of the proposed method. Finally, we illustrate the improvement of the registration using real clinical data. It is shown that the mono-modal registration of PET images yields more reasonable results using a spatially dependent regularizer constraining the deformations of regions with high tracer concentration than using a normal curvature regularizer. Furthermore, the method is evaluated on multi-modal PET/CT registration problems.
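A crude stand-in for a spatially dependent smoother is shown below: a stiffness map derived from a segmentation blends, per voxel, a heavily smoothed (near-rigid) displacement update with the raw elastic one. This is only an intuition-building sketch, not the variational regularizer developed in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spatially_weighted_smooth(displacement, stiffness, sigma=3.0):
    """Blend an unsmoothed and a smoothed displacement field per voxel.

    displacement : (ndim, ...) displacement field components
    stiffness    : scalar field in [0, 1] from a prior segmentation mask;
                   1 keeps the heavily smoothed (near-rigid) field,
                   0 keeps the raw elastic update.
    """
    smoothed = np.stack([gaussian_filter(c, sigma) for c in displacement])
    return stiffness * smoothed + (1.0 - stiffness) * displacement
```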
A practical salient region feature based 3D multi-modality registration method for medical images
Dieter A. Hahn, Gabriele Wolz, Yiyong Sun, et al.
We present a novel representation of 3D salient region features and its integration into a hybrid rigid-body registration framework. We adopt the scale, translation and rotation invariance properties of those intrinsic 3D features to estimate a transform between underlying mono- or multi-modal 3D medical images. Our method combines advantageous aspects of both feature- and intensity-based approaches and consists of three steps: an automatic extraction of a set of 3D salient region features on each image, a robust estimation of correspondences, and their sub-pixel accurate refinement with outlier elimination. We propose a region-growing based approach for the extraction of 3D salient region features, a solution to the problem of feature clustering, and a reduction of the correspondence search space complexity. Results of the developed algorithm are presented for both mono- and multi-modal intra-patient 3D image pairs (CT, PET and SPECT) that have been acquired for change detection, tumor localization, and time-based intra-person studies. The accuracy of the method is clinically evaluated by a medical expert with an approach that measures the distance between a set of selected corresponding points consisting of both anatomical and functional structures or lesion sites. This demonstrates the robustness of the proposed method to image overlap, missing information and artefacts. We conclude by discussing potential medical applications and possibilities for integration into a non-rigid registration framework.
A framework for parameter optimization in mutual information (MI)-based registration algorithms
In this paper, we present a framework that can be used to set optimized parameter values when performing image registration with mutual information as the metric to be maximized. Our experiment details these steps for the registration of X-ray Computed Tomography (CT) images with Positron Emission Tomography (PET) images. The selection of the different parameters that influence the mutual information between two images is crucial for both the accuracy and the speed of registration. These implementation issues need to be handled in an orderly fashion by designing experiments over their operating ranges. The conclusions from this study are important for obtaining allowable parameter ranges for fusion software.
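One of the central quantities being tuned is the mutual information itself, typically estimated from a joint intensity histogram whose bin count is one of the parameters under study. A minimal sketch, with an illustrative default bin count:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information of two images estimated from their joint intensity
    histogram; the bin count is an illustrative default, not a recommended value."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)      # marginal of image A
    p_b = p_ab.sum(axis=0, keepdims=True)      # marginal of image B
    nz = p_ab > 0                              # avoid log(0)
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))
```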
Seed based registration for intraoperative brachytherapy dosimetry: a comparison of methods
Yi Su, Brian J. Davis M.D., Michael G. Herman, et al.
Several approaches for registering a subset of imaged points to their true origins were analyzed and compared for seed-based TRUS-fluoroscopy registration. The methods include the Downhill Simplex method (DS), Powell's method (POW), the Iterative Closest Point (ICP) method, the Robust Point Matching method (RPM) and variants of RPM. Several modifications were made to the standard RPM method to improve its performance. One hundred simulations were performed for each combination of noise level, seed detection rate and number of spurious points, and the registration accuracy was evaluated and compared. The noise level ranges from 0 to 5mm, the seed detection ratio ranges from 0.2 to 0.6, and the number of spurious points ranges from 0 to 20. An actual clinical post-implant dataset from permanent prostate brachytherapy was used for the simulation study. The experiments provided evidence that our modified RPM method is superior to the other methods, especially when there are many outliers. The RPM-based method produced the best results at all noise levels and seed detection rates. The DS-based method performed reasonably well, especially at low noise levels without spurious points. There was no significant performance difference between the standard RPM and our modified RPM methods without spurious points. The modified RPM methods outperformed the standard RPM method with a large number of spurious points. The registration error was within 2mm, even with 20 outlier points and a noise level of 3mm.
Image registration using shape-constrained mutual information and genetic algorithms
Xiaohui Yuan, Gloria Chi-Fishman
We present a novel, shape-constrained mutual information-based image registration method using a Genetic Algorithm (GA) for aligning 2D MR images of the in vivo human tongue. By restricting the computation of Mutual Information (MI) to the segmented object, we are able to register images in a short time without compromising accuracy. In addition, due to the employment of the GA, the robustness of this method is greatly improved, such that good registration results can be achieved given a large initial transformation difference. Hence, this method eliminates the failure of matching caused by partial object overlap. Our experiments clearly demonstrate the robustness and accuracy of the method.
Automatic lung nodule matching for the follow-up in temporal chest CT scans
We propose a fast and robust registration method for matching lung nodules in temporal chest CT scans. Our method is composed of four stages. First, the lungs are extracted from the chest CT scans by an automatic segmentation method. Second, the gross translational mismatch is corrected by optimal cube registration. This initial registration does not require extracting any anatomical landmarks. Third, the initial alignment is refined step by step by iterative surface registration. To evaluate the distance measure between surface boundary points, a 3D distance map is generated by narrow-band distance propagation, which drives fast and robust convergence to the optimal location. Fourth, nodule correspondences are established by the pairs with the smallest Euclidean distances. The results of pulmonary nodule alignment for twenty patients are reported on a per-center-of-mass point basis using the average Euclidean distance (AED) error between corresponding nodules of initial and follow-up scans. The average AED error for the twenty patients is significantly reduced from 30.0mm to 4.7mm by our registration. Experimental results show that our registration method aligns the lung nodules much faster than conventional ones using a distance measure. The accurate and fast results of our method should be useful for the radiologist's evaluation of pulmonary nodules on chest CT scans.
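Two of the stages lend themselves to a short sketch: a 3D distance map to the lung surface used as a fast distance measure, and the final nearest-centroid pairing of nodules. SciPy's exact Euclidean distance transform stands in for the narrow-band propagation described above, and the array conventions are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt
from scipy.spatial import cKDTree

def surface_distance_map(lung_mask):
    """Euclidean distance of every voxel to the lung surface, given a boolean
    lung mask; such a map gives a cheap point-to-surface distance measure."""
    lung_mask = lung_mask.astype(bool)
    surface = lung_mask & ~binary_erosion(lung_mask)   # one-voxel-thick boundary
    return distance_transform_edt(~surface)            # distance to nearest surface voxel

def match_nodules(nodules_initial, nodules_followup):
    """Pair each baseline nodule centroid with the closest follow-up centroid
    (after registration) and report the Euclidean distances."""
    tree = cKDTree(nodules_followup)
    dist, idx = tree.query(nodules_initial, k=1)
    return idx, dist
```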
Distance transform for automatic dermatologic images composition
C. Grana, G. Pellacani M.D., S. Seidenari M.D., et al.
In this paper we focus on the problem of automatically registering dermatological images: although different acquisition products are available, most of them share the problem of a limited field of view on the skin. A possible solution is the composition of multiple takes of the same lesion with digital software, such as that used for panorama image creation. In this work, the Harris corner detector is used to automatically select matching points, and the RANSAC method is employed to cope with outlier pairs. Projective mapping is then used to match the two images. Given a set of correspondence points, Singular Value Decomposition was used to compute the transform parameters. At this point the two images need to be blended together. One initial assumption is often implicitly made: that the aim is to merge two rectangular images. But when merging occurs between more than two images iteratively, this assumption fails. To cope with differently shaped images, we employed the Distance Transform and provided a weighted merging of images. Different tests were conducted with dermatological images, both with a standard rectangular frame and with atypical shapes, for example a ring due to the objective and lens selection. The successive composition of different circular images with other blending functions, such as the hat function, does not correctly remove the border, and residuals of the circular mask remain visible. By applying Distance Transform blending, the result produced is insensitive to the outer shape of the image.
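The distance-transform weighting can be sketched for two overlapping tiles: each tile's weight at a pixel is its distance to the tile border, so contributions fade smoothly toward the edges regardless of the tile's outer shape. The function below is a generic illustration, not the paper's exact blending pipeline.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def blend_with_distance_transform(img_a, mask_a, img_b, mask_b):
    """Feather two overlapping, arbitrarily shaped tiles.

    img_a, img_b   : float images already warped to a common mosaic frame
    mask_a, mask_b : boolean masks of valid (non-empty) pixels per tile
    """
    w_a = distance_transform_edt(mask_a)    # distance to the border of tile A
    w_b = distance_transform_edt(mask_b)    # distance to the border of tile B
    total = w_a + w_b
    total[total == 0] = 1.0                 # avoid division by zero outside both tiles
    return (w_a * img_a + w_b * img_b) / total
```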
Registration and 3D visualization of large microscopy images
Kishore Mosaliganti, Tony Pan, Richard Sharp, et al.
Inactivation of the retinoblastoma gene in mouse embryos causes tissue infiltrations into critical sections of the placenta, which have been shown to affect fetal survivability. Our collaborators in cancer genetics are extremely interested in examining the three-dimensional nature of these infiltrations given a stack of two-dimensional light microscopy images. Three sets of wildtype and mutant placentas were sectioned serially and digitized using a commercial light microscopy scanner. Each individual placenta dataset consisted of approximately 1000 images totaling 700 GB in size, which were registered into a volumetric dataset using the National Library of Medicine's (NIH/NLM) Insight Segmentation and Registration Toolkit (ITK). This paper describes our method for image registration to aid in volume visualization of tissue-level intermixing for both wildtype and Rb- specimens. The registration process faces many challenges arising from the large image sizes, damage during sectioning, staining gradients both within and across sections, and background noise. These issues limit the direct application of standard registration techniques due to frequent convergence to local solutions. In this work, we develop a mixture of automated and semi-automated enhancements with ground-truth validation for the mutual information-based registration algorithm. Our final volume renderings clearly show tissue intermixing differences between wildtype and Rb- specimens which are not obvious prior to registration.
Registration of knee joint surfaces for the in vivo study of joint injuries based on magnetic resonance imaging
Rita W. T. Cheng, Ayman F. Habib, Richard Frayne, et al.
In-vivo quantitative assessments of joint conditions and health status can help to increase understanding of the pathology of osteoarthritis, a degenerative joint disease that affects a large population each year. Magnetic resonance imaging (MRI) provides a non-invasive and accurate means to assess and monitor joint properties, and has become widely used for diagnosis and biomechanics studies. Quantitative analyses and comparisons of MR datasets require accurate alignment of anatomical structures; thus image registration becomes a necessary procedure for these applications. This research focuses on developing a registration technique for MR knee joint surfaces to allow quantitative study of joint injuries and health status. It introduces a novel idea of translating techniques originally developed for geographic data in the field of photogrammetry and remote sensing to register 3D MR data. The proposed algorithm works with surfaces that are represented by randomly distributed points with no requirement of known correspondences. The algorithm performs matching locally by identifying corresponding surface elements, and solves for the transformation parameters relating the surfaces by minimizing the normal distances between them. This technique was used in three applications: 1) to register temporal MR data to verify the feasibility of the algorithm to help monitor diseases, 2) to quantify patellar movement with respect to the femur based on the transformation parameters, and 3) to quantify changes in contact area locations between the patellar and femoral cartilage at different knee flexion angles. The results indicate accurate registration, and the proposed algorithm can be applied to the in-vivo study of joint injuries with MRI.
Elastic registration using 3D ChainMail: application to virtual colonoscopy
We present an elastic registration algorithm based on local deformations modeled using cubic B-splines and controlled using 3D ChainMail. Our algorithm eliminates the appearance of folding artifacts and allows local rigidity and compressibility control independent of the image similarity metric being used. 3D ChainMail propagates large internal deformations between neighboring B-Spline control points, thereby preserving the topology of the transformed image without requiring the addition of penalty terms based on rigidity of the transformation field to the equation used to maximize image similarity. A novel application to virtual colonoscopy is presented where the algorithm is used to significantly improve cross-localization between colon locations in prone and supine CT images.
Automated feature-based alignment for 3D volume reconstruction of CLSM imagery
We address the problem of automated image alignment for 3D volume reconstruction from stacks of fluorescent confocal laser scanning microscope (CLSM) imagery acquired at multiple confocal depths from a sequence of consecutive slides. We focus on automated image alignment based on centroid and area shape features, obtained by solving the feature correspondence problem, also known as the Procrustes problem, in a highly deformable and ill-conditioned feature space. We then compare the image alignment accuracy of the fully automated method with the registration accuracy achieved by human subjects using a manual alignment method. Our work demonstrates significant benefits of automation for 3D volume reconstruction in terms of accuracy, consistency, and performance time. We also outline the limitations of fully automated and manual 3D volume reconstruction systems.
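Once corresponding centroid features on two consecutive slides are known, a classical Procrustes analysis aligns them up to translation, scaling and rotation. The sketch below uses SciPy's implementation on synthetic centroids with correspondences assumed already established; establishing those correspondences in a deformable, ill-conditioned feature space is the hard part addressed by the paper.

```python
import numpy as np
from scipy.spatial import procrustes

# Hypothetical centroid features (x, y) extracted from two consecutive slides.
rng = np.random.default_rng(0)
slide_a = rng.uniform(0, 512, size=(20, 2))
theta = np.deg2rad(7.0)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
slide_b = slide_a @ rot.T + np.array([30.0, -12.0]) + rng.normal(0, 1.5, size=(20, 2))

# Procrustes analysis removes translation, scale and rotation; the residual
# disparity can serve as an alignment-accuracy score.
aligned_a, aligned_b, disparity = procrustes(slide_a, slide_b)
print(disparity)
```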
Automatic 3D image registration using voxel similarity measurements based on a genetic algorithm
Wei Huang, John M. Sullivan Jr., Praveen Kulkarni, et al.
An automatic 3D non-rigid body registration system based upon the genetic algorithm (GA) process is presented. The system has been successfully applied to 2D and 3D situations using both rigid-body and affine transformations. Conventional optimization techniques and gradient search strategies generally require a good initial start location. The GA approach avoids the local minima/maxima traps of conventional optimization techniques. Based on the principles of Darwinian natural selection (survival of the fittest), the genetic algorithm has two basic steps: 1. Randomly generate an initial population. 2. Repeatedly apply the natural selection operation until a termination measure is satisfied. The natural selection process selects individuals based on their fitness to participate in the genetic operations, and it creates new individuals by inheritance from both parents, genetic recombination (crossover) and mutation. Once the termination criteria are satisfied, the optimum is selected from the population. The algorithm was applied to 2D and 3D magnetic resonance images (MRI). It does not require any preprocessing such as thresholding, smoothing, segmentation, or definition of base points or edges. To evaluate the performance of the GA registration, the results were compared with results of the Automatic Image Registration technique (AIR) and manual registration, which was used as the gold standard. Results showed that our GA implementation is a robust algorithm and gives results very close to the gold standard. A pre-cropping strategy was also discussed as an efficient preprocessing step to enhance the registration accuracy.
Improved 2D/3D registration robustness using local spatial information
Elena De Momi, Kort Eckman, Branislav Jaramaz, et al.
Xalign is a tool designed to measure implant orientation after joint arthroplasty by co-registering a projection of an implant model and a digitally reconstructed radiograph of the patient's anatomy with a post-operative x-ray. A mutual information based registration method is used to automate alignment. When using basic mutual information, the presence of local maxima can result in misregistration. To increase the robustness of registration, our research is aimed at improving the similarity function by modifying the information measure and incorporating local spatial information. A test dataset with known ground-truth parameters was created to evaluate the performance of this measure. A synthetic radiograph was generated first from a preoperative pelvic CT scan to act as the gold standard. The voxel weights used to generate the image were then modified and new images were generated with the CT rigidly transformed. The roll, pitch and yaw angles span a range of -10/+10 degrees, while the x, y and z translations range from -10mm to +10mm. These images were compared with the reference image. The proposed cost function correctly identified the pose in all tests and did not exhibit any local maxima which would slow or prevent locating the global maximum.
Vessel-based registration with application to nodule detection in thoracic CT scans
Volume registration is fundamental to multiple medical imaging algorithms. Specifically, non-rigid registration of thoracic CT scans taken at different time instances can be used to detect new nodules more reliably and assess the growth rate of existing nodules. Voxel-based registration techniques are generally sensitive to intensity variation and structural differences, which are common in CT scans due to partial volume effects and naturally occurring motion and deformations. The approach we propose in this paper is based on vessel tree extraction, which is then used to infer the complete volume registration. Vessels form unique features with good localization. Using the extracted vessel trees, a minimization process is used to estimate the motion vectors at vessels. Accurate motion vectors are obtained at vessel junctions, whereas vessel segments support only normal-component estimation. The obtained motion vectors are then interpolated to produce a dense motion field using thin plate splines. The proposed approach is evaluated on both real and synthetically deformed volumes. The obtained results are compared to several standard registration techniques. It is shown that by using the vessel structure, the proposed approach results in improved performance.
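The interpolation of sparse, vessel-anchored motion vectors into a dense field with thin-plate splines can be sketched with SciPy's radial basis function interpolator. The point layout, the use of RBFInterpolator, and the full-grid evaluation are illustrative assumptions; a practical implementation would evaluate on a coarse grid or in chunks.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def dense_motion_field(junction_points, motion_vectors, volume_shape):
    """Interpolate sparse motion vectors (e.g. estimated at vessel junctions)
    to a dense field with a thin-plate-spline radial basis function.

    junction_points : (n, 3) voxel coordinates where motion is known
    motion_vectors  : (n, 3) displacement at those points
    volume_shape    : tuple, shape of the (small, for this sketch) target grid
    """
    tps = RBFInterpolator(junction_points, motion_vectors,
                          kernel="thin_plate_spline")
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in volume_shape],
                                indexing="ij"), axis=-1).reshape(-1, 3)
    return tps(grid).reshape(volume_shape + (3,))
```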
Mathematical properties of information theoretic image similarity measures
Joint entropy, mutual information, and normalized mutual information are widely used image similarity measures in multimodality image registration and other problems that involve comparing images with arbitrary intensity relationships. While these image similarity measures have been successfully used in various applications, their mathematical properties have not been studied thoroughly. This paper analyzes several properties of practical interest of the three image similarity measures. It is shown that mutual information, despite its popularity, and joint entropy have a few undesirable properties. On the other hand, normalized mutual information does not suffer from these problems. The properties are proven mathematically, which renders the conclusions independent of image type, noise, and artifacts. The conclusions are in line with the results of previous experimental studies, in which normalized mutual information outperformed other information theoretic image similarity measures.
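For reference, the standard definitions of the three measures discussed above, written for a discrete joint intensity distribution p(a, b); the paper's exact notation may differ.

```latex
H(A) = -\sum_{a} p(a)\,\log p(a), \qquad
H(A,B) = -\sum_{a,b} p(a,b)\,\log p(a,b),

\mathrm{MI}(A,B) = H(A) + H(B) - H(A,B), \qquad
\mathrm{NMI}(A,B) = \frac{H(A) + H(B)}{H(A,B)}.
```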
Fast surface alignment for cardiac spatio-temporal modeling: application to ischemic cardiac shape modeling
Heng Huang, Li Shen, Rong Zhang, et al.
The visualization and comparison of local deformation from 3D image sequences is of critical importance in understanding the etiology of ischemic cardiac disease. In this paper we describe a framework that combines our previous fast spherical harmonic surface alignment algorithm with a new local surface reconstruction method to reconstruct the surface of the left ventricle (LV) in ischemic cardiac disease. Our new computational surface model makes it possible to extract valuable information about ischemic tissue behavior from the dynamic shape. We demonstrate our approach with experiments on cardiac MRI. A brief description of the motivation is put forth, along with an overview of the approach and some initial results.
Iterative deformable FEM model for nonrigid PET/MRI breast image coregistration
We implemented an iterative nonrigid registration algorithm to accurately combine functional (PET) and anatomical (MRI) images in 3D. Our method relies on a Finite Element Method (FEM) and a set of fiducial skin markers (FSM) placed on the breast surface. The method is applicable if the stress conditions in the imaged breast are virtually the same in PET and MRI. In the first phase, the displacement vectors of the corresponding FSM observed in MRI and PET are determined; then FEM is used to distribute the FSM displacements linearly over the entire breast volume. Our FEM model relies on the analogy between each of the orthogonal components of the displacement field and the temperature distribution in steady-state heat transfer (SSHT) in solids. The problem can thus be solved via standard heat-conduction FEM software, with the conductivity of the surface elements set much higher than that of the volume elements. After determining the displacements at all mesh nodes, the moving (MRI) breast volume is registered to the target (PET) breast volume using an image-warping algorithm. In the second iteration, to correct for any residual surface and volume misregistration, a refinement process is applied to the moving image, which was already grossly aligned with the target image in 3D using the FSM. To perform this process we determine a number of corresponding points on the moving and target image surfaces using a nearest-point approach. Then, after estimating the displacement vectors between the corresponding points on the surfaces, we apply our SSHT model again. We tested our model on twelve patients with suspicious breast lesions. By using lesions visible in both PET and MRI, we established that the target registration error is below two PET voxels. The surface registration error is comparable to the spatial resolution of PET.
Temporal registration of 2D x-ray mammogram using triangular B-splines finite element method (TBFEM)
Kexiang Wang, Ying He, Hong Qin, et al.
In this paper we develop a novel image processing technique to register two-dimensional temporal mammograms for effective diagnosis and therapy. Our registration framework is founded upon the triangular B-spline finite element method (TBFEM). In contrast to tensor-product B-splines, which are widely used in medical imaging, triangular B-splines are much more powerful and offer many desirable advantages for image registration, such as a flexible triangular domain, local control, space-varying smoothness, and sharp feature modeling. Empowered by the rigorous theory of triangular B-splines, our method can explicitly model the transformation between temporal mammogram pairs over an irregular region of interest (ROI), using a collection of triangular B-splines. In addition, it is also capable of describing C0 continuous deformation at the interfaces between different elastic tissues, while the overall displacement field is smooth. Our registration process consists of two steps: 1) The template image is first nonlinearly deformed using the TBFEM model, subject to pre-segmented feature constraints; 2) The deformed template image is further perturbed by applying pseudo image forces, aiming to reduce intensity-based discrepancies. The proposed registration framework has been tested extensively on practical clinical data, and the experimental results demonstrate that the registration accuracy is improved compared to conventional FEMs. In addition, the modeling of local C0 continuities of the displacement field helps to further increase the registration quality considerably.
Establishing multi-modality datasets with the incorporation of 3D histopathology for soft tissue classification
The development of multi-modality image analysis has gained increasing popularity over recent years. Multi-modality image databases are being developed to benefit patient clinical care, research and education. The incorporation of histopathology in these multi-modality datasets is complicated by the large differences in image quality, content and spatial association. We have developed a novel system, the large-scale image microtome array (LIMA), to bridge the gap between non-structurally destructive and destructive imaging such that reliable registration and incorporation of three-dimensional (3D) histopathology can be achieved. We have developed registration algorithms to align the micro-CT, LIMA and histopathology data to a common coordinate system. Using this multi-modality image dataset, we have developed a classification algorithm to identify, on a pixel basis, the tissue types present. The output from the classification processing is a 3D color-coded map of tissue distributions. The resulting complete dataset provides an abundance of valuable information relating to the tissue sample, including density, anatomical structure, color, texture and cellular information in three dimensions. In this study we have chosen to use normal and diseased lung tissue; however, the flexibility of the image acquisition and subsequent processing algorithms makes the approach applicable to any soft organ tissue.
Initial comparison of registration and fusion of SPECT-CmT mammotomography images
A hybrid, dual-modality single photon emission computed tomography (SPECT) and x-ray computed mammotomography (CmT) scanner for dedicated breast and axillary imaging is under development. CmT imaging provides high-resolution anatomical images, whereas SPECT provides functional images, albeit with coarser resolution. As is being seen clinically in whole-body imaging, integration of the images is expected to enhance (visually) and improve (with attenuation correction of SPECT) the information provided by either modality for the detection, characterization and potentially the staging of breast cancer. The registration of these images considers variations in object positions between the different modalities and imaging parameters (pixel size, conditions of acquisition, scan limitations). Automatic methods can be used to find the geometric transformations between the different imaging modalities involved. Here we demonstrate the initial stages of iterative 2-dimensional registration and fusion of SPECT with parallel-beam geometry and CmT with offset cone-beam acquisition geometry for mammotomography, with images acquired and reconstructed independently on each system. Two registration algorithms are considered: the first is a Mutual Information (MI) method based on intrinsic image content; the second is a rigid-body transform method, the Iterative Closest Point (ICP) method, based on the identification of fiducial markers visible to both emission (SPECT) and transmission (CmT) imaging modalities. Experiments include the use of a geometric resolution/frequency phantom imaged under different conditions, and two different anthropomorphic breast phantom sizes (325 and 935mL). Initial results with the geometric phantom demonstrate that MI can be misled by highly symmetric features, whereas ICP using control points is accurate to within fractions of a voxel. Initial breast phantom studies indicate that object size and SPECT resolution limitations may contribute to registration errors.
Motion correction via nonrigid coregistration of dynamic MR mammography series
The objectives of this investigation are to improve the quality of subtraction MR breast images and to improve the accuracy of time-signal intensity curves (TSIC) related to local contrast-agent concentration in dynamic MR mammography. The patients, with up to nine fiducial skin markers (FSMs) taped to each breast, were prone with both breasts suspended into a single well that housed the receiver coil. After a preliminary scan, the paramagnetic contrast agent gadopentetate dimeglumine (Gd) was delivered intravenously, followed by physiological saline. The field of view was centered over the breasts. We used a gradient recalled echo (GRE) technique for the pre-Gd baseline and five more measurements at 60s intervals. Centroids were determined for corresponding FSMs visible on the pre-Gd and any post-Gd images. This was followed by segmentation of the breast surfaces in all dynamic-series images, and meshing of all post-Gd breast images. Tetrahedral volume and triangular surface elements were used to construct a finite element method (FEM) model. We used ANSYS software and an analogy between the orthogonal components of the displacement field and the temperature differences in steady-state heat transfer (SSHT) in solids. The floating images were warped to a fixed image using an appropriate shape function for interpolation from mesh nodes to voxels. To reduce any residual misregistration, we performed surface matching between the previously warped floating image and the target image. Our method of motion correction via nonrigid coregistration yielded excellent differential-image series that clearly revealed lesions not visible in unregistered differential-image series. Further, it produced clinically useful maximum intensity projection (MIP) 3D images.
Alignment of full and partial CT thoracic scans using bony structures
Diagnostic thoracic procedures using computed tomography (CT) often include comparisons of scans acquired with different slice thicknesses. In this manuscript, we investigated the potential for alignment of different CT scans from the same patient using skeletal knowledge of the thoracic region. Skeletal matching was selected because it is expected to be less susceptible to differences associated with patient breath hold, positioning and cardiac motion. Our method utilized the positioning of the ribs relative to the vertebra for matching. It also included matching the scapula when visible in the scans. Rib positioning was described by the angles formed between the vertebra centroid and combinations of pairs of rib centroids visible on each CT slice; this was used as the primary matching mechanism. Scapula morphology was described using a feature based on the local maxima of the distance transform. Since the scapula is not visible in all slices of a full scan, its description was limited to defining only the potential range of slices. A cost function incorporating the difference of features from rib positioning and scapula morphology between two slices was derived and used to match slices. The method was evaluated on an independent set of 10 pairs of full and partial CT scans. Assessment was based on whether slices containing known nodules in each pair of scans overlapped after the alignment procedure. Results showed that the proposed metric correctly aligned 9 out of 10 scans. The preliminary results are encouraging for using this method as a first step towards temporal analysis of lung nodules.
Image registration of proximal femur with substantial bone changes: application in 3D visualization of bone loss of astronauts after long-duration spaceflight
Wenjun Li, Miki Sode, Isra Saeed, et al.
We recently studied bone loss in crewmembers making 4- to 6-month flights on the International Space Station. We employed Quantitative Computed Tomography (QCT) technology (Lang et al., J Bone Miner Res. 2004; v. 19, p. 1006), which made measurements of both cortical and trabecular bone loss that could not be obtained using 2-dimensional dual x-ray absorptiometry (DXA) imaging technology. To further investigate the bone loss after spaceflight, we have developed image registration technologies to align serial scans so that bone changes can be directly visualized at a subregional level, which can provide more detailed information for understanding bone physiology during long-term spaceflight. To achieve effective and robust registration when large bone changes exist, we have developed technical adaptations to standard registration methods. Our automated image registration is mutual-information based. We have applied an automatically adaptive binning method in calculating the mutual information. After the pre- and post-flight scans are geometrically aligned, the interior bone changes can be clearly visualized. Image registration can also be applied to Finite Element Modeling (FEM) to compare bone strength changes, where consistent loading conditions must be applied to serial scans.
Supporting registration decisions during 3D medical volume reconstructions
Peter Bajcsy, Sang-Chul Lee, David Clutter
We propose a methodology for making optimal registration decisions during 3D volume reconstruction in terms of (a) anticipated accuracy of aligned images, (b) uncertainty of obtained results during the registration process, (c) algorithmic repeatability of the alignment procedure, and (d) computational requirements. We researched and developed a web-enabled, web-services-based, data-driven registration decision support system. The registration decisions include (1) image spatial size (image sub-area or entire image), (2) transformation model (e.g., rigid, affine, or elastic), (3) invariant registration feature (intensity, morphology, or a sequential combination of the two), (4) automation level (manual, semi-automated, or fully automated), (5) evaluation of registration results (multiple metrics and methods for establishing ground truth), and (6) assessment of resources (computational resources and human expertise, geographically local or distributed). Our goal is to provide mechanisms for evaluating the trade-offs of each registration decision in terms of the aforementioned impacts. First, we present a medical registration methodology for making registration decisions that lead to registration results with well-understood accuracy, uncertainty, consistency, and computational complexity characteristics. Second, we have built software tools that enable geographically distributed researchers to optimize their data-driven registration decisions by using web services and supercomputing resources. The support developed for registration decisions about 3D volume reconstruction is available to the general community with access to the NCSA supercomputing resources. We illustrate performance by considering 3D volume reconstruction of blood vessels in histological sections of uveal melanoma, from serial paraffin sections fluorescently labeled with antibodies to CD34 and laminin. The specimens are imaged by fluorescence confocal laser scanning microscopy (CLSM).
Segmentation Poster Session
An automatic segmentation method for multispectral microscopic cervical cell images
Hongbo Zhang, Libo Zeng, Hengyu Ke, et al.
We have been developing a computer-aided diagnosis (CAD) system for automatically recognizing cervical cancer cells from Papanicolaou smears. Considering that pathological changes of the cervix can be indicated by abnormality of the nucleus of the intermediate cell, the key task of this system is to find the intermediate cells and segment the nucleus precisely. This paper presents a novel approach for automatic segmentation of microscopic cervical cell images using multispectral imaging techniques. In order to capture images at different wavelengths, a Liquid Crystal Tunable Filter (LCTF) device is used to provide wavelength selection from 400 nm to 720 nm in increments of 10 nm. Considering the spectral variances of background, nucleus, and cytoplasm, the background is first extracted from the microscopic images by calculating the pixel intensity variance at 470 nm, 530 nm, 570 nm, 580 nm, and 650 nm. Then superficial cells are easily separated from intermediate cells at 530 nm and 650 nm because of the different pixel intensity distributions of the two kinds of cells at these two wavelengths. To segment the nucleus from intermediate cells, we adopt two procedures. Firstly, the nuclei are roughly segmented using an iterative maximum between-cluster deviation algorithm. Secondly, a novel rigorous algorithm based on an active contour model is adopted to achieve more exact nucleus segmentation. Using the proposed method, we carried out experiments on over 300 cervical smears, and the results show that the method is robust and precise.
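A minimal sketch of the variance-based background extraction step might look like the following, assuming the spectral stack is available as a NumPy array; the variance threshold is dataset-dependent and is an assumption, not a value taken from the paper.

```python
import numpy as np

def background_mask(stack, band_indices, var_threshold):
    """Label pixels whose intensity varies little across the chosen spectral bands as background.

    stack         : array of shape (n_bands, H, W), one image per wavelength
    band_indices  : e.g. the bands acquired at 470, 530, 570, 580 and 650 nm
    var_threshold : assumed, dataset-dependent cut-off on the per-pixel variance
    """
    selected = stack[list(band_indices)].astype(float)
    pixel_var = selected.var(axis=0)      # spectral variance at each pixel
    return pixel_var < var_threshold      # low variance -> background
```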
Characterization of the optic disc in retinal imagery using a probabilistic approach
Kenneth W. Tobin Jr., Edward Chaum M.D., V. Priya Govindasamy, et al.
The application of computer based image analysis to the diagnosis of retinal disease is rapidly becoming a reality due to the broad-based acceptance of electronic imaging devices throughout the medical community and through the collection and accumulation of large patient histories in picture archiving and communications systems. Advances in the imaging of ocular anatomy and pathology can now provide data to diagnose and quantify specific diseases such as diabetic retinopathy (DR). Visual disability and blindness have a profound socioeconomic impact upon the diabetic population and DR is the leading cause of new blindness in working-age adults in the industrialized world. To reduce the impact of diabetes on vision loss, robust automation is required to achieve productive computer-based screening of large at-risk populations at lower cost. Through this research we are developing automation methods for locating and characterizing important structures in the human retina such as the vascular arcades, optic nerve, macula, and lesions. In this paper we present results for the automatic detection of the optic nerve using digital red-free fundus photography. Our method relies on the accurate segmentation of the vasculature of the retina along with spatial probability distributions describing the luminance across the retina and the density, average thickness, and average orientation of the vasculature in relation to the position of the optic nerve. With these features and other prior knowledge, we predict the location of the optic nerve in the retina using a two-class, Bayesian classifier. We report 81% detection performance on a broad range of red-free fundus images representing a population of over 345 patients with 19 different pathologies associated with DR.
Automatic segmentation method which divides a cerebral artery tree in time-of-flight MR-angiography into artery segments
Akihiro Takemura, Masayuki Suzuki, Hajime Harauchi, et al.
To achieve sufficient accuracy and robustness, 2D/3D registration methods between DSA and MRA of the cerebral arteries require an automatic extraction method that can isolate wanted segments from the cerebral artery tree. Here, we describe an automatic segmentation method that divides the cerebral artery tree in time-of-flight magnetic resonance angiography (TOF-MRA) into individual arteries. This method requires a 3D dataset of the cerebral artery tree obtained by TOF-MRA. The processing steps are: 1) every branch in the cerebral artery tree is labeled with a unique index number, 2) the 3D center of the Circle of Willis is determined using 2D and 3D templates, and 3) the labeled branches are classified with reference to the 3D territory map of cerebral arteries centered on the Circle of Willis. The method classifies all branches into internal carotid arteries (ICA), basilar artery (BA), middle cerebral artery (MCA), the A1 segment of the anterior cerebral artery (ACA(A1)), other segments of the anterior cerebral artery (ACA), posterior communicating artery (PcomA), and posterior cerebral artery (PCA). In the eleven cases examined, the number of correctly segmented pixels in each branch was counted and the percentage based on the total number of pixels of the artery was calculated. Manually classified arteries of each case were used as references. Mean percentages were: ACA, 87.6%; R-ACA(A1), 44.9%; L-ACA(A1), 30.4%; R-MCA, 82.4%; L-MCA, 79.0%; R-PcomA, 0.5%; L-PcomA, 0.0%; R-PCA, 77.2%; L-PCA, 80.0%; R-ICA, 78.6%; L-ICA, 93.0%; BA, 77.1%; and all arteries combined, 78.9%.
SIBS: a powerful concept for automatic segmentation of electron tomograms
Alexandros A. Linaroudis, Reiner Hegerl
A scaling index based segmentation (SIBS) method is proposed in order to improve the visualization and interpretation of data obtained by electron tomography. Based on the interpretation of the scaling index as a measure of dimensionality, the pixels/voxels of an image/volume are subdivided into different categories according to the kind of structure they belong to. Using the weighted scaling index method proposed by Räth [1] in conjunction with morphological operators, the approach was adapted to the field of electron microscopy, especially to three-dimensional applications as needed by electron tomography. The method turns out to be quite effective for linear structures and membranes. Theory, implementation, parameter settings, and results obtained with different kinds of data are presented and discussed.
Generalized expectation-maximization segmentation of brain MR images
Arnaud A. Devalkeneer, Pierre A. Robe, Jacques G. Verly, et al.
Manual segmentation of medical images is impractical because it is time consuming, not reproducible, and prone to human error. It is also very difficult to take into account the 3D nature of the images. Thus, semi- or fully-automatic methods are of great interest. Current segmentation algorithms based on an Expectation-Maximization (EM) procedure present some limitations. The algorithm by Ashburner et al., 2005, does not allow multichannel inputs, e.g. two MR images of different contrast, and does not use spatial constraints between adjacent voxels, e.g. Markov random field (MRF) constraints. The solution of Van Leemput et al., 1999, employs a simplified model (mixture coefficients are not estimated and only one Gaussian is used per tissue class, with three for the image background). We have thus implemented an algorithm that combines the features of these two approaches: multichannel inputs, intensity bias correction, a multi-Gaussian histogram model, and Markov random field (MRF) constraints. Our proposed method classifies tissues in three iterative main stages by way of a Generalized-EM (GEM) algorithm: (1) estimation of the Gaussian parameters modeling the histogram of the images, (2) correction of image intensity non-uniformity, and (3) modification of prior classification knowledge by MRF techniques. The goal of the GEM algorithm is to maximize the log-likelihood across the classes and voxels. Our segmentation algorithm was validated on synthetic data (with the Dice metric criterion) and real data (by a neurosurgeon) and compared to the original algorithms by Ashburner et al. and Van Leemput et al. Our combined approach leads to more robust and accurate segmentation.
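The plain EM core of such a multichannel mixture model can be sketched with scikit-learn as below; note that this omits the paper's distinguishing contributions, namely the bias-field correction and the MRF spatial constraints, which would wrap around this step.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def em_tissue_classification(t1, t2, brain_mask, n_classes=3, gaussians_per_class=2):
    """Multichannel Gaussian-mixture classification of brain voxels (EM core only).

    t1, t2     : 3D arrays of two MR contrasts
    brain_mask : 3D boolean array restricting the fit to brain voxels
    The paper's GEM additionally estimates an intensity bias field and applies
    MRF constraints between neighboring voxels.
    """
    features = np.stack([t1[brain_mask], t2[brain_mask]], axis=1).astype(float)
    gmm = GaussianMixture(n_components=n_classes * gaussians_per_class,
                          covariance_type='full', max_iter=200, random_state=0)
    gmm.fit(features)
    labels = np.full(t1.shape, -1, dtype=int)
    # Mixture-component labels; grouping components into tissue classes is left out.
    labels[brain_mask] = gmm.predict(features)
    return labels
```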
Texture-based instrument segmentation in 3D ultrasound images
The recent development of real-time 3D ultrasound enables intracardiac beating-heart procedures, but the distorted appearance of surgical instruments is a major challenge to surgeons. In addition, tissue and instruments have similar gray levels in US images and the interface between instruments and tissue is poorly defined. We present an algorithm that automatically estimates instrument location in intracardiac procedures. Expert-segmented images are used to initialize the statistical distributions of blood, tissue, and instruments. Voxels are labeled through an iterative expectation-maximization algorithm that uses information from the neighboring voxels through a smoothing kernel. Once the three classes of voxels are separated, additional neighborhood information based on the shape of instruments is used to correct misclassifications. We analyze the major axis of the segmented data through its principal components and refine the results by a watershed transform, which corrects the results at the contact between instrument and tissue. We present results on 3D in-vitro data from a tank trial, and 3D in-vivo data from a cardiac intervention on a porcine beating heart. The comparison of algorithm results to expert-annotated images shows correct segmentation and positioning of the instrument shaft.
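The principal-component analysis of the segmented instrument voxels reduces to an eigen-analysis of their coordinates; a minimal sketch, with an assumed input layout, is shown below.

```python
import numpy as np

def instrument_axis(voxel_coords):
    """Principal axis of the voxels labeled as instrument.

    voxel_coords : (N, 3) array of (x, y, z) positions of instrument voxels
    Returns the centroid and the unit vector of the dominant principal component,
    i.e. an estimate of the instrument shaft direction.
    """
    pts = np.asarray(voxel_coords, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    return centroid, vt[0]
```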
Vasculature segmentation for radio frequency ablation of non-resectable hepatic tumors
In Radio Frequency Ablation (RFA) procedures, hepatic tumor tissue is heated to a temperature at which necrosis is ensured. Unfortunately, recent results suggest that heating tumor tissue to necrosis is complicated because nearby major blood vessels provide a cooling effect. Therefore, it is fundamentally important for physicians to perform a careful analysis of the spatial relationship of diseased tissue to larger liver blood vessels. The liver contains many of these large vessels, which affect the RFA ablation shape and size. There are many sophisticated vasculature detection and segmentation techniques reported in the literature that identify continuous vessels as the diameter changes and the vessel passes through many bifurcation levels. However, the larger blood vessels near the treatment area are the only vessels required for proper RFA treatment plan formulation and analysis. With physician guidance and interaction, our system can segment those vessels which are most likely to affect the RFA ablations. We have found that our system provides the physician with the therapeutic, geometric, and spatial information necessary to accurately plan treatment of tumors near large blood vessels. The segmented liver vessels near the treatment region are also necessary for computing isolevel heating profiles used to evaluate different proposed treatment configurations.
kNN-based multi-spectral MRI brain tissue classification: manual training versus automated atlas-based training
Henri A. Vrooman, Chris A. Cocosco, Rik Stokking, et al.
Conventional k-Nearest-Neighbor (kNN) classification, which has been successfully applied to classify brain tissue, requires laborious training on manually labeled subjects. In this work, the performance of kNN-based segmentation of gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) using manual training is compared with a new method in which training is automated using an atlas. From 12 subjects, standard T2 and PD scans and a high-resolution, high-contrast scan (Siemens T1-weighted HASTE sequence with reverse contrast) were used as feature sets. For the conventional kNN method, manual segmentations were used for training, and classifications were evaluated in a leave-one-out study. Performance was studied as a function of the number of samples per tissue and of k. For fully automated training, scans were registered to a probabilistic brain atlas. Initial training samples were randomly selected per tissue based on a threshold on the tissue probability. These initial samples were then processed to keep the most reliable ones. Performance was studied for varying thresholds on the tissue probability. Classification results of both methods were validated by measuring the percentage overlap (SI). For conventional kNN classification, varying the number of training samples did not result in significant differences, while increasing k gave significantly better results. In the method using automated training, there is an overestimation of GM at the expense of CSF at higher thresholds on the tissue probability maps. The difference between the conventional method (k=45) and the observers was not significantly larger than the inter-observer variability for all tissue types. The automated method performed slightly worse: it was equal to the observers for WM, but worse for CSF and GM. From these results it can be concluded that conventional kNN classification may replace manual segmentation, and that atlas-based kNN segmentation has strong potential for fully automated segmentation, without the need for laborious manual training.
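For reference, the conventional kNN classification step can be sketched with scikit-learn as follows, using k=45 as reported above; feature scaling and the atlas-based sample selection are omitted.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def knn_tissue_segmentation(train_features, train_labels, test_features, k=45):
    """kNN classification of brain voxels into CSF, GM and WM.

    train_features : (N, 3) array of T2, PD and HASTE intensities of training voxels
    train_labels   : (N,) array of tissue labels, e.g. 0=CSF, 1=GM, 2=WM
    test_features  : (M, 3) array of intensities of the voxels to classify
    k=45 mirrors the best-performing setting reported for the conventional method.
    """
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(train_features, train_labels)
    return clf.predict(test_features)
```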
Fully automated analysis of multi-resolution four-channel micro-array genotyping data
Mohsen Abbaspour, Rafeef Abugharbieh, Mohua Podder, et al.
We present a fully automated and robust microarray image analysis system for handling multi-resolution images (down to 3 microns, with sizes up to 80 MB per channel). The system is developed to provide rapid and accurate data extraction for our recently developed microarray analysis and quality control tool (SNP Chart). Currently available commercial microarray image analysis applications are inefficient, due to the considerable user interaction typically required. Four-channel DNA microarray technology is a robust and accurate tool for determining the genotypes of multiple genetic markers in individuals. It plays an important role in the trend toward replacing traditional medical treatments with personalized genetic medicine, i.e. individualized therapy based on the patient's genetic heritage. However, fast, robust, and precise image processing tools are required for the prospective practical use of microarray-based genetic testing for predicting disease susceptibilities and drug effects in clinical practice, which requires a turn-around timeline compatible with clinical decision-making. In this paper we have developed a fully automated image analysis platform for the rapid investigation of hundreds of genetic variations across multiple genes. Validation tests indicate very high accuracy levels for genotyping results. Our method achieves a significant reduction in analysis time, from several hours to just a few minutes, and is completely automated, requiring no manual interaction or guidance.
Iterative live wire and live snake: new user-steered 3D image segmentation paradigms
During any image segmentation process, two distinct tasks are performed: recognition and delineation. Recognition consists of the searching phase, which roughly identifies a particular object of interest among the neighboring structures present in the image. Delineation consists of precisely defining the spatial extent of the object region. Well-designed interactive segmentation methods, such as live wire (LW) and snakes, exploit the synergy between user knowledge (for recognition) and the underlying computer processing done automatically (for delineation). We present in this paper two new methods, referred to as iterative live wire and live snake, for interactive 3D segmentation of medical images. In both methods, the segmentation initiated by the LW or snake method is propagated under user control to subsequent slices by projecting the anchor points. In iterative LW (ILW), the LW segments are iteratively updated in the new slice by selecting the midpoints of previous LW segments as new anchor points. In live snake (LS), the snake method is first applied in the new slice to the projected anchor points and ended with an application of ILW. The methods have been evaluated on 30 3D MRI data sets of the breast. The results indicate that, on average, far fewer user interventions during segmentation and anchor point specification are needed with the new methods than with snake propagation or live wire. The ILW segmentations are slightly more accurate, with statistical significance (P<0.01), than LS segmentations, and the former are more efficient than the latter (P<0.03), both being more efficient than the pure live wire and snake methods.
Automatic LV volume measurement in low dose multi-phase CT by shape tracking
Jens von Berg, Philipp Begemann, Felix Stahmer, et al.
Assessment of cardiac ventricular function requires time-consuming manual interaction. Some automated methods have been presented that predominantly used cardiac magnetic resonance images. Here, an automatic shape-tracking approach is followed to estimate left ventricular blood volume from multi-slice computed tomography image series acquired with retrospective ECG-gating. A deformable surface model method was chosen that utilizes both shape and local appearance priors to determine the endocardial surface and to follow its motion through the cardiac cycle. Functional parameters like the ejection fraction could be calculated from the estimated shape deformation. A clinical validation was performed in a porcine model with 60 examinations on eight subjects. The functional parameters showed good correlation with those determined by clinical experts using a commercially available semi-automatic short-axis delineation tool. The correlation coefficient for the ejection fraction (EF) was 0.89. One quarter of these acquisitions were done with a low-dose protocol. All of these degraded images could be processed well. Their correlation decreases slightly when compared to the normal-dose cases (EF: 0.87 versus 0.88).
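Once the end-diastolic and end-systolic blood volumes have been estimated from the tracked endocardial surface, the ejection fraction follows directly, as in this small sketch.

```python
def ejection_fraction(edv_ml, esv_ml):
    """Ejection fraction from end-diastolic and end-systolic LV blood volumes (ml)."""
    return (edv_ml - esv_ml) / edv_ml

# Example (illustrative values): ejection_fraction(120.0, 50.0) -> ~0.58
```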
Fast shape-directed landmark-based deep gray matter segmentation for quantification of iron deposition
Ahmet Ekin, Radu Jasinschi, Jeroen van der Grond, et al.
This paper introduces image processing methods to automatically detect the 3D volume-of-interest (VOI) and 2D region-of-interest (ROI) for deep gray matter structures (thalamus, globus pallidus, putamen, and caudate nucleus) of patients with suspected iron deposition from MR dual-echo images. Prior to the VOI and ROI detection, the cerebrospinal fluid (CSF) region is segmented by a clustering algorithm. For the segmentation, we automatically determine the cluster centers with the mean shift algorithm, which can quickly identify the modes of a distribution. After the identification of the modes, we employ the K-Harmonic means clustering algorithm to segment the volumetric MR data into CSF and non-CSF. Having the CSF mask, and observing that the frontal lobe of the lateral ventricle has a more consistent shape across age and pathological abnormalities, we propose a shape-directed landmark detection algorithm to detect the VOI in a speedy manner. The proposed landmark detection algorithm utilizes a novel shape model of the frontal lobe of the lateral ventricle for the slices where the thalamus, globus pallidus, putamen, and caudate nucleus are expected to appear. After this step, for each slice in the VOI, we use horizontal and vertical projections of the CSF map to detect the approximate locations of the relevant structures and so define the ROI. We demonstrate the robustness of the proposed VOI and ROI localization algorithms to pathologies, including severe amounts of iron accumulation as well as white matter lesions, and to anatomical variations. The proposed algorithms achieved very high detection accuracy, 100% for VOI detection, over a large and challenging MR dataset.
Automated brain segmentation using neural networks
Stephanie Powell, Vincent Magnotta, Hans Johnson, et al.
Automated methods to delineate brain structures of interest are required to analyze large amounts of imaging data like that being collected in several ongoing multi-center studies. We have previously reported on using artificial neural networks (ANN) to define subcortical brain structures such as the thalamus (relative overlap 0.825), caudate (0.745), and putamen (0.755). One of the inputs into the ANN is the a priori probability of a structure existing at a given location. In this previous work, the a priori probability information was generated in Talairach space using a piecewise linear registration. In this work we have increased the dimensionality of this registration using Thirion's demons registration algorithm. The input vector consisted of the a priori probability, spherical coordinates, and an iris of surrounding signal intensity values. The output of the neural network determined whether the voxel was defined as one of the N regions used for training. Training was performed using a standard back-propagation algorithm. The ANN was trained on a set of 15 images for 750,000,000 iterations. The resulting ANN weights were then applied to 6 test images not part of the training set. The relative overlap calculated for each structure was 0.875 for the thalamus, 0.845 for the caudate, and 0.814 for the putamen. With the modifications to the neural network algorithm and the use of multi-dimensional registration, we found substantial improvement in the automated segmentation method. The resulting segmented structures are as reliable as manual raters, and the output of the neural network can be used without additional rater intervention.
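The per-voxel input vector described above can be illustrated with the following scikit-learn sketch; the original work uses its own back-propagation network and far more training iterations, so the network size and solver settings here are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_structure_ann(prior_prob, spherical_coords, intensity_iris, labels):
    """Train a feed-forward network on the per-voxel input vector described above.

    prior_prob       : (N, 1) a priori probability of the structure at each voxel
    spherical_coords : (N, 3) voxel position in spherical coordinates
    intensity_iris   : (N, M) surrounding signal intensities sampled on an 'iris' pattern
    labels           : (N,)   target structure index for each training voxel
    """
    X = np.hstack([prior_prob, spherical_coords, intensity_iris])
    net = MLPClassifier(hidden_layer_sizes=(30,), max_iter=500, random_state=0)
    net.fit(X, labels)
    return net
```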
Modeling shape variability for full heart segmentation in cardiac computed-tomography images
Olivier Ecabert, Jochen Peters, Jürgen Weese
An efficient way to improve the robustness of the segmentation of medical images with deformable models is to use a priori shape knowledge during the adaptation process. In this work, we investigate how the modeling of shape variability in shape-constrained deformable models influences both the robustness and the accuracy of the segmentation of cardiac multi-slice CT images. Experiments are performed for a complex heart model, which comprises 7 anatomical parts, namely the four chambers, the myocardium, and the trunks of the aorta and the pulmonary artery. In particular, we compare a common shape variability modeling technique based on principal component analysis (PCA) with a simpler approach, which consists of assigning an individual affine transformation to each anatomical subregion of the heart model. We conclude that the piecewise affine modeling leads to the smallest segmentation error, while simultaneously offering the largest flexibility, without the need for training data covering the range of possible shape variability as required by PCA.
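A PCA-based point-distribution model of the kind compared in this work can be sketched as follows; the landmark shapes are assumed to be pre-aligned and flattened into coordinate vectors.

```python
import numpy as np

def pca_shape_model(shapes, n_modes=5):
    """Point-distribution shape model from aligned training shapes.

    shapes : (n_samples, n_landmarks * 3) array of concatenated landmark coordinates
    Returns the mean shape and the first n_modes variation modes (scaled by their
    standard deviation); a new shape instance is mean + b @ modes for coefficients b.
    """
    shapes = np.asarray(shapes, dtype=float)
    mean = shapes.mean(axis=0)
    _, s, vt = np.linalg.svd(shapes - mean, full_matrices=False)
    modes = vt[:n_modes] * (s[:n_modes, None] / np.sqrt(len(shapes) - 1))
    return mean, modes
```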
Comparison of color clustering algorithms for segmentation of dermatological images
Automatic segmentation of skin lesions in clinical images is a very challenging task; it is necessary for visual analysis of the edges, shape, and colors of the lesions to support melanoma diagnosis, but, at the same time, it is cumbersome since lesions (both naevi and melanomas) do not have regular shape, uniform color, or univocal structure. Most approaches adopt unsupervised color clustering. This work compares the most widespread color clustering algorithms, namely median cut, k-means, fuzzy c-means, and mean shift, applied to a method for automatic border extraction, providing an evaluation of the upper bound in accuracy that can be reached with these approaches. Different tests have been performed to examine the influence of the choice of parameter settings on the performance of the algorithms. Then a new supervised learning phase is proposed to select the best number of clusters and to segment the lesion automatically. Experiments have been carried out on a large database of medical images manually segmented by dermatologists. From these experiments, mean shift proved to be the best technique in terms of sensitivity and specificity. Finally, a qualitative evaluation of the goodness of the segmentation was also performed by the human experts, confirming the results of the quantitative comparison.
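As an example of the unsupervised clustering stage, k-means color clustering of a lesion image can be written as below; the number of clusters, which the paper selects with a supervised learning phase, is fixed here as an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_lesion_colors(rgb_image, n_clusters=3):
    """Unsupervised color clustering of a dermatological image.

    rgb_image  : (H, W, 3) array
    n_clusters : fixed here; in the paper it is itself chosen by a supervised step
    Returns a (H, W) label map from which the lesion border can be extracted.
    """
    h, w, _ = rgb_image.shape
    pixels = rgb_image.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    return km.labels_.reshape(h, w)
```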
MATLAB-ITK interface for medical image filtering, segmentation, and registration
To facilitate high level analysis of medical image data in research and clinical environments, a wrapper for the ITK toolkit is developed to allow ITK algorithms to be called in MATLAB. ITK is a powerful open-source toolkit implementing state-of-the-art algorithms in medical image processing and analysis. However, although ITK is rapidly gaining popularity, its user base is mostly restricted to technically savvy developers with expert knowledge of C++ and advanced programming concepts. MATLAB, on the other hand, is well-known for its easy-to-use, powerful prototyping capabilities that significantly improve productivity. Unfortunately, the 3D image processing capabilities of MATLAB are very limited and slow to execute. With the help of the wrapper we introduce in this paper, biomedical computing researchers familiar with MATLAB can harness the power of ITK while avoiding learning C++ and dealing with low-level programming issues. We strongly believe this functionality will be of considerable interest to the medical image computing community. In this paper we provide details about the design and usage of this interface in medical image filtering, segmentation, and registration.
An adipose segmentation and quantification scheme for the intra abdominal region on minipigs
Rasmus Engholm, Aleksandr Dubinskiy, Rasmus Larsen, et al.
This article describes a method for automatic segmentation of the abdomen into three anatomical regions: subcutaneous, retroperitoneal and visceral. For the last two regions the amount of adipose tissue (fat) is quantified. According to recent medical research, the distinction between retroperitoneal and visceral fat is important for studying metabolic syndrome, which is closely related to diabetes. However previous work has neglected to address this point, treating the two types of fat together. We use T1-weighted three-dimensional magnetic resonance data of the abdomen of obese minipigs. The pigs were manually dissected right after the scan, to produce the "ground truth" segmentation. We perform automatic segmentation on a representative slice, which on humans has been shown to correlate with the amount of adipose tissue in the abdomen. The process of automatic fat estimation consists of three steps. First, the subcutaneous fat is removed with a modified active contour approach. The energy formulation of the active contour exploits the homogeneous nature of the subcutaneous fat and the smoothness of the boundary. Subsequently the retroperitoneal fat located around the abdominal cavity is separated from the visceral fat. For this, we formulate a cost function on a contour, based on intensities, edges, distance to center and smoothness, so as to exploit the properties of the retroperitoneal fat. We then globally optimize this function using dynamic programming. Finally, the fat content of the retroperitoneal and visceral regions is quantified based on a fuzzy c-means clustering of the intensities within the segmented regions. The segmentation proved satisfactory by visual inspection, and closely correlated with the manual dissection data. The correlation was 0.89 for the retroperitoneal fat, and 0.74 for the visceral fat.
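The final quantification step relies on fuzzy c-means clustering of the intensities inside each segmented region; a small self-contained implementation is sketched below, with two clusters and the standard fuzziness m = 2 as assumptions.

```python
import numpy as np

def fuzzy_c_means(intensities, n_clusters=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Plain fuzzy c-means on the intensities inside a segmented region.

    Returns the cluster centres and the membership matrix; the fat fraction can
    then be taken from the summed membership of the adipose cluster.
    """
    x = np.asarray(intensities, dtype=float).ravel()
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0, keepdims=True)           # memberships sum to 1 per sample
    centres = np.zeros(n_clusters)
    for _ in range(n_iter):
        um = u ** m
        centres = um @ x / um.sum(axis=1)       # weighted cluster means
        dist = np.abs(x[None, :] - centres[:, None]) + 1e-12
        new_u = 1.0 / np.sum((dist[:, None, :] / dist[None, :, :]) ** (2.0 / (m - 1.0)), axis=1)
        if np.max(np.abs(new_u - u)) < tol:
            u = new_u
            break
        u = new_u
    return centres, u
```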
Computerized method for automated measurement of thickness of cerebral cortex for 3-D MR images
Hidetaka Arimura, Takashi Yoshiura, Seiji Kumazawa, et al.
Alzheimer's disease (AD) is associated with degeneration of the cerebral cortex, which results in focal volume change or thinning of the cerebral cortex in magnetic resonance imaging (MRI). Therefore, measurement of the cortical thickness is important for detection of the atrophy related to AD. Our purpose was to develop a computerized method for automated measurement of the cortical thickness in three-dimensional (3-D) MRI. The cortical thickness was measured with normal vectors from the white matter surface to the cortical gray matter surface on a voxel-by-voxel basis. First, the head region was segmented by use of an automatic thresholding technique, and then the head region was separated into the cranium region and the brain region by means of multiple gray-level thresholding while monitoring the ratio of the first maximum volume to the second. Next, a fine white matter region was determined with a level set method, using the rough white matter region extracted from the brain region as a seed. Finally, the cortical thickness was measured by extending normal vectors from the white matter surface to the gray matter surface (brain surface) on a voxel-by-voxel basis. We applied the computerized method to high-resolution 3-D T1-weighted images of the whole brain from 7 clinically diagnosed AD patients and 8 healthy subjects. The average cortical thicknesses in the upper slices for AD patients were thinner than those for non-AD subjects, whereas the average cortical thicknesses in the lower slices for most AD patients were only slightly thinner. Our preliminary results suggest that the MRI-based computerized measurement of gray matter atrophy is promising for detecting AD.
A two-stage method for lesion segmentation on digital mammograms
In this paper, we present a two-stage method for the segmentation of breast mass lesions on digitized mammograms. A radial gradient index (RGI) based segmentation method is first used to estimate an initial contour close to the lesion boundary location in a computationally efficient manner. Then a region-based active contour algorithm, which minimizes an energy function based on the homogeneities inside and outside of the evolving contour, is applied to refine the contour closer to the lesion boundary. The minimization algorithm solves, by the level set method, the Euler-Lagrange equation that describes the contour evolution. Using a digitized screening film database with 96 biopsy-proven, malignant lesions, we quantitatively compare this two-stage segmentation algorithm with an RGI-based method and a conventional region-growing algorithm by measuring the area similarity. At an overlap threshold of 0.30, the new method correctly segments 95% of the lesions while the prior methods delineate only 83% of the lesions. Our assessment demonstrates that the two-stage segmentation algorithm yields closer agreement with manually contoured lesion boundaries.
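Overlap-based evaluation of the kind used here can be sketched as follows; the paper's exact "area similarity" definition is not reproduced, so both Jaccard and Dice variants are shown for illustration.

```python
import numpy as np

def area_overlap(auto_mask, manual_mask):
    """Overlap between an automatic and a manually contoured lesion mask."""
    auto_mask = auto_mask.astype(bool)
    manual_mask = manual_mask.astype(bool)
    inter = np.logical_and(auto_mask, manual_mask).sum()
    union = np.logical_or(auto_mask, manual_mask).sum()
    jaccard = inter / union if union else 0.0
    dice = 2.0 * inter / (auto_mask.sum() + manual_mask.sum())
    return jaccard, dice

# A lesion would count as correctly segmented when the chosen overlap
# measure exceeds the 0.30 threshold used in the evaluation above.
```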
Automatic determination of the imaging plane in lumbar MRI
In this paper we describe a method for assisting radiological technologists in their routine work by automatically determining the imaging plane in lumbar MRI. The method first recognizes the spinal cord and the intervertebral disk (ID) in the lumbar vertebra 3-plane localizer image, and then the imaging plane is automatically determined according to the recognition results. To determine the imaging plane, the spinal cord and the ID are automatically recognized from the lumbar vertebra 3-plane localizer image with a series of image processing techniques. The proposed method consists of three major steps. First, after removing the air and fat regions from the 3-plane localizer image by use of histogram analysis, the rachis region is specified with a Sobel edge detection filter. Second, the spinal cord and the ID are respectively extracted from the specified rachis region using global thresholding and a line detection filter. Finally, the imaging plane is determined by finding the straight line between the spinal cord and the ID with the Hough transform. Image data of 10 healthy volunteers were used for the investigation. To validate the usefulness of our proposed method, manual determination of the imaging plane was also conducted by five experienced radiological technologists. Our experimental results showed that the concordance rate between the manual setting and the automatic determination reached 90%. Moreover, a remarkable reduction in execution time for imaging-plane determination was also achieved.
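The Hough-transform step that recovers the line between the spinal cord and the intervertebral disk can be illustrated with scikit-image; the binary edge map passed in is assumed to come from the preceding thresholding and line-detection filters.

```python
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

def dominant_line(binary_edges):
    """Find the strongest straight line in a binary edge map with the Hough transform.

    binary_edges : 2D boolean array, e.g. the detected spinal cord / disc structures
    Returns (angle, distance) of the strongest line in the normal parameterisation.
    """
    tested_angles = np.linspace(-np.pi / 2, np.pi / 2, 180, endpoint=False)
    h, theta, d = hough_line(binary_edges, theta=tested_angles)
    _, angles, dists = hough_line_peaks(h, theta, d, num_peaks=1)
    return angles[0], dists[0]
```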
White matter fiber tractography based on a directional diffusion field in diffusion tensor MRI
S. Kumazawa, T. Yoshiura M.D., H. Arimura, et al.
Diffusion tensor (DT) MRI provides directional information on water molecular diffusion, which can be utilized to estimate the connectivity of white matter tract pathways in the human brain. Several white matter tractography methods have been developed to reconstruct the white matter fiber tracts using DT-MRI. With conventional methods (e.g., streamline techniques), however, it is very difficult to trace white matter tracts passing through fiber crossing and branching regions due to the directional ambiguity caused by the partial volume effect. The purpose of this study was to develop a new white matter tractography method that permits fiber tract branching and passing through crossing regions. Our tractography method is based on a three-dimensional (3D) directional diffusion function (DDF), which is defined by the three eigenvalues and their corresponding eigenvectors of the DT in each voxel. The DDF-based tractography (DDFT) consists of segmentation of the white matter tract region and a fiber tracking process. The white matter tract regions were segmented by thresholding the 3D directional diffusion field generated by the DDF. In fiber tracking, the DDFT method estimates the local tract direction based on the overlap of the DDFs instead of the principal eigenvector used in conventional methods, and reconstructs tract branching by means of a one-to-many relation model. To investigate the feasibility and usefulness of the DDFT method, we applied it to DT-MRI data of five normal subjects and seven patients with a brain tumor. With the DDFT method, the detailed anatomy of the white matter tracts was depicted more appropriately than with conventional methods.
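The voxel-wise quantities on which such methods are built come from the eigen-decomposition of the diffusion tensor; the sketch below returns the sorted eigenpairs and the fractional anisotropy, with the caveat that the paper's DDF combines all three eigenpairs rather than only the principal one.

```python
import numpy as np

def tensor_directions_and_fa(dt):
    """Eigen-decomposition of a diffusion tensor and its fractional anisotropy.

    dt : (3, 3) symmetric diffusion tensor for one voxel
    Returns eigenvalues (descending), eigenvectors (columns) and FA; the principal
    eigenvector is what streamline methods follow.
    """
    vals, vecs = np.linalg.eigh(dt)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    md = vals.mean()
    fa = np.sqrt(1.5 * np.sum((vals - md) ** 2) / np.sum(vals ** 2))
    return vals, vecs, fa
```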
Fuzzy c-mean clustering on kinetic parameter estimation with generalized linear least square algorithm in SPECT
Hon-Chit Choi, Lingfeng Wen, Stefan Eberl, et al.
Dynamic Single Photon Emission Computed Tomography (SPECT) has the potential to quantitatively estimate physiological parameters by fitting compartment models to the tracer kinetics. The generalized linear least squares method (GLLS) is an efficient method to estimate unbiased kinetic parameters and parametric images. However, due to the low sensitivity of SPECT, noisy data can cause voxel-wise parameter estimation by GLLS to fail. Fuzzy C-Means (FCM) clustering and a modified FCM, which also utilizes information from the immediate neighboring voxels, are proposed to improve the voxel-wise parameter estimation of GLLS. Monte Carlo simulations were performed to generate dynamic SPECT data with different noise levels, which were processed by general and modified FCM clustering. Parametric images were estimated by Logan and Yokoi graphical analysis and GLLS. The influx rate (K1) and volume of distribution (Vd) were estimated for the cerebellum, thalamus, and frontal cortex. Our results show that (1) FCM reduces the bias and improves the reliability of parameter estimates for noisy data, (2) GLLS provides estimates of micro parameters (K1-k4) as well as macro parameters, such as the volume of distribution (Vd) and binding potential (BP1 & BP2), and (3) FCM clustering incorporating neighboring voxel information does not improve the parameter estimates, but reduces noise in the parametric images. These findings indicate that pre-segmentation with traditional FCM clustering is desirable for generating voxel-wise parametric images with GLLS from dynamic SPECT data.
Segmentation of ground glass opacities by asymmetric multi-phase deformable model
Yongseok Yoo, Hackjoon Shim, Il Dong Yun, et al.
Recently, ground glass opacities (GGOs) have become noteworthy in lung cancer diagnosis. It is crucial to define the boundary of a GGO accurately and consistently, since the growth rate is the most manifest evidence of its malignancy. The indefinite and irregular boundary of a GGO makes deformable models adequate for its segmentation. Among deformable models, a level set method has the ability to handle topological changes. For the exact estimation of a GGO's volume change, the pulmonary airways inside the GGO should be excluded from its volume estimation, which necessitates segmenting the image into more than the two regions of object and background. Hence, we adopted a multi-phase deformable model of two level set functions and modified its energy functional into an asymmetric form. The two main modifications are the elimination of one of the four regions of the conventional 4-phase deformable model and the prevention of the outer region from spreading out of the initialization. The proposed model segments the input image into three regions: the inner and outer regions and the background. The GGO tissues are segmented as the inner region, and the outer region acts as a blockade that prevents the inner region from leaking out to adjacent anatomical structures with similar Hounsfield unit (HU) values. Our experimental results confirmed the feasibility of the proposed method as a pre-processing step for three-dimensional (3-D) volume measurement of GGOs.
Semi-automatic knee cartilage segmentation
Erik B. Dam, Jenny Folkesson, Paola C. Pettersen M.D., et al.
Osteo-Arthritis (OA) is a very common age-related cause of pain and reduced range of motion. A central effect of OA is wear-down of the articular cartilage that otherwise ensures smooth joint motion. Quantification of the cartilage breakdown is central in monitoring disease progression and therefore cartilage segmentation is required. Recent advances allow automatic cartilage segmentation with high accuracy in most cases. However, the automatic methods still fail in some problematic cases. For clinical studies, even if a few failing cases will be averaged out in the overall results, this reduces the mean accuracy and precision and thereby necessitates larger/longer studies. Since the severe OA cases are often most problematic for the automatic methods, there is even a risk that the quantification will introduce a bias in the results. Therefore, interactive inspection and correction of these problematic cases is desirable. For diagnosis on individuals, this is even more crucial since the diagnosis will otherwise simply fail. We introduce and evaluate a semi-automatic cartilage segmentation method combining an automatic pre-segmentation with an interactive step that allows inspection and correction. The automatic step consists of voxel classification based on supervised learning. The interactive step combines a watershed transformation of the original scan with the posterior probability map from the classification step at sub-voxel precision. We evaluate the method for the task of segmenting the tibial cartilage sheet from low-field magnetic resonance imaging (MRI) of knees. The evaluation shows that the combined method allows accurate and highly reproducible correction of the segmentation of even the worst cases in approximately ten minutes of interaction.
Fast and robust extraction of centerlines in 3D tubular structures using a scattered-snakelet approach
Christoph Spuhler, Matthias Harders, Gábor Székely
We present a fast and robust approach for automatic centerline extraction of tubular structures. The underlying idea is to cut traditional snakes into a set of shorter, independent segments - so-called snakelets. Following the same variational principles, each snakelet acts locally and extracts a subpart of the overall structure. After a parallel optimization step, outliers are detected and the remaining segments then form an implicit centerline. No manual initialization of the snakelets is necessary, which represents one advantage of the method. Moreover, computational complexity does not directly depend on dataset size, but on the number of snake segments necessary to cover the structure of interest, resulting in short computation times. Lastly, the approach is robust even for very complex datasets such as the small intestine. Our approach was tested on several medical datasets (CT datasets of colon, small bowel, and blood vessels) and yielded smooth, connected centerlines with few or no branches. The computation time needed is less than a minute using standard computing hardware.
Analysis of brain images using the 3D-CSC segmentation method
Lutz Priese, Frank Schmitt, Patrick Sturm, et al.
The 2D segmentation method CSC (Color Structure Code) for color images has recently been generalized to 3D color or grey valued images. To apply this technique for an automated analysis of 3D MR brain images a few preprocessing and postprocessing steps have been added. We present this new brain analysis technique and compare it with SPM.
3D echocardiographic segmentation using the mean-shift algorithm and an active surface model
The anatomical and functional information about the cardiac cavities obtained from ultrasound images allows a qualitative and quantitative analysis to determine a patient's health and detect possible pathologies. Several approaches have been proposed for semiautomatic or fully automatic segmentation. Texture-based presegmentation combined with an active contour model has proven to be a promising way to extract cardiac structures from echographic images. In this work a novel procedure for 3D cardiac image segmentation is introduced. A robust pre-processing step that reduces noise and extracts an initial frontier of the cardiac structures is combined with an active surface model to obtain the final 3D segmentation. Preprocessing is performed by the mean shift algorithm, which integrates a 3D edge confidence map and includes entropy, echo intensity, and spatial information as input features. This procedure adequately locates homogeneous regions in 3D echocardiographic images. The external energy terms included in the active surface model are the 3D edge confidence map and the entropy component obtained by the mean shift pre-segmentation. The results demonstrate that the pre-processing provides homogeneous regions and a good initial frontier between blood and myocardium. The active surface model adjusts the initial surface computed by the mean shift algorithm to the cardiac border. Finally, the obtained results are compared with the experts' manual segmentation and the Tanimoto index between these segmentations is calculated.
Level sets and shape models for segmentation of cardiac perfusion MRI
Lucas Lorenzo, Robert S. MacLeod, Ross T. Whitaker, et al.
Dynamic MRI perfusion studies have proven to be useful for detecting and characterizing myocardial ischemia. Accurate segmentation of the myocardium in the dynamic contrast-enhanced (DCE) MRI images is an important step for estimation of regional perfusion. Although a great deal of research has been done for segmenting MRI scans of heart wall motion, relatively little work has been done to segment DCE MRI studies. We propose a new semi-automatic robust level set based segmentation technique that uses both spatial and temporal information. The evolution of level sets is based on a spectral speed function which is a function of the Mahalanobis distance between each pixel's time curve and the time curves of user-determined seed points in the myocardium. A curvature penalty term is included in the evolution of the contours to ensure smoothness of the evolving level sets. We also make use of shape information to constrain the evolution of the level sets. Shape models were created by using signed distance maps from manually segmented images and performing principal component analysis. Thus the algorithm has the qualities of evolving an active contour both locally, based on image values and curvature, and globally to a maximum a posteriori estimate of the left ventricle shape in order to segment the left ventricle myocardium from DCE cardiac MRI images. The algorithm was tested on 16 DCE MRI datasets and compared to manual segmentations. The results matched the manual segmentations.
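The Mahalanobis-distance speed term can be sketched as follows, assuming the dynamic series and the seed-point time curves are available as NumPy arrays; the mapping from distance to speed is an illustrative choice, not the paper's exact spectral speed function.

```python
import numpy as np

def mahalanobis_speed_map(time_curves, seed_curves, eps=1e-6):
    """Speed image from the Mahalanobis distance between each pixel's time curve
    and the distribution of time curves at user-selected myocardial seed points.

    time_curves : (H, W, T) dynamic contrast-enhanced series
    seed_curves : (S, T) time curves of the seed pixels
    Smaller distance -> larger speed, so the level set evolves fastest through
    pixels whose enhancement resembles the seeds.
    """
    h, w, t = time_curves.shape
    mu = seed_curves.mean(axis=0)
    cov = np.cov(seed_curves, rowvar=False) + eps * np.eye(t)  # regularized covariance
    cov_inv = np.linalg.inv(cov)
    diff = time_curves.reshape(-1, t) - mu
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)          # squared Mahalanobis distance
    return (1.0 / (1.0 + np.sqrt(d2))).reshape(h, w)
```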
Detection of joint space narrowing in hand radiographs
Joost A. Kauffman, Cornelis H. Slump, Hein J. Bernelot Moens
Radiographic assessment of joint space narrowing in hand radiographs is important for determining the progression of rheumatoid arthritis at an early stage. Clinical scoring methods are based on manual measurements that are time consuming and subject to intra-reader and inter-reader variability. The goal is to design an automated method for measuring the joint space width with a higher sensitivity to change [1] than manual methods. The large variability in joint shapes and textures, the possible presence of joint damage, and the interpretation of projection images make it difficult to detect joint margins accurately. We developed a method that uses a modified active shape model to scan for margins within a predetermined region of interest. Possible joint space margin locations are detected using a probability score based on the Mahalanobis distance. To prevent the detection of false edges, we use a dynamic programming approach. The shape model and the Mahalanobis scoring function are trained with a set of 50 hand radiographs in which the margins have been outlined by an expert. We tested our method on a separate set of 50 images. The method was evaluated by calculating the mean absolute difference with manual readings by a trained person. 90% of the joint margins are detected within 0.12 mm. We found that our joint margin detection method has higher reproducibility than manual readings. For cases where the joint space has disappeared, the algorithm is unable to estimate the margins. In these cases it would be necessary to use a different method to quantify joint damage.
Vesselness propagation: a fast interactive vessel segmentation method
Wenli Cai, Frank Dachille, Gordon J. Harris, et al.
With the rapid development of multi-detector computed tomography (MDCT), resulting in increasing temporal and spatial resolution of data sets, clinical use of computed tomographic angiography (CTA) is rapidly increasing. Analysis of vascular structures is much needed in CTA images; however, the basis of the analysis, vessel segmentation, can still be a challenging problem. In this paper, we present a fast interactive method for CTA vessel segmentation, called vesselness propagation. This method is a two-step procedure, with a pre-processing step and an interactive step. During the pre-processing step, a vesselness volume is computed by application of a CTA transfer function followed by a multi-scale Hessian filtering. At the interactive stage, the propagation is controlled interactively in terms of the priority of the vesselness. This method was used successfully in many CTA applications such as the carotid artery, coronary artery, and peripheral arteries. It takes less than one minute for a user to segment the entire vascular structure. Thus, the proposed method provides an effective way of obtaining an overview of vascular structures.
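The pre-processing step (transfer function followed by multi-scale Hessian filtering) can be approximated with scikit-image's Frangi filter; the HU window and scale range below are assumptions standing in for the paper's actual transfer function and filter settings.

```python
import numpy as np
from skimage.filters import frangi

def vesselness_volume(cta, hu_low=100, hu_high=600, sigmas=(1, 2, 3, 4)):
    """Vesselness pre-computation: intensity windowing followed by multi-scale Hessian filtering.

    cta : 3D CTA volume in Hounsfield units
    The Frangi filter is one common Hessian-based vesselness measure; bright
    vessels are assumed (black_ridges=False).
    """
    windowed = np.clip((cta - hu_low) / float(hu_high - hu_low), 0.0, 1.0)
    return frangi(windowed, sigmas=sigmas, black_ridges=False)
```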
Chroma analysis for quantitative immunohistochemistry using active learning
Nilesh Patel, Aiyesha Ma, Rajal Shah, et al.
Protein expression analysis has traditionally relied upon visual evaluation of the immunohistochemical reaction by a pathologist, who analyzes the grade of staining intensity and estimates the percentage of cells stained in the area of interest. This method is effective in experienced hands but has potential limitations in its reproducibility due to subjectivity between and within operators. These limitations are particularly pronounced in gray areas, where a distinction of weak from moderate protein expression can be clinically significant. Some research also suggests that sub-localization of the protein expression into different components, such as nuclei versus cytoplasm, may be of great importance. This distinction can be particularly difficult to quantify using manual methods. In this paper, we formulate the problem of quantitative protein expression analysis as an active learning classification problem, where a very small set of pre-sampled user data is used for understanding expert evaluation. The confidence elicited from the expert is mapped to an uncertainty region that is used to select supplemental learning data. This is done by posing a structured query to the unknown data set. The newly identified samples are then added to the training set for incremental learning. The strength of our algorithm is measured by its ability to learn with minimum user interaction. Chroma analysis results for Tissue Micro-array (TMA) images are presented to demonstrate the user interaction and learning ability. The chroma analysis results are then processed to obtain quantitative results.
Probabilistic minimal path for automated esophagus segmentation
Mikael Rousson, Ying Bai, Chenyang Xu, et al.
This paper introduces a probabilistic shortest path approach to extract the esophagus from CT images. In this modality, the absence of strong discriminative features in the observed image makes the problem ill-posed without the introduction of additional knowledge constraining the problem. The solution presented in this paper relies on learning and integrating contextual information. The idea is to model the spatial dependency between the structure of interest and neighboring organs that may be easier to extract. Observing that the left atrium (LA) and the aorta are such candidates for the esophagus, we propose to learn the esophagus location with respect to these two organs. This dependence is learned from a set of training images where all three structures have been segmented. Each training esophagus is registered to a reference image according to a warping that maps the reference organs exactly. From the registered esophagi, we define the probability of the esophagus centerline relative to the aorta and LA. To extract a new centerline, a probabilistic criterion is defined from a Bayesian formulation that combines the prior information with the image data. Given a new image, the aorta and LA are first segmented and registered to the reference shapes; then, the optimal esophagus centerline is obtained with a shortest path algorithm. Finally, relying on the extracted centerline, coupled ellipse fittings allow a robust detection of the esophagus outer boundary.
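Once the probabilistic criterion has been turned into a cost volume, the centerline extraction reduces to a minimal-cost path search; a sketch using scikit-image is shown below, with the cost volume and the two end points assumed to be given.

```python
import numpy as np
from skimage.graph import route_through_array

def extract_centerline(cost_volume, start, end):
    """Minimal-cost path between two end points of the esophagus.

    cost_volume : 3D array in which low values encode high posterior probability
                  of the esophagus centerline (the Bayesian term of the paper)
    start, end  : (z, y, x) voxel indices of the two extremities
    """
    path, total_cost = route_through_array(cost_volume, start, end, fully_connected=True)
    return np.asarray(path), total_cost
```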
Variational segmentation framework in prolate spheroidal coordinates for 3D real-time echocardiography
This paper presents a new formulation of deformable model segmentation in prolate spheroidal coordinates for 3D cardiac echocardiography data. The prolate spheroidal coordinate system enables a representation of the segmented surface with descriptors specifically adapted to the "ellipsoidal" shape of the ventricle. A simple data energy term, based on gray-level information, guides the segmentation. The segmentation framework provides a very fast and simple algorithm for evolving an initial ellipsoidal object towards the endocardial surface of the myocardium with near real-time deformations. With near real-time performance, additional constraints on landmark points can be used interactively to prevent leakage of the surface.
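For readers unfamiliar with the coordinate system, the mapping from prolate spheroidal coordinates back to Cartesian space is summarized in this small sketch; the segmented surface can then be represented compactly as a radial-like coordinate over the two angular coordinates.

```python
import numpy as np

def prolate_to_cartesian(xi, eta, phi, focal_length):
    """Convert prolate spheroidal coordinates (xi, eta, phi) to Cartesian (x, y, z).

    A surface of constant xi is an ellipsoid of revolution with foci separated by
    2 * focal_length, which is why this system suits the left ventricle.
    """
    a = focal_length
    x = a * np.sinh(xi) * np.sin(eta) * np.cos(phi)
    y = a * np.sinh(xi) * np.sin(eta) * np.sin(phi)
    z = a * np.cosh(xi) * np.cos(eta)
    return x, y, z
```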
A new general method of 3D model generation for active shape image segmentation
For 3D model-based approaches, building the 3D shape model from a training set of segmented instances of an object is a major challenge and currently remains an open problem. In this paper, we propose a novel, general method for the generation of 3D statistical shape models. Given a set of training 3D shapes, 3D model generation is achieved by 1) building the mean model from the distance transform of the training shapes, 2) utilizing a tetrahedron method for automatically selecting landmarks on the mean model, and 3) subsequently propagating these landmarks to each training shape via a distance labeling method. Previous 3D modeling efforts all had severe limitations in terms of the object shape, geometry, and topology. The proposed method is very general without such assumptions and is applicable to any data set.
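Step 1, building the mean model from distance transforms, can be sketched as follows; a signed distance convention (positive inside) is assumed, and the training masks are assumed to be pre-aligned and of identical size.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def mean_shape_from_masks(binary_shapes):
    """Mean model built from the signed distance transforms of aligned training shapes.

    binary_shapes : list of 3D boolean arrays of identical size
    Returns the mean signed distance map; its zero level set is the mean shape,
    on which landmarks can subsequently be placed.
    """
    sdts = []
    for mask in binary_shapes:
        inside = distance_transform_edt(mask)     # distance to the boundary, inside
        outside = distance_transform_edt(~mask)   # distance to the boundary, outside
        sdts.append(inside - outside)             # signed distance, positive inside
    return np.mean(sdts, axis=0)
```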
A fast algorithm for body extraction in CT volumes
The Computed Tomography (CT) modality shows not only the body of the patient in the volumes it generates, but also the clothing, the cushion, and the table. This can be a problem especially for two applications. The first is 3D visualization, where the table has high-density parts that might hide regions of interest. The second is registration of acquisitions obtained at different time points; indeed, the table and cushions might be visible in one data set only, and their positions and shapes may vary, making the registration less accurate. An automatic approach for extracting the body would solve those problems. It should be robust, reliable, and fast. We therefore propose a multi-scale method based on deformable models. The idea is to move a surface across the image that attaches to the boundaries of the body. We iteratively compute forces which take into account local information around the surface. These forces make the surface move through the table but ensure that it stops when coming close to the body. Our model has elastic properties; moreover, we take into account the fact that some regions in the volume convey more information than others by giving them more weight. This is done by using normalized convolution when regularizing the surface. The algorithm, tested on a database of over a hundred volumes of the whole body, chest, or lower abdomen, has proven to be very efficient, even for volumes with up to 900 slices, providing accurate results in an average time of 6 seconds. It is also robust against noise and variations in scale and table shape.
Investigation on an EM framework for partial volume image segmentation
Daria Eremina, Xiang Li, Wei Zhu, et al.
This work investigates a new partial volume (PV) image segmentation framework with comparison to a previous PV approach. The new framework utilizes an expectation-maximization (EM) algorithm to estimate simultaneously (1) tissue fractions in each image voxel and (2) statistical model parameters of the image data under the principle of maximum a posteriori probability (MAP). The previous EM approach models the PV effect by down-sampling a voxel and then labels each sub-voxel as a pure tissue type, where the number of sub-voxels labeled by a given tissue type over the total number of sub-voxels reflects the fraction of that tissue type inside the original voxel. The tissue fractions in each voxel in this discrete PV model are represented by a limited number of percentage values. In the new MAP-EM approach, the PV effect is modeled in a continuous space and estimated directly as the fraction of each tissue type in the original voxel. The previous discrete PV model would converge to our continuous PV tissue-mixture model if there is an infinite number of sub-voxels within a voxel. However, in practice a voxel is usually down-sampled once or twice for computational reasons. A comparison study between this limited down-sampling approach and our continuous PV model reveals, by computer simulations, that our continuous PV model is computationally more effective and thus improves the PV segmentation over the discrete PV model.
Shortest path adjusted similarity metrics for resolving boundary perturbations in scaffold images for tissue engineering
The degree of match between the delineation result produced by a segmentation technique and the ground truth can be assessed using robust "presence-absence" resemblance measures. Previously, we investigated and introduced an exhaustive list of similarity indices for assessing multiple segmentation techniques. However, these measures are highly sensitive to even minor boundary perturbations, which inevitably manifest in the segmentations of random biphasic spaces reminiscent of the stochastic pore-solid distributions in tissue engineering scaffolds. This paper investigates ideas adapted from ecology to emphasize global resemblances and ignore minor local dissimilarities. It uses concepts from graph theory to perform controlled local mutations in order to maximize the similarities. The effect of this adjustment is investigated on a comprehensive list of forty-nine similarity indices sensitive to the over- and under-estimation errors associated with image delineation tasks.
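As a minimal illustration of the kind of presence-absence resemblance measure evaluated here, the sketch below computes the Dice and Jaccard indices from binary masks; it shows two generic representatives, not the adjusted metrics proposed in the paper.

```python
import numpy as np

def similarity_indices(seg, truth):
    """Dice and Jaccard indices between a binary segmentation and ground truth."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    a = np.logical_and(seg, truth).sum()     # present in both
    b = np.logical_and(seg, ~truth).sum()    # over-estimation
    c = np.logical_and(~seg, truth).sum()    # under-estimation
    dice = 2 * a / (2 * a + b + c)
    jaccard = a / (a + b + c)
    return dice, jaccard
```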
Intelligent data splitting for volume data
Hong Shen, Ernst Bartsch
We describe a system that automatically extracts body sections of interest from volume data sets obtained from major medical modalities. This is important for saving storage and transmission bandwidth and improving data-sharing efficiency. The data to be split is stored in a series of files, and each of the files contains one axial slice image; this is how DICOM data is stored. The splitting of volume data is therefore applied in the axial direction. The core of the system is an algorithm module that automatically detects lines of separation in the axial direction of the data. Afterwards, the system copies the files that contain the desired section of slice images to the destination, according to the detected separation lines. To obtain the split lines, anatomical features specific to each body section are extracted. The method and principle can be applied to major modalities where the extraction of various data sections is needed.
Single click volumetric segmentation of abdominal organs in computed tomography images
Brian W. Whitney, Nathan J. Backman, Jacob D. Furst, et al.
Current segmentation techniques require user intervention to fine-tune thresholds and parameters, plot initial contours, refine seed placement, and engage in other optimization strategies. This can cause difficulties for physicians trying to use segmentation tools as they may not have the time or resources to overcome steep learning curves. In order to segment volumetric regions from sequential slices of computed tomography (CT) images with minimal user intervention, we propose an algorithm based on volumetric seeded region growing that employs an adaptive and prioritized expansion. This algorithm requires a user only to identify a voxel in an organ to perform volumetric segmentation. This approach overcomes the need to manually select threshold values for specific organs by analyzing the histogram of voxel similarity to automatically determine a stopping criterion. The homogeneity criterion used for region growth in this approach is calculated from volumetric texture descriptors derived from co-occurrence matrices which consider voxel-pairs in a 3-dimensional neighborhood of a given voxel. Preliminary segmentation results of the kidneys, spleen, and liver were obtained on 3D data extracted from 700 sequential CT images from various studies collected by Northwestern Memorial Hospital. We believe this approach to be a viable segmentation technique that requires significantly less user intervention when compared to other techniques by necessitating only one user intervention, namely the selection of a single seed point.
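A minimal sketch of single-seed volumetric region growing with a prioritized expansion is given below; the fixed stopping threshold stands in for the histogram-derived criterion, and intensity dissimilarity stands in for the co-occurrence-based homogeneity measure described in the abstract, neither of which is reproduced here.

```python
import heapq
import numpy as np

def seeded_region_grow(vol, seed, stop_threshold):
    """Grow a region from a single user-selected voxel: candidate voxels are
    expanded in order of dissimilarity to the running region mean, stopping
    once the dissimilarity exceeds the threshold."""
    visited = np.zeros(vol.shape, dtype=bool)
    region = np.zeros(vol.shape, dtype=bool)
    mean, n = float(vol[seed]), 1
    heap = [(0.0, seed)]
    visited[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while heap:
        diss, (z, y, x) = heapq.heappop(heap)
        if diss > stop_threshold:                 # adaptive stopping criterion
            break
        region[z, y, x] = True
        mean += (vol[z, y, x] - mean) / (n + 1)   # update running region mean
        n += 1
        for dz, dy, dx in offsets:                # 6-connected neighbors
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < vol.shape[0] and 0 <= ny < vol.shape[1]
                    and 0 <= nx < vol.shape[2] and not visited[nz, ny, nx]):
                visited[nz, ny, nx] = True
                # Dissimilarity evaluated at push time (an approximation).
                heapq.heappush(heap, (abs(float(vol[nz, ny, nx]) - mean), (nz, ny, nx)))
    return region
```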
Fully automatic segmentation of left ventricular myocardium in real-time three-dimensional echocardiography
Vivek Walimbe, Vladimir Zagrodsky, Raj Shekhar
Purpose: We report a deformable model (DM)-based fully automatic segmentation of the left ventricular (LV) myocardium (endocardium + epicardium) in real-time three-dimensional (3D) echocardiography. Methods: Initialization of the DM is performed through automated mutual information-based registration of the image to be segmented with a 3D template (image + corresponding endo-epicardial wiremesh). The initialized endocardial and epicardial wiremesh templates are then simultaneously refined iteratively under the joint influence of mesh-derived internal forces, image-derived external (gradient vector flow-based) forces, and endo-epicardium mesh-interaction forces. Incorporation of adaptive mesh-interaction forces into the DM refinement, a novelty of the current work, ensures appropriate relative endo-epicardial orientation during simultaneous refinement. Repeating for the entire cardiac sequence provides the segmented myocardium for all phases. Preliminary comparison is presented between automatic and expert-defined myocardial segmentation for five subjects imaged in clinical settings using a Philips SONOS 7500 scanner. Results: Root mean square (rms) radial distance error between the algorithm-determined and expert-traced endocardial and epicardial contours in six predetermined planar views was 3.86 ± 0.72 mm and 4.0 ± 0.63 mm in end-diastole, 3.9 ± 0.51 mm and 4.04 ± 0.65 mm in systole, respectively. Mean absolute error between average myocardial thickness calculated using automatic and expert-defined contours was 1.64 ± 0.56 mm (apical), 1.3 ± 0.58 mm (mid) and 1.46 ± 0.45 mm (basal). The absolute difference in ejection fraction calculated using our algorithm and by the expert using the TomTec software was 7.2 ± 0.84 %. Conclusion: We demonstrate successful segmentation of LV myocardium, which allows clinically important LV structure and function (e.g. wall thickness, LV volume and ejection fraction) to be tracked over the entire cardiac cycle.
Pre-operative segmentation of neck CT datasets for the planning of neck dissections
Jeanette Cordes, Jana Dornheim, Bernhard Preim, et al.
For the pre-operative segmentation of CT neck datasets, we developed the software assistant NeckVision. The relevant anatomical structures for neck dissection planning can be segmented, and the resulting patient-specific 3D models are afterwards visualized in another software system for intervention planning. As a first step, we examined the appropriateness of elementary segmentation techniques based on gray values and contour information for extracting the structures in the neck region from CT data. Region growing, interactive watershed transformation and live-wire are employed for the segmentation of different target structures. We also examined which of the segmentation tasks can be automated. Based on this analysis, the software assistant NeckVision was developed to optimally support the workflow of image analysis for clinicians. The usability of NeckVision was tested in a first evaluation with four otorhinolaryngologists from the University Hospital of Leipzig, four computer scientists from the University of Magdeburg, and two laymen in both fields.
Brain extraction using geodesic active contours
Albert Huang, Rafeef Abugharbieh, Roger Tam, et al.
Extracting the brain cortex from magnetic resonance imaging (MRI) head scans is an essential preprocessing step whose accuracy greatly affects subsequent image analysis. The currently popular Brain Extraction Tool (BET) produces a brain mask which may be too smooth for practical use. This paper presents a novel brain extraction tool based on three-dimensional geodesic active contours, connected component analysis and mathematical morphology. Based on user-specified intensity and contrast levels, the proposed algorithm allows an active contour to evolve naturally and extract the brain cortex. Experiments on synthetic MRI data and scanned coronal and axial MRI image volumes indicate successful extraction of tight perimeters surrounding the brain cortex. Quantitative evaluations on both synthetic phantoms and manually labeled data resulted in better accuracy than BET in terms of true and false voxel assignment. Based on these results, we illustrate that our brain extraction tool is a robust and accurate approach for the challenging task of automatically extracting the brain cortex in MRI data.
Unsupervised definition of the tibia-femoral joint regions of the human knee and its applications to cartilage analysis
Abnormal MR findings including cartilage defects, cartilage denuded areas, osteophytes, and bone marrow edema (BME) are used in staging and evaluating the degree of osteoarthritis (OA) in the knee. The locations of the abnormal findings have been correlated to the degree of pain and stiffness of the joint in the same location. The definition of the anatomic region in MR images is not always an objective task, due to the lack of clear anatomical features. This uncertainty causes variance in the location of the abnormality between readers and time points. Therefore, it is important to have a reproducible system to define the anatomic regions. This work presents a computerized approach to define the different anatomic knee regions. The approach is based on an algorithm that uses unique features of the femur and its spatial relation in the extended knee. The femur features are found from three-dimensional segmentation maps of the knee. From the segmentation maps, the algorithm automatically divides the femur cartilage into five anatomic regions: trochlea, medial weight-bearing area, lateral weight-bearing area, posterior medial femoral condyle, and posterior lateral femoral condyle. Furthermore, the algorithm automatically labels the medial and lateral tibia cartilage. The unsupervised definition of the knee regions allows a reproducible way to evaluate regional OA changes. This work also presents the application of the automated algorithm to the regional analysis of the cartilage tissue.
Evaluation of binning strategies for tissue classification in computed tomography images
Stefanie Handrick, Bahare Naimipour, Daniela Raicu, et al.
Binning strategies have been used in much research work for image compression, feature extraction, classification, segmentation and other tasks, but rarely is there any rigorous investigation into which binning strategy is the best. Binning becomes a "hidden parameter" of the research method. This work rigorously investigates the results of three different binning strategies, linear binning, clipped binning, and nonlinear binning, for co-occurrence texture-based classification of the backbone, liver, heart, renal, and splenic parenchyma in high-resolution DICOM Computed Tomography (CT) images of the human chest and abdomen. Linear binning divides the gray-level range of [0..4095] into k1 equally sized bins, while clipped binning allocates one large bin for low-intensity gray levels [0..855] (air), one for higher intensities [1368..4095] (bone), and k2 equally sized bins for the soft tissues between [856..1368]. Nonlinear binning divides the gray-level range of [0..4095] into k3 bins of different sizes. These bins are further used to calculate the co-occurrence statistical model and its ten Haralick descriptors for texture quantification of gray-level images. The results of the texture quantification using each of the three strategies and for different values of k1, k2 and k3 are evaluated with respect to their discrimination power using a decision tree classification algorithm and four classification performance metrics (sensitivity, specificity, precision and accuracy). Our preliminary results obtained on 1368 segmented DICOM images show that the optimal number of gray levels is 128 for linear binning, 512 for clipped binning, and 256 for nonlinear binning. Furthermore, when comparing the results of the three approaches, the nonlinear binning approach shows significant improvement for the heart and spleen.
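The three binning strategies can be sketched as follows; the quantile-based choice for the nonlinear bins is an assumption, since the abstract does not specify how the unequal bin sizes are chosen.

```python
import numpy as np

def bin_image(img, strategy, k):
    """Re-quantize a 12-bit CT image ([0..4095]); returns per-pixel bin indices."""
    if strategy == 'linear':                       # k equally sized bins
        edges = np.linspace(0, 4096, k + 1)
    elif strategy == 'clipped':                    # air | k soft-tissue bins | bone
        edges = np.concatenate(([0, 856], np.linspace(856, 1368, k + 1)[1:], [4096]))
    elif strategy == 'nonlinear':                  # bins of different sizes; here
        # taken as intensity quantiles (one plausible choice, not from the paper)
        edges = np.quantile(img, np.linspace(0, 1, k + 1))
        edges[0], edges[-1] = 0, 4096
    return np.digitize(img, edges[1:-1])           # indices 0 .. (number of bins - 1)
```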
Interactive lesion segmentation on dynamic contrast enhanced breast MRI using a Markov model
Qiu Wu, Marcos Salganicoff, Arun Krishnan, et al.
The purpose of this study is to develop a method for segmenting lesions on Dynamic Contrast-Enhanced (DCE) breast MRI. DCE breast MRI, in which the breast is imaged before, during, and after the administration of a contrast agent, enables a truly 3D examination of breast tissues. This functional angiogenic imaging technique provides noninvasive assessment of the microcirculatory characteristics of tissues in addition to traditional anatomical structure information. Since morphological features and kinetic curves from segmented lesions are to be used for diagnosis and treatment decisions, lesion segmentation is a key pre-processing step for classification. In our study, the ROI is defined by a bounding box containing the enhancement region in the subtraction image, which is generated by subtracting the pre-contrast image from the first post-contrast image. A maximum a posteriori (MAP) estimate of the class membership (lesion vs. non-lesion) for each voxel is obtained using the Iterative Conditional Mode (ICM) method. The prior distribution of the class membership is modeled as a multi-level logistic model, a Markov Random Field model in which the class membership of each voxel is assumed to depend upon its nearest neighbors only. The likelihood distribution is assumed to be Gaussian. The parameters of each Gaussian distribution are estimated from a dozen voxels manually selected as representative of the class. The experimental segmentation results demonstrate anatomically plausible breast tissue segmentation, and the predicted class membership of voxels from the interactive segmentation algorithm agrees with the manual classifications made by inspection of the kinetic enhancement curves. The proposed method is advantageous in that it is efficient, flexible, and robust.
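A minimal sketch of the ICM update with a Gaussian likelihood and a Potts-style neighborhood prior is shown below; the 6-neighborhood, the parameter beta, and the function names are illustrative choices rather than the authors' exact multi-level logistic formulation.

```python
import numpy as np
from scipy import ndimage as ndi

def icm_segment(vol, mu, sigma, beta=1.0, n_iter=10):
    """Iterated Conditional Modes for MAP voxel labeling (e.g. lesion vs.
    non-lesion). mu, sigma: per-class Gaussian parameters, assumed to be
    estimated from manually selected representative voxels."""
    # Negative log-likelihood of each voxel under each class.
    nll = np.stack([0.5 * ((vol - m) / s) ** 2 + np.log(s)
                    for m, s in zip(mu, sigma)], axis=-1)
    labels = np.argmin(nll, axis=-1)               # maximum-likelihood start
    kernel = np.zeros((3, 3, 3))
    kernel[1, 1, :] = kernel[1, :, 1] = kernel[:, 1, 1] = 1
    kernel[1, 1, 1] = 0                            # 6-connected neighborhood
    for _ in range(n_iter):
        energies = []
        for k in range(len(mu)):
            same = ndi.convolve((labels == k).astype(float), kernel, mode='constant')
            # Prior energy: beta times the number of disagreeing neighbors.
            energies.append(nll[..., k] + beta * (kernel.sum() - same))
        labels = np.argmin(np.stack(energies, axis=-1), axis=-1)
    return labels
```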
Automatic tracking of neuro vascular tree paths
S. Suryanarayanan, A. Gopinath, Y. Mallya, et al.
3-D analysis of blood vessels from volumetric CT and MR datasets has many applications, ranging from the examination of pathologies such as aneurysms and calcifications to the measurement of cross-sections for therapy planning. Segmentation of the vascular structures followed by tracking is an important processing step towards automating the 3-D vessel analysis workflow. This paper demonstrates a fast and automated algorithm for tracking the major arterial structures that have been previously segmented. Our algorithm uses anatomical knowledge to identify the start and end points in the vessel structure, which allows automation. A voxel coding scheme is used to code every voxel in the vessel based on its geodesic distance from the start point. A shortest-path-based iterative region growing is used to extract the vessel tracks, which are subsequently smoothed using an active contour method. The algorithm also has the ability to automatically detect bifurcation points of major arteries. Results are shown for tracking major arteries such as the common carotid, internal carotid, and vertebral arteries, and arteries coming off the Circle of Willis, across multiple cases with various data-related and pathological challenges from 7 CTA cases and 2 MR Time of Flight (TOF) cases.
Robust optic disk detection in retinal images using vessel structure and radon transform
A robust and computationally efficient algorithm is proposed for optic disk detection in retinal fundus images. The algorithm includes two steps: optic disk localization and boundary detection. In the localization step, vessels are modeled as a tree structure, and the root of the vessel tree is detected automatically and serves as the location of the optic disk. The implementation is based on an efficient multi-level binarization and an A* search algorithm. In the boundary detection step, a circle is used to model the shape of the optic disk, and the Radon transform is applied to estimate the center and radius of the circle. Experimental results on 48 retinal images with varying image quality show 100% accuracy in localization and an accuracy of 92.36% in boundary detection. The success of the proposed algorithm is attributed to the robust features extracted from the retinal images.
Prior-shape-based segmentation of various objects in ultrasound images after speckle-reduction using level-set-based curvature evolution
Joyoni Dey, Dennis A. Tighe M.D., Gopal Vijayaraghavan M.D., et al.
Medical ultrasound images are noisy with speckle, acoustic noise and other artifacts. Reduction of speckle in particular is useful for CAD algorithms. We use two algorithms, namely, mean curvature evolution of the ultrasound image surface and a variation of the mean-curvature flow, to reduce speckle. The premise is that when we view the ultrasound image as a surface, the speckle appears as a high-curvature jagged layer over the true object intensities and will reduce quickly under curvature evolution. We compare the two speckle reduction algorithms. We apply the speckle reduction to an image of a cyst and a 4-chamber view of the heart. We show significant, if not complete, speckle reduction, while keeping the relevant organ boundaries intact. On the speckle-reduced images, we apply a segmentation algorithm to detect objects. The segmentation algorithm has two steps. In the first step, we choose a prior shape and optimize the pose parameters to maximize the number of edge pixels that the curve falls on, using gradient ascent. In the second step, a radial motion is used to draw the contour points to the local edges. We apply the algorithm to a cyst and obtain satisfactory results. We compare the total area inside the boundary produced by our segmentation algorithm to the total area covered by a hand-drawn boundary of the cyst; the ratio is about 97%.
Automatic pulmonary vessel segmentation in 3D computed tomographic pulmonary angiographic (CTPA) images
Automatic and accurate segmentation of the pulmonary vessels in 3D computed tomographic angiographic images (CTPA) is an essential step for computerized detection of pulmonary embolism (PE) because PEs only occur inside the pulmonary arteries. We are developing an automated method to segment the pulmonary vessels in 3D CTPA images. The lung region is first extracted using thresholding and morphological operations. 3D multiscale filters in combination with a newly developed response function derived from the eigenvalues of Hessian matrices are used to enhance all vascular structures including the vessel bifurcations and suppress non-vessel structures such as the lymphoid tissues surrounding the vessels. At each scale, a volume of interest (VOI) containing the response function value at each voxel is defined. The voxels with a high response indicate that there is an enhanced vessel whose size matches the given filter scale. A hierarchical expectation-maximization (EM) estimation is then applied to the VOI to segment the vessel by extracting the high response voxels at this single scale. The vessel tree is finally reconstructed by combining the segmented vessels at all scales based on a "connected component" analysis. Two experienced thoracic radiologists provided the gold standard of pulmonary arteries by manually tracking the arterial tree and marking the center of the vessels using a computer graphical user interface. Two CTPA cases containing PEs were used to evaluate the performance. One of these two cases also contained other lung diseases. The accuracy of vessel tree segmentation was evaluated by the percentage of the "gold standard" vessel center points overlapping with the segmented vessels. The result shows that 97.3% (1868/1920) and 92.0% (2277/2476) of the manually marked center points overlapped with the segmented vessels for the cases without and with other lung disease, respectively. The results demonstrate that vessel segmentation using our method is not degraded by PE occlusion and the vessels can be accurately extracted.
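A simplified single-scale Hessian-based vessel enhancement along these lines is sketched below, using a generic Frangi-style response rather than the newly developed response function described in the abstract; constants and function names are illustrative.

```python
import numpy as np
from scipy import ndimage as ndi

def vesselness_single_scale(vol, sigma):
    """Single-scale tubular-structure response from Hessian eigenvalues
    (a generic Frangi-style measure, not the paper's response function)."""
    vol = vol.astype(float)
    H = np.empty(vol.shape + (3, 3))
    derivs = {(0, 0): (2, 0, 0), (1, 1): (0, 2, 0), (2, 2): (0, 0, 2),
              (0, 1): (1, 1, 0), (0, 2): (1, 0, 1), (1, 2): (0, 1, 1)}
    for (i, j), order in derivs.items():
        d = ndi.gaussian_filter(vol, sigma, order=order) * sigma ** 2
        H[..., i, j] = H[..., j, i] = d                  # scale-normalized Hessian
    lam = np.linalg.eigvalsh(H)                          # eigenvalues (ascending)
    idx = np.argsort(np.abs(lam), axis=-1)               # re-sort by magnitude
    lam = np.take_along_axis(lam, idx, axis=-1)
    l1, l2, l3 = lam[..., 0], lam[..., 1], lam[..., 2]
    eps = 1e-10
    ra = np.abs(l2) / (np.abs(l3) + eps)                 # plate- vs. line-likeness
    rb = np.abs(l1) / (np.sqrt(np.abs(l2 * l3)) + eps)   # blob- vs. line-likeness
    s2 = l1 ** 2 + l2 ** 2 + l3 ** 2                     # second-order structure
    c = 0.5 * np.sqrt(s2).max() + eps
    v = ((1 - np.exp(-ra ** 2 / 0.5)) * np.exp(-rb ** 2 / 0.5)
         * (1 - np.exp(-s2 / (2 * c ** 2))))
    v[(l2 > 0) | (l3 > 0)] = 0.0                         # keep bright (vessel) structures
    return v
```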
Automated segmentation of three-dimensional MR brain images
Jonggeun Park, Byungjun Baek, Choong-Il Ahn, et al.
Brain segmentation is a challenging problem due to the complexity of the brain. In this paper, we propose an automated brain segmentation method for 3D magnetic resonance (MR) brain images, which are represented as a sequence of 2D brain images. The proposed method consists of three steps: pre-processing, removal of non-brain regions (the skull, meninges, other organs, etc.), and spinal cord restoration. In pre-processing, we perform adaptive thresholding which takes into account the variable intensities of MR brain images corresponding to various image acquisition conditions. In the segmentation process, we iteratively apply 2D morphological operations and masking to the sequences of 2D sagittal, coronal, and axial planes in order to remove non-brain tissues. Next, the final 3D brain regions are obtained by applying an OR operation to the segmentation results of the three planes. Finally, we reconstruct the spinal cord truncated during the previous processes. Experiments were performed with fifteen 3D MR brain image sets with 8-bit gray scale. Experimental results show that the proposed algorithm is fast, and provides robust and satisfactory results.
Unsupervised clustering of dynamic PET images on the projection domain
Segmentation of dynamic PET images is an important preprocessing step for kinetic parameter estimation. A single time activity curve (TAC) is extracted for each segmented region. This TAC is then used to estimate the kinetic parameters of the segmented region. Current methods perform this task in two independent steps; first dynamic positron emission tomography (PET) images are reconstructed from the projection data using conventional tomographic reconstruction methods, then the time activity curves (TAC) of the pixels are clustered into a predetermined number of clusters. In this paper, we propose to cluster the regions of dynamic PET images directly on the projection data and simultaneously estimate the TAC of each cluster. This method does not require an intermediate step of tomographic reconstruction for each time frame. Therefore the dimensionality of the estimation problem is reduced. We compare the proposed method with weighted least squares (WLS) and expectation maximization with Gaussian mixtures methods (GMM-EM). Filtered backprojection is used to reconstruct the emission images required by these methods. Our simulation results show that the proposed method can substantially decrease the number of mislabeled pixels and reduce the root mean squared error (RMSE) of the cluster TACs.
Content analysis of uterine cervix images: initial steps toward content based indexing and retrieval of cervigrams
This work is motivated by the need for visual information extraction and management in the growing field of medical image archives. In particular, the work focuses on a unique medical repository of digital cervicographic images ("Cervigrams") collected by the National Cancer Institute (NCI) in a longitudinal multi-year study carried out in Guanacaste, Costa Rica. NCI together with the National Library of Medicine (NLM) is developing a unique Web-based database of the digitized cervix images to study the evolution of lesions related to cervical cancer. Such a database requires specific tools that can analyze the cervigram content and represent it in a way that can be efficiently searched and compared. We present a multi-step scheme for segmenting and labeling regions of medical and anatomical interest within the cervigram, utilizing statistical tools and adequate features. The multi-step structure is motivated by the large diversity of the images within the database. The algorithm identifies the cervix region within the image. It then separates the cervix region into three main tissue types: the columnar epithelium (CE), the squamous epithelium (SE), and the acetowhite (AW), which is visible for a short time following the application of acetic acid. The algorithm is developed and tested on a subset of 120 cervigrams that were manually labeled by NCI experts. Initial segmentation results are presented and evaluated.
Shape Poster Session
Topological analysis of 3D cell nuclei using finite element template-based spherical mapping
E. Gladilin, S. Goetze, J. Mateos-Langerak, et al.
Topological analysis of cells and subcellular structures on the basis of image data is one of the major trends in modern quantitative biology. However, due to the dynamic nature of cell biology, the optical appearance of different cells, or even of time series of the same cell, undergoes substantial variations in shape and texture, which makes the analysis of image data a non-trivial task. In the absence of canonical invariances, a natural approach to the normalization of cell images consists in dimension reduction of the 3D problem by means of spherical mapping, which enables the analysis of targeted regions in terms of radial distances. In this work, we present a finite element template-based approach to physically-based spherical mapping, which has been applied for the topological analysis of confocal laser scanning microscopy images of cell nuclei.
Quantitative comparison of delineated structure shape in radiotherapy
G. J. Price, C. J. Moore
There has been an influx of imaging and treatment technologies into cancer radiotherapy over the past fifteen years. The result is that radiation fields can now be accurately shaped to target disease delineated on pre-treatment planning scans whilst sparing critical healthy structures. Two well known problems remain causes for concern. The first is inter- and intra-observer variability in planning scan delineations, the second is the motion and deformation of a tumour and interacting adjacent organs during the course of radiotherapy which compromise the planned targeting regime. To be able to properly address these problems, and hence accurately shape the margins of error used to account for them, an intuitive and quantitative system of describing this variability must be used. This paper discusses a method of automatically creating correspondence points over similar non-polar delineation volumes, via spherical parameterisation, so that their shape variability can be analysed as a set of independent one dimensional statistical problems. The importance of 'pole' selection to initial parameterisation and hence ease of optimisation is highlighted, the use of sparse anatomical landmarks rather than spherical harmonic expansion for establishing point correspondence discussed, and point variability mapping introduced. A case study is presented to illustrate the method. A group of observers were asked to delineate a rectum on a series of time-of-treatment Cone Beam CT scans over a patient's fractionation schedule. The overall observer variability was calculated using the above method and the significance of the organ motion over time evaluated.
Sparse principal component analysis in medical shape modeling
Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.
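The simple-thresholding baseline mentioned above can be sketched as follows (ordinary PCA loadings with small entries zeroed and re-normalized); this is the comparison method, not the SPCA algorithm investigated in the article.

```python
import numpy as np

def thresholded_pca(X, n_modes, tau):
    """'Sparse PCA by simple thresholding of small loadings': compute ordinary
    PCA loadings, zero out entries below magnitude tau, re-normalize.
    X: (n_shapes, n_coordinates) matrix of aligned shape vectors."""
    Xc = X - X.mean(axis=0)                       # center the training shapes
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    loadings = Vt[:n_modes].T                     # columns are PCA modes
    sparse = np.where(np.abs(loadings) >= tau, loadings, 0.0)
    norms = np.linalg.norm(sparse, axis=0)
    return sparse / np.where(norms > 0, norms, 1.0)
```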
3D reconstruction of the coronary tree from two x-ray angiographic views
Nong Sang, Weixue Peng, Heng Li, et al.
In this paper, we develop a method for the reconstruction of the 3D coronary artery tree based on two perspective projections acquired on a standard single-plane angiographic system in the same systole. Our reconstruction is based on the model of generalized cylinders, which are generated by sweeping a two-dimensional cross section along an axis in three-dimensional space. We restrict the cross section to be circular and always perpendicular to the tangent of the axis. Firstly, the vascular centerlines of the X-ray angiography images in both projections are semiautomatically extracted by multiscale vessel tracking using Gabor filters, and the radii of the coronary arteries are acquired simultaneously. Secondly, the relative geometry of the two projections is determined from the gantry information, and 2D matching is realized through the epipolar geometry and the consistency of the vessels. Thirdly, we determine the three-dimensional (3D) coordinates of the identified object points from the image coordinates of the matched points and the calculated imaging system geometry. Finally, we link the consecutive cross sections, which are processed according to the radius and direction information, to obtain the 3D structure of the artery. The proposed 3D reconstruction method is validated on real data and is shown to perform robustly and accurately in the presence of noise.
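The third step, recovering 3D coordinates from matched 2D points and the known imaging geometry, can be illustrated with a standard linear (DLT) triangulation sketch; representing the gantry geometry as 3x4 projection matrices is an assumption for illustration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear triangulation of one matched centerline point from two views.
    P1, P2: 3x4 projection matrices of the two angiographic views;
    x1, x2: corresponding 2D image points (u, v)."""
    A = np.stack([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)          # null space of A gives the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]                  # 3D point in inhomogeneous coordinates
```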
Creation of three-dimensional craniofacial standards from CBCT images
Krishna Subramanyan, Martin Palomo, Mark Hans
Low-dose three-dimensional Cone Beam Computed Tomography (CBCT) is becoming increasingly popular in the clinical practice of dental medicine. Two-dimensional Bolton Standards of dentofacial development are routinely used to identify deviations from normal craniofacial anatomy. With the advent of CBCT three-dimensional imaging, we propose a set of methods to extend these 2D Bolton Standards to anatomically correct surface-based 3D standards to allow analysis of morphometric changes seen in the craniofacial complex. To create 3D surface standards, we have implemented a series of steps: 1) converting bi-plane 2D tracings into a set of splines, 2) converting the 2D spline curves from bi-plane projection into 3D space curves, 3) creating a labeled template of facial and skeletal shapes, and 4) creating 3D average surface Bolton standards. We have used datasets from patients scanned with a Hitachi MercuRay CBCT scanner, providing high-resolution and isotropic CT volume images, and digitized Bolton Standards of lateral and frontal male, female, and average tracings from ages 3 to 18 years, and converted them into facial and skeletal 3D space curves. This new 3D standard will help in assessing shape variations due to aging in the young population and provide a reference for correcting facial anomalies in dental medicine.
Quantifying torso deformity in scoliosis
Peter O. Ajemba, Anish Kumar, Nelson G. Durdle, et al.
Scoliosis affects the alignment of the spine and the shape of the torso. Most scoliosis patients and their families are more concerned about the effect of scoliosis on the torso than its effect on the spine. There is a need to develop robust techniques for quantifying torso deformity based on full torso scans. In this paper, deformation indices obtained from orthogonal maps of full torso scans are used to quantify torso deformity in scoliosis. 'Orthogonal maps' are obtained by applying orthogonal transforms to 3D surface maps. (An 'orthogonal transform' maps a cylindrical coordinate system to a Cartesian coordinate system.) The technique was tested on 361 deformed computer models of the human torso and on 22 scans of volunteers (8 normal and 14 scoliosis). Deformation indices from the orthogonal maps correctly classified up to 95% of the volunteers with a specificity of 1.00 and a sensitivity of 0.91. In addition to classifying scoliosis, the system gives a visual representation of the entire torso in one view and is viable for use in a clinical environment for managing scoliosis.
CAD Poster Session
AIS TLS-ESPRIT feature selection for prostate tissue characterization
S. S. Mohamed, A. M. Youssef, E. F. El-Saadany, et al.
The work in this paper aims to analyze spectral features of the prostate from Trans-Rectal Ultra-Sound (TRUS) images for tissue classification. This research is expected to augment beginner radiologists' decisions with the experience of more experienced radiologists. Moreover, since in some situations the biopsy results in false negatives due to inaccurate biopsy locations, this research also aims to assist in determining the biopsy locations in order to decrease the false-negative results. In this paper, a new technique for prostate tissue characterization is developed. The proposed system is composed of four stages. The first stage is automatically identifying Regions Of Interest (ROIs). This is achieved using the Gabor multiresolution analysis method, where preliminary regions are identified using the frequency response of the pixels; pixels that have the same response to the same filter are assigned to the same cluster. Next, the radiologist's knowledge is integrated into the system to select the most suspicious ROIs among the preliminarily identified regions. The second stage is constructing the spectral features from the identified ROIs. The proposed technique is based on a novel spectral feature set for the TRUS images using the Total Least Square Estimation of Signal Parameters via Rotational Invariance Techniques (TLS-ESPRIT). Classifier-based feature selection is then performed to select the most salient features using the recently proposed Artificial Immune System (AIS) optimization technique. Finally, a Support Vector Machine (SVM) classifier is used as an accuracy measure; the proposed system obtains a classification accuracy of 94.4%, with 100% sensitivity and 83.3% specificity.
An adaptive image segmentation process for the classification of lung biopsy images
Daniel W. McKee, Walker H. Land Jr., Tatyana Zhukov, et al.
The purpose of this study was to develop a computer-based second opinion diagnostic tool that could read microscope images of lung tissue and classify the tissue sample as normal or cancerous. This problem can be broken down into three areas: segmentation, feature extraction and measurement, and classification. We introduce a kernel-based extension of fuzzy c-means to provide a coarse initial segmentation, with heuristically-based mechanisms to improve the accuracy of the segmentation. The segmented image is then processed to extract and quantify features. Finally, the measured features are used by a Support Vector Machine (SVM) to classify the tissue sample. The performance of this approach was tested using a database of 85 images collected at the Moffitt Cancer Center and Research Institute. These images represent a wide variety of normal lung tissue samples, as well as multiple types of lung cancer. When used with a subset of the data containing images from the normal and adenocarcinoma classes, we were able to correctly classify 78% of the images, with an ROC Az of 0.758.
An automated normative-based fluorodeoxyglucose positron emission tomography image-analysis procedure to aid Alzheimer disease diagnosis using statistical parametric mapping and interactive image display
Kewei Chen, Xiaolin Ge, Li Yao, et al.
Having approved fluorodeoxyglucose positron emission tomography (FDG PET) for the diagnosis of Alzheimer's disease (AD) in some patients, the Centers for Medicare and Medicaid Services suggested the need to develop and test analysis techniques to optimize diagnostic accuracy. We developed an automated computer package comparing an individual's FDG PET image to those of a group of normal volunteers. The normal control group includes FDG-PET images from 82 cognitively normal subjects, 61.89±5.67 years of age, who were characterized demographically, clinically, neuropsychologically, and by their apolipoprotein E genotype (known to be associated with a differential risk for AD). In addition, AD-affected brain regions functionally defined based on a previous study (Alexander, et al., Am J Psychiatr, 2002) were also incorporated. Our computer package permits the user to optionally select control subjects, matching the individual patient for gender, age, and educational level. It is fully streamlined to require minimal user intervention. With one mouse click, the program runs automatically, normalizing the individual patient image, setting up a design matrix for comparing the single subject to a group of normal controls, performing the statistics, calculating the glucose reduction overlap index of the patient with the AD-affected brain regions, and displaying the findings in reference to the AD regions. In conclusion, the package automatically contrasts a single patient to a normal subject database using sound statistical procedures. With further validation, this computer package could be a valuable tool to assist physicians in decision making and in communicating findings with patients and patient families.
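A minimal sketch of the idea of comparing a single patient image against a normal database voxel-by-voxel is given below as a simple z-score map; the actual package uses SPM-based statistics and a design matrix, so this is only an illustration, with all names hypothetical.

```python
import numpy as np

def glucose_reduction_map(patient, normals, z_threshold=-1.96):
    """Voxel-wise comparison of a spatially normalized patient FDG-PET image
    with a matched normal database. 'normals' has shape (n_controls, ...) and
    is assumed to be already warped to the same template space."""
    mean = normals.mean(axis=0)
    std = normals.std(axis=0, ddof=1)
    z = (patient - mean) / np.maximum(std, 1e-6)     # per-voxel z-score
    hypometabolic = z < z_threshold                  # reduced glucose uptake
    return z, hypometabolic
```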
Lesion margin analysis for automated classification of cervical cancer lesions
Viara Van Raad, Zhiyun Xue, Holger Lange
Digital colposcopy is an emerging technology, replacing the traditional colposcope for the diagnosis of cervical lesions. Incorporating automated algorithms within a digital colposcopy system can improve the reliability and the diagnostic accuracy for cervical precancer and cancer. An automated computer-aided diagnosis (CAD) system can assess the three important cervical diagnostic cues, the color, the vascular patterns and the lesion margins, with quantitative measures, similar to the way colposcopists use the Reid's index in traditional colposcopy. In this work we present a novel way to analyze and classify the global and the local features of one of the three major components in colposcopy diagnosis: the lesion margins. The margins of a cervical lesion can be described as 'feathered,' 'geographic,' 'satellite,' 'regular or smooth' and 'margin-in-margin,' or they can be of mixed type. As margin characterization is a complex task, we use irregularity descriptors such as compactness indices and curvature descriptors. To address the complexity of the problem and the dependency on the scale and the position of the lesion in the cervical image, our method uses novel Fourier energy descriptors. The conceptually complex analysis of describing lesions as 'satellite' lesions or lesions with multiple margins is performed using descriptors in which the distance, the position and local statistical estimates of image intensity play an important role. We trained this new algorithm to classify and diagnose the cervix, evaluating only the lesions. The accuracy of the results is assessed against a 'ground truth' scheme, using colposcopists' annotations and pathology results. We report the resulting accuracy of the classification method assessed against this scheme.
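One plausible realization of Fourier-based margin descriptors is sketched below; the abstract does not give the exact formulation of its Fourier energy descriptors, so the normalization and the high-frequency energy measure are assumptions.

```python
import numpy as np

def fourier_margin_descriptors(boundary, n_harmonics=16):
    """Scale- and position-invariant Fourier descriptors of a lesion margin.
    'boundary' is an (N, 2) array of ordered margin points."""
    z = boundary[:, 0] + 1j * boundary[:, 1]            # complex contour signal
    F = np.fft.fft(z)
    F[0] = 0.0                                          # discard position (DC term)
    F = F / (np.abs(F[1]) + 1e-12)                      # discard scale
    mags = np.abs(F)
    harmonic = np.abs(np.fft.fftfreq(len(z), d=1.0 / len(z)))   # harmonic index
    coarse = mags[1:n_harmonics + 1]                    # coarse shape description
    # Share of spectral energy above n_harmonics: a margin-irregularity measure.
    irregularity = mags[harmonic > n_harmonics].sum() / (mags[harmonic > 0].sum() + 1e-12)
    return coarse, irregularity
```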
Computer-aided diagnosis of splenic enlargement using wave pattern of spleen in abdominal CT images
Won Seong, June-Sik Cho, Seung-Moo Noh, et al.
It is known that the spleen accompanied by liver cirrhosis is hypertrophied or enlarged. We have examined the wave pattern at the left boundary of the spleen on abdominal CT images of patients with liver cirrhosis, and found that it differs from that seen on images of patients with a normal liver. We noticed that the abdominal CT images of patients with liver cirrhosis show strong bending in the wave pattern. In the case of a normal liver, the images may also have a wave pattern, but its bends are not strong. Therefore, the total waving area of the spleen with liver cirrhosis is found to be greater than that of the spleen with a normal liver. Moreover, we found that the waves of the spleen in images with liver cirrhosis have a higher degree of circularity compared to the normal-liver case. Based on the two observations above, we propose an automatic method to diagnose splenic enlargement by using the wave pattern of the spleen in abdominal CT images. The proposed automatic method improves the diagnostic performance compared with the conventional process based on the size of the spleen.
Simulating nodules in chest radiographs with real nodules from multi-slice CT images
To improve the detection of nodules in chest radiographs, large databases of chest radiographs with annotated, proven nodules are needed for training of both radiologists and computer-aided detection systems. The construction of such databases is a laborious and time-consuming task. This study presents a novel technique to produce large amounts of chest x-rays with annotated, simulated nodules. Realistic nodules in radiographs are generated using real nodules segmented from CT images. Results from an observer study indicate that the simulated nodules can not be distinguished from real nodules. This method has great potential to aid the development of automated detection systems and to generate teaching files for human observers.
Hot spot detection, segmentation, and identification in PET images
Positron Emission Tomography (PET) images provide functional or metabolic information from areas of high concentration of [18F]fluorodeoxyglucose (FDG) tracer, the "hot spots". These hot spots can be easily detected by eye, but their delineation and size determination, required e.g. for the diagnosis and staging of cancer, is a tedious task that demands automation. The approach for such an automated hot spot segmentation described in this paper comprises three steps: region-of-interest detection by the watershed transform, heart identification by an evaluation of scan lines, and the final segmentation of hot spot areas by a local threshold. The region-of-interest detection is the essential step, since it localizes the hot spot identification and the final segmentation. The heart identification is an example of how to differentiate between hot spots. Finally, we demonstrate the combination of PET and CT data. Our method is applicable to other techniques like SPECT.
Improving computer-aided diagnosis of interstitial disease in chest radiographs by combining one-class and two-class classifiers
In this paper we compare and combine two distinct pattern classification approaches to the automated detection of regions with interstitial abnormalities in frontal chest radiographs. Standard two-class classifiers and recently developed one-class classifiers are considered. The one-class problem is to find the best model of the normal class and reject all objects that do not fit the model of normality. This one-class methodology was developed to deal with poorly balanced classes, and it uses only objects from a well-sampled class for training. This may be an advantageous approach in medical applications, where normal examples are easier to obtain than abnormal cases. We used receiver operating characteristic (ROC) analysis to evaluate the classification performance of the different methods as a function of the number of abnormal cases available for training. Various two-class classifiers performed excellently when enough abnormal examples were available (area under the ROC curve Az = 0.985 for a linear discriminant classifier). The one-class approach gave worse results when used stand-alone (Az = 0.88 for Gaussian data description), but the combination of both approaches, using a mean combining classifier, resulted in better performance when only few abnormal samples were available (average Az = 0.94 for the combination and Az = 0.91 for the stand-alone linear discriminant in the same set-up). This indicates that computer-aided diagnosis schemes may benefit from using a combination of two-class and one-class approaches when only few abnormal samples are available.
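A minimal sketch of combining a two-class linear discriminant with a one-class Gaussian data description by a mean rule is shown below; the mapping of the one-class log-density to a pseudo-probability is a simplifying assumption, not the authors' calibration procedure.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def combined_abnormality_score(X_normal, X_abnormal, X_test):
    """Mean combination of a two-class LDA and a one-class Gaussian data
    description trained on feature vectors of image regions."""
    # Two-class classifier trained on both classes.
    X = np.vstack([X_normal, X_abnormal])
    y = np.r_[np.zeros(len(X_normal)), np.ones(len(X_abnormal))]
    lda = LinearDiscriminantAnalysis().fit(X, y)
    p_two = lda.predict_proba(X_test)[:, 1]
    # One-class Gaussian description fitted to normals only; low density under
    # the normal model is mapped to a high abnormality score (ad-hoc scaling).
    gauss = multivariate_normal(X_normal.mean(axis=0),
                                np.cov(X_normal, rowvar=False), allow_singular=True)
    logp = gauss.logpdf(X_test)
    ref = gauss.logpdf(X_normal)
    p_one = 1.0 - np.clip((logp - ref.min()) / (ref.max() - ref.min() + 1e-12), 0, 1)
    return 0.5 * (p_two + p_one)          # mean combining rule
```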
Computer aided lytic bone metastasis detection using regular CT images
Jianhua Yao, Stacy D. O'Connor, Ronald Summers
This paper presents a computer aided detection system to find lytic bone metastases in the spine. The CAD system is designed to run on routine chest and/or abdominal CT exams (5mm slice thickness) obtained during a patient's evaluation for other indications. The system can therefore serve as a background procedure to detect bone metastases. The spine is first automatically extracted based on adaptive thresholding, morphological operation, and region growing. The spinal cord is then traced from thoracic spine to lumbar spine using a dynamic graph search to set up a local spine coordinate system. A watershed algorithm is then applied to detect potential lytic bone lesions. A set of 26 quantitative features (density, shape and location) are computed for each detection. After a filter on the features, Support Vector Machines (SVM) are used as classifiers to determine if a detection is a true lesion. The SVM was trained using ground truth segmentation manually defined by experts.
Hybrid committee classifiers for a computerized colonic polyp detection system
We present a hybrid committee classifier for computer-aided detection (CAD) of colonic polyps in CT colonography (CTC). The classifier involved an ensemble of support vector machines (SVM) and neural networks (NN) for classification, a progressive search algorithm for selecting a set of features used by the SVMs, and a floating search algorithm for selecting features used by the NNs. A total of 102 quantitative features were calculated for each polyp candidate found by a prototype CAD system. Three features were selected for each of 7 SVM classifiers, which were then combined to form a committee of SVMs. Similarly, features (numbers varying from 10-20) were selected for 11 NN classifiers, which were again combined to form an NN committee classifier. Finally, a hybrid committee classifier was defined by combining the outputs of both the SVM and NN committees. The method was tested on CTC scans (supine and prone views) of 29 patients, in terms of the partial area under a free-response receiver operating characteristic (FROC) curve (AUC). Our results showed that the hybrid committee classifier performed the best for the prone scans and was comparable to other classifiers for the supine scans.
Measurement of colonic polyp size from virtual colonoscopy studies: comparison of manual and automated methods
Metin N. Gurcan, Randy Ernst M.D., Aytekin Oto M.D., et al.
Polyp size is an important feature descriptor for clinical classification and follow-up decision making in CT colonography. Currently, polyp size is measured from computed tomography (CT) studies manually as the single largest dimension of the polyp head, excluding the stalk if present, in either multi-planar reconstruction (MPR) or three-dimensional (3D) views. Manual measurements are subject to intra- and inter-reader variation, and can be time-consuming. Automated polyp segmentation and size measurement can reduce the variability and speed up the process. In this study, an automated polyp size measurement technique is developed. Using this technique, the polyp is segmented from the attached healthy tissue using a novel, model-based approach. The largest diameter of the segmented polyp is measured in axial, sagittal and coronal MPR views. An expert radiologist identified 48 polyps from either supine or prone views of 52 cases of the Walter Reed virtual colonoscopy database. Automated polyp size measurements were carried out and compared with the manual ones. For comparison, three different statistical methods were used: overall agreement using chance-corrected kappa indices; the mean absolute differences; and Bland-Altman limits of agreement. Manual and automated measurements show good agreement both in 2D and 3D views.
Effect of quantization on co-occurrence matrix based texture features: an example study in mammography
A co-occurrence matrix is a joint probability distribution of the values of two pixels in an image separated by a distance d in the direction θ. It is one of the texture analysis tools favored by the medical image processing community. The size of a co-occurrence matrix depends on the gray-level re-quantization Q. Hence, when dealing with high depth-resolution images, gray-level re-quantization is routinely performed to reduce the size of the co-occurrence matrix. The gray-level re-quantization may play a role in how spatial relationships are represented in the co-occurrence matrix, but it is usually dealt with lightly. In this paper, we use an example to study the effect of gray-level re-quantization in high depth-resolution medical images. Digitized film-screen mammograms have a typical depth resolution of 4096 gray levels. In a study classifying masses on mammograms as benign or malignant, 260 texture features are measured on 43 regions of interest (ROIs) containing malignant masses and 28 ROIs containing benign masses. Of the 260 texture features, 240 are measured on co-occurrence matrices with parameters θ = 0, π/2; d = 11, 15, 21, 25, 31; and Q = 50, 100, 400. A genetic algorithm is used to select a subset of features (out of 260) that has discriminative power. Results show that the top-performing feature combinations selected by the genetic algorithm are not restricted to a single value of Q. This indicates that instead of searching for a correct Q, it may be more appropriate to explore a range of Q values.
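For reference, a co-occurrence matrix for a given re-quantization Q, displacement d, and direction θ can be computed as in the sketch below; linear re-quantization of the 12-bit range is assumed for illustration.

```python
import numpy as np

def cooccurrence_matrix(img, Q, d=11, theta=0.0):
    """Co-occurrence matrix of a 12-bit image re-quantized to Q gray levels,
    for pixel pairs separated by distance d in direction theta (radians)."""
    q = np.clip((img.astype(np.int64) * Q) // 4096, 0, Q - 1)   # linear re-quantization
    dy = int(round(d * np.sin(theta)))
    dx = int(round(d * np.cos(theta)))
    h, w = q.shape
    # Index ranges for which both members of the pixel pair stay inside the image.
    y0, y1 = max(0, -dy), min(h, h - dy)
    x0, x1 = max(0, -dx), min(w, w - dx)
    a = q[y0:y1, x0:x1].ravel()
    b = q[y0 + dy:y1 + dy, x0 + dx:x1 + dx].ravel()
    P = np.zeros((Q, Q), dtype=np.float64)
    np.add.at(P, (a, b), 1.0)            # accumulate pair counts
    return P / P.sum()                   # joint probability distribution
```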
Development of computerized method for detection of vertebral fractures on lateral chest radiographs
Satoshi Kasai, Feng Li, Junji Shiraishi, et al.
Osteoporosis is one of the major public health concerns in the world. Several clinical trials indicated clearly that pharmacologic therapy for osteoporosis is effective for persons with vertebral fractures for preventing subsequent fractures. It is, therefore, important to diagnose vertebral fractures early. Although most vertebral fractures are asymptomatic, they can often be detected on lateral chest radiographs which may be obtained for other purposes. However, investigators have reported that vertebral fractures which were visible on lateral chest radiographs were underdiagnosed or underreported. Therefore, our purpose in this study was to develop a computerized method for detection of vertebral fractures on lateral chest radiographs and to assist radiologists' image interpretation. Our computerized scheme is based on the detection of upper and lower edges of vertebrae on lateral chest images. A curved rectangular area which included a number of visible vertebrae was identified. This area was then straightened such that the upper and lower edges of the vertebrae were oriented horizontally. For detection of vertebral edges, line components were enhanced, and a multiple thresholding technique followed by image feature analysis was applied to the line enhanced image. Finally, vertebral heights determined from the detected vertebral edges were used for characterizing the shape of the vertebrae and for distinguishing fractured from normal vertebrae. Our preliminary results indicated that all of the severely fractured vertebrae in a small database were detected correctly by our computerized method.
Automatic colonic polyp detection using multi-objective evolutionary techniques
Colonic polyps appear like elliptical protrusions on the inner wall of the colon. Curvature-based features for colonic polyp detection have proved to be successful in several computer-aided diagnostic CT colonography (CTC) systems. Some simple thresholds are set for those features for creating initial polyp candidates; sophisticated classification schemes are then applied on these polyp candidates to reduce false positives. There are two objective functions, the number of missed polyps and the false positive rate, that need to be minimized when setting those thresholds. These two objectives conflict, and it is usually difficult to optimize them both by a gradient search. In this paper, we utilized a multiobjective evolutionary method, the Strength Pareto Evolutionary Algorithm (SPEA2), to optimize those thresholds. SPEA2 incorporates the concept of Pareto dominance and applies genetic techniques to evolve individual solutions to the Pareto front. The SPEA2 algorithm was applied to colon CT images from 27 patients each having a prone and a supine scan. There are 40 colonoscopically confirmed polyps resulting in 72 positive detections in CTC reading. The results obtained by SPEA2 were compared with those obtained by our old system, where an appropriate value was set for each of those thresholds by a histogram examination method. If we keep the sensitivity the same as that of our old system, the SPEA2 algorithm reduced the false positive rate by 76.4%, from an average of 55.6 false positives per data set to 13.3. If the false positive rate is kept the same for both systems, SPEA2 increased the sensitivity by 13.1%, from 53 to 61 among 72 ground-truth detections.
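The Pareto-dominance relation at the core of SPEA2-style selection can be illustrated with the small sketch below, which filters candidate threshold settings to the non-dominated set; it is not the full SPEA2 algorithm with fitness assignment, archiving, and genetic variation.

```python
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated candidates. 'objectives' is an (n, 2)
    array of (missed_polyps, false_positive_rate), both to be minimized."""
    n = len(objectives)
    nondominated = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(objectives[j] <= objectives[i]) \
                    and np.any(objectives[j] < objectives[i]):
                nondominated[i] = False       # candidate i is dominated by j
                break
    return np.where(nondominated)[0]
```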
False-positive elimination for computer-aided detection of pulmonary micronodules
Sukmoon Chang, Jinghao Zhou, Dimitris N. Metaxas, et al.
Computed Tomography (CT) is generally accepted as the most sensitive modality for lung cancer screening. Its high contrast resolution allows the detection of small nodules and, thus, of lung cancer at a very early stage. Due to the amount of data it produces, however, automating the nodule detection process is desirable. The challenging problem for any nodule detection system is to keep the false-positive detection rate low while maintaining high sensitivity. In this paper, we first describe a 3D filter-based method for pulmonary micronodule detection from high-resolution 3D chest CT images. Then, we propose a false-positive elimination method based on a deformable model. Finally, we present promising results of applying our method to various clinical chest CT datasets, with an over 90% detection rate. The proposed method focuses on the automatic detection of both calcified (high-contrast) and noncalcified (low-contrast) granulomatous nodules less than 5 mm in diameter.
Confidence-based stratification of CAD recommendations with application to breast cancer detection
Piotr A. Habas, Jacek M. Zurada, Adel S. Elmaghraby, et al.
We present a risk stratification methodology for predictions made by computer-assisted detection (CAD) systems. For each positive CAD prediction, the proposed technique assigns an individualized confidence measure as a function of the actual CAD output, the case-specific uncertainty of the prediction estimated from the system's performance for similar cases, and the value of the operating decision threshold. The study was performed using a mammographic database containing 1,337 regions of interest (ROIs) with known ground truth (681 with masses, 656 with normal parenchyma). Two types of decision models, (1) a support vector machine (SVM) with a radial basis function kernel and (2) a back-propagation neural network (BPNN), were developed to detect masses based on 8 morphological features automatically extracted from each ROI. The study shows that as requirements on the minimum confidence value are restricted, the positive predictive value (PPV) for qualifying cases steadily improves (from PPV = 0.73 to PPV = 0.97 for the SVM, from PPV = 0.67 to PPV = 0.95 for the BPNN). The proposed confidence metric was successfully applied for stratification of CAD recommendations into 3 categories of different expected reliability: HIGH (PPV = 0.90), LOW (PPV = 0.30) and MEDIUM (all remaining cases). Since radiologists often disregard accurate CAD cues, an individualized confidence measure should improve their ability to correctly process visual cues and thus reduce the interpretation error associated with the detection task. While keeping the clinically determined operating point satisfied, the proposed methodology draws the CAD users' attention to the cases and regions of highest risk and helps them confidently eliminate cases with low risk.
Centerline-based colon segmentation for CAD of CT colonography
We developed a fast centerline-based segmentation (CBS) algorithm for the extraction of colon in computer-aided detection (CAD) for CT colonography (CTC). CBS calculates local centerpoints along thresholded components of abdominal air, and connects the centerpoints iteratively to yield a colon centerline. A thick region encompassing the colonic wall is extracted by use of region-growing around the centerline. The resulting colonic wall is employed in our CAD scheme for the detection of polyps, in which polyps are detected within the wall by use of volumetric shape features. False-positive detections are reduced by use of a Bayesian neural network. The colon extraction accuracy of CBS was evaluated by use of 38 clinical CTC scans representing various preparation conditions. On average, CBS covered more than 96% of the visible region of colon with less than 1% extracolonic components in the extracted region. The polyp detection performance of the CAD scheme was evaluated by use of 121 clinical cases with 42 colonoscopy-confirmed polyps 5-25 mm. At a 93% by-polyp detection sensitivity for polyps ≥5 mm, a leave-one-patient-out evaluation yielded 1.4 false-positive polyp detections per CT scan.
Local pulmonary structure classification for computer-aided nodule detection
Claus Bahlmann, Xianlin Li, Kazunori Okada
We propose a new method of classifying local structure types, such as nodules, vessels, and junctions, in thoracic CT scans. This classification is important in the context of computer-aided detection (CAD) of lung nodules. The proposed method can be used as a post-processing component of any lung CAD system. In such a scenario, the classification results provide an effective means of removing false positives caused by vessels and junctions, thus improving overall performance. As its main advantage, the proposed solution transforms the complex problem of classifying various 3D topological structures into a much simpler 2D data clustering problem, to which more generic and flexible solutions are available in the literature, and which is better suited for visualization. Given a nodule candidate, our solution first robustly fits an anisotropic Gaussian to the data. The resulting Gaussian center and spread parameters are used to affine-normalize the data domain so as to warp the fitted anisotropic ellipsoid into a fixed-size isotropic sphere. We propose an automatic method to extract a 3D spherical manifold containing the appropriate bounding surface of the target structure. Scale selection is performed by a data-driven entropy minimization approach. The manifold is analyzed for high-intensity clusters corresponding to protruding structures. The techniques involved are EM clustering with automatic mode number estimation, directional statistics, and hierarchical clustering with a modified Bhattacharyya distance. The estimated number of high-intensity clusters explicitly determines the type of pulmonary structure: nodule (0), attached nodule (1), vessel (2), or junction (≥3). We show accurate classification results for selected examples in thoracic CT scans. This local procedure is more flexible and efficient than the current state of the art and will help to improve the accuracy of general lung CAD systems.
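A minimal sketch of the final labeling step described above, assuming the number of high-intensity clusters has already been estimated; the function name is hypothetical.

```python
# Illustrative mapping from the number of detected high-intensity clusters on the
# spherical manifold to a pulmonary structure label, following the rule stated in
# the abstract (0 -> nodule, 1 -> attached nodule, 2 -> vessel, >=3 -> junction).

def structure_type(n_clusters: int) -> str:
    if n_clusters == 0:
        return "nodule"
    if n_clusters == 1:
        return "attached nodule"
    if n_clusters == 2:
        return "vessel"
    return "junction"

for n in (0, 1, 2, 4):
    print(n, "->", structure_type(n))
```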
Power spectral analysis of mammographic parenchymal patterns
Hui Li, Maryellen L. Giger, Olufunmilayo I. Olopade
Mammographic density and parenchymal patterns have been shown to be associated with the risk of developing breast cancer. Two groups of women, gene-mutation carriers and low-risk women, were included in this study. Power spectral analysis was performed within parenchymal regions of 172 digitized craniocaudal normal mammograms of BRCA1/BRCA2 gene-mutation carriers and of women at low risk of developing breast cancer. A power-law spectrum of the form P(f) = B/f^β was evaluated for the mammographic patterns. Receiver operating characteristic (ROC) analysis was used to assess the performance of the exponent β as a decision variable in the task of distinguishing between high- and low-risk subjects. Power spectral analysis of the mammograms demonstrated that mammographic parenchymal patterns have a power-law spectrum of the form P(f) = B/f^β, where f is radial spatial frequency, with average β values of 2.92 and 2.47 for the gene-mutation carriers and the low-risk women, respectively. Az values of 0.90 and 0.89 were achieved in distinguishing between the gene-mutation carriers and the low-risk women with the individual image β value as the decision variable in the entire database and in the age-matched group, respectively.
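A hedged sketch of how the exponent β of a P(f) = B/f^β spectrum can be estimated by a log-log linear fit to a radially averaged power spectrum; this is a generic illustration, not the authors' implementation, and `roi` is a placeholder for a parenchymal region.

```python
# Minimal sketch of estimating the power-law exponent beta in P(f) = B / f**beta
# from a radially averaged power spectrum of a 2-D region `roi` (a NumPy array).
import numpy as np

def power_law_exponent(roi):
    ps = np.abs(np.fft.fftshift(np.fft.fft2(roi))) ** 2      # 2-D power spectrum
    cy, cx = np.array(ps.shape) // 2
    y, x = np.indices(ps.shape)
    r = np.hypot(y - cy, x - cx).astype(int)                  # radial frequency bins
    radial = np.bincount(r.ravel(), weights=ps.ravel()) / np.bincount(r.ravel())
    f = np.arange(1, len(radial) // 2)                        # skip DC, keep low half
    logf, logp = np.log(f), np.log(radial[1:len(radial) // 2])
    slope, _ = np.polyfit(logf, logp, 1)                      # log P = log B - beta * log f
    return -slope                                             # beta

beta = power_law_exponent(np.random.rand(256, 256))
print("estimated beta:", beta)
```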
Discrimination of malignant lymphomas and leukemia using Radon transform-based higher order spectra
Yi Luo, Mehmet Celenk, Prashanth Bejai
A new algorithm that can automatically recognize and classify malignant lymphomas and leukemia is proposed in this paper. The algorithm utilizes morphological watersheds to obtain cell boundaries from cell images and isolate the cells from the surrounding background. The cell areas are extracted from the images after background subtraction. The Radon transform and higher-order spectra (HOS) analysis are utilized as image processing tools to generate class feature vectors for the different cell types and to extract the feature vectors of test cells. The test cells' feature vectors are then compared with the known class feature vectors for a possible match by computing Euclidean distances. The cell in question is classified as belonging to one of the existing cell classes in the least-Euclidean-distance sense.
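As an illustration of the final matching step only, here is a minimal nearest-class classifier in the least-Euclidean-distance sense; the feature vectors are placeholders, and the Radon/HOS feature extraction is assumed to have been performed already.

```python
# A minimal nearest-class classifier: assign the test cell to the class whose
# reference feature vector has the smallest Euclidean distance.
import numpy as np

def classify(test_vector, class_vectors):
    """class_vectors: dict mapping class name -> reference feature vector."""
    distances = {name: np.linalg.norm(test_vector - ref)
                 for name, ref in class_vectors.items()}
    return min(distances, key=distances.get)

classes = {"lymphoma": np.array([0.8, 0.1, 0.3]),
           "leukemia": np.array([0.2, 0.7, 0.5])}
print(classify(np.array([0.75, 0.2, 0.35]), classes))   # -> lymphoma
```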
Computer-aided detection system of breast masses on ultrasound images
We have investigated a computer-aided detection (CAD) system for breast masses on screening ultrasound (US) images. Many computer-aided detection and diagnosis methods for US images have been developed by researchers around the world. However, some methods require substantial computation time to analyze a US image, and some systems also need a radiologist to indicate the masses in advance. In this paper, we propose a fast automatic detection system that utilizes edge information to detect masses. Our method consists of the following steps: (1) noise reduction and image normalization, (2) determination of the region of interest (ROI) using vertical edges detected by the Canny edge detector, (3) segmentation of the ROI using the watershed algorithm, and (4) reduction of false positives. This study employs 11 whole-breast cases with a total of 924 images. All the cases were diagnosed by a radiologist prior to the study. The database contains 11 malignant masses. These malignant masses have heterogeneous internal echoes, a low or equal echo level, and a deficient or absent posterior echo. Using the proposed method, the sensitivity in detecting malignant masses is 90.9% (10/11) and the number of false positives per image is 0.69 (633/924). We conclude that our method is effective for detecting breast masses on US images.
Highly automated computer-aided diagnosis of neurological disorders using functional brain imaging
P. G. Spetsieris, Y. Ma, V. Dhawan, et al.
We have implemented a highly automated analytical method for computer aided diagnosis (CAD) of neurological disorders using functional brain imaging that is based on the Scaled Subprofile Model (SSM). Accurate diagnosis of functional brain disorders such as Parkinson's disease is often difficult clinically, particularly in early stages. Using principal component analysis (PCA) in conjunction with SSM on brain images of patients and normals, we can identify characteristic abnormal network covariance patterns which provide a subject dependent scalar score that not only discriminates a particular disease but also correlates with independent measures of disease severity. These patterns represent disease-specific brain networks that have been shown to be highly reproducible in distinct groups of patients. Topographic Profile Rating (TPR) is a reverse SSM computational algorithm that can be used to determine subject scores for new patients on a prospective basis. In our implementation, reference values for a full range of patients and controls are automatically accessed for comparison. We also implemented an automated recalibration step to produce reference scores for images generated in a different imaging environment from that used in the initial network derivation. New subjects under the same setting can then be evaluated individually and a simple report is generated indicating the subject's classification. For scores near the normal limits, additional criteria are used to make a definitive diagnosis. With further refinement, automated TPR can be used to efficiently assess disease severity, monitor disease progression and evaluate treatment efficacy.
Potential improvement of computerized mass detection on mammograms using a bilateral pairing technique
We are developing a bilateral pairing technique to help reduce false positives identified by a single-view computer-aided detection (CAD) system for breast masses. In this study, we compare the performance of the proposed bilateral CAD to a single-view CAD. A database of 172 right/left breast pairs containing 205 biopsy-proven masses was used. Single-view CAD was run on each image using a lax selection threshold so that 5 objects per image were retained. The automated bilateral pairing algorithm identified all objects in a left breast mammogram "matching" a CAD-detected object in the corresponding right breast image, and vice versa. Bilateral pairing was based on geometrical correspondence between objects, with a matching score derived from a paired right/left object feature set and a linear discriminant analysis classifier. Leave-one-out resampling was used to train/test the technique. We compared the FROC performances of the single-view CAD, the proposed bilateral technique, and a modified CAD using manual pairing of bilateral structures. At a per-lesion detection sensitivity of 0.7, there were 3.8 FPs/image for the original CAD, 3.3 for the proposed technique, and 2.2 for the modified CAD using manual matching, a 12.6% and 42.1% reduction, respectively. At an FP rate of 1.0 per image, the sensitivities for the original CAD, the proposed technique, and the modified CAD using manual matching were 0.47, 0.51, and 0.60, respectively. Preliminary results show that CAD with bilateral pairing did not achieve a significant FP reduction.
Mammographic CADx system using an image library with an intelligent agent: a pattern matching approach
It is conceivable that a comprehensive clinical case library with intelligent agents could sort and render clinically similar cases and present clinically significant features to assist the radiologist in interpreting mammograms. In this study, we used a deformable vector diagram as the primary framework for matching mammographic masses. The vector diagram provides gradient and shape features of the mass, and the deformable algorithm allows flexible matching. The vector diagram was also combined with our newly developed delineation method, which uses the steepest changes of a probability-based cost function. This allows us to automatically extract the main body and the significant part of the border region for pattern matching using a weighted mutual information technique. We collected 86 mammograms; 46 of these cases contain a benign mass and the other 40 contain a malignant mass. Using the weighted mutual information technique on the vector diagram of the mass region, we found that the benign masses could be sorted into 6 groups, with one exception, and the malignant masses could be sorted into 8 groups, with two exceptions. For all 86 cases, the masses could be sorted into 13 groups, with three exceptions. In addition, one group of benign masses and one group of malignant masses merged into a single group containing 10 cases. Hence, the sorting success rate was 85.7% (12/14) in terms of groups and 84.9% (73/86) in terms of cases.
Regularized discriminant analysis for breast mass detection on full field digital mammograms
Jun Wei, Berkman Sahiner, Yiheng Zhang, et al.
In computer-aided detection (CAD) applications, an important step is to design a classifier for the differentiation of the abnormal from the normal structures. We have previously developed a stepwise linear discriminant analysis (LDA) method with simplex optimization for this purpose. In this study, our goal was to investigate the performance of a regularized discriminant analysis (RDA) classifier in combination with a feature selection method for classification of the masses and normal tissues detected on full field digital mammograms (FFDM). The feature selection scheme combined a forward stepwise feature selection process and a backward stepwise feature elimination process to obtain the best feature subset. An RDA classifier and an LDA classifier in combination with this new feature selection method were compared to an LDA classifier with stepwise feature selection. A data set of 130 patients containing 260 mammograms with 130 biopsy-proven masses was used. All cases had two mammographic views. The true locations of the masses were identified by experienced radiologists. To evaluate the performance of the classifiers, we randomly divided the data set into two independent sets of approximately equal size for training and testing. The training and testing were performed using the 2-fold cross validation method. The detection performance of the CAD system was assessed by free response receiver operating characteristic (FROC) analysis. The average test FROC curve was obtained by averaging the FP rates at the same sensitivity along the two corresponding test FROC curves from the 2-fold cross validation. At the case-based sensitivities of 90%, 80% and 70% on the test set, our RDA classifier with the new feature selection scheme achieved an FP rate of 1.8, 1.1, and 0.6 FPs/image, respectively, compared to 2.1, 1.4, and 0.8 FPs/image with stepwise LDA with simplex optimization. Our results indicate that RDA in combination with the sequential forward inclusion-backward elimination feature selection method can improve the performance of mass detection on mammograms. Further work is underway to optimize the feature selection and classification scheme and to evaluate if this approach can be generalized to other CAD classification tasks.
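For readers unfamiliar with RDA, a minimal sketch of the covariance regularization at its core (Friedman-style shrinkage toward the pooled covariance and then toward a scaled identity); the parameter names and values are illustrative, not those selected in the study.

```python
# Sketch of RDA-style covariance regularization. With lam = 1, gamma = 0 the pooled
# covariance (LDA) is recovered; with lam = 0, gamma = 0 the class covariance (QDA).
import numpy as np

def rda_covariance(class_cov, pooled_cov, lam=0.5, gamma=0.1):
    """Shrink a class covariance toward the pooled covariance, then toward identity."""
    cov = (1.0 - lam) * class_cov + lam * pooled_cov
    p = cov.shape[0]
    return (1.0 - gamma) * cov + gamma * (np.trace(cov) / p) * np.eye(p)

print(rda_covariance(np.diag([4.0, 1.0]), np.eye(2)))
```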
Characterization of corresponding microcalcification clusters on temporal pairs of mammograms for interval change analysis: comparison of classifiers
Lubomir Hadjiiski, Douglas Drouillard, Heang-Ping Chan, et al.
We are developing an automated system for the analysis of microcalcification clusters on serial mammograms. Our automated system consists of two stages: (1) automatic registration of corresponding clusters on temporal pairs of mammograms, producing true (TP-TP) and false (TP-FP) pairs; and (2) characterization of temporal pairs of clusters as malignant or benign using a temporal classifier. In this study, we focused on the design of the temporal classifier. Morphological and texture (RLS and GLDS) features are automatically extracted from the detected current and prior cluster locations. Additionally, difference morphological and RLS features are obtained. The automatically detected cluster locations on the temporal pairs may deviate from the optimal locations as selected by expert radiologists. This introduces "noise" into the extracted features and makes the classification task more difficult. Linear discriminant analysis (LDA) and support vector machine (SVM) classifiers were trained to classify the true and false pairs. A leave-one-case-out resampling method was used for feature selection and classifier design. In this study, 175 serial mammogram pairs containing biopsy-proven microcalcification clusters were used. At the first stage of the system, 85% (149/175) of the TP-TP pairs were identified, with 15 false matches within the 164 image pairs that had computer-detected clusters on the priors. At the second stage, an average of 7 features were selected (4 difference morphological, 1 difference RLS and 2 current GLDS). The LDA and SVM temporal classifiers achieved test Az values of 0.83 and 0.82, respectively, for the classification of the 164 cluster temporal pairs as malignant or benign. In comparison, an MQSA radiologist achieved an Az of 0.72. Both the LDA and SVM classifiers were able to classify the automatically detected temporal pairs of microcalcification clusters with accuracy comparable to that of an experienced radiologist.
Computer-aided detection of clustered microcalcifications on full-field digital mammograms: a two-view information fusion scheme for FP reduction
We are developing new techniques to improve the performance of our computer-aided detection (CAD) system for clustered microcalcifications on full-field digital mammograms (FFDMs). In this study, we designed an information fusion scheme by using joint two-view information on craniocaudal (CC) and mediolateral-oblique (MLO) views. After cluster candidates were detected using a single-view detection technique, candidates on CC and MLO views were paired using their geometrical information. Candidate pairs were classified as true and false pairs with a similarity classifier that used the joint information from both views. Each cluster candidate was also characterized by its single-view features. The outputs of the similarity classifier and the single-view classifier were fused and the cluster candidate was classified as a true microcalcification cluster or a false-positive (FP) using the fused two-view information. A data set of 192 FFDM images was collected from 96 patients at the University of Michigan. All patients had two mammographic views. This data set contained 96 microcalcification clusters, of which 28 clusters were proven by biopsy to be malignant and 68 were proven to be benign. For training and testing the classifiers, the data set was partitioned into two independent subsets with the malignant cases equally distributed to the two subsets. One subset was used for training and the other subset was used for testing. We compared three computerized methods for geometrically pairing cluster candidates on two mammographic views. The areas under the fitted ROC curves were 0.75±0.01, 0.74±0.01, and 0.76±0.01 for the three methods, respectively. The difference between any two methods measured by the area under the fitted ROC curve, Az, was not statistically significant (p > 0.05). We also evaluated a new hybrid pairing scheme that used two different sensitivity levels for defining cluster pairs based on the single-view scores. The single-view CAD system achieved cluster-based sensitivities of 75%, 80%, and 85% at 0.48, 0.86, and 1.05 FPs/image, respectively. The joint two-view CAD system achieved the same sensitivity levels at 0.29, 0.46, and 0.89 FPs/image. When the hybrid pairing was used in the joint two-view CAD system, the same cluster-based sensitivities were achieved at 0.26, 0.37, and 0.88 FPs/image. Our results indicate that correspondence of cluster candidates on two different views provides valuable additional information for distinguishing FPs from true microcalcification clusters.
Computerized lung nodule detection on screening CT scans: performance on juxta-pleural and internal nodules
We are developing a computer-aided detection (CAD) system for lung nodules in thoracic CT volumes. Our CAD system includes an adaptive 3D pre-screening algorithm to segment suspicious objects, and a false-positive (FP) reduction stage to classify the segmented objects as true nodules or normal lung structures. We found that the effectiveness of the FP reduction stage was limited by the different characteristics of the objects in the internal and the juxta-pleural (JP) regions. The purpose of this study was to evaluate object characteristics in the internal and JP regions of a lung CT scan, and to develop different FP reduction classifiers for JP and internal objects. Our FP reduction technique utilized shape, grayscale, and gradient features, as well as the scores of a newly-developed neural network trained on the eigenvalues of the Hessian matrix in a volume of interest containing the suspicious object. We designed an algorithm to automatically label the objects as internal or JP. Based on a training set of 75 CT scans containing internal and JP nodules, two FP classifiers were trained separately for objects in the two types of lung regions. The system performance was evaluated on an independent test set of 27 low dose screening scans. An experienced chest radiologist identified 64 solid nodules (mean diameter: 5.3 mm, range: 3.0-12.9 mm) on the test cases, of which 33 were internal and 31 were JP. Our adaptive 3D prescreening algorithm detected 28 internal and 29 JP nodules. At 80% sensitivity, the average number of FPs was 3.9 and 9.7 in the internal and JP regions per scan, respectively. In comparison, a classifier designed to work on both types of nodules had an average of 29.4 FPs per scan at the same sensitivity. Our results indicate that it is more effective to use two different classifiers for JP and internal nodules because of their different characteristics. FPs in the JP region were more difficult to distinguish from true nodules. Further investigation of task-specific FP reduction techniques is needed.
Detection of blue-white veil areas in dermoscopy images using machine learning techniques
M. Emre Celebi, Hassan A. Kingravi, Y. Alp Aslandogan, et al.
As a result of the advances in skin imaging technology and the development of suitable image processing techniques, during the last decade, there has been a significant increase of interest in the computer-aided diagnosis of skin cancer. Dermoscopy is a non-invasive skin imaging technique which permits visualization of features of pigmented melanocytic neoplasms that are not discernable by examination with the naked eye. One of the useful features in dermoscopic diagnosis is the blue-white veil (irregular, structureless areas of confluent blue pigmentation with an overlying white "ground-glass" film) which is mostly associated with invasive melanoma. In this preliminary study, a machine learning approach to the detection of blue-white veil areas in dermoscopy images is presented. The method involves pixel classification based on relative and absolute color features using a decision tree classifier. Promising results were obtained on a set of 224 dermoscopy images.
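A hedged sketch of the general pixel-classification idea, using scikit-learn's decision tree on absolute and relative color features; the feature definitions, labels, and training data below are placeholders, not the study's.

```python
# Hypothetical sketch: train a decision tree on per-pixel color features (absolute
# RGB plus RGB relative to an average skin color) labeled as veil / non-veil.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def pixel_features(rgb_pixels, skin_mean):
    rgb = np.asarray(rgb_pixels, dtype=float)
    return np.hstack([rgb, rgb - skin_mean])      # absolute + relative color features

skin_mean = np.array([180.0, 120.0, 100.0])       # illustrative average skin color
X_train = pixel_features([[90, 110, 160], [200, 150, 130]], skin_mean)
y_train = [1, 0]                                  # 1 = blue-white veil, 0 = other

clf = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print(clf.predict(pixel_features([[95, 115, 150]], skin_mean)))   # -> [1]
```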
Reducing false-positive detections by combining two stage-1 computer-aided mass detection algorithms
Noah D. Bedard, Mehul P. Sampat, Patrick A. Stokes, et al.
In this paper we present a strategy for reducing the number of false-positives in computer-aided mass detection. Our approach is to only mark "consensus" detections from among the suspicious sites identified by different "stage-1" detection algorithms. By "stage-1" we mean that each of the Computer-aided Detection (CADe) algorithms is designed to operate with high sensitivity, allowing for a large number of false positives. In this study, two mass detection methods were used: (1) Heath and Bowyer's algorithm based on the average fraction under the minimum filter (AFUM) and (2) a low-threshold bi-lateral subtraction algorithm. The two methods were applied separately to a set of images from the Digital Database for Screening Mammography (DDSM) to obtain paired sets of mass candidates. The consensus mass candidates for each image were identified by a logical "and" operation of the two CADe algorithms so as to eliminate regions of suspicion that were not independently identified by both techniques. It was shown that by combining the evidence from the AFUM filter method with that obtained from bi-lateral subtraction, the same sensitivity could be reached with fewer false-positives per image relative to using the AFUM filter alone.
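A minimal sketch of the consensus ("and") step, assuming each stage-1 algorithm returns candidate centers; the matching-by-radius rule and its value are illustrative assumptions rather than the paper's exact criterion.

```python
# Illustrative "consensus" combination of two stage-1 detectors: a candidate from
# detector A is kept only if some candidate from detector B lies within a matching
# radius (a simple stand-in for the logical "and" of the two algorithms).
import numpy as np

def consensus(candidates_a, candidates_b, radius=10.0):
    """Each candidate list holds (row, col) centers; returns retained A-candidates."""
    kept = []
    for a in candidates_a:
        if any(np.hypot(a[0] - b[0], a[1] - b[1]) <= radius for b in candidates_b):
            kept.append(a)
    return kept

afum_hits = [(120, 80), (300, 210)]
bilateral_hits = [(123, 78), (50, 400)]
print(consensus(afum_hits, bilateral_hits))   # -> [(120, 80)]
```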
Combining texture features from the MLO and CC views for mammographic CADx
Shalini Gupta, David Zhang, Mehul P. Sampat, et al.
The purpose of this study was to investigate approaches for combining information from the MLO and CC mammographic views for Computer-aided Diagnosis (CADx) algorithms. Feature level and classifier output level combinations were explored. Linear discriminant analysis (LDA) with step-wise feature selection from a set of Haralick's texture features was used to develop classifiers for distinguishing between benign and malignant mammographic lesions. The effect of correlation between features from the two views on the performance of classifiers was investigated. The single view models included: (a) an LDA model with stepwise selection based on the MLO view only (MLO-Only) and similarly (b) a CC-Only LDA model. The feature-level combination models included: (a) LDA based on concatenation of feature sets selected independently from the two views (FEAT_CON), (b) LDA based on the concatenated feature sets along with the corresponding value of each feature from the opposite view (FEAT_COR_CON) if the correlation was below a threshold, (c) LDA based on the average of the MLO and CC feature values (FEAT_AVG). The classifier output level combination models investigated included: (a) average of the outputs of the MLO-Only and CC-Only classifiers (OUTPUT_AVG), (b) maximum of the outputs of the MLO-Only and CC-Only classifiers (OUTPUT_MAX), (c) minimum of the outputs of the MLO-Only and CC-Only classifiers (OUTPUT_MIN), (d) a second level LDA classifier on the outputs of the MLO-Only and CC-Only classifiers (OUTPUT_LDA), (e) product of the output values of the two classifiers (OUTPUT_PROD). The performance of the models was assessed and compared using the ROC methodology to determine if combination models performed better than the single-view models.
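The output-level combination rules listed above are simple enough to state directly; a minimal sketch follows (OUTPUT_LDA, which requires a trained second-level classifier, is noted only in a comment).

```python
# Minimal versions of the output-level combination rules, applied to the scores of
# the MLO-Only and CC-Only classifiers for one lesion.
def output_avg(mlo, cc):  return 0.5 * (mlo + cc)
def output_max(mlo, cc):  return max(mlo, cc)
def output_min(mlo, cc):  return min(mlo, cc)
def output_prod(mlo, cc): return mlo * cc
# OUTPUT_LDA would instead feed (mlo, cc) into a second-level trained LDA classifier.

mlo_score, cc_score = 0.7, 0.4
print(output_avg(mlo_score, cc_score), output_max(mlo_score, cc_score),
      output_min(mlo_score, cc_score), output_prod(mlo_score, cc_score))
```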
The influence of CT dose and reconstruction parameters on automated detection of small pulmonary nodules
Robert Ochs, Erin Angel, Kirsten Boedeker, et al.
The aim of our investigation was to assess the influence of both CT acquisition dose and reconstruction kernel on computer-aided detection (CAD) of pulmonary nodules. Our hypothesis is that the detection of small nodules is affected by the noise characteristics of the image and the signal-to-noise ratio of the nodule and bronchovascular anatomy. Knowledge gained from this experiment will assist in developing an advanced CAD system designed to detect smaller and more subtle nodules with minimal false positives. Eleven research subjects were selected from the Lung Image Database Consortium (LIDC) database based on our inclusion criteria of: 1) having at least one nodule and 2) available raw CT projection data for the series that our institution submitted to the LIDC study. Using the original raw projection data, research software simulated raw projection data acquired with a dose reduced by 32-40% from the original scan. Projection data for both dose levels were reconstructed with smooth to very sharp kernels (B10f, B30f, B50f, and B70f). The resulting series were used to investigate the influence of dose and reconstruction kernel on CAD performance. A prototype CAD system was used to investigate changes in sensitivity and false positives with varying imaging parameters. In a sub-study, the prototype system was compared to a commercial CAD system. We did not have enough subjects to establish statistical significance, but the results indicate that our research system had a higher sensitivity with the smooth or medium reconstruction kernels than with the sharper kernels. The sensitivity was similar for both dose levels. The false-positive rate was higher with the smooth kernels and the lower dose levels.
Mathematical Morphology Poster Session
An efficient method for computing mathematical morphology for medical imaging
Many medical imaging techniques use mathematical morphology (MM), with discs and spheres being the structuring elements (SE) of choice. Given the non-linear nature of the underlying comparison operations (min, max, AND, OR), MM optimization can be challenging. Many efficient methods have been proposed for various types of SE based on the ability to decompose the SE by way of separability or homotopy. Usually, these methods are only able to approximate disc and sphere SE rather than compute MM for the exact SE obtained by discretization of such shapes. We present a method for efficiently computing MM for binary and gray-scale image volumes using digitally convex and X-Y-Z symmetric flat SE, which include discs and spheres. The computational cost is a function of the diameter of the SE rather than its volume. Additional memory overhead, if any, is modest. We are able to compute MM on real medical image volumes with greatly reduced running times, with increasing gains for larger SE. Our method is also robust to scale: it is applicable to ellipse and ellipsoid SE, which may result from discretizing a disc or sphere on an anisotropic grid. In addition, it is easy to implement and can make use of existing image comparison operations. We present performance results on large medical chest CT datasets.
Pattern Recognition Poster Session
A whole brain morphometric analysis of changes associated with pre-term birth
C. E. Thomaz, J. P. Boardman M.D., S. Counsell M.D., et al.
Pre-term birth is strongly associated with subsequent neuropsychiatric impairment. To identify structural differences in preterm infants, we have examined a dataset of magnetic resonance (MR) images containing 88 preterm infants and 19 term-born controls. We have analyzed these images by combining image registration, deformation-based morphometry (DBM), multivariate statistics, and effect size maps (ESM). The methodology described has been applied directly to the MR intensity images rather than to segmented versions of the images. The results indicate that the approach makes clear the statistical differences between the control and preterm samples, showing leave-one-out classification accuracies of 94.74% and 95.45%, respectively. In addition, by finding the most discriminant direction between the groups and using DBM features and ESM, we are able to identify not only the changes between the preterm and term groups but also their relative relevance in terms of volume expansion and contraction.
Feature-space exploration of pathology images using content-based database visualization
B. Lessmann, V. Hans, A. Degenhard, et al.
In this work we present a method for the interactive feature space exploration and content-based database visualization (CBDV) of medical image databases. Using Self Organizing Maps it is possible to visualize the content of a medical image database. This visualization provides the basis for an interactive visual exploration of the meaning of image features and a characterization of the database content.
RANSAC-based EM algorithm for robust detection and segmentation of cylindrical fragments from calibrated C-arm images
Guoyan Zheng, Xiao Dong, Xuan Zhang
Automated identification, pose, and size estimation of cylindrical fragments from registered C-arm images is highly desirable in various computer-assisted, fluoroscopy-based applications, including long bone fracture reduction and intramedullary nailing, where the pose and size of bone fragments need to be accurately estimated for better treatment. In this paper, a RANSAC-based EM algorithm for robust detection and segmentation of cylindrical fragments from calibrated C-arm images is presented. By detection, we mean that the axes and the radii of the principal fragments are automatically determined; by segmentation, we mean that the contour of the fragment projection onto each image plane is automatically extracted. Benefiting from the cylindrical shape of the fragments, we formulate the detection problem as an optimization process that fits a parameterized three-dimensional (3D) cylinder model to the images. A RANSAC-based EM algorithm is proposed to find the optimal solution by converting the fragment detection procedure into an iterative closest point (ICP) matching procedure. The outer projection boundary of the estimated cylinder model is then fed to a region-based active contour model to robustly extract the contour of the fragment projection. The proposed algorithm has been successfully applied to real patient data with and without external objects, yielding promising results.
Realtime automatic metal extraction of medical x-ray images for contrast improvement
This paper focuses on an approach for real-time metal extraction in x-ray images acquired with modern x-ray machines such as C-arms. Such machines are used for vessel diagnostics, surgical interventions, and cardiology, neurology and orthopedic examinations. They are very fast at taking images from different angles. For this reason, manual adjustment of contrast is infeasible, and automatic adjustment algorithms are applied to try to select the optimal radiation dose for contrast adjustment. Problems occur when metallic objects, e.g., a prosthesis or a screw, are in the absorption area of interest. In this case, the automatic adjustment mostly fails because the dark, metallic objects lead the algorithm to overdose the x-ray tube. This outshining effect results in overexposed images and poor contrast. To overcome this limitation, metallic objects have to be detected and extracted from the images that are taken as input for the adjustment algorithm. In this paper, we present a real-time solution for extracting metallic objects from x-ray images. We explore the characteristic features of metallic objects in x-ray images and their distinction from bone fragments, which form the basis for successful object segmentation and classification. Subsequently, we present our edge-based real-time approach for fast and successful automatic segmentation and classification of metallic objects. Finally, experimental results on the effectiveness and performance of our approach, based on a large set of input images, are presented.
Histologic characterization of DCE-MRI breast tumors with dimensional data reduction
Claudio Varini, Andreas Degenhard, Tim W. Nattkemper
The assessment of similarities of breast tumors in DCE-MRI is an important step toward improving diagnostic accuracy. A comparison of a breast lesion with different histologic types of tumors can, in addition, provide further clinical information on the nature of the lesion itself. We present an approach to the visual comparison of different histologic types of breast tumor utilizing Locally Linear Embedding (LLE), an algorithm for dimensional data reduction. The experimental dataset contains the time-series of seven benign and seven malignant breast tumors of various histologic types that were manually labeled by an expert physician from a sequence of DCE-MRI volumes. The adopted DCE-MRI protocol involves six consecutive images of the female breast, yielding a six-dimensional time-series of MR intensity values for each voxel. The set of all time-series from the 14 tumors constitutes a six-dimensional signal space in which similar time-series exhibit locality. This high-dimensional dataset is projected into two dimensions by LLE while preserving the local space topology. In this way, similar time-series are mapped onto neighboring data points in the LLE projection. Its visualization with customized colors encoding the histologic information provides a convenient interface for interactive comparison of breast tumors belonging to different histologic families.
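A brief sketch of the dimensional reduction step, using scikit-learn's LLE implementation as a stand-in for the authors' code; the data and neighborhood size here are placeholders.

```python
# Project six-dimensional voxel time-series to two dimensions with Locally Linear
# Embedding; the resulting 2-D points would be plotted and colored by histology.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

time_series = np.random.rand(500, 6)        # placeholder: 500 voxels, 6 time points each
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
embedding = lle.fit_transform(time_series)  # shape (500, 2)
print(embedding.shape)
```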
Characterization of pulmonary nodules features on computer tomography (CT) scans using wavelet coefficients and heat maps
Using CT images from the National Lung Screening Trial (NLST) of the National Cancer Institute (NCI), interpreted by radiologists at Georgetown University, our goal was to investigate a feature extraction method using the discrete wavelet transform (DWT) and to demonstrate its potential in distinguishing between benign and malignant nodule status. We analyzed multiple 2 mm thick slices of 40 subjects with benign nodules and 7 subjects with malignant nodules, for a total of 112 and 78 slices, respectively. Data were analyzed in a region of interest (ROI) that included the nodule and surrounding areas in three different-sized windows. A linear discriminant analysis (LDA) of wavelet coefficients was used for data analysis. In particular, we examined the discriminative power of the wavelet-based features using Fisher LDA, and evaluated the classification results using a decision matrix (DM) for matched samples (MS). For visualization we used 3-D Heat Maps, originally developed in MATLAB (MathWorks, Natick, MA) for gene expression array analysis, modified to display the magnitude of similarities between the cases under analysis. The use of the DWT in the image pre-processing modules resulted in a significant improvement in discrimination between benign and malignant nodules. The results show better classification accuracy with the DWT-based features, as compared to previously proposed classification features (p-values: 0.008, 0.022, and 0.039, depending on window size). The Heat Maps provide useful data visualization for further investigation, as they have the ability to identify cases that should be explored further to understand why some of the benign nodules look similar to malignant ones in the wavelet domain.
Quality/Restoration/Deblurring Poster Session
A new anisotropic diffusion method, application to partial volume effect reduction
The partial volume effect is a significant limitation in medical imaging that results in blurring when the boundary between two structures of interest falls in the middle of a voxel. A new anisotropic diffusion method allows one to create interpolated 3D images corrected for partial volume, without enhancement of noise. After a zero-order interpolation, we apply a modified version of the anisotropic diffusion approach, wherein the diffusion coefficient becomes negative for high gradient values. As a result, the new scheme restores edges between regions that have been blurred by partial voluming, but it acts as normal anisotropic diffusion in flat regions, where it reduces noise. We add constraints to stabilize the method and to model partial volume; i.e., the sum of neighboring voxels must equal the signal in the original low-resolution voxel, and the signal in a voxel is kept within its neighbors' limits. The method performed well on a variety of synthetic images and MRI scans. No noticeable artifact was induced by interpolation with partial volume correction, and noise was much reduced in homogeneous regions. We validated the method using the BrainWeb project database. The partial volume effect was simulated and the restored brain volumes were compared to the original ones. Errors due to the partial volume effect were reduced by 28% and 35% for the 5% and 0% noise cases, respectively. The method was applied to in vivo "thick" MRI carotid artery images for atherosclerosis detection. There was a remarkable improvement in the delineation of the lumen of the carotid artery.
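A hedged sketch of a diffusion coefficient in the spirit described above: positive (smoothing) for small gradients and negative (sharpening) above a gradient threshold. The functional form and constants are assumptions, not the authors' exact formulation.

```python
# Illustrative conductance function: standard Perona-Malik behavior for small
# gradients, negative (inverse) diffusion above an edge threshold.
import numpy as np

def modified_conductance(grad_mag, k=10.0, g_edge=30.0):
    c = np.exp(-(grad_mag / k) ** 2)              # forward diffusion (smoothing)
    return np.where(grad_mag > g_edge, -0.1, c)   # negative diffusion at strong edges

g = np.array([2.0, 10.0, 50.0])
print(modified_conductance(g))   # positive, positive, negative
```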
Metal artifacts reduction in CT images through Euler’s elastica and curvature based sinogram inpainting
Metal artifacts arise in CT images when X-rays traverse highly attenuating objects such as metal bodies, and portions of the projection data become unavailable. In this paper, we present an Euler's elastica and curvature based sinogram inpainting (EECSI) algorithm for metal artifact reduction, where "inpainting" is a synonym for "image interpolation". In EECSI, the unavailable data are regarded as an occlusion and can be inpainted inside the inpainting domain based on elastica interpolants. Numerical simulations demonstrate that, compared to conventional interpolation methods, the proposed algorithm connects the unavailable projection region more smoothly and accurately, and thus better reduces metal artifacts and more accurately reveals cross-sectional structures, especially in the immediate neighborhood of the metallic objects.
A new image calibration technique for colposcopic images
Wenjing Li, Marcelo Soto-Thompson, Yizhi Xiong, et al.
Colposcopy is a primary diagnostic method used to detect cancer and precancerous lesions of the uterine cervix. During the examination, the metaplastic and abnormal tissues exhibit different degrees of whiteness (acetowhitening effect) after applying a 3%-5% acetic acid solution. Colposcopists evaluate the color and density of the acetowhite tissue to assess the severity of lesions for the purpose of diagnosis, telemedicine, and annotation. However, the color and illumination of the colposcopic images vary with the light sources, the instruments and camera settings, as well as the clinical environments. This makes assessment of the color information very challenging even for an expert. In terms of developing a Computer-Aided Diagnosis (CAD) system for colposcopy, these variations affect the performance of the feature extraction algorithm for the acetowhite color. Non-uniform illumination from the light source is also an obstacle for detecting acetowhite regions, lesion margins and anatomic features. Therefore, in digital colposcopy, it is critical to map the color appearance of the images taken with different colposcopes into one standard color space with normalized illumination. This paper presents a novel image calibration technique for colposcopic images. First, a specially designed calibration unit is mounted on the colposcope to acquire daily calibration data prior to performing patient examinations. The calibration routine is fast, automated, accurate and reliable. We then use our illumination correction algorithm and a color calibration algorithm to calibrate the patient data. In this paper we describe these techniques and demonstrate their applications in clinical studies.
Improved MRSI with field inhomogeneity compensation
Ildar Khalidov, Dimitri Van De Ville, Mathews Jacob, et al.
Magnetic resonance spectroscopy imaging (MRSI) is a promising and developing tool in medical imaging. Because of various difficulties imposed by the imperfections of the scanner and the reconstruction algorithms, its applicability in clinical practice is rather limited. In this paper, we suggest an extension of the constrained reconstruction technique (SLIM). Our algorithm, named B-SLIM, takes into account the measured field inhomogeneity map, which contains both the scanner's main field inhomogeneity and the object-dependent magnetic susceptibility effects. The method is implemented and tested with both synthetic and physical two-compartment phantom data. The results demonstrate significant performance improvement over the SLIM technique. At the same time, the algorithm has the same computational complexity as SLIM.
Wavelet-based multiscale level-set curve evolution in noise reduction for MR imaging
Junmei Zhong, Bernard Dardzinski, Janaka Wansapura
In magnetic resonance (MR) imaging, there is a tradeoff between spatial resolution, temporal resolution, and signal-to-noise ratio (SNR). MR images usually suffer from low SNR and low resolution. To make MR imaging at higher resolution with sufficient SNR practical, it is necessary to reduce noise efficiently while preserving important image features. In this paper, we propose to use a wavelet-based multiscale level-set curve evolution algorithm to reduce noise in MR imaging. Experimental results demonstrate that this denoising algorithm can significantly improve the SNR and contrast-to-noise ratio (CNR) of MR images while preserving edges with good visual quality. The denoising results indicate that in MR imaging applications, we can nearly double the temporal resolution, or improve the spatial resolution, while achieving sufficient SNR, CNR, and satisfactory image quality.
ICA domain filtering for reduction of noise in x-ray images
Radiological imaging such as x-ray CT is one of the most important tools for medical diagnostics. Radiological images always contain some quantum noise, so the reduction of quantum (Poisson) noise in medical images is an important issue. In this paper, we propose a new filter based on independent component analysis (ICA) for noise reduction. In the proposed filtering, the image (projection) is first transformed to the ICA domain, and then the components corresponding to scattered x-rays are removed by soft thresholding (shrinkage). The proposed method has been demonstrated using both standard images and Monte Carlo simulations. Experimental results show that the quality of the image can be dramatically improved by the proposed filter without any edge blurring.
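A minimal sketch of ICA-domain shrinkage on image patches, using scikit-learn's FastICA as a stand-in for the authors' transform; the patch size, component count, and threshold value are illustrative assumptions.

```python
# Transform image patches to ICA components, soft-threshold the coefficients, and
# transform back; the patches here are random placeholders.
import numpy as np
from sklearn.decomposition import FastICA

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

patches = np.random.rand(1000, 64)                 # e.g. 8x8 patches as row vectors
ica = FastICA(n_components=32, random_state=0)
coeffs = ica.fit_transform(patches)                # ICA-domain coefficients
denoised = ica.inverse_transform(soft_threshold(coeffs, 0.05))
print(denoised.shape)
```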
Characterization of high resolution MR images reconstructed by a GRAPPA based parallel technique
Suchandrima Banerjee, Sharmila Majumdar
This work implemented an auto-calibrating parallel imaging technique and applied it to in vivo magnetic resonance imaging (MRI) of trabecular bone micro-architecture. A Generalized auto-calibrating partially parallel acquisition (GRAPPA) based reconstruction technique using modified robust data fitting was developed. The MR data were acquired with an eight-channel phased array receiver on three normal volunteers on a General Electric 3 Tesla scanner. The microstructures comprising the trabecular bone architecture are on the order of 100 microns, and hence their depiction requires very high imaging resolution. This work examined the effects of GRAPPA-based parallel imaging on signal and noise characteristics and effective spatial resolution in high-resolution (HR) images, for undersampling or reduction factors in the range 2-4. Additionally, quantitative analysis was performed to obtain structural measures of trabecular bone from the images. Image quality in terms of contrast and depiction of structures was maintained in parallel images for reduction factors up to 3. Comparison between regular and parallel images suggested similar spatial resolution for both. However, differences in the noise characteristics of parallel images compared to regular images affected the thresholding-based quantification. This suggests that GRAPPA-based parallel images might require different analysis techniques. In conclusion, the study showed the feasibility of using parallel imaging techniques in HR-MRI of trabecular bone, although quantification strategies will have to be investigated further. The reduction of acquisition time using parallel techniques can improve the clinical feasibility of MRI of trabecular bone for prognosis and staging of the skeletal disorder osteoporosis.
Robust estimation of the noise variance from background MR data
J. Sijbers, A. J. den Dekker, D. Poot, et al.
In the literature, many methods are available for estimating the variance of the noise in magnetic resonance (MR) images. A commonly used method, based on the maximum of the background mode of the histogram, is revisited, and a new, robust, and easy-to-use method based on maximum likelihood (ML) estimation is presented. Both methods are evaluated in terms of accuracy and precision using simulated MR data. It is shown that the newly proposed method outperforms the commonly used method in terms of mean-squared error (MSE).
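For context, pure-noise background pixels of a magnitude MR image follow a Rayleigh distribution, for which the ML estimator of the noise variance has the simple closed form sigma² = Σ mᵢ² / (2N). The sketch below illustrates that estimator only, not the full background-selection procedure of the paper.

```python
# ML estimate of the noise variance from Rayleigh-distributed background magnitudes.
import numpy as np

def ml_noise_variance(background_pixels):
    m = np.asarray(background_pixels, dtype=float)
    return np.sum(m ** 2) / (2.0 * m.size)

rng = np.random.default_rng(0)
true_sigma = 5.0
background = rng.rayleigh(scale=true_sigma, size=10000)
print(ml_noise_variance(background), "vs true variance", true_sigma ** 2)
```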
Improvement of image quality in MDCT by high-frequency sampling of x-, y- and z-direction
Multi-detector row computed tomography (MDCT) has dramatically increased the speed of scanning and allows high-resolution imaging compared with conventional single-detector row CT (SDCT). However, MDCT has also increased the use of three-dimensional (3D) volume scanning and four-dimensional (4D) dynamic scanning, and has thereby increased the radiation dose to patients. In addition, lung-cancer screening CT (LSCT) has been introduced in recent years, and low-dose scanning is strongly required to increase the benefit/risk ratio. In this study, a high-frequency volume data sampling (over-sampling) method in the x-, y- and z-directions is proposed as a technique for reducing image noise in MDCT, and reduction of radiation dose and improvement of image quality are discussed. In the proposed method, volume data are obtained by over-sampling in the x-, y- and z-directions, and the image is obtained by averaging these data. In the x- and y-directions, over-sampling is equivalent to obtaining projection data with a larger matrix size for the same scan field of view (scan-FOV); in the z-direction, it is equivalent to using thinner slices. Normally, when n signals with independent noise are averaged, the signal-to-noise ratio (SNR) increases by a factor of √n. In this method, each pixel value of the image is obtained from n_{x,y}² × n_z pixels by n_{x,y}-fold sampling in the x- and y-directions and n_z-fold sampling in the z-direction. In other words, the SNR of the image increases by a factor of √(n_{x,y}² × n_z). With this high-frequency data sampling method, it is therefore possible to obtain higher-quality images than with conventional sampling. Moreover, by applying the method to noisy images obtained with low-dose scanning, the radiation dose to patients can be reduced.
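As a worked example of the stated relation, with sampling factors chosen purely for illustration:

```latex
% Two-fold over-sampling in x and y (n_{x,y} = 2) and two-fold in z (n_z = 2).
\[
\mathrm{SNR\ gain} \;=\; \sqrt{\,n_{x,y}^{2}\, n_z\,}
\;=\; \sqrt{2^{2}\cdot 2} \;=\; \sqrt{8} \;\approx\; 2.8 .
\]
```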
Automatic detection of specular reflections in uterine cervix images
Gali Zimmerman-Moreno, Hayit Greenspan
Specular reflections strongly affect the appearance of images and usually hinder the computer vision algorithms applied to them. This is particularly the case with uterine cervix images, where the highlights created by specular reflections are a major obstacle to automatic segmentation. We propose a method for the detection of specularities in cervix images that utilizes intensity, saturation, and gradient information. A two-stage segmentation process is proposed for the identification of highlights. First, coarse regions that contain the reflections are defined. Second, probabilistic modeling and segmentation are used to achieve a precise segmentation inside the coarse regions. The resulting regions are filled by propagating the surrounding color information. The effectiveness of the method on cervix images is demonstrated.
Noise reduction in magnetic resonance images using nonseparable transforms
Ehsan Nezhadarya, Mohammad Bagher Shamsollahi
Multi-scale transforms have found many applications in image processing in recent years. The wavelet transform is a powerful multiscale transform for denoising noisy signals and images, but the usual two-dimensional separable wavelets are sub-optimal. These separable wavelet transforms can successfully identify zero-dimensional (point) singularities in images, but only weakly identify one-dimensional singularities such as edges, curves, and lines. To address this, non-separable transforms such as the Ridgelet and Curvelet transforms were proposed by Candès and Donoho. The coefficients produced by these non-separable transforms have been shown to be sparser than wavelet coefficients, which results in better denoising capabilities than the wavelet transform. These non-separable transforms can identify the direction of lines and curves because of the special structure of their basis elements. Magnetic resonance images typically contain Rician noise, which in some special cases can be approximated as white Gaussian noise. In this paper, a new method for denoising MR images is proposed, based on the Monoscale Ridgelet transform. It is shown that this transform can successfully denoise MR images embedded in white Gaussian noise. The results are better than those of standard wavelet denoising methods, in terms of both visual perception and signal-to-noise ratio.
An improved denoising algorithm based on multi-scale dyadic wavelet transform
Zhihua Qi, Li Zhang, Yuxiang Xing, et al.
The presence of random noise in a CT system degrades the quality of CT images and therefore poses great difficulty for subsequent tasks such as segmentation and signal identification. In this paper, an efficient denoising algorithm is proposed to improve the quality of CT images. The algorithm consists of three main steps: (1) according to the inter-scale relationship of the wavelet coefficient magnitude sum in the cone of influence (COI), wavelet coefficients are classified into two categories: edge-related/regular coefficients and irregular coefficients; (2) for edge-related and regular coefficients, only those located at the lowest decomposition level are denoised by Wiener filtering, while coefficients located at other decomposition levels are left unchanged; (3) irregular coefficients are denoised at all levels by Wiener filtering. The algorithm is performed on the projection data from which CT images are reconstructed. Experimental results show that: (1) it can effectively reduce the noise intensity while preserving detail information as much as possible; and (2) it is independent of the CT scanning geometry and thus applicable to various CT systems. The denoising results indicate that this algorithm can offer great help to follow-up analysis based on CT images.
A weighted average algorithm for edge-preserving smoothing on MRI images
Renchao Jin, Lijuan Zhang, Bo Meng, et al.
Medical images such as MRI images normally have smooth edges and rounded corners, but existing general edge-preserving smoothing algorithms often result in coarse edges and sharp corners. To deal with this problem, a new image smoothing algorithm is proposed, based on a filter that takes the weighted average of 21 average pixel values computed in 21 neighborhood subregions within a 3x3 square around the center pixel. Subregions in which the center pixel being smoothed lies on a sharp corner are excluded from selection, so that the preserved edges remain smooth and rounded. During the smoothing process, each subregion is assigned a weight according to its homogeneity, which is evaluated by the variance of its pixel values. The weighted average of the subregion averages is assigned to the center pixel being smoothed. More homogeneous neighborhoods have more influence on the center pixel, so that edges are well preserved; however, contributions from all neighborhoods are also taken into account, especially when their homogeneity is roughly equal, so that the resulting areas are smoother. An evaluation of the algorithm on simulated MRI images was carried out. Experimental results showed that the new algorithm smooths MRI images better while preserving edges better than existing smoothing algorithms.
Simulation of susceptibility-induced distortions in fMRI
Ning Xu, Yong Li, Cynthia B. Paschal, et al.
It has recently been proposed that computer-simulated phantom images can be used to evaluate methods for fMRI preprocessing. It is widely recognized that Gradient-Echo Echo Planar Imaging (EPI), the technique most often used for fMRI, is strongly affected by field inhomogeneities. Accurate and realistic phantom images for use by the fMRI community for software evaluation and training must incorporate these distortions and account for the effects of head motion and respiration on them. A method to generate realistic distortions caused by field inhomogeneity for the generation of an fMRI phantom is presented in this paper. Changes in field inhomogeneity due to motion are studied by adding motion to the brain model and calculating the induced field map numerically rather than measuring it experimentally. A fast analytic version of an MR simulation is used to generate distorted EPI images based on the calculated field maps. The newly generated fMRI phantoms can be used to evaluate processing algorithms for fMRI studies more accurately. We can appreciate the importance of distortions for fMRI phantom generation by simulating a distortion-free image and adding distortions afterwards. Validation is performed by comparing the calculated field maps with measured ones. In addition, we show the similarities between a simulated fMRI phantom and a real EPI image from our MR scanner.
Retinal image enhancement based on the human visual system
Kamel Belkacem-Boussaid, Balaji Raman, Gilberto Zamora, et al.
Improving the quality of gray-level images continues to be a challenging task, and the challenge increases for color images due to the interaction of multiple parameters within a scene. Each color plane or wavelength constitutes an image by itself, and its quality depends on many parameters, such as the absorption, reflectance, or scattering of the object under the lighting source. Non-uniformity of the lighting, the optics, the electronics of the camera, and even the environment of the object are sources of degradation in the image. Therefore, segmentation and interpretation of the image may become very difficult if its quality is not enhanced. The main goal of the present work is to demonstrate an image processing algorithm inspired by concepts of the Human Visual System (HVS). HVS concepts have been widely used in gray-level image enhancement, and here we show how they can be successfully extended to color images. The resulting Multi-Scale Spatial Decomposition (MSSD) is employed to enhance the quality of color images. Of particular interest for medical imaging is the enhancement of retinal images, whose quality is extremely sensitive to imaging artifacts. We show that our MSSD algorithm improves the readability and gradeability of retinal images and quantify these improvements using both subjective and objective metrics of image quality.
Prior-information-driven multiple contrast projection of T2 weighted magnetic resonance images
Despite tremendous improvements in MR imaging, the optimization of fast imaging techniques for the display of T2 contrast has not yet been accomplished. Existing methods that make use of prior information (feature-recognizing MRI, constrained reconstruction techniques, dynamic imaging, etc.) are sub-optimal. In this paper, we present a fast, robust method to enhance the T2-related contrast in an MRI image acquired at, but not restricted to, a "just-enough-to-highlight-T2" repetition time, so as to produce a computed mosaic of the same image at different repetition (TR) and echo (TE) times. This leads to a substantial reduction in scan time and the simultaneous provision of multiple snapshots of the same image at different TR and TE settings. The enhanced mapping is performed using a feature-guided, non-linear equalization technique based on prior knowledge. The proposed methodology could be synergistically cascaded with other fast imaging techniques to further improve the acquisition rate. The clinical applications of the proposed contrast enhancement technique include: a pre-scan application in which projected images assist in prescribing a subsequent image acquisition; a real-time application in which images are acquired quickly with a short TR and projected images at long TR are produced in near real time; and post-processing applications in which enhanced images are produced to assist in diagnosis.
Implications of MR contrast standardization on image computing
The process of transforming the non-linear magnetic field perturbations induced by radio waves into linear reconstructions based on Radon and Fourier transforms has resulted in MR acquisitions in which intensities do not have a fixed meaning, not even within the same protocol, for the same body region, for images obtained on the same scanner, for the same patient, on the same day. This makes robust image interpretation and processing extremely challenging. The status quo of fine-tuning an image processing algorithm within the ever-varying MRI intensity space could best be summarized as a "random search through the parameter space". This work demonstrates the implications of standardizing the contrast across multiple tissue types for the robustness and efficiency of image processing algorithms. Contrast standardization is performed using a prior-knowledge-driven, feature-guided, fast, non-linear equalization technique. Without loss of generality, skull stripping and brain tissue segmentation are considered in this investigation. Results show that the iterative image processing algorithms converge faster with minimal parameter tweaking, and the abstractions are significantly better in the contrast-standardized space than in the native stochastic space.
Contrast enhancement of soft tissues in computed tomography images
Even though soft tissues are of primary interest to radiologists, they are represented by only 12.5% of the total number of gray levels in a typical DICOM-format Computed Tomography (CT) scan. This poor distribution of gray levels reduces the overall contrast and the texture differences between individual organs, and poses a serious visualization problem since radiologists need clear visual representations of organs to produce proper diagnoses. In order to enhance the contrast within the soft tissues, the gray levels can be redistributed both linearly and nonlinearly using the gray level frequencies of the original CT scan. We propose a new nonlinear approach for contrast enhancement of soft tissues in CT images using both clipped binning and nonlinear binning based on a k-means clustering algorithm. The optimal number of bins, i.e. the number of gray levels, is chosen automatically using the entropy and the average distance between the histogram of the original gray-level distribution and the contrast enhancement function's curve. The contrast enhancement results were obtained and evaluated using 141 CT images of the chest and abdomen from two normal CT studies.
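A minimal sketch of the nonlinear-binning idea, assuming scikit-learn is available: the Hounsfield values inside an illustrative soft-tissue window are clustered with k-means, and the ordered cluster indices define nonlinearly spaced output gray levels. The bin count, window, and 8-bit output range are assumptions, and the paper's entropy-based choice of the bin count is not shown.

```python
# Sketch: k-means-based nonlinear re-binning of soft-tissue CT values so that
# more display gray levels are spent where the soft-tissue histogram is dense.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_binning(ct_hu, n_bins=32, window=(-150, 250)):
    lo, hi = window
    soft = np.clip(ct_hu, lo, hi).astype(float)
    km = KMeans(n_clusters=n_bins, n_init=4, random_state=0)
    labels = km.fit_predict(soft.reshape(-1, 1)).reshape(soft.shape)
    # order clusters by their centre so the mapping is monotonic in HU
    order = np.argsort(km.cluster_centers_.ravel())
    rank = np.empty_like(order); rank[order] = np.arange(n_bins)
    display = rank[labels] * (255.0 / (n_bins - 1))      # spread over 8-bit range
    return display.astype(np.uint8)
```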
Adaptive conductance filtering for spatially varying noise in PET images
Dirk Ryan Padfield, Ravindra Manjeshwar
PET images that have been reconstructed with unregularized algorithms are commonly smoothed with linear Gaussian filters to control noise. Since these filters are spatially invariant, they degrade feature contrast in the image, compromising lesion detectability. Edge-preserving smoothing filters can differentially preserve edges and features while smoothing noise. These filters assume spatially uniform noise models. However, the noise in PET images is spatially variant, approximately following a Poisson behavior. Therefore, different regions of a PET image need smoothing by different amounts. In this work, we introduce an adaptive filter, based on anisotropic diffusion, designed specifically to overcome this problem. In this algorithm, the diffusion is varied according to a local estimate of the noise using either the local median or the grayscale image opening to weight the conductance parameter. The algorithm is thus tailored to the task of smoothing PET images, or any image with Poisson-like noise characteristics, by adapting itself to varying noise while preserving significant features in the image. This filter was compared with Gaussian smoothing and a representative anisotropic diffusion method using three quantitative task-relevant metrics calculated on simulated PET images with lesions in the lung and liver. The contrast gain and noise ratio metrics were used to measure the ability to do accurate quantitation; the Channelized Hotelling Observer lesion detectability index was used to quantify lesion detectability. The adaptive filter improved the signal-to-noise ratio by more than 45% and lesion detectability by more than 55% over the Gaussian filter while producing "natural" looking images and consistent image quality across different anatomical regions.
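A compact sketch of the underlying idea, under the assumption that the local median is a reasonable surrogate for the local mean of Poisson-like noise (whose standard deviation grows roughly as the square root of the mean): a Perona-Malik style diffusion whose conductance scale K varies across the image. The constants, iteration count, single up-front estimate of K, and periodic boundary handling via np.roll are illustrative simplifications, not the authors' exact scheme.

```python
# Sketch of anisotropic diffusion with a spatially varying conductance scale K
# derived from a local median (Poisson-like noise surrogate).
import numpy as np
from scipy.ndimage import median_filter

def adaptive_diffusion(img, n_iter=20, dt=0.15, k_scale=1.5):
    u = img.astype(float).copy()
    k_map = k_scale * np.sqrt(median_filter(u, size=5) + 1e-6)  # local K (assumed)
    g = lambda d: np.exp(-(d / k_map) ** 2)                     # conductance
    for _ in range(n_iter):
        # nearest-neighbour differences (np.roll wraps at the border; fine for a sketch)
        dn = np.roll(u, -1, 0) - u; ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u; dw = np.roll(u, 1, 1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```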
Denoising diffusion tensor images: preprocessing for automated detection of subtle diffusion tensor abnormalities between populations
Tin Man Lee, Usha Sinha
Diffusion tensor imaging (DTI) is the only non-invasive imaging modality able to visualize fiber tracts. Many disease states, e.g. depression, show subtle changes in diffusion tensor indices, which can only be detected by comparison of population cohorts with high quality images. Further, it is important to reduce noise in the acquired diffusion weighted images to perform accurate fiber tracking. In order to obtain acceptable SNR values for DTI images, a large number of averages is required. For whole-brain coverage with isotropic, high-resolution imaging, this leads to unacceptable scan times. In order to obtain high-SNR images with a smaller number of averages, we propose to combine the strengths of two recently developed denoising methodologies: total variation and wavelets. Our algorithm, which uses translation-invariant BayesShrink wavelet thresholding with total variation regularization, successfully removes image noise and pseudo-Gibbs phenomena while preserving both texture and edges. We compare our results with other denoising methods proposed for DTI images using visual and quantitative metrics.
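A rough sketch of the two ingredients, assuming PyWavelets and scikit-image are available: BayesShrink soft-thresholding of the wavelet detail coefficients, followed by a light total-variation step. It omits the translation-invariant (cycle-spinning) part of the authors' method, and the wavelet, level count, and TV weight are illustrative.

```python
# Sketch: BayesShrink wavelet thresholding followed by TV regularisation.
import numpy as np
import pywt
from skimage.restoration import denoise_tv_chambolle

def bayes_shrink_tv(img, wavelet='db4', levels=3, tv_weight=0.05):
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    # noise sigma from the finest diagonal subband (robust MAD estimate)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    new_coeffs = [coeffs[0]]
    for (cH, cV, cD) in coeffs[1:]:
        thresholded = []
        for c in (cH, cV, cD):
            sigma_x = np.sqrt(max(np.var(c) - sigma ** 2, 1e-12))
            t = sigma ** 2 / sigma_x                   # BayesShrink threshold
            thresholded.append(pywt.threshold(c, t, mode='soft'))
        new_coeffs.append(tuple(thresholded))
    rec = pywt.waverec2(new_coeffs, wavelet)[:img.shape[0], :img.shape[1]]
    return denoise_tv_chambolle(rec, weight=tv_weight)
```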
A voxel-based partial volume correction in nuclear medicine
Most partial volume correction (PVC) methods are ROI-based and assume uniform activity within each ROI. Here, we extended a PVC method developed by Rousset et al. (JNM, 1998), called the geometric transfer matrix (GTM), to a voxel-based PVC approach called v-GTM which accounts for non-uniform activity within each ROI. The v-GTM method was evaluated using simulated data (perfectly co-registered MRIs). We investigated the influence of noise, the effect of compensating for detector response during iterative reconstruction, and the effect of non-uniform activity. For simulated data, noise did not seriously affect the accuracy of the v-GTM method. When detector response compensation was applied in iterative reconstruction, neither PVC method improved the recovery values. In the non-uniform experiment, v-GTM had slightly better recovery values and less bias than GTM. Conclusion: v-GTM resulted in better recovery values and might be useful for PVC in small regions of interest.
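For context, a sketch of the classical ROI-based GTM idea that the paper extends to the voxel level: element (i, j) of the transfer matrix is the mean, over ROI i, of the PSF-blurred indicator of ROI j, and the true ROI activities are recovered by solving a small linear system. The Gaussian PSF width is an assumed value, and the voxel-level extension of the paper is not shown.

```python
# Sketch of ROI-based GTM partial volume correction (Rousset-style), using an
# isotropic Gaussian as a stand-in for the scanner PSF.
import numpy as np
from scipy.ndimage import gaussian_filter

def gtm_correct(observed_img, roi_labels, psf_sigma=2.0):
    rois = np.unique(roi_labels[roi_labels > 0])
    n = len(rois)
    gtm = np.zeros((n, n))
    observed_means = np.zeros(n)
    for i, ri in enumerate(rois):
        mask_i = roi_labels == ri
        observed_means[i] = observed_img[mask_i].mean()
        for j, rj in enumerate(rois):
            spread_j = gaussian_filter((roi_labels == rj).astype(float), psf_sigma)
            gtm[i, j] = spread_j[mask_i].mean()
    true_means = np.linalg.solve(gtm, observed_means)    # PVC-corrected ROI activities
    return dict(zip(rois.tolist(), true_means))
```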
Detectability improvement of early sign of acute stroke on brain CT images using an adaptive partial smoothing filter
Detection of early infarct signs on non-enhanced CT is mandatory in patients with acute ischemic stroke. We present a method for improving the detectability of early infarct signs of acute ischemic stroke. This approach is considered a first step towards computer-aided diagnosis in acute ischemic stroke. Obscuration of the gray-white matter interface at the lentiform nucleus or the insular ribbon is an important early infarct sign, which affects decisions on thrombolytic therapy. However, its detection is difficult, since the early infarct sign is a subtle hypoattenuation. In order to improve the detectability of the early infarct sign, an image processing method that can reduce local noise while preserving edges is desirable. To cope with this issue, we devised an adaptive partial smoothing filter (APSF). Because the APSF can markedly improve the visibility of the normal gray-white matter interface, the conspicuity of its obscuration due to hypoattenuation can be increased. The APSF is a specifically designed filter that performs local smoothing using a variable filter size determined by the distribution of pixel values at edges in the region of interest. By adjusting the four parameters of the APSF, an optimal condition for image enhancement can be obtained. In order to determine the most influential of these parameters, a preliminary simulation was performed using composite images simulating the gray and white matter. The APSF configured from this preliminary simulation was applied to several clinical CT scans of hyperacute stroke patients. The results showed that the detectability of early infarct signs is much improved.
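A simplified sketch of a locally adaptive smoothing filter in this spirit: the smoothing window shrinks where local edge content is strong and grows in flat regions. The actual APSF uses four tuned parameters and the distribution of edge pixel values inside the ROI; the gradient thresholds and window radii below are assumptions.

```python
# Sketch: variable-window smoothing driven by local gradient magnitude.
import numpy as np
from scipy.ndimage import uniform_filter, sobel

def adaptive_partial_smoothing(img, radii=(1, 2, 4), edge_thresholds=(40.0, 15.0)):
    img = img.astype(float)
    grad = np.hypot(sobel(img, 0), sobel(img, 1))        # local edge strength
    smoothed = [uniform_filter(img, size=2 * r + 1) for r in radii]
    out = smoothed[-1].copy()                            # flat areas: largest window
    mid_edges = grad > edge_thresholds[1]
    strong_edges = grad > edge_thresholds[0]
    out[mid_edges] = smoothed[1][mid_edges]              # moderate edges: medium window
    out[strong_edges] = smoothed[0][strong_edges]        # strong edges: smallest window
    return out
```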
A novel contrast equalization method for chest radiograph
Min Zhang, Xuanqin Mou, Ying Long
This paper proposes a novel contrast equalization algorithm for display- or print-ready processing of x-ray chest radiographs based on a multi-scale decomposition and reconstruction architecture. Firstly, using this architecture, the original image is decomposed into multi-scale components. At this stage, three methods are used: the first two are based on Gaussian convolution filters and the third on the mean curvature motion equation, a nonlinear partial differential equation (PDE) model. Secondly, the components at different scales are weighted using a set of controlled equalization coefficients and then integrated into the display- or print-ready image to improve the visibility of weakly contrasting details and the contrast between tissues in different areas. Preliminary experiments on clinical images indeed demonstrate the superiority of our algorithm. The algorithm can effectively improve the contrast of low-contrast regions and increase the detail visibility of the image rendered to CRT or film, since the image latitude is adequately broad.
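A minimal sketch of the Gaussian-filter variant of such a multi-scale scheme: the radiograph is split into band-pass components with Gaussian filters of increasing width, each band is multiplied by an equalization coefficient, and the bands are summed back with the low-pass residual. The coefficients are illustrative, and the PDE (mean-curvature-motion) decomposition of the paper is not shown.

```python
# Sketch: multi-scale band decomposition with per-band equalization gains.
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_equalize(img, sigmas=(1, 2, 4, 8, 16),
                        gains=(2.0, 1.8, 1.5, 1.2, 1.0)):
    img = img.astype(float)
    previous = img
    result = np.zeros_like(img)
    for sigma, gain in zip(sigmas, gains):
        coarse = gaussian_filter(img, sigma)
        band = previous - coarse             # band-pass detail at this scale
        result += gain * band                # boost weakly contrasting detail
        previous = coarse
    return result + previous                 # add back the low-pass residual
```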
On interpolation of sparsely sampled sinograms
Stephan Schröder, Ingo Stuke, Til Aach
Certain situations, for instance in flat-panel cone beam CT, permit only a relatively low number of projections to be acquired. The reconstruction quality of the volume to be imaged is then compromised by streaking artifacts. To avoid these degradations, additional projections are interpolated between the genuinely acquired ones. Since straightforward linear, non-adaptive interpolation generally results in loss of sharpness, techniques were developed which adapt to, e.g., local orientation within the sinogram. So far, such directional interpolation algorithms consider only single local orientations. Especially in x-ray imaging, however, different non-opaque oriented structures may be superimposed. We therefore show how such multiply-oriented structures can be detected, estimated, and included in the interpolation process. Furthermore, genuine sinograms meet certain conditions regarding their moments as well as their spectra: the moments of 2D sinograms depend on the projection angle in the form of sinusoids, while the Fourier spectrum of the sinogram takes the form of a 'bow tie'. The consistency of the interpolated data with these conditions may therefore be viewed as an additional measure of interpolation quality. For linear interpolation, we analyze how well the interpolated data comply with these constraints. We also show how the moment constraint can be integrated into the interpolation process.
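For a 2-D parallel-beam sinogram, the lowest-order consistency conditions are easy to state: the zeroth moment of each projection is constant over angle, and the first moment varies as a sinusoid of the projection angle. The sketch below, with an assumed (angles × detectors) layout, reports deviations from these two conditions as simple interpolation-quality measures; it is not the full consistency framework of the paper.

```python
# Sketch: zeroth- and first-moment consistency errors of a parallel-beam sinogram.
import numpy as np

def moment_consistency(sinogram, angles_rad):
    """sinogram: shape (n_angles, n_detectors); angles in radians."""
    s = np.arange(sinogram.shape[1]) - (sinogram.shape[1] - 1) / 2.0
    m0 = sinogram.sum(axis=1)                       # should be ~constant over angle
    m1 = (sinogram * s).sum(axis=1)                 # should follow a sinusoid
    # fit m1 = a*cos(theta) + b*sin(theta) by least squares
    A = np.column_stack([np.cos(angles_rad), np.sin(angles_rad)])
    coef, *_ = np.linalg.lstsq(A, m1, rcond=None)
    m0_error = np.std(m0) / (np.abs(np.mean(m0)) + 1e-12)
    m1_error = np.sqrt(np.mean((m1 - A @ coef) ** 2))
    return m0_error, m1_error
```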
Statistical Methods Poster Session
A new post-processing method of applying independent component analysis to fMRI data
Xia Wu, Zhiying Long, Li Yao, et al.
The independent component analysis (ICA) method can be used to separate fMRI data into several task-related independent components, including one consistently task-related (CTR) component and several transiently task-related (TTR) components. However, the weights with which the CTR and TTRs contribute to the final task component are often unknown, yet they are important for finding the relevant spatial activation area. Here we propose an alternative ICA post-processing method that combines not only the CTR and TTR components, which are sometimes judged in a subjective manner, but also the remaining components, in an effort to identify a comprehensive, summed spatial pattern that is responsible for the behavior under investigation. Such a procedure has been successfully used in principal component analysis (PCA) based scaled subprofile modeling (SSM). In the proposed approach, we relate the exploratory ICA findings to a hypothesized temporal brain response pattern (reference function). Specifically, we use linear regression to determine the relationship between the reference function and the time courses of the multiple components generated by the ICA procedure. The linear regression coefficients are then used as relative weights in generating the final summed spatial pattern. Moreover, this approach allows a researcher to use a T-test to statistically infer the importance of each independent component in its contribution to the final pattern and, consequently, to the cognitive process. Experimental results also show that the spatial activation of the final task component becomes more accurate.
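A hedged sketch of the post-processing step: the component time courses are regressed against a hypothesized reference function, and the regression coefficients weight the corresponding spatial maps in the summed pattern. The data layout (spatial ICA on a voxels × time-points matrix) and the use of scikit-learn's FastICA are assumptions; the T-test on the coefficients is not shown.

```python
# Sketch: ICA decomposition, regression of the reference function onto the
# component time courses, and a coefficient-weighted sum of the spatial maps.
import numpy as np
from sklearn.decomposition import FastICA

def weighted_ica_pattern(data, reference, n_components=20):
    """data: (n_voxels, n_timepoints); reference: (n_timepoints,) model response."""
    ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
    spatial_maps = ica.fit_transform(data)        # (n_voxels, n_components)
    time_courses = ica.mixing_                    # (n_timepoints, n_components)
    # linear regression of the reference onto all component time courses
    weights, *_ = np.linalg.lstsq(time_courses, reference, rcond=None)
    summed_pattern = spatial_maps @ weights       # final weighted spatial pattern
    return summed_pattern, weights
```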
Integrating and classifying parametric features from fMRI data for brain function characterization
Yongmei Michelle Wang, Chunxiao Zhou
Recent advances in functional magnetic resonance imaging (fMRI) provide an unparalleled opportunity for measuring and characterizing brain function in humans. However, the typically small signal change is very noisy and susceptible to various artifacts, such as those caused by scanner drift, head motion, and cardio-respiratory effects. This paper presents an integrated and exploratory approach to characterize brain function from fMRI data by providing techniques for both functional segregation and integration without any prior knowledge of the experimental paradigm. We demonstrate that principal component analysis (PCA) can be used for temporal shape modeling and shape feature extraction, shedding light on the application of PCA in fMRI analysis from a different perspective. Appropriate feature screening is also performed to eliminate the parameters corresponding to data noise or artifacts. The extracted and screened shape parameters are revealed to be effective and efficient representations of the true fMRI time series. We then propose a novel strategy which classifies the fMRI data into distinct activation regions based on the selected temporal shape features. Furthermore, we propose to infer functional connectivity of the identified patterns by the distance measures in this parametric shape feature space. Validation for accuracy, sensitivity, and efficiency of the method and comparison with existing fMRI analysis techniques are performed using both simulated and real fMRI data.
Enhanced techniques for asymmetry quantification in brain imagery
We present an automated, generic methodology for symmetry identification and asymmetry quantification: a novel method for identifying and delineating brain pathology by comparing the opposing sides of the brain, exploiting the brain's inherent left-right symmetry. After the symmetry axis has been detected, we apply non-parametric statistical tests to the pairs of samples to identify initial seed points, defined as the pixels where the most statistically significant differences appear. Local region growing is then performed on the difference map, aggregating from the seeds until all 8-way connected high signals in the difference map are captured. We illustrate the capability of our method with examples ranging from tumors in patient MR data to animal stroke data. Validation results on rat stroke data show that this approach has promise to achieve high precision and full automation in segmenting lesions in reflectionally symmetric objects.
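A simplified sketch under the assumption that the symmetry axis coincides with the image midline: opposing patches are compared with a non-parametric paired test, the most significant patch seeds a difference-map segmentation, and the 8-connected component containing the seed is kept. Patch size and the difference threshold are illustrative choices, and the symmetry-axis detection step of the paper is not shown.

```python
# Sketch: paired Wilcoxon tests on mirrored patches, seeding region growing on
# the left-right difference map (8-connectivity).
import numpy as np
from scipy.stats import wilcoxon
from scipy.ndimage import label

def asymmetry_segment(img, patch=8, diff_quantile=0.95):
    flipped = img[:, ::-1]
    diff = img.astype(float) - flipped
    h, w = img.shape
    best_p, seed = 1.0, (0, 0)
    for r in range(0, h - patch, patch):
        for c in range(0, w // 2 - patch, patch):        # scan one hemisphere
            a = img[r:r + patch, c:c + patch].ravel()
            b = flipped[r:r + patch, c:c + patch].ravel()
            if np.any(a != b):
                p = wilcoxon(a, b).pvalue
                if p < best_p:
                    best_p, seed = p, (r + patch // 2, c + patch // 2)
    high = np.abs(diff) > np.quantile(np.abs(diff), diff_quantile)
    labels, _ = label(high, structure=np.ones((3, 3)))   # 8-connected components
    lesion = (labels == labels[seed]) & high             # empty if seed misses the mask
    return lesion, best_p
```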
Texture Poster Session
Analysis of the topological properties of the proximal femur on a regional scale: evaluation of multi-detector CT-scans for the assessment of biomechanical strength using local Minkowski functionals in 3D
H. F. Boehm, T. M. Link, R. A. Monetti, et al.
In our recent studies on the analysis of bone texture in the context of osteoporosis, we have already demonstrated the great potential of topological evaluation of bone architecture based on the Minkowski Functionals (MF) in 2D and 3D for predicting the mechanical strength of cubic bone specimens depicted by high-resolution MRI. In contrast to that earlier work, we now assess the mechanical characteristics of whole hip bone specimens imaged by multi-detector computed tomography. Due to the specific properties of the imaging modality and of the bone tissue in the proximal femur, this requires the introduction of a new analysis method. The internal architecture of the hip is functionally highly specialized to withstand the complex pattern of external and internal forces associated with human gait. Since the direction, connectivity, and distribution of the trabeculae change considerably within narrow spatial limits, it seems most reasonable to evaluate the femoral bone structure on a local scale. The Minkowski functionals are a set of morphological descriptors for the topological characterization of binarized, multi-dimensional, convex objects with respect to shape, structure, and the connectivity of their components. The MF are usually used as global descriptors and may react very sensitively to minor structural variations, which presents a major limitation in a number of applications. The objective of this work is to assess the mechanical competence of whole hip bone specimens using parameters based on the MF. We introduce an algorithm that considers the local topological aspects of the bone architecture of the proximal femur, allowing us to identify regions within the bone that contribute more to the overall mechanical strength than others.
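A rough sketch of evaluating Minkowski-functional-type descriptors on a local scale, assuming a recent scikit-image for the Euler characteristic: the binarized bone volume is divided into sub-blocks, and for each block the volume, an estimate of the surface area (internal exposed voxel faces), and the Euler number are recorded. The block size is an assumed parameter, and this is not the authors' exact local-MF formulation.

```python
# Sketch: per-block volume, surface estimate, and Euler characteristic of a
# binarized trabecular bone volume (local topological descriptors).
import numpy as np
from skimage.measure import euler_number

def local_minkowski(binary_vol, block=16):
    feats = []
    z, y, x = binary_vol.shape
    for k in range(0, z - block + 1, block):
        for j in range(0, y - block + 1, block):
            for i in range(0, x - block + 1, block):
                b = binary_vol[k:k + block, j:j + block, i:i + block].astype(bool)
                volume = int(b.sum())
                surface = sum(np.abs(np.diff(b.astype(int), axis=a)).sum()
                              for a in range(3))         # foreground/background faces
                euler = euler_number(b, connectivity=3) if volume else 0
                feats.append((k, j, i, volume, surface, euler))
    return np.array(feats)
```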
Automatic textural feature selection on echocardiographic images
The quality of the information content in echocardiographic images is often reduced by the presence of dropout, speckle, movement artifact, and far-field attenuation, even though ultrasound is well suited to assessing the dynamic aspects of the heart. The aim of this work is to find a set of texture features that optimally characterize the cardiac chambers in echocardiographic images and to use them to segment the image. In this work, seventy-seven texture characteristics were extracted from the echographic and border-map images. An optimal subset of them was selected by an automatic process based on a separateness criterion, a classification rate criterion, and a sequential forward algorithm. As a result, the optimal set of texture characteristics found was: {Echo, homogeneity of the co-occurrence matrix at 90°, central moment 22 of the original image, and central moment 22 of the border map}. The classification rate reached was 76.4%.
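A sketch of a sequential forward selection driven by a cross-validated classification-rate criterion; the feature matrix, labels, and the choice of a simple LDA classifier are stand-ins for the seventy-seven texture features and the exact criteria used in the paper.

```python
# Sketch: greedy sequential forward feature selection maximising the
# cross-validated classification rate.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def sequential_forward_selection(X, y, n_select=4):
    selected, remaining = [], list(range(X.shape[1]))
    best_rate = 0.0
    while len(selected) < n_select and remaining:
        scores = []
        for f in remaining:
            cols = selected + [f]
            rate = cross_val_score(LinearDiscriminantAnalysis(),
                                   X[:, cols], y, cv=5).mean()
            scores.append((rate, f))
        best_rate, best_f = max(scores)       # add the feature with the best rate
        selected.append(best_f)
        remaining.remove(best_f)
    return selected, best_rate
```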
Investigation of temporal radiographic texture analysis for the detection of periprosthetic osteolysis
Joel R. Wilkie, Maryellen L. Giger, Charles A. Engh Sr., et al.
Periprosthetic osteolysis is a disease caused by the body's response to submicron polyethylene debris particles from the hip implant in total hip replacement (THR) patients. It leads to resorption of bone surrounding the implant and deterioration of the bone's trabecular texture, but this is difficult to detect until the later stages of disease progression. Radiographic texture analysis methods have shown promise in detecting this disease at an earlier stage; however, changes in texture over time may be more important than absolute texture measures. In this research, we investigated temporal radiographic texture analysis (tRTA) methods as possible aids in the detection of osteolysis. A database of 48 THR cases with images available from four different follow-up time intervals was used. ROIs were selected within the osteolytic region of the most recent follow-up image (or a comparable region for normal cases) and visually matched on all previous images. Texture features were calculated from the ROIs, and trend analysis was then performed using a simple linear regression method, an LDA method, and a BANN method. The performance of these three methods was evaluated by ROC analysis. Maximum AUC values of 0.68, 0.78, and 0.88, respectively, were achieved in the task of distinguishing between osteolysis and normal cases. These performances were superior to those of our prior stationary, non-temporal texture analysis. The results suggest that tRTA may have the potential to help detect osteolysis at an earlier, more treatable stage.
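A minimal sketch of the simplest tRTA variant described: the slope of a texture feature across follow-up exams serves as the temporal feature, and ROC analysis compares osteolysis and normal cases. Input shapes and data are hypothetical; the LDA and BANN variants, which combine information across time points differently, are not shown.

```python
# Sketch: linear-regression slope of a texture feature over follow-ups, scored
# with ROC AUC against case labels.
import numpy as np
from sklearn.metrics import roc_auc_score

def trend_feature_auc(times, feature_series, labels):
    """times: (n_followups,); feature_series: (n_cases, n_followups);
    labels: (n_cases,) with 1 = osteolysis, 0 = normal."""
    slopes = np.array([np.polyfit(times, fs, deg=1)[0] for fs in feature_series])
    return slopes, roc_auc_score(labels, slopes)
```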
Validation Poster Session
Fourier-domain based datacentric performance ranking of competing medical image processing algorithms
To accomplish a given computational task, a number of algorithmic and heuristic approaches can be employed to act upon the ever-varying input data. Depending upon the assumptions made regarding the data, the algorithm, and the task, the end result from each of these approaches can differ. Currently, there does not exist an automatic, robust, precise, simple, and algorithm-independent measure to rate the accuracy of a multiplicity of algorithms for accomplishing a given task on given data. The lack of such a measure severely restricts the integration of "datacentric" computational tools. This paper proposes a Fourier-domain based method to robustly assess and rank the accuracy of a multiplicity of abstractions vis-a-vis the original data. The method is scalable across dimensions and data types and is blind to the task associated with the generation of the competing, to-be-rated abstractions.
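Since the abstract does not spell out the exact ranking formula, the sketch below is only an illustration of a Fourier-domain comparison: each candidate result is scored by the relative error of its Fourier magnitude spectrum against that of the original data, and candidates are ranked by that error (lower means closer agreement with the data).

```python
# Illustrative sketch: rank candidate results by a Fourier-magnitude-spectrum
# error relative to the original data (works for any dimensionality via fftn).
import numpy as np

def fourier_rank(original, candidates):
    """candidates: dict mapping a name to an array with the same shape as original."""
    ref = np.abs(np.fft.fftn(original.astype(float)))
    scores = {}
    for name, cand in candidates.items():
        spec = np.abs(np.fft.fftn(cand.astype(float)))
        scores[name] = np.linalg.norm(spec - ref) / (np.linalg.norm(ref) + 1e-12)
    return sorted(scores.items(), key=lambda kv: kv[1])   # best-ranked first
```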