Proceedings Volume 4684

Medical Imaging 2002: Image Processing


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 9 May 2002
Contents: 16 Sessions, 198 Papers, 0 Presentations
Conference: Medical Imaging 2002
Volume Number: 4684

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Tomographic Reconstruction I
  • Tomographic Reconstruction II/Statistical Methods
  • Segmentation I
  • Segmentation II
  • Shape
  • Validation
  • Motion/Pattern Recognition
  • Segmentation III
  • Deformable Geometry II
  • Registration I
  • Registration II/Models
  • Computer-Aided Diagnosis I
  • Computer-Aided Diagnosis II
  • Computer-Aided Diagnosis III
  • Poster Session I
Tomographic Reconstruction I
New solution to the gridding problem
Image reconstruction from nonuniformly sampled frequency domain data is an important problem that arises in computed imaging. The current reconstruction techniques suffer from fundamental limitations in their model and implementation that result in blurred reconstruction and/or artifacts. Here, we present a new approach for solving this problem that relies on a more realistic model and involves an explicit measure for the reconstruction accuracy that is optimized iteratively. The image is assumed piecewise constant to impose practical display constraints using pixels. We express the mapping of these unknown pixel values to the available frequency domain values as a linear system. Even though the system matrix is shown to be dense and too large to solve for practical purposes, we observe that applying a simple orthogonal transformation to the rows of this matrix converts the matrix into a sparse format. The transformed system is subsequently solved using the conjugate gradient method. The proposed method is applied to reconstruct images of a numerical phantom as well as actual magnetic resonance images using spiral sampling. The results support the theory and show that the computational load of this method is similar to that of other techniques. This suggests its potential for practical use.
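The row-transform-plus-conjugate-gradient step described above can be illustrated in a few lines. This is a minimal sketch, not the authors' implementation: the sparse matrix and right-hand side are random placeholders standing in for the transformed system matrix and the transformed frequency-domain data.

```python
# Sketch: solve a sparsified linear system with conjugate gradients.
# In the paper, A maps unknown pixel values to frequency-domain samples
# after an orthogonal row transform makes it sparse; here A is random.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n = 256                                   # number of unknown pixel values
mask = rng.random((n, n)) < 0.05          # ~5% nonzeros, mimicking sparsity
A = csr_matrix(rng.random((n, n)) * mask)
b = rng.random(n)                         # placeholder transformed data

# CG needs a symmetric positive (semi)definite operator, so solve the
# normal equations A^T A x = A^T b without forming A^T A explicitly.
normal = LinearOperator((n, n), matvec=lambda x: A.T @ (A @ x))
x, info = cg(normal, A.T @ b, maxiter=500)
print("cg status:", info)                 # 0 means converged
```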
Provably convergent OSEM-like reconstruction algorithm for emission tomography
Ing-Tsung Hsiao, Anand Rangarajan, Gene R. Gindi
We investigate a new, provably convergent OSEM-like (ordered-subsets expectation-maximization) reconstruction algorithm for emission tomography. The new algorithm, which we term C-OSEM (complete-data OSEM), can be shown to monotonically increase the log-likelihood at each iteration. The familiar ML-EM reconstruction algorithm for emission tomography can be derived in a novel way: one may write a single objective function in the complete data, the incomplete data, and the reconstruction variables, as in the EM approach. But in the objective-function approach there is no E-step; instead, a suitable alternating descent on the complete data and then on the reconstruction variables yields two update equations that can be shown to be equivalent to the familiar EM algorithm. Hence, minimizing this objective is equivalent to maximizing the likelihood. We derive our C-OSEM algorithm by modifying this approach to update the complete data only along ordered subsets. The resulting update equation is quite different from OSEM, but still retains the speed-enhancing feature of the updates due to the limited backprojection facilitated by the ordered subsets. Despite this modification, we are able to show that the objective function decreases at each iteration and, given a few more mild assumptions regarding the number of fixed points, conclude that C-OSEM converges monotonically toward the maximum-likelihood solution. We simulated noisy and noiseless emission projection data and reconstructed them using ML-EM, the proposed C-OSEM with 4 subsets, and OSEM. Anecdotal results show that C-OSEM is much faster than ML-EM, though slower than OSEM.
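For context, the baseline ML-EM iteration that C-OSEM accelerates has a compact multiplicative form. A toy sketch, assuming a random system matrix in place of a real projection geometry:

```python
# Illustrative ML-EM update for emission tomography (the baseline that
# C-OSEM speeds up). A is a toy system matrix, not a real geometry.
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((120, 64))            # projection bins x image voxels
x_true = rng.random(64) * 10
y = rng.poisson(A @ x_true)          # Poisson emission data

x = np.ones(64)                      # positive initial estimate
sens = A.sum(axis=0)                 # sensitivity image, A^T 1
for _ in range(50):
    proj = A @ x                     # forward projection
    ratio = y / np.maximum(proj, 1e-12)
    x *= (A.T @ ratio) / sens        # multiplicative EM update
print(float(np.abs(A @ x - y).mean()))
```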
Toward an analytical solution for 3D SPECT reconstruction with nonuniform attenuation and distance-dependent resolution variation: a Monte Carlo simulation study
Based on Kunyansky's and our previous work, an efficient, analytical solution to the reconstruction problem of myocardial perfusion SPECT has been developed that allows simultaneous compensation for non-uniform attenuation, scatter, and system-dependent resolution variation, as well as suppression of signal-dependent Poisson noise. To avoid reconstructed images being corrupted by the presence of Poisson noise, a Karhunen-Loeve (K-L) domain adaptive Wiener filter is applied first to suppress the noise in the primary- and scatter-window measurements. The scatter contribution to the primary-energy-window measurements is then removed by our energy-spectrum-based scatter estimation method, modified from the triple-energy-window acquisition protocol. The resolution variation is corrected by depth-dependent deconvolution, which, based on the central-ray approximation and the distance-frequency relation, deconvolves the scatter-free data with the accurate detector-response kernel in the frequency domain. Finally, the deblurred projection data are analytically reconstructed with non-uniform attenuation by an algorithm based on Novikov's explicit inversion formula. The preliminary Monte Carlo simulation results using a realistic human thoracic phantom demonstrate that, for parallel-beam geometry, the proposed analytical reconstruction scheme is computationally comparable to filtered backprojection and quantitatively equivalent to iterative maximum a posteriori expectation-maximization reconstruction. Extension to other geometries is in progress.
Incorporating known information into image reconstruction algorithms for transmission tomography
We propose an alternating minimization (AM) image estimation algorithm for iteratively reconstructing transmission tomography images. The algorithm is based on a model that accounts for much of the underlying physics, including Poisson noise in the measured data, beam hardening of polyenergetic radiation, energy dependence of the attenuation coefficients and scatter. It is well-known that these nonlinear phenomena can cause severe artifacts throughout the image when high-density objects are present in soft tissue, especially when using the conventional technique of filtered back projection (FBP). If we assume no prior knowledge of the high-density object(s), our proposed algorithm yields much improved images in comparison to FBP, but retains significant streaking between the high-density regions. When we incorporate the knowledge of the attenuation and pose parameters of the high-density objects into the algorithm, our simulations yield images with greatly reduced artifacts. To accomplish this, we adapted the algorithm to perform a search at each iteration (or after every n iterations) to find the optimal pose of the object before updating the image. The final iteration returns pose values within 0.1 millimeters and 0.01 degrees of the actual location of the high-density structures.
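The per-iteration pose search can be sketched as a grid search over candidate poses of the known high-density object, keeping the pose that minimizes the data discrepancy. This is an illustrative stand-in, not the paper's polyenergetic model: the "projector" here is a simple shift-and-add.

```python
# Sketch: between image updates, scan candidate in-plane shifts of the
# known high-density object and keep the one minimizing the residual.
import numpy as np
from scipy import ndimage

def refine_pose(measured, background, obj, shifts=np.arange(-1.0, 1.01, 0.25)):
    """Grid-search the object's (dx, dy) shift minimizing the residual."""
    best, best_err = (0.0, 0.0), np.inf
    for dx in shifts:
        for dy in shifts:
            pred = background + ndimage.shift(obj, (dy, dx), order=1)
            err = float(((measured - pred) ** 2).sum())
            if err < best_err:
                best, best_err = (dx, dy), err
    return best

bg = np.zeros((32, 32))
obj = np.zeros((32, 32)); obj[12:20, 12:20] = 5.0      # toy dense object
measured = bg + ndimage.shift(obj, (0.5, -0.25), order=1)
print(refine_pose(measured, bg, obj))                  # expect (-0.25, 0.5)
```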
Maximum-likelihood dual-energy tomographic image reconstruction
Jeffrey A. Fessler, Idris A. Elbakri, Predrag Sukovic, et al.
Dual-energy (DE) X-ray computed tomography (CT) has shown promise for material characterization and for providing quantitatively accurate CT values in a variety of applications. However, DE-CT has not been used routinely in medicine to date, primarily due to dose considerations. Most methods for DE-CT have used the filtered backprojection method for image reconstruction, leading to suboptimal noise/dose properties. This paper describes a statistical (maximum-likelihood) method for dual-energy X-ray CT that accommodates a wide variety of potential system configurations and measurement noise models. Regularized methods (such as penalized-likelihood or Bayesian estimation) are straightforward extensions. One version of the algorithm monotonically decreases the negative log-likelihood cost function each iteration. An ordered-subsets variation of the algorithm provides a fast and practical version.
Tomographic Reconstruction II/Statistical Methods
Image reconstruction from cone-beam data on a circular short-scan
Frederic Noo, Dominic J. Heuscher
3D image reconstruction from cone-beam projections on a circular short-scan is analyzed using the tools of CB reconstruction theory. By definition, CB data on a circular short-scan do not provide enough information for exact reconstruction outside the plane of the source path. However, circular short-scans are attractive in many applications due to practical constraints on the data acquisition geometry. This work attempts to determine the optimal tomographic capabilities of CB imaging with a circular short-scan. Two new algorithms are suggested for reconstruction and tested on the FORBILD head phantom with a large cone angle. The results are very encouraging.
Cone-angle-dependent generalized weighting scheme for 16-slice helical CT
Jiang Hsieh, Yanting Dong, Piero Simoni, et al.
Since the introduction of multi-slice helical computed tomography (MHCT), new clinical applications have experienced tremendous growth. MHCT offers improved volume coverage, faster scan speed, more isotropic spatial resolution, and reduced x-ray tube loading. As in single-slice helical CT, the projection data collected in MHCT are inherently inconsistent due to the constant table motion. In addition, cone-beam effects in MHCT produce additional complexity and image artifacts. Although the cone angle is quite small even in the 16-slice configuration, its impact on image artifacts cannot be ignored. Many reconstruction algorithms have recently been proposed and investigated to combat image artifacts associated with MHCT data acquisition. In this paper, we propose a cone-angle-dependent generalized weighting scheme for 16-slice helical CT that allows the production of MHCT images with only 2D backprojection. The cone-angle dependency of the algorithm suppresses image artifacts due to the cone-beam effect, and the generalized weighting enables interpolation to be performed with conjugate samples of the 16-slice helical dataset. With the proposed algorithm, image artifacts are significantly reduced.
Digital stereo-optic disc image analyzer for monitoring progression of glaucoma
This paper describes an automated 3-D surface recovery algorithm for consistent and quantitative evaluation of the deformation in the ONH (optic nerve head). Additional measures, such as the changes in the volume of the cup and the disc as an improvement to the traditional cup to disc ratios, can thus be developed for longitudinal follow-up study of a patient. We propose an automated computerized technique for stereo pair registration and surface visualization of the ONH. Power cepstrum and zero mean cross correlation are embedded in the registration and a 3-D surface recovery technique is proposed. Preprocessing, as well as an overall registration, is performed upon stereo pairs. Then a coarse to fine feature matching strategy is used to reduce the ambiguity in finding the conjugate pair of the same point within the constraints of the epipolar plane. A cubic B-spline interpolation smooths the representation of the ONH obtained, while superimposition of features such as blood vessels is added. Studies show high correlation between traditional cup/disc measures derived from manual segmentation by ophthalmologists and computer generated cup/disc volume ratio. Such longitudinal studies over a large population of glaucoma patients are currently in progress for validation of the surface recovery algorithm.
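The zero-mean cross-correlation matching at the core of the stereo registration can be sketched as follows; the window size, search range, and rectified-pair assumption are illustrative, not taken from the paper.

```python
# Sketch of zero-mean cross-correlation matching between a stereo pair,
# searching along a row (epipolar constraint for a rectified pair).
import numpy as np

def zmcc(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    """Zero-mean normalized cross correlation of two equal-size patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_match_along_row(left, right, row, col, half=7, search=20):
    """Return the column in `right` best matching (row, col) in `left`."""
    ref = left[row - half:row + half + 1, col - half:col + half + 1]
    scores = []
    for c in range(col - search, col + search + 1):
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        scores.append(zmcc(ref, cand))
    return col - search + int(np.argmax(scores))

rng = np.random.default_rng(2)
left = rng.random((64, 64))
right = np.roll(left, 4, axis=1)               # 4-pixel horizontal disparity
print(best_match_along_row(left, right, row=32, col=30))   # expect 34
```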
Novel spatial interaction prior for Bayesian image segmentation and restoration
The task of image segmentation implies estimation of the number and associated parameters of the classes within an image, and the class label for each image voxel. In this work, an over-segmentation of the data is first obtained using a Bayesian restoration algorithm. The method incorporates a novel spatial interaction prior, in which neighboring voxels can be classified differently so long as the distance between the centroids of their intensity distributions is within a certain extent. The corresponding functional is iteratively minimized using a series of local optimizations for the label field and a half-quadratic algorithm for the restoration. Redundant classes are then grouped in a second step by making use of information obtained in the initial restoration about the degree of affinity or interaction between the classes. The method is demonstrated on MRI images of the head.
Segmentation I
Simultaneous segmentation and tree reconstruction of the airways for virtual bronchoscopy
Thorsten Schlathoelter, Cristian Lorenz, Ingwer C. Carlsen, et al.
During the last couple of years, virtual endoscopic systems (VES) have emerged as standard tools that are now close to being used in daily clinical practice. Such tools render hollow human structures, allowing a clinician to visualize their inside in an endoscopic-like paradigm. It is common practice to attach the camera of a virtual endoscope to the centerline of the structure of interest, to facilitate navigation. This centerline has to be determined manually or automatically prior to an investigation. While there exist techniques that can straightforwardly handle simple tube-like structures (e.g. colon, aorta), structures like the tracheobronchial tree still represent a challenge due to their complex branching. In these cases it is necessary to determine all branching points within the tree, which - because of the complexity - is impractical to accomplish manually. This paper presents a simultaneous segmentation/skeletonization algorithm that extracts all major airway branches and large parts of the minor distal branches (up to 7th order) using a front propagation approach. During the segmentation, the algorithm keeps track of the centerline of the segmented structure and detects all branching points. This in turn allows the full reconstruction of the tracheobronchial tree.
Automatic vessel extraction and abdominal aortic stent planning in multislice CT
Krishna Subramanyan, Dava Smith, Jay Varma, et al.
The abdominal aorta is the most common site for the development of an aneurysm, which may lead to hemorrhage and death. The aim of this study was to develop a semi-automated method to delineate these vessels and detect their centerlines in order to make the measurements necessary for stent design from multi-detector computed tomograms. We developed a robust method of tracking the aortic vessel tree with its branches from a user-selected seed point along the vessel path, using scale-space approaches, central transformation measures, vessel direction finding, iterative corrections, and a priori information in determining the vessel branches. Fifteen patients were scanned with contrast on an Mx8000 CT scanner (Philips Medical Systems) with a 3.2 mm thickness and 1.5 mm slice spacing, and 512x512x320 volume data sets were reconstructed. The algorithm required an initial user input to locate the vessel seen in an axial CT slice. The automated image processing then took approximately two minutes to compute the centerline and borders of the aortic vessel tree. The manually and automatically generated vessel diameters were compared and statistics were computed. We observed that our algorithm was consistent (S.D. less than 0.01) and similar to manual results (S.D. less than 0.1).
Axiomatic path strength definition for fuzzy connectedness and the case of multiple seeds
This paper presents an extension of the theory and algorithms for fuzzy connectedness. In this framework, a strength of connectedness is assigned to every pair of image elements by finding the strongest connecting path between them. The strength of a path is the weakest affinity between successive pairs of elements along the path. Affinity specifies the degree to which elements hang together locally in the image. A fuzzy connected object containing a particular seed element is computed via dynamic programming. In all reported works so far, the minimum of affinities has been considered for path strength and the maximum of path strengths for fuzzy connectedness. The question thus remained all along as to whether there are other valid formulations for fuzzy connectedness. One of the main contributions of this paper is a theoretical investigation under reasonable axioms to establish that maximum of path strengths of minimum of affinities along each path is indeed the one and only valid choice. The second contribution here is to generalize the theory and algorithms of fuzzy connectedness to the multi-seeded case. The importance of multi-seeded fuzzy connectedness is illustrated with examples taken from several real medical imaging applications.
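The max-of-min dynamic program underlying fuzzy connectedness can be sketched with a Dijkstra-style best-first scan. The affinity below (one minus the intensity difference) is a placeholder for an application-specific affinity; the multi-seeded case comes for free by initializing every seed at strength one.

```python
# Sketch of multi-seeded fuzzy connectedness: a path's strength is the
# minimum affinity along it, and connectedness is the maximum strength
# over all paths, computed with a Dijkstra-like best-first scan.
import heapq
import numpy as np

def fuzzy_connectedness(img: np.ndarray, seeds) -> np.ndarray:
    conn = np.zeros(img.shape)
    heap = []
    for s in seeds:                          # all seeds start at strength 1
        conn[s] = 1.0
        heapq.heappush(heap, (-1.0, s))
    while heap:
        neg, (r, c) = heapq.heappop(heap)
        strength = -neg
        if strength < conn[r, c]:
            continue                         # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]:
                # Placeholder affinity in [0, 1] for intensities in [0, 1].
                affinity = 1.0 - abs(float(img[r, c]) - float(img[rr, cc]))
                cand = min(strength, affinity)   # weakest link on the path
                if cand > conn[rr, cc]:          # strongest path so far
                    conn[rr, cc] = cand
                    heapq.heappush(heap, (-cand, (rr, cc)))
    return conn

img = np.random.default_rng(3).random((32, 32))
print(round(float(fuzzy_connectedness(img, [(0, 0), (31, 31)]).mean()), 3))
```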
Novel theory and algorithm for fuzzy distance transform and its applications
Punam K. Saha, Bryon R. Gomberg, Felix W. Wehrli
This paper describes the theory and algorithms of fuzzy distance transform (FDT). Fuzzy distance is defined as the length of the shortest path between two points. The length of a path in a fuzzy subset is defined as the integration of fuzzy membership values of the points along the path. The shortest path between two points is the one with the minimum length among all (infinitely many) paths between the two points. It is demonstrated that, unlike in the binary case, the shortest path in a fuzzy subset is not necessarily a straight-line segment. The support of a fuzzy subset is the set of points with nonzero membership values. It is shown that, for any fuzzy subset, fuzzy distance is a metric for the interior of its support. FDT is defined as the process on a fuzzy subset that assigns at each point the smallest fuzzy distance from the boundary of the support. The theoretical framework of FDT in continuous space is extended to digital spaces and a dynamic programming-based algorithm is presented for its computation. Several potential medical imaging applications are presented including the quantification of blood vessels and trabecular bone thickness in the regime of limited spatial resolution.
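A rough sketch of the FDT on a 2-D membership image, assuming a simple discretization in which a step between 4-neighbors costs the average of their membership values; the paper's algorithm is a dynamic program of this flavor, though the details here are illustrative.

```python
# Sketch of a fuzzy distance transform: a step between neighboring pixels
# costs the average of their memberships, and the transform is the
# shortest such path cost from the boundary of the support (Dijkstra).
import heapq
import numpy as np

def fuzzy_distance_transform(mu: np.ndarray) -> np.ndarray:
    H, W = mu.shape
    fdt = np.full(mu.shape, np.inf)
    heap = []
    # Seed: support pixels adjacent to the background or the image edge.
    for r in range(H):
        for c in range(W):
            if mu[r, c] > 0 and any(
                not (0 <= r + dr < H and 0 <= c + dc < W)
                or mu[r + dr, c + dc] == 0
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            ):
                fdt[r, c] = mu[r, c] / 2.0      # half-step to the boundary
                heapq.heappush(heap, (fdt[r, c], (r, c)))
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > fdt[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < H and 0 <= cc < W and mu[rr, cc] > 0:
                step = d + (mu[r, c] + mu[rr, cc]) / 2.0
                if step < fdt[rr, cc]:
                    fdt[rr, cc] = step
                    heapq.heappush(heap, (step, (rr, cc)))
    return np.where(np.isinf(fdt), 0.0, fdt)

mu = np.zeros((21, 21)); mu[5:16, 5:16] = 0.8   # a fuzzy square
print(fuzzy_distance_transform(mu).max())       # deepest interior point
```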
Fuzzy segmentation of x-ray fluoroscopy images
Segmentation of fluoroscopy images is useful for fluoroscopy-to-CT image registration. However, it is impossible to assign a unique tissue type to each pixel. Rather each pixel corresponds to an entire path of tissue types encountered along a ray from the X-ray source to the detector plate. Furthermore, there is an inherent many-to-one mapping between paths and pixel values. We address these issues by assigning to each pixel not a scalar value but a fuzzy vector of tissue probabilities. We perform this segmentation in a probabilistic way by first learning typical distributions of bone, air, and soft tissue that correspond to certain fluoroscopy image values and then assigning each value to a probability distribution over its most likely generating paths. We then evaluate this segmentation on ground truth patient data.
Segmentation, surface extraction, and thickness computation of articular cartilage
S. Kubilay Pakin, Jose Gerardo Tamez-Pena, Saara Totterman, et al.
Accurate computation of the thickness of articular cartilage in 3D is crucial in the diagnosis of joint diseases. The purpose of this research project is to develop an unsupervised method to produce a three-dimensional (3D) thickness map of articular cartilage with magnetic resonance imaging (MRI). The method consists of two main parts: cartilage extraction and thickness map computation. The initial segmentation for cartilage extraction is achieved using a recently proposed algorithm based on region growing. The regions produced during this process are labeled as cartilage or non-cartilage using a voting procedure that essentially depends on local 2-class clustering and makes use of prior knowledge about cartilage regions. Following cartilage extraction, the femoral and tibial cartilages are separated by detecting the interface between them using a deformable model. After the separation, the cartilage surfaces are reconstructed as a triangular mesh and divided into two plates according to the relation between the surface normal at each vertex and the principal axes of the structure. For surface reconstruction, we propose an algorithm which incorporates a simple MR imaging model that allows surface representations with sub-voxel accuracy. Our thickness computation algorithm treats each plate separately as a deformable model while considering the other plate as the target surface towards which it is deformed. At the end of the deformation, the thickness value at each vertex is defined as the distance between its locations at the pre- and post-deformation instances. The performance of the cartilage segmentation is compared to manual tracing. Also, performance evaluation of the thickness computation algorithm on phantoms resulted in RMS errors on the order of 1%.
Segmentation II
Segmentation of 830- and 1310-nm LASIK corneal optical coherence tomography images
Optical coherence tomography (OCT) provides a non-contact and non-invasive means to visualize the corneal anatomy at micron scale resolution. We obtained corneal images from an arc-scanning (converging) OCT system operating at a wavelength of 830 nm and a fan-shaped-scanning high-speed OCT system with an operating wavelength of 1310 nm. Different scan protocols (arc/fan) and data acquisition rates, as well as wavelength dependent bio-tissue backscatter contrast and optical absorption, make the images acquired using the two systems different. We developed image-processing algorithms to automatically detect the air-tear interface, epithelium-Bowman's layer interface, laser in-situ keratomileusis (LASIK) flap interface, and the cornea-aqueous interface in both kinds of images. The overall segmentation scheme for 830 nm and 1310 nm OCT images was similar, although different strategies were adopted for specific processing approaches. Ultrasound pachymetry measurements of the corneal thickness and Placido-ring based corneal topography measurements of the corneal curvature were made on the same day as the OCT examination. Anterior/posterior corneal surface curvature measurement with OCT was also investigated. Results showed that automated segmentation of OCT images could evaluate anatomic outcome of LASIK surgery.
Assisted labeling techniques for the human brain cortex
With the improvements in techniques for generating surface models from magnetic resonance (MR) images, it has recently become feasible to study the morphological characteristics of the human brain cortex in vivo. Studies of the entire surface are important for measuring global features, but analysis of specific cortical regions of interest provides a more detailed understanding of structure. We have previously developed a method for automatically segmenting regions of interest from the cortical surface using a watershed transform. Each segmented region corresponds to a cortical sulcus and is thus termed a sulcal region. In this work, we describe three important augmentations of this methodology. First, we describe a user interface, the Interactive Program for Sulcal Labeling (IPSL), that allows for efficient labeling of the segmented sulcal regions. Two additional augmentations of the methodology allow for even finer division of regions on the cortex. Both employ the fast marching technique to track curves of interest on the cortical surface. These curves are then used to separate segmented regions. Validation experiments indicate that the proposed methodology gives highly repeatable results.
New approaches for measuring changes in the cortical surface using an automatic reconstruction algorithm
Dzung L. Pham, Xiao Han, Maryam E. Rettmann, et al.
In previous work, the authors presented a multi-stage procedure for the semi-automatic reconstruction of the cerebral cortex from magnetic resonance images. This method suffered from several disadvantages. First, the tissue classification algorithm used can be sensitive to noise within the image. Second, manual interaction was required for masking out undesired regions of the brain image, such as the ventricles and putamen. Third, iterated median filters were used to perform a topology correction on the initial cortical surface, resulting in an overly smoothed initial surface. Finally, the deformable surface used to converge to the cortex had difficulty capturing narrow gyri. In this work, all four disadvantages of the procedure have been addressed. A more robust tissue classification algorithm is employed and the manual masking step is replaced by an automatic method involving level set deformable models. Instead of iterated median filters, an algorithm developed specifically for topology correction is used. The last disadvantage is addressed using an algorithm that artificially separates adjacent sulcal banks. The new procedure is more automated but also more accurate than the previous one. Its utility is demonstrated by performing a preliminary study on data from the Baltimore Longitudinal Study of Aging.
Robustness of the brain parenchymal fraction for measuring brain atrophy
M. Stella Atkins, Jeffery J. Orchard, Benjamin Law, et al.
Other researchers have proposed that the brain parenchymal fraction (or brain atrophy) may be a good surrogate measure for disease progression in patients with Multiple Sclerosis. This paper considers various factors influencing the measure of the brain parenchymal fraction obtained from dual spin-echo PD and T2-weighted head MRI scans. We investigate the robustness of the brain parenchymal fraction with respect to two factors: brain-mask border placement which determines the brain intra-dural volume, and brain scan incompleteness. We show that an automatic method for brain segmentation produces an atrophy measure which is fairly sensitive to the brain-mask placement. We also show that a robust, reproducible brain atrophy measure can be obtained from incomplete brain scans, using data in a centrally placed subvolume of the brain.
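The brain parenchymal fraction itself is a simple ratio of segmented volumes; a small sketch, with hypothetical label conventions:

```python
# Sketch: brain parenchymal fraction from a labeled segmentation volume.
# The label ids (0=background, 1=GM, 2=WM, 3=CSF) are assumed, not the
# paper's convention.
import numpy as np

def parenchymal_fraction(labels, parenchyma_ids, intradural_ids):
    """BPF = parenchymal volume / total intra-dural (brain + CSF) volume."""
    parenchyma = np.isin(labels, parenchyma_ids).sum()
    intradural = np.isin(labels, intradural_ids).sum()
    return parenchyma / intradural

labels = np.random.default_rng(4).integers(0, 4, (64, 64, 40))
print(parenchymal_fraction(labels, parenchyma_ids=[1, 2],
                           intradural_ids=[1, 2, 3]))
```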
Fusing Markov random fields with anatomical knowledge and shape-based analysis to segment multiple sclerosis white matter lesions in magnetic resonance images of the brain
Stephan AlZubi, Klaus D. Toennies, N. Bodammer, et al.
This paper proposes an image analysis system to segment multiple sclerosis lesions in magnetic resonance (MR) brain volumes consisting of 3 mm thick slices in three channels (images showing T1-, T2-, and PD-weighted contrast). The method uses the statistical model of Markov random fields (MRF) at both low and high levels. The neighborhood system used in this MRF is of three types: (1) voxel to voxel: a low-level heterogeneous neighborhood system is used to restore noisy images; (2) voxel to segment: a fuzzy atlas, which indicates the probability distribution of each tissue type in the brain, is registered elastically with the MRF and used as a priori knowledge to correct misclassified voxels; (3) segment to segment: remaining lesion candidates are processed by a feature-based classifier that looks at unary and neighborhood information to eliminate more false positives. The algorithm's output was compared with an expert's manual segmentation.
Segmentation and classification of normal-appearing brain: how much is enough?
In this study, subsets of MR slices were examined to assess their ability to optimally predict the total cerebral volume of gray matter, white matter, and CSF. Patients underwent a clinical imaging protocol consisting of T1-, T2-, PD-, and FLAIR-weighted images after giving informed consent. MR imaging sets were registered, RF-corrected, and then analyzed with a hybrid neural network segmentation and classification algorithm to identify normal brain parenchyma. After processing the data, the correlations between the image subsets and the total cerebral volumes of gray matter, white matter, and CSF were examined. The 29 subjects (18F, 11M) assessed in this study were 1.7 to 18.7 (median = 5.2) years of age. The five subsets accounted for 5%, 15%, 24%, 56%, and 79% of the total cerebral volume. The predictive correlations for gray matter, white matter, and CSF in these subsets were, respectively: 5% (R = 0.94, 0.92, 0.91), 15% (R = 0.93, 0.95, 0.94), 24% (R = 0.92, 0.95, 0.94), 56% (R = 0.75, 0.95, 0.89), and 79% (R = 0.89, 0.98, 0.99). All subsets of slices examined were significantly correlated (p < 0.001) with the total cerebral volume of gray matter, white matter, and CSF.
Shape
Splines: a perfect fit for medical imaging
Splines, which were invented by Schoenberg more than fifty years ago, constitute an elegant framework for dealing with interpolation and discretization problems. They are widely used in computer-aided design and computer graphics, but have been neglected in medical imaging applications, mostly as a consequence of what one may call the bad press phenomenon. Thanks to some recent research efforts in signal processing and wavelet-related techniques, the virtues of splines have been revived in our community. There is now compelling evidence (several independent studies) that splines offer the best cost-performance tradeoff among available interpolation methods. In this presentation, we will argue that the spline representation is ideally suited for all processing tasks that require a continuous model of signals or images. We will show that most forms of spline fitting (interpolation, least squares approximation, smoothing splines) can be performed most efficiently using recursive digital filters. We will also have a look at their multiresolution properties which make them prime candidates for constructing wavelet bases and computing image pyramids. Typical application areas where these techniques can be useful are: image reconstruction from projection data, sampling grid conversion, geometric correction, visualization, rigid or elastic image registration, and feature extraction including edge detection and active contour models.
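The recursive-filter route to spline fitting is available off the shelf; a sketch using SciPy (not tied to this presentation) of prefiltering an image into cubic B-spline coefficients and then evaluating the continuous model on a transformed grid:

```python
# Sketch: cubic B-spline prefiltering via recursive filters, then
# evaluation of the continuous model for a geometric transformation.
import numpy as np
from scipy import ndimage

img = np.random.default_rng(5).random((64, 64))

# Recursive prefilter: B-spline coefficients such that the order-3 spline
# interpolates the samples exactly.
coeffs = ndimage.spline_filter(img, order=3)

# Evaluate the continuous spline on a rotated grid (e.g. sampling-grid
# conversion); prefilter=False because coeffs are already prefiltered.
rotated = ndimage.rotate(coeffs, angle=10.0, reshape=False,
                         order=3, prefilter=False)
print(rotated.shape)
```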
Effects of uncertainty in camera geometry on three-dimensional catheter reconstruction from biplane fluoroscopic images
Anthony Dietz, David B. Kynor, Eric Friets, et al.
Clinical procedures that rely on biplane x-ray images for three-dimensional (3-D) information may be enhanced by three-dimensional reconstructions. However, the accuracy of reconstructed images is dependent on the uncertainty associated with the parameters that define the geometry of the camera system. In this paper, we use a numerical simulation to examine the effect of these uncertainties and to determine the limits required for adequate three-dimensional reconstruction. We then test our conclusions with images of a calibration phantom recorded using a clinical system. A set of reconstruction routines, developed for a cardiac mapping system, were used in this evaluation. The routines include procedures for correcting image distortion and for automatically locating catheter electrodes. Test images were created using a numerical simulation of a biplane x-ray projection system. The reconstruction routines were then applied using accurate and perturbed camera geometries and error maps were produced. Our results indicate that useful catheter reconstructions are possible with reasonable bounds on the uncertainty of camera geometry provided the locations of the camera isocenters are accurate. The results of this study provide a guide for the specification of camera geometry display systems and for researchers evaluating possible methodologies for determining camera geometry.
Three-dimensional vascular network projective reconstruction from uncalibrated and non-subtracted x-ray rotational angiography image sequence
Moez Chakchouk, Sylvie Sevestre-Ghalila, Faouzi Ghorbel, et al.
X-ray rotational angiography has recently gained increasing interest for computer-assisted quantitative analysis. It provides more accurate assessment of vascular diseases and precise inspection of the complex structure of the arterial network via three-dimensional (3D) vascular reconstruction. The 3D spatial information can be obtained via a stereoscopic analysis of the two-dimensional (2D) projections of the opacified blood vessels. In this work, we focus on the problem of automatic 3D reconstruction of blood vessel networks for telediagnostic applications and therefore from an uncalibrated X-ray rotational angiography image sequence. Three main issues are addressed: 1) automatic, accurate, subpixel vascular median axis network detection from non-subtracted 2D angiography images, 2) robust matching of the extracted features by using an original method based on statistical tests, and 3) three-dimensional reconstruction through epipolar geometry determination from uncalibrated 2D images. Our reconstruction method has the advantage of being independent of the angiography acquisition system. It is therefore interesting for telemedicine and especially for telediagnostic systems.
Reconstruction of asymmetric vessel lumen from two views
Anant Gopal, Kenneth R. Hoffmann, Stephen Rudin, et al.
We have developed a technique based on epipolar geometry which allows reconstruction of vessel lumina containing asymmetries for arbitrary lumen orientation. Vessel centerlines and the imaging geometry are determined using previously described methods. Epipolar planes are established for pairs of centerline points and intensity profiles are extracted along epipolar lines. Extracted profiles are fitted to projection curves corresponding to elliptical cross-sections. The original vessel intensity profile is subtracted from the best-fit profile yielding the plaque profile. Corresponding best-fit profiles are used to reconstruct elliptical cross sections by backprojecting and identifying ray intersection points which lie within the model cross section. Assuming that the deformation of an otherwise elliptical lumen occurs as a result of plaque growing inward from the periphery, asymmetries are generated by deforming the cross sections to an extent consistent with the plaque profile in both views. The three-dimensional lumen may be obtained by combining individual cross sections in consecutive epipolar planes. The technique was evaluated using noiseless simulated angiograms. Reconstructed asymmetric lumen cross sections were found to be accurate to 95%, where accuracy was determined using area and distance criteria.
Validation
Methodology for evaluating image-segmentation algorithms
Jayaram K. Udupa, Vicki R. LaBlanc, Hilary Schmidt, et al.
The purpose of this paper is to describe a framework for evaluating image segmentation algorithms. Image segmentation consists of object recognition and delineation. For evaluating segmentation methods, three factors - precision (reproducibility), accuracy (agreement with truth, validity), and efficiency (time taken) - need to be considered for both recognition and delineation. To assess precision, we need to choose a figure of merit, repeat segmentation considering all sources of variation, and determine variations in the figure of merit via statistical analysis. It is usually impossible to establish true segmentation. Hence, to assess accuracy, we need to choose a surrogate of true segmentation and proceed as for precision. In determining accuracy, it may be important to consider different landmark areas of the structure to be segmented depending on the application. To assess efficiency, both the computational and the user time required for algorithm and operator training and for algorithm execution should be measured and analyzed. Precision, accuracy, and efficiency are interdependent. It is difficult to improve one factor without affecting others. Segmentation methods must be compared based on all three factors. The weight given to each factor depends on the application.
Multisite validation of image analysis methods: assessing intra- and intersite variability
Martin A. Styner, H. Cecil Charles, Jin Park, et al.
In this work, we present a unique set of 3D MRI brain data that is appropriate for testing the intra- and inter-site variability of image analysis methods. A single subject was scanned twice within a 24-hour window at each of five MR sites over a period of six weeks, using GE and Philips 1.5 T scanners. The imaging protocol included T1-weighted, proton density, and T2-weighted images. We applied three quantitative image analysis methods and analyzed their results via coefficients of variation (COV) and the intraclass correlation coefficient. The tested methods include two multi-channel tissue segmentation techniques, based on an anatomically guided manual seeding and an atlas-based seeding. The third tested method was a single-channel semi-automatic segmentation of the hippocampus. The results show that the outcome of image analysis methods varies significantly for images from different sites and scanners. With the exception of total brain volume, which shows consistently low variability across all images, the COVs were clearly larger between sites than within sites. Also, the COVs between sites with different scanner types are slightly larger than between sites with the same scanner type. The demonstrated significant inter-site variability requires adaptations in image analysis methods to produce repeatable measurements. This is especially important in multi-site clinical research.
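The coefficient-of-variation bookkeeping can be sketched in a few lines; the volumes below are made-up numbers, with two scans at each of five sites as in the study design:

```python
# Sketch: intra-site vs inter-site coefficient of variation (COV).
# The hippocampal volumes (ml) are invented for illustration.
import numpy as np

def cov_percent(x) -> float:
    """COV = sample standard deviation / mean, in percent."""
    x = np.asarray(x, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

scans = np.array([[3.1, 3.0],   # site 1: two repeated scans
                  [3.3, 3.2],   # site 2
                  [2.9, 3.0],   # site 3
                  [3.4, 3.3],   # site 4
                  [3.1, 3.2]])  # site 5
intra = np.mean([cov_percent(site) for site in scans])  # within-site COV
inter = cov_percent(scans.mean(axis=1))                 # between-site COV
print(f"intra-site {intra:.1f}%  inter-site {inter:.1f}%")
```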
Three-level evaluation process for segmentation methods in medical imaging
We propose an evaluation process for segmentation that is made up of three different levels. It enables us to carry out the time-consuming steps only for those segmentation methods for which a successful segmentation is foreseeable. At the first level, the developer of a segmentation method performs a coarse analysis of the usefulness of the individual segmentation methods by visual assessment of the results for a few example images. Methods judged useful at the first level are investigated in a second evaluation step as to the stability of the segmentation results under slight deviations in the images. To reproduce the image formation process, a multitude of realizations of a given region of interest are produced by means of the bootstrap technique, as sketched below. At the third level of the evaluation process, the segmentation methods are tested for segmentation errors. The segmentation methods are judged by means of empirical discrepancy values, and the effectiveness of a method chosen for the respective task is finally estimated.
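One way to realize that second-level stability test is sketched here; the bootstrap scheme (resampling pixel residuals) and the thresholding "segmenter" are placeholders for the method under evaluation, not the paper's exact procedure.

```python
# Sketch: bootstrap realizations of a region of interest to test the
# stability of a (placeholder) segmentation method.
import numpy as np

def segment(img: np.ndarray) -> np.ndarray:
    return img > img.mean()                  # trivial stand-in segmenter

rng = np.random.default_rng(6)
roi = rng.normal(100, 10, (32, 32))          # a region of interest

areas = []
for _ in range(200):
    # Bootstrap realization: add residuals resampled with replacement.
    resampled = roi + rng.choice((roi - roi.mean()).ravel(), roi.shape)
    areas.append(segment(resampled).sum())
print(np.std(areas) / np.mean(areas))        # relative stability of area
```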
Validation of a nonrigid registration algorithm for multimodal data
Peter Rogelj, Stanislav Kovacic, James C. Gee
We describe the evaluation of a non-rigid image registration method for multi-modal data. The evaluation is made difficult by the absence of gold standard test data for which the true transformation from one image to another is known. Different approaches have been used to deal with this deficiency, e.g., by using synthetically warped data, by comparison of anatomic regions of interest identified either manually or automatically, and by direct comparison of the registered data. Each of these approaches is limited, and in this paper we illustrate some of the problems that arise based on their application to the evaluation of our multi-modal non-rigid registration method.
Performance evaluation of an advanced method for automated identification of view positions of chest radiographs by use of a large database
Hidetaka Arimura, Shigehiko Katsuragawa, Takayuki Ishida, et al.
For implementation of computer-aided diagnostic systems for chest radiographs, it is important to correctly identify the view position, i.e., posteroanterior (PA) or lateral view. Our purpose is to develop an advanced computerized method based on a template matching technique for correctly identifying the PA or lateral view, and to apply this method to approximately 48,000 PA and 16,000 lateral chest radiographs. To evaluate the similarity with templates, correlation values of a chest image with various templates were obtained and compared to determine whether a chest image is a PA or a lateral view. By considering the variation in patient sizes, lung opacities, and lung sizes, we produced 24 templates of PA and lateral views. In the first step, the two largest correlation values of an unknown case with 3 PA and 2 lateral templates for medium-size patients were compared to determine the view position. In the second step, the cases unidentifiable in the first step were re-examined by comparing the correlation values with 11 to 19 templates for small and large patients. With the computerized method based on a template matching technique, 99.99% (63,788/63,791) of chest images in the large database were correctly identified as PA or lateral views.
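The two-step template decision can be sketched as below; the templates are random placeholders, and the "undecided margin" is an assumed mechanism for deferring cases to the second step with more templates.

```python
# Sketch: classify a chest radiograph as PA or lateral by comparing its
# best correlation against PA and lateral template sets.
import numpy as np

def correlation(img: np.ndarray, tmpl: np.ndarray) -> float:
    a = img - img.mean()
    b = tmpl - tmpl.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def classify_view(img, pa_templates, lat_templates, margin=0.05):
    pa = max(correlation(img, t) for t in pa_templates)
    lat = max(correlation(img, t) for t in lat_templates)
    if abs(pa - lat) < margin:
        return "undecided"   # would go to the second step (more templates)
    return "PA" if pa > lat else "lateral"

rng = np.random.default_rng(7)
img = rng.random((64, 64))
print(classify_view(img,
                    [rng.random((64, 64)) for _ in range(3)],   # 3 PA
                    [rng.random((64, 64)) for _ in range(2)]))  # 2 lateral
```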
Clinical evaluation of the reproducibility of volume measurements of pulmonary nodules
Dag Wormanns, Gerhard Kohl, Ernst Klotz, et al.
High reproducibility of volumetric measurements is an important prerequisite for follow-up of small lung nodules in order to differentiate malignant from benign lesions in a lung cancer screening setting. This study aimed to evaluate the measurement reproducibility of a new software tool for pulmonary nodule volumetry. In an ongoing study, 147 pulmonary nodules (size 1.6-17.5 mm) were examined with low-dose multidetector CT (Siemens Somatom Volume Zoom, 120 kVp, 20 mAs, detector collimation 4x1 mm, normalized pitch 1.75, slice thickness 1.25 mm, reconstruction increment 0.8 mm). Two consecutive low-dose scans covering the whole lung volume were performed within a few minutes. Between the two scans, patients were asked to leave the CT scanner, and the second scan was planned independently of the first. For all visually detected pulmonary nodules with a diameter <20 mm, nodule volume was determined on both scans using a software prototype containing segmentation and volumetry algorithms. Results from both scans were compared. Nodule volume differences were determined as the difference between the first and second measurement and ranged from 169 to 87%. The performance of the diagnostic test was measured using ROC analysis. For the detection of a volume doubling the area under the curve (Az) was 0.98; for a growth of 50% the Az was 0.89. Further refinement of the segmentation algorithm should lead to more consistent measurements in ill-defined nodules. In conclusion, volumetric measurement of pulmonary nodules in multislice CT data sets is a reliable tool for the detection of growth in small pulmonary nodules.
Motion/Pattern Recognition
Accelerating ultrasonic strain reconstructions by introducing mechanical constraints
Claire J. M. Pellot-Barakat, Jerome J. Mai, Christian M. Kargel, et al.
Ultrasonic strain images describe the stiffness of soft tissues. Strain estimates are obtained from the spatial derivative of local displacements between echo fields acquired before and after applying a small compressive force. Multi-scale displacement measurements using correlation estimates yield the highest quality strain images but are computationally expensive. An approach to strain reconstruction is proposed that constrains the correlation search based on physical priors in order to accelerate and optimize the estimation of local displacements. The data correlation kernel is chosen large in regions of constant displacement to minimize noise, and small in regions of varying displacement to maximize spatial resolution for strain. Multiple echo fields were acquired using a Siemens Elegra system with a 7.5 MHz linear array while slowly compressing phantoms and tissues. Gel phantoms with simulated stiff lesions and flow channels, as well as ex vivo muscle and in vivo breast tissues, were examined. The new algorithm provided images with noise equal to or lower than that of the traditional algorithm. Adaptively limiting the search in smoothly compressed regions reduced computational time by a factor of 1.5 to 8, depending on the applied compression and the complexity of motion.
Correction of translation-induced artifacts in wrist MRI scans using orthogonal acquisitions
Correction of motion artifacts in MRI due to inter-view in-plane 2-D rigid-body translations is possible using only the raw data of two standard 2DFT images acquired of the same object with the phase-encode direction swapped. Previous techniques simply use multiple or orthogonal images to reduce artifacts by ghost interference or geometric averaging. The orthogonal k-space phase difference (ORKPHAD) provides an overdetermined system of linear equations that can be solved directly to compensate for the phase errors caused by the translation. This technique was used to correct images of a volunteer's moving wrist. For all slices of the motion-corrupted data volumes, artifacts were dramatically reduced. Though the algorithm only accounts for inter-view translational motion, it is robust enough to correct real in vivo images that may also be corrupted by small amounts of rotational or out-of-plane motion. Experiments underway show the algorithm can tolerate bulk rotation of several degrees between orthogonal image pairs and that corrections using fractional NEX scans are possible. The current results and such ongoing advancements should make this correction technique practical for certain clinical scenarios vulnerable to in-plane translation.
Iso-shaping rigid bodies for estimating their motion from image sequences
Punam K. Saha, Jayaram K. Udupa, Bruce Elliot Hirsch
In many medical imaging applications, due to the limited field of view of imaging devices, often, acquired images include only a part of a structure. In such situations, it is impossible to guarantee that the images will contain exactly the same physical extent of the structure at different scans, which leads to difficulties in registration and in many other tasks, such as the analysis of the morphology, architecture, and kinematics. To facilitate such analysis, we developed a general method, referred to as isoshaping, that generates structures of the same shape from segmented images. The basis for this method is to automatically find a set of key points, called shape centers, in the segmented partial anatomic structure such that these points are present in all images and that they represent the same physical location in the object, and then trim the structure using these points as reference. The application area considered here is the analysis of the morphology, architecture, and kinematics of the foot joints from MR images acquired at different joint positions, load conditions, and longitudinal time instances. The accuracy of the method is studied and it is quantitatively demonstrated that isoshaping improves the results of registration.
ROC components-of-variance analysis of competing classifiers
Sergey V. Beiden, Marcus A. Maloof, Robert F. Wagner
In the last decade in the field of diagnostic imaging, the problem of variability of reader skill as well as patient case difficulty has given rise to a multivariate approach to receiver operating characteristic (ROC) analysis. The multivariate approach is the so-called multiple-reader, multiple-case (MRMC) ROC paradigm in which every reader reads every patient case and, where possible, in each of two modalities under comparison. The present paper demonstrates the isomorphism between patient cases, image readers, and imaging modalities in diagnostic imaging and, respectively, test sets, training sets, and competing discriminant algorithms in the field of statistical pattern recognition (SPR). Thus, the MRMC paradigm can be brought directly across from imaging to SPR. Recent MRMC ROC analytical methods are demonstrated in the context of SPR for the task of analyzing the natural components of variance in that problem involving test sets, training sets, and competing discriminants. Monte Carlo trials reported here indicate that the conventional wisdom that the variance of measures of classifier accuracy comes mainly from the finite test set is only true when assessing a single algorithm in a very limited context. In particular, it is generally not true when comparing competing discriminant algorithms; in that case the variance is dominated by the finite training set.
Cluster analysis of BI-RADS descriptions of biopsy-proven breast lesions
The purpose of this study was to identify and characterize clusters in a heterogeneous breast cancer computer-aided diagnosis database. Identification of subgroups within the database could help elucidate clinical trends and facilitate future model building. Agglomerative hierarchical clustering and k-means clustering were used to identify clusters in a large, heterogeneous computer-aided diagnosis database based on mammographic findings (BI-RADS) and patient age. The clusters were examined in terms of their feature distributions. The clusters showed logical separation of distinct clinical subtypes such as architectural distortions, masses, and calcifications. Moreover, the common subtypes of masses and calcifications were stratified into clusters based on age groupings. The percent of the cases that were malignant was notably different among the clusters. Cluster analysis can provide a powerful tool in discerning the subgroups present in a large, heterogeneous computer-aided diagnosis database.
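A plain k-means pass over coded BI-RADS-style features plus age might look like the following sketch; the feature coding, standardization, and k = 4 are illustrative choices, not the study's.

```python
# Sketch: k-means clustering of coded mammographic findings plus age.
import numpy as np

def kmeans(X: np.ndarray, k: int, iters: int = 100, seed: int = 0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each case to its nearest center.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(0) if (labels == j).any()
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

rng = np.random.default_rng(8)
X = np.column_stack([rng.integers(1, 6, 300),   # e.g. coded mass margin
                     rng.integers(1, 5, 300),   # e.g. coded calcification type
                     rng.normal(55, 12, 300)])  # patient age
labels, centers = kmeans((X - X.mean(0)) / X.std(0), k=4)
print(np.bincount(labels))                      # cluster sizes
```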
Segmentation III
Statistical and adaptive approaches for segmentation and vector source encoding of medical images
Statistical as well as adaptive clustering approaches are currently used for both segmentation and vector quantization of medical images. However, a comparative evaluation of the two approaches has rarely been done to identify their efficacy for specific applications, for example, image segmentation and vector quantization. The rate-distortion functions of three clustering algorithms, namely the statistically based deterministic annealing, the adaptive fuzzy leader clustering algorithm, and LBG, have been computed for vector quantization using multi-scale vectors in the wavelet domain. Such comparative evaluation serves as a guide for proper selection of clustering algorithms for global codebook generation in vector quantization and for image segmentation.
Adaptive speed term based on homogeneity for level-set segmentation
We tested an edge map computed from a local homogeneity measurement as a potential replacement for the traditional gradient-based edge map in level-set segmentation. In existing level-set methods, the gradient information is used as a stopping criterion for curve evolution and also provides the force attracting the zero level-set to the target boundary. However, in a discrete implementation the gradient-based term can never fully stop the level-set evolution, even for ideal edges, so leakage is often unavoidable. Also, the effective distance of the attracting force and the blurring of edges become a trade-off in choosing the shape and support of the smoothing filter. The proposed homogeneity measurement provides easier and more robust edge estimation, and the possibility of fully stopping the level-set evolution. The homogeneity term decreases from a homogeneous region toward the boundary, which dramatically increases the effective distance of the attracting force and also provides an additional measure of the overall approximation to the target boundary. It therefore provides a reliable criterion for adaptively changing the advance speed. By using this term, the leakage problem was avoided effectively in most cases compared to traditional level-set methods. The computation of the homogeneity is fast and its extension to the 3D case is straightforward.
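One plausible form of such a homogeneity-derived speed term is sketched below, using a local standard-deviation window; the window size and normalization are assumptions, not the paper's definition.

```python
# Sketch: a homogeneity map that is ~1 inside uniform regions and decays
# toward 0 at boundaries, usable as a multiplicative level-set speed.
import numpy as np
from scipy import ndimage

def homogeneity_speed(img: np.ndarray, size: int = 5) -> np.ndarray:
    local_mean = ndimage.uniform_filter(img, size)
    local_sq = ndimage.uniform_filter(img * img, size)
    local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 0.0))
    # High homogeneity -> high speed; near edges the speed drops to ~0.
    return 1.0 - local_std / (local_std.max() + 1e-12)

img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0
img += np.random.default_rng(9).normal(0, 0.05, img.shape)
print(homogeneity_speed(img).min())   # near 0 at the square's boundary
```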
Nonparametric segmentation of multispectral MR images incorporating spatial and intensity information
Joze Derganc, Bostjan Likar, Franjo Pernus
Image segmentation is concerned with partitioning an image into non-overlapping, constituent regions, which are homogeneous with respect to certain features. In magnetic resonance imaging (MRI), the most discriminative and commonly used features are the image intensities themselves. However, due to noise, partial volume effects, natural and spurious intensity variations, intensity distributions of distinct tissues generally overlap, which makes segmentation difficult and less precise. Using multi-spectral MR images and mapping intensities into a multidimensional feature space may help in segmentation. To further facilitate segmentation, we map the intensities and second derivatives of multi-spectral images into a common multidimensional feature space. Integration of intensity and spatial information may yield complex clusters that cannot be described by Gaussian mixture models or by hyper-spherical shapes. For this reason we devise a novel segmentation method based on non-parametric valley-seeking clustering. The valleys are found by estimating feature density gradients. The proposed segmentation method, with and without spatial information, is tested on simulated and real, single- and multi-spectral, MR brain images. The segmentation results are highly consistent with the gold standard, especially when combined with a procedure for intensity non-uniformity correction, presented in MI 4684-177.
Improving statistics for hybrid segmentation of high-resolution multichannel images
High-resolution multichannel textures are difficult to characterize with simple statistics and the high level of detail makes the selection of a particular contour using classical gradient-based methods not effective. We have developed a hybrid method that combines fuzzy connectedness and Voronoi diagram classification for the segmentation of color and multichannel objects. The multi-step classification process relies on homogeneity measures derived from moment statistics and histogram information. These color features have been optimized to best combine individual channel information in the classification process. The segmentation initialization requires only a set of interior and exterior seed points, minimizing user intervention and the influence of the initialization on the overall quality of the results. The method was tested on volumes from the Visible Human and on brain multi-protocol MRI data sets. The hybrid segmentation produced robust, rapid and finely detailed contours with good visual accuracy. The addition of quantized statistics and color histogram distances as classification features improved the robustness of the method with regards to initialization when compared to our original implementation.
Automatic segmentation of prostate boundaries in transrectal ultrasound (TRUS) imaging
Haisong Liu, Gang Cheng, Deborah Rubens, et al.
An automatic segmentation method for detecting the prostate boundary in transrectal ultrasound (TRUS) images was developed. The TRUS images were preprocessed by using an adaptive directional filtering and an automatic attenuation compensation for noise removal and contrast enhancement. A directional search strategy was used to locate key-points on the prostate boundary. The prostate contour was interpolated from the key-points under the supervision of a morphological prostate boundary model, which had been trained from prior manual segmentation of a large number of TRUS images. A new prostate center was calculated based on the intermediate segmentation result. The algorithm is reiterated until the prostate boundary and center reach a stable state. The overall performance of the method was compared to manual segmentation of an expert radiologist. About 78% out of 282 TRUS images (excluding base and apex slices) from three types of ultrasound machine (Acuson, Siemens, and B&K) were correctly delineated. The segmentation error was 0.9 mm averaged on 30 selected images, 10 for each type of machine. The computation time for a typical series of TRUS images is approximately 1 minute on a Pentium-II computer.
MR-image-based tissue analysis and its clinical applications
Kun Huang, Jianhua Xuan, Jozsef Varga, et al.
This paper presents a three-dimensional (3-D) tissue analysis method and its applications in partial volume correction and change analysis. The method uses a stochastic model-based approach and consists of two steps: (1) unsupervised tissue quantification and (2) 3-D segmentation. First, the MR image volume is modeled by the standard finite normal mixture (SFNM) distribution; it has been shown that the SFNM converges to the true distribution when the pixel images are asymptotically independent. Second, tissue quantification is achieved through (1) model selection by the minimum description length (MDL) criterion, (2) parameter initialization by optimal histogram quantization, and (3) parameter estimation by a fast EM algorithm using the global 3-D histogram rather than, as is conventional, the raw data. Finally, we develop a 3-D segmentation method using maximum likelihood (ML) classification and contextual Bayesian relaxation labeling (CBRL). The CBRL is developed to obtain a consistent labeling solution, based on a localized SFNM formulation using neighborhood contextual regularities. The method has been applied to partial volume correction for PET brain images and change analysis for MR breast images.
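The histogram-driven EM step can be sketched by running the E- and M-steps over bin centers weighted by bin counts rather than over raw voxels; the two-class 1-D example below is illustrative, not the paper's implementation.

```python
# Sketch: EM for a finite normal mixture using histogram sufficient
# statistics (bin centers weighted by counts) instead of raw voxels.
import numpy as np

def em_histogram(centers, counts, K, iters=100):
    w = np.full(K, 1.0 / K)
    mu = np.linspace(centers.min(), centers.max(), K)
    var = np.full(K, centers.var())
    for _ in range(iters):
        # E-step evaluated once per bin center.
        resp = w * np.exp(-0.5 * (centers[:, None] - mu) ** 2 / var) / np.sqrt(var)
        resp /= resp.sum(1, keepdims=True) + 1e-300
        # M-step weighted by bin counts.
        nk = (counts[:, None] * resp).sum(0)
        w = nk / counts.sum()
        mu = (counts[:, None] * resp * centers[:, None]).sum(0) / nk
        var = (counts[:, None] * resp * (centers[:, None] - mu) ** 2).sum(0) / nk
    return w, mu, np.sqrt(var)

rng = np.random.default_rng(10)
data = np.concatenate([rng.normal(60, 8, 20000), rng.normal(110, 10, 30000)])
counts, edges = np.histogram(data, bins=256)
centers = (edges[:-1] + edges[1:]) / 2
print(em_histogram(centers, counts.astype(float), K=2)[1])  # ~[60, 110]
```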
Deformable Geometry II
Segmentation of cardiac MR volume data using 3D active appearance models
Active Appearance Models (AAMs) are useful for the segmentation of cardiac MR images since they exploit prior knowledge about the cardiac shape and image appearance. However, traditional AAMs only process 2D images and do not take into account the 3D data inherent to MR. This paper presents a novel, true 3D Active Appearance Model that models the intrinsic 3D shape and image appearance of the left ventricle in cardiac MR data. In 3D-AAM, the shape and appearance of the Left Ventricle (LV) are modeled from a set of expert-drawn contours. The contours are resampled to a manually defined set of landmark points and subsequently aligned. Variations in both shape and texture are captured using Principal Component Analysis (PCA) on the training set. Segmentation is achieved by minimizing the model appearance-to-target differences, adjusting the model eigen-coefficients using a gradient descent approach. The clinical potential of the 3D-AAM is demonstrated in short-axis cardiac magnetic resonance (MR) images. The method's performance was assessed by comparison with manually identified independent standards in 56 clinical MR sequences. The method showed good agreement with the independent standards using quantitative indices such as border positioning errors, endo- and epicardial volumes, and left ventricular mass. The 3D-AAM method shows high promise for the segmentation of three-dimensional cardiac MR images.
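The PCA training stage described above can be summarized compactly: aligned landmark vectors are decomposed into a mean shape plus a small number of eigenmodes, and new shapes are synthesized from eigen-coefficients. Below is a minimal sketch with assumed array shapes, not the authors' code.

```python
import numpy as np

def build_shape_model(shapes, var_kept=0.98):
    """shapes: (n_samples, n_points*3) aligned landmark coordinates.
    Returns the mean shape, retained eigenmodes, and mode variances."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    # eigen-decomposition of the sample covariance via SVD
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    var = s ** 2 / (len(shapes) - 1)
    # keep the smallest number of modes explaining var_kept of variance
    k = np.searchsorted(np.cumsum(var) / var.sum(), var_kept) + 1
    return mean, Vt[:k], var[:k]

def synthesize(mean, modes, b):
    """Generate a shape instance from eigen-coefficients b."""
    return mean + b @ modes
```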
Integrated approach to brightness- and contrast-invariant detection of the heart in SPECT imaging
Guo-Qing Wei, JianZhong Qian, John Engdahl
In this paper we propose an appearance-based method for heart detection by principal component analysis (PCA). In contrast to conventional PCA-based training, there is no brightness and contrast normalization, since such normalization is usually based on maximum and minimum intensity values and is very sensitive to noise. We propose instead to integrate the normalization procedure into the detection phase. This is achieved by projecting the intensity-transformed image (with unknown scale and shift parameters) onto the eigen-images and minimizing the error of fit. This leads to a set of equations in both the intensity transformation parameters and the projection coefficients. By using the least-squares method, these equations can be easily solved for the scale and shift parameters. After an initial detection of heart positions is conducted, robust fitting of the heart trajectory is used to correct any detection errors. In addition, we propose an eigen-image re-orthonormalization method for multiple-resolution detection without extra training on multiple scales.
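One plausible way to set up the joint estimation described above is as a single linear least-squares problem in the scale, shift, and projection coefficients. The sketch below assumes orthonormal eigen-images stored as rows; it illustrates the idea only and is not the paper's exact formulation.

```python
import numpy as np

def fit_intensity_and_projection(img, mean_img, eigen_imgs):
    """img: (n_pix,) candidate patch; eigen_imgs: (K, n_pix) orthonormal
    eigen-images. Solve for scale a, shift b, and coefficients c in
        a*img + b*1 - sum_k c_k e_k  ~=  mean_img   (least squares)."""
    ones = np.ones_like(img)
    A = np.column_stack([img, ones, -eigen_imgs.T])
    x, *_ = np.linalg.lstsq(A, mean_img, rcond=None)
    a, b, c = x[0], x[1], x[2:]
    # residual of fit: how well the normalized patch lies in eigenspace
    residual = a * img + b - (mean_img + eigen_imgs.T @ c)
    return a, b, c, np.linalg.norm(residual)
```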
Fully automated endocardial contour detection in time sequences of echocardiograms by three-dimensional active appearance models
A novel 3-D Active Appearance Model (3-D AAM) is applied to fully automated endocardial contour detection in 2-D + time (2DT) 4-chamber ultrasound sequences, without knowledge of cardiac phase (ED/ES frames). The 2DT appearance of the heart is modeled in 3-D by converting the stack of 2-D time slices into a 3-D voxel space. In a training set, an expert defines corresponding endocardial contour points for one complete cardiac cycle (ED to ED). The 2DT shape is represented as a 3-D surface, and image appearance is modeled as a vector of voxel intensities in a volume patch spanned by the 3-D surface. Principal Component Analysis extracts eigenvariations of 3-D shape and appearance, capturing typical cardiac motion patterns. The 3-D AAM segments the image volume by minimizing 3-D model-to-target intensity differences, adjusting eigenvariation coefficients and 3-D pose using gradient descent minimization. This provides time-continuous border localization for one beat in both time and space. The method was tested on 3-beat sequences from 129 patients, split randomly into a training set (65) and a test set (64). An independent expert manually drew all endocardial contours. The 3-D AAM converged well in 89% of test cases. The average absolute temporal error was 37.0 msec and the spatial error 3.35 mm, comparable to human inter-observer variability.
Active-shape-model-based segmentation of abdominal aortic aneurysms in CTA images
An automated method for the segmentation of thrombus in abdominal aortic aneurysms from CTA data is presented. The method is based on Active Shape Model (ASM) fitting in sequential slices, using the contour obtained in one slice as the initialisation in the adjacent slice. The optimal fit is defined by maximum correlation of grey value profiles around the contour in successive slices, in contrast to the original ASM scheme as proposed by Cootes and Taylor, where the correlation with profiles from training data is maximised. An extension to the proposed approach prevents the inclusion of low-intensity tissue and allows the model to refine to nearby edges. The applied shape models contain either one or two image slices, the latter explicitly restricting the shape change from slice to slice. To evaluate the proposed methods, a leave-one-out experiment was performed using six datasets containing 274 slices to segment. Both adapted ASM schemes yield significantly better results than the original scheme (p<0.0001). The extended slice correlation fit of a one-slice model showed the best overall performance. Using one manually delineated image slice as a reference, an average of 29 slices could be automatically segmented with an accuracy within the bounds of manual inter-observer variability.
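The fit criterion above, maximum correlation of grey-value profiles between successive slices, can be scored as in the sketch below. The helper that samples profiles along contour normals is assumed; only the normalized cross-correlation score is shown.

```python
import numpy as np

def profile_correlation(profiles_prev, profiles_curr):
    """profiles_*: (n_landmarks, profile_len) grey values sampled along
    the contour normals in two adjacent slices; returns the mean
    normalized cross-correlation over all landmarks."""
    a = profiles_prev - profiles_prev.mean(axis=1, keepdims=True)
    b = profiles_curr - profiles_curr.mean(axis=1, keepdims=True)
    num = (a * b).sum(axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-12
    return (num / den).mean()
```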
Diaphragm dome surface segmentation in CT data sets: a 3D active appearance model approach
Reinhard Beichel, Georg Gotschuli, Erich Sorantin, et al.
Knowledge about the location of the diaphragm dome surface, which separates the lungs and the heart from the abdominal cavity, is of vital importance for applications like automated segmentation of adjacent organs (e.g., liver) or functional analysis of the respiratory cycle. We present a new 3D Active Appearance Model (AAM) approach to segmentation of the top layer of the diaphragm dome. The 3D AAM consists of three parts: a 2D closed curve (reference curve), an elevation image, and texture layers. The first two parts combined represent the 3D shape information, and the third part represents the image intensity of the diaphragm dome and the surrounding layers. Differences in height between dome voxels and a reference plane are stored in the elevation image. The reference curve is generated by a parallel projection of the diaphragm dome outline in the axial direction. Landmark point placement is done only on the (2D) reference curve, which can be seen as the bounding curve of the elevation image. Matching is based on a gradient-descent optimization process and uses image intensity appearance around the actual dome shape. Results achieved in 60 computer-generated phantom data sets show a high degree of accuracy (positioning error −0.07 ± 1.29 mm). Validation using real CT data sets yielded a positioning error of −0.16 ± 2.95 mm. Additional training and testing on in-vivo CT image data is ongoing.
Three-dimensional knowledge-based surface model for segmentation of organic structures
Michael Kohnen, Andreas H. Mahnken, Joerg Kesten, et al.
A 3D surface model for the segmentation of organic structures in CT and MR datasets has been developed. The training dataset for the surface model is computed from semiautomatically generated voxel sets. Triangulated meshes of the voxel sets representing the objects' surfaces are generated. The surface model is able to learn the shape variations in the training dataset by a principal component analysis of the information provided by the points forming the triangulated surface meshes. Furthermore, the image information at the mesh points is added into gray value models describing the gray value distribution at each particular surface section. The optimization of the model is performed by iteratively moving the surface points of the model towards image structures fitting the gray value models. During the optimization process, the model's shape information ensures that the surface stays plausible. A 3D model of the spleen, generated from 10 objects, and a kidney model, generated from 7 left kidneys, have been developed. The models have been tested on 3 unknown spleen and 3 unknown kidney datasets. The total overlap between the model and the organs varied between 65% and 75%, a respectable result given the small training datasets from which the models were generated.
Hands-on experience with active appearance models
Hans Henrik Thodberg
The aim of this work is to explore the performance of active appearance models (AAMs) in the reconstruction and interpretation of bones in hand radiographs. AAM is a generative approach that unifies image segmentation and image understanding. Initial locations for the AAM search are generated by an exhaustive filtering method, and a series of AAMs for smaller groups of bones are used. It is found that the AAM successfully reconstructs 99% of metacarpals, proximal and medial phalanges, and the distal 3 cm of radius and ulna. The rms accuracy is better than 240 microns (point-to-curve). The generative property is used (1) to define a measure of fit that allows the models to self-evaluate and choose among the multiple solutions found, (2) to overcome obstacles in the image in the form of rings by predicting the missing part, and (3) to detect anomalies, e.g. rheumatoid arthritis. The shape scores are used as biometrics to check the identity of patients in a longitudinal study. The conclusion is that AAM provides a highly efficient and unified framework for various tasks in the diagnosis and assessment of bone-related disorders.
Registration I
Rigid 2D/3D registration of intraoperative digital x-ray images and preoperative CT and MR images
Dejan Tomazevic, Bostjan Likar, Franjo Pernus
This paper describes a novel approach to register 3D computed tomography (CT) or magnetic resonance (MR) images to a set of 2D X-ray images. Such a registration may be a valuable tool for intraoperative determination of the precise position and orientation of some anatomy of interest defined in preoperative images. The registration is based solely on the information present in the 2D and 3D images. It does not require fiducial markers, X-ray image segmentation, or construction of digitally reconstructed radiographs. The originality of the approach lies in using normals to bone surfaces, preoperatively defined in 3D MR or CT data, and gradients of intraoperative X-ray images, which are back-projected towards the X-ray source. The registration is then concerned with finding the rigid transformation of a CT or MR volume that provides the best match between surface normals and back-projected gradients, considering their amplitudes and orientations. The method is tested on a lumbar spine phantom. Gold standard registration is obtained by fiducial markers attached to the phantom. Volumes of interest, containing single vertebrae, are registered to different pairs of X-ray images from different starting positions, chosen randomly and uniformly around the gold standard position. Target registration errors and rotation errors are on the order of 0.3 mm and 0.35 degrees for the CT to X-ray registration and 1.3 mm and 1.5 degrees for the MR to X-ray registration. The registration is shown to be fast and accurate.
Shape-adapted motion model based on thin-plate splines and point clustering for point set registration
Julian Mattes, Johannes Fieres, Roland Eils
This paper focuses on the ill-posedness of deformable point set registration, and we propose a new approach that restricts the solution space using shape information. The basic elements of the investigated kind of registration algorithm are a cost functional, an optimization strategy, and a motion model. The motion model determines the kinds of motions and deformations that are allowed and how they are restricted; it is mainly determined by the kind of parameterized transformation used to express the motion/deformation. Here, we observe that matching with more degrees of freedom (the parameters of the transformation) than necessary can introduce mismatches due to a higher sensitivity to noise or by destroying local shape information. In this paper we propose a cost functional that is robust to noise, and we introduce a new method to specify a shape-adapted deformation model based on thin-plate splines and initial control point placement using point clustering. We show that these initial positions have a strong impact on the match, and we define them as cluster centers, clustering on one of the point sets and weighting each point of this set by its distance to the other point set. Our experiments with known ground truth show that the shape-adapted model consistently recovers corresponding points very accurately. In an evaluation with more than 1200 single experiments, we showed that, compared to a conventional octree-based scheme, we could save more than 60% of the degrees of freedom while preserving matching quality.
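The control-point placement step, clustering one point set with each point weighted by its distance to the other set, might look like the following weighted Lloyd iteration. This is a sketch under assumed point-set shapes, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def control_points_by_clustering(P, Q, n_ctrl, n_iter=20, seed=0):
    """P, Q: (n, 3) point sets. Returns (n_ctrl, 3) cluster centers of P,
    where each point of P is weighted by its distance to Q."""
    w, _ = cKDTree(Q).query(P)          # weight = distance to other set
    rng = np.random.default_rng(seed)
    centers = P[rng.choice(len(P), n_ctrl, replace=False)].astype(float)
    for _ in range(n_iter):             # weighted Lloyd iterations
        labels = cKDTree(centers).query(P)[1]
        for k in range(n_ctrl):
            m = labels == k
            if m.any():
                centers[k] = np.average(P[m], axis=0, weights=w[m] + 1e-12)
    return centers
```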
Three-dimensional warping registration of the pelvis and prostate
Baowei Fei, Corey Kemper, David L. Wilson
We are investigating interventional MRI guided radio-frequency (RF) thermal ablation for the minimally invasive treatment of prostate cancer. Among many potential applications of registration, we wish to compare registered MR images acquired before and immediately after RF ablation in order to determine whether a tumor is adequately treated. Warping registration is desired to correct for potential deformations of the pelvic region and movement of the prostate. We created a two-step, three-dimensional (3D) registration algorithm using mutual information and thin plate spline (TPS) warping for MR images. First, automatic rigid body registration was used to capture the global transformation. Second, local warping registration was applied. Interactively placed control points were automatically optimized by maximizing the mutual information of corresponding voxels in small volumes of interest and by using a 3D TPS to express the deformation throughout the image volume. Images were acquired from healthy volunteers in different conditions simulating potential applications. A variety of evaluation methods showed that warping consistently improved registration for volume pairs whenever patient position or condition was purposely changed between acquisitions. A TPS transformation based on 180 control points generated excellent warping throughout the pelvis following rigid body registration. The prostate centroid displacement for a typical volume pair was reduced from 3.4 mm to 0.6 mm when warping was added.
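The quantity maximized when optimizing each control point is the mutual information of corresponding voxels in small volumes of interest. A standard histogram-based estimate is sketched below; the bin count is illustrative.

```python
import numpy as np

def mutual_information(voi_a, voi_b, bins=32):
    """Histogram-based mutual information of two corresponding
    volumes of interest (arbitrary shape, same size)."""
    joint, _, _ = np.histogram2d(voi_a.ravel(), voi_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of voi_a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of voi_b
    nz = pxy > 0                          # avoid log(0)
    return (pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum()
```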
Registration-based mesh construction technique for finite-element models of brains
The generation of patient-specific meshes for Finite Element Methods (FEM) with application to brain deformation is a time-consuming process, but it is essential for modeling intra-operative deformation of the brain during neurosurgery using FEM techniques. We present an automatic method for the generation of FEM meshes fitting patient data. The method is based on non-rigid registration of patient MR images to an atlas brain image, followed by deformation of a high-quality mesh of this atlas brain. We demonstrate the technique on brain MR images from 12 patients undergoing neurosurgery. We show that the FEM meshes generated by our technique are of good quality. We then demonstrate the utility of these FEM meshes by simulating simple neurosurgical scenarios on example patients, and show that the deformations predicted by our brain model match the observed deformations. The meshes generated by our technique are of good quality and are suitable for modeling the types of deformation observed during neurosurgery. The deformations predicted by a simple loading scenario match well with those observed following the actual surgery. This paper does not attempt an exhaustive study of brain deformation, but it does provide an essential tool for such a study: a method of rapidly generating Finite Element Meshes fitting individual subject brains.
Finite-element based validation of nonrigid registration using single- and multilevel free-form deformations: application to contrast-enhanced MR mammography
This work presents a validation study for non-rigid registration of 3D contrast-enhanced magnetic resonance mammography images. We are using our previously developed methodology for simulating physically plausible, biomechanical tissue deformations using finite element methods to compare two non-rigid registration algorithms based on single-level and multi-level free-form deformations using B-splines and normalized mutual information. We have constructed four patient-specific finite element models and applied the solutions to the original post-contrast scans of the patients, simulating tissue deformation between image acquisitions. The original image pairs were registered to the FEM-deformed post-contrast images using different free-form deformation mesh resolutions. The target registration error was computed for each experiment with respect to the simulated gold standard on a voxel basis. Registration error and single-level free-form deformation resolution were found to be intrinsically related: the smaller the spacing, the higher the localized errors, indicating local registration failure. For multi-level free-form deformations, the registration errors improved with increasing mesh resolution. This study forms an important milestone in making our non-rigid registration framework applicable for clinical routine use.
Registration II/Models
Nonrigid mammogram registration using mutual information
Most papers dealing with the task of mammogram registration approach it by matching corresponding control points derived from anatomical landmark points. One of the caveats encountered when using pure point-matching techniques is their reliance on accurately extracted anatomical feature points. This paper proposes an innovative approach to matching mammograms which combines the use of a similarity measure and a point-based spatial transformation. Mutual information is used as a cost function to determine the degree of similarity between the two mammograms. An initial rigid registration is performed to remove global differences and bring the mammograms into approximate alignment. The mammograms are then subdivided into smaller regions, and each of the corresponding subimages is matched independently using mutual information. The centroids of each of the matched subimages are then used as corresponding control-point pairs in association with the Thin-Plate Spline radial basis function. The resulting spatial transformation generates a nonrigid match of the mammograms. The technique is illustrated by matching mammograms from the MIAS mammogram database. An experimental comparison is made between mutual information incorporating purely rigid behavior and that incorporating a more nonrigid behavior. The effectiveness of the registration process is evaluated using image differences.
Extension of target registration error theory to the composition of transforms
This paper is an extension of work first published by Fitzpatrick et al. in 1998 and concerns accuracy prediction in point-based image registration. Fitzpatrick et al. derived a formula to predict target registration error (TRE), i.e., the error introduced in identifying a target point because of the inherent errors in locating the points used to calculate the registration transform. In this work, we extend Fitzpatrick's derivation to the case in which an optically tracked probe has its position measured relative to a coordinate reference frame (CRF), an optically tracked device that is rigidly affixed to the patient. In this case, the registration transformation is actually a composition of two transforms. Our derivation shows that the existing TRE theory may be applied independently to the two transforms that compose the registration, and the resulting values add in quadrature to give the overall TRE. We have confirmed this result using statistical numerical simulations. This derivation has important implications for designing optically tracked instruments for image-guided surgery. Probes and CRFs may be designed separately so that each has maximal accuracy, and the configuration of the two instruments will remain optimal when they are used in conjunction.
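Stated as a formula, the composition result (a restatement of the abstract's claim, not a new derivation) says the component TREs combine in quadrature at any target position:

$$ \mathrm{TRE}_{\mathrm{total}}^{2}(\mathbf{r}) \;=\; \mathrm{TRE}_{\mathrm{probe}}^{2}(\mathbf{r}) \;+\; \mathrm{TRE}_{\mathrm{CRF}}^{2}(\mathbf{r}) $$

For example, component TREs of 0.6 mm (probe) and 0.8 mm (CRF) would predict an overall TRE of sqrt(0.6^2 + 0.8^2) = 1.0 mm.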
Intensity-based registration algorithm for probabilistic images and its application for 2D to 3D image registration
Registration of 2-D projection images and 3-D volume images is still a largely unsolved problem. In order to register a pre-operative CT image to an intra-operative 2-D x-ray image, one typically computes simulated x-ray images from the attenuation coefficients in the CT image (Digital Reconstructed Radiograph, DRR). The simulated images are then compared to the actual image using intensity-based similarity measures to quantify the correctness of the current relative pose. However, the spatial information present in the CT is lost in the process of computing projections. This paper first introduces a probabilistic extension to the computation of DRRs that preserves much of the spatial separability of tissues along the simulated rays. In order to handle the resulting non-scalar data in intensity-based registration, we propose a way of computing entropy-based similarity measures such as mutual information (MI) from probabilistic images. We give an initial evaluation of the feasibility of our novel image similarity measure for 2-D to 3-D registration by registering a probabilistic DRR to a deterministic DRR computed from patient data used in frameless stereotactic radiosurgery.
MRI simulator with static field inhomogeneity
Duane Yoder, Eric Changchien, Cynthia B. Paschal, et al.
This paper describes a new MRI simulator that provides realistic images for arbitrary pulse sequences executed in the presence of static field inhomogeneities, including those due to magnetic susceptibility, errors in the applied field, and chemical shift. In contrast to previous simulators, this system generates object-specific inhomogeneity patterns from first principles and propagates the consequent frequency offsets and intravoxel dephasing through the acquisition protocols to produce images with realistic artifacts. The simulator consists of two parts. Part 1 calculates the size of the static field offset at each voxel in the image based on the known magnetic susceptibility of the components at all voxels. It uses a novel implementation of the "Boundary Element Method" and takes advantage of the superposition principle of magnetism to include voxels with mixtures of substances of differing susceptibilities. Part 2 produces both a signal and a reconstructed image. Its inputs include the 3D digital brain phantom introduced by the McConnell Brain Imaging Centre, the frequency offsets computed by Part 1, applied static field errors, chemical shift values, and a description of the acquisition protocol.
New approach to elastographic imaging: modality-independent elastography
Medicine today relies on palpation as a first line of investigation in the detection and diagnosis of breast cancer. Tissue stiffness (e.g. a lump found in breast tissue) can signal the growth of a potentially life-threatening cell mass. As such, elastographic imaging techniques (i.e. direct imaging of tissue stiffness) have recently become of great interest to scientists. In this paper, a new method called Modality Independent Elastography (MIE) is investigated within the context of a mammographic imaging alternative/complement. This new approach uses measures of image similarity in conjunction with computational models to reconstruct images of tissue stiffness. The real strength of MIE is that any imaging modality (e.g. magnetic resonance, computed tomography, ultrasound) in which the image intensity data remains consistent from a pre- to a post-deformed state could be used in this paradigm. Results illustrate: (1) the encoding of stiffness information within the context of a regional image similarity criterion, (2) the methodology for an iterative elastographic imaging framework, and (3) successful elasticity reconstructions.
Temporal tracking of 3D coronary arteries in projection angiograms
Guy Shechter, Frederic Devernay, Eve Coste-Maniere, et al.
A method for 3D temporal tracking of a 3D coronary tree model through a sequence of biplane cineangiography images has been developed. A registration framework is formulated in which the coronary tree centerline model deforms in an external potential field defined by a multiscale analysis response map computed from the angiogram images. To constrain the procedure and to improve convergence, a set of three motion models is hierarchically used: a 3D rigid-body transformation, a 3D affine transformation, and a 3D B-spline deformation field. This 3D motion tracking approach has significant advantages over 2D methods: (1) coherent deformation of a single 3D coronary reconstruction preserves the topology of the arterial tree; (2) constraints on arterial length and regularity, which lack meaning in 2D projection space, are directly applicable in 3D; and (3) tracking arterial segments through occlusions and crossings in the projection images is simplified with knowledge of the 3D relationship of the arteries. The method has been applied to patient data and results are presented.
Computer-Aided Diagnosis I
Analysis of blood and bone marrow smears using digital image processing techniques
Heiko Hengen, Susanne L. Spoor, Madhukar C. Pandit
In this paper, we deal with the analysis of blood and bone marrow smears. The main aim of this long-term project is to obtain a relative frequency histogram of white blood cells of different lineage and maturity. Especially for clinical application, proper image normalization and segmentation of the color images of blood and bone marrow smears are necessary. For the image normalization, two approaches were adopted: (a) active image processing for pre-acquisition standardization and (b) a histogram-based method for post-acquisition standardization. Both methods are based on the HSI (Hue Saturation Intensity) transform. We have developed a robust method for declustering the inevitable clusters of white blood cells, based on a thresholded distance transform and an extended region growing algorithm that, in contrast to active contours, does not need any parameterization. For a successful classification, medical morphologic features are translated into feature extraction operators: the mesh structure of the cells' nuclei is analyzed using the watershed transform and Gabor features, and the shape of the cell and nucleus is analyzed using a set of rotation-invariant contour-based features. The color and granularity of the cytoplasm yield further features for classification. Current work is focused on classification using the presented features.
Application of support vector machines to breast cancer screening using mammogram and history data
Walker H. Land Jr., Anab Akanda, Joseph Y. Lo, et al.
Support Vector Machines (SVMs) are a new and radically different type of classifier and learning machine that uses a hypothesis space of linear functions in a high-dimensional feature space. This relatively new paradigm, based on Statistical Learning Theory (SLT) and Structural Risk Minimization (SRM), has many advantages when compared to traditional neural networks, which are based on Empirical Risk Minimization (ERM). Unlike neural networks, SVM training always finds a global minimum. Furthermore, SVMs have an inherent ability to solve pattern classification problems without incorporating any problem-domain knowledge. In this study, the SVM was employed as a pattern classifier operating on mammography data used for breast cancer detection. The main focus was to formulate the best learning machine configurations for optimum specificity and positive predictive value at very high sensitivities. Using a mammogram database of 500 biopsy-proven samples, the best performing SVM, on average, was able to achieve (under statistical 5-fold cross-validation) a specificity of 45.0% and a positive predictive value (PPV) of 50.1% at 100% sensitivity. At 97% sensitivity, a specificity of 55.8% and a PPV of 55.2% were obtained.
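For orientation, a 5-fold cross-validated SVM of the general kind described might be set up as follows with scikit-learn. The kernel, regularization constant, and feature layout are assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def crossval_svm_scores(X, y, C=10.0, gamma="scale"):
    """X: (n_cases, n_features) mammogram/history features;
    y: 0 = benign, 1 = malignant. Returns out-of-fold scores, which
    can be thresholded at high sensitivity to read off specificity/PPV."""
    clf = make_pipeline(StandardScaler(),
                        SVC(C=C, gamma=gamma, probability=True))
    scores = np.zeros(len(y))
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    for train_idx, test_idx in cv.split(X, y):
        clf.fit(X[train_idx], y[train_idx])
        scores[test_idx] = clf.predict_proba(X[test_idx])[:, 1]
    return scores
```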
Reproducibility of the coronary calcium measurement with different cardiac algorithms using multislice spiral CT
Yun Liang, Ganapathy Krishnamurthi, Laigao Michael Chen, et al.
Subsecond multi-slice spiral CT is now recognized for its great potential in cardiac imaging, particularly for coronary calcification detection (CCD). Different reconstruction algorithms have been developed to optimize the temporal resolution and improve measurement accuracy. These algorithms typically incorporate retrospectively gated reconstructions based on a synchronized electrocardiography (ECG) recording. However, they differ in their choice of spatial filters, ECG delay settings, and reconstruction geometry (direct fan-beam vs. parallel rebinning). These differences are likely to contribute to the intrascanner and interscanner variability in coronary calcium measurements. This paper investigates in detail the quantitative effect of these different approaches on calcium detection.
Quantitative MR assessment of longitudinal parenchymal changes in children treated for medulloblastoma
Wilburn E. Reddick, John O. Glass, Shingjie Wu, et al.
Our research builds on the hypothesis that white matter damage, in children treated for cancer with cranial spinal irradiation, spans a continuum of severity that can be reliably probed using non-invasive MR technology and results in potentially debilitating neurological and neuropsychological problems. This longitudinal project focuses on 341 quantitative volumetric MR examinations from 58 children treated for medulloblastoma (MB) with cranial irradiation (CRT) of 35-40 Gy. Quadratic mixed effects models were used to fit changes in tissue volumes (white matter, gray matter, CSF, and cerebral) with time since CRT and age at CRT as covariates. We successfully defined algorithms that are useful in the prediction of brain development among children treated for MB.
Computer-Aided Diagnosis II
Automated lung nodule segmentation using dynamic programming and EM-based classification
Ning Xu, Narendra Ahuja, Ravi Bansal
In this paper we present a robust and automated algorithm to segment lung nodules in three-dimensional (3D) Computed Tomography (CT) volume datasets. The nodule is segmented on a slice-by-slice basis; that is, we first process each CT slice separately to extract two-dimensional (2D) contours of the nodule, which can then be stacked together to obtain the whole 3D surface. The extracted 2D contours are optimal, as we utilize a dynamic programming based optimization algorithm. To extract each 2D contour, we utilize a shape-based constraint: given a physician-specified point on the nodule, we grow a circle which gives a rough initialization of the nodule, from which our dynamic programming based algorithm estimates the optimal contour. As a nodule can be calcified, we pre-process a small region-of-interest (ROI) around the physician-selected point on the nodule boundary using an Expectation Maximization (EM) based algorithm to classify and remove calcification. Our proposed approach can be consistently and robustly used to segment not only solitary nodules but also nodules attached to lung walls and vessels.
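The per-slice contour extraction reduces to a classic dynamic program once the ROI is resampled to polar coordinates around the seed point: choose one radius per angle so that the total edge cost is minimal while the radius changes smoothly. A compact sketch follows; the cost image, smoothness bound, and contour closure handling are simplified assumptions, not the paper's implementation.

```python
import numpy as np

def dp_polar_contour(cost, max_jump=2):
    """cost: (n_angles, n_radii) edge costs in polar coordinates around
    the physician-specified seed; returns one radius index per angle."""
    n_a, n_r = cost.shape
    acc = cost.copy()                      # accumulated cost table
    back = np.zeros((n_a, n_r), dtype=int) # backpointers
    for a in range(1, n_a):
        for r in range(n_r):
            lo, hi = max(0, r - max_jump), min(n_r, r + max_jump + 1)
            prev = np.argmin(acc[a - 1, lo:hi]) + lo
            back[a, r] = prev
            acc[a, r] += acc[a - 1, prev]
    # backtrack the cheapest smooth path over all angles
    radii = np.empty(n_a, dtype=int)
    radii[-1] = int(np.argmin(acc[-1]))
    for a in range(n_a - 1, 0, -1):
        radii[a - 1] = back[a, radii[a]]
    return radii
```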
Computer-aided lung nodule detection on high-resolution CT data
Rafael Wiemker, Patrick Rogalla, Andre Zwartkruis, et al.
Most previous approaches to computer-aided lung nodule detection have been designed for and tested on conventional CT with slice thicknesses of 5-10 mm. In this paper, we report results of a specifically designed detection algorithm applied to 1 mm slice data from multi-slice CT. We see two principal advantages of high-resolution CT data with respect to computer-aided lung nodule detection. First, the algorithm can evaluate the fully isotropic three-dimensional shape information of potential nodules and thus resolve ambiguities between pulmonary nodules and vessels. Second, the use of 1 mm slices allows the direct utilization of the Hounsfield values due to the absence of the partial volume effect (for objects larger than 1 mm). Computer-aided detection of small lung nodules (>= 2 mm) may thus experience a breakthrough in clinical relevance with the use of high-resolution CT. The detection algorithm has been applied to image data sets from patients in clinical routine with a slice thickness of 1 mm and reconstruction intervals between 0.5 and 1 mm, with hard- and soft-tissue reconstruction filters. Each thorax data set comprises 300-500 images. More than 20,000 CT slices from 50 CT studies were analyzed by the computer program, and 12 studies have so far been reviewed by an experienced radiologist. Of 203 nodules with diameter >= 2 mm (including pleura-attached nodules), the detection algorithm found 193 (a sensitivity of 95%), with 4.4 false positives per patient. Nodules attached to the lung wall are algorithmically harder to detect, but we observe the same high detection rate. The false positive rate drops below 1 per study for nodules >= 4 mm.
Knowledge-based automatic detection of multitype lung nodules from multidetector CT studies
JianZhong Qian, Li Fan, Guo-Qing Wei, et al.
Multi-slice computed tomography (CT) provides a promising technology for lung cancer detection and treatment. Optimizing automatic detection of a more complete spectrum of lung nodules on CT requires multiple specialized algorithms in a coherently integrated detection system. We have developed a knowledge-based system for automatic lung nodule detection and analysis, which coherently integrates several robust novel detection algorithms to simultaneously detect different types of nodules, including those attached to the chest wall, nodules adjacent to or fed by vessels, and solitary nodules. The system architecture can be easily extended in the future to include a still greater range of nodule types, most importantly so-called ground-glass opacities (GGOs). In addition, automatic local adaptive histogram analysis, dynamic cross-correlation analysis, and automatic volume projection analysis using a data dimension reduction method are used in nodule detection. The proposed system has been applied to 10 patients screened with low-dose multi-slice CT. Preliminary clinical tests show that (1) the false positive rate averages about 3.2 per study; and (2) by using the system, radiologists are able to detect nearly twice the number of nodules compared with working alone.
Enhanced lung cancer detection in temporal subtraction chest radiography using directional edge filtering techniques
We have developed a series of directional edge enhancement and edge extraction methods that can accurately segment posterior and anterior ribs in chest radiography. These methods can also separate the lower and upper edges of ribs. The edges were first enhanced by two sets of proximate parabola curve models for the left and right sides of the image. We used a directional edge filtering technique to remove low signals and noise in the edge-enhanced image in the multiresolution domain. Finally, we employed a rib curve projection and reasoning method to reconstruct the rib edges and remove false edges for the upper and lower bounds of the rib edges independently. A two-step registration, corresponding to global and local matching, was applied to current and prior images, assisted by their corresponding edge images. The subtraction images were then processed by a rule-based CAD system. The FROC results were compared to those obtained on the original images using a CAD system consisting of rule-based and convolution neural network processing. The majority of lung cancers in the temporal subtraction images were lit up, and the FROC results were significantly improved using the subtraction image with the rule-based CAD.
Computer-aided classification of pulmonary nodules in surrounding and internal feature spaces using three-dimensional thoracic CT images
Yoshiki Kawata, Noboru Niki, Hironobu Ohamatsu, et al.
The detection rate of small pulmonary lesions has recently increased due to advances in imaging technology such as multi-slice CT scanners. In assessing the malignant potential of small pulmonary nodules in thin-section CT images, it is important to examine the nodule's internal structure. In our previous work, we found that internal structure features derived from CT density and curvature indexes, such as shape index and curvedness, were useful for differentiating malignant and benign nodules in 3-D thoracic CT images. This may be attributed to texture changes in the nodule region due to a developing malignancy. The relationship between nodules and their surrounding structures, such as vessels, bronchi, and pleura, is another important cue for classification between malignant and benign nodules. We therefore develop a scheme to analyze the surrounding structures of the nodule using differential geometry based vector fields in 3-D thoracic images. In addition, we present a joint histogram-based representation of the internal and surrounding structures of the nodule to visualize the characteristics of nodules. In the present study, we explore the feasibility of combining internal and surrounding structure features for classification of pulmonary nodules.
Recognition of lung nodules from x-ray CT images using 3D Markov random field models
Hotaka Takizawa, Shinji Yamamoto, Tohru Matsumoto, et al.
In this paper we propose a new method for recognizing lung nodules from x-ray CT images using 3D Markov random field (MRF) models. Pathological shadow candidates are detected by our Quoit filter, which is a kind of mathematical morphology filter, and volume of interest (VOI) areas that include the shadow candidates are extracted. The probabilities of the hypotheses that the VOI areas arise from nodules (which are candidates for cancers) or from blood vessels are calculated using nodule and blood vessel models, evaluating the relations between these object models with 3D MRF models. If the probability for the nodule model is higher, the shadow candidate is determined to be abnormal; otherwise, it is determined to be normal. Experimental results for 38 samples (patients) are shown.
Incorporation of negative regions in a knowledge-based computer-aided detection scheme
Yuan-Hsiang Chang, Xiao Hui Wang, Lara A. Hardesty M.D., et al.
The purpose was to evaluate the effect of incorporating negative but suspicious regions into a knowledge-based computer-aided detection (CAD) scheme of masses depicted in mammograms. To determine if a suspicious region is positive for a mass, the region was compared not only with actually positive regions (masses), but also with known negative regions. A set of quantitative measures (i.e., a positive, a negative, and a combined likelihood measure) was computed. In addition, a process was developed to integrate two likelihood measures that were derived using two selected features. An initial evaluation with 300 positive and 300 negative regions was performed to determine the parameters associated with the likelihood measures. Then, an independent set of 500 positive and 500 negative regions was used to test the performance of the CAD scheme. During the training phase, the performance was improved from Az=0.83 to 0.87 with the incorporation of negative regions and the integration process. During the independent test, the performance was improved from Az=0.80 to 0.83. The incorporation of negative regions and the integration process was found to add information to the scheme. Hence, it may offer a relatively robust solution to differentiate masses from normal tissue in mammograms.
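One plausible form of the combined likelihood measure, offered as an assumption rather than the paper's exact definition, compares a suspicious region's similarity to its closest known positive and known negative exemplars:

```python
import numpy as np

def combined_likelihood(sim_to_positives, sim_to_negatives, top_k=5):
    """sim_to_*: similarity scores of a suspicious region to known
    positive (mass) and known negative regions; a larger output
    favors classifying the region as a mass. The top-k averaging and
    the ratio form are illustrative choices."""
    lp = np.mean(np.sort(sim_to_positives)[-top_k:])  # positive measure
    ln = np.mean(np.sort(sim_to_negatives)[-top_k:])  # negative measure
    return lp / (lp + ln + 1e-12)                     # combined measure
```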
Computer-Aided Diagnosis III
Separation of malignant and benign masses using maximum-likelihood modeling and neural networks
Lisa M. Kinnard, Shih-Chung Benedict Lo, Paul C. Wang, et al.
This study attempted to accurately segment masses and distinguish malignant from benign tumors. The masses were segmented using a technique that combines pixel aggregation with maximum likelihood analysis. We found that the segmentation method can delineate the tumor body as well as tumor peripheral regions covering typical mass boundaries and some spiculation patterns. We have developed a Multiple Circular Path Convolution Neural Network (MCPCNN) to analyze a set of mass intensity, shape, and texture features for determining whether the tumors are malignant or benign. The features were also fed into a conventional neural network for comparison, and we additionally used the maximum likelihood values as inputs to a conventional backpropagation neural network. We have tested these methods on 51 mammograms using a grouped jackknife experiment incorporated with the ROC method. Tumor sizes ranged from 6 mm to 3 cm. The conventional neural network whose inputs were image features achieved an Az of 0.66, whereas the MCPCNN achieved an Az value of 0.71. The conventional neural network whose inputs were maximum likelihood values achieved an Az value of 0.84. In addition, the maximum likelihood segmentation method can identify the mass body and boundary regions, which is essential to the analysis of mammographic masses.
Change of region conspicuity in bilateral mammograms: potential impact on CAD performance
Bin Zheng, Xiao Hui Wang, Yuan-Hsiang Chang, et al.
In this study, we test a new method to automatically search for matched regions in bilateral digitized mammograms and to compute differences in region conspicuity in pairs of matched regions. One hundred pairs of bilateral images of the same view were selected for the experiment. Each pair of images depicted one verified mass. These 100 mass regions, along with 356 suspicious but actually negative mass regions, were first detected by a single-image-based CAD scheme. To find the matched regions in the corresponding bilateral images, a Procrustean-type technique was used to register the two images, which corrects the deformation of tissue structure between images by guaranteeing the registration of nipples, skin lines, and chest walls. Then, a region growth algorithm was applied to generate a growth region in the matched area, which has the same effective size as the suspicious region in the abnormal image. The conspicuities of the two matched regions, as well as their difference, were computed. Using the conspicuity of the original mass region and the difference of conspicuities in the two matched regions as two identification indices to classify this set of 456 suspicious regions, the computed areas under the ROC curves (Az) were 0.77 and 0.75, respectively. This preliminary study indicates that, by comparing the difference of conspicuities in two matched regions, a very useful feature for CAD schemes can be extracted.
Computer-aided characterization of malignant and benign microcalcification clusters based on the analysis of temporal change of mammographic features
We have previously demonstrated that interval change analysis can improve the differentiation of malignant and benign masses. In this study, a new classification scheme using interval change information was developed to classify mammographic microcalcification clusters as malignant or benign. From each cluster, 20 run-length statistics texture features (RLSF) and 21 morphological features were extracted. Twenty difference RLSF were obtained by subtracting a prior RLSF from the corresponding current RLSF. The feature space consisted of the current and the difference RLSF, as well as the current and the difference morphological features. A leave-one-case-out resampling was used to train and test the classifier using 65 temporal image pairs (19 malignant, 46 benign) containing biopsy-proven microcalcification clusters. Stepwise feature selection and a linear discriminant classifier, designed with the training subsets alone, were used to select and merge the most useful features. An average of 12 features were selected from the training subsets, of which 3 difference RLSF and 7 morphological features were consistently selected from most of the training subsets. The classifier achieved an average training Az of 0.98 and a test Az of 0.87. For comparison, a classifier based on the current single-image features achieved an average training Az of 0.88 and a test Az of 0.81. These results indicate that the use of temporal information improved the accuracy of microcalcification characterization.
Use of joint two-view information for computerized lesion detection on mammograms: improvement of microcalcification detection accuracy
We are developing new techniques to improve the accuracy of computerized microcalcification detection by using the joint two-view information on craniocaudal (CC) and mediolateral-oblique (MLO) views. After cluster candidates were detected using a single-view detection technique, candidates on CC and MLO views were paired using their radial distances from the nipple. Object pairs were classified with a joint two-view classifier that used the similarity of objects in a pair. Each cluster candidate was also classified as a true microcalcification cluster or a false-positive (FP) using its single-view features. The outputs of these two classifiers were fused. A data set of 38 pairs of mammograms from our database was used to train the new detection technique. The independent test set consisted of 77 pairs of mammograms from the University of South Florida public database. At a per-film sensitivity of 70%, the FP rates were 0.17 and 0.27 with the fusion and single-view detection methods, respectively. Our results indicate that correspondence of cluster candidates on two different views provides valuable additional information for distinguishing false from true microcalcification clusters.
Effect of case mix on feature selection in the computerized classification of mammographic lesions
One potential limitation of computer-aided diagnosis (CAD) studies is that a computerized method may be trained and tested on a database comprised of a limited number of cases. Thus, the performance of the CAD method may depend on the subtlety of the lesions (i.e., the case mix) in the database. The purpose of this study is to evaluate the effect of case-mix on feature selection and the performance of a computerized classification method trained on a limited database.
Intelligent CAD workstation for breast imaging using similarity to known lesions and multiple visual prompt aids
Maryellen Lissak Giger, Zhimin Huo, Carl J. Vyborny, et al.
While investigators have been successful in developing methods for the computerized analysis of mammograms and ultrasound images, optimal output strategies for the effective and efficient use of such computer analyses are still undetermined. We have incorporated our computerized mass classification method into an intelligent workstation interface that displays known malignant and benign cases similar to lesions in question using a color-coding scheme that allows instant visual feedback to the radiologist. The probability distributions of the malignant and benign cases in the known database are also graphically displayed along with the graphical location of the unknown case relative to these two distributions. The effect of the workstation on radiologists' performance was demonstrated with two preliminary studies. In each study, participants were asked to interpret cases without and with the computer output as an aid for diagnosis. Results from our demonstration studies indicate that radiologists' performance, especially specificity, increases with the use of the aid.
Poster Session I
Image reconstruction in SPECT with a half detector
In parallel-beam computed tomography, the measured projections at conjugate views are mathematically identical, and consequently this symmetry can be exploited to reduce either the scanning angle or the size of the detector arrays. However, in single-photon emission computed tomography (SPECT), because the gamma-rays in the conjugate views suffer different photon attenuation, the measured projections at conjugate views are generally different. Therefore, it had been widely considered that projection data measured over a full angular range of 360 degrees and over the whole detector face are generally required for exactly reconstructing the distributions of gamma-ray emitters. Recently, it has been revealed that an exact image can be reconstructed from projections acquired with a full detector over disjoint angular intervals summing to 180 degrees when the attenuation medium is uniform. In this work, we show that exact SPECT images can also be reconstructed from projections over 360 degrees, but acquired with a half detector viewing half of the image space. We present a heuristic perspective that supports this claim for SPECT with both uniform and non-uniform attenuation.
Cardiac-state-driven CT image reconstruction algorithm for cardiac imaging
Multi-slice CT scanners use EKG gating to predict the cardiac phase during slice reconstruction from projection data. Cardiac phase is generally defined with respect to the RR interval; the implicit assumption is that the duration of events in an RR interval scales linearly when the heart rate changes. Using a more detailed EKG analysis, we evaluate the impact of relaxing this assumption on image quality. We developed a reconstruction algorithm that analyzes the associated EKG waveform to extract the natural cardiac states. A wavelet transform was used to decompose each RR interval into P, QRS, and T waves. Subsequently, cardiac phase was defined with respect to these waves instead of a percentage or time delay from the beginning or end of RR intervals. The projection data were then tagged with the cardiac phase and processed using temporal weights that are a function of their cardiac phases. Finally, the tagged projection data were combined from multiple cardiac cycles using a multi-sector algorithm to reconstruct images. The new algorithm was applied to clinical data, collected on a 4-slice (GE LightSpeed Qx/i) and an 8-slice CT scanner (GE LightSpeed Plus), with heart rates of 40 to 80 bpm. The quality of reconstruction is assessed by the visualization of the major arteries, e.g. RCA, LAD, and LC, in the reformatted 3D images. Preliminary results indicate that the cardiac-state-driven reconstruction algorithm offers better image quality than its RR-based counterparts.
Noise properties of the inverse pi-scheme exponential Radon transform
Because the effects of physical factors such as photon attenuation and spatial resolution are distance-dependent in single-photon emission computed tomography (SPECT), it has been widely assumed that accurate image reconstruction requires knowledge of the data function over 2π. In SPECT with uniform attenuation, Noo and Wagner recently showed that an accurate image can be reconstructed from knowledge of the data function over a contiguous π-segment. More generally, we proposed π-scheme SPECT, which entails data acquisition over disjoint angular intervals without conjugate views, totaling π radians, thereby allowing flexibility in choosing projection views at which the emitted gamma-rays may undergo the least attenuation and blurring. In this work, we study the general properties of the π-scheme inverse exponential Radon transform and discuss how to take advantage of the π-scheme flexibility to improve the noise properties of short-scan SPECT.
Full nonlinear inversion of microwave biomedical data
Aria Abubakar, Peter M. van den Berg, Jordi J. Mallorqui
In this paper, the contrast source inversion method using a multiplicative weighted L2-norm total variation regularizer is applied to image reconstruction from electromagnetic microwave tomography experiments. This iterative method avoids solving a full forward problem in each iteration, which makes it suitable for handling large-scale computational problems. Numerical results from experimental data with a high-contrast biological phantom are presented and discussed.
Filter design for filtered back-projection guided by the interpolation model
We consider using spline interpolation to improve the standard filtered backprojection (FBP) tomographic reconstruction algorithm. In particular, we propose to link the design of the filtering operator with the interpolation model that is applied to the sinogram. The key idea is to combine the ramp filtering and the spline fitting process into a single filtering operation. We consider three different approaches. In the first, we simply adapt the standard FBP for spline interpolation. In the second approach, we replace the interpolation by an oblique projection onto the same spline space; this increases the peak signal-to-noise ratio by up to 2.5 dB. In the third approach, we perform an explicit discretization by observing that the ramp filter is equivalent to a fractional derivative operator that can be evaluated analytically for splines. This allows for an exact implementation of the ramp filter and improves the image quality by an additional 0.2 dB. This comparison is unique, as the first method has been published only for degree n=0, whereas the two other methods are novel. We stress that the modification of the filter improves the reconstruction quality especially at low (faster) interpolation degrees n=0,1; the difference between the methods becomes marginal for cubic or higher degrees (n ≥ 3).
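For reference, the standard discrete ramp-filtering step that these spline-consistent designs modify can be written as a frequency-domain multiplication; apodization and zero-padding are omitted in this sketch.

```python
import numpy as np

def ramp_filter(sinogram):
    """Apply the ramp filter along the detector axis of a
    (n_views, n_detectors) sinogram, as in standard FBP."""
    n = sinogram.shape[1]
    freqs = np.abs(np.fft.fftfreq(n))           # |w|, the ramp response
    spectrum = np.fft.fft(sinogram, axis=1)
    return np.real(np.fft.ifft(spectrum * freqs, axis=1))
```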
Tomographic Reconstruction II/Statistical Methods
Novel reconstruction scheme for cardiac volume imaging with MSCT providing cone correction
Herbert Bruder, Karl Stierstorfer, Bernd Ohnesorge, et al.
We present a novel reconstruction scheme for cardiac spiral imaging, named Adaptive Cardiac Multiple Plane Reconstruction (ACMPR), which takes into account the conical shape of the projection data. In cardiac imaging with multi-slice CT, continuous data acquisition in spiral mode combined with the parallel acquisition of the patient's ECG enables retrospective gating of data segments for image reconstruction. ACMPR identifies projection data segments ≤ π using temporal information from the simultaneously recorded ECG. For each such data segment, image stacks of double-oblique segment images are reconstructed and then reformatted to axial images separately. In the case of multi-sector imaging, where consecutive heart cycles are used for image formation, the reformatted segment images have to be added into a complete CT image after the reformation step. Image results for multi-slice detector systems using an anthropomorphic computer model of the human heart are shown. A detailed comparison to algorithms without cone correction reveals that ACMPR does not lead to significant improvements for 16-slice detector systems, but already for a 32-slice system ACMPR provides superior image quality; in particular, coronaries and stents are imaged with fewer geometrical distortions.
Poster Session I
New method for 3D reconstruction in digital tomosynthesis
Digital tomosynthesis mammography is an advanced x-ray application that can provide detailed 3D information about the imaged breast. We introduce a novel reconstruction method based on simple backprojection, which yields high-contrast reconstructions with reduced artifacts at relatively low computational complexity. The first step in the proposed reconstruction method is a simple backprojection with an order statistics-based operator (e.g., minimum) used for combining the backprojected images into a reconstructed slice. Accordingly, a given pixel value generally does not contribute to all slices. The percentage of slices where a given pixel value does not contribute, as well as the associated reconstructed values, are collected. Using a form of re-projection consistency constraint, one then updates the projection images and repeats the order-statistics backprojection reconstruction step, now using the enhanced projection images calculated in the first step. In our digital mammography application, this new approach enhances the contrast of structures in the reconstruction and, in particular, allows recovery of the loss in signal level due to reduced tissue thickness near the skinline, while keeping artifacts to a minimum. We present results obtained with the algorithm for phantom images.
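The order-statistics combination step replaces the usual averaging of simple backprojection. A minimal sketch, assuming the projections have already been backprojected (shifted) to the reconstruction height of one slice:

```python
import numpy as np

def order_stat_slice(backprojected):
    """backprojected: (n_projections, ny, nx) projection images already
    shifted to the height of one reconstruction slice. The minimum
    operator suppresses out-of-plane structure that plain averaging
    would smear across slices."""
    slice_img = backprojected.min(axis=0)
    # index of the projection supplying each minimum; this bookkeeping
    # feeds the re-projection consistency update described in the text
    contributor = backprojected.argmin(axis=0)
    return slice_img, contributor
```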
Image reconstruction using shift-variant resampling kernel for magnetic resonance imaging
Ahmed S. Fahmy, Bassel S. Tawfik, Yasser M. Kadah
Nonrectilinear k-space trajectories are often used in MRI applications due to their inherently fast acquisition and immunity to motion and flow artifacts. In this work, we develop a more general formulation of the resampling problem under the same assumptions as previous techniques. The new formulation allows the new technique to overcome the problems present in these techniques while maintaining a reasonable computational complexity. The image space is decomposed into a complete set of orthogonal basis functions. Each function is sampled twice, once with a rectilinear trajectory and once with a nonrectilinear trajectory, resulting in two vectors of samples. The mapping matrix that relates the two sets of vectors is obtained by solving the set of linear equations given by the training basis set. In order to reduce the computational burden at reconstruction time, only a few nonrectilinear samples in the neighborhood of the point of interest are used. The proposed technique is applied to simulated data; the results show superior performance in both accuracy and noise resistance and demonstrate the usefulness of the new technique in clinical practice.
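A toy 1-D analogue of the training step described above, with our own basis choice and sizes: each training basis function is sampled on a uniform grid and on a jittered point set, and the mapping matrix relating the two sample vectors is found by least squares. The paper's neighborhood restriction and 2-D k-space setting are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each training basis function is sampled on a uniform (rectilinear) grid
# and on a jittered (nonrectilinear) point set.
n_basis, n_uniform, n_nonuniform = 32, 32, 40
uniform_x = np.linspace(0.0, 1.0, n_uniform)
nonuniform_x = np.sort(rng.uniform(0.0, 1.0, n_nonuniform))

def basis(k, x):
    """Cosine training basis (our choice; the paper's basis may differ)."""
    return np.cos(np.pi * k * x)

R = np.array([basis(k, uniform_x) for k in range(n_basis)])     # (32, 32)
N = np.array([basis(k, nonuniform_x) for k in range(n_basis)])  # (32, 40)

# Solve N @ M ~= R by least squares: M maps the nonuniform samples of any
# signal in the basis span to its uniform (rectilinear) samples.
M, *_ = np.linalg.lstsq(N, R, rcond=None)

c = rng.standard_normal(n_basis)       # random signal in the training span
print(np.allclose((c @ N) @ M, c @ R)) # True: resampling is exact in the span
```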
Effect of reconstruction parameters on defect detection in fan-beam SPECT
George K. Gregoriou
The effect of reconstruction parameters on the fan-beam filtered backprojection method in myocardial defect detection was investigated using an observer performance study and receiver operating characteristic (ROC) analysis. A mathematical phantom of the human torso was used to model the anatomy and Thallium-201 (Tl-201) uptake in humans. Realistic half-scan fan-beam projections were simulated using a low-energy high-resolution (LEHR) collimator and incorporated the effects of photon attenuation, spatially varying detector response, scatter, and Poisson noise. A focal length of 55 cm and a radius of rotation of 25 cm were used, which resulted in a magnification of two at the center of rotation and a maximum magnification of three in the reconstructed region of interest. By changing the reconstruction pixel size, five different projection bin width to reconstruction pixel size (PBWRPS) ratios were obtained, which resulted in five classes of reconstructed images. Myocardial defects were simulated as Gaussian-shaped decreases in the Tl-201 uptake distribution. The total projection count per 3 mm image slice was 44,000. A total of 96 reconstructed transaxial images from each of the five classes were shown to eight observers for evaluation. The results indicate that the reconstruction pixel size has a significant effect on the quality of fan-beam SPECT images. Moreover, the study indicated that to ensure the best image quality the PBWRPS ratio should be at least as large as the maximum possible magnification inside the reconstructed image array.
Novel method for reducing high-attenuation object artifacts in CT reconstructions
Laigao Michael Chen, Yun Liang, George A. Sandison, et al.
A new method to reduce the streak artifacts caused by high-attenuation objects in CT images has been developed. The key part of this approach is a preprocessing procedure applied to the raw projection data using an adaptive scaling-plus-filtering method. The procedure is followed by conventional filtered backprojection to reconstruct artifact-reduced images. Phantom and clinical studies have demonstrated that the proposed method can effectively reduce the streak artifacts caused by high-attenuation objects for different anatomical structures and metal materials while still faithfully reproducing the positions and dimensions of the metal objects. The visualization of tissue features adjacent to metal objects is greatly improved. The proposed method is computationally efficient and can be easily adapted to current commercial CT scanners.
Sensitivity analysis of textural parameters for vertebroplasty
Gye Rae Tack, Seung Yong Lee, Kyu-Chul Shin, et al.
Vertebroplasty is one of the newest surgical approaches for the treatment of the osteoporotic spine. Recent studies have shown that it is a minimally invasive, safe, promising procedure for patients with osteoporotic fractures, providing structural reinforcement of the osteoporotic vertebrae as well as immediate pain relief. However, treatment failures due to excessive bone cement injection have been reported as one of the complications. Control of the bone cement volume is believed to be one of the most critical factors in preventing complications. We believe that an optimal bone cement volume can be assessed from a patient's CT data. Gray-level run length analysis was used to extract textural information of the trabecular bone. At the initial stage of the project, four indices were used to represent the textural information: mean width of the intertrabecular space, mean width of the trabeculae, area of the intertrabecular space, and area of the trabeculae. Finally, the area of the intertrabecular space was selected as the parameter for estimating an optimal bone cement volume, and a strong linear relationship was found between these two variables (correlation coefficient = 0.9433, standard deviation = 0.0246). In this study, we examined several factors affecting the overall procedure. The threshold level, the radius of the rolling ball, and the size of the region of interest were selected for the sensitivity analysis. As the threshold level was varied over 9, 10, and 11, the correlation coefficient varied from 0.9123 to 0.9534. As the radius of the rolling ball was varied over 45, 50, and 55, the correlation coefficient varied from 0.9265 to 0.9730. As the size of the region of interest was varied over 58 x 58, 64 x 64, and 70 x 70, the correlation coefficient varied from 0.9685 to 0.9468. Finally, we found a strong correlation between the actual bone cement volume (Y) and the area (X) of the intertrabecular space calculated from the binary image, described by the linear equation Y = 0.001722 X - 2.10922. This equation slightly overestimated the bone cement volume, but the excess was not large enough to cause serious complications. We hope this will help control the amount of bone cement injected during vertebroplasty.
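As a worked illustration of the reported linear relationship, the snippet below simply evaluates Y = 0.001722 X - 2.10922 for a hypothetical intertrabecular-space area; the abstract does not state the units, so the example value is arbitrary.

```python
def estimated_cement_volume(intertrabecular_area):
    """Reported linear fit: cement volume Y from intertrabecular area X."""
    return 0.001722 * intertrabecular_area - 2.10922

print(estimated_cement_volume(5000.0))  # hypothetical area value
```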
Investigation of using bone texture analysis on bone densitometry images
We previously developed bone texture analysis methods to assess bone strength on digitized radiographs. Here, we compare the analyses performed on digitized screen-film images to those obtained on peripheral bone densitometry images. A leg phantom was imaged with both a PIXI (GE Medical Systems; Milwaukee, WI) bone densitometer (0.200-mm pixel size) and a screen-film system, with the films subsequently digitized by a laser film digitizer (0.100-mm pixel size). The phantom was radiographically scanned multiple times with the densitometer at the default parameters and for increasing exposure times. Fourier-based texture features were calculated from regions of interest in images from both modalities. The bone densitometry images contained more quantum noise than the radiographs, resulting in increased values for the first moment of the power spectrum texture feature (1.22 times higher than from the standard radiograph). The presence of such noise may adversely affect the texture feature's ability to distinguish between strong and weak bone. By either increasing the exposure time or averaging multiple scans in the spatial frequency domain, we showed a reduction in the effect of the quantum mottle on the first moment of the power spectrum.
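A minimal sketch of the texture feature discussed above: the first moment of the 2-D power spectrum of an ROI, i.e., its power-weighted mean radial frequency, which quantum noise shifts upward. The function name and toy ROI are ours.

```python
import numpy as np

def first_moment_power_spectrum(roi):
    """Power-weighted mean radial frequency of a region of interest."""
    roi = roi - roi.mean()                       # remove DC before the FFT
    power = np.abs(np.fft.fftshift(np.fft.fft2(roi))) ** 2
    h, w = roi.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))
    fx = np.fft.fftshift(np.fft.fftfreq(w))
    radial = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    return (radial * power).sum() / power.sum()

roi = np.random.rand(64, 64)   # pure noise ROI -> relatively high first moment
print(first_moment_power_spectrum(roi))
```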
Genetic algorithm and expectation maximization for parameter estimation of mixture Gaussian model phantom
We present a new approach for estimating the parameters of a Gaussian mixture model by Genetic Algorithms (GAs) and Expectation Maximization (EM). It has been shown that GAs are independent of initialization parameters. In this work we propose a combination of GA and EM algorithms (GA-EM) for learning Gaussian mixture components to achieve accurate parameter estimation independent of initial values. To assess the performance of the proposed method, a series of Gaussian phantoms, based on a modified Shepp-Logan method, was created. In these phantoms, each tissue segment presents a Gaussian density function whose mean and variance can be controlled. EM, GAs, and GA-EM were employed to estimate the tissue parameters in each phantom. The results indicate that the EM algorithm, as expected, is heavily impacted by the initial values. Coupling GAs with EM not only improves the overall accuracy but also provides estimates that are independent of initial seed values. The proposed method offers an accurate and stable solution for parameter estimation in Gaussian mixture models, with a higher likelihood of reaching the global optimum. Obtaining such accurate parameter estimation is a key requirement for several image segmentation approaches, which rely on a priori knowledge of tissue distribution.
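For reference, the EM half of the proposed GA-EM hybrid, sketched for a 1-D two-component mixture; in the paper, the GA would supply the initial parameters that plain EM is sensitive to. Names and toy data are ours.

```python
import numpy as np

def em_gmm_1d(x, mu, var, w, n_iter=50):
    """Plain EM updates for a 1-D Gaussian mixture model."""
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each sample.
        pdf = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = w * pdf
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        w = nk / len(x)
    return mu, var, w

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(5, 1, 500)])
print(em_gmm_1d(x, mu=np.array([1.0, 4.0]),
                var=np.array([1.0, 1.0]), w=np.array([0.5, 0.5])))
```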
Optimized statistical modeling of MS lesions as MRI voxel outliers for monitoring the effect of drug therapy
This paper presents the results of applying the modified deterministic annealing (DA) algorithm to simulated and clinical magnetic resonance (MR) brain data with multiple sclerosis (MS) lesions. The modified DA algorithm is a very efficient segmentation algorithm for isolating MS lesions in MR images when all the information contained in all modalities is utilized. To fully utilize this information, vector segmentation is carried out instead of unimodal segmentation. The vectors to be clustered are formed from the multi-modal MR brain data. Through some arithmetic manipulations, synthesized image data can be obtained that greatly alleviate the effects of noise and intensity inhomogeneity. The isolated multiple sclerosis lesions are outliers with respect to the brain tissues. Even with noise levels up to 7%, the MS MR brain data can still be satisfactorily segmented. This method does not need a prior model and is conceptually very simple. It delineates not only large lesions but small ones as well. The whole process is completely automated, without any operator intervention, making it a very promising tool for MS follow-up studies. Comparison between the segmentation results from the simulated MS brain data and from the clinical MS brain data shows that, with current high-quality MRI facilities, images with noise above 3% and intensity inhomogeneity above 20% will usually not be produced. Segmentation results for the clinical data are much better and easier to obtain than for the simulated noisy data. To get even better results for the MS lesions, inverse problem techniques have to be applied. A noise model and an intensity inhomogeneity model have to be established and improved using the given MRI data during iteration.
Determination of biplane geometry and centerline curvature in vascular imaging
Daryl Nazareth, Kenneth R. Hoffmann, Alan Walczak, et al.
Three-dimensional (3-D) vessel trees can provide useful visual and quantitative information during interventional procedures. To calculate the 3-D vasculature from biplane images, the transformation relating the imaging systems (i.e., the rotation matrix R and the translation vector t) must be determined. We have developed a technique to calculate these parameters, which requires only the identification of approximately corresponding vessel regions in the two images. Initial estimates of R and t are generated based on the gantry angles, and then refined using an optimization technique. The objective function to be minimized is determined as follows. For each endpoint of each vessel in the first image, an epipolar line in the second image is generated. The intersection points between these two epipolar lines and the corresponding vessel centerline in the second image are determined. The vessel arclength between these intersection points is calculated as a fraction of the entire vessel region length in the image. This procedure is repeated for every vessel in each image. The value of the objective function is calculated from the sum of these fractions, and is smallest when the total fractional arclength is greatest. The 3-D vasculature is obtained from the optimal R and t using triangulation, and vessel curvature is then determined. This technique was evaluated using simulated curves and vessel centerlines obtained from clinical images, and provided rotational, magnification, and relative curvature errors of 1 degree, 1%, and 14%, respectively. Accurate 3-D and curvature measures may be useful in clinical decision making, such as in assessing vessel tortuosity and access, during interventional procedures.
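A sketch of the epipolar-line computation that underlies the objective function: given estimates of R and t, each vessel endpoint in the first image defines a line in the second image along which its correspondence must lie. The arclength-based objective itself is not reproduced here, and points are assumed to be in homogeneous normalized image coordinates.

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x such that skew(t) @ v == cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_line(R, t, p1):
    """Line l with l . p2 = 0 for any p2 in view 2 matching p1 in view 1,
    under the two-view geometry x2 = R @ x1 + t."""
    E = skew(t) @ R                  # essential matrix
    return E @ p1

R = np.eye(3)                        # toy geometry: pure sideways translation
t = np.array([1.0, 0.0, 0.0])
print(epipolar_line(R, t, np.array([0.1, 0.2, 1.0])))
```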
Thinning algorithm on 2D gray-level images
Cherng-Min Ma, Shu-Yen Wan
Thinning on binary images has been widely discussed over the past three decades. A binary image can be obtained by thresholding a gray-level image. To prevent the information loss possible in the thresholding process, it is natural to design thinning algorithms that operate directly on the original gray-level images. This paper proposes a two-step template-based thinning algorithm for gray-level images. The first step of the algorithm extracts 4-connected gray-level skeletons from gray-level objects. The second step extracts 8-connected gray-level skeletons from the result of the first step.
Three-dimensional imaging using low-end C-arm systems
Joern Luetjens, Reiner Koppe, Erhard Klotz, et al.
In recent years, three-dimensional X-ray imaging has become a well-established modality, setting the gold standard for spatial resolution. First introduced on a motorized C-arm system, it benefited from the high spatial resolution of the image intensifier. Using cone-beam reconstruction, it provided fast access to truly three-dimensional imaging with isotropic voxel dimensions. However, the non-rigid mechanics and the image distortion in the image intensifier required dedicated calibration processes and obliged the developers to use the most stable and reliable system in the C-arm device family. The need for system calibration also required the system to be able to reproducibly adjust the C-arm to the pre-calibrated positions, which seemed possible only with the motorized movement of a high-end system. On mobile, non-motorized C-arm systems, which are often used for guiding surgical procedures, however, 3D applications have not been feasible due to the non-reproducibility of the mechanical movement. In this paper, first results regarding the feasibility of this approach are presented. The data were acquired on a Philips BV 26 surgical C-arm. This device is fully mobile, and the C-arm is adjusted manually.
Feasibility of an automated technique for detection of large misregistrations
Before a retrospective registration algorithm can be used routinely in the clinic, methods must be provided for distinguishing between registration solutions that are clinically satisfactory and those that are not. One approach is to rely on a human observer. Here, we present an algorithmic procedure for assessing quality that discriminates between badly misregistered pairs and those that are clinically useful.
Mixture of principal axes registration: a neural computation approach
Rujirutana Srikanchana, Jianhua Xuan, Kun Huang, et al.
Non-rigid image registration is a prerequisite for many medical imaging applications such as change analysis in image-based diagnosis and therapy assessment. Nonlinear interpolation methods may be used to recover the deformation if the correspondence of the extracted feature points is available. However, it may be very difficult to establish such correspondence at an initial stage when confronted with large and complex deformation. In this paper, a mixture of principal axes registration (mPAR) is proposed to tackle the correspondence problem using a neural computation method. Its key feature is that it aligns two point sets without needing to establish explicit point correspondences. The mPAR aligns two point sets by minimizing the relative entropy between their probability distributions, resulting in a maximum likelihood estimate of the transformation mixture. The neural computation for the mPAR is developed using a committee machine to obtain a mixture of piece-wise rigid registrations. The complete registration process consists of two steps: (1) using the mPAR to establish an improved point correspondence and (2) using a multilayer perceptron (MLP) neural network to recover the nonlinear deformation. The mPAR method has been applied to register a contrast-enhanced magnetic resonance (MR) image sequence. The experimental results show that our method not only improves the point correspondence but also results in a desirable error-resilience property for control point selection errors.
Adaptive-bases algorithm for nonrigid image registration
Nonrigid registration of medical images is an important procedure in many aspects of current biomedical and bioengineering research. For example, it is a necessary step for studying the variation of biological tissue properties, such as shape or diffusion properties, across a population; for computing population averages; and for atlas-based segmentation. Recently we introduced the Adaptive Bases registration algorithm as a general method for performing nonrigid registration of medical images, and we have shown it to be faster and more accurate than existing algorithms of the same class. The overall properties of the Adaptive Bases algorithm are reviewed here, and the method is validated on applications that include the computation of average images, atlas-based segmentation, and motion correction of video images. Results show the Adaptive Bases algorithm to be capable of producing high-quality nonrigid matches for the applications mentioned above.
Analysis of a new method for consistent large-deformation elastic image registration
Jaincun He, Gary E. Christensen, Jay T. Rubenstein, et al.
This paper provides initial analysis of a new consistent, large-deformation elastic image registration (CLEIR) algorithm that jointly estimates a consistent set of forward and reverse transformations between two images. The estimated transformations are able to accommodate large deformations while constraining the forward and reverse transformations to be inverses of one another. The algorithm assumes that the two N-dimensional images to be registered contain topologically similar objects and were collected using the same imaging modality. The image registration problem is formulated in a (N+1)-dimensional space where the additional dimension is referred to as the temporal or time dimension. A periodic-in-time, nonlinear, (N+1)-dimensional transformation is estimated that deforms one image into the shape of the other and back again. Large deformations from one image to the other are accommodated by concatenating the small-deformation incremental transformations from one time instant to the next. An inverse consistency constraint is placed on the incremental transformations to enforce within a specified tolerance that the forward and reverse transformations between the two images are inverses of each other. The feasibility of the algorithm for accommodating nonlinear deformations was demonstrated using 2D synthesized phantom images and CT inner ear images. The effect of varying the number of intermediate templates was studied for these data sets.
Pose estimation of teeth through crown-shape matching
Vevin Mok, Sim Heng Ong, Kelvin Weng Chiong Foong, et al.
This paper presents a technique for determining a tooth's pose given a dental plaster cast and a set of generic tooth models. The ultimate goal of pose estimation is to obtain information about the sizes and positions of the roots, which lie hidden within the gums, without the use of X-rays, CT or MRI. In our approach, the tooth of interest is first extracted from the 3D dental cast image through segmentation. 2D views are then generated from the extracted tooth and are matched against a target view generated from the generic model with known pose. Additional views are generated in the vicinity of the best view and the entire process is repeated until convergence. Upon convergence, the generic tooth is superimposed onto the dental cast to show the position of the root. The results of applying the technique to canines demonstrate the excellent potential of the algorithm for generic tooth fitting.
Automatic quantification of liver-heart cross-talk for quality assessment in SPECT myocardial perfusion imaging
Guo-Qing Wei, Anant Madabhushi, JianZhong Qian, et al.
In single-photon emission computed tomography (SPECT), it is highly desirable to provide physicians with a measure of the strength of the liver-heart cross-talk as a means of assessing the quality of the images, so that appropriate actions can be taken to avoid false diagnosis. Liver-heart cross-talk is a phenomenon in which liver counts interfere with heart counts in the 3D reconstruction, generating artifacts in the reconstructed images. In this paper, we propose an automatic method for quantification of such liver-heart cross-talk. The system performs heart detection followed by non-heart organ segmentation and quantification of their activities. An appearance-based approach is applied to find the heart center in each image, with invariance to image intensity and contrast. Then heart and non-heart activities are quantified in each image. A measurement formula is proposed to compute the amount of liver-heart cross-talk as a function of the size of the non-heart activity regions, the strengths of the heart and non-heart activities, and the distance of the non-heart regions to the heart. The method has been tested on 150 patient studies of different isotopes and acquisition types, with very promising results.
Automatic quality assessment of JPEG and JPEG 2000 compressed images
Walter F. Good, Glenn S. Maitz, Xiao Hui Wang
A novel figure-of-merit (FOM) for automatically quantifying the types of artifacts that appear in compressed images was investigated. This FOM is based on task specific linear combinations of magnitude, frequency and 'localized' structure information derived from difference images. For each elemental diagnostic task (e.g., detection of microcalcifications) a value is calculated as the weighted linear combination of the output of an array of filters, and the FOM is defined to be the maximum of these values, taken over all relevant diagnostic tasks. This FOM was tested by applying it to a previously assembled set of 60 mammograms that had been digitized and compressed at five different compression levels using our version of the original JPEG algorithm. The FOM results were compared to subjective assessments of image quality provided by nine radiologists. A subset consisting of 25 images was also processed with the JPEG 2000 algorithm and evaluated by the FOM. A significant correlation existed between readers' subjective ratings and FOMs for JPEG compressed images. A comparison between the results of the two compression algorithms reveals that, to achieve a comparable FOM level, the JPEG 2000 images were compressed at a bitrate that was typically 15% lower than that of images compressed with the original JPEG algorithm.
JPEG domain watermarking
Wenbin Luo, Gregory L. Heileman, Caros E. Pizano
In this paper, a JPEG domain image watermarking method that utilizes spatial masking is presented. The watermarking algorithm works in the compressed domain, and can be implemented efficiently in real time (only 50 ms is required for a 512x512 24-bit color image on a 700 MHz computer). In many applications, particularly those associated with delivering images over the Internet, the ability to watermark images in real time is required. In order to achieve a real-time watermarking capability, the proposed technique avoids many of the computation steps associated with JPEG compression. Specifically, the forward and inverse DCT do not need to be calculated, nor do any of the computations associated with quantization. Robustness to JPEG compression and additive noise attacks is achieved with the proposed system, and the relationship between watermark robustness and watermark position is described. A further advantage of the proposed method is that it allows a watermark to be detected in an image without reference to the original unwatermarked image, or to any other information used in the watermark embedding process.
Optimizing feature selection across a multimodality database in computerized classification of breast lesions
Karla Horsch, Alfredo Fredy Ceballos, Maryellen Lissak Giger, et al.
Linear step-wise feature selection is performed for computerized analysis methods on a set of mammography features using a database of mammography cases, a set of ultrasound features using a database of ultrasound cases, and a set of mammography and sonography features using a multi- modality database of lesions with both mammograms and sonograms. The large mammography and sonography databases were randomly split 20 times into three subdatabases for feature selection, classifier training and independent validation. The average validation Az value over the 20 random splits for the mammography database was 0.82 +/- 0.04 and for the sonography database was 0.85 +/- 0.03. The average consistency feature selection Az value for the mammography and sonography databases were 0.87 +/- 0.02 and 0.88 +/- 0.02, respectively. For the multi-modality database, the consistency feature selection Az value was 0.93.
Fuzzy clustering of fMRI data: toward a theoretical basis for choosing the fuzziness index
Martin Buerki, Helmut Oswald, Gerhard Schroth
The fuzzy clustering algorithm (FCA) is a promising approach for the unsupervised analysis of complex fMRI studies with unknown input functions. Among the few parameters required by the FCA, the fuzziness index m plays an important role, and the outcome of the clustering depends strongly on it. Unfortunately, there is no theoretical basis currently known for choosing the value of m, and so far only empirical approaches have been carried out to find a reasonable value. The theoretical approach presented here calculates the probability distribution of the membership values uij during one iteration of the FCA and judges the regularity of this distribution, thereby indicating the degree of fuzziness of the resulting partition. This allows us to estimate the compactness of the clusters. It turns out that this probability depends not only on the fuzziness index m but also on the length of the time courses, a fact that had not been noticed until now. Consequently, a reasonable choice of the fuzziness index depends on the signal-to-noise ratio and the temporal dimension of the data.
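For reference, a minimal fuzzy c-means membership update showing the role of the fuzziness index m discussed above (1-D data, our own naming): as m approaches 1 the memberships approach hard assignments, and larger m yields fuzzier partitions.

```python
import numpy as np

def fcm_memberships(x, centers, m):
    """Standard fuzzy c-means membership update u_ij for samples x and
    fixed cluster centers; m > 1 is the fuzziness index."""
    d = np.abs(x[:, None] - centers[None, :]) + 1e-12  # (n_samples, n_clusters)
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

x = np.array([0.0, 0.4, 1.0])
centers = np.array([0.0, 1.0])
print(fcm_memberships(x, centers, m=1.2))   # nearly hard assignments
print(fcm_memberships(x, centers, m=3.0))   # much fuzzier partition
```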
Quantitative study of renormalization transformation method to correct the inhomogeneity in MR images
The purpose of this work is to evaluate the effectiveness of a newly proposed renormalization transformation (RT) technique for correcting nonuniformity in MR images. Simulated brain T1-, T2- and PD-weighted images with two types of bias fields and Gaussian white noise were created using the average signal intensities of white matter, gray matter, and CSF from segmented masks of actual patient examinations. These images were then corrected by the RT method and quantitatively compared with the original non-biased simulated images. This study demonstrated that a single optimal correction exists for the RT method. At the optimal correction, the RT method can remove more than 75 percent of the bias field without significant loss of useful contrast in the images. Unfortunately, this optimal correction cannot be directly determined for actual patient images where the truth is not known. However, the simulated images showed that the optimal correction could be estimated from changes in the contrast ratio map, where the contrast ratio is the ratio of the local intensity standard deviation to the local average intensity. Using the contrast ratio map, the optimal correction can be reliably applied to patient images.
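A minimal sketch of the contrast ratio map described above (local standard deviation divided by local mean), computed here with uniform box windows; the window size and names are our assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def contrast_ratio_map(image, window=9):
    """Local standard deviation / local mean over a sliding box window."""
    mean = uniform_filter(image, size=window)
    mean_sq = uniform_filter(image ** 2, size=window)
    std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
    return std / np.maximum(mean, 1e-12)

img = np.random.rand(128, 128) + 1.0     # toy image with positive intensities
print(contrast_ratio_map(img).mean())
```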
Spatiotemporal multiscale vessel enhancement for coronary angiograms
Til Aach, Claudia Mayntz, Peter M. J. Rongen, et al.
In coronary angiography, coronaries are imaged filled by a contrast medium which is injected through a catheter. To increase vessel visibility relative to surrounding structures, a background-less, subtraction-like appearance of the angiograms may be desired. This paper describes algorithms to increase vessel contrast and to attenuate background. Due to the strong motion in coronary angiograms, direct subtraction of a mask image acquired initially without contrast agent cannot be applied. We therefore distinguish vessels from background by their contrast, their size and their motion. These criteria are evaluated on a multiscale structure. Enhancement is then applied at locations which are likely to contain vessels. To avoid unacceptable noise boosting, we integrate a multiscale noise reduction filter into this concept. Both performance and computational simplicity make our algorithms attractive.
EM-IntraSPECT algorithm with ordered subsets (OSEMIS) for nonuniform attenuation correction in cardiac imaging
Andrzej Krol, Ifeanyi Echeruo, Roberto B. Solgado, et al.
Performance of the EM-IntraSPECT (EMIS) algorithm with ordered subsets (OSEMIS) for non-uniform attenuation correction in the chest was assessed. EMIS is a maximum-likelihood expectation-maximization (MLEM) algorithm for simultaneously estimating SPECT emission and attenuation parameters from emission data alone. EMIS uses the activity within the patient as transmission tomography sources, with which attenuation coefficients can be estimated. However, the reconstruction time is long. The new algorithm, OSEMIS, is a modified EMIS algorithm based on ordered subsets. Emission Tc-99m SPECT data were acquired over 360 degrees in a non-circular orbit from a physical chest phantom using a clinical protocol. Both a normal and a defect heart were considered. OSEMIS was evaluated in comparison to EMIS and a conventional MLEM with a fixed uniform attenuation map. A wide range of image measures was evaluated, including noise, log-likelihood, and region quantification. Uniformity was assessed from bull's eye plots of the reconstructed images. For the appropriate subset size, OSEMIS yielded essentially the same images as EMIS, and better images than MLEM, while requiring only one-tenth as many iterations. Consequently, adequate images were available in about fifteen iterations.
Adaptive robust filters in MRI
Fernando A. Barrios, Leopoldo Gonzalez-Santos, Rafael Favila, et al.
An adaptive noise filter that can be used in MRI for noise reduction is presented. The algorithm is based on a robust estimator, the mode. Using the mode as the gray-scale value estimator, it is possible to differentiate the structures of interest from the background noise. Noise reduction is one of the most common image correction procedures used in the enhancement of digital images. Widely used noise reduction filters for digital imaging are based on median estimation (median filters). Every time noise reduction filters are applied to an image there is a general softening or blurring of it; in particular, mean filters are characterized by a strong softening effect in the case of high-amplitude noise, practically destroying all the fine features in the filtered image. This problem is significantly reduced when median filters are used. The adaptive mode filter proposed in this work has a very good noise reduction effect without strong softening and is comparable in CPU time to median filters. This fine resolution is achieved because the filter changes the mode estimator according to the difference between the mode deviation calculated for each pixel neighborhood and the global mode deviation for each class in the image. We consider it robust because it uses the mode of the gray-scale intensity distribution of the pixel neighborhood and its mode deviation.
Contrast improvements in digital radiography using a scatter-reduction processing algorithm
Kent M. Ogden, Charles R. Wilson, Robert W. Cox
An expectation maximization (EM) algorithm has been developed based on a model of radiographic imaging that accounts for scattered radiation and resolution degradation. Digital radiographs of a chest phantom were acquired, and the amount of scatter in several regions was computed using the known radiographic exposure and the known material properties. Contrast and noise were measured in a step wedge in the phantom. The phantom images were processed by the EM algorithm for up to 8 iterations, and the image intensity and noise values were measured at each iteration step. These values were used to compute the scatter reduction properties of the algorithm and its effect on the contrast-to-noise ratio. The algorithm removed over 90% of the scattered radiation in the image. Image noise values were reduced by an average of 50% in the first iteration, but then increased to values equal to or above those of the unprocessed images. The contrast-to-noise ratio was initially increased substantially, but gradually decreased as further iterations caused the image noise to increase. With proper selection of processing parameters, this algorithm could provide considerable qualitative enhancement of clinical images with a single iteration as well as numerically accurate scatter reduction.
Effective dose reduction in dual-energy flat panel x-ray imaging: technique and clinical evaluation
Gopal B. Avinash, Kadri N. Jabri, Renuka Uppaluri, et al.
Dual-energy (DE) chest radiography with a digital flat panel (DFP) shows significant potential for increased sensitivity and specificity of pulmonary nodule detection. DFP-based DE produces significantly better image quality compared to Computed Radiography (CR) due to high detective quantum efficiency (DQE) and wide energy separation. We developed novel noise reduction filtering that significantly improves image quality at a given dose level, thereby allowing considerable additional dose reduction compared to CR. The algorithm segments images into structures, which are processed using anisotropic smoothing and sharpening, and non-structures, which are processed using isotropic smoothing. A fraction of the original image is blended with the processed image to obtain an image with improved noise characteristics. DE decomposed radiographs were obtained at film equivalent of 400 speed chest exam dose for 12 patients (set A) and at twice the dose for 7 other patients (set C). Images from set A were filtered using our algorithm to form set B. Images were evaluated by four radiologists using a noise rating scale. A two-sample t-test showed no significant difference in ratings between B and C, while significant differences were found between A and B, and A and C. Therefore, our algorithm enables effective patient dose reduction while maintaining perceptual image quality.
Additional processing for phase unwrapping of magnetic resonance thermometry imaging
S. Suprijanto, Frans M. Vos, M. W. Vogel, et al.
Magnetic Resonance Thermometry imaging is a non-invasive method for temperature monitoring in hyperthermia treatment. The temperature can be determined from the phase shift in a gradient-echo sequence. Due to large temperature variations, the phase shift may exceed the (-π, π) radian interval. Phase values beyond this interval are wrapped. Unfortunately, the temperature is proportional only to the absolute phase change. Therefore, phase unwrapping (PU) is required to recover the absolute phase from the wrapped representation. Because the phase may contain spurious discontinuities, the algorithm must distinguish them from true phase discontinuities. We propose additional processing to support PU in order to improve recovery of the best estimate of the absolute phase. The Minimum Weight Discontinuity (MWD) algorithm was used for PU. The additional processing consists of applying a Gaussian filter to the raw complex MRI images, deriving weights for a quality map, and segmenting unreliable regions using the magnitude image. Raw wrapped phase images, acquired from a phantom and from a porcine liver (acquired under laser irradiation), were used to test the effect of the additional processing. The effect was compared with the conventional approach (i.e., mere unwrapping with the MWD algorithm).
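A sketch of the first additional-processing step: smoothing the raw complex image rather than the wrapped phase itself, so that noise is suppressed without corrupting the true 2π wraps. The filter width and names are our assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smoothed_wrapped_phase(complex_image, sigma=1.0):
    """Gaussian-filter the real and imaginary channels of the raw complex
    MR image, then take the angle; the result is still wrapped phase, but
    with fewer noise-induced spurious discontinuities for the unwrapper."""
    real = gaussian_filter(complex_image.real, sigma)
    imag = gaussian_filter(complex_image.imag, sigma)
    return np.angle(real + 1j * imag)   # wrapped phase in (-pi, pi]

rng = np.random.default_rng(2)
img = np.exp(1j * rng.uniform(-np.pi, np.pi, (64, 64)))  # toy complex data
phase = smoothed_wrapped_phase(img)
print(phase.min(), phase.max())
```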
Sequential approach to three-dimensional geometric image correction
Frank J. Crosby, A. Patricia Nelson
This paper presents a new and comprehensive approach for correcting magnetic-resonance images that are subject to three-dimensional geometric distortion. Distortion in such images is typically caused by variations in the magnetic- field gradient in each of the three spatial dimensions. The new approach sequentially applies one-dimensional and two- dimensional correction techniques to achieve a complete three-dimensional geometric correction. It thus avoids many theoretical complications and computational inefficiencies that are inherently associated with direct (non-sequential) three-dimensional correction techniques.
Denoising of cone beam CT image using wavelet transform
Yi-Qiang Yang, Nobuyuki Nakamori, Yasuo Yoshida, et al.
We have developed a method to remove noise from cone beam CT images and consider the resulting reduction of patient dose. In diagnostic medicine, cone beam CT increases a patient's exposure dose. The X-ray CT image is degraded by noise called quantum mottle, and this noise becomes more pronounced as the patient's dose decreases. It is known that the image signal can be separated from the noise by measuring the Lipschitz exponents of the image singularities from the evolution of wavelet transform modulus maxima across scales. We identify the singularities of 2-D projections by computing the wavelet transform modulus sum (WTMS) in the direction indicated by the phase of the wavelet transform. Our preliminary results show the validity of the 2-D WTMS-based method for removing quantum mottle from 2-D projections, and suggest that the patient's dose can be reduced by this method.
Tree-structured wavelet transform signature for classification of melanoma
Sachin V. Patwardhan, Atam P. Dhawan, Patricia A. Relue
The purpose of this work is to evaluate the use of a wavelet transform based tree structure for classifying skin lesion images into melanoma and dysplastic nevus based on spatial/frequency information. The classification is done using wavelet transform tree-structure analysis. Development of the tree structure in the proposed method uses energy ratio thresholds obtained from a statistical analysis of the coefficients in the wavelet domain. The method is used to obtain a tree-structure signature of melanoma and dysplastic nevus, which is then used to classify the data set into the two classes. Images are classified by a semantic comparison of the wavelet transform tree-structure signatures. Results show that the proposed method is effective and simple for classification based on spatial/frequency information, which also includes textural information.
Line vector quantization using noninteger subsampled wavelet pyramids and its application in medical imaging
Vadim Kustov, Andrew M. Zador
We have developed two novel techniques that can improve quality and speed for wavelet based compression algorithms without major modification of the latter. We show that replacing traditional block-shaped vectors with line vectors of the same dimension (defined along a row or column of an image or its transform), significantly reduces the distortion in the reconstructed image, while accelerating coding and decoding times. To improve the performance of the clustering algorithm, we introduce a non-integer-sub-sampled wavelet pyramid. This new type of wavelet decomposition possesses certain shift-invariant properties not found in classical wavelet pyramid structures. Unlike frames and other types of mapping that introduce data redundancy into the transform in order to induce shift-invariance, our new pyramid does not introduce any data redundancy. A fast method for implementing this new pyramid is introduced. It is shown that the resulting zerotree structure is both sparser, and more efficiently coded due to the non-integer sub-sampling process. Experimental data is provided, demonstrating the performance of our proposed architecture employing line vectors. Our data also indicates that replacing the classical pyramid with this new pyramid can significantly improve performance for a wide range of quantizer designs.
Scale-based method for correcting background intensity variation in acquired images
An automatic, acquisition-protocol-independent, entirely image-based strategy for correcting background intensity variation in medical images has been developed. Local scale - a fundamental image property that is derivable entirely from the image and that does not require any prior knowledge about the imaging protocol or object material property distributions - is used to obtain a set of homogeneous regions, no matter what each region is, and to fit a 2nd degree polynomial to the intensity variation within them. This polynomial is used to correct the intensity variation. The above procedure is repeated for the corrected image until the size of segmented homogeneous regions does not change significantly from that in the previous iteration. Intensity scale standardization is effected to make sure that the corrected images are not biased by the fitting strategy. The method has been tested on 1000 3D mathematical phantoms, which include 5 levels each of blurring and noise and 4 types of background variation - additive and multiplicative Gaussian and ramp. It has also been tested on 10 clinical MRI data sets of the brain. These tests, and a comparison with the method of homomorphic filtering, indicate the effectiveness of the method.
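A minimal sketch of the polynomial-fitting step described above: a 2nd-degree 2-D polynomial is fit by least squares to intensities sampled in the detected homogeneous regions, then evaluated over the whole image as the background estimate. The scale-based region detection itself is not reproduced; names and toy data are ours.

```python
import numpy as np

def fit_quadratic_background(coords, intensities):
    """Least-squares fit of a 2nd-degree 2-D polynomial to intensities
    sampled at (row, col) coordinates inside homogeneous regions."""
    y, x = coords[:, 0].astype(float), coords[:, 1].astype(float)
    A = np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])
    coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return coeffs

def evaluate_quadratic(c, shape):
    """Evaluate the fitted polynomial over a full image grid."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return (c[0] + c[1] * xx + c[2] * yy + c[3] * xx * yy
            + c[4] * xx ** 2 + c[5] * yy ** 2)

# Toy use: recover a known ramp-like background from noisy samples.
rng = np.random.default_rng(3)
pts = rng.integers(0, 64, size=(500, 2))
vals = 10 + 0.1 * pts[:, 1] + 0.05 * pts[:, 0] + rng.normal(0, 0.1, 500)
background = evaluate_quadratic(fit_quadratic_background(pts, vals), (64, 64))
```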
Interslice interpolation of anisotropic 3D images using multiresolution contour correlation
Jiann-Der Lee, Shu-Yen Wan, Cherng-Min Ma
To visualize, manipulate, and analyze the geometrical structure of anatomical changes, it is often necessary to perform three-dimensional (3-D) interpolation of the shape of the organ of interest from a series of cross-sectional images obtained from various imaging modalities, such as ultrasound, computed tomography (CT), and magnetic resonance imaging (MRI). In this paper, a novel wavelet-based interpolation scheme consisting of four algorithms is proposed for 3-D image reconstruction. The multi-resolution characteristics of the wavelet transform (WT) are fully exploited in this approach, which consists of two stages: boundary extraction and contour interpolation. More specifically, a wavelet-based radial search method is first designed to extract the boundary of the target object. Next, the global information of the extracted boundary is analyzed for interpolation using the WT with various bases and scales. Using six performance measures to evaluate the effectiveness of the proposed scheme, experimental results show that all the proposed algorithms are superior to traditional contour-based methods, linear interpolation, and B-spline interpolation. The satisfactory outcome demonstrates the scheme's capability to serve as an essential part of image processing systems developed for medical applications.
Multi-scale application of the N3 method for intensity correction of MR images
Craig Jones, Erick Wong
Spatial inhomogeneity due to the radio-frequency coil in MR imaging can confound segmentation results. In 1994, Sled introduced the N3 technique, using histogram deconvolution, for reducing inhomogeneity. We found some scans whose steep inhomogeneity gradient was not fully eliminated by N3. We created a multi-scale application of N3 that further reduces this gradient, and validated it on MNI BrainWeb and actual MRI data. The algorithm was applied to proton density simulated BrainWeb scans (with known inhomogeneity) and 100 standard MRI scans. Intra-slice and inter-slice inhomogeneity measures were created to compare the technique with standard N3. The slope of the estimated bias versus the known bias of BrainWeb data was 1.0 (r=0.9844) for N3 and 1.04 (r=0.9828) for multi-scale N3. The bias field estimated by multi-scale N3 was within 1% root-mean-square of that of standard N3. Over 100 MS patient scans, the average intra-slice measure (0 meaning bias-free) was 0.0694 (uncorrected), 0.0530 (N3) and 0.0402 (multi-scale). The average inter-slice measure (1 meaning bias-free) was 0.9121 (uncorrected), 0.9367 (N3) and 0.9508 (multi-scale). The multi-scale N3 algorithm showed a greater inhomogeneity reduction than N3 in the small percentage of scans bearing a strong gradient, and results similar to N3 in the remaining scans.
Multiwavelet grading of prostate pathological images
We have developed image analysis methods to automatically grade pathological images of the prostate. The proposed method assigns Gleason grades to images, where each image receives a grade between 1 and 5. This is done using features extracted from multiwavelet transformations. We extract energy and entropy features from the submatrices obtained in the decomposition. Next, we apply a k-NN classifier to grade the image. To find the optimal multiwavelet basis, preprocessing, and classifier, we use features extracted by different multiwavelets, with either critically sampled or repeated-row preprocessing, and different k-NN classifiers, and compare their performances as evaluated by the total misclassification rate (TMR). To evaluate sensitivity to noise, we add white Gaussian noise to the images and compare the resulting TMRs. We applied the proposed methods to 100 images. We evaluated the first and second levels of decomposition using the Geronimo, Hardin, and Massopust (GHM), Chui and Lian (CL), and Shen (SA4) multiwavelets. We also evaluated the k-NN classifier for k=1,2,3,4,5. Experimental results illustrate that the first level of decomposition is quite noisy. They also show that critically sampled preprocessing outperforms repeated-row preprocessing and has less sensitivity to noise. Finally, comparison studies indicate that the SA4 multiwavelet and the k-NN classifier with k=1 generate the best results (smallest TMR of 3%).
Soft parametric curve matching in scale-space
We develop a softassign method for application to curve matching. Softassign uses deterministic annealing to iteratively optimize the parameters of an energy function. It also incorporates outlier rejection by converting the energy into a stochastic matrix with entries for rejection probability. Previous applications of the method focused on finding transformations between unordered point sets. Thus, no topological constraints were required. In our application, we must consider the topology of the matching between the reference and the target curve. Our energy function also depends upon the rotation and scaling between the curves. Thus, we develop a topologically correct algorithm to update the arc length correspondence, which is then used to update the similarity transformation. We further enhance robustness by using a scale-space description of the curves. This results in a curve-matching tool that, given an approximate initialization, is invariant to similarity transformations. We demonstrate the reliability of the technique by applying it to open and closed curves extracted from real patient images (cortical sulci in three dimensions and corpora callosa in two dimensions). The set of transformations is then used to compute anatomical atlases.
Wavelet median denoising of ultrasound images
Katherine E. Macey, Wyatt H. Page
Ultrasound images are contaminated with both additive and multiplicative noise, modeled by Gaussian and speckle noise, respectively. Distinguishing small features such as the fallopian tubes in the female genital tract in this noisy environment is problematic. A new method for noise reduction, Wavelet Median Denoising, is presented. Wavelet Median Denoising consists of performing a standard noise reduction technique, median filtering, in the wavelet domain. The new method is tested on 126 images, comprising 9 original images each with 14 levels of Gaussian or speckle noise. Results for both separable and non-separable wavelets are evaluated, relative to soft-thresholding in the wavelet domain, using the signal-to-noise ratio and subjective assessment. The performance of Wavelet Median Denoising is comparable to that of soft-thresholding. Both methods are more successful in removing Gaussian noise than speckle noise. Wavelet Median Denoising outperforms soft-thresholding in a larger number of cases for speckle noise reduction than for Gaussian noise reduction. Noise reduction is more successful using non-separable wavelets than separable wavelets. When both methods are applied to ultrasound images obtained from a phantom of the female genital tract, a small improvement is seen; however, a substantial improvement is required prior to clinical use.
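A minimal single-level sketch of the idea using a separable wavelet: median-filter the detail subbands of a 2-D discrete wavelet transform and reconstruct. The wavelet choice, window size, and leaving the approximation band untouched are our assumptions.

```python
import numpy as np
import pywt
from scipy.ndimage import median_filter

def wavelet_median_denoise(image, wavelet="db2", size=3):
    """Median filtering applied in the wavelet domain: filter the three
    detail subbands of one decomposition level, then reconstruct."""
    cA, (cH, cV, cD) = pywt.dwt2(image, wavelet)
    details = tuple(median_filter(c, size=size) for c in (cH, cV, cD))
    return pywt.idwt2((cA, details), wavelet)

noisy = np.random.rand(128, 128)   # toy noisy image
print(wavelet_median_denoise(noisy).shape)
```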
Application of an adaptive control grid interpolation technique to MR data set augmentation aimed at morphological vascular reconstruction
David Frakes, Christopher M. Sinotte, Christopher P. Conrad, et al.
The total cavopulmonary connection (TCPC) is a palliative surgical repair performed on children with a single ventricle (SV) physiology. Much of the power produced by the resultant single-ventricle pump is consumed in the systemic circulation. Consequently, the minimization of power loss in the TCPC is imperative for an optimal surgical outcome. Toward this end, we have developed a method of vascular morphology reconstruction based on adaptive control grid interpolation (ACGI) to function as a precursor to computational fluid dynamics (CFD) analysis aimed at quantifying power loss. Our technique combines positive aspects of optical flow-based and block-based motion estimation algorithms to accurately augment insufficiently dense magnetic resonance (MR) data sets with a minimal degree of computational complexity. The resulting enhanced data sets are used to reconstruct vascular geometries, and the subsequent reconstructions can then be used in conjunction with CFD simulations to offer the pressure and velocity information necessary to quantify power loss in the TCPC. Collectively these steps form a tool that transforms conventional MR data into more powerful information, allowing surgical planning aimed at producing optimal TCPC configurations for successful surgical outcomes.
Analysis of myocardial motion in tagged MR images using nonrigid image registration
Tagged magnetic resonance imaging (MRI) is unique in its ability to non-invasively image the motion and deformation of the heart in-vivo, but one of the fundamental reasons limiting its use in the clinical environment is the absence of automated tools to derive clinically useful information from tagged MR images. In this paper we present a novel and fully automated technique based on nonrigid image registration using multi-level free-form deformations (MFFDs) for the analysis of myocardial motion using tagged MRI. The novel aspect of our technique is its integrated nature for tag localization and deformation field reconstruction. To extract the motion field within the myocardium during systole we register a sequence of images taken during systole to a set of reference images taken at end-diastole, maximizing the mutual information between images. We use both short-axis and long-axis images of the heart to estimate the full four-dimensional motion field within the myocardium. We have validated our method using a cardiac motion simulator and we also present quantitative comparisons of cardiac motion from nine volunteers.
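A sketch of the similarity measure driving the registration: mutual information estimated from a joint intensity histogram of the two images. The bin count and names are ours; the multi-level free-form deformation model is not reproduced.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two images from their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)          # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of image b
    nz = pxy > 0                                 # avoid log(0)
    return (pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum()

a = np.random.rand(64, 64)
print(mutual_information(a, a))                      # high: image vs itself
print(mutual_information(a, np.random.rand(64, 64))) # near zero: independent
```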
Characterization and evaluation of inversion algorithms for MR elastography
Armando Manduca, Travis E. Oliphant, David S. Lake, et al.
Magnetic resonance elastography (MRE) can visualize and measure acoustic shear waves in tissue-like materials subjected to harmonic mechanical excitation. This allows the calculation of local values of material parameters such as shear modulus and attenuation. Various inversion algorithms to perform such calculations have been proposed. Under certain assumptions (discussed in detail), the problem reduces to local inversion of the Helmholtz equation. Three algorithms are considered to perform this inversion: Direct Inversion, Local Frequency Estimation, and Matched Filter. To study the noise sensitivity, resolution, and accuracy of these techniques, studies were conducted on synthetic and physical phantoms and on in-vivo breast data. All three algorithms accurately reconstruct shear modulus, demarcate differences between tissues, and identify tumors as areas of higher stiffness, but they vary in noise sensitivity and resolution. The Matched Filter, designed for optimal behavior in noise, provides the best combination of sharpness and smoothness. Challenges remain in pulse sequence design, delivering sufficient signal to certain areas of the body, and improvements in processing algorithms, but MRE shows great potential for non-invasive in vivo determination of mechanical properties.
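A sketch of the Direct Inversion algorithm under the stated assumptions (local homogeneity, time-harmonic motion), where the Helmholtz equation μ∇²u + ρω²u = 0 is solved pointwise for the shear modulus μ. The density, pixel size, and plane-wave test are our assumptions.

```python
import numpy as np
from scipy.ndimage import laplace

def direct_inversion(displacement, freq_hz, rho=1000.0, pixel_m=1e-3):
    """Pointwise algebraic inversion of the Helmholtz equation for mu."""
    omega = 2 * np.pi * freq_hz
    lap = laplace(displacement) / pixel_m ** 2   # discrete Laplacian, 1/m^2
    return -rho * omega ** 2 * displacement / (lap + 1e-30)

# Toy plane shear wave u = cos(kx); Helmholtz predicts mu = rho*omega^2/k^2.
x = np.arange(256) * 1e-3                        # 1 mm pixels
k = 2 * np.pi / 0.02                             # 20 mm wavelength
u = np.cos(k * x)[None, :].repeat(64, axis=0)
mu = direct_inversion(u, freq_hz=100.0)
print(np.median(mu), 1000.0 * (2 * np.pi * 100.0) ** 2 / k ** 2)
```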
Visualization of cardiac wavefronts using data fusion
David B. Kynor, Anthony Dietz, Eric Friets, et al.
Catheter ablation has emerged as a highly effective treatment for arrhythmias that are constrained by known, easily located, anatomic landmarks. However, this treatment has enjoyed limited success for arrhythmias that are characterized by complex activation patterns or are not anatomically constrained. This class of arrhythmias, which includes atrial fibrillation and ventricular tachycardia resulting from ischemic heart disease, demands improved mapping tools. Current technology forces the cardiologist to view cardiac anatomy independently from the functional information contained in the electrical activation patterns. This leads to difficulties in interpreting the large volumes of data provided by high-density recording catheters and in mapping patients with abnormal anatomy (e.g., patients with congenital heart disease). The goal of this work is the development of new data processing and display algorithms that permit the clinician to view activation sequences superimposed onto existing fluoroscopic images depicting the location of recording catheters within the heart. In cases where biplane fluoroscopic images and x-ray camera position data are available, the position of the catheters can be reconstructed in three dimensions.
Multiple-isovalue selection by clustering gray values of the boundary surfaces within volume image
Lisheng Wang, PhengAnn Heng, TienTsin Wong, et al.
In medical visualization, multiple isosurfaces are usually extracted from a medical volume image and used to represent (approximate) the boundary surfaces of different structures in the image. In this paper, we discuss the problem of approximating a boundary surface contained within a volume image by an isosurface. Since a medical volume image commonly contains multiple structures of interest, we present a novel approach for the selection of multiple isosurfaces to approximate the boundary surfaces of these multiple structures. With this approach, discrete sampling points of the gray values of the boundary surfaces within the volume image are computed first. Then, by identifying appropriate clusters among these sampling points and computing the mean of each cluster, we can determine the corresponding isosurfaces for approximating the multiple boundary surfaces.
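A sketch of the clustering step: a simple 1-D k-means over gray values sampled on the detected boundary surfaces, with each cluster mean taken as one isovalue. The boundary-sampling step is not reproduced; the value of k, initialization, and toy data are our assumptions.

```python
import numpy as np

def isovalues_by_kmeans(boundary_samples, k, n_iter=100, seed=0):
    """Cluster boundary gray values; each cluster mean is one isovalue."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(boundary_samples, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(boundary_samples[:, None] - centers), axis=1)
        new_centers = []
        for j in range(k):
            members = boundary_samples[labels == j]
            # Keep the old center if a cluster happens to empty out.
            new_centers.append(members.mean() if members.size else centers[j])
        centers = np.array(new_centers)
    return np.sort(centers)

rng = np.random.default_rng(4)
samples = np.concatenate([rng.normal(100, 5, 300),    # e.g., one boundary type
                          rng.normal(220, 8, 300)])   # e.g., another boundary
print(isovalues_by_kmeans(samples, k=2))              # approximately [100, 220]
```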
Identifying image structures for content-based retrieval of digitized spine x rays
L. Rodney Long, Daniel M. Krainak, George R. Thoma
We present ongoing work for the computer-assisted indexing of biomedical images at the Lister Hill National Center for Biomedical Communications, a research and development division of the National Library of Medicine (NLM). For any class of biomedical images, a problem confronting the researcher in image indexing is developing robust algorithms for localizing and identifying anatomy relevant for that image class and relevant to the indexing goals. This problem is particularly acute in the case of digitized spine x-rays, due to the projective nature of the data, which results in overlapping boundaries with possibly ambiguous interpretations; the highly irregular shapes of the vertebral bodies, sometimes additionally distorted by pathology; and possible occlusions of the vertebral anatomy due to subject positioning. We present algorithms that we have developed for the localization and identification of vertebral structure and show how these algorithms fit into the family of algorithms that we continue to develop for our general indexing problem. We also review the indexing goals for this particular collection of digitized spine x-rays and discuss the use of the indexed images in a content-based image retrieval system.
Automatic localization and delineation of collimation fields in digital and film-based radiographs
Thomas Martin Lehmann, Sascha Goudarzi, Nick Linnenbruegger, et al.
Collimation field detection is an important pre-processing step for automatic image analysis of radiographs. However, most approaches are restricted to a small set of form archetypes or presuppose the presence of a shutter. Hence, existing methods are not applicable to large collections of radiographs from various modalities, such as those obtained in the field of content-based image retrieval in medical applications. Based on analytical evaluation, the approach of WIEMKER et al. (Proc. SPIE 2000; 3979:1555-1565) was selected, modified to reduce false-positive detections, and evaluated on a large set of 4,000 radiographs (763 containing shutter edges) taken from daily routine and covering all kinds of projective x-ray examinations. Eight subsets (each of 500 images) were compiled randomly. Each set of 500 images was used to optimize the parameters, and the optimized algorithm was evaluated on the remaining 3,500 images. This procedure was repeated for all eight combinations. Using the initial approach, the specificity is 96.4% with a poor sensitivity of 44.1%, resulting in an overall precision of 86.7%. All figures increase to 98.5%, 55.6%, and 89.5%, respectively, if the algorithm also minimizes the variation of radiation density values outside the detected shutter area. In terms of sensitivity and precision, the results of optimization vs. evaluation for the same combination and of evaluation vs. evaluation for different combinations differed by up to 13 and 9 percentage points, respectively. This indicates that the number of images used is still insufficient to allow complete generalization of the results.
Knowledge-based image understanding and classification system for medical image databases
Hui Luo, Roger S. Gaborski, Raj S. Acharya
With the advent of computed radiography (CR) and digital radiography (DR), image understanding and classification in medical image databases have attracted considerable attention. In this paper, we propose a knowledge-based image understanding and classification system for medical image databases. An object-oriented knowledge model is introduced, and content features of medical images are hierarchically matched to the related knowledge models; by finding the best-matching model, the input image can be classified. The implementation of the system includes three stages. The first stage focuses on matching the coarse pattern of the model class and has three steps: image preprocessing, feature extraction, and neural network classification. Once the coarse shape classification is done, a small set of plausible model candidates is employed for a detailed match in the second stage, whose output indicates which models are likely contained in the processed image. Finally, an evaluation strategy is used to further confirm the results. The performance of the system has been tested on different types of digital radiographs, including pelvis, ankle, and elbow images. The experimental results suggest that the system prototype is applicable and robust, and its accuracy is near 70% on our image databases.
Poster Session II
Computer-aided diagnosis in CT colonography: detection of polyps based on geometric and texture features
A computer-aided diagnosis scheme for the detection of colonic polyps in CT colonography has been developed, and its performance has been assessed on clinical cases with colonoscopy-confirmed polyps. In the scheme, the colon is automatically segmented by knowledge-guided segmentation from 3-dimensional isotropic volumes reconstructed from axial CT slices. Polyp candidates are detected by first computing 3-dimensional geometric features that characterize polyps, and then segmenting connected components corresponding to suspicious regions by hysteresis thresholding and fuzzy clustering based on these geometric features. False-positive detections are reduced by computing 3-dimensional texture features that characterize the internal structures of the polyp candidates, followed by discriminant analysis in the feature space generated by the geometric and texture features. We applied our scheme to 43 CT colonographic cases with cleansed colons, including 12 polyps larger than 5 mm. In a by-dataset analysis, the CAD scheme yielded a sensitivity of 95% with 1.2 false positives per data set. The false negative was one of two polyps in a single patient. Consequently, in a by-patient analysis, our method yielded 100% sensitivity with 2.0 false positives per patient. The results indicate that our CAD scheme has the potential to detect clinically important polyps with high sensitivity and a relatively low false-positive rate.
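The hysteresis thresholding used above to segment connected polyp candidates can be illustrated with a minimal sketch; the geometric feature map and the two threshold values below are hypothetical placeholders rather than the authors' parameters.

```python
import numpy as np
from scipy import ndimage

def hysteresis_threshold(feature_map, t_low, t_high):
    """Keep connected components of (feature_map > t_low) that
    contain at least one voxel above t_high (hysteresis)."""
    low_mask = feature_map > t_low
    labels, n = ndimage.label(low_mask)
    if n == 0:
        return np.zeros_like(low_mask)
    # Peak feature value inside each low-threshold component.
    peaks = ndimage.maximum(feature_map, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(peaks > t_high) + 1  # labels to retain
    return np.isin(labels, keep)

# Hypothetical usage on a 3-D geometric feature map.
feature_map = np.random.rand(32, 32, 32)
candidates = hysteresis_threshold(feature_map, t_low=0.8, t_high=0.95)
```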
Connective tissue representation for detection of microcalcifications in digital mammograms
Microcalcification clusters appear as an early sign of breast cancer and play an important role in interpreting mammograms. Progress is reported towards an automated computer-aided detection system for clustered microcalcifications utilizing two image feature parameters: local contrast and shape. The shape parameter is necessary to distinguish thin patches of connective tissue from microcalcifications. Two shape parameter techniques are compared in the segmentation of 15 digital mammogram images. The first technique implements the linear Hough transform, while the second uses image phase information in the Fourier domain. In both cases, labeling of the image is performed by a deterministic relaxation scheme in which image data and prior beliefs are weighted simultaneously. Similar segmentation results are obtained for each shape parameter technique; however, the execution time for the phase method is approximately one quarter of that for the Hough method. Both techniques offer an improvement over segmentation results obtained without the shape parameter.
New deformable human brain atlas for computer-aided diagnosis
Antti J. Lahtinen, Harry Frey, Hannu Eskola
Modern software-based image analysis techniques enable accurate detection of the size and shape of various brain lesions. In order to estimate the real burden caused by the lesions, their neuroanatomical location should also be taken into account. Deformable brain atlases therefore appear to be essential tools when new diagnostic imaging methods are developed and tested. We have developed deformable brain atlas software for research and diagnosis. The atlas is used to compare patient brain images with a segmented reference brain image so that the patient's neuroanatomical structures can be identified. The atlas software comes with image processing tools for transforming CT or MR image sets into an atlas-compatible volume image format. The reference image is deformed to match the patient image, and the segmented neuroanatomical regions of the atlas image can then be blended with the patient image.
CAD system for lung cancer based on low-dose single-slice CT image
Mitsuru Kubo, Kazuhori Kubota, Nobuhiro Yamada, et al.
We have developed a computer-aided diagnosis (CAD) system for early-stage lung cancer detection from low-dose single-slice computed tomography (CT) with a 10 mm beam width in chest screening. The objective of this study is to solve three problems of the conventional CAD system: (1) lesions that overlap blood vessels, (2) lesions in contact with blood vessels, and (3) lesions near the upper mediastinum. This paper presents a new method to solve problems 1 and 2. Blood vessels that overlap or are in contact with lesions are eliminated by accurately detecting regions of interest (ROIs). The ROI detection method consists of three processes: first, streak shadows are eliminated using a linear feature detector filter; second, the pulmonary background bias is estimated using the intensity histogram and the morphological opening method; and finally, ROI borders are detected using a Laplacian filter. We evaluated the new system by applying it to 155 shadows requiring confirmatory diagnosis. These cases were selected retrospectively from clinical tests performed between July 1997 and December 2000. The algorithm achieved a sensitivity of 91.0% for true-positive cases, with an average of 0.53 false positives per slice.
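A gray-level morphological opening of the kind used in the second step to estimate the pulmonary background bias might look like the sketch below; the structuring-element size is an assumption, not the authors' parameter.

```python
import numpy as np
from scipy import ndimage

def subtract_background_bias(slice_img, radius=15):
    """Estimate a slowly varying background by gray-level opening
    (erosion followed by dilation), which removes structures smaller
    than the structuring element, then subtract it."""
    size = 2 * radius + 1
    background = ndimage.grey_opening(slice_img, size=(size, size))
    return slice_img - background

corrected = subtract_background_bias(np.random.rand(512, 512) * 1000.0)
```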
Clinical test in a prospective study of a CAD system for lung cancer based on helical CT images
Kazuhori Kubota, Mitsuru Kubo, Yoshiki Kawata, et al.
We have developed a computer-assisted automatic detection system for lung cancer that detects tumor candidates at an early stage from helical CT images. In July 1997, we started a comparative field trial using our system prospectively. Chest CT images obtained by helical CT scanners have drawn great interest for the detection of suspicious regions. However, mass screening based on helical CT images produces a considerable number of images to be diagnosed. We expect that our system can reduce the reading workload and increase diagnostic confidence. In this paper, we describe the detection results of the system for nodules with a definite diagnosis and show the clinical test results of a prospective study. The results show that the system can successfully detect lung cancer candidates at an early stage and can be applied to mass screening. In addition, we describe the need for a CAD function that allows comparison with previous CT images.
Classification experiments of pulmonary nodules using high-resolution CT images
In assessing the malignant potential of small pulmonary nodules in thin-section CT images, it is important to examine the nodule's internal structure. In our previous work, we found that internal structure features derived from CT density and curvature indices such as the shape index and curvedness were useful for differentiating malignant from benign nodules in 3-D thoracic CT images. This may be attributed to texture changes in the nodule region due to a developing malignancy. We present a joint-histogram-based representation of the internal structures of nodules to visualize the differences between them.
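The shape index and curvedness mentioned above are standard functions of the principal curvatures k1 >= k2; a minimal computation is sketched below (estimating the curvatures from the CT volume is a separate step not shown here).

```python
import numpy as np

def shape_index(k1, k2):
    """Koenderink shape index in [-1, 1] from principal curvatures
    k1 >= k2; spherical caps map to +1 and cups to -1."""
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

def curvedness(k1, k2):
    """Overall magnitude of curvature, independent of shape."""
    return np.sqrt((k1 ** 2 + k2 ** 2) / 2.0)
```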
Classification of lung area using multidetector-row CT images
Tsutomu Mukaibo, Yoshiki Kawata, Noboru Niki, et al.
Recent progress in x-ray CT scanners makes it possible to obtain high-quality images in a short time, and three-dimensional (3-D) analysis of pulmonary organs using multidetector-row CT (MDCT) images is therefore anticipated. This paper presents a method for classifying the lung area into lobes using pulmonary MDCT images of the whole lung. Classifying the lung area into lobes makes it possible to determine the position of a nodule. The structures of the right and left lungs differ: the right lung is divided into three lobes by the major and minor fissures, while the left lung is divided into two lobes by the major fissure. Careful inspection of MDCT images shows that the regions surrounding the fissures contain few blood vessels. Therefore, the lung area is classified by extracting the regions that lie far from pulmonary blood vessels and connectively searching these extracted regions. The extraction and search are realized by a 3-D weighted Hough transform.
Computer-Aided Diagnosis I
Breast biopsy prediction using a case-based reasoning classifier for masses versus calcifications
Anna O. Bilska-Wolak, Carey E. Floyd Jr.
We investigated how the subdivision of breast biopsy cases into masses and calcifications influences breast cancer prediction for a case-based reasoning (CBR) classifier system. Mammographers' BI-RADS (TM) descriptions of mammographic lesions were used as input to predict breast biopsy outcome. The CBR classifier compared the case to be examined to a reference collection of cases and identified similar cases. The decision variable for each case was formed as the ratio of malignant similar cases to all similar cases. The reference data collection consisted of 1433 biopsy-proven mammography cases, and was divided into 3 categories: mass cases, calcification cases, and other. Performance was evaluated using ROC analysis and Round Robin sampling, and variance was estimated using a bootstrap analysis. The best ROC area for masses was 0.92 +/- 0.01. At 98% sensitivity, about 209 (51%) patients with benign mass lesions might have been spared biopsy, while missing 5 (2%) malignancies. The best ROC area for calcifications was only 0.64 +/- 0.02. At 98% sensitivity, 50 (12%) benign calcification cases could have been spared, while missing 5 (2%) malignancies. The CBR system performed substantially better on the masses than on the calcifications.
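The decision variable described above, the fraction of malignant cases among the retrieved similar cases, is simple to state in code; the distance measure and threshold in this sketch are illustrative assumptions, not the published similarity criterion.

```python
import numpy as np

def cbr_decision_variable(query, ref_features, ref_labels, threshold):
    """Ratio of malignant similar cases to all similar cases.
    ref_labels: 1 = malignant, 0 = benign; 'similar' here means
    within `threshold` Euclidean distance in feature space."""
    dists = np.linalg.norm(ref_features - query, axis=1)
    similar = dists <= threshold
    if not similar.any():
        return 0.0  # no similar cases retrieved
    return float(ref_labels[similar].mean())
```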
Poster Session II
Search of microcalcification clusters with the CALMA CAD station
Maria Evelina Fantacci, Ubaldo Bottigli, Pasquale Delogu, et al.
CALMA (Computer Assisted Library for Mammography), a collaboration among physicists and radiologists, has collected a large database of digitized mammographic images (about 5000) and developed a CAD (computer-aided detection) system that can also be used for digitization, as an archive, and to perform statistical analysis. In this work we present the results obtained in the automatic search for microcalcification clusters. Images (18x24 cm2, digitized by a CCD linear scanner with an 85 micrometer pitch and 4096 gray levels) are fully characterized: pathological ones have a description consistent with the radiologist's diagnosis and histological data, while non-pathological ones correspond to patients with a follow-up of at least three years. The automated microcalcification cluster analysis uses a hybrid approach combining algorithms and neural networks to extract ROIs (regions of interest). These ROIs are indicated on the images, and a probability of containing a microcalcification cluster is associated with each ROI. The results of this analysis are described in terms of the ROC (receiver operating characteristic) curve, which shows the true positive fraction (sensitivity) as a function of the false positive fraction (1 - specificity) obtained by varying the threshold level of the ROI selection procedure.
Computer-aided detection of lung cancer on chest radiographs: effect of machine CAD false-positive locations on radiologists' behavior
This paper describes the effect of a computer-aided detection (CAD) system's false positive marks on observer performance when interpreting films containing lung cancer. We compared the location/no location chosen initially by the radiologists and the stability or change in location that followed the provision of the CAD information. We found a difference in radiologists' behavior that depended on whether the radiologists' initial interpretation was a true positive or a false positive detection. When the radiologist made an incorrect initial decision, that decision was less stable than when the initial decision was correct.
Computerized analysis of sonograms for the detection of breast lesions
Karen Drukker, Maryellen Lissak Giger, Karla Horsch, et al.
With a renewed interest in using non-ionizing radiation for the screening of high risk women, there is a clear role for a computerized detection aid in ultrasound. Thus, we are developing a computerized detection method for the localization of lesions on breast ultrasound images. The computerized detection scheme utilizes two methods. Firstly, a radial gradient index analysis is used to distinguish potential lesions from normal parenchyma. Secondly, an image skewness analysis is performed to identify posterior acoustic shadowing. We analyzed 400 cases (757 images) consisting of complex cysts, solid benign lesions, and malignant lesions. The detection method yielded an overall sensitivity of 95% by image, and 99% by case at a false-positive rate of 0.94 per image. In 51% of all images, only the lesion itself was detected, while in 5% of the images only the shadowing was identified. For malignant lesions these numbers were 37% and 9%, respectively. In summary, we have developed a computer detection method for lesions on ultrasound images of the breast, which may ultimately aid in breast cancer screening.
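Image skewness, used above as a cue for posterior acoustic shadowing, is the third standardized moment of the gray-level distribution; a minimal version over a region of interest follows.

```python
import numpy as np

def region_skewness(pixels):
    """Third standardized moment of the gray levels in a region;
    pronounced skew can flag shadowing behind a candidate lesion."""
    x = np.asarray(pixels, dtype=float).ravel()
    mu, sigma = x.mean(), x.std()
    if sigma == 0:
        return 0.0
    return float(np.mean(((x - mu) / sigma) ** 3))
```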
Optimal neural network architecture selection: effects on computer-aided detection of mammographic microcalcifications
We evaluated the effectiveness of an optimal convolution neural network (CNN) architecture selected by simulated annealing for improving the performance of a computer-aided diagnosis (CAD) system designed for the detection of microcalcification clusters on digitized mammograms. The performances of the CAD programs with manually and optimally selected CNNs were compared using an independent test set. This set included 472 mammograms and contained 253 biopsy-proven malignant clusters. Free-response receiver operating characteristic (FROC) analysis was used for evaluation of the detection accuracy. At a false positive (FP) rate of 0.7 per image, the film-based sensitivity was 84.6% with the optimized CNN, in comparison with 77.2% with the manually selected CNN. If clusters having images in both craniocaudal and mediolateral oblique views were analyzed together and a cluster was considered to be detected when it was detected in one or both views, at 0.7 FPs/image, the sensitivity was 93.3% with the optimized CNN and 87.0% with the manually selected CNN. This study indicates that classification of true positive and FP signals is an important step of the CAD program and that the detection accuracy of the program can be considerably improved by optimizing this step with an automated optimization algorithm.
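Simulated annealing over a discrete space of candidate architectures follows the standard accept/reject scheme sketched below; the `neighbor` and `score` callables are hypothetical placeholders for an architecture-mutation rule and a validation-performance measure, not the authors' implementation.

```python
import math
import random

def anneal(init, neighbor, score, t0=1.0, cooling=0.95, steps=200):
    """Maximize `score` by simulated annealing: always accept
    improvements, accept worse candidates with prob exp(delta/t)."""
    current, best = init, init
    t = t0
    for _ in range(steps):
        cand = neighbor(current)
        delta = score(cand) - score(current)
        if delta >= 0 or random.random() < math.exp(delta / t):
            current = cand
            if score(current) > score(best):
                best = current
        t *= cooling  # geometric cooling schedule
    return best
```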
Computerized analysis of interstitial lung diseases on chest radiographs based on lung texture, geometric-pattern features, and artificial neural networks
Takayuki Ishida, Shigehiko Katsuragawa, Katsumi Nakamura, et al.
For computerized detection of interstitial lung disease on chest radiographs, we developed three different methods: texture analysis based on the Fourier transform, geometric-pattern feature analysis, and artificial neural network (ANN) analysis of image data. With these computer-aided diagnostic methods, quantitative measures can be obtained. To improve the diagnostic accuracy, we investigated combined classification schemes using the results obtained with the three methods for distinguishing between normal and abnormal chest radiographs with interstitial opacities. The sensitivities of texture analysis, geometric analysis, and ANN analysis were 88.0 +/- 1.6%, 91.0 +/- 2.6%, and 87.5 +/- 1.9%, respectively, at a specificity of 90.0%, whereas the sensitivity of a combined classification scheme with the logical OR operation was improved to 97.1 +/- 1.5% at the same specificity of 90.0%. The combined scheme can achieve higher accuracy than the individual methods for distinguishing between normal and abnormal cases with interstitial opacities.
Improving the automated classification of clustered calcifications on mammograms through the improved detection of individual calcifications
Robert M. Nishikawa, Maria Fernanda Salfity, Yulei Jiang, et al.
We are developing a semi-automated classification scheme in which the approximate size and location of the cluster (but not the individual calcifications) and a rough estimate of the number of calcifications in the cluster are used to segment the individual calcifications. A difference of Gaussians is used to pre-process the ROI centered on the cluster. Next, global and local gray-level thresholds are applied. The threshold values are determined iteratively based on the approximate number of calcifications in the cluster and the actual number segmented. The center position of each segmented calcification is then determined. These locations are passed to the computer classifier, which determines the likelihood of malignancy for the cluster. Using this approach, 74% of individual calcifications can be detected per cluster, compared to 55% when using our cluster detection scheme that does not use the a priori information about the cluster. There was no measurable decrease in the classification scheme's performance when using the segmented calcifications from our new approach compared with manually determined calcification locations (area under the ROC curve of 0.85 versus 0.91, respectively; p = 0.2).
Wavelet and statistical analysis for melanoma classification
Amit Nimunkar, Atam P. Dhawan, Patricia A. Relue, et al.
The present work focuses on spatial/frequency analysis of epiluminescence images of dysplastic nevi and melanoma. A three-level wavelet decomposition was performed on skin-lesion images to obtain coefficients in the wavelet domain. A total of 34 features were obtained by computing ratios of the mean, variance, energy, and entropy of the wavelet coefficients, along with the mean and standard deviation of image intensity. An unpaired t-test for normally distributed features and the Wilcoxon rank-sum test for non-normally distributed features were performed to select statistically discriminating features. For our data set, the statistical analysis reduced the feature set from 34 to 5 features. For classification, discriminant functions were computed in the feature space using the Mahalanobis distance. ROC curves were generated and evaluated for false positive fractions from 0.1 to 0.4. Most of the discriminant functions provided a true positive rate for melanoma of 93% with a false positive rate of up to 21%.
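Subband statistics of this kind can be sketched with PyWavelets; the wavelet choice ('db4') is an assumption, and the exact 34-feature set of the paper is not reproduced here.

```python
import numpy as np
import pywt

def wavelet_subband_features(image, wavelet="db4", levels=3):
    """Mean, variance, energy, and entropy of every subband of a
    three-level 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    subbands = [coeffs[0]] + [b for triple in coeffs[1:] for b in triple]
    feats = []
    for band in subbands:
        a = np.abs(band).ravel() + 1e-12
        p = a / a.sum()  # normalized magnitudes for entropy
        feats += [a.mean(), a.var(), float(np.sum(a ** 2)),
                  float(-np.sum(p * np.log2(p)))]
    return np.array(feats)
```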
Detection algorithm of lung cancer candidate nodules on multislice CT images
Multi-slice helical CT technology has recently been developed. Unlike conventional helical CT, it acquires CT images of two or more slices in a single scan, yielding many thin-slice images with clear contrast. Nodules are therefore expected to be depicted more clearly, promising higher diagnostic performance in screening by expert physicians. Multi-slice CT provides high contrast resolution along the z-axis, but the number of images is roughly ten times that of single-slice helical CT, so a diagnosis support system for these images is needed. We have developed a computer-aided diagnosis (CAD) system to detect lung cancer in multi-slice CT images. With the conventional algorithm, it was difficult to detect ground-glass opacities and nodules in contact with blood vessels. The purpose of this study is to develop a detection algorithm using a 3-D filter based on an orientation map of gradient vectors and the 3-D distance transformation.
Automatic segmentation of pulmonary nodules by using dynamic 3D cross-correlation for interactive CAD systems
Li Fan, JianZhong Qian, Benjamin L. Odry, et al.
We propose in this paper a novel approach to the automatic segmentation of lung nodules in a given volume of interest (VOI) from high-resolution multi-slice CT images by dynamically initializing and adjusting a 3D template and analyzing its cross-correlation with the structure of interest. First, thresholding techniques are used to separate the background voxels. The structure of interest, comprising a nodule candidate and possibly attached vessels, is then extracted by excluding any part of the chest wall inside the VOI. Afterwards, the proposed segmentation method finds the core of the structure of interest, which corresponds to the nodule, analyzes its orientation and size, and initializes a 3D template accordingly. Next, the template gradually expands, with its cross-correlation to the original structure of interest being computed at each step. The template is then optimized based on analysis of the cross-correlation curve. A segmentation of the nodule is first roughly obtained by an 'AND' operation between the optimal template and the extracted structure, and then refined by a spatial reasoning method. Template parameters can be recorded and recalled in later diagnoses so that reproducibility and consistency can be achieved. Preliminary results show that the segmentation results are consistent, with a mean intra-scan volume measurement deviation of 2.8% for phantom data and 8.1% for real patient data.
Analysis of the feasibility of using active shape models for segmentation of gray-scale images
Active Shape Models (ASM) have been used extensively to segment images where the objects of interest show little to moderate shape variability across a training set. It is well known that the efficacy of this technique relies heavily on the quality of the training set and the initialization of the mean shape on the target image. However, little has been said about the validity of the assumptions under which the two core components of ASM, i.e. the shape model and the gray level model, are built. We explore these assumptions and test their validity with respect to both shape and gray level models. In this study, we use different training sets of real and synthetic gray scale images and investigate the reasons for their success or failure in the context of shape and gray level modeling. We show that the shape model performance is not affected by small changes in the distribution of the shapes. Furthermore, we show that a reason for segmentation failure is the lack of features in the mean profiles of gray level values that causes localization errors even under ideal conditions.
User-driven segmentation approach: interactive snakes
Tobias Kunert, Marc Heiland, Hans-Peter Meinzer
For diagnostics and therapy planning, the segmentation of medical images is an important pre-processing step. Currently, manual segmentation tools are most common in clinical routine. Because this work is very time-consuming, there is great interest in tools that assist the physician. Most known segmentation techniques suffer from an inadequate user interface, which prevents their use in a clinical environment. The segmentation of medical images is very difficult, and a promising method to overcome difficulties such as imaging artifacts is active contour models. In order to enhance clinical usability, we propose a user-driven segmentation approach. Following this approach, we developed a new segmentation method, which we call interactive snakes. To this end, we designed an interaction style that is more intuitive for the clinical user and derived a new active contour model. The segmentation method provides a very tight coupling with the user: the physician interactively attaches boundary markers to the image, thereby bringing in his or her knowledge, while the segmentation is updated in real time. Interactive snakes are a comprehensible segmentation method for clinical use, and it is reasonable to employ them both as a core tool and as an editing tool for correcting incorrect results.
Deformable isosurface and vascular applications
Peter J. Yim, G. Boudewijn Vasbinder, Vincent B. Ho, et al.
Vascular disease produces changes in lumenal shape evident in magnetic resonance angiography (MRA). However, quantification of vascular shape from MRA is problematic due to image artifacts. Prior deformable models for vascular surface reconstruction primarily resolve problems of initialization of the surface mesh. However, initialization can be obtained in a trivial manner for MRA using isosurfaces. We propose a methodology for deforming the isosurface to conform to the boundaries of objects in the image with minimal a priori assumptions of object shape. As in conventional methods, external forces attract the surface towards edges in the image. However, smoothing is produced by torsional forces that align the normals of adjacent surface triangles. The torsional forces are unbiased with regard to determination of object size. The deformable isosurface was applied to MRA of carotid and renal arteries with moderate stenosis and to a digital phantom of an artery with high-grade stenosis (6-voxel normal diameter). The reconstruction of the carotid and renal arteries from MRA was entirely consistent with expert interpretation of the MRA. The deformable isosurface determined the degree of stenosis of the digital phantom to within 10.0% accuracy. The deformable isosurface is an excellent method for analysis of vascular shape.
Dynamic deformable models for 3D MRI heart segmentation
Leonid Zhukov, Zhaosheng Bao, Igor Gusikov, et al.
Automated or semiautomated segmentation of medical images decreases interstudy variation, observer bias, and postprocessing time, as well as providing clinically relevant quantitative data. In this paper we present a new dynamic deformable modeling approach to 3D segmentation. It utilizes recently developed dynamic remeshing techniques and curvature estimation methods to produce high-quality meshes. The approach has been implemented in an interactive environment that allows a user to specify an initial model and identify key features in the data. These features act as hard constraints that the model must not pass through as it deforms. We have employed the method to perform semi-automatic segmentation of heart structures from cine MRI data.
Optimization of active-contour model parameters using genetic algorithms: segmentation of breast lesions in mammograms
Yuan Xu, Scott Neu, Chester J. Ornes, et al.
Genetic algorithms (GA's) were used to find optimal sets of parameters for an active contour model (ACM) algorithm that segments breast lesions in mammography images. These parameters, which are typically determined empirically, are used in an energy function that is minimized by the ACM algorithm when producing a segmentation contour. Using manually segmented contours supplied by experienced radiologists, GA techniques were used to vary the parameter values until the contours produced by the ACM algorithm closely matched those of the radiologists.
Rectal tumor boundary detection by unifying active contour model
Di Xiao, Wan Sing Ng, Udantha R. Abeyratne, et al.
The project on 3D reconstruction of rectal wall structure aims at developing an analysis system to help surgeons cope with large quantities of rectal ultrasound images, involving muscular layer detection, rectal tumor detection, and 3D reconstruction. In the tumor detection procedure, a traditional active contour model has difficulty finding the tumor boundary when it deforms from a seed in the interior of the tumor. In this paper, we propose a novel unified active contour model that combines image region features and image gradient features for tumor detection. The region-based component introduces a statistical method into the segmentation of the image and hence makes it less sensitive to noise. The originality of this algorithm is that we introduce a Gaussian mixture model (GMM) into the statistical description of the seed region. A GMM provides a more accurate statistical description than a single Gaussian model. A K-means algorithm and an expectation-maximization (EM) algorithm are used for optimal parameter estimation of the GMM. The experimental results show that the new model performs better at image segmentation and boundary finding than the classical active contour model.
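The GMM description of the seed region, with K-means initialization and EM refinement, can be sketched with scikit-learn; the component count here is an assumption for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_seed_gmm(seed_intensities, n_components=3):
    """Fit a Gaussian mixture to seed-region gray levels; scikit-learn
    initializes with k-means and refines the parameters with EM."""
    x = np.asarray(seed_intensities, dtype=float).reshape(-1, 1)
    return GaussianMixture(n_components=n_components,
                           init_params="kmeans").fit(x)

gmm = fit_seed_gmm(np.random.randn(500) * 10 + 100)
# gmm.score_samples(...) then gives a per-pixel log-likelihood that
# could drive the region term of the contour model.
```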
Prototype of a rectal wall ultrasound image analysis system
Di Xiao, Wan Sing Ng, Udantha R. Abeyratne, et al.
This paper presents a software system prototype for rectal wall ultrasound image processing, image display, and 3D reconstruction and visualization of the rectal wall structure, aimed at helping surgeons cope with large quantities of rectal wall ultrasound images. At the core of the image processing algorithms, a novel multi-gradient-field active contour model proposed by the authors is used to perform multi-layer boundary detection of the rectal wall. A novel unifying active contour model, which combines region information, gradient information, and the contour's internal constraints, is developed for tumor boundary detection. The region statistical information is described accurately by a Gaussian mixture model, whose parameters are estimated by the expectation-maximization algorithm. The whole system is built on the Java platform: Java Advanced Imaging (JAI) is used for 2D image display, and Java 3D for 3D reconstruction and visualization. The system prototype is currently composed of three main modules: image processing, image display, and 3D visualization.
Extraction of polygonal boundary surfaces from volume image
Lisheng Wang, Jing Bai, PhengAnn Heng, et al.
Detection and extraction of the boundary surfaces of regions of interest within a medical volume image are important research topics. In this paper, we introduce several methods to detect, extract, and approximate boundary surfaces within a volume image.
Algorithm for quantifying advanced carotid artery atherosclerosis in humans using MRI and active contours
Gareth Adams, G. Wesley Vick III, Cassius Bordelon, et al.
A new algorithm for measuring carotid artery volumes and estimating atherosclerotic plaque volumes from MRI images has been developed and validated using pressure-perfusion-fixed cadaveric carotid arteries. Our method uses an active contour algorithm with the generalized gradient vector field force as the external force to localize the boundaries of the artery on each MRI cross-section. Plaque volume is estimated by an automated algorithm based on estimating the normal wall thickness for each branch of the carotid. Triplicate volume measurements were performed by a single observer on thirty-eight pairs of cadaveric carotid arteries. The coefficient of variance (COV) was used to quantify measurement reproducibility. Aggregate volumes were computed for nine contiguous slices bounding the carotid bifurcation. The median (mean +/- SD) COV for the 76 aggregate arterial volumes was 0.93% (1.47% +/- 1.52%) for the lumen volume, 0.95% (1.06% +/- 0.67%) for the total artery volume, and 4.69% (5.39% +/- 3.97%) for the plaque volume. These results indicate that our algorithm provides repeatable measures of arterial volumes and a repeatable estimate of plaque volume of cadaveric carotid specimens through analysis of MRI images. The algorithm also significantly decreases the amount of time necessary to generate these measurements.
Automated segmentation method for the 3D ultrasound carotid image based on geometrically deformable model with automatic merge function
Stenosis of the carotid artery is the most common cause of stroke. Accurate measurement of the volume of the carotid and visualization of its shape are helpful in improving diagnosis and minimizing the variability of assessment of carotid disease. Due to the complex anatomic structure of the carotid, it is normally necessary to define initial contours in every slice, which is very difficult and usually requires tedious manual operations. The purpose of this paper is to propose an automatic segmentation method that provides the contour of the carotid from a 3-D ultrasound image with minimal user interaction. We developed a Geometrically Deformable Model (GDM) with an automatic merge function. In our algorithm, only two initial contours in the topmost slice and four parameters are needed in advance. A simulated 3-D ultrasound image was used to test the algorithm. The 3-D display of the carotid obtained by our algorithm showed an almost identical shape to the true 3-D carotid image. In addition, experimental results demonstrated that the error of the carotid volume measurement based on three different initial contours is less than 1%, and the method is very fast.
Blood pool agent contrast-enhanced MRA: level-set-based artery-vein separation
Cornelis M. van Bemmel, Luuk J. Spreeuwers, Bert Verdonck, et al.
Blood pool agents (BPAs) for contrast-enhanced magnetic resonance angiography (CE-MRA) allow prolonged imaging times for higher contrast and resolution by imaging during the steady state, when the contrast agent is distributed through the complete vascular system. However, simultaneous venous and arterial enhancement hampers interpretation. It is shown that arterial and venous segmentation in this equilibrium phase can be achieved if the central arterial axis (CAA) and central venous axis (CVA) are known. Since the CAA cannot straightforwardly be obtained from the steady-state data, images acquired during the first pass of the contrast agent can be utilized to determine the CAA with minimal user initialization. Utilizing the CAA to provide a rough arterial segmentation, the CVA can subsequently be determined from the steady-state dataset. The final segmentations of the arteries and veins are achieved by simultaneously evolving two level sets in the steady-state dataset starting from the CAA and CVA.
Vectoral-scale-based fuzzy-connected image segmentation
This paper presents an extension of previously published theory and algorithms for scale-based fuzzy connected image segmentation. In this approach, a strength of connectedness is assigned to every pair of image elements. This is done by finding the strongest among all possible connecting paths between the two elements in each pair. The strength assigned to a particular path is defined as the weakest affinity between successive pairs of elements along the path. Affinity specifies the degree to which elements hang together locally in the image. A scale is determined at every element in the image that indicates the size of the largest homogeneous region centered at the element. In determining affinity between any two elements, all elements within their scale regions are considered. This method has been effectively utilized in several medical applications. In this paper, we generalize this scale-based fuzzy connected image segmentation method from scalar images to vectorial images. In a vectorial image, scale is defined as the radius of the largest hyperball contained in the same homogeneous region under a predefined condition of homogeneity of the image vector field. Two different components of affinity, namely homogeneity-based affinity and object-feature-based affinity, are devised in a fully vectorial manner. The original relative fuzzy connectedness algorithm is utilized to delineate a specified object via a competing strategy among multiple objects. We have tested this method in several medical applications, which qualitatively demonstrate its effectiveness. Based on evaluation studies, a precision and accuracy of better than 95% have been achieved in an application involving MR brain image analysis.
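The strength-of-connectedness computation, the maximum over paths of the minimum affinity along each path, is a max-min path problem solvable with a Dijkstra-style sweep. The sketch below uses a simple scalar homogeneity affinity on a 2-D grid; the paper's scale-based, vectorial affinity is considerably richer.

```python
import heapq
import numpy as np

def fuzzy_connectedness(image, seed, sigma=10.0):
    """Connectivity map: for each pixel, the strength of the best
    path from `seed`, a path's strength being its weakest link."""
    h, w = image.shape
    conn = np.zeros((h, w))
    conn[seed] = 1.0
    heap = [(-1.0, seed)]
    while heap:
        neg, (r, c) = heapq.heappop(heap)
        if -neg < conn[r, c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rq, cq = r + dr, c + dc
            if 0 <= rq < h and 0 <= cq < w:
                diff = float(image[r, c]) - float(image[rq, cq])
                aff = np.exp(-(diff / sigma) ** 2)  # homogeneity affinity
                s = min(-neg, aff)
                if s > conn[rq, cq]:
                    conn[rq, cq] = s
                    heapq.heappush(heap, (-s, (rq, cq)))
    return conn
```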
Vessel-valley-course generation algorithm for quantitative analysis of angiograms
Three-dimensional reconstruction and quantitative analysis of angiograms require vessel centerline determination and tracking. In a vessel profile, it is more straightforward to locate the valley point (local minimum) than the center. Therefore, vessel valley courses offer advantages over centerlines in that they are natural features and easy to locate. We propose a 'star' scan technique to generate a valley map, which is then traced to determine the valley courses. The angiogram is scanned along the horizontal, vertical, diagonal, and anti-diagonal directions; because the scan pattern resembles a star, it is referred to as a 'star' scan. The scan along each direction provides an image consisting of scan profiles, which may be multi-modal functions. We then detect and record the local minimum locations, thereby generating a valley map. By searching over the valley map, we can generate valley courses, which can be used for vessel quantitative analysis and 3-D reconstruction. Using the valley course, it is a straightforward process to generate centerlines. This is a robust and easily implementable algorithm for quantitative analysis of angiograms. Experimental validation of the algorithm will be reported using coronary angiograms and phantom images.
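The valley detection along one scan direction is just a local-minimum test on each profile; a horizontal-scan sketch is shown below, with the other three directions handled analogously and the four results OR-ed into the full valley map.

```python
import numpy as np

def horizontal_valley_map(image):
    """Mark pixels that are strict local minima of their row profile."""
    v = np.zeros(image.shape, dtype=bool)
    v[:, 1:-1] = ((image[:, 1:-1] < image[:, :-2]) &
                  (image[:, 1:-1] < image[:, 2:]))
    return v
```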
Automated extraction of aorta and pulmonary artery in mediastinum from 3D chest x-ray CT images without contrast medium
Takayuki Kitasaka, Kensaku Mori, Jun-ichi Hasegawa, et al.
This paper proposes a method for automated extraction of the aorta and pulmonary artery (PA) in the mediastinum of the chest from uncontrasted chest x-ray CT images. The proposed method employs a model fitting technique to use the shape features of blood vessels for extraction. First, edge voxels are detected based on the standard deviation of CT values. A likelihood image, which shows the degree of likelihood of the medial axes of vessels, is calculated by applying the Euclidean distance transformation to non-edge voxels. Second, the medial axis of each vessel is obtained by fitting the model with reference to the likelihood image. Finally, the aorta and PA areas are recovered from the medial axes by executing the reverse Euclidean distance transformation. We applied the proposed method to seven cases of uncontrasted chest x-ray CT images and evaluated the results by calculating the coincidence index between the extracted regions and manually traced regions. Experimental results showed that the extracted aorta and PA areas coincide with the manually input regions, with coincidence index values of 90% and 80-90%, respectively.
Segmentation of burn images using the L*u*v* space and classification of their depths by color and texture information
Begona Acha Pinero, Carmen Serrano, Jose Ignacio Acha
In this paper, a burn color image segmentation and classification algorithm is proposed. The aim of the algorithm is to separate burn wounds from healthy skin, and to distinguish the different types of burns (burn depths) from one another. We use digital color photographs. The system is based on color and texture information, as these are the characteristics observed by physicians in order to give a diagnosis. We use a perceptually uniform color space (L*u*v*), since Euclidean distances calculated in this space correspond to perceptual color differences. After the burn is segmented, color and texture descriptor features are calculated and used as inputs to a Fuzzy-ARTMAP neural network. The neural network classifies them into three types of burns: superficial dermal, deep dermal, and full thickness. We obtain an average classification success rate of 88.89%.
Development of support systems for pathology using spectral transmittance: the quantification method of stain conditions
Keiko Fujii, Masahiro Yamaguchi, Nagaaki Ohyama, et al.
In pathological diagnosis, a tissue sample is usually observed under a microscope after the tissue has been stained, but the stain conditions and the characteristics of observation devices vary among hospitals, which makes objective or quantitative diagnosis more difficult. This paper proposes a method for quantifying stain conditions from microscopic images obtained as transmittance spectra, independent of the observation device. In the proposed method, the Beer-Lambert law is used to obtain the amount of each stain color pigment in an arbitrary tissue sample from the absorbance spectra of the stain color pigments. Using the transmittance spectral image, estimated from the signal of a multispectral digital camera, we can estimate the feature parameters characterizing the stain conditions. Experiments with Hematoxylin & Eosin stained tissue samples confirm the effectiveness of the proposed method.
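Under the Beer-Lambert law, absorbance is linear in the pigment amounts, so per-pixel stain quantities can be recovered by least squares given the pure stains' absorbance spectra. The sketch below uses hypothetical two-stain spectra over eight bands.

```python
import numpy as np

def stain_amounts(transmittance, pigment_absorbance):
    """transmittance: (n_bands,) spectrum of one pixel.
    pigment_absorbance: (n_bands, n_stains) pure-stain spectra.
    Beer-Lambert: -log(T) = A @ c; solve for c and clamp to >= 0."""
    absorbance = -np.log(np.clip(transmittance, 1e-6, 1.0))
    c, *_ = np.linalg.lstsq(pigment_absorbance, absorbance, rcond=None)
    return np.maximum(c, 0.0)

A = np.abs(np.random.rand(8, 2))       # hypothetical stain spectra
t = np.exp(-A @ np.array([0.7, 0.3]))  # synthetic pixel
print(stain_amounts(t, A))             # recovers ~[0.7, 0.3]
```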
Convex geometry for rapid tissue classification in MRI
Erick Wong, Craig Jones
We propose an efficient computational engine for solving linear combination problems that arise in tissue classification on dual-echo MRI data. In 2D feature space, each pure tissue class is represented by a central point, together with a circle representing a noise tolerance. A given unclassified voxel can be approximated by a linear combination of these pure tissue classes. With more than three tissue classes, multiple combinations can represent the same point, thus heuristics are employed to resolve this ambiguity. An optimised implementation is capable of classifying 1 million voxels per second into four tissue types on a 1.5GHz Pentium 4 machine. Used within a region-growing application, it is found to be at least as robust and over 10 times faster than numerical optimization and linear programming methods.
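With three pure-tissue points in 2-D feature space, the linear combination reduces to a barycentric-coordinate solve; a minimal sketch with hypothetical class centers follows (the noise-tolerance circles and the ambiguity heuristics for more than three classes are omitted).

```python
import numpy as np

def tissue_fractions(voxel, centers):
    """Solve voxel = sum_i w_i * centers[i] with sum_i w_i = 1 for
    three class centers in 2-D (dual-echo) feature space."""
    m = np.vstack([np.asarray(centers, dtype=float).T, np.ones(3)])
    b = np.append(np.asarray(voxel, dtype=float), 1.0)
    return np.linalg.solve(m, b)

centers = [(100.0, 40.0), (60.0, 90.0), (20.0, 20.0)]  # hypothetical
print(tissue_fractions((60.0, 50.0), centers))
```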
Segmentation-based retrospective correction of intensity nonuniformity in multispectral MR images
Bostjan Likar, Joze Derganc, Franjo Pernus
Intensity non-uniformity in magnetic resonance (MR) images is an adverse phenomenon, which manifests itself as slow intensity variations of the same tissue over the image domain. It may have serious implications for MR image analysis. For example, intensity non-uniformity increases the overlap between intensity distributions of distinct tissues and therefore makes segmentation more difficult and less precise. Because correction of intensity non-uniformity and segmentation are inherently related problems, we propose a novel method, which interleaves them, so that they support each other and gradually improve, until final correction and segmentation is reached. We derive a parametric non-uniformity correction model in a form of a linear combination of non-linear basis functions. The non-uniformity correction is based on iterative minimization of class square-error, i.e. within-class scatter, of intensity distribution that is due to non-uniformity. For this purpose we employ a non-parametric segmentation method presented in MI 4684-41. We consider inter-spectral independent non-uniformity effects and provide corresponding non-uniformity correction models and algebra for computing the parameters. The proposed method is tested on simulated and real, single- and multi-spectral, MR brain images. The method does not induce additional intensity variations in simulated uniform images and efficiently removes non-uniformity of simulated and real MR images and thereby improves the results of segmentation.
Computer-assisted measurement of cervical length from transvaginal ultrasound images
Min Wu, Robert F. Fraser, Chang Wen Chen
We describe an image processing algorithm that identifies the anatomic landmarks of the cervix on a transvaginal ultrasound image and determines the standard cervical length. The system is composed of four stages. The first stage is adaptive speckle suppression using a variable-length sticks algorithm. The second stage is the location of the internal cervical opening, or 'os,' using region-based segmentation. The third stage is delineation of the cervical canal. The fourth stage uses gray-level summation patterns and prior knowledge first to localize the tissue boundary of the external cervix; a template is then used to determine the specific location of the external os. The cervical length is determined and calculated to image scale. For validation, 101 cervical ultrasound images were selected from a series of 37 examinations performed on 17 patients over an 8-month period. Repeated measurements of cervical length using the computer-assisted method were compared with those of two experienced sonographers. The mean coefficient of variation for serial measurements was 1.1% for the computer-assisted method and averaged 4.7% for the manual method. In a pairwise comparison, the mean cervical length for the computer method was not different from the mean manual cervical length.
Wavelet-based segmentation for fetal ultrasound texture images
Nourhan M. Zayed, Ahmed M. Badawi, Alaa M. Elsayed, et al.
This paper introduces an efficient algorithm for segmentation of fetal ultrasound images using multiresolution analysis. The proposed algorithm decomposes the input image into a multiresolution space using the two-dimensional wavelet packet transform. The system builds a feature vector for each pixel that contains information about the gray level, moments, and other texture information. These vectors are used as inputs to the fuzzy c-means clustering method, which results in a segmented image whose regions are distinct from each other according to their texture content. An adaptive center-weighted median filter is used to enhance fetal ultrasound images before wavelet decomposition. Preliminary experiments indicate good results in image segmentation, while further studies are needed to investigate the potential of wavelet analysis and fuzzy c-means clustering as tools for detecting fetal organs in digital ultrasound images.
Segmentation of the skull in 3D human MR images using mathematical morphology
Belma Dogdas, David W. Shattuck, Richard M. Leahy
We present a new technique for segmentation of the skull in human T1-weighted magnetic resonance (MR) images that generates realistic models of the head for EEG and MEG source modeling. Our method performs skull segmentation using a sequence of mathematical morphological operations. Prior to segmenting the skull, we segment the scalp and the brain from the MR image. The scalp mask allows us to quickly exclude background voxels with intensities similar to those of the skull, while the brain mask obtained from our Brain Surface Extractor algorithm ensures that the brain does not intersect the skull segmentation. We find the inner and outer skull boundaries using thresholding and morphological closing and opening operations. We then mask the results with the scalp and brain volumes to ensure closed and non-intersecting skull boundaries. We applied our scalp and skull segmentation algorithm to several MR images and validated our method using coregistered CT-MR image data sets. We observe that our method is capable of producing scalp and skull segmentations suitable for MEG and EEG source modeling from 3D T1-weighted human MR images.
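The thresholding-plus-morphology stage might be sketched with scipy.ndimage as below; the intensity threshold and structuring-element size are assumptions, and the scalp and brain masks are taken as given rather than computed here.

```python
import numpy as np
from scipy import ndimage

def skull_mask(mri, scalp_mask, brain_mask, thresh=60, radius=2):
    """Threshold dark voxels (skull is dark on T1), clean up with
    closing then opening, and constrain the result to lie inside
    the scalp and outside the brain."""
    struct = ndimage.iterate_structure(
        ndimage.generate_binary_structure(3, 1), radius)
    raw = (mri < thresh) & scalp_mask & ~brain_mask
    closed = ndimage.binary_closing(raw, structure=struct)
    opened = ndimage.binary_opening(closed, structure=struct)
    return opened & scalp_mask & ~brain_mask
```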
Segmentation of confocal microscopic image of insect brain
Ming-Jin Wu, Chih-Yang Lin, Yu-Tai Ching
Accurate analysis of insect brain structures in digital confocal microscopic images is valuable and important for biological research. The first step is to segment meaningful structures from the images. Active contour models, known as snakes, are widely used for segmentation of medical images. A new class of active contour model, called the gradient vector flow snake, was introduced in 1998 to overcome some critical problems encountered with the traditional snake. In this paper, we use the gradient vector flow snake to segment the mushroom body and the central body from confocal microscopic insect brain images. First, an edge map is created from the images by edge filters. Second, a gradient vector flow field is calculated from the edge map using a computational diffusion process. Finally, a traditional snake deformation process runs until it reaches a stable configuration. A user interface is also provided, allowing users to edit the snake during the deformation process if desired. Using the gradient vector flow snake as the main segmentation method, assisted by the user interface, we can properly segment the confocal microscopic insect brain image in most cases. The identified mushroom body and central body can then be used as preliminary results for a 3-D reconstruction process in further biological research.
Determination of position and radius of ball joints
Marjolein van der Glas, Frans M. Vos, Charl P. Botha, et al.
For successful ball-joint replacement surgery, it is important to maintain the joint's geometric center. Pre-operative detection of this center is achieved by detecting the sphere that fits onto the articular surfaces in CT or MRI images. We have developed a novel technique to automatically determine the sub-voxel position and size of a sphere in unsegmented 3D images. The method is invariant to size and robust to noise. It only needs one fourth of a sphere to detect the center. Isotropically as well as anisotropically sampled images can be used. As no segmentation is required, it can be applied directly to clinical images.
Segmentation of the lumbar spine with knowledge-based shape models
Michael Kohnen, Andreas H. Mahnken, Alexander S. Brandt, et al.
A shape model for fully automatic segmentation and recognition of lateral lumbar spine radiographs has been developed. The shape model is able to learn the shape variations from a training dataset by a principal component analysis of the shape information. Furthermore, specific image features at each contour point are incorporated into models of gray-value profiles. These models were computed from a training dataset consisting of 25 manually segmented lumbar spines. The application of the model, containing both shape and image information, is optimized on unknown images using a simulated annealing search first, to acquire a coarse localization of the model. Then, the shape points are iteratively moved towards image structures matching the gray-value models. During optimization, the shape information of the model ensures that the segmented object boundary stays plausible. The shape model was tested on 65 unknown images, achieving a mean segmentation accuracy of 88%, measured as the percentage overlap between the resulting and manually drawn contours.
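The principal component analysis of the training shapes follows the standard point-distribution-model recipe; a minimal sketch over aligned landmark vectors is given below (the alignment step itself is assumed done).

```python
import numpy as np

def build_shape_model(shapes, var_kept=0.95):
    """shapes: (n_shapes, 2 * n_landmarks) aligned landmark vectors.
    Returns the mean shape and the eigenvectors covering `var_kept`
    of the total shape variance; new shapes = mean + modes @ b."""
    mean = shapes.mean(axis=0)
    cov = np.cov(shapes - mean, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]  # largest variance first
    vals, vecs = vals[order], vecs[:, order]
    k = int(np.searchsorted(np.cumsum(vals) / vals.sum(), var_kept)) + 1
    return mean, vecs[:, :k]

mean, modes = build_shape_model(np.random.randn(25, 40))
```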
Protocol-independent brain MRI segmentation method
Laszlo G. Nyul, Jayaram K. Udupa
We present a segmentation method that combines the robust, accurate, and efficient techniques of fuzzy connectedness with standardized MRI intensities and fast algorithms. The result is a general segmentation framework that more efficiently utilizes the user input (for recognition) and the power of the computer (for delineation). This same method has been applied to segment brain tissues from a variety of MRI protocols. Images were corrected for inhomogeneity and standardized to yield tissue-specific intensity values. All parameters for the fuzzy affinity relations were fixed for a specific input protocol. Scale-based fuzzy affinity was used to better capture fine structures. Brain tissues were segmented as 3D fuzzy-connected objects by using relative fuzzy connectedness. The user can specify seed points in about a minute, and tracking the 3D fuzzy-connected objects takes about 20 seconds per object. All other computations were performed before any user interaction took place. Segmentation of brain tissues as 3D fuzzy-connected objects from MRI data is feasible at interactive speeds. Utilizing the robust fuzzy connectedness principles and fast algorithms, it is possible to interactively select fuzzy affinity, seed point, and threshold parameters and perform efficient, precise, and accurate segmentations.
Combining snake-based and intensity-based processing for segmentation of renal structure in lower-torso CT data
We introduce a new method that allows the kidney to be extracted (i.e., segmented) from lower torso computerized tomography (CT) datasets. The method combines active contour (snake)-, intensity-, and shape-based processing to extract the kidney. Initially, a general-purpose coarse mathematical shape model of the kidney is extracted by the method. This coarse segmentation is then refined by snake-based deformation.
Adaptive RBF network with active contour coupling for multispectral MRI segmentation
A segmentation procedure using a radial basis function network (RBFN), coupled with an active contour (AC) model based on a cubic-spline formulation, is presented for the detection of the gray-white matter boundary in axial multispectral MRI (T1, T2, and PD). An RBFN classifier has been previously introduced for multispectral MRI segmentation, with good generalization at a rate of 10% misclassification over white and gray matter pixels on the validation set. The coupled RBFN and AC model system incorporates the posterior probability estimation map into the AC energy term as a restriction force. The RBFN output is also employed to provide an initial contour for the AC. Furthermore, an adaptation strategy for the network weights, guided by feedback from the contour model adjustment at each iteration, is described. In order to compare the algorithm's performance, segmentations using both the adaptive and the non-adaptive schemes were computed. The major differences were observed around deep cortical convolutions, where the result of the adaptive process is superior to that obtained with the non-adaptive scheme, even under moderate noise conditions. In summary, the RBFN provides a good initial contour for the AC, the coupling of both processes keeps the final contour within the desired region, and the adaptive strategy improves the contour location.
Information fusion approach for detection of brain structures in MRI
This paper presents an information fusion approach for automatic detection of mid-brain nuclei (caudate, putamen, globus pallidus, and thalamus) from MRI. The method fuses anatomical information, obtained from brain atlases and expert physicians, with MRI numerical information within a fuzzy framework employed to model the intrinsic uncertainty of the problem. The first step is segmentation of the brain tissues (gray matter, white matter, and cerebrospinal fluid). Physical landmarks such as the inter-hemispheric plane, together with numerical information from the segmentation step, are then used to describe the nuclei. Each nucleus is given a unique description in terms of physical and anatomical landmarks, most of which are previously detected nuclei; a nucleus detected in slice n also serves as a key landmark for detecting the same nucleus in slice n+1. These steps construct fuzzy decision maps, and the overall decision is made by fusing all of the individual decisions with a fusion operator. The approach has been implemented to detect the caudate, putamen, and thalamus from a sequence of axial T1-weighted brain MRIs. Our experience shows that the final detection results depend strongly on the primary tissue segmentation. The method is validated by comparing the resulting nuclei volumes with those obtained by manual segmentation performed by expert physicians.
Robust semi-automatic segmentation of single- and multichannel MRI volumes through adaptable class-specific representation
Casper F. Nielsen, Peter J. Passmore
Segmentation of MRI volumes is complicated by noise, inhomogeneity, and partial volume artefacts. Fully or semi-automatic methods often require time-consuming or unintuitive initialization. Adaptable Class-Specific Representation (ACSR) is a semi-automatic segmentation framework implemented by the Path Growing Algorithm (PGA), which reduces artefacts near segment boundaries. The user visually defines the desired segment classes by selecting class templates, and the subsequent segmentation process is fully automatic. Good results have previously been achieved with color cryo-section segmentation, and ACSR has since been developed further for the MRI modality. In this paper we present two optimizations for robust ACSR segmentation of MRI volumes: automatic template creation based on an initial segmentation step using Learning Vector Quantization, for higher robustness to noise, and inhomogeneity correction as a pre-processing step, comparing the EQ and N3 algorithms. Results are presented for simulated T1-weighted and multispectral (T1 and T2) MRI data from the BrainWeb database and for real data from the Internet Brain Segmentation Repository. We show that ACSR segmentation compares favorably to previously published results on the same volumes, and we discuss the pros and cons of quantitative ground-truth evaluation versus qualitative visual assessment.
Segmentation of anatomical structures in x-ray computed tomography images using artificial neural networks
Hierarchies of artificial neural networks (ANNs) were trained to segment regularly shaped, consistently located anatomical structures in x-ray computed tomography (CT) images. These networks learned to associate a point in an image with the anatomical structure containing it, using the image pixel intensity values sampled in a pattern around the point. A single-layer ANN as well as bilayer and multi-layer hierarchies of neural networks were developed and evaluated. The hierarchical artificial neural networks (HANNs) consisted of a high-level ANN that identified large-scale anatomical structures (e.g., the head or chest), whose result was passed to a group of networks that identified smaller structures (e.g., the brain, sinus, soft tissue, skull, bone, or lung) within the large-scale structures. The ANNs were trained to segment and classify images under varying numbers of training images, numbers of sampling points per image, pixel intensity sampling patterns, and hidden-layer configurations. The experimental results indicate that a multi-layer hierarchy of ANNs trained with data collected from multiple image series accurately classified anatomical structures in unknown chest and head CT images.
Fast vessel identification using polyphase decomposition and intercomponent processing
Polyphase decomposition is a downsampling operation that produces a set of low-resolution representations of an image. These representations differ from one another only by a phase shift in the frequency domain and are hence called polyphase components. An inter-component processing operation extracts meaningful features by performing simple logical operations over selected components. This strategy is applied to angiographic analysis to develop a fast, feature-oriented vessel identification technique consisting of polyphase decomposition of a binary image followed by inter-component processing. The inter-component processing among selected components produces a feature map in which a non-zero pixel indicates the occurrence of a vessel geometrical feature or pattern in the original image. Using feature templates, a sequence of vessel-feature maps is generated. Fast vessel identification is performed by fusing the feature maps and displaying them according to the order of emergence of vessel geometric features such as position, diameter, length, and direction. Collective display provides a way to visualize vessel features across multiple resolutions. The high-speed performance is attributed to the low-resolution representation of the polyphase components and the simple data manipulation of inter-component processing. The tradeoff of this technique is measurement uncertainty arising from the inherent translations in polyphase decomposition; accurate vessel measurements therefore require refinement in the original image.
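A minimal sketch of the two ingredients, assuming a 2x2 decomposition of a binary vessel image named `binary_img`; the single logical-AND operation shown stands in for the authors' feature templates, which are not reproduced here.

```python
import numpy as np

def polyphase_components(binary_img):
    """Split an image into four half-resolution phase components."""
    return [binary_img[i::2, j::2] for i in range(2) for j in range(2)]

def intercomponent_and(components):
    """Logical AND across components: a non-zero output pixel marks a
    location where the structure survives every sampling phase, i.e. it
    is at least two pixels wide in the original image."""
    h = min(c.shape[0] for c in components)
    w = min(c.shape[1] for c in components)
    out = np.ones((h, w), dtype=bool)
    for c in components:
        out &= c[:h, :w].astype(bool)
    return out
```

Other logical combinations (OR, XOR over different component subsets) would flag different geometric patterns, which is the role the feature templates play.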
Object-tracking segmentation method: vertebra and rib segmentation in CT images
Dongsung Kim, Hanyoung Kim, Heung Sik Kang
This paper proposes a method that segments an object in 3D image slices by tracking not only the target objects but also adjacent non-target objects. The tracking methodology enables the method to detect and prevent leakage, one of the most challenging problems in medical image segmentation. The method also provides an efficient way to modify segmentation results by flagging only those image slices suspected of leakage. The proposed method is applied to segmenting vertebrae in angiographic image slices in which blood vessels are enhanced with contrast media, and it produces promising results in terms of both accuracy and efficiency.
Automatic detection method of lung cancers including ground-glass opacities from chest x-ray CT images
Toshiharu Ezoe, Hotaka Takizawa, Shinji Yamamoto, et al.
In this paper, we describe an algorithm for the automatic detection of ground-glass opacities (GGO) in X-ray CT images. First, pathological shadow candidates are extracted by our variable N-Quoit filter, a kind of mathematical morphology filter. Next, the shadow candidates are classified into several classes using feature values calculated from them. Finally, discriminant functions separate the candidates into normal and abnormal shadows. The method was evaluated on 38 chest CT samples (including GGO shadows) and proved very effective.
3D image analysis of abdominal aortic aneurysm
Marko Subasic, Sven Loncaric, Erich Sorantin
This paper presents a method for 3-D segmentation of abdominal aortic aneurysms from computed tomography angiography (CTA) images. The proposed method is automatic and requires minimal user assistance. Segmentation is performed in two steps: the inner aortic border is segmented first, followed by the outer border. The two steps differ because the image conditions at the two aortic borders differ. Together, the two segmentations yield a complete 3-D model of the abdominal aorta, which is used for measurements of the aneurysm area. The deformable model is implemented with the level-set algorithm because of its ability to describe, in a natural manner, the complex shapes that frequently occur in pathology. For the outer aortic boundary, we introduced knowledge-based preprocessing to enhance and reconstruct the low-contrast boundary. The method has been implemented in the IDL and C languages. Experiments performed on real patient CTA images have shown good results.
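For concreteness, a generic 2-D level-set evolution step of the kind such segmentations rely on is sketched below (a speed term plus curvature regularization). This is a textbook formulation, not the authors' implementation; `speed`, `dt`, and `nu` are illustrative.

```python
import numpy as np

def level_set_step(phi, speed, dt=0.1, nu=0.2):
    """One explicit update: phi += dt * (speed + nu * curvature) * |grad phi|."""
    gy, gx = np.gradient(phi)                      # derivatives along axes 0, 1
    mag = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
    # Mean curvature kappa = div(grad phi / |grad phi|).
    dyy, _ = np.gradient(gy / mag)
    _, dxx = np.gradient(gx / mag)
    kappa = dxx + dyy
    return phi + dt * (speed + nu * kappa) * mag
```

The zero level set of `phi` traces the evolving aortic border; the curvature term is what lets the contour settle naturally onto complex pathological shapes.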
Graph-based region growing for mass-segmentation in digital mammography
Yong Chu, Lihua Li, Robert A. Clark M.D.
Mass segmentation is a vital step in CAD mass detection and classification. A challenge for mass segmentation in mammograms is that masses may be in contact with surrounding tissues of similar intensity. In this paper, a novel graph-based algorithm is proposed to segment masses in mammograms. In the proposed algorithm, the region-growing procedure is represented as a growing tree whose root is the selected seed. Active leaves (those still able to grow) in the connection area between adjacent regions are deleted to stop growth, separating the adjacent regions while preserving the spiculation of masses, a primary sign of malignancy. The new constrained segmentation was tested on 20 cases from the USF Moffitt mammography database against a conventional region-growing algorithm. The segmented mass regions were evaluated in terms of their overlap with annotations made by the radiologist. We found that the new graph-based segmentation matches radiologists' outlines of these masses more closely.
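The tree view of region growing can be made concrete with a plain breadth-first grower: each popped pixel is a tree node whose admissible neighbors become its children (the active leaves). The sketch below assumes a grayscale image `img`, a seed pixel, and an intensity tolerance `tol`; the leaf-pruning rule that separates adjacent regions in the paper is not reproduced.

```python
from collections import deque
import numpy as np

def grow_region(img, seed, tol):
    """Grow a region from `seed`, admitting 4-neighbors whose intensity
    is within `tol` of the seed intensity."""
    region = np.zeros(img.shape, dtype=bool)
    ref = float(img[seed])
    frontier = deque([seed])          # the tree's active leaves
    region[seed] = True
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                    and not region[nr, nc]
                    and abs(float(img[nr, nc]) - ref) <= tol):
                region[nr, nc] = True
                frontier.append((nr, nc))
    return region
```

In the paper's constrained variant, leaves lying in the connection area between two regions would be removed from `frontier` before they can grow, which is what halts merging while leaving spiculated branches intact.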
Tumor volume measurement for nasopharyngeal carcinoma using knowledge-based fuzzy clustering MRI segmentation
Jiayin Zhou, Tuan-Kay Lim, Vincent Chong
A knowledge-based fuzzy clustering (KBFC) MRI segmentation algorithm is proposed to obtain accurate tumor segmentation for tumor volume measurement of nasopharyngeal carcinoma (NPC). An initial segmentation is performed on T1 and contrast-enhanced T1 MR images using a semi-supervised fuzzy c-means (SFCM) algorithm. Three types of anatomical and spatial knowledge (symmetry, connectivity, and cluster centers) are then used in the image analysis that produces the final tumor segmentation. After segmentation, the tumor volume is obtained by a multi-planimetry method. Visual and quantitative validations were performed on a phantom model and on six data volumes of NPC patients, compared against ground truth (GT) and against results acquired using seed growing (SG). Visually, KBFC produced better tumor segmentations than SG. Quantitatively, on the phantom model the matching percent (MP) / correspondence ratio (CR) was 94.1-96.4% / 0.888-0.925 for KBFC and 94.1-96.0% / 0.884-0.918 for SG, while on the patient data volumes it was 92.1 +/- 2.6% / 0.884 +/- 0.014 for KBFC and 87.4 +/- 4.3% / 0.843 +/- 0.041 for SG. In tumor volume measurement, the error on the phantom model was 4.2-5.0% for KBFC and 4.8-6.1% for SG, while on the patient data volumes it was 6.6 +/- 3.5% for KBFC and 8.8 +/- 5.4% for SG. Based on these results, KBFC can provide high-quality MRI tumor segmentation for tumor volume measurement of NPC.
Multiresolution segmentation technique for spine MRI images
Haiyun Li, Chye Hwang Yan, Sim Heng Ong, et al.
In this paper, we describe a hybrid method for segmentation of spinal magnetic resonance images, developed by analogy with the natural phenomenon of stones appearing as water recedes. The candidate segmentation regions correspond to the stones, with characteristics such as intensity extrema, edges, intensity ridges, and gray-level blobs. The segmentation method combines wavelet multiresolution decomposition and fuzzy clustering. First, thresholding is performed dynamically according to local characteristics to detect possible target areas. We then use fuzzy c-means clustering in concert with wavelet multiscale edge detection to identify the maximum-likelihood anatomical and functional target areas. Fuzzy c-means iteratively optimizes an objective function based on a weighted similarity measure between the pixels in the image and each of c cluster centers; local extrema of this objective function indicate an optimal clustering of the input data. The multiscale edges can be detected and characterized from local maxima of the modulus of the wavelet transform, while noise can be reduced to some extent by applying thresholds. The method provides an efficient and robust algorithm for spinal image segmentation. Examples demonstrate the efficiency of the technique on spinal MRI images.
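The fuzzy c-means iteration mentioned above admits a compact sketch over pixel intensities. Here `pixels` is an assumed 1-D intensity array, and the alternating updates minimize the weighted objective J = sum_i sum_k u_ik^m (x_k - v_i)^2.

```python
import numpy as np

def fcm(pixels, c=3, m=2.0, n_iter=50):
    """Fuzzy c-means on 1-D intensities: alternate membership and
    center updates to minimize the weighted within-cluster distance."""
    rng = np.random.default_rng(0)
    v = rng.choice(pixels, size=c).astype(float)       # initial centers
    for _ in range(n_iter):
        d = np.abs(pixels[None, :] - v[:, None]) + 1e-9  # (c, K) distances
        u = 1.0 / (d ** (2.0 / (m - 1.0)))               # unnormalized memberships
        u /= u.sum(axis=0)                               # columns sum to 1
        v = (u ** m @ pixels) / (u ** m).sum(axis=1)     # weighted new centers
    return u, v
```

A full implementation would cluster feature vectors (e.g., intensity plus wavelet responses) rather than raw intensities, but the update structure is the same.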
Segmentation of ECG-gated multidetector row-CT cardiac images for functional analysis
Multidetector-row CT (MDCT) gated with the ECG trace allows continuous image acquisition of the heart during a breath-hold with high spatial and temporal resolution. Dynamic segmentation and display of CT images, especially in short- and long-axis views, is important for functional analysis of cardiac morphology. Dynamic MDCT cardiac data sets, however, are typically very large, involving several hundred CT images, so manual analysis can be time-consuming and tedious. In this paper, an automatic scheme is proposed to segment and reorient the left-ventricular images in MDCT. Two segmentation techniques, a deformable-model method and a region-growing method, were developed and tested. The contour of the ventricular cavity was segmented iteratively from a set of initial coarse boundary points placed on a transaxial CT image and was propagated to adjacent CT images. Segmented transaxial MDCT images from the diastolic cardiac phase were reoriented along the long and short axes of the left ventricle. The axes were estimated by calculating the principal components of the ventricular boundary points and then confirmed or adjusted by an operator. The reorientation of the coordinates was applied to the other transaxial MDCT image sets reconstructed at different cardiac phases. The estimated short axes of the left ventricle were in close agreement with a radiologist's qualitative assessment. Preliminary results are promising, with a considerable reduction in analysis time and manual operations.
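Estimating the ventricular axes from boundary points by principal components reduces to an eigen-decomposition of the point covariance. A minimal sketch, assuming `points` is an (N, 3) array of segmented boundary coordinates:

```python
import numpy as np

def principal_axes(points):
    """Return the principal axes of a 3-D point cloud, sorted by
    decreasing variance; column 0 approximates the long axis and
    columns 1-2 span the short-axis plane."""
    centered = points - points.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order]
```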
IVUS image segmentation based on contrast
Hui Zhu, Yun Liang, Morton H. Friedman
In this paper, we present a new method for segmenting the walls of coronary arteries in IVUS (intravascular ultrasound) images based on a deformable model that integrates both edge and region information. The image is assumed to comprise three regions (lumen; vessel wall; and adventitia plus surroundings) separated by two closed contours, the inner and outer boundaries. Our method has two steps: first, the outer vessel-wall boundary is detected by minimizing an energy function of the contrast along it; second, the inner vessel wall is located by minimizing another energy function that considers the different gray-level distributions of the lumen and the vessel wall, as well as the contrast along the edge between these two regions. Dynamic programming is adopted to implement the method. Experimental results show that contrast information is a good feature for boundary detection in IVUS images.
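One common way to apply dynamic programming to a closed vessel boundary is to resample the image into polar coordinates and pick one radius per angle. The sketch below assumes an (n_angles, n_radii) array `cost` in which low values mean strong contrast; the polar resampling and the specific energy terms of the paper are not reproduced.

```python
import numpy as np

def dp_boundary(cost, max_jump=2):
    """Cheapest radius-per-angle path with a smoothness constraint:
    the radius may change by at most `max_jump` between angles."""
    n_ang, n_rad = cost.shape
    acc = cost.copy()                       # accumulated path cost
    back = np.zeros((n_ang, n_rad), dtype=int)
    for a in range(1, n_ang):
        for r in range(n_rad):
            lo, hi = max(0, r - max_jump), min(n_rad, r + max_jump + 1)
            prev = int(np.argmin(acc[a - 1, lo:hi])) + lo
            back[a, r] = prev
            acc[a, r] += acc[a - 1, prev]
    path = np.zeros(n_ang, dtype=int)       # trace back the cheapest path
    path[-1] = int(np.argmin(acc[-1]))
    for a in range(n_ang - 1, 0, -1):
        path[a - 1] = back[a, path[a]]
    return path
```

A closed-contour variant would additionally constrain the first and last radii to agree.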
Semi-automatic volumetrics system to parcellate ROI on neocortex
Ou Tan, Tetsuya Ichimiya, Fumihiko Yasuno, et al.
A template-based, semi-automatic volumetrics system, BrainVol, is built to divide any given patient brain into neocortical and subcortical regions. The standard regions are given as standard ROIs drawn on a standard brain volume. After normalization between the standard MR image and the patient MR image, the subcortical ROI boundaries are refined based on gray matter. The neocortical ROIs are refined using sulcus information that is semi-automatically marked on the patient brain. The segmentation is then applied to a 4D PET image of the same patient, via co-registration between MR and PET, to calculate time-activity curves (TACs).
Unsupervised MRI segmentation with spatial connectivity
Magnetic resonance imaging (MRI) offers a wealth of information for medical examination, and fast, accurate, reproducible segmentation of MRI is desirable in many applications. We have developed a new unsupervised MRI segmentation method based on the k-means and fuzzy c-means (FCM) algorithms that incorporates spatial constraints through a Markov random field model. Segmentation results obtained with a four-neighbor Markov random field model applied to multispectral MRI (five images: one T1-weighted, one proton-density, and three T2-weighted) at different noise levels are compared with the results of the standard k-means and FCM algorithms. The comparison shows that the proposed method outperforms the previous methods.
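A minimal sketch of how a four-neighbor spatial prior can be folded into k-means labeling, using an ICM-style relabeling pass; this is one simple realization of the idea, not the authors' formulation, and `img`, `beta`, and the wrap-around neighbor handling via np.roll are illustrative simplifications.

```python
import numpy as np

def kmeans_mrf(img, k=3, beta=1.0, n_iter=10):
    """Label a 2-D image with k classes, penalizing labels that
    disagree with their four neighbors (MRF smoothness term)."""
    flat = img.ravel().astype(float)
    centers = np.linspace(flat.min(), flat.max(), k)
    labels = np.abs(flat[:, None] - centers[None, :]).argmin(axis=1)
    labels = labels.reshape(img.shape)
    for _ in range(n_iter):
        # Data term: squared distance of each pixel to each class center.
        data = (img[..., None].astype(float) - centers) ** 2
        # Spatial term: number of four-neighbors disagreeing with each class.
        disagree = np.zeros_like(data)
        for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
            rolled = np.roll(labels, shift, axis=axis)
            disagree += (rolled[..., None] != np.arange(k)).astype(float)
        labels = (data + beta * disagree).argmin(axis=-1)
        for i in range(k):                    # re-estimate class centers
            if np.any(labels == i):
                centers[i] = img[labels == i].mean()
    return labels
```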
Robust pointwise tracking of contours
For several applications in medicine, it is fundamental to track, over time, the spatial position of parts of structures such as the apex of the heart. The objective of this work is to present a robust technique for pointwise tracking of contours. Given two n-dimensional closed contours (S, D) derived from two consecutive image scenes, the basic idea is to find the cheapest path connecting each point in S to a point in D. The critical steps are the definition of the cost function and the numerical approach to the global discrete minimization. For the cost function, we use the minimum distance to the contours and the curvature of a level-set function. The global discrete optimization is achieved using dynamic programming on a constrained region with privileged tracks. A privileged path is associated with every element of the destination contour D, and a point of this contour is set as the seed for the dynamic programming; by this artifice, all paths can be obtained. Simulations with several polygons showed encouraging results. For instance, with 10 prominent points, a distortion of 15 degrees, and 20% expansion in a 400 x 400 pixel image, the mean distance error was below 2 pixels.
Unsupervised brain segmentation using T2 window
Measurement of brain structures can provide important diagnostic information and can indicate the success or failure of a pharmaceutical treatment. We have developed a fully unsupervised technique that segments and quantifies brain structures from T2 dual-echo MR images. The technique classifies four different tissue clusters in a scatter plot (air, CSF, brain, and face). Several novel image-processing techniques were implemented to reduce the spread of these clusters and subsequently generate tissue-based T2 windows. These T2 windows encompass all the information needed to segment and then quantify the corresponding tissues automatically. We applied the technique to nineteen MR data sets (16 normal subjects and 3 Alzheimer's disease [AD] patients). The measurements from the T2 window technique differentiated AD patients from normal subjects: the mean %CSF of the total brain was 29.2% higher for AD patients than for normal subjects. Furthermore, the technique ran in under 30 seconds per data set on a PC with dual 550 MHz processors.
Automated estimation of breast composition from MR images
We present a simple algorithm for determining the fat fraction in magnetic resonance images of the breast. The computed values are intended to help train neural networks that determine breast composition from x-ray mammograms. The method relies on simple intensity thresholding to form a binary mask, followed by morphological dilations and erosions, automated region selection, and clustering of the tissues within the mask into fat and parenchymal components. Correcting the intensity nonuniformity due to the spatial sensitivity profile of the breast coil was found to be essential and was easily accomplished with homomorphic filtering. In the absence of large artifacts, the algorithm was able to accurately calculate breast fat fractions.
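The threshold-plus-morphology pipeline can be sketched as follows, assuming a 2-D image `img` and a mask threshold; the automated region selection is omitted and the two-component clustering is reduced to a 1-D two-means split, so this is a simplification of the paper's method, not a reproduction of it.

```python
import numpy as np
from scipy import ndimage

def fat_fraction(img, mask_thresh):
    """Fraction of voxels inside the breast mask classified as fat."""
    mask = img > mask_thresh                       # crude binary breast mask
    mask = ndimage.binary_dilation(mask, iterations=2)
    mask = ndimage.binary_erosion(mask, iterations=2)
    vals = img[mask].astype(float)
    # Two-cluster split of tissue intensities (fat is bright on T1-weighted MR).
    cut = vals.mean()
    for _ in range(20):                            # simple 1-D two-means
        hi, lo = vals[vals > cut], vals[vals <= cut]
        if hi.size == 0 or lo.size == 0:
            break
        cut = 0.5 * (hi.mean() + lo.mean())
    return float((vals > cut).mean())
```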
Image segmentation and 3D visualization for MRI mammography
Lihua Li, Yong Chu, Angela F. Salem, et al.
MRI mammography has a number of advantages, including the tomographic, and therefore three-dimensional (3-D), nature of the images; it can be applied to breasts with dense tissue, post-operative scarring, and silicone implants. However, given the vast quantity of images and the subtlety of differences between MR sequences, reliable computer-aided diagnosis is needed to reduce the radiologist's workload. The purpose of this work was to develop automatic breast and glandular-tissue segmentation and visualization algorithms to aid physicians in detecting and observing abnormalities in the breast. Two segmentation algorithms were developed: one for breast segmentation and the other for glandular tissue segmentation. In breast segmentation, the MRI image is first segmented using an adaptive growing clustering method; two tracing algorithms then refine the breast-air and chest-wall boundaries. Glandular tissue segmentation is performed using an adaptive thresholding method in which the threshold value is spatially adapted using a sliding window. The 3-D visualization of the segmented 2-D slices of MRI mammography was implemented in the IDL environment, with rendering, slicing, and animation of the breast and glandular tissue.
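The sliding-window adaptive threshold admits a one-line realization if the window statistic is a local mean; the uniform filter below approximates the sliding window, and `window` and `offset` are illustrative parameters rather than the paper's settings.

```python
import numpy as np
from scipy import ndimage

def adaptive_threshold(img, window=31, offset=0.0):
    """Mark pixels brighter than their local-window mean plus an offset."""
    local_mean = ndimage.uniform_filter(img.astype(float), size=window)
    return img > (local_mean + offset)     # glandular-tissue candidates
```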
Mean shift detection using active learning in dermatological images
Gabriela Maletti, Bjarne Kjaer Ersboll, Knut Conradsen
A scheme for detecting heterogeneous regions in dermatological images with malignant melanoma is proposed. The scheme works without requiring any parameter settings. The mean shift detection problem is divided into two stages: window-size optimization and detection. In the first stage, the maximum circular neighborhood centered on each pixel in which all elements belong to the same class as the central pixel is estimated using redundant data sets generated with overlapping groups. Statistics are computed from all these neighborhoods and associated with the respective central pixels. As expected, larger values of a minimizing energy function are assigned to pixels belonging to heterogeneous regions. In the second stage, those regions are detected by first applying an expectation-maximization algorithm and then automatically defining a threshold between homogeneous and heterogeneous regions. The scheme is tested on a set of synthetic images, and results are shown for both synthetic and real images. Extensions of the scheme to textural cases are also shown.
Method for quantitative assessment of atherosclerotic lesion burden on the basis of high-resolution black-blood MRI
Jeffrey Duda, Hee Kwon Song, Ronald Wolf M.D., et al.
The aim of this work was to develop a reliable semi-automatic method for quantifying carotid atherosclerotic lesion burden using black-blood high-resolution MR images. Vessel wall volume was quantified by measuring its cross-sectional area in adjacent slices. Two methods for obtaining this measure are presented. The first approximates the outer boundary of the vessel on a slice-by-slice basis by fitting an ellipse to user-identified points and automatically identifying the lumen through examination of the histogram obtained from a local region of interest (ROI). The second identifies the lumen and wall throughout the entire volume based on user-selected points in a single slice: radially directed intensity profiles are examined to automatically locate points on the outer boundary, and the same histogram-based method is used for lumen delineation. The measure of wall area provided by manual outer-boundary selection has an intra-class correlation coefficient (ICC) of 0.83 for test-retest comparisons, but the ICC values for the inter-observer comparisons (0.84, 0.65) suggest that user bias remains a potential source of error. Susceptibility to low image signal-to-noise ratio (SNR) may limit the usefulness of the automated outer-boundary selection method on whole image volumes.
Comparison of biomechanical breast models: a case study
Christine Tanner, Andreas Degenhard, Julia Anne Schnabel, et al.
We present initial results from evaluating the accuracy with which biomechanical breast models (BBMs) based on finite element methods can predict the displacement of tissue within the breast. We investigate the influence of different tissue elasticity values, Poisson's ratios, boundary conditions, finite element solvers, and mesh resolutions on one data set. MR images were acquired before and after gently compressing a volunteer's breast, and were aligned using a 3D non-rigid registration algorithm. The boundary conditions were derived either from the result of the non-rigid registration or by assuming no patient motion at the deep or medial side. Three linear and two non-linear elastic material models were tested. The accuracy of the BBMs was assessed by the Euclidean distance between twelve corresponding anatomical landmarks. Overall, none of the tested material models was clearly superior to the others over the set of investigated values. A major increase in average error was noted for partially inaccurate boundary conditions at high Poisson's ratios, owing to the introduced volume change; maximal errors remained high even at low Poisson's ratios, owing to the landmarks' closeness to the inaccurate boundary conditions. The choice of finite element solver or mesh resolution had almost no effect on the outcome.
From plastic to gold: a unified classification scheme for reference standards in medical image processing
Reliable evaluation of medical image processing is of major importance for routine applications. Nonetheless, evaluation is often omitted or methodically defective when novel approaches or algorithms are introduced. Adopting terminology from medical diagnosis, we define the following criteria for classifying reference standards: 1. Reliance, if the generation or capture of test images for evaluation follows an exactly determined and reproducible protocol. 2. Equivalence, if the image material or relationships considered within an algorithmic reference standard equal real-life data with respect to structure, noise, and other parameters of importance. 3. Independence, if the reference standard relies on a different procedure than the one being evaluated, or on images or image modalities other than those used routinely; this criterion bans the use of the same image for both the training and the test phase. 4. Relevance, if the algorithm to be evaluated is self-reproducible; if random parameters or optimization strategies are applied, the reliability of the algorithm must be shown before the reference standard is applied for evaluation. 5. Significance, if the number of reference-standard images used for evaluation is sufficiently large to enable statistically founded analysis. We require that a true gold standard satisfy Criteria 1 to 3. A standard satisfying only two criteria, i.e., Criteria 1 and 2 or Criteria 1 and 3, is referred to as a silver standard; all other standards are termed plastic. Before exhaustive evaluation based on gold or silver standards is performed, its relevance must be shown (Criterion 4) and sufficient tests must be carried out to ground the statistical analysis (Criterion 5). In this paper, examples are given for each class of reference standard.
Automated quantification of MS lesions in MRI: a validation study
Edward A. Ashton, Chihiro Takahashi, Michel J. Berg, et al.
Two novel methods for automated quantification of total lesion burden in multiple sclerosis patients using multi-spectral magnetic resonance (MR) imaging are examined. The first method, geometrically constrained region growth, requires user specification of lesion location. The second, directed multi-spectral segmentation, requires only the location of a single exemplar lesion. The performances of these methods are compared to manual tracing using three parameters: speed, precision, and accuracy. Both methods are shown to provide significant improvement over manual tracing in terms of processing time, inter- and intra-operator coefficients of variation, and global accuracy using both phantoms and clinical data.
Difficulties of T1 brain MRI segmentation techniques
M. Stella Atkins, Kevin Siu, Benjamin Law, et al.
This paper examines the difficulties that can confound published T1-weighted magnetic resonance imaging (MRI) brain segmentation methods and compares their strengths and weaknesses. Using data from the Internet Brain Segmentation Repository (IBSR) as a gold standard, we ran three different segmentation methods with and without correcting for intensity inhomogeneity. We then calculated the similarity index between the brain masks produced by the segmentation methods and the mask provided by the IBSR. The intensity histograms under the segmented masks were also analyzed to determine whether a bi-Gaussian model could be fit to T1 brain data. Contrary to our initial expectations, we found that intensity-based T1-weighted segmentation methods were comparable, or even superior, to methods utilizing spatial information. All methods appear to have parameters that need adjustment depending on the data set used. Furthermore, the intensity inhomogeneity corrections we tested did not improve the segmentations, owing to the nature of the IBSR data set.
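The similarity index used in such mask comparisons is commonly the Dice coefficient; a minimal sketch over two boolean brain masks, assuming the paper's index takes this standard form:

```python
import numpy as np

def similarity_index(mask_a, mask_b):
    """Dice coefficient: 2|A & B| / (|A| + |B|); 1.0 for identical masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())
```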
3D visualization for medical volume segmentation validation
Ayman M. Eldeib
This paper presents a 3-D visualization tool that manipulates and/or enhances, through user input, the segmented targets and other organs. The tool creates a precise and realistic 3-D model from a CT/MR data set for manipulation in 3-D, permitting the physician or planner to look through, around, and inside the various structures. It is designed to assist and evaluate the segmentation process: it can control the transparency of each 3-D object, and it displays in one view a 2-D slice (axial, coronal, and/or sagittal) within a 3-D model of the segmented tumor or structures. This helps the radiotherapist or operator evaluate the adequacy of the generated target compared with the original 2-D slices. The graphical interface enables the operator to easily select a specific 2-D slice of the 3-D volume data set, and to manually override and adjust the automated segmentation results. After correction, the operator can view the 3-D model again and iterate until a satisfactory segmentation is obtained. The novelty of this work lies in using state-of-the-art image processing and 3-D visualization techniques to facilitate validation of medical volume segmentation and to ensure the accuracy of volume measurements of the structure of interest.