Proceedings Volume 1808

Visualization in Biomedical Computing '92

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 22 September 1992
Contents: 11 Sessions, 63 Papers, 0 Presentations
Conference: Visualization in Biomedical Computing 1992
Volume Number: 1808

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
Sessions
  • Volume Segmentation
  • Feature Analysis
  • Multimodality Registration
  • Processing and Transforms
  • Rendering and Interpretation
  • Visualization Tools
  • Visualization Systems
  • Surgery and Treatment Planning
  • Diagnosis and Interpretation
  • Biology and Function
  • Tutorials
Volume Segmentation
Model-based segmentation of individual brain structures from MRI data
D. Louis Collins, Terence M. Peters, Weiqian Dai, et al.
This paper proposes a methodology that enables an arbitrary 3-D MRI brain image-volume to be automatically segmented and classified into neuro-anatomical components using multiresolution registration and matching with a novel volumetric brain structure model (VBSM). This model contains both raster and geometric data. The raster component comprises the mean MRI volume after a set of individual volumes of normal volunteers have been transformed to a standardized brain-based coordinate space. The geometric data consists of polyhedral objects representing anatomically important structures such as cortical gyri and deep gray matter nuclei. The method consists of iteratively registering the data set to be segmented to the VBSM using deformations based on local image correlation. This segmentation process is performed hierarchically in scale-space. Each step in decreasing levels of scale refines the fit of the previous step and provides input to the next. Results from phantom and real MR data are presented.
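To make the coarse-to-fine matching idea concrete, here is a minimal Python sketch (not the authors' VBSM pipeline): it aligns a data volume to a model volume by maximizing image correlation over translations at successive resolutions, whereas the paper drives full local deformations from local correlation. Function and parameter names are illustrative, and the model and data volumes are assumed to share the same shape.

import numpy as np
from scipy.ndimage import zoom, shift

def correlation(a, b):
    # Normalized cross-correlation of two same-shaped arrays.
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def register_coarse_to_fine(model, data, levels=3, search=2):
    """Return a translation (z, y, x) aligning `data` to `model`."""
    offset = np.zeros(3)
    for level in reversed(range(levels)):            # coarsest level first
        s = 1.0 / (2 ** level)
        m = zoom(model, s, order=1)
        d = zoom(data, s, order=1)
        best, best_step = -np.inf, np.zeros(3)
        for dz in range(-search, search + 1):        # exhaustive local search
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    step = np.array([dz, dy, dx], float)
                    c = correlation(m, shift(d, offset * s + step, order=1))
                    if c > best:
                        best, best_step = c, step
        offset += best_step / s                      # refine at the next finer level
    return offset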
Structure-sensitive scale and the hierarchical segmentation of gray-level images
Lewis D. Griffin, Alan C.F. Colchester, Glynn P. Robinson, et al.
We present a graph-based approach to the production of hierarchical segmentations of grey-level images. The technique is designed to be as responsive as possible to the structure of the image but general in the sense that it is independent of the exact edge measure used. We present results using a novel edge measure which combines the first and second derivatives of the image to calculate the phase of the image with respect to scale. We combine this with the gradient to produce a value which reflects both the strength and the stability of candidate edges.
Boundary detection via dynamic programming
Jayaram K. Udupa, Supun Samarasekera, William A. Barrett
This paper reports a new method for detecting optimal boundaries in multidimensional scene data via dynamic programming (DP). In its current form the algorithm detects 2-D contours on slices and differs from other reported DP-based algorithms in an essential way in that it allows freedom in 2-D for finding optimal contour paths (as opposed to a single degree of freedom in the published methods). The method is being successfully used in segmenting object boundaries in a variety of medical applications including orbital volume from CT images (for craniofacial surgical planning), segmenting bone in MR images for kinematic analysis of the joints of the foot, segmenting the surface of the brain from the inner surface of the cranial vault, segmenting pituitary gland tumor for following the effect of a drug on the tumor, segmenting the boundaries of the heart in MR images, and segmenting the olfactory bulb for verifying hypotheses related to the size of this bulb in certain disease states.
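For readers unfamiliar with optimal boundary finding, a small illustrative sketch follows. It uses a generic shortest-path (Dijkstra) search between two user-chosen endpoints on a 2-D cost image rather than the authors' specific dynamic-programming formulation; all names are placeholders.

import heapq
import numpy as np

def min_cost_path(cost, start, goal):
    """Minimum-cost 8-connected pixel path from `start` to `goal` on `cost`."""
    rows, cols = cost.shape
    dist = np.full(cost.shape, np.inf)
    prev = {}
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue                                  # stale heap entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                    nd = d + cost[nr, nc]
                    if nd < dist[nr, nc]:
                        dist[nr, nc] = nd
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], goal                             # walk back from the goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

With a cost image that is low along strong edges, the returned path hugs the object boundary between the two endpoints.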
Probabilistic segmentation using edge detection and region growing
Russell R. Stringham, William A. Barrett, David C. Taylor
A new segmentation algorithm is described which incorporates both region and edge information. The algorithm allows simultaneous segmentation of multiple anatomical objects given one or more user-specified disc-shaped seed regions which sample the density characteristics of the underlying anatomy. The algorithm is iterative in nature, using the seed discs to grow out the specified region(s), for the initial image slice, through a type of connected component labeling. The final segmentation from the previous image slice seeds the segmentation for the next adjoining slice until the entire image volume is processed. The algorithm requires no training, is adaptive, demonstrating good performance for differing data types including CT and MRI, and requires minimal user input. The output of the segmentation algorithm is a three-dimensional (3-D) n-ary scene (where n specifies the number of segmented regions) which is amenable to surface rendering, via surface tracking, or volume rendering by masking the n-ary scene against the original image volume.
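A minimal sketch of the core region-growing step is given below, assuming a single 2-D slice, a precomputed seed mask, and a simple intensity tolerance; the paper's disc-shaped seeds, multi-object handling, and probabilistic labeling are omitted. In the full scheme, the grown mask of one slice would seed the adjoining slice.

from collections import deque
import numpy as np

def grow_region(image, seed_mask, tolerance):
    """Flood-fill pixels whose intensity is close to the seed statistics."""
    mean = image[seed_mask].mean()                    # density sampled from the seed
    grown = seed_mask.copy()
    queue = deque(zip(*np.nonzero(seed_mask)))
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                    and not grown[nr, nc]
                    and abs(image[nr, nc] - mean) <= tolerance):
                grown[nr, nc] = True
                queue.append((nr, nc))
    return grown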
Spatiotemporal detection of arterial structure using active contours
M. Eric Hyche, Norberto F. Ezquerra, Rakesh Mullick
Active contour models (commonly called 'snakes') have shown themselves to be a powerful and flexible paradigm for many problems in image understanding. Active contour models are now applied to the problem of detecting coronary blood vessels in cardiac angiography images, an important medical image understanding problem. Given two endpoints along a vessel, active contour models are used to find the vessel within a single angiography image. Snakes will also be employed to detect vessels across multiple frames, a task which involves computing local measures of interframe vessel motion. This is particularly important, since often the most useful information to clinicians is found in the motion of a vessel across the cardiac cycle.
Probabilistic multiscale image segmentation: set-up and first results (Proceedings Only)
Koen L. Vincken, Andre S.E. Koster, Max A. Viergever
We have developed a method to segment two- and three-dimensional images using a multiscale (hyperstack) approach with probabilistic linking. A hyperstack is a voxel-based multiscale data structure containing linkages between voxels at different scales. The scale-space is constructed by repeatedly applying a discrete convolution with a Gaussian kernel to the original input image. Between these levels of increasing scale we establish child-parent linkages according to a linkage scheme that is based on affection. In the resulting tree-like data structure roots are formed to indicate the most plausible locations in scale-space where objects (of different sizes) are actually defined by a single voxel. Tracing the linkages back from every root to the ground level produces a segmented image. The present paper deals with probabilistic linking, i.e., a set-up in which a child voxel can be linked to more than one parent voxel. The output of the thus constructed hyperstack -- a list of object probabilities per voxel -- can be directly related to the opacities used in volume renderers.
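The following sketch illustrates the underlying scale-space construction with a simplified, single-parent linkage chosen by intensity similarity in a 3x3 window; the hyperstack itself uses probabilistic, multi-parent linking. Window size and parameter names are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

def build_hyperstack(image, levels=4, sigma=1.0):
    # Repeated Gaussian smoothing builds the levels of increasing scale.
    stack = [image.astype(float)]
    for _ in range(levels - 1):
        stack.append(gaussian_filter(stack[-1], sigma))
    return stack

def link_parents(child_level, parent_level):
    """For every child voxel, pick the most similar parent in a 3x3 window."""
    rows, cols = child_level.shape
    parents = np.zeros((rows, cols, 2), dtype=int)
    for r in range(rows):
        for c in range(cols):
            best, best_rc = np.inf, (r, c)
            for pr in range(max(r - 1, 0), min(r + 2, rows)):
                for pc in range(max(c - 1, 0), min(c + 2, cols)):
                    diff = abs(child_level[r, c] - parent_level[pr, pc])
                    if diff < best:
                        best, best_rc = diff, (pr, pc)
            parents[r, c] = best_rc
    return parents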
Feature Analysis
Surface parametrization and shape description
Christian Brechbuehler, Guido Gerig, Olaf Kuebler
Procedures for the parameterization and description of the surface of simply connected 3-D objects are presented. Critical issues for shape-based categorization and comparison of 3-D objects are addressed, which are generality with respect to object complexity, invariance to standard transformations, and descriptive power in terms of object geometry. Starting from segmented volume data, a relational data structure describing the adjacency of local surface elements is generated. The representation is used to parametrize the surface by defining a continuous, one-to-one mapping from the surface of the original object to the surface of a unit sphere. The mapping is constrained by two requirements, minimization of distortions and preservation of area. The former is formulated as the goal function of a nonlinear optimization problem and the latter as its constraints. Practicable starting values are obtained by an initial mapping based on a heat conduction model. In contrast to earlier approaches, the novel parameterization method provides a mapping of arbitrarily shaped simply connected objects, i.e., it performs an unfolding of convoluted surface structures. This global parameterization allows the systematic scanning of the object surface by the variation of two parameters. As one possible approach to shape analysis, it enables us to expand the object surface into a series of spherical harmonic functions, extending the concept of elliptical Fourier descriptors for 2-D closed curves. The novel parameterization overcomes the traditional limitations of expressing an object surface in polar coordinates, which restricts such descriptions to star-shaped objects. The numerical coefficients in the Fourier series form an object-centered, surface-oriented descriptor of the object's form. Rotating the coefficients in parameter space and object space puts the object into a standard position and yields a spherical harmonic descriptor which is invariant to translations, rotations, and scaling of the object. The series can be truncated after a number of harmonics chosen according to the amount of detail to be expressed. The new methods are illustrated with simple 3-D test objects. Potential applications are recognition, classification, and comparison of convoluted surfaces or parts of surfaces of 3-D shapes, e.g., of anatomical objects segmented from multidimensional medical image data.
Deformable Fourier models for surface finding in 3-D images
Lawrence H. Staib, James S. Duncan
This paper describes a new global shape parametrization for smoothly deformable three-dimensional objects, such as those found in biomedical images, whose diversity and irregularity make them difficult to represent in terms of fixed features or parts. This representation is used for geometric surface matching to three-dimensional image data. The parametrization decomposes the surface into sinusoidal basis functions. Four types of surfaces are modeled: tori, open surfaces, closed surfaces, and tubes. This parametrization allows a wide variety of smooth surfaces to be described with a small number of parameters. Surface finding is formulated as an optimization problem. Results of the method applied to synthetic and medical three-dimensional images are presented.
Medial description of gray-scale image structure by gradient-limited diffusion
Daniel S. Fritsch
The representation of object shape in grayscale images is an important precursor to many common image interpretation needs, including recognition, registration, and measurement. Typically, such computer vision tasks have required the preliminary step of image segmentation, often via the detection of object edges. This paper presents a new means of describing grayscale object shape that obviates the need for edge-finding, i.e., classifying pixels as being inside or outside of an object boundary. Instead, this technique operates directly on the image intensity distribution to produce a set of 'medialness' measurements at every pixel and across multiple spatial scales that capture more global properties of object shape. The application of an orientation-sensitive, gradient-limited diffusion model provides many of the benefits of global, multiscale structural analysis while preserving the local region-insulating effects of pixels having edge-like properties. The result of this procedure is a multiscale medial axis (MMA), which represents an object at multiple and simultaneous levels of scale, and which provides several desirable properties for describing the shape of grayscale forms.
From partial derivatives of 3-D density images to ridge lines
Olivier Monga, Serge Benayoun, Olivier D. Faugeras
Three-dimensional edge detection in voxel images is used to locate points corresponding to surfaces of 3-D structures. The next stage is to characterize the local geometry of these surfaces in order to extract points or lines which may be used by registration and tracking procedures. Typically one must calculate second order differential characteristics of the surfaces such as the maximum, mean, and Gaussian curvatures. The classical approach is to use local surface fitting, thereby confronting the problem of establishing links between 3-D edge detection and local surface approximation. To avoid this problem, we propose to compute the curvatures at locations designated as edge points using directly the partial derivatives of the image. By assuming that the surface is defined locally by an iso-intensity contour (i.e., the 3-D gradient at an edge point corresponds to the normal to the surface) one can calculate directly the curvatures and characterize the local curvature extrema (ridge points) from the first, second, and third derivatives of the grey level function. These partial derivatives can be computed using the operators of the edge detection. We present experimental results obtained using real data (x-ray scanner data) applying these two methods. As an example of the stability, we extract ridge lines in two 3-D x-ray scanner data of a skull taken in different positions.
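As a reference for the quantities involved, one standard way of writing the curvatures of an iso-intensity surface directly in terms of the derivatives of the grey-level function f is given below (up to the sign convention fixed by the choice of surface normal); these are textbook formulas, not necessarily the authors' exact expressions.

\[
K_{\mathrm{Gauss}} = \frac{\nabla f^{\top}\,\mathrm{adj}(H)\,\nabla f}{\lVert \nabla f \rVert^{4}},
\qquad
K_{\mathrm{mean}} = \frac{\nabla f^{\top} H\,\nabla f \;-\; \lVert \nabla f \rVert^{2}\,\mathrm{tr}(H)}{2\,\lVert \nabla f \rVert^{3}},
\qquad
\kappa_{1,2} = K_{\mathrm{mean}} \pm \sqrt{K_{\mathrm{mean}}^{2} - K_{\mathrm{Gauss}}},
\]

where H is the Hessian of f, adj(H) its adjugate, and the first and second partial derivatives are obtained from the same smoothing operators used for edge detection.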
Mapping the human cerebral cortex using 3-D medial manifolds
Gabor Szekely, Christian Brechbuehler, Olaf Kuebler, et al.
Novel imaging technologies provide a detailed look at structure and function of the tremendously complex and variable human brain. Optimal exploitation of the information stored in the rapidly growing collection of acquired and segmented MRI data calls for robust and reliable descriptions of the individual geometry of the cerebral cortex. A mathematical description and representation of 3-D shape, capable of dealing with form of variable appearance, is at the focus of this paper. We base our development on the Medial Axis Transformation (MAT) customarily defined in 2-D although the concept generalizes to any number of dimensions. Our implementation of the 3-D MAT combines full 3-D Voronoi tessellation generated by the set of all border points with regularization procedures to obtain geometrically and topologically correct medial manifolds. The proposed algorithm was tested on synthetic objects and has been applied to 3-D MRI data of 1 mm isotropic resolution to obtain a description of the sulci in the cerebral cortex. Description and representation of the cortical anatomy is significant in clinical applications, medical research, and instrumentation developments.
Statistical investigations of multiscale image structure (Proceedings Only)
This artificial visual system (AVS) is a computational framework for computer vision based on spatial filtering and statistical pattern recognition. Computer vision tasks are often poorly defined; the AVS clarifies the kinds of visual tasks that can be defined and what constitutes a well-defined task. 'Segmentation' is not a well-defined task. Edge detection is revealed to be an absurd task. A filter set composed of multiscale Gaussians alone captures the structure of Koenderink's generic neighborhood operators when a pattern is constructed from the responses at a pixel and neighboring locations, where the distance to the selected neighbors increases with larger scale. Prior studies of the feature space formed by multiscale Gaussians reveal surprising power in the multiscale Gaussians alone. New studies support this observation. Contrary to common belief, we show how nonlocal, spatial, geometric structure can be captured using statistical pattern recognition operations in the AVS framework. A procedure is defined for deriving a single composite filter providing optimal separation of two clusters in feature space.
Multiparameter image visualization by projection pursuit (Proceedings Only)
G. Harikumar, Yoram Bresler
This paper addresses the display of multi-parameter medical image data, such as arises in MRI or multimodality image fusion. MRI or multimodality studies produce several different images of a given cross-section of the body, each providing different levels of contrast sensitivity between different tissues. The question then arises as to how to present this wealth of data to the diagnostician. While each of the different images may be misleading (as illustrated later by an example), in combination they may contain the correct information. Unfortunately, a human observer is not likely to be able to extract this information when presented with a parallel display of the distinct images. Given the sequential nature of detailed visual examination of a picture, a human observer is quite ineffective at integrating complex visual data from parallel sources. The development of a display technology that overcomes this difficulty by synthesizing a display method matched to the capabilities of the human observer is the subject of this paper. The ultimate goal of diagnostic imaging is the detection, localization, and quantification of abnormality. An intermediate goal, which is the one we address, is to present the diagnostician with an image that will maximize his chances of correctly classifying different regions in the image as belonging to different tissue types. Our premise is that the diagnostician is able to bring to bear all his knowledge and experience, which are difficult to capture in a computer program, on the final analysis process. This is often key to the detection of subtle and otherwise elusive features in the image. We therefore rule out the generation of an automatically segmented image, which not only fails to include this knowledge, but also would deprive the diagnostician of the opportunity to exercise it, by presenting him with a hard-labeled segmentation. Instead we concentrate on the fusion of the multiple images of the same cross-section into a single most informative grey-scale image.
Multimodality Registration
Image fusion using geometrical features
Petra A. van den Elsen M.D., J. B. Antoine Maintz, Evert-Jan D. Pol, et al.
This paper describes a new approach to register images obtained from different modalities. Differential operators in scale space are used to extract geometric features from the images corresponding to similar structures. The resulting feature images may be matched by minimizing some function of the distances between the features in the respective images. Our first application concerns matching of brain images. We discuss a differential operator that produces ridge-like feature images from which the center curve of the cranium is easily extracted in CT and MRI. Results of the performance of these operators in 2-D matching tasks are presented. In addition, the potential of this approach for multimodality matching of 3-D medical images is illustrated by the striking similarity of the ridge images extracted from CT and MR images by the 3-D version of the operator.
Quantitative integration of multimodality medical images
Kiyoyuki Chinzei, Takeyoshi Dohi, Takashi Horiuchi, et al.
Integration of medical images has become an increasingly popular topic in recent years. However, is it really possible to merge them? More precisely, to what degree is such integration reliable in a clinical and mathematical sense, and what are the criteria for accepting the integration? In this study, the authors address the methodology of integration and evaluate the integrated medical images.
New approach to 3-D registration of multimodality medical images by surface matching
Hongjian Jiang, Richard A. Robb, Kerrie S. Holton Tainter
Multimodality images obtained from medical imaging systems such as computed tomography (CT), magnetic resonance (MR) imaging, positron emission tomography (PET), and single photon emission computed tomography (SPECT), generally provide complementary characteristic and diagnostic information. Synthesis of these image data sets into a single composite image containing these complementary attributes in accurate registration and congruence would provide truly synergistic information about the object(s) under examination. We have developed a new method which produces such correlation using parametric Chamfer matching. The method is fast, accurate, and reproducible. Surfaces are initially extracted from two different images to be matched using semi-automatic segmentation techniques. These surfaces are represented as contours with common features to be matched. A distance transformation is performed for one surface image, and a cost function for the matching process is developed using the distance image. The geometric transformation includes three-dimensional translation, rotation, and scaling to accommodate images of different position, orientation, and size. The matching process involves searching this multi-parameter space to find the best fit which minimizes the cost function. The local minima problem is addressed by using a large number of starting points. A pyramid multi-resolution approach is employed to speed up both the distance transformation and the multi-parameter minimization processes. Robustness in noise handling is accomplished using multiple thresholds embedded in the multi-resolution search. The algorithm can register both partially overlapped and fragmented surfaces. Manual intervention is generally not necessary. Preliminary results suggest registration accuracy on the order of the voxel size used in the registration process. Computational time scales with the number of matching elements used, with about five minutes typical for 256³ images using a modern desktop workstation.
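A minimal sketch of chamfer-style matching follows, reduced to a translation-only search for brevity (the paper optimizes translation, rotation, and scale with a multi-resolution strategy and multiple starting points); function names are illustrative.

import numpy as np
from scipy.ndimage import distance_transform_edt, map_coordinates
from scipy.optimize import minimize

def chamfer_cost(points, distance_map, translation):
    """Mean distance-map value sampled at the translated surface points."""
    moved = points + translation                       # points: (N, 3) voxel coordinates
    return float(np.mean(map_coordinates(distance_map, moved.T,
                                          order=1, mode='nearest')))

def match(fixed_surface_mask, moving_points):
    # Distance transform of the fixed surface: 0 on the surface, growing away from it.
    dist = distance_transform_edt(~fixed_surface_mask)
    result = minimize(lambda t: chamfer_cost(moving_points, dist, t),
                      x0=np.zeros(3), method='Powell')
    return result.x                                    # best-fitting translation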
Toward frameless stereotaxy: anatomical-vascular correlation and registration
Christopher J. Henri, A. Cukiert, D. Louis Collins, et al.
We present a method to correlate and register a projection angiogram with volume rendered tomographic data from the same patient. Previously, we have described how this may be accomplished using a stereotactic frame to handle the required coordinate transformations. Here we examine the efficacy of employing anatomically based landmarks as opposed to external fiducials to achieve the same results. The experiments required a neurosurgeon to identify several homologous points in a DSA image and an MRI volume which were subsequently used to compute the coordinate transformations governing the matching procedure. Correlation accuracy was assessed by comparing these results to those employing fiducial markers on a stereotactic frame, and by examining how different levels of noise in the positions of the homologous points affect the resulting coordinate transformations. Further simulations suggest that this method has potential to be used in planning stereotactic procedures without the use of a frame.
3-dimensional registration and visualization of reconstructed coronary arterial trees on myocardial perfusion distributions
John W. Peifer, Ernest V. Garcia, C. David Cooke, et al.
Models created independently from nuclear perfusion images of the myocardium and x-ray angiograms of the coronary arteries are placed in register to produce a unified three-dimensional representation that more clearly displays and quantifies the relationship between stenotic defects in the arterial tree and perfusion distribution in the myocardium.
Automatic superimposition of CT and SPET immunoscintigraphic images in the pelvis (Proceedings Only)
Catherine Perault, Andres Loboguerrero, Jean-Claude Liehn, et al.
A method of superimposing computed tomography (CT) and immunoscintigraphic (IS) single photon emission tomography (SPET) pelvic images is applied to patients with suspected cancer recurrence. Bone scintigraphy (BS) SPET recorded at the same time as IS allows a fully automatic geometric registration using bone structures. Resulting fused CT-BS and CT-IS images are displayed in a bicolor red-gray scale.
Processing and Transforms
Edge information at landmarks in medical images
Fred L. Bookstein, William D. K. Green
In many current medical applications of image analysis, objects are detected and delimited by boundary curves or surfaces. Yet the most effective multivariate statistics available pertain to labelled points ('landmarks') only. In the finite-dimensional feature space that landmarks support, each case of a data set is equivalent to a deformation map deriving it from the average form. This paper reviews a recent extension of a spline-based approach so as to incorporate edge information, and extends it further to apply to images that incorporate landmarks. In this implementation, edgels are restricted to landmark loci: they are interpreted as pairs of landmarks at infinitesimal separation in a specific direction. To shears of these infinitesimal edgels correspond well-defined incremental deformations of the entire image. There results a very flexible new strategy for normalizing the geometry of a specimen scene before representing its grey levels as functions of position. Applications of this strategy will include more powerful approaches to picture averaging and more precise visualizations of biological processes that affect the shapes of medical images, their content, or both.
Smoothing and matching of 3-D space curves
Andre P. Gueziec, Nicholas Ayache
We present a new approach to the problem of matching 3-D curves. The approach has an algorithmic complexity sublinear with the number of models, and can operate in the presence of noise and partial occlusions. We make use of non-uniform B-spline approximations, which permits us to better retain information at high curvature locations. The spline approximations are controlled (i.e., regularized) by making use of normal vectors to the surface in 3-D on which the curves lie, and by an explicit minimization of a bending energy. These measures allow a more accurate estimation of position, curvature, torsion, and Frenet frames along the curve. The computational complexity of the recognition process is considerably decreased with explicit use of the Frenet frame for hypotheses generation. As opposed to previous approaches, the method better copes with partial occlusion. Moreover, following a statistical study of the curvature and torsion covariances, we optimize the hash table discretization and discover improved invariants for recognition, different than the torsion measure. Finally, knowledge of invariant uncertainties is used to compute an optimal global transformation using an extended Kalman filter. We present experimental results using synthetic data and also using characteristic curves extracted from 3-D medical images. An earlier version of this paper was presented at ECCV'92.
Local filtering and global optimization methods for 3-D magnetic-resonance angiography image enhancement
Dirk Vandermeulen, D. Delaere, Paul Suetens, et al.
In this presentation, we discuss the visualization of cerebral blood vessels in 3-D MR angiography images. Two techniques for an improved visualization are investigated: 3-D nonlinear morphological filters that enhance the contrast of blood-vessel-like structures and a global stochastic optimization framework incorporating shape constraints. The resulting filtered images are combined into a novel hybrid volume rendering visualization method for the integrated viewing of brain structures and cerebral vasculature.
Combining shape-based and gray-level interpolations
Sectional images generated by medical scanners usually have lower interslice resolution than resolution within the slices. Shape-based interpolation is a method of interpolation that can be applied to the segmented 3-D volume to create an isotropic data set. It uses a distance transform applied to every slice prior to estimation of intermediate binary slices. Gray-level interpolation has been the classical way of estimating intermediate slices. The method reported here is a combination of these two forms of interpolation, using the local gradient as a normalizing factor of the combination. Overall, this combination of the methods performs better than either of them applied individually.
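A minimal sketch of the two ingredients being combined is shown below: a shape-based intermediate slice estimated from averaged signed distance maps, and a grey-level intermediate slice from simple averaging. The gradient-based weighting that combines the two in the paper is not reproduced here; names are placeholders.

import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    # Positive inside the object, negative outside.
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return inside - outside

def intermediate_slice(mask_a, mask_b, grey_a, grey_b):
    """Estimate the slice halfway between two adjacent slices."""
    shape_based = (signed_distance(mask_a) + signed_distance(mask_b)) / 2 > 0
    grey_level = (grey_a + grey_b) / 2.0
    return shape_based, grey_level        # binary and grey-level estimates to be blended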
Optimized intensity thresholds for volumetric analysis of magnetic-resonance imaging data (Proceedings Only)
Marijn E. Brummer
A technique is presented for computing optimal threshold values for volumetric analysis of MRI data, in the presence of partial volume effects and noise. Thresholds are computed from nominal structure intensities, detected from intensity histograms. Histogram modification methods are described to improve the detectability of these intensities. The implementation of the technique is discussed for detection of contours of the brain and the ventricular system. Results are presented of validation experiments using simulated data, generated from MRI scans of a phantom of the ventricular system in the brain, and of six whole-brain MRI scans.
Rendering and Interpretation
Incremental volume reconstruction and rendering for 3-D ultrasound imaging
Ryutarou Ohbuchi, David Chen, Henry Fuchs
In this paper, we present approaches toward an interactive visualization of a real-time input, applied to 3-D visualizations of 2-D ultrasound echography data. The first, a 3 degrees-of-freedom (DOF) incremental system, visualizes a 3-D volume acquired as a stream of 2-D slices with location and orientation with 3 DOF. As each slice arrives, the system reconstructs a regular 3-D volume and renders it. Rendering is done by an incremental image-order ray-casting algorithm which stores and reuses the results of expensive resampling along the rays for speed. The second is our first experiment toward real-time 6 DOF acquisition and visualization. Two-dimensional slices with 6 DOF are reconstructed off-line, and visualized at an interactive rate using a parallel volume rendering code running on the graphics multicomputer Pixel-Planes 5.
Acceleration of ray-casting using 3-D distance transforms
Karel J. Zuiderveld, Anton H. J. Koning, Max A. Viergever
This paper introduces a novel approach for speeding up the ray casting process commonly used in volume visualization methods. This new method called Ray Acceleration by Distance Coding (RADC) uses a 3-D distance transform to determine the minimum distance to the nearest interesting object; the implementation of a fast and accurate distance transform is described in detail. High distance values, typically found at off-center parts of the volume, cause many sample points to be skipped, thus significantly reducing the number of samples to be evaluated during the ray casting step. The minimum distance values that are encountered while traversing the volume can be used for the identification of rays that do not hit objects. Our experiments indicate that the RADC method can reduce the number of sample points by a factor between 5 and 20.
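The space-leaping idea can be illustrated with a short sketch: each ray sample reads a precomputed distance-to-object value and advances by at least that amount, so empty space is crossed in a few large steps. This is a generic illustration of distance-coded traversal, not the paper's RADC implementation; names and step limits are assumptions.

import numpy as np
from scipy.ndimage import distance_transform_edt

def first_hit(volume_mask, origin, direction, max_steps=10000):
    """Return the first object voxel hit along a ray, or None."""
    dist = distance_transform_edt(~volume_mask)        # distance to the nearest object voxel
    pos = np.asarray(origin, float)
    direction = np.asarray(direction, float)
    direction /= np.linalg.norm(direction)
    for _ in range(max_steps):
        idx = tuple(np.round(pos).astype(int))
        if any(i < 0 or i >= s for i, s in zip(idx, volume_mask.shape)):
            return None                                # ray left the volume
        if volume_mask[idx]:
            return idx                                 # hit an object voxel
        pos += direction * max(dist[idx], 1.0)         # leap over empty space
    return None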
Evaluation and optimization of contrast enhancement methods for medical images
Derek T. Puff, Robert Cromartie, Etta D. Pisano, et al.
We have developed and are applying two methods of image quality assessment with the aim of optimizing contrast enhancement parameter settings and evaluating competing methods. Our first approach uses observer studies employing psychophysical methods and a realistic clinical task; the second incorporates a model of human vision in a computer simulation of the performance of an observer.
New paradigm for optimal multiparameter image visualization
Robert W. Boesel, Yoram Bresler
This paper addresses the effective display of multi-parameter medical diagnostic data, such as arises in MRI or in multimodality image-fusion. A new decision-theoretic model of the human observer in a visualization task is developed, and used to derive optimal algorithms for the display of a maximally informative fused monochrome image.
Multimode 3-D renderer for clinical and scientific visualization (Proceedings Only)
Rod D. Gilchrist
A 3-D renderer is described that, based on requirements of clinical medical applications, integrates the display of volume data, biological objects, angiographic data, geometric surfaces, and isosurfaces in the same image. The renderer generates images at interactive speeds within windowed (e.g., X11) applications on standard UNIX workstations with an optional accelerator and fully exploits workstation or accelerator parallelism.
Visualization Tools
Interactive 3-D segmentation
Thomas Schiemann, M. Bomans, Ulf Tiede, et al.
Segmentation is a prerequisite for 3-D visualization of image volumes. It has turned out to be extremely difficult to formalize for automatic computation. We describe an interactive segmentation method that circumvents this difficulty by using low level segmentation tools, which are interactively controlled by a human user via 3-D display. Segmentation tools implemented so far are simple thresholding and morphological operations. The method has been implemented on a workstation under UNIX using an X-Window interface based on the OSF/MOTIF toolkit. It is shown with examples from different applications that this simple approach delivers good results in only a short amount of time.
G2: the design and realization of an accelerator for volume visualization
Louis T. S. Leung, Warren Synnott
The trend in increasing volume and complexity of data to be examined in biomedical visualization indicates the need for hardware acceleration. This paper describes a hardware accelerator designed for visualization in biomedical applications. The general architectural approach taken was that of creating a low cost, general purpose supercomputer rather than that of producing acceleration by hardwiring certain functions. The result is an affordable, scalable, symmetric multiprocessor with 10 to 20 times the performance of today's state of the art deskside workstations. The design goals, major design decisions, and the resulting G2 architecture are discussed in this paper. The first major design decision discussed is the general architecture, leading to a conclusion of a symmetric multiprocessor with some characteristics of a distributed memory multiprocessor. The memory architecture chosen is that of a memory hierarchy, with the lower levels (levels closer to the processors) of the hierarchy operating independently, thus allowing for more aggregate bandwidth at lower levels. Many RISC, CISC, and DSP commercial offerings were considered for the processor, and the final one chosen is the next generation Motorola 88K RISC processor, the MC88110. The interconnection scheme between processors and memories was determined to be best implemented using a proprietary, synchronous, hierarchical bus with a bandwidth of 200 MBytes/s at the top of the hierarchy. The performance of the system on nine benchmarks is estimated, and some of the estimates have been verified with the prototype. At the time of submission of this paper, the other benchmarks are being verified.
Data representation and visualization in 4-D microscopy
Andres Kriete, Steffen Rohrbach, Tim Schwebel, et al.
Computer representation in biological microscopy is progressing from the well established modeling of three-dimensional (3-D) structural information towards the visualization of spatiotemporal (4-D) information. This paper describes two new methods to process sequential volumes, where each data set corresponds to a time sample. The first technique is based on surface rendering to study organ and tissue development. Contour stacks are rendered and in-between stages are interpolated. This technique allows the analysis and simulation of growth following different mathematical models and relates them with experimental findings. The second technique applies volume rendering to morphogenesis in living tissue. Sequences scanned with a confocal microscope are packed. The combination of ray-casting reconstructions within a color model allows for a rendering of morphogenetic activity.
Interactive wavelet-based image compression with arbitrary region preservation
We have developed a software module which performs 2-D and 3-D image compression based on wavelet transforms. The compression is not lossless, but the software allows the user to interactively determine the degree of compression (by viewing the results) and to interactively define specific regions of the image -- of any size and shape -- that are to be preserved with full fidelity while the rest of the image is compressed (again, viewing the results). These capabilities may be of interest in applications such as teleradiology.
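As a rough illustration only (the module described is interactive and handles 3-D data and arbitrarily shaped regions), the sketch below compresses a 2-D image by thresholding wavelet coefficients with PyWavelets and then restores the original pixels inside a region-of-interest mask; the wavelet, level, and threshold values are arbitrary placeholders.

import numpy as np
import pywt

def compress_with_roi(image, roi_mask, wavelet='db4', level=3, threshold=20.0):
    # Decompose, hard-threshold the detail coefficients, and reconstruct.
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    kept = [coeffs[0]] + [
        tuple(pywt.threshold(band, threshold, mode='hard') for band in detail)
        for detail in coeffs[1:]
    ]
    compressed = pywt.waverec2(kept, wavelet)[:image.shape[0], :image.shape[1]]
    compressed[roi_mask] = image[roi_mask]     # keep the selected region at full fidelity
    return compressed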
Estimating ankle rotational constraints from anatomic structure
H. Harlyn Baker, Janice S. Bruckner, John H. Langdon
Three-dimensional biomedical data obtained through tomography provide exceptional views of biological anatomy. While visualization is one of the primary purposes for obtaining these data, other more quantitative and analytic uses are possible. These include modeling of tissue properties and interrelationships, simulation of physical processes, interactive surgical investigation, and analysis of kinematics and dynamics. As an application of our research in modeling tissue structure and function, we have been working to develop interactive and automated tools for studying joint geometry and kinematics. We focus here on discrimination of morphological variations in the foot and determining the implications of these on both hominid bipedal evolution and physical therapy treatment for foot disorders.
Visualization tools for computational electrocardiology (Proceedings Only)
Robert S. MacLeod, Christopher R. Johnson, Mike A. Matheson
We have developed a suite of visualization and model construction tools for use in computing voltages and currents in the human thorax due to cardiac electrical activity. The programs support interactive digitization of MRI images, automatic generation, manipulation and editing of three-dimensional meshes, and display of both surface and volume potential distributions and volume current vectors.
3-dimensional visualization of pose determination: application to SPECT imaging (Proceedings Only)
Rakesh Mullick, Norberto F. Ezquerra, Ernest V. Garcia, et al.
Pose determination of 3-D objects is a topic of active research, particularly in the area of medical imaging. The determination of the pose or orientation of a 3-D object in a 3-D dataset can be viewed as a two-fold problem: (1) the determination of the orientation of the object; and (2) the visualization of the object along with the determined orientation of the object. An attempt has been made to compute and visualize the orientation of the left ventricle (LV) of the heart from SPECT data. A semi-automatic technique to first filter the acquired SPECT data and then compute the 3-D orientation of the LV is under development. Volume visualization techniques have been employed to better visualize and comprehend the structure and orientation of the LV.
Method for interactive manipulation and animation of volumetric data (Proceedings Only)
Yves D. Jean, Larry F. Hodges, Roderic Pettigrew
We outline an efficient method for visualizing and manipulating volumetric data, in particular, cardiac MRI data sets. The approach is designed to allow interactive manipulation and real-time animation of volumetric data sets. The underlying model provides an efficient graphical representation for interactive rendering while not eliminating data from the volume of interest. We believe this model to be a valuable medical imaging tool that is applicable to other volume rendering problems.
SpiderWeb algorithm for surface construction in noisy volume data (Proceedings Only)
Daniel B. Karron
The 'SpiderWeb' is an algorithm that constructs an oriented, manifold, and unbounded triangle mesh isosurface in noisy volume data. The surface normals consistently point outward in the direction of lower density. Each triangle is edge adjacent to precisely one other triangle. Because of this property, there can be no artifactual 'holes' or 'tears' in the surface. The surface is constructed using only local information and has these global properties. SpiderWeb's main application is surface construction in noisy MRI and ultrasound data. The constructed surface is appropriate for modeling brain anatomy geometry for further functional modeling applications.
Visualization Systems
Electronic imaging of the human body
Michael W. Vannier M.D., Randall E. Yates, Jennifer J. Whitestone
The Human Engineering Division of the Armstrong Laboratory (USAF); the Mallinckrodt Institute of Radiology; the Washington University School of Medicine; and the Lister-Hill National Center for Biomedical Communication, National Library of Medicine are sponsoring a working group on electronic imaging of the human body. Electronic imaging of the surface of the human body has been pursued and developed by a number of disciplines including radiology, forensics, surgery, engineering, medical education, and anthropometry. The applications range from reconstructive surgery to computer-aided design (CAD) of protective equipment. Although these areas appear unrelated, they have a great deal of commonality. All the organizations working in this area are faced with the challenges of collecting, reducing, and formatting the data in an efficient and standard manner; storing this data in a computerized database to make it readily accessible; and developing software applications that can visualize, manipulate, and analyze the data. This working group is being established to encourage effective use of the resources of all the various groups and disciplines involved in electronic imaging of the human body surface by providing a forum for discussing progress and challenges with these types of data.
Multimodality radiological image processing system
Ronald L. Levin, Mary A. Douglas, Joseph A. Frank, et al.
A new image processing system, MRIPS, is being developed to facilitate the visualization and analysis of multidimensional images and spectra obtained from different radiological imaging modalities.
Imaging applications platform: concept to implementation
Patrick B. Heffernan, Doron Dekel
This paper describes a software system called the Imaging Applications Platform (IAP), which is designed to meet the visualization tasks of biomedical applications. The design decisions are explained, and the chosen architecture is described. The functionality of IAP is summarized and demonstrated with some example pipelines.
Framework for the generation of 3-D anatomical atlases
Karl Heinz Hoehne, Andreas Pommert, Martin Riemer, et al.
In current practice computerized anatomical atlases are based on a collection of images that can be accessed via a hypermedia program shell. In order to overcome the drawback of a limited number of available views, we propose an approach that uses an anatomical model as its database. The model has a two layer structure. The lower level is a volume model with a set of semantic attributes belonging to each voxel. Its spatial representation is derived from data sets of magnetic resonance imaging and computed tomography. The semantic attributes are assigned by an anatomist using a volume editor. The upper level is a set of relations between these attributes which are specified by the expert as well. Interactive visualization tools such as multiple surface display, transparent rendering, and cutting are provided. As a substantial feature of the implementation the semantic and the visualization oriented descriptions are stored in a knowledge base. It is shown that the combination of this object oriented data structure with advanced volume visualization tools provides the 'look and feel' of a real dissection. The concept, which even allows simulations like surgery rehearsal, is claimed to be superior to all presently known atlas techniques.
Computer vision and graphics in fluorescence microscopy
Lawrence M. Lifshitz, Kevin Fogarty, John M. Gauch, et al.
The focus of this paper is on the visualization of intracellular structures in 3-D, dual labelled, fluorescent images, and the quantification of the spatial relationships among these structures. Specifically, a local, fast deformable model has been developed which finds the membrane of a cell. Two visualization algorithms have also been developed. One is a surface-based algorithm which converts voxel data into planar surfaces and hence permits the visualization of the cell membrane (as found by the deformable model) in conjunction with the original data. The second is a 3-D voxel-based algorithm which permits the simultaneous visualization of two data sets and is optimized to emphasize the extent to which the data in two 3-D images (one from each fluorescent label) co-localize. In addition, a quantitative analysis of overlap between two data volumes has also been performed. The application of the developed tools to several biologically motivated problems is discussed.
3-dimensional acquisition and visualization of ultrasound data (Proceedings Only)
Umamaheswari Ganapathy, Arie E. Kaufman
We present two approaches to capture and visualize a volumetric ultrasound dataset. One is a straightforward one degree-of-freedom method. The other employs a six degree-of-freedom electromagnetic tracking device attached to the ultrasound transducer. Using an efficient incremental algorithm the 3-D volume is reconstructed from the acquired arbitrarily oriented and positioned sections. The 3-D volume is visualized and manipulated using the VolVis and MediView volume visualization systems.
Surgery and Treatment Planning
Interactive 3-D graphics workstations in stereotaxy: clinical requirements, algorithms, and solutions
Hans-Heino Ehricke, Gerhard Daiber, Ralf Sonntag, et al.
In stereotactic treatment planning the spatial relationships between a variety of objects have to be taken into account in order to avoid destruction of vital brain structures and rupture of vasculature. The visualization of these highly complex relations may be supported by 3-D computer graphics methods. In this context the three-dimensional display of the intracranial vascular tree and additional objects, such as neuroanatomy, pathology, stereotactic devices, or isodose surfaces, is of high clinical value. We report an advanced rendering method for a depth-enhanced maximum intensity projection from magnetic resonance angiography (MRA) and a walk-through approach to the analysis of MRA volume data. Furthermore, various methods for a multiple-object 3-D rendering in stereotaxy are discussed. The development of advanced applications in medical imaging can hardly be successful if image acquisition problems are disregarded. We put particular emphasis on the use of conventional MRI and MRA for stereotactic guidance. The problem of MR distortion is discussed and a novel three-dimensional approach to the quantification and correction of the distortion patterns is presented. Our results suggest that the sole use of MR for stereotactic guidance is highly practical. The true three-dimensionality of the acquired datasets opens up new perspectives to stereotactic treatment planning. For the first time it is possible now to integrate all the necessary information into 3-D scenes, thus enabling interactive 3-D planning.
Interactive visualization and manipulation of 3-D reconstructions for the planning of surgical procedures
Ron Kikinis, Harvey E. Cline, David Altobelli, et al.
The requirements for 3-D reconstructions to be useful in a clinical environment include availability of the imaging and computing hardware; sophisticated and user-friendly software that can be used by physicians or technicians; robust data that can be efficiently segmented into clinically relevant structures; interactive speed; and the ability to manipulate the visualized structures for the simulation of surgical procedures. We have developed an efficient hardware, software, and application environment that fulfills these requirements and have initiated testing of its performance.
Visualization of multimodal images for the planning of skull base surgery
Derek L.G. Hill, S. E.M. Green, J. E. Crossman, et al.
This paper describes our methodology for combining and visualizing registered MR, CT, and angiographic images of the head. We register the individual datasets using the location of a number of user identified anatomical point landmarks to derive the rigid body transformation between datasets. The combined images are displayed either as 2-D slices or as 3-D rendered scenes. Three independent observers performed a detailed assessment of the usefulness of the combined images in the planning of resection of skull base lesions in seven patients. We have shown that in all patients studied at least one of our observers obtained significant extra clinical information from the combined images, while all observers showed significantly increased confidence in the pre-operative surgical plan in all but one patient. Initial evaluation of the 3-D rendered displays showed that the size, shape, and extent of the tumors were better visualized, 3-D spatial relationships between structures were clarified, viewing the resection site in 3-D was very useful, and movie loops provided a very strong 3-D cue. An improved method of registering information from multiple imaging modalities is described and future directions for image combination and visualization are suggested.
Physical model of facial tissue and muscle articulation derived from computer tomography data
Keith Waters
A facial tissue model, articulated by synthetic muscles, provides a tool for observing, analyzing, and predicting soft tissue mobility on the face. A geometric model of facial tissue extracted from CT data improves the skin tissue simulation by using accurate skin tissue depths. This paper suggests that the ability to control the model resolution, muscle placement and activity requires an integrated modeling and animation system.
Volume rendering: application in static field conformal radiosurgery
J. Daniel Bourland, Jon J. Camp, Richard A. Robb
Lesions in the head which are large or irregularly shaped present challenges for radiosurgical treatment by linear accelerator or other radiosurgery modalities. To treat these lesions we are developing static field, conformal stereotactic radiosurgery. In this procedure seven to eleven megavoltage x-ray beams are aimed at the target volume. Each beam is designed from the beam's-eye view, and has its own unique geometry: gantry angle, table angle, and shape which conforms to the projected cross-section of the target. A difficulty with this and other 3-D treatment plans is the visualization of the treatment geometry and proposed treatment plan. Is the target volume geometrically covered by the arrangement of beams, and is the dose distribution adequate? To answer these questions we have been investigating the use of ANALYZE™ volume rendering to display the target anatomy and the resultant dose distribution.
Diagnosis and Interpretation
Model-based 3-D segmentation of multiple sclerosis lesions in dual-echo MRI data
Micheline Kamber, D. Louis Collins, Rajjan Shinghal, et al.
This paper describes the development and use of a brain tissue probability model for the segmentation of multiple sclerosis lesions in magnetic resonance (MR) images of the human brain. Based on MR data obtained from a group of healthy volunteers, the model was constructed to provide prior probabilities of grey matter, white matter, ventricular cerebrospinal fluid (CSF), and external CSF distribution per unit voxel in a standardized 3-dimensional 'brain space.' In comparison to purely data-driven segmentation, the use of the model to guide the segmentation of multiple sclerosis lesions reduced the volume of false positive lesions by 50%.
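A minimal sketch of how such prior probability maps can guide voxel classification is given below, assuming Gaussian intensity likelihoods per tissue class; the class names and statistics are placeholders, not the paper's model.

import numpy as np

def classify(intensity, priors, means, stds):
    """intensity: (Z, Y, X) MR volume; priors: dict label -> (Z, Y, X) prior map."""
    labels = list(priors)
    posterior = []
    for lab in labels:
        # Gaussian likelihood of the observed intensity for this tissue class.
        likelihood = np.exp(-0.5 * ((intensity - means[lab]) / stds[lab]) ** 2)
        posterior.append(likelihood * priors[lab])      # unnormalized posterior
    return np.array(labels)[np.argmax(posterior, axis=0)]

Voxels whose intensity fits no healthy-tissue class well (low posterior everywhere) would then be candidates for lesion labeling.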
Informing interested parties of changes in the optical performance of the cornea caused by keratorefractive surgery: a ray-tracing model that tailors presentation of results to fit the level of sophistication
Leo J. Maguire, Jon J. Camp, Richard A. Robb
Keratorefractive surgery changes a patient's spectacle correction by altering the curve of the cornea. Often the optical performance of the cornea is degraded as a result of surgery. Clinical tests such as visual acuity testing with high contrast optotypes are too insensitive to measure how the operation degrades optical quality. Ray tracing models offer promise as a sensitive indicator of optical degradation, but unfortunately most patients and many ophthalmologists and health care analysts do not understand results from such models when they are displayed as either Fourier representations of optical degradation or as point spread functions. To address this problem, we improved on an earlier ray tracing program that models the optical performance of the cornea so that it now presents results in whatever format is best understood by the target audience.
Wavelet processing techniques for digital mammography
Andrew F. Laine, Shuwu Song
This paper introduces a novel approach for accomplishing mammographic feature analysis through multiresolution representations. We show that efficient (nonredundant) representations may be identified from digital mammography and used to enhance specific mammographic features within a continuum of scale space. The multiresolution decomposition of wavelet transforms provides a natural hierarchy in which to embed an interactive paradigm for accomplishing scale space feature analysis. Similar to traditional coarse to fine matching strategies, the radiologist may first choose to look for coarse features (e.g., dominant mass) within low frequency levels of a wavelet transform and later examine finer features (e.g., microcalcifications) at higher frequency levels. In addition, features may be extracted by applying geometric constraints within each level of the transform. Choosing wavelets (or analyzing functions) that are simultaneously localized in both space and frequency, results in a powerful methodology for image analysis. Multiresolution and orientation selectivity, known biological mechanisms in primate vision, are ingrained in wavelet representations and inspire the techniques presented in this paper. Our approach includes local analysis of complete multiscale representations. Mammograms are reconstructed from wavelet representations, enhanced by linear, exponential and constant weight functions through scale space. By improving the visualization of breast pathology we can improve the chances of early detection of breast cancers (improve quality) while requiring less time to evaluate mammograms for most patients (lower costs).
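The enhancement step can be sketched as level-dependent re-weighting of wavelet detail coefficients before reconstruction; the gains below are arbitrary placeholders rather than the paper's linear, exponential, or constant weight functions, and the wavelet choice is an assumption.

import pywt

def enhance(image, gains=(1.0, 2.0, 4.0), wavelet='db4'):
    """Re-weight detail coefficients, one gain per level from coarse to fine."""
    coeffs = pywt.wavedec2(image, wavelet, level=len(gains))
    boosted = [coeffs[0]] + [
        tuple(gain * band for band in detail)
        for gain, detail in zip(gains, coeffs[1:])       # coeffs[1:] runs coarse to fine
    ]
    return pywt.waverec2(boosted, wavelet)

Boosting the finer levels more strongly emphasizes small features such as microcalcifications, while a flat gain profile leaves the mammogram unchanged.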
Multidimensional image and structure data representation: a generalization of the ACR-NEMA standards (Proceedings Only)
Jayaram K. Udupa, Hsiu-Mei Hung, Dewey Odhner, et al.
Multidimensional image data are becoming increasingly common in biomedical imaging. Three-dimensional visualization and analysis techniques based on three-dimensional image data have become an established discipline in biomedicine. Some imaging problems generate image data of even higher dimensions. It often becomes necessary to consider the higher dimensional data as a whole to adequately answer the underlying imaging questions. In spite of this established need for convenient exchange of image and image-derived information, no exchange protocols are available that adequately meet the needs of multidimensional imaging systems. This paper describes an exchange protocol that has been designed after careful consideration of the common requirements of methodologies for visualization and analysis of multidimensional data. It is based on and is a generalization of the widely accepted ACR-NEMA standards specified for two-dimensional images. It is implemented and actively being used in a data-, application-, and machine-independent software environment called 3DVIEWNIX, being developed in the authors' department, for the visualization and analysis of multidimensional images.
Biology and Function
Three-dimensional structure of neurons from peripheral autonomic ganglia using confocal microscope images
Steven M. Miller, Philip Schmalz, Leonid Ermilov, et al.
An understanding of neuronal physiology at the single unit level requires knowledge of the functional properties of the neuron as well as a detailed description of its morphology. A full morphological description should include a three-dimensional (3-D) analysis. In this paper, we describe for the first time, 3-D morphology of individual neurons of the peripheral autonomic nervous system intact within whole-mount preparations. Neurons were filled with the fluorescent dye Lucifer Yellow and imaged by laser scanning confocal microscopy. Three-dimensional reconstructions were obtained using volume rendering methods on a set of serial optical sections obtained for each neuron. The 3-D images of the neurons we studied showed a complexity of shape and detail that is not readily apparent from images viewed by traditional microscopy.
Data processing for 3-D ultrasound visualization of tumor anatomy and blood flow
Jeffery C. Bamber, R. J. Eckersley, P. Hubregtse, et al.
Using simple tumor anatomy and flow phantoms and an algorithm for segmenting and remapping Doppler color flow data, it was demonstrated that (1) adaptive speckle reduction (ASR) can be extended to work in three dimensions, with likely benefits to the performance of the algorithm and the quality of three-dimensional presentations, particularly those obtained from reflection volume rendering methods, (2) three-dimensional visualization of the Doppler flow data helps the observer to comprehend the vascular pattern and interpret some artefacts, and (3) volume and surface rendering methods in common usage are not ideal for these data. A new volume rendering method, termed trichromatic summed voxel projection (TSVP), was proposed which may permit three-dimensional visualization of combined vascular and anatomical structures while preserving most of the echo amplitude and color velocity information.
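A hedged sketch of a summed-voxel projection that keeps echo amplitude in a gray channel and signed Doppler velocity in red/blue channels. The exact TSVP formulation is not given in the abstract, so the channel assignment and normalization below are illustrative assumptions.

```python
# Summed-voxel projection combining echo amplitude and Doppler velocity.
import numpy as np

def trichromatic_summed_projection(echo, velocity, axis=0):
    """echo: 3-D echo-amplitude volume; velocity: 3-D signed Doppler volume."""
    gray = echo.sum(axis=axis)
    toward = np.clip(velocity, 0, None).sum(axis=axis)   # flow toward the probe
    away = np.clip(-velocity, 0, None).sum(axis=axis)    # flow away from the probe

    def norm(a):
        # Normalize each channel independently for display.
        return a / a.max() if a.max() > 0 else a

    rgb = np.stack([norm(gray + toward),   # red: echo plus toward-flow
                    norm(gray),            # green: echo only
                    norm(gray + away)],    # blue: echo plus away-flow
                   axis=-1)
    return rgb
```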
Joint kinematics via three-dimensional MR imaging
Jayaram K. Udupa, Bruce Elliot Hirsch, Supun Samarasekera, et al.
The methodology reported here enables us to mathematically model and quantify, for complex joints, the motion of each component bone, the relative motion of bones, the contact surfaces of bones, and their changes during motion, from a time sequence of MR image volumes. Additionally, since we model the bone surfaces, we are able to display in vivo joint motion. Through a variety of new rendering techniques we are able to create realistic displays of bones from MR images and to combine these displays with the motion parameters.
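A minimal sketch of estimating the rigid motion (rotation R, translation t) of a bone surface between two time frames from corresponding surface points, using the standard SVD-based least-squares fit. It illustrates the kind of quantity involved; it is not the authors' specific kinematic formulation.

```python
# Least-squares rigid transform between corresponding point sets.
import numpy as np

def rigid_motion(points_t0, points_t1):
    """points_t0, points_t1: (N, 3) arrays of corresponding surface points."""
    c0, c1 = points_t0.mean(axis=0), points_t1.mean(axis=0)
    H = (points_t0 - c0).T @ (points_t1 - c1)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c1 - R @ c0
    return R, t                   # p_t1 ~ R @ p_t0 + t
```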
Three-dimensional visualization of cardiac single-photon-emission computed-tomography studies
C. David Cooke, Ernest V. Garcia, Russell D. Folks, et al.
We have previously reported on a method for visualizing cardiovascular nuclear medicine tomographic perfusion studies which included long- and short-axis slices, two-dimensional (2-D) polar maps, three-dimensional (3-D) surface models, and four-dimensional surface models. We have since validated the 3-D surface model in a prospective patient population, added the ability to generate 3-D models from any polar map, and added a panoramic display that allows two 3-D models to be compared side by side. This paper describes the methodologies involved in these enhancements.
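A hedged sketch of assembling a 2-D polar ("bull's-eye") map from perfusion samples taken per short-axis slice and angular sector. The sampling and normalization details of the authors' method are not reproduced here; the mapping below is illustrative.

```python
# Build a bull's-eye image from (slice, angle) perfusion samples.
import numpy as np

def polar_map(samples, size=256):
    """samples: (n_slices, n_angles) array, apex in row 0, base in the last row."""
    n_slices, n_angles = samples.shape
    y, x = np.mgrid[0:size, 0:size]
    cx = cy = (size - 1) / 2.0
    r = np.hypot(x - cx, y - cy) / (size / 2.0)              # 0 at center (apex), 1 at edge (base)
    theta = np.mod(np.arctan2(y - cy, x - cx), 2 * np.pi)
    ring = np.clip((r * n_slices).astype(int), 0, n_slices - 1)
    sector = np.clip((theta / (2 * np.pi) * n_angles).astype(int), 0, n_angles - 1)
    img = samples.astype(float)[ring, sector]
    img[r > 1.0] = 0.0                                       # outside the map
    return img
```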
Overlay of neuromagnetic current-density images and morphological MR images
Manfred Fuchs, Hans-Aloys Wischmann, Olaf Doessel
Neuromagnetic imaging is a relatively new diagnostic tool for examining electric activity in the nervous system. It is based on the noninvasive detection of extremely weak magnetic fields around the human body with superconducting quantum interference device (SQUID) detectors. 'Equivalent current dipoles' and linear estimation reconstructions of current distributions, both with spherical volume conductor models, are used to localize the neural activity. For practical use in medical diagnosis, a combination of the abstract neuromagnetic images with magnetic resonance (MR) or computed tomography (CT) images is required in order to match the functional activity with anatomy and morphology. The neuromagnetic images can be overlaid onto three-dimensional morphological images with arbitrarily selectable slices. The matching of the two imaging modalities is discussed. Based on the detection of auditory evoked magnetic fields, neuromagnetic images are reconstructed with linear estimation theory algorithms. The MR images are used as a priori information on the volume conductor geometry and allow functional and morphological properties to be related.
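A minimal sketch of overlaying a reconstructed current-density map onto an MR slice by alpha blending, assuming both images are already matched to the same grid; the registration step itself is the subject of the paper and is not reproduced here, and the threshold and channel choice are illustrative.

```python
# Alpha-blend a functional map into the red channel of a gray anatomical slice.
import numpy as np

def overlay(mr_slice, current_density, alpha=0.5, threshold=0.2):
    """mr_slice, current_density: 2-D arrays on the same grid, values in [0, 1]."""
    rgb = np.repeat(mr_slice.astype(float)[..., None], 3, axis=-1)  # gray anatomy
    mask = current_density > threshold                              # show only strong activity
    rgb[..., 0][mask] = (1 - alpha) * mr_slice[mask] + alpha * current_density[mask]
    return rgb
```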
Smooth muscle current-density distribution: value and estimation from potential data (Proceedings Only)
Abdalla S.A. Mohamed, Fabian D'Souza, D. N. Ghista, et al.
One aspect of the interpretation of smooth muscle electrical activity concerns the topographical estimation of the regions of the observed features. The quantities measured correspond to differences in potential between points of the muscle. These potentials are due to the activity of some distribution of sources (pacemakers) with time-varying amplitudes and locations. A three-dimensional model is introduced to describe the basic anatomical structure of the GI tract and its conduction characteristics, especially the locations of pacemakers and their stability. Using the finite-element method (FEM), the spatial distribution of the time-varying current density is presented. Moreover, displaying the migration of pacemaker locations simplifies the interpretation of the conduction pathways and the functional behaviour of the GI tract under different modes of stimulation.
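A hedged sketch of the linear-estimation step only: given a forward ("lead-field") matrix L that maps candidate source amplitudes to measured potential differences, the source distribution can be estimated by a regularized least-squares inverse. The FEM construction of L and the authors' 3-D model are not shown; the regularization value is illustrative.

```python
# Tikhonov-regularized minimum-norm source estimate: s = (L^T L + reg I)^-1 L^T v
import numpy as np

def estimate_sources(L, potentials, reg=1e-3):
    """L: (n_electrodes, n_sources); potentials: (n_electrodes,) measurements."""
    n = L.shape[1]
    return np.linalg.solve(L.T @ L + reg * np.eye(n), L.T @ potentials)
```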
Tutorials
Introduction to volume visualization and its biomedical applications
Arie E. Kaufman, Karl Heinz Hoehne, William E. Lorensen, et al.
Volume visualization is emerging in the nineties as a key field of visualization, graphics, and imaging. The use of volume visualization in practical applications in medicine and biology is becoming a common reality. Volume visualization encompasses an array of techniques, a technology, and a nomenclature, and holds substantial challenges. The techniques provide mechanisms that make possible display and exploration of the inner or unseen structures of volumetric data and allow visual insight into opaque or complex datasets. Volume visualization, as a technology, brings a revolution to computer graphics and promises important breakthroughs in numerous biomedical applications. Volume visualization is concerned with the tasks of representing, manipulating, and rendering volumetric data. This course provides an overview of the technology, the nomenclature, and the techniques for these tasks, emphasizing algorithms and applications. The course covers and compares different approaches in volume representation, volume synthesis, volume and surface viewing, volume shading, and biomedical applications of volume visualization. This is a beginning/intermediate level tutorial designed for biomedical scientists and engineers, and for medical researchers and practitioners who are new to the field of volume visualization or interested in expanding their knowledge in that field.
Statistical image pattern recognition
Image pattern recognition involves decision-making based on image data. Statistical tools for automatic, rational decision-making are numerous and can be applied to some of the kinds of decisions that need to be made in medical imaging. This tutorial will present an introduction to statistical pattern recognition and show how those techniques can be applied to several kinds of image analysis problems, including analysis of multiple modalities and multiple scales. The course introduction will review problems in biomedical image analysis and provide both a framework and a taxonomy for approaching vision problems. The course will include detailed discussion of the structure of images, statistical pattern recognition techniques, image pattern recognition using these techniques, and statistical representations of image geometry.
Advanced techniques in volume visualization and analysis
Richard A. Robb, Armando Manduca, Dennis P. Hanson, et al.
Human vision provides an extraordinarily powerful and effective means for acquiring information. Much of what we know about ourselves and our environment has been derived from images produced by various instruments, ranging from microscopes to telescopes and spanning orders of magnitude in scale, extending the range of human vision into realms beyond that which is naturally accessible. The full scientific, educational, and/or clinical value of these images is profoundly significant, and the recent development of advanced methods to fully visualize and quantitatively analyze the intrinsic information contained in biomedical images has begun to reveal the rich treasures of such recordings. This tutorial will provide an introduction to, and demonstration of (using ANALYZE), advanced methodologies being developed and applied to address the need for new approaches to image display and analysis as improvements in imaging technology enable more complex objects and processes to be imaged and simulated. These will include methods for segmentation (e.g., by mathematical morphology) of 3-D images and for integration (e.g., by surface matching) of multimodality images; arguably, these are two of the most important and challenging problems in multidimensional biomedical imaging today. In addition, the tutorial will emphasize the significant potential of 3-D image classification (e.g., using multispectral analysis) and feature measurements to enhance specificity and sensitivity in studies of biological structure-to-function relationships, in diagnostic accuracy and discriminating power, and in clinical/surgical treatment planning and delivery. Advanced volume rendering (using several ray-casting algorithms) to simultaneously and interactively produce multiple individual, combined, and transparent objects and parametric displays for visualization of segmented, fused, and/or classified objects will be discussed and demonstrated.
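A hedged sketch of the kind of morphological operation mentioned above: cleaning up a thresholded 3-D segmentation with a binary opening followed by a closing, using SciPy. The threshold and structuring-element parameters are illustrative and are not tied to the ANALYZE implementation discussed in the tutorial.

```python
# Morphological cleanup of a thresholded 3-D segmentation.
import numpy as np
from scipy import ndimage

def morphological_cleanup(volume, threshold, radius=1):
    """volume: 3-D image array; returns a binary mask."""
    mask = volume > threshold
    structure = ndimage.generate_binary_structure(3, 1)                 # 6-connected neighborhood
    mask = ndimage.binary_opening(mask, structure, iterations=radius)   # remove small specks
    mask = ndimage.binary_closing(mask, structure, iterations=radius)   # fill small gaps
    return mask
```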
Three-dimensional confocal microscopy and visualization
This tutorial provides a fundamental introduction to confocal microscopy and its applications. The course will review the historical developments of confocal microscopy and describe all of the components of confocal microscopes, including light sources, scanning systems, microscope objectives, apertures, detection systems, and mechanical xyz stages. A comparison between ideal and real optical systems will be developed, with "focus" on optical aberrations and digital deconvolutions.